PHENIX Collaboration
# Probing gluon spin-momentum correlations in transversely polarized protons
through midrapidity isolated direct photons in $p^{\uparrow}+p$ collisions at
$\sqrt{s}=200$ GeV
U.A. Acharya Georgia State University, Atlanta, Georgia 30303, USA C. Aidala
Department of Physics, University of Michigan, Ann Arbor, Michigan 48109-1040,
USA Y. Akiba RIKEN Nishina Center for Accelerator-
Based Science, Wako, Saitama 351-0198, Japan RIKEN BNL Research Center,
Brookhaven National Laboratory, Upton, New York 11973-5000, USA M. Alfred
Department of Physics and Astronomy, Howard University, Washington, DC 20059,
USA V. Andrieux Department of Physics, University of Michigan, Ann Arbor,
Michigan 48109-1040, USA N. Apadula Iowa State University, Ames, Iowa 50011,
USA H. Asano Kyoto University, Kyoto 606-8502, Japan RIKEN Nishina Center
for Accelerator-Based Science, Wako, Saitama 351-0198, Japan B. Azmoun
Physics Department, Brookhaven National Laboratory, Upton, New York
11973-5000, USA V. Babintsev IHEP Protvino, State Research Center of Russian
Federation, Institute for High Energy Physics, Protvino, 142281, Russia N.S.
Bandara Department of Physics, University of Massachusetts, Amherst,
Massachusetts 01003-9337, USA K.N. Barish University of California-
Riverside, Riverside, California 92521, USA S. Bathe Baruch College, City
University of New York, New York, New York, 10010 USA RIKEN BNL Research
Center, Brookhaven National Laboratory, Upton, New York 11973-5000, USA A.
Bazilevsky Physics Department, Brookhaven National Laboratory, Upton, New
York 11973-5000, USA M. Beaumier University of California-Riverside,
Riverside, California 92521, USA R. Belmont University of Colorado, Boulder,
Colorado 80309, USA Physics and Astronomy Department, University of North
Carolina at Greensboro, Greensboro, North Carolina 27412, USA A. Berdnikov
Saint Petersburg State Polytechnic University, St. Petersburg, 195251 Russia
Y. Berdnikov Saint Petersburg State Polytechnic University, St. Petersburg,
195251 Russia L. Bichon Vanderbilt University, Nashville, Tennessee 37235,
USA B. Blankenship Vanderbilt University, Nashville, Tennessee 37235, USA
D.S. Blau National Research Center “Kurchatov Institute”, Moscow, 123098
Russia National Research Nuclear University, MEPhI, Moscow Engineering
Physics Institute, Moscow, 115409, Russia J.S. Bok New Mexico State
University, Las Cruces, New Mexico 88003, USA M.L. Brooks Los Alamos
National Laboratory, Los Alamos, New Mexico 87545, USA J. Bryslawskyj Baruch
College, City University of New York, New York, New York, 10010 USA
University of California-Riverside, Riverside, California 92521, USA V.
Bumazhnov IHEP Protvino, State Research Center of Russian Federation,
Institute for High Energy Physics, Protvino, 142281, Russia S. Campbell
Columbia University, New York, New York 10027 and Nevis Laboratories,
Irvington, New York 10533, USA V. Canoa Roman Department of Physics and
Astronomy, Stony Brook University, SUNY, Stony Brook, New York 11794-3800, USA
R. Cervantes Department of Physics and Astronomy, Stony Brook University,
SUNY, Stony Brook, New York 11794-3800, USA C.Y. Chi Columbia University,
New York, New York 10027 and Nevis Laboratories, Irvington, New York 10533,
USA M. Chiu Physics Department, Brookhaven National Laboratory, Upton, New
York 11973-5000, USA I.J. Choi University of Illinois at Urbana-Champaign,
Urbana, Illinois 61801, USA J.B. Choi (deceased) Jeonbuk National University,
Jeonju, 54896, Korea Z. Citron Weizmann Institute, Rehovot 76100, Israel M.
Connors Georgia State University, Atlanta, Georgia 30303, USA RIKEN BNL
Research Center, Brookhaven National Laboratory, Upton, New York 11973-5000,
USA R. Corliss Department of Physics and Astronomy, Stony Brook University,
SUNY, Stony Brook, New York 11794-3800, USA Y. Corrales Morales Los Alamos
National Laboratory, Los Alamos, New Mexico 87545, USA N. Cronin Department
of Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA M. Csanád ELTE, Eötvös Loránd University, H-1117 Budapest,
Pázmány P. s. 1/A, Hungary T. Csörgő Eszterházy Károly University, Károly
Róbert Campus, H-3200 Gyöngyös, Mátrai út 36, Hungary Institute for Particle
and Nuclear Physics, Wigner Research Centre for Physics, Hungarian Academy of
Sciences (Wigner RCP, RMKI) H-1525 Budapest 114, POBox 49, Budapest, Hungary
T.W. Danley Department of Physics and Astronomy, Ohio University, Athens,
Ohio 45701, USA M.S. Daugherity Abilene Christian University, Abilene, Texas
79699, USA G. David Physics Department, Brookhaven National Laboratory,
Upton, New York 11973-5000, USA Department of Physics and Astronomy, Stony
Brook University, SUNY, Stony Brook, New York 11794-3800, USA K. DeBlasio
University of New Mexico, Albuquerque, New Mexico 87131, USA K. Dehmelt
Department of Physics and Astronomy, Stony Brook University, SUNY, Stony
Brook, New York 11794-3800, USA A. Denisov IHEP Protvino, State Research
Center of Russian Federation, Institute for High Energy Physics, Protvino,
142281, Russia A. Deshpande RIKEN BNL Research Center, Brookhaven National
Laboratory, Upton, New York 11973-5000, USA Department of Physics and
Astronomy, Stony Brook University, SUNY, Stony Brook, New York 11794-3800, USA
E.J. Desmond Physics Department, Brookhaven National Laboratory, Upton, New
York 11973-5000, USA A. Dion Department of Physics and Astronomy, Stony
Brook University, SUNY, Stony Brook, New York 11794-3800, USA D. Dixit
Department of Physics and Astronomy, Stony Brook University, SUNY, Stony
Brook, New York 11794-3800, USA J.H. Do Yonsei University, IPAP, Seoul
120-749, Korea A. Drees Department of Physics and Astronomy, Stony Brook
University, SUNY, Stony Brook, New York 11794-3800, USA K.A. Drees Collider-
Accelerator Department, Brookhaven National Laboratory, Upton, New York
11973-5000, USA J.M. Durham Los Alamos National Laboratory, Los Alamos, New
Mexico 87545, USA A. Durum IHEP Protvino, State Research Center of Russian
Federation, Institute for High Energy Physics, Protvino, 142281, Russia A.
Enokizono RIKEN Nishina Center for Accelerator-Based Science, Wako, Saitama
351-0198, Japan Physics Department, Rikkyo University, 3-34-1 Nishi-
Ikebukuro, Toshima, Tokyo 171-8501, Japan H. En’yo RIKEN Nishina Center for
Accelerator-Based Science, Wako, Saitama 351-0198, Japan R. Esha Department
of Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA S. Esumi Tomonaga Center for the History of the Universe,
University of Tsukuba, Tsukuba, Ibaraki 305, Japan B. Fadem Muhlenberg
College, Allentown, Pennsylvania 18104-5586, USA W. Fan Department of
Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA N. Feege Department of Physics and Astronomy, Stony Brook
University, SUNY, Stony Brook, New York 11794-3800, USA D.E. Fields
University of New Mexico, Albuquerque, New Mexico 87131, USA M. Finger
Charles University, Ovocný trh 5, Praha 1, 116 36, Prague, Czech Republic M.
Finger, Jr Charles University, Ovocný trh 5, Praha 1, 116 36, Prague, Czech
Republic D. Fitzgerald Department of Physics, University of Michigan, Ann
Arbor, Michigan 48109-1040, USA S.L. Fokin National Research Center
“Kurchatov Institute”, Moscow, 123098 Russia J.E. Frantz Department of
Physics and Astronomy, Ohio University, Athens, Ohio 45701, USA A. Franz
Physics Department, Brookhaven National Laboratory, Upton, New York
11973-5000, USA A.D. Frawley Florida State University, Tallahassee, Florida
32306, USA Y. Fukuda Tomonaga Center for the History of the Universe,
University of Tsukuba, Tsukuba, Ibaraki 305, Japan C. Gal Department of
Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA P. Gallus Czech Technical University, Zikova 4, 166 36
Prague 6, Czech Republic P. Garg Department of Physics, Banaras Hindu
University, Varanasi 221005, India Department of Physics and Astronomy, Stony
Brook University, SUNY, Stony Brook, New York 11794-3800, USA H. Ge
Department of Physics and Astronomy, Stony Brook University, SUNY, Stony
Brook, New York 11794-3800, USA M. Giles Department of Physics and
Astronomy, Stony Brook University, SUNY, Stony Brook, New York 11794-3800, USA
F. Giordano University of Illinois at Urbana-Champaign, Urbana, Illinois
61801, USA Y. Goto RIKEN Nishina Center for Accelerator-Based Science, Wako,
Saitama 351-0198, Japan RIKEN BNL Research Center, Brookhaven National
Laboratory, Upton, New York 11973-5000, USA N. Grau Department of Physics,
Augustana University, Sioux Falls, South Dakota 57197, USA S.V. Greene
Vanderbilt University, Nashville, Tennessee 37235, USA M. Grosse Perdekamp
University of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA T.
Gunji Center for Nuclear Study, Graduate School of Science, University of
Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan H. Guragain Georgia State
University, Atlanta, Georgia 30303, USA T. Hachiya Nara Women’s University,
Kita-uoya Nishi-machi Nara 630-8506, Japan RIKEN Nishina Center for
Accelerator-Based Science, Wako, Saitama 351-0198, Japan RIKEN BNL Research
Center, Brookhaven National Laboratory, Upton, New York 11973-5000, USA J.S.
Haggerty Physics Department, Brookhaven National Laboratory, Upton, New York
11973-5000, USA K.I. Hahn Ewha Womans University, Seoul 120-750, Korea H.
Hamagaki Center for Nuclear Study, Graduate School of Science, University of
Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan H.F. Hamilton Abilene
Christian University, Abilene, Texas 79699, USA S.Y. Han Ewha Womans
University, Seoul 120-750, Korea Korea University, Seoul 02841, Korea J.
Hanks Department of Physics and Astronomy, Stony Brook University, SUNY,
Stony Brook, New York 11794-3800, USA M. Harvey Texas Southern University,
Houston, TX 77004, USA S. Hasegawa Advanced Science Research Center, Japan
Atomic Energy Agency, 2-4 Shirakata Shirane, Tokai-mura, Naka-gun, Ibaraki-ken
319-1195, Japan T.O.S. Haseler Georgia State University, Atlanta, Georgia
30303, USA X. He Georgia State University, Atlanta, Georgia 30303, USA T.K.
Hemmick Department of Physics and Astronomy, Stony Brook University, SUNY,
Stony Brook, New York 11794-3800, USA J.C. Hill Iowa State University, Ames,
Iowa 50011, USA K. Hill University of Colorado, Boulder, Colorado 80309, USA
A. Hodges Georgia State University, Atlanta, Georgia 30303, USA R.S. Hollis
University of California-Riverside, Riverside, California 92521, USA K. Homma
Hiroshima University, Kagamiyama, Higashi-Hiroshima 739-8526, Japan B. Hong
Korea University, Seoul 02841, Korea T. Hoshino Hiroshima University,
Kagamiyama, Higashi-Hiroshima 739-8526, Japan N. Hotvedt Iowa State
University, Ames, Iowa 50011, USA J. Huang Physics Department, Brookhaven
National Laboratory, Upton, New York 11973-5000, USA S. Huang Vanderbilt
University, Nashville, Tennessee 37235, USA K. Imai Advanced Science
Research Center, Japan Atomic Energy Agency, 2-4 Shirakata Shirane, Tokai-
mura, Naka-gun, Ibaraki-ken 319-1195, Japan M. Inaba Tomonaga Center for the
History of the Universe, University of Tsukuba, Tsukuba, Ibaraki 305, Japan
A. Iordanova University of California-Riverside, Riverside, California 92521,
USA D. Isenhower Abilene Christian University, Abilene, Texas 79699, USA D.
Ivanishchev PNPI, Petersburg Nuclear Physics Institute, Gatchina, Leningrad
region, 188300, Russia B.V. Jacak Department of Physics and Astronomy, Stony
Brook University, SUNY, Stony Brook, New York 11794-3800, USA M. Jezghani
Georgia State University, Atlanta, Georgia 30303, USA Z. Ji Department of
Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA X. Jiang Los Alamos National Laboratory, Los Alamos, New
Mexico 87545, USA B.M. Johnson Physics Department, Brookhaven National
Laboratory, Upton, New York 11973-5000, USA Georgia State University,
Atlanta, Georgia 30303, USA D. Jouan IPN-Orsay, Univ. Paris-Sud, CNRS/IN2P3,
Université Paris-Saclay, BP1, F-91406, Orsay, France D.S. Jumper University
of Illinois at Urbana-Champaign, Urbana, Illinois 61801, USA J.H. Kang
Yonsei University, IPAP, Seoul 120-749, Korea D. Kapukchyan University of
California-Riverside, Riverside, California 92521, USA S. Karthas Department
of Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA D. Kawall Department of Physics, University of
Massachusetts, Amherst, Massachusetts 01003-9337, USA A.V. Kazantsev
National Research Center “Kurchatov Institute”, Moscow, 123098 Russia V.
Khachatryan Department of Physics and Astronomy, Stony Brook University,
SUNY, Stony Brook, New York 11794-3800, USA A. Khanzadeev PNPI, Petersburg
Nuclear Physics Institute, Gatchina, Leningrad region, 188300, Russia A.
Khatiwada Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA
C. Kim University of California-Riverside, Riverside, California 92521, USA
Korea University, Seoul 02841, Korea E.-J. Kim Jeonbuk National University,
Jeonju, 54896, Korea M. Kim Department of Physics and Astronomy, Seoul
National University, Seoul 151-742, Korea D. Kincses ELTE, Eötvös Loránd
University, H-1117 Budapest, Pázmány P. s. 1/A, Hungary A. Kingan Department
of Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA E. Kistenev Physics Department, Brookhaven National
Laboratory, Upton, New York 11973-5000, USA J. Klatsky Florida State
University, Tallahassee, Florida 32306, USA P. Kline Department of Physics
and Astronomy, Stony Brook University, SUNY, Stony Brook, New York 11794-3800,
USA T. Koblesky University of Colorado, Boulder, Colorado 80309, USA D.
Kotov PNPI, Petersburg Nuclear Physics Institute, Gatchina, Leningrad region,
188300, Russia Saint Petersburg State Polytechnic University, St. Petersburg,
195251 Russia S. Kudo Tomonaga Center for the History of the Universe,
University of Tsukuba, Tsukuba, Ibaraki 305, Japan B. Kurgyis ELTE, Eötvös
Loránd University, H-1117 Budapest, Pázmány P. s. 1/A, Hungary K. Kurita
Physics Department, Rikkyo University, 3-34-1 Nishi-Ikebukuro, Toshima, Tokyo
171-8501, Japan Y. Kwon Yonsei University, IPAP, Seoul 120-749, Korea J.G.
Lajoie Iowa State University, Ames, Iowa 50011, USA D. Larionova Saint
Petersburg State Polytechnic University, St. Petersburg, 195251 Russia A.
Lebedev Iowa State University, Ames, Iowa 50011, USA S. Lee Yonsei
University, IPAP, Seoul 120-749, Korea S.H. Lee Iowa State University, Ames,
Iowa 50011, USA Department of Physics, University of Michigan, Ann Arbor,
Michigan 48109-1040, USA Department of Physics and Astronomy, Stony Brook
University, SUNY, Stony Brook, New York 11794-3800, USA M.J. Leitch Los
Alamos National Laboratory, Los Alamos, New Mexico 87545, USA Y.H. Leung
Department of Physics and Astronomy, Stony Brook University, SUNY, Stony
Brook, New York 11794-3800, USA N.A. Lewis Department of Physics, University
of Michigan, Ann Arbor, Michigan 48109-1040, USA X. Li Los Alamos National
Laboratory, Los Alamos, New Mexico 87545, USA S.H. Lim Los Alamos National
Laboratory, Los Alamos, New Mexico 87545, USA Pusan National University,
Pusan 46241, Korea Yonsei University, IPAP, Seoul 120-749, Korea M.X. Liu
Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA V.-R.
Loggins University of Illinois at Urbana-Champaign, Urbana, Illinois 61801,
USA S. Lökös ELTE, Eötvös Loránd University, H-1117 Budapest, Pázmány P. s.
1/A, Hungary D.A. Loomis Department of Physics, University of Michigan, Ann
Arbor, Michigan 48109-1040, USA K. Lovasz Debrecen University, H-4010
Debrecen, Egyetem tér 1, Hungary D. Lynch Physics Department, Brookhaven
National Laboratory, Upton, New York 11973-5000, USA T. Majoros Debrecen
University, H-4010 Debrecen, Egyetem tér 1, Hungary Y.I. Makdisi Collider-
Accelerator Department, Brookhaven National Laboratory, Upton, New York
11973-5000, USA M. Makek Department of Physics, Faculty of Science,
University of Zagreb, Bijenička c. 32 HR-10002 Zagreb, Croatia V.I. Manko
National Research Center “Kurchatov Institute”, Moscow, 123098 Russia E.
Mannel Physics Department, Brookhaven National Laboratory, Upton, New York
11973-5000, USA M. McCumber Los Alamos National Laboratory, Los Alamos, New
Mexico 87545, USA P.L. McGaughey Los Alamos National Laboratory, Los Alamos,
New Mexico 87545, USA D. McGlinchey University of Colorado, Boulder,
Colorado 80309, USA Los Alamos National Laboratory, Los Alamos, New Mexico
87545, USA C. McKinney University of Illinois at Urbana-Champaign, Urbana,
Illinois 61801, USA M. Mendoza University of California-Riverside,
Riverside, California 92521, USA A.C. Mignerey University of Maryland,
College Park, Maryland 20742, USA A. Milov Weizmann Institute, Rehovot
76100, Israel D.K. Mishra Bhabha Atomic Research Centre, Bombay 400 085,
India J.T. Mitchell Physics Department, Brookhaven National Laboratory,
Upton, New York 11973-5000, USA Iu. Mitrankov Saint Petersburg State
Polytechnic University, St. Petersburg, 195251 Russia M. Mitrankova Saint
Petersburg State Polytechnic University, St. Petersburg, 195251 Russia G.
Mitsuka KEK, High Energy Accelerator Research Organization, Tsukuba, Ibaraki
305-0801, Japan RIKEN BNL Research Center, Brookhaven National Laboratory,
Upton, New York 11973-5000, USA S. Miyasaka RIKEN Nishina Center for
Accelerator-Based Science, Wako, Saitama 351-0198, Japan Department of
Physics, Tokyo Institute of Technology, Oh-okayama, Meguro, Tokyo 152-8551,
Japan S. Mizuno RIKEN Nishina Center for Accelerator-Based Science, Wako,
Saitama 351-0198, Japan Tomonaga Center for the History of the Universe,
University of Tsukuba, Tsukuba, Ibaraki 305, Japan M.M. Mondal Department of
Physics and Astronomy, Stony Brook University, SUNY, Stony Brook, New York
11794-3800, USA P. Montuenga University of Illinois at Urbana-Champaign,
Urbana, Illinois 61801, USA T. Moon Korea University, Seoul 02841, Korea
Yonsei University, IPAP, Seoul 120-749, Korea D.P. Morrison Physics
Department, Brookhaven National Laboratory, Upton, New York 11973-5000, USA
B. Mulilo Korea University, Seoul 02841, Korea RIKEN Nishina Center for
Accelerator-Based Science, Wako, Saitama 351-0198, Japan T. Murakami Kyoto
University, Kyoto 606-8502, Japan RIKEN Nishina Center for Accelerator-Based
Science, Wako, Saitama 351-0198, Japan J. Murata RIKEN Nishina Center for
Accelerator-Based Science, Wako, Saitama 351-0198, Japan Physics Department,
Rikkyo University, 3-34-1 Nishi-Ikebukuro, Toshima, Tokyo 171-8501, Japan K.
Nagai Department of Physics, Tokyo Institute of Technology, Oh-okayama,
Meguro, Tokyo 152-8551, Japan K. Nagashima Hiroshima University, Kagamiyama,
Higashi-Hiroshima 739-8526, Japan T. Nagashima Physics Department, Rikkyo
University, 3-34-1 Nishi-Ikebukuro, Toshima, Tokyo 171-8501, Japan J.L. Nagle
University of Colorado, Boulder, Colorado 80309, USA M.I. Nagy ELTE, Eötvös
Loránd University, H-1117 Budapest, Pázmány P. s. 1/A, Hungary I. Nakagawa
RIKEN Nishina Center for Accelerator-Based Science, Wako, Saitama 351-0198,
Japan RIKEN BNL Research Center, Brookhaven National Laboratory, Upton, New
York 11973-5000, USA K. Nakano RIKEN Nishina Center for Accelerator-Based
Science, Wako, Saitama 351-0198, Japan Department of Physics, Tokyo Institute
of Technology, Oh-okayama, Meguro, Tokyo 152-8551, Japan C. Nattrass
University of Tennessee, Knoxville, Tennessee 37996, USA S. Nelson Florida
A&M University, Tallahassee, FL 32307, USA T. Niida Tomonaga Center for the
History of the Universe, University of Tsukuba, Tsukuba, Ibaraki 305, Japan
R. Nouicer Physics Department, Brookhaven National Laboratory, Upton, New
York 11973-5000, USA RIKEN BNL Research Center, Brookhaven National
Laboratory, Upton, New York 11973-5000, USA T. Novák Eszterházy Károly
University, Károly Róbert Campus, H-3200 Gyöngyös, Mátrai út 36, Hungary
Institute for Particle and Nuclear Physics, Wigner Research Centre for
Physics, Hungarian Academy of Sciences (Wigner RCP, RMKI) H-1525 Budapest 114,
POBox 49, Budapest, Hungary N. Novitzky Department of Physics and Astronomy,
Stony Brook University, SUNY, Stony Brook, New York 11794-3800, USA Tomonaga
Center for the History of the Universe, University of Tsukuba, Tsukuba,
Ibaraki 305, Japan G. Nukazuka RIKEN Nishina Center for Accelerator-Based
Science, Wako, Saitama 351-0198, Japan RIKEN BNL Research Center, Brookhaven
National Laboratory, Upton, New York 11973-5000, USA A.S. Nyanin National
Research Center “Kurchatov Institute”, Moscow, 123098 Russia E. O’Brien
Physics Department, Brookhaven National Laboratory, Upton, New York
11973-5000, USA C.A. Ogilvie Iowa State University, Ames, Iowa 50011, USA
J.D. Orjuela Koop University of Colorado, Boulder, Colorado 80309, USA J.D.
Osborn Department of Physics, University of Michigan, Ann Arbor, Michigan
48109-1040, USA Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831,
USA A. Oskarsson Department of Physics, Lund University, Box 118, SE-221 00
Lund, Sweden G.J. Ottino University of New Mexico, Albuquerque, New Mexico
87131, USA K. Ozawa KEK, High Energy Accelerator Research Organization,
Tsukuba, Ibaraki 305-0801, Japan Tomonaga Center for the History of the
Universe, University of Tsukuba, Tsukuba, Ibaraki 305, Japan V. Pantuev
Institute for Nuclear Research of the Russian Academy of Sciences, prospekt
60-letiya Oktyabrya 7a, Moscow 117312, Russia V. Papavassiliou New Mexico
State University, Las Cruces, New Mexico 88003, USA J.S. Park Department of
Physics and Astronomy, Seoul National University, Seoul 151-742, Korea S.
Park RIKEN Nishina Center for Accelerator-Based Science, Wako, Saitama
351-0198, Japan Department of Physics and Astronomy, Seoul National
University, Seoul 151-742, Korea Department of Physics and Astronomy, Stony
Brook University, SUNY, Stony Brook, New York 11794-3800, USA S.F. Pate New
Mexico State University, Las Cruces, New Mexico 88003, USA M. Patel Iowa
State University, Ames, Iowa 50011, USA W. Peng Vanderbilt University,
Nashville, Tennessee 37235, USA D.V. Perepelitsa Physics Department,
Brookhaven National Laboratory, Upton, New York 11973-5000, USA University of
Colorado, Boulder, Colorado 80309, USA G.D.N. Perera New Mexico State
University, Las Cruces, New Mexico 88003, USA D.Yu. Peressounko National
Research Center “Kurchatov Institute”, Moscow, 123098 Russia C.E. PerezLara
Department of Physics and Astronomy, Stony Brook University, SUNY, Stony
Brook, New York 11794-3800, USA J. Perry Iowa State University, Ames, Iowa
50011, USA R. Petti Physics Department, Brookhaven National Laboratory,
Upton, New York 11973-5000, USA M. Phipps Physics Department, Brookhaven
National Laboratory, Upton, New York 11973-5000, USA University of Illinois
at Urbana-Champaign, Urbana, Illinois 61801, USA C. Pinkenburg Physics
Department, Brookhaven National Laboratory, Upton, New York 11973-5000, USA
R.P. Pisani Physics Department, Brookhaven National Laboratory, Upton, New
York 11973-5000, USA M. Potekhin Physics Department, Brookhaven National
Laboratory, Upton, New York 11973-5000, USA A. Pun Department of Physics and
Astronomy, Ohio University, Athens, Ohio 45701, USA M.L. Purschke Physics
Department, Brookhaven National Laboratory, Upton, New York 11973-5000, USA
P.V. Radzevich Saint Petersburg State Polytechnic University, St. Petersburg,
195251 Russia N. Ramasubramanian Department of Physics and Astronomy, Stony
Brook University, SUNY, Stony Brook, New York 11794-3800, USA K.F. Read Oak
Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA University of
Tennessee, Knoxville, Tennessee 37996, USA D. Reynolds Chemistry Department,
Stony Brook University, SUNY, Stony Brook, New York 11794-3400, USA V. Riabov
National Research Nuclear University, MEPhI, Moscow Engineering Physics
Institute, Moscow, 115409, Russia PNPI, Petersburg Nuclear Physics Institute,
Gatchina, Leningrad region, 188300, Russia Y. Riabov PNPI, Petersburg
Nuclear Physics Institute, Gatchina, Leningrad region, 188300, Russia Saint
Petersburg State Polytechnic University, St. Petersburg, 195251 Russia D.
Richford Baruch College, City University of New York, New York, New York,
10010 USA T. Rinn University of Illinois at Urbana-Champaign, Urbana,
Illinois 61801, USA Iowa State University, Ames, Iowa 50011, USA S.D.
Rolnick University of California-Riverside, Riverside, California 92521, USA
M. Rosati Iowa State University, Ames, Iowa 50011, USA Z. Rowan Baruch
College, City University of New York, New York, New York, 10010 USA J.
Runchey Iowa State University, Ames, Iowa 50011, USA A.S. Safonov Saint
Petersburg State Polytechnic University, St. Petersburg, 195251 Russia T.
Sakaguchi Physics Department, Brookhaven National Laboratory, Upton, New York
11973-5000, USA H. Sako Advanced Science Research Center, Japan Atomic
Energy Agency, 2-4 Shirakata Shirane, Tokai-mura, Naka-gun, Ibaraki-ken
319-1195, Japan V. Samsonov National Research Nuclear University, MEPhI,
Moscow Engineering Physics Institute, Moscow, 115409, Russia PNPI, Petersburg
Nuclear Physics Institute, Gatchina, Leningrad region, 188300, Russia M.
Sarsour Georgia State University, Atlanta, Georgia 30303, USA S. Sato
Advanced Science Research Center, Japan Atomic Energy Agency, 2-4 Shirakata
Shirane, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195, Japan B. Schaefer
Vanderbilt University, Nashville, Tennessee 37235, USA B.K. Schmoll
University of Tennessee, Knoxville, Tennessee 37996, USA K. Sedgwick
University of California-Riverside, Riverside, California 92521, USA R. Seidl
RIKEN Nishina Center for Accelerator-Based Science, Wako, Saitama 351-0198,
Japan RIKEN BNL Research Center, Brookhaven National Laboratory, Upton, New
York 11973-5000, USA A. Sen Iowa State University, Ames, Iowa 50011, USA
University of Tennessee, Knoxville, Tennessee 37996, USA R. Seto University
of California-Riverside, Riverside, California 92521, USA A. Sexton
University of Maryland, College Park, Maryland 20742, USA D. Sharma
Department of Physics and Astronomy, Stony Brook University, SUNY, Stony
Brook, New York 11794-3800, USA
I. Shein IHEP Protvino, State Research Center of Russian Federation,
Institute for High Energy Physics, Protvino, 142281, Russia T.-A. Shibata
RIKEN Nishina Center for Accelerator-Based Science, Wako, Saitama 351-0198,
Japan Department of Physics, Tokyo Institute of Technology, Oh-okayama,
Meguro, Tokyo 152-8551, Japan K. Shigaki Hiroshima University, Kagamiyama,
Higashi-Hiroshima 739-8526, Japan M. Shimomura Iowa State University, Ames,
Iowa 50011, USA Nara Women’s University, Kita-uoya Nishi-machi Nara 630-8506,
Japan T. Shioya Tomonaga Center for the History of the Universe, University
of Tsukuba, Tsukuba, Ibaraki 305, Japan P. Shukla Bhabha Atomic Research
Centre, Bombay 400 085, India A. Sickles University of Illinois at Urbana-
Champaign, Urbana, Illinois 61801, USA C.L. Silva Los Alamos National
Laboratory, Los Alamos, New Mexico 87545, USA D. Silvermyr Department of
Physics, Lund University, Box 118, SE-221 00 Lund, Sweden B.K. Singh
Department of Physics, Banaras Hindu University, Varanasi 221005, India C.P.
Singh Department of Physics, Banaras Hindu University, Varanasi 221005, India
V. Singh Department of Physics, Banaras Hindu University, Varanasi 221005,
India M. Slunečka Charles University, Ovocný trh 5, Praha 1, 116 36, Prague,
Czech Republic K.L. Smith Florida State University, Tallahassee, Florida
32306, USA M. Snowball Los Alamos National Laboratory, Los Alamos, New
Mexico 87545, USA R.A. Soltz Lawrence Livermore National Laboratory,
Livermore, California 94550, USA W.E. Sondheim Los Alamos National
Laboratory, Los Alamos, New Mexico 87545, USA S.P. Sorensen University of
Tennessee, Knoxville, Tennessee 37996, USA I.V. Sourikova Physics
Department, Brookhaven National Laboratory, Upton, New York 11973-5000, USA
P.W. Stankus Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
S.P. Stoll Physics Department, Brookhaven National Laboratory, Upton, New
York 11973-5000, USA T. Sugitate Hiroshima University, Kagamiyama, Higashi-
Hiroshima 739-8526, Japan A. Sukhanov Physics Department, Brookhaven
National Laboratory, Upton, New York 11973-5000, USA T. Sumita RIKEN Nishina
Center for Accelerator-Based Science, Wako, Saitama 351-0198, Japan J. Sun
Department of Physics and Astronomy, Stony Brook University, SUNY, Stony
Brook, New York 11794-3800, USA Z. Sun Debrecen University, H-4010 Debrecen,
Egyetem tér 1, Hungary J. Sziklai Institute for Particle and Nuclear
Physics, Wigner Research Centre for Physics, Hungarian Academy of Sciences
(Wigner RCP, RMKI) H-1525 Budapest 114, POBox 49, Budapest, Hungary K. Tanida
Advanced Science Research Center, Japan Atomic Energy Agency, 2-4 Shirakata
Shirane, Tokai-mura, Naka-gun, Ibaraki-ken 319-1195, Japan RIKEN BNL Research
Center, Brookhaven National Laboratory, Upton, New York 11973-5000, USA
Department of Physics and Astronomy, Seoul National University, Seoul 151-742,
Korea M.J. Tannenbaum Physics Department, Brookhaven National Laboratory,
Upton, New York 11973-5000, USA S. Tarafdar Vanderbilt University,
Nashville, Tennessee 37235, USA Weizmann Institute, Rehovot 76100, Israel A.
Taranenko National Research Nuclear University, MEPhI, Moscow Engineering
Physics Institute, Moscow, 115409, Russia G. Tarnai Debrecen University,
H-4010 Debrecen, Egyetem tér 1, Hungary R. Tieulent Georgia State
University, Atlanta, Georgia 30303, USA IPNL, CNRS/IN2P3, Univ Lyon,
Université Lyon 1, F-69622, Villeurbanne, France A. Timilsina Iowa State
University, Ames, Iowa 50011, USA T. Todoroki RIKEN BNL Research
Center, Brookhaven National Laboratory, Upton, New York 11973-5000, USA
Tomonaga Center for the History of the Universe, University of Tsukuba,
Tsukuba, Ibaraki 305, Japan M. Tomášek Czech Technical University, Zikova 4,
166 36 Prague 6, Czech Republic C.L. Towell Abilene Christian University,
Abilene, Texas 79699, USA R.S. Towell Abilene Christian University, Abilene,
Texas 79699, USA I. Tserruya Weizmann Institute, Rehovot 76100, Israel Y.
Ueda Hiroshima University, Kagamiyama, Higashi-Hiroshima 739-8526, Japan B.
Ujvari Debrecen University, H-4010 Debrecen, Egyetem tér 1, Hungary H.W. van
Hecke Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA J.
Velkovska Vanderbilt University, Nashville, Tennessee 37235, USA M. Virius
Czech Technical University, Zikova 4, 166 36 Prague 6, Czech Republic V. Vrba
Czech Technical University, Zikova 4, 166 36 Prague 6, Czech Republic
Institute of Physics, Academy of Sciences of the Czech Republic, Na Slovance
2, 182 21 Prague 8, Czech Republic N. Vukman Department of Physics, Faculty
of Science, University of Zagreb, Bijenička c. 32 HR-10002 Zagreb, Croatia
X.R. Wang New Mexico State University, Las Cruces, New Mexico 88003, USA
RIKEN BNL Research Center, Brookhaven National Laboratory, Upton, New York
11973-5000, USA Y.S. Watanabe Center for Nuclear Study, Graduate School of
Science, University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033, Japan C.P.
Wong Georgia State University, Atlanta, Georgia 30303, USA Los Alamos
National Laboratory, Los Alamos, New Mexico 87545, USA C.L. Woody Physics
Department, Brookhaven National Laboratory, Upton, New York 11973-5000, USA
C. Xu New Mexico State University, Las Cruces, New Mexico 88003, USA Q. Xu
Vanderbilt University, Nashville, Tennessee 37235, USA L. Xue Georgia State
University, Atlanta, Georgia 30303, USA S. Yalcin Department of Physics and
Astronomy, Stony Brook University, SUNY, Stony Brook, New York 11794-3800, USA
Y.L. Yamaguchi Department of Physics and Astronomy, Stony Brook University,
SUNY, Stony Brook, New York 11794-3800, USA H. Yamamoto Tomonaga Center for
the History of the Universe, University of Tsukuba, Tsukuba, Ibaraki 305,
Japan A. Yanovich IHEP Protvino, State Research Center of Russian
Federation, Institute for High Energy Physics, Protvino, 142281, Russia J.H.
Yoo Korea University, Seoul 02841, Korea I. Yoon Department of Physics and
Astronomy, Seoul National University, Seoul 151-742, Korea H. Yu New Mexico
State University, Las Cruces, New Mexico 88003, USA Peking University,
Beijing 100871, People’s Republic of China I.E. Yushmanov National Research
Center “Kurchatov Institute”, Moscow, 123098 Russia W.A. Zajc Columbia
University, New York, New York 10027 and Nevis Laboratories, Irvington, New
York 10533, USA A. Zelenski Collider-Accelerator Department, Brookhaven
National Laboratory, Upton, New York 11973-5000, USA S. Zharko Saint
Petersburg State Polytechnic University, St. Petersburg, 195251 Russia L. Zou
University of California-Riverside, Riverside, California 92521, USA
###### Abstract
Studying spin-momentum correlations in hadronic collisions offers a glimpse
into a three-dimensional picture of proton structure. The transverse single-
spin asymmetry for midrapidity isolated direct photons in $p^{\uparrow}+p$
collisions at $\sqrt{s}=200$ GeV is measured with the PHENIX detector at the
Relativistic Heavy Ion Collider (RHIC). Because direct photons are produced
in the hard scattering itself and do not interact via the strong
force, this measurement is a clean probe of initial-state spin-momentum
correlations inside the proton and is particularly sensitive to gluon
interference effects within the proton. This is the first time direct photons
have been used as a probe of spin-momentum correlations at RHIC. The
uncertainties on the results represent a fifty-fold improvement over
those of the only prior measurement of this observable, from the Fermilab
E704 experiment. These results constrain gluon spin-momentum correlations in
transversely polarized protons.
Unlike lepton-hadron scattering, proton-proton collisions are sensitive to
gluon scattering at leading order. Direct photons are produced directly in the
hard scattering of partons and, because they do not interact via the strong
force, are a phenomenologically clean probe of the structure of the proton. At
large transverse momentum, direct photons are produced at leading order via
the quantum chromodynamics (QCD) 2-to-2 hard scattering subprocesses quark-
gluon Compton scattering ($g+q\rightarrow\gamma+q$) and quark-antiquark
annihilation ($\bar{q}+q\rightarrow\gamma+g$). Compton scattering dominates at
midrapidity Adare _et al._ (2010) because the proton is being probed at
moderate longitudinal momentum fraction, $x$, where gluons are the primary
constituents of the proton. Thus midrapidity direct photon measurements are a
clean probe of gluon structure within the proton.
Transverse single-spin asymmetries (TSSAs) in hadronic collisions are
sensitive to various spin-momentum correlations, i.e. correlations between the
directions of the spin and momentum of partons and/or hadrons involved in a
scattering event. In collisions between one transversely polarized proton and
one unpolarized proton, the TSSA describes the azimuthal-angular dependence of
particle production relative to the transverse polarization direction. TSSAs
have been measured to be as large as 40% in forward charged pion production
Klem _et al._ (1976); Adams _et al._ (1991); Allgower _et al._ (2002);
Arsene _et al._ (2008) and significantly nonzero forward neutral pion
asymmetries have been measured with transverse momentum up to $p_{T}\approx
7~{}{\rm GeV}/c$ Adam _et al._ (2021). In this context, $p_{T}$ serves as a
proxy for a hard-scattering energy scale ($Q$) that is well into the perturbative
regime of QCD. Next-to-leading-order perturbative QCD calculations, which
include only effects from high-energy parton scattering, predict that these
asymmetries should be small and fall off as $m_{q}/Q$ Kane _et al._ (1978),
where $m_{q}$ is the bare mass of the quark. Thus, to explain these large
TSSAs, they must be considered in the context of the dynamics present in
proton-proton collisions that cannot be calculated perturbatively, namely
dynamics describing proton structure and/or the process of hadronization.
One approach toward explaining the large measured TSSAs is through transverse-
momentum-dependent (TMD) functions. These functions depend on the soft-scale
parton transverse momentum, $k_{T}$, in addition to the partonic longitudinal
momentum fraction $x$ and the hard scale $Q$, where $k_{T}\ll Q$. TMD functions can be
directly extracted from measurements that are sensitive to two momentum
scales, such as semi-inclusive deep-inelastic scattering (SIDIS) where the
angle and energy of the scattered electron can be used to directly measure the
hard-scale $Q$ and the transverse momentum of the measured hadron relates to
the soft scales $k_{T}$ of TMD parton distribution functions (PDFs) and
fragmentation functions. The Sivers function is a PDF that describes the
structure of the transversely polarized proton and correlates the transverse
spin of the proton and $k_{T}$ of the parton within it Sivers (1990). The
quark Sivers function has been extracted through polarized SIDIS measurements,
but the gluon Sivers function has remained comparatively less constrained
because SIDIS is not sensitive to gluons at leading order Adolph _et al._
(2017). The direct photon TSSA in proton-proton collisions has been shown to
be sensitive to the gluon Sivers function Godbole _et al._ (2019), but the
$k_{T}$ moment of TMD functions must be used to apply these functions to the
single-scale inclusive TSSAs measured in proton-proton collisions.
Twist-3 correlation functions are another approach toward describing TSSAs.
Unlike TMD functions, collinear twist-3 correlation functions depend only on a
single scale, the hard scale $Q$. Twist-3 functions describe spin-momentum
correlations generated by the quantum mechanical interference between
scattering off of one parton versus scattering off of two. There are two
different types: the quark-gluon-quark (qgq) correlation functions and the
trigluon (ggg) correlation function. In the context of proton structure, qgq
correlation functions describe the interference between scattering off of a
single quark in the proton versus scattering off of a quark of the same
flavor and momentum fraction together with an additional gluon.
Analogously, the trigluon correlation describes the interference between
scattering off of one gluon in the proton versus scattering off of two.
Additional twist-3 collinear correlation functions describing spin-momentum
correlations in the process of hadronization also exist, but are not relevant
to the production of direct photons. Collinear twist-3 functions have been
shown to be related to the $k_{T}$ moment of TMD functions Boer _et al._
(2003); Ji _et al._ (2006). For example, the Efremov-Teryaev-Qiu-Sterman
(ETQS) function is a qgq correlation function for the polarized proton Efremov
and Teryaev (1985); Qiu and Sterman (1992, 1999) that is related to the
$k_{T}$ moment of the Sivers TMD PDF. The ETQS function has also been
extracted from fits to inclusive TSSAs in proton-proton collisions Kanazawa
_et al._ (2014); Cammarota _et al._ (2020), and the forward direct photon
TSSA has been suggested to be dominated by this ETQS function Kanazawa _et
al._ (2015). The fact that both TMD and collinear twist-3 functions are
nonzero reflects that scattering partons do in fact interact with the color
fields present inside the proton, which goes beyond traditional assumptions
present in hadronic collision studies.
Multiple observables can provide sensitivity to the ggg correlation function.
Midrapidity inclusive hadron TSSA measurements are sensitive to gluon spin-
momentum correlations in the proton but also include potential effects from
hadronization and final-state color interactions. Heavy flavor production at
the Relativistic Heavy Ion Collider (RHIC) is dominated by gluon-gluon fusion
and thus particularly sensitive to gluons in the proton. A heavy flavor hadron
TSSA measurement Aidala _et al._ (2017) has been used to estimate the
trigluon correlation function in the transversely polarized proton assuming no
effects from hadronization or final-state color interactions Koike and Yoshida
(2011). The midrapidity isolated direct photon TSSA is instead a clean probe
of the trigluon correlation function because it is insensitive to
hadronization effects as well as final-state color interactions Koike and
Yoshida (2012).
The only previously published direct photon TSSA measurement is the Fermilab
E704 result, which used a $200~{}{\rm GeV}/c$ polarized proton beam on an
unpolarized proton target ($\sqrt{s}=19.4$ GeV). It was found to be consistent
with zero to within 20% for $2.5<p_{T}^{\gamma}<3.1~{}{\rm GeV}/c$ Adams _et
al._ (1995). The PHENIX results presented in this Letter measure photons with
$p_{T}^{\gamma}>5~{}{\rm GeV}/c$ with total uncertainties up to a factor of 50
smaller than the E704 measurements. This measurement will constrain trigluon
correlations in transversely polarized protons.
The presented direct-photon measurement was performed with the PHENIX
experiment in the central rapidity region $|\eta|<0.35$, using
$p^{\uparrow}$$+$$p$ collisions at $\sqrt{s}=200$ GeV. The data set was
collected in 2015 and corresponds to an integrated luminosity of approximately
60 pb$^{-1}$. Direct photons were reconstructed using techniques similar to those
of a previously published direct-photon cross section result at $\sqrt{s}=200$
GeV Adare _et al._ (2012). The asymmetry was measured with transversely
polarized proton beams at RHIC where the clockwise and counter-clockwise beams
had an average polarization of $0.58\pm 0.02$ and $0.60\pm 0.02$, respectively
Schmidke _et al._ (2018). Bunch crossings are spaced 106 ns apart,
and the polarization direction alternates bunch to bunch, such that two
statistically independent asymmetries can be measured with the same particle
yields through sorting them by the polarization direction in one beam,
effectively averaging over the polarization in the other beam. These two
independent measurements serve as a cross check and are averaged together to
calculate the final asymmetry.
The PHENIX central detector comprises two nearly-back-to-back arms each
covering $\Delta\phi=\pi/2$ in azimuth and $|\eta|<0.35$ in pseudorapidity.
Photons are identified through clusters in the electromagnetic calorimeter
(EMCal), which has two detector arms: the west and the east. The west arm
comprises four sectors of sampling lead-scintillator (PbSc) calorimeters with
granularity $\delta\phi\times\delta\eta=0.011\times 0.011$ and the east arm
comprises two more PbSc sectors along with two sectors of Čerenkov lead-glass
(PbGl) calorimeters with granularity $\delta\phi\times\delta\eta=0.008\times
0.008$ Aphecetche _et al._ (2003).
The PHENIX central tracking system uses pad chambers and a drift chamber to
measure the position of charged particle tracks Adcox _et al._ (2003). The
beam-beam counters (BBC) are far-forward arrays of quartz Čerenkov radiators
that cover the full azimuth and $3.0<|\eta|<3.9$ Allen _et al._ (2003). They
measure the position of the vertex in the beam direction, for which a 30 cm
vertex cut around the nominal collision point is applied. The minimum-bias
trigger fires on crossings where at least one charged particle is measured in
each arm of the BBC. Events with high-$p_{T}$ photons are selected through an
EMCal-based high-energy photon trigger that is taken in coincidence with this
minimum-bias trigger.
All photons used in the asymmetry calculation are required to pass the
following cuts. A shower shape cut selects clusters whose energy distribution
is consistent with a parameterized profile from a photon shower. This reduces
the contribution of clusters from hadrons along with merged photons from high
energy $\pi^{0}$ decays, which resolve as a single cluster in the EMCal. A
time-of-flight cut suppresses the contribution of EMCal noise, where the
timing of the cluster is measured by the EMCal and the time zero reference of
the event is provided by the BBC. A charged-track-veto cut eliminates clusters
that geometrically match with a charged track and uses the track position
measured directly in front of the EMCal. This cut reduces the background from
electrons as well as charged hadrons that were not eliminated by the shower
shape cut.
Direct photon candidates are also required to pass tagging cuts that reduce
the hadronic decay background by eliminating photons that are tagged as coming
from either $\mbox{$\pi^{0}$}\rightarrow\gamma\gamma$ or
$\eta\rightarrow\gamma\gamma$ decays. The candidate direct photon is matched
with a partner photon in the same event and same EMCal arm, which has passed a
minimum-energy cut of 0.5 GeV. A photon is considered tagged as coming from a
$\mbox{$\pi^{0}$}\rightarrow\gamma\gamma$ ($\eta\rightarrow\gamma\gamma$)
decay if it is matched into a photon pair with invariant mass
$105<M_{\gamma\gamma}<165~{}{\rm MeV}/c^{2}$ ($480<M_{\gamma\gamma}<620~{}{\rm
MeV}/c^{2}$), which corresponds roughly to a $\pm 2\sigma$ window around the
observed $\pi^{0}$ and $\eta$ peaks.
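The tagging logic can be sketched as follows. This is an illustrative toy helper, not PHENIX analysis code; the function name and inputs are assumptions, while the mass windows (converted to ${\rm GeV}/c^{2}$) and the 0.5 GeV partner-energy cut come from the text.

```python
def tagged_meson(m_gg, partner_energy):
    """Return 'pi0' or 'eta' if the diphoton invariant mass (GeV/c^2) falls
    in the corresponding window, or None if the photon is untagged.
    The partner photon must pass the 0.5 GeV minimum-energy cut."""
    if partner_energy < 0.5:
        return None
    if 0.105 < m_gg < 0.165:   # ~+-2 sigma window around the observed pi0 peak
        return "pi0"
    if 0.480 < m_gg < 0.620:   # ~+-2 sigma window around the observed eta peak
        return "eta"
    return None
```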
Additionally, direct photon candidates have to pass an isolation cut, which
further reduces the contribution of decay photons Adare _et al._ (2012). Ref.
Adare _et al._ (2010) estimates that the contribution of the next-to-leading-
order fragmentation photons to the isolated direct photon sample is less than
15% for photons with $p_{T}>5~{}{\rm GeV}/c$. The photon isolation cut
requires that the sum of the particles’ energy surrounding the photon in a
cone of radius $r=\sqrt{(\Delta\eta)^{2}+(\Delta\phi)^{2}}<0.4$ radians be
less than 10% of the candidate photon’s energy: $E_{\rm cone}<E_{\gamma}\cdot
10\%$. To be included in the cone sum energy, $E_{\rm cone}$, an EMCal cluster
must have energy larger than $0.15~{}{\rm GeV}$ and a charged track needs to
have a momentum above $0.2~{}{\rm GeV}/c$. To provide a more inclusive sample
of the particles surrounding the photon, the clusters and tracks that are
included in the $E_{\rm cone}$ sum are only required to pass a minimum set of
quality cuts. The charged track veto cut is still used to ensure charged
particles are not double counted by the energy that they deposit in the EMCal.
The shower-shape cut is not applied to EMCal clusters to ensure that neutral
hadrons and charged hadrons that were not reconstructed as charged tracks can
still contribute to $E_{\rm cone}$.
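The isolation requirement described above can be sketched numerically. This is a minimal illustration under assumed inputs (the particle-record layout is hypothetical); the cone radius, energy thresholds, and 10% criterion are taken from the text.

```python
import math

def cone_energy(photon_eta, photon_phi, particles, r_max=0.4):
    """Sum the energy of clusters and tracks inside the cone, applying the
    minimum thresholds quoted in the text (0.15 GeV clusters, 0.2 GeV/c tracks)."""
    total = 0.0
    for p in particles:  # p: dict with "kind", "eta", "phi", "energy"
        if p["kind"] == "cluster" and p["energy"] < 0.15:
            continue
        if p["kind"] == "track" and p["energy"] < 0.2:
            continue
        # wrap the azimuthal difference into (-pi, pi]
        dphi = math.atan2(math.sin(p["phi"] - photon_phi),
                          math.cos(p["phi"] - photon_phi))
        if math.hypot(p["eta"] - photon_eta, dphi) < r_max:
            total += p["energy"]
    return total

def is_isolated(e_gamma, e_cone):
    """Isolation cut: E_cone < 10% of the candidate photon's energy."""
    return e_cone < 0.10 * e_gamma
```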
The asymmetry measurement is formed from photons that satisfy these criteria,
using similar techniques to previously published PHENIX TSSAs which include
Refs. Aidala _et al._ (2017) and Acharya _et al._ (2021). The TSSA is
determined using the relative luminosity formula:
$A_{N}=\frac{1}{P\,\left<\cos(\phi)\right>}\frac{{N^{\uparrow}}-\mathcal{R}{N^{\downarrow}}}{{N^{\uparrow}}+\mathcal{R}{N^{\downarrow}}},$
(1)
where $\mathcal{R}=\mathcal{L}^{\uparrow}/\mathcal{L}^{\downarrow}$ is the
relative luminosity of collisions for when the beam was polarized up versus
down. $P$ is the average polarization of the beam and
$\left<\cos(\phi)\right>$ is the acceptance factor accounting for the
azimuthal coverage of each detector arm. In Eq. (1), $N$ refers to the
particle yield and the up ($\uparrow$) or down ($\downarrow$) arrow
superscripts refer to the direction of the beam polarization. The asymmetries
are calculated separately for each arm of the detector and averaged together
for the final result, weighted by the statistical uncertainty.
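The relative-luminosity formula of Eq. (1) can be sketched numerically. This is a toy illustration with assumed inputs, not PHENIX data or analysis code.

```python
def a_n_rel_lumi(n_up, n_down, rel_lumi, pol, cos_phi):
    """Transverse single-spin asymmetry via the relative-luminosity
    formula, Eq. (1): yields with beam polarized up/down, relative
    luminosity R, beam polarization P, and acceptance factor <cos(phi)>."""
    num = n_up - rel_lumi * n_down
    den = n_up + rel_lumi * n_down
    return num / den / (pol * cos_phi)
```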
The main source of direct-photon background comes from decay photons that were
not eliminated by the tagging cut because their partner photon was not
measured. This can occur because the partner photon was out of acceptance, hit
a dead area of the detector, or did not pass the minimum-energy cut. To
calculate the isolated direct-photon asymmetry, $A_{N}^{\rm dir}$, the
candidate isolated direct-photon asymmetry, $A_{N}^{\rm iso}$, must be
corrected for the contribution from background:
$A_{N}^{\rm dir}=\frac{A_{N}^{\rm iso}-r_{\pi^{0}}\,A_{N}^{{\rm
iso},\pi^{0}}-r_{\eta}\,A_{N}^{{\rm iso},\eta}}{1-r_{\pi^{0}}-r_{\eta}}.$ (2)
This expression removes the effects of background asymmetries from isolated
$\pi^{0}$ photons, $A_{N}^{{\rm iso},\pi^{0}}$, and isolated $\eta$ photons,
$A_{N}^{{\rm iso},\eta}$, where $r_{\pi^{0}}$ and $r_{\eta}$ are the
background fractions due to photons from $\pi^{0}$ and $\eta$ decays,
respectively. Because the midrapidity $\pi^{0}$ and $\eta$ TSSAs have been
measured to be consistent with zero to high statistical precision Acharya _et
al._ (2021) and their isolated asymmetries were also confirmed to be
consistent with zero, $A_{N}^{\rm{iso},\pi^{0}}$ and $A_{N}^{\rm{iso},\eta}$
are set to zero in Eq. (2). The systematic uncertainty due to setting the
background asymmetries to zero dominates the total systematic uncertainty of
the direct-photon asymmetry for all $p_{T}$ bins. It is assigned by
integrating the inclusive midrapidity $\pi^{0}$ and $\eta$ TSSAs over photon
$p_{T}$ and propagating their uncertainties through Eq. (2).
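The background correction of Eq. (2) amounts to the following arithmetic; a toy sketch with assumed inputs, in which the background asymmetries default to zero as in the analysis.

```python
def a_n_direct(a_iso, r_pi0, r_eta, a_pi0=0.0, a_eta=0.0):
    """Background-corrected direct-photon asymmetry, Eq. (2):
    subtract the background-fraction-weighted pi0 and eta asymmetries
    and rescale by the direct-photon purity."""
    return (a_iso - r_pi0 * a_pi0 - r_eta * a_eta) / (1.0 - r_pi0 - r_eta)
```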
The background fraction calculation is performed by taking the ratio of
measured photon yields: $N^{{\rm iso},h}_{\rm tag}/N^{\rm iso}$, where $N^{\rm
iso}$ is the isolated direct photon candidate sample. $N^{{\rm iso},h}_{\rm
tag}$ is the number of photons that were tagged as coming from a diphoton
decay of hadron $h$ and pass the photon pair isolation cut, $E_{\rm
cone}-E_{\rm partner}<E_{\gamma}\cdot 10\%$, which subtracts off the energy of
the partner photon, $E_{\rm partner}$. Tagged photons that pass this cut would
have been included in the isolated direct photon candidate sample had their
partner photon not been detected. Simulations are used to calculate how to
convert from the number of tagged decay photons to the number of decay photons
where the partner photon was missed. The background fraction, $r_{h}$, for
photons from $\pi^{0}$ and $\eta$ meson decays is calculated separately to
account for their differences in particle production and decay kinematics,
$r_{h}=R_{h}\frac{N^{{\rm iso},h}_{\rm tag}}{N^{\rm iso}},$ (3)
where $R_{h}$ is the one-miss ratio for the decay of hadron $h$: in
single-particle Monte Carlo, the ratio of the number of decays for which only
one of the two decay photons was reconstructed to the number for which both
were reconstructed Adare _et al._ (2012). These
simulations include the geometry, resolution, and configuration of the dead
areas of the EMCal and use the previously measured $\pi^{0}$ Adare _et al._
(2007) and $\eta$ Adare _et al._ (2011) cross sections. The background
fractions for photons from $\pi^{0}$ ($\eta$) decays are plotted in Fig. 1 and
are systematically larger in the east arm versus the west due to the PbGl
sectors having slightly more dead area compared to the PbSc sectors. The
contribution of decay photons from sources heavier than $\eta$ mesons is
estimated to be less than 3% with respect to the measured background and so an
even smaller percentage of the total direct photon sample. The uncertainty on
the background fraction is propagated through Eq. (2) to assign an additional
systematic uncertainty to the direct-photon asymmetry.
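Equation (3) can be sketched as a one-line calculation; illustrative only, with assumed toy yields.

```python
def background_fraction(one_miss_ratio, n_tagged_iso, n_iso):
    """Eq. (3): scale the tagged, isolated decay-photon yield by the
    simulated one-miss ratio R_h and normalize to the candidate sample."""
    return one_miss_ratio * n_tagged_iso / n_iso
```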
Figure 1: The fractional contribution of photons from (a) $\pi^{0}$ and (b)
$\eta$ decays to the isolated direct photon candidate sample.
A similar method to Eq. (3) is used to find the contribution of merged
$\pi^{0}$ decay photons. The equivalent $R_{h}$ is calculated using simulated
$h\rightarrow\gamma\gamma$ decays, taking the ratio of the number of
reconstructed EMCal clusters produced by merged decay photons divided by the
number of reconstructed clusters associated with a single decay photon. The
contribution from merged photon clusters was found to be less than 0.2%, small
compared to the up to 50% background fraction due to the one-miss effects, and
the contribution from merged $\eta$ decays was confirmed to be negligible.
An additional systematic study is performed by calculating the asymmetry with
the square root formula:
$A_{N}=\frac{1}{P\,\left<\cos(\phi)\right>}\frac{\sqrt{N_{L}^{\uparrow}N_{R}^{\downarrow}}-\sqrt{N_{L}^{\downarrow}N_{R}^{\uparrow}}}{\sqrt{N_{L}^{\uparrow}N_{R}^{\downarrow}}+\sqrt{N_{L}^{\downarrow}N_{R}^{\uparrow}}},$
(4)
where the $L$ and $R$ subscripts refer to yields to the left and to the right
of the polarized-beam-going direction, respectively. This result is verified
to be consistent with the relative luminosity formula results from Eq. (1) and
the differences between these results are assigned as an additional systematic
uncertainty due to possible variations in detector performance and beam
conditions. The systematic uncertainty due to setting the background
asymmetries to zero dominates the total systematic uncertainty by an order of
magnitude for all $p_{T}$ bins except for the highest $p_{T}$ bin, where it is
only slightly larger than the difference between the square root formula and
relative luminosity formula. Another study using bunch shuffling found no
additional systematic effects. Bunch shuffling is a technique that randomizes
the bunch-by-bunch beam polarization directions to confirm that the variations
present in the data are consistent with what is expected by statistical
variation.
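The square-root formula of Eq. (4), used as a cross check, can be sketched as follows; a toy illustration with assumed yields, in which detector and luminosity effects largely cancel between left and right.

```python
import math

def a_n_sqrt(nl_up, nr_down, nl_down, nr_up, pol, cos_phi):
    """Square-root formula, Eq. (4): left/right yields for the two beam
    polarization directions, beam polarization P, and acceptance <cos(phi)>."""
    a = math.sqrt(nl_up * nr_down)
    b = math.sqrt(nl_down * nr_up)
    return (a - b) / (a + b) / (pol * cos_phi)
```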
Figure 2: Transverse single-spin asymmetry of isolated direct photons measured at midrapidity $|\eta|<0.35$ in $p^{\uparrow}$$+$$p$ collisions at $\sqrt{s}=200$ GeV. An additional scale uncertainty of 3.4% due to the polarization uncertainty is not shown.
Table 1: The measured $A_{N}$ of isolated direct photons in $p^{\uparrow}$$+$$p$ collisions at $\sqrt{s}=200$ GeV as a function of $p_{T}$. An additional scale uncertainty of 3.4% due to the polarization uncertainty is not included.
$\langle\mbox{$p_{T}$}\rangle~{}[{\rm GeV}/c]$ | $A_{N}^{\rm dir}$ | $\sigma_{\rm stat}$ | $\sigma_{\rm syst}$
---|---|---|---
5.39 | -0.000492 | 0.00299 | 0.00341
6.69 | 0.00247 | 0.00404 | 0.00252
8.77 | 0.00777 | 0.00814 | 0.00159
11.88 | 0.00278 | 0.0105 | 0.00106
The results for the $A_{N}$ of isolated direct photons, $A_{N}^{\rm dir}$, at
midrapidity in $p^{\uparrow}$$+$$p$ collisions at $\sqrt{s}$ = 200 GeV are
shown in Table 1 and in Fig. 2, where the shaded [gray] bands represent the
systematic uncertainty and the vertical bars represent the statistical
uncertainty. The measurement is consistent with zero to within 1% across the
entire $p_{T}$ range. Figure 2 also shows predictions from collinear twist-3
correlation functions. The solid [green] curve shows the contribution of qgq
correlation functions to the direct-photon asymmetry which is calculated using
functions that were published in Ref. Kanazawa _et al._ (2015) that are
integrated over the $|\eta|<0.35$ pseudorapidity range of the PHENIX central
arms. This calculation includes contributions from the qgq correlation
functions present in both the polarized and unpolarized proton, including the
ETQS function which is extracted from a global fit in Ref. Cammarota _et al._
(2020). The error band plotted with the solid [green] curve in Fig. 2 includes
uncertainties propagated from fits to data, but does not include uncertainties
associated with the assumed functional forms. These calculations do not
include quark-flavor dependence in the qgq correlators. Direct-photon
production in $p$$+$$p$ collisions is four times more sensitive to the up
quark than the down quark in the proton because of the factor of electric
charge squared in the production cross section.
Given the small predicted contributions from qgq correlation functions to the
midrapidity direct photon TSSA, this measurement can provide a clean
extraction of the ggg function. The predicted ranges for the trigluon
correlation function’s contribution to the direct-photon asymmetry are also
plotted in Fig. 2. The dashed [blue] and dotted [red] curves use results that
were published in Ref. Koike and Yoshida (2011) and were reevaluated as a
function of photon $p_{T}$ for pseudorapidity $\eta=0$ Note (1). Models 1
and 2 assume different
functional forms for the trigluon correlation function in terms of the
collinear leading-twist gluon PDF; no uncertainties are available for these
curves. As this figure shows, this measurement has the statistical precision,
especially at low $p_{T}$, to constrain the trigluon correlation function.
In summary, the TSSA of midrapidity isolated direct photons was measured by
the PHENIX experiment to be consistent with zero in the presented $p_{T}$
range, with uncertainties as low as 0.4% in the lowest $p_{T}$ bins. This is
the first time direct photons have been used to probe transversely polarized
proton collisions at RHIC and the first measurement of this TSSA in almost 30
years, with significantly higher $p_{T}$ reach and up to a fifty-fold
improvement in uncertainty. Direct photons are a clean probe of proton
structure with no contributions from final-state QCD effects and at
midrapidity are particularly sensitive to gluon dynamics. When included in the
global analysis of world TSSA data, this measurement will constrain gluon
spin-momentum correlations in the transversely polarized proton for $x\approx
x_{T}=0.05-0.18$, marking an important step toward creating a more three-
dimensional picture of proton structure.
###### Acknowledgements.
We thank the staff of the Collider-Accelerator and Physics Departments at
Brookhaven National Laboratory and the staff of the other PHENIX participating
institutions for their vital contributions. We also thank D. Pitonyak and S.
Yoshida for helpful discussions. We acknowledge support from the Office of
Nuclear Physics in the Office of Science of the Department of Energy, the
National Science Foundation, Abilene Christian University Research Council,
Research Foundation of SUNY, and Dean of the College of Arts and Sciences,
Vanderbilt University (U.S.A), Ministry of Education, Culture, Sports,
Science, and Technology and the Japan Society for the Promotion of Science
(Japan), Conselho Nacional de Desenvolvimento Científico e Tecnológico and
Fundação de Amparo à Pesquisa do Estado de São Paulo (Brazil), Natural Science
Foundation of China (People’s Republic of China), Croatian Science Foundation
and Ministry of Science and Education (Croatia), Ministry of Education, Youth
and Sports (Czech Republic), Centre National de la Recherche Scientifique,
Commissariat à l’Énergie Atomique, and Institut National de Physique Nucléaire
et de Physique des Particules (France), Bundesministerium für Bildung und
Forschung, Deutscher Akademischer Austausch Dienst, and Alexander von Humboldt
Stiftung (Germany), J. Bolyai Research Scholarship, EFOP, the New National
Excellence Program (ÚNKP), NKFIH, and OTKA (Hungary), Department of Atomic
Energy and Department of Science and Technology (India), Israel Science
Foundation (Israel), Basic Science Research and SRC(CENuM) Programs through
NRF funded by the Ministry of Education and the Ministry of Science and ICT
(Korea), Physics Department, Lahore University of Management Sciences
(Pakistan), Ministry of Education and Science, Russian Academy of Sciences,
Federal Agency of Atomic Energy (Russia), VR and Wallenberg Foundation
(Sweden), the U.S. Civilian Research and Development Foundation for the
Independent States of the Former Soviet Union, the Hungarian American
Enterprise Scholarship Fund, the US-Hungarian Fulbright Foundation, and the
US-Israel Binational Science Foundation.
## References
* Adare _et al._ (2010) A. Adare _et al._ (PHENIX Collaboration), “High $p_{T}$ direct photon and $\pi^{0}$ triggered azimuthal jet correlations and measurement of $k_{T}$ for isolated direct photons in $p+p$ collisions at $\sqrt{s}=200$ GeV,” Phys. Rev. D 82, 072001 (2010).
* Klem _et al._ (1976) R. D. Klem, J. E. Bowers, H. W. Courant, H. Kagan, M. L. Marshak, E. A. Peterson, K. Ruddick, W. H. Dragoset, and J. B. Roberts, “Measurement of Asymmetries of Inclusive Pion Production in Proton Proton Interactions at 6-GeV/c and 11.8-GeV/c,” Phys. Rev. Lett. 36, 929 (1976).
* Adams _et al._ (1991) D. L. Adams _et al._ (FNAL-E704 Collaboration), “Analyzing power in inclusive $\pi^{+}$ and $\pi^{-}$ production at high $x_{F}$ with a 200-GeV polarized proton beam,” Phys. Lett. B 264, 462 (1991).
* Allgower _et al._ (2002) C.E. Allgower _et al._ , “Measurement of analyzing powers of $\pi^{+}$ and $\pi^{-}$ produced on a hydrogen and a carbon target with a 22-GeV/c incident polarized proton beam,” Phys. Rev. D 65, 092008 (2002).
* Arsene _et al._ (2008) I. Arsene _et al._ (BRAHMS Collaboration), “Single Transverse Spin Asymmetries of Identified Charged Hadrons in Polarized $p+p$ Collisions at $\sqrt{s}$ = 62.4 GeV,” Phys. Rev. Lett. 101, 042001 (2008).
* Adam _et al._ (2021) Jaroslav Adam _et al._ (STAR), “Comparison of transverse single-spin asymmetries for forward $\pi^{0}$ production in polarized $pp$, $p\rm{Al}$ and $p\rm{Au}$ collisions at nucleon pair c.m. energy $\sqrt{s_{\mathrm{NN}}}=200$ GeV,” Phys. Rev. D 103, 072005 (2021).
* Kane _et al._ (1978) G. L. Kane, J. Pumplin, and W. Repko, “Transverse Quark Polarization in Large $p_{T}$ Reactions, $e^{+}e^{-}$ Jets, and Leptoproduction: A Test of QCD,” Phys. Rev. Lett. 41, 1689 (1978).
* Sivers (1990) D. W. Sivers, “Single Spin Production Asymmetries from the Hard Scattering of Point-Like Constituents,” Phys. Rev. D 41, 83 (1990).
* Adolph _et al._ (2017) C. Adolph _et al._ (COMPASS Collaboration), “First measurement of the Sivers asymmetry for gluons using SIDIS data,” Phys. Lett. B 772, 854 (2017).
* Godbole _et al._ (2019) R. M. Godbole, A. Kaushik, A. Misra, and S. Padval, “Probing the Gluon Sivers Function through direct photon production at RHIC,” Phys. Rev. D 99, 014003 (2019).
* Boer _et al._ (2003) D. Boer, P.J. Mulders, and F. Pijlman, “Universality of T odd effects in single spin and azimuthal asymmetries,” Nucl. Phys. B667, 201 (2003).
# Indices of diagonalizable and universal realizability of spectra††thanks:
Supported by Universidad Católica del Norte-VRIDT 036-2020, NUCLEO UCN
VRIDT-083-2020, Chile.
Charles R. Johnson${}^{a}$, Ana I. Julio${}^{b}$, Ricardo L. Soto${}^{b}$
${}^{a}$Department of Mathematics, College of William and Mary,
Williamsburg, VA, USA
${}^{b}$Departamento de Matemáticas, Universidad Católica del Norte,
Casilla 1280, Antofagasta, Chile
###### Abstract
A list $\Lambda=\\{\lambda_{1},\ldots,\lambda_{n}\\}$ of complex numbers
(repeats allowed) is said to be realizable if it is the spectrum of an
entrywise nonnegative matrix $A$. $\Lambda$ is diagonalizably realizable if
the realizing matrix $A$ is diagonalizable. $\Lambda$ is said to be
universally realizable if it is realizable for each possible Jordan canonical
form allowed by $\Lambda.$ Here, we study the connection between
diagonalizable realizability and universal realizability of spectra. In
particular, we establish indices of realizability for diagonalizable and
universal realizability. We also define the merge of two spectra and prove
a result that allows us to decide easily, in many cases, the universal
realizability of spectra.
AMS classification: 15A18, 15A20, 15A29
Key words: Nonnegative matrices, Diagonalizably realizable spectra, Universally
realizable spectra, Jordan structure, Eigenvalues and eigenvectors
## 1 Introduction
A list $\Lambda=\\{\lambda_{1},\ldots,\lambda_{n}\\}$ of complex numbers (with
repeats allowed) is said to be realizable if there is an $n$-by-$n$
nonnegative matrix $A$ with spectrum $\Lambda$. In this case $A$ is said to be
a realizing matrix. The problem of characterizing the realizability of lists
$\Lambda=\\{\lambda_{1},\ldots,\lambda_{n}\\}$ of complex numbers is called
the nonnegative inverse eigenvalue problem (NIEP). We say that $\Lambda$ is
diagonalizably realizable (DR) if it is the spectrum of a diagonalizable
nonnegative matrix. The problem of determining this kind of realizability is
called the diagonalizable realizability problem. $\Lambda$ is universally
realizable (UR) if it is realizable for every possible Jordan canonical form
(JCF) allowed by $\Lambda.$ The problem of determining the universal
realizability of spectra is called the universal realizability problem (URP).
The URP seeks to extend the question of determining the spectral properties
allowed by a nonnegative matrix, not only regarding the eigenvalues
themselves, but also from the point of view of the corresponding JCF. The URP
contains the NIEP and both problems are equivalent if the given complex
numbers $\lambda_{1},\ldots,\lambda_{n}$ are distinct. The NIEP has attracted
the interest of many researchers in linear algebra. The URP, on the other
hand, is a newer and even more difficult problem. Both problems remain unsolved
for $n\geq 5,$ and a complete solution is still far from the present
state of the art. The URP is studied modulo the NIEP (see [15]). This means
that the methods applied to the NIEP are not only useful but in many
cases necessary for the URP. Thus, answering open questions about the URP
has a positive impact on the progress towards a solution.
The NIEP, as we know it today, begins with the works by Suleĭmanova [24] and
Perfect [17, 18]. The NIEP has been solved only for the cases $n=3$ by Loewy
and London [13], and for $n=4$ by Meehan [14] and also independently by Torre-
Mayo et al. [25]. The first known results on the URP are due to Minc [15, 16].
In [15] Minc proved that if $\Lambda=\\{\lambda_{1},\ldots,\lambda_{n}\\}$ is
the spectrum of a diagonalizable positive matrix, then $\Lambda$ is UR. We
want to point out here that previously the URP was called nonnegative inverse
elementary divisors problem [15], and that in [4] the authors used for the
first time the concept and name universal realizability. There are spectra,
not positively realizable, that are known to be UR [3, 22, 23]. For instance,
spectra in the left half-plane, that is,
$\Lambda=\\{\lambda_{1},\ldots,\lambda_{n}\\}$ with $\lambda_{1}>0,$
$\mbox{Re}\lambda_{i}\leq 0,$ $i=2,\ldots,n,$ such as real Suleĭmanova spectra
[22] $\lambda_{1}>0>\lambda_{2}\geq\cdots\geq\lambda_{n}$; complex Suleĭmanova
spectra [23]
$\lambda_{1}>0,\text{ }\lambda_{i}\in\left\\{z\in\mathbb{C}:\mbox{Re}z\leq
0,\text{ }\left|\mbox{Re}z\right|\geq\left|\mbox{Im}z\right|\right\\},\text{
}i=2,\ldots,n;$
and Šmigoc spectra [3]
$\lambda_{1}>0,\text{ }\lambda_{i}\in\left\\{z\in\mathbb{C}:\mbox{Re}z\leq
0,\text{
}\left|\sqrt{3}\mbox{Re}z\right|\geq\left|\mbox{Im}z\right|\right\\},\text{
}i=2,\ldots,n,$
which contains the real and complex Suleĭmanova spectra. These spectra are
realizable if and only if they are UR, if and only if
$\sum_{i=1}^{n}\lambda_{i}\geq 0.$ The good behavior of these kinds of spectra
led one to think that any left half-plane list was UR. Now we know that this is
not true [9, 10].
In [12] Laffey and Šmigoc proved that a list
$\Lambda=\\{\lambda_{1},\ldots,\lambda_{n}\\}$ in the left half-plane is
realizable if and only if
$s_{1}=\sum_{i=1}^{n}\lambda_{i}\geq 0,\text{ \
}s_{2}=\sum_{i=1}^{n}\lambda_{i}^{2}\geq 0,\text{ \ }s_{1}^{2}\leq ns_{2}.$
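The three Laffey–Šmigoc conditions are straightforward to check numerically. A minimal sketch (ours, not from the paper; the function name is our own); the power sums of a self-conjugate list are real:

```python
def laffey_smigoc_realizable(spectrum):
    """Check s1 >= 0, s2 >= 0 and s1^2 <= n*s2 for a self-conjugate
    left half-plane list (the power sums of such a list are real)."""
    n = len(spectrum)
    s1 = sum(complex(z) for z in spectrum).real
    s2 = sum(complex(z) ** 2 for z in spectrum).real
    return s1 >= 0 and s2 >= 0 and s1 ** 2 <= n * s2

# {1, 1, -1, -1}: s1 = 0, s2 = 4, so the conditions hold
print(laffey_smigoc_realizable([1, 1, -1, -1]))   # True
# {1, -1, -1}: s1 = -1 < 0, so the list is not realizable
print(laffey_smigoc_realizable([1, -1, -1]))      # False
```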
The positivity and diagonalizability conditions in the result of Minc [15] are
essential for his proof. Minc says that “it is not known if the theorem
holds for a general nonnegative matrix”, a question that has been open for
almost 40 years. Recently, two extensions for nonnegative realizations have
been obtained: the first one, by Collao, Salas, and Soto [2], shows that if
$\Lambda=\\{\lambda_{1},\ldots,\lambda_{n}\\}$ is the spectrum of a
diagonalizable nonnegative matrix with constant row sums and a positive row or
column, then $\Lambda$ is UR. The second extension, by Johnson, Julio, and Soto
[7], shows that if $\Lambda$ is realizable by a diagonalizable ODP matrix,
that is, a diagonalizable nonnegative matrix having all off-diagonal entries
positive (zeros on the diagonal are permitted), then $\Lambda$ is also UR.
Observe that in both extensions, the set of realizing matrices contains the
set of positive realizing matrices. Moreover, the extension in [7] allows one to
decide the universal realizability of lists with
$\sum\limits_{i=1}^{n}\lambda_{i}=0$, which is not possible from Minc's
result. These two extensions open a line of research that allows one to prove
the universal realizability of non-positively realizable spectra, thus
significantly extending the class of spectra that can be shown to be UR. Since
DR is a necessary condition for UR, all lists of complex numbers that are DR
are natural candidates to be UR.
Despite the progress that has been made on the URP, there are still numerous
open questions. Two of them, open until recently, were the following. Is every left half-plane
list UR? In [10] the authors show that a realizable left half-plane list is
not necessarily UR. In fact, they prove that the spectrum
$\Lambda=\left\\{a,-\frac{a}{4}+\frac{\sqrt{5}a}{4}i,-\frac{a}{4}-\frac{\sqrt{5}a}{4}i,-\frac{a}{4}+\frac{\sqrt{5}a}{4}i,-\frac{a}{4}-\frac{\sqrt{5}a}{4}i\right\\}\text{
}a>0,$ (1)
is realizable, but not DR and therefore not UR. The other open question was
whether DR implies UR. In [9] the authors show that the spectra
$\\{\lambda_{1},\lambda_{1},\lambda_{2},\lambda_{2}\\}$ with
$\lambda_{1}>0>\lambda_{2}\geq-\lambda_{1},$ $\lambda_{1}+2\lambda_{2}<0,$
have no realization with a JCF having a Jordan block $J_{2}(\lambda_{2})$ of
size $2.$ Thus, for instance, $\Lambda=\\{1,1,-1,-1\\}$ is not UR although it
is DR. Since $\Lambda=\\{1,1,-1,-1\\}$ has a reducible realization, what may
be said about irreducible realizations? In [10] it was also shown that
irreducible diagonalizable realizations are not necessarily UR. It then
remains to know under what conditions a DR left half-plane list of complex
numbers is UR. From the extensions in [2, 7] we may say that DR implies UR if
$\Lambda=\\{\lambda_{1},\ldots,\lambda_{n}\\}$ is the spectrum of a
diagonalizable nonnegative matrix with constant row sums and a positive row or
column, or if it is diagonalizably ODP realizable. The importance of the
diagonal JCF lies in the fact that we know how to join Jordan blocks to obtain
larger Jordan blocks. Thus, if we can obtain a diagonalizable realizing matrix
for $\Lambda=\\{\lambda_{1},\ldots,\lambda_{n}\\},$ we can also obtain a
nonnegative matrix with spectrum $\Lambda$ for each possible JCF allowed by
$\Lambda.$ What about criteria that allow one to decide the universal
realizability of a list of complex numbers? In this work we also introduce an
operation that we call the merge of two spectra and prove a result that allows
us to decide easily, in many cases, the universal realizability of spectra. We
also establish indices of Guo type [5] for diagonalizable and universal
realizability.
A matrix $A=[a_{ij}]$ of order $n$ is said to have constant row sums if all
its rows sum to the same constant $\alpha$. We denote by
$\mathcal{CS}_{\alpha}$ the set of all $n$-by-$n$ real matrices with constant
row sums equal to $\alpha$. It is clear that any matrix in
$\mathcal{CS}_{\alpha}$ has the eigenvector
$\mathbf{e}=[1,\ldots,1]^{\textsuperscript{T}}$ corresponding to the
eigenvalue $\alpha$. The relevance of real matrices with constant row sums
is due to the fact that the problem of finding a nonnegative matrix with
spectrum $\Lambda=\\{\lambda_{1},\ldots,\lambda_{n}\\}$, $\lambda_{1}$ being
the Perron eigenvalue, is equivalent to the problem of finding a nonnegative
matrix in $\mathcal{CS}_{\lambda_{1}},$ with spectrum $\Lambda$ (see [6]). We
denote by $E_{i,j}$ the matrix with $1$ in position $(i,j)$ and zeros
elsewhere and we define the matrix
$E(K)=\sum\limits_{i\in K}E_{i,i+1},\text{ \ }K\subset\\{1,2,\ldots,n\\}.$ (2)
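The matrix $E(K)$ of (2) is easy to form explicitly. A minimal sketch (ours), taking the 1-indexed set $K$ of the paper:

```python
def E_of_K(K, n):
    """Return the n-by-n matrix sum over i in K of E_{i,i+1},
    with K a subset of {1, ..., n-1} (1-indexed, as in the paper)."""
    M = [[0] * n for _ in range(n)]
    for i in K:
        M[i - 1][i] = 1   # E_{i,i+1} places a 1 just above the diagonal
    return M

# K = {1, 3} in a 4-by-4 matrix: ones at positions (1,2) and (3,4)
print(E_of_K({1, 3}, 4))
```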
The paper is organized as follows: In Section $2$, we introduce the
diagonalizable realizability index $g_{d}(\Lambda/\lambda_{1})$ and the
universal realizability index $g_{u}(\Lambda/\lambda_{1}).$ Then, we show that
a realizable list $\Lambda=\\{\mu,\lambda_{2},\ldots,\lambda_{n}\\}$ of
complex numbers is DR for all $\mu\geq g_{d}(\Lambda/\lambda_{1})$ and that
$\Lambda$ is UR for all $\mu\geq g_{u}(\Lambda/\lambda_{1}).$ In Section $3$,
we define the merge of two spectra and show that if
$\Gamma_{1}=\\{\lambda_{1},\ldots,\lambda_{n}\\}$ and
$\Gamma_{2}=\\{\mu_{1},\ldots,\mu_{m}\\}$ have a diagonalizable ODP
realization, then
$\Gamma=\\{\lambda_{1}+\mu_{1},\lambda_{2},\ldots,\lambda_{n},\mu_{2},\ldots,\mu_{m}\\}$
also has a diagonalizable ODP realization, and therefore $\Gamma$ is UR. This
result becomes a useful criterion to decide, in many cases, the universal
realizability of spectra.
## 2 Diagonalizable and universal realizability indices
In what follows we will use Theorems 2.1 to 2.3 below.
Theorem 2.1, due to Brauer [1], is a perturbation result that shows how to
change one single eigenvalue of an $n$-by-$n$ matrix without changing any of
the remaining $(n-1)$ eigenvalues. Theorem 2.2, by Soto and Ccapa [22],
establishes the JCF of the Brauer perturbation
$A+\mathbf{eq}^{\textsuperscript{T}}.$ Theorem 2.3, by Šmigoc [20], shows how
to construct a matrix $C$ with a particular JCF from two given matrices $A$
and $B.$
###### Theorem 2.1
[1] Brauer. Let $A$ be an $n$-by-$n$ matrix with spectrum
$\Lambda=\\{\lambda_{1},\ldots,\lambda_{n}\\}$. Let
$\mathbf{v}^{\textsuperscript{T}}=[v_{1},\ldots,v_{n}]$ be an eigenvector of
$A$ associated with the eigenvalue $\lambda_{k}$ and let $\mathbf{q}$ be any
$n$-dimensional vector. Then the matrix $A+\mathbf{vq}^{\textsuperscript{T}}$
has eigenvalues
$\lambda_{1},\ldots,\lambda_{k-1},\lambda_{k}+\mathbf{v}^{\textsuperscript{T}}\mathbf{q},\lambda_{k+1},\ldots,\lambda_{n}$.
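Brauer's theorem can be verified numerically on a small example. The sketch below (ours, not from the paper) uses a 2-by-2 matrix, where trace and determinant determine the spectrum:

```python
def brauer_perturb(A, v, q):
    """Return A + v q^T for a square matrix A given as a list of rows."""
    n = len(A)
    return [[A[i][j] + v[i] * q[j] for j in range(n)] for i in range(n)]

# A has spectrum {1, -1}; v = (1, 1) is an eigenvector for lambda = 1.
A = [[0, 1], [1, 0]]
B = brauer_perturb(A, [1, 1], [2, 0])         # shifts 1 to 1 + v^T q = 3
trace = B[0][0] + B[1][1]                     # 3 + (-1) = 2
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]   # 3 * (-1) = -3
print(trace, det)
```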
###### Theorem 2.2
[22] Soto and Ccapa. Let
$\mathbf{q}^{\textsuperscript{T}}=[q_{1},\ldots,q_{n}]$ be an arbitrary
$n$-dimensional vector. Let $A\in\mathcal{CS}_{\lambda_{1}}$ with JCF
$J(A)=S^{-1}AS=diag\left\\{J_{1}(\lambda_{1}),J_{n_{2}}(\lambda_{2}),\ldots,J_{n_{k}}(\lambda_{k})\right\\}.$
If $\lambda_{1}+\sum_{i=1}^{n}q_{i}\neq\lambda_{i},$ $i=2,\ldots,n$, then the
matrix $A+\mathbf{eq}^{\textsuperscript{T}}$ has Jordan canonical form
$J(A)+\left(\sum_{i=1}^{n}q_{i}\right)E_{11}$. In particular, if
$\sum_{i=1}^{n}q_{i}=0$ then $A$ and $A+\mathbf{eq}^{\textsuperscript{T}}$ are
similar.
###### Theorem 2.3
[20] Šmigoc. Suppose $B$ is an $m$-by-$m$ matrix with JCF that contains at
least one $1$-by-$1$ Jordan block corresponding to the eigenvalue $c$:
$J(B)=\left[\begin{array}[]{cc}c&0\\\ 0&I(B)\end{array}\right].$
Let $\mathbf{t}$ and $\mathbf{s}$, respectively, be the left and the right
eigenvectors of $B$ associated with the $1$-by-$1$ Jordan block in the above
canonical form. Furthermore, we normalize vectors $\mathbf{t}$ and
$\mathbf{s}$ so that $\mathbf{t}^{\textsuperscript{T}}\mathbf{s}=1.$ Let
$J(A)$ be a JCF for an $n$-by-$n$ matrix
$A=\left[\begin{array}[]{cc}A_{1}&\mathbf{a}\\\
\mathbf{b^{\textsuperscript{T}}}&c\end{array}\right],$
where $A_{1}$ is an $(n-1)$-by-$(n-1)$ matrix and $\mathbf{a}$ and
$\mathbf{b}$ are vectors in $\mathbb{C}^{n-1}.$ Then the
matrix
$C=\left[\begin{array}[]{cc}A_{1}&\mathbf{at}^{\textsuperscript{T}}\\\
\mathbf{sb}^{\textsuperscript{T}}&B\end{array}\right]$
has JCF
$J(C)=\left[\begin{array}[]{cc}J(A)&0\\\ 0&I(B)\end{array}\right].$
Let $\Lambda=\\{\lambda_{1},\ldots,\lambda_{n}\\}$ and
$\Lambda/\lambda_{1}=\\{\lambda_{2},\ldots,\lambda_{n}\\}$ be self-conjugate
lists of complex numbers. Guo [5] proved that there is a minimal nonnegative
number $g_{r}(\Lambda/\lambda_{1})$ such that
$\Lambda_{\mu}=\\{\mu,\lambda_{2},\ldots,\lambda_{n}\\}$ is realizable for all
$\mu\geq g_{r}(\Lambda/\lambda_{1}).$ Moreover, Guo established that
$\max_{2\leq j\leq n}\left|\lambda_{j}\right|\leq
g_{r}(\Lambda/\lambda_{1})\leq 2n\max_{2\leq j\leq
n}\left|\lambda_{j}\right|.$ (3)
In [11] it was shown that the upper bound in (3) may be reduced to
$(n-1)\underset{2\leq j\leq n}{\max}\left|\lambda_{j}\right|,$ with $n\geq 5,$
in the case when $\lambda_{k},$ $k=2,\ldots,n,$ are complex conjugates, and
that this bound is sharp. In [19] the authors show how to calculate the Guo
index $g_{r}(\Lambda/\lambda_{1})$ for nonnegative circulant matrices.
However, computing this index becomes a prohibitive task for large $n.$ In
this section, we prove that there is also a minimal nonnegative number
$g_{d}(\Lambda/\lambda_{1})$, called diagonalizable realizability index, such
that $\Lambda_{\mu}=\\{\mu,\lambda_{2},\ldots,\lambda_{n}\\}$ is DR if
$\mu\geq g_{d}(\Lambda/\lambda_{1})$. Of course
$g_{d}(\Lambda/\lambda_{1})\geq g_{r}(\Lambda/\lambda_{1}).$ Equality occurs
if $\Lambda/\lambda_{1}$ has distinct elements, and may occur otherwise. The
proof is similar to the proof in [11], but it considers all possible cases,
which does not occur in [11].
###### Theorem 2.4
If $\Lambda=\\{\lambda_{1},\ldots,\lambda_{n}\\}$ is realizable, then there is
a nonnegative real number $g_{d}(\Lambda/\lambda_{1})$ such that
$\Lambda_{\mu}=\\{\mu,\lambda_{2},\ldots,\lambda_{n}\\}$ is DR for every
$\mu\geq g_{d}(\Lambda/\lambda_{1}).$ Moreover
$g_{r}(\Lambda/\lambda_{1})\leq g_{d}(\Lambda/\lambda_{1})\leq(n-1)\max_{2\leq
j\leq n}\left|\lambda_{j}\right|.$ (4)
Proof. First, we exhibit a value $\mu$ such that $\Lambda_{\mu}$ is DR. Let
$A$ be a realizing matrix for $\Lambda.$ Without loss of generality we assume
that $A\in\mathcal{CS}_{\lambda_{1}}$, that is,
$A\mathbf{e}=\lambda_{1}\mathbf{e}$. If $A$ is diagonalizable we are done. If
not, we take $A=SJS^{-1},$ where $J$ is the JCF of $A$ and
$S\mathbf{e}_{1}=\mathbf{e}$. Now let $\widetilde{J}$ be the same as $J,$
except that any superdiagonal $1$'s are replaced with $0$'s.
So, $\widetilde{J}$ is diagonal with spectrum $\Lambda.$ Define
$\widetilde{A}=S\widetilde{J}S^{-1}.$ If $\widetilde{A}$ is nonnegative we are
done. If not, since $\widetilde{A}\in\mathcal{CS}_{\lambda_{1}}$ we apply
Brauer’s Theorem to produce a nonnegative matrix
$A^{\prime}=\widetilde{A}+\mathbf{eq}^{\textsuperscript{T}}$, where
$\mathbf{q}^{\textsuperscript{T}}=[q_{1},\ldots,q_{n}]$ is an appropriate
nonnegative vector. From Theorem 2.2 $A^{\prime}$ is diagonalizable with
spectrum
$\left\\{\lambda_{1}+\sum\limits_{i=1}^{n}q_{i},\lambda_{2},\ldots,\lambda_{n}\right\\}.$
Let $g_{d}(\Lambda/\lambda_{1})=\lambda_{1}+\sum\limits_{i=1}^{n}q_{i}$. Thus
we have established the existence of a value $g_{d}(\Lambda/\lambda_{1})$ such
that $\Lambda_{\mu}$ is DR for every $\mu\geq g_{d}(\Lambda/\lambda_{1}).$
Next, we show that $g_{d}(\Lambda/\lambda_{1})$ satisfies (4). The lower bound
is clear. Although similar to the proof of Theorem $3.2$ in [11], this proof is
more complete and considers all cases, in particular the case in which
$\mu_{j},$ $j=2,\ldots,n,$ are all nonreal complex conjugates. For the sake of
completeness, and since the result is of independent interest, we give the
proof here. Let
$m=\underset{2\leq j\leq n}{\max}\left|\lambda_{j}\right|$ and let
$\mu_{j}=\frac{\lambda_{j}}{(n-1)m},$ $j=2,\ldots,n.$ Then,
$\\{\mu_{2},\ldots,\mu_{n}\\}$ is a list of complex numbers such that $\
\left|\mu_{j}\right|\leq\frac{1}{n-1},$ $j=2,\ldots,n.$ Consider the initial
matrix
$B=\begin{bmatrix}0&0&0&\cdots&\cdots&\cdots&\cdots&\cdots&0\\\
-\mu_{2}&\mu_{2}&\ddots&&&&&&\vdots\\\
\vdots&\ddots&\ddots&\ddots&&&&&\vdots\\\
-\mu_{p}&\vdots&\ddots&\mu_{p}&0&&&&\vdots\\\
-x_{s}&y_{s}&&\ddots&x_{s}&-y_{s}&&&\vdots\\\
-x_{s}&-y_{s}&&&y_{s}&x_{s}&\ddots&&\vdots\\\
\vdots&\vdots&&&&\ddots&\ddots&\ddots&0\\\
-x_{t}&y_{t}&&&&&\ddots&x_{t}&-y_{t}\\\
-x_{t}&-y_{t}&\cdots&\cdots&\cdots&\cdots&0&y_{t}&x_{t}\end{bmatrix},$
where $\mu_{2},\ldots,\mu_{p}$ are real, $x_{j}=\mbox{Re}\mu_{j}$,
$y_{j}=\mbox{Im}\mu_{j},$ $p+1\leq j\leq\frac{n+p}{2}.$ Then
$B\in\mathcal{CS}_{0}$ has eigenvalues
$0,\mu_{2},\ldots,\mu_{p},\mu_{p+1},\ldots,\mu_{n}$ and it is clear that $B$ is
diagonalizable.
* •
If $\mbox{Re}\mu_{j}\leq 0,$ $j=2,\ldots,n,$ then all entries in the first
column of $B$ are nonnegative. Let
$\mathbf{q}^{\textsuperscript{T}}=\left[0,\frac{1}{n-1},\ldots,\frac{1}{n-1}\right].$
From Theorem 2.1 and Theorem 2.2, $A^{\prime}=B+\mathbf{\
eq}^{\textsuperscript{T}}$ is diagonalizable nonnegative with spectrum
$\\{1,\mu_{2},\ldots,\mu_{n}\\}$ and $A=(n-1)mA^{\prime}$ is diagonalizable
nonnegative with spectrum
$\left\\{(n-1)m,\lambda_{2},\ldots,\lambda_{n}\right\\}$.
* •
If $\mbox{Re}\mu_{j}>0$ for some $j,$ $3\leq j\leq n$, then all the entries in
the $j$-th column of $B$ (or in the
$(j-1)$-th column of $B$ if $j$ corresponds to the second
column of the corresponding $2$-by-$2$ complex block) are nonnegative. Let
$\mathbf{q}^{\textsuperscript{T}}=\left[\frac{1}{n-1},\ldots,\frac{1}{n-1},0,\frac{1}{n-1},\ldots,\frac{1}{n-1}\right],$
with zero in the $j$-th position
(respectively, the $(j-1)$-th position). Then, again,
$A^{\prime}=B+\mathbf{eq}^{\textsuperscript{T}}$ is diagonalizable nonnegative
with spectrum $\\{1,\mu_{2},\ldots,\mu_{n}\\}$ and $A=(n-1)mA^{\prime}$ is
diagonalizable nonnegative with spectrum
$\\{(n-1)m,\lambda_{2},\ldots,\lambda_{n}\\}$. Observe that, from the
necessary and sufficient conditions by Loewy and London [13], the result still
holds for the special case $n=3$ with $\Lambda=\\{\lambda_{1},a+bi,a-bi\\}.$
* •
If $\mu_{2}>0$ with $\mbox{Re}\mu_{j}<0,$ $j=3,\ldots,n,$ then we write the
$-\mbox{Re}\mu_{j}$'s, $3\leq j\leq n,$ along the second column of $B$
and the $\pm\mbox{Im}\mu_{j}$'s, $p+1\leq j\leq n,$ along the first
column of $B.$ Then again, with
$\mathbf{q}^{\textsuperscript{T}}=\left[\frac{1}{n-1},0,\frac{1}{n-1},\ldots,\frac{1}{n-1}\right],$
we obtain, as before, the diagonalizable nonnegative matrix
$A=(n-1)mA^{\prime}$ with the required spectrum.
* •
Now we consider the case in which $\mu_{j},$ $j=2,\ldots,n,$ are all nonreal
complex conjugate numbers, that is, $\mbox{Im}\mu_{j}\neq 0.$ Consider the
$2$-by-$2$ diagonal blocks
$\left[\begin{array}[]{cc}\mbox{Re}\mu_{j}&-\mbox{Im}\mu_{j}\\\
\mbox{Im}\mu_{j}&\mbox{Re}\mu_{j}\end{array}\right],\text{ \ }j=2k,\text{
}k=1,2,\ldots,\frac{n-1}{2},$
in the matrix $B.$ In this case we can only use the first column of $B$ to
place $-(\mbox{Re}\mu_{j}+\mbox{Im}\mu_{j})$ so that $B$
$\in\mathcal{CS}_{0}.$ Now we have the following cases:
* –
If $\mbox{Re}\mu_{j}\geq 0$ for one or more indices $j,$ then the
corresponding $j$-th columns of $B$ are nonnegative and
the proof follows as before. If
$\left|\mbox{Re}\mu_{j}+\mbox{Im}\mu_{j}\right|>\frac{1}{n-1}$ for some $j$,
then we set the corresponding diagonal block
$\left[\begin{array}[]{cc}\mbox{Re}\mu_{j}&-\mbox{Im}\mu_{j}\\\
\mbox{Im}\mu_{j}&\mbox{Re}\mu_{j}\end{array}\right]$
on the last $2$-by-$2$ diagonal position in the matrix $B$ to distribute any
possible negative amount (from the reciprocal of
$\mbox{Re}\mu_{j}+\mbox{Im}\mu_{j}$) through the last row under some
appropriate columns. Observe that there is at least one negative entry
($-\mbox{Im}\mu_{j}$) in each of the above $2$-by-$2$ blocks in $B.$ Thus,
$A^{\prime}$ is nonnegative with spectrum
$\left\\{\sum\limits_{i=1}^{n}q_{i},\mu_{2},\ldots,\mu_{n}\right\\},$
$\sum\limits_{i=1}^{n}q_{i}\leq 1$ and $A=(n-1)mA^{\prime}$ is nonnegative
with spectrum
$\left\\{(n-1)m\sum\limits_{i=1}^{n}q_{i},\lambda_{2},\ldots,\lambda_{n}\right\\},$
where $(n-1)m\sum\limits_{i=1}^{n}q_{i}\leq(n-1)m.$ In fact, for $n=5$ (the
minimum case) with $\mbox{Re}\mu_{i}\geq 0,$ we have
$\left[\begin{array}[]{ccccc}0&0&0&0&0\\\
-\mbox{Re}\mu_{i}+\mbox{Im}\mu_{i}&\mbox{Re}\mu_{i}&-\mbox{Im}\mu_{i}&0&0\\\
-\mbox{Re}\mu_{i}-\mbox{Im}\mu_{i}&\mbox{Im}\mu_{i}&\mbox{Re}\mu_{i}&0&0\\\
-\mbox{Re}\mu_{j}+\mbox{Im}\mu_{j}&0&0&\mbox{Re}\mu_{j}&-\mbox{Im}\mu_{j}\\\
-\mbox{Re}\mu_{j}-\mbox{Im}\mu_{j}+\mbox{Im}\mu_{i}&0&-\mbox{Im}\mu_{i}&\mbox{Im}\mu_{j}&\mbox{Re}\mu_{j}\end{array}\right]$
Then
$\mathbf{q}^{T}=\left[\mbox{Re}\mu_{j}+\mbox{Im}\mu_{j},0,\mbox{Im}\mu_{i},0,\mbox{Im}\mu_{j}\right]$
and $\sum\limits_{i=1}^{5}q_{i}=\mbox{Re}\mu_{j}+2\mbox{Im}\mu_{j}+\mbox{Im}\mu_{i}\leq\frac{4}{n-1}\leq 1.$ As noted above, the validity of the
case $n=3$ is guaranteed by the necessary and sufficient conditions of Loewy
and London [13].
* –
If $\mbox{Re}\mu_{j}<0$ with
$\left|\mbox{Re}\mu_{j}\right|\geq\left|\mbox{Im}\mu_{j}\right|,$
$j=2,\ldots,n,$ then the first column in $B$ is nonnegative and the proof
follows as before.
* –
If $\mbox{Re}\mu_{j}<0$ with
$\left|\mbox{Re}\mu_{j}\right|<\left|\mbox{Im}\mu_{j}\right|$, $j=2,\ldots,n,$
then
$\left|\mbox{Re}\mu_{j}+\mbox{Im}\mu_{j}\right|<\left|\mbox{Im}\mu_{j}\right|<\left|\mu_{j}\right|\leq\frac{1}{n-1},$
and the proof follows as before again.
The upper bound in (4) is also sharp for $g_{d}(\Lambda/\lambda_{1}),$ as
shown by a diagonalizable realization of the list
$\Lambda=\\{(n-1),\underset{(n-1)\text{ times}}{\underbrace{-1,\ldots,-1}}\\}.$
The spectrum $\Lambda=\cup_{i=1}^{\frac{n}{2}}\\{\lambda_{i},-\lambda_{i}\\},$
with real $\lambda_{i},$ shows that the lower bound in (4) is also sharp for
$g_{d}(\Lambda/\lambda_{1}).$ We may also define, in analogy with
$g_{r}(\Lambda/\lambda_{1})$ and $g_{d}(\Lambda/\lambda_{1}),$ a universal
realizability index $g_{u}(\Lambda/\lambda_{1}).$ Then, the following result
is clear.
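A standard diagonalizable realization of $\\{(n-1),-1,\ldots,-1\\}$ is the all-ones matrix minus the identity, which is symmetric (hence diagonalizable) and nonnegative. A minimal check (ours, not from the paper), comparing the first $n$ power-sum traces with those of the claimed spectrum:

```python
n = 4
A = [[0 if i == j else 1 for j in range(n)] for i in range(n)]  # J - I

def matmul(X, Y):
    m = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

ok, P = True, A
for k in range(1, n + 1):
    # tr(A^k) must equal (n-1)^k + (n-1)(-1)^k for spectrum {n-1, -1, ..., -1}
    ok = ok and sum(P[i][i] for i in range(n)) == (n - 1) ** k + (n - 1) * (-1) ** k
    P = matmul(P, A)
print(ok)   # the first n power sums match, so the spectra agree
```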
###### Corollary 2.1
From Theorem 2.4 we have that
$g_{d}(\Lambda/\lambda_{1})\leq g_{u}(\Lambda/\lambda_{1})\leq(n-1)\max_{2\leq
j\leq n}\left|\lambda_{j}\right|\text{.}$ (5)
Proof. The lower bound is clear. To prove the upper bound we consider the
matrix $A=(n-1)mA^{\prime},$ with
$A^{\prime}=(B+\mathbf{eq}^{T})\in\mathcal{CS}_{1}$, in the proof of Theorem
2.4, which is diagonalizable nonnegative with JCF $J(A)=S^{-1}AS$, where
$S\mathbf{e}_{1}=\mathbf{e}$. Then, for an appropriate matrix $E(K)$ as
defined in (2),
$J(A)+E(K)=S^{-1}AS+E(K)=S^{-1}(A+SE(K)S^{-1})S.$ (6)
Then we may reach any JCF allowed by the spectrum of $A.$ In fact,
$A+SE(K)S^{-1}$ has the desired JCF, $J(A)+E(K)$, although it is not
necessarily nonnegative. Since $SE(K)S^{-1}\in\mathcal{CS}_{0}$, for a
convenient nonnegative vector
$\mathbf{r}^{\textsuperscript{T}}=[r_{1},\ldots,r_{n}]$ and $\epsilon>0$ small
enough, the matrix $M=\left(A+\epsilon SE(K)S^{-1}\right)+\mathbf{er}^{\textsuperscript{T}}$ is nonnegative with spectrum
$\\{(n-1)m,\lambda_{2},\ldots,\lambda_{n}\\}$, or with spectrum
$\\{(n-1)m\sum\limits_{i=1}^{n}q_{i},\lambda_{2},\ldots,\lambda_{n}\\},$ with
$\sum\limits_{i=1}^{n}q_{i}\leq 1,$ in the case where $\mu_{j},$
$j=2,\ldots,n,$ are nonreal complex conjugate numbers. Then, since
$(n-1)m\sum\limits_{i=1}^{n}q_{i}\leq(n-1)m,$ $\Lambda$ is UR and
$g_{u}(\Lambda/\lambda_{1})\leq(n-1)\underset{2\leq j\leq
n}{\max}\left|\lambda_{j}\right|.$
The following example illustrates Theorem 2.4 and Corollary 2.1, in particular
the case where $\mu_{j},$ $j=2,\ldots,n,$ are all nonreal complex conjugate
numbers.
###### Example 2.1
Consider the list
$\\{0,1+i\sqrt{15},1-i\sqrt{15},-\sqrt{15}+i,-\sqrt{15}-i\\}.$
Then $n=5,$ $m=\underset{2\leq j\leq 5}{\max}\left|\lambda_{j}\right|=4$ and
$(n-1)m=16.$ In order to have $\sum\limits_{i=1}^{5}q_{i}\leq 1$ we need to
take the initial matrix $B$ as
$B=\frac{1}{16}\left[\begin{array}[]{ccccc}0&0&0&0&0\\\
1+\sqrt{15}&-\sqrt{15}&-1&0&0\\\ -1+\sqrt{15}&1&-\sqrt{15}&0&0\\\
-1+\sqrt{15}&0&0&1&-\sqrt{15}\\\
0&-1&-\sqrt{15}&\sqrt{15}&1\end{array}\right].$
Then, $A^{\prime}=B+\mathbf{eq}^{T}$ with
$\mathbf{q}^{T}=\frac{1}{16}\left[1,\sqrt{15},\sqrt{15},0,\sqrt{15}\right]$
has the spectrum
$\left\\{\sum\limits_{i=1}^{5}q_{i},\frac{1}{16}\left(1+i\sqrt{15}\right),\frac{1}{16}\left(1-i\sqrt{15}\right),\frac{1}{16}\left(-\sqrt{15}+i\right),\frac{1}{16}\left(-\sqrt{15}-i\right)\right\\},$
with $\sum\limits_{i=1}^{5}q_{i}\leq 1$ and
$(n-1)m\sum\limits_{i=1}^{5}q_{i}\leq(n-1)m.$
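The assertions of Example 2.1 can be checked numerically. A minimal sketch (ours): the rows of $B$ sum to zero, $A'=B+\mathbf{eq}^{\textsuperscript{T}}$ is entrywise nonnegative, and $\sum q_{i}\leq 1$.

```python
from math import sqrt, isclose

r = sqrt(15.0)
B = [[x / 16 for x in row] for row in [
    [0, 0, 0, 0, 0],
    [1 + r, -r, -1, 0, 0],
    [-1 + r, 1, -r, 0, 0],
    [-1 + r, 0, 0, 1, -r],
    [0, -1, -r, r, 1],
]]
q = [x / 16 for x in [1, r, r, 0, r]]
Ap = [[B[i][j] + q[j] for j in range(5)] for i in range(5)]   # A' = B + e q^T
print(all(isclose(sum(row), 0, abs_tol=1e-12) for row in B))  # B in CS_0
print(all(a >= -1e-12 for row in Ap for a in row))            # A' nonnegative
print(sum(q))                                                 # about 0.789
```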
Since the list $\Lambda=\\{(n-1),-1,\ldots,-1\\}$ is UR, the upper bound
in (5) is also sharp for $g_{u}(\Lambda/\lambda_{1}).$ Next, we have:
###### Theorem 2.5
Let $\Lambda=\\{\lambda_{1},\ldots,\lambda_{n}\\}$ be a DR list of complex
numbers and let $g_{d}(\Lambda/\lambda_{1})$ be the diagonalizable
realizability index of $\Lambda$. Then for every
$\mu>g_{d}(\Lambda/\lambda_{1}),$
$\Lambda_{\mu}=\\{\mu,\lambda_{2},\ldots,\lambda_{n}\\}$ is UR.
Proof. The result is a straightforward consequence of Minc’s result and
the fact that if the Perron eigenvalue is increased by any $\epsilon>0,$ then
any diagonalizably realizable list becomes a diagonalizably positively
realizable list.
###### Remark 2.1
We have seen that $g_{d}(\Lambda/\lambda_{1})\geq g_{r}(\Lambda/\lambda_{1}).$
Of course, if a realizable list $\Lambda=\\{\lambda_{1},\ldots,\lambda_{n}\\}$
of complex numbers has all its elements distinct, then
$g_{d}(\Lambda/\lambda_{1})=g_{r}(\Lambda/\lambda_{1}).$ In order that
$g_{d}(\Lambda/\lambda_{1})>g_{r}(\Lambda/\lambda_{1}),$ $\Lambda$ must admit
repeats. This is necessary, but not sufficient. For instance, the list
$\Lambda=\\{6,1,1,-4,-4\\}$ is DR (see [21, Lemma $1$]), but
$g_{d}(\Lambda/\lambda_{1})=g_{r}(\Lambda/\lambda_{1})$. It is clear that
$g_{d}(\Lambda/\lambda_{1})>g_{r}(\Lambda/\lambda_{1})$ if and only if
$\Lambda$ is not DR.
###### Example 2.2
The list
$\Lambda=\\{7,5,1,1,-4,-4,-6\\},$
is symmetrically realizable by
$A=\left[\begin{array}[]{ccccccc}0&\frac{3+\sqrt{5}}{2}&\frac{3-\sqrt{5}}{2}&\frac{3-\sqrt{5}}{2}&\frac{3+\sqrt{5}}{2}&\frac{\sqrt{10}}{10}&\frac{\sqrt{10}}{10}\\\
\frac{3+\sqrt{5}}{2}&0&\frac{3+\sqrt{5}}{2}&\frac{3-\sqrt{5}}{2}&\frac{3-\sqrt{5}}{2}&\frac{\sqrt{10}}{10}&\frac{\sqrt{10}}{10}\\\
\frac{3-\sqrt{5}}{2}&\frac{3+\sqrt{5}}{2}&0&\frac{3+\sqrt{5}}{2}&\frac{3-\sqrt{5}}{2}&\frac{\sqrt{10}}{10}&\frac{\sqrt{10}}{10}\\\
\frac{3-\sqrt{5}}{2}&\frac{3-\sqrt{5}}{2}&\frac{3+\sqrt{5}}{2}&0&\frac{3+\sqrt{5}}{2}&\frac{\sqrt{10}}{10}&\frac{\sqrt{10}}{10}\\\
\frac{3+\sqrt{5}}{2}&\frac{3-\sqrt{5}}{2}&\frac{3-\sqrt{5}}{2}&\frac{3+\sqrt{5}}{2}&0&\frac{\sqrt{10}}{10}&\frac{\sqrt{10}}{10}\\\
\frac{\sqrt{10}}{10}&\frac{\sqrt{10}}{10}&\frac{\sqrt{10}}{10}&\frac{\sqrt{10}}{10}&\frac{\sqrt{10}}{10}&0&6\\\
\frac{\sqrt{10}}{10}&\frac{\sqrt{10}}{10}&\frac{\sqrt{10}}{10}&\frac{\sqrt{10}}{10}&\frac{\sqrt{10}}{10}&6&0\end{array}\right].$
Then, since $A$ is diagonalizable ODP, $\Lambda$ is UR and
$g_{r}(\Lambda/\lambda_{1})=g_{d}(\Lambda/\lambda_{1})=g_{u}(\Lambda/\lambda_{1})=7.$
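A quick consistency check (ours, transcribing the matrix $A$ above): the spectrum $\\{7,5,1,1,-4,-4,-6\\}$ has power sums $s_{1}=0$ and $s_{2}=144$, and since $A$ is symmetric, $\operatorname{tr}(A^{2})$ equals the sum of the squares of its entries.

```python
from math import sqrt

a, b = (3 + sqrt(5)) / 2, (3 - sqrt(5)) / 2
c = sqrt(10) / 10
A = [
    [0, a, b, b, a, c, c],
    [a, 0, a, b, b, c, c],
    [b, a, 0, a, b, c, c],
    [b, b, a, 0, a, c, c],
    [a, b, b, a, 0, c, c],
    [c, c, c, c, c, 0, 6],
    [c, c, c, c, c, 6, 0],
]
spec = [7, 5, 1, 1, -4, -4, -6]
t1 = sum(A[i][i] for i in range(7))        # tr(A)   should equal s1 = 0
t2 = sum(x * x for row in A for x in row)  # tr(A^2) should equal s2 = 144
print(t1, round(t2, 9), sum(spec), sum(x * x for x in spec))
```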
A realizable list $\Lambda=\\{\lambda_{1},\ldots,\lambda_{n}\\}$ of complex
numbers is said to be Perron extreme if the list
$\Lambda_{\epsilon}=\\{\lambda_{1}-\epsilon,\lambda_{2},\ldots,\lambda_{n}\\}$
is not realizable for any $\epsilon>0.$ If $\Lambda$ is not Perron extreme,
there is an $\epsilon>0$ such that $\Lambda_{\epsilon}$ is realizable. Then we
have the following result, which is equivalent to Theorem 2.5.
###### Corollary 2.2
Let $\Lambda=\\{\lambda_{1},\ldots,\lambda_{n}\\}$ be a non-Perron-extreme
list of complex numbers such that
$\Lambda_{\epsilon}=\\{\lambda_{1}-\epsilon,\lambda_{2},\ldots,\lambda_{n}\\},$
$\epsilon>0,$ is DR. Then $\Lambda$ is UR.
Proof. Let $B\in\mathcal{CS}_{\lambda_{1}-\epsilon}$ be a diagonalizable
nonnegative matrix with spectrum $\Lambda_{\epsilon}.$ Then
$A=B+\mathbf{eq}^{\textsuperscript{T}},\text{ with
}\mathbf{q}^{\textsuperscript{T}}=\left[\frac{\epsilon}{n},\ldots,\frac{\epsilon}{n}\right]$
is positive with spectrum $\Lambda.$ Moreover, from Theorem 2.2, $A$ is
diagonalizable. Hence, $\Lambda$ is UR.
###### Example 2.3
The list $\Lambda=\\{8,2,2,-3,-4,-4\\}$ is not Perron extreme. Since
$\Lambda^{\prime}=\\{7,2,2,-3,-4,-4\\}$ is DR by
$B=\left[\begin{array}[]{cccccc}0&4&0&2&0&1\\\ 4&0&0&2&0&1\\\ 0&2&0&4&0&1\\\
0&2&4&0&0&1\\\ 0&2&0&2&0&3\\\ 0&2&0&2&3&0\end{array}\right],$
then $A=B+\mathbf{eq}^{\textsuperscript{T}},$ where
$\mathbf{q}^{\textsuperscript{T}}=\left[\begin{array}[]{cccccc}\frac{1}{6}&\frac{1}{6}&\frac{1}{6}&\frac{1}{6}&\frac{1}{6}&\frac{1}{6}\end{array}\right]$
is diagonalizable positive with spectrum $\Lambda.$ Therefore $\Lambda$ is UR.
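A minimal consistency check (ours) of Example 2.3: $B$ has constant row sums $7$ (the Perron eigenvalue), its first two power-sum traces match those of $\\{7,2,2,-3,-4,-4\\}$, and $B+\mathbf{eq}^{\textsuperscript{T}}$ is entrywise positive.

```python
from fractions import Fraction

B = [
    [0, 4, 0, 2, 0, 1],
    [4, 0, 0, 2, 0, 1],
    [0, 2, 0, 4, 0, 1],
    [0, 2, 4, 0, 0, 1],
    [0, 2, 0, 2, 0, 3],
    [0, 2, 0, 2, 3, 0],
]
spec = [7, 2, 2, -3, -4, -4]
print([sum(row) for row in B])                     # constant row sums: all 7
t2 = sum(B[i][j] * B[j][i] for i in range(6) for j in range(6))  # tr(B^2)
print(t2, sum(x * x for x in spec))                # both equal 98
A = [[B[i][j] + Fraction(1, 6) for j in range(6)] for i in range(6)]
print(all(x > 0 for row in A for x in row))        # A = B + e q^T is positive
```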
###### Remark 2.2
We know that DR does not necessarily imply UR. We also know of two important
extensions [2, 7] of Minc’s result [15]. As far as we know, these extensions
are the most general universal realizability criteria for the URP.
Then, for $\Lambda=\\{\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\\}$
diagonalizably realizable we may say that DR implies UR if:
$i)$ The realizing matrix for $\Lambda$ is ODP (positive matrices are ODP), or
$ii)$ $\Lambda$ is the spectrum of a nonnegative matrix
$A\in\mathcal{CS}_{\lambda_{1}}$ with a positive row or column.
There are spectra, however, which are UR without necessarily having a
realizing matrix in any of the above classes. For instance:
$iii)$
$\Lambda=\\{\lambda_{1},\lambda_{2},\ldots,\lambda_{\frac{n}{2}},-\lambda_{\frac{n}{2}},\ldots,-\lambda_{2},-\lambda_{1}\\}.$
It is clear that $\Lambda$ has a diagonalizable realization, and if it has
repeated $2\times 2$ blocks, we may obtain any coarser JCF.
$iv)$ $\Lambda=\\{\lambda_{1},\lambda_{2},-\lambda_{3},\ldots,-\lambda_{n}\\}$
with $\lambda_{j}>0,$ $j=1,2,\ldots,n,$ $\lambda_{1}>\lambda_{2},$ and
$\lambda_{1}+\lambda_{2}-\sum\limits_{j=3}^{n}\lambda_{j}=0.$ It was proved in
[4] that $\Lambda$ is UR.
## 3 The merge of two spectra
In this section we define the merge of two spectra in the following way:
###### Definition 3.1
Let $\Gamma_{1}=\\{\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\\}$ and
$\Gamma_{2}=\\{\mu_{1},\mu_{2},\ldots,\mu_{m}\\}$ be lists of complex numbers.
The merge of $\Gamma_{1}$ with $\Gamma_{2}$ is
$\Gamma=\\{\lambda_{1}+\mu_{1},\lambda_{2},\ldots,\lambda_{n},\mu_{2},\ldots,\mu_{m}\\}.$
Then, we have:
###### Theorem 3.1
Suppose that $A$ and $B$ are $n$-by-$n$ and $m$-by-$m$ diagonalizable ODP
matrices with spectra $\Gamma_{1}=\\{\lambda_{1},\ldots,\lambda_{n}\\}$ and
$\Gamma_{2}=\\{\mu_{1},\ldots,\mu_{m}\\},$ respectively. Then, there is an
$(n+m-1)$-by-$(n+m-1)$ diagonalizable ODP matrix $C$ with spectrum
$\Gamma=\\{\lambda_{1}+\mu_{1},\lambda_{2},\ldots,\lambda_{n},\mu_{2},\ldots,\mu_{m}\\}.$
Hence, $\Gamma$ is UR.
Proof. Since $A$ is diagonalizable ODP with spectrum $\Gamma_{1},$ it is
irreducible. Then, there is a positive vector
$\mathbf{v}^{\textsuperscript{T}}=[v_{1},\ldots,v_{n}]$ such that
$A\mathbf{v}=\lambda_{1}\mathbf{v}$. Thus, if
$D=\mathop{\text{diag}}\\{v_{1},\ldots,v_{n}\\},$ then
$\tilde{A}=D^{-1}AD\in\mathcal{CS}_{\lambda_{1}}$ is diagonalizable ODP. If
$d_{1},\ldots,d_{n}$ are the diagonal entries of $\tilde{A}$, then from
Theorem 2.1,
$A_{1}=\tilde{A}+\mathbf{e}[0,0,\ldots,\mu_{1}]=\begin{bmatrix}A_{11}&\mathbf{a}\\\
\mathbf{b^{\textsuperscript{T}}}&d_{n}+\mu_{1}\end{bmatrix}$ has spectrum
$\\{\lambda_{1}+\mu_{1},\lambda_{2},\ldots,\lambda_{n}\\}$ and diagonal
entries $d_{1},d_{2},\ldots,d_{n}+\mu_{1}$. It is clear, from Theorem 2.2,
that $A_{1}$ is a diagonalizable ODP matrix.
Since $B$ is diagonalizable ODP with spectrum $\Gamma_{2}$ then, as before,
there is a diagonalizable ODP matrix $\tilde{B}\in\mathcal{CS}_{\mu_{1}}$ with
spectrum $\Gamma_{2}$. Then, the matrix
$B_{1}=\tilde{B}+\mathbf{e}[d_{n},0,\ldots,0]$ is diagonalizable ODP with
spectrum $\\{\mu_{1}+d_{n},\mu_{2},\ldots,\mu_{m}\\}$. Finally, by applying
the Šmigoc’s glue technique, Theorem 2.3, there is a matrix
$C=\begin{bmatrix}A_{11}&\mathbf{at^{\textsuperscript{T}}}\\\
\mathbf{sb^{\textsuperscript{T}}}&B_{1}\end{bmatrix}$
with $A_{11}$ the $(n-1)$-by-$(n-1)$ leading principal submatrix of $A_{1},$
and with $\mathbf{s}$ and $\mathbf{t}$ the right and left Perron eigenvectors
of $B_{1},$ respectively, normalized so that
$\mathbf{t^{\textsuperscript{T}}s}=1$. Now, $C$ has the
spectrum
$\\{\lambda_{1}+\mu_{1},\lambda_{2},\ldots,\lambda_{n},\mu_{2},\ldots,\mu_{m}\\},$
with Jordan canonical form $J(C)=J(A_{1})\oplus\hat{J}(B_{1}),$ where
$\hat{J}(B_{1})$ is $J(B_{1})$ with its Perron block removed, i.e.
$J(B_{1})=\begin{bmatrix}\mu_{1}+d_{n}&\\\ &\hat{J}(B_{1})\end{bmatrix}$. Since
$\mathbf{a},\mathbf{b},\mathbf{s}$ and $\mathbf{t}$ are positive vectors, it
is clear that $C$ is a diagonalizable ODP matrix and therefore $\Gamma$ is UR.
In many cases, Theorem 3.1 may be a useful tool to decide about the universal
realizability of a list of complex numbers, as the following example shows.
###### Example 3.1
Is
$\Lambda=\\{13,1,1,-3,-4,-4,1+3i,1-3i\\}$
universally realizable? To answer the question, consider the lists
$\Lambda_{1}=\\{7,-3,1+3i,1-3i\\}\qquad\text{and}\qquad\Lambda_{2}=\\{6,1,1,-4,-4\\}.$
From [8], $\Lambda_{1}$ has the normal nonnegative realization
$A_{1}=\left[\begin{array}[]{cccc}3&2-\sqrt{3}&\frac{\sqrt{6}+2\sqrt{2}}{2}&\frac{\sqrt{6}+2\sqrt{2}}{2}\\\
2+\sqrt{3}&3&\frac{2\sqrt{2}-\sqrt{6}}{2}&\frac{2\sqrt{2}-\sqrt{6}}{2}\\\
\frac{2\sqrt{2}-\sqrt{6}}{2}&\frac{\sqrt{6}+2\sqrt{2}}{2}&0&3\\\
\frac{2\sqrt{2}-\sqrt{6}}{2}&\frac{\sqrt{6}+2\sqrt{2}}{2}&3&0\end{array}\right],$
which is diagonalizable ODP. From [21, Lemma $1$], $\Lambda_{2}$ has the
symmetric ODP realization
$A_{2}=\left[\begin{array}[]{ccccc}0&\frac{3+\sqrt{5}}{2}&\frac{3-\sqrt{5}}{2}&\frac{3-\sqrt{5}}{2}&\frac{3+\sqrt{5}}{2}\\\
\frac{3+\sqrt{5}}{2}&0&\frac{3+\sqrt{5}}{2}&\frac{3-\sqrt{5}}{2}&\frac{3-\sqrt{5}}{2}\\\
\frac{3-\sqrt{5}}{2}&\frac{3+\sqrt{5}}{2}&0&\frac{3+\sqrt{5}}{2}&\frac{3-\sqrt{5}}{2}\\\
\frac{3-\sqrt{5}}{2}&\frac{3-\sqrt{5}}{2}&\frac{3+\sqrt{5}}{2}&0&\frac{3+\sqrt{5}}{2}\\\
\frac{3+\sqrt{5}}{2}&\frac{3-\sqrt{5}}{2}&\frac{3-\sqrt{5}}{2}&\frac{3+\sqrt{5}}{2}&0\end{array}\right].$
Then, from Theorem 3.1, the merge of $\Lambda_{1}$ with $\Lambda_{2},$ that is,
$\Lambda=\\{13,1,1,-3,-4,-4,1+3i,1-3i\\}$
has a diagonalizable ODP realization. Hence, $\Lambda$ is UR.
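The construction in the proof of Theorem 3.1 can be traced through explicitly for this example. The following numpy sketch (our own illustration, not part of the paper) scales $A_{1}$ into $\mathcal{CS}_{7}$, applies the Brauer shift with $\mathbf{q}^{\textsuperscript{T}}=[0,0,0,6]$, and glues the result to $A_{2}$; since $A_{2}$ is symmetric with constant row sums $6$ and the relevant diagonal entry of $A_{1}$ is $0$, one may take $\mathbf{s}=\mathbf{e}$ and $\mathbf{t}=\mathbf{e}/5$:

```python
import numpy as np

s3, s6, s2, s5 = np.sqrt([3.0, 6.0, 2.0, 5.0])

# A1: normal ODP realization of Lambda_1 = {7, -3, 1+3i, 1-3i} (from [8])
pp, mm = (s6 + 2 * s2) / 2, (2 * s2 - s6) / 2
A1 = np.array([[3, 2 - s3, pp, pp],
               [2 + s3, 3, mm, mm],
               [mm, pp, 0, 3],
               [mm, pp, 3, 0]])

# A2: symmetric ODP realization of Lambda_2 = {6, 1, 1, -4, -4} (from [21])
a, b = (3 + s5) / 2, (3 - s5) / 2
A2 = np.array([[0, a, b, b, a],
               [a, 0, a, b, b],
               [b, a, 0, a, b],
               [b, b, a, 0, a],
               [a, b, b, a, 0]])

# Step 1: similarity by the Perron eigenvector puts A1 into CS_7
# (constant row sums 7) without changing its diagonal.
w, V = np.linalg.eig(A1)
v = np.abs(V[:, np.argmax(w.real)].real)   # positive Perron eigenvector
At = np.diag(1 / v) @ A1 @ np.diag(v)

# Step 2: Brauer shift A1' = At + e q^T with q^T = [0, 0, 0, 6]:
# the Perron root moves 7 -> 7 + 6 = 13, the rest of the spectrum stays.
A1p = At + np.outer(np.ones(4), [0.0, 0.0, 0.0, 6.0])

# Step 3: glue. The (4,4) entry of A1' is 0 + 6 = 6, which equals the
# Perron root of B1 = A2, so Smigoc's glue applies with s = e, t = e/5.
a_col, b_row = A1p[:3, 3], A1p[3, :3]
C = np.block([[A1p[:3, :3], np.outer(a_col, np.ones(5) / 5)],
              [np.outer(np.ones(5), b_row), A2]])

# C is diagonalizable ODP with spectrum {13, 1, 1, -3, -4, -4, 1+3i, 1-3i}
key = lambda z: (round(z.real, 6), round(z.imag, 6))
print(sorted(np.linalg.eigvals(C), key=key))
```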
###### Remark 3.1
It is known that lists of Suleĭmanova type, real or complex, are UR. Now, this
result can also be proved via Theorem 3.1, in the case of real Suleĭmanova
spectra, and of complex Suleĭmanova spectra with
$\left|\mbox{Re}\lambda_{i}\right|>\left|\mbox{Im}\lambda_{i}\right|.$ In
fact, in both cases, we may obtain diagonalizable ODP realizations.
Declaration of Competing Interest
There are no competing interests.
## References
* [1] A. Brauer, Limits for the characteristic roots of a matrix IV. Applications to stochastic matrices, Duke Math. J. 19 (1952) 75-91.
* [2] M. Collao, M. Salas, R. L. Soto, Spectra universally realizable by doubly stochastic matrices, Spec. Matrices 6 (2018) 301-309.
* [3] R. C. Díaz, R. L. Soto, Nonnegative inverse elementary divisors problem in the left half plane, Linear and Multilinear Algebra 64 (2016) 258-268.
* [4] M. Collao, C. R. Johnson, R. L. Soto, Universal realizability of spectra with two positive eigenvalues, Linear Algebra Appl. 545 (2018) 226-239.
* [5] W. Guo, Eigenvalues of nonnegative matrices, Linear Algebra Appl. 266 (1997) 261-270.
* [6] C. R. Johnson, Row stochastic matrices similar to doubly stochastic matrices, Linear and Multilinear Algebra 10 (1981) 113-130.
* [7] C. R. Johnson, A. I. Julio, R. L. Soto, Nonnegative realizability with Jordan structure, Linear Algebra Appl. 587 (2020) 302-313.
* [8] A. I. Julio, C. B. Manzaneda, R. L. Soto, Normal nonnegative realization of spectra, Linear and Multilinear Algebra 63 (2015) 1204-1215.
* [9] A. I. Julio, C. Marijuán, M. Pisonero, R. L. Soto, On universal realizability of spectra, Linear Algebra Appl. 563 (2019) 353-372.
* [10] A. I. Julio, C. Marijuán, M. Pisonero, R. L. Soto, Universal realizability in low dimension, Linear Algebra Appl. 619 (2021) 107-136.
* [11] A. I. Julio, R. L. Soto, The role of certain Brauer and Rado results in the nonnegative inverse spectral problems, Electronic J. Linear Algebra 36 (2020) 484-502.
* [12] T. J. Laffey, H. Šmigoc, Nonnegative realization of spectra having negative real parts, Linear Algebra Appl. 416 (2006) 148-159.
* [13] R. Loewy, D. London, A note on the inverse problem for nonnegative matrices, Linear and Multilinear Algebra. 6 (1978) 83-90.
* [14] M. E. Meehan, Some results on matrix spectra, Ph.D. thesis, National University of Ireland, Dublin, (1998).
* [15] H. Minc, Inverse elementary divisor problem for nonnegative matrices, Proc. of the Amer. Math. Society 83 (4) (1981) 665-669.
* [16] H. Minc, Inverse elementary divisor problem for doubly stochastic matrices, Linear and Multilinear Algebra. 11 (1982) 121-131.
* [17] H. Perfect, Methods of constructing certain stochastic matrices, Duke Math. J. 20 (1953) 395-404.
* [18] H. Perfect, Methods of constructing certain stochastic matrices II, Duke Math. J. 22 (1955) 305-311.
* [19] O. Rojo, R. L. Soto, Existence and construction of nonnegative matrices with complex spectrum, Linear Algebra Appl. 368 (2003) 53-69.
* [20] H. Šmigoc, The inverse eigenvalue problem for nonnegative matrices, Linear Algebra Appl. 393 (2004) 365-374.
* [21] R. L. Soto, O. Rojo, Applications of a Brauer Theorem in the nonnegative inverse eigenvalue problem, Linear Algebra Appl. 416 (2006) 844-856.
* [22] R. L. Soto, J. Ccapa, Nonnegative matrices with prescribed elementary divisors, Electronic Journal of Linear Algebra 17 (2008) 287-303.
* [23] R. L. Soto, R. C. Díaz, H. Nina, M. Salas, Nonnegative matrices with prescribed spectrum and elementary divisors, Linear Algebra Appl. 439 (2013) 3591-3604.
* [24] H. R. Suleimanova, Stochastic matrices with real characteristic values, Dokl. Akad. Nauk SSSR. 66 (1949) 343-345.
* [25] J. Torre-Mayo, M. R. Abril-Raymundo, E. Alarcía-Estévez, C. Marijuán, M. Pisonero, The nonnegative inverse eigenvalue problem from the coefficients of the characteristic polynomial. EBL digraphs, Linear Algebra Appl. 426 (2007) 729-773.
School of Natural Sciences, Institute for Advanced Study,
1 Einstein Drive, Princeton, NJ 08540, USA
# The Disk Partition Function in String Theory
Lorenz Eberhardt and Sridip Pal
###### Abstract
We investigate the disk partition function for the open string. This is a
subtle problem because of the presence of a residual gauge group
$\mathrm{PSL}(2,\mathbb{R})$ on the worldsheet even after fixing the conformal
gauge. It naively has infinite volume and leads to a vanishing answer. We use
different methods that all demonstrate that $\mathrm{PSL}(2,\mathbb{R})$
effectively behaves like a group with finite negative volume in the path
integral, which leads to a simple prescription for the computation of the disk
partition function. We apply our findings to give a simple rederivation of the
D-brane tensions.
## 1 Introduction
In string perturbation theory, much effort has historically been devoted to
understanding higher-point and higher-genus correlation functions. For a broad
overview, see e.g. DHoker:1988pdl ; Witten:2012bh . Despite a good
understanding of the integrands of string perturbation theory, performing the
actual integrals has remained a challenging task.
On the other end of the spectrum, there are some exceptional correlators at
genus 0 that require special attention. The reason for this is a residual
gauge group after imposing conformal gauge which is present due to conformal
Killing vectors. For the sphere, there are three complex conformal Killing
vectors corresponding to the group of Möbius transformations. Since the volume
of this group is infinite, one naively concludes that zero-point, one-point
and two-point functions vanish at tree-level in string theory. The same goes
for the open string, where the group of residual Möbius transformations is
$\mathrm{PSL}(2,\mathbb{R})$. This conclusion is however premature, since the
infinities of the residual gauge groups can potentially be compensated by
other infinities in the worldsheet path integral. It is a subtle problem to
compute the actual value of these quantities and only a partial understanding
exists, see Tseytlin:1987ww ; Liu:1987nz ; Tseytlin:1988tv ; Erbin:2019uiz .
Various such quantities were also successfully computed for strings on
$\text{AdS}_{3}$ Maldacena:2001km ; Troost:2011ud .
All these quantities have a physical meaning on which we would like to
comment. Zero-point functions represent the on-shell value of the action of
the effective spacetime theory, which is (super)gravity in the case of the
closed string and the D-brane worldvolume gauge theory in the case of the open
string. These quantities are generically non-vanishing and especially in the
case of the gravity on-shell action somewhat subtle to define. To get a finite
answer one has to introduce local counterterms on an asymptotic cutoff
surface. The first of these is the Gibbons-Hawking-York boundary term
Gibbons:1976ue . Introducing a cutoff in spacetime would be inconsistent with
Weyl symmetry in string theory and it is unclear in general how to implement
it in string theory. We consider this a very important open problem in
understanding the emergence of gravity from string theory.
One-point functions for the closed string represent tadpole diagrams in
spacetime. Most of these tadpole diagrams vanish due to the spacetime
equations of motion. There are however interesting non-vanishing one-point
functions in string theory such as the dilaton one-point function or the
example considered in Troost:2011ud .
Two-point functions represent the tree-level propagators of the spacetime
theory. It was explained in Erbin:2019uiz that these two-point functions are
actually non-zero because the momentum conserving $\delta$-function
$\delta^{D}(k_{1}-k_{2})$ in spacetime is divergent thanks to the mass-shell
condition that implies the conservation of the last component of the momenta
provided that the other components are conserved. The correct expression in
flat space is instead
$2k^{0}(2\pi)^{D-1}\delta^{D-1}(\vec{k}^{\prime}-\vec{k})$.
In this paper, we give a reasonably complete understanding of the disk
partition function, i.e. the open string zero-point function. The disk
partition function computes interesting quantities directly in string theory
such as D-brane tensions. Historically they are often computed in a roundabout
way by imposing various consistency conditions for the exchange of closed
strings between two parallel D-branes. The challenge in this computation is
the presence of the residual gauge group $\mathrm{PSL}(2,\mathbb{R})$. Since
this group is non-compact, it naively has infinite volume. However, it was
proposed in Liu:1987nz that it essentially behaves as a group with finite
_negative_ volume in any computation so that the string disk partition
function $Z_{\text{disk}}$ is simply related to worldsheet disk partition
function $Z_{\text{CFT}}$ by
$Z_{\text{disk}}=\frac{Z_{\text{CFT}}}{\mathop{\text{vol}}(\mathrm{PSL}(2,\mathbb{R}))}\
.$ (1)
This volume can be defined by a procedure akin to defining the gravitational
on-shell action. In the normalization where the Ricci scalar of the group
with respect to the biinvariant metric is $\mathcal{R}=-6$, this
volume works out to be $-\frac{\pi^{2}}{2}$. It is however very mysterious (at
least to the authors) why this procedure should give the correct result.
We are thus motivated to reconsider the problem. We give in this paper three
rigorous (for physicists’ standards) ways to compute the disk partition
function from first principles. Each of the methods reproduce this value for
the effective volume. The first two methods are based on fixing a further
gauge beyond the conformal gauge. Since the metric is already completely
fixed, the further gauge fixing will invariably involve the matter fields on
the worldsheet. For this reason we assume that the spacetime theory on which
the string is propagating involves at least one flat direction, i.e. is for
example time-independent. Backgrounds such as
$\mathrm{AdS}_{3}\times\mathrm{S}^{3}\times\mathbb{T}^{4}$ also work, since
the torus directions are flat. We think however that our method can be
generalized to other backgrounds as well. We explore two different gauge
fixing conditions in terms of the free boson $X$ describing the flat target
space direction. Both of them are slightly subtle and we discuss them in
detail. One can gauge fix the worldsheet path integral further and compute the
effective volume of the gauge group directly in this way. In the third method,
we compute the disk partition function by relating it to a one-point function
on the disk which can be computed without problems. This is done by assuming
that the flat direction is compact. This introduces a modulus in the problem
and the derivative of the disk partition function with respect to the modulus
is by conformal perturbation theory given by a one-point function. We again
recover the effective volume of $\mathrm{PSL}(2,\mathbb{R})$.
We finally apply this technique of computing disk partition functions to a
short rederivation of D-brane tensions Polchinski:1995mt . Since all relevant
issues already arise for the bosonic string, we restrict to it for technical
simplicity. We mention some open problems in Section 6.
## 2 Gauge fixing $\boldsymbol{X_{\ell,m}=0}$
We fix conformal gauge on the disk. In this section, it is convenient to use
the upper hemisphere metric on the disk:
$\hat{g}=\frac{4\,\mathrm{d}z\,\mathrm{d}\bar{z}}{(1+|z|^{2})^{2}}\
,\qquad|z|\leq 1\ .$ (2)
Any physical result will of course be independent of this choice because the
full worldsheet theory is Weyl-invariant. This form of the metric is
convenient, because there is a standard orthonormal basis for the space of
$L^{2}$-functions given by the spherical harmonics. We can consider two
function spaces given by $L^{2}_{\text{D}}(D)$ and $L^{2}_{\text{N}}(D)$,
where $D$ denotes here and in the following the disk. The former consists of
all square-integrable functions $f$ on the unit disk satisfying Dirichlet
boundary conditions $f(|z|=1)=0$,111We could generalize this to
$f(|z|=1)=x_{0}$ for some constant $x_{0}$, but this constant could be removed
by a spacetime translation. while the latter consist of all square-integrable
functions satisfying Neumann boundary conditions $\partial_{n}f(|z|=1)=0$,
where $\partial_{n}$ is the normal (radial) derivative.
Spherical harmonics are given by $Y_{\ell,m}$, $\ell=0$, $1$, $2$, $\dots$ and
$m=-\ell$, $-\ell+1$, $\dots$, $\ell$. They satisfy Neumann (Dirichlet)
boundary conditions for $\ell+m\in 2\mathbb{Z}$ ($\ell+m\in 2\mathbb{Z}+1$).
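This parity pattern follows from the reflection symmetry of the associated Legendre functions, $P_{\ell}^{m}(-x)=(-1)^{\ell+m}P_{\ell}^{m}(x)$, and can be checked numerically. A sketch (our own illustration) using scipy's `lpmv`: the boundary of the hemisphere is the equator $\cos\phi=0$, and since $|Y_{\ell,-m}|=|Y_{\ell,m}|$ it suffices to test $m\geq 0$:

```python
import numpy as np
from scipy.special import lpmv

# The polar dependence of Y_{l,m} is the associated Legendre function
# P_l^m(x) with x = cos(phi); the hemisphere boundary is the equator x = 0.
# Parity P_l^m(-x) = (-1)^(l+m) P_l^m(x) forces P_l^m(0) = 0 for l+m odd
# and (d/dx) P_l^m(0) = 0 for l+m even.
h = 1e-6
for l in range(5):
    for m in range(l + 1):  # |Y_{l,-m}| = |Y_{l,m}|, so m >= 0 suffices
        val = lpmv(m, l, 0.0)
        dval = (lpmv(m, l, h) - lpmv(m, l, -h)) / (2 * h)  # ~ normal derivative
        if (l + m) % 2 == 1:
            assert abs(val) < 1e-10   # Dirichlet modes vanish on the boundary
        else:
            assert abs(dval) < 1e-4   # Neumann modes: normal derivative vanishes
print("boundary parity pattern confirmed for l <= 4")
```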
As we mentioned in the Introduction, we assume that there is one flat
direction in spacetime which is described by the worldsheet boson $X$. In the
following we will concentrate our attention on this boson. We can expand it
into spherical harmonics
$X=\sum_{\ell,m}X_{\ell,m}Y_{\ell,m}$ (3)
with $X_{\ell,m}=0$ for $\ell+m\in 2\mathbb{Z}+1$ and Neumann boundary
conditions or $\ell+m\in 2\mathbb{Z}$ and Dirichlet boundary conditions.
Moreover, reality of $X$ imposes $X_{\ell,m}=\overline{X_{\ell,-m}}$.
Even after fixing the conformal gauge, there is a remaining gauge freedom that
is not fully fixed. This is given by the group of conformal transformations,
which acts as
$X(z)\longmapsto X\circ\gamma^{-1}(z)$ (4)
on the free boson $X$ and fixes $g$. The latter is achieved by combining the
diffeomorphism $\gamma$ with an appropriate Weyl transformation. The (global)
conformal group on the disk is
$\mathrm{PSU}(1,1)\cong\mathrm{PSL}(2,\mathbb{R})$ and acts by fractional
linear transformations. ($\mathrm{PSL}(2,\mathbb{R})$ naturally acts on the
upper half plane, whereas $\mathrm{PSU}(1,1)$ naturally acts on the unit disk.
The two groups are isomorphic via the Cayley transform. We mostly use the name
$\mathrm{PSL}(2,\mathbb{R})$.) Thus we have a path integral schematically of
the following form
$Z_{\text{disk}}=\int\frac{\mathscr{D}X}{\mathop{\text{vol}}(\mathrm{PSL}(2,\mathbb{R}))}\
\mathrm{e}^{-S[X]}\ .$ (5)
The path integral runs over the appropriate space of functions (either
$L^{2}_{\text{N}}(D)$ or $L^{2}_{\text{D}}(D)$). We remark that we have
suppressed the presence of the ghosts and the other bosons in the path
integral. Only with their presence the conformal anomaly cancels and it makes
sense to gauge $\mathrm{PSL}(2,\mathbb{R})$.
Liu and Polchinski Liu:1987nz provided a prescription to calculate the
“regularized” finite volume of the group $\mathrm{PSL}(2,\mathbb{R})$,
which we review in Appendix B. Using that, one can obtain
$Z_{\text{disk}}=-\frac{2}{\pi^{2}}\int\mathscr{D}X\ \mathrm{e}^{-S[X]}\ .$
(6)
Here one tacitly assumes a particular normalization of the ghost zero modes.
This issue is also discussed in Appendix B. We denote the CFT path integral
that appears on the RHS by $Z_{\text{CFT}}$,
$Z_{\text{CFT}}\equiv\int\mathscr{D}X\ \mathrm{e}^{-S[X]}\ .$ (7)
We emphasize that the calculation of $Z_{\text{CFT}}$ does not gauge the
global conformal group $\mathrm{PSL}(2,\mathbb{R})$.
In what follows, we are going to show that
$\frac{Z_{\text{disk}}}{Z_{\text{CFT}}}=-\frac{2}{\pi^{2}}$ (8)
using standard QFT techniques, rather than calculating the regularized volume
of $\mathrm{PSL}(2,\mathbb{R})$. Thus we want to also gauge-fix the global
conformal group $\mathrm{PSL}(2,\mathbb{R})$. We achieve this by a slightly
modified Faddeev-Popov procedure.
### 2.1 Gauge choice and admissibility
The group of Möbius transformations preserving the unit disk is
$\mathrm{PSU}(1,1)=\left\\{\begin{pmatrix}a&b\\\
\bar{b}&\bar{a}\end{pmatrix}\,\Big{|}\,|a|^{2}-|b|^{2}=1\right\\}\Big{/}\sim\
.$ (9)
Here, the equivalence $\sim$ identifies a matrix with its negative.
Only the $\mathrm{U}(1)$ subgroup specified by $b=0$ acts by isometries on the
metric. This realization of $\mathrm{PSU}(1,1)$ leads to a natural
normalization of the biinvariant metric that is induced from ambient
$\mathbb{C}^{2}\cong\mathbb{R}^{4}$. This is the normalization which we shall
use in the following. The explicit measure is given in Appendix B.
We would like to impose the gauge
$X_{\ell,\pm m}=0$ (10)
for some choice of $(\ell,m)$ in the expansion eq. (3). Note that due to the
reality condition $\overline{X_{\ell,m}}=X_{\ell,-m}$, this is one complex or
two real conditions. This fixes all non-compact directions of
$\mathrm{PSL}(2,\mathbb{R})\cong\mathrm{PSU}(1,1)$ and only leaves the Cartan
subgroup $\mathrm{U}(1)$ unbroken. Since its volume is finite it is easy to
take this into account. For concreteness, let us consider the following two
gauge fixing conditions:
$\text{Dirichlet:}\ X_{2,\pm 1}=0\ ,\qquad\text{Neumann:}\ X_{1,\pm 1}=0\ .$
(11)
In what follows we will be proving the admissibility of the gauge choice. The
argument for $m\not\in\\{-1,1\\}$ is analogous and will lead to the same final
result.
#### Admissibility of gauge choice.
Since the Cartan subgroup $\mathrm{U}(1)\subset\mathrm{PSU}(1,1)$ remains
unbroken, it is convenient to consider the coset
$\mathrm{PSU}(1,1)/\mathrm{U}(1)\cong D$, which can also be identified with
the unit disk. We stress that this unit disk is not the worldsheet! It comes
equipped with a hyperbolic metric that descends from $\mathrm{PSU}(1,1)$,
which takes the form for $\alpha\in D$
$g=\frac{\pi\,\mathrm{d}\alpha\,\mathrm{d}\bar{\alpha}}{(1-|\alpha|^{2})^{2}}\
.$ (12)
The normalization is induced from the Haar measure on $\mathrm{PSU}(1,1)$. An
explicit representative of $\alpha$ in $\mathrm{PSU}(1,1)$ is given by
$\gamma_{\alpha}=\frac{1}{\sqrt{1-|\alpha|^{2}}}\begin{pmatrix}1&\alpha\\\
\bar{\alpha}&1\end{pmatrix}\ .$ (13)
This Möbius transformation has the property that $\gamma_{\alpha}(0)=\alpha$.
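Both properties are immediate to verify numerically. A small sketch (our own illustration) using the fractional linear action $z\mapsto(z+\alpha)/(\bar{\alpha}z+1)$ induced by the matrix representative; the prefactor $1/\sqrt{1-|\alpha|^{2}}$ drops out of the Möbius action:

```python
import numpy as np

def gamma(alpha, z):
    # Mobius action of gamma_alpha; the prefactor 1/sqrt(1-|alpha|^2)
    # cancels between numerator and denominator.
    return (z + alpha) / (np.conj(alpha) * z + 1)

alpha = 0.3 + 0.4j                               # a point of the unit disk
assert abs(gamma(alpha, 0) - alpha) < 1e-12      # gamma_alpha(0) = alpha

# gamma_alpha maps the unit circle to itself, hence the disk to itself
for z in np.exp(1j * np.linspace(0, 2 * np.pi, 7, endpoint=False)):
    assert abs(abs(gamma(alpha, z)) - 1) < 1e-12
print("ok")
```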
To be explicit, the gauge conditions in eq. (11) read respectively
Dirichlet
$\displaystyle:\qquad\int_{D}\frac{4\,\mathrm{d}^{2}z}{(1+|z|^{2})^{2}}X\circ\gamma_{\alpha}^{-1}(z,\bar{z})Y_{2,1}(\bar{z},z)=0\
,$ (14a) Neumann
$\displaystyle:\qquad\int_{D}\frac{4\,\mathrm{d}^{2}z}{(1+|z|^{2})^{2}}X\circ\gamma_{\alpha}^{-1}(z,\bar{z})Y_{1,1}(\bar{z},z)=0\
.$ (14b)
Here we used orthonormality of the spherical harmonics on the disc, see
Appendix A.2. We should also clarify that by $\mathrm{d}^{2}z$ we mean
$\mathrm{d}\mathop{\text{Re}}(z)\,\mathrm{d}\mathop{\text{Im}}(z)$. We wrote
the gauge condition as one complex condition here, which upon complex
conjugation would also imply the vanishing of $X_{2,-1}$ and $X_{1,-1}$
respectively.
In order to show the admissibility, we define the complex-valued function
$V(\alpha)=\int_{D}\frac{\mathrm{d}^{2}z}{(1+|z|^{2})^{2}}X\circ\gamma_{\alpha}^{-1}(z,\bar{z})\overline{Y_{\ell,1}(z,\bar{z})}\
.$ (15)
Note that $\overline{Y_{\ell,1}(z,\bar{z})}=Y_{\ell,-1}(\bar{z},z)$. We will
call it $V_{\mathrm{N}}(\alpha)$ when we set $\ell=1$ and we are dealing with
Neumann boundary condition. Similarly for the Dirichlet case, we will call it
$V_{\mathrm{D}}(\alpha)$ and set $\ell=2$. Showing admissibility of the gauge
amounts to showing that $V(\alpha)$ has a zero in the unit disk. In fact, we
should also determine the number of zeros since this will be needed in the
calculation of the Faddeev-Popov determinant eventually.
It turns out that the number of zeros of $V(\alpha)$ in the unit disk can be
determined from its behavior near the boundary by using Stokes’ theorem as
explained below. Thus, we first analyze the behavior of $V(\alpha)$ for
$\alpha=\rho\,\mathrm{e}^{i\theta}$ and $\rho$ close to 1. This behavior of
$V(\alpha)$ is entirely universal, because $\gamma_{\alpha}^{-1}(z)$ is close
to the boundary of the worldsheet disk for any choice of $z$ and $\rho\sim 1$.
Thus in this limit one is only probing the function $X$ close to the boundary
of the worldsheet disk, where its behavior is specified by the boundary
conditions. We find
$\displaystyle V_{\mathrm{N}}(\alpha)$
$\displaystyle=i(1-\rho)e^{-i\theta}\underbrace{\sum_{\begin{subarray}{c}\ell,\,m\\\
\ell+m=\text{even}\end{subarray}}h_{\text{N}}(\ell,m)\mathop{\text{Im}}\left(X_{\ell,m}\mathrm{e}^{im\theta}\right)}_{f_{\text{N}}(\theta)\equiv\,\text{real
function}}\,+\,o(1-\rho)\ ,$ (16a) $\displaystyle V_{\mathrm{D}}(\alpha)$
$\displaystyle=(1-\rho)e^{-i\theta}\underbrace{\sum_{\begin{subarray}{c}\ell,\,m\\\
\ell+m=\text{odd}\end{subarray}}h_{\text{D}}(\ell,m)\mathop{\text{Re}}\left(X_{\ell,m}\mathrm{e}^{im\theta}\right)}_{f_{\text{D}}(\theta)\equiv\,\text{real
function}}\,+\,o(1-\rho)\ .$ (16b)
The numbers $h_{\text{N}}(\ell,m)$ and $h_{\text{D}}(\ell,m)$ are real. Eq.
(16) follows from the observation
$\int_{D}\frac{4\,\mathrm{d}^{2}z}{(1+|z|^{2})^{2}}Y_{\ell,m}\circ\gamma_{\alpha}^{-1}(z,\bar{z})\overline{Y_{1,1}(z,\bar{z})}=(1-\rho)e^{i(m-1)\theta}h_{\text{N}}(\ell,m)\,+\,o(1-\rho)$
(17)
with $h_{\text{N}}(\ell,m)=-h_{\text{N}}(\ell,-m)$ for the Neumann boundary
condition. This leads to only the imaginary part of
$X_{\ell,m}\mathrm{e}^{im\theta}$ surviving in the sum. Furthermore, reality
of $X_{0,0}$ implies the vanishing of the $m=0$ term. For Dirichlet boundary
condition, we instead have
$\int_{D}\frac{4\,\mathrm{d}^{2}z}{(1+|z|^{2})^{2}}Y_{\ell,m}\circ\gamma_{\alpha}^{-1}(z,\bar{z})\overline{Y_{2,1}(z,\bar{z})}=(1-\rho)e^{i(m-1)\theta}h_{\text{D}}(\ell,m)\,+\,o(1-\rho)$
(18)
where $h_{\text{D}}(\ell,m)=h_{\text{D}}(\ell,-m)$. This leads to only the
real part surviving. It is easy to compute these integrals in Mathematica for
low values of $\ell$ and convince oneself of the validity of this behavior. We
haven’t tried to give a rigorous proof of this property.
Now we consider eq. (16) and compute the following contour integral:
$N\equiv\frac{1}{2\pi i}\int_{\partial D}\frac{\mathrm{d}V}{V}\ .$ (19)
Here the contour encircles $D$ once in counterclockwise sense. To make this
well-defined, we take the contour to be very close to the boundary. We can
compute this directly from the behavior eq. (16):
$N=\frac{1}{2\pi
i}\int_{0}^{2\pi}\frac{\mathrm{d}(\mathrm{e}^{-i\theta}f(\theta))}{\mathrm{e}^{-i\theta}f(\theta)}=\frac{1}{2\pi
i}\int_{0}^{2\pi}(-i\mathrm{d}\theta+\mathrm{d}\log f(\theta))=-1+w(f)\,.$
(20)
where $w(f)$ is the winding number of the function $f(\theta)$ (which we
called $f_{\text{N}}$ and $f_{\text{D}}$ in eq. (16) depending on the boundary
condition). We would like to conclude that the winding number $N$ of $V$
around the boundary is $-1$. However, $f(\theta)$ is real and is not generally
sign definite, hence can potentially cross zero. For such functions, the
winding number around zero is ill-defined. To cure this we perform the
following replacement
$\displaystyle\text{Dirichlet}:$ $\displaystyle\qquad X\to X+i\varepsilon
Y_{1,0}\ ,$ (21a) $\displaystyle\text{Neumann}:$ $\displaystyle\qquad X\to
X+\varepsilon Y_{1,0}\ ,$ (21b)
with fixed $\varepsilon\neq 0$. This results in an additive modification of
eq. (16); the modified function $f_{\mathrm{N}}(\theta)$ has a constant real
piece while the modified $f_{\mathrm{D}}(\theta)$ has a constant imaginary
piece. This guarantees that the modified function $f(\theta)$ does not pass
through the origin and $w(f)=0$. So with this modification, we have
$N=-1\ .$ (22)
Before analyzing the above equation, let us discuss the meaning of the
regularization. The path integral can be understood as a contour integral in
the space of complex-valued $L^{2}$-functions. This translates into the
reality condition $\overline{X_{\ell,m}}=X_{\ell,-m}$ which specifies the
contour for the modes. However, one can slightly shift the contour which
should leave the value of the path integral unchanged. For the Dirichlet case,
the eq. (21) amounts to $X_{1,0}\to X_{1,0}+i\varepsilon$. This should be
thought of as doing the Gaussian integral over the
$\mathrm{Im}X_{1,0}=\varepsilon$ line instead of on the real line. (The
interpretation for the Neumann case is not as simple as the Dirichlet one,
since here we are regulating using a component which does not really respect
the Neumann boundary condition.) We should also mention that the details of
this modification do not matter. We could modify $X$ in any infinitesimal way,
since any generic perturbation of a real function will result in a vanishing
winding number. We just choose (21) for definiteness.
Eq. (22) implies that $V$ has exactly one zero in the disk, provided one
counts zeros with signs and multiplicities as follows. For a generic complex
function $V$ on the unit disk, zeros are isolated. We can encircle a zero by a
contour and view $V(\alpha)$ restricted to the contour as a map
$\mathrm{S}^{1}\longmapsto\mathbb{C}\setminus\\{0\\}$. There is a winding
number associated to this map which is the order of zero. For example the
function $V(\alpha)=\alpha$ has a zero of order 1 around the origin, whereas
the function $V(\alpha)=\bar{\alpha}$ has a zero of order $-1$ around the
origin. For a zero of order $n$, we compute easily
$\int_{\mathcal{C}}\frac{\mathrm{d}V}{V}=n\ ,$ (23)
where the contour $\mathcal{C}$ encircles only the zero of $V$. Now by Stokes’
theorem it follows that the sum of the orders of zeros has to be $-1$. In
particular, there is at least one zero and the gauge is admissible. The
significance of the minus sign will become clear in the following section,
when we discuss a signed version of the Faddeev-Popov gauge-fixing procedure.
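This signed counting can be illustrated on toy functions, with the contour integral of eq. (19) evaluated numerically (a sketch of our own, not tied to the specific $V(\alpha)$ of the text):

```python
import numpy as np

def winding(V, radius=0.99, n=4000):
    # numerically evaluate (1 / 2 pi i) times the contour integral of
    # dV / V along the circle |alpha| = radius
    alphas = radius * np.exp(2j * np.pi * np.arange(n + 1) / n)
    vals = V(alphas)
    increments = np.angle(vals[1:] / vals[:-1])  # branch-safe phase steps
    return int(np.round(np.sum(increments) / (2 * np.pi)))

assert winding(lambda a: a) == 1             # simple zero: order +1
assert winding(lambda a: np.conj(a)) == -1   # antiholomorphic zero: order -1
# zeros of orders +1 (at 0.5) and -2 (at 0) add up to total winding -1
assert winding(lambda a: (a - 0.5) * np.conj(a) ** 2) == -1
print("ok")
```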
Once we have proved that the gauge is admissible, the regularization parameter
$\varepsilon$ does not matter and can be set to $0$. We will do so in the
rest of the calculation.
For different gauges where we impose $X_{\ell,m}=0$ with
$m\not\in\\{-1,1\\}$, we should instead consider
$V(\alpha)=\int_{D}\frac{\mathrm{d}^{2}z}{(1+|z|^{2})^{2}}X\circ\gamma_{\alpha}^{-1}(z,\bar{z})\overline{Y_{\ell,m}(z,\bar{z})}\
.$ (24)
and then the overall winding number $N$ turns out to be $-m$. In what follows
we will use the gauge where $m=1$. It is possible to perform the computation
with other choices of gauge with $m\neq 1$ as well (as long as $m\neq 0$, in
which case the gauge is no longer admissible).
### 2.2 Computation of the path integral
After these preparations, the actual computation of the gauge-fixed partition
function is very easy. We can apply the modified Faddeev-Popov procedure that
we reviewed in Appendix C to our problem. It is modified in that it counts
intersections of the gauge orbit with the gauge slice with signs. This is
necessary because while the gauge we have chosen is admissible, it is not
uniquely so. The modified FP-procedure cancels unwanted intersections of the
gauge orbit and the gauge slice by counting them with minus signs. The gauge
group is $\mathrm{PSL}(2,\mathbb{R})$ and the gauge condition is
$F(X)=(X^{g})_{1,1}=0$ for Neumann and $F(X)=(X^{g})_{2,1}=0$ for Dirichlet
boundary conditions. The computation in the previous Section 2.1 shows in fact
precisely that the intersection number $\mathcal{I}$ between the gauge orbit
and the gauge slice is $\mathcal{I}=-1$, independent of $X$, i.e.
$-1=\int_{\mathcal{G}}\mathrm{d}g\
\mathop{\text{det}}\mathop{\text{Jac}}F(X^{g})\,\delta(F(X^{g}))\ .$ (25)
For $m\neq 1$, the LHS of the above equation reads $-m$ instead of $-1$, since
the intersection number is $\mathcal{I}=-m$. In what follows we will use
$m=1$.
#### Neumann Condition.
The Neumann condition involves the modes with $\ell+m$ even. The gauge fixing
condition is $F(X)=X_{1,1}=0$. The Jacobian
$\mathop{\text{Jac}}F(X^{g})$ (26)
is linear in $X$. Hence it can be evaluated mode by mode. It is actually only
non-vanishing for finitely many of the modes $X_{\ell,m}$. When expressing
the group element $g$ in terms of $\alpha\in\mathrm{PSU}(1,1)/\mathrm{U}(1)$
through (13) (and writing $X^{\gamma_{\alpha}}\equiv X^{\alpha}$), we have in
fact the identity
$1=-\int\frac{\pi\,\mathrm{d}^{2}\alpha}{(1-|\alpha|^{2})^{2}}\
J_{\mathrm{N}}(X^{\alpha})\ \delta^{2}(F(X^{\alpha}))$ (27)
with
$\pi
J_{\mathrm{N}}(X)=\frac{36}{5}(\mathrm{Im}X_{2,2})^{2}+\frac{36}{5}(\mathrm{Re}X_{2,2})^{2}-\frac{6}{5}X_{2,0}^{2}\
.$ (28)
The gauge-fixed path integral hence reads explicitly
$Z^{\mathrm{N}}_{\text{disk}}=-\int\mathscr{D}X\
\delta(\mathrm{Re}X_{1,1})\delta(\mathrm{Im}X_{1,1})\,J_{\mathrm{N}}(X)\,\mathrm{e}^{-S[X]}\
,$ (29)
where the action in terms of modes is given by
$S[X]=\frac{1}{4\pi\alpha^{\prime}}\sum_{\ell+m\in
2\mathbb{Z}}\ell(\ell+1)|X_{\ell,m}|^{2}\ .$ (30)
Hence, in the ratio of the gauged and the ungauged CFT partition functions,
all but finitely many modes cancel, and the result is given by a simple ratio
of Gaussian integrals. It works out to be
$\displaystyle\frac{Z^{\mathrm{N}}_{\text{disk}}}{Z^{\mathrm{N}}_{\text{CFT}}}$
$\displaystyle=-\frac{2}{\pi^{2}}\ .$ (31)
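This ratio can be verified by elementary Gaussian moments. The following sketch is our own cross-check (with $\alpha^{\prime}=1$ and assuming the reality convention that pairs the modes $X_{\ell,m}$ and $X_{\ell,-m}$); it evaluates eq. (29) mode by mode:

```python
import math

# Variance of one real component of the mode X_{l,m} for the action (30),
# S = (1/4pi) sum_{l+m even} l(l+1) |X_{l,m}|^2 with alpha' = 1.
# Reality pairs (l, m) with (l, -m), doubling the weight of the m != 0 modes.
def var(l, m):
    a = l * (l + 1) / (2 * math.pi) if m != 0 else l * (l + 1) / (4 * math.pi)
    return 1 / (2 * a)

# <pi J_N> assembled from eq. (28)
pi_JN = 36/5 * var(2, 2) + 36/5 * var(2, 2) - 6/5 * var(2, 0)

# delta^2(X_{1,1}) removes the X_{1,1} Gaussian normalization:
# int d^2 X_{1,1} exp(-|X_{1,1}|^2 / pi) = pi^2
ratio = -(pi_JN / math.pi) / math.pi**2

print(ratio, -2 / math.pi**2)  # both approximately -0.2026
```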
#### Dirichlet boundary conditions.
The computation is completely analogous. The Faddeev-Popov determinant works
out to be
$\pi
J_{\mathrm{D}}(X)=\frac{64}{7}\left[(\mathop{\text{Im}}X_{3,2})^{2}+(\mathop{\text{Re}}X_{3,2})^{2}\right]-\frac{16}{5}\sqrt{\frac{3}{7}}X_{1,0}X_{3,0}-\frac{2}{5}(X_{1,0})^{2}-\frac{96}{35}(X_{3,0})^{2}$
(32)
in this case. In particular it again only involves finitely many modes and
allows one to reduce the ratio of the gauged and the ungauged partition
function to a ratio of finite-dimensional integrals. One again recovers
$\frac{Z_{\text{disk}}^{\text{D}}}{Z_{\text{CFT}}^{\text{D}}}=\frac{Z_{\text{disk}}^{\text{N}}}{Z_{\text{CFT}}^{\text{N}}}=-\frac{2}{\pi^{2}}=\text{Regularized
volume of}\ \mathrm{PSL}(2,\mathbb{R})\ ,$ (33)
in agreement with the regularization procedure discussed in Liu:1987nz . This
is the result we anticipated in eq. (8).
## 3 Gauge fixing $\boldsymbol{\mathrm{d}X(0)=0}$
In this section, we repeat the calculation using a different gauge choice. We
mostly focus on the Neumann case and indicate the necessary changes for the
Dirichlet case. We used the gauge choice $X_{1,\pm 1}=0$ before. The
difficulty for this gauge choice was to establish admissibility. We saw that
the gauge is not uniquely fixed, but counting solutions with a sign of the
corresponding Jacobian that enters the Faddeev-Popov determinant, there is
always a unique solution (up to the subtlety that we had to shift the contour
slightly in the complex plane). On the other hand, it was almost trivial to
compute the path integral with the insertion of the corresponding delta-
function and the Jacobian, because this only involved finitely many modes
$X_{\ell,m}$.
In this section we will shift the difficulty – our gauge choice is easily seen
to be admissible, but computing the actual path integral will be more
technical.
### 3.1 Admissibility and uniqueness
Our gauge condition reads
$\mathrm{d}X(0)=0\ ,$ (34)
i.e. the center of the disk is a critical point of one of the spacetime
coordinates $X$. As before, this leaves the
$\mathrm{U}(1)\subset\mathrm{PSL}(2,\mathbb{R})$ subgroup unbroken. But since
$\mathrm{U}(1)$ is compact, it simply yields an additional factor of $\pi$ in
the final result.444The volume of $\mathrm{U}(1)$ is $\pi$ and not $2\pi$
because the gauge group is $\mathrm{PSL}(2,\mathbb{R})$ and not
$\mathrm{SL}(2,\mathbb{R})$. We will first discuss this condition for Neumann
boundary conditions.
Before discussing admissibility of this gauge, we should address a subtlety.
The restriction $X|_{\partial D}$ is a function on $\partial
D\cong\mathrm{S}^{1}$ and as such has local extrema (at least two of them).
Since for Neumann boundary conditions also $\partial_{n}X|_{\partial D}=0$, it
follows that these local extrema of $X|_{\partial D}$ are also local extrema
of $X$. Thus for generic $X$ there are always local extrema on the boundary of
the disk. This is undesirable for our purposes. To rectify this behavior, we
slightly modify the boundary condition as follows:
$\partial_{n}X(z)\Big{|}_{\partial D}=\varepsilon$ (35)
for small $\varepsilon$. Here $\varepsilon$ can in principle be a non-trivial
function on the boundary of the disk – our only requirement is that it does
not possess a zero. This choice guarantees that there are no local extrema on
the boundary of the disk: the modification shifts them slightly outside or
inside of the disk.
Now we can discuss admissibility of the gauge. For this, consider
$\mathrm{d}X$, which we can view as a vector field on $D$. We equip $D$ with a
flat metric, so that 1-forms can be identified with vector fields. This vector
field then has roughly the form depicted in figure 1. In the example of the
figure, there are three critical points: two (local) maxima and one saddle
point.
Figure 1: The derivative $\mathrm{d}X$ on the disk.
Thus, our gauge choice is admissible in this example, but not uniquely so. In
general, the number of (local) maxima, minima and saddle points is constrained
by the Poincaré-Hopf theorem.555Or alternatively by the Morse lemma when $X$
is a Morse function. The Poincaré-Hopf theorem says that for a vector field of
the form we are considering
$\text{\\# maxima}-\text{\\# saddle points}+\text{\\# minima}=1\ .$ (36)
The RHS of this equation is the Euler characteristic of the disk. This
equation shows in particular that the gauge is admissible.
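The constraint can be checked on a toy example. The sketch below (our own illustration using sympy; the quartic function is a hypothetical choice with two minima and one saddle point inside the unit disk) sums the signs of the Hessian determinants at all critical points:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
X = x**4 - x**2 + y**2  # illustrative function; all critical points lie inside the disk

# Critical points: dX = 0
crit = sp.solve([sp.diff(X, x), sp.diff(X, y)], [x, y], dict=True)

# sign(det Hess) is +1 at maxima/minima and -1 at saddle points
H = sp.Matrix([[sp.diff(X, x, 2), sp.diff(X, x, y)],
               [sp.diff(X, x, y), sp.diff(X, y, 2)]])
index_sum = sum(sp.sign(H.det().subs(p)) for p in crit)

print(index_sum)  # 1, the Euler characteristic of the disk
```

Here the two minima contribute $+1$ each and the saddle point $-1$, reproducing the right-hand side of the counting formula.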
We are thus in a similar situation as for the other gauge, where the gauge is
not uniquely fixed, but different solutions to the gauge condition are
constrained by a topological condition. We can exploit this by considering the
following quantity
$\int_{\mathrm{PSL}(2,\mathbb{R})}\mathrm{d}\gamma\
\det(\text{Hess}(X^{\gamma})(0))\delta^{2}(\mathrm{d}X^{\gamma}(0))\,.$ (37)
Here, $\mathrm{d}\gamma$ is the Haar measure and $X^{\gamma}\equiv
X\circ\gamma^{-1}$ as before. $\text{Hess}(X)(0)$ is the Hessian matrix
$\text{Hess}(X)(0)=\begin{pmatrix}\partial_{x}^{2}X(0)&\partial_{x}\partial_{y}X(0)\\\
\partial_{x}\partial_{y}X(0)&\partial_{y}^{2}X(0)\end{pmatrix}\ .$ (38)
Given our previous discussion, we can evaluate this expression very
explicitly. As before, we can parametrize the coset
$\mathrm{PSL}(2,\mathbb{R})/\mathrm{U}(1)$ by $\alpha\in D$, see eq. (13).
Following the logic of the modified Faddeev-Popov procedure, this evaluates to
$\displaystyle\int_{D}\frac{\pi\,\mathrm{d}\alpha\,\mathrm{d}\bar{\alpha}}{(1-|\alpha|^{2})^{2}}\det(\mathrm{Hess}(X^{\alpha})(0))\delta^{2}(\mathrm{d}X^{\alpha}(0))=\pi\sum_{\alpha_{0}}\text{sgn}\left(\det\text{Hess}(X(\alpha_{0}))\right)\
.$ (39)
We finally have
$\displaystyle\text{sgn}\left(\det\text{Hess}(X(\alpha_{0}))\right)=\begin{cases}+1&\text{$\alpha_{0}$
is a maximum or minimum of $X(z)$}\\\ -1&\text{$\alpha_{0}$ is a saddle point
of $X(z)$}\end{cases}$ (40)
Thus, by the topological constraint (36) on the maxima, minima and saddle
points, we have simply
$\displaystyle\int_{D}\frac{\pi\,\mathrm{d}\alpha\,\mathrm{d}\bar{\alpha}}{(1-|\alpha|^{2})^{2}}\det(\mathrm{Hess}(X^{\alpha})(0))\delta^{2}(\mathrm{d}X^{\alpha}(0))=\pi\
.$ (41)
In other words, the intersection number between the gauge slice and the gauge
orbit is $\mathcal{I}=1$. The general logic is again given by the modified FP-
procedure that we review in Appendix C. We finally insert this identity in the
path integral for the disk partition function
$\int\frac{\mathscr{D}X}{\mathop{\text{vol}}\mathrm{PSL}(2,\mathbb{R})}\mathrm{e}^{-S[X]}\\\
=\frac{1}{\pi}\int\frac{\mathscr{D}X}{\mathop{\text{vol}}\mathrm{PSL}(2,\mathbb{R})}\int_{D}\frac{\pi\,\mathrm{d}\alpha\,\mathrm{d}\bar{\alpha}}{(1-|\alpha|^{2})^{2}}\det(\mathrm{Hess}(X^{\alpha})(0))\delta^{2}(\mathrm{d}X^{\alpha}(0))\mathrm{e}^{-S[X]}\
.$ (42)
While we suppress the other directions of the sigma model as well as the
ghost fields from the notation for simplicity, we should remember that they
are present in order to have a non-anomalous $\mathrm{PSL}(2,\mathbb{R})$
symmetry. With this convention, both the measure and the action
are invariant under $\mathrm{PSL}(2,\mathbb{R})$ transformations –
$\mathscr{D}X=\mathscr{D}X^{\gamma}$ and $S[X^{\gamma}]=S[X]$. Thus, after
replacing $X$ by $X^{\alpha}$ in the measure and the action, we can rename
$X^{\alpha}\to X$ everywhere. The $\alpha$-integral then formally is
$\int_{D}\frac{\pi\,\mathrm{d}\alpha\,\mathrm{d}\bar{\alpha}}{(1-|\alpha|^{2})^{2}}=\int_{\mathrm{PSL}(2,\mathbb{R})}\mathrm{d}\gamma=\mathop{\text{vol}}\mathrm{PSL}(2,\mathbb{R})\
,$ (43)
which cancels the corresponding factor in the denominator (at least, this is
how we define $\mathop{\text{vol}}\mathrm{PSL}(2,\mathbb{R})$). Thus, we end
up with the following gauge-fixed form of the disk partition function
following gauge-fixed form of the disk partition function
$Z_{\text{disk}}=\frac{1}{\pi}\int\mathscr{D}X\det(\mathrm{Hess}(X)(0))\delta^{2}(\mathrm{d}X(0))\mathrm{e}^{-S[X]}\
.$ (44)
#### Dirichlet case.
Let us indicate the changes for the Dirichlet case. Here, $X|_{\partial D}=0$
and so the derivative of $X$ along the boundary vanishes. Hence we again
expect that generically there can be critical points of $X(z)$ on the boundary
$\partial D$ and we require a similar regularization as before. This situation
is topologically completely equivalent to the Neumann case if we rotate the
vectorfield pointwise by 90 degrees. Then the normal derivative and the
derivative along the boundary get interchanged and we are back to the Neumann
situation that can be regularized as discussed above. Thus, we again have
after regularization
$\text{\\# maxima}-\text{\\# saddle points}+\text{\\# minima}=1\ .$ (45)
The rest of the computation did not require the boundary condition and hence
(44) also holds for Dirichlet boundary conditions.
### 3.2 Computation of the path integral
Next, we compute the gauge-fixed path integral eq. (44). We choose a flat
metric on the disk for simplicity and set $\alpha^{\prime}=1$. We will again
perform the computation first for Neumann boundary conditions and indicate the
changes for Dirichlet boundary conditions below. Let us introduce the standard
generating functional
$W(J)=\left\langle\exp\left(i\int\mathrm{d}^{2}z\
X(z)J(z)\right)\right\rangle\ ,$ (46)
where the correlation function is normalized such that $\langle 1\rangle=1$.
Here, $J(z)$ is an arbitrary source for $X$. We can compute the generating
functional in the following standard way. The Green’s function for the
Laplacian on the disk with Neumann boundary conditions reads
$G(z,w)=\frac{1}{2\pi}\left(\log|z-w|+\log\left(|w||z-w^{*}|\right)\right)-\frac{1}{4\pi}(|z|^{2}+|w|^{2})\
,$ (47)
where $w^{*}=\frac{w}{|w|^{2}}$ is the point reflected at the unit circle.
This Green’s function is symmetric, which becomes obvious if we write it in
the form
$G(z,w)=\frac{1}{2\pi}\left(\log|z-w|+\log|1-z\bar{w}|\right)-\frac{1}{4\pi}(|z|^{2}+|w|^{2})\
.$ (48)
It satisfies
$\Delta_{z}G(z,w)=\delta^{2}(z,w)-\frac{1}{\pi}\ .$ (49)
The correction is expected, because the Laplacian has a zero mode and thus the
inverse only exists for non-zero modes. One can complete the square in the
path integral and derive
$W(J)=\exp\left(\pi\int\mathrm{d}^{2}z\ \mathrm{d}^{2}w\
G(z,w)J(z)J(w)\right)\ .$ (50)
This expression is valid as long as the zero mode $\int\mathrm{d}^{2}z\ J(z)$
vanishes. This will always be satisfied below since our gauge fixing condition
does not involve the zero mode.
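The defining property (49) of the Green’s function can be verified symbolically away from the coincident point. A small sympy check (our own, in real coordinates $z=z_{x}+iz_{y}$):

```python
import sympy as sp

zx, zy, wx, wy = sp.symbols('z_x z_y w_x w_y', real=True)

r2 = (zx - wx)**2 + (zy - wy)**2                               # |z - w|^2
s2 = 1 - 2*(zx*wx + zy*wy) + (zx**2 + zy**2)*(wx**2 + wy**2)   # |1 - z*conj(w)|^2

# Neumann Green's function, eq. (48), using log|u| = log(|u|^2)/2
G = (sp.log(r2) + sp.log(s2)) / (4*sp.pi) \
    - (zx**2 + zy**2 + wx**2 + wy**2) / (4*sp.pi)

lap = sp.simplify(sp.diff(G, zx, 2) + sp.diff(G, zy, 2))
print(lap)  # -1/pi, i.e. Delta G = -1/pi away from z = w
```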
Now we turn again to eq. (44). It involves composite operators such as the
determinant of the Hessian which have to be defined properly. Our
regularization is to use point splitting. Correspondingly, the determinant of
the Hessian becomes
$\partial_{x}^{2}X(z_{x})\partial_{y}^{2}X(z_{y})-\partial_{x}\partial_{y}X(z_{x})\partial_{x}\partial_{y}X(z_{y})\
.$ (51)
Here and in the following $\partial_{x}$ ($\partial_{y}$) is the derivative
with respect to the real (imaginary) part of the complex argument. We find it
less confusing to use real coordinates in the computation. We used $z_{x}$ and
$z_{y}$ for the two point-split points to remember which one carries more $x$
and $y$-derivatives. We ultimately want to take them both to zero. Similarly,
the $\delta$-functions can be taken to be
$\delta(\partial_{x}X(z_{x}))\delta(\partial_{y}X(z_{y}))\ .$ (52)
It turns out that in the following computation it is very natural to take them
at the same coordinates as the entries of the Hessian matrix – this will not
lead to singularities. In fact, this point-split version of the integral
simply comes from the modified gauge condition
$\partial_{x}X(z_{x})=0\quad\text{and}\quad\partial_{y}X(z_{y})=0\ .$ (53)
As a first step, we can compute
$\displaystyle\tilde{W}(J)$
$\displaystyle=\left\langle\delta(\partial_{x}X(z_{x}))\delta(\partial_{y}X(z_{y}))\exp\left(i\int\mathrm{d}^{2}z\
X(z)J(z)\right)\right\rangle$ (54)
$\displaystyle=\frac{1}{(2\pi)^{2}}\int_{-\infty}^{\infty}\mathrm{d}k_{x}\mathrm{d}k_{y}\
W\big{(}J+k_{x}\partial_{x}\delta^{2}(z-z_{x})+k_{y}\partial_{y}\delta^{2}(z-z_{y})\big{)}\
.$ (55)
Notice that as promised, the modified source still does not have a zero mode.
We can plug in the explicit form of $W(J)$ to obtain
$\displaystyle\tilde{W}(J)$
$\displaystyle=\frac{W(J)}{(2\pi)^{2}}\int_{-\infty}^{\infty}\mathrm{d}k_{x}\mathrm{d}k_{y}\
\exp\Bigg{(}\pi\sum_{i,j\in\\{x,y\\}}k_{i}k_{j}\partial_{i}^{(1)}\partial_{j}^{(2)}G(z_{i},z_{j})$
$\displaystyle\qquad\qquad\qquad-2\pi\sum_{i\in\\{x,y\\}}k_{i}\int\mathrm{d}^{2}z\
\partial_{i}^{(2)}G(z,z_{i})J(z)\Bigg{)}\ .$ (56)
The superscripts $(1)$ and $(2)$ indicate whether the derivative acts on the
first or the second entry of the Green’s function. Remembering that we use
point splitting to define Green’s functions at coincident points, we need to
subtract the singular piece $\frac{1}{2\pi}\log|z-w|$ of the Green’s function
as $w\to z$. This gives
$G_{\text{reg}}(z,z)=\frac{1}{2\pi}\left(\log\left(1-|z|^{2}\right)-|z|^{2}\right)\
.$ (57)
We next compute the integral over $k_{x}$ and $k_{y}$. Let
$A_{i,j}=-\partial_{i}^{(1)}\partial_{j}^{(2)}G(z_{i},z_{j})\ ,\qquad
b_{i}=\int\mathrm{d}^{2}z\ \partial_{i}^{(2)}G(z,z_{i})J(z)\ .$ (58)
We thus simply compute the Gaussian integral with the result
$\tilde{W}(J)=\frac{W(J)}{(2\pi)^{2}\sqrt{\det(A)}}\exp\left(\pi\sum_{i,j}b_{i}(A^{-1})_{i,j}b_{j}\right)\
.$ (59)
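The $k$-integral used here is the standard two-dimensional Gaussian identity $\int\mathrm{d}^{2}k\,\mathrm{e}^{-\pi k^{T}Ak-2\pi b^{T}k}=\det(A)^{-1/2}\mathrm{e}^{\pi b^{T}A^{-1}b}$. As a sanity check (our own, with a randomly chosen positive-definite $A$ and a small illustrative $b$), it can be confirmed numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(2, 2))
A = B @ B.T + np.eye(2)        # random positive-definite matrix
b = 0.1 * rng.normal(size=2)

# Left side: Riemann sum of exp(-pi k.A.k - 2 pi b.k) over a large grid
ks = np.linspace(-6, 6, 1201)
dk = ks[1] - ks[0]
KX, KY = np.meshgrid(ks, ks)
K = np.stack([KX, KY], axis=-1)
quad = np.einsum('...i,ij,...j->...', K, A, K)
lhs = np.sum(np.exp(-np.pi * quad - 2 * np.pi * (K @ b))) * dk**2

# Right side: closed form det(A)^(-1/2) exp(pi b.A^(-1).b)
rhs = np.exp(np.pi * b @ np.linalg.inv(A) @ b) / np.sqrt(np.linalg.det(A))
print(lhs, rhs)  # agree to high precision
```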
It turns out that the matrix $A$, although complicated, is indeed positive
definite, so that the integral over $k_{x}$ and $k_{y}$ is well-defined. By
direct computation, we have
$\det(A)\Big{|}_{z_{x}=0,z_{y}=0}=\frac{1}{(2\pi)^{2}}\ .$ (60)
Also the exponential behaves nicely in the limit where $z_{x}\to 0$ and
$z_{y}\to 0$ and we obtain
$\sum_{i,j}b_{i}(A^{-1})_{i,j}b_{j}=2\pi\int\mathrm{d}^{2}z\ \mathrm{d}^{2}w\
\sum_{p\in\\{x,y\\}}\partial_{p}^{(2)}G(z,0)\partial_{p}^{(2)}G(w,0)J(z)J(w)$
(61)
Let us define
$\tilde{G}(z,w)=G(z,w)+2\pi\sum_{i\in\\{x,y\\}}\partial_{i}^{(2)}G(z,0)\partial_{i}^{(2)}G(w,0)\
.$ (62)
Thus, after specializing to $z_{x}=z_{y}=0$, we have
$\tilde{W}(J)=\frac{1}{2\pi}\exp\left(\pi\int\mathrm{d}^{2}z\ \mathrm{d}^{2}w\
\tilde{G}(z,w)J(z)J(w)\right)\ .$ (63)
To complete the computation, we also want to include the effect of the
Hessian. Point-splitting again, we simply obtain it by taking functional
derivatives. Remembering also the additional factor of $\frac{1}{\pi}$ from
the volume of the residual gauge group $\mathrm{U}(1)$, we want to compute
$\displaystyle\frac{Z_{\text{disk}}}{Z_{\text{CFT}}}$
$\displaystyle=-\frac{1}{2\pi^{2}}\lim_{z_{x}\to 0,\,z_{y}\to
0}\left((\partial_{x}^{(1)})^{2}(\partial_{y}^{(2)})^{2}-\partial_{x}^{(1)}\partial_{y}^{(1)}\partial_{x}^{(2)}\partial_{y}^{(2)}\right)\frac{\delta}{\delta
J(z_{x})}\frac{\delta}{\delta J(z_{y})}\tilde{W}(J)\Big{|}_{J=0}$ (64)
$\displaystyle=-\frac{1}{\pi}\lim_{z_{x}\to 0,\,z_{y}\to
0}\left((\partial_{x}^{(1)})^{2}(\partial_{y}^{(2)})^{2}-\partial_{x}^{(1)}\partial_{y}^{(1)}\partial_{x}^{(2)}\partial_{y}^{(2)}\right)\tilde{G}(z_{x},z_{y})\
.$ (65)
Here, $Z_{\text{CFT}}$ is the CFT partition function without gauging of
$\mathrm{PSL}(2,\mathbb{R})$. There are two terms – from the original $G(z,w)$
and from the correction term in eq. (62). The second term leads again to
Green’s functions at coincident points which we regularize as before. A direct
computation then leads to
$\frac{Z_{\text{disk}}}{Z_{\text{CFT}}}=-\frac{1}{\pi}\times\frac{2}{\pi}=-\frac{2}{\pi^{2}}\
.$ (66)
This is in perfect agreement with our earlier calculation.
#### Dirichlet case.
For Dirichlet boundary conditions, the following changes need to be made. The
Green’s function now takes the form
$G(z,w)=\frac{1}{2\pi}\left(\log|z-w|-\log|1-z\bar{w}|\right)$ (67)
and there is no zero mode. Furthermore, the matrix $A_{i,j}$ is _negative
definite_ in this case and thus the integral over $k_{x}$ and $k_{y}$ is a
priori ill-defined. However, one can still go on by employing a double Wick
rotation $k_{p}\to ik_{p}$ (but the answer is less well-defined in this case).
This leads to
$\tilde{W}(J)=-\frac{W(J)}{(2\pi)^{2}\sqrt{\det(A)}}\exp\left(\pi\sum_{i,j}b_{i}(A^{-1})_{i,j}b_{j}\right)\
,$ (68)
where the various quantities are given by analogous expressions as in the
Neumann case. The extra minus sign comes from the analytic continuation. The
Wick rotation exchanges branches of the square root. The remaining steps are
completely analogous and one obtains the result
$\frac{Z_{\text{disk}}}{Z_{\text{CFT}}}=\frac{1}{\pi}\times\left(-\frac{2}{\pi}\right)=-\frac{2}{\pi^{2}}\
.$ (69)
## 4 Relation to a one-point function
In this section, we will explain yet another method to compute the disk
partition function by relating it to a one-point function. This is more along
the lines of how disk partition functions were evaluated previously in the
literature. Historically, this was done by using the soft dilaton
theorem Shapiro:1975cz ; Ademollo:1975pf that relates the disk partition
function to a one-point function of the dilaton with zero momentum. This
exploits the fact that the dilaton appears in the spacetime effective action
as an exponential. The computation we present here is simpler because one does
not have to deal with the subtleties of the dilaton vertex operator and one
does not have to make any assumption about the spacetime theory.
### 4.1 Marginal operator
Let us suppose that there is a circle of radius $L$ in the spacetime which is
described by a compact free boson $X\sim X+2\pi L$. As before, we want to
compute the path integral over the worldsheet CFT with a
$\mathrm{PSL}(2,\mathbb{R})$ gauging and compare it with the path integral
without gauging.
We make use of the fact that the worldsheet partition function as well as the
gauged string partition function should depend on $L$ in a simple way. In
fact, $L$ only enters the path integral formalism through the zero modes,
which leads to the behavior
Neumann $\displaystyle:\ Z_{\text{CFT}}\propto L^{1}\ ,$ (70a) Dirichlet
$\displaystyle:\ Z_{\text{CFT}}\propto L^{0}\ ,$ (70b)
because the zero mode is only present for the Neumann boundary condition. We
assume that this property continues to be true in the full string partition
function $Z_{\text{disk}}$.
In the worldsheet path integral
$Z_{\text{CFT}}=\int\mathscr{D}X\ \mathrm{e}^{-S[X]}\ ,$ (71)
we can make the $L$-dependence explicit by defining $X^{\prime}=L^{-1}X$,
which has periodicity $2\pi$. Then the worldsheet path integral reads
$Z_{\text{CFT}}=L^{\gamma}\int\mathscr{D}X^{\prime}\
\mathrm{e}^{-L^{2}S[X^{\prime}]}\ .$ (72)
We put a prefactor $L^{\gamma}$ in front of the path integral to account for
the fact that the measure $\mathscr{D}X^{\prime}$ should also transform under
this replacement. Since the replacement $X^{\prime}=L^{-1}X$ is linear, the
most general transformation is given by an overall factor $L^{\gamma}$.
However, the precise value of the exponent $\gamma$ is scheme dependent and we
leave it open. One can for example compute that in zeta-function
regularization $\gamma=\frac{1}{6}$. Let us write
$V(z)=g^{ab}\partial_{a}X^{\prime}\partial_{b}X^{\prime}(z)$ in the following
for simplicity. We thus have
$\frac{\partial_{L}(L^{-\gamma}Z_{\text{CFT}})}{L^{-\gamma}Z_{\text{CFT}}}=-\frac{L}{2\pi\alpha^{\prime}}\frac{\int\mathscr{D}X^{\prime}\
\int\mathrm{d}^{2}z\
\sqrt{g}\,V(z)\mathrm{e}^{-L^{2}S[X^{\prime}]}}{\int\mathscr{D}X^{\prime}\
\mathrm{e}^{-L^{2}S[X^{\prime}]}}$ (73)
In this expression it is now very simple to gauge fix because we are computing
a one-point function. We can put the vertex operator $V(z)$ in the center of
the disk. We take the disk again to be the unit disk with flat metric so that
the vertex operator is inserted at $z=0$. The remaining Faddeev-Popov
determinant is simply $\frac{1}{\pi}$ coming from the unbroken
$\mathrm{U}(1)$. We thus deduce
$\frac{\partial_{L}(L^{-\gamma}Z_{\text{disk}})}{L^{-\gamma}Z_{\text{CFT}}}=-\frac{L}{2\pi^{2}\alpha^{\prime}}\langle
V(0)\rangle_{L}\ ,$ (74)
where the normalized expectation value is taken w.r.t. the action
$L^{2}S[X^{\prime}]$.
### 4.2 Computation
After having related the disk partition function to a one-point function, we
proceed with the calculation. The expectation value $\langle V(0)\rangle_{L}$
can be computed via Green’s functions as in Section 3. To start, we first
point split the operator $V(z)$ and compute the two point function
$4\langle\partial X(z)\bar{\partial}X(w)\rangle$ (75)
instead, which in the limit $z,w\to 0$ gives the desired one-point function.
Here we again wrote $X$ for $X^{\prime}$ to avoid cluttering the notation.
This gives
$\frac{\partial_{L}(L^{-\gamma}Z_{\text{disk}})}{L^{-\gamma}Z_{\text{CFT}}}=-\frac{L}{2\pi^{2}\alpha^{\prime}}\times\left(-\frac{2\pi\alpha^{\prime}}{L^{2}}\right)\times
4\lim_{z,w\to 0}\partial_{z}\bar{\partial}_{w}G(z,w)\ .$ (76)
The additional factor comes from the generating functional $W(J)$ that we
determine as in Section 3.
Notice that so far everything works with both boundary conditions. We also
make the important remark that through point-splitting we have chosen a
renormalization scheme and thus we can only expect agreement for a specific
$\gamma$. For this reason we will consider a combination of the Neumann and
Dirichlet partition functions where the scheme dependence cancels. We can
compute the ratio
$\frac{Z_{\text{disk}}}{LZ_{\text{CFT}}}=\frac{\partial_{L}(L^{-\gamma}Z_{\text{disk}}^{\text{N}})}{L^{-\gamma}Z_{\text{CFT}}^{\mathrm{N}}}-\frac{\partial_{L}(L^{-\gamma}Z_{\text{disk}}^{\text{D}})}{L^{-\gamma}Z_{\text{CFT}}^{\mathrm{D}}}\
.$ (77)
In this equality, we used the proportionalities (70), as well as the
expectation that the ratio $Z_{\text{disk}}/Z_{\text{CFT}}$ neither depends on
the boundary conditions nor on $L$. We finally learn
$\displaystyle\frac{Z_{\text{disk}}}{Z_{\text{CFT}}}$
$\displaystyle=\frac{4}{\pi}\lim_{z,w\to
0}\partial_{z}\bar{\partial}_{w}\left(G^{\text{N}}(z,w)-G^{\text{D}}(z,w)\right)$
(78) $\displaystyle=\frac{4}{\pi}\lim_{z,w\to
0}\partial_{z}\bar{\partial}_{w}\left(\frac{1}{\pi}\log|1-z\bar{w}|-\frac{1}{4\pi}(|z|^{2}+|w|^{2})\right)$
(79) $\displaystyle=-\frac{2}{\pi^{2}}\lim_{z,w\to
0}\frac{1}{(1-z\bar{w})^{2}}=-\frac{2}{\pi^{2}}\ ,$ (80)
in agreement with our previous results. Here we used the explicit form of the
Green’s function eq. (48) and eq. (67).
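The final limit can be reproduced symbolically. A short cross-check of eqs. (78)-(80) (our own, treating $z,\bar z,w,\bar w$ as independent Wirtinger variables):

```python
import sympy as sp

z, zb, w, wb = sp.symbols('z zbar w wbar')  # Wirtinger variables

# G^N - G^D from eqs. (48) and (67), using log|u| = (log u + log ubar)/2
diffG = (sp.log(1 - z*wb) + sp.log(1 - zb*w)) / (2*sp.pi) \
        - (z*zb + w*wb) / (4*sp.pi)

# (4/pi) * d_z d_wbar (G^N - G^D) at the origin
ratio = (4/sp.pi) * sp.diff(diffG, z, wb).subs({z: 0, zb: 0, w: 0, wb: 0})
print(sp.simplify(ratio))  # -2/pi**2
```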
## 5 Application to D-branes
In this section, we apply our method to the computation of D-brane tensions.
Let us imagine a setup with a D$p$-brane in directions $0$ through $p$ (in
flat spacetime). Then without turning on any fluxes, the worldvolume action of
the D-brane is given by the DBI-action – the higher-dimensional generalization
of the Nambu-Goto action (in the Einstein frame):
$S_{\text{D$p$-brane}}=T_{p}\int\mathrm{d}^{p+1}x\
\sqrt{\det(G^{(p)})}=T_{p}\mathop{\text{vol}}(\mathrm{D}p)\ ,$ (81)
where $\mathop{\text{vol}}(\mathrm{D}p)$ is the volume of the
$(p+1)$-dimensional worldvolume that the D-brane occupies in spacetime and
$T_{p}$ is the D$p$-brane tension – the object we want to compute. We do not
turn on any $B$-field or
gauge field background values. The fact that
$\mathop{\text{vol}}(\mathrm{D}p)$ is infinite is not a problem in our
analysis. We could imagine that in a Euclidean spacetime, directions $0$
through $p$ are toroidally compactified so that the worldvolume becomes
finite. We already know that $T_{p}\propto g_{\text{s}}^{-1}$ (the closed
string coupling) since D-branes are non-perturbative objects. Hence the
partition function of the system is, to leading order in $g_{\text{s}}$,
given by
$Z_{\text{D$p$-brane}}=\mathrm{e}^{-S_{\text{D$p$-brane}}}=\mathrm{e}^{-T_{p}\mathop{\text{vol}}(\mathrm{D}p)}\
$ (82)
This partition function needs to be reproduced by a worldsheet computation. To
leading order in $g_{\text{s}}$, the worldsheet partition function of a single
open string ending on the D-brane is given by the disk partition function
$Z_{\text{disk}}$. To account for the fact that there can be arbitrarily many
strings present we need to exponentiate the single-string answer. So we
require
$\mathrm{e}^{-T_{p}\mathop{\text{vol}}(\mathrm{D}p)}\overset{!}{=}\mathrm{e}^{Z_{\text{disk}}+\mathcal{O}(1)}\
.$ (83)
Hence
$T_{p}=-\frac{Z_{\text{disk}}}{\mathop{\text{vol}}(\mathrm{D}p)}=-\frac{Z_{\text{CFT}}^{(p)}}{\mathop{\text{vol}}(\mathrm{D}p)\mathop{\text{vol}}(\mathrm{PSL}(2,\mathbb{R}))}\
.$ (84)
Here we used the above computations that showed that passing from the disk
partition function with $\mathrm{PSL}(2,\mathbb{R})$ gauged to the ungauged
CFT partition function gives rise to a relative factor given by the effective
volume of $\mathrm{PSL}(2,\mathbb{R})$. The superscript $(p)$ reminds us that
there are $p+1$ Neumann directions and $D-p-1=25-p$ Dirichlet directions in
the partition function.
We also note that it was crucial that the effective volume of
$\mathrm{PSL}(2,\mathbb{R})$ turned out to be negative in order to get a
positive D-brane tension.666One could repeat the same computation for
O-planes, whose tensions are computed by the projective plane
$\mathbb{RP}^{2}$ diagram. In this case, the residual symmetry group is
$\mathrm{SO}(3)$, which is compact. Correspondingly, the tension of O-planes
turns out to be _negative_.
### 5.1 $p$-dependence
As a first step in our computation, we fix the $p$-dependence of $T_{p}$. We
use the fact that the effective volume of $\mathrm{PSL}(2,\mathbb{R})$ can be
assigned a finite regularized value (the precise value becomes important only
in the next subsection) and arrive at
$\frac{T_{p+1}}{T_{p}}=\frac{Z_{\text{CFT}}^{(p+1)}}{Z_{\text{CFT}}^{(p)}\mathop{\text{vol}}(\mathbb{R})}=\frac{Z_{\text{CFT}}^{\text{N}}}{Z_{\text{CFT}}^{\text{D}}\mathop{\text{vol}}(\mathbb{R})}\
,$ (85)
where $Z_{\text{CFT}}^{\text{N,D}}$ are the CFT partition functions for a
single free boson. All other directions in the worldsheet partition function
as well as the ghost partition functions cancel. The volume appearing here is
the volume in the direction $p+1$. This will remove the zero mode from the
Neumann partition function. Let us compute the partition function on a
hemisphere of radius $R$ in zeta-function regularization Hawking:1976ja . The
non-zero modes lead to
$Z_{\text{CFT}}^{\text{N,D}}=\text{(zero
modes)}\times\prod_{\lambda}\sqrt{\frac{4\pi^{2}\alpha^{\prime}R^{2}}{\lambda}}\
.$ (86)
The product runs over all eigenvalues of $-\Delta$ on the unit sphere with the
correct boundary conditions. The zero mode for the Neumann condition leads to
the following contribution. By definition, we normalize the path integral as
follows: choose an orthonormal basis of eigenfunctions of $\Delta$; the path
integral is then simply given by the usual integral over all the coefficients
in this orthonormal basis. The constant function is hence normalized as
$\frac{1}{\sqrt{2\pi}R}$. Thus, the zero mode integral is
$\int_{-L\sqrt{2\pi}R}^{L\sqrt{2\pi}R}\mathrm{d}X_{0}=\sqrt{2\pi}R\mathop{\text{vol}}(\mathbb{R})\
,$ (87)
where we imagined that the D-brane extends over some interval $[-L,L]$. This
again does not matter for the final result; we only need the factor
$\sqrt{2\pi}R$ that arises from the correct normalization.
Finally, we note that the eigenvalues of the Laplacian $-\Delta$ are just
$\ell(\ell+1)$. For Neumann boundary conditions, they have multiplicity
$\ell+1$, whereas for Dirichlet boundary conditions, they have multiplicity
$\ell$. Thus,
$\displaystyle\frac{T_{p+1}}{T_{p}}=\sqrt{2\pi}R\prod_{\ell=1}^{\infty}\sqrt{\frac{4\pi^{2}\alpha^{\prime}R^{2}}{\ell(\ell+1)}}=\frac{1}{\sqrt{2\pi\alpha^{\prime}}}\prod_{\ell=1}^{\infty}\frac{1}{\sqrt{\ell(\ell+1)}}\
.$ (88)
Since the result is independent of $R$, we made the convenient choice
$R=\frac{1}{2\pi\sqrt{\alpha^{\prime}}}$. The infinite product can be
evaluated using zeta-function regularization.777Tree level partition functions
in zeta-function regularization in string theory were considered in
Grinstein:1986hd ; Douglas:1986eu ; Weisberger:1986qd . Define
$\zeta_{\text{N}/\text{D}}(s)=\sum_{\ell=1}^{\infty}\frac{1}{(\ell(\ell+1))^{s}}\
.$ (89)
We want to compute $\zeta_{\text{N}/\text{D}}^{\prime}(0)$ which enters the
regulated ratio of determinants. For this, we write
$\displaystyle\zeta_{\text{N}/\text{D}}(s)=\sum_{\ell=1}^{\infty}\left(\frac{1}{\ell^{2s}}-\frac{s}{\ell^{2s+1}}\right)+\sum_{\ell=1}^{\infty}\frac{1}{\ell^{2s}}\left(\frac{1}{(1+\ell^{-1})^{s}}-1+\frac{s}{\ell}\right)\
.$ (90)
The first sum can be expressed through the Riemann zeta-function, whereas the
second sum converges absolutely for $\mathop{\text{Re}}s>-\frac{1}{2}$. Hence
to evaluate the derivative at $s=0$, we can commute the derivative with the
sum. We obtain
$\zeta_{\text{N}/\text{D}}^{\prime}(0)=2\zeta^{\prime}(0)-\gamma+\sum_{\ell=1}^{\infty}\left(\frac{1}{\ell}-\log\left(1+\frac{1}{\ell}\right)\right)\
.$ (91)
Here, we used already that the Riemann zeta-function behaves near $s=1$ as
$\zeta(s)=\frac{1}{s-1}+\gamma+\mathcal{O}(s-1)\ ,$ (92)
where $\gamma$ is the Euler-Mascheroni constant. Furthermore, we can use that
$\zeta^{\prime}(0)=-\frac{1}{2}\log(2\pi)$. The remaining sum is seen to be
equal to $\gamma$ by definition:
$\sum_{\ell=1}^{n}\left(\frac{1}{\ell}-\log\left(1+\frac{1}{\ell}\right)\right)=\sum_{\ell=1}^{n}\frac{1}{\ell}-\log(n+1)\overset{n\to\infty}{\longrightarrow}\gamma\
,$ (93)
where we used that the logarithmic piece is a telescoping sum. Finally, we
simply obtain
$\zeta_{\text{N}/\text{D}}^{\prime}(0)=2\zeta^{\prime}(0)=-\log(2\pi)\ .$ (94)
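This value can be checked numerically from the convergent representation (91). A quick sketch (our own; the partial sum converges to $\gamma$ at a rate of order $1/N$):

```python
import math

# zeta'_{N/D}(0) = 2*zeta'(0) - gamma + sum_l (1/l - log(1 + 1/l)), eq. (91),
# with zeta'(0) = -log(2*pi)/2
N = 10**6
S = sum(1/l - math.log(1 + 1/l) for l in range(1, N + 1))

gamma = 0.5772156649015329  # Euler-Mascheroni constant
zeta_prime = -math.log(2*math.pi) - gamma + S

print(zeta_prime, -math.log(2*math.pi))  # agree to about 1e-6
```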
Putting the pieces together gives
$\frac{T_{p+1}}{T_{p}}=\frac{1}{\sqrt{2\pi\alpha^{\prime}}}\exp\left(\frac{1}{2}\zeta_{\text{N}/\text{D}}^{\prime}(0)\right)=\frac{1}{2\pi\sqrt{\alpha^{\prime}}}\
.$ (95)
### 5.2 Fixing normalization
After having fixed the $p$-dependence, we can compute the overall
normalization. We follow here the conventions of Polchinski Polchinski:1998rq
. We will compute the normalization for the D25-brane where we only impose
Neumann boundary conditions. In his notation,
$Z_{\text{CFT}}=C_{D_{2}}=\frac{1}{\alpha^{\prime}g_{\text{o}}^{2}}\ ,$ (96)
where $g_{\text{o}}$ is the open string coupling, compare to eq. (6.4.14) in
Polchinski. We also have the following relation of the gravitational coupling
$\kappa=\sqrt{8\pi G_{\text{N}}}$ to the open string coupling (eq. (6.6.18)
and eq. (8.7.28)):
$\kappa=2\pi
g_{\text{c}}=2^{-17}\pi^{-\frac{23}{2}}(\alpha^{\prime})^{-6}g_{\text{o}}^{2}\
.$ (97)
Finally, we should remember that the effective volume of the group
$\mathrm{PSL}(2,\mathbb{R})$ is $-2\pi^{2}$ in Polchinski’s normalization, see
also the discussion in Appendix B. This is because the normalization of the
ghosts leads to a different normalization of the measure on
$\mathrm{PSL}(2,\mathbb{R})$ than the one we were considering above. Thus we
can express the result for the D-brane tension as follows:
$T_{25}=\frac{1}{2\pi^{2}}Z_{\text{CFT}}=\frac{1}{2\pi^{2}\alpha^{\prime}g_{\text{o}}^{2}}=\frac{\sqrt{\pi}}{16\kappa}(4\pi^{2}\alpha^{\prime})^{-7}\
.$ (98)
For a general D$p$-brane, we combine this result with eq. (95) and obtain
$T_{p}=\frac{\sqrt{\pi}}{16\kappa}(4\pi^{2}\alpha^{\prime})^{\frac{11-p}{2}}\
.$ (99)
This agrees with eq. (8.7.26) of Polchinski and hence provides a simple way of
computing D-brane tensions.
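As a consistency check, eq. (99) reproduces the ratio in eq. (95) for every $p$; a short Python sketch (our own check, with $\kappa$ set to $1$ since it drops out of the ratio):

```python
import math

def tension(p, alpha):
    # D-brane tension, eq. (99): T_p = sqrt(pi)/(16 kappa) * (4 pi^2 alpha')^((11-p)/2),
    # with kappa = 1 since it cancels in the ratio T_{p+1}/T_p.
    return math.sqrt(math.pi) / 16.0 * (4 * math.pi**2 * alpha) ** ((11 - p) / 2)

alpha = 0.37  # arbitrary value of alpha'
for p in range(0, 25):
    ratio = tension(p + 1, alpha) / tension(p, alpha)
    assert abs(ratio - 1.0 / (2 * math.pi * math.sqrt(alpha))) < 1e-12  # eq. (95)
print("ratio matches 1/(2 pi sqrt(alpha')) for all p")
```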
## 6 Conclusions
We found that the disk partition function in string theory can be rigorously
computed using standard path integral methods. Using one of the bosons on the
worldsheet, one can further fix the residual gauge group
$\mathrm{PSL}(2,\mathbb{R})$. We gave two possible gauge choices: in Section 2
we imposed that when expanding the boson $X$ into spherical harmonics, one of
the coefficients is absent. In Section 3 we imposed that the derivative of $X$
vanishes at the origin of the worldsheet disk. Finally, in Section 4 we used a
more standard procedure and made use of the presence of a modulus in the
worldsheet CFT which allows one to relate the result to a one-point function
through conformal perturbation theory. In all these methods, the conclusion
was the same: The group $\mathrm{PSL}(2,\mathbb{R})$ behaves as if it had a
finite volume $-\frac{\pi^{2}}{2}$ in the path integral (for a suitable
normalization of the metric on the group). We finally saw in Section 5 that
the disk partition function gives a very direct derivation of the D-brane
tensions without the detours that are usually taken in the literature.
In the following we mention some open questions and future directions.
#### Infinite volume.
We have given three independent computations of the disk partition function,
and in our view they show quite convincingly that the gauge group
$\mathrm{PSL}(2,\mathbb{R})$ should be thought of as having finite volume.
However, conceptually, this is somewhat counterintuitive. One starts in CFT
with an integral over a function space $L^{2}(D)$ with Neumann or Dirichlet
boundary conditions which is finite after an appropriate regularization.
Gauging of $\mathrm{PSL}(2,\mathbb{R})$ identifies the gauge orbits, which are
non-compact slices inside $L^{2}(D)$. For a finite-dimensional integral, such
an identification would surely lead to a vanishing result, due to the
non-compactness of the gauge orbits. The finiteness of the
result for the path integral is hence very unexpected and a result of an
interesting interplay between the non-compactness of the gauge group and the
subtleties of the path integral.
#### Sphere partition function.
Given our success with the disk partition function, one should ask whether the
more interesting sphere partition function can be computed in a similar
manner. This does not seem to be the case, for several reasons.
1. 1.
Liu and Polchinski applied the same regularization procedure as for
$\mathrm{PSL}(2,\mathbb{R})$ to the case of $\mathrm{PSL}(2,\mathbb{C})$.
However, one also gets a logarithmic divergence in the cutoff that is akin to
the appearance of the conformal anomaly in holographic renormalization
Henningson:1998gx . This prevents one from assigning a well-defined value to
the volume.
2. 2.
The sphere partition function in flat space is expected to vanish. If we could
perform a similar gauge fixing procedure as explored in this article using one
flat spacetime direction, we would conclude that the sphere partition function
should vanish for every background with a flat direction in it. This is
not the case; counterexamples include $c=1$ string theory and
$\mathrm{AdS}_{3}\times\mathrm{S}^{3}\times\mathbb{T}^{4}$. Thus, one
spacetime direction should not be sufficient to fix the gauge.
3. 3.
The sphere partition function should vanish for a compact target space. This
is expected from supergravity where the on-shell action is a total derivative
and hence vanishes for a compact spacetime. However, the ungauged worldsheet
partition function is clearly non-vanishing and so
$\mathrm{PSL}(2,\mathbb{C})$ needs to have an infinite volume for consistency.
For these reasons, the computation of the sphere partition function is a much
more subtle problem than the disk partition function that we have treated in
this paper.
## Acknowledgements
We thank Raghu Mahajan for initial collaboration and very useful discussions.
We also thank Adam Levine and Edward Witten for discussions and Douglas
Stanford for comments on a preliminary draft of the paper. LE is supported by
the IBM Einstein Fellowship at the Institute for Advanced Study. SP
acknowledges the support from DOE grant DE-SC0009988.
## Appendix A Conventions
### A.1 The non-linear sigma-model
We take the non-linear sigma-model on the worldsheet Riemann surface $\Sigma$
to be
$S[g,X]=\frac{1}{4\pi\alpha^{\prime}}\int_{\Sigma}\mathrm{d}^{2}z\
\sqrt{g}\,g^{ab}\partial_{a}X^{\mu}\partial_{b}X^{\nu}G_{\mu\nu}(X)\ ,$ (100)
where $G_{\mu\nu}(X)$ is the spacetime metric. We will not have need of the
$B$-field and the dilaton, since we assume throughout the text that there is
one flat direction in spacetime that does not support a non-trivial $B$-field
or a non-constant dilaton.
Let us review the gauge symmetries of the worldsheet action:
1. 1.
Diffeomorphism symmetry:
$\displaystyle X(z)$ $\displaystyle\longmapsto X\circ\varphi^{-1}(z)\ ,$ (101)
$\displaystyle g_{ab}(z)$
$\displaystyle\longmapsto\frac{\mathrm{d}\varphi^{c}}{\mathrm{d}z^{a}}(\varphi^{-1}(z))\frac{\mathrm{d}\varphi^{d}}{\mathrm{d}z^{b}}(\varphi^{-1}(z))g_{cd}(\varphi^{-1}(z))\
.$ (102)
for a diffeomorphism $\varphi:\Sigma\longrightarrow\Sigma$.
2. 2.
Weyl symmetry:
$\displaystyle g_{ab}(z)$ $\displaystyle\longmapsto\lambda(z)g_{ab}(z)$ (103)
for some positive function $\lambda:\Sigma\longrightarrow\mathbb{R}_{>0}$.
Conformal gauge fixes $g=\hat{g}$ for some reference metric $\hat{g}$ on
$\Sigma$. In the case of $\Sigma=\mathrm{S}^{2}$ or $\Sigma=D$ in the open
string case, this gauge is always attainable. For example, in Section 2 we
have considered $D$ and $\hat{g}$ is given by eq. (2). For higher genus
surfaces there would be a moduli space of inequivalent metrics which is the
moduli space of Riemann surfaces. It is well-known that the Weyl symmetry is
anomalous unless we are considering the critical string. We will assume
throughout the text that the string is critical.
### A.2 Spherical harmonics on the disk
In this Appendix, we fix our conventions for spherical harmonics. They take
the following form on the unit disk parametrized by the complex coordinates
$(z,\bar{z})$:
$Y_{\ell,m}(z,\bar{z})=\sqrt{\frac{(2\ell+1)(\left|m\right|+\ell)!}{2\pi(\ell-\left|m\right|)!(|m|!)^{2}}}\,(z\bar{z}+1)^{\ell+1}\\
\times{}_{2}F_{1}(\ell+1,\ell+\left|m\right|+1;\left|m\right|+1;-z\bar{z})\begin{cases}z^{m}\ \quad\ m\geq 0\\ \bar{z}^{-m}\ \quad\ m\leq 0\ .\end{cases}$ (104)
Spherical harmonics satisfy
$\displaystyle\text{Dirichlet boundary conditions for }\ell+m$
$\displaystyle\in 2\mathbb{Z}+1\ ,$ (105a) $\displaystyle\text{Neumann
boundary conditions for }\ell+m$ $\displaystyle\in 2\mathbb{Z}\ .$ (105b)
They are orthonormal on the disk with the round metric (2) (and hence differ
from the usual normalization of spherical harmonics by a factor of
$\sqrt{2}$, since those are orthonormal on the sphere):
$\int_{D}\frac{4r\,\mathrm{d}r\,\mathrm{d}\theta}{(1+r^{2})^{2}}Y_{\ell,m}(re^{i\theta},re^{-i\theta})Y_{\ell^{\prime},m^{\prime}}(re^{-i\theta},re^{i\theta})=\delta_{\ell\ell^{\prime}}\delta_{mm^{\prime}}\
,$ (106)
where both $(\ell,m)$ and $(\ell^{\prime},m^{\prime})$ satisfy the same
boundary condition. Spherical harmonics are eigenfunctions of the Laplacian on
the disk with the upper hemisphere metric,
$\Delta
Y_{\ell,m}=\frac{(1+r^{2})^{2}}{4}\left(r^{-1}\partial_{r}\left(r\partial_{r}Y_{\ell,m}\right)+r^{-2}\partial_{\theta}^{2}Y_{\ell,m}\right)=-\ell(\ell+1)Y_{\ell,m}\
.$ (107)
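For $\ell=m=1$ the hypergeometric function collapses, since ${}_{2}F_{1}(2,3;2;-r^{2})=(1+r^{2})^{-3}$, giving $Y_{1,1}=\sqrt{3/\pi}\,z/(1+|z|^{2})$. The normalization (106) and the Neumann condition (105b) can then be verified numerically; a small Python sketch (our own check):

```python
import math

def Y11_abs2(r):
    # |Y_{1,1}|^2 with Y_{1,1} = sqrt(3/pi) * z / (1 + |z|^2); independent of the angle
    return (3.0 / math.pi) * r**2 / (1.0 + r**2) ** 2

def norm_sq(n=20000):
    # Midpoint rule for  int_0^1 dr int_0^{2pi} dtheta  4r/(1+r^2)^2 |Y_{1,1}|^2
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h
        total += 2 * math.pi * h * 4 * r / (1 + r**2) ** 2 * Y11_abs2(r)
    return total

print(norm_sq())  # -> 1 up to quadrature error, confirming eq. (106)

# Neumann condition (l + m = 2 is even): the radial profile r/(1+r^2)
# has vanishing normal derivative at the boundary r = 1.
f = lambda r: r / (1 + r**2)
eps = 1e-6
print((f(1 + eps) - f(1 - eps)) / (2 * eps))  # ~ 0
```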
## Appendix B The regularized volume of
$\boldsymbol{\mathrm{PSL}(2,\mathbb{R})}$
In this Appendix, we review the computation of the regularized volume of the
Möbius group $\mathrm{PSL}(2,\mathbb{R})$ following Liu:1987nz .
The group of Möbius transformations preserving the unit disk is
$\mathrm{PSL}(2,\mathbb{R})\cong\mathrm{PSU}(1,1)=\left\\{\begin{pmatrix}a&b\\\
\bar{b}&\bar{a}\end{pmatrix}\,\Big{|}\,|a|^{2}-|b|^{2}=1\right\\}\Big{/}\sim\
.$ (108)
Here, the equivalence $\sim$ identifies a matrix with its negative.
Let us parametrize
$a=\mathrm{e}^{i\phi}\cosh x\,,\qquad b=\mathrm{e}^{i\psi}\sinh x\,,\qquad
x\in[0,\infty)\,,\quad\phi\in[0,2\pi)\,,\quad\psi\in[0,\pi)\ .$ (109)
The range of $\psi$ indicates that we are dealing with
$\mathrm{PSL}(2,\mathbb{R})$ rather than the usual $\mathrm{SL}(2,\mathbb{R})$
in which case $\psi$ would have run from $0$ to $2\pi$. The formal volume of
the group is given by
$\mathrm{vol}\left(\mathrm{PSL}(2,\mathbb{R})\right)=\frac{1}{2}\int\
\mathrm{d}^{2}a\ \mathrm{d}^{2}b\ \delta\left(|a|^{2}-|b|^{2}-1\right)\ .$
(110)
In terms of $(x,\phi,\psi)$, the formal expression for the volume of
$\mathrm{PSL}(2,\mathbb{R})$ becomes
$\int_{0}^{\infty}\mathrm{d}x\int_{0}^{2\pi}\mathrm{d}\phi\int_{0}^{\pi}\mathrm{d}\psi\
\cosh x\sinh x\ .$ (111)
Of course the above is divergent, so we need to regulate it. The prescription
advocated by Polchinski and Liu Liu:1987nz is to cut off the $x$ integral at
some large radius $x=x_{*}$, which leads us to the following expression
$\mathrm{vol}\left(\mathrm{PSL}(2,\mathbb{R})\right)=2\pi^{2}\int_{0}^{x_{*}}\mathrm{d}x\
\cosh x\sinh x=\pi^{2}\sinh^{2}x_{*}\ .$ (112)
The area of the cutoff surface at $x=x_{*}$ is equal to
$A_{*}=2\pi^{2}\sinh x_{*}\cosh x_{*}\ .$ (113)
Thus we have
$\mathrm{vol}\left(\mathrm{PSL}(2,\mathbb{R})\right)=\frac{1}{2}\left[\sqrt{\pi^{4}+A_{*}^{2}}-\pi^{2}\right]\underset{A_{*}\to\infty}{\simeq}\frac{A_{*}}{2}-\frac{\pi^{2}}{2}+O\left(A_{*}^{-1}\right)\
.$ (114)
To obtain a finite answer for the volume one proceeds as in the gravitational
path integral and adds a local counterterm on the cutoff surface. Thus, the
regularized volume is defined as
$\mathrm{vol}\left(\mathrm{PSL}(2,\mathbb{R})\right)_{\mathrm{reg}}=\lim_{x_{*}\to\infty}\int_{\mathrm{G}_{*}}\mathrm{d}^{3}x\
\sqrt{g}-\frac{1}{2}\int_{\partial\mathrm{G}_{*}}\mathrm{d}^{2}x\ \sqrt{h}\ ,$
(115)
where $\mathrm{G}_{*}$ is the group manifold with a cutoff at $x_{*}$ and $h$
is the induced metric on the cutoff surface. In the case of
$\mathrm{PSL}(2,\mathbb{R})\cong\mathrm{PSU}(1,1)$, this leads to
$\mathrm{vol}\left(\mathrm{PSL}(2,\mathbb{R})\right)_{\mathrm{reg}}=-\frac{\pi^{2}}{2}\
,\quad\text{\`a la Liu-Polchinski Liu:1987nz .}$ (116)
Several comments are in order.
1. 1.
This result is independent of how exactly we choose the cutoff surface.
2. 2.
Since $\mathrm{PSL}(2,\mathbb{R})/\mathrm{U}(1)\cong\text{Euclidean
}\mathrm{AdS}_{2}$, this computation is exactly analogous (after integrating
out $\phi$) to the computation of the gravitational on-shell action in
$\mathrm{AdS}_{2}$.
3. 3.
This result depends of course on the normalization of the metric on
$\mathrm{PSL}(2,\mathbb{R})$. We have chosen a normalization such that
$\mathrm{PSU}(1,1)$ is realized as a quadric in
$\mathbb{C}^{2}\cong\mathbb{R}^{4}$ with unit radius. Equivalently our
normalization is fixed by requiring that the Ricci scalar is $\mathcal{R}=-6$
on the group manifold. This is not the normalization that is often employed in
string theory. Instead one parametrizes a group element
$\mathrm{PSL}(2,\mathbb{R})$ by the three images of $0$ and $1$ and $\infty$:
$\gamma(0)=x_{1}$, $\gamma(1)=x_{2}$ and $\gamma(\infty)=x_{3}$ and takes the
measure to be the one of the ghost 3-point function in the standard
normalization,
$\mathrm{d}\mu=\frac{\mathrm{d}x_{1}\,\mathrm{d}x_{2}\,\mathrm{d}x_{3}}{|(x_{1}-x_{2})(x_{2}-x_{3})(x_{3}-x_{1})|}\
.$ (117)
However, by relating the measure in the $(x,\psi,\phi)$ variables that we
considered above to the coordinates $(x_{1},x_{2},x_{3})$, one finds that the
two measures differ by a factor of 4. The relevant change of variables is
somewhat lengthy, but for example the change of variables $\theta_{i}=2\arctan
x_{i}$ transforms this measure to the canonical measure on the unit disc that
is also discussed in Liu:1987nz, eq. (7). In the measure that is defined
by the ghosts via eq. (117), the regularized volume of
$\mathrm{PSL}(2,\mathbb{R})$ instead works out to be
$4\times(-\frac{\pi^{2}}{2})=-2\pi^{2}$.
4. 4.
If one repeats the same computation for $\mathrm{PSL}(2,\mathbb{C})$ (which is
the relevant group for the sphere partition function) one finds an
obstruction. The reason is well known: this computation is essentially the
same as computing the on-shell action of gravity on Euclidean
$\mathrm{AdS}_{3}\cong\mathrm{PSL}(2,\mathbb{C})/\mathrm{SU}(2)$, which
suffers from the conformal anomaly. The conformal anomaly leads to a term that
is logarithmically divergent in the cutoff and which cannot be removed by any
local counterterm. Thus, one cannot give a sensible value for the volume of
$\mathrm{PSL}(2,\mathbb{C})$.
## Appendix C The “Signed” Faddeev-Popov procedure
Let us review the Faddeev-Popov procedure. The gauges that we have chosen
involve Gribov copies and we have to be careful to deal with them correctly.
This means that while the gauge is admissible, it usually is not uniquely so
Gribov:1977wm . Because of these copies, we will use a slightly different
version of the FP procedure that counts the intersections of the gauge orbits
with the gauge slice with a sign, according to their intersection number. The
procedure we will use was proposed in Hirschfeld:1978yq as a solution to the
problem of Gribov copies.
Let $\mathcal{G}$ be the gauge group in question, which in our case is
$\mathrm{PSL}(2,\mathbb{R})$. We want to compute
$Z=\int\frac{\mathscr{D}X}{\mathop{\text{vol}}(\mathcal{G})}\mathrm{e}^{-S[X]}\
,$ (118)
where the domain of the path integral is given by the appropriate function
space. We write the action of the gauge group on the fields $X$ as $g\cdot
X\equiv X^{g}$. We assume that the gauge group and the measure are invariant
under the gauge group (that is, the gauge symmetry is non-anomalous). So
$S[X^{g}]=S[X]$ and $\mathscr{D}X^{g}=\mathscr{D}X$. The latter assumption
requires of course again the inclusion of the other matter fields and the
ghosts on the worldsheet which we tacitly assume to be included in the
calculation. We also assume for simplicity that there are no large gauge
transformations, i.e. $\mathcal{G}$ is connected. This is the case in our
example.
One then starts by inserting the identity
$1=\int_{\mathcal{G}}\mathrm{d}g\ \Delta(X^{g})\delta(F(X^{g}))$ (119)
in the path integral. Here, $F(X)$ is the gauge-fixing condition that
(ideally) picks one representative of every gauge orbit. There are subtleties
when this is not the case.
For illustration (we thank Dalimil Mazáč for pointing us to a lecture by
Davide Gaiotto where a similar toy example is considered Gaiotto_lecture ),
let us consider the gauge group $\mathbb{R}$ with a gauge constraint, which is
implemented by the function $f(x)=0$. The analogous identity reads
$1=\int_{-\infty}^{\infty}\mathrm{d}x\ |f^{\prime}(x)|\delta(f(x))$ (120)
if $f(x)=0$ has only one solution at $x=x_{*}$. This is the situation
where the gauge condition picks a unique representative in the gauge orbit. If
$f(x)=0$ has multiple solutions, we have instead
$\int_{-\infty}^{\infty}\mathrm{d}x\
|f^{\prime}(x)|\delta(f(x))=\sum_{x:\,f(x)=0}1=\ \text{number of roots of}\ f\
.$ (121)
So we cannot directly insert this in the path integral and expect a simple
answer. Instead, we have to restrict the integral to a region where $f(x)=0$
has only one solution. This is the usual Gribov problem. Nonetheless, one can
bypass this problem if one assumes suitable boundary conditions on the
function $f$. For example, assume that $f$ has the following additional property
$\lim_{x\to\pm\infty}f(x)=\pm\infty\ .$ (122)
In this case, we have the identity
$\int_{-\infty}^{\infty}\mathrm{d}x\
f^{\prime}(x)\delta(f(x))=\sum_{x:\,f(x)=0}\mathrm{sgn}\left(f^{\prime}(x)\right)=1\
,$ (123)
where the last equality follows from the boundary condition eq. (122). In
fact, $1$ is the intersection number of the graph $y=f(x)$ with the line
$y=0$ in the topological sense, where intersections are counted with signs.
See Figure 2 for an illustration.
Figure 2: The intersection number is $1$, while the total number of roots is
$5$. The horizontal red line $y=0$ is the gauge-fixing line, while the black
curve is the function $f$. The gauge choice $f(x)=0$ yields $5$ roots, but
their total contribution to the signed intersection number is $1$.
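The toy example can be checked numerically. In the sketch below (our own illustration; the quintic is an arbitrary choice satisfying the boundary condition (122)), each sign change of $f$ on a fine grid contributes $\mathrm{sgn}(f^{\prime})=\pm 1$:

```python
def signed_intersections(f, a, b, n=99991):
    # Sum of sgn(f'(x)) over the simple roots of f in (a, b): each sign change
    # of f on a fine grid is an up-crossing (+1) or a down-crossing (-1).
    signed, total = 0, 0
    h = (b - a) / n
    prev = f(a)
    for i in range(1, n + 1):
        cur = f(a + i * h)
        if prev * cur < 0:
            total += 1
            signed += 1 if cur > 0 else -1
        prev = cur
    return signed, total

# A quintic with 5 real roots and f(-inf) = -inf, f(+inf) = +inf:
f = lambda x: x * (x - 1) * (x - 2) * (x - 3) * (x - 4)
print(signed_intersections(f, -0.7, 4.3))  # -> (1, 5): five roots, signed count 1
```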
In this toy setup, omitting the absolute value of $f^{\prime}(x)$ removes
the Gribov ambiguities. In what follows we will use this kind of signed FP
procedure, but it will involve more than one variable. Furthermore, we have
to justify that the intersection number is invariant over the space of
functions entering the path integral. This is not obvious, since the gauge
group is non-compact, and suitable boundary conditions analogous to the toy
example are required. The corresponding identity is
$\int_{\mathcal{G}}\mathrm{d}g\
\mathop{\text{det}}\mathop{\text{Jac}}F(X^{g})\,\delta(F(X^{g}))=\sum_{g:\,F(X^{g})=0}\mathop{\text{sgn}}\left(\mathop{\text{det}}\mathop{\text{Jac}}F(X^{g})\right)=\mathcal{I}\
.$ (124)
The RHS is in fact an intersection number, which has a chance to be
independent of $X$, so that the LHS can be inserted in the path integral. Let
us assume this for now; we will justify below that this is indeed the case for
the situation of interest.
Let us now insert
$1=\int_{\mathcal{G}}\mathrm{d}g\ \Delta(X^{g})\delta(F(X^{g}))\,,\quad\
\Delta(X)\equiv\frac{1}{\mathcal{I}}\mathop{\text{det}}\mathop{\text{Jac}}F(X)$
(125)
in the path integral to obtain
$\displaystyle Z$
$\displaystyle=\int\frac{\mathscr{D}X}{\mathop{\text{vol}}(\mathcal{G})}\int_{\mathcal{G}}\mathrm{d}g\
\Delta(X^{g})\delta(F(X^{g}))\mathrm{e}^{-S[X]}$ (126)
$\displaystyle=\int_{\mathcal{G}}\mathrm{d}g\int\frac{\mathscr{D}X^{g}}{\mathop{\text{vol}}(\mathcal{G})}\Delta(X^{g})\delta(F(X^{g}))\mathrm{e}^{-S[X^{g}]}$
(127)
$\displaystyle=\int_{\mathcal{G}}\mathrm{d}g\int\frac{\mathscr{D}X}{\mathop{\text{vol}}(\mathcal{G})}\Delta(X)\delta(F(X))\mathrm{e}^{-S[X]}\
.$ (128)
In the second line we used the invariance of various quantities under the
group action. In the third line we replaced the dummy variable $X^{g}$ with
$X$ everywhere. Now nothing depends on $g$ anymore and we can formally cancel
$\mathop{\text{vol}}(\mathcal{G})$ with $\int_{\mathcal{G}}\mathrm{d}g$. One
hence obtains
$Z=\int\mathscr{D}X\ \Delta(X)\delta(F(X))\mathrm{e}^{-S[X]}\ .$ (129)
The only difference to the standard Faddeev-Popov procedure is a missing
absolute value sign for
$\Delta(X)=\frac{1}{\mathcal{I}}\mathop{\text{det}}\mathop{\text{Jac}}F(X)$.
## References
* (1) E. D’Hoker and D. Phong, _The Geometry of String Perturbation Theory_ , _Rev. Mod. Phys._ 60 (1988) 917.
* (2) E. Witten, _Superstring Perturbation Theory Revisited_ , 1209.5461.
* (3) A. A. Tseytlin, _Renormalization of Mobius Infinities and Partition Function Representation for String Theory Effective Action_ , _Phys. Lett. B_ 202 (1988) 81.
* (4) J. Liu and J. Polchinski, _Renormalization of the Mobius Volume_ , _Phys. Lett. B_ 203 (1988) 39.
* (5) A. A. Tseytlin, _Mobius Infinity Subtraction and Effective Action in $\sigma$ Model Approach to Closed String Theory_, _Phys. Lett. B_ 208 (1988) 221.
* (6) H. Erbin, J. Maldacena and D. Skliros, _Two-Point String Amplitudes_ , _JHEP_ 07 (2019) 139 [1906.06051].
* (7) J. M. Maldacena and H. Ooguri, _Strings in ${\rm AdS}_{3}$ and ${\rm SL}(2,\mathds{R})$ WZW model. Part 3. Correlation functions_, _Phys. Rev._ D65 (2002) 106006 [hep-th/0111180].
* (8) J. Troost, _The $AdS_{3}$ central charge in string theory_, _Phys. Lett. B_ 705 (2011) 260 [1109.1923].
* (9) G. W. Gibbons and S. W. Hawking, _Action Integrals and Partition Functions in Quantum Gravity_ , _Phys. Rev. D_ 15 (1977) 2752.
* (10) J. Polchinski, _Dirichlet Branes and Ramond-Ramond Charges_ , _Phys. Rev. Lett._ 75 (1995) 4724 [hep-th/9510017].
* (11) J. A. Shapiro, _On the Renormalization of Dual Models_ , _Phys. Rev. D_ 11 (1975) 2937.
* (12) M. Ademollo, A. D’Adda, R. D’Auria, F. Gliozzi, E. Napolitano, S. Sciuto et al., _Soft Dilations and Scale Renormalization in Dual Theories_ , _Nucl. Phys. B_ 94 (1975) 221.
* (13) S. W. Hawking, _Zeta Function Regularization of Path Integrals in Curved Space-Time_ , _Commun. Math. Phys._ 55 (1977) 133.
* (14) B. Grinstein and M. B. Wise, _Vacuum Energy and Dilaton Tadpole for the Unoriented Closed Bosonic String_ , _Phys. Rev. D_ 35 (1987) 655.
* (15) M. R. Douglas and B. Grinstein, _Dilaton Tadpole for the Open Bosonic String_ , _Phys. Lett. B_ 183 (1987) 52.
* (16) W. I. Weisberger, _Normalization of the Path Integral Measure and the Coupling Constants for Bosonic Strings_ , _Nucl. Phys. B_ 284 (1987) 171.
* (17) J. Polchinski, _String theory. Vol. 1: An introduction to the bosonic string_ , Cambridge Monographs on Mathematical Physics. Cambridge University Press, 12, 2007, 10.1017/CBO9780511816079.
* (18) M. Henningson and K. Skenderis, _The Holographic Weyl Anomaly_ , _JHEP_ 07 (1998) 023 [hep-th/9806087].
* (19) V. N. Gribov, _Quantization of Nonabelian Gauge Theories_ , _Nucl. Phys. B_ 139 (1978) 1.
* (20) P. Hirschfeld, _Strong Evidence That Gribov Copying Does Not Affect Gauge Theory Functional Integral_ , _Nucl. Phys. B_ 157 (1979) 37.
* (21) D. Gaiotto, “String theory.” http://pirsa.org/displayFlash.php?id=15010066, 2015.
# Box-based Refinement for Weakly Supervised
and Unsupervised Localization Tasks
Eyal Gomel
Tel Aviv University
<EMAIL_ADDRESS>
Tal Shaharabany
Tel Aviv University
<EMAIL_ADDRESS>
Lior Wolf
Tel Aviv University
<EMAIL_ADDRESS>
###### Abstract
It has been established that training a box-based detector network can enhance
the localization performance of weakly supervised and unsupervised methods.
Moreover, we extend this understanding by demonstrating that these detectors
can be utilized to improve the original network, paving the way for further
advancements. To accomplish this, we train the detectors on top of the network
output instead of the image data and apply suitable loss backpropagation. Our
findings reveal a significant improvement in phrase grounding for the “what is
where by looking” task, as well as various methods of unsupervised object
discovery. Our code is available at https://github.com/eyalgomel/box-based-
refinement.
## 1 Introduction
In the task of unsupervised object discovery, one uses clustering methods to
find a subset of the image in which the patches are highly similar, while
being different from patches in other image locations. The similarity is
computed using the embedding provided, e.g., by a transformer $f$ that was
trained using a self-supervised loss. The grouping in the embedding space does
not guarantee that a single continuous image region will be selected, and
often one region out of many is selected, based on some heuristic.
It has been repeatedly shown [47, 58, 5] that by training a detection network,
such as faster R-CNN[39], one can improve the object discovery metrics. This
subsequent detector has two favorable properties over the primary discovery
method: it is bounded to a box shape and shares knowledge across the various
samples.
Figure 1: Examples of refining localization networks. The top row depicts an
example of unsupervised object discovery. (a) the input image (b) the
normalized cut eigenvector using the original DINO [9] network $f$, as
extracted with the TokenCut[58] method. (c) the same eigenvector using the
refined DINO network $f^{h}$ our method produces. The bottom row contains
phrase grounding results (d) the original input corresponding to the phrase
“two football teams”, (e) the localization map using the image-text network
$g$ of [42], and (f) the localization map using the refined $g^{h}$.
In this work, we show that such a detector can also be used to improve the
underlying self-supervised similarity. This is done by training a detector
network $h$ not on top of the image features, as was done previously, but on
the output map of network $f$. Once the detector network $h$ is trained, we
freeze it and use the same loss that was used to train the detector network to
refine the underlying representation of $f$.
At this point, the detector network serves as a way to link a recovered set of
detection boxes to an underlying feature map of $f$. Without it, deriving a
loss would be extremely challenging, since the process used for extracting the
detection box from $f$ is typically non-differentiable.
The outcome of this process is a refined network $f^{h}$, obtained by fine-
tuning $f$ using network $h$. The finetuned network produces a representation
that leads to a spatially coherent grouping of regions, as demonstrated in
Fig. 1(a-c).
A similar process is used for the phrase grounding problem. In this case,
given a textual phrase, a network $g$ is trained to mark a matching image
region. Supervision is performed at the image level, without localization
information, a process known as weakly supervised training. In this case, the
same loss is used to train a network $h$ on a set of extracted regions, and
then to refine $g$.
Our method exhibits remarkable versatility, as demonstrated through extensive
testing on multiple benchmarks, two phrase grounding tasks, and various
unsupervised object discovery methods. In all cases, our method consistently
achieves significant improvements across all metrics, surpassing the
performance of state-of-the-art methods. The approach introduced here trains a
detector on the network output rather than the image data. This strategy,
distinct from previous work, allows us to refine the primary network
independently and further enhance its performance.
Figure 2: An illustration of our method. The phrase grounding network $g$ is
given the input image $I$ and a text phrase $t$ and produces a heatmap $M$. A
heuristic (blue line) then produces a set of bounding boxes $B$ from this map
that are used to train a detection network $h$, which outputs a set of boxes
$\bar{B}$. The loss that is used is applied after applying the optimal
permutation.
## 2 Related work
Our method is tested on two localization tasks that are not fully supervised:
unsupervised object discovery (detection) and phrase grounding. Numerous
studies address unsupervised object discovery, along with related detection
and segmentation tasks, using a variety of techniques to discover and
localize objects in images (and videos) without explicit object annotations.
In particular, deep learning-
based approaches have been combined with clustering-based methods [64, 49, 45,
57], generative models [56, 4, 33], and object-level grouping [46, 3]. Two of
the methods we build upon in our experiments, LOST [47] and TokenCUT [58],
employ clustering methods on top of the DINO network [9], while MOVE [5] uses
a segmentation head on top of the DINO representation.
In the phrase grounding task, text phrases are associated with specific image
locations [62, 26]. When relying on weakly supervised learning, the locations
are not given during training, only during test time [1]. A common way to link
the phrase to the image is to embed both the text and image patches in a
shared embedding space [14, 41, 27]. Recent contributions employ CLIP [38] for
linking text with image locations since it has powerful text and image
encoders and relies on weakly supervised training [31, 42]. It can, therefore,
be used both to represent the text and to obtain a training signal for the
phrase grounding network.
We are not aware of other work in which one network $f$ trains another network
$h$, which in turn is used to refine the first network. There are
contributions in which two networks are trained symbiotically at the same
time. For example, for the task of semi-supervised semantic segmentation, two
differently initialized networks were trained jointly, with each network
creating pseudo-labels for the other [13]. The DINO unsupervised
representation learning method [9] employs a self-distillation process in
which the teacher is a combination of frozen student networks.
The role of $h$ in propagating a detection-based loss back to $f$ is
reminiscent of other cases in which a network is used for the purpose of
supervising another, e.g., GANs [23]. In other cases, an auxiliary network can
be trained in a supervised way to provide a differentiable approximation of a
non-differentiable black box [35].
## 3 The Phrase Grounding Method
While we apply the same method for multiple applications, each application
relies on a different configuration of baseline networks. Therefore, to
minimize confusion, we first focus on phrase grounding. Applying our method to
unsupervised object discovery is explored in Sec. 4.
In phrase grounding, we refine a pre-trained localization model ($g$) using a
detection model ($h$) that we add. $h$ is trained based on $g$ and then the
predictions of $h$, now serving as a teacher, are used to finetune network
$g$, which becomes the student. This cyclic process is illustrated in Fig. 2
and serves to make $g$ more spatially coherent, see Fig. 1(d-f).
The phrase grounding network $g$ is based on an encoder-decoder architecture
adapted to support text-based conditioning [42]. The input signals are (i) a
text $t$ and (ii) an RGB image $I\in R^{3\times W\times H}$. It outputs a
localization heatmap $M$ that identifies image regions in $I$ that correspond
to the part of the scene described by $t$.
$M=g(I,Z_{t}(t))\,,$ (1)
where $M\in R^{W\times H}$ contains values between 0 and 1, and $Z_{t}(t)$ is
a text embedding of the input text $t$, given by the text encoder of CLIP
[37]. Our refinement algorithm uses $g$ with the pre-trained weights published
by [43].
Our method trains a model $h$ to generate a set of bounding boxes $\bar{B}$
that match the localization map $M$.
$\bar{B}=h(M)$ (2)
Thus $h$ provides a feedforward way to generate bounding boxes from $M$. The
alternative provided, for example, by [43] is a multi-step process in which
$M$ is first converted to a binary mask by zeroing out any pixel value lower
than half the mask’s max value [36, 17, 16]. Next, contours are extracted from
the binary mask using the method of [51]. For each detected contour, a
bounding box is extracted, whose score is given by taking the mean value of
$M$ for that bounding box. Finally, a non-maximal suppression is applied over
the boxes with an overlap of at least 0.05 IOU, filtering out low-score boxes
(0.5 of the maximal score).
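As an illustration, the multi-step heuristic above can be sketched in plain Python. This is our own minimal reimplementation, not the authors' code: the function name is ours, and 4-connected components stand in for the contour extraction of [51], while the thresholds mirror the ones quoted above:

```python
from collections import deque

def boxes_from_heatmap(M, iou_thr=0.05, score_frac=0.5):
    # Threshold at half the max value, group surviving pixels into 4-connected
    # components (stand-in for the contour step of [51]), score each bounding
    # box by the mean of M inside it, then apply NMS and score filtering.
    H, W = len(M), len(M[0])
    thr = 0.5 * max(max(row) for row in M)
    mask = [[M[y][x] >= thr for x in range(W)] for y in range(H)]
    seen = [[False] * W for _ in range(H)]
    boxes = []  # (x0, y0, x1, y1, score), half-open pixel coordinates
    for sy in range(H):
        for sx in range(W):
            if not mask[sy][sx] or seen[sy][sx]:
                continue
            q, comp = deque([(sy, sx)]), []
            seen[sy][sx] = True
            while q:  # BFS over one connected component
                y, x = q.popleft()
                comp.append((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and mask[ny][nx] and not seen[ny][nx]:
                        seen[ny][nx] = True
                        q.append((ny, nx))
            x0 = min(x for _, x in comp); x1 = max(x for _, x in comp) + 1
            y0 = min(y for y, _ in comp); y1 = max(y for y, _ in comp) + 1
            score = sum(M[y][x] for y in range(y0, y1) for x in range(x0, x1))
            boxes.append((x0, y0, x1, y1, score / ((x1 - x0) * (y1 - y0))))

    def iou(a, b):
        iw = max(0, min(a[2], b[2]) - max(a[0], b[0]))
        ih = max(0, min(a[3], b[3]) - max(a[1], b[1]))
        inter = iw * ih
        area = lambda t: (t[2] - t[0]) * (t[3] - t[1])
        return inter / (area(a) + area(b) - inter)

    boxes.sort(key=lambda t: -t[4])
    kept = []
    for b in boxes:  # non-maximal suppression at IoU >= iou_thr
        if all(iou(b, k) < iou_thr for k in kept):
            kept.append(b)
    if kept:  # drop boxes below score_frac of the best score
        kept = [b for b in kept if b[4] >= score_frac * kept[0][4]]
    return kept
```

On a heatmap with two well-separated blobs this returns one scored box per blob; the detector $h$ collapses exactly this kind of pipeline into a single forward pass.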
$h$ replaces this process with a single feed-forward pass. However, its main
goal is to provide a training signal for refining $g$. This is done by
considering the output of $h$ as foreground masks and considering the values
of $g$’s output inside and outside these masks.
### 3.1 Training $h$
The network $h$ is trained to predict a fixed number $k$ of bounding boxes
$\bar{B}$. Each box is represented as a vector $b_{i}\in\mathbb{R}^{6}$ that
contains the center coordinates of the box, its width, and its height. In
addition, $h$ outputs a logit value per box, denoting whether an object is
expected within it.
Training is performed while maintaining the semi-supervised nature of the
phrase grounding method. The bounding boxes used for training $h$ are
extracted using network $g$ and the method of Suzuki et al. [51], as explained
above. We call the set of resulting bounding boxes $B$.
Following Carion et al. [8], we train $h$ using a loss $L_{h}$ that has three
terms: (1) a classification loss $L_{\text{cls}}$, (2) an $l1$ loss
$L_{\text{box}}$, and (3) the GIoU [40] loss $L_{\text{giou}}$.
If the number of objects $k$ returned by $h$ is smaller than the number of
target boxes $|B|$, the $k$ boxes with the highest confidence are used. In the
opposite case, $B$ is padded with zero-coordinate vectors with a “no object”
label.
For computing the loss, one assumes a one-to-one correspondence between the
ground truth objects and the detected boxes. This matching is obtained by
minimizing $L_{h}$ over all possible permutations, using the Hungarian
algorithm [30] for minimal cost bipartite matching. Denote as
$B^{\prime}=[b_{0}^{\prime},b_{1}^{\prime},...,b_{k-1}^{\prime}]$ the matrix
that holds the set of boxes $B$ ordered optimally.
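This matching step can be sketched with SciPy's Hungarian solver. For brevity, the cost below is only the $l1$ box term, whereas the method minimizes the full $L_{h}$; the function name is ours:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_boxes(B, B_bar):
    """Order the target boxes B against the k predictions B_bar by
    minimizing a pairwise cost (here just the l1 term, for illustration).
    Unmatched prediction slots receive zero-coordinate targets."""
    cost = np.abs(B[:, None, :] - B_bar[None, :, :]).sum(-1)  # |B| x k
    rows, cols = linear_sum_assignment(cost)
    B_prime = np.zeros_like(B_bar)
    B_prime[cols] = B[rows]      # b'_i is the target matched to prediction i
    return B_prime
```

The zero rows of the returned matrix play the role of the “no object” padding described above.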
The classification loss $L_{\text{cls}}$ is a negative log-likelihood loss:
$L_{\text{cls}}=\sum_{i=0}^{k-1}{-\log{\bar{p}_{i}}}$
(3)
where $\bar{p}_{i}$ is the predicted probability, derived from the box logit,
that an object exists in box $i$.
$L_{\text{box}}$ is applied directly to the center coordinates, width, and
height of the bounding boxes:
$L_{\text{box}}=\sum_{i=0}^{k-1}{\|b_{i}^{\prime}-\bar{b_{i}}\|_{1}}$
(4)
While the loss $L_{\text{box}}$ is affected by the size of the box, the second
regression loss, $L_{\text{giou}}$, is scale-invariant:
$L_{\text{giou}}(B^{\prime},\bar{B})=\sum_{i=0}^{k-1}{1-\left(\frac{\bigl{|}\bar{b_{i}}\cap
b_{i}^{\prime}\bigr{|}}{\bigl{|}\bar{b_{i}}\cup
b_{i}^{\prime}\bigr{|}}-\frac{\bigl{|}c_{i}\setminus(\bar{b_{i}}\cup
b_{i}^{\prime})\bigr{|}}{\bigl{|}c_{i}\bigr{|}}\right)}$ (5)
where $c_{i}$ is the smallest box containing both $b^{\prime}_{i}$ and
$\bar{b_{i}}$. All losses are normalized by the number of boxes.
The final loss is a weighted sum of the three terms:
$\begin{split}L_{h}(B^{\prime},\bar{B})=\lambda_{1}L_{\text{cls}}(B^{\prime},\bar{B})+\lambda_{2}L_{\text{box}}(B^{\prime},\bar{B})+\\\
\lambda_{3}L_{\text{giou}}(B^{\prime},\bar{B})\end{split}$ (6)
where $\lambda_{1}=2,\lambda_{2}=5,\lambda_{3}=2$. These weights are similar
to those used in previous work, with extra emphasis on $\lambda_{1}$ (a value
of 2 instead of 1); no attempt was made to optimize them beyond inspecting a
few training images.
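For concreteness, the GIoU term of Eq. (5) for a single matched pair can be sketched as follows (corner-format boxes assumed; the helper name is ours):

```python
def giou_loss(b, b_bar):
    """GIoU loss of Eq. (5) for one matched pair; boxes are (x0, y0, x1, y1)."""
    inter_w = max(0.0, min(b[2], b_bar[2]) - max(b[0], b_bar[0]))
    inter_h = max(0.0, min(b[3], b_bar[3]) - max(b[1], b_bar[1]))
    inter = inter_w * inter_h
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(b) + area(b_bar) - inter
    # c: smallest box enclosing both boxes
    c = (min(b[0], b_bar[0]), min(b[1], b_bar[1]),
         max(b[2], b_bar[2]), max(b[3], b_bar[3]))
    giou = inter / union - (area(c) - union) / area(c)
    return 1.0 - giou
```

The loss is 0 for identical boxes and approaches 2 for distant, disjoint boxes, which is what makes it a useful gradient signal even without overlap.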
### 3.2 Refining $g$
For finetuning $g$, we use multiple loss terms, including the same terms used
for training $h$, with one modification. Instead of computing the loss only
between the two sets of boxes, we also compute the union box of the
ground-truth boxes, $BU=Union(B)$. With probability $0.5$, we use $BU$
instead of $B$ when calculating the loss (in this case, the matching is done
with a single box only):
$L_{h_{BU}}=\begin{cases}L_{h}(BU,\bar{B}),&\text{if }p\geq 0.5\\\
L_{h}(B,\bar{B}),&\text{otherwise}\end{cases},p\sim\text{Uniform}[0,1]$ (7)
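The union box $BU$ is simply the tightest box enclosing all of $B$; a minimal sketch, assuming corner-format boxes (the helper name is ours):

```python
def union_box(B):
    """Smallest box covering all boxes in B; boxes are (x0, y0, x1, y1).
    Used with probability 0.5 in place of B when computing the loss."""
    x0 = min(b[0] for b in B); y0 = min(b[1] for b in B)
    x1 = max(b[2] for b in B); y1 = max(b[3] for b in B)
    return (x0, y0, x1, y1)
```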
In addition to the bounding box loss, we use losses for the localization maps
used by [43] to train $g$. This prevents the fine-tuned model from following
$h$ “blindly”, without considering the underlying data.
The relevancy map loss uses a CLIP-based relevancy [11] to provide a rough
estimate of the localization map:
$L_{\text{rmap}}(I,H)=\|H-g^{h}(I,Z^{T})\|^{2},$ (8)
where $H$ is the relevancy map and $g^{h}$ is the refined network $g$. The
foreground loss $L_{\text{fore}}(I,t)$ is given by
$L_{\text{fore}}(I,t)=-CLIP(g^{h}(I,Z^{T})\odot I,t),$ (9)
where $\odot$ is the Hadamard product. This loss maximizes the CLIP
similarity between the mask’s foreground region and the input text $t$.
Conversely, the background loss $L_{\text{back}}(I,t)$ minimizes the CLIP
similarity between the background and the text $t$:
$L_{back}(I,t)=CLIP((1-g^{h}(I,Z^{T}))\odot I,t).$ (10)
The overall loss is given by:
$\begin{split}L_{g}=L_{h_{BU}}+\lambda_{4}L_{reg}(I,g^{h})+\lambda_{5}L_{\text{rmap}}(I,H)+\\\
\lambda_{6}L_{\text{back}}(I,t)+\lambda_{7}L_{\text{fore}}(I,t)\end{split}$
where $\lambda_{4}=1,\lambda_{5}=64,\lambda_{6}=2,\lambda_{7}=1$. These
hyperparameters reflect the values assigned in previous work, multiplied by 4
in order to approximately balance the loss arising from $h$ with the other
loss terms.
#### Architecture
$h$ is a VGG16 [48], pre-trained on the ImageNet [18] dataset. In order to
apply it to the single-channel heatmap $M\in R^{1\times W\times H}$, this
input is repeated three times across the channel dimension. The last layer of
the classifier is replaced by a linear layer of dimensions $4096\times(6k)$,
where $k$ is the number of boxes predicted by $h$.
## 4 Unsupervised object discovery
For the task of unsupervised object discovery, a vision transformer $f$ is
pretrained in a self-supervised manner, using DINO [9]. It is then used to
extract features $F$ from an input image $I\in R^{3\times W\times H}$
$F=\bar{f}(I)$ (11)
where $\bar{f}$ denotes the latent variables from the transformer $f$. $F\in
R^{d\times N}$, where $d$ is the feature dimension and $N$ denotes the number
of patches of $f$. For each patch $p$, we denote by $f_{p}\in R^{d}$ the
associated feature vector. Bounding boxes based on these features are
extracted using unsupervised techniques, such as LOST [47], TokenCut [58] or
MOVE [5].
LOST builds a patch similarities graph $\mathcal{G}$, with a binary symmetric
adjacency matrix $A\,{=}\,(a_{pq})_{1\leq p,q\leq N}\in\\{0,1\\}^{N\times N}$
where
$\displaystyle a_{pq}=\left\\{\begin{array}[]{ll}1&\text{if
}f_{p}^{\top}{f_{q}}\geq 0,\\\ 0&\text{otherwise}.\end{array}\right.$ (14)
An initial seed $p^{*}$ is selected as the patch with the smallest number of
connections to other patches:
$\displaystyle
p^{*}=\operatorname*{arg\,min}_{p\in\\{1,\ldots,N\\}}d_{p}\quad\text{where}\quad
d_{p}=\sum_{q=1}^{N}a_{pq}.$
(15)
This is based on the assumptions that connectivity implies belonging to the
same object, since patch embeddings are similar for the same object, and that
each object occupies less area than the background.
Denote the list of $a$ patches with the lowest degree $d_{p}$ as
$\mathcal{D}_{a}$. LOST then considers the subset of $\mathcal{D}_{a}$ that is
positively correlated, in the embedding space, with $p^{*}$
$\mathcal{S}=\\{q\in\mathcal{D}_{a}|f_{q}^{\top}{f_{p^{*}}}\geq 0\\}$ (16)
This set is then expanded, obtaining
$\mathcal{S}^{+}=\\{q|\sum_{p\in\mathcal{S}}f_{q}^{\top}{f_{p}}\geq 0\\}$ (17)
We note that in the image itself, the patches of $\mathcal{S}^{+}$ can be part
of multiple separate regions. The method selects the connected component
(4-connectivity in the image space) in $\mathcal{S}^{+}$ that contains the
seed $p^{*}$ as its single discovered object.
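The seed selection and expansion steps above can be sketched as follows; this is an illustrative NumPy reimplementation (variable and function names are ours):

```python
import numpy as np

def lost_seed_and_expand(F, a):
    """Sketch of LOST: build the binary similarity graph, pick the
    least-connected patch as seed, and expand it (Eqs. 14-17).
    F has shape (d, N), one feature column per patch."""
    A = (F.T @ F >= 0).astype(int)      # a_pq = 1 iff f_p . f_q >= 0
    d_p = A.sum(axis=1)                 # patch degrees
    seed = int(np.argmin(d_p))          # p*: patch with smallest degree
    D_a = np.argsort(d_p)[:a]           # the a lowest-degree patches
    # S: low-degree patches positively correlated with the seed
    S = [q for q in D_a if F[:, q] @ F[:, seed] >= 0]
    # S+: all patches positively correlated, in sum, with S
    S_plus = [q for q in range(F.shape[1])
              if sum(F[:, q] @ F[:, p] for p in S) >= 0]
    return seed, S_plus
```

On synthetic features where a small "object" cluster anti-correlates with a large "background" cluster, the seed falls in the object cluster and the expansion recovers exactly its patches, matching the assumptions stated above.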
TokenCut [58] employs a slightly different adjacency matrix $A$, based on the
cosine similarity between pairs of feature vectors:
$\displaystyle A_{p,q}=\begin{cases}1,&\mbox{if
}\frac{f_{p}^{\top}f_{q}}{\lVert f_{p}\rVert_{2}\lVert
f_{q}\rVert_{2}}\geq\tau\\\ \epsilon,&\mbox{else}\end{cases}\,,$ (18)
where $\tau=0.2$ and $\epsilon=10^{-5}$.
The normalized cut method [44] is applied to the graph to achieve object
discovery. This method clusters all patches into two groups, based on the 2nd
smallest eigenvector of the normalized adjacency matrix, and selects the group
with the maximal absolute value in this eigenvector. The bounding box of the
patches in this group is returned.
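The normalized cut step can be sketched with a generalized eigensolver. This is an illustrative reimplementation (names are ours), which solves $(D-A)v=\lambda Dv$ and selects the side of the second-smallest eigenvector holding the maximal absolute value:

```python
import numpy as np
from scipy.linalg import eigh

def tokencut_group(F, tau=0.2, eps=1e-5):
    """Sketch of TokenCut: cosine-similarity graph (Eq. 18), then a
    normalized cut via the 2nd-smallest generalized eigenvector.
    F has shape (d, N); returns indices of the selected patch group."""
    Fn = F / np.linalg.norm(F, axis=0, keepdims=True)
    A = np.where(Fn.T @ Fn >= tau, 1.0, eps)
    D = np.diag(A.sum(axis=1))
    _, vecs = eigh(D - A, D)        # solves (D - A) v = lam D v, ascending
    v = vecs[:, 1]                  # 2nd-smallest eigenvector
    pos, neg = v >= 0, v < 0
    # pick the side holding the maximal absolute eigenvector value
    pick = pos if np.abs(v[pos]).max(initial=0) >= np.abs(v[neg]).max(initial=0) else neg
    return np.nonzero(pick)[0]
```

For two well-separated feature clusters, the eigenvector is constant within each cluster and changes sign between them, so the returned group is exactly one cluster.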
MOVE [5], unlike the two previous methods, trains a segmentation network on
top of the latent transformer features $F$. The output of this network is a
segmentation map $M\in R^{W\times H}$, which is binarized with a threshold of
0.5, followed by the detection of connected components [7]. The bounding box
of the largest connected component is then selected.
### 4.1 Training $h$ and refining $f$
The training process of detector $h$ follows the details described in Sec.
3.1, with a few minor changes. There is a single ground-truth bounding box
$B$, extracted from an image $I$ by model $f$ using the unsupervised
techniques described above. Using the same loss term $L_{h}$, $h$ is optimized
to minimize $L_{h}(B,\bar{B})$, where $\bar{B}$ are the $k$ predicted boxes.
To maintain the unsupervised nature of the task, $h$ is initialized with
weights from the self-supervised method DINO[9], using a ResNet-50[25]
backbone. In the phrase grounding case and for MOVE [5], the input of $h$ is
the map $M$; for the non-trainable object discovery methods, where no such map
$M$ exists, the feature map $F$ is used instead.
For refining the DINO-trained transformer model $f$, we use the same loss term
$L_{h}$ as is used in phrase grounding and add loss terms to prevent it from
diverging too far. While in phrase grounding we used the loss terms that were
used to train the phrase grounding network, here, for runtime considerations,
we explicitly keep the transformer $f$ in the vicinity of the DINO-pretrained
network.
The loss term is defined as the distance between the output of $f$ and that of
the refined model $f^{h}$
$\displaystyle L_{f}(I)=\|f(I)-f^{h}(I)\|^{2},$ (19)
Both methods [47, 58] are improved by training a Class Agnostic Detector (CAD)
on the extracted bounding boxes. Faster R-CNN [39] is used for CAD, with the
R50-C4 model of Detectron2 [60] based on a ResNet-50[25] backbone. This
backbone is pre-trained with DINO self-supervision. Following this process, we
train an identical CAD using the refined model $f^{h}$. Note that CAD and our
method are complementary. While both train with the same pseudo-labels, CAD is
trained on the original image and cannot backpropagate a loss to the
underlying network $f$.
Figure 3: Sample phrase grounding results for the phrases “a man”, “a mountain biker”, “several individuals”, “a boy”, “muzzles”, and “a very young girl”: (a) the phrase, (b) the input image, (c) results (black) of network $g$ [43] compared to the ground-truth box (green), (d) the same for the refined network $g^{h}$.
Figure 4: Single object discovery results. (a) the input image; (b) the
inverse degree of the LOST [47] graph obtained over $f$ (published model),
where the red bounding box comes directly from LOST and the white one is the
prediction of the CAD trained on top of it; (c) the same with our refined
model $f^{h}$ and LOST; (d) the same as (b), but using $f$ together with
TokenCut [58] (published weights; the CAD model was not released and is not
shown); (e) the results of $f^{h}$ and TokenCut.
Method | Backbone | VG trained | MS-COCO trained
---|---|---|---
VG | Flickr | ReferIt | VG | Flickr | ReferIt
Baseline | Random | 11.15 | 27.24 | 24.30 | 11.15 | 27.24 | 24.30
Baseline | Center | 20.55 | 47.40 | 30.30 | 20.55 | 47.40 | 30.30
GAE [10] | CLIP | 54.72 | 72.47 | 56.76 | 54.72 | 72.47 | 56.76
FCVC [22] | VGG | - | - | - | 14.03 | 29.03 | 33.52
VGLS [61] | VGG | - | - | - | 24.40 | - | -
TD [62] | Inception-2 | 19.31 | 42.40 | 31.97 | - | - | -
SSS [26] | VGG | 30.03 | 49.10 | 39.98 | - | - | -
MG [1] | BiLSTM+VGG | 50.18 | 57.91 | 62.76 | 46.99 | 53.29 | 47.89
MG [1] | ELMo+VGG | 48.76 | 60.08 | 60.01 | 47.94 | 61.66 | 47.52
GbS [2] | VGG | 53.40 | 70.48 | 59.44 | 52.00 | 72.60 | 56.10
WWbL [43] | CLIP+VGG | 62.31 | 75.63 | 65.95 | 59.09 | 75.43 | 61.03
Ours | CLIP+VGG | 63.51 | 78.32 | 67.33 | 60.05 | 77.19 | 63.48
Table 1: Phrase grounding results: “pointing game” accuracy on Visual Genome (VG), Flickr30K, and ReferIt. The methods in the first three rows do not train.

Training | Model | Test Bbox Accuracy
---|---|---
VG | Flickr | ReferIt
MS-COCO | MG [1] | 15.77 | 27.06 | 15.15
WWbL [43] | 27.22 | 35.75 | 30.08
Ours | 28.77(27.1) | 47.26(45.01) | 30.63(29.05)
VG | MG [1] | 14.45 | 27.78 | 18.85
WWbL [43] | 27.26 | 36.35 | 32.25
Ours | 31.02(29.23) | 42.40(44.91) | 35.56(34.56)
Table 2: Phrase grounding results: bounding box accuracy on Visual Genome (VG), Flickr30K, and ReferIt. The outcomes obtained from network $h$ are presented within brackets.

Train set | Model | Test point Accuracy | Test Bbox Accuracy
---|---|---|---
VG | Flickr | ReferIt | VG | Flickr | ReferIt
COCO | MG [1] | 32.91 | 50.154 | 36.34 | 11.48 | 23.75 | 13.31
WWbL [43] | 44.20 | 61.38 | 43.77 | 17.76 | 32.44 | 21.76
Ours | 46.29 | 63.43 | 44.59 | 22.32 | 38.00 | 22.91
VG | MG [1] | 32.15 | 49.48 | 38.06 | 12.23 | 24.79 | 16.43
WWbL [43] | 43.91 | 58.59 | 44.89 | 17.77 | 31.46 | 18.89
Ours | 46.77 | 61.75 | 44.9 | 22.40 | 35.23 | 23.44
Table 3: WWbL results: pointing game and bounding box accuracy on Visual Genome (VG), Flickr30K, and ReferIt.

Model | VOC07 | VOC12 | MS-COCO
---|---|---|---
Selective Search [52] | 18.8 | 20.9 | 16.0
EdgeBoxes [65] | 31.1 | 31.6 | 28.8
Kim et al. [28] | 43.9 | 46.4 | 35.1
Zhang et al. [63] | 46.2 | 50.5 | 34.8
DDT+ [59] | 50.2 | 53.1 | 38.2
rOSD [54] | 54.5 | 55.3 | 48.5
LOD [55] | 53.6 | 55.1 | 48.5
DINO-seg [9] | 45.8 | 46.2 | 42.1
LOST [47] | 61.9 | 64.0 | 50.7
Ours using LOST | 62.0(42.1) | 66.2(53.5) | 52.0(33.7)
TokenCut [58] | 68.8 | 72.1 | 58.8
Ours using TokenCut | 69.0(44.6) | 72.4(54.1) | 60.7(39.5)
MOVE [5] | 76.0 | 78.8 | 66.6
Ours using MOVE | 77.5(42.9) | 79.6(54.9) | 67.2(48.3)
LOD + CAD [47] | 56.3 | 61.6 | 52.7
rOSD + CAD [47] | 58.3 | 62.3 | 53.0
LOST + CAD [47] | 65.7 | 70.4 | 57.5
Ours using LOST + CAD | 66.1 | 71.0 | 58.7
TokenCut [58] +CAD | 71.4 | 75.3 | 62.6
Ours using TokenCut + CAD | 71.9 | 75.6 | 64.4
MOVE [5] +CAD | 77.1 | 80.3 | 69.1
Ours using MOVE [5] +CAD | 78.7 | 81.3 | 69.3
Table 4: Object discovery results: CorLoc score on MS-COCO20K, VOC07, and VOC12. Network $h$ was trained using pseudo-labels from either LOST [47], TokenCut [58], or MOVE [5]. +CAD indicates training a second-phase class-agnostic detector with model pseudo-boxes as labels. Network $h$ results are enclosed in brackets.

Ablation | Test point Accuracy | Test Bbox Accuracy
---|---|---
VG | Flickr | ReferIt | VG | Flickr | ReferIt
w/o Box Union | 57.26 | 72.54 | 62.55 | 25.11 | 28.74 | 24.63
w/o reg. | 53.49 | 68.47 | 61.92 | 26.45 | 42.79 | 29.74
k=1 | 56.84 | 70.74 | 62.15 | 27.75 | 32.35 | 24.73
Ours | 60.05 | 77.19 | 63.48 | 28.77 | 47.26 | 30.63
Table 5: Ablation study for the phrase grounding task. See text for details. All models were trained on the MS-COCO14 [32] dataset.

Ablation | VOC07 | VOC12 | MSCOCO20K
---|---|---|---
w/o reg. | 61.72 | 64.45 | 50.13
k=1 | 62.54 | 64.67 | 52.00
k=5 | 62.16 | 64.45 | 51.70
k=10 | 61.92 | 66.16 | 51.98
k=15 | 61.44 | 64.46 | 50.60
Table 6: Ablation study for the object discovery task.
## 5 Experiments
We present our results for three tasks: weakly supervised phrase grounding
(WSPG), “what is where by looking” (WWbL), and unsupervised single object
discovery. The first two use the same phrase grounding network $g$; the third
is based on one of the unsupervised techniques of Sec. 4, all of which utilize
the same pre-trained transformer $f$.
Datasets For WSPG and WWbL, the network $g$ is trained on either MSCOCO 2014
[32] or the Visual Genome (VG) dataset [29]. Evaluation is carried out on the
test splits of Flickr30k[34], ReferIt[12, 24] and VG [29].
VG contains 77,398, 5,000, and 5,000 training, validation, and test images,
respectively. Each image is linked to natural-language text and annotated
bounding boxes. When training on MSCOCO 2014, we use the training split
defined by Akbari et al. [1]. It consists of 82,783 training samples and
40,504 validation samples, where each sample contains an image and five
captions describing the image. ReferIt [12, 24] consists of 130k expressions
referring to 99,535 objects in 20k images. For evaluation, we use the test
split of Akbari et al.[1]. The dataset Flickr30k Entities [34] consists of
224K phrases that depict objects present in more than 31K images, with each
image having five corresponding captions. The evaluation is carried out on
the test split of Akbari et al. [1]. For unsupervised single object discovery,
the network $g$ is trained on either MSCOCO 20K, PASCAL-VOC07[20] or PASCAL-
VOC12[21]. MS-COCO20K has 19,817 images chosen at random from the MSCOCO 2014
dataset[32]. VOC07 and VOC12 contain 5,011 and 11,540 images respectively,
with each image belonging to one of 20 categories. For evaluation, we follow
common practice and evaluate on the trainval splits. This evaluation is
possible since the task is fully unsupervised.
Implementation details For phrase grounding tasks, the proposed network $h$
backbone is VGG16 [48], pre-trained on the ImageNet[18] dataset. For the
object discovery task, we use $h$ with ResNet-50[25] backbone, pre-trained
with DINO[9] self-supervision on the ImageNet[18] dataset. For both tasks, $h$
predicts $k=10$ bounding boxes. Refining takes place using an Adam optimizer
with a batch size of 36. The learning rate of $h$ is 1e-5, while the learning
rates of $g^{h}$ and $f^{h}$ are 1e-7 and 5e-7, respectively. The optimizer
weight decay regularization is 1e-4. For the first 3000 iterations, network
$h$ is optimized, where $g^{h}/f^{h}$ is fixed. Then, for the rest of the
training (10k iterations), $h$ is fixed while $g^{h}/f^{h}$ is optimized.
Metrics Phrase grounding tasks are evaluated with respect to the accuracy of
the pointing game[62], which is calculated based on the output map by finding
the location of the maximum value, given a query, and checking whether this
point falls within the object’s region.
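The pointing game amounts to a single argmax check, as in the following sketch (the helper name is ours; the ground-truth region is given here as a box):

```python
import numpy as np

def pointing_hit(M, gt_box):
    """Pointing game: the maximum of the output map must fall inside
    the ground-truth region; gt_box is (x0, y0, x1, y1)."""
    y, x = np.unravel_index(np.argmax(M), M.shape)
    x0, y0, x1, y1 = gt_box
    return x0 <= x < x1 and y0 <= y < y1
```

Accuracy is then the fraction of query-image pairs for which this check succeeds.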
The “BBox accuracy” metric extracts a bounding box, given an output mask, and
compares it with the ground-truth annotations. A prediction is considered
accurate if IOU between the boxes is larger than 0.5. To extract the bounding
box from an output map $M$, the procedure of Shaharabany et al. [43] is
employed. First, $M$ is binarized using a threshold of 0.5, then contours are
extracted from $M$ using the method of Suzuki et al. [51]. Based on the
contours, a set of bounding boxes is derived by taking the smallest box
containing each contour. These bounding boxes are scored by summing the values
of M within the contour while ignoring boxes with low scores. Next, a non-
maximal suppression process is applied and the minimal bounding box that
contains the remaining bounding boxes is chosen.
The WWbL task is an open-world localization task, with only an image as input
(no text input). Using this image, the goal is to both localize and describe
all of the elements in the scene. To solve this task, a multi-stage algorithm
was introduced by Shaharabany et al. [43], starting with obtaining object
proposals using selective search [52]. Next, BLIP is used to caption these
regions. Captions that are similar to each other are removed using the
Community Detection (Cd) clustering method [6]. Using the learned phrase
grounding model $g$, heatmaps are generated according to the extracted
captions.
Similarly to the phrase grounding task, the WWbL task is evaluated using the
same two metrics: pointing game accuracy and bounding box accuracy. For each
ground-truth pair of bounding box and caption, the closest caption in CLIP
space is selected from the list of automatically generated captions. The
associated output map of the phrase grounding method is then compared to the
ground truth bounding box using the pointing accuracy metric. In addition,
bounding boxes are extracted for the output heatmaps $M$, as described above.
For single object discovery we use the Correct Localization (CorLoc) metric as
used by [19, 54, 55, 53, 59, 15, 50]. A predicted bounding box is considered
as correct if the IOU score between the predicted bounding box and one of the
ground truth bounding boxes is above 0.5. We evaluate our model on the same
datasets as [58, 47, 5].
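The CorLoc criterion for a single image can be sketched as follows (helper names are ours; corner-format boxes):

```python
def corloc_correct(pred_box, gt_boxes, thr=0.5):
    """CorLoc: a predicted box counts as correct if its IoU with
    any ground-truth box reaches the 0.5 threshold."""
    def iou(a, b):
        x0, y0 = max(a[0], b[0]), max(a[1], b[1])
        x1, y1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, x1 - x0) * max(0, y1 - y0)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter)
    return any(iou(pred_box, g) >= thr for g in gt_boxes)
```

The reported CorLoc score is the percentage of images for which this check succeeds.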
Results Tab. 1 lists the results for Flickr30k, ReferIt, and VG for the
weakly-supervised phrase grounding task. Evidently, our method is superior to
all baselines, whether training takes place over VG or MS-COCO. In addition to
the pointing game results, Tab. 2 presents bounding box accuracy for the
phrase grounding task (this data is not available for most baselines). Here,
too, our method outperforms the baseline methods by a wide margin.
Phrase grounding samples are provided in Fig. 3, comparing the results after
the refinement process (those with $g^{h}$) to the results of the baseline
$g$. As can be seen, our method encourages the localization maps to match the
typical shape of image objects. As a result, the predicted bounding box after
refining the model is often closer to the actual objects in the image.
The WWbL results are listed in Tab. 3, which depicts the performance obtained
by our $g^{h}$, WWbL [43], and a baseline that employs the phrase grounding
method MG [1] as part of the WWbL captioning procedure described above. Out of
the three models, our refined model $g^{h}$ achieves the best scores, for all
benchmarks and both metrics.
Tab. 4 summarizes the results on the VOC07, VOC12, and MS-COCO20K datasets for
the single object discovery task. When utilizing the MOVE [5] model, our
method achieves superior performance compared to all other models across all
datasets. This superiority holds true when comparing all methods without CAD
and when comparing all methods with CAD. Furthermore, our method consistently
outperforms other approaches when refining the DINO model $f$ using both
TokenCut [58] boxes and LOST [47] boxes on all datasets.
Fig. 4 depicts typical samples of our results for the unsupervised object
discovery task, when combining our method with either LOST [47] or TokenCut
[58]. Evidently, our refining process improves object and background
separation and produces a denser output mask, which covers the object more
completely. Furthermore, the extracted bounding boxes become more accurate.
Ablation study In order to validate the individual components of our approach,
we conducted an ablation study.
For the phrase grounding task, this study is reported in Tab. 5. The first
ablation replaces the loss $L_{h_{BU}}$ with the loss $L_{h}$, i.e., no union
of the detection boxes is performed. The second ablation employs only the loss
of $h$, $L_{h_{BU}}$, and disregards the loss terms that were used to train
network $g$. The third ablation employs a single detection box ($k=1$) instead
of the default of $k=10$. As can be seen, these three variants reduce
performance across all metrics and datasets. The exact reduction in
performance varies across the datasets.
To further explore the task of unsupervised object discovery, we conducted an
ablation study varying the value of $k$, see Tab. 6.
This ablation was performed using LOST, which is quicker than TokenCut and
without the extra overhead of training CAD. Evidently, removing the
regularization term, leaving only the loss $L_{h}$ (there is no box union in
this task, since both LOST and TokenCut return a single box) hurts
performance. However, as can be expected, using $k=1$, instead of the value of
$k=10$ that is used throughout our experiments, better fits this scenario and
leads to better performance on VOC07 (and virtually the same on MSCOCO20K).
Training time The time it takes to train our method on medium-sized datasets
is reported in Tab. 7. For both original networks, $f$ and $g$, we use
pretrained networks and report the published values. Training $h,f^{h},g^{h}$
reflects our runs on GeForce RTX 2080Ti GPUs ($f$ which is DINO, was trained
on much more involved hardware, while $g$ was trained on similar hardware). As
can be seen, training $h$ and refining $f$ or $g$ to obtain $f^{h}$ or $g^{h}$
is much quicker than the training of the $f$ and $g$ baselines. The difference
in training time between LOST and TokenCut stems from the inference done
during training, which is much quicker for LOST.
Network | Phrase Grounding | Object discovery
---|---|---
| | LOST | TokenCut
$f$ or $g$ | 28 x [4] | 72.6 x [16] | 72.6 x [16]
$h$ | 0.5 x [1] | 0.5 x [1] | 2.5 x [1]
$f^{h}$ or $g^{h}$ | 3.2 x [4] | 5.3 x [1] | 20.5 x [1]
Table 7: Training time (hours) for phrase grounding and unsupervised object
discovery. The number of GPUs used during training is given within brackets.
## 6 Conclusions
We present a novel method, in which a primary network is used in a symbiotic
manner with a detection network. The first network is used to extract a
feature map and detection boxes, which are used as the input and output of the
second. The second network is then used to allow the first network to be
refined using the boxes extracted from its output. All training phases are
performed on the same training set, within the bounds of the allowed level of
supervision. Tested on a wide variety of tasks and benchmarks, the proposed
method consistently improves localization accuracy.
## References
* [1] Hassan Akbari, Svebor Karaman, Surabhi Bhargava, Brian Chen, Carl Vondrick, and Shih-Fu Chang. Multi-level multimodal common semantic space for image-phrase grounding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12476–12486, 2019.
* [2] Assaf Arbelle, Sivan Doveh, Amit Alfassy, Joseph Shtok, Guy Lev, Eli Schwartz, Hilde Kuehne, Hila Barak Levi, Prasanna Sattigeri, Rameswar Panda, et al. Detector-free weakly supervised grounding by separation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1801–1812, 2021.
* [3] Yutong Bai, Xinlei Chen, Alexander Kirillov, Alan Yuille, and Alexander C Berg. Point-level region contrast for object detection pre-training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16061–16070, 2022.
* [4] Adam Bielski and Paolo Favaro. Emergence of object segmentation in perturbed generative models. Advances in Neural Information Processing Systems, 32, 2019.
* [5] Adam Bielski and Paolo Favaro. Move: Unsupervised movable object segmentation and detection. arXiv preprint arXiv:2210.07920, 2022.
* [6] Vincent D Blondel, Jean-Loup Guillaume, Renaud Lambiotte, and Etienne Lefebvre. Fast unfolding of communities in large networks. Journal of statistical mechanics: theory and experiment, 2008(10):P10008, 2008.
* [7] Federico Bolelli, Stefano Allegretti, Lorenzo Baraldi, and Costantino Grana. Spaghetti labeling: Directed acyclic graphs for block-based connected components labeling. IEEE Transactions on Image Processing, PP:1–1, 10 2019.
* [8] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I, pages 213–229, 2020.
* [9] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In ICCV, 2021.
* [10] Hila Chefer, Shir Gur, and Lior Wolf. Generic attention-model explainability for interpreting bi-modal and encoder-decoder transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 397–406, October 2021.
* [11] Hila Chefer, Shir Gur, and Lior Wolf. Transformer interpretability beyond attention visualization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 782–791, 2021.
* [12] Kan Chen, Rama Kovvuri, and Ram Nevatia. Query-guided regression network with context policy for phrase grounding. In ICCV, 2017.
* [13] Xiaokang Chen, Yuhui Yuan, Gang Zeng, and Jingdong Wang. Semi-supervised semantic segmentation with cross pseudo supervision. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2613–2622, 2021.
* [14] Jason PC Chiu and Eric Nichols. Named entity recognition with bidirectional lstm-cnns. Transactions of the association for computational linguistics, 4:357–370, 2016.
* [15] Minsu Cho, Suha Kwak, Cordelia Schmid, and Jean Ponce. Unsupervised object discovery and localization in the wild: Part-based matching with bottom-up region proposals, 2015.
* [16] Junsuk Choe, Dongyoon Han, Sangdoo Yun, Jung-Woo Ha, Seong Joon Oh, and Hyunjung Shim. Region-based dropout with attention prior for weakly supervised object localization. Pattern Recognition, 116:107949, 2021.
* [17] Junsuk Choe, Seong Joon Oh, Seungho Lee, Sanghyuk Chun, Zeynep Akata, and Hyunjung Shim. Evaluating weakly supervised object localization methods right. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3133–3142, 2020.
* [18] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A Large-Scale Hierarchical Image Database. In Computer Vision and Pattern Recognition (CVPR), 2009.
* [19] Thomas Deselaers, Bogdan Alexe, and Vittorio Ferrari. Localizing objects while learning their appearance. In Kostas Daniilidis, Petros Maragos, and Nikos Paragios, editors, Computer Vision – ECCV 2010, pages 452–466, Berlin, Heidelberg, 2010. Springer Berlin Heidelberg.
* [20] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 Results. pascal-network.org/challenges/VOC/voc2007.
* [21] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2012 (VOC2012) Results. http://www.pascal-network.org/challenges/VOC/voc2012.
* [22] Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh K Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C Platt, et al. From captions to visual concepts and back. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1473–1482, 2015.
* [23] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. Communications of the ACM, 63(11):139–144, 2020.
* [24] Michael Grubinger, Paul Clough, Henning Müller, and Thomas Deselaers. The iapr tc-12 benchmark: A new evaluation resource for visual information systems. In International workshop ontoImage, volume 2, 2006.
* [25] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR, pages 770–778, 2016.
* [26] Syed Ashar Javed, Shreyas Saxena, and Vineet Gandhi. Learning unsupervised visual grounding through semantic self-supervision. arXiv preprint arXiv:1803.06506, 2018.
* [27] Aishwarya Kamath, Mannat Singh, Yann LeCun, Gabriel Synnaeve, Ishan Misra, and Nicolas Carion. Mdetr-modulated detection for end-to-end multi-modal understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1780–1790, 2021.
* [28] Gunhee Kim and Antonio Torralba. Unsupervised detection of regions of interest using iterative link analysis. Advances in neural information processing systems, 22, 2009.
* [29] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A Shamma, et al. Visual genome: Connecting language and vision using crowdsourced dense image annotations. International journal of computer vision, 123(1):32–73, 2017.
* [30] Harold W. Kuhn. The Hungarian Method for the Assignment Problem. Naval Research Logistics Quarterly, 2(1–2):83–97, 1955.
* [31] Liunian Harold Li, Pengchuan Zhang, Haotian Zhang, Jianwei Yang, Chunyuan Li, Yiwu Zhong, Lijuan Wang, Lu Yuan, Lei Zhang, Jenq-Neng Hwang, et al. Grounded language-image pre-training. arXiv preprint arXiv:2112.03857, 2021.
* [32] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. Microsoft COCO: Common objects in context. In ECCV, volume 8693 of LNCS, pages 740–755, 2014.
* [33] Lanlan Liu, Michael Muelly, Jia Deng, Tomas Pfister, and Li-Jia Li. Generative modeling for small-data object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 6073–6081, 2019.
* [34] Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In Proceedings of the IEEE international conference on computer vision, pages 2641–2649, 2015.
* [35] Adam Polyak, Yaniv Taigman, and Lior Wolf. Unsupervised generation of free-form and parameterized avatars. IEEE transactions on pattern analysis and machine intelligence, 42(2):444–459, 2018.
* [36] Zhenyue Qin, Dongwoo Kim, and Tom Gedeon. Rethinking softmax with cross-entropy: Neural network classifier as mutual information estimator. arXiv preprint arXiv:1911.10688, 2019.
* [37] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020, 2021.
* [38] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
* [39] Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems (NIPS), 2015.
* [40] Hamid Rezatofighi, Nathan Tsoi, JunYoung Gwak, Amir Sadeghian, Ian Reid, and Silvio Savarese. Generalized intersection over union. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.
* [41] Justyna Sarzynska-Wawer, Aleksander Wawer, Aleksandra Pawlak, Julia Szymanowska, Izabela Stefaniak, Michal Jarkiewicz, and Lukasz Okruszek. Detecting formal thought disorder by deep contextualized word representations. Psychiatry Research, 304:114135, 2021.
* [42] Tal Shaharabany, Yoad Tewel, and Lior Wolf. What is where by looking: Weakly-supervised open-world phrase-grounding without text inputs. arXiv preprint arXiv:2206.09358, 2022.
* [43] Tal Shaharabany, Yoad Tewel, and Lior Wolf. What is where by looking: Weakly-supervised open-world phrase-grounding without text inputs. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022.
* [44] Jianbo Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000.
* [45] Gyungin Shin, Samuel Albanie, and Weidi Xie. Unsupervised salient object detection with spectral cluster voting. arXiv preprint arXiv:2203.12614, 2022.
* [46] Gyungin Shin, Weidi Xie, and Samuel Albanie. Namedmask: Distilling segmenters from complementary foundation models. In CVPRW, 2023.
* [47] Oriane Siméoni, Gilles Puy, Huy V. Vo, Simon Roburin, Spyros Gidaris, Andrei Bursuc, Patrick Pérez, Renaud Marlet, and Jean Ponce. Localizing objects with self-supervised transformers and no labels. In Proceedings of the British Machine Vision Conference (BMVC), November 2021.
* [48] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
* [49] Oriane Siméoni, Chloé Sekkat, Gilles Puy, Antonin Vobecky, Éloi Zablocki, and Patrick Pérez. Unsupervised object localization: Observing the background to discover objects, 2023.
* [50] Parthipan Siva, Chris Russell, Tao Xiang, and Lourdes Agapito. Looking beyond the image: Unsupervised learning for object saliency and detection. In 2013 IEEE Conference on Computer Vision and Pattern Recognition, pages 3238–3245, 2013.
* [51] Satoshi Suzuki et al. Topological structural analysis of digitized binary images by border following. Computer vision, graphics, and image processing, 30(1):32–46, 1985.
* [52] J.R.R. Uijlings, K.E.A. van de Sande, T. Gevers, and A.W.M. Smeulders. Selective search for object recognition. International Journal of Computer Vision, 2013.
* [53] Huy V. Vo, Francis Bach, Minsu Cho, Kai Han, Yann LeCun, Patrick Perez, and Jean Ponce. Unsupervised image matching and object discovery as optimization, 2019.
* [54] Huy V. Vo, Patrick Pérez, and Jean Ponce. Toward unsupervised, multi-object discovery in large-scale image collections, 2020.
* [55] Huy V. Vo, Elena Sizikova, Cordelia Schmid, Patrick Pérez, and Jean Ponce. Large-scale unsupervised object discovery, 2021.
* [56] Andrey Voynov, Stanislav Morozov, and Artem Babenko. Object segmentation without labels with large-scale generative models. In International Conference on Machine Learning, pages 10596–10606. PMLR, 2021.
* [57] Xudong Wang, Rohit Girdhar, Stella X Yu, and Ishan Misra. Cut and learn for unsupervised object detection and instance segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3124–3134, 2023.
* [58] Yangtao Wang, Xi Shen, Shell Xu Hu, Yuan Yuan, James L. Crowley, and Dominique Vaufreydaz. Self-supervised transformers for unsupervised object discovery using normalized cut. In Conference on Computer Vision and Pattern Recognition, 2022.
* [59] Xiu-Shen Wei, Chen-Lin Zhang, Jianxin Wu, Chunhua Shen, and Zhi-Hua Zhou. Unsupervised object discovery and co-localization by deep descriptor transformation. Pattern Recognition, 88:113–126, 2019.
* [60] Yuxin Wu, Alexander Kirillov, Francisco Massa, Wan-Yen Lo, and Ross Girshick. Detectron2. https://github.com/facebookresearch/detectron2, 2019.
* [61] Fanyi Xiao, Leonid Sigal, and Yong Jae Lee. Weakly-supervised visual grounding of phrases with linguistic structures. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5945–5954, 2017.
* [62] Jianming Zhang, Sarah Adel Bargal, Zhe Lin, Jonathan Brandt, Xiaohui Shen, and Stan Sclaroff. Top-down neural attention by excitation backprop. International Journal of Computer Vision, 126(10):1084–1102, 2018.
* [63] Tianshu Zhang, Buzhen Huang, and Yangang Wang. Object-occluded human shape and pose estimation from a single color image. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 7376–7385, 2020.
* [64] Xiao Zhang, Yixiao Ge, Yu Qiao, and Hongsheng Li. Refining pseudo labels with clustering consensus over generations for unsupervised object re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 3436–3445, 2021.
* [65] C Lawrence Zitnick and Piotr Dollár. Edge boxes: Locating object proposals from edges. In Computer Vision–ECCV 2014, pages 391–405. Springer, 2014.
## Supplementary Material
This appendix presents visual results that demonstrate the effectiveness of
our refined models $g^{h}$ and $f^{h}$ in various tasks, including weakly
supervised and unsupervised localization, What-is-where-by-looking, and
unsupervised single object discovery. By building upon existing models $g$ and
$f$, we have showcased improvements in output localization maps and bounding
boxes.
Our comprehensive comparisons span multiple datasets, including MS-COCO14
[32], Visual-Genome [29], Flickr30K [34], ReferIt [12, 24], PASCAL-VOC07 [20],
PASCAL-VOC12 [21], and MS-COCO20K [32]. These comparisons serve to highlight
the adaptability and robustness of our refined models across different tasks
and datasets. The visual results provide strong evidence of our models’
superiority in generating more accurate localization maps and bounding boxes
compared to their base models.
The code and scripts for reproducing the paper’s results accompany this
supplementary material.
## Weakly supervised phrase-grounding visual results
We present visual outcomes of our model, $g^{h}$, which is built upon the
previously published model $g$ by [43]. We compare the localization maps and
bounding box outputs generated by both models and evaluate each bounding box
against the ground truth. We showcase the results for models trained on the
MS-COCO14 [32] and Visual-Genome [29] datasets. For each model, we display
visualizations on the Flickr30K [34], ReferIt [12, 24], and Visual-Genome [29]
datasets. Figures 5, 6, and 7 illustrate the results for the MS-COCO-based
model, while the outcomes for the VG-based model appear in Figures 8, 9, and 10.
## What is where by looking visual results
We present visual outcomes for the What-is-where-by-looking task using our
improved model $g^{h}$, which is derived from the previously published model
$g$ by [43]. We compare the localization maps generated by both models, using
the same image but different phrases. In Figure 11, we display the results for
the Flickr30K [34] dataset, with models $g$ and $g^{h}$ trained on the
MS-COCO14 [32] dataset.
## Unsupervised single object discovery visual results
In the context of the unsupervised single object discovery task, we display
visualizations of our model $f^{h}$, which is based on the DINO [9] model $f$.
We compare our findings with those of LOST [47] and TokenCut [58]. For each
comparison, we showcase the output attention map and the output bounding box.
Additionally, we display CAD-based bounding boxes, derived from both our
refined model $f^{h}$ and the original model $f$, when available. For each
method, we exhibit results on the PASCAL-VOC07 [20], PASCAL-VOC12 [21], and
MS-COCO20K [32] datasets. The outcomes for the LOST model can be found in
Figures 12, 13, and 14, while the TokenCut model results are illustrated in
Figures 15, 16, and 17.
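As an illustrative sketch (not the paper's released code), the step from a per-pixel attention or saliency map to a single-object bounding box, common to LOST- and TokenCut-style pipelines, can be approximated by thresholding the map and boxing its largest connected foreground component. The function name, the mean-value threshold, and 4-connectivity are our own assumptions here:

```python
from collections import deque

def box_from_saliency(sal, thresh=None):
    """Threshold a 2D saliency/attention map (list of lists of floats) and
    return the bounding box (x0, y0, x1, y1) of the largest 4-connected
    foreground component, or None if nothing passes the threshold."""
    H, W = len(sal), len(sal[0])
    if thresh is None:  # simple global mean threshold (our assumption)
        thresh = sum(sum(row) for row in sal) / (H * W)
    mask = [[v > thresh for v in row] for row in sal]
    seen = [[False] * W for _ in range(H)]
    best, best_size = None, 0
    for sy in range(H):
        for sx in range(W):
            if mask[sy][sx] and not seen[sy][sx]:
                # BFS flood fill collects one connected component
                q = deque([(sy, sx)])
                seen[sy][sx] = True
                ys, xs = [], []
                while q:
                    y, x = q.popleft()
                    ys.append(y)
                    xs.append(x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(ys) > best_size:
                    best_size = len(ys)
                    best = (min(xs), min(ys), max(xs), max(ys))
    return best
```

Published methods refine this further (e.g. seed expansion in LOST, a normalized-cut eigenvector in TokenCut), but the threshold-and-box step conveys how a dense map becomes the single red box shown in the figures.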
Figure 5: Phrase-grounding results on the Flickr30K [34] dataset. Model $g^{h}$ was trained on the MS-COCO14 [32] dataset. (a) the phrase; (b) the input image; (c) results (black) for network $g$ [43] compared to the ground-truth box (green); (d) same for the refined network $g^{h}$; (e)-(h) same as (a)-(d). [Image grid omitted.]

Figure 6: Phrase-grounding results on the ReferIt [12, 24] dataset. Model $g^{h}$ was trained on the MS-COCO14 [32] dataset. Panels as in Figure 5. [Image grid omitted.]

Figure 7: Phrase-grounding results on the Visual Genome [29] dataset. Model $g^{h}$ was trained on the MS-COCO14 [32] dataset. Panels as in Figure 5. [Image grid omitted.]

Figure 8: Phrase-grounding results on the Flickr30K [34] dataset. Model $g^{h}$ was trained on the Visual Genome [29] dataset. Panels as in Figure 5. [Image grid omitted.]

Figure 9: Phrase-grounding results on the ReferIt [12, 24] dataset. Model $g^{h}$ was trained on the Visual Genome [29] dataset. Panels as in Figure 5. [Image grid omitted.]

Figure 10: Phrase-grounding results on the Visual Genome [29] dataset. Model $g^{h}$ was trained on the same dataset. Panels as in Figure 5. [Image grid omitted.]

Figure 11: What-is-where-by-looking results on the Flickr30K [34] dataset. Model $g^{h}$ was trained on the MS-COCO14 [32] dataset. (a) the input image; (b) results for network $g$ [43]; (c) results for network $g^{h}$; (d)-(e) same as (b)-(c), using a different phrase. [Image grid omitted.]

Figure 12: Single object discovery results on the MS-COCO14 [32] dataset. (a) the input image; (b) the inverse degree map of LOST [47], where the red bounding box is directly from LOST and the white one is the prediction of CAD trained on top of it; (c) same with our refined model $f^{h}$ and LOST; (d)-(f) same as (a)-(c). [Image grid omitted.]

Figure 13: Single object discovery results on the PASCAL-VOC07 [20] dataset. Panels as in Figure 12. [Image grid omitted.]

Figure 14: Single object discovery results on the PASCAL-VOC12 [21] dataset. Panels as in Figure 12. [Image grid omitted.]

Figure 15: Single object discovery results on the MS-COCO14 [32] dataset. (a) the input image; (b) the eigenvector attention of TokenCut [58], where the red bounding box is directly from TokenCut (the CAD model was not released and is not shown); (c) same with our refined model $f^{h}$ and TokenCut, where the white bounding box is the prediction of CAD trained on top of $f^{h}$; (d)-(f) same as (a)-(c). [Image grid omitted.]

Figure 16: Single object discovery results on the PASCAL-VOC07 [20] dataset. Panels as in Figure 15. [Image grid omitted.]

Figure 17: Single object discovery results on the PASCAL-VOC12 [21] dataset. Panels as in Figure 15. [Image grid omitted.]
# Sr2IrO4/Sr3Ir2O7 superlattice for a model 2D quantum Heisenberg
antiferromagnet
Hoon Kim These authors contributed equally to this work. Department of
Physics, Pohang University of Science and Technology, Pohang 790-784, Republic
of Korea Center for Artificial Low Dimensional Electronic Systems, Institute
for Basic Science (IBS), 77 Cheongam-Ro, Pohang 37673, South Korea Joel
Bertinshaw These authors contributed equally to this work.
Max Planck Institute for Solid State Research, Heisenbergstraße 1, D-70569
Stuttgart, Germany J. Porras Max Planck Institute for Solid State Research,
Heisenbergstraße 1, D-70569 Stuttgart, Germany B. Keimer Max Planck
Institute for Solid State Research, Heisenbergstraße 1, D-70569 Stuttgart,
Germany Jungho Kim Advanced Photon Source, Argonne National Laboratory 9700
Cass Ave, Lemont, IL 60439, USA J.-W. Kim Advanced Photon Source, Argonne
National Laboratory 9700 Cass Ave, Lemont, IL 60439, USA Jimin Kim
Department of Physics, Pohang University of Science and Technology, Pohang
790-784, Republic of Korea Center for Artificial Low Dimensional Electronic
Systems, Institute for Basic Science (IBS), 77 Cheongam-Ro, Pohang 37673,
South Korea Jonghwan Kim Center for Artificial Low Dimensional Electronic
Systems, Institute for Basic Science (IBS), 77 Cheongam-Ro, Pohang 37673,
South Korea Department of Materials Science and Engineering, Pohang
University of Science and Technology, Pohang 37673, South Korea Gahee Noh
Gi-Yeop Kim Department of Materials Science and Engineering, Pohang
University of Science and Technology, Pohang 37673, South Korea Si-Young Choi
Department of Materials Science and Engineering, Pohang University of Science
and Technology, Pohang 37673, South Korea B. J. Kim To whom correspondence
should be addressed.
<EMAIL_ADDRESS>Department of Physics, Pohang University of Science and
Technology, Pohang 790-784, Republic of Korea Center for Artificial Low
Dimensional Electronic Systems, Institute for Basic Science (IBS), 77
Cheongam-Ro, Pohang 37673, South Korea
###### Abstract
Spin-orbit entangled pseudospins hold promise for a wide array of exotic
magnetism ranging from a Heisenberg antiferromagnet to a Kitaev spin liquid
depending on the lattice and bonding geometry, but many of the host materials
suffer from lattice distortions and deviate from idealized models in part due
to inherent strong pseudospin-lattice coupling. Here, we report on the
synthesis of a magnetic superlattice comprising the single ($n$=1) and the
double ($n$=2) layer members of the Ruddlesden-Popper series iridates
Srn+1IrnO3n+1 alternating along the $c$-axis, and provide a comprehensive
study of its lattice and magnetic structures using scanning transmission
electron microscopy, resonant elastic and inelastic x-ray scattering, third
harmonic generation measurements and Raman spectroscopy. The superlattice is
free of the structural distortions reported for the parent phases and has a
higher point group symmetry, while preserving the magnetic orders and
pseudospin dynamics inherited from the parent phases, featuring two magnetic
transitions with two symmetry-distinct orders. We infer weaker pseudospin-
lattice coupling from the analysis of Raman spectra and attribute it to
frustrated magnetic-elastic couplings. Thus, the superlattice expresses a near
ideal network of effective spin-one-half moments on a square lattice.
††preprint: APS/123-QED
## I Introduction
The physics of $S$=1/2 antiferromagnet (AF) on a two-dimensional (2D) square
lattice has a long history of research as it is widely believed to hold the
key for the mechanism of high temperature superconductivity in copper oxide
compounds [1, 2, 3, 4, 5]. However, it is rarely realized outside of the Cu-
based compounds, and as a result its generic features are difficult to isolate
from material specifics. Ruddlesden-Popper (RP) series iridates Srn+1IrnO3n+1
have recently emerged as a new material platform to study the same physics
with spin-orbit entangled $J_{\textrm{eff}}$=1/2 pseudospins replacing the
$S$=1/2 moments in the cuprates [6, 7, 8, 9, 10]. Indeed, the single layer
Sr2IrO4 has reproduced much of the cuprate phenomenology: a pseudogapped metal
[11, 12, 13], and a nodal metal with a $d$-wave symmetric gap indicative of
possible unconventional superconductivity [14, 15] emerge upon electron doping
from the parent phase that is approximately a Heisenberg AF [16, 17]. Further,
experiments indicate the existence of various symmetry-breaking orders: polarized
neutron diffraction [18] detects time-reversal symmetry breaking above the
Néel temperature (TN), magnetic torque [19] and second harmonic generation
[20, 21] measurements indicate loss of C4 rotation symmetry, and resonant
x-ray diffraction [22] observes splitting of a magnetic Bragg peak suggesting
formation of a nearly commensurate density wave.
However, iridates differ from the cuprates in several aspects, and to what
extent these differences affect the essential physics is an important open
issue. First, the dominant orbital character of the pseudospins leads to
strong pseudospin-lattice coupling (PLC) [23, 24, 25], which accounts largely
for the spin-wave gap and calls into question the validity of spin-only models. Second,
structural distortions of kinds not found in cuprates [26, 27, 28] add
complexity to theory models by allowing additional interactions. For example,
the staggered tetragonal distortion of IrO6 octahedra in Sr2IrO4, breaking the
vertical glide planes and thus lowering the space group from $I$41/$acd$ to
$I$41/$a$ [27, 26], leads to additional pseudospin exchange interactions,
which provide a mechanism for locking of pseudospin canting angles and the
octahedral rotation [29]. In the bilayer compound Sr3Ir2O7, the monoclinic
distortion, lowering the space group from orthorhombic $Bbca$ to monoclinic
$C$2/$c$ [28] results in bending of otherwise straight Ir-O-Ir $c$-axis bonds.
This in turn leads to canting of the AF moments aligned along the $c$-axis,
manifesting as small but clearly measurable net in-plane ferromagnetic moments
[30, 31]. Such distortions lead to deviation from the ideal cubic-symmetric
$J_{\textrm{eff}}$=1/2 states on rectilinear superexchange pathways, which are
assumed in theory models that predict, for example, realization of a Kitaev
spin liquid in a honeycomb lattice [8, 32].
Figure 1: (color online). Stacking pattern of the superlattice as imaged by
STEM. (a) Wide field-of-view STEM image along [100] projection. The
alternation between single-layers (blue) and double-layers (orange) is well
maintained over the entire field of view. (b) Magnified HAADF-STEM image with
single-layer (blue) and double-layer (orange) indicated. (c) A structural
model for the superlattice. The single-layer and double-layer are shifted by a
half unit cell on the SrO planes. IrO6 octahedra are rotated about the
$c$-axis as in the parent compounds. (d) [110]- and (e) [100]-projected HAADF
(left)- and ABF (right)-STEM images overlaid with the atom positions (Sr,
grey; Ir, blue, orange; O, red) from the model.
Here, we report on the synthesis of a Sr2IrO4/Sr3Ir2O7 superlattice, and
provide a comprehensive study of its lattice and magnetic structures. The
lattice structure is investigated by scanning transmission electron microscopy
(STEM), resonant x-ray diffraction (RXD), and rotational anisotropy third
harmonic generation measurements (RA-THG), and the magnetic structure by
magnetometry, RXD, resonant inelastic x-ray scattering (RIXS), and Raman
scattering. The superlattice is free of structural distortions reported for
the parent phases while leaving their magnetic structures intact. The
superlattice features two magnetic transitions with two different orders:
canted $ab$-plane AF and $c$-axis collinear AF inherited from Sr2IrO4 and
Sr3Ir2O7, respectively. Their contrasting pseudospin dynamics, of Heisenberg
and Ising types, also remain unchanged within our experimental resolutions.
However, $ab$-plane magnetic anisotropy of the Heisenberg pseudospins is
significantly reduced indicating weaker PLC, possibly due to the Ising
pseudospins aligned along the $c$-axis resisting the orthorhombic distortions.
Our result shows that two distinct types of quasi-2D magnetism can be
compounded in a superlattice to realize a pseudospin system closer to an ideal
model.
## II Lattice Structure
The superlattice has the nominal chemical composition Sr5Ir3O11 and can be
regarded as a $n$=1.5 member of the RP series. Although this phase is not
quite thermodynamically stable, it forms transiently during a flux growth
before either $n$=1 or $n$=2 phase eventually stabilizes depending on the
starting composition of SrCO3 and IrO2 molten in SrCl2 flux and the dwelling
temperature. Thermal quenching at high temperature leads to intergrowth of
both phases, and a highly ordered superlattice can be found by a careful
control of the heating sequence. The resulting “crystals” are typically a few
microns thick, thicker than layer-by-layer-grown thin films but thinner
than typical bulk crystals. As conventional structure refinement is
limited by the small sample volume, we rely on STEM and RXD to verify the
superlattice formation.
Figure 1(a) shows a wide field-of-view STEM image along [100] projection
showing the stacking of single (SL) and double-layer (DL) units alternating
over $>$ 40 unit cells. The stacking sequence is indicated in Fig. 1(b) and
the unit cell is depicted in Fig. 1(c), which is modeled from the known
structures of Sr2IrO4 (Ref. 33) and Sr3Ir2O7 (Ref. 34). Figures 1(d) and 1(e)
show representative high-angle annular dark field (HAADF) and annular bright
field (ABF) images along [110] projection and [100] projection, respectively.
The images are overlaid with the atomic positions based on the unit cell in
Fig. 1(c). In the [100] projection, the staggered rotation of IrO6 octahedra
about the $c$-axis as in the parent compounds is seen as diffuse rods (see
also Figs. S1 and S2). Overall, our data is in good agreement with the model.
Figure 2: (color online). RXD on the superlattice at the Ir $L_{3}$-edge.
Sharp (a) (0 0 L) and (b) (0 2 L) charge reflections are centered at integer-L
values (dashed lines), indicating the superlattice is well ordered across the
bulk of the sample. Minor impurity peaks that match Sr2IrO4 and Sr3Ir2O7 are
marked by cyan and orange sticks, respectively. Miller indices are in the
orthorhombic notation; i.e., reciprocal lattice vectors corresponding to the
unit cell shown in Fig. 1(c).
The superlattice formation is confirmed to represent the bulk of the sample
via RXD conducted at the Ir $L_{3}$-edge. Figures 2(a) and 2(b) plot scans
along (0 0 L) and (0 2 L), respectively, which show reflections centered at
integer-L values with $\sim$ 0.01 Å widths. Impurity peaks of intensities of
the order of $\lesssim$ 1 % of the main peaks are observed, which match the
$c$-axis lattice parameters of either Sr2IrO4 or Sr3Ir2O7. The sharp
reflections and negligible impurity peaks indicate that the superlattice
structure remains well correlated at a macroscopic level. Whereas the in-plane
lattice parameters ($\approx$ 5.502 Å) match those of the parent systems, the
$c$-axis is unique with a length of 16.93 Å. According to the study by Harlow
et al. [35], however, diffraction patterns from a randomly-stacked intergrowth
of $n$=1 and $n$=2 phases can misleadingly appear similar to those of an
ordered phase. Such a possibility is ruled out in our sample by selectively
measuring the periodicity of the DL, exploiting the fact that only the DLs are
magnetically ordered in the temperature range between T = 220 K and 280 K (to
be discussed in Section III).
Figure 3: (color online). RA-THG patterns of the superlattice (open circles)
taken under (a) $PP$, (b) $PS$, (c) $SP$, (d) $SS$ geometries. Incident 1200
nm light was used at room temperature. The third harmonic 400 nm light was
collected as a function of azimuth-angle while the scattering plane rotates
about $c$-axis [36, 37]. The THG signals are normalized by the $PP$ trace,
overlaid with the best fits to bulk electric dipole induced THG tensor of
4/$mmm$ point group (navy lines).
Next, we further refine the structure using RA-THG, a nonlinear optical
process highly sensitive to the bulk crystal symmetry through nonlinear
susceptibility tensors [36, 37, 38]. This technique has been used for Sr2IrO4
and Sr3Ir2O7 to detect subtle structure distortions [27, 28, 21]. Figure 3
shows the azimuth-angle dependence of the third harmonic signals as the
scattering plane is rotated about the $c$-axis, for the four different
polarization configurations of the incident and reflected light, which can be
either parallel ($P$) or perpendicular ($S$) to the scattering plane. The
patterns are symmetric with respect to mirror reflections $a$$\rightarrow$$-a$
and $b$$\rightarrow$$-b$, and four-fold rotations about the $c$-axis. The
combination of both symmetries leads to eight-fold symmetric patterns for the
$PS$ and $SP$ channels. To confirm this, the patterns are overlaid with the best fit to
electric-dipole induced THG tensor for 4/$mmm$ (navy), whose expression is
given in Appendix A. We find excellent agreement and conclude that the
superlattice has a higher point group symmetry than Sr2IrO4 (Ref. 27) and
Sr3Ir2O7 (Ref. 28), in both of which the patterns manifestly lack the mirror
symmetries.
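The symmetry argument above can be made concrete with a toy azimuthal form. The expression below is purely illustrative (it is not the Appendix A tensor expression): any pattern of the form $I(\phi)=A\sin^{2}\!\big(4(\phi-\phi_{0})\big)+C$ is invariant under the four-fold rotation and, for $\phi_{0}=0$, under the mirror $\phi\rightarrow-\phi$, and these two symmetries together enforce the $\pi/4$ periodicity, i.e. eight lobes per full rotation, as seen in the $PS$ and $SP$ channels:

```python
import math

# Illustrative eight-fold-symmetric azimuthal form (an assumption for
# demonstration, NOT the Appendix A tensor expression):
#   I(phi) = A * sin^2(4*(phi - phi0)) + C
def eightfold(phi, A=1.0, phi0=0.0, C=0.0):
    return A * math.sin(4.0 * (phi - phi0)) ** 2 + C

# Symmetry checks: C4 rotation (phi -> phi + pi/2) and, for phi0 = 0, the
# mirror phi -> -phi leave the pattern invariant; combined, they give the
# pi/4 period, i.e. eight equivalent lobes per full rotation.
for phi in (0.1, 0.7, 2.0):
    assert abs(eightfold(phi) - eightfold(phi + math.pi / 2)) < 1e-12
    assert abs(eightfold(phi) - eightfold(-phi)) < 1e-12
    assert abs(eightfold(phi) - eightfold(phi + math.pi / 4)) < 1e-12
```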
## III Magnetic structure
Figure 4: (color online). Magnetometry of the superlattice. (a) The
superlattice magnetic order consistent with our data. (b) Field-cooled M-T
curves of the superlattice (navy), Sr2IrO4 (cyan), and Sr3Ir2O7 (orange). (c)
M-H hysteresis at 5 K comparing the superlattice (navy) and bulk Sr2IrO4
(cyan). For a direct comparison, the Sr2IrO4 curves are multiplied by the mass
proportion ($\approx$ 0.36) of SL in the superlattice. (d) M-T curves measured
with fields applied along [100] and [110].
Figure 5: (color online). Magnetic RXD study of the superlattice. (a)
Magnetic (1 0 L) reflections appear at every integer L, with contributions
from both SL and DL. (b) (1 0 L) scans measured every 5 K upon heating from
200 K to 245 K. The (1 0 21) reflection, dominated by SL, disappears around
T = 220 K. (c) At 250 K, the intensity modulation along (1 0 L) coincides with
the DL structure factor squared (dotted line), indicating that the magnetic
intensities are dominated by DL. (d) Temperature dependence of the (1 0 8) and
(1 0 10) reflections reveals two magnetic transitions at $T_{N}^{A}$ = 220 K
and $T_{N}^{B}$ = 280 K. (e) Polarization analysis separating AF signals from
in-plane (blue) and out-of-plane (orange) moments.
Having established the lattice structure of the superlattice, we now turn to
its magnetic structure. In short, the magnetic structure remains almost
unchanged from the parent systems: SL has $ab$-plane canted AF structure while
DL has $c$-axis collinear AF structure, as shown in Fig. 4(a). The net
ferromagnetic response to the dc field with the saturation moment close to
one-third of that of Sr2IrO4 [Figs. 4(b) and 4(c)] suggests that the SL (which
makes up one-third of the superlattice in terms of the number of Ir ions) has
in-plane canted AF structure, while DL has $c$-axis collinear AF structure. We
note that the AF ordering in Sr3Ir2O7 is visible as a small jump in the
magnetization due to its slight monoclinicity $\beta$ $\sim$ 90.05$^{\circ}$
(and the resulting canting of the moments [28, 30]), but no such anomaly
indicative of DL magnetic ordering is seen in our tetragonal superlattice
[Fig. 4(b)]. Unlike in Sr2IrO4, we observe a ferromagnetic hysteresis loop in
the M-H curve shown in Fig. 4(c), which implies ferromagnetic stacking of the
SL net moments. Based on
our M-H and M-T curves [Figs. 4(c) and 4(d)] measured for fields along [100]
and [110] directions, we are not able to identify the magnetic easy axis in
the $ab$-plane, which in Sr2IrO4 is clearly along the $a$-axis [24, 23]. This
signifies reduced magnetic anisotropy, which will be discussed in more detail
later on.
Figure 6: (color online). Magnetic excitations in the superlattice. (a) RIXS
map measured at T = 10 K along high-symmetry directions indicates that the SL
and DL modes follow the dispersions of the parent systems (plotted with
markers). (b) Spectra at selected $\bf{q}$ points. The fitted peaks (solid
lines) are compared with those of the parent systems (markers). (c) The
superlattice spectrum at the zone corner ($\pi$,0) (navy line) is well
reproduced by a linear sum of Sr2IrO4 and Sr3Ir2O7 spectra (black line). (d)
The two-magnon Raman spectrum (navy dots), measured in the $B_{\textrm{2g}}$
channel at T = 15 K, is also well approximated by summing the Sr2IrO4 and
Sr3Ir2O7 spectral intensities (black line).
We confirm the magnetic structure shown in Fig. 4(a) using RXD. As in the case
of Sr2IrO4 and Sr3Ir2O7, magnetic reflections are found along (1 0 L) [Fig.
5(a)], but at every integer L, with contributions from both SL and DL.
However, they can be separated by exploiting the DL structure factor. The
ratio of the Ir-O-Ir bond length along the $c$-axis to the $c$ lattice
parameter yields an intensity oscillation along L with a period of $\sim$
4.15. For example, the DL contribution nearly vanishes at (1 0 8) and
(1 0 21). Indeed, the L scans shown in Fig. 5(b) show that the (1 0 21) peak
disappears around T = 220 K as the temperature increases, while the (1 0 22)
peak persists up to T = 250 K. At this temperature, the intensity modulation
agrees well with the DL structure factor squared [Fig. 5(c)], implying that
the peaks are due to reflections from DL only. This is unambiguous evidence
for coherent superlattice formation over the x-ray probing depth (290 nm
$\sim$ 3.1 $\mu$m as calculated in Ref. 39). The SL transition temperature is
measured to be $T_{N}^{A}$ = 220 K from the temperature dependence of the
(1 0 8) peak shown in Fig. 5(d). At (1 0 10), two transitions are seen, the
higher-temperature one at $T_{N}^{B}$ = 280 K being the transition in DL.
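The structure-factor argument above can be sketched numerically. As an illustration (assuming the simplest model, in which the two Ir layers of a bilayer at fractional heights $\delta$ and $1-\delta$, with $\delta$ = 0.3791 as in Table 1, carry antiparallel moments), the DL magnetic intensity along (1 0 L) goes as $\sin^{2}[\pi L(1-2\delta)]$, oscillating with period $1/(1-2\delta)\approx 4.14$ and nearly vanishing close to L = 8 and L = 21:

```python
import math

DELTA = 0.3791  # fractional z of the lower Ir layer in a bilayer (Table 1)

def dl_intensity(L, delta=DELTA):
    """|F_DL|^2 for antiparallel bilayer moments at z = delta and 1 - delta.

    F ~ exp(2*pi*i*L*delta) - exp(2*pi*i*L*(1 - delta))
      ~ sin(pi*L*(1 - 2*delta))  up to a global phase.
    """
    return math.sin(math.pi * L * (1 - 2 * delta)) ** 2

period = 1 / (1 - 2 * DELTA)  # ~4.14, close to the ~4.15 quoted in the text
print(f"oscillation period ~ {period:.2f}")
for L in (8, 10, 21, 22):
    print(f"L = {L:2d}: |F_DL|^2 = {dl_intensity(L):.3f}")
```

The near-zeros at L = 8 and 21 and the large values at L = 10 and 22 mirror the separation of SL- and DL-dominated reflections used above.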
Additional measurements using polarization analysis were conducted to
separate the $ab$-plane and $c$-axis components of the antiferromagnetic
moments. For the magnetic (0 5 2) reflection studied in a horizontal
scattering geometry [Fig. 5(e)], the $\pi$-$\sigma^{\prime}$ channel mostly
detects in-plane moments, whereas the $\pi$-$\pi^{\prime}$ channel is
sensitive to out-of-plane moments. The temperature dependence of the
integrated intensities in the two channels, shown in Fig. 5(e), reveals that
the out-of-plane (in-plane) magnetic signal arises below $T_{N}^{B}$ = 280 K
($T_{N}^{A}$ = 220 K) and is thus located in DL (SL), consistent with the
magnetic structure in Fig. 4(a).
## IV Pseudospin dynamics
Having established the static magnetic structure, we now turn to the
pseudospin dynamics. Figure 6(a) plots the pseudospin excitation spectrum
along high-symmetry directions. Spectra at selected $\mathbf{q}$ points are
shown in Fig. 6(b). They are fitted with two peaks, whose energy positions are
compared with the spin-wave dispersions for the parent systems. Overall, the
spectra are well described as having two peaks corresponding to spin waves in
SL and DL. It is known that Sr2IrO4 is almost gapless at the magnetic zone
center reflecting the quasi-continuous symmetry in the $ab$-plane [16],
whereas Sr3Ir2O7 has an exceptionally large gap of $\approx$ 90 meV (Ref. 40).
Except in the immediate vicinity of the magnetic zone center, we find a good
agreement with the parent systems. In particular, the spectrum at the zone
corner ($\pi$, 0) is well reproduced by a linear sum of the Sr2IrO4 and
Sr3Ir2O7 spectra at the same momentum, as shown in Fig. 6(c). Further, the two-
magnon spectrum measured by Raman scattering [Fig. 6(d)], which is dominated
by zone-boundary modes, is in perfect agreement with a linear sum of those in
Sr2IrO4 and Sr3Ir2O7. Together, these data indicate that the nearest-neighbor
spin exchange couplings remain unaltered with respect to the parent systems.
Figure 7: (color online). Pseudospin-lattice coupling estimated by Raman
spectroscopy. (a) The magnetic mode at the zone center is observed in
$B_{\textrm{2g}}$ scattering geometry (superlattice, navy; Sr2IrO4, blue
gray). (b) The temperature dependent component of the spin-wave gap
$\Delta(T)$ extracted from the Raman spectra in (a). The trends of the
superlattice and Sr2IrO4 share the same functional form
$A\sqrt{1-T/T_{N}}+B$, where $A$ sets the offset in the log-log plot and is
proportional to the strength of the PLC.
Temperature dependence of $A_{\textrm{1g}}$ phonons in (c) Sr3Ir2O7 and (d)
the superlattice. (e) Integrated intensity of the lower energy
$A_{\textrm{1g}}$ phonon. The spin-dependent component scales with the ordered
moment squared $M_{AF}^{2}$, and dashed lines are fits to functional form of
$1-(T/T_{N})$, whose slopes are proportional to the PLC strength $\Lambda$.
All Raman spectra are corrected for laser heating and Bose factors.
It is perhaps not surprising that there is no significant change in the
pseudospin dynamics across the majority of the Brillouin zone for both SL and
DL, considering that these are quasi-2D systems and only the stacking sequence
is altered. However, it is rather unusual that the two magnetic subsystems
behave almost independently of each other. In particular, magnetic ordering in
DL occurs across SL layers that remain paramagnetic down to $T_{N}^{A}$. That
no other static order exists in SL above $T_{N}^{A}$ is confirmed by its
almost identical RIXS spectra measured across $T_{N}^{A}$ (Fig. S4). Our
representation analysis (Appendix B) shows that the SL and the DL magnetic
orders belong to different irreducible representations, which guarantees that
they are not coupled in the linear order (this, however, does not rule out
interactions between SL and DL). Further, we note that both $T_{N}^{A}$ and
$T_{N}^{B}$ are reduced by only a few degrees from their bulk counterparts,
which confirms that interlayer couplings play a minor role in the critical
phenomena [8, 41, 42].
While the magnetic properties of the superlattice are mostly inherited from
the parent Sr2IrO4 and Sr3Ir2O7, some differences are also found. Unlike in
the case of Sr2IrO4, the in-plane magnetic anisotropy of SL is hardly visible in
the magnetometry data [Fig. 4(c)]. In Sr2IrO4, the magnetic anisotropy is
known to arise mostly from PLC [23, 24] stabilizing the magnetic ordering
along the $a$\- or $b$-axis. The PLC dominates over the nearest-neighbor
pseudospin exchange anisotropies favoring alignment along the bond directions
(i.e. along [110]), and also accounts for a dominant part of the in-plane
magnon gap [24]. To compare the strength of PLC, we analyze the temperature
dependence of Raman spectra in $B_{\textrm{2g}}$ symmetry channel, which
measures the magnon gap at the zone center [see Fig. 7(a)]. It is seen in Fig.
7(b) that the magnon gap of both Sr2IrO4 and the superlattice follow the same
mean-field-like functional form of $A\sqrt{1-T/T_{N}}+B$, where the
temperature-independent constant $B$ arises from anisotropic magnetic
interactions, and $A$ measures the strength of PLC that scales with the size
of the ordered moment [23, 24, 25]. From the intercept at T = 0 in the log-log
plot shown in Fig. 7(b), we find that the PLC is reduced by roughly a factor
of two. The reduced PLC in SL can be understood in terms of the
structural rigidity provided by DL resisting orthorhombic distortions, which
would be energetically unfavorable for the $c$-axis collinear AF structure.
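Since the model $\Delta(T)=A\sqrt{1-T/T_{N}}+B$ is linear in $(A,B)$ once $x=\sqrt{1-T/T_{N}}$ is formed, $A$ can be extracted by ordinary linear least squares. The sketch below uses synthetic data with illustrative values (the numbers are assumptions for demonstration, not our measured gaps):

```python
import math
import random

def fit_gap(temps, gaps, T_N):
    """Linear least squares for Delta(T) = A*sqrt(1 - T/T_N) + B.

    With x = sqrt(1 - T/T_N) the model is linear in (A, B), so the
    normal equations give A and B in closed form.
    """
    xs = [math.sqrt(1 - T / T_N) for T in temps]
    n = len(xs)
    sx, sy = sum(xs), sum(gaps)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, gaps))
    A = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    B = (sy - A * sx) / n
    return A, B

# Synthetic example (illustrative values only):
random.seed(0)
T_N = 220.0
temps = [10.0 * k for k in range(1, 21)]          # 10 K ... 200 K
gaps = [2.0 * math.sqrt(1 - T / T_N) + 0.5 + random.gauss(0, 0.02)
        for T in temps]
A, B = fit_gap(temps, gaps, T_N)
print(f"A = {A:.2f} (PLC scale), B = {B:.2f} (exchange anisotropy)")
```

Comparing the fitted $A$ between samples, as done in Fig. 7(b), then gives the relative PLC strength directly.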
An indication of suppressed PLC can also be found from a phonon mode strongly
coupled to the AF order in DL. In Sr3Ir2O7, a recent ultrafast optical
spectroscopy study has shown that strong PLC manifests as an abrupt large
enhancement across $T_{N}$ of the amplitude of the coherent oscillation of the
impulsive 4.4 THz ($\approx$ 18 meV) $A_{\textrm{1g}}$ phonon mode [25]. The
intensity follows the phenomenological relation $\sqrt{I(T)}\propto
Z-(\Lambda/4\mu_{B}^{2})M_{AF}^{2}$, where $\Lambda$ and $Z$ are the PLC
strength and temperature-independent spectral weight, respectively [43]. The
strength of PLC in Sr3Ir2O7 is estimated to be two orders of magnitude
stronger than in cubic perovskite manganites with strong spin-lattice
couplings [44]. Since the oscillation amplitude depends linearly on the Raman
tensor $\partial\alpha/\partial Q$ [45], where $\alpha$ is the polarizability
and $Q$ is the normal coordinate of the phonon mode, the enhancement should
also be visible in the Raman spectra. Indeed, we observe a strong enhancement
upon cooling of the 18 meV $A_{\textrm{1g}}$ mode intensity, as shown in Fig. 7(c).
However, the corresponding mode in the superlattice at $\approx$ 15 meV shows
only a modest increase comparable to that of the 23 meV mode [Fig. 7(e)].
Taken as a whole, the absence of discernible anisotropy in the magnetization
data, the reduced magnon gap, and the insensitivity of the $A_{\textrm{1g}}$
mode to the magnetic order all consistently point to a significant reduction
of the PLC in the superlattice.
## V Summary
Recent advances in the control and understanding of 2D materials have
expanded into the study of magnetism in the extreme quantum 2D limit and in
artificial heterostructures. In this work, we have demonstrated the successful
growth of a Sr2IrO4/Sr3Ir2O7 magnetic superlattice, whose constituent bulk
phases exhibit novel quantum magnetism in the limit of strong spin-orbit
coupling. While intergrowth of different phases in a RP series is frequently
found, the natural formation of an ordered superlattice is extremely rare. The
superlattice has a lattice symmetry higher than those of the bulk phases and
realizes an undistorted square-lattice geometry. Thus, the superlattice offers
a unique opportunity to study pseudospin physics in an ideal setting.
The superlattice preserves the bulk magnetic orders, interleaving $ab$-plane
canted AF and $c$-axis collinear AF alternately stacked along the $c$-axis.
The two mutually orthogonal orders, however, behave independently of each
other, reflecting the weak interlayer couplings expected for quasi-2D systems.
In particular, there is a temperature range in which only one magnetic
subsystem develops an order across the other, which remains paramagnetic.
Further, the pseudospin dynamics remains unchanged from the parent systems
over most of the Brillouin zone.
Instead, the incompatible nature of the magnetic orders manifests as a strong
suppression of the PLC, which is expected to be generally strong for iridates.
The reduced PLC in SL is inferred from the smaller zone-center magnon gap,
which in Sr2IrO4 is largely accounted for by the PLC through a pseudo-Jahn-
Teller effect that results in an orthorhombic distortion as the $ab$-plane
magnetic order breaks the tetragonal symmetry of the lattice. This effect,
however, is counteracted in the superlattice by DL. The magnetic order in DL
is collinear along the straight $c$-axis bond, which in Sr3Ir2O7 is bent due
to a slight monoclinicity.
The origin of the distortions in the parent compounds and their absence in the
superlattice is a subject of future research. To the best of our knowledge,
the breaking of glide symmetries as seen in Sr2IrO4, attributed to staggered
tetragonal distortions, is not found in any other transition-metal compounds
of the RP type, which suggests that the distortion is not due to an
instability of the lattice, but rather its interactions with pseudospins whose
magnetic moment size strongly depends on lattice distortions. Similarly, it
remains to be understood whether the PLC plays any role in stabilizing the
monoclinic structure in Sr3Ir2O7. At any rate, many iridates in bulk form exhibit lattice
distortions of some sort and deviate from idealized models, and in this regard
the superlattice stands out as a realization of pseudospin one-half physics on
an undistorted square lattice.
The persistent magnetic orders in the superlattice also allow investigation
of their evolution upon carrier doping. It is known that Sr3Ir2O7 is on the
verge of a metal-insulator transition, and thus it may well be that DL turns
into a Fermi-liquid metal while SL remains a correlated fluid, a situation
akin to that of electron-doped Sr2IrO4 via surface alkali-metal dosing, where
Fermi arcs and a $d$-wave gap are observed, possibly involving screening
effects from the surface metallic layer. Our results presented here provide a
solid foundation for these future investigations.
## VI Methods
STEM measurements were conducted using a JEM-ARM200F (JEOL) at 200 kV with a
fifth-order probe corrector (ASCOR, CEOS GmbH, Germany). The specimens were
prepared by dual-beam focused ion beam (FIB) milling with Ga ion beams of 1
$\sim$ 5 kV (Nanolab G3 CX, FEI), and further milled by Ar ion beams of 100
meV (PIPS II, Gatan) to minimize surface damage. RXD and RIXS measurements
were carried out at beamlines 6-ID-B and 27-ID, respectively, of the Advanced
Photon Source, Argonne National Laboratory. For the RA-THG measurements, we
adapted the scheme reported in Ref. [37], replacing the phase mask with a pair
of wedge prisms to accommodate a wider wavelength range of incident light
(from 800 nm to 1200 nm). The incident 1200 nm light was provided by an
optical parametric amplifier (Orpheus-F, Light Conversion) powered by a
femtosecond laser with a 100 kHz repetition rate (Pharos, Light Conversion).
Raman spectra were measured with a 633 nm He-Ne laser, whose 0.8 mW beam was
focused to a spot of $\sim$2 $\mu$m.
## Acknowledgement
This project is supported by IBS-R014-A2 and National Research Foundation
(NRF) of Korea through the SRC (no. 2018R1A5A6075964). We acknowledge
financial support by the European Research Council under Advanced Grant No.
669550 (Com4Com). The use of the Advanced Photon Source at the Argonne
National Laboratory was supported by the U. S. DOE under Contract No. DE-
AC02-06CH11357. We acknowledge DESY (Hamburg, Germany), a member of the
Helmholtz Association HGF, for the provision of experimental facilities. J. B.
was supported by the Alexander von Humboldt Foundation. G. Noh, G.-Y. Kim, and
S.-Y. Choi acknowledge the support of the Global Frontier Hybrid Interface
Materials by the NRF (Grant No. 2013M3A6B1078872).
## APPENDIX A: Third Harmonic Generation Tensor
We obtain the general form of the bulk electric-dipole-induced THG tensor for
the 4/$mmm$ point group by starting from the most general form of a fourth-
rank Cartesian tensor and requiring it to be invariant under all symmetry
operations of the group:
$\chi^{ED}_{ijkl}=\begin{pmatrix}\begin{pmatrix}xxxx&0&0\\0&xxyy&0\\0&0&xxzz\end{pmatrix}&\begin{pmatrix}0&xxyy&0\\xxyy&0&0\\0&0&0\end{pmatrix}&\begin{pmatrix}0&0&xxzz\\0&0&0\\xxzz&0&0\end{pmatrix}\\\begin{pmatrix}0&xxyy&0\\xxyy&0&0\\0&0&0\end{pmatrix}&\begin{pmatrix}xxyy&0&0\\0&xxxx&0\\0&0&xxzz\end{pmatrix}&\begin{pmatrix}0&0&0\\0&0&xxzz\\0&xxzz&0\end{pmatrix}\\\begin{pmatrix}0&0&zxxz\\0&0&0\\zxxz&0&0\end{pmatrix}&\begin{pmatrix}0&0&0\\0&0&zxxz\\0&zxxz&0\end{pmatrix}&\begin{pmatrix}zxxz&0&0\\0&zxxz&0\\0&0&zzzz\end{pmatrix}\end{pmatrix}\;,$ (A1)
where $jkl$ ($i$) stands for the incident (scattered) light polarizations. In
our experiment, the light is incident on the sample surface, which is normal
to the $c$-axis, with incidence angle $\theta$ and azimuthal angle $\psi$,
defined to be zero when the scattering plane contains the $a$ and $c$ axes.
The polarization vectors are then given by

$\displaystyle\vec{\epsilon}_{s}=(\sin\psi,\,-\cos\psi,\,0)\;,$
$\displaystyle\vec{\epsilon}_{in,p}=(-\cos\theta\cos\psi,\,-\cos\theta\sin\psi,\,\sin\theta)\;,$ (A2)
$\displaystyle\vec{\epsilon}_{out,p}=(\cos\theta\cos\psi,\,\cos\theta\sin\psi,\,\sin\theta)\;.$
Multiplying the tensor with the polarization vectors, the expressions for the
THG intensity simplify to

$\displaystyle I^{SS}(3\omega)\,\sim\,|A+B\cos(4\psi)|^{2}\;,$
$\displaystyle I^{PS}(3\omega)\,\sim\,|B\sin(4\psi)|^{2}\;,$ (A3)
$\displaystyle I^{SP}(3\omega)\,\sim\,|B\sin(4\psi)|^{2}\;,$
$\displaystyle I^{PP}(3\omega)\,\sim\,|A^{\prime}+B\cos(4\psi)|^{2}\;,$
where $A$, $A^{\prime}$ and $B$ are adjustable parameters. The formulae are
consistent with the 4/$mmm$ point-group symmetry, being invariant under both
mirror reflections ($\psi$ $\rightarrow$ $\pi-\psi$ and $\psi$ $\rightarrow$
$-\psi$) and the $C_{4}$ rotation about the $c$-axis ($\psi$ $\rightarrow$
$\pi/2+\psi$). The full expressions and those for lower-symmetry point groups
are given in the Supplemental Material.
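The symmetry content of Eqs. (A3) is straightforward to verify numerically: both mirrors and the $C_{4}$ rotation leave all four intensities invariant, and the $PS$/$SP$ channels are additionally $\pi/4$-periodic, producing the eight-fold patterns of Fig. 3. A minimal check, with arbitrary illustrative fit parameters:

```python
import math

A_P, B = 0.8, 0.6   # arbitrary illustrative fit parameters

def I_PP(psi):
    return abs(A_P + B * math.cos(4 * psi)) ** 2

def I_PS(psi):
    return abs(B * math.sin(4 * psi)) ** 2

for psi in [0.1 * k for k in range(63)]:
    # mirrors psi -> -psi and psi -> pi - psi, and C4: psi -> psi + pi/2
    for g in (-psi, math.pi - psi, psi + math.pi / 2):
        assert math.isclose(I_PP(psi), I_PP(g), abs_tol=1e-12)
        assert math.isclose(I_PS(psi), I_PS(g), abs_tol=1e-12)
    # PS is additionally pi/4-periodic -> eight-fold pattern
    assert math.isclose(I_PS(psi), I_PS(psi + math.pi / 4), abs_tol=1e-12)
print("4/mmm symmetries verified; PS/SP patterns are eight-fold")
```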
| Irrep. | Single layer a1 | Single layer a2 | Double layer b1 | Double layer b2 | Double layer b3 | Double layer b4 |
|---|---|---|---|---|---|---|
| $\Gamma_{1}$ | (m1, m2, 0) | (-m1, m2, 0) | (m3, m4, 0) | (m5, m6, 0) | (-m3, m4, 0) | (-m5, m6, 0) |
| | (-m2, m1, 0) | (-m2, -m1, 0) | (-m6, -m5, 0) | (-m4, -m3, 0) | (-m6, m5, 0) | (-m4, m3, 0) |
| $\Gamma_{2}$ | | | (0, 0, m1) | (0, 0, -m1) | (0, 0, m1) | (0, 0, -m1) |
| $\Gamma_{3}$ | | | (0, 0, m1) | (0, 0, m1) | (0, 0, -m1) | (0, 0, -m1) |
| $\Gamma_{4}$ | (0, 0, m1) | (0, 0, m1) | (0, 0, m2) | (0, 0, m2) | (0, 0, m2) | (0, 0, m2) |
| $\Gamma_{5}$ | (0, 0, m1) | (0, 0, -m1) | (0, 0, -m2) | (0, 0, m2) | (0, 0, m2) | (0, 0, -m2) |
Table 1: Irreducible representations and basis vectors for the magnetic
structures allowed in the superlattice. $m_{i}$ ($i=1,\cdots,6$) are
independent parameters for the magnetic moments. Iridium ions are located at
a1:(0,0,0), a2:(1/2,1/2,0), b1:(0,1/2,$\delta$), b2:(0,1/2,1-$\delta$),
b3:(1/2,0,$\delta$), b4:(1/2,0,1-$\delta$) in the unit cell of Fig. 1(c),
where $\delta$ = 0.3791. The magnetic structure in Fig. 4(a) comprises the
canted $ab$-plane AF of $\Gamma_{1}$ and the $c$-axis collinear AF of
$\Gamma_{5}$.
## APPENDIX B: Representation Analysis of the magnetic structure
We present a representation analysis based on the lattice structure shown in
Fig. 1(c), assuming $\mathbf{q}$ = 0 propagation vector based on the fact that
all observed magnetic reflections have integer Miller indices. Its space group
is $P\bar{4}b$2, which lacks inversion symmetry and thus its point group
($\bar{4}m2$) is of lower symmetry than 4/$mmm$. RA-THG is, however,
insensitive to the inversion symmetry and has the same tensor structure for
the two point groups. The inversion symmetry is broken by the way in which
octahdera are rotated in DL in the current model. An inversion symmetric
structure model can be made by doubling the unit cell in such a way that the
octahedral rotation is opposite on two neighboring DLs. Thus, the
determination of the presence of inversion symmetry requires full structure
refinement including the octahedral rotation on each layers, which is beyond
the scope of this work. However, our result that the magnetic orders in SL and
DL belong to different irreducible representations (IR) is not affected by
these details.
In the superlattice, iridium ions in SL and DL are not symmetry related and
thus their magnetic structures are analyzed separately. The result of the
analysis is shown in Table 1. In both SL and DL, the $ab$-plane and $c$-axis
components belong to different irreducible representations. This can easily be
seen from the transformation property under the two-fold rotation about the
$c$-axis: the $ab$-plane moments are flipped, whereas the $c$-axis moments
remain invariant. As long as the $ab$-plane and $c$-axis components are not
mixed by any of the symmetry operations contained in the space group, their
different characters for the two-fold rotation guarantee that they belong to
different IRs. A more detailed version of the analysis is presented in the
supplemental material.
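The character argument can be checked directly. A magnetic moment is an axial vector, which under a proper rotation $R$ (det $R$ = +1) transforms as $m\rightarrow Rm$; for the two-fold rotation about the $c$-axis this flips the $ab$-plane components and leaves the $c$ component invariant. A minimal sketch:

```python
# C2 rotation about the c-axis acting on a magnetic moment (axial vector):
# for a proper rotation R (det R = +1), an axial vector transforms as m -> R m.
C2Z = [[-1, 0, 0],
       [0, -1, 0],
       [0, 0, 1]]

def rotate(R, m):
    return tuple(sum(R[i][j] * m[j] for j in range(3)) for i in range(3))

m_ab = (1.0, 2.0, 0.0)   # ab-plane moment: flipped, character -1
m_c = (0.0, 0.0, 1.0)    # c-axis moment: invariant, character +1

print(rotate(C2Z, m_ab))
print(rotate(C2Z, m_c))
```

Since the two components carry different characters under this operation, no symmetry operation of the space group can mix them, as stated above.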
## References
* Anderson [1987] P. W. Anderson, The Resonating Valence Bond State in $\mathrm{La}_{2}\mathrm{CuO}_{4}$ and Superconductivity, Science 235, 1196–1198 (1987).
* Lee _et al._ [2006] P. A. Lee, N. Nagaosa, and X.-G. Wen, Doping a Mott insulator: Physics of high-temperature superconductivity, Rev. Mod. Phys. 78, 17–85 (2006).
* Keimer _et al._ [2015] B. Keimer, S. A. Kivelson, M. R. Norman, S. Uchida, and J. Zaanen, From quantum matter to high-temperature superconductivity in copper oxides, Nature 518, 179–186 (2015).
* Plakida [2010] N. Plakida, _High-Temperature Cuprate Superconductors_ (Springer, 2010).
* Orendáčová _et al._ [2019] A. Orendáčová, R. Tarasenko, V. Tkáč, E. Čižmár, M. Orendáč, and A. Feher, Interplay of Spin and Spatial Anisotropy in Low-Dimensional Quantum Magnets with Spin 1/2, Crystals 9, 6 (2019).
* Bertinshaw _et al._ [2019] J. Bertinshaw, Y. K. Kim, G. Khaliullin, and B. J. Kim, Square Lattice Iridates, Annu. Rev. Condens. Matter Phys. 10, 315–336 (2019).
* Kim _et al._ [2008] B. J. Kim, H. Jin, S. J. Moon, J.-Y. Kim, B.-G. Park, C. S. Leem, J. Yu, T. W. Noh, C. Kim, S.-J. Oh, J.-H. Park, V. Durairaj, G. Cao, and E. Rotenberg, Novel ${J}_{\mathrm{eff}}=1/2$ Mott State Induced by Relativistic Spin-Orbit Coupling in ${\mathrm{Sr}}_{2}{\mathrm{IrO}}_{4}$, Phys. Rev. Lett. 101, 076402 (2008).
* Jackeli and Khaliullin [2009] G. Jackeli and G. Khaliullin, Mott Insulators in the Strong Spin-Orbit Coupling Limit: From Heisenberg to a Quantum Compass and Kitaev Models, Phys. Rev. Lett. 102, 017205 (2009).
* Kim _et al._ [2009] B. J. Kim, H. Ohsumi, T. Komesu, S. Sakai, T. Morita, H. Takagi, and T. Arima, Phase-Sensitive Observation of a Spin-Orbital Mott State in ${\mathrm{Sr}}_{2}{\mathrm{IrO}}_{4}$, Science 323, 1329–1332 (2009).
* Wang and Senthil [2011] F. Wang and T. Senthil, Twisted Hubbard Model for ${\mathrm{Sr}}_{2}{\mathrm{IrO}}_{4}$: Magnetism and Possible High Temperature Superconductivity, Phys. Rev. Lett. 106, 136402 (2011).
* Kim _et al._ [2014] Y. K. Kim, O. Krupin, J. D. Denlinger, A. Bostwick, E. Rotenberg, Q. Zhao, J. F. Mitchell, J. W. Allen, and B. J. Kim, Fermi arcs in a doped pseudospin-1/2 Heisenberg antiferromagnet, Science 345, 187–190 (2014).
* Yan _et al._ [2015] Y. J. Yan, M. Q. Ren, H. C. Xu, B. P. Xie, R. Tao, H. Y. Choi, N. Lee, Y. J. Choi, T. Zhang, and D. L. Feng, Electron-Doped ${\mathrm{Sr}}_{2}{\mathrm{IrO}}_{4}$: An Analogue of Hole-Doped Cuprate Superconductors Demonstrated by Scanning Tunneling Microscopy, Phys. Rev. X 5, 041018 (2015).
* Cao _et al._ [2016] Y. Cao, Q. Wang, J. A. Waugh, T. J. Reber, H. Li, X. Zhou, S. Parham, S. R. Park, N. C. Plumb, E. Rotenberg, A. Bostwick, J. D. Denlinger, T. Qi, M. A. Hermele, G. Cao, and D. S. Dessau, Hallmarks of the Mott-metal crossover in the hole-doped pseudospin-1/2 Mott insulator ${\mathrm{Sr}}_{2}{\mathrm{IrO}}_{4}$, Nat. Commun. 7, 11367 (2016).
* Kim _et al._ [2016] Y. K. Kim, N. H. Sung, J. D. Denlinger, and B. J. Kim, Observation of a d-wave gap in electron-doped ${\mathrm{Sr}}_{2}{\mathrm{IrO}}_{4}$, Nat. Phys. 12, 37–41 (2016).
* Sumita _et al._ [2017] S. Sumita, T. Nomoto, and Y. Yanase, Multipole Superconductivity in Nonsymmorphic ${\mathrm{Sr}}_{2}{\mathrm{IrO}}_{4}$, Phys. Rev. Lett. 119, 027001 (2017).
* Kim _et al._ [2012a] J. Kim, D. Casa, M. H. Upton, T. Gog, Y.-J. Kim, J. F. Mitchell, M. van Veenendaal, M. Daghofer, J. van den Brink, G. Khaliullin, and B. J. Kim, Magnetic Excitation Spectra of ${\mathrm{Sr}}_{2}{\mathrm{IrO}}_{4}$ Probed by Resonant Inelastic X-Ray Scattering: Establishing Links to Cuprate Superconductors, Phys. Rev. Lett. 108, 177003 (2012a).
* Fujiyama _et al._ [2012] S. Fujiyama, H. Ohsumi, T. Komesu, J. Matsuno, B. J. Kim, M. Takata, T. Arima, and H. Takagi, Two-Dimensional Heisenberg Behavior of ${J}_{\mathrm{eff}}\mathbf{=}1/2$ Isospins in the Paramagnetic State of the Spin-Orbital Mott Insulator ${\mathrm{Sr}}_{2}{\mathrm{IrO}}_{4}$, Phys. Rev. Lett. 108, 247212 (2012).
* Jeong _et al._ [2017] J. Jeong, Y. Sidis, A. Louat, V. Brouet, and P. Bourges, Time-reversal symmetry breaking hidden order in ${\mathrm{Sr}}_{2}{\mathrm{(Ir,Rh)O}}_{4}$, Nat. Commun. 8, 15119 (2017).
* Murayama _et al._ [2021] H. Murayama, K. Ishida, R. Kurihara, T. Ono, Y. Sato, Y. Kasahara, H. Watanabe, Y. Yanase, G. Cao, Y. Mizukami, T. Shibauchi, Y. Matsuda, and S. Kasahara, Bond Directional Anapole Order in a Spin-Orbit Coupled Mott Insulator ${\mathrm{Sr}}_{2}({\mathrm{Ir}}_{1-x}{\mathrm{Rh}}_{x}){\mathrm{O}}_{4}$, Phys. Rev. X 11, 011021 (2021).
* Zhao _et al._ [2016] L. Zhao, D. H. Torchinsky, H. Chu, V. Ivanov, R. Lifshitz, R. Flint, T. Qi, G. Cao, and D. Hsieh, Evidence of an odd-parity hidden order in a spin–orbit coupled correlated iridate, Nat. Phys. 12, 32–36 (2016).
* Seyler _et al._ [2020] K. L. Seyler, A. de la Torre, Z. Porter, E. Zoghlin, R. Polski, M. Nguyen, S. Nadj-Perge, S. D. Wilson, and D. Hsieh, Spin-orbit-enhanced magnetic surface second-harmonic generation in ${\mathrm{Sr}}_{2}\mathrm{Ir}{\mathrm{O}}_{4}$, Phys. Rev. B 102, 201113(R) (2020).
* Chen _et al._ [2018] X. Chen, J. L. Schmehr, Z. Islam, Z. Porter, E. Zoghlin, K. Finkelstein, J. P. C. Ruff, and S. D. Wilson, Unidirectional spin density wave state in metallic $({\mathrm{Sr}}_{1-x}{\mathrm{La}}_{x})_{2}{\mathrm{IrO}}_{4}$, Nat. Commun. 9, 103 (2018).
* Liu and Khaliullin [2019] H. Liu and G. Khaliullin, Pseudo-Jahn-Teller Effect and Magnetoelastic Coupling in Spin-Orbit Mott Insulators, Phys. Rev. Lett. 122, 057203 (2019).
* Porras _et al._ [2019] J. Porras, J. Bertinshaw, H. Liu, G. Khaliullin, N. H. Sung, J.-W. Kim, S. Francoual, P. Steffens, G. Deng, M. M. Sala, A. Efimenko, A. Said, D. Casa, X. Huang, T. Gog, J. Kim, B. Keimer, and B. J. Kim, Pseudospin-lattice coupling in the spin-orbit Mott insulator ${\mathrm{Sr}}_{2}{\mathrm{IrO}}_{4}$, Phys. Rev. B 99, 085125 (2019).
* Hu _et al._ [2019] L. L. Hu, M. Yang, Y. L. Wu, Q. Wu, H. Zhao, F. Sun, W. Wang, R. He, S. L. He, H. Zhang, R. J. Huang, L. F. Li, Y. G. Shi, and J. Zhao, Strong pseudospin-lattice coupling in ${\mathrm{Sr}}_{3}{\mathrm{Ir}}_{2}{\mathrm{O}}_{7}$: Coherent phonon anomaly and negative thermal expansion, Phys. Rev. B 99, 094307 (2019).
* Ye _et al._ [2013] F. Ye, S. Chi, B. C. Chakoumakos, J. A. Fernandez-Baca, T. Qi, and G. Cao, Magnetic and crystal structures of ${\mathrm{Sr}}_{2}{\mathrm{IrO}}_{4}$: A neutron diffraction study, Phys. Rev. B 87, 140406(R) (2013).
* Torchinsky _et al._ [2015] D. H. Torchinsky, H. Chu, L. Zhao, N. B. Perkins, Y. Sizyuk, T. Qi, G. Cao, and D. Hsieh, Structural Distortion-Induced Magnetoelastic Locking in ${\mathrm{Sr}}_{2}{\mathrm{IrO}}_{4}$ Revealed through Nonlinear Optical Harmonic Generation, Phys. Rev. Lett. 114, 096404 (2015).
* Hogan _et al._ [2016] T. Hogan, L. Bjaalie, L. Zhao, C. Belvin, X. Wang, C. G. Van de Walle, D. Hsieh, and S. D. Wilson, Structural investigation of the bilayer iridate ${\mathrm{Sr}}_{3}{\mathrm{Ir}}_{2}{\mathrm{O}}_{7}$, Phys. Rev. B 93, 134110 (2016).
* Boseggia _et al._ [2013] S. Boseggia, H. C. Walker, J. Vale, R. Springell, Z. Feng, R. S. Perry, M. M. Sala, H. M. Rønnow, S. P. Collins, and D. F. McMorrow, Locking of iridium magnetic moments to the correlated rotation of oxygen octahedra in ${\mathrm{Sr}}_{2}{\mathrm{IrO}}_{4}$ revealed by x-ray resonant scattering, J. Phys.: Condens. Matter 25, 422202 (2013).
* Cao _et al._ [2002] G. Cao, Y. Xin, C. S. Alexander, J. E. Crow, P. Schlottmann, M. K. Crawford, R. L. Harlow, and W. Marshall, Anomalous magnetic and transport behavior in the magnetic insulator ${\mathrm{Sr}}_{3}{\mathrm{Ir}}_{2}{\mathrm{O}}_{7}$, Phys. Rev. B 66, 214412 (2002).
* Nagai _et al._ [2007] I. Nagai, Y. Yoshida, S. I. Ikeda, H. Matsuhata, H. Kito, and M. Kosaka, Canted antiferromagnetic ground state in ${\mathrm{Sr}}_{3}{\mathrm{Ir}}_{2}{\mathrm{O}}_{7}$, J. Phys.: Condens. Matter 19, 136214 (2007).
* Chaloupka _et al._ [2010] J. Chaloupka, G. Jackeli, and G. Khaliullin, Kitaev-Heisenberg Model on a Honeycomb Lattice: Possible Exotic Phases in Iridium Oxides ${A}_{2}{\mathrm{IrO}}_{3}$, Phys. Rev. Lett. 105, 027204 (2010).
Using the fact that $\|\bm{v}_{s}^{(T)}\|_{2}\leq 1$ in (126) finally yields:
$\displaystyle\mathscr{L}(f_{\bm{W}^{T}})$ $\displaystyle\leq
2(1-\mu)\exp\left(-\tilde{\Omega}\left(\frac{\alpha^{6}}{\beta^{6}\sigma^{6}}\right)\right)+2\mu\exp\left(-\tilde{\Omega}\left(\frac{1}{\sigma^{6}}\right)\right).$
(127)
Since $\exp(-\alpha^{6}/(\beta^{6}\sigma^{6}))\leq\mu/\mathrm{poly}(d)$ and
$\exp(-\tilde{\Omega}(1/\sigma^{6}))\leq 1/\mathrm{poly}(d)$, we obtain the
desired result.
∎
### G.4 Proof of the GD+M induction hypotheses
###### Proof of D.5.
We prove the induction hypotheses for the signal $c_{r}^{(t)}.$
##### Proof of $c_{r}^{(t+1)}\geq-\tilde{O}(\sigma_{0})$.
We know that with high probability, $c_{r}^{(0)}\geq-\tilde{O}(\sigma_{0})$.
By Lemma F.1, $c_{r}^{(t)}$ is a non-decreasing sequence and therefore, we
always have $c_{r}^{(t)}\geq-\tilde{O}(\sigma_{0}).$
##### Proof of $c_{r}^{(t+1)}\leq\tilde{O}(1/\beta)$.
Assume that D.4 is true for $t\in[\mathcal{T}_{1},T).$ We now prove the
induction hypothesis at time $t+1.$ For $\tau\in[t]$, recall that the (3)
update rule is
$\displaystyle c_{r}^{(\tau+1)}$
$\displaystyle=c_{r}^{(\tau)}-\eta\mathcal{G}_{r}^{(\tau+1)}.$ (128)
We sum up (128) for $\tau=\mathcal{T}_{1},\dots,t$ and obtain:
$\displaystyle c_{r}^{(t+1)}$
$\displaystyle=c_{r}^{(\mathcal{T}_{1})}-\eta\sum_{\tau=\mathcal{T}_{1}}^{t}\mathcal{G}_{r}^{(\tau+1)}.$
(129)
We apply the triangle inequality in (129) and obtain:
$\displaystyle|c_{r}^{(t+1)}|$
$\displaystyle\leq|c_{r}^{(\mathcal{T}_{1})}|+\eta\sum_{\tau=\mathcal{T}_{1}}^{t}|\mathcal{G}_{r}^{(\tau+1)}|.$
(130)
We now use D.5 to bound $|c_{r}^{(\mathcal{T}_{1})}|$ in (130):
$\displaystyle|c_{r}^{(t+1)}|$
$\displaystyle\leq\tilde{O}(1/\beta)+\eta\sum_{\tau=\mathcal{T}_{1}}^{t}|\mathcal{G}_{r}^{(\tau+1)}|.$
(131)
We now plug the bound on
$\sum_{\tau=\mathcal{T}_{1}}^{t}|\mathcal{G}_{r}^{(\tau+1)}|$ given by Lemma
J.3. We have:
$\displaystyle|c_{r}^{(t+1)}|$
$\displaystyle\leq\tilde{O}(1/\beta)+\tilde{O}(\eta\alpha\mathcal{T}_{0})+\tilde{O}(\eta\hat{\mu}\beta\mathcal{T}_{1})+\tilde{O}(1)\leq\tilde{O}(1/\beta),$
(132)
where we used
$\tilde{O}(\eta\alpha\mathcal{T}_{0})+\tilde{O}(\eta\hat{\mu}\beta\mathcal{T}_{1})+\tilde{O}(1)\leq
1/\beta.$ This proves the induction hypothesis for $t+1.$ ∎
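The telescoping-plus-triangle-inequality step in (128)–(131) can be illustrated with a small numerical sketch; the random increments below are illustrative stand-ins for $\mathcal{G}_{r}^{(\tau+1)}$, not the actual gradients of the model:

```python
import random

random.seed(0)
eta = 0.1
c = 0.5                        # plays the role of c_r at time T_1
c_init, G_abs_sum = c, 0.0

for _ in range(100):
    G = random.uniform(-1.0, 1.0)   # stand-in for the increment G_r^{(tau+1)}
    c -= eta * G                     # the update rule (128)
    G_abs_sum += abs(G)
    # Triangle inequality as in (130)-(131):
    assert abs(c) <= abs(c_init) + eta * G_abs_sum
```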
## Appendix H Extension to $\lambda>0$
Now we discuss how to extend the result to $\lambda>0$. Since
$\lambda=\frac{1}{N\mathrm{poly}(d)}$ in our setting, the weight decay does
not affect the learning process before
$T=\tilde{\Theta}\left(\frac{1}{\eta\lambda}\right)$ iterations, and the
analysis carries over unchanged.
After iteration $T$, by Lemma I.8 and Lemma J.9, we know that for GD:
$\nu^{(t)}\leq\tilde{O}\left(\lambda\right)$
and for GD + M:
$\nu^{(t)}\leq\tilde{O}\left(\lambda/\beta^{2}\right).$
For GD, we just need to maintain that $c^{(t)}=\tilde{O}(1/\alpha)$ and
$\Xi_{i}^{(t)}=\tilde{\Omega}(1)$. To see this, we know that if
$c^{(t)}=\tilde{\Omega}(1/\alpha)$, then
$c^{(t+1)}\leq(1-\eta\lambda)c^{(t)}+\eta\tilde{O}\left(\nu^{(t)}\frac{\beta^{3}}{\alpha^{2}}\right)\leq c^{(t)}.$
To show that $\Xi_{i}^{(t)}=\tilde{\Omega}(1)$, assuming that
$\Xi_{i}^{(t)}=1/\mathrm{polylog}(d)$, we know that
$\Xi_{i}^{(t+1)}\geq(1-\eta\lambda)\Xi_{i}^{(t)}+\tilde{\Omega}\left(\eta\frac{1}{N}\right)\geq\Xi_{i}^{(t)}+\tilde{\Omega}\left(\eta\frac{1}{N}\right).$
Similarly, for GD + M, since
$\nu^{(t)}\leq\tilde{O}\left(\lambda/(\beta^{2})\right)$, we know that
$\|\nabla\hat{L}(W^{(t)})\|_{2}\leq\tilde{O}\left(\lambda\alpha^{3}/\beta^{2}\right).$
This implies that
$\|W^{(t+1)}-W^{(t)}\|_{2}\leq\tilde{O}\left(\eta\lambda\alpha^{3}/\beta^{2}\right).$
We need to show that $c^{(t)}=\tilde{\Omega}(1/\beta)$ and all
$|\Xi_{i,j,r}^{(t)}|\leq\tilde{O}(\sigma_{0}\sigma\sqrt{d})$. To see this,
note that when $c^{(t)}=\Theta\left(\frac{1}{\beta}\right)$, we have
$c^{(t-t_{0})}=\Theta\left(\frac{1}{\beta}\right)$ for every
$t_{0}\leq\frac{1}{\gamma}$. This implies that
$c^{(t+1)}\geq c^{(t)}-O\left(\eta\lambda\frac{1}{\beta}\right)+\Omega\left(\frac{\eta}{N}\beta\right)\geq c^{(t)}+\Omega\left(\frac{\eta}{N}\beta\right).$
On the other hand, for $\Xi_{i,j,r}^{(t)}$ we know that:
$|\Xi_{i,j,r}^{(t+1)}|\leq(1-\eta\lambda)|\Xi_{i,j,r}^{(t)}|+\tilde{O}\left(\eta\nu^{(t)}\sigma_{0}^{2}(\sigma\sqrt{d})^{2}\right)\leq\tilde{O}(\sigma_{0}\sigma\sqrt{d}).$
## Appendix I Technical lemmas for GD
This section presents the technical lemmas needed in Appendix F. These lemmas
mainly consist of different rewritings of the GD updates.
### I.1 Rewriting derivatives
Using D.1 and D.2, we rewrite the sigmoid terms $\ell_{i}^{(t)}$ when using
GD.
###### Lemma I.1 ($\mathcal{Z}_{1}$ derivative).
Let $i\in\mathcal{Z}_{1}.$ We have
$\ell_{i}^{(t)}=\Theta(1)\widehat{\ell}^{(t)}(\alpha).$
###### Proof of Lemma I.1.
Let $i\in\mathcal{Z}_{1}$. Using D.1, we bound $\ell_{i}^{(t)}$ as
$\displaystyle\frac{1}{1+\exp\left(\alpha^{3}\sum_{s=1}^{m}(c_{s}^{(t)})^{3}+\tilde{O}((\sigma\sigma_{0}\sqrt{d})^{3})\right)}$
$\displaystyle\leq\ell_{i}^{(t)}\leq\frac{1}{1+\exp\left(\alpha^{3}\sum_{s=1}^{m}(c_{s}^{(t)})^{3}-\tilde{O}((\sigma\sigma_{0}\sqrt{d})^{3})\right)}$
$\displaystyle\iff
e^{-\tilde{O}((\sigma\sigma_{0}\sqrt{d})^{3})}\widehat{\ell}^{(t)}(\alpha)$
$\displaystyle\leq\ell_{i}^{(t)}\leq
e^{\tilde{O}((\sigma\sigma_{0}\sqrt{d})^{3})}\widehat{\ell}^{(t)}(\alpha).$
(133)
(133) yields the desired result. ∎
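The sandwich argument in (133) — pulling the $\tilde{O}((\sigma\sigma_{0}\sqrt{d})^{3})$ perturbation out of the exponent at the cost of a multiplicative $e^{\pm\varepsilon}$ factor — can be sanity-checked numerically; the exponent and perturbation values below are arbitrary illustrative choices:

```python
import math

def ell(x):
    """Logistic-type derivative term 1 / (1 + e^x), as in (133)."""
    return 1.0 / (1.0 + math.exp(x))

# For any exponent S and perturbation eps > 0:
#   e^{-eps} * ell(S) <= ell(S + eps) <= ell(S - eps) <= e^{eps} * ell(S).
for S in [-2.0, 0.0, 1.5, 4.0]:
    for eps in [1e-3, 0.05, 0.3]:
        assert math.exp(-eps) * ell(S) <= ell(S + eps)
        assert ell(S + eps) <= ell(S - eps)
        assert ell(S - eps) <= math.exp(eps) * ell(S)
```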
###### Lemma I.2 ($\mathcal{Z}_{2}$ derivative).
Let $i\in\mathcal{Z}_{2}.$ We have
$\ell_{i}^{(t)}=\Theta(1)\widehat{\ell}^{(t)}(\Xi_{i}^{(t)}).$
###### Proof.
Let $i\in\mathcal{Z}_{2}$. Using D.2, we bound $\ell_{i}^{(t)}$ as
$\displaystyle\frac{1}{1+\exp\left(\tilde{O}(\beta^{3}/\alpha^{3})+\Xi_{i}^{(t)}\right)}$
$\displaystyle\leq\ell_{i}^{(t)}\leq\frac{1}{1+\exp\left(-\tilde{O}(\beta^{3}\sigma_{0}^{3})+\Xi_{i}^{(t)}\right)}$
$\displaystyle\iff
e^{-\tilde{O}(\beta^{3}/\alpha^{3})}\widehat{\ell}^{(t)}(\Xi_{i}^{(t)})$
$\displaystyle\leq\ell_{i}^{(t)}\leq
e^{\tilde{O}(\beta^{3}\sigma_{0}^{3})}\widehat{\ell}^{(t)}(\Xi_{i}^{(t)}).$
(134)
(134) yields the desired result. ∎
### I.2 Signal lemmas
In this section, we present a lemma that bounds the sum over time of the GD
increment.
###### Lemma I.3.
Let $t,\mathscr{T}\in[T]$ such that $\mathscr{T}<t.$ Then, the
$\mathcal{Z}_{1}$ derivative is bounded as:
$\displaystyle\sum_{\tau=\mathscr{T}}^{t}\nu_{1}^{(\tau)}\min\\{\kappa,\alpha^{2}(c^{(\tau)})^{2}\\}$
$\displaystyle\leq\tilde{O}\left(\frac{1}{\eta\alpha^{2}}\right)+\tilde{O}\left(\frac{\beta^{3}}{\alpha^{2}}\right)\sum_{\tau=\mathscr{T}}^{t}\nu_{2}^{(\tau)}.$
###### Proof of Lemma I.3.
From Lemma F.4, we know that:
$\displaystyle c^{(t+1)}$ $\displaystyle\geq
c^{(t)}+\tilde{\Theta}(\eta\alpha)\nu_{1}^{(t)}\min\\{\kappa,\alpha^{2}(c^{(t)})^{2}\\}$
(135)
Let $\mathscr{T},t\in[T]$ such that $\mathscr{T}<t.$ We now sum up (135) for
$\tau=\mathscr{T},\dots,t$ and get:
$\displaystyle\sum_{\tau=\mathscr{T}}^{t}\nu_{1}^{(\tau)}\min\\{\kappa,\alpha^{2}(c^{(\tau)})^{2}\\}$
$\displaystyle\leq\tilde{O}\left(\frac{1}{\eta\alpha}\right)(c^{(t+1)}-c^{(\mathscr{T})}).$
(136)
We now consider two cases.
##### Case 1: $t<T_{0}.$
By definition, we know that $c^{(t)}\leq\tilde{O}(1/\alpha).$ Therefore, (136)
yields:
$\displaystyle\sum_{\tau=\mathscr{T}}^{t}\nu_{1}^{(\tau)}\min\\{\kappa,\alpha^{2}(c^{(\tau)})^{2}\\}$
$\displaystyle\leq\tilde{O}\left(\frac{1}{\eta\alpha^{2}}\right).$ (137)
##### Case 2: $t\in[T_{0},T].$
We distinguish two subcases.
* –
Subcase 1: $\mathscr{T}<T_{0}.$ From Lemma 5.3, we know that:
$\displaystyle c^{(t+1)}$
$\displaystyle\leq\tilde{O}(1/\alpha)+\tilde{O}(\eta\beta^{3}/\alpha^{2})\sum_{\tau=T_{0}}^{t}\nu_{2}^{(\tau)}.$
(138)
We can further bound (138) as:
$\displaystyle c^{(t+1)}$
$\displaystyle\leq\tilde{O}(1/\alpha)+\tilde{O}(\eta\beta^{3}/\alpha^{2})\sum_{\tau=\mathscr{T}}^{t}\nu_{2}^{(\tau)},$
(139)
which combined with (136) implies:
$\displaystyle\sum_{\tau=\mathscr{T}}^{t}\nu_{1}^{(\tau)}\min\\{\kappa,\alpha^{2}(c^{(\tau)})^{2}\\}$
$\displaystyle\leq\tilde{O}\left(\frac{1}{\eta\alpha^{2}}\right)+\tilde{O}\left(\frac{\beta^{3}}{\alpha^{2}}\right)\sum_{\tau=\mathscr{T}}^{t}\nu_{2}^{(\tau)}$
(140)
* –
Subcase 2: $\mathscr{T}>T_{0}.$ From Lemma 5.3, we know that:
$\displaystyle c^{(t+1)}$
$\displaystyle\leq\tilde{O}(1/\alpha)+\tilde{O}(\eta\beta^{3}/\alpha^{2})\sum_{\tau=\mathscr{T}}^{t}\nu_{2}^{(\tau)},$
(141)
which combined with (136) yields (140).
We have therefore shown that (140) holds in all cases.
∎
### I.3 Noise lemmas
In this section, we present the technical lemmas needed in subsection F.2. The
following lemma bounds the projection of the GD increment on the noise.
###### Lemma I.4.
Let $i\in[N]$, $j\in[P]\backslash\\{P(\bm{X}_{i})\\}$ and $r\in[m]$. Let
$\mathscr{T},t\in[T]$ such that $\mathscr{T}<t.$ Then, the noise update (2)
satisfies
$\begin{split}\left|y_{i}(\Xi_{i,j,r}^{(t)}-\Xi_{i,j,r}^{(\mathscr{T})})-\frac{\tilde{\Theta}(\eta\sigma^{2}d)}{N}\sum_{\tau=\mathscr{T}}^{t-1}\ell_{i}^{(\tau)}(\Xi_{i,j,r}^{(\tau)})^{2}\right|&\leq\tilde{O}\left(\frac{P\sigma^{2}\sqrt{d}}{\alpha}\right)+\tilde{O}\left(\frac{\eta\beta^{3}}{\alpha^{2}}\right)\sum_{j=\mathscr{T}}^{t-1}\nu_{2}^{(j)}.\end{split}$
###### Proof of Lemma I.4.
Let $i\in[N]$, $j\in[P]\backslash\\{P(\bm{X}_{i})\\}$ and $r\in[m]$. We set up
the following induction hypothesis:
$\begin{split}&\left|y_{i}\Xi_{i,j,r}^{(t)}-y_{i}\Xi_{i,j,r}^{(\mathscr{T})}-\frac{\tilde{\Theta}(\eta\sigma^{2}d)}{N}\sum_{\tau=\mathscr{T}}^{t-1}\ell_{i}^{(\tau)}(\Xi_{i,j,r}^{(\tau)})^{2}\right|\\\
&\leq\tilde{O}\left(\frac{P\sigma^{2}\sqrt{d}}{\alpha}\left(1+\frac{\alpha}{\sigma^{2}d}+\frac{\alpha\eta}{N}\right)\sum_{\tau=0}^{t-1-\mathscr{T}}\frac{P^{\tau}}{d^{\tau/2}}\right)\\\
&+\tilde{O}\left(\frac{\eta\beta^{3}}{\alpha^{2}}\right)\sum_{\tau=0}^{t-1-\mathscr{T}}\frac{P^{\tau}}{d^{\tau/2}}\sum_{j=\mathscr{T}}^{t-\tau}\nu_{2}^{(j)},\end{split}$
(142)
Let’s first show this hypothesis for $t=\mathscr{T}.$ From Lemma F.5, we have:
$\begin{split}&\left|y_{i}(\Xi_{i,j,r}^{(\mathscr{T}+1)}-\Xi_{i,j,r}^{(\mathscr{T})})-\frac{\tilde{\Theta}(\eta\sigma^{2}d)}{N}\ell_{i}^{(\mathscr{T})}(\Xi_{i,j,r}^{(\mathscr{T})})^{2}\right|\\\
&\leq\frac{\tilde{\Theta}(\eta\sigma^{2}\sqrt{d})}{N}\sum_{a\in\mathcal{Z}_{2}}\sum_{k\neq
P(\bm{X}_{a})}\ell_{a}^{(\mathscr{T})}(\Xi_{a,k,r}^{(\mathscr{T})})^{2}\\\
&+\frac{\tilde{\Theta}(\eta\sigma^{2}\sqrt{d})}{N}\sum_{a\in\mathcal{Z}_{1}}\sum_{k\neq
P(\bm{X}_{a})}\ell_{a}^{(\mathscr{T})}(\Xi_{a,k,r}^{(\mathscr{T})})^{2}.\end{split}$
(143)
Now, we apply D.3 to bound $(\Xi_{a,k,r}^{(\mathscr{T})})^{2}$ in (143) and
obtain:
$\begin{split}&\left|y_{i}\Xi_{i,j,r}^{(\mathscr{T}+1)}-y_{i}\Xi_{i,j,r}^{(\mathscr{T})}-\frac{\tilde{\Theta}(\eta\sigma^{2}d)}{N}\ell_{i}^{(\mathscr{T})}(\Xi_{i,j,r}^{(\mathscr{T})})^{2}\right|\\\
&\leq\tilde{\Theta}(\eta
P\sigma^{2}\sqrt{d})\nu_{2}^{(\mathscr{T})}\min\\{\kappa,(c^{(\mathscr{T})})^{2}\alpha^{2}\\}\alpha\\\
&+\tilde{\Theta}(\eta
P\sigma^{2}\sqrt{d})\nu_{1}^{(\mathscr{T})}\min\\{\kappa,(c^{(\mathscr{T})})^{2}\alpha^{2}\\}\alpha.\end{split}$
(144)
We successively apply Lemma I.3, use
$\nu_{2}^{(\mathscr{T})}\min\\{\kappa,(c^{(\mathscr{T})})^{2}\alpha^{2}\\}\alpha\leq\hat{\mu}\tilde{O}(1)\leq\tilde{O}(\hat{\mu})$
and $\hat{\mu}=\Theta(1/N)$ in (144) to finally obtain:
$\displaystyle\left|y_{i}\Xi_{i,j,r}^{(\mathscr{T}+1)}-y_{i}\Xi_{i,j,r}^{(\mathscr{T})}-\frac{\tilde{\Theta}(\eta\sigma^{2}d)}{N}\ell_{i}^{(\mathscr{T})}(\Xi_{i,j,r}^{(\mathscr{T})})^{2}\right|$
$\displaystyle\leq\tilde{O}\left(\frac{P\sigma^{2}\sqrt{d}}{\alpha}\left(1+\frac{\eta\alpha}{N}\right)\right).$
Therefore, the induction hypothesis is verified for $t=\mathscr{T}.$ Now,
assume (142) holds for $t.$ Let's prove the result for $t+1.$ We start by
summing up the noise update from Lemma F.5 for $\tau=\mathscr{T},\dots,t$
which yields:
$\begin{split}&\left|y_{i}(\Xi_{i,j,r}^{(t+1)}-\Xi_{i,j,r}^{(\mathscr{T})})-\frac{\tilde{\Theta}(\eta\sigma^{2}d)}{N}\sum_{\tau=\mathscr{T}}^{t}\ell_{i}^{(\tau)}(\Xi_{i,j,r}^{(\tau)})^{2}\right|\\\
&\leq\frac{\tilde{\Theta}(\eta\sigma^{2}\sqrt{d})}{N}\sum_{\tau=\mathscr{T}}^{t-1}\sum_{a\in\mathcal{Z}_{2}}\ell_{a}^{(\tau)}\sum_{k\neq
P(\bm{X}_{a})}(\Xi_{a,k,r}^{(\tau)})^{2}\\\
&+\frac{\tilde{\Theta}(\eta\sigma^{2}\sqrt{d})}{N}\sum_{a\in\mathcal{Z}_{2}}\ell_{a}^{(t)}\sum_{k\neq
P(\bm{X}_{a})}(\Xi_{a,k,r}^{(t)})^{2}\\\
&+\frac{\tilde{\Theta}(\eta\sigma^{2}\sqrt{d})}{N}\sum_{\tau=\mathscr{T}}^{t}\sum_{a\in\mathcal{Z}_{1}}\ell_{a}^{(\tau)}\sum_{k\neq
P(\bm{X}_{a})}(\Xi_{a,k,r}^{(\tau)})^{2}\end{split}$ (145)
We apply D.3 to bound $(\Xi_{a,k,r}^{(t)})^{2}$ in (145) and obtain:
$\begin{split}&\left|y_{i}(\Xi_{i,j,r}^{(t+1)}-\Xi_{i,j,r}^{(\mathscr{T})})-\frac{\tilde{\Theta}(\eta\sigma^{2}d)}{N}\sum_{\tau=\mathscr{T}}^{t}\ell_{i}^{(\tau)}(\Xi_{i,j,r}^{(\tau)})^{2}\right|\\\
&\leq\frac{\tilde{\Theta}(\eta\sigma^{2}\sqrt{d})}{N}\sum_{\tau=\mathscr{T}}^{t-1}\sum_{a\in\mathcal{Z}_{2}}\ell_{a}^{(\tau)}\sum_{k\neq
P(\bm{X}_{a})}(\Xi_{a,k,r}^{(\tau)})^{2}\\\ &+\tilde{\Theta}(\eta
P\sigma^{2}\sqrt{d})\nu_{2}^{(t)}\alpha\min\\{\kappa,(c^{(t)})^{2}\alpha^{2}\\}\\\
&+\tilde{\Theta}(\eta
P\sigma^{2}\sqrt{d})\sum_{\tau=\mathscr{T}}^{t}\nu_{1}^{(\tau)}\alpha\min\\{\kappa,(c^{(\tau)})^{2}\alpha^{2}\\}\end{split}$
(146)
Similarly to above, we apply Lemma I.3 to bound
$\sum_{\tau=0}^{t}\nu_{1}^{(\tau)}\alpha\min\\{\kappa,(c^{(\tau)})^{2}\alpha^{2}\\}$.
We also use
$\nu_{2}^{(t)}\alpha\min\\{\kappa,(c^{(t)})^{2}\alpha^{2}\\}\leq\tilde{O}(\hat{\mu})$
and $\hat{\mu}=\Theta(1/N)$ in (146) and obtain:
$\begin{split}&\left|y_{i}(\Xi_{i,j,r}^{(t+1)}-\Xi_{i,j,r}^{(\mathscr{T})})-\frac{\tilde{\Theta}(\eta\sigma^{2}d)}{N}\sum_{\tau=\mathscr{T}}^{t}\ell_{i}^{(\tau)}(\Xi_{i,j,r}^{(\tau)})^{2}\right|\\\
&\leq\frac{\tilde{\Theta}(\eta\sigma^{2}\sqrt{d})}{N}\sum_{\tau=\mathscr{T}}^{t-1}\sum_{a\in\mathcal{Z}_{2}}\ell_{a}^{(\tau)}\sum_{k\neq
P(\bm{X}_{a})}(\Xi_{a,k,r}^{(\tau)})^{2}\\\
&+\tilde{O}\left(\frac{P\sigma^{2}\sqrt{d}}{\alpha}\left(1+\frac{\eta\alpha}{N}\right)\right)\\\
&+\tilde{O}\left(\frac{\eta\beta^{3}}{\alpha^{2}}\right)\sum_{j=\mathscr{T}}^{t}\nu_{2}^{(j)}.\end{split}$
(147)
To bound the first term in the right-hand side of (147), we use the
induction hypothesis (142). Plugging this inequality in (147) yields:
$\begin{split}&\left|y_{i}(\Xi_{i,j,r}^{(t+1)}-\Xi_{i,j,r}^{(\mathscr{T})})-\frac{\tilde{\Theta}(\eta\sigma^{2}d)}{N}\sum_{\tau=\mathscr{T}}^{t}\ell_{i}^{(\tau)}(\Xi_{i,j,r}^{(\tau)})^{2}\right|\\\
&\leq\frac{1}{\sqrt{d}}\sum_{a\in\mathcal{Z}_{2}}\sum_{k\neq P(\bm{X}_{a})}y_{a}(\Xi_{a,k,r}^{(t)}-\Xi_{a,k,r}^{(\mathscr{T})})\\\
&+\tilde{O}\left(\frac{P^{2}\sigma^{2}}{\alpha\sqrt{d}}\left(1+\frac{\alpha}{\sigma^{2}d}+\frac{\alpha\eta}{N}\right)\sum_{\tau=0}^{t-1-\mathscr{T}}\frac{P^{\tau}}{d^{\tau/2}}\right)\\\
&+\frac{P}{\sqrt{d}}\tilde{O}\left(\frac{\eta\beta^{3}}{\alpha^{2}}\right)\sum_{\tau=0}^{t-1-\mathscr{T}}\frac{P^{\tau}}{d^{\tau/2}}\sum_{j=\mathscr{T}}^{t-1-\tau}\nu_{2}^{(j)}\\\
&+\tilde{O}\left(\frac{P\sigma^{2}\sqrt{d}}{\alpha}\left(1+\frac{\eta\alpha}{N}\right)\right)\\\
&+\tilde{O}\left(\frac{\eta\beta^{3}}{\alpha^{2}}\right)\sum_{j=\mathscr{T}}^{t}\nu_{2}^{(j)}.\end{split}$
(148)
Now, we apply D.1 to get
$y_{a}(\Xi_{a,k,r}^{(t)}-\Xi_{a,k,r}^{(0)})\leq\tilde{O}(1)$ in (148), and
therefore,
$\begin{split}&\left|y_{i}\Xi_{i,j,r}^{(t+1)}-y_{i}\Xi_{i,j,r}^{(\mathscr{T})}-\frac{\tilde{\Theta}(\eta\sigma^{2}d)}{N}\sum_{\tau=\mathscr{T}}^{t}\ell_{i}^{(\tau)}(\Xi_{i,j,r}^{(\tau)})^{2}\right|\\\
&\leq\frac{\tilde{O}(P)}{\sqrt{d}}+\tilde{O}\left(\frac{P\sigma^{2}\sqrt{d}}{\alpha}\left(1+\frac{\alpha}{\sigma^{2}d}+\frac{\alpha\eta}{N}\right)\sum_{\tau=1}^{t-\mathscr{T}}\frac{P^{\tau}}{d^{\tau/2}}\right)\\\
&+\tilde{O}\left(\frac{\eta\beta^{3}}{\alpha^{2}}\right)\sum_{\tau=1}^{t-\mathscr{T}}\frac{P^{\tau}}{d^{\tau/2}}\sum_{j=\mathscr{T}}^{t-\tau}\nu_{2}^{(j)}\\\
&+\tilde{O}\left(\frac{P\sigma^{2}\sqrt{d}}{\alpha}\left(1+\frac{\eta\alpha}{N}\right)\right)\\\
&+\tilde{O}\left(\frac{\eta\beta^{3}}{\alpha^{2}}\right)\sum_{j=\mathscr{T}}^{t}\nu_{2}^{(j)}.\end{split}$
(149)
By rearranging the terms, we finally have:
$\begin{split}&\left|y_{i}\Xi_{i,j,r}^{(t+1)}-y_{i}\Xi_{i,j,r}^{(\mathscr{T})}-\frac{\tilde{\Theta}(\eta\sigma^{2}d)}{N}\sum_{\tau=\mathscr{T}}^{t}\ell_{i}^{(\tau)}(\Xi_{i,j,r}^{(\tau)})^{2}\right|\\\
&\leq\tilde{O}\left(\frac{P\sigma^{2}\sqrt{d}}{\alpha}\left(1+\frac{\alpha}{\sigma^{2}d}+\frac{\alpha\eta}{N}\right)\sum_{\tau=0}^{t-\mathscr{T}}\frac{P^{\tau}}{d^{\tau/2}}\right)\\\
&+\tilde{O}\left(\frac{\eta\beta^{3}}{\alpha^{2}}\right)\sum_{\tau=0}^{t-\mathscr{T}}\frac{P^{\tau}}{d^{\tau/2}}\sum_{j=\mathscr{T}}^{t-\tau}\nu_{2}^{(j)},\end{split}$
(150)
which proves the induction hypothesis for $t+1.$
Now, let's simplify the sum terms in (142). Since $P\ll\sqrt{d}$, by summing
the geometric series, we have:
$\displaystyle\sum_{\tau=0}^{t-\mathscr{T}}\frac{P^{\tau}}{d^{\tau/2}}$
$\displaystyle\leq\frac{1}{1-\frac{P}{\sqrt{d}}}=\Theta(1).$ (151)
Plugging (151) in (142) yields
$\begin{split}\left|y_{i}(\Xi_{i,j,r}^{(t)}-\Xi_{i,j,r}^{(\mathscr{T})})-\frac{\tilde{\Theta}(\eta\sigma^{2}d)}{N}\sum_{\tau=\mathscr{T}}^{t}\ell_{i}^{(\tau)}(\Xi_{i,j,r}^{(\tau)})^{2}\right|&\leq\tilde{O}\left(\frac{P\sigma^{2}\sqrt{d}}{\alpha}\right)\\\
&+\tilde{O}\left(\frac{\eta\beta^{3}}{\alpha^{2}}\right)\sum_{\tau=0}^{t-1-\mathscr{T}}\frac{P^{\tau}}{d^{\tau/2}}\sum_{j=\mathscr{T}}^{t-1-\tau}\nu_{2}^{(j)}.\end{split}$
(152)
Now, let’s simplify the second sum term in (152). Indeed, we have:
$\displaystyle\sum_{\tau=0}^{t-1-\mathscr{T}}\frac{P^{\tau}}{d^{\tau/2}}\sum_{j=\mathscr{T}}^{t-1-\tau}\nu_{2}^{(j)}$
$\displaystyle\leq\sum_{\tau=0}^{t-1-\mathscr{T}}\frac{P^{\tau}}{d^{\tau/2}}\sum_{j=\mathscr{T}}^{t-1}\nu_{2}^{(j)}\leq\Theta(1)\sum_{j=\mathscr{T}}^{t-1}\nu_{2}^{(j)},$
(153)
where we used (151) in the last inequality. Plugging (153) in (152) gives the
final result. ∎
After $T_{1}$ iterations, we prove with Lemma 5.4 that for
$i\in\mathcal{Z}_{2}$ and $j\in[P]\backslash\\{P(\bm{X}_{i})\\}$, there exists
$r\in[m]$ such that $\Xi_{i,j,r}^{(\tau)}$ is large. This implies that
$(\Xi_{i,j,r}^{(\tau)})^{2}\ell_{i}^{(\tau)}(\Xi_{i}^{(\tau)})$ stays well
controlled. We therefore rewrite Lemma I.4 to take this into account.
###### Lemma I.5.
Let $i\in[N]$, $j\in[P]\backslash\\{P(\bm{X}_{i})\\}$ and $r\in[m]$. Let
$\mathscr{T},t\in[T]$ such that $\mathscr{T}<t.$ Then, the noise update (2)
satisfies
$\begin{split}\left|y_{i}(\Xi_{i,j,r}^{(t)}-\Xi_{i,j,r}^{(\mathscr{T})})-\frac{\tilde{\Theta}(\eta\sigma^{2}d)}{N}\sum_{\tau=\mathscr{T}}^{t-1}\ell_{i}^{(\tau)}\min\\{\kappa,(\Xi_{i,j,r}^{(\tau)})^{2}\\}\right|&\leq\tilde{O}\left(\frac{P\sigma^{2}\sqrt{d}}{\alpha}\right)+\tilde{O}\left(\frac{\eta\beta^{3}}{\alpha^{2}}\right)\sum_{j=\mathscr{T}}^{t-1}\nu_{2}^{(j)}.\end{split}$
###### Proof of Lemma I.5.
From Lemma I.4, we know that
$\displaystyle\left|y_{i}(\Xi_{i,j,r}^{(t)}-\Xi_{i,j,r}^{(\mathscr{T})})-\frac{\tilde{\Theta}(\eta\sigma^{2}d)}{N}\sum_{\tau=\mathscr{T}}^{t-1}\ell_{i}^{(\tau)}(\Xi_{i,j,r}^{(\tau)})^{2}\right|$
$\displaystyle\leq\tilde{O}\left(\frac{P\sigma^{2}\sqrt{d}}{\alpha}\right)+\tilde{O}\left(\frac{\eta\beta^{3}}{\alpha^{2}}\right)\sum_{j=\mathscr{T}}^{t-1}\nu_{2}^{(j)}.$
(154)
Using Remark 1, we know that a sufficient condition to have
$\widehat{\ell}^{(\tau)}(\Xi_{i}^{(\tau)})$ exponentially small is
$(\Xi_{i,j,r}^{(\tau)})^{2}\geq\kappa\geq\tilde{\Omega}(1).$ Therefore, we can
replace $\widehat{\ell}^{(\tau)}(\Xi_{i}^{(\tau)})(\Xi_{i,j,r}^{(\tau)})^{2}$
by
$\widehat{\ell}^{(\tau)}(\Xi_{i}^{(\tau)})\min\\{\kappa,(\Xi_{i,j,r}^{(\tau)})^{2}\\}.$
Plugging this substitution in (154) yields the desired result. ∎
###### Lemma I.6.
Let
$T_{1}=\tilde{O}\left(\frac{N}{\sigma_{0}\sigma\sqrt{d}\sigma^{2}d}\right)$.
For $t\in[T_{1},T]$, we have
$\frac{1}{N}\sum_{\tau=0}^{t}\sum_{i\in\mathcal{Z}_{2}}\ell_{i}^{(\tau)}\min\\{\kappa,(\Xi_{i,j,r}^{(\tau)})^{2}\\}\leq\tilde{O}\left(\frac{1}{\eta}\right).$
###### Proof of Lemma I.6.
From Lemma F.9, we know that:
$\displaystyle\sum_{\tau=T_{1}}^{t}\nu_{2}^{(\tau)}\leq\tilde{O}\left(\frac{1}{\eta\sigma_{0}}\right).$
(155)
On the other hand we know from Lemma I.5 that:
$\displaystyle\frac{\tilde{\Theta}(\eta\sigma^{2}d)}{N}\sum_{\tau=0}^{T_{1}-1}\sum_{i\in\mathcal{Z}_{2}}\ell_{i}^{(\tau)}\min\\{\kappa,(\Xi_{i,j,r}^{(\tau)})^{2}\\}$
$\displaystyle\leq
y_{i}(\Xi_{i,j,r}^{(T_{1})}-\Xi_{i,j,r}^{(0)})+\tilde{O}\left(\frac{P\sigma^{2}\sqrt{d}}{\alpha}\right)$
(156)
$\displaystyle+\tilde{O}\left(\frac{\eta\hat{\mu}\beta^{3}}{\alpha}\right)T_{1}.$
Besides, we have:
$\tilde{O}\left(\frac{\eta\hat{\mu}\beta^{3}}{\alpha}\right)T_{1}\leq\tilde{O}\left(\frac{P\sigma^{2}\sqrt{d}}{\alpha}\right).$
Plugging this inequality yields
$\displaystyle\frac{\tilde{\Theta}(\eta\sigma^{2}d)}{N}\sum_{\tau=0}^{T_{1}-1}\sum_{i\in\mathcal{Z}_{2}}\ell_{i}^{(\tau)}\min\\{\kappa,(\Xi_{i,j,r}^{(\tau)})^{2}\\}$
$\displaystyle\leq
y_{i}(\Xi_{i,j,r}^{(T_{1})}-\Xi_{i,j,r}^{(0)})+\tilde{O}\left(\frac{P\sigma^{2}\sqrt{d}}{\alpha}\right).$
(157)
By applying D.1, (157) is eventually bounded as:
$\displaystyle\frac{1}{N}\sum_{\tau=0}^{T_{1}-1}\sum_{i\in\mathcal{Z}_{2}}\ell_{i}^{(\tau)}\min\\{\kappa,(\Xi_{i,j,r}^{(\tau)})^{2}\\}$
$\displaystyle\leq\tilde{O}\left(\frac{1}{\eta\sigma^{2}d}\right)+\tilde{O}\left(\frac{P}{\eta\alpha\sqrt{d}}\right)\leq\tilde{O}\left(\frac{1}{\eta}\right).$
(158)
By combining (155) and (158) we deduce that for all
$j\in[P]\backslash\\{P(\bm{X}_{i})\\}$ and $r\in[m]$:
$\displaystyle\frac{1}{N}\sum_{\tau=0}^{t}\sum_{i\in\mathcal{Z}_{2}}\ell_{i}^{(\tau)}\min\\{\kappa,(\Xi_{i,j,r}^{(\tau)})^{2}\\}$
$\displaystyle=\frac{1}{N}\sum_{\tau=0}^{T_{1}}\sum_{i\in\mathcal{Z}_{2}}\ell_{i}^{(\tau)}\min\\{\kappa,(\Xi_{i,j,r}^{(\tau)})^{2}\\}$
(159)
$\displaystyle+\frac{1}{N}\sum_{\tau=T_{1}}^{t}\sum_{i\in\mathcal{Z}_{2}}\ell_{i}^{(\tau)}\min\\{\kappa,(\Xi_{i,j,r}^{(\tau)})^{2}\\}$
$\displaystyle\leq\tilde{O}\left(\frac{1}{\eta}\right).$
∎
### I.4 Convergence rate of the training loss using GD
In this section, we prove that when using GD, the training loss converges
sublinearly in our setting.
#### I.4.1 Convergence after learning $\mathcal{Z}_{1}$ ($t\in[T_{0},T]$)
###### Lemma I.7 (Convergence rate of the $\mathcal{Z}_{1}$ loss).
Let $t\in[T_{0},T]$. Run GD with learning rate $\eta$ for $t$ iterations.
Then, the $\mathcal{Z}_{1}$ loss sublinearly converges to zero as:
$\displaystyle(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)\leq\frac{\tilde{O}(1)}{\eta\alpha^{2}(t-T_{0}+1)}.$
###### Proof of Lemma I.7.
Let $t\in[T_{0},T].$ From Lemma F.1, we know that the signal update is lower
bounded as:
$\displaystyle c^{(t+1)}\geq
c^{(t)}+\Theta(\eta\alpha)(1-\hat{\mu})\widehat{\ell}^{(t)}(\alpha)(\alpha
c^{(t)})^{2}.$ (160)
From Lemma 5.1, we know that $c^{(t)}\geq\tilde{\Omega}(1/\alpha)$. Thus, we
simplify (160) as:
$\displaystyle c^{(t+1)}\geq
c^{(t)}+\tilde{\Omega}(\eta\alpha)(1-\hat{\mu})\widehat{\ell}^{(t)}(\alpha).$
(161)
Since
$\alpha^{3}\sum_{r=1}^{m}(c_{r}^{(t)})^{3}\geq\tilde{\Omega}(1/\alpha)-m\tilde{O}(\sigma_{0})\geq\tilde{\Omega}(1/\alpha)>0$,
we can apply Lemma K.22 and obtain:
$\displaystyle c^{(t+1)}\geq
c^{(t)}+\tilde{\Omega}(\eta\alpha)(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha).$
(162)
Let’s now assume by contradiction that for $t\in[T_{0},T]$, we have:
$\displaystyle(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)>\frac{\tilde{\Omega}(1)}{\eta\alpha^{2}(t-T_{0}+1)}.$
(163)
From the (3) update, we know that $c_{r}^{(\tau)}$ is a non-decreasing
sequence which implies that $\sum_{r=1}^{m}(\alpha c_{r}^{(\tau)})^{3}$ is
also non-decreasing. Since $x\mapsto\log(1+\exp(-x))$ is non-increasing, this
implies that for $s\leq t$, we have:
$\displaystyle\frac{\tilde{\Omega}(1)}{\eta\alpha^{2}(t-T_{0}+1)}<(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)\leq(1-\hat{\mu})\widehat{\mathcal{L}}^{(s)}(\alpha).$
(164)
Plugging (164) in the update (162) yields for $s\in[T_{0},t]$:
$\displaystyle c^{(s+1)}>c^{(s)}+\frac{\tilde{\Omega}(1)}{\alpha(t-T_{0}+1)}.$
(165)
Let $t\in[T_{0},T]$. We now sum (165) for $s=T_{0},\dots,t$ and obtain:
$\displaystyle
c^{(t+1)}>c^{(T_{0})}+\frac{\tilde{\Omega}(1)(t-T_{0}+1)}{\alpha(t-T_{0}+1)}>\frac{\tilde{\Omega}(1)}{\alpha},$
(166)
where we used the fact that $c^{(T_{0})}\geq\tilde{\Omega}(1/\alpha)>0$ (Lemma
5.1) in the last inequality. Therefore, we have for $t\in[T_{0},T],$
$c^{(t)}\geq\tilde{\Omega}(1/\alpha)>0$. Let’s now show that (166) implies a
contradiction. Indeed, we have:
$\displaystyle\eta\alpha^{2}(t-T_{0}+1)(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)$
$\displaystyle\leq$
$\displaystyle\eta\alpha^{2}T(1-\hat{\mu})\log\left(1+\exp\left(-(\alpha
c^{(t)})^{3}-\sum_{r\neq r_{\max}}(\alpha c_{r}^{(t)})^{3}\right)\right)$
$\displaystyle\leq$
$\displaystyle\eta\alpha^{2}T(1-\hat{\mu})\log\left(1+\exp(-\tilde{\Omega}(1))\right),$
(167)
where we used $\sum_{r\neq
r_{\max}}(c_{r}^{(t)})^{3}\geq-m\tilde{O}(\sigma_{0}^{3})$ along with (166) in
(167). We now apply Lemma K.22 in (167) and obtain:
$\displaystyle\eta\alpha^{2}(t-T_{0}+1)(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)\leq\frac{(1-\hat{\mu})\eta\alpha^{2}T}{1+\exp(\tilde{\Omega}(1))}.$
(168)
Given the values of $T,\eta,\alpha,\hat{\mu}$, we finally have:
$\displaystyle\eta\alpha^{2}(t-(T_{0}-1))(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)<\tilde{O}(1),$
(169)
which contradicts (163). ∎
#### I.4.2 Convergence at late stages ($t\in[T_{1},T]$)
###### Lemma I.8 (Convergence rate of the loss).
Let $t\in[T_{1},T]$. Run GD with learning rate $\eta\in(0,1/L)$ for $t$
iterations. Then, the loss sublinearly converges to zero as:
$\displaystyle\widehat{L}(\bm{W}^{(t)})\leq\frac{\tilde{O}(1)}{\eta(t-T_{1}+1)}.$
###### Proof of Lemma I.8.
We first apply the classical descent lemma for smooth functions (Lemma K.18).
Since $\widehat{L}(W)$ is smooth, we have:
$\displaystyle\widehat{L}(\bm{W}^{(t+1)})\leq\widehat{L}(\bm{W}^{(t)})-\frac{\eta}{2}\|\nabla\widehat{L}(\bm{W}^{(t)})\|_{2}^{2}=\widehat{L}(\bm{W}^{(t)})-\frac{\eta}{2}\sum_{r=1}^{m}\|\nabla_{\bm{w}_{r}}\widehat{L}(\bm{W}^{(t)})\|_{2}^{2}.$
(170)
Lemma I.9 provides a lower bound on the gradient. We plug it in (170) and get:
$\displaystyle\widehat{L}(\bm{W}^{(t+1)})\leq\widehat{L}(\bm{W}^{(t)})-\tilde{\Omega}(\eta)\widehat{L}(\bm{W}^{(t)})^{2}.$
(171)
Applying Lemma K.19 to (171) yields the desired result. ∎
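The passage from the descent inequality (171) to the $1/(\eta t)$ rate (the content of Lemma K.19, stated outside this excerpt) can be illustrated on a toy instance of the recursion, here run with equality and illustrative constants:

```python
eta, c0 = 0.05, 1.0      # illustrative step size and constant in (171)
L = 1.0                  # initial loss value

for t in range(1, 5001):
    L = L - c0 * eta * L * L                  # L_{t+1} = L_t - Omega(eta) * L_t^2
    assert 0.0 < L <= 1.0 / (c0 * eta * t)    # the sublinear 1/(eta*t) envelope
```

The envelope follows from $1/L_{t+1}\geq 1/L_{t}+c_{0}\eta$, so that $1/L_{t}\geq 1/L_{0}+c_{0}\eta t$.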
#### I.4.3 Auxiliary lemmas for the proof of Lemma I.8
To obtain the convergence rate in Lemma I.8, we used the following auxiliary
lemma.
###### Lemma I.9 (Bound on the gradient for GD).
Let $t\in[T_{1},T]$. Run GD for $t$ iterations. Then, the norm of gradient is
lower bounded as follows:
$\displaystyle\sum_{r=1}^{m}\|\nabla_{\bm{w}_{r}}\widehat{L}(\bm{W}^{(t)})\|_{2}^{2}$
$\displaystyle\geq\tilde{\Omega}(1)\widehat{L}(\bm{W}^{(t)})^{2}.$
###### Proof of Lemma I.9.
Let $t\in[T_{1},T]$. To obtain the lower bound, we project the gradient on
the signal and on the noise.
##### Projection on the signal.
Since $\|w^{*}\|_{2}=1$, we lower bound
$\|\nabla_{\bm{w}_{r}}\widehat{L}(\bm{W}^{(t)})\|_{2}^{2}$ as
$\displaystyle\|\nabla_{\bm{w}_{r}}\widehat{L}(\bm{W}^{(t)})\|_{2}^{2}\geq\langle\nabla_{\bm{w}_{r}}\widehat{L}(\bm{W}^{(t)}),w^{*}\rangle^{2}=(\mathscr{G}_{r}^{(t)})^{2}.$
(172)
By successively applying Lemma E.2 and Lemma I.1,
$(\mathscr{G}_{r}^{(t)})^{2}$ is lower bounded as
$\displaystyle(\mathscr{G}_{r}^{(t)})^{2}\geq\left(\frac{\alpha^{3}}{N}\sum_{i\in\mathcal{Z}_{1}}\ell_{i}^{(t)}(c_{r}^{(t)})^{2}\right)^{2}\geq\Omega(1)\left(\alpha^{3}(1-\hat{\mu})\widehat{\ell}^{(t)}(\alpha)(c_{r}^{(t)})^{2}\right)^{2}.$
(173)
Combining (172) and (173) yields:
$\displaystyle\|\nabla_{\bm{w}_{r}}\widehat{L}(\bm{W}^{(t)})\|_{2}^{2}\geq\Omega(1)\left(\alpha^{3}(1-\hat{\mu})\widehat{\ell}^{(t)}(\alpha)(c_{r}^{(t)})^{2}\right)^{2}.$
(174)
##### Projection on the noise.
For a fixed $i\in\mathcal{Z}_{2}$ and $j\in[P]\backslash\\{P(\bm{X}_{i})\\}$,
we know that $\|\nabla_{\bm{w}_{r}}\widehat{L}(\bm{W}^{(t)})\|_{2}^{2}$ is
lower bounded as
$\displaystyle\|\nabla_{\bm{w}_{r}}\widehat{L}(\bm{W}^{(t)})\|_{2}^{2}\geq\left\langle\nabla_{\bm{w}_{r}}\widehat{L}(\bm{W}^{(t)}),\frac{\frac{1}{N}\sum_{i\in\mathcal{Z}_{2}}\sum_{j\neq
P(\bm{X}_{i})}\bm{X}_{i}[j]}{\|\frac{1}{N}\sum_{i\in\mathcal{Z}_{2}}\sum_{j\neq
P(\bm{X}_{i})}\bm{X}_{i}[j]\|_{2}}\right\rangle^{2}=(\mathrm{G}_{r}^{(t)})^{2}.$
(175)
On the other hand, by Lemma I.14, we lower bound $\mathrm{G}_{r}^{(t)}$ term
with probability $1-o(1)$ as:
$\displaystyle(\mathrm{G}_{r}^{(t)})^{2}$
$\displaystyle\geq\left(\frac{\tilde{\Omega}(\sigma\sqrt{d})}{N}\sum_{i\in\mathcal{Z}_{2}}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}(\Xi_{i,j,r}^{(t)})^{2}-\frac{\tilde{O}(\sigma)}{N}\sum_{i\in\mathcal{Z}_{1}}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}(\Xi_{i,j,r}^{(t)})^{2}\right)^{2}$ (176)
##### Gathering the bounds.
Combining (172), (175), (173) and (176) and using
$2a^{2}+2b^{2}\geq(a+b)^{2},$ we thus bound
$\|\nabla_{\bm{w}_{r}}\widehat{L}(\bm{W}^{(t)})\|_{2}^{2}$ as:
$\begin{split}\|\nabla_{\bm{w}_{r}}\widehat{L}(\bm{W}^{(t)})\|_{2}^{2}\geq&\left(\frac{\alpha+\tilde{O}(\sigma)}{N}\sum_{i\in\mathcal{Z}_{1}}\ell_{i}^{(t)}\alpha^{2}(c_{r}^{(t)})^{2}\right.\\\
&\left.+\frac{\tilde{\Omega}(\sigma\sqrt{d})}{N}\sum_{i\in\mathcal{Z}_{2}}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}(\Xi_{i,j,r}^{(t)})^{2}\right.\\\
&\left.-\frac{\tilde{O}(\sigma)}{N}\sum_{i\in\mathcal{Z}_{1}}\sum_{j\neq P(\bm{X}_{i})}\ell_{i}^{(t)}\left(\alpha^{2}(c_{r}^{(t)})^{2}+(\Xi_{i,j,r}^{(t)})^{2}\right)\right)^{2}.\end{split}$
(177)
We now sum up (177) for $r=1,\dots,m$ and apply Cauchy-Schwarz inequality to
get:
$\begin{split}\sum_{r=1}^{m}\|\nabla_{\bm{w}_{r}}\widehat{L}(\bm{W}^{(t)})\|_{2}^{2}&\geq\frac{1}{m}\left(\frac{\alpha+\tilde{O}(\sigma)}{N}\sum_{i\in\mathcal{Z}_{1}}\sum_{r=1}^{m}\ell_{i}^{(t)}\alpha^{2}(c_{r}^{(t)})^{2}\right.\\\
&\left.+\frac{\tilde{\Omega}(\sigma\sqrt{d})}{N}\sum_{i\in\mathcal{Z}_{2}}\sum_{r=1}^{m}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}(\Xi_{i,j,r}^{(t)})^{2}\right.\\\
&\left.-\frac{\tilde{O}(\sigma)}{N}\sum_{i\in\mathcal{Z}_{1}}\sum_{r=1}^{m}\sum_{j\neq P(\bm{X}_{i})}\ell_{i}^{(t)}\left(\alpha^{2}(c_{r}^{(t)})^{2}+(\Xi_{i,j,r}^{(t)})^{2}\right)\right)^{2}.\end{split}$
(178)
We apply Lemma I.1 to further lower bound (178) and get:
$\begin{split}\sum_{r=1}^{m}\|\nabla_{\bm{w}_{r}}\widehat{L}(\bm{W}^{(t)})\|_{2}^{2}&\geq\Omega\left(\frac{1}{m}\right)\left((\alpha+\tilde{O}(\sigma))(1-\hat{\mu})\sum_{r=1}^{m}\widehat{\ell}^{(t)}(\alpha)\alpha^{2}(c_{r}^{(t)})^{2}\right.\\\
&\left.+\frac{\tilde{\Omega}(\sigma\sqrt{d})}{N}\sum_{i\in\mathcal{Z}_{2}}\sum_{r=1}^{m}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}(\Xi_{i,j,r}^{(t)})^{2}\right.\\\
&\left.-\frac{\tilde{O}(\sigma)}{N}\sum_{i\in\mathcal{Z}_{1}}\sum_{r=1}^{m}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}\left(\alpha^{2}(c_{r}^{(t)})^{2}+(\Xi_{i,j,r}^{(t)})^{2}\right)\right)^{2}.\end{split}$
(179)
##### Bounding the gradient terms by the loss.
Using Lemma I.10, Lemma I.11 and Lemma I.12, we have:
$\displaystyle(\alpha+\tilde{O}(\sigma))(1-\hat{\mu})\sum_{r=1}^{m}\widehat{\ell}^{(t)}(\alpha)\alpha^{2}(c_{r}^{(t)})^{2}$
$\displaystyle\geq\tilde{\Omega}(\alpha+\tilde{O}(\sigma))(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha),$
(180)
$\displaystyle\frac{\tilde{O}(\sigma)}{N}\sum_{i\in\mathcal{Z}_{1}}\sum_{r=1}^{m}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}\left(\alpha^{2}(c_{r}^{(t)})^{2}+(\Xi_{i,j,r}^{(t)})^{2}\right)$
$\displaystyle\leq\tilde{O}(\sigma)(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha),$
(181)
$\displaystyle\frac{\tilde{\Omega}(\sigma\sqrt{d})}{N}\sum_{i\in\mathcal{Z}_{2}}\sum_{r=1}^{m}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}(\Xi_{i,j,r}^{(t)})^{2}$
$\displaystyle\geq\frac{\tilde{\Omega}(\sigma\sqrt{d})}{N}\sum_{i\in\mathcal{Z}_{2}}\widehat{\mathcal{L}}^{(t)}(\Xi_{i}^{(t)}).$
(182)
Plugging (180), (181) and (182) into (179) yields:
$\displaystyle\sum_{r=1}^{m}\|\nabla_{\bm{w}_{r}}\widehat{L}(\bm{W}^{(t)})\|_{2}^{2}$
$\displaystyle\geq\Omega\left(\frac{1}{m}\right)\left((\alpha+\tilde{O}(\sigma))(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)\right.$
$\displaystyle\left.+\frac{\tilde{\Omega}(\sigma\sqrt{d})}{N}\sum_{i\in\mathcal{Z}_{2}}\widehat{\mathcal{L}}^{(t)}(\Xi_{i}^{(t)})-(1-\hat{\mu})\tilde{O}(\sigma)\widehat{\mathcal{L}}^{(t)}(\alpha)\right)^{2}$
$\displaystyle\geq\tilde{\Omega}(1)\left((1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)+\frac{1}{N}\sum_{i\in\mathcal{Z}_{2}}\widehat{\mathcal{L}}^{(t)}(\Xi_{i}^{(t)})\right)^{2}.$
(183)
Finally, we use Lemma I.13 to lower bound (183) by
$\widehat{L}(\bm{W}^{(t)})^{2}$, which gives the desired result.
∎
We now present auxiliary lemmas that link the gradient terms with their
corresponding loss.
###### Lemma I.10.
Let $t\in[T_{1},T].$ Run GD for $t$ iterations. Then, we have:
$\displaystyle\sum_{r=1}^{m}\widehat{\ell}^{(t)}(\alpha)\alpha^{2}(c_{r}^{(t)})^{2}\geq\tilde{\Omega}(1)\widehat{\mathcal{L}}^{(t)}(\alpha).$
###### Proof of Lemma I.10.
In order to bound
$\sum_{r=1}^{m}\widehat{\ell}^{(t)}(\alpha)\alpha^{2}(c_{r}^{(t)})^{2}$, we
apply Lemma K.20. We first verify that the conditions of the lemma are met.
From Lemma 5.1 we know that for $t\in[T_{0},T]$, we have
$c^{(t)}\geq\tilde{\Omega}(1/\alpha)$. Along with D.1, this implies that
$\displaystyle\tilde{\Omega}(1)\leq\tilde{\Omega}(1)-m\tilde{O}(\alpha\sigma_{0})\leq\sum_{r=1}^{m}\alpha
c_{r}^{(t)}\leq\tilde{O}(\alpha)m\leq\tilde{O}(1).$ (184)
Therefore, we can apply Lemma K.20 and get the lower bound:
$\displaystyle\sum_{r=1}^{m}\widehat{\ell}^{(t)}(\alpha)(\alpha
c_{r}^{(t)})^{2}\geq\frac{0.05e^{-m\tilde{O}(\sigma_{0})}}{\tilde{O}(1)\left(1+\frac{m^{2}\tilde{O}(\sigma^{2}\sigma_{0}^{2}d)}{\tilde{\Omega}(1)^{2}}\right)}\log\left(1+e^{-\sum_{r=1}^{m}(\alpha
c_{r}^{(t)})^{3}}\right)\geq\tilde{\Omega}(1)\widehat{\mathcal{L}}^{(t)}(\alpha).$
(185)
∎
###### Lemma I.11.
Let $t\in[T_{1},T].$ Run GD for $t$ iterations. Then, we have:
$\displaystyle\frac{1}{N}\sum_{i\in\mathcal{Z}_{1}}\sum_{r=1}^{m}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}\left(\alpha^{2}(c_{r}^{(t)})^{2}+(\Xi_{i,j,r}^{(t)})^{2}\right)$
$\displaystyle\leq\tilde{O}(1)(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha).$
###### Proof of Lemma I.11.
We again verify that the conditions of Lemma K.20 are met. By using D.1, D.2
and Lemma 5.1, we have:
$\displaystyle\sum_{r=1}^{m}\alpha c_{r}^{(t)}+\sum_{r=1}^{m}\sum_{j\neq
P(\bm{X}_{i})}y_{i}\Xi_{i,j,r}^{(t)}$ $\displaystyle\leq
m\tilde{O}(\alpha)+mP\tilde{O}(\sigma\sigma_{0}\sqrt{d})\leq\tilde{O}(1),$
(186) $\displaystyle\sum_{r=1}^{m}\alpha c_{r}^{(t)}+\sum_{r=1}^{m}\sum_{j\neq
P(\bm{X}_{i})}y_{i}\Xi_{i,j,r}^{(t)}$
$\displaystyle\geq\tilde{\Omega}(1)-m\tilde{O}(\alpha\sigma_{0})\geq\tilde{\Omega}(1).$
By applying Lemma K.20, we have:
$\displaystyle\frac{1}{N}\sum_{i\in\mathcal{Z}_{1}}\sum_{r=1}^{m}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}\left(\alpha^{2}(c_{r}^{(t)})^{2}+(\Xi_{i,j,r}^{(t)})^{2}\right)$
$\displaystyle\leq\frac{me^{m\tilde{O}(\sigma_{0})}}{\tilde{\Omega}(1)N}\sum_{i\in\mathcal{Z}_{1}}\log\left(1+\exp\left(-\sum_{r=1}^{m}\alpha^{3}(c_{r}^{(t)})^{3}-\Xi_{i}^{(t)}\right)\right)$
$\displaystyle\leq\frac{\tilde{O}(1)}{N}\sum_{i\in\mathcal{Z}_{1}}\log\left(1+\exp\left(-\sum_{r=1}^{m}\alpha^{3}(c_{r}^{(t)})^{3}-\Xi_{i}^{(t)}\right)\right).$
(187)
Lastly, we want to link the loss term in (187) with
$\widehat{\mathcal{L}}^{(t)}(\alpha)$. By applying D.1 and Lemma K.24 in
(187), we finally get:
$\displaystyle\frac{1}{N}\sum_{i\in\mathcal{Z}_{1}}\sum_{r=1}^{m}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}\left(\alpha^{2}(c_{r}^{(t)})^{2}+(\Xi_{i,j,r}^{(t)})^{2}\right)$
$\displaystyle\leq(1-\hat{\mu})(1+e^{\tilde{O}((\sigma\sigma_{0}\sqrt{d})^{3})})\widehat{\mathcal{L}}^{(t)}(\alpha)$
(188) $\displaystyle\leq\Theta(1)(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha).$
Combining (187) and (188) yields the desired result. ∎
###### Lemma I.12.
Let $t\in[T_{1},T].$ Run GD for $t$ iterations. Then, we have:
$\displaystyle\frac{1}{N}\sum_{i\in\mathcal{Z}_{2}}\sum_{r=1}^{m}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}(\Xi_{i,j,r}^{(t)})^{2}$
$\displaystyle\geq\frac{\tilde{\Omega}(1)}{N}\sum_{i\in\mathcal{Z}_{2}}\widehat{\mathcal{L}}^{(t)}(\Xi_{i}^{(t)}).$
###### Proof of Lemma I.12.
We again verify that the conditions of Lemma K.20 are met. Using D.1, D.2 and
Lemma 5.4, we have:
$\displaystyle\sum_{r=1}^{m}\beta c_{r}^{(t)}+\sum_{r=1}^{m}\sum_{j\neq
P(\bm{X}_{i})}y_{i}\Xi_{i,j,r}^{(t)}\leq
m\tilde{O}(\beta)+mP\tilde{O}(1)\leq\tilde{O}(1)$ (189)
$\displaystyle\sum_{r=1}^{m}\beta c_{r}^{(t)}+\sum_{r=1}^{m}\sum_{j\neq
P(\bm{X}_{i})}y_{i}\Xi_{i,j,r}^{(t)}\geq\tilde{\Omega}(1)-m\tilde{O}(\sigma_{0})-mP\tilde{O}(\sigma_{0}\sigma\sqrt{d})\geq\tilde{\Omega}(1).$
By applying Lemma K.20, we have:
$\displaystyle\frac{1}{N}\sum_{i\in\mathcal{Z}_{2}}\sum_{r=1}^{m}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}(\Xi_{i,j,r}^{(t)})^{2}$ (190) $\displaystyle\geq$
$\displaystyle\frac{0.05e^{-m\tilde{O}(\sigma\sigma_{0}\sqrt{d})}}{N\tilde{O}(1)\left(1+\frac{m^{2}(\sigma\sigma_{0}\sqrt{d})^{2}}{\tilde{\Omega}(1)}\right)}\sum_{i\in\mathcal{Z}_{2}}\log\left(1+\exp\left(-\sum_{r=1}^{m}\beta^{3}(c_{r}^{(t)})^{3}-\Xi_{i}^{(t)}\right)\right)$
$\displaystyle\geq$
$\displaystyle\frac{\tilde{\Omega}(1)}{N}\sum_{i\in\mathcal{Z}_{2}}\log\left(1+\exp\left(-\sum_{r=1}^{m}\beta^{3}(c_{r}^{(t)})^{3}-\Xi_{i}^{(t)}\right)\right).$
Lastly, we want to link the loss term in (190) with
$\widehat{\mathcal{L}}^{(t)}(\Xi_{i}^{(t)})$. By applying D.1 and Lemma K.24
in (190), we finally get:
$\displaystyle\frac{\tilde{\Omega}(1)}{N}\sum_{i\in\mathcal{Z}_{2}}\sum_{r=1}^{m}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}(\Xi_{i,j,r}^{(t)})^{2}$
$\displaystyle\geq\frac{\tilde{\Omega}(1)e^{-m\tilde{O}(\beta^{3})}}{N}\sum_{i\in\mathcal{Z}_{2}}\widehat{\mathcal{L}}^{(t)}(\Xi_{i}^{(t)})$
(191)
$\displaystyle\geq\frac{\tilde{\Omega}(1)}{N}\sum_{i\in\mathcal{Z}_{2}}\widehat{\mathcal{L}}^{(t)}(\Xi_{i}^{(t)}).$
Combining (190) and (191) yields the desired result. ∎
###### Lemma I.13.
Let $t\in[0,T]$. Run GD for $t$ iterations. Then, we have:
$\displaystyle(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)+\frac{1}{N}\sum_{i\in\mathcal{Z}_{2}}\widehat{\mathcal{L}}^{(t)}(\Xi_{i}^{(t)})\geq\Theta(1)\widehat{L}(\bm{W}^{(t)}).$
(192)
###### Proof of Lemma I.13.
We need to lower bound $\widehat{\mathcal{L}}^{(t)}(\alpha)$. By successively
applying Lemma K.24 and D.1, we obtain:
$\displaystyle(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)$
$\displaystyle=\frac{1}{N}\sum_{i\in\mathcal{Z}_{1}}\frac{1+e^{-\Xi_{i}^{(t)}}}{1+e^{-\Xi_{i}^{(t)}}}\log\left(1+\exp\left(-\sum_{r=1}^{m}(\alpha
c_{r}^{(t)})^{3}\right)\right)$
$\displaystyle\geq\frac{1}{N}\sum_{i\in\mathcal{Z}_{1}}\frac{1}{1+e^{-\Xi_{i}^{(t)}}}\log\left(1+\exp\left(-\sum_{r=1}^{m}(\alpha
c_{r}^{(t)})^{3}-\Xi_{i}^{(t)}\right)\right)$
$\displaystyle\geq\frac{\widehat{L}_{\mathcal{Z}_{1}}(\bm{W}^{(t)})}{1+e^{\tilde{O}((\sigma\sigma_{0}\sqrt{d})^{3})}}$
$\displaystyle\geq\Theta(1)\widehat{L}_{\mathcal{Z}_{1}}(\bm{W}^{(t)}).$ (193)
By successively applying Lemma K.24 and D.1, we obtain:
$\displaystyle\frac{1}{N}\sum_{i\in\mathcal{Z}_{2}}\widehat{\mathcal{L}}^{(t)}(\Xi_{i}^{(t)})$
$\displaystyle=\frac{1}{N}\sum_{i\in\mathcal{Z}_{2}}\frac{1+e^{-\sum_{r=1}^{m}(\beta
c_{r}^{(t)})^{3}}}{1+e^{-\sum_{r=1}^{m}(\beta
c_{r}^{(t)})^{3}}}\log\left(1+\exp\left(-\Xi_{i}^{(t)}\right)\right)$
$\displaystyle\geq\frac{1}{N}\sum_{i\in\mathcal{Z}_{2}}\frac{1}{1+e^{-\sum_{r=1}^{m}(\beta
c_{r}^{(t)})^{3}}}\log\left(1+\exp\left(-\sum_{r=1}^{m}(\beta
c_{r}^{(t)})^{3}-\Xi_{i}^{(t)}\right)\right)$
$\displaystyle\geq\frac{\widehat{L}_{\mathcal{Z}_{2}}(\bm{W}^{(t)})}{1+e^{\tilde{O}((\beta\sigma_{0})^{3})}}$
$\displaystyle\geq\Theta(1)\widehat{L}_{\mathcal{Z}_{2}}(\bm{W}^{(t)}).$ (194)
Combining (193) and (194) yields the desired result.
∎
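The two chains above both rest on an elementary log-sigmoid inequality of the type provided by Lemma K.24, namely $\log(1+e^{-S-\Xi})\leq(1+e^{-\Xi})\log(1+e^{-S})$ for $S\geq 0$. As a sanity check (a minimal numeric sketch with arbitrary test ranges, not part of the paper's proof), one can verify it at random points:

```python
import math
import random

random.seed(0)

def log1pexp(x):
    # numerically stable log(1 + e^x)
    return x + math.log1p(math.exp(-x)) if x > 0 else math.log1p(math.exp(x))

# Check log(1 + e^{-S-Xi}) <= (1 + e^{-Xi}) * log(1 + e^{-S}) for S >= 0.
for _ in range(10000):
    S = random.uniform(0.0, 20.0)      # plays the role of sum_r (alpha*c_r)^3 >= 0
    Xi = random.uniform(-10.0, 10.0)   # noise term, any sign
    lhs = log1pexp(-S - Xi)
    rhs = (1.0 + math.exp(-Xi)) * log1pexp(-S)
    assert lhs <= rhs + 1e-12, (S, Xi, lhs, rhs)
print("inequality verified on 10000 random (S, Xi) pairs")
```
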
Lastly, to obtain Lemma I.8, we need to bound $\mathrm{G}_{r}^{(t)}$, which is
done in the next lemma.
###### Lemma I.14 (Gradient on the normalized noise).
For $r\in[m]$, the gradient of the loss $\widehat{L}(\bm{W}^{(t)})$ projected
on the normalized noise $\textstyle\chi$ satisfies, with probability $1-o(1)$:
$\displaystyle-\mathrm{G}_{r}^{(t)}\geq\frac{\tilde{\Theta}(\sigma\sqrt{d})}{N}\sum_{i\in\mathcal{Z}_{2}}\ell_{i}^{(t)}\sum_{j\neq
P(\bm{X}_{i})}(\Xi_{i,j,r}^{(t)})^{2}-\frac{\tilde{O}(\sigma)}{N}\sum_{i\in\mathcal{Z}_{1}}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}(\Xi_{i,j,r}^{(t)})^{2}.$
###### Proof of Lemma I.14.
Projecting the gradient (given by Lemma E.1) on $\textstyle\chi$ yields:
$\begin{split}-\mathrm{G}_{r}^{(t)}&=\frac{3}{N^{2}}\sum_{i\in\mathcal{Z}_{2}}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}(\Xi_{i,j,r}^{(t)})^{2}\frac{\|\bm{X}_{i}[j]\|_{2}^{2}}{\|\frac{1}{N}\sum_{b\in\mathcal{Z}_{2}}\sum_{l\neq
P(\bm{X}_{i})}\bm{X}_{b}[l]\|_{2}}\\\
&+\frac{3}{N^{2}}\sum_{i\in\mathcal{Z}_{2}}\ell_{i}^{(t)}\sum_{j\neq
P(\bm{X}_{i})}\sum_{\begin{subarray}{c}k\neq P(\bm{X}_{i})\\\ k\neq
j\end{subarray}}(\Xi_{i,k,r}^{(t)})^{2}\left\langle\bm{X}_{i}[k],\frac{\bm{X}_{i}[j]}{\|\frac{1}{N}\sum_{b\in\mathcal{Z}_{2}}\sum_{l\neq
P(\bm{X}_{i})}\bm{X}_{b}[l]\|_{2}}\right\rangle\\\
&+\frac{3}{N^{2}}\sum_{i\in\mathcal{Z}_{2}}\sum_{\begin{subarray}{c}a\in\mathcal{Z}_{2}\\\
a\neq i\end{subarray}}\ell_{a}^{(t)}\sum_{k\neq
P(\bm{X}_{a})}(\Xi_{a,k,r}^{(t)})^{2}\sum_{j\neq
P(\bm{X}_{i})}\left\langle\bm{X}_{a}[k],\frac{\bm{X}_{i}[j]}{\|\frac{1}{N}\sum_{b\in\mathcal{Z}_{2}}\sum_{l\neq
P(\bm{X}_{i})}\bm{X}_{b}[l]\|_{2}}\right\rangle\\\
&+\frac{3}{N}\sum_{a\in\mathcal{Z}_{1}}\sum_{k\neq
P(\bm{X}_{a})}\ell_{a}^{(t)}(\Xi_{a,k,r}^{(t)})^{2}\left\langle\bm{X}_{a}[k],\frac{\frac{1}{N}\sum_{i\in\mathcal{Z}_{2}}\sum_{j\neq
P(\bm{X}_{i})}\bm{X}_{i}[j]}{\|\frac{1}{N}\sum_{b\in\mathcal{Z}_{2}}\sum_{l\neq
P(\bm{X}_{i})}\bm{X}_{b}[l]\|_{2}}\right\rangle.\end{split}$ (195)
We further bound (195) as:
$\begin{split}&\left|\mathrm{G}_{r}^{(t)}+\frac{3}{N^{2}}\sum_{i\in\mathcal{Z}_{2}}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}(\Xi_{i,j,r}^{(t)})^{2}\frac{\|\bm{X}_{i}[j]\|_{2}^{2}}{\|\frac{1}{N}\sum_{b\in\mathcal{Z}_{2}}\sum_{l\neq
P(\bm{X}_{i})}\bm{X}_{b}[l]\|_{2}}\right.\\\
&\left.-\frac{3}{N^{2}}\sum_{i\in\mathcal{Z}_{2}}\sum_{a\in\mathcal{Z}_{2}}\ell_{a}^{(t)}\sum_{j\neq
P(\bm{X}_{i})}\sum_{k\neq
P(\bm{X}_{a})}(\Xi_{a,k,r}^{(t)})^{2}\left|\left\langle\bm{X}_{a}[k],\frac{\bm{X}_{i}[j]}{\|\frac{1}{N}\sum_{b\in\mathcal{Z}_{2}}\sum_{l\neq
P(\bm{X}_{i})}\bm{X}_{b}[l]\|_{2}}\right\rangle\right|\right|\\\
&\leq\frac{3}{N}\sum_{a\in\mathcal{Z}_{1}}\sum_{k\neq
P(\bm{X}_{a})}\ell_{a}^{(t)}(\Xi_{a,k,r}^{(t)})^{2}\left|\left\langle\bm{X}_{a}[k],\frac{\frac{1}{N}\sum_{i\in\mathcal{Z}_{2}}\sum_{j\neq
P(\bm{X}_{i})}\bm{X}_{i}[j]}{\|\frac{1}{N}\sum_{b\in\mathcal{Z}_{2}}\sum_{l\neq
P(\bm{X}_{i})}\bm{X}_{b}[l]\|_{2}}\right\rangle\right|.\end{split}$ (196)
Since $\frac{\frac{1}{N}\sum_{i\in\mathcal{Z}_{2}}\sum_{j\neq
P(\bm{X}_{i})}\bm{X}_{i}[j]}{\|\frac{1}{N}\sum_{b\in\mathcal{Z}_{2}}\sum_{l\neq
P(\bm{X}_{i})}\bm{X}_{b}[l]\|_{2}}$ is a Gaussian vector normalized to unit
norm, using Lemma K.8, we bound the right-hand side of (196) with probability
$1-o(1)$ as:
$\begin{split}&\left|\mathrm{G}_{r}^{(t)}+\frac{3}{N^{2}}\sum_{i\in\mathcal{Z}_{2}}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}(\Xi_{i,j,r}^{(t)})^{2}\frac{\|\bm{X}_{i}[j]\|_{2}^{2}}{\|\frac{1}{N}\sum_{b\in\mathcal{Z}_{2}}\sum_{l\neq
P(\bm{X}_{i})}\bm{X}_{b}[l]\|_{2}}\right.\\\
&\left.-\frac{3}{N^{2}}\sum_{i\in\mathcal{Z}_{2}}\sum_{a\in\mathcal{Z}_{2}}\ell_{a}^{(t)}\sum_{j\neq
P(\bm{X}_{i})}\sum_{k\neq
P(\bm{X}_{a})}(\Xi_{a,k,r}^{(t)})^{2}\left|\left\langle\bm{X}_{a}[k],\frac{\bm{X}_{i}[j]}{\|\frac{1}{N}\sum_{b\in\mathcal{Z}_{2}}\sum_{l\neq
P(\bm{X}_{i})}\bm{X}_{b}[l]\|_{2}}\right\rangle\right|\right|\\\
&\leq\frac{\sigma}{N}\sum_{a\in\mathcal{Z}_{1}}\sum_{k\neq
P(\bm{X}_{a})}\ell_{a}^{(t)}(\Xi_{a,k,r}^{(t)})^{2}.\end{split}$ (197)
Now, using Lemma K.10, we can further lower bound the left-hand side of
(197) as:
$\begin{split}&\left|\mathrm{G}_{r}^{(t)}+\frac{3}{N^{2}}\sum_{i\in\mathcal{Z}_{2}}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}(\Xi_{i,j,r}^{(t)})^{2}\frac{\|\bm{X}_{i}[j]\|_{2}^{2}}{\|\frac{1}{N}\sum_{b\in\mathcal{Z}_{2}}\sum_{l\neq
P(\bm{X}_{i})}\bm{X}_{b}[l]\|_{2}}\right.\\\
&\left.-\frac{\tilde{\Theta}(P)}{\sqrt{d}N^{2}}\sum_{a\in\mathcal{Z}_{2}}\ell_{a}^{(t)}\sum_{k\neq
P(\bm{X}_{a})}(\Xi_{a,k,r}^{(t)})^{2}\frac{\|\bm{X}_{a}[k]\|_{2}^{2}}{\|\frac{1}{N}\sum_{b\in\mathcal{Z}_{2}}\sum_{l\neq
P(\bm{X}_{i})}\bm{X}_{b}[l]\|_{2}}\right|\\\
&\leq\frac{\sigma}{N}\sum_{a\in\mathcal{Z}_{1}}\sum_{k\neq
P(\bm{X}_{a})}\ell_{a}^{(t)}(\Xi_{a,k,r}^{(t)})^{2}.\end{split}$ (198)
Rewriting (198) yields:
$\begin{split}&\left|\mathrm{G}_{r}^{(t)}+\frac{\Theta(1)}{N^{2}}\sum_{i\in\mathcal{Z}_{2}}\sum_{j\neq
P(\bm{X}_{i})}\ell_{i}^{(t)}(\Xi_{i,j,r}^{(t)})^{2}\frac{\|\bm{X}_{i}[j]\|_{2}^{2}}{\|\frac{1}{N}\sum_{b\in\mathcal{Z}_{2}}\sum_{l\neq
P(\bm{X}_{i})}\bm{X}_{b}[l]\|_{2}}\right|\\\
&\leq\frac{\sigma}{N}\sum_{a\in\mathcal{Z}_{1}}\sum_{k\neq
P(\bm{X}_{a})}\ell_{a}^{(t)}(\Xi_{a,k,r}^{(t)})^{2}.\end{split}$ (199)
Note that $\frac{1}{N}\sum_{b\in\mathcal{Z}_{2}}\sum_{l\neq
P(\bm{X}_{i})}\bm{X}_{b}[l]\sim\mathcal{N}(0,\frac{\hat{\mu}P}{N}\sigma^{2}\bm{I}_{d})$.
By applying Lemma K.9, we have:
$\displaystyle\frac{1}{N}\frac{\|\bm{X}_{i}[j]\|_{2}^{2}}{\|\frac{1}{N}\sum_{b\in\mathcal{Z}_{2}}\sum_{l\neq
P(\bm{X}_{i})}\bm{X}_{b}[l]\|_{2}}$
$\displaystyle=\frac{1}{N}\tilde{\Theta}\left(\sigma\sqrt{\frac{dN}{\hat{\mu}P}}\right)=\tilde{\Theta}\left(\sigma\sqrt{\frac{d}{\hat{\mu}NP}}\right)=\tilde{\Theta}(\sigma\sqrt{d}),$
(200)
where we used $P=\tilde{\Theta}(1)$ and $\hat{\mu}N=\tilde{\Theta}(1)$ in the
last equality of (200). Plugging this in (199) yields the
desired result. ∎
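The scaling in (200) can be illustrated by a short Monte-Carlo sketch: the squared norm of one noise patch concentrates around $\sigma^{2}d$, while the norm of the averaged noise vector scales like $\sigma\sqrt{\hat{\mu}NPd}/N$. The sizes $d$, $\sigma$, $N$, $\hat{\mu}N$ and $P$ below are illustrative placeholders, not the paper's parameter regime:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: n2 = muhat*N noisy samples with P-1 noise patches each,
# patch dimension d, noise scale sigma, total sample count N.
d, sigma, N, n2, P = 4000, 0.5, 1000, 8, 3
n_patches = n2 * (P - 1)

patches = sigma * rng.standard_normal((n_patches, d))
mean_vec = patches.sum(axis=0) / N          # (1/N) * sum of noise patches
ratio = np.sum(patches[0] ** 2) / np.linalg.norm(mean_vec)

# Predicted scale: sigma^2 d / (sigma * sqrt(n_patches * d) / N)
#                = sigma * sqrt(d) * N / sqrt(n_patches)
predicted = sigma * np.sqrt(d) * N / np.sqrt(n_patches)
assert 0.5 < ratio / predicted < 2.0, (ratio, predicted)
print(f"ratio = {ratio:.1f}, predicted scale = {predicted:.1f}")
```

With $\hat{\mu}NP=\tilde{\Theta}(N)$, as in (200), the predicted scale reduces to $\tilde{\Theta}(\sigma\sqrt{d})$.
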
## Appendix J Auxiliary lemmas for GD+M
This section presents the auxiliary lemmas needed in Appendix G.
### J.1 Rewriting derivatives
###### Lemma J.1 (Derivatives for GD+M).
Let $i\in\mathcal{Z}_{k}$, for $k\in\\{1,2\\}.$ Then,
$\ell_{i}^{(t)}=\Theta(1)\widehat{\ell}^{(t)}(\theta)$.
###### Proof.
Let $i\in[N].$ Using D.4, we have:
$\displaystyle\ell_{i}^{(t)}$
$\displaystyle=\mathrm{sigmoid}\left(-\theta^{3}\sum_{s=1}^{m}(c_{s}^{(t)})^{3}-\sum_{s=1}^{m}\sum_{j\neq
P(\bm{X}_{i})}(\Xi_{i,j,s}^{(t)})^{3}\right).$
Therefore, we deduce that:
$\displaystyle
e^{-\tilde{O}((\sigma\sigma_{0}\sqrt{d})^{3})}\widehat{\ell}^{(t)}(\theta)\leq\ell_{i}^{(t)}\leq
e^{\tilde{O}((\sigma\sigma_{0}\sqrt{d})^{3})}\widehat{\ell}^{(t)}(\theta)$
which yields the desired result. ∎
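The deduction above uses that $\log\mathrm{sigmoid}$ is 1-Lipschitz, so perturbing the argument by at most $\epsilon$ changes the value by a multiplicative factor of at most $e^{\epsilon}$. A minimal numeric check of this fact (test ranges are arbitrary):

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# |d/dx log sigmoid(x)| = sigmoid(-x) < 1, hence
# e^{-|delta|} * sigmoid(x) <= sigmoid(x + delta) <= e^{|delta|} * sigmoid(x).
for _ in range(10000):
    x = random.uniform(-30.0, 30.0)
    delta = random.uniform(-5.0, 5.0)   # plays the role of the bounded noise term
    s = sigmoid(x + delta)
    lo = math.exp(-abs(delta)) * sigmoid(x)
    hi = math.exp(abs(delta)) * sigmoid(x)
    assert lo * (1 - 1e-12) <= s <= hi * (1 + 1e-12), (x, delta)
print("multiplicative sigmoid stability verified")
```
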
### J.2 Signal lemmas
In this section, we present the auxiliary lemmas needed to prove D.5. We first
rewrite the update (3) to account for the case where the signal
$c^{(\tau)}$ becomes large.
###### Lemma J.2 (Rewriting signal momentum).
For $t\in[T]$, the maximal signal momentum $\mathcal{G}^{(t)}$ is bounded as:
$\displaystyle\mathcal{G}^{(t+1)}$
$\displaystyle\leq\Theta(1-\gamma)\sum_{\tau=0}^{t}\gamma^{t-\tau}\left(\alpha\nu_{1}^{(\tau)}\min\\{\kappa,(\alpha
c^{(\tau)})^{2}\\}+\beta\nu_{2}^{(\tau)}\min\\{\kappa,(\beta
c^{(\tau)})^{2}\\}\right).$
###### Proof of Lemma J.2.
Let $t\in[T]$. Using the signal momentum given by Lemma G.1, we know that:
$\displaystyle\mathcal{G}^{(t+1)}$
$\displaystyle=\Theta(1-\gamma)\sum_{\tau=0}^{t}\gamma^{t-\tau}\left(\frac{\alpha}{N}\sum_{i\in\mathcal{Z}_{1}}(\alpha
c^{(\tau)})^{2}\ell_{i}^{(\tau)}+\frac{\beta}{N}\sum_{i\in\mathcal{Z}_{2}}(\beta
c^{(\tau)})^{2}\ell_{i}^{(\tau)}\right).$ (201)
To obtain the desired result, we need to prove for $i\in\mathcal{Z}_{1}$:
$\displaystyle(\alpha c^{(\tau)})^{2}\ell_{i}^{(\tau)}$
$\displaystyle\leq\Theta(1)\min\\{\kappa,(\alpha
c^{(\tau)})^{2}\\}\ell_{i}^{(\tau)}.$ (202)
Indeed, we remark that:
$\displaystyle(\alpha c^{(\tau)})^{2}\ell_{i}^{(\tau)}$
$\displaystyle=\frac{\alpha^{2}(c^{(\tau)})^{2}}{1+\exp\left(\alpha^{3}\sum_{s=1}^{m}(c_{s}^{(\tau)})^{3}+\Xi_{i}^{(\tau)}\right)}.$
(203)
By using D.4 and D.5, (203) is bounded as:
$\displaystyle(\alpha c^{(\tau)})^{2}\ell_{i}^{(\tau)}$
$\displaystyle=\frac{\alpha^{2}(c^{(\tau)})^{2}}{1+\exp\left(\alpha^{3}(c^{(\tau)})^{3}+\alpha^{3}\sum_{s\neq
r_{\max}}(c_{s}^{(\tau)})^{3}+\Xi_{i}^{(\tau)}\right)}$
$\displaystyle\leq\frac{\alpha^{2}(c^{(\tau)})^{2}}{1+\exp\left(\alpha^{3}(c^{(\tau)})^{3}-\tilde{O}(m\alpha^{3}\sigma_{0}^{3})-\tilde{O}(mP(\sigma\sigma_{0}\sqrt{d})^{3})\right)}$
$\displaystyle=\frac{\Theta(1)(\alpha c^{(\tau)})^{2}}{1+\exp((\alpha
c^{(\tau)})^{3})}.$ (204)
Using Remark 1, the sigmoid term in (204) becomes small when $\alpha
c^{(\tau)}\geq\kappa^{1/3}$. To summarize, we have:
$\displaystyle(\alpha c^{(\tau)})^{2}\ell_{i}^{(\tau)}$
$\displaystyle=\begin{cases}0&\text{if }\alpha c^{(\tau)}\geq\kappa^{1/3}\\\
(\alpha c^{(\tau)})^{2}\ell_{i}^{(\tau)}&\text{otherwise}\end{cases}.$ (205)
(205) therefore implies $(\alpha
c^{(\tau)})^{2}\ell_{i}^{(\tau)}\leq\Theta(1)\min\\{\kappa^{2/3},(\alpha
c^{(\tau)})^{2}\\}\ell_{i}^{(\tau)}$, which implies (202).
A similar reasoning implies for $i\in\mathcal{Z}_{2}$:
$\displaystyle(\beta c^{(\tau)})^{2}\ell_{i}^{(\tau)}$
$\displaystyle\leq\Theta(1)\min\\{\kappa,(\beta c^{(\tau)})^{2}\\}\ell_{i}^{(\tau)}.$
(206)
Plugging (202) and (206) into (201) yields the desired result. ∎
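The truncation argument above boils down to the map $x\mapsto x^{2}/(1+e^{x^{3}})$ (with $x=\alpha c^{(\tau)}$) being uniformly bounded and negligibly small past a constant threshold. A quick numeric check, with an illustrative grid and threshold standing in for $\kappa^{1/3}$:

```python
import math

def truncated_term(x):
    # (alpha*c)^2 * sigmoid(-(alpha*c)^3), cf. the bound (204)
    z = x ** 3
    if z > 700.0:          # exp would overflow; the term is effectively 0
        return 0.0
    return x * x / (1.0 + math.exp(z))

# Uniformly bounded over x >= 0 ...
grid = [i / 1000.0 for i in range(10001)]   # x in [0, 10]
assert max(truncated_term(x) for x in grid) < 0.3

# ... and negligible once x exceeds a constant threshold (here taken to be 3):
assert truncated_term(3.0) < 1e-9
print("max over [0,10]:", max(truncated_term(x) for x in grid))
```
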
We proved in Lemma 6.1 that after $\mathcal{T}_{0}$ iterations, the signal
satisfies $c^{(t)}\geq\tilde{\Omega}(1/\alpha)$, which makes $\nu_{1}^{(t)}$
small. Similarly, Lemma 6.3 shows that after $\mathcal{T}_{1}$ iterations, the
signal satisfies $c^{(t)}\geq\tilde{\Omega}(1/\beta)$, which makes $\nu_{2}^{(t)}$
small. We use these two facts to bound the sum over time of the signal momentum.
###### Lemma J.3 (Sum of signal momentum at late stages).
For $t\in[\mathcal{T}_{1},T)$, the sum of maximal signal momentum is bounded
as:
$\displaystyle\sum_{s=\mathcal{T}_{1}}^{t}|\mathcal{G}^{(s+1)}|\leq\tilde{O}(\alpha\mathcal{T}_{0})+\tilde{O}(\hat{\mu}\beta\mathcal{T}_{1})+\frac{\tilde{O}(1)}{\eta}.$
(207)
###### Proof of Lemma J.3.
Let $s\in[\mathcal{T}_{1},T]$. From Lemma J.2, the signal momentum is bounded
as:
$\displaystyle|\mathcal{G}^{(s+1)}|$
$\displaystyle\leq\Theta(1-\gamma)\sum_{\tau=0}^{\mathcal{T}_{0}-1}\gamma^{s-\tau}\alpha\nu_{1}^{(\tau)}\min\\{\kappa,(\alpha
c^{(\tau)})^{2}\\}$ (208)
$\displaystyle+\Theta(1-\gamma)\sum_{\tau=\mathcal{T}_{0}}^{s}\gamma^{s-\tau}\alpha\nu_{1}^{(\tau)}\min\\{\kappa,(\alpha
c^{(\tau)})^{2}\\}$
$\displaystyle+\Theta(1-\gamma)\sum_{\tau=0}^{\mathcal{T}_{1}-1}\gamma^{s-\tau}\beta\nu_{2}^{(\tau)}\min\\{\kappa,(\beta
c^{(\tau)})^{2}\\}$
$\displaystyle+\Theta(1-\gamma)\sum_{\tau=\mathcal{T}_{1}}^{s}\gamma^{s-\tau}\beta\nu_{2}^{(\tau)}\min\\{\kappa,(\beta
c^{(\tau)})^{2}\\}.$
We know that for $\tau\geq\mathcal{T}_{0},$
$c^{(\tau)}\geq\tilde{\Omega}(1/\alpha)$ and for $\tau\geq\mathcal{T}_{1},$
$c^{(\tau)}\geq\tilde{\Omega}(1/\beta)$. Plugging these two facts and using
$\nu_{1}^{(\tau)}\leq 1-\hat{\mu}$ and $\nu_{2}^{(\tau)}\leq\hat{\mu}$ in
(208) leads to:
$\displaystyle\mathcal{G}^{(s+1)}$
$\displaystyle\leq(1-\hat{\mu})\alpha\tilde{O}(1-\gamma)\sum_{\tau=0}^{\mathcal{T}_{0}-1}\gamma^{s-\tau}+\alpha\tilde{O}(1-\gamma)\sum_{\tau=\mathcal{T}_{0}}^{s}\gamma^{s-\tau}\nu_{1}^{(\tau)}$
(209)
$\displaystyle+\hat{\mu}\beta\tilde{O}(1-\gamma)\sum_{\tau=0}^{\mathcal{T}_{1}-1}\gamma^{s-\tau}+\beta\tilde{O}(1-\gamma)\sum_{\tau=\mathcal{T}_{1}}^{s}\gamma^{s-\tau}\nu_{2}^{(\tau)}$
For $\tau\in[\mathcal{T}_{0}-1]$, we have
$\gamma^{s-\tau}\leq\gamma^{s-\mathcal{T}_{0}+1}$ and for
$\tau\in[\mathcal{T}_{1}-1]$,
$\gamma^{s-\tau}\leq\gamma^{s-\mathcal{T}_{1}+1}$. From Lemma J.8 and Lemma
6.4, we can bound $\nu_{1}^{(\tau)}$ and $\nu_{2}^{(\tau)}$. Therefore, (209)
is further bounded as:
$\displaystyle\mathcal{G}^{(s+1)}$
$\displaystyle\leq(1-\hat{\mu})\mathcal{T}_{0}\alpha\tilde{O}(1-\gamma)\gamma^{s-\mathcal{T}_{0}+1}+\frac{\tilde{O}(1-\gamma)}{\eta}\sum_{\tau=1}^{s-\mathcal{T}_{0}+1}\frac{\gamma^{s-\mathcal{T}_{0}+1-\tau}}{\tau}$
(210)
$\displaystyle+\hat{\mu}\mathcal{T}_{1}\beta\tilde{O}(1-\gamma)\gamma^{s-\mathcal{T}_{1}+1}+\frac{\tilde{O}(1-\gamma)}{\eta}\sum_{\tau=1}^{s-\mathcal{T}_{1}+1}\frac{\gamma^{s-\mathcal{T}_{1}+1-\tau}}{\tau}$
We now use Lemma K.25 to bound the sum terms in (210). We have:
$\displaystyle\mathcal{G}^{(s+1)}$
$\displaystyle\leq(1-\hat{\mu})\mathcal{T}_{0}\alpha\tilde{O}(1-\gamma)\gamma^{s-\mathcal{T}_{0}+1}+\hat{\mu}\mathcal{T}_{1}\beta\tilde{O}(1-\gamma)\gamma^{s-\mathcal{T}_{1}+1}$
(211)
$\displaystyle+\frac{\tilde{O}(1-\gamma)}{\eta}\left(\gamma^{s-\mathcal{T}_{0}}+\gamma^{(s-\mathcal{T}_{0}+1)/2}\log\left(\frac{s-\mathcal{T}_{0}+1}{2}\right)+\frac{1}{1-\gamma}\frac{2}{s-\mathcal{T}_{0}+1}\right)$
$\displaystyle+\frac{\tilde{O}(1-\gamma)}{\eta}\left(\gamma^{s-\mathcal{T}_{1}}+\gamma^{(s-\mathcal{T}_{1}+1)/2}\log\left(\frac{s-\mathcal{T}_{1}+1}{2}\right)+\frac{1}{1-\gamma}\frac{2}{s-\mathcal{T}_{1}+1}\right).$
We now sum (211) for $s=\mathcal{T}_{1},\dots,t$. Using the geometric-sum
inequality $\sum_{s}\gamma^{s}\leq 1/(1-\gamma)$, we obtain:
$\displaystyle\sum_{s=\mathcal{T}_{1}}^{t}\mathcal{G}^{(s+1)}$
$\displaystyle\leq\tilde{O}(\mathcal{T}_{0}\alpha)+\tilde{O}(\hat{\mu}\beta\mathcal{T}_{1})$
(212)
$\displaystyle+\frac{\tilde{O}(1)}{\eta}\left(1+(1-\gamma)\log(t)\sum_{s=\mathcal{T}_{1}}^{t}(\sqrt{\gamma})^{s-\mathcal{T}_{0}+1}+\sum_{s=\mathcal{T}_{1}}^{t}\frac{2}{s-\mathcal{T}_{0}+1}\right)$
$\displaystyle+\frac{\tilde{O}(1)}{\eta}\left(1+(1-\gamma)\log(t)\sum_{s=\mathcal{T}_{1}}^{t}(\sqrt{\gamma})^{s-\mathcal{T}_{1}+1}+\sum_{s=\mathcal{T}_{1}}^{t}\frac{2}{s-\mathcal{T}_{1}+1}\right)$
We plug $\sum_{s}\sqrt{\gamma}^{s}\leq 1/(1-\sqrt{\gamma})$ and
$\sum_{s=1}^{t-\mathcal{T}_{1}+1}1/s\leq\log(t)+1$ in (212). This yields the
desired result. ∎
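The bound borrowed from Lemma K.25 controls discounted harmonic sums. The following numeric sanity check uses the three-term right-hand side exactly as it appears in (211); the grids of $\gamma$ and $n$ are illustrative:

```python
import math

def discounted_harmonic(gamma, n):
    # sum_{tau=1}^{n} gamma^{n - tau} / tau
    return sum(gamma ** (n - tau) / tau for tau in range(1, n + 1))

def k25_bound(gamma, n):
    # right-hand side of the K.25-type bound as it appears in (211)
    return (gamma ** (n - 1)
            + gamma ** (n / 2) * math.log(n / 2)
            + (1.0 / (1.0 - gamma)) * 2.0 / n)

for gamma in (0.3, 0.5, 0.9, 0.99):
    for n in range(2, 301):
        assert discounted_harmonic(gamma, n) <= k25_bound(gamma, n), (gamma, n)
print("K.25-type bound verified on the test grid")
```

Summing the last term over $s$ is what produces the $\log(t)$ (hence $\tilde{O}(1)/\eta$) contribution in (212).
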
### J.3 Noise lemmas
In this section, we present the technical lemmas to prove Lemma 6.5.
###### Lemma J.4 (Bound on noise momentum).
Run GD+M on the loss function $\widehat{L}(\bm{W}).$ Let $i\in[N]$,
$j\in[P]\backslash\\{P(\bm{X}_{i})\\}$. At a time $t$, the noise momentum is
bounded with probability $1-o(1)$ as:
$\displaystyle\left|-G_{i,j,r}^{(t+1)}+\gamma
G_{i,j,r}^{(t)}\right|\leq(1-\gamma)\tilde{O}(\sigma^{4}\sigma_{0}^{2}d^{2})\nu^{(t)}.$
###### Proof of Lemma J.4.
Let $i\in[N]$ and $j\in[P]\backslash\\{P(\bm{X}_{i})\\}$. Combining the update
rule (4) with Lemma E.3, which gives the noise gradient
$G_{i,j,r}^{(t)}$, we obtain
$\displaystyle\left|-G_{i,j,r}^{(t+1)}+\gamma G_{i,j,r}^{(t)}\right|$ (213)
$\displaystyle\leq\frac{3(1-\gamma)}{N}\ell_{i}^{(t)}(\Xi_{i,j,r}^{(t)})^{2}\|\bm{X}_{i}[j]\|_{2}^{2}+\left|\frac{3(1-\gamma)}{N}\sum_{a=1}^{N}\ell_{a}^{(t)}\sum_{k\neq
P(\bm{X}_{a})}(\Xi_{a,k,r}^{(t)})^{2}\langle\bm{X}_{a}[k],\bm{X}_{i}[j]\rangle\right|.$
Using Lemma K.5 and Lemma K.7, (213) becomes, with probability
$1-o(1),$
$\displaystyle\left|-G_{i,j,r}^{(t+1)}+\gamma G_{i,j,r}^{(t)}\right|$ (214)
$\displaystyle\leq\frac{(1-\gamma)\tilde{\Theta}(\sigma^{2}d)}{N}\ell_{i}^{(t)}(\Xi_{i,j,r}^{(t)})^{2}+\frac{(1-\gamma)\tilde{\Theta}(\sigma^{2}\sqrt{d})}{N}\sum_{a=1}^{N}\ell_{a}^{(t)}\sum_{k\neq
P(\bm{X}_{a})}(\Xi_{a,k,r}^{(t)})^{2}.$
Using $\ell_{i}^{(t)}/N\leq\nu^{(t)}$ and D.4, we upper bound the first term in
(214) to get:
$\displaystyle\left|-G_{i,j,r}^{(t+1)}+\gamma G_{i,j,r}^{(t)}\right|$ (215)
$\displaystyle\leq(1-\gamma)\tilde{O}(\sigma^{4}\sigma_{0}^{2}d^{2})\nu^{(t)}+\frac{(1-\gamma)\tilde{\Theta}(\sigma^{2}\sqrt{d})}{N}\sum_{a=1}^{N}\ell_{a}^{(t)}\sum_{k\neq
P(\bm{X}_{a})}(\Xi_{a,k,r}^{(t)})^{2}.$
We upper bound the second term in (215) by again using D.4:
$\displaystyle\left|-G_{i,j,r}^{(t+1)}+\gamma
G_{i,j,r}^{(t)}\right|\leq(1-\gamma)\left(\tilde{O}(\sigma^{4}\sigma_{0}^{2}d^{2})+\tilde{O}(P\sigma_{0}^{2}\sigma^{4}d^{3/2})\right)\nu^{(t)}.$
(216)
Since $P\leq\tilde{O}(1)$, we have
$\tilde{O}(P\sigma_{0}^{2}\sigma^{4}d^{3/2})\leq\tilde{O}(\sigma^{4}\sigma_{0}^{2}d^{2})$;
using this in (216), we obtain the desired result. ∎
###### Lemma J.5.
Let $t\in[T]$. The noise momentum is bounded as
$\displaystyle|G_{i,j,r}^{(t+1)}|\leq(1-\gamma)\tilde{O}(\sigma^{4}\sigma_{0}^{2}d^{2})\sum_{\tau=0}^{t}\gamma^{t-\tau}\nu^{(\tau)}.$
###### Proof of Lemma J.5.
Let $\tau\in[T].$ From Lemma J.4, we know that:
$\displaystyle|G_{i,j,r}^{(\tau+1)}|\leq|\gamma
G_{i,j,r}^{(\tau)}|+(1-\gamma)\tilde{O}(\sigma^{4}\sigma_{0}^{2}d^{2})\nu^{(\tau)}.$
(217)
We unravel the recursion (217) rule for $\tau=0,\dots,t$ and obtain:
$\displaystyle|G_{i,j,r}^{(t+1)}|$
$\displaystyle\leq(1-\gamma)\tilde{O}(\sigma^{4}\sigma_{0}^{2}d^{2})\sum_{\tau=0}^{t}\gamma^{t-\tau}\nu^{(\tau)}.$
∎
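The unravelling step in Lemma J.5 is just the closed form of a geometric recursion. A minimal numeric sketch, with synthetic values standing in for the constant $\tilde{O}(\sigma^{4}\sigma_{0}^{2}d^{2})$ and for $\nu^{(\tau)}$:

```python
import random

random.seed(0)

gamma, C = 0.9, 2.0                    # illustrative stand-ins for the constants
T = 200
nu = [random.uniform(0.0, 1.0) for _ in range(T)]   # synthetic nu^{(tau)} >= 0

# Worst case of the recursion |G^{(tau+1)}| <= gamma*|G^{(tau)}| + (1-gamma)*C*nu^{(tau)}
g = 0.0
for t in range(T):
    g = gamma * g + (1.0 - gamma) * C * nu[t]
    closed_form = (1.0 - gamma) * C * sum(
        gamma ** (t - tau) * nu[tau] for tau in range(t + 1))
    assert abs(g - closed_form) < 1e-9, t
print("recursion matches its unrolled closed form")
```
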
###### Lemma J.6 (Noise momentum at late stages).
For $t\in[\mathcal{T}_{1},T)$, the sum of noise momentum is bounded as:
$\displaystyle\sum_{s=\mathcal{T}_{1}}^{t}|G_{i,j,r}^{(s+1)}|\leq\tilde{O}(\sigma^{4}\sigma_{0}^{2}d^{2})\left(\mathcal{T}_{1}+\frac{1}{\eta\beta}\right).$
###### Proof of Lemma J.6.
Let $s\in[\mathcal{T}_{1},T)$. We first apply Lemma J.5 and obtain:
$\displaystyle|G_{i,j,r}^{(s+1)}|\leq(1-\gamma)\tilde{O}(\sigma^{4}\sigma_{0}^{2}d^{2})\left(\sum_{\tau=0}^{\mathcal{T}_{1}-1}\gamma^{s-\tau}\nu^{(\tau)}+\sum_{\tau=\mathcal{T}_{1}}^{s}\gamma^{s-\tau}\nu^{(\tau)}\right).$
(218)
Using the bound from Lemma 6.4, (218) becomes
$\displaystyle|G_{i,j,r}^{(s+1)}|\leq(1-\gamma)\tilde{O}(\sigma^{4}\sigma_{0}^{2}d^{2})\left(\sum_{\tau=0}^{\mathcal{T}_{1}-1}\gamma^{s-\tau}+\sum_{\tau=\mathcal{T}_{1}}^{s}\frac{\gamma^{s-\tau}}{\eta\beta(\tau-\mathcal{T}_{1}+1)}\right)$
(219)
For $\tau\in[0,\mathcal{T}_{1}-1]$, we have
$\gamma^{s-\tau}\leq\gamma^{s-\mathcal{T}_{1}+1}$. Plugging this bound
in (219) implies:
$\displaystyle|G_{i,j,r}^{(s+1)}|\leq(1-\gamma)\tilde{O}(\sigma^{4}\sigma_{0}^{2}d^{2})\left(\mathcal{T}_{1}\gamma^{s-\mathcal{T}_{1}+1}+\frac{1}{\eta\beta}\sum_{\tau=1}^{s-\mathcal{T}_{1}+1}\frac{\gamma^{s-\mathcal{T}_{1}+1-\tau}}{\tau}\right).$
(220)
We now use Lemma K.25 to bound the sum terms in (220). We have:
$\displaystyle|G_{i,j,r}^{(s+1)}|$ (221)
$\displaystyle\leq(1-\gamma)\tilde{O}(\sigma^{4}\sigma_{0}^{2}d^{2})\mathcal{T}_{1}\gamma^{s-\mathcal{T}_{1}+1}$
$\displaystyle+\frac{1-\gamma}{\eta\beta}\tilde{O}(\sigma^{4}\sigma_{0}^{2}d^{2})\left(\gamma^{s-\mathcal{T}_{1}}+\gamma^{(s-\mathcal{T}_{1}+1)/2}\log\left(\frac{s-\mathcal{T}_{1}+1}{2}\right)+\frac{1}{1-\gamma}\frac{2}{s-\mathcal{T}_{1}+1}\right).$
We now sum (221) for $s=\mathcal{T}_{1},\dots,t$. Using the geometric sum
inequality $\sum_{s}\gamma^{s}\leq 1/(1-\gamma),$ we obtain:
$\displaystyle\sum_{s=\mathcal{T}_{1}}^{t}|G_{i,j,r}^{(s+1)}|\leq\tilde{O}(\sigma^{4}\sigma_{0}^{2}d^{2})\mathcal{T}_{1}+\frac{\tilde{O}(\sigma^{4}\sigma_{0}^{2}d^{2})}{\eta\beta}\left(\log\left(t\right)+\sum_{s=\mathcal{T}_{1}}^{t}\frac{2}{s-\mathcal{T}_{1}+1}\right).$
(222)
We finally use the harmonic series inequality
$\sum_{s=1}^{t-\mathcal{T}_{1}}1/s\leq 1+\log(t)$ in (222) to obtain the
desired result. ∎
### J.4 Convergence rate of the training loss using GD+M
In this section, we prove that when using GD+M, the training loss converges
sublinearly in our setting.
#### J.4.1 Convergence after learning $\mathcal{Z}_{1}$
$(t\in[\mathcal{T}_{0},T])$
###### Lemma J.7.
For $t\in[\mathcal{T}_{0},T]$, using GD+M with learning rate $\eta$, the loss
converges sublinearly to zero as
$\displaystyle(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)\leq\tilde{O}\left(\frac{1}{\eta\alpha^{2}(t-\mathcal{T}_{0}+1)}\right).$
(223)
###### Proof of Lemma J.7.
Let $t\in[\mathcal{T}_{0},T].$ Using Lemma J.11, we bound the signal momentum
as:
$\displaystyle-\mathcal{G}^{(t)}$
$\displaystyle\geq\Theta(1-\gamma)\alpha\sum_{s=\mathcal{T}_{0}}^{t}\gamma^{t-s}\widehat{\ell}^{(s)}(\alpha)(\alpha
c^{(s)})^{2}$ $\displaystyle\geq(1-\hat{\mu})\Theta(1-\gamma)\alpha(\alpha
c^{(t)})^{2}\widehat{\ell}^{(t)}(\alpha)\sum_{s=\mathcal{T}_{0}}^{t}\gamma^{t-s}$
$\displaystyle\geq(1-\hat{\mu})\Theta(1)\alpha(\alpha
c^{(t)})^{2}\widehat{\ell}^{(t)}(\alpha).$ (224)
From Lemma 6.1, we know that $c^{(t)}\geq\tilde{\Omega}(1/\alpha).$ Thus, we
simplify (224) as:
$\displaystyle-\mathcal{G}^{(t)}\geq(1-\hat{\mu})\tilde{\Omega}(\alpha)\widehat{\ell}^{(t)}(\alpha).$
(225)
We now plug (225) into the signal update (3):
$\displaystyle c^{(t+1)}\geq
c^{(t)}+\tilde{\Omega}(\eta\alpha)(1-\hat{\mu})\widehat{\ell}^{(t)}(\alpha).$
(226)
We now apply Lemma K.22 to lower bound (226) by loss terms. We have:
$\displaystyle c^{(t+1)}\geq
c^{(t)}+\tilde{\Omega}(\eta\alpha)(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha).$
(227)
Assume now, by contradiction, that for $t\in[\mathcal{T}_{0},T]$, we have:
$\displaystyle(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)>\frac{\tilde{\Omega}(1)}{\eta\alpha^{2}(t-\mathcal{T}_{0}+1)}.$
(228)
From the update (3), we know that $c_{r}^{(\tau)}$ is a non-decreasing
sequence, which implies that $\sum_{r=1}^{m}(\alpha c_{r}^{(\tau)})^{3}$ is
also non-decreasing for $\tau\in[T]$. Since $x\mapsto\log(1+\exp(-x))$ is non-
increasing, this implies that for $s\leq t$, we have:
$\displaystyle\frac{\tilde{\Omega}(1)}{\eta\alpha^{2}(t-\mathcal{T}_{0}+1)}<(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)\leq(1-\hat{\mu})\widehat{\mathcal{L}}^{(s)}(\alpha).$
(229)
Plugging (229) into the update (227) yields for $s\in[\mathcal{T}_{0},t]$:
$\displaystyle
c^{(s+1)}>c^{(s)}+\frac{\tilde{\Omega}(1)}{\alpha(t-\mathcal{T}_{0}+1)}$ (230)
We now sum (230) for $s=\mathcal{T}_{0},\dots,t$ and obtain:
$\displaystyle
c^{(t+1)}>c^{(\mathcal{T}_{0})}+\frac{\tilde{\Omega}(1)(t-\mathcal{T}_{0}+1)}{\alpha(t-\mathcal{T}_{0}+1)}>\frac{\tilde{\Omega}(1)}{\alpha},$
(231)
where we used the fact that
$c^{(\mathcal{T}_{0})}\geq\tilde{\Omega}(1/\alpha)>0$ (Lemma 6.2) in the last
inequality. Thus, from Lemma 6.1 and (231), we have for
$t\in[\mathcal{T}_{0},T]$, $c^{(t)}\geq\tilde{\Omega}(1/\alpha).$ Let’s now
show that this leads to a contradiction. Indeed, for
$t\in[\mathcal{T}_{0},T]$, we have:
$\displaystyle\eta\alpha^{2}(t-\mathcal{T}_{0}+1)(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)$
$\displaystyle\leq\eta\alpha^{2}T(1-\hat{\mu})\log\left(1+\exp(-\tilde{\Omega}(1))\right),$
(232)
where we used $c^{(t)}\geq\tilde{\Omega}(1/\alpha)$ in (232). We now apply
Lemma K.22 in (232) and obtain:
$\displaystyle\eta\alpha^{2}(t-\mathcal{T}_{0}+1)(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)\leq\frac{(1-\hat{\mu})\eta\alpha^{2}T}{1+\exp(\tilde{\Omega}(1))}.$
(233)
Given the values of $\alpha,\eta,T$, we finally have:
$\displaystyle\eta\alpha^{2}(t-\mathcal{T}_{0}+1)(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)\leq\tilde{O}(1),$
(234)
which contradicts (228). ∎
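The $1/(t-\mathcal{T}_{0}+1)$ rate of Lemma J.7 can be watched emerging from a scalar caricature of the update (227), $c^{(t+1)}=c^{(t)}+\eta\alpha\,\mathcal{L}(c^{(t)})$ with $\mathcal{L}(c)=\log(1+e^{-(\alpha c)^{3}})$. The constants $\eta$, $\alpha$, $c^{(0)}$ below are illustrative, and the factor $(1-\hat{\mu})$ is dropped:

```python
import math

eta, alpha = 0.1, 1.0
c = 1.0                                  # start past c >= Omega~(1/alpha)

def loss(c):
    # L(c) = log(1 + exp(-(alpha*c)^3)), computed stably
    return math.log1p(math.exp(-((alpha * c) ** 3)))

products = []
for t in range(1, 5001):
    c += eta * alpha * loss(c)           # caricature of update (227)
    products.append(t * eta * alpha ** 2 * loss(c))

# Sublinear convergence: t * eta * alpha^2 * L(c^{(t)}) stays bounded,
# i.e. L(c^{(t)}) = O(1 / (eta * alpha^2 * t)).
assert max(products) < 0.5, max(products)
print("sup_t  t*eta*alpha^2*L(c_t) =", round(max(products), 4))
```
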
We now link the bound on the loss to the derivative $\nu_{1}^{(t)}.$
###### Lemma J.8.
For $t\in[\mathcal{T}_{0},T]$, we have
$\nu_{1}^{(t)}\leq\tilde{O}\left(\frac{1}{\eta(t-\mathcal{T}_{0}+1)\alpha}\right)$.
###### Proof of Lemma J.8.
The proof is similar to the one of Lemma 6.4.
∎
#### J.4.2 Convergence at late stages $(t\in[\mathcal{T}_{1},T])$
###### Lemma J.9 (Convergence rate of the loss).
For $t\in[\mathcal{T}_{1},T]$, using GD+M with learning rate $\eta>0$, the loss
converges sublinearly to zero as
$\displaystyle(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)+\hat{\mu}\widehat{\mathcal{L}}^{(t)}(\beta)\leq\tilde{O}\left(\frac{1}{\eta\beta^{2}(t-\mathcal{T}_{1}+1)}\right).$
(235)
###### Proof of Lemma J.9.
Let $t\in[\mathcal{T}_{1},T].$ From Lemma J.10, we know that the signal
gradient is bounded as $-\mathscr{G}^{(s)}\geq-\mathscr{G}^{(t)}$ for
$s\in[\mathcal{T}_{1},t].$ Therefore:
$\displaystyle-\mathcal{G}^{(t)}$
$\displaystyle=-\gamma^{t-\mathcal{T}_{1}}\mathcal{G}^{(\mathcal{T}_{1})}-(1-\gamma)\sum_{s=\mathcal{T}_{1}}^{t}\gamma^{t-s}\mathscr{G}^{(s)}$
$\displaystyle\geq-(1-\gamma)\sum_{s=\mathcal{T}_{1}}^{t}\gamma^{t-s}\mathscr{G}^{(s)}$
$\displaystyle\geq-(1-\gamma)\mathscr{G}^{(t)}\sum_{s=\mathcal{T}_{1}}^{t}\gamma^{t-s}$
$\displaystyle=-\Theta(1)\mathscr{G}^{(t)}.$ (236)
From Lemma E.2, the signal gradient is:
$\displaystyle-\mathscr{G}^{(t)}=\Theta(1)\left(\alpha^{3}\widehat{\ell}^{(t)}(\alpha)+\beta^{3}\widehat{\ell}^{(t)}(\beta)\right)(c^{(t)})^{2}.$
(237)
From Lemma 6.3, we know that $c^{(t)}\geq\tilde{\Omega}(1/\beta)$. Thus, we
simplify (237) as:
$\displaystyle-\mathscr{G}^{(t)}\geq\tilde{\Omega}(\beta)\left((1-\hat{\mu})\widehat{\ell}^{(t)}(\alpha)+\hat{\mu}\widehat{\ell}^{(t)}(\beta)\right).$
(238)
By combining (236) and (238), we finally obtain:
$\displaystyle-\mathcal{G}^{(t)}\geq\tilde{\Omega}(\beta)\left((1-\hat{\mu})\widehat{\ell}^{(t)}(\alpha)+\hat{\mu}\widehat{\ell}^{(t)}(\beta)\right).$
(239)
We now plug (239) in the signal update (3).
$\displaystyle c^{(t+1)}\geq
c^{(t)}+\tilde{\Omega}(\eta\beta)\left((1-\hat{\mu})\widehat{\ell}^{(t)}(\alpha)+\hat{\mu}\widehat{\ell}^{(t)}(\beta)\right).$
(240)
We now apply Lemma K.22 to lower bound (240) by loss terms. We have:
$\displaystyle c^{(t+1)}\geq
c^{(t)}+\tilde{\Omega}(\eta\beta)\left((1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)+\hat{\mu}\widehat{\mathcal{L}}^{(t)}(\beta)\right).$
(241)
Let’s now assume by contradiction that for $t\in[\mathcal{T}_{1},T]$, we have:
$\displaystyle(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)+\hat{\mu}\widehat{\mathcal{L}}^{(t)}(\beta)>\frac{\tilde{\Omega}(1)}{\eta\beta^{2}(t-\mathcal{T}_{1}+1)}.$
(242)
From the (3) update, we know that $c_{r}^{(\tau)}$ is a non-decreasing
sequence, which implies that $\sum_{r=1}^{m}(\theta c_{r}^{(\tau)})^{3}$ is
also non-decreasing for $\tau\in[T]$ and $\theta\in\{\alpha,\beta\}$. Since $x\mapsto\log(1+\exp(-x))$ is non-
increasing, this implies that for $s\leq t$, we have:
$\displaystyle\frac{\tilde{\Omega}(1)}{\eta\beta^{2}(t-\mathcal{T}_{1}+1)}<(1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)+\hat{\mu}\widehat{\mathcal{L}}^{(t)}(\beta)\leq(1-\hat{\mu})\widehat{\mathcal{L}}^{(s)}(\alpha)+\hat{\mu}\widehat{\mathcal{L}}^{(s)}(\beta).$
(243)
Plugging (243) in the update (241) yields for $s\in[\mathcal{T}_{1},t]$:
$\displaystyle
c^{(s+1)}>c^{(s)}+\frac{\tilde{\Omega}(1)}{\beta(t-\mathcal{T}_{1}+1)}$ (244)
We now sum (244) for $s=\mathcal{T}_{1},\dots,t$ and obtain:
$\displaystyle
c^{(t+1)}>c^{(\mathcal{T}_{1})}+\frac{\tilde{\Omega}(1)(t-\mathcal{T}_{1}+1)}{\beta(t-\mathcal{T}_{1}+1)}>\frac{\tilde{\Omega}(1)}{\beta},$
(245)
where we used the fact that
$c^{(\mathcal{T}_{1})}\geq\tilde{\Omega}(1/\beta)>0$ (Lemma 6.2) in the last
inequality. Thus, from Lemma 6.2 and (245), we have for
$t\in[\mathcal{T}_{1},T]$, $c^{(t)}\geq\tilde{\Omega}(1/\beta).$ Let’s now
show that this leads to a contradiction. Indeed, for
$t\in[\mathcal{T}_{1},T]$, we have:
$\displaystyle\eta\beta^{2}(t-\mathcal{T}_{1}+1)\left((1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)+\hat{\mu}\widehat{\mathcal{L}}^{(t)}(\beta)\right)$
$\displaystyle\leq$
$\displaystyle\eta\beta^{2}T\left((1-\hat{\mu})\log\left(1+\exp\left(-(\alpha
c^{(t)})^{3}-\sum_{r\neq r_{\max}}(\alpha c_{r}^{(t)})^{3}\right)\right)\right.$
$\displaystyle\left.\qquad+\hat{\mu}\log\left(1+\exp\left(-(\beta
c^{(t)})^{3}-\sum_{r\neq r_{\max}}(\beta c_{r}^{(t)})^{3}\right)\right)\right)$
$\displaystyle\leq$
$\displaystyle\eta\beta^{2}T\left((1-\hat{\mu})\log\left(1+\exp(-\tilde{\Omega}(\alpha^{3}/\beta^{3}))\right)+\hat{\mu}\log\left(1+\exp(-\tilde{\Omega}(1))\right)\right),$
(246)
where we used $\sum_{r\neq
r_{\max}}(c_{r}^{(t)})^{3}\geq-m\tilde{O}(\sigma_{0}^{3})$ and
$c^{(t)}\geq\tilde{\Omega}(1/\beta)$ in (246). We now apply Lemma K.22 in
(246) and obtain:
$\displaystyle\eta\beta^{2}(t-\mathcal{T}_{1}+1)\left((1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)+\hat{\mu}\widehat{\mathcal{L}}^{(t)}(\beta)\right)\leq\frac{(1-\hat{\mu})\eta\beta^{2}T}{1+\exp(\tilde{\Omega}(\alpha^{3}/\beta^{3}))}+\frac{\hat{\mu}\eta\beta^{2}T}{1+\exp(\tilde{\Omega}(1))}.$
(247)
Given the values of $\alpha,\beta,\eta,T,\hat{\mu}$, we finally have:
$\displaystyle\eta\beta^{2}(t-\mathcal{T}_{1}+1)\left((1-\hat{\mu})\widehat{\mathcal{L}}^{(t)}(\alpha)+\hat{\mu}\widehat{\mathcal{L}}^{(t)}(\beta)\right)\leq\tilde{O}(1),$
(248)
which contradicts (242). ∎
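The mechanism behind Lemma J.9 can be illustrated numerically. The following toy simulation (an illustration only, with arbitrarily chosen constants $\eta,\beta$ and starting point; it is not part of the formal argument) iterates a scalar caricature of the update (241) with a single logistic loss term and checks that $t\cdot\widehat{\mathcal{L}}^{(t)}$ stays bounded, consistent with the $\tilde{O}(1/(\eta\beta^{2}t))$ rate of (235):

```python
import math

# Scalar caricature of update (241): the signal c grows by eta*beta times the
# current loss, and the loss is the logistic loss of the margin (beta*c)^3.
eta, beta, c = 0.1, 2.0, 0.6  # c starts at Omega(1/beta), as after time T_1
losses = []
for t in range(1, 20001):
    loss = math.log(1.0 + math.exp(-(beta * c) ** 3))
    losses.append(loss)
    c += eta * beta * loss
# Lemma J.9 predicts loss(t) <= O~(1/(eta*beta^2*t)): t*loss should stay bounded.
products = [t * losses[t - 1] for t in (100, 1000, 10000, 20000)]
print(products)
```

The products $t\cdot\widehat{\mathcal{L}}^{(t)}$ remain bounded and slowly decrease, matching the claimed sublinear rate up to logarithmic factors.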
#### J.4.3 Auxiliary lemmas
We now provide an auxiliary lemma needed to obtain Lemma J.9.
###### Lemma J.10.
Let $t\in[\mathcal{T}_{1},T].$ Then, the signal gradient decreases, i.e.,
$-\mathscr{G}^{(s)}\geq-\mathscr{G}^{(t)}$ for $s\in[\mathcal{T}_{1},t].$
###### Proof of Lemma J.10.
From Lemma E.2, we know that
$\displaystyle-\mathscr{G}^{(t)}=\Theta(1)\left(\alpha^{3}\widehat{\ell}^{(t)}(\alpha)+\beta^{3}\widehat{\ell}^{(t)}(\beta)\right)(c^{(t)})^{2}.$
(249)
Since $c_{r}^{(t)}\geq-\tilde{O}(\sigma_{0})$, we bound (249) as:
$\displaystyle-\mathscr{G}^{(t)}\leq\Theta(1)\left(\alpha^{3}\mathfrak{S}((\alpha
c^{(t)})^{3})+\beta^{3}\mathfrak{S}((\beta c^{(t)})^{3})\right)(c^{(t)})^{2}.$
(250)
The function $x\mapsto x^{2}\mathfrak{S}(x^{3})$ is non-increasing for $x\geq
1.$ Since $c^{(t)}\geq\tilde{\Omega}(1/\beta)$, we have:
$\displaystyle-\mathscr{G}^{(t)}\leq\Theta(1)\left(\alpha^{3}\mathfrak{S}((\alpha
c^{(\mathcal{T}_{1})})^{3})+\beta^{3}\mathfrak{S}((\beta
c^{(\mathcal{T}_{1})})^{3})\right)(c^{(\mathcal{T}_{1})})^{2}=-\mathscr{G}^{(\mathcal{T}_{1})}.$
(251)
∎
###### Lemma J.11.
Let $t\in[\mathcal{T}_{0},T].$ Then, the signal $\mathcal{Z}_{1}$ gradient
decreases, i.e., $\widehat{\ell}^{(s)}(\alpha)(\alpha
c^{(s)})^{2}\geq\widehat{\ell}^{(t)}(\alpha)(\alpha c^{(t)})^{2}$ for
$s\in[\mathcal{T}_{0},t].$
###### Proof of Lemma J.11.
The proof is similar to the one of Lemma J.10.
∎
## Appendix K Useful lemmas
In this section, we provide the probabilistic and optimization lemmas and the
main inequalities used above.
### K.1 Probabilistic lemmas
In this section, we introduce the probabilistic lemmas used in the proof.
#### K.1.1 High-probability bounds
###### Lemma K.1.
The sum of independent symmetric random variables is symmetric.
###### Lemma K.2 (Sum of sub-Gaussians (Vershynin, 2018)).
Let $\sigma_{1},\sigma_{2}>0.$ Let $X$ and $Y$ respectively be independent
$\sigma_{1}$- and $\sigma_{2}$-subGaussian random variables. Then, $X+Y$ is a
$\sqrt{\sigma_{1}^{2}+\sigma_{2}^{2}}$-subGaussian random variable.
###### Lemma K.3 (High probability bound subGaussian (Vershynin, 2018)).
Let $t>0.$ Let $X$ be a $\sigma$-subGaussian random variable. Then, we have:
$\displaystyle\mathbb{P}\left[|X|>t\right]\leq
2e^{-\frac{t^{2}}{2\sigma^{2}}}.$
###### Theorem K.1 (Concentration of Lipschitz functions of Gaussian
variables (Wainwright, 2019)).
Let $X_{1},\dots,X_{N}$ be $N$ i.i.d. random variables such that
$X_{i}\sim\mathcal{N}(0,\sigma^{2})$ and $X:=(X_{1},\dots,X_{N}).$ Let
$f\colon\mathbb{R}^{N}\rightarrow\mathbb{R}$ be $L$-Lipschitz with respect to
the Euclidean norm. Then,
$\displaystyle\mathbb{P}[|f(X)-\mathbb{E}[f(X)]|\geq t]\leq
2e^{-\frac{t^{2}}{2L^{2}\sigma^{2}}}.$ (252)
###### Lemma K.4 (Expectation of Gaussian vector (Wainwright, 2019)).
Let $\bm{X}\in\mathbb{R}^{d}$ be a Gaussian vector such that
$\bm{X}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I}_{d}).$ Then, its expected norm is
$\mathbb{E}[\|\bm{X}\|_{2}]=\Theta(\sigma\sqrt{d}).$
###### Lemma K.5 (High-probability bound on squared norm of Gaussian).
Let $\bm{X}\in\mathbb{R}^{d}$ be a Gaussian vector such that
$\bm{X}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I}_{d}).$ Then, with probability at
least $1-o(1)$, we have $\|\bm{X}\|_{2}^{2}=\Theta(\sigma^{2}d).$
###### Proof of Lemma K.5.
We know that $\|\cdot\|_{2}$ is $1$-Lipschitz, so applying Theorem K.1 yields:
$\displaystyle\mathbb{P}\left[\left|\|\bm{X}\|_{2}-\mathbb{E}[\|\bm{X}\|_{2}]\right|>\epsilon\right]$
$\displaystyle\leq 2\exp\left(-\frac{\epsilon^{2}}{2\sigma^{2}}\right).$ (253)
By rewriting (253) and using Lemma K.4, we have with probability $1-\delta,$
$\displaystyle\Theta(\sigma\sqrt{d})-\sigma\sqrt{2\log\left(\frac{2}{\delta}\right)}\leq\|\bm{X}\|_{2}\leq\Theta(\sigma\sqrt{d})+\sigma\sqrt{2\log\left(\frac{2}{\delta}\right)}.$
(254)
By squaring (254) and using $(a+b)^{2}\leq 2(a^{2}+b^{2})$, we obtain the desired
result. ∎
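As a quick numerical illustration of Lemma K.5 (a sanity check with arbitrary values of $\sigma$ and $d$, not part of the proof), the squared norm concentrates around $\sigma^{2}d$:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, d = 0.5, 4096
# Draw X ~ N(0, sigma^2 I_d) and compare ||X||_2^2 to its expected order sigma^2 d.
X = sigma * rng.standard_normal(d)
ratio = np.dot(X, X) / (sigma ** 2 * d)
print(ratio)  # concentrates near 1 as d grows
```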
###### Lemma K.6 (Precise bound on squared norm of Gaussian).
Let $\bm{X}\in\mathbb{R}^{d}$ be a Gaussian vector such that
$X\sim\mathcal{N}(0,\sigma^{2}\mathbf{I}_{d}).$ Then, we have:
$\displaystyle\mathbb{P}\left[\|X\|_{2}\in\left[\frac{1}{2}\sigma\sqrt{d},\frac{3}{2}\sigma\sqrt{d}\right]\right]\geq
1-e^{-d/8}.$
###### Proof of Lemma K.6.
We know that the $\|\cdot\|_{2}$ is $1$-Lipschitz and by applying Theorem K.1,
we therefore have:
$\displaystyle\mathbb{P}\left[\left|\|\bm{X}\|_{2}-\mathbb{E}[\|\bm{X}\|_{2}]\right|>\epsilon\right]$
$\displaystyle\leq\exp\left(-\frac{\epsilon^{2}}{2\sigma^{2}}\right).$ (255)
We use Lemma K.4 and set $\epsilon=\frac{\sigma\sqrt{d}}{2}$ in (255) to
finally get:
$\displaystyle\mathbb{P}\left[\left|\|\bm{X}\|_{2}-\mathbb{E}[\|\bm{X}\|_{2}]\right|>\frac{\sigma\sqrt{d}}{2}\right]$
$\displaystyle\leq\exp\left(-\frac{d}{8}\right).$
∎
###### Lemma K.7 (High-probability bound on dot-product of Gaussians).
Let $\bm{X}$ and $\bm{Y}$ be two independent Gaussian vectors in
$\mathbb{R}^{d}$ such that $\bm{X}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I}_{d})$
and $\bm{Y}\sim\mathcal{N}(0,\sigma_{0}^{2}\mathbf{I}_{d})$. Assume that
$\sigma\sigma_{0}\leq 1/d.$ Then, with probability $1-o(1)$, we have:
$\displaystyle|\langle\bm{X},\bm{Y}\rangle|$
$\displaystyle\leq\tilde{O}(\sigma\sigma_{0}\sqrt{d}).$
###### Proof of Lemma K.7.
Let us define $Z:=\langle\bm{X},\bm{Y}\rangle.$ We first remark that $Z$ is a
sub-exponential random variable. Indeed, its moment generating function is:
$\displaystyle M_{Z}(t)=\mathbb{E}[e^{t\langle\bm{X},\bm{Y}\rangle}]=\frac{1}{(1-\sigma^{2}\sigma_{0}^{2}t^{2})^{d/2}}=e^{-\frac{d}{2}\log(1-\sigma^{2}\sigma_{0}^{2}t^{2})}\leq
e^{d\sigma^{2}\sigma_{0}^{2}t^{2}},\qquad\text{for
}|t|\leq\frac{1}{\sqrt{2}\sigma\sigma_{0}},$
where we used $-\log(1-x)\leq 2x$ for $x\in[0,1/2]$ in the last inequality. Therefore,
by definition of a sub-exponential variable, we have:
$\displaystyle\mathbb{P}\left[|Z-\mathbb{E}[Z]|>\epsilon\right]$
$\displaystyle\leq\begin{cases}2e^{-\frac{\epsilon^{2}}{2d\sigma^{2}\sigma_{0}^{2}}}&\text{for
}0\leq\epsilon\leq d\sigma\sigma_{0}\\\
2e^{-\frac{\epsilon}{2\sigma\sigma_{0}}}&\text{for }\epsilon\geq
d\sigma\sigma_{0}\end{cases}.$ (256)
For $0\leq\epsilon\leq d\sigma\sigma_{0}$, which holds for the choice of $\epsilon$ made below whenever $d\geq 2\log(2/\delta)$, (256) is bounded as:
$\displaystyle\mathbb{P}\left[|Z-\mathbb{E}[Z]|>\epsilon\right]\leq
2e^{-\frac{\epsilon^{2}}{2d\sigma^{2}\sigma_{0}^{2}}}.$ (257)
We know that
$\mathbb{E}[Z]=M_{Z}^{\prime}(0)=\left.d(1-\sigma^{2}\sigma_{0}^{2}t^{2})^{-\frac{d}{2}-1}\sigma^{2}\sigma_{0}^{2}t\right|_{t=0}=0.$
By plugging this expectation in (257), we have with probability $1-\delta,$
$\displaystyle|\langle\bm{X},\bm{Y}\rangle|$
$\displaystyle\leq\sigma\sigma_{0}\sqrt{2d\log\left(\frac{2}{\delta}\right)}.$
∎
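The bound of Lemma K.7 can be checked empirically (an illustrative experiment with arbitrary parameters satisfying $\sigma\sigma_{0}\leq 1/d$; it is not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
d, sigma, sigma0, delta = 2000, 1e-3, 1e-3, 0.01  # sigma*sigma0 = 1e-6 <= 1/d
n_trials = 1000
bound = sigma * sigma0 * np.sqrt(2 * d * np.log(2 / delta))
X = sigma * rng.standard_normal((n_trials, d))
Y = sigma0 * rng.standard_normal((n_trials, d))
dots = np.abs(np.einsum("ij,ij->i", X, Y))
frac_violations = np.mean(dots > bound)
print(frac_violations)  # should not noticeably exceed delta
```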
###### Lemma K.8 (High-probability bound on dot-product of Gaussians).
Let $\bm{X}$ and $\bm{Y}$ be two independent Gaussian vectors in
$\mathbb{R}^{d}$ such that
$\bm{X},\bm{Y}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I}_{d}).$ Then, with
probability $1-\delta,$ we have:
$\displaystyle\left|\left\langle\frac{\bm{X}}{\|\bm{X}\|_{2}},\bm{Y}\right\rangle\right|$
$\displaystyle\leq\tilde{O}(\sigma).$
###### Proof of Lemma K.8.
Let $\bm{U}:=\bm{X}/\|\bm{X}\|_{2}$ and $Z:=\langle\bm{U},\bm{Y}\rangle.$ We
know that $\bm{U}$ is uniform on the sphere $\mathbb{S}^{d-1}$, with density
$f_{\bm{U}}(\bm{u})=\frac{\Gamma(d/2)}{2\pi^{d/2}}$ with respect to the surface
measure. Therefore, the moment generating function of $Z$ is:
$\displaystyle
M_{Z}(t)=\int_{\mathbb{S}^{d-1}}\int_{\mathbb{R}^{d}}e^{t\langle\bm{u},\bm{y}\rangle}f_{\bm{U}}(\bm{u})f_{\bm{Y}}(\bm{y})d\bm{u}d\bm{y}$
$\displaystyle=\frac{\Gamma(d/2)}{2\pi^{d/2}(2\pi\sigma^{2})^{d/2}}\int_{\mathbb{S}^{d-1}}\int_{\mathbb{R}^{d}}e^{t\langle\bm{u},\bm{y}\rangle}e^{-\frac{\|\bm{y}\|_{2}^{2}}{2\sigma^{2}}}d\bm{y}d\bm{u}$
$\displaystyle=\frac{\Gamma(d/2)}{2\pi^{d/2}(2\pi\sigma^{2})^{d/2}}\int_{\mathbb{S}^{d-1}}\int_{\mathbb{R}^{d}}e^{-\frac{\|\bm{y}-t\sigma^{2}\bm{u}\|_{2}^{2}}{2\sigma^{2}}}e^{\frac{t^{2}\sigma^{2}\|\bm{u}\|_{2}^{2}}{2}}d\bm{y}d\bm{u}$
$\displaystyle=\frac{\Gamma(d/2)}{2\pi^{d/2}(2\pi\sigma^{2})^{d/2}}\int_{\mathbb{S}^{d-1}}e^{\frac{\sigma^{2}t^{2}\|\bm{u}\|_{2}^{2}}{2}}d\bm{u}$
$\displaystyle=\frac{\Gamma(d/2)}{2\pi^{d/2}(2\pi\sigma^{2})^{d/2}}\int_{\mathbb{S}^{d-1}}e^{\frac{\sigma^{2}t^{2}}{2}}d\bm{u}$
$\displaystyle=e^{\frac{\sigma^{2}t^{2}}{2}}.$ (258)
(258) indicates that $Z$ is a sub-Gaussian random variable of parameter
$\sigma$. By definition, it satisfies
$\displaystyle\mathbb{P}[|Z|>\epsilon]\leq
2e^{-\frac{\epsilon^{2}}{2\sigma^{2}}}.$ (259)
Setting $\delta=2e^{-\frac{\epsilon^{2}}{2\sigma^{2}}}$ in (259) yields that
we have with probability $1-\delta,$
$\displaystyle\left|\left\langle\frac{\bm{X}}{\|\bm{X}\|_{2}},\bm{Y}\right\rangle\right|$
$\displaystyle\leq\sigma\sqrt{2\log\left(\frac{2}{\delta}\right)}.$
∎
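Since $Z=\langle\bm{X}/\|\bm{X}\|_{2},\bm{Y}\rangle$ is exactly a centered Gaussian of standard deviation $\sigma$ by rotational invariance, the bound is easy to probe numerically (illustration only, arbitrary parameters):

```python
import numpy as np

rng = np.random.default_rng(4)
d, sigma, delta = 1000, 2.0, 0.01
n_trials = 2000
X = rng.standard_normal((n_trials, d))      # only the direction of X matters
Y = sigma * rng.standard_normal((n_trials, d))
U = X / np.linalg.norm(X, axis=1, keepdims=True)
Z = np.abs(np.einsum("ij,ij->i", U, Y))
bound = sigma * np.sqrt(2 * np.log(2 / delta))
frac = np.mean(Z > bound)
print(frac)  # should not noticeably exceed delta
```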
###### Lemma K.9 (High probability bound for ratio of norms).
Let $\bm{X}_{1},\dots,\bm{X}_{n}$ be i.i.d. vectors drawn from
$\mathcal{N}(0,\sigma^{2}\mathbf{I}_{d}).$ Then, with probability $1-o(1)$, we
have:
$\displaystyle\frac{\|\bm{X}_{1}\|_{2}^{2}}{\|\sum_{i=1}^{n}\bm{X}_{i}\|_{2}}=\tilde{\Theta}\left(\sigma\sqrt{\frac{d}{n}}\right).$
(260)
###### Proof of Lemma K.9.
We know that for $\bm{X}_{1}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I}_{d})$, Lemma K.6 gives:
$\displaystyle\mathbb{P}\left[\|\bm{X}_{1}\|_{2}^{2}\not\in\left[\frac{\sigma^{2}d}{4},\frac{9\sigma^{2}d}{4}\right]\right]\leq
e^{-d/8}.$ (261)
Therefore, using the law of total probability and (261), we have:
$\displaystyle\mathbb{P}\left[\frac{\|\bm{X}_{1}\|_{2}^{2}}{\|\sum_{i=1}^{n}\bm{X}_{i}\|_{2}}>t\right]$
$\displaystyle=\mathbb{P}\left[\frac{\|\bm{X}_{1}\|_{2}^{2}}{\|\sum_{i=1}^{n}\bm{X}_{i}\|_{2}}>t\;\middle|\;\|\bm{X}_{1}\|_{2}^{2}>\frac{9\sigma^{2}d}{4}\right]\mathbb{P}\left[\|\bm{X}_{1}\|_{2}^{2}>\frac{9\sigma^{2}d}{4}\right]$
$\displaystyle+\mathbb{P}\left[\frac{\|\bm{X}_{1}\|_{2}^{2}}{\|\sum_{i=1}^{n}\bm{X}_{i}\|_{2}}>t\;\middle|\;\|\bm{X}_{1}\|_{2}^{2}<\frac{9\sigma^{2}d}{4}\right]\mathbb{P}\left[\|\bm{X}_{1}\|_{2}^{2}<\frac{9\sigma^{2}d}{4}\right]$
$\displaystyle\leq
e^{-d/8}+\mathbb{P}\left[\frac{\|\bm{X}_{1}\|_{2}^{2}}{\|\sum_{i=1}^{n}\bm{X}_{i}\|_{2}}>t\;\middle|\;\|\bm{X}_{1}\|_{2}^{2}<\frac{9\sigma^{2}d}{4}\right].$
(262)
Now, we can further bound (262) as:
$\displaystyle\mathbb{P}\left[\frac{\|\bm{X}_{1}\|_{2}^{2}}{\|\sum_{i=1}^{n}X_{i}\|_{2}}>t\right]$
$\displaystyle\leq
e^{-d/8}+\mathbb{P}\left[\frac{9\sigma^{2}d}{4t}>\|\sum_{i=1}^{n}\bm{X}_{i}\|_{2}\right].$
(263)
Since $\sum_{i=1}^{n}\bm{X}_{i}\sim\mathcal{N}(0,n\sigma^{2}\mathbf{I}_{d})$,
we also have
$\displaystyle\mathbb{P}\left[\|\sum_{i=1}^{n}\bm{X}_{i}\|_{2}\not\in\left[\frac{\sigma\sqrt{nd}}{2},\frac{3\sigma\sqrt{nd}}{2}\right]\right]\leq
e^{-d/8}.$ (264)
Therefore by setting $t=\frac{3\sigma}{2}\sqrt{\frac{d}{n}}$, we obtain:
$\displaystyle\mathbb{P}\left[\frac{\|\bm{X}_{1}\|_{2}^{2}}{\|\sum_{i=1}^{n}\bm{X}_{i}\|_{2}}>\frac{3\sigma}{2}\sqrt{\frac{d}{n}}\right]\leq
2e^{-d/8}.$ (265)
A similar argument for the lower bound yields:
$\displaystyle\mathbb{P}\left[\frac{\|\bm{X}_{1}\|_{2}^{2}}{\|\sum_{i=1}^{n}\bm{X}_{i}\|_{2}}<\frac{\sigma}{2}\sqrt{\frac{d}{n}}\right]\leq
2e^{-d/8}.$ (266)
∎
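The $\tilde{\Theta}(\sigma\sqrt{d/n})$ scaling of Lemma K.9 can be sanity-checked as follows (arbitrary parameters; illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
sigma, d, n = 1.0, 4096, 16
X = sigma * rng.standard_normal((n, d))
ratio = np.dot(X[0], X[0]) / np.linalg.norm(X.sum(axis=0))
scale = sigma * np.sqrt(d / n)  # predicted Theta(sigma*sqrt(d/n)) order
print(ratio / scale)  # lies in [1/2, 3/2] per (265)-(266), typically near 1
```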
###### Lemma K.10 (High probability bound norms vs dot product).
Let $\bm{X}_{1},\dots,\bm{X}_{n}$ be i.i.d. vectors drawn from
$\mathcal{N}(0,\sigma^{2}\mathbf{I}_{d}).$ Then, with probability $1-o(1),$ we
have:
$\displaystyle\frac{\sqrt{d}|\langle\bm{X}_{1},\bm{X}_{2}\rangle|}{\|\sum_{i=1}^{n}\bm{X}_{i}\|_{2}}\leq\tilde{O}\left(\frac{\|\bm{X}_{1}\|_{2}^{2}}{\|\sum_{i=1}^{n}\bm{X}_{i}\|_{2}}\right).$
(267)
###### Proof of Lemma K.10.
To show the result, it suffices to bound $\sqrt{d}|\langle\bm{X}_{1},\bm{X}_{2}\rangle|$ by $\tilde{O}(\|\bm{X}_{1}\|_{2}^{2})$ with probability $1-o(1)$. By Lemma K.6, with probability at least $1-e^{-d/8}$, we have:
$\displaystyle\|\bm{X}_{1}\|_{2}^{2}\geq\frac{\sigma^{2}d}{4}.$ (268)
Moreover, by Lemma K.7 applied with $\sigma_{0}=\sigma$, with probability $1-o(1)$, we have:
$\displaystyle|\langle\bm{X}_{1},\bm{X}_{2}\rangle|\leq\tilde{O}(\sigma^{2}\sqrt{d}).$ (269)
By a union bound, (268) and (269) hold simultaneously with probability $1-o(1)$, and on their intersection:
$\displaystyle\sqrt{d}|\langle\bm{X}_{1},\bm{X}_{2}\rangle|\leq\tilde{O}(\sigma^{2}d)\leq\tilde{O}(\|\bm{X}_{1}\|_{2}^{2}).$
Dividing both sides by $\|\sum_{i=1}^{n}\bm{X}_{i}\|_{2}$ yields the claim.
∎
#### K.1.2 Anti-concentration of Gaussian polynomials
###### Theorem K.2 (Anti-concentration of Gaussian polynomials (Carbery &
Wright, 2001; Lovett, 2010)).
Let $P(x)=P(x_{1},\dots,x_{n})$ be a degree $d$ polynomial and
$x_{1},\dots,x_{n}$ be i.i.d. Gaussian univariate random variables. Then, the
following holds for all $d,n$.
$\displaystyle\mathbb{P}\left[|P(x)|\leq\epsilon\mathrm{Var}[P(x)]^{1/2}\right]\leq
O(d)\epsilon^{1/d}.$
###### Lemma K.11 (Gaussians and Hermite).
Let
$\mathcal{P}(x_{1},\dots,x_{P})=\sum_{k=1}^{d}\sum_{\mathcal{I}\subset[P]:|\mathcal{I}|=k}c_{\mathcal{I}}\prod_{i\in\mathcal{I}}x_{i}$
be a degree $d$ polynomial where
$x_{1},\dots,x_{P}\overset{i.i.d.}{\sim}\mathcal{N}(0,\sigma^{2})$ and
$c_{\mathcal{I}}\in\mathbb{R}$.
Let $\mathcal{H}(x)=\sum_{e\in\mathbb{N}^{P}:|e|\leq
d}c_{e}^{H}\prod_{i=1}^{P}H_{e_{i}}(x_{i})$ be the Hermite expansion of
$\mathcal{P}$, where $\\{H_{e_{k}}\\}_{k=1}^{d}$ is the Hermite polynomial
basis. Then, the variance of $\mathcal{P}$ is given by
$\mathrm{Var}[\mathcal{P}(x)]=\sum_{e\neq 0}|c_{e}^{H}|^{2}.$
###### Lemma K.12.
Let $\\{\bm{v}_{r}\\}_{r=1}^{m}$ be vectors in $\mathbb{R}^{d}$ such that
there exists a unit-norm vector $\bm{x}$ that satisfies
$|\sum_{r=1}^{m}\langle\bm{v}_{r},\bm{x}\rangle^{3}|\geq 1.$ Then, for
$\bm{\xi}_{1},\dots,\bm{\xi}_{P}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I}_{d})$
i.i.d., we have:
$\displaystyle\mathbb{P}\left[\left|\sum_{j=1}^{P}\sum_{r=1}^{m}\langle\bm{v}_{r},\bm{\xi}_{j}\rangle^{3}\right|\geq\tilde{\Omega}(\sigma^{3})\right]\geq
1-\frac{O(d)}{2^{1/d}}.$
###### Proof of Lemma K.12.
Let $\bm{\xi}_{1},\dots,\bm{\xi}_{P}\sim\mathcal{N}(0,\sigma^{2}\mathbf{I}_{d})$ be i.i.d.
We decompose $\bm{\xi}_{j}$ as $\bm{\xi}_{j}=\tilde{a}_{j}\bm{x}+\bm{b}_{j}$
where $\bm{b}_{j}$ is an independent Gaussian on the orthogonal complement of
$\bm{x}$ and $\tilde{a}_{j}\sim\mathcal{N}(0,\sigma^{2}).$ Finally, we rewrite
$\tilde{a}_{j}$ as $\tilde{a}_{j}=\sigma a_{j}$ where
$a_{j}\sim\mathcal{N}(0,1).$ Therefore, we can rewrite
$\sum_{j=1}^{P}\sum_{r=1}^{m}\langle\bm{v}_{r},\bm{\xi}_{j}\rangle^{3}$ as a
polynomial $\mathcal{P}(a_{1},\dots,a_{P})$ defined as:
$\displaystyle\mathcal{P}(a_{1},\dots,a_{P})$
$\displaystyle=\sigma^{3}\sum_{j=1}^{P}a_{j}^{3}\left(\sum_{r=1}^{m}\langle\bm{v}_{r},\bm{x}\rangle^{3}\right)+3\sigma^{2}\sum_{j=1}^{P}a_{j}^{2}\left(\sum_{r=1}^{m}\langle\bm{v}_{r},\bm{x}\rangle^{2}\langle\bm{v}_{r},\bm{b}_{j}\rangle\right)$
(270)
$\displaystyle+3\sigma\sum_{j=1}^{P}a_{j}\left(\sum_{r=1}^{m}\langle\bm{v}_{r},\bm{x}\rangle\langle\bm{v}_{r},\bm{b}_{j}\rangle^{2}\right)+\sum_{j=1}^{P}\sum_{r=1}^{m}\langle\bm{v}_{r},\bm{b}_{j}\rangle^{3}.$
We now compute the mean and variance of $\mathcal{P}(a_{1},\dots,a_{P}).$
Those quantities are obtained through the Hermite expansion of
$\mathcal{P}$ as stated in Lemma K.11. Let $\mathcal{H}(x)$ be a Hermite polynomial of
degree 3. Since the Hermite basis is given by $H_{0}(x)=1,$ $H_{e_{1}}(x)=x,$
$H_{e_{2}}(x)=x^{2}-1$ and $H_{e_{3}}(x)=x^{3}-3x$, for
$\alpha_{j},\beta_{j},\gamma_{j},\delta_{j}\in\mathbb{R}$, we have:
$\displaystyle\mathcal{H}(a_{1},\dots,a_{P})$
$\displaystyle=\sum_{j=1}^{P}\alpha_{j}H_{e_{3}}(a_{j})+\sum_{j=1}^{P}\beta_{j}H_{e_{2}}(a_{j})+\sum_{j=1}^{P}\gamma_{j}H_{e_{1}}(a_{j})+\sum_{j=1}^{P}\delta_{j}H_{e_{0}}(a_{j})$
$\displaystyle=\sum_{j=1}^{P}\alpha_{j}(a_{j}^{3}-3a_{j})+\sum_{j=1}^{P}\beta_{j}(a_{j}^{2}-1)+\sum_{j=1}^{P}\gamma_{j}a_{j}+\sum_{j=1}^{P}\delta_{j}$
$\displaystyle=\sum_{j=1}^{P}\alpha_{j}a_{j}^{3}+\sum_{j=1}^{P}\beta_{j}a_{j}^{2}+\sum_{j=1}^{P}(\gamma_{j}-3\alpha_{j})a_{j}+\sum_{j=1}^{P}(\delta_{j}-\beta_{j}).$
(271)
Since the decomposition of a polynomial in the monomial basis is unique, we
can equate the coefficients of $\mathcal{H}$ and $\mathcal{P}$ and obtain:
$\displaystyle\begin{cases}\alpha_{j}=\sigma^{3}\sum_{r=1}^{m}\langle\bm{v}_{r},\bm{x}\rangle^{3}\\\
\beta_{j}=3\sigma^{2}\sum_{r=1}^{m}\langle\bm{v}_{r},\bm{x}\rangle^{2}\langle\bm{v}_{r},\bm{b}_{j}\rangle\\\
\gamma_{j}=3\sigma\sum_{r=1}^{m}\langle\bm{v}_{r},\bm{x}\rangle\langle\bm{v}_{r},\bm{b}_{j}\rangle^{2}+3\sigma^{3}\sum_{r=1}^{m}\langle\bm{v}_{r},\bm{x}\rangle^{3}\\\
\delta_{j}=\sum_{r=1}^{m}\langle\bm{v}_{r},\bm{b}_{j}\rangle^{3}+3\sigma^{2}\sum_{r=1}^{m}\langle\bm{v}_{r},\bm{x}\rangle^{2}\langle\bm{v}_{r},\bm{b}_{j}\rangle\end{cases}.$
(272)
By applying Lemma K.11, we get that
$\mathrm{Var}[\mathcal{P}(a)]=\sum_{j=1}^{P}\alpha_{j}^{2}+\sum_{j=1}^{P}\beta_{j}^{2}+\sum_{j=1}^{P}\gamma_{j}^{2}\geq\sum_{j=1}^{P}\alpha_{j}^{2}.$
By using this lower bound on the variance, the fact that
$|\sum_{r=1}^{m}\langle\bm{v}_{r},\bm{x}\rangle^{3}|\geq 1$ and Theorem K.2,
we obtain
$\displaystyle\mathbb{P}\left[\left|\sum_{j=1}^{P}\sum_{r=1}^{m}\langle\bm{v}_{r},\bm{\xi}_{j}\rangle^{3}\right|\geq\epsilon\sigma^{3}\right]\geq
1-O(d)\epsilon^{1/d}$ (273)
Setting $\epsilon=1/2$ in (273) yields the desired result.
∎
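Theorem K.2 can be probed by Monte Carlo for a concrete degree-$3$ polynomial (illustration only; the polynomial and parameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
# Degree-3 polynomial P(x1, x2) = x1^3 + x1*x2 of i.i.d. standard Gaussians.
x = rng.standard_normal((200000, 2))
P = x[:, 0] ** 3 + x[:, 0] * x[:, 1]
eps = 0.001
lhs = np.mean(np.abs(P) <= eps * P.std())  # estimates P[|P(x)| <= eps*Var^{1/2}]
rhs = 3 * eps ** (1 / 3)                   # O(d)*eps^{1/d} with d = 3
print(lhs, rhs)  # the empirical probability stays below the bound
```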
#### K.1.3 Properties of the cube of a Gaussian
###### Lemma K.13.
Let $X\sim\mathcal{N}(0,\sigma^{2})$. Then, $X^{3}$ is
$\sigma^{3}$-subGaussian.
###### Proof of Lemma K.13.
By definition of the moment generating function, we have:
$\displaystyle M_{X^{3}}(t)$
$\displaystyle=\sum_{i=0}^{\infty}\frac{t^{i}E[X^{3i}]}{i!}=\sum_{k=0}^{\infty}\frac{t^{2k}\sigma^{6k}(2k-1)!!}{(2k)!}=\sum_{k=0}^{\infty}\frac{t^{2k}\sigma^{6k}}{2^{k}k!}=e^{\frac{t^{2}\sigma^{6}}{2}}.$
∎
###### Lemma K.14.
Let $(\bm{X}[1],\dots,\bm{X}[P-1])$ be i.i.d. random variables such that
$\bm{X}[j]\sim\mathcal{N}(0,\sigma^{2}\mathbf{I}_{d}).$ Let
$(\bm{w}_{1},\dots,\bm{w}_{m})$ be fixed vectors such that
$\bm{w}_{r}\in\mathbb{R}^{d}.$ Then,
$\displaystyle\sum_{s=1}^{m}\sum_{j=1}^{P-1}\langle\bm{w}_{s},\bm{X}[j]\rangle^{3}\text{
is
}\textstyle(\sigma^{3}\sqrt{P-1}\sqrt{\sum_{s=1}^{m}\|\bm{w}_{s}\|_{2}^{6}})-\text{subGaussian.}$
###### Proof.
We know that
$\langle\bm{w}_{s},\bm{X}[j]\rangle\sim\mathcal{N}(0,\|\bm{w}_{s}\|_{2}^{2}\sigma^{2})$.
Therefore, $\langle\bm{w}_{s},\bm{X}[j]\rangle^{3}$ is the cube of a centered
Gaussian. From Lemma K.13, $\langle\bm{w}_{s},\bm{X}[j]\rangle^{3}$ is
$\sigma^{3}\|\bm{w}_{s}\|_{2}^{3}$-subGaussian. Using Lemma K.2, we deduce
that $\sum_{j=1}^{P-1}\langle\bm{w}_{s},\bm{X}[j]\rangle^{3}$ is
$\sqrt{P-1}\sigma^{3}\|\bm{w}_{s}\|_{2}^{3}$-subGaussian. Applying again Lemma
K.2, we finally obtain that
$\sum_{s=1}^{m}\sum_{j=1}^{P-1}\langle\bm{w}_{s},\bm{X}[j]\rangle^{3}$ is
$\sigma^{3}\sqrt{P-1}\sqrt{\sum_{s=1}^{m}\|\bm{w}_{s}\|_{2}^{6}}$-subGaussian.
∎
### K.2 Tensor Power Method Bound
In this subsection we establish a lemma for comparing the growth speed of two
sequences of updates of the form $z^{(t+1)}=z^{(t)}+\eta
C^{(t)}(z^{(t)})^{2}$. This technique is reminiscent of the classical analysis
of the growth of eigenvalues on the (incremental) tensor power method of
degree $2$ and is stated in full generality in (Allen-Zhu & Li, 2020).
#### K.2.1 Bounds for GD
###### Lemma K.15.
Let $\\{z^{(t)}\\}_{t=0}^{T}$ be a positive sequence defined by the following
recursions
$\displaystyle\begin{cases}z^{(t+1)}\geq z^{(t)}+m(z^{(t)})^{2}\\\
z^{(t+1)}\leq z^{(t)}+M(z^{(t)})^{2}\end{cases},$
where $z^{(0)}>0$ is the initialization and $m,M>0$. Let $\upsilon>0$ such that
$z^{(0)}\leq\upsilon.$ Then, the time $t_{0}$ such that $z^{(t)}\geq\upsilon$
for all $t\geq t_{0}$ is at most:
$\displaystyle
t_{0}=\frac{3}{mz^{(0)}}+\frac{8M}{m}\left\lceil\frac{\log(\upsilon/z_{0})}{\log(2)}\right\rceil.$
###### Proof of Lemma K.15.
Let $n\in\mathbb{N}^{*}$. Let $T_{n}$ be the time where $z^{(t)}\geq
2^{n}z^{(0)}$. This time exists because $z^{(t)}$ is a non-decreasing
sequence. We want to find an upper bound on this time. We start with the case
$n=1.$ By summing the recursion, we have:
$\displaystyle z^{(T_{1})}\geq z^{(0)}+m\sum_{s=0}^{T_{1}-1}(z^{(s)})^{2}.$
(274)
We use the fact that $z^{(s)}\geq z^{(0)}$ in (274) and obtain:
$\displaystyle T_{1}\leq\frac{z^{(T_{1})}-z^{(0)}}{m(z^{(0)})^{2}}.$ (275)
Now, we want to bound $z^{(T_{1})}-z^{(0)}$. Using again the recursion and
$z^{(T_{1}-1)}\leq 2z^{(0)}$, we have:
$\displaystyle z^{(T_{1})}\leq z^{(T_{1}-1)}+M(z^{(T_{1}-1)})^{2}\leq
2z^{(0)}+4M(z^{(0)})^{2}.$ (276)
Combining (275) and (276), we get a bound on $T_{1}.$
$\displaystyle T_{1}\leq\frac{1}{m(z^{(0)})}+\frac{4M}{m}.$ (277)
Now, let’s find a bound for $T_{n}$. Starting from the recursion and using the
fact that $z^{(s)}\geq 2^{n-1}z^{(0)}$ for $s\geq T_{n-1}$ we have:
$\displaystyle z^{(T_{n})}\geq
z^{(T_{n-1})}+m\sum_{s=T_{n-1}}^{T_{n}-1}(z^{(s)})^{2}\geq
z^{(T_{n-1})}+(2^{n-1})^{2}m(z^{(0)})^{2}(T_{n}-T_{n-1}).$ (278)
On the other hand, by using $z^{(T_{n}-1)}\leq 2^{n}z^{(0)}$ we upper bound
$z^{(T_{n})}$ as follows.
$\displaystyle z^{(T_{n})}$ $\displaystyle\leq
z^{(T_{n}-1)}+M(z^{(T_{n}-1)})^{2}\leq 2^{n}z^{(0)}+M2^{2n}(z^{(0)})^{2}.$
(279)
Besides, we know that $z^{(T_{n-1})}\geq 2^{n-1}z^{(0)}$. Therefore, we upper
bound $z^{(T_{n})}-z^{(T_{n-1})}$ as
$\displaystyle z^{(T_{n})}-z^{(T_{n-1})}\leq
2^{n-1}z^{(0)}+M2^{2n}(z^{(0)})^{2}.$ (280)
Combining (278) and (280) yields:
$\displaystyle T_{n}\leq T_{n-1}+\frac{1}{2^{n-1}m(z^{(0)})}+\frac{4M}{m}.$
(281)
We now sum (281) over $k=2,\dots,n$, use (277) and obtain:
$\displaystyle T_{n}\leq
T_{1}+\frac{2}{mz^{(0)}}+\frac{4Mn}{m}\leq\frac{3}{mz^{(0)}}+\frac{4M(n+1)}{m}\leq\frac{3}{mz^{(0)}}+\frac{8Mn}{m}.$
(282)
Lastly, we know that $n$ satisfies $2^{n}z^{(0)}\geq\upsilon$; this implies
that we can set
$n=\left\lceil\frac{\log(\upsilon/z^{(0)})}{\log(2)}\right\rceil$ in (282). ∎
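Lemma K.15 can be checked numerically on a concrete recursion (arbitrary constants; the simulated sequence uses the slowest admissible growth, i.e., the lower recursion with constant $m$):

```python
import math

def hitting_time(z0, m, M, upsilon):
    """Iterate z^{t+1} = z^t + m*(z^t)^2 (slowest growth allowed by the
    recursion) and return the first t with z^t >= upsilon, plus the
    Lemma K.15 bound."""
    t, z = 0, z0
    while z < upsilon:
        z += m * z * z
        t += 1
    bound = 3 / (m * z0) + (8 * M / m) * math.ceil(math.log(upsilon / z0) / math.log(2))
    return t, bound

t, bound = hitting_time(z0=0.01, m=0.5, M=1.0, upsilon=1.0)
print(t, bound)  # the observed hitting time stays below the bound
```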
###### Lemma K.16.
Let $\\{z^{(t)}\\}_{t=0}^{T}$ be a positive sequence defined by the following
recursion
$\displaystyle\begin{cases}z^{(t)}\geq
z^{(0)}+A\sum_{s=0}^{t-1}(z^{(s)})^{2}-C\\\ z^{(t)}\leq
z^{(0)}+A\sum_{s=0}^{t-1}(z^{(s)})^{2}+C\end{cases},$ (283)
where $A,C>0$ and $z^{(0)}>0$ is the initialization. Assume that $C\leq
z^{(0)}/2.$ Let $\upsilon>0$ such that $z^{(0)}\leq\upsilon.$ Then, the time
$t_{0}$ such that $z^{(t)}\geq\upsilon$ is upper bounded as:
$\displaystyle
t_{0}=8\left\lceil\frac{\log(\upsilon/z_{0})}{\log(2)}\right\rceil+\frac{21}{(z^{(0)})A}.$
###### Proof of Lemma K.16.
Let $n\in\mathbb{N}^{*}.$ Let $T_{n}$ be the time where $z^{(t)}\geq
2^{n-1}z^{(0)}$. We want to upper bound this time. We start with the case
$n=1.$ We have:
$\displaystyle z^{(T_{1})}\geq z^{(0)}+A\sum_{s=0}^{T_{1}-1}(z^{(s)})^{2}-C$
(284)
By assumption, we know that $C\leq z^{(0)}/2.$ This implies that
$z^{(t)}\geq z^{(0)}/2$ for all $t\geq 0.$ Plugging this in (284) yields:
$\displaystyle z^{(T_{1})}\geq z^{(0)}+\frac{A}{4}T_{1}(z^{(0)})^{2}-C$ (285)
From (285), we deduce that:
$\displaystyle T_{1}\leq 4\frac{z^{(T_{1})}-z^{(0)}+C}{A(z^{(0)})^{2}}.$ (286)
Now, we want to upper bound $z^{(T_{1})}-z^{(0)}$. Using (283), we deduce
that:
$\displaystyle\begin{cases}z^{(T_{1})}\geq
z^{(0)}+A\sum_{s=0}^{T_{1}-1}(z^{(s)})^{2}-C\\\ z^{(T_{1}-1)}\leq
z^{(0)}+A\sum_{s=0}^{T_{1}-2}(z^{(s)})^{2}+C\end{cases}.$ (287)
Combining the two equations in (287) yields
$\displaystyle z^{(T_{1})}-z^{(T_{1}-1)}\leq A(z^{(T_{1}-1)})^{2}+2C.$ (288)
Since $T_{1}$ is the first time where $z^{(T_{1})}\geq z^{(0)}$, we have
$z^{(T_{1}-1)}\leq z^{(0)}$. Plugging this in (288) leads to:
$\displaystyle z^{(T_{1})}\leq z^{(0)}+A(z^{(0)})^{2}+2C.$ (289)
Finally, using (289) in (286) and $C=o(z^{(0)})$ gives an upper bound on
$T_{1}.$
$\displaystyle T_{1}\leq 4+\frac{3C}{A(z^{(0)})^{2}}\leq
4+\frac{3}{A(z^{(0)})}.$ (290)
Now, let’s find a bound for $T_{n}$. Starting from the recursion, we have:
$\displaystyle\begin{cases}z^{(T_{n})}\geq
z^{(0)}+A\sum_{s=0}^{T_{n}-1}(z^{(s)})^{2}-C\\\ z^{(T_{n-1})}\leq
z^{(0)}+A\sum_{s=0}^{T_{n-1}-1}(z^{(s)})^{2}+C\end{cases}.$ (291)
We subtract the two equations in (291), use $z^{(s)}\geq 2^{n-2}z^{(0)}$ for $s\geq
T_{n-1}$ and obtain:
$\displaystyle z^{(T_{n})}-z^{(T_{n-1})}\geq
A\sum_{s=T_{n-1}}^{T_{n}-1}(z^{(s)})^{2}-2C\geq
2^{2(n-2)}(z^{(0)})^{2}A(T_{n}-T_{n-1})-2C.$ (292)
On the other hand, from the recursion, we have the following inequalities:
$\displaystyle\begin{cases}z^{(T_{n})}\leq
z^{(0)}+A\sum_{s=0}^{T_{n}-1}(z^{(s)})^{2}+C\\\ z^{(T_{n}-1)}\geq
z^{(0)}+A\sum_{s=0}^{T_{n}-2}(z^{(s)})^{2}-C\end{cases}.$ (293)
We subtract the two equations in (293), use $z^{(T_{n}-1)}\leq
2^{n-1}z^{(0)}$ and upper bound $z^{(T_{n})}$ as follows.
$\displaystyle z^{(T_{n})}$ $\displaystyle\leq
z^{(T_{n}-1)}+A(z^{(T_{n}-1)})^{2}+2C\leq
2^{n-1}z^{(0)}+2^{2(n-1)}A(z^{(0)})^{2}+2C.$ (294)
Besides, we know that $z^{(T_{n-1})}\geq 2^{n-2}z^{(0)}$. Therefore, we upper
bound $z^{(T_{n})}-z^{(T_{n-1})}$ as
$\displaystyle z^{(T_{n})}-z^{(T_{n-1})}$ $\displaystyle\leq
2^{n-2}z^{(0)}+2^{2(n-1)}A(z^{(0)})^{2}+2C.$ (295)
Combining (292) and (295) yields:
$\displaystyle T_{n}$ $\displaystyle\leq
T_{n-1}+4+\frac{1}{2^{(n-2)}(z^{(0)})A}+\frac{4C}{2^{2(n-2)}(z^{(0)})^{2}A}$
(296)
We now sum (296) over $k=2,\dots,n$, use $C=o(z^{(0)})$ and then (290) to
obtain:
$\displaystyle T_{n}$ $\displaystyle\leq
T_{1}+4n+\frac{2}{(z^{(0)})A}+\frac{16C}{(z^{(0)})^{2}A}\leq
T_{1}+4n+\frac{18}{(z^{(0)})A}\leq 4(n+1)+\frac{21}{(z^{(0)})A}.$ (297)
Lastly, we know that $n$ satisfies $2^{n}z^{(0)}\geq\upsilon$; this implies
that we can set
$n=\left\lceil\frac{\log(\upsilon/z^{(0)})}{\log(2)}\right\rceil$ in (297). ∎
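The same check applies to the perturbed recursion of Lemma K.16, taking equality in the lower branch of (283) as the slowest admissible sequence (arbitrary constants with $C\leq z^{(0)}/2$; illustration only):

```python
import math

def hitting_time_noisy(z0, A, C, upsilon):
    # Slowest sequence allowed by (283): z^t = z^0 + A*sum_{s<t}(z^s)^2 - C.
    z, acc, t = z0, 0.0, 0
    while z < upsilon:
        acc += z * z
        t += 1
        z = z0 + A * acc - C
    bound = 8 * math.ceil(math.log(upsilon / z0) / math.log(2)) + 21 / (z0 * A)
    return t, bound

t, bound = hitting_time_noisy(z0=0.01, A=0.5, C=0.004, upsilon=1.0)
print(t, bound)  # the observed hitting time stays below the bound
```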
#### K.2.2 Bounds for GD+M
###### Lemma K.17 (Tensor Power Method for momentum).
Let $\gamma\in(0,1).$ Let $\\{c^{(t)}\\}_{t\geq 0}$ and
$\\{\mathcal{G}^{(t)}\\}_{t\geq 0}$ be sequences defined by the following
recursions
$\displaystyle\begin{cases}\mathcal{G}^{(t+1)}=\gamma\mathcal{G}^{(t)}-(1-\gamma)\alpha^{3}(c^{(t)})^{2},\\\
c^{(t+1)}=c^{(t)}-\eta\mathcal{G}^{(t+1)}\end{cases},$
and respectively initialized by $c^{(0)}>0$ and $\mathcal{G}^{(0)}=0$. Let
$\upsilon\in\mathbb{R}$ such that $c^{(0)}\leq\upsilon.$ Then, the time
$t_{0}$ such that $c^{(t)}\geq\upsilon$ is at most:
$\displaystyle
t_{0}=\frac{1}{1-\gamma}\left\lceil\frac{\log(\upsilon/c^{(0)})}{\log(1+\delta)}\right\rceil+\frac{1+\delta}{\eta(1-e^{-1})\alpha^{3}c^{(0)}},$
where $\delta\in(0,1).$
###### Proof of Lemma K.17.
Let $\delta\in(0,1).$ We want to prove the following induction hypotheses:
1. 1.
After
$T_{n}=\frac{n}{1-\gamma}+\sum_{j=0}^{n-2}\frac{\delta(\delta+1)^{j}}{\eta(1-e^{-1})\alpha^{3}c^{(0)}\sum_{\tau=0}^{j}e^{-(j-\tau)}(1+\delta)^{2\tau}}$
iterations, we have:
$\displaystyle-\mathcal{G}^{(T_{n})}\geq(1-e^{-1})\alpha^{3}(c^{(0)})^{2}\sum_{\tau=0}^{n-1}e^{-(n-1-\tau)}(1+\delta)^{2\tau}.$
(TPM-1)
2. 2.
After
$T_{n}^{\prime}=\frac{n}{1-\gamma}+\sum_{j=0}^{n-1}\frac{\delta(\delta+1)^{j}}{\eta(1-e^{-1})\alpha^{3}c^{(0)}\sum_{\tau=0}^{j}e^{-(j-\tau)}(1+\delta)^{2\tau}}$,
we have:
$\displaystyle c^{(T_{n}^{\prime})}$ $\displaystyle\geq(1+\delta)^{n}c^{(0)}.$
(TPM-2)
Let’s first prove (TPM-1) and (TPM-2) for $n=1.$ First, by using the momentum
update, we have:
$\displaystyle-\mathcal{G}^{(T_{1})}$
$\displaystyle=\alpha^{3}\sum_{\tau=0}^{T_{1}-1}\gamma^{T_{1}-1-\tau}(c^{(\tau)})^{2}\geq\alpha^{3}(1-\gamma^{T_{1}})(c^{(0)})^{2},$
(298)
where we used $c^{(\tau)}\geq c^{(0)}$ and $\sum_{\tau=0}^{T_{1}-1}\gamma^{T_{1}-1-\tau}=\frac{1-\gamma^{T_{1}}}{1-\gamma}\geq 1-\gamma^{T_{1}}.$
Setting $T_{1}=1/(1-\gamma)$ and writing $\gamma=1-\varepsilon$, we have
$1-\gamma^{\frac{1}{1-\gamma}}=1-\exp(\log(1-\varepsilon)/\varepsilon)\geq 1-e^{-1},$
since $\log(1-\varepsilon)\leq-\varepsilon$. Plugging this in (298) yields (TPM-1) for $n=1.$
Regarding (TPM-2), we use the iterate update to have:
$\displaystyle c^{(T_{1}^{\prime})}$
$\displaystyle=c^{(T_{1})}-\eta\sum_{\tau=T_{1}}^{T_{1}^{\prime}-1}\mathcal{G}^{(\tau)}$
$\displaystyle\geq
c^{(0)}+\eta\alpha^{3}(1-e^{-1})(c^{(0)})^{2}(T_{1}^{\prime}-T_{1}),$ (299)
where we used $c^{(T_{1})}\geq c^{(0)}$ and (298) to obtain (299). Hence,
$c^{(t)}\geq(1+\delta)c^{(0)}$ as soon as
$t-T_{1}\geq\frac{\delta}{\eta\alpha^{3}(1-e^{-1})c^{(0)}}$, so we may take:
$\displaystyle T_{1}^{\prime}$
$\displaystyle=T_{1}+\frac{\delta}{\eta\alpha^{3}(1-e^{-1})c^{(0)}}=\frac{1}{1-\gamma}+\frac{\delta}{\eta\alpha^{3}(1-e^{-1})c^{(0)}}.$
(300)
We therefore obtained (TPM-2) for $n=1.$ Let’s now assume (TPM-1) and (TPM-2)
for $n$. We now want to prove these induction hypotheses for $n+1.$ First, by
using the momentum update, we have:
$\displaystyle-\mathcal{G}^{(T_{n+1})}$
$\displaystyle=-\gamma^{T_{n+1}-T_{n}^{\prime}}\mathcal{G}^{(T_{n}^{\prime})}+\alpha^{3}\sum_{\tau=T_{n}^{\prime}}^{T_{n+1}-1}\gamma^{T_{n+1}-1-\tau}(c^{(\tau)})^{2}.$
(301)
From (TPM-2) for $n$, we know that $c^{(t)}\geq(1+\delta)^{n}c^{(0)}$ for
$t\geq T_{n}^{\prime}$. Since
$\sum_{\tau=T_{n}^{\prime}}^{T_{n+1}-1}\gamma^{T_{n+1}-1-\tau}\geq 1-\gamma^{T_{n+1}-T_{n}^{\prime}}$, (301) becomes:
$\displaystyle-\mathcal{G}^{(T_{n+1})}$
$\displaystyle\geq-\gamma^{T_{n+1}-T_{n}^{\prime}}\mathcal{G}^{(T_{n}^{\prime})}+\alpha^{3}(1-\gamma^{T_{n+1}-T_{n}^{\prime}})(1+\delta)^{2n}(c^{(0)})^{2}.$
(302)
From (TPM-1), we know that
$-\mathcal{G}^{(T_{n}^{\prime})}\geq(1-e^{-1})\alpha^{3}(c^{(0)})^{2}\sum_{\tau=0}^{n-1}e^{-(n-1-\tau)}(1+\delta)^{2\tau}$.
Therefore, we simplify (302) as:
$\displaystyle-\mathcal{G}^{(T_{n+1})}$
$\displaystyle\geq\gamma^{T_{n+1}-T_{n}^{\prime}}(1-e^{-1})\alpha^{3}(c^{(0)})^{2}\sum_{\tau=0}^{n-1}e^{-(n-1-\tau)}(1+\delta)^{2\tau}$
(303)
$\displaystyle+\alpha^{3}(1-\gamma^{T_{n+1}-T_{n}^{\prime}})(1+\delta)^{2n}(c^{(0)})^{2}.$
When we set $T_{n+1}$ as in (TPM-1), we have
$T_{n+1}-T_{n}^{\prime}=\frac{1}{1-\gamma}.$ Moreover, since
$\gamma=1-\varepsilon$, we have $\gamma^{\frac{1}{1-\gamma}}=e^{-1}$ (up to
lower-order terms in $\varepsilon$). Using these two observations, (303) thus
becomes:
$\displaystyle-\mathcal{G}^{(T_{n+1})}$
$\displaystyle\geq(1-e^{-1})\alpha^{3}(c^{(0)})^{2}\sum_{\tau=0}^{n-1}e^{-(n-\tau)}(1+\delta)^{2\tau}$
$\displaystyle+\alpha^{3}(1-e^{-1})(1+\delta)^{2n}(c^{(0)})^{2}$
$\displaystyle=(1-e^{-1})\alpha^{3}(c^{(0)})^{2}\sum_{\tau=0}^{n}e^{-(n-\tau)}(1+\delta)^{2\tau}.$
(304)
We therefore proved (TPM-1) for $n+1.$ Now, let’s prove (TPM-2). We use the
iterates update and obtain:
$\displaystyle c^{(T_{n+1}^{\prime})}$
$\displaystyle=c^{(T_{n+1})}-\eta\sum_{\tau=T_{n+1}}^{T_{n+1}^{\prime}-1}\mathcal{G}^{(\tau)}$
$\displaystyle\geq(\delta+1)^{n}c^{(0)}+\eta(1-e^{-1})\alpha^{3}(c^{(0)})^{2}\sum_{\tau=0}^{n}e^{-(n-\tau)}(1+\delta)^{2\tau}(T_{n+1}^{\prime}-T_{n+1}),$
(305)
where we used $c^{(T_{n+1})}\geq(\delta+1)^{n}c^{(0)}$ and (304) in the last
inequality. Hence, $c^{(t)}\geq(1+\delta)^{n+1}c^{(0)}$ after at most the
number of additional iterations below, so we may take:
$\displaystyle T_{n+1}^{\prime}$
$\displaystyle=T_{n+1}+\frac{\delta(\delta+1)^{n}}{\eta(1-e^{-1})\alpha^{3}c^{(0)}\sum_{\tau=0}^{n}e^{-(n-\tau)}(1+\delta)^{2\tau}}$
$\displaystyle=\frac{n+1}{1-\gamma}+\sum_{j=0}^{n-1}\frac{\delta(\delta+1)^{j}}{\eta(1-e^{-1})\alpha^{3}c^{(0)}\sum_{\tau=0}^{j}e^{-(j-\tau)}(1+\delta)^{2\tau}}+\frac{\delta(\delta+1)^{n}}{\eta(1-e^{-1})\alpha^{3}c^{(0)}\sum_{\tau=0}^{n}e^{-(n-\tau)}(1+\delta)^{2\tau}}$
$\displaystyle=\frac{n+1}{1-\gamma}+\sum_{j=0}^{n}\frac{\delta(\delta+1)^{j}}{\eta(1-e^{-1})\alpha^{3}c^{(0)}\sum_{\tau=0}^{j}e^{-(j-\tau)}(1+\delta)^{2\tau}}.$
(306)
We therefore proved (TPM-2) for $n+1.$
Let’s now obtain an upper bound on $T_{n}^{\prime}.$ We have:
$\displaystyle T_{n}^{\prime}$
$\displaystyle\leq\frac{n}{1-\gamma}+\frac{\delta}{\eta(1-e^{-1})\alpha^{3}c^{(0)}}\sum_{j=0}^{n-1}\frac{1}{(1+\delta)^{j}}$
$\displaystyle\leq\frac{n}{1-\gamma}+\frac{1+\delta}{\eta(1-e^{-1})\alpha^{3}c^{(0)}}:=\mathscr{T}_{n}.$
(307)
Finally, we choose $n$ such that $(1+\delta)^{n}c^{(0)}\geq\upsilon$, which
holds for $n=\left\lceil\frac{\log(\upsilon/c^{(0)})}{\log(1+\delta)}\right\rceil$.
Plugging this choice in $\mathscr{T}_{n}$ yields the desired bound. ∎
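The growth mechanism above can be sanity-checked numerically. The sketch below simulates the GD+M recursion of Lemma K.17 with illustrative parameter values; the predicted iteration count takes the ceiling of $\log(\upsilon/c^{(0)})/\log(1+\delta)$, i.e., it measures $\upsilon$ relative to the initialization (our reading, in line with the analogous GD bound of Lemma K.16).

```python
import math

# Sanity-check simulation of the GD+M recursion in Lemma K.17 (not part
# of the proof). All parameter values are illustrative.
gamma, eta, alpha = 0.9, 0.01, 1.0
c0, upsilon, delta = 0.1, 1.0, 0.5

n = math.ceil(math.log(upsilon / c0) / math.log(1 + delta))
t_bound = n / (1 - gamma) + (1 + delta) / (
    eta * (1 - math.exp(-1)) * alpha ** 3 * c0
)

c, G, t = c0, 0.0, 0
while c < upsilon and t < 10 ** 6:
    G = gamma * G - alpha ** 3 * c ** 2   # momentum update
    c = c - eta * G                       # iterate update
    t += 1

print(t, t_bound)   # empirical hitting time vs. the bound of the lemma
```

In this regime, momentum accumulates the squared iterates, so the empirical hitting time sits well below the stated bound.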
### K.3 Optimization lemmas
###### Definition K.1 (Smooth function).
Let $f\colon\mathbb{R}^{n\times d}\rightarrow\mathbb{R}$. $f$ is
$\beta$-smooth if $\|\nabla f(\bm{X})-\nabla
f(\bm{Y})\|_{2}\leq\beta\|\bm{X}-\bm{Y}\|_{2}$ for all
$\bm{X},\bm{Y}\in\mathbb{R}^{n\times d}.$ A consequence of smoothness is the
inequality:
$\displaystyle f(\bm{X})\leq f(\bm{Y})+\langle\nabla
f(\bm{Y}),\bm{X}-\bm{Y}\rangle+\frac{\beta}{2}\|\bm{X}-\bm{Y}\|_{2}^{2},\quad\text{for
all }\bm{X},\bm{Y}\in\mathbb{R}^{n\times d}.$
###### Lemma K.18 (Descent lemma for GD).
Let $f\colon\mathbb{R}^{n\times d}\rightarrow\mathbb{R}$ be a $\beta$-smooth
function. Let $\bm{W}^{(t+1)}\in\mathbb{R}^{n\times d}$ be an iterate of GD
with learning rate $\eta\in(0,1/\beta].$ Then, we have
$\displaystyle f(\bm{W}^{(t+1)})\leq f(\bm{W}^{(t)})-\frac{\eta}{2}\|\nabla
f(\bm{W}^{(t)})\|_{2}^{2}.$
###### Proof of Lemma K.18.
By applying the definition of smooth functions and the GD update, we have:
$\displaystyle f(\bm{W}^{(t+1)})$ $\displaystyle\leq
f(\bm{W}^{(t)})+\langle\nabla
f(\bm{W}^{(t)}),\bm{W}^{(t+1)}-\bm{W}^{(t)}\rangle+\frac{\beta}{2}\|\bm{W}^{(t+1)}-\bm{W}^{(t)}\|_{2}^{2}$
$\displaystyle=f(\bm{W}^{(t)})-\eta\|\nabla
f(\bm{W}^{(t)})\|_{2}^{2}+\frac{\beta\eta^{2}}{2}\|\nabla
f(\bm{W}^{(t)})\|_{2}^{2}.$ (308)
Setting $\eta\leq 1/\beta$ in (308) leads to the expected result.
∎
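A minimal numerical illustration of the descent lemma, on the $\beta$-smooth quadratic $f(w)=\frac{\beta}{2}w^{2}$ with gradient $\beta w$ (the parameter values and starting point are our illustrative choices):

```python
# Check that each GD step on a beta-smooth quadratic decreases f by at
# least (eta/2) * ||grad f||^2, as in the descent lemma.
beta, eta, w = 2.0, 0.4, 3.0      # eta <= 1/beta

def f(w):
    return 0.5 * beta * w * w

def grad(w):
    return beta * w

for _ in range(50):
    w_next = w - eta * grad(w)    # one GD step
    assert f(w_next) <= f(w) - 0.5 * eta * grad(w) ** 2 + 1e-12
    w = w_next
print(f(w))   # near zero after 50 steps
```

Here the contraction factor per step is $1-\eta\beta=0.2$, so the loss decays geometrically and every step satisfies the descent inequality.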
###### Lemma K.19 (Sublinear convergence).
Let $\mathscr{T}\geq 0$. Let $(x^{(t)})_{t\geq\mathscr{T}}$ be a non-negative
sequence that satisfies the recursion $x^{(t+1)}\leq x^{(t)}-A(x^{(t)})^{2}$
for some $A>0.$ Then, it is bounded at any time $t>\mathscr{T}$ as
$\displaystyle x^{(t)}\leq\frac{1}{A(t-\mathscr{T})}.$ (309)
###### Proof of Lemma K.19.
Let $\tau\in(\mathscr{T},t]$. By multiplying each side of the recursion by
$(x^{(\tau)}x^{(\tau+1)})^{-1}$, we get:
$\displaystyle\frac{Ax^{(\tau)}}{x^{(\tau+1)}}\leq\frac{1}{x^{(\tau+1)}}-\frac{1}{x^{(\tau)}}.$
(310)
Besides, the update rule indicates that $x^{(\tau)}$ is non-increasing i.e.
$x^{(\tau+1)}\leq x^{(\tau)}.$ Using this fact in (310) yields:
$\displaystyle A\leq\frac{1}{x^{(\tau+1)}}-\frac{1}{x^{(\tau)}}.$ (311)
Now, we sum up (311) for $\tau=\mathscr{T},\dots,t-1$ and obtain:
$\displaystyle
A(t-\mathscr{T})\leq\frac{1}{x^{(t)}}-\frac{1}{x^{(\mathscr{T})}}\leq\frac{1}{x^{(t)}}.$
(312)
Inverting (312) yields the expected result. ∎
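A quick numerical check of the $1/(At)$ rate (with $\mathscr{T}=0$ for simplicity; $A$ and the initialization are illustrative, chosen so that $Ax^{(0)}<1$ and the sequence stays non-negative):

```python
# The recursion x^{(t+1)} = x^{(t)} - A (x^{(t)})^2 obeys the sublinear
# bound x^{(t)} <= 1/(A t) of Lemma K.19.
A, x = 0.5, 0.9
for t in range(1, 1001):
    x = x - A * x * x
    assert 0.0 <= x <= 1.0 / (A * t)
print(x)   # of order 1/(A t) at t = 1000
```

The bound is essentially tight: after 1000 steps the iterate sits just below $1/(At)=0.002$.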
### K.4 Other useful lemmas
#### K.4.1 Logarithmic inequalities
###### Lemma K.20 (Connection between derivative and loss).
Let $a_{1},\dots,a_{m}\in\mathbb{R}$ such that $-\delta\leq a_{i}\leq A$ where
$A,\delta>0$. Assume that $\sum_{i=1}^{m}a_{i}\in(C_{-},C_{+})$, where
$C_{+},C_{-}>0$. Then, the following inequality holds:
$\displaystyle\frac{0.05e^{-6mA^{2}\delta}}{C_{+}\left(1+\frac{m^{2}\delta^{2}}{C_{-}^{2}}\right)}\log\left(1+e^{-\sum_{i=1}^{m}a_{i}^{3}}\right)\leq\frac{\sum_{i=1}^{m}a_{i}^{2}}{1+\exp(\sum_{i=1}^{m}a_{i}^{3})}\leq\frac{20me^{6mA^{2}\delta}}{C_{-}}\log\left(1+e^{-\sum_{i=1}^{m}a_{i}^{3}}\right).$
###### Proof of Lemma K.20.
We apply Lemma K.21 to the sequence $a_{i}+\delta$ and obtain:
$\displaystyle\frac{0.1}{C_{+}}\log\left(1+\exp\left(-\sum_{i=1}^{m}(a_{i}+\delta)^{3}\right)\right)$
$\displaystyle\leq\frac{\sum_{i=1}^{m}(a_{i}+\delta)^{2}}{1+\exp(\sum_{i=1}^{m}(a_{i}+\delta)^{3})}$
(313)
$\displaystyle\leq\frac{10m}{C_{-}}\log\left(1+\exp\left(-\sum_{i=1}^{m}(a_{i}+\delta)^{3}\right)\right).$
We apply Lemma K.24 to further simplify (313).
$\displaystyle\frac{0.1e^{-\sum_{i=1}^{m}(3a_{i}^{2}\delta+3a_{i}\delta^{2}+\delta^{3})}}{C_{+}}\log\left(1+\exp\left(-\sum_{i=1}^{m}a_{i}^{3}\right)\right)$
(314)
$\displaystyle\leq\frac{\sum_{i=1}^{m}(a_{i}+\delta)^{2}}{1+\exp(\sum_{i=1}^{m}(a_{i}+\delta)^{3})}$
$\displaystyle\leq\frac{10m(1+e^{-\sum_{i=1}^{m}(3a_{i}^{2}\delta+3a_{i}\delta^{2}+\delta^{3})})}{C_{-}}\log\left(1+\exp\left(-\sum_{i=1}^{m}a_{i}^{3}\right)\right).$
We remark that the term inside the exponential in (314) can be bounded as:
$\displaystyle 0\leq
2\sum_{i=1}^{m}a_{i}^{2}\delta\leq\sum_{i=1}^{m}(3a_{i}^{2}\delta-2\delta^{3})\leq\sum_{i=1}^{m}(3a_{i}^{2}\delta+3a_{i}\delta^{2}+\delta^{3})\leq
6\sum_{i=1}^{m}a_{i}^{2}\delta\leq 6A^{2}m\delta.$ (315)
Plugging (315) in (314) yields:
$\displaystyle\frac{0.1e^{-6mA^{2}\delta}}{C_{+}}\log\left(1+\exp\left(-\sum_{i=1}^{m}a_{i}^{3}\right)\right)$
(316)
$\displaystyle\leq\frac{\sum_{i=1}^{m}(a_{i}+\delta)^{2}}{1+\exp(\sum_{i=1}^{m}(a_{i}+\delta)^{3})}$
$\displaystyle\leq\frac{20m}{C_{-}}\log\left(1+\exp\left(-\sum_{i=1}^{m}a_{i}^{3}\right)\right).$
Lastly, we need to bound the term in the middle in (316). On one hand, we
have:
$\displaystyle\sum_{i=1}^{m}(a_{i}+\delta)^{2}$
$\displaystyle\leq 2\sum_{i=1}^{m}a_{i}^{2}+2m\delta^{2}\leq
2\left(1+\frac{m^{2}\delta^{2}}{\left(\sum_{i=1}^{m}a_{i}\right)^{2}}\right)\sum_{i=1}^{m}a_{i}^{2}\leq
2\left(1+\frac{m^{2}\delta^{2}}{C_{-}^{2}}\right)\sum_{i=1}^{m}a_{i}^{2},$
(317)
where the middle step uses $\left(\sum_{i=1}^{m}a_{i}\right)^{2}\leq m\sum_{i=1}^{m}a_{i}^{2}$.
Besides, since $x\mapsto x^{3}$ is non-decreasing, we have the following lower
bound:
$\displaystyle\sum_{i=1}^{m}(a_{i}+\delta)^{3}\geq\sum_{i=1}^{m}a_{i}^{3}.$
(318)
Combining (317) and (318) yields:
$\displaystyle\frac{\sum_{i=1}^{m}(a_{i}+\delta)^{2}}{1+\exp(\sum_{i=1}^{m}(a_{i}+\delta)^{3})}\leq
2\left(1+\frac{m^{2}\delta^{2}}{C_{-}^{2}}\right)\frac{\sum_{i=1}^{m}a_{i}^{2}}{1+\exp(\sum_{i=1}^{m}a_{i}^{3})}.$
(319)
On the other hand, we have:
$\displaystyle\sum_{i=1}^{m}(a_{i}+\delta)^{2}$
$\displaystyle\geq\sum_{i=1}^{m}a_{i}^{2}+2\delta\sum_{i=1}^{m}a_{i}\geq\sum_{i=1}^{m}a_{i}^{2}+2\delta
C_{-}\geq\sum_{i=1}^{m}a_{i}^{2}.$ (320)
Besides, using (315), we have:
$\displaystyle\sum_{i=1}^{m}(a_{i}+\delta)^{3}\leq\sum_{i=1}^{m}a_{i}^{3}+6A^{2}m\delta.$
(321)
Thus, using (320) and (321) yields:
$\displaystyle\frac{\sum_{i=1}^{m}(a_{i}+\delta)^{2}}{1+\exp(\sum_{i=1}^{m}(a_{i}+\delta)^{3})}\geq\frac{e^{-6mA^{2}\delta}\sum_{i=1}^{m}a_{i}^{2}}{1+\exp(\sum_{i=1}^{m}a_{i}^{3})}.$
(322)
Finally, we obtain the desired result by combining (316), (319) and (322).
∎
###### Lemma K.21 (Connection between derivative and loss for positive
sequences).
Let $a_{1},\dots,a_{m}\in\mathbb{R}$ such that $a_{i}\geq 0$. Assume that
$\sum_{i=1}^{m}a_{i}\in(C_{-},C_{+})$, where $C_{+},C_{-}>0.$ Then, the
following inequality holds:
$\displaystyle\frac{0.1}{C_{+}}\log\left(1+\exp\left(-\sum_{i=1}^{m}a_{i}^{3}\right)\right)\leq\frac{\sum_{i=1}^{m}a_{i}^{2}}{1+\exp(\sum_{i=1}^{m}a_{i}^{3})}\leq\frac{10m}{C_{-}}\log\left(1+\exp\left(-\sum_{i=1}^{m}a_{i}^{3}\right)\right).$
###### Proof of Lemma K.21.
We first remark that:
$\displaystyle\frac{\sum_{i=1}^{m}a_{i}^{2}}{1+\exp(\sum_{i=1}^{m}a_{i}^{3})}$
$\displaystyle=\frac{\left(\sum_{i=1}^{m}a_{i}^{2}\right)\left(\sum_{j=1}^{m}a_{j}\right)}{\left(1+\exp(\sum_{i=1}^{m}a_{i}^{3})\right)\left(\sum_{j=1}^{m}a_{j}\right)}$
$\displaystyle=\frac{\sum_{i=1}^{m}a_{i}^{3}+\sum_{i=1}^{m}\sum_{j\neq
i}a_{i}^{2}a_{j}}{\left(1+\exp(\sum_{i=1}^{m}a_{i}^{3})\right)\left(\sum_{j=1}^{m}a_{j}\right)}.$
(323)
##### Upper bound.
We upper bound (323) by successively applying $\sum_{i=1}^{m}a_{i}>C_{-}$ and
$a_{i}\geq 0$ for all $i$:
$\displaystyle\frac{\sum_{i=1}^{m}a_{i}^{2}}{1+\exp(\sum_{i=1}^{m}a_{i}^{3})}$
$\displaystyle\leq\frac{\sum_{i=1}^{m}a_{i}^{3}+\sum_{i=1}^{m}\sum_{j\neq
i}a_{i}^{2}a_{j}}{C_{-}\left(1+\exp(\sum_{i=1}^{m}a_{i}^{3})\right)}$
$\displaystyle\leq\frac{\sum_{i=1}^{m}a_{i}^{3}+\sum_{i=1}^{m}\sum_{j=1}^{m}a_{i}^{2}a_{j}}{C_{-}\left(1+\exp(\sum_{i=1}^{m}a_{i}^{3})\right)}$
(324)
where we used $a_{i}>0$ for all $i$ in (323). By applying the rearrangement
inequality to (324), we obtain:
$\displaystyle\frac{\sum_{i=1}^{m}a_{i}^{2}}{1+\exp(\sum_{i=1}^{m}a_{i}^{3})}$
$\displaystyle\leq\frac{m}{C_{-}}\frac{\sum_{i=1}^{m}a_{i}^{3}}{1+\exp(\sum_{i=1}^{m}a_{i}^{3})}.$
(325)
We obtain the final bound by applying Lemma K.22 to (325).
##### Lower bound.
We lower bound (323) by using $\sum_{i=1}^{m}a_{i}<C_{+}$ and
$\sum_{i=1}^{m}\sum_{j\neq i}a_{i}^{2}a_{j}\geq 0$:
$\displaystyle\frac{\sum_{i=1}^{m}a_{i}^{2}}{1+\exp(\sum_{i=1}^{m}a_{i}^{3})}$
$\displaystyle\geq\frac{\sum_{i=1}^{m}a_{i}^{3}+\sum_{i=1}^{m}\sum_{j\neq
i}a_{i}^{2}a_{j}}{C_{+}\left(1+\exp(\sum_{i=1}^{m}a_{i}^{3})\right)}$
$\displaystyle\geq\frac{\sum_{i=1}^{m}a_{i}^{3}}{C_{+}\left(1+\exp(\sum_{i=1}^{m}a_{i}^{3})\right)}.$
(326)
We obtain the final bound by applying Lemma K.22 to (326). ∎
###### Lemma K.22 (Connection between derivative and loss).
Let $x>0.$ Then, we have:
$\displaystyle 0.1\log(1+\exp(-x))\leq\mathfrak{S}(x)\leq 10\log(1+\exp(-x))$
(327)
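A quick grid spot-check of these constants, under the assumption that $\mathfrak{S}$ denotes the sigmoid-type factor $\mathfrak{S}(x)=1/(1+e^{x})$ appearing in the surrounding lemmas (the symbol is defined elsewhere in the paper, so this identification is our assumption):

```python
import math

# Spot-check Lemma K.22 on a grid over (0, 20], taking
# S(x) = 1 / (1 + exp(x)) -- an assumed reading of the symbol.
for i in range(1, 2001):
    x = 0.01 * i
    s = 1.0 / (1.0 + math.exp(x))
    ell = math.log1p(math.exp(-x))
    assert 0.1 * ell <= s <= 10.0 * ell
print("bounds hold on the grid")
```

Under this reading, the ratio $\mathfrak{S}(x)/\log(1+e^{-x})$ in fact lies in roughly $[0.72,1]$ for $x>0$, so the constants $0.1$ and $10$ are comfortably loose.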
###### Lemma K.23.
Let $(x^{(t)})_{t\geq 0}$ be a non-negative sequence. Let $A>0.$ Assume that
$\sum_{\tau=0}^{T}x^{(\tau)}\leq A.$ Then, there exists a time
$\mathscr{T}\in[T]$ such that $x^{(\mathscr{T})}\leq A/T.$
###### Proof of Lemma K.23.
Assume for contradiction that $x^{(\tau)}>A/T$ for all $\tau\in[T]$. Summing
$x^{(\tau)}$ over $\tau$, we obtain $\sum_{\tau=0}^{T}x^{(\tau)}>A$, which
contradicts the assumption that $\sum_{\tau=0}^{T}x^{(\tau)}\leq A.$
∎
###### Lemma K.24 (Log inequalities).
Let $x,y>0.$ Then, the following inequalities hold:
1. 1.
Assume that $y\leq x.$ We have:
$\displaystyle\log(1+xy)\leq(1+y)\log(1+x).$
2. 2.
Assume $y<1$. We have:
$\displaystyle y\log(1+x)\leq\log(1+xy).$
###### Proof of Lemma K.24.
We first remark that:
$\displaystyle\log(1+xy)-\log(1+x)$
$\displaystyle=\log\left(\frac{1+xy}{1+x}\right)$
$\displaystyle=\log\left(1+\frac{x(y-1)}{1+x}\right).$ (328)
From (328), we deduce an upper bound as:
$\displaystyle\log(1+xy)-\log(1+x)\leq\log\left(1+\frac{x(y+1)}{1+x}\right).$
(329)
Successively using the inequalities $\log(1+x)\leq x$ and
$\frac{x}{1+x}\leq\log(1+x)$ for $x>-1$ in (329) yields:
$\displaystyle\log(1+xy)-\log(1+x)$
$\displaystyle\leq(1+y)\frac{x}{1+x}\leq(1+y)\log(1+x).$
This proves item 1 of the Lemma. Let’s now prove item 2. Using $a^{z}\leq
1+(a-1)z$ for $z\in(0,1)$ and $a\geq 1$, we know that:
$\displaystyle(1+x)^{y}\leq 1+xy.$ (330)
Since $\log$ is non-decreasing, applying $\log$ to (330) proves item 2.
∎
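The two inequalities can be spot-checked on a grid of positive values (the grid is an arbitrary illustrative choice):

```python
import math

# Grid spot-check of the two logarithmic inequalities in Lemma K.24.
for xi in range(1, 41):
    for yi in range(1, 41):
        x, y = 0.25 * xi, 0.25 * yi
        if y <= x:   # item 1: log(1+xy) <= (1+y) log(1+x)
            assert math.log1p(x * y) <= (1 + y) * math.log1p(x)
        if y < 1:    # item 2: y log(1+x) <= log(1+xy)
            assert y * math.log1p(x) <= math.log1p(x * y)
print("ok")
```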
In Appendix J, we need to bound the sum $\sum_{s=1}^{t}\frac{\gamma^{t-s}}{s}$
for $\gamma<1.$ We derive such bound here.
###### Lemma K.25.
Let $t\geq 1$. Then, we have:
$\displaystyle\sum_{s=1}^{t}\frac{\gamma^{t-s}}{s}\leq\gamma^{t-1}+\gamma^{t/2}\log\left(\frac{t}{2}\right)+\frac{2}{t}\frac{1}{1-\gamma}.$
###### Proof of Lemma K.25.
Let $t=1$. Then, we have:
$\displaystyle\sum_{s=1}^{t}\frac{\gamma^{t-s}}{s}=1\leq\gamma^{0}+\gamma^{1/2}\log\left(\frac{1}{2}\right)+\frac{2}{1-\gamma},$
(331)
given our choice of $\gamma$. Let $t\geq 2.$ We split the sum into two parts
as follows.
$\displaystyle\sum_{s=1}^{t}\frac{\gamma^{t-s}}{s}-\gamma^{t-1}$
$\displaystyle=\sum_{s=2}^{t}\frac{\gamma^{t-s}}{s}$
$\displaystyle=\sum_{s=2}^{\lfloor
t/2\rfloor}\frac{\gamma^{t-s}}{s}+\sum_{s=\lfloor
t/2\rfloor+1}^{t}\frac{\gamma^{t-s}}{s}$ $\displaystyle\leq\gamma^{t-\lfloor
t/2\rfloor}\sum_{s=2}^{\lfloor t/2\rfloor}\frac{1}{s}+\frac{1}{\lfloor
t/2\rfloor+1}\sum_{s=\lfloor t/2\rfloor+1}^{t}\gamma^{t-s}$
$\displaystyle\leq\gamma^{t/2}\sum_{s=2}^{\lfloor
t/2\rfloor}\frac{1}{s}+\frac{2}{t}\sum_{u=0}^{t-\lfloor
t/2\rfloor-1}\gamma^{u}$ (332)
$\displaystyle\leq\gamma^{t/2}\log\left(\frac{t}{2}\right)+\frac{2}{t}\frac{1}{1-\gamma},$
(333)
where we used the harmonic series inequality
$\sum_{s=2}^{\mathscr{T}}1/s\leq\log(\mathscr{T})$,
$\sum_{u=0}^{\mathscr{T}}\gamma^{u}\leq 1/(1-\gamma)$ and $\lfloor
t/2\rfloor\leq t/2$ in (333).
∎
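The bound can be spot-checked numerically for $t\geq 2$ (the values of $\gamma$ and the range of $t$ below are illustrative; the $t=1$ case is handled separately in the proof):

```python
import math

# Spot-check of Lemma K.25 on a grid of (gamma, t) values.
for gamma in (0.5, 0.9, 0.99):
    for t in range(2, 201):
        lhs = sum(gamma ** (t - s) / s for s in range(1, t + 1))
        rhs = (gamma ** (t - 1)
               + gamma ** (t / 2) * math.log(t / 2)
               + (2.0 / t) / (1 - gamma))
        assert lhs <= rhs + 1e-12
print("ok")
```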
# Stable Coorbit Embeddings of Orbifold Quotients
Dustin G. Mixon (Department of Mathematics and Translational Data Analytics
Institute, The Ohio State University, Columbus, OH) and Yousef Qaddura
(Department of Mathematics, The Ohio State University, Columbus, OH)
###### Abstract
Given a real inner product space $V$ and a group $G$ of linear isometries, we
construct a family of $G$-invariant real-valued functions on $V$ that we call
coorbit filter banks, which unify previous notions of max filter banks and
finite coorbit filter banks. When $V=\mathbb{R}^{d}$ and $G$ is compact, we
establish that a suitable coorbit filter bank is injective and locally lower
Lipschitz in the quotient metric at orbits of maximal dimension. Furthermore,
when the orbit space $\mathbb{S}^{d-1}/G$ is a Riemannian orbifold, we show
that a suitable coorbit filter bank is bi-Lipschitz in the quotient metric.
## 1 Introduction
Many machine learning algorithms are tailored for Euclidean data, typically
represented as vectors in a real inner product space $V$. However, this
representation often has an ambiguity that stems from a subgroup $G$ of the
orthogonal group $\operatorname{O}(V)$. For example, a point cloud of $n$
points in $\mathbb{R}^{d}$ may be represented by a matrix in
$V:=\mathbb{R}^{d\times n}$, in which case an ambiguity arises from permuting
the columns.
Neglecting such ambiguities can magnify the sample complexity of the machine
learning process. To address this, one strategy involves augmenting the
training set with the entire $G$-orbit of each datapoint [30, 12, 23, 11].
However, when $G$ is large, this approach makes the machine learning process
much more computationally expensive than necessary.
Alternatively, one may address the ambiguity by representing objects as
elements $[x]:=G\cdot x$ in the orbit space $V/G$ equipped with the quotient
metric
$d([x],[y]):=\inf_{\begin{subarray}{c}p\in[x]\\\
q\in[y]\end{subarray}}\|p-q\|.$
(Indeed, this metric is nondegenerate provided the $G$-orbits are
topologically closed). In order to access Euclidean-based machine learning
algorithms, we are inclined to embed the orbit space into Euclidean space
while minimizing distortion to the quotient metric. In other words, we aim for
embeddings which admit bi-Lipschitz bounds.
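As a concrete instance of the quotient metric (our illustration, not an example from the paper): when $G$ is the symmetric group permuting the coordinates of $\mathbb{R}^{n}$, the infimum is attained at sorted representatives by the rearrangement inequality, so $d([x],[y])$ can be computed by sorting. A brute-force check:

```python
import itertools
import math

# Quotient distance for G = S_3 permuting coordinates of R^3: the
# infimum over the orbit is attained by sorting both vectors.
# The vectors below are arbitrary.
x = [3.0, -1.0, 2.0]
y = [0.5, 4.0, -2.0]

brute = min(
    math.dist(x, [y[i] for i in perm])
    for perm in itertools.permutations(range(3))
)
via_sorting = math.dist(sorted(x), sorted(y))
print(brute, via_sorting)   # both equal d([x],[y])
```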
Recently, [10] introduced a family of embeddings called max filter banks that
enjoy bi-Lipschitz bounds when $G$ is finite. Later work improved on those
bounds [29, 28]. A theoretical question posed in [10] is whether every
injective max filter bank is bi-Lipschitz. When $G$ is finite, this question
was settled by [5], which introduced a more general family of embeddings
called coorbit filter banks. There, it is shown that every injective coorbit
filter bank admits bi-Lipschitz bounds. The question remains open for infinite
$G$ with only three exceptions, each in the context of max filter banks:
* •
Complex phase retrieval [8, 3], in which $V=\mathbb{C}^{d}$ and
$G=\\{z\cdot\operatorname{id}:z\in\mathbb{C},|z|=1\\}$.
* •
Quaternionic phase retrieval, in which $V=\mathbb{H}^{d}$ and
$G=\\{z\cdot\operatorname{id}:z\in\mathbb{H},|z|=1\\}$. This follows from the
argument in [3]; see case (d) in Theorem 71.
* •
Polar actions [29], in which $V/G$ is isometrically isomorphic to
$V^{\prime}/G^{\prime}$ for some finite
$G^{\prime}\leq\operatorname{O}(V^{\prime})$; see Propositions 97 and 10.
In this paper, we give a construction of coorbit filter banks for all compact
groups $G\leq\operatorname{O}(d)$. These maps unify the family of max filter
banks (introduced in [10] for all compact groups) with the family of coorbit
filter banks (introduced in [5, 6] only for finite groups).
It remains open whether every injective coorbit filter bank is bi-Lipschitz.
While we do not provide a complete answer to this question, we study whether
these maps are bi-Lipschitz given enough generic templates. We prove that this
behavior holds for compact groups $G$ whose spherical orbit space
$\mathbb{S}^{d-1}/G$ is a Riemannian orbifold. It remains open whether this
behavior holds for every compact group. Nonetheless, for general compact
groups, we are able to show the existence of positive local lower Lipschitz
bounds at an open and dense subset of points for sufficiently many generic
templates. In fact, we generalize the notions of injectivity, local lower
Lipschitzness and bi-Lipschitzness into the notions of weak, local and strong
subspace avoidance, respectively. The aim of this paper is to identify
conditions under which coorbit filter banks enjoy these properties.
In Section 2, we construct coorbit filter banks and prove that they are
invariant, symmetric and semialgebraic. There, we also introduce notions of
avoidance and state the problem of interest in technical terms. In Section 3,
we recall the notions of principality and cohomogeneity, and we analyze the
geometry of coorbit maps through a natural Voronoi cell decomposition of
space. This sets up the technical language of the paper. In Section 4, we
estimate the upper Lipschitz bound for coorbit filter banks. In Section 5, we
prove that $2c$ generic templates suffice for a coorbit filter bank to be
injective (more generally, weakly avoidant), where $c\leq d$ is the
cohomogeneity of $G\leq\operatorname{O}(d)$. In Section 6, we show that $2c-1$
generic templates suffice for a coorbit filter bank to be locally lower
Lipschitz (more generally, locally avoidant) at principal points, that is, the
open and dense subset of points in $\mathbb{R}^{d}$ whose stabilizers are
minimal. In Section 7, we reduce the problem of strong avoidance to the groups
for which the origin is the only point that is fixed by all of $G$. In Section
8, we classify groups with finite-index stabilizers (e.g., finite groups and
free groups), and show that for those groups, $2c$ generic templates suffice
for coorbit filter banks to be bi-Lipschitz (more generally, strongly
avoidant). In Section 9, we reduce the assertion that a coorbit filter bank of
$G$ is strongly avoidant to the assertion that a max filter bank of $G_{0}$
(the identity component of $G$) is strongly avoidant. In other words, we
reduce the problem to connected groups. In Section 10, we reduce max filtering
to the case where principal stabilizers are trivial. In Section 11, we show
that with enough templates, max filter banks are locally lower Lipschitz at
orbits of maximal dimension, namely regular orbits. In Section 12, we show
that with enough templates, max filter banks embed (spherical) orbifold
quotients into Euclidean space in a bi-Lipschitz manner. We conclude in
Section 13 with a discussion.
## 2 Construction and Basic Properties of Coorbit Maps
### 2.1 Construction and Invariance Properties
###### Definition 1.
Consider any real inner product space $V$ and $G\leq\operatorname{O}(V)$. Let
$\pi_{0}(G)$ denote the group of connected components of $G$.
* (a)
The component coorbit map over $V$ given by $\overline{\mathcal{C}}\colon
V\times V\times\pi_{0}(G)\to\mathbb{R}$ is defined by
$\overline{\mathcal{C}}(x,y,K):=\sup_{\begin{subarray}{c}p\in K\cdot
x\end{subarray}}\langle p,y\rangle.$
* (b)
The sorting map given by
$\downarrow\colon\operatorname{Hom}(\pi_{0}(G),\mathbb{R})\to\mathbb{R}^{|\pi_{0}(G)|}$
is defined on $f\in\operatorname{Hom}(\pi_{0}(G),\mathbb{R})$ by sorting the
entries of the sequence $(f(K))_{K\in\pi_{0}(G)}$ in descending order, i.e.,
largest first.
* (c)
For $i\in\\{1,\dots,|\pi_{0}(G)|\\}$, the coorbit map over $V$ given by
$\overline{\Psi}_{i}\colon V\times V\to\mathbb{R}$ is defined by
$\overline{\Psi}_{i}(x,y):=\,i^{\text{th}}\text{ entry of
}\downarrow\\{K\in\pi_{0}(G)\mapsto\overline{\mathcal{C}}(x,y,K)\in\mathbb{R}\\}.$
###### Remark 2.
By taking all sort indices to be $p_{i}\equiv 1$, a coorbit filter bank
becomes a max filter bank (Definition 28).
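To make Definition 1 and Remark 2 concrete for a finite group (our illustration, not from the paper), take $G$ to be the cyclic shifts acting on $\mathbb{R}^{4}$. Each component of $G$ is then a singleton, so $\overline{\Psi}_{i}(x,y)$ is simply the $i$-th largest of the inner products $\langle gx,y\rangle$, and $\overline{\Psi}_{1}$ is the max filter:

```python
# Coorbit values for G = cyclic shifts on R^4 (finite group, so each
# component K is a single element). Psi_i(x, y) is the i-th largest
# inner product <g.x, y> over g in G; Psi_1 is the max filter value.
x = [1.0, 2.0, -1.0, 0.5]
y = [0.0, 1.0, 1.0, -2.0]

def shift(v, k):   # cyclic shift by k positions
    return v[-k:] + v[:-k] if k else list(v)

coorbits = sorted(
    (sum(a * b for a, b in zip(shift(x, k), y)) for k in range(4)),
    reverse=True,
)
print(coorbits)   # coorbits[0] = Psi_1 = max filter value
```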
The following lemma shows how the component coorbit map interacts with the
group action, and that coorbit maps are invariant under this action; this is
why they descend to orbit spaces.
###### Lemma 3.
Suppose $G\leq\operatorname{O}(d)$. Then,
1. (a)
For any $g,h\in G$, $x,y\in\mathbb{R}^{d}$ and $K\in\pi_{0}(G)$, we have
$\overline{\mathcal{C}}(gx,hy,K)=\overline{\mathcal{C}}(x,y,h^{-1}Kg).$
2. (b)
Let $S_{\pi_{0}(G)}:=\operatorname{Aut}(\pi_{0}(G))$ denote the group of
bijections from $\pi_{0}(G)$ to itself, and consider the canonical left-action
of $S_{\pi_{0}(G)}$ on $\operatorname{Hom}(\pi_{0}(G),\mathbb{R})$ given by
$s\cdot f=f\circ s^{-1}$ for $s\in S_{\pi_{0}(G)}$ and
$f\in\operatorname{Hom}(\pi_{0}(G),\mathbb{R})$. Then,
$\downarrow(s\cdot f)=\downarrow f.$
3. (c)
For $i\in\\{1,\dots,|\pi_{0}(G)|\\}$, it holds that
$\overline{\Psi}_{i}(x,y)=\overline{\Psi}_{i}(gx,hy)$
for all $x,y\in\mathbb{R}^{d}$ and $g,h\in G$.
###### Proof.
Fix $x,y\in\mathbb{R}^{d}$ and $g,h\in G$. For (a), observe that
$\overline{\mathcal{C}}(gx,hy,K)=\sup_{p\in K\cdot gx}\langle
p,hy\rangle=\sup_{p\in K\cdot gx}\langle h^{-1}p,y\rangle=\sup_{p\in
h^{-1}Kg\cdot x}\langle p,y\rangle=\overline{\mathcal{C}}(x,y,h^{-1}Kg),$
as desired. Next, (b) follows from invariance of sorting to permutation. For
(c), we use (a) to obtain
$\overline{\Psi}_{i}(gx,hy)=\downarrow\\{K\in\pi_{0}(G)\mapsto\overline{\mathcal{C}}(gx,hy,K)\in\mathbb{R}\\}=\,\downarrow\\{K\in\pi_{0}(G)\mapsto\overline{\mathcal{C}}(x,y,h^{-1}Kg)\in\mathbb{R}\\},$
and the result follows from using (b) and observing that $K\mapsto h^{-1}Kg$
is in $S_{\pi_{0}(G)}$. ∎
We arrive to the desired construction:
###### Definition 4.
Suppose $G\leq\operatorname{O}(d)$ and let $V=\mathbb{R}^{d}$. Denote the
identity component of $G$ by $G_{0}$ and denote $[x]_{0}:=G_{0}\cdot x$ for
$x\in\mathbb{R}^{d}$.
* (a)
The component coorbit map given by ${\mathcal{C}}\colon V/G_{0}\times
V/G_{0}\times\pi_{0}(G)\to\mathbb{R}$ is the unique map that satisfies
$\overline{\mathcal{C}}(x,y,K)=\mathcal{C}([x]_{0},[y]_{0},K)$.
* (b)
For $i\in\\{1,\dots,|\pi_{0}(G)|\\}$, the coorbit map given by $\Psi_{i}\colon
V/G\times V/G\to\mathbb{R}$ is the unique map that satisfies
$\overline{\Psi}_{i}(x,z)=\Psi_{i}([x],[z])$ for all $x,z\in V$.
* (c)
Given templates $z_{1},\ldots,z_{n}\in V$ and sort indices
$p_{1},\ldots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$, the corresponding coorbit
filter bank $\Phi\colon V/G\to\mathbb{R}^{n}$ is defined by
$\Phi([x]):=\\{\Psi_{p_{i}}([z_{i}],[x])\\}_{i=1}^{n}.$
### 2.2 Symmetry and Scalar Homogeneity Properties
The following lemma shows how the component coorbit map behaves when its two
inputs in $\mathbb{R}^{d}$ are switched, and that the coorbit map is invariant
under this switch. It also establishes scalar homogeneity of these maps.
###### Lemma 5.
Suppose $G\leq\operatorname{O}(d)$. Then,
1. (a)
For any $x,y\in\mathbb{R}^{d}$, $r\geq 0$ and $K\in\pi_{0}(G)$, it holds that
$\overline{\mathcal{C}}(x,y,K)=\overline{\mathcal{C}}(y,x,K^{-1}),$
and
$\overline{\mathcal{C}}(rx,y,K)=\overline{\mathcal{C}}(x,ry,K)=r\cdot\overline{\mathcal{C}}(x,y,K).$
2. (b)
For $i\in\\{1,\dots,|\pi_{0}(G)|\\}$, it holds that
* (i)
$\overline{\Psi}_{i}(x,y)=\overline{\Psi}_{i}(y,x)$,
* (ii)
$\Psi_{i}([x],[y])=\Psi_{i}([y],[x])$,
for all $x,y\in\mathbb{R}^{d}$.
3. (c)
For $i\in\\{1,\dots,|\pi_{0}(G)|\\}$,
$\Psi_{i}([rx],[y])=\Psi_{i}([x],[ry])=r\cdot\Psi_{i}([x],[y])$.
###### Proof.
For (a), the first assertion follows from the following computation
$\overline{\mathcal{C}}(x,y,K)=\sup_{k\in K}\langle kx,y\rangle=\sup_{k\in
K^{-1}}\langle x,ky\rangle=\sup_{p\in K^{-1}y}\langle
p,x\rangle=\overline{\mathcal{C}}(y,x,K^{-1}),$
and the second assertion follows by a similar argument. The proof of (b) is
similar to that of Lemma 3(c); here, we note that $K\mapsto K^{-1}$ is in
$S_{\pi_{0}(G)}$. Lastly, (c) is immediate since sorting commutes with
nonnegative scaling. ∎
###### Remark 6.
Occasionally, we denote $[\cdot]$ and $\Psi_{i}$ by $[\cdot]_{G}$ and
$\Psi_{i}^{G}$, respectively, to emphasize the group in consideration.
### 2.3 Preliminary on Semialgebraic Sets and Groups
A basic semialgebraic set is any set of the form
$\\{x\in\mathbb{R}^{n}:p(x)\geq 0\\}$, where
$p\colon\mathbb{R}^{n}\to\mathbb{R}$ is a polynomial function. A semialgebraic
set is any set obtained from some combination of finite unions, finite
intersections, and complements of basic semialgebraic sets. We say a subgroup
of $\operatorname{GL}(d)$ is a semialgebraic group if it is semialgebraic as a
subset of $\mathbb{R}^{d\times d}$. We say a function
$\mathbb{R}^{s}\to\mathbb{R}^{t}$ is a semialgebraic function if its graph is
semialgebraic as a subset of $\mathbb{R}^{s+t}$.
###### Definition 7.
A first-order formula of the language of ordered fields with parameters in
$\mathbb{R}$ is a formula written with a finite number of conjunctions, disjunctions,
negations, and universal or existential quantifiers on variables, starting
from atomic formulas which are formulas of the kind $f(x_{1},\dots,x_{n})=0$
or $g(x_{1},\dots,x_{n})>0$, where $f$ and $g$ are polynomials with
coefficients in $\mathbb{R}$. The free variables of a formula are those
variables of the polynomials appearing in the formula, which are not
quantified.
###### Proposition 8 (Proposition 2.2.4 in [7]).
Let $\phi(x_{1},\dots,x_{n})$ be a first-order formula of the language of
ordered fields, with parameters in $\mathbb{R}$ and with free variables
$x_{1},\dots,x_{n}$. Then $\\{x\in\mathbb{R}^{n}:\phi(x)\\}$ is a
semialgebraic set.
By Proposition 2.9.10 in [7], every semialgebraic set is a finite union of
manifolds. As such, the dimension of a semialgebraic set is defined by the
maximum dimension of said manifolds.
The statements in the next proposition are proven in Appendix A of [4].
###### Proposition 9.
The following statements regarding semialgebraic sets and functions hold:
1. (a)
The family of semialgebraic sets is closed under projection, complement,
finite union and finite intersection.
2. (b)
The family of semialgebraic functions is closed under addition,
multiplication, division (when defined), composition and concatenation.
3. (c)
(Conservation of Dimension) If
$\pi\colon\mathbb{R}^{n+d}\mapsto\mathbb{R}^{n}$ is a coordinate projection
and $A$ is a semialgebraic subset of $\mathbb{R}^{n+d}$, then
$\dim(\pi(A))\leq\dim(A)\leq\dim(\pi(A))+\max_{x\in\pi(A)}\dim(\pi^{-1}(x)\cap
A).$ (1)
We remark that conservation of dimension is essential to many arguments in
this paper. The next proposition shows that, for subgroups of
$\operatorname{O}(d)$, semialgebraicity is equivalent to topological
closedness (equivalently, compactness).
###### Proposition 10 (Proposition 7 in [29]).
Suppose $G\leq\operatorname{O}(d)$. If the orbits of $G$ are closed, then they
are also the orbits of the topological closure $\overline{G}$ of $G$ in
$\operatorname{O}(d)$. Furthermore, the following are equivalent:
* (a)
$G$ is topologically closed.
* (b)
$G$ is algebraic.
* (c)
$G$ is semialgebraic.
### 2.4 Semialgebraicity of the Coorbit Map
The coorbit map enjoys the property of being semialgebraic, as the following
lemma shows.
###### Lemma 11.
Suppose $G\leq\operatorname{O}(d)$ is a semialgebraic subgroup. Then,
* (a)
Every component $K\in\pi_{0}(G)$ is compact and semialgebraic as a subset of
$\mathbb{R}^{d\times d}$.
* (b)
For any fixed $K\in\pi_{0}(G)$, the $K$-component coorbit map
$\overline{\mathcal{C}}(\cdot,\cdot,K)\colon\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}$
is semialgebraic.
* (c)
For any fixed $i\in\\{1,\dots,|\pi_{0}(G)|\\}$, the coorbit map
$\overline{\Psi}_{i}(\cdot,\cdot)\colon\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}$
is semialgebraic.
###### Proof.
By Proposition 10, $G$ is closed hence a compact Lie group with
$|\pi_{0}(G)|<\infty$. Denote the identity component of $G$ by $G_{0}$. It is
topologically compact and hence semialgebraic, again by Proposition 10. Since
$K=kG_{0}$ for any $k\in K$ and since multiplication in $G$ is semialgebraic
and homeomorphic, we also obtain that $K$ is semialgebraic and compact. This
proves (a). For (b), fix any $K\in\pi_{0}(G)$. Then, the graph of
$\overline{\mathcal{C}}(\cdot,\cdot,K)$ is given by
$\big{\\{}(x,z,r)\in(\mathbb{R}^{d})^{2}\times\mathbb{R}:(\forall k\in
K,r\geq\langle kx,z\rangle)\wedge(\forall\varepsilon\in\mathbb{R},\exists k\in
K,\varepsilon>0\implies r-\varepsilon<\langle kx,z\rangle)\big{\\}}.$
It follows from Proposition 8 that the graph is semialgebraic. For (c), fix
$i\in\\{1,\dots,|\pi_{0}(G)|\\}$. Then, the graph of
$\overline{\Psi}_{i}(\cdot,\cdot)$ is given by
$\displaystyle\bigcup_{K\in\pi_{0}(G)}$
$\displaystyle\big{\\{}(x,z,r)\in(\mathbb{R}^{d})^{2}\times\mathbb{R}:r=\overline{\mathcal{C}}(x,z,K)\
\ \wedge$ $\displaystyle(\exists I\subseteq\pi_{0}(G),|I|=i\wedge(\forall P\in
I,\overline{\mathcal{C}}(x,z,P)\geq r)\wedge(\forall P\in\pi_{0}(G)\setminus
I,\overline{\mathcal{C}}(x,z,P)\leq r))\big{\\}}$
where we note that the second line can be expressed in first-order logic as a
disjunction over all finitely many partitions of $\pi_{0}(G)$ into two sets at
least one of which has size $i$. ∎
By similar arguments, one may show that the quotient distance function is
semialgebraic, hence obtaining the following:
###### Proposition 12.
Suppose $G\leq\operatorname{O}(d)$ is compact. The quotient metric
$d([\cdot],[\cdot])\colon\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}$ is a
semialgebraic function.
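For a finite subgroup $G\leq\operatorname{O}(d)$, the quotient metric reduces to $d([x],[y])=\min_{g\in G}\|gx-y\|$. The following minimal Python sketch (ours, not from the source; matrices are represented as lists of rows) illustrates this reduction:

```python
import math

def apply(g, v):
    # Multiply the orthogonal matrix g (list of rows) by the vector v.
    return [sum(g[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def quotient_dist(x, y, group):
    # d([x],[y]) = min_{g in G} ||g x - y|| for a finite G <= O(d).
    return min(
        math.sqrt(sum((a - b) ** 2 for a, b in zip(apply(g, x), y)))
        for g in group
    )

# G = {I, -I} acting on R^2; x and -x lie on the same orbit.
G = [[[1, 0], [0, 1]], [[-1, 0], [0, -1]]]
print(quotient_dist([1.0, 0.0], [-1.0, 0.0], G))  # 0.0
```

One checks directly that symmetry and the triangle inequality hold because $G$ acts by isometries.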
### 2.5 Avoidance Notions
Fix compact $G\leq\operatorname{O}(d)$, $z_{1},\ldots,z_{n}\in\mathbb{R}^{d}$
and $p_{1},\dots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$, and consider the
corresponding coorbit filter bank
$\Phi\colon\mathbb{R}^{d}/G\to\mathbb{R}^{n}$ defined by
$\Phi([x]):=\\{\Psi_{p_{i}}([z_{i}],[x])\\}_{i=1}^{n}$. We say $\Phi$ is bi-
Lipschitz if there exists $\alpha\in(0,\infty]$ and $\beta\in[0,\infty)$ such
that the following inequality holds for all $[x]\neq[y]\in\mathbb{R}^{d}/G$
$\alpha\leq\frac{\|\Phi([x])-\Phi([y])\|}{d([x],[y])}\leq\beta.$
This is equivalent to the statement that the closure of the image of the map
$([x],[y])\mapsto\frac{\Phi([x])-\Phi([y])}{d([x],[y])}$ over $[x]\neq[y]$ is
bounded ($\Phi$ is upper Lipschitz) and avoids the zero vector ($\Phi$ is
lower Lipschitz). When we consider the image itself but not its closure, then
avoidance of the zero vector is equivalent to injectivity of $\Phi$. In Lemma
66, it becomes highly relevant to consider avoidance of not just the zero
vector but also any fixed subspace of the codomain. This motivates the
following definitions:
###### Definition 13.
Fix compact $G\leq\operatorname{O}(d)$, $z_{1},\ldots,z_{n}\in\mathbb{R}^{d}$
and $p_{1},\dots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$, and consider the
corresponding coorbit filter bank
$\Phi\colon\mathbb{R}^{d}/G\to\mathbb{R}^{n}$ defined by
$\Phi([x]):=\\{\Psi_{p_{i}}([z_{i}],[x])\\}_{i=1}^{n}$. Suppose
$K\in\operatorname{Gr}(n,k)$, that is, $K$ is a $k$-dimensional linear
subspace of $\mathbb{R}^{n}$.
* (a)
Let $\Delta:=\\{(x,y)\in\mathbb{R}^{d}\times\mathbb{R}^{d}:[x]=[y]\\}$. The
difference quotient
$Q\colon(\mathbb{R}^{d}\times\mathbb{R}^{d})\setminus\Delta\to\mathbb{R}^{n}$
with respect to $\Phi$ is defined by
$Q(x,y):=\left\\{\frac{\Psi_{p_{i}}([x],[z_{i}])-\Psi_{p_{i}}([y],[z_{i}])}{d([x],[y])}\right\\}_{i=1}^{n}$
* (b)
We say $\Phi$ weakly avoids $K$ if $\operatorname{im}(Q)\cap K=\varnothing$.
* (c)
We say $\Phi$ locally avoids $K$ at $x\in\mathbb{R}^{d}$ if for all
$x_{n},y_{n}\to x$ with $[x_{n}]\neq[y_{n}]$, we have
$\lim_{n\to\infty}Q(x_{n},y_{n})\notin K,$
whenever the limit exists.
* (d)
We say $\Phi$ strongly avoids $K$ if $\overline{\operatorname{im}(Q)}\cap
K=\varnothing$.
* (e)
We say $\Phi$ is $\varepsilon$-locally lower Lipschitz at $x$ if
$\inf_{\begin{subarray}{c}x_{n},y_{n}\to x\\\
[x_{n}]\neq[y_{n}]\end{subarray}}\
\liminf_{n\to\infty}\|Q(x_{n},y_{n})\|\geq\varepsilon.$
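To ground these definitions, consider a finite group, where each component of $G$ is a singleton $\\{g\\}$ and $\Psi_{p}([z],[x])$ is the $p$-th largest value of $\langle gx,z\rangle$. The following Python sketch (ours; the helper names are not from the source) computes the difference quotient $Q$ of (a):

```python
import math

def apply(g, v):
    return [sum(g[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def coorbit(z, x, group, i):
    # Psi_i([z],[x]): i-th largest (1-indexed) of <g x, z> over g in a finite G.
    vals = sorted((sum(a * b for a, b in zip(apply(g, x), z)) for g in group),
                  reverse=True)
    return vals[i - 1]

def quotient_dist(x, y, group):
    return min(math.sqrt(sum((a - b) ** 2 for a, b in zip(apply(g, x), y)))
               for g in group)

def difference_quotient(x, y, zs, ps, group):
    # Q(x, y) from (a), defined only when [x] != [y].
    d = quotient_dist(x, y, group)
    return [(coorbit(z, x, group, p) - coorbit(z, y, group, p)) / d
            for z, p in zip(zs, ps)]

G = [[[1, 0], [0, 1]], [[-1, 0], [0, -1]]]  # G = {I, -I} acting on R^2
zs, ps = [[1.0, 0.0], [0.0, 1.0]], [1, 1]   # two templates, top sort index
print(difference_quotient([2.0, 0.0], [0.0, 1.0], zs, ps, G))
```

Scaling both arguments by the same positive scalar leaves the output unchanged, consistent with the scale invariance of $Q$ used later in the paper.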
Then, the theoretical problem of interest initially posed as Problem 19 in
[10] is reposed as follows:
###### Problem 14.
* (a)
Is every injective coorbit filter bank
$\Phi\colon\mathbb{R}^{d}/G\to\mathbb{R}^{n}$ bi-Lipschitz?
* (b)
More generally, does every coorbit filter bank
$\Phi\colon\mathbb{R}^{d}/G\to\mathbb{R}^{n}$ that weakly avoids
$K\in\operatorname{Gr}(n,k)$ also strongly avoid $K$?
We observe that the notions of avoidance are semialgebraic as summarized in
the following result whose proof is postponed to the end of the section:
###### Lemma 15.
Suppose $G\leq\operatorname{O}(d)$ is compact and fix sort indices
$p_{1},\dots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$. For templates
$z_{1},\ldots,z_{n}\in\mathbb{R}^{d}$, denote the corresponding coorbit filter
bank by $\Phi\colon\mathbb{R}^{d}/G\to\mathbb{R}^{n}$ and the corresponding
difference quotient by $Q$. Consider the sets
$B:=\big{\\{}\big{(}\\{z_{i}\\}_{i=1}^{n},V\big{)}\in(\mathbb{R}^{d})^{n}\times\mathbb{R}^{n\times
k}:\Phi\text{ fails to weakly avoid }\operatorname{im}(V)\big{\\}}$
and
$D:=\big{\\{}\big{(}\\{z_{i}\\}_{i=1}^{n},V\big{)}\in(\mathbb{R}^{d})^{n}\times\mathbb{R}^{n\times
k}:\Phi\text{ fails to strongly avoid }\operatorname{im}(V)\big{\\}}$
and for any semialgebraic subset $S$ of $\mathbb{R}^{d}$, consider the set
$C_{S}:=\big{\\{}\big{(}\\{z_{i}\\}_{i=1}^{n},V\big{)}\in(\mathbb{R}^{d})^{n}\times\mathbb{R}^{n\times
k}:\Phi\text{ fails to locally avoid }\operatorname{im}(V)\text{ at some
}x\in S\big{\\}}.$
Then, $B$, $C_{S}$ and $D$ are semialgebraic subsets of their respective
ambient spaces.
Ideally, with sort indices arbitrary but fixed, we aim for a bound of the
following form:
$\dim(D)\leq nd+nk-1-(n-k-2c)$
where $c\leq d$ is a constant depending on $G$ (more precisely, the
cohomogeneity of $G$ defined in Section 3.1). In other words, $n-k-2c+1$ is a
lower bound for the codimension of $D$ in its ambient space. The bound has
two crucial implications. First, it implies that it suffices to take any
$n\geq k+2c$ so that $n$ generic templates form a coorbit filter bank that
strongly avoids the image of a generic $V\in\mathbb{R}^{n\times k}$. Second,
the bound on the codimension is linear in $n$ and $k$. This turns out to be
crucial for an inductive step in the proof of Theorem 69. Meditating on these
two observations, we arrive at the following definitions:
###### Definition 16.
Suppose $G\leq\operatorname{O}(d)$ is compact with identity component $G_{0}$.
For $z_{1},\ldots,z_{m}\in\mathbb{R}^{d}$ and sort indices
$p_{1},\ldots,p_{m}\in\\{1,\ldots,|\pi_{0}(G)|\\}$, denote the corresponding
coorbit filter bank by $\Phi$. For $V\in\mathbb{R}^{m\times k}$, consider the
semialgebraic set
$N_{V}^{\\{p_{i}\\}_{i=1}^{m}}:=\\{\\{z_{i}\\}_{i=1}^{m}\in(\mathbb{R}^{d})^{m}:\Phi\text{
fails to strongly avoid }\operatorname{im}(V)\\}.$
For $k\in\mathbb{Z}_{\geq 0}$ and $m\in\mathbb{N}$, put
$v_{k}^{m}(G):=\min_{W\in\mathbb{R}^{m\times
k}}\min_{\\{p_{i}\\}_{i=1}^{m}\in\\{1,\dots,|\pi_{0}(G)|\\}^{m}}md-\dim(N_{W}^{\\{p_{i}\\}_{i=1}^{m}}),$
and
$n_{k}(G):=\min\\{n\in\mathbb{N}:\min_{m\geq n}v_{k}^{m}(G)>0\\}.$
We define the linear deficiency threshold by
$n^{\prime}(G):=\min\\{n\in\mathbb{N}:v_{k}^{m}(G)>m-k-n\ \ \forall
m\in\mathbb{N},k\geq 0\\},$
where we take $\min\varnothing:=\infty$.
###### Remark 17.
It holds that $k+1\leq n_{k}(G)\leq k+n^{\prime}(G)$.
With the definitions above, we pose the following theoretical problem of
interest:
###### Problem 18.
For a compact $G\leq\operatorname{O}(d)$ with cohomogeneity $c$, does it hold
that $n^{\prime}(G)\leq 2c$?
Recall that the cohomogeneity of $G$ is the minimal codimension among the
tangent spaces of its orbits (see Section 3.1 for a more precise definition).
The rest of this section aims to prove Lemma 15. First, we unpack basic
properties/characterizations of avoidance notions:
###### Lemma 19.
Fix compact $G\leq\operatorname{O}(d)$, $z_{1},\ldots,z_{n}\in\mathbb{R}^{d}$
and $p_{1},\dots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$, and consider the
corresponding coorbit filter bank
$\Phi\colon\mathbb{R}^{d}/G\to\mathbb{R}^{n}$ and its corresponding difference
quotient $Q$. Then, the following statements hold
1. (a)
For any $y_{1},y_{2}\in\mathbb{R}^{d}$ with $[y_{1}]\neq[y_{2}]$ and for any
nonzero $x\in\mathbb{R}^{d}$,
$Q(y_{1},y_{2})=Q\left(\frac{y_{1}}{\|x\|},\frac{y_{2}}{\|x\|}\right).$ (2)
2. (b)
For any nonzero $x\in\mathbb{R}^{d}$,
$\Phi$ locally avoids $K$ at $x$ $\Longleftrightarrow$ $\Phi$ locally avoids
$K$ at $\frac{x}{\|x\|}$. (3)
3. (c)
Let $\Omega\subseteq\mathbb{R}^{d}$ be a dense $G$-invariant subset such that
$c\cdot\Omega=\Omega$ for all $c>0$. Then,
$\displaystyle\Phi$ $\displaystyle\text{ strongly avoids
$K$}\iff\overline{\operatorname{im}(Q|_{(\Omega\times\Omega)\setminus\Delta})}\cap
K=\varnothing$ (4) $\displaystyle\iff\big{(}\Phi\text{ weakly avoids
$K$}\big{)}\wedge(\forall\,x\in\mathbb{S}^{d-1},\forall x_{n},y_{n}\to x,$
$\displaystyle\qquad\qquad\qquad(\forall
n\in\mathbb{N},[x_{n}]\neq[y_{n}]\wedge
x_{n},y_{n}\in\Omega)\implies\lim_{n\to\infty}Q(x_{n},y_{n})\notin K),$
where we take $\lim_{n\to\infty}Q(x_{n},y_{n})\notin K$ to be true if the
limit does not exist.
###### Proof.
First, (a) follows from scalar homogeneity of the distance and coorbit maps
(5(c)). Next, (b) follows from (a). For (c), the first line follows from
continuity of $Q$ and denseness of $\Omega$. The forward implication is
immediate by definition of strong avoidance. For the reverse implication,
suppose $\overline{\operatorname{im}(Q|_{(\Omega\times\Omega)\setminus\Delta})}\cap
K\neq\varnothing$. Then, there exist sequences $x_{n},y_{n}\in\Omega$ such
that $[x_{n}]\neq[y_{n}]$ and $\lim_{n\to\infty}Q(x_{n},y_{n})\in K$. Since
$\max\\{\|x_{n}\|,\|y_{n}\|\\}\neq 0$, we may pass to a subsequence and assume
$\max\\{\|x_{n}\|,\|y_{n}\|\\}=\|x_{n}\|>0$ for all $n$. By (2), we get
$Q(x_{n},y_{n})=Q\left(\frac{x_{n}}{\|x_{n}\|},\frac{y_{n}}{\|x_{n}\|}\right).$
Since $\|x_{n}\|\cdot\Omega=\Omega$, we get that
$u_{n}:=\frac{x_{n}}{\|x_{n}\|}\in\Omega$ and
$v_{n}:=\frac{y_{n}}{\|x_{n}\|}\in\Omega$. By taking further subsequences, we
may assume $u_{n}\to x\in\mathbb{S}^{d-1}$ and $v_{n}\to y$ with $\|y\|\leq
1$. If $[x]\neq[y]$, then by continuity of $Q$ away from $\Delta$, we get
$Q([x],[y])\in K$ so that weak avoidance fails. On the other hand, if
$[x]=[y]$, we may act on $u_{n}$ and $v_{n}$ by elements of $G$ so that
$x=y\in\mathbb{S}^{d-1}$ while keeping $u_{n},v_{n}\in\Omega$, by
$G$-invariance of $\Omega$. In such case, the right-hand side of the wedge in
(4) fails. ∎
###### Proof of Lemma 15.
Consider a semialgebraic lift of $B$
$\displaystyle L:=\big{\\{}\big{(}\\{z_{i}\\}_{i=1}^{n},V,p,x,y\big{)}$
$\displaystyle\in(\mathbb{R}^{d})^{n}\times\mathbb{R}^{n\times
k}\times\mathbb{R}^{k}\times(\mathbb{R}^{d})^{2}:$
$\displaystyle[x]\neq[y],\ \|x\|^{2}+\|y\|^{2}=1,$
$\displaystyle\Psi_{p_{i}}([z_{i}],[x])-\Psi_{p_{i}}([z_{i}],[y])=d([x],[y])\cdot(Vp)_{i}\
\forall i\in\\{1,\ldots,n\\}\big{\\}}$
By Lemma 11 and Propositions 8 and 12, $L$ is semialgebraic. Since $B$ is the
projection of $L$ onto its $(\\{z_{i}\\}_{i=1}^{n},V)$ coordinate, we obtain
that $B$ is semialgebraic by Proposition 9(a).
Next, consider a semialgebraic lift of $C_{S}$
$\displaystyle W_{S}$
$\displaystyle:=\big{\\{}(\\{z_{i}\\}_{i=1}^{n},V,x)\in(\mathbb{R}^{d})^{n}\times\mathbb{R}^{n\times
k}\times S:\exists
p\in\mathbb{R}^{k},\forall\varepsilon\in\mathbb{R}_{>0},\exists
x_{0},y_{0}\in\mathbb{R}^{d},$
$\displaystyle\qquad\qquad[x_{0}]\neq[y_{0}]\wedge|Q(x_{0},y_{0})-Vp|<\varepsilon\wedge[x_{0}],[y_{0}]\in
B_{[x]}(\varepsilon)\big{\\}}$
Again, $W_{S}$ is semialgebraic and since $C_{S}$ is a projection of $W_{S}$,
we obtain that $C_{S}$ is semialgebraic. Lastly, by (4) applied to
$\Omega=\mathbb{R}^{d}$ and $K=\operatorname{im}(V)$, we obtain that $D$ is a
union of $B$ and $C_{\mathbb{R}^{d}}$. ∎
## 3 Geometric Analysis of Coorbit Maps
### 3.1 Preliminary on Stabilizers and Principal Points
In this section, we first recall an orbit-stabilizer type theorem and results
on the poset structure of stabilizers. Following that, we recall that an orbit
space is in fact a geodesic space, and we introduce two layers of
principality.
###### Definition 20.
Let $G\leq\operatorname{O}(d)$ be compact with Lie algebra
$\mathfrak{g}\subseteq\mathbb{R}^{d\times d}$. For nonzero
$x\in\mathbb{R}^{d}$, the stabilizer group of $x$ is defined by
$G_{x}:=\\{g\in G:g\cdot x=x\\}.$
The tangent space at $x$ is defined by
$T_{x}:=\mathfrak{g}\cdot x$
and the normal space at $x$ is denoted by $N_{x}:=T_{x}^{\perp}$.
We have the following (presumably folklore) result that $T_{x}$ and $N_{x}$
are $G_{x}$-invariant. We were unable to find a reference, so we provide a
proof:
###### Proposition 21.
Suppose $G\leq\operatorname{O}(d)$ is compact and fix $x\in\mathbb{R}^{d}$.
Then, $T_{x}\oplus N_{x}$ is a $G_{x}$-invariant orthogonal decomposition.
###### Proof.
Since $G_{x}\leq\operatorname{O}(d)$, the orthogonal complement of a
$G_{x}$-invariant subspace is $G_{x}$-invariant. As such, we only need to show
$T_{x}$ is $G_{x}$-invariant. Let $h\in G_{x}$ and $t\in T_{x}$. Then, by
definition of $T_{x}$ and continuity of the action of $G$ on $\mathbb{R}^{d}$,
there exist a sequence $g_{n}\to\operatorname{Id}$ in $G$ and scalars
$r_{n}\to 0$ such that
$\lim_{n\to\infty}\left(\frac{g_{n}-\operatorname{Id}}{r_{n}}\cdot
x\right)=t\in T_{x}.$
Now, let $g_{n}^{\prime}:=hg_{n}h^{-1}$ so that $hg_{n}=g_{n}^{\prime}h$ and
$g_{n}^{\prime}\to\operatorname{Id}$. Then, by linearity of $h$ and since
$h\in G_{x}$, we obtain
$ht=h\cdot\lim_{n\to\infty}\left(\frac{g_{n}-\operatorname{Id}}{r_{n}}\cdot
x\right)=\lim_{n\to\infty}\left(\frac{g_{n}^{\prime}-\operatorname{Id}}{r_{n}}\cdot(hx)\right)\in
T_{hx}=T_{x}$
as desired. ∎
In general, for a compact group $G\leq\operatorname{O}(d)$, we denote its
identity component by $G_{0}$ and denote $[x]_{0}:=G_{0}\cdot x$ for
$x\in\mathbb{R}^{d}$. In many instances, we abuse notation and identify
$[Kx]_{0}$ with $[kx]_{0}$ for any $K\in\pi_{0}(G)$ and any $k\in K$. For
$x\in\mathbb{R}^{d}$, we denote the set of connected components of $[x]$ by
$\pi_{0}([x]):=\\{P:P\text{ is a connected component of }[x]\\}$
The following proposition is an orbit-stabilizer type theorem. The first item
is straightforward. For the second item, see for example the proof of Corollary
3.2.3 in [22].
###### Proposition 22.
Suppose $G\leq\operatorname{O}(d)$ is compact. The following hold:
* (a)
There exists a unique group structure on $\pi_{0}(G)$ that makes the canonical
projection $\pi_{0}^{G}\colon G\to\pi_{0}(G)$ into a surjective homomorphism.
For $x\in\mathbb{R}^{d}$, the action of $G$ on $[x]$ induces an action of
$\pi_{0}(G)$ on $\pi_{0}([x])$ with stabilizer $\pi_{0}^{G}(G_{x})$.
* (b)
For $x\in\mathbb{R}^{d}$, $G\cdot x$ is a compact $C^{\infty}$-submanifold of
$\mathbb{R}^{d}$ whose tangent space at $x$ is given by $T_{x}$. Moreover, the
quotient map $G\to G/G_{x}$, the canonical diffeomorphism $G/G_{x}\cong
G\cdot x$, and the induced bijection
$\pi_{0}(G)/\pi_{0}^{G}(G_{x})\cong\pi_{0}([x])$ fit into a commutative
diagram with the projections onto connected components.
In fact, conjugacy classes of stabilizers form a partially ordered set. For
$H\leq G$, denote by $(H)$ the conjugacy class of $H$ in $G$. Given
$x\in\mathbb{R}^{d}$ and $g\in G$, we have $G_{gx}=gG_{x}g^{-1}$. Hence,
$(G_{x})$ depends only on $[x]$. Let $G_{\leq}$ be the set of conjugacy
classes of stabilizer subgroups of $G$, that is $(H)\in G_{\leq}$ if and only
if $H=G_{x}$ for some $x\in\mathbb{R}^{d}$. There is a partial order on
$G_{\leq}$ given by $(H_{1})\leq(H_{2})$ if and only if $H_{1}\leq
gH_{2}g^{-1}$ for some $g\in G$. We call $G_{\leq}$ the poset of conjugacy
classes of stabilizers in $G$.
###### Proposition 23 (Theorem 1.32 in [26]).
For compact $G\leq\operatorname{O}(d)$, $G_{\leq}$ is finite and has a unique
minimum $(G_{P})$.
We call $(G_{P})$ the principal isotropy class. For $H\in(G_{P})$, we call $H$
a principal isotropy group. We define the principal component size by
$C_{P}:=|\pi_{0}^{G}(H)|$ for any $H\in(G_{P})$; this does not depend on the
choice of $H$. This allows for defining principal points as those which have
the “most” trivial stabilizers:
###### Definition 24.
For compact $G\leq\operatorname{O}(d)$, the set of principal points is defined
by
$P(G):=\\{x\in\mathbb{R}^{d}:G_{x}\in(G_{P})\\}$
The set of $\pi_{0}$-principal points is defined by
$P_{\pi_{0}}(G):=\\{x\in\mathbb{R}^{d}:|\pi_{0}^{G}(G_{x})|=C_{P}\\}$
Since all principal orbits share the same maximal (submanifold) dimension
among all orbits, we define the cohomogeneity of $G$ as the codimension of
$[x]$ for any $x\in P(G)$. Now, we recall that $\mathbb{R}^{d}/G$ is a
geodesic space:
###### Definition 25.
Given a metric space $(M,d)$ and $L>0$, we say a curve $\gamma\colon[0,L]\to
M$ is a minimal geodesic from $\gamma(0)$ to $\gamma(L)$ if
$d(\gamma(s),\gamma(t))=|s-t|$ for all $s,t\in[0,L]$, and given $x,y\in M$ we
let $C(x,y)$ denote the set of all minimal geodesics from $x$ to $y$. We say
$M$ is a geodesic space if $C(x,y)\neq\varnothing$ for every $x,y\in M$.
###### Proposition 26 (Lemma 23 in [29]).
Take $G\leq\operatorname{O}(d)$ with closed orbits. For each
$x,y\in\mathbb{R}^{d}$, there exists a bijection
$\frac{\arg\min_{q\in[y]}\|q-x\|}{G_{x}}\longrightarrow C([x],[y]).$
induced by projecting straight lines from $x$ to $\arg\min_{q\in[y]}\|q-x\|$
into the orbit space. In particular, $\mathbb{R}^{d}/G$ is a geodesic space.
Furthermore, we have the following proposition regarding principal points:
###### Proposition 27.
Suppose $G\leq\operatorname{O}(d)$ is compact and denote the principal stratum
in $\mathbb{R}^{d}/G$ by $[P(G)]:=\\{[x]\in\mathbb{R}^{d}/G:x\in P(G)\\}$.
Then, each of the following holds:
* (a)
$P(G)\subseteq P_{\pi_{0}}(G)$ are $G$-invariant open dense semialgebraic
subsets of $\mathbb{R}^{d}$.
* (b)
$[P(G)]$ is a geodesic space and an open dense connected manifold in
$\mathbb{R}^{d}/G$. It admits a unique Riemannian structure whose geodesic
metric is the quotient metric and such that the map
$[\cdot]\big{|}_{P(G)}\colon P(G)\to[P(G)]$ is a Riemannian submersion.
###### Proof.
For (a), $G$-invariance and the inclusion are straightforward. Openness and
denseness of $P(G)$ follows from Theorem 3.82 in [2]. Then, denseness of
$P_{\pi_{0}}(G)$ follows from $P(G)\subseteq P_{\pi_{0}}(G)$, and openness of
$P_{\pi_{0}}(G)$ follows from $(G_{z})\leq(G_{x})$ for $z$ in a neighborhood
of $x$ (see Theorem 1.30 in [26] or Proposition 109). Lastly, semialgebraicity
follows by expressing the sets in first-order logic which is straightforward.
For (b), openness, denseness and connectedness of $[P(G)]$ follow from Theorem
3.82 in [2]. It being a geodesic space follows from Lemma 3.5 in [1]. The rest
of the statement regarding the unique Riemannian structure follows from
Exercise 3.81 in [2]. ∎
### 3.2 Preliminary on Max Filtering
Coorbit filter banks generalize max filter banks [10, 29] and we shall see
later in Section 9 that with enough templates, the analysis of max filtering
is very informative to the analysis of coorbit filter banks. In this section,
we recall properties of max filtering which we shall find useful throughout
the paper.
###### Definition 28.
Suppose $G\leq\operatorname{O}(d)$.
* (a)
The max filtering map
$\langle\langle\cdot,\cdot\rangle\rangle\colon\mathbb{R}^{d}/G\times\mathbb{R}^{d}/G\to\mathbb{R}$
is defined by
$\langle\langle[x],[y]\rangle\rangle:=\sup_{\begin{subarray}{c}p\in[x]\end{subarray}}\langle
p,y\rangle.$
* (b)
Given templates $z_{1},\ldots,z_{n}\in\mathbb{R}^{d}$, the corresponding max
filter bank $\Phi\colon\mathbb{R}^{d}/G\to\mathbb{R}^{n}$ is defined by
$\Phi([x]):=\\{\langle\langle[z_{i}],[x]\rangle\rangle\\}_{i=1}^{n}.$
Occasionally, we denote $\langle\langle[\cdot],[\cdot]\rangle\rangle$ by
$\langle\langle[\cdot],[\cdot]\rangle\rangle_{G}$ to emphasize the group in
consideration.
By direct inspection, the following equation holds and we will make use of it
frequently:
$\mathcal{C}([x]_{0},[y]_{0},K)=\langle\langle[Kx]_{0},[y]_{0}\rangle\rangle_{G_{0}}$
(5)
Recall that every convex function $f\colon\mathbb{R}^{d}\to\mathbb{R}$ has a
subdifferential $\partial f\colon\mathbb{R}^{d}\to 2^{\mathbb{R}^{d}}$ defined
by
$\partial f(x):=\big{\\{}u\in\mathbb{R}^{d}:f(x+h)\geq f(x)+\langle
h,u\rangle\ \forall h\in\mathbb{R}^{d}\big{\\}}.$
Note that $f$ is differentiable at $x$ if and only if $\partial f(x)$ is a
singleton, in which case it agrees with the gradient. The following lemma
summarizes important properties of max filtering:
###### Proposition 29 (Lemma 2 and Theorem 27 in [10]).
Suppose $G\leq\operatorname{O}(d)$ is compact and let $x,y\in\mathbb{R}^{d}$.
Then, each of the following holds:
1. (a)
$d([x],[y])^{2}=\|x\|^{2}-2\langle\langle[x],[y]\rangle\rangle+\|y\|^{2}$.
2. (b)
$\langle\langle\cdot,[y]\rangle\rangle\colon\mathbb{R}^{d}/G\to\mathbb{R}$ is
$\|y\|$-Lipschitz.
3. (c)
$\langle\langle[\cdot],[y]\rangle\rangle\colon\mathbb{R}^{d}\to\mathbb{R}$ is
convex.
4. (d)
$\partial\langle\langle[\cdot],[y]\rangle\rangle(x)=\operatorname{conv}\\{q\in\mathbb{R}^{d}:q\in[y]\text{
and }\langle x,q\rangle=\langle\langle[x],[y]\rangle\rangle\\}$
where $\operatorname{conv}$ denotes the convex hull operator.
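Item (a) follows by expanding $\|gx-y\|^{2}=\|x\|^{2}-2\langle gx,y\rangle+\|y\|^{2}$ and optimizing over $g$. For a finite group, the identity can be checked numerically; the sketch below (ours, not from [10]) uses the sign-flip group on $\mathbb{R}^{2}$:

```python
import math

def apply(g, v):
    return [sum(g[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def max_filter(x, y, group):
    # <<[x],[y]>> = max_{g in G} <g x, y> for a finite G <= O(d).
    return max(sum(a * b for a, b in zip(apply(g, x), y)) for g in group)

def quotient_dist(x, y, group):
    return min(math.sqrt(sum((a - b) ** 2 for a, b in zip(apply(g, x), y)))
               for g in group)

# G = {diag(s1, s2) : s1, s2 in {+1, -1}}, the sign-flip group on R^2.
G = [[[s1, 0], [0, s2]] for s1 in (1, -1) for s2 in (1, -1)]
x, y = [1.0, 2.0], [3.0, -1.0]
lhs = quotient_dist(x, y, G) ** 2
rhs = sum(c * c for c in x) - 2 * max_filter(x, y, G) + sum(c * c for c in y)
print(lhs, rhs)  # both equal 5.0
```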
The following proposition forms an essential stepping stone towards proving
that the coorbit map is Lipschitz continuous (Theorem 31). Its proof follows
immediately from Proposition 29(b) and (5).
###### Proposition 30.
Suppose $G\leq\operatorname{O}(d)$ and $K\in\pi_{0}(G)$. Then, for
$z\in\mathbb{R}^{d}$,
$\mathcal{C}([z]_{0},\cdot,K)\colon\mathbb{R}^{d}/G\to\mathbb{R}$ is
$\|z\|$-Lipschitz.
### 3.3 Realizing Group Components and Continuity of Coorbit Maps
The goal of this section is to show that the coorbit map is continuous:
###### Theorem 31.
Suppose $G\leq\operatorname{O}(d)$ is compact and fix
$i\in\\{1,\dots,|\pi_{0}(G)|\\}$. Then, the following statements hold:
* (a)
$\Psi_{i}([z],\cdot)\colon\mathbb{R}^{d}/G\to\mathbb{R}$ is $\|z\|$-Lipschitz
for $z\in\mathbb{R}^{d}$.
* (b)
$\Psi_{i}(\cdot,\cdot)\colon\mathbb{R}^{d}/G\times\mathbb{R}^{d}/G\to\mathbb{R}$
is locally Lipschitz at every
$([z],[x])\in\mathbb{R}^{d}/G\times\mathbb{R}^{d}/G$ with local Lipschitz
constant $2\sqrt{\|z\|^{2}+\|x\|^{2}}$. In particular, $\Psi_{i}$ is
continuous.
In order to do so, we shall analyze the coorbit map from the lens of realizing
group components introduced below. We mimic the approach and notation of
Section 2.2.1 in [5] which only treated the case of finite groups.
###### Definition 32.
Fix a compact group $G\leq\operatorname{O}(d)$. For a sort index
$i\in\\{1,\dots,|\pi_{0}(G)|\\}$ and $x,z\in\mathbb{R}^{d}$, we denote the set
of realizing group components of $z$ with respect to $x$ by
$L^{i}(z,x):=\\{K\in\pi_{0}(G):\mathcal{C}([z]_{0},[x]_{0},K)=\Psi_{i}([z],[x])\\}$
We also define the auxiliary sets
$L^{i}_{>}(z,x):=\\{K\in\pi_{0}(G):\mathcal{C}([z]_{0},[x]_{0},K)>\Psi_{i}([z],[x])\\}$
$L^{i}_{<}(z,x):=\\{K\in\pi_{0}(G):\mathcal{C}([z]_{0},[x]_{0},K)<\Psi_{i}([z],[x])\\}$
For $z,x\neq 0$ in $\mathbb{R}^{d}$, define the separation scale by
$\Delta^{i}(z,x):=\frac{1}{\|z\|}\begin{cases}\min_{K\notin
L^{i}(z,x)}\big{|}\,\mathcal{C}([z]_{0},[x]_{0},K)-\Psi_{i}([z],[x])\,\big{|},&\text{if
}L^{i}(z,x)\neq\pi_{0}(G)\\\ \|x\|,&\text{if
}L^{i}(z,x)=\pi_{0}(G)\end{cases}$ (6)
For $z,x\in\mathbb{R}^{d}$, define $\Delta^{i}(0,x)=\Delta^{i}(z,0)=\infty$.
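For a finite group, the sets $L^{i}(z,x)$ and the separation scale $\Delta^{i}(z,x)$ admit a direct computation, since each component contributes the single value $\langle gx,z\rangle$. A minimal Python sketch (ours; a numerical tolerance stands in for exact equality):

```python
import math

def apply(g, v):
    return [sum(g[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def realizing_components(z, x, group, i, tol=1e-9):
    # L^i(z, x) for finite G: the g whose value <g x, z> equals Psi_i([z],[x]).
    vals = [sum(a * b for a, b in zip(apply(g, x), z)) for g in group]
    psi = sorted(vals, reverse=True)[i - 1]
    return [g for g, v in zip(group, vals) if abs(v - psi) < tol]

def separation_scale(z, x, group, i, tol=1e-9):
    # Delta^i(z, x) from (6), assuming x, z != 0.
    vals = [sum(a * b for a, b in zip(apply(g, x), z)) for g in group]
    psi = sorted(vals, reverse=True)[i - 1]
    gaps = [abs(v - psi) for v in vals if abs(v - psi) >= tol]
    nz = math.sqrt(sum(c * c for c in z))
    if not gaps:  # L^i(z, x) = pi_0(G)
        return math.sqrt(sum(c * c for c in x)) / nz
    return min(gaps) / nz

G = [[[1, 0], [0, 1]], [[-1, 0], [0, -1]]]  # {I, -I} on R^2
z, x = [1.0, 0.0], [2.0, 0.0]
print(len(realizing_components(z, x, G, 1)), separation_scale(z, x, G, 1))
```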
The usefulness of the separation scale will be highlighted in the statement
and proof of Lemma 33. We sometimes denote $L^{i}(z,x)$ by $L^{i}_{G}(z,x)$ to
emphasize the group in consideration. Observe that $\Delta^{i}(z,x)>0$ for all
$x,z\in\mathbb{R}^{d}$. Moreover, by direct inspection or by an argument
similar to Lemma 2.4 in [5], we have that
$|L^{i}_{>}(z,x)|\leq i-1\text{\ \ and \ \
}|L^{i}_{<}(z,x)|\leq|\pi_{0}(G)|-i.$ (7)
Now, we show that small perturbations of $x$ can only shrink $L^{i}(z,x)$. We
mimic the proof of Lemma 2.5 in [5] with a few changes that adapt to our
context.
###### Lemma 33.
Fix a compact group $G\leq\operatorname{O}(d)$ and a sort index
$i\in\\{1,\dots,|\pi_{0}(G)|\\}$. For $x,y,z\in\mathbb{R}^{d}$ such that
$\|y\|<\frac{1}{2}\Delta^{i}(z,x)$, it holds that $L^{i}(z,x+y)\subseteq
L^{i}(z,x)$.
###### Proof.
The result is trivial whenever $L^{i}(z,x)=\pi_{0}(G)$, so assume
$L^{i}(z,x)\neq\pi_{0}(G)$ so that in particular $x,z\neq 0$, and
$\Delta^{i}(z,x)$ is given by the first case of (6).
We claim that if $K\in L^{i}_{<}(z,x)$ and $H\in L^{i}_{>}(z,x)\cup
L^{i}(z,x)$, then
$\mathcal{C}([z]_{0},[x+y]_{0},H)>\mathcal{C}([z]_{0},[x+y]_{0},K)$. By
symmetry in the proof, the claim remains true when all inequalities are
flipped. To the end of proving the claim, assume $H\in L^{i}_{>}(z,x)\cup
L^{i}(z,x)$ and $K\in L^{i}_{<}(z,x)$. By Proposition 30, we have
$|\mathcal{C}([z]_{0},[x+y]_{0},H)-\mathcal{C}([z]_{0},[x]_{0},H)|\leq\|z\|\cdot
d([x+y],[x])\leq\|z\|\cdot\|y\|,$
and similarly for $K$. Hence, by assumption and by
$\|y\|<\frac{1}{2}\Delta^{i}(z,x)$, we get
$\displaystyle\mathcal{C}([z]_{0},[x+y]_{0},H)-\mathcal{C}([z]_{0},[x+y]_{0},K)$
$\displaystyle\geq\mathcal{C}([z]_{0},[x]_{0},H)-\mathcal{C}([z]_{0},[x]_{0},K)-2\|y\|\cdot\|z\|$
$\displaystyle\geq\Psi_{i}([z],[x])-\mathcal{C}([z]_{0},[x]_{0},K)-2\|y\|\cdot\|z\|$
$\displaystyle=|\Psi_{i}([z],[x])-\mathcal{C}([z]_{0},[x]_{0},K)|-2\|y\|\cdot\|z\|>0$
which proves the claim.
Now, suppose there exists $K\in L^{i}(z,x+y)$ such that $K\notin L^{i}(z,x)$.
Assume $K\in L^{i}_{<}(z,x)$ without loss of generality. By the claim and
since $K\in L^{i}(z,x+y)$, every $H\in L^{i}_{>}(z,x)\cup L^{i}(z,x)$
satisfies
$\mathcal{C}([z]_{0},[x+y]_{0},H)>\mathcal{C}([z]_{0},[x+y]_{0},K)=\Psi_{i}([z],[x+y])$,
that is, $H\in L^{i}_{>}(z,x+y)$. We obtain the contradiction
$|L^{i}_{>}(z,x)\cup L^{i}(z,x)|\leq|L^{i}_{>}(z,x+y)|$
since immediately from the definition of $\Psi_{i}$, we have
$|L^{i}_{>}(z,x)\cup L^{i}(z,x)|\geq i$, while $|L^{i}_{>}(z,x+y)|\leq i-1$ by
(7). ∎
We are now able to show continuity of the coorbit map.
###### Proof of Theorem 31.
For (a), let $x\in\mathbb{R}^{d}$. Then, $\Delta^{i}(z,x)>0$ and for
$y\in\mathbb{R}^{d}$ such that $\|y\|<\frac{1}{2}\Delta^{i}(z,x)$, Lemma 33
allows us to pick $K\in L^{i}(z,x+y)\subseteq L^{i}(z,x)$. Hence, by
Proposition 30,
$|\Psi_{i}([z],[x+y])-\Psi_{i}([z],[x])|=|\mathcal{C}([z]_{0},[x+y]_{0},K)-\mathcal{C}([z]_{0},[x]_{0},K)|\leq\|z\|d([x+y],[x]).$
As such, $\Psi_{i}([z],\cdot)\colon\mathbb{R}^{d}/G\to\mathbb{R}$ is locally
$\|z\|$-Lipschitz. Since $\mathbb{R}^{d}/G$ is a geodesic space by Proposition
26, patching the local Lipschitz constant $\|z\|$ along minimal geodesics
yields that $\Psi_{i}([z],\cdot)\colon\mathbb{R}^{d}/G\to\mathbb{R}$ is
globally $\|z\|$-Lipschitz, as desired.
For (b), let $([z],[x])\in\mathbb{R}^{d}/G\times\mathbb{R}^{d}/G$ and for
$[y]\in\mathbb{R}^{d}/G$, denote
$U_{y}:=\\{[y^{\prime}]\in\mathbb{R}^{d}/G:\|y^{\prime}\|<2\|y\|\\}$. Then,
$U_{z}\times U_{x}$ is an open neighborhood of $([z],[x])$ in
$\mathbb{R}^{d}/G\times\mathbb{R}^{d}/G$. For
$([z_{1}],[x_{1}]),([z_{2}],[x_{2}])\in U_{z}\times U_{x}$, we have
$\displaystyle|\Psi_{i}([z_{1}],[x_{1}])-\Psi_{i}([z_{2}],[x_{2}])|$
$\displaystyle\leq|\Psi_{i}([z_{1}],[x_{1}])-\Psi_{i}([z_{1}],[x_{2}])|+|\Psi_{i}([z_{1}],[x_{2}])-\Psi_{i}([z_{2}],[x_{2}])|$
$\displaystyle\leq\|z_{1}\|d([x_{1}],[x_{2}])+\|x_{2}\|d([z_{1}],[z_{2}])$
$\displaystyle<2(\|z\|d([x_{1}],[x_{2}])+\|x\|d([z_{1}],[z_{2}]))$
$\displaystyle\leq
2\sqrt{\|z\|^{2}+\|x\|^{2}}\sqrt{d([x_{1}],[x_{2}])^{2}+d([z_{1}],[z_{2}])^{2}}$
$\displaystyle=2\sqrt{\|z\|^{2}+\|x\|^{2}}\cdot
d_{\mathbb{R}^{d}/G\times\mathbb{R}^{d}/G}\big{(}([z_{1}],[x_{1}]),([z_{2}],[x_{2}])\big{)},$
as desired. ∎
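The Lipschitz bound of part (a) can be sanity-checked numerically for a finite group. The sketch below (ours) verifies $|\Psi_{i}([z],[x])-\Psi_{i}([z],[y])|\leq\|z\|\,d([x],[y])$ on a small grid for the sign-flip group on $\mathbb{R}^{2}$:

```python
import math

def apply(g, v):
    return [sum(g[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def coorbit(z, x, group, i):
    # Psi_i([z],[x]) for finite G: i-th largest of <g x, z> over g in G.
    vals = sorted((sum(a * b for a, b in zip(apply(g, x), z)) for g in group),
                  reverse=True)
    return vals[i - 1]

def quotient_dist(x, y, group):
    return min(math.sqrt(sum((a - b) ** 2 for a, b in zip(apply(g, x), y)))
               for g in group)

G = [[[s1, 0], [0, s2]] for s1 in (1, -1) for s2 in (1, -1)]  # sign flips
z = [1.0, 2.0]
nz = math.sqrt(sum(c * c for c in z))
pts = [[a / 2, b / 2] for a in range(-3, 4) for b in range(-3, 4)]
ok = all(
    abs(coorbit(z, x, G, i) - coorbit(z, y, G, i))
    <= nz * quotient_dist(x, y, G) + 1e-9
    for i in (1, 2, 3, 4) for x in pts for y in pts
)
print(ok)  # True
```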
The following lemma shows the influence of stabilizers on the set of realizing
group components. It will find its use in Section 8.
###### Lemma 34.
Fix a compact group $G\leq\operatorname{O}(d)$ and a sort index
$i\in\\{1,\dots,|\pi_{0}(G)|\\}$. The following statements hold for
$x,y,z\in\mathbb{R}^{d}$:
1. (a)
$\pi_{0}^{G}(G_{x})\cdot L^{i}_{G}(z,x)=L^{i}_{G}(z,x)$.
2. (b)
Suppose $\|y\|<\frac{1}{2}\Delta^{i}(z,x)$, $L^{i}(z,x)=\pi_{0}^{G}(G_{x})$
and $G_{x}$ is a union of connected components in $G$. Then,
$\Psi_{i}^{G}([z],[x+y])=\Psi_{i^{\prime}}^{G_{x}}([z]_{G_{x}},[x+y]_{G_{x}})$
where
$i^{\prime}=\big{(}(i-1)\bmod|\pi_{0}^{G}(G_{x})|\big{)}+1.$
###### Proof.
For (a), if $H\in\pi_{0}^{G}(G_{x})$, then by 3(a)
$\mathcal{C}([z]_{0},[x]_{0},HK)=\mathcal{C}([z]_{0},[H^{-1}x]_{0},K)=\mathcal{C}([z]_{0},[x]_{0},K)$
and the result follows by taking $K\in L^{i}(z,x)$.
For (b), we have $\pi_{0}^{G}(G_{x})=\pi_{0}(G_{x})$ and we let $k$ be the
largest nonnegative integer such that $k|\pi_{0}(G_{x})|<i$, that is
$k=\frac{i-i^{\prime}}{|\pi_{0}^{G}(G_{x})|}$. The assumption
$L^{i}(z,x)=\pi_{0}(G_{x})$ combined with (a) yield that
$L^{k|\pi_{0}(G_{x})|+1}(z,x)=L^{k|\pi_{0}(G_{x})|+2}(z,x)=\dots=L^{k|\pi_{0}(G_{x})|+|\pi_{0}(G_{x})|}(z,x)=\pi_{0}(G_{x}),$
and so
$\bigcup_{1\leq
j\leq|\pi_{0}(G_{x})|}L^{k|\pi_{0}(G_{x})|+j}(z,x)=\pi_{0}(G_{x}).$
By Lemma 33 and since $\|y\|<\frac{1}{2}\Delta^{i}(z,x)$, we obtain
$M:=\bigcup_{1\leq
j\leq|\pi_{0}(G_{x})|}L^{k|\pi_{0}(G_{x})|+j}(z,x+y)\subseteq\pi_{0}(G_{x}).$
and so
$\pi_{0}(G)\setminus M\subseteq L_{>}^{k|\pi_{0}(G_{x})|}(z,x+y)\cup
L_{<}^{k|\pi_{0}(G_{x})|+|\pi_{0}(G_{x})|}(z,x+y).$
By (7), we obtain
$|\pi_{0}(G)|-|M|\leq
k|\pi_{0}(G_{x})|+|\pi_{0}(G)|-k|\pi_{0}(G_{x})|-|\pi_{0}(G_{x})|$
so that $|M|\geq|\pi_{0}(G_{x})|$ and hence $M=\pi_{0}(G_{x})$. As such, when
the sorting map is restricted to
$\operatorname{Hom}(\pi_{0}(G_{x}),\mathbb{R})$, its output is equal to
$\\{\Psi_{k|\pi_{0}(G_{x})|+j}([z],[x+y])\\}_{j=1}^{|\pi_{0}(G_{x})|}$ which
by definition is also equal to
$\\{\Psi^{G_{x}}_{j}([z]_{G_{x}},[x+y]_{G_{x}})\\}_{j=1}^{|\pi_{0}(G_{x})|}$.
Since $i=k|\pi_{0}(G_{x})|+i^{\prime}$ and $1\leq
i^{\prime}\leq|\pi_{0}(G_{x})|$, the result follows. ∎
### 3.4 Voronoi Decompositions
In this section, we set up the technical language of the paper. We use orbits
to decompose space into two families of Voronoi cells, one based purely on the
componenets of the orbit (Definition 36) while the other drills down to the
normal bundle provided by the manifold structure of the orbit (Definition 42).
These decompositions allow us to view coorbit filter banks as “piecewise-
linear” where the pieces are given by fibers of the normal bundle (Definitions
46 and 47). Moreover, we use the decompositions to state equivalent
characterizations of principality (Lemmas 39 and 45). We begin with an
essential preliminary sourced from the theory of nonlinear orthogonal
projection on manifolds. For $x\neq y\in\mathbb{R}^{d}$, we let $(x,y]$ denote
the line segment from $x$ to $y$ which includes $y$ and excludes $x$. Take
$(x,x]:=\\{x\\}$.
###### Proposition 35 (Theorem 3.13a, Theorem 4.1, Remark 3.1 and Corollary
3.9 in [15]).
Suppose $M$ is a smooth submanifold of $\mathbb{R}^{d}$. Fix
$x,z\in\mathbb{R}^{d}$ and suppose that $x\in\arg\max_{p\in M}\langle
p,z\rangle$. Then, each of the following holds:
1. (a)
$z\in N_{x}$ where $N_{x}$ is the orthogonal complement of the tangent space
of $M$ at $x$.
2. (b)
There exists an open neighborhood $U$ of $(z,x]$ such that
$\\{x\\}=\arg\max_{p\in M}\langle p,n\rangle$ for $n\in U\cap N_{x}$ and
$|\arg\max_{p\in M}\langle p,t\rangle|=1$ for $t\in U$. Moreover, the map
$v_{x}$ sending $t\in U$ to the unique element in $\arg\max_{p\in M}\langle
p,t\rangle$ is smooth over $U$.
3. (c)
If there exists an open neighborhood $U_{z}$ around $z$ such that
$|\arg\max_{p\in M}\langle p,t\rangle|=1$ for $t\in U_{z}$, then $z\in U$.
For the purposes of this section, we mainly apply Proposition 35 to orbits
under the action of the identity component $G_{0}$ of a compact group
$G\leq\operatorname{O}(d)$.
###### Definition 36.
Fix a compact group $G\leq\operatorname{O}(d)$ and
$i\in\\{1,\dots,|\pi_{0}(G)|\\}$. Denote by $G_{0}$ the identity component
of $G$. For $x\in\mathbb{R}^{d}$, we define $V_{[x]_{0}}^{i}$, the open
component Voronoi cell of $x$, through the following characterization
$z\in
V_{[x]_{0}}^{i}\Longleftrightarrow\big{\\{}[x]_{0}\big{\\}}=\big{\\{}[p]_{0}\in\pi_{0}([x]):\Psi_{i}([z],[x])=\mathcal{C}([z]_{0},[p]_{0},G_{0})\big{\\}}.$
In general, for $A\subseteq\pi_{0}([x])$, we define $V_{A}^{i}$ through its
characterization
$z\in V_{A}^{i}\Longleftrightarrow
A=\big{\\{}[p]_{0}\in\pi_{0}([x]):\Psi_{i}([z],[x])=\mathcal{C}([z]_{0},[p]_{0},G_{0})\big{\\}}.$
###### Remark 37.
By (5), $\mathcal{C}([z]_{0},[p]_{0},G_{0})$ may be replaced by
$\langle\langle[z]_{0},[p]_{0}\rangle\rangle_{G_{0}}$ in the definitions
above.
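To make Definition 36 concrete, here is a small numerical sketch (our illustration; the finite group, test points, and helper names are assumptions, not from the paper). We take $G$ to be the cyclic rotation group $C_{4}\leq\operatorname{O}(2)$, so $G_{0}=\\{e\\}$, each component $[p]_{0}$ of an orbit is a single point $gp$, and $\Psi_{i}([z],[x])$ reduces to the $i$-th largest correlation $\langle z,gx\rangle$ over $g\in G$:

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Cyclic group C_4 <= O(2); its identity component G_0 is trivial, so each
# component [p]_0 of the orbit [x] is the single point gx.
G = [rotation(2 * np.pi * k / 4) for k in range(4)]

def coorbit(z, x, i):
    """Psi_i([z],[x]): the i-th largest correlation <z, gx> over g in G."""
    return sorted((float(np.dot(z, g @ x)) for g in G), reverse=True)[i - 1]

def in_component_voronoi(z, x, i, tol=1e-9):
    """Definition 36: z in V_{[x]_0}^i iff Psi_i([z],[x]) = <z, p> holds
    only for the component p = x of the orbit [x]."""
    psi = coorbit(z, x, i)
    attaining = [g @ x for g in G if abs(float(np.dot(z, g @ x)) - psi) <= tol]
    return len(attaining) == 1 and np.allclose(attaining[0], x, atol=1e-8)

x = np.array([1.0, 0.0])
z = np.array([2.0, 0.3])
print(in_component_voronoi(z, x, 1))  # True: the top correlation is at x itself
print(in_component_voronoi(z, x, 2))  # False: Psi_2 is attained at (0,1) != x
```

Here the sorted correlations of $z=(2,0.3)$ with the orbit of $x=(1,0)$ are $(2,\,0.3,\,-0.3,\,-2)$, so $z\in V_{[x]_{0}}^{1}$ while the second value is realized by the component $(0,1)\neq x$.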
###### Lemma 38.
Fix a compact group $G\leq\operatorname{O}(d)$ with identity component $G_{0}$
and fix $i\in\\{1,\dots,|\pi_{0}(G)|\\}$. Then, the following statements hold
for all $x\in\mathbb{R}^{d}$:
1. (a)
$V_{[hx]_{0}}^{i}=V_{[x]_{0}}^{i}$ for all $h\in G_{0}$.
2. (b)
$V_{[gx]_{0}}^{i}=g\cdot V_{[x]_{0}}^{i}$ for all $g\in G$.
3. (c)
For $[q_{1}]_{0},[q_{2}]_{0}\in\pi_{0}([x])$, if $V_{[q_{1}]_{0}}^{i}\cap
V_{[q_{2}]_{0}}^{i}\neq\varnothing$, then $[q_{1}]_{0}=[q_{2}]_{0}$ and
$V_{[q_{1}]_{0}}^{i}=V_{[q_{2}]_{0}}^{i}$.
4. (d)
$V^{i}_{[x]_{0}}$ is open.
5. (e)
The set $\\{(z,x)\in(\mathbb{R}^{d})^{2}:z\in V_{[x]_{0}}^{i}\\}$ is
semialgebraic. Moreover, for $A\subseteq\pi_{0}([x])$, the set $V_{A}^{i}$ is
semialgebraic.
6. (f)
For every $z\in\mathbb{R}^{d}$, there exists $[q]_{0}\in\pi_{0}([x])$ such
that $z\in\overline{V_{[q]_{0}}^{i}}$.
7. (g)
If $z\in\overline{V^{i}_{[x]_{0}}}$, then
$\Psi_{i}([z],[x])=\langle\langle[z]_{0},[x]_{0}\rangle\rangle$. The converse
holds when $i=1$.
8. (h)
$z\in V_{[x]_{0}}^{i}$ if and only if $L^{i}(z,x)=\pi_{0}^{G}(G_{x})$.
###### Proof.
For (a), the proof follows immediately from the definition. For (b), by
normality of $G_{0}$ and 3(a), the following holds for $g\in G$ and
$z,p\in\mathbb{R}^{d}$:
$\mathcal{C}([z]_{0},[p]_{0},G_{0})=\mathcal{C}([z]_{0},[p]_{0},g^{-1}G_{0}g)=\mathcal{C}([gz]_{0},[gp]_{0},G_{0}).$
Hence, for $[p]_{0}\in\pi_{0}([x])$ and $gz\in V^{i}_{[gx]_{0}}$, we have
$\displaystyle\\{[gx]_{0}\\}$
$\displaystyle=\\{[p]_{0}\in\pi_{0}([gx]):\Psi_{i}([gz],[gx])=\mathcal{C}([gz]_{0},[p]_{0},G_{0})\\}$
$\displaystyle=\\{[gp]_{0}\in\pi_{0}([gx]):\Psi_{i}([gz],[gx])=\mathcal{C}([gz]_{0},[gp]_{0},G_{0})\\}$
$\displaystyle=g\cdot\\{[p]_{0}\in\pi_{0}([x]):\Psi_{i}([z],[x])=\mathcal{C}([z]_{0},[p]_{0},G_{0})\\}.$
Multiplying by $g^{-1}$ on the left, we obtain $z\in V^{i}_{[x]_{0}}$ as
desired.
For (c), fix $[q_{1}]_{0},[q_{2}]_{0}\in\pi_{0}([x])$ and suppose $z\in
V_{[q_{1}]_{0}}^{i}\cap V_{[q_{2}]_{0}}^{i}$. Then,
$\\{[q_{1}]_{0}\\}=\\{[q_{2}]_{0}\\}=\\{[p]_{0}\in\pi_{0}([x]):\Psi_{i}([z],[x])=\mathcal{C}([z]_{0},[p]_{0},G_{0})\\},$
and the result immediately follows.
For (d), if $z\in V_{[x]_{0}}^{i}$, then by continuity of $\Psi_{i}$ and
$\mathcal{C}$ (Theorems 31 and 30), there exists a nonempty open neighborhood
$U$ of $z$ such that
$|\Psi_{i}([u],[x])-\mathcal{C}([u]_{0},[p]_{0},G_{0})|>0$ for $u\in U$ and
$[p]_{0}\neq[x]_{0}$ in $\pi_{0}([x])$. Then, $U\subseteq V_{[x]_{0}}^{i}$ as
desired.
For (e), the first half follows from expressing the set of interest in first-
order logic as
$\displaystyle\big{\\{}(z,x)\in$
$\displaystyle(\mathbb{R}^{d})^{2}:\overline{\Psi}_{i}(z,x)=\overline{C}(z,x,G_{0})\
\wedge$ $\displaystyle\forall p\in\mathbb{R}^{d},\big{(}(\exists g\in
G,gp=x)\wedge(\forall h\in G_{0},hp\neq
x)\big{)}\implies\overline{\Psi}_{i}(z,p)\neq\overline{C}(z,p,G_{0})\big{\\}},$
so that the result follows from Proposition 8 and Lemma 11. Next, for
$A\subseteq\pi_{0}([x])$, let $x_{1},\dots,x_{|A|}\in[x]$ be distinguished
elements from each component in $A$, that is,
$A=\bigsqcup_{j=1}^{|A|}[x_{j}]_{0}$. The result follows from the following
semialgebraic first-order logic expression of $V_{A}^{i}$:
$\displaystyle\big{\\{}z\in$ $\displaystyle\mathbb{R}^{d}:\forall
j\in\\{1,\dots,|A|\\},\overline{\Psi}_{i}(z,x_{j})=\overline{C}(z,x_{j},G_{0})\
\wedge$ $\displaystyle\forall p\in\mathbb{R}^{d},\big{(}(\exists g\in
G,gp=x)\wedge(\forall h\in
G_{0},hp\notin\\{x_{1},\dots,x_{|A|}\\})\big{)}\implies\overline{\Psi}_{i}(z,p)\neq\overline{C}(z,p,G_{0})\big{\\}}.$
For (f), suppose toward a contradiction that there exists a nonempty open set
$U$ such that
$\Big{|}\big{\\{}[p]_{0}\in\pi_{0}([x]):\Psi_{i}([z],[x])=\langle\langle[z]_{0},[p]_{0}\rangle\rangle_{G_{0}}\big{\\}}\Big{|}>1$
for all $z\in U$. Then, there exists $A\subseteq\pi_{0}([x])$ such that
$|A|>1$ and $\dim(V_{A}^{i})=d$. Take $[q_{1}]_{0}\neq[q_{2}]_{0}\in A$. Then,
there exists a nonempty open set $W\subseteq V_{A}^{i}$ such that
$\langle\langle[z]_{0},[q_{1}]_{0}\rangle\rangle_{G_{0}}=\langle\langle[z]_{0},[q_{2}]_{0}\rangle\rangle_{G_{0}}$
for all $z\in W$. This contradicts the strong separation of the max filter
over $G_{0}$ (Theorem 8 in [29] or Lemma 57 applied to $j=1$).
For (g), the first assertion follows from continuity of $\Psi_{i}$ proven in
Theorem 31. For the second assertion, if
$\Psi_{1}([z],[x])=\langle\langle[z]_{0},[x]_{0}\rangle\rangle_{G_{0}}$, then
$\langle\langle[x],[z]\rangle\rangle=\langle g_{0}x,z\rangle$ for some
$g_{0}\in G_{0}$. By 35(b), it follows that for any $q\in(z,g_{0}x]$, we have
$\\{g_{0}x\\}=\arg\max_{p\in[x]}\langle p,q\rangle$. In particular, $q\in
V_{[x]_{0}}^{1}$ and the result follows by taking $q\to z$.
For (h), we have $K\in L^{i}(z,x)$ if and only if
$\Psi_{i}([z],[x])=\mathcal{C}([z]_{0},[x]_{0},K)=\mathcal{C}([z]_{0},[K^{-1}x]_{0},G_{0})$
so $z\in V_{[x]_{0}}^{i}$ if and only if $K^{-1}[x]_{0}=[x]_{0}$ for every
$K\in L^{i}(z,x)$, that is $L^{i}(z,x)\subseteq\pi_{0}^{G}(G_{x})$. By 34(a),
$L^{i}(z,x)\subseteq\pi_{0}^{G}(G_{x})$ holds if and only if
$L^{i}(z,x)=\pi_{0}^{G}(G_{x})$. The result follows. ∎
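The equivariance in Lemma 38(b) can be spot-checked numerically; the sketch below (an illustration under the same assumed $C_{4}\leq\operatorname{O}(2)$ toy setup as before) verifies that $z\in V_{[x]_{0}}^{i}$ exactly when $gz\in V_{[gx]_{0}}^{i}$ on random samples:

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

G = [rotation(2 * np.pi * k / 4) for k in range(4)]  # C_4, trivial G_0

def coorbit(z, x, i):
    return sorted((float(np.dot(z, g @ x)) for g in G), reverse=True)[i - 1]

def in_component_voronoi(z, x, i, tol=1e-9):
    psi = coorbit(z, x, i)
    attaining = [g @ x for g in G if abs(float(np.dot(z, g @ x)) - psi) <= tol]
    return len(attaining) == 1 and np.allclose(attaining[0], x, atol=1e-8)

rng = np.random.default_rng(1)
for _ in range(50):
    z, x = rng.standard_normal(2), rng.standard_normal(2)
    for i in (1, 2, 3, 4):
        for g in G:
            # Lemma 38(b): V_{[gx]_0}^i = g . V_{[x]_0}^i
            assert in_component_voronoi(z, x, i) == \
                   in_component_voronoi(g @ z, g @ x, i)
print("Lemma 38(b) holds on 50 random samples")
```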
The next lemma shows how the aforementioned component Voronoi cell structure
allows for a characterization of $\pi_{0}$-principality of points in
$\mathbb{R}^{d}$.
###### Lemma 39.
Fix a compact group $G\leq\operatorname{O}(d)$ with identity component
$G_{0}$, and fix a sort index $i\in\\{1,\dots,|\pi_{0}(G)|\\}$. For
$x\in\mathbb{R}^{d}$, the following are equivalent:
1. (a)
$x\in P_{\pi_{0}}(G)$
2. (b)
$z\in V_{[x]_{0}}^{i}$ implies $x\in V_{[z]_{0}}^{i}$.
###### Proof.
Suppose $z\in V_{[x]_{0}}^{i}$. Then,
$[z]_{0}\in
B:=\\{[q]_{0}\in\pi_{0}([z]):\Psi_{i}([z],[x])=\mathcal{C}([q]_{0},[x]_{0},G_{0})\\}.$
We claim that $K\cdot[z]_{0}\in B$ implies $K\in\pi_{0}^{G}(G_{x})$. In
particular, $\pi_{0}^{G}(G_{z})\leq\pi_{0}^{G}(G_{x})$. By Lemma 5, we have
that
$\mathcal{C}([z]_{0},[x]_{0},G_{0})=\mathcal{C}([x]_{0},[z]_{0},G_{0}^{-1})=\mathcal{C}([x]_{0},[z]_{0},G_{0}).$
By 3(a) and since $G_{0}$ is the identity of $\pi_{0}(G)$, it follows that for
$K\in\pi_{0}(G)$, we have
$\displaystyle\mathcal{C}(K\cdot[z]_{0},[x]_{0},G_{0})$
$\displaystyle=\mathcal{C}([z]_{0},[x]_{0},G_{0}K)$
$\displaystyle=\mathcal{C}([z]_{0},[x]_{0},KG_{0})$
$\displaystyle=\mathcal{C}([z]_{0},K^{-1}[x]_{0},G_{0})$
$\displaystyle=\mathcal{C}(K^{-1}[x]_{0},[z]_{0},G_{0}).$
As such, if $K\cdot[z]_{0}\in B$, then by the above computation and since
$z\in V_{[x]_{0}}^{i}$, we get $K^{-1}[x]_{0}=[x]_{0}$ and so
$K\in\pi_{0}^{G}(G_{x})$. This proves the claim.
(a)$\Rightarrow$(b). By minimality of $|\pi_{0}^{G}(G_{x})|$, we obtain
$\pi_{0}^{G}(G_{z})=\pi_{0}^{G}(G_{x})$. It follows that $K\cdot[z]_{0}\in B$
implies $K\in\pi_{0}^{G}(G_{z})$ so that $B=\\{[z]_{0}\\}$ as desired.
(b)$\Rightarrow$(a). By Proposition 27, $P_{\pi_{0}}(G)$ is dense and by Lemma
38, $V_{[x]_{0}}^{i}$ is open. We may then select $y\in V_{[x]_{0}}^{i}\cap
P_{\pi_{0}}(G)$. By assumption, $x\in V_{[y]_{0}}^{i}$ so that by the claim
above, we have $\pi_{0}^{G}(G_{x})\leq\pi_{0}^{G}(G_{y})$. The result follows
by minimality of $|\pi_{0}^{G}(G_{y})|$. ∎
###### Definition 40.
Fix a compact group $G\leq\operatorname{O}(d)$ and
$i\in\\{1,\dots,|\pi_{0}(G)|\\}$. Denote by $G_{0}$ the identity component of
$G$. For $x\in\mathbb{R}^{d}$, we define the open component Voronoi diagram of
$x$ by
$Q_{[x]_{0}}^{i}:=\bigsqcup_{[p]_{0}\in\pi_{0}([x])}V_{[p]_{0}}^{i}$
Note that
$z\in
Q_{[x]_{0}}^{i}\Longleftrightarrow\big{|}\big{\\{}[p]_{0}\in\pi_{0}([x]):\Psi_{i}([z],[x])=\langle\langle[z]_{0},[p]_{0}\rangle\rangle_{G_{0}}\big{\\}}\big{|}=1.$
The following corollary is immediate by Lemmas 38, 39 and 34(a).
###### Corollary 41.
Fix a compact group $G\leq\operatorname{O}(d)$ and
$i\in\\{1,\dots,|\pi_{0}(G)|\\}$. The following statements hold:
1. (a)
$Q^{i}_{[gx]_{0}}=g\cdot Q^{i}_{[x]_{0}}=Q^{i}_{[x]_{0}}$ for all $g\in G$.
2. (b)
The set $\\{(z,x)\in(\mathbb{R}^{d})^{2}:z\in Q_{[x]_{0}}^{i}\\}$ is
semialgebraic.
3. (c)
$Q^{i}_{[x]_{0}}$ is an open dense semialgebraic subset of $\mathbb{R}^{d}$.
4. (d)
For $z\in P_{\pi_{0}}(G)$, we have $x\in Q_{[z]_{0}}^{i}$ implies $z\in
Q_{[x]_{0}}^{i}$.
5. (e)
$z\in Q_{[x]_{0}}^{i}$ if and only if $|L^{i}(z,x)|=|\pi_{0}^{G}(G_{x})|$.
###### Definition 42.
Fix a compact group $G\leq\operatorname{O}(d)$ and
$i\in\\{1,\dots,|\pi_{0}(G)|\\}$. For $x\in\mathbb{R}^{d}$, we define
$U_{x}^{i}$, the unique Voronoi cell of $x$, through the following
characterization
$z\in U_{x}^{i}\iff z\in
V_{[x]_{0}}^{i}\wedge\\{x\\}=\arg\max_{p\in[x]_{0}}\langle p,z\rangle$
Furthermore, we define the open Voronoi cell of $x$ by
$V^{i}_{x}:=\operatorname{relint}\big{(}U^{i}_{x}\big{)}$
and we define the open Voronoi diagram of $x$ by
$Q_{x}^{i}:=\bigsqcup_{p\in[x]}V_{p}^{i}$
The following lemma unpacks properties of $U_{x}^{i}$ and $V_{x}^{i}$ and the
third statement justifies the use of disjoint union in the definition of
$Q_{x}^{i}$.
###### Lemma 43.
Fix a compact group $G\leq\operatorname{O}(d)$ with identity component $G_{0}$
and fix $i\in\\{1,\dots,|\pi_{0}(G)|\\}$. Then, the following hold
1. (a)
$z\in U_{x}^{i}$ if and only if $z\in V_{[x]_{0}}^{i}$ and $G_{x}=\\{g\in
G:\langle\langle[z]_{0},[x]_{0}\rangle\rangle_{G_{0}}=\langle gz,x\rangle\\}$.
2. (b)
$U_{gx}^{i}=g\cdot U_{x}^{i}$ and $V_{gx}^{i}=g\cdot V_{x}^{i}$ for all $g\in
G$.
3. (c)
For $q_{1},q_{2}\in[x]$, if $U_{q_{1}}^{i}\cap U_{q_{2}}^{i}\neq\varnothing$,
then $q_{1}=q_{2}$ and $U_{q_{1}}^{i}=U_{q_{2}}^{i}$.
4. (d)
$U_{x}^{i}\subseteq N_{x}$ and $V_{x}^{i}$ is open in $N_{x}$.
5. (e)
The following characterization holds
$z\in V_{x}^{i}\Longleftrightarrow z\in V_{[x]_{0}}^{i}\cap
U_{x}^{i}\wedge|\arg\max_{p\in[x]_{0}}\langle p,t\rangle|=1\text{ for $t$ in a
neighborhood of $z$}$
6. (f)
The sets $\\{(z,x)\in(\mathbb{R}^{d})^{2}:z\in U_{x}^{i}\\}$ and
$\\{(z,x)\in(\mathbb{R}^{d})^{2}:z\in V_{x}^{i}\\}$ are semialgebraic.
###### Proof.
The proofs of (a), (b) and (c) are straightforward.
Next, for (d), the assertion $U_{x}^{i}\subseteq N_{x}$ follows from 35(a).
For the openness assertion, we show that the span of $U_{x}^{i}\subseteq
N_{x}$ is $N_{x}$. We first note that $V_{[x]_{0}}^{i}$ is open by Lemma 38
and that for $z\in U_{x}$, 35(b) entails that there exists a neighborhood $U$
of $(z,x]\cap V_{[x]_{0}}^{i}$ such that $\varnothing\neq U\cap N_{x}\subseteq
U_{x}^{i}$. The result follows since $\operatorname{span}(U\cap N_{x})=N_{x}$.
For the forward implication in (e), suppose $z\in V_{x}^{i}$. Then by
definition, $z\in V_{[x]_{0}}^{i}\cap U_{x}^{i}$ and by openness of
$V_{x}^{i}$ in $N_{x}$, there exists $\varepsilon>0$ such that with
$q:=z+\varepsilon(z-x)\in V_{x}^{i}$, we have
$\\{x\\}=\arg\max_{p\in[x]_{0}}\langle p,q\rangle$. Since $z\in(q,x]$, the
desired implication follows from 35(b). On the other hand, the reverse
implication in (e), follows from 35(c).
Lastly, (f) follows from a straightforward argument with first-order logic. ∎
The following lemma unpacks properties of the open Voronoi diagram
$Q_{x}^{i}$.
###### Lemma 44.
Fix a compact group $G\leq\operatorname{O}(d)$ with identity component $G_{0}$
and fix $i\in\\{1,\dots,|\pi_{0}(G)|\\}$. Then, each of the following holds:
1. (a)
$Q_{gx}^{i}=Q_{x}^{i}=g\cdot Q_{x}^{i}=G\cdot V_{x}^{i}$ for all $g\in G$.
2. (b)
The set $\\{(z,x)\in(\mathbb{R}^{d})^{2}:z\in Q_{x}^{i}\\}$ is semialgebraic.
3. (c)
The following characterization holds
$z\in Q_{x}^{i}\Longleftrightarrow\exists[q]_{0}\in\pi_{0}([x]),z\in
V_{[q]_{0}}^{i}\wedge|\arg\max_{p\in[q]_{0}}\langle p,t\rangle|=1\text{ for
$t$ in a neighborhood of $z$}$
4. (d)
$Q_{x}^{i}$ is an open dense semialgebraic subset of $\mathbb{R}^{d}$.
###### Proof.
The proofs of (a) and (b) are straightforward.
The proof of (c) follows immediately using 43(e).
Lastly, for (d), openness of $Q_{x}^{i}$ follows immediately from the openness
of the right-hand characterization in (c). As for denseness, let
$z\in\mathbb{R}^{d}$. Then, by Lemma 38, $z\in\overline{V_{[q]_{0}}^{i}}$ for
some $[q]_{0}\in\pi_{0}([x])$. Next, for $t\in V_{[q]_{0}}^{i}$ and
$q^{\prime}\in\arg\max_{p\in[q]_{0}}\langle p,t\rangle$ and by 35(b), there
exists an open neighborhood $U$ of $(t,q^{\prime}]$ such that
$|\arg\max_{p\in[q]_{0}}\langle p,u\rangle|=1$ for $u\in U$. This entails
denseness of the right-hand characterization in (c) and hence $Q_{x}^{i}$. ∎
Lastly, we end with a characterization of principal points which will find use
in Section 6. The proof is deferred to the appendix since it requires a
technical result regarding symmetry in cut points of (non-complete) Riemannian
manifolds (Section A.1).
###### Lemma 45.
Fix a compact group $G\leq\operatorname{O}(d)$ and
$i\in\\{1,\dots,|\pi_{0}(G)|\\}$. The following are equivalent:
* (a)
$x\in P(G)$.
* (b)
$z\in V_{x}^{i}$ implies $x\in V_{z}^{i}$.
* (c)
$z\in Q_{x}^{i}$ implies $x\in Q_{z}^{i}$.
###### Proof.
See Section A.3. ∎
### 3.5 The Coorbit Realizer and its Local Properties
###### Definition 46.
Fix a compact group $G\leq\operatorname{O}(d)$ and
$i\in\\{1,\dots,|\pi_{0}(G)|\\}$. The Voronoi coorbit realizer given by
$v^{i}\colon\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R}^{d}$ is defined by
$(z,x)\mapsto v_{z}^{i}(x):=\begin{cases}gz,&x\in V^{i}_{gz}\text{ for some
$g\in G$}\\\ 0,&x\notin Q_{z}^{i}\end{cases}$
Note that $v^{i}_{z}$ is well-defined since $Q_{z}^{i}$ is a disjoint union
$\bigsqcup_{p\in[z]}V_{p}^{i}$. The coorbit realizer map factors the coorbit
map through an inner product as the first statement of the following lemma
shows.
###### Lemma 47.
Fix a compact group $G\leq\operatorname{O}(d)$ and
$i\in\\{1,\dots,|\pi_{0}(G)|\\}$. For $x,z\in\mathbb{R}^{d}$, each of the
following statements holds:
1. (a)
For $x\in Q_{z}^{i}$, $v_{z}^{i}(x)$ is the unique element in $[z]$ such that
$\Psi_{i}([x],[z])=\langle x,v_{z}^{i}(x)\rangle$.
2. (b)
$v_{z}^{i}(x)\in N_{x}$ and $x\in V_{v_{z}^{i}(x)}^{i}\subseteq
N_{v_{z}^{i}(x)}$.
3. (c)
$v_{z}^{i}(gx)=g\cdot v_{z}^{i}(x)=v_{gz}^{i}(x)$ for every $g\in G$.
4. (d)
$v^{i}$ is semialgebraic over $(z,x)\in\mathbb{R}^{d}\times\mathbb{R}^{d}$.
5. (e)
$v_{z}^{i}(\cdot)\colon\mathbb{R}^{d}\to\mathbb{R}^{d}$ is smooth over $x\in
Q_{z}^{i}$.
6. (f)
$\nabla\Psi_{i}([\cdot],[z])|_{x}=v_{z}^{i}(x)$ for $x\in Q_{z}^{i}$.
7. (g)
For $x,z\in P(G)$, $\langle x,v_{z}^{i}(x)\rangle=\langle
z,v_{x}^{i}(z)\rangle$.
###### Proof.
The proofs of (a), (b), (c) and (d) are straightforward. By 35(b), it holds
that $v_{z}^{i}(\cdot)$ is smooth over $Q_{z}^{i}\cap V_{[p]_{0}}^{i}$ for
every $p\in\pi_{0}([z])$. Then, (e) follows since
$Q_{z}^{i}=\sqcup_{p\in\pi_{0}([z])}Q_{z}^{i}\cap V_{[p]_{0}}^{i}$. The
formula of the gradient in (f) follows from 29(d). Lastly, (g) follows by
Lemma 45 and (a). ∎
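As a sanity check on parts (a) and (c), consider again the finite rotation group $C_{4}\leq\operatorname{O}(2)$ (our assumed toy setup, not from the paper): there $v_{z}^{i}(x)$ is the orbit point $gz$ whose correlation with $x$ uniquely realizes the $i$-th sorted value.

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

G = [rotation(2 * np.pi * k / 4) for k in range(4)]  # C_4, trivial G_0

def psi(z, x, i):
    """Psi_i([x],[z]): i-th largest <x, gz> over g in G."""
    return sorted((float(np.dot(x, g @ z)) for g in G), reverse=True)[i - 1]

def realizer(z, x, i, tol=1e-9):
    """v_z^i(x): the unique gz realizing Psi_i([x],[z]) when x in Q_z^i, else 0."""
    vals = [g @ z for g in G if abs(float(np.dot(x, g @ z)) - psi(z, x, i)) <= tol]
    return vals[0] if len(vals) == 1 else np.zeros_like(z)

z = np.array([1.0, 0.2])
x = np.array([0.7, -1.1])
g = G[1]
for i in (1, 2, 3, 4):
    v = realizer(z, x, i)
    # Lemma 47(a): Psi_i([x],[z]) = <x, v_z^i(x)>
    assert np.isclose(psi(z, x, i), float(np.dot(x, v)))
    # Lemma 47(c): v_z^i(gx) = g . v_z^i(x)
    assert np.allclose(realizer(z, g @ x, i), g @ v, atol=1e-8)
print(realizer(z, x, 1))  # approximately [0.2, -1.0]
```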
## 4 Upper Lipschitz Continuity
In this section, we estimate the upper Lipschitz bound of a coorbit filter
bank by the maximum singular value one may obtain by selecting an element from
each template’s orbit.
###### Theorem 48.
Given $G\leq\operatorname{O}(d)$ with closed orbits,
$z_{1},\ldots,z_{n}\in\mathbb{R}^{d}$ and
$p_{1},\dots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$, the coorbit filter bank
$\Phi\colon\mathbb{R}^{d}/G\to\mathbb{R}^{n}$ defined by
$\Phi([x]):=\\{\Psi_{p_{i}}([z_{i}],[x])\\}_{i=1}^{n}$ satisfies
$\sup_{\begin{subarray}{c}[x],[y]\in\mathbb{R}^{d}/G\\\
{[x]\neq[y]}\end{subarray}}\frac{\|\Phi([x])-\Phi([y])\|}{d([x],[y])}\leq\max_{g_{1},\ldots,g_{n}\in
G}\|\\{g_{i}z_{i}\\}_{i=1}^{n}\|_{2\to 2}.$
In fact, the result for a max filter bank has been previously established and
we will make use of it to show the same bound for coorbit filter banks:
###### Proposition 49 (Theorem 9 in [29]).
The statement of Theorem 48 holds when $p_{i}\equiv 1$.
Before proceeding, we prove a technical lemma. For $x,y\in\mathbb{R}^{d}$, we
let $[x,y]$ denote the line segment from $x$ to $y$.
###### Lemma 50.
Fix a compact group $G\leq\operatorname{O}(d)$ with identity component $G_{0}$
and fix $i\in\\{1,\dots,|\pi_{0}(G)|\\}$. Fix $x,y,z\in\mathbb{R}^{d}$. Then,
there exists $n\in\mathbb{N}$ and $c_{1}<\cdots<c_{n}\in[x,y]$ such that
* (a)
$c_{1}=x$ and $c_{n}=y$.
* (b)
For every $j\in\\{1,\dots,n-1\\}$, there exists $K_{j}\in\pi_{0}(G)$ such that
$[c_{j},c_{j+1}]\subseteq\overline{V_{[K_{j}z]_{0}}^{i}}$.
For that, we need the following proposition regarding the dimension of the
closure of a semialgebraic set:
###### Proposition 51 (Proposition 2.8.2 in [7]).
Let $A\subseteq\mathbb{R}^{n}$ be a semialgebraic set. Then, $\overline{A}$ is
semialgebraic and
$\dim(A)=\dim(\overline{A}).$
###### Proof of Lemma 50.
First, observe that
$[x,y]=[x,y]\cap\overline{Q^{i}_{[z]_{0}}}=\bigcup_{K\in\pi_{0}(G)}\big{(}[x,y]\cap\overline{V_{[Kz]_{0}}^{i}}\big{)}.$
By Lemma 38 and Proposition 51 and for every $K\in\pi_{0}(G)$, it holds that
$[x,y]\cap\overline{V_{[Kz]_{0}}^{i}}$ is closed, semialgebraic and at most
one-dimensional. By semialgebraicity, $[x,y]\cap\overline{V_{[Kz]_{0}}^{i}}$
is a finite union of manifolds each with dimension at most one, hence a finite
union of a collection of isolated points $B_{K}$ and a collection of closed
intervals $I_{K}$. As such, assuming $x\neq y$ (the case $x=y$ being trivial),
we have $[x,y]\setminus\bigcup_{K\in\pi_{0}(G)}(\cup
I_{K})=\varnothing$: on one hand it is a subset of the finite set
$\bigcup_{K\in\pi_{0}(G)}B_{K}$, while on the other hand it is open in
$[x,y]$, and every nonempty open subset of a nondegenerate segment is infinite.
We conclude that
$\bigcup_{K\in\pi_{0}(G)}\big{(}[x,y]\cap\overline{V_{[Kz]_{0}}^{i}}\big{)}=\bigcup_{K\in\pi_{0}(G)}(\cup
I_{K}),$
and the result follows by partitioning. ∎
We also need the following useful result:
###### Lemma 52.
Let $G\leq\operatorname{O}(d)$ be compact. Suppose $x,y\in\mathbb{R}^{d}$ such
that $d([x],[y])=\|x-y\|$. Then, for any $[c,w]\subseteq[x,y]$, it holds that
$d([c],[w])=\|c-w\|$.
###### Proof.
By applying 35(b) to $w\in[x,y]$, we obtain
$x\in\arg\min_{p\in[x]}d(p,w)$, and by applying it to $c\in[w,x]$, we
get $w\in\arg\min_{q\in[w]}d(q,c)$. The result follows. ∎
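Lemma 52 is easy to probe numerically. The sketch below (same assumed illustrative $C_{4}$ setup as earlier) picks $x,y$ realizing the orbit distance and confirms that sub-segments realize it as well:

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

G = [rotation(2 * np.pi * k / 4) for k in range(4)]  # C_4 <= O(2)

def orbit_dist(x, y):
    """d([x],[y]) = min_g ||x - g y|| for the compact group G."""
    return min(float(np.linalg.norm(x - g @ y)) for g in G)

x = np.array([2.0, 0.1])
y = np.array([1.0, 0.05])          # y = x/2, so d([x],[y]) = ||x - y||
assert np.isclose(orbit_dist(x, y), np.linalg.norm(x - y))

# Lemma 52: every sub-segment [c, w] of [x, y] also realizes orbit distance.
for s, t in [(0.2, 0.9), (0.0, 0.5), (0.3, 1.0)]:
    c = x + s * (y - x)
    w = x + t * (y - x)
    assert np.isclose(orbit_dist(c, w), np.linalg.norm(c - w))
print("Lemma 52 verified on sample sub-segments")
```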
###### Proof of Theorem 48.
By Proposition 10, we may assume $G$ is closed without loss of generality. Fix
$x,y\in\mathbb{R}^{d}$ such that $[x]\neq[y]$ and $\|x-y\|=d([x],[y])$. Then,
for each $l\in[n]$, take $c_{1}^{l},\dots,c_{n_{l}}^{l}$ and
$\\{K_{j}^{l}\\}_{j=1}^{n_{l}}$ as in Lemma 50 applied with respect to
$z_{l}$. By refining the partition over $l$, we get that there exists
$m\in\mathbb{N}$ and $c_{1}<\cdots<c_{m}\in[x,y]$ such that
* (a)
$c_{1}=x$ and $c_{m}=y$.
* (b)
For every $j\in\\{1,\dots,m-1\\}$, there exists
$\\{K_{j}^{l}\\}_{l=1}^{n}\in\pi_{0}(G)^{n}$ such that
$[c_{j},c_{j+1}]\subseteq\bigcap_{l\in[n]}\overline{V_{[K_{j}^{l}z_{l}]_{0}}^{i}}$.
Then, for each $l\in[n]$, it holds that
$\displaystyle\Psi_{p_{l}}([x],[z_{l}])-\Psi_{p_{l}}([y],[z_{l}])$
$\displaystyle=\sum_{j=1}^{m-1}\big{(}\Psi_{p_{l}}([c_{j}],[z_{l}])-\Psi_{p_{l}}([c_{j+1}],[z_{l}])\big{)}$
$\displaystyle=\sum_{j=1}^{m-1}\big{(}\langle\langle[c_{j}],[K_{j}^{l}z_{l}]\rangle\rangle_{G_{0}}-\langle\langle[c_{j+1}],[K_{j}^{l}z_{l}]\rangle\rangle_{G_{0}}\big{)}$
where the second equality follows from Lemma 38. For $j\in\\{1,\dots,m-1\\}$,
define the max filter bank
$\Phi_{j}^{G_{0}}\colon\mathbb{R}^{d}/G_{0}\to\mathbb{R}^{n}$ by
$\Phi_{j}^{G_{0}}([y])=\\{\langle\langle[y],[K_{j}^{l}z_{l}]\rangle\rangle_{G_{0}}\\}_{l=1}^{n}$.
Then, by Lemma 52 and the above,
$\frac{\Phi([x])-\Phi([y])}{d([x],[y])}=\sum_{j=1}^{m-1}\frac{\|c_{j+1}-c_{j}\|}{\|x-y\|}\cdot\frac{\Phi_{j}^{G_{0}}([c_{j}]_{0})-\Phi_{j}^{G_{0}}([c_{j+1}]_{0})}{d_{G_{0}}([c_{j}]_{0},[c_{j+1}]_{0})}.$
The result now follows by taking norms, applying the triangle inequality and
using Proposition 49. ∎
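The bound of Theorem 48 can be checked numerically in the finite setting. The sketch below (our assumed $C_{4}$ toy setup with random templates, not from the paper) compares difference quotients of a coorbit filter bank against the maximal spectral norm over all choices of one orbit point per template:

```python
import itertools
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

G = [rotation(2 * np.pi * k / 4) for k in range(4)]  # C_4 <= O(2)

def psi(z, x, i):
    return sorted((float(np.dot(z, g @ x)) for g in G), reverse=True)[i - 1]

def orbit_dist(x, y):
    return min(float(np.linalg.norm(x - g @ y)) for g in G)

rng = np.random.default_rng(0)
n = 3
Z = rng.standard_normal((n, 2))   # templates z_1, ..., z_n
sorts = [1, 2, 4]                 # sort indices p_i in {1, ..., |pi_0(G)|}

def Phi(x):
    return np.array([psi(Z[l], x, sorts[l]) for l in range(n)])

# Right-hand side of Theorem 48: the largest spectral norm over all ways of
# selecting one point g_l z_l from each template's orbit.
bound = max(
    np.linalg.norm(np.stack([gs[l] @ Z[l] for l in range(n)]), 2)
    for gs in itertools.product(G, repeat=n)
)

ratios = []
for _ in range(300):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    dist = orbit_dist(x, y)
    if dist > 1e-8:
        ratios.append(float(np.linalg.norm(Phi(x) - Phi(y))) / dist)
print(max(ratios) <= bound + 1e-9)  # True by Theorem 48
```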
## 5 Injectivity and Weak Avoidance
In this section, we show that $2c+k$ generic templates suffice for coorbit
filter banks to weakly avoid a fixed or generic $k$-dimensional subspace.
###### Theorem 53.
Suppose the orbits of $G$ are closed with cohomogeneity $c$. Fix
$n\in\mathbb{N}$, $k\in\mathbb{Z}_{\geq 0}$ and
$p_{1},\dots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$. For a fixed or generic
$V\in\mathbb{R}^{n\times k}$ and generic
$z_{1},\ldots,z_{n}\in\mathbb{R}^{d}$, the coorbit filter bank
$\Phi\colon\mathbb{R}^{d}/G\to\mathbb{R}^{n}$ defined by
$\Phi([x]):=\\{\Psi_{p_{i}}([z_{i}],[x])\\}_{i=1}^{n}$ weakly avoids
$\operatorname{im}(V)$ provided $n\geq 2c+k$.
In particular, when $k=0$, the coorbit filter bank is injective provided
$n\geq 2c$.
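For a finite group every orbit is zero-dimensional, so the cohomogeneity is $c=d$; with $d=2$ below, Theorem 53 (with $k=0$) predicts injectivity once $n\geq 2c=4$. The following sketch (our assumed illustrative setup, not from the paper) checks $G$-invariance exactly and probes separation of distinct orbits on random pairs:

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

G = [rotation(2 * np.pi * k / 4) for k in range(4)]  # C_4: c = d = 2

def psi(z, x, i):
    return sorted((float(np.dot(z, g @ x)) for g in G), reverse=True)[i - 1]

def orbit_dist(x, y):
    return min(float(np.linalg.norm(x - g @ y)) for g in G)

rng = np.random.default_rng(0)
n = 4                                   # n >= 2c generic templates
Z = rng.standard_normal((n, 2))
sorts = [1, 2, 3, 4]

def Phi(x):
    return np.array([psi(Z[l], x, sorts[l]) for l in range(n)])

# Phi is G-invariant: the sorted correlation lists agree on the whole orbit.
x = rng.standard_normal(2)
assert all(np.allclose(Phi(g @ x), Phi(x)) for g in G)

# Probe injectivity: distinct orbits get distinct images on random samples.
for _ in range(200):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    if orbit_dist(x, y) > 1e-3:
        assert np.linalg.norm(Phi(x) - Phi(y)) > 1e-9
print("invariance and separation verified on random samples")
```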
When $V\in\mathbb{R}^{n\times k}$ is fixed, Theorem 53 is an immediate
consequence of the following lemma which gives a bound on the dimension of the
set of coorbit filter banks that fail to weakly avoid $\operatorname{im}(V)$.
The remark that follows addresses the case when $V\in\mathbb{R}^{n\times k}$
is generic in Theorem 53.
###### Lemma 54.
Suppose $G\leq\operatorname{O}(d)$ is compact with cohomogeneity $c$. Fix
$n\in\mathbb{N}$, $k\in\mathbb{Z}_{\geq 0}$ and
$p_{1},\dots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$. For
$z_{1},\ldots,z_{n}\in\mathbb{R}^{d}$, denote by
$\Phi\colon\mathbb{R}^{d}/G\to\mathbb{R}^{n}$ the coorbit filter bank defined
by $\Phi([x]):=\\{\Psi_{p_{i}}([z_{i}],[x])\\}_{i=1}^{n}$. For
$V\in\mathbb{R}^{n\times k}$, consider the $G^{n}$-invariant failure set
$B_{V}:=\big{\\{}\\{z_{i}\\}_{i=1}^{n}\in(\mathbb{R}^{d})^{n}:\Phi\text{ fails
to weakly avoid }\operatorname{im}(V)\big{\\}}.$
Then, $B_{V}$ is semialgebraic and $\dim(B_{V})\leq nd-1-(n-2c-k)$.
###### Remark 55.
By further taking
$B:=\big{\\{}\big{(}\\{z_{i}\\}_{i=1}^{n},V\big{)}\in(\mathbb{R}^{d})^{n}\times\mathbb{R}^{n\times
k}:\Phi\text{ fails to weakly avoid }\operatorname{im}(V)\big{\\}}$
as in Lemma 15, we argue that $\dim(B)\leq nd+nk-1-(n-2c-k)$. Let $\Pi_{2}$ be
the projection of $(\mathbb{R}^{d})^{n}\times\mathbb{R}^{n\times k}$ onto the
second component. Then, by conservation of dimension (9(c)), we have
$\dim(B)\leq\dim(\Pi_{2}(B))+\max_{V\in\Pi_{2}(B)}\dim(\Pi_{2}^{-1}(V)\cap B).$
Moreover, for every $V\in\mathbb{R}^{n\times k}$, we have
$\dim(\Pi_{2}^{-1}(V)\cap B)=\dim(B_{V})$. Hence, the claim follows by Lemma
54 and the bound $\dim(\Pi_{2}(B))\leq nk$.
The rest of this section is dedicated to proving Lemma 54. To pass from the
dimension $d$ of $\mathbb{R}^{d}$ to the cohomogeneity of the group, we
leverage the following proposition:
###### Proposition 56 (Lemma 1 in [13]).
Consider $G\leq\operatorname{O}(d)$ with closed orbits. Then, for any
$x\in\mathbb{R}^{d}$, it holds that $G\cdot N_{x}=\mathbb{R}^{d}$, that is
$N_{x}$ intersects every orbit.
We also need the following lemma which shows that the coorbit map is
“strongly separating” (cf. [10, 29]):
###### Lemma 57.
Suppose $G\leq\operatorname{O}(d)$ is compact. Fix $1\leq j\leq|\pi_{0}(G)|$,
$r\in\mathbb{R}$ and $x,y\in\mathbb{R}^{d}$ such that $[x]\neq[y]$. Then,
$\dim\big{\\{}z\in\mathbb{R}^{d}:\Psi_{j}([z],[x])-\Psi_{j}([z],[y])=r\big{\\}}\leq
d-1.$
###### Proof.
Suppose by contradiction that there exists a nonempty open set $U$ such that
$h(z):=\Psi_{j}([z],[x])-\Psi_{j}([z],[y])=r$ for all $z\in U$. By Lemma 44,
$Q_{x}^{j}\cap Q_{y}^{j}$ is open and dense, so we may shrink $U$ so that
$U\subseteq Q_{x}^{j}\cap Q_{y}^{j}$. Then, by Lemma 47, it follows that
$\nabla h(z)=v_{x}^{j}(z)-v_{y}^{j}(z)=0$ for $z\in U$. Since
$v_{x}^{j}(z)\in[x]$ and $v_{y}^{j}(z)\in[y]$, we obtain the contradiction
$[x]=[y]$. ∎
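Lemma 57 says the level sets of $h(z):=\Psi_{j}([z],[x])-\Psi_{j}([z],[y])$ are at most $(d-1)$-dimensional when $[x]\neq[y]$; in particular $h$ is nonconstant on every open set. A quick numerical probe (our assumed illustrative $C_{4}$ setup, not from the paper):

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

G = [rotation(2 * np.pi * k / 4) for k in range(4)]  # C_4 <= O(2)

def psi(z, x, j):
    """Psi_j([z],[x]) for trivial G_0: j-th largest <z, gx> over g in G."""
    return sorted((float(np.dot(z, g @ x)) for g in G), reverse=True)[j - 1]

x = np.array([1.0, 0.0])
y = np.array([0.5, 0.5])   # [x] != [y]: the orbits have different radii

rng = np.random.default_rng(2)
for j in (1, 2, 3, 4):
    center = 3.0 * rng.standard_normal(2)
    zs = center + 0.05 * rng.standard_normal((50, 2))
    values = {round(psi(z, x, j) - psi(z, y, j), 6) for z in zs}
    # h takes more than one value on each sampled ball, so no level set of h
    # contains an open set
    assert len(values) > 1
print("h is nonconstant on every sampled ball, consistent with Lemma 57")
```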
###### Proof of Lemma 54.
Semialgebraicity follows from $B_{V}$ being a projection of the $V$ section of
the semialgebraic set $B$ defined in Lemma 15. Let $N:=N_{w}$ for some $w\in
P(G)$. Then $G\cdot N=\mathbb{R}^{d}$ by Proposition 56. Fix
$V\in\mathbb{R}^{n\times k}$ and consider the semialgebraic lift of $B_{V}$
$\displaystyle L_{V}:=\big{\\{}\big{(}\\{z_{i}\\}_{i=1}^{n},p,x,y\big{)}\in$
$\displaystyle(\mathbb{R}^{d})^{n}\times\mathbb{R}^{k}\times(\mathbb{R}^{d})^{2}:$
$\displaystyle[x]\neq[y],\|x\|^{2}+\|y\|^{2}=1,x\in N,y\in N,$
$\displaystyle\Psi_{p_{i}}([z_{i}],[x])-\Psi_{p_{i}}([z_{i}],[y])=d([x],[y])\cdot(Vp)_{i}\
\forall i\in\\{1,\ldots,n\\}\big{\\}}.$
Observe that $B_{V}$ is a projection of $L_{V}$ on the $\\{z_{i}\\}_{i=1}^{n}$
coordinate. Denote by $\Pi_{pxy}$ the projection of
$(\mathbb{R}^{d})^{n}\times\mathbb{R}^{k}\times(\mathbb{R}^{d})^{2}$ on the
components $(p,x,y)$. By conservation of dimension (9(c)), we get
$\dim(B_{V})\leq\dim(L_{V})\leq\dim(\Pi_{pxy}(L_{V}))+\max_{(p,x,y)\in\Pi_{pxy}(L_{V})}\dim(\Pi_{pxy}^{-1}(p,x,y)\cap
L_{V}).$
Observe that $\dim(\Pi_{pxy}(L_{V}))\leq(2c-1)+k$ since $x,y\in N$ and
$p\in\mathbb{R}^{k}$ contribute $2c+k$ degrees of freedom while the condition
$\|x\|^{2}+\|y\|^{2}=1$ takes away one. It now suffices to show that
$\dim(\Pi_{pxy}^{-1}(p,x,y)\cap L_{V})\leq n(d-1)$
for every $(p,x,y)\in\Pi_{pxy}(L_{V})$. To this end, fix
$(p,x,y)\in\Pi_{pxy}(L_{V})$ and observe that
$\displaystyle\dim(\Pi_{pxy}^{-1}(p,x,y)\cap L_{V})$
$\displaystyle=\dim\big{\\{}\\{z_{i}\\}_{i=1}^{n}\in(\mathbb{R}^{d})^{n}:\Psi_{p_{i}}([z_{i}],[x])-\Psi_{p_{i}}([z_{i}],[y])=d([x],[y])\cdot(Vp)_{i}\
\forall i\in\\{1,\ldots,n\\}\big{\\}}$ $\displaystyle\leq
n\cdot\max_{\begin{subarray}{c}1\leq j\leq|\pi_{0}(G)|\\\
r\in\mathbb{R}\end{subarray}}\dim\big{\\{}z\in\mathbb{R}^{d}:\Psi_{j}([z],[x])-\Psi_{j}([z],[y])=r\big{\\}}$
$\displaystyle\leq n\cdot(d-1)$
where the last inequality follows from Lemma 57. This finishes the proof. ∎
## 6 Local Avoidance at Principal Points
The purpose of this section is to show that $2c+k-1$ generic templates suffice
for a coorbit filter bank to locally avoid a fixed or generic $k$-dimensional
subspace at principal points.
###### Theorem 58.
Suppose $G\leq\operatorname{O}(d)$ is compact with cohomogeneity $c$. Fix sort
indices $p_{1},\dots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$. Then, for a fixed or
generic $V\in\mathbb{R}^{n\times k}$ and generic
$z_{1},\ldots,z_{n}\in\mathbb{R}^{d}$, the coorbit filter bank
$\Phi\colon\mathbb{R}^{d}/G\to\mathbb{R}^{n}$ defined by
$\Phi([x]):=\\{\Psi_{p_{i}}([z_{i}],[x])\\}_{i=1}^{n}$ locally avoids
$\operatorname{im}(V)$ at every $x\in P(G)$ provided $n\geq 2c-1+k$.
In particular, when $k=0$ and with $n$ generic templates, the coorbit filter
bank is locally lower Lipschitz at every $x\in P(G)$ provided $n\geq 2c-1$.
We also settle Problem 18 for the case of groups acting freely on the sphere:
###### Theorem 59.
Suppose $G\leq\operatorname{O}(d)$ is compact with cohomogeneity $c$. If the
action of $G$ is free on the sphere, then $n^{\prime}(G)\leq 2c$.
In fact, Theorem 58 with fixed $V$ and Theorem 59 follow immediately from the
following lemma. The case of generic $V$ in Theorem 58 follows by a similar
argument as in Remark 55.
###### Lemma 60.
Suppose $G\leq\operatorname{O}(d)$ is compact with cohomogeneity $c$. Fix
$n\in\mathbb{N}$, $k\in\mathbb{Z}_{\geq 0}$ and
$p_{1},\dots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$. For
$z_{1},\ldots,z_{n}\in\mathbb{R}^{d}$, denote by
$\Phi\colon\mathbb{R}^{d}/G\to\mathbb{R}^{n}$ the coorbit filter bank defined
by $\Phi([x]):=\\{\Psi_{p_{i}}([z_{i}],[x])\\}_{i=1}^{n}$. For
$V\in\mathbb{R}^{n\times k}$, consider the failure set
$C_{V}:=\big{\\{}\\{z_{i}\\}_{i=1}^{n}\in(\mathbb{R}^{d})^{n}:\Phi\text{ fails
to locally avoid }\operatorname{im}(V)\text{ at some }x\in P(G)\big{\\}}.$
Then, $C_{V}$ is semialgebraic and $\dim(C_{V})\leq nd-1-(n-2c-k+1)$.
The rest of this section is dedicated to proving Lemma 60. We need the
following lemma which is essential in reducing the proof to the linear algebra
of singular value decompositions.
###### Lemma 61.
Suppose $G\leq\operatorname{O}(d)$ is compact, $x\in P(G)$ and $x_{n},y_{n}\to
x$ with $[x_{n}]\neq[y_{n}]$. Then, there exists a nonzero $u\in N_{x}$ such
that
$\|u\|=\liminf_{n\to\infty}\frac{||x_{n}-y_{n}||}{d([x_{n}],[y_{n}])}\in[1,\infty)$
and such that for each $i\in\\{1,\dots,|\pi_{0}(G)|\\}$ and for each $z\in
Q_{x}^{i}$, the following convergence holds after taking subsequences
$\frac{\Psi_{i}([x_{n}],[z])-\Psi_{i}([y_{n}],[z])}{d([x_{n}],[y_{n}])}\to\langle
v^{i}_{z}(x),u\rangle.$
###### Proof.
The proof is technical and is postponed to Section B.2. ∎
For $n\geq c$ and a matrix $X\in\mathbb{R}^{n\times c}$, we denote by
$\sigma_{c}(X)$ the minimum singular value of $X$. When $n<c$, we take
$\sigma_{c}(X):=0$. Then, Lemma 61 immediately yields the following
interesting corollary.
###### Corollary 62.
Suppose $G\leq\operatorname{O}(d)$ is compact with cohomogeneity $c$. Fix sort
indices $p_{1},\dots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$. For
$z_{1},\ldots,z_{n}\in\mathbb{R}^{d}$, denote by
$\Phi\colon\mathbb{R}^{d}/G\to\mathbb{R}^{n}$ the coorbit filter bank defined
by $\Phi([x]):=\\{\Psi_{p_{i}}([z_{i}],[x])\\}_{i=1}^{n}$. Then, for $x\in
P(G)$, $\Phi$ is $\sigma_{c}(\\{\langle
v^{p_{i}}_{z_{i}}(x),\cdot\rangle\\}_{i=1}^{n}|_{N_{x}})$-locally lower
Lipschitz at $x$.
###### Proof.
Let $x_{n},y_{n}\to x$ be such that $[x_{n}]\neq[y_{n}]$, and let
$Q(x_{n},y_{n}):=\frac{\Phi([x_{n}])-\Phi([y_{n}])}{d([x_{n}],[y_{n}])}$
denote the corresponding difference quotient. By Lemma 61, after passing to
subsequences, there exists nonzero $u\in N_{x}$ such that $\|u\|\geq 1$ and
$Q(x_{n},y_{n})\to\|u\|\cdot\left\\{\left\langle
v^{p_{i}}_{z_{i}}(x),\frac{u}{\|u\|}\right\rangle\right\\}_{i=1}^{n}$
The result now follows by taking norms and using $\|u\|\geq 1$. ∎
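In the finite setting ($c=d$, $N_{x}=\mathbb{R}^{d}$), Corollary 62's constant is the smallest singular value of the $n\times d$ matrix whose rows are the realizers $v^{p_{i}}_{z_{i}}(x)$. Since each $\Psi_{p_{i}}([\cdot],[z_{i}])$ is linear near a generic $x$, the local ratio can be checked directly (an illustrative sketch under our assumed $C_{4}$ toy setup, not from the paper):

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

G = [rotation(2 * np.pi * k / 4) for k in range(4)]  # C_4: c = d = 2

def psi(z, x, i):
    return sorted((float(np.dot(x, g @ z)) for g in G), reverse=True)[i - 1]

def realizer(z, x, i, tol=1e-9):
    vals = [g @ z for g in G if abs(float(np.dot(x, g @ z)) - psi(z, x, i)) <= tol]
    return vals[0] if len(vals) == 1 else np.zeros_like(z)

def orbit_dist(x, y):
    return min(float(np.linalg.norm(x - g @ y)) for g in G)

rng = np.random.default_rng(3)
n = 3
Z = rng.standard_normal((n, 2))
sorts = [1, 2, 3]
x = rng.standard_normal(2)            # generic, hence principal

def Phi(y):
    return np.array([psi(Z[l], y, sorts[l]) for l in range(n)])

A = np.stack([realizer(Z[l], x, sorts[l]) for l in range(n)])
sigma_c = np.linalg.svd(A, compute_uv=False)[-1]  # smallest singular value

for _ in range(100):
    y = x + 1e-5 * rng.standard_normal(2)
    ratio = float(np.linalg.norm(Phi(x) - Phi(y))) / orbit_dist(x, y)
    assert ratio >= sigma_c - 1e-6    # Corollary 62's local lower bound
print("local lower Lipschitz bound verified near x")
```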
###### Proof of Lemma 60.
Semialgebraicity follows from $C_{V}$ being a projection of the $V$ section of
the semialgebraic set $C_{P(G)}$ defined in Lemma 15. Let $N:=N_{w}$ for some
$w\in P(G)$. Then, $G\cdot N=\mathbb{R}^{d}$ by Proposition 56. Denote by $Q$
the difference quotient corresponding to $\Phi$ and put
$P_{\mathbb{S}}(G):=\mathbb{S}^{d-1}\cap P(G)$. Fix $V\in\mathbb{R}^{n\times
k}$ and consider a semialgebraic lift of $C_{V}$
$\displaystyle W_{V}$
$\displaystyle:=\big{\\{}\big{(}\\{z_{i}\\}_{i=1}^{n},x\big{)}\in(\mathbb{R}^{d})^{n}\times(P_{\mathbb{S}}(G)\cap
N):\Phi\text{ does not locally avoid }\text{im}(V)\text{ at }x\big{\\}}$
$\displaystyle=\big{\\{}\big{(}\\{z_{i}\\}_{i=1}^{n},x\big{)}\in(\mathbb{R}^{d})^{n}\times(P_{\mathbb{S}}(G)\cap
N):\exists p\in\mathbb{R}^{k},\forall\varepsilon\in\mathbb{R}_{>0},\exists
x_{0},y_{0}\in\mathbb{R}^{d},$
$\displaystyle\qquad\qquad[x_{0}]\neq[y_{0}]\wedge|Q(x_{0},y_{0})-Vp|<\varepsilon\wedge[x_{0}],[y_{0}]\in
B_{[x]}(\varepsilon)\big{\\}}.$
Observe that $C_{V}$ is a projection of $W_{V}$ on its $\\{z_{i}\\}_{i=1}^{n}$
coordinate.
For
$\big{(}\\{z_{i}\\}_{i=1}^{n},x\big{)}\in(\mathbb{R}^{d})^{n}\times(P_{\mathbb{S}}(G)\cap
N)$ with associated Voronoi coorbit realizers denoted by
$v_{i}:=v^{p_{i}}_{z_{i}}$, define
$I_{x,\\{z_{i}\\}}:=\operatorname{diag}\big{(}\\{1_{v_{i}(x)\neq
0}\\}_{i=1}^{n}\big{)}\in\mathbb{R}^{n\times n}$. By Lemma 47, it is
semialgebraic in $\big{(}\\{z_{i}\\}_{i=1}^{n},x\big{)}$. Now, let
$\mathbb{I}:=\\{I\in\mathbb{R}^{n\times
n}:I=\operatorname{diag}\\{\varepsilon_{i}\\}_{i=1}^{n},\\{\varepsilon_{i}\\}_{i=1}^{n}\in\\{0,1\\}^{n}\\}$.
We have a partition $W_{V}=\bigcup_{I\in\mathbb{I}}W_{V}^{I}$, where
$W_{V}^{I}:=\big{\\{}\big{(}\\{z_{i}\\}_{i=1}^{n},x\big{)}\in(\mathbb{R}^{d})^{n}\times(P_{\mathbb{S}}(G)\cap
N):I_{x,\\{z_{i}\\}}=I\text{ and }\Phi\text{ does not locally avoid
}\text{im}(V)\text{ at }x\big{\\}}.$
To bound
$\dim(C_{V})\leq\dim(W_{V})=\max_{I\in\mathbb{I}}\dim(W_{V}^{I})$, we fix a
dimension-maximizing $I\in\mathbb{I}$ and proceed with bounding
$\dim(W_{V}^{I})$. Without loss of generality, we take
$\\{j:I_{jj}=1\\}=\\{1,\dots,m\\}$ for some $m\in\\{0,\dots,n\\}$. Denote by
$\Pi_{2}$ the projection of $\big{(}\\{z_{i}\\}_{i=1}^{n},x\big{)}\mapsto x$.
By conservation of dimension (9(c)),
$\dim(C_{V})\leq\dim(W_{V}^{I})\leq\dim(\Pi_{2}(W_{V}^{I}))+\max_{x\in\Pi_{2}(W_{V}^{I})}\dim(\Pi_{2}^{-1}(x)\cap
W_{V}^{I}).$
Observe that $\dim(\Pi_{2}(W_{V}^{I}))\leq\dim(P_{\mathbb{S}}(G)\cap N)=c-1$.
It now suffices to fix $x\in\Pi_{2}(W_{V}^{I})$ and show that
$\dim(\Pi_{2}^{-1}(x)\cap W_{V}^{I})\leq nd-1-(n-k-c)$.
Let $\Phi_{I}$ denote the coorbit filter bank corresponding to templates
$\\{z_{i}\\}_{i=1}^{m}$ and sort indices $\\{p_{i}\\}_{i=1}^{m}$. For
$\\{z_{i}\\}_{i=1}^{n}\in\Pi_{2}^{-1}(x)\cap W_{V}^{I}$ and by definition of
$I\in\mathbb{I}$, we have that
$\\{z_{i}\\}_{i=1}^{m}\in\prod_{i=1}^{m}Q_{x}^{p_{i}}$ and
$\\{z_{i}\\}_{i=m+1}^{n}\in\prod_{i=m+1}^{n}(Q_{x}^{p_{i}})^{c}$. Next, let
$V_{m}\in\mathbb{R}^{m\times k}$ denote the truncation of $V$ that only keeps
its first $m$ rows. By Lemma 61, we have
$\\{z_{i}\\}_{i=1}^{m}\in
E:=\left\\{\\{z_{i}\\}_{i=1}^{m}\in\prod_{i=1}^{m}Q_{x}^{p_{i}}:\operatorname{im}(\\{\langle
v_{i}(x),\cdot\rangle|_{N_{x}\setminus\\{0\\}}\\}_{i=1}^{m})\cap\operatorname{im}(V_{m})\neq\varnothing\right\\}.$
Note that $E$ is semialgebraic. We obtain
$\Pi_{2}^{-1}(x)\cap W_{V}^{I}\subseteq
E\times\prod_{i=m+1}^{n}(Q_{x}^{p_{i}})^{c}.$
Since $\dim\left(\prod_{i=m+1}^{n}(Q_{x}^{p_{i}})^{c}\right)\leq(n-m)(d-1)$,
we get
$\dim(\Pi_{2}^{-1}(x)\cap W_{V}^{I})\leq\dim(E)+(n-m)(d-1).$
It now suffices to show that $\dim(E)\leq md-1-(m-k-c)$. We lift $E$ to
$\displaystyle
E_{L}:=\Big{\\{}\big{(}\\{z_{i}\\}_{i=1}^{m},\\{v_{i}\\}_{i=1}^{m}\big{)}\in$
$\displaystyle(\mathbb{R}^{d})^{m}\times\prod_{i=1}^{m}V_{x}^{p_{i}}:\operatorname{im}(\\{\langle
v_{i},\cdot\rangle|_{N_{x}\setminus\\{0\\}}\\}_{i=1}^{m})\cap\operatorname{im}(V_{m})\neq\varnothing$
$\displaystyle\qquad\qquad\qquad\qquad\quad\wedge\exists\\{g_{i}\\}_{i=1}^{m}\in
G^{m},z_{i}=g_{i}v_{i}\ \forall i\in[m]\Big{\\}}.$
Let $\pi_{1}$ and $\pi_{2}$ denote projections on the $\\{z_{i}\\}_{i=1}^{m}$
and $\\{v_{i}\\}_{i=1}^{m}$ coordinates respectively. Then, $E=\pi_{1}(E_{L})$
and for any $\\{v_{i}\\}_{i=1}^{m}\in\pi_{2}(E_{L})$, we have
$\dim(\pi_{2}^{-1}(\\{v_{i}\\}_{i=1}^{m}))\leq m(d-c)$ since each $z_{i}$ has
at most $d-c$ degrees of freedom in the fiber. By conservation of dimension
(9(c)), it suffices to show that $\dim(\pi_{2}(E_{L}))\leq mc-1-(m-k-c)$.
Since $V_{x}^{p_{i}}\subseteq N_{x}$, we have
$\pi_{2}(E_{L})\subseteq F:=\big{\\{}\\{v_{i}\\}_{i=1}^{m}\in
N_{x}^{m}:\operatorname{im}(\\{\langle
v_{i},\cdot\rangle|_{N_{x}\setminus\\{0\\}}\\}_{i=1}^{m})\cap\operatorname{im}(V_{m})\neq\varnothing\big{\\}}$
where we note that $F$ is semialgebraic. By identifying $N_{x}$ with
$\mathbb{R}^{c}$, we lift to the space of singular value decompositions
$\displaystyle F_{L}:=$
$\displaystyle\big{\\{}\big{(}\\{v_{i}\\}_{i=1}^{m},U,\Sigma,W\big{)}\in(\mathbb{R}^{c})^{m}\times\mathbb{R}^{c\times(c-1)}\times
D_{\geq 0}^{(c-1)\times(c-1)}\times\mathbb{R}^{m\times(c-1)}:$
$\displaystyle\qquad\qquad\qquad U^{T}U=\operatorname{Id}_{c-1}\wedge
W^{T}W=\operatorname{Id}_{c-1}\wedge\\{\langle
v_{i},\cdot\rangle|_{\mathbb{R}^{c}}\\}_{i=1}^{m}=W\Sigma U^{T}\big{\\}}$
$\displaystyle\cup\big{\\{}\big{(}\\{v_{i}\\}_{i=1}^{m},U,\Sigma,W\big{)}\in(\mathbb{R}^{c})^{m}\times\mathbb{R}^{c\times
c}\times D_{\geq 0}^{c\times c}\times\mathbb{R}^{m\times
c}:U^{T}U=\operatorname{Id}_{c}\wedge W^{T}W=\operatorname{Id}_{c}$
$\displaystyle\qquad\wedge\big{(}\exists v\in\operatorname{im}(V_{m}),\exists
O\in\operatorname{O}(c),v\text{ is a column of $WO$}\big{)}\wedge\\{\langle
v_{i},\cdot\rangle|_{\mathbb{R}^{c}}\\}_{i=1}^{m}=W\Sigma U^{T}\big{\\}},$
where $D_{\geq 0}^{c\times c}$ is the space of diagonal matrices with
nonnegative entries. We note that $F_{L}$ is semialgebraic and $F$ is the
projection of $F_{L}$ onto the first component $\\{v_{i}\\}_{i=1}^{m}$. Let
$\pi_{\sigma}$ denote the projection onto the other three components
$(U,\Sigma,W)$. Then, the fibers of $\pi_{\sigma}$ are singletons. Hence, by
conservation of dimension (9(c)), it suffices to show that
$\dim(\pi_{\sigma}(F_{L}))\leq mc-1-(m-k-c)$. We have
$\displaystyle\pi_{\sigma}(F_{L}):=\big{\\{}\big{(}U,\Sigma,W\big{)}\in$
$\displaystyle\mathbb{R}^{c\times(c-1)}\times D_{\geq
0}^{(c-1)\times(c-1)}\times\mathbb{R}^{m\times(c-1)}:U^{T}U=\operatorname{Id}_{c-1}\wedge
W^{T}W=\operatorname{Id}_{c-1}\big{\\}}$
$\displaystyle\cup\big{\\{}\big{(}U,\Sigma,W\big{)}\in$
$\displaystyle\mathbb{R}^{c\times c}\times D_{\geq 0}^{c\times
c}\times\mathbb{R}^{m\times c}:U^{T}U=\operatorname{Id}_{c}\wedge
W^{T}W=\operatorname{Id}_{c}$ $\displaystyle\wedge\exists
v\in\operatorname{im}(V_{m}),\exists O\in\operatorname{O}(c),v\text{ is a
column of $WO$}\big{\\}}.$
We count dimensions. For the first set and by orthonormality constraints, it
holds that $\Sigma$ has $c$ degrees of freedom, $U$ has $c(c-1)-c-c(c-1)/2$
degrees of freedom and $W$ has $m(c-1)-(c-1)-(c-1)(c-2)/2$ degrees of freedom.
The total degrees of freedom are $mc-1-(m-c)$. For the second set, if some
$v\in\operatorname{im}(V_{m})$ is a column of $WO$, then $\Sigma$ has $c$
degrees of freedom and $U$ has $c^{2}-c-c(c-1)/2$ degrees of freedom. With
$O\in\operatorname{O}(c)$ fixed, $WO$ has $m(c-1)+k-c-c(c-1)/2$ degrees of
freedom. These degrees of freedom also account for a right action of $\operatorname{O}(c-1)$
which keeps $v$ as a column of $WO$. Then, $W$ may only get an additional
$\dim(\operatorname{O}(c)/\operatorname{O}(c-1))=c-1$ degrees of freedom. By
summing up, the dimension is bounded by $mc-1-(m-k-c)$ and the result follows.
∎
In fact, from the proof above, we have
###### Corollary 63.
Suppose $G\leq\operatorname{O}(d)$ is compact with cohomogeneity $c$. Fix sort
indices $p_{1},\dots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$. Then, for generic
$z_{1},\ldots,z_{n}\in\mathbb{R}^{d}$,
$\sigma_{c}(\\{\langle
v^{p_{i}}_{z_{i}}(x),\cdot\rangle\\}_{i=1}^{n}|_{N_{x}})>0$
for every $x\in P(G)$ provided $n\geq 2c-1$.
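The orthonormality counts in the proof of Lemma 60 are instances of the standard dimension of the Stiefel manifold $\\{U\in\mathbb{R}^{n\times k}:U^{T}U=\operatorname{Id}_{k}\\}$, namely $nk-k(k+1)/2$. As a sanity check only (NumPy assumed; this sketch is not part of the argument), the count can be recovered numerically as $nk$ minus the rank of the constraint Jacobian at a random orthonormal point:

```python
import numpy as np

# Dimension of the Stiefel manifold {U in R^{n x k} : U^T U = Id_k},
# estimated as nk minus the rank of the constraint Jacobian at a point.
def stiefel_dim(n, k, eps=1e-6):
    rng = np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.standard_normal((n, k)))   # orthonormal point
    iu = np.triu_indices(k)                            # k(k+1)/2 constraints
    g = lambda M: (M.T @ M - np.eye(k))[iu]
    J = np.zeros((len(iu[0]), n * k))
    for t in range(n * k):
        E = np.zeros(n * k)
        E[t] = eps
        J[:, t] = (g(U + E.reshape(n, k)) - g(U - E.reshape(n, k))) / (2 * eps)
    return n * k - np.linalg.matrix_rank(J, tol=1e-4)

assert stiefel_dim(5, 3) == 5 * 3 - 3 * 4 // 2  # = 9
```

The same rank computation with $k=1$ recovers the dimension $n-1$ of the sphere, matching the witness counts used throughout.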
## 7 Strong Avoidance Reduction to Full Groups
###### Definition 64.
For a compact subgroup $G\leq\operatorname{O}(d)$, we denote
$F_{G}:=\\{x\in\mathbb{R}^{d}:G_{x}=G\\}.$
We say that the action of $G$ is full if $F_{G}=\\{0\\}$.
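For a group (topologically) generated by finitely many elements, $x\in F_{G}$ exactly when every generator fixes $x$, so $F_{G}$ is the null space of the stacked blocks $g-\operatorname{Id}$. A minimal numerical sketch (NumPy assumed; the generator list is illustrative input, not from the text):

```python
import numpy as np

# F_G = {x : gx = x for all g in G} is the null space of the stacked
# blocks (g - Id) over a set of generators of G.
def fixed_space(gens, d, tol=1e-10):
    A = np.vstack([g - np.eye(d) for g in gens])
    _, s, Vt = np.linalg.svd(A)
    return Vt[s < tol]          # rows form an orthonormal basis of F_G

# Example: rotation by 90 degrees about the e3-axis in R^3; F_G = span(e3).
R = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
F = fixed_space([R], 3)
assert F.shape == (1, 3) and abs(abs(F[0, 2]) - 1.0) < 1e-9
```

In this language, the action is full precisely when `fixed_space` returns an empty basis.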
Note that $F_{G}^{\perp}$ is the largest invariant subspace over which the
restriction of $G$ is full. In this section, we show how strong avoidance for
a group $G$ reduces to its action on $F_{G}^{\perp}$:
###### Theorem 65.
Suppose $G\leq\operatorname{O}(d)$ is compact. Let $F:=F_{G}$ and denote by
$G_{F^{\perp}}\leq\operatorname{O}(F^{\perp})$ the restriction of $G$ to
$F^{\perp}$. Then, $n_{k}(G)\leq n_{k+\dim(F)}(G_{F^{\perp}})$ and
$n^{\prime}(G)\leq n^{\prime}(G_{F^{\perp}})+\dim(F)$.
Once we show the following lemma, the proof of Theorem 65 follows:
###### Lemma 66.
Suppose $G\leq\operatorname{O}(d)$ is compact and $F:=F_{G}$, and denote by
$G_{F^{\perp}}\leq\operatorname{O}(F^{\perp})$ the restriction of $G$ to
$F^{\perp}$. Then, for $k\geq 0$ and $m\geq k+\dim(F)$, it holds that
$v_{k}^{m}(G)\geq\min\\{m-k-\dim(F)+1,v_{k+\dim(F)}^{m}(G_{F^{\perp}})\\}.$
###### Proof of Theorem 65.
Denote $f:=\dim(F)$. First, we tackle the assertion $n^{\prime}(G)\leq
n^{\prime}(G_{F^{\perp}})+f$. By definition of $n^{\prime}(G_{F^{\perp}})$,
$v_{k+f}^{m}(G_{F^{\perp}})\geq m-k-f-n^{\prime}(G_{F^{\perp}})+1.$
By Lemma 66 and with $m\geq k+f$, we have
$\displaystyle v_{k}^{m}(G)$
$\displaystyle\geq\min\big{\\{}m-k-f+1,v_{k+f}^{m}(G_{F^{\perp}})\big{\\}}$
$\displaystyle\geq
m-k-f+1+\min\big{\\{}0,v_{k+f}^{m}(G_{F^{\perp}})-m+k+f-1\big{\\}}$
$\displaystyle\geq m-k-f+1+\min\\{0,-n^{\prime}(G_{F^{\perp}})\\}$
$\displaystyle\geq m-k-f+1-n^{\prime}(G_{F^{\perp}})$
The inequality still holds when $m<k+f$ since $v_{k}^{m}(G)\geq 0$ and
$n^{\prime}(G_{F^{\perp}})\geq 1$. Then, the assertion $n^{\prime}(G)\leq
n^{\prime}(G_{F^{\perp}})+f$ follows by definition of $n^{\prime}(G)$.
Next, we tackle the assertion $n_{k}(G)\leq n_{k+f}(G_{F^{\perp}})$. By Remark
17, recall that $n_{k+f}(G_{F^{\perp}})>k+f$. Then, for $m\geq
n_{k+f}(G_{F^{\perp}})>k+f$, Lemma 66 entails that
$v_{k}^{m}(G)\geq\min\\{m-k-f+1,v_{k+f}^{m}(G_{F^{\perp}})\\}>0$. By
definition of $n_{k}(G)$, it follows that $n_{k}(G)\leq
n_{k+f}(G_{F^{\perp}})$ as desired. ∎
The rest of this section is dedicated to proving Lemma 66. First, we need to
unpack how the coorbit map and the quotient distance interact with the
decomposition $F_{G}\oplus F_{G}^{\perp}$.
###### Lemma 67.
Fix a compact group $G\leq\operatorname{O}(d)$ and a sort index
$i\in\\{1,\dots,|\pi_{0}(G)|\\}$. Let $F:=F_{G}$. Denote by $P_{F}$ and
$P_{F}^{\perp}$ the orthogonal projections of $\mathbb{R}^{d}$ onto $F$ and
$F^{\perp}$ respectively. Then, for $y,z\in\mathbb{R}^{d}$, it holds that
$\Psi_{i}([z],[y])=\Psi_{i}([P_{F}^{\perp}z],[P_{F}^{\perp}y])+\langle
P_{F}z,P_{F}y\rangle$
and
$d([z],[y])^{2}=d([P_{F}^{\perp}z],[P_{F}^{\perp}y])^{2}+\|P_{F}z-P_{F}y\|^{2}.$
###### Proof.
Note that by the $G$-invariant orthogonal decomposition
$\mathbb{R}^{d}=F\oplus_{G}F^{\perp}$, we have
$\operatorname{Id}=P_{F}+P_{F}^{\perp}$, $P_{F}^{\perp}g=gP_{F}^{\perp}$ and
$P_{F}g=gP_{F}=P_{F}$ for any $g\in G$. The first equality is by orthogonal
decomposition and the second equality follows from the following computation
$P_{F}^{\perp}g=P_{F}^{\perp}g(P_{F}+P_{F}^{\perp})=P_{F}^{\perp}gP_{F}+P_{F}^{\perp}gP_{F}^{\perp}=gP_{F}^{\perp}.$
The last step follows from the $G$-invariance of $F$ and $F^{\perp}$. The
third equality $P_{F}g=gP_{F}=P_{F}$ follows by a similar computation and the
observation that $g$ fixes $F$.
Then, for any $K\in\pi_{0}(G)$, we have
$\mathcal{C}([z]_{0},[y]_{0},K)=\sup_{k\in K}\langle kz,y\rangle=\langle
P_{F}z,P_{F}y\rangle+\sup_{k\in K}\langle
kP_{F}^{\perp}z,P_{F}^{\perp}y\rangle=\langle
P_{F}z,P_{F}y\rangle+\mathcal{C}([P_{F}^{\perp}z]_{0},[P_{F}^{\perp}y]_{0},K)$
so that the first assertion follows by sorting. For the second assertion and
by the Pythagorean Theorem, it holds that
$\|gz-y\|^{2}=\|P_{F}^{\perp}(gz-y)\|^{2}+\|P_{F}(gz-y)\|^{2}=\|gP_{F}^{\perp}z-P_{F}^{\perp}y\|^{2}+\|P_{F}z-P_{F}y\|^{2}.$
The assertion then follows by taking the minimum over $g\in G$ on both sides.
∎
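The decomposition in Lemma 67 can be checked numerically in a small example. The sketch below (NumPy assumed; the group and points are illustrative choices) takes $G=\operatorname{SO}(2)$ acting on the first two coordinates of $\mathbb{R}^{3}$, so $F_{G}=\operatorname{span}(e_{3})$, brute-forces the quotient distance over a grid of angles, and compares with the right-hand side of the second identity, using that $\operatorname{SO}(2)$ on $\mathbb{R}^{2}$ has $d([a],[b])=\big|\|a\|-\|b\|\big|$:

```python
import numpy as np

# Lemma 67 check for G = SO(2) acting on the first two coordinates of R^3,
# so that F_G = span(e3) and F_G^perp = R^2 x {0}.
def rot3(th):
    c, s = np.cos(th), np.sin(th)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def quot_dist(z, y, n=20000):
    # brute-force d([z],[y]) = min_{g in G} ||gz - y|| over a grid of angles
    return min(np.linalg.norm(rot3(th) @ z - y)
               for th in np.linspace(0, 2 * np.pi, n, endpoint=False))

z, y = np.array([1.0, 0.0, 0.0]), np.array([0.0, 3.0, 1.0])
d_perp = abs(np.linalg.norm(z[:2]) - np.linalg.norm(y[:2]))  # = 2
d_fix = abs(z[2] - y[2])                                     # = 1
assert abs(quot_dist(z, y) ** 2 - (d_perp ** 2 + d_fix ** 2)) < 1e-3
```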
###### Proof of Lemma 66.
Fix arbitrary templates $\\{z_{i}\\}_{i=1}^{m}\in(\mathbb{R}^{d})^{m}$ and
arbitrary sort indices $\\{p_{i}\\}_{i=1}^{m}$. Denote the corresponding
coorbit filter map by $\Phi$ and the corresponding difference quotient by $Q$.
Fix $V\in\mathbb{R}^{m\times k}$. Suppose that $\Phi$ fails to strongly avoid
$\operatorname{im}(V)$. Then, there exist sequences $x_{n}\to x$ and
$y_{n}\to y$ such that $[x_{n}]\neq[y_{n}]$,
$d([x_{n}],[y_{n}])=\|x_{n}-y_{n}\|$, and
$\lim_{n\to\infty}Q(x_{n},y_{n})\in\operatorname{im}(V)$. By Lemma 67, the
following holds for every $i\in\\{1,\dots,m\\}$
$\displaystyle\frac{\Psi_{p_{i}}([z_{i}],[x_{n}])-\Psi_{p_{i}}([z_{i}],[y_{n}])}{d([x_{n}],[y_{n}])}$
$\displaystyle\qquad=\frac{d([P_{F}^{\perp}x_{n}],[P_{F}^{\perp}y_{n}])}{d([x_{n}],[y_{n}])}\cdot\frac{\Psi_{p_{i}}([P_{F}^{\perp}z_{i}],[P_{F}^{\perp}x_{n}])-\Psi_{p_{i}}([P_{F}^{\perp}z_{i}],[P_{F}^{\perp}y_{n}])}{d([P_{F}^{\perp}x_{n}],[P_{F}^{\perp}y_{n}])}$
$\displaystyle\qquad\qquad\qquad\qquad+\frac{\|P_{F}x_{n}-P_{F}y_{n}\|}{d([x_{n}],[y_{n}])}\cdot\left\langle
P_{F}z_{i},\frac{P_{F}x_{n}-P_{F}y_{n}}{\|P_{F}x_{n}-P_{F}y_{n}\|}\right\rangle$
where we define $\frac{0}{0}=\frac{0}{\|0\|}=0$. Set
$w_{1,n}:=\frac{d([P_{F}^{\perp}x_{n}],[P_{F}^{\perp}y_{n}])}{d([x_{n}],[y_{n}])}\geq
0$ and $w_{2,n}:=\frac{\|P_{F}x_{n}-P_{F}y_{n}\|}{d([x_{n}],[y_{n}])}\geq 0$
and $u_{n}:=\frac{P_{F}x_{n}-P_{F}y_{n}}{\|P_{F}x_{n}-P_{F}y_{n}\|}\in F$.
Denote by $\Phi_{F^{\perp}}$ the coorbit filter bank with group
$G_{F^{\perp}}\leq\operatorname{O}(F^{\perp})$, templates
$\\{P^{\perp}_{F}z_{i}\\}_{i=1}^{m}$ and sort indices $\\{p_{i}\\}_{i=1}^{m}$,
and denote by $Q_{F^{\perp}}$ the corresponding difference quotient. Then, for
every $i\in\\{1,\dots,m\\}$, it holds that
$Q(x_{n},y_{n})=w_{1,n}\cdot
Q_{F^{\perp}}(P^{\perp}_{F}x_{n},P^{\perp}_{F}y_{n})+w_{2,n}\cdot\left\langle
P_{F}z_{i},u_{n}\right\rangle$ (8)
where we take $Q_{F^{\perp}}(P^{\perp}_{F}x_{n},P^{\perp}_{F}y_{n}):=0$ if
$w_{1,n}=0$. By Lemma 67, we have $w_{1,n}^{2}+w_{2,n}^{2}=1$. Then, take a
subsequence such that $w_{j,n}\to w_{j}$ for $j\in\\{1,2\\}$ and $u_{n}\to
u\in F$. There are three cases of interest
* (i)
If $w_{1}=0$, then $w_{2}=1$, $u\in F\cap\mathbb{S}^{d-1}$ and
$\lim_{n\to\infty}Q(x_{n},y_{n})=\left\langle
P_{F}z_{i},u\right\rangle\in\operatorname{im}(V)$. As such,
$\operatorname{im}(\\{\langle
P_{F}z_{i},\cdot\rangle|_{F\setminus\\{0\\}}\\}_{i=1}^{m})\cap\operatorname{im}(V)\neq\varnothing$.
* (ii)
If $w_{2}=0$, then $w_{1}=1$ and
$\lim_{n\to\infty}Q(x_{n},y_{n})=\lim_{n\to\infty}Q_{F^{\perp}}(P_{F}^{\perp}x_{n},P_{F}^{\perp}y_{n})\in\operatorname{im}(V)\subseteq\operatorname{im}(V)+\operatorname{im}(\\{\langle
P_{F}z_{i},\cdot\rangle|_{F}\\}_{i=1}^{m})$, so $Q_{F^{\perp}}$ fails to
strongly avoid $\operatorname{im}(V)+\operatorname{im}(\\{\langle
P_{F}z_{i},\cdot\rangle|_{F}\\}_{i=1}^{m})$.
* (iii)
If $w_{1},w_{2}>0$, then
$\lim_{n\to\infty}Q_{F^{\perp}}(P_{F}^{\perp}x_{n},P_{F}^{\perp}y_{n})=\frac{1}{w_{1}}\lim_{n\to\infty}Q(x_{n},y_{n})+\langle
P_{F}z_{i},-\frac{w_{2}}{w_{1}}u\rangle\in\operatorname{im}(V)+\operatorname{im}(\\{\langle
P_{F}z_{i},\cdot\rangle|_{F}\\}_{i=1}^{m})$. Again, $Q_{F^{\perp}}$ fails to
strongly avoid $\operatorname{im}(V)+\operatorname{im}(\\{\langle
P_{F}z_{i},\cdot\rangle|_{F}\\}_{i=1}^{m})$.
We obtain that $N_{V}^{\\{p_{i}\\}_{i=1}^{m}}\subseteq A_{1}\cup A_{2}$ where
$\displaystyle
N_{V}^{\\{p_{i}\\}_{i=1}^{m}}:=\\{\\{(P_{F}z_{i},P_{F}^{\perp}z_{i})\\}_{i=1}^{m}\in(F\times
F^{\perp})^{m}|\Phi\text{ fails to strongly avoid }\operatorname{im}(V)\\},$
$\displaystyle
A_{1}:=\big{\\{}\\{(P_{F}z_{i},P_{F}^{\perp}z_{i})\\}_{i=1}^{m}\in(F\times
F^{\perp})^{m}\big{|}\operatorname{im}(\\{\langle
P_{F}z_{i},\cdot\rangle|_{F\setminus\\{0\\}}\\}_{i=1}^{m})\cap\operatorname{im}(V)\neq\varnothing\big{\\}},$
$\displaystyle
A_{2}:=\big{\\{}\\{(P_{F}z_{i},P_{F}^{\perp}z_{i})\\}_{i=1}^{m}\in(F\times
F^{\perp})^{m}\big{|}\text{$Q_{F^{\perp}}$ fails to strongly avoid
$\operatorname{im}(V)+\operatorname{im}(\\{\langle
P_{F}z_{i},\cdot\rangle|_{F}\\}_{i=1}^{m})$}\big{\\}}.$
Note that $A_{1}$ and $A_{2}$ are semialgebraic. Denote $f:=\dim(F)$. We bound
dimensions of $A_{1}$ and $A_{2}$ using conservation of dimension (9(c)). For
$A_{1}$, $\\{P_{F}^{\perp}z_{i}\\}_{i=1}^{m}$ has $m(d-f)$ degrees of freedom.
Next, either $\\{\langle
P_{F}z_{i},\cdot\rangle|_{F\setminus\\{0\\}}\\}_{i=1}^{m}$ does not have rank
$f$ or there are at most $k$ degrees of freedom for the witness of
intersection $v\in\operatorname{im}(V)\cap\operatorname{im}(\\{\langle
P_{F}z_{i},\cdot\rangle|_{F\setminus\\{0\\}}\\}_{i=1}^{m})$. Then, by a
singular value decomposition argument as in the proof of Lemma 60, the degrees
of freedom of $\\{P_{F}z_{i}\\}_{i=1}^{m}$ are bounded by $mf-1-(m-f-k)$.
Hence, $\dim(A_{1})\leq md-1-(m-f-k)$.
For $A_{2}$, $\\{P_{F}z_{i}\\}_{i=1}^{m}$ has $mf$ degrees of freedom and for
each fixed $\\{P_{F}z_{i}\\}_{i=1}^{m}$, the system $\\{\langle
P_{F}z_{i},\cdot\rangle|_{F\setminus\\{0\\}}\\}_{i=1}^{m}$ has rank at most
$f$, so there exists $W\in\mathbb{R}^{m\times(k+f)}$ such that
$\operatorname{im}(W)=\operatorname{im}(V)+\operatorname{im}(\\{\langle
P_{F}z_{i},\cdot\rangle|_{F\setminus\\{0\\}}\\}_{i=1}^{m})$. Then, by
definition of $v_{k+f}^{m}(G_{F^{\perp}})$, we get
$\displaystyle\dim\\{\\{P_{F}^{\perp}z_{i}\\}_{i=1}^{m}\in(F^{\perp})^{m}:$
$\displaystyle\text{$\Phi_{F^{\perp}}$ fails to strongly avoid
$\operatorname{im}(W)$ }\\}\leq m(d-f)-v_{k+f}^{m}(G_{F^{\perp}}).$
By conservation of dimension, we obtain $\dim(A_{2})\leq md-
v_{k+f}^{m}(G_{F^{\perp}})$. Since $N_{V}^{\\{p_{i}\\}_{i=1}^{m}}\subseteq
A_{1}\cup A_{2}$, it follows that $md-\dim(N_{V}^{\\{p_{i}\\}_{i=1}^{m}})\geq
md-\max\\{\dim(A_{1}),\dim(A_{2})\\}\geq\min\\{m-k-f+1,v_{k+f}^{m}(G_{F^{\perp}})\\}$.
Since $V$ and $\\{p_{i}\\}_{i=1}^{m}$ were arbitrary, the result follows. ∎
## 8 Strong Avoidance for Groups with Finite-Index Stabilizers
###### Definition 68.
Fix a compact subgroup $G\leq\operatorname{O}(d)$. We say $H\leq G$ has finite
index if $G/H$ is a finite set. We say $G$ has a finite-index stabilizer at
$x\in\mathbb{R}^{d}$ if $G_{x}$ has finite index. We say $G$ has finite-index
stabilizers if $G_{x}$ has finite index for every $x\in P(G)^{c}$.
In this section, we investigate the local avoidance behavior at points with
finite-index stabilizers (Lemma 73). More importantly, we solve Problem 18 for
the case where $G$ has finite-index stabilizers:
###### Theorem 69.
Suppose $G\leq\operatorname{O}(d)$ is compact with finite-index stabilizers
and cohomogeneity $c$. Then, $n^{\prime}(G)\leq 2c$.
With Theorem 69 and as in Remark 55, we obtain
###### Corollary 70.
Suppose $G\leq\operatorname{O}(d)$ is compact with finite-index stabilizers
and cohomogeneity $c$. Fix sort indices
$p_{1},\dots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$. Then, for a fixed or generic
$V\in\mathbb{R}^{n\times k}$ and generic
$z_{1},\ldots,z_{n}\in\mathbb{R}^{d}$, the coorbit filter bank
$\Phi\colon\mathbb{R}^{d}/G\to\mathbb{R}^{n}$ defined by
$\Phi([x]):=\\{\Psi_{p_{i}}([z_{i}],[x])\\}_{i=1}^{n}$ strongly avoids
$\operatorname{im}(V)$ provided $n\geq 2c+k$.
In particular, when $k=0$, the coorbit filter bank is bi-Lipschitz provided
$n\geq 2c$. For completeness, we give a classification of this class of
groups:
###### Theorem 71.
Suppose $G\leq\operatorname{O}(d)$ is compact. Then, $G$ has finite-index
stabilizers if and only if $G$ is a finite group or $G$ acts either freely or
transitively on the sphere in $F_{G}^{\perp}$. In fact, one of the following
cases occurs:
1. (a)
$G$ is a finite group.
2. (b)
$G$ acts transitively on the sphere in $F_{G}^{\perp}$.
3. (c)
$G=S^{1}\subseteq\mathbb{C}$ and
$\mathbb{R}^{d}\cong_{\mathbb{R}}\mathbb{C}^{l}\oplus\mathbb{R}^{f}$ such that
for $g=e^{i\theta}\in S^{1}$ and $(c,v)\in\mathbb{C}^{l}\oplus\mathbb{R}^{f}$,
the action is given by $e^{i\theta}(c,v)=(e^{i\theta}c,v)$.
4. (d)
$G=S^{3}\subseteq\mathbb{H}$ and
$\mathbb{R}^{d}\cong_{\mathbb{R}}\mathbb{H}^{l}\oplus\mathbb{R}^{f}$ such that
for $g=q\in S^{3}$ and $(c,v)\in\mathbb{H}^{l}\oplus\mathbb{R}^{f}$, the
action is given by $q(c,v)=(qc,v)$.
5. (e)
$G=S^{1}\rtimes_{\varphi}\mathbb{Z}/2\mathbb{Z}$ and
$\mathbb{R}^{d}\cong_{\mathbb{R}}(\mathbb{C}^{2})^{l}\oplus\mathbb{R}^{k}\oplus\mathbb{R}^{f}$
where $\varphi:\mathbb{Z}/2\mathbb{Z}\to\operatorname{Aut}(S^{1})$ is defined
by $\varphi(0)$ being the identity and $\varphi(1)$ being conjugation, such
that for $g=(e^{i\theta},j)\in G$ and
$(\\{(z_{2p-1},z_{2p})\\}_{p=1}^{l},v_{1},v_{2})\in(\mathbb{C}^{2})^{l}\oplus\mathbb{R}^{k}\oplus\mathbb{R}^{f}$,
the action is given by
$(e^{i\theta},j)\cdot(\\{(z_{2p-1},z_{2p})\\}_{p=1}^{l},v_{1},v_{2})=\begin{cases}(\\{(e^{i\theta}z_{2p-1},e^{i\theta}z_{2p})\\}_{p=1}^{l},v_{1},v_{2})&j=0,\\\
(\\{(-e^{i\theta}\overline{z_{2p-1}},e^{i\theta}\overline{z_{2p}})\\}_{p=1}^{l},-v_{1},v_{2})&j=1.\end{cases}$
###### Proof.
The proof is technical and is postponed to Appendix C. ∎
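To make case (c) concrete, the following sketch (NumPy assumed; the parameters $l=f=1$ are an illustrative choice) verifies on samples that points of the fixed line have stabilizer all of $S^{1}$ (index $1$), while points with nonzero $\mathbb{C}$-component have trivial stabilizer but are principal, so the finite-index condition of Definition 68 only ever tests the former:

```python
import numpy as np

# Case (c) of Theorem 71 with d = 3, l = f = 1: G = S^1 acting on C + R by
# e^{i theta} (z, t) = (e^{i theta} z, t).  The stabilizer condition only
# involves the C-component z.
thetas = np.linspace(0, 2 * np.pi, 720, endpoint=False)

def stabilizer_angles(z, t, tol=1e-9):
    return [th for th in thetas if abs(np.exp(1j * th) * z - z) < tol]

# fixed point (z = 0): stabilizer is all of S^1, which has index 1
assert len(stabilizer_angles(0.0, 1.0)) == len(thetas)
# principal point (z != 0): only theta = 0 fixes it (trivial stabilizer)
assert len(stabilizer_angles(1.0 + 0j, 1.0)) == 1
```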
While the class of groups with finite-index stabilizers is limited, we hope
that the induction method we follow in this section informs future
developments toward strong avoidance and/or bi-Lipschitz properties for
coorbit filter banks.
The rest of this section is dedicated to proving Theorem 69. For $H\leq G$,
denote $\operatorname{Fix}(H):=\\{x\in\mathbb{R}^{d}:H=G_{x}\\}$. Then, by
Section 7.4 in [27], $\operatorname{Fix}(H)$ is open in the linear space
$\overline{\operatorname{Fix}(H)}=\\{x\in\mathbb{R}^{d}:H\subseteq G_{x}\\}$.
We need the following corollary of the classification:
###### Corollary 72.
Suppose $G\leq\operatorname{O}(d)$ is compact with finite-index stabilizers.
Then, every nonprincipal stabilizer contains the entire principal conjugacy
class $(G_{P})$ of $G$.
###### Proof.
By Theorem 71, $G$ has finite-index stabilizers if and only if either (1) $G$
is a finite group or (2) $G$ acts freely on the sphere in $F_{G}^{\perp}$ or
(3) $G$ acts transitively on the sphere in $F_{G}^{\perp}$. In the first case,
there are finitely many proper $1$-eigenspaces for the finitely many
nonidentity elements of $G$ so the principal isotropy group is trivial. In the
second case and due to freeness, $G$ also has trivial principal isotropy
group. In the third case and due to transitivity, the only nonprincipal
stabilizer is $G$ itself. The result follows. ∎
We also need the following lemma:
###### Lemma 73.
Suppose $G\leq\operatorname{O}(d)$ is compact with cohomogeneity $c$ and
identity component $G_{0}$. Suppose $x\in P(G)^{c}$ and $G_{x}$ has finite
index. Set $F:=\operatorname{Fix}(G_{x})$. For
$p_{1},\dots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$ and
$z_{1},\ldots,z_{n}\in\mathbb{R}^{d}$, denote by
$\Phi\colon\mathbb{R}^{d}/G\to\mathbb{R}^{n}$ the coorbit filter bank defined
by $\Phi([x]):=\\{\Psi_{p_{i}}([z_{i}],[x])\\}_{i=1}^{n}$. Then, for
$V\in\mathbb{R}^{n\times k}$, consider the failure set
$D_{V}:=\big{\\{}\\{z_{i}\\}_{i=1}^{n}\in(\mathbb{R}^{d})^{n}:\Phi\text{ fails
to locally avoid }\operatorname{im}(V)\text{ at some
}x\in\operatorname{Fix}(G_{x})\big{\\}}.$
Then, $D_{V}$ is semialgebraic and
$\dim(D_{V})\leq nd-1-(n-k-2\dim(F)-n^{\prime}(G_{x}|_{F^{\perp}})).$
###### Proof.
Fix $V\in\mathbb{R}^{n\times k}$. Semialgebraicity follows from $D_{V}$ being
a projection of the $V$ section of the semialgebraic set $D$ defined in Lemma
15. Set $F:=\operatorname{Fix}(G_{x})$ and $f:=\dim(\overline{F})$. By a
lifting technique as in the proofs of Lemmas 60 and 54, we fix a witness of
failure $x\in F\cap\mathbb{S}^{d-1}$. We note that the space of such witnesses
has dimension at most $f-1$. Then, by conservation of dimension, it suffices
to show that $\dim(D_{V}^{x})\leq nd-(n-k-f-n^{\prime}(G_{x}|_{F^{\perp}}))$
where
$D_{V}^{x}:=\big{\\{}\\{z_{i}\\}_{i=1}^{n}\in(\mathbb{R}^{d})^{n}:\Phi\text{
fails to locally avoid }\operatorname{im}(V)\text{ at }x\big{\\}}.$
By a similar argument as in the proof of Lemma 60, we may assume that there
exists $m\in\\{0,\dots,n\\}$ such that after modifying $D_{V}^{x}$, we have
$\begin{cases}\\{z_{i}\\}_{i=m+1}^{n}\in\prod_{i=m+1}^{n}(Q_{x}^{p_{i}})^{c},\\\
\\{z_{i}\\}_{i=1}^{m}\in\big{\\{}\\{z_{i}\\}_{i=1}^{m}\in\prod_{i=1}^{m}Q_{x}^{p_{i}}:\Phi_{m}\text{
fails to locally avoid }\operatorname{im}(V_{m})\text{ at
}x\big{\\}}\end{cases}$
for $\\{z_{i}\\}_{i=1}^{n}\in D_{V}^{x}$. Here, $V_{m}$ denotes a truncation
of $V$ to its first $m$ rows and $\Phi_{m}$ denotes the coorbit filter bank
corresponding to templates $\\{z_{i}\\}_{i=1}^{m}$ and sort indices
$\\{p_{i}\\}_{i=1}^{m}$. Since $G_{x}$ has finite index, we have that $[x]$ is
finite, and so with $l:=\operatorname{size}([x])$, there exist
$h_{1},\dots,h_{l}$ such that $[x]=\\{h_{j}x\\}_{j=1}^{l}$ and
$Q_{x}^{p_{i}}=\sqcup_{1\leq j\leq l}(h_{j}\cdot V_{x}^{p_{i}})$. For each
$z_{i}\in Q_{x}^{p_{i}}$, there exists $h_{i}\in\\{h_{j}\\}_{j=1}^{l}$ such
that $h_{i}z_{i}\in V_{x}^{p_{i}}$. By taking a finite union over the
possibilities of $h_{i}$ and since there are at most $(n-m)(d-1)$ degrees of
freedom contributed by $\\{z_{i}\\}_{i=m+1}^{n}$, it suffices to show that
$\dim(D_{V_{m}}^{x})\leq md-(m-k-f-n^{\prime}(G_{x}|_{F^{\perp}}))$ where
$D_{V_{m}}^{x}:=\big{\\{}\\{z_{i}\\}_{i=1}^{m}\in\prod_{i=1}^{m}V_{x}^{p_{i}}:\Phi_{m}\text{
fails to locally avoid }\operatorname{im}(V_{m})\text{ at }x\big{\\}}.$
Fix arbitrary $\\{z_{i}\\}_{i=1}^{m}\in D_{V_{m}}^{x}$ and let $Q_{m}$ denote
the difference quotient corresponding to $\Phi_{m}$. By definition of
$D_{V_{m}}^{x}$, there exist sequences $x_{j},y_{j}\to x$ such that
$[x_{j}]\neq[y_{j}]$, $d([x_{j}],[y_{j}])=\|x_{j}-y_{j}\|$, and
$\lim_{j\to\infty}Q_{m}(x_{j},y_{j})\in\operatorname{im}(V_{m})$. By 34(b) and
for large enough $j$, we have
$\Psi_{p_{i}}([z_{i}],[x_{j}])=\Psi_{p_{i}^{\prime}}^{G_{x}}([z_{i}]_{G_{x}},[x_{j}]_{G_{x}})$
and
$\Psi_{p_{i}}([z_{i}],[y_{j}])=\Psi_{p_{i}^{\prime}}^{G_{x}}([z_{i}]_{G_{x}},[y_{j}]_{G_{x}})$
where $p_{i}^{\prime}=p_{i}+1\mod|\pi_{0}(G_{x})|+1$. Then, since
$d([x_{j}],[y_{j}])=\|x_{j}-y_{j}\|=d_{G_{x}}([x_{j}]_{G_{x}},[y_{j}]_{G_{x}})$,
the following holds for every $i\in\\{1,\dots,m\\}$
$\frac{\Psi_{p_{i}}([z_{i}],[x_{j}])-\Psi_{p_{i}}([z_{i}],[y_{j}])}{d([x_{j}],[y_{j}])}=\frac{\Psi_{p_{i}^{\prime}}^{G_{x}}([z_{i}]_{G_{x}},[x_{j}]_{G_{x}})-\Psi_{p_{i}^{\prime}}^{G_{x}}([z_{i}]_{G_{x}},[y_{j}]_{G_{x}})}{d_{G_{x}}([x_{j}]_{G_{x}},[y_{j}]_{G_{x}})}.$
We obtain that $D_{V_{m}}^{x}$ is a subset of
$\\{\\{z_{i}\\}_{i=1}^{m}\in(\mathbb{R}^{d})^{m}:\Phi^{G_{x}}_{m}\text{ fails
to strongly avoid }\operatorname{im}(V_{m})\\}$. By Theorem 65 and the
definitions of $v_{k}^{m}(G_{x})$ and $n^{\prime}(G_{x})$, it follows that
$\dim(D_{V_{m}}^{x})\leq md-v_{k}^{m}(G_{x})<md-(m-k-n^{\prime}(G_{x}))\leq
md-(m-k-f-n^{\prime}(G_{x}|_{F^{\perp}})),$
as desired. ∎
###### Proof of Theorem 69.
Fix $n\in\mathbb{N}$, $k\in\mathbb{Z}_{\geq 0}$ and
$p_{1},\dots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$. Fix
$z_{1},\ldots,z_{n}\in\mathbb{R}^{d}$ and consider the coorbit filter bank
$\Phi\colon\mathbb{R}^{d}/G\to\mathbb{R}^{n}$ defined by
$\Phi([x]):=\\{\Psi_{p_{i}}([z_{i}],[x])\\}_{i=1}^{n}$. We argue by induction
on $d$. First, the case $d=1$ is trivial since all nonzero points are
principal, so the result follows from Lemmas 54 and 60. Assume that for
dimensions $l\in\\{1,\dots,d-1\\}$, every compact $G\leq\operatorname{O}(l)$
with finite-index stabilizers satisfies $n^{\prime}(G)\leq 2c$. Fix
$V\in\mathbb{R}^{n\times k}$, let $x\in P(G)^{c}$ and denote
$F:=\overline{\operatorname{Fix}(G_{x})}$ and $f:=\dim(F)$.
Let $c_{F^{\perp}}^{G_{x}}$ be the cohomogeneity of
$G_{x}|_{F^{\perp}}\leq\operatorname{O}(F^{\perp})$. We claim that
$G_{x}|_{F^{\perp}}$ has finite-index stabilizers and
$c_{F^{\perp}}^{G_{x}}=c-f$. By Corollary 72, $G_{x}$ contains all of
$(G_{P})$. The stabilizer in $G_{x}$ of $y\in F^{\perp}$, given by $G_{y}\cap
G_{x}$, is either in $(G_{P})$ or has finite index. As such, $G_{x}$ has
finite-index stabilizers and the dimension of a $G_{x}$-principal orbit in
$F^{\perp}$ is given by
$(d-f)-c_{F^{\perp}}^{G_{x}}=\dim(G_{0})-\dim(G_{p})=d-c$ where $p\in P(G)$.
We get that $c_{F^{\perp}}^{G_{x}}=c-f$ and the claim follows.
Hence, by induction, $n^{\prime}(G_{x}|_{F^{\perp}})\leq 2c-2f$. By Lemma
73, we get $\dim(D_{V}^{G_{x}})\leq nd-1-(n-k-2c)$ where
$D_{V}^{G_{x}}:=\big{\\{}\\{z_{i}\\}_{i=1}^{n}\in(\mathbb{R}^{d})^{n}:\Phi\text{
fails to locally avoid }\operatorname{im}(V)\text{ at some
}x\in\operatorname{Fix}(G_{x})\big{\\}}.$
Since $G$ has finite-index stabilizers, the set $S:=\\{H\leq G:\exists x\in
P(G)^{c},H=G_{x}\\}$ is finite. Hence, by taking a finite union of
$D_{V}^{G_{x}}$ over $S$ and by combining that with the bounds in Lemmas 54
and 60, we obtain $n^{\prime}(G)\leq 2c$ as desired. ∎
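As an empirical illustration of Corollary 70 with $k=0$, take the finite group $G=\\{\pm\operatorname{Id}\\}$ on $\mathbb{R}^{2}$ (so $c=d=2$) with sort indices $p_{i}\equiv 1$, for which $\Phi([x])=\\{|\langle z_{i},x\rangle|\\}_{i=1}^{4}$ and $[x]=\\{x,-x\\}$. The sketch below (NumPy assumed; the random templates and sampling are illustrative, not a proof) estimates the lower Lipschitz ratio over random pairs with $n=2c=4$ templates and checks that it stays positive:

```python
import numpy as np

# G = {Id, -Id} on R^2 is finite, so c = d = 2; with n = 2c = 4 generic
# templates, Phi([x]) = (|<z_i, x>|)_{i=1}^4 and [x] = {x, -x}.
rng = np.random.default_rng(1)
Z = rng.standard_normal((4, 2))                    # generic templates

phi = lambda x: np.abs(Z @ x)
qdist = lambda x, y: min(np.linalg.norm(x - y), np.linalg.norm(x + y))

ratios = []
for _ in range(5000):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    d = qdist(x, y)
    if d > 1e-8:
        ratios.append(np.linalg.norm(phi(x) - phi(y)) / d)
assert min(ratios) > 1e-6   # empirical lower Lipschitz ratio stays positive
```

The sampled minimum only upper-bounds the true constant $\sigma_{\min}^{G}$ of Definition 80, but a sampled ratio near zero would flag a bi-Lipschitz failure.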
## 9 The Component Voronoi Characteristic and Connected Reduction
In this section, we mimic and generalize the Voronoi characteristic
developments in Section 3 of [29]. The main goal is to find quantities
$\chi^{\\{p_{i}\\}_{i=1}^{n}}_{\pi_{0}}(G)\leq\chi^{T}_{\pi_{0}}(G)\leq|\pi_{0}(G)|$
(Definition 83) with which we are able to reduce the problem of strong
avoidance to the identity component of the group. When fixed sort indices are
considered, we obtain the following reduction:
###### Theorem 74.
Fix a compact group $G\leq\operatorname{O}(d)$ and sort indices
$p_{1},\dots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$. Fix $V\in\mathbb{R}^{n\times
k}$. Then, for any generic templates $z_{1},\ldots,z_{n}\in\mathbb{R}^{d}$,
the corresponding coorbit filter bank $\Phi$ strongly avoids
$\operatorname{im}(V)$ provided
$n>\chi_{\pi_{0}}^{\\{p_{i}\\}_{i=1}^{n}}(G)\cdot(n_{k}(G_{0})-1)$.
With all sort indices considered, we obtain the following result:
###### Theorem 75.
Fix a compact group $G\leq\operatorname{O}(d)$. Then, $v_{k}^{n}(G)\geq
v_{k}^{\lceil n/\chi_{\pi_{0}}^{T}(G)\rceil}(G_{0})$ and
$n_{k}(G)\leq\chi_{\pi_{0}}^{T}(G)(n_{k}(G_{0})-1)+1$.
The rest of this section aims to set up all the tools necessary to the end of
proving Theorems 74 and 75.
###### Definition 76.
Fix a compact group $G\leq\operatorname{O}(d)$ and a sort index
$i\in\\{1,\dots,|\pi_{0}(G)|\\}$. For each $x,y\in\mathbb{R}^{d}$, we define
$S^{i}([x]_{0},[y]_{0}):=\big{\\{}[q]_{0}\in\pi_{0}([y]):V_{[q]_{0}}^{i}\cap
V_{[x]_{0}}^{i}\neq\varnothing\big{\\}}.$
In words, the components of $[x]$ and $[y]$ decompose $\mathbb{R}^{d}$ into
component Voronoi cells in different ways, and $S^{i}([x]_{0},[y]_{0})$
captures which closures of component Voronoi cells corresponding to $[y]_{0}$
are needed to cover $\overline{V_{[x]_{0}}^{i}}$. This is captured in part (b)
of the following lemma:
###### Lemma 77.
Fix compact $G\leq\operatorname{O}(d)$, sort index
$i\in\\{1,\dots,|\pi_{0}(G)|\\}$, $x,y\in\mathbb{R}^{d}$, and
$T\subseteq\pi_{0}([y])$. Consider the statements:
* (a)
$T\supseteq S^{i}([x]_{0},[y]_{0})$.
* (b)
$\bigcup_{[p]_{0}\in
T}\overline{V_{[p]_{0}}^{i}}\supseteq\overline{V_{[x]_{0}}^{i}}$.
* (c)
For every $z\in\mathbb{R}^{d}$, there exists $[v]_{0}\in\pi_{0}([z])$ such
that $\Psi_{i}([z],[x])=\mathcal{C}([v]_{0},[x]_{0},G_{0})$ and
$T\cap\big{\\{}[p]_{0}\in\pi_{0}([y]):\Psi_{i}([z],[y])=\mathcal{C}([v]_{0},[p]_{0},G_{0})\big{\\}}\neq\varnothing.$
Then (a) $\Leftrightarrow$ (b) $\Rightarrow$ (c), and furthermore, (c)
$\Rightarrow$ (b) holds if $i=1$ and $x\in P_{\pi_{0}}(G)$.
###### Proof.
(a)$\Rightarrow$(b). Select $[q]_{0}\in\pi_{0}([y])\setminus T$. Since
$S^{i}([x]_{0},[y]_{0})\subseteq T$, it follows that $V_{[q]_{0}}^{i}\cap
V_{[x]_{0}}^{i}=\varnothing$. Thus, $(V_{[x]_{0}}^{i})^{c}\supseteq
V_{[q]_{0}}^{i}$, and since $(V_{[x]_{0}}^{i})^{c}$ is closed (38(d)), we get
$(V_{[x]_{0}}^{i})^{c}\supseteq\overline{V_{[q]_{0}}^{i}}$, meaning
$\overline{V_{[q]_{0}}^{i}}\cap V_{[x]_{0}}^{i}=\varnothing$. As such,
$V_{[x]_{0}}^{i}\subseteq\mathbb{R}^{d}\setminus\bigg{(}\bigcup_{[q]_{0}\in\pi_{0}([y])\setminus
T}\overline{V_{[q]_{0}}^{i}}\bigg{)}\subseteq\bigcup_{[p]_{0}\in
T}\overline{V_{[p]_{0}}^{i}}.$
The result now follows from the fact that the right-hand side is closed.
(b)$\Rightarrow$(a). Select $[q]_{0}\in\pi_{0}([y])\setminus T$. Then our
assumption on $T$ implies
$V_{[x]_{0}}^{i}\subseteq\overline{V_{[x]_{0}}^{i}}\subseteq\bigcup_{[p]_{0}\in
T}\overline{V_{[p]_{0}}^{i}}\subseteq\bigcup_{[p]_{0}\in\pi_{0}([y])\setminus\\{[q]_{0}\\}}\overline{V_{[p]_{0}}^{i}}\subseteq(V_{[q]_{0}}^{i})^{c}.$
As such, $V_{[q]_{0}}^{i}\cap V_{[x]_{0}}^{i}=\varnothing$, and so
$[q]_{0}\not\in S^{i}([x]_{0},[y]_{0})$.
(b)$\Rightarrow$(c). By Lemma 38, take $[v]_{0}\in\pi_{0}([z])$ such that
$\Psi_{i}([z],[x])=\mathcal{C}([v]_{0},[x]_{0},G_{0})$ and
$v\in\overline{V_{[x]_{0}}^{i}}$. By assumption, there exists $[w]_{0}\in T$
such that $v\in\overline{V_{[w]_{0}}^{i}}$ so that by Lemma 38,
$[w]_{0}\in\\{[p]_{0}\in\pi_{0}([y]):\Psi_{i}([z],[y])=\mathcal{C}([v]_{0},[p]_{0},G_{0})\big{\\}}$.
Then, $[w]_{0}$ witnesses the desired nonemptiness.
(c)$\Rightarrow$(b). Suppose $x\in P_{\pi_{0}}(G)$ and $[z]_{0}\in
V_{[x]_{0}}^{1}$. Then, Lemma 39 gives that $[x]_{0}\in V_{[z]_{0}}^{1}$. By
definition, it follows that
$\\{[z]_{0}\\}=\arg\max_{[p]_{0}\in\pi_{0}([z])}\langle\langle[x]_{0},[p]_{0}\rangle\rangle_{G_{0}}$.
By assumption, there exists $[v]_{0}\in\\{[z]_{0}\\}$ such that
$T\cap\arg\max_{[q]_{0}\in\pi_{0}([y])}\langle\langle[q]_{0},[v]_{0}\rangle\rangle_{G_{0}}\neq\varnothing$.
That is, by Lemma 38, there exists $[p]_{0}\in T$ such that
$[z]_{0}\in\overline{V_{[p]_{0}}^{1}}$. This shows that
$V_{[x]_{0}}^{1}\subseteq\bigcup_{[p]_{0}\in T}\overline{V_{[p]_{0}}^{1}}$, and so
we are done by taking closures. ∎
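For intuition, $S^{1}([x]_{0},[y]_{0})$ can be approximated by sampling in a small finite example, where orbit components are single points and, as in the argmax description in the proof above, the $i=1$ cell of $q\in[y]$ is the sector on which $\langle q,\cdot\rangle$ is maximal over $[y]$. A sketch with $G=\mathbb{Z}/4$ acting on $\mathbb{R}^{2}$ by rotations (NumPy assumed; all choices illustrative):

```python
import numpy as np

# Finite group: G = Z/4 acting on R^2 by rotations; orbits are 4-point sets
# and the i = 1 component Voronoi cell of an orbit point q is the sector
# where <q, w> is maximal over the orbit.
def orbit(x):
    R = np.array([[0.0, -1.0], [1.0, 0.0]])  # rotation by 90 degrees
    return [np.linalg.matrix_power(R, j) @ x for j in range(4)]

def cell_index(w, orb):
    return int(np.argmax([q @ w for q in orb]))

def S1(x, y, samples=10000):
    # indices of cells of [y] met by the cell of x, estimated by sampling
    ox, oy = orbit(x), orbit(y)
    hit = set()
    for th in np.linspace(0, 2 * np.pi, samples, endpoint=False):
        w = np.array([np.cos(th), np.sin(th)])
        if cell_index(w, ox) == 0:          # w lies in the cell of x itself
            hit.add(cell_index(w, oy))
    return hit

x = np.array([1.0, 0.0])
y = np.array([np.cos(np.pi / 6), np.sin(np.pi / 6)])  # x rotated by 30 deg
assert len(S1(x, y)) == 2  # the cell of x meets exactly two cells of [y]
```

Here the cell of $x$ is the sector $(-45^{\circ},45^{\circ})$, which meets the $[y]$-sectors centered at $30^{\circ}$ and $300^{\circ}$, matching the sampled answer.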
###### Definition 78.
Define the auxiliary set
$\mathcal{O}_{\pi_{0}}(\\{(z_{i},p_{i})\\}_{i=1}^{n},G):=P_{\pi_{0}}(G)\cap\bigcap_{i=1}^{n}Q_{[z_{i}]_{0}}^{p_{i}}$.
In the following corollary, (a) follows from Corollary 41 while part (b)
follows from Lemma 77.
###### Corollary 79.
Fix compact $G\leq\operatorname{O}(d)$, $z_{1},\ldots,z_{n}\in\mathbb{R}^{d}$,
$p_{1},\ldots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$,
$x\in\mathcal{O}(\\{(z_{i},p_{i})\\}_{i=1}^{n},G)$, and $y\in\mathbb{R}^{d}$.
* (a)
For every $i\in\\{1,\ldots,n\\}$, the set
$\big{\\{}[p]_{0}\in\pi_{0}([z_{i}]_{0}):\Psi_{p_{i}}([z],[x])=\mathcal{C}([p]_{0},[x]_{0},G_{0})\big{\\}}$
consists of a single element $v_{i}([x]_{0})$.
* (b)
There is a nonempty set $\mathcal{F}([x]_{0},[y]_{0})$ of choice functions
$f\colon\\{1,\ldots,n\\}\to\pi_{0}([y])$ such that
$f(i)\in
S^{p_{i}}([x]_{0},[y]_{0})\cap\big{\\{}[p]_{0}\in\pi_{0}([y]):\Psi_{p_{i}}([z],[y])=\mathcal{C}([v_{i}(x)]_{0},[p]_{0},G_{0})\big{\\}}\neq\varnothing$.
In words, $[z_{i}]_{0}$ is realized as $v_{i}([x]_{0})$ with respect to
$[x]_{0}$ while $[y]_{0}$ is realized as $f(i)\in S^{p_{i}}([x]_{0},[y]_{0})$
with respect to $v_{i}([x]_{0})$. The importance of what we have introduced
thus far will become apparent once we show how it interacts with the following
quantitative interpretation of strong avoidance:
###### Definition 80.
Given compact $G\leq\operatorname{O}(d)$, $V\in\mathbb{R}^{n\times k}$,
$\\{z_{i}\\}_{i=1}^{n}\in(\mathbb{R}^{d})^{n}$ and
$p_{1},\ldots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$, the optimal strong
avoidance bound for the corresponding coorbit filter bank $\Phi$ is denoted by
$\sigma_{\min{}}^{G}(\\{(z_{i},p_{i})\\}_{i=1}^{n},V):=\inf_{\begin{subarray}{c}[x],[y]\in\mathbb{R}^{d}/G\\\
{[x]\neq[y]}\end{subarray}}d\bigg{(}\frac{\Phi([x])-\Phi([y])}{d_{G}([x],[y])},\operatorname{im}(V)\bigg{)}.$
When $p_{i}\equiv 1$, we shorten the notation to
$\sigma_{\min{}}^{G}(\\{z_{i}\\}_{i=1}^{n},V)$. When $n=0$, we take
$\sigma_{\min{}}^{G}(\varnothing,V):=0$. Moreover, for
$V\in\mathbb{R}^{n\times k}$ and $I\subseteq[n]$, we denote by
$V_{I}\in\mathbb{R}^{|I|\times k}$ the truncation of $V$ which keeps the rows
corresponding to indices in $I$.
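To build intuition for this quantity, consider the degenerate case of the trivial group $G=\\{I\\}$ (an illustrative sketch, not taken from the text; all names are made up). There $\Phi(x)=Zx$ for the template matrix $Z$ with rows $z_{i}^{T}$, the quotient distance is the Euclidean distance, and the bound reduces to $\inf_{\|u\|=1}d(Zu,\operatorname{im}(V))$, i.e., the smallest singular value of $(I-P_{V})Z$, where $P_{V}$ is the orthogonal projector onto $\operatorname{im}(V)$:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 4, 6, 2                       # ambient dim, templates, dim of im(V)
Z = rng.standard_normal((n, d))         # Phi(x) = Z @ x when G is trivial
V = rng.standard_normal((n, k))

Q, _ = np.linalg.qr(V)                  # orthonormal basis of im(V)
P = Q @ Q.T                             # orthogonal projector onto im(V)

# sigma_min: inf over unit u of the distance from Z u to im(V)
M = (np.eye(n) - P) @ Z
sigma = np.linalg.svd(M, compute_uv=False)[-1]

# Brute-force check over random unit directions u = (x - y) / ||x - y||
U = rng.standard_normal((d, 200_000))
U /= np.linalg.norm(U, axis=0)
brute = np.linalg.norm(M @ U, axis=0).min()
print(sigma, brute)                     # brute approaches sigma from above
```

When this bound is positive, the filter bank strongly avoids $\operatorname{im}(V)$.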
###### Remark 81.
$\Phi$ strongly avoids $\operatorname{im}(V)$ if and only if
$\sigma_{\min}^{G}(\\{(z_{i},p_{i})\\}_{i=1}^{n},V)>0$.
The following theorem shows how we can leverage all that we have introduced
thus far to reduce a coorbit filter bank over a nonconnected group to a max
filter bank over the identity component of that group, by passing through the
buckets supplied by $S^{\\{p_{i}\\}_{i=1}^{n}}([x]_{0},[y]_{0})$ defined
below:
###### Theorem 82.
Given compact $G\leq\operatorname{O}(d)$, $V\in\mathbb{R}^{n\times k}$,
$z_{1},\ldots,z_{n}\in\mathbb{R}^{d}$ and
$p_{1},\ldots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$, put
$S^{\\{p_{i}\\}_{i=1}^{n}}([x]_{0},[y]_{0}):=\cup_{i=1}^{n}S^{p_{i}}([x]_{0},[y]_{0})$
and put
$\alpha(\\{(z_{i},p_{i})\\}_{i=1}^{n},V,G):=\inf_{x,y\in\mathcal{O}(\\{(z_{i},p_{i})\\}_{i=1}^{n},G)}\max_{f\in\mathcal{F}(x,y)}\bigg{(}\sum_{w\in
S^{\\{p_{i}\\}_{i=1}^{n}}([x]_{0},[y]_{0})}\sigma_{\min{}}^{G_{0}}\big{(}\\{v_{i}(x)\\}_{i\in
f^{-1}(w)},V_{f^{-1}(w)}\big{)}^{2}\bigg{)}^{1/2},$
where $v_{i}(x)$ and $\mathcal{F}(x,y)$ are defined as in Corollary 79. The
coorbit filter bank $\Phi\colon\mathbb{R}^{d}/G\to\mathbb{R}^{n}$ defined by
$\Phi([x]):=\\{\Psi_{p_{i}}([z_{i}],[x])\\}_{i=1}^{n}$ satisfies
$\sigma_{\min{}}^{G}(\\{(z_{i},p_{i})\\}_{i=1}^{n},V)\geq\alpha(\\{(z_{i},p_{i})\\}_{i=1}^{n},V,G).$
###### Proof.
It suffices to prove
$\left\|\frac{\Phi([x])-\Phi([y])}{d([x],[y])}-v\right\|\geq\alpha(\\{(z_{i},p_{i})\\}_{i=1}^{n},V,G)$
for all $v\in\operatorname{im}(V)$ and
$x,y\in\mathcal{O}(\\{(z_{i},p_{i})\\}_{i=1}^{n},G)$ with $[x]\neq[y]$; this
follows from the continuity of the left-hand side and the density of
$\mathcal{O}(\\{(z_{i},p_{i})\\}_{i=1}^{n},G)$ in $\mathbb{R}^{d}$. As such,
we fix $v\in\operatorname{im}(V)$ and
$x,y\in\mathcal{O}(\\{(z_{i},p_{i})\\}_{i=1}^{n},G)$ with $[x]\neq[y]$, and we
select $f\in\mathcal{F}(x,y)$ that maximizes
$\sum_{[w]_{0}\in
S^{\\{p_{i}\\}_{i=1}^{n}}([x]_{0},[y]_{0})}\sigma_{\min{}}^{G_{0}}\big{(}\\{v_{i}(x)\\}_{i\in
f^{-1}(w)},V_{f^{-1}(w)}\big{)}^{2}.$
By Corollary 79, we have $\Psi_{p_{i}}([z_{i}],[x])=\langle\langle
v_{i}([x]_{0}),[x]_{0}\rangle\rangle_{G_{0}}$ and
$\Psi_{p_{i}}([z_{i}],[y])=\langle\langle
v_{i}([x]_{0}),f(i)\rangle\rangle_{G_{0}}$, and so
$\displaystyle\|\Phi([x])-\Phi([y])-d([x],[y])v\|^{2}$
$\displaystyle=\sum_{i=1}^{n}\big{(}\langle\langle
v_{i}([x]_{0}),[x]_{0}\rangle\rangle_{G_{0}}-\langle\langle
v_{i}([x]_{0}),f(i)\rangle\rangle_{G_{0}}-d([x],[y])v_{i}\big{)}^{2}$
$\displaystyle=\sum_{[w]_{0}\in
S^{\\{p_{i}\\}_{i=1}^{n}}([x]_{0},[y]_{0})}\sum_{i\in
f^{-1}(w)}\big{(}\langle\langle
v_{i}([x]_{0}),[x]_{0}\rangle\rangle_{G_{0}}-\langle\langle
v_{i}([x]_{0}),[w]_{0}\rangle\rangle_{G_{0}}-d([x],[y])v_{i}\big{)}^{2}$
$\displaystyle=\sum_{[w]_{0}\in
S^{\\{p_{i}\\}_{i=1}^{n}}([x]_{0},[y]_{0})}\sum_{i\in
f^{-1}(w)}\bigg{(}\frac{\langle\langle
v_{i}([x]_{0}),[x]_{0}\rangle\rangle_{G_{0}}-\langle\langle
v_{i}([x]_{0}),[w]_{0}\rangle\rangle_{G_{0}}}{d_{G_{0}}([x]_{0},[w]_{0})}-\frac{d([x],[y])}{d_{G_{0}}([x]_{0},[w]_{0})}v_{i}\bigg{)}^{2}\cdot
d_{G_{0}}([x]_{0},[w]_{0})^{2}$ $\displaystyle\geq\sum_{[w]_{0}\in
S^{\\{p_{i}\\}_{i=1}^{n}}([x]_{0},[y]_{0})}\sigma_{\min{}}^{G_{0}}\big{(}\\{v_{i}(x)\\}_{i\in
f^{-1}(w)},V_{f^{-1}(w)}\big{)}^{2}\cdot d_{G_{0}}([x]_{0},[w]_{0})^{2}$
$\displaystyle\geq\alpha(\\{(z_{i},p_{i})\\}_{i=1}^{n},V,G)^{2}\cdot
d([x],[y])^{2},$
as desired. ∎
Next, we pass through the worst-case scenario:
###### Definition 83.
Suppose $G\leq\operatorname{O}(d)$ is compact and let
$p_{1},\dots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$. The component Voronoi
characteristic of $G$ corresponding to $\\{p_{i}\\}_{i=1}^{n}$ is given by
$\chi_{\pi_{0}}^{\\{p_{i}\\}_{i=1}^{n}}(G):=\max_{x,y\in
P_{\pi_{0}}(G)}|S^{\\{p_{i}\\}_{i=1}^{n}}([x]_{0},[y]_{0})|,$
where $P_{\pi_{0}}(G)$ and $S^{i}([x]_{0},[y]_{0})$ are defined in Definitions
24 and 76. The total Voronoi characteristic is defined by
$\chi_{\pi_{0}}^{T}(G):=\chi_{\pi_{0}}^{\\{i\\}_{i=1}^{|\pi_{0}(G)|}}(G).$
In addition, given $V\in\mathbb{R}^{n\times k}$,
$z_{1},\ldots,z_{n}\in\mathbb{R}^{d}$ and
$p_{1},\dots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$, we define
$\tilde{\alpha}(\\{(z_{i},p_{i})\\}_{i=1}^{n},V,G):=\min_{\begin{subarray}{c}I\subseteq\\{1,\ldots,n\\}\\\
|I|\geq
n/\chi_{\pi_{0}}^{\\{p_{i}\\}_{i=1}^{n}}(G)\end{subarray}}\min_{\\{K_{i}\\}_{i\in
I}\in(\pi_{0}(G))^{I}}\sigma_{\min{}}^{G_{0}}\big{(}\\{K_{i}z_{i}\\}_{i\in
I},V_{I}\big{)}.$
###### Remark 84.
$\chi_{\pi_{0}}^{\\{p_{i}\\}_{i=1}^{n}}(G)\leq\chi_{\pi_{0}}^{T}(G)\leq|\pi_{0}(G)|$
and by the pigeonhole principle,
$\tilde{\alpha}(\\{(z_{i},p_{i})\\}_{i=1}^{n},V,G)\leq\alpha(\\{(z_{i},p_{i})\\}_{i=1}^{n},V,G)$.
###### Proof of Theorems 75 and 74.
Fix sort indices $p_{1},\dots,p_{n}\in\\{1,\dots,|\pi_{0}(G)|\\}$. For
$z_{1},\ldots,z_{m}\in\mathbb{R}^{d}$, let $\Phi$ denote the corresponding
coorbit filter bank, and let
$\Phi_{0}\colon\mathbb{R}^{d}/G_{0}\to\mathbb{R}^{m}$ denote the max filter
bank defined by
$\Phi_{0}([x]_{0}):=\\{\langle\langle[z_{i}]_{0},[x]_{0}\rangle\rangle_{G_{0}}\\}_{i=1}^{m}$.
Fix $V\in\mathbb{R}^{n\times k}$. Consider the semialgebraic sets
$M_{V}:=\\{\\{z_{i}\\}_{i=1}^{m}\in(\mathbb{R}^{d})^{m}:\Phi_{0}\text{ fails
to strongly avoid }\operatorname{im}(V)\\}$
and
$N_{V}:=\\{\\{z_{i}\\}_{i=1}^{n}\in(\mathbb{R}^{d})^{n}:\Phi\text{ fails to
strongly avoid }\operatorname{im}(V)\\}.$
It suffices to show that for $n\in\mathbb{N}$,
$nd-\dim(N_{V})\geq\min_{\begin{subarray}{c}I\subseteq\\{1,\ldots,n\\}\\\
|I|\geq
n/\chi_{\pi_{0}}^{\\{p_{i}\\}_{i=1}^{n}}(G)\end{subarray}}|I|d-\dim(M_{V_{I}})\geq\min_{\begin{subarray}{c}I\subseteq\\{1,\ldots,n\\}\\\
|I|\geq n/\chi_{\pi_{0}}^{T}(G)\end{subarray}}|I|d-\dim(M_{V_{I}}).$
Fix arbitrary $\\{z_{i}\\}_{i=1}^{n}\in(\mathbb{R}^{d})^{n}$. Let
$I\subseteq[n]$ and $\\{K_{i}\\}_{i\in I}\in(\pi_{0}(G))^{I}$ be such that
$|I|\geq n/\chi_{\pi_{0}}^{\\{p_{i}\\}_{i=1}^{n}}(G)$ and
$\tilde{\alpha}(\\{(z_{i},p_{i})\\}_{i=1}^{n},V,G)=\sigma_{\min{}}^{G_{0}}(\\{K_{i}z_{i}\\}_{i\in
I},V_{I})$. Then, by Theorem 82 and Remark 84, we get
$\sigma_{\min{}}^{G}(\\{(z_{i},p_{i})\\}_{i=1}^{n},V)\geq\alpha(\\{(z_{i},p_{i})\\}_{i=1}^{n},V,G)\geq\tilde{\alpha}(\\{(z_{i},p_{i})\\}_{i=1}^{n},V,G)=\sigma_{\min{}}^{G_{0}}(\\{K_{i}z_{i}\\}_{i\in
I},V_{I}).$
As such, if $\\{z_{i}\\}_{i=1}^{n}\in N_{V}$, then $\\{K_{i}z_{i}\\}_{i\in
I}\in M_{V_{I}}$ and so $\\{z_{i}\\}_{i\in I}\in(K_{i}^{-1})_{i\in I}\cdot
M_{V_{I}}$. It follows that
$N_{V}\subseteq\bigcup_{\begin{subarray}{c}I\subseteq\\{1,\ldots,n\\}\\\
|I|\geq
n/\chi_{\pi_{0}}^{\\{p_{i}\\}_{i=1}^{n}}(G)\end{subarray}}\bigcup_{\\{K_{i}\\}_{i\in
I}\in(\pi_{0}(G))^{I}}\Big{\\{}\\{z_{i}\\}_{i=1}^{n}\in(\mathbb{R}^{d})^{n}:\\{z_{i}\\}_{i\in
I}\in\\{K_{i}^{-1}\\}_{i\in I}\cdot M_{V_{I}}\Big{\\}}.$
The union is finite and so we bound $\dim(\\{K_{i}^{-1}\\}_{i\in I}\cdot
M_{V_{I}})$ for fixed $I$ and $\\{K_{i}\\}_{i\in I}$. By $G_{0}$-invariance of
$M_{V_{I}}$ and for any $k_{i}\in G$ such that $G_{0}k_{i}=K_{i}$, we have
$\dim(\\{K_{i}^{-1}\\}_{i\in I}\cdot M_{V_{I}})=\dim((k_{i}^{-1})_{i\in
I}\cdot M_{V_{I}})=\dim(M_{V_{I}})$ where the last step follows since the left
action of $(k_{i}^{-1})_{i\in I}$ is an isometry of $(\mathbb{R}^{d})^{I}$.
Then, $\dim(N_{V})\leq(n-|I|)d+\dim(M_{V_{I}})$ and so
$\dim(N_{V})\leq\max_{\begin{subarray}{c}I\subseteq\\{1,\ldots,n\\}\\\ |I|\geq
n/\chi_{\pi_{0}}^{T}(G)\end{subarray}}(n-|I|)d+\dim(M_{V_{I}}).$
The result now follows by rearrangement. ∎
###### Remark 85.
The action of $G$ on $\mathbb{R}^{d}$ induces an action of $\pi_{0}(G)$ on
$\mathbb{R}^{d}/G$ given by metric space isometries. We label the orbit of
$[x]_{0}\in\mathbb{R}^{d}/G$ under $\pi_{0}(G)$ by $\pi_{0}([x])$. The Voronoi
decomposition $V^{i}_{[x]_{0}}$ in $\mathbb{R}^{d}$ descends into a Voronoi
decomposition of $\mathbb{R}^{d}/G_{0}$ compatible with the action of
$\pi_{0}(G)$, where the coorbit map is understood as sorting quotient
distances in ascending order, i.e., smallest first.
## 10 Minimal Reduction for Generic Max Filtering Avoidance
In this section, we show that the problem of strong avoidance for max
filtering with a group $G$ is equivalent to the same problem when $G$ is
replaced by its minimal reduction:
###### Proposition 86 (Section 1.2 in [19]).
Suppose $G\leq\operatorname{O}(d)$ is compact. Then, there exists $H\leq\operatorname{O}(p)$
such that $H$ has trivial principal isotropy and $\mathbb{R}^{d}/G$ is
isometric to $\mathbb{R}^{p}/H$.
###### Definition 87.
When $H$ has minimal dimension in Proposition 86, we call $H$ a minimal
reduction of $G$.
The main results are stated in the following lemma and corollary. The lemma
shows that max filtering is an orbit space isometry invariant:
###### Lemma 88.
For $j\in\\{1,2\\}$, suppose $G_{j}\leq\operatorname{O}(d_{j})$ are compact.
Suppose there exists an isometry
$\psi\colon\mathbb{R}^{d_{1}}/G_{1}\to\mathbb{R}^{d_{2}}/G_{2}$. Let $n\geq
1$, $k\geq 0$ and fix $V\in\mathbb{R}^{n\times k}$. For
$[z_{1}],\dots,[z_{n}]\in\mathbb{R}^{d_{j}}/G_{j}$ and $G_{j}$-invariant
semialgebraic sets $Y_{j}$, consider the following statements:
* •
$P_{j}([Y_{j}],\\{[z_{i}]\\}_{i=1}^{n})$: The max filter bank
$\Phi_{G_{j}}\colon\mathbb{R}^{d_{j}}/G_{j}\to\mathbb{R}^{n}$ defined by
$\Phi_{G_{j}}([x]):=\\{\langle\langle[x],[z_{i}]\rangle\rangle_{G_{j}}\\}_{i=1}^{n}$
locally avoids $\operatorname{im}(V)$ at every $[y]\in[Y_{j}]$.
* •
$W_{j}(\\{[z_{i}]\\}_{i=1}^{n})$: The max filter bank
$\Phi_{G_{j}}\colon\mathbb{R}^{d_{j}}/G_{j}\to\mathbb{R}^{n}$ defined by
$\Phi_{G_{j}}([x]):=\\{\langle\langle[x],[z_{i}]\rangle\rangle_{G_{j}}\\}_{i=1}^{n}$
weakly avoids $\operatorname{im}(V)$.
Then,
1. (a)
$P_{1}([Y_{1}],\\{[z_{i}]\\}_{i=1}^{n})\Longleftrightarrow
P_{2}(\psi([Y_{1}]),\\{\psi([z_{i}])\\}_{i=1}^{n})$.
2. (b)
$W_{1}(\\{[z_{i}]\\}_{i=1}^{n})\Longleftrightarrow
W_{2}(\\{\psi([z_{i}])\\}_{i=1}^{n})$.
The following corollary shows that genericity of strong avoidance of max
filtering is invariant to orbit space isometry:
###### Corollary 89.
In addition to the notation and assumptions in Lemma 88, suppose that
$\psi([Y_{1}])=[Y_{2}]$. Then, $P_{1}([Y_{1}],\\{[z_{i}]\\}_{i=1}^{n})$ (resp.
$W_{1}(\\{[z_{i}]\\}_{i=1}^{n})$) holds for generic
$z_{1},\dots,z_{n}\in\mathbb{R}^{d_{1}}$ if and only if
$P_{2}([Y_{2}],\\{[z_{i}]\\}_{i=1}^{n})$ (resp.
$W_{2}(\\{[z_{i}]\\}_{i=1}^{n})$) holds for generic
$z_{1},\dots,z_{n}\in\mathbb{R}^{d_{2}}$.
Before providing the proofs, we need the following:
###### Lemma 90.
For $j\in\\{1,2\\}$, fix $G_{j}\leq\operatorname{O}(d_{j})$ and suppose that
there exists a homeomorphism
$\varphi\colon\mathbb{R}^{d_{1}}/G_{1}\to\mathbb{R}^{d_{2}}/G_{2}$. Let
$S_{j}\subseteq\mathbb{R}^{d_{j}}$ be semialgebraic and $G_{j}$-invariant. If
$[S_{2}]=\operatorname{im}(\varphi|_{[S_{1}]})$, then $\dim(S_{1})<d_{1}$ if
and only if $\dim(S_{2})<d_{2}$.
###### Proof.
By semialgebraicity and invariance, we have
$\displaystyle\dim(S_{j})=d_{j}$ $\displaystyle\iff$ $\displaystyle
S_{j}\text{ contains an open set}$ $\displaystyle\iff$
$\displaystyle[S_{j}]\text{ contains an open set}.$
The result now follows from continuity of $\varphi$ and $\varphi^{-1}$. ∎
###### Proof of Lemma 88.
For $w\in\mathbb{R}^{d_{j}}$, denote the orbit of $w$ by $[w]_{j}$. By Section
5.1 in [19], we may assume $\psi([0]_{1})=[0]_{2}$. By 29(a), we have for
$x,y\in\mathbb{R}^{d_{1}}$
$\displaystyle 2\langle\langle[x]_{1},[y]_{1}\rangle\rangle$
$\displaystyle=\|x\|^{2}+\|y\|^{2}-d^{2}([x]_{1},[y]_{1})$
$\displaystyle=d^{2}([x]_{1},[0]_{1})+d^{2}([y]_{1},[0]_{1})-d^{2}([x]_{1},[y]_{1})$
$\displaystyle=d^{2}(\psi([x]_{1}),[0]_{2})+d^{2}(\psi([y]_{1}),[0]_{2})-d^{2}(\psi([x]_{1}),\psi([y]_{1}))$
$\displaystyle=2\langle\langle\psi([x]_{1}),\psi([y]_{1})\rangle\rangle.$
As such, max filtering is $\psi$-invariant and the result follows. ∎
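The identity $2\langle\langle[x]_{1},[y]_{1}\rangle\rangle=\|x\|^{2}+\|y\|^{2}-d^{2}([x]_{1},[y]_{1})$ used in this proof can be checked numerically for a concrete finite group (an illustrative sketch, not from the source). Here $G$ is taken to be the group of cyclic coordinate shifts, which sits inside $\operatorname{O}(d)$ as permutation matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
x, y = rng.standard_normal(d), rng.standard_normal(d)

# G = cyclic shifts, a finite subgroup of O(d); orbit of y under G:
orbit = np.array([np.roll(y, s) for s in range(d)])

corr = (orbit @ x).max()                        # <<[x],[y]>> = max_g <x, g y>
dist = np.linalg.norm(x - orbit, axis=1).min()  # d([x],[y]) = min_g ||x - g y||

lhs = 2 * corr
rhs = x @ x + y @ y - dist**2
print(lhs, rhs)                                 # equal
```

The same $g$ attains both extrema, since $\|x-gy\|^{2}=\|x\|^{2}+\|y\|^{2}-2\langle x,gy\rangle$ and every $g\in G$ preserves norms, so the identity holds exactly.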
###### Proof of Corollary 89.
By Lemma 15, the sets
$C_{Y_{j}}:=\\{\\{[z_{i}]\\}_{i=1}^{n}\in(\mathbb{R}^{d_{j}})^{n}:P_{j}([Y_{j}],\\{[z_{i}]\\}_{i=1}^{n})\\}$
are semialgebraic. The result now follows by applying Lemma 90 to the
component-wise actions of $G_{j}^{n}$ over $(\mathbb{R}^{d_{j}})^{n}$ with
$S_{1}:=\\{\\{[z_{i}]\\}_{i=1}^{n}\in(\mathbb{R}^{d_{1}}/G_{1})^{n}:P_{1}([Y_{1}],\\{[z_{i}]\\}_{i=1}^{n})\\},$
$S_{2}:=\\{\\{[z_{i}]\\}_{i=1}^{n}\in(\mathbb{R}^{d_{2}}/G_{2})^{n}:P_{2}([Y_{2}],\\{[z_{i}]\\}_{i=1}^{n})\\},$
and with
$\varphi\colon(\mathbb{R}^{d_{1}})^{n}/G_{1}^{n}\to(\mathbb{R}^{d_{2}})^{n}/G_{2}^{n}$
taken to be the $n$-fold product of $\psi$. A similar argument applies to
$W_{j}(\\{[z_{i}]\\}_{i=1}^{n})$. ∎
## 11 Max Filtering Local Avoidance at Regular Orbits
###### Definition 91.
Suppose $G\leq\operatorname{O}(d)$ is compact with cohomogeneity $c$. The set
of regular points is defined by
$R(G):=\\{x\in\mathbb{R}^{d}:\dim([x])=d-c\\}.$
Equivalently, for $x\in\mathbb{R}^{d}$ and for a principal isotropy group
$G_{p}\leq G_{x}$, we have $x\in R(G)$ if and only if $\dim(G_{x}/G_{p})=0$.
In this section, we leverage minimal reduction (Definition 87) to show that
with enough templates, max filtering locally avoids a fixed subspace at every
regular point. In the case of coorbit filter banks, we refer to Section 9 for
a reduction to max filter banks.
###### Theorem 92.
# A scheme for fully programmable linear quantum networks
based on frequency conversion
Patrick Folge, Michael Stefszky, Benjamin Brecht, Christine Silberhorn
Paderborn University, Integrated Quantum Optics,
Institute for Photonic Quantum Systems (PhoQS), Warburgerstr. 100, 33098
Paderborn, Germany
###### Abstract
Linear optical quantum networks, consisting of a quantum input state and a
multi-port interferometer, are an important building block for many quantum
technological concepts, e.g., Gaussian boson sampling. Here, we propose the
implementation of such networks based on frequency conversion by utilising a
so called multi-output quantum pulse gate (mQPG). This approach allows the
resource efficient and therefore scalable implementation of frequency-bin
based, fully programmable interferometers in a single spatial and polarization
mode. Quantum input states for this network can be provided by utilising the
strong frequency entanglement of a type-0 parametric down conversion (PDC)
source. Here, we develop a theoretical framework to describe linear networks
based on a mQPG and PDC and utilize it to investigate the limits and
scalability of our approach.
## I Introduction
Linear optical quantum networks (LOQN), which we consider as a multi-port
interferometer with a quantum input state and followed by photon counting or
homodyne detection, have become an increasingly relevant platform and building
block for many quantum technological applications. These include (Gaussian)
boson sampling [1, 2, 3], measurement-based quantum computation [4, 5],
quantum teleportation [6, 7], quantum walks [8, 9, 10], and quantum
simulations[11, 12]. However, to enable useful applications of these concepts,
which extend beyond proof of principle demonstrations, the underlying LOQN
have to reach sufficiently high dimensionality in terms of both contributing
modes and photons. Recent implementations of high dimensional LOQN were
achieved in both the spatial [13] and temporal [14] degrees of freedom and
were able to demonstrate quantum computational advantage. However, these approaches
require many optical components as well as synchronisation and phase stable
implementation of large experimental setups. Thus, scaling these approaches is
a challenging technical task.
LOQNs can also be implemented using spectral encodings and have been explored
by using electro optical modulators (EOMs) [15, 16, 17, 18, 19, 20] or
spectrally multimode homodyne detection [21, 22, 23]. However, the EOM based
approach requires active spectral shaping of the input quantum state which can
result in significant losses and the implementation of arbitrary LOQNs
requires complex pulse shapes of the electrical radio frequency signals. On
the other hand, the homodyne based approach faces the challenge of introducing
non-Gaussian elements, which are a crucial requirement for many of the above
mentioned applications, and require a phase stable implementation.
Figure 1: Schematic depiction of a LOQN. The multi-port interferometer is
characterized by a unitary matrix U, describing how input and output modes are
connected. A quantum state is used as the quantum resource of the system.
In this paper we explore an alternative approach for LOQNs in the spectral
domain which is based on frequency conversion. This introduces a new platform
for photonic quantum information processing and offers a highly efficient
implementation of intrinsically phase stable quantum networks with full
programmability. The general concept of a LOQN is depicted in Fig. 1 and
illustrates the main requirements; controlled preparation of input quantum
states, a stable but reconfigurable multi-port interferometer and detection.
At the core of our approach lies a multi-output quantum pulse gate [24],
allowing one to implement fully programmable frequency-bin interferometers. In
combination with a highly multi-mode type-0 parametric down conversion (PDC)
source, one can realise a high dimensional LOQN in one spatial mode by using
only two non-linear waveguides. Note that, if used together with detection in
the photon-number basis, our scheme does not require active phase
stabilisation.
This work is organized as follows. First, we introduce the theoretical
modelling of the mQPG, and discuss how it can be utilized to implement
interferometers based on frequency bins. Next, we introduce type-0 PDC as an
appropriate source of input quantum states for the LOQN and theoretically
model the combined system of PDC and mQPG. For this we derive a formalism
which allows us to investigate the quality of the frequency conversion based
LOQN via the squeezing strength and purity of the output state. As an
instructive example, we apply our framework to simulate a minimal example of
an LOQN comprised of a frequency bin beamsplitter and squeezed input states.
Finally, we investigate the fundamental limits of our scheme and explore its
scalability to higher numbers of contributing modes.
## II Theoretical model
In this work, we assume that all fields are in the form of optical pulses,
which are described by a complex spectral amplitude $F(\omega)$. Such modes
are usually called temporal modes (TMs) [25]. Further, we assume for simplicity
that all fields are in one spatial and polarisation mode. The creation
operator of a photon in such a TM is given by [25, 26]
$\displaystyle\hat{F}^{\dagger}=\int\text{d}\omega
F^{*}(\omega)\hat{a}^{\dagger}(\omega).$ (1)
We will label operators associated with a TM $F(\omega)$ with the same capital
letter and a hat $\hat{F}$.
### II.1 Frequency bin Interferometer
At the heart of a general LOQN lies a multi-port interferometer, preferably
programmable, which allows one to interfere and process the input states. Such
an interferometer (e.g. based on spatial modes) is characterized by a unitary
matrix $U_{kl}$, which describes how the (spatial) input modes $\hat{f}_{l}$
are connected to the (spatial) output modes $\hat{h}_{k}$ via the operator
transformation
$\displaystyle\hat{h}_{k}=\sum_{l=1}^{N_{in}}U_{kl}\hat{f}_{l}.$ (2)
Here, $N_{in}$ is the number of input modes and therefore also the size of the
unitary matrix. In other words Eq. (2) implies that the interferometer’s
outputs correspond to different superpositions of the inputs, while
maintaining energy conservation.
In this work, we will present a scheme to implement such an interferometer on
the basis of a set of $N_{in}$ separated frequency bins $A_{l}(\omega_{in})$,
where $l$ labels the individual bins at central frequency
$\overline{\omega}^{in}_{l}$ and the $\omega_{in}$-dependence encodes the
spectral profile of the bins (e.g. Gaussian). We first define a set of
superposition modes
$\displaystyle
S_{k}(\omega_{in}):=\sum_{l=1}^{N_{in}}U_{kl}A_{l}(\omega_{in}),$ (3)
which correspond to the outputs of the interferometer. The mode operators of
these then take the form $\hat{S}_{k}=\sum_{l=1}^{N_{in}}U_{kl}\hat{A}_{l}$
and contain the operators $\hat{A}_{l}$ pertaining to the individual bins. To
implement an interferometer on the frequency bin basis, we now design a
process which is capable of operating on the superposition modes $\hat{S}_{k}$
given by Eq. (3). In the following we present the details for an experimental
implementation of this task, which utilises the so called multi-output quantum
pulse gate.
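The construction in Eq. (3) can be sketched numerically (an illustration with made-up parameter values, not from the source): discretize the frequency axis, define well-separated, normalized Gaussian bins $A_{l}$, and apply a unitary $U$ (here a DFT matrix). Since the bins are approximately orthonormal and $U$ is unitary, the superposition modes $S_{k}$ remain approximately orthonormal:

```python
import numpy as np

w = np.linspace(-10, 10, 4001)       # frequency grid (arbitrary units)
dw = w[1] - w[0]
centers = [-6, -2, 2, 6]             # well-separated bin centers
A = np.array([np.exp(-(w - c)**2 / (2 * 0.5**2)) for c in centers])
A /= np.sqrt((A**2).sum(axis=1, keepdims=True) * dw)   # normalize each bin

N = len(centers)
U = np.exp(2j * np.pi * np.outer(range(N), range(N)) / N) / np.sqrt(N)  # DFT unitary

S = U @ A                            # superposition modes S_k = sum_l U_kl A_l
gram = (S.conj() @ S.T) * dw         # overlap integrals of the outputs
print(np.round(np.abs(gram), 3))     # approximately the identity matrix
```

Any unitary $U$ can be substituted here; the DFT merely serves as a concrete programmable example.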
### II.2 The mQPG as an Interferometer
Figure 2: Schematic depiction of the transfer function of a two-output mQPG,
implementing a frequency bin beam splitter. The transfer function (red and
blue) is given as the product of the phase matching function (green) and the
pump spectrum (grey). Imprinting specific amplitudes and phases onto the pump
allows one to program different transfer functions.
A multi-output quantum pulse gate (mQPG) is a specially designed sum-frequency
generation (SFG) process in a periodically poled non-linear waveguide [27,
24]. As an SFG process, it is characterized by a transfer function (TF)
$\displaystyle
G_{SFG}(\omega_{in},\omega_{out})=P(\omega_{P}=\omega_{out}-\omega_{in})\cdot\Phi(\omega_{in},\omega_{out})$
(4)
which is the product of the phase-matching function
$\Phi(\omega_{in},\omega_{out})$ of the nonlinear process and the complex
spectrum $P(\omega_{P})$ of the pump [28]. This TF describes how the
amplitudes at input frequencies $\omega_{in}$ are converted to the output
frequencies $\omega_{out}$. The distinct property of a mQPG, setting it apart
from general SFG, is group velocity matching of the pump and signal fields,
which can be achieved by dispersion engineering of the waveguides [27].
Because of this, the PM-function of a mQPG is oriented perpendicular to the
output-axis, which leads to a situation where the output frequency does not
change for a broad input frequency range.
Note, that the original quantum pulse gate [27, 29] had only one output, but
recently the concept has been expanded for multiple outputs making it ideal
for network applications [24]. The mQPG combines multiple spectrally separated
phasematching peaks within one device, by modulating the periodic poling with
a superstructure. The PM function of such an mQPG with $N_{out}$ peaks then
has the form
$\displaystyle\Phi(\omega_{in},\omega_{out})\approx\sum_{m=1}^{N_{out}}O_{m}(\omega_{out}),$
(5)
where $O_{m}(\omega_{out})$ describes the peak’s spectral profile (typically
sinc-shape) and $m$ labels the different central positions
$\overline{\omega}^{out}_{m}$ of the peaks. The PM function of such an mQPG is
depicted in Fig. 2, where we sketch the mQPG’s general working principle for
two inputs and outputs.
The mQPG allows us to perform operations on arbitrarily chosen superposition
modes of frequency bins. This works under the assumption that the pump
structures (here frequency bins with spectral profile $B(\omega_{P})$) are
spectrally broader than the individual phasematching peaks
$O_{m}(\omega_{out})$ (this assumption ensures a single-mode character of
the conversion process, eliminating frequency correlations [31]). Since the mQPG
is an SFG process such a pump bin with a central frequency of
$\overline{\omega}^{pump}$ addresses an input frequency bin with a central
frequency of
$\overline{\omega}^{in}_{m}=\overline{\omega}^{out}_{m}-\overline{\omega}^{pump}$
and converts it to the $m-$th output with a central frequency
$\overline{\omega}^{out}_{m}$. In more detail this means that conversion is
achieved at the intersection of the bin’s pump function $B(\omega_{P})$ and
the PM function, hence, an input bin
$A_{m}(\omega_{in})=B(\overline{\omega}^{out}_{m}-\omega_{in})$ is converted
to the output mode $O_{m}$. Note that this input mode has the same complex
spectral profile as the corresponding pump bin, but is frequency shifted.
Furthermore, due to the orientation of the PM-function, the shape and position
of the output modes do not change when the pump bin is shifted. This crucial
feature allows for the necessary multi-path interference of interferometers,
since multiple input modes can be coherently mapped to the same output by
utilising multiple pump bins (compare Fig. 2). Since the phase and amplitude
of the pump bins also determines the phase and amplitude of the conversion, we
can implement the mapping of one of the superposition modes $S_{k}$ to one of
the output modes $O_{m}$. This is done by appropriately choosing the pump bins
so that all outputs address the same input bins at centers
$\overline{\omega}^{in}_{l}$. With this it is possible to realize a multi-port
interferometer, by programming a pump spectrum of the form
$\displaystyle
P(\omega_{P})=\sum_{m=1}^{N_{out}}\sum_{l=1}^{N_{in}}U_{ml}\cdot
B(\overline{\omega}^{out}_{m}-\overline{\omega}^{in}_{l}-\omega_{P}).$ (6)
Here, $P(\omega_{P})$ is the complete pump spectrum, which is composed of
individual frequency bins labeled by the corresponding frequencies of the
input and output bins and weighted by the corresponding entry $U_{ml}$ of the
unitary matrix describing the network. Using this yields a TF
$\displaystyle G_{U}(\omega_{in},\omega_{out})$
$\displaystyle=\sum_{m=1}^{N_{out}}\sum_{l=1}^{N_{in}}U_{ml}\cdot
A_{l}(\omega_{in})\cdot O_{m}(\omega_{out})$
$\displaystyle=\sum_{m=1}^{N_{out}}S_{m}(\omega_{in})\cdot
O_{m}(\omega_{out}).$ (7)
One simple example of this scheme is depicted in Fig. 2, namely the
implementation of the TF for a balanced beamsplitter
($U_{BS}=((1,1),(1,-1))/\sqrt{2}$) on the frequency bin basis. The TF in this
case is given by
$\displaystyle\begin{split}G_{BS}(\omega_{in},\omega_{out})=(A_{1}(\omega_{in})+A_{2}(\omega_{in}))\cdot
O_{1}(\omega_{out})/\sqrt{2}\\\ +(A_{1}(\omega_{in})-A_{2}(\omega_{in}))\cdot
O_{2}(\omega_{out})/\sqrt{2}.\end{split}$ (8)
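A quick numerical check of this transfer function (an illustrative sketch, not from the source; Gaussian profiles stand in for the sinc-shaped phasematching peaks): discretizing $G_{BS}$ on a grid and computing its singular value decomposition yields exactly two equal, nonvanishing Schmidt coefficients, one per output mode, confirming that the beamsplitter TF is a rank-two coherent mapping:

```python
import numpy as np

w_in = np.linspace(-5, 5, 801)
w_out = np.linspace(-5, 5, 801)
dwi = w_in[1] - w_in[0]
dwo = w_out[1] - w_out[0]

def norm_bin(w, c, s=0.4):
    g = np.exp(-(w - c)**2 / (2 * s**2))
    return g / np.sqrt((g**2).sum() * (w[1] - w[0]))

A1, A2 = norm_bin(w_in, -2), norm_bin(w_in, 2)    # input frequency bins
O1, O2 = norm_bin(w_out, -2), norm_bin(w_out, 2)  # output peaks (Gaussian stand-ins)

# Eq. (8): transfer function of the balanced frequency-bin beamsplitter,
# indexed as G[w_out, w_in]
G = (np.outer(O1, A1 + A2) + np.outer(O2, A1 - A2)) / np.sqrt(2)

sv = np.linalg.svd(G * np.sqrt(dwi * dwo), compute_uv=False)
print(np.round(sv[:4], 3))   # two equal nonzero Schmidt coefficients, rest ~ 0
```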
To understand the action of such a mQPG on a quantum input state we can
consider the problem within the Heisenberg picture, where a general SFG
process is described via the Bogoliubov transformations [28]:
$\displaystyle\hat{b}^{\prime\prime}(\omega_{in})$
$\displaystyle=\int\text{d}\omega_{in}^{\prime}\;U^{Q}_{b}(\omega_{in},\omega_{in}^{\prime})\hat{b}^{\prime}(\omega_{in}^{\prime})$
$\displaystyle\qquad+\int\text{d}\omega_{out}^{\prime}\;V^{Q}_{b}(\omega_{in},\omega_{out}^{\prime})\hat{a}^{\prime}(\omega_{out}^{\prime})$
(9) $\displaystyle\hat{a}^{\prime\prime}(\omega_{out})$
$\displaystyle=\int\text{d}\omega_{out}^{\prime}\;U^{Q}_{a}(\omega_{out},\omega_{out}^{\prime})\hat{a}^{\prime}(\omega_{out}^{\prime})$
$\displaystyle\qquad-\int\text{d}\omega_{in}^{\prime}\;V^{Q}_{a}(\omega_{out},\omega_{in}^{\prime})\hat{b}^{\prime}(\omega_{in}^{\prime}).$
(10)
Here, the operators representing the fields in front of the SFG are labeled by
a single dash (′) and fields after the SFG by a double dash (′′) (compare Fig.
1a). We consider two different monochromatic operators $\hat{a}$ and $\hat{b}$
for input and output modes to account for the possibility of having orthogonal
polarizations and for the two separated frequency ranges of $\omega_{in}$ and
$\omega_{out}$. The functions $U_{a}^{Q},V_{a}^{Q},U_{b}^{Q},V_{b}^{Q}$ can be
calculated directly from the TF, when time ordering effects are neglected (see
Appendix D). Eq. (10) for an mQPG with a TF (7) simplifies to
$\displaystyle\hat{S}^{\prime\prime}_{m}=\cos(\theta_{m})\hat{S}^{\prime}_{m}+\sin(\theta_{m})\hat{O}^{\prime}_{m},$
(11)
$\displaystyle\hat{O}^{\prime\prime}_{m}=\cos(\theta_{m})\hat{O}^{\prime}_{m}-\sin(\theta_{m})\hat{S}^{\prime}_{m}.$
(12)
These are the Heisenberg operator transformations for the superposition modes
of the mQPG. The parameter $\theta_{m}$ defines the conversion efficiency
$\sin(\theta_{m})^{2}$ of the $m$-th mode. It can be adjusted with the pump
power and can in principle reach unity [32]. In this case ($\theta_{m}=\pi/2$)
Eq. (12) takes the form
$\displaystyle\hat{O}^{\prime\prime}_{m}=-\hat{S}^{\prime}_{m}=-\sum_{l=1}^{N_{in}}U_{ml}\hat{A}^{\prime}_{l},$
(13)
which is equivalent to relation (2), characterizing the multi-port
interferometer. Note however that Eq. (13) is formulated in terms of frequency
bins which are connected via frequency conversion.
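Since the transformation (11)-(12) acts on each pair $(\hat{S}_{m},\hat{O}_{m})$ as a planar rotation, its unitarity, and the full-conversion limit $\theta_{m}=\pi/2$ of Eq. (13), can be read off the $2\times 2$ matrix directly (a minimal sketch, not from the source):

```python
import numpy as np

def mqpg_rotation(theta):
    """Heisenberg transformation of Eqs. (11)-(12) on the pair (S_m, O_m)."""
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [-np.sin(theta), np.cos(theta)]])

R = mqpg_rotation(0.7)
print(np.round(R @ R.T, 6))        # identity: commutation relations are preserved

R_full = mqpg_rotation(np.pi / 2)  # full conversion, sin^2(theta_m) = 1
print(np.round(R_full, 6))         # output O''_m carries -S'_m, cf. Eq. (13)
```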
The action of a mQPG can also be interpreted as a coherent filtering of a
superposition mode $S_{m}$ and the simultaneous quantum transduction to an
output mode $O_{m}$. We call this process coherent filtering, because it is
sensitive to the spectral phase of the considered modes. In the next section,
we will describe a source of input states that are naturally compatible with
the mQPG.
### II.3 Spectrally multimode squeezing source
One desirable set of input states for LOQNs are squeezed states, for example
in Gaussian boson sampling, which we consider here. An optimal source for our
frequency bin based network, would deliver squeezed states in the input bins
$A_{k}(\omega_{in})$. However, such sources are challenging to engineer and
would require a sophisticated control of the PDC process, e.g. by utilising
resonators [33]. Therefore, we consider the use of well established degenerate
type-0 PDC sources, which in the high gain regime generate squeezed states in
many TMs [21, 34]. Such PDC sources are characterized by their joint spectral
amplitude (JSA)
$\displaystyle
f(\omega_{in},\omega_{in}^{\prime})=P(\omega_{P}=\omega_{in}+\omega_{in}^{\prime})\cdot\Phi(\omega_{in},\omega_{in}^{\prime})$
(14)
which is given as the product of pump amplitude spectrum and phase matching
function [35]. Note that, since signal and idler are indistinguishable in
type-0 PDC the JSA has to fulfil
$f(\omega_{in},\omega_{in}^{\prime})=f(\omega_{in}^{\prime},\omega_{in})$. The
evolution of an input state (here vacuum) passing through the PDC is given by
the unitary operator
$\displaystyle\hat{U}_{PDC}=\exp\left(-\frac{i}{\hbar}\int\text{d}\omega_{in}\,\text{d}\omega_{in}^{\prime}f(\omega_{in},\omega_{in}^{\prime})\hat{b}^{\dagger}(\omega_{in})\hat{b}^{\dagger}(\omega_{in}^{\prime})\right.$
$\displaystyle\left.+\quad\text{h.c.}\vphantom{\int_{1}^{2}}\right).\qquad$
(15)
For a type-0 PDC source the JSA is given as a narrow stripe oriented along the
anti-diagonal (as illustrated in Fig. 3b). This results from the orientation
of the pump function $P$ and the phase matching $\Phi$ along this axis [36].
For a very narrow pump the JSA can be approximated by a $\delta$-function
$\displaystyle f(\omega_{in},\omega_{in}^{\prime})$
$\displaystyle\propto\delta(\omega_{in}+\omega_{in}^{\prime}-2\omega_{0})$
$\displaystyle\propto\sum_{k}\phi_{k}(\omega_{in}-\omega_{0})\phi_{k}^{*}(-(\omega_{in}^{\prime}-\omega_{0}))$
(16)
which can be decomposed into any orthonormal basis $\\{\phi_{k}\\}$ fulfilling
the completeness relation
$\delta(\omega-\omega^{\prime})=\sum_{k}\phi_{k}(\omega)\phi^{*}_{k}(\omega^{\prime})$.
Note that in Eq. (16) the paired functions are mirrored around the degeneracy
point $\omega_{0}$, e.g. a bin $A_{1}$ at a central frequency
$\omega_{0}+\Delta$ is paired with a bin $A_{2}$ centered at
$\omega_{0}-\Delta$. Since these bins are part of an orthonormal basis the
unitary (15) takes on the form
$\hat{U}_{PDC}=\hat{U}_{12}\otimes\hat{U}_{rest}$ where the unitary describing
the subspace of the bins is
$\displaystyle\hat{U}_{12}=\exp(\alpha\hat{A}_{1}^{\dagger}\hat{A}_{2}^{\dagger}-\alpha^{*}\hat{A}_{1}\hat{A}_{2})$
(17)
and is independent of the unitary $\hat{U}_{rest}$ which describes the
remaining space. Note that Eq. (17) has the form of the well known two-mode
squeezing (TMS) operator [37]. This shows that such a PDC source provides TMS
states between pairs of frequency bins. The parameter $\alpha$ combines
multiple constants, including the pump strength, and determines the squeezing
strength.
Figure 3: a) Schematic depiction of the combined system of Type-0 PDC source
and mQPG. The transfer function of the mQPG can be programmed to implement an
arbitrary interferometer by shaping the pump. b) Left: schematic depiction of
the JSA in black. The blue areas highlight the effective JSA which is
coherently filtered from the PDC state by the mQPG. The dashed arrows
highlight different two-mode squeezed states. Right: the transfer function of
the mQPG, which maps different superpositions of the coherently filtered bins
to different output channels. c) Analogous interferometer in the spatial domain.
However, in reality the JSAs of physical PDC sources have a finite width and
the approximation of Eq. (16) is not valid. Therefore, we consider a general
description of type-0 PDC in our model, which allows us to consider any shape
of the JSA. This will enable us to study the influences of its non-negligible
width in later sections. We model the PDC in the Heisenberg picture where Eq.
(15) takes the form of the Bogoliubov transformation
$\displaystyle\hat{b}^{\prime}(\omega_{in})=\int\text{d}\omega^{\prime}_{in}\;U^{P}(\omega_{in},\omega^{\prime}_{in})\hat{b}(\omega^{\prime}_{in})\quad\quad$
$\displaystyle+\int\text{d}\omega^{\prime}_{in}\;V^{P}(\omega_{in},\omega^{\prime}_{in})\hat{b}^{{\dagger}}(\omega^{\prime}_{in}).$
(18)
Here, fields after the PDC are labeled with a prime (') while fields in front
of the PDC do not have an additional label (compare Fig. 3a). Eq. (18) is
similar to (10) of the SFG process, however only one set of monochromatic
operators $\hat{b}$ is considered here, since signal and idler field have the
same polarization and central frequency. The functions $U^{P}$ and $V^{P}$ can
be derived from the JSA (see Appendix C).
### II.4 Describing the complete LOQN
In summary, our scheme to implement LOQNs reads as follows: A type-0 PDC
generates TMS states between pairs of frequency bins, which are subsequently
coherently filtered and superimposed in the output modes of a mQPG. The
resulting quantum state in the outputs is then analogous to the output state
of a spatial interferometer with TMS states in the input. In Fig. 3 we
illustrate our proposed scheme for a specific example network. We depict the
required experimental components of our specific PDC source and a fully
programmable mQPG. To model this combined system we adapt the theory of
intensity filtered type-2 PDC presented in Ref. [38] to include the coherent
filtering by the mQPG. This enables us to describe the frequency converted
quantum state $\rho_{out}$ in the mQPG’s output in the continuous variable
picture via its covariance matrix $\sigma_{kl}$. This is possible since we
consider only Gaussian transformations (squeezing and beam splitters) [39, 40].
Because the mQPG's output only consists of the modes $O_{k}$, we
can describe the full output state on the basis of the operators
$\hat{O}_{k}$. The quadrature operators
$\hat{X}_{k}=\frac{1}{\sqrt{2}}(\hat{O}_{k}+\hat{O}_{k}^{\dagger})$ and
$\hat{Y}_{k}=\frac{1}{i\sqrt{2}}(\hat{O}_{k}-\hat{O}_{k}^{\dagger})$
corresponding to the different output modes can be arranged in the vector
$\displaystyle\vec{\hat{R}}=(\hat{X}_{1},\hat{Y}_{1},\hat{X}_{2},\hat{Y}_{2},...).$
(19)
Then the individual elements of the covariance matrix can be expressed as
$\displaystyle\sigma_{kl}=\frac{1}{2}\left\langle\hat{R}_{k}\hat{R}_{l}+\hat{R}_{l}\hat{R}_{k}\right\rangle-\left\langle\hat{R}_{k}\right\rangle\left\langle\hat{R}_{l}\right\rangle.$
(20)
In the following we neglect the last term because we assume vacuum states in
all fields in front of the non-linear elements. Note, however, that this is
not a necessity and that our framework can readily be adapted to include other
input states. We describe the evolution of the states in the Heisenberg
picture, by successively applying the transformations (10) and (18) to the
operators $\hat{O}^{\prime\prime}_{k}$ which results in the expression
$\displaystyle\hat{O}^{\prime\prime}_{k}$
$\displaystyle=\int\;\text{d}\omega_{out}\;H^{1}_{k}(\omega_{out})\hat{a}^{\prime}(\omega_{out})$
$\displaystyle\qquad+\int\;\text{d}\omega_{in}\;H^{2}_{k}(\omega_{in})\hat{b}(\omega_{in})+\;H^{3}_{k}(\omega_{in})\hat{b}^{\dagger}(\omega_{in})$
(21)
where the amplitude functions take the form
$\displaystyle H^{1}_{k}(\omega_{out})$
$\displaystyle=\int\text{d}\omega^{\prime}_{out}\;O_{k}(\omega^{\prime}_{out})U^{Q}_{a}(\omega^{\prime}_{out},\omega_{out})$
$\displaystyle H^{2}_{k}(\omega_{in})$
$\displaystyle=-\int\text{d}\omega^{\prime}_{out}\text{d}\omega^{\prime}_{in}\;O_{k}(\omega^{\prime}_{out})V^{Q}_{a}(\omega^{\prime}_{out},\omega^{\prime}_{in})U^{P}(\omega^{\prime}_{in},\omega_{in})$
$\displaystyle H^{3}_{k}(\omega_{in})$
$\displaystyle=-\int\text{d}\omega^{\prime}_{out}\text{d}\omega^{\prime}_{in}\;O_{k}(\omega^{\prime}_{out})V^{Q}_{a}(\omega^{\prime}_{out},\omega^{\prime}_{in})V^{P}(\omega^{\prime}_{in},\omega_{in}).$
(22)
Inserting these operators into Eq. (20) then allows one to calculate the
covariance matrix for any given JSA and TF, by evaluating the vacuum
expectation values. The resulting form of $\sigma_{kl}$ is derived in Appendix
E. We would like to point out that our scheme, despite our description in the
framework of continuous variable quantum optics, does not assume any
particular detection method. Experimentally it is fully compatible with
detection in the photon number basis after separating the different output
channels by frequency filtering. While simulating this scenario is
computationally demanding since it is effectively a GBS system, the photon
number distributions can in principle be derived from the covariance matrix
[41].
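As a small illustration of this last point, the lowest moment of the photon
number distribution follows directly from the diagonal of the covariance
matrix. The sketch below (plain numpy, assuming zero-mean Gaussian states and
the vacuum variance of 0.5 used throughout this work) recovers the expected
mean photon number $\sinh^{2}r$ per mode of an ideal two-mode squeezed state.

```python
import numpy as np

def mean_photons(sigma):
    """Per-mode mean photon numbers <n_k> = (<X_k^2> + <Y_k^2> - 1)/2 from a
    covariance matrix with vacuum variance 0.5 (zero-mean state assumed)."""
    n_modes = sigma.shape[0] // 2
    return (np.diag(sigma).reshape(n_modes, 2).sum(axis=1) - 1.0) / 2.0

# Check on an ideal two-mode squeezed state with squeezing parameter r.
r = 0.5
c, s = np.cosh(2.0 * r), np.sinh(2.0 * r)
Z, I2 = np.diag([1.0, -1.0]), np.eye(2)
sigma = 0.5 * np.block([[c * I2, s * Z], [s * Z, c * I2]])
print(mean_photons(sigma))  # both modes: sinh(r)^2 ≈ 0.2715
```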
## III Frequency beam splitter
Figure 4: Simulation of a frequency beamsplitter, mapping the bins $A_{1}$ and
$A_{2}$ to bins $O_{1}$ and $O_{2}$. a) Analogous spatial domain scenario b)
Joint spectral amplitude (JSA) of the PDC. Green dots show the perfect two-
mode squeezed JSA between bins $A_{1}$ and $A_{2}$ c) Transfer function of the
mQPG. d) Absolute value of covariance matrix between bins $A_{1}$ and $A_{2}$
after PDC, e) and between bins $O_{1}$ and $O_{2}$ after mQPG.
As an instructive example of our scheme we simulate the implementation of a
simple LOQN, namely the interference of both modes of a two-mode squeezed
(TMS) state on a balanced beamsplitter. For this we expect two independent
single mode squeezed (SMS) states in the output, since this scenario is the
reverse of the well known generation of TMS states by interfering SMS states
on a beamsplitter [37].
The scenario is depicted in Fig. 4, where we summarise the simulation by
displaying the JSA and TF utilised as input for the calculation together with
the resulting covariance matrices both after the PDC and at the output of the
LOQN. To keep the results as general as possible we define the spectral
dimensions (bin width, positions etc.) in terms of the simulation’s input
range $\Delta\omega_{in}$, which bounds the simulation area. In an
experimental setting, this range can be understood as the bandwidth over which
our scheme can operate and which is limited, for example, by the finite pump
spectrum of the mQPG. To highlight the experimental feasibility of our scheme
we provide simulations of realistically achievable nonlinear processes in
periodically poled LiNbO3 waveguides in Appendix A, according to which we
model our idealised simulations presented here. This results in a JSA which is
approximated as a Gaussian cross-section of width
$\text{FWHM}_{JSA}=0.05\cdot\Delta\omega_{in}$ oriented along the anti-
diagonal (compare Fig 4b). We normalize this JSA to a mean photon number of
$\overline{n}=1$ within the simulation region, to obtain experimentally
realistic squeezing values. The frequency bin beamsplitter, on the other hand,
is modeled by a TF of the form (8), where we consider Gaussian
shapes for all modes ($A_{1}$, $A_{2}$, $O_{1}$, $O_{2}$). The input bins were
chosen to have a width $\text{FWHM}_{bin}=0.1\cdot\Delta\omega_{in}$, larger
than $\text{FWHM}_{JSA}$.
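The normalization of the JSA to a target mean photon number can be sketched
numerically: a discretized Gaussian anti-diagonal JSA is Schmidt-decomposed
via an SVD, and a single scale factor (proportional to the pump amplitude,
with time-ordering corrections neglected as in Appendix C) is bisected until
$\overline{n}=\sum_{k}\sinh^{2}(r_{k})=1$. Grid size and the bisection routine
are illustrative choices, not taken from our simulations.

```python
import numpy as np

N = 200
w = np.linspace(-0.5, 0.5, N)                      # frequency grid, units of dw_in
W, Wp = np.meshgrid(w, w, indexing="ij")
fwhm = 0.05                                        # FWHM_JSA = 0.05 * dw_in
sig = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))    # Gaussian sigma from FWHM
jsa = np.exp(-((W + Wp) ** 2) / (2.0 * sig ** 2))  # stripe along the anti-diagonal

# Schmidt coefficients r_k scale linearly with the pump amplitude, so a single
# scale factor tunes the mean photon number  n = sum_k sinh(r_k)^2.
s = np.linalg.svd(jsa, compute_uv=False)

def mean_n(scale):
    return float(np.sum(np.sinh(scale * s) ** 2))

target = 1.0
lo, hi = 0.0, 1.0
while mean_n(hi) < target:                         # bracket the root
    hi *= 2.0
for _ in range(60):                                # bisection on the scale factor
    mid = 0.5 * (lo + hi)
    if mean_n(mid) < target:
        lo = mid
    else:
        hi = mid
scale = 0.5 * (lo + hi)
print(mean_n(scale))                               # ≈ 1.0
```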
First, we only consider the PDC and calculate the covariance matrix between
two bins $A_{1}$ and $A_{2}$ which are placed symmetrically around the
degeneracy point at $\omega_{0}$. For this we apply (18) to the broadband
operators $\hat{A}_{1}$ and $\hat{A}_{2}$ and then evaluate (20) for the
corresponding quadrature operators. As expected from the discussion above, the
resulting covariance matrix (compare Fig. 4d) represents a TMS state. This is
evident from the sub-matrices of the individual modes, which show noise above
the vacuum level of 0.5 (as one would expect from a thermal state), while
being correlated when considered as a joint system.
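This covariance-matrix structure can also be written down in closed form. The
following minimal numpy sketch builds the ideal TMS covariance matrix in the
$(\hat{X}_{1},\hat{Y}_{1},\hat{X}_{2},\hat{Y}_{2})$ ordering of Eq. (19)
(vacuum variance 0.5) and confirms that the single-mode diagonals are thermal
while the joint state is pure, $\det\sigma=(1/2)^{2N}$.

```python
import numpy as np

def tms_covariance(r):
    """Ideal two-mode squeezed vacuum covariance matrix in the
    (X1, Y1, X2, Y2) ordering of Eq. (19); vacuum variance is 0.5."""
    c, s = np.cosh(2.0 * r), np.sinh(2.0 * r)
    Z, I2 = np.diag([1.0, -1.0]), np.eye(2)
    return 0.5 * np.block([[c * I2, s * Z],
                           [s * Z, c * I2]])

sigma = tms_covariance(0.5)
print(np.diag(sigma))        # single-mode variances 0.5*cosh(2r) > 0.5 (thermal)
print(np.linalg.det(sigma))  # (1/2)^4 = 0.0625, i.e. the joint state is pure
```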
The covariance matrix between the output modes $O_{1}$ and $O_{2}$ after the
mQPG is derived by applying our theoretical model of the complete LOQN to
discretized versions (1500x1500 points) of the JSA and TF. The resulting
covariance matrix (depicted in Fig. 4e) shows two independent SMS states,
which becomes apparent from the two quadrature variances (diagonal elements)
that are squeezed below the vacuum level. As previously discussed, this is
the expected result for the interference of a TMS state on a beamsplitter and
therefore establishes the capability of our scheme to implement LOQNs, even
when realistic PDC sources with a finite JSA width are considered.
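The same reverse relation can be checked directly in the symplectic picture:
applying the covariance-matrix transform $\sigma\to S\sigma S^{T}$ of a
balanced beamsplitter to an ideal TMS state yields two decoupled single-mode
squeezed blocks. The sketch below uses the standard Gaussian formalism
[39, 40] rather than our full JSA/TF model.

```python
import numpy as np

r = 0.5
c, s = np.cosh(2.0 * r), np.sinh(2.0 * r)
Z, I2 = np.diag([1.0, -1.0]), np.eye(2)
sigma_tms = 0.5 * np.block([[c * I2, s * Z],
                            [s * Z, c * I2]])  # ideal TMS covariance matrix

# Symplectic matrix of a balanced beamsplitter acting on (X1, Y1, X2, Y2).
S = np.block([[I2, I2], [I2, -I2]]) / np.sqrt(2.0)
sigma_out = S @ sigma_tms @ S.T

# The output modes decouple (off-diagonal blocks vanish) and each block is
# single-mode squeezed: diag(e^{2r}, e^{-2r})/2 and diag(e^{-2r}, e^{2r})/2.
print(np.round(sigma_out, 6))
```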
To better understand the limits of our scheme, we explore the quality of the
output state for varying widths of the input bins $A_{k}$. Here, we only
consider the even output ($A_{1}+A_{2}$) of the mQPG. We quantify the quality
of the output state by calculating the purity and squeezing strength of this
state from the resulting covariance matrix. The purity is given by
$\gamma=\text{tr}(\rho_{out}^{2})=1/(2^{N}\sqrt{\text{det}(\sigma)})$ [39] and
the squeezing strength in dB as $S=-10\log_{10}(2a)$, where $a$ is the
minimal eigenvalue of $\sigma$ [42]. We simulate these quantities for input
bins in a range from $\text{FWHM}_{bin}\approx 0$ to
$\text{FWHM}_{bin}=0.15\Delta\omega_{in}$ and for three different
normalizations of the JSA. These normalizations correspond to different pump
strengths of the PDC process and are chosen to represent JSAs with mean photon
numbers of 0.25, 1 and 2. The results are depicted in Fig. 5.
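For reference, both figures of merit can be evaluated directly from a
covariance matrix. The helper functions below implement the quoted formulas
(with the decibel convention $S=-10\log_{10}(2a)$) and reproduce, as a check,
a pure single-mode squeezed state with roughly 3 dB of squeezing; the
numerical squeezing parameter is an illustrative choice.

```python
import numpy as np

def purity(sigma):
    """gamma = tr(rho^2) = 1 / (2^N sqrt(det sigma)), N = number of modes."""
    n_modes = sigma.shape[0] // 2
    return 1.0 / (2.0 ** n_modes * np.sqrt(np.linalg.det(sigma)))

def squeezing_db(sigma):
    """S = -10 log10(2a), with a the minimal eigenvalue of sigma."""
    a = np.linalg.eigvalsh(sigma).min()
    return -10.0 * np.log10(2.0 * a)

r = 0.3454                                 # squeezing parameter, ~3 dB
sigma_sms = 0.5 * np.diag([np.exp(2.0 * r), np.exp(-2.0 * r)])
print(purity(sigma_sms))                   # 1.0 for a pure state
print(squeezing_db(sigma_sms))             # ≈ 3.0 dB
```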
Figure 5: Squeezing and purity calculated from the covariance matrix after the
frequency beam splitter for different input bin widths $\text{FWHM}_{bin}$. The
different line types correspond to different normalizations of the JSA, which
is proportional to the pump strength.
One immediately sees from Fig. 5 that a minimum in purity occurs
for bins which are smaller than the width of the JSA. This can be explained by
strong edge effects during the coherent filtering. Further, no clear optimal
regime for operating the LOQN is observable; instead, purity and squeezing
continuously improve in the limit of larger bins. This result is in contrast to
heralded single photon sources from type-2 PDC, where strong spectral
intensity filtering on the herald results in highly pure heralded states [38],
and showcases the fundamentally different behaviour of a coherent filter.
Figure 6: Investigation of squeezing and purity of the single-mode squeezed
state in the output channel of a mQPG after filtering from a type-0 PDC state,
for varying frequency bin width and number. The white area is inaccessible,
because neighboring bins overlap. We investigate the cases of equal and
alternating (0 and $\pi$) phase. Top: purity and bottom: squeezing in the
output channel. Three different JSAs are considered with
$\text{FWHM}_{JSA}=0.05,0.02,0.01\Delta\omega_{in}$. The white dashed lines
correspond to a threshold purity of $\gamma_{0}=0.99$ and the black dashed
lines correspond to a squeezing value of $S_{0}=3\,$dB.
## IV Scaling
We argue that our scheme is an excellent candidate for the resource efficient
scaling of fully programmable LOQN to higher numbers of contributing modes,
since the complete network can be achieved in only two non-linear waveguides.
To understand the fundamental limits and get an estimate of achievable
dimensionality of the systems we perform simulations to investigate how many
bins can be implemented within the given spectral window $\Delta\omega_{in}$.
For this, we consider a single output mQPG with $N$ input bins. Here, we use
box shaped bins $A_{k}$ with a width of $D$, to make use of the complete
spectral range. The bins are positioned maximally spaced, equally distributed
and symmetrically placed around the degeneracy point. For the PDC we consider
the JSA from the previous section, with different widths $\text{FWHM}_{JSA}$,
all normalized to a mean photon number of 2. To account for different
programmings of the LOQN we consider the two extremal cases of equal and
alternating phases ($0$ and $\pi$) between neighboring bins for which SMS
states are expected in the output. Purity and squeezing strength are depicted
in Fig. 6 for varying bin widths and number of used bins.
The upper left corner, representing large bins with sufficient separation, is,
as expected, the only area providing good purity and squeezing values for both
cases. Therefore, the LOQN can only operate in this specific region. However,
it also becomes apparent that for thinner JSAs the usable area becomes larger
and more homogeneous, thereby demonstrating that the dimensionality of LOQN
reachable with our approach goes well beyond the two modes of the frequency
bin beamsplitter.
We also want to highlight that the investigated widths of the JSAs are well
achievable with state-of-the-art LiNbO3 waveguides. The thinnest JSA, with
$\text{FWHM}_{JSA}=0.01\Delta\omega_{in}$, for example, well approximates the
JSA achievable in a 4 cm long waveguide on an input window $\Delta\omega_{in}$
corresponding to 50 nm centered at 1550 nm. In Appendix B we display a rough
estimate of the accessible dimensionalities of our scheme. We find that, with
state-of-the-art mQPGs, input numbers in the hundreds could be expected. One
limitation of our scheme is the realization of high numbers of output modes,
owing to the fact that the different outputs have to share the same pump
bandwidth. However, it is possible to cascade multiple mQPGs, since all
superposition modes which are not addressed pass the device
unconverted. These modes can therefore be accessed by a consecutive mQPG
corresponding to different outputs, albeit at the cost of increasing the
number of waveguides required for implementation.
## V Discussion
The scheme presented in this work considers frequency bins as a basis for the
LOQN, since these are relatively easy to shape and control. But in principle
the scheme can be implemented in many other TM bases, e.g., Hermite-Gaussian
modes. For these we have found similar results, with the difference that for
centered HG modes the input states of the LOQN are SMS instead of TMS. Further
we want to highlight, again, that our scheme does not assume any specific
detection method and even the use of different detection methods in different
output channels can be imagined. When only detection in the photon number
basis is considered our scheme does not require any phase stability between
PDC and mQPG. This is because both non-linear processes are intrinsically
phase stable and a relative phase between them only results in an unknown
global phase of the output modes, which is not detectable in the photon number
basis. In this case two repetition-rate-locked pump laser sources for PDC and
mQPG are sufficient for an implementation of the LOQN.
Moreover, we want to mention that we assume perfect mQPGs (unity conversion
efficiency and perfect mapping of modes) throughout this work, because we want
to focus on the fundamental limits of the presented scheme. However, our
theoretical framework also allows us to study more complicated scenarios
including imperfections, since it only considers a general TF and JSA as
input. This, for example, allows us to include multi-mode effects in the outputs of
the mQPG, which can occur for imperfect PM functions. In this case one output
of the mQPG is described by a larger covariance matrix, which describes all
modes contributing to said output.
## VI Conclusion
In this work we have presented a novel scheme for the implementation of LOQNs
based on frequency conversion, which utilises so-called multi-output quantum
pulse gates. This approach allows one to construct fully programmable and
inherently phase-stable multi-port interferometers on a frequency bin basis. We
demonstrate the feasibility of this approach and its natural compatibility
with broadband squeezing sources, by performing simulations based on a
detailed theoretical model in the continuous variable picture.
A potential experimental implementation of LOQNs based on this approach
requires only two nonlinear waveguides for the highly multi-mode input state
generation and the programmable interferometer. In contrast to other encodings
(e.g. spatial or temporal domain) the achievable dimensionality of this LOQN
is mainly limited by spectral shaping resolution and not by the number of
utilised components (e.g. beamsplitters). Due to this, the relatively low
demand on required components, and the inherent compatibility with integrated
optical platforms, we believe that this approach is a promising candidate for
scaling up LOQNs towards practical applications. We find that with
state-of-the-art mQPGs a few hundred input modes are feasible. However, reducing the
phasematching width of mQPGs, by for example utilising resonators, could allow
for much larger networks. We expect our approach to become an enabling
platform for future quantum technologies thanks to its inherent scalability,
full programmability, and ease of experimental implementation.
## VII Acknowledgement
The authors thank J. Sperling and M. Santandrea for helpful discussions. This
work was supported in part by the European Commission H2020-FET-OPEN-RIA
(STORMYTUNE) under Grant 899587.
## VIII Comments
During the preparation of the manuscript we became aware of similar work [43].
## References
* Aaronson and Arkhipov [2013] S. Aaronson and A. Arkhipov, Theory of Computing 9, 143 (2013).
* Hamilton _et al._ [2017] C. S. Hamilton, R. Kruse, L. Sansoni, S. Barkhofen, C. Silberhorn, and I. Jex, Physical Review Letters 119, 170501 (2017), arXiv: 1612.01199.
* Kruse _et al._ [2019] R. Kruse, C. S. Hamilton, L. Sansoni, S. Barkhofen, C. Silberhorn, and I. Jex, Physical Review A 100, 032326 (2019), arXiv: 1801.07488.
* Menicucci _et al._ [2006] N. C. Menicucci, P. van Loock, M. Gu, C. Weedbrook, T. C. Ralph, and M. A. Nielsen, Physical Review Letters 97, 110501 (2006).
* Gu _et al._ [2009] M. Gu, C. Weedbrook, N. C. Menicucci, T. C. Ralph, and P. van Loock, Physical Review A 79, 062318 (2009).
* van Loock and Braunstein [2000] P. van Loock and S. L. Braunstein, Physical Review Letters 84, 3482 (2000), publisher: American Physical Society.
* Yonezawa _et al._ [2004] H. Yonezawa, T. Aoki, and A. Furusawa, Nature 431, 430 (2004), number: 7007 Publisher: Nature Publishing Group.
* Childs [2009] A. M. Childs, Physical Review Letters 102, 180501 (2009).
* Venegas-Andraca [2012] S. E. Venegas-Andraca, Quantum Information Processing 11, 1015 (2012).
* Schreiber _et al._ [2010] A. Schreiber, K. N. Cassemiro, V. Potoček, A. Gábris, P. J. Mosley, E. Andersson, I. Jex, and C. Silberhorn, Physical Review Letters 104, 050502 (2010).
* Huh _et al._ [2015] J. Huh, G. G. Guerreschi, B. Peropadre, J. R. McClean, and A. Aspuru-Guzik, Nature Photonics 9, 615 (2015).
* Banchi _et al._ [2020] L. Banchi, M. Fingerhuth, T. Babej, C. Ing, and J. M. Arrazola, Science Advances 6, eaax1950 (2020).
* Zhong _et al._ [2020] H.-S. Zhong, H. Wang, Y.-H. Deng, M.-C. Chen, L.-C. Peng, Y.-H. Luo, J. Qin, D. Wu, X. Ding, Y. Hu, P. Hu, X.-Y. Yang, W.-J. Zhang, H. Li, Y. Li, X. Jiang, L. Gan, G. Yang, L. You, Z. Wang, L. Li, N.-L. Liu, C.-Y. Lu, and J.-W. Pan, Science 370, 1460 (2020).
* Madsen _et al._ [2022] L. S. Madsen, F. Laudenbach, M. F. Askarani, F. Rortais, T. Vincent, J. F. F. Bulmer, F. M. Miatto, L. Neuhaus, L. G. Helt, M. J. Collins, A. E. Lita, T. Gerrits, S. W. Nam, V. D. Vaidya, M. Menotti, I. Dhand, Z. Vernon, N. Quesada, and J. Lavoie, Nature 606, 75 (2022).
* Lu _et al._ [2018a] H.-H. Lu, J. M. Lukens, N. A. Peters, B. P. Williams, A. M. Weiner, and P. Lougovski, Optica 5, 1455 (2018a).
* Lu _et al._ [2018b] H.-H. Lu, J. M. Lukens, N. A. Peters, O. D. Odele, D. E. Leaird, A. M. Weiner, and P. Lougovski, Physical Review Letters 120, 030502 (2018b).
* Lu _et al._ [2020] H.-H. Lu, E. M. Simmerman, P. Lougovski, A. M. Weiner, and J. M. Lukens, Physical Review Letters 125, 120503 (2020).
* Kues _et al._ [2019] M. Kues, C. Reimer, J. M. Lukens, W. J. Munro, A. M. Weiner, D. J. Moss, and R. Morandotti, Nature Photonics 13, 170 (2019).
* Kues _et al._ [2017] M. Kues, C. Reimer, P. Roztocki, L. R. Cortés, S. Sciara, B. Wetzel, Y. Zhang, A. Cino, S. T. Chu, B. E. Little, D. J. Moss, L. Caspani, J. Azaña, and R. Morandotti, Nature 546, 622 (2017).
* Lu _et al._ [2023] H.-H. Lu, M. Liscidini, A. L. Gaeta, A. M. Weiner, and J. M. Lukens, Optica 10, 1655 (2023).
* Roslund _et al._ [2014] J. Roslund, R. M. de Araújo, S. Jiang, C. Fabre, and N. Treps, Nature Photonics 8, 109 (2014).
* Cai _et al._ [2017] Y. Cai, J. Roslund, G. Ferrini, F. Arzani, X. Xu, C. Fabre, and N. Treps, Nature Communications 8, 15645 (2017).
* Cai _et al._ [2021] Y. Cai, J. Roslund, V. Thiel, C. Fabre, and N. Treps, npj Quantum Information 7, 82 (2021).
* Serino _et al._ [2023] L. Serino, J. Gil-Lopez, M. Stefszky, R. Ricken, C. Eigner, B. Brecht, and C. Silberhorn, PRX Quantum 4, 020306 (2023).
* Brecht _et al._ [2015] B. Brecht, D. V. Reddy, C. Silberhorn, and M. Raymer, Physical Review X 5, 041017 (2015).
* Fabre and Treps [2020] C. Fabre and N. Treps, Reviews of Modern Physics 92, 035005 (2020).
* Brecht _et al._ [2014] B. Brecht, A. Eckstein, R. Ricken, V. Quiring, H. Suche, L. Sansoni, and C. Silberhorn, Physical Review A 90, 030302 (2014), publisher: American Physical Society.
* Christ _et al._ [2013] A. Christ, B. Brecht, W. Mauerer, and C. Silberhorn, New Journal of Physics 15, 053038 (2013).
* Eckstein _et al._ [2011] A. Eckstein, B. Brecht, and C. Silberhorn, Optics Express 19, 13770 (2011).
* Note [1] This assumption ensures a single mode character of the conversion process eliminating frequency correlations.
* Ansari _et al._ [2018] V. Ansari, J. M. Donohue, B. Brecht, and C. Silberhorn, Optica 5, 534 (2018).
* Reddy and Raymer [2018] D. V. Reddy and M. G. Raymer, Optica 5, 423 (2018).
* Ma _et al._ [2023] Z. Ma, J.-Y. Chen, M. Garikapati, Z. Li, C. Tang, Y. M. Sua, and Y.-P. Huang, Physical Review Applied 20, 044033 (2023).
* Kouadou _et al._ [2023] T. Kouadou, F. Sansavini, M. Ansquer, J. Henaff, N. Treps, and V. Parigi, APL Photonics 8, 086113 (2023).
* Christ _et al._ [2011] A. Christ, K. Laiho, A. Eckstein, K. N. Cassemiro, and C. Silberhorn, New Journal of Physics 13, 033027 (2011).
* Roman-Rodriguez _et al._ [2021] V. Roman-Rodriguez, B. Brecht, S. K, C. Silberhorn, N. Treps, E. Diamanti, and V. Parigi, New Journal of Physics 23, 043012 (2021).
* Weedbrook _et al._ [2012] C. Weedbrook, S. Pirandola, R. García-Patrón, N. J. Cerf, T. C. Ralph, J. H. Shapiro, and S. Lloyd, Reviews of Modern Physics 84, 621 (2012).
* Christ _et al._ [2014] A. Christ, C. Lupo, M. Reichelt, T. Meier, and C. Silberhorn, Physical Review A 90, 023823 (2014), arXiv: 1403.2886.
* Ferraro _et al._ [2005] A. Ferraro, S. Olivares, and M. G. A. Paris, arXiv:quant-ph/0503237 (2005), arXiv: quant-ph/0503237.
* Braunstein and van Loock [2005] S. L. Braunstein and P. van Loock, Quantum information with continuous variables 77, 65 (2005).
* Fitzke _et al._ [2023] E. Fitzke, F. Niederschuh, and T. Walther, APL Photonics 8, 026106 (2023), publisher: American Institute of Physics.
* Simon _et al._ [1994] R. Simon, N. Mukunda, and B. Dutta, Physical Review A 49, 1567 (1994).
* Presutti _et al._ [2024] F. Presutti, L. G. Wright, S.-Y. Ma, T. Wang, B. K. Malia, T. Onodera, and P. L. McMahon, (2024), arXiv:2401.06119 [physics, physics:quant-ph].
* Chou _et al._ [1999] M. H. Chou, K. R. Parameswaran, M. M. Fejer, and I. Brener, Optics Letters 24, 1157 (1999).
* Gil-Lopez _et al._ [2021] J. Gil-Lopez, M. Santandrea, G. Roeland, B. Brecht, C. Eigner, R. Ricken, V. Quiring, and C. Silberhorn, New Journal of Physics 23, 063082 (2021).
* Law _et al._ [2000] C. K. Law, I. A. Walmsley, and J. H. Eberly, Physical Review Letters 84, 5304 (2000).
## Appendix A Simulated experiment
Figure 7: Simulations of left: joint spectral amplitude from a type-0 PDC
process in LiNbO3 waveguide. right: the transfer function of a two-output mQPG
implementing a frequency beamsplitter (represented by the dotted box)
In the main text we consider idealised systems, however to demonstrate the
feasibility of the proposed systems we here provide simulations of
realistically achievable nonlinear processes. These simulations are based on
the Sellmeier equations of titanium in-diffused LiNbO3 waveguides. In Fig. 7a
the joint spectral amplitude of a 1 cm long waveguide, pumped with a 3 ps long
pulsed laser at 775 nm, is depicted. To achieve degeneracy at 1550 nm a poling
period of 16.93 $\mu$m is considered. Note that no sinc sidelobes are
visible, since the pump width of 0.3 nm is narrower than the phasematching.
For the simulation of the transfer function of a two-output mQPG (depicted in
Fig. 7b) we consider a poling period of 4.33 $\mu$m, a 1 cm long waveguide and
the superstructure presented in Ref. [44]. To simulate a frequency bin
beamsplitter as discussed in the main text we consider a pump which is
composed of four 3 nm wide bins. The bins are centered around a central
wavelength of 860 nm and could, for example, be carved out from a 100 fs long
pulse. Note that these simulations utilise conservative assumptions for the
design parameters, e.g. mQPG waveguides with lengths around 7 cm are obtainable.
## Appendix B Scalability of the Approach
Here we estimate the scalability of our approach to higher dimensions. We
measure this dimensionality in terms of the number of achievable input bins
$N_{in}$. This number is fundamentally limited by four factors: 1) the
spectral range $\Delta\omega_{in}$ over which the type-0 PDC can provide TMS
states between the frequency bins. 2) the pump bandwidth $\Delta\omega_{pump}$
of the mQPG which also limits the available input range. This bandwidth also
has to be divided by the number of output bins $N_{out}$, since each output
requires an equally broad pump region. 3) the phasematching width
$\delta_{mQPG}$ of the mQPG because the mQPG is working under the assumption
that the PM is narrower than the pump structure (bins). 4) the PM width
$\delta_{PDC}$ of the PDC, since the LOQNs operation is limited by this number
as discussed in the main text.
Of these, the first two points limit the available input range while the latter
two limit the minimal bin size; therefore we estimate the number of available
input bins by
$\displaystyle N_{in}$ $\displaystyle=\frac{\text{available input
range}}{\text{minimal bin size}}$
$\displaystyle=\frac{\text{min}(\Delta\omega_{in},\Delta\omega_{pump}/N_{out})}{\text{max}(\delta_{PDC},\delta_{mQPG})}.$
(23)
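As a sanity check of Eq. (23), the estimate is a one-line function. The
numbers below, in particular the 20 GHz mQPG phasematching width and the
chosen PDC width and input range, are illustrative assumptions picked to
reproduce the roughly 200 input bins quoted in the text for a 7 cm device
with a 4 THz pump.

```python
def n_input_bins(input_range, pump_bw, n_out, pm_pdc, pm_mqpg):
    """Eq. (23): available input range divided by the minimal bin size."""
    return min(input_range, pump_bw / n_out) / max(pm_pdc, pm_mqpg)

# Illustrative values (Hz): a 4 THz mQPG pump, a single output, and an
# assumed 20 GHz mQPG phasematching width (the PDC phasematching narrower).
print(n_input_bins(input_range=6e12, pump_bw=4e12, n_out=1,
                   pm_pdc=10e9, pm_mqpg=20e9))  # 200.0
```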
The results of this estimation are depicted in Fig. 8, together with the
limits set by experimentally demonstrated mQPGs [24, 45]. Considering a 7 cm
long mQPG together with a 4 THz pump spectrum, for example, could allow for
systems with 200 input bins.
Figure 8: Estimation of the achievable number of input bins for different
parameters of the available bandwidth of the network (vertical axis) and for
different phasematching widths (horizontal axis). The horizontal white lines
correspond to a mQPG with a pump bandwidth of 4 THz and different numbers of
outputs. The vertical lines correspond to mQPGs with different lengths.
## Appendix C Theory of Type-0 PDC
A type-0 PDC in a single spatial mode and polarization (e.g. in waveguides) can
be described by the unitary operator [35]
$\displaystyle\hat{U}_{PDC}=\exp\left(-\frac{i}{\hbar}\int\text{d}\omega_{i}\text{d}\omega^{\prime}_{i}\;\text{f}(\omega_{i},\omega^{\prime}_{i})\hat{b}^{\dagger}(\omega_{i})\hat{b}^{\dagger}(\omega^{\prime}_{i})\right.$
$\displaystyle\left.\;+\;\text{h.c.}\right).$ (24)
Therein, $\text{f}(\omega_{i},\omega^{\prime}_{i})$ is the joint spectral
amplitude (JSA) of the process. Here, we neglect time ordering effects, which
become relevant for very strong pump fields [28]. In a type-0 PDC signal and
idler are indistinguishable and therefore the JSA has to fulfil
$f(\omega_{i},\omega^{\prime}_{i})=f(\omega^{\prime}_{i},\omega_{i})$. A
common approach in describing PDC states is by performing a Schmidt
decomposition of the JSA
$\displaystyle-\frac{i}{\hbar}\text{f}(\omega_{i},\omega^{\prime}_{i})$
$\displaystyle=\sum_{k}r_{k}^{P}\phi_{k}^{P*}(\omega_{i})\phi_{k}^{P*}(\omega^{\prime}_{i})$
(25)
which results in a set of orthogonal Schmidt-modes
$\left\\{\phi_{k}^{P}(\omega_{i})\right\\}$ with Schmidt-coefficients
$r_{k}^{P}$ [46]. These modes are equal for signal and idler because they are
indistinguishable. By defining the operators
$\hat{\phi}_{k}^{\dagger}:=\int\text{d}\omega_{i}\;\phi_{k}^{P*}(\omega_{i})\hat{b}^{\dagger}(\omega_{i})$,
the Schmidt decomposition allows us to rewrite the unitary (24) as
$\displaystyle\hat{U}_{PDC}=\bigotimes_{k}\exp\left[r_{k}^{P}(\hat{\phi}_{k}^{\dagger})^{2}\;+\;\text{h.c.}\right]=\bigotimes_{k}\hat{S}_{k}^{(SMS)}(r_{k}^{P}),$
(26)
which corresponds to multiple independent single mode squeezing operators on
the different Schmidt modes. However, besides this fundamental structure of
type-0 PDC sources we show in the main text, that in the case of very multi-
mode PDC, also two-mode squeezed states can be extracted from such a source.
In the Heisenberg picture, the unitary (24) takes the form of a linear
Bogoliubov transformation [28]:
$\displaystyle\hat{b}^{\prime}(\omega_{i})=\int\text{d}\omega^{\prime}_{i}\;U^{P}(\omega_{i},\omega^{\prime}_{i})\hat{b}(\omega^{\prime}_{i})+\int\text{d}\omega^{\prime}_{i}\;V^{P}(\omega_{i},\omega^{\prime}_{i})\hat{b}^{\dagger}(\omega^{\prime}_{i}).$
(27)
Here, $U^{P}$ and $V^{P}$ can be expressed with the help of the Schmidt modes $\phi_{k}^{P}(\omega_{i})$ and can therefore be obtained directly from the JSA. They have the form [28]
$\displaystyle U^{P}(\omega_{i},\omega^{\prime}_{i})$
$\displaystyle=\sum_{k}\phi_{k}^{P*}(\omega_{i})\cosh(r_{k}^{P})\phi_{k}^{P}(\omega^{\prime}_{i})$
$\displaystyle V^{P}(\omega_{i},\omega^{\prime}_{i})$
$\displaystyle=\sum_{k}\phi_{k}^{P*}(\omega_{i})\sinh(r_{k}^{P})\phi_{k}^{P*}(\omega^{\prime}_{i}).$
(28)
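On a discrete grid, the kernels (28) can be assembled directly from the Schmidt data; a useful consistency check is that they preserve the bosonic commutation relations. The random orthonormal modes and the decay profile of $r_k$ below are synthetic placeholders, not values derived from an actual JSA.

```python
import numpy as np

# Assemble discretized Bogoliubov kernels U^P, V^P of Eq. (28) from a
# set of real orthonormal Schmidt modes and assumed squeezing values
# r_k.  Both the modes and the r_k are synthetic placeholders.
rng = np.random.default_rng(0)
n = 64
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
Phi = Q.T                              # rows: placeholder Schmidt modes
r = 0.8 * np.exp(-0.3 * np.arange(n))  # assumed Schmidt coefficients

UP = Phi.T @ np.diag(np.cosh(r)) @ Phi
VP = Phi.T @ np.diag(np.sinh(r)) @ Phi

# Canonical commutation relations are preserved iff U U^T - V V^T = 1;
# the mean photon number of the PDC state is sum_k sinh^2(r_k).
n_mean = np.trace(VP @ VP.T)
print(n_mean)
```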
## Appendix D Theory of SFG
Because the multi-output quantum pulse gate is based on a sum-frequency generation (SFG) process, it can be described by the unitary operator of a general SFG process [28]
$\displaystyle\hat{U}_{SFG}=\exp\left(-\frac{i}{\hbar}\int\text{d}\omega_{i}\text{d}\omega_{o}\;\text{G}(\omega_{i},\omega_{o})\hat{a}^{\dagger}(\omega_{o})\hat{b}(\omega_{i})\right.$
$\displaystyle\left.\;+\;\text{h.c.}\right).$ (29)
Here, $G(\omega_{i},\omega_{o})$ is the transfer function (TF) of the process, which describes how the input frequencies $\omega_{i}$ are converted to the output frequencies $\omega_{o}$. Note that we choose one of the input fields of the mQPG to be represented by the same operators $\hat{b}(\omega_{i})$ as the field of the PDC process.
In the Heisenberg picture, the SFG process takes the form of the Bogoliubov transformations [28]
$\displaystyle\hat{b}^{\prime\prime}(\omega_{i})$
$\displaystyle=\int\text{d}\omega^{\prime}_{i}\;U^{Q}_{b}(\omega_{i},\omega^{\prime}_{i})\hat{b}^{\prime}(\omega^{\prime}_{i})$
$\displaystyle\qquad+\int\text{d}\omega^{\prime}_{o}\;V^{Q}_{b}(\omega_{i},\omega^{\prime}_{o})\hat{a}^{\prime}(\omega^{\prime}_{o})$
$\displaystyle\hat{a}^{\prime\prime}(\omega_{o})$
$\displaystyle=\int\text{d}\omega_{o}^{\prime}\;U^{Q}_{a}(\omega_{o},\omega^{\prime}_{o})\hat{a}^{\prime}(\omega^{\prime}_{o})$
$\displaystyle\qquad-\int\text{d}\omega^{\prime}_{i}\;V^{Q}_{a}(\omega_{o},\omega^{\prime}_{i})\hat{b}^{\prime}(\omega^{\prime}_{i}).$
(30)
The functions U and V can again be calculated by performing a Schmidt
decomposition of the TF which takes the form
$\displaystyle-\frac{i}{\hbar}\text{G}(\omega_{i},\omega_{o})=-\sum_{k}r_{k}^{Q}\phi_{k}^{Q}(\omega_{i})\psi_{k}^{Q*}(\omega_{o})$
(31)
and results in the two orthonormal bases $\left\{\phi_{k}^{Q}(\omega_{i})\right\}$ and $\left\{\psi_{k}^{Q}(\omega_{o})\right\}$. This then allows us to connect the Schmidt modes to the Bogoliubov transformations via [28]
$\displaystyle U^{Q}_{b}(\omega_{i},\omega^{\prime}_{i})$
$\displaystyle=\sum_{k}\phi_{k}^{Q*}(\omega_{i})\cos(r_{k}^{Q})\phi_{k}^{Q}(\omega^{\prime}_{i})$
$\displaystyle V^{Q}_{b}(\omega_{i},\omega^{\prime}_{o})$
$\displaystyle=\sum_{k}\phi_{k}^{Q*}(\omega_{i})\sin(r_{k}^{Q})\psi_{k}^{Q}(\omega^{\prime}_{o})$
$\displaystyle U^{Q}_{a}(\omega_{o},\omega^{\prime}_{o})$
$\displaystyle=\sum_{k}\psi_{k}^{Q*}(\omega_{o})\cos(r_{k}^{Q})\psi_{k}^{Q}(\omega^{\prime}_{o})$
$\displaystyle V^{Q}_{a}(\omega_{o},\omega^{\prime}_{i})$
$\displaystyle=\sum_{k}\psi_{k}^{Q*}(\omega_{o})\sin(r_{k}^{Q})\phi_{k}^{Q}(\omega^{\prime}_{i}).$
(32)
Defining the broadband operators $\hat{R}_{k}=\int\text{d}\omega_{o}\psi_{k}^{Q}(\omega_{o})\hat{a}(\omega_{o})$ and $\hat{H}_{k}=\int\text{d}\omega_{i}\phi_{k}^{Q}(\omega_{i})\hat{b}(\omega_{i})$ corresponding to the Schmidt modes allows us to simplify the transformation (30) to
$\displaystyle\hat{H}^{\prime}_{k}$
$\displaystyle=\cos(r_{k}^{Q})\hat{H}_{k}+\sin(r_{k}^{Q})\hat{R}_{k}$ (33)
$\displaystyle\hat{R}^{\prime}_{k}$
$\displaystyle=\cos(r_{k}^{Q})\hat{R}_{k}-\sin(r_{k}^{Q})\hat{H}_{k}.$ (34)
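Equations (33)-(34) act on each Schmidt-mode pair $(\hat{H}_{k},\hat{R}_{k})$ as a beamsplitter-like rotation, which a two-line numerical check makes explicit; the coupling value $r_k$ below is an arbitrary illustrative choice.

```python
import numpy as np

# Eqs. (33)-(34) as a 2x2 rotation on the pair (H_k, R_k); the
# coupling r_k is an arbitrary illustrative value.
r_k = np.pi / 3
M = np.array([[np.cos(r_k), np.sin(r_k)],
              [-np.sin(r_k), np.cos(r_k)]])
# The map is passive (photon-number preserving), hence orthogonal;
# full frequency conversion H_k -> R_k occurs at r_k = pi/2.
eta = np.sin(r_k) ** 2               # conversion efficiency
print(eta)
```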
These equations have the same structure as (12); however, since we are considering a general SFG process, the modes $\hat{H}_{k}$ and $\hat{R}_{k}$ can spectrally overlap and are therefore not separately detectable via spectral multiplexing. This is one of the features enabled by considering a TF of the form (7), realizable in mQPGs, which converts to well-separated output modes $O_{k}$. In other words, the Schmidt modes of the mQPG with a TF of the form (7) are the superposition modes $S_{k}$ and the output modes $O_{k}$, with degenerate (equal-weight) Schmidt coefficients.
## Appendix E Combining PDC and mQPG
The goal of our model is to describe the output quantum state of the mQPG. Since each output channel corresponds to one mode $O_{k}$, this output state can be characterised in terms of the density matrix $\sigma$ in the basis of the output modes $O_{k}$ (compare Eq. (20)). We describe the dynamics of the two non-linear processes in the Heisenberg picture by consecutively applying (18) and (10) to the output operators
$\displaystyle\hat{O}^{\prime\prime}_{k}=\int\text{d}\omega_{o}O_{k}(\omega_{o})\hat{a}^{\prime\prime}(\omega_{o})$
(35)
and obtain
$\displaystyle\hat{O}^{\prime\prime}_{k}$
$\displaystyle=\int\text{d}\omega_{o}\;H^{1}_{k}(\omega_{o})\hat{a}^{\prime}(\omega_{o})$
$\displaystyle\qquad+\int\text{d}\omega_{i}H^{2}_{k}(\omega_{i})\hat{b}(\omega_{i})+H^{3}_{k}(\omega_{i})\hat{b}^{\dagger}(\omega_{i})$
(36)
where we have defined the functions
$\displaystyle H^{1}_{k}(\omega_{o})$
$\displaystyle=\int\text{d}\omega^{\prime}_{o}\;O_{k}(\omega^{\prime}_{o})U^{Q}_{a}(\omega^{\prime}_{o},\omega_{o})$
$\displaystyle H^{2}_{k}(\omega_{i})$
$\displaystyle=-\int\text{d}\omega^{\prime}_{o}\text{d}\omega^{\prime}_{i}\;O_{k}(\omega^{\prime}_{o})V^{Q}_{a}(\omega^{\prime}_{o},\omega^{\prime}_{i})U^{P}(\omega^{\prime}_{i},\omega_{i})$
$\displaystyle H^{3}_{k}(\omega_{i})$
$\displaystyle=-\int\text{d}\omega^{\prime}_{o}\text{d}\omega^{\prime}_{i}\;O_{k}(\omega^{\prime}_{o})V^{Q}_{a}(\omega^{\prime}_{o},\omega^{\prime}_{i})V^{P}(\omega^{\prime}_{i},\omega_{i}).$
(37)
These functions can be derived from a given JSA and TF by utilising (28) and (32). To describe the output state of the mQPG, we first observe that we can neglect the displacement (second term of (20)), since we assume vacuum states in front of the non-linear elements and do not consider seeding. Taking into account the operator ordering of (19), the covariance matrix can be constructed from the $2\times2$ submatrices
$\displaystyle\widetilde{\sigma}_{kl}=\begin{pmatrix}\left\langle\hat{X}_{k}\hat{X}_{l}\right\rangle+\left\langle\hat{X}_{l}\hat{X}_{k}\right\rangle,&\left\langle\hat{X}_{k}\hat{Y}_{l}\right\rangle+\left\langle\hat{Y}_{l}\hat{X}_{k}\right\rangle\\\
\left\langle\hat{Y}_{k}\hat{X}_{l}\right\rangle+\left\langle\hat{X}_{l}\hat{Y}_{k}\right\rangle,&\left\langle\hat{Y}_{k}\hat{Y}_{l}\right\rangle+\left\langle\hat{Y}_{l}\hat{Y}_{k}\right\rangle\end{pmatrix},$
(38)
where $k$ and $l$ label two modes from $\{\hat{O}_{k}\}$. The submatrices with $k=l$ describe the substates in the individual channels, while for $k\neq l$ they describe the quadrature covariances between two different output modes. To calculate these submatrices, we first express the individual elements in terms of the output operators and obtain
$\displaystyle\left\langle\hat{X}_{k}\hat{X}_{l}\right\rangle$
$\displaystyle=\frac{1}{2}\left\langle\
\hat{O}_{k}\hat{O}_{l}+\hat{O}_{k}\hat{O}_{l}^{\dagger}+\hat{O}_{k}^{\dagger}\hat{O}_{l}+\hat{O}_{k}^{\dagger}\hat{O}_{l}^{\dagger}\right\rangle$
$\displaystyle\left\langle\hat{X}_{k}\hat{Y}_{l}\right\rangle$
$\displaystyle=\frac{1}{2i}\left\langle\
\hat{O}_{k}\hat{O}_{l}-\hat{O}_{k}\hat{O}_{l}^{\dagger}+\hat{O}_{k}^{\dagger}\hat{O}_{l}-\hat{O}_{k}^{\dagger}\hat{O}_{l}^{\dagger}\right\rangle$
$\displaystyle\left\langle\hat{Y}_{k}\hat{X}_{l}\right\rangle$
$\displaystyle=\frac{1}{2i}\left\langle\
\hat{O}_{k}\hat{O}_{l}+\hat{O}_{k}\hat{O}_{l}^{\dagger}-\hat{O}_{k}^{\dagger}\hat{O}_{l}-\hat{O}_{k}^{\dagger}\hat{O}_{l}^{\dagger}\right\rangle$
$\displaystyle\left\langle\hat{Y}_{k}\hat{Y}_{l}\right\rangle$
$\displaystyle=\frac{-1}{2}\left\langle\
\hat{O}_{k}\hat{O}_{l}-\hat{O}_{k}\hat{O}_{l}^{\dagger}-\hat{O}_{k}^{\dagger}\hat{O}_{l}+\hat{O}_{k}^{\dagger}\hat{O}_{l}^{\dagger}\right\rangle.$
(39)
By assuming vacuum input states and inserting (37), we can then calculate the terms in (39), which results in
$\displaystyle\bra{0}\hat{O}_{k}\hat{O}_{l}\ket{0}$
$\displaystyle=\int\text{d}\omega_{i}\;H^{2}_{k}(\omega_{i})H^{3}_{l}(\omega_{i})$
(40) $\displaystyle\bra{0}\hat{O}_{k}\hat{O}_{l}^{\dagger}\ket{0}$
$\displaystyle=\int\text{d}\omega_{o}\;H^{1}_{k}(\omega_{o})H_{l}^{1*}(\omega_{o})$
$\displaystyle\qquad+\int\text{d}\omega_{i}\;H^{2}_{k}(\omega_{i})H_{l}^{2*}(\omega_{i})$
$\displaystyle\bra{0}\hat{O}_{k}^{\dagger}\hat{O}_{l}\ket{0}$
$\displaystyle=\int\text{d}\omega_{i}\;H_{k}^{3*}(\omega_{i})H^{3}_{l}(\omega_{i})$
$\displaystyle\bra{0}\hat{O}_{k}^{\dagger}\hat{O}_{l}^{\dagger}\ket{0}$
$\displaystyle=\int\text{d}\omega_{i}\;H_{k}^{3*}(\omega_{i})H_{l}^{2*}(\omega_{i}).$
(41)
This now allows us to calculate the complete covariance matrix at the output of the mQPG. We note that this approach can be applied to describe general systems comprised of a type-0 PDC and an SFG process, since it only requires a JSA and a TF as input. The output modes then take the form of the output Schmidt basis ($\psi_{k}^{Q}(\omega_{o})$) of the TF. This, for example, allows one to study multimode effects occurring in imperfect mQPGs.
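The full pipeline of this appendix can be prototyped on a frequency grid: build the PDC kernels $(U^{P},V^{P})$ and the mQPG kernels $(U^{Q}_{a},V^{Q}_{a})$, form the filter functions of (37), and assemble a $2\times2$ covariance submatrix (38) for $k=l$. The random mode shapes, the squeezing values, and the assumption of perfect mode matching with a full-conversion mQPG are all illustrative; for an idealized channel the result should reduce to a single-mode squeezed state, $\widetilde{\sigma}_{kk}=\mathrm{diag}(e^{2r^{P}_{k}},e^{-2r^{P}_{k}})$.

```python
import numpy as np

# End-to-end sketch of Appendix E for a single channel.  Mode shapes,
# squeezing values rP, and the perfect mode matching are illustrative.
rng = np.random.default_rng(1)
n = 80
PhiP, _ = np.linalg.qr(rng.standard_normal((n, n)))
PhiP = PhiP.T[:4]                    # rows: 4 PDC Schmidt modes phi_k
rP = np.array([0.9, 0.6, 0.3, 0.1])  # assumed PDC squeezing parameters
PsiQ, _ = np.linalg.qr(rng.standard_normal((n, n)))
PsiQ = PsiQ.T[:4]                    # rows: mQPG output modes O_k
rQ = np.full(4, np.pi / 2)           # full-conversion mQPG

UP = np.eye(n) + PhiP.T @ np.diag(np.cosh(rP) - 1) @ PhiP
VP = PhiP.T @ np.diag(np.sinh(rP)) @ PhiP
UQa = np.eye(n) + PsiQ.T @ np.diag(np.cos(rQ) - 1) @ PsiQ
VQa = PsiQ.T @ np.diag(np.sin(rQ)) @ PhiP   # perfect mode matching

def channel_covariance(k):
    Ok = PsiQ[k]
    H1 = Ok @ UQa                    # Eq. (37), real kernels
    H2 = -Ok @ VQa @ UP
    H3 = -Ok @ VQa @ VP
    OO, OOd = H2 @ H3, H1 @ H1 + H2 @ H2   # vacuum expectations, Eq. (41)
    OdO, OdOd = H3 @ H3, H3 @ H2
    xx = OO + OOd + OdO + OdOd       # diagonal entries of (38) for k = l
    yy = -(OO - OOd - OdO + OdOd)    # (cross X-Y terms vanish here)
    return np.diag([xx, yy])

sigma_00 = channel_covariance(0)     # expect diag(e^{2 rP_0}, e^{-2 rP_0})
print(sigma_00)
```

With this convention the vacuum covariance is the identity, and the unit determinant of the result reflects that the idealized channel state is pure.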
max}}dt\langle\Sigma_{0,1}^{(0,2)}(t)|B_{t}(V_{t})\varphi_{\rm o},$ (B.40)
where the surface state is defined as
$\langle\Sigma_{0,1}^{(0,2)}(t)|\Psi\coloneqq\langle
F_{t}\circ\Psi(0)\rangle_{C_{(\pi,2\pi t)}}.$ (B.41)
The local coordinate map $F_{t}(w)$ will be fixed shortly by imposing BRST decoupling (which is equivalent to the homotopy relations). $V_{t}$ is the Schiffer vector and $B_{t}(V_{t})$ is the Beltrami form; they are defined as follows
$B_{t}(V_{t})=\oint_{0}\frac{dw}{2\pi i}b(w)V_{t}(w),\qquad\qquad
V_{t}(w)=\frac{\partial F^{-1}_{t}}{\partial t}\left(F_{t}(w)\right),$ (B.42)
where the contour surrounds the puncture in the local coordinate frame, see
[38] and [22] for more details.
Crucially, the surface state satisfies
$\frac{d}{dt}\langle\Sigma_{0,1}^{(0,2)}(t)|=-\langle\Sigma_{0,1}^{(0,2)}(t)|B_{t}(V_{t})Q_{\rm
o}.$ (B.43)
With these premises the full annulus amplitude (B.2) becomes
$\begin{split}A_{0;1}^{0,2}(\varphi_{\rm o})=&-\frac{1}{2\pi
i}\int_{0}^{1}\frac{dq}{q}\langle\Sigma_{1,1}^{(0,1)}|\left(1_{\mathcal{H}_{\rm
o}}\otimes b_{0}^{+}q^{L_{0}^{+}}\right)\left(\varphi_{\rm o}\otimes
l_{0,0}^{(0,1)}\right)\\\ &+\int_{t_{\rm min}}^{t_{\rm
max}}dt\langle\Sigma_{0,1}^{(0,2)}(t)|B_{t}(V_{t})\varphi_{\rm o}\\\
&-\int_{0}^{1}\frac{dq}{q}\langle\Sigma_{0,3}^{(0,1)}|\left(1_{\mathcal{H}_{\rm
o}}\otimes 1_{\mathcal{H}_{\rm o}}\otimes
b_{0}q^{L_{0}}\right)\left(o^{i}\otimes\varphi_{\rm o}\otimes
o_{i}\right).\end{split}$ (B.44)
Now, we want to verify if and how BRST-exact states decouple. To do so, let us consider $\varphi_{\rm o}=Q_{\rm o}\Lambda$:
$\begin{split}A_{0;1}^{0,2}(Q_{\rm o}\Lambda)=&-\frac{1}{2\pi
i}\int_{0}^{1}\frac{dq}{q}\langle\Sigma_{1,1}^{(0,1)}|\left(1_{\mathcal{H}_{\rm
o}}\otimes b_{0}^{+}q^{L_{0}^{+}}\right)\left(Q_{\rm o}\Lambda\otimes
l_{0,0}^{(0,1)}\right)\\\ &+\int_{t_{\rm min}}^{t_{\rm
max}}dt\langle\Sigma_{0,1}^{(0,2)}(t)|B_{t}(V_{t})Q_{\rm o}\Lambda\\\
&-\int_{0}^{1}\frac{dq}{q}\langle\Sigma_{0,3}^{(0,1)}|\left(1_{\mathcal{H}_{\rm
o}}\otimes 1_{\mathcal{H}_{\rm o}}\otimes
b_{0}q^{L_{0}}\right)\left(o^{i}\otimes Q_{\rm o}\Lambda\otimes
o_{i}\right),\end{split}$ (B.45)
by using the fact that the surface states are BRST invariant, the property
(B.43), and the relations $\bm{Q}_{\rm o}\bm{U}_{\rm o}=0$ and $Q_{\rm
c}l_{0,0}^{(0,1)}=0$ we get
$\begin{split}A_{0;1}^{0,2}(Q_{\rm o}\Lambda)&=+\frac{1}{2\pi
i}\int_{0}^{1}\frac{dq}{q}\langle\Sigma_{1,1}^{(0,1)}|\left(1_{\mathcal{H}_{\rm
o}}\otimes\left[b_{0}^{+}q^{L_{0}^{+}},Q_{\rm
c}\right]\right)\left(\Lambda\otimes l_{0,0}^{(0,1)}\right)\\\
&\quad-\int_{t_{\rm min}}^{t_{\rm
max}}dt\frac{d}{dt}\langle\Sigma_{0,1}^{(0,2)}(t)|\Lambda\\\
&\quad+\int_{0}^{1}\frac{dq}{q}\langle\Sigma_{0,3}^{(0,1)}|\left(1_{\mathcal{H}_{\rm
o}}\otimes 1_{\mathcal{H}_{\rm o}}\otimes\left[b_{0}q^{L_{0}},Q_{\rm
o}\right]\right)\left(o^{i}\otimes\Lambda\otimes o_{i}\right)\\\
&=-\frac{1}{2\pi
i}\int_{0}^{\infty}ds\frac{d}{ds}\langle\Sigma_{1,1}^{(0,1)}|\left(1_{\mathcal{H}_{\rm
o}}\otimes e^{-sL_{0}^{+}}\right)\left(\Lambda\otimes
l_{0,0}^{(0,1)}\right)\\\ &\quad+\langle\Sigma_{0,1}^{(0,2)}(t_{\rm
min})|\Lambda-\langle\Sigma_{0,1}^{(0,2)}(t_{\rm max})|\Lambda\\\
&\quad-\int_{0}^{\infty}ds\frac{d}{ds}\langle\Sigma_{0,3}^{(0,1)}|\left(1_{\mathcal{H}_{\rm
o}}\otimes 1_{\mathcal{H}_{\rm o}}\otimes
e^{-sL_{0}}\right)\left(o^{i}\otimes\Lambda\otimes o_{i}\right)\\\
&=\frac{1}{2\pi i}\langle\Sigma_{1,1}^{(0,1)}|\left(\Lambda\otimes
l_{0,0}^{(0,1)}\right)+\langle\Sigma_{0,1}^{(0,2)}(t_{\rm
min})|\Lambda-\langle\Sigma_{0,1}^{(0,2)}(t_{\rm
max})|\Lambda+\langle\Sigma_{0,3}^{(0,1)}|\left(o^{i}\otimes\Lambda\otimes
o_{i}\right),\end{split}$ (B.46)
where we made the change of variable $q=e^{-s}$ and we ignored the
contributions at the boundary of moduli space (open and closed string
degeneration). Notice that the last line is equivalent to the homotopy
relation (3.19). Therefore, the amplitude vanishes for all $\Lambda$ if we
impose
$\displaystyle\langle\Sigma_{0,3}^{(0,1)}|\left(o^{i}\otimes
1_{\mathcal{H}_{\rm o}}\otimes o_{i}\right)=\langle\Sigma_{0,1}^{(0,2)}(t_{\rm
max})|,$ (B.47) $\displaystyle-\frac{1}{2\pi
i}\langle\Sigma_{1,1}^{(0,1)}|\left(1_{\mathcal{H}_{\rm o}}\otimes
l_{0,0}^{(0,1)}\right)=\langle\Sigma_{0,1}^{(0,2)}(t_{\rm min})|.$ (B.48)
These two relations allow us to determine the local coordinate $F_{t}$ and thus to fully define the fundamental vertex. Notice that the surface states on the l.h.s. are obtained respectively through the open and closed plumbing fixture, with $q=1$. Similarly, the surface states associated with the plumbing fixture for generic $q$ can be written as
$\displaystyle\langle\Sigma_{\rm open}^{\rm
p.f.}(q)|=\langle\Sigma_{0,3}^{(0,1)}|\left(o^{i}\otimes 1_{\mathcal{H}_{\rm
o}}\otimes q^{L_{0}}o_{i}\right),$ (B.49) $\displaystyle\langle\Sigma_{\rm
closed}^{\rm p.f.}(q)|=-\frac{1}{2\pi
i}\langle\Sigma_{1,1}^{(0,1)}|\left(1_{\mathcal{H}_{\rm o}}\otimes
q^{L_{0}^{+}}l_{0,0}^{(0,1)}\right)$ (B.50)
and we have
$\displaystyle\langle\Sigma_{\rm open}^{\rm p.f.}(q)|\Psi=\langle
f_{0}\circ\Psi(0)\rangle_{\Sigma_{\rm open}^{\rm p.f.}(q)}=\langle g_{\rm
o}\circ f_{0}\circ\Psi(0)\rangle_{C_{\pi,2\pi t_{\rm o}}}=\langle G_{\rm
o}\circ\Psi(0)\rangle_{C_{\pi,2\pi t_{\rm o}}},$ (B.51)
$\displaystyle\langle\Sigma_{\rm closed}^{\rm p.f.}(q)|\Psi=\langle d\circ
f_{\rm o}\circ\Psi(0)\rangle_{\Sigma_{\rm closed}^{\rm p.f.}(q)}=\langle
g_{\rm c}\circ d\circ f_{\rm o}\circ\Psi(0)\rangle_{C_{\pi,2\pi t_{\rm
c}}}=\langle G_{\rm c}\circ\Psi(0)\rangle_{C_{\pi,2\pi t_{\rm c}}}.$ (B.52)
Therefore, taking into account the $b$-ghost insertions, the amplitude becomes
$\begin{split}A_{0;1}^{0,2}(\varphi_{\rm o})=&+\int_{0}^{1}\frac{dq}{q}\langle
d\circ f_{\rm o}\circ\varphi_{\rm o}(0)d\circ f_{\rm c}\circ
b_{o}\rangle_{\Sigma_{\rm closed}^{\rm p.f.}(q)}\\\ &+\int_{t_{\rm
min}}^{t_{\rm max}}dt\langle B_{t}(V_{t})F_{t}\circ\varphi_{\rm
o}(0)\rangle_{C_{\pi,2\pi t}}\\\ &-\int_{0}^{1}\frac{dq}{q}\langle
f_{0}\circ\varphi_{\rm o}(0)f_{\xi}\circ b_{0}\rangle_{\Sigma_{\rm open}^{\rm
p.f.}(q)}\end{split}$ (B.53)
Finally, let us focus on the local coordinate map $F_{t}$. In particular, to
ensure (B.47) and (B.48), we need
$\begin{cases}&\langle\Sigma^{\rm p.f}_{\rm
open}(q=1)|\Psi=\langle\Sigma^{(0,2)}_{0,1}(t_{\rm max})|\Psi\\\
&\langle\Sigma^{\rm p.f}_{\rm
closed}(q=1)|\Psi=\langle\Sigma^{(0,2)}_{0,1}(t_{\rm
min})|\Psi\end{cases}\longrightarrow\begin{cases}&G_{\rm
o}(w)|_{q=1}=F_{t}(w)|_{t=t_{\rm max}}\\\ &G_{\rm
c}(w)|_{q=1}=F_{t}(w)|_{t=t_{\rm min}}\end{cases}$ (B.54)
thus $F_{t}$ can be any holomorphic function which continuously interpolates between $G_{\rm c}(w)|_{q=1}$ and $G_{\rm o}(w)|_{q=1}$ in the interval $t\in[t_{\rm min},t_{\rm max}]$.
## References
* [1] M. Cho and M. Kim, “A Worldsheet Description of Flux Compactifications,” [arXiv:2311.04959 [hep-th]].
* [2] C. Maccaferri, A. Ruffino and J. Vošmera, “Open-Closed String Field Theory in the Large $N$ Limit,” JHEP 09 (2023), 119 doi:10.1007/JHEP09(2023)119 [arXiv:2305.02844 [hep-th]].
* [3] N. B. Agmon, B. Balthazar, M. Cho, V. A. Rodriguez and X. Yin, “D-instanton Effects in Type IIB String Theory,” [arXiv:2205.00609 [hep-th]].
* [4] D. S. Eniceicu, R. Mahajan, P. Maity, C. Murdia and A. Sen, “The ZZ annulus one-point function in non-critical string theory: A string field theory analysis,” JHEP 12 (2022), 151 doi:10.1007/JHEP12(2022)151 [arXiv:2210.11473 [hep-th]].
* [5] S. Alexandrov, A. H. Fırat, M. Kim, A. Sen and B. Stefański, “D-instanton induced superpotential,” JHEP 07 (2022), 090 doi:10.1007/JHEP07(2022)090 [arXiv:2204.02981 [hep-th]].
* [6] A. Sen, “Normalization of D-instanton amplitudes,” JHEP 11 (2021), 077 doi:10.1007/JHEP11(2021)077 [arXiv:2101.08566 [hep-th]].
* [7] A. Sen, “D-instantons, string field theory and two dimensional string theory,” JHEP 11 (2021), 061 doi:10.1007/JHEP11(2021)061 [arXiv:2012.11624 [hep-th]].
* [8] A. Sen, “D-instanton Perturbation Theory,” JHEP 08 (2020), 075 doi:10.1007/JHEP08(2020)075 [arXiv:2002.04043 [hep-th]].
* [9] A. H. Fırat, “String vertices for the large N limit,” Nucl. Phys. B 1000 (2024), 116485 doi:10.1016/j.nuclphysb.2024.116485 [arXiv:2311.00747 [hep-th]].
* [10] A. H. Fırat, “Bootstrapping closed string field theory,” JHEP 05 (2023), 186 doi:10.1007/JHEP05(2023)186 [arXiv:2302.12843 [hep-th]].
* [11] A. H. Fırat, “Hyperbolic string tadpole,” SciPost Phys. 15 (2023) no.6, 237 doi:10.21468/SciPostPhys.15.6.237 [arXiv:2306.08599 [hep-th]].
* [12] H. Erbin and A. H. Fırat, “Characterizing 4-string contact interaction using machine learning,” [arXiv:2211.09129 [hep-th]].
* [13] A. H. Fırat, “Hyperbolic three-string vertex,” JHEP 08 (2021), 035 doi:10.1007/JHEP08(2021)035 [arXiv:2102.03936 [hep-th]].
* [14] M. Cho, “Open-closed Hyperbolic String Vertices,” JHEP 05 (2020), 046 doi:10.1007/JHEP05(2020)046 [arXiv:1912.00030 [hep-th]].
* [15] K. Costello and B. Zwiebach, “Hyperbolic string vertices,” JHEP 02 (2022), 002 doi:10.1007/JHEP02(2022)002 [arXiv:1909.00033 [hep-th]].
* [16] C. Maccaferri, A. Ruffino and J. Vošmera, “The nilpotent structure of open-closed string field theory,” JHEP 08 (2023), 145 doi:10.1007/JHEP08(2023)145 [arXiv:2305.02843 [hep-th]].
* [17] Y. Okawa, “Correlation functions of scalar field theories from homotopy algebras,” [arXiv:2203.05366 [hep-th]].
* [18] H. Erbin, C. Maccaferri, M. Schnabl and J. Vošmera, “Classical algebraic structures in string theory effective actions,” JHEP 11 (2020), 123 doi:10.1007/JHEP11(2020)123 [arXiv:2006.16270 [hep-th]].
* [19] D. Koyama, Y. Okawa and N. Suzuki, “Gauge-invariant operators of open bosonic string field theory in the low-energy limit,” [arXiv:2006.16710 [hep-th]].
* [20] C. Maccaferri, “String Field Theory,” In Oxford Research Encyclopedia of Physics. Ed. Brian Foster. New York: Oxford University Press, forthcoming. [arXiv:2308.00875 [hep-th]].
* [21] T. Erler, “Four lectures on analytic solutions in open string field theory,” Phys. Rept. 980 (2022), 1-95 doi:10.1016/j.physrep.2022.06.004 [arXiv:1912.00521 [hep-th]].
* [22] H. Erbin, “String Field Theory: A Modern Introduction,” Lect. Notes Phys. 980 (2021), 1-421 2021, ISBN 978-3-030-65320-0, 978-3-030-65321-7 doi:10.1007/978-3-030-65321-7 [arXiv:2301.01686 [hep-th]].
* [23] T. Erler, “Four Lectures on Closed String Field Theory,” Phys. Rept. 851 (2020), 1-36 doi:10.1016/j.physrep.2020.01.003 [arXiv:1905.06785 [hep-th]].
* [24] C. de Lacroix, H. Erbin, S. P. Kashyap, A. Sen and M. Verma, “Closed Superstring Field Theory and its Applications,” Int. J. Mod. Phys. A 32 (2017) no.28n29, 1730021 doi:10.1142/S0217751X17300216 [arXiv:1703.06410 [hep-th]].
* [25] H. Erbin and A. H. Fırat, “Open string stub as an auxiliary string field,” [arXiv:2308.08587 [hep-th]].
* [26] M. Schnabl and G. Stettinger, “Open string field theory with stubs,” JHEP 07 (2023), 032 doi:10.1007/JHEP07(2023)032 [arXiv:2301.13182 [hep-th]].
* [27] C. Chiaffrino and I. Sachs, “QFT with stubs,” JHEP 06 (2022), 120 doi:10.1007/JHEP06(2022)120 [arXiv:2108.04312 [hep-th]].
* [28] A. Sen, “String Field Theory as World-sheet UV Regulator,” JHEP 10 (2019), 119 doi:10.1007/JHEP10(2019)119 [arXiv:1902.00263 [hep-th]].
* [29] P. V. Larocca and C. Maccaferri, “BCFT and OSFT moduli: an exact perturbative comparison,” Eur. Phys. J. C 77 (2017) no.11, 806 doi:10.1140/epjc/s10052-017-5379-3 [arXiv:1702.06489 [hep-th]].
* [30] E. Witten, “The Feynman $i\epsilon$ in String Theory,” JHEP 04 (2015), 055 doi:10.1007/JHEP04(2015)055 [arXiv:1307.5124 [hep-th]].
* [31] A. S. Arvanitakis, O. Hohm, C. Hull and V. Lekeu, “Homotopy Transfer and Effective Field Theory I: Tree-level,” Fortsch. Phys. 70 (2022) no.2-3, 2200003 doi:10.1002/prop.202200003 [arXiv:2007.07942 [hep-th]].
* [32] H. Kajiura, “Homotopy algebra morphism and geometry of classical string field theory,” Nucl. Phys. B 630 (2002), 361-432 doi:10.1016/S0550-3213(02)00174-8 [arXiv:hep-th/0112228 [hep-th]].
* [33] T. Erler and A. H. Fırat, “Wilsonian effective potentials and closed string field theory,” JHEP 02 (2024), 018 doi:10.1007/JHEP02(2024)018 [arXiv:2311.17322 [hep-th]].
* [34] M. Doubek, B. Jurčo and J. Pulmann, “Quantum $L_{\infty}$ Algebras and the Homological Perturbation Lemma,” Comm. Math. Phys. 367 (2019) 215-240 doi:10.1007/s00220-019-03375-x [arXiv:1712.02696 [math-ph]]
* [35] M. Schnabl and G. Stettinger, “More on stubs in open string field theory,” [arXiv:2402.00308 [hep-th]].
* [36] C. B. Thorn, “STRING FIELD THEORY,” Phys. Rept. 175 (1989), 1-101 doi:10.1016/0370-1573(89)90015-X
* [37] D. Z. Freedman, S. B. Giddings, J. A. Shapiro and C. B. Thorn, “The Nonplanar One Loop Amplitude in Witten’s String Field Theory,” Nucl. Phys. B 298 (1988), 253 doi:10.1016/0550-3213(88)90268-4
* [38] B. Zwiebach, “Closed string field theory: Quantum action and the B-V master equation,” Nucl. Phys. B 390 (1993) 33 doi:10.1016/0550-3213(93)90388-6 [hep-th/9206084].
* [39] M. Markl, “Loop homotopy algebras in closed string field theory,” Commun. Math. Phys. 221 (2001), 367-384 doi:10.1007/PL00005575 [arXiv:hep-th/9711045 [hep-th]].
* [40] B. Zwiebach, “Oriented open - closed string theory revisited,” Annals Phys. 267 (1998), 193-248 doi:10.1006/aphy.1998.5803 [arXiv:hep-th/9705241 [hep-th]].
* [41] C. Maccaferri and J. Vošmera, “The classical cosmological constant of open-closed string field theory,” JHEP 10 (2022), 173 doi:10.1007/JHEP10(2022)173 [arXiv:2208.00410 [hep-th]].
* [42] A. Sen and B. Zwiebach, “Quantum background independence of closed string field theory,” Nucl. Phys. B 423 (1994), 580-630 doi:10.1016/0550-3213(94)90145-7 [arXiv:hep-th/9311009 [hep-th]].
* [43] B. Zwiebach, “Interpolating string field theories,” Mod. Phys. Lett. A 7 (1992), 1079-1090 doi:10.1142/S0217732392000951 [arXiv:hep-th/9202015 [hep-th]].
* [44] A. Sen, “Off-shell Amplitudes in Superstring Theory,” Fortsch. Phys. 63 (2015), 149-188 doi:10.1002/prop.201500002 [arXiv:1408.0571 [hep-th]].
This paper studies multi-task high-dimensional linear regression models in which the noise among different tasks is correlated, in the moderately high-dimensional regime where the sample size $n$ and dimension $p$ are of the same order.
Our goal is to estimate the covariance matrix of the noise random vectors, or equivalently the correlation of the noise variables on any pair of tasks. Treating the regression coefficients as a nuisance parameter, we leverage the multi-task elastic-net and multi-task lasso estimators to estimate this nuisance. By precisely understanding the bias of the squared residual matrix and by correcting this bias, we develop a novel estimator of the noise covariance that converges in Frobenius norm at the rate $n^{-1/2}$ when the covariates are Gaussian. This novel estimator is efficiently computable.
Under suitable conditions, the proposed estimator of the noise covariance attains the same rate of convergence as the “oracle” estimator that knows in advance the regression coefficients of the multi-task model. The Frobenius error bounds obtained in this paper also illustrate the advantage of this new estimator compared to a method-of-moments estimator that does not attempt to estimate the nuisance.
As a byproduct of our techniques, we obtain an estimate of the generalization error of the multi-task elastic-net and multi-task lasso estimators. Extensive simulation studies are carried out to illustrate the numerical performance of the proposed method.
§ INTRODUCTION
§.§ Model and estimation target
Consider a multi-task linear model with $T$ tasks and $n$ observations $(\bx_i, Y_{i1}, Y_{i2},\dots, Y_{iT})$, $\forall i=1,...,n$, where $\bx_i\in \R^p$ is a random feature vector and $Y_{i1}, \ldots, Y_{iT}$ are responses
in the model
\begin{equation}\label{eq: model}
\begin{aligned}
Y_{it} &= \bx_i^\top \bbeta^{(t)} + E_{it} &&\text{for each } t = 1, ..., T; i=1,...,n
&&\text{(scalar form)},
\\
\by\smash{{}^{(t)}} &= \bX \bbeta\smash{{}^{(t)}} + \bep\smash{{}^{(t)}}
&&\text{for each } t = 1, ..., T
&&\text{(vector form)},
\\
\bY &= \bX\bB^* + \bE
&&\text{(matrix form)},
\end{aligned}
\end{equation}
where $\bX\in\R^{n\times p}$ is the design matrix with rows $(\bx_i^{\top})_{i=1,...,n}$,
$\by^{(t)} = (Y_{1t},..., Y_{nt})^\top$ is the response vector for task $t$,
$\bep^{(t)} = (E_{1t},...,E_{nt})^\top$ is the noise vector for task $t$,
$\bbeta^{(t)} \in \R^{p}$ is an unknown fixed coefficient vector for task $t$.
In matrix form,
$\bY\in \R^{n\times T}$ is the response matrix with columns $\by^{{(1)}},...,\by^{(T)}$,
$\bE\in \R^{n\times T}$ has columns $\bep^{{(1)}},...,\bep^{(T)}$,
and $\bB^*\in\R^{p\times T}$ is an unknown coefficient matrix with
columns $\bbeta^{{(1)}},...,\bbeta^{(T)}$. The three forms in (<ref>) are equivalent.
While the $n$ vectors $(\bx_i^\top, y_i^{(1)}, \ldots, y_i^{(T)})_{i=1,...,n}$ of dimension $p+T$ are i.i.d., we assume that for each observation $i=1,...,n$, the noise random variables $E_{i1},...,E_{iT}$ are centered and correlated.
The focus of the present paper is on estimation of the
noise covariance matrix $\bS\in\R^{T\times T}$, which has entries
$\bS_{tt'} = \E[\varepsilon_1^{(t)}\varepsilon_1^{(t')}]$ for any pair $t,t'=1,\ldots,T$, or equivalently
$\bS = \E[\tfrac1n\bE^\top\bE]$.
The noise covariance plays a crucial role in multi-task linear models because it characterizes the noise level and correlation between different tasks:
if tasks $t=1,...,T$ represent time, this captures temporal correlation;
if tasks $t=1,...,T$ represent different activation areas in the brain (e.g., [Bertrand et al., 2019]), this captures spatial correlation.
Since $\bS$ is the estimation target,
we view $\bB^*$ as an unknown nuisance parameter.
If $\bB^*=\mathbf0$, then $\bY = \bE$, hence $\bE$ is directly observed and a natural estimator is the sample covariance $\frac1n\bY^\top\bY = \frac1n\bE^\top\bE$.
(There are other possible choices for the sample covariance; ours coincides with the maximum likelihood estimator of the centered Gaussian model where the $n$ samples are drawn from $\mathcal N_T(\bf 0, \bS)$.)
In the presence of a nuisance parameter $\bB^*\neq \bf 0$, the above sample covariance is not computable since we only observe $(\bX, \bY)$ and do not have access to $\bE$.
Thus we will refer to
$\frac1n\bE^\top\bE \in\R^{T\times T}$
as the oracle estimator for $\bS$, and its error
$\frac1n\bE^\top\bE - \bS$
will serve as a benchmark.
The nuisance parameter $\bB^*$ is not of interest by itself, but if an estimator $\hbB$ is available that provides good estimation
of $\bB^*$, we would hope to leverage $\hbB$ to estimate the nuisance
and improve estimation of $\bS$.
For instance, given an estimate $\hbB$ such that
$\fnorm{\bX(\hbB-\bB^*)}^2/n\to0$, one may use the estimator
\begin{equation}
\textstyle
\label{naive}
\hbS_{(\text{naive})} = \frac1n (\bY - \bX\hbB)^\top(\bY - \bX\hbB)
\end{equation}
to consistently estimate $\bS$ in Frobenius norm.
We refer to this estimator as the naive estimator since it is obtained by simply replacing the noise $\bE$ in the oracle estimator $\frac1n \bE^\top\bE$ with the residual matrix $\bY - \bX\hbB$.
However, in the regime $p/n\to\gamma$ of interest in the present paper, the convergence $\fnorm{\bX(\hbB-\bB^*)}^2/n\to0$ does not hold even for $T=1$ and common high-dimensional estimators such as Ridge regression [Dobriban and Wager, 2018] or the Lasso [Bayati and Montanari, 2012, Miolane and Montanari, 2018].
Simulations in <Ref> will show that (<ref>) exhibits a substantial bias for estimation of $\bS$.
One goal of this paper is to develop an estimator $\hbS$ of $\bS$ by exploiting a commonly used estimator $\hbB$ of the nuisance, so that in the regime $p/n\to\gamma$ the error $\hbS-\bS$ is comparable to the benchmark $\frac1n\bE^\top\bE - \bS$.
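The failure of the naive plug-in can be seen in a small single-task ($T=1$) simulation, where the target reduces to $\sigma^2$. Ridge regression is used below for its closed form; all simulation parameters are illustrative assumptions.

```python
import numpy as np

# Single-task (T = 1) illustration: in the proportional regime the
# prediction error ||X(beta_hat - beta*)||^2 / n does not vanish, so
# the naive plug-in is biased.  All parameters are illustrative.
rng = np.random.default_rng(0)
n, p, sigma2, tau = 500, 400, 1.0, 0.5
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p) / np.sqrt(p)   # bounded signal strength
eps = np.sqrt(sigma2) * rng.standard_normal(n)
y = X @ beta + eps

# Ridge estimate: argmin ||y - Xb||^2 / (2n) + (tau/2) ||b||^2
beta_hat = np.linalg.solve(X.T @ X / n + tau * np.eye(p), X.T @ y / n)

pred_err = np.sum((X @ (beta_hat - beta)) ** 2) / n
sigma2_oracle = np.mean(eps ** 2)                # oracle benchmark
sigma2_naive = np.mean((y - X @ beta_hat) ** 2)  # naive plug-in
print(pred_err, sigma2_oracle, sigma2_naive)
```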
§.§ Related literature
If $T=1$, the above model (<ref>) reduces to the standard linear model
with $\bX\in\R^{n\times p}$ and response vector $\by^{(1)}\in \R^n$.
We will refer to the $T=1$ case as the single-task linear model and drop the superscript $^{(1)}$ for brevity, e.g.,
$y_i = \bx_i^\top \bbeta^* + \ep_i$,
where the $\ep_i$ are i.i.d. with mean $0$ and unknown variance $\sigma^2$. The coefficient vector $\bbeta^*$ is typically assumed to be $s$-sparse, i.e., $\bbeta^*$ has at most $s$ nonzero entries. In this single-task linear model, estimation of the noise covariance $\bS$ reduces to estimation of the noise variance $\sigma^2=\E[\ep_i^2]$, which has been studied in the literature. [Fan et al., 2012] proposed a consistent estimator for $\sigma^2$ based on a refitted cross-validation method, which assumes the support of $\bbeta^*$ is correctly recovered; [Belloni et al., 2011] and [Sun and Zhang, 2012] introduced the square-root Lasso (scaled Lasso) to jointly estimate the coefficient $\bbeta^*$ and noise variance $\sigma^2$ by
\begin{equation}\label{eq: scaled-lasso}
\textstyle
(\hbbeta, \hsigma) =\argmin_{\bbeta\in \R^p, \sigma>0} \frac{\norm{\by - \bX\bbeta}^2}{2n\sigma} + \frac{\sigma}{2} + \lambda_0\norm{\bbeta}_1.
\end{equation}
This estimator $\hsigma$ is consistent only when the prediction error $\norm{\bX(\hbbeta - \bbeta^*)}^2 /n$ goes to 0, which requires $s\log(p)/n\to 0$.
Estimation of $\sigma^2$ without assumptions on $\bX$ was proposed in [Yu and Bien, 2019] by utilizing the natural parameterization of the penalized likelihood of the linear model. Their estimator can be expressed as the minimum value of the Lasso problem:
$\hsigma^2_{\lambda} = \min_{\bbeta\in \R^p} \frac{1}{n} \norm{\by - \bX\bbeta}^2 + 2\lambda\norm{\bbeta}_1.$ Consistency of these estimators [Sun and Zhang, 2012, Belloni et al., 2011, Belloni et al., 2014, Yu and Bien, 2019] requires $s\log(p)/n \to 0$ and does not hold in the high-dimensional proportional regime $p/n\to\gamma\in (0, \infty)$.
For this proportional regime $p/n \to \gamma \in (0, \infty)$, [Dicker, 2014] introduced a method-of-moments estimator $\hsigma^2$ of $\sigma^2$,
\begin{equation}\label{eq: est-dicker14}
\hsigma^2 = \frac{n+p+1}{n(n+1)} \norm{\by}^2 - \frac{1}{n(n+1)}\norm{\bSigma^{-\frac12}\bX^\top\by}^2,
\end{equation}
which is unbiased, consistent, and asymptotically normal in high-dimensional linear models with Gaussian predictors and errors. Moreover, [Janson et al., 2017] developed an EigenPrism procedure for the same task as well as confidence intervals for $\sigma^2$.
The estimation procedures in these two papers do not attempt to estimate the nuisance parameter $\bbeta^*$ and require neither sparsity of $\bbeta^*$ nor isometry structure on $\bSigma$, but they assume $\|\bSigma^{\frac 12}\bbeta^*\|^2$ is bounded.
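The method-of-moments estimator above is straightforward to implement; the sketch below specializes it to $\bSigma = \bI_p$ and checks it on a dense (non-sparse) signal. All simulation parameters are illustrative.

```python
import numpy as np

# Method-of-moments estimator of Dicker (2014), specialized to
# Sigma = I_p.  It uses no estimate of the nuisance beta* and no
# sparsity assumption; simulation parameters are illustrative.
def dicker_sigma2(X, y):
    n, p = X.shape
    return ((n + p + 1) / (n * (n + 1)) * np.sum(y ** 2)
            - np.sum((X.T @ y) ** 2) / (n * (n + 1)))

rng = np.random.default_rng(0)
n, p, sigma2 = 800, 600, 1.0
estimates = []
for _ in range(10):
    X = rng.standard_normal((n, p))
    beta = rng.standard_normal(p) / np.sqrt(p)   # dense, non-sparse signal
    y = X @ beta + np.sqrt(sigma2) * rng.standard_normal(n)
    estimates.append(dicker_sigma2(X, y))
print(np.mean(estimates))
```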
Maximum Likelihood Estimators (MLEs) were studied in
[Dicker and Erdogdu, 2016] for joint estimation of noise level and signal strength in high-dimensional linear models with fixed effects; they showed that a classical MLE for random-effects models may also be used effectively in fixed-effects models.
In the proportional regime,
[Bayati et al., 2013, Miolane and Montanari, 2018] used the Lasso to estimate the nuisance $\bbeta^*$ and produce an estimator of $\sigma^2$. Their approach requires an uncorrelated Gaussian design assumption with $\bSigma = \bI_p$.
Bellec, 2020 provided consistent estimators of a similar nature for $\sigma^2$ using more general M-estimators with a convex penalty, without requiring $\bSigma = \bI_p$. In the special case of the squared loss, this estimator has the form [Bayati et al., 2013, Miolane and Montanari, 2018, Bellec, 2020]
\begin{equation}\label{eq: est-bellec20}
\hsigma^2 = (n -\df)^{-2}\big\{ \norm{\by-\bX\hbbeta}^2 (n+p -2\df) - \norm{\bSigma^{-\frac12}\bX^\top(\by-\bX\hbbeta)}^2\big\},
\end{equation}
where $\df = \trace[(\partial/\partial \by) \bX\hbbeta]$ denotes the degrees of freedom. This estimator coincides with the method-of-moments estimator in [Dicker, 2014] when $\hbbeta = \bf0$.
For the multi-task high-dimensional linear model (<ref>) with $T\ge 2$, estimation of $\bB^*$ is studied in [Lounici et al., 2011], [Obozinski et al., 2011], [Simon et al., 2013]. These works suggest using a joint convex optimization problem over the tasks to estimate $\bB^*$. A popular choice is the multi-task elastic-net, which
solves the convex optimization problem
\begin{equation}\label{eq: hbB}
\hbB=\argmin_{\bB\in\R^{p\times T}}
\Big(
\frac{1}{2n}\fnorm*{\bY - \bX\bB }^2 + \lambda \norm{\bB}_{2,1}
+ \frac{\tau}{2} \fnorm{\bB}^2
\Big),
\end{equation}
where $\|\bB\|_{2,1} = \sum_{j=1}^p \|{\bB^{\top} \be_j}\|_2$, and $\fnorm{\cdot}$ denotes the Frobenius norm of a matrix.
This optimization problem can be efficiently solved by existing statistical packages, for instance, scikit-learn [Pedregosa et al., 2011], and glmnet [Friedman et al., 2010].
Note that (<ref>) is also referred to as multi-task (group) Lasso and multi-task Ridge if $\tau = 0$ and $\lambda =0$, respectively.
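For illustration, a minimal proximal-gradient (ISTA) solver for this objective can be written in a few lines of numpy. This is a didactic sketch only (the packages cited above should be preferred in practice); the function name, data sizes and tuning parameters below are our own choices, with `lam` and `tau` playing the roles of $\lambda$ and $\tau$:

```python
import numpy as np

def multitask_enet(X, Y, lam, tau, n_iter=500):
    """Proximal-gradient (ISTA) sketch of the multi-task elastic-net:
    argmin_B  ||Y - XB||_F^2/(2n) + lam*||B||_{2,1} + (tau/2)*||B||_F^2."""
    n, p = X.shape
    T = Y.shape[1]
    B = np.zeros((p, T))
    L = np.linalg.norm(X, 2) ** 2 / n + tau   # Lipschitz constant of the smooth part
    for _ in range(n_iter):
        G = X.T @ (X @ B - Y) / n + tau * B   # gradient of the smooth part
        Z = B - G / L
        norms = np.linalg.norm(Z, axis=1, keepdims=True)
        shrink = np.maximum(0.0, 1.0 - lam / (L * np.maximum(norms, 1e-12)))
        B = shrink * Z                        # row-wise group soft-thresholding
    return B

rng = np.random.default_rng(1)
n, p, T = 200, 50, 4
Bstar = np.zeros((p, T))
Bstar[:5] = 1.0                               # 5 nonzero rows shared across tasks
X = rng.standard_normal((n, p))
Y = X @ Bstar + 0.5 * rng.standard_normal((n, T))
Bhat = multitask_enet(X, Y, lam=0.3, tau=0.1)
print(int(np.sum(np.linalg.norm(Bhat, axis=1) > 0)))  # number of selected rows
```

The row-wise soft-thresholding step is the proximal operator of $\lambda\norm{\cdot}_{2,1}$; scikit-learn's `MultiTaskElasticNet` targets an objective of this form under its own `(alpha, l1_ratio)` parameterization.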
van de Geer and Stucky, 2016 extended the square-root Lasso [Belloni et al., 2011] and the scaled Lasso [Sun and Zhang, 2012] to the multi-task setting by solving the following problem
\begin{align}\label{eq: multi-ScaledLasso}
(\hbB, \hbS) =
\argmin_{\bB,\bS \succ 0} \Big\{\frac{1}{n} \trace\big((\bY - \bX\bB)\bS^{-\frac 12}(\bY - \bX\bB)^\top\big) + \trace(\bS^{\frac 12}) + 2\lambda_0\norm{\bB}_1 \Big\},
\end{align}
where $\norm{\bB}_1 = \sum_{j,t} |B_{jt}|$.
Note that the covariance estimator in (<ref>) is constrained to be positive definite.
Molstad, 2019 studied the same problem and proposed to estimate $\bS$ by (<ref>) with $\hbB$ in (<ref>), which is consistent under Frobenius norm loss when $\fnorm{\bX(\hbB-\bB^*)}^2/n\to0$.
In a recent paper, Bellec and Romon, 2021 studied the multi-task Lasso problem and proposed confidence intervals for single entries of $\bB^*$ and confidence ellipsoids for single rows of $\bB^*$ under the assumption that $\bS$ is proportional to the identity, which may be restrictive in practice. This literature generalizes degrees of freedom adjustments from single-task to multi-task models, which we will illustrate in <Ref>.
Noise covariance estimation in the high dimensional multi-task linear model is a difficult problem. If the estimand $\bS$ is known to be diagonal,
estimating $\bS$ reduces to the estimation of noise variance for each task,
in which the existing methods for single-task high-dimensional linear models can be applied.
Nonetheless, for a general positive semidefinite matrix $\bS$, the noise among different tasks may be correlated;
hence the existing methods are not readily applicable, and a more careful analysis is called for to account for the correlation between different tasks. Fourdrinier et al., 2021 considered estimating $\bS$ for the multi-task model (<ref>), in the classical regime $p\le n$, where the rows of $\bE$ have an elliptically symmetric distribution. However, their estimator has no statistical guarantee under the Frobenius norm loss.
Recently, for the proportional regime $p/n \to \gamma \in (0, \infty)$, [Celentano and Montanari, 2021] generalized the estimator $\hsigma^2$ in [Bayati et al., 2013] to the multi-task setting with $T=2$. Their work covers correlated Gaussian designs, where a Lasso or Ridge regression is used to estimate $\bbeta^{(1)}$ for the first task, and another Lasso or Ridge regression is used to estimate $\bbeta^{(2)}$ for the second task.
In other words, they estimate the coefficient vector for each task separately instead of using a multi-task estimator like (<ref>).
It is not trivial to adapt their estimator from the setting $T=2$ to larger $T$, or to allow $T$ to increase with $n$.
The present paper takes a different route and aims to fill this gap by proposing a novel noise covariance estimator with theoretical guarantees.
Of course, our method applies directly to the 2-task linear model considered in [Celentano and Montanari, 2021].
§.§ Main Contributions
The present paper introduces a novel estimator $\hbS$ in (<ref>) of the noise covariance $\bS$, which provides consistent estimation of $\bS$ in Frobenius norm,
in the regime where $p$ and $n$ are of the same order.
The estimator $\hbS$ is based on the multi-task elastic-net
estimator $\hbB$ in (<ref>) of the nuisance, and can be seen
as a de-biased version of the naive estimator (<ref>).
The naive estimator (<ref>) suffers from a strong bias in the regime
where $p$ and $n$ are of the same order,
and the estimator $\hbS$
is constructed by precisely understanding this bias and correcting it.
After introducing this novel estimator $\hbS$ in <Ref> below,
we prove several rates of convergence for the Frobenius error
$\fnorm{\hbS-\bS}$, which are comparable, in terms of rate of convergence,
to the benchmark $\fnorm{\frac 1 n \bE^\top\bE - \bS}$ under suitable assumptions.
As a by-product of the techniques developed for the construction of $\hbS$,
we obtain estimates of the generalization error
of $\hbB$, which are of independent interest and can be used for parameter tuning.
§.§ Notation
Basic notation and definitions that will be used in the rest of the paper are given here. Let $[n] = \{1, 2,\ldots, n \}$ for all $n\in\N$.
The vectors $\be_i\in \R^n$, $\be_j\in\R^p$, $\be_t\in\R^T$ denote the canonical basis vectors of the corresponding index.
We consider restrictions of vectors (respectively, of matrices)
by zeroing the corresponding entries (respectively, columns).
More precisely, for $\bv\in\R^p$ and index set $B\subset [p]$, $\bv_B\in\R^p$ is the vector
with $(\bv_B)_j = 0$ if $j\notin B$ and $(\bv_B)_j = v_j$ if $j\in B$.
If $\bX\in\R^{n\times p}$ and $B\subset[p]$, $\bX_B\in\R^{n\times p}$
such that $(\bX_B)\be_j = {\mathbf 0}$ if $j\notin B$ and
$(\bX_B)\be_j= \bX\be_j$ if $j\in B$.
For a real vector $\ba \in \R^p$,
$\norm{\ba}$ denotes its Euclidean norm.
For any matrix $\bA$, $\bA^\dagger$ is its Moore–Penrose inverse;
$\fnorm{\bA}$, $\opnorm{\bA}$ and $\norm{\bA}_*$ denote its Frobenius, operator and nuclear norms, respectively.
Let $\norm{\bA}_0$ be the number of non-zero rows of $\bA$.
Let $\bA\otimes \bB$ be the Kronecker product of $\bA$ and $\bB$, and $\langle \bA, \bB\rangle = \trace(\bA^\top\bB)$
is the Frobenius inner product for matrices of identical size.
For $\bA$ symmetric, $\phi_{\min}(\bA)$ and $\phi_{\max}(\bA)$ denote its smallest and largest eigenvalues, respectively.
Let $\bI_n$ denote the identity matrix of size $n$ for all $n\in\N$.
For a random sequence $\xi_n$, we write $\xi_n = O_P(a_n)$ if $\xi_n/a_n$ is stochastically bounded.
$C$ denotes an absolute constant and
$C(\tau, \gamma)$ stands for a generic positive constant depending on $\tau,\gamma$;
their expressions may vary from place to place.
§.§ Organization
The rest of the paper is organized as follows.
<Ref> introduces our proposed estimator for noise covariance.
<Ref> presents our main theoretical results on the proposed estimator and some related estimators.
<Ref> demonstrates through numerical experiments that our estimator outperforms several existing methods in the literature, which corroborates our theoretical findings in <Ref>.
<Ref> provides discussion and points out some future research directions.
Proofs of all the results stated in the main body are given in the supplementary, which starts with an outline for ease of navigation.
§ ESTIMATING NOISE COVARIANCE, WITH POSSIBLY DIVERGING NUMBER OF TASKS T
Before we can define our noise covariance estimator, we need to introduce the following building blocks.
Let $\hat{\mathscr{S}} = \{k\in [p]: {\hbB{}^{\top} \be_k} \neq 0\}$
denote the set of nonzero rows of $\hbB$ in (<ref>), and let $|\hat{\mathscr{S}}|$ denote the cardinality of $\hat{\mathscr{S}}$.
For each $k\in \hat{\mathscr{S}}$, define $\bH^{(k)}=\lambda\|\hbB{}^\top \be_k\|^{-1}(\bI_T - \hbB{}^\top\be_k \be_k^\top\hbB ~ \|\hbB{}^\top\be_k\|^{-2} )$, which is the Hessian of the map $\bu \mapsto \lambda\norm*{\bu}$ at $\bu = \hbB{}^\top \be_k$ when $\bu\ne \mathbf0$.
Define $\bM,\bM_1\in\R^{pT \times pT}$ by
\begin{equation}
\textstyle
\bM_1 = \bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + \tau n \bP_{\hat{\mathscr{S}}})
\qquad
\bM = \bM_1 + n\sum_{k\in\hat{\mathscr{S}}} (\bH^{(k)} \otimes \be_k\be_k^\top)
\end{equation}
where $\bP_{\hat{\mathscr{S}}} = \sum_{k\in\hat{\mathscr{S}}} \be_k\be_k^\top\in\R^{p\times p}$.
Define the residual matrix $\bF$, the error matrix $\bH$, and $\bN$ by
\begin{equation}
\bF=\bY-\bX\hbB,
\qquad
\bH = \bSigma^{1/2}(\hbB - \bB^*),
\qquad
\bN = (\bI_T \otimes \bX)\bM^\dagger (\bI_T \otimes \bX^\top)
\in \R^{Tn \times Tn}.
\label{eq:def F H N}
\end{equation}
To construct our estimator we also make use of the so-called interaction matrix $\hbA\in \R^{T\times T}$.
The interaction matrix $\hbA\in \R^{T\times T}$ of the estimator $\hbB$ in (<ref>) is defined by
\begin{align}\label{eq: hbA-matrix}
\hbA
= \sum_{i=1}^n (\bI_T \otimes \be_i^\top \bX) \bM^{\dagger} (\bI_T \otimes \bX^\top\be_i)
= \sum_{i=1}^n (\bI_T \otimes \be_i^\top)\bN(\bI_T \otimes \be_i).
\end{align}
The matrix $\hbA$ was introduced in [Bellec and Romon, 2021], where it is used alongside the multi-task Lasso estimator ($\tau=0$ in (<ref>)).
It generalizes the degrees of freedom from Stein, 1981 to the multi-task case.
Intuitively, it captures the correlation between the residuals on different tasks <cit.>.
Our definition of the noise covariance estimator involves $\hbA$,
although our statistical purposes differ greatly from the confidence intervals developed in [Bellec and Romon, 2021].
We are now ready to introduce our estimator $\hbS$ of the noise covariance $\bS$.
With $\bF=\bY-\bX\hbB$ and $\hbA$ as above, define
\begin{equation}\label{eq: hbS}
\hbS = (n\bI_T - \hbA)^{-1} \Bigl[\bF^\top \big( (p+n)\bI_n - \bX \bSigma^{-1}\bX^\top\big) \bF - \hbA\bF^\top\bF
- \bF^\top\bF\hbA
\Bigr](n\bI_T - \hbA)^{-1}.
\end{equation}
Efficient solvers (e.g., in <cit.>) are available to compute $\hbB$.
Computation of $\bF$ is then straightforward, and computing the matrix $\hbA$
only requires inverting a matrix of size $|\hat{\mathscr{S}}|$ <cit.>.
The estimator $\hbS$ generalizes the scalar estimator
(<ref>) to the multi-task setting
in the sense that for $T=1$, $\hbS$ is exactly equal to (<ref>).
Note that unlike in (<ref>), here $\bF^\top\bF$, $\hbA$ and $(n\bI_T - \hbA)$ are matrices of size $T\times T$: the order of matrix multiplication in $\hbS$ matters and should not be switched.
This non-commutativity is not present for $T=1$ in (<ref>) where matrices in $\R^{T\times T}$ are reduced to scalars.
Another special case of $\hbS$ can be seen in [Celentano and Montanari, 2021] for $T=2$, where the matrix $\hbA\in\R^{2\times 2}$ is diagonal and the two columns of $\hbB\in\R^{p\times 2}$ are two Lasso or Ridge estimators computed independently of each other, one for each task. Except in these two special cases ((<ref>) for $T=1$; [Celentano and Montanari, 2021] for $T=2$ with two separate Lasso/Ridge estimators), we are not aware of previously proposed estimators of the same form as $\hbS$.
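The construction of $\hbS$ can be transcribed directly from the definitions of $\hat{\mathscr{S}}$, $\bH^{(k)}$, $\bM$, $\hbA$ and $\bF$ above. The numpy sketch below (our own; it assumes $\bSigma$ is known and forms the $pT\times pT$ matrix $\bM$ explicitly, which is only viable for small $p$ and $T$) follows those formulas verbatim:

```python
import numpy as np

def noise_cov_estimate(X, Y, Sigma, Bhat, lam, tau):
    """Transcription of the estimator S_hat: builds the support of Bhat,
    the Hessians H^(k), the matrix M, the interaction matrix A, and then
    applies the displayed formula (Sigma is assumed known)."""
    n, p = X.shape
    T = Y.shape[1]
    row_norms = np.linalg.norm(Bhat, axis=1)
    support = np.flatnonzero(row_norms > 0)
    P = np.zeros((p, p))
    P[support, support] = 1.0                   # projection on the support rows
    XS = X * (row_norms > 0)                    # columns outside the support zeroed
    M = np.kron(np.eye(T), XS.T @ XS + tau * n * P)
    for k in support:
        u = Bhat[k]                             # k-th row of Bhat
        Hk = lam / np.linalg.norm(u) * (np.eye(T) - np.outer(u, u) / (u @ u))
        Ek = np.zeros((p, p))
        Ek[k, k] = 1.0
        M += n * np.kron(Hk, Ek)                # n * (H^(k) kron e_k e_k^T)
    Mdag = np.linalg.pinv(M)
    A = np.zeros((T, T))
    for i in range(n):
        Li = np.kron(np.eye(T), X[i][None, :])  # (I_T kron e_i^T X), shape T x pT
        A += Li @ Mdag @ Li.T
    F = Y - X @ Bhat
    inner = (F.T @ ((p + n) * F - X @ np.linalg.solve(Sigma, X.T @ F))
             - A @ F.T @ F - F.T @ F @ A)
    Dinv = np.linalg.inv(n * np.eye(T) - A)
    return Dinv @ inner @ Dinv

# Sanity check: with Bhat = 0 the support is empty and A = 0, so S_hat
# matches the method-of-moments formula up to replacing n+1 by n.
rng = np.random.default_rng(6)
n, p, T = 30, 10, 3
X = rng.standard_normal((n, p))
Y = rng.standard_normal((n, T))
S0 = noise_cov_estimate(X, Y, np.eye(p), np.zeros((p, T)), lam=1.0, tau=0.1)
```

In practice one would avoid forming $\bM$ and instead invert a matrix of size $|\hat{\mathscr{S}}|$, as mentioned above; the explicit Kronecker products are kept here to mirror the definitions.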
§ THEORETICAL ANALYSIS
§.§ Oracle and method-of-moments estimator
Before moving on to the theoretical analysis of $\hbS$, we state our randomness assumptions for $\bE, \bX$ and we study two preliminary estimators:
the oracle $\frac 1 n \bE^\top\bE$ and another estimator obtained by the method of moments.
[Gaussian noise]
$\bE\in \R^{n\times T}$ is a Gaussian noise matrix with $\mathcal N_T(\bf0,\bS)$ rows, where $\bS\in \R^{T\times T}$ is an unknown positive semi-definite matrix.
An oracle with access to the noise matrix $\bE$ may compute the oracle estimator $\hbS_{\rm{(oracle)}} \defas \frac 1n \bE^\top \bE$, with convergence rate given by the following theorem, which will serve as a benchmark.
Under <Ref>,
\begin{equation}
\E\big[ \fnorm{\hbS_{\rm{(oracle)}}- \bS}^2 \big] = \tfrac1n [(\trace(\bS))^2 + \trace(\bS^2)].
\label{eq: bound oracle}
\end{equation}
Consequently, $n^{-1} (\trace(\bS))^2 \le
\E\big[ \fnorm{\hbS_{\rm{(oracle)}} - \bS}^2 \big] \le 2 n^{-1} (\trace(\bS))^2$.
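The identity above is easy to check by Monte Carlo. In the sketch below (our own; the covariance $\bS$ and the number of replications are arbitrary choices), the empirical average of $\fnorm{\hbS_{\rm{(oracle)}}-\bS}^2$ is compared to the closed form:

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 2000, 3
G = rng.standard_normal((T, T))
S = G @ G.T / T                              # an arbitrary covariance matrix
chol = np.linalg.cholesky(S)
losses = []
for _ in range(200):
    E = rng.standard_normal((n, T)) @ chol.T # rows ~ N_T(0, S)
    S_oracle = E.T @ E / n
    losses.append(np.linalg.norm(S_oracle - S) ** 2)
theory = (np.trace(S) ** 2 + np.trace(S @ S)) / n
print(np.mean(losses), theory)               # the two should be close
```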
The next assumption concerns the design matrix $\bX$ with rows $\bx_1^\top,\ldots,\bx_n^\top$.
[Gaussian design]
$\bX\in \R^{n\times p}$ is a Gaussian design matrix with $\mathcal N_p(\mathbf 0,\bSigma)$ rows, where $\bSigma$ is a known positive definite matrix. The matrices $\bE$ and $\bX$ are independent.
Under the preceding assumptions, we obtain the following method-of-moments estimator, which extends the estimator for noise variance in [Dicker, 2014] to the multi-task setting. Its error will also serve as a benchmark.
Under <Ref>, the method-of-moments estimator defined as
\begin{equation}
\label{hbS_mm}
\hbS_{\rm{(mm)}} = \frac{(n+1+p)}{n(n+1)} \bY^\top \bY - \frac{1}{n(n+1)} \bY^\top\bX \bSigma^{-1}\bX^\top\bY
\end{equation}
is unbiased for $\bS$, i.e., $\E [\hbS_{\rm{(mm)}} ] = \bS.$ Furthermore, the Frobenius error is bounded from below as
\begin{align}
\E [\fnorm{\hbS_{\rm{(mm)}} - \bS}^2 ] \ge \frac{p-2}{(n+1)^2} \big[\trace(\bS) + \fnorm{\bSigma^{\frac12}\bB^*}^2\big]^2.
\label{eq:lower-boud-mom}
\end{align}
By (<ref>), a larger norm $\fnorm{\bSigma^{1/2}\bB^*}$ induces a larger variance for $\hbS_{\rm{(mm)}}$.
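To illustrate the unbiasedness claim, the following numpy sketch (ours; $\bSigma=\bI_p$, with arbitrary $\bS$ and $\bB^*$, in the regime $p>n$) averages $\hbS_{\rm{(mm)}}$ over replications:

```python
import numpy as np

def mm_noise_cov(X, Y, Sigma):
    """Multi-task method-of-moments estimator of S (Sigma assumed known)."""
    n, p = X.shape
    t1 = (n + 1 + p) / (n * (n + 1)) * (Y.T @ Y)
    t2 = (Y.T @ X) @ np.linalg.solve(Sigma, X.T @ Y) / (n * (n + 1))
    return t1 - t2

rng = np.random.default_rng(3)
n, p, T = 300, 450, 2                        # proportional regime with p > n
S = np.array([[1.0, 0.3],
              [0.3, 0.5]])
Bstar = np.zeros((p, T))
Bstar[:10] = 0.2
chol = np.linalg.cholesky(S)
reps = 100
est = np.zeros((T, T))
for _ in range(reps):
    X = rng.standard_normal((n, p))          # Sigma = I_p
    Y = X @ Bstar + rng.standard_normal((n, T)) @ chol.T
    est += mm_noise_cov(X, Y, np.eye(p)) / reps
print(np.round(est, 3))                      # averages to approximately S
```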
Our goal with an estimate $\hbS$, when a good estimator $\hbB$ of the nuisance is available, is to improve upon the right-hand side of (<ref>)
when the estimation error $\fnorm{\bSigma^{1/2}(\hbB-\bB^*)}$ is smaller than the signal strength $\fnorm{\bSigma^{1/2}\bB^*}$.
A high-probability upper bound of the form
$\fnorm{\hbS_{\rm{(mm)}} - \bS}^2
\le C \frac{n+p}{n^2}[\trace(\bS) + \fnorm{\bSigma^{\frac12}\bB^*}^2]^2$,
which matches the lower bound (<ref>) when $p>n$,
is a consequence of our main result below.
Indeed, when $\hbB=\bf 0$ then $\hbA=\bf0$ and our estimator $\hbS$ from <Ref>
coincides with $\hbS_{\rm{(mm)}}$ up to the minor modification
of replacing $n+1$ by $n$ in (<ref>).
This replacement is immaterial
compared to the right-hand side in (<ref>).
Furthermore, such $\hbS$
corresponds to one of $\tau$ or $\lambda$ being $+\infty$ in (<ref>)
and the aforementioned upper bound
follows by taking $\tau=+\infty$ in the proof of <Ref> below.
The empirical results in <Ref> confirm that $\hbS$ has smaller variance compared to $\hbS_{\rm{(mm)}}$ in simulations.
§.§ Theoretical results for proposed estimator
We have established lower bounds for the oracle estimator and the
method-of-moments estimator that will serve as benchmarks.
We turn to the analysis of the estimator $\hbS$ from <Ref>
under the following additional assumptions.
[High-dimensional regime]
$n,p$ satisfy $p/n \le \gamma$ for a constant $\gamma\in (0, \infty)$.
For asymptotic statements such as those involving the stochastically
bounded notation $O_p(\cdot)$ or the convergence in probability in (<ref>) below, we implicitly consider a sequence
of multi-task problems indexed by $n$ where $p,T,\bB^*,\hbB,\bS$ all implicitly
depend on $n$. The assumptions, such as $p/n\le\gamma$ above, are required to hold at
all points of the sequence. In particular, $p/n\to\gamma'$ is allowed for any limit $\gamma'\le \gamma$ under <Ref>, although our results do not require a specific value for the limit.
Assume either one of the following:
* $\tau>0$ in the penalty of estimator (<ref>), and let $\tau' = \tau/\opnorm{\bSigma}$.
* $\tau=0$ and, for some constant $c>0$, $\P(U_1) \ge 1-\frac 1T$ and $\P(U_1)\to1$ as $n\to\infty$, where $U_1 = \{\norm{\hbB}_0 \le n(1-c)/2 \}$ is the event that $\hbB$ has at most $n(1-c)/2$ nonzero rows. Finally, $T \le e^{\sqrt{n}}$.
<Ref>(i) requires that the Ridge penalty in (<ref>) be enforced,
so that the objective function is strongly convex.
<Ref>(ii), on the other hand, does not require strong convexity
but instead requires that the number of nonzero rows of $\hbB$ be small enough with high probability,
which is a reasonable assumption when the tuning parameter $\lambda$ in (<ref>) is large enough and $\bB^*$ is sparse enough. While we do not prove
in the present paper that $\P(U_1)\to1$ under assumptions on the tuning parameter $\lambda$ and the sparsity of $\bB^*$, results of a similar nature have been obtained previously in several group-Lasso settings.
Suppose that <Ref> hold for all $n, p$ as $n\to\infty$. Then, almost surely,
\begin{equation}
\fnorm{(\bI_T - \hbA/n) (\hbS -\bS)(\bI_T - \hbA/n)} \le \Theta_1 n^{-\frac 12} \big( \fnorm{\bF}^2/n + \fnorm{\bH}^2 +\trace(\bS)\big)
\label{eq:thm33}
\end{equation}
for some non-negative random variable $\Theta_1$ of constant order, in the sense that $\E [\Theta_1^2] \le C(\tau')\big(T \wedge (1 + \tfrac pn)\big)\big(1 + \tfrac pn\big)\le C(\gamma, \tau')$
under <Ref>(i),
and ${\E [I(\Omega)\Theta_1^2]} \le C(\gamma,c)$ under <Ref>(ii), where $I(\Omega)$ is the indicator function of an event $\Omega$
with $\P(\Omega)\to 1$.
Above, $\Theta_1\ge 0$ is said to be of constant order because $\Theta_1=O_P(1)$
follows from $\E[\Theta_1^2]\le C(\gamma,\tau')$ or from $\E[I(\Omega)\Theta_1^2]\le C(\gamma,c)$ if the stochastically bounded notation $O_P(1)$ is allowed to
hide constants depending on $(\gamma,\tau')$ or $(\gamma,c)$ only.
In the left-hand side of (<ref>), multiplication by $\bI_T - \hbA/n$ on both sides of the error $\hbS -\bS$ can be further removed, as
\begin{equation}
\fnorm{\hbS -\bS}
\le
\fnorm{(\bI_T - \hbA/n) (\hbS -\bS)(\bI_T - \hbA/n)}
\opnorm{(\bI_T - \hbA/n)^{-1}}^2
\label{eq: ineq op norm}
\end{equation}
and the fact that $\opnorm{(\bI_T - \hbA/n)^{-1}}$ is bounded from above with high probability by a constant depending on $\gamma, \tau', c$ only.
Upper bounds on $\opnorm{(\bI_T - \hbA/n)^{-1}}$ are formally stated in the supplementary material.
§.§ Understanding the right-hand side of (14), and the multi-task generalization error
Before coming back to upper bounds on the error $\fnorm{\hbS-\bS}$,
let us study the quantities appearing in the right-hand side of (<ref>). By (<ref>), $\fnorm{\bF}^2/n$
is the mean squared norm of the residuals and is observable, while the squared error
$\fnorm{\bH}^2=\fnorm{\bSigma^{1/2}(\hbB-\bB^*)}^2$ and $\trace[\bS]$ are unknown.
By analogy with single task models, we define the generalization error as the matrix $\bH^\top\bH + \bS$ of size $T\times T$, whose $(t,t')$-th entry is
$\E[(Y^{new}_t - \bx_{new}^\top\hbB\be_t)(Y^{new}_{t'} - \bx_{new}^\top\hbB\be_{t'})|(\bX,\bY)]$
where $(Y^{new}_t,Y^{new}_{t'},\bx_{new})$ is independent of $(\bX,\bY)$
and has the same distribution
as $(Y_{it},Y_{it'},\bx_i)$ for some $i=1,...,n$.
Estimating the generalization error is useful for parameter tuning: since
\begin{equation}
\trace[\bH^\top\bH + \bS] = \fnorm{\bSigma^{1/2}(\hbB-\bB^*)}^2 + \trace[\bS],
\label{eq:trace-gen}
\end{equation}
minimizing an estimator of $\trace[\bH^\top\bH + \bS]$ is a useful proxy
for minimizing the Frobenius error $\fnorm{\bSigma^{1/2}(\hbB-\bB^*)}^2$
of $\hbB$.
The following theorem gives an estimate for the generalization error matrix
as well as a consistent estimator for its trace (<ref>).
Let <Ref> be fulfilled. Then
\begin{align*}
\fnorm{ \bF^\top\bF/n - (\bI_T - \hbA/n) (\bH^\top\bH + \bS) (\bI_T - \hbA/n)} \le \Theta_2 n^{-\frac 12} \big(\fnorm*{\bF}^2/n + \fnorm{\bH}^2 + \trace(\bS)\big),
\end{align*}
for some non-negative random variable $\Theta_2$ of constant order, in the sense that $\E [\Theta_2] \le C(\gamma,\tau')$ under <Ref>(i),
and with ${\E [I(\Omega)\Theta_2]} \le C(\gamma,c)$ under <Ref>(ii), where $I(\Omega)$ is the indicator function of an event $\Omega$
with $\P(\Omega)\to 1$.
Furthermore, if $T = o(n)$ as $n, p\to \infty$ while $\tau', \gamma, c$ stay constant,
\begin{equation}
\frac{\trace(\bS) + \fnorm{\bH}^2}{\fnorm{(\bI_T - \hbA/n)^{-1}\bF^\top}^2/n} \overset{p}{\to} 1.
\label{eq: convergence proba generalization error}
\end{equation}
In the above theorem, $\bS$ and $\bH$ are unknown, while $\hbA$ and $\bF$ can be computed from the observed data $(\bX, \bY)$.
Thus (<ref>) shows that $\fnorm{(\bI_T - \hbA/n)^{-1}\bF^\top}^2/n$ is a consistent estimate for
the unobserved quantity $\trace(\bS) + \fnorm{\bH}^2$.
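In the special case of multi-task Ridge ($\lambda=0$ in the optimization problem above), the definition of $\bM$ gives, as can be checked directly, $\hbA = \df\,\bI_T$ with $\df=\trace[\bX(\bX^\top\bX+\tau n\bI_p)^{-1}\bX^\top]$, so the consistent estimate of $\trace(\bS)+\fnorm{\bH}^2$ takes a particularly simple form. A numpy sketch (ours; $\bSigma=\bI_p$, and all parameter values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p, T, tau = 500, 250, 3, 0.5
S = np.diag([1.0, 0.5, 0.25])
Bstar = rng.standard_normal((p, T)) / np.sqrt(p)
X = rng.standard_normal((n, p))            # Sigma = I_p
Y = X @ Bstar + rng.standard_normal((n, T)) @ np.sqrt(S)  # S diagonal here

Q = X.T @ X + tau * n * np.eye(p)
Bhat = np.linalg.solve(Q, X.T @ Y)         # multi-task Ridge (lambda = 0)
df = np.trace(X @ np.linalg.solve(Q, X.T)) # here the interaction matrix is df * I_T
F = Y - X @ Bhat
estimate = np.linalg.norm(F) ** 2 / (n * (1 - df / n) ** 2)
target = np.trace(S) + np.linalg.norm(Bhat - Bstar) ** 2
print(estimate, target)                    # the ratio should be close to 1
```

As a sanity check of the formula itself, for unpenalized least squares ($\tau\to0$, $p<n$) it reduces to $\norm{\bF}^2/(n(1-p/n)^2)$, whose expectation is exactly $\sigma^2 n/(n-p)$ per task, matching $\trace(\bS)+\fnorm{\bH}^2$ in that case.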
§.§ Back to bounds on estimation error
We are now ready to present our main result on the error bounds for $\hbS$.
It is a consequence of (<ref>), (<ref>) and (<ref>).
Let <Ref> be fulfilled and $T = o(n)$. Then
\begin{align}
\fnorm{\hbS - \bS} &\le O_P(n^{-\frac 12}) (\fnorm*{\bF}^2/n),\\
\fnorm{\hbS - \bS} &\le O_P(n^{-\frac 12}) [\trace(\bS) + \fnorm{\bH}^2].
\label{eq:upper bound trS+H2}
\end{align}
Here the $O_P(n^{-\frac12})$ notation involves constants depending on $\gamma,\tau',c$.
It is instructive at this point to compare (<ref>)
with the lower bound (<ref>) on the Frobenius error
of the method-of-moments estimator. When $p\ge n$ then $\E[\fnorm{\hbS_{\rm{(mm)}}-\bS}^2]\ge \frac{c}{n}[\trace[\bS] + \fnorm{\bSigma^{1/2}\bB^*}^2]^2$;
this is the situation where the statistician does not attempt to estimate
$\bB^*$, and pays a price of $[\trace[\bS] + \fnorm{\bSigma^{1/2}\bB^*}^2]^2/n$.
On the other hand, by definition of $\bH$ in (<ref>), the right-hand side of (<ref>),
when squared, is of order
$n^{-1}[\trace[\bS] + \fnorm{\bSigma^{1/2}(\hbB-\bB^*)}^2]^2$.
Here the error bound only depends on $\bB^*$ through
the estimation error for the nuisance $\fnorm{\bSigma^{1/2}(\hbB-\bB^*)}^2$.
This explains that when $\hbB$ is a good estimator of $\bB^*$
and $\fnorm{\bSigma^{1/2}(\hbB-\bB^*)}^2$ is smaller compared to $\fnorm{\bSigma^{1/2}\bB^*}^2$, the estimator $\hbS$ that leverages $\hbB$ will outperform
the method-of-moments estimator $\hbS_{\rm{(mm)}}$ which does not attempt to estimate the nuisance.
Finally, the next results show that under additional assumptions,
the estimator $\hbS$ enjoys Frobenius error bounds similar to the oracle estimator $\frac1n\bE^\top\bE$.
$\text{SNR }\le \mathfrak{snr}$ for some positive constant $\mathfrak{snr}$ independent of $n, p, T$, where $\text{SNR}= \fnorm{\bSigma^{\frac12}\bB^*}^2/\trace(\bS)$ denotes the signal-to-noise ratio of the multi-task linear model (<ref>).
Suppose that Assumptions <ref>, <ref>, <ref>, <ref>(i), <ref> and $T=o(n)$ hold. Then
\begin{equation}
\fnorm{\hbS - \bS} \le O_P(n^{-\frac 12}) \trace(\bS),
\end{equation}
where $O_P(\cdot)$ hides constants depending on $\gamma,\tau',\mathfrak{snr}$.
Consequently, since $\trace(\bS) \le \sqrt{T}\,\fnorm{\bS}$ by the Cauchy–Schwarz inequality,
\begin{align*}
&\fnorm{\hbS - \bS}^2 \le O_P(T/n) \fnorm{\bS}^2 = o_P(1)\fnorm{\bS}^2,\\
&\big|\norm{\hbS}_* - \trace(\bS)\big| \le O_P(\sqrt{T/n}) \trace(\bS) = o_P(1) \trace(\bS).
\end{align*}
Suppose that Assumptions <ref>, <ref>, <ref>, <ref>(ii)
and $T = o(n)$ hold.
If $\norm{\bB^*}_0\le (1-c)n/2$ and the tuning parameter $\lambda$ is of the form $\lambda=\mu\sqrt{\trace(\bS)/n}$ for some positive constant $\mu$, then
\begin{equation}
\fnorm{\hbS - \bS} \le O_P(n^{-\frac12}) (1 + \mu^2) \trace(\bS),
\label{eq: cor37}
\end{equation}
where $O_P(\cdot)$ hides constants depending on $c,\gamma, \phi_{\min}(\bSigma)$.
Comparing <Ref> with <Ref>, we conclude
that $\fnorm{\hbS-\bS}^2$ is of the same order as the Frobenius error of the oracle estimator
in (<ref>) up to constants depending on the signal-to-noise ratio, $\gamma$, and $\tau'$ under <Ref>(i), and up to constants depending on $\mu$, $c, \gamma, \phi_{\min}(\bSigma)$
under <Ref>(ii).
The error bounds in (<ref>)-(<ref>) are measured in Frobenius norm, similarly to existing works on
noise covariance estimation [Molstad, 2019].
Outside the context of linear regression models, much work has been devoted to covariance estimation
in the operator norm.
By the loose bound $\opnorm*{\bM}\leq \fnorm*{\bM}$, our upper bounds carry over to the operator norm.
The same cannot be said for lower bounds, since for instance
$\E\big[\opnorm{\hbS_{\rm{(oracle)}}- \bS}^2 \big] \asymp n^{-1} \opnorm{\bS} \trace(\bS)$ (see, e.g., <cit.>).
[Boxplot for estimating a full rank $\bS$ ]
[Boxplot for estimating a low rank $\bS$ ]
Boxplots for Frobenius norm loss over 100 repetitions.
§ NUMERICAL EXPERIMENTS
For our simulations,
we set $T=20$, $p=1.5n$, and let $n$ take the values $1000, 1500, 2000$ successively.
We consider two settings for the noise covariance matrix:
$\bS$ is full-rank and $\bS$ is low-rank.
The complete construction of $\bS$, $\bB^*$ and $\bX$,
as well as implementation details are given in the supplementary material.
We compare our proposed estimator $\hbS$ with relevant estimators including
(1) the naive estimate $\hbS_{\text{(naive)}} = n^{-1}\bF^\top\bF$,
(2) the method-of-moments estimate $\hbS_{\rm{(mm)}}$ defined in Proposition <ref>, and
(3) the oracle estimate $\hbS_{\rm{(oracle)}} = n^{-1}\bE^\top\bE$.
The performance of each estimator is measured in Frobenius norm: for instance, $\fnorm{\hbS-\bS}$ is the loss for the proposed estimator $\hbS$.
<Ref> displays the boxplots of the Frobenius loss from the different methods over 100 repetitions.
<Ref> shows that, besides the oracle estimator, our proposed estimator has the best performance with significantly smaller loss compared to the naive and method-of-moments estimators.
Since the estimation target is a $T\times T$ matrix, we also want to compare different estimators in terms of the bias and standard deviation for each entry of $\bS$.
<Ref> presents the heatmaps of bias and standard deviation from different estimators for full rank $\bS$ with $n=1000$.
The remaining heatmaps for different $n$ and for estimation of low rank $\bS$ are available in the supplementary material.
As expected, the oracle estimator has the best performance in <Ref>
and the smallest bias and variance in <Ref>.
The naive estimator has large bias as we see in <Ref>,
though it has small standard deviation.
The method-of-moments estimator is unbiased but its variance is relatively large,
which means its performance is not stable, as was reflected in <Ref>.
Our proposed estimator improves on both the naive and method-of-moments estimators
because it has much smaller bias than the former,
while having smaller standard deviation than the latter.
[Heatmap of bias for each entry ]
[Heatmap of standard deviation for each entry ]
Heatmaps for estimation of full rank $\bS$ with $n=1000$ over 100 repetitions.
§ LIMITATIONS AND FUTURE WORK
One limitation of the proposed estimator $\hbS$ is
that its construction requires knowledge of $\bSigma$.
Let us first mention that the estimator
$n^{-1}\fnorm{(\bI_T - \hbA/n)^{-1}\bF^\top}^2$ of $\trace(\bS) + \fnorm{\bSigma^{1/2}(\hbB-\bB^*)}^2$ in <Ref> does not require knowing $\bSigma$. Thus, this estimator can further be used as a proxy for the error $\fnorm{\bSigma^{1/2}(\hbB-\bB^*)}^2$,
say for parameter tuning, without knowledge of $\bSigma$.
The problem of estimating $\bS$ with known
$\bSigma$ was studied in [Celentano and Montanari, 2021] for $T=2$: in this
setting, referred to there as the inaccurate covariate model,
and for $p/n\le \gamma$, our results yield the convergence rate $n^{-1/2}$
for estimating $\bS$, which improves upon the rate $n^{-c_0}$ for a non-explicit constant $c_0>0$ in <cit.>.
In order to use $\hbS$ when $\bSigma$ is unknown, one may plug in
an estimator $\hbSigma$ in <Ref>, resulting in an extra term
of order $\opnorm{\hbSigma{}^{-1} - \bSigma^{-1}}\fnorm{\bF}$ for the Frobenius error. See <cit.> for related discussions in the $T=1$ (single-task) case. While, under the proportional regime $p/n\to \gamma$,
no estimator is consistent for all covariance matrices $\bSigma$ in operator norm, consistent estimators do exist under additional structural assumptions [Bickel and Levina, 2008, El Karoui, 2008, Cai et al., 2010].
If available, additional unlabeled samples $(\bx_i)_{i\ge n+1}$ can also
be used to construct a norm-consistent estimator of $\bSigma$.
Future directions include extending the estimator $\hbS$ to utilize
other estimators of the nuisance $\bB^*$
than the multi-task elastic-net (<ref>); for instance
(<ref>) or the estimators studied in [van de Geer and Stucky, 2016, Molstad, 2019, Bertrand et al., 2019]. In the simpler case where
columns of $\bB^*$ are estimated independently on each task, e.g.,
if the $T$ columns of $\hbB$ are Lasso estimators
$(\hbbeta^{(t)})_{t\in[T]}$ each computed from $\by{}^{(t)}$,
then minor modifications of our proof yield that the estimator (<ref>)
with $\hbA=\diag(\|\hbbeta^{(1)}\|_0,...,\|\hbbeta^{(T)}\|_0)$ enjoys similar
Frobenius norm bounds of order $n^{-1/2}$.
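To illustrate this last variant, the sketch below (ours; $\bSigma=\bI_p$, with an elementary ISTA Lasso solver and arbitrary parameter choices) computes the $T$ Lasso estimators task by task, sets $\hbA$ to the diagonal matrix of sparsities, and plugs them into the formula defining $\hbS$:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=800):
    """Plain ISTA for the single-task Lasso: ||y - Xb||^2/(2n) + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    L = np.linalg.norm(X, 2) ** 2 / n     # Lipschitz constant of the smooth part
    for _ in range(n_iter):
        z = b - X.T @ (X @ b - y) / (n * L)
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return b

rng = np.random.default_rng(5)
n, p, T = 400, 200, 3
S = np.array([[1.0, 0.4, 0.0],
              [0.4, 1.0, 0.2],
              [0.0, 0.2, 0.5]])
Bstar = np.zeros((p, T))
Bstar[:8] = 1.0                           # 8 shared nonzero rows
X = rng.standard_normal((n, p))           # Sigma = I_p
Y = X @ Bstar + rng.standard_normal((n, T)) @ np.linalg.cholesky(S).T

# One Lasso per task, then A = diag of the sparsities of the T solutions.
Bhat = np.column_stack([lasso_ista(X, Y[:, t], lam=0.15) for t in range(T)])
A = np.diag([float(np.count_nonzero(Bhat[:, t])) for t in range(T)])
F = Y - X @ Bhat
inner = F.T @ ((p + n) * F - X @ (X.T @ F)) - A @ F.T @ F - F.T @ F @ A
Dinv = np.linalg.inv(n * np.eye(T) - A)
S_hat = Dinv @ inner @ Dinv
print(np.round(S_hat, 2))                 # roughly recovers S
```

Here the diagonal of $\hbA$ uses the sparsity of each Lasso solution as its degrees of freedom, as described above.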
[Bayati and Montanari, 2012]
Mohsen Bayati and Andrea Montanari.
The lasso risk for Gaussian matrices.
IEEE Transactions on Information Theory, 58 (4): 1997–2017, 2012.
[Bayati et al., 2013]
Mohsen Bayati, Murat A Erdogdu, and Andrea Montanari.
Estimating lasso risk and noise level.
In NIPS, volume 26, pages 944–952, 2013.
[Bellec and Kuchibhotla, 2019]
Pierre Bellec and Arun Kuchibhotla.
First order expansion of convex regularized estimators.
In Advances in Neural Information Processing Systems, pages
3462–3473, 2019.
[Bellec, 2020]
Pierre C Bellec.
Out-of-sample error estimate for robust m-estimators with convex penalty.
arXiv preprint arXiv:2008.11840, 2020.
[Bellec and Romon, 2021]
Pierre C Bellec and Gabriel Romon.
Chi-square and normal inference in high-dimensional multi-task regression.
arXiv preprint arXiv:2107.07828, 2021.
[Bellec and Tsybakov, 2017]
Pierre C Bellec and Alexandre B Tsybakov.
Bounds on the prediction error of penalized least squares estimators
with convex penalty.
In Vladimir Panov, editor, Modern Problems of Stochastic
Analysis and Statistics, Selected Contributions In Honor of Valentin
Konakov. Springer, 2017.
URL <https://arxiv.org/pdf/1609.06675.pdf>.
[Bellec and Zhang, 2019]
Pierre C Bellec and Cun-Hui Zhang.
De-biasing convex regularized estimators and interval estimation in
linear models.
arXiv preprint arXiv:1912.11943, 2019.
[Bellec and Zhang, 2021]
Pierre C Bellec and Cun-Hui Zhang.
Second-order stein: Sure for sure and other applications in
high-dimensional inference.
The Annals of Statistics, 49 (4): 1864–1903, 2021.
[Belloni et al., 2011]
Alexandre Belloni, Victor Chernozhukov, and Lie Wang.
Square-root lasso: pivotal recovery of sparse signals via conic programming.
Biometrika, 98 (4): 791–806, 2011.
[Belloni et al., 2014]
Alexandre Belloni, Victor Chernozhukov, and Lie Wang.
Pivotal estimation via square-root lasso in nonparametric regression.
Ann. Statist., 42 (2): 757–788, 2014.
URL <https://doi.org/10.1214/14-AOS1204>.
[Bertrand et al., 2019]
Quentin Bertrand, Mathurin Massias, Alexandre Gramfort, and Joseph Salmon.
Handling correlated and repeated measurements with the smoothed
multivariate square-root lasso.
In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural
Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
[Bickel and Levina, 2008]
Peter J Bickel and Elizaveta Levina.
Covariance regularization by thresholding.
The Annals of Statistics, 36 (6): 2577–2604, 2008.
[Boucheron et al., 2013]
Stéphane Boucheron, Gábor Lugosi, and Pascal Massart.
Concentration inequalities: A nonasymptotic theory of independence.
Oxford University Press, 2013.
[Cai et al., 2010]
T Tony Cai, Cun-Hui Zhang, and Harrison H Zhou.
Optimal rates of convergence for covariance matrix estimation.
The Annals of Statistics, 38 (4): 2118–2144, 2010.
[Celentano and Montanari, 2021]
Michael Celentano and Andrea Montanari.
Cad: Debiasing the lasso with inaccurate covariate model.
arXiv preprint arXiv:2107.14172, 2021.
[Davidson and Szarek, 2001]
Kenneth R Davidson and Stanislaw J Szarek.
Local operator theory, random matrices and banach spaces.
Handbook of the geometry of Banach spaces, volume 1, pages 317–366,
2001.
[Dicker, 2014]
Lee H Dicker.
Variance estimation in high-dimensional linear models.
Biometrika, 101(2):269–284, 2014.
[Dicker and Erdogdu, 2016]
Lee H Dicker and Murat A Erdogdu.
Maximum likelihood for variance estimation in high-dimensional linear
models.
In Artificial Intelligence and Statistics, pages 159–167.
PMLR, 2016.
[Dobriban and Wager, 2018]
Edgar Dobriban and Stefan Wager.
High-dimensional asymptotics of prediction: Ridge regression and
classification.
The Annals of Statistics, 46(1):247–279, 2018.
[El Karoui, 2008]
Noureddine El Karoui.
Operator norm consistent estimation of large-dimensional sparse
covariance matrices.
The Annals of Statistics, 36(6):2717–2756, 2008.
[Fan et al., 2012]
Jianqing Fan, Shaojun Guo, and Ning Hao.
Variance estimation using refitted cross-validation in ultrahigh
dimensional regression.
Journal of the Royal Statistical Society: Series B (Statistical
Methodology), 74(1):37–65, 2012.
[Fourdrinier et al., 2021]
Dominique Fourdrinier, Anis M Haddouche, and Fatiha Mezoued.
Covariance matrix estimation under data-based loss.
Statistics & Probability Letters, page 109160, 2021.
[Friedman et al., 2010]
Jerome Friedman, Trevor Hastie, and Rob Tibshirani.
Regularization paths for generalized linear models via coordinate
descent.
Journal of Statistical Software, 33(1):1–22, 2010.
[Janson et al., 2017]
Lucas Janson, Rina Foygel Barber, and Emmanuel Candes.
Eigenprism: inference for high dimensional signal-to-noise ratios.
Journal of the Royal Statistical Society: Series B (Statistical
Methodology), 79(4):1037–1065, 2017.
[Koltchinskii and Lounici, 2017]
Vladimir Koltchinskii and Karim Lounici.
Concentration inequalities and moment bounds for sample covariance
operators.
Bernoulli, 23(1):110–133, 2017.
[Laurent and Massart, 2000]
B. Laurent and P. Massart.
Adaptive estimation of a quadratic functional by model selection.
Ann. Statist., 28(5):1302–1338, 2000.
URL <http://dx.doi.org/10.1214/aos/1015957395>.
[Liu and Zhang, 2009]
Han Liu and Jian Zhang.
Estimation consistency of the group lasso and its applications.
In Artificial Intelligence and Statistics, pages 376–383.
PMLR, 2009.
[Lounici et al., 2011]
Karim Lounici, Massimiliano Pontil, Sara Van De Geer, and Alexandre B Tsybakov.
Oracle inequalities and optimal inference under group sparsity.
The Annals of Statistics, 39(4):2164–2204, 2011.
[Miolane and Montanari, 2018]
Léo Miolane and Andrea Montanari.
The distribution of the lasso: Uniform control over sparse balls and
adaptive parameter tuning.
arXiv preprint arXiv:1811.01212, 2018.
[Molstad, 2019]
Aaron J Molstad.
New insights for the multivariate square-root lasso.
arXiv preprint arXiv:1909.05041, 2019.
[Obozinski et al., 2011]
Guillaume Obozinski, Martin J Wainwright, and Michael I Jordan.
Support union recovery in high-dimensional multivariate regression.
The Annals of Statistics, 39(1):1–47, 2011.
[Pedregosa et al., 2011]
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel,
M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos,
D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay.
Scikit-learn: Machine learning in Python.
Journal of Machine Learning Research, 12:2825–2830, 2011.
[Simon et al., 2013]
Noah Simon, Jerome Friedman, and Trevor Hastie.
A blockwise descent algorithm for group-penalized multiresponse and
multinomial regression.
arXiv preprint arXiv:1311.6529, 2013.
[Stein, 1981]
Charles M Stein.
Estimation of the mean of a multivariate normal distribution.
The Annals of Statistics, 9(6):1135–1151, 1981.
[Sun and Zhang, 2012]
Tingni Sun and Cun-Hui Zhang.
Scaled sparse linear regression.
Biometrika, 99(4):879–898, 2012.
[van de Geer and Stucky, 2016]
Sara van de Geer and Benjamin Stucky.
$\chi^2$-confidence sets in high-dimensional regression.
In Statistical analysis for high-dimensional data, pages
279–306. Springer, 2016.
[Yu and Bien, 2019]
Guo Yu and Jacob Bien.
Estimating the error variance in a high-dimensional linear model.
Biometrika, 106(3):533–546, 2019.
§ SUPPLEMENT
This supplement is organized as follows:
* In <Ref> we provide details regarding the setting of our simulations, as well as additional experiment results.
* In <Ref> we establish an upper bound for the out-of-sample error, which could not be included in the full paper due to the page length limit.
* <Ref> provides the upper bound for $\opnorm{(\bI_T - \hbA/n)^{-1}}$ mentioned after <Ref> in the full paper, and
Appendices <ref>, <ref>, <ref> contain preliminary theoretical statements, which will be useful for proving our main results in the full paper.
* <Ref> contains proofs of main results in <Ref> of the full paper and <Ref>.
* <Ref> contains proofs of preliminary results in Appendices <ref> to <ref>.
Here we introduce basic notations that will be used throughout this supplement.
We use indexes $i$ and $l$ only to loop or sum over $[n] = \{1, 2, \ldots, n\}$, use $j$ and $k$ only to loop or sum over $[p] = \{1, 2, \ldots, p\}$,
use $t$ and $t'$ only to loop or sum over $[T] = \{1, 2, \ldots, T\}$, so that $\be_i$ (and $\be_l$) refer to the $i$-th (and $l$-th) canonical basis vector in $\R^n$, $\be_j$ (and $\be_k$) refer to the $j$-th (and $k$-th) canonical basis vector in $\R^p$, $\be_t$ (and $\be_{t'}$) refer to the $t$-th (and $t'$-th) canonical basis vector in $\R^T$.
For any two real numbers $a$ and $b$, let $a\vee b = \max(a,b)$, and $a\wedge b = \min(a,b)$.
Positive constants that depend on $\gamma, \tau'$ only are denoted by $C(\gamma, \tau')$, and positive constants that depend on $\gamma, c$ only are denoted by $C(\gamma, c)$. The values of these constants
may vary from place to place.
§ EXPERIMENT DETAILS AND ADDITIONAL SIMULATION RESULTS
§.§ Experimental details
This section provides more experimental detail for <Ref> of the full paper.
We consider two types of noise covariance matrix:
(i) $\bS$ is full-rank with $(t,t')$-th entry $\bS_{t,t'} = \frac{\cos(t-t')}{1 + \sqrt{|t-t'|}}$;
(ii) $\bS$ is low-rank with $\bS = \bu\bu^\top$, where $\bu\in \R^{T\times 10}$ has entries from $\calN(0, 1/T)$.
To build the coefficient matrix $\bB^*$, we first set its sparsity pattern, i.e., we define the support $\mathscr{S}$ of cardinality $|\mathscr{S}| = 0.1 p$,
then we generate an intermediate matrix $\bB\in \R^{p\times T}$.
The $j$-th row of $\bB$ is sampled from $\calN_T(\mathbold0, p^{-1}\bI_T)$ if $j\in \mathscr{S}$,
otherwise we set it to be the zero vector.
Finally we let $\bB^* =\bB [\trace(\bS)/ \trace(\bB^\top\bSigma \bB)]^{\frac 12}$,
which forces a signal-to-noise ratio of exactly $1$.
The design matrix $\bX$ is constructed by independently sampling its rows from $\calN_p(\mathbold 0, \bSigma)$ with $\bSigma_{jk} = 0.5^{|j-k|}$.
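For concreteness, this data-generating process can be sketched in a few lines of NumPy; the sizes $n$, $p$, $T$ below are ours, chosen only for illustration, and an eigenvalue clipping step is added to guard against numerical non-positive-definiteness of $\bS$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, T = 300, 150, 20  # illustrative sizes, not those of the paper

# (i) Full-rank noise covariance S with S[t, t'] = cos(t - t') / (1 + sqrt(|t - t'|)).
d = np.subtract.outer(np.arange(T), np.arange(T))
S = np.cos(d) / (1.0 + np.sqrt(np.abs(d)))

# Design covariance Sigma[j, k] = 0.5^{|j - k|}.
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))

# Sparse coefficient matrix: support of size 0.1*p, rows ~ N_T(0, I_T / p).
s = int(0.1 * p)
support = rng.choice(p, size=s, replace=False)
B = np.zeros((p, T))
B[support] = rng.standard_normal((s, T)) / np.sqrt(p)
# Rescale so trace(B*^T Sigma B*) = trace(S), i.e. signal-to-noise ratio exactly 1.
Bstar = B * np.sqrt(np.trace(S) / np.trace(B.T @ Sigma @ B))

# Design X with rows ~ N_p(0, Sigma); noise E with rows ~ N_T(0, S).
X = rng.standard_normal((n, p)) @ np.linalg.cholesky(Sigma).T
w, V = np.linalg.eigh((S + S.T) / 2)                     # eigendecomposition of S
E = rng.standard_normal((n, T)) @ (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T
Y = X @ Bstar + E
```

The rescaling step enforces the signal-to-noise ratio of 1 exactly, not just in expectation.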
The Python library Scikit-learn [Pedregosa et al., 2011] is used to calculate $\hbB$ in (<ref>).
More precisely we invoke to
obtain $\hbB$ by 5-fold cross-validation with parameters , .
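A plausible reconstruction of the elided scikit-learn call is `MultiTaskElasticNetCV`; the class choice and all parameter values below are our assumption, not taken from the paper:

```python
import numpy as np
from sklearn.linear_model import MultiTaskElasticNetCV  # assumed estimator class

rng = np.random.default_rng(1)
n, p, T = 100, 30, 5  # small toy sizes, ours
X = rng.standard_normal((n, p))
Y = X[:, :3] @ rng.standard_normal((3, T)) + 0.1 * rng.standard_normal((n, T))

# 5-fold cross-validated multi-task elastic-net; scikit-learn stores coef_
# with shape (T, p), so transposing recovers the paper's (p, T) convention.
model = MultiTaskElasticNetCV(l1_ratio=0.9, cv=5).fit(X, Y)
hatB = model.coef_.T
```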
To compute the interaction matrix $\hbA$
we used the efficient implementation described in <cit.>.
The full code needed to reproduce our experiments is part of the supplementary material.
A detailed Readme file is located in the corresponding folder.
The simulation results reported in the full paper and this supplementary material were run on a cluster of 50 CPU cores (each an Intel Xeon E5-2680 v4 @2.40GHz) equipped with a total of 150 GB of RAM.
It takes approximately six hours to obtain all of our simulation results.
§.§ Numerical results of Frobenius norm loss
While <Ref> in the full paper provides boxplots of Frobenius norm loss for 100 repetitions,
we report in <Ref> below the mean and standard deviation of the Frobenius norm loss over the same 100 repetitions.
Comparison of different methods for estimation of $\bS$ (mean and standard deviation of the Frobenius norm loss over 100 repetitions):

$\bS$       method     n=1000 mean/sd    n=1500 mean/sd    n=2000 mean/sd
full rank   naive      2.593 / 0.090     2.572 / 0.076     2.562 / 0.070
            mm         2.030 / 0.616     1.554 / 0.421     1.413 / 0.405
            proposed   1.207 / 0.119     0.984 / 0.084     0.858 / 0.072
            oracle     0.652 / 0.061     0.534 / 0.052     0.469 / 0.045
low rank    naive      2.942 / 0.119     2.912 / 0.102     2.908 / 0.094
            mm         2.027 / 0.686     1.561 / 0.435     1.423 / 0.414
            proposed   1.216 / 0.172     0.989 / 0.125     0.854 / 0.118
            oracle     0.654 / 0.096     0.531 / 0.081     0.464 / 0.065
The numerical results in <Ref> are consistent with the boxplots in <Ref>.
It is clear from <Ref> that our proposed estimator has significantly smaller loss than the naive estimator and method-of-moments estimator.
These comparisons again show the superior performance of our proposed estimator.
§.§ Additional heatmaps for estimating full rank S
When estimating the full rank $\bS$ with $(t,t')$-th entry $\bS_{t,t'} = \frac{\cos(t-t')}{1 + \sqrt{|t-t'|}}$, the heatmaps for the different estimators with $n=1500$ and $n=2000$ are presented in <Ref>, respectively.
The comparison patterns in <Ref> are similar to the case $n=1000$ in <Ref> of the full paper; our proposed estimator outperforms the naive estimator and method-of-moments estimator.
[Figure: heatmaps of the bias (left) and the standard deviation (right) of each entry, for estimation of the full rank $\bS$ with $n=1500$ over 100 repetitions.]
[Figure: heatmaps of the bias (left) and the standard deviation (right) of each entry, for estimation of the full rank $\bS$ with $n=2000$ over 100 repetitions.]
§.§ Heatmaps for estimating low rank S
When estimating the low rank $\bS = \bu\bu^\top$, where $\bu\in \R^{T\times 10}$ has entries from $\calN(0, 1/T)$, we present the heatmaps for the different estimators with $n=1000, 1500, 2000$ in <Ref> below. All of these figures show that, apart from the oracle estimator, the proposed estimator has the best performance.
[Figure: heatmaps of the bias (left) and the standard deviation (right) of each entry, for estimation of the low rank $\bS$ with $n=1000$ over 100 repetitions.]
[Figure: heatmaps of the bias (left) and the standard deviation (right) of each entry, for estimation of the low rank $\bS$ with $n=1500$ over 100 repetitions.]
[Figure: heatmaps of the bias (left) and the standard deviation (right) of each entry, for estimation of the low rank $\bS$ with $n=2000$ over 100 repetitions.]
§ OUT-OF-SAMPLE ERROR ESTIMATION
In this appendix, we present a by-product of our techniques for estimating the noise covariance.
Suppose we wish to evaluate the performance of a regression method on new data. We define the out-of-sample error for the multi-task linear model (<ref>) as
\E \big[(\hbB - \bB^*)^\top\bx_{\text{new}} \bx_{\text{new}}^\top (\hbB - \bB^*)|(\bX, \bY)\big] = \bH^\top \bH,
where $\bx_{\text{new}}$ is independent of the data $(\bX, \bY)$ and has the same distribution as any row of $\bX$.
The following theorem on estimation of the out-of-sample error is a by-product of our technique for constructing $\hbS$.
Under the same conditions of <Ref>, with $\bZ = \bX \bSigma^{-\frac12}$, we have
\begin{align*}
&\fnorm[\Big]{(\bI_T -\hbA/n)\bH^\top\bH (\bI_T -\hbA/n) - \frac{1}{n^2} \big( \bF^\top \bZ \bZ^\top \bF + \hbA \bF^\top \bF + \bF^\top \bF \hbA -
p\bF^\top \bF\big)}\\
\le~& \Theta_3 n^{-\frac 12} (\fnorm{\bH}^2 + \fnorm*{\bF}^2/n)
\end{align*}
for some non-negative random variable $\Theta_3$ of constant order, in the sense that $\E [\Theta_3] \le C(\gamma,\tau')$ under <Ref>(i),
and with ${\E [I(\Omega)\Theta_3]} \le C(\gamma,c)$ under <Ref>(ii), where $I(\Omega)$ is the indicator function of an event $\Omega$
with $\P(\Omega)\to 1$.
<Ref> generalizes the result in [Bellec, 2020] to the multi-task setting.
While the out-of-sample error $\bH^\top\bH$ is unknown, the quantities $\bZ$, $\bF$, $\hbA$ are observable. Since typically the quantity $(\fnorm{\bH}^2 + \fnorm*{\bF}^2/n)$ is of a constant order, <Ref> suggests the following estimate of $\bH^\top\bH$:
\[
\frac{1}{n^2} (\bI_T - \hbA/n)^{-1} \big( \bF^\top \bZ\bZ^\top \bF + \hbA \bF^\top \bF + \bF^\top \bF \hbA -
p\bF^\top \bF\big) (\bI_T - \hbA/n)^{-1},
\]
which can further be used for parameter tuning in the multi-task linear model.
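In code, this plug-in estimate is a few lines of linear algebra; the function below is a minimal sketch (the function name and the toy inputs are ours), taking the residual matrix $\bF$, the whitened design $\bZ$, and the interaction matrix $\hbA$ as precomputed inputs:

```python
import numpy as np

def out_of_sample_estimate(F, Z, A_hat):
    """Plug-in estimate of H^T H:
    (1/n^2) (I - A/n)^{-1} (F^T Z Z^T F + A F^T F + F^T F A - p F^T F) (I - A/n)^{-1},
    with F (n, T) residuals, Z (n, p) whitened design, A_hat (T, T)."""
    n, p = Z.shape
    T = F.shape[1]
    inv = np.linalg.inv(np.eye(T) - A_hat / n)
    G = F.T @ Z                    # (T, p)
    FtF = F.T @ F                  # (T, T)
    core = G @ G.T + A_hat @ FtF + FtF @ A_hat - p * FtF
    return inv @ core @ inv / n**2

# Tiny check with A_hat = 0: the estimate reduces to (F^T Z Z^T F - p F^T F) / n^2.
est = out_of_sample_estimate(np.ones((2, 1)), np.array([[1.0], [0.0]]), np.zeros((1, 1)))
```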
We present the proof of <Ref> in <Ref>.
§ USEFUL OPERATOR NORM BOUNDS
Besides the event $U_1 = \big\{ \norm{\hbB}_0 \le n(1-c)/2 \big\}$ in <Ref>(ii), we define two further events $U_2$ and $U_3$ as follows:
\begin{align*}
U_2 &= \big\{\inf_{\bb\in \R^p: \| \bb\|_0 \le (1-c)n} \|\bX \bb\|^2/(n \|\bSigma^{\frac 12} \bb\|^2) > \eta\big\},\\
U_3 &= \big\{ \opnorm{\bX\bSigma^{-\frac 12}} < 2 \sqrt n + \sqrt p\big\}.
\end{align*}
Under <Ref>, <cit.> guarantees $\P(U_2) \ge 1 - C(\gamma, c)e^{-C(\gamma, c)n}$ for some constant $\eta$ that only depends on constants $\gamma, c$.
Under <Ref>, <cit.> guarantees $\P(U_3) > 1 - e^{-n/2}$ and there exists a random variable $z\sim \calN(0,1)$ s.t. $\opnorm{\bX\bSigma^{-\frac 12}} \le \sqrt{n} + \sqrt{p} + z$ almost surely. Therefore, under <Ref>, we have
\begin{equation}\label{eq: opnorm-Z}
\E[ \opnorm{n^{-\frac 12}\bX\bSigma^{-\frac 12}}^2] \le (1 + \sqrt{p/n})^2 + n^{-1} \le C(\gamma).
\end{equation}
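The operator norm bound defining $U_3$ is easy to check numerically (the sizes below are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 400, 200
Z = rng.standard_normal((n, p))   # Z = X Sigma^{-1/2} has i.i.d. N(0, 1) entries

# The largest singular value concentrates around sqrt(n) + sqrt(p), and
# exceeds 2 sqrt(n) + sqrt(p) only with probability at most e^{-n/2}.
opnorm = np.linalg.norm(Z, ord=2)
bound = 2 * np.sqrt(n) + np.sqrt(p)
```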
Furthermore, under <Ref>(ii), $\P(U_1\cap U_2\cap U_3)\to1$ by a union bound, and for large enough $n$,
\begin{align}
\P\big\{(U_1\cap U_2\cap U_3)^c\big\} \notag
<& 1/T + C(\gamma, c)e^{-n/C(\gamma, c)} \notag\\
=& \frac 1T (1 + TC(\gamma, c)e^{-n/C(\gamma, c)} )\notag\\
<& \frac 1T (1 + C(\gamma, c)e^{\sqrt{n} -n/C(\gamma, c)})\notag \\
<& \frac 1T C(\gamma, c). \label{eq: P-Omega}
\end{align}
Now we provide the operator norm bounds for $\bI_T - \hbA/n$ and $(\bI_T - \hbA/n)^{-1}$.
Suppose that <Ref> holds. If $\tau >0$ in (<ref>) with $\tau' = \tau /\opnorm{\bSigma}$, then
* $\opnorm{\bI_T - \hbA/n} \le 1$.
* In the event $U_3$, we have $\opnorm{(\bI_T - \hbA/n)^{-1}} \le 1 + (\tau')^{-1} (2 + \sqrt{p/n})^2$. Furthermore, $\E [\opnorm{(\bI_T - \hbA/n)^{-1}} ]\le 1 + (\tau')^{-1} [(1 + \sqrt{p/n})^2 +n^{-1}].$
Suppose that <Ref> holds.
If $\tau=0$ in (<ref>), then
* In the event $U_1$, we have $\opnorm{\bI_T - \hbA/n} \le 1$.
* In the event $U_1$, $\opnorm{(\bI_T - \hbA/n)^{-1}} \le C(c)$. Hence, $\E [I(U_1)\opnorm{(\bI_T - \hbA/n)^{-1}} ]\le C(c).$
With $\bN = (\bI_T \otimes \bX) \bM^\dagger (\bI_T \otimes \bX^\top)$, we have $\opnorm{\bN}\le 1$.
§ LIPSCHITZ AND DIFFERENTIAL PROPERTIES FOR A GIVEN, FIXED NOISE MATRIX E
We need to study Lipschitz and differential properties of certain mappings when the noise matrix $\bE$ is fixed.
Let $g:\R^{p\times T}\to \R$, defined by
$g(\bB)=\tau \fnorm{\bB}^2/2 + \lambda \norm{\bB}_{2,1}$, be the penalty in (<ref>).
define the mappings
\begin{align}
\bH &: \R^{n\times p} \to \R^{p\times T}, & \bH(\bZ) &= \argmin_{\bar\bH\in\R^{p\times T}} \frac{1}{2n}\fnorm{\bE - \bZ\bar\bH}^2 + g(\bSigma^{-1/2}\bar\bH),\\
\bF &: \R^{n\times p} \to \R^{n\times T}, & \bF(\bZ) &= \bE - \bZ\bH(\bZ),\\
D &: \R^{n\times p} \to \R, & D(\bZ) &= \big(\fnorm{\bH(\bZ)}^2 + \fnorm{\bF(\bZ)}^2/n\big)^{1/2}.
\end{align}
Next, define the random variable $\bZ = \bX\bSigma^{-\frac12}\in\R^{n\times p}$, and let us use the convention that if arguments of $\bH,\bF$ or $D$ are omitted then these mappings are implicitly taken at the realized value of the random variable $\bZ = \bX\bSigma^{-\frac12}\in\R^{n\times p}$ where $\bX$ is the observed design matrix. With this convention and by definition of the above mappings, we then have
$\bH = \bH(\bZ) = \bSigma^{1/2}(\hbB - \bB^*)$ as well as
$\bF = \bF(\bZ) = \bY - \bX\hbB$
and $D = [\fnorm{\bH}^2 + \fnorm{\bF}^2/n]^{1/2}$
so that the notation is consistent with the rest of the paper
(in particular, with (<ref>)).
Finally, denote the $(i,j)$-th entry of $\bZ$ by $z_{ij}$
throughout this appendix, and the corresponding partial derivatives of the above mappings
by $\frac{\partial}{\partial z_{ij}}$.
§.§ Lipschitz properties
For the multi-task elastic-net (i.e., $\tau>0$ in (<ref>)), the mapping $\bZ \mapsto D^{-1}\bF/\sqrt{n}$ is $n^{-\frac12}L$-Lipschitz with $L = 8 \max(1, (2\tau')^{-1})$, where $\tau' = \tau/\opnorm{\bSigma}$.
For the multi-task group Lasso (i.e., $\tau=0$ in (<ref>)), we have
(1) In $U_1\cap U_2$, the map $\bZ \mapsto D^{-1}\bF/\sqrt{n}$ is $n^{-\frac12}L$-Lipschitz with $L = 8 \max(1, (2\eta)^{-1})$.
(2) In $U_1\cap U_2 \cap U_3$, the map $\bZ \mapsto D^{-1}\bZ^\top\bF/n$ is $n^{-1/2} (1 + (2 +\sqrt{p/n})L)$-Lipschitz, where $L = 8 \max(1, (2\eta)^{-1})$ as in (1).
Suppose that <Ref> holds, then
(1) Under <Ref>(i) that $\tau>0$ and $\tau'=\tau/\opnorm{\bSigma}$, we have
\begin{align*}
\sum_{ij}\Big(\frac{\partial D}{\partial z_{ij}}\Big)^2 \le n^{-1}D^2 [4 \max(1, (2\tau')^{-1})]^2.
\end{align*}
This implies that
$nD^{-2}\sum_{ij}(\frac{\partial D}{\partial z_{ij}})^2 \le C(\tau')$.
(2) Under <Ref>(ii) that $\tau=0$ and $\P(U_1)\to 1$, in the event $U_1\cap U_2$, we have
\begin{align*}
\sum_{ij}\Big(\frac{\partial D}{\partial z_{ij}}\Big)^2 \le n^{-1}D^2 [4 \max(1, (2\eta)^{-1})]^2.
\end{align*}
This implies that
$ nD^{-2}\sum_{ij}(\frac{\partial D}{\partial z_{ij}})^2 \le C(\eta) = C(\gamma, c)$ since $\eta$ is a constant that only depends on $\gamma, c$.
§.§ Derivative formulae
Note that with a fixed noise $\bE$, <Ref> guarantee that the map $\bZ \mapsto \bF$ is Lipschitz, hence the derivative exists almost everywhere by Rademacher’s theorem.
We present the formula for derivative of this map in <Ref>.
Recall $\bF = \bY - \bX\hbB$ with $\hbB$ defined in (<ref>). Under <Ref>(i) $\tau>0$, or under <Ref>(ii) $\tau =0$ and in the event $U_1\cap U_2$, for each $i,l\in [n], j\in [p], t\in [T]$, the following derivative exists almost everywhere and has the expression
\begin{equation*}
\frac{\partial F_{lt}}{\partial z_{ij}} = D_{ij}^{lt} + \Delta_{ij}^{lt},
\end{equation*}
where $D_{ij}^{lt} = -(\be_j^\top\bH \otimes \be_i^\top) (\bI_{nT} - \bN) (\be_t\otimes \be_l)$, and
$\Delta_{ij}^{lt} = -(\be_t^\top \otimes \be_l^\top)(\bI_T\otimes \bX)
\bM^\dagger (\bI_T\otimes \bSigma^{\frac12})
\bigl(\bF^\top \otimes \bI_{p}\bigr)(\be_i \otimes\be_j)$.
Furthermore, a straightforward calculation yields
$$\sum_{i=1}^n D_{ij}^{it} = -\be_j^\top \bH (n\bI_T - \hbA)\be_t.$$
Suppose that <Ref> holds.
(1) Under <Ref>(i) that $\tau>0$ and $\tau' = \tau/\opnorm{\bSigma}$, we have
\begin{align*}
\frac 1n \sum_{ij}\norm*{\frac{\partial (\bF/D)}{\partial z_{ij}}}^2_{\rm F} \le \underbrace{4 \max(1, (\tau')^{-1} (T\wedge \frac{p}{n})) + 2 n^{-1} [4 \max(1, (2\tau')^{-1})]^2}_{f(\tau', T, n, p)}.
\end{align*}
(2) Under <Ref>(ii) that $\tau=0$ and $\P(U_1)\to 1$, in the event $U_1\cap U_2$, we have
\begin{align*}
\frac 1n \sum_{ij}\norm*{\frac{\partial (\bF/D)}{\partial z_{ij}}}^2_{\rm F} \le \underbrace{4 \max(1, (\eta)^{-1} (T\wedge \frac{p}{n})) + 2 n^{-1} [4 \max(1, (2\eta)^{-1})]^2}_{f(\eta, T, n, p)}.
\end{align*}
Furthermore, the right-hand side in (1) can be bounded from above by $C(\tau') (T\wedge \frac pn)$, and the right-hand side in (2) can be bounded from above by $C(\gamma, c)$ in the regime $p/n\le \gamma$.
§ LIPSCHITZ AND DIFFERENTIAL PROPERTIES FOR A GIVEN, FIXED DESIGN MATRIX
We also need to study Lipschitz and derivative properties of functions
of the noise $\bE$ when the design $\bX$ is fixed. Formally, for a given and fixed design matrix $\bX$, define the function
$\bE\mapsto \bF(\bE)$ by
the value $\bY-\bX\hbB$ of the residual matrix
when the observed data $(\bX,\bY)$ is $(\bX,\bX\bB^* + \bE)$
and with $\hbB$ the estimator (<ref>).
Note that this map is 1-Lipschitz by <cit.>. Rademacher’s theorem thus guarantees this map is differentiable almost everywhere. We denote its partial derivative by $\frac{\partial}{\partial E_{it}}$
for each entry $(E_{it})_{i\in[n],t\in[T]}$ of the noise matrix $\bE$.
We present its derivative formula in <Ref> below.
For each $i,l\in [n], t,t'\in [T]$, the following derivative exists almost everywhere and has the expression
\begin{align*}
\frac{\partial F_{lt}}{\partial E_{it'}} = \be_l^\top\be_i\be_t^\top\be_{t'}
- \be_l^\top (\be_t^\top \otimes \bX)\bM^\dagger (\be_{t'} \otimes \bX^\top) \be_i.
\end{align*}
As a consequence, we further have
\begin{equation*}
\sum_{i=1}^n \frac{\partial F_{it}}{\partial E_{it'}} = \be_t^\top (n\bI_T -\hbA)\be_{t'},\quad \sum_{i=1}^n \frac{\partial \be_i^\top\bZ \bH\be_{t}}{\partial E_{it'}} = \be_t^\top \hbA \be_{t'}.
\end{equation*}
§ PROBABILISTIC TOOLS
We first list some useful variants of Stein's formulae and Gaussian-Poincaré inequalities. Let $f'$ denote the derivative of a differentiable univariate function. For a differentiable vector-valued function $\bff(\bz): \R^n \to \R^n$, denote its Jacobian (derivative) and divergence respectively by $\nabla \bff(\bz)$ and $\div \bff(\bz)$, i.e., $[\nabla \bff(\bz)]_{i,l} = \frac{\partial f_i(\bz)}{\partial z_l}$ for $i,l\in [n]$, and $\div \bff(\bz) = \trace(\nabla \bff(\bz))$.
The following identities hold provided the involved derivatives exist a.e. and the expectations are finite.
* $z\sim \calN(0, 1)$, $f: \R \to \R$, then
$$\E [(z f(z) - f'(z))^2] = \E [f(z)^2] + \E[(f'(z))^2].$$
* $\bz \sim \calN_n(\mathbold 0, \bI_n)$, $\bff: \R^n \to \R^n$, then
$$\E [(\bz^\top \bff(\bz) - \div\bff(\bz))^2] = \E \big[\norm{\bff(\bz)}^2 + \trace[( \nabla\bff(\bz))^2]\big] \le \E \big[\norm{\bff(\bz)}^2 + \fnorm{ \nabla\bff(\bz)}^2\big],$$
where the inequality uses the Cauchy-Schwarz inequality.
* More generally, if $\bz \sim \calN_n(\mathbold 0, \bSigma)$ and $\bff: \R^n \to \R^n$, then
\begin{align*}
\E [(\bz^\top \bff(\bz) - \trace(\bSigma \nabla\bff(\bz)))^2]
&= \E \big[\norm{\bSigma^{\frac12}\bff(\bz)}^2 + \trace[(\bSigma \nabla\bff(\bz))^2]\big]\\
&\le \E \big[\norm{\bSigma^{\frac12}\bff(\bz)}^2 + \fnorm{\bSigma \nabla\bff(\bz)}^2\big],
\end{align*}
where the inequality uses the Cauchy-Schwarz inequality.
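A quick Monte Carlo sanity check of the scalar identity, with $f(z)=z^2$, for which both sides equal $\E z^6 - 4\E z^4 + 4\E z^2 = 15 - 12 + 4 = 7$:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(2_000_000)

f, fprime = z**2, 2 * z
lhs = np.mean((z * f - fprime) ** 2)        # E[(z f(z) - f'(z))^2]
rhs = np.mean(f**2) + np.mean(fprime**2)    # E[f(z)^2] + E[f'(z)^2]
# both quantities should be close to the exact common value 7
```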
The following inequalities hold provided the right-hand side derivatives exist a.e. and the expectations are finite.
* $z \sim \calN(0, 1)$, $f: \R \to \R$, then
$$\text{Var} [f(z)] \le \E [(f'(z))^2].$$
* $\bz \sim \calN_n(\mathbold 0, \bI_n)$, $f: \R^n \to \R$, then
$$\text{Var} [f(\bz)] \le \E [\norm{\nabla f(\bz)}^2].$$
* $\bz \sim \calN_n(\mathbold 0, \bI_n)$, $\bff: \R^n \to \R^m$, then
$$\E[\norm{\bff(\bz) - \E[\bff(\bz)]}^2] \le \E [\fnorm{\nabla \bff(\bz)}^2].$$
* More generally, for $\bz \sim \calN_n(\mathbold 0, \bSigma)$, $\bff: \R^n \to \R^m$, then
$$\E[\norm{\bff(\bz) - \E[\bff(\bz)]}^2] \le \E [\fnorm{\bSigma^{\frac12}\nabla \bff(\bz)}^2].$$
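Similarly, the multivariate Gaussian-Poincaré inequality can be checked with $f(\bz)=\norm{\bz}^2$, for which $\text{Var}[f(\bz)] = 2n$ while $\E\norm{\nabla f(\bz)}^2 = \E\norm{2\bz}^2 = 4n$, so the inequality holds with a factor-2 gap:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 1_000_000
z = rng.standard_normal((m, n))           # m samples of N_n(0, I_n)

f = (z**2).sum(axis=1)                    # f(z) = ||z||^2, so grad f(z) = 2z
var_f = f.var()                           # exact value: 2n = 10
bound = (4 * z**2).sum(axis=1).mean()     # E ||grad f||^2, exact value: 4n = 20
```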
Now we present a few important lemmas, whose proofs are based on <Ref> and <Ref>.
Assume that Assumption <ref> holds. For fixed $\bX$, we have
\begin{equation*}
\E\Bigl[
\fnorm{\bE^\top \bF/\tD - \bS (n\bI_T - \hbA )/\tD}^2\Bigr]
\le 4\trace(\bS),
\end{equation*}
where $\tD = \big(\fnorm*{\bF}^2 + n \trace(\bS)\big)^{\frac 12}$.
Let $\bU, \bV:\R^{n\times p} \to \R^{n\times T}$ be two locally Lipschitz functions of $\bZ$ with $\calN(0,1)$ entries, then
\begin{align*}
&\E\Big[\norm[\Big]{\bU^\top \bZ \bV -
\sum_{j=1}^p\sum_{i=1}^n
\frac{\partial}{\partial z_{ij} }\Bigl(\bU^\top \be_i \be_j^\top \bV \Bigr)
}_{\rm F}^2\Big]\\
\le~& \E \fnorm*{\bU}^2 \fnorm*{\bV}^2+ \E \sum_{ij}\Big[
2\fnorm*{\bV}^2\fnorm*{ \frac{\partial \bU}{\partial z_{ij}} }^2
+ 2\fnorm*{\bU}^2\fnorm*{ \frac{\partial \bV}{\partial z_{ij}} }^2\Big].
\end{align*}
Assume the same setting as <Ref>.
Suppose that on some open set $\Omega\subset \R^{n\times p}$ with $\P(\Omega^c)\le C/T$ for some constant $C$, (i) $\bU$ is $n^{-1/2} L_1$-Lipschitz and $\fnorm{\bU}\le 1$, and (ii) $\bV$ is $n^{-1/2}L_2$-Lipschitz and $\fnorm{\bV}\le K$. Then
\begin{align*}
&\E\Big[I(\Omega) \Big\|\bU^\top \bZ \bV -
\sum_{j=1}^p\sum_{i=1}^n
\frac{\partial}{\partial z_{ij} }\Bigl(\bU^\top \be_i \be_j^\top \bV \Bigr)
\Big\|_{\rm F}^2\Big]\\
\le~& K^2 + 2C( K^2 L_1^2 + L_2^2 ) + 2\E \Big[I(\Omega)
\sum_{ij}\Big( K^2\fnorm*{\frac{\partial \bU}{\partial z_{ij}} }^2 + \fnorm*{\frac{\partial \bV}{\partial z_{ij}} }^2\Big) \Big].
\end{align*}
Let $\bU,\bV: \R^{n\times p} \to \R^{n\times T}$
be two locally Lipschitz
functions of $\bZ$ with $\calN(0,1)$ entries.
Assume also that $\fnorm{\bU} \vee \fnorm{\bV} \le 1$ almost surely. Then
\begin{align*}
&\E\fnorm[\Big]{
p \bU^\top \bV - \sum_{j=1}^p
\Bigl(\sum_{i=1}^n \partial_{ij} \bU^\top \be_i - \bU^\top \bZ \be_j\Bigr)
\Bigl(\sum_{i=1}^n \partial_{ij} \be_i^\top \bV - \be_j^\top \bZ^\top \bV\Bigr)}
\\
\le~&
2 \|\bU\|_\partial
\|\bV\|_\partial
\sqrt p
\bigl(
\sqrt 2 + (3+\sqrt{2})(\|\bU\|_{\partial} + \|\bV\|_{\partial})
\bigr),
\end{align*}
where $ \partial_{ij} \bU \defas \partial \bU /\partial z_{ij}$,
and $\|\bU\|_\partial \defas \E[\sum_{i=1}^n\sum_{j=1}^p
\fnorm{ \partial_{ij} \bU}^2]^{\frac 12}$.
Suppose that <Ref> holds. Let $\bQ_1 = \frac{\frac{1}{n}\big(\bF^\top\bF + \bH^\top\bZ^\top\bF - \bS(n\bI_T - \hbA) \big)}{\fnorm{\bS^{\frac 12}}(\fnorm*{\bF}^2/n + \trace(\bS))^{\frac 12} n^{-\frac 12} }$,
then $\E [\fnorm*{\bQ_1}^2] \le 4$.
Suppose that <Ref> hold.
\[\bQ_2 = \frac{\frac{1}{n^2} \big( \bF^\top \bZ\bZ^\top \bF - \bF^\top\bF (p\bI_T - \hbA)+ (n\bI_T -\hbA) \bH^\top\bZ^\top\bF \big)}{(\fnorm{\bH}^2 + \fnorm*{\bF}^2/n)n^{-\frac 12}},
\]
then $\E [\fnorm*{\bQ_2}^2] \le C(\tau') (T\wedge (1+\frac{p}{n})) (1 + \frac pn)$ under <Ref>(i), and
$\E [I(\Omega)\fnorm*{\bQ_2}^2] \le C(\gamma, c)$ under <Ref>(ii) for some set $\Omega$ with $\P(\Omega)\to 1$.
Suppose that <Ref> hold.
Let $\Xi= (n\bI_T -\hbA)\bH^\top\bZ^\top\bF$, and
\[
\bQ_3 = \frac{\frac{1}{n^2} \big( p\bF^\top \bF -\bF^\top \bZ\bZ^\top \bF - (n\bI_T -\hbA)\bH^\top\bH (n\bI_T -\hbA) - \Xi - \Xi^\top\big)}{(\fnorm{\bH}^2 + \fnorm*{\bF}^2/n)n^{-\frac 12}},
\]
then $\E [\fnorm*{\bQ_3}] \le C(\gamma,\tau')$ under <Ref>(i), and $\E [I(\Omega)\fnorm*{\bQ_3}] \le C(\gamma, c)$ under <Ref>(ii) for some set $\Omega$ with $\P(\Omega)\to 1$.
§ PROOFS OF MAIN RESULTS
In this appendix, we provide proofs of the theoretical results in <Ref> of the full paper and <Ref> of this supplement.
§.§ Proof of Proposition prop:oracle
With $\bS=\sum_{t=1}^T \sigma_t^2 \bu_t \bu_t^\top$
the spectral decomposition of $\bS$,
we have $\fnorm{\bE^\top\bE-n\bS}^2
= \sum_{t=1}^T \sum_{t'=1}^T [\bu_{t'}^\top(\bE^\top\bE-n\bS) \bu_t]^2$.
We now compute the expectation of one term
indexed by $(t,t')$.
The random variable $\bu_{t'}^T(\bE^\top\bE-n\bS) \bu_t$
is the sum of $n$ mean zero random variables
with the same distribution as $z_{t'} z_{t}-\bu_{t'}^T\bS\bu_t$
where $(z_t,z_{t'})\sim \calN_2(\mathbf{0},\diag(\sigma_t^2,\sigma_{t'}^2))$. Thus
$$\E[(\bu_{t'}^\top(\bE^\top\bE-n\bS) \bu_t)^2]
= n\,\text{Var}[z_{t'} z_{t}-\bu_{t'}^\top\bS\bu_t]
= 2n\sigma_t^4 I_{t=t'} + n\sigma_t^2\sigma_{t'}^2 I_{t\ne t'},$$
due to $\text{Var}[\chi^{2}_1]=2$
if $t=t'$ and independence if $t\ne t'$.
Summing over all $(t,t')\in[T] \times [T]$ gives
$$\E\fnorm{\bE^\top\bE-n\bS}^2
= 2n\sum_{t=1}^T \sigma_t^4
+ n \sum_{t\ne t'}\sigma_t^2\sigma_{t'}^2
= n\sum_{t=1}^T \sigma_t^4 + n\Big(\sum_{t=1}^T \sigma_t^2\Big)^2
= n \fnorm{\bS}^2
+ n [\trace(\bS)]^2,$$
as desired.
The inequality simply follows from
$\fnorm{\bS}^2\le [\trace(\bS)]^2$ since $\bS$ is positive semi-definite.
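The identity just proved, $\E\fnorm{\bE^\top\bE - n\bS}^2 = n\fnorm{\bS}^2 + n[\trace(\bS)]^2$, is easy to confirm by simulation; with $\bS = \operatorname{diag}(1,2)$ the exact value is $n(5+9) = 14n$ (the sizes below are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 40, 8000
S = np.diag([1.0, 2.0])
sqrtS = np.sqrt(S)                               # S^{1/2} for a diagonal S

vals = np.empty(reps)
for r in range(reps):
    E = rng.standard_normal((n, 2)) @ sqrtS      # rows ~ N_2(0, S)
    vals[r] = np.sum((E.T @ E - n * S) ** 2)     # squared Frobenius norm

exact = n * (np.sum(S**2) + np.trace(S) ** 2)    # n ||S||_F^2 + n (tr S)^2 = 560
```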
§.§ Proof of Proposition prop:mom
Without loss of generality, we assume $\bSigma = \bI_p$. For a general positive definite $\bSigma$, the proof follows by replacing $(\bX, \bB^*)$ with $(\bX \bSigma^{-\frac12}, \bSigma^{\frac12}\bB^*)$.
We first derive the method-of-moments estimator $\hbS_{\rm{(mm)}}$. Under <Ref> with $\bSigma = \bI_p$, $\bX$ has rows from $\calN_p(\mathbf 0, \bI_p)$, $\bE$ has rows from $\calN_T(\bf0, \bS)$, and $\bX$ and $\bE$ are independent. Then, the expectations of $\bY^\top \bY$ and $\bY^\top\bX \bX^\top\bY$ are given by
\begin{align}\label{eq:yy}
\E (\bY^\top \bY) &= \E \big[(\bX\bB^* + \bE)^\top (\bX\bB^* + \bE)\big] = n (\bB^{*\top}\bB^* + \bS),
\end{align}
\begin{align}\label{eq:yxxy}
\E (\bY^\top\bX\bX^\top\bY ) &= \E \big[(\bX\bB^* + \bE)^\top \bX \bX^\top (\bX\bB^* + \bE)\big]\notag\\
&= \E (\bB^{*\top}\bX^\top\bX \bX^\top\bX\bB^*) + \E (\bE^\top\bX \bX^\top\bE)\notag\\
&= \bB^{*\top}\E(\bX^\top\bX \bX^\top\bX)\bB^* + \E (\bE^\top\bX \bX^\top\bE)\notag\\
&= n(n+p+1) \bB^{*\top}\bB^* + np\bS,
\end{align}
where the last line uses
\begin{align*}
&\E (\bX^\top\bX \bX^\top\bX) \\
=~& \E \Big[\sum_{i=1}^n (\bx_i\bx_i^\top) \sum_{l=1}^n(\bx_l\bx_l^\top)\Big]\\
=~& \sum_{i\ne l}\E(\bx_i\bx_i^\top \bx_l\bx_l^\top) + \sum_{i= l} \E (\bx_i\bx_i^\top \bx_l\bx_l^\top)\\
=~& n(n-1) \bI_p^2 + n\E (\bx_1\bx_1^\top \bx_1\bx_1^\top)\\
=~& n(n-1) \bI_p + n [2\bI_p^2 + \trace(\bI_p) \bI_p]\\
=~& n(n+p+1) \bI_p,
\end{align*}
\begin{align*}
\E (\bE^\top\bX \bX^\top\bE)
= \E\big[ \E (\bE^\top\bX \bX^\top\bE |\bE)\big]
= \E\big[ \bE^\top \E(\bX \bX^\top)\bE\big]
= np \bS.
%&= \E\big[ \bE^\top \trace(\bI_p) \bI_n \bE \big]\\
\end{align*}
Solving for $\bS$ from the system of equations (<ref>) and (<ref>), we obtain the method-of-moments estimator
\begin{align*}
\hbS_{\rm{(mm)}} = \frac{(n+p+1)}{n(n+1)} \bY^\top \bY - \frac{1}{n(n+1)} \bY^\top\bX \bX^\top\bY,
\end{align*}
and $\E [\hbS_{\rm{(mm)}} ]= \bS$.
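Unbiasedness of $\hbS_{\rm{(mm)}}$ can be verified by simulation (with $\bSigma = \bI_p$ as in the proof; the sizes and the fixed coefficient matrix below are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, T, reps = 50, 20, 3, 4000
S = np.diag([1.0, 2.0, 3.0])
B = rng.standard_normal((p, T)) / np.sqrt(p)      # an arbitrary fixed coefficient matrix

acc = np.zeros((T, T))
for _ in range(reps):
    X = rng.standard_normal((n, p))               # rows ~ N_p(0, I_p)
    E = rng.standard_normal((n, T)) @ np.sqrt(S)  # rows ~ N_T(0, S), S diagonal
    Y = X @ B + E
    G = X.T @ Y
    # hat S_mm = (n+p+1)/(n(n+1)) Y^T Y - 1/(n(n+1)) Y^T X X^T Y
    acc += ((n + p + 1) * (Y.T @ Y) - G.T @ G) / (n * (n + 1))

S_mm_mean = acc / reps                            # should approximate S
```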
Now we derive the variance lower bound for $\hbS_{\rm{(mm)}}$. Since $\E[\hbS_{\rm{(mm)}} ] = \bS$, $\E \big[\fnorm{\hbS_{\rm{(mm)}} - \bS}^2\big] = \sum_{t, t'} \text{Var}\big\{[\hbS_{\rm{(mm)}} ]_{t,t'}\big\}.$ By definition of $\hbS_{\rm{(mm)}} $,
\begin{align*}
[\hbS_{\rm{(mm)}} ]_{t,t'} = \frac{n+p+1}{n(n+1)} [\by^{(t)}]^\top \by^{(t')} - \frac{1}{n(n+1)}[\by^{(t)}]^\top \bX\bSigma^{-1}\bX^\top\by^{(t')}.
\end{align*}
Since $\by^{(t)} = \bX \bbeta^{(t)} + \bep^{(t)},\quad \by^{(t')} = \bX \bbeta^{(t')} + \bep^{(t')},$ for $t\ne t'$,
without loss of generality, we assume $\bbeta^{(t)} = a_0\be_1$ and $\bbeta^{(t')} = a_1\be_1 + a_2\be_2$ for some constants $a_0, a_1, a_2$.
If necessary, we can let $\bu_1 = \bbeta^{(t)}/\norm{\bbeta^{(t)}}$ and $\bu_2 = \tbu_2/\norm{\tbu_2}$, where $\tbu_2 =\bbeta^{(t')} - \bP_{\bu_1}\bbeta^{(t')}$, and complete the basis to obtain an orthonormal basis $\{\bu_1, \bu_2, \ldots, \bu_p\}$ of $\R^p$. Let $\bU =[\bu_1, \bu_2, \ldots, \bu_p]$; then $\bU$ is an orthogonal matrix, so $\bX\bU$ and $\bX$ have the same distribution, only the first coordinate of $\bU^\top\bbeta^{(t)}$ is nonzero, and only the first two coordinates of $\bU^\top\bbeta^{(t')}$ are nonzero. That is, we may perform a change of variables, replacing $(\bX, \bbeta^{(t)}, \bbeta^{(t')})$ with $(\bX\bU, \bU^\top\bbeta^{(t)}, \bU^\top\bbeta^{(t')})$.
Therefore, $\by^{(t)}$ and $\by^{(t')}$ are independent of $\{\bX\be_j: 3\le j\le p\}$.
Let $\calF = \sigma(\by^{(t)}, \by^{(t')}, \bX\be_1, \bX\be_2)$ be the $\sigma$-field generated by $(\by^{(t)}, \by^{(t')}, \bX\be_1, \bX\be_2)$; then
\begin{align*}
\text{Var}\big\{[\hbS_{\rm{(mm)}} ]_{t,t'}\big\}
&\ge \E\big[ \text{Var}\big\{[\hbS_{\rm{(mm)}} ]_{t,t'}|\calF\big\}\big]= \frac{1}{n^2(n+1)^2} \E\big[ \text{Var}\big\{ [\by^{(t)}]^\top \bX\bX^\top\by^{(t')}|\calF\big\}\big].
\end{align*}
Note that in the above display,
\begin{align*}
&[\by^{(t)}]^\top \bX\bX^\top\by^{(t')}
= \sum_{j=1}^2[\by^{(t)}]^\top \bX\be_j\be_j^\top\bX^\top\by^{(t')}
+ \sum_{j=3}^p[\by^{(t)}]^\top \bX\be_j\be_j^\top\bX^\top\by^{(t')},
\end{align*}
where the first term is measurable with respect to $\calF$, and the second term is a quadratic form
\begin{align*}
&\sum_{j=3}^p[\by^{(t)}]^\top \bX\be_j\be_j^\top\bX^\top\by^{(t')} = \sum_{j=3}^p \be_j^\top \bX^\top\by^{(t')} [\by^{(t)}]^\top \bX \be_j = \bxi^\top \bLambda \bxi,
\end{align*}
here $\bxi = [\be_3^\top\bX^\top, \ldots, \be_p^\top\bX^\top]^\top\sim \calN(\mathbold{0}, \bI_{n(p-2)})$, and $\bLambda = \bI_{p-2} \otimes \by^{(t')} [\by^{(t)}]^\top$.
Thus, for $t\ne t'$,
\begin{align*}
\text{Var}\big\{[\hbS_{\rm{(mm)}} ]_{t,t'}\big\}
\ge~& \frac{1}{n^2(n+1)^2} \E \Big\{\text{Var}\big\{ \bxi^\top \bLambda \bxi|\calF\big\}\Big\}\\
=~& \frac{1}{n^2(n+1)^2} \E \Big\{ \fnorm{\bLambda}^2 + \trace(\bLambda^2)\Big\}\\
\ge~& \frac{1}{n^2(n+1)^2} \E [ \fnorm{\bLambda}^2 ]\\
=~& \frac{p-2}{n^2(n+1)^2} \E[\norm{\by^{(t)}}^2\norm{\by^{(t')}}^2].
\end{align*}
For $t=t'$, using a similar argument we obtain
\begin{align*}
\text{Var}\big\{[\hbS_{\rm{(mm)}} ]_{t,t'}\big\}
\ge~ \frac{p-1}{n^2(n+1)^2} \E[\norm{\by^{(t)}}^2\norm{\by^{(t')}}^2].
\end{align*}
Summing over all $(t,t')\in [T]\times [T]$ yields
\begin{align*}
\E \big[\fnorm{\hbS_{\rm{(mm)}} - \bS}^2\big]
&\ge \frac{p-2}{n^2(n+1)^2} \sum_{t,t'}\E[\norm{\by^{(t)}}^2\norm{\by^{(t')}}^2]\\
&= \frac{p-2}{n^2(n+1)^2} \E [\fnorm{\bY}^4]\\
&\ge \frac{p-2}{n^2(n+1)^2} (\E [\fnorm{\bY}^2])^2\\
&= \frac{p-2}{(n+1)^2} [\trace(\bS) + \norm{\bB^*}^2]^2.
\end{align*}
§.§ Proof of Theorem thm:covariance
Recall definition of $\hbS$ in <Ref>, and let $\bQ_1$, $\bQ_2$ be defined as in <Ref>. With $\bZ = \bX\bSigma^{-1/2}$, we obtain
\begin{align*}
&n^2\Big[\bQ_2 (\fnorm{\bH}^2 + \fnorm*{\bF}^2/n)n^{-\frac 12} - n^{-1} (n\bI_T - \hbA) \bQ_1 \big(\fnorm{\bS^{\frac 12}}\big(\fnorm*{\bF}^2/n + \trace(\bS)\big)^{\frac 12} n^{-\frac 12}\big) \Big]\\
=~& \big( \bF^\top \bZ\bZ^\top \bF - \bF^\top\bF(p\bI_T - \hbA) + (n\bI_T -\hbA)\bH^\top\bZ^\top\bF\big) \\
& - \big[(n\bI_T - \hbA) (\bF^\top\bF + \bH^\top\bZ^\top\bF- \bS (n\bI_T - \hbA))\big]\\
=~& \big( \bF^\top \bZ\bZ^\top \bF - \bF^\top\bF (p\bI_T - \hbA)\big) - (n\bI_T - \hbA) (\bF^\top\bF - \bS (n\bI_T - \hbA))\\
=~& \bF^\top \bZ\bZ^\top \bF + \bF^\top\bF\hbA + \hbA\bF^\top\bF -(n+p)\bF^\top\bF + (n\bI_T - \hbA) \bS (n\bI_T - \hbA)
\\
=~& (n\bI_T - \hbA) \bS (n\bI_T - \hbA) + \bF^\top\bF\hbA + \hbA\bF^\top\bF - \bF^\top((n+p) \bI_T - \bZ\bZ^\top )\bF\\
=~& (n\bI_T - \hbA) \bS (n\bI_T - \hbA) - (n\bI_T - \hbA) \hbS (n\bI_T - \hbA)\\
=~& (n\bI_T - \hbA) (\bS - \hbS) (n\bI_T - \hbA).
\end{align*}
Therefore, by the triangle inequality and $\opnorm*{\bI_T - \hbA/n}\le 1$ in <Ref>,
\begin{align*}
&\fnorm[\big]{(\bI_T - \hbA/n) (\bS - \hbS) (\bI_T - \hbA/n)}\\
\le~& \fnorm*{\bQ_2}n^{-\frac 12} (\fnorm{\bH}^2 + \fnorm*{\bF}^2/n) +\opnorm*{\bI_T - \hbA/n}\fnorm*{\bQ_1}n^{-\frac 12} \fnorm{\bS^{\frac 12}}\big(\fnorm*{\bF}^2/n + \trace(\bS)\big)^{\frac 12} \\
\le~& \fnorm*{\bQ_2}n^{-\frac 12} (\fnorm{\bH}^2 + \fnorm*{\bF}^2/n) +\fnorm*{\bQ_1}n^{-\frac 12} \fnorm{\bS^{\frac 12}}\big(\fnorm*{\bF}^2/n + \trace(\bS)\big)^{\frac 12} \\
\le~& \fnorm*{\bQ_2}n^{-\frac 12} (\fnorm{\bH}^2 + \fnorm*{\bF}^2/n) +\fnorm*{\bQ_1}n^{-\frac 12} \frac12 \big[\trace(\bS)+ \big(\fnorm*{\bF}^2/n + \trace(\bS)\big)\big] \\
\le~ & (\fnorm*{\bQ_2} + \fnorm*{\bQ_1}) n^{-\frac 12} (\fnorm{\bH}^2 + \fnorm*{\bF}^2/n + \trace(\bS)).
\end{align*}
That is,
\begin{align*}
\fnorm[\big]{(\bI_T - \hbA/n) (\bS -\hbS)(\bI_T - \hbA/n)} \le \Theta_1 n^{-\frac 12} (\fnorm{\bH}^2 + \fnorm*{\bF}^2/n + \trace(\bS)),
\end{align*}
where $\Theta_1 = \fnorm*{\bQ_1} + \fnorm*{\bQ_2}$.
Note that we have $\E[ \fnorm*{\bQ_1}^2] \le 4$ from <Ref>.
By <Ref>, we have
(1) Under <Ref>(i), $\E [\fnorm*{\bQ_2}^2] \le C(\tau')(T \wedge (1 + \frac pn))(1 + \frac pn)$. Thus,
\begin{align*}
\E [\Theta_1^2] \le 2\E[\fnorm*{\bQ_1}^2 +\fnorm*{\bQ_2}^2]
&\le 2 [4 + C(\tau')(T \wedge (1 + \frac pn))(1 + \frac pn)]\\
&\le C(\tau')(T \wedge (1 + \frac pn))(1 + \frac pn).
\end{align*}
(2) Under <Ref>(ii), $\E [I(\Omega)\fnorm*{\bQ_2}^2] \le C(\gamma, c)$ with $\P(\Omega)\to 1$. Thus,
$\E [I(\Omega)\Theta_1^2] \le 2\E[\fnorm*{\bQ_1}^2 +I(\Omega)\fnorm*{\bQ_2}^2]\le C(\gamma, c).$
§.§ Proof of Theorem thm:generalization error
From the definitions of $\bQ_1, \bQ_2, \bQ_3$ in <Ref>, we have
\begin{align*}
&(\bQ_2^\top + \bQ_3) n^{-\frac 12} (\fnorm{\bH}^2 + \fnorm*{\bF}^2/n) + (\bI_T - \hbA/n) \bQ_1 n^{-\frac 12}\fnorm{\bS^{\frac 12}}\big(\fnorm*{\bF}^2/n + \trace(\bS)\big)^{\frac 12}\\
=~& \frac{1}{n} \bF^\top \bF - (\bI_T - \hbA/n) (\bH^\top\bH + \bS) (\bI_T - \hbA/n)\\
=~& (\bI_T - \hbA/n)\big[n^{-1}(\bI_T - \hbA/n)^{-1} \bF^\top \bF (\bI_T - \hbA/n)^{-1} - (\bH^\top\bH + \bS)\big](\bI_T - \hbA/n)\\
=~& (\bI_T - \hbA/n) (\hbR - \bR) (\bI_T - \hbA/n),
\end{align*}
where $\hbR \defas n^{-1}(\bI_T - \hbA/n)^{-1}\bF^\top \bF (\bI_T - \hbA/n)^{-1}$, and $\bR \defas \bH^\top\bH + \bS$.
Therefore, by the triangle inequality and $\opnorm*{\bI_T - \hbA/n}\le 1$ from <Ref>,
\begin{align*}
&\fnorm[\big]{(\bI_T - \hbA/n) (\hbR - \bR) (\bI_T - \hbA/n)} \\
\le~& (\fnorm*{\bQ_2} + \fnorm*{\bQ_3}) n^{-\frac 12} (\fnorm{\bH}^2 + \fnorm*{\bF}^2/n) + \fnorm*{\bQ_1}n^{-\frac 12} \fnorm{\bS^{\frac 12}}\big(\fnorm*{\bF}^2/n + \trace(\bS)\big)^{\frac 12} \\
\le~& (\fnorm*{\bQ_2} + \fnorm*{\bQ_3} + \fnorm*{\bQ_1}) n^{-\frac 12} (\fnorm*{\bF}^2/n +\fnorm{\bH}^2 + \trace(\bS))\\
=~& \Theta_2 n^{-\frac 12} (\fnorm*{\bF}^2/n +\fnorm{\bH}^2 + \trace(\bS)),
\end{align*}
where $\Theta_2 = \fnorm*{\bQ_1} + \fnorm*{\bQ_2} + \fnorm*{\bQ_3}$.
By <Ref>, we obtain
$\E [\Theta_2] \le C(\gamma, \tau')$ under <Ref>(i), and
$\E [I(\Omega)\Theta_2] \le C(\gamma, c)$ with $\P(\Omega)\to1$ under <Ref>(ii).
Furthermore, since $\Theta_2 = O_P(1)$, and $\opnorm*{(\bI_T - \hbA/n)^{-1}} = O_P(1)$ from <Ref>,
\begin{align*}
\fnorm{\hbR - \bR} &\le \opnorm*{(\bI_T - \hbA/n)^{-1}}^2 \Theta_2 n^{-\frac 12} (\fnorm*{\bF}^2/n +\fnorm{\bH}^2 + \trace(\bS))\\
&= O_P(n^{-\frac 12}) (\fnorm*{\bF}^2/n +\fnorm{\bH}^2 + \trace(\bS)).
\end{align*}
Since $\frac{1}{n} \bF^\top \bF = (\bI_T - \hbA/n) \hbR (\bI_T - \hbA/n)$, taking the trace of both sides gives $\frac 1n \fnorm*{\bF}^2 \le \norm{\hbR}_*$ thanks to $\opnorm{(\bI_T - \hbA/n)} \le1$. Since $\norm{\bR}_* = \fnorm{\bH}^2 + \trace(\bS)$ by definition of $\bR$, we obtain
\begin{align}\label{eq: R-R}
\fnorm{\hbR - \bR} &\le O_P(n^{-\frac 12}) (\norm{\hbR}_* + \norm{\bR}_*).
\end{align}
Since $\hbR$ and $\bR$ are both $T\times T$ positive semi-definite matrices, the difference $\hbR - \bR$ has rank at most $2T$, so
\begin{align*}
&\big|\|\hbR\|_* - \|\bR\|_*\big| \le \|\hbR - \bR\|_*
\le \sqrt{2T}\fnorm{\hbR - \bR}\\
\le~& O_P( (T/n)^{\frac 12}) (\norm{\hbR}_* + \norm{\bR}_*) = o_P(1) (\norm{\hbR}_* + \norm{\bR}_*),
\end{align*}
thanks to $T = o(n)$. That is,
\begin{align*}
\frac{\big|\|\hbR\|_* - \|\bR\|_*\big|}{\|\hbR\|_*+ \|\bR\|_*} \le O_P( (T/n)^{\frac 12}),
\end{align*}
which implies
$\frac{\|\bR\|_*}{\|\hbR\|_*} -1 = O_P( (T/n)^{\frac 12})$, i.e.,
\[
\frac{\trace(\bS)+ \fnorm{\bH}^2}{\fnorm{(\bI_T - \hbA/n)^{-1}\bF^\top}^2/n} -1 = O_P( (T/n)^{\frac 12}) = o_P(1).
\]
§.§ Proof of Theorem thm:main
This proof is based on results of <Ref>.
We begin with the result of <Ref>,
\begin{equation*}
\frac{\trace(\bS)+ \fnorm{\bH}^2}{\fnorm{(\bI_T - \hbA/n)^{-1}\bF^\top}^2/n} \overset{p}\to 1.
\end{equation*}
In other words,
\[
\trace(\bS)+ \fnorm{\bH}^2 = (1 + o_P(1)) \fnorm{(\bI_T - \hbA/n)^{-1}\bF^\top}^2/n.
\]
Thus, the upper bound in <Ref> can be further bounded as follows:
\begin{align*}
&\fnorm{(\bI_T - \hbA/n) (\hbS -\bS)(\bI_T - \hbA/n)} \\
\le~&\Theta_1 n^{-\frac 12} ( \fnorm*{\bF}^2/n + \fnorm{\bH}^2 + \fnorm{\bS^{\frac 12}}^2) \\
\le~& \Theta_1 n^{-\frac 12} (\fnorm*{\bF}^2/n + (1 + o_P(1)) \fnorm{(\bI_T - \hbA/n)^{-1}\bF^\top}^2/n)\\
\le~& \Theta_1 n^{-\frac 12} \big(1 + (1 + o_P(1)) \opnorm{(\bI_T -\hbA/n)^{-1}}^2\big)\fnorm*{\bF}^2/n\\
=~& O_P(n^{-\frac12}) \fnorm*{\bF}^2/n.
\end{align*}
Using $\opnorm{(\bI_T -\hbA/n)^{-1}}= O_P(1)$ again, it follows that
\begin{align}\label{eq: pf1}
\fnorm{\hbS -\bS} \le O_P(n^{-\frac12}) \fnorm{\bF}^2/n.
\end{align}
A similar argument leads to
\begin{align}\label{eq: pf2}
\fnorm{\hbS -\bS} \le O_P(n^{-\frac12}) (\trace(\bS) + \fnorm{\bH}^2).
\end{align}
§.§ Proof of Corollary cor36
Under <Ref>(i) and <ref>, we proceed to bound $\fnorm{\bF}^2/n$ in terms of $\trace(\bS)$.
Let $L(\bB) = \frac{1}{2n}\fnorm*{\bY - \bX\bB }^2 + \lambda \norm{\bB}_{2,1} + \frac{\tau}{2} \fnorm{\bB}^2$ be the objective function in (<ref>), then $L(\hbB) \le L(\bf0)$ by definition of $\hbB$. Thus,
\begin{align*}
&\frac{1}{2n}\fnorm{\bF }^2 \le \frac{1}{2n}\fnorm{\bF }^2 + \lambda \norm{\hbB}_{2,1}
+ \frac{\tau}{2} \fnorm{\hbB}^2 \le\frac{1}{2n}\fnorm{\bY }^2.
\end{align*}
Now we bound $\frac1n\fnorm{\bY}^2$ by the Hanson--Wright inequality.
Since $\bY = \bX\bB^* + \bE$, the rows of $\bY$ are $\calN_T(\bf0, \bSigma_{\by})$ with $\bSigma_{\by} = (\bB^*)^\top \bSigma \bB^*+ \bS$, then $\vec(\bY^\top) \sim \calN(\bf0, \bI_n \otimes \bSigma_{\by})$, and $\bxi \defas [\bI_n \otimes \bSigma_{\by}]^{-\frac12} \vec(\bY^\top)\sim \calN(\mathbold{0}, \bI_{nT})$.
Since $\fnorm{\bY}^2 = [\vec(\bY^\top)]^\top \vec(\bY^\top) = \bxi^\top (\bI_n \otimes \bSigma_{\by}) \bxi$,
we apply the following variant of the Hanson--Wright inequality:
for $\bxi\sim \calN(\mathbold{0}, \bI_N)$ and a positive semi-definite matrix $\bA$,
\begin{align*}
\P(\bxi^\top \bA\bxi - \trace(\bA) \le 2 \sqrt{x}\fnorm{\bA} + 2x\opnorm{\bA}) \ge1 - \exp(-x).
\end{align*}
In our case, take $\bA = (\bI_n \otimes \bSigma_{\by})$, then $\trace(\bA) = n\trace(\bSigma_{\by})$, $\fnorm{\bA} = \sqrt{n}\fnorm{\bSigma_{\by}}\le \sqrt{n} \trace(\bSigma_{\by})$, $\opnorm{\bA} = \opnorm{\bSigma_{\by}}\le \trace(\bSigma_{\by})$,
thus with probability at least $1 - \exp(-x)$,
\begin{align*}
\fnorm{\bY}^2 - n\trace(\bSigma_{\by})
\le 2 \sqrt{nx}\trace(\bSigma_{\by}) + 2x \trace(\bSigma_{\by}).
\end{align*}
Take $x=n$, then with probability at least $1 - \exp(-n)$,
\begin{align*}
\fnorm{\bF}^2/n \le \fnorm{\bY}^2/n \le 5\trace(\bSigma_{\by}).
\end{align*}
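The Kronecker computations feeding into the Hanson--Wright bound can be sanity-checked numerically (a sketch with arbitrary dimensions):

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 5, 3
G = rng.normal(size=(T, T))
Sigma_y = G @ G.T + np.eye(T)   # stand-in for Sigma_y, positive definite

A = np.kron(np.eye(n), Sigma_y)

# trace(A) = n trace(Sigma_y)
assert np.isclose(np.trace(A), n * np.trace(Sigma_y))
# ||A||_F = sqrt(n) ||Sigma_y||_F, and ||Sigma_y||_F <= trace(Sigma_y) for PSD matrices
assert np.isclose(np.linalg.norm(A, 'fro'),
                  np.sqrt(n) * np.linalg.norm(Sigma_y, 'fro'))
assert np.linalg.norm(Sigma_y, 'fro') <= np.trace(Sigma_y) + 1e-12
# ||A||_op = ||Sigma_y||_op <= trace(Sigma_y)
assert np.isclose(np.linalg.norm(A, 2), np.linalg.norm(Sigma_y, 2))
assert np.linalg.norm(Sigma_y, 2) <= np.trace(Sigma_y) + 1e-12
```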
Thus, $\fnorm{\bF}^2/n = O_P(1)\trace(\bSigma_{\by}).$ Together with (<ref>), we obtain
\begin{equation*}
\fnorm{\hbS -\bS} \le O_P(n^{-\frac12}) \trace(\bSigma_{\by}).
\end{equation*}
Note that by <Ref>, $\trace(\bSigma_{\by}) = \fnorm{\bSigma^{\frac12} \bB^*}^2 + \trace(\bS) \le (1 + \mathfrak{snr})\trace(\bS).$
Therefore, we obtain
\begin{equation*}
\fnorm{\hbS -\bS} \le O_P(n^{-\frac12}) \trace(\bS).
\end{equation*}
Furthermore, since $\trace(\bS)\le \sqrt{T}\fnorm{\bS}$ and $T = o(n)$, we have
\begin{align*}
\fnorm{\hbS -\bS} \le O_P(\sqrt{ T/n}) \fnorm{\bS} = o_P(1) \fnorm{\bS}.
\end{align*}
Finally, since $\norm{\bS}_* = \trace(\bS)$, by the triangle inequality
\begin{equation*}
\big|\norm{\hbS}_* - \trace(\bS)\big|
\le \norm{\hbS - \bS}_*
\le \sqrt{T}\fnorm{\hbS - \bS}
\le O_P(\sqrt{T/n}) \trace(\bS)
= o_P(1) \trace(\bS).
\end{equation*}
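The norm comparisons used in this proof ($|\norm{\bA}_* - \norm{\bB}_*| \le \norm{\bA-\bB}_*$, $\norm{\bM}_* \le \sqrt{\rank(\bM)}\fnorm{\bM}$, and $\norm{\bS}_* = \trace(\bS)$ for positive semi-definite $\bS$) admit a quick numerical spot-check (arbitrary dimensions):

```python
import numpy as np

rng = np.random.default_rng(3)
T = 6
A = rng.normal(size=(T, T)); A = A @ A.T   # PSD stand-in for the estimate
B = rng.normal(size=(T, T)); B = B @ B.T   # PSD stand-in for the target
D = A - B

nuc = lambda M: np.linalg.norm(M, 'nuc')
fro = lambda M: np.linalg.norm(M, 'fro')

# reverse triangle inequality for the nuclear norm
assert abs(nuc(A) - nuc(B)) <= nuc(D) + 1e-9
# ||D||_* <= sqrt(rank(D)) ||D||_F
r = np.linalg.matrix_rank(D)
assert nuc(D) <= np.sqrt(r) * fro(D) + 1e-9
# for a PSD matrix, the nuclear norm equals the trace
assert np.isclose(nuc(B), np.trace(B))
```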
§.§ Proof of Corollary cor37
For $\tau=0$, by the optimality of $\hbB$ in (<ref>),
\begin{align*}
\frac{1}{2n}\fnorm{\bF}^2 + \lambda\|\hbB\|_{2,1}
\le
\frac{1}{2n}\fnorm{\bE}^2 + \lambda \|\bB^*\|_{2,1}.
\end{align*}
Since $\bF = \bE - \bX(\hbB - \bB^*)= \bE - \bZ\bH$, expanding the squares and rearranging terms yields
\begin{equation}\label{eq:boundH1}
\fnorm{\bZ\bH}^2 \le 2\langle \bE, \bZ\bH\rangle + 2n\lambda (\|\bB^*\|_{2,1} - \|\hbB\|_{2,1}) \le 2\langle \bE, \bZ\bH\rangle + 2n\lambda \|\hbB - \bB^*\|_{2,1}.
\end{equation}
From the assumptions in this corollary, $\hbB - \bB^*$ has at most $(1-c)n$ nonzero rows. Thus, in the event $U_2$, we have
\begin{align*}
n\eta \fnorm{\bH}^2 = n\eta \fnorm{\bSigma^{1/2}(\hbB - \bB^*)}^2\le \fnorm{\bX(\hbB - \bB^*)}^2 = \fnorm{\bZ\bH}^2.
\end{align*}
We bound the two terms on the right-hand side of (<ref>) by the Cauchy--Schwarz inequality:
\begin{align*}
\|\hbB - \bB^*\|_{2,1}\le \sqrt{(1-c)n} \fnorm{\hbB - \bB^*}\le \frac{\sqrt{(1-c)n}}{\sqrt{\phi_{\min}(\bSigma)}} \fnorm{\bH} \le \frac{\sqrt{1-c}}{\sqrt{\eta \phi_{\min}(\bSigma)}} \fnorm{\bZ\bH},
\end{align*}
and $ \langle \bE, \bZ\bH\rangle \le \fnorm{\bE} \fnorm{\bZ\bH} \le \fnorm{\bS^{\frac 12}} \opnorm{\bE\bS^{-\frac 12}} \fnorm{\bZ\bH}.$
Therefore, by canceling a factor $\fnorm{\bZ\bH}$ from both sides of (<ref>), we have
\begin{align*}
\sqrt{n\eta} \fnorm{\bH} \le \fnorm{\bZ\bH} \le 2 \fnorm{\bS^{\frac 12}} \opnorm{\bE\bS^{-\frac 12}} + \frac{2\sqrt{(1-c)}n\lambda}{\sqrt{\eta \phi_{\min}(\bSigma)}}.
\end{align*}
Using $(a + b)^2 \le 2a^2 + 2b^2$,
\begin{align*}
\fnorm{\bH}^2 \le \frac{8}{n\eta} \trace(\bS) \opnorm{\bE\bS^{-\frac 12}}^2 + \frac{8(1-c)n\lambda^2}{\eta^2 \phi_{\min}(\bSigma)}.
\end{align*}
Hence, using that $\lambda$ is of the form $\mu\sqrt{\trace(\bS)/n}$, we have
\begin{align}
&\trace(\bS) + \fnorm{\bH}^2 \\
\le~& (1 + 8 \eta^{-1} n^{-1}\opnorm{\bE\bS^{-\frac 12}}^2) \trace(\bS) + \frac{8(1-c)\mu^2}{\eta^2 \phi_{\min}(\bSigma)}\trace(\bS)\\
\le~& O_P(1) (1 + \mu^2)\trace(\bS),
\end{align}
where we used that $n^{-1}\opnorm{\bE\bS^{-\frac 12}}^2 = O_P(1)$ by <cit.> and $T = o(n)$.
Now, by <Ref>,
\begin{align*}
\fnorm{\hbS - \bS} \le O_P(n^{-\frac12}) [\trace(\bS) + \fnorm{\bH}^2]\le O_P(n^{-\frac12}) (1 + \mu^2)\trace(\bS),
\end{align*}
where the $O_P(\cdot) $ hides constants depending on $\gamma, c, \phi_{\min}(\bSigma)$ since $\eta$ is a constant that only depends on $\gamma, c$.
§.§ Proof of Theorem thm:out-of-sample
From the definitions of $\bQ_2, \bQ_3$ in <Ref>, we have
\begin{align*}
&\bQ_2 + \bQ_2^\top + \bQ_3\\
=~& \frac{n^{-2}\big( \bF^\top \bZ\bZ^\top \bF +
\hbA \bF^\top \bF + \bF^\top \bF \hbA -
p\bF^\top \bF - (n\bI_T -\hbA)\bH^\top\bH (n\bI_T -\hbA)\big)}{(\fnorm{\bH}^2 + \fnorm*{\bF}^2/n)n^{-\frac 12}}.
\end{align*}
Therefore,
\begin{align*}
&\fnorm[\big]{(\bI_T -\hbA/n)\bH^\top\bH (\bI_T -\hbA/n) - n^{-2} \big( \bF^\top \bZ\bZ^\top \bF + \hbA \bF^\top \bF + \bF^\top \bF \hbA -
p\bF^\top \bF\big)}\\
=~& \fnorm{\bQ_2 +\bQ_2^\top + \bQ_3} (\fnorm{\bH}^2 + \fnorm*{\bF}^2/n)n^{-\frac 12}\\
\le ~&\Theta_3(\fnorm{\bH}^2 + \fnorm*{\bF}^2/n)n^{-\frac 12},
\end{align*}
where $\Theta_3= 2\fnorm*{\bQ_2} + \fnorm*{\bQ_3}$. The conclusion thus follows by <Ref>.
§ PROOFS OF PRELIMINARY RESULTS
§.§ Proofs of results in Appendix sec:opnorm-bound
(i) For any $\bu\in \R^T$, by definition (<ref>),
\begin{align*}
\bu^\top\hbA\bu &= \trace\big[ (\bu^{\top} \otimes \bX_{\hat{\mathscr{S}}})
\bM^{\dagger}
(\bu \otimes \bX^\top_{\hat{\mathscr{S}}})\big]\\
&\le \trace\big[ (\bu^{\top} \otimes \bX_{\hat{\mathscr{S}}})
[\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + n\tau\bP_{\hat{\mathscr{S}}})^{\dagger}]
(\bu \otimes \bX_{\hat{\mathscr{S}}}^\top)\big]\\
&= \trace\big[
(\bu^{\top}\bI_T \bu) \otimes [\bX_{\hat{\mathscr{S}}}(\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + n\tau\bP_{\hat{\mathscr{S}}})^{\dagger}\bX_{\hat{\mathscr{S}}}^\top]\big]\\
&= \norm*{\bu}^2\trace[\bX_{\hat{\mathscr{S}}}(\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + n\tau\bP_{\hat{\mathscr{S}}})^{\dagger}\bX_{\hat{\mathscr{S}}}^\top]\\
&= \norm*{\bu}^2\trace[\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}(\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + n\tau\bP_{\hat{\mathscr{S}}})^{\dagger}].
\end{align*}
Let $r = \rank(\bX_{\hat{\mathscr{S}}})\le \min(n, |\hat{\mathscr{S}}|)$ be the rank of $\bX_{\hat{\mathscr{S}}}$, and $\hphi_1\ge\cdots\ge \hphi_{r}>0$ be the nonzero eigenvalues of $\frac 1n \bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}$. We have
\begin{align*}
\opnorm{\hbA/n}&\le \frac 1n \trace[\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}(\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + n\tau\bP_{\hat{\mathscr{S}}})^{\dagger}]\\
&= \frac 1n \trace[\frac 1n \bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}(\frac 1n \bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + \tau\bP_{\hat{\mathscr{S}}})^{\dagger}]\\
&\le \frac rn \opnorm[\Big]{\frac 1n \bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}(\frac 1n \bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + \tau\bP_{\hat{\mathscr{S}}})^{\dagger}}\\
&\le \frac{\hphi_1}{\hphi_1 + \tau}\le 1.
\end{align*}
Thus, $\opnorm{\bI - \hbA/n}\le1$ as $\hbA$ is positive semi-definite.
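A toy numerical check of part (i) (dimensions and $\tau$ chosen arbitrarily; restricted to the selected block, $\bP_{\hat{\mathscr{S}}}$ acts as the identity):

```python
import numpy as np

rng = np.random.default_rng(4)
n, s, tau = 8, 5, 0.3
X_S = rng.normal(size=(n, s))          # columns of X indexed by the selected set
G = X_S.T @ X_S                        # X_S^T X_S, positive definite a.s.
M = np.linalg.inv(G + n * tau * np.eye(s))

# trace[X_S (X_S^T X_S + n tau I)^{-1} X_S^T] = trace[X_S^T X_S (...)^{-1}]
assert np.isclose(np.trace(X_S @ M @ X_S.T), np.trace(G @ M))

# eigenvalues of G M are phi_i / (phi_i + tau) with phi_i the eigenvalues of G/n,
# so ||G M||_op <= phi_1 / (phi_1 + tau) <= 1 and trace(G M) <= min(n, s)
phi1 = np.linalg.eigvalsh(G / n).max()
assert np.linalg.norm(G @ M, 2) <= phi1 / (phi1 + tau) + 1e-9
assert np.trace(G @ M) <= min(n, s) + 1e-9
```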
(ii) Note that
\begin{align*}
\opnorm{(\bI_T - \hbA/n)^{-1}} = (1- \opnorm*{\hbA/n})^{-1} \le 1 + \frac{\hphi_1}{\tau},
\end{align*}
where
\begin{align*}
\hphi_1
= \opnorm{\frac{1}{n} \bX_{\hat{\mathscr{S}}}^\top \bX_{\hat{\mathscr{S}}}}
\le \opnorm{\frac{1}{n} \bX^\top \bX}\le \frac{1}{n}\opnorm{\bX\bSigma^{-\frac 12}}^2\opnorm{\bSigma}.
\end{align*}
(1) In the event $\{\opnorm{\bX\bSigma^{-\frac12}} < 2\sqrt{n}+\sqrt{p}\}$, we have
\begin{align*}
\opnorm{(\bI_T - \hbA/n)^{-1}} \le 1 + \tau^{-1} (2 + \sqrt{p/n})^2\opnorm{\bSigma} = 1 + (\tau')^{-1} (2 + \sqrt{p/n})^2.
\end{align*}
(2) Since $\E[\hphi_1] \le \E[n^{-1}\opnorm{\bX\bSigma^{-\frac 12}}^2\opnorm{\bSigma}] \le [(1 + \sqrt{p/n})^2 +n^{-1}]\opnorm{\bSigma}$ by (<ref>),
\begin{align*}
\E \opnorm{(\bI_T - \hbA/n)^{-1}}\le 1 + \tau^{-1} \E[\hphi_1] \le 1 + (\tau')^{-1} [(1 + \sqrt{p/n})^2 +n^{-1}].
\end{align*}
(i) For $\tau=0$, using the same argument as in the proof of <Ref>, we obtain
\begin{align*}
\bu^\top\hbA\bu
\le
\norm{\bu}^2\trace[\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}(\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}})^{\dagger}]
\le \norm{\bu}^2 |\hat{\mathscr{S}}|.
\end{align*}
Thus, in the event $U_1$, we have $\opnorm{\hbA}/n \le |\hat{\mathscr{S}}|/n\le (1-c)/2<1$, hence
$\opnorm{\bI_T - \hbA/n} \le 1$.
(ii) In the event $U_1$, we have
$\opnorm{(\bI_T - \hbA/n)^{-1}} = (1- \opnorm*{\hbA/n})^{-1}\le (1- (1-c)/2)^{-1}$. Furthermore, $\E [ I(U_1) \opnorm{(\bI_T - \hbA/n)^{-1}}] \le (1- (1-c)/2)^{-1}.$
Since $\bM^\dagger\preceq \bM_1^\dagger = \bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + \tau n \bP_{\hat{\mathscr{S}}})^\dagger$,
\begin{align*}
\opnorm{\bN} &= \opnorm{(\bI_T \otimes \bX)\bM^\dagger (\bI_T \otimes \bX^\top)}\\
&\le \opnorm{(\bI_T \otimes \bX)(\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + \tau n \bP_{\hat{\mathscr{S}}})^\dagger) (\bI_T \otimes \bX^\top)}\\
&=\opnorm{\bX_{\hat{\mathscr{S}}}(\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + \tau n \bP_{\hat{\mathscr{S}}})^\dagger \bX_{\hat{\mathscr{S}}}^\top}\\
&\le 1,
\end{align*}
where the first inequality uses $\opnorm{\bA\bB\bA^\top} \le \opnorm{\bA\bC\bA^\top}$ for $0\preceq \bB \preceq \bC$.
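The monotonicity fact used in the last step, $\opnorm{\bA\bB\bA^\top} \le \opnorm{\bA\bC\bA^\top}$ for $\mathbf{0}\preceq\bB\preceq\bC$, can be spot-checked numerically (arbitrary dimensions):

```python
import numpy as np

rng = np.random.default_rng(5)
m, k = 4, 6
A = rng.normal(size=(m, k))
B0 = rng.normal(size=(k, k)); B = B0 @ B0.T   # PSD
C = B + np.eye(k)                              # C - B = I >= 0, so 0 <= B <= C

# 0 <= B <= C implies A B A^T <= A C A^T, hence the operator norms compare
assert np.linalg.norm(A @ B @ A.T, 2) <= np.linalg.norm(A @ C @ A.T, 2) + 1e-9
```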
§.§ Proofs of results in Appendix sec:Lipschitz-fix-E
Fix $\bE$, and let $\bX,\bar\bX$ be two design matrices, with $\hbB, \bar\bB$ the two corresponding multi-task elastic net estimates.
Let $\bZ = \bX \bSigma^{-\frac 12}$,
$\bar\bZ = \bar\bX \bSigma^{-\frac 12}$,
$\bar\bH = \bSigma^{\frac12}(\bar\bB - \bB^*)$,
$\bar\bF = \bY - \bar\bX\bar\bB$,
and $\bar D = [\fnorm{\bar\bH}^2 + \fnorm{\bar\bF}^2/n]^{\frac12}$.
Without loss of generality, we assume $\bar D \le D$.
Recall the multi-task elastic net estimate $\hbB = \argmin_{\bB\in\R^{p\times T}}
\big( \frac{1}{2n}\fnorm{\bY - \bX\bB }^2 + g(\bB) \big)$, where $g(\bB) = \lambda \norm{\bB}_{2,1}
+ \frac{\tau}{2} \fnorm{\bB}^2$.
Define $\varphi:\bB\mapsto \frac 1{2n}
\fnorm{\bE+\bX(\bB^*-\bB)}^2 + g(\bB)$,
$\psi:\bB\mapsto \frac 1{2n} \fnorm{\bX(\hbB-\bB)}^2$
and $\zeta:\bB\mapsto \varphi(\bB) - \psi(\bB)$.
Expanding the squares, one sees that $\zeta$ is the sum of a linear function and a $\tau$-strongly convex penalty, thus $\zeta$ is $\tau$-strongly convex in $\bB$. Additivity of subdifferentials yields $\partial \varphi (\hbB) = \partial \zeta(\hbB) + \partial \psi(\hbB) = \partial \zeta(\hbB)$. By optimality of $\hbB$ we have ${\mathbf 0}_{p\times T}\in \partial \varphi(\hbB)$, thus ${\mathbf 0}_{p\times T}\in \partial \zeta(\hbB)$. By strong convexity of $\zeta$,
\begin{align*}
\zeta(\bar\bB) - \zeta(\hbB)
\ge \langle \partial \zeta(\hbB), \bar\bB - \hbB \rangle + \frac{\tau}{2} \fnorm{\bar\bB - \hbB}^2
= \frac{\tau}{2} \fnorm{\bar\bB - \hbB}^2,
\end{align*}
which can further be rewritten as
\begin{align*}
\fnorm{\bX (\hbB - \bar\bB)}^2 + n\tau \fnorm{\hbB -\bar\bB}^2
\le \fnorm{\bE - \bX (\bar\bB - \bB^*)}^2 - \fnorm{\bE - \bX (\hbB - \bB^*)}^2 + 2n(g(\bar\bB) - g(\hbB)),
\end{align*}
that is,
$$\fnorm{\bZ (\bH - \bar\bH)}^2 + n\tau \fnorm{\bSigma^{-\frac12}(\bH - \bar\bH)}^2
\le \fnorm{\bE - \bZ \bar\bH}^2 - \fnorm{\bE - \bZ \bH}^2 + 2n(g(\bar\bB) - g(\hbB)).$$
Summing the above inequality with its counterpart obtained by replacing $(\bX, \hbB, \bH)$ with $(\bar\bX, \bar\bB, \bar\bH)$, we have
\begin{align*}
&(LHS) \\
\defas~&\fnorm{\bZ (\bH - \bar\bH)}^2 + \fnorm{\bar\bZ (\bH - \bar\bH)}^2 + 2n\tau' \fnorm{\bH - \bar\bH}^2\\
\le~& \fnorm{\bZ (\bH - \bar\bH)}^2 + \fnorm{\bar\bZ (\bH - \bar\bH)}^2 + 2n\tau \fnorm{\bSigma^{-\frac12}(\bH - \bar\bH)}^2\\
\le~& \fnorm{\bE - \bZ \bar\bH}^2 - \fnorm{\bE - \bZ \bH}^2 + \fnorm{\bE - \bar\bZ \bH}^2 - \fnorm{\bE - \bar\bZ \bar\bH}^2\\
=~& \langle \bZ(\bH - \bar\bH),\bF+\bar\bF +(\bar\bZ- \bZ)\bar\bH\rangle
+ \langle -\bar\bZ(\bH - \bar\bH),\bF+\bar\bF +(\bZ-\bar\bZ)\bH\rangle\\
=~& \langle (\bZ-\bar\bZ)(\bH - \bar\bH),\bF+\bar\bF\rangle + \langle \bZ(\bH - \bar\bH),(\bar\bZ- \bZ)\bar\bH\rangle + \langle \bar\bZ(\bar\bH - \bH), (\bZ-\bar\bZ)\bH\rangle\\
\le~& \opnorm{\bZ-\bar\bZ}\fnorm{\bH - \bar\bH} (\fnorm{\bF} + \fnorm{\bar\bF})
+ \opnorm{\bZ-\bar\bZ}\fnorm{\bZ (\bH - \bar\bH)} \fnorm{\bar\bH}\\
&+ \opnorm{\bZ-\bar\bZ}\fnorm{\bar\bZ(\bH - \bar\bH)} \fnorm{\bH}\\
\le~& \opnorm{\bZ-\bar\bZ}
\Big[
\sqrt{\frac{(LHS)}{2n\tau'}} (\fnorm{\bF} + \fnorm{\bar\bF}) + \sqrt{(LHS)} (\fnorm{\bar\bH} + \fnorm{\bH})
\Big]\\
\le~& \opnorm{\bZ-\bar\bZ} \sqrt{(LHS)} (D + \bar D)\max(1, (2\tau')^{-\frac12})
\end{align*}
where $\tau' = \tau \phi_{\min}(\bSigma^{-1}) = \tau/\opnorm{\bSigma}$.
That is,
\begin{align*}
\sqrt{(LHS)}
\le
\opnorm{\bZ-\bar\bZ} 2D \max(1, (2\tau')^{-\frac12}).
\end{align*}
Then,
\begin{align*}
&n^{-\frac12}\fnorm{\bF - \bar\bF} = n^{-\frac12}\fnorm{\bZ\bH - \bar\bZ\bar\bH}\\
\le~& n^{-\frac12}[\fnorm{\bZ(\bH-\bar\bH)} + \fnorm{(\bZ- \bar\bZ)\bar\bH}]\\
\le~&n^{-\frac12}[\fnorm{\bZ(\bH-\bar\bH)} + \opnorm{\bZ- \bar\bZ}\fnorm{\bH}]\\
\le~&n^{-\frac12}[\sqrt{(LHS)}+ \opnorm{\bZ- \bar\bZ}D]\\
\le~&n^{-\frac12} \opnorm{\bZ- \bar\bZ} D[2 \max(1, (2\tau')^{-\frac12}) + 1].
\end{align*}
So far we obtained
\begin{align*}
\fnorm{\bH - \bar\bH}
&\le \sqrt{\frac{(LHS)}{2n\tau'}}\le
n^{-\frac12} \opnorm{\bZ-\bar\bZ}D (2\tau')^{-\frac12} 2\max(1, (2\tau')^{-\frac12}),\\
n^{-\frac12}\fnorm{\bF - \bar\bF}
&\le n^{-\frac12} \opnorm{\bZ- \bar\bZ} D[2\max(1, (2\tau')^{-\frac12}) + 1].
\end{align*}
Let $\bQ = [\bH^\top, \bF^\top/\sqrt{n}]^\top$ and $\bar \bQ = [\bar\bH^\top, \bar\bF^\top/\sqrt{n}]^\top$, then $D = \fnorm{\bQ}$, $\bar D = \fnorm{\bar \bQ}$. By the triangle inequality,
\begin{align*}
|D - \bar D|\le \fnorm{\bQ-\bar \bQ} \le~& \fnorm{\bH-\bar\bH} + \fnorm{\bF-\bar\bF}/\sqrt{n} \\
\le~ & n^{-\frac12}\opnorm{\bZ- \bar\bZ} D [4 \max(1, (2\tau')^{-1})],
\end{align*}
where the last inequality uses the elementary inequality $\max(a,b)(a+b)\le 2 [\max(a,b)]^2$ for $a,b>0$ with $a = 1, b = (2\tau')^{-\frac12}$.
Let $\frac{\partial D}{\partial \bZ}\defas \frac{\partial D}{\partial \vec(\bZ)} \in \R^{1\times np}$, then $\norm*{\frac{\partial D}{\partial \bZ}} \le n^{-\frac12}D L_1$ with $L_1 = [4 \max(1, (2\tau')^{-1})]$. Hence,
\begin{align*}
\sum_{ij} \Big(\frac{\partial D}{\partial z_{ij}}\Big)^2
= \norm*{\frac{\partial D}{\partial \bZ}}^2
\le n^{-1}D^2 L_1^2.
\end{align*}
Furthermore, by triangle inequality
\begin{align*}
\fnorm[\Big]{\frac{\bQ}{D} - \frac{\bar \bQ}{\bar D}}
& \le \frac1D \fnorm{\bQ-\bar \bQ} + \Big|\frac1D - \frac{1}{\bar D} \Big| \fnorm{\bar \bQ}\\
& = \frac1D \fnorm{\bQ-\bar \bQ} + \frac{|D-\bar D|}{D\bar D} \fnorm{\bar \bQ}\\
&\le \frac1D \fnorm{\bQ-\bar \bQ} + \frac{1}{D} \fnorm{\bQ - \bar \bQ}\\
&\le n^{-\frac12}\opnorm{\bZ- \bar\bZ} L,
\end{align*}
where $L = 8 \max(1, (2\tau')^{-1})$.
Therefore, when $\tau >0$, we obtain that the two mappings $\bZ \mapsto D^{-1}\bF/\sqrt{n}$ and $\bZ \mapsto D^{-1}\bH$ are both $n^{-\frac12} L$-Lipschitz with $L = 8 \max(1, (2\tau')^{-1})$, where $\tau' = \tau/\opnorm{\bSigma}$.
The proof of <Ref> uses a similar argument to the proof of <Ref>; we present it here for completeness.
For multi-task group Lasso ($\tau=0$), we restrict our analysis in the event $U_1\cap U_2$, where $U_1 = \big\{ \norm{\hbB}_0 \le n(1-c)/2 \big\}$, $U_2 = \big\{\inf_{\bb\in \R^p: \| \bb\|_0 \le (1-c)n} \|\bX \bb\|^2/(n \|\bSigma^{\frac 12} \bb\|^2) > \eta\big\}.$
Since the only randomness of the problem comes from $\bX$ and $\bE$, there exists a measurable set $\calU$ such that $U_1\cap U_2 =\{ (\bX, \bE)\in \calU\}$.
For a given noise matrix $\bE$, consider two design matrices $\bX,\bar\bX$ such that $(\bX, \bE)\in \calU$ and $(\bar\bX, \bE)\in \calU$.
We slightly abuse the notation and let $\hbB, \bar\bB$ denote the two corresponding multi-task group-Lasso estimates.
Thus, the row sparsity of $ \hbB-\bar\bB$ is at most $n(1-c)$.
Let $\bar\bH = \bSigma^{\frac12}(\bar\bB - \bB^*)$, $\bar\bF = \bY - \bar\bX\bar\bB$, and $\bar D = [\fnorm{\bar\bH}^2 + \fnorm{\bar\bF}^2/n]^{\frac12}$.
Without loss of generality, we assume $\bar D \le D$.
When $\tau=0$, the multi-task group Lasso estimate is $\hbB = \argmin_{\bB\in\R^{p\times T}}
\big( \frac{1}{2n}\fnorm{\bY - \bX\bB }^2 + g(\bB) \big)$, where $g(\bB) = \lambda \norm{\bB}_{2,1}$.
Define $\varphi:\bB\mapsto \frac 1{2n}
\fnorm{\bE+\bX(\bB^*-\bB)}^2 + g(\bB)$,
$\psi:\bB\mapsto \frac 1{2n} \fnorm{\bX(\hbB-\bB)}^2$
and $\zeta:\bB\mapsto \varphi(\bB) - \psi(\bB)$.
Under $\tau=0$, by the same arguments as in the proof of <ref> with the same functions $\varphi(\cdot), \psi(\cdot), \zeta(\cdot)$, we obtain
\begin{align*}
\fnorm{\bX (\hbB - \bar\bB)}^2
\le \fnorm{\bE - \bX (\bar\bB - \bB^*)}^2 - \fnorm{\bE - \bX (\hbB - \bB^*)}^2 + 2n(g(\bar\bB) - g(\hbB)).
\end{align*}
Summing the above inequality with its counterpart obtained by replacing $(\bX, \hbB, \bH)$ with $(\bar\bX, \bar\bB, \bar\bH)$, we have
\begin{align*}
&\fnorm{\bX (\hbB - \bar\bB)}^2 + \fnorm{\bar\bX (\hbB - \bar\bB)}^2\\
\le~& \fnorm{\bE - \bZ \bar\bH}^2 - \fnorm{\bE - \bZ \bH}^2
+ \fnorm{\bE - \bar\bZ \bH}^2 - \fnorm{\bE - \bar\bZ \bar\bH}^2.
\end{align*}
Note that in event $U_1\cap U_2$, we have
\begin{align*}
\eta n\fnorm{\bSigma^{\frac12}(\hbB - \bar\bB)}^2
\le \fnorm{\bX (\hbB - \bar\bB)}^2, \quad \eta n\fnorm{\bSigma^{\frac12}(\hbB - \bar\bB)}^2 \le \fnorm{\bar\bX (\hbB - \bar\bB)}^2.
\end{align*}
Thus,
$2\eta n\fnorm{\bH - \bar\bH}^2 \le \fnorm{\bZ(\bH - \bar\bH)}^2 + \fnorm{\bar\bZ(\bH - \bar\bH)}^2$, and
\begin{align*}
(LHS) \defas~& \max \big(2\eta n\fnorm{\bH - \bar\bH}^2,\ \fnorm{\bZ(\bH - \bar\bH)}^2 + \fnorm{\bar\bZ(\bH - \bar\bH)}^2\big)\\
=~&\fnorm{\bZ(\bH - \bar\bH)}^2 + \fnorm{\bar\bZ(\bH - \bar\bH)}^2\\
\le~& \fnorm{\bE - \bZ \bar\bH}^2 - \fnorm{\bE - \bZ \bH}^2
+ \fnorm{\bE - \bar\bZ \bH}^2 - \fnorm{\bE - \bar\bZ \bar\bH}^2.
\end{align*}
Now, in $U_1\cap U_2$, the Lipschitz property of the map $\bZ \mapsto D^{-1}\bF/\sqrt{n}$ follows from the same arguments in proof of <Ref>, with $\tau'$ in <ref> replaced by $\eta$ in this proof.
Furthermore, in the event $U_1\cap U_2\cap U_3$, the Lipschitz property of $\bZ \mapsto D^{-1}\bZ^\top\bF/n$ follows by the triangle inequality. To see this, let $\bU = D^{-1}\bF/\sqrt{n}$ and $\bV = D^{-1}\bZ^\top\bF/n = n^{-1/2} \bZ^\top \bU$; then
\begin{align*}
\opnorm{\bV - \bar\bV} &= n^{-1/2} \opnorm{\bZ^\top \bU - \bar\bZ^\top \bar\bU}\\
&\le n^{-1/2}[ \opnorm{(\bZ - \bar\bZ)^\top \bU} + \opnorm{\bar\bZ^\top (\bU - \bar\bU)}]\\
&\le n^{-1/2}[ \opnorm{\bZ - \bar\bZ} + \opnorm{\bar\bZ}\opnorm{\bU - \bar\bU}]\\
&\le n^{-1/2}( 1 + n^{-1/2}\opnorm{\bar\bZ}L)
\opnorm{\bZ - \bar\bZ}\\
&\le n^{-1/2} (1 + (2 +\sqrt{p/n})L)\opnorm{\bZ - \bar\bZ},
\end{align*}
where the last line uses $\opnorm{\bar\bZ}\le 2\sqrt{n} +\sqrt{p}$ in the event $U_3$.
<Ref> (1) is a direct consequence of the intermediate result $|D - \bar D| \le n^{-\frac12}\opnorm{\bZ- \bar\bZ} D [4 \max(1, (2\tau')^{-1})]$ in proof of <Ref>, while <Ref> (2) is a direct consequence of the intermediate result $|D - \bar D| \le n^{-\frac12}\opnorm{\bZ- \bar\bZ} D [4 \max(1, (2\eta)^{-1})]$ in proof of <Ref>.
Before proving the derivative formula, we restate $\hbB$ (defined in (<ref>) of the full paper) below,
\begin{equation}\label{eq: hbB-1}
\hbB=\argmin_{\bB\in\R^{p\times T}}
\Big(
\frac{1}{2n}\fnorm*{\bY - \bX\bB }^2 + \lambda \norm{\bB}_{2,1}
+ \frac{\tau}{2} \fnorm{\bB}^2
\Big),
\end{equation}
where $\|\bB\|_{2,1} = \sum_{j=1}^p \|{\bB^{\top} \be_j}\|_2$.
For the reader's convenience, we recall some useful notations. $\bP_{\hat{\mathscr{S}}} = \sum_{k\in\hat{\mathscr{S}}} \be_k\be_k^\top$. For each $k\in \hat{\mathscr{S}}$, $\bH^{(k)}=\lambda\|\hbB{}^\top \be_k\|_2^{-1}\left(\bI_T - \hbB{}^\top\be_k \be_k^\top\hbB ~ \|\hbB{}^\top\be_k\|_2^{-2} \right)$. $\tbH = \sum_{k\in\hat{\mathscr{S}}} (\bH^{(k)} \otimes \be_k\be_k^\top).$ $\bM_1 = \bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + \tau n \bP_{\hat{\mathscr{S}}})$, $\bM = \bM_1 + n\tbH\in \R^{pT\times pT}$, and $\bN = (\bI_T \otimes \bX)\bM^\dagger (\bI_T \otimes \bX^\top)$.
We first derive $\frac{\partial F_{lt}}{\partial x_{ij}}$.
Since $\bF = \bY - \bX\hbB = \bE - \bX(\hbB -\bB^*)$, by the product rule,
\begin{align*}
\frac{\partial F_{lt}}{\partial x_{ij} } = \be_l^\top \frac{\partial \bE - \bX(\hbB -\bB^*) }{\partial x_{ij} } \be_t
= - \be_l^\top (\dot\bX (\hbB -\bB^*) + \bX\dot\bB) \be_t,
\end{align*}
where $\dot\bX \defas \frac{\partial \bX}{\partial x_{ij}} = \be_i\be_j^\top$,
and $\dot\bB \defas \frac{\partial \hbB}{\partial x_{ij}}$.
Now we derive $\vec(\dot\bB)$ from KKT conditions for $\hbB$ defined in (<ref>):
1) For $k\in \hat{\mathscr{S}}$, i.e., $\hbB{}^\top \be_k \ne\mathbf{0}$,
$$\be_k^\top\bX^\top\big[\bE-\bX(\hbB-\bB^*)\big] -n\tau\be_k^\top \hbB= \frac{n \lambda}{\|\hbB{}^\top\be_k\|_2} \be_k^\top \hbB
\quad \in\R^{1\times T}.$$
2) For $k\notin \hat{\mathscr{S}}$, i.e., $\hbB{}^\top \be_k = \mathbf 0$,
$$\norm*{\be_k^\top\bX^\top\big[\bE-\bX(\hbB-\bB^*)\big] -n\tau\be_k^\top \hbB}< n\lambda.$$
Here the strict inequality is guaranteed by Proposition 2.3 of [Bellec, 2020].
Keeping $\bE$ fixed, differentiation of the above display for $k\in\hat{\mathscr{S}}$ w.r.t. $x_{ij}$ yields
$$\be_k^\top\Big[\dot\bX{}^\top\bF - \bX^\top[ \dot\bX(\hbB-\bB^*) +\bX\dot\bB]-n\tau\dot \bB\Big]= n\be_k^\top \dot\bB \bH^{(k)},$$
with $\bH^{(k)} =
\lambda
\|\hbB{}^\top \be_k\|_2^{-1}\left(\bI_T - \hbB{}^\top\be_k \be_k^\top\hbB ~ \|\hbB{}^\top\be_k\|_2^{-2} \right)\in\R^{T\times T}$.
Rearranging and using $\dot\bX = \be_i\be_j^\top$,
\[
\be_k^\top\Big[\be_j\be_i^\top\bF - \bX^\top \be_i \be_j^\top(\hbB-\bB^*)\Big]= \be_k^\top [(\bX^\top\bX + n\tau\bI_{p}) \dot\bB + n\dot\bB \bH^{(k)}].
\]
Recall $\bP_{\hat{\mathscr{S}}} = \sum_{k\in\hat{\mathscr{S}}} \be_k\be_k^\top$. Multiplying by $\be_k$ to the left and summing over $k\in \hat{\mathscr{S}}$, we obtain
\[
\bP_{\hat{\mathscr{S}}}\Big[\be_j\be_i^\top\bF - \bX^\top \be_i \be_j^\top(\hbB-\bB^*)\Big]= \bP_{\hat{\mathscr{S}}} (\bX^\top\bX + n\tau\bI_{p}) \dot\bB + n \sum_{k\in\hat{\mathscr{S}}} \be_k\be_k^\top \dot\bB \bH^{(k)}.
\]
Since $\hat{\mathscr{S}}$ is locally constant in a small neighborhood of $\bX$, $\hbB_{\hat{\mathscr{S}}{}^c}=0$, $\supp(\dot\bB)\subseteq \hat{\mathscr{S}}$. Hence, $\bP_{\hat{\mathscr{S}}}\dot\bB = \dot\bB$, and $\bX\dot\bB = \bX_{\hat{\mathscr{S}}}\dot\bB$. The above display can be rewritten as
\[
\bP_{\hat{\mathscr{S}}} \be_j\be_i^\top\bF - \bX_{\hat{\mathscr{S}}}^\top \be_i \be_j^\top(\hbB-\bB^*)
= (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + n\tau\bP_{\hat{\mathscr{S}}}) \dot\bB + n \sum_{k\in\hat{\mathscr{S}}} \be_k\be_k^\top \dot\bB \bH^{(k)}.
\]
Vectorizing the above display using the property $\vec(\bA\bB\bC) = (\bC^\top \otimes \bA)\vec(\bB)$ yields
\begin{align*}
&(\bF^\top \otimes \bP_{\hat{\mathscr{S}}} \be_j) \vec(\be_i^\top) -
((\hbB-\bB^*)^\top\be_j\otimes \bX_{\hat{\mathscr{S}}}^\top)\vec(\be_i)
\\=~&
[\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}+ n\tau\bP_{\hat{\mathscr{S}}}) + n\sum_{k\in\hat{\mathscr{S}}} (\bH^{(k)} \otimes \be_k\be_k^\top)] \vec(\dot\bB)\\
=~& (\bM_1 + n \tbH)\vec(\dot\bB)\\
=~& \bM \vec(\dot\bB),
\end{align*}
where $\bM_1 = \bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}+ n\tau\bP_{\hat{\mathscr{S}}})$, and $\tbH = \sum_{k\in\hat{\mathscr{S}}} (\bH^{(k)} \otimes \be_k\be_k^\top)$.
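The vectorization identity invoked above, $\vec(\bA\bB\bC) = (\bC^\top\otimes\bA)\vec(\bB)$ with column-major $\vec$, can be verified directly (arbitrary dimensions):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(3, 4))
B = rng.normal(size=(4, 5))
C = rng.normal(size=(5, 2))

# column-major (Fortran-order) vectorization
vec = lambda M: M.reshape(-1, order='F')

# vec(A B C) = (C^T kron A) vec(B)
assert np.allclose(vec(A @ B @ C), np.kron(C.T, A) @ vec(B))
```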
Under <Ref>(i), i.e., $\tau>0$, it is clear that $\rank(\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}+ n\tau\bP_{\hat{\mathscr{S}}})) = T |\hat{\mathscr{S}}|$.
Under <Ref>(ii), i.e., $\tau=0$, we have $\P(U_1)\to 1$. In the event $U_1\cap U_2$, we know $\rank(\bX_{\hat{\mathscr{S}}}) = |\hat{\mathscr{S}}|$ from <cit.>, hence $\rank(\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}+ n\tau\bP_{\hat{\mathscr{S}}})) = T |\hat{\mathscr{S}}|$.
In either of the above two scenarios, we thus have $\dim(\ker(\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}+ n\tau\bP_{\hat{\mathscr{S}}}))) = T (p - |\hat{\mathscr{S}}|)$ by the rank--nullity theorem. Moreover, $[\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}+ n\tau\bP_{\hat{\mathscr{S}}})] (\be_t\otimes\be_k) = \bf0$ for $t\in[T], k\in \hat{\mathscr{S}}^c$.
Let $V = \{(\be_t\otimes \be_k): t\in[T], k\in\hat{\mathscr{S}}^c\}$; the elements of $V$ are linearly independent, and $|V| = T (p -|\hat{\mathscr{S}}|)$. Thus, $V$ forms a basis for $\ker(\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}+ n\tau\bP_{\hat{\mathscr{S}}}))$. Since for any $(\be_t\otimes \be_k) \in V$ we also have $\tbH (\be_t\otimes \be_k) = \bf0$, it follows that $\ker(\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}+ n\tau\bP_{\hat{\mathscr{S}}})) \subseteq \ker(\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}+ n\tau\bP_{\hat{\mathscr{S}}}) + n\tbH)$. On the other hand, if $\bv$ is any vector such that $[\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}+ n\tau\bP_{\hat{\mathscr{S}}}) + n\tbH]\bv = \bf0$, then, since these matrices are all positive semi-definite, $\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}+ n\tau\bP_{\hat{\mathscr{S}}})\bv = \bf0$, which implies $\ker(\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}+ n\tau\bP_{\hat{\mathscr{S}}})+ n\tbH) \subseteq \ker(\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}+ n\tau\bP_{\hat{\mathscr{S}}}))$. Therefore,
\begin{align*}
\ker(\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}+ n\tau\bP_{\hat{\mathscr{S}}})+ n\tbH) &= \ker(\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}+ n\tau\bP_{\hat{\mathscr{S}}}))\\
&= \mathrm{span}\{(\be_t\otimes \be_k): t\in[T], k\in\hat{\mathscr{S}}^c\},
\end{align*}
and, since this matrix is symmetric, its range is the orthogonal complement of its kernel:
\begin{align*}
\range(\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}+ n\tau\bP_{\hat{\mathscr{S}}})+ n\tbH)
&= \mathrm{span}\{(\be_t\otimes \be_k): t\in[T], k\in\hat{\mathscr{S}}\}.
\end{align*}
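The kernel structure above can be checked numerically. The sketch below is my own illustration (the design, support, and dimensions are arbitrary choices, not from the paper): it verifies that $\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}+ n\tau\bP_{\hat{\mathscr{S}}})$ has rank $T|\hat{\mathscr{S}}|$ and annihilates every $\be_t\otimes\be_k$ with $k\notin\hat{\mathscr{S}}$.

```python
# Numerical sanity check (not from the paper): for a random design X and an
# assumed support S, the matrix M0 = I_T kron (X_S^T X_S + n*tau*P_S) has
# rank T*|S|, and e_t kron e_k lies in its kernel for every k outside S.
import numpy as np

rng = np.random.default_rng(0)
n, p, T, tau = 20, 6, 3, 0.5
S = [0, 2, 3]                      # hypothetical selected support S-hat
X = rng.standard_normal((n, p))

P_S = np.zeros((p, p))
P_S[S, S] = 1.0                    # coordinate projection onto S
X_S = X @ P_S                      # X with columns outside S zeroed out
A = X_S.T @ X_S + n * tau * P_S    # p x p block
M0 = np.kron(np.eye(T), A)        # I_T kron A, size Tp x Tp

rank_M0 = np.linalg.matrix_rank(M0)
assert rank_M0 == T * len(S)

# every e_t kron e_k with k not in S is annihilated by M0
for t in range(T):
    for k in set(range(p)) - set(S):
        v = np.kron(np.eye(T)[t], np.eye(p)[k])
        assert np.allclose(M0 @ v, 0.0)
```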
Since $\dot\bB = \bP_{\hat{\mathscr{S}}}\dot\bB$, we have $\vec(\dot\bB) = (\bI_T \otimes \bP_{\hat{\mathscr{S}}})\vec(\dot\bB)$, so $\vec(\dot\bB) \in \mathrm{col}(\bI_T\otimes \bP_{\hat{\mathscr{S}}}) = \range (\bM)$. Since $\bM$ is symmetric, $\bM^{\dagger} \bM$ is the orthogonal projection onto the range of $\bM$, hence
\begin{align}\label{eq: vecB}
\vec(\dot\bB) = \bM^{\dagger} \bM \vec(\dot\bB) = \bM^{\dagger} [(\bF^\top\otimes \be_j) - ((\hbB-\bB^*)^\top\be_j\otimes\bX^\top)] \be_i.
\end{align}
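The step $\vec(\dot\bB)=\bM^{\dagger}\bM\vec(\dot\bB)$ uses that, for a symmetric matrix, $\bM^{\dagger}\bM$ is the orthogonal projector onto $\range(\bM)$. A small numerical sketch (my own, with an arbitrary PSD matrix standing in for $\bM$):

```python
# Sketch (illustrative, not from the paper): for a symmetric PSD matrix M,
# the matrix pinv(M) @ M is the orthogonal projection onto range(M), so any
# vector already in range(M) is left unchanged.
import numpy as np

rng = np.random.default_rng(1)
r, d = 4, 7
B = rng.standard_normal((d, r))
M = B @ B.T                        # symmetric PSD with rank r

P = np.linalg.pinv(M) @ M          # candidate orthogonal projector
assert np.allclose(P, P.T, atol=1e-8)      # symmetric
assert np.allclose(P @ P, P, atol=1e-8)    # idempotent

v = B @ rng.standard_normal(r)     # an arbitrary vector in range(M)
assert np.allclose(P @ v, v, atol=1e-8)
```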
Since $\supp(\dot\bB)\subseteq \hat{\mathscr{S}}$ implies $\bX\dot\bB = \bX_{\hat{\mathscr{S}}}\dot\bB$, we have
\begin{align*}
\frac{\partial F_{lt}}{\partial x_{ij} }
&= - \be_l^\top (\dot\bX(\hbB-\bB^*) + \bX\dot\bB) \be_t\\
&= - (\be_l^\top \be_i\be_j^\top(\hbB-\bB^*)\be_t +
\be_l^\top\bX_{\hat{\mathscr{S}}}\dot\bB\be_t)\\
&= -(\be_l^\top \be_i\be_j^\top(\hbB-\bB^*)\be_t + (\be_t^\top \otimes \be_l^\top\bX_{\hat{\mathscr{S}}})\vec(\dot\bB))\\
&= - \be_l^\top \be_i\be_j^\top(\hbB-\bB^*)\be_t - (\be_t^\top \otimes \be_l^\top\bX_{\hat{\mathscr{S}}})\bM^{\dagger} [(\bF^\top\otimes \be_j) - ((\hbB-\bB^*)^\top\be_j\otimes\bX^\top)] \be_i\\
&= - (\be_j^\top(\hbB-\bB^*) \otimes \be_i^\top) (\be_t\otimes \be_l) +
(\be_t^\top \otimes \be_l^\top\bX_{\hat{\mathscr{S}}})\bM^{\dagger}((\hbB-\bB^*)^\top\be_j\otimes\bX^\top\be_i) \\
&\quad -
(\be_t^\top \otimes \be_l^\top\bX_{\hat{\mathscr{S}}})\bM^{\dagger} (\bF^\top\otimes \be_j)\be_i\\
&= - (\be_j^\top(\hbB-\bB^*) \otimes \be_i^\top) (\be_t\otimes \be_l) +
(\be_j^\top(\hbB-\bB^*) \otimes \be_i^\top)\bN (\be_t\otimes \be_l)\\
&\quad -
(\be_t^\top \otimes \be_l^\top\bX_{\hat{\mathscr{S}}})\bM^{\dagger} (\bF^\top\otimes \bI_p) (\be_i\otimes \be_j)\\
&= - (\be_j^\top(\hbB-\bB^*) \otimes \be_i^\top) (\bI_{nT} -\bN)(\be_t\otimes \be_l)
- (\be_t^\top \otimes \be_l^\top\bX)\bM^{\dagger} (\bF^\top\otimes \bI_p) (\be_i\otimes \be_j).
\end{align*}
Now we calculate $\frac{\partial F_{lt}}{\partial z_{ij}}$.
Since $\bX = \bZ \bSigma^{\frac 12}$, we have
$x_{ik} = \sum_{m=1}^p z_{im} (\bSigma^{\frac 12})_{mk}$, so $ \frac{\partial x_{ik}}{\partial z_{ij}} = (\bSigma^{\frac 12})_{jk}$. By the chain rule,
\begin{align*}
\frac{\partial F_{lt}}{\partial z_{ij}}
= \sum_{k=1}^p \frac{\partial F_{lt}}{\partial x_{ik}} \frac{\partial x_{ik}}{\partial z_{ij}}
= \sum_{k=1}^p \frac{\partial F_{lt}}{\partial x_{ik}} (\bSigma^{\frac 12})_{jk}
= D_{ij}^{lt} + \Delta_{ij}^{lt},
\end{align*}
where
\begin{align*}
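The chain-rule step can be checked by finite differences. The sketch below is my own illustration (the covariance matrix is an arbitrary choice): since the map $\bZ\mapsto\bZ\bSigma^{\frac12}$ is linear and rows do not interact, one row suffices, and the central difference recovers $(\bSigma^{\frac12})_{jk}$ exactly up to rounding.

```python
# Finite-difference check (illustrative, not from the paper) of the step
# dx_{ik}/dz_{ij} = (Sigma^{1/2})_{jk} when X = Z @ Sigma^{1/2}.
import numpy as np

rng = np.random.default_rng(2)
p = 5
A = rng.standard_normal((p, p))
Sigma = A @ A.T + p * np.eye(p)            # a positive-definite covariance
w, V = np.linalg.eigh(Sigma)
Sig_half = V @ np.diag(np.sqrt(w)) @ V.T   # symmetric square root of Sigma

z = rng.standard_normal(p)                 # one row of Z
eps = 1e-6
J = np.zeros((p, p))                       # J[j, k] ~= dx_k / dz_j
for j in range(p):
    zp, zm = z.copy(), z.copy()
    zp[j] += eps
    zm[j] -= eps
    J[j] = (zp @ Sig_half - zm @ Sig_half) / (2 * eps)

assert np.allclose(J, Sig_half, atol=1e-5)
```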
D_{ij}^{lt} %&= \sum_{k=1}^p D_{ik}^{lt} (\bSigma^{\frac 12})_{jk}\\
&= -\sum_{k=1}^p (\be_k^\top(\hbB-\bB^*) \otimes \be_i^\top) (\bI_{nT} - \bN) (\be_t\otimes \be_l) (\bSigma^{\frac 12})_{jk}\\
&= -(\be_j^\top \bSigma^{\frac 12} (\hbB-\bB^*) \otimes \be_i^\top) (\bI_{nT} - \bN) (\be_t\otimes \be_l)\\
&= -(\be_j^\top\bH \otimes \be_i^\top) (\bI_{nT} - \bN) (\be_t\otimes \be_l),
\end{align*}
and
\begin{align*}
\Delta_{ij}^{lt}
&=-\sum_{k=1}^p (\be_t^\top \otimes \be_l^\top)(\bI_T\otimes \bX )
\bM^\dagger\bigl(\bF^\top \otimes \bI_{p}\bigr)(\be_i \otimes\be_k) (\bSigma^{\frac 12})_{jk}\\
&=- (\be_t^\top \otimes \be_l^\top)(\bI_T\otimes \bX)
\bM^\dagger\bigl(\bF^\top \otimes \bI_{p}\bigr)(\be_i \otimes \bSigma^{\frac 12}\be_j)\\
&=- (\be_t^\top \otimes \be_l^\top)(\bI_T\otimes \bX)
\bM^\dagger (\bI_T\otimes \bSigma^{\frac 12}) \bigl(\bF^\top \otimes \bI_{p}\bigr)(\be_i \otimes \be_j)\\
% &= -(\be_t^\top \otimes \be_l^\top)(\bI_T\otimes \bZ )
% \tbM^\dagger\bigl(\bF^\top \otimes \bI_{p}\bigr)(\be_i \otimes\be_j).
\end{align*}
It follows that
\begin{align*}
\sum_{i=1}^n D_{ij}^{it}
&=-\sum_{i=1}^n (\be_j^\top\bH \otimes \be_i^\top) (\bI_{nT} - \bN) (\be_t\otimes \be_i)\\
&=- \be_j^\top\bH \big[ \sum_{i=1}^n (\bI_T \otimes \be_i^\top) (\bI_{nT} - \bN) (\bI_T \otimes \be_i)\big] \be_t\\
&= -\be_j^\top \bH (n\bI_T - \hbA)\be_t,
\end{align*}
where the last line follows from the definition of $\hbA$ in (<ref>).
(1) For $\tau>0$: by the formula for $\frac{\partial F_{lt}}{\partial z_{ij} }$ in <Ref>, we have
\begin{align*}
&\sum_{ij}\norm*{\frac{\partial \bF}{\partial z_{ij}}}^2_{\rm F} = \sum_{ij}\sum_{lt} \Big(\frac{\partial F_{lt}}{\partial z_{ij} } \Big)^2=\sum_{ij}\sum_{lt} \Big( D_{ij}^{lt} + \Delta_{ij}^{lt} \Big)^2 \\
\le~ & 2\sum_{ij,lt} ( D_{ij}^{lt})^2 + 2\sum_{ij,lt} (\Delta_{ij}^{lt})^2 \\
=~ & 2 \fnorm*{(\bH\otimes \bI_n)(\bI_{nT} - \bN)}^2 + 2 \fnorm{(\bI_T\otimes \bX)\bM^\dagger (\bI_T\otimes \bSigma^{\frac12}) (\bF^\top \otimes \bI_{p})}^2\\
\le~ & 2n\fnorm{\bH}^2 + 2 \fnorm{(\bI_T\otimes \bX_{\hat{\mathscr{S}}})\bM^\dagger (\bI_T\otimes \bSigma^{\frac12}) (\bF^\top \otimes \bI_{p})}^2.
\end{align*}
Since $0\preceq \bM^\dagger \preceq \bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + \tau n \bP_{\hat{\mathscr{S}}})^\dagger$,
\begin{align*}
&\fnorm{(\bI_T\otimes \bX_{\hat{\mathscr{S}}})\bM^\dagger (\bI_T\otimes \bSigma^{\frac12}) (\bF^\top \otimes \bI_{p})}^2\\
\le~& \opnorm{(\bI_T\otimes \bX_{\hat{\mathscr{S}}})\bM^\dagger (\bI_T\otimes \bSigma^{\frac12})}^2
\fnorm{(\bF^\top \otimes \bI_{p})}^2\\
\le~& p\opnorm{\bSigma} \fnorm{\bF}^2
\opnorm{(\bI_T\otimes \bX_{\hat{\mathscr{S}}})\bM^\dagger}^2 \\
\le~ &p\opnorm{\bSigma}\fnorm{\bF}^2\opnorm{(\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + \tau n \bP_{\hat{\mathscr{S}}})^\dagger\bX_{\hat{\mathscr{S}}}^\top}^2\\
\le~& \frac{p}{n\tau}\opnorm{\bSigma} \fnorm*{\bF}^2\\
=~& \frac{p}{n\tau'} \fnorm*{\bF}^2,
\end{align*}
where the last inequality uses $\opnorm{(\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + \tau n \bP_{\hat{\mathscr{S}}})^\dagger\bX_{\hat{\mathscr{S}}}^\top}^2\le (n\tau)^{-1}$.
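The operator-norm bound invoked here can be verified numerically; the sketch below is my own (arbitrary design and tuning parameter, with $\bP_{\hat{\mathscr{S}}}$ replaced by the identity for simplicity). The singular values of $(\bX^\top\bX + n\tau\bI)^{-1}\bX^\top$ are $s/(s^2+n\tau)\le (2\sqrt{n\tau})^{-1}$, which gives the squared bound $(4n\tau)^{-1}\le (n\tau)^{-1}$.

```python
# Numerical illustration (mine, not the paper's): the operator norm of
# (X^T X + n*tau*I)^{-1} X^T is at most 1/(2*sqrt(n*tau)), so its square is
# at most 1/(4*n*tau) <= 1/(n*tau).
import numpy as np

rng = np.random.default_rng(3)
n, p, tau = 50, 8, 0.3
X = rng.standard_normal((n, p))

G = np.linalg.inv(X.T @ X + n * tau * np.eye(p)) @ X.T
op = np.linalg.norm(G, 2)                 # largest singular value
assert op**2 <= 1.0 / (4 * n * tau) + 1e-12
assert op**2 <= 1.0 / (n * tau)
```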
On the other hand, we also have
\begin{align*}
&\fnorm{(\bI_T\otimes \bX_{\hat{\mathscr{S}}})\bM^\dagger (\bI_T\otimes \bSigma^{\frac12}) (\bF^\top \otimes \bI_{p})}^2\\
\le~& \fnorm{(\bI_T\otimes \bX_{\hat{\mathscr{S}}})\bM^\dagger }^2
\opnorm{(\bI_T\otimes \bSigma^{\frac12})(\bF^\top \otimes \bI_{p})}^2\\
\le~& \fnorm{(\bI_T\otimes \bX_{\hat{\mathscr{S}}}) (\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + \tau n \bP_{\hat{\mathscr{S}}})^\dagger )}^2
\fnorm{\bF}^2 \opnorm{\bSigma}\\
\le~& T\fnorm{\bX_{\hat{\mathscr{S}}} (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + \tau n \bP_{\hat{\mathscr{S}}})^\dagger}^2
\fnorm{\bF}^2 \opnorm{\bSigma}\\
\le~& T\trace\big[
(\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + \tau n \bP_{\hat{\mathscr{S}}})^\dagger
\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}
(\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + \tau n \bP_{\hat{\mathscr{S}}})^\dagger\big]
\fnorm{\bF}^2 \opnorm{\bSigma}\\
\le~& T\trace\big[
(\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + \tau n \bP_{\hat{\mathscr{S}}})^\dagger\big]
\fnorm{\bF}^2 \opnorm{\bSigma}\\
\le~& T\trace\big[
(\tau n \bP_{\hat{\mathscr{S}}})^\dagger\big]
\fnorm{\bF}^2 \opnorm{\bSigma}\\
\le~& T (\tau)^{-1}\fnorm{\bF}^2 \opnorm{\bSigma}\\
=~& \frac{T}{\tau'} \fnorm*{\bF}^2,
\end{align*}
Combining the two bounds above,
\begin{align*}
\frac{1}{n}\sum_{ij}\norm*{\frac{\partial \bF}{\partial z_{ij}}}^2_{\rm F}
&\le 2\fnorm{\bH}^2 + 2
(\tau')^{-1} (T\wedge \frac{p}{n})\fnorm*{\bF}^2/n \\
&\le 2\max(1, (\tau')^{-1} (T\wedge \frac{p}{n})) (\fnorm*{\bF}^2/n + \fnorm{\bH}^2)\\
&= 2\max(1, (\tau')^{-1} (T\wedge \frac{p}{n})) D^2.
\end{align*}
Now, by the product rule and the triangle inequality,
\begin{align*}
&\frac{1}{n}\sum_{ij}\fnorm*{\frac{\partial \bF/D}{\partial z_{ij}}}^2 \\
\le~&
2 D^{-2}\frac{1}{n}\sum_{ij}\fnorm*{\frac{\partial \bF}{\partial z_{ij}}}^2
+ 2 \frac{1}{n}\sum_{ij}\fnorm*{\bF\frac{\partial D^{-1}}{\partial z_{ij}}}^2\\
=~& 2 D^{-2}\frac{1}{n}\sum_{ij}\fnorm*{\frac{\partial \bF}{\partial z_{ij}}}^2
+ 2 D^{-4}\frac{1}{n}\fnorm{\bF}^2 \sum_{ij}\Big(\frac{\partial D}{\partial z_{ij}}\Big)^2\\
\le~& 2 D^{-2}\frac{1}{n}\sum_{ij}\fnorm*{\frac{\partial \bF}{\partial z_{ij}}}^2
+ 2 D^{-4}\frac{1}{n}\fnorm{\bF}^2 n^{-1} D^2 [4 \max(1, (2\tau')^{-1})]^2\\
\le~& 2 D^{-2}\frac{1}{n}\sum_{ij}\fnorm*{\frac{\partial \bF}{\partial z_{ij}}}^2
+ 2 n^{-1} [4 \max(1, (2\tau')^{-1})]^2\\
\le~& 4 \max(1, (\tau')^{-1} (T\wedge \frac{p}{n})) + 2 n^{-1} [4 \max(1, (2\tau')^{-1})]^2\\
:=~& f(\tau', T, n, p),
\end{align*}
where the second inequality is by <Ref>.
(2) For $\tau=0$: by <Ref>, on the event $U_1\cap U_2$, we obtain the same upper bounds as in case (1) with $\tau'$ replaced by $\eta$. To see this,
\begin{align*}
&\fnorm{(\bI_T\otimes \bX_{\hat{\mathscr{S}}})\bM^\dagger (\bI_T\otimes \bSigma^{\frac12}) (\bF^\top \otimes \bI_{p})}^2\\
\le~& \opnorm{(\bI_T\otimes \bX_{\hat{\mathscr{S}}})\bM^\dagger (\bI_T\otimes \bSigma^{\frac12})}^2
\fnorm{(\bF^\top \otimes \bI_{p})}^2\\
=~& \opnorm{(\bI_T\otimes \bSigma^{\frac12})\bM^\dagger(\bI_T\otimes \bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}})\bM^\dagger (\bI_T\otimes \bSigma^{\frac12})}\, p\fnorm{\bF}^2\\
\le~& \opnorm{(\bI_T\otimes \bSigma^{\frac12})\bM^\dagger (\bI_T\otimes \bSigma^{\frac12})}\, p\fnorm{\bF}^2\\
\le~& \frac{1}{n\eta}\, p\fnorm{\bF}^2\\
=~& \frac{p}{n\eta} \fnorm*{\bF}^2,
\end{align*}
where the third inequality is by <Ref>.
Also, we have
\begin{align*}
&\fnorm{(\bI_T\otimes \bX_{\hat{\mathscr{S}}})\bM^\dagger (\bI_T\otimes \bSigma^{\frac12}) (\bF^\top \otimes \bI_{p})}^2\\
\le~& \fnorm{(\bI_T\otimes \bX_{\hat{\mathscr{S}}})\bM^\dagger (\bI_T\otimes \bSigma^{\frac12})}^2 \opnorm{(\bF^\top \otimes \bI_{p})}^2\\
\le~& \trace\big[(\bI_T\otimes \bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}})\bM^\dagger (\bI_T\otimes \bSigma_{\hat{\mathscr{S}},\hat{\mathscr{S}}})\bM^\dagger\big]
\fnorm{\bF}^2 \\
\le~& \trace\big[ (\bI_T\otimes \bSigma_{\hat{\mathscr{S}},\hat{\mathscr{S}}})\bM^\dagger\big]
\fnorm{\bF}^2 \\
\le~& \trace\big[ (\bI_T\otimes \bSigma_{\hat{\mathscr{S}},\hat{\mathscr{S}}}) (\bI_T\otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}})^\dagger)\big]
\fnorm{\bF}^2 \\
=~& T \trace\big[ \bSigma_{\hat{\mathscr{S}},\hat{\mathscr{S}}} (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}})^\dagger\big]
\fnorm{\bF}^2 \\
\le~& T \trace\big[ (n\eta)^{-1} \bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}})^\dagger \big]
\fnorm{\bF}^2 \\
\le~& \frac{T}{\eta} \fnorm*{\bF}^2,
\end{align*}
where the penultimate inequality uses $\bSigma_{\hat{\mathscr{S}},\hat{\mathscr{S}}} \preceq (n\eta)^{-1} \bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}$ on the event $U_1\cap U_2$.
Therefore, on $U_1\cap U_2$, we have
\begin{align*}
\frac{1}{n}\sum_{ij}\fnorm*{\frac{\partial \bF/D}{\partial z_{ij}}}^2
&\le 4 \max(1, (\eta)^{-1} (T\wedge \frac{p}{n})) + 2 n^{-1} [4 \max(1, (2\eta)^{-1})]^2 \\
&:= f(\eta, T, n, p),
\end{align*}
where the function $f$ is the same as in case (1). The only difference is that $\tau'$ in the upper bound for case (1) is replaced by $\eta$ in case (2).
§.§ Proofs of results in Appendix sec:Lipschitz-fix-X
The following proof of <Ref> relies on an argument similar to that in the proof of <Ref>; we present it here for completeness.
Recall the KKT conditions for $\hbB$ defined in (<ref>):
1) For $k\in \hat{\mathscr{S}}$, i.e., $\hbB{}^\top \be_k \ne\mathbf{0}$,
$$\be_k^\top\bX^\top\big[\bE-\bX(\hbB-\bB^*)\big] -n\tau\be_k^\top \hbB= \frac{n \lambda}{\|\hbB{}^\top\be_k\|_2} \be_k^\top \hbB
\quad \in\R^{1\times T}.$$
2) For $k\notin \hat{\mathscr{S}}$, i.e., $\hbB{}^\top \be_k = \mathbf 0$,
$$\norm*{\be_k^\top\bX^\top\big[\bE-\bX(\hbB-\bB^*)\big] -n\tau\be_k^\top \hbB}< n\lambda.$$
Here the strict inequality is guaranteed by Proposition 2.3 of [Bellec, 2020].
Let $\ddot{\bB} = \frac{\partial \hbB}{\partial E_{it'}}$, $\dot\bE = \frac{\partial \bE}{\partial E_{it'}}$. Differentiation of the above display for $k\in\hat{\mathscr{S}}$ w.r.t. $E_{it'}$ yields
$$\be_k^\top\bX^\top(\dot\bE-\bX \ddot\bB) - n\tau \be_k^\top \ddot\bB
= n\be_k^\top \ddot\bB \bH^{(k)}$$
with $\bH^{(k)}
= \lambda
\|\hbB{}^\top \be_k\|_2^{-1}\left(\bI_T - \hbB{}^\top\be_k \be_k^\top\hbB ~ \|\hbB{}^\top\be_k\|_2^{-2} \right)\in\R^{T\times T}$. Rearranging
and using $\dot\bE = \be_i \be_{t'}^\top$,
$$\be_k^\top \bX^\top \be_i \be_{t'}^\top = \be_k^\top[n\ddot \bB \bH^{(k)} + (\bX^\top\bX+n\tau\bI_{p\times p})\ddot\bB].$$
Recall $\bP_{\hat{\mathscr{S}}} = \sum_{k\in\hat{\mathscr{S}}} \be_k\be_k^\top\in\R^{p\times p}$. Multiplying by $\be_k$ to the left and summing over $k\in \hat{\mathscr{S}}$, we obtain
$$\bP_{\hat{\mathscr{S}}} \bX^\top \be_i \be_{t'}^\top = n \sum_{k\in\hat{\mathscr{S}}} \be_k\be_k^\top \ddot \bB \bH^{(k)} + \bP_{\hat{\mathscr{S}}} (\bX^\top\bX+n\tau\bI_{p\times p})\ddot\bB,$$
which reduces to the following by $\supp(\ddot \bB)\subseteq \hat{\mathscr{S}}$ and $\bX\ddot\bB = \bX_{\hat{\mathscr{S}}}\ddot\bB$,
\begin{align*}
\bX_{\hat{\mathscr{S}}}^\top \be_i \be_{t'}^\top &= n \sum_{k\in\hat{\mathscr{S}}} \be_k\be_k^\top\ddot \bB \bH^{(k)} + \bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}}\ddot\bB \bI_T + n\tau\bP_{\hat{\mathscr{S}}} \ddot\bB \bI_T\\
&= n \sum_{k\in\hat{\mathscr{S}}} \be_k\be_k^\top\ddot \bB \bH^{(k)} + (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + n\tau\bP_{\hat{\mathscr{S}}})
\ddot\bB \bI_T.
\end{align*}
Vectorizing the above yields
\begin{align*}
(\be_{t'} \otimes \bX_{\hat{\mathscr{S}}}^\top) \vec(\be_i) &= [n\sum_{k\in\hat{\mathscr{S}}} (\bH^{(k)}\otimes \be_k\be_k^\top) +\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + n\tau\bP_{\hat{\mathscr{S}}})] \vec(\ddot\bB) \\
&= (n \tbH +\bI_T \otimes (\bX_{\hat{\mathscr{S}}}^\top\bX_{\hat{\mathscr{S}}} + n\tau\bP_{\hat{\mathscr{S}}}) )\vec(\ddot\bB)\\
&= \bM\vec(\ddot\bB).
\end{align*}
An argument similar to that in the proof of <Ref> leads to
\begin{align*}
\vec(\ddot \bB) = \bM^\dagger\bM \vec(\ddot \bB)
=\bM^{\dagger} (\be_{t'} \otimes \bX_{\hat{\mathscr{S}}}^\top) \be_i.
\end{align*}
Therefore, by $\bX\ddot{\bB}=\bX_{\hat{\mathscr{S}}}\ddot{\bB}$,
\begin{align*}
\frac{\partial F_{lt}}{\partial E_{it'} }
&= \be_l^\top \frac{\partial \bE - \bX(\hbB - \bB^*)}{\partial E_{it'}}\be_t\\
&= \be_l^\top \big( \be_i\be_{t'}^\top - \bX \ddot{\bB}\big) \be_t\\
&= \be_l^\top \be_i\be_{t'}^\top\be_t- \be_l^\top\bX \ddot{\bB}\be_t\\
&= \be_l^\top \be_i\be_{t'}^\top\be_t- (\be_t^\top \otimes \be_l^\top\bX_{\hat{\mathscr{S}}})\vec(\ddot\bB)\\
&= \be_l^\top \be_i\be_{t'}^\top\be_t- (\be_t^\top \otimes \be_l^\top\bX_{\hat{\mathscr{S}}}) \bM^{\dagger} (\be_{t'} \otimes \bX_{\hat{\mathscr{S}}}^\top) \be_i\\
&= \be_l^\top \be_i\be_{t'}^\top\be_t- \be_l^\top(\be_t^\top \otimes \bX_{\hat{\mathscr{S}}}) \bM^{\dagger} (\be_{t'} \otimes \bX_{\hat{\mathscr{S}}}^\top) \be_i\\
&= \be_l^\top \be_i\be_{t'}^\top\be_t- \be_l^\top(\be_t^\top \otimes \bX) \bM^{\dagger} (\be_{t'} \otimes \bX^\top) \be_i,
\end{align*}
where the last equality is due to $\bM^\dagger = (\bI_T \otimes \bP_{\hat{\mathscr{S}}}) \bM^\dagger (\bI_T \otimes \bP_{\hat{\mathscr{S}}})$.
Now the calculation of $\sum_{i=1}^n\frac{\partial F_{it}}{\partial E_{it'}}$ is straightforward,
\begin{align*}
\sum_{i=1}^n\frac{\partial F_{it}}{\partial E_{it'}} &= \sum_{i=1}^n \big[ \be_i^\top\be_i\be_t^\top\be_{t'}
- \be_i^\top (\be_t^\top \otimes \bX)\bM^\dagger (\be_{t'} \otimes \bX^\top) \be_i\big]\\
&= n\be_t^\top\be_{t'} - \trace[(\be_t^\top \otimes \bX)\bM^\dagger (\be_{t'} \otimes \bX^\top)]\\
&= n\be_t^\top\be_{t'} - \be_t^\top \hbA \be_{t'}\\
&= \be_t^\top (n\bI_T -\hbA)\be_{t'},
\end{align*}
where the third equality is due to the formula of $\hbA$ in (<ref>).
Noting that $\bF = \bE - \bZ\bH$, it follows that
$\sum_{i=1}^n\frac{\partial \be_i^\top\bZ\bH \be_t}{\partial E_{it'}} = \be_t^\top \hbA\be_{t'}$.
§.§ Proofs of results in Appendix sec:proba-tools
Let $\bz = \vec(\bE)$; then $\bz\sim \calN(\mathbf{0}, \bK)$ with $\bK = \bS \otimes \bI_n$ by Assumption <ref>.
For each $t_0, t_0' \in [T]$, let
$\bG^{(t_0, t_0')} = \bF \be_{t_0'}\be_{t_0}^\top$, and $\bff(\bz)^{(t_0, t_0')} = \vec(\bG^{(t_0, t_0')}) \tD^{-1} $.
For convenience, we will drop the superscript ${(t_0, t_0')}$ from $\bG^{(t_0, t_0')}$ and $\bff(\bz)^{(t_0, t_0')}$ in this proof.
By $\trace(\bA^\top\bB) = \vec(\bA)^\top \vec(\bB)$, we obtain
\begin{align}\label{eq: a1}
\be_{t_0}^\top \bE^\top \bF \tD^{-1} \be_{t_0'} = \trace(\bE^\top \bF \be_{t_0'}\be_{t_0}^\top) \tD^{-1}= \trace(\bE^\top \bG\tD^{-1}) = \bz^\top\bff(\bz).
\end{align}
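The identity $\trace(\bA^\top\bB) = \vec(\bA)^\top \vec(\bB)$ invoked above is easy to confirm numerically (a one-off check of my own; $\vec$ is column-stacking, i.e. `order='F'` in NumPy):

```python
# One-line numerical check (illustrative) of trace(A^T B) = vec(A)^T vec(B).
import numpy as np

rng = np.random.default_rng(6)
A, B = rng.standard_normal((4, 3)), rng.standard_normal((4, 3))
assert np.isclose(np.trace(A.T @ B),
                  A.flatten(order='F') @ B.flatten(order='F'))
```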
By the product rule, we have
\begin{align}\label{eq: nabla-f}
\nabla \bff(\bz) = \frac{\partial \vec(\bG) }{\partial \vec(\bE) } \tD^{-1} + \underbrace{\vec(\bG) \frac{\partial \tD^{-1}}{\partial \vec(\bE) }}_{\Rem},
\end{align}
where $\Rem = \bu\bv^\top$ with $\bu = \vec(\bG)\in \R^{nT\times 1}$, $\bv^\top = \frac{\partial \tD^{-1}}{\partial \vec(\bE) }\in \R^{1\times nT}$.
It follows that
\begin{align}\label{eq: stein-tr1}
\trace(\bK \nabla \bff(\bz)) = \trace\Big(\bK \frac{\partial \vec(\bG) }{\partial \vec(\bE)}\Big) \tD^{-1} + \trace(\bK \Rem).
\end{align}
Since $\bK = \bS \otimes \bI_n$ and $\bG = \bF \be_{t_0'}\be_{t_0}^\top$,
$\bK_{it, lt'}= S_{tt'}I(i=l)$, and $G_{it} = F_{it_0'} I(t=t_0)$. It follows
\begin{equation}\label{eq: stein-tr2}
\begin{aligned}
\trace\Big(\bK \frac{\partial \vec(\bG) }{\partial \vec(\bE)}\Big)
= \sum_{i,t}\sum_{l,t'} \bK_{it, lt'} \frac{\partial G_{it}}{\partial E_{lt'} }
%&= \sum_{i}\sum_{t,t'} S_{tt'} \frac{\partial G_{it} }{\partial E_{it'}}\\
= \sum_{t'} S_{t_0t'} \sum_{i}\frac{\partial F_{it_0'}}{\partial E_{it'}}
%&= \sum_{t'} S_{t_0t'} \be_{t_0'}^\top(n\bI_T - \hbA)\be_{t'}\\
= \be_{t_0}^\top \bS (n\bI_T - \hbA) \be_{t_0'},
\end{aligned}
\end{equation}
where the last equality used <Ref> and that $\hbA$ is symmetric.
Now we rewrite the quantity we want to bound as
\begin{align}
& \E \Bigl[\fnorm{\bE^\top \bF/\tD - \bS (n\bI_T - \hbA )/\tD}^2\Bigr] \nonumber\\
=~& \sum_{t_0,t_0'} \E \Big[\Big( \be_{t_0}^\top \bE^\top \bF \tD^{-1} \be_{t_0'} - \be_{t_0}^\top \bS (n\bI_T - \hbA) \be_{t_0'}\tD^{-1} \Big)^2\Big]\nonumber\\
=~& \sum_{t_0,t_0'} \E \Big[ \big(\bz^\top \bff(\bz) - \trace(\bK \nabla \bff(\bz)) + \trace(\bK\Rem)\big)^2\Big] \nonumber\\
\le~& 2\sum_{t_0,t_0'} \Big\{ \E \Big[ \big(\bz^\top \bff(\bz) - \trace(\bK \nabla \bff(\bz)) \big)^2\Big] + \E\Big[ \big(\trace(\bK\Rem)\big)^2 \Big] \Big\}\label{eq: LHS} ,
\end{align}
where the second equality follows from (<ref>), (<ref>) and (<ref>), and the last inequality uses the elementary inequality $(a+b)^2 \le 2(a^2 + b^2)$.
We next bound the two terms in (<ref>).
First term in (<ref>).
By the second-order Stein formula in <Ref>,
\begin{equation}\label{eq: stein}
\sum_{t_0,t_0'} \E \big(\bz^\top \bff(\bz) - \trace(\bK \nabla \bff(\bz))\big)^2
= \sum_{t_0,t_0'}
\E \Big[\fnorm{\bK^{\frac12}\bff(\bz)}^2 + \trace\big[\big(\bK \nabla \bff(\bz)\big)^2\big] \Big].
\end{equation}
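The second-order Stein identity in the display above can be sanity-checked in a toy case. The sketch below is my own illustration (the covariance $\bK$, matrix $A$, and the linear test function $f(z)=Az$ are arbitrary choices, not from the paper): for linear $f$ both sides are available in closed form, and a Monte-Carlo estimate of the left-hand side matches.

```python
# Monte-Carlo sketch (my check, not the paper's) of the second-order Stein
# identity  E(z^T f(z) - tr(K grad f))^2 = E[||K^{1/2} f(z)||^2] + tr((K grad f)^2)
# for the linear test function f(z) = A z with z ~ N(0, K).
import numpy as np

rng = np.random.default_rng(4)
d, m = 3, 200_000
A = rng.standard_normal((d, d))
L = rng.standard_normal((d, d))
K = L @ L.T + np.eye(d)                  # covariance of z

Z = rng.multivariate_normal(np.zeros(d), K, size=m)   # m samples of z
lhs_samples = np.einsum('ij,jk,ik->i', Z, A, Z) - np.trace(K @ A)
lhs = np.mean(lhs_samples**2)            # Monte-Carlo left-hand side

rhs = np.trace(A.T @ K @ A @ K) + np.trace(K @ A @ K @ A)  # closed form
# closed-form variance of the quadratic form, which must equal rhs exactly
var_exact = np.trace(K @ A @ K @ A) + np.trace(K @ A @ K @ A.T)
assert np.isclose(rhs, var_exact)
assert abs(lhs - rhs) / abs(rhs) < 0.1
```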
Now we bound the two terms in the right-hand side of (<ref>).
For the first term, recalling $\bff(\bz) = \vec(\bG) \tD^{-1} $ and $\bG = \bF \be_{t_0'}\be_{t_0}^\top$, we obtain
\begin{align*}
\fnorm{\bK^{\frac12}\bff(\bz)}^2 = \tD^{-2} \fnorm{(\bS^{\frac12}\otimes \bI_n)\vec(\bG)}^2 = \tD^{-2} \fnorm{\bG\bS^{\frac12}}^2 = \tD^{-2} \fnorm{\bS^{\frac12}\be_{t_0}}^2 \fnorm{\bF\be_{t_0'}}^2.
\end{align*}
Summing over all $(t_0, t_0')\in [T] \times [T]$, we obtain
\begin{equation}\label{eq: stein-RHS1}
\sum_{t_0, t_0'} \fnorm{\bK^{\frac12}\bff(\bz)}^2 = \tD^{-2} \fnorm{\bF}^2 \trace(\bS).
\end{equation}
For the second term in RHS of (<ref>), recall $ \nabla\bff(\bz) = \frac{\partial \vec(\bG) }{\partial \vec(\bE) } \tD^{-1} + \Rem$,
\begin{align}
&\trace\big[\big(\bK \nabla \bff(\bz)\big)^2\big] \nonumber\\
%&= \trace \Big\{\Big[\bK\Big( \frac{\partial \vec(\bG) }{\partial \vec(\bE) } \tD^{-1} + \Rem\Big)\Big]^2\Big\}\nonumber\\
%=~& \trace \Big[\Big( \bK\frac{\partial \vec(\bG) }{\partial \vec(\bE) } \tD^{-1}\Big)^2\Big] + \trace[(\bK\Rem)^2] + 2\trace\Big[ \bK\frac{\partial \vec(\bG) }{\partial \vec(\bE) } \tD^{-1} \bK \Rem\Big]\nonumber\\
=~& \tD^{-2}\trace \Big[\Big( \bK\frac{\partial \vec(\bG) }{\partial \vec(\bE)} \Big)^2\Big] + \trace[(\bK\Rem)^2] + 2\tD^{-1}\trace\Big[ \bK\frac{\partial \vec(\bG) }{\partial \vec(\bE) } \bK \Rem\Big].\label{eq: *2s}
\end{align}
By a property of the vectorization operation,
$\vec(\bG) = \vec(\bF\be_{t_0'}\be_{t_0}^\top) = (\be_{t_0}\be_{t_0'}^\top \otimes \bI_n) \vec(\bF)$,
$$\frac{\partial \vec(\bG) }{\partial \vec(\bE)}
= (\be_{t_0}\be_{t_0'}^\top \otimes \bI_n) \frac{\partial \vec(\bF) }{\partial \vec(\bE)}, $$
where $\opnorm{ \frac{\partial \vec(\bF) }{\partial \vec(\bE)} } \le 1$ since the map $\vec(\bE) \mapsto \vec(\bF)$ is 1-Lipschitz by <cit.>.
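The vectorization identity used just above can be checked directly; the sketch below is my own (arbitrary dimensions and indices), with $\vec$ the usual column-stacking operator (`order='F'` in NumPy).

```python
# Quick check (illustrative) of vec(F e_{t0'} e_{t0}^T) =
# (e_{t0} e_{t0'}^T kron I_n) vec(F), a special case of
# vec(A X B) = (B^T kron A) vec(X).
import numpy as np

rng = np.random.default_rng(5)
n, T, t0, t0p = 4, 3, 1, 2
F = rng.standard_normal((n, T))
e = np.eye(T)

G = F @ np.outer(e[t0p], e[t0])                   # F e_{t0'} e_{t0}^T
lhs = G.flatten(order='F')                        # vec(G)
rhs = np.kron(np.outer(e[t0], e[t0p]), np.eye(n)) @ F.flatten(order='F')
assert np.allclose(lhs, rhs)
```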
Now we bound the three terms in (<ref>). For the first term, by Cauchy-Schwarz inequality,
# Proximal Reinforcement Learning: Efficient Off-Policy Evaluation in
Partially Observed Markov Decision Processes
Andrew Bennett
Cornell University
<EMAIL_ADDRESS>Nathan Kallus
Cornell University
<EMAIL_ADDRESS>
(October 28, 2021)
###### Abstract
In applications of offline reinforcement learning to observational data, such
as in healthcare or education, a general concern is that observed actions
might be affected by unobserved factors, inducing confounding and biasing
estimates derived under the assumption of a perfect Markov decision process
(MDP) model. Here we tackle this by considering off-policy evaluation in a
partially observed MDP (POMDP). Specifically, we consider estimating the value
of a given target policy in a POMDP given trajectories with only partial state
observations generated by a different and unknown policy that may depend on
the unobserved state. We tackle two questions: what conditions allow us to
identify the target policy value from the observed data and, given
identification, how to best estimate it. To answer these, we extend the
framework of proximal causal inference to our POMDP setting, providing a
variety of settings where identification is made possible by the existence of
so-called bridge functions. We then show how to construct semiparametrically
efficient estimators in these settings. We term the resulting framework
proximal reinforcement learning (PRL). We demonstrate the benefits of PRL in
an extensive simulation study.
## 1 Introduction
An important problem in reinforcement learning (RL) is off-policy evaluation
(OPE): estimating the average reward generated by a target
_evaluation_ policy, given observations of data generated by running some
different _behavior_ policy. This problem is particularly important in many
application areas such as healthcare, education, or robotics, where
experimenting with new policies may be expensive, impractical, or unethical.
In such applications OPE may be used in order to estimate the benefit of
proposed policy changes by decision makers, or as a building block for the
related problem of policy optimization. At the same time, in the same
applications, unobservables can make this task difficult due to the lack of
experimentation.
As an example, consider the problem of evaluating a newly proposed policy for
assigning personalized curricula to students semester by semester, where the
curriculum assignment each semester is decided based on observed student
covariates, such as course outcomes and aptitude tests, with the goal of
maximizing student outcomes as measured, _e.g._, by standardized test scores.
Since it may be unethical to experiment with potentially detrimental
curriculum plans, we may wish to evaluate such policies based on passively
collected data where the targeted curriculum was decided by teachers. However,
there may be factors unobserved in the data that jointly influence the
observed student covariates, curriculum assignments, and student outcomes;
this may arise for example because the teacher can perceive subjective aspects
of the students’ personalities or aptitudes and take these into account in
their decisions. While such confounding breaks the usual Markovian assumptions
that underlie standard approaches to OPE, the process may well be modeled by a
partially observed Markov decision process (POMDP). Two key questions for OPE
in POMDPs are: when is the policy value still identifiable despite confounding due
to partial observation, and, when it is, how can we estimate it most
efficiently?
In this work we tackle these two questions, expanding the range of settings
that enable identification and providing efficient estimators in these
settings. First, we extend an existing identification result for OPE in
tabular POMDPs (Tennenholtz et al., 2020) to the continuous setting, which
provides some novel insight on this existing approach but also highlights its
limitations. To break these limitations, motivated by these insights, we
provide a new general identification result based on extending the proximal
causal inference framework (Miao et al., 2018a; Cui et al., 2020; Kallus et
al., 2021) to the dynamic, longitudinal setting. This permits identification
in more general settings. And, unlike the previous results, this one expresses
the value of the evaluation policy as the mean of some score function under
the distribution over trajectories induced by the logging policy, which allows
for natural estimators with good qualities. In particular, we prove
appropriate conditions under which the estimators arising from this result are
consistent, asymptotically normal, and semiparametrically efficient. In
addition, we provide a tractable algorithm for computing the nuisance
functions that allow such estimators to be computed, based on recent state-of-
the-art methods for solving conditional moment problems. We term this
framework proximal reinforcement learning (PRL), highlighting the connection
to proximal causal inference. We provide a series of synthetic experiments
that empirically validate our theoretical results and demonstrate the benefits
of PRL.
## 2 Related Work
First, there is an extensive line of recent work on OPE under unmeasured
confounding. This work considers many different forms of confounding,
including confounding that is iid at each time step (Wang et al., 2020;
Bennett et al., 2021; Liao et al., 2021), occurs only at a single time step
(Namkoong et al., 2020), satisfies a “memorylessness” property (Kallus and
Zhou, 2020), follows a POMDP structure (Tennenholtz et al., 2020; Nair and
Jiang, 2021; Oberst and Sontag, 2019; Killian et al., 2020), may take an
arbitrary form (Chen and Zhang, 2021; Chandak et al., 2021), or is in fact not
a confounder (Hu and Wager, 2021). These works have varying foci: Namkoong et
al. (2020); Kallus and Zhou (2020); Chen and Zhang (2021) focus on computing
intervals comprising the partial identification set of all hypothetical policy
values consistent with the data and their assumptions; Oberst and Sontag
(2019); Killian et al. (2020) focus on sampling counterfactual trajectories
under the evaluation policy given that the POMDP follows a particular Gumbel-
softmax structure; Wang et al. (2020); Gasse et al. (2021) focus on using the
offline data to warm start online reinforcement learning; Liao et al. (2021)
study OPE using instrumental variables; Chandak et al. (2021) show that OPE
can be performed under very general confounding if the behavior policy
probabilities of the logged actions are known; Hu and Wager (2021) consider
hidden states that do not affect the behavior policy and are therefore not
confounders but do make OPE harder by breaking Markovianity thereby inducing a
curse of horizon; and Tennenholtz et al. (2020); Nair and Jiang (2021) study
conditions under which the policy value under the POMDP model is identified.
Of the past work on OPE under unmeasured confounding, Tennenholtz et al.
(2020); Nair and Jiang (2021) are closest to ours, since they too consider a
general POMDP model of confounding, namely without restrictions that preserve
Markovianity via iid confounders, knowing the confounder-dependent
propensities, having unconfounded logged actions, or using a specific Gumbel-
softmax form. Tennenholtz et al. (2020) consider a particular class of tabular
POMDPs satisfying some rank constraints, and Nair and Jiang (2021) extend
these results and slightly relax their assumptions. However, neither considers
how to actually construct OPE estimators based on their
identification results that satisfy desirable properties such as consistency
or asymptotic normality, and both apply only to tabular POMDPs. Our
work presents a novel and general identification result and proposes a class
of resulting OPE estimators that possesses such desirable properties.
Another area of relevant literature is on proximal causal inference (PCI). PCI
was first proposed by Miao et al. (2018a), showing that using two
conditionally independent proxies of the confounder (known as a negative
control outcome and a negative control action) we can learn an outcome bridge
function that generalizes the standard mean-outcome function and controls for
the confounding effects. Since then this work has been expanded, including by
alternatively using an action bridge function which instead generalizes the
inverse propensity score (Miao et al., 2018b), allowing for multiple
treatments over time (Tchetgen et al., 2020), performing multiply-robust
treatment effect estimation (Shi et al., 2020), combining outcome and action
bridge functions for semiparametrically efficient estimation (Cui et al.,
2020), using PCI to estimate the value of contextual-bandit policies (Xu et
al., 2021) or generalized treatment effects (Kallus et al., 2021), or
estimating bridge functions using adversarial machine learning (Kallus et al.,
2021; Ghassami et al., 2021). In addition, the OPE for POMDP methodologies of
Tennenholtz et al. (2020); Nair and Jiang (2021) discussed above were said to
be motivated by PCI. Our paper relates to this body of work as it proposes a
new way of performing OPE for POMDPs using PCI, and it also proposes a new
adversarial machine learning-based approach for estimating the bridge
functions.
Finally, there is an extensive body of work on learning policies for POMDPs
using online learning. For example, see Azizzadenesheli et al. (2016), Katt et
al. (2017), Bhattacharya et al. (2020), Yang et al. (2021), Singh et al.
(2021), and references therein. Our work is distinct in that we consider an
offline setting where identification is an issue. At the same time, our work
is related to the online setting in that it could potentially be used to
augment and warm start such approaches if there is also offline observed data
available.
## 3 Problem Setting
A POMDP is formally defined by a tuple
$(\mathcal{S},\mathcal{A},\mathcal{O},H,P_{O},P_{R},P_{T})$, where
$\mathcal{S}$ denotes a state space, $\mathcal{A}$ denotes a finite action
space, $\mathcal{O}$ denotes an observation space, $H\in\mathbb{N}$ denotes a
time horizon, $P_{O}$ is an observation kernel, with $P_{O}^{(t)}(\cdot\mid
s)$ denoting the density of the observation $O_{t}$ given the state $S_{t}=s$
at time $t$, $P_{R}$ is a reward kernel, with $P_{R}^{(t)}(\cdot\mid s,a)$
denoting the density of the (bounded) reward $R_{t}\in[-R_{\max},R_{\max}]$
given the state $S_{t}=s$ and action $A_{t}=a$ at time $t$, and $P_{T}$ is a
transition kernel, with $P_{T}^{(t)}(\cdot\mid s,a)$ denoting the density of
the next state $S_{t+1}$ given the state $S_{t}=s$ and action $A_{t}=a$ at time $t$.
Note that we allow for the POMDP to be time inhomogeneous; that is, we allow
the observation, reward, and transition kernels to potentially depend on the time
index. Finally, we let $O_{0}$ denote some prior observation of the state
before $t=1$ (which may be empty), and we let $\tau^{\textup{full}}_{t}$ and
$\tau_{t}$ denote the true and observed trajectories up to time $t$
respectively, which we define according to
$\displaystyle\tau_{0}$ $\displaystyle=\tau^{\textup{full}}_{0}=O_{0}$
$\displaystyle\tau_{t}$
$\displaystyle=(O_{0},(O_{1},A_{1},R_{1}),(O_{2},A_{2},R_{2}),\ldots,(O_{t},A_{t},R_{t}))$
$\displaystyle\tau^{\textup{full}}_{t}$
$\displaystyle=(O_{0},(S_{1},O_{1},A_{1},R_{1}),(S_{2},O_{2},A_{2},R_{2}),\ldots,(S_{t},O_{t},A_{t},R_{t}))\,.$
Let $\pi_{b}$ be some given randomized _logging policy_ , which is
characterized by a sequence of functions $\pi_{b}^{(1)},\ldots,\pi_{b}^{(H)}$,
where $\pi_{b}^{(t)}(a\mid S_{t})$ denotes the probability that the logging
policy takes action $a\in\mathcal{A}$ at time $t$ given state $S_{t}$. The
logging policy together with the POMDP define a joint distribution over the
(true) trajectory $\tau_{H}^{\textup{full}}$ given by acting according to
$\pi_{b}$; let $\mathcal{P}_{b}$ denote this distribution. All probabilities
and expectations in the ensuing will be with respect to $\mathcal{P}_{b}$
unless otherwise specified, _e.g._ , by a subscript.
Our data consists of observed trajectories generated by the logging policy:
$\mathcal{D}=\\{\tau_{H}^{(1)},\tau_{H}^{(2)},\ldots,\tau_{H}^{(n)}\\}$, where
each $\tau_{H}^{(i)}$ is an iid sample of $\tau_{H}$ (which does not contain
$S_{t}$), distributed according to $\mathcal{P}_{b}$. Importantly, we
emphasize that, although we assume that states are unobserved by the decision
maker and are not included in the logged data $\mathcal{D}$, the logging
policy still uses these hidden states, inducing confounding.
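As a concrete illustration of this data-generating process, the following Python sketch simulates logged trajectories from a small tabular POMDP under a logging policy that, unlike the recorded data, sees the hidden state. All sizes, kernels, and the reward rule here are hypothetical choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tabular POMDP dimensions (illustrative only).
n_states, n_obs, n_actions, H = 3, 3, 2, 4

# Random kernels: P_T[s, a] is a distribution over next states,
# P_O[s] is a distribution over observations.
P_T = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
P_O = rng.dirichlet(np.ones(n_obs), size=n_states)

def pi_b(s):
    # Logging policy that uses the *hidden* state -- this is what
    # induces confounding in the logged data.
    p = np.full(n_actions, 1.0 / n_actions)
    p[s % n_actions] += 0.3
    return p / p.sum()

def sample_trajectory():
    s = rng.integers(n_states)
    obs, acts, rews = [rng.integers(n_obs)], [], []  # O_0 is a prior observation
    for t in range(H):
        o = rng.choice(n_obs, p=P_O[s])
        a = rng.choice(n_actions, p=pi_b(s))
        r = float(o == a)                # toy reward rule (an assumption)
        obs.append(o); acts.append(a); rews.append(r)
        s = rng.choice(n_states, p=P_T[s, a])
    return obs, acts, rews               # note: states are *not* logged

D = [sample_trajectory() for _ in range(5)]
```

Each element of `D` plays the role of an observed trajectory $\tau_{H}^{(i)}$: the hidden states drive both the observations and the logging policy's actions, but only $(O_{0:H}, A_{1:H}, R_{1:H})$ are recorded.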
Implicit in our notation $\pi_{b}^{(t)}(a\mid S_{t})$ is that the logging
policy actions are independent of the past given current state $S_{t}$.
Similarly, the POMDP model is characterized by analogous independence
assumptions with respect to the observation and reward emissions and the state transitions. This
means that $\mathcal{P}_{b}$ satisfies a Markovian assumption with respect to
$S_{t}$; however, as $S_{t}$ is unobserved we cannot condition on it and break
the past from the future. We visualize the directed acyclic graph (DAG)
representing $\mathcal{P}_{b}$ in Fig. 1. In particular, we have the
following conditional independencies in $\mathcal{P}_{b}$: for every $t$,
$\displaystyle O_{t}\perp\\!\\!\\!\perp\tau^{\textup{full}}_{t-1}\mid
S_{t},~{}~{}~{}~{}R_{t}\perp\\!\\!\\!\perp\tau^{\textup{full}}_{t-1},O_{t}\mid
S_{t},A_{t},~{}~{}~{}~{}S_{t+1}\perp\\!\\!\\!\perp\tau^{\textup{full}}_{t-1},O_{t},R_{t}\mid
S_{t},A_{t},~{}~{}~{}~{}A_{t}\perp\\!\\!\\!\perp\tau^{\textup{full}}_{t-1}\mid
S_{t}\,.$
Now, let $\pi_{e}$ be some deterministic _target policy_ that we wish to
evaluate, which is characterized by a sequence of functions
$\pi_{e}^{(1)},\ldots,\pi_{e}^{(H)}$, where
$\pi_{e}^{(t)}(O_{t},\tau_{t-1})\in\mathcal{A}$ denotes the action taken by
policy $\pi_{e}$ at time $t$ given current observation $O_{t}$ and the past
observable trajectory $\tau_{t-1}$. We visualize the POMDP model under such a
policy that only depends on observable data in Fig. 2. Note that we allow
$\pi_{e}^{(t)}$ to potentially depend on all observable data up to time $t$;
this is because the Markovian assumption _does not_ hold with respect to the
observations $O_{t}$, so we may wish to consider policies that use all past
observable information to best account for the unobserved state. We let
$\mathcal{P}_{e}$ denote the distribution over trajectories that would be
obtained by following policy $\pi_{e}$ in the POMDP. Then, given some
discounting factor $\gamma\in(0,1]$, we define the _value_ of policy $\pi_{e}$
as follows (unlike some definitions of policy value, we omit normalizing by
$\sum_{t=1}^{H}\gamma^{t-1}$ for the sake of brevity):
$v_{\gamma}(\pi_{e})=\sum_{t=1}^{H}\gamma^{t-1}\mathbb{E}_{\mathcal{P}_{e}}[R_{t}]\,.$
The task of OPE under the POMDP model is to estimate $v_{\gamma}(\pi_{e})$ (a
function of $\mathcal{P}_{e}$) given $\mathcal{D}$ (drawn from
$\mathcal{P}_{b}$).
Figure 1: Graphical representation of the POMDP model under logging policy
$\pi_{b}$. The red arrows make explicit the dependence of $\pi_{b}$ on the
hidden state. Dashed circles denote variables unobserved in our data.
Figure 2: Graphical representation of the POMDP model under evaluation policy
$\pi_{e}$. The red arrows make explicit the dependence of $\pi_{e}$ on the
current observation and previous observable trajectory, and the blue nodes and
arrows make explicit the dependence of the observable trajectories on the
data.
## 4 Identification Theory
Before considering how to actually estimate $v_{\gamma}(\pi_{e})$, we first
consider the simpler problem of _identification_ , which is the problem of
finding some function $\psi$ such that
$v_{\gamma}(\pi_{e})=\psi(\mathcal{P}_{b})$. This is the first stepping stone
because $\mathcal{P}_{b}$ is the most we could hope to ever learn from
observing $\mathcal{D}$. If such a $\psi$ exists, then we say that
$v_{\gamma}(\pi_{e})$ is _identified_ with respect to $\mathcal{P}_{b}$. In
general, such an identification result is impossible for the OPE problem given
unobserved confounding as introduced by our POMDP model. Therefore, we must
impose some assumptions on $\mathcal{P}_{b}$ for such identification to be
possible.
To the best of our knowledge, the only existing identification result of this
kind was presented by Tennenholtz et al. (2020), and is only valid in tabular
settings where states and observations are discrete. We will proceed first by
extending this approach to more general, non-tabular settings. However, we
will note that there are some restrictive limitations to estimation based on
this approach. So, motivated by the limitations, we develop a new and more
general identification theory which extends the PCI approach to the sequential
setting and easily enables efficient estimation.
### 4.1 Identification by Time-Independent Sampling and Its Limitations
For our generalization of Tennenholtz et al. (2020), we will consider
evaluating policies $\pi_{e}$ such that $\pi_{e}^{(t)}$ only depends on the
past observable data via $\\{O_{1},\ldots,O_{t}\\}$ and
$\\{A_{1},\ldots,A_{t-1}\\}$ (that is, $\pi_{e}$ disregards $O_{0}$ as well
as past rewards). First, for each $t\in\\{1,\ldots,H\\}$ let
$\Omega_{t}=(Z_{t},W_{t},X_{t},A_{t},R_{t})$, where $Z_{t}=O_{t-1}$,
$W_{t}=O_{t}$, and $X_{t}=O_{t+1}$, and let $\Omega_{0}=X_{0}$ (this
renaming of some of the variables will allow for a clearer connection to the
next section). In addition, define
$\Omega^{*}_{t}=\\{\Omega_{0},\Omega_{1},\ldots,\Omega_{t}\\}$, and let
$\mathcal{P}_{\text{ind}}$ denote the measure on $\Omega^{*}_{H}$ in which
each $\Omega_{t}$ is sampled _independently_ according to its marginal
distribution in $\mathcal{P}_{b}$. Next, we make the following completeness
assumption:
###### Assumption 1 (Completeness).
For each $t\in\\{1,\ldots,H\\}$ and $a\in\mathcal{A}$, if
$\mathbb{E}_{\mathcal{P}_{b}}[g(S_{t})\mid O_{t},A_{t}=a]=0$ almost surely for
some function $g$, then $g(S_{t})=0$ almost surely.
This assumption is fundamental to this identification approach, and
essentially requires that $O_{t}$ captures all degrees of variation in
$S_{t}$. In the case that states and observations are finite, it is necessary
that $O_{t}$ have at least as many categories as $S_{t}$ for this condition to
hold.
Finally, we let $E_{t}$ denote the random variable measurable with respect to
$A_{1:t-1}$ and $W_{1:t}$ that gives the action that would have been assigned
by $\pi_{e}^{(t)}$ given past observations $W_{1:t}$ and past actions
$A_{1:t-1}$. Given this, we are ready to present our first identification
result.
###### Theorem 1.
Let Assumption 1 hold, and suppose that for each $t\in\\{1,\ldots,H\\}$ there
exists a function
$\rho^{(t)}:\mathcal{O}\times\mathcal{A}\times\mathcal{O}\to\mathbb{R}$,
such that for every measure $f$ on $W_{t}$ that is absolutely continuous with
respect to $\mathcal{P}_{b}$ and every $a\in\mathcal{A}$, we have almost
surely
$\mathbb{E}\left[\int\rho^{(t)}(Z_{t},A_{t},x)df(x)\ \middle|\
W_{t},A_{t}=a\right]=P(A_{t}=a\mid
W_{t})^{-1}\left(\frac{df}{d\mathcal{P}_{b}}\right)(W_{t})\,,$ (1)
where $df/d\mathcal{P}_{b}$ denotes the Radon-Nikodym derivative of $f$ with
respect to $\mathcal{P}_{b}$. Then, for each $s\in\\{1,\ldots,H\\}$ we have
$\mathbb{E}_{\mathcal{P}_{e}}[R_{s}]=\mathbb{E}_{\mathcal{P}_{\text{ind}}}\left[R_{s}\prod_{t=1}^{s}\mathds{1}\\{A_{t}=E_{t}\\}\rho^{(t)}(Z_{t},A_{t},X_{t-1})\right]\,.$
We note that this result identifies $v_{\gamma}(\pi_{e})$ for any given
$\gamma$, since by construction $\mathcal{P}_{\text{ind}}$ is identified with
respect to $\mathcal{P}_{b}$, and this allows us to express
$v_{\gamma}(\pi_{e})$ as a function of $\mathcal{P}_{\text{ind}}$. Note that
implicit in the assumptions is that $P(A_{t}=a\mid W_{t})>0$.
We call this result a _time-independent sampling_ result, since it is written
as an expectation with respect to $\mathcal{P}_{\text{ind}}$, where data at
each time point is sampled independently. We note that the moment equations
given by Eq. 1 in general are very complicated, and it is not immediately
clear under what conditions this equation is even solvable. In the tabular
setting, we present the following lemma which provides an analytic solution to
Eq. 1 and makes clear the connection to Tennenholtz et al. (2020).
###### Lemma 1.
Suppose that $O_{t}$ is discrete with $k$ categories for every $t$, and
without loss of generality let the support of $O_{t}$ be denoted by
$\\{1,\ldots,k\\}$. In addition, for each $t\in\\{1,\ldots,s\\}$ and
$a\in\mathcal{A}$, let $Q^{(t,a)}$ denote the $k\times k$ matrix defined
according to
$Q^{(t,a)}_{x,y}=P_{\mathcal{P}_{b}}(O_{t}=x\mid A_{t}=a,O_{t-1}=y)\,.$
Then, assuming $Q^{(t,a)}$ is invertible for each $t$ and $a$, Eq. 1 is solved
by
$\rho^{(t)}(z,a,x)=\frac{((Q^{(t,a)})^{-1})_{z,x}}{P(O_{t-1}=z,A_{t}=a)}\,.$
Furthermore, plugging this solution into the identification result of Theorem
1 is identical to Theorem 1 of Tennenholtz et al. (2020).
We also note that in the case that the matrices $Q^{(t,a)}$ defined above are
invertible, it easily follows that Assumption 1 holds as long as $S_{t}$ has
at most $k$ categories.
Unfortunately, this identification result has some major shortcomings. For
one, the nuisance function defined by Eq. 1 is generally very complex and is
difficult to estimate in general. Even in the tabular setting where this can
be solved analytically, the result still depends on a large number of matrices
being invertible, which is potentially dubious in practice. (Tennenholtz et
al. (2020) also consider a version of their result that relaxes the
invertibility assumption, but it requires a different model than the POMDP, in
which one has multiple conditionally independent observations at every time
step.) In addition, since this result is given by an
expectation under $\mathcal{P}_{\text{ind}}$ rather than $\mathcal{P}_{b}$, it
is difficult to analyze using standard efficiency theory given iid samples
from $\mathcal{P}_{b}$. Empirical approximations of this expectation given $n$
iid samples from $\mathcal{P}_{b}$ would require averaging over $n^{s}$ terms,
introducing a curse of dimension. Finally, this expectation clearly does not
have many of the desirable properties for OPE estimating equations held by
many OPE estimators in the simpler MDP setting, such as Neyman orthogonality
(Kallus and Uehara, 2019a, b).
### 4.2 Identification by Proximal Causal Inference
We now discuss an alternative way of obtaining identifiability, via a
reduction to a nested sequence of proximal causal inference (PCI) problems of
the kind described by Cui et al. (2020). These authors considered identifying
the average treatment effect (ATE), and other related causal estimands, for
binary decision making problems with unmeasured confounding given two
independent proxies for the confounders, one of which is conditionally
independent from treatments given confounders, and the other of which is
independent from outcomes given treatment and confounders. We will in fact
leverage the refinement of the PCI approach by Kallus et al. (2021), which has
strictly weaker assumptions than Cui et al. (2020).
Our reduction works by defining random variables $Z_{t}$ and $W_{t}$ for each
$t\in[H]$ that are measurable with respect to the observed trajectory
$\tau_{H}$. We respectively refer to these as _negative control actions_ and
_negative control outcomes_. All negative controls must satisfy certain
independence properties outlined below. Any definition of such variables that
satisfies these independence properties is considered a valid PCI reduction, and
we will have various examples of valid PCI reductions for our POMDP model at
the end of this section.
To formalize these assumptions, we must first define some additional notation.
Let $\mathcal{P}^{*}_{t}$ denote the measure on trajectories induced by
running policy $\pi_{e}$ for the first $t-1$ actions, and running policy
$\pi_{b}$ henceforth. Note that according to this definition,
$\mathcal{P}_{b}=\mathcal{P}^{*}_{1}$, and
$\mathcal{P}_{e}=\mathcal{P}^{*}_{H+1}$. In addition, we use the notation
$\mathbb{E}^{*}_{t}$ for expectation under $\mathcal{P}^{*}_{t}$, and
$P^{*}_{t}$ for the probability mass or density of random variables under
$\mathcal{P}^{*}_{t}$. Next, for each $t\in\\{1,\ldots,H\\}$ we define
$E_{t}=\pi_{e}^{(t)}(O_{t},\tau_{t-1})$; that is, $E_{t}$ is shorthand for the
action that our deterministic target policy would take given the observable
information available at time $t$. Furthermore, analogous to Section 4.1, for
any choice of negative controls we will define the shorthand notation
$D_{t}=(Z_{t},W_{t},A_{t},E_{t},R_{t})$. Finally, for ease of presentation, we
refer to any random variable $Y_{t}$ that is measurable with respect to
$(R_{t},D_{t+1:H})$ as an _outcome variable at time $t$_. We will use
the potential outcome notation $Y_{t}(a)$ for any $a\in\mathcal{A}$ to denote
a random variable with the same distribution as $Y_{t}$ would have if,
possibly counter to fact, action $a$ were taken at time $t$ instead of $A_{t}$
(and subsequent actions were still taken according to the behavior policy). We
note that we will only consider such potential outcome variables under the
measure $\mathcal{P}^{*}_{t}$, in which case $Y_{t}(a)$ corresponds to the
outcome that would be obtained by applying $\pi_{e}$ for the first $t-1$
actions, the fixed action $a$ at time $t$, and then $\pi_{b}$ henceforth (as
opposed to the factual outcome $Y_{t}$ obtained by applying $\pi_{e}$ for the
first $t-1$ actions and $\pi_{b}$ henceforth). We note that according to this
notation we have $Y_{t}(A_{t})=Y_{t}$ always.
Given these definitions, the independence assumptions for a valid PCI
reduction are as follows.
###### Assumption 2 (Negative Controls).
For each $t\in[H]$ and $a\in\mathcal{A}$, and any outcome variable $Y_{t}$
that is measurable w.r.t. $(R_{t},D_{t+1:H})$, we have
$Z_{t},A_{t}\perp\\!\\!\\!\perp_{\mathcal{P}^{*}_{t}}W_{t},E_{t},Y_{t}(a)\mid
S_{t}\,.$
We note that these independence assumptions imply that the decision making
problem under $\mathcal{P}^{*}_{t}$ with confounder $S_{t}$, negative controls
$Z_{t}$ and $W_{t}$, action $A_{t}$, and outcome $(R_{t},D_{t+1:H})$ satisfy
the PCI problem structure as in Cui et al. (2020). We also note that we may
additionally include an observable context variable $X_{t}$, which may be
useful for defining more efficient reductions. In this case, the conditional
independence assumption in Assumption 2 should hold given both $S_{t}$ and
$X_{t}$, and in everything that follows $Z_{t}$, $W_{t}$, and $S_{t}$ should
be replaced with $(Z_{t},X_{t})$, $(W_{t},X_{t})$, and $(S_{t},X_{t})$
respectively, as in Cui et al. (2020). However, we omit $X_{t}$ from the
notation in the rest of the paper for brevity.
Next, our identification result below depends on the existence of some
_bridge_ functions. Specifically, we make the following assumption.
###### Assumption 3 (Bridge Functions Exist).
For each $t\in[H]$ and $a\in\mathcal{A}$, there exists a function $q^{(t)}$
satisfying
$\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})\mid
S_{t},A_{t}=a]=P^{*}_{t}(A_{t}=a\mid S_{t})^{-1}\qquad\text{a.s.}$
Furthermore, for any given outcome variable $Y_{t}=\phi(R_{t},D_{t+1:H})$,
there exists a function $h^{(t,\phi)}$ satisfying
$\mathbb{E}^{*}_{t}[h^{(t,\phi)}(W_{t},A_{t})\mid
S_{t},A_{t}=a]=\mathbb{E}^{*}_{t}[\mathds{1}\\{E_{t}=A_{t}\\}Y_{t}\mid
S_{t},A_{t}=a]\qquad\text{a.s.}$
Note that implicit in the assumption is that $P^{*}_{t}(A_{t}=a\mid S_{t})>0$.
We refer to the functions $q^{(t)}$ as _action bridge functions_ , and
$h^{(t,\phi)}$ as _outcome bridge functions_. These may be seen as analogues
of inverse propensity scores and state-action quality functions respectively.
As argued previously by Kallus et al. (2021), assuming the existence of these
functions is more general than the approach taken by Cui et al. (2020), who
require complex completeness conditions. Existence of such functions can be
justified, _e.g._, by conditions on the singular values of certain
conditional expectation linear operators; we refer readers to Kallus et al.
(2021) for a detailed presentation of such conditions, as well as concrete
examples of bridge functions when the negative controls are discrete, or the
negative controls and $Y_{t}$ are defined by linear models.
Given this, we are now ready to present our main identifiability theorem.
###### Theorem 2.
Let Assumptions 2 and 3 hold. For each $s\in\\{1,\ldots,H\\}$ recursively
define $Y^{(s)}_{s}=R_{s}$, and
$Y^{(s)}_{t-1}=\phi^{(t,s)}(Z_{t},W_{t},A_{t},E_{t},Y_{t}^{(s)})$ for each $t\leq
s$, where the function $\phi^{(t,s)}$ is allowed to take one of the following
three forms:
$\displaystyle\phi^{(t,s)}_{\text{Reg}}(Z_{t},W_{t},A_{t},E_{t},Y_{t}^{(s)})$
$\displaystyle=\sum_{a\in\mathcal{A}}h^{(t,s)}(W_{t},a)$
$\displaystyle\phi^{(t,s)}_{\text{IS}}(Z_{t},W_{t},A_{t},E_{t},Y_{t}^{(s)})$
$\displaystyle=q^{(t)}(Z_{t},A_{t})\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}^{(s)}$
$\displaystyle\phi^{(t,s)}_{\text{DR}}(Z_{t},W_{t},A_{t},E_{t},Y_{t}^{(s)})$
$\displaystyle=\sum_{a\in\mathcal{A}}h^{(t,s)}(W_{t},a)+q^{(t)}(Z_{t},A_{t})\left(\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}^{(s)}-h^{(t,s)}(W_{t},A_{t})\right)\,,$
where $h^{(t,s)}$ and $q^{(t)}$ are solutions to, respectively,
$\displaystyle\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})\mid W_{t},A_{t}=a]$
$\displaystyle=P^{*}_{t}(A_{t}=a\mid W_{t})^{-1}\quad\text{a.s.}\quad\forall
a\in\mathcal{A}\,,$ (2)
$\displaystyle\mathbb{E}^{*}_{t}[h^{(t,s)}(W_{t},A_{t})\mid Z_{t},A_{t}=a]$
$\displaystyle=\mathbb{E}^{*}_{t}[\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}^{(s)}\mid
Z_{t},A_{t}=a]\quad\text{a.s.}\quad\forall a\in\mathcal{A}\,,$ (3)
which we show must exist.
Then, we have
$\mathbb{E}_{\mathcal{P}_{e}}[R_{s}]=\mathbb{E}_{\mathcal{P}_{b}}[Y_{0}^{(s)}]$
for each $s\in\\{1,\ldots,H\\}$.
We note that $Y_{0}^{(s)}$ is a function of $\tau_{H}$ for each $s$, so
therefore this is a valid identification result. Furthermore, given
Assumptions 2 and 3 it must be the case that there exist solutions to Eqs. 2
and 3; in particular, any functions $q^{(t)}$ and $h^{(t,s)}$ satisfying
Assumption 3 with $\phi(R_{t},D_{t+1:H})=Y_{t}^{(s)}$ must satisfy Eqs. 2 and
3. Importantly, our identification result holds using _any_ functions
$q^{(t)}$ and $h^{(t,s)}$ satisfying Eqs. 2 and 3, even if these functions do
not satisfy the equations in Assumption 3. However, the existence of functions
in Assumption 3 is still important for the proof of Theorem 2, even if
different functions $q^{(t)}$ and $h^{(t,s)}$ are used. We refer readers to
the proof in the appendix for more details.
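In the tabular case, Eq. 2 reduces (for each fixed $t$ and $a$) to a finite linear system. The sketch below is a minimal illustration of this; it assumes that the conditional probabilities under $\mathcal{P}^{*}_{t}$ appearing in Eq. 2 have already been estimated, which is itself nontrivial and discussed later.

```python
import numpy as np

def solve_q_tabular(M, p_a_given_w):
    """Solve Eq. 2 for one fixed (t, a) in the tabular case (sketch).

    M[w, z] ~ P*_t(Z_t = z | W_t = w, A_t = a) and p_a_given_w[w] ~
    P*_t(A_t = a | W_t = w) are assumed already-estimated inputs.
    Returns q with q[z] = q^{(t)}(z, a). Least squares gives a
    minimum-norm solution when M is rectangular or rank-deficient,
    matching the fact that Eq. 2 may have multiple solutions.
    """
    target = 1.0 / p_a_given_w                      # right-hand side of Eq. 2
    q, *_ = np.linalg.lstsq(M, target, rcond=None)  # E[q(Z,a) | W=w, A=a] = target[w]
    return q
```

For example, with an invertible `M` the returned `q` satisfies the conditional moment equation exactly: `M @ q` equals `1 / p_a_given_w` entrywise.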
Comparing with Theorem 1, this result has many immediate advantages. It is
written as an expectation over $\mathcal{P}_{b}$, and so may be analyzed
readily using standard semiparametric efficiency theory. Although Eqs. 2
and 3 may appear complex given that they are expressed in terms of the
intervention distributions $\mathcal{P}^{*}_{t}$, this can easily be dealt
with, as we discuss later.
Furthermore, in the case that we use $\phi^{(t,s)}_{\text{DR}}$ for each
$t,s$, we present the following corollary, which provides a simplified
equation for $v_{\gamma}(\pi_{e})$.
###### Corollary 1.
Let Assumptions 2 and 3 hold. For each $t\in[H]$ let $h^{(t)}$ be any solution
to
$\mathbb{E}^{*}_{t}[h^{(t)}(W_{t},A_{t})\mid
Z_{t},A_{t}=a]=\mathbb{E}^{*}_{t}[\mathds{1}\\{E_{t}=A_{t}\\}Y_{t}\mid
Z_{t},A_{t}=a]\quad\text{a.s.}\quad\forall a\in\mathcal{A}\,,$ (4)
where we recursively define $Y_{H}=R_{H}$, and
$Y_{t-1}=R_{t-1}+\gamma\left(\sum_{a\in\mathcal{A}}h^{(t)}(W_{t},a)+q^{(t)}(Z_{t},A_{t})\left(\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}-h^{(t)}(W_{t},A_{t})\right)\right)\,,$
(5)
for $t>1$. In addition, let $\eta_{1}=1$, and for each $t>1$ define
$\eta_{t}=\prod_{s=1}^{t-1}q^{(s)}(Z_{s},A_{s})\mathds{1}\\{A_{s}=E_{s}\\}\,.$
(6)
Then, we have
$v_{\gamma}(\pi_{e})=\mathbb{E}_{\mathcal{P}_{b}}[\psi_{\text{IS}}(\tau_{H})]=\mathbb{E}_{\mathcal{P}_{b}}[\psi_{\text{Reg}}(\tau_{H})]=\mathbb{E}_{\mathcal{P}_{b}}[\psi_{\text{DR}}(\tau_{H})]$,
where
$\displaystyle\psi_{\text{IS}}(\tau_{H})$
$\displaystyle=\sum_{t=1}^{H}\gamma^{t-1}\eta_{t+1}R_{t}$ (7)
$\displaystyle\psi_{\text{Reg}}(\tau_{H})$
$\displaystyle=\sum_{a\in\mathcal{A}}h^{(1)}(W_{1},a)$ (8)
$\displaystyle\psi_{\text{DR}}(\tau_{H})$
$\displaystyle=\sum_{t=1}^{H}\gamma^{t-1}\left(\eta_{t+1}R_{t}+\eta_{t}\sum_{a\in\mathcal{A}}h^{(t)}(W_{t},a)-\eta_{t}q^{(t)}(Z_{t},A_{t})h^{(t)}(W_{t},A_{t})\right)\,.$
(9)
This corollary follows directly from Theorem 2, noting that
$Y_{t}=\sum_{s=t}^{H}\gamma^{s-t}Y_{t}^{(s)}$, and
$h^{(t)}=\sum_{s=t}^{H}\gamma^{s-t}h^{(t,s)}$. We note that Eq. 7 and Eq. 8
have very similar structures to importance sampling and direct method
estimators for the MDP setting, while Eq. 9 has a very similar structure to
the Double Reinforcement Learning (DRL) estimators for the MDP setting (Kallus
and Uehara, 2020), where the $h^{(t)}$ terms are similar to the quality
function terms, and the $\eta_{t}$ and $\eta_{t+1}$ terms are similar to the
importance sampling terms. In the case of Eq. 9 this is particularly
promising, since DRL estimators enjoy desirable properties such as
semiparametric efficiency in the MDP setting (Kallus and Uehara, 2020).
Indeed, in Section 5 we show that similar properties extend to estimators
defined based on Eq. 9.
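Given fitted bridge functions, Eq. 9 can be evaluated trajectory by trajectory. The sketch below computes $\psi_{\text{DR}}$ for one observed trajectory; the callables `q_hat[t]` and `h_hat[t]` stand in for plug-in nuisance estimates and are assumptions of this illustration, not part of the identification result.

```python
def psi_dr(Z, W, A, E, R, q_hat, h_hat, actions, gamma=1.0):
    """Doubly robust score psi_DR (Eq. 9) for one trajectory (sketch).

    Z, W, A, E, R: length-H sequences of negative controls, logged actions,
    target-policy actions E_t, and rewards; q_hat[t], h_hat[t]: hypothetical
    fitted bridge functions for each time step (0-indexed here).
    """
    H = len(R)
    eta = 1.0        # eta_1 = 1 (Eq. 6)
    total = 0.0
    for t in range(H):
        q_t = q_hat[t](Z[t], A[t])
        eta_next = eta * q_t * float(A[t] == E[t])   # eta_{t+1}
        h_sum = sum(h_hat[t](W[t], a) for a in actions)
        total += gamma ** t * (
            eta_next * R[t] + eta * h_sum - eta * q_t * h_hat[t](W[t], A[t])
        )
        eta = eta_next
    return total
```

As a sanity check, with $q^{(t)}\equiv 1$, $h^{(t)}\equiv 0$, and $A_{t}=E_{t}$ for all $t$, the score reduces to the plain discounted return of the trajectory.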
### 4.3 Specific Proximal Causal Inference Reductions and Resulting
Identification
We conclude this section with some discussion of how to actually construct a
valid PCI reduction. We provide several options of how this reduction may be
performed, and discuss in each case the assumptions that would be required of
the POMDP and evaluation policy for identification based on our results. In
all cases, we would need to assume or justify the implicit completeness and
regularity assumptions given the corresponding choice of $Z_{t}$ and $W_{t}$.
Furthermore, we note that the practicality of any given reduction would depend
heavily on how well-correlated $W_{t}$ and $Z_{t}$ are for each $t$, which in
turn would impact how easily the required nuisance functions $q^{(t)}$ and
$h^{(t)}$ could be fit.
#### Current and past observation.
A simple choice would be to define $W_{t}=O_{t}$, and $Z_{t}=O_{t-1}$. Since
we require that $Z_{t}$ is conditionally independent of the actions taken by
$\pi_{e}$ at time $t$ onward (given $S_{t}$ and $A_{t}$), this choice would be
valid for instance if $\pi_{e}^{(t)}$ did not depend on the prior observations
$O_{0:t-1}$.
#### Current and $k$-previous observation.
A slight generalization of the previous reduction would be to use
$W_{t}=O_{t}$ and $Z_{t}=O_{\max(t-k,0)}$. This would be valid if
$\pi_{e}^{(t)}$ did not depend on observations $O_{0:\max(t-k,0)}$, _i.e._ ,
only the $k$-most recent observations are used for decision making.
#### Current and initial observation.
In the case that $\pi_{e}^{(t)}$ depended on all previous observations except for
$O_{0}$, we may use $W_{t}=O_{t}$ and $Z_{t}=O_{0}$.
#### Two views of current observation.
If each observation factored as $O_{t}=(O_{t}^{\prime},O_{t}^{\prime\prime})$,
where $O_{t}^{\prime}$ and $O_{t}^{\prime\prime}$ were conditionally
independent given $S_{t}$ and $A_{t}$, then we may justify choosing
$Z_{t}=O_{t}^{\prime\prime}$ and $W_{t}=O_{t}^{\prime}$. This would be valid
if $\pi_{e}^{(t)}$ did not depend on $O_{t^{\prime}}^{\prime\prime}$ for any
$t^{\prime}\leq t$. This may be an appealing approach if observations
contained certain view(s) of the state that we did not want to explicitly
consider in decision making, for example for fairness or interpretability
reasons.
#### Current observation and previous reward.
Finally, in the case that $\pi_{e}^{(t)}$ did not depend on past rewards, we
may choose $W_{t}=O_{t}$ and $Z_{t}=R_{t-1}$. Note that this implies a “prior
reward” $R_{0}$; in the case that $\pi_{e}^{(t)}$ did not depend on $O_{0}$
for any $t$, then we could avoid this issue by modifying the reduction,
instead defining $Z_{1}=O_{0}$.
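The reductions above differ only in how $(Z_{t}, W_{t})$ are read off the observed trajectory. A minimal sketch, using our own shorthand names for the reductions (the two-views reduction is omitted since it depends on how $O_{t}$ factors):

```python
def make_negative_controls(obs, rewards, reduction="past", k=1):
    """Construct (Z_t, W_t) for t = 1..H from one observed trajectory (sketch).

    obs = [O_0, O_1, ..., O_H]; rewards = [R_1, ..., R_H].
    The reduction names below are our own shorthand for the options above.
    """
    H = len(obs) - 1
    W = [obs[t] for t in range(1, H + 1)]             # W_t = O_t in all cases
    if reduction == "past":                           # Z_t = O_{t-1}
        Z = [obs[t - 1] for t in range(1, H + 1)]
    elif reduction == "k_past":                       # Z_t = O_{max(t-k, 0)}
        Z = [obs[max(t - k, 0)] for t in range(1, H + 1)]
    elif reduction == "initial":                      # Z_t = O_0
        Z = [obs[0]] * H
    elif reduction == "prev_reward":                  # Z_1 = O_0, Z_t = R_{t-1}
        Z = [obs[0]] + [rewards[t - 2] for t in range(2, H + 1)]
    else:
        raise ValueError(reduction)
    return Z, W
```

Whichever reduction is chosen, the resulting $(Z_{t}, W_{t})$ pairs are what get fed into the bridge-function estimation, so their mutual correlation (through $S_{t}$) directly affects how hard the nuisances are to fit.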
## 5 Policy Value Estimators
Now we turn from the question of identification to estimation. We will focus
on estimation of $v_{\gamma}(\pi_{e})$ based on the identification result given by
Corollary 1. A natural approach to estimating $v_{\gamma}(\pi_{e})$ based on
this would be to use an estimator of the kind
$\hat{v}^{(n)}_{\gamma}(\pi_{e})=\frac{1}{n}\sum_{i=1}^{n}\widehat{\psi}_{\text{DR}}(\tau_{H}^{(i)})\,,$
(10)
where $\widehat{\psi}_{\text{DR}}$ is an approximation of $\psi_{\text{DR}}$
using plug-in estimators for the nuisance functions $h^{(t)}$ and $q^{(t)}$
for each $t$. In the remainder of this section, we will discuss the properties
of estimators of this kind. We will assume in the remainder of this section
that we have fixed a valid PCI reduction that satisfies Assumptions 2 and 3.
### 5.1 Consistency and Asymptotic Normality
We first consider conditions under which the estimator
$\hat{v}^{(n)}_{\gamma}(\pi_{e})$ is consistent and asymptotically normal. For
this, we need to make some assumptions on the quality of our estimated
nuisance functions $\hat{q}^{(t)}$ and $\hat{h}^{(t)}$. Before we introduce
these assumptions, we need to introduce some additional notation. Specifically,
for any quantity $\Psi$ that depends on the nuisance functions $q^{(t)}$
and/or $h^{(t)}$, let $\Delta\Psi=\hat{\Psi}-\Psi$ denote the difference
between the estimated quantity $\hat{\Psi}$ using the plugin estimated
nuisances, and the true quantity $\Psi$ using the true nuisances. Then, our
assumption on nuisance estimation quality is as follows.
###### Assumption 4.
For each $a\in\mathcal{A}$, and each
$\Psi,\Psi^{\prime}\in\\{h^{(t)}(W_{t},a),q^{(t)}(Z_{t},A_{t}):t\in[H]\\}$
such that $\Psi\neq\Psi^{\prime}$, the following hold, where the randomness in
each bound is defined with respect to the sampling distribution defining the
estimated nuisances plugged into the $\Delta\Psi$ terms.
1. 1.
$\|\Delta\Psi\|_{2,\mathcal{P}_{b}}=o_{p}(1)$
2. 2.
$\|\Delta\Psi\|_{2,\mathcal{P}_{b}}\|\Delta\Psi^{\prime}\|_{2,\mathcal{P}_{b}}=o_{p}(n^{-1/2})$
3. 3.
$\|\Delta\Psi\|_{\infty}=O_{p}(1)$
4. 4.
$\|\Psi\|_{\infty}<\infty$
Essentially, Assumption 4 requires that the nuisances $q^{(t)}$ and $h^{(t)}$
are estimated consistently in terms of the $L_{2,\mathcal{P}_{b}}$ functional
norm for each $t$, and that the corresponding product-error terms converge at
the sub-parametric $o_{p}(n^{-1/2})$ rate. This could be achieved, for
example, if each nuisance were estimated at the $o_{p}(n^{-1/4})$ rate, which
is obtainable for many non-parametric machine learning-based methods
(Chernozhukov et al., 2016). In addition, we require a technical boundedness
condition on the uniform norm of the errors and of the true nuisances
themselves. Given this, we can now present our main consistency and asymptotic
normality theorem.
###### Theorem 3.
Let the conditions of Theorem 2 be given, and assume that the nuisance
functions plugged into $\hat{v}^{(n)}_{\gamma}(\pi_{e})$ are estimated using
cross fitting. Furthermore, suppose that the nuisance estimation for each
cross-fitting fold satisfies Assumption 4. Then, we have
$\sqrt{n}(\hat{v}^{(n)}_{\gamma}(\pi_{e})-v_{\gamma}(\pi_{e}))\to\mathcal{N}(0,\sigma^{2}_{\text{DR}})$
in distribution, where
$\sigma^{2}_{\text{DR}}=\mathbb{E}_{\mathcal{P}_{b}}[(\psi_{\text{DR}}(\tau_{H})-v_{\gamma}(\pi_{e}))^{2}]\,.$
The proof of Theorem 3 relies on the fact that $\psi_{\text{DR}}$ satisfies a
Neyman orthogonality condition with respect to all nuisance functions, and by
applying Chernozhukov et al. (2016, Theorem 3.1). We refer the reader to the
appendix for the detailed proof. We also note that Theorem 3 depends on the
nuisances being fit using cross-fitting. Concretely, this means that we
randomly split the observed trajectories into $K$ folds for some fixed $K\geq
2$, for each fold we compute separate nuisances $\hat{q}^{(t)}$ and
$\hat{h}^{(t)}$ using only the data outside of that fold, and then we compute
$\hat{v}^{(n)}_{\gamma}(\pi_{e})$ with each term
$\widehat{\psi}_{\text{DR}}(\tau_{H}^{(i)})$ computed using the nuisances
estimated excluding $\tau_{H}^{(i)}$. We refer the reader to Chernozhukov et
al. (2016) for a more detailed description of cross-fitting.
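Concretely, the cross-fitting procedure just described might be organized as follows. Here `fit_nuisances` and `score` are user-supplied placeholders whose internals (how $\hat{q}^{(t)}, \hat{h}^{(t)}$ are fit and how $\widehat{\psi}_{\text{DR}}$ is evaluated) depend on the chosen PCI reduction; this is a sketch of the fold logic only.

```python
import numpy as np

def cross_fit_estimate(trajectories, fit_nuisances, score, K=2, seed=0):
    """K-fold cross-fitted estimator in the style of Eq. 10 (sketch).

    fit_nuisances(train) -> (q_hat, h_hat) fits nuisances on off-fold data;
    score(tau, q_hat, h_hat) -> float evaluates psi_DR on one trajectory.
    """
    n = len(trajectories)
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, K)
    scores = np.empty(n)
    for fold in folds:
        fold_set = set(fold.tolist())
        train = [trajectories[i] for i in range(n) if i not in fold_set]
        q_hat, h_hat = fit_nuisances(train)   # nuisances fit excluding this fold
        for i in fold:
            scores[i] = score(trajectories[i], q_hat, h_hat)
    return scores.mean(), scores.std(ddof=1) / np.sqrt(n)  # estimate, std. error
```

The key point is that each trajectory's score is computed with nuisances estimated on data excluding that trajectory, which is what the proof of Theorem 3 relies on.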
One technical note about this theorem is that there may be multiple $q^{(t)}$
and $h^{(t)}$ that solve Eqs. 2 and 4, which creates some ambiguity in both
Assumption 4 and the definition of $\psi_{\text{DR}}(\tau_{H})$. This is
important, since the ambiguity in the definition of
$\psi_{\text{DR}}(\tau_{H})$ affects the value of the asymptotic variance
$\sigma^{2}_{\text{DR}}$. In this case, we implicitly assume that Assumption 4
holds for some arbitrarily given solutions $q^{(t)}$ and $h^{(t)}$ for each
$t\in[H]$, and that $\sigma^{2}_{\text{DR}}$ is defined using the same
$q^{(t)}$ and $h^{(t)}$ solutions.
### 5.2 Semiparametric Efficiency
We now consider the question of _semiparametric efficiency_ of our OPE
estimators. Semiparametric efficiency is defined relative to a model
$\mathcal{M}$, which is a set of allowed distributions such that
$\mathcal{P}_{b}\in\mathcal{M}$. Roughly speaking, we say that an estimator is
semiparametrically efficient _w.r.t._ $\mathcal{M}$ if it is regular (meaning
invariant to $O_{p}(1/\sqrt{n})$ perturbations in the data generating process
that are allowed by $\mathcal{M}$), and achieves the minimum asymptotic
variance of all regular estimators. We provide a detailed overview of
semiparametric efficiency in Appendix A, but for the purposes of this section
it suffices to say that there exists a function $\psi_{\text{eff}}\in
L_{2,\mathcal{P}_{b}}(\tau_{H})$, called the “efficient influence function”
(EIF) _w.r.t._ $\mathcal{M}$, and that an estimator
$\hat{v}^{(n)}_{\gamma}(\pi_{e})$ is efficient _w.r.t._ $\mathcal{M}$ if and
only if
$\sqrt{n}(\hat{v}^{(n)}_{\gamma}(\pi_{e})-v_{\gamma}(\pi_{e}))=n^{-1/2}\sum_{i=1}^{n}\psi_{\text{eff}}(\tau_{H}^{(i)})+o_{p}(1)$.
One complication in considering models of distributions on $\tau_{H}$ is that
technically the definition of $v_{\gamma}(\pi_{e})$ depends on the full
distribution of $\tau_{H}^{\textup{full}}$. In the case that the distribution
of $\tau_{H}$ corresponds to the logging distribution induced by some behavior
policy and underlying POMDP that satisfies Assumption 3, it is clear from
Theorem 2 that using _any_ nuisances satisfying the required conditional
moment restrictions will result in the same policy value estimate $v_{\gamma}(\pi_{e})$.
However, if we allow for distributions on $\tau_{H}$ that do not necessarily
satisfy such conditions, as is standard in the literature on policy
evaluation, it may be the case that different solutions for $h^{(t)}$ and
$q^{(t)}$ result in different values of
$\mathbb{E}_{\mathcal{P}}[\psi_{\text{DR}}(\tau_{H})]$. To avoid such issues,
we consider a model of distributions where the nuisances and corresponding
policy value estimate are uniquely defined, as follows.
###### Definition 1 (Model and Target Parameter).
Define $\mathcal{M}_{e}^{(0)}$ as the set of all distributions on $\tau_{H}$,
and for each $t\geq 1$ recursively define:
1. 1.
$\eta_{t,\mathcal{P}}=\prod_{s=1}^{t-1}q^{(s)}_{\mathcal{P}}(Z_{s},A_{s})\mathds{1}\\{A_{s}=E_{s}\\}$,
for $\mathcal{P}\in\mathcal{M}_{e}^{(t-1)}$
2. 2.
$P^{*}_{t,\mathcal{P}}(A_{t}\mid
W_{t})=\mathbb{E}_{\mathcal{P}}[\eta_{t,\mathcal{P}}\mid
W_{t},A_{t}]P_{\mathcal{P}}(A_{t}\mid W_{t})$, for
$\mathcal{P}\in\mathcal{M}_{e}^{(t-1)}$
3. 3.
$T_{t,\mathcal{P}}:L_{2,\mathcal{P}}(Z_{t},A_{t})\mapsto
L_{2,\mathcal{P}}(W_{t},A_{t})$ where
$(T_{t,\mathcal{P}}g)(W_{t},A_{t})=\mathbb{E}_{\mathcal{P}}[\eta_{t,\mathcal{P}}g(Z_{t},A_{t})\mid
W_{t},A_{t}]$, for $\mathcal{P}\in\mathcal{M}_{e}^{(t-1)}$
4. 4.
$\mathcal{M}_{e}^{(t)}=\mathcal{M}_{e}^{(t-1)}\cap\\{\mathcal{P}:T_{t,\mathcal{P}}\text{
is invertible and }P^{*}_{t,\mathcal{P}}(A_{t}\mid W_{t})^{-1}\in
L_{2,\mathcal{P}}(W_{t},A_{t})\\}$
5. 5.
$q^{(t)}_{\mathcal{P}}(Z_{t},A_{t})=T_{t,\mathcal{P}}^{-1}\left(P^{*}_{t,\mathcal{P}}(A_{t}\mid
W_{t})^{-1}\right)$, for $\mathcal{P}\in\mathcal{M}_{e}^{(t)}$
Furthermore, let $T^{*}_{t,\mathcal{P}}$ denote the adjoint of
$T_{t,\mathcal{P}}$, define $Y_{H}=R_{h}$, and for each $t\in[H]$ recursively
define
1. 1.
$\mu_{t,\mathcal{P}}(Z_{t},A_{t})=\mathbb{E}_{\mathcal{P}}[\eta_{t,\mathcal{P}}\mathds{1}\\{A_{t}=E_{t}\\}Y_{t,\mathcal{P}}\mid
Z_{t},A_{t}]$, for $\mathcal{P}\in\mathcal{M}_{e}^{(t)}$
2. 2.
$h^{(t)}_{\mathcal{P}}(W_{t},A_{t})=(T^{*}_{t,\mathcal{P}})^{-1}\left(\mu_{t,\mathcal{P}}(Z_{t},A_{t})\right)$,
for $\mathcal{P}\in\mathcal{M}_{e}^{(t)}$
3. 3.
$Y_{t-1,\mathcal{P}}=R_{t-1}+\gamma\left(\sum_{a\in\mathcal{A}}h^{(t)}_{\mathcal{P}}(W_{t},a)+q^{(t)}_{\mathcal{P}}(Z_{t},A_{t})\left(\mathds{1}\\{A_{t}=E_{t}\\}Y_{t,\mathcal{P}}-h^{(t)}_{\mathcal{P}}(W_{t},A_{t})\right)\right)$,
for $\mathcal{P}\in\mathcal{M}_{e}^{(t)}$
where the latter is only defined for $t>1$. Finally, let
$\mathcal{M}_{\textup{PCI}}=\mathcal{M}_{e}^{(H)}$, and for each
$\mathcal{P}\in\mathcal{M}_{\textup{PCI}}$ define
$V(\mathcal{P})=\mathbb{E}_{\mathcal{P}}\left[\sum_{a\in\mathcal{A}}h^{(1)}_{\mathcal{P}}(W_{1},a)\right]\,.$
We note that this definition is not circular, since $\eta_{1,\mathcal{P}}=1$
for every $\mathcal{P}$, and so we can concretely define the first set of
quantities in the order they are listed above for each $t\in[H]$ in ascending
order, and the second set in descending order of $t$. We note that in the case
that $\mathcal{P}=\mathcal{P}_{b}$, it is straightforward to reason that
$\eta_{t,\mathcal{P}_{b}}$, $q_{\mathcal{P}_{b}}^{(t)}$,
$h_{\mathcal{P}_{b}}^{(t)}$, and $Y_{t,\mathcal{P}_{b}}$ agree with the
corresponding definitions in Theorems 2 and 1; $T_{t,\mathcal{P}_{b}}$ and
$T^{*}_{t,\mathcal{P}_{b}}$ correspond to standard conditional expectation
operators under $\mathcal{P}^{*}_{t}$; $P^{*}_{t,\mathcal{P}_{b}}(A_{t}\mid
W_{t})=P^{*}_{t}(A_{t}\mid W_{t})$; and
$V(\mathcal{P}_{b})=v_{\gamma}(\pi_{e})$. Therefore,
$\mathcal{M}_{\textup{PCI}}$ is a natural model of observational distributions
where the required nuisances are uniquely defined, and $V(\mathcal{P})$ is a
natural and uniquely defined generalization of $v_{\gamma}(\pi_{e})$ for
distributions $\mathcal{P}$ that cannot be defined as the logging distribution
for some POMDP and behavior policy satisfying Assumption 3.
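To make the recursion in Definition 1 concrete, consider a discrete setting and a single time step, where $\eta_{1,\mathcal{P}}=1$ so that $P^{*}_{1,\mathcal{P}}(A_{1}\mid W_{1})=P_{\mathcal{P}}(A_{1}\mid W_{1})$ and $T_{1,\mathcal{P}}$ reduces to a finite matrix of conditional probabilities. The following sketch, under a hypothetical randomly drawn joint distribution (all names and shapes are illustrative only), computes $q^{(1)}_{\mathcal{P}}$ by a linear solve and numerically verifies the implied moment restriction:

```python
import numpy as np

rng = np.random.default_rng(0)
nZ, nW, nA = 3, 3, 2

# Hypothetical joint distribution P(Z, W, A) for the first step, where
# eta_1 = 1 and hence P*_1(a | w) = P(a | w).
P = rng.dirichlet(np.ones(nZ * nW * nA)).reshape(nZ, nW, nA)
P_wa = P.sum(axis=0)                                  # P(W, A)
P_a_given_w = P_wa / P_wa.sum(axis=1, keepdims=True)  # P(A | W)

q1 = np.zeros((nZ, nA))
for a in range(nA):
    # (T_1 g)(w, a) = E[g(Z, a) | W = w, A = a]: a row-stochastic matrix
    T = (P[:, :, a] / P_wa[None, :, a]).T             # T[w, z] = P(z | w, a)
    # q^(1)(., a) = T^{-1}(1 / P*_1(a | .)) via a linear solve
    q1[:, a] = np.linalg.solve(T, 1.0 / P_a_given_w[:, a])

# Check the moment restriction E[g(W,A) q(Z,A)] = E[sum_a g(W,a)] for a
# randomly drawn test function g.
g = rng.standard_normal((nW, nA))
lhs = sum(P[z, w, a] * q1[z, a] * g[w, a]
          for z in range(nZ) for w in range(nW) for a in range(nA))
rhs = sum(P[z, w, a] * g[w, :].sum()
          for z in range(nZ) for w in range(nW) for a in range(nA))
assert np.isclose(lhs, rhs)
```

Here the linear solve plays the role of the operator inverse $T_{1,\mathcal{P}}^{-1}$; invertibility of the sampled matrix corresponds to the condition defining $\mathcal{M}_{e}^{(1)}$.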
Finally, we assume the following about the actual observed
distribution $\mathcal{P}_{b}$.
###### Assumption 5.
For every sequence of distributions $\mathcal{P}_{n}$ that converges in law to
$\mathcal{P}_{b}$, there exists some integer $N$ such that for all $n\geq N$
and $t\in[H]$ the operators $T_{t,\mathcal{P}_{n}}$ and
$T^{*}_{t,\mathcal{P}_{n}}$ are invertible. Furthermore, for all such
sequences and $t\in[H]$ we also have
1. 1.
$\liminf_{n\to\infty}\inf_{\|f(Z_{t},A_{t})\|_{1,\mathcal{P}_{n}}\geq
1}\|T_{t,\mathcal{P}_{n}}f(Z_{t},A_{t})\|_{1,\mathcal{P}_{n}}>0$
2. 2.
$\liminf_{n\to\infty}\inf_{\|g(W_{t},A_{t})\|_{1,\mathcal{P}_{n}}\geq
1}\|T^{*}_{t,\mathcal{P}_{n}}g(W_{t},A_{t})\|_{1,\mathcal{P}_{n}}>0$
3. 3.
$\limsup_{n\to\infty}\|P^{*}_{t,\mathcal{P}_{n}}(A_{t}\mid
W_{t})^{-1}\|_{\infty}<\infty$ .
In addition, for each $t\in[H]$ the distribution $\mathcal{P}_{b}$ satisfies
1. 4.
$\inf_{\|f(Z_{t},A_{t})\|_{2,\mathcal{P}_{b}}\geq
1}\|T_{t,\mathcal{P}_{b}}f(Z_{t},A_{t})\|_{2,\mathcal{P}_{b}}>0$
2. 5.
$\inf_{\|g(W_{t},A_{t})\|_{2,\mathcal{P}_{b}}\geq
1}\|T^{*}_{t,\mathcal{P}_{b}}g(W_{t},A_{t})\|_{2,\mathcal{P}_{b}}>0$ .
The condition that $T_{t,\mathcal{P}_{n}}$ and $T^{*}_{t,\mathcal{P}_{n}}$ are
invertible for large $n$ ensures that the model $\mathcal{M}_{\textup{PCI}}$
is locally saturated at $\mathcal{P}_{b}$, and the additional conditions
ensure that the nuisance functions can be uniformly bounded within parametric
submodels. These are very technical conditions used in our semiparametric
efficiency proof, and it may be possible to relax them. We note also that in
discrete settings, these conditions follow easily given
$\mathcal{P}_{b}\in\mathcal{M}_{\textup{PCI}}$, since in this setting the
conditions can be characterized in terms of the entries or eigenvalues of some
probability matrices being bounded away from zero, which by continuity must be
the case when $\mathcal{P}_{n}$ is sufficiently close to $\mathcal{P}_{b}$. In
continuous settings this kind of reasoning becomes more complicated, but it
might still be possible to derive Assumption 5 from simpler assumptions.
Importantly, the locally saturated condition on $\mathcal{M}_{\textup{PCI}}$
at $\mathcal{P}_{b}$ means that we do not have to explicitly consider the
tangent space of $\mathcal{M}_{\textup{PCI}}$. We chose to enforce this since
the correct tangent space under more general assumptions appears to be very
complex and difficult to define concretely. Furthermore, although more
specific tangent spaces have been proposed in past work on proximal causal
inference, their correctness has not been properly justified. We do not
explore these issues here in further detail as they are technically complex
and besides the point of this paper, but for completeness we provide details
in Appendix B.
Given this setup, we can now present our main efficiency result.
###### Theorem 4.
Suppose that $\mathcal{P}_{b}$ is the observational distribution given by a
POMDP and logging policy that satisfies the conditions of Theorem 2, and let
Assumption 5 be given. Then, $\psi_{\text{DR}}(\tau_{H})-v_{\gamma}(\pi_{e})$
is the efficient influence function for $V(\mathcal{P})$ locally at
$\mathcal{P}=\mathcal{P}_{b}$.
Finally, the following corollary combines this result with Theorem 3, showing
that under the same conditions, if the nuisances are appropriately estimated,
then the resulting estimator achieves the semiparametric efficiency bound
relative to $\mathcal{M}_{\textup{PCI}}$.
###### Corollary 2.
Let the conditions of Theorems 3 and 4 be given. Then, the estimator
$\hat{v}^{(n)}_{\gamma}(\pi_{e})$ is semiparametrically efficient relative to
the model $\mathcal{M}_{\textup{PCI}}$.
### 5.3 Nuisance Estimation
Finally, we conclude this section with a discussion of how we may actually
estimate $q^{(t)}$ and $h^{(t)}$. The conditional moment equations Eqs. 2 and
4 defining these nuisances are defined in terms of the intervention
distributions $\mathcal{P}^{*}_{t}$, which are not directly observable.
Therefore, we provide the following lemma, which re-frames these as a nested
series of conditional moment restrictions under $\mathcal{P}_{b}$.
###### Lemma 2.
Let the conditions of Theorem 2 be given. Then, for any collection of
functions $q^{(1)},\ldots,q^{(H)}$ and $h^{(1)},\ldots,h^{(H)}$, these
functions satisfy Eqs. 2 and 4 for every $t\in[H]$ if and only if for every
$t\in[H]$ we have
$\mathbb{E}_{\mathcal{P}_{b}}\left[\eta_{t}\left(g(W_{t},A_{t})q^{(t)}(Z_{t},A_{t})-\sum_{a\in\mathcal{A}}g(W_{t},a)\right)\right]=0\,,$
for all measurable $g(W_{t},A_{t})$, and
$\mathbb{E}_{\mathcal{P}_{b}}\left[\eta_{t}\left(h^{(t)}(W_{t},A_{t})-\mathds{1}\\{E_{t}=A_{t}\\}Y_{t}\right)\,\Big{|}\,Z_{t},A_{t}\right]=0\quad\text{a.s.}\,,$
where $\eta_{t}$ and $Y_{t}$ are defined as in Corollary 1.
We can observe that the moment restrictions defining $q^{(t)}$ for each $t$
depend only on $q^{(t^{\prime})}$ for $t^{\prime}<t$, and those defining
$h^{(t)}$ for each $t$ depend on $h^{(t^{\prime})}$ for $t^{\prime}>t$ and on
$q^{(t^{\prime\prime})}$ for every $t^{\prime\prime}\neq t$. This suggests a
natural order for estimating these nuisances, of $q^{(1)}$ through $q^{(H)}$
first, and then $h^{(H)}$ through $h^{(1)}$. Alternatively, we may jointly
solve for all these nuisances together as a set of $2H$ continua of moment
equations. In addition, the above moment restrictions defining $q^{(t)}$ have
the advantage that they do not explicitly depend on any additional nuisance
functions such as $P^{*}_{t}(A_{t}\mid W_{t})^{-1}$.
Next, we propose a specific meta-algorithm for estimating these nuisances
sequentially, based on the Kernel VMM algorithm of Bennett and Kallus (2021),
which was previously proposed for solving conditional moment problems. This
meta-algorithm assumes some function classes $\mathcal{Q}^{(t)}$ and
$\mathcal{H}^{(t)}$ from which our estimates $q^{(t)}$ and $h^{(t)}$ will be
chosen for each $t\in[H]$. In addition, it requires kernel functions
$K^{(q,t)}$ and $K^{(h,t)}$ for each $t\in[H]$, where the former is defined on
pairs of $(W_{t},A_{t})$ tuples, and the latter on pairs of $(Z_{t},A_{t})$
tuples, as well as hyperparameters $\alpha^{(q,t)}\geq 0$ and
$\alpha^{(h,t)}\geq 0$, and optional regularization functions
$\mathcal{R}^{(q,t)}$ on $q\in\mathcal{Q}^{(t)}$ and $\mathcal{R}^{(h,t)}$ on
$h\in\mathcal{H}^{(t)}$. We note that any of the above inputs may in
general be data-driven. Furthermore, for any random variable $X$ measurable
_w.r.t._ $\tau_{H}$ we let $\mathbb{S}(X)$ denote the set of unique values of
$X$ observed in our dataset, define $N(X)=|\mathbb{S}(X)|$, and denote the
elements of $\mathbb{S}(X)$ by
$\\{\mathbb{S}(X)_{1},\ldots,\mathbb{S}(X)_{N(X)}\\}$, where the ordering of
the elements is arbitrary but fixed. We note that for any $X$ it must be the
case that $N(X)\leq n$, since we only observe $n$ trajectories $\tau_{H}$.
Similarly, we define
$\mathbb{S}(X;\mathcal{A})=\\{(x,a):x\in\mathbb{S}(X),a\in\mathcal{A}\\}$,
$N(X;\mathcal{A})=|\mathcal{A}|N(X)$, and again assume an arbitrary but fixed
ordering of the elements in $\mathbb{S}(X;\mathcal{A})$. Finally, we assume
access to some prior estimates $\tilde{q}^{(t)}$ and $\tilde{h}^{(t)}$, which
may be defined arbitrarily and need not necessarily be consistent. Given this,
we present our algorithm in Algorithm 1.
Algorithm 1 Sequential VMM for PCI-POMDP Nuisance Estimation
1: Data $\mathcal{D}=(\tau_{H}^{(1)},\ldots,\tau_{H}^{(n)})$, nuisance
function classes $\mathcal{Q}^{(t)}$ and $\mathcal{H}^{(t)}$, kernel functions
$K^{(q,t)}$ and $K^{(h,t)}$, hyperparameters $\alpha^{(q,t)}$ and
$\alpha^{(h,t)}$, prior estimates $\tilde{q}^{(t)}$ and $\tilde{h}^{(t)}$, and
optional regularization functions $\mathcal{R}^{(q,t)}$ and
$\mathcal{R}^{(h,t)}$, for all $t\in[H]$
2:Nuisance estimates $\hat{q}^{(t)}$ and $\hat{h}^{(t)}$ for all $t\in[H]$
3:for $t\in\\{1,2,\ldots,H\\}$ do
4: if $t=1$ then
5: $\eta_{i}^{(t)}\leftarrow 1$ $\triangleright$ $i\in[n]$
6: else
7:
$\eta_{i}^{(t)}\leftarrow\eta_{i}^{(t-1)}\mathds{1}\\{A_{t-1}^{(i)}=E_{t-1}^{(i)}\\}\hat{q}^{(t-1)}(Z_{t-1}^{(i)},A_{t-1}^{(i)})$
$\triangleright$ $i\in[n]$
8: end if
9: $\hat{q}^{(t)}\leftarrow$ ComputeQ($\mathcal{D}$, $t$, $\mathcal{Q}^{(t)}$,
$\mathcal{R}^{(q,t)}$, $\alpha^{(q,t)}$, $\tilde{q}^{(t)}$, $K^{(q,t)}$,
$\eta^{(t)}$)
10:end for
11:for $t\in\\{H,H-1,\ldots,1\\}$ do
12: if $t=H$ then
13: $\omega_{i}^{(t)}\leftarrow 0$ $\triangleright$ $i\in[n]$
14: else
15:
$\omega_{i}^{(t)}\leftarrow\sum_{a\in\mathcal{A}}\hat{h}^{(t+1)}(W^{(i)}_{t+1},a)+\hat{q}^{(t+1)}(Z^{(i)}_{t+1},A^{(i)}_{t+1})\left(\mu_{i}^{(t+1)}-\hat{h}^{(t+1)}(W^{(i)}_{t+1},A^{(i)}_{t+1})\right)$
$\triangleright$ $i\in[n]$
16: end if
17:
$\mu_{i}^{(t)}\leftarrow\mathds{1}\\{A_{t}^{(i)}=E_{t}^{(i)}\\}(R_{t}^{(i)}+\gamma\omega_{i}^{(t)})$
$\triangleright$ $i\in[n]$
18: $\hat{h}^{(t)}\leftarrow$ ComputeH($\mathcal{D}$, $t$,
$\mathcal{H}^{(t)}$, $\mathcal{R}^{(h,t)}$, $\alpha^{(h,t)}$,
$\tilde{h}^{(t)}$, $K^{(h,t)}$, $\eta^{(t)}$, $\mu^{(t)}$)
19:end for
20:return $\hat{q}^{(1)},\ldots,\hat{q}^{(H)}$,
$\hat{h}^{(1)},\ldots,\hat{h}^{(H)}$
21:function ComputeQ($\mathcal{D}$, $t$, $\mathcal{Q}$, $\mathcal{R}$,
$\alpha$, $\tilde{q}$, $K$, $\eta$)
22: $L_{i,j}\leftarrow
K((W_{t}^{(i)},A_{t}^{(i)}),\mathbb{S}(W_{t};\mathcal{A})_{j})$
$\triangleright$ $i\in[n],j\in[N(W_{t};\mathcal{A})]$
23:
$\tilde{L}_{i,j}\leftarrow\sum_{a\in\mathcal{A}}K((W_{t}^{(i)},a),\mathbb{S}(W_{t};\mathcal{A})_{j})$
$\triangleright$ $i\in[n],j\in[N(W_{t};\mathcal{A})]$
24:
$M_{i,j}\leftarrow\eta_{i}\left(\tilde{q}(Z_{t}^{(i)},A_{t}^{(i)})L_{i,j}-\tilde{L}_{i,j}\right)$
$\triangleright$ $i\in[n],j\in[N(W_{t};\mathcal{A})]$
25: $Q_{i,j}\leftarrow\frac{1}{n}\sum_{k=1}^{n}M_{k,i}M_{k,j}+\alpha
K(\mathbb{S}(W_{t};\mathcal{A})_{i},\mathbb{S}(W_{t};\mathcal{A})_{j})$
$\triangleright$ $i,j\in[N(W_{t};\mathcal{A})]$
26:
$B_{i,j}\leftarrow\frac{1}{n}\sum_{k=1}^{n}\eta_{k}L_{k,i}\mathds{1}\\{(Z_{t}^{(k)},A_{t}^{(k)})=\mathbb{S}(Z_{t},A_{t})_{j}\\}$
$\triangleright$ $i\in[N(W_{t};\mathcal{A})],j\in[N(Z_{t},A_{t})]$
27:
$\rho(q)_{i}\leftarrow\sum_{j\in[N(Z_{t},A_{t})]}B_{i,j}q(\mathbb{S}(Z_{t},A_{t})_{j})-\frac{1}{n}\sum_{k=1}^{n}\eta_{k}\tilde{L}_{k,i}$
$\triangleright$ $i\in[N(W_{t};\mathcal{A})],q\in\mathcal{Q}$
28: return
$\operatorname*{arg\,min}_{q\in\mathcal{Q}}\rho(q)^{T}Q^{-1}\rho(q)+\mathcal{R}(q)$
29:end function
30:function ComputeH($\mathcal{D}$, $t$, $\mathcal{H}$, $\mathcal{R}$,
$\alpha$, $\tilde{h}$, $K$, $\eta$, $\mu$)
31: $L_{i,j}\leftarrow
K((Z_{t}^{(i)},A_{t}^{(i)}),\mathbb{S}(Z_{t},A_{t})_{j})$ $\triangleright$
$i\in[n],j\in[N(Z_{t},A_{t})]$
32:
$M_{i,j}\leftarrow\eta_{i}L_{i,j}\left(\tilde{h}(W_{t}^{(i)},A_{t}^{(i)})-\mu_{i}\right)$
$\triangleright$ $i\in[n],j\in[N(Z_{t},A_{t})]$
33: $Q_{i,j}\leftarrow\frac{1}{n}\sum_{k=1}^{n}M_{k,i}M_{k,j}+\alpha
K(\mathbb{S}(Z_{t},A_{t})_{i},\mathbb{S}(Z_{t},A_{t})_{j})$ $\triangleright$
$i,j\in[N(Z_{t},A_{t})]$
34:
$B_{i,j}\leftarrow\frac{1}{n}\sum_{k=1}^{n}\eta_{k}L_{k,i}\mathds{1}\\{(W_{t}^{(k)},A_{t}^{(k)})=\mathbb{S}(W_{t},A_{t})_{j}\\}$
$\triangleright$ $i\in[N(Z_{t},A_{t})],j\in[N(W_{t},A_{t})]$
35:
$\rho(h)_{i}\leftarrow\sum_{j\in[N(W_{t},A_{t})]}B_{i,j}h(\mathbb{S}(W_{t},A_{t})_{j})-\frac{1}{n}\sum_{k=1}^{n}\eta_{k}\mu_{k}L_{k,i}$
$\triangleright$ $i\in[N(Z_{t},A_{t})],h\in\mathcal{H}$
36: return
$\operatorname*{arg\,min}_{h\in\mathcal{H}}\rho(h)^{T}Q^{-1}\rho(h)+\mathcal{R}(h)$
37:end function
We provide a derivation of this algorithm in Appendix D. We note that it is a
meta-algorithm, since it requires some additional procedures to solve the
respective minimization problems over $q\in\mathcal{Q}^{(t)}$ and
$h\in\mathcal{H}^{(t)}$ at the end of ComputeQ and ComputeH
respectively. However, solving such problems is very standard and well
studied, so we do not consider it explicitly. In the case that the data is
discrete this algorithm is very efficient in terms of how it scales with $n$.
In this case the overall computational cost is $O(Hn)$, since
$N(Z_{t},A_{t})$, $N(W_{t},A_{t})$, and $N(W_{t};\mathcal{A})$ are bounded for
each $t\in[H]$. On the other hand, if the data is continuous, the algorithm is
still valid, although it may be expensive for large $n$ (in particular, in
this case both ComputeQ and ComputeH require computing the inverse of an
$n\times n$ matrix). In this case, it may be more computationally tractable to
consider alternative algorithms, such as an analogue of Algorithm 1 based on
Neural VMM instead of Kernel VMM (Bennett and Kallus, 2021). However, we leave
this problem to future work. Finally, we note that in practice, as in Bennett
and Kallus (2021), we may iterate Algorithm 1 multiple times, each time using
the previous iterate solution for the prior estimates $\tilde{q}^{(t)}$ and
$\tilde{h}^{(t)}$.
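The $\eta^{(t)}$ recursion in lines 4–8 of Algorithm 1 is simple to implement directly. The following sketch illustrates it with hypothetical array shapes and arbitrary placeholder estimates $\hat{q}^{(t)}$ (it is an illustration, not our released implementation):

```python
import numpy as np

def eta_weights(Z, A, E, q_hats):
    """Compute eta^(t) for each trajectory, following lines 4-8 of
    Algorithm 1. Z, A, E have shape (n, H), with column t-1 holding the
    time-t variable; q_hats is a list of fitted estimates of
    hat{q}^(1), ..., hat{q}^(H-1) as vectorized callables."""
    n, H = A.shape
    eta = np.ones((n, H))  # eta^(1) = 1
    for t in range(1, H):
        eta[:, t] = (eta[:, t - 1]
                     * (A[:, t - 1] == E[:, t - 1])
                     * q_hats[t - 1](Z[:, t - 1], A[:, t - 1]))
    return eta

# Toy usage with constant hat{q} = 1: eta reduces to a cumulative product
# of action-agreement indicators.
rng = np.random.default_rng(1)
A = rng.integers(0, 2, size=(4, 3))
E = rng.integers(0, 2, size=(4, 3))
Z = rng.integers(0, 3, size=(4, 3))
q_hats = [lambda z, a: np.ones_like(z, dtype=float)] * 2
eta = eta_weights(Z, A, E, q_hats)
assert np.allclose(eta[:, 0], 1.0)
```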
## 6 Experiments
### 6.1 Experimental Setup
Figure 3: Graphical representation
of the NoisyObs POMDP scenario. Red dashed edges / blue solid edges represent
the transitions under actions $a_{1}$ / $a_{2}$ respectively, and the numeric
label for each edge indicates the corresponding reward. Note that all
transitions and rewards in NoisyObs are deterministic, and do not depend on
the time index. In each state $s_{i}$ we receive observation $o_{i}$ with
probability $1-\epsilon_{\textup{noise}}$, or observation $o_{j}$ with
probability $\epsilon_{\textup{noise}}/2$, for each $j\neq i$.
Finally, we present a series of experiments, which are intended as an
empirical “proof of concept” of the correctness of our theory and algorithms.
In these experiments, we consider a simple POMDP, which we refer to as
NoisyObs, which is a time-homogeneous POMDP with three states, two actions,
and three observation values. We denote these by
$\mathcal{S}=\\{s_{1},s_{2},s_{3}\\}$, $\mathcal{A}=\\{a_{1},a_{2}\\}$, and
$\mathcal{O}=\\{o_{1},o_{2},o_{3}\\}$. We summarize the state transition and
reward structure of the POMDP in Fig. 3. The observation emission process for
this POMDP is given by
$P_{O}^{(t)}(o_{i}\mid s_{j})=\begin{cases}1-\epsilon_{\textup{noise}}&i=j\\\
\epsilon_{\textup{noise}}/2&i\neq j\end{cases}$
for all $t\in[H]$, where $\epsilon_{\textup{noise}}$ is a parameter of the
POMDP. We note that these observations can be seen as a noisy measurement of
the state; _i.e._ , we observe the correct state with probability
$1-\epsilon_{\textup{noise}}$, or a randomly selected incorrect state with
probability $\epsilon_{\textup{noise}}$. In the case that
$\epsilon_{\textup{noise}}=0$ the problem becomes an MDP, and greater values of
$\epsilon_{\textup{noise}}$ indicate more noisy measurements. Thus, NoisyObs
provides a simple model for evaluating sequential decision making policies,
where the logged data may be corrupted.
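The emission process above is straightforward to simulate; the following minimal sketch (an illustration, not part of our released code) samples observation indices for a given state index:

```python
import numpy as np

def sample_observation(state_idx, eps_noise, rng):
    """Sample an observation index for NoisyObs: the true state's index
    with probability 1 - eps_noise, otherwise one of the two other
    indices, each with probability eps_noise / 2."""
    probs = np.full(3, eps_noise / 2)
    probs[state_idx] = 1.0 - eps_noise
    return rng.choice(3, p=probs)

rng = np.random.default_rng(0)
draws = np.array([sample_observation(1, 0.2, rng) for _ in range(10_000)])
# With eps_noise = 0.2, the empirical frequency of the correct
# observation should be near 0.8.
assert abs(np.mean(draws == 1) - 0.8) < 0.05
```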
| $a_{1}$ | $a_{2}$
---|---|---
$s_{1}$ | 0.8 | 0.2
$s_{2}$ | 0.8 | 0.2
$s_{3}$ | 0.2 | 0.8
| $a_{1}$ | $a_{2}$
---|---|---
$o_{1}$ | 1 | 0
$o_{2}$ | 1 | 0
$o_{3}$ | 0 | 1
| $a_{1}$ | $a_{2}$
---|---|---
$o_{1}$ | 0 | 1
$o_{2}$ | 0 | 1
$o_{3}$ | 1 | 0
| $a_{1}$ | $a_{2}$
---|---|---
$o_{1}$ | 1 | 0
$o_{2}$ | 0 | 1
$o_{3}$ | 1 | 0
Table 1: The first table summarizes the probability distribution of the
logging policy $\pi_{b}^{\textsc{NoisyObs}}$, where each row gives the
probability distribution over actions for the corresponding state. The next
three tables similarly summarize the three evaluation policies
$\pi_{e}^{\textup{easy}}$, $\pi_{e}^{\textup{hard}}$, and
$\pi_{e}^{\textup{optim}}$ respectively, which are all deterministic policies
that depend on the current observation only. Note that none of these policies
depend on the time index.
We collected logged data using a time-homogeneous behavioral policy
$\pi_{b}^{\textsc{NoisyObs}}$, with a horizon length $H=3$. For each logged
trajectory we first sample a prior state $S_{0}$, equal to $s_{1}$, $s_{2}$,
or $s_{3}$ with probabilities $0.5$, $0.3$, and $0.2$ respectively, then a
prior observation $O_{0}\sim P_{O}(\cdot\mid S_{0})$ and a prior action
$A_{0}\sim\pi_{b}^{\textsc{NoisyObs}}(\cdot\mid S_{0})$; the initial state
$S_{1}$ is then given by transitioning from $S_{0}$ under $A_{0}$. In addition, we
considered evaluating three different evaluation policies
$\pi_{e}^{\textup{easy}}$, $\pi_{e}^{\textup{hard}}$, and
$\pi_{e}^{\textup{optim}}$, each of which is also time-homogeneous and depends
only on the current observation. The probability tables for all four policies
are summarized in Table 1. We note that $\pi_{e}^{\textup{easy}}$ and
$\pi_{e}^{\textup{hard}}$ are so because they are designed to have high and
low overlap with the logging policy respectively, and
$\pi_{e}^{\textup{optim}}$ is named so because it is the optimal policy when
$\epsilon_{\textup{noise}}$ is sufficiently small. Therefore these cover a
wide range of different kinds of policies; one with strong overlap, one with
poor overlap, and a high-performing policy with overlap somewhere in the
middle. In all cases, we consider estimating the value of the corresponding
policy with $\gamma=1$.
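The warm-up step described above can be sketched as follows, where `transition` is a placeholder for the deterministic dynamics of Fig. 3 (which we do not reproduce here), and the logging-policy table is taken from Table 1:

```python
import numpy as np

PI_B = np.array([[0.8, 0.2],    # pi_b(. | s_1), from Table 1
                 [0.8, 0.2],    # pi_b(. | s_2)
                 [0.2, 0.8]])   # pi_b(. | s_3)
S0_PROBS = np.array([0.5, 0.3, 0.2])

def sample_prior(eps_noise, rng, transition):
    """Warm-up for one logged NoisyObs trajectory: sample the prior state
    S_0, its noisy observation O_0, a prior action A_0 ~ pi_b(. | S_0),
    and the initial state S_1 = transition(S_0, A_0). `transition` is a
    placeholder for the deterministic dynamics of Fig. 3."""
    s0 = rng.choice(3, p=S0_PROBS)
    o_probs = np.full(3, eps_noise / 2)
    o_probs[s0] = 1.0 - eps_noise
    o0 = rng.choice(3, p=o_probs)
    a0 = rng.choice(2, p=PI_B[s0])
    return s0, o0, a0, transition(s0, a0)

rng = np.random.default_rng(0)
s0, o0, a0, s1 = sample_prior(0.2, rng, transition=lambda s, a: s)
assert s0 in range(3) and o0 in range(3) and a0 in range(2)
```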
We performed policy evaluation with the following methods. First, we used our
method described in Section 5 with 5-fold cross-fitting, with nuisance
estimation following Algorithm 1, which we refer to as Ours. We used the PCI
reduction given by setting $Z_{t}=O_{t-1}$, and $W_{t}=O_{t}$, and did not
include an explicit $X_{t}$. For every $t\in[H]$ we set the inputs to the
algorithm as follows: $\mathcal{H}^{(t)}$ and $\mathcal{Q}^{(t)}$ were the set
of all tabular functions; all regularization functions were set as
$\mathcal{R}(f)=\lambda\|f\|_{2,n}$, for some fixed hyperparameter $\lambda$;
all values of $\alpha^{(q,t)}$ and $\alpha^{(h,t)}$ were set to a common
hyperparameter $\alpha$; and the kernels $K^{(q,t)}$ and $K^{(h,t)}$ were set
as in Bennett and Kallus (2021), using the same process of combining three
Gaussian kernels with automatically calibrated bandwidths based on the
variance of the data. Furthermore, the inputs to the kernel functions were
given by concatenating one-hot embeddings of $Z_{t}$ and $A_{t}$ or $W_{t}$
and $A_{t}$. We describe the selection of hyperparameters $\alpha$ and
$\lambda$ in Appendix E. In addition, we implemented the following benchmark
methods:
1. 1.
MeanR: This is a naive baseline given by
$\frac{1}{n}\sum_{i=1}^{n}\sum_{t=1}^{H}\gamma^{t}R_{t}^{(i)}$
2. 2.
MDP: This is a model-based baseline given by fitting a tabular MDP to the
observed data based on the observed counts and treating the observations as
states, and computing the value of $\pi_{e}$ on this model using dynamic
programming
3. 3.
TIS: This is given by estimating the time-independent sampling identification
quantity defined in Section 4.1, by estimating the required probability
matrices directly from the observed counts, and replacing the expectation over
$\mathcal{P}_{\text{ind}}$ with its empirical analogue, based on summing over
all $n^{H}$ combinations of separately sampling an observed trajectory at each
time step.
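Of these, MeanR is the simplest; as a point of reference, it can be sketched in a few lines (assuming, hypothetically, that the logged rewards are stored as an $n\times H$ array with column $t-1$ holding $R_{t}$):

```python
import numpy as np

def mean_r_baseline(R, gamma):
    """MeanR baseline: (1/n) sum_i sum_{t=1}^H gamma^t R_t^(i), where R
    has shape (n, H) and column t-1 holds R_t (so t is 1-indexed)."""
    H = R.shape[1]
    discounts = gamma ** np.arange(1, H + 1)  # gamma^1, ..., gamma^H
    return float(np.mean(R @ discounts))      # mean of per-trajectory sums

# Tiny worked check: with gamma = 1, per-trajectory sums are 3 and 7.
R = np.array([[1.0, 2.0], [3.0, 4.0]])
assert mean_r_baseline(R, 1.0) == 5.0
```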
For full details of the implementation of each method, see our code at
https://github.com/CausalML/ProximalRL.
### 6.2 Results
Figure 4: Experiment results with $\epsilon_{\textup{noise}}=0$. In the top, middle, and bottom rows we display results for estimating the policy value of $\pi_{e}^{\textup{easy}}$, $\pi_{e}^{\textup{hard}}$, and $\pi_{e}^{\textup{optim}}$ respectively. On the left we display the mean policy value estimate for each method and each value of $n$, where the solid black line corresponds to the true policy value, and the shaded regions correspond to one standard deviation of the policy value estimates. On the right we display the corresponding mean squared error of these estimates, where the shaded regions correspond to 95% confidence intervals for these values.
Figure 5: Experiment results with $\epsilon_{\textup{noise}}=0.2$. Results
are displayed as in Fig. 4.
We now present policy evaluation results for the above scenario and
policies, using both our method and the above benchmarks. Specifically, for
each $n\in\\{200,500,1000,2000,5000,10000\\}$,
$\pi_{e}\in\\{\pi_{e}^{\textup{easy}},\pi_{e}^{\textup{hard}},\pi_{e}^{\textup{optim}}\\}$,
and $\epsilon\in\\{0,0.2\\}$ we repeated the following process $100$ times:
(1) we sampled $n$ trajectories with horizon length $H=3$, behavior policy
$\pi_{b}^{\textsc{NoisyObs}}$ and noise level
$\epsilon_{\textup{noise}}=\epsilon$; and (2) estimated $v_{1}(\pi_{e})$ using
these $n$ trajectories for each method.
First, in Fig. 4 we display results in the unconfounded case, where
$\epsilon_{\textup{noise}}=0$ (_i.e._ , MDP setting). We can observe that in
this case, both our method and the MDP baseline appear to be consistent, with
accurate estimates of the policy value as $n\to\infty$, as would be expected.
In this setting, the MDP method is generally more accurate than ours with
lower-variance estimates, which makes sense since ours is designed for much
more general conditions.
Next, in Fig. 5 we display results for the confounded case where
$\epsilon_{\textup{noise}}=0.2$ (_i.e._ , POMDP setting). Here, we see that
our method remains consistent, while the MDP method, which is only designed to
work in MDP settings, does not. The only exception is for estimating the value
of $\pi_{e}^{\textup{easy}}$; however, this is only because MDP just happens to
have very small bias for estimating this policy, which is serendipitous and
not something that can be guaranteed in general. In general, our method has
much higher variance in this more challenging setting compared with the
previous one, especially for smaller values of $n$. That said, when $n$ is
large the method is very accurate.
Finally, we note that in general, as expected, the MeanR benchmark is
inconsistent in both scenarios, and only works when the target policy value
just happens to be close to the mean logged reward. Furthermore, despite our
identification theory in Section 4.1, the TIS method in general performs very
poorly. In some sense this is unsurprising, since as discussed in Section 4.1
the identification result is of an unusual form (as an expectation over
$\mathcal{P}_{\text{ind}}$), and we did not provide any theory of the
resulting plug-in algorithm’s convergence. Furthermore, in our pilot
experiments we tried a similar method that instead directly computes the
identification result given by Tennenholtz et al. (2020, Theorem 1) by summing
over all possible trajectories, with empirical estimates of the probability
matrices plugged in. However, this approach had similar problems, with errors
significantly greater even than TIS (we speculate that this occurs because,
unlike TIS, the estimate is not normalized: it is computed as a sum over all
trajectories rather than as the mean of some empirical distribution, so the
resulting policy value estimates can take values well outside the range of
observed rewards), so we did not include these results. It is possible that,
_e.g._ , more sophisticated approaches to regularizing the nuisance estimation
for such approaches could make them work well, however even then they would
suffer from the theoretical limitations discussed in Section 4.1.
## 7 Conclusion
In this paper, we discussed the problem of OPE for POMDPs. First, we analyzed
the recently proposed approach for identifying the policy value for tabular
POMDPs (Tennenholtz et al., 2020). We showed that while it could be placed
within a more general framework and extended to continuous settings, it
suffers from some theoretical limitations due to the unusual form of the
identification quantity, which brings into question how useful it could be for
actually constructing estimators with good qualities, such as regularity,
$\sqrt{n}$-asymptotic normality, _etc_. Then, motivated by these limitations,
we proposed a new framework for identifying the policy value by sequentially
reducing the problem to a series of proximal causal inference problems. Then,
we extended this identification framework to a framework of estimators based
on double machine learning and cross-fitting (Chernozhukov et al., 2016), and
showed that under appropriate conditions such estimators are asymptotically
normal and semiparametrically efficient. Furthermore, we provided a concrete
algorithm for implementing such an estimator based on recent approaches to
solving conditional moment problems (Bennett and Kallus, 2021). Finally, we
performed an empirical investigation of our proposed estimators in synthetic
settings, and demonstrated that indeed our approach is consistent, even in
confounded settings where standard approaches to OPE fail.
Perhaps the most significant scope for future work on this topic is in the
development of more practical algorithms. Indeed, although our experiments
were only intended as a “proof of concept” of our theory, they also show that
our actual proposed estimators have very high variance with a moderate number
(_e.g._ , 1000) of trajectories, even in this extremely simple toy POMDP.
There are many approaches that may improve on this; for example it may be
beneficial to solve the conditional moment problems defining the $q^{(t)}$ and
$h^{(t)}$ functions simultaneously rather than sequentially as we proposed,
since the sequential approach may suffer from cascading errors. Related to this, our proposed approach
for solving the conditional moment problems under the intervention
distributions $\mathcal{P}^{*}_{t}$ is to use the functions $q^{(t^{\prime})}$
for $t^{\prime}<t$ (as in Lemma 2), which is akin to an importance sampling
approach. This could be inefficient, and alternative approaches akin to a
direct or doubly robust approach may be possible. Furthermore, although we
showed that our approach can work in toy settings, the hyperparameters needed
for good performance varied from setting to setting. Unfortunately, given
unobserved confounding it is inherently challenging to perform hyperparameter
optimization without actually performing experimentation. Therefore, another
important topic for future work is on more practical approaches to
hyperparameter optimization and algorithm selection for the nuisance
estimation. We note that this is an important and under-explored topic for
problems involving unmeasured confounding in general.
Another area where there is significant scope for future work is on the topic
of semiparametric efficiency. We provided efficiency theory under relatively
strong assumptions which ensure that all of the nuisances are uniquely
determined and that the model is locally saturated. However, this may be
unrealistic in general. Relaxing this assumption means that the tangent set
under consideration becomes more complex, and as previously discussed the
existing work on proximal causal inference does not address how to handle this
correctly. Therefore, working out what the tangent set looks like under more
general assumptions, what form the efficient influence function takes, and
under what conditions (if any) it takes a form similar to
$\psi_{\text{DR}}(\tau_{H})$ are important open questions. Furthermore, these
are also open questions for proximal causal inference more generally, as they
are also unsolved in the case of $H=1$.
Finally, in terms of future work, there is the problem of how to actually
apply our theory, as well as policy value estimators as in Section 5, in a
useful way to real-world sequential decision making problems involving
unmeasured confounding. Ultimately, although our work is largely theoretical,
we hope that it will be impactful in motivating new approaches to solving such
challenging problems.
## References
* Azizzadenesheli et al. (2016) K. Azizzadenesheli, A. Lazaric, and A. Anandkumar. Reinforcement learning of POMDPs using spectral methods. In _Conference on Learning Theory_ , pages 193–256. PMLR, 2016.
* Bennett and Kallus (2021) A. Bennett and N. Kallus. The variational method of moments. _arXiv preprint arXiv:2012.09422_ , 2021.
* Bennett et al. (2021) A. Bennett, N. Kallus, L. Li, and A. Mousavi. Off-policy evaluation in infinite-horizon reinforcement learning with latent confounders. In _International Conference on Artificial Intelligence and Statistics_ , pages 1999–2007. PMLR, 2021.
* Bhattacharya et al. (2020) S. Bhattacharya, S. Badyal, T. Wheeler, S. Gil, and D. Bertsekas. Reinforcement learning for POMDP: Partitioned rollout and policy iteration with application to autonomous sequential repair problems. _IEEE Robotics and Automation Letters_ , 5(3):3967–3974, 2020.
* Chandak et al. (2021) Y. Chandak, S. Niekum, B. C. da Silva, E. Learned-Miller, E. Brunskill, and P. S. Thomas. Universal off-policy evaluation. _arXiv preprint arXiv:2104.12820_ , 2021.
* Chen and Zhang (2021) S. Chen and B. Zhang. Estimating and improving dynamic treatment regimes with a time-varying instrumental variable. _arXiv preprint arXiv:2104.07822_ , 2021.
* Chernozhukov et al. (2016) V. Chernozhukov, D. Chetverikov, M. Demirer, E. Duflo, C. Hansen, and W. K. Newey. Double machine learning for treatment and causal parameters. Technical report, cemmap working paper, 2016.
* Cui et al. (2020) Y. Cui, H. Pu, X. Shi, W. Miao, and E. T. Tchetgen. Semiparametric proximal causal inference. _arXiv preprint arXiv:2011.08411_ , 2020.
* Gasse et al. (2021) M. Gasse, D. Grasset, G. Gaudron, and P.-Y. Oudeyer. Causal reinforcement learning using observational and interventional data. _arXiv preprint arXiv:2106.14421_ , 2021.
* Ghassami et al. (2021) A. Ghassami, A. Ying, I. Shpitser, and E. T. Tchetgen. Minimax kernel machine learning for a class of doubly robust functionals. _arXiv preprint arXiv:2104.02929_ , 2021.
  * Hu and Wager (2021) Y. Hu and S. Wager. Off-policy evaluation in partially observed Markov decision processes. _arXiv preprint arXiv:2110.12343_ , 2021.
* Kallus and Uehara (2019a) N. Kallus and M. Uehara. Double reinforcement learning for efficient off-policy evaluation in Markov decision processes, 2019a. arXiv:1908.08526.
* Kallus and Uehara (2019b) N. Kallus and M. Uehara. Efficiently breaking the curse of horizon in off-policy evaluation with double reinforcement learning, 2019b. arXiv:1909.05850.
  * Kallus and Uehara (2020) N. Kallus and M. Uehara. Double reinforcement learning for efficient off-policy evaluation in Markov decision processes. _Journal of Machine Learning Research_ , 21(167):1–63, 2020.
* Kallus and Zhou (2020) N. Kallus and A. Zhou. Confounding-robust policy evaluation in infinite-horizon reinforcement learning, 2020. arXiv:2002.04518.
* Kallus et al. (2021) N. Kallus, X. Mao, and M. Uehara. Causal inference under unmeasured confounding with negative controls: A minimax learning approach. _arXiv preprint arXiv:2103.14029_ , 2021.
  * Katt et al. (2017) S. Katt, F. A. Oliehoek, and C. Amato. Learning in POMDPs with Monte Carlo tree search. In _International Conference on Machine Learning_ , pages 1819–1827. PMLR, 2017.
* Killian et al. (2020) T. W. Killian, M. Ghassemi, and S. Joshi. Counterfactually guided policy transfer in clinical settings. _arXiv preprint arXiv:2006.11654_ , 2020.
* Liao et al. (2021) L. Liao, Z. Fu, Z. Yang, M. Kolar, and Z. Wang. Instrumental variable value iteration for causal offline reinforcement learning. _arXiv preprint arXiv:2102.09907_ , 2021.
* Miao et al. (2018a) W. Miao, Z. Geng, and E. J. Tchetgen Tchetgen. Identifying causal effects with proxy variables of an unmeasured confounder. _Biometrika_ , 105(4):987–993, 2018a.
* Miao et al. (2018b) W. Miao, X. Shi, and E. T. Tchetgen. A confounding bridge approach for double negative control inference on causal effects. _arXiv preprint arXiv:1808.04945_ , 2018b.
  * Nair and Jiang (2021) Y. Nair and N. Jiang. A spectral approach to off-policy evaluation for POMDPs. _arXiv preprint arXiv:2109.10502_ , 2021.
* Namkoong et al. (2020) H. Namkoong, R. Keramati, S. Yadlowsky, and E. Brunskill. Off-policy policy evaluation for sequential decisions under unobserved confounding. _arXiv preprint arXiv:2003.05623_ , 2020.
  * Oberst and Sontag (2019) M. Oberst and D. Sontag. Counterfactual off-policy evaluation with Gumbel-max structural causal models. In _International Conference on Machine Learning_ , pages 4881–4890. PMLR, 2019.
* Shi et al. (2020) X. Shi, W. Miao, J. C. Nelson, and E. J. Tchetgen Tchetgen. Multiply robust causal inference with double-negative control adjustment for categorical unmeasured confounding. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_ , 82(2):521–540, 2020.
  * Singh et al. (2021) G. Singh, S. Peri, J. Kim, H. Kim, and S. Ahn. Structured world belief for reinforcement learning in POMDP. In _International Conference on Machine Learning_ , pages 9744–9755. PMLR, 2021.
* Tchetgen et al. (2020) E. J. T. Tchetgen, A. Ying, Y. Cui, X. Shi, and W. Miao. An introduction to proximal causal learning. _arXiv preprint arXiv:2009.10982_ , 2020.
* Tennenholtz et al. (2020) G. Tennenholtz, S. Mannor, and U. Shalit. Off-policy evaluation in partially observable environments. In _Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI)_ , 2020.
* Van Der Vaart (1991) A. Van Der Vaart. On differentiable functionals. _The Annals of Statistics_ , pages 178–204, 1991.
  * Van der Vaart (2000) A. W. Van der Vaart. _Asymptotic statistics_ , volume 3. Cambridge University Press, 2000.
* Wang et al. (2020) L. Wang, Z. Yang, and Z. Wang. Provably efficient causal reinforcement learning with confounded observational data. _arXiv preprint arXiv:2006.12311_ , 2020.
* Xu et al. (2021) L. Xu, H. Kanagawa, and A. Gretton. Deep proxy causal learning and its application to confounded bandit policy evaluation. _arXiv preprint arXiv:2106.03907_ , 2021.
  * Yang et al. (2021) C.-H. H. Yang, I. Hung, T. Danny, Y. Ouyang, and P.-Y. Chen. Causal inference Q-network: Toward resilient reinforcement learning. _arXiv preprint arXiv:2102.09677_ , 2021.
## Appendix A Semiparametric Efficiency Theory
In this appendix we provide a brief review of semiparametric efficiency
theory, as relevant for the theory in this paper. We will consider a random
variable $X\in\mathcal{X}$, a model (set of distributions) $\mathcal{M}$,
where each $P\in\mathcal{M}$ defines a distribution for $X$, and some scalar
parameter $v:\mathcal{M}\mapsto\mathbb{R}$. Also let $\mu$ denote some
dominating measure such that $P\ll\mu$ for every $P\in\mathcal{M}$, and denote
the corresponding density as $dP/d\mu$. Given iid observations
$X_{1},\ldots,X_{n}$ sampled from some $P_{0}\in\mathcal{M}$, semiparametric
efficiency theory concerns itself with the limits on the estimation of
$v(P_{0})$, given that the estimator is required to be consistent and “well
behaved” (defined concretely below) at all $P$ in a neighborhood of $P_{0}$ in
the model $\mathcal{M}$.
### A.1 Definitions
###### Definition 2 (Influence function of estimators).
An estimator sequence $\hat{v}_{n}(X_{1:n})$ is asymptotically linear (AL)
with influence function (IF) $\psi_{P_{0}}(X)$ if
$\sqrt{n}(\hat{v}_{n}(X_{1:n})-v(P_{0}))=\frac{1}{\sqrt{n}}\sum_{i=1}^{n}\psi_{P_{0}}(X_{i})+o_{p}(1)$
where $\mathbb{E}_{P_{0}}[\psi_{P_{0}}(X)]=0$.
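As a concrete (and standard) illustration, not specific to this paper: the sample mean of iid data is asymptotically linear with influence function $\psi_{P_{0}}(X)=X-\mathbb{E}_{P_{0}}[X]$, and in this case the $o_{p}(1)$ remainder is identically zero. A minimal numerical sketch, with made-up simulation settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up simulation: v(P) = E[X], estimated by the sample mean.
mu = 2.0                 # true parameter value v(P_0)
n = 100_000
X = rng.exponential(mu, size=n)

v_hat = X.mean()         # the estimator \hat{v}_n
psi = X - mu             # influence function psi_{P_0}(X) = X - E[X]

# Asymptotic linearity: sqrt(n)(v_hat - v(P_0)) = n^{-1/2} sum_i psi(X_i) + o_p(1);
# for the sample mean the remainder term is exactly zero.
lhs = np.sqrt(n) * (v_hat - mu)
rhs = psi.sum() / np.sqrt(n)
print(lhs, rhs)          # equal up to floating-point error
```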
###### Definition 3 (One-dimensional submodel and its score function).
A one-dimensional submodel of $\mathcal{M}$ passing through $P$ is a set of
distributions $\\{P_{\epsilon}:\epsilon\in U\\}\subseteq\mathcal{M}$, where $U\subseteq\mathbb{R}$ is a neighborhood of $0$ and:
  1. $P_{0}=P$;
  2. the score function $s(X;\epsilon)=(d/d\epsilon)\log((dP_{\epsilon}/d\mu)(X))$ exists;
  3. there exists $u>0$ _s.t._ $\int\sup_{|\epsilon|\leq u}|s(X;\epsilon)|(dP_{\epsilon}/d\mu)(X)d\mu(X)<\infty$ and $\mathbb{E}[\sup_{|\epsilon|\leq u}s(X;\epsilon)^{2}]<\infty$.
Also, we define $s(X)=s(X;0)$, which we refer to as the score function of the
submodel at $P_{0}$. Note that by property (3) we have $s(X)\in L_{2,P}(X)$.
We also note that these conditions on the parametric submodel are slightly
stronger than those in some related work; they are needed to prove our
semiparametric efficiency results with full rigor, and our definitions below
should be interpreted _w.r.t._ such well-behaved submodels.
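A standard construction of such submodels (a textbook device, not specific to this paper) is exponential tilting. For any bounded $s$ with $\mathbb{E}_{P}[s(X)]=0$, define

```latex
\frac{dP_{\epsilon}}{d\mu}(x)
  = \frac{e^{\epsilon s(x)}\,(dP/d\mu)(x)}{\mathbb{E}_{P}[e^{\epsilon s(X)}]}\,.
% Then P_0 = P, and differentiating the log-density gives
\left.\frac{d}{d\epsilon}\log\frac{dP_{\epsilon}}{d\mu}(x)\right|_{\epsilon=0}
  = s(x) - \mathbb{E}_{P}[s(X)] = s(x)\,,
% so the submodel's score at P_0 is exactly s, and boundedness of s
% yields the domination conditions in property (3).
```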
###### Definition 4 (Tangent space).
The tangent space of $\mathcal{M}$ at $P_{0}$ is the linear closure of the set
of score functions at $P_{0}$ of all one-dimensional submodels of
$\mathcal{M}$ passing through $P_{0}$.
Note that the tangent set is always a cone, since we can always reparametrize
any one-dimensional submodel by replacing $\epsilon$ with a scalar multiple of
$\epsilon$.
###### Definition 5 (Pathwise differentiability).
A functional $v:\mathcal{M}\mapsto\mathbb{R}$ is pathwise differentiable at
$P_{0}$ wrt $\mathcal{M}$ if there exists a mean-zero function
$\psi_{P_{0}}(X)$, such that any one-dimensional submodel $\\{P_{\epsilon}\\}$
of $\mathcal{M}$ passing through $P_{0}$ with score function $s(X)$ satisfies
$\left.\frac{dv(P_{\epsilon})}{d\epsilon}\right|_{\epsilon=0}=\mathbb{E}[\psi_{P_{0}}(X)s(X)]\,.$
The function $\psi_{P_{0}}(X)$ is called a gradient of $v(P_{0})$ at $P_{0}$
wrt $\mathcal{M}$. The efficient IF (EIF, or canonical gradient) of $v(P_{0})$
wrt $\mathcal{M}$ is the unique gradient $\tilde{\psi}_{P_{0}}(X)$ of
$v(P_{0})$ at $P_{0}$ wrt $\mathcal{M}$ that belongs to the tangent space at
$P_{0}$ wrt $\mathcal{M}$.
Finally, we define regular estimators, which are those whose limiting
distribution is robust to local changes to the data generating process. This
is what we alluded to above by “well behaved” estimators. Note that
restricting attention to regular estimators excludes pathological behavior
such as that of the super-efficient Hodges estimator.
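(The Hodges estimator mentioned above can be illustrated numerically. The following sketch, with made-up simulation settings of our own, thresholds the sample mean toward zero: this achieves scaled risk near zero at $\theta=0$ at the price of a much larger scaled risk at nearby parameter values of order $n^{-1/4}$, whereas the plain sample mean has scaled risk $1$ everywhere.)

```python
import numpy as np

rng = np.random.default_rng(1)

def hodges(x):
    """Hodges estimator: the sample mean, thresholded toward 0."""
    m = x.mean()
    return m if abs(m) > len(x) ** -0.25 else 0.0

def scaled_risk(theta, n, reps=500):
    """Monte-Carlo estimate of n * E[(hodges - theta)^2] for N(theta, 1) data."""
    errs = [(hodges(rng.normal(theta, 1.0, n)) - theta) ** 2 for _ in range(reps)]
    return n * float(np.mean(errs))

n = 10_000
r0 = scaled_risk(0.0, n)         # "super-efficient" at 0: far below 1
r1 = scaled_risk(n ** -0.25, n)  # nearby alternative: far above 1
print(r0, r1)
```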
###### Definition 6 (Regular estimators).
An estimator sequence $\hat{v}_{n}$ is called regular at $P_{0}$ for
$v(P_{0})$ wrt $\mathcal{M}$ if there exists a limiting probability measure
$L$ such that, for any one-dimensional submodel $\\{P_{\epsilon}\\}$ of
$\mathcal{M}$ passing through $P_{0}$, we have
$\sqrt{n}(\hat{v}_{n}(X_{1:n})-v(P_{1/\sqrt{n}}))\to L$
in distribution as $n\to\infty$, where $X_{1:n}$ are distributed iid according
to $P_{1/\sqrt{n}}$.
Note that this property is required to hold even if $\\{P_{\epsilon}\\}$ is
chosen adversarially in response to $\hat{v}_{n}$.
### A.2 Characterizations
The following results characterize some important equivalences based on the
above definitions; they are based on Van Der Vaart [1991, Theorem 3.1].
###### Theorem 5 (Influence functions are gradients).
Suppose that $\hat{v}_{n}(X_{1:n})$ is an AL estimator of $v(P_{0})$ with
influence function $\psi_{P_{0}}(X)$, and that $v(P_{0})$ is pathwise
differentiable at $P_{0}$ wrt $\mathcal{M}$. Then $\hat{v}_{n}(X_{1:n})$ is a
regular estimator of $v(P_{0})$ at $P_{0}$ wrt $\mathcal{M}$ if and only if
$\psi_{P_{0}}(X)$ is a gradient of $v(P_{0})$ at $P_{0}$ wrt $\mathcal{M}$.
###### Corollary 3 (Characterization of the EIF).
The EIF wrt $\mathcal{M}$ is the projection of any gradient wrt $\mathcal{M}$
onto the tangent space wrt $\mathcal{M}$.
### A.3 Strategy to calculate the EIF
Given the above, the following is a natural strategy to calculate the EIF:
  1. Calculate a gradient $\psi_{P_{0}}(X)$ of the target parameter $v(P_{0})$ wrt $\mathcal{M}$.
  2. Calculate the tangent space wrt $\mathcal{M}$.
  3. Either:
     1. show that $\psi_{P_{0}}(X)$ already lies in the above tangent space, or
     2. project $\psi_{P_{0}}(X)$ onto the tangent space.
The first part of the above can often be done by explicitly computing the
derivative of $v(P_{\epsilon})$ wrt $\epsilon$, and re-arranging this into the
form $\mathbb{E}[\psi_{P_{0}}(X)s(X)]$ for some function $\psi_{P_{0}}(X)$.
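As a textbook example of this recipe (not one of the paper's estimands), consider $v(P)=\mathbb{E}_{P}[X]$ with $\mathcal{M}$ fully nonparametric:

```latex
% Step 1: a gradient. For any submodel with score s, since E_{P_0}[s(X)] = 0,
\left.\frac{dv(P_{\epsilon})}{d\epsilon}\right|_{\epsilon=0}
  = \int x\,s(x)\,dP_{0}(x)
  = \mathbb{E}_{P_{0}}\!\left[(X - v(P_{0}))\,s(X)\right],
% so psi_{P_0}(X) = X - E_{P_0}[X] is a gradient.
% Step 2: the tangent space of the nonparametric model is all of
% mean-zero L_2(P_0).
% Step 3(a): psi_{P_0} is mean-zero and square-integrable, so it already
% lies in the tangent space; hence it is the EIF, and the efficiency
% bound is var_{P_0}(X).
```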
### A.4 Optimalities
Finally, we describe the optimality properties of the EIF
$\tilde{\psi}_{P_{0}}(X)$. We define the _efficiency bound_ as the variance of
the EIF, $\textup{var}_{P_{0}}[\tilde{\psi}_{P_{0}}(X)]$, which has the
following interpretations. First, the efficiency bound gives a lower bound on
the risk of any estimator in a local asymptotic minimax sense [Van der Vaart,
2000, Theorem 25.20].
###### Theorem 6 (Local Asymptotic Minimax (LAM) theorem).
Let $v(P_{0})$ be pathwise differentiable at $P_{0}$ wrt $\mathcal{M}$, with
the EIF $\tilde{\psi}_{P_{0}}(X)$. Then, for any estimator sequence
$\hat{v}_{n}(X_{1:n})$, and any symmetric quasi-convex loss function
$l:\mathbb{R}\mapsto[0,\infty)$, we have
$\sup_{m\in\mathbb{N},\\{P_{\epsilon}^{(1)}\\},\ldots,\\{P_{\epsilon}^{(m)}\\}}\lim_{n\to\infty}\sup_{k\in[m]}\mathbb{E}_{P_{1/\sqrt{n}}^{(k)}}\left[l\left(\sqrt{n}\left\\{\hat{v}_{n}(X_{1:n})-v(P_{1/\sqrt{n}}^{(k)})\right\\}\right)\right]\geq\int
l(u)d\mathcal{N}(0,\textup{var}_{P_{0}}[\tilde{\psi}_{P_{0}}(X)])\,,$
where $\\{P_{\epsilon}^{(1)}\\},\ldots,\\{P_{\epsilon}^{(m)}\\}$ are one-
dimensional submodels of $\mathcal{M}$ passing through $P_{0}$.
In other words, if we allow for adversarial local perturbations to the data
generating process that are consistent with $\mathcal{M}$, then the worst-case
risk of _any_ estimator (not necessarily regular) is lower-bounded by that of
a regular and asymptotically linear estimator whose influence function is the
EIF. This interpretation follows because, given the above definition of
regular estimators and the central limit theorem, the limiting distribution of
such a regular and AL estimator is
$\mathcal{N}(0,\textup{var}_{P_{0}}[\tilde{\psi}_{P_{0}}(X)])$ under any such
local perturbations. Note that this theorem also implies the following,
possibly easier-to-interpret corollary.
###### Corollary 4.
Under the same assumptions as Theorem 6, we have
$\inf_{\delta>0}\liminf_{n\to\infty}\sup_{Q\in\mathcal{M},d_{\textup{TV}}(Q,P_{0})\leq\delta}\mathbb{E}_{Q}\left[l\left(\sqrt{n}\left\\{\hat{v}_{n}(X_{1:n})-v(Q)\right\\}\right)\right]\geq\int
l(u)d\mathcal{N}(0,\textup{var}_{P_{0}}[\tilde{\psi}_{P_{0}}(X)])\,,$
where $d_{\textup{TV}}(\cdot,\cdot)$ is the total variation distance, and
$\mathcal{N}(\mu,\sigma^{2})$ denotes a normal distribution with mean $\mu$
and variance $\sigma^{2}$.
Second, the efficiency bound gives a lower bound on the risk of any regular
estimator, in a strict, non-minimax sense [Van der Vaart, 2000, Theorem 25.21].
###### Theorem 7 (Convolution Theorem).
Let $l:\mathbb{R}\mapsto[0,\infty)$ be a symmetric quasi-convex loss function.
Let $v(P_{0})$ be pathwise differentiable at $P_{0}$ wrt $\mathcal{M}$ with
EIF $\tilde{\psi}_{P_{0}}(X)$, and let $\hat{v}_{n}(X_{1:n})$ be a regular
estimator sequence for $v(P_{0})$ at $P_{0}$ wrt $\mathcal{M}$, with limiting
distribution $L$. Then, we have
$\int l(u)dL(u)\geq\int
l(u)d\mathcal{N}(0,\textup{var}_{P_{0}}[\tilde{\psi}_{P_{0}}(X)])\,.$
Equality clearly holds when
$L=\mathcal{N}(0,\textup{var}_{P_{0}}[\tilde{\psi}_{P_{0}}(X)])$, which, as
discussed above, is the case when $\hat{v}_{n}(X_{1:n})$ is regular and AL
with influence function given by the EIF.
We note that in our interpretations of both the LAM and Convolution Theorems,
we argued that if an estimator is regular and AL with influence function
$\tilde{\psi}_{P_{0}}(X)$ then it will achieve the corresponding bound. The
following final theorem shows that the latter property alone is both necessary
and sufficient [Van der Vaart, 2000, Theorem 25.23].
###### Theorem 8.
Let $v(P_{0})$ be pathwise differentiable at $P_{0}$ wrt $\mathcal{M}$, and
let $\tilde{\psi}_{P_{0}}(X)$ be the EIF. Then an estimator sequence is
efficient (regular wrt $\mathcal{M}$ and with limiting distribution
$\mathcal{N}(0,\textup{var}_{P_{0}}[\tilde{\psi}_{P_{0}}(X)])$) if and only if
it is AL with influence function $\tilde{\psi}_{P_{0}}(X)$.
## Appendix B Discussion of Issues with Tangent Spaces in Past Work
Here we will discuss the problems with tangent spaces proposed in past work on
proximal causal inference. Given that this past work has considered the
simpler setting where $H=1$, we will omit all suffixes and prefixes involving
$t$ in the discussion here. Let $T:L_{2}(Z,A)\mapsto L_{2}(W,A)$ be the
conditional expectation operator defined according to
$Tf(Z,A)=\mathbb{E}[f(Z,A)\mid W,A]\quad\forall f\,,$
whose adjoint $T^{*}:L_{2}(W,A)\mapsto L_{2}(Z,A)$ satisfies
$T^{*}g(W,A)=\mathbb{E}[g(W,A)\mid Z,A]\quad\forall g\,.$
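In the tabular case these operators are finite matrices, and the defining adjoint relation $\langle Tf,g\rangle_{L_{2}(W,A)}=\langle f,T^{*}g\rangle_{L_{2}(Z,A)}$ can be checked directly. The following sketch uses a made-up joint pmf purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# A made-up joint pmf over (Z, W, A) with small discrete supports.
nZ, nW, nA = 3, 4, 2
p = rng.random((nZ, nW, nA))
p /= p.sum()

f = rng.normal(size=(nZ, nA))   # arbitrary f(Z, A)
g = rng.normal(size=(nW, nA))   # arbitrary g(W, A)

# Tf(w, a) = E[f(Z, A) | W=w, A=a];  T*g(z, a) = E[g(W, A) | Z=z, A=a]
p_wa = p.sum(axis=0)            # marginal pmf of (W, A)
p_za = p.sum(axis=1)            # marginal pmf of (Z, A)
Tf = np.einsum('zwa,za->wa', p, f) / p_wa
Tsg = np.einsum('zwa,wa->za', p, g) / p_za

# Both inner products reduce to E[f(Z, A) g(W, A)]:
lhs = (Tf * g * p_wa).sum()     # <Tf, g> in L2(W, A)
rhs = (f * Tsg * p_za).sum()    # <f, T*g> in L2(Z, A)
print(lhs, rhs)                 # equal up to floating-point error
```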
In Cui et al. [2020], the authors propose a tangent space which, in terms of
our notation and definitions of $q$ and $h$, is defined by the restrictions
$\displaystyle\mathbb{E}[q(Z,A)(s(A\mid W)+s(Z\mid W,A))\mid W,A]$
$\displaystyle\in\text{Range}(T)$
$\displaystyle\mathbb{E}[(\mathds{1}\\{E=A\\}R-h(W,A))s(W,R\mid Z,A)\mid Z,A]$
$\displaystyle\in\text{Range}(T^{*})\,.$
However, this choice of tangent space is never fully justified in terms of the
model under consideration. In Kallus et al. [2021], the authors do justify the
necessity of these restrictions by noting that if $q_{\epsilon}$ and
$h_{\epsilon}$ are differentiable with respect to $\epsilon$ within a given
submodel, then we must have
$\displaystyle\mathbb{E}\left[\left.\frac{\partial}{\partial\epsilon}\right|_{\epsilon=0}q_{\epsilon}(Z,A)\mid
W,A\right]$
$\displaystyle=\left.\frac{\partial}{\partial\epsilon}\right|_{\epsilon=0}P_{\epsilon}(A\mid
W)^{-1}-\mathbb{E}[s(Z\mid W,A)q(Z,A)\mid W,A]$ $\displaystyle=-P(A\mid
W)^{-1}s(A\mid W)-\mathbb{E}[s(Z\mid W,A)q(Z,A)\mid W,A]$
$\displaystyle=-\mathbb{E}[(s(A\mid W)+s(Z\mid W,A))q(Z,A)\mid W,A]$
$\displaystyle\mathbb{E}\left[\left.\frac{\partial}{\partial\epsilon}\right|_{\epsilon=0}h_{\epsilon}(W,A)\mid
Z,A\right]$ $\displaystyle=\mathbb{E}[s(W,R\mid
Z,A)(\mathds{1}\\{E=A\\}R-h(W,A))\mid Z,A]\,.$
Unfortunately, there are still some problems with this choice of tangent
space. Firstly, although these are clearly necessary conditions for
differentiability of the nuisances, it is not clear that they are
_sufficient_ conditions; that is, it is not clear that for a given score
function satisfying these conditions we can actually construct a parametric
submodel for which the nuisances are defined and differentiable. Note that
this is in contrast to many other areas of work involving semiparametric
efficiency theory, where the tangent set restrictions simply correspond to
some conditional independence assumptions, in which case it is trivial to see
that the tangent set restrictions invoked are both necessary and sufficient,
since the partitioning of the score function immediately implies the
independence structure of corresponding parametric submodels.
Secondly, it is not clear that differentiability of the nuisances is even
necessary – indeed we showed how to prove that $\psi_{\text{DR}}(\tau_{H})$ is
a gradient of the policy value without ever assuming or requiring that the
nuisance functions were differentiable – nor is it clear what impact if any
this requirement of nuisance differentiability would have on the actual model
of interest.
Thirdly, Kallus et al. [2021] consider a more general model in which $h$ and
$q$ are not necessarily uniquely determined, in which case the above
restrictions would actually have to hold for _all_ valid $h$ and $q$
functions, and it is not immediately clear that requiring this restriction for
a single chosen $h$ and $q$ is sufficient.
Finally, under a model in which the allowed distributions all actually
correspond to observational distributions for latent variable models with
hidden confounders satisfying the PCI assumptions, which the past work implies
are the only kinds of distributions under consideration, there are additional
necessary restrictions on the score functions. For example, let $L=(Z,A)$ and
$Q=(W,R)$; then from the PCI independence assumptions it is clear that the
observed distribution must take the form
$P(L,Q)=\int P(S)P(L\mid S)P(Q\mid S)d\mu(S)\,,$
for some latent variable $S$. It is easy to show that this implies that for
any differentiable submodel on the full data $(L,Q,S)$ we have
$\displaystyle s(L,Q)$ $\displaystyle=\frac{\int(s(S)+s(L\mid
S)+s(Q\mid S))P(S)P(L\mid S)P(Q\mid
S)d\mu(S)}{\int P(S)P(L\mid S)P(Q\mid S)d\mu(S)}$
$\displaystyle=\int(s(S)+s(L\mid S)+s(Q\mid S))P(S\mid
L,Q)d\mu(S)$ $\displaystyle=\mathbb{E}[s(S)+s(L\mid S)+s(Q\mid S)\mid L,Q]\,.$
Therefore, there must exist functions $f_{1}$, $f_{2}$, and $f_{3}$ such that
$s(Z,A,W,R)=\mathbb{E}[f_{1}(S)+f_{2}(Z,A;S)+f_{3}(W,R;S)\mid Z,A,W,R]\,,$
which satisfy
$\mathbb{E}[f_{1}(S)]=\mathbb{E}[f_{2}(Z,A;S)\mid
S]=\mathbb{E}[f_{3}(W,R;S)\mid S]=0\,.$
It is not clear that the previously proposed tangent spaces ensure this
condition, for example.
Given the above issues, we took care to define assumptions that avoid them, by
ensuring that we consider a model that is locally saturated at
$\mathcal{P}_{b}$, which guarantees that the tangent set is all square-integrable
functions. Achieving this involves ensuring that the nuisances are
uniquely determined locally near $\mathcal{P}_{b}$, and defining the parameter
of interest not in terms of the actual policy value, but rather in terms of
the nuisances and the identification quantity; that is, we ensure that the
parameter of interest corresponds to the target policy value for
distributions that actually come from an underlying valid PCI model satisfying
our assumptions, and otherwise is still an unambiguous and well-defined
quantity as long as the nuisances are uniquely defined.
## Appendix C Proofs of Main Theorems and Lemmas
### C.1 Proof of Theorem 1
###### Proof.
We will prove this result for arbitrary fixed $s$. Define
$\displaystyle Y_{s}$ $\displaystyle=R_{s}$ $\displaystyle Y_{t}$
$\displaystyle=\phi^{(t+1)}(Z_{t+1},A_{t+1},W_{t+1},E_{t+1},X_{t},Y_{t+1})\qquad\forall
t\in[s-1]\,,$
where
$\phi^{(t)}(z,a,w,e,x,y)=\rho^{(t)}(z,a,x)\mathds{1}\\{a=e\\}y\,.$
Now, by these definitions we need to prove that
$\mathbb{E}_{\mathcal{P}_{e}}[R_{s}]=\mathbb{E}_{\mathcal{P}_{\text{ind}}}[Y_{0}]\,,$
where $Y_{0}=\phi^{(1)}(Z_{1},A_{1},W_{1},E_{1},X_{0},Y_{1})$.
We will proceed via a recursive argument. In order to set up our key
recursion, we first define some additional notation. First, let
$\mathcal{P}^{*}_{t}$ denote the intervention distribution introduced in
Section 4.2, and let $\mathcal{P}^{*}_{\text{ind},t}$ denote the measure on
$\Omega_{H}^{*}$ defined by a mixture between $\mathcal{P}^{*}_{t+1}$ and
$\mathcal{P}_{\text{ind}}$, where
  1. $\\{W_{1:t-1}\\}$, $\\{X_{1:t-1}\\}$, $\\{A_{1:t-1}\\}$, and $\\{R_{1:t-1}\\}$ are jointly sampled from $\mathcal{P}^{*}_{t}$;
  2. $\\{Z_{1},\ldots,Z_{H}\\}$, $\\{W_{t},\ldots,W_{H}\\}$, $\\{X_{t},\ldots,X_{H}\\}$, $\\{A_{t},\ldots,A_{H}\\}$, and $\\{R_{t},\ldots,R_{H}\\}$ are jointly sampled from $\mathcal{P}_{\text{ind}}$.
Given this setup, the inductive relation we would like to prove is
$\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}[\phi^{(t)}(Z_{t},A_{t},W_{t},E_{t},X_{t-1},Y_{t})]=\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t+1}}[Y_{t}]\qquad\forall
t\in[s]$ (11)
We note that if Eq. 11 holds, then via chaining this relation and the
recursive definitions of $Y_{t}$, we would instantly have our result, since
$\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},1}}[\phi^{(1)}(Z_{1},A_{1},W_{1},E_{1},X_{0},Y_{1})]=\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},1}}[Y_{0}]=\mathbb{E}_{\mathcal{P}_{\text{ind}}}[Y_{0}]$,
and
$\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},s+1}}[R_{s}]=\mathbb{E}_{\mathcal{P}_{e}}[R_{s}]$.
Therefore, it only remains to prove that Eq. 11 holds.
Next, by the assumption on $\rho^{(t)}$ in the theorem statement, we have
$\displaystyle\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}\left[\rho^{(t)}(Z_{t},A_{t},X_{t-1})\left(\frac{d\mathcal{P}_{b}}{d\mathcal{P}^{*}_{\text{ind},t+1}}\right)(W_{t})\
\middle|\ W_{t},A_{t}=a\right]$
$\displaystyle=\left(\frac{d\mathcal{P}_{b}}{d\mathcal{P}^{*}_{\text{ind},t+1}}\right)(W_{t})\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}\left[\int_{x}f_{t-1}(x)\rho^{(t)}(Z_{t},A_{t},x)\
\middle|\ W_{t},A_{t}=a\right]$
$\displaystyle=\left(\frac{d\mathcal{P}_{b}}{d\mathcal{P}^{*}_{\text{ind},t+1}}\right)(W_{t})\left(\frac{d\mathcal{P}^{*}_{\text{ind},t+1}}{d\mathcal{P}_{b}}\right)(W_{t})P(A_{t}=a\mid
W_{t})^{-1}$ $\displaystyle=P(A_{t}=a\mid W_{t})^{-1}\,,$
where in this derivation $f_{t-1}$ denotes the density of $X_{t-1}$ under
$\mathcal{P}^{*}_{\text{ind},t}$, which we note is the same as the density of
$W_{t}$ under $\mathcal{P}^{*}_{\text{ind},t+1}$. Given this, applying the
independence assumptions of our POMDP framework we have
$\displaystyle\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}[P(A_{t}=a\mid
S_{t})^{-1}\mid W_{t},A_{t}=a]$
$\displaystyle=\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}[P(A_{t}=a\mid
S_{t},W_{t})^{-1}\mid W_{t},A_{t}=a]$
$\displaystyle=\int_{s}\frac{P(S_{t}=s\mid W_{t},A_{t}=a)}{P(A_{t}=a\mid
W_{t},S_{t}=s)}ds$ $\displaystyle=\int_{s}\frac{P(A_{t}=a\mid
W_{t},S_{t}=s)P(S_{t}=s\mid W_{t})}{P(A_{t}=a\mid W_{t},S_{t}=s)P(A_{t}=a\mid
W_{t})}ds$ $\displaystyle=P(A_{t}=a\mid W_{t})^{-1}$
$\displaystyle=\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}\left[\rho^{(t)}(Z_{t},A_{t},X_{t-1})\left(\frac{d\mathcal{P}_{b}}{d\mathcal{P}^{*}_{\text{ind},t+1}}\right)(W_{t})\
\middle|\ W_{t},A_{t}=a\right]$
$\displaystyle=\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}\left[\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}\left[\rho^{(t)}(Z_{t},A_{t},X_{t-1})\left(\frac{d\mathcal{P}_{b}}{d\mathcal{P}^{*}_{\text{ind},t+1}}\right)(W_{t})\
\middle|\ S_{t},W_{t},A_{t}=a\right]\ \middle|\ W_{t},A_{t}=a\right]$
$\displaystyle=\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}\left[\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}\left[\rho^{(t)}(Z_{t},A_{t},X_{t-1})\left(\frac{d\mathcal{P}_{b}}{d\mathcal{P}^{*}_{\text{ind},t+1}}\right)(W_{t})\
\middle|\ S_{t},A_{t}=a\right]\ \middle|\ W_{t},A_{t}=a\right]\,.$
Given this, it then follows from Assumption 1 that
$\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}\left[\rho^{(t)}(Z_{t},A_{t},X_{t-1})\left(\frac{d\mathcal{P}_{b}}{d\mathcal{P}^{*}_{\text{ind},t+1}}\right)(W_{t})\
\middle|\ S_{t},A_{t}=a\right]=P(A_{t}=a\mid S_{t})^{-1}\,,$
which holds almost surely for each $a\in\mathcal{A}$, and therefore also holds
replacing $a$ with $A_{t}$.
Finally, applying this previous equation, we have
$\displaystyle\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}[\phi^{(t)}(Z_{t},A_{t},W_{t},E_{t},X_{t-1},Y_{t})]$
$\displaystyle=\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}[\rho^{(t)}(Z_{t},A_{t},X_{t-1})\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}]$
$\displaystyle=\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}\left[\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}\left[\rho^{(t)}(Z_{t},A_{t},X_{t-1})\left(\frac{d\mathcal{P}_{b}}{d\mathcal{P}^{*}_{\text{ind},t+1}}\right)(W_{t})\
\middle|\
S_{t},A_{t},W_{t},E_{t},Y_{t}\right]\left(\frac{d\mathcal{P}^{*}_{\text{ind},t+1}}{d\mathcal{P}_{b}}\right)(W_{t})\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}\right]$
$\displaystyle=\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}\left[\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}\left[\rho^{(t)}(Z_{t},A_{t},X_{t-1})\left(\frac{d\mathcal{P}_{b}}{d\mathcal{P}^{*}_{\text{ind},t+1}}\right)(W_{t})\
\middle|\
S_{t},A_{t}\right]\left(\frac{d\mathcal{P}^{*}_{\text{ind},t+1}}{d\mathcal{P}_{b}}\right)(W_{t})\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}\right]$
$\displaystyle=\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}\left[P(A_{t}\mid
S_{t})^{-1}\left(\frac{d\mathcal{P}^{*}_{\text{ind},t+1}}{d\mathcal{P}_{b}}\right)(W_{t})\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}\right]$
$\displaystyle=\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}\left[\sum_{a}\
\frac{P(A_{t}=a\mid S_{t},W_{t},E_{t},Y_{t})}{P(A_{t}=a\mid
S_{t})}\left(\frac{d\mathcal{P}^{*}_{\text{ind},t+1}}{d\mathcal{P}_{b}}\right)(W_{t})\mathds{1}\\{E_{t}=a\\}Y_{t}(E_{t})\right]$
$\displaystyle=\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t}}\left[\left(\frac{d\mathcal{P}^{*}_{\text{ind},t+1}}{d\mathcal{P}_{b}}\right)(W_{t})Y_{t}(E_{t})\right]$
$\displaystyle=\mathbb{E}_{\mathcal{P}^{*}_{\text{ind},t+1}}[Y_{t}]\,,$
where the third and sixth equalities follow from the independence assumptions
of the POMDP given $S_{t}$. In this derivation we use the potential outcome
notation $Y_{t}(a)$ to denote the value $Y_{t}$ would have taken if we
intervened on the $t$’th action with value $a$ (and the subsequent values of
$X_{t}$ and $R_{t}$ are possibly changed accordingly; note that this
intervention does not change the values of $Z_{t}$ or $W_{t}$, since these
represent observations at times $t-1$ and $t$ respectively). The final
equality follows because replacing $Y_{t}$ with $Y_{t}(E_{t})$ effectively
updates the mixture distribution $\mathcal{P}^{*}_{\text{ind},t}$ so that
$A_{t}$, $X_{t}$, and $R_{t}$ are included in the set of variables sampled
according to $\mathcal{P}^{*}_{t+1}$, rather than in the set of those sampled
according to $\mathcal{P}_{\text{ind}}$. Furthermore, integrating over the Radon-Nikodym
derivative $(d\mathcal{P}^{*}_{\text{ind},t+1}/d\mathcal{P}_{b})(W_{t})$
effectively further updates the mixture distribution so that $W_{t}$ is also
included in the set sampled according to $\mathcal{P}^{*}_{t+1}$, since the
distribution of $W_{t}$ under $\mathcal{P}_{b}$ is the same as the
distribution of $W_{t}$ under $\mathcal{P}^{*}_{\text{ind},t}$. That is, these
two terms effectively replace integration under
$\mathcal{P}^{*}_{\text{ind},t}$ with integration under
$\mathcal{P}^{*}_{\text{ind},t+1}$. This establishes Eq. 11, and therefore as
discussed above the theorem follows by recursion.
∎
### C.2 Proof of Lemma 1
###### Proof.
First we establish the required property of this definition of $\rho^{(t)}$.
Since observations are tabular, the required property is equivalent to
$\displaystyle\mathbb{E}\left[\sum_{x\in\mathcal{O}}f(x)\rho^{(t)}(Z_{t},A_{t},x)\
\middle|\ W_{t},A_{t}=a\right]$
$\displaystyle=\frac{f(W_{t})}{P(W_{t})}P(A_{t}=a\mid W_{t})^{-1}$
$\displaystyle=\frac{f(W_{t})}{P(A_{t}=a,O_{t}=W_{t})}\,,$
almost surely for every discrete probability distribution $f$ over the
observation space. Now, recalling that $Q^{(t,a)}_{x,y}=P(O_{t}=x\mid
A_{t}=a,O_{t-1}=y)$, plugging the definition of $\rho^{(t)}$ into the LHS
above, we have
$\displaystyle\mathbb{E}\left[\sum_{x\in\mathcal{O}}f(x)\rho^{(t)}(Z_{t},A_{t},x)\
\middle|\ W_{t},A_{t}=a\right]$
$\displaystyle=\mathbb{E}\left[\sum_{x\in\mathcal{O}}f(x)P(O_{t-1}=Z_{t},A_{t}=a)^{-1}(Q^{(t,a)})^{-1}_{Z_{t},x}\
\middle|\ W_{t},A_{t}=a\right]$
$\displaystyle=\sum_{x,z\in\mathcal{O}}f(x)P(O_{t-1}=z,A_{t}=a)^{-1}P(O_{t-1}=z\mid
O_{t}=W_{t},A_{t}=a)(Q^{(t,a)})^{-1}_{z,x}$
$\displaystyle=\sum_{x,z\in\mathcal{O}}f(x)P(O_{t-1}=z,A_{t}=a)^{-1}\frac{P(O_{t}=W_{t}\mid
O_{t-1}=z,A_{t}=a)P(O_{t-1}=z\mid A_{t}=a)}{P(O_{t}=W_{t}\mid
A_{t}=a)}(Q^{(t,a)})^{-1}_{z,x}$
$\displaystyle=\sum_{x,z\in\mathcal{O}}\frac{f(x)P(O_{t-1}=z\mid
A_{t}=a)}{P(O_{t-1}=z,A_{t}=a)P(O_{t}=W_{t}\mid
A_{t}=a)}Q^{(t,a)}_{W_{t},z}(Q^{(t,a)})^{-1}_{z,x}$
$\displaystyle=\sum_{x\in\mathcal{O}}\frac{f(x)}{P(A_{t}=a)P(O_{t}=W_{t}\mid
A_{t}=a)}\sum_{z\in\mathcal{O}}Q^{(t,a)}_{W_{t},z}(Q^{(t,a)})^{-1}_{z,x}$
$\displaystyle=\sum_{x\in\mathcal{O}}\frac{f(x)}{P(O_{t}=W_{t},A_{t}=a)}\mathds{1}\\{W_{t}=x\\}=\frac{f(W_{t})}{P(A_{t}=a,O_{t}=W_{t})}\,,$
which establishes the required property of $\rho^{(t)}$.
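The identity just established can also be checked numerically. The following sketch (with a made-up joint pmf, purely for illustration) verifies that $\mathbb{E}[\sum_{x}f(x)\rho^{(t)}(Z_{t},a,x)\mid W_{t}=w,A_{t}=a]=f(w)/P(A_{t}=a,O_{t}=w)$ for a generic tabular joint with invertible $Q^{(t,a)}$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up tabular joint over (Z, A, W) = (O_{t-1}, A_t, O_t); |O| = 3, |A| = 2.
nO, nA = 3, 2
p = rng.random((nO, nA, nO))
p /= p.sum()

p_za = p.sum(axis=2)   # P(Z = z, A = a)
p_aw = p.sum(axis=0)   # P(A = a, W = w)

max_err = 0.0
for a in range(nA):
    # Q[x, y] = P(O_t = x | A_t = a, O_{t-1} = y), assumed invertible
    Q = (p[:, a, :] / p_za[:, a][:, None]).T
    # rho[z, x] = P(Z = z, A = a)^{-1} * (Q^{-1})[z, x]
    rho = np.linalg.inv(Q) / p_za[:, a][:, None]

    f = rng.random(nO)
    f /= f.sum()       # an arbitrary pmf f over observations
    for w in range(nO):
        p_z_given_wa = p[:, a, w] / p_aw[a, w]   # P(Z = z | W = w, A = a)
        lhs = p_z_given_wa @ (rho @ f)           # E[sum_x f(x) rho(Z, a, x) | W = w, A = a]
        rhs = f[w] / p_aw[a, w]
        max_err = max(max_err, abs(lhs - rhs))
print(max_err)  # ~0 up to floating-point error
```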
Now, for the second part of the theorem, we first note that in terms of our
notation and under our (w.l.o.g.) assumption that the target policy is
deterministic, Tennenholtz et al. [2020, Theorem 1] is equivalent to
$\displaystyle\mathbb{E}_{\mathcal{P}_{e}}[R_{s}]=\sum_{o_{1:s}\in\mathcal{O}^{s},a_{1:s}\in\mathcal{A}^{s}}$
$\displaystyle\left(\prod_{t=1}^{s}\mathds{1}\\{a_{t}=E_{t}(o_{1:t},a_{1:t-1})\\}\right)$
$\displaystyle\cdot\sum_{z\in\mathcal{O}}\mathbb{E}_{\mathcal{P}_{b}}[R_{s}\mid
O_{s}=o_{s},A_{s}=a_{s},O_{s-1}=z]$ $\displaystyle\qquad\cdot
P(O_{s}=o_{s}\mid A_{s}=a_{s},O_{s-1}=z)\omega(o_{1:s},a_{1:s})_{z}\,,$
where $E_{t}(o_{1:t},a_{1:t-1})$ denotes the action taken by $\pi_{e}$ given
$O_{1:t}=o_{1:t}$ and $A_{1:t-1}=a_{1:t-1}$, and
$\displaystyle\omega(o_{1:s},a_{1:s})$
$\displaystyle=\prod_{t=1}^{s}\Xi_{s-t+1}(o_{1:s-t+1},a_{1:s-t+1})$
$\displaystyle\Xi_{t}(o_{1:t},a_{1:t})_{z,z^{\prime}}$
$\displaystyle=\sum_{x\in\mathcal{O}}(Q^{(t,a_{t})})^{-1}_{z,x}P(O_{t}=x,O_{t-1}=o_{t-1}\mid
A_{t-1}=a_{t-1},O_{t-2}=z^{\prime})\qquad\forall t\in\\{2,3,\ldots,s\\}$
$\displaystyle\Xi_{1}(o_{1:t},a_{1:t})_{z}$
$\displaystyle=\sum_{x\in\mathcal{O}}(Q^{(1,a_{1})})^{-1}_{z,x}P(O_{1}=x)\,.$
We note that the term we refer to as $\omega$ was called $\Omega$ in
Tennenholtz et al. [2020], and the terms we refer to as $\Xi$ were called $W$,
and we explicitly write out the matrix multiplication in the definitions of
the $\Xi$ terms. Next, plugging the definition of $\omega$ into the above
equation for $\mathbb{E}_{\mathcal{P}_{e}}[R_{s}]$, and explicitly writing out
the sums implied by the multiplication of the $\Xi_{t}$ terms, and re-
arranging terms, we obtain
$\displaystyle\mathbb{E}_{\mathcal{P}_{e}}[R_{s}]=\sum_{\begin{subarray}{c}o_{1:s}\in\mathcal{O}^{s},a_{1:s}\in\mathcal{A}^{s}\\\
z_{1:s}\in\mathcal{O}^{s},x_{0:s-1}\in\mathcal{O}^{s}\end{subarray}}$
$\displaystyle\left(\prod_{t=1}^{s}\mathds{1}\\{a_{t}=E_{t}(o_{1:t},a_{1:t-1})\\}\right)$
$\displaystyle\cdot\mathbb{E}_{\mathcal{P}_{b}}[R_{s}\mid
O_{s}=o_{s},A_{s}=a_{s},O_{s-1}=z_{s}]$
$\displaystyle\cdot\left(\prod_{t=1}^{s}(Q^{(t,a_{t})})^{-1}_{z_{t},x_{t-1}}P(A_{t}=a_{t},O_{t-1}=z_{t})^{-1}\right)$
$\displaystyle\cdot\left(\prod_{t=1}^{s-1}P(O_{t}=o_{t},A_{t}=a_{t},O_{t-1}=z_{t},O_{t+1}=x_{t})\right)$
$\displaystyle\cdot P(O_{s}=o_{s},A_{s}=a_{s},O_{s-1}=z_{s})P(O_{0}=x_{0})\,.$
Now, we note that
$(Q^{(t,a_{t})})^{-1}_{z_{t},x_{t-1}}P(A_{t}=a_{t},O_{t-1}=z_{t})^{-1}=\rho^{(t)}(z_{t},a_{t},x_{t-1})$,
and that summing over the product of terms
$\prod_{t=1}^{s-1}P(O_{t}=o_{t},A_{t}=a_{t},O_{t-1}=z_{t},O_{t+1}=x_{t})$ and
$P(O_{s}=o_{s},A_{s}=a_{s},O_{s-1}=z_{s})$ and $P(O_{0}=x_{0})$ is equivalent
to integrating over $\mathcal{P}_{\text{ind}}$, where $z_{t}$, $a_{t}$,
$x_{t}$, and $o_{t}$ correspond to $Z_{t}$, $A_{t}$, $X_{t}$, and $W_{t}$
respectively. Re-writing the previous equation as an expectation and
simplifying based on this gives us
$\displaystyle\mathbb{E}_{\mathcal{P}_{e}}[R_{s}]$
$\displaystyle=\mathbb{E}_{\mathcal{P}_{\text{ind}}}\left[\mathbb{E}_{\mathcal{P}_{b}}[R_{s}\mid
W_{s},A_{s},Z_{s}]\prod_{t=1}^{s}\mathds{1}\\{A_{t}=E_{t}\\}\rho^{(t)}(Z_{t},A_{t},X_{t-1})\right]$
$\displaystyle=\mathbb{E}_{\mathcal{P}_{\text{ind}}}\left[\mathbb{E}_{\mathcal{P}_{\text{ind}}}\left[R_{s}\prod_{t=1}^{s}\mathds{1}\\{A_{t}=E_{t}\\}\rho^{(t)}(Z_{t},A_{t},X_{t-1})\
\middle|\ W_{s},A_{s},Z_{s}\right]\right]$
$\displaystyle=\mathbb{E}_{\mathcal{P}_{\text{ind}}}\left[R_{s}\prod_{t=1}^{s}\mathds{1}\\{A_{t}=E_{t}\\}\rho^{(t)}(Z_{t},A_{t},X_{t-1})\right]\,,$
where the second equation follows since the distribution of $R_{s}$ given
$W_{s}$, $A_{s}$, and $Z_{s}$ is the same under $\mathcal{P}_{b}$ and
$\mathcal{P}_{\text{ind}}$, and because $R_{s}$ is independent of
$\prod_{t=1}^{s}\mathds{1}\\{A_{t}=E_{t}\\}\rho^{(t)}(Z_{t},A_{t},X_{t-1})$
given $(W_{s},A_{s},Z_{s})$ under $\mathcal{P}_{\text{ind}}$. We note that the
final equation is our identification result from Theorem 1, and so we
conclude. ∎
### C.3 Proof of Theorem 2
Before we present the main proof, we establish some additional notation and
some helper lemmas. Using similar notation to Kallus et al. [2021], for any
$t\in[H]$ and $\phi\in L_{2,\mathcal{P}^{*}_{t}}(R_{t},D_{t+1:H})$ we define
the sets
$\displaystyle\mathbb{Q}^{(t)}$ $\displaystyle=\\{q\in
L_{2,\mathcal{P}^{*}_{t}}(Z_{t},A_{t}):\mathbb{E}^{*}_{t}[q(Z_{t},A_{t})-P^{*}_{t}(A_{t}\mid
S_{t})^{-1}\mid S_{t},A_{t}=a]=0\quad\text{a.s.}\quad\forall
a\in\mathcal{A}\\}$ $\displaystyle\mathbb{H}^{(t,\phi)}$
$\displaystyle=\\{h\in
L_{2,\mathcal{P}^{*}_{t}}(W_{t},A_{t}):\mathbb{E}^{*}_{t}[h(W_{t},A_{t})-\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}\mid
S_{t},A_{t}=a]=0\quad\text{a.s.}\quad\forall a\in\mathcal{A}\\}$
$\displaystyle\mathbb{Q}^{(t)}_{\text{obs}}$ $\displaystyle=\\{q\in
L_{2,\mathcal{P}^{*}_{t}}(Z_{t},A_{t}):\mathbb{E}^{*}_{t}[q(Z_{t},A_{t})-P^{*}_{t}(A_{t}\mid
W_{t})^{-1}\mid W_{t},A_{t}=a]=0\quad\text{a.s.}\quad\forall
a\in\mathcal{A}\\}$ $\displaystyle\mathbb{H}^{(t,\phi)}_{\text{obs}}$
$\displaystyle=\\{h\in
L_{2,\mathcal{P}^{*}_{t}}(W_{t},A_{t}):\mathbb{E}^{*}_{t}[h(W_{t},A_{t})-\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}\mid
W_{t},A_{t}=a]=0\quad\text{a.s.}\quad\forall a\in\mathcal{A}\\}\,,$
where $Y_{t}=\phi(R_{t},E_{t+1:H})$.
First, we will prove an important claim from Section 4.2, which is that,
together with Assumption 2, Assumption 3 implies that Eqs. 2 and 3 both have
solutions. This claim is formalized by the following lemma.
###### Lemma 3.
Under Assumption 2 and for each $t\in[H]$ and $\phi\in
L_{2,\mathcal{P}^{*}_{t}}(R_{t},D_{t+1:H})$ we have
$\mathbb{Q}^{(t)}\subseteq\mathbb{Q}^{(t)}_{\text{obs}}$ and
$\mathbb{H}^{(t,\phi)}\subseteq\mathbb{H}^{(t,\phi)}_{\text{obs}}$.
###### Proof of Lemma 3.
First, suppose that $q^{(t)}\in\mathbb{Q}^{(t)}$. Then we have
$\displaystyle\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})\mid W_{t},A_{t}=a]$
$\displaystyle=\mathbb{E}^{*}_{t}[\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})\mid
S_{t},W_{t},A_{t}=a]\mid W_{t},A_{t}=a]$
$\displaystyle=\mathbb{E}^{*}_{t}[\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})\mid
S_{t},A_{t}=a]\mid W_{t},A_{t}=a]$
$\displaystyle=\mathbb{E}^{*}_{t}[P^{*}_{t}(A_{t}=a\mid S_{t})^{-1}\mid
W_{t},A_{t}=a]$ $\displaystyle=\int\frac{P^{*}_{t}(S_{t}=s\mid
W_{t},A_{t}=a)}{P^{*}_{t}(A_{t}=a\mid S_{t}=s)}d\mu(s)$
$\displaystyle=\int\frac{P^{*}_{t}(A_{t}=a\mid
W_{t},S_{t}=s)P^{*}_{t}(S_{t}=s\mid W_{t})}{P^{*}_{t}(A_{t}=a\mid
S_{t}=s)P^{*}_{t}(A_{t}=a\mid W_{t})}d\mu(s)$
$\displaystyle=P^{*}_{t}(A_{t}=a\mid W_{t})^{-1}\int P^{*}_{t}(S_{t}=s\mid
W_{t})d\mu(s)$ $\displaystyle=P^{*}_{t}(A_{t}=a\mid W_{t})^{-1}\,,$
where in the second and sixth equalities we apply the independence assumptions
from Assumption 2, in the third equality we apply the fact that
$q^{(t)}\in\mathbb{Q}^{(t)}$, and the fifth equality follows from Bayes’ rule.
Therefore, $q^{(t)}\in\mathbb{Q}^{(t)}_{\text{obs}}$.
Second, suppose that $h^{(t)}\in\mathbb{H}^{(t,\phi)}$. Then we have
$\displaystyle\mathbb{E}^{*}_{t}[h^{(t)}(W_{t},A_{t})\mid Z_{t},A_{t}=a]$
$\displaystyle=\mathbb{E}^{*}_{t}[\mathbb{E}^{*}_{t}[h^{(t)}(W_{t},A_{t})\mid
S_{t},Z_{t},A_{t}=a]\mid Z_{t},A_{t}=a]$
$\displaystyle=\mathbb{E}^{*}_{t}[\mathbb{E}^{*}_{t}[h^{(t)}(W_{t},A_{t})\mid
S_{t},A_{t}=a]\mid Z_{t},A_{t}=a]$
$\displaystyle=\mathbb{E}^{*}_{t}[\mathbb{E}^{*}_{t}[\phi(R_{t},D_{t+1:H})\mid
S_{t},A_{t}=a]\mid Z_{t},A_{t}=a]$
$\displaystyle=\mathbb{E}^{*}_{t}[\mathbb{E}^{*}_{t}[\phi(R_{t},D_{t+1:H})\mid
S_{t},Z_{t},A_{t}=a]\mid Z_{t},A_{t}=a]$
$\displaystyle=\mathbb{E}^{*}_{t}[\phi(R_{t},D_{t+1:H})\mid Z_{t},A_{t}=a]\,,$
where in the second and fourth equalities we apply the independence
assumptions from Assumption 2, and in the third equality we apply the fact
that $h^{(t)}\in\mathbb{H}^{(t,\phi)}$. Therefore,
$h^{(t)}\in\mathbb{H}^{(t,\phi)}_{\text{obs}}$. ∎
Next, we establish the following pair of lemmas, which allow us to establish
that $\phi^{(t,s)}_{\text{IS}}$ and $\phi^{(t,s)}_{\text{Reg}}$ satisfy an
important recursive property in the case that $q^{(t)}\in\mathbb{Q}^{(t)}$ or
$h^{(t)}\in\mathbb{H}^{(t,\phi)}$ respectively.
###### Lemma 4.
Suppose that $q^{(t)}\in\mathbb{Q}^{(t)}$, let $Y_{t}=\phi(R_{t},D_{t+1:H})$,
and let Assumption 2 be given. Then, we have
$\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}]=\mathbb{E}^{*}_{t+1}[Y_{t}]\,.$
###### Lemma 5.
Suppose that $h^{(t)}\in\mathbb{H}^{(t,\phi)}$, let
$Y_{t}=\phi(R_{t},D_{t+1:H})$, and let Assumption 2 be given. Then, we have
$\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}h^{(t)}(W_{t},a)\right]=\mathbb{E}^{*}_{t+1}[Y_{t}]\,.$
###### Proof of Lemma 4.
Given that $q^{(t)}\in\mathbb{Q}^{(t)}$, we have
$\displaystyle\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}]$
$\displaystyle=\mathbb{E}^{*}_{t}[\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})\mid
S_{t},A_{t},E_{t},Y_{t}(1),\ldots,Y_{t}(m)]\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}]$
$\displaystyle=\mathbb{E}^{*}_{t}[\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})\mid
S_{t},A_{t}]\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}]$
$\displaystyle=\mathbb{E}^{*}_{t}[P^{*}_{t}(A_{t}\mid
S_{t})^{-1}\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}]$
$\displaystyle=\mathbb{E}^{*}_{t}[P^{*}_{t}(A_{t}\mid
S_{t})^{-1}\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}(E_{t})]$
$\displaystyle=\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}\frac{P^{*}_{t}(A_{t}=a\mid
S_{t},E_{t},Y_{t}(1),\ldots,Y_{t}(m))}{P^{*}_{t}(A_{t}=a\mid
S_{t})}\mathds{1}\\{E_{t}=a\\}Y_{t}(E_{t})\right]$
$\displaystyle=\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}\mathds{1}\\{E_{t}=a\\}Y_{t}(E_{t})\right]$
$\displaystyle=\mathbb{E}^{*}_{t}[Y_{t}(E_{t})]$
$\displaystyle=\mathbb{E}^{*}_{t+1}[Y_{t}]\,,$
where in the second and sixth equalities we apply the independence assumptions
from Assumption 2, in the third equality we apply the fact that
$q^{(t)}\in\mathbb{Q}^{(t)}$, in the fourth equality we apply the fact that
$Y_{t}=Y_{t}(A_{t})$, and in the final equality we apply the fact that
intervening on the $t$’th action with $E_{t}$ under $\mathcal{P}^{*}_{t}$ is by
definition equivalent to $\mathcal{P}^{*}_{t+1}$.
∎
###### Proof of Lemma 5.
Given that $h^{(t)}\in\mathbb{H}^{(t,\phi)}$, we have
$\displaystyle\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}h^{(t)}(W_{t},a)\right]$
$\displaystyle=\mathbb{E}^{*}_{t}\left[\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}h^{(t)}(W_{t},a)\
\middle|\ S_{t}\right]\right]$
$\displaystyle=\mathbb{E}^{*}_{t}\left[\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}h^{(t)}(W_{t},A_{t})\
\middle|\ S_{t},A_{t}=a\right]\right]$
$\displaystyle=\mathbb{E}^{*}_{t}\left[\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}\mathds{1}\\{E_{t}=A_{t}\\}Y_{t}\
\middle|\ S_{t},A_{t}=a\right]\right]$
$\displaystyle=\mathbb{E}^{*}_{t}\left[\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}\mathds{1}\\{E_{t}=a\\}Y_{t}(E_{t})\
\middle|\ S_{t},A_{t}=a\right]\right]$
$\displaystyle=\mathbb{E}^{*}_{t}\left[\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}\mathds{1}\\{E_{t}=a\\}Y_{t}(E_{t})\
\middle|\ S_{t}\right]\right]$
$\displaystyle=\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}\mathds{1}\\{E_{t}=a\\}Y_{t}(E_{t})\right]$
$\displaystyle=\mathbb{E}^{*}_{t}[Y_{t}(E_{t})]$
$\displaystyle=\mathbb{E}^{*}_{t+1}[Y_{t}]\,,$
where in the second and sixth equalities we apply the independence assumptions
from Assumption 2, in the third equality we apply the fact that
$h^{(t)}\in\mathbb{H}^{(t,\phi)}$, in the fourth equality we apply the fact
that $Y_{t}=Y_{t}(A_{t})$, and in the final equality we apply the fact that
intervening on the $t$’th action with $E_{t}$ under $\mathcal{P}^{*}_{t}$ is by
definition equivalent to $\mathcal{P}^{*}_{t+1}$.
∎
Now, by the previous two lemmas, we would be able to establish identification
via backward induction, if it were the case that the functions $q^{(t)}$ and
$h^{(t,s)}$ used for identification were actually members of
$\mathbb{Q}^{(t)}$ and $\mathbb{H}^{(t,\phi)}$ (for $\phi$ such that
$\phi(R_{t},D_{t+1:H})=Y_{t}^{(s)}$). However, instead we assumed that
$q^{(t)}\in\mathbb{Q}^{(t)}_{\text{obs}}$ and
$h^{(t)}\in\mathbb{H}^{(t,\phi)}_{\text{obs}}$, so some additional care must
be taken. The next lemma and its corollaries allow us to remedy this issue.
###### Lemma 6.
Let $q^{(t)}\in\mathbb{Q}^{(t)}_{\text{obs}}$ and
$h^{(t)}\in\mathbb{H}^{(t,\phi)}_{\text{obs}}$ be chosen arbitrarily, for some
given $Y_{t}=\phi(R_{t},D_{t+1:H})$. Then we have
$\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}]=\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}h^{(t)}(W_{t},a)\right]\,.$
###### Proof of Lemma 6.
We have
$\displaystyle\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}]$
$\displaystyle=\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})\mathbb{E}^{*}_{t}[\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}\mid
Z_{t},A_{t}]]$
$\displaystyle=\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})\mathbb{E}^{*}_{t}[h^{(t)}(W_{t},A_{t})\mid
Z_{t},A_{t}]]$
$\displaystyle=\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})h^{(t)}(W_{t},A_{t})]$
$\displaystyle=\mathbb{E}^{*}_{t}[\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})\mid
W_{t},A_{t}]h^{(t)}(W_{t},A_{t})]$
$\displaystyle=\mathbb{E}^{*}_{t}[P^{*}_{t}(A_{t}\mid
W_{t})^{-1}h^{(t)}(W_{t},A_{t})]$
$\displaystyle=\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}\frac{P^{*}_{t}(A_{t}=a\mid
W_{t})}{P^{*}_{t}(A_{t}=a\mid W_{t})}h^{(t)}(W_{t},a)\right]$
$\displaystyle=\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}h^{(t)}(W_{t},a)\right]\,.$
∎
###### Corollary 5.
Suppose that $q^{(t)}\in\mathbb{Q}^{(t)}_{\text{obs}}$, let
$Y_{t}=\phi(R_{t},D_{t+1:H})$, and let Assumptions 2 and 3 be given. Then, we
have
$\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}]=\mathbb{E}^{*}_{t+1}[Y_{t}]\,.$
###### Corollary 6.
Suppose that $h^{(t)}\in\mathbb{H}^{(t,\phi)}_{\text{obs}}$, let
$Y_{t}=\phi(R_{t},D_{t+1:H})$, and let Assumptions 2 and 3 be given. Then, we
have
$\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}h^{(t)}(W_{t},a)\right]=\mathbb{E}^{*}_{t+1}[Y_{t}]\,.$
Corollary 5 follows because from Assumption 3 there must exist some
$h^{(t)}\in\mathbb{H}^{(t,\phi)}$, and by Lemma 3 we know that
$h^{(t)}\in\mathbb{H}^{(t,\phi)}_{\text{obs}}$, so therefore applying Lemma 6
and then Lemma 5 we have
$\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}]=\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}h^{(t)}(W_{t},a)\right]=\mathbb{E}^{*}_{t+1}[Y_{t}]\,.$
Corollary 6 follows by an almost identical logic, since by Assumption 3 there
must exist some $q^{(t)}\in\mathbb{Q}^{(t)}$, and by Lemma 3 we also know that
$q^{(t)}\in\mathbb{Q}^{(t)}_{\text{obs}}$. Therefore, applying Lemma 6 and
then Lemma 4 we have
$\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}h^{(t)}(W_{t},a)\right]=\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}]=\mathbb{E}^{*}_{t+1}[Y_{t}]\,.$
These corollaries are sufficient to construct our inductive proof for our main
identification result, in the case of $\phi^{(t,s)}_{\text{IS}}$ and
$\phi^{(t,s)}_{\text{Reg}}$. However, for the case of
$\phi^{(t,s)}_{\text{DR}}$ we need to establish one final lemma before
presenting our main proof.
###### Lemma 7.
Suppose that either $q^{(t)}\in\mathbb{Q}^{(t)}_{\text{obs}}$ and $h^{(t)}\in
L_{2,\mathcal{P}^{*}_{t}}(W_{t},A_{t})$ _or_
$h^{(t)}\in\mathbb{H}^{(t,\phi)}_{\text{obs}}$ and $q^{(t)}\in
L_{2,\mathcal{P}^{*}_{t}}(Z_{t},A_{t})$. In addition, let
$Y_{t}=\phi(R_{t},D_{t+1:H})$, and let Assumptions 2 and 3 be given. Then, we
have
$\mathbb{E}^{*}_{t}\left[q^{(t)}(Z_{t},A_{t})(\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}-h^{(t)}(W_{t},A_{t}))+\sum_{a\in\mathcal{A}}h^{(t)}(W_{t},a)\right]=\mathbb{E}^{*}_{t+1}[Y_{t}]\,.$
###### Proof of Lemma 7.
First consider the case where $q^{(t)}\in\mathbb{Q}^{(t)}_{\text{obs}}$ and
$h^{(t)}\in L_{2,\mathcal{P}^{*}_{t}}(W_{t},A_{t})$. In this case, we have
$\displaystyle\mathbb{E}^{*}_{t}\left[q^{(t)}(Z_{t},A_{t})(\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}-h^{(t)}(W_{t},A_{t}))+\sum_{a\in\mathcal{A}}h^{(t)}(W_{t},a)\right]$
$\displaystyle=\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}]+\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}h^{(t)}(W_{t},a)\right]-\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})h^{(t)}(W_{t},A_{t})]$
$\displaystyle=\mathbb{E}^{*}_{t+1}[Y_{t}]+\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}h^{(t)}(W_{t},a)\right]-\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})h^{(t)}(W_{t},A_{t})]\,,$
where in the second equality we apply Corollary 5. Now, given
$q^{(t)}\in\mathbb{Q}^{(t)}_{\text{obs}}$ we can further establish
$\displaystyle\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})h^{(t)}(W_{t},A_{t})]$
$\displaystyle=\mathbb{E}^{*}_{t}\left[\mathbb{E}^{*}_{t}[q^{(t)}(Z_{t},A_{t})\mid
W_{t},A_{t}]h^{(t)}(W_{t},A_{t})\right]$
$\displaystyle=\mathbb{E}^{*}_{t}\left[P^{*}_{t}(A_{t}\mid
W_{t})^{-1}h^{(t)}(W_{t},A_{t})\right]$
$\displaystyle=\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}h^{(t)}(W_{t},a)\right]\,.$
Thus, plugging this into the previous equation we have
$\mathbb{E}^{*}_{t}\left[q^{(t)}(Z_{t},A_{t})(\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}-h^{(t)}(W_{t},A_{t}))+\sum_{a\in\mathcal{A}}h^{(t)}(W_{t},a)\right]=\mathbb{E}^{*}_{t+1}[Y_{t}]\,.$
Next, instead consider the case where
$h^{(t)}\in\mathbb{H}^{(t,\phi)}_{\text{obs}}$ and $q^{(t)}\in
L_{2,\mathcal{P}^{*}_{t}}(Z_{t},A_{t})$. In this case, we have
$\displaystyle\mathbb{E}^{*}_{t}\left[q^{(t)}(Z_{t},A_{t})(\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}-h^{(t)}(W_{t},A_{t}))+\sum_{a\in\mathcal{A}}h^{(t)}(W_{t},a)\right]$
$\displaystyle=\mathbb{E}^{*}_{t}\left[q^{(t)}(Z_{t},A_{t})\mathbb{E}^{*}_{t}[\mathds{1}\\{A_{t}=E_{t}\\}Y_{t}-h^{(t)}(W_{t},A_{t})\mid
Z_{t},A_{t}]\right]+\mathbb{E}^{*}_{t}\left[\sum_{a\in\mathcal{A}}h^{(t)}(W_{t},a)\right]$
$\displaystyle=0+\mathbb{E}^{*}_{t+1}[Y_{t}]\,,$
where the second equality follows from Corollary 6 and the fact that
$h^{(t)}\in\mathbb{H}^{(t,\phi)}_{\text{obs}}$. Therefore, under either set of
conditions we have our desired result.
∎
Now that we have established these preliminary lemmas, we are ready to present
the main proof.
###### Proof of Theorem 2.
First, we have assumed Assumptions 2 and 3, as well as the fact that
$q^{(t)}\in\mathbb{Q}^{(t)}_{\text{obs}}$ and
$h^{(t)}\in\mathbb{H}^{(t,\phi)}_{\text{obs}}$, so it follows from Corollaries
5 and 6 and Lemma 7 that for any of the choices of $\phi^{(t+1,s)}_{\text{IS}}$,
$\phi^{(t+1,s)}_{\text{Reg}}$, or $\phi^{(t+1,s)}_{\text{DR}}$ for defining
each $Y_{t}^{(s)}$ term (for $t<s$) we have
$\mathbb{E}^{*}_{t}[\phi^{(t,s)}(Z_{t},W_{t},A_{t},E_{t},Y_{t}^{(s)})]=\mathbb{E}^{*}_{t+1}[Y_{t}^{(s)}]\,,$
which holds for every $t<s$. Furthermore, we have defined
$Y_{t}^{(s)}=\phi^{(t+1,s)}(Z_{t+1},W_{t+1},A_{t+1},E_{t+1},Y_{t+1}^{(s)})$
for each $t<s$, and $Y_{s}^{(s)}=R_{s}$, so the previous equation is
equivalent to
$\mathbb{E}^{*}_{t}[Y_{t-1}^{(s)}]=\mathbb{E}^{*}_{t+1}[Y_{t}^{(s)}]\,,$
which again holds for every $t<s$. Therefore, by backward induction we have
$\mathbb{E}^{*}_{1}[Y_{0}^{(s)}]=\mathbb{E}^{*}_{s+1}[R_{s}]\,.$
However, by construction $\mathcal{P}^{*}_{1}=\mathcal{P}_{b}$, and the
distribution of $R_{s}$ under $\mathcal{P}^{*}_{s+1}$ is the same as under
$\mathcal{P}_{e}$, so therefore we have
$\mathbb{E}_{\mathcal{P}_{b}}[Y_{0}^{(s)}]=\mathbb{E}_{\mathcal{P}_{e}}[R_{s}]$,
as required.
∎
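The backward induction above translates directly into an estimator: set $Y_s^{(s)} = R_s$, repeatedly apply $\phi^{(t,s)}$ down to $t=1$, and average over trajectories. The sketch below implements the doubly robust map $\phi_{\text{DR}}$ as written in Lemma 7 for a batch of trajectories; the nuisance functions $q$ and $h$ are taken as given, and the data layout is an assumption for illustration.

```python
import numpy as np

def dr_estimate(trajs, q, h, actions):
    """Backward-recursive OPE estimate of E_{P_e}[R_s] (sketch of Theorem 2).

    trajs:   list of trajectories; each is a list of per-step dicts with
             keys 'Z', 'W', 'A', 'E', plus 'R' at the final step.
    q, h:    nuisance functions q(t, z, a) and h(t, w, a), assumed to
             solve Eqs. 2 and 3 (here: user-supplied callables).
    actions: the finite action set.
    """
    vals = []
    for traj in trajs:
        Y = traj[-1]['R']  # Y_s^{(s)} = R_s
        for t in range(len(traj) - 1, -1, -1):
            d = traj[t]
            ind = 1.0 if d['A'] == d['E'] else 0.0
            # phi_DR: q(Z,A) * (1{A=E} Y - h(W,A)) + sum_a h(W,a)
            Y = (q(t, d['Z'], d['A']) * (ind * Y - h(t, d['W'], d['A']))
                 + sum(h(t, d['W'], a) for a in actions))
        vals.append(Y)
    return float(np.mean(vals))
```

Setting $h \equiv 0$ recovers the pure importance-sampling recursion $\phi_{\text{IS}}$, and setting $q \equiv 0$ recovers the regression recursion $\phi_{\text{Reg}}$.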
### C.4 Proof of Theorem 3
We will prove this theorem by appealing to Chernozhukov et al. [2016, Theorem
3.1]. Therefore, this proof will consist of establishing the conditions of
this theorem. We will first present a lemma establishing the Newman
orthogonality property of this influence function, which not only is a
condition of Chernozhukov et al. [2016, Theorem 3.1] but an important property
in its own right, before presenting the rest of the proof.
In what follows below, for any generic quantity $\Psi$ that depends on our
nuisance functions, we will use the notation $\hat{\Psi}$ to refer to the
value of $\Psi$ using the estimated nuisance functions $\hat{q}^{(t)}$ and
$\hat{h}^{(t)}$ in place of $q^{(t)}$ and $h^{(t)}$ respectively for each
$t\in[H]$, and define $\Delta\Psi=\hat{\Psi}-\Psi$. In addition, for any
$r\in[0,1]$ we let $\Psi|_{r}$ refer to the value of $\Psi$ using the
nuisances $q_{r}^{(t)}=q^{(t)}+r\Delta q^{(t)}$ and
$h_{r}^{(t)}=h^{(t)}+r\Delta h^{(t)}$ in place of $q^{(t)}$ and $h^{(t)}$
respectively for each $t\in[H]$, and define $\Delta_{r}\Psi=\Psi|_{r}-\Psi$.
We note that according to these definitions, $\Psi=\Psi|_{0}$,
$\hat{\Psi}=\Psi|_{1}$, and $\Delta\Psi=\Delta_{1}\Psi$. In what follows below
we will treat $\Delta q^{(t)}$ and $\Delta h^{(t)}$ as non-random square
integrable functions with the same signature as $q^{(t)}$ and $h^{(t)}$
respectively for each $t\in[H]$, which may take arbitrary values. This is in
contrast to previous sections, where $\hat{\Psi}$ was treated as a random
quantity with respect to the sampling distribution of the $n$ iid behavior
trajectories. Finally, we note that it is trivial to verify that for any pair
of quantities $\Psi$ and $\Psi^{\prime}$ we have
$\Delta_{r}(\Psi+\Psi^{\prime})=\Delta_{r}\Psi+\Delta_{r}\Psi^{\prime}$, and
$\Delta_{r}(\Psi\Psi^{\prime})=(\Delta_{r}\Psi)\Psi^{\prime}+\Psi(\Delta_{r}\Psi^{\prime})+(\Delta_{r}\Psi)(\Delta_{r}\Psi^{\prime})$,
which we will frequently apply in the derivations below without further
explanation.
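The product rule for $\Delta_{r}$ quoted above is just the expansion of $(\Psi+\Delta_{r}\Psi)(\Psi^{\prime}+\Delta_{r}\Psi^{\prime})-\Psi\Psi^{\prime}$. A scalar spot-check, with illustrative values standing in for the nuisance-dependent quantities:

```python
# Spot-check of Delta_r(Psi Psi') = (Delta_r Psi) Psi' + Psi (Delta_r Psi')
#                                   + (Delta_r Psi)(Delta_r Psi'),
# using scalars as stand-ins for the nuisance-dependent quantities.
psi, psi_hat = 2.0, 3.5       # Psi and hat{Psi} (illustrative values)
phi, phi_hat = -1.0, 0.25     # Psi' and hat{Psi'}
r = 0.3

d_psi = r * (psi_hat - psi)   # Delta_r Psi when Psi is linear in the nuisance
d_phi = r * (phi_hat - phi)

lhs = (psi + d_psi) * (phi + d_phi) - psi * phi  # Delta_r(Psi Psi')
rhs = d_psi * phi + psi * d_phi + d_psi * d_phi
assert abs(lhs - rhs) < 1e-12
```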
###### Lemma 8.
Under the conditions of Theorem 2, as well as the additional assumption that
$\|q^{(t)}(Z_{t},A_{t})\|<\infty$ and $\|h^{(t)}(W_{t},A_{t})\|<\infty$ for
each $t\in[H]$, $\psi_{\text{DR}}$ satisfies Neyman orthogonality with respect
to the nuisances $q^{(t)}$ and $h^{(t)}$ for all $t\in[H]$. More concretely,
# Accretion and Obscuration in Merger-Dominated Luminous Red Quasars
Eilat Glikman,1 Stephanie LaMassa,2 Enrico Piconcelli,3 Luca Zappacosta3 and
Mark Lacy4
1Department of Physics, Middlebury College, Middlebury, VT 05753, USA
2Space Telescope Science Institute, 3700 San Martin Drive, Baltimore MD,
21218, USA
3Osservatorio Astronomico di Roma (INAF), via Frascati 33, 00040 Monte Porzio
Catone (Roma), Italy
4National Radio Astronomy Observatory, Charlottesville, VA, USA E-mail:
<EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
We present an analysis of the X-ray properties of 10 luminous, dust-reddened
quasars from the FIRST-2MASS (F2M) survey based on new and archival Chandra
observations. These systems are interpreted to be young, transitional objects
predicted by merger-driven models of quasar/galaxy co-evolution. The sources
have been well-studied from the optical through mid-infrared, have Eddington
ratios above 0.1, and possess high-resolution imaging, most of which shows
disturbed morphologies indicative of a recent or ongoing merger. When combined
with previous X-ray studies of five other F2M red quasars, we find that the
sources, especially those hosted by mergers, have moderate to high column
densities ($N_{H}\simeq 10^{22.5-23.5}$ cm-2) and Eddington ratios high enough
to enable radiation pressure to blow out the obscuring material. We confirm
previous findings that red quasars have dust-to-gas ratios that are
significantly lower than the value for the Milky Way’s interstellar medium,
especially when hosted by a merger. The dust-to-gas ratio for two red quasars
that lack evidence for merging morphology is consistent with the Milky Way and
they do not meet the radiative feedback conditions for blowout. These findings
support the picture of quasar/galaxy co-evolution in which a merger results in
feeding of and feedback from an AGN. We compare the F2M red quasars to other
obscured and reddened quasar populations in the literature, finding that,
although morphological information is lacking, nearly all such samples meet
blowout conditions and exhibit outflow signatures suggestive of winds and
feedback.
###### keywords:
galaxies: active – galaxies: evolution – quasars: general – X-rays: galaxies
pubyear: 2024. pagerange: Accretion and Obscuration in Merger-Dominated
Luminous Red Quasars–A.2
## 1 Introduction
A complete picture of galaxy evolution must include the growth of supermassive
black holes (SMBHs) at their centres, as evidence suggests a formation and
evolutionary relationship between the two. The ubiquity of SMBHs in the
centres of galaxies (Faber et al., 1997), the tight $M_{BH}-\sigma$ relation
(Gebhardt et al., 2000; Ferrarese & Merritt, 2000), and the contemporaneous
peak in star formation and black hole growth over cosmic history (Hopkins &
Beacom, 2006) all point to an energy exchange, or “feedback”, between the
black holes and their hosts. This feedback from active galactic nuclei (AGN)
is still poorly understood, and may come in the form of radiation, winds,
outflows, and/or jets (Fabian, 2012).
One way to explain these observations is through major galaxy mergers that
induce both SMBH accretion and circumnuclear star-formation, resulting in
large amounts of dust and gas that obscure much of the SMBH’s growth (Sanders
et al., 1988; Hopkins et al., 2006). According to this model, the obscuring
dust is eventually cleared by powerful quasar winds, revealing luminous,
unreddened emission from the quasar. In this scenario dust-reddened (or “red”)
quasars represent a crucial early phase in SMBH/galaxy co-evolution: the
transition from a dust-enshrouded core to a typical, unobscured quasar. In the
context of this picture, the reddened phase represents a key component of SMBH
growth with the potential to reveal the physics of feedback once the quasar
becomes luminous enough to blow away the circumnuclear material.
Recently, samples of heavily reddened quasars have been shown to fit into this
scenario as the long-sought transitioning population (e.g., Banerji et al.,
2012; Tsai et al., 2015; LaMassa et al., 2017). A red quasar sample
constructed from the cross-matching of the Faint Images of the Radio Sky at
Twenty cm (FIRST; Becker et al., 1995) survey to the Two-Micron All-Sky Survey
(2MASS; Skrutskie et al., 2006), applying red optical-to-near infrared colour
cuts, and spectroscopically confirming broad-line (Type 1) sources yielded
$\sim$130 objects that span a broad range of redshifts $(0.1<z<3)$ and
reddenings ($0.1<E(B-V)<1.5$; Glikman et al., 2004, 2007; Urrutia et al.,
2009; Glikman et al., 2012, 2013, hereafter called F2M red quasars). Extensive
observations of F2M red quasars show that they are in a transitional phase of
a merger-driven process: Hubble Space Telescope (HST) images show mergers are
very common ($>80\%$; Urrutia et al., 2008; Glikman et al., 2015); they have
high accretion rates ($L/L_{\rm Edd}\gtrsim 0.3$; Kim et al., 2015); their BH
masses are under-massive compared to their hosts, suggesting they have not
finished growing (Urrutia et al., 2012); and a high fraction of them exhibit
outflows and winds via the presence of blue-shifted broad absorption lines in
low-ionization species (i.e., LoBALs and FeLoBALs make up $>60\%$ of F2M red
quasars, compared to 5% in the general quasar population; Urrutia et al.,
2009). More recently, integral field spectroscopy
of three F2M red quasars shows bi-conal superbubbles in [O iii] emission,
catching the short-lived “break-out” phase (Shen et al., 2023).
One way to determine whether an AGN is in the radiatively-driven “blow-out”
phase is by comparing its Eddington ratio
(${\lambda}_{\mathrm{Edd}}=L/{L}_{\mathrm{Edd}}$) to the hydrogen column
density ($N_{H}$). A study of hard X-ray-selected local ($z<0.05$) AGN showed
that they are either completely obscured, with $\mathrm{log}(\lambda_{\rm
Edd})\lesssim-1.5$ and $N_{H}>10^{22}$ cm-2, or largely unobscured, with
$\mathrm{log}(\lambda_{\rm Edd})\gtrsim-1.5$ and $N_{H}<10^{22}$ cm-2 (Ricci
et al., 2017a). There exists a unique set of conditions whereby an AGN has
sufficiently high $\lambda_{\rm Edd}$ and a not-too-high $N_{H}$ to blow out
the dust and gas (Fabian et al., 2008; Ishibashi et al., 2018). Recently,
Stacey et al. (2022) used ALMA observations to show that reddened quasars with
$E(B-V)>0.5$ reside in this “blow-out” region of $\lambda_{\rm Edd}$ vs.
$N_{H}$ space.
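The thresholds quoted above suggest a simple diagnostic for placing a source in the $\lambda_{\rm Edd}$ vs. $N_{H}$ plane. The sketch below encodes only the box-like cuts from Ricci et al. (2017a) as stated in the text; the actual effective-Eddington "blow-out" boundary of Fabian et al. (2008) is a curve in this plane, so this is a rough illustration, not the analysis used in the paper.

```python
import math

def regime(lambda_edd, n_h):
    """Crudely classify an AGN in the lambda_Edd vs N_H plane.

    Thresholds follow the values quoted in the text: "obscured" means
    N_H > 1e22 cm^-2, and log(lambda_Edd) >= -1.5 marks accretion fast
    enough for radiation pressure to act on the obscurer.
    """
    obscured = n_h > 1e22
    fast = math.log10(lambda_edd) >= -1.5
    if obscured and fast:
        return 'blow-out candidate'
    return 'obscured' if obscured else 'unobscured'

# F2M J1113 (Table 1): lambda_Edd = 2.29, log N_H = 23.1
print(regime(2.29, 10**23.1))  # -> blow-out candidate
```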
While we have measured black hole masses, Eddington ratios, and reddenings
($E(B-V)$) for all the F2M red quasars, our understanding of their X-ray
properties has been deficient. Twelve F2M quasars were observed with Chandra
in 2004 with $5-10$ ksec exposures (Urrutia et al., 2005). While all of the
sources show absorbed X-ray spectra, the detections were mostly too low-count
(all but one had $<100$ counts) for detailed spectral analysis, and the
ancillary data for F2M red quasars had not yet been obtained, making it
difficult to draw conclusions.
More recently, we obtained high-quality X-ray spectra of four F2M red quasars
with XMM-Newton and NuSTAR, as well as archival Chandra data (LaMassa et al.,
2016b; Glikman et al., 2017, hereafter, L16 and G17, respectively); three of
these have HST images showing merging hosts. We found that these three sources
fall squarely in the blowout region of the the $\lambda_{\rm Edd}$ vs. $N_{H}$
diagram (Glikman, 2017). The source that lies outside of the blowout region
lacks morphological information, and has a dust-to-gas ratio consistent with
the Galactic value, possibly because it is obscured by dust lanes in its host
galaxy. In addition, a fifth F2M red quasar (F2M J0915) has a 3.2 ksec Chandra
observation analyzed in Urrutia et al. (2005) as well as high-spatial-
resolution imaging revealing a merging host (Urrutia et al., 2008). We list
the properties of these F2M red quasars in Table 1 as a reference for the
remainder of the paper.
Table 1: Properties of Previously Studied F2M Red Quasars

Name | R.A. (J2000) | Decl. (J2000) | Redshift | $E(B-V)$ (mag) | $\log L_{\rm bol}^{\dagger}$ (erg s-1) | $L/L_{\rm Edd}^{\ddagger}$ | $\log{N_{H}}$ (cm-2) | Merger? | Ref
---|---|---|---|---|---|---|---|---|---
F2M J0830 | 08:30:11.12 | +37:59:51.8 | 0.414 | $0.71\pm 0.01$ | $46.20\pm 0.01$ | $0.4\pm 0.1$ | $22.32\pm 0.04$ | Y | L16
F2M J0915 | 09:15:01.70 | +24:18:12.2 | 0.842 | $0.73\pm 0.02$ | $47.696\pm 0.006$ | $0.94\pm 0.48$ | $22.8^{+0.2}_{-0.4}$ | Y | Urrutia et al. (2005)
F2M J1113 | 11:13:54.67 | +12:44:38.9 | 0.681 | $1.26\pm 0.01$ | $47.475\pm 0.006$ | $2.29\pm 0.42$ | $23.1\pm 0.1$ | Y | G17
F2M J1227 | 12:27:49.15 | +32:14:59.0 | 0.137 | $0.828\pm 0.003$ | $45.545\pm 0.005$ | $0.8\pm 0.2$ | $21.5\pm 0.1$ | ? | L16
F2M J1656 | 16:56:47.11 | +38:21:36.7 | 0.732 | $0.519\pm 0.004$ | $46.81\pm 0.01$ | $0.76\pm 0.18$ | $23.0\pm 0.1$ | Y | G17

† Bolometric luminosities were determined by applying a bolometric correction of 7.6 (Richards et al., 2006) to the 6$\mu$m luminosity, which was determined by interpolating their WISE mid-infrared luminosities in the rest-frame.
‡ As reported in Kim et al. (2015), except for F2M J0830 and F2M J1227, which were obtained from G17.
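The footnotes to Table 1 can be turned into a small helper: multiply the rest-frame 6$\mu$m luminosity by the bolometric correction of 7.6 and divide by the Eddington luminosity. The expression $L_{\rm Edd}\approx 1.26\times 10^{38}\,(M_{\rm BH}/M_{\odot})$ erg s-1 is the standard textbook value, not a number quoted in this paper.

```python
def eddington_ratio(log_l6um, log_mbh):
    """Eddington ratio from a 6-micron luminosity and a black-hole mass.

    log_l6um: log10 of the rest-frame 6-micron luminosity [erg/s].
    log_mbh:  log10 of the black-hole mass [solar masses].
    Uses the bolometric correction of 7.6 from Table 1's footnote
    (Richards et al. 2006) and the standard Eddington luminosity
    L_Edd = 1.26e38 * (M_BH / M_sun) erg/s (textbook value, an
    assumption here).
    """
    l_bol = 7.6 * 10.0**log_l6um
    l_edd = 1.26e38 * 10.0**log_mbh
    return l_bol / l_edd
```

For example, a quasar with $\log L_{6\mu m}=45.5$ and $\log M_{\rm BH}=8.5$ would have $\lambda_{\rm Edd}\approx 0.6$.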
X-rays give the best measure of an AGN’s true underlying accretion luminosity,
because they originate close to the black hole and penetrate gas and dust for
all but the most obscured AGN ($N_{H}<10^{24}$ cm-2 and $E<10$ keV). Besides
showing evidence for active radiative feedback, the studies in L16 and G17
found that the three merger-hosted sources were best fit by an absorbed power-
law model with a small fraction of the incident emission being leaked or
scattered back into the line-of-sight (we refer to this as the ‘scattering
fraction’; here, $f_{\rm scatt}=1-7\%$) and moderate line-of-sight extinction
($N_{H}=10^{22-23}$ cm-2). Intriguingly, self-consistent physically-motivated
model fitting with MYTorus (Murphy & Yaqoob, 2009) exposes the presence of
globally distributed gas suggesting a more complex environment than a simple
absorber along the line-of-sight.
In this paper we present X-ray observations for 10 additional F2M red quasars,
which doubles the initial sample to more robustly verify the previous results.
All the sources have high resolution imaging which enable us to tie host
galaxy morphology to X-ray properties, including their potential for existing
in a blowout phase. When optical magnitudes are discussed, we specify whether
they are on the AB or Vega system via a subscript. Uncertainties on X-ray
parameters are reported as 90% confidence limits. Throughout this work, we
adopt the concordance $\Lambda$CDM cosmology with $H_{0}=70$ km s-1 Mpc-1,
$\Omega_{M}=0.3$, and $\Omega_{\Lambda}=0.7$.
## 2 The Sample and Observations
### 2.1 Source Selection and Characteristics
Of the $\sim 130$ F2M red quasars, all have optical and/or near-infrared
spectroscopy, as well as photometric coverage from ultraviolet to mid-infrared
wavelengths, and 27 have HST or other high resolution imaging, either targeted
or serendipitous: 24 red quasars have targeted HST imaging from Urrutia et al.
(2008; 13 objects) and Glikman et al. (2015; 11 objects), one red quasar was
targeted in a snapshot HST program (Marble et al., 2003), and another was
serendipitously located in the background of another HST snapshot program
(GO-11604). For this study, we assembled a list of F2M red quasars that
have the following observations in hand: (1) high resolution imaging from HST
or other imaging; (2) optical and/or near-infrared spectra with at least one
broad emission line enabling an estimate of a black-hole mass ($M_{BH}$). We
further required that our targets yield at least 70 counts (see §2.2) in $<20$
ksec Chandra observation and identified eight sources that obeyed these
criteria. In addition, two sources were found in the background of archival
Chandra observations, with one source appearing in two different datasets. Our
sample, therefore, consists of 10 F2M red quasars with new or archival Chandra
observations.
Figure 1 shows the HST image cutouts for these sources, as well as for F2M
J0915. The images were obtained from the archives, except for F2M J1531, whose
point-spread-function (PSF) subtracted WFC3/IR F160W image from Glikman et al.
(2015) is reproduced here. The morphology of F2M J1106 is based on integral
field spectroscopy (IFS) with the GMOS instrument, showing biconical bubbles
in [O iii] (see Shen et al., 2023). Images of the remaining sources listed in
Table 1 are shown in L16 and G17.
Figure 1: Image cutouts of 9 of the 10 F2M red quasars presented in this work,
excluding F2M J1106, but including F2M J0915, which was not presented in the
analysis of L16 or G17. All data are from the HST ACS camera with the F814W
filter, except for F2M J1531, which shows the PSF-subtracted image from Glikman
et al. (2015) from the WFC3/IR camera with the F160W filter. The source for
each image is listed in the final column of Table 3. All images are
$7\arcsec\times 7\arcsec$ except F2M J1324, which is $8\arcsec\times 8\arcsec$
because it has the lowest redshift ($z=0.205$) and hence the largest angular
size, and F2M J1531, which is also $8\arcsec\times 8\arcsec$.
The fifth column of Tables 1 and 2 lists the reddening of each quasar
parametrized by the color excess, $E(B-V)$, which we determined by performing
a linear fit in log space to the ratio of each red quasar spectrum,
$f(\lambda)$, to an unreddened quasar template spectrum, i.e.
$\log{\left[\frac{f(\lambda)}{f_{0}(\lambda)}\right]}=-\frac{k(\lambda)E(B-V)}{1.086}.$
(1)
Here, $f_{0}(\lambda)$ is the optical-to-near-infrared quasar composite
template from Glikman et al. (2006) and $k(\lambda)$ is the Small Magellanic
Cloud (SMC) dust extinction law from Gordon & Clayton (1998). Although
$E(B-V)$ values were already in-hand, we recomputed them for this work as
newer spectra had been obtained for some sources which, in some cases,
broadened the wavelength coverage or, in others, improved the signal-to-noise.
The uncertainties on $E(B-V)$ were computed by heavily smoothing and
perturbing the original spectrum by its own error array and re-fitting it to
determine $E(B-V)$ 1000 times. The reported $E(B-V)$ uncertainty is then the
standard deviation of that $E(B-V)$ distribution.
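The slope fit of Eqn. 1 and the perturb-and-refit uncertainty estimate can be sketched in a few lines of Python (an illustrative reconstruction, not the pipeline actually used; the extinction law below is a toy power law standing in for the SMC curve of Gordon & Clayton 1998, and all function names are ours):

```python
import math
import random

def fit_ebv(wavelengths, f_obs, f_template, k_law):
    """Least-squares estimate of E(B-V) from Eqn. 1:
    log10[f(lam)/f0(lam)] = -k(lam) * E(B-V) / 1.086.
    A straight line through the origin with predictor
    x = -k(lam)/1.086 has slope E(B-V)."""
    xs = [-k_law(lam) / 1.086 for lam in wavelengths]
    ys = [math.log10(f / f0) for f, f0 in zip(f_obs, f_template)]
    # Zero-intercept least squares: slope = sum(x*y) / sum(x*x)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def monte_carlo_ebv(wavelengths, f_obs, f_err, f_template, k_law,
                    n=1000, seed=42):
    """Perturb the spectrum by its error array and refit n times;
    return the mean and standard deviation of the E(B-V) draws."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n):
        f_pert = [max(f + rng.gauss(0.0, e), 1e-30)
                  for f, e in zip(f_obs, f_err)]
        draws.append(fit_ebv(wavelengths, f_pert, f_template, k_law))
    mean = sum(draws) / n
    std = math.sqrt(sum((d - mean) ** 2 for d in draws) / (n - 1))
    return mean, std
```

On a noiseless synthetic spectrum reddened by a known $E(B-V)$, `fit_ebv` recovers the input value exactly, which is a useful sanity check before applying the perturbation loop.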
We determine the black hole masses, $M_{BH}$, from a broad emission line in
the quasars’ spectra. Eight sources were analyzed using the width of a broad
Balmer line, either H$\beta$ or H$\alpha$, choosing the line with the higher
signal-to-noise ratio. We performed multi-component Gaussian fits to the
lines, including narrow emission line components combined with a broad
component. Figure 2 shows these line fits.
We use the established relations from Shen & Liu (2012),
$\log\bigg{(}\frac{M_{\rm
BH,vir}}{M_{\odot}}\bigg{)}=a+b\log\bigg{(}\frac{L_{5100}}{10^{44}\rm
erg/s}\bigg{)}+c\log\bigg{(}\frac{v_{\rm FWHM}}{\rm km/s}\bigg{)},$ (2)
to compute $M_{BH}$ for each line species, employing the full-width at half
maximum (FWHM) in km s$^{-1}$ for the velocity term. When the line used was
H$\alpha$, we adopted the values $a=0.774$, $b=0.520$, $c=2.06$ for sources
with $L_{5100}<10^{45.4}$ erg s$^{-1}$ and $a=1.390$, $b=0.555$, $c=1.873$ for
sources with $L_{5100}>10^{45.4}$ erg s$^{-1}$. For $M_{BH}$ estimates based on
H$\beta$, we adopted the values $a=0.895$, $b=0.520$, $c=2.00$, which apply to
sources with $L_{5100}<10^{45.4}$ erg s$^{-1}$ (the $a$, $b$, $c$ coefficients
are from the calibration of Assef et al., 2011).
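As a concrete reading of Eqn. 2, the coefficient selection and mass computation can be sketched as follows (a minimal illustration; the function and dictionary names are ours, and the coefficients are exactly those quoted above):

```python
import math

# Single-epoch coefficients quoted in the text (Shen & Liu 2012,
# with the Assef et al. 2011 H-beta calibration).
COEFFS = {
    ("Halpha", "lowL"):  (0.774, 0.520, 2.06),   # L5100 < 10^45.4 erg/s
    ("Halpha", "highL"): (1.390, 0.555, 1.873),  # L5100 > 10^45.4 erg/s
    ("Hbeta",  "lowL"):  (0.895, 0.520, 2.00),   # L5100 < 10^45.4 erg/s
}

def log_mbh(line, l5100_erg_s, fwhm_km_s):
    """Virial black-hole mass, log10(M_BH/Msun), from Eqn. 2."""
    regime = "lowL" if l5100_erg_s < 10 ** 45.4 else "highL"
    a, b, c = COEFFS[(line, regime)]
    return (a
            + b * math.log10(l5100_erg_s / 1e44)
            + c * math.log10(fwhm_km_s))
```

For example, an H$\alpha$ line with FWHM of 4000 km s$^{-1}$ and $L_{5100}=10^{45}$ erg s$^{-1}$ yields $\log(M_{BH}/M_\odot)\approx 8.7$.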
Two sources, F2M J0825 and F2M J1532, have only narrow H$\beta$ lines visible
in their optical spectrum, likely because the broad component has experienced
significant extinction from dust. The H$\alpha$ line is shifted into a noisy
part of the optical and near-infrared spectra precluding our ability to
perform reliable Gaussian fitting. Both sources exhibit broad Pa$\beta$
emission in their near-infrared spectrum and their $M_{BH}$ values were
computed in Kim et al. (2015) along with 14 other red quasars using a single-
epoch relation derived by Kim et al. (2010) for Paschen lines,
$\log\bigg{(}\frac{M_{\rm
BH,vir}}{M_{\odot}}\bigg{)}=a+b\log\bigg{(}\frac{L_{{\rm Pa}\beta}}{10^{42}\rm
erg/s}\bigg{)}+c\log\bigg{(}\frac{v_{\rm FWHM}}{1000\rm km/s}\bigg{)},$ (3)
where $a=7.04$, $b=0.48$, and $c=2$. Kim et al. (2010) calibrated this
relation using near-infrared spectra of unreddened quasars and found that they
agree with the Balmer-line-based relations to within 0.18-0.24 dex.
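Eqn. 3 differs from Eqn. 2 only in its pivot values; a sketch with the quoted coefficients (illustrative function name, ours):

```python
import math

def log_mbh_pabeta(l_pabeta_erg_s, fwhm_km_s):
    """Virial black-hole mass, log10(M_BH/Msun), from the Pa-beta
    relation of Kim et al. (2010), Eqn. 3, with a=7.04, b=0.48, c=2."""
    a, b, c = 7.04, 0.48, 2.0
    return (a
            + b * math.log10(l_pabeta_erg_s / 1e42)
            + c * math.log10(fwhm_km_s / 1000.0))
```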
In addition to line widths, the $M_{BH}$ relations require a luminosity at a
particular wavelength to estimate the radial distance to the broad line
region. Because the luminosities in the $M_{BH}$ relations are at optical and
UV wavelengths, which are affected by reddening in these quasars, we
interpolate their Wide-field Infrared Survey Explorer (AllWISE; Wright et al.,
2010; Mainzer et al., 2011) mid-infrared fluxes to estimate the rest-frame
6$\mu$m luminosity, $L_{6\mu{\rm m}}$, and scale it to the optical flux using
the ratio of the bolometric corrections for 6$\mu$m (7.6) and 5100Å (10) from
the mean quasar spectral energy distribution (SED) of Richards et al. (2006).
To determine the uncertainty on $L_{6\mu{\rm m}}$, we perturb each SED by its
photometric errors, drawing from a Gaussian distribution, to generate 1000
SEDs which we interpolate to measure $L_{6\mu{\rm m}}$; the reported
uncertainty is the standard deviation of the resulting $L_{6\mu{\rm m}}$
distribution. The bolometric luminosities ($L_{\rm bol}$) listed in Tables 1
and 2 are determined by applying the same 6$\mu$m bolometric correction of 7.6
to $L_{6\mu{\rm m}}$. Combining $L_{\rm bol}$ with $M_{BH}$, we compute an
Eddington ratio ($\lambda_{\rm Edd}$) for each source. Table 2 lists the
quasars, their positions, redshifts, and $E(B-V)$, as well as the source of
the imaging, $M_{BH}$, $L_{\rm bol}$, and $\lambda_{\rm Edd}$.
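The final two steps, the bolometric correction and the Eddington ratio, reduce to one line each. A sketch, assuming the standard Eddington luminosity for solar-composition gas, $L_{\rm Edd}\approx 1.26\times 10^{38}\,(M_{BH}/M_{\odot})$ erg s$^{-1}$ (the tabulated values derive from Kim et al. 2015 and the fits above, so exact agreement with a recomputation is not expected):

```python
import math

def log_lbol_from_l6um(log_l6um_erg_s, bol_corr=7.6):
    """Bolometric luminosity via the 6-micron bolometric correction
    of Richards et al. (2006)."""
    return log_l6um_erg_s + math.log10(bol_corr)

def eddington_ratio(log_lbol_erg_s, mbh_msun):
    """lambda_Edd = L_bol / L_Edd, with
    L_Edd = 1.26e38 * (M_BH / Msun) erg/s."""
    return 10.0 ** log_lbol_erg_s / (1.26e38 * mbh_msun)
```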
Table 2: Properties of Newly Added F2M Red Quasars Name | R.A. | Decl. | Redshift | $E(B-V)$ | $M_{BH}$ | Line | $L/L_{\rm Edd}$ | $\log{L_{\rm bol}}^{\sharp}$ | Merger? | Image Ref
---|---|---|---|---|---|---|---|---|---|---
| (J2000) | (J2000) | | (mag) | ($10^{8}M_{\odot}$) | | | (erg s$^{-1}$) | |
F2M J0825a | 08:25:02.00 | +47:16:52.0 | 0.804 | $0.68\pm 0.01$ | 12.25$\pm$2.98 | Pa$\beta$ | $0.67\pm 0.18$ | $47.353\pm 0.007$ | Y | U08
F2M J0834a | 08:34:07.00 | +35:06:01.8 | 0.470 | $0.91\pm 0.02$ | 31$\pm$10† | H$\alpha$ | $0.04\pm 0.01$ | $46.18\pm 0.01$ | N | U08
F2M J1106a | 11:06:48.30 | +48:07:12.3 | 0.435 | $0.443\pm 0.002$ | 8.2$\pm$2.3 | H$\alpha$ | $0.52\pm 0.15$ | $46.734\pm 0.005$ | ? | S23
F2M J1118a | 11:18:11.10 | $-$00:33:41.9 | 0.686 | $0.61\pm 0.01$ | 6.4$\pm$1.8 | H$\alpha$ | $1.1\pm 0.3$ | $46.943\pm 0.008$ | Y | U08
F2M J1151a | 11:51:24.10 | +53:59:57.4 | 0.780 | $0.67\pm 0.01$ | 5.86$\pm$0.04 | H$\beta$ | $0.50\pm 0.02$ | $46.57\pm 0.02$ | N | U08
F2M J1324a | 13:24:19.90 | +05:37:05.0 | 0.205 | $0.326\pm 0.003$ | 5.5$\pm$1.5‡ | H$\alpha$ | $0.76\pm 0.21$ | $46.718\pm 0.005$ | Y | HST archive
F2M J1507a | 15:07:18.10 | +31:29:42.3 | 0.988 | $0.644\pm 0.003$ | 4.6$\pm$1.3⋆ | H$\alpha$ | $1.48\pm 0.42$ | $46.93\pm 0.01$ | Y | U08
F2M J1531a | 15:31:50.47 | +24:23:17.6 | 2.287 | $0.311\pm 0.004$ | 80$\pm$22 | H$\alpha$ | $1.61\pm 0.46$ | $48.21\pm 0.02$ | Y | G15
F2M J1532a | 15:32:33.19 | +24:15:26.8 | 0.564 | $0.68\pm 0.03$ | 6.12$\pm$4.87 | Pa$\beta$ | $0.29\pm 0.27$ | $46.586\pm 0.007$ | Y | U08
F2M J1715a | 17:15:59.80 | +28:07:16.8 | 0.523 | $0.786\pm 0.003$ | 8.7$\pm$2.4 | H$\alpha$ | $0.33\pm 0.09$ | $46.553\pm 0.008$ | N | M03
♯ Bolometric luminosities were determined by applying a bolometric correction
of 7.6 (Richards et al., 2006) to the 6$\mu$m luminosity which was determined
by interpolating the WISE mid-infrared luminosities in the rest-frame.
† This object has a double-peaked emission line shape, resulting in a likely
over-estimated $M_{BH}$
‡ This source was fit with multiple Gaussian components to account for the
presence of narrow lines.
⋆ This source has a blue-shifted broad line, which we do not use in our
estimate of $M_{BH}$ (see §A.1).
a $M_{BH}$ and $L/L_{\rm Edd}$ were determined in Kim et al. (2015).
Comments – M03 = Marble et al. (2003); U08 = Urrutia et al. (2008); S23 = Shen
et al. (2023); U12 = Urrutia et al. (2012); G15 = Glikman et al. (2015)
Figure 2: Gaussian fitting to emission lines for eight quasars in our sample
that lack $M_{BH}$ from Kim et al. (2015). The black line shows the observed
flux against rest wavelength. The blue line is the best-fit model to the line
profile. The sloped dotted red line is the continuum portion of the best-fit
model. In most cases, a single Gaussian sufficiently fits the data. The three
exceptions, from left to right, are: (1) F2M J1151, whose [O iii] line doublet
is fit together with H$\beta$. (2) F2M J1324, whose H$\alpha$ line is
decomposed into broad and narrow components and fit along with the [N ii]
doublet. In this case the narrow line width is determined by fitting
to the [S ii] doublet, shown in green. And, (3) F2M J1507, whose broad
H$\alpha$ line is double-peaked with a blue-shifted component separated by 91Å
(see Appendix A.1 for additional discussion of this source).
### 2.2 Chandra Observations
We obtained Chandra observations in Cycle 21 of eight red quasars that met the
selection requirements outlined in Section 2.1 and had no archival X-ray data
(GO 21700216, PI: Glikman). We designed our observing strategy aiming for
70 counts in the Chandra energy range, which we estimated using the
interpolated 6$\mu$m luminosity and the $L_{X}-L_{IR}$ relation, modified to
reflect the trends seen for red quasars in L16 and G17 (i.e., $\sim 1$ dex
below the Chen et al. 2017 relation which accounts for any intrinsic $N_{H}$;
see §4.2), imposing a 5 ksec minimum on the brighter sources. Table 3 lists
the details of the Chandra observations for the eight sources as well as the
two sources with archival observations. Given that the scatter in the
$L_{X}-L_{IR}$ relation is on a logarithmic scale, while photon detection
rates are linear, our total counts vary significantly from the expected 70.
We processed the data with CIAO v4.15 and CALDB v4.10.4 (Fruscione et
al., 2006), using the chandra_repro task to produce a filtered events file,
removing periods of anomalously high background. For all but one of the
observations, a spectrum was extracted using a 5″ radius aperture around the
object using the CIAO tool specextract, with the background extracted from an
annulus around the quasar with inner radius 10″ and outer radius 35″. F2M
J1532 was present in two archival observations. One of the archival
observations had F2M J1532 near the edge of the I2 chip on the ACIS-I detector
where the PSF is significantly larger; we use a 35″ radius aperture around the
source and an offset circular aperture with a 120″ radius far from any sources
for the background. The total net counts detected are listed in Table 3 as
reported by the CIAO task dmlist.
Table 3: Summary of Chandra Observations Name | ObsID | Date | $N_{\rm H,Galactic}$ | Net Exposure Time | Net Counts
---|---|---|---|---|---
| | | (1020 cm-2) | (ksec) | (0.5 - 7 keV cnts)
F2M J0825 | 22570 | 2019 December 20 | 4.15 | 9.94 | 218$\pm$15
F2M J0834 | 22571 | 2019 December 15 | 4.03 | 9.94 | 4$\pm$2
F2M J1106 | 22572 | 2019 October 24 | 1.38 | 5.99 | 2$\pm$2
F2M J1118 | 22573 | 2020 January 21 | 4.43 | 11.91 | 50$\pm$7
F2M J1151 | 22574 | 2020 August 7 | 1.33 | 17.83 | 22$\pm$5
F2M J1324 | 22575 | 2020 January 21 | 2.32 | 5.0 | 7$\pm$3
F2M J1507 | 22576 | 2019 November 9 | 1.66 | 19.80 | 70$\pm$9
F2M J1531 | 3336 | 2002 September 25 | 3.61 | 5.06 | 1$\pm$1
F2M J1532 | 3138† | 2001 April 30 | 4.14 | 47.13 | 441$\pm$24
… | 3338 | 2002 July 2 | … | 4.90 | 57$\pm$8
F2M J1715 | 22577 | 2019 October 5 | 3.79 | 8.95 | 255$\pm$16
† Due to being far off-axis, this source was extracted with a 35″ radius
aperture, with a nearby 120″-radius circular aperture for the background.
## 3 X-ray fitting
### 3.1 Basic fits
We perform spectral analysis only on sources with $>50$ counts. Three sources
are well detected with $>100$ counts, which we grouped by a minimum of 5
counts per bin. Another two sources have between 50 and 100 counts, which we
grouped by 2 counts per bin. We use the X-ray fitting software XSpec v12.13.0
(Arnaud, 1996) to model these sources, adopting the Cash statistic (C-stat;
Cash, 1979) with direct background subtraction (Wachter et al., 1979).
We began by fitting a simple power-law model,
${\tt phabs*zpowerlw},$ (4)
allowing only absorption from gas in the Milky Way (phabs). Table 3 lists the
Galactic hydrogen column density, determined using the colden CIAO task, which
we freeze in all our fits. Given that red quasars experience absorption at
optical wavelengths, we further fit an absorbed power-law model,
${\tt phabs*zphabs*zpowerlw},$ (5)
with absorption occurring both at the source (zphabs) and in the Milky Way to
look for potential intrinsic obscuration in the source. Finally, because the
previous analyses of the X-ray spectra of red quasars revealed an excess of
soft X-ray flux below 2 keV, suggesting that there may be scattered or leaked
light at lower energies in excess of the absorbed primary continuum (L16;
G17), we fit a double-absorbed power law with the same photon index for both
components,
${\tt phabs*(zpowerlw+zphabs*zpowerlw)}.$ (6)
We use an F-test to decide whether an additional component significantly
improves the fit, requiring a probability of $>95\%$. We report in Table 4 the
fitted parameters for these sources, indicating in the second column the model
equation used in the best fit.
### 3.2 Complex fits
For one source, F2M J1507, there appears to be a reduction in flux around
$4-5$ keV, which may be due to blue-shifted absorption from an outflow. We
discuss the unusual spectral properties and perform more detailed fitting of
this object to account for this absorption in Appendix A.1.
In another source, F2M J1532, we noted the presence of an Fe K$\alpha$ line
suggestive of reflection off a distant medium. While such emission is
typically seen in Type 2 AGN, where the reflection occurs off of the obscuring
torus, it has been seen in at least one red quasar (F2M 0830; Piconcelli et
al., 2010; LaMassa et al., 2016b), where the scattering may be due to clouds
farther out from the nucleus. To address such scenarios, we turn to the
MYTorus model of Murphy & Yaqoob (2009) which solved the radiative transfer of
X-rays from an AGN including scattering off of a torus, line-of-sight
absorption, as well as leakage or scattered light. In XSpec, the model is
defined similarly to Eqn. 6:
$C\times{\tt phabs}\times[{\tt zpowerlw}\times{\tt
MYTorusZ(N_{H,Z},\theta_{\rm obs},E)}\\\ +A_{S}\times{\tt
MYTorusS(}\Gamma\tt{,N_{H,S},\theta_{\rm obs},E)}\\\ +A_{L}\times{\tt
MYTorusL(}\Gamma\tt{,N_{H,S},\theta_{\rm obs},E)}\\\ +f_{\rm scatt}\times{\tt
zpowerlw}].\\\ $ (7)
where E is the observed energy and MYTorusZ, MYTorusS, and MYTorusL are tables
that contain pre-calculated parameters derived via Monte Carlo calculations
that take into account the reprocessing of the intrinsic AGN continuum in a
toroidal structure for a range of column densities. MYTorusZ is the so-called
‘zeroth-order spectrum’, and represents the intrinsic continuum that makes it
through any absorbing or scattering medium along the line-of-sight
(mytorus_Ezero_v00.fits). MYTorusS tabulates Compton-scattered emission that
is added to the zeroth-order spectrum (mytorus_scatteredH500_v00.fits).
MYTorusL provides fluorescent line emission that is also added to the zeroth-
order spectrum (mytl_V000010nEp000H500_v00.fits, where H500 refers to the
termination energy of the model, 500 keV). This model setup is the same as
that previously used for the analysis of F2M red quasars (L16; G17) as well as
of 3C 223, whose complex X-ray spectrum has characteristics similar to F2M
J1532 (LaMassa
et al., 2023). All three MYTorus components are needed in order to preserve
the self-consistency of the model. We discuss the detailed fitting of F2M
J1532 in Appendix A.2.
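In XSpec command syntax, a setup along the lines of Eqn. 7 combines multiplicative and additive table models (a schematic fragment following the MYTorus documentation conventions, not a verbatim transcription of our scripts; the constant factors implement $C$, $A_S$, $A_L$, and $f_{\rm scatt}$, and $\Gamma$, $N_{H}$, and $\theta_{\rm obs}$ must be linked across the three components to preserve self-consistency):

```
model constant*phabs*(zpowerlw*etable{mytorus_Ezero_v00.fits}
      + constant*atable{mytorus_scatteredH500_v00.fits}
      + constant*atable{mytl_V000010nEp000H500_v00.fits}
      + constant*zpowerlw)
```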
We report the results of these complex fits in Appendices A.1 and A.2.
However, since the essential parameters ($\Gamma$, $N_{H}$, $f_{\rm scatt}$)
from the complex fits are similar to the phenomenological results, we adopt
the phenomenological parameters in the subsequent analysis.
Table 4: Best fit parameters for high count sources Name | Model | $\Gamma$ | $\log{N_{H}}$ | $f_{\rm scatt}$ | C-stat | HR
---|---|---|---|---|---|---
 | Eqn. | | (cm$^{-2}$) | (%) | (DOF) |
F2M J0825 | 5 | $2.38^{+0.57}_{-0.54}$ | $22.78^{+0.17}_{-0.24}$ | … | 44.49 (36) | 0.19
F2M J1118 | 5 | $2.18^{+1.18}_{-0.99}$ | $22.45^{+0.36}_{-1.85}$ | … | 10.96 (22) | $-0.06$
F2M J1507 | 6 | $1.8$♯ | $23.5\pm 0.4$ | 16 | 28.29 (27) | 0.31
F2M J1532† | 6 | $1.3\pm 0.5$ | $22.90^{+0.19}_{-0.33}$ | 11 | 113.48 (120) | 0.44, 0.38‡
F2M J1715 | 4 | $1.57\pm 0.21$ | … | … | 57.72 (42) | $-0.08$
♯ The photon index for this fit was fixed due to the small number of bins.
(See §A.1 for a more complex modeling approach).
†This source was fit jointly with both observations listed in Table 3.
‡ The HRs were computed separately for each of the observations listed in
Table 3, with the longer observation (ObsID 3138) listed first.
Figure 3: Best model fits to the X-ray spectra for counts in the energy range
0.5 – 7 keV, as described in Table 4. Data are shown as points with error
bars. The solid lines represent the best-fit model with dotted lines
representing the individual components of a partially-covered model (Eqn. 6),
when applicable. The bottom panels of each figure show the counts-to-model
ratios. For F2M J1532, the black and red points represent the two archival
data sets used for the fitting (ObsID 3138 and ObsID 3338, respectively). The
best fit models are coloured correspondingly.
### 3.3 Hardness ratios in the low count regime
For the three sources with $\gtrsim 5$ and $\lesssim 50$ counts – an
insufficient amount for spectral modeling – we instead report hardness ratios
(HRs), which are a meaningful proxy for X-ray absorption. The HR is defined by
comparing the net counts in the soft ($S$) and hard ($H$) bands, defined as
$0.5-2$ keV and $2-7$ keV in the observed frame, respectively, as appropriate
for Chandra’s energy response, via the expression $(H-S)/(H+S)$. We determine
these counts via the CIAO command dmcopy which filters the events file to
create an image with just the photons in each energy band. We then apply the
same source and background regions to measure the source counts using the CIAO
tool dmextract.
Given that we are in this low-count regime, we employ the Bayesian Estimation
of Hardness Ratios (BEHR; Park et al., 2006) code which determines HRs,
properly handling uncertainties in the Poisson limit, including non-
detections. We report in Table 5 the hard and soft counts as well as the mode
of the HR determined by BEHR. The stated uncertainties represent the lower and
upper bounds reported by BEHR.
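For orientation, the HR itself and a naive Gaussian error propagation look like this (a sketch with function names of our own choosing; as noted above, the BEHR posterior, not Gaussian propagation, is the appropriate treatment at these count levels):

```python
import math

def hardness_ratio(hard, soft, hard_err=None, soft_err=None):
    """HR = (H - S)/(H + S) from net counts in the observed-frame
    2-7 keV (hard) and 0.5-2 keV (soft) bands.  Optional Gaussian
    error propagation is included for comparison only."""
    total = hard + soft
    hr = (hard - soft) / total
    if hard_err is None or soft_err is None:
        return hr, None
    # d(HR)/dH = 2S/(H+S)^2 ;  d(HR)/dS = -2H/(H+S)^2
    err = (2.0 / total ** 2) * math.hypot(soft * hard_err, hard * soft_err)
    return hr, err
```

With the F2M J1151 net counts from Table 5, this returns HR $\approx 0.47$, close to (but not identical with) the BEHR mode of 0.49, as expected given the different statistical treatments.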
Assuming an absorbed power-law model for these red quasars (Eqn. 5) and fixing
the power-law index to $\Gamma=1.8$, we can crudely approximate the column
density responsible for the measured HR. Following a similar approach
described in Martocchia et al. (2017), we simulate such a spectrum with the
WebPIMMS interface (https://cxc.harvard.edu/toolkit/pimms.jsp), setting the
appropriate Cycle of the observations, providing the soft count rate for each
quasar, varying the intrinsic $N_{H}$, and computing the HR from the predicted
hard count rate until the lower bound reported by BEHR is reached. We then
regard the $N_{H}$ value from the simulated spectrum as representing a lower-
limit for the absorption. We report these values in Table 5 as well. We note
that the $N_{H}$ values derived from HRs are highly simplified, as they
neglect scattering or leakage of X-ray photons (e.g., Eqn. 6) and assume a
fixed power-law continuum, $\Gamma$.
Because they are computed in the observed band, HRs depend on redshift, with
the strongest dependence occurring for moderately absorbed sources
($10^{22}<N_{H}<10^{23}$ cm$^{-2}$) at $z<1$ (LaMassa et al., 2016a), which is
where all the quasars with computed HRs in this paper lie. Nonetheless, in all
cases, HRs $\gtrsim 0$ imply $N_{H}\gtrsim 10^{22}$ cm$^{-2}$. Mindful of
these considerations, we compare these values to HRs measured in other red
quasar samples.
Glikman et al. (2018) surveyed the $270$ deg$^{2}$ equatorial region known as
SDSS Stripe 82 (Frieman et al., 2008), which contains a wealth of multi-
wavelength ancillary data. Using near-to-mid infrared selection to a
relatively shallow flux limit of 20 mJy at 22$~{}\mu$m, they identified 21 red
QSOs, most lacking a radio detection in FIRST but with characteristics
otherwise similar to those of the F2M red quasars. Four red QSOs in that study
had X-ray detections that allowed for HR measurements. Their redshifts span
$z=0.2$ to $z=0.83$ and HR $=-0.085$ to HR $=0.863$, with the $z=0.200$ object
having HR $=0.792$, thus placing them all in the moderately absorbed
($N_{H}>10^{22}$ cm$^{-2}$) regime.
In an X-ray selected red QSO sample over Stripe 82, reaching significantly
fainter sources (including SDSS drop-outs) and thus higher redshifts up to
$z=2.5$, LaMassa et al. (2017) find 12 sources displaying features consistent
with the evolutionary paradigm proposed for the F2M red quasars. LaMassa et
al. (2017) measure a range of HRs and, applying a similar translation between
HR and $N_{H}$, find that half have $N_{H}>10^{22}$ cm-2 with three sources
consistent with no absorption. The same caveats about soft excess due to
scattering and leakage apply here as well such that higher-count X-ray spectra
may reveal more complex physics than a simple absorbed power law.
Table 5: Hardness ratios and absorption in low-count sources Name | Net Soft | Net Hard | HR | $\log(N_{H})$
---|---|---|---|---
| (counts) | (counts) | | (cm-2)
F2M J1106 | $<0.07$ | $2.33\pm 1.74$ | $0.99_{-0.39}^{+0.01}$ | $>22.9$
F2M J1151 | $5.91\pm 2.65$ | $16.31\pm 4.25$ | $0.49_{-0.21}^{+0.19}$ | $>22.8$
F2M J1324 | $2.69\pm 1.73$ | $4.44\pm 2.24$ | $0.29\pm 0.38$ | 21.7 (22.4)†
† This source’s lower and upper bound spanned a very broad range; we provide
in parentheses the $N_{H}$ corresponding to the mode value.
### 3.4 Upper limits for undetected sources
Two sources, F2M J0834 and F2M J1531, have counts consistent with non-
detections. For these sources, we follow the CIAO thread for calculating
source count rates and model-independent fluxes. We compute the flux over the
full energy range using the task srcflux, modeling the flux as being absorbed
by Milky Way gas (phabs). We note that F2M J1531 is the only high redshift
source in this sample, having $z=2.287$ while the rest are all at $z<1$, and
is thus the only one with imaging from Glikman et al. (2015). Therefore,
although its morphology shows evidence of a merger, the heterogeneity of the
imaging does not impact the results presented in Section 4.
Having extracted flux information from all ten sources, we present the soft
(0.5–2 keV), hard (2–10 keV), and full (0.5–10 keV) X-ray fluxes in Table
6. (Although the hard band was defined as $2-7$ keV when computing HRs, we
define it as $2-10$ keV when reporting fluxes and luminosities so we can
compare them with established X-ray relations in the literature that use that
band definition.) We also compute the X-ray luminosities in the 2-10
keV band, which are used to compare with other emission diagnostics in Section
4. For objects with sufficient counts to enable spectral fitting (i.e., those
listed in Table 4) we measure and report the observed luminosity using the
best-fit model; we omit Milky Way absorption in this calculation. We then also
report an absorption-corrected luminosity by defining a simple zpow model with
the best-fit power-law index ($\Gamma$) and its normalization. The
uncertainties on the luminosity are derived from the uncertainty on the power-
law normalization. For the low count objects (i.e., those listed in Table 5),
we determine their luminosities assuming a power-law spectrum (zpow) with an
index of $\Gamma=1.8$. We normalize this model based on a fit to the low count
data using the model in Eqn 5, and derive the uncertainties on the luminosity
from the uncertainties on the power-law normalization in this model. Given
that $N_{H}$ for these sources was estimated from the HRs, which are already
uncertain, we do not compute a luminosity from the observed data. We do not
compute a luminosity for the two sources that we deem to be undetected and for
which we report upper limits to their fluxes. Table 6 also reports the rest-
frame absorption-corrected 2-10 keV luminosities as well as their rest-frame
$6\mu$m luminosities, which are determined by interpolating between the WISE
photometric bands, as described in Section 2.1.
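The flux-to-luminosity step uses the cosmology adopted in Section 1; for a power-law spectrum the standard K-correction is $(1+z)^{\Gamma-2}$. It can be reproduced with a short numerical sketch (illustrative, self-contained code, not the XSpec-based calculation actually used):

```python
import math

# Concordance cosmology adopted in this work
H0 = 70.0                     # km/s/Mpc
OMEGA_M, OMEGA_L = 0.3, 0.7
C_KM_S = 299792.458           # speed of light, km/s
MPC_CM = 3.0857e24            # one Mpc in cm

def luminosity_distance_cm(z, steps=10000):
    """Flat LCDM luminosity distance: trapezoidal integration of
    D_C = (c/H0) * int_0^z dz'/E(z'), with d_L = (1+z) * D_C."""
    ez = lambda zp: math.sqrt(OMEGA_M * (1 + zp) ** 3 + OMEGA_L)
    h = z / steps
    integral = sum((1 / ez(i * h) + 1 / ez((i + 1) * h)) * h / 2
                   for i in range(steps))
    return (1 + z) * (C_KM_S / H0) * integral * MPC_CM

def log_lx(flux_cgs, z, gamma=1.8):
    """Rest-frame X-ray luminosity from an observed-band flux,
    with the power-law K-correction (1+z)^(gamma-2)."""
    dl = luminosity_distance_cm(z)
    return math.log10(4 * math.pi * dl ** 2 * flux_cgs
                      * (1 + z) ** (gamma - 2))
```

For this cosmology, $d_L(z=0.5)\approx 2830$ Mpc, so a $10^{-13}$ erg cm$^{-2}$ s$^{-1}$ flux at that redshift corresponds to $\log L \approx 44$.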
Table 6: Observed X-ray Fluxes Name | $F_{0.5-2~{}{\rm keV}}$ | $F_{2-10~{}{\rm keV}}$ | $F_{0.5-10~{}{\rm keV}}$ | $\log L_{2-10~{}{\rm keV,int}}$ | $\log L_{6~{}\mu{\rm m}}$
---|---|---|---|---|---
| ($10^{-14}$ erg cm-2 s-1) | ($10^{-14}$ erg cm-2 s-1) | ($10^{-14}$ erg cm-2 s-1) | (erg s-1) | (erg s-1)
F2M J0825 | $5.4_{-4.2}^{+0.5}$ | $27.9_{-18.3}^{+0.7}$ | $33.3_{-32.8}^{+1.4}$ | $45.10^{+0.46}_{-0.44}$ | 46.472$\pm$0.007
F2M J0834 | … | … | $<1.1$† | … | 45.30$\pm$0.01
F2M J1106 | $<0.0003$ | $2.4_{-1.9}^{+1.5}$ | $2.4_{-2.2}^{+1.3}$ | $43.58^{+0.45}_{-0.80}$‡ | 45.854$\pm$0.005
F2M J1118 | $<1.4$ | $4.9_{-4.5}^{+0.1}$ | $<6.2$ | $44.08^{+0.81}$ | 46.062$\pm$0.008
F2M J1151 | $0.11_{-0.03}^{+0.02}$ | $2.8_{-0.7}^{+0.8}$ | $2.9_{-0.7}^{+0.6}$ | $43.96^{+0.15}_{-0.18}$‡ | 45.69$\pm$0.02
F2M J1324 | $0.22_{-0.09}^{+0.1}$ | $2.5_{-1.0}^{+1.3}$ | $2.7_{-1.4}^{+1.2}$ | $42.53^{+0.27}_{-0.37}$‡ | 45.837$\pm$0.005
F2M J1507 | $1.1_{-0.6}^{+0.2}$ | $7.3_{-1.3}^{+2.6}$ | $8.4_{-2.9}^{+1.2}$ | $44.46^{+0.21}_{-0.29}$ | 46.05$\pm$0.01
F2M J1531 | … | … | $<0.6$† | … | 47.33$\pm$0.02
F2M J1532 | $2.0_{-0.4}^{+0.2}$ | $36_{-17}^{+1}$ | $38.0_{-13}^{+1}$ | $44.56^{+0.46}_{-0.49}$ | 45.706$\pm$0.007
F2M J1715 | $13.9_{-1.5}^{+1.6}$ | $35_{-6}^{+4}$ | $48.0_{-3.7}^{+3.3}$ | $44.48\pm 0.11$ | 45.673$\pm$0.008
† These fluxes are reported over the 0.5-7 keV range as they are derived
directly from the data with the srcflux task on undetected sources.
Note – Upper limits are quoted when XSpec returns a 1-$\sigma$ lower limit of
0.
‡ These sources had too few counts for spectral modeling. Their intrinsic
luminosities are modeled from a fixed $\Gamma=1.8$
## 4 Results and Discussion
Following the definition and identification of F2M red quasars as a population
in Glikman et al. (2004), several other reddened and obscured quasar samples
have been constructed using various definitions that exhibit similar
characteristics of being in a transitional phase of quasar evolution. Many of
these samples’ selection criteria overlap the F2M selection, but extend along
other parametric axes. We summarize here the various reddened AGN populations
and compare their X-ray-derived properties to the F2M sample in this work.
F2M red quasars were selected by applying the optical to near-infrared colour
cuts of $(R-K)_{\rm Vega}>4$ mag and $(J-K)_{\rm Vega}>1.7$ mag to sources
with matches in FIRST and 2MASS. An essential selection criterion for F2M red
quasars is that they exhibit at least one broad ($v_{\rm FWHM}>1000$ km s-1)
emission line in their spectrum; therefore F2M red quasars are by definition
Type 1 sources. Although the $J-K$ colour cut avoids most low mass (M class)
stars, they remain a strong contaminant since they are abundant in the Galaxy
and have colours that resemble those of reddened quasars (cf. Warren et al., 2000).
Radio selection was invoked to more thoroughly avoid them, but as a result the
F2M survey misses large numbers of radio-faint red quasars.
Banerji et al. (2012) and Temple et al. (2019) invoked a more stringent
$(J-K)_{\rm Vega}>2.5$ mag colour cut which naturally identifies more heavily
reddened systems at higher redshifts ($z\gtrsim 1.5$). The sample is
restricted to broad line (Type 1) sources and consists of $\sim 50$ objects.
These heavily reddened quasars (HRQs) are also intrinsically more luminous and
show outflows in [O iii]. Although no rest-frame high-resolution optical
imaging exists to identify whether the HRQs reside in merging hosts, an ALMA
observation of one HRQ, J2315, does show merging evidence (Banerji et al.,
2021).
Aiming to exploit the mid-infrared photometry from WISE, which is less
sensitive to dust extinction, a population of hyperluminous, hot dust obscured
galaxies (Hot DOGs; Wu et al., 2012; Tsai et al., 2015) was identified by the
“W1W2 dropout” method such that they are weak or undetected at 3.4 $\mu$m and
4.6 $\mu$m but bright at 12 $\mu$m and 22 $\mu$m. There are only $\sim 1000$
such sources across the entire extragalactic sky and their redshifts are in
the cosmic noon era ($z\simeq 2-4$). These objects likely contain buried AGN
whose presence is implied by hot dust temperatures $\sim 60-120$ K and optical
spectroscopic diagnostic features, though broad lines are often not seen. Fan
et al. (2016) investigated the morphologies of 18 Hot DOGs with HST imaging
and found a merger fraction of $62\pm 14$%, which is lower than that of the
F2M red quasars ($>80$%) but higher than that of unobscured AGN hosts ($\sim
30$%; Villforth, 2023). Farrah et al. (2017) find a similar merger fraction ($\sim
75\%$) in their HST study of Hot DOGs but conclude that this high fraction is
reflective of the massive galaxy population at $z\sim 2$. Hot DOGs have been
rarely detected in X-rays, likely due to the common presence of heavy
($N_{H}>10^{24}$ cm-2) absorption (e.g., Piconcelli et al., 2015; Vito et al.,
2018). Hot DOGs are also interpreted as representing an evolutionary phase.
‘Extremely red quasars’ (ERQs) were selected by the optical-to-mid-infrared
colour $r_{\rm AB}-W4_{\rm Vega}>14$ mag in Ross et al. (2015) and $i_{\rm
AB}-W3_{\rm Vega}>9.8$ mag plus C iv line properties indicative of outflows in
Hamann et al. (2017) resulting in $\gtrsim 300$ ERQs. These criteria pick out
objects that are more heavily reddened than the F2M red quasars at redshifts
similar to the HRQs ($2<z<4$). ERQs contain Type 1 and Type 2 sources, the
latter exhibiting significant amounts of polarization (Alexandroff et al.,
2018; Zakamska & Alexandroff, 2023). Their hosts are largely not in mergers
(only 2/10 sources studied show merger activity; Zakamska et al., 2019) but
exhibit powerful winds seen in broad [O iii] emission lines in excess of 1000
km s-1 and with sufficient energy to impact their hosts (Vayner et al., 2021;
Lau et al., 2022).
Aiming to overcome the radio-selection of the F2M survey, and to exploit the
wealth of multi-wavelength data in the SDSS Stripe 82 region, LaMassa et al.
(2016a) and Glikman et al. (2018) used X-ray selection and WISE colours,
respectively, to identify additional samples of red quasars.
In addition, Jun et al. (2020) performed a meta-analysis of the aforementioned
obscured AGN populations to relate their X-ray absorption and dust extinction
properties. In the following sections, we compare the F2M quasars in this work
to those compiled results and place F2M red quasars in the broader context of
luminous obscured quasars.
### 4.1 Dust-to-gas ratios
The X-ray data for the F2M red quasars presented here provide a measure of the
column density, $N_{H}$, which parametrizes the absorption due to atomic gas
along the line of sight. This value is determined via spectral fitting for the
five sources with $\gtrsim 50$ counts and is considered more reliable than the
$N_{H}$ estimated from the HR measured for sources fewer counts. The dust
extinction, parametrized by $E(B-V)$, is reported in Tables 1 and 2. Together,
$E(B-V)$ and $N_{H}$ provide constraints on the nature of the absorber, namely
its dust-to-gas ratio.
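As an illustrative sketch of this diagnostic (the numerical inputs below are hypothetical, not measurements from this work), the dust-to-gas ratio can be computed directly from the two quantities and compared against the Milky Way reference of Bohlin et al. (1978):

```python
import math

# Milky Way ISM reference: N_H / E(B-V) ~ 5.8e21 cm^-2 mag^-1 (Bohlin et al. 1978),
# i.e. E(B-V)/N_H ~ 1.7e-22 mag cm^2.
MW_RATIO = 1.0 / 5.8e21  # mag cm^2

def log_dust_to_gas(ebv_mag, nh_cm2):
    """log10 of the dust-to-gas ratio E(B-V)/N_H, in mag cm^2."""
    return math.log10(ebv_mag / nh_cm2)

# Hypothetical reddened quasar: E(B-V) = 0.5 mag, N_H = 3e22 cm^-2
print(f"log E(B-V)/N_H = {log_dust_to_gas(0.5, 3e22):.2f}")  # -22.78
print(f"Milky Way value = {math.log10(MW_RATIO):.2f}")       # -21.76
```

A source with these hypothetical values would sit roughly an order of magnitude below the Milky Way line in Figure 4, comparable to the sample means quoted in this section.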
In Figure 4 we plot the dust-to-gas ratio for the F2M red quasars as a
function of their 2-10 keV X-ray luminosity with red symbols, where filled
circles are the four previously-studied quasars from L16 and G17 as well as
F2M J0915 from Urrutia et al. (2005). Filled stars represent the five quasars
that had $N_{H}$ determined via spectral fitting and open star symbols are the
three sources whose $N_{H}$ values were estimated from their HRs and are
therefore least precise.
It has already been demonstrated in Maiolino et al. (2001a) that low-
luminosity AGN (i.e., Seyfert galaxies) have dust-to-gas ratios that are
significantly lower than the interstellar medium value determined for the
Milky Way ($1.7\times 10^{-22}$ mag cm$^2$, shown with a horizontal black dashed
line; Bohlin et al., 1978). These Seyfert AGN were selected to have simple
X-ray absorption spectra, avoiding sources with warm absorbers or cold
absorbers with partial-covering (Eqn. 6). The dust-to-gas ratios found for
these AGN, plotted with gray circles in Figure 4, are therefore descriptive of
circumnuclear material. The mean value for this sample is
$\log{E(B-V)/N_{H}}=-22.8$, shown with a horizontal dotted gray line.
Given that most of the F2M red quasars are found in mergers, are fit by a
variety of absorption models including partial covering, and are more luminous
by $\sim 1-2$ orders of magnitude, they may not have the same source of
reddening as the Seyferts in Maiolino et al. (2001a). Yet, the dust-to-gas
ratio distribution of F2M red quasars overlaps the Maiolino et al. (2001a)
sample. Since the lower dust-to-gas ratio is suggestive of larger dust grains
(Maiolino et al., 2001b), but is also consistent with dust grains being
sublimated close to the central engine, it is possible that different
mechanisms produce similar results. The two sources whose dust-to-gas ratios
are consistent with the Milky Way value are F2M J1715, which had the lowest
measured $N_{H}$ value such that its best-fit spectrum was a single power law
with no absorption (Eqn. 4; to compute a dust-to-gas ratio for this source, we
use the $N_{H}$ from an absorbed power-law fit, Eqn. 5, with $N_{H}=6\times
10^{21}$ cm$^{-2}$, which was a poorer fit than a simple unabsorbed power law
according to an F-test but allows an estimate of the dust-to-gas ratio), and
F2M J1227 from L16, which lacks imaging to determine whether it is hosted by a
merger and has minimal line-of-sight absorption ($N_{H}=0.34\times 10^{22}$
cm$^{-2}$).
The rest of the sources have lower dust-to-gas ratios than the Milky Way value
by factors similar to the Maiolino et al. (2001a) sample. In addition, the
mean dust-to-gas ratio is $\log{E(B-V)/N_{H}}=-22.9$ (dotted red line), which
is consistent with the previous studies in L16 and G17.
For comparison, we plot the average dust-to-gas ratio for six HRQs studied in
Lansbury et al. (2020) with purple triangles. Their mean value,
$\log{E(B-V)/N_{H}}=-22.3$ shown with a dotted purple line, is higher than the
F2M red quasars and the Maiolino et al. (2001a) sample. This may be explained
by the more stringent colour selection of the HRQ sample which finds
preferentially more reddened sources with higher $E(B-V)$ values. The
meta-analysis of Jun et al. (2020), which includes the previously published F2M red
quasars, Type 2 AGN, ERQs, and Hot DOGs, finds a value consistent with the
Maiolino et al. (2001a) average ($\log{E(B-V)/N_{H}}=-22.77\pm 0.41$; dashed
orange line).
Figure 4: Dust-to-gas ratios versus 2-10 keV X-ray luminosity ($L_{X}$). The
Milky Way dust-to-gas value is depicted with a black dashed line. The five
previously studied quasars from L16 and G17 and F2M J0915 are shown with
filled red circles. Filled red stars are the five sources in this work that
had sufficient counts for spectral modeling. Open stars are the three low-
count sources whose absorption was estimated from HRs. The horizontal red
dotted line marks the mean value. For comparison, we plot the sample studied
by Maiolino et al. (2001a) of low-luminosity AGN with gray circles and the
mean value is depicted by a dotted gray line. We also show the HRQs from
Lansbury et al. (2020) with purple triangles and their mean value with a
dotted purple line. The orange dashed line shows the mean dust-to-gas ratio of
a compilation of red and obscured quasars in the literature (Jun et al.,
2020).
### 4.2 X-ray versus infrared luminosity
There exist various tracers of intrinsic AGN luminosity and, by extension,
SMBH accretion that must be reconciled in order to adopt a consistent physical
model for AGN. X-rays trace the innermost emission from the accretion disk,
while mid-infrared (e.g., $6~{}\mu$m) arises from reprocessed UV photons from
the accretion disk that are absorbed by nuclear dust (i.e., the ‘torus’) and
thermally re-emitted. At low luminosities, X-ray and mid-IR emission follow a
linear relation (in log-log space; Lutz et al., 2004). However, observations
of higher-luminosity quasars show a departure from this relation around
$L_{\rm bol}\sim 10^{44}$ erg s$^{-1}$ (Stern, 2015; Chen et al., 2017) and are
interpreted as being under-luminous in X-rays (Ricci et al., 2017b). The
decreasing $L_{X}/L_{\rm IR}$ ratio with increasing luminosities could also be
interpreted as due to the increasing bolometric correction in X-rays at high
$L_{\rm bol}$ (Martocchia et al., 2017).
In Figure 5 we plot the eight F2M red quasars with X-ray luminosities (Table
6) along with other samples from the literature. The black stars and green
asterisks are luminous, unobscured, Type 1 quasars from Stern (2015) and
Martocchia et al. (2017). The low luminosity relation from Lutz et al. (2004),
which is calibrated at luminosities too low to appear on this plot, is shown
by the shaded region. The relations from Stern (2015) and Chen et al. (2017),
defined based on unobscured Type 1 samples, are shown with dotted and dashed
lines, respectively, and depart from the shaded region. ERQs are shown with
orange diamonds (Goulding et al., 2018) and exist in the same part of the
$L_{X}-L_{\rm IR}$ relation as the unobscured objects. Hot DOGs, shown with
blue pentagons, fall systematically below the extended relations (Ricci et
al., 2017b).
Models of radiation-driven feedback postulate that X-rays need to be
suppressed in order to enable line-driven winds on small scales (Proga et al.,
2000). Indeed, quasars with strong outflows have been shown to be X-ray weak
(Luo et al., 2013; Zappacosta et al., 2020).
While the F2M red quasars (red symbols, both from this and previous works) are
not as luminous in the infrared as the Hot DOGs or ERQs, they similarly lie
below the $L_{X}-L_{\rm IR}$ relation established at low luminosities, even
when corrected for absorption. F2M red quasars have an anomalously high
fraction of BAL systems indicative of line-driven outflows. In addition, F2M
J1507 shows evidence for ultra-fast out-flowing material in its X-ray spectrum
(§A.1). In this space, Hot DOGs appear to be an extension of F2M red quasars
toward more luminous IR sources whose X-rays are suppressed compared to their
IR luminosity (for a discussion on the X-ray weakness of Hot DOGs, see Ricci et
al., 2017b).
We note that the open F2M symbols were in the low-count regime and had their
column density estimated from their HRs, which was then used to correct the X-ray
luminosity. These luminosities are therefore highly uncertain and may be
underestimated. However, given that their exposure times are similar to the
rest of the sample, while their net counts are lower by more than an order of
magnitude, they are likely intrinsically less luminous.
Figure 5: Rest frame 6 $\mu$m luminosity vs. rest-frame absorption-corrected
2-10 keV X-ray luminosity for different quasar samples. Results from this work
are shown with red circles. The six high-count sources that had successful
spectral modeling are shown with filled circles, while the open circles are
the three low-count sources that were modeled by a fixed $\Gamma=1.8$ power
law. Other red quasars from F2M are shown with red triangles. Red stars
are WISE-selected red quasars in Stripe 82 (Glikman et al., 2018), which have
not been corrected for absorption. Apart from the two lowest flux sources
whose luminosities may have been significantly underestimated due to
insufficient absorption correction, the newly added F2M red quasars populate a
similar part of this space as the previously studied red quasars. More
luminous quasar samples are shown for comparison. Black stars are unobscured,
Type 1 quasars from Stern (2015). The relation that was derived from those
data is shown with a dotted line. Hyperluminous Type 1 quasars from the WISSH
sample (Martocchia et al., 2017) are shown with green diamonds. Asterisks show
Hot DOGs (Ricci et al., 2017b), which are infrared-hyperluminous, heavily
obscured quasars. ERQs are shown with orange diamonds (Goulding et al., 2018).
The shaded region shows the Lutz et al. (2004) relation derived from local
Seyfert galaxies which breaks down at high luminosities. The dashed line
represents the relation from Chen et al. (2017) derived from luminous AGN in
deep fields.
### 4.3 Radiative feedback
The key to blowing out gas from the vicinity of an AGN – and possibly out of
the host galaxy entirely – may be radiation pressure from a high-enough
luminosity pushing against infalling material whose composition is a mixture
of partially ionized dust and gas. Such a medium has an ‘effective cross
section’, $\sigma_{i}$, that is larger than the Thomson cross section,
$\sigma_{T}$, reducing the Eddington luminosity for the system. The
resultant ‘effective Eddington limit’ is thus lower than that for pure ionized
hydrogen, enabling AGN with a sufficiently high accretion rate and a
not-too-high column density to blow out the dust and gas (Fabian et al., 2006, 2008;
Ishibashi et al., 2018). According to this theory, the interplay between an
AGN’s accretion, obscuration, and radiative feedback can be understood through
two parameters: $\lambda_{\rm Edd}$ and $N_{H}$, shown in Figure 6. Sources
with the right combination of $\lambda_{\rm Edd}$ and $N_{H}$ are found in the
white triangular region, referred to as the ‘forbidden’ or ‘blow-out’ region,
where the luminosity is high enough to produce outflows and gas density is low
enough to avoid stalling them. Since the blow-out of gas as a result of
radiation pressure can involve multiple scatterings, as the opacity of the
material increases the blowout region can be expanded out to the dashed line,
which takes radiation trapping into account (Ishibashi et al., 2018).
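The scaling at the heart of this argument can be sketched numerically; the $500\sigma_{T}$ effective cross section used below is purely illustrative (the true boost depends on the dust content and ionization state of the infalling material):

```python
import math

G = 6.674e-8         # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33     # solar mass, g
M_P = 1.673e-24      # proton mass, g
C = 2.998e10         # speed of light, cm s^-1
SIGMA_T = 6.652e-25  # Thomson cross section, cm^2

def eddington_luminosity(m_bh_msun, sigma=SIGMA_T):
    """Limiting luminosity (erg/s) at which radiation pressure on material
    with per-proton cross section `sigma` balances gravity."""
    return 4.0 * math.pi * G * (m_bh_msun * M_SUN) * M_P * C / sigma

l_edd = eddington_luminosity(1e9)                       # pure ionized hydrogen
l_eff = eddington_luminosity(1e9, sigma=500 * SIGMA_T)  # illustrative dusty gas
print(f"L_Edd ~ {l_edd:.2e} erg/s")          # ~1.3e47 erg/s for a 1e9 Msun SMBH
print(f"L_eff / L_Edd = {l_eff / l_edd:.3f}")  # 0.002: limit drops by sigma_T/sigma_i
```

This is why a dusty AGN can be in the blow-out regime while formally accreting well below its classical Eddington rate.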
Ricci et al. (2017a) explored the nature of obscuration for Swift/BAT AGN,
which are hard X-ray-selected AGN with $z<0.05$, in the $N_{H}$ vs.
$\lambda_{\rm Edd}$ plane and find that they lie in the region suggestive of
long-lasting obscuration whether by dust lanes in the host galaxy for sources
with $\log{N_{H}}<22$ or by a high covering fraction of nuclear obscuration
for sources with $\log{N_{H}}>22$. These sources are plotted with black
crosses in Figure 6 confirming that the vast majority of low-luminosity, local
AGN are not engaged in radiative feedback.
The F2M red quasars from L16 and G17 as well as F2M J0915 are shown with
filled red circles. The red stars are sources from this work where filled
stars are the five high-count objects whose $N_{H}$ values were determined
from spectral fitting and the open stars are the three low-count sources whose
$N_{H}$ values were estimated from HRs. The two sources with upper limits on
their X-ray counts are not shown. One of these low-count sources, F2M J1106,
is on the edge of the region modified by radiation trapping. This source shows
bi-conal outflowing superbubbles in [O iii] consistent with trapped photons
pushing against an entrained shell of gas expanding out (Shen et al., 2023).
The time-scale for these bubbles is estimated to be $\sim 10$ Myrs, which is
roughly consistent with the timeline for the most heavily absorbed simulations
in Ishibashi et al. (2018). However, because its $N_{H}$ value was determined
from its HR, this estimate is sufficiently uncertain that the source may
actually lie in the unmodified blowout region. We note that F2M
J0830, whose X-ray analysis was performed in L16, also shows bi-conal
outflowing superbubbles (Shen et al., 2023).
All but one (F2M J1151) of the F2M red quasars in the blowout region show
evidence for merging morphologies in their hosts. The two sources below the
$\log(N_{H})=22$ line, where the relatively low obscuration is due to dust
lanes in the host galaxy, either have undisturbed morphologies (F2M J1715) or
lack imaging (F2M J1227).
In an independent investigation of quasar outflow properties at sub-mm
wavelengths, Stacey et al. (2022) found similar differences between blue and
red quasars in archival ALMA observations of sixteen Type 1 quasars with
$J\geq 7$ CO lines. Four of these sources have $E(B-V)>0.5$ determined from
SED fitting, while the remaining sources have $E(B-V)<0.1$. We plot the red
and blue quasars from Stacey et al. (2022) in the left panel of Figure 6 with
orange and blue squares, respectively. Here, too, red quasars with molecular
outflows detected by ALMA are in the blowout region, while blue quasars without
outflows are not. In addition, the analysis of the CO lines by Stacey et al.
(2022) reveals molecular outflows with velocities of $500-1000$ km s$^{-1}$ in
the red quasars, while the blue quasars have lower velocities of $\lesssim 300$
km s$^{-1}$.
Figure 6: Column density, $N_{H}$, vs. Eddington ratio, $\lambda_{\rm Edd}$, for
F2M red quasars (red points). The curved solid line (labelled $\lambda^{\rm
eff}_{\rm Edd}$) represents the region where radiation pressure is
insufficient to expel the obscuring gas, under the assumption of single
scattering, resulting in a high covering fraction (Fabian et al., 2008). In
the white triangular region, the radiation pressure is sufficiently high to
blow out the gas and produce outflows. The dashed line amends this region by
including radiation trapping (Ishibashi et al., 2018). AGNs below
$\log(N_{H})$ = 22 may be obscured by dust lanes in their hosts. Swift/BAT AGN
in the local universe are shown for comparison (black cross symbols; Ricci et
al., 2017a) demonstrating the paucity of sources in the blowout region among
normal AGNs. Left – Filled circles are the four previously analyzed sources
from L16 and G17 and F2M J0915. Filled stars are the five sources in this work
whose $N_{H}$ was determined by spectral modeling. Open stars are the three
sources whose $N_{H}$ was estimated from their HRs. We also plot, for
comparison, a sample of 14 quasars studied in CO with ALMA (Stacey et al.,
2022) who found that only the reddened quasars ($E(B-V)>0.5$; orange squares)
and none of the unreddened quasars ($E(B-V)<0.1$; blue squares) met blowout
conditions. Right – This panel focuses on the F2M red quasars that have host
morphologies from high resolution imaging where filled sources are mergers
while open sources are undisturbed. We also plot, for comparison, three ERQs
(purple triangles) that also possess HST imaging and morphological information.
The green circle is the Hot DOG from Ricci et al. (2017b). While all the
objects with merging hosts reside in the blowout region, the region is not
exclusively populated by mergers.
Intriguingly, all of the other dust-obscured quasar samples discussed above
(HRQs, Hot DOGs, ERQs, red quasars from Stripe 82, as well as the WISSH
quasars) that have $N_{H}$ and $\lambda_{\rm Edd}$ in the literature reside in
the blowout region. Figure 7 of Lansbury et al. (2020) shows the
aforementioned sources, including the F2M red quasars from L16 and G17, on the
$N_{H}$ vs. $\lambda_{\rm Edd}$ diagram. However, high resolution imaging is
limited to only a handful of these sources. In the right hand panel of Figure
6 we plot $N_{H}$ vs. $\lambda_{\rm Edd}$ again, focusing only on sources with
known host morphologies and we distinguish between merging and undisturbed
systems with closed and open symbols, respectively. Twelve F2M red quasars are
shown with stars; the two that lack HST imaging, F2M J1227 and F2M J1106, are
omitted. (F2M J1106 does possess GMOS IFU imaging with
0$\aas@@fstack{\prime\prime}$4 spatial resolution, which is sufficient to
reveal galaxy-scale ($>10$ kpc) bubbles with a bi-conal morphology; however,
it is unknown whether the quasar is hosted by a merger.) While F2M J1715,
which appears to be undisturbed, lies outside the blowout region, where dust
lanes can explain its properties, F2M J1151, which is also undisturbed, lies
in the blowout region. We note that among the 11 F2M quasars with sufficient
X-ray counts and imaging to be plotted in the right panel of Figure 6, nine
come from the HST observations in Urrutia et al. (2008) that were designed to
reach similar depths; the exposure times ranged from $\sim 1600$ s to $\sim
2300$ s in the F814W filter, depending on the redshift. However, F2M J1715 was
observed in an 800 s snapshot observation, $\sim 2\times$ shorter than the
exposure time for, e.g., F2M J1532, which is at a similar redshift. The
surface brightness limit for the image of F2M J1715 is therefore $\sim 0.8$
mag shallower, and some merger features might be missed.
Three ERQs have both HR-based $N_{H}$ estimates (Goulding et al., 2018) and
$\lambda_{\rm Edd}$ (Perrotta et al., 2019) as well as HST imaging
(J0832+1615, J0834+0159, J1652+1728; Zakamska et al., 2019). All three reside
in the blowout region, shown with purple triangles (two points overlap), but
only J1652+1728 is a major merger. Though only a few Hot DOGs have X-ray data
that enable a measurement of $N_{H}$ and most are Type 2, precluding a
measurement of a black hole mass and thus $\lambda_{\rm Edd}$, we are able to
plot one source, WISE J1036+0449 from Ricci et al. (2017b), in the blowout
region with a green circle. Although this object does not have morphological
information, as noted above, the overall population of Hot DOGs has a high
merger fraction.
While all sources with merging hosts reside in the blowout region, a merging
morphology is not a necessary condition to meet the requirements needed to
blow out large amounts of dust and gas. The presence of winds and outflows may
be a more predictive indicator. The entire ERQ sample is shown to have strong
outflows in [O iii], with the highest velocities ($\sim 2000-7000$ km s$^{-1}$)
in the reddest sources (Perrotta et al., 2019), and a fourth ERQ source that
lacks imaging (J0006+1215) also meets blowout conditions in $N_{H}$ vs.
$\lambda_{\rm Edd}$. Likewise, the HRQs lack host morphology information but
reside in the blowout region (see Figure 7 of Lansbury et al., 2020) and
exhibit strong outflows with velocities up to 2500 km s$^{-1}$ in [O iii]
(Temple et al., 2019). In a reverse approach, Kakkad et al. (2016) selected
AGN in the COSMOS survey that were located in the blowout region and conducted
follow-up IFU observations in [O iii], finding outflow velocities of $\sim
600-1900$ km s$^{-1}$.
None the less, a systematic imaging campaign on the various samples of
obscured AGN in the blowout region is needed to more thoroughly investigate
any connection between mergers and outflows and better understand the
conditions under which radiative feedback dominates.
## 5 Conclusions
In this paper, we investigate the accretion and obscuration properties of a
sample of merger-dominated luminous red quasars via X-ray observations. This
sample consists of ten newly analysed X-ray observations as well as five
previously published sources. All but two have high resolution imaging with
HST and one of those two has high resolution, high quality IFU imaging in [O
iii]. Although the sources were not chosen to reside in mergers, ten sources
have clear evidence of morphologically disturbed hosts (as previously
determined by Urrutia et al. 2008 and Glikman et al. 2015). The sample
consists of eight new observations, two sources with archival data sets, and
five previously published sources. When sufficient counts enabled it, we
performed spectral modeling to extract parameters such as $N_{H}$ that enabled
a calculation of the absorption-corrected luminosity. Lower count objects were
analyzed via their HRs to estimate $N_{H}$; in cases of non-detection, we
determine upper limits. We combine these X-ray-derived properties with host
galaxy morphological information from high resolution imaging, dust reddening,
infrared luminosity, and accretion rate ($\lambda_{\rm Edd}$). These data
allow us to investigate the connection among these properties and we find:
1.
F2M red quasars have dust-to-gas ratios that are in general lower than the
interstellar medium of the Milky Way. Their dust-to-gas ratios are consistent
with low-luminosity AGN in the local universe, though the ratio likely arises
from very different physics. The dust-to-gas ratios of F2M red quasars are
somewhat lower than, but roughly consistent with, those of comparison samples
of luminous dusty quasars.
2.
F2M red quasars are under-luminous in X-rays at a given infrared luminosity
when compared with local, low luminosity relations as well as luminous,
unreddened sources that straddle the high luminosity relation. However, their
X-ray deficit is consistent with other, more luminous, dust-obscured quasars
such as the Hot DOGs.
3.
With the exception of two sources, F2M red quasars reside in the “forbidden”
region of $N_{H}$ vs. $\lambda_{\rm Edd}$ indicative of them being in a
blowout phase due to radiation pressure on dust. Furthermore, all F2M red
quasars with merging hosts are in the blowout region as are other luminous
dusty quasars from comparison samples. A broader investigation of the host
morphologies of blue quasars outside the blowout region is needed to better
understand any connection among reddening, feedback, and mergers.
These findings lend further support to F2M red quasars, along with other
luminous dust-reddened quasars, being in a brief transitional phase in a
merger-driven co-evolution of galaxies and their supermassive black holes.
## Acknowledgements
E.G. acknowledges the generous support of the Cottrell Scholar Award through
the Research Corporation for Science Advancement. E.G. is grateful to the
Mittelman Family Foundation for their generous support. E.P. and L.Z.
acknowledge financial support from the Bando Ricerca Fondamentale INAF 2022
Large Grant “Toward an holistic view of the Titans: multi-band observations of
z>6 QSOs powered by greedy supermassive black-holes.” We thank Laura Blecha
for useful discussions on the nature of F2M J1507. We thank Hannah Stacey for
providing the data for the ALMA-detected quasars plotted in Figure 6. We
acknowledge the efforts of Charlotte Moore in the initial phase of this
project.
Support for this work was provided by the National Aeronautics and Space
Administration through Chandra Award Number 21700216 issued by the Chandra
X-ray Center, which is operated by the Smithsonian Astrophysical Observatory
for and on behalf of the National Aeronautics Space Administration under
contract NAS8-03060. The scientific results reported in this article are based
on observations made by the Chandra X-ray Observatory, data obtained from the
Chandra Data Archive, and observations made by the Chandra X-ray Observatory
and published previously in cited articles. This research has made use of
software provided by the Chandra X-ray Center (CXC) in the application
packages CIAO. We gratefully acknowledge the National Science Foundation’s
support of the Keck Northeast Astronomy Consortium’s REU program through grant
AST-1950797.
## Data Availability
The X-ray data underlying this article are publicly available through the
Chandra archives.
## References
* Alexandroff et al. (2018) Alexandroff R. M., et al., 2018, MNRAS, 479, 4936
* Arnaud (1996) Arnaud K. A., 1996, in Jacoby G. H., Barnes J., eds, Astronomical Society of the Pacific Conference Series Vol. 101, Astronomical Data Analysis Software and Systems V. p. 17
* Assef et al. (2011) Assef R. J., et al., 2011, ApJ, 742, 93
* Banerji et al. (2012) Banerji M., McMahon R. G., Hewett P. C., Alaghband-Zadeh S., Gonzalez-Solares E., Venemans B. P., Hawthorn M. J., 2012, MNRAS, 427, 2275
* Banerji et al. (2021) Banerji M., Jones G. C., Carniani S., DeGraf C., Wagg J., 2021, MNRAS, 503, 5583
* Becker et al. (1995) Becker R. H., White R. L., Helfand D. J., 1995, ApJ, 450, 559
* Bohlin et al. (1978) Bohlin R. C., Savage B. D., Drake J. F., 1978, ApJ, 224, 132
* Cash (1979) Cash W., 1979, ApJ, 228, 939
* Chen et al. (2017) Chen C.-T. J., et al., 2017, ApJ, 837, 145
* Faber et al. (1997) Faber S. M., et al., 1997, AJ, 114, 1771
* Fabian (2012) Fabian A. C., 2012, ARA&A, 50, 455
* Fabian et al. (2006) Fabian A. C., Celotti A., Erlund M. C., 2006, MNRAS, 373, L16
* Fabian et al. (2008) Fabian A. C., Vasudevan R. V., Gandhi P., 2008, MNRAS, 385, L43
* Fan et al. (2016) Fan L., et al., 2016, ApJ, 822, L32
* Farrah et al. (2017) Farrah D., et al., 2017, ApJ, 844, 106
* Ferrarese & Merritt (2000) Ferrarese L., Merritt D., 2000, ApJ, 539, L9
* Frieman et al. (2008) Frieman J. A., et al., 2008, AJ, 135, 338
* Fruscione et al. (2006) Fruscione A., et al., 2006, in Silva D. R., Doxsey R. E., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 6270, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. p. 62701V, doi:10.1117/12.671760
* Gebhardt et al. (2000) Gebhardt K., et al., 2000, ApJ, 539, L13
* Glikman (2017) Glikman E., 2017, Research Notes of the American Astronomical Society, 1, 48
* Glikman et al. (2004) Glikman E., Gregg M. D., Lacy M., Helfand D. J., Becker R. H., White R. L., 2004, ApJ, 607, 60
* Glikman et al. (2006) Glikman E., Helfand D. J., White R. L., 2006, ApJ, 640, 579
* Glikman et al. (2007) Glikman E., Helfand D. J., White R. L., Becker R. H., Gregg M. D., Lacy M., 2007, ApJ, 667, 673
* Glikman et al. (2012) Glikman E., et al., 2012, ApJ, 757, 51
* Glikman et al. (2013) Glikman E., et al., 2013, ApJ, 778, 127
* Glikman et al. (2015) Glikman E., Simmons B., Mailly M., Schawinski K., Urry C. M., Lacy M., 2015, ApJ, 806, 218
* Glikman et al. (2017) Glikman E., LaMassa S., Piconcelli E., Urry M., Lacy M., 2017, ApJ, 847, 116
* Glikman et al. (2018) Glikman E., et al., 2018, ApJ, 861, 37
* Gordon & Clayton (1998) Gordon K. D., Clayton G. C., 1998, ApJ, 500, 816
* Goulding et al. (2018) Goulding A. D., et al., 2018, ApJ, 856, 4
* Hamann et al. (2017) Hamann F., et al., 2017, MNRAS, 464, 3431
* Hopkins & Beacom (2006) Hopkins A. M., Beacom J. F., 2006, ApJ, 651, 142
* Hopkins et al. (2006) Hopkins P. F., Hernquist L., Cox T. J., Di Matteo T., Robertson B., Springel V., 2006, ApJS, 163, 1
* Ishibashi et al. (2018) Ishibashi W., Fabian A. C., Ricci C., Celotti A., 2018, MNRAS, 479, 3335
* Jun et al. (2020) Jun H. D., Assef R. J., Carroll C. M., Hickox R. C., Kim Y., Lee J., Ricci C., Stern D., 2020, The Astrophysical Journal, 906, 21
* Kakkad et al. (2016) Kakkad D., et al., 2016, A&A, 592, A148
* Kim et al. (2010) Kim D., Im M., Kim M., 2010, ApJ, 724, 386
* Kim et al. (2015) Kim D., Im M., Glikman E., Woo J.-H., Urrutia T., 2015, ApJ, 812, 66
* LaMassa et al. (2016a) LaMassa S. M., et al., 2016a, ApJ, 818, 88
* LaMassa et al. (2016b) LaMassa S. M., et al., 2016b, ApJ, 820, 70
* LaMassa et al. (2017) LaMassa S. M., et al., 2017, ApJ, 847, 100
* LaMassa et al. (2023) LaMassa S. M., Yaqoob T., Tzanavaris P., Gandhi P., Heckman T., Lansbury G., Siemiginowska A., 2023, ApJ, 944, 152
* Lansbury et al. (2020) Lansbury G. B., Banerji M., Fabian A. C., Temple M. J., 2020, MNRAS, 495, 2652
* Lau et al. (2022) Lau M. W., Hamann F., Gillette J., Perrotta S., Rupke D. S. N., Wylezalek D., Zakamska N. L., 2022, MNRAS, 515, 1624
* Luo et al. (2013) Luo B., et al., 2013, ApJ, 772, 153
* Lutz et al. (2004) Lutz D., Maiolino R., Spoon H. W. W., Moorwood A. F. M., 2004, A&A, 418, 465
* Mainzer et al. (2011) Mainzer A., et al., 2011, ApJ, 731, 53
* Maiolino et al. (2001a) Maiolino R., Marconi A., Salvati M., Risaliti G., Severgnini P., Oliva E., La Franca F., Vanzi L., 2001a, A&A, 365, 28
* Maiolino et al. (2001b) Maiolino R., Marconi A., Oliva E., 2001b, A&A, 365, 37
* Marble et al. (2003) Marble A. R., Hines D. C., Schmidt G. D., Smith P. S., Surace J. A., Armus L., Cutri R. M., Nelson B. O., 2003, ApJ, 590, 707
* Martocchia et al. (2017) Martocchia S., et al., 2017, A&A, 608, A51
* Murphy & Yaqoob (2009) Murphy K. D., Yaqoob T., 2009, MNRAS, 397, 1549
* Park et al. (2006) Park T., Kashyap V. L., Siemiginowska A., van Dyk D. A., Zezas A., Heinke C., Wargelin B. J., 2006, ApJ, 652, 610
* Perrotta et al. (2019) Perrotta S., Hamann F., Zakamska N. L., Alexandroff R. M., Rupke D., Wylezalek D., 2019, MNRAS, 488, 4126
* Piconcelli et al. (2010) Piconcelli E., Vignali C., Bianchi S., Nicastro F., Miniutti G., Fiore F., 2010, ApJ, 710, 992
* Piconcelli et al. (2015) Piconcelli E., et al., 2015, A&A, 574, L9
* Proga et al. (2000) Proga D., Stone J. M., Kallman T. R., 2000, ApJ, 543, 686
* Ricci et al. (2017a) Ricci C., et al., 2017a, Nature, 549, 488
* Ricci et al. (2017b) Ricci C., et al., 2017b, ApJ, 835, 105
* Richards et al. (2006) Richards G. T., et al., 2006, ApJS, 166, 470
* Ross et al. (2015) Ross N. P., et al., 2015, MNRAS, 453, 3932
* Sanders et al. (1988) Sanders D. B., Soifer B. T., Elias J. H., Madore B. F., Matthews K., Neugebauer G., Scoville N. Z., 1988, ApJ, 325, 74
* Shen & Liu (2012) Shen Y., Liu X., 2012, ApJ, 753, 125
* Shen et al. (2023) Shen L., et al., 2023, Science Advances, 9, eadg8287
* Skrutskie et al. (2006) Skrutskie M. F., et al., 2006, AJ, 131, 1163
* Stacey et al. (2022) Stacey H. R., Costa T., McKean J. P., Sharon C. E., Calistro Rivera G., Glikman E., van der Werf P. P., 2022, MNRAS, 517, 3377
* Stern (2015) Stern D., 2015, ApJ, 807, 129
* Temple et al. (2019) Temple M. J., Banerji M., Hewett P. C., Coatman L., Maddox N., Peroux C., 2019, MNRAS, 487, 2594
* Tsai et al. (2015) Tsai C.-W., et al., 2015, ApJ, 805, 90
* Urrutia et al. (2005) Urrutia T., Lacy M., Gregg M. D., Becker R. H., 2005, ApJ, 627, 75
* Urrutia et al. (2008) Urrutia T., Lacy M., Becker R. H., 2008, ApJ, 674, 80
* Urrutia et al. (2009) Urrutia T., Becker R. H., White R. L., Glikman E., Lacy M., Hodge J., Gregg M. D., 2009, ApJ, 698, 1095
* Urrutia et al. (2012) Urrutia T., Lacy M., Spoon H., Glikman E., Petric A., Schulz B., 2012, ApJ, 757, 125
* Vayner et al. (2021) Vayner A., et al., 2021, MNRAS, 504, 4445
* Villforth (2023) Villforth C., 2023, arXiv e-prints, p. arXiv:2309.03276
* Vito et al. (2018) Vito F., et al., 2018, MNRAS, 474, 4528
* Wachter et al. (1979) Wachter K., Leach R., Kellogg E., 1979, ApJ, 230, 274
* Warren et al. (2000) Warren S. J., Hewett P. C., Foltz C. B., 2000, MNRAS, 312, 827
* Wright et al. (2010) Wright E. L., et al., 2010, AJ, 140, 1868
* Wu et al. (2012) Wu J., et al., 2012, ApJ, 756, 96
* Zakamska & Alexandroff (2023) Zakamska N. L., Alexandroff R. M., 2023, MNRAS,
* Zakamska et al. (2019) Zakamska N. L., et al., 2019, MNRAS, 489, 497
* Zappacosta et al. (2020) Zappacosta L., et al., 2020, A&A, 635, L5
* Zubovas & King (2012) Zubovas K., King A., 2012, ApJ, 745, L34
## Appendix A Notes on the Spectral fitting for individual objects
### A.1 F2M 1507+3129
The Balmer lines of this object have a blue-shifted broad emission component
in addition to broad lines at the systemic velocity, which is determined by
the [O iii] lines in its optical spectrum. We note that the blue-shifted
broad emission component of H$\beta$ is similar in structure to that of H$\alpha$,
which is shown in Figure 2. We attribute this component to an out-flowing
wind, rather than accretion disk geometry, due to the lack of a red-shifted
component that is typically seen in double-peaked-emitting AGN. The H$\alpha$
component is blue-shifted by 91 Å, corresponding to a velocity of $0.014c$,
which is slow compared with typical outflow velocities seen in BALs ($\sim
10,000-60,000$ km s$^{-1}$). We use the line width from the broad component at the
systemic velocity, which we assume to represent virialized motion, to compute
the black hole mass.
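The quoted outflow velocity follows from the non-relativistic Doppler relation $v/c=\Delta\lambda/\lambda_{\rm rest}$. A minimal sketch, assuming the standard H$\alpha$ rest wavelength (not restated in the text):

```python
# Outflow velocity from the 91 Angstrom blueshift of the broad H-alpha component.
# Non-relativistic Doppler; the H-alpha rest wavelength is the standard value,
# assumed here rather than taken from the text.

HALPHA_REST = 6562.8  # Angstrom

def doppler_beta(delta_lambda, lam_rest=HALPHA_REST):
    """Return v/c for a line shifted by delta_lambda from lam_rest."""
    return delta_lambda / lam_rest

beta = doppler_beta(91.0)
print(f"v = {beta:.3f} c")  # ~0.014c, matching the value quoted above
```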
As is seen in Figure 3, the X-ray spectrum exhibits some unusual features
including what appears to be a deficit of flux around $4-5$ keV. There is also
an apparent soft excess at energies below 1.5 keV that required a model with a
leakage/scattering component (Eqn. 6). However, given the complexity of this
model and the small number of spectral bins, we chose to freeze the photon
index to $\Gamma=1.8$. We initially ignored the $4-5$ keV region and found an
acceptable fit with $N_{H}=3\times 10^{23}$ cm$^{-2}$ and a scattering fraction of
16%. We further interpret the flux deficit at $4-5$ keV as absorption of Fe
xxvi Ly$\alpha$ ($E_{\rm rest}=6.966$ keV) due to an ultra-fast outflow (UFO)
and amend the model in Eqn. 6 by adding a Gaussian absorption component.
${\tt phabs*(zpowerlw+zphabs*zpowerlw+zgauss)}.$ (8)
A fit to this model yielded the same $N_{H}$ and scattering fraction while
also accounting for the absorption feature at the blue-shifted energy of 4.59 keV,
which corresponds to a UFO velocity of $0.26c$. The continuum model components are shown
as dotted lines in Figure 3. Table 7 lists the best-fit parameters for this
source.
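For the UFO, the shift is large enough that the relativistic Doppler formula is needed: $E_{\rm shift}/E_{\rm rest}=\sqrt{(1+\beta)/(1-\beta)}$ for an approaching absorber, which can be inverted for $\beta$. A sketch of the round trip (note that mapping the observed 4.59 keV feature to the source-frame energy additionally involves the quasar redshift, which is not restated here):

```python
import math

def blueshifted_energy(E_rest, beta):
    """Line energy in the source frame for an absorber approaching at v = beta*c."""
    return E_rest * math.sqrt((1 + beta) / (1 - beta))

def beta_from_shift(E_shift, E_rest):
    """Invert the relativistic Doppler relation for beta."""
    r2 = (E_shift / E_rest) ** 2
    return (r2 - 1) / (r2 + 1)

E_rest = 6.966                               # Fe XXVI Ly-alpha rest energy, keV
E_shift = blueshifted_energy(E_rest, 0.26)   # ~9.1 keV in the outflow-blueshifted frame
print(round(beta_from_shift(E_shift, E_rest), 2))  # 0.26
```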
This UFO velocity is significantly higher than that seen in the blue-shifted
Balmer emission and arises from radii closest to the central engine. While
these features are therefore not associated with the same outflowing system,
their presence may be indicative of feedback on many scales due to sustained
outflowing winds and shocks. Theoretical models predict that radiation driven
relativistic winds interact with shocks against the ISM, triggering galaxy-
wide ionized and neutral outflows (e.g., Zubovas & King, 2012). A higher-count
X-ray spectrum would allow a more thorough exploitation of the energy
resolution of, e.g., XMM-Newton to better constrain the outflow properties
closest to the SMBH. IFU spectroscopy of the Balmer lines would similarly
trace the kinematics of the large-scale outflows, to fully trace the feedback
energy being injected into the host galaxy by this quasar.
Table 7: UFO Model Fit Parameters Parameter | Value
---|---
$\Gamma$ | 1.8†
Power-law normalization | $1.1_{-0.2}^{+1.2}\times 10^{-4}$
$N_{H}$ (cm$^{-2}$) | $3.0_{-1.7}^{+3.7}\times 10^{23}$
$E_{\rm FeXXVI}$ Ly$\alpha$ (keV) | 6.966†
$\sigma_{E}$ (keV) | $2.4\times 10^{-4}$‡
EW (keV) | $2.3$
$f_{\rm scatt}$ (%) | $16\pm 13$
$v_{\rm UFO}$ | $0.26c$
† This parameter was frozen.
‡ Given that this feature was fit to a region represented by only two
spectral bins, this value is highly uncertain, with an unconstrained lower
bound and an upper bound of $\sigma=1.45$ keV.
### A.2 F2M 1532+2415
This source shows emission at 4.1 keV, which corresponds to the redshifted fluorescent
iron K$\alpha$ line ($E_{\rm rest}=6.4$ keV). This is suggestive of significant reflected
emission, often seen atop a strongly suppressed continuum, which is typical in
Type 2 quasars. F2M J1532, however, is a Type 1 source, with broad emission
lines seen in its spectrum. We performed a self-consistent physically
motivated joint fit, with both Chandra observations, to the X-ray spectrum to
properly account for line-of-sight attenuation, scattering of photons into the
line of sight, and the fluorescent line emission responsible for the Fe
K$\alpha$ feature. We employed Equation LABEL:eqn:mytorus in XSpec in so-
called ‘coupled’ mode, where the column densities ($N_{H,Z},N_{H,S}$), torus
inclination angle ($\theta_{\rm obs}$), normalizations of the scattering and
fluorescent line coefficient components ($A_{S}$ and $A_{L}$, respectively)
are tied together to preserve the model’s self-consistency. This fit, however,
yielded a poor result: a best-fit inclination angle of $60^{\circ}$, which is
the default, fixed opening angle of the torus-shaped absorber in the MYTorus
model. This implies a grazing incidence angle, which is highly unlikely.
Therefore, while this is a statistically acceptable fit, it is not a
physically meaningful result (see discussion in LaMassa et al., 2016a, 2023,
for more details on this phenomenon).
Under such circumstances, it is advisable to fit the spectrum with the same
MYTorus model (Eqn. LABEL:eqn:mytorus) but in ‘decoupled’ mode. This approach
assumes that the absorbing and X-ray reprocessing media are not necessarily
the same, nor are they smooth and homogeneous, as is assumed with a simple
torus model. In this approach the line-of-sight column density ($N_{\rm
H,los}$) is provided by the $N_{H,Z}$ parameter which is decoupled from the
global column density ($N_{\rm H,global}$) from surrounding clouds, provided
by the $N_{H,S}$ terms in Equation LABEL:eqn:mytorus, which are still tied to
each other. The inclination angles are also decoupled, such that the
transmitted component is frozen at $90^{\circ}$, while the scattered and
fluorescent line components are frozen to $0^{\circ}$. In this model, we are
not assuming a homogeneous donut-shaped absorber, but utilize the radiative
transfer determined by MYTorus to consider light passing through and
scattering off of a clumpy and patchy medium surrounding the AGN.
A fit to this model yields reassuring results. The power-law is best described
by $\Gamma=1.4$ which, while at the flat end of the range of indices for AGN,
is consistent with the value found from the phenomenological XSpec model (Eqn.
6). Additionally, the best-fit line-of-sight column density is $N_{\rm
H,Z}=7.4\times 10^{22}$ cm$^{-2}$, which is also consistent with the value found
from the phenomenological XSpec model (Eqn. 6; $N_{H}=7.9\times 10^{22}$ cm$^{-2}$),
which only considers line-of-sight absorption and does not account for
additional physics. The scattering fraction is also consistent, with the
MYTorus fit yielding $f_{scatt}=10\%$ compared with 11% in the
phenomenological model. Finally, the best-fit global column density is $N_{\rm
H,global}=10^{24}$ cm$^{-2}$, in the Compton-thick regime, which means that the
scattered component does not contribute significantly to the continuum,
allowing for the strong similarity with the phenomenological model while
also accounting for the Fe K$\alpha$ line. This suggests that F2M J1532 is
enshrouded by heavy, non-uniform absorption, and that our line of sight happens
to coincide with an opening that allows a direct view of the broad-line
emission from the quasar.
Table 8 lists the best-fit MYTorus parameters for this source. Figure 7 shows
the best-fit MYTorus model plotted atop the source spectrum, with the
individual model components shown separately on the left, and a contour plot of
the global ($N_{H,S}$) versus line-of-sight ($N_{H,Z}$) column densities on the
right, showing that while $N_{H,Z}$ is well constrained and consistent with the
phenomenological XSpec model, the global column density is poorly constrained
and highly uncertain.
Figure 7: Left – MYTorus plus scattered power-law (Eqn LABEL:eqn:mytorus) in a ‘decoupled’ joint fit to the observations of F2M J1532. Black points and lines represent ObsID 3138 and red points and lines represent ObsID 3338. The solid line shows the combined model, while the dotted lines are the individual model components. Right – $\chi^{2}$ contour plot of the global column density ($N_{H,S}$) vs. the line-of-sight column density ($N_{H,Z}$) from the ‘decoupled’ MYTorus fit. The cross represents the best-fit parameters at $N_{H,Z}=7.4\times 10^{22}$ cm$^{-2}$ and $N_{H,S}=1.0\times 10^{24}$ cm$^{-2}$. The black, red, and blue curves show the 68%, 90%, and 99% confidence levels. We see that while $N_{H,Z}$ is reasonably well-constrained and consistent with the value found for the phenomenological model (Eqn. 6) of $N_{H,Z}=7.9\times 10^{22}$ cm$^{-2}$, $N_{H,S}$ is poorly constrained. Table 8: Decoupled MYTorus Fit Parameters Parameter | Value
---|---
$\Gamma$† | $1.41_{-0.01}^{+0.28}$
Power-law normalization | $1.0_{-0.2}^{+1.3}\times 10^{-4}$
$N_{H,Z}$ (cm$^{-2}$) line-of-sight | $7.4_{-2.5}^{+3.7}\times 10^{22}$
$N_{H,S}$ (cm$^{-2}$) global‡ | $1.0_{-0.8}\times 10^{24}$
$f_{\rm scatt}$ (%) | $9.5_{-4.6}^{+3.4}$
C-stat (dof) | 106.99 (119)
†This parameter was constrained with a lower bound of $\Gamma\geq 1.4$.
‡ The error analysis of this parameter was found to have an unconstrained
upper bound, as illustrated by the contour diagram shown in Figure 7.
# Formation of probability density waves and probability current density waves
by excitation and decay of a doublet of quasistationary states of a three-
barrier heterostructure upon scattering of gaussian wave packets
Yu. G. Peisakhovich Novosibirsk State Technical University, Novosibirsk,
Russia A. A. Shtygashev<EMAIL_ADDRESS>Novosibirsk State Technical
University, Novosibirsk, Russia
###### Abstract
We carry out a numerical-analytical simulation of the scattering of an
electronic Gaussian wave packet, whose spectral width is of the order of the
spacing between the levels of a doublet of quasi-stationary states, by a
three-barrier heterostructure. We show that, as a result of scattering, damped
waves of electron charge and current density are formed outside the double
well; their characteristics are determined by the structure of the initial
wave packet and by the poles of the scattering amplitudes. The frequency of
these waves equals the difference frequency of the doublet, their wavenumber
is the difference between the wavenumbers of free electron motion at the
resonant energies, and their propagation speed is the ratio of these two
quantities. The system can switch to a regime of repeated or amplified
emission of electron waves if periodic resonant pumping of the doublet
population is provided by the scattering of a series of coherent wave packets.
quantum
###### pacs:
84.40.Az, 84.40.Dc, 85.25.Hv, 42.50.Dv, 42.50.Pq
## I Introduction
The ability of nanoheterostructures to selectively transmit and convert wave
signals of different physical nature makes it possible to create high-speed
and high-frequency devices for optoelectronics, acoustoelectronics,
information transmission systems, laser technology, etc. In recent decades,
laser light sources have been created capable of generating ultrashort pulses
of picosecond, femtosecond, and even attosecond duration Rost2011 -Chek2014 .
This stimulated the intensive development of spectroscopy and high
technologies in the corresponding frequency ranges. The impact of such short-
term signals on microscopic and macroscopic systems and the detection of
responses make it possible to study fast processes, the duration of which is
less than or on the order of the relaxation times in the systems Rost2011
-Ross2002 . In addition to spectroscopic sensing of matter, it is possible to
pose the problem of generating an alternating current in the terahertz range
by converting ultrashort excitation pulses into a system of oscillations and
waves of electron density of charge and current on scales smaller than the
length and time of quantum coherence of electrons. This problem can be solved
using nanoscale heterostructures. It is well known that in thin-film
nanostructures such as a double quantum well with tunnel-transparent walls for
electrons, the energy spectrum of the transverse motion of electrons contains
doublets of resonance levels that are relatively close to each other. In the
forbidden bands of film below the vacuum level, such a spectrum is discrete
and the wave functions of doublet states are localized in the well. In the
allowed bands below and above the vacuum level, the energy spectrum is
continuous and the wave functions of resonance doublets describe delocalized
quasi-stationary states of the transverse scattering problem. The energies of
the doublets and the lifetimes of quasi-stationary states are determined by
the poles of the amplitudes of stationary electron scattering by the
heterostructure, as well as by the shift and smearing of levels due to
inelastic electron scattering. Pulsed excitation and slow decay of a quasi-
resonant nonstationary state formed by the superposition and interference of
quantum states from a narrow band of the electronic spectrum that includes a
doublet can be accompanied by beats of the space-time distributions of the
probability densities and current of electrons whose energies belong to such a
narrow band. This kind of beating often accompanies a quantum transient Leo1991
-Cald2016 after a single-pulse excitation and lasts for the lifetime of the
quasi-stationary states, which can be much longer than the period of these
beats if the transparency of the barriers is sufficiently low and the
inelastic processes for electrons are weak. This effect was first observed
indirectly in experiments on differential transmission and four-wave mixing
for femtosecond light pulses in an asymmetric double quantum well Leo1991
-Rosk1992 . In such a well, the quantum beats of the superposition of the wave
functions of the doublet of stationary states of the discrete spectrum of
transverse motion cause the appearance of resonant damped oscillations of the
electron-hole dipole moment and a certain number of registered oscillations of
the dipole electromagnetic radiation at the terahertz difference frequency of
the doublet.
A similar effect should also exist in the case when the doublet of quasi-
stationary states of the transverse scattering problem is located in the
continuous spectrum of the conduction bands above or below the vacuum level
Romo2002 -Peis2008B . In this paper, it will be shown that if the transparency
of the potential barriers of the heterostructure is sufficiently low, then the
coupled oscillations of mixed doublet resonance states should manifest
themselves not only in the periodic flow of the electron density between the
wells through the middle barrier inside the double well Peis2008A -Cald2011 ,
but they should also be accompanied by oscillations of the charge density and
current electrons escaping into outer space through extreme potential
barriers. Outside the double well, these spatiotemporal oscillations of the
envelopes of the charge and current densities can have the character of waves
traveling to the left and right from the heterostructure and decaying in time
and space. The frequency of these waves is equal to the difference frequency
of the doublet, the wavenumber is the difference between the wave numbers of
free motion of electrons with resonant energies, and the speed of their
propagation is the ratio of these quantities. The process of emission of such
electron waves lasts for the lifetime of quasi-stationary states, which can be
much longer than the wave period if the transparency of the barriers is
sufficiently low. With distance from the heterostructure, trains of difference
waves of charge and current densities decay and broaden rather slowly and can
be detected and removed from the system using electric and magnetic fields of
the corresponding structure. The system can switch to the mode of repetition
of the emission of electron waves or even to the mode of self-oscillation if
positive feedback and periodic resonant pumping of the population of the
doublet in the heterostructure are provided.
Population and decay of a doublet of quasistationary states can be provided in
different ways. We theoretically studied and simulated various mechanisms of
this process. The first of them consists in the scattering of a Gaussian
electron wave packet incident on a double-well system from the outside. In the
leading approximation, it can be described by a relatively simple quantum-
mechanical model in the language of only pure one-particle quantum states of
the scattering problem, which makes it possible to rigorously reveal the main
laws of the process and estimate the contributions of the main features. It
turned out that the amplitude of the resonant difference spatio-temporal wave
harmonic can be greater than or of the order of the amplitudes of its smooth
and high-frequency components. These results are presented below in this paper
using the example of a one-dimensional model of scattering of a Gaussian wave
packet by a structure with three identical $\delta$-barriers.
Two other mechanisms of the population and decay of the doublet of quasi-
stationary states that we studied are associated with diffraction by a double-
well heterostructure of photoelectrons arising from the action of an
ultrashort light pulse on the photocathode with the subsequent formation of a
kind of alternating photoemission current. One mechanism is provided by the
incidence of a photoelectron pulse from the outside onto a double-well
heterostructure deposited on a bulk planar photocathode, and the other is
provided by pulsed photoexcitation of electrons directly in thin layers from
the inside of the double-well structure, which itself acts as a very thin
photocathode. To describe these methods of excitation and decay, it is
necessary to consider mixed quantum states taking into account the external
high-frequency electromagnetic pumping field, as well as inelastic scattering
of electrons, using the approximate methods of the nonstationary quantum
theory of many bodies. For this we used the mathematical apparatus of the
density matrix. The results of an approximate description and calculations
will be presented in the following articles, where it will be shown that the
population of quasi-stationary levels can be determined not only by the
explicit pole features of the amplitudes of resonant scattering of electrons
on a double-well heterostructure, but also by the contribution of these
features to the photoexcitation spectrum, and even to the magnitude of the
matrix elements of electronic optical transitions upon photoexcitation from
within the heterostructure.
Here we are interested in the time and space oscillating solutions of the one-
dimensional nonstationary Schrodinger equation with the potential in the form
of wells and barriers, which are located in a finite region. To obtain such
solutions, three methods are most often used: a) direct numerical integration
in finite differences Kons2003 , b) calculation of the dynamic superposition
of solutions of the stationary Schrodinger equation for a boundary value
problem with a continuous and/or discrete spectrum Peis2008A ,Peis2008B ,
Wint1961 ,Cald2013 ,Cald2016 , describing the evolution of a wave packet, c)
representation of the solution in the form of a resonant expansion, the
members of which are the products of the Moshinsky function (associated with
the problem of a quantum gate and diffraction in time) and resonant wave
functions in the internal region of the potential, where they are finite and
normalized by specific conditions. The latter method c) was developed by G.
Garcia-Calderon et al. Camp2009 -Cald2016 . They carried out active research
of transient quantum processes in resonant tunneling and published many
articles containing interesting and important results describing the evolution
and asymptotics of electronic wave functions in different regions of time and
space. Most of these details relate to the internal region of action of the
potential, where the form of the resonant wave functions is known. In
particular, as in our papers Peis2008A -Peis2008B , the impulsive character of
the decay of quasi-stationary states was illustrated Cald2009 if the spectrum
of the initial wave packet covers a small number of quasi-resonant levels; the
flow of the wave function between successive wells was called in Cald2009
,Cald2011 the ”bouncing” and ”breathing” modes. In Cald2013 ,Cald2016 ,
general formulas for decay wave functions outside the region of action of the
potential were also written, the coordinate dependence of these functions was
determined by the Moshinsky functions, and not by the resonance Gamow wave
functions, which exponentially increase with increasing distance from the
system. However, outside the region of action of the potential, the wave
character of the behavior of the probability and current densities during the
decay of a mixture of a doublet of quasistationary states of a two-well
system, considered in our work, was not clearly distinguished and discussed in
Cald2013 ,Cald2016 .
In contrast, in this article, when describing the scattering of one or a
system of Gaussian pulses, we focus not only on the inside but also on the
outside region of action of the double-well potential. We use method b) to
describe nonstationary probability densities and probability currents at an
arbitrary point in space, and show that in the outer region of a double
quantum well, the envelopes of these quantities demonstrate the properties of
traveling waves. Calculations by the method of continual decomposition b) are
not calculations of a ”black box” type that supposedly ”provide no deep
physical insight” and ”do not provide grasp of the time evolution of the
initial state” Cald2007 -Cald2016 . An elegant version of the saddle-point
method, developed by G.F. Drukarev in 1951 Druk1951 ,Baz1969 , allows one,
within the framework of method b), to identify and estimate the
oscillating contributions of the pole features of the scattering amplitudes to
the wave function of the scattered wave packet in the internal and external
regions of the action of the scattering potential, as was done in Peis2008A
-Peis2008B .
To close this introduction, we emphasize that the complete system of wave
functions of the stationary scattering problem of method b) provides a natural
basis of unperturbed zero-approximation states for describing and calculating
the interactions of electrons with photons and with other particles in the
subsequent application of the density-matrix method to the problem of
photoemission in an open system.
The article is organized as follows. Section II describes a theoretical
quantum mechanical model, provides rigorous formulas, and discusses the
optimal parameters for describing the scattering of a Gaussian wave packet by
a double quantum well. Section III presents the results of rigorous
calculations of space-time oscillations and waves of probability densities and
currents with an explanation of their characteristics. An approximate method
for the analytical identification of these characteristics is described in
Appendix A, and the clear but cumbersome expressions for the probability and
current densities obtained by this method are given in Appendix B.
## II THEORETICAL MODEL, CALCULATION FORMULAS AND PARAMETERS
To confirm the statements made and to highlight the basic laws of the process,
in accordance with the algorithm described in our articles Peis2008A
-Peis2008B , we will analyze in detail a simple one-dimensional model, which
describes the population and subsequent decay of a doublet of quasi-stationary
states of a three-barrier heterostructure due to scattering of pulsed Gaussian
wave packets arriving from the left and having a spectral width of the order
of the distance between the levels of the doublet.
The double quantum well is assumed to be flat, the axis $x$ is directed
perpendicular to it, and the origin of coordinates is placed on its left
boundary. In order to simplify calculations and interpretation of the results,
we simulate potential barriers for electrons with the mass $m$ by three delta
functions
$U(x)=(\hbar^{2}/2m)\sum\nolimits_{n=0}^{2}\Omega\,\delta(x-x_{n})$
of the same power $\Omega$ at a distance $d$ from each other at $x_{0}=0$,
$x_{1}=d$, $x_{2}=2d$. These points on the $x$ axis demarcate the four regions
shown in Fig.1. A delta barrier can be used to model a real, relatively narrow
and high potential barrier, with the fair estimate $\Omega\approx 2mU_{b}d_{b}/\hbar^{2}$,
where $U_{b}$ is the height of the barrier and $d_{b}$ is its width. The
electron energy is counted from the vacuum level $U(x)=U=0$, which is the same
to the left ($x<x_{0}=0$) and to the right ($x>2d$) of the heterostructure;
the effective flat bottom of the heterostructure wells ($0<x<2d$) is located
at the potential energy $U(x)=\tilde{U}<0$.
Figure 1: Model three-barrier heterostructure. The vertical lines with arrows
picture the $\delta$-barriers.
The basis wave functions $\psi(E,x)$ of the one-dimensional stationary problem
of scattering of a wave with the energy $E$ incident from the left are
solutions of the Schrodinger equation of the system under consideration and
are given by the expressions
$\psi(E,x)=\begin{cases}A_{0E}e^{ikx}+B_{0E}e^{-ikx},&x<0,\\ A_{nE}e^{i\tilde{k}(x-x_{n^{\prime}})}+B_{nE}e^{-i\tilde{k}(x-x_{n^{\prime}})},&x_{n^{\prime}}\leqslant x\leqslant x_{n},\\ A_{3E}e^{ik(x-x_{2})},&x>x_{2},\end{cases}$ (1)
where $n=1,2$; $n^{\prime}=n-1$, $k=\hbar^{-1}\sqrt{2mE}$ is the wave number
outside the heterostructure, $\tilde{k}=\hbar^{-1}\sqrt{2m(E-\tilde{U})}$ is
the wave number inside the potential wells, $A_{jE}$ and $B_{jE}$ are the
partial amplitudes of plane monochromatic waves propagating, respectively, to
the right and left in the regions $j=0,1,2,3$, and $B_{3E}=0$ (the wave
arriving from the right is absent), $A_{0E}=\hbar^{-1}\sqrt{m/2\pi k}$ (which
provides normalization of $\psi(E,x)$ to the energy $\delta$-function).
The transfer matrix method Peis2008A -Peis2008B allows one to connect seven
partial amplitudes of four regions by linear relations
$\begin{pmatrix}A_{n+1,E}\\ B_{n+1,E}\end{pmatrix}=M_{nef}\begin{pmatrix}A_{0E}\\ B_{0E}\end{pmatrix},$ (2)
where $M_{nef}=L^{-1}M_{\Omega}M^{n}L$, $n=0,1,2$, $M=M_{\Omega}M(d)$,
$L=\begin{pmatrix}1&1\\ ik&-ik\end{pmatrix},\quad M_{\Omega}=\begin{pmatrix}1&0\\ \Omega&1\end{pmatrix},\quad M(d)=\begin{pmatrix}\cos\tilde{k}d&\tilde{k}^{-1}\sin\tilde{k}d\\ -\tilde{k}\sin\tilde{k}d&\cos\tilde{k}d\end{pmatrix},$
and express all partial amplitudes in terms of the amplitude of the incident
wave $A_{0E}$. In particular, from (2) at $n=2$ we obtain expressions for the
amplitudes of the reflected $B_{0E}=rA_{0E}$ and transmitted $A_{3E}=tA_{0E}$
waves, where
$r=-\frac{\tilde{M}_{21}}{\tilde{M}_{22}},\quad t=\frac{\det\tilde{M}}{\tilde{M}_{22}},$ (3)
where $r$ is the reflection amplitude, $t$ is the transmission amplitude, and
$\tilde{M}_{il}$ $(i,l=1,2)$ are the matrix elements of the two-dimensional
effective transfer matrix $\tilde{M}\equiv M_{2ef}=L^{-1}M_{\Omega}M^{2}L$.
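The transfer-matrix construction of Eqs. (2)-(3) is easy to check numerically. The sketch below builds $L$, $M_{\Omega}$, and $M(d)$ for the three-$\delta$-barrier structure, multiplying the factors in the physical order of traversal, and verifies flux conservation $|r|^{2}+|t|^{2}=1$. It assumes the free-electron value $\hbar^{2}/2m=3.81$ eV Å$^{2}$ and the parameters quoted in Fig. 2; treat it as a sketch, not the authors' code:

```python
import numpy as np

HB2_2M = 3.81  # hbar^2/(2m) for a free electron, eV * Angstrom^2 (assumed)

def reflection_transmission(E, d=125.0, U_well=-4.0, Omega=18.9):
    """r and t of Eq. (3) for three identical delta barriers at 0, d, 2d."""
    k = np.sqrt(E / HB2_2M)              # wavenumber outside, 1/Angstrom
    kt = np.sqrt((E - U_well) / HB2_2M)  # wavenumber inside the wells
    L = np.array([[1, 1], [1j * k, -1j * k]], dtype=complex)
    M_Om = np.array([[1, 0], [Omega, 1]], dtype=complex)  # delta-barrier jump in psi'
    M_d = np.array([[np.cos(kt * d), np.sin(kt * d) / kt],
                    [-kt * np.sin(kt * d), np.cos(kt * d)]], dtype=complex)
    # Propagate (psi, psi') across barrier-well-barrier-well-barrier
    # (rightmost factor acts first):
    T = M_Om @ M_d @ M_Om @ M_d @ M_Om
    Mt = np.linalg.inv(L) @ T @ L        # effective transfer matrix M~
    r = -Mt[1, 0] / Mt[1, 1]
    t = np.linalg.det(Mt) / Mt[1, 1]
    return r, t

for E in (0.60, 0.647, 0.655, 0.70):
    r, t = reflection_transmission(E)
    assert abs(abs(r) ** 2 + abs(t) ** 2 - 1) < 1e-8  # flux conservation
```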
Hence, it can be seen that all partial amplitudes (except for $A_{0E}$), as
well as the amplitudes of reflection and transmission, can have pole
singularities, which are determined by the zeros of the matrix element
$\tilde{M}_{22}=0$, that is, they can have a resonance character near quasi-
stationary levels. Complex roots of the equation $\tilde{M}_{22}=0$ and quasi-
stationary levels are grouped into doublets
$E_{p}=E^{\prime}_{p}+iE^{\prime\prime}_{p}$ $(p=1,2)$ (Fig.2a). The real
parts of pairs of close roots $E^{\prime}_{1}=\operatorname{Re}E_{1}$ and
$E^{\prime}_{2}=\operatorname{Re}E_{2}$ give the energies of quasi-stationary
levels. The imaginary parts of the roots $E_{1}^{\prime\prime}=\operatorname{Im}E_{1}$ and
$E_{2}^{\prime\prime}=\operatorname{Im}E_{2}$ determine the spectral widths and lifetimes
$\tau_{1}=\hbar/E_{1}^{\prime\prime}$ and
$\tau_{2}=\hbar/E_{2}^{\prime\prime}$ of these quasi-stationary states. The
dependence $|\tilde{M}_{22}|^{-1}$ on the real energy $E$ in the vicinity of
the doublet has two close peaks, the widths of which are of the order of
$E_{1}^{\prime\prime}$ and $E_{2}^{\prime\prime}$ (Fig.2b).
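Since $\tilde{M}_{22}(E)$ is analytic in $E$, its complex zeros can be located by a Newton iteration with a finite-difference derivative, starting near a transmission peak. A minimal sketch (same assumed free-electron units, $\hbar^{2}/2m=3.81$ eV Å$^{2}$; not the authors' code, and the exact root values depend on these assumptions):

```python
import numpy as np

HB2_2M = 3.81  # hbar^2/(2m), eV * Angstrom^2 (assumed free-electron value)

def M22(E, d=125.0, U_well=-4.0, Omega=18.9):
    """Matrix element M~_22 whose zeros give the quasi-stationary levels."""
    E = complex(E)
    k = np.sqrt(E / HB2_2M)
    kt = np.sqrt((E - U_well) / HB2_2M)
    L = np.array([[1, 1], [1j * k, -1j * k]], complex)
    M_Om = np.array([[1, 0], [Omega, 1]], complex)
    M_d = np.array([[np.cos(kt * d), np.sin(kt * d) / kt],
                    [-kt * np.sin(kt * d), np.cos(kt * d)]], complex)
    Mt = np.linalg.inv(L) @ (M_Om @ M_d @ M_Om @ M_d @ M_Om) @ L
    return Mt[1, 1]

def find_pole(E0, h=1e-7, tol=1e-12, itmax=100):
    """Newton iteration in the complex E plane (derivative by central difference)."""
    E = complex(E0)
    for _ in range(itmax):
        step = M22(E) / ((M22(E + h) - M22(E - h)) / (2 * h))
        E -= step
        if abs(step) < tol:
            break
    return E

E1 = find_pole(0.647 - 1e-4j)
print(E1)  # should lie in the lower half-plane near the doublet of Fig. 2a
```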
Figure 2: (Color online) a) The zeros of $\tilde{M}_{22}=0$ lie in the lower
half-plane of the complex energy; for the lowest doublet above the vacuum
level of the heterostructure in Fig. 1 (at $d=125$ Å, $\tilde{U}=-4$ eV,
$\Omega=18.9$ Å${}^{-1}=10$ a.u.) they are equal to $E_{1}=(0.647-i1.576\cdot
10^{-4})$ eV and $E_{2}=(0.655-i1.576\cdot 10^{-4})$ eV (i.e., the lifetimes
of the quasi-stationary states are
$\tau_{1}=\hbar/\operatorname{Im}E_{1}=4.175\cdot 10^{-12}$ s and
$\tau_{2}=\hbar/\operatorname{Im}E_{2}=4.175\cdot 10^{-12}$ s). b) The red
curve with two peaks (right scale) depicts the dependence of
$|\tilde{M}_{22}|^{-1}$ on the real energy $E$ for this system; the maxima of
$|\tilde{M}_{22}|^{-1}$ are at $E_{1}=0.647$ eV and $E_{2}=0.655$ eV. The black
curve with one maximum (left scale) depicts the square of the modulus of
the spectral function $c_{E}$ of the incident wave packet for parameters
optimal for the heterostructure of Fig. 1 and Fig. 2a:
$E_{C}=\hbar^{2}k_{C}^{2}/2m=0.651$ eV, $k_{C}=0.414$ Å${}^{-1}=0.219$ a.u.,
$x_{C}=-5000$ Å, $\Delta x=400$ Å, $C_{0}=3.755\cdot 10^{3}$
m${}^{-1/2}=2.73\cdot 10^{-2}$ a.u.,
$n(x_{C},0)=|\Psi(x_{C},0)|^{2}=C_{0}^{2}=1.41\cdot 10^{7}$ m${}^{-1}=7.46\cdot
10^{-4}$ a.u. (expressions (4) and (5)). Atomic unit of length: 1 a.u.
$(x)=0.529\cdot 10^{-10}$ m; atomic unit of time: 1 a.u. $(t)=2.419\cdot 10^{-17}$
s; atomic unit of probability density: 1 a.u. $(n)=1.89\cdot 10^{10}$ m${}^{-1}$;
atomic unit of probability current density: 1 a.u. $(j)=4.1\cdot 10^{16}$ s${}^{-1}$.
Let an electronic Gaussian wave packet fall on the heterostructure from the
left; its wave function at the initial time $t=0$ has the form
$\Psi(x,\;0)=C_{0}\exp\left({ik_{C}x-\frac{{(x-x_{C})^{2}}}{{2(\Delta
x)^{2}}}}\right),$ (4)
where $C_{0}=1/\sqrt{\Delta x\sqrt{\pi}}$ is the initial amplitude of the
packet, $x_{C}<0$ is the initial coordinate of the center of the packet,
$\Delta x$ is the initial spatial width of the packet, and $k_{C}$ is the wave
number of the spectral center of the packet, which corresponds to the energy
$E_{C}=\hbar^{2}k_{C}^{2}/2m$; in the absence of a scattering potential, the
packet moves with the group velocity $v_{C}=\hbar k_{C}/m$.
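The normalization constant in Eq. (4) guarantees $\int|\Psi|^{2}dx=1$ and fixes the peak probability density $n(x_{C},0)=C_{0}^{2}=1/(\Delta x\sqrt{\pi})$. A quick numerical check with the parameters of Fig. 2 (lengths in Å, a sketch only):

```python
import numpy as np

dx_width, x_c, k_c = 400.0, -5000.0, 0.414  # Fig. 2 packet parameters, Angstroms
C0 = 1.0 / np.sqrt(dx_width * np.sqrt(np.pi))

# Sample psi(x,0) of Eq. (4) on a grid wide enough to contain the packet.
x = np.linspace(x_c - 10 * dx_width, x_c + 10 * dx_width, 20001)
psi = C0 * np.exp(1j * k_c * x - (x - x_c) ** 2 / (2 * dx_width ** 2))

norm = np.sum(np.abs(psi) ** 2) * (x[1] - x[0])  # should be 1
peak = C0 ** 2                                   # n(x_C, 0) in 1/Angstrom
print(norm, peak * 1e10)  # ~1.0 and ~1.41e7 m^-1, as in the Fig. 2 caption
```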
The spectral function of the wave packet (4) is determined from the stationary
wave functions $\psi(E,x)$ of the scattering problem
$c_{E}=\int_{-\infty}^{\infty}\psi^{*}(E,x)\Psi(x,0)dx.$ (5)
The parameters of the wave packet (4) are chosen so that at $t=0$, the packet
is located far enough to the left of the heterostructure and that its spectral
function $c_{E}$ also has an almost Gaussian form and overlaps mainly the two
considered quasi-stationary levels (Fig.2b). It was shown in Peis2008A that
this can be easily done by satisfying the conditions
$k_{C}^{-1}\ll\Delta x\ll|x_{C}|\ll k_{C}(\Delta x)^{2},$ (6)
then
$c_{E}\approx\begin{cases}\dfrac{1}{\hbar}\sqrt{\dfrac{m\Delta x}{\sqrt{\pi}\,k}}\,e^{-\frac{(\Delta x)^{2}}{2}(k-k_{C})^{2}}e^{ix_{C}(k_{C}-k)},&E\geqslant 0,\\ 0,&E<0,\end{cases}$ (7)
and the evolution of the packet is mainly determined by the contribution of
the energy region $E_{\min}<E<E_{\max}$, which includes the selected doublet,
but is far from the neighboring doublets. Therefore, at subsequent times, the
nonstationary wave function is given by the integral
$\Psi(x,t)=\int\limits_{E_{\min}}^{E_{\max}}{c_{E}e^{-iEt/\hbar}\psi(E,x)dE}$
(8)
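Condition (6) and the near-Gaussian spectral function (7) can be checked directly: with the Fig. 2 parameters, the spectral weight $|c_{E}|^{2}$ at both doublet energies remains a sizable fraction of its maximum, so the packet pumps both quasi-stationary levels. A sketch (free-electron units with $\hbar^{2}/2m=3.81$ eV Å$^{2}$ assumed; overall constants dropped since only ratios matter):

```python
import numpy as np

HB2_2M = 3.81                  # eV * Angstrom^2 (assumed free-electron value)
dx_width, k_c = 400.0, 0.414   # Fig. 2 packet parameters
x_c_abs = 5000.0               # |x_C|, Angstrom

def spectral_weight(E):
    """|c_E|^2 from Eq. (7), up to constant factors that cancel in ratios."""
    k = np.sqrt(E / HB2_2M)
    return np.exp(-dx_width ** 2 * (k - k_c) ** 2) / k

# Conditions (6): 1/k_C << dx << |x_C| << k_C * dx^2
assert 1 / k_c < dx_width < x_c_abs < k_c * dx_width ** 2

w_max = spectral_weight(HB2_2M * k_c ** 2)  # weight at the spectral center E_C
for E_res in (0.647, 0.655):                # doublet energies of Fig. 2
    assert spectral_weight(E_res) > 0.5 * w_max
```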
We are interested in the probability density
$n(x,t)=\left|{\Psi(x,t)}\right|^{2}=\int{\int{\rho_{EE^{\prime}}(t)}n_{EE^{\prime}}(x)dEdE^{\prime}}$
(9)
and the probability current density
$j(x,t)=\frac{i\hbar}{2m}\left(\Psi(x,t)\frac{d\Psi^{*}(x,t)}{dx}-\Psi^{*}(x,t)\frac{d\Psi(x,t)}{dx}\right)=\int\\!\\!\int\rho_{EE^{\prime}}(t)\,j_{EE^{\prime}}(x)\,dE\,dE^{\prime},$ (10)
where
$n_{EE^{\prime}}(x)=n_{E^{\prime}E}^{*}(x)=\psi(E,x)\psi^{*}(E^{\prime},x)$
(11)
$j_{EE^{\prime}}(x)=\frac{{i\hbar}}{{2m}}\left({\psi(E,x)\frac{{d\psi^{*}(E^{\prime},x)}}{{dx}}-\psi^{*}(E^{\prime},x)\frac{{d\psi(E,x)}}{{dx}}}\right)$
(12)
are the “matrix elements” of the density $n_{EE^{\prime}}$ and of the current
density $j_{EE^{\prime}}(x)$ Land1977 , and
$\rho_{EE^{\prime}}(t)=\rho_{E^{\prime}E}^{*}(t)=c_{E}c_{E^{\prime}}^{*}e^{-i(E-E^{\prime})t/\hbar}$ (13)
is the “density matrix” of the “pure” quantum-mechanical state $\Psi(x,t)$.
When quasi-stationary states are excited by electromagnetic radiation with
subsequent photoemission, especially from “inside” the heterostructure, the
electron states are “mixed,” and the elements of the density matrix no longer
have the form (13) but must be determined from the corresponding kinetic
equations.
The density of the electric charge is $\rho_{e}(x,t)=en(x,t)$ and of the
electric current is $j_{e}(x,t)=ej(x,t)$, where $e$ is the electron charge.
Differentiating (9) and (10) and applying the nonstationary Schrödinger
equation, it is easy to verify that the conservation law for the probability
density and charge, $\partial j(x,t)/\partial x=-\partial n(x,t)/\partial t$,
is satisfied at every point in space at every moment of time.
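The continuity equation can be checked numerically for any concrete packet. The sketch below does so for a freely moving Gaussian packet synthesized from its spectral components; all parameters are hypothetical and the scattering potential is omitted, since the conservation law itself is potential-independent:

```python
import numpy as np

hbar = m = 1.0                                   # atomic units
k_C, dxp, x_C = 0.219, 60.0, -400.0              # hypothetical packet parameters

k = np.linspace(k_C - 0.08, k_C + 0.08, 801)
c_k = np.exp(-dxp**2*(k - k_C)**2/2 + 1j*(k_C - k)*x_C)  # Gaussian spectrum, cf. (7)
E = hbar**2*k**2/(2*m)
x = np.linspace(-600.0, 200.0, 1601)

def Psi(t):
    # free-motion analogue of (8): Ψ(x,t) = ∫ c_k e^{ikx - iEt/ħ} dk
    phase = 1j*np.outer(k, x) - 1j*(E*t/hbar)[:, None]
    return np.trapz(c_k[:, None]*np.exp(phase), k, axis=0)

def n_and_j(t):
    p = Psi(t)
    dp = np.gradient(p, x)
    n = np.abs(p)**2                                             # (9)
    j = ((1j*hbar/(2*m))*(p*np.conj(dp) - np.conj(p)*dp)).real   # (10)
    return n, j

t0, dt = 300.0, 0.5
n_m, _ = n_and_j(t0 - dt)
n_p, _ = n_and_j(t0 + dt)
n0, j0 = n_and_j(t0)

# ∂j/∂x = -∂n/∂t at every grid point, up to discretization error
dn_dt = (n_p - n_m)/(2*dt)
err = np.max(np.abs(np.gradient(j0, x) + dn_dt))/np.max(np.abs(dn_dt))
assert err < 0.05
```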
## III OSCILLATIONS AND WAVES OF CHARGE AND CURRENT DENSITIES. EVALUATION
FORMULAS AND CALCULATION RESULTS
### III.1 Pole Contribution Estimation
Our main goal here is to demonstrate and explain the regular space-time
oscillations of the quantities $n(x,t)$ and $j(x,t)$, caused by the population
and decay of quasi-stationary states of a double quantum well after scattering
of a Gaussian wave packet by this well. Substituting (1) and (4) in (5), (8) -
(10) and performing numerical integration in the range of interest, one can
obtain a series of figures (Fig. 3-Fig. 10) illustrating the details of the
phenomenon.
These oscillations can be described analytically with sufficient accuracy by
estimating the integral (8) using a variant of the saddle-point (steepest-descent)
method developed by G.F. Drukarev in 1951 Druk1951 , which allows one to
isolate and evaluate the contributions of the main poles of the scattering
amplitudes to the desired integral Baz1969 , as was done in Peis2008A
-Peis2008B . A brief explanation of the essence of this method and the main
formulas for calculating the contributions of the saddle points and poles of
the integrands are given in Appendix A.
The result of applying the saddle-point method strongly depends on the width
and position of the saddle in the complex plane, its distance from the origin,
as well as on the form of the spectral function of the packet, the scattering
amplitudes, and the location and type of their singularities. The positions in
the complex plane of the mentioned poles, branch points, and other
singularities of the stationary scattering characteristics and of the spectral
function do not depend on time and coordinates (see (Fig.16) in Appendix A).
However, the saddle points, and with them the lines of type I, move for a
fixed $x$ with time $t$ toward the origin, usually according to the law
$k_{S}\sim 1/t$, capturing the singular points of stationary scattering in
sectors II or III. This gives rise to threshold conditions on $x$ and $t$
under which the singularities make a noticeable contribution to the integrals,
manifesting themselves in the envelope of the wave function $\Psi(x,t)$ as
various moving maxima, fronts, etc.
In the case under consideration, the saddle points of the exponents in the
integrand of (8) are responsible for the formation of the thresholds and
leading pulses of reflection and transmission of the main body of the
scattered wave packet; in principle, their contribution can be estimated using
(A2). The pole singularities of $\psi(E,x)$ (i.e., of the amplitudes $A_{jE}$
and $B_{jE}$) are responsible for the formation of the modulation profile of
the functions $\Psi(x,t)$, $n(x,t)$ and $j(x,t)$, which can oscillate and
slowly decay in time and space owing to the rather slow oscillatory decay of
the superposition of quasi-stationary states in the double quantum well, which
turn out to be populated after the departure of the main body of the wave
packet. Their main contribution is proportional to the sum of the residues
(A3) at the poles of the integrands of (8). In the space-time regions of
steady oscillations, far enough beyond the thresholds and leading scattering
pulses of the main body of the packet (when the saddle point and straight line
I turn out to be to the left of the spectral center of the initial wave packet
and of the poles of the scattering amplitudes), these pole contributions can
be large in comparison with other contributions, and the wave function is
approximately proportional to superpositions of damped traveling waves
$\Psi(x,t)\approx\begin{cases}\Psi_{0}(x,t)+\sum_{p=1}^{2}\tilde{B}_{0E_{p}}e^{-ik_{p}x-iE_{p}t/\hbar},& x<0,\\ \Psi_{n}(x,t)+\sum_{p=1}^{2}\left(\tilde{A}_{nE_{p}}e^{i\tilde{k}_{p}(x-x_{n^{\prime}})}+\tilde{B}_{nE_{p}}e^{-i\tilde{k}_{p}(x-x_{n^{\prime}})}\right)e^{-iE_{p}t/\hbar},& n=1,2,\;n^{\prime}=n-1,\;x_{n^{\prime}}\leq x<x_{n},\\ \Psi_{3}(x,t)+\sum_{p=1}^{2}\tilde{A}_{3E_{p}}e^{ik_{p}(x-x_{2})-iE_{p}t/\hbar},& x>x_{2}.\end{cases}$ (14)
The terms $\Psi_{0}(x,t)$, $\Psi_{n}(x,t)$, $\Psi_{3}(x,t)$ come from the
contributions of those parts of the integration contour of the fastest descent
that are far from the poles of the scattering amplitudes; they are smooth
functions of $x$ and $t$ with relatively small magnitude in the regions under
consideration Peis2008A -Peis2008B . The coefficients
$\tilde{A}_{nE_{p}}=|\tilde{A}_{nE_{p}}|e^{i\alpha_{np}}$ ($n=1,2,3$ ) and
$\tilde{B}_{nE_{p}}=|\tilde{B}_{nE_{p}}|e^{i\beta_{np}}$ ($n=0,1,2$) are
proportional to the residues of the integrand of (8) at the poles
$E_{p}=E^{\prime}_{p}+iE^{\prime\prime}_{p}$ ($p=1,2$) of the partial
amplitudes $A_{nE}$ and $B_{nE}$ from (1), taking into account the explicitly
written coordinate-time exponents, and the complex wave numbers are
$k_{p}\equiv k(E_{p})=\hbar^{-1}\sqrt{2mE_{p}}=k^{\prime}_{p}+ik^{\prime\prime}_{p}$ and
$\tilde{k}_{p}\equiv\tilde{k}(E_{p})=\hbar^{-1}\sqrt{2m(E_{p}-\tilde{U})}=\tilde{k}^{\prime}_{p}+i\tilde{k}^{\prime\prime}_{p}$
(we choose the root branches so as to satisfy the physical condition of waves
damped in space). We are interested in systems that provide sufficiently slow
damping, for which $E^{\prime}_{p}\gg\left|{E^{\prime\prime}_{p}}\right|$ and
$k^{\prime}_{p}\gg|k^{\prime\prime}_{p}|$.
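The structure of (14) already anticipates the oscillations discussed in the next section: at a fixed point $x$, keeping only the two pole terms, $|\Psi|^{2}$ beats at the difference frequency $(E^{\prime}_{2}-E^{\prime}_{1})/\hbar$ while decaying at the rate $2|E^{\prime\prime}_{p}|/\hbar$. A minimal sketch with illustrative doublet parameters (not those of Fig.2):

```python
import numpy as np

hbar = 1.0
# Hypothetical doublet with E'_p >> |E''_p| (equal imaginary parts, as in the text)
E1 = 0.0230 - 1e-5j            # a.u.
E2 = 0.0233 - 1e-5j
A1, A2 = 0.7, 0.6              # illustrative residue amplitudes at a fixed x

t = np.linspace(0.0, 3e5, 20001)
Psi = A1*np.exp(-1j*E1*t/hbar) + A2*np.exp(-1j*E2*t/hbar)
n = np.abs(Psi)**2

# beats at ω12 = (E'_2 - E'_1)/ħ under a slow decay exp(-2|E''|t/ħ)
omega12 = (E2.real - E1.real)/hbar
expected = (A1**2 + A2**2 + 2*A1*A2*np.cos(omega12*t))*np.exp(-2*abs(E1.imag)*t/hbar)
assert np.allclose(n, expected, rtol=1e-6, atol=1e-9)

T = 2*np.pi/omega12            # beat period T = 2πħ/(E'_2 - E'_1), here about 2.1e4 a.u.
```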
Below we present the results of numerical calculations, using the exact
formulas (1) and (8) - (12), of the quantities quadratic in $\Psi(x,t)$: the
probability density $n(x,t)=\left|{\Psi(x,\;t)}\right|^{2}$ and the
probability current density $j(x,t)$ of electrons inside and outside the
considered heterostructure, for the parameters of the wave packet and
heterostructure given in the caption to Fig.2. These calculations show that
the approximation (14) provides a reasonable interpretation and estimation of
the oscillatory and wave effects under consideration.
### III.2 The region inside the double quantum well
Inside each of the $n=1,2$ wells of the heterostructure at $x_{n-1}\leq x\leq
x_{n}$, substituting the expressions of the second line of (1) into the exact
formulas (8) - (11) and performing numerical integration, we obtain figures
that demonstrate the probability of finding an electron inside the double
quantum well (Fig.3 and Fig.4), the quasiperiodic flow of the wave function
and the probability density between the wells (Fig.3 and Fig.5), as well as
the corresponding behavior of the probability current density (Fig.6).
The probability of finding an electron inside the double well
$P(t)=\int_{0}^{2d}{|{\Psi(x,\;t)}|^{2}dx}$ with time first increases rather
quickly and then decreases relatively slowly according to a law close to
exponential, while the analogous probabilities of finding an electron inside
each of the two wells $P_{n}(t)=\int_{(n-1)d}^{nd}{|{\Psi(x,\;t)}|^{2}dx}$
oscillate with the difference frequency of the doublet
$\omega\equiv\omega_{12}=(E^{\prime}_{2}-E^{\prime}_{1})/\hbar$, with a
period $T=2\pi/\omega_{12}=2\pi\hbar/(E^{\prime}_{2}-E^{\prime}_{1})\approx
5.27\cdot 10^{-13}$ s $\approx 2.18\cdot 10^{4}$ a.u.,
almost in antiphase with each other (Fig.3):
Figure 3: (Color online) Time dependence of the probabilities of finding an
electron: inside the double well $P(t)$ (black line), in the left well
$P_{1}(t)$ (red line), in the right well $P_{2}(t)$ (blue line). Time is in
atomic units, 1 a.u. $=2.419\cdot 10^{-17}$ s.
The population of the quasi-stationary states occurs approximately during the
time of reflection and transmission of the main body of the wave packet, which
is equal in order of magnitude to $\Delta t\sim d/v_{C}=md/\hbar k_{C}\sim
10^{3}$ a.u. $\ll\tau$, where $v_{C}=\hbar k_{C}/m$. Note that the rate of
increase of the quantity $P(t)$ (the time $\Delta t$ of penetration of an
electron into the well) depends much more weakly on the quantity $\Omega$ than
the rate of the subsequent decrease (the time $\tau$ of decay of the
quasi-stationary states). An exponential approximation of the decaying part of
the curve in Fig.3 gives the relaxation time of the population of
quasi-stationary states in the heterostructure $\tau=3.87\cdot 10^{-12}$ s
$=1.6\cdot 10^{5}$ a.u. and an effective broadening $\hbar/\tau\approx
1.701\cdot 10^{-4}$ eV, which is close to
$\operatorname{Im}E_{1}=\operatorname{Im}E_{2}=1.576\cdot 10^{-4}$ eV (see the
data in Fig.2). The area under the curve $P(t)$ and the maximum value of the
probability of finding an electron inside the double well $P_{\max}$ change
nonmonotonically with increasing $\Omega$ due to the nonmonotonic dependence
of the transmission coefficients of the $\delta$-barrier: the value $P_{\max}$
first increases to a certain maximum value at $\Omega\sim d^{-1}$ and then
decreases (Fig.4), but the length of the exponential “tail” of $P(t)$ in
(Fig.3) monotonically increases. The latter is consistent with the statement
proved in our works Peis2008A -Peis2008B that in a heterostructure formed by
$\delta$-barriers of the same power $\Omega$ located at a distance $d$ from
each other, the lifetime $\tau_{n}$ of the $n$-th quasi-stationary state
increases with increasing $\Omega$ (and $d$) but decreases with increasing
$n$ as $\tau_{n}\propto m\Omega^{2}d^{4}/(n+1)^{3}$. Hence it follows that for
the maximum realization of the studied effects it is desirable to select the
optimal values of all parameters of the problem (see the caption to (Fig.2)),
so that both $P_{\max}$ and $\tau_{n}$ are as large as possible.
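These quoted numbers are mutually consistent, as the following unit-conversion check shows (all values are taken from the text; the scaling function at the end only illustrates the proportionality $\tau_{n}\propto m\Omega^{2}d^{4}/(n+1)^{3}$, with arbitrary units):

```python
# Consistency of the quoted decay numbers (values from the text; unit
# conversions only, not a new calculation)
au_time = 2.419e-17          # s per atomic unit of time
hbar_eVs = 6.582e-16         # eV*s

tau = 3.87e-12               # s, fitted relaxation time of P(t)
assert abs(tau/au_time - 1.6e5)/1.6e5 < 0.01          # tau = 1.6e5 a.u.
blur = hbar_eVs/tau          # effective broadening ħ/τ
assert abs(blur - 1.701e-4)/1.701e-4 < 0.01           # about 1.70e-4 eV
# close to Im E_1 = Im E_2 = 1.576e-4 eV:
assert abs(blur - 1.576e-4)/1.576e-4 < 0.1

# Scaling of the lifetimes, tau_n ~ m*Omega^2*d^4/(n+1)^3 (Peis2008A-Peis2008B)
def tau_rel(n, Omega, d, m=1.0):
    return m*Omega**2*d**4/(n + 1)**3

assert tau_rel(1, 2.0, 1.0)/tau_rel(1, 1.0, 1.0) == 4.0       # grows as Omega^2
assert tau_rel(2, 1.0, 1.0)/tau_rel(1, 1.0, 1.0) == 8.0/27.0  # falls as (n+1)^-3
```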
Figure 4: (Color online) Maximum probability of finding a particle inside a
double well $P_{\max}$, depending on the barrier power $\Omega$ with other
fixed parameters (Fig.2).
Figure 5: (Color online) Coordinate-time dependence of the values (color scale
on the right) of the probability density $n(x,t)$ of finding a particle inside
the double well $0\leq x\leq 2d$. Coordinate in angstroms Å, time and
probability density in atomic units.
Figure 6: (Color online) Coordinate-time dependence of the values (color scale
on the right) of the probability current density $j(x,t)$ of a particle inside
the double well $0\leq x\leq 2d$. Coordinate in angstroms Å, time and
probability current density in atomic units.
Figures 5 and 6 are quite well explained by (9), (10) and the second line of
(14) at those values of $t$ and $x$ for which $\Psi_{n}(x,t)$ can be
neglected. Complete analytical expressions for $n(x,t)$ and $j(x,t)$ in this
approximation are given by formulas (21) and (22), written out in Appendix B.
It can be seen from (21) and (22) that, inside the double
well, the quantities $n(x,t)$ and $j(x,t)$ undergo spatio-temporal
oscillations and weak exponential decay with time. The terms in the first
lines of both expressions (21) and (22) almost do not change with time $t$ and
coordinate $x$, the terms in the second lines weakly decay with time, but
quickly change along the coordinate with spatial periods
$\tilde{\lambda}_{p}=\pi/\tilde{k}^{\prime}_{p}\sim\pi/k_{C}$, which are small
in comparison with the width of the wells $d$. The last four lines in both
expressions (21) and (22) describe plane waves traveling to the right and left
inside the wells, the corresponding wave-like temporal oscillations $n(x,t)$
and $j(x,t)$ occur with the difference frequency of the doublet
$\omega\equiv\omega_{12}=(E^{\prime}_{2}-E^{\prime}_{1})/\hbar$. In this case,
the terms in the third and fourth lines describe traveling waves, the
wavelength of which
$\tilde{\lambda}_{-}=2\pi|{\tilde{k}^{\prime}_{1}-\tilde{k}^{\prime}_{2}}|^{-1}$
is large in comparison with the width of the wells $d$; therefore, such terms
inside the wells are almost independent of $x$ at a fixed $t$, the phase
velocity of these waves is
$\tilde{v}_{-}=\omega\tilde{\lambda}_{-}/2\pi=(E^{\prime}_{2}-E^{\prime}_{1})/\hbar|{\tilde{k}^{\prime}_{1}-\tilde{k}^{\prime}_{2}}|=7.189\cdot
10^{5}$ m/s. However, the fifth and sixth lines describe short-wavelength
waves traveling towards each other, the wavelength of which
$\tilde{\lambda}_{+}=2\pi\left|{\tilde{k}^{\prime}_{1}+\tilde{k}^{\prime}_{2}}\right|^{-1}\sim\pi/k_{C}$
is small compared to the width $d$ of the wells, and the phase velocity of
such waves
$\tilde{v}_{+}=\omega\tilde{\lambda}_{+}/2\pi=(E^{\prime}_{2}-E^{\prime}_{1})/\hbar|\tilde{k}^{\prime}_{1}+\tilde{k}^{\prime}_{2}|\approx
2.202\cdot 10^{3}$ m/s, is small compared to $\tilde{v}_{-}$. Note also that
in expression (21) all terms have almost the same order of magnitude;
therefore, in Fig.5 the coordinate dependence of $n(x,t)$ inside the wells is
dominated by short-wavelength components with a wavelength
$\tilde{\lambda}_{+}\sim\pi/k_{C}$, which change their amplitude rather
abruptly between the wells. On the contrary, in expression (22) such
short-wave components make a relatively small contribution to the coordinate
dependence of $j(x,t)$ in comparison with the long-wave components
$\tilde{\lambda}_{-}$: the fifth and sixth lines of expression (22) contain a
small factor $|{\tilde{k}^{\prime}_{1}-\tilde{k}^{\prime}_{2}}|\ll k_{C}$,
and the third and fourth lines of expression (22) contain a large factor
$|{\tilde{k}^{\prime}_{1}+\tilde{k}^{\prime}_{2}}|\approx 2k_{C}$; therefore,
in Fig.6 the coordinate dependence of $j(x,t)$ inside the wells is very
smooth, with breaks at the boundaries of the wells.
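The quoted velocities and the doublet period can be cross-checked against the statements about the wavelengths $\tilde{\lambda}_{\pm}$ (SI units; all input values are from the text, with the Bohr radius used to convert $k_{C}$):

```python
import math

# Quoted values from the text (SI units)
T = 5.27e-13                 # s, beat period 2π/ω12
v_minus = 7.189e5            # m/s, long-wave phase velocity v~_-
v_plus = 2.202e3             # m/s, short-wave phase velocity v~_+
a0 = 5.292e-11               # m, Bohr radius (1 a.u. of length)
k_C = 0.219/a0               # 1/m, spectral center of the packet

omega12 = 2*math.pi/T
k_diff = omega12/v_minus     # |k~'_1 - k~'_2| = ω/v~_-
k_sum = omega12/v_plus       # |k~'_1 + k~'_2| = ω/v~_+

lam_minus = 2*math.pi/k_diff # about 3.8e-7 m: large compared to a nm-scale well
lam_plus = 2*math.pi/k_sum   # about 1.2e-9 m: small compared to the well width
assert lam_minus/lam_plus > 100
# λ~_+ is of order π/k_C, as stated in the text
assert 0.3 < lam_plus/(math.pi/k_C) < 3
```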
### III.3 The region outside the double quantum well on the left
Similarly, to the left of the double well at $x<0$, substituting the
expression of the first line of (1) into the exact formulas (8)-(12) and
performing numerical integration, we obtain figures that demonstrate the
decaying probability density waves (Fig.7) and the current density waves
traveling to the left (Fig.8).
Figure 7: (Color online) Wave coordinate-time dependence of the values (color
scale on the right) of the probability density $n(x,t)$ of finding a particle
at the corresponding points of the left half-space $x<0$. Coordinate $x$ in
angstroms Å, time and probability density in atomic units.
Figure 8: (Color online) Wave coordinate-time dependence of the values (color
scale on the right) of the probability current density $j(x,t)$ of a particle
at the corresponding points of the left half-space $x<0$. Coordinate $x$ in
angstroms Å, time and probability current density in atomic units.
These figures (Fig.7) and (Fig.8) are also quite well explained by (9), (10)
and the first line of (14) at those values of $t$ and $x$ for which
$\Psi_{0}(x,t)$ can be neglected; this gives the main pole contributions to
$n(x,t)$ and $j(x,t)$ in the form of the analytical formulas (23) and (24)
given in Appendix B, which describe probability density and probability
current waves traveling to the left. Figures (Fig.7) and (Fig.8) show that to
the left of the double well the wave part of $j(x,t)$ changes almost in
antiphase to the wave part of $n(x,t)$. This is explained by the minus sign in
(24) and the fact that we have $k^{\prime}_{1}\approx k^{\prime}_{2}\approx
k_{C}=0.219$ a.u.
### III.4 The region outside the double quantum well on the right
In the same way, to the right of the double well at $x>x_{2}$, after
substituting the expression of the third line of (1) into the exact formulas
(8) - (12) and numerical integration, figures are obtained that demonstrate
the decaying waves of the probability density (Fig.9) and of the probability
current density traveling to the right.
Figure 9: (Color online) Wave coordinate-time dependence of the values (color
scale on the right) of the probability density $n(x,t)$ of finding a particle
in the corresponding points of the right half-space $x>x_{2}$. Coordinate $x$
in angstroms Å, time and probability density in atomic units.
For the coordinate-time wave dependence of the current density $j(x,t)$ of a
particle at the points of the right half-space $x>x_{2}$, the figure
qualitatively looks like Fig.9; that is, to the right of the double well the
wave parts of $n(x,t)$ and $j(x,t)$ change almost in phase, but in atomic
units $j(x,t)$ is smaller than $n(x,t)$ by about an order of magnitude.
These dependences are also reasonably well explained by (9), (10) and the
third line of (14) at those values of $t$ and $x$ for which $\Psi_{3}(x,t)$
can be neglected; this gives the main pole contributions to $n(x,t)$ and
$j(x,t)$ in the form of the analytical formulas (25) and (26) given in
Appendix B, which describe probability density and probability current waves
traveling to the right.
The noted similarity and difference in the behavior of $j(x,t)$ and $n(x,t)$
are explained by the presence in (26), in comparison with (25), of factors
containing $k^{\prime}_{1}\approx k^{\prime}_{2}\approx k_{C}=0.219$ a.u.
### III.5 The complete picture of generation of probability density and
current waves
Thus, inside a heterostructure in the form of a double quantum well,
oscillations of the electron density and current with the difference frequency
of the doublet $\omega\equiv\omega_{12}=(E^{\prime}_{2}-E^{\prime}_{1})/\hbar$
can occur, which look like a periodic overflow of the electron wave function
$\Psi(x,t)$ and of the probability density $n(x,t)$ between the wells (in
time, almost in antiphase between the left and the right well), so that
outside the heterostructure probability density waves and current density
waves outgoing to the left and to the right are formed. In this case, outside
the heterostructure the quantities $n(x,t)$ and $j(x,t)$, quadratic in the
magnitude of $\Psi(x,t)$, oscillate in time with the difference frequency of
the doublet $\omega\equiv\omega_{12}$ and in space with a wave number equal to
the difference $k_{12}=k^{\prime}_{2}-k^{\prime}_{1}$, slowly decaying with
decrements determined by the imaginary parts of $E_{p}$ and $k_{p}$.
Waves of $n(x,t)$ and $j(x,t)$ move to the left and to the right with the same
velocities $\operatorname{v}\approx\lambda/T=4.79\cdot 10^{5}$ m/s, where the
wavelength is $\lambda=2\pi/k_{12}=2\pi/|k^{\prime}_{2}-k^{\prime}_{1}|\approx
2480$ Å and the period of the waves is
$T=2\pi/\omega_{12}=2\pi\hbar/(E^{\prime}_{2}-E^{\prime}_{1})\approx 2.18\cdot
10^{4}$ a.u. $\approx 5.27\cdot 10^{-13}$ s. The generation of these waves is
represented in Fig.10 by level lines in the $t$-$x$ plane.
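These values are mutually consistent, as a two-line check shows (numbers from the text):

```python
# Sanity check of the quoted outside-wave parameters (values from the text)
lam = 2480e-10               # m, wavelength λ = 2π/|k'_2 - k'_1|
T_au = 2.18e4                # a.u., period of the waves
au_time = 2.419e-17          # s per atomic unit of time

T = T_au*au_time             # period in seconds
assert abs(T - 5.27e-13)/5.27e-13 < 0.01     # about 5.27e-13 s, as quoted
v = lam/T                                    # phase velocity of the n, j waves
assert abs(v - 4.79e5)/4.79e5 < 0.05         # about 4.7-4.8e5 m/s, as quoted
```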
Figure 10: (Color online) The calculated relief levels of the quantities a)
probability density $n(x,t)$, b) probability current density $j(x,t)$ in
accordance with the color scales to the right of the figures. On this scale,
the region inside the double well is not allowed, and the two lower stripes on
the left qualitatively represent the main bodies of the incident and reflected
wave packets having $n\sim 10^{-3}$ a.u. and $j\sim 10^{-5}$ a.u. Coordinate
$x$ in angstroms Å, time, probability density and probability current density
in atomic units.
## IV PROLONGATION AND AMPLIFICATION OF WAVE GENERATION
In the system under consideration, it is possible to organize a mode of
repetition or amplification of the process of radiation of electron
probability density and probability current density waves. If we know the
space-time periods of the waves under study, then in order to prolong the
radiation process and increase the amplitude of the density and current waves,
we can form a quasiperiodic sequence of wave packets similar to the original
packet (4) to the left of the double quantum well and send them in such a way
as to provide an additional resonant pumping of the population of quasi-
stationary states of the heterostructure. To prepare such a coherent chain of
pulses, one can, for example, use two methods:
1) The first of these methods consists in arranging along the $x$ axis an
equidistant sequence of identical pulses with a spatial period close to a
multiple of the doubled resonant difference wavelength
$\lambda=2\pi/|k_{12}|$. At the initial moment of time $t=0$, the wave
function should be prepared in the form of a spatial sequence of $N$ identical
wave packets following the head packet (4), in which the initial coordinates
of the centers $x_{Cn}=x_{C}-n\delta x$ are shifted relative to $x_{C}$
($\delta x$ is the shift period; $n=1,2,...,N$). If these packets almost do not
overlap and for each of them conditions (6) and (7) are fulfilled with
replacement $x_{C}\to x_{Cn}$, then instead of the spectral function $c_{E}$
in the integrand (8) there appears the spectral function $c_{N}(E)$ of the
entire sequence of packets, which in this case is given by the sum
$c_{N}(E)=c_{E}\sum\limits_{n=0}^{N-1}e^{-in\delta x\,\delta k}=c_{E}\,e^{-i(N-1)\delta x\,\delta k/2}\,y_{N}\\!\left({\frac{\delta x\,\delta k}{2}}\right),$ (15)
where $\delta k=k_{C}-k$, and an interference function
$y_{N}(z)\equiv\frac{{\sin(Nz)}}{{\sin z}}$ (16)
is periodic in $z$ with period $2\pi$ and has its main extrema $y_{N\max}=N$
at the argument values $z_{\max}=s\pi$, where $s$ is an integer. In the theory
of diffraction gratings it describes an increase of the amplitude of the
resultant wave at its main resonance maxima by a factor of $N$, and of its
intensity by a factor of $N^{2}$. In expression (16) we have
$z(k)\equiv\delta x\,\delta k/2$, and it is obvious that at the main extrema of
the function $y_{N}(z)$ all exponentials under the sum sign are equal to one,
so the entire sum equals $N$. In our case, the integration of (8) with
$c_{N}(E)$ instead of $c_{E}$
provides a significant contribution of the poles
$k_{p}=k^{\prime}_{p}+ik^{\prime\prime}_{p}$ of the scattering amplitudes, as
for a single wave packet; therefore, due to the superposition of $N$ resonant
diffracted waves, the function $y_{N}(z)$ can also provide up to a nearly
$N^{2}$-fold (under the condition $\left|{k^{\prime\prime}_{p}}\right|\ll
k^{\prime}_{p}$) amplification of the wave amplitudes
$n(x,t)=|\Psi(x,t)|^{2}$ and $j(x,t)$ in comparison with their values for one
($N=1$) wave packet in the corresponding intervals of $x$ and $t$. This takes
place if the points
$k_{m}$ of the main extrema of the function $y_{N}(z(k))$ are close to the
points $k_{1}\approx k^{\prime}_{1}$ and $k_{2}\approx k^{\prime}_{2}$ of the
resonance maxima of the moduli of the amplitudes of stationary scattering on
the double well, which can be ensured by selecting the value $\delta x$.
Indeed, the period of $y_{N}(z(k))$ in the argument $k=k_{C}-\delta k$ is
equal to $4\pi/\delta x$ when $k$ is counted from $k_{C}$, and since the
spectral center $k_{C}$ of the original wave packet is located almost in the
middle between the resonance wave numbers $k_{1}\approx k^{\prime}_{1}$ and
$k_{2}\approx k^{\prime}_{2}$, at the main extrema there should hold
$|\delta k|=|k_{C}-k_{m}|=|k_{12}|/2=\pi/\lambda$; thus the values of the
shift periods favorable for maximum amplification should be close to $\delta
x\approx 4\pi s/k_{12}=2s\lambda$ (Fig.11). Weaker amplification of the waves
can occur at values of $\delta x$ for which
$N>|{y_{N}(z(k_{1}))}|\approx|{y_{N}(z(k_{2}))}|\geq 1$, and weakening of the
sum wave will occur at $|{y_{N}(z(k_{1}))}|\approx|{y_{N}(z(k_{2}))}|<1$.
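The amplification mechanism here is exactly that of a diffraction grating; a short sketch of the interference function (16) confirms its main-extremum value $N$ and the resulting $N^{2}$ gain in intensity:

```python
import numpy as np

def y_N(z, N):
    # interference function (16): y_N(z) = sin(Nz)/sin(z)
    return np.sin(N*z)/np.sin(z)

z = np.linspace(1e-6, np.pi - 1e-6, 100001)
for N in (2, 3, 5):
    # main extrema |y_N| = N at z = s*pi, where all N phasors align
    assert abs(np.abs(y_N(z, N)).max() - N) < 1e-3
    # equivalently, the geometric sum over e^{-2inz} has modulus N there
    assert abs(abs(np.exp(-2j*np.arange(N)*1e-9).sum()) - N) < 1e-6

# an amplitude gain N at resonance implies an intensity gain N^2 for |Psi|^2
assert abs(y_N(1e-6, 3)**2 - 9.0) < 1e-3
```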
Figure 11: (Color online) Spectral functions $c_{N}(E)$ at $s=2$ favorable for
maximum wave amplification versus the resonance peaks $|\tilde{M}_{22}|^{-1}$
(cf. (Fig.2b)): a) for $N=2$ the resonances are given by the first main maxima
of $|c_{2}|^{2}$; b) for $N=3$ the resonances are given by the second main
maxima of $|c_{3}|^{2}$. The curves are brought to the same unit scale for
ease of comparison.
Figure 12: The spatial profile of the resonant amplification of the
probability density and current waves by a sequence of three ($N=3$) identical
wave packets shifted relative to each other by a distance of $\delta x\,\approx
4\pi s/k_{12}=2s\lambda$ = 9828 Å, $s=2$ at the moment of time $t=3\cdot
10^{5}$ a.u. $=7.26\cdot 10^{-12}$ s (the main bodies of the reflected packets
are cut off because they are not of interest to us, they are about an order of
magnitude larger than the vertical size of the panels). Coordinate in
angstroms Å, time, probability density and probability current density in
atomic units.
To find the period $\delta x$, we also used a more general method, which is
valid also in cases where conditions (6) and (7) are violated for the
sequential packets. Namely, the period $\delta x$ was determined numerically
from the points of intersection in the $E$-$\delta x$ plane of the straight
lines $E=E_{1}$, $E=E_{2}$ with the lines of the main extrema of the spectral
function of the entire sequence of wave packets, parametrically depending on
$\delta x$ and calculated not according to (15) but according to the general
formula (5), in which $\Psi(x,0)$ is taken equal to the initial wave function
of the entire sequence of wave packets.
Figures (Fig.12) and (Fig.13) demonstrate the resonant coherent amplification
of the probability density and probability current waves by a spatial sequence
of three ($N=3$) identical wave packets shifted in space by $\delta x\,\approx
4\pi s/k_{12}=2s\lambda$ at $s=2$.
Figure 13: The time profile of the resonant amplification of the probability
density and current waves by a sequence of the same three ($N=3$) identical
wave packets (at $s=2$), as in Fig.12 using the example of points $x=x_{0}=0$
on the left and $x=x_{2}=2d$ on the right boundaries of the double quantum
well. The time, probability density and probability current density in atomic
units.
2) The second method involves creating almost identical pulsed wave packets at
one place sequentially in time, with a period close to a multiple of the
resonant difference time period $T=2\pi/\omega_{12}$ of the wave, using an
appropriate temporal aperture function. In this case, the source of particles
should be arranged in such a way that coherent wave pulses of the form (9),
(10), following each other, appear sequentially at the same place with a time
period $\delta t$. If we assume that these packets almost do not overlap
and for each of them conditions of the type (6) and (7) are satisfied, then in
each time interval $(N-1)\delta t\leq t\leq N\delta t$ when the $N$ pulses are
excited (and also for $t>(N_{0}-1)\delta t$ if the pulse with the number
$N=N_{0}$ is the last) in the expressions (9) and (10), instead of the product
of spectral functions $c_{E}c_{E^{\prime}}^{*}$, a function of the entire
sequence of pulses $(c_{E}c_{E^{\prime}}^{*})_{N}$ appears, which is now given
by the sum
$(c_{E}c_{E^{\prime}}^{*})_{N}=c_{E}c_{E^{\prime}}^{*}\sum_{n=0}^{N-1}e^{in\delta
t(E-E^{\prime})/\hbar}=c_{E}c_{E^{\prime}}^{*}e^{i(N-1)z^{\prime}}y(z^{\prime})$
(17)
The interference function $y(z^{\prime})$ has the same form (16), but now its
argument is $z^{\prime}\equiv z^{\prime}(E-E^{\prime})=\delta
t(E-E^{\prime})/2\hbar$; as above, the function $y(z^{\prime})$ is periodic
with period $2\pi$ and has main extrema $|y_{\max}|=N$ at the argument values
$z^{\prime}_{\max}=s\pi$, where $s$ is an integer. Integration in
(9) and (10) with $(c_{E}c_{E^{\prime}}^{*})_{N}$ instead of
$c_{E}c_{E^{\prime}}^{*}$ provides the determining contribution of the poles
$E_{p}=E^{\prime}_{p}+iE^{\prime\prime}_{p}$ of the scattering amplitudes, so
that in the corresponding intervals of $x$ and $t$, in which the $N$ pulses of
$n(x,t)$ and $j(x,t)$ are already excited and undergo diffraction, the
function $y(z^{\prime})$ can, due to their superposition, now provide only up
to a nearly $N$-fold amplification of the wave amplitudes
$n(x,t)=|{\Psi(x,\;t)}|^{2}$ and $j(x,t)$ (under the condition
$|E^{\prime\prime}_{p}|\ll E^{\prime}_{p}$; otherwise, due to attenuation, the
amplification is weaker) compared to their values for one ($N=1$) wave packet.
This takes place if the equalities $E-E^{\prime}\approx
E^{\prime}_{2}-E^{\prime}_{1}=\hbar\omega_{12}=2\pi\hbar/T=2\pi\hbar s/\delta
t$ are satisfied at the poles $E_{p}=E^{\prime}_{p}+iE^{\prime\prime}_{p}$,
which can be ensured by selecting a value of $\delta t$ close to a multiple of
the period of these waves, $\delta t=sT$. Weaker amplification of the waves
can occur at values of $\delta t$ for which $N>|y_{N}(z^{\prime})|\geq 1$, and
attenuation of the waves will occur at values of $\delta t$ for which
$|y_{N}(z^{\prime})|<1$.
The period $\delta t$ favorable for amplification can also be found
numerically by shifting the patterns of Fig.10a) and/or Fig.10b) vertically by
$\delta t$ until the parallel oblique lines of maxima (and minima) of the
shifted and unshifted patterns are superimposed on each other after the
required number $s$ of periods for the required number $N$ of wave packets.
In relation to $\Psi(x,t)$, the waves $n(x,t)=|{\Psi(x,\;t)}|^{2}$ and
$j(x,t)$ are a kind of “intensity waves”; in case 2) they are amplified only
by a factor of $N$, in contrast to the previous case 1) and to the situation
in the theory of diffraction gratings, which provide an increase in intensity
by a factor of $N^{2}$.
Figures (Fig.14) and (Fig.15) demonstrate the resonant coherent amplification
of the probability density and current waves by the temporal sequence of two
($N=2$) identical wave packets shifted in time by $\delta t=sT=2\pi
s/|\omega_{12}|$ at $s=3$.
Figure 14: The spatial profile of the resonant amplification of the
probability density and current waves by a sequence of two ($N=2$) identical
wave packets shifted in time relative to each other by $\delta t=sT=65448$
a.u., $s=3$, at
the moment of time $t=3\cdot 10^{5}$ a.u. $=7.26\cdot 10^{-12}$ s (the main
bodies of the reflected packets are cut off because they are not of interest
to us, they are about an order of magnitude larger than the vertical size of
the panels). Coordinate in angstroms Å, time, probability density and
probability current density in atomic units.
Figure 15: The time profile of the resonant amplification of the probability
density and current waves by a sequence of the same two ($N=2$) identical wave
packets (at $s=3$), as in Fig.14 using the example of points $x=x_{0}=0$ on
the left and $x=x_{2}=2d$ on the right boundaries of the double quantum well.
The time, probability density and probability current density in atomic units.
## V CONCLUSION
The generation of probability density and probability current density waves of
electrons in the range of terahertz frequencies and micrometer wavelengths is
of interest from the point of view of various applications of micro- and
nanoelectronics. In this paper, we have shown that such generation can be
realized as a result of the excitation, by a pulsed electron source, of a
doublet of quasi-stationary states of a three-barrier heterostructure in the
form of a symmetric double quantum well. The exciting electron pulse in the
form of a Gaussian wave packet of picosecond duration can, in turn, be
created, for example, by pulsed photoemission when a photocathode is exposed
to a femtosecond light pulse, or in some other way. The results of the
numerical-analytical modeling of the formation of the probability density
waves and the probability current density waves outside the heterostructure
are based on the solution of the nonstationary Schrödinger equation describing
the scattering of a Gaussian wave packet on a model structure formed by three
tunnel-transparent dielectric films, modeled by $\delta$-barriers of the same
power, separated by thin conducting or vacuum layers of nanometer thickness.
This simplified model made it possible to implement the numerical calculations
and to estimate the frequencies, wavelengths, and velocities of such waves, as
well as the amplitudes of the oscillations of the probability density and
current at a given intensity of the exciting packet and power of the potential
barriers.
The characteristics of the generated waves depend strongly on the parameters
of the heterostructure: by varying them, one can change the energies,
difference frequencies, and lifetimes of the quasi-stationary doublet states.
For layer thicknesses of $1-10^{2}$ nm and barrier heights of $0.5-2.5$ eV, it
is possible to obtain lifetimes of the quasi-stationary states of
$10^{-2}-3\cdot 10^{2}$ ps, difference frequencies of the doublet and of the
radiated probability and current density waves of $10^{11}-10^{14}$ Hz, and
wavelengths of these waves of $10-10^{3}$ nm. The process of emission of
electron waves can be repeated or
even amplified if a periodic resonant pumping of the doublet population in the
heterostructure is provided by a series of Gaussian pulses with a suitable
duty cycle, incident on the heterostructure in phase with oscillations of the
probability and current densities.
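As a quick check of these orders of magnitude, the difference frequency follows from the doublet splitting via $\nu=\Delta E/h$. The short sketch below uses illustrative splitting values (assumptions chosen to span the quoted range, not values taken from the calculations above):

```python
# Convert a doublet energy splitting into the beat frequency nu = dE/h.
# The splitting values below are illustrative assumptions, not taken from
# the paper's calculations.

H = 6.62607015e-34      # Planck constant, J*s
EV = 1.602176634e-19    # 1 eV in joules

def difference_frequency(delta_e_ev):
    """Beat (difference) frequency in Hz for a splitting given in eV."""
    return delta_e_ev * EV / H

for de in (4.1e-4, 4.1e-1):   # ~0.41 meV and ~0.41 eV
    print(f"dE = {de:.1e} eV  ->  nu = {difference_frequency(de):.2e} Hz")
```

A splitting of roughly 0.4 meV already gives $\sim 10^{11}$ Hz, and 0.4 eV gives $\sim 10^{14}$ Hz, consistent with the quoted interval.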
The simple quantum mechanical model discussed in this article makes it
possible to rigorously reveal the main regularities and estimate the
contributions of the main characteristics and singularities to the process of
excitation of waves of probability densities and currents during scattering of
wave packets on a double-well heterostructure. This enables us to study in
detail the properties of the complete system of wave functions of the
stationary scattering problem, which forms a natural basis of unperturbed
states of the zero approximation for more realistic models and methods for
describing and calculating the studied generation processes. In particular,
this refers to the models of fast photoemission in an open system, when, in
order to describe the excitation and structure of the scattered wave packet,
it is necessary to take into account the interactions of electrons with
photons and with other particles in the subsequent application of the density
matrix method for mixed quantum states.
## Appendix A
The essence of the method proposed by G.F. Drukarev Druk1951 ; Baz1969 for
analytically estimating the contributions of the singularities of integrands
of type (8) is, in short, that after substituting (1) into (8) and passing to
the variable $k=\hbar^{-1}\sqrt{2mE}$, each of the seven exponential terms of
(1) leads to an estimated integral of the form
$I=\int\limits_{0}^{\infty}{F(k)\exp(-i\beta_{t}(k-k_{S})^{2})dk},$ (18)
where $\beta_{t}=\hbar t/2m$. The quantities $k_{S}$ and $F(k)$ are different
for the seven terms of (1); they depend on $x$, $t$, and the parameters of the
problem. The functions $F(k)$ can have poles $k_{R}$ or other singularities in
the plane of the complex variable $k$, which are determined by the
singularities of $c_{E}=c(E(k))$ and of the amplitudes of the reflected and
transmitted waves. Integral (18) is usually estimated by the saddle point
method Lavr1967 ; Peis2011 .
Figure 16: Deformation of the contour of the spectral integral, within the
region of analyticity, to the line I of steepest descent crossing the saddle
point $k_{S}$; the poles $k_{R}=k_{1}$ and $k_{R}=k_{2}$ of the integrands are
bypassed along the small circles V.
For very large values of $\beta_{t}$, the main contribution to the integral is
associated with the so-called stationary point $k=k_{S}$ on the real axis,
which is a saddle point of the function
$\operatorname{Re}(-i\beta_{t}(k-k_{S})^{2})$ with respect to the variables
$\operatorname{Re}k$ and $\operatorname{Im}k$. Moreover, the line of fastest
change of this function (the line of steepest descent) is a straight line,
denoted by I, passing in the plane of the complex variable $k$ through the
point $k_{S}$ at an angle $-\pi/4$ to the real axis (Fig.16). In accordance
with the general rule, the contribution from the neighborhood of $k_{S}$ is
found by deforming the integration contour within the region of analyticity
of the integrand so that it passes through the saddle point along the line I
of steepest descent. The contribution of the saddle point is usually estimated
by the Poisson integral along the line I; in our case it is equal to
$I_{k_{S}}=F(k_{S})\sqrt{\frac{{-i\pi}}{{\beta_{t}}}}$ (19)
The contributions of the other, more distant parts of the deformed contour
(lines II and III in Fig.16) are usually small in comparison with it.
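A minimal numerical check of estimate (19): taking a smooth, pole-free $F(k)$ concentrated near $k_{S}$ (with illustrative values of $k_{S}$ and $\beta_{t}$, unrelated to the physical problem), direct quadrature of (18) agrees with $F(k_{S})\sqrt{-i\pi/\beta_{t}}$ to within a fraction of a percent at large $\beta_{t}$:

```python
import numpy as np

# Numerical check of the saddle-point estimate (19) for the integral (18),
#   I = int_0^inf F(k) exp(-i beta_t (k - k_S)^2) dk,
# with the smooth, pole-free test function F(k) = exp(-(k - k_S)^2).
# kS and beta are arbitrary illustrative values, not parameters of the paper.
kS, beta = 5.0, 200.0

k = np.linspace(0.0, 2.0 * kS, 400_001)   # fine grid; F is negligible at both ends
dk = k[1] - k[0]
F = np.exp(-(k - kS) ** 2)
integrand = F * np.exp(-1j * beta * (k - kS) ** 2)
# trapezoidal rule (the endpoint terms are ~exp(-25) and irrelevant here)
I_numeric = dk * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

# Saddle-point estimate (19): F(k_S) * sqrt(-i*pi/beta_t), with F(k_S) = 1 here.
# The principal square root carries the exp(-i*pi/4) phase of the line I.
I_saddle = np.sqrt(-1j * np.pi / beta)

rel_err = abs(I_numeric - I_saddle) / abs(I_saddle)
print(f"numeric = {I_numeric:.6f}, saddle = {I_saddle:.6f}, rel. err = {rel_err:.2%}")
```

For $\beta_{t}=200$ the relative error is a few tenths of a percent, and it shrinks further as $\beta_{t}$ grows, as the method requires.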
If, when the contour is displaced near the saddle point, a pole $k_{P}$ or a
branch point of the function $F(k)$ is encountered, it should be bypassed
along a path of type IV, V, as shown in the figure. In the case of poles, the
contributions of the sections IV cancel out, and the contribution of the small
circle V around the pole $k_{P}$ is equal to the residue at this pole
$I_{p}=\pm 2\pi
i\operatorname{Res}\\{F(k_{P})\\}\exp\left({-i\beta_{t}(k_{P}-k_{S})^{2}}\right)$ (20)
and may not be small in comparison with the contribution of the saddle point.
The plus sign is taken if the pole is located in sector II of the upper half-
plane, and the minus sign if it is in sector III of the lower half-plane, as
in Fig.16 (the pole being bypassed counterclockwise or clockwise,
respectively). Contributions of type (20) are the main ones in the ranges of
values of $t$ and $x$ that are of interest to us, describing the oscillatory-
wave behavior of the quantities $n(x,t)$ and $j(x,t)$.
## Appendix B
Let us write down analytical formulas that describe the coordinate-time
dependence of the quantities $n(x,t)$ and $j(x,t)$ in the region of validity
of expression (14), for those values of $t$ and $x$ at which the oscillatory-
wave regime of beats of the quasi-stationary states is established, so that
the main contribution comes from the pole singularities of the scattering
amplitudes and the small terms $\Psi_{0}(x,t)$, $\Psi_{n}(x,t)$,
$\Psi_{3}(x,t)$ in (14) can already be neglected.
In the regions inside each of the two wells, at $n^{\prime}d\leqslant
x\leqslant nd$, $n^{\prime}\,=n-1$, $n=1,2$, substitution of the second line
of (14) into (9) and (10) gives
$\begin{gathered}n(x,t)=\sum\limits_{p=1}^{2}{|{\kern
1.0pt}\tilde{B}_{nE_{p}}|^{2}}e^{2\left|{\tilde{k}^{\prime\prime}_{p}}\right|(x-x_{n-1})-2\left|{E^{\prime\prime}_{p}}\right|t}+\sum\limits_{p=1}^{2}{|\tilde{A}_{nE_{p}}|^{2}}e^{-2\left|{\tilde{k}^{\prime\prime}_{p}}\right|(x-x_{n-1})-2\left|{E^{\prime\prime}_{p}}\right|t}+\hfill\\\
\quad\quad\;\;+2\sum\limits_{p=1}^{2}{|{\kern
1.0pt}\tilde{A}_{nE_{p}}\tilde{B}_{nE_{p}}|}\cos\left({{\kern 1.0pt}{\kern
1.0pt}2\tilde{k}^{\prime}_{p}(x-x_{n-1})+\alpha_{np}-{\kern 1.0pt}{\kern
1.0pt}\beta_{np}}\right)e^{-2\left|{E^{\prime\prime}_{p}}\right|t}+\hfill\\\
\quad+2\left|{\tilde{A}_{nE_{1}}\tilde{A}_{nE_{2}}}\right|\cos\left({\omega{\kern
1.0pt}{\kern 1.0pt}t-{\kern 1.0pt}{\kern
1.0pt}(\tilde{k}^{\prime}_{2}-\tilde{k}^{\prime}_{1})(x-x_{n-1})+\alpha_{n1}-{\kern
1.0pt}{\kern
1.0pt}\alpha_{n2}}\right)e^{-(\left|{\tilde{k}^{\prime\prime}_{1}}\right|+\left|{\tilde{k}^{\prime\prime}_{2}}\right|)(x-x_{n-1})-(\left|{E^{\prime\prime}_{1}}\right|+\left|{E^{\prime\prime}_{2}}\right|)t}+\hfill\\\
\quad+2\left|{\tilde{B}_{nE_{1}}\tilde{B}_{nE_{2}}}\right|\cos\left({\omega{\kern
1.0pt}{\kern 1.0pt}t+{\kern 1.0pt}{\kern
1.0pt}(\tilde{k}^{\prime}_{2}-\tilde{k}^{\prime}_{1})(x-x_{n-1})+\beta_{n1}-{\kern
1.0pt}{\kern
1.0pt}\beta_{n2}}\right)e^{\,\;(\left|{\tilde{k}^{\prime\prime}_{1}}\right|+\left|{\tilde{k}^{\prime\prime}_{2}}\right|)(x-x_{n-1})-(\left|{E^{\prime\prime}_{1}}\right|+\left|{E^{\prime\prime}_{2}}\right|)t}{\kern
1.0pt}{\kern 1.0pt}\,+\hfill\\\
\quad+2\left|{\tilde{A}_{nE_{1}}\tilde{B}_{nE_{2}}}\right|\cos\left({\omega{\kern
1.0pt}{\kern 1.0pt}t+{\kern 1.0pt}{\kern
1.0pt}(\tilde{k}^{\prime}_{2}+\tilde{k}^{\prime}_{1})(x-x_{n-1})+\alpha_{n1}-{\kern
1.0pt}{\kern
1.0pt}\beta_{n2}}\right)e^{-(\left|{\tilde{k}^{\prime\prime}_{1}}\right|-\left|{\tilde{k}^{\prime\prime}_{2}}\right|)(x-x_{n-1})-(\left|{E^{\prime\prime}_{1}}\right|+\left|{E^{\prime\prime}_{2}}\right|)t}+\hfill\\\
\quad+2\left|{\tilde{A}_{nE_{2}}\tilde{B}_{nE_{1}}}\right|\cos\left({\omega{\kern
1.0pt}{\kern 1.0pt}t-{\kern 1.0pt}{\kern
1.0pt}(\tilde{k}^{\prime}_{2}+\tilde{k}^{\prime}_{1})(x-x_{n-1})+\beta_{n1}-{\kern
1.0pt}{\kern
1.0pt}\alpha_{n2}}\right)e^{\;\;(\left|{\tilde{k}^{\prime\prime}_{1}}\right|-\left|{\tilde{k}^{\prime\prime}_{2}}\right|)(x-x_{n-1})-(\left|{E^{\prime\prime}_{1}}\right|+\left|{E^{\prime\prime}_{2}}\right|)t}{\kern
1.0pt}\;+\hfill\\\ \end{gathered}$ (21)
$\begin{gathered}j(x,t)=\frac{\hbar}{m}{\kern
1.0pt}\left[{\sum\limits_{p=1}^{2}{\tilde{k}^{\prime}_{p}\left({|\tilde{A}_{nE_{p}}|^{2}e^{-2\left|{\tilde{k}^{\prime\prime}_{p}}\right|(x-n^{\prime}d)}-\;|{\kern
1.0pt}\tilde{B}_{nE_{p}}|^{2}e^{+2\left|{\tilde{k}^{\prime\prime}_{p}}\right|(x-n^{\prime}d)}}\right)\;}e^{-2\left|{E^{\prime\prime}_{p}}\right|t}-}\right.\hfill\\\
\quad\quad\quad\quad\;\;-2\sum\limits_{p=1}^{2}{k^{\prime\prime}_{p}\,|{\kern
1.0pt}\tilde{A}_{nE_{p}}\tilde{B}_{nE_{p}}|}\;\sin\left({{\kern 1.0pt}{\kern
1.0pt}2\tilde{k}^{\prime}_{p}(x-n^{\prime}d)+\;\alpha_{np}-{\kern 1.0pt}{\kern
1.0pt}\beta_{np}}\right)e^{-2\left|{E^{\prime\prime}_{p}}\right|t}+\hfill\\\
+\left({\tilde{k}^{\prime}_{2}+\tilde{k}^{\prime}_{1}}\right)\left|{\tilde{A}_{nE_{1}}\tilde{A}_{nE_{2}}}\right|\cos\left({\omega{\kern
1.0pt}{\kern 1.0pt}t-{\kern 1.0pt}{\kern
1.0pt}(\tilde{k}^{\prime}_{2}-\tilde{k}^{\prime}_{1})(x-n^{\prime}d)+\alpha_{n1}-{\kern
1.0pt}{\kern
1.0pt}\alpha_{n2}}\right)e^{-(\left|{\tilde{k}^{\prime\prime}_{1}}\right|+\left|{\tilde{k}^{\prime\prime}_{2}}\right|)(x-n^{\prime}d)-(\left|{E^{\prime\prime}_{1}}\right|+\left|{E^{\prime\prime}_{2}}\right|)t}-\hfill\\\
-\left({\tilde{k}^{\prime}_{2}+\tilde{k}^{\prime}_{1}}\right)\left|{\tilde{B}_{nE_{1}}\tilde{B}_{nE_{2}}}\right|\cos\left({\omega{\kern
1.0pt}{\kern 1.0pt}t+{\kern 1.0pt}{\kern
1.0pt}(\tilde{k}^{\prime}_{2}-\tilde{k}^{\prime}_{1})(x-n^{\prime}d)+\;\beta_{n1}-{\kern
1.0pt}{\kern
1.0pt}\beta_{n2}}\right)e^{(\left|{\tilde{k}^{\prime\prime}_{1}}\right|+\left|{\tilde{k}^{\prime\prime}_{2}}\right|)(x-n^{\prime}d)-(\left|{E^{\prime\prime}_{1}}\right|+\left|{E^{\prime\prime}_{2}}\right|)t}+\hfill\\\
+\left({\tilde{k}^{\prime}_{1}-\tilde{k}^{\prime}_{2}}\right)\left|{\tilde{A}_{nE_{1}}\tilde{B}_{nE_{2}}}\right|\cos\left({\omega{\kern
1.0pt}{\kern 1.0pt}t+{\kern 1.0pt}{\kern
1.0pt}\left({\tilde{k}^{\prime}_{2}+\tilde{k}^{\prime}_{1}}\right)(x-n^{\prime}d)+\;\alpha_{n1}-{\kern
1.0pt}{\kern
1.0pt}\beta_{n2}}\right)e^{-(\left|{\tilde{k}^{\prime\prime}_{1}}\right|-\left|{\tilde{k}^{\prime\prime}_{2}}\right|)(x-n^{\prime}d)-(\left|{E^{\prime\prime}_{1}}\right|+\left|{E^{\prime\prime}_{2}}\right|)t}-\hfill\\\
\left.{-\left({\tilde{k}^{\prime}_{1}-\tilde{k}^{\prime}_{2}}\right)\left|{\tilde{A}_{nE_{2}}\tilde{B}_{nE_{1}}}\right|\cos\left({\omega{\kern
1.0pt}{\kern 1.0pt}t-{\kern 1.0pt}{\kern
1.0pt}\left({\tilde{k}^{\prime}_{2}+\tilde{k}^{\prime}_{1}}\right)(x-n^{\prime}d)+\beta_{n1}-{\kern
1.0pt}\alpha_{n2}}\right)e^{(\left|{\tilde{k}^{\prime\prime}_{1}}\right|-\left|{\tilde{k}^{\prime\prime}_{2}}\right|)(x-n^{\prime}d)-(\left|{E^{\prime\prime}_{1}}\right|+\left|{E^{\prime\prime}_{2}}\right|)t}}\right]\hfill\\\
\end{gathered}$ (22)
In (22), we neglected small terms proportional to $k^{\prime\prime}_{p}$
everywhere except in the second line, where we wrote out one such negligible
sum just to illustrate the symmetry of the entire expression.
In the region to the left of the double well, at $x<0$, substitution of the
first line of (14) into (9) and (10) gives expressions describing damped waves
traveling to the left
$\begin{gathered}n(x,t)=\sum\limits_{p=1}^{2}{|{\kern
1.0pt}\tilde{B}_{0E_{p}}|^{2}}e^{2\left|{k^{\prime\prime}_{p}}\right|x-2\left|{E^{\prime\prime}_{p}}\right|t}+\hfill\\\
\quad\quad\quad+2\left|{\tilde{B}_{0E_{1}}\tilde{B}_{0E_{2}}}\right|\cos\left({\omega{\kern
1.0pt}{\kern 1.0pt}t+{\kern 1.0pt}{\kern
1.0pt}(k^{\prime}_{2}-k^{\prime}_{1})x+\beta_{01}-{\kern 1.0pt}{\kern
1.0pt}\beta_{02}}\right)e^{(\left|{k^{\prime\prime}_{1}}\right|+\left|{k^{\prime\prime}_{2}}\right|)x-(\left|{E^{\prime\prime}_{1}}\right|+\left|{E^{\prime\prime}_{2}}\right|)t},\hfill\\\
\end{gathered}$ (23) $\begin{gathered}j(x,t)=-\frac{\hbar}{m}{\kern
1.0pt}\left[{\sum\limits_{p=1}^{2}{k^{\prime}_{p}|\tilde{B}_{0E_{p}}|^{2}}e^{2\left|{k^{\prime\prime}_{p}}\right|x-2\left|{E^{\prime\prime}_{p}}\right|t}}\right.+\hfill\\\
\quad\quad\quad+\left.{(k^{\prime}_{1}+k^{\prime}_{2})|\tilde{B}_{0E_{1}}\tilde{B}_{0E_{2}}|\cos\left({\omega{\kern
1.0pt}{\kern 1.0pt}t+{\kern 1.0pt}{\kern
1.0pt}(k^{\prime}_{2}-k^{\prime}_{1})x+\beta_{01}-{\kern 1.0pt}{\kern
1.0pt}\beta_{02}}\right)e^{(\left|{k^{\prime\prime}_{1}}\right|+\left|{k^{\prime\prime}_{2}}\right|)x-(\left|{E^{\prime\prime}_{1}}\right|+\left|{E^{\prime\prime}_{2}}\right|)t}}\right],\quad\hfill\\\
\end{gathered}$ (24)
In the region to the right of the double well, at $x>x_{2}$, substitution of
the first line of (14) into (9) and (10) gives expressions describing damped
waves traveling to the right
$\begin{gathered}n(x,t)=\sum\limits_{p=1}^{2}{|\tilde{A}_{3E_{p}}|^{2}}e^{-2\left|{k^{\prime\prime}_{p}}\right|(x-x_{2})-2\left|{E^{\prime\prime}_{p}}\right|t}+\hfill\\\
\quad\quad+2\left|{\tilde{A}_{3E_{1}}\tilde{A}_{3E_{2}}}\right|\cos\left({\omega{\kern
1.0pt}{\kern 1.0pt}t-{\kern 1.0pt}{\kern
1.0pt}(k^{\prime}_{2}-k^{\prime}_{1})(x-x_{2})+\;\alpha_{31}-{\kern
1.0pt}{\kern
1.0pt}\alpha_{32}}\right)e^{-(\left|{k^{\prime\prime}_{1}}\right|+\left|{k^{\prime\prime}_{2}}\right|)(x-x_{2})-(\left|{E^{\prime\prime}_{1}}\right|+\left|{E^{\prime\prime}_{2}}\right|)t},\hfill\\\
\end{gathered}$ (25) $\begin{gathered}j(x,t)=\frac{\hbar}{m}{\kern
1.0pt}\left[{\sum\limits_{p=1}^{2}{k^{\prime}_{p}|\tilde{A}_{3E_{p}}|^{2}}e^{-2\left|{k^{\prime\prime}_{p}}\right|(x-x_{2})-2\left|{E^{\prime\prime}_{p}}\right|t}}\right.+\hfill\\\
\quad\quad\quad+\left.{(k^{\prime}_{1}+k^{\prime}_{2})\,\left|{\tilde{A}_{3E_{1}}\tilde{A}_{3E_{2}}}\right|\cos\left({\omega{\kern
1.0pt}{\kern 1.0pt}t-{\kern 1.0pt}{\kern
1.0pt}(k^{\prime}_{2}-k^{\prime}_{1})(x-x_{2})+\;\alpha_{31}-{\kern
1.0pt}{\kern
1.0pt}\alpha_{32}}\right)e^{-(\left|{k^{\prime\prime}_{1}}\right|+\left|{k^{\prime\prime}_{2}}\right|)(x-x_{2})-(\left|{E^{\prime\prime}_{1}}\right|+\left|{E^{\prime\prime}_{2}}\right|)t}}\right].\hfill\\\
\end{gathered}$ (26)
## References
* (1) A. Rostami, H. Hassan, and H. Baghban, Terahertz Technology (Springer-Verlag, Berlin, Heidelberg, 2011)
* (2) V.M. Axt and T. Kuhn, Rep. Prog. Phys. 67, 433 (2004)
* (3) M. F. Ciappina, J. A. Pérez-Hernández, A. S. Landsman et al., Rep. Prog. Phys. 80, No. 5 (2017)
* (4) R. Pazourek, S. Nagele, and J. Burgdorfer, Rev. Mod. Phys. 87, 765 (2015)
* (5) S.V. Chekalin, Usp. Fiz. Nauk. 184, 672 (2014) [Sov.Phys. Usp. 57, 622 (2014)]
* (6) V.P. Zhukov and E.V. Chulkov, Usp. Fiz. Nauk. 179, 113 (2009) [Sov.Phys. Usp. 52, 105 (2009)]
* (7) F. Rossi and T. Kuhn, Rev. Mod. Phys. 74, 895 (2002)
* (8) K. Leo, J. Shah, E. O. Gobel, T. C. Damen, S. Schmitt-Rink, W. Schafer, and K. Kohler, Phys. Rev. Lett. 66, 201 (1991)
* (9) H.G. Roskos, M.C.Nuss, J. Shah, K. Leo, D.A.B. Miller, A.M. Fox, S. Schmitt-Rink, and K. Kohler, Phys. Rev. Lett. 68, 2216 (1992)
* (10) R. Romo, J. Villavicencio, and G. Garcia-Calderon, Phys. Rev. B 66, 033108 (2002)
* (11) Yu. G. Peisakhovich and A.A. Shtygashev, Phys. Rev. B 77, 075326 (2008)
* (12) Yu. G. Peisakhovich and A.A. Shtygashev, Phys. Rev. B 77, 075327 (2008)
* (13) G. Garcia-Calderon, R. Romo, and J. Villavicencio, Phys. Rev. A 79, 052121 (2009)
* (14) S. Cordero, G. Garcia-Calderon, R. Romo, and J. Villavicencio, Phys. Rev. A 84, 042118 (2011)
* (15) S. L. Konsek and T.P. Pearsall, Phys. Rev. B 67, 045306 (2003)
* (16) R. G. Winter, Phys. Rev. 123, 1503 (1961)
* (17) A. del Campo, G. Garcia-Calderon, and J. Muga, Phys. Rep. 476, 1 (2009)
* (18) G. Garcia-Calderon and A. Rubio, Phys. Rev. A 55, 3361 (1997)
* (19) G. Garcia-Calderon, I. Maldonado, and J. Villavicencio, Phys. Rev. A 76, 012103 (2007)
* (20) G. Garcia-Calderon, I. Maldonado, and J. Villavicencio, Phys. Rev. A 88, 052114 (2013)
* (21) G. Garcia-Calderon, J. Villavicencio, A. Hernandez-Maldonado, and R. Romo, Phys. Rev. A 94, 022103 (2016)
* (22) G.F. Drukarev, Zh. Eksp. Teor. Fiz. 21, 59 (1951)
* (23) A. I. Baz, Ya. B. Zeldovich, and A. M. Perelomov, Scattering, Reactions, and Decays in Nonrelativistic Quantum Mechanics (Nauka, Moscow, 1971; IPST, Jerusalem, 1969)
* (24) L. D. Landau and E. M. Lifshitz, Quantum Mechanics. Non-Relativistic Theory. (Oxford:Pergamon Press, 1977)
* (25) M. A. Lavrentiev and B. V. Shabat, Methods of the Theory of Functions of a Complex Variable (Nauka, Moscow, 1988); Methoden der komplexen Funktionentheorie (Deutsch. Verlag Wissenschaft, 1967)
* (26) Yu. G. Peisakhovich and A.A. Shtygashev, J. Appl. Phys. 110, 053904 (2011)
|
# End-to-End Autoregressive Retrieval via Bootstrapping
for Smart Reply Systems
Benjamin Towle1, Ke Zhou1,2
1University of Nottingham
2Nokia Bell Labs
{benjamin.towle<EMAIL_ADDRESS>
###### Abstract
Reply suggestion systems represent a staple component of many instant
messaging and email systems. However, the requirement to produce sets of
replies, rather than individual replies, makes the task poorly suited for out-
of-the-box retrieval architectures, which only consider individual message-
reply similarity. As a result, these systems often rely on additional post-
processing modules to diversify the outputs. However, these approaches are
ultimately bottlenecked by the performance of the initial retriever, which in
practice struggles to present a sufficiently diverse range of options to the
downstream diversification module, leading to the suggestions being less
relevant to the user. In this paper, we consider a novel approach that
radically simplifies this pipeline through an autoregressive text-to-text
retrieval model, that learns the smart reply task end-to-end from a dataset of
(message, reply set) pairs obtained via bootstrapping. Empirical results show
this method consistently outperforms a range of state-of-the-art baselines
across three datasets, corresponding to a 5.1%-17.9% improvement in relevance,
and a 0.5%-63.1% improvement in diversity compared to the best baseline
approach. We make our code publicly
available.111https://github.com/BenjaminTowle/STAR 222Paper accepted to
FINDINGS-EMNLP 2023.
## 1 Introduction
Figure 1: Previous methods [A] compared to our approach, STAR [B]. The example
displayed is taken from the DailyDialog Test set, and compares the predictions
of STAR with SimSR (Towle and Zhou, 2023), the next best method. Our method’s
suggestions present a diverse range of topics/intents to drive the
conversation.
Reply suggestion, or smart reply (SR), systems are a staple component of many
commercial applications such as Gmail, Skype, Outlook, Microsoft Teams,
LinkedIn and Facebook Messenger. They help the user process chats and emails
quicker by offering a set of canned replies which can be clicked without
requiring manual typing. However, dialogue is known to be a one-to-many
problem (Zhao et al., 2017; Towle and Zhou, 2022) – namely, for any given
message, there are multiple possible replies. To reflect this uncertainty,
systems should present a diverse set of options to the user. For instance,
given the message How are you?, an SR system could suggest: {I’m good; Ok; Not
great}. As a result, the quality of a given reply depends not only on the
message, but also on the other replies in the reply set.
Several prior works explore solutions to this problem such as removing near
duplicates, penalising inter-reply similarity Deb et al. (2019), clustering by
intent (Henderson et al., 2017; Weng et al., 2019), learning latent variables
(Deb et al., 2019, 2021), or model-based simulation (Towle and Zhou, 2023).
However, these methods share a common design choice (Figure 1A): (1) a
retrieval-based Matching model, which has learned a shared embedding space
between messages and replies, returns a shortlist of top scoring replies; (2)
this shortlist is refined through some diversification procedure to obtain the
final reply set.
Unfortunately, this assumes that the initial shortlist contains at least one
good reply set. In practice, we find Matching models often search myopically,
only retrieving candidates that are very similar to one another (Figure 1A).
Thus, the chosen reply set often fails to reflect a diverse range of user
intents, while latency constraints make more sophisticated diversification
techniques or larger shortlists prohibitive (Deb et al., 2019).
An intuitive, but – to the best of our knowledge – unexplored, solution to
this problem is to conduct the retrieval autoregressively, with each reply
conditioned on both the initial message and the previous replies in the set.
Unfortunately, this approach encounters a second problem, namely, the lack of
any datasets containing (message, reply set) pairs (Towle and Zhou, 2023). In
practice, SR systems are trained on individual (message, reply) pairs obtained
from conversation datasets, while the task of presenting multiple diverse
replies to the user is outsourced to a separate diversification module.
To meet this dual need, we present both (i) a bootstrapping method for
creating a high-quality dataset of (message, reply sets) and (ii) a novel
autoregressive retrieval model which predicts sequences of replies. For
solving (i), we observe how model-based planning algorithms have been known to
serve as a powerful policy improvement operator Silver et al. (2017);
Schrittwieser et al. (2019), including in several NLP systems Jang et al.
(2020, 2021). Specifically, the outputs of a model-based planning algorithm
can be used to bootstrap an SR system. Further, by conducting this planning
offline we are able to leverage two key advantages: (1) the system is free of
the latency constraints of online inference, and therefore can increase the
search space coverage of the planning algorithm; (2) the system can leverage
information that would not be available during inference, such as the ground-
truth reply, to further guide the search process. For (ii) we unify both steps
of the standard SR pipeline into a single end-to-end model, which mitigates
the myopic search, and allows the model to learn to diversify its predictions
in a principled way through gradient-based learning. To this end, we present
STAR (Suggested replies with T5 and Autoregressive Retrieval) (Figure 1B). At
a high level, STAR is a text-to-text model trained to output sequences of
replies, where each reply is conditioned both on the initial message and the
previous replies in the sequence. Concretely, we instantiate our method with
the T5 pretrained model (Raffel et al., 2020). We expand T5’s vocabulary by
treating each reply in the candidate pool as a novel token, and demonstrate a
simple-yet-effective technique for initialising the new token embeddings,
which leverages the model’s existing semantic priors. Notably, by treating
each reply as a token, we limit the number of autoregressive decoding steps
required, keeping the model’s efficiency comparable to other retrieval-based
methods.
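One scheme consistent with the description above (a sketch under that assumption, not necessarily the paper's exact technique) is to initialise each new reply-token embedding as the mean of the embeddings of the reply's constituent subword tokens, reusing the pretrained model's semantic priors. The vocabulary and vectors below are invented for illustration:

```python
import numpy as np

# Toy sketch: extend an embedding matrix with one row per candidate reply,
# initialising each new row as the mean of the embeddings of the reply's
# existing subword tokens. The vocabulary, dimensions, and vectors are
# stand-ins, not the paper's actual model.
rng = np.random.default_rng(0)
vocab = {"i'm": 0, "good": 1, "ok": 2, "not": 3, "great": 4}
emb = rng.normal(size=(len(vocab), 8))           # pretrained subword embeddings

replies = ["i'm good", "ok", "not great"]        # candidate reply pool

new_rows = np.stack([
    emb[[vocab[w] for w in reply.split()]].mean(axis=0)
    for reply in replies
])
emb = np.vstack([emb, new_rows])                 # rows 5..7 are reply tokens

# Each reply now has a single token id, so decoding one reply costs one step.
reply_token_ids = {r: len(vocab) + i for i, r in enumerate(replies)}
print(emb.shape, reply_token_ids["ok"])
```

In a real model the same idea applies to the resized input and output embedding matrices, so that a freshly added reply token starts near the semantic region its words already occupy.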
Empirically, we evaluate our approach on three benchmarks: Reddit (Zhang et
al., 2021), which is the only publicly-available SR benchmark, as well as
PersonaChat (Zhang et al., 2018) and DailyDialog (Li et al., 2017) which are
both widely-used in dialogue research more broadly (Zhang et al., 2019; Roller
et al., 2020, inter alia), and share a similar conversational style with SR
apps. We demonstrate superior performance over state-of-the-art baselines
across all datasets, corresponding to a 5.1%-17.9% improvement in relevance,
and a 0.5%-63.1% improvement in diversity compared to the best baseline
approach. We further show comparable efficiency to previous methods, and
perform a range of ablations to motivate our design choices.
In summary, our key contributions are as follows: (1) an autoregressive
retrieval architecture for sequentially predicting suggested replies; (2) a
bootstrapping framework for generating high-quality data of (message, reply
set) pairs; (3) detailed analysis of model behaviour and performance including
a case study and ablation of key components.
## 2 Related Work
##### Smart reply
The proprietary nature of data from email and chat applications has led
several previous works to use publicly-available dialogue datasets (Zhang et
al., 2021; Deb et al., 2021; Towle and Zhou, 2023) to benchmark SR methods,
due to their analogous conversational nature. While early SR systems used
generative models (Kannan et al., 2016), current production systems favour
retrieval methods due to their greater controllability of outputs and superior
latency (Deb et al., 2019). Increasing the diversity of reply suggestions is a
key focus of previous work, which has been attempted by: (1) mapping replies
to discrete intents / topics Kannan et al. (2016); Chakravarthi and Pasternack
(2017); Weng et al. (2019); (2) re-weighting replies according to their
similarity with other replies in the set (Carbonell and Goldstein-Stewart,
1998; Deb et al., 2019); (3) learning continuous latent variables to generate
multiple queries (Zhao et al., 2017; Deb et al., 2019); (4) using model-based
simulation to iteratively search and evaluate the relevance of candidate reply
sets (Towle and Zhou, 2023). Our proposed method differs from all of these
approaches in that our model learns to account for the interdependencies
between replies through end-to-end backpropagation.
##### Autoregressive retrieval
Integrating neural retrieval into the well-established paradigm of text-to-
text models is of growing interest. Earlier work focuses on outputting a
document ID given a query (Tay et al., 2022). Further work has extended this
by considering alternate ways of representing the document IDs, such as
through unique substrings (Bevilacqua et al., 2022). Another line of work has
used autoregressive retrieval for the entity linking task (Cao et al., 2021a,
b, c). There, the motivation is to reduce the large number of entities by
relying on the text-to-text model’s pre-existing vocabulary, rather than
having to retrieve embeddings from a memory-intensive dense index. Our
proposed method differs considerably from these previous works both in
instantiation and motivation. Instantiation-wise, we generate multiple replies
– critical to making this possible is the novel bootstrapping technique for
creating the dataset of (message, reply set) pairs to train on. Motivation-
wise, our goal is to be able to condition each reply on both the input message
and previous replies in the set, enabling the model to learn to predict
sequences of replies in a differentiable way.
##### Bootstrapping
The idea of bootstrapping training data from limited resources has received
significant recent interest in NLP, given the newly demonstrated few / zero-
shot capabilities of many large language models (Brown et al., 2020). It has
seen usage in few-shot text classification (Schick and Schütze, 2021a),
semantic similarity (Schick and Schütze, 2021b), tool-usage (Schick et al.,
2023), retrieval (Izacard and Grave, 2021), sequence generation (He et al.,
2020), and instruction-tuning (Honovich et al., 2023; Wang et al., 2023; Taori
et al., 2023), amongst others. These techniques can also be seen as a form of
knowledge distillation (Hinton et al., 2015), except that the training
typically involves predicting the exact token targets, rather than using the
soft probabilities of a teacher model. Although sometimes these techniques are
used as an addition to supervised learning (He et al., 2020), in our case
there are no datasets containing the ideal reply sets to suggest to the user.
Instead, we must bootstrap this in a more unsupervised way, by transforming a
dataset of (message, reply) pairs into a dataset of (message, reply set)
pairs.
## 3 Methodology
In this section, we first describe the model-based planning process used to
obtain the bootstrapped dataset of (message, reply set) pairs (Section 3.1).
Then, we show how the STAR architecture can be trained on this dataset
(Section 3.2).
### 3.1 Offline Dataset Creation
Algorithm 1 Offline Dataset Creation. We use $N$=100, $M$=100, $\alpha$=0.75
and $\lambda$=0.05 as our default setting.
Input Matching model $\Phi$, message $x$, precomputed reply vectors
$\\{\mathbf{y_{r}}\\}^{R}$, number of candidates $N$, number of simulations
$M$, final reply set size $K$, query augmentation coefficient $\alpha$,
redundancy penalty $\lambda$.
Output reply set $Y_{K}$
$\mathbf{x},\mathbf{y}\leftarrow\Phi(x),\Phi(y)$
$\tilde{\mathbf{x}}\leftarrow\alpha\mathbf{x}+(1-\alpha)\mathbf{y}$
$\triangleright$ query augmentation
$Y_{N}\leftarrow\operatorname*{\textit{N}-argmax}\limits_{r}(\tilde{\mathbf{x}}\cdot\mathbf{y_{r}})$
$Y_{M}\leftarrow\operatorname*{\textit{M}-argmax}\limits_{r}(\tilde{\mathbf{x}}\cdot\mathbf{y_{r}})$
$q(y_{m}|x)\propto\exp{\tilde{\mathbf{x}}\cdot\mathbf{y_{m}}}$
$\triangleright$ softmax over top-M scores
$Y_{G}\leftarrow\emptyset$
for $k\leftarrow 0$ to $K$ do
$y_{k}\leftarrow\operatorname*{argmax}\limits_{n}\sum\limits_{m}\limits^{M}f(Y_{G}^{n},y_{m})q(y_{m}|x)-\lambda
f(Y_{G},y_{n})$
$Y_{G}\leftarrow Y_{G}\cup y_{k}$
end for
$Y_{K}\leftarrow Y_{G}$
return $Y_{K}$
Our goal is to transform a dialogue dataset $\mathcal{D}=\\{(x,y)\\}$ of
(message, reply) tuples into a dataset $\mathcal{D}^{*}=\\{(x,Y)\\}$, where $Y$
is the set of replies $\\{y_{k}\\}^{K}$ to be presented to the user. Algorithm
1 summarises this process. While our method is general to any arbitrary
planning algorithm, we choose to instantiate our approach with a modified
version of SimSR (Towle and Zhou, 2023), a recently released publicly
available state-of-the-art SR method, that employs model-based simulation to
predict reply sets. As the original algorithm was designed for online
inference, we make several changes to benefit the offline nature of our
version, and detail the full implementation below.
The initial retrieval is conducted by a Matching model $\Phi$ that separately
encodes messages and replies into a shared latent space. Given an encoded
message $\mathbf{x}=\Phi(x)$, it retrieves the top $N$ candidates from a pool
of pre-computed reply vectors $\mathbf{Y_{R}}=\\{\mathbf{y_{r}}\\}^{R}$ by
combining their dot product similarity with a pre-computed language-model bias
– a standard component of SR systems to downweight overly specific replies
(Deb et al., 2019).
$Y_{N}=\operatorname*{\textit{N}-argmax}_{r}(\mathbf{x}\cdot\mathbf{y_{r}}+\beta\textsc{LM}(y_{r}))$
(1)
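In vectorised form, the shortlist selection of Eq. (1) reduces to a matrix-vector product plus the bias term, followed by a top-$N$ selection. A minimal numpy sketch, with random stand-ins for the encoder outputs, LM scores, and $\beta$, might look like:

```python
import numpy as np

# Top-N retrieval of Eq. (1): dot-product scores against precomputed reply
# vectors plus a language-model bias. All vectors, the bias, and beta are
# random stand-ins for illustration, not the paper's trained model.
rng = np.random.default_rng(0)
R, dim, N, beta = 1000, 64, 100, 0.5

x = rng.normal(size=dim)                # encoded message, Phi(x)
Y = rng.normal(size=(R, dim))           # precomputed reply vectors y_r
lm_bias = rng.normal(size=R)            # precomputed LM scores LM(y_r)

scores = Y @ x + beta * lm_bias
# argpartition finds the top-N in O(R); a final sort orders the shortlist.
top_n = np.argpartition(-scores, N)[:N]
top_n = top_n[np.argsort(-scores[top_n])]
print(top_n[:5], scores[top_n[0]])
```

Because the reply vectors and LM scores are precomputed, only the single matrix product depends on the incoming message, which is what keeps this stage cheap enough for online serving.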
We then output the $K$-tuple $Y_{i}\in$ $Y_{N}\choose K$ that has the highest
expected similarity with the human reply, according to some similarity
function $f(\cdot,\cdot)$.
$\operatorname*{argmax}_{i}\mathbb{E}_{y\sim
p(\cdot|x)}\Bigl{[}f(Y_{i},y)\Bigr{]}$ (2)
Given the objective in SR is for at least one of the replies to be relevant,
the similarity function is defined as a maximum over the sampled reply and
each of the replies in the reply set, using term-level F1-score:
$\max\limits_{k}\textsc{F1}(y_{k},y)$.
We assume $y$ is sampled from the ground-truth human distribution
$p(\cdot|x)$. As we do not have access to the true human distribution in
practice, we instead use the same Matching model $q$ as a proxy for this,
given it is trained on (message, reply) pairs. We then approximate the
expectation by marginalising over the top-$M$ most likely replies:
$\approx\operatorname*{argmax}_{i}\sum_{m}^{M}f(Y_{i},y_{m})q(y_{m}|x)$ (3)
In practice, it is intractable to evaluate every possible reply tuple, due to
their combinatorial scaling. We therefore approximate this by greedily
constructing the reply set one reply at a time. Formally, let $Y_{G}$ be the
set of currently selected replies, such that initially $Y_{G}=\emptyset$.
Then, for each of $y_{n}\in Y_{N}$, we compute the expected similarity for the
union of $Y_{G}$ and $y_{n}$, termed $Y_{G}^{n}=Y_{G}\cup y_{n}$ for brevity:
$\sum_{m}^{M}f(Y_{G}^{n},y_{m})q(y_{m}|x)$ (4)
We repeat this process for $K$ timesteps, each time appending the highest
scoring reply to $Y_{G}$, i.e. until $|Y_{G}|=K$. Note that this greedy search
process implicitly canonicalises the order of the replies, as selecting
replies in this way causes them to be roughly ordered by individual message-
reply relevance.
#### 3.1.1 Adjustments
##### Scaling $N$ and $M$
The original SimSR algorithm was used only in an online setting (Towle and
Zhou, 2023). Therefore, the size of the search parameters $N$ (number of
replies in the shortlist) and $M$ (number of simulated user replies) is kept
low (15 and 25 respectively in the original paper). However, as we only need
to run this model offline to obtain the dataset, we find setting $N$ and $M$
to much larger values improves relevance (we use 100 for both), enabling both
a broader search (i.e. by increasing $N$) and a more accurate similarity
function (i.e. by increasing $M$).
##### Redundancy penalty
Early testing showed that scaling the search parameters reduced diversity. We
therefore introduce a redundancy penalty, which penalises the model for
selecting replies that are similar to replies already in the set $Y_{G}$. This
is analogous to the inter-document similarity penalty used in the maximum
marginal relevance IR (information retrieval) technique (Carbonell and
Goldstein-Stewart, 1998).
$\sum_{m}^{M}f(Y_{G}^{n},y_{m})q(y_{m}|x)-\lambda f(Y_{G},y_{n})$ (5)
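Combining the greedy loop with the redundancy penalty of Equation 5, the selection step can be sketched as follows (a simplification with our own names; tie-breaking and batching in the real implementation may differ):

```python
def greedy_reply_set(candidates, samples, q, f, K, lam=0.5):
    """Greedily build a K-reply set maximising expected similarity (Eq. 4)
    minus the redundancy penalty (Eq. 5).

    candidates: the shortlist Y_N (assumes len(candidates) >= K)
    samples:    the top-M simulated user replies y_m
    q:          dict mapping each sample y_m to its probability q(y_m | x)
    f:          set-level similarity function f(Y, y)
    """
    selected = []
    for _ in range(K):
        best, best_score = None, float("-inf")
        for y_n in candidates:
            if y_n in selected:
                continue
            trial = selected + [y_n]
            score = sum(f(trial, y_m) * q[y_m] for y_m in samples)
            if selected:  # penalise overlap with already-chosen replies
                score -= lam * f(selected, y_n)
            if score > best_score:
                best, best_score = y_n, score
        selected.append(best)
    return selected
```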
##### Query augmentation
Unlike during online inference, we also have access to the ground-truth reply
$y$ when constructing the dataset. Previous work has found that models obtain
greater representational capabilities when given access to posterior
information (Paranjape et al., 2022; Towle and Zhou, 2022). We therefore use
an augmented query to retrieve with the Matching model. This is obtained by
interpolating between the message and ground-truth reply embeddings. This
biases the model’s predictions towards the observed ground-truth in the
dataset, while still allowing it to benefit from its own learned distribution.
$\mathbf{\tilde{x}}=\alpha\Phi(x)+(1-\alpha)\Phi(y)$ (6)
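The augmentation of Equation 6 is then a simple interpolation in embedding space (sketch; names are ours):

```python
import numpy as np

def augmented_query(phi_x, phi_y, alpha=0.5):
    # Eq. 6: bias retrieval towards the observed ground-truth reply,
    # while retaining the message signal; alpha = 1 recovers plain retrieval.
    return alpha * phi_x + (1 - alpha) * phi_y
```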
### 3.2 Proposed STAR Model
We initialise STAR with a T5-based text-to-text language model, which has
previously been shown to be effective in autoregressive retrieval (Tay et al.,
2022). While some autoregressive retrieval approaches identify their
documents/replies through unique substrings (Bevilacqua et al., 2022) or
constrained beam search (Cao et al., 2021b), we focus on approaches requiring
only a limited number of autoregressive steps, to maintain competitive
inference speeds to existing retrieval methods (Section 5.3). There are
several alternatives for this, such as treating each reply set as a unique
token, or separately training on each (message, reply) pair, but ultimately we
opted to treat each reply as a unique token in the vocabulary in order to
exploit the compositionality of reply sets (see Section 5.2 for a performance
comparison). Note that as the types of replies used in smart
reply are usually quite short and concise, e.g. ‘how are you’, ‘I’m fine
thanks’, ‘yes, that’s right’ etc., systems in deployment only need to retrieve
from a pool of around 30k replies (Deb et al., 2019) in order to provide good
coverage of possible user intents. As a result, we are able to keep the size
of the vocabulary reasonable. Thus, our new vocabulary is defined as:
$W_{tokens}\cup W_{replies}$. An obvious challenge to this approach is that by
treating each reply as a previously unseen word, it removes any semantic
priors the model might have about their meaning. To mitigate this, we employ a
bag-of-words initialisation strategy. Hence, we define the embedding of the
$t$-th reply $E(y_{t})$ as the average over the embeddings of the individual
words $w_{n}\in y_{t}$.
$E(y_{t})=\frac{1}{N}\sum_{n}^{N}E(w_{n})$ (7)
Intuitively, this ensures that the initial embeddings are close to the word
embeddings of the original vocabulary, while also capturing some of the
underlying semantics of the reply. We allow the weights to update during fine-
tuning. Note that for T5 the output and input embedding layers share weights,
and therefore this approach is used to initialise both layers. We train the
model using cross-entropy loss to predict the next reply given the current
sequence of replies and messages:
$\mathcal{L}_{NLL}=-\sum_{k}^{K}\log p(y_{k}|x,y_{0},...,y_{k-1})$ (8)
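The bag-of-words initialisation of Equation 7 can be sketched with a plain embedding matrix (our own helper; in the actual model this would write the new rows of the T5 shared input/output embedding layer):

```python
import numpy as np

def init_reply_embeddings(word_embeds, replies, word_to_id):
    """Eq. 7: each new reply token starts at the mean of its word embeddings.

    word_embeds: (V, d) original word-embedding matrix
    replies:     reply strings, in the order their tokens were added
    word_to_id:  maps a word to its row in word_embeds
    Returns a (len(replies), d) matrix of reply-token embeddings.
    """
    rows = []
    for reply in replies:
        ids = [word_to_id[w] for w in reply.split()]
        rows.append(word_embeds[ids].mean(axis=0))
    return np.stack(rows)
```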
## 4 Experimental Setup
### 4.1 Baselines
Previous work has largely been closed-source and is therefore unavailable for
direct comparison (Henderson et al., 2017; Weng et al., 2019; Deb et al.,
2019). With the exception of SimSR, which has publicly available code
(https://github.com/BenjaminTowle/SimSR), we re-implement a variety of
methods that cover the broad range of previous techniques. Due to its
comparable size, all baselines apart from Seq2Seq are initialised with
DistilBERT as the encoder backbone. These are summarised as follows:
##### Seq2Seq
is a generative encoder-decoder. While current production systems and the
majority of related works use only retrieval models (Deb et al., 2019; Towle
and Zhou, 2023), at least one related work includes a standard generative
transformer as a baseline (Zhang et al., 2021), which we follow here. For
maximum comparability with our method, we use the same t5-small model as a
backbone. For each message, we sample $K$ responses independently.
##### Matching
represents the out-of-the-box encoder with no additional diversification
strategy and was used as a baseline method by Zhang et al. (2021). It simply
selects the top $K$ responses according to individual message-reply scores.
##### Matching-Topic
uses an out-of-the-box topic classifier to ensure no two replies share the
same topic, similar to previous work (Henderson et al., 2017; Weng et al.,
2019). The classifier is trained on Twitter (Antypas et al., 2022), due to
their comparable short-form open-domain chat conversations.
##### Maximum Marginal Relevance (MMR)
(Carbonell and Goldstein-Stewart, 1998) is originally an IR technique, used in
several previous SR works (Deb et al., 2019; Towle and Zhou, 2023), which re-
weights reply scores as a linear combination of their message-reply and inter-
reply similarity.
##### MCVAE
(Deb et al., 2019) is a conditional variational autoencoder (Zhao et al.,
2017) which learns to generate multiple query vectors from a single message
embedding, representing the multiple possible reply intents. Candidates are
scored via a voting process, whereby the $K$ most-selected replies are chosen.
##### SimSR
(Towle and Zhou, 2023) uses an iterative search and evaluation process to
select possible reply sets and score them according to their expected
similarity from a learned world model, which serves as a proxy for the user.
To ensure comparability of SimSR with our method and the other baselines, we
include the language-model bias in the scoring process (Equation 1), and also
deduplicate the candidate pool (both changes lead to consistently improved
accuracy and diversity across all datasets compared to the original paper).
### 4.2 Datasets
Dataset | Train | Valid | Test | $\mathbf{|Y_{R}|}$
---|---|---|---|---
Reddit | 50k | 5k | 5k | 48k
PersonaChat | 66k | 8k | 8k | 64k
DailyDialog | 76k | 7k | 7k | 62k
Table 1: Number of samples in the Train, Validation, Test sets and Candidate
pool in the three datasets for evaluation. The Candidate pool comprises the
Train set with duplicate responses removed.
We evaluate our proposed method across three datasets, summarised in Table 1.
Below, we describe the datasets in more detail and motivate their inclusion.
Note, other than Reddit, there are no publicly available SR datasets, due to
their commercial nature (e.g. Henderson et al. (2017); Deb et al. (2019); Weng
et al. (2019)). Therefore, we adopt several dialogue datasets, which is the
closest alternative to conversations on proprietary chat applications.
##### Reddit
(Zhang et al., 2021) was originally introduced for training multilingual SR
systems, and is the only publicly available dataset specifically intended for
SR purposes. As the original dataset is very large, we follow Towle and Zhou
(2023) and use the reduced version of the dataset. Note, this version only
contains English, as our aim is limited to the monolingual setting. Due to the
organic nature of the dataset, conversations cover a very broad range of
topics.
##### PersonaChat
(Zhang et al., 2018) is a crowdworker-sourced dataset comprising persona-
grounded conversations, in which each speaker is assigned a persona comprising
a few short sentences. Following previous methods (Humeau et al., 2020), we
concatenate the persona to the beginning of the message. The participants are
instructed to chat naturally and to try to get to know one another.
##### DailyDialog
(Li et al., 2017) is a dataset created from English language learning websites
and consists of a variety of high-quality dialogues in everyday scenarios. The
dataset differs from the former two in that the conversations often involve
real-life scenarios, such as asking for directions, and therefore captures a
different variety of conversational skills.
### 4.3 Metrics
We evaluate our method on the same weighted ROUGE ensemble as previous methods
(Lin, 2004; Deb et al., 2019, 2021), which is known to correlate well with
click-through rate (Zhang et al., 2021):
$\frac{\textsc{rouge-1}}{6}+\frac{\textsc{rouge-2}}{3}+\frac{\textsc{rouge-3}}{2}$
(9)
As the goal of SR systems is to ensure that at least one of the suggested
replies is relevant to the user, we only record the maximum ROUGE score across
the $K=3$ suggested replies. We also evaluate the model on Self-ROUGE
(Celikyilmaz et al., 2020), an unreferenced metric that measures the internal
dissimilarity (i.e. diversity) within the reply set by treating one reply as
the prediction and the remaining replies as the references. Note that a lower
Self-ROUGE score indicates more diversity.
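A simplified sketch of the metric, with plain n-gram F1 standing in for ROUGE-N (the official ROUGE implementation applies extra preprocessing such as stemming), combined with the weights of Equation 9 and the max over the reply set:

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(pred, ref, n):
    """N-gram F1 -- a simplified stand-in for ROUGE-N."""
    p, r = ngrams(pred.split(), n), ngrams(ref.split(), n)
    overlap = sum((p & r).values())
    if overlap == 0:
        return 0.0
    prec, rec = overlap / sum(p.values()), overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)

def weighted_rouge(pred, ref):
    """Eq. 9 ensemble: ROUGE-1/6 + ROUGE-2/3 + ROUGE-3/2."""
    return (rouge_n(pred, ref, 1) / 6
            + rouge_n(pred, ref, 2) / 3
            + rouge_n(pred, ref, 3) / 2)

def set_score(reply_set, ref):
    """SR objective: record only the best score across the K suggestions."""
    return max(weighted_rouge(y, ref) for y in reply_set)
```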
### 4.4 Inference
For inference, we use the entire training set as the candidate pool for each
respective dataset, with deduplication to remove exact matches. For STAR, we
greedily decode the next reply token until $K$ tokens have been decoded. Note,
we only allow the model to output replies represented in the bootstrapped
dataset, and also block non-replies, i.e. words from the original vocabulary,
from being predicted.
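The constrained decoding loop can be sketched as follows (logits_fn abstracts the T5 forward pass; blocking already-chosen replies is our assumption, since a suggested reply set contains distinct replies):

```python
import numpy as np

def decode_reply_set(logits_fn, reply_ids, K):
    """Greedy decoding over reply tokens only: at each of K steps, mask out
    word tokens and already-chosen replies, then take the argmax.

    logits_fn: maps the replies decoded so far to a (V,) logit vector
    reply_ids: vocabulary ids reserved for reply tokens
    """
    chosen = []
    for _ in range(K):
        logits = logits_fn(chosen)
        mask = np.full_like(logits, -np.inf)
        allowed = [i for i in reply_ids if i not in chosen]
        mask[allowed] = 0.0  # only unchosen reply tokens stay eligible
        chosen.append(int(np.argmax(logits + mask)))
    return chosen
```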
## 5 Experimental Results
We focus our efforts on answering the following Research Questions:
$\mathbf{(RQ_{1})}$ How does STAR compare to existing state-of-the-art
methods? (Section 5.1, 5.4); $\mathbf{(RQ_{2})}$ Which components of the data
collection algorithm and fine-tuning have the largest impact on STAR’s
performance? (Section 5.2); $\mathbf{(RQ_{3})}$ How efficient is STAR in
inference? (Section 5.3)
### 5.1 Main Results
Table 2 compares the performance of different SR systems across the Reddit,
PersonaChat and DailyDialog datasets. In terms of relevance (ROUGE), STAR
shows an especially large improvement in Reddit (+17.9%) and DailyDialog
(+15.8%). We hypothesise the gains in PersonaChat (+5.1%) are more modest
because the replies are more easily predicted due to the persona, which is
concatenated to each message. This significantly reduces the noise during the
initial retrieval for the baselines, as they only need to retrieve the
messages relevant to that particular persona.
For diversity (Self-ROUGE), the strongest gains were found in DailyDialog
(+63.1%). For PersonaChat, STAR performs much better than the retrieval
methods, only falling behind Seq2Seq, due to its altogether noisier outputs as
evidenced by having the worst relevance score. The Reddit results were
comparatively more modest (+0.5%) – we hypothesise this is because the dataset
is altogether more noisy, and so there are relatively few similar replies in
the dataset, as shown by the Self-ROUGE scores being lower than the other two
datasets. Overall, the consistent outperformance in both relevance and
diversity metrics supports the benefits of the STAR approach.
Method | Reddit | PersonaChat | DailyDialog
---|---|---|---
ROUGE $\uparrow$ | Self-ROUGE $\downarrow$ | ROUGE $\uparrow$ | Self-ROUGE $\downarrow$ | ROUGE $\uparrow$ | Self-ROUGE $\downarrow$
Generative models | | | | | |
Seq2Seq | 2.41 | 3.43 | 6.83 | 6.88* | 4.01 | 3.91
Retrieval models | | | | | |
Matching | 1.95 | 9.42 | 7.51 | 21.47 | 6.53 | 16.65
M-Topic | 1.81 | 3.94 | 7.16 | 15.43 | 6.14 | 11.11
M-MMR | 2.20 | 4.44 | 7.81 | 14.57 | 6.13 | 8.63
M-CVAE | 2.30 | 5.02 | 7.43 | 12.21 | 6.78 | 10.49
SimSR (see note in Section 4.1) | 2.79 | 2.18 | 9.04 | 10.52 | 6.82 | 4.80
STAR | 3.29* | 2.17 | 9.50* | 7.74 | 7.90* | 1.77*
Table 2: Performance of STAR across Reddit, PersonaChat and DailyDialog Test
sets on relevance (ROUGE) and diversity (Self-ROUGE) metrics. Bold indicates
best result, underline indicates second-best. * = statistically significant
versus next best result on t-test with p-value < 0.01.
### 5.2 Ablation
Model | Reddit | PersonaChat | DailyDialog
---|---|---|---
Configuration | ROUGE $\uparrow$ | Self-ROUGE $\downarrow$ | ROUGE $\uparrow$ | Self-ROUGE $\downarrow$ | ROUGE $\uparrow$ | Self-ROUGE $\downarrow$
STAR | 3.35* | 2.27 | 8.85 | 7.48 | 8.39 | 1.81
Data Collection Ablations | | | | | |
A: No Query Augmentation | 2.94 | 2.00 | 8.99 | 6.94 | 7.24 | 2.89
B: No Redundancy Penalty | 3.06 | 4.29 | 9.03 | 17.26 | 8.98* | 5.90
STAR Training Variants | | | | | |
C: Random embeddings | 2.67 | 4.93 | 8.39 | 10.97 | 6.84 | 4.45
D: Reply sets as tokens | 2.85 | 1.59* | 8.76 | 6.81 | 7.75 | 1.57*
E: Predict replies separately | 2.20 | 26.61 | 8.07 | 30.98 | 6.43 | 20.50
Table 3: Performance of STAR on the Reddit, PersonaChat and DailyDialog
Validation sets under different model configurations. Ablations are applied
separately. Bold indicates best result, underline indicates second-best. * =
statistically significant versus next best result on t-test with p-value <
0.01.
Figure 2: Comparison of overall relevance and diversity scores across
ablations, obtained by averaging across all three datasets with equal
weighting.
In Table 3, we conduct ablations across two key axes: data collection and STAR
training. The data collection ablations serve to investigate the benefits of
the novel changes to the SimSR algorithm from Section 3.1.1. The STAR training
ablations investigate the degree to which the improvements in performance are
caused by the bootstrapped dataset or by STAR’s architecture itself; we
achieve this by considering several alternative variants of STAR.
Our data collection ablations consider two features: (A) removing the query
augmentation prevents the model from leveraging any ground truth information
during prediction; (B) removing the redundancy penalty no longer explicitly
penalises lack of diversity in predicted reply sets. For STAR training, we
consider three alternative configurations: (C) we replace the bag-of-words
embeddings with randomly initialised embeddings – this removes any priors
about the meaning of replies and forces the model to learn them tabula rasa;
(D) we treat each reply set as a unique token – this removes the compositional
element from the task, constraining the model to only predicting previously
seen reply sets, therefore testing whether the model is capable of learning to
compose novel reply sets; (E) we remove the ability to account for
interdependencies between replies, by restructuring each (message, reply set)
data point into $K$ data points of (message, reply$_{k}$), and then outputting the
top-$K$ replies during inference – this investigates whether the benefit lies
simply in the bootstrapped dataset being better suited to the SR task, rather
than in STAR’s ability to account for interdependencies between replies.
In terms of data collection ablations, we found removing the redundancy
penalty significantly reduced the diversity of predictions, although in some
cases offered slightly improved relevance; removing the query augmentation
generally led to a worse relevance/diversity trade-off. For the variants of
STAR training, we found that random embeddings consistently reduced relevance,
while also leading to less diverse predictions; reply sets as tokens led to the
most competitive variant of STAR compared to our default setup: diversity was
overall better, due to using preconstructed reply sets from the offline
planning algorithm, but this came at the trade-off of reduced flexibility from
being unable to construct novel reply sets when the context required it –
as a result, we saw a corresponding reduction in relevance. Finally,
predicting replies separately expectedly harmed both relevance and diversity,
demonstrating the importance of accounting for reply interdependencies.
In Figure 2, we further validated the individual results of our ablation by
aggregating the results across datasets (applying an equal weighting to each
dataset). This demonstrates the overall trend that the default STAR offers the
superior trade-off between relevance and diversity, while treating reply sets
as tokens offered the next best alternative. Nevertheless, we believe that
keeping individual replies as tokens – thus allowing the model to construct
reply sets dynamically – is likely to be an attractive property for deployed
systems, enabling the overall vocabulary size to remain modest.
### 5.3 Run-time Efficiency
Figure 3: Comparison of run-time efficiency between STAR and the baseline
methods. Results are calculated over the Reddit Validation set.
Beyond performance gains in relevance and diversity, a major advantage of an
autoregressive retrieval model is the ability to leverage the scalability of
GPU-based inference. Figure 3 compares the efficiency of STAR with the other
baseline methods. We use an NVIDIA GeForce RTX 3060 Ti GPU and AMD Ryzen 7
5700G with Radeon Graphics CPU, with a batch size of 32. The results show that
the methods can be broadly clustered into three groups. The slowest group is
the generative method Seq2Seq, due to needing to generate each reply word-by-
word. The middle group – SimSR, M-CVAE and M-MMR – is characterised by methods
that comprise a more involved diversification pipeline. The final and fastest
group includes STAR, M-Topic and Matching, where no additional post-hoc
diversification is required (for M-Topic the topics can be pre-computed prior
to inference).
### 5.4 Case Study
Message: | Hi , Kenny . Let’s go for a drink .
---|---
SimSR | - let’s go ! [#9]
- ok , let’s go . [#3]
- ok . let’s get something to drink . [#1]
STAR | - ok . let’s go . [#5]
- you want something to drink ? [#89]
- good idea . [#105]
Message: | Of course ! Let’s go .
SimSR | - let’s go ! [#1]
- ok , let’s go . [#5]
- all right . let’s go . [#12]
STAR | - let’s go ! [#1]
- where are we ? [#43]
- good idea ! [#85]
Table 4: Example model outputs from the DailyDialog Test set, comparing STAR
(ours) with the top-performing baseline method. Numbers in bold indicate the
ranking the reply received according to the Matching model.
Table 4 presents a case study on the DailyDialog Test set. We compare our
approach, STAR, with the top-performing baseline from Table 2, SimSR. In both
examples we consistently find STAR is able to output a broader range of
intents. Quantitatively, we consider the rank that each suggestion receives
according to the initial retrieval of the Matching model that underlies SimSR.
We see that STAR is able to perform a much more global search across the reply
space, selecting replies from within the top 100 or so ranks. This would be
difficult for the standard retrieve-and-rerank approach to emulate, given 100
is usually too large a number to efficiently rerank (Deb et al., 2019).
Qualitatively, SimSR’s suggestions converge around common phrases, e.g. ‘let’s
go’, which would be difficult to deduplicate with a heuristic rule given only
a limited number of overlapping words between the replies. Conversely, STAR is
able to represent a broader range of intents, such as replying with a question
in both examples. Further examples are provided in Appendix C.
## 6 Conclusion
We introduce STAR, an autoregressive retrieval system for SR, which is an end-
to-end text-to-text model that sequentially predicts replies conditioned on an
initial message. To train STAR, we demonstrate an approach to bootstrap a
dataset of high-quality (message, reply set) pairs, from regular dialogue
datasets containing only (message, reply) pairs. Empirically, our results show
significant improvement over existing state-of-the-art SR baselines, across
multiple datasets, corresponding to a 5.1%-17.9% improvement in relevance, and
a 0.5%-63.1% improvement in diversity compared to the best baseline approach.
Future work could extend these techniques to other set-prediction tasks: e.g.,
in IR the relevance of each document depends on the quantity of new
information it contains compared to other documents in the set. In recommender
systems, for example, tailoring a user’s news feed requires that the
news articles presented are not simply duplicates of the same story; designing
a bespoke music playlist requires songs to be unified by common themes but
also sufficiently distinct from one another to maintain the listener’s
interest. Other lines of future work include considering alternate strategies
for initialising the reply embeddings, beyond the bag-of-words initialisation
demonstrated in this paper.
## Acknowledgements
We thank the reviewers for their helpful feedback and suggestions. This work
is partly supported by the EPSRC DTP Studentship program. The opinions
expressed in this paper are the authors’, and are not necessarily
shared/endorsed by their employers and/or sponsors.
## Limitations
Although our work shows that STAR is able to absorb sufficient information
about the replies in its weights, this may become increasingly challenging
when larger numbers of replies need to be embedded. One notable instance of
this would be the multilingual setting, as many SR systems are deployed
globally. In this case, each language typically has its own candidate pool. A
naive implementation which creates separate reply vectors for each language
would incur a significant increase in model size. In this case, we hypothesise
techniques around weight-sharing between reply embeddings between languages
may be beneficial, e.g. ‘how are you’ (en) and ‘ça va’ (fr) sharing the same
vector. Further, our techniques are only demonstrated in publicly available
datasets, whereas proprietary conversations in chat and email applications may
have unique features not accounted for here (e.g. timestamps, cc and bcc
information, and file attachments). Our technique also requires a planning
algorithm to create the initial dataset. This theoretically creates an upper
bound to the overall performance of STAR, as it is limited to cloning the
behaviour of the offline planning algorithm.
## References
* Antypas et al. (2022) Dimosthenis Antypas, Asahi Ushio, Jose Camacho-Collados, Vitor Silva, Leonardo Neves, and Francesco Barbieri. 2022. Twitter topic classification. In _Proceedings of the 29th International Conference on Computational Linguistics_ , pages 3386–3400, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
* Bevilacqua et al. (2022) Michele Bevilacqua, Giuseppe Ottaviano, Patrick Lewis, Wen tau Yih, Sebastian Riedel, and Fabio Petroni. 2022. Autoregressive search engines: Generating substrings as document identifiers. In _NeurIPS_.
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In _Advances in Neural Information Processing Systems_ , volume 33, pages 1877–1901. Curran Associates, Inc.
* Cao et al. (2021a) Nicola De Cao, Wilker Aziz, and Ivan Titov. 2021a. Highly parallel autoregressive entity linking with discriminative correction. In _Conference on Empirical Methods in Natural Language Processing_.
* Cao et al. (2021b) Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021b. Autoregressive entity retrieval. In _9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021_. OpenReview.net.
* Cao et al. (2021c) Nicola De Cao, Ledell Yu Wu, Kashyap Popat, Mikel Artetxe, Naman Goyal, Mikhail Plekhanov, Luke Zettlemoyer, Nicola Cancedda, Sebastian Riedel, and Fabio Petroni. 2021c. Multilingual autoregressive entity linking. _Transactions of the Association for Computational Linguistics_ , 10:274–290.
* Carbonell and Goldstein-Stewart (1998) Jaime G. Carbonell and Jade Goldstein-Stewart. 1998. The use of mmr, diversity-based reranking for reordering documents and producing summaries. In _Annual International ACM SIGIR Conference on Research and Development in Information Retrieval_.
* Celikyilmaz et al. (2020) Asli Celikyilmaz, Elizabeth Clark, and Jianfeng Gao. 2020. Evaluation of text generation: A survey. _ArXiv_ , abs/2006.14799.
* Chakravarthi and Pasternack (2017) Nimesh Chakravarthi and Jeff Pasternack. 2017. Building smart replies for member messages. press release. https://engineering.linkedin.com/blog/2017/10/building-smart-replies-for-member-messages.
* Deb et al. (2019) Budhaditya Deb, Peter Bailey, and Milad Shokouhi. 2019. Diversifying reply suggestions using a matching-conditional variational autoencoder. In _North American Chapter of the Association for Computational Linguistics_.
* Deb et al. (2021) Budhaditya Deb, Guoqing Zheng, Milad Shokouhi, and Ahmed Hassan Awadallah. 2021. A conditional generative matching model for multi-lingual reply suggestion. In _Findings of the Association for Computational Linguistics: EMNLP 2021_ , pages 1553–1568, Punta Cana, Dominican Republic. Association for Computational Linguistics.
* He et al. (2020) Junxian He, Jiatao Gu, Jiajun Shen, and Marc’Aurelio Ranzato. 2020. Revisiting self-training for neural sequence generation. In _Proceedings of ICLR_.
* Henderson et al. (2017) Matthew L. Henderson, Rami Al-Rfou, Brian Strope, Yun-Hsuan Sung, László Lukács, Ruiqi Guo, Sanjiv Kumar, Balint Miklos, and Ray Kurzweil. 2017. Efficient natural language response suggestion for smart reply. _CoRR_ , abs/1705.00652.
* Hinton et al. (2015) Geoffrey E. Hinton, Oriol Vinyals, and Jeffrey Dean. 2015. Distilling the knowledge in a neural network. _ArXiv_ , abs/1503.02531.
* Honovich et al. (2023) Or Honovich, Thomas Scialom, Omer Levy, and Timo Schick. 2023. Unnatural instructions: Tuning language models with (almost) no human labor. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 14409–14428, Toronto, Canada. Association for Computational Linguistics.
* Humeau et al. (2020) Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2020. Poly-encoders: Architectures and pre-training strategies for fast and accurate multi-sentence scoring. In _International Conference on Learning Representations_.
* Izacard and Grave (2021) Gautier Izacard and Edouard Grave. 2021. Distilling knowledge from reader to retriever for question answering. In _International Conference on Learning Representations_.
* Jang et al. (2020) Youngsoo Jang, Jongmin Lee, and Kee-Eung Kim. 2020. Bayes-adaptive monte-carlo planning and learning for goal-oriented dialogues. In _AAAI Conference on Artificial Intelligence_.
* Jang et al. (2021) Youngsoo Jang, Seokin Seo, Jongmin Lee, and Kee-Eung Kim. 2021. Monte-carlo planning and learning with language action value estimates. In _International Conference on Learning Representations_.
* Kannan et al. (2016) Anjuli Kannan, Karol Kurach, Sujith Ravi, Tobias Kaufmann, Andrew Tomkins, Balint Miklos, Gregory S. Corrado, László Lukács, Marina Ganea, Peter Young, and Vivek Ramavajjala. 2016. Smart reply: Automated response suggestion for email. _Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_.
* Li et al. (2017) Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. Dailydialog: A manually labelled multi-turn dialogue dataset. In _International Joint Conference on Natural Language Processing_.
* Lin (2004) Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In _Annual Meeting of the Association for Computational Linguistics_.
* Loshchilov and Hutter (2019) Ilya Loshchilov and Frank Hutter. 2019. Decoupled weight decay regularization. In _International Conference on Learning Representations_.
* Paranjape et al. (2022) Ashwin Paranjape, Omar Khattab, Christopher Potts, Matei Zaharia, and Christopher D Manning. 2022. Hindsight: Posterior-guided training of retrievers for improved open-ended generation. In _International Conference on Learning Representations_.
* Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. _J. Mach. Learn. Res._ , 21(1).
* Roller et al. (2020) Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric Michael Smith, Y.-Lan Boureau, and Jason Weston. 2020. Recipes for building an open-domain chatbot. In _Conference of the European Chapter of the Association for Computational Linguistics_.
* Sanh et al. (2019) Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. _ArXiv_ , abs/1910.01108.
* Schick et al. (2023) Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. _ArXiv_ , abs/2302.04761.
* Schick and Schütze (2021a) Timo Schick and Hinrich Schütze. 2021a. Exploiting cloze-questions for few-shot text classification and natural language inference. In _Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume_ , pages 255–269, Online. Association for Computational Linguistics.
* Schick and Schütze (2021b) Timo Schick and Hinrich Schütze. 2021b. Generating datasets with pretrained language models. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 6943–6951, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Schrittwieser et al. (2019) Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, L. Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy P. Lillicrap, and David Silver. 2019. Mastering atari, go, chess and shogi by planning with a learned model. _Nature_ , 588:604–609.
* Silver et al. (2017) David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, L. Sifre, Dharshan Kumaran, Thore Graepel, Timothy P. Lillicrap, Karen Simonyan, and Demis Hassabis. 2017. Mastering chess and shogi by self-play with a general reinforcement learning algorithm. _ArXiv_ , abs/1712.01815.
* Taori et al. (2023) Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. https://github.com/tatsu-lab/stanford_alpaca.
* Tay et al. (2022) Yi Tay, Vinh Q. Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, Tal Schuster, William W. Cohen, and Donald Metzler. 2022. Transformer memory as a differentiable search index. In _Advances in Neural Information Processing Systems_.
* Towle and Zhou (2022) Benjamin Towle and Ke Zhou. 2022. Learn what is possible, then choose what is best: Disentangling one-to-many relations in language through text-based games. In _Findings of the Association for Computational Linguistics: EMNLP 2022_ , pages 4955–4965, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
* Towle and Zhou (2023) Benjamin Towle and Ke Zhou. 2023. Model-based simulation for optimising smart reply. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 12030–12043, Toronto, Canada. Association for Computational Linguistics.
* Wang et al. (2023) Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. Self-instruct: Aligning language models with self-generated instructions. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 13484–13508, Toronto, Canada. Association for Computational Linguistics.
* Weng et al. (2019) Yue Weng, Huaixiu Zheng, Franziska Bell, and Gökhan Tür. 2019. OCC: A smart reply system for efficient in-app communications. _Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_.
* Wenker (2023) Kilian Wenker. 2023. Who wrote this? how smart replies impact language and agency in the workplace. _Telematics and Informatics Reports_ , 10:100062.
* Zhang et al. (2021) Mozhi Zhang, Wei Wang, Budhaditya Deb, Guoqing Zheng, Milad Shokouhi, and Ahmed Hassan Awadallah. 2021. A dataset and baselines for multilingual reply suggestion. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 1207–1220, Online. Association for Computational Linguistics.
* Zhang et al. (2018) Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur D. Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In _Annual Meeting of the Association for Computational Linguistics_.
* Zhang et al. (2019) Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and William B. Dolan. 2019. DialoGPT: Large-scale generative pre-training for conversational response generation. In _Annual Meeting of the Association for Computational Linguistics_.
* Zhao et al. (2017) Tiancheng Zhao, Ran Zhao, and Maxine Eskénazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. In _Annual Meeting of the Association for Computational Linguistics_.
## Appendix A Ethical Considerations
Controlling the outputs of dialogue models is a central issue in AI ethics
research, particularly given the impact of recent gains in LLM capabilities.
We believe the risks are mitigated in several ways in the case of SR systems:
the replies can be vetted by humans before deployment; the replies are usually
short-form, rather than containing complex information the user may rely on;
and, ultimately, a user must select one of the options, rather than the system
replying without user oversight.
Conversely, some risks are more specific to SR systems and should be
mentioned. In particular, the suggestions presented by the system can have
subtle priming effects on user behaviour: users have been shown to write
emails with slightly more positive sentiment when shown suggested replies
(Wenker, 2023), and SR systems on the whole are known to produce messages of
more positive sentiment than the human distribution (Kannan et al., 2016). We
see this as an extension of the broader tendency of LLMs to be overly
obsequious.
## Appendix B Implementation Details
For constructing the training dataset, we use the following hyperparameters:
SimSR is initialised from the distilbert-base-uncased checkpoint (Sanh et al.,
2019). We set the search parameters to $N=100$ and $M=100$. We use a
redundancy penalty of $0.05$ and a blending alpha of $0.75$ for query
augmentation. Both parameters provided a good trade-off between relevance and
diversity in early testing, so we did not search hyperparameters further (see
Section 2 for ablations).
For training STAR, we initialise our model with the t5-small checkpoint. Note
that this version of T5 has a comparable parameter count to the baselines
which use DistilBERT (60M versus 66M). We tokenise the dataset with a maximum
message length of 64 tokens. We train our model for up to 100k steps, with a
warmup of 1k steps. In practice, the model typically converged around 20k
steps. We use the AdamW optimiser (Loshchilov and Hutter, 2019) with an
initial learning rate of 5e-4 and linear decay. We evaluate every 2k steps, by
taking the ROUGE and Self-ROUGE scores on the validation set (this uses the
ground-truth from the original dataset, not from the bootstrapped dataset),
and employ early stopping once both metrics have ceased to improve. Note that
we found these metrics to be a much more reliable stopping criterion than
cross-entropy loss, which typically converged much earlier.
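For reference, the warmup-then-linear-decay schedule described above can be sketched as follows. This is a minimal illustration of the schedule (1k warmup steps, up to 100k total steps, initial learning rate 5e-4), not code from our implementation; in practice a standard library scheduler serves the same purpose.

```python
def linear_schedule_lr(step: int,
                       base_lr: float = 5e-4,
                       warmup_steps: int = 1_000,
                       total_steps: int = 100_000) -> float:
    """Learning rate at a given optimiser step: linear warmup to base_lr
    over the first warmup_steps, then linear decay to zero at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    # Decay linearly from base_lr (end of warmup) to 0 (total_steps).
    remaining = max(total_steps - step, 0)
    return base_lr * remaining / (total_steps - warmup_steps)
```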
## Appendix C Further Case Studies
Persona: my favorite tv show is the office. / i like jazz music. / i do not drive because i live in new york. / i am jewish. / i do not eat anything sweet.
Message: i was but am now divorced
SimSR:
- i am sorry to hear that .
- oh . i am sorry to hear that .
- i am sorry to hear that . do you have any kids ?
STAR:
- oh i am sorry to hear that
- do you like music ?
- like music ? i love jazz

Persona: hey there my name is jordan and i am a veterinarian. / love to read drama books. / i love playing video games. / i am also a musician on the weekends. / i am originally from california but i live in florida.
Message: sometimes . i listen to a lot of music . do you read a lot ?
SimSR:
- hi how are you ?
- hi how are you today
- i do not like music at all
STAR:
- i listen to music do you
- no not really i like all music .
- yes i do my favorite is country
Table 5: Example model outputs from the PersonaChat Test set, comparing STAR
(ours) with the top-performing baseline method. STAR is able to capture a
broader range of intents through its end-to-end autoregressive retrieval.
Table 5 displays further examples of STAR’s predictions versus SimSR, taken
from the PersonaChat Test set.
# TCE: A Test-Based Approach to Measuring Calibration Error
Takuo Matsubara The Alan Turing Institute Newcastle University Niek Tax
Meta Platforms, Inc. Richard Mudd Meta Platforms, Inc. Ido Guy Meta
Platforms, Inc.
###### Abstract
This paper proposes a new metric to measure the calibration error of
probabilistic binary classifiers, called _test-based calibration error_ (TCE).
TCE incorporates a novel loss function based on a statistical test to examine
the extent to which model predictions differ from probabilities estimated from
data. It offers (i) a clear interpretation, (ii) a consistent scale that is
unaffected by class imbalance, and (iii) an enhanced visual representation
with respect to the standard reliability diagram. In addition, we introduce an
optimality criterion for the binning procedure of calibration error metrics
based on a minimal estimation error of the empirical probabilities. We provide
a novel computational algorithm for optimal bins under bin-size constraints.
We demonstrate properties of TCE through a range of experiments, including
multiple real-world imbalanced datasets and ImageNet 1000.
## 1 Introduction
In recent years, it has become ubiquitous to deploy complex machine learning
models in real-world production systems. Many of these systems rely on
probabilistic classifiers that predict the probability that some target
outcome occurs. For such systems, it is often crucial that their predictive
probabilities are _well-calibrated_ , meaning that the predictive probability
accurately reflects the true frequency that the target outcome occurs. In some
contexts, failures to achieve calibration can lead to negative consequences.
In applications like medical diagnoses [Topol, 2019] and autonomous driving
[Grigorescu et al., 2020], associated risks are often assessed based on model
predictions and the consequences of a misguided risk evaluation can be severe.
In online advertising auctions [Li et al., 2015], it is common to incorporate
a prediction of the probability of some outcome of interest (e.g., a click on
an advert) when calculating an advertiser’s bid.
While a number of metrics—such as log-likelihood, user-specified scoring
functions, and the area under the receiver operating characteristic (ROC)
curve—are used to assess the quality of probabilistic classifiers, it is
usually hard or even impossible to gauge whether predictions are well-
calibrated from the values of these metrics. For assessment of calibration, it
is typically necessary to use a metric that measures _calibration error_ ,
that is, a deviation between model predictions and probabilities of target
occurrences estimated from data. The importance of assessing calibration error
has been long emphasised in machine learning [Nixon et al., 2019, Minderer et
al., 2021] and in probabilistic forecasting more broadly [Dawid, 1982, Degroot
and Fienberg, 1983].
However, existing metrics of calibration error have several drawbacks that in
certain scenarios can mean that their values do not appropriately reflect true
calibration performance. In particular, we will demonstrate that values of
existing calibration error metrics have an inconsistent scale that is
influenced by the target class proportion. In applications such as fraud
detection [Abdallah et al., 2016, Tax et al., 2021] and advertising conversion
prediction [Yang and Zhai, 2022], the prevalence, i.e., the proportion of
instances belonging to the target class, is often very low. This leads to
situations where one may be unable to identify whether the values of
calibration error metrics are small due to good calibration performance or due
to the low prevalence. This is also problematic for monitoring applications
aimed at tracking the calibration performance of a model in a production
system, where the prevalence can change over time (i.e., _prior probability
shift_ [Storkey et al., 2009]), making it difficult to attribute changes in
the metric either to an actual change in calibration performance or to the
change in prevalence.
Furthermore, _binning_ of model predictions—an essential component of most
calibration error metrics [Naeini et al., 2015]—is often based on heuristics
and lacks clear design principles. For calibration error metrics, empirical
probabilities of target occurrences are typically estimated by clustering data
into several subsets based on binning of the associated model predictions. The
design of the binning scheme is a vital factor in the accurate estimation of
the empirical probabilities, yet few principles guiding the design of binning
schemes have emerged to date.
In this paper, we elaborate on the issues of existing calibration error
metrics in Section 2. We establish a simple yet novel metric that
counterbalances the issues in Section 3. Section 4 empirically demonstrates
properties of the proposed metric by experiments based on various datasets.
Related works are discussed in Section 5, followed by the conclusion in
Section 6. This paper focuses on the methodological aspects of the proposed
new metric for binary classification, while theoretical development is left
for future research. Our contributions are summarised as follows:
#### Contributions
* •
Our primary contribution is a novel calibration error metric called _test-
based calibration error_ (TCE). TCE is based on statistical hypothesis testing
and is interpretable as a percentage of model predictions that deviate
significantly from estimated empirical probabilities. TCE produces values in a
normalised, comparable range $[0,100]$ regardless of the class prevalence.
* •
We propose an explanatory visual representation of TCE called the _test-based
reliability diagram_. It carries more information than the standard
reliability diagram and facilitates a better understanding of calibration
performance (See Figure 1).
* •
We introduce an optimality criterion for bins under which optimal bins
minimise an estimation error of the empirical probabilities. We then propose a
novel algorithm to compute optimal bins approximately under the constraints of
the minimum and maximum size of each bin.
## 2 Background
In this section, we introduce the definition of _calibration_ and recap one of
the most common _calibration error_ metrics. We then outline several critical
challenges of existing calibration error metrics. The basic notation used in
this paper is introduced below.
Denote input and output spaces respectively by $\mathcal{X}$ and
$\mathcal{Y}$. We focus on probabilistic binary classification, i.e.
$\mathcal{Y}=\\{0,1\\}$, in which a probabilistic classifier
$P_{\theta}:\mathcal{X}\to[0,1]$ models a conditional probability of $Y=1$
given an input $x\in\mathcal{X}$. The data
$\mathcal{D}:=\\{x_{i},y_{i}\\}_{i=1}^{N}$ are assumed to be i.i.d.
realisations from a random variable $(X,Y)\sim\mathbb{P}$. To simplify
notation, for any data subset $\mathcal{S}\subseteq\mathcal{D}$, we denote by
$\mathcal{S}^{x}$ a set of all inputs $x$ in $\mathcal{S}$ and by
$\mathcal{S}^{y}$ a set of all outputs $y$ in $\mathcal{S}$. By “a set of
bins” or simply “bins”, we mean a set of arbitrary disjoint intervals whose
union is the unit interval $[0,1]$. For example, a set
$\\{\Delta_{b}\\}_{b=1}^{2}$ of intervals $\Delta_{1}=[0.0,0.4)$ and
$\Delta_{2}=[0.4,1.0]$ is a set of bins.
### 2.1 Calibration Error
A probabilistic classifier $P_{\theta}:\mathcal{X}\to[0,1]$ is said to be
_calibrated_ [Dawid, 1982, Bröcker, 2009] if
$\displaystyle\mathbb{P}(Y=1\mid P_{\theta}(X)=Q)=Q$ (1)
for all $Q\in[0,1]$ s.t. the conditional probability is well-defined.
Informally, this criterion implies that the model prediction coincides with
the actual probability of $Y=1$ for all inputs. Any deviation between the
actual probabilities and the model predictions in eq. 1 is often referred to
as _calibration error_ , which quantifies to what degree the classifier
$P_{\theta}$ is calibrated. The empirical computation of such a deviation
involves estimating conditional probability $\mathbb{P}(Y=1|P_{\theta}(X)=Q)$
from data. For given bins $\\{\Delta_{b}\\}_{b=1}^{B}$, define disjoint
subsets $\\{\mathcal{D}_{b}\\}_{b=1}^{B}$ of data $\mathcal{D}$ by
$\displaystyle\mathcal{D}_{b}:=\\{(x_{i},y_{i})\in\mathcal{D}\mid
P_{\theta}(x_{i})\in\Delta_{b}\\}.$ (2)
Simply put, $\mathcal{D}_{b}$ is a subset of data whose model predictions have
similar values. The conditional probability $\mathbb{P}(Y=1\mid
P_{\theta}(X)=Q)$ for any $Q\in\Delta_{b}$ can then be estimated by the
empirical mean of the labels in subset $\mathcal{D}_{b}$:
$\displaystyle\mathbb{P}(Y=1\mid
P_{\theta}(X)=Q)\approx\widehat{P}_{b}:=\frac{1}{N_{b}}\sum_{y_{i}\in\mathcal{D}_{b}^{y}}y_{i}$
(3)
where we denote by $\widehat{P}_{b}$ the estimated conditional probability in
$\mathcal{D}_{b}$ and by $N_{b}$ the sample size of $\mathcal{D}_{b}$.
One of the most common metrics to measure calibration error is _expected
calibration error_ (ECE) [Naeini et al., 2015]. ECE uses equispaced bins
$\\{\Delta_{b}\\}_{b=1}^{B}$ over $[0,1]$ for a given number $B$ and measures
an absolute difference between the averaged model predictions and the
estimated conditional probability $\widehat{P}_{b}$ within each data subset
$\mathcal{D}_{b}$. The value of ECE is defined as
$\displaystyle\text{ECE}:=\sum_{b=1}^{B}\frac{N_{b}}{N}\left|\widehat{P}_{b}-\frac{1}{N_{b}}\sum_{x_{i}\in\mathcal{D}_{b}^{x}}P_{\theta}(x_{i})\right|.$
(4)
ECE has an associated practical visual representation known as the
_reliability diagram_ [Degroot and Fienberg, 1983, Niculescu-Mizil and
Caruana, 2005], which aligns the averaged model prediction and the estimated
conditional probability in each $\mathcal{D}_{b}$ (see Figure 1). The
reliability diagram is a powerful tool to intuitively grasp the deviation
between the model and the estimated probability in ECE.
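As a concrete illustration, ECE in eq. 4 can be computed for binary classification with a short routine like the following. This is our sketch with equispaced bins, not a reference implementation; variable names mirror the notation above.

```python
def expected_calibration_error(y_true, y_prob, n_bins=10):
    """ECE of eq. (4): weighted average, over equispaced bins, of the
    absolute gap between the empirical probability of eq. (3) and the
    mean model prediction in each bin."""
    n = len(y_true)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Bins are half-open [lo, hi); the last bin is closed on the right.
        idx = [i for i, q in enumerate(y_prob)
               if lo <= q < hi or (b == n_bins - 1 and q == hi)]
        if not idx:
            continue
        p_hat = sum(y_true[i] for i in idx) / len(idx)  # empirical probability
        q_bar = sum(y_prob[i] for i in idx) / len(idx)  # mean model prediction
        ece += (len(idx) / n) * abs(p_hat - q_bar)
    return ece
```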
### 2.2 Challenges in Calibration Error
Calibration error metrics, such as ECE, are widely used in real-world
applications. There nonetheless exist several challenges that may cause a
misassessment of calibration. These problems become evident especially when a
distribution of model predictions $\\{P_{\theta}(x_{i})\\}_{i=1}^{N}$ is not
well-dispersed. This scenario often arises in imbalanced classification where
model predictions tend to be severely skewed towards either $0$ or $1$. The
following paragraphs illustrate challenges of existing calibration error
metrics, which we aim to address.
#### Challenge 1 (Scale-Dependent Interpretation)
In most calibration error metrics, the deviation between the model prediction
and the estimated probability $\widehat{P}_{b}$ in each $\mathcal{D}_{b}$ is
measured by the absolute difference as in eq. 4. However, the use of the
absolute difference can result in values that have an inconsistent scale
influenced by the class prevalence. To illustrate this problem, consider an
estimated probability $\widehat{P}_{b}$ and an averaged model prediction
denoted $\overline{Q}_{b}$ for some $b$ in eq. 4. If $\widehat{P}_{b}=0.50$
and $\overline{Q}_{b}=0.49$, their absolute difference is $0.01$. On the other
hand, if $\widehat{P}_{b}=0.01$ and $\overline{Q}_{b}=0.0001$, their absolute
difference is $0.0099$. Despite the comparison under the absolute difference
suggesting that the probability $\overline{Q}_{b}=0.0001$ with respect to
$\widehat{P}_{b}=0.01$ in the latter case is better calibrated than in the
former case, one may reasonably argue that the latter is not well-
calibrated—or at least not comparable to the former—given the stark difference
in the order of magnitude. Similarly to this illustration, the values of
existing calibration metrics built on the absolute difference can be
proportionally small whenever the scales of $\widehat{P}_{b}$ and
$\overline{Q}_{b}$ are small. This issue makes it difficult to distinguish
whether the metric values are low due to good calibration performance or due
to the small scale of the probabilities as in imbalanced classification.
#### Challenge 2 (Lack of Normalised Range)
The range of values of calibration error metrics built on absolute differences
is not normalised. The range can vary depending on the choice of bins
$\\{\Delta_{b}\\}_{b=1}^{B}$. To illustrate this problem, consider a bin
$\Delta_{b}$ for some $b$. If $\Delta_{b}=[0.4,0.6]$, the absolute difference
between $\widehat{P}_{b}$ and $\overline{Q}_{b}$ falls into a range
$[0.0,0.6]$ because $\widehat{P}_{b}$ is the estimated probability in
$[0.0,1.0]$ and the averaged model prediction $\overline{Q}_{b}$ in the bin
$\Delta_{b}$ takes the value within $\Delta_{b}$. Similarly, a different
choice of bin $\Delta_{b}$ leads to a different range of the absolute
difference. Consequently, the choice of bins $\\{\Delta_{b}\\}_{b=1}^{B}$
impacts the range of the final value of calibration error metrics that are
built on the absolute difference. To assure rigorous comparability of the
final value of a calibration error metric, it is desirable to establish a
measurement of the deviation whose value has a fixed, normalised range
independent of the choice of bins.
#### Challenge 3 (Arbitrary Choice of Bins)
An appropriate choice of bins is critical because it meaningfully impacts on
final values of calibration error metrics. Equispaced bins
$\\{\Delta_{b}\\}_{b=1}^{B}$ over $[0,1]$ for a given number $B$ are one of
the most common choices of bins in practice, as used in ECE. However,
equispaced bins can often cause a situation where a few particular bins
contain the majority of the model predictions when they are not well-dispersed
over $[0,1]$, as often happens in imbalanced classification. If some bin
$\Delta_{b}$ contains the majority of model predictions, the corresponding
estimated probability $\widehat{P}_{b}$ coincides approximately with the
empirical mean of all labels. On the other hand, estimated probabilities of
the bins other than $\Delta_{b}$ become unreliable due to the small size of
samples contained. A potential solution to this problem is to use bins that
adapt based on the dispersion of model predictions. Nixon et al. [2019]
proposed _adaptive calibration error_ (ACE) that computes the value of eq. 4
using bins $\\{\Delta_{b}\\}_{b=1}^{B}$ based on $B$-quantiles of model
predictions $\\{P_{\theta}(x_{i})\\}_{i=1}^{N}$ for given $B$. However,
questions remain regarding the optimal number $B$ of bins and the appropriate
quantile to use for each bin. To the best of our knowledge, there is no
established notion of what makes bins optimal, nor do clear design principles
for bins exist.
## 3 Calibration Error Based on Test and Optimal Bins
We propose a new calibration error metric that offers a simple yet novel
solution to the challenges outlined in Section 2.2. First, in Section 3.1, we
present a general formulation of calibration error metrics that encompasses
most metrics used in practice. This general formulation allows for a
structured understanding of the design of calibration error metrics. In
Section 3.2, we derive from the general formulation a new calibration error
metric, called TCE, which incorporates a loss based on a statistical test to
compare model predictions with estimated empirical probabilities. TCE produces
a value that has a clear interpretation as a percentage of model predictions
determined to deviate significantly from estimated empirical probabilities,
which leads to a normalised range of possible values $[0,100]$ regardless of
the choice of bins $\\{\Delta_{b}\\}_{b=1}^{B}$. In Section 3.3, we consider
an optimal criterion of bins $\\{\Delta_{b}\\}_{b=1}^{B}$ from the perspective
of minimising an estimation error of the empirical probabilities
$\\{\widehat{P}_{b}\\}_{b=1}^{B}$. We then develop a practical regularisation
approach that ensures a minimum and maximum sample size in each subset
$\mathcal{D}_{b}$.
### 3.1 General Calibration Error
The following definition presents an abstract formulation of calibration error
metrics, which we call _general calibration error_ (GCE) for terminological
convenience. Denote by $2^{\mathcal{D}}$ a power set of $\mathcal{D}$, i.e. a
space of all subsets of $\mathcal{D}$ and by $\mathcal{M}$ a space of all
probabilistic classifiers below.
###### Definition 1.
_(GCE)_ Let $L:2^{\mathcal{D}}\times\mathcal{M}\to\mathbb{R}$ be a loss of any
probabilistic classifier evaluated for any data subset. Let $\mathcal{B}$ be a
set of bins $\\{\Delta_{b}\\}_{b=1}^{B}$ that define data subsets
$\\{\mathcal{D}_{b}\\}_{b=1}^{B}$ as in eq. 2. Let $\|\cdot\|$ be a norm of a
$B$-dimensional vector space. For a given probabilistic classifier
$P_{\theta}:\mathcal{X}\to[0,1]$, define a scalar
$\text{GCE}_{b}\in\mathbb{R}$ for each $b=1,\cdots,B$ by
$\displaystyle\text{GCE}_{b}:=L\left(\mathcal{D}_{b},P_{\theta}\right).$ (5)
Then, GCE of the probabilistic classifier $P_{\theta}$ is defined by
$\displaystyle\text{GCE}=\|(\text{GCE}_{1},\cdots,\text{GCE}_{B})\|.$ (6)
This formulation translates the problem of designing a calibration error
metric into a problem of choosing the tuple $(L,\mathcal{B},\|\cdot\|)$. Most
existing calibration error metrics used in practice can be derived by
selecting an appropriate tuple of the loss $L$, the bins $\mathcal{B}$, and
the norm $\|\cdot\|$ in GCE. See Example 1 below for the case of ECE. It is
also immediate to show that ACE can be recovered from GCE.
###### Example 1.
Let $\mathcal{B}$ be equispaced bins $\\{\Delta_{b}\\}_{b=1}^{B}$ over
$[0,1]$, let $L$ be
$L(\mathcal{D}_{b},P_{\theta})=|\frac{1}{N_{b}}\sum_{y\in\mathcal{D}_{b}^{y}}y-\frac{1}{N_{b}}\sum_{x\in\mathcal{D}_{b}^{x}}P_{\theta}(x)|$,
and let $\|\cdot\|$ be a weighted 1-norm
$\|v\|=\sum_{b=1}^{B}\frac{N_{b}}{N}\times|v_{b}|$. The ECE corresponds to the
GCE under this tuple.
We aim to choose the tuple $(L,\mathcal{B},\|\cdot\|)$ so that it addresses
the aforementioned challenges in Section 2.2. Section 3.2 addresses a loss $L$
based on a statistical test and presents the resulting TCE. Subsequently,
Section 3.3 addresses a choice of bins $\mathcal{B}$ that is obtained through
optimisation to minimise an estimation error of the empirical probabilities
$\\{\widehat{P}_{b}\\}_{b=1}^{B}$. All norms $\|\cdot\|$ are equivalent in
finite dimensions, and hence we do not focus on any particular choice. As with
ECE, we use the weighted 1-norm $\|\cdot\|$ in Example 1 for TCE.
### 3.2 Test-based Calibration Errors
We present our main contribution, a new calibration error metric called TCE,
that is derived from GCE by specifying a novel loss $L$ based on a statistical
test. Our proposed loss $L$ summarises the percentage of model predictions
that deviate significantly from the empirical probabilities in each subset
$\mathcal{D}_{b}$. We effectively test a null hypothesis “the probability of
$Y=1$ is equal to $P_{\theta}(x)$” at each $x\in\mathcal{D}_{b}^{x}$ using the
output data $\mathcal{D}_{b}^{y}$. A rigorous formulation of this loss $L$ is
provided below, combined with the definition of the TCE. Note that the bins
$\\{\Delta_{b}\\}_{b=1}^{B}$ and the norm $\|\cdot\|$ of TCE are arbitrary,
while the weighted 1-norm is our default choice of $\|\cdot\|$.
###### Definition 2.
_(TCE)_ Given a statistical test and its significance level $\alpha\in[0,1]$,
let $R$ be a function of any observed dataset of random variable
$Y\in\\{0,1\\}$ and any probability $Q\in[0,1]$, which returns $1$ if a
hypothesis $P(Y=1)=Q$ is rejected based on the dataset and returns $0$
otherwise. In Definition 1, let $L$ be an average rejection percentage s.t.
$\displaystyle
L(\mathcal{D}_{b},P_{\theta})=100\times\frac{1}{N_{b}}\sum_{x\in\mathcal{D}_{b}^{x}}R\left(\mathcal{D}_{b}^{y},P_{\theta}(x)\right).$
(7)
GCE in Definition 1 is then called TCE.
In contrast to existing metrics that examine the difference between averaged
model predictions and empirical probabilities in each bin, TCE examines each
prediction $P_{\theta}(x)$ and summarises the rejection percentage in each
bin. The procedure of TCE can be intuitively interpreted as follows.
###### Remark 1.
Informally speaking, TCE examines whether each model prediction
$P_{\theta}(x)$ can be regarded as an outlier relative to the empirical
probability of the corresponding data $\mathcal{D}_{b}^{y}$, where the test in
function $R$ acts as a criterion for determining outliers. The level of model-
calibration is then measured by the rate of outliers produced by the model.
In this paper, we use the Binomial test as the _de facto_ standard statistical
test to define $R$ in the TCE. TCE based on other tests, including Bayesian
testing approaches, is an open direction for future research. Algorithm 1
summarises the computational procedure of TCE. There are multiple advantages
of TCE as follows.
Algorithm 1 Computation of TCE
Input: data $\mathcal{D}$, model $P_{\theta}$, norm $\|\cdot\|$, bins $\\{\Delta_{b}\\}_{b=1}^{B}$, function $R$ based on a chosen test and significance level
Output: a value $\text{TCE}\in\mathbb{R}$
for $b=1,\dots,B$ do
$\mathcal{D}_{b}\leftarrow\\{(x_{i},y_{i})\in\mathcal{D}\mid P_{\theta}(x_{i})\in\Delta_{b}\\}$ $\triangleright$ make subset
$\text{TCE}_{b}\leftarrow 0$
for $x_{i}\in\mathcal{D}_{b}^{x}$ do
$\text{TCE}_{b}\leftarrow\text{TCE}_{b}+R(\mathcal{D}_{b}^{y},P_{\theta}(x_{i}))$ $\triangleright$ test each prediction
end for
$\text{TCE}_{b}\leftarrow 100/N_{b}\times\text{TCE}_{b}$
end for
$\text{TCE}\leftarrow\|(\text{TCE}_{1},\dots,\text{TCE}_{B})\|$
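Algorithm 1 with the default weighted 1-norm can be sketched in a few lines of Python. The snippet below is our illustration, not a released implementation: it uses an exact two-sided Binomial test implemented from the probability mass function (an off-the-shelf routine such as `scipy.stats.binomtest` could be substituted), and it clamps predictions away from 0 and 1 so the test is well-defined.

```python
from math import comb

def binom_pvalue(k, n, p):
    """Exact two-sided Binomial test p-value for H0: P(Y=1) = p, summing
    the probabilities of outcomes no more likely than the observed count k."""
    pmf = [comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1)]
    return sum(pm for pm in pmf if pm <= pmf[k] * (1 + 1e-9))

def tce(y_true, y_prob, bin_edges, alpha=0.05):
    """TCE (Definition 2) aggregated with the weighted 1-norm: per bin,
    the percentage of predictions rejected against the bin's labels."""
    n, total = len(y_true), 0.0
    for b in range(len(bin_edges) - 1):
        lo, hi = bin_edges[b], bin_edges[b + 1]
        last = b == len(bin_edges) - 2  # last bin is closed on the right
        idx = [i for i, q in enumerate(y_prob)
               if lo <= q < hi or (last and q == hi)]
        if not idx:
            continue
        successes = sum(y_true[i] for i in idx)  # count of Y = 1 in the bin
        rejected = sum(
            1 for i in idx
            if binom_pvalue(successes, len(idx),
                            min(max(y_prob[i], 1e-12), 1 - 1e-12)) < alpha)
        total += (len(idx) / n) * (100.0 * rejected / len(idx))
    return total
```

On a well-calibrated bin the rejection percentage stays near the significance level, while severely miscalibrated predictions (e.g. predicting 0.9 for a class that never occurs) are all rejected.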
#### Advantage 1 (Clear Interpretation)
The final value of TCE has a clear interpretation as a percentage of model
predictions that are determined by the test of choice (here the Binomial test)
to deviate significantly from estimated empirical probabilities. Because the
value is a percentage, the range of the value is normalised to $[0,100]$.
Figure 1: Comparison of two visual representations both applied for a gradient
boosting model trained on the _abalone_ dataset used in Section 4.2. (Left) A
new visual representation, which we call the _test-based reliability diagram_.
The central plot shows a violin plot of model predictions in each bin, whose
estimated probability is presented by a red line. The bottom plot shows by
grey bar the sample size of each bin and by red bar the percentage of model
predictions that deviate significantly from the estimated probability in each
bin. The right plot shows a histogram of all model predictions. (Right) The
standard reliability diagram with the bin-size plot on the bottom and the
histogram plot on the right added for comparison.
#### Advantage 2 (Consistent Scale)
The test evaluates the statistical deviation of data from a model prediction
$P_{\theta}(x)$ adaptively and appropriately for each scale of $P_{\theta}(x)$
and data size $N_{b}$. Informally, TCE is the number of relative outliers
determined for each $P_{\theta}(x)$ adaptively. This endows the value with a
consistent scale robust to class imbalance.
#### Advantage 3 (Enhanced Visualisation)
TCE leads to a new visual representation that shows the distribution of model
predictions, and the proportion of model predictions that deviate
significantly from an empirical probability in each bin. See Figure 1 for the
description and comparison with the standard reliability diagram.
Our interest is in the aggregated rejection percentage of all the tests
performed, so multiple-testing corrections—e.g., the Bonferroni correction,
which offers a frequentist guarantee on the familywise error rate—are not
considered. If all the null hypotheses were simultaneously true, TCE would
simply coincide with the false positive rate, which in expectation equals the
type I error rate specified by the significance level of the test. A full
discussion of when and how adjustments for multiple hypothesis tests should be
made may be found in Bender and Lange [2001].
Given that TCE is based on a statistical testing procedure, it may be possible
to apply ideas from power analysis to inform the desired sample size in each
$\mathcal{D}_{b}$. Such analysis may also benefit the algorithm in the next
subsection to compute optimal bins under the bin-size constraints, providing
insights on what bin-size should be used as the constraints. Finally, it is
worth noting that TCE can be extended to multi-class classification. The
following remark presents one straightforward approach to the extension.
###### Remark 2.
Any calibration error metric defined for binary classification can be extended
to multi-class classification by considering classwise-calibration [e.g. Kull
et al., 2019], where the calibration error metric is applied for one-vs-rest
classification of each class independently. A modification of TCE in multi-
class classification settings can then be defined as an average of TCEs
applied for one-vs-rest classification of each class.
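As a minimal illustration of Remark 2, the classwise extension averages a
binary metric over one-vs-rest tasks; the `metric` argument below stands in
for TCE or any other binary calibration error metric, and its signature is a
hypothetical choice:

```python
def classwise_metric(metric, probs, labels, num_classes):
    # probs[i][c]: predicted probability of class c for sample i.
    # labels[i]: true class index. For each class c, build the binary
    # one-vs-rest task and apply the given binary metric, then average.
    values = []
    for c in range(num_classes):
        p_c = [row[c] for row in probs]
        y_c = [1 if label == c else 0 for label in labels]
        values.append(metric(p_c, y_c))
    return sum(values) / num_classes
```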
### 3.3 Optimal Bins by Monotonic Regressor and Bin-Size Constraints
It is a fundamental challenge to establish a practical and theoretically sound
mechanism to design the bins used in calibration error metrics. Ideally
designed bins provide accurate probability estimates
$\\{\widehat{P}_{b}\\}_{b=1}^{B}$ from data $\mathcal{D}$ while keeping the
size of each bin reasonable. To this end, we propose a novel algorithm to
compute bins that aim to minimise an estimation error of the probability
estimates $\\{\widehat{P}_{b}\\}_{b=1}^{B}$ under constraints on the size of
each bin.
Recently, Dimitriadis et al. [2021] pointed out that an existing algorithm for
solving a quadratic program, the pool-adjacent-violators algorithm (PAVA), can
be directly applied to compute “optimal” bins in the context of obtaining a
better reliability diagram. The bins are designed in a manner that minimises
the _Brier score_ [Brier, 1950] of the resulting empirical probabilities by
virtue of PAVA. Building on this observation, we introduce the following
definition that makes explicit in what sense bins $\\{\Delta_{b}\\}_{b=1}^{B}$
can be considered optimal given an arbitrary estimation error $\mathrm{D}$ of
the probability estimates $\\{\widehat{P}_{b}\\}_{b=1}^{B}$ from data
$\mathcal{D}$.
###### Definition 3.
_(Optimal Bins)_ Let $\Pi$ be a space of all sets of bins
$\\{\Delta_{b}\\}_{b=1}^{B}$ for any $B$, with associated data subsets denoted
by $\\{\mathcal{D}_{b}\\}_{b=1}^{B}$ and probability estimates from
$\\{\mathcal{D}_{b}^{y}\\}_{b=1}^{B}$ denoted by
$\\{\widehat{P}_{b}\\}_{b=1}^{B}$. Let $\mathrm{D}$ be any error function
between an observed dataset of random variable $Y\in\\{0,1\\}$ and a given
probability $Q\in[0,1]$. Any set of bins that satisfies
$\displaystyle\min_{\\{\Delta_{b}\\}_{b=1}^{B}\in\Pi}\;\sum_{b=1}^{B}W_{b}\times\mathrm{D}(\mathcal{D}_{b}^{y},\widehat{P}_{b})\quad\text{subject to}\quad\widehat{P}_{1}\leq\dots\leq\widehat{P}_{B}$ (8)
can be considered an optimal set of bins under the estimation error
$\mathrm{D}$, where $W_{b}:=N_{b}/N$ is the weight associated with the error
of subset $\mathcal{D}_{b}^{y}$ of size $N_{b}$.
The monotonic constraint $\widehat{P}_{1}\leq\cdots\leq\widehat{P}_{B}$ of the
probability estimates $\\{\widehat{P}_{b}\\}_{b=1}^{B}$ is a natural
requirement because the choice of bins becomes trivial otherwise. For example,
consider bins $\\{\Delta_{b}\\}_{b=1}^{B}$ with $B=N$ such that $\Delta_{b}$
contains one single point $y_{b}$ and the probability estimate
$\widehat{P}_{b}=y_{b}$ for each $b$. This clearly achieves that
$\sum_{b=1}^{B}W_{b}\times\mathrm{D}(\mathcal{D}_{b}^{y},\widehat{P}_{b})=\frac{1}{N}\sum_{b=1}^{N}\mathrm{D}(\\{y_{b}\\},y_{b})=0$.
Under the monotonic constraint, the choice of bins becomes non-trivial.
Under some choices of the estimation error $\mathrm{D}$, the optimisation of
eq. 8 can be solved as a monotonic regression problem. Given an ordered
dataset $\\{y_{i}\\}_{i=1}^{N}$, a monotonic regression algorithm finds $N$
monotonically increasing values $\widehat{y}_{1}\leq\cdots\leq\widehat{y}_{N}$
that minimise some loss between $\\{\widehat{y}_{i}\\}_{i=1}^{N}$ and
$\\{y_{i}\\}_{i=1}^{N}$. There exist algorithms for various losses, including
the $l_{p}$ loss, the Huber loss, and the Chebyshev loss [de Leeuw et al.,
2009]. PAVA solves a monotonic regression problem under the squared error
$\sum_{i=1}^{N}(\widehat{y}_{i}-y_{i})^{2}$. If we choose the error
$\mathrm{D}$ as the variance of each $\mathcal{D}_{b}^{y}$, i.e.,
$\displaystyle\mathrm{D}(\mathcal{D}_{b}^{y},\widehat{P}_{b})=\frac{1}{N_{b}}\sum_{i=1}^{N_{b}}(y_{i}-\widehat{P}_{b})^{2}$
(9)
the optimal set of bins under $\mathrm{D}$ can be obtained using PAVA, which
corresponds to the case of Dimitriadis et al. [2021]. See Appendix A for the
proof that the optimisation criterion of eq. 8 is indeed minimised at the bins
obtained using PAVA. The approach using PAVA is a highly appealing solution to
the design of bins $\\{\Delta_{b}\\}_{b=1}^{B}$ because it achieves a fully-
automated design based on the clear criterion of eq. 8. However, such a
fully-automated design can occasionally generate a bin that contains an
excessively small or large number of data points for the sake of minimising
the aggregated estimation error over all $\\{\widehat{P}_{b}\\}_{b=1}^{B}$.
Imposing a regularisation on the minimum and maximum size of each
$\mathcal{D}_{b}$ helps maintain a baseline quality of the estimate of each
individual $\widehat{P}_{b}$.
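For concreteness, a minimal stack-based PAVA for the squared error can be
sketched as follows (a didactic version; optimised implementations are
analysed, e.g., by Henzi et al. [2022]):

```python
def pava(y):
    # Pool-adjacent-violators: merge neighbouring blocks while their
    # means violate monotonicity; each block's fit is its mean.
    sums, sizes = [], []
    for v in y:
        sums.append(float(v)); sizes.append(1)
        while len(sums) > 1 and sums[-2] / sizes[-2] > sums[-1] / sizes[-1]:
            sums[-2] += sums[-1]; sizes[-2] += sizes[-1]
            sums.pop(); sizes.pop()
    fit = []
    for s, w in zip(sums, sizes):
        fit.extend([s / w] * w)
    return fit
```

Applied to labels sorted by model prediction, the constant segments of the fit
define the bins, and their means give the probability estimates
$\widehat{P}_{b}$.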
Algorithm 2 PAVA-BC (PAVA with Block Constraints)
Input: ordered scalars $\\{y_{i}\\}_{i=1}^{N}$, size constraints
$N_{\text{min}}$ and $N_{\text{max}}$ s.t. $0\leq N_{\text{min}}\leq
N_{\text{max}}\leq N$.
Output: sequence $\\{\widehat{y}_{i}\\}_{i=1}^{N}$
$B\leftarrow 0$
for $i=1,\dots,N-N_{\text{min}}$ do
$B\leftarrow B+1$
$Y_{B}\leftarrow y_{i}$
$W_{B}\leftarrow 1$
while $B>1$ do
if $W_{B-1}+W_{B}>N_{\text{min}}$ then
if $W_{B-1}+W_{B}>N_{\text{max}}$ then break
if $Y_{B-1}/W_{B-1}<Y_{B}/W_{B}$ then break
end if
$Y_{B-1}\leftarrow Y_{B-1}+Y_{B}$
$W_{B-1}\leftarrow W_{B-1}+W_{B}$
$B\leftarrow B-1$
end while
end for
if $W_{B}+N_{\text{min}}\leq N_{\text{max}}$ then
$Y_{B}\leftarrow Y_{B}+\sum_{i=N-N_{\text{min}}+1}^{N}y_{i}$
$W_{B}\leftarrow W_{B}+N_{\text{min}}$
else
$B\leftarrow B+1$
$Y_{B}\leftarrow\sum_{i=N-N_{\text{min}}+1}^{N}y_{i}$
$W_{B}\leftarrow N_{\text{min}}$
end if
$s\leftarrow 0$
for $j=1,\dots,B$ do
for $k=1,\dots,W_{j}$ do
$\widehat{y}_{s+k}\leftarrow Y_{j}/W_{j}$
end for
$s\leftarrow s+W_{j}$
end for
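A direct Python transcription of Algorithm 2 (0-indexed, with the block sums
$Y_{b}$ and sizes $W_{b}$ kept as parallel stacks) might read as follows; it
is a sketch and may differ from the released implementation in minor details:

```python
def pava_bc(y, n_min, n_max):
    # PAVA with block constraints: the last n_min points are held out of
    # the main sweep and attached at the end, as in Algorithm 2.
    n = len(y)
    sums, sizes = [], []          # block sums Y_B and sizes W_B as stacks
    for i in range(n - n_min):
        sums.append(float(y[i])); sizes.append(1)
        while len(sums) > 1:
            combined = sizes[-2] + sizes[-1]
            if combined > n_min:
                if combined > n_max:
                    break         # merging would exceed the max block size
                if sums[-2] / sizes[-2] < sums[-1] / sizes[-1]:
                    break         # already monotone, no violation to fix
            sums[-2] += sums[-1]; sizes[-2] += sizes[-1]
            sums.pop(); sizes.pop()
    tail = float(sum(y[n - n_min:]))
    if sums and sizes[-1] + n_min <= n_max:
        sums[-1] += tail; sizes[-1] += n_min
    else:
        sums.append(tail); sizes.append(n_min)
    fit = []
    for s, w in zip(sums, sizes):
        fit.extend([s / w] * w)
    return fit
```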
Algorithm 3 Near-Optimal Bins Based on PAVA-BC
Input: data $\mathcal{D}$, model $P_{\theta}$, size constraints
$N_{\text{min}}$ and $N_{\text{max}}$ s.t. $0\leq N_{\text{min}}\leq
N_{\text{max}}\leq N$.
Output: a set of bins $\\{\Delta_{b}\\}_{b=1}^{B}$
$\\{y_{i}\\}_{i=1}^{N}\leftarrow\text{Sort}(\mathcal{D},P_{\theta})$
$\\{\widehat{y}_{i}\\}_{i=1}^{N}\leftarrow\text{PAVA-BC}(\\{y_{i}\\}_{i=1}^{N},N_{\text{min}},N_{\text{max}})$
$B\leftarrow 1$
$L\leftarrow 0$
$R\leftarrow 0$
for $i=2,\dots,N$ do
if $\widehat{y}_{i-1}\neq\widehat{y}_{i}$ then
$R\leftarrow(P_{\theta}(x_{i-1})+P_{\theta}(x_{i}))/2$
$\Delta_{B}\leftarrow[L,R)$
$L\leftarrow R$
$B\leftarrow B+1$
end if
end for
$\Delta_{B}\leftarrow[L,1.0]$
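The bin-extraction loop of Algorithm 3 is independent of which isotonic fit is
used: a boundary is placed midway between the two predictions flanking each
change in the fitted value. A self-contained sketch, taking the sorted
predictions and the fitted sequence as inputs:

```python
def bins_from_fit(p_sorted, y_hat):
    # p_sorted: model predictions in ascending order.
    # y_hat: fitted values from PAVA or PAVA-BC, aligned with p_sorted.
    # A bin boundary is the midpoint between the predictions flanking
    # each change in the fitted value; the bins cover [0, 1].
    edges = [0.0]
    for i in range(1, len(y_hat)):
        if y_hat[i - 1] != y_hat[i]:
            edges.append((p_sorted[i - 1] + p_sorted[i]) / 2)
    edges.append(1.0)
    return [(edges[b], edges[b + 1]) for b in range(len(edges) - 1)]
```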
Therefore, we propose a modified version of PAVA that imposes the given
minimum and maximum size on each subset $\mathcal{D}_{b}^{y}$. Algorithm 2
summarises the full algorithm, which we call _PAVA with block constraints_
(PAVA-BC). Algorithm 3 then summarises how to compute bins using PAVA-BC,
where $\text{Sort}(\mathcal{D},P_{\theta})$ denotes any algorithm that sorts
the labels $\\{y_{i}\\}_{i=1}^{N}$ in ascending order of the model predictions
$\\{P_{\theta}(x_{i})\\}_{i=1}^{N}$. By Algorithm 3, we obtain bins that
satisfy the given minimum and maximum size constraints $N_{\text{min}}$ and
$N_{\text{max}}$ on each $\mathcal{D}_{b}$, while benefitting from the
automated design of bins by PAVA. A set of bins based on plain PAVA can be
recovered by replacing PAVA-BC with PAVA in Algorithm 3.
In general, the introduction of the regularisation can cause mild violation of
the monotonicity $\widehat{P}_{1}\leq\cdots\leq\widehat{P}_{B}$, meaning that
there may exist a few values $\widehat{P}_{b}$ that are smaller than
$\widehat{P}_{b-1}$. See Appendix B for examples where mild violation of the
monotonicity by PAVA-BC does and does not occur. In practice, mild violation
of the monotonicity can often be a reasonable price for better-behaved bins.
For example, Tibshirani et al. [2011] studied settings where the monotonicity
is only “nearly” satisfied.
See Figure 2 for a comparison of the bins computed by three different
approaches: PAVA, PAVA-BC, and binning based on $10$-quantiles. The bins
produced by PAVA-BC interpolate between the optimal bins produced by PAVA and
the well-sized bins produced by quantile binning. This is further confirmed by
Table 1, which reports the total estimation error of eq. 8 and the within-bin
estimation error of eq. 9 for each approach. The total estimation error is
minimised by PAVA, while the average within-bin estimation error is minimised
by quantile binning. PAVA-BC strikes a balance between the total and the
individual estimation errors.
Figure 2: Comparison of bins for a random forest model on the _satimage_
dataset used in Section 4.2 based on (top) PAVA, (middle) PAVA-BC, (bottom)
binning based on $10$-quantiles. The dotted line represents the boundary of
each bin and the grey bar represents the size of each bin. Table 1: The total
estimation error and an average of the estimation error within each bin for
the bins in Figure 2.
| PAVA | PAVA-BC | Quantile
---|---|---|---
Total Error | 0.040 | 0.042 | 0.048
Averaged Within-Bin Error | 0.132 | 0.077 | 0.047
## 4 Empirical Evaluation
In this section, we demonstrate the properties of TCE via three experiments.
The first experiment uses synthetic data to examine the properties of TCE
under controlled class imbalance. The second experiment involves ten real-
world datasets from the University of California Irvine (UCI) machine learning
repository [Dua and Graff, 2017], where nine are designed as benchmark tasks
of imbalanced classification, and one is a well-balanced classification task
for comparison. In the second experiment, we also demonstrate that ECE and ACE
may produce misleading assessments of calibration performance under class
imbalance. TCE has the potential to reduce such misinterpretation risks. The
final experiment uses the ImageNet1000 dataset to illustrate that TCE is
applicable to large-scale settings. In all experiments, models are first
fitted to training data and all calibration error metrics are then computed on
validation data. Source code to reproduce the experiments is available at
https://github.com/facebookresearch/tce.
We compute TCE with bins based on PAVA-BC unless otherwise stated. The minimum
and maximum size of each bin for PAVA-BC are set to $N/20$ and $N/5$ for a
given dataset size $N$. Under these constraints, the number of bins produced
by PAVA-BC falls between 5 and 20. In addition to ECE and ACE, we include the
maximum calibration error (MCE) [Naeini et al., 2015] for comparison. MCE is
defined by replacing the weighted 1-norm in Example 1 with the supremum norm
over $b=1,\dots,B$. We write TCE(Q) and MCE(Q) for TCE and MCE with bins based
on $B$-quantiles. For all metrics, $B$-equispaced and $B$-quantile bins are
computed with $B=10$.
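For reference, the equispaced-bin baselines can be sketched as follows; this
follows the standard definitions of ECE and MCE (eq. 4 and Example 1 are
paraphrased here rather than reproduced):

```python
def ece_mce(p, y, n_bins=10):
    # Equispaced bins over [0, 1]; ECE is the bin-size-weighted average
    # of |empirical frequency - mean prediction|, MCE the maximum gap.
    n = len(p)
    gaps, weights = [], []
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i in range(n)
               if lo <= p[i] < hi or (b == n_bins - 1 and p[i] == 1.0)]
        if not idx:
            continue  # empty bins contribute nothing
        conf = sum(p[i] for i in idx) / len(idx)
        freq = sum(y[i] for i in idx) / len(idx)
        gaps.append(abs(freq - conf))
        weights.append(len(idx) / n)
    return sum(w * g for w, g in zip(weights, gaps)), max(gaps)
```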
### 4.1 Synthetic Data with Controlled Class Imbalance
We first examine TCE using synthetic data from a simulation model considered
in Vaicenavicius et al. [2019]. The data are simulated from a Gaussian
discriminant analysis model $(x,y)\sim P(x\mid y)P(y)$. The output
$y\in\\{0,1\\}$ is first sampled from a Bernoulli distribution $P(y)$ with
parameter $\pi$ and the input $x\in\mathbb{R}$ is then sampled from a Gaussian
distribution $P(x\mid y)=\mathcal{N}(m_{y},s_{y})$ with mean $m_{y}$ and scale
$s_{y}$ dependent on $y$. We set $m_{y}=(2\times y-1)$ and $s_{y}=2$, and
change the parameter $\pi$ for each setting below. By Bayes’ theorem, the
conditional probability of $y$ given $x$ corresponds to a logistic model:
$P(y\mid x)=1/(1+\exp(\beta_{0}+\beta_{1}\times x))$ where
$\beta_{0}=\log(\pi/(1-\pi))$ and $\beta_{1}=4$. A logistic model is therefore
capable of reproducing the probability $P(y\mid x)$ of this synthetic data
perfectly.
We consider two baseline cases of (i) well-balanced classification and (ii)
imbalanced classification in this experiment. We train a logistic model on
the training data simulated with the parameter $\pi=0.5$ (i.e. 50% prevalence)
in case (i) and with $\pi=0.01$ (i.e. 1% prevalence) in case (ii). In each
case (i) and (ii), we generate three different test datasets to create
situations where the trained model is (a) well-calibrated, (b) over-
calibrated, and (c) under-calibrated. We examine the performance of TCE under
these scenarios. Test datasets for scenarios (a), (b), and (c) are generated
from the simulation model with prevalences $50\%$, $40\%$, and $60\%$ in case
(i) and with prevalences $1\%$, $0\%$, and $2\%$ in case (ii). We generate
20000 data points in total, of which 70% are training data and 30% are test
data.
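The data-generating process above can be sketched directly (NumPy only; the
70/30 split and the logistic fit are omitted):

```python
import numpy as np

def simulate(n, pi, seed=0):
    # Gaussian discriminant model: y ~ Bernoulli(pi),
    # x | y ~ N(m_y, s_y) with m_y = 2y - 1 and s_y = 2.
    rng = np.random.default_rng(seed)
    y = (rng.random(n) < pi).astype(int)
    x = rng.normal(2 * y - 1, 2.0)
    return x, y
```

Setting `pi` to 0.5 and 0.01 reproduces the well-balanced case (i) and the
imbalanced case (ii); the miscalibration scenarios are then created by
generating test sets with shifted prevalences.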
Table 2 shows the values of four calibration error metrics applied to the
logistic regression model in each scenario. Table 2 demonstrates that all
values of ECE and ACE in imbalanced case (ii) can be smaller than—or very
close to—values for well-calibrated scenario (a) in well-balanced case (i). For
example, the ECE value for case (ii)-(b) was smaller than that for case
(i)-(a). In contrast, TCE provides values with a consistent scale in both
well-balanced and imbalanced cases. More simulation studies of TCE with
different hyperparameters are presented in Section C.1.
Table 2: Comparison of four calibration error metrics under scenarios (a)-(c)
in each case (i) and (ii).
Prevalence | TCE | TCE(Q) | ECE | ACE
---|---|---|---|---
50% vs 50% | 7.28% | 10.88% | 0.0138 | 0.0150
50% vs 40% | 96.10% | 96.47% | 0.0963 | 0.0951
50% vs 60% | 98.83% | 98.93% | 0.1097 | 0.1096
1% vs 1% | 3.40% | 0.18% | 0.0017 | 0.0031
1% vs 0% | 95.50% | 68.73% | 0.0094 | 0.0094
1% vs 2% | 92.32% | 89.73% | 0.0139 | 0.0139
### 4.2 Imbalanced UCI Datasets
Next, we compare calibration error metrics using real-world datasets in the
regime of severe class imbalance. We use nine UCI datasets that were
preprocessed by Lemaître et al. [2017] as benchmark tasks of imbalanced
classification. We also use one additional UCI dataset with a well-balanced
prevalence for comparison. For each dataset, 70% of samples are used as
training data and 30% of samples are kept as validation data. We train five
different algorithms: logistic regression (LR), support vector machine (SVM),
random forest (RF), gradient boosting (GB), and multi-layer perceptron (MLP).
We evaluate the calibration performance of each model by five different
calibration error metrics in the following tables. Tables 3 and 4 show results
for the imbalanced datasets, _abalone_ and _webpage_ [Dua and Graff, 2017],
respectively. Results for all the other datasets are presented in Section C.2.
In Table 3, TCE and ACE agree on the best model, while ECE identifies RF as
the best model. The reliability diagrams of ECE for both datasets in Section
C.2 show that a large majority of model predictions are contained in a single
ECE bin. In such cases, ECE becomes essentially equivalent to a comparison of
the global averages of all labels and all model predictions. Table 4
demonstrates a situation where ECE
and ACE risk misleading assessments of calibration performance. The values of
ECE and ACE in Table 4 are all small, from which one may
conclude that it is reasonable to use a model with the smallest calibration
error. However, the values of TCE indicate that no model has a good
calibration performance. In fact, relatively large statistical deviations
between model predictions and empirical probabilities can be observed from the
test-based reliability diagram for the webpage dataset in Section C.2.
Table 3: Comparison of five calibration error metrics for five different
algorithms trained on the abalone dataset.
| TCE | ECE | ACE | MCE | MCE(Q)
---|---|---|---|---|---
LR | 7.26% | 0.0140 | 0.0252 | 0.0946 | 0.0851
SVM | 47.21% | 0.0436 | 0.0473 | 0.8302 | 0.1170
RF | 33.89% | 0.0127 | 0.0177 | 0.0670 | 0.0547
GB | 4.86% | 0.0182 | 0.0160 | 0.2965 | 0.0418
MLP | 3.83% | 0.0167 | 0.0122 | 0.0806 | 0.0540
Table 4: Comparison of five calibration error metrics for five different
algorithms trained on the webpage dataset.
| TCE | ECE | ACE | MCE | MCE(Q)
---|---|---|---|---|---
LR | 40.16% | 0.0044 | 0.0034 | 0.3134 | 0.0214
SVM | 59.83% | 0.0043 | 0.0057 | 0.5402 | 0.0239
RF | 99.66% | 0.0234 | 0.0241 | 0.5980 | 0.1189
GB | 71.12% | 0.0086 | 0.0107 | 0.2399 | 0.0436
MLP | 49.81% | 0.0090 | 0.0018 | 0.4344 | 0.0076
### 4.3 K-vs-Rest on ImageNet1000
Finally, we demonstrate that TCE is applicable to a large-scale binary
classification task using ImageNet1000 data. We consider a K-vs-rest
classification problem using the set of all dog classes (classes 150 to 275)
as the positive class and the rest as the negative class. Under this setting,
12.5% of validation samples belong to the positive class. We used five
pretrained models: AlexNet, VGG19, ResNet18, ResNet50, and ResNet152.
Their calibration errors were measured based on the ImageNet1000 validation
dataset consisting of 50000 data points. Table 5 demonstrates that TCE
produces interpretable values, with model rankings that largely agree with
other metrics in this setting. The last row of Table 5 shows the average
computational time of each metric. Computing TCE for the 50000 data points
required only 71.78 seconds on average using a single CPU. The
reliability diagrams corresponding to the results are presented in Section
C.3.
Table 5: Comparison of five calibration error metrics for five different deep
learning models on ImageNet1000 data.
| TCE | ECE | ACE | MCE | MCE(Q)
---|---|---|---|---|---
AlexNet | 42.74% | 0.0070 | 0.0070 | 0.1496 | 0.0528
VGG19 | 23.57% | 0.0028 | 0.0028 | 0.2148 | 0.0247
Res18 | 29.93% | 0.0042 | 0.0042 | 0.2368 | 0.0350
Res50 | 24.60% | 0.0020 | 0.0018 | 0.1911 | 0.0152
Res152 | 16.09% | 0.0012 | 0.0013 | 0.1882 | 0.0102
Time (s) | 71.78 | 0.4873 | 0.4221 | 0.0046 | 0.0063
## 5 Related Work
Several calibration error metrics have been proposed, including the
aforementioned ECE. MCE is a widely used variant of ECE that replaces the
summation over $b=1,\dots,B$ in (4) with the supremum over $b=1,\dots,B$.
Kumar et al. [2019] introduced a more general $l_{p}$ calibration error, which
includes both ECE and MCE. ACE replaces the equispaced bins in ECE with bins
designed based on quantiles of model predictions, which prevents a high
concentration of data in one bin when data is imbalanced [Nixon et al., 2019].
These calibration error metrics can be extended to multi-class classification
[Kumar et al., 2019]. Besides calibration error, scoring functions
[Gneiting et al., 2007] are commonly used to evaluate a probabilistic
classifier. Wallace and Dahabreh [2014] reported a limitation of the Brier
score for imbalanced classification, and proposed the _stratified_ Brier score
that aggregates multiple Brier scores.
This paper designed a new calibration error metric based on a statistical
test. While statistical tests have been used in the context of calibration, we
are the first to incorporate a statistical test into the design of a
calibration error metric. Vaicenavicius et al. [2019] performed a statistical
test on whether ECE computed for synthetic data generated from predictive
probabilities is significantly different from ECE computed for actual data.
Similarly, Widmann et al. [2019] proposed a statistical test of the value of
their calibration error metric built on kernel methods. In contrast to
existing works which considered a test for final values of calibration error
metrics, our approach incorporates a test into the metric itself.
While the use of binning is vital in the vast majority of calibration metrics,
there are a few works on the _binning-free_ design of calibration error
metrics. The main idea is to use a cumulative distribution function (CDF) of
predictive probabilities, which can be estimated without binning, and evaluate
how significantly it differs from an ideal CDF that occurs if the predictive
probabilities are all well-calibrated. For example, Gupta et al. [2021] and
Arrieta-Ibarra et al. [2022] considered the Kolmogorov-Smirnov test for the
empirical CDF, where Gupta et al. [2021] further proposed a spline
interpolation to obtain a continuous approximation of the CDF. An approach
proposed by Kull et al. [2017] can also be regarded as binning-free. It uses a
continuous CDF of the beta distribution produced by their calibration method,
mentioned below, rather than the empirical CDF.
_Calibration methods_ refer to algorithms used to improve the calibration
performance of a model $P_{\theta}$. Usually, they learn some ‘post-hoc’
function $\varphi:[0,1]\to[0,1]$ to be applied to each model prediction so that
the new prediction $\varphi(P_{\theta}(x))$ is better calibrated. Various
calibration algorithms have been proposed in parallel to the development of
calibration error metrics. Platt scaling uses a logistic function for the
post-hoc function $\varphi$ [Platt, 1999]. Alternatively, Kull et al. [2017,
2019] proposed to use a beta distribution in binary classification and a
Dirichlet distribution in multi-class classification. Isotonic regression is a
powerful non-parametric approach to find a monotonically increasing function
$\varphi$ that minimises the Brier score [Zadrozny and Elkan, 2002]. Finally,
Bayesian Binning into Quantiles by Naeini et al. [2015] extends a classical
histogram-based calibration [Zadrozny and Elkan, 2001] to an ensemble of
histogram-based calibrations based on Bayesian model averaging.
## 6 Conclusion
In this paper, we proposed a new calibration error metric TCE that
incorporates a novel loss function based on a statistical test. TCE has (i) a
clear interpretation as a percentage of model predictions determined to
deviate significantly from estimated empirical probabilities, (ii) a
consistent scale that is robust to class imbalance, and (iii) an informative
visual representation that facilitates a better understanding of calibration
performance of probabilistic classifiers. We further introduced an optimality
criterion of bins associated with a minimal estimation error of the empirical
probabilities and a new algorithm to compute optimal bins approximately under
the constraint of the size of each bin.
Our proposal opens up room for new research directions in the context of
calibration. This paper focuses on the methodological development of TCE.
There are various directions to investigate in terms of theoretical properties
of TCE. These include the convergence properties of TCE in the limit of data
size $N$, understanding the minimum number of data points that should be
contained in each subset $\mathcal{D}_{b}$, and a rigorous theoretical
analysis of PAVA-BC. By continuing to investigate these areas, we can refine
and expand our understanding of the capabilities of TCE.
###### Acknowledgements.
The authors would like to thank Abbas Zaidi, Michael Gill, and Will Bullock
for their useful feedback on early work of this paper. TM is supported by The
Alan Turing Institute under the EPSRC grant EP/N510129/1.
## References
* Abdallah et al. [2016] Aisha Abdallah, Mohd Aizaini Maarof, and Anazida Zainal. Fraud detection system: A survey. _Journal of Network and Computer Applications_ , 68:90–113, 2016. ISSN 1084-8045.
* Arrieta-Ibarra et al. [2022] Imanol Arrieta-Ibarra, Paman Gujral, Jonathan Tannen, Mark Tygert, and Cherie Xu. Metrics of calibration for probabilistic predictions. _Journal of Machine Learning Research_ , 23(351):1–54, 2022.
* Bender and Lange [2001] Ralf Bender and Stefan Lange. Adjusting for multiple testing—when and how? _Journal of Clinical Epidemiology_ , 54(4):343–349, 2001. ISSN 0895-4356.
* Brier [1950] Glen W. Brier. Verification of forecasts expressed in terms of probability. _Monthly Weather Review_ , 78(1):1 – 3, 1950\.
* Bröcker [2009] Jochen Bröcker. Reliability, sufficiency, and the decomposition of proper scores. _Quarterly Journal of the Royal Meteorological Society_ , 135(643):1512–1519, 2009.
* Dawid [1982] Philip Dawid. The well-calibrated bayesian. _Journal of the American Statistical Association_ , 77(379):605–610, 1982.
* de Leeuw et al. [2009] Jan de Leeuw, Kurt Hornik, and Patrick Mair. Isotone optimization in r: Pool-adjacent-violators algorithm (pava) and active set methods. _Journal of Statistical Software_ , 32(5):1–24, 2009.
* Degroot and Fienberg [1983] Morris H. Degroot and Stephen E. Fienberg. The comparison and evaluation of forecasters. _The Statistician_ , 32:12–22, 1983.
* Dimitriadis et al. [2021] Timo Dimitriadis, Tilmann Gneiting, and Alexander I. Jordan. Stable reliability diagrams for probabilistic classifiers. _Proceedings of the National Academy of Sciences_ , 118(8), 2021.
* Dua and Graff [2017] Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.
* Elter et al. [2007] M Elter, R Schulz-Wendtland, and T Wittenberg. The prediction of breast cancer biopsy outcomes using two cad approaches that both emphasize an intelligible decision process. _Medical Physics_ , 34(11), 2007.
* Gneiting et al. [2007] Tilmann Gneiting, Fadoua Balabdaoui, and Adrian E. Raftery. Probabilistic forecasts, calibration and sharpness. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_ , 69(2):243–268, 2007.
* Grigorescu et al. [2020] Sorin Grigorescu, Bogdan Trasnea, Tiberiu Cocias, and Gigel Macesanu. A survey of deep learning techniques for autonomous driving. _Journal of Field Robotics_ , 37(3):362–386, 2020\.
* Gupta et al. [2021] Kartik Gupta, Amir Rahimi, Thalaiyasingam Ajanthan, Thomas Mensink, Cristian Sminchisescu, and Richard Hartley. Calibration of neural networks using splines. In _International Conference on Learning Representations_ , 2021.
* Henzi et al. [2022] Alexander Henzi, Alexandre Mösching, and Lutz Dümbgen. Accelerating the Pool-Adjacent-Violators Algorithm for Isotonic Distributional Regression. _Methodology and Computing in Applied Probability_ , 24(4):2633–2645, 2022.
* Kull et al. [2017] Meelis Kull, Telmo M. Silva Filho, and Peter Flach. Beyond sigmoids: How to obtain well-calibrated probabilities from binary classifiers with beta calibration. _Electronic Journal of Statistics_ , 11(2):5052 – 5080, 2017.
* Kull et al. [2019] Meelis Kull, Miquel Perello Nieto, Markus Kängsepp, Telmo Silva Filho, Hao Song, and Peter Flach. Beyond temperature scaling: Obtaining well-calibrated multi-class probabilities with dirichlet calibration. In _Advances in Neural Information Processing Systems_ , volume 32, 2019.
* Kumar et al. [2019] Ananya Kumar, Percy S Liang, and Tengyu Ma. Verified uncertainty calibration. In _Advances in Neural Information Processing Systems_ , volume 32, 2019.
* Lemaître et al. [2017] Guillaume Lemaître, Fernando Nogueira, and Christos K. Aridas. Imbalanced-learn: A python toolbox to tackle the curse of imbalanced datasets in machine learning. _Journal of Machine Learning Research_ , 18(17):1–5, 2017.
* Li et al. [2015] Cheng Li, Yue Lu, Qiaozhu Mei, Dong Wang, and Sandeep Pandey. Click-through prediction for advertising in twitter timeline. In _Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , pages 1959–1968, 2015.
* Minderer et al. [2021] Matthias Minderer, Josip Djolonga, Rob Romijnders, Frances Ann Hubis, Xiaohua Zhai, Neil Houlsby, Dustin Tran, and Mario Lucic. Revisiting the calibration of modern neural networks. In _Advances in Neural Information Processing Systems_ , 2021.
* Naeini et al. [2015] M. P. Naeini, G. F. Cooper, and M. Hauskrecht. Obtaining well calibrated probabilities using bayesian binning. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , pages 2901––2907, 2015.
* Niculescu-Mizil and Caruana [2005] Alexandru Niculescu-Mizil and Rich Caruana. Predicting good probabilities with supervised learning. In _Proceedings of the 22nd International Conference on Machine Learning_ , page 625–632, 2005.
* Nixon et al. [2019] Jeremy Nixon, Michael W. Dusenberry, Linchuan Zhang, Ghassen Jerfel, and Dustin Tran. Measuring calibration in deep learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops_ , June 2019.
* Platt [1999] John C. Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. _Advances in Large Margin Classifiers_ , 10(3), 1999.
* Storkey et al. [2009] Amos Storkey et al. When training and test sets are different: characterizing learning transfer. _Dataset shift in machine learning_ , 30:3–28, 2009.
* Tax et al. [2021] Niek Tax, Kees Jan de Vries, Mathijs de Jong, Nikoleta Dosoula, Bram van den Akker, Jon Smith, Olivier Thuong, and Lucas Bernardi. Machine learning for fraud detection in e-commerce: A research agenda. In _Proceedings of the KDD International Workshop on Deployable Machine Learning for Security Defense (MLHat)_ , pages 30–54. Springer, 2021.
* Tibshirani et al. [2011] Ryan J. Tibshirani, Holger Hoefling, and Robert Tibshirani. Nearly-isotonic regression. _Technometrics_ , 53(1):54–61, 2011.
* Topol [2019] Eric Topol. High-performance medicine: the convergence of human and artificial intelligence. _Nature Medicine_ , 25:44–56, 2019.
* Vaicenavicius et al. [2019] Juozas Vaicenavicius, David Widmann, Carl R. Andersson, Fredrik Lindsten, Jacob Roll, and Thomas Bo Schön. Evaluating model calibration in classification. In _Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics_ , 2019.
* van der Putten and van Someren [2000-2009] Peter van der Putten and Maarten van Someren. Coil challenge 2000: The insurance company case. Technical report, Sentient Machine Research, Amsterdam and Leiden Institute of Advanced Computer Science, 2000-2009.
* Wallace and Dahabreh [2014] Byron C Wallace and Issa J Dahabreh. Improving class probability estimates for imbalanced data. _Knowledge and Information Systems_ , 41(1):33–52, 2014.
* Widmann et al. [2019] David Widmann, Fredrik Lindsten, and Dave Zachariah. Calibration tests in multi-class classification: A unifying framework. In _Advances in Neural Information Processing Systems_ , volume 32, 2019.
* Yang and Zhai [2022] Yanwu Yang and Panyu Zhai. Click-through rate prediction in online advertising: A literature review. _Information Processing & Management_, 59(2):102853, 2022.
* Zadrozny and Elkan [2001] Bianca Zadrozny and Charles Elkan. Obtaining calibrated probability estimates from decision trees and naive bayesian classifiers. In _Proceedings of the Eighteenth International Conference on Machine Learning_ , page 609–616, 2001.
* Zadrozny and Elkan [2002] Bianca Zadrozny and Charles Elkan. Transforming classifier scores into accurate multiclass probability estimates. In _Proceedings of the Eighth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , page 694–699, 2002.
TCE: A Test-Based Approach to Measuring Calibration Error
(Supplementary Material)
This supplement contains all the additional results referred to in the main
text. Appendix A contains the proof that the optimisation criterion of eq. 8
is indeed minimised using PAVA. Appendix B shows an example of bins obtained
using PAVA-BC that caused mild violation of the monotonic constraint of the
empirical probabilities $\\{\widehat{P}_{b}\\}_{b=1}^{B}$. Finally, additional
experimental results are presented in Appendix C.
## Appendix A Optimal Bins Based on PAVA
The optimal bins defined by Definition 3 can be exactly computed under the
error function $\mathrm{D}$ specified by eq. 9 which corresponds to the
variance of each $\mathcal{D}_{b}^{y}$. The optimal bins result in
minimisation of a weighted average of the variance of each
$\mathcal{D}_{b}^{y}$ over all $b$, where the weights are proportional to the
size of each bin. The following proposition shows that Algorithm 3 with PAVA-
BC replaced by PAVA generates the optimal bins under the error function
$\mathrm{D}$. In what follows, we assume a standard setting where the solution
of eq. 8 is not the trivial set consisting of one single bin, i.e.,
$\\{\Delta_{b}\\}_{b=1}^{1}=\\{[0,1]\\}$.
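As a concrete illustration of the criterion being minimised, the weighted average of within-bin variances in eqs. 8–9 can be computed directly from the labels ordered by predicted probability. The following Python sketch uses our own helper name (not from the paper) and represents bins by their start indices:

```python
def binning_criterion(y_sorted, starts):
    """Weighted average of within-bin variances (eqs. 8-9): for labels sorted
    by predicted probability and bins given by their start indices, compute
    (1/N) * sum_b sum_{i in bin b} (y_i - P_hat_b)^2, where P_hat_b is the
    empirical probability (mean label) of bin b."""
    ends = starts[1:] + [len(y_sorted)]
    total = 0.0
    for s, e in zip(starts, ends):
        p_hat = sum(y_sorted[s:e]) / (e - s)  # empirical probability of bin b
        total += sum((y - p_hat) ** 2 for y in y_sorted[s:e])
    return total / len(y_sorted)

# Pure bins incur zero error; mixed bins incur positive error.
print(binning_criterion([0, 0, 1, 1], [0, 2]))  # two pure bins
print(binning_criterion([0, 1, 0, 1], [0, 2]))  # two mixed bins
```

Minimising this quantity over all admissible choices of `starts` is exactly the optimisation problem that Proposition 1 below solves via PAVA.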
###### Proposition 1.
The minimum of eq. 8 in Definition 3 under the error function $\mathrm{D}$ in
eq. 9 is attained at bins computed by Algorithm 3 with PAVA-BC replaced by
PAVA.
###### Proof.
First, we show that the optimisation problem of eq. 8 in Definition 3 under
the error function $\mathrm{D}$ in eq. 9 is equivalent to the monotonic
regression problem under the squared error. Recall that, given a choice of
bins $\\{\Delta_{b}\\}_{b=1}^{B}$, each label subset $\mathcal{D}_{b}^{y}$ is
defined by $\mathcal{D}_{b}^{y}:=\\{y_{i}\in\mathcal{D}^{y}\mid
P_{\theta}(x_{i})\in\Delta_{b}\\}$. The input of Algorithm 3 is a set of
labels $\mathcal{D}^{y}=\\{y_{i}\\}_{i=1}^{N}$ ordered in ascending order
of $\\{P_{\theta}(x_{i})\\}_{i=1}^{N}$. This means that each label subset
$\mathcal{D}_{b}^{y}$ is a set of consecutive elements in the ordered set
$\\{y_{i}\\}_{i=1}^{N}$. Therefore, there exist corresponding indices $n_{b}$
and $n_{b+1}$ s.t. each label subset $\mathcal{D}_{b}^{y}$ can be expressed by
$\displaystyle\mathcal{D}_{b}^{y}=\\{y_{i}\in\mathcal{D}^{y}\mid
P_{\theta}(x_{i})\in\Delta_{b}\\}=\\{y_{i}\in\mathcal{D}^{y}\mid
i\leavevmode\nobreak\ \leavevmode\nobreak\ \text{s.t.}\leavevmode\nobreak\
\leavevmode\nobreak\ n_{b}\leq i<n_{b+1}\\}.$
Accordingly, with the ordered labels $\mathcal{D}^{y}$, each empirical
probability $\widehat{P}_{b}$ in $\mathcal{D}_{b}^{y}$ can be expressed by
$\displaystyle\widehat{P}_{b}=\frac{1}{N_{b}}\sum_{y\in\mathcal{D}_{b}^{y}}y=\frac{1}{n_{b+1}-n_{b}}\sum_{j=n_{b}}^{n_{b+1}-1}y_{j}.$
Define a set of scalars $\\{g_{i}\\}_{i=1}^{N}$ whose element $g_{i}\in[0,1]$
corresponds to the empirical probability $\widehat{P}_{b}$ of the bin index
$b$ if $n_{b}\leq i<n_{b+1}$. Namely,
$\displaystyle
g_{i}:=\widehat{P}_{b}=\frac{1}{n_{b+1}-n_{b}}\sum_{j=n_{b}}^{n_{b+1}-1}y_{j}\quad\text{for
each}\quad i\quad\text{s.t.}\quad n_{b}\leq i<n_{b+1}.$ (10)
Under these notations, the optimisation criterion in eq. 8 can be rewritten as
$\displaystyle\sum_{b=1}^{B}W_{b}\times\mathrm{D}(\mathcal{D}_{b},\widehat{P}_{b})$
$\displaystyle=\frac{1}{N}\sum_{b=1}^{B}\sum_{y\in\mathcal{D}_{b}}\left(y-\widehat{P}_{b}\right)^{2}=\frac{1}{N}\sum_{b=1}^{B}\sum_{i=n_{b}}^{n_{b+1}-1}\left(y_{i}-\widehat{P}_{b}\right)^{2}=\frac{1}{N}\sum_{i=1}^{N}(y_{i}-g_{i})^{2}.$
(11)
This formulation translates a problem of choosing bins
$\\{\Delta_{b}\\}_{b=1}^{B}$ into a problem of finding a monotonically
increasing sequence $\\{g_{i}\\}_{i=1}^{N}$, determined by the choice
of indices $\\{n_{b}\\}_{b=1}^{B}$, such that eq. 11 is minimised. Therefore the
optimisation problem of eq. 8 in Definition 3 under the error function
$\mathrm{D}$ in eq. 9 is equivalent to the monotonic regression problem under
the squared error whose solution sequence $\\{g_{i}\\}_{i=1}^{N}$ is restricted
to the form of eq. 10.
Next, consider a standard monotonic regression problem under the squared error
$\sum_{i=1}^{N}(y_{i}-\widehat{y}_{i})^{2}$ for the ordered set
$\\{y_{i}\\}_{i=1}^{N}$. PAVA finds a monotonically increasing sequence
$\\{\widehat{y}_{i}\\}_{i=1}^{N}$ that minimises the squared error. The
solution sequence $\\{\widehat{y}_{i}\\}_{i=1}^{N}$ by PAVA is given in the form
of eq. 10; see e.g. [de Leeuw et al., 2009, Henzi et al., 2022]. This means
that there exists a set of indices $\\{n_{b}^{*}\\}_{b=1}^{B}$ s.t. the
solution sequence $\\{\widehat{y}_{i}\\}_{i=1}^{N}$ by PAVA is expressed as
$\displaystyle\widehat{y}_{i}=\frac{1}{n_{b+1}^{*}-n_{b}^{*}}\sum_{j=n_{b}^{*}}^{n_{b+1}^{*}-1}y_{j}\quad\text{for
each}\quad i\quad\text{s.t.}\quad n_{b}^{*}\leq i<n_{b+1}^{*}$
and the sequence $\\{\widehat{y}_{i}\\}_{i=1}^{N}$ satisfies the monotonic
constraint $\widehat{y}_{1}\leq\dots\leq\widehat{y}_{N}$. Such a solution
sequence $\\{\widehat{y}_{i}\\}_{i=1}^{N}$ can be obtained by applying any
standard implementation of PAVA.
The output of most implementations of PAVA is the solution sequence
$\\{\widehat{y}_{i}\\}_{i=1}^{N}$ rather than the associated indices
$\\{n_{b}^{*}\\}_{b=1}^{B}$. However, the indices $\\{n_{b}^{*}\\}_{b=1}^{B}$
can easily be recovered from a given solution sequence
$\\{\widehat{y}_{i}\\}_{i=1}^{N}$ of PAVA by finding all indices $i$
s.t. $\widehat{y}_{i}\neq\widehat{y}_{i+1}$. Finally, we consider constructing
bins $\\{\Delta_{b}\\}_{b=1}^{B}$ based on the recovered indices
$\\{n_{b}^{*}\\}_{b=1}^{B}$. Recall that the set of labels
$\mathcal{D}^{y}=\\{y_{i}\\}_{i=1}^{N}$ are ordered in ascending order of
$\\{P_{\theta}(x_{i})\\}_{i=1}^{N}$. If we construct each bin $\Delta_{b}$ by
$\displaystyle\Delta_{b}:=\left[\frac{P_{\theta}(x_{n_{b}^{*}-1})+P_{\theta}(x_{n_{b}^{*}})}{2},\frac{P_{\theta}(x_{n_{b+1}^{*}-1})+P_{\theta}(x_{n_{b+1}^{*}})}{2}\right],$
it is sufficient to generate each label subset $\mathcal{D}_{b}^{y}$ that
corresponds to
$\displaystyle\mathcal{D}_{b}^{y}=\\{y_{i}\in\mathcal{D}^{y}\mid
P_{\theta}(x_{i})\in\Delta_{b}\\}=\\{y_{i}\in\mathcal{D}^{y}\mid
i\leavevmode\nobreak\ \leavevmode\nobreak\ \text{s.t.}\leavevmode\nobreak\
\leavevmode\nobreak\ n_{b}^{*}\leq i<n_{b+1}^{*}\\}.$
Then the optimisation criterion in eq. 8, which is translated to the error of
the monotonic regression problem of PAVA, is minimised by the choice of bins
produced in this procedure. Observing that Algorithm 3 with PAVA-BC replaced
by PAVA performs this procedure concludes the proof. ∎
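The procedure in the proof—run PAVA on the ordered labels, then recover the block start indices $\\{n_{b}^{*}\\}_{b=1}^{B}$ from the runs of equal values in the solution sequence—can be sketched in a few lines of Python. This is a self-contained illustration with our own function names, not the paper's implementation:

```python
def pava(y):
    """Pool Adjacent Violators: least-squares isotonic fit of the ordered
    labels y. Each block stores [sum, count]; adjacent blocks are merged
    while their means violate monotonicity."""
    blocks = []
    for v in y:
        blocks.append([float(v), 1])
        while len(blocks) > 1 and \
                blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)  # each index in a block gets the block mean
    return fit

def block_starts(fit):
    """Recover the indices n_b^* where each block begins, by finding all i
    with fit[i] != fit[i+1]."""
    return [0] + [i + 1 for i in range(len(fit) - 1) if fit[i] != fit[i + 1]]

y = [0, 1, 0, 1, 1]          # labels ordered by predicted probability
fit = pava(y)                # [0.0, 0.5, 0.5, 1.0, 1.0]
starts = block_starts(fit)   # [0, 1, 3]
```

Bin boundaries $\Delta_{b}$ can then be placed at the midpoints of the predicted probabilities straddling each recovered start index, as in the construction above.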
## Appendix B Mild Violation of Monotonicity by PAVA-BC
A monotonic regression algorithm finds a monotonically increasing sequence
$\widehat{y}_{1}\leq\cdots\leq\widehat{y}_{N}$ that minimises some error
$\mathrm{D}(\\{\widehat{y}_{i}\\}_{i=1}^{N},\\{y_{i}\\}_{i=1}^{N})$ for a
given ordered set $\\{y_{i}\\}_{i=1}^{N}$. PAVA is one of the most common
monotonic regression algorithms and uses the squared error
$\sum_{i=1}^{N}(\widehat{y}_{i}-y_{i})^{2}$. For some partition $\mathcal{A}$
of the indices $I=\\{1,\dots,N\\}$ whose elements $A\in\mathcal{A}$ are sets of
consecutive indices in $I$, PAVA produces a solution sequence s.t. each
element $\widehat{y}_{i}$ is given by $\widehat{y}_{i}=(1/|A|)\sum_{j\in
A}y_{j}$ for the $A$ s.t. $i\in A$. We refer to each element $A$ of the
partition $\mathcal{A}$ of the indices $I$ as a _block_. PAVA-BC produces a solution
sequence that approximates the solution sequence of PAVA under constraints
on the minimum and maximum size of each block. For some partition
$\mathcal{A}^{\prime}$ of the indices $I$, each element $\widehat{y}_{i}$ of the
solution sequence is given by $\widehat{y}_{i}=(1/|A^{\prime}|)\sum_{j\in
A^{\prime}}y_{j}$ for the $A^{\prime}$ s.t. $i\in A^{\prime}$, in the same
manner as PAVA. PAVA-BC meets the minimum and maximum size constraints on each
block $A^{\prime}\in\mathcal{A}^{\prime}$ at the cost of possible mild
violation of the monotonic constraint. Whether PAVA-BC violates the monotonic
constraint depends on the minimum and maximum size constraints, the data, and
the model. Figure 3 shows an example where bins based on
PAVA-BC did not violate the monotonicity of the empirical probabilities
$\\{\widehat{P}_{b}\\}_{b=1}^{B}$. Figure 3 was computed using a random forest
model trained on the _satimage_ dataset used in Section 4.2, and corresponds
to Figure 2 presented in Section 3. The total estimation error in eq. 8 and an
average of the estimation error within each bin in eq. 9 for each set of the
bins in Figure 3 are summarised in Table 1 presented in Section 3. Figure 4
shows an example where bins based on PAVA-BC violated the monotonic constraint
of the empirical probabilities $\\{\widehat{P}_{b}\\}_{b=1}^{B}$. Figure 4 was
computed using a random forest model trained on the _coil_2000_ dataset used
in Section 4.2. The total estimation error in eq. 8 for each set of the bins
in Figure 4 was $0.0509$, $0.0517$, and $0.0521$ for PAVA, PAVA-BC, and binning
based on $10$-quantiles, respectively. An average of the estimation error
within each bin in eq. 9 for each set of the bins in Figure 4 was $0.0834$,
$0.0627$, and $0.0520$ for PAVA, PAVA-BC, and binning based on $10$-quantiles,
respectively.
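Whether a given set of bins violates the monotonic constraint is easy to check directly: compute the empirical probability of each bin from the ordered labels and test that the sequence is non-decreasing. A minimal sketch with our own helper names (PAVA's output always passes this check; PAVA-BC's output may not):

```python
def empirical_probs(y_sorted, starts):
    """Empirical probability P_hat_b of each bin, for labels sorted by
    predicted probability and bins given by their start indices."""
    ends = starts[1:] + [len(y_sorted)]
    return [sum(y_sorted[s:e]) / (e - s) for s, e in zip(starts, ends)]

def is_monotone(probs):
    """True iff the empirical probabilities are non-decreasing."""
    return all(a <= b for a, b in zip(probs, probs[1:]))

probs = empirical_probs([0, 0, 1, 0, 1, 1], [0, 2, 4])
print(probs, is_monotone(probs))
```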
Figure 3: Comparison of bins based on three different approaches for a random
forest model on the satimage dataset: (top) PAVA, (middle) PAVA-BC, (bottom)
binning based on $10$-quantiles. The dotted line in the left and right panels
represents the boundary of each bin. The grey bar in the left panel represents
the size of each bin. The red line in the right panel represents the empirical
probability of each bin.
Figure 4: Comparison of bins based on three different approaches for a random
forest model on the coil_2000 dataset: (top) PAVA, (middle) PAVA-BC, (bottom)
binning based on $10$-quantiles. Each x-axis is restricted to the range
$[0.0,0.1]$ as the majority of bins were contained in this range in this
example. The dotted line in the left and right panels represents the boundary
of each bin. The grey bar in the left panel represents the size of each bin.
The red line in the right panel represents the empirical probability of each
bin.
## Appendix C Additional Experiments
We present additional experiments in each section that complement the
experiments illustrated in the main text. We use the same settings as the main
text for the minimum and maximum size for bins based on PAVA-BC as well as the
bin number $B$ for equi-spaced and quantile-based bins.
### C.1 Simulation Study of TCE
We perform detailed simulation studies of TCE in the same simplified setting
as Section 4.1. We demonstrate the sensitivity of TCE to its hyperparameters,
the impact of different dataset sizes and prevalences, and the sensitivity to
small perturbations of model predictions. In all experiments, we generated
training and test data from the Gaussian discriminant analysis in Section 4.1,
each with the prevalence $P_{\text{training}}(y)$ and $P_{\text{test}}(y)$, and
compute TCE of a logistic model fitted to the training data. In all
experiments except the ones on the impact of dataset size and prevalence,
we set the training data size to $14000$ and the test data size to $6000$.
We then examine two cases where the model is synthetically calibrated and
miscalibrated, setting $P_{\text{training}}(y)=0.5$ and
$P_{\text{test}}(y)=0.5$ for the first case and setting
$P_{\text{training}}(y)=0.5$ and $P_{\text{test}}(y)=0.4$ for the second case.
In summary, we present the following experimental analyses:
* •
Sensitivity to the minimum bin size $N_{\text{min}}$ in PAVA-BC from
$N_{\text{min}}=1$ to $N_{\text{min}}=3000$;
* •
Sensitivity to the maximum bin size $N_{\text{max}}$ in PAVA-BC from
$N_{\text{max}}=6$ to $N_{\text{max}}=6000$;
* •
Sensitivity to a pair of $(N_{\text{min}},N_{\text{max}})$ in PAVA-BC chosen
so that the number of bins produced falls into selected ranges;
* •
Sensitivity to a small perturbation of predictions by a logit-normal noise with
scale $\sigma$ from $\sigma=0.0$ to $\sigma=1.0$;
* •
Sensitivity to the choice of significance level $\alpha$ in the Binomial test
from $\alpha=0.001$ to $\alpha=0.5$;
* •
Comparison of TCE by different choices of test: the Binomial test and the t-test;
* •
Comparison of TCE by different total sizes $N$ of test dataset from $N=30$ to
$N=60000$;
* •
Comparison of TCE by different prevalences $P$ of dataset from $P=0.5$ to
$P=0.02$.
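The simulation setup described above—Gaussian discriminant analysis data with controllable prevalence—can be sketched as follows. The class means and variance here are illustrative assumptions of ours, as this appendix does not restate them:

```python
import random

def sample_gda(n, prevalence, mu0=-1.0, mu1=1.0, sigma=1.0, seed=0):
    """Draw n pairs (x, y) from a 1-D Gaussian discriminant model:
    y ~ Bernoulli(prevalence), x | y ~ N(mu_y, sigma^2)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        y = 1 if rng.random() < prevalence else 0
        x = rng.gauss(mu1 if y == 1 else mu0, sigma)
        data.append((x, y))
    return data

train = sample_gda(14000, 0.5)        # P_training(y) = 0.5
test = sample_gda(6000, 0.4, seed=1)  # P_test(y) = 0.4: synthetic miscalibration
```

Fitting a logistic model to `train` and scoring it on `test` then reproduces the calibrated (matching prevalences) and miscalibrated (shifted test prevalence) regimes studied below.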
Tables 6, 7, 8, 9, 10, 11, 12 and 13 present the results of each experiment
above in order. In each table, TCE(P) denotes TCE based on PAVA-BC, TCE(Q)
denotes TCE based on quantile-binning, and TCE(V) denotes TCE based on PAVA.
For reference, we include values of ECE, ACE, MCE, and MCE(Q), where MCE(Q)
denotes MCE based on quantile-binning. Observations from each result are
summarised as follows.
* •
Table 6: The performance of TCE(P) in evidencing the well-calibrated model was
consistently reasonable for any minimum bin size constraint between
$N_{\text{min}}=1$ and $N_{\text{min}}=600$, while there was a breakdown point
between $N_{\text{min}}=600$ and $N_{\text{min}}=3000$ where TCE(P) was no
longer able to do so. This is likely because the number of bins produced under
the constraint $N_{\text{min}}=3000$ for the total data size $6000$ was at most
$2$, which was too small to estimate the empirical probabilities
$\\{\widehat{P}_{b}\\}_{b=1}^{B}$ accurately.
* •
Table 7: The performance of TCE(P) in evidencing the miscalibrated model was
consistently reasonable for any maximum bin size constraint between
$N_{\text{max}}=300$ and $N_{\text{max}}=6000$, while there was a breakdown
point between $N_{\text{max}}=60$ and $N_{\text{max}}=300$ where TCE(P) was no
longer able to do so. This is likely because the number of bins produced under
the constraint $N_{\text{max}}=60$ for the total data size $6000$ was at least
$100$, which is too large to estimate the empirical probabilities
$\\{\widehat{P}_{b}\\}_{b=1}^{B}$ accurately.
* •
Table 8: The performance of TCE(P) in evidencing both the well-calibrated and
miscalibrated models was arguably the most reasonable when
$(N_{\text{min}},N_{\text{max}})$ was chosen so that the number of bins
produced falls into the range $[5,20]$. This suggests a heuristic of using such
$(N_{\text{min}},N_{\text{max}})$ for other experiments.
* •
Table 9: At each model prediction $P_{\theta}(x)$, we sample a new prediction
from a logit-normal distribution centred at $P_{\theta}(x)$ with scale
$\sigma$ to generate a prediction perturbed by a small noise. All calibration
error metrics were shown to have similar sensitivities to the noise. The scale
between $\sigma=0.10$ and $\sigma=0.50$ was the breakdown point where each
metric started to produce an unreasonable score for the well-calibrated model.
* •
Table 10: The performance of TCE(P) in evidencing both the well-calibrated and
miscalibrated models was consistently reasonable for any significance level
between $\alpha=0.001$ and $\alpha=0.1$, while there was a breakdown point
between $\alpha=0.1$ and $\alpha=0.5$ where TCE(P) was no longer able to do so
for the well-calibrated model.
* •
Table 11: TCE based on the Binomial test outperformed one based on the t-test
in the majority of the settings. It is possible that the Binomial test
produces more accurate outcomes than the t-test, given that it is an exact
test whose test statistic does not involve any approximation.
* •
Table 12: The performance of TCE(P) in evidencing both the well-calibrated and
miscalibrated models was consistently reasonable for any dataset size between
$N_{\text{test}}=3000$ and $N_{\text{test}}=60000$, while there was a
breakdown point between $N_{\text{test}}=600$ and $N_{\text{test}}=3000$ where
TCE(P) was no longer able to do so for the well-calibrated model. This is
likely because the dataset size $N_{\text{test}}=600$ was not big enough to
estimate the empirical probabilities $\\{\widehat{P}_{b}\\}_{b=1}^{B}$
accurately. This result may be improved by using different settings of the
minimum and maximum bin size constraints.
* •
Table 13: The performance of TCE(P) on both the well-calibrated and
miscalibrated models was reasonable for any prevalence. While there was a
fluctuation in the values of TCE(P) for different prevalences, TCE(P)
overall produced better values than TCE(Q) and TCE(V).
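The logit-normal perturbation examined in Table 9 can be sketched as follows: the prediction is mapped to logit space, Gaussian noise of scale $\sigma$ is added, and the result is mapped back through the sigmoid. This is a hedged reconstruction of ours; the text only states that the new prediction is sampled from a logit-normal centred at $P_{\theta}(x)$:

```python
import math
import random

def perturb_logit_normal(p, sigma, rng):
    """Sample a perturbed prediction from a logit-normal distribution
    centred (in logit space) at p with scale sigma."""
    eps = 1e-12
    p = min(max(p, eps), 1.0 - eps)   # guard against p in {0, 1}
    z = rng.gauss(math.log(p / (1.0 - p)), sigma)
    return 1.0 / (1.0 + math.exp(-z))

rng = random.Random(0)
# sigma = 0 leaves the prediction (numerically) unchanged;
# larger sigma yields a noisy prediction still confined to (0, 1).
print(perturb_logit_normal(0.3, 0.0, rng))
print(perturb_logit_normal(0.3, 0.5, rng))
```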
Table 6: Sensitivity to the minimum binsize $N_{\text{min}}=1,6,30,60,300,600,3000$ in PAVA-BC. For comparison purposes, the number of bins $B$ of quantile-binning and equispaced-binning was varied as $B=1000,500,100,50,10,5,1$ along with $N_{\text{min}}$. Note that TCE(V) is constant across all rows because PAVA does not involve any binsize constraint. Test Prevalence | Min Binsize | TCE(P) | TCE(Q) | TCE(V) | ECE | ACE | MCE | MCE(Q)
---|---|---|---|---|---|---|---|---
50% (Calibrated) | 1 | 3.4500 | 5.1000 | 3.4500 | 0.1143 | 0.1651 | 0.8767 | 0.6392
6 | 3.3833 | 4.2000 | 3.4500 | 0.0839 | 0.1142 | 0.8767 | 0.5016
30 | 2.3500 | 4.3000 | 3.4500 | 0.0382 | 0.0457 | 0.8767 | 0.1705
60 | 2.6333 | 3.5667 | 3.4500 | 0.0271 | 0.0370 | 0.2533 | 0.1189
300 | 7.2833 | 10.8833 | 3.4500 | 0.0138 | 0.0150 | 0.1020 | 0.0528
600 | 13.5667 | 38.7500 | 3.4500 | 0.0116 | 0.0086 | 0.1020 | 0.0236
3000 | 92.2000 | 92.2000 | 3.4500 | 0.0021 | 0.0021 | 0.0021 | 0.0021
40% (Miscalibrated) | 1 | 88.0667 | 6.6667 | 88.0667 | 0.1417 | 0.1847 | 0.8767 | 0.6111
6 | 88.0667 | 8.7000 | 88.0667 | 0.1179 | 0.1389 | 0.8767 | 0.4811
30 | 88.3333 | 32.2833 | 88.0667 | 0.0993 | 0.0992 | 0.8767 | 0.2264
60 | 87.8667 | 56.7667 | 88.0667 | 0.0971 | 0.0964 | 0.2426 | 0.1827
300 | 96.1000 | 96.4667 | 88.0667 | 0.0963 | 0.0951 | 0.1466 | 0.1314
600 | 96.6000 | 96.7833 | 88.0667 | 0.0963 | 0.0951 | 0.1099 | 0.1092
3000 | 93.9500 | 93.9500 | 88.0667 | 0.0951 | 0.0951 | 0.0951 | 0.0951
Table 7: Sensitivity to the maximum binsize $N_{\text{max}}=6,30,60,300,600,3000,6000$ in PAVA-BC. For comparison purposes, the number of bins $B$ of quantile-binning and equispaced-binning was varied as $B=1000,500,100,50,10,5,1$ along with $N_{\text{max}}$. Note that TCE(V) is constant across all rows because PAVA does not involve any binsize constraint. Test Prevalence | Max Binsize | TCE(P) | TCE(Q) | TCE(V) | ECE | ACE | MCE | MCE(Q)
---|---|---|---|---|---|---|---|---
50% (Calibrated) | 6 | 5.8500 | 5.1000 | 3.4500 | 0.1143 | 0.1651 | 0.8767 | 0.6392
30 | 3.0000 | 4.2000 | 3.4500 | 0.0839 | 0.1142 | 0.8767 | 0.5016
60 | 2.3667 | 4.3000 | 3.4500 | 0.0382 | 0.0457 | 0.8767 | 0.1705
300 | 3.7667 | 3.5667 | 3.4500 | 0.0271 | 0.0370 | 0.2533 | 0.1189
600 | 3.3833 | 10.8833 | 3.4500 | 0.0138 | 0.0150 | 0.1020 | 0.0528
3000 | 3.4500 | 38.7500 | 3.4500 | 0.0116 | 0.0086 | 0.1020 | 0.0236
6000 | 3.4500 | 92.2000 | 3.4500 | 0.0021 | 0.0021 | 0.0021 | 0.0021
40% (Miscalibrated) | 6 | 5.5000 | 6.6667 | 88.0667 | 0.1417 | 0.1847 | 0.8767 | 0.6111
30 | 9.1000 | 8.7000 | 88.0667 | 0.1179 | 0.1389 | 0.8767 | 0.4811
60 | 14.3833 | 32.2833 | 88.0667 | 0.0993 | 0.0992 | 0.8767 | 0.2264
300 | 79.6667 | 56.7667 | 88.0667 | 0.0971 | 0.0964 | 0.2426 | 0.1827
600 | 85.6500 | 96.4667 | 88.0667 | 0.0963 | 0.0951 | 0.1466 | 0.1314
3000 | 88.0667 | 96.7833 | 88.0667 | 0.0963 | 0.0951 | 0.1099 | 0.1092
6000 | 88.0667 | 93.9500 | 88.0667 | 0.0951 | 0.0951 | 0.0951 | 0.0951
Table 8: Sensitivity to the pairs $(N_{\text{min}},N_{\text{max}})$ in PAVA-BC selected so that the number of bins produced falls into the ranges $[250,1000],[50,200],[25,100],[10,20],[3,10]$. For comparison purposes, the number of bins $B$ of quantile-binning and equispaced-binning was varied as $B=1000,500,100,50,10,5,1$ along with $(N_{\text{min}},N_{\text{max}})$. Note that TCE(V) is constant across all rows because PAVA does not involve any binsize constraint. Test Prevalence | Binsize Range | TCE(P) | TCE(Q) | TCE(V) | ECE | ACE | MCE | MCE(Q)
---|---|---|---|---|---|---|---|---
50% (Calibrated) | [250, 1000] | 3.8000 | 4.2000 | 3.4500 | 0.0839 | 0.1142 | 0.8767 | 0.5016
[50, 200] | 1.8333 | 4.3000 | 3.4500 | 0.0382 | 0.0457 | 0.8767 | 0.1705
[25, 100] | 0.2833 | 3.5667 | 3.4500 | 0.0271 | 0.0370 | 0.2533 | 0.1189
[5, 20] | 7.2833 | 10.8833 | 3.4500 | 0.0138 | 0.0150 | 0.1020 | 0.0528
[3, 10] | 13.5667 | 38.7500 | 3.4500 | 0.0116 | 0.0086 | 0.1020 | 0.0236
40% (Miscalibrated) | [250, 1000] | 7.7333 | 8.7000 | 88.0667 | 0.1179 | 0.1389 | 0.8767 | 0.4811
[50, 200] | 45.7667 | 32.2833 | 88.0667 | 0.0993 | 0.0992 | 0.8767 | 0.2264
[25, 100] | 66.1833 | 56.7667 | 88.0667 | 0.0971 | 0.0964 | 0.2426 | 0.1827
[10, 20] | 96.1000 | 96.4667 | 88.0667 | 0.0963 | 0.0951 | 0.1466 | 0.1314
[3, 10] | 96.6000 | 96.7833 | 88.0667 | 0.0963 | 0.0951 | 0.1099 | 0.1092
Table 9: Sensitivity to a small perturbation of model predictions by a logit-normal noise with scale $\sigma=0.01,0.05,0.10,0.50,1.00$. The maximum and minimum binsize of PAVA-BC were set to $1200$ and $300$. The number of bins of quantile-binning and equispaced-binning was set to $10$. Test Prevalence | Noise Level | TCE(P) | TCE(Q) | TCE(V) | ECE | ACE | MCE | MCE(Q)
---|---|---|---|---|---|---|---|---
50% (Calibrated) | 0.00 | 7.2833 | 10.8833 | 3.4500 | 0.0138 | 0.0150 | 0.1020 | 0.0528
0.01 | 8.7167 | 9.6167 | 4.8000 | 0.0113 | 0.0125 | 0.0923 | 0.0527
0.05 | 12.8833 | 11.9000 | 7.7667 | 0.0136 | 0.0156 | 0.1198 | 0.0589
0.10 | 8.3500 | 13.0500 | 3.5500 | 0.0109 | 0.0164 | 0.1143 | 0.0587
0.50 | 61.9500 | 65.0500 | 56.1000 | 0.0615 | 0.0618 | 0.3601 | 0.1498
1.00 | 86.1833 | 84.1000 | 88.3833 | 0.1470 | 0.1478 | 0.3364 | 0.2621
40% (Miscalibrated) | 0.00 | 96.1000 | 96.4667 | 88.0667 | 0.0963 | 0.0951 | 0.1466 | 0.1314
0.01 | 96.4000 | 96.4000 | 89.6167 | 0.0962 | 0.0951 | 0.1511 | 0.1332
0.05 | 94.7667 | 95.5333 | 89.1667 | 0.0962 | 0.0951 | 0.1496 | 0.1420
0.10 | 93.8500 | 95.9667 | 86.5833 | 0.0967 | 0.0951 | 0.1852 | 0.1412
0.50 | 86.6667 | 83.9000 | 81.2667 | 0.1071 | 0.1055 | 0.2513 | 0.2203
1.00 | 90.3167 | 88.8500 | 91.2167 | 0.1713 | 0.1698 | 0.4577 | 0.3648
Table 10: Sensitivity to the choice of significance level $\alpha=0.001,0.005,0.01,0.05,0.1,0.5$. The maximum and minimum binsize of PAVA-BC were set to $1200$ and $300$. The number of bins of quantile-binning and equispaced-binning was set to $10$. Test Prevalence | Significance Level | TCE(P) | TCE(Q) | TCE(V) | ECE | ACE | MCE | MCE(Q)
---|---|---|---|---|---|---|---|---
50% (Calibrated) | 0.001 | 1.4500 | 4.6833 | 0.1833 | 0.0138 | 0.0150 | 0.1020 | 0.0528
0.005 | 2.4667 | 5.5667 | 1.1333 | 0.0138 | 0.0150 | 0.1020 | 0.0528
0.010 | 3.0500 | 6.2000 | 1.6500 | 0.0138 | 0.0150 | 0.1020 | 0.0528
0.050 | 7.2833 | 10.8833 | 3.4500 | 0.0138 | 0.0150 | 0.1020 | 0.0528
0.100 | 12.8500 | 15.2000 | 6.8667 | 0.0138 | 0.0150 | 0.1020 | 0.0528
0.500 | 53.1000 | 55.3333 | 46.5667 | 0.0138 | 0.0150 | 0.1020 | 0.0528
40% (Miscalibrated) | 0.001 | 77.8000 | 83.3833 | 76.1000 | 0.0963 | 0.0951 | 0.1466 | 0.1314
0.005 | 86.3000 | 92.8000 | 80.0833 | 0.0963 | 0.0951 | 0.1466 | 0.1314
0.010 | 90.1833 | 95.2167 | 83.1167 | 0.0963 | 0.0951 | 0.1466 | 0.1314
0.050 | 96.1000 | 96.4667 | 88.0667 | 0.0963 | 0.0951 | 0.1466 | 0.1314
0.100 | 97.2167 | 96.9167 | 90.1667 | 0.0963 | 0.0951 | 0.1466 | 0.1314
0.500 | 99.3000 | 98.7167 | 97.7500 | 0.0963 | 0.0951 | 0.1466 | 0.1314
Table 11: Comparison of TCE based on the Binomial test and the t-test. TCE(Q)-B denotes TCE(Q) based on the Binomial test and TCE(Q)-T denotes TCE(Q) based on the t-test; the same applies to the other columns. The maximum and minimum binsize of PAVA-BC and the number of bins of quantile-binning and equispaced-binning were varied as in Table 8. Test Prevalence | Binsize Range | TCE(P)-B | TCE(P)-T | TCE(Q)-B | TCE(Q)-T | TCE(V)-B | TCE(V)-T
---|---|---|---|---|---|---|---
50% (Calibrated) | [250, 1000] | 3.8000 | 33.6667 | 4.2000 | 31.9167 | 3.4500 | 34.2167
[50, 200] | 1.8333 | 36.0000 | 4.3000 | 31.4333 | 3.4500 | 34.2167
[25, 100] | 0.2833 | 31.3667 | 3.5667 | 40.4333 | 3.4500 | 34.2167
[5, 20] | 7.2833 | 37.8000 | 10.8833 | 41.8500 | 3.4500 | 34.2167
[3, 10] | 13.5667 | 46.5000 | 38.7500 | 68.8167 | 3.4500 | 34.2167
40% (Miscalibrated) | [250, 1000] | 7.7333 | 50.2833 | 8.7000 | 45.1833 | 88.0667 | 97.7333
[50, 200] | 45.7667 | 73.2667 | 32.2833 | 71.1667 | 88.0667 | 97.7333
[25, 100] | 66.1833 | 96.5333 | 56.7667 | 85.3833 | 88.0667 | 97.7333
[5, 20] | 96.1000 | 99.2667 | 96.4667 | 98.4833 | 88.0667 | 97.7333
[3, 10] | 96.6000 | 98.6333 | 96.7833 | 98.4833 | 88.0667 | 97.7333
Table 12: Comparison of TCE by different total sizes $N_{\text{test}}=30,60,300,600,3000,6000,30000,60000$ of the test dataset. The training prevalence was $P_{\text{training}}(y)=0.5$ for all datasets. The minimum and maximum binsize of PAVA-BC were set to $N_{\text{min}}=N_{\text{test}}/20$ and $N_{\text{max}}=N_{\text{test}}/5$. The number of bins of quantile-binning and equispaced-binning was set to $10$. Test Prevalence | Data Size | TCE(P) | TCE(Q) | TCE(V) | ECE | ACE | MCE | MCE(Q)
---|---|---|---|---|---|---|---|---
50% (Calibrated) | 30 | 0.0000 | 0.0000 | 0.0000 | 0.2293 | 0.2631 | 0.4164 | 0.5660
60 | 0.0000 | 3.3333 | 0.0000 | 0.0923 | 0.2158 | 0.7148 | 0.4208
300 | 5.3333 | 11.0000 | 6.3333 | 0.0774 | 0.0867 | 0.1971 | 0.2057
600 | 1.0000 | 4.5000 | 1.6667 | 0.0368 | 0.0445 | 0.3404 | 0.1270
3000 | 8.0667 | 4.6333 | 4.7667 | 0.0190 | 0.0182 | 0.1209 | 0.0304
6000 | 7.2833 | 10.8833 | 3.4500 | 0.0138 | 0.0150 | 0.1020 | 0.0528
30000 | 16.1633 | 31.7167 | 0.7833 | 0.0036 | 0.0061 | 0.9045 | 0.0164
60000 | 19.1483 | 45.7600 | 4.4417 | 0.0035 | 0.0043 | 0.0949 | 0.0100
40% (Miscalibrated) | 30 | 13.3333 | 6.6667 | 36.6667 | 0.3164 | 0.3377 | 0.6569 | 0.6338
60 | 0.0000 | 3.3333 | 0.0000 | 0.1072 | 0.1611 | 0.7148 | 0.4208
300 | 27.3333 | 37.3333 | 48.3333 | 0.1240 | 0.1368 | 0.1971 | 0.2665
600 | 14.1667 | 8.0000 | 26.5000 | 0.0694 | 0.0685 | 0.5824 | 0.1350
3000 | 92.2333 | 91.7667 | 76.7667 | 0.0964 | 0.0958 | 0.1495 | 0.1358
6000 | 96.1000 | 96.4667 | 88.0667 | 0.0963 | 0.0951 | 0.1466 | 0.1314
30000 | 99.4700 | 99.2300 | 97.4433 | 0.0907 | 0.0906 | 0.9045 | 0.1064
60000 | 99.7783 | 99.6600 | 98.9000 | 0.0923 | 0.0923 | 0.0972 | 0.1065
Table 13: Comparison of TCE by different prevalences $P$ of the training and test datasets. The training data size was $14000$ and the test data size was $6000$. The maximum and minimum binsize of PAVA-BC were set to $1200$ and $300$. The number of bins of quantile-binning and equispaced-binning was set to $10$. Train - Test Prevalence | TCE(P) | TCE(Q) | TCE(V) | ECE | ACE | MCE | MCE(Q)
---|---|---|---|---|---|---|---
Calibrated | 50% - 50% | 7.2833 | 10.8833 | 3.4500 | 0.0138 | 0.0150 | 0.1020 | 0.0528
40% - 40% | 7.5500 | 16.2167 | 8.5667 | 0.0137 | 0.0191 | 0.1632 | 0.0365
30% - 30% | 8.1667 | 12.8833 | 2.8167 | 0.0125 | 0.0134 | 0.1042 | 0.0313
20% - 20% | 15.9500 | 22.2167 | 15.9167 | 0.0173 | 0.0153 | 0.6238 | 0.0370
10% - 10% | 11.9833 | 16.7833 | 15.2333 | 0.0096 | 0.0114 | 0.4361 | 0.0218
8% - 8% | 15.7000 | 18.5167 | 23.1500 | 0.0087 | 0.0107 | 0.0700 | 0.0234
6% - 6% | 11.5333 | 17.5500 | 13.9833 | 0.0035 | 0.0109 | 0.3064 | 0.0195
4% - 4% | 18.5000 | 15.6667 | 20.5833 | 0.0046 | 0.0074 | 0.2240 | 0.0177
2% - 2% | 13.1167 | 11.5500 | 20.7667 | 0.0052 | 0.0059 | 0.0052 | 0.0131
Miscalibrated | 50% - 40% | 96.1000 | 96.4667 | 88.0667 | 0.0963 | 0.0951 | 0.1466 | 0.1314
40% - 30% | 96.5667 | 96.1833 | 82.7500 | 0.0872 | 0.0869 | 0.1485 | 0.1262
30% - 20% | 94.9500 | 94.6667 | 88.5833 | 0.0846 | 0.0846 | 0.2146 | 0.1247
20% - 10% | 95.8833 | 95.5833 | 96.4333 | 0.0868 | 0.0868 | 0.6238 | 0.1500
10% - 8% | 32.3500 | 26.7000 | 42.5667 | 0.0151 | 0.0173 | 0.4361 | 0.0502
8% - 6% | 42.3167 | 38.7833 | 45.8333 | 0.0164 | 0.0186 | 0.3259 | 0.0477
6% - 4% | 47.0833 | 39.9500 | 65.9500 | 0.0167 | 0.0188 | 0.3064 | 0.0440
4% - 2% | 56.5833 | 42.4333 | 72.4500 | 0.0142 | 0.0142 | 0.2240 | 0.0337
2% - 0% | 99.9167 | 96.9000 | 100.0000 | 0.0181 | 0.0181 | 0.0181 | 0.0382
### C.2 Results on Other UCI Datasets
Algorithms in Section 4.2 are all trained with the default hyperparameters in
the scikit-learn package, except that the maximum depth in the random forest
is set to 10 and the number of hidden layers in the multilayer perceptron is
set to 1 with 1000 units. For better comparison, we add TCE based on quantile
bins, denoted TCE(Q) in each table, to the five metrics presented in the main
text. The following Table 14 compares six different calibration error metrics
computed for eight UCI datasets that were not presented in the main text:
coil_2000, isolet, letter_img, mammography, optical_digits, pen_digits,
satimage, spambase [Dua and Graff, 2017, van der Putten and van Someren,
2000-2009, Elter et al., 2007]. The prevalence of the spambase dataset is
well-balanced and that of the rest is imbalanced. The following Figures 5 and
6 show the visual representations of TCE, ECE, and ACE—the test-based
reliability diagram and the standard reliability diagram—for the logistic
regression and the gradient boosting algorithm, respectively. We selected four
datasets, abalone, coil_2000, isolet, and webpage, to produce the visual
representations in Figures 5 and 6.
### C.3 Reliability Diagrams of Results on ImageNet1000
The following Figure 7 shows the visual representations of TCE, ECE, and
ACE—the test-based reliability diagram and the standard reliability
diagram—for four different deep learning models presented in the main text,
where we omit the model ResNet50 whose result sufficiently resembles that of
ResNet18.
Table 14: Comparison of six calibration error metrics for five algorithms trained on eight UCI datasets. The same setting of TCE presented in Section 4 is used. TCE(Q) and MCE(Q) denote TCE and MCE based on quantile bins, where the number of bins is set to $10$. Data | Algorithm | TCE | TCE(Q) | ECE | ACE | MCE | MCE(Q)
---|---|---|---|---|---|---|---
coil_2000 | LR | 8.6189 | 11.7408 | 0.0047 | 0.0111 | 0.8558 | 0.0326
SVM | 17.0003 | 28.9447 | 0.0071 | 0.0216 | 0.4860 | 0.0381
RF | 22.2260 | 6.1758 | 0.0027 | 0.0125 | 0.2465 | 0.0439
GB | 20.8687 | 12.6569 | 0.0052 | 0.0098 | 0.3738 | 0.0259
MLP | 98.7445 | 98.7784 | 0.0652 | 0.0578 | 0.7900 | 0.1649
isolet | LR | 28.8462 | 27.3932 | 0.0131 | 0.0051 | 0.2183 | 0.0286
SVM | 11.5812 | 13.2479 | 0.0064 | 0.0028 | 0.1969 | 0.0194
RF | 66.4530 | 52.5214 | 0.0524 | 0.0507 | 0.3635 | 0.2137
GB | 25.2991 | 16.6667 | 0.0198 | 0.0174 | 0.4463 | 0.1123
MLP | 9.8291 | 17.5641 | 0.0049 | 0.0031 | 0.4173 | 0.0232
letter_img | LR | 10.5167 | 12.0500 | 0.0025 | 0.0008 | 0.1617 | 0.0042
SVM | 11.8667 | 14.8167 | 0.0019 | 0.0017 | 0.6257 | 0.0146
RF | 26.5000 | 20.2500 | 0.0097 | 0.0033 | 0.5179 | 0.0131
GB | 25.9500 | 18.7333 | 0.0067 | 0.0029 | 0.3653 | 0.0109
MLP | 19.9833 | 9.9833 | 0.0010 | 0.0001 | 0.4550 | 0.0007
mammography | LR | 25.0671 | 26.7660 | 0.0027 | 0.0065 | 0.3594 | 0.0208
SVM | 20.2683 | 20.1490 | 0.0067 | 0.0088 | 0.6741 | 0.0353
RF | 19.4039 | 9.2996 | 0.0047 | 0.0016 | 0.4465 | 0.0043
GB | 14.5156 | 15.4098 | 0.0061 | 0.0034 | 0.5355 | 0.0124
MLP | 20.5663 | 26.9747 | 0.0042 | 0.0027 | 0.4351 | 0.0113
optical_digits | LR | 11.6251 | 27.1649 | 0.0098 | 0.0037 | 0.2251 | 0.0135
SVM | 4.8043 | 10.6762 | 0.0042 | 0.0028 | 0.6608 | 0.0157
RF | 49.6441 | 38.3155 | 0.0451 | 0.0433 | 0.5432 | 0.2271
GB | 13.0486 | 11.2693 | 0.0181 | 0.0168 | 0.5639 | 0.1122
MLP | 4.6856 | 12.1590 | 0.0037 | 0.0034 | 0.5992 | 0.0306
pen_digits | LR | 20.4063 | 23.1049 | 0.0121 | 0.0060 | 0.1652 | 0.0252
SVM | 9.7635 | 10.2790 | 0.0017 | 0.0010 | 0.4735 | 0.0068
RF | 29.6240 | 22.8623 | 0.0152 | 0.0132 | 0.4535 | 0.0592
GB | 9.9151 | 13.0988 | 0.0077 | 0.0058 | 0.6543 | 0.0303
MLP | 9.4603 | 10.0061 | 0.0014 | 0.0004 | 0.6457 | 0.0037
satimage | LR | 23.6665 | 23.0968 | 0.0215 | 0.0223 | 0.7312 | 0.0767
SVM | 10.2020 | 21.8540 | 0.0229 | 0.0163 | 0.1666 | 0.0870
RF | 29.1041 | 20.1450 | 0.0265 | 0.0214 | 0.2084 | 0.1328
GB | 23.2004 | 19.8861 | 0.0154 | 0.0235 | 0.2101 | 0.0902
MLP | 58.0528 | 58.4671 | 0.0352 | 0.0328 | 0.5049 | 0.1384
spambase | LR | 33.6713 | 56.1188 | 0.0256 | 0.0267 | 0.1539 | 0.0895
SVM | 12.8168 | 34.5402 | 0.0177 | 0.0227 | 0.2207 | 0.0465
RF | 66.0391 | 49.4569 | 0.0635 | 0.0601 | 0.2056 | 0.1616
GB | 20.2028 | 20.4200 | 0.0295 | 0.0277 | 0.1409 | 0.0891
MLP | 60.9703 | 67.1253 | 0.0413 | 0.0397 | 0.2931 | 0.1076
(a) abalone
(b) coil_2000
(c) isolet
(d) webpage
Figure 5: Comparison of visual representations of TCE, ECE and ACE for the
logistic regression algorithm. (Left) The test-based reliability diagram of
TCE. (Middle) The reliability diagram of ECE. (Right) The reliability diagram
of ACE. Each row corresponds to a result on the dataset: (a) abalone, (b)
coil_2000, (c) isolet, and (d) webpage.
(a) abalone
(b) coil_2000
(c) isolet
(d) webpage
Figure 6: Comparison of visual representations of TCE, ECE and ACE for the
gradient boosting algorithm. (Left) The test-based reliability diagram of
TCE. (Middle) The reliability diagram of ECE. (Right) The reliability diagram
of ACE. Each row corresponds to a result on the dataset: (a) abalone, (b)
coil_2000, (c) isolet, and (d) webpage.
Figure 7: Comparison of visual representations of TCE, ECE, and ACE on the ImageNet 1000 dataset. (Left) The test-based reliability diagram of TCE. (Middle) The reliability diagram of ECE. (Right) The reliability diagram of ACE. Each row corresponds to a result for the model: (a) AlexNet, (b) VGG19, (c) ResNet 18, and (d) ResNet 152.
[7]Equal contributions. Work was done during Zhenglin's visit to Westlake University.
The Sparse Mixture of Experts (SMoE) has been widely employed to enhance the efficiency of training and inference for Transformer-based foundational models, yielding promising results.
However, the performance of SMoE heavily depends on the choice of hyper-parameters, such as the number of experts and the number of experts to activate per token (referred to as top-$k$); searching over hyper-parameter configurations requires training many models and thus incurs significant computational overhead.
As a remedy, we introduce the Dynamic Mixture of Experts (DynMoE) technique.
DynMoE incorporates (1) a novel gating method that enables each token to automatically determine the number of experts to activate, and (2) an adaptive process that automatically adjusts the number of experts during training.
Extensive numerical results across Vision, Language, and Vision-Language tasks demonstrate the effectiveness of our approach: DynMoE achieves performance competitive with GMoE on vision and language tasks and with MoE-LLaVA on vision-language tasks, while maintaining efficiency by activating fewer parameters.
Our code is available at <https://github.com/LINs-lab/DynMoE>.
§ INTRODUCTION
Illustration of performance fluctuation on various MoE settings.
We carried out experiments on the GLUE benchmark [49], employing BERT-large [8] as the backbone.
The $x$-axis represents the MoE settings, while the $y$-axis shows the performance on the COLA dataset.
The scalable nature of Transformer models [22] has enabled remarkable successes across a spectrum of applications, ranging from language [1, 46, 47] and vision [23, 35] to cross-modality domains [32, 29, 28].
To further enhance performance while maintaining high efficiency, Sparse Mixture of Experts (SMoE) has emerged as a promising technique that significantly reduces computation costs during both training and inference stages [13, 24, 56], and has been shown to achieve comparable or superior performance compared to traditional dense models [25, 21, 7].
Despite its success, SMoE has an unavoidable drawback: its performance heavily relies on the choice of hyper-parameters, such as the number of activated experts per token, referred to as top-$k$, and the number of experts, denoted as $K$ [6, 12, 53]. As illustrated in Figure <ref>, the performance discrepancy of MoE models under various configurations can be approximately 1%-3%.
Notably, identifying the optimal hyper-parameter without a sufficient number of ablation studies is challenging.
As the size of the models continues to grow, this limitation could result in a significant waste of computational resources, and in turn, could hinder the efficiency of training MoE-based models in practice.
To tackle the above problems, the objective of this paper is to explore a novel training technique for MoE models, with the aim of addressing the following core question:
Is it possible to develop a MoE training strategy that can automatically determine the number of experts and the number of activated experts per token during the training process?
Hence, we introduce the Dynamic Mixture of Experts (DynMoE) method, which addresses the aforementioned question through two innovative components: (1) a top-any gating method that enables each token to autonomously determine the number of experts to activate, thereby allowing different tokens to activate varying numbers of experts; (2) an adaptive training process that dynamically adjusts the number of experts, increasing it when the current quantity is inadequate and removing redundant experts as necessary.
Additionally, we introduce a new auxiliary loss function specifically designed to encourage sparsity when employing the top-any gating approach. This loss encourages different experts to be diverse, rather than mandating that all experts be activated with the same frequency.
We summarize the contributions of this paper as follows:
* Introducing DynMoE, a novel method that frees MoE training from the burden of pivotal hyper-parameter selection by autonomously determining the number of experts and the number of experts to activate per token. We provide Tutel and DeepSpeed-MoE implementations for ease of practical usage.
* Conducting extensive empirical experiments across Vision, Language, and Vision-Language tasks.
The results illustrate that DynMoE achieves comparable or superior performance compared to well-tuned MoE settings.
§ RELATED WORKS
The Sparse Mixture of Experts (SMoE) approach [11, 44, 24] has been proven to effectively enhance the training and inference efficiency of foundational models. Contemporary studies typically replace the MLP layers of Transformer models with multiple expert networks and employ a gating network to determine which experts to select, choosing only a subset of experts for each token during both training and inference [24, 13]. Recently, the SMoE structure has shown success in various research areas. For instance, GMoE [26] has demonstrated that SMoE can enhance generalization performance in vision tasks. Large Language Models (LLMs) have also employed MoE to simultaneously reduce training and inference costs while improving model performance [13, 21, 7, 42, 31]. However, most of these models employ standard SMoE structures and simply apply them to various tasks. Our paper focuses on improving the MoE training process itself, which can be easily integrated with these methods.
Recently, some attempts have been made to improve the architecture of MoE models. For example, researchers have investigated the benefits of sample-wise [41, 15] and token-wise [44, 43, 13] routing. Some studies introduce a load balancing loss to ensure that the experts are activated an equal number of times [24, 13]. Expert choice routing [57] addresses load balance by allowing experts to choose tokens; however, this approach also suffers from dropped tokens. SoftMoE [37] uses a slot mechanism to simultaneously resolve the issues of load balance and dropped tokens. Nevertheless, these approaches also require pre-defined hyper-parameters, such as the number of experts or the number of experts to be activated. In this paper, we tackle this problem by presenting DynMoE, an algorithm that automatically determines the number of activated experts for each token and dynamically adds or removes experts during the training process. Furthermore, we introduce a new auxiliary loss function that ensures sparsity when utilizing this gating scheme.
§ METHOD
Illustration of the top-any gating method. The input tokens pass through the gating weights $\mW_{g,e}$ corresponding to each expert $e$, obtaining the gating scores. These gating scores are then compared to the gates $\mG_e$ to determine if the subsequent expert will be activated. Finally, the expert outputs are combined to produce the output tokens.
In this section, we introduce the Dynamic Mixture of Experts (DynMoE), an algorithm capable of automatically determining the number of experts and the number of experts to be activated for both training and inference stages.
This is achieved through the incorporation of two crucial components:
(1) The top-any gating method (Figure <ref>), which models the gating mechanism as a multi-label classification problem, allowing tokens to decide the number of experts to be activated on their own.
This enables different tokens to activate varying numbers of experts, including the option to activate no experts.
(2) A carefully designed adaptive process that adds new experts when tokens choose to not activate any existing experts, and removes any surplus experts that have not been activated by any tokens.
The overall process is summarized in Algorithm <ref>.
§.§ Top-Any Gating
In this section, we present the top-any gating method, which eliminates the need for tuning the top-$k$ value.
We further improve the test-time inference procedure to prevent token dropping and introduce an additional auxiliary loss to preserve efficiency.
Traditional top-$k$ gating and the limitations.
The traditional top-$k$ gating method takes the token embedding $\xx$ as input and uses an additional gating network $g$ to predict the score that the token assigns to each expert.
Typically, given a token $\xx \in \R^{d}$ as input, the gating process is defined as follows [40, 20]:
\begin{align}
g(\xx) \in \R^{K} := \text{softmax}(\mW_g^{T} \xx) \,,
\end{align}
where $\mW_g \in \R^{d \times K}$ is the parameter of the gating network, and $K$ is the number of experts.
Then the output of the MoE layer is defined by
\begin{align}
\yy = \frac{1}{\sum_{e \in \text{Top-}k \left( g(\xx) \right) } g(\xx)_e } \sum_{e \in \text{Top-}k \left( g(\xx) \right) } g(\xx)_e E_e(\xx) \,,
\end{align}
where $ E_e(\xx) \in \R^{d} $ is the output of $e$-th expert given input $\xx$, and $g(\xx)_e$ is the $e$-th entry of $g(\xx)$.
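As a concrete illustration, the softmax scoring and top-$k$ selection above can be sketched in plain Python (a minimal, framework-free sketch with toy experts; the function names, the lack of batching, and the list-based linear algebra are our own simplifications, not the paper's implementation):

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def top_k_moe(x, W_g, experts, k):
    """Traditional top-k gating: route token x to the k highest-scoring experts.

    x: token embedding (length d); W_g: d x K gating matrix (list of rows);
    experts: list of K callables mapping a token to a length-d output.
    """
    d, K = len(x), len(W_g[0])
    # g(x) = softmax(W_g^T x): one score per expert.
    scores = softmax([sum(x[i] * W_g[i][e] for i in range(d)) for e in range(K)])
    chosen = sorted(range(K), key=lambda e: scores[e], reverse=True)[:k]
    norm = sum(scores[e] for e in chosen)
    # y: combination of the selected experts' outputs, weighted by renormalized scores.
    y = [0.0] * d
    for e in chosen:
        out = experts[e](x)
        for i in range(d):
            y[i] += scores[e] / norm * out[i]
    return y, chosen
```

Note the fixed $k$: every token activates exactly $k$ experts, which is the assumption the top-any gating below removes.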
Despite the considerable success of the top-$k$ gating method in enhancing training and inference efficiency, two limitations persist:
* The value of $k$ must be fine-tuned to optimize model performance.
As demonstrated in Figure <ref>, the performance of MoE models can vary significantly with different top-$k$ values.
This observation has also been noted in recent studies [6, 12, 53].
Consequently, substantial computational resources are needed to identify the optimal value of $k$.
* The top-$k$ gating approach assumes that each token must activate the same number of experts, which may not always hold in practice.
For instance, across different tasks there could exist tokens shared by all tasks and tokens specific to certain tasks, i.e., different tokens could activate different numbers of experts.
Addressing the limitations of top-$k$ gating by tuning-free top-any gating.
To address the aforementioned limitations, we propose the top-any gating method, which does not require a pre-defined value of $k$ and allows different tokens to activate varying numbers of experts during both training and inference stages.
The design of the top-any gating method draws inspiration from the multi-label classification problem.
We consider each expert as an individual class and calculate the classification (gating) score for each class (expert) independently.
Subsequently, all classes (experts) with scores exceeding the threshold are deemed positive (activated).
In detail, given the expert representation matrix $\mW_g \in \R^{K \times d}$, where the $k$-th row of $\mW_g$ acts as the representation of expert $k$, and an input token $\xx \in \R^{d}$, the key steps of top-any gating can be formulated by the following equation:
\begin{align}
s(\xx) & = \frac{\left \langle \xx, \mW_{g} \right \rangle}{\norm{\xx} \norm{\mW_{g}}} \,, \label{equ:gating-sim} \\
g(\xx) & = \text{sign} \left( \sigma \left( s(\xx) \right) - \sigma( \mG ) \right) \,, \label{equ:gating-score}
\end{align}
where $\mW_g \in \R^{K \times d}$ and $\mG \in \R^{K}$.
To illustrate, we first compute the cosine similarities between the token and the expert representation matrix $\mW_g$ and obtain the similarity score $s(\xx) \in \R^{K}$.
Then the sigmoid function $\sigma$ is applied to the similarity score $s(\xx)$ to obtain the scores between $0$ and $1$.
Finally, experts whose similarity scores exceed the trainable per-expert threshold $\mG$ are activated for the token $\xx$.
Note that the sign function does not support back-propagation; we therefore customize the back-propagation of this step by directly copying the gradient of $g(\xx)$ to $\sigma \left( s(\xx) \right) - \sigma ( \mG )$ (a straight-through estimator), effectively bypassing the sign function.
Given the gating score $g(\xx) \in \R^{K}$, the number of activated experts is then defined by
\begin{align}
k := \text{sum} \left( g(\xx) \right) \,, \label{equ:gating-k}
\end{align}
where $k$ represents the number of experts to be activated for token $\xx$.
The model output of the MoE layer with the top-any gating method can be derived as follows
\begin{align}
\yy = \frac{1}{k} \sum_{g(\xx)_e > 0} E_{e}(\xx) \,. \label{equ:gating-outputs}
\end{align}
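The equations above can be sketched as follows (plain Python with no autograd, so the straight-through gradient copy is omitted; we also interpret the sign-based gate as a 0/1 mask, i.e., an expert is activated exactly when its sigmoid score exceeds its sigmoid threshold, which is an assumption of this sketch):

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1e-12
    nv = math.sqrt(sum(b * b for b in v)) or 1e-12
    return dot / (nu * nv)

def top_any_moe(x, W_g, G, experts):
    """Top-any gating: activate every expert whose sigmoid(cosine) score
    exceeds its trainable per-expert threshold G[e]; average activated outputs.

    W_g: list of K expert-representation rows (each length d);
    G: list of K per-expert threshold logits.
    """
    K, d = len(W_g), len(x)
    s = [cosine(x, W_g[e]) for e in range(K)]                     # Eq. (gating-sim)
    mask = [1 if sigmoid(s[e]) > sigmoid(G[e]) else 0 for e in range(K)]
    k = sum(mask)                        # number of activated experts (may be 0)
    if k == 0:
        return [0.0] * d, mask           # no expert activated; handled at test time
    y = [0.0] * d
    for e in range(K):
        if mask[e]:
            out = experts[e](x)
            for i in range(d):
                y[i] += out[i] / k       # uniform 1/k combination
    return y, mask
```

Different tokens can thus activate different numbers of experts, including none.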
Improving the top-any gating during test-time to prevent token dropping.
To facilitate the design of the adaptive expert number process, we did not impose a minimum value on $k$.
Consequently, some tokens may not activate any experts.
To address this issue, during model performance evaluation, we modify the top-any gating to enable top-$1$ gating for tokens that do not choose to activate any experts.
In detail, for the input token $\xx$ with $\text{sum}(g(\xx)) = 0$, the modified gating score $\tilde{g}(\xx)$ is obtained by
\begin{align}
\tilde{g}(\xx)_k =
\left\{ \begin{array}{ll}
0 & k \not= \argmax_{k'} \sigma (s(\xx))_{k'} \,, \\
\sigma (s(\xx))_k & k = \argmax_{k'} \sigma (s(\xx))_{k'} \,.
\end{array}
\right.
\end{align}
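This fallback can be sketched as a small post-processing step over the gating result (illustrative names: `scores` holds the sigmoid scores $\sigma(s(\xx))$ and `mask` the top-any activation pattern):

```python
def eval_gate(scores, mask):
    """Test-time rule: if no expert cleared its threshold, fall back to the
    single highest-scoring expert (top-1) so that no token is dropped."""
    if sum(mask) > 0:
        return mask                       # at least one expert activated: keep as is
    best = max(range(len(scores)), key=lambda e: scores[e])
    fallback = [0] * len(scores)
    fallback[best] = 1                    # activate only the argmax expert
    return fallback
```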
Guarding efficiency for top-any gating by auxiliary loss.
The primary goal of using MoE models is to improve the training and inference efficiency.
However, in the absence of a cap on the maximum number of activated experts, tokens might activate all experts, which is counterproductive to our primary goal.
Using an auxiliary loss as a regularizer over experts may alleviate this issue.
However, existing auxiliary loss methods [24, 13, 51] are primarily designed to ensure load balancing across experts and thus cannot align with our objectives.
While activating all experts can indeed achieve load balancing, it contradicts our aim of improving efficiency by limiting the number of activated experts.
Therefore, we need a solution that not only ensures load balancing but also restricts the number of activated experts.
As a remedy, we propose a new auxiliary loss, namely sparse and simple gating loss, as shown in (<ref>).
The diversity loss and simplicity loss in (<ref>) work together to improve the efficiency of the model by addressing different aspects of the expert representations.
On one hand, the diversity loss encourages independence among the $\mW_g$ representations of various experts.
It serves two purposes: First, it prevents a high degree of similarity between experts, thereby enhancing the model's representational capacity;
Second, it guides tokens to avoid simultaneous activation of all experts, thereby promoting sparse gating for improved efficiency.
On the other hand, the simplicity loss normalizes $\mW_g$ to avoid excessively large values within the matrix, which helps maintain numerical stability and prevents overfitting due to extreme parameter values.
The detailed loss function is defined as follows:
\begin{align}
\textstyle
\cL = \underbrace{\norm{\mW_g^{T} \mW_g - \mI_K}_2}_{\emph{diversity loss}} + \underbrace{\frac{1}{K} \sum_{e=1}^{K} \norm{\ww_{g, e}}_2}_{\emph{simplicity loss}} \,, \label{equ:gating-loss}
\end{align}
where $\mI_K$ is the identity matrix of dimension $K$, and $\ww_{g, e} \in \R^{d}$ is the $e$-th row of $\mW_g$, i.e., the representation of the $e$-th expert.
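A sketch of this loss in plain Python, treating each row of $\mW_g$ as one expert representation, forming the $K \times K$ Gram matrix of those rows, and reading the matrix norm $\norm{\cdot}_2$ as the Frobenius norm (both the Gram-matrix reading and the Frobenius choice are assumptions of this sketch):

```python
import math

def gating_aux_loss(W_g):
    """Diversity loss ||Gram(W_g) - I_K||_F plus simplicity loss (1/K) sum_e ||w_e||_2.

    W_g: list of K rows, each the length-d representation of one expert.
    """
    K = len(W_g)
    # Gram matrix of expert representations: gram[e][f] = <w_e, w_f>.
    gram = [[sum(a * b for a, b in zip(W_g[e], W_g[f])) for f in range(K)]
            for e in range(K)]
    # Diversity: push expert representations toward orthonormality.
    diversity = math.sqrt(sum((gram[e][f] - (1.0 if e == f else 0.0)) ** 2
                              for e in range(K) for f in range(K)))
    # Simplicity: keep the per-expert representation norms small.
    simplicity = sum(math.sqrt(sum(a * a for a in row)) for row in W_g) / K
    return diversity + simplicity
```

For orthonormal expert rows the diversity term vanishes and only the simplicity term remains, which matches the intuition that diverse, well-scaled representations are the loss minimizers.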
§.§ Adaptive Training Process
Input: data $\xx$; initial gating network parameters $\mW_g$, $\mG$, and $\tau$; experts $E_1, \cdots, E_K$; start-recording flag $flag_{s}$; finish-recording flag $flag_{f}$.
Output: MoE layer output $\yy$; auxiliary loss value.
1. If $flag_{s}$: set the routing flag $flag_{rout} = 1$; initialize the expert-activation records $\mR_{E} = \0_{K}$ and the non-activated-sample record $\mR_{S} = \0_{d}$.
2. Compute the gating outputs $g(\xx)$ and $\kk$ by Eqs. (<ref>) and (<ref>).
3. Compute the MoE layer output $\yy$ by Eq. (<ref>).
4. Compute the auxiliary loss by Eq. (<ref>).
5. If $flag_{rout} = 1$: update $\mR_{E} = \mR_{E} + \text{sum}(g(\xx), \text{dim}=0)$ and $\mR_{S} = \mR_{S} + \sum_{i=1}^{N} \1_{\kk_i = 0} \xx_i$.
6. If $flag_{f}$: set $flag_{rout} = 0$; if there exists an expert $e$ with $\mR_{E}^{e} = 0$, remove expert $e$; if $\mR_{S} \not= \mathbf{0}$, add a new expert $K + 1$ with representation $\mW_{g, K + 1} = \mR_{S} / \norm{\mR_{S}}$.
Algorithm: Pseudo-code of DynMoE for each iteration and MoE layer.
In this section, we elaborate on the adaptive training process, which is designed to automatically determine the number of experts.
As illustrated in Figure <ref>, the adaptive process consists of three parts, namely
(1) Routing Recording: recording the routing results during training;
(2) Adding Experts: adding new experts when tokens choose not to activate any existing experts;
and (3) Removing Experts: removing experts that have not been chosen by any tokens.
Routing Recording.
To facilitate the removal and addition of experts, it is essential to track the routing status.
Specifically, we record two key pieces of information for each MoE layer: (1) for each expert $e$, the number of times expert $e$ is activated, denoted as $\mR_{E} \in \R^{K}$ (see Algorithm <ref>); (2) for input tokens that do not activate any expert, the sum of their embeddings $\xx$, denoted as $\mR_{S} \in \R^{d}$ (see Algorithm <ref>).
Note that this approach simplifies the expert addition process: by using the token embeddings to initialize the expert representation $\mW_g$, we can achieve a high similarity score between these tokens and the new experts, ensuring that the new expert will be activated by these tokens when added.
As demonstrated in Algorithm <ref>, we utilize $flag_{s}$ and $flag_{f}$ to determine when to start and stop routing recording.
Users can control these two flags as needed.
Adding Experts when there exist tokens that choose not to activate any experts.
We add new experts when the recorded $\mR_{S} \not = \mathbf{0}$, since in that case some tokens did not activate any expert and $\mR_{S}$ is the sum of those tokens' embeddings.
Therefore, given $K$ activated experts and new expert $K + 1$, we initialize $\mW_{g, K + 1} = \frac{\mR_{S}}{\norm{\mR_{S}}}$ and $\mG_{K+1} = \mathbf{0}$.
Removing Experts when there exist experts not activated by any token.
We remove experts when there is an expert $e$ such that $\mR_{E}^{e} = 0$ (see Algorithm <ref>).
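The expert addition and removal bookkeeping can be sketched as follows (a simplified, framework-free illustration; a real implementation must also resize the expert networks themselves and the optimizer state):

```python
import math

def adapt_experts(W_g, G, R_E, R_S):
    """Remove experts that were never activated (R_E[e] == 0) and, if some
    tokens activated no expert (R_S != 0), add one expert initialized from the
    normalized sum of those tokens' embeddings.

    W_g: list of expert-representation rows; G: per-expert threshold logits;
    R_E: activation counts per expert; R_S: summed embeddings of unrouted tokens.
    """
    # Removal: keep only experts with at least one recorded activation.
    keep = [e for e in range(len(W_g)) if R_E[e] > 0]
    W_g = [W_g[e] for e in keep]
    G = [G[e] for e in keep]
    # Addition: a new expert aligned with the unrouted tokens, so those tokens
    # score high cosine similarity against it and will activate it.
    if any(v != 0.0 for v in R_S):
        norm = math.sqrt(sum(v * v for v in R_S))
        W_g.append([v / norm for v in R_S])
        G.append(0.0)                     # threshold logit initialized to zero
    return W_g, G
```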
Elaboration on the adaptive training process.
We visualize the adaptive training process of DynMoE, including routing recording, expert addition, and expert removal.
The green strip connecting the token and the expert indicates records of a token routing to an expert.
The red arrow at the bottom part of the figure shows where and when expert addition and removal happens.
§ EXPERIMENTS
In this section, we carry out experiments to address the following questions:
* Q1: Can DynMoE achieve competitive performance among different MoE settings? See <ref>.
* Q2: Can DynMoE handle tasks with varying modalities and scales? See <ref>.
* Q3: Will models trained by DynMoE maintain sparsity to ensure efficiency? See <ref>.
* Q4: Can DynMoE offer insights that could guide the design of MoE models? See <ref>.
§.§ Experiment Setup
To answer the above four questions, we conduct experiments on Vision, Language, and Vision-Language tasks. The details are shown in the following.
* Vision Task. For the vision tasks, we follow the same settings as in GMoE [26]. We employ the pre-trained ViT-S/16 [10] model and evaluate it on the DomainBed [16] benchmark.
Our experiments encompass four Domain Generalization datasets: PACS [27], VLCS [2], OfficeHome [48], and DomainNet [36].
All results are reported using the train-validation selection criterion.
* Language Task. The language tasks adhere to the same settings as those in MoEfication [56] and EMoE [38].
The MoE models are built upon the BERT-large [8] architecture using the MoEfication method and are fine-tuned on GLUE [49] tasks, which include COLA [50], QNLI [49], RTE [5], MNLI [52], and MRPC [9].
* Vision-Language Task. The vision-language tasks follow the setting in MoE-LLaVA [31], where we use StableLM-2-1.6B [4], Qwen-1.8B [3], and Phi-2-2.7B [19] as backbone language models, and use clip-vit-large-patch14-336 [39] as the vision encoder.
The models are evaluated on image understanding benchmarks including VQA-v2 [14], GQA [18], VisWiz [17], ScienceQA-IMG [34], TextVQA [45], POPE [30], MME [54], MMBench [33], LLaVA-Bench (in-the-Wild) [32], and MM-Vet [55]. Furthermore, we keep routing records in our model during testing time. For each benchmark, we collect the number of experts' activations per MoE layer and total processed tokens during testing.
§.§ A1: DynMoE Achieves Competitive Performance among Various MoE Settings
In this section, we carry out experiments on the GLUE benchmark [49], varying the number of experts ($K$) and the value of top-$k$.
The results of these experiments are shown in Figure <ref>.
The performance of DynMoE surpasses the average performance among various MoE settings.
As seen in Figure <ref>, we can observe that
* The performance of DynMoE is higher than the average performance across different values of $K$ and top-$k$ in most tasks, indicating its competitive performance.
* The performance fluctuates considerably with different $K$ and top-$k$ values, such as up to 3.0% on the RTE task and 1.3% on the COLA task.
DynMoE overcomes this issue by not requiring pre-defined $K$ and top-$k$ values.
* The performance gain of a specific $K$ and top-$k$ choice is not consistent across tasks.
For instance, the $K = 16, k = 4$ setting performs well on QNLI but poorly on MRPC.
In contrast, DynMoE consistently achieves competitive performance across tasks.
Performance of DynMoE on language tasks.
We conduct experiments on the GLUE benchmark.
The $x$-axis represents MoE settings with varying $K$ and top-$k$ values.
The $y$-axis denotes the model's performance.
Dashed lines indicate the average performance across different settings, as well as the performance of DynMoE.
§.§ A2: DynMoE Can Handle Vision, Language, and Vision-Language Tasks
In addition to Language tasks, we also conduct experiments on Vision and Vision-Language tasks to verify the performance of DynMoE on different modalities and task scales.
The results can be found in Tables <ref>, and <ref>.
The effectiveness of DynMoE remains consistent in both Vision and Vision-Language tasks.
We can observe the following: (1) DynMoE outperforms the well-tuned MoE of [38] in vision tasks, and the performance difference between DynMoE and the well-tuned MoE in [26] falls within the range of random fluctuation. (2) When using StableLM-1.6B and Phi-2-2.7B as the backbone, the performance of DynMoE-LLaVA surpasses that of MoE-LLaVA. (3) With Qwen-1.8B as the backbone, the performance of DynMoE-LLaVA remains comparable to MoE-LLaVA; in this setting, the average top-$k$ of DynMoE-LLaVA (avg $k = 1.86$) is also close to the MoE-LLaVA setting ($k=2$).
Performance of DynMoE on vision tasks:
We investigate the performance of DynMoE on vision tasks using the DomainBed benchmark, with ViT-small serving as the backbone model.
The effectiveness of GMoE is reported based on the carefully tuned results presented in the previous works [26] and [38].
In our implementation of DynMoE, we set the maximum number of experts to $8$, with an initial setting of $6$ experts.
The number of experts is then dynamically adjusted in each iteration.
We also report the performance of DynMoE using the Gshard loss [24] as the auxiliary loss.
Algorithms PACS VLCS OfficeHome DomainNet Average
GMoE (in [26]) 88.1 80.2 74.2 48.7 72.8
GMoE (carefully tuned [38]) 87.7 79.6 73.1 - -
GMoE (with DynMoE, Gshard Loss) 88.4 79.4 73.6 47.4 72.2
GMoE (with DynMoE, Diverse and Simple Gating Loss) 87.6 80.3 73.5 48.2 72.4
Performance of DynMoE on vision-language tasks: We investigate the performance of DynMoE-LLaVA on image understanding benchmarks.
Evaluation Benchmarks include VQA-v2; GQA; VisWiz; SQA$^I$ (ScienceQA-IMG); VQA$^T$ (TextVQA); POPE; MME; MMB (MMBench); LLaVA$^W$ (LLaVA-Bench (in-the-Wild)); MM-Vet.
For a fair comparison, we set the maximum number of experts to 4 for DynMoE-LLaVA (the same as the number of experts in MoE-LLaVA) and set the initial number of experts to $2$.
$N_{A}$ indicates the number of activated parameters.
Algorithms $N_{A}$ VQA$^{v2}$ GQA VisWiz SQA$^I$ VQA$^T$ POPE MME MMB LLaVA$^W$ MM-Vet
[r]LLaVA-1.5 (Vicuna-13B) 13B 80.0 63.3 53.6 71.6 61.3 85.9 1531.3 67.7 70.7 35.4
[r]LLaVA-1.5 (Vicuna-7B) 7B 78.5 62.0 50.0 66.8 58.2 85.9 1510.7 64.3 63.4 30.5
[r]LLaVA-Phi (Phi-2-2.7B) 2.7B 71.4 - 35.9 68.4 48.6 85.0 1335.1 59.8 - 28.9
Sparse (StableLM-1.6B)
($K=4,k=2$) 2.06B 76.7 60.3 36.2 62.6 50.1 85.7 1318.2 60.2 86.8 26.9
DynMoE (avg $k = 1.25$) 1.75B 77.4 61.4 40.6 63.4 48.9 85.7 1300.9 63.2 86.4 28.1
Sparse (Qwen-1.8B)
($K=4,k=2$) 2.24B 76.2 61.5 32.6 63.1 48.0 87.0 1291.6 59.7 88.7 25.3
DynMoE (avg $k = 1.86$) 2.19B 76.4 60.9 32.4 63.2 47.5 85.8 1302.4 61.3 89.2 24.2
Sparse (Phi-2-2.7B)
($K=4,k=2$) 3.62B 77.6 61.4 43.9 68.5 51.4 86.3 1423.0 65.2 94.1 34.3
DynMoE (avg $k = 1.68$) 3.35B 77.9 61.6 45.1 68.0 51.8 86.0 1429.6 66.6 95.6 33.6
§.§ A3: DynMoE Maintains Efficiency by Activating Fewer Parameters
In this section, we aim to demonstrate that although we did not enforce sparsity on the models, the trained models are still sparse, promising improved inference efficiency.
DynMoE-LLaVA activates fewer parameters compared to MoE-LLaVA.
In Table <ref>, we display the number of activated parameters in the "$N_A$" column.
When using StableLM-1.6B as the backbone, DynMoE-LLaVA activates approximately 15.0$\%$ fewer parameters than MoE-LLaVA.
For Qwen-1.8B, DynMoE-LLaVA activates about 2.2$\%$ fewer parameters than MoE-LLaVA.
For Phi-2-2.7B, DynMoE-LLaVA activates about 7.5$\%$ fewer parameters than MoE-LLaVA.
In these three cases, the reduction in activated parameters does not compromise the model's performance.
Ablation studies on the value of top-$k$ during test.
In Table <ref>, we examine the performance of DynMoE-LLaVA when using different top-$k$ values during the testing phase.
The results indicate that (1) the original DynMoE-LLaVA outperforms other settings in most cases while activating the fewest parameters;
(2) compared to the StableLM-1.6B backbone, DynMoE-LLaVA trained with the Qwen-1.8B backbone sometimes favors activating two experts.
This observation aligns with the fact that DynMoE-LLaVA also chooses to activate about $2$ experts on average in this setting (see Table <ref>).
Average top-$k$ activated experts of DynMoE on vision-language benchmarks. We record the average top-$k$ activated experts for each MoE layer when using StableLM-1.6B as the language model backbone.
Ablation studies on the value of top-$k$ during test.
We train the models using DynMoE and set different values of top-$k$ during the test.
Training and evaluation settings are identical to that of Table <ref>.
Algorithms $N_{A}$ VQA$^{v2}$ GQA VisWiz SQA$^I$ VQA$^T$ POPE MME MMB LLaVA$^W$ MM-Vet
DynMoE-LLaVA 1.75B 77.4 61.4 40.6 63.4 48.9 85.7 1300.9 63.2 86.4 28.1
DynMoE-LLaVA ($k=2$) 2.06B 76.9 61.0 39.1 62.1 49.2 85.7 1320.4 62.4 73.6 28.2
DynMoE-LLaVA ($k=3$) 2.47B 76.8 60.7 37.0 62.6 48.9 85.5 1306.9 62.5 74.0 26.8
DynMoE-LLaVA ($k=4$) 2.89B 76.8 60.5 34.8 61.9 49.0 85.8 1321.9 61.9 75.8 27.8
DynMoE-LLaVA 2.19B 76.2 61.5 32.6 63.1 48.0 87.0 1291.6 59.7 88.7 25.3
DynMoE-LLaVA ($k=2$) 2.24B 76.2 60.8 33.8 62.2 47.7 87.5 1281.3 60.4 91.3 23.0
DynMoE-LLaVA ($k=3$) 2.65B 76.2 60.5 32.2 62.9 48.1 88.4 1263.7 60.7 87.8 23.4
DynMoE-LLaVA ($k=4$) 3.05B 75.7 60.0 31.6 62.8 48.3 88.1 1263.4 61.0 86.7 23.7
DynMoE-LLaVA 3.35B 77.9 61.6 45.1 68.0 51.8 86.0 1429.6 66.6 95.6 33.6
DynMoE-LLaVA ($k=2$) 3.62B 77.8 61.5 41.6 67.6 51.8 85.5 1433.5 66.8 95.1 32.7
DynMoE-LLaVA ($k=3$) 4.46B 77.7 61.8 42.0 68.0 52.3 86.3 1438.1 66.8 94.3 30.8
DynMoE-LLaVA ($k=4$) 5.30B 77.5 61.4 41.7 68.0 52.4 87.0 1431.5 66.5 95.8 32.8
[Activation frequency (Qwen)]
[Activation frequency (StableLM)]
[Activation frequency (Phi-2)]
Statistics of expert activation frequency in different layers.
We report the frequency of expert activations in various layers for the VQA task.
Larger circles indicate experts that are activated more frequently.
§.§ A4: MoE Structure is Required for Bottom Layers rather than Top Layers
Illustration of the number of activated experts for each layer.
In Figures <ref> and <ref>, we present the average top-$k$ of DynMoE-LLaVA and the frequency of expert activation across various layers.
Our observations indicate that:
(1) In the top layer (the layer closest to the LM prediction head), tokens tend to select the same expert, while in the bottom layer, tokens activate all experts uniformly.
This suggests that there is no need to convert the top layer into an MoE layer, whereas the bottom layer should be transformed into one.
(2) Different LLM backbones may exhibit distinct expert activation frequency patterns.
For the StableLM backbone, most MoE layers activate only one dominant expert, whereas for the Phi-2 backbone, experts are more likely to be activated uniformly.
§ CONCLUSION AND FUTURE WORKS
In this paper, we introduce DynMoE, which automatically determines the number of experts and the number of experts to be activated. Our results demonstrate that DynMoE achieves comparable or even superior performance across various MoE model settings while maintaining efficiency. This highlights DynMoE's potential to save researchers' time and computational resources when tuning these hyper-parameters. Furthermore, our visualization results reveal interesting observations, such as the reduced number of experts required for the top layers. We believe these insights may inspire future advancements in MoE model design.
However, due to computational resource constraints, we did not test larger scale models. Additionally, the current adaptive process implementation keeps removed experts in a candidate pool, occupying GPU storage. Developing more efficient implementations in the future would be valuable.
[1]
Josh Achiam, Steven Adler, Sandhini Agarwal, Lama Ahmad, Ilge Akkaya, Florencia Leoni Aleman, Diogo Almeida, Janko Altenschmidt, Sam Altman, Shyamal Anadkat, et al.
Gpt-4 technical report.
arXiv preprint arXiv:2303.08774, 2023.
[2]
Isabela Albuquerque, João Monteiro, Mohammad Darvishi, Tiago H Falk, and Ioannis Mitliagkas.
Generalizing to unseen domains via distribution matching.
arXiv preprint arXiv:1911.00804, 2019.
[3]
Jinze Bai, Shuai Bai, Yunfei Chu, Zeyu Cui, Kai Dang, Xiaodong Deng, Yang Fan, Wenbin Ge, Yu Han, Fei Huang, Binyuan Hui, Luo Ji, Mei Li, Junyang Lin, Runji Lin, Dayiheng Liu, Gao Liu, Chengqiang Lu, Keming Lu, Jianxin Ma, Rui Men, Xingzhang Ren, Xuancheng Ren, Chuanqi Tan, Sinan Tan, Jianhong Tu, Peng Wang, Shijie Wang, Wei Wang, Shengguang Wu, Benfeng Xu, Jin Xu, An Yang, Hao Yang, Jian Yang, Shusheng Yang, Yang Yao, Bowen Yu, Hongyi Yuan, Zheng Yuan, Jianwei Zhang, Xingxuan Zhang, Yichang Zhang, Zhenru Zhang, Chang Zhou, Jingren Zhou, Xiaohuan Zhou, and Tianhang Zhu.
Qwen technical report.
[4]
Marco Bellagente, Jonathan Tow, Dakota Mahan, Duy Phung, Maksym Zhuravinskyi, Reshinth Adithyan, James Baicoianu, Ben Brooks, Nathan Cooper, Ashish Datta, Meng Lee, Emad Mostaque, Michael Pieler, Nikhil Pinnaparju, Paulo Rocha, Harry Saini, Hannah Teufel, Niccolo Zanichelli, and Carlos Riquelme.
Stable lm 2 1.6b technical report.
[5]
Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo.
The fifth pascal recognizing textual entailment challenge.
TAC, 7(8):1, 2009.
[6]
Aidan Clark, Diego de Las Casas, Aurelia Guy, Arthur Mensch, Michela Paganini, Jordan Hoffmann, Bogdan Damoc, Blake Hechtman, Trevor Cai, Sebastian Borgeaud, et al.
Unified scaling laws for routed language models.
In International conference on machine learning, pages 4057–4086. PMLR, 2022.
[7]
Damai Dai, Chengqi Deng, Chenggang Zhao, RX Xu, Huazuo Gao, Deli Chen, Jiashi Li, Wangding Zeng, Xingkai Yu, Y Wu, et al.
Deepseekmoe: Towards ultimate expert specialization in mixture-of-experts language models.
arXiv preprint arXiv:2401.06066, 2024.
[8]
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.
BERT: Pre-training of deep bidirectional transformers for language understanding.
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, 2019.
[9]
Bill Dolan and Chris Brockett.
Automatically constructing a corpus of sentential paraphrases.
In Third international workshop on paraphrasing (IWP2005), 2005.
[10]
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al.
An image is worth 16x16 words: Transformers for image recognition at scale.
In International Conference on Learning Representations, 2021.
[11]
David Eigen, Marc'Aurelio Ranzato, and Ilya Sutskever.
Learning factored representations in a deep mixture of experts.
arXiv preprint arXiv:1312.4314, 2013.
[12]
Dongyang Fan, Bettina Messmer, and Martin Jaggi.
Towards an empirical understanding of moe design choices.
arXiv preprint arXiv:2402.13089, 2024.
[13]
William Fedus, Barret Zoph, and Noam Shazeer.
Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity.
Journal of Machine Learning Research, 23(120):1–39, 2022.
[14]
Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh.
Making the V in VQA matter: Elevating the role of image understanding in visual question answering.
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 6904–6913, 2017.
[15]
Sam Gross, Marc'Aurelio Ranzato, and Arthur Szlam.
Hard mixtures of experts for large scale weakly supervised vision.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6865–6873, 2017.
[16]
Ishaan Gulrajani and David Lopez-Paz.
In search of lost domain generalization.
arXiv preprint arXiv:2007.01434, 2020.
[17]
Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham.
VizWiz grand challenge: Answering visual questions from blind people.
In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3608–3617, 2018.
[18]
Drew A Hudson and Christopher D Manning.
GQA: A new dataset for real-world visual reasoning and compositional question answering.
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 6700–6709, 2019.
[19]
Alyssa Hughes.
Phi-2: The surprising power of small language models.
Microsoft Research Blog, 2023.
[20]
Changho Hwang, Wei Cui, Yifan Xiong, Ziyue Yang, Ze Liu, Han Hu, Zilong Wang, Rafael Salas, Jithin Jose, Prabhat Ram, et al.
Tutel: Adaptive mixture-of-experts at scale.
Proceedings of Machine Learning and Systems, 5, 2023.
[21]
Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al.
Mixtral of experts.
arXiv preprint arXiv:2401.04088, 2024.
[22]
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei.
Scaling laws for neural language models.
arXiv preprint arXiv:2001.08361, 2020.
[23]
Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al.
Segment anything.
In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4015–4026, 2023.
[24]
Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen.
GShard: Scaling giant models with conditional computation and automatic sharding.
In International Conference on Learning Representations, 2020.
[25]
Bo Li, Yifei Shen, Jingkang Yang, Yezhen Wang, Jiawei Ren, Tong Che, Jun Zhang, and Ziwei Liu.
Sparse mixture-of-experts are domain generalizable learners.
In The Eleventh International Conference on Learning Representations, 2022.
[26]
Bo Li, Yifei Shen, Jingkang Yang, Yezhen Wang, Jiawei Ren, Tong Che, Jun Zhang, and Ziwei Liu.
Sparse mixture-of-experts are domain generalizable learners.
In The Eleventh International Conference on Learning Representations, 2023.
[27]
Da Li, Yongxin Yang, Yi-Zhe Song, and Timothy M Hospedales.
Deeper, broader and artier domain generalization.
In Proceedings of the IEEE international conference on computer vision, pages 5542–5550, 2017.
[28]
Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi.
BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models.
In International conference on machine learning, pages 19730–19742. PMLR, 2023.
[29]
Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi.
BLIP: Bootstrapping language-image pre-training for unified vision-language understanding and generation.
In International conference on machine learning, pages 12888–12900. PMLR, 2022.
[30]
Yifan Li, Yifan Du, Kun Zhou, Jinpeng Wang, Wayne Xin Zhao, and Ji-Rong Wen.
Evaluating object hallucination in large vision-language models.
arXiv preprint arXiv:2305.10355, 2023.
[31]
Bin Lin, Zhenyu Tang, Yang Ye, Jiaxi Cui, Bin Zhu, Peng Jin, Junwu Zhang, Munan Ning, and Li Yuan.
MoE-LLaVA: Mixture of experts for large vision-language models.
arXiv preprint arXiv:2401.15947, 2024.
[32]
Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee.
Visual instruction tuning.
Advances in neural information processing systems, 36, 2024.
[33]
Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhang, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, et al.
MMBench: Is your multi-modal model an all-around player?
arXiv preprint arXiv:2307.06281, 2023.
[34]
Pan Lu, Swaroop Mishra, Tanglin Xia, Liang Qiu, Kai-Wei Chang, Song-Chun Zhu, Oyvind Tafjord, Peter Clark, and Ashwin Kalyan.
Learn to explain: Multimodal reasoning via thought chains for science question answering.
Advances in Neural Information Processing Systems, 35:2507–2521, 2022.
[35]
William Peebles and Saining Xie.
Scalable diffusion models with transformers.
In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4195–4205, 2023.
[36]
Xingchao Peng, Qinxun Bai, Xide Xia, Zijun Huang, Kate Saenko, and Bo Wang.
Moment matching for multi-source domain adaptation.
In Proceedings of the IEEE/CVF international conference on computer vision, pages 1406–1415, 2019.
[37]
Joan Puigcerver, Carlos Riquelme, Basil Mustafa, and Neil Houlsby.
From sparse to soft mixtures of experts.
arXiv preprint arXiv:2308.00951, 2023.
[38]
Zihan Qiu, Zeyu Huang, and Jie Fu.
Emergent mixture-of-experts: Can dense pre-trained transformers benefit from emergent modular structures?
arXiv preprint arXiv:2310.10908, 2023.
[39]
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al.
Learning transferable visual models from natural language supervision.
In International conference on machine learning, pages 8748–8763. PMLR, 2021.
[40]
Samyam Rajbhandari, Conglong Li, Zhewei Yao, Minjia Zhang, Reza Yazdani Aminabadi, Ammar Ahmad Awan, Jeff Rasley, and Yuxiong He.
Deepspeed-moe: Advancing mixture-of-experts inference and training to power next-generation ai scale.
In International conference on machine learning, pages 18332–18346. PMLR, 2022.
[41]
Prajit Ramachandran and Quoc V Le.
Diversity and depth in per-example routing models.
In International Conference on Learning Representations, 2018.
[42]
Xiaozhe Ren, Pingyi Zhou, Xinfan Meng, Xinjing Huang, Yadao Wang, Weichao Wang, Pengfei Li, Xiaoda Zhang, Alexander Podolskiy, Grigory Arshinov, et al.
PanGu-$\Sigma$: Towards trillion parameter language model with sparse heterogeneous computing.
arXiv preprint arXiv:2303.10845, 2023.
[43]
Carlos Riquelme, Joan Puigcerver, Basil Mustafa, Maxim Neumann, Rodolphe Jenatton, André Susano Pinto, Daniel Keysers, and Neil Houlsby.
Scaling vision with sparse mixture of experts.
Advances in Neural Information Processing Systems, 34:8583–8595, 2021.
[44]
Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean.
Outrageously large neural networks: The sparsely-gated mixture-of-experts layer.
arXiv preprint arXiv:1701.06538, 2017.
[45]
Amanpreet Singh, Vivek Natarajan, Meet Shah, Yu Jiang, Xinlei Chen, Dhruv Batra, Devi Parikh, and Marcus Rohrbach.
Towards VQA models that can read.
In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8317–8326, 2019.
[46]
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al.
Llama: Open and efficient foundation language models.
arXiv preprint arXiv:2302.13971, 2023.
[47]
Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al.
Llama 2: Open foundation and fine-tuned chat models.
arXiv preprint arXiv:2307.09288, 2023.
[48]
Hemanth Venkateswara, Jose Eusebio, Shayok Chakraborty, and Sethuraman Panchanathan.
Deep hashing network for unsupervised domain adaptation.
In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5018–5027, 2017.
[49]
Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman.
GLUE: A multi-task benchmark and analysis platform for natural language understanding.
In International Conference on Learning Representations, 2018.
[50]
Alex Warstadt, Amanpreet Singh, and Samuel R Bowman.
Neural network acceptability judgments.
Transactions of the Association for Computational Linguistics, 7:625–641, 2019.
[51]
Xun Wu, Shaohan Huang, Wenhui Wang, and Furu Wei.
Multi-head mixture-of-experts.
arXiv preprint arXiv:2404.15045, 2024.
[52]
Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, Yin Tian, Qianqian Dong, Weitang Liu, Bo Shi, Yiming Cui, Junyi Li, Jun Zeng, Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, He Zhou, Shaoweihua Liu, Zhe Zhao, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhengliang Yang, Kyle Richardson, and Zhenzhong Lan.
CLUE: A Chinese language understanding evaluation benchmark.
In Proceedings of the 28th International Conference on Computational Linguistics, pages 4762–4772, Barcelona, Spain (Online), December 2020. International Committee on Computational Linguistics.
[53]
An Yang, Junyang Lin, Rui Men, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Jiamang Wang, Yong Li, et al.
M6-T: Exploring sparse expert models and beyond.
arXiv preprint arXiv:2105.15082, 2021.
[54]
Shukang Yin, Chaoyou Fu, Sirui Zhao, Ke Li, Xing Sun, Tong Xu, and Enhong Chen.
A survey on multimodal large language models.
arXiv preprint arXiv:2306.13549, 2023.
[55]
Weihao Yu, Zhengyuan Yang, Linjie Li, Jianfeng Wang, Kevin Lin, Zicheng Liu, Xinchao Wang, and Lijuan Wang.
MM-Vet: Evaluating large multimodal models for integrated capabilities.
arXiv preprint arXiv:2308.02490, 2023.
[56]
Zhengyan Zhang, Yankai Lin, Zhiyuan Liu, Peng Li, Maosong Sun, and Jie Zhou.
MoEfication: Transformer feed-forward layers are mixtures of experts.
In Findings of the Association for Computational Linguistics: ACL 2022, pages 877–890, 2022.
[57]
Yanqi Zhou, Tao Lei, Hanxiao Liu, Nan Du, Yanping Huang, Vincent Zhao, Andrew M Dai, Quoc V Le, James Laudon, et al.
Mixture-of-experts with expert choice routing.
Advances in Neural Information Processing Systems, 35:7103–7114, 2022.
§ EXPERIMENT SETTINGS
We conduct experiments on Vision, Language, and Vision-Language tasks. The detailed experiment settings are shown in the following.
* Vision Task. For the vision tasks, we follow the same settings as in GMoE [26]. We employ the pre-trained ViT-S/16 [10] model and evaluate it on the DomainBed [16] benchmark.
Our experiments encompass four Domain Generalization datasets: PACS [27], VLCS [2], OfficeHome [48], and DomainNet [36].
All results are reported using the train-validation selection criterion.
We conduct all experiments on a single RTX 3090 GPU, and the reported results are averaged over three random seeds. For our method, we set the maximum number of experts to 8 and the initial number of experts to 6. The adaptive process is executed at each iteration.
* Language Task. The language tasks adhere to the same settings as those in MoEfication [56] and EMoE [38].
The MoE models are built upon the BERT-large [8] architecture using the MoEfication method and are fine-tuned on GLUE [49] tasks, including CoLA [50], QNLI [49], RTE [5], MNLI [52], and MRPC [9]. We conduct all experiments on a single RTX 3090 GPU, and the reported results are averaged over three random seeds. For our method, we set the maximum number of experts to 8 and the initial number of experts to 6. In each epoch, we begin recording routing statistics at 1/3 of the epoch; at 2/3 of the epoch we stop recording and execute the adaptive process.
* Vision-Language Task. The vision-language tasks follow the setting of MoE-LLaVA [31]: we use StableLM-2-1.6B [4], Qwen-1.8B [3], and Phi-2 [19] as backbone language models, and clip-vit-large-patch14-336 [39] as the vision encoder. We conduct model training on 8 A100 (80G) GPUs, completing within 2 days; the detailed hyperparameter settings are shown in Table <ref>. The models are evaluated on image-understanding benchmarks including VQA-v2 [14], GQA [18], VizWiz [17], ScienceQA-IMG [34], TextVQA [45], POPE [30], MME [54], MMBench [33], LLaVA-Bench (In-the-Wild) [32], and MM-Vet [55]. Furthermore, we keep routing records in our model during testing: for each benchmark, we collect the number of expert activations per MoE layer and the total number of processed tokens.
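The test-time routing bookkeeping described above — counting expert activations per MoE layer and the total processed tokens — can be sketched as follows. The class and method names are illustrative assumptions, not the paper's actual implementation:

```python
class RoutingRecorder:
    """Accumulate expert-activation counts per MoE layer at test time.

    Illustrative sketch: we assume each MoE layer can report, for every
    token, the indices of the experts it activated.
    """

    def __init__(self, num_layers, num_experts):
        self.counts = [[0] * num_experts for _ in range(num_layers)]
        self.total_tokens = 0

    def record_batch(self, layer_idx, batch_routing):
        # batch_routing: one list of activated expert indices per token.
        for token_experts in batch_routing:
            for e in token_experts:
                self.counts[layer_idx][e] += 1
        if layer_idx == 0:
            # Count each token once, not once per layer.
            self.total_tokens += len(batch_routing)

    def frequencies(self, layer_idx):
        # Fraction of activations each expert received in this layer.
        total = sum(self.counts[layer_idx])
        return [c / total for c in self.counts[layer_idx]] if total else []
```

Normalising the per-layer counts by total activations gives the activation frequencies plotted in the appendix figures.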
Detailed training hyper-parameters and configuration. Settings shared by all three models: maximum experts 4; data LLaVA-Finetuning; image resolution 336 $\times$ 336; image encoder CLIP-Large/336; feature select layer -2; image projector linear layers with GeLU; 1 epoch; learning rate 2e-5 with cosine schedule; weight decay 0.0; precision Bf16.

Config             | StableLM              | Qwen                  | Phi-2
DeepSpeed          | Zero2                 | Zero2                 | Zero2_offload
Batch size per GPU | 8                     | 8                     | 4
GPUs               | 4 $\times$ A100 (80G) | 8 $\times$ A100 (80G) | 8 $\times$ A100 (80G)
§ ADDITIONAL EXPERIMENTS
In this section, we present the detailed results of our experiments on the GLUE benchmark [49] in Table <ref> and on the DomainNet dataset in Table <ref>. These results demonstrate that incorporating the specially designed diversity and simplicity loss significantly enhances the model's performance.
Performance of our method on language tasks: We investigate performance on the GLUE [49] benchmark, with BERT-large serving as the backbone model. The baselines include traditional MoE methods with different numbers of experts $K$ and top-$k$ values. In our implementation, we configure the maximum number of experts to 16, with an initial setting of 8 experts; the number of experts is dynamically adjusted in each epoch. The $-$ indicates experiment failure: final results could not be obtained using the GShard loss.
Algorithms         | CoLA  | MRPC  | QNLI  | MNLI  | RTE   | Average
MoE ($K=8, k=1$)   | 64.10 | 90.14 | 92.48 | 86.56 | 73.04 | 81.26
MoE ($K=8, k=2$)   | 64.51 | 90.19 | 92.39 | 86.70 | 74.85 | 81.73
MoE ($K=8, k=4$)   | 64.94 | 89.74 | 92.52 | 86.57 | 75.09 | 81.77
MoE ($K=8, k=8$)   | 64.03 | 89.36 | 92.46 | 86.61 | 74.37 | 81.37
MoE ($K=16, k=1$)  | 63.63 | 89.81 | 92.39 | 86.63 | 74.01 | 81.29
MoE ($K=16, k=2$)  | 64.71 | 90.18 | 92.53 | 86.73 | 72.32 | 81.29
MoE ($K=16, k=4$)  | 64.12 | 89.74 | 92.65 | 86.59 | 75.33 | 81.69
MoE ($K=16, k=8$)  | 64.37 | 90.35 | 92.49 | 86.51 | 73.53 | 81.45
Ours (GShard loss) | 64.88 | 89.85 | 92.42 | -     | 73.41 | -
Ours               | 65.17 | 90.64 | 92.59 | 86.37 | 73.41 | 81.64
Detailed results on DomainNet dataset: We report the detailed test results on each domain of the DomainNet dataset.
Algorithms                                             | clip | info | paint | quick | real | sketch | Average
GMoE (with our method, GShard loss)                    | 66.8 | 23.8 | 54.1  | 15.9  | 68.7 | 54.9   | 47.4
GMoE (with our method, diverse and simple gating loss) | 68.0 | 24.4 | 55.4  | 16.6  | 69.5 | 55.1   | 48.2
§ ADDITIONAL VISUALIZATION RESULTS
§.§ Activation Frequency
We present the activation frequency of experts across various MoE layers and evaluation tasks using different backbones: StableLM-1.6B (Figures <ref> and <ref>), Qwen-1.8B (Figures <ref> and <ref>), and Phi-2-2.7B (Figures <ref> and <ref>).
The results suggest that compared to the StableLM-1.6B backbone, experts are more uniformly activated for models utilizing Qwen-1.8B and Phi-2-2.7B as backbone LLMs.
Comparing the performance efficiency of models.
The $x$-axis represents the number of activated parameters, while the $y$-axis shows the performance on the Visual Question Answering (VQA) task.
Activation frequency of experts on various MoE layers and evaluation tasks using StableLM as backbone.
Activation frequency of experts on various MoE layers and evaluation tasks using StableLM as backbone.
Activation frequency of experts on various MoE layers and evaluation tasks using Qwen as backbone.
Activation frequency of experts on various MoE layers and evaluation tasks using Qwen as backbone.
Activation frequency of experts on various MoE layers and evaluation tasks using Phi-2 as backbone.
Activation frequency of experts on various MoE layers and evaluation tasks using Phi-2 as backbone.
§.§ Average Top-$k$
In Figures <ref> and <ref>, we illustrate the average top-$k$ of models using Qwen and Phi-2 as backbone LLMs.
Average top-$k$ activated experts on vision-language benchmarks, using Qwen as language backbone.
Average top-$k$ activated experts on vision-language benchmarks, using Phi-2 as language backbone.
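Given per-layer activation counts of the kind collected at test time, the average top-$k$ shown in these figures is simply activations per processed token. A minimal sketch with illustrative names:

```python
def average_top_k(counts_per_layer, total_tokens):
    """Average number of experts activated per token, per MoE layer.

    counts_per_layer[l][e] is how often expert e fired in layer l over
    total_tokens processed tokens (illustrative names). A fixed top-k
    router yields exactly k everywhere; a threshold-based adaptive
    router can vary across layers and benchmarks.
    """
    return [sum(layer) / total_tokens for layer in counts_per_layer]
```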
§.§ Layer-wise Expert Similarity Matrix
In Figures <ref>, <ref>, and <ref>, we illustrate the similarities between various expert representations, specifically, the different rows of $\mathbf{W}_{g}$ across multiple MoE layers. These comparisons utilize StableLM-1.6B, Qwen-1.8B, and Phi-2-2.7B as the backbone LLMs. The findings demonstrate that these expert representations are nearly orthogonal, suggesting that different experts capture diverse features, which could potentially enhance the model's capacity.
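The similarity matrices here are pairwise cosine similarities between rows of one layer's gating matrix; a dependency-free sketch (the argument `W_g` is given as a plain list of lists for self-containment):

```python
import math

def expert_similarity_matrix(W_g):
    """Pairwise cosine similarity between expert representations,
    i.e. between the rows of one MoE layer's gating matrix W_g.
    Near-zero off-diagonal entries mean near-orthogonal experts."""
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) *
                      math.sqrt(sum(b * b for b in v)))
    n = len(W_g)
    return [[cosine(W_g[i], W_g[j]) for j in range(n)] for i in range(n)]
```

With NumPy this collapses to `W @ W.T` after row-normalisation; the pure-Python version keeps the sketch self-contained.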
§.§ Visualization of $\mathbf{G}$
In Figures <ref>, <ref>, and <ref>, we present the values of the learned threshold $\mathbf{G}$, employing StableLM-1.6B, Qwen-1.8B, and Phi-2-2.7B as the backbone LLMs. The results reveal that for each MoE layer, there is one expert that is more readily activated. This observation is consistent with the design of DeepSeekMoE [7].
Layer-wise expert similarity matrix (StableLM). We record the experts' cosine similarity per layer during test time. It turns out the cosine similarity between experts is close to 0.
Layer-wise expert activation threshold (StableLM). Darker-colored experts are more likely to be activated compared to lighter-colored experts.
Layer-wise expert similarity matrix (Qwen). We record the experts' cosine similarity per layer during test time. It turns out the cosine similarity between experts is close to 0.
Layer-wise expert activation threshold (Qwen). Darker-colored experts are more likely to be activated compared to lighter-colored experts.
Layer-wise expert similarity matrix (Phi-2). We record the experts' cosine similarity per layer during test time. It turns out the cosine similarity between experts is close to 0.
Layer-wise expert activation threshold (Phi-2). Darker-colored experts are more likely to be activated compared to lighter-colored experts.
# Vertex-primitive $s$-arc-transitive digraphs admitting a Suzuki or Ree group
Lei Chen Michael Giudici Cheryl E. Praeger
Department of Mathematics and Statistics
The University of Western Australia
35 Stirling Highway, Perth WA 6009
Australia
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
###### Abstract
We study $G$-vertex-primitive and $(G,s)$-arc-transitive digraphs for almost
simple groups $G$ with socle ${}^{2}\mathrm{G}_{2}(3^{2n+1})$ or
$\mathrm{Sz}(2^{2n+1})$. It turns out that $s\leq 1$ for such digraphs. We
also construct examples with $s=1$ for each case.
## 1 Introduction
The property of $s$-arc-transitivity has been well-studied for many years.
Weiss [14] proved that finite undirected graphs that are not cycles can be at
most 7-arc-transitive. On the other hand, the third author [12] showed that
for each $s$ there are infinitely many finite $s$-arc-transitive digraphs that
are not $(s+1)$-arc-transitive.
However, vertex-primitive $s$-arc-transitive digraphs for large $s$ seem rare.
Though extensive attempts had been made to find a vertex-primitive $s$-arc-
transitive digraph for $s\geq 2$, no such examples were found until 2017 when
the second author, with Li and Xia, constructed an infinite family of $2$-arc
transitive examples in [5]. In [7], the second author and Xia asked the
following:
###### Question 1.1.
Is there an upper bound on $s$ for vertex-primitive $s$-arc-transitive
digraphs that are not directed cycles?
A group $G$ is said to be an _almost simple group_ if it has a unique _minimal
normal subgroup_ $T$ such that $T$ is a nonabelian simple group. This implies
(identifying $T$ with the group of inner automorphisms of $T$) that
$T\triangleleft G\leqslant\mathrm{Aut}(T)$. If an upper bound on $s$ in
Question 1.1 exists, then let us denote by $C$ the largest value of $s$ for
which there exists a $G$-vertex-primitive $(G,s)$-arc-transitive digraph with
$G$ an almost simple group. In [7], the second author and Xia proved that $C$
is also the least upper bound on $s^{\prime}$ for vertex-primitive
$s^{\prime}$-arc-transitive digraphs that are not directed cycles.
In [11], Pan, Wu and Yin proved that, if $G=\mathrm{S}_{m}$ or $\mathrm{A}_{m}$, then
$s\leq 2$ except for one subcase left open, and in [6], Giudici, Li and Xia
proved that, if $\mathrm{PSL}_{n}(q)\leqslant
G\leqslant\mathrm{Aut}(\mathrm{PSL}_{n}(q))$, then $s\leq 2$. This paper
determines an upper bound $s$ for vertex-primitive $s$-arc-transitive digraphs
whose automorphism groups are almost simple Ree and Suzuki groups. We
juxtapose the Suzuki groups and the Ree groups in this paper as many
similarities can be found between these two kinds of exceptional simple
groups: (1) the Suzuki groups $\mathrm{Sz}(2^{2n+1})$ bear a relation to the
symplectic groups $\mathrm{Sp}_{4}(2^{2n+1})$ similar to that of the Ree
groups ${}^{2}\mathrm{G}_{2}(3^{2n+1})$ to $\mathrm{G}_{2}(3^{2n+1})$; (2) the
maximal subgroup types of the Suzuki groups and the Ree groups are fairly
similar; (3) the only outer automorphisms of the two groups are field
automorphisms. Hence we are able to apply similar arguments to both. Our main
result is as follows.
###### Theorem 1.2.
Let $s$ be a non-negative integer and let $\Gamma$ be a $G$-vertex-primitive
$(G,s)$-arc-transitive digraph, where $G$ is almost simple with socle
${}^{2}\mathrm{G}_{2}(3^{2n+1})$ or $\mathrm{Sz}(2^{2n+1})$. Then $s\leq 1$.
In the next paragraph we remind readers of some terms mentioned above.
A _digraph_ $\Gamma$ is a pair $(V,\to)$ such that $V$ is the set of vertices
and $\to$ is an anti-symmetric and irreflexive relation on $V$. For a non-
negative integer $s$, we call a sequence $v_{0},v_{1},\dots,v_{s}$ in $V$ an
_$s$-arc_ if $v_{i}\to v_{i+1}$ for each $i\in\{0,1,\dots,s-1\}$. Note that
a 1-arc is simply called an _arc_. For $G\leqslant\mathrm{Aut}(\Gamma)$, we
say that $\Gamma$ is a $(G,s)$-arc-transitive digraph if $G$ acts transitively
on the set of $s$-arcs of $\Gamma$. We note that an $(s+1)$-arc-transitive
digraph is naturally $s$-arc-transitive if every $s$-arc extends to an
$(s+1)$-arc. A transitive subgroup $G\leqslant\mathrm{Sym}(\Omega)$ is said to
be primitive if it does not preserve any non-trivial partition of $\Omega$.
For $G\leqslant\mathrm{Aut}(\Gamma)$, we say that $\Gamma$ is _$G$-vertex-primitive_ if $G$ acts primitively on $V$. A digraph is said to be _finite_ if
$|V|$ is finite and all the digraphs we consider in this paper will be finite.
## 2 Preliminaries
### 2.1 Notation
We begin by defining some group theoretic notation:
For a group $X$, we denote by $\mathrm{Soc}(X)$ the socle of $X$, and by
$\Pi(X)$ the set of prime divisors of $|X|$. For a prime number $p$ and an
integer $n$, we denote by $n_{p}$ the $p-$part of $n$, which is the largest
power of $p$ dividing $n$.
The expression $n$ or $\mathrm{C}_{n}$ denotes a cyclic group of order $n$
while $[n]$ denotes an unspecified group of order $n$. The expression $p^{n}$
denotes an elementary abelian group of order $p^{n}$, that is, a direct
product of $n$ copies of $\mathrm{C}_{p}$.
Extensions of groups are written in one of the following ways: $A\times B$
denotes a direct product of $A$ and $B$; also $A:B$ denotes a semidirect
product of $A$ by $B$; and $A.B$ denotes an unspecified extension of $A$ by
$B$.
For groups $A$ and $B$ such that $B\leqslant A$, we denote by
$\mathrm{N}_{A}(B)$ the normaliser of $B$ in $A$, and $\mathrm{C}_{A}(B)$ the
centraliser of $B$ in $A$.
###### Lemma 2.1.
[6, Lemma 2.1] For any positive integer $n$ and prime $p$, we have
$(n!)_{p}<p^{\frac{n}{p-1}}$.
###### Definition 2.2.
Given integers $a,m\geq 2$, a prime $r$ is said to be a _primitive prime
divisor_ of $a^{m}-1$ if $r$ divides $a^{m}-1$ and does not divide $a^{i}-1$
for any $i<m$.
For $r$ a primitive prime divisor of $a^{m}-1$, we conclude by Fermat’s Little
Theorem that $r\equiv 1\pmod{m}$, and therefore $r>m$.
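For illustration, primitive prime divisors of $a^{m}-1$ can be computed directly. For instance, $3^{6}-1=728=2^{3}\cdot 7\cdot 13$ has the single primitive prime divisor $7$ (and indeed $7\equiv 1\pmod 6$), while $2^{6}-1$ has none, matching the exception in Lemma 2.3:

```python
def prime_factors(n):
    """Distinct prime factors of n (trial division; fine for small n)."""
    factors, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def primitive_prime_divisors(a, m):
    """Primes dividing a^m - 1 but no a^i - 1 with 1 <= i < m."""
    return {r for r in prime_factors(a ** m - 1)
            if all((a ** i - 1) % r for i in range(1, m))}
```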
###### Lemma 2.3.
[1, Theorem IX.8.3] For $a,m\geq 2$, there exists a primitive prime divisor of
$a^{m}-1$ except when $(a,m)=(2,6)$, or $a+1$ is a power of $2$ and $m=2$.
### 2.2 Group factorisations
A factorisation of a group $G$ is an expression of $G$ as the product of two
subgroups $A$ and $B$ of $G$, where $A$ and $B$ are called factors. A proper
group factorisation occurs when neither $A$ nor $B$ equals $G$.
###### Definition 2.4.
A factorisation $G=AB$ is called a _homogeneous factorisation_ of $G$ if it is
proper and $A$ is isomorphic to $B$.
We now give two technical lemmas, which will be useful later.
###### Lemma 2.5.
Suppose that $G=\langle H,x\rangle$ with $H\triangleleft G$, and let $m$ be
the smallest positive integer such that $x^{m}\in H$. Suppose that $K\leqslant
G$ and $K=AB$ is a homogeneous factorisation such that $B=A^{t}$ for some
$t\in G$, and let $\pi:G\to G/H$ denote the natural projection map. Then
$\pi(A)=\pi(B)=\pi(K)$.
###### Proof.
Note that $G/H=\langle Hx\rangle$ has order $m$ by the minimality of $m$.
Since $\pi(A)=AH/H\leqslant G/H$ and $G/H$ is cyclic, we conclude that
$\pi(A)=\langle Hx^{j}\rangle$ for some divisor $j$ of $m$. So there exists
$a\in A$ such that $\pi(a)=Hx^{j}$. Thus $Ha=Hx^{j}$, so $a=hx^{j}$ for some
$h\in H$.
As $B=A^{t}$ with $t\in G$, we have $a^{t}\in B$ and
$\pi(a^{t})=\pi(t^{-1})\pi(a)\pi(t)=\pi(a)=Hx^{j}$ since $G/H$ is abelian.
Hence $\pi(B)\geqslant\pi(A)$. The same argument with $A$ and $B$ interchanged
and $t$ replaced by $t^{-1}$, gives that $\pi(A)\geqslant\pi(B)$. Hence
$\pi(A)=\pi(B)$, and so $\pi(K)=\pi(A)\pi(B)=\pi(A)=\pi(B)$. ∎
###### Lemma 2.6.
Suppose that $G=AB$ with $G=\mathrm{PSL}_{2}(8):3$ such that $A,B$ are proper subgroups
of $G$. Then $|A|\neq|B|$.
###### Proof.
Suppose for a contradiction that there exists a factorisation $G=AB$ with
$G=\mathrm{PSL}_{2}(8):3$ and $|A|=|B|$. Then we deduce that
$|A|_{p}^{2}=|B|_{p}^{2}\geq|G|_{p}$ for any prime $p$. In particular,
$|A|_{2}\geq 2^{2}$, $|A|_{3}\geq 3^{2}$ and $|A|_{7}=7$. On the other hand,
either $A=A\cap\mathrm{PSL}_{2}(8)$ or $A=(A\cap\mathrm{PSL}_{2}(8)).3$, so we
conclude that $|A\cap\mathrm{PSL}_{2}(8)|$ is divisible by 2, 3 and
7. By [1, Corollary 5 and Table 10.7] there are no proper subgroups of
$\mathrm{PSL}_{2}(8)$ with order divisible by 2, 3 and 7, and hence
$A\cap\mathrm{PSL}_{2}(8)=\mathrm{PSL}_{2}(8)$. Similarly, we conclude that
$B\cap\mathrm{PSL}_{2}(8)=\mathrm{PSL}_{2}(8)$. However, since $|A|=|B|$, we
must have that $A=B=G$, which contradicts the fact that $A$ and $B$ are proper
subgroups of $G$. ∎
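The arithmetic underlying this proof is easy to double-check numerically: $|\mathrm{PSL}_{2}(8)|=504=2^{3}\cdot 3^{2}\cdot 7$ and $|G|=1512=2^{3}\cdot 3^{3}\cdot 7$, so the condition $|A|_{p}^{2}\geq|G|_{p}$ forces the stated lower bounds. An illustrative sketch:

```python
def p_part(n, p):
    """Largest power of p dividing n."""
    m = 1
    while n % p == 0:
        n, m = n // p, m * p
    return m

order_psl = 8 * (8 ** 2 - 1)   # |PSL_2(8)| = q(q^2 - 1) for q = 8 (q even)
order_g = 3 * order_psl        # |PSL_2(8):3|

assert order_psl == 504 and order_g == 1512
# |A|_p^2 >= |G|_p for each prime p forces the bounds used in the proof:
assert p_part(order_g, 2) == 8    # so |A|_2 >= 2^2
assert p_part(order_g, 3) == 27   # so |A|_3 >= 3^2
assert p_part(order_g, 7) == 7    # so |A|_7 = 7
```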
### 2.3 Arc-transitivity
We say that a group $G$ acts on a digraph $\Gamma$ if $G\leqslant\mathrm{Aut}(\Gamma)$.
Here are two results in [6] and [7] that reveal some important properties for
an $s$-arc-transitive digraph $\Gamma$ where $s\geq 2$.
###### Lemma 2.7.
[7, Lemma 2.2] Let $\Gamma$ be a digraph, and $v_{0}\rightarrow
v_{1}\rightarrow v_{2}$ be a $2$-arc of $\Gamma$. Suppose that $G$ acts arc-
transitively on $\Gamma$. Then $G$ acts $2$-arc-transitively on $\Gamma$ if
and only if $G_{v_{1}}=G_{v_{0}v_{1}}G_{v_{1}v_{2}}$. Moreover, there exists
some $t\in G$ such that $(G_{v_{0}v_{1}})^{t}=G_{v_{1}v_{2}}$.
###### Lemma 2.8.
[6, Lemma 2.14] Let $\Gamma$ be a connected $G$-arc-transitive digraph with
arc $v\rightarrow w$. Let $g\in G$ such that $v^{g}=w$. Then $g$ normalises no
proper nontrivial normal subgroup of $G_{v}$.
We now set out the following hypothesis that we will use throughout the paper.
###### Hypothesis 2.9.
Let $\Gamma$ be a vertex-primitive $(G,s)$-arc-transitive digraph for some
$s\geq 2$, and let $u\to v\to w$ be a $2$-arc and $g\in G$ such that
$(u,v)^{g}=(v,w)$. Then by Lemma 2.7, $(G_{uv})^{g}=G_{vw}$ and
$G_{v}=G_{uv}G_{vw}$ is a homogeneous factorisation.
Note that necessary conditions for a digraph $\Gamma$ to be $G$-vertex-
primitive and $(G,2)$-arc-transitive are that $G_{v}$ is a maximal core-free
subgroup of $G$ and that $G_{v}$ admits a homogeneous factorisation.
Therefore, to disprove the 2-arc-transitivity of $G$ it suffices for us to
show that a maximal core-free subgroup $G_{v}$ does not have a homogeneous
factorisation.
We have the following corollary to Lemma 2.8.
###### Corollary 2.10.
Suppose that Hypothesis 2.9 holds. Then, for each prime $p$ dividing
$|G_{v}|$, $G_{v}$ has at least two subgroups of order $p$.
###### Proof.
Suppose for a contradiction that there exists a prime $p$ such that $G_{v}$
has a unique subgroup $Q$ of order $p$. Then $Q$ is normal in $G_{v}$. By
Hypothesis 2.9, $G_{v}=AB$ where $A=G_{uv}$ and $B=G_{vw}$, so $|G_{v}|$
divides $|A|\cdot|B|=|A|^{2}$ and hence $|A|_{p}=|B|_{p}\geq p$. Hence both of
$A$ and $B$ contain the unique subgroup $Q$ of order $p$, and as $A^{g}=B$, we
have $Q^{g}\leqslant B\leqslant G_{v}$ and therefore $Q^{g}=Q$, contradicting
Lemma 2.8. ∎
## 3 The small Ree groups
Suppose that $\Gamma$ is a $G$-vertex-primitive, $(G,s)$-arc-transitive
digraph such that $\mathrm{Soc}(G)={}^{2}\mathrm{G}_{2}(q)$ with $q=3^{2n+1}$,
for some $n\geq 1$ and $s\geq 1$. Since the action of $G$ on $\Gamma$ is
vertex-primitive, the vertex stabiliser $G_{v}$ is maximal in $G$ and does not
contain ${}^{2}\mathrm{G}_{2}(q)$. The following list of the maximal subgroups
of ${}^{2}\mathrm{G}_{2}(q)$ may be found in [16].
###### Theorem 3.1.
[16, Theorem 4.2] If $q=3^{2n+1}$ with $n\geq 1$, then the maximal subgroups
of ${}^{2}\mathrm{G}_{2}(q)$ are (up to conjugacy):
(i) $[q^{3}]:\mathrm{C}_{q-1}$,
(ii) $2\times\mathrm{PSL}_{2}(q)$,
(iii) $(2^{2}\times\mathrm{D}_{\frac{q+1}{2}}):3$,
(iv) $\mathrm{C}_{q-\sqrt{3q}+1}:6$,
(v) $\mathrm{C}_{q+\sqrt{3q}+1}:6$,
(vi) ${}^{2}\mathrm{G}_{2}(q_{0})$, where $q=q_{0}^{r}$ and $r$ is prime.
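As a quick sanity check on the torus orders in types (iv) and (v): since $q=3^{2n+1}$, $\sqrt{3q}=3^{n+1}$ is an integer, and the two orders multiply to $q^{2}-q+1$, which divides $|{}^{2}\mathrm{G}_{2}(q)|=q^{3}(q^{3}+1)(q-1)$ via $q^{3}+1=(q+1)(q^{2}-q+1)$. The check below is purely illustrative:

```python
# For q = 3^(2n+1), sqrt(3q) = 3^(n+1) is an integer, and the cyclic
# tori of types (iv) and (v) have orders multiplying to q^2 - q + 1,
# a factor of |2G2(q)| = q^3 (q^3 + 1)(q - 1) since
# q^3 + 1 = (q + 1)(q^2 - q + 1).
for n in range(1, 6):
    q = 3 ** (2 * n + 1)
    s = 3 ** (n + 1)                 # s = sqrt(3q)
    assert s * s == 3 * q
    assert (q - s + 1) * (q + s + 1) == q * q - q + 1
    assert q ** 3 + 1 == (q + 1) * (q * q - q + 1)
```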
Since $\mathrm{Aut}({}^{2}\mathrm{G}_{2}(q))={}^{2}\mathrm{G}_{2}(q):(2n+1)$,
and ${}^{2}\mathrm{G}_{2}(3^{2n+1})\leq
G\leq\mathrm{Aut}({}^{2}\mathrm{G}_{2}(q))$, we have
$G={}^{2}\mathrm{G}_{2}(q):m$, for some divisor $m$ of $2n+1$, and a vertex-
stabiliser $G_{v}$ is maximal in $G$ and does not contain
${}^{2}\mathrm{G}_{2}(q)$. The subgroups of $G$ with these properties are the
following:
###### Corollary 3.2.
For $G={}^{2}\mathrm{G}_{2}(3^{2n+1}):m$, where $m$ divides $2n+1$, the maximal
subgroups of $G$ not containing ${}^{2}\mathrm{G}_{2}(3^{2n+1})$ are (up to conjugacy):
(i) $([q^{3}]:\mathrm{C}_{q-1}):m$,
(ii) $(2\times\mathrm{PSL}_{2}(q)):m$,
(iii) $((2^{2}\times\mathrm{D}_{\frac{q+1}{2}}):3).m$,
(iv) $(\mathrm{C}_{q-\sqrt{3q}+1}:6).m$,
(v) $(\mathrm{C}_{q+\sqrt{3q}+1}:6).m$,
(vi) ${}^{2}\mathrm{G}_{2}(q_{0}):m$, where $q=q_{0}^{r}$ and $r$ is prime.
For the rest of this section we assume that $s\geq 2$, and hence Hypothesis
2.9 holds for $G={}^{2}\mathrm{G}_{2}(3^{2n+1}):m$, where $q=3^{2n+1}>3$ and $m$
divides $2n+1$, and we let $L=\mathrm{Soc}(G)={}^{2}\mathrm{G}_{2}(q)$. We
consider separately each of the possibilities for the maximal subgroup $G_{v}$
according to Corollary 3.2, and in each case derive a contradiction, hence
proving that $s\leq 1$.
We let $\pi$ be the natural projection map $\pi:G_{v}\rightarrow G_{v}/L_{v}$.
Note that since $G=LG_{v}$ we have $\pi(G_{v})\cong G/L\cong C_{m}$. We note
in particular that, by Hypothesis 2.9, $G_{v}$ has a homogeneous factorisation
$G_{v}=AB$ where $A=G_{uv}$ and $B=G_{vw}$ with $A^{g}=B$ for some $g\in G$.
(1)
This implies, first that $\Pi(A)=\Pi(B)=\Pi(G_{v})$, and secondly, by
Corollary 2.10, that for each prime $p$ dividing $|G_{v}|$, $G_{v}$ has at
least two subgroups of order $p$. We use these facts several times in our
arguments.
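As a concrete illustration (a numerical sanity check only, not part of the proofs), the existence of such primitive prime divisors, and the bound $p>2(2n+1)$ used below, can be verified directly for small $n$:

```python
def prime_factors(n):
    """Set of prime divisors of n, by trial division (fine at this size)."""
    fac, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fac.add(d)
            n //= d
        d += 1
    if n > 1:
        fac.add(n)
    return fac

def primitive_prime_divisors(base, k):
    """Primes dividing base**k - 1 but no base**j - 1 with 1 <= j < k."""
    ppd = prime_factors(base**k - 1)
    for j in range(1, k):
        ppd -= prime_factors(base**j - 1)
    return ppd

# Lemma 2.3 guarantees a primitive prime divisor p of 3^(2(2n+1)) - 1 with
# p > 2(2n+1); check this directly for small n
for n in range(1, 4):
    k = 2 * (2 * n + 1)
    ppds = primitive_prime_divisors(3, k)
    assert ppds and all(p > k for p in ppds)
```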
###### Lemma 3.3.
$G_{v}$ is not a Type (ii) subgroup of $G$.
###### Proof.
Suppose to the contrary that $G_{v}$ is a Type (ii) subgroup of $G$, and
consider the homogeneous factorisation $G_{v}=AB$ in (1), so
$\Pi(A)=\Pi(B)=\Pi(G_{v})$. Let $S$ and $T$ denote the subgroups of
$L_{v}=L\cap G_{v}$ isomorphic to 2 and $\mathrm{PSL}_{2}(q)$, respectively.
Then $L_{v}=S\times T$.
Note that, by Lemma 2.3, there exists $p\in\Pi(G_{v})$ such that $p$ is a
primitive prime divisor of $3^{2(2n+1)}-1$, which is greater than $2(2n+1)$.
Hence $|A|$ is divisible by $p$ and, in particular, $p$ divides the order of
$A_{1}:=A\cap T=A\cap\mathrm{PSL}_{2}(q)$. We also notice that
$\frac{|A||B|}{|A\cap B|}=|AB|=|G_{v}|$
and therefore,
$|A|_{3}^{2}\geq|G_{v}|_{3}=|\mathrm{PSL}_{2}(q)|_{3}m_{3}=3^{2n+1}m_{3}$.
Thus $|A|_{3}\geq 3^{\frac{2n+1}{2}}m_{3}^{\frac{1}{2}}$. However,
$m_{3}\leq(2n+1)_{3}\leq 3^{\frac{2n+1}{3}}\leq 3^{n}$, and
$|A|_{3}=|A_{1}|_{3}|\pi(A)|_{3}\leq|A_{1}|_{3}m_{3}$. Hence
$\displaystyle|A_{1}|_{3}$
$\displaystyle\geq\frac{|A|_{3}}{m_{3}}\geq\frac{3^{\frac{2n+1}{2}}m_{3}^{\frac{1}{2}}}{m_{3}}=3^{\frac{2n+1}{2}}m_{3}^{-\frac{1}{2}}\geq
3^{\frac{2n+1}{2}}3^{-\frac{n}{2}}=3^{\frac{n+1}{2}}>1.$
Thus $\\{3,p\\}\subseteq\Pi(A_{1})$. By [10, Theorem 4 and Table 10.3], there
are no proper subgroups of $\mathrm{PSL}_{2}(q)$ with order divisible by both
3 and $p$, and hence $A_{1}=\mathrm{PSL}_{2}(q)=T$. On the other hand, since
$A^{g}=B$, we have $T^{g}\leqslant A^{g}=B$. However, $T$ is the unique
subgroup in $G_{v}$ isomorphic to $\mathrm{PSL}_{2}(q)$, so this implies that
$T^{g}=T$, which is a contradiction to Lemma 2.8. ∎
###### Lemma 3.4.
$G_{v}$ is not a Type (iii) subgroup of $G$.
###### Proof.
Suppose for a contradiction that $G_{v}$ is a Type (iii) subgroup of $G$, and
again consider the homogeneous factorisation $G_{v}=AB$ in (1) which implies
that, for each $p$ dividing $|G_{v}|$, $G_{v}$ has more than one subgroup of
order $p$. We denote by $S$ and $T$ the normal subgroups of $L_{v}=L\cap
G_{v}$ isomorphic to $2^{2}$ and $\mathrm{D}_{\frac{q+1}{2}}$, respectively,
so that $L_{v}=(S\times T):3$.
By Lemma 2.3 there exists a primitive prime divisor $p$ of
$3^{2(2n+1)}-1=q^{2}-1$. Note that $p\neq 3$, and also $p$ divides $q+1$, and
$p$ is odd (as $p$ does not divide $q-1$). Hence
$p\in\Pi(\mathrm{D}_{\frac{q+1}{2}})\subseteq\Pi(G_{v})$. Since $p>2(2n+1)\geq
2m$ and $p\neq 3$, any subgroup $Q$ of $G_{v}$ of order $p$ must lie in $T$.
Since $T=\mathrm{D}_{\frac{q+1}{2}}$ is dihedral, this implies that $Q$ is the
unique subgroup of order $p$ in $T$ and hence in $G_{v}$. However, this
contradicts Corollary 2.10 and therefore the result follows. ∎
###### Lemma 3.5.
$G_{v}$ is neither a Type (iv) subgroup nor a Type (v) subgroup of $G$.
###### Proof.
Suppose for a contradiction that $G_{v}$ is a Type (iv) or (v) subgroup of
$G$. Recall, as discussed above, that for each prime $p$ dividing $|G_{v}|$,
$G_{v}$ has more than one subgroup of order $p$. We denote by $S$ and $T$ the
(unique) cyclic subgroups of $L_{v}=L\cap G_{v}$ of order $q\pm\sqrt{3q}+1$
and 6, respectively, so that $L_{v}=S:T$. Since $q\pm\sqrt{3q}+1$ is not
divisible by 2 or 3, we see that $|S|$ and $|T|$ are coprime.
By Lemma 2.5 we have that $\pi(A)=\pi(B)=\pi(G_{v})=\mathrm{C}_{m}$. Thus
$|A\cap L_{v}|=|B\cap L_{v}|$. Let $p$ be a prime dividing $|S|$. Then there
is a unique subgroup $Q_{p}\leqslant S$ of order $p$. We note that since $|S|$
and $|T|$ are coprime, $Q_{p}$ is the unique subgroup in $L_{v}$ of order $p$.
###### Claim 1.
$|A\cap L_{v}|_{p}=1$.
Suppose for a contradiction that $|A\cap L_{v}|_{p}\geq p$. Then $A\cap L_{v}$ has a subgroup of order $p$. This subgroup must be $Q_{p}$ as it is the unique
subgroup of order $p$ in $L_{v}$. On the other hand, since $|A\cap
L_{v}|=|B\cap L_{v}|$, we find that $Q_{p}\leqslant B\cap L_{v}$ as well. We
note that $A^{g}=B$, so $Q_{p}^{g}\leqslant(A\cap L_{v})^{g}=B\cap
L_{w}\leqslant B\cap L\leqslant G_{v}\cap L=L_{v}$. This implies that
$Q_{p}^{g}=Q_{p}$. However, this contradicts Lemma 2.8 and therefore Claim 1
holds.
By Claim 1 we conclude that $|A\cap L_{v}|\leq 6$. This implies that
$|A|=|A\cap L_{v}||\pi(A)|\leq 6m$. Suppose first that $n=1$. Then $m\leq 3$,
$q=3^{3}$ and $|A|\leq 18$. Thus $|G_{v}|$ is either divisible by
$37=q+\sqrt{3q}+1$ or by $19=q-\sqrt{3q}+1$. However, $|A|$ is divisible by
neither 37 nor 19 since $|A|\leq 18$. Hence $G_{v}$ does not have a
homogeneous factorisation when $n=1$. Thus $n\geq 2$ and so
$q-\sqrt{3q}+1=3^{2n+1}-3^{n+1}+1\geq 9(2n+1).$
Since $G_{v}=AB$, we have that $|A|\cdot|B|=|G_{v}|\cdot|A\cap B|$. However,
$\displaystyle|G_{v}|\cdot|A\cap B|$ $\displaystyle\geq(q\pm\sqrt{3q}+1)\cdot
6m\geq 9(2n+1)\cdot 6m>(6m)^{2}\geq|A|\cdot|B|.$
So we have a contradiction and the result follows. ∎
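The numerical facts used in this proof are elementary to check. The following Python sketch (purely illustrative, not part of the argument) confirms the $n=1$ values $37=q+\sqrt{3q}+1$ and $19=q-\sqrt{3q}+1$, and the inequality $3^{2n+1}-3^{n+1}+1\geq 9(2n+1)$ for $n\geq 2$:

```python
# n = 1: here q = 27 and sqrt(3q) = 9, giving the two cyclic orders 37 and 19
q = 3**3
assert q + 9 + 1 == 37 and q - 9 + 1 == 19

# n >= 2: q - sqrt(3q) + 1 = 3^(2n+1) - 3^(n+1) + 1 >= 9(2n+1),
# which drives the final counting contradiction |G_v|·|A ∩ B| > |A|·|B|
for n in range(2, 50):
    q = 3**(2*n + 1)
    assert q - 3**(n + 1) + 1 >= 9 * (2*n + 1)
```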
###### Lemma 3.6.
$G_{v}$ is not a Type (vi) subgroup of $G$.
###### Proof.
Suppose for a contradiction that $G_{v}$ is a Type (vi) subgroup of $G$, so
$G_{v}=H.m$, where $H=L_{v}=L\cap G_{v}={}^{2}\mathrm{G}_{2}(q_{0})$, with
$3^{2n+1}=q=q_{0}^{r}$ for some prime $r$, such that $r,m$ both divide $2n+1$.
Now $G_{v}$ has a homogeneous factorisation $G_{v}=AB$ where $A=G_{uv}$ and
$B=G_{vw}$ with $B=A^{g}$ for some $g\in G$. Let $X:=A\cap L_{v}$ and
$Y:=B\cap L_{v}$. It follows from Lemma 2.5 that
$\pi(A)=\pi(B)=\mathrm{C}_{m}$. We divide the analysis into two cases:
_Case $1$: $2n+1$ is not prime._ In this case $q_{0}=3^{(2n+1)/r}>3$. Let $C$
be the centraliser of $H={}^{2}\mathrm{G}_{2}(q_{0})$ in $G_{v}$. Then
$\mathrm{Aut}(H)\geq G_{v}/C=(AB)/C=(AC/C)(BC/C)\geq
HC/C\cong{}^{2}\mathrm{G}_{2}(q_{0})$. All the core-free factorisations of an
almost simple group with socle an exceptional group of Lie type are given in
[9, Theorem B], and it follows that $G_{v}/C$ does not have a core-free
factorisation since $q_{0}>3$. Hence ${}^{2}\mathrm{G}_{2}(q_{0})$ is
contained in one of $AC/C$ or $BC/C$. Without loss of generality, we may
assume that ${}^{2}\mathrm{G}_{2}(q_{0})\leq AC/C$. This together with the
fact that $H\cap C=1$ implies that $H\leqslant A$. On the other hand,
$H^{g}\leqslant A^{g}=B$, and since $H$ is the unique subgroup of $G_{v}$
isomorphic to ${}^{2}\mathrm{G}_{2}(q_{0})$, we conclude that $H^{g}=H$, which
contradicts Lemma 2.8.
_Case $2$: $2n+1$ is prime._ In this case, $m\in\\{1,2n+1\\}$ and $r=2n+1$, so
$q_{0}=3$ and $H=L_{v}=H^{\prime}:3$ with $H^{\prime}\cong\mathrm{PSL}_{2}(8)$. If
$m=1$, then $AB$ is a homogeneous factorisation for
$H={}^{2}\mathrm{G}_{2}(3)=\mathrm{PSL}_{2}(8):3$, but no such factorisation
exists by Lemma 2.6. Hence $m=2n+1$ and we have
$\Pi(A)=\Pi(G_{v})=\Pi(H)\cup\\{m\\}=\\{2,3,7,m\\}$ (with possibly
$m\in\\{3,7\\}$).
Recall that $X=A\cap H=A\cap L_{v}$. Suppose that $\Pi(X\cap
H^{\prime})=\\{2,3,7\\}$. By [1, Corollary 5 and Table 10.7] there are no
proper subgroups of $\mathrm{PSL}_{2}(8)$ with order divisible by 2, 3 and 7,
and hence $X\cap H^{\prime}=H^{\prime}\cong\mathrm{PSL}_{2}(8)$ so
$H^{\prime}\leqslant A$. It follows that $(H^{\prime})^{g}\leqslant A^{g}=B$,
and since $H^{\prime}$ is the unique subgroup of $G_{v}$ isomorphic to
$\mathrm{PSL}_{2}(8)$, we conclude that $(H^{\prime})^{g}=H^{\prime}$,
contradicting Lemma 2.8. Thus $\Pi(X\cap H^{\prime})$ is a proper subset of
$\\{2,3,7\\}$.
Further, since $|G_{v}:H^{\prime}|=3m$ is odd, we have $|X\cap
H^{\prime}|_{2}=|A|_{2}\geq|G_{v}|_{2}^{1/2}=2^{3/2}$, and hence $|X\cap
H^{\prime}|_{2}\geq 2^{2}$, so $2\in\Pi(X\cap H^{\prime})$. If $\Pi(X\cap
H^{\prime})=\\{2\\}$ then, since $|A|/|X|$ divides $3m$, it follows that
$\Pi(A)\subseteq\\{2,3,m\\}$ and since $\Pi(A)=\\{2,3,7,m\\}$ we conclude that
$m=7$ and $3=|A|_{3}<3^{3/2}=|G_{v}|_{3}^{1/2}$, which is a contradiction.
Therefore $\Pi(X\cap H^{\prime})=\\{2,p\\}$ for some $p\in\\{3,7\\}$. If $p=3$
then $X\cap H^{\prime}$ is a subgroup of $H^{\prime}=L_{2}(8)$ of order
divisible by 12 and dividing $72$. However there are no such subgroups, see
for example [3, p. 6]. Therefore $\Pi(X\cap H^{\prime})=\\{2,7\\}$, and $X\cap
H^{\prime}$ is a subgroup of $H^{\prime}=L_{2}(8)$ of order divisible by 28.
It follows from [3, p. 6] that $X\cap H^{\prime}=[2^{3}]:7$ since this group
has no subgroups of index 2. The same argument gives $Y\cap
H^{\prime}\cong[2^{3}]:7$.
If the prime $m\neq 3$, then $|X|_{3}=|A|_{3}\geq|G_{v}|_{3}^{1/2}=3^{3/2}$ so
that $|X\cap H^{\prime}|_{3}\geq|X|_{3}/3\geq 3$, which is a contradiction.
Hence $m=3$. In this case, $G_{v}=L_{v}\times\langle
z\rangle\cong(\mathrm{PSL}_{2}(8):3)\times 3$, where $z$ is a field
automorphism of order 3. Since $AB=G_{v}=L_{v}\times\langle z\rangle$, we have
that $\mathrm{PSL}_{2}(8):3=L_{v}\cong G_{v}/\langle z\rangle=(A\langle z\rangle/\langle z\rangle)(B\langle z\rangle/\langle z\rangle)$. Now $A\langle
z\rangle/\langle z\rangle$ has a normal subgroup $(X\cap H^{\prime})\langle
z\rangle/\langle z\rangle\cong[2^{3}]:7$, and similarly $B\langle
z\rangle/\langle z\rangle$ has a normal subgroup $[2^{3}]:7$. However, by [9,
Theorem A], there are no such factorisations of $\mathrm{PSL}_{2}(8):3$. This
completes the proof. ∎
###### Theorem 3.7.
Suppose that $\Gamma$ is a $G$-vertex-primitive $(G,s)$-arc-transitive digraph
such that $\mathrm{Soc}(G)={}^{2}\mathrm{G}_{2}(q)$ with $q=3^{2n+1}$, for
some $n\geq 1$. Then $s\leq 1$.
###### Proof.
Suppose for a contradiction that $s\geq 2$. Then the conditions of Hypothesis
2.9 hold with $\mathrm{Soc}(G)={}^{2}\mathrm{G}_{2}(q)$. Since $G$ acts
vertex-primitively on $\Gamma$, the vertex stabiliser $G_{v}$ is a maximal
subgroup of $G$ and so is given by Corollary 3.2. By Lemmas 3.3, 3.4, 3.5 and
3.6, $G_{v}$ cannot be of types (ii)–(vi). Hence $G_{v}$ is of type (i).
However, in this case, $G$ acts 2-transitively on the set of right cosets of
$G_{v}$ in $G$, which implies that $\Gamma$ is an undirected graph,
contradicting it being a digraph. Hence the result follows. ∎
## 4 Suzuki groups
Suppose now that $\Gamma$ is a $G$-vertex-primitive, $(G,s)$-arc-transitive digraph such that $\mathrm{Soc}(G)=\mathrm{Sz}(q)$ with $q=2^{2n+1}$, for some $n\geq 1$ and $s\geq 1$. Again, since the action of $G$ on $\Gamma$ is vertex-primitive, a vertex stabiliser $G_{v}$ is maximal in $G$. The following list of the maximal subgroups of $\mathrm{Sz}(q)$ may be found in the book [2].
###### Theorem 4.1.
[2, p 385] If $q=2^{2n+1}$ with $n\geq 1$, then the maximal subgroups of
$\mathrm{Sz}(q)$ are (up to conjugacy):
(i) $[q^{2}]:(q-1)$,
(ii) $\mathrm{D}_{2(q-1)}$,
(iii) $\mathrm{C}_{q+\sqrt{2q}+1}:4$,
(iv) $\mathrm{C}_{q-\sqrt{2q}+1}:4$,
(v) $\mathrm{Sz}(q_{0})$, where $q=q_{0}^{r}$, $r$ is prime and $q_{0}>2$.
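The same kind of numerical sanity check applies here (assuming the standard order formula $|\mathrm{Sz}(q)|=q^{2}(q^{2}+1)(q-1)$): each subgroup order in (i)–(iv) divides the group order, and the two cyclic orders in (iii) and (iv) are odd and multiply to $q^{2}+1$. A Python sketch:

```python
# |Sz(q)| = q^2 (q^2 + 1)(q - 1) with q = 2^(2n+1); the cyclic subgroups in
# (iii) and (iv) have odd orders multiplying to q^2 + 1
for n in range(1, 12):
    q = 2**(2*n + 1)
    r = 2**(n + 1)                       # r = sqrt(2q)
    assert r * r == 2 * q
    order = q*q * (q*q + 1) * (q - 1)
    for sub in [q*q * (q - 1),           # (i)   [q^2]:(q-1)
                2 * (q - 1),             # (ii)  D_{2(q-1)}
                4 * (q + r + 1),         # (iii) C_{q+sqrt(2q)+1}:4
                4 * (q - r + 1)]:        # (iv)  C_{q-sqrt(2q)+1}:4
        assert order % sub == 0
    assert (q + r + 1) * (q - r + 1) == q*q + 1
    assert (q + r + 1) % 2 == 1 and (q - r + 1) % 2 == 1
```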
Since $\mathrm{Aut}(\mathrm{Sz}(q))=\mathrm{Sz}(q):(2n+1)$ and
$\mathrm{Sz}(q)\leq G\leq\mathrm{Sz}(q):(2n+1)$, we have $G=\mathrm{Sz}(q):m$
for some divisor $m$ of $2n+1$, and a vertex stabiliser does not contain
$\mathrm{Sz}(q)$. The maximal such subgroups of $G$ are the following:
###### Corollary 4.2.
For $G=\mathrm{Sz}(q):m$, where $m$ divides $2n+1$, the maximal subgroups of
$G$ not containing $\mathrm{Sz}(q)$ are (up to conjugacy):
(i) $([q^{2}]:(q-1)).m$,
(ii) $\mathrm{D}_{2(q-1)}.m$,
(iii) $((q+\sqrt{2q}+1):4).m$,
(iv) $((q-\sqrt{2q}+1):4).m$,
(v) $\mathrm{Sz}(q_{0}).m$, where $q=q_{0}^{r}$, $r$ is prime, and $q_{0}>2$.
For the rest of this section we assume that Hypothesis 2.9 holds for
$G=\mathrm{Sz}(q):m$, where $q=2^{2n+1}$ and $m$ divides $2n+1$, and we let
$L=\mathrm{Soc}(G)=\mathrm{Sz}(q)$. Let $\pi:G_{v}\rightarrow G_{v}/L_{v}$ be
the natural projection map. Note that since $G=LG_{v}$ we have that
$\pi(G_{v})\cong G/L\cong C_{m}$.
We consider separately each of the possibilities for the maximal subgroup
$G_{v}$ according to Corollary 4.2. We note in particular that, by Hypothesis
2.9, $G_{v}$ has a homogeneous factorisation $G_{v}=AB$ where $A=G_{uv}$ and
$B=G_{vw}$ with $A^{g}=B$ for some $g\in G$. This implies, by Corollary 2.10,
that for each prime $p$ dividing $|G_{v}|$, $G_{v}$ has at least two subgroups
of order $p$. We use these facts several times in our arguments.
###### Lemma 4.3.
Suppose that Hypothesis 2.9 holds with $\mathrm{Soc}(G)=\mathrm{Sz}(q)$. Then
$G_{v}$ is not a Type (ii) subgroup of $G$.
###### Proof.
Suppose to the contrary that $G_{v}$ is a Type (ii) subgroup of $G$. Then
$L_{v}\cong D_{2(q-1)}$. By Lemma 2.3 there exists a primitive prime divisor
$p$ of $2^{2n+1}-1$, and as noted before Lemma 2.3, $p$ satisfies $p>2n+1\geq
m$. Since $p$ is odd and $p>m$, any subgroup of $G_{v}$ of order $p$ lies in the cyclic subgroup $\mathrm{C}_{q-1}$ of $L_{v}$, and hence $G_{v}$ has a unique subgroup $Q_{p}$ of order $p$, which is a contradiction, as noted above. ∎
###### Lemma 4.4.
Suppose that Hypothesis 2.9 holds with $\mathrm{Soc}(G)=\mathrm{Sz}(q)$. Then
$G_{v}$ is neither a Type (iii) subgroup nor a Type (iv) subgroup of $G$.
###### Proof.
Suppose to the contrary that $G_{v}$ is a Type (iii) or Type (iv) subgroup of
$G$. As above we have $G_{v}=AB$ with $A^{g}=B$ for some $g\in G$. It follows
from Lemma 2.5 that $\pi(A)=\pi(B)=\pi(G_{v})\cong\mathrm{C}_{m}$, and hence
$|A\cap L_{v}|=|B\cap L_{v}|$.
Let $S$ and $T$ denote cyclic subgroups of $L_{v}=L\cap G_{v}$ of orders
$q\pm\sqrt{2q}+1$ and 4, respectively, such that $L_{v}=S:T$. Since
$q\pm\sqrt{2q}+1$ is an odd integer, the orders $|S|$ and $|T|$ are coprime.
Let $p$ be a prime dividing $|S|$, and note that the cyclic group $S$ has a
unique subgroup $Q_{p}$ of order $p$, and that $Q_{p}$ is the unique subgroup
of order $p$ in $L_{v}$. If $p$ divides $|A\cap L_{v}|$, then $A\cap L_{v}$
contains $Q_{p}$, and since $|A\cap L_{v}|=|B\cap L_{v}|$, also
$Q_{p}\leqslant B\cap L_{v}$. Moreover, since $A^{g}=B$, it follows that
$Q_{p}^{g}$ is also a subgroup of $B\cap L_{v}$ of order $p$, and so
$Q_{p}^{g}=Q_{p}$. However, this contradicts Lemma 2.8, and therefore $p$ does
not divide $|A\cap L_{v}|$. Since this holds for all primes $p$ dividing
$|S|$, we conclude that $|A\cap L_{v}|$ divides $|T|=4$.
Since $A/(A\cap L_{v})\cong AL_{v}/L_{v}\leq G_{v}/L_{v}\cong\mathrm{C}_{m}$,
it follows that $|A|$ divides $4m$, and hence $G_{v}=AB$ has order dividing
$|A|\cdot|B|=|A|^{2}$, which divides $16m^{2}$. On the other hand
$|G_{v}|=4m(q\pm\sqrt{2q}+1)$, and hence the odd integer $q\pm\sqrt{2q}+1$
divides $m$. This is impossible since $q\pm\sqrt{2q}+1\geq
2^{2n+1}-2^{n+1}+1>2n+1\geq m$, for all $n\geq 1$. This contradiction
completes the proof. ∎
###### Lemma 4.5.
Suppose that Hypothesis 2.9 holds with $\mathrm{Soc}(G)=\mathrm{Sz}(q)$. Then
$G_{v}$ is not a Type (v) subgroup of $G$.
###### Proof.
Suppose to the contrary that $G_{v}=\mathrm{Sz}(q_{0}).m$, where
$q=q_{0}^{r}$, for some prime $r$ dividing $2n+1$. As above we have $G_{v}=AB$
with $A^{g}=B$ for some $g\in G$. Let $C$ denote the centraliser of
$\mathrm{Sz}(q_{0})$ in $G_{v}$. Then
$\mathrm{Aut}(\mathrm{Sz}(q_{0}))\geq G_{v}/C=(AB)/C=(AC/C)(BC/C)\geq\mathrm{Sz}(q_{0}).$
However, $G_{v}/C$ does not have a core-free factorisation by [9, Theorem B],
and therefore one of the factors, say $AC/C$, contains $\mathrm{Sz}(q_{0})$.
This, together with the fact that $L_{v}\cap C=1$, implies that
$\mathrm{Sz}(q_{0})=L_{v}\leqslant A$. Moreover, since $A^{g}=B$, we have
$L_{v}^{g}\leqslant A^{g}=B$. However $L_{v}$ is the only subgroup of $G_{v}$
isomorphic to $\mathrm{Sz}(q_{0})$, and hence $L_{v}^{g}=L_{v}$. This contradicts Lemma
2.8, and completes the proof. ∎
We now collect these results to prove the following theorem.
###### Theorem 4.6.
Suppose that $\Gamma$ is a $G$-vertex-primitive $(G,s)$-arc-transitive digraph
such that $\mathrm{Soc}(G)=\mathrm{Sz}(q)$ with $q=2^{2n+1}$ for some positive
integer $n$. Then $s\leq 1$.
###### Proof.
Suppose for a contradiction that $s\geq 2$. Then the conditions of Hypothesis
2.9 hold with $\mathrm{Soc}(G)=\mathrm{Sz}(q)$. Since $G$ acts vertex-
primitively on $\Gamma$, the vertex stabiliser $G_{v}$ is a maximal subgroup
of $G$ and so is given by Corollary 4.2. By Lemmas 4.3, 4.4 and 4.5, $G_{v}$
is not of type (ii)–(v), and so $G_{v}$ must be of type (i). However, in
this case $G$ acts 2-transitively on the set of right cosets of $G_{v}$ and so
$\Gamma$ is an undirected graph, contradicting it being a digraph. Hence the
result follows. ∎
Theorem 1.2 follows immediately from Theorems 3.7 and 4.6.
## 5 Examples
In this final section, we construct examples of vertex-primitive $(G,1)$-arc-
transitive digraphs with $\mathrm{Soc}(G)=\mathrm{Sz}(2^{2n+1})$ and
${}^{2}\mathrm{G}_{2}(3^{2n+1})$, respectively. The following is the mechanism
by which we construct the examples:
Let $G$ be a group, $H$ a subgroup of $G$ which does not contain
$\mathrm{Soc}(G)$, $V:=\\{Hz:z\in G\\}$, and let $g\in G$ such that
$g^{-1}\notin HgH$. We define a binary relation $\to\,$ on $V$ by
$Hx\to Hy$ if and only if $yx^{-1}\in HgH$, for $x,y\in G$.
Then $(V,\to)$ is a digraph, which we denote by $\mathrm{Cos}(G,H,g)$. Since
$(yz)(xz)^{-1}=yzz^{-1}x^{-1}=yx^{-1}$ for all $z\in G$, right multiplication by elements of $G$
preserves the relation $\to\,$ and hence induces automorphisms of $(V,\to)$,
yielding a subgroup $\mathrm{R}_{H}(G)\cong G$ of
$\mathrm{Aut}(\mathrm{Cos}(G,H,g))$. Further, the subgroup $\mathrm{R}_{H}(H)$ of right multiplications by elements of $H$ is the stabiliser in
$\mathrm{R}_{H}(G)$ of the vertex $H$ of $(V,\to)$, and it follows from the
definition of the relation $\to\,$ that $\mathrm{R}_{H}(H)$ acts transitively
on the set of arcs $(H,Hx)$ beginning with $H$, since these arcs are precisely
those of the form $(H,Hgh)$ for $h\in H$. Thus $\mathrm{R}_{H}(G)$ acts arc-
transitively on $\mathrm{Cos}(G,H,g)$.
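The construction above is easy to experiment with computationally. The following Python sketch is a toy illustration only (using $G=\mathrm{S}_{3}$, the trivial subgroup $H$, and a 3-cycle $g$, none of which come from the paper): it builds $\mathrm{Cos}(G,H,g)$ and checks that the condition $g^{-1}\notin HgH$ makes the arc relation antisymmetric, since the double cosets $HgH$ and $Hg^{-1}H$ are then disjoint.

```python
from itertools import permutations

def compose(p, q):
    """Return the permutation 'apply q first, then p'."""
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

G = list(permutations(range(3)))   # the symmetric group S3
H = [tuple(range(3))]              # the trivial subgroup (cosets = elements)
g = (1, 2, 0)                      # a 3-cycle

HgH = {compose(h1, compose(g, h2)) for h1 in H for h2 in H}
assert inverse(g) not in HgH       # the digraph condition g^{-1} not in HgH

# arc Hx -> Hy iff y x^{-1} lies in HgH
arcs = {(x, y) for x in G for y in G if compose(y, inverse(x)) in HgH}

# since HgH and Hg^{-1}H are disjoint double cosets, no arc is bidirectional
assert all((y, x) not in arcs for (x, y) in arcs)
assert len(arcs) == len(G)         # each vertex has out-degree |HgH|/|H| = 1
```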
We aim to find a maximal subgroup $H\leqslant G$ and an element $g\in G$ such
that $g^{-1}\notin HgH$ to obtain a 1-arc-transitive digraph
$\mathrm{Cos}(G,H,g)$, for $G=\mathrm{Sz}(2^{2n+1})$ and
$G={}^{2}\mathrm{G}_{2}(3^{2n+1})$.
### 5.1 A one-arc transitive digraph admitting a Suzuki group
Here $G=\mathrm{Sz}(q)$, where $q=2^{2n+1}$ with $n\geq 1$. Let $\Omega$
denote a set of size $q^{2}+1$ on which $G$ acts $2$-transitively. Let
$a,b\in\Omega$ with $a\neq b$. We define the following notation:
* (i)
$L:=G_{a}$, so $L\cong[q^{2}]:(q-1)$;
* (ii)
$K:=G_{a}\cap G_{b}$, so $K=\langle\kappa\rangle\cong q-1$, for some
$\kappa\in G$;
* (iii)
$Q=[q^{2}]$, the normal Sylow $2$-subgroup of $L$, so $L=Q\rtimes K$;
* (iv)
an involution $\tau\in\mathrm{N}_{G}(K)$, so
$\mathrm{N}_{G}(K)=\langle\kappa,\tau\,|\,\kappa^{q-1}=\tau^{2}=1,\tau\kappa\tau=\kappa^{-1}\rangle$,
see Theorem 4.1(ii);
* (v)
an element $\rho\in Q$ of order 4.
We use this notation throughout this subsection, and also the following
results.
###### Lemma 5.1.
[13, Proposition 1] For any non-trivial element $x\in Q$, the centraliser
$\mathrm{C}_{G}(x)\leq Q$.
###### Lemma 5.2.
[13, p 108-109] The element $\tau$ satisfies $b=a^{\tau}$ and $a=b^{\tau}$, so
$\tau L\tau=G_{b}$, $L\cap\tau L\tau=K$, and $Q\cap\tau L\tau=\\{1\\}$.
We now give our construction.
###### Lemma 5.3.
Let $H:=\mathrm{N}_{G}(K)$ and $g:=\rho$. Then $g^{-1}\notin HgH$, and
$\mathrm{Cos}(G,H,g)$ is a $(G,1)$-arc-transitive digraph.
###### Proof.
As we explained above, if $g^{-1}\notin HgH$, then $\mathrm{Cos}(G,H,g)$ is a
$(G,1)$-arc-transitive digraph. So it is sufficient to prove that
$g^{-1}\notin HgH$. Suppose that this is not the case, that is, there exist
$x,y\in H=N_{G}(K)$ such that $\rho^{-1}=x\rho y$.
Note that $L=Q\rtimes K$ and $L\cap N_{G}(K)=K$, and also that $\rho\in Q<L$.
Thus if $x\in K$ then $y=\rho^{-1}x^{-1}\rho\in L$ and hence $y\in L\cap
N_{G}(K)=K$. Similarly if $y\in K$ then also $x\in K$. Thus $x,y$ are either
both in $K$, or both in $N_{G}(K)\setminus K$.
Suppose first that $x,y\in K$. Now $\rho\in Q$, and since $x\in K<L$ and $Q$
is a normal subgroup of $L$, it follows that $x\rho x^{-1}\in Q$, and also
$(x\rho x^{-1})(xy)=x\rho y=\rho^{-1}\in Q$. This implies that $xy\in Q$ and
hence $xy\in K\cap Q=\\{1\\}$. Thus $y=x^{-1}$, and so $\rho^{-1}=x\rho
x^{-1}$, which implies that $x^{2}\in C_{G}(\rho)$. By Lemma 5.1,
$\mathrm{C}_{G}(\rho)\leqslant Q$, so $x^{2}\in K\cap Q=\\{1\\}$. However,
$x\in K$ and $|K|=q-1$ is odd. Hence $x=1$, so $\rho^{-1}=x\rho x^{-1}=\rho$,
which contradicts the fact that $\rho$ has order $4$. Thus we must have
$x,y\in N_{G}(K)\setminus K$, and hence $x=\kappa^{i}\tau$ and
$y=\kappa^{j}\tau$, for some $i,j$. This implies that $\rho^{-1}=x\rho
y=\tau(\kappa^{-i}\rho\kappa^{j})\tau\in\tau L\tau$, and we also have
$\rho^{-1}\in Q$. Thus $\rho^{-1}\in Q\cap\tau L\tau$ and so, by Lemma 5.2,
$\rho^{-1}=1$, which is a contradiction. This completes the proof. ∎
### 5.2 A one-arc transitive digraph admitting a Ree group
Here $G={}^{2}\mathrm{G}_{2}(q)$, where $q=3^{2n+1}$ with $n\geq 1$. Although
several $(G,2)$-arc-transitive undirected graphs have been constructed, see
[4], we are interested in constructing a $(G,1)$-arc-transitive digraph with
$G$ acting primitively on the vertex set. Our treatment follows Wilson’s
description of the group $G$ given in his book [16, Section 4.5]. It is
different from other constructions of these groups, for example in [8], which require knowledge of Lie algebras and algebraic groups. Wilson's approach is elementary, developed in [17] and [15], and we use the detailed description given in [16, p 134-138].
Wilson [16] starts with a faithful $7$-dimensional representation of the group
$G={}^{2}\mathrm{G}_{2}(3^{2n+1})$ on a space $V$ over a field
$\mathbf{F}_{q}$ of order $q$. The space $V$ admits a $G$-invariant non-
degenerate symmetric bilinear form $f$ with an orthonormal basis
$\mathcal{B}:=\\{u_{0},u_{1},\ldots,u_{6}\\}$. He defines a second basis
$\mathcal{C}:=\\{v_{1},v_{2},\ldots,v_{7}\\}$ for $V$ by
$\begin{array}[]{llp{1cm}ll}v_{1}&=u_{3}+u_{5}+u_{6},&&v_{2}&=u_{1}+u_{2}+u_{4},\\\
v_{3}&=-u_{0}-u_{3}+u_{6},&&v_{4}&=u_{2}-u_{1},\\\
v_{5}&=-u_{0}+u_{3}-u_{6},&&v_{6}&=-u_{1}-u_{2}+u_{4},\\\
v_{7}&=-u_{3}+u_{5}-u_{6}.\end{array}$
He shows that the maps $\gamma$ and $\sigma$ given by:
$u_{i}^{\gamma}=\begin{cases}u_{i},&i=0,1,3\\\
-u_{i},&i=2,4,5,6\end{cases}$
and
$u_{i}^{\sigma}=\begin{cases}u_{i},&i=0,4,5\\\ -u_{i},&i=1,2,3,6\end{cases}$
are commuting involutions lying in the group $G$. Thus $G$ contains the
subgroup
$K=\langle\gamma,\sigma\rangle=\langle\gamma\rangle\times\langle\sigma\rangle\cong
2^{2}$. Moreover, letting $\delta:=\sigma\gamma$, we find that
$u_{i}^{\delta}=\begin{cases}u_{i},&i=0,2,6\\\ -u_{i},&i=1,3,4,5\end{cases}$
Let $T:=N_{G}(K)$ and $H:=C_{G}(\sigma)$. Note that by [4, Lemma 2.2], $T$ and
$H$ are maximal subgroups of $G$ and $T\cong(2^{2}\times D_{\frac{q+1}{2}}):3$
and $H\cong 2\times\mathrm{PSL}_{2}(q)$. Let us denote by $W$, $W_{1}$,
$W_{2}$ and $W_{3}$ the subspaces $\langle u_{1},\ldots,u_{6}\rangle$, $\langle
u_{1},u_{3}\rangle$, $\langle u_{2},u_{6}\rangle$ and $\langle
u_{4},u_{5}\rangle$, respectively. For a subspace $U$ of $V$, we denote the
setwise stabiliser in $G$ of $U$ by $\mathrm{Stab}_{G}(U)$. We need the
following properties of $T$.
###### Lemma 5.4.
The subgroup $T=\mathrm{Stab}_{G}(\langle u_{0}\rangle)$, and $T$ leaves
invariant the subspace $W:=\langle u_{1},u_{2},\ldots,u_{6}\rangle$.
###### Proof.
Let $\mathrm{Fix}(K)$ be the subspace of fixed points of $K$ in $V$. Then
$\mathrm{Fix}(K)=\langle u_{0}\rangle$, by the definitions of $\gamma$ and
$\sigma$, and since $\mathcal{B}$ is an orthonormal basis it follows that
$\mathrm{Fix}(K)^{\perp}=W$. Since $T=N_{G}(K)$ normalises $K$, it follows
that $T$ leaves invariant both $\mathrm{Fix}(K)$ and
$\mathrm{Fix}(K)^{\perp}$. Finally since $T$ is maximal in $G$, it follows
that $T$ is equal to the full stabiliser of $\mathrm{Fix}(K)$. ∎
For any $x\in\\{\delta,\gamma,\sigma\\}$, let $C_{W}(x)=\\{u\in W|u^{x}=u\\}$.
###### Lemma 5.5.
With the above notation, $T$ permutes the subspaces $W_{1}$, $W_{2}$ and $W_{3}$. Moreover, the permutation group induced by $T$ on $\\{W_{1},W_{2},W_{3}\\}$ is isomorphic to $\mathrm{C}_{3}$.
###### Proof.
Since $T$ normalises $K=\langle\sigma,\gamma\rangle$, we see that $T$ permutes
$\sigma,\gamma$ and $\delta$. We also note that $C_{W}(\gamma)=W_{1}$,
$C_{W}(\delta)=W_{2}$ and $C_{W}(\sigma)=W_{3}$. For $i\in\\{1,2,3\\}$,
$W_{i}=C_{W}(x)$ for some $x\in\\{\sigma,\gamma,\delta\\}$. Since $T$ leaves
$W$ invariant, we have that $C_{W}(x)^{t}=C_{W}(x^{t})$ for each $t\in T$. Thus
$W_{i}^{t}=C_{W}(x)^{t}=C_{W}(x^{t})$. Since
$x^{t}\in\\{\sigma,\gamma,\delta\\}$, we find that
$W_{i}^{t}\in\\{W_{1},W_{2},W_{3}\\}$. Moreover, we find that $C_{G}(K)\cong
2^{2}\times D_{\frac{q+1}{2}}$, so $[T:C_{G}(K)]=3$. We note that $C_{G}(K)$
acts trivially on $\\{W_{1},W_{2},W_{3}\\}$. This implies that the kernel of the action of $T$ on $\\{W_{1},W_{2},W_{3}\\}$ has index 3, so the induced permutation group is isomorphic to $\mathrm{C}_{3}$. Hence the result follows. ∎
In Wilson’s description [16, p 136], $G$ has a Borel subgroup $B$ such that
there exists $g\in B$ determined by its action on the basis $\mathcal{C}$ as
follows:
$\begin{array}[]{llp{1cm}ll}v_{1}&\mapsto v_{1}&&v_{2}&\mapsto v_{2}\\\
v_{3}&\mapsto v_{1}+v_{3}&&v_{4}&\mapsto v_{2}+v_{4}\\\ v_{5}&\mapsto
2v_{1}+v_{5}&&v_{6}&\mapsto v_{2}+2v_{4}+v_{6}\\\ v_{7}&\mapsto
v_{1}+v_{3}+2v_{5}+v_{7}.\end{array}$
It is easily checked that $g$ acts on $u_{i}$ by:
$\begin{array}[]{llp{1cm}ll}u_{0}&\mapsto u_{0}&&u_{1}&\mapsto u_{2}\\\
u_{2}&\mapsto u_{4}&&u_{3}&\mapsto u_{6}\\\ u_{4}&\mapsto u_{1}&&u_{5}&\mapsto
u_{3}\\\ u_{6}&\mapsto u_{5}.\end{array}$
In particular we find that $g\in T=N_{G}(K)$ and $W_{i}^{g}=W_{i+1}$ for all $i\in\\{1,2,3\\}$, with indices read modulo 3.
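Because all coefficients above lie in the prime field $\mathbb{F}_{3}\subseteq\mathbf{F}_{q}$, the claimed action of $g$ on the $u_{i}$ can be verified by elementary linear algebra modulo 3. A Python sketch (an illustrative check only):

```python
# coefficient vectors of v_1..v_7 in the basis u_0..u_6, reduced mod 3
V = [
    (0, 0, 0, 1, 0, 1, 1),   # v1 =  u3 + u5 + u6
    (0, 1, 1, 0, 1, 0, 0),   # v2 =  u1 + u2 + u4
    (2, 0, 0, 2, 0, 0, 1),   # v3 = -u0 - u3 + u6
    (0, 2, 1, 0, 0, 0, 0),   # v4 =  u2 - u1
    (2, 0, 0, 1, 0, 0, 2),   # v5 = -u0 + u3 - u6
    (0, 2, 2, 0, 1, 0, 0),   # v6 = -u1 - u2 + u4
    (0, 0, 0, 2, 0, 1, 2),   # v7 = -u3 + u5 - u6
]

# claimed action of g on the u-basis: u0 fixed, u1->u2->u4->u1, u3->u6->u5->u3
perm = [0, 2, 4, 6, 1, 3, 5]

def act(vec):
    out = [0] * 7
    for i, c in enumerate(vec):
        out[perm[i]] = c
    return tuple(out)

def lin(*terms):
    """Integer combination sum of coef * v_j over the v-basis, reduced mod 3."""
    out = [0] * 7
    for coef, j in terms:
        for i in range(7):
            out[i] = (out[i] + coef * V[j - 1][i]) % 3
    return tuple(out)

# g's action on the v-basis, as listed above, matches the u-action
assert act(V[0]) == lin((1, 1))                            # v1 -> v1
assert act(V[1]) == lin((1, 2))                            # v2 -> v2
assert act(V[2]) == lin((1, 1), (1, 3))                    # v3 -> v1 + v3
assert act(V[3]) == lin((1, 2), (1, 4))                    # v4 -> v2 + v4
assert act(V[4]) == lin((2, 1), (1, 5))                    # v5 -> 2v1 + v5
assert act(V[5]) == lin((1, 2), (2, 4), (1, 6))            # v6 -> v2 + 2v4 + v6
assert act(V[6]) == lin((1, 1), (1, 3), (2, 5), (1, 7))    # v7 -> v1+v3+2v5+v7

# g cycles W1 -> W2 -> W3 -> W1 (indices of the spanning u's)
W = [{1, 3}, {2, 6}, {4, 5}]
assert [{perm[i] for i in Wi} for Wi in W] == [W[1], W[2], W[0]]
```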
###### Lemma 5.6.
Let $H$ and $g$ be as above. Then $g^{-1}\notin HgH$, and $\mathrm{Cos}(G,H,g)$ is a
$(G,1)$-arc-transitive digraph.
###### Proof.
Let $g^{\prime}=g^{-1}$. If $g^{\prime}\notin HgH$, then $\mathrm{Cos}(G,H,g)$ is a $(G,1)$-arc-transitive digraph. So it is sufficient to prove that $g^{\prime}\notin HgH$. Suppose that this is not the case, that is, there
exist $x,y\in H=C_{G}(\sigma)$ such that $g^{\prime}=x^{-1}gy$, or
equivalently, $xg=g^{\prime}y$. Let us denote by $E_{1}$ and $E_{-1}$ the
eigenspaces of $\sigma$ with eigenvalues 1 and $-1$ respectively. Indeed,
$E_{1}=\langle u_{0},u_{4},u_{5}\rangle$ and $E_{-1}=\langle
u_{1},u_{2},u_{3},u_{6}\rangle$.
###### Claim 2.
We have $u_{0}^{x},u_{0}^{y}\in\langle u_{0}\rangle$.
First we note that since $x,y\in H$, $x\sigma=\sigma x$ and $y\sigma=\sigma
y$. This implies that $u_{0}^{x\sigma}=u_{0}^{\sigma x}=u_{0}^{x}$. Thus
$u_{0}^{x}\in E_{1}$, so that
$u_{0}^{x}=\alpha_{0}u_{0}+\alpha_{4}u_{4}+\alpha_{5}u_{5}$ for some
$\alpha_{i}\in\mathbb{F}_{q}$. Similarly
$u_{0}^{y}=\beta_{0}u_{0}+\beta_{4}u_{4}+\beta_{5}u_{5}$ for some
$\beta_{i}\in\mathbb{F}_{q}$.
Now let $I=\\{0,4,5\\}$. Since $xg=g^{\prime}y$, we have:
$\sum_{i\in I}\beta_{i}u_{i}=u_{0}^{y}=u_{0}^{g^{\prime}y}=u_{0}^{xg}=\sum_{i\in I}\alpha_{i}u_{i}^{g}=\alpha_{0}u_{0}+\alpha_{4}u_{1}+\alpha_{5}u_{3}.$ (2)
This implies that $\alpha_{0}=\beta_{0}$ and
$\alpha_{4}=\alpha_{5}=\beta_{4}=\beta_{5}=0$. Thus the claim is proved.
Let us consider the action of $x$ and $y$ on $W_{3}$. For any $v\in
W_{3}=\langle u_{4},u_{5}\rangle$, we have that $v^{x\sigma}=v^{\sigma
x}=v^{x}$ and $v^{y\sigma}=v^{\sigma y}=v^{y}$. Thus $v^{x},v^{y}\in
E_{1}=\langle u_{0},u_{4},u_{5}\rangle$. On the other hand, by Claim 2, $x$ and $y$ stabilise $\langle u_{0}\rangle$, so $x,y\in T=\mathrm{Stab}_{G}(\langle u_{0}\rangle)$ by Lemma 5.4. By Lemma 5.5, $W_{3}^{x},W_{3}^{y}\in\\{W_{1},W_{2},W_{3}\\}$.
Hence $v^{x},v^{y}\in E_{1}\cap W_{i}$ for some $i\in\\{1,2,3\\}$. Since the
only possible $i$ such that $E_{1}\cap W_{i}\neq\\{0\\}$ is 3, we deduce that
$W_{3}^{x}=W_{3}^{y}=W_{3}$. Hence $x$ and $y$ either swap or fix $W_{1}$ and
$W_{2}$. If either $x$ or $y$ swaps them, then it acts as the transposition $(1\,2)$ on
$\\{W_{1},W_{2},W_{3}\\}$. However, by Lemma 5.5, the action of $T$ on
$\\{W_{1},W_{2},W_{3}\\}$ is isomorphic to $\mathrm{C}_{3}$ and does not have
any element of order 2. Thus we deduce that $W_{i}^{x}=W_{i}^{y}=W_{i}$ for
$i=1,2$.
Now let us consider the action of $xg$ and $g^{\prime}y$ on $W_{1}$:
$W_{1}^{xg}=W_{1}^{g}=W_{2}$ (3)
while
$W_{1}^{g^{\prime}y}=W_{3}^{y}=W_{3}.$ (4)
This is a contradiction to $xg=g^{\prime}y$. Hence such $x$ and $y$ do not
exist and the result follows. ∎
## References
* [1] N. Blackburn and B. Huppert, Finite groups II, Springer-Verlag, Berlin-New York, 1982.
* [2] J. N. Bray, D. F. Holt and C. M. Roney-Dougal, The maximal subgroups of the low-dimensional finite classical groups, Cambridge University Press, Cambridge, 2013.
* [3] J. H. Conway, R. T. Curtis, S. P. Norton, R. A. Parker, and R. A. Wilson, _Atlas of Finite Groups_ , Clarendon Press, Oxford, 1985.
* [4] X. G. Fang and C. E. Praeger, Finite two-arc transitive graphs admitting a Ree simple group, Comm. Algebra, 27 (1999) 3755–3769.
* [5] M. Giudici, C.H. Li and B. Xia, An infinite family of vertex-primitive 2-arc-transitive digraphs, J. Combin. Theory Ser. B, 127 (2017) 1–13.
* [6] M. Giudici, C.H. Li and B. Xia, Vertex-primitive s-arc-transitive digraphs of linear groups, J. Math. Pures Appl. 223 (2019) 5455–5483.
* [7] M. Giudici and B. Xia, Vertex-quasiprimitive 2-arc-transitive digraphs, Ars Math. Contemp., 14 (2018) 67–82.
* [8] V. M. Levchuk and Y. N. Nuzhin, The structure of Ree groups, Alg. i Log., 24 (1985) 26–41,122.
* [9] M. W. Liebeck, C. E. Praeger and J. Saxl, The maximal factorizations of the finite simple groups and their automorphism groups, Mem. Amer. Math. Soc., 86 (1990), no.432.
* [10] M. W. Liebeck, C. E. Praeger and J. Saxl, Transitive subgroups of primitive groups, J. Algebra 234 (2000) 291–361.
* [11] J. Pan, C. Wu, F. Yin, Vertex-primitive s-arc-transitive digraphs of alternating and symmetric groups, J. Algebra, 544 (2020) 75–91.
* [12] C. E. Praeger, Highly arc-transitive digraphs, European J. Combin. 10 (1989) 281–292.
* [13] M. Suzuki, On a class of doubly transitive groups, Ann. of Math. 75 (1962) 105–145.
* [14] R. Weiss, The non-existence of 8-transitive graphs, Combinatorica 1 (1981) 309–311.
* [15] R. A. Wilson, A new construction of the Ree groups of type ${}^{2}G_{2}$, Proc. Edinburgh Math. Soc., 53 (2010) 531–542.
* [16] R. A. Wilson, The finite simple groups, Grad. Texts in Math., vol. 251, Springer-Verlag, London, 2009.
* [17] R. A. Wilson, Another new approach to the small Ree groups, Arch. Math. 94 (2010) 501–510.
# The emergence of low-frequency dual Fano resonances in chiral twisting
metamaterials
Brahim Lemkalli<EMAIL_ADDRESS>Laboratory for the Study of
Advanced Materials and Applications, Department of Physics, Moulay Ismail
University, B.P. 11201, Zitoune, Meknes, Morocco Muamer Kadic Institut
FEMTO-ST, UMR 6174, CNRS, Université de Bourgogne Franche-Comté, 25000
Besançon, France Youssef El Badri Laboratory of optics, information
processing, Mechanics, Energetics and Electronics, Department of Physics,
Moulay Ismail University, B.P. 11201, Zitoune, Meknes, Morocco Sébastien
Guenneau UMI 2004 Abraham de Moivre-CNRS, Imperial College London, SW7 2AZ,
UK Abdellah Mir Laboratory for the Study of Advanced Materials and
Applications, Department of Physics, Moulay Ismail University, B.P. 11201,
Zitoune, Meknes, Morocco Younes Achaoui Laboratory for the Study of Advanced
Materials and Applications, Department of Physics, Moulay Ismail University,
B.P. 11201, Zitoune, Meknes, Morocco
###### Abstract
In this work, we demonstrate through finite element analysis that a
configuration of chiral cells with syndiotactic symmetry provides dual Fano
resonances at low frequency. From the phononic dispersion and transmission
response, we compare the signature of a composite made of chiral cells to
those of a homogeneous medium, an isotactic nonchiral beam, and an isotactic
chiral beam. The study results in an innovative design of a mechanical
metamaterial that induces Fano resonances at low frequency with a relatively
high quality factor, which may be a significant step forward for mechanical
wave filtering and detection. The performance is evaluated for a sensor
intended to serve as a thermometer.
Twisting metamaterials, Fano resonance, Temperature sensor
††preprint: AIP/123-QED
## I Introduction
In recent years, the emergence of composite-structured materials has heralded
significant advancements in mechanical engineering [1]. As a result, new
generations of man-made materials, known as "metamaterials," have been
created, allowing mechanical behaviors to be tailored with characteristics
beyond those of conventional materials [2]. From an elastodynamic viewpoint,
these allow the manipulation and control of acoustic wave propagation by two
mechanisms, namely local resonance and Bragg scattering [3, 4]. Mechanical
metamaterials are also well known for their various exotic parameters in the
static regime, including negative Poisson's ratio [5], flexibility [6], and
twist conversion [7, 8], which has led to a "dynamic paradigm" used today in a
wide range of applications. For instance, auxetic metamaterials have been
proposed to enhance seismic shielding against surface waves [9]. Moreover,
metamaterials with a twist can exhibit a distinct feature called acoustic
activity, which converts the linear polarization of a transverse wave into
circular polarization [10]. Recently, twisting metamaterials have been shown
to convert longitudinal waves into twist waves [11, 12].
In general, a local resonance arises from the coupling between a discrete
resonance and a continuum of states, which produces a peak at the resonance
frequency followed or preceded by an anti-resonance dip. This lineshape is a
consequence of constructive and destructive interference, respectively, as
first reported in optics [13]. Since its discovery more than 60 years ago
[13], the prominent Fano resonance has piqued the interest of scientists due
to its asymmetric nature, which is exploited in relevant applications [14]
such as filtering [15] and detection [16].
As a mechanical counterpart, this sort of resonance has gained prominence
[17]. Several devices based on mechanical Fano resonance have been developed
in recent years [18], including concentrated pipes [19], Helmholtz resonators
[20], and phononic crystals [21, 22, 23]. However, the dimensions of these
structures, notably phononic crystals, are equivalent to or even larger than
the wavelengths; also, the Fano resonance effect occurs in just one
operational frequency range. Multi-band systems with sub-wavelength dimensions
and a high quality factor at low frequencies remain a major challenge for the
development of multi-band and multi-functional devices [21]. Dual Fano
resonators for low frequencies have recently been developed, employing an
array of units made up of two types of cell units containing multiple
cavities, each with its own specific set of characteristics [24]. These are
based on the emergence of acoustic metamaterials [25] with dimensions smaller
than the wavelength, leading to exceptional elastic-wave manipulation
abilities.
Figure 1: Schematics of the beams. (a) The homogeneous medium cell. (b) The
nonchiral isotactic cell ($\alpha=0$). (c) The chiral isotactic cell
($\alpha=\arctan(h/c)$). (d) The chiral syndiotactic cell
($\alpha=\arctan(h/c)$). (e) The geometrical parameters of the cells: the two
octagonal plates are separated by a distance $h=30$ $\mathrm{mm}$ by rods of
diameter $d=1.2$ $\mathrm{mm}$ inclined by an angle
$\alpha=\arctan(h/c)$, with the side of the octagon $c=3.9$ $\mathrm{mm}$,
the radii $R_{1}=5.08$ $\mathrm{mm}$ and $R_{2}=3.3$ $\mathrm{mm}$, and
$b=1.4$ $\mathrm{mm}$. The beams have a width of $a=14.4$ $\mathrm{mm}$ and
a length of $4h$.
In this study, we leverage the design of a metamaterial with a twist to
generate dual Fano resonances at low frequency, inspired by chiral tacticity
in metamaterials [26]. In Section II, we demonstrate numerically that a chiral
syndiotactic cell generates a local resonance. By connecting two cells such
that the contact plane between them forms a mirror, the Fano resonance
fingerprint emerges as the direct consequence of the coupling between a
longitudinal continuum and the discrete state of the chiral unit cell. In
Section III, we propose an application of the dual Fano resonances to detect
temperature changes in water.
## II Elastodynamic Characteristics
In this section, we analyze the elastodynamic behavior of four structures in
order to demonstrate the presence of the Fano resonance. Each structure is a
rectangular beam with a length of $4h$ and a width of $a$ made of two media: a
homogeneous steel medium of length $h$ at each beam border, and inside, a cell
of length $2h$ in Acrylonitrile Butadiene Styrene (ABS), alternating among the
four unit cells. The first cell is purely a homogeneous medium (Figure 1(a)).
The second is a nonchiral isotactic unit cell made of two cells composed of
non-inclined rods ($\alpha=0$) connected to octagonal plates (Figure 1(b)).
The third is a chiral isotactic unit cell in which two cells composed of rods
inclined by $\alpha=\arctan(h/c)$ are connected to octagonal plates
(Figure 1(c)). The fourth is a chiral syndiotactic cell composed of two chiral
cells with inclined rods ($\alpha=\arctan(h/c)$) attached to octagonal plates
(Figure 1(d)); these two cells are connected through a mirror symmetry plane.
To determine the elastodynamic behavior of the four beams, we used the
commercial software COMSOL Multiphysics to solve the Navier equations in weak
form. All materials used in the simulations were treated as isotropic linear
elastic; their parameters are listed in Table 1.
Table 1: The material parameters.
Materials | Young’s modulus | Poisson’s ratio | Density
---|---|---|---
| ($\mathrm{G}\mathrm{P}\mathrm{a}$) | | ($\mathrm{k}\mathrm{g}\mathrm{/}\mathrm{m}^{3}$)
Steel | 201 | 0.33 | 7843
ABS | 2.6 | 0.4 | 1020
Figure 2: The phononic dispersion curves along the $x$-direction in the first
Brillouin zone ($\Gamma X$) for the four beams. (a) The homogeneous medium
cell. (b) The nonchiral isotactic cell. (c) The chiral isotactic cell. (d) The
chiral syndiotactic cell. (e) Screenshots of the syndiotactic chiral cell
beam’s eigenmodes at points $A$, $B$, $C$, and $D$.
The first step was to calculate the phononic dispersion curves along the
$x$-direction and analyze the eigenmodes of the four beams, in order to
elucidate the mechanisms that govern the interaction of localized modes (flat
modes) with longitudinal modes, which gives rise to the local resonance. We
calculated the mode polarization using Equation (1), represented by the color
bar in the dispersion curves, as depicted in Figure 2.
$p_{yz}=\frac{\iiint\sqrt{|u_{y}|^{2}+|u_{z}|^{2}}\,dV_{1}}{\iiint\sqrt{|u_{x}|^{2}+|u_{y}|^{2}+|u_{z}|^{2}}\,dV_{tot}},$
(1)
where $V_{1}$ is the volume of the inner cell and $V_{tot}$ is the total
volume of the beam.
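As a sanity check, Equation (1) can be evaluated on a discretized displacement
field exported from the solver. The sketch below is a minimal, hypothetical
post-processing implementation (the function and array names are ours, not
from the paper), assuming a uniform mesh so that the volume element cancels:

```python
import numpy as np

def mode_polarization(ux, uy, uz, inner_mask):
    """Discrete version of Eq. (1): transverse (yz-plane) displacement
    magnitude integrated over the inner-cell volume V1, divided by the
    total displacement magnitude integrated over the whole beam.

    ux, uy, uz : complex displacement components sampled on a uniform mesh
    inner_mask : boolean array selecting the inner-cell volume V1
    """
    transverse = np.sqrt(np.abs(uy) ** 2 + np.abs(uz) ** 2)
    total = np.sqrt(np.abs(ux) ** 2 + np.abs(uy) ** 2 + np.abs(uz) ** 2)
    return transverse[inner_mask].sum() / total.sum()

# Toy check: a mode polarized purely along x gives p_yz = 0.
ux = np.ones((10, 10, 10), dtype=complex)
uy = np.zeros_like(ux)
uz = np.zeros_like(ux)
mask = np.zeros(ux.shape, dtype=bool)
mask[3:7] = True  # inner cell occupies the middle of the beam
print(mode_polarization(ux, uy, uz, mask))  # 0.0
```

A value near 1 thus flags a flat mode localized in the cell, which is how the
red regions in the dispersion color maps of Figure 2 are to be read.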
The beam with the cell of homogeneous medium (ABS) (Figure 2(a)) exhibits four
fundamental modes: bending and transverse, which are degenerate in the present
case because of the symmetry in the $yz$-plane, plus two other modes: twisting
and longitudinal. As illustrated, the first three modes are polarized in the
$yz$-plane, with the exception of the longitudinal mode, which remains fully
polarized along the $x$-direction. However, when we substitute the homogeneous
medium (ABS) with the nonchiral isotactic cell (Figure 2(b)) in the beam
segment of length $2h$, the first two modes remain degenerate but shift down
towards lower frequencies. The polarization indicates that the flat modes do
not interfere with the longitudinal mode.
Analogously, the longitudinal mode remains polarized along the $x$-direction
in the isotactic chiral cell (Figure 2(c)), even though the first two modes
undergo degeneracy lifting (the transverse modes travel with different
velocities in the first Brillouin zone ($\Gamma X$)) as a consequence of the
broken symmetry in the $yz$-plane. In other words, the isotactic chiral cell
has no influence on the polarization of the longitudinal mode (the flat modes
do not couple to the longitudinal mode).
However, due to the presence of a distinct symmetry plane inside the
syndiotactic cell, the first two modes do not undergo degeneracy lifting
(Figure 2(d)). On the other hand, a localized mode polarized in the
$yz$-plane interferes with the longitudinal mode. This coupling of the flat
mode with the longitudinal mode produces a local resonance composed of two
resonances, symmetric and anti-symmetric, resulting in a Fano resonance near
$1$ $\mathrm{kHz}$, as indicated by the red color of the mode polarization
around this frequency.
To illustrate that the interference between the localized twist mode and the
longitudinal mode is entirely responsible for the appearance of the local
resonances, screenshots of the modes at the local resonance are displayed in
Figure 2(e). Point $A$ has the coordinates ($k$, $\omega$) = ($1$, $862.8$)
and point $B$ ($0$, $1138$). According to the dispersion curve, these two
points represent the first local resonance. Both images show a localized
displacement in the center of the syndiotactic chiral cell, suggesting that at
these frequencies the longitudinal-twist conversion between the two cells is
active, which results in local resonances. Around $3$ $\mathrm{kHz}$ in
Figure 2(d), we can discern two polarized modes in the $yz$-plane, indicating
the resonance pattern that forms the second Fano resonance. For these, we
consider two points $C$ and $D$, as seen in the screenshots in Figure 2(e).
This second Fano resonance is produced by a higher-order twist; the total
displacement is localized in three regions of the syndiotactic chiral cell: in
the center and in the middle of the rods, as seen in Figure 2(e).
After demonstrating the existence of the Fano resonance in the syndiotactic
chiral cell using eigenvalue analysis, we investigated the transmission of a
longitudinal wave through the four beams. We added free steel media and
Perfectly Matched Layers at each extremity and used Equation (2) to calculate
the longitudinal wave transmission along the $x$-direction for the four beams.
$T=20\times\log_{10}\frac{\iiint|u_{x}|^{2}\,dV_{output}}{\iiint|u_{x}|^{2}\,dV_{input}},$ (2)
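In a post-processing script, Equation (2) amounts to a log-ratio of the
squared longitudinal displacement integrated over the output and input
detection volumes. A minimal sketch, with hypothetical array names and a
uniform volume element (so it cancels in the ratio):

```python
import numpy as np

def transmission_db(ux_out, ux_in):
    """Eq. (2): longitudinal transmission in dB, as the log-ratio of
    |u_x|^2 summed over the output and input detection volumes
    (uniform mesh assumed, so the volume element cancels)."""
    num = np.sum(np.abs(ux_out) ** 2)
    den = np.sum(np.abs(ux_in) ** 2)
    return 20.0 * np.log10(num / den)

# A wave whose longitudinal amplitude is attenuated tenfold at the output:
# |u_x|^2 drops by a factor 100, giving 20*log10(0.01) = -40 dB.
print(transmission_db(np.full(100, 0.1), np.full(100, 1.0)))  # -40.0
```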
Figure 3: (a) Transmission spectra of the homogeneous medium cell (black),
the nonchiral isotactic cell (blue), the chiral isotactic cell (blue), and
the chiral syndiotactic cell (red). (b) Screenshots of the syndiotactic
chiral beam at the anti-resonance and resonance peaks: at point A
($970$ $\mathrm{Hz}$), point B ($1192$ $\mathrm{Hz}$), point C
($3082$ $\mathrm{Hz}$), and point D ($3127$ $\mathrm{Hz}$).
Figure 3(a) depicts the transmission curves of the beams with a homogeneous
medium cell, the isotactic nonchiral and chiral cells, and the syndiotactic
chiral cell, shown by the black, blue, and red curves, respectively. The first
three beams exhibit no Fano resonances, whereas the fourth contains two,
indicating that the syndiotactic chiral structure exhibits dual Fano
resonances at low frequencies. We evaluated the quality factor, defined as the
ratio of the resonance frequency to the full width at half-maximum of each
peak. The first peak, at the resonance frequency of $1.191$ $\mathrm{kHz}$,
has a Q-factor of $350$, while the second, at $3.127$ $\mathrm{kHz}$, has a
Q-factor of $11{,}010$. Figure 3(b) shows screenshots at the resonance and
anti-resonance frequencies, i.e., the peak and the dip of the two Fano
resonances that exist exclusively in the syndiotactic chiral cell. The first
resonance represents the first-order twist, as indicated by the dispersion
curves, while the second resonance is the higher-order twist.
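The quoted quality factors can be estimated directly from a sampled
transmission peak as the ratio of the resonance frequency to the full width at
half-maximum. A minimal sketch (the helper is ours, and the synthetic
Lorentzian below merely stands in for the simulated spectrum):

```python
import numpy as np

def q_factor(freqs, spectrum):
    """Estimate Q = f_res / FWHM from a sampled resonance peak.
    freqs must be sorted; spectrum is the (linear) peak amplitude."""
    i_peak = np.argmax(spectrum)
    half = spectrum[i_peak] / 2.0
    above = np.where(spectrum >= half)[0]
    fwhm = freqs[above[-1]] - freqs[above[0]]
    return freqs[i_peak] / fwhm

# Synthetic Lorentzian peak: f0 = 3127 Hz, half-width gamma = 0.142 Hz,
# so Q should come out near f0 / (2*gamma) ~ 1.1e4, the paper's value
# for the second peak.
f = np.linspace(3120.0, 3134.0, 200001)
gamma = 0.142
lorentzian = gamma ** 2 / ((f - 3127.0) ** 2 + gamma ** 2)
print(q_factor(f, lorentzian))  # close to 3127 / 0.284
```

Note that this simple half-maximum estimator assumes an isolated, symmetric
peak; for a strongly asymmetric Fano lineshape a fit to the Fano formula would
be more appropriate.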
## III Liquid sensing application
Phononic crystals and metamaterials have gained significant attention in the
realm of sensing, particularly as an innovative resonant platform for
analyzing liquid properties. The general idea is based on the incorporation of
liquid as a constituent in the phononic crystal or within a cavity localized
in the perfect structure [27, 28, 29]. Their sensing functionality, in
particular, is realized through the solid-liquid interaction [30, 31]. Based
on this approach, we propose a sensor based on the chiral syndiotactic cell,
as shown in Figure 4(a), which is a beam composed of two homogeneous media in
steel and a syndiotactic chiral cell in ABS submerged in water with a
dimension two order of magnitude smaller than the geometrical parameters
outlined in Section II. This makes the sensor operates at frequencies around
$100$ $\mathrm{k}\mathrm{H}\mathrm{z}$, which is low frequency in comparison
to the sensors described in the literature.
Figure 4: Chiral syndiotactic beam as liquid sensor. (a) The sensor’s design.
(b) Longitudinal transmission for both peaks as a function of $-1\%$ density
and speed of sound variation.
In order to evaluate the potential of the presented sensor to detect changes
in liquid properties, we define the sensitivity using Equation (3), which
quantifies the frequency shift of each resonance peak in response to a slight
change in the liquid properties. Additionally, we use Equation (4) to define
the figure of merit (FoM), which quantifies how well two nearly identical
media can be distinguished.
$S_{i}=\frac{\Delta f_{i}}{\Delta T},$ (3) $FoM_{i}=\frac{S_{i}\times
Q_{i}}{f_{i}},$ (4)
where $i$ symbolizes peak $1$ or peak $2$, $T$ is the temperature, $Q_{i}$ is
the quality factor of each resonance peak, and $f_{i}$ represents the
resonance frequency for each peak.
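Equations (3) and (4) reduce to simple ratios once the frequency shift,
quality factor, and resonance frequency are known. A hedged sketch reproducing
the second-peak figure of merit from the Table 2 values (the helper names are
ours, and the intermediate shift is inferred from the quoted sensitivity):

```python
def sensitivity(delta_f_hz, delta_x):
    """Eq. (3): frequency shift per unit change of the probed quantity."""
    return delta_f_hz / delta_x

def figure_of_merit(s, q, f_hz):
    """Eq. (4): FoM = S * Q / f."""
    return s * q / f_hz

# Second peak of Table 2: S = 100 Hz/(kg m^-3), Q = 17e3, f = 251.9 kHz.
S2 = 100.0      # Hz per kg/m^3
Q2 = 17e3
f2 = 251.9e3    # Hz
print(figure_of_merit(S2, Q2, f2))  # ~6.75, i.e. Table 2's 6.74 up to rounding
```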
In the first step, we assessed the syndiotactic beam sensor's capabilities
following the approach of recent works on phononic crystals, in which the
water density and sound velocity are varied by $1\%$ and the characteristic
parameters of each variation are determined [32, 33]. The longitudinal
transmission responses are depicted in Figure 4(b). As shown there, the
syndiotactic sensor is not highly sensitive to variations of the sound speed,
but is sensitive to variations in density. We summarize the calculated
parameters of the density variation for the two peaks in Table 2.
Table 2: Frequency, $Q$-factor, sensitivity, and figure of merit of the two peaks for $-1\%$ of water density variation.
| Peak 1 | Peak 2
---|---|---
Frequency ($\mathrm{k}\mathrm{H}\mathrm{z}$) | 97.9 | 251.9
Q | 840 | $17\times 10^{3}$
$S_{-1\%\rho}$ ($\mathrm{H}\mathrm{z}\mathrm{/}{kgm^{-3}}$) | 16 | 100
$FoM_{-1\%\rho}$ ($\mathrm{1}\mathrm{/}{kgm^{-3}}$) | 0.14 | 6.74
The first peak, at $97.9$ $\mathrm{kHz}$, has a $Q$-factor of $840$, a
sensitivity to the $1\%$ density variation of $16$
$\mathrm{Hz}\mathrm{/}{kgm^{-3}}$, and a figure of merit of $0.14$
$\mathrm{1}\mathrm{/}{kgm^{-3}}$. The second peak, around $251.9$
$\mathrm{kHz}$, has a $Q$-factor of $17{,}000$, a sensitivity of $100$
$\mathrm{Hz}\mathrm{/}{kgm^{-3}}$, and a figure of merit of $6.74$
$\mathrm{1}\mathrm{/}{kgm^{-3}}$. The parameters obtained for the $1\%$
density variation are comparable to those published in the literature [32,
33], and show that at low frequencies the chiral syndiotactic cell is well
suited to detecting density variations, unlike sound speed variations.
In the second step, we used the syndiotactic chiral beam to detect changes in
water temperature, exploiting the fact that both the density and the speed of
sound depend on the water temperature, as depicted in Table 3. We computed the
longitudinal transmission as a function of the water temperature variation, as
illustrated in Figure 5. The frequencies, quality factors, sensitivities, and
FoMs of the two peaks are summarized in Table 4. The sensitivity of each peak
was determined by estimating two successive values of the frequency shift
produced by the temperature change.
Table 3: The density and speed of sound as a function of water temperature variation.
Temperature | Density | Speed of sound
---|---|---
($\mathrm{\SIUnitSymbolDegree}\mathrm{C}$) | ($\mathrm{k}\mathrm{g}\mathrm{/}\mathrm{m}^{3}$) | ($\mathrm{m}\mathrm{/}\mathrm{s}$)
0 | 999 | 1403
10 | 999 | 1447
20 | 998 | 1481
30 | 995 | 1507
40 | 992 | 1526
50 | 988 | 1541
60 | 983 | 1541
Figure 5: The evolution of longitudinal transmission as a function of water temperature variation. (a) The first peak. (b) The second peak.
Table 4: Frequency, $Q$-factor, sensitivity, and figure of merit of the two peaks for temperature variation in water.
| Peak 1 | Peak 2
---|---|---
Frequency ($\mathrm{k}\mathrm{H}\mathrm{z}$) | 98.8 | 253.5
$Q$-factor | 840 | $17\times 10^{3}$
Sensitivity ($\mathrm{H}\mathrm{z}\mathrm{/}{\mathrm{\SIUnitSymbolDegree}C}$) | 3 | 20
FoM ($\mathrm{1}\mathrm{/}{\mathrm{\SIUnitSymbolDegree}C}$) | 0.02 | 1.34
As the water temperature increases, the frequencies of the two Fano resonance
peaks shift. Thus, the characteristic frequencies of the two peaks are
sensitive to both the speed of sound and the density of water as functions of
temperature. In other words, the chiral syndiotactic beam exhibits two peaks
whose resonance frequencies change with the liquid's material properties, as
illustrated in Figure 5. The first peak has a lower sensitivity than the
second; however, both peaks have acceptable characteristics given the larger
dimensions and low operating frequency compared to phononic sensors operating
at hundreds of $\mathrm{GHz}$.
The findings suggest that the syndiotactic chiral beam could be used as a
temperature sensor whose two peaks, owing to the geometric size, occur at low
frequencies. It should be noted that, to achieve the highest sensitivity, one
should use a structure with geometrical parameters scaled by a factor of 0.1
compared to the presented configuration; both the frequency response and the
sensitivity would then be multiplied by a factor of $10$.
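This scaling remark follows from the fact that, for a linear elastic structure
with fixed material properties, eigenfrequencies scale inversely with the
geometric size. A one-line illustration (hypothetical helper, using the first
peak of Table 4 as an example):

```python
def scaled_frequency(f_hz, geometric_scale):
    """Eigenfrequencies of a linear elastic structure scale inversely
    with its geometric size (material properties held fixed)."""
    return f_hz / geometric_scale

# Shrinking the sensor geometry by a factor of 10 (scale 0.1) moves the
# 98.8 kHz peak to roughly 988 kHz.
print(scaled_frequency(98.8e3, 0.1))  # 988000.0
```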
## IV Conclusion
To conclude, we used the finite element method to calculate phononic
dispersion and transmission in order to demonstrate the presence of dual Fano
resonances in chiral metamaterials with a twist. We showed that the beam with
the chiral syndiotactic cell based on two octagonal plates exhibits dual Fano
resonances at low frequencies, one near $1$ $\mathrm{kHz}$ and the other near
$3$ $\mathrm{kHz}$. This behavior was compared to that of beams with a
homogeneous medium, an isotactic nonchiral cell, and an isotactic chiral cell.
The interference of localized twisting and longitudinal modes causes the
low-frequency local resonances, i.e., the Fano resonances. The dual Fano
resonances of the syndiotactic beam metamaterial were then used to detect
liquid properties such as water density and sound speed. Finally, this study
demonstrated that syndiotactic beam metamaterials with a twist can be used as
temperature sensors, exhibiting considerable sensitivity and quality factors
for the proposed size and low frequencies.
## References
* Dalela, Balaji, and Jena [2022] S. Dalela, P. Balaji, and D. Jena, “A review on application of mechanical metamaterials for vibration control,” Mechanics of advanced materials and structures 29, 3237–3262 (2022).
* Xiao _et al._ [2020] S. Xiao, T. Wang, T. Liu, C. Zhou, X. Jiang, and J. Zhang, “Active metamaterials and metadevices: a review,” Journal of Physics D: Applied Physics 53, 503002 (2020).
* Achaoui _et al._ [2011] Y. Achaoui, A. Khelif, S. Benchabane, L. Robert, and V. Laude, “Experimental observation of locally-resonant and bragg band gaps for surface guided waves in a phononic crystal of pillars,” Physical Review B 83, 104201 (2011).
* Kadic _et al._ [2013] M. Kadic, T. Bückmann, R. Schittny, and M. Wegener, “Metamaterials beyond electromagnetism,” Reports on Progress in physics 76, 126501 (2013).
* Lakes [2017] R. S. Lakes, “Negative-poisson’s-ratio materials: auxetic solids,” Annual review of materials research 47, 63–81 (2017).
* Bertoldi _et al._ [2017] K. Bertoldi, V. Vitelli, J. Christensen, and M. Van Hecke, “Flexible mechanical metamaterials,” Nature Reviews Materials 2, 1–11 (2017).
* Frenzel, Kadic, and Wegener [2017] T. Frenzel, M. Kadic, and M. Wegener, “Three-dimensional mechanical metamaterials with a twist,” Science 358, 1072–1074 (2017).
* Zhong _et al._ [2019] R. Zhong, M. Fu, X. Chen, B. Zheng, and L. Hu, “A novel three-dimensional mechanical metamaterial with compression-torsion properties,” Composite Structures 226, 111232 (2019).
* Ungureanu _et al._ [2015] B. Ungureanu, Y. Achaoui, S. Enoch, S. Brûlé, and S. Guenneau, “Auxetic-like metamaterials as novel earthquake protections,” arXiv preprint arXiv:1510.08785 (2015), 10.48550/arXiv.1510.08785.
* Frenzel _et al._ [2019] T. Frenzel, J. Köpfler, E. Jung, M. Kadic, and M. Wegener, “Ultrasound experiments on acoustical activity in chiral mechanical metamaterials,” Nature communications 10, 1–6 (2019).
* Lemkalli _et al._ [2022] B. Lemkalli, M. Kadic, Y. E. Badri, S. Guenneau, A. Mir, and Y. Achaoui, “Longitudinal-twist wave converter based on chiral metamaterials,” arXiv preprint arXiv:2211.03222 (2022), 10.48550/arXiv.2211.03222.
* Xu _et al._ [2022] Z.-L. Xu, D.-F. Wang, T. Tachi, and K.-C. Chuang, “An origami longitudinal–torsional wave converter,” Extreme Mechanics Letters 51, 101570 (2022).
* Fano [1961] U. Fano, “Effects of configuration interaction on intensities and phase shifts,” Physical Review 124, 1866 (1961).
* Zhou _et al._ [2014] W. Zhou, D. Zhao, Y.-C. Shuai, H. Yang, S. Chuwongin, A. Chadha, J.-H. Seo, K. X. Wang, V. Liu, Z. Ma, _et al._ , “Progress in 2d photonic crystal fano resonance photonics,” Progress in Quantum Electronics 38, 1–74 (2014).
* Shuai _et al._ [2013] Y. Shuai, D. Zhao, Z. Tian, J.-H. Seo, D. V. Plant, Z. Ma, S. Fan, and W. Zhou, “Double-layer fano resonance photonic crystal filters,” Optics Express 21, 24582–24589 (2013).
* Luk’Yanchuk _et al._ [2010] B. Luk’Yanchuk, N. I. Zheludev, S. A. Maier, N. J. Halas, P. Nordlander, H. Giessen, and C. T. Chong, “The fano resonance in plasmonic nanostructures and metamaterials,” Nature materials 9, 707–715 (2010).
* Wang _et al._ [2020] W. Wang, Y. Jin, W. Wang, B. Bonello, B. Djafari-Rouhani, and R. Fleury, “Robust fano resonance in a topological mechanical beam,” Physical Review B 101, 024101 (2020).
* El Boudouti _et al._ [2008] E. El Boudouti, T. Mrabti, H. Al-Wahsh, B. Djafari-Rouhani, A. Akjouj, and L. Dobrzynski, “Transmission gaps and fano resonances in an acoustic waveguide: analytical model,” Journal of Physics: Condensed Matter 20, 255212 (2008).
* Amin _et al._ [2015] M. Amin, A. Elayouch, M. Farhat, M. Addouche, A. Khelif, and H. Bağcı, “Acoustically induced transparency using fano resonant periodic arrays,” Journal of Applied Physics 118, 164901 (2015).
* Qi _et al._ [2014] L. Qi, G. Yu, X. Wang, G. Wang, and N. Wang, “Interference-induced angle-independent acoustical transparency,” Journal of Applied Physics 116, 234506 (2014).
* Zaki _et al._ [2020] S. E. Zaki, A. Mehaney, H. M. Hassanein, and A. H. Aly, “Fano resonance based defected 1d phononic crystal for highly sensitive gas sensing applications,” Scientific Reports 10, 1–16 (2020).
* Goffaux _et al._ [2002] C. Goffaux, J. Sánchez-Dehesa, A. L. Yeyati, P. Lambin, A. Khelif, J. Vasseur, and B. Djafari-Rouhani, “Evidence of fano-like interference phenomena in locally resonant materials,” Physical review letters 88, 225502 (2002).
* Oudich _et al._ [2018] M. Oudich, B. Djafari-Rouhani, B. Bonello, Y. Pennec, S. Hemaidia, F. Sarry, and D. Beyssen, “Rayleigh waves in phononic crystal made of multilayered pillars: confined modes, fano resonances, and acoustically induced transparency,” Physical Review Applied 9, 034013 (2018).
* Sun _et al._ [2019] Y.-Y. Sun, J.-P. Xia, H.-X. Sun, S.-Q. Yuan, Y. Ge, and X.-J. Liu, “Dual-band fano resonance of low-frequency sound based on artificial mie resonances,” Advanced Science 6, 1901307 (2019).
* Cummer, Christensen, and Alù [2016] S. A. Cummer, J. Christensen, and A. Alù, “Controlling sound with acoustic metamaterials,” Nature Reviews Materials 1, 1–13 (2016).
* Bergamini _et al._ [2019] A. Bergamini, M. Miniaci, T. Delpero, D. Tallarico, B. Van Damme, G. Hannema, I. Leibacher, and A. Zemp, “Tacticity in chiral phononic crystals,” Nature communications 10, 1–8 (2019).
* Lucklum, Ke, and Zubtsov [2012] R. Lucklum, M. Ke, and M. Zubtsov, “Two-dimensional phononic crystal sensor based on a cavity mode,” Sensors and Actuators B: Chemical 171, 271–277 (2012).
* Ke, Zubtsov, and Lucklum [2011] M. Ke, M. Zubtsov, and R. Lucklum, “Sub-wavelength phononic crystal liquid sensor,” (2011), 10.1063/1.3610391.
* Oseev _et al._ [2018] A. Oseev, N. Mukhin, R. Lucklum, M. Zubtsov, M.-P. Schmidt, U. Steinmann, A. Fomin, A. Kozyrev, and S. Hirsch, “Study of liquid resonances in solid-liquid composite periodic structures (phononic crystals)–theoretical investigations and practical application for in-line analysis of conventional petroleum products,” Sensors and Actuators B: Chemical 257, 469–477 (2018).
* Wang _et al._ [2017] T.-T. Wang, Y.-F. Wang, Y.-S. Wang, and V. Laude, “Tunable fluid-filled phononic metastrip,” Applied Physics Letters 111, 041906 (2017).
* Wang _et al._ [2022] T.-T. Wang, Y.-F. Wang, Z.-C. Deng, V. Laude, and Y.-S. Wang, “Reconfigurable waveguides defined by selective fluid filling in two-dimensional phononic metaplates,” Mechanical Systems and Signal Processing 165, 108392 (2022).
* Gueddida _et al._ [2021] A. Gueddida, Y. Pennec, V. Zhang, F. Lucklum, M. Vellekoop, N. Mukhin, R. Lucklum, B. Bonello, and B. Djafari Rouhani, “Tubular phononic crystal sensor,” Journal of Applied Physics 130, 105103 (2021).
* Gueddida _et al._ [2022] A. Gueddida, Y. Pennec, A. L. Silveira Fiates, M. J. Vellekoop, B. Bonello, and B. Djafari-Rouhani, “Acoustic sensor based on a cylindrical resonator for monitoring a liquid flow,” Crystals 12, 1398 (2022).
# Weak localisation enhanced ultrathin scattering media
R. C. R. Pompe contributed equally to this work Department of Physics,
Bielefeld University, 33615 Bielefeld, Germany D. T. Meiers∗ Physics
Department and Research Center OPTIMAS, Technische Universität Kaiserslautern,
67663 Kaiserslautern, Germany W. Pfeiffer Department of Physics, Bielefeld
University, 33615 Bielefeld, Germany G. von Freymann Physics Department and
Research Center OPTIMAS, Technische Universität Kaiserslautern, 67663
Kaiserslautern, Germany Fraunhofer Institute for Industrial Mathematics ITWM,
67663 Kaiserslautern, Germany
The brilliant white appearance of ultrathin scattering media with low
refractive index contrast and the underlying radiative transport phenomena
have fascinated scientists for more than a decade. Examples of such systems
are the scales of beetles of the genus Cyphochilus [1, 2], photonic network
structures [3] or disordered Bragg stacks (DBS) [4, 5]. While previous studies relate the
highly efficient scattering in the scales to the anisotropy of the intra-scale
network and diffusive light transport [6, 7, 11, 12, 10, 8, 9], the coherent
radiation propagation dynamics remained unaccounted for. Here, we identify
different coherent light transport regimes using time and spatially resolved
coherent light scattering spectroscopy. At least 20% of the collected
scattered light originates from weakly localised random photonic modes, in
contrast to solely diffusive light transport assumed to date [6, 7, 8, 9]. The
identification of this significant role of weak localisation in ultrathin
brilliant scattering media establishes a new design paradigm for efficient
scattering optical materials.
Figure 1: Microscopic and ultrafast time-resolved spectroscopy of light
scattered from Cyphochilus scales and microfabricated DBS structures. a,
Scheme of the spectral interference setup (see explanations in the text and in
Methods). b, Photograph of Cyphochilus (left) and disordered Bragg stacks
(DBS, centre closeup) with light microscope images of a single beetle scale
(right top) and DBS (bottom) as insets. c,d, Spatially resolved time domain
amplitude of light scattered from a Cyphochilus scale (c) and DBS (d). The
transition threshold between diffusive regime and resonance radiation as
identified in h are indicated (vertical translucent bar). e,f, Scheme
illustrating how incoming light is scattered in the initial diffusion-like
regime (e) and later via weakly localised photonic modes indicated by closed
pathways (f). The grey structure is a cross section of a Cyphochilus scale
(taken from Wilts et al. [11]). The black overlay on the left side shows the
disordered Bragg stacks. g, Scattered electric field at a single scan position
(white dashed line in c) with indication of the short time Fourier transform
windows used in i (red) and j (blue). h, Wigner distribution function of the
scattered field shown in g. At $105\pm 10$ fs (black line) the dominating
light transport regime changes from diffusion-like to weak localisation
assisted. i,j,k, Fourier spectra of the early time window (i) (-50 to 50 fs,
red in g and h), the later time window (j) (250 to 350 fs window, blue in g
and h) and the total measured time window (k).
In strongly scattering media the description of light propagation as ballistic
transport breaks down and is commonly replaced by diffusive radiation
transport that explains well the observed optical characteristics in numerous
applications [13, 14]. Diffusive radiation transport neglects the coherent
propagation of scattered fields and hence does not account for interference
phenomena in disordered media, which are known to occur for example when weak
localisation gives rise to coherent back scattering [15] or random lasing in
disordered active media [16]. For increased scattering strength coherent back
scattering occurs, when two counter-propagating scattering light paths in the
medium, i.e. the illuminating light and collinear back scattered light,
interfere constructively and giving rise to a peak in the back scattered
intensity, as it was, e.g., reported for Cyphochilus scales [10]. However,
modelling of the brilliant white appearance of Cyphochilus scales still
completely relies on diffusive propagation [6, 7, 8, 9] and thus coherent
effects are neglected. This could hamper tailoring disordered photonic media
since an unambiguously identified scattering mechanism is the basis for
nanostructure design for optimised performance. Using ultrafast time-resolved
light scattering spectromicroscopy [17, 18] we here identify the coherent
light scattering mechanisms for Cyphochilus scales and disordered Bragg stacks
and show that weak localisation in leaky photonic modes significantly
contributes to the brilliant whiteness of these scatterers.
The identification of coherent scattering is significantly facilitated if the
number of interfering pathways is kept small. For example, laser speckles are
most pronounced when only a small area of the scatterer is illuminated.
However, if the detector integrates over sufficiently many different
interfering pathways, the speckles disappear. In this case, with the exception of the coherent back-scattering peak, the scattering behaviour is often well explained by diffusive radiation transport theory, although the underlying transport is coherent. To reduce the number of interfering pathways, the present investigation relies both on focused illumination and on collection of scattered light from a small sample volume. Furthermore, coherent propagation
adds a well-defined phase to the scattered fields and thus reconstruction of
the temporal evolution of the scattered electric field provides additional
information on the scattering mechanism.
To systematically study the impact of coherent transport on the whiteness of
the Cyphochilus scales, we use the setup shown in Fig. 1a to perform
ultrafast time-resolved light scattering spectromicroscopy on a single
scale[17, 18]. The observations are confirmed for DBS fabricated via direct
laser writing (see Supplementary Information) shown in Fig. 1b. The DBS mimic
the beetle scales, reproduce their known optical properties [4] and allow for
realistic scattering light simulations based on finite-difference time-domain
(FDTD) Maxwell solvers and Monte Carlo (MC) diffusive light transport
simulations.
To achieve the spatial resolution necessary to observe only a few interfering
scattering pathways, a parabolic mirror (Fig. 1a, M) focuses a pulsed
Ti:sapphire laser beam down to a $\lesssim 3$ µm spot on the surface of the
sample (Sa) and collects the scattered light at an angle of $\sim 24$°
relative to the specular direction. To filter for intra-scale scattering, i.e.
multiple scattered light components, a cross-polarisation configuration is
used. The illuminated position is scanned by moving the sample using a piezo
stage. Spectral interference [19] between the scattered light pulse (SP) and a
reference pulse (RP) allows for the time reconstruction of the field of the
scattered light (see Methods). The amplitude of the measured electric field
(cf. Fig. 1c and d) shows essentially the same dynamics for both samples, i.e. a spatially varying exponential decay modulated by distinct beating, indicating that interference takes place. As discussed below, two different propagation
regimes can be identified in the scattered light signals. Initially diffusion-
like transport (Fig. 1e) dominates, whereas for longer times radiation leaking
from weakly localised photonic modes formed by randomly closed scattering
pathways (Fig. 1f) prevails, which gives rise to the observed beating
behaviour.
To identify the different propagation regimes we analyse the coherent
scattering signal (cf. Fig. 1g) in time and frequency domain by means of the
Wigner distribution function (WDF, see Methods) [20], exemplarily shown in
Fig. 1h for the Cyphochilus scale. For early times broadband features are
present, which reproduce the excitation spectrum when evaluating the short
time Fourier transform (cf. Fig. 1i). At about $105\pm 10$ fs there is a
qualitative change in the spectral content of the WDF, i.e. broad spectral
features are replaced by fine modulations. This time closely matches the
pulse round trip time (see Methods), i.e. the time a pulse needs to travel
back and forth through the layer assuming a homogeneous, effective medium with
an effective refractive index, as it is commonly done in diffusion
approximation. The spectral modulations stem from multiple sharp resonances,
which become better visible in the short time Fourier transform for later
times (cf. Fig. 1j). The power spectrum illustrates that the signal now
contains spectral peaks independent of the original excitation spectrum,
whereas the scattered light in the initial diffusion-like phase exhibits no
significant modulation. The spectrum for the full measured signal, shown in
Fig. 1k, exhibits spectral peaks on top of a broadband background and thus
reflects the spectral characteristics of both transport regimes.
While the short time Fourier transforms (Fig. 1i,j) allow identifying the contribution of the different light transport mechanisms over time, this spectral analysis of resonances lacks resolution due to the short time windows. To unambiguously identify the weak localisation assisted scattering, the probability distribution of the resonance lifetimes is investigated by applying full-time Fourier transforms. Fig. 2a reveals that the scattered
light spectra possess multiple peaks with varying centre frequency and width
as function of the spatial coordinate. In the incoherent mean of the spectra
over the whole scan (Fig. 2b, grey shaded area) these narrow spectral peaks
average out and reproduce the excitation spectrum (Fig. 2b, dashed line),
macroscopically resulting in the white appearance. Based on peak fitting (Fig.
2b, red curve) we derive the spectral widths of the peaks, which yield a lower
limit for the underlying resonance lifetimes. The distribution of these
lifetimes is displayed in Fig. 2c and follows a log-normal distribution (red
curve), deviating from a normal distribution for longer lifetimes as expected
when localisation effects occur [21]. The tail towards long lifetimes is
associated with the rare occurrence of increasingly localised modes, i.e.
cases where scattering pathways close inside the structure instead of coupling
to loss channels [22].
This identification of weak localisation assisted light scattering is further
supported by FDTD simulations based on the known microstructure of the
Cyphochilus scale [11] (model data provided by courtesy of B. Wilts) and the
DBS. As exemplified in Fig. LABEL:fig:Details_FDTDb and c the local spectra
recorded inside the structures also exhibit sharp resonances. Statistical
analysis of these resonances yields the lifetime distributions shown in Fig.
2d and e, which are in excellent accordance with the experimental results.
Hence, we conclude that the spectral resonances experimentally observed in the
scattered light indeed originate from weakly localised photonic modes
occurring in the same way inside the beetle structure and DBS. The
corresponding spectral features give rise to the observed beating behaviour in
scattered light spectromicroscopy (Fig. 1 c,d).
Figure 2: Lifetime distribution for weakly localised photonic modes. a,
Spatially resolved light scattering spectra of Cyphochilus scale. b, Spectral
intensity (in blue) for the position indicated by the white line in a. The
excitation spectrum and the incoherent mean spectral intensity over the entire
scan are shown as a dashed line and a grey shaded area, respectively. Distinct
peaks are identified (exemplified by red curve) and used to estimate the
corresponding photonic mode lifetimes. c, Photonic mode lifetime distribution
derived from the scan displayed in a. d, e, Lifetime distributions obtained
from FDTD simulations of the intra-scale structure [11] (d) and the DBS model
(e). f, Transient average power in the monitor plane perpendicular to the
surface sectioning the DBS model (cf. Fig. LABEL:fig:Details_FDTDa) derived
from FDTD simulation (black curve) and average photon counts in the same plane
calculated by Monte Carlo simulation (grey curve). Both ordinates span the
same orders of magnitude, making the slopes directly comparable. The non-
exponential decay of the FDTD results is indicated by coloured exponential
slopes with different lifetimes $\tau$. The vertical dashed line indicates the
point in time where both curves start to differ. Inset: The time averaged
local power enhancement in a snippet of the FDTD monitor plane averaged over
the time span indicated by the blue line (170-650 fs).
To further investigate the light propagation inside the structure the spatio-
temporal evolution of the local power (in FDTD simulations) and the photon
counts (in MC simulations) are recorded on a monitor plane sectioning the DBS
perpendicular to the surface (cf. Fig. LABEL:fig:Details_FDTDa). To avoid
artefacts from the lateral periodic boundary conditions (see Methods) a
sufficiently large lateral simulation domain of $20\times 20$ µm² is used.
This ensures that any potential spectral contribution from this periodicity
lies far outside the considered spectral range. In contrast to the rather complex beetle intra-scale structure, the DBS consists of simple building blocks and is thus used for further simulations to keep the computation time manageable.
The FDTD simulations (Fig. 2f, black curve) reveal a non-exponential decay
with lifetimes $\tau$ ranging from about 80 fs up to roughly 100 fs. This
directly reflects the lifetime distribution (Fig. 2e) possessing a mean value
around 80 fs, implying that for longer times the longer living photonic modes
dominate the decay. In contrast the MC simulations (Fig. 2f, grey curve) show
a mono-exponential decay with a decay constant of 65 fs (cf. Fig.
LABEL:fig:comparison_lsa), failing to match both the simulated and measured
lifetime distributions. Nevertheless, it is possible to find a set of
parameters such that the MC simulations reproduce for the same layer thickness
the properties of the DBS obtained by FDTD simulations, i.e. reflectance,
transport mean free path and initial shape of the curve. Hence, we conclude that the initial coherent transport inside the structure can be approximated as diffusive transport, emphasising that a diffusion-like scattering regime exists even though interference effects may occur. However, beyond about 170 fs
modelling as diffusive transport breaks down and the curve obtained by MC
simulation starts to deviate from the FDTD results. Assuming propagation in an
effective medium approach (as done for the experiment) yields a pulse round
trip time of 160 fs for the 100 fs long pulses applied in the simulations (see
Methods). This coincides well with the time at which FDTD and MC simulations
deviate, indicating that the pulse round trip time is indeed a suitable
estimation for the upper limit of the time domain in which diffusion-like
photon transport dominates. For longer times the trapping in weakly localised
photonic modes takes over, which is only captured in the fully coherent FDTD
simulations.
The FDTD simulations provide means to directly visualise the weakly localised
photonic modes inside the DBS structure (inset in Fig. 2f). The time averaged
local power enhancement normalised to the average power (see Supplementary
Information) exhibits distinct, spatially localised hotspots with an up to
three times enhanced local power. These hotspots are associated with antinodes
of weakly localised random photonic modes (as depicted schematically in Fig.
1f) which give rise to the experimentally observed distinct peaks in the
spectra (cf. Fig. 1j). As expected, incoherent diffusive photon propagation in the MC simulations does not exhibit any hotspots but an almost constant photon count enhancement across the monitor plane (cf. Fig. LABEL:fig:comparison_lsc).
Summarising the observations and model simulations we conclude that the
scattering yield is dominated by photon leakage from weakly localised photonic
modes after an initial scattering time window, which can be roughly estimated
as the pulse round trip time in the ultrathin scattering layer treated in an
effective medium approach. Such modes have previously been identified for
systems that exhibit random lasing with coherent feedback [23, 16], but were
not yet identified to significantly contribute to the brilliant whiteness of
ultrathin scattering media. As shown in Fig. 3, scattering via weakly localised photonic modes is responsible for at least about $20\%$ of the total scattering and is thus relevant whenever the scattering efficiency of ultrathin disordered photonic media is concerned. As indicated by the background shadings in Fig. 3, the scales and the DBS would appear rather greyish and not brilliant white if scattering via leakage from weakly localised photonic modes were missing.
In conclusion, we have experimentally shown that the light transport in
scattering, brilliant white structures is dominated initially by a diffusion-
like transport which is surpassed by scattering via leakage from weakly
localised photonic modes after roughly the pulse round trip time in the
ultrathin scattering layer. Leakage from weakly localised modes accounts for
at least 20% of the scattered light, underlining their significance for the
brilliant whiteness of the ultrathin scattering media. This identification of
the coherent weak localisation assisted scattering mechanisms based on time-
resolved scattered light spectromicroscopy could serve, both conceptually and methodologically, to gain a better understanding of the transport regimes in disordered materials and their time dynamics. This is relevant, e.g., in imaging through turbid media for bioimaging applications or for random lasing action in disordered gain media [24, 25, 26]. Furthermore, the weak localisation feature of the biomimetic DBS demonstrated here, which relies on a distorted Bragg reflector design, provides a blueprint for tailoring nanostructures to particularly support random photonic resonances, which can
enhance light-matter interaction and therefore may find applications as
materials for efficient solar energy harvesting [17, 27, 28] or sensor
applications, where resonance enhanced absorption is employed to improve
sensitivity [29].
Figure 3: Spatially averaged time-dependent accumulated scattering yields. The
square modulus of the time-resolved scattering fields are averaged over the
recorded positions. This incoherent intensity signal is integrated over time
to yield the time-resolved accumulated scattering yield. The background
shading at $t_{\text{thr}}$ indicates the loss of whiteness if weak
localisation assisted scattering were absent. a, Accumulated scattering
yield experimentally measured for the Cyphochilus scale. The white vertical
line corresponds to a threshold time of $t_{\text{thr}}$=105 fs, as indicated
in Fig. 1b,h, beyond which weak localisation scattering dominates. The
scattering yield from weak localisation is $35\%$ (white horizontal line). b,
Accumulated scattering yield for the simulated DBS, with a threshold time of
$t_{\text{thr}}$=160 fs, as indicated in Fig. 2f. The scattering yield from
weak localisation is $21\%$. c, Accumulated scattering yield experimentally
measured for the fabricated DBS, with a threshold time of $t_{\text{thr}}$=190
fs (see Supplementary Information), as indicated in Fig. 1c. The scattering
yield from weak localisation is $20\%$.
## Methods
Experimental setup. The light source is a mode-locked Ti:sapphire laser
(Femtosource Scientific, Femtolasers Produktions GmbH, Austria) with a centre
wavelength of $\lambda_{0}=780$ nm and spectral full width half maximum (FWHM)
$\Delta\lambda=47$ nm, filtered in s-polarisation relative to the sample. To
achieve microscopic resolution the beam is focused onto the sample by a
parabolic mirror (custom fabricated, Jenoptik, Germany). The sample is moved
via a piezo stage (M-664.164, Physik Instrumente (PI) GmbH & Co. KG, Germany)
in the focal plane to scan the excitation and light collection position. The
parabolic mirror horizontally separates the incoming beam, the specular
reflection and the scattered light under different angles, allowing selection of the measured scattering angle via a blocker aperture. To ensure that only
light that was scattered multiple times is measured, the scattered light is
measured in cross polarisation with a spectrometer (USB 2000, Ocean Optics
Inc., USA).
Phase reconstruction. The time resolution is achieved by phase reconstruction
via spectral interference of the scattered light with a reference pulse.
To this end, the incoming pulse is split into a sample and a reference path. The
reference path is delayed relative to the sample pulse and rotated into the
measured p-polarisation. The resulting interference spectrum
$|E_{\text{s}}(\omega)+E_{\text{r}}(\omega)|^{2}=|E_{\text{s}}(\omega)|^{2}+|E_{\text{r}}(\omega)|^{2}+2|E_{\text{s}}(\omega)||E_{\text{r}}(\omega)|\cos(\Delta\varphi(\omega))$
contains the phase difference $\Delta\varphi$ between the two beams. Via
Fourier filtering of the interference spectrum and after correcting for the
phase imbalance of the interferometer the phase effect of the sample alone can
be reconstructed (see Supplementary Information). Since the phase difference
is measured, no phase optimisation of the probing pulse is necessary.
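The Fourier-filtering step can be illustrated numerically. The sketch below (Python; all numbers are illustrative and not taken from the experiment) builds a synthetic interference spectrum for a known sample-reference delay and recovers that delay from the side peak of its Fourier transform:

```python
import numpy as np

# Synthetic spectral-interference trace for a known pulse delay T (illustrative)
N = 4096
w = np.linspace(2.2, 2.6, N)                 # angular frequency grid in rad/fs
dw = w[1] - w[0]
env = np.exp(-(w - 2.4)**2 / (2 * 0.02**2))  # common spectral envelope
T = 500.0                                    # sample-reference delay in fs
# |E_s|^2 + |E_r|^2 + 2|E_s||E_r|cos(w T), here with |E_s| = |E_r| = env
spectrum = 2 * env**2 * (1 + np.cos(w * T))

# Fourier filtering: the interference term produces side peaks at t = +/- T
ft = np.fft.fft(spectrum)
t = np.fft.fftfreq(N, d=dw) * 2 * np.pi      # conjugate (pseudo-time) axis in fs
side = np.abs(ft).copy()
side[np.abs(t) < 250] = 0                    # suppress the broad DC/envelope term
T_rec = abs(t[side.argmax()])                # recovered delay, close to T
```

In the experiment the retained side band additionally carries the spectral phase difference $\Delta\varphi(\omega)$, from which the time-domain field is reconstructed after correcting for the interferometer imbalance.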
Wigner distribution function and Short Time Fourier Transform. The Wigner
distribution function is defined as
$W(t,\omega)=\int^{\infty}_{-\infty}E(t-t^{\prime}/2)E^{*}(t+t^{\prime}/2)\exp{(-i\omega
t^{\prime})}\text{d}t^{\prime}$, where $E$ and $E^{*}$ are the complex
electric field and its complex conjugate respectively. The WDF yields the
highest time-frequency resolution possible. On the other hand, it is not a linear transform, resulting in cross-terms modulating the WDF. To aid interpretation, the spectral power of the short time Fourier transform (STFT), given by $|S(\tau,\omega)|^{2}=|\int_{-\infty}^{\infty}w(t^{\prime},\tau,\Delta t,t_{r})E(t^{\prime})\exp{(-i\omega t^{\prime})}\text{d}t^{\prime}|^{2}$, where $w(t,\tau,\Delta t,t_{r})$ is a Tukey window function [33] centred at time $\tau$, is used, which as a linear transform produces no cross-terms.
the STFT the spectral resolution is limited by the window width $\Delta t=120$
fs. The window rising time is $t_{r}=30$ fs.
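As a minimal numerical analogue of this analysis, the following Python sketch applies a Tukey-windowed STFT to a decaying oscillation; the 120 fs window width and 30 fs rising time (Tukey $\alpha = 2\cdot 30/120 = 0.5$) are taken from the text, while the sampling rate and test signal are illustrative.

```python
import numpy as np
from scipy.signal import stft

fs = 10.0                       # samples per fs, i.e. 0.1 fs step (illustrative)
t = np.arange(0, 500, 1 / fs)   # 500 fs long trace
f0 = 0.385                      # carrier frequency in 1/fs (roughly 780 nm light)
field = np.exp(-t / 120) * np.cos(2 * np.pi * f0 * t)  # decaying oscillation

nper = int(120 * fs)            # 120 fs Tukey window; alpha = 0.5 -> 30 fs rise
freqs, seg_times, S = stft(field, fs=fs, window=('tukey', 0.5), nperseg=nper)

power = np.abs(S)**2
f_peak = freqs[power[:, power.shape[1] // 2].argmax()]  # dominant frequency
```

For a single decaying tone the dominant STFT frequency in every window is the carrier; for the measured multi-resonance fields, the late windows instead reveal the sharp mode peaks discussed in the main text.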
Calculation of the pulse round trip time. For a single photon travelling back
and forth through an effective medium with thickness $l_{\text{s}}$ the
effective round trip time is given by
$t_{\text{eff}}={2l_{\text{s}}}/{v_{\text{eff}}}$. The speed of light inside
the medium is calculated via $v_{\text{eff}}=c_{0}/n_{\text{eff}}$ where the
effective refractive index $n_{\text{eff}}$ is computed using the Maxwell-
Garnett mixing rule [30]. To obtain the limit when all photons within the
pulse length have propagated back and forth through the effective medium, i.e.
the pulse round trip time $t_{\text{prt}}$, the pulse length has to be added
to the effective round trip time of a single photon. This ensures that also
the ‘last’ photon within the pulse length has reached the top of the medium
again. The scale and the simulated DBS structure possess a filling fraction of
$f_{\text{scale}}=31\%$ [12, 8] and $f_{\text{DBS}}=27\%$, respectively, and
the refractive index of chitin $n_{\text{chitin}}=1.55$ [31] is used in both
cases. Applying these values in the Maxwell-Garnett mixing rule yields
$n_{\text{eff, scale}}=1.15$ for the scale as well as $n_{\text{eff,
DBS}}=1.13$ for the DBS. Evaluating the effective round trip time with a
sample thickness of $l_{\text{s, scale}}=10$ µm [12] and $l_{\text{s,
DBS}}=7.9$ µm results in $t_{\text{eff, scale}}=77$ fs for the scale and
$t_{\text{eff, DBS}}=60$ fs for the DBS, respectively. The pulse length is
defined as the time span between the pulse front and the point in the pulse
tail where the intensity dropped to $I_{\text{p}}/e^{2}$ with the peak
intensity of the pulse $I_{\text{p}}$. In the experiment the pulse front is
set at the point where the intensity first reaches $I_{\text{p}}/e^{2}$
yielding a pulse length of $t_{\text{pulse, exp}}=29$ fs. In the simulation
the definite pulse front as emitted by the source is used, resulting in a
pulse length of $t_{\text{pulse, sim}}=100$ fs. Thus, pulse round trip times
of $t_{\text{prt, scale}}=106$ fs and $t_{\text{prt, DBS}}=160$ fs are
obtained for the scale and DBS, respectively.
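These estimates are easy to reproduce; the Python sketch below (not from the paper) implements the standard Maxwell-Garnett mixing rule for inclusions in an air host and the round-trip-time definition above, using the filling fractions, thicknesses and pulse lengths quoted in the text:

```python
import math

C0 = 299_792_458e-9  # speed of light in micrometres per femtosecond

def maxwell_garnett_n(f, n_incl, n_host=1.0):
    """Effective refractive index of inclusions with filling fraction f
    in a host medium (standard Maxwell-Garnett mixing rule)."""
    ei, eh = n_incl**2, n_host**2
    e_eff = eh * (ei * (1 + 2*f) + 2*eh * (1 - f)) / (ei * (1 - f) + eh * (2 + f))
    return math.sqrt(e_eff)

def pulse_round_trip_time(thickness_um, n_eff, pulse_length_fs):
    """Effective single-photon round trip plus the pulse length, in fs."""
    return 2 * thickness_um * n_eff / C0 + pulse_length_fs

n_scale = maxwell_garnett_n(0.31, 1.55)                  # ~1.15
n_dbs = maxwell_garnett_n(0.27, 1.55)                    # ~1.13
t_prt_scale = pulse_round_trip_time(10.0, n_scale, 29)   # ~106 fs
t_prt_dbs = pulse_round_trip_time(7.9, n_dbs, 100)       # ~160 fs
```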
Extraction of lifetimes from spectral peaks. We estimate the intensity
lifetimes of the resonances by $\tau_{l}=1/\Delta\omega$, where $\Delta\omega$
is the spectral intensity FWHM of the peak [22]. To measure the spectral
widths of a peak, it is fitted with a Gaussian (cf. Fig.
LABEL:fig:meth_peakfit). Fitting the individual peaks ignores slope changes caused by overlapping resonances; thus the resulting lifetimes are correspondingly lower estimates. To identify individual peaks in the frequency-position plane of the
line scans a 2D peak finding routine is used.
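The width-to-lifetime conversion can be sketched on synthetic data (Python with scipy; the peak position, width and noise level are invented): a Gaussian is fitted to an isolated spectral peak, its intensity FWHM $\Delta\omega$ is read off, and $\tau_{l}=1/\Delta\omega$ gives the lifetime estimate.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(w, a, w0, sigma):
    return a * np.exp(-(w - w0)**2 / (2 * sigma**2))

# Synthetic isolated peak: centre 2.4 rad/fs, intensity FWHM 0.02 rad/fs
rng = np.random.default_rng(0)
w = np.linspace(2.35, 2.45, 400)
sigma_true = 0.02 / (2 * np.sqrt(2 * np.log(2)))
spectrum = gaussian(w, 1.0, 2.4, sigma_true) + 0.01 * rng.normal(size=w.size)

popt, _ = curve_fit(gaussian, w, spectrum, p0=(1.0, 2.4, 0.01))
fwhm = 2 * np.sqrt(2 * np.log(2)) * abs(popt[2])  # Delta omega in rad/fs
tau = 1.0 / fwhm                                  # lower-limit lifetime in fs
```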
Finite-difference time-domain simulations. The FDTD simulations were performed
using the software Lumerical FDTD Solutions (Ansys Inc., USA). In all
simulations a plane wave pulse impinges in the z-direction on the respective
structure (cf. Fig. LABEL:fig:Details_FDTDa). In the z-direction we apply
perfectly matched layers as boundary conditions. In the x- and y-direction we
use periodic boundary conditions to eliminate unwanted absorption in lateral
boundaries due to the finite size of the simulation. For the calculation of
the lifetime distribution we collect the spectra from roughly 3900 distinct
point-shaped frequency monitors placed in the structure model provided by
Wilts et al. [11] and the DBS structure (for model parameters see Ref. [4])
respectively, both occupying a footprint of $7\times 7$ µm² and a height of
$7-8$ µm. For excitation we use a light pulse with a centre wavelength of 780
nm and collect wavelengths between 745 nm and 815 nm approximating the
experimental conditions. The calculation of the time-dependent power
distribution is done for a DBS model based on the same parameters but with a
lateral footprint of about $20\times 20$ µm². A time-domain monitor cross
sectioning the structure in the x-z-plane is applied to record the Poynting vector every 1.14 fs at every monitor grid point over a total simulation time of 1000 fs. A pulse length of 100 fs is used to obtain a spectrally narrow-band excitation with a centre wavelength of 780 nm and a FWHM of 14 nm. The zero
time is set to the time when the pulse front enters the structure.
Monte Carlo Simulation. Monte Carlo simulations are performed using a self-
written Matlab code (The MathWorks Inc., USA) based on the well-known algorithm presented in the literature [13, 32]. To match the FDTD simulation conditions, no absorption inside the slab is applied and periodic boundary conditions are used in the lateral direction. As the light source, a plane wave (with about 6.8 billion photons) is chosen, possessing a temporal profile matching the
temporal power profile of the impinging pulse in FDTD simulations. An
appropriate monitor cross sectioning the slab is placed according to the FDTD
setup. The lateral width of the slab is 12 µm, the height and effective
refractive index are equal to the values given above for the simulated DBS
model. The applied transport mean free path of $l_{\text{t}}=3$ µm is equal to
the one obtained by FDTD simulations (see Supplementary Information). A
scattering mean free path of $l_{\text{s}}=1$ µm is selected reproducing the
FDTD results for short times closely (cf. Fig LABEL:fig:comparison_ls). The
anisotropy factor $g$ is defined via $l_{\text{t}}=l_{\text{s}}/(1-g)$ [6] and
hence determined by the choice of $l_{\text{t}}$ and $l_{\text{s}}$.
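With the values above the anisotropy factor follows as $g=1-l_{\text{s}}/l_{\text{t}}=2/3$. Assuming the angular sampling follows the widely used Henyey-Greenstein phase function (a common choice in such MC codes, though not specified here), the defining relation $\langle\cos\theta\rangle=g$ can be verified numerically:

```python
import numpy as np

l_t, l_s = 3.0, 1.0       # transport and scattering mean free paths in µm
g = 1 - l_s / l_t         # anisotropy factor via l_t = l_s / (1 - g), here 2/3

def sample_hg_cos(g, n, rng):
    """Draw n scattering-angle cosines from the Henyey-Greenstein phase
    function (assumed here; the text does not state the phase function)."""
    xi = rng.random(n)
    if g == 0:
        return 2 * xi - 1
    return (1 + g**2 - ((1 - g**2) / (1 - g + 2 * g * xi))**2) / (2 * g)

rng = np.random.default_rng(42)
mean_cos = sample_hg_cos(g, 1_000_000, rng).mean()  # ~ 2/3 by construction
```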
## Acknowledgements
We gratefully acknowledge financial support from the German Research
Foundation DFG within the priority program “Tailored Disorder – A science- and
engineering-based approach to materials design for advanced photonic
applications” (SPP 1839). We thank B. D. Wilts for supplying us with a 3D
computer tomography model of the beetle scales’ inner structure. We thank the
team of the Nano Structuring Centre (NSC) at the Technische Universität
Kaiserslautern for their support with focused ion beam milling and scanning
electron microscopy.
## References
* [1] Vukusic, P., Hallam, B. & Noyes, J. Brilliant whiteness in ultrathin beetle scales. Science 315, 348 (2007).
* [2] Luke, S. M., Hallam, B. T., & Vukusic, P. Structural optimization for broadband scattering in several ultra-thin white beetle scales. Appl. Opt. 49, 4246-4254 (2010).
* [3] Utel, F., Cortese, L., Wiersma, D. S. & Pattelli, L. Optimized white reflectance in photonic‐network structures. Adv. Opt. Mater., 7, 1900043 (2019).
* [4] Meiers, D. T., Heep, M.-C. & von Freymann, G. Invited Article: Bragg stacks with tailored disorder create brilliant whiteness. APL Photonics 3, 100802 (2018).
* [5] Rothammer, M., Zollfrank, C., Busch, K., & von Freymann, G. Tailored disorder in photonics: Learning from nature. Adv. Opt. Mater. 9, 2100787 (2021).
* [6] Burresi, M. et al. Bright-white beetle scales optimise multiple scattering of light. Sci. Rep. 4, 6075 (2014).
* [7] Cortese, L. et al. Anisotropic light transport in white beetle scales. Adv. Opt. Mater. 3, 1337-1341 (2015).
* [8] Lee, S. H., Han, S. M. & Han, S. E. Anisotropic diffusion in Cyphochilus white beetle scales. APL Photonics 5, 056103 (2020).
* [9] Lee, S. H., Han, S. M. & Han, S. E. Nanostructure regularity in white beetle scales for stability and strong optical scattering [Invited]. Opt. Mater. Express 11, 1692-1704 (2021).
* [10] Jacucci, G. et al. Coherent backscattering of light by an anisotropic biological network. Interface Focus 9, 20180050 (2019).
* [11] Wilts, B. D. et al. Evolutionary‐optimized photonic network structure in white beetle wing scales. Adv. Mater. 30, 1702057 (2018).
* [12] Burg, S. L. et al. Liquid–liquid phase separation morphologies in ultra-white beetle scales and a synthetic equivalent. Commun. Chem. 2, 100 (2019).
* [13] Schittny, R. et al. Invisibility cloaking in light-scattering media. Laser Photon. Rev. 10, 382-408 (2016).
* [14] Lorenzo, J. R. Principles of diffusive light propagation: Light propagation in tissues with applications in biology and medicine. (World Scientific, 2012).
* [15] Kaveh, M., Rosenbluh, M., Edrei, I. & Freund, I. Weak localization and light scattering from disordered solids. Phys. Rev. Lett. 57, 2049 (1986).
* [16] Wiersma, D. S. The physics and applications of random lasers. Nature Phys. 4, 359-367 (2008).
* [17] Differt, D. et al. Enhanced light absorption in nanotextured amorphous thin-film silicon caused by femtosecond-laser materials processing. Sol. Energy Mater. Sol. Cells 135, 72–77 (2015).
* [18] Aeschlimann, M. et al. Perfect absorption in nanotextured thin films via Anderson-localized photon modes. Nature Photon. 9, 663–668 (2015).
* [19] Lepetit, L., Chériaux, G. & Joffre, M. Linear techniques of phase measurement by femtosecond spectral interferometry for applications in spectroscopy. J. Opt. Soc. Am. B 12, 2467-2474 (1995).
* [20] Mecklenbräuker, W. & Hlawatsch, F. (eds.) The Wigner distribution: Theory and applications in signal processing (Elsevier Science, 1997).
* [21] Pinheiro, F. A. Statistics of quality factors in three-dimensional disordered magneto-optical systems and its applications to random lasers. Phys. Rev. A 78, 023812 (2008).
* [22] Mascheck, M. et al. Observing the localization of light in space and time by ultrafast second-harmonic microscopy. Nature Photon. 6, 293–298 (2012).
* [23] Cao, H. et al. Spatial confinement of laser light in active random media. Phys. Rev. Lett. 84, 5584 (2000).
* [24] Das, C., Trivedi, A., Mitra, K., & Vo-Dinh, T. Short pulse laser propagation through tissues for biomedical imaging. J. Phys. D: Appl. Phys. 36, 1714 (2003).
* [25] Li, J., Qiu, L., Poon, C.-S., & Sunar, U. Analytical models for time-domain diffusion correlation spectroscopy for multi-layer and heterogeneous turbid media. Biomed. Opt. Express 8, 5518-5532 (2017).
* [26] Hohmann, M. et al. Random laser as a potential tool for the determination of the scattering coefficient. Biomed. Opt. Express 12, 5439-5451 (2021).
* [27] Zhou, H. et al. Bio-Inspired photonic materials: Prototypes and structural effect designs for applications in solar energy manipulation. Adv. Funct. Mater. 28, 1705309 (2018).
* [28] Loh, J. Y. Y. et al. Waveguide photoreactor enhances solar fuels photon utilization towards maximal optoelectronic-photocatalytic synergy. Nature Commun. 12, 402 (2021).
* [29] Kassa-Baghdouche, L. & Cassan, E. Mid-infrared gas sensor based on high-Q/V point-defect photonic crystal nanocavities. Opt. Quant. Electron. 52, 260 (2020).
* [30] Ruppin, R. Evaluation of extended Maxwell-Garnett theories. Opt. Commun. 182, 273-279 (2000).
* [31] Leertouwer, H. L., Wilts, B. D. & Stavenga, D. G. Refractive index and dispersion of butterfly chitin and bird keratin measured by polarizing interference microscopy. Opt. Express 19, 24061-24066 (2011).
* [32] Wang, L., Jacques, S. L., & Zheng, L. MCML - Monte Carlo modeling of light transport in multi-layered tissues. Comput. Methods Programs Biomed. 47, 131-146 (1995).
* [33] Harris, F. J. On the use of windows for harmonic analysis with the discrete Fourier transform. Proc. IEEE 66, 51-83 (1978).
# Subset SSD for enhanced indexation with sector constraints
Cristiano Arbex Valle1 and John E Beasley2
Cristiano Arbex Valle is funded by FAPEMIG grant APQ-01267-18.
###### Abstract
In this paper we apply second order stochastic dominance (SSD) to the problem
of enhanced indexation with asset subset (sector) constraints. The problem we
consider is how to construct a portfolio that is designed to outperform a
given market index whilst having regard to the proportion of the portfolio
invested in distinct market sectors.
In our approach, subset SSD, the portfolio associated with each sector is treated in an SSD manner. In other words, in subset SSD we actively try to find sector portfolios that SSD-dominate their respective sector indices. However, the proportion of the overall portfolio invested in each sector is not pre-specified; rather, it is decided via optimisation.
Computational results are given for our approach as applied to the S&P 500
over the period $29^{\text{th}}$ August 2018 to $29^{\text{th}}$ December
2023. This period, over 5 years, includes the Covid pandemic, which had a
significant effect on stock prices. Our results indicate that the scaled
version of our subset SSD approach significantly outperforms the S&P 500 over
the period considered. Our approach also outperforms the standard SSD based
approach to the problem.
1Departamento de Ciência da Computação,
Universidade Federal de Minas Gerais,
Belo Horizonte, MG 31270-010, Brasil
<EMAIL_ADDRESS>
2Brunel University
Mathematical Sciences, UK
<EMAIL_ADDRESS>
Keywords: enhanced indexation, finance, optimisation, portfolio optimisation,
second order stochastic dominance
## 1 Introduction
In this paper we consider the problem of enhanced indexation with asset subset
(sector) constraints. In this problem we aim to outperform a given market
index whilst having regard to the proportion of the portfolio invested in
distinct market sectors. We apply second order stochastic dominance (SSD) to
the problem. Computational results are given for our approach as applied to
the S&P 500.
The structure of this paper is as follows. In Section 2 we review the relevant
literature as to second order stochastic dominance. In Section 3 we present
SSD from a mathematical viewpoint together with discussion of the standard
cutting plane procedure associated with its resolution. In Section 4 we
present our subset SSD approach when we have sector (asset subset) constraints
present that constrain investment in a number of different subsets of assets.
In Section 5 we present computational results obtained when our subset SSD
approach is applied to the S&P 500. In Section 6 we present our conclusions.
We believe that the contribution to the literature of this paper is:
* •
to present a new approach, _subset SSD_ , for the problem of enhanced
indexation with asset subset (sector) constraints
* •
to demonstrate computationally, using data that we make publicly available,
that our subset SSD approach significantly outperforms both the S&P 500 and
the standard SSD approach to the problem
## 2 Literature review
The importance of stochastic dominance (SD) within financial portfolio
selection has been recognised for decades (Hadar and Russell, 1969; Bawa,
1975; Levy, 1992). For two random variables $X$ and $Y$ it is well known that
$X$ dominates $Y$ under first-order stochastic dominance (FSD,
$X\succeq_{{}_{FSD}}Y$) if and only if it is preferable under any
monotonically increasing utility function. Likewise, $X$ dominates $Y$ under
second-order stochastic dominance (SSD, $X\succeq_{{}_{SSD}}Y$) if and only if
it is preferable under any increasing and strictly concave (risk-averse)
utility function (Whitmore and Findlay, 1978).
For many years, however, SD remained primarily a theoretical framework in
financial portfolio optimisation. This was due to the perceived computational
difficulties associated with finding SD-efficient portfolios. In the past
twenty years, however, there has been a shift towards applying SD (especially
SSD) principles in practice, with several optimisation approaches having been
proposed for finding portfolios that are either SSD-efficient (with regards to
a specified set of feasible portfolios) or SSD-dominating (with regards to a
benchmark).
Ogryczak and Ruszczynski (2002) identified several risk measures that can be
employed in mean-risk ($\mu_{X},r_{X}$) decision models that are consistent
with the SSD relation in the sense that $X\succeq_{{}_{SSD}}Y$ implies that
$\mu_{X}\geq\mu_{Y}$ and $r_{X}\leq r_{Y}$. These measures include tail value-
at-risk, tail Gini mean difference and weighted mean deviation from a
quantile. The authors presented stochastic linear programming formulations for
these models whose optimal solutions are guaranteed to be SSD-efficient.
Kuosmanen (2004, 2001) developed the first SSD efficiency tests based on
mathematical programming. Their formulation finds, if it exists, the portfolio
with the highest in-sample mean that dominates a benchmark in the SSD sense.
Post (2003) developed linear programming models for testing if a given
portfolio is SSD-efficient with respect to all possible portfolios given a set
of assets.
Dentcheva and Ruszczynski (2006, 2003) first combine the available assets to
produce a reference (or benchmark) distribution, and then compute a portfolio
which SSD-dominates the benchmark. They used the lower partial moment of order
one to develop the SSD ranking with respect to the benchmark portfolio. Their
work has been the basis of several later papers in the literature, as
referenced below.
Roman et al. (2006) introduced a multi-objective optimisation model to find a
portfolio that achieves SSD dominance over a benchmark. If no such portfolio
exists they find the portfolio whose return distribution comes closest to the
benchmark. They showed that SSD efficiency does not necessarily make a return
distribution desirable, as demonstrated by the portfolio that maximises
expected return (which is SSD-efficient). They emphasised the
crucial role played by a carefully selected benchmark in the process.
Luedtke (2008) presented a model that generalises that of Kuosmanen (2004)
which includes FSD constraints based on a cutting-plane formulation for
problems with integrated chance constraints. Their model involves integer
variables, but relaxing integrality yields a formulation with SSD constraints.
Their objective is to maximise expected portfolio return.
Fábián et al. (2011a, b) introduced a cutting plane reformulation of Roman et
al. (2006) which generalises Dentcheva and Ruszczynski (2006). The authors
replaced the multi-objective nature of the problem by maximising the minimum
value in the SSD relation with regards to a benchmark. Roman et al. (2013)
applied the SSD cutting plane formulation in an enhanced indexation setting.
Valle et al. (2017) added exogenous constraints and reformulated the problem
as an integer linear program, for which a branch-and-cut algorithm was
developed.
Kopa and Post (2015); Post and Kopa (2013) introduced a more generalised
efficiency test which allows for unequal probabilities and higher orders. In
the case of inefficiency their dual model finds a dominating portfolio. If the
portfolio being tested is a benchmark, this dual model can be seen as
equivalent to a model for enhanced indexation.
The set of SSD efficient portfolios is generally very large, and investors
need to decide how to select a portfolio in which to invest from within this
set. The formulation from Post and Kopa (2013) may be used to find different
SSD-efficient portfolios depending on how some parameters are specified.
Hodder et al. (2015) proposed ways to assign values to these parameters with
the goal of helping investors select a single portfolio out of the efficient
set.
Bruni et al. (2017, 2012) developed an alternative approach for SD-based
enhanced indexation. They proposed a criterion called “cumulative zero-order
stochastic $\epsilon$-dominance” (CZS$\epsilon$D). Zero-order SD happens when
all returns from a given portfolio are superior to all returns from an
alternative portfolio. The authors attempt to minimise underperformance by
adding an exponential number of constraints related to the CZS$\epsilon$D
criterion, where $\epsilon$ is the maximum underperformance allowed. The
separation algorithm they use is equivalent to optimising conditional value-
at-risk via linear programming.
Sharma et al. (2017) introduced a relaxed-SSD formulation for enhanced
indexation. The SSD constraints are relaxed by adding under/overachievement
where SSD violation is controlled by setting an appropriate upper bound
related to the total underachievement. The concept of relaxed-SSD was first
introduced by Lizyayev and Ruszczyński (2012).
Sharma and Mehra (2017) proposed an SSD-based approach for producing sector
portfolios. For each sector, their model seeks an SSD portfolio that dominates
the corresponding sector index, whilst focusing on a number of financial
ratios when making sector portfolio decisions. These sector portfolios are
then combined using another model that optimises their mean return subject to
being (if possible) SSD-dominating with respect to the main market index. If
SSD dominance cannot be achieved, either in relation to a sector, or in
relation to the main market index, they relax the dominance constraints in
their models.
Liu et al. (2021) showed that FSD and SSD may not be sufficient to
discriminate between multiple dominating portfolios with regards to a
benchmark. They proposed a new criterion called Interval-based SD (ISD) in
which different SD orders are applied to different parts of the support of the
return distribution. They present a reformulation of Dentcheva and Ruszczynski
(2006) that maximises portfolio return subject to ISD constraints.
Sehgal and Mehra (2021) presented a robust version of the SSD-formulation of
Dentcheva and Ruszczynski (2006). Robustness is introduced by varying asset
returns, and the model is developed as the deterministic equivalent of a
stochastic programming formulation. Goel and Sharma (2021) also generalised
Dentcheva and Ruszczynski (2006) by considering the “utility improvement” in
portfolio returns instead of the returns themselves. The authors proposed
replacing the portfolio and benchmark returns by their respective deviations
in the SSD constraints.
Malavasi et al. (2021) compared the performance of SSD portfolios with
efficient portfolios derived using the standard mean-variance approach of
Markowitz (1952). They also focused on the performance of the global minimum
variance portfolio as compared with portfolios that are stochastically
dominant to this minimum variance portfolio.
Cesarone et al. (2023) compared the formulations of Roman et al. (2013) and
Kopa and Post (2015) with skewed benchmarks obtained by using the reshaping
method of Valle et al. (2017). They found that SSD portfolios that dominate
the skewed benchmark generally perform better out-of-sample.
Liesio et al. (2023) considered the problem of generating an efficient
frontier using stochastic dominance. They presented an approach based on
Pareto optimal solutions of a multiple objective optimisation problem.
Cesarone and Puerto (2024) presented an alternative to Roman et al. (2013)
where, instead of maximising the minimum value of the SSD relation, the
authors proposed a model that optimises the ordered weighted average of a
predefined number of tails.
## 3 Cutting plane based SSD formulation
Based on a reformulation of the conditional value-at-risk minimisation problem
given by Künzi-Bay and Mayer (2006), Fábián et al. (2011a) proposed a novel
cutting plane formulation of the SSD problem, one whose objective is to
maximise the minimum value in the SSD relationship between the portfolio and a
given benchmark (e.g. a market index or some reference distribution). Roman et
al. (2013) then employed the formulation for enhanced indexation. In this
section we outline their approach. Let
* •
$N$ be the number of assets available for investment
* •
$S$ be the number of scenarios, where the scenarios are assumed to be equiprobable
* •
$r_{is}$ be the return of asset $i$ in scenario $s$
* •
$R^{I}_{s}$ be the benchmark (index) return in scenario $s$
* •
$R_{s}^{P}$ be the return associated with a given asset portfolio $P$ in
scenario $s$
* •
$\text{Tail}^{L}_{\frac{s}{S}}(P)$ be the unconditional expectation of
the smallest $s$ outcomes in $[R_{s}^{P}~{}|~{}s=1,\ldots,S]$, so the
left tail of the portfolio return distribution
For a portfolio $P$ with asset weights $[w_{i}]$ and hence return
$R_{s}^{P}=\sum_{i=1}^{N}r_{is}w_{i}$ in scenario $s$ we, as Fábián et al.
(2011a) albeit with slightly different notation, define
$\text{Tail}^{L}_{\frac{s}{S}}(P)$ using
$\text{Tail}^{L}_{\frac{s}{S}}(P)=\frac{1}{S}\text{(sum of the $s$ smallest
portfolio returns in $[R_{1}^{P},R_{2}^{P},\ldots,R_{S}^{P}]$)}$ (1)
Here $\text{Tail}^{L}_{\frac{s}{S}}(P)$ is the left tail of the cumulative
return distribution associated with $[R_{1}^{P},R_{2}^{P},\ldots,R_{S}^{P}]$
weighted by the constant $(1/S)$ factor.
Let $I$ be some index portfolio which we would (ideally) like to outperform.
The index portfolio has known return $R_{s}^{I}$ in scenario
$s,~{}s=1,\ldots,S$. Let
$\hat{\tau}_{s}=\text{Tail}^{L}_{\frac{s}{S}}(I)~{}s=1,\ldots,S$. Clearly we
would like the tails of the chosen portfolio to improve on the index portfolio
tails, so define the tail differences $\mathcal{V}_{s}$ between the chosen
portfolio and the index portfolio using
$\mathcal{V}_{s}=\text{Tail}^{L}_{\frac{s}{S}}(P)-\hat{\tau}_{s}~{}~{}~{}s=1,\ldots,S$
(2)
If $\mathcal{V}_{s}\geq 0$ for all $s=1,\ldots,S$ then the portfolio is
second-order stochastically dominant with respect to the index portfolio.
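As a concrete illustration of these definitions, the tails and tail differences can be computed directly from a vector of scenario returns. The following is a minimal sketch in Python; the function names are ours, not part of the formulation:

```python
def left_tail(returns, s):
    """Tail^L_{s/S}(P): (1/S) times the sum of the s smallest scenario returns."""
    S = len(returns)
    return sum(sorted(returns)[:s]) / S

def tail_differences(portfolio_returns, index_returns):
    """V_s = Tail^L_{s/S}(P) - tau_hat_s for s = 1, ..., S, as in Equation (2)."""
    S = len(portfolio_returns)
    return [left_tail(portfolio_returns, s) - left_tail(index_returns, s)
            for s in range(1, S + 1)]

def ssd_dominates(portfolio_returns, index_returns):
    """The portfolio is SSD-dominant over the index iff every V_s >= 0."""
    return all(v >= 0 for v in tail_differences(portfolio_returns, index_returns))
```

For example, a portfolio whose sorted scenario returns exceed the index's scenario by scenario has every $\mathcal{V}_{s}\geq 0$.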
Now it is trivial to observe that the sum of the $s$ smallest portfolio
returns in the $S$ scenarios can be found by considering all subsets
$\mathcal{J}$ of the $S$ scenarios of cardinality $s$. In other words
$\text{Tail}^{L}_{\frac{s}{S}}(P)=\frac{1}{S}\min\left[\sum_{j\in\mathcal{J}}\sum_{i=1}^{N}r_{ij}w_{i}~{}|~{}\mathcal{J}\subseteq\\{1,...,S\\},|\mathcal{J}|=s\right]$
(3)
If we are choosing $s$ scenarios from the $S$ scenarios then there are
$\frac{S!}{s!(S-s)!}$ subsets $\mathcal{J}$ that need to be considered. So
Equation (3) defines the sum of the $s$ smallest portfolio returns in the $S$
scenarios using a combinatorial number of subsets.
Now to make use of the combinatorial definition of
$\text{Tail}^{L}_{\frac{s}{S}}(P)$ let $\mathcal{V}$ be the minimum value of
$[\mathcal{V}_{s}~{}|~{}s=1,\ldots,S]$. Then a suitable optimisation program
to decide the portfolio of assets that should be held is to
$\mbox{maximise}~{}\mathcal{V}$ (4)
subject to
$\mathcal{V}_{s}\leq\frac{1}{S}\sum_{j\in\mathcal{J}}\sum_{i=1}^{N}r_{ij}w_{i}-\hat{\tau}_{s}~{}~{}~{}~{}\forall\mathcal{J}\subseteq\\{1,...,S\\},~{}|\mathcal{J}|=s,~{}s=1,\ldots,S$
(5) $\mathcal{V}\leq\mathcal{V}_{s}~{}~{}~{}~{}s=1,\ldots,S$ (6)
$\sum_{i=1}^{N}w_{i}=1$ (7) $0\leq w_{i}\leq 1~{}~{}~{}~{}i=1,\ldots,N$ (8)
$\mathcal{V}\in\mathbb{R}$ (9)
$\mathcal{V}_{s}\in\mathbb{R}~{}~{}~{}s=1,\ldots,S$ (10)
Equation (4), in conjunction with Equation (6), maximises the minimum tail
difference. Equation (5) is the standard SSD combinatorial definition of the
tail differences. Equation (7) ensures that all of our wealth is invested in
assets. Equation (8) is the non-negativity constraint (so no short-selling).
Equation (9) ensures that $\mathcal{V}$ can be positive or negative whilst
Equation (10) ensures that the tail differences $\mathcal{V}_{s}$ can be
positive or negative.
Equations (4)-(10) above constitute a portfolio choice optimisation program
with explicit consideration of tails. If the objective function has a
non-negative optimal value then the associated portfolio is second-order
stochastically dominant with respect to the index.
### 3.1 Cutting plane resolution
We can adopt a cutting plane resolution procedure for the portfolio
optimisation program Equations (4)-(10) above. This has been given previously
(albeit in a slightly different form) by Fábián et al. (2011a).
First define an initial scenario set $\mathcal{J^{*}}$ where there is at least
one set of cardinality $s$, for all values of $s=1,\ldots,S$, in
$\mathcal{J^{*}}$ and amend Equation (5) to
$\mathcal{V}_{s}\leq\frac{1}{S}\sum_{j\in\mathcal{J}}\sum_{i=1}^{N}r_{ij}w_{i}-\hat{\tau}_{s}~{}~{}~{}~{}\forall\mathcal{J}\in\mathcal{J^{*}},~{}|\mathcal{J}|=s,~{}s=1,\ldots,S$
(11)
1. 1.
Solve the amended optimisation program, optimise Equation (4) subject to
Equations (6)-(11).
2. 2.
Consider each value of $s$ ($s=1,\ldots,S$) in turn and if in the solution to
the amended optimisation program
$\mathcal{V}_{s}>\frac{1}{S}\text{(sum of the $s$ smallest portfolio returns
over the $S$ scenarios)}-\hat{\tau}_{s}$ (12)
then add the scenario set associated with these $s$ smallest portfolio returns
to $\mathcal{J^{*}}$. Here the scenario set that is added constitutes a valid
cut associated with Equation (5) that is violated by the current solution.
3. 3.
If scenario sets have been added to $\mathcal{J^{*}}$ go to Step (1), else
terminate.
Upon termination at Step (3) above we will have a set of values satisfying all
of the constraints in the amended optimisation program. It remains to prove
that we have solved the original (unamended) optimisation program to
optimality. Here the only difference between the original optimisation program
and the amended optimisation program is the replacement of Equation (5) by
Equation (11).
Consider a particular value of $s$. Since we have terminated no cuts of the
form shown in Equation (12) can be added, in other words we must have
$\mathcal{V}_{s}\leq\frac{1}{S}\text{(sum of the $s$ smallest portfolio
returns over the $S$ scenarios)}-\hat{\tau}_{s}$ (13)
But the term (sum of the $s$ smallest portfolio returns over the $S$
scenarios) corresponds to
$\min[\sum_{j\in\mathcal{J}}\sum_{i=1}^{N}r_{ij}w_{i}~{}|~{}\mathcal{J}\subseteq\\{1,...,S\\},|\mathcal{J}|=s]$,
since it is the sum of the $s$ smallest portfolio returns. So Equation (13) is
equivalent to
$\mathcal{V}_{s}\leq\frac{1}{S}\min\left[\sum_{j\in\mathcal{J}}\sum_{i=1}^{N}r_{ij}w_{i}~{}|~{}\mathcal{J}\subseteq\\{1,...,S\\},|\mathcal{J}|=s\right]-\hat{\tau}_{s}$
(14)
Equation (14) in turn implies that $\mathcal{V}_{s}$ satisfies Equation (5) in
the original optimisation program. This is because the summation term on the
right-hand side of that equation is over all subsets of cardinality $s$, so
equivalent to the minimisation term in Equation (14). Hence we have found the
optimal solution to the original (unamended) optimisation program.
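A minimal sketch of this cutting-plane loop, using scipy's `linprog` as the LP solver, is given below. This is illustrative Python only: the variable ordering, the initial scenario sets and the violation tolerance are our own choices, not prescribed by Fábián et al. (2011a).

```python
import numpy as np
from scipy.optimize import linprog

def solve_ssd_cutting_plane(r, tau_hat, max_iter=50):
    """Cutting-plane resolution of the SSD program (4)-(10).
    r: S x N matrix of scenario returns (rows are scenarios).
    tau_hat: benchmark tails, tau_hat[s-1] = Tail^L_{s/S}(I).
    LP variables: x = [V, V_1, ..., V_S, w_1, ..., w_N]; maximise V."""
    S, N = r.shape
    n_var = 1 + S + N
    c = np.zeros(n_var)
    c[0] = -1.0                                  # linprog minimises, so -V
    A_fix, b_fix = [], []
    for s in range(S):                           # V <= V_s, i.e. V - V_s <= 0
        row = np.zeros(n_var)
        row[0], row[1 + s] = 1.0, -1.0
        A_fix.append(row)
        b_fix.append(0.0)
    A_eq = [np.concatenate(([0.0], np.zeros(S), np.ones(N)))]   # sum w_i = 1
    b_eq = [1.0]
    bounds = [(None, None)] * (1 + S) + [(0.0, 1.0)] * N
    # one initial cut per s: the scenario set {1, ..., s} (any sets would do)
    cuts = {s: {tuple(range(s + 1))} for s in range(S)}
    for _ in range(max_iter):
        A, b = list(A_fix), list(b_fix)
        for s, J_sets in cuts.items():
            for J in J_sets:                     # V_s <= (1/S) sum_{j in J} R_j^P - tau_s
                row = np.zeros(n_var)
                row[1 + s] = 1.0
                row[1 + S:] = -r[list(J), :].sum(axis=0) / S
                A.append(row)
                b.append(-tau_hat[s])
        res = linprog(c, A_ub=A, b_ub=b, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        x = res.x
        w = x[1 + S:]
        port = r @ w                             # portfolio return per scenario
        order = np.argsort(port)
        added = False
        for s in range(S):                       # separation: s+1 smallest returns
            J = tuple(int(j) for j in sorted(order[:s + 1]))
            rhs = port[list(J)].sum() / S - tau_hat[s]
            if x[1 + s] > rhs + 1e-9 and J not in cuts[s]:
                cuts[s].add(J)
                added = True
        if not added:                            # no violated cuts: optimal
            return w, x[0]
    return w, x[0]
```

On a toy instance with one clearly dominant asset the loop terminates after a handful of iterations, returning all weight on that asset.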
### 3.2 Scaled tails
One issue with using Equation (4) as an objective is that there may be
multiple distinct portfolios, each of which has the same maximum $\mathcal{V}$
value. However the SSD formulation can be tailored to focus on certain aspects
of the return distribution associated with the portfolio chosen.
With Equation (6) and an objective of maximising $\mathcal{V}$, relatively
more importance is given to $\text{Tail}^{L}_{\frac{s}{S}}(P)$ when $s$ is
small. This is because $\text{Tail}^{L}_{\frac{s}{S}}(P)$ for $s$ approaching
$S$ is given the same absolute weight by Equation (6) as for $s$ close to 1,
but since the left tails are cumulative, for large values of $s$ the most
positive portfolio returns are “diluted” among smaller returns. An unintended
consequence of this is that solving the maximise $\mathcal{V}$ formulation
tends to yield portfolios that have a smaller left tail when compared to the
benchmark tails $[\hat{\tau}_{s}~{}|~{}s=1,\ldots,S]$, but also a smaller
right tail.
As an alternative Fábián et al. (2011b) proposed scaling the tails by
replacing Equation (6) with
$\frac{s}{S}\mathcal{V}\leq\mathcal{V}_{s}~{}~{}~{}~{}s=1,\ldots,S$ (15)
Here the effect of scaling is that more importance is given to the returns in
the right tails of the distribution.
## 4 Subset SSD
Above we have a single set of assets and we seek a portfolio chosen from these
assets that, in a SSD sense, outperforms (if possible) a given asset index. In
this section we generalise this approach to the case where it is possible to
subdivide the entire set of assets into individual subsets, each with
differing characteristics.
We might be interested in different asset subsets for a number of reasons,
e.g. in a given set of index assets it could be that we believe that large
capitalisation assets and small capitalisation assets exhibit different
behaviour. So in our chosen portfolio we might wish to tailor our exposure to
these two different asset subsets differently. Other asset subsets can be
easily envisaged e.g. based on different market sectors, different momentum
characteristics or any other economic metric. In our approach we do not assume
that the asset subsets are disjoint, in other words a single asset can be in
two or more subsets.
_We should be clear here that under the standard SSD approach exposure to
different asset subsets can be included by adding additional constraints to
the SSD formulation, Equations (4)-(10), as seen above. However in our
approach EACH individual asset subset portfolio is treated in an SSD manner._
For clarity this standard SSD approach is given in Section 4.2 below.
Suppose that we have $K$ asset subsets, where $N^{k}$ is the set of assets in
asset subset $k$ and $\cup^{K}_{k=1}N^{k}=[1,...,N]$. We need for each asset subset
an underlying index in order to create an appropriate SSD formulation. Such an
index may be publicly available. If not, one can easily be produced using
weights associated with any index that includes these assets.
As an illustration of this suppose that the weight associated with asset $i$
in an appropriate benchmark index is $\Gamma_{i}$, where the index is price
based, so the price $P_{it}$ of asset $i$ at time $t$ contributes to the
index. Then the sub-index for the set $N^{k}$ at time $t$ is given by
$\sum_{i\in N^{k}}\Gamma_{i}P_{it}$, so the index return associated with asset
subset $k$ at time $t$ is $\left[\sum_{i\in N^{k}}\Gamma_{i}P_{it}/\sum_{i\in
N^{k}}\Gamma_{i}P_{i,t-1}\right]$. Let $I^{k}$ represent the returns on the
index associated with asset subset $k$. Then
$\hat{\tau}^{k}=(\hat{\tau}^{k}_{1},\ldots,\hat{\tau}^{k}_{S})=\big{(}\text{Tail}^{L}_{\frac{1}{S}}(I^{k}),\ldots,\text{Tail}^{L}_{\frac{S}{S}}(I^{k})\big{)}$.
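For illustration, the sub-index return series just described can be computed as follows. This is a sketch with hypothetical names, assuming the price-weighted index construction above:

```python
def sector_index_returns(prices, gamma, subset):
    """Gross returns of the price-weighted sub-index for asset subset N^k.
    prices[t][i] is the price P_{it}; gamma[i] is the index weight Gamma_i."""
    levels = [sum(gamma[i] * row[i] for i in subset) for row in prices]
    # the sub-index return at time t is level_t / level_{t-1}
    return [levels[t] / levels[t - 1] for t in range(1, len(levels))]
```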
In the SSD formulation below we add a $k$ superscript associated with asset
subset $N^{k}$ to the previous formulation (Equations (4)-(10)). Let
$\mathcal{V}_{s}^{k}$ be the tail difference between the chosen portfolio and
the index portfolio associated with asset subset $k$.
Let $W^{k}\geq 0$ be the proportion of the portfolio invested in subset $k$,
_where this proportion will be decided by the optimisation_. However note here
that, as will be seen below, the decision maker has the flexibility to impose
bounds on $W^{k}$, or indeed to specify the exact value that $W^{k}$ should
take.
Then, drawing on the program given above, Equations (4)-(10), the constraints
of the subset SSD optimisation program are
$\mathcal{V}_{s}^{k}\leq\frac{1}{S}\sum_{j\in\mathcal{J}}\sum_{i\in
N^{k}}r_{ij}w_{i}/W^{k}-\hat{\tau}_{s}^{k}~{}~{}~{}~{}\forall\mathcal{J}\subseteq\\{1,...,S\\},~{}|\mathcal{J}|=s,~{}s=1,\ldots,S,~{}k=1,\ldots,K$
(16) $W^{k}=\sum_{i\in N^{k}}w_{i}~{}~{}~{}~{}k=1,\ldots,K$ (17)
$\delta^{L}_{k}\leq W^{k}\leq\delta^{U}_{k}~{}~{}~{}~{}k=1,\ldots,K$ (18)
$\sum_{i=1}^{N}w_{i}=1$ (19) $0\leq w_{i}\leq 1~{}~{}~{}~{}i=1,\ldots,N$ (20)
$\mathcal{V}_{s}^{k}\in\mathbb{R}~{}~{}~{}~{}s=1,\ldots,S,~{}k=1,\ldots,K.$
(21)
Equation (16) is the tail difference for each subset $k$. In this equation the
summation in the numerator of the first term on the right-hand side of the
inequality is the return from the investment in assets associated with subset
$k$. But unlike Equation (5) above we do not necessarily have that the sum of
the weights (over assets $i\in N^{k}$) will equal one, so we have to divide
this summation by $W^{k}$ before subtracting the
$\hat{\tau}_{s}^{k}$ associated with subset $k$.
Equation (17) defines the subset proportion based on the sum of the
proportions of the total wealth invested in the assets in the subset. Equation
(18) ensures that the proportion of the total investment in subset $k$ lies
between $\delta^{L}_{k}$ and $\delta^{U}_{k}$ where these are the user defined
lower and upper limits on the proportion of the portfolio invested in subset
$k$. Equation (19) ensures that all of our wealth is invested in assets.
Equation (20) is the non-negativity constraint (so no short-selling) and
Equation (21) ensures that the tail differences $\mathcal{V}^{k}_{s}$ can be
positive or negative.
Now assuming that $W^{k}>0$ for $k=1,\ldots,K$ (which we can ensure if we wish
by adding constraints $W^{k}\geq\epsilon,~{}k=1,\ldots,K$, where
$\epsilon{>}0$ is small) we can linearise Equation (16) to
$W^{k}\mathcal{V}_{s}^{k}\leq\frac{1}{S}\sum_{j\in\mathcal{J}}\sum_{i\in
N^{k}}r_{ij}w_{i}-W^{k}\hat{\tau}_{s}^{k}~{}~{}~{}~{}\forall\mathcal{J}\subseteq\\{1,...,S\\},~{}|\mathcal{J}|=s,~{}s=1,\ldots,S,~{}k=1,\ldots,K$
(22)
Here the $W^{k}\mathcal{V}_{s}^{k}$ term is nonlinear, but can be interpreted
as the _proportion weighted tail difference_ associated with set $k$.
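To make this concrete, the proportion-weighted tail difference $W^{k}\mathcal{V}_{s}^{k}$ can be evaluated for a given weight vector as follows (illustrative Python with hypothetical names; equiprobable scenarios as before):

```python
def subset_tail_difference(r, w, subset, tau_hat_k, s):
    """W^k V_s^k = (1/S)(sum of the s smallest subset-portfolio returns)
    - W^k * tau_hat_s^k, with W^k the total weight on the subset."""
    S = len(r)                    # r[j][i]: return of asset i in scenario j
    W_k = sum(w[i] for i in subset)
    subset_returns = [sum(r[j][i] * w[i] for i in subset) for j in range(S)]
    return sum(sorted(subset_returns)[:s]) / S - W_k * tau_hat_k[s - 1]
```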
Now based on Equation (4) we might be tempted to have an objective function of
the form $\mbox{maximise}~{}\mathcal{V}$ where
$\mathcal{V}\leq\mathcal{V}_{s}^{k}~{}s{=}1,\ldots,S,~{}k{=}1,\ldots,K$ and
$\mathcal{V}\in\mathbb{R}$. Here each tail difference $\mathcal{V}_{s}^{k}$
influences the objective, bounding it from above. However we have _no prior
knowledge of the investment proportion associated with subset $k$_. So for
example if we adopt an objective of this form we might have two subsets with
the same tail difference (as calculated using Equation (16)), so with the same
influence on $\mathcal{V}$, but very different investment proportions.
This seems perverse: surely an investment with a higher proportion should
have more influence with respect to the objective? In other words the
investment proportion $W^{k}$ for subset $k$ should ideally be
incorporated, so that the higher the value of $W^{k}$ the more impact subset
$k$ has on the maximisation objective.
It is clear that one way forward is to replace the nonlinear proportion
weighted tail difference term $W^{k}\mathcal{V}_{s}^{k}$ in Equation (22) by a
single term, say $\mathcal{Z}_{s}^{k}\in\mathbb{R}$, and adopt an objective
function of the form
$\mbox{maximise}~{}\mathcal{V}$ (23)
subject to
$\mathcal{Z}_{s}^{k}\leq\frac{1}{S}\sum_{j\in\mathcal{J}}\sum_{i\in
N^{k}}r_{ij}w_{i}-W^{k}\hat{\tau}_{s}^{k}~{}~{}~{}~{}\forall\mathcal{J}\subseteq\\{1,...,S\\},~{}|\mathcal{J}|=s,~{}s=1,\ldots,S,~{}k=1,\ldots,K$
(24)
$\beta\mathcal{V}\leq\mathcal{Z}_{s}^{k}~{}~{}~{}~{}s=1,\ldots,S,~{}k=1,\ldots,K$
(25) $\mathcal{V}\in\mathbb{R}$ (26)
$\mathcal{Z}_{s}^{k}\in\mathbb{R}~{}~{}~{}~{}s=1,\ldots,S,~{}k=1,\ldots,K.$
(27)
with the other constraints (Equations (17)-(20)) remaining as before. In
Equation (25) $\beta$ is the scaling factor where $\beta{=}1$ for no scaling
and $\beta{=}s/S$ for scaled tails as in Equation (15).
### 4.1 Cutting plane resolution - $\mathcal{Z}_{s}^{k}$
We can adopt what is effectively the same cutting plane resolution procedure
for the portfolio optimisation program as given previously by Fábián et al.
(2011a) and seen above. For completeness here we set out this procedure in
full.
First define an initial scenario set $\mathcal{J^{*}}$ where there is at least
one set of cardinality $s$, for all values of $s=1,\ldots,S$, in
$\mathcal{J^{*}}$ and amend Equation (24) to
$\mathcal{Z}_{s}^{k}\leq\frac{1}{S}\sum_{j\in\mathcal{J}}\sum_{i\in
N^{k}}r_{ij}w_{i}-W^{k}\hat{\tau}_{s}^{k}~{}~{}~{}~{}\forall\mathcal{J}\in\mathcal{J^{*}},~{}|\mathcal{J}|=s,~{}s=1,\ldots,S,~{}k=1,\ldots,K$
(28)
1. 1.
Solve the amended optimisation program, optimise Equation (23) subject to
Equations (17)-(20),(25)-(28)
2. 2.
Consider all values of $s$ and $k$ ($s=1,\ldots,S,~{}k=1,\ldots,K$) in turn
and if in the solution to the amended optimisation program
$\mathcal{Z}_{s}^{k}>\frac{1}{S}(\mbox{sum of the $s$ smallest portfolio
returns in subset $k$ over the $S$ scenarios)}-W^{k}\hat{\tau}_{s}^{k}$ (29)
then add the scenario set associated with these $s$ smallest returns to
$\mathcal{J^{*}}$. Here the scenario set that is added constitutes a valid cut
associated with Equation (24) that is violated by the current solution.
3. 3.
If scenario sets have been added to $\mathcal{J^{*}}$ go to Step (1), else
terminate.
### 4.2 Standard SSD based approach
Above we have presented our approach where each individual asset subset is
treated in an SSD manner. The standard approach to constructing a portfolio
designed to outperform a given market index, whilst having regard to the
proportion of the portfolio invested in distinct asset subsets (market
sectors), is to add asset subset constraints to the standard SSD formulation.
In terms of the notation given above this approach corresponds to optimising
Equation (4) subject to Equations (5)-(10),(17),(18). Here we have added the
subset constraints, Equations (17),(18), to the standard SSD formulation.
Computational results, presented below, indicate that for the S&P 500 over the
period which we considered, this standard approach is outperformed by our
approach.
## 5 Computational results
We used a dataset associated with the S&P 500, with daily stock prices from
$29^{\text{th}}$ August 2018 until $29^{\text{th}}$ December 2023. This
period of just over five years includes the Covid pandemic, which had a
significant effect on stock prices. Our data has been manually adjusted to
account for survivorship bias: on a given date only assets that were part of
the S&P 500 index at that time are available to be selected for investment.
In order to define the scenarios required by SSD we used a lookback approach
that included the most recent 85 daily prices, which then yield 84 in-sample
returns (roughly four months in business days).
The SSD subsets were defined by the economic sectors to which each asset
belongs. There are 11 different stock market sectors according to the most
commonly used classification system, known as the Global Industry
Classification Standard (GICS). These sectors are communication services,
consumer discretionary, consumer staples, energy, financials, healthcare,
industrials, materials, real estate, technology and utilities. For each
sector, its benchmark consisted of the corresponding time series for the S&P
sector indices (https://www.spglobal.com/spdji/en/index-family/equity/us-
equity/sp-sectors/). Table 1 shows the S&P 500 sector breakdown as of
$9^{\text{th}}$ October 2023 together with the approximate weight of the
sector with regard to the index.
Sector | Approximate weight (%)
---|---
Technology | 26.0
Healthcare | 14.5
Financials | 12.9
Consumer discretionary | 9.9
Industrials | 8.6
Communication services | 8.2
Consumer staples | 7.4
Energy | 4.5
Utilities | 2.9
Materials | 2.6
Real estate | 2.5
Table 1: S&P 500 sector breakdown
All of the data used in this paper is publicly available for use by other
researchers at:
https://github.com/cristianoarbex/subsetSSDData/
We used CPLEX Optimizer 22.1.0 (2023) as the linear and integer programming
solver, with default options. Our backtesting tool is developed in Python and
all optimisation models are developed in C++. We ran all experiments on an
Intel(R) Core(TM) i7-3770 CPU @ 3.90GHz with 8 cores, 8GB RAM and with Ubuntu
22.04.3 LTS as the operating system.
### 5.1 Out-of-sample performance
In this section we evaluate the performance of our subset SSD approach when
compared to both the S&P 500 and the standard SSD approach with sector
constraints, which was outlined above in Section 4.2.
As mentioned above we used an in-sample period of 85 days. We conducted
periodic rebalancing every 21 days (roughly one month in business days). To
illustrate our approach our first in-sample period of 85 days runs from
$29^{\text{th}}$ August 2018 until $31^{\text{st}}$ December 2018. So using
this in-sample period (with 84 return values for each asset) we choose a
portfolio (using an SSD strategy) on $31^{\text{st}}$ December 2018, evaluate
its performance out-of-sample for the next 20 business days so from
$31^{\text{st}}$ December 2018 to $31^{\text{st}}$ January 2019, then repeat
the process until the data is exhausted. In total this involved 60 out-of-
sample periods for which we then have a single out-of-sample time series of
returns. For simplicity we assume no transaction costs.
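The rolling scheme just described can be sketched as an index schedule over the daily price series. This is illustrative Python; the exact index arithmetic, with 85 in-sample prices and a 21-day holding period, is our reading of the setup above:

```python
def rebalance_schedule(n_prices, in_sample=85, holding=21):
    """(start, rebalance, end) index triples over a daily price series:
    prices[start:rebalance + 1] is the 85-price in-sample window (84 returns);
    the portfolio chosen on day `rebalance` is held until day `end`."""
    schedule = []
    t = in_sample - 1                # first rebalancing day (0-indexed)
    while t + holding < n_prices:
        schedule.append((t - in_sample + 1, t, t + holding))
        t += holding
    return schedule
```

With roughly 1345 trading days between late August 2018 and late December 2023 this yields the 60 out-of-sample periods reported above.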
We evaluated four different strategies: the unscaled and scaled versions of
our subset SSD approach and, correspondingly, the unscaled and scaled versions
of the standard SSD approach with sector constraints.
In order to define sector bounds, for a given sector $k$ we take its exposure
from Table 1 as $\delta_{k}$ and define an interval $\Delta=0.05$ such that
$\delta_{k}^{L}=(1-\Delta)\delta_{k}$ and
$\delta_{k}^{U}=(1+\Delta)\delta_{k}$, where $\delta_{k}^{L}$ and
$\delta_{k}^{U}$ limit exposure to any particular sector, as in Equation (18).
These bounds apply to both subset SSD and standard SSD. This choice ensures
that the portfolios chosen under both subset and standard SSD have similar
exposure to S&P 500 sectors, whilst, at the same time, giving some leeway to
the SSD optimiser in its choice of portfolio.
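As a small worked example of these bounds (a hypothetical helper; sector weights given as fractions of the index, as in Table 1):

```python
def sector_bounds(sector_weights, Delta=0.05):
    """Per-sector exposure limits for Equation (18):
    delta_k^L = (1 - Delta) * delta_k and delta_k^U = (1 + Delta) * delta_k."""
    return {k: ((1 - Delta) * d, (1 + Delta) * d)
            for k, d in sector_weights.items()}
```

For technology at 26.0% of the index this gives bounds of roughly 24.7% and 27.3%.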
Figures 1 and 2 show graphically the cumulative returns during the out-of-
sample period for all four strategies and the S&P 500. For easier
visualisation, we show these results separately, with the subset SSD results
in Figure 1 and the standard SSD results in Figure 2. Both figures use exactly
the same scale.
Figure 1: Cumulative out-of-sample returns for both the unscaled and scaled
versions of the subset SSD formulation with sector constraints
Figure 2: Cumulative out-of-sample returns for both the unscaled and scaled
versions of the standard SSD formulation with sector constraints
Considering these figures the effect of the Covid pandemic can be clearly
seen, with a dramatic fall in cumulative returns for the S&P 500 in the first
half of 2020. It is clear from these figures that the scaled version of our
subset SSD approach significantly outperforms the S&P 500 over the time period
considered.
In order to gain some numeric insight into the performance of the four
strategies as seen in Figures 1 and 2 we show some selected comparative
statistics in Table 2. These are calculated from the out-of-sample returns for
the four strategies, and correspondingly for the S&P 500 index.
Let $Q$ be a series of daily portfolio values indexed $0,\ldots,T$, where
$Q_{t}$ is the value of the given portfolio on day $t$.
final portfolio value, assuming a starting amount of $1, and is calculated as
$Q_{T}/Q_{0}$. CAGR stands for Capital Annualised Growth Rate and as a
percentage is calculated as
$100\left(\left(\frac{Q_{T}}{Q_{0}}\right)^{\frac{1}{Y}}-1\right)$, where
$Y=T/252$ is an approximation for the number of years in the out-of-sample
period. Column Vol represents the annualised sample standard deviation of the
out-of-sample returns. Sharpe and Sortino are the annualised Sharpe and
Sortino ratios respectively, where for their calculation we use the CBOE
10-year treasury notes (symbol TNX) as the risk-free rate. MDD represents the
maximum drawdown and as a percentage is calculated as
$\max\left(0,100\max_{0\leq t<u\leq T}\frac{Q_{t}-Q_{u}}{Q_{t}}\right)$.
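These statistics can be computed directly from the daily value series, as in the following minimal sketch (for simplicity the risk-free rate defaults to zero here, whereas the paper uses the TNX yield, and the Sortino ratio is omitted):

```python
import math

def performance_stats(Q, rf_daily=0.0):
    """FV, CAGR (%), annualised Vol (%), Sharpe ratio and MDD (%) from a
    series Q[0..T] of daily portfolio values (252 trading days per year)."""
    T = len(Q) - 1
    fv = Q[T] / Q[0]                               # final value per $1 invested
    cagr = 100 * (fv ** (252 / T) - 1)             # exponent 1/Y with Y = T/252
    rets = [Q[t + 1] / Q[t] - 1 for t in range(T)]
    mean = sum(rets) / T
    std = math.sqrt(sum((r - mean) ** 2 for r in rets) / (T - 1))
    vol = 100 * std * math.sqrt(252)               # annualised volatility
    sharpe = (mean - rf_daily) / std * math.sqrt(252)
    peak, mdd = Q[0], 0.0                          # maximum drawdown
    for q in Q:
        peak = max(peak, q)
        mdd = max(mdd, 100 * (peak - q) / peak)
    return fv, cagr, vol, sharpe, mdd
```

The drawdown loop is equivalent to the double maximisation over $0\leq t<u\leq T$ in the MDD formula, since only the running peak before each day matters.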
Strategies | FV | CAGR | Vol | Sharpe | Sortino | MDD
---|---|---|---|---|---|---
Subset SSD (scaled) | 2.28 | 17.92 | 18.57 | 0.86 | 1.22 | 29.05
Subset SSD (unscaled) | 1.57 | 9.40 | 19.53 | 0.44 | 0.60 | 36.30
Standard SSD (scaled) | 1.80 | 12.48 | 20.89 | 0.56 | 0.77 | 33.93
Standard SSD (unscaled) | 1.58 | 9.55 | 17.15 | 0.49 | 0.67 | 28.08
S&P 500 | 1.90 | 13.74 | 21.31 | 0.61 | 0.84 | 33.92
Table 2: Comparative out-of-sample statistics
With regard to the scaling of tails, Fábián et al. (2011b); Roman et al.
(2013); Valle et al. (2017) all concluded that scaled SSD tends to achieve
superior out-of-sample returns, but not necessarily superior risk, when
compared to unscaled SSD. The reason for this is that by scaling the tails
more importance is given to the returns in the right tails of the
distribution. Here we observe the same behaviour, with the scaled versions of
both standard and subset SSD outperforming their unscaled versions in terms of
performance (FV, CAGR). The gain in absolute performance also translates to
better risk-adjusted performance (Sharpe, Sortino). As can be seen from Table
2 the unscaled formulations both show inferior performance when compared to
the S&P 500.
With regard to the scaled formulations, subset SSD performed considerably better than standard SSD with sector constraints. We remind the reader that the main difference between the two approaches is that with subset
SSD we actively try to find sector portfolios that SSD dominate their
respective sector indices, as opposed to standard SSD where there is no
attempt to ensure this.
Subset SSD achieved better returns (in terms of FV and CAGR) and better risk (in terms of Vol and MDD), and therefore much improved risk-adjusted performance (Sharpe, Sortino), compared with both standard SSD and the S&P 500. Despite the Covid drop in 2020, the S&P 500 had a strong positive performance over the entire period considered (almost doubling in value). However, subset SSD was able to outperform the S&P 500 not only in terms of return, but also in terms of risk.
Despite the potentially exponential number of constraints involved in the
cutting plane procedures for SSD solution, our experience has been that the
computational effort required to solve each portfolio rebalance to optimality
was negligible. In our experiments a total of 60 rebalances were needed. For
the scaled subset SSD formulation the average computational time per rebalance
was 0.58s, with a maximum of 1.86s and a minimum of 0.13s (median 0.54s),
while for the other strategies the average computational time was between 0.3s
and 0.35s and no rebalance required more than a second.
Figure 3 shows the exposure per sector for scaled subset SSD. The figure shows
comparatively little variation per sector, as expected, since the strategies
are limited by sector bounds to be within $\Delta$, here 5%, of the sector
weightings in the S&P 500.
Figure 3: Out-of-sample exposure per sector, scaled subset SSD
### 5.2 Varying sector bounds
To investigate the performance of our subset SSD approach as sector bounds vary, we performed ten different experiments. As above, in order to
define sector bounds for a given sector $k$ we take its exposure from Table 1
as $\delta_{k}$. Using $\Delta$ we have $\delta_{k}^{L}=(1-\Delta)\delta_{k}$
and $\delta_{k}^{U}=(1+\Delta)\delta_{k}$, where (as before) $\delta_{k}^{L}$
and $\delta_{k}^{U}$ limit exposure to any particular sector, as in Equation
(18).
We evaluated the out-of-sample performance of both scaled subset SSD and
scaled standard SSD for $\Delta=(0.01,0.02,\dots,0.10)$. The results can be
seen in Table 3. In this table we have, for example for FV and scaled subset
SSD, that over the ten values of $\Delta$ considered, the mean FV value was
2.18, the median FV value was 2.13, the minimum FV value was 1.97 and the
maximum FV value was 2.37.
It is clear from Table 3 that, for the data we considered, scaled subset SSD
is superior to scaled standard SSD. For the four performance measures where
high values are better (so FV, CAGR, Sharpe and Sortino) the _minimum_ values
for these measures for scaled subset SSD exceed the _maximum_ values for these
measures for scaled standard SSD. For the two performance measures where low
values are better (so Vol and MDD) the _maximum_ values for these measures for
scaled subset SSD are below the _minimum_ values for these measures for scaled
standard SSD. _In other words, with regard to all six performance measures, scaled subset SSD dominates scaled standard SSD._
In a similar fashion, for the four performance measures where high values are better (FV, CAGR, Sharpe and Sortino) the minimum values for these measures for scaled subset SSD exceed the values associated with the S&P 500. For the two performance measures where low values are better (Vol and MDD) the maximum values for these measures for scaled subset SSD are below the values associated with the S&P 500. _In other words, with regard to all six performance measures, scaled subset SSD dominates the S&P 500._
Stats | Subset SSD (scaled) Mean | Median | Min | Max | Standard SSD (scaled) Mean | Median | Min | Max | S&P 500
---|---|---|---|---|---|---|---|---|---
FV | 2.18 | 2.13 | 1.97 | 2.37 | 1.80 | 1.80 | 1.80 | 1.81 | 1.90
CAGR | 16.88 | 16.32 | 14.52 | 18.90 | 12.51 | 12.51 | 12.44 | 12.59 | 13.74
Vol | 18.55 | 18.57 | 18.31 | 18.74 | 20.91 | 20.91 | 20.82 | 21.02 | 21.31
Sharpe | 0.81 | 0.79 | 0.70 | 0.91 | 0.56 | 0.56 | 0.56 | 0.57 | 0.61
Sortino | 1.15 | 1.12 | 0.97 | 1.29 | 0.77 | 0.77 | 0.77 | 0.78 | 0.84
MDD | 29.29 | 29.05 | 28.35 | 30.87 | 33.93 | 34.02 | 33.54 | 34.27 | 33.92
Table 3: Summary statistics for the scaled formulations when
$\Delta=(0.01,0.02,\ldots,0.10)$
## 6 Conclusions
In this paper we have considered the problem of how to construct a portfolio
that is designed to outperform a given market index, whilst having regard to
the proportion of the portfolio invested in distinct market sectors.
We presented a new approach, subset SSD, for the problem. In our approach
portfolios associated with each sector are treated in a SSD manner so that we
actively try to find sector portfolios that SSD dominate their respective
sector indices. The proportion of the overall portfolio invested in each
sector is not pre-specified, rather it is decided via optimisation.
Computational results were given for our subset SSD approach as applied to the
S&P 500 over the period $29^{\text{th}}$ August 2018 to $29^{\text{th}}$
December 2023. These indicated that the scaled version of our subset SSD
approach significantly outperforms the S&P 500 over the period considered. Our
approach also outperforms the standard SSD based approach to the problem.
## References
* Bawa [1975] V. S. Bawa. Optimal rules for ordering uncertain prospects. _Journal of Financial Economics_ , 2(1):95–121, 1975. doi: 10.1016/0304-405X(75)90025-2.
* Bruni et al. [2012] Renato Bruni, Francesco Cesarone, Andrea Scozzari, and Fabio Tardella. A new stochastic dominance approach to enhanced index tracking problems. _Economics Bulletin_ , 32(4):3460–3470, 2012.
* Bruni et al. [2017] Renato Bruni, Francesco Cesarone, Andrea Scozzari, and Fabio Tardella. On exact and approximate stochastic dominance strategies for portfolio selection. _European Journal of Operational Research_ , 259(1):322–329, 2017. doi: 10.1016/j.ejor.2016.10.006.
* Cesarone and Puerto [2024] Francesco Cesarone and Justo Puerto. New approximate stochastic dominance approaches for enhanced indexation models. https://arxiv.org/html/2401.12669v19, 2024.
* Cesarone et al. [2023] Francesco Cesarone, Raffaello Cesetti, Giuseppe Orlando, Manuel Luis Martino, and Jacopo Maria Ricci. Comparing SSD-efficient portfolios with a skewed reference distribution. _Mathematics_ , 11(1), 2023. doi: 10.3390/math11010050.
  * CPLEX Optimizer 22.1.0 [2023] CPLEX Optimizer 22.1.0. IBM. Available from https://www.ibm.com/products/ilog-cplex-optimization-studio/cplex-optimizer/, last accessed October 17th 2023, 2023.
* Dentcheva and Ruszczynski [2003] Darinka Dentcheva and Andrzej Ruszczynski. Optimization with stochastic dominance constraints. _SIAM Journal on Optimization_ , 14(2):548–566, 2003. doi: 10.1137/S1052623402420528.
* Dentcheva and Ruszczynski [2006] Darinka Dentcheva and Andrzej Ruszczynski. Portfolio optimization with stochastic dominance constraints. _Journal of Banking & Finance_, 30(2):433–451, 2006. doi: 10.1016/j.jbankfin.2005.04.024.
* Fábián et al. [2011a] C. Fábián, G. Mitra, and D. Roman. Processing second-order stochastic dominance models using cutting-plane representations. _Mathematical Programming_ , 130(1):33–57, 2011a. doi: 10.1007/s10107-009-0326-1.
* Fábián et al. [2011b] C. Fábián, G. Mitra, D. Roman, and V. Zverovich. An enhanced model for portfolio choice with SSD criteria: a constructive approach. _Quantitative Finance_ , 11(10):1525–1534, 2011b. doi: 10.1080/14697680903493607.
* Goel and Sharma [2021] Anubha Goel and Amita Sharma. Deviation measure in second-order stochastic dominance with an application to enhanced indexing. _International Transactions in Operational Research_ , 28(4):2218–2247, 2021. doi: 10.1111/itor.12629.
* Hadar and Russell [1969] J. Hadar and W. Russell. Rules for ordering uncertain prospects. _The American Economic Review_ , 59(1):25–34, 1969. doi: 10.2307/1811090.
* Hodder et al. [2015] James E Hodder, Jens Carsten Jackwerth, and Olga Kolokolova. Improved portfolio choice using second-order stochastic dominance. _Review of Finance_ , 19(4):1623–1647, 2015. doi: 10.1093/rof/rfu025.
  * Kopa and Post [2015] M. Kopa and T. Post. A general test for SSD portfolio efficiency. _OR Spectrum_ , 37(1):703–734, 2015. doi: 10.1007/s00291-014-0373-8.
* Künzi-Bay and Mayer [2006] Alexandra Künzi-Bay and János Mayer. Computational aspects of minimizing conditional value-at-risk. _Computational Management Science_ , 3(1):3–27, 2006. doi: 10.1007/s10287-005-0042-0.
  * Kuosmanen [2001] Timo Kuosmanen. Stochastic dominance efficiency tests under diversification. https://econwpa.ub.uni-muenchen.de/econ-wp/fin/papers/0105/0105001.pdf, 2001.
* Kuosmanen [2004] Timo Kuosmanen. Efficient diversification according to stochastic dominance criteria. _Management Science_ , 50(10):1390–1406, 2004. doi: 10.1287/mnsc.1040.0284.
* Levy [1992] Haim Levy. Stochastic dominance and expected utility: survey and analysis. _Management Science_ , 38(4):555–593, 1992. doi: 10.1287/mnsc.38.4.555.
  * Liesio et al. [2023] Juuso Liesio, Markku Kallio, and Nikolaos Argyris. Incomplete risk-preference information in portfolio decision analysis. _European Journal of Operational Research_ , 304(3):1084–1098, 2023. doi: 10.1016/j.ejor.2022.04.043.
* Liu et al. [2021] Jia Liu, Zhiping Chen, and Giorgio Consigli. Interval-based stochastic dominance: theoretical framework and application to portfolio choices. _Annals of Operations Research_ , 307(1):329–361, 2021. doi: 10.1007/s10479-021-04231-9.
* Lizyayev and Ruszczyński [2012] Andrey Lizyayev and Andrzej Ruszczyński. Tractable almost stochastic dominance. _European Journal of Operational Research_ , 218(2):448–455, 2012. doi: 10.1016/j.ejor.2011.11.019.
* Luedtke [2008] James Luedtke. New formulations for optimization under stochastic dominance constraints. _SIAM Journal on Optimization_ , 19(3):1433–1450, 2008. doi: 10.1137/070707956.
  * Malavasi et al. [2021] Matteo Malavasi, Sergio Ortobelli Lozza, and Stefan Truck. Second order of stochastic dominance efficiency vs mean variance efficiency. _European Journal of Operational Research_ , 290(3):1192–1206, 2021. doi: 10.1016/j.ejor.2020.08.051.
  * Markowitz [1952] H. Markowitz. Portfolio selection. _Journal of Finance_ , 7(1):77–91, 1952. doi: 10.1111/j.1540-6261.1952.tb01525.x.
* Ogryczak and Ruszczynski [2002] W. Ogryczak and A. Ruszczynski. Dual stochastic dominance and related mean-risk models. _SIAM Journal on Optimization_ , 13(1):60–78, 2002. doi: 10.1137/S1052623400375075.
* Post and Kopa [2013] T. Post and M. Kopa. General linear formulations of stochastic dominance criteria. _European Journal of Operational Research_ , 230(2):321–332, 2013. doi: 10.1016/j.ejor.2013.04.015.
* Post [2003] Thierry Post. Empirical tests for stochastic dominance efficiency. _Journal of Finance_ , 58(5):1905–1931, 2003. doi: 10.1111/1540-6261.00592.
* Roman et al. [2006] D. Roman, K. Darby-Dowman, and G. Mitra. Portfolio construction based on stochastic dominance and target return distributions. _Mathematical Programming_ , 108(2):541–569, 2006. doi: 10.1007/s10107-006-0722-8.
* Roman et al. [2013] D. Roman, G. Mitra, and V. Zverovich. Enhanced indexation based on second-order stochastic dominance. _European Journal of Operational Research_ , 228(1):273–281, 2013. doi: 10.1016/j.ejor.2013.01.035.
* Sehgal and Mehra [2021] Ruchika Sehgal and Aparna Mehra. Robust reward–risk ratio portfolio optimization. _International Transactions in Operational Research_ , 28(4):2169–2190, 2021. doi: 10.1111/itor.12652.
* Sharma and Mehra [2017] Amita Sharma and Aparna Mehra. Financial analysis based sectoral portfolio optimization under second order stochastic dominance. _Annals of Operations Research_ , 256:171–197, 2017\. doi: 10.1007/s10479-015-2095-y.
* Sharma et al. [2017] Amita Sharma, Shubhada Agrawal, and Aparna Mehra. Enhanced indexing for risk averse investors using relaxed second order stochastic dominance. _Optimization and Engineering_ , 18(2):407–442, 2017. doi: 10.1007/s11081-016-9329-y.
* Valle et al. [2017] C. A. Valle, D. Roman, and G. Mitra. Novel approaches for portfolio construction using second order stochastic dominance. _Computational Management Science_ , 14(2):257–280, 2017. doi: 10.1007/s10287-017-0274-9.
* Whitmore and Findlay [1978] G. A. Whitmore and M. C. Findlay. _Stochastic dominance: an approach to decision-making under risk_. Lexington Books, 1978.
# Decentralized Multi-Agent Planning for Multirotors:
a Fully Online and Communication Latency Robust Approach
Charbel Toumieh The author is an independent researcher (e-mail:
<EMAIL_ADDRESS>
###### Abstract
There are many industrial, commercial and social applications for multi-agent
planning for multirotors such as autonomous agriculture, infrastructure
inspection and search and rescue. Thus, improving on the state-of-the-art of
multi-agent planning to make it a viable real-world solution is of great
benefit. In this work, we propose a new method for multi-agent planning in a
static environment that improves our previous work by making it fully online
as well as robust to communication latency. The proposed framework generates a
global path and a Safe Corridor to avoid static obstacles in an online fashion
(generated offline in our previous work). It then generates a time-aware Safe
Corridor which takes into account the future positions of other agents to
avoid intra-agent collisions. The time-aware Safe Corridor is given with a
local reference trajectory to an MIQP (Mixed-Integer Quadratic Program)/MPC (Model Predictive Control) solver that outputs a safe and optimal trajectory.
The planning frequency is adapted to account for communication delays. The
proposed method is fully online, real-time, decentralized, and synchronous. It
is compared to 3 recent state-of-the-art methods in simulations. It
outperforms all methods in robustness and safety as well as flight time. It
also outperforms the only other state-of-the-art latency robust method in
computation time.
video: https://youtu.be/eKwYNU1Q0wY
## I INTRODUCTION
### I-A Problem statement
Multi-agent planning has been gaining in popularity in the research community
due to recent advances. These advances are making it a viable solution to many
commercial, industrial, and military applications. There are multiple
challenges that face a multi-agent planning framework such as the problem of
synchronizing agents for synchronous planning methods and dealing with
communication latency. It is the purpose of this paper to extend upon our
previous state-of-the-art work [1] that outperformed other state-of-the-art
methods in computation efficiency, trajectory speed, and smoothness in a
cluttered environment. We provide a new approach derived from [1] that is
fully online and robust to arbitrary communication latency. We also study the
effect of communication latency on the overall performance of our planner and
compare it with other state-of-the-art methods.
### I-B Related work
#### I-B1 Multi-agent planning for multirotors
In [2], the authors present a centralized multi-agent planning framework that
uses time-aware Safe Corridors. The method has 3 sequential steps: roadmap
generation, then discrete planning, and finally continuous refinement. The
approach presented by the authors is centralized although some steps can be
decentralized. While the computation time is not suitable for online high-
speed planning and replanning, the method used served as an inspiration for
many subsequent methods in the state-of-the-art. Such methods include [3] and
[1] which in turn served as an inspiration for the work presented in this
paper.
Buffered Voronoi Cells have been used by multiple works [4], [5] for multi-
agent collision avoidance but do not account for static obstacles. Other
approaches [6] use separating hyperplanes to avoid collisions between agents
and model static obstacles in the form of ellipsoid constraints in a
decentralized MPC formulation. The generation of ellipsoid representation of
the environment is not trivial and is not addressed by the authors of [6].
MADER, an asynchronous multi-agent planning framework, has been proposed in [7]. The method allows for avoiding static and dynamic obstacles, as well as
other planning agents. The authors combine a search-based approach with an
optimization approach, where the output of the search-based approach is taken
as initialization for the optimization problem. This choice was made since the
optimization problem defined by the authors is non-convex and requires a good
initial guess.
EGO-Swarm was proposed in [8] as an asynchronous and decentralized trajectory
planner. It requires each planning agent to broadcast its generated trajectory
at a fixed frequency. When each agent receives the trajectories of other
agents, it proceeds immediately to do a collision check. While the approach
has been demonstrated in real-world experiments, it still suffers from
collisions due to communication delays between agents.
In a similar fashion to [2], the authors of [3] present a distributed and
online trajectory generation framework for multi-agent quadrotor systems using
time-aware Safe Corridors (or Linear Safe Corridors). The environment
representation used by the authors is an octomap [9]. The Safe Corridor used
to generate the time-aware Safe Corridor contains only one polyhedron which
leads to slow and conservative trajectories.
In [10], a decentralized model predictive control approach is used for
collision avoidance and cohesive flight. The obstacles are described as
mathematical functions (cylinders, paraboloids, etc.) in order to include them in
the decentralized MPC formulation as constraints. It is however not trivial to
describe an arbitrary cluttered environment through continuous mathematical
functions that are easy to add as constraints to an MPC formulation.
Finally, in our previous work [1], we proposed a decentralized and synchronous
planning framework that is inspired by [2]. The approach takes into account
static obstacles using Safe Corridors (generated from a voxel grid
representation [11]). Safe Corridors are then augmented to time-aware Safe
Corridors to avoid intra-agent collisions. The proposed approach outperforms
state-of-the-art methods in all performance metrics, including robustness,
computation time, and trajectory speed.
#### I-B2 Latency robust multi-agent planning
The previously cited works do not account for communication delay, or can
passively handle latency up to a fixed limit [1]. Some multi-agent planning
frameworks take into account communication delay and will be presented in this
section.
In [12], an asynchronous and decentralized trajectory planner is presented.
The planner guarantees safety using separating hyperplanes from previous
planning iterations. While the presented approach can handle communication
delays, it does not account for any type of obstacles (static or dynamic),
which limits its applicability to the real world.
Finally, RMADER (Robust MADER) is proposed in [13], which is an extension of
MADER [7]. The authors convexify the optimization problem in order to improve the computation time. However, they inherit from MADER the polyhedral
representation of the obstacles in the environment. This representation is not
trivial to generate and can add significant overhead to the planning
framework.
### I-C Contribution
The main contribution of our paper is an improved decentralized and
synchronous planning framework that is robust to communication latency. The
proposed framework is built on our previous work [1] and conserves its
advantages. Thus, the proposed method has low computation time and takes into
account static obstacles and other planning agents. The improvements are:
1. The addition of a mechanism to deal with arbitrary communication latency by dynamically adapting the planning frequency to avoid collisions and guarantee safety.
2. The integration of two previously offline steps in [1] (the global path generation step and the Safe Corridor generation step) to make the framework fully online and suitable for real-world applications.
3. The modification of the stalemate/deadlock resolution mechanism to guarantee safety.
The method is tested in simulations to show the effect of communication
latency on the performance of the planner. It is also compared to 3 recent
works: EGO-Swarm [8], MADER [7] and RMADER [13] in terms of trajectory
safety/performance as well as computation time.
## II Assumptions
Figure 1: We show the global pipeline of the planning framework of a single
planning agent. It is run in a loop at a varying/adaptive frequency.
We assume perfect control (the controller executes the generated trajectory
perfectly) and perfect localization (each agent can localize itself and other
agents at any moment to an arbitrary accuracy). These assumptions are made by
all of the previously cited state-of-the-art methods. In addition to these
assumptions, we assume that the clocks of the agents are synchronized. We consider two cases:

1. We can synchronize all agents at the beginning of a given mission.
2. If an agent that is not yet synchronized approaches a cluster of synchronized agents, we assume the communication range is large enough for the agent to synchronize its clock with the cluster before it comes close enough for collision avoidance to be required.
Furthermore, we assume symmetric behavior of the communication: if there is a
latency in the delivery of a message from agent $i$ to agent $j$ in a given
planning iteration/period, the same latency happens when agent $j$ is trying
to deliver a message to agent $i$.
(a) Safe Corridor at iteration $k$.
(b) Safe Corridor at iteration $k+1$.
Figure 2: The obstacles are shown in red. The predicted positions of the agent
are shown as yellow circles (MPC trajectory). They get increasingly
transparent as we move forward in time. At iteration $k$ (Fig. 2(a)), all
polyhedra (in blue) contain at least one point of the MPC trajectory. At the
next iteration $k+1$ (Fig. 2(b)), the first position of the MPC trajectory
moves out of the first polyhedron (in dashed blue lines). Thus, we remove it
from the Safe Corridor and generate another polyhedron (in green) using the
global path. The new polyhedron is added to the Safe Corridor.
## III The planner
Our planner is run concurrently on each agent in a swarm. The dynamical model
of each agent is the same as presented in [1]. We use a voxel grid
representation of the environment, which can be trivially and efficiently
generated [11]. Each agent has a voxel grid that is of fixed size and that
moves with the agent such that the agent is always at its center. This voxel
grid is used for global path finding and Safe Corridor generation. The clocks
of the agents are synchronized.
In [1], the planning is divided into 2 stages: an offline stage for global
path finding and Safe Corridor generation; then an online stage where the
time-aware Safe Corridors and the dynamically feasible trajectory are
generated. In the planner proposed in this paper, the offline stage is now
integrated into the online planning stage so the whole planning/replanning
framework is run online. This makes it suitable for real-world deployment and
missions such as exploration. The steps of the proposed planner are (Fig. 1):
1. Generate a global path (Sect. III-A).
2. Generate a Safe Corridor (Sect. III-B).
3. Generate a time-aware Safe Corridor (Sect. III-C).
4. Generate a local reference trajectory (Sect. III-D).
5. Solve the Mixed-Integer Quadratic Program (MIQP)/Model Predictive Control (MPC) problem to generate a locally optimal trajectory (Sect. III-E).
In the first step, we generate a global path from the position of the agent to
the goal position. This path avoids all static obstacles and is used to
generate the Safe Corridor and to generate the local reference trajectory. In
the second step, we generate a Safe Corridor (a series of overlapping convex
polyhedra) that covers only the free space in the environment. These convex
polyhedra are used as linear constraints in an optimization formulation to
constrain the trajectory to the free space and avoid collisions with static
obstacles. In the third step, we use the recently generated trajectories of
the agents and the Safe Corridor to generate time-aware Safe Corridors. This
allows the agents to avoid intra-agent collisions. In the fourth step, we
sample the global path at a given velocity to generate a local reference
trajectory that the dynamically feasible trajectory tries to follow as closely
as possible. In the fifth and final step, we generate the dynamically feasible
trajectory to be executed by the agent. It is generated by solving an
optimization problem that takes time-aware Safe Corridors and a local
reference trajectory and guarantees that there are no collisions of any nature
(intra-agent or static obstacles) while the agent moves closer to its goal.
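The five steps can be summarised schematically as follows; the `components` dictionary and its keys are illustrative placeholders for the modules of Sect. III, not an actual API from [1]:

```python
def planning_iteration(state, goal, other_trajs, components):
    """One replanning iteration of the framework (schematic).
    `components` bundles the five modules of Sect. III as callables so
    that any concrete implementation can be plugged in."""
    path = components["global_path"](state, goal)                     # Sect. III-A
    corridor = components["safe_corridor"](path, state)               # Sect. III-B
    tasc = components["time_aware_corridor"](corridor, other_trajs)   # Sect. III-C
    ref = components["reference"](path)                               # Sect. III-D
    return components["miqp_mpc"](tasc, ref)                          # Sect. III-E

# trivial stand-in components, purely to show the data flow
components = {
    "global_path": lambda s, g: [s, g],
    "safe_corridor": lambda p, s: ["SC"],
    "time_aware_corridor": lambda c, o: c + ["TASC"],
    "reference": lambda p: p,
    "miqp_mpc": lambda t, r: ("trajectory", len(t), r[-1]),
}
traj = planning_iteration((0, 0), (1, 1), [], components)
```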
These steps were run sequentially and periodically at a fixed frequency in our
previous work [1]. However, in this work, we vary the planning frequency to
account for communication latency. As in [1], each agent broadcasts its
planned trajectory at the end of the planning iteration so that other agents
can know it. In addition to the planned trajectory, we also broadcast the
times we started and finished generating the trajectory so that other agents
can estimate the communication latency (not done in [1]; more details in
Sect. III-F). We briefly explain each step in this section while focusing more
on the steps where changes were made with respect to [1].
### III-A Generate a global path
In this step, a global path is generated connecting the current position of
the agent to the desired final position using the local voxel grid. The
occupied voxels in the voxel grid are inflated by each agent’s size before
feeding the grid to the path planning algorithm. In case the goal position is
outside the local voxel grid of the agent, we choose an intermediate goal in
the grid as presented in [14]. The main idea is to draw a line connecting the
position of the agent to the goal and get the intersection with the borders of
the voxel grid. This intersection is a voxel and is set as an intermediate
goal. We also clear/set to free all the border voxels of the voxel grid to
help the agent find a path to the intermediate goal in extremely cluttered
environments.
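As a toy illustration of the intermediate-goal construction, the following sketch intersects the agent-to-goal segment with the border of a square 2D grid centred on the agent (the method of [14] does this on the 3D voxel grid and returns a border voxel):

```python
def intermediate_goal(pos, goal, half_extent):
    """Intersect the segment pos -> goal with the border of a square local
    grid of the given half-extent centred on pos, returning the border
    point used as an intermediate goal when the goal lies outside."""
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    # largest step fraction t in (0, 1] keeping pos + t*(dx, dy) inside the grid
    t = min(1.0,
            half_extent / abs(dx) if dx else float("inf"),
            half_extent / abs(dy) if dy else float("inf"))
    return (pos[0] + t * dx, pos[1] + t * dy)
```

If the goal already lies inside the grid, the fraction clamps to 1 and the goal itself is returned.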
At each iteration, the starting point for the global path search is the last
point in the local reference trajectory generated in the previous planning
iteration (Sect. III-D). The local reference trajectory is then connected to
the path found through the global search to generate the final global path
used in the subsequent sections (for generating the local reference trajectory
of the current iteration).
We use JPS (Jump Point Search) [15] and DMP (Distance Map Planner) for path
planning. JPS employs pruning techniques on the A* algorithm to potentially
speed up the generation time by an order of magnitude. DMP uses artificial
potential fields to push the path generated by JPS away from obstacles. This
adds an additional margin of safety and improves the trajectory generated in
the last step (MIQP optimization output) in terms of speed and smoothness (see
[14] for more details).
(a) Stalemate caused by a symmetrical position.
(b) Perturbing hyperplanes asymmetrically.
(c) Perturbing hyperplanes symmetrically.
Figure 3: A stalemate/deadlock happens when 2 agents are trying to move
towards opposite goals and the solver is stuck on the borders of the
hyperplanes (Fig. 3(a)). Any movement up or down would not decrease the
distance to the goal. If the hyperplanes are perturbed asymmetrically as done
in [1] (Fig. 3(b)), the distance between the agents can potentially become
lower than the safety distance. We modify the perturbation vector (Sect.
III-C) to make the perturbation symmetrical and guarantee safety when the
agents move in the direction of the magenta vectors or any other direction
(Fig. 3(c)). Figure 4: We show the trajectories of 2 agents (in red and
yellow) and the corresponding discrete positions that get more transparent as
we move forward in time. We ignore the positions of each trajectory that have
no corresponding position in the other ($k-2$ and $k+3$). The separating
hyperplanes (dashed lines in different colors) are generated between the
positions of the agents corresponding to the same time in the future starting
from the current iteration $k$. The last separating hyperplane $k+2$ is used
to fill the remaining $N-3$ hyperplanes required to generate the TASC.
### III-B Generate a Safe Corridor around the global path
Safe Corridors are a series of overlapping convex shapes that cover only free
space in the environment. They are used by many state-of-the-art planning
methods to constrain a dynamically feasible trajectory inside them, and thus
guarantee safety [16], [14], [1]. Many methods exist in the literature for
Safe Corridor generation [17], [18], [19], [20]. The method used for the
generation is [19] since it provides the best performance among the state-of-
the-art methods for trajectory planning.
The Safe Corridor generation method takes as input a voxel grid (the local
voxel grid centered around the agent) and the global path around which we want
to generate the Safe Corridor. At each iteration, we always make sure that we
have a certain number $P_{\text{hor}}$ of polyhedra that cover the free space
of the environment.
At the first iteration of planning, we use the global path at the first
iteration to generate a Safe Corridor that contains up to $P_{\text{hor}}$
number of polyhedra (polyhedra horizon). Subsequently, at each planning
period, we use the global path generated in this planning period to update the
Safe Corridor generated in the last step. The update consists of the following
(Fig. 2): all the polyhedra that contain at least one point of the last
generated MPC trajectory are kept. The other polyhedra are removed and new
polyhedra are generated in their place until we have $P_{\text{hor}}$
polyhedra in total. To generate each polyhedron, we sample the global path at
a constant step (voxel size). We then use the first point of the sampled
global path that is outside all the remaining polyhedra as a seed voxel to
generate an additional polyhedron.
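The corridor update described above can be sketched in a few lines of Python. This is an illustrative sketch only: `Polyhedron` is reduced to an axis-aligned box, and `generate_polyhedron` is a stand-in for the voxel-grid convex decomposition of [19], not the actual implementation.

```python
class Polyhedron:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi          # box corners (illustrative only)

    def contains(self, p):
        return all(l <= x <= h for l, x, h in zip(self.lo, p, self.hi))

def generate_polyhedron(seed, half_size=1.0):
    # Placeholder for the convex decomposition of [19]:
    # grow a box around the seed voxel.
    lo = tuple(s - half_size for s in seed)
    hi = tuple(s + half_size for s in seed)
    return Polyhedron(lo, hi)

def update_safe_corridor(corridor, mpc_traj, global_path, p_hor):
    # 1) Keep polyhedra containing at least one point of the last MPC trajectory.
    kept = [poly for poly in corridor
            if any(poly.contains(p) for p in mpc_traj)]
    # 2) Regenerate until p_hor polyhedra exist: the first sampled global-path
    #    point outside all remaining polyhedra seeds a new one.
    while len(kept) < p_hor:
        seed = next((p for p in global_path
                     if not any(poly.contains(p) for poly in kept)), None)
        if seed is None:
            break                          # path is fully covered
        kept.append(generate_polyhedron(seed))
    return kept
```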
### III-C Generate a time-aware Safe Corridor (TASC)
After generating the Safe Corridor, we use it along with the trajectories
generated by all the other agents at the previous iterations to create a time-
aware Safe Corridor (TASC). The future positions predicted by the MPC
trajectories of the agents at the previous planning iterations are used to
generate hyperplanes to constrain the future/MPC positions at the current
iteration. These hyperplanes are added to the constraints of the Safe
Corridor. This creates a series of Safe Corridors at each planning iteration
that we call time-aware Safe Corridors in [1]. We refer the reader to [1] for
a detailed explanation of how time-aware Safe Corridors are generated.
We augment/improve the TASC generation method to account for trajectories that
were not generated at the same planning iteration $k$ (Fig. 4). We ignore the
positions of each trajectory that have no corresponding positions in the other
trajectory ($k-1$ and $k+3$ in Fig. 4). Then, starting with the position of
the current iteration $k$, we generate separating hyperplanes for the rest of
the common positions ($k$, $k+1$ and $k+2$ in Fig. 4). Since we need $N$
separating hyperplanes to generate the TASC (as shown in [1]), we set the rest
of the hyperplanes equal to the last separating hyperplanes ($k+2$ in Fig. 4).
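The alignment-and-padding logic above can be sketched as follows. The trajectories are lists of 3D positions tagged with the iteration index of their first position (`own_k0`, `other_k0` are hypothetical names), and the separating hyperplane is simply placed at the midpoint for illustration; in [1] its placement also accounts for the safety distance.

```python
import math

def separating_hyperplane(p_own, p_other):
    # Hyperplane n.x = d between the two positions, placed at the midpoint
    # and oriented from the ego agent toward the other agent.
    n = [b - a for a, b in zip(p_own, p_other)]
    norm = math.sqrt(sum(c * c for c in n)) or 1.0
    n = [c / norm for c in n]
    mid = [(a + b) / 2 for a, b in zip(p_own, p_other)]
    d = sum(nc * mc for nc, mc in zip(n, mid))
    return n, d

def tasc_hyperplanes(own_traj, own_k0, other_traj, other_k0, k_cur, N):
    # Keep only positions with a matching iteration index in both
    # trajectories, starting from the current iteration k_cur, then pad to
    # N hyperplanes by repeating the last one (Sect. III-C).
    planes = []
    for k in range(k_cur, k_cur + N):
        i, j = k - own_k0, k - other_k0
        if 0 <= i < len(own_traj) and 0 <= j < len(other_traj):
            planes.append(separating_hyperplane(own_traj[i], other_traj[j]))
    while planes and len(planes) < N:
        planes.append(planes[-1])   # reuse the last common hyperplane
    return planes
```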
#### III-C1 Dealing with stalemates/deadlocks
In [1], in order to avoid stalemates/deadlocks, we modified the normal vectors
of the separating hyperplanes by perturbing them constantly through time (a
time-varying right-hand rule). This would avoid adding an explicit mechanism
that creates subgoals for each agent to avoid stalemates/deadlocks like in
[21]. We defined the normalized plane normal
$\boldsymbol{n}_{\text{hyp,norm}}$; the right vector $\boldsymbol{r}$, which
is the sum of the cross products of $\boldsymbol{n}_{\text{hyp,norm}}$ with
$\boldsymbol{z}_{W}$ and with $\boldsymbol{y}_{W}$; a perturbation $m$; and a
user-chosen coefficient $c$ that defines how tilted the final normal vector of
the hyperplane $\boldsymbol{n}_{\text{hyp,final}}$ is with respect to the
initial vector $\boldsymbol{n}_{\text{hyp}}$:
$\displaystyle\boldsymbol{n}_{\text{hyp,norm}}=\dfrac{\boldsymbol{n}_{\text{hyp}}}{||\boldsymbol{n}_{\text{hyp}}||_{2}}$
(1)
$\displaystyle\boldsymbol{z}_{W}=[0,0,1]^{T},\quad\boldsymbol{y}_{W}=[0,1,0]^{T}$
(2)
$\displaystyle\boldsymbol{r}=\boldsymbol{n}_{\text{hyp,norm}}\times\boldsymbol{z}_{W}+\boldsymbol{n}_{\text{hyp,norm}}\times\boldsymbol{y}_{W}$
(3)
$\displaystyle\boldsymbol{n}_{\text{pert}}=(c+m)\cdot\dfrac{\boldsymbol{r}}{||\boldsymbol{r}||_{2}}+c\cdot\boldsymbol{z}_{W}$
(4)
$\displaystyle\boldsymbol{n}_{\text{hyp,final}}=\boldsymbol{n}_{\text{pert}}+\boldsymbol{n}_{\text{hyp,norm}}$
(5)
However, a component of the perturbation vector $\boldsymbol{n}_{\text{pert}}$
is non-symmetric ($c\cdot\boldsymbol{z}_{W}$), which can generate normal
vectors that are non-colinear. This can result in cases where the distance
between agents is lower than the safety/collision distance $2\cdot
d_{\text{rad}}$ (Fig. 3). For this reason, we replace the non-symmetric term
with the following symmetric term:
$c\cdot(\boldsymbol{z}_{W}\times\boldsymbol{n}_{\text{hyp,norm}})$. The final
perturbation vector then becomes:
$\displaystyle\boldsymbol{n}_{\text{pert}}=(c+m)\cdot\dfrac{\boldsymbol{r}}{||\boldsymbol{r}||_{2}}+c\cdot(\boldsymbol{z}_{W}\times\boldsymbol{n}_{\text{hyp,norm}})$
(6)
It is then added to $\boldsymbol{n}_{\text{hyp,norm}}$ to generate
$\boldsymbol{n}_{\text{hyp,final}}$ as in equation (5).
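Equations (1)-(6) translate directly into code. The sketch below assumes plain 3-vectors; the property the symmetric term restores — that the perturbed normals computed from opposite input normals stay colinear — can be checked numerically.

```python
import math

def cross(a, b):
    # Standard 3D cross product.
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def normalize(v):
    n = math.sqrt(sum(c * c for c in v)) or 1.0
    return [c / n for c in v]

def perturbed_normal(n_hyp, c, m):
    # Eqs. (1)-(3), (6), (5): tilt the separating-hyperplane normal with the
    # symmetric perturbation term to avoid stalemates without producing
    # non-colinear normals on the two sides.
    zW, yW = [0.0, 0.0, 1.0], [0.0, 1.0, 0.0]
    n_norm = normalize(n_hyp)                                        # eq. (1)
    r = [a + b for a, b in zip(cross(n_norm, zW), cross(n_norm, yW))]  # eq. (3)
    r_hat = normalize(r)
    sym = cross(zW, n_norm)                                          # symmetric term
    n_pert = [(c + m) * rc + c * sc for rc, sc in zip(r_hat, sym)]   # eq. (6)
    return [p + n for p, n in zip(n_pert, n_norm)]                   # eq. (5)
```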
### III-D Generate a local reference trajectory
We use the global path to generate a local reference trajectory that is used
as a reference for the MPC to follow. The generation of such reference
trajectory is done by sampling the global path at a constant velocity
$v_{\text{samp}}$. The number of sampled points is equal to the number of
discretization steps ($N$) in the MPC/MIQP formulation.
We only generate a new local reference trajectory in the following case: the
last point of the MPC trajectory is within a distance $d_{\text{thresh}}$ from
the last point of the local reference trajectory generated at the previous
iteration. Otherwise, we keep the local reference trajectory generated at the
previous planning iteration.
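A minimal sketch of the reference sampling and the regeneration test, assuming the global path is a 3D polyline; the names `sample_reference` and `should_resample` are illustrative, not the paper's API.

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def sample_reference(global_path, v_samp, h, N):
    # Walk along the polyline at constant speed v_samp, taking one sample
    # per MPC step h, for N samples total (Sect. III-D).
    step = v_samp * h
    samples, seg = [], 0
    pos = list(global_path[0])
    for _ in range(N):
        samples.append(tuple(pos))
        remaining = step
        while remaining > 0 and seg < len(global_path) - 1:
            seg_len = dist(pos, global_path[seg + 1])
            if seg_len > remaining:
                t = remaining / seg_len    # interpolate within the segment
                pos = [p + t * (q - p) for p, q in zip(pos, global_path[seg + 1])]
                remaining = 0.0
            else:
                pos = list(global_path[seg + 1])
                remaining -= seg_len
                seg += 1
    return samples

def should_resample(mpc_traj_end, ref_end, d_thresh):
    # Generate a new reference only once the MPC endpoint has caught up
    # with the end of the previous reference trajectory.
    return dist(mpc_traj_end, ref_end) <= d_thresh
```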
Figure 5: We show an example of how different agents handle communication
delays between each other. In this example agent 2 communicates with agents 1
and 3, whereas agents 1 and 3 do not communicate with each other (not within
the range of communication). We show in green the computation time of each
agent, in blue the communication latency between agents 1 and 2, and in red
the communication latency between agents 2 and 3. The arrows indicate the time
at which an agent $i$ receives the trajectory $\boldsymbol{T}_{j,k}$ of
another agent $j$ generated at iteration $k$. At the first iteration, all
agents synchronize their first planning iteration to be at the same time. At
the subsequent iterations, an agent skips planning in one of 2 cases: 1) at
least one agent within the communication range is yet to receive its last
generated trajectory; 2) it is yet to receive a newly generated trajectory of
another agent within the communication range and it has used all the
previously received trajectories of this agent to generate its own trajectory.
### III-E Solving the MIQP/MPC problem
In this final step, we take the reference trajectory, and we solve an MPC
optimization problem that minimizes the distance of the generated trajectory
to the reference trajectory while also minimizing the jerk for smoothness. The
generated trajectory consists of $N+1$ discrete states $\boldsymbol{x}_{i}$,
$i=0,1,...,N$ that contain the position, velocity, and acceleration of the
agent. Each consecutive pair of discrete states are separated by a time step
$h$. Thus, the time horizon of the planning is $N\cdot h$. The velocity and
acceleration of the last state $\boldsymbol{x}_{N}$ are constrained/set to 0
to guarantee a safe trajectory for all agents in case subsequent optimizations
fail (see [1] for more details).
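The discrete states can be propagated with the exact triple-integrator update under a jerk input held constant over each step $h$; this reproduces the $N+1$ states from $N$ jerk inputs. This is a sketch of the state dynamics only, not the MIQP solver.

```python
def step(state, jerk, h):
    # Exact triple-integrator update over one MPC step of length h,
    # with the jerk held constant on the interval (Sect. III-E).
    p, v, a = state
    p2 = [pi + vi*h + ai*h*h/2 + ji*h**3/6
          for pi, vi, ai, ji in zip(p, v, a, jerk)]
    v2 = [vi + ai*h + ji*h*h/2 for vi, ai, ji in zip(v, a, jerk)]
    a2 = [ai + ji*h for ai, ji in zip(a, jerk)]
    return (p2, v2, a2)

def rollout(x0, jerks, h):
    # N jerk inputs produce the N+1 discrete states of the MPC trajectory.
    states = [x0]
    for j in jerks:
        states.append(step(states[-1], j, h))
    return states
```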
The time-aware Safe Corridor is used to ensure the safety of the trajectory.
We add the linear constraints of the time-aware Safe Corridor to the MPC
optimization problem. By forcing each segment of the MPC trajectory to be in at
least one of the polyhedra of the time-aware Safe Corridor, we ensure no
collision happens between the agent and the static obstacles as well as other
planning agents. The final formulation of the optimization problem is a Mixed-
Integer Quadratic Problem (MIQP) exactly like the one presented in [14], [1].
### III-F Handling communication delay
Algorithm 1 Run at every iteration $k$ for agent $i$:
1: delay_planning = false
2: for each agent $j$ in $J$ do
3:   if received $\boldsymbol{T}_{j}$ then
4:     traj_old[$j$].add($\boldsymbol{T}_{j}$)
5:   else
6:     if traj_old[$j$].size() == 0 then
7:       delay_planning = true
8:   if not(delay_planning) then
9:     $dt_{\text{delay},i,j}$ = ComputeLatency(traj_old[$j$][0])
10:    if $dt_{\text{delay},i,j}+\boldsymbol{T}_{i,\text{last}}.\text{end}>t_{\text{cur}}$ then
11:      delay_planning = true
12: if not(delay_planning) then
13:   for each agent $j$ in $J$ do
14:     GenerateTASC(traj_old[$j$][0], $\boldsymbol{T}_{i,\text{last}}$)
15:     traj_old[$j$].RemoveFirstElement()
Our previous work [1] ran the planning algorithm at a constant period equal to
the MPC discretization step $dt_{\text{plan}}=h$. It was able to handle
communication delay passively by assuming that the communication delay was
lower than a time variable $dt_{\text{max,delay}}$ equal to the planning
period $dt_{\text{plan}}$ minus the planner computation time
$dt_{\text{comp}}$
($dt_{\text{max,delay}}=dt_{\text{plan}}-dt_{\text{comp}}$). However, no
mechanism was in place to handle the communication latency when it exceeds
$dt_{\text{max,delay}}$.
In this work, we propose to adapt the planning period to be able to guarantee
safety no matter the communication delay. In addition to broadcasting the
trajectory $\boldsymbol{T}_{j}$ when it finishes generating it, each agent $j$
broadcasts the time at which it started generating its trajectory i.e. the
time at the start of the planning period ($\boldsymbol{T}_{j}$.start). It also
broadcasts the time it finished generating the trajectory i.e. the time it
sent it ($\boldsymbol{T}_{j}$.end). This allows another agent $i$ to estimate
the communication delay between it and agent $j$ since their clocks are
synchronized. The delay can be estimated by subtracting
$\boldsymbol{T}_{j}$.end from the reception time of agent $i$,
$t_{\text{rec},i}$:
$\displaystyle
dt_{\text{delay},i,j}=t_{\text{rec},i}-\boldsymbol{T}_{j}\text{.end}$ (7)
This in turn allows agent $i$ to know whether its last generated trajectory
$\boldsymbol{T}_{i,\text{last}}$ was received by agent $j$ before the start
time of the current planning period $t_{\text{cur}}$. The last generated
trajectory of agent $i$ is not yet received by agent $j$ if the following
condition is true:
$\displaystyle
dt_{\text{delay},i,j}+\boldsymbol{T}_{i,\text{last}}.\text{end}>t_{\text{cur}}$
(8)
The planner will skip planning at the start of the current planning period and
wait for the next period if one of these 2 cases is true:
1. It knows that there is another agent within its communication range that is yet to receive its last planned trajectory.
2. It is yet to receive a new planned trajectory of another agent within its communication range and it has used all the old received trajectories of this agent for planning.
We propose the following algorithm to handle communication latency (Alg. 1).
At every planning iteration (which happens every $dt_{\text{plan}}=h$), every
agent $i$ checks if it received a trajectory from every other agent $j$ (line
3). If it did, it adds the received trajectory to a 2D vector (traj_old) whose
first index indicates the number or ID of the other agent i.e. $j$ (line 4).
If agent $i$ did not receive a trajectory from agent $j$, it checks if there
is an unused old trajectory in the vector traj_old[$j$] (lines 5-6). If not, we
delay the planning since we have no new or old trajectory to use for
generating the TASC (line 7). If the planning should not be delayed due to
previous conditions (line 8), we check if it should be delayed because agent
$j$ hasn’t received the trajectory of agent $i$ yet. This is done by first
computing the communication delay using equation (7) (line 9), and then
checking the condition (8) (lines 10-11). Finally, we check if the planning
should be delayed after going through all agents (line 12). If not, we compute
the TASC using the oldest unused trajectory of each agent $j$ and remove it
from the vector of old trajectories (lines 13-15). The starting time
$\boldsymbol{T}_{j}$.start indicates at which iteration $k$ the trajectory was
generated, which is important for TASC generation (Fig. 4).
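Algorithm 1 can be sketched in Python as follows. The trajectory records are assumed to carry the broadcast `end` time and the local reception time, and `generate_tasc` is a placeholder for the TASC generation step; these names are illustrative.

```python
def should_plan(t_cur, traj_old, my_last_end, agents):
    # Decide whether agent i plans at the iteration starting at t_cur.
    # traj_old[j] holds the unused trajectories of agent j, oldest first.
    for j in agents:
        if not traj_old[j]:
            return False                     # no new or old trajectory (lines 5-7)
        oldest = traj_old[j][0]
        dt_delay = oldest['received'] - oldest['end']     # eq. (7), line 9
        if dt_delay + my_last_end > t_cur:                # eq. (8), lines 10-11
            return False                     # agent j has not received our last plan
    return True

def plan_iteration(t_cur, traj_old, my_last_end, agents, generate_tasc):
    if not should_plan(t_cur, traj_old, my_last_end, agents):
        return False                         # skip; wait for the next period
    for j in agents:
        generate_tasc(traj_old[j].pop(0))    # use and remove oldest (lines 13-15)
    return True
```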
We show an example of how this algorithm would perform in Fig. 5. In this
example, agent 2 sees and communicates with agents 1 and 3, but agents 1 and 3
do not see and communicate with each other. Still, the algorithm allows for
safe planning and coordination between all agents.
## IV Simulation Results
The testing setup is similar to what is presented in [13]. Thus, we will use
their results as a reference for our comparison. The simulations are run on
Intel i7 CPUs with a base frequency of 2.6 GHz and a turbo boost of 4 GHz. The
testing consists of 10 agents in a circular configuration (Fig. 6(a))
exchanging positions. We compare our method with RMADER [13] and 2 versions of
Ego-Swarm [8]. We set the maximum velocity $v_{\text{max}}=10\ \text{m/s}$,
the maximum acceleration $a_{\text{max}}=20\ \text{m/s}^{2}$, and the maximum
jerk $j_{\text{max}}=30\ \text{m/s}^{3}$ for RMADER, Ego-Swarm, and our method
(along the $x$, $y$ and $z$ directions). For Ego-Swarm, we also consider a
more conservative version (slow Ego-Swarm) with a maximum acceleration
$a_{\text{max}}=10\ \text{m/s}^{2}$ and a maximum velocity
$v_{\text{max}}=5\ \text{m/s}$.
For MADER and RMADER, each agent is represented as a bounding box of size
$0.25\times 0.25\times 0.25$ m. For Ego-Swarm and our planner, each agent is
represented as a sphere of diameter $0.25$ m as per the experiments in [13]
(at the time of writing, the bounding box dimensions and sphere diameter were
not mentioned in [13], but they were communicated to us by the authors of
[13]). The comparison is done with 100 simulated runs for communication
latencies equal to $0$, $50$, and $100$ milliseconds. The comparison metrics
are:
1. Collision %: percentage of simulations where there was at least one collision.
2. Average number of stops expected in a single simulation from all agents.
3. Mean of the jerk cost $J_{\text{cost}}=\int_{t_{\text{ini}}}^{t_{\text{fin}}}||\boldsymbol{j}(t)||^{2}\mathrm{d}t$ where $t_{\text{ini}}$ and $t_{\text{fin}}$ are the initial and final time of the trajectory.
4. Mean of the acceleration cost $A_{\text{cost}}=\int_{t_{\text{ini}}}^{t_{\text{fin}}}||\boldsymbol{a}(t)||^{2}\mathrm{d}t$.
5. Mean and max flight time.
6. Computation time.
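The jerk and acceleration costs are both integrals of a squared norm; with the trajectory sampled at a fixed step, a simple Riemann-sum approximation suffices (a sketch under that sampling assumption).

```python
def cost(signal, dt):
    # Discrete approximation of the integral of ||s(t)||^2 dt over the
    # trajectory; used for both J_cost (jerk) and A_cost (acceleration).
    return sum(sum(c * c for c in s) for s in signal) * dt
```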
Table I: Comparison between Ego-Swarm (ES) [8], slow Ego-Swarm (Slow ES) [8], MADER [7], RMADER [13] and our method. The comparison consists of 100 simulations with communication delays between 10 agents exchanging positions in a circular configuration as in Fig. 6(a). The communication delays are $dt=0$ ms $\mid$ $dt=50$ ms $\mid$ $dt=100$ ms. We show in bold the best performer among the safe planners (RMADER [13] and our planner).

Method | Collision [%] | Mean # stops | Accel. cost (m/s$^{2}$) | Jerk cost ($10^{3}$ m/s$^{3}$) | Mean flight time (s) | Max flight time (s)
---|---|---|---|---|---|---
ES [8] | 64 $\mid$ 84 $\mid$ 84 | 0.004 $\mid$ 0 $\mid$ 0.01 | 662 $\mid$ 700 $\mid$ 788 | 9.07 $\mid$ 9.46 $\mid$ 10.4 | 7.19 $\mid$ 7.24 $\mid$ 7.28 | 7.38 $\mid$ 7.51 $\mid$ 7.63
Slow ES [8] | 14 $\mid$ 25 $\mid$ 22 | 0 $\mid$ 0 $\mid$ 0 | 110 $\mid$ 113 $\mid$ 113 | 15.4 $\mid$ 15.5 $\mid$ 15.5 | 11.6 $\mid$ 11.7 $\mid$ 11.8 | 11.9 $\mid$ 12 $\mid$ 13
MADER [7] | 15 $\mid$ 38 $\mid$ 42 | 0 $\mid$ 0.001 $\mid$ 0 | 78.1 $\mid$ 74.2 $\mid$ 74.5 | 1.59 $\mid$ 1.64 $\mid$ 1.64 | 6.28 $\mid$ 6.25 $\mid$ 6.26 | 7.15 $\mid$ 7.35 $\mid$ 7.04
RMADER [13] | 0 $\mid$ 0 $\mid$ 0 | 0.46 $\mid$ 0.347 $\mid$ 1.75 | 127 $\mid$ 148 $\mid$ 190 | 2.94 $\mid$ 3.71 $\mid$ 5.94 | 7.28 $\mid$ 7.95 $\mid$ 10.4 | 8.41 $\mid$ 8.80 $\mid$ 11.9
proposed | 0 $\mid$ 0 $\mid$ 0 | 0 $\mid$ 0 $\mid$ 0 | 109 $\mid$ 114 $\mid$ 119 | 2.27 $\mid$ 2.49 $\mid$ 5.03 | 6.77 $\mid$ 6.79 $\mid$ 7.1 | 7.1 $\mid$ 7.3 $\mid$ 7.7
(a) Our planner: 10 agents with $dt=100$ ms with the setup in Tab. I.
(b) Our planner: 12 agents with $dt=0$ ms and obstacles (Sect. IV-C).
(c) Our planner: 12 agents with $dt=150$ ms and obstacles (Sect. IV-C).
Figure 6: The agents start in a circular configuration and swap positions. We show an overhead view of the trajectories generated by our planner in different settings (with and without obstacles), different communication latencies, and different dynamic limits.

Table II: Computation time of our planner for the results in Tab. I. We show the mean / max / standard deviation.

| $dt=0$ ms | $dt=50$ ms | $dt=100$ ms
---|---|---|---
Comp. (ms) | 10.4 / 61 / 6.6 | 10.1 / 54.7 / 6.4 | 11.4 / 70 / 6.7
### IV-A Planner parameters
The local voxel grid around each agent is of size $15\times 15\times 3.3$ m
and has a voxel size of $0.3$ m. We choose the following parameters: $N=9$,
$h=100$ ms, $v_{\text{samp}}=4.5$ m/s, $P_{\text{hor}}=3$,
$d_{\text{thresh}}=0.4$ m. The rest of the parameters are chosen the same as
in [1] with the exception of the maximum velocity, acceleration, and jerk
which are the same for all planners (Sect. IV).
### IV-B Comparison with the state-of-the-art
We show in Tab. I the results of the planners with different communication
latencies ($0$, $50$, and $100$ ms). Our planner and Ego-Swarm [8] use voxel
grids to represent the obstacles in the environment. MADER [7] and
RMADER [13] on the other hand use a polyhedral representation of the
environment i.e. all obstacles are represented by a series of convex
polyhedra. This representation is not trivial to generate and may add
considerable overhead to the autonomous navigation pipeline.
Our planner and RMADER [13] are the only planners that are able to generate
collision-free trajectories in all simulations, so we will focus our
comparison on them. Our planner outperforms RMADER in trajectory smoothness
across all latencies using both the acceleration ($25$% better on average) and
the jerk ($24$% better on average) metrics.
The mean and max flight times of our planner grow more slowly with increasing
latency than those of RMADER. Over all latencies, our planner outperforms
RMADER in mean flight time by an average of $18$% and max flight time by an
average of $23$%.
#### IV-B1 Computation time
Ego-Swarm is the most computationally efficient with an average computation
time of $0.5$ ms. RMADER improves on MADER [7] in computation time by changing
the optimization problem from non-convex to convex. This improves the mean
computation time by $20$% (from $39.23$ ms to $31.08$ ms) and the max
computation time by $40$% (from $724$ ms to $433$ ms) as reported in [13].
While our planner is not as efficient as Ego-Swarm, it is much more efficient
than RMADER as shown in Tab. II. The mean computation time across all
latencies is $10.6$ ms and the max is $70$ ms.
Table III: Results for 8 and 12 agents in an environment with obstacles (Sect. IV-C). The mean / max / standard deviation of each metric is shown.

# | $dt$ (ms) | Distance (m) | Velocity (m/s) | Flight time (s) | Comp. time (ms) | Acc. cost (m/s$^{2}$) | Jerk cost ($10^{3}$ m/s$^{3}$)
---|---|---|---|---|---|---|---
8 | 0 | 21.6 / 23.1 / 0.72 | 2.52 / 4.21 / 1.24 | 8.47 / 9.5 / 0.4 | 5.5 / 48.7 / 3 | 121 / 170 / 26.2 | 3.5 / 5.56 / 0.94
8 | 50 | 21.6 / 23.1 / 0.72 | 2.51 / 4.21 / 1.24 | 8.47 / 9.5 / 0.4 | 5.4 / 48.1 / 2.9 | 121 / 170 / 26.2 | 3.5 / 5.56 / 0.95
8 | 100 | 21.6 / 23.4 / 0.76 | 2.43 / 4.24 / 1.22 | 8.7 / 9.5 / 0.42 | 6.2 / 35 / 3.8 | 124 / 182 / 26.7 | 6.59 / 9.11 / 0.96
8 | 150 | 21.6 / 23.4 / 0.76 | 2.43 / 4.24 / 1.22 | 8.7 / 9.5 / 0.42 | 6.1 / 33.3 / 3.8 | 124 / 182 / 26.5 | 6.59 / 9.11 / 0.96
12 | 0 | 21.7 / 24.2 / 0.73 | 2.45 / 4.5 / 1.23 | 8.7 / 9.9 / 0.45 | 8.7 / 72.4 / 6 | 130 / 207 / 26.8 | 3.76 / 5.65 / 0.84
12 | 50 | 21.7 / 24.1 / 0.73 | 2.46 / 4.5 / 1.24 | 8.7 / 9.9 / 0.43 | 8.4 / 69.6 / 5.8 | 136 / 207 / 27.6 | 4.19 / 6.56 / 0.88
12 | 100 | 21.6 / 23.9 / 0.71 | 2.38 / 4.36 / 1.2 | 8.98 / 10.3 / 0.46 | 9.2 / 85.9 / 7.3 | 134 / 240 / 28.7 | 6.86 / 10.8 / 0.97
12 | 150 | 21.7 / 23.7 / 0.7 | 2.36 / 4.86 / 1.22 | 9.08 / 10.4 / 0.44 | 10.8 / 86.6 / 8.4 | 146 / 308 / 34.2 | 8.41 / 17.1 / 1.46
### IV-C Environment with obstacles
We add obstacles to the environment as well as delay to see how our planner
performs as the communication latency increases. The obstacles have already
been inflated by the agent’s radius at their generation. We test for $8$ and
$12$ agents. Furthermore, we change the diameter of each agent to $0.3$ m,
$v_{\text{samp}}=3.5$ m/s, $a_{\text{max}}=30$ m/s$^{2}$, $j_{\text{max}}=60$ m/s$^{3}$,
$N=7$ and $d_{\text{thresh}}=0.2$ m for experimental diversity. We generate
$70$ obstacles of size $0.2\times 0.2\times 1.5$ m with random positions at
each simulation run (uniform distribution - Fig. 6(b), 6(c)). We do 10
simulation runs for each latency $dt=0,50,100$, and $150$ ms. The performance
metrics used are the distance traversed by each agent, the flight velocity and
time, the computation time, and the acceleration and jerk costs. The mean /
max / standard deviation of each metric are shown in Tab. III.
In all test runs for 8 and 12 agents, all agents were able to reach their
intended goal/destination safely i.e. the safety distance between the agents
was not violated and they did not get stuck along the way.
For 8 agents, the results for $dt=0$ ms and $dt=50$ ms are similar. This is
due to the fact that in both cases, all agents receive the trajectories before
the start of the next planning iteration since the maximum computation time is
below $50$ ms. The results for $dt=100$ ms and $dt=150$ ms are also similar
due to the same reason: in both cases, all agents receive the trajectories of
other agents every 2 planning iterations (the planning period is effectively
$2h$ due to our latency-handling Algorithm 1).
For 8 and 12 agents, the jerk cost and computation time both increase as the
latency increases. This is due to the more frequent slowdown of each agent as
the latency increases. The slowdown is due to passing through narrow spaces
and avoiding other agents at the same time as well as the latency handling
mechanism (see video link after the abstract).
## V Conclusions and Future Works
In this paper, we presented an improved decentralized, real-time, and
synchronous framework for multi-agent planning. The method improves on our
previous work [1] by making it fully online and suitable for real-world
applications (the global path planning and Safe Corridor generation steps were
done offline in [1]). Furthermore, we added a mechanism to handle arbitrary
communication latency and adapt the planning frequency accordingly. Our
previous work was only able to handle communication latency when it is lower
than a predetermined threshold. We compared our work to 3 state-of-the-art
multi-agent planning methods: Ego-Swarm [8], MADER [7] and RMADER [13]. We
showed that our planner generates the safest trajectories with a $0$%
collision rate. Furthermore, it generates smoother and faster trajectories
than the only other safe and latency robust planner (RMADER) while also being
at least $3\times$ more computationally efficient.
In the future, we plan on implementing our planning method on embedded drone
systems for swarm autonomous navigation. This would require implementing
relative localization algorithms between agents, obstacle detection for
collision avoidance, as well as a communication mechanism for broadcasting
information between agents. Finally, we intend on developing a formation
flight version of our planner. This can be done by adding a cost to the
objective function of our planner that makes agents preserve a predefined
shape.
## References
* [1] C. Toumieh and A. Lambert, “Decentralized multi-agent planning using model predictive control and time-aware safe corridors,” _IEEE Robotics and Automation Letters_ , pp. 1–8, 2022.
* [2] W. Hönig, J. A. Preiss, T. K. S. Kumar, G. S. Sukhatme, and N. Ayanian, “Trajectory planning for quadrotor swarms,” _IEEE Transactions on Robotics_ , vol. 34, no. 4, pp. 856–869, 2018.
* [3] J. Park, D. Kim, G. C. Kim, D. Oh, and H. J. Kim, “Online distributed trajectory planning for quadrotor swarm with feasibility guarantee using linear safe corridor,” _IEEE Robotics and Automation Letters_ , vol. 7, no. 2, pp. 4869–4876, 2022.
* [4] H. Zhu and J. Alonso-Mora, “B-uavc: Buffered uncertainty-aware voronoi cells for probabilistic multi-robot collision avoidance,” in _2019 International Symposium on Multi-Robot and Multi-Agent Systems (MRS)_ , 2019, pp. 162–168.
* [5] D. Zhou, Z. Wang, S. Bandyopadhyay, and M. Schwager, “Fast, on-line collision avoidance for dynamic vehicles using buffered voronoi cells,” _IEEE Robotics and Automation Letters_ , vol. 2, no. 2, pp. 1047–1054, 2017.
* [6] C. E. Luis, M. Vukosavljev, and A. P. Schoellig, “Online trajectory generation with distributed model predictive control for multi-robot motion planning,” _IEEE Robotics and Automation Letters_ , vol. 5, no. 2, pp. 604–611, 2020.
  * [7] J. Tordesillas and J. P. How, “Mader: Trajectory planner in multiagent and dynamic environments,” _IEEE Transactions on Robotics_ , pp. 1–14, 2021.
* [8] X. Zhou, J. Zhu, H. Zhou, C. Xu, and F. Gao, “Ego-swarm: A fully autonomous and decentralized quadrotor swarm system in cluttered environments,” _2021 IEEE International Conference on Robotics and Automation (ICRA)_ , pp. 4101–4107, 2021.
* [9] A. Hornung, K. M. Wurm, M. Bennewitz, C. Stachniss, and W. Burgard, “Octomap: An efficient probabilistic 3d mapping framework based on octrees,” _Autonomous robots_ , vol. 34, no. 3, pp. 189–206, 2013.
* [10] E. Soria, F. Schiano, and D. Floreano, “Distributed predictive drone swarms in cluttered environments,” _IEEE Robotics and Automation Letters_ , vol. 7, no. 1, pp. 73–80, 2021.
* [11] C. Toumieh and A. Lambert, “Gpu accelerated voxel grid generation for fast mav exploration,” _under review for The Journal of Intelligent and Robotic Systems_ , 2021.
* [12] B. Senbaslar and G. Sukhatme, “Asynchronous real-time decentralized multi-robot trajectory planning,” in _IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022)_ , 2022.
* [13] K. Kondo, J. Tordesillas, R. Figueroa, J. Rached, J. Merkel, P. C. Lusk, and J. P. How, “Robust mader: Decentralized and asynchronous multiagent trajectory planner robust to communication delay,” _arXiv preprint arXiv:2209.13667_ , 2022.
* [14] C. Toumieh and A. Lambert, “High-speed planning in unknown environments for multirotors considering drag,” in _2021 IEEE International Conference on Robotics and Automation (ICRA)_ , 2021, pp. 7844–7850.
  * [15] D. D. Harabor and A. Grastien, “Online graph pruning for pathfinding on grid maps,” in _Twenty-Fifth AAAI Conference on Artificial Intelligence_ , 2011.
* [16] C. Toumieh and A. Lambert, “Near time-optimal trajectory generation for multirotors using numerical optimization and safe corridors,” _Journal of Intelligent & Robotic Systems_, vol. 105, no. 1, pp. 1–10, 2022.
* [17] R. Deits and R. Tedrake, “Computing large convex regions of obstacle-free space through semidefinite programming,” in _Algorithmic foundations of robotics XI_. Springer, 2015, pp. 109–124.
* [18] S. Liu, M. Watterson, K. Mohta, K. Sun, S. Bhattacharya, C. J. Taylor, and V. Kumar, “Planning dynamically feasible trajectories for quadrotors using safe flight corridors in 3-d complex environments,” _IEEE Robotics and Automation Letters_ , vol. 2, no. 3, pp. 1688–1695, 2017.
* [19] C. Toumieh and A. Lambert, “Voxel-grid based convex decomposition of 3d space for safe corridor generation,” _Journal of Intelligent & Robotic Systems_, vol. 105, no. 4, pp. 1–13, 2022.
* [20] ——, “Shape-aware safe corridors generation using voxel grids,” _arXiv e-prints_ , pp. arXiv–2208, 2022.
* [21] J. Park, I. Jang, and H. J. Kim, “Decentralized deadlock-free trajectory planning for quadrotor swarm in obstacle-rich environments - extended version,” _ArXiv_ , vol. abs/2209.09447, 2022.
# Generation and robustness of non-local correlations induced by Heisenberg
XYZ and intrinsic decoherence models: $(x,y)$-spin-orbit interactions and
$x$\- magnetic field
F. Aljuaydi Corresponding author<EMAIL_ADDRESS>S. N. Almutairi
Department of Mathematics, College of Science and Humanities, Prince Sattam
bin Abdulaziz University, Al Kharj 11942, Saudi Arabia A.-B. A. Mohamed
Department of Mathematics, College of Science and Humanities, Prince Sattam
bin Abdulaziz University, Al Kharj 11942, Saudi Arabia Department of
Mathematics, Faculty of Science, Assiut University, Assiut, Egypt
###### Abstract
In this work, the Milburn intrinsic decoherence model is used to investigate
the role of spin-spin Heisenberg-XYZ interaction supported by spin-orbit
Dzyaloshinsky–Moriya (DM) interactions of $x$ and $y$-directions together in
the non-local correlation (NLC) dynamics of local quantum Fisher information
(LQFI), local quantum uncertainty (LQU), and log-negativity entanglement.
The two-qubit-Heisenberg-XYZ (non-X)-states’ non-local correlation generations
are explored under the effects of the uniformity and the inhomogeneity of an
applied $x$-direction external inhomogeneous magnetic field (EIMF). Our
meticulous exploration of the obtained results shows that the spin-spin
Heisenberg XYZ and $x,y$-spin-orbit interactions have a high capability to
raise non-local correlations in the presence of a weak external magnetic
field. The raised non-local correlation can be improved by strengthening the
spin-spin and $x,y$-spin-orbit interactions and increasing the EIMF’s
inhomogeneity and uniformity. The amplitudes and fluctuations of the non-local
correlation oscillations are also increased. The degradation of the NLC generations in
the presence of intrinsic decoherence (NLCs’ robustness against intrinsic
decoherence) can be decreased by strengthening the spin-spin interactions.
They can be increased by increasing the intensities of $x,y$-spin-orbit
interactions as well as increasing the EIMF’s inhomogeneity and uniformity.
Keywords: non-local correlation; Heisenberg XYZ states; magnetic fields; spin-orbit interaction
## I Introduction
Among the numerous quantum systems proposed to implement quantum information
and computation [1, 2], superconducting circuits, trapped ions, and
semiconductor quantum dots are essential techniques for realizing quantum bits
(qubits). Based on electron spins trapped in quantum dots, a quantum computer
protocol was initially proposed [3, 4, 5]; the electron, with its spin of 1/2,
is the simplest natural qubit. Recently, quantum computation with electron
spins (a single-spin-qubit geometric gate) has been realized in quantum dots
[6, 7]. Because the electrons can tunnel from one dot to the other, the
spin-spin and spin-orbit couplings between two qubits can be realized in a
two-qubit system formed by the two electrons of two coupled quantum dots.
Therefore, Heisenberg XYZ models describing spin-spin interactions are among
the important proposed qubit systems. Two-qubit Heisenberg XYZ models have been
realized in different systems, including bosonic atoms inside an optical
lattice [8], trapped ions [9] and superconductor systems [10], and linear
molecules [11]. Heisenberg XYZ models have been extended to include spin-orbit
(SO) interactions [12, 13]: the first order of SO coupling, known as the
Dzyaloshinsky–Moriya interaction [14] (realized by an antisymmetric
superexchange interaction in La2CuO4 [15]), and the second order of SO
coupling, the Kaplan-Shekhtman-Entin-Wohlman-Aharony interaction [16].
Spin-1/2 Heisenberg XYZ models have also been extended to include the
dipole-dipole interaction [17],
and inhomogeneous external magnetic fields (IEMFs) [18, 19].
Exploring two-qubit information dynamics, and the associated two-qubit
resources related to different types of nonlocal correlations (such as
entanglement and quantum discord), in different proposed qubit systems is one
of the most active research fields in implementing quantum information and
computation [20]. Quantum entanglement (QE), quantified by entropy [21],
concurrence [22], negativity, and log-negativity [23], among others, is an
important type of qubit nonlocal correlation (NLC) [24, 25] and plays an
important role in quantum information applications: QE is widely used in
implementing quantum computation, teleportation [26, 27], quantum optical
memory [28], and quantum key distribution [29]. After the introduction of quantum
NLCs’ quantifiers have been introduced to address other NLCs [31, 32] by using
Wigner–Yanase (WY) skew information [33] and quantum Fisher information (QFI)
[34]. Where, WY-skew-information minimization (local quantum uncertainty [35]
LQU) and the WY-skew-information maximization (uncertainty-induced nonlocality
[36]) have been introduced to quantify other NLCs beyond entanglement. Also,
the minimization of QFI (local quantum Fisher information, LQFI) was used to
implementing other qubits’ NLCs [37, 38]. LQU has a direct connection to LQFI
[39, 40], establishing more two-qubit NLCs in several proposed qubit systems
[46, 47]: as hybrid-spin systems (under random noise [41] and intrinsic
decoherence [42]), two-coupled double quantum dots [43], the mixed-spin
Heisenberg [44], Heisenberg XXX system [45].
The information dynamics of two-spin Heisenberg XYZ states has been investigated using the Milburn intrinsic decoherence model [48], including entanglement teleportation based on the Heisenberg XYZ chain [49, 50], the LQFI of Heisenberg XXX states beyond IEMF effects [51], and the quantum correlations of concurrence and LQU [52]. Previous works focused on the time evolution of the NLCs of two-spin Heisenberg-XYZ states under restricted conditions on the spin-spin and spin-orbit interactions and on the applied magnetic fields, chosen so that the two-qubit states remain X-states [53, 54, 55, 56, 57, 58, 59, 60, 61]. Here, the Milburn intrinsic decoherence model and the Heisenberg XYZ model are instead used to investigate the nonlocal correlation dynamics of LQFI, LQU, and log-negativity (LN) for general two-qubit Heisenberg-XYZ (non-X) states, induced by other, less restrictive conditions on the spin-spin and spin-orbit interactions as well as on the applied magnetic fields.
The manuscript is organized as follows. Sec. (II) introduces the Milburn intrinsic decoherence equation for the Heisenberg XYZ model and its solution. Sec. (III) introduces the definitions of the NLC quantifiers LQFI, LQU, and LN. Sec. (IV) presents the dependence of the NLC quantifiers on the physical parameters. Our conclusions are given in Sec. (V).
## II The Heisenberg spin model
Here, the Milburn intrinsic decoherence model and the Heisenberg XYZ model are used to examine the capability of the spin-spin interaction and the spin-orbit interaction (described by the Dzyaloshinsky–Moriya (DM) $x,y$-interactions with first-order SO couplings $D_{x}$ and $D_{y}$) to generate essential nonlocal correlations (NLCs) between two SO-coupled qubits, under the effects of the uniformity $B_{m}$ and the inhomogeneity $b_{m}$ of an applied external inhomogeneous magnetic field (EIMF). For two spin qubits (each $k$-qubit ($k=A,B$) is described by its upper $|1_{k}\rangle$ and lower $|0_{k}\rangle$ states), the Hamiltonian of the system is written as
$\displaystyle\\!\\!\\!\\!\\!\hat{H}=\\!\\!\sum_{\alpha=x,y,z}J_{\alpha}\hat{\sigma}^{\alpha}_{A}\hat{\sigma}^{\alpha}_{B}+\sum_{k=A,B}\vec{B}_{k}.\vec{\sigma}_{k}+\vec{D}.(\vec{\sigma}_{A}\times\vec{\sigma}_{B}).$
(1)
$\vec{\sigma}_{k}=(\hat{\sigma}_{k}^{x},\hat{\sigma}_{k}^{y},\hat{\sigma}_{k}^{z})$
with $\hat{\sigma}_{k}^{x,y,z}$ represent the $k$-qubit Pauli matrices.
$\vec{B}_{k}=(B^{x}_{k},B^{y}_{k},B^{z}_{k})$ is the vector of the external
magnetic field applied to the $k$-th spin,
$\vec{B}_{k}.\vec{\sigma}_{k}=B^{x}_{k}\hat{\sigma}_{k}^{x}+B^{y}_{k}\hat{\sigma}_{k}^{y}+B^{z}_{k}\hat{\sigma}_{k}^{z}$.
In our work, we consider that the EIMF is applied only in the $x$-direction:
$\vec{B}_{k}=(B^{x}_{k},0,0)$, $B^{x}_{A}=B_{m}+b_{m}$, and
$B^{x}_{B}=B_{m}-b_{m}$. $\vec{D}=(D_{x},D_{y},D_{z})$ is the spin-orbit/DM
interaction vector. Therefore, we have
$\vec{D}.(\vec{\sigma}_{A}\times\vec{\sigma}_{B})=D_{x}\hat{C}_{x}+D_{y}\hat{C}_{y}+D_{z}\hat{C}_{z}$
with
$\hat{C}_{\alpha}=\hat{\sigma}_{A}^{\alpha+1}\hat{\sigma}_{B}^{\alpha+2}-\hat{\sigma}_{A}^{\alpha+2}\hat{\sigma}_{B}^{\alpha+1}$ ($\alpha=x,y,z$, with the superscripts taken cyclically).
Here, we take only the $x,y$-spin-orbit interactions:
$\vec{D}=(D_{x},D_{y},0)$. The combined spin-spin and $x,y$-spin-orbit
interactions, under the effects of the EIMF characteristics, can be used to
build two-spin-qubit correlations. The considered Hamiltonian is written as
$\displaystyle\hat{H}$ $\displaystyle=$
$\displaystyle\sum_{i=x,y,z}J_{i}\hat{\sigma}^{i}_{A}\hat{\sigma}^{i}_{B}+\sum_{i=x,y}D_{i}\hat{C}_{i}$
(2)
$\displaystyle\qquad\quad+(B_{m}+b_{m})\hat{\sigma}^{x}_{A}+(B_{m}-b_{m})\hat{\sigma}^{x}_{B}.$
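As an illustrative sketch (not part of the paper's formalism), the $4\times 4$ matrix of Eq. (2) can be assembled from Kronecker products of Pauli matrices; the function name and the sample parameter values (those of Fig. 1) are our own choices:

```python
import numpy as np

# Single-qubit Pauli matrices and identity.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def hamiltonian(Jx, Jy, Jz, Dx, Dy, Bm, bm):
    """Matrix of Eq. (2) in the product basis {|11>, |10>, |01>, |00>}."""
    k = np.kron
    return (Jx * k(sx, sx) + Jy * k(sy, sy) + Jz * k(sz, sz)
            # DM terms: D_x*C_x + D_y*C_y, with C_x = sy(x)sz - sz(x)sy, etc.
            + Dx * (k(sy, sz) - k(sz, sy))
            + Dy * (k(sz, sx) - k(sx, sz))
            # inhomogeneous x-direction field: B_A = Bm + bm, B_B = Bm - bm
            + (Bm + bm) * k(sx, I2) + (Bm - bm) * k(I2, sx))

H = hamiltonian(0.8, 0.8, 0.8, 0.5, 0.5, 0.3, 0.5)
```

Every term is a Kronecker product of traceless Hermitian matrices, so the full Hamiltonian is Hermitian and traceless, which is a convenient sanity check.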
The time evolution of the two spin qubits' nonlocal correlations will be
explored by using the Milburn intrinsic decoherence model [48], which is given by
$\frac{d}{dt}\hat{M}(t)=-i[\hat{H},\hat{M}]-\frac{\gamma}{2}[\hat{H},[\hat{H},\hat{M}]],$
(3)
where $\hat{M}(t)$ is the density matrix of the generated two-spin-qubit state and
$\gamma$ is the intrinsic spin-spin decoherence (ISSD) coupling.
Here, the two-spin-qubit eigenvalues $V_{k}$ ($k=1,2,3,4$) and the
eigenstates $|V_{k}\rangle$ of the Hamiltonian of Eq. (2) are calculated
numerically. Hence, in the two-spin-qubit basis
$\{|1_{A}1_{B}\rangle,|1_{A}0_{B}\rangle,|0_{A}1_{B}\rangle,|0_{A}0_{B}\rangle\}$,
the two-spin-qubit state dynamics can be obtained numerically from the
solution of Eq. (3), given by
$\hat{M}(t)=\\!\\!\sum^{4}_{m,n=1}\\!\\!U_{mn}(t)\,S_{mn}(t)\,\langle
V_{m}|\hat{M}(0)|V_{n}\rangle\,|V_{m}\rangle\langle V_{n}|.$ (4)
The unitary-evolution factor $U_{mn}(t)$ and the ISSD damping factor $S_{mn}(t)$
are given by
$\displaystyle U_{mn}(t)$ $\displaystyle=$ $\displaystyle
e^{-i(V_{m}-V_{n})t},$ $\displaystyle S_{mn}(t)$ $\displaystyle=$
$\displaystyle e^{-\frac{\gamma}{2}(V_{m}-V_{n})^{2}t}.$ (5)
Eq. (4) is used to calculate and explore, numerically, the dynamics of the
nonlocal correlations residing within the two-spin-qubit states of the Heisenberg
XYZ model under the effects of the $x,y$-spin-orbit interactions (acting
along the $x$- and $y$-directions) and an external magnetic field applied
along the $x$-direction.
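The solution (4)-(5) amounts to damping the off-diagonal elements of $\hat{M}(0)$ in the energy eigenbasis while the diagonal ones evolve unitarily in phase. A minimal numerical sketch (our own illustration; the Hamiltonian below is the Eq. (2) matrix with the Fig. 1(c) parameters, built inline as an assumption):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)
k = np.kron

# Example Hamiltonian of Eq. (2), Fig. 1(c) parameters: J=0.8, Dx=Dy=0.5,
# Bm=0.3, bm=0.5  ->  B_A = 0.8, B_B = -0.2.
H = (0.8 * (k(sx, sx) + k(sy, sy) + k(sz, sz))
     + 0.5 * (k(sy, sz) - k(sz, sy)) + 0.5 * (k(sz, sx) - k(sx, sz))
     + 0.8 * k(sx, I2) - 0.2 * k(I2, sx))

def milburn_state(H, rho0, t, gamma):
    """Eq. (4): M(t) = sum_mn U_mn(t) S_mn(t) <Vm|M(0)|Vn> |Vm><Vn|."""
    V, W = np.linalg.eigh(H)              # eigenvalues V_k, eigenvector columns
    r = W.conj().T @ rho0 @ W             # matrix elements <Vm|M(0)|Vn>
    dV = V[:, None] - V[None, :]          # V_m - V_n
    factor = np.exp(-1j * dV * t) * np.exp(-0.5 * gamma * dV**2 * t)
    return W @ (factor * r) @ W.conj().T  # back to the product basis

# Initial uncorrelated upper state |1_A 1_B>.
rho0 = np.zeros((4, 4), dtype=complex)
rho0[0, 0] = 1.0
rho = milburn_state(H, rho0, t=1.0, gamma=0.05)
```

Because the diagonal damping factors $S_{mm}(t)$ equal 1, the map preserves the trace and the populations in the energy eigenbasis; only the coherences decay.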
## III Non-local correlation (NLC) quantifiers
Here, the two-spin-qubit NLCs will be measured by the following quantifiers,
LQFI, LQU, and logarithmic negativity (LN):
* •
LQFI
LQFI, recently introduced as another correlation type, can be used as a
quantifier of two-spin Heisenberg-XYZ correlations beyond entanglement. After
calculating the eigenvalues $\pi_{k}$ ($k=1,2,3,4$) and the eigenstates
$|\Pi_{k}\rangle$ of the density matrix of Eq. (4), which has the spectral
representation
$M(t)=\sum_{m}\pi_{m}|\Pi_{m}\rangle\langle\Pi_{m}|$ with $\pi_{m}\geq 0$ and
$\sum_{m}\pi_{m}=1$, the LQFI is calculated by using the closed expression
[38, 34, 37] given by
$F(t)=1-\pi_{R}^{\max},$
where $\pi_{R}^{\max}$ denotes the largest eigenvalue of the symmetric matrix
$R=[r_{ij}]$. Based on the Pauli spin-$\frac{1}{2}$ matrices
$\sigma^{i}\,(i=1,2,3)$ and the elements
$\xi_{mn}^{i}=\langle\Pi_{m}|I\otimes\sigma^{i}|\Pi_{n}\rangle$, the symmetric
matrix elements $r_{ij}$ are given by
$r_{ij}=\sum_{\pi_{m}+\pi_{n}\neq
0}\frac{2\pi_{m}\pi_{n}}{\pi_{m}+\pi_{n}}\xi_{mn}^{i}(\xi_{nm}^{j})^{\dagger}.$
For a maximally correlated two-spin-qubit state, $F(t)=1$, whereas
$0<F(t)<1$ means that the state carries only a partial LQFI nonlocal
correlation.
* •
LQU
LQU, based on Wigner–Yanase (WY) skew information [33], is used as
another type of two-spin-qubit nonlocal correlation [33, 35, 36]. For the
two-spin-qubit density matrix $M(t)$ of Eq. (4), the LQU can be calculated
by [35]
$\displaystyle U(t)$ $\displaystyle=$ $\displaystyle
1-\lambda_{max}(\Lambda_{AB}),$ (6)
where $\lambda_{max}$ denotes the largest eigenvalue of the $3\times 3$ matrix
$\Lambda_{AB}=[a_{ij}]$, whose elements are:
$\displaystyle a_{ij}=\text{T}r\mathbf{\big{\\{}}\sqrt{M(t)}(\sigma_{i}\otimes
I)\sqrt{M(t)}(\sigma_{j}\otimes I)\mathbf{\big{\\}}}.$
* •
Logarithmic negativity (LN)
We employ the logarithmic negativity [23] to measure the generated two-
spin-qubit entanglement. The LN expression is based on the negativity
$\mu_{t}$ [23], defined as the absolute value of the sum of the negative
eigenvalues of the partial transpose $(M(t))^{T}$ of the two-spin-qubit
density matrix $M(t)$ of Eq. (4). The LN can be expressed as:
$\displaystyle N(t)$ $\displaystyle=$ $\displaystyle\log_{2}[1+2\mu_{t}].$ (7)
Here $N(t)=0$ for a disentangled two-spin state, $N(t)=1$ for a maximally
entangled two-spin state, and $0<N(t)<1$ for a partially entangled
two-spin state.
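The three quantifiers above can be sketched numerically as follows (our own illustration; the basis conventions, the choice of which qubit carries the local observable in each quantifier, and the Bell-state check are assumptions, not taken from the paper):

```python
import numpy as np

paulis = [np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def lqfi(rho, tol=1e-12):
    """F = 1 - max eigenvalue of R, with r_ij built from xi^i_mn."""
    p, V = np.linalg.eigh(rho)                    # pi_m, |Pi_m>
    xi = [V.conj().T @ np.kron(np.eye(2), s) @ V for s in paulis]
    R = np.zeros((3, 3), dtype=complex)
    for i in range(3):
        for j in range(3):
            for m in range(4):
                for n in range(4):
                    if p[m] + p[n] > tol:
                        R[i, j] += (2 * p[m] * p[n] / (p[m] + p[n])
                                    * xi[i][m, n] * np.conj(xi[j][m, n]))
    return 1 - np.linalg.eigvalsh((R + R.conj().T) / 2).max()

def lqu(rho):
    """U = 1 - max eigenvalue of a_ij = Tr{sqrt(rho)(s_i x I)sqrt(rho)(s_j x I)}."""
    w, V = np.linalg.eigh(rho)
    s = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T  # sqrt(rho)
    K = [np.kron(si, np.eye(2)) for si in paulis]
    A = np.array([[np.trace(s @ K[i] @ s @ K[j]) for j in range(3)]
                  for i in range(3)])
    return 1 - np.linalg.eigvalsh((A + A.conj().T) / 2).max()

def log_negativity(rho):
    """N = log2(1 + 2*mu), mu = |sum of negative eigenvalues of rho^{T_B}|."""
    rtb = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    ev = np.linalg.eigvalsh(rtb)
    return np.log2(1 + 2 * (-ev[ev < 0].sum()))

# Checks: a Bell state |Phi+> and an uncorrelated product state.
bell = np.zeros((4, 4), dtype=complex)
for a in (0, 3):
    for b in (0, 3):
        bell[a, b] = 0.5
prod = np.zeros((4, 4), dtype=complex)
prod[0, 0] = 1.0
```

On the maximally entangled Bell state all three quantifiers equal 1, while on a pure product state they vanish, matching the extreme cases quoted in the text.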
## IV Two spin-Heisenberg-XYZ-qubits dynamics
Figure 1: The dynamics of the generated local-QFI, local-QU, and log-
negativity correlations due to the couplings
$(J_{x},J_{y},J_{z})=(0.8,0.8,0.8)$ are shown under the effects of the
applied magnetic field $(B_{m},b_{m})=(0.3,0.5)$ and the $D_{x,y}$
interactions: $(D_{x},D_{y})=(0.0,0.0)$ in (a), $(D_{x},D_{y})=(0.5,0.0)$ in
(b), and $(D_{x},D_{y})=(0.5,0.5)$ in (c).
Figure 2: The LQFI (red solid curve), LQU (blue dash-dotted curve), and log-
negativity (green dashed curve) dynamics of Fig.1c are plotted for different
Heisenberg-XYZ couplings: $(J_{x},J_{y},J_{z})=(1,0.5,1.5)$ in (a) and
$(J_{x},J_{y},J_{z})=(5,1,1.5)$ in (b), and strong $x,y$-spin-orbit
interactions $D_{x}=D_{y}=2$ in (c).
Figure 3: The LQFI (red solid curve), LQU (blue dash-dotted curve), and log-
negativity (green dashed curve) dynamics of Fig.2a (for
$(J_{x},J_{y},J_{z})=(1,0.5,1.5)$, $(B_{m},b_{m})=(0.3,0.5)$, and
$D_{x}=D_{y}=0.5$) are plotted for different large magnetic-field
uniformities: $B_{m}=2$ in (a) and $B_{m}=10$ in (b).
Figure 4: The LQFI (red solid curve), LQU (blue dash-dotted curve), and log-
negativity (green dashed curve) dynamics of Fig.2a (for
$(J_{x},J_{y},J_{z})=(1,0.5,1.5)$, $(B_{m},b_{m})=(0.3,0.5)$, and
$D_{x}=D_{y}=0.5$) are plotted for different large magnetic-field
inhomogeneities: $b_{m}=2$ in (a) and $b_{m}=10$ in (b).
Figure 5: The LQFI (red solid curve), LQU (blue dash-dotted curve), and log-
negativity (green dashed curve) dynamics is shown in the presence of the ISSD
$\gamma=0.05$ and the magnetic field $(B_{m},b_{m})=(0.3,0.5)$ with the
couplings $J_{\alpha}=0.8$ for different couplings: $D_{k}=0(k=x,y)$ in (a),
$D_{k}=0.5$ in (b), and $D_{k}=2$ in (c).
Figure 6: The two-spin-qubit correlation dynamics of Figs. 5(b) and 5(c) is
shown, but for strong spin-spin couplings $(J_{x},J_{y},J_{z})=(1,0.5,1.5)$.
Figure 7: The dynamics of the LQFI (red solid curve), LQU (blue dash-dotted
curve), and log-negativity (green dashed curve) of Fig.3a is shown but for
large EIMF’s uniformity $(B_{m},b_{m})=(2,0.5)$ in (a) and large EIMF’s
inhomogeneity $(B_{m},b_{m})=(0.3,2)$ in (b).
Here, we explore the role of the $J_{\alpha}$ spin-spin interactions,
supported by the $x,y$-spin-orbit interactions, in the generation dynamics of
the two-qubit nonlocal correlations (LQFI, LQU, and LN entanglement) of
general Heisenberg-XYZ (non-X) states in the presence of an $x$-direction
EIMF. To explore the generation of these correlations, we consider the two
spins initially in the uncorrelated upper state
$|1_{A}\rangle\otimes|1_{B}\rangle$, whose density matrix has no nonlocal
correlations according to the considered quantifiers. Our focus is on the
effects of the $J_{\alpha}$ spin-spin interactions, the spin-orbit couplings
$D_{x}$ and $D_{y}$, and the parameters $B_{m}$ and $b_{m}$ of the
inhomogeneous $x$-direction magnetic field, in the presence of the intrinsic
spin-spin decoherence (ISSD) coupling.
As a first analysis, we display the dynamics of the two-spin-qubit nonlocal
correlations LQFI, LQU, and LN generated by the couplings
$(J_{x},J_{y},J_{z})=(0.8,0.8,0.8)$, supported by different intensities of
the $x,y$-spin-orbit interactions, in the presence of an inhomogeneous
$x$-direction magnetic field of weak uniformity and inhomogeneity,
$(B_{m},b_{m})=(0.3,0.5)$. In the absence of intrinsic spin-spin decoherence
($\gamma=0$) and of the $x,y$-spin-orbit interactions
($(D_{x},D_{y})=(0.0,0.0)$), Fig. 1(a) illustrates that the two-spin-qubit
LQFI, LQU, and log-negativity grow to reach their maxima. These nonlocal
correlations undergo slow quasi-regular oscillations with the same
frequencies but different amplitudes. LQFI and LQU exhibit the same behavior;
we refer to this common two-spin-qubit correlation as the
”Fisher-Wigner–Yanase nonlocal correlation”. The log-negativity amplitude
is always greater than those of the LQFI and LQU. Under these circumstances
of the weak-coupling regime $J_{\alpha}=0.8$ and an applied inhomogeneous
$x$-direction magnetic field of weak uniformity and inhomogeneity, the
initially pure uncorrelated two-spin state passes through different
time-dependent partially correlated states and, at particular times,
transforms into maximally correlated states. The two-spin states reach the
maximal Fisher-Wigner–Yanase correlation ($F(t)=U(t)=1$) and log-negativity
($N(t)=1$) at the same times. At other particular times, we observe
partially entangled two-spin states that have no LQFI or LQU correlation.
The effects of weak $x,y$-spin-orbit interactions are shown in Fig. 1(b). As
is clear from this figure, the regularity and the fluctuations of the
generated Fisher-Wigner–Yanase and log-negativity nonlocal correlations
increase substantially compared with the case without spin-orbit
interaction. The weak $D_{x}$ spin-orbit interaction
$(D_{x},D_{y})=(0.5,0)$ dramatically enhances the appearance of the
intervals of maximal Fisher-Wigner–Yanase and log-negativity correlations,
as well as of the intervals in which entangled two-spin states have no LQFI
or LQU correlation. In Fig. 1(c), we combine the $D_{x}$- and
$D_{y}$-spin-orbit interactions, $(D_{x},D_{y})=(0.5,0.5)$. As is clear from
this figure, the NLC fluctuations between their partial and maximal values
are substantially fewer than in Figs. 1(a,b). Furthermore, the NLC frequency
is reduced while the lower bounds of the Fisher-Wigner–Yanase and
log-negativity correlations are shifted up. This means that the combined
$D_{x}$- and $D_{y}$-spin-orbit interactions $(D_{x},D_{y})=(0.5,0.5)$
improve the generated partial two-spin-qubit Fisher-Wigner–Yanase and
log-negativity correlations.
Fig. 2 shows that stronger $J_{\alpha}$ spin-spin couplings and
$x,y$-spin-orbit interactions ($(J_{x},J_{y},J_{z})=(1,0.5,1.5)$ in (a),
$(J_{x},J_{y},J_{z})=(5,1,1.5)$ in (b), and strong $D_{x,y}$ spin-orbit
couplings $D_{x}=D_{y}=2$ in (c)) have a high ability to enhance the arising
two-spin-qubit Fisher-Wigner–Yanase and log-negativity NLCs. By comparing
the generated NLCs of Figs. 1(c) and 2(a), we find that the relatively
strong spin-spin couplings $(J_{x},J_{y},J_{z})=(1,0.5,1.5)$, supported by
the weak spin-orbit interactions $(D_{x},D_{y})=(0.5,0.5)$, increase the
amplitudes and frequencies of the Fisher-Wigner–Yanase and log-negativity
oscillations. The larger $J_{\alpha}$ couplings make the NLC oscillations
more regular and more strongly fluctuating, and the times of maximal
Fisher-Wigner–Yanase and log-negativity correlation occur more frequently.
Fig. 2(c) is plotted to assess the capability of increased spin-orbit
interactions, $D_{x}=D_{y}=2$, supported by weak spin-spin interactions
$J_{\alpha}=0.8$, to enhance the generated NLCs when the external magnetic
field is applied with weak parameters $(B_{m},b_{m})=(0.3,0.5)$. By
comparing the qualitative dynamics of the generated Fisher-Wigner–Yanase
and log-negativity correlations of Fig. 1(c) ($D_{x}=D_{y}=0.5$) with those
of Fig. 2(c) ($D_{x}=D_{y}=2$), we deduce that the $D_{x,y}$ spin-orbit
interactions play a major role in enhancing these correlations: their
amplitudes increase and their oscillations fluctuate more strongly between
their extreme values. In addition, strong $x,y$-spin-orbit interactions
potentially strengthen and speed up the generation of the
Fisher-Wigner–Yanase and log-negativity correlations driven by the
$J_{\alpha}$ spin-spin interactions.
Fig. 3 shows the Fisher-Wigner–Yanase and log-negativity nonlocal
correlation dynamics of Fig. 2(a) (where
$(J_{x},J_{y},J_{z})=(1,0.5,1.5)$, $b_{m}=0.5$, and $D_{x}=D_{y}=0.5$) for
different uniformities of the applied EIMF. Fig. 3(a) shows the correlations
generated by the $J_{\alpha}$ spin-spin and $x,y$-spin-orbit interactions
after applying an external magnetic field of small inhomogeneity $b_{m}=0.5$
and large uniformity $B_{m}=2$. In this case, the increase of the EIMF
uniformity delays the growth of the LQFI and LQU as well as of the
log-negativity, and it increases the two-spin state's fluctuations between
different partially and maximally correlated states. Comparing the
correlations of Fig. 2(a) (with $B_{m}=0.3$) and Fig. 3(a) (with $B_{m}=2$)
with those of Fig. 3(b) (with $B_{m}=10$) confirms that increasing the EIMF
uniformity enhances the ability of the strong $J_{\alpha}$ spin-spin
interactions, supported by weak $x,y$-spin-orbit interactions, to create
partially and maximally correlated states with greater stability; the
generated NLCs are, however, quite sensitive to the EIMF uniformity.
In the analysis of Fig. 4, we keep the same parameter values as in Fig. 2(a)
(where $(J_{x},J_{y},J_{z})=(1,0.5,1.5)$, $B_{m}=0.3$, and
$D_{x}=D_{y}=0.5$) and consider different magnetic-field inhomogeneities:
$b_{m}=2$ in (a) and $b_{m}=10$ in (b). In the case of Fig. 4(a), we notice
that the larger EIMF inhomogeneity enhances the efficiency of the generation
of the Fisher-Wigner–Yanase and log-negativity correlations. The EIMF
inhomogeneity increases the two-spin state's fluctuations between different
partially and maximally correlated states, and the times of the maxima
($F(t)=U(t)=N(t)\approx 1$) and of the zero-value minima
($F(t)=U(t)=N(t)\approx 0$) of the generated correlations occur more
frequently. In the case $(J_{x},J_{y},J_{z})=(1,0.5,1.5)$, the increase of
the EIMF inhomogeneity thus plays a major role in enhancing the generated
two-spin-qubit NLCs: the amplitudes and fluctuations of the NLC oscillations
increase (see Fig. 4(b)).
The illustrations of Figs. 5-7 show the nonlocal correlation dynamics of the
LQFI, LQU, and log-negativity in the presence of a nonzero ISSD coupling,
$\gamma=0.05$. By comparing the results of Fig. 1(a) ($\gamma=0$) with those
of Fig. 5(a) ($\gamma=0.05$), we find that the LQFI, LQU, and log-negativity
show decaying oscillatory evolutions. The generated NLCs of the
Heisenberg-XYZ (non-X) states (due to the $J_{\alpha}=0.8$ spin-spin
couplings and the applied magnetic field $(B_{m},b_{m})=(0.3,0.5)$, without
spin-orbit interaction) are weakened and have different amplitudes, which
decrease with increasing ISSD coupling. After a particular time interval,
with nonzero ISSD coupling, the LQFI and LQU present different nonlocal
correlations with different amplitudes but the same behavior. Moreover, the
LQFI and log-negativity are more robust against the ISSD effect than the
LQU.
As shown in Figs. 5(b) and 5(c), increasing the intensity of the
$x,y$-spin-orbit interactions ($D_{k}=0$ ($k=x,y$) in (a), $D_{k}=0.5$ in
(b), and $D_{k}=2$ in (c)) reduces the robustness of the LQFI, LQU, and
log-negativity correlations against the ISSD effect; the NLC amplitudes
decrease significantly as the $x,y$-spin-orbit interactions increase.
Moreover, the LQFI and LQU display sudden changes at different times; this
sudden-change phenomenon has been studied theoretically [62] and
experimentally [63] (see Figs. 5(b,c)). For very strong $x,y$-spin-orbit
interactions, $D_{k}=2$ (see Fig. 5(c)), we observe that the two-spin-qubit
log-negativity drops instantly to zero at a particular time and remains zero
for a long interval (the sudden-death phenomenon of LN entanglement); the
disentangled two-spin states then retain only different stable LQFI and LQU
correlations. We deduce that the NLC decay resulting from the ISSD is
enhanced by increasing the intensity of the $x,y$-spin-orbit interactions.
By comparing Figs. 5(b,c) with Fig. 6, we find that the strong spin-spin
couplings $(J_{x},J_{y},J_{z})=(1,0.5,1.5)$ reduce the ISSD effect and
improve the robustness of the LQFI, LQU, and LN against it. For very strong
$x,y$-spin-orbit interactions, $D_{k}=2$ (see Fig. 6(b)), the sudden death
of LN entanglement does not occur, except instantaneously at the time
$t\approx 0.5\pi$. The generated two-spin states have different stable
partial LQFI, LQU, and LN correlations. In this case, the NLC decay
resulting from the ISSD can be weakened by strengthening the spin-spin
interactions.
In the presence of the ISSD effect, $\gamma=0.05$, Fig. 7 shows the
generated NLCs of Fig. 3(a) (equivalently, the degradation of the NLCs of
Fig. 6(a)) after strengthening the EIMF uniformity,
$(B_{m},b_{m})=(2,0.5)$ in (a), and the EIMF inhomogeneity,
$(B_{m},b_{m})=(0.3,2)$ in (b). From Fig. 7(a), we observe that the large
EIMF uniformity $B_{m}=2$ increases the NLC decay resulting from the ISSD,
and time intervals appear in which the disentangled two-spin states retain
only different stable LQFI and LQU correlations. Moreover, the robustness of
the LQFI and LN against the ISSD effect is reduced by increasing the EIMF
uniformity. The outcomes of Fig. 7(b) illustrate that strengthening the EIMF
inhomogeneity, $b_{m}=2$, also increases the degradation of the NLC
functions. In this case of the parameters $b_{m}=2$,
$(J_{x},J_{y},J_{z})=(1,0.5,1.5)$, and $D_{k}=0.5$, we observe that the
generated NLCs (LQFI, LQU, and entanglement) of Fig. 3(a) degrade due to the
ISSD effect and quickly reach partially stable oscillatory behaviors,
compared with the case of the small value $b_{m}=0.5$ of Fig. 6(a). We find
that the ability of the EIMF inhomogeneity to enhance the ISSD effect is
small compared with that of the EIMF uniformity.
## V Conclusion
In this investigation, the Milburn intrinsic decoherence model and the
Heisenberg XYZ model are used to examine the capability of the spin-spin
interaction and the spin-orbit interaction (described by the $x,y$-DM
interactions) to generate nonlocal correlations (quantified by the LQFI,
LQU, and LN) of general two-spin-qubit (non-X) states under the effects of
the EIMF uniformity and inhomogeneity. In the presence and absence of the
ISSD, the dependence of the generated nonlocal correlations on the
parameters of the spin-spin and spin-orbit interactions, as well as on the
EIMF uniformity and inhomogeneity, is explored. It is found that the
spin-spin Heisenberg XYZ and $x,y$-spin-orbit interactions have a high
capability to raise nonlocal correlations in the presence of a weak external
magnetic field. The spin-orbit interactions play a major role in enhancing
the generated two-spin-qubit Fisher-Wigner–Yanase and log-negativity
correlations; the amplitudes and fluctuations of their oscillations
increase. In the presence of the ISSD, the generated NLCs are weakened and
have different amplitudes, which decrease with increasing ISSD coupling. The
LQFI and log-negativity are more robust against the ISSD effect than the
LQU. The sudden-change phenomenon occurs during the LQU and LQFI dynamics,
whereas sudden death occurs during the log-negativity-entanglement dynamics.
The NLC decay resulting from the ISSD is enhanced by increasing the
intensity of the $x,y$-spin-orbit interactions, and weakened by
strengthening the spin-spin interactions. With a large EIMF inhomogeneity,
the generated NLCs degrade due to the ISSD effect and quickly reach
partially stable oscillatory behaviors. The ability of the EIMF
inhomogeneity to increase the ISSD effect is small compared with that of the
EIMF uniformity.
## References
* [1] R. Horodecki, P. Horodecki, M. Horodecki and K. Horodecki, Rev. Mod. Phys. 81, 865 (2009).
* [2] M. A. Nielsen and I.L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, Cambridge, (2000).
* [3] D. Loss and D. P. DiVincenzo, Phys. Rev. A 57, 120 (1998).
* [4] G. Burkard, D. Loss, and D. P. DiVincenzo, Phys. Rev. B 59, 2070 (1999).
* [5] L. M. K. Vandersypen, R. Hanson, L. H. van Willems Beveren, J. M. Elzerman, J. S. Greidanus, S. De Franceschi, and L. P. Kouwenhoven, Quantum Computing and Quantum Bits in Mesoscopic Systems, Springer, Boston, MA (2004).
* [6] Rong-Long Ma, Ao-Ran Li, Chu Wang, Zhen-Zhen Kong, Wei-Zhu Liao, Ming Ni, Sheng-Kai Zhu, Ning Chu, Chengxian Zhang, Di Liu, Gang Cao, Gui-Lei Wang, Hai-Ou Li, and Guo-Ping Guo, Phys. Rev. Applied 21, 014044 (2024).
* [7] Justyna P. Zwolak and Jacob M. Taylor, Rev. Mod. Phys. 95, 011006 (2023).
* [8] F. Pinheiro, G. M. Bruun, J.-P. Martikainen, and J. Larson, Phys. Rev. Lett. 111, 205302 (2013).
* [9] A. Bermudez, L. Tagliacozzo, G. Sierra and P. Richerme, Phys. Rev. B: Condens. Matter Mater. Phys. 95, 024431 (2017).
* [10] M. Nishiyama, Y. Inada, and Guo-qing Zheng, Phys. Rev. Lett. 98, 047002 (2007).
* [11] W. Yue, Q. Wei, S. Kais, B. Friedrichc and D. Herschbach, Phys. Chem. Chem. Phys. 24, 25270 (2022).
* [12] I. Dzyaloshinski, J. Phys. Chem. Solids 4, 241 (1958).
* [13] T. Moriya, Phys. Rev. 117, 635 (1960).
* [14] T. Moriya, Phys. Rev. Lett. 4, 228 (1960).
* [15] L. Shekhtman, O. Entin-Wohlman, A. Aharony, Phys. Rev. B 47, 174 (1993).
* [16] T. Moriya, Phys. Rev. 120, 91 (1960).
* [17] E. I. Kuznetsova and M. A. Yurischev, Quantum Inf. Process. 12, 3587 (2013).
* [18] M. C. Arnesen, S. Bose, and V. Vedral, Phys. Rev. Lett. 87, 017901 (2001).
* [19] D.-C. Li, Z.-L. Cao, Optics Communications 282, 1226–1230 (2009).
* [20] M. Le Bellac, A short introduction to quantum information and quantum computation, Cambridge University Press, Cambridge (2006).
* [21] J. Eisert, M. Cramer, and M. B. Plenio, Rev. Mod. Phys. 82, 277 (2010).
* [22] W. K. Wootters, Phys. Rev. Lett. 80 2245 (1998).
* [23] G. Vidal, R. F. Werner, Phys. Rev. A 65, 032314 (2002).
* [24] F. Eftekhari, M. K. Tavassoly, A. Behjat, M. J. Faghihi, Optics and Laser Technology 168, 109934 (2024).
* [25] M. S. Kheirabady, M. K. Tavassoly, M. Rafeie and E. Ghasemian, Commun. Theor. Phys. 76, 025101 (2024).
* [26] S. L. Braunstein, H. J. Kimble, Phys. Rev. Lett. 80 869 (1998).
* [27] D. Bouwmeester, J. W. Pan, K. Mattle, M. Eibl, H. Weinfurter, A. Zeilinger, Nature 390, 575 (1997).
* [28] Y. Lei, F. K. Asadi, T. Zhong, A. Kuzmich, C. Simon, and M. Hosseini, Optica, 10, 1511-1528 (2023).
* [29] A. K. Ekert, Phys. Rev. Lett. 67, 661 (1991).
* [30] H. Ollivier and W. H. Zurek, Phys. Rev. Lett. 88, 017901 (2001).
* [31] M.-L. Hu, X. Hu, J. Wang, Y. Peng, Y.-R. Zhang and H. Fan, Phys. Rep. 762, 1 (2018).
* [32] A.-B.A. Mohamed and N. Metwally, Quant. Inf. Process. 18, 79 (2019).
* [33] E. P. Wigner, M. M. Yanase, Proc. Natl. Acad. Sci. 49, 910 (1963).
* [34] D. Girolami, A. M. Souza, V. Giovannetti, T. Tufarelli, J.G. Filgueiras, R. S. Sarthour, D.O. Soares-Pinto, I. S. Oliveira, G. Adesso, Phys. Rev. Lett., 112, 210401 (2014).
* [35] D. Girolami, T. Tufarelli, G. Adesso, Phys. Rev. Lett 110, 240402 (2013).
* [36] S.-X. Wu, J. Zhang, C.-S. Yu and H.-S. Song, Phys. Lett. A 378, 344 (2014).
* [37] H. S. Dhar, M. N. Bera, G. Adesso, Phys. Rev. A 91, 032115 (2015).
* [38] S. Kim, L. Li, A. Kumar, J. Wu, Phys. Rev. A. 97, 032326 (2018).
* [39] C.W. Helstrom, Quantum Detection and Estimation Theory (Academic, New York, 1976).
* [40] M. G. A. Paris, Int. J. Quant. Inf. 7, 125, (2009).
* [41] F. Benabdallah, A. Ur Rahman, S. Haddadi and M. Daoud, Phys. Rev. E 106, 034122 (2022).
* [42] F. Benabdallah, K. El Anouz, A. Ur Rahman, M. Daoud, A. El Allati, and S. Haddadi, Fortschr. Phys. 71, 2300032 (2023).
* [43] S. Elghaayda, Z. Dahbi, & M. Mansour, Opt Quant Electron 54, 419 (2022).
* [44] P-F. Wei, Q. Luo, H.-Q.-C. Wang, S.-J. Xiong, B. Liu & Z. Sun, Front. Phys. 19, 21201 (2024).
* [45] A. V. Fedorova, M. A. Yurischev, Quantum Inf. Process. 21, 92 (2022).
* [46] A.-B. A. Mohamed, A. Farouk, M. F. Yassen, and H. Eleuch, Symmetry 13, 2243 (2021).
* [47] A.-B. A. Mohamed, E. M. Khalil, M. M. Selim, and H. Eleuch, Symmetry 13, 352 (2021).
* [48] G. J. Milburn, Phys. Rev. A 44, 5401 (1991).
* [49] M. Qin, Z.-Z. Ren, Quantum Inf Process 14, 2055–2066 (2015).
* [50] S. Mohammad Hosseiny, J. Seyed-Yazdi, M. Norouzi, and P. Livreri, Sci. Rep. 14, 9607 (2024).
* [51] A.-B. A. Mohamed, F. M. Aldosari, and H. Eleuch, Results in Physics 49, 106470 (2023).
* [52] A. Ait Chlih, N. Habiballah, and M. Nassik, Quantum Inf Process 20, 92 (2021).
* [53] R. Jafari and A. Akbari, Phys. Rev. A 101, 062105 (2020).
* [54] C. Mo, G.-F. Zhang, Results in Physics 21, 103759 (2021).
* [55] V. S. Indrajith, R. Sankaranarayanan, Physica A 582, 126250 (2021).
* [56] A. El Aroui, Y. Khedif, N. Habiballah, M. Nassik, Opt. and Quantum Elec. 54, 694 (2022).
* [57] F. Benabdallah, K. El Anouz, M. Daoud, Eur. Phys. J. Plus 137, 548 (2022).
* [58] N. Zidan, A. Rahman and S. Haddadi, Laser Phys. Lett. 20, 025204, (2023).
* [59] M. A. Yurischev, S. Haddadi, Phys. Lett. A 476, 128868 (2023).
* [60] A.-B. A. Mohamed, Quantum Inf Process. 12 1141 (2013).
* [61] A.-B. A. Mohamed, A. Farouk, M. F. Yassen, and H. Eleuch, Appl. Sci.. 10, 3782 (2020).
* [62] J. Maziero, L. C. Celeri, R. M. Serra, and V. Vedral, Phys. Rev. A 80, 044102 (2009).
* [63] J.-S. Xu, X.-Y. Xu, C.-F. Li, C.-J. Zhang, X.-B. Zou, G.-C. Guo, Nature Commun. 1, 7 (2010).
# Perturbation analysis for Moore-Penrose inverse of closed operators on
Hilbert spaces
FAPENG DU
School of Mathematical and Physical Sciences, Xuzhou Institute of Technology
Xuzhou 221008, Jiangsu Province, P.R. China
E-mail<EMAIL_ADDRESS>YIFENG XUE
Department of mathematics, East China Normal University
Shanghai 200241, P.R. China
###### Abstract
In this paper, we investigate the perturbation of the Moore-Penrose inverse
of closed operators on Hilbert spaces. By virtue of a new inner product
defined on $H$, we give the expression of the Moore-Penrose inverse
$\bar{T}^{\dagger}$ and the upper bounds of $\|\bar{T}^{\dagger}\|$ and
$\|\bar{T}^{\dagger}-T^{\dagger}\|$. The results obtained in this paper
extend and improve many related results in this area.
2000 Mathematics Subject Classification: 15A09, 47A55
Key words: generalized inverse, Moore-Penrose inverse, stable perturbation,
closed operators
## 1 Introduction
An operator $\bar{T}=T+\delta T$ is called a stable perturbation of $T$ if
$R(\bar{T})\cap N(T^{+})=\{0\}$. This notion was introduced by Chen and the
second author in [2, 3]. Later it was generalized to Banach algebras by the
second author in [15] and to Hilbert $C^{*}$–modules by Xu, Wei and Gu in [17].
Using this notion, upper bounds for the generalized inverse or Moore–Penrose
inverse of bounded linear operators have been discussed (see the references). A
classical result on upper bounds is
$\|\bar{T}^{\dagger}\|\leq\frac{\|T^{\dagger}\|}{1-\|T^{\dagger}\|\|\delta T\|},\quad\frac{\|\bar{T}^{\dagger}-T^{\dagger}\|}{\|T^{\dagger}\|}\leq\frac{1+\sqrt{5}}{2}\,\frac{\|T^{\dagger}\|\|\delta T\|}{1-\|T^{\dagger}\|\|\delta T\|}.$
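As a numerical spot-check of these bounds in the bounded (matrix) case (our own illustration, using the standard form of the second bound, whose numerator carries a factor $\|\delta T\|$), note that a small perturbation of a full-column-rank matrix preserves the rank and is therefore automatically a stable perturbation:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((5, 3))          # full column rank (almost surely)
dT = 1e-3 * rng.standard_normal((5, 3))  # small perturbation, rank preserved

Tp, Tbp = np.linalg.pinv(T), np.linalg.pinv(T + dT)
nTp = np.linalg.norm(Tp, 2)              # spectral norm of T^+
ndT = np.linalg.norm(dT, 2)
assert nTp * ndT < 1                     # hypothesis of the bounds

bound_norm = nTp / (1 - nTp * ndT)
bound_diff = (1 + np.sqrt(5)) / 2 * nTp * ndT / (1 - nTp * ndT)

assert np.linalg.norm(Tbp, 2) <= bound_norm
assert np.linalg.norm(Tbp - Tp, 2) / nTp <= bound_diff
```

Both inequalities hold here with a wide margin, since to first order $\|\bar{T}^{\dagger}-T^{\dagger}\|$ scales like $\|T^{\dagger}\|^{2}\|\delta T\|$.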
In recent years, perturbation analysis for generalized inverses of closed
operators has appeared. Results similar to the perturbation analysis of
bounded linear operators have been obtained when $\delta T$ is a $T$-bounded
linear operator (see [9], [10], [13]).
But some questions remain open. What happens under perturbation of a closed
operator $T\in C(X,Y)$ when $\delta T$ is merely a linear operator? What is
the expression of the Moore-Penrose inverse $(T+\delta T)^{\dagger}$, and how
can one estimate the upper bounds of $\|\bar{T}^{\dagger}\|$ and
$\|\bar{T}^{\dagger}-T^{\dagger}\|$ when $X,\,Y$ are Hilbert spaces? The
first question has been solved in [7]. In this paper we address the second
question.
Let $H,K$ be Hilbert spaces, let $T\in C(H,K)$ be defined on $D(T)$, and let
$\delta T\in L(H,K)$ be a linear operator. We introduce a new norm
$\|\cdot\|_{T}$ on $D(T)$ such that $(D(T),\|\cdot\|_{T})$ is a Hilbert space,
and we give the expression of $(T+\delta T)^{\dagger}$ and the upper bounds of
$\|\bar{T}^{\dagger}\|$ and $\|\bar{T}^{\dagger}-T^{\dagger}\|$ when $\delta
T$ is a bounded linear operator on $(D(T),\|\cdot\|_{T})$.
## 2 Preliminaries
Let $X,Y$ be Banach spaces, and let $L(X,Y),\,C(X,Y)$ and $B(X,Y)$ denote the
sets of linear operators, densely-defined closed operators and bounded linear
operators from $X$ to $Y$, respectively. For an operator $T\in L(X,Y)$,
$D(T),\,R(T),\,\ker T$ denote the domain, the range and the null space of
$T$, respectively.
Let $V$ be a closed subspace of $X$. Recall that $V$ is complemented in $X$ if
there is a closed subspace $U$ in $X$ such that $V\cap U=\\{0\\}$ and $X=V+U$.
In this case, we set $X=V\dotplus U$ and $U=V^{c}$.
###### Definition 2.1
[7] Let $T\in C(X,Y)$. If there is $S\in C(Y,X)$ with $D(S)\supset R(T)$ and
$R(S)\subset D(T)$ such that
$TST=T\;\text{on}\ D(T),\quad STS=S\ \text{on}\;D(S),$
then $S$ is called a generalized inverse of $T$, which is also denoted by
$T^{+}$.
Clearly, $P=I-ST$ (resp. $Q=TS$) is an idempotent operator on $D(T)$ (resp.
$D(S)$) with $R(P)=\ker T$ (resp. $R(Q)=R(T)$).
###### Proposition 2.1
Let $T\in C(X,Y)$. Then $T^{+}\in C(Y,X)$ exists if and only if
$X=\ker T\oplus\overline{R(T^{+})},\quad Y=\overline{R(T)}\oplus\ker T^{+}.$
In addition, $T^{+}$ is bounded if $R(T)$ is closed.
Proof. $(\Rightarrow).$ If $S=T^{+}\in C(Y,X)$, then we have
$D(T)=R(S)+\ker T,\quad D(S)=R(T)+\ker S.$
So the assertion follows since $D(T)$ (resp. $D(S)$) is dense in $X$ (resp.
$Y$).
$(\Leftarrow).$ See Proposition 2.2 in [7].
###### Lemma 2.1
[7] Let $T\in C(X,Y)$ such that $T^{+}$ exists. Let $\delta T\colon D(\delta
T)\rightarrow D(T^{+})$ be a linear operator. Assume that $I+\delta
TT^{+}\colon D(T^{+})\rightarrow D(T^{+})$ is bijective. Put $\bar{T}=T+\delta
T$ and $G=T^{+}(I+\delta TT^{+})^{-1}$. Then the following statements are
equivalent:
1. $(1)$
$R(\bar{T})\cap\ker T^{+}=\\{0\\};$
2. $(2)$
$\bar{T}G\bar{T}=\bar{T},\;G\bar{T}G=G$ and $R(\bar{T}^{+})=R(T^{+})$,
$\ker\bar{T}^{+}=\ker T^{+}$.
3. $(3)$
$(I+\delta TT^{+})^{-1}\bar{T}$ maps $\ker T$ into $R(T);$
4. $(4)$
$(I+\delta TT^{+})^{-1}R(\bar{T})=R(T);$
5. $(5)$
$(I+T^{+}\delta T)^{-1}\ker T=\ker\bar{T}$.
Let $H$ and $K$ be Hilbert spaces. For $T\in C(H,K)$, let
$P_{\overline{R(T)}}$ (resp. $P_{\ker T}$) denote the orthogonal projection
from $K$ (resp. $H$) to $\overline{R(T)}$ (resp. $\ker T$).
###### Definition 2.2
Let $T\in C(H,K)$. Then there is a unique $S\in C(K,H)$ with
$D(S)=R(T)+R(T)^{\perp}$ and $R(S)=\ker T^{\perp}\cap D(T)$ such that
$\displaystyle TST$ $\displaystyle=T\ \text{on}\ D(T),\,\ $ $\displaystyle\,\
STS$ $\displaystyle=S\ \text{on}\ D(S),$ $\displaystyle TS$
$\displaystyle=P_{\overline{R(T)}}\ \text{on}\ D(S),\,\ $ $\displaystyle\,\
ST$ $\displaystyle=I-P_{\ker T}\ \text{on}\ D(T).$
The operator $S$ is called the Moore–Penrose inverse of $T$, denoted by
$T^{\dagger}$. Clearly, $\ker T^{\dagger}=R(T)^{\perp}$ and
$R(T^{\dagger})=\ker T^{\perp}\cap D(T)$. In addition, if $R(T)$ is closed,
then $S$ is bounded.
## 3 Perturbation analysis of M-P inverse on Hilbert spaces
In this section, we derive the expression of the Moore-Penrose inverse
$\bar{T}^{\dagger}$ and upper bounds for $\|\bar{T}^{\dagger}\|$ and
$\|\bar{T}^{\dagger}-T^{\dagger}\|$.
For $x\in D(T)$, let
$\|x\|_{G}=\|x\|+\|Tx\|;$
then $T$ is closed if and only if $(D(T),\|\cdot\|_{G})$ is a Banach
space ([11, P191]). Clearly $T$ is a bounded linear operator on
$(D(T),\|\cdot\|_{G})$ since $\|Tx\|\leq\|x\|_{G}$.
Let $(\cdot,\cdot)_{H}$ denote the inner product on $H$. For $x,y\in D(T)$,
let
$(x,y)_{T}=(x,y)_{H}+(Tx,Ty)_{K}.$
It is easy to check that $(\cdot,\cdot)_{T}$ is an inner product on $D(T)$.
Let
$\|x\|^{2}_{T}=(x,x)_{T},$
then
$\|x\|^{2}_{T}=(x,x)_{T}=(x,x)_{H}+(Tx,Tx)_{K}=\|x\|^{2}+\|Tx\|^{2},$
that is,
$\|x\|_{T}=(\|x\|^{2}+\|Tx\|^{2})^{\frac{1}{2}}.$
Since
$\frac{\sqrt{2}}{2}\|x\|_{G}\leq\|x\|_{T}\leq\|x\|_{G},$
the norms $\|\cdot\|_{G}$ and $\|\cdot\|_{T}$ are equivalent. So $T$ is closed
if and only if $(D(T),\|\cdot\|_{T})$ is a Hilbert space. For convenience, we
denote $(D(T),\|\cdot\|_{T})$ by $D_{T}$ in what follows.
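The two-sided estimate above is elementary ($\sqrt{a^{2}+b^{2}}\leq a+b\leq\sqrt{2}\sqrt{a^{2}+b^{2}}$ for $a,b\geq 0$) and can be checked numerically; the sketch below uses a hypothetical scaled forward-difference matrix as a finite-dimensional stand-in for $T$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Hypothetical stand-in for T: a scaled forward-difference operator.
T = (np.eye(n, k=1) - np.eye(n)) * n

for _ in range(100):
    x = rng.standard_normal(n)
    gx = np.linalg.norm(x) + np.linalg.norm(T @ x)                  # ||x||_G
    tx = np.sqrt(np.linalg.norm(x)**2 + np.linalg.norm(T @ x)**2)   # ||x||_T
    assert np.sqrt(2) / 2 * gx <= tx + 1e-12    # lower equivalence constant
    assert tx <= gx + 1e-12                     # upper equivalence constant
```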
Consider the following mapping:
$\displaystyle\tau:D(T)\subset H\rightarrow D_{T}$ $\displaystyle\tau
x=x,\quad\forall x\in D(T)$
Clearly, $\tau$ is defined on $D(T)$ and $R(\tau)=D_{T}$.
Let $\\{x_{n}\\}\subset D(T)$ with $x_{n}\xrightarrow{\|\cdot\|}x$ and $\tau
x_{n}\xrightarrow{\|\cdot\|_{T}}y$; then
$0\leftarrow\|\tau x_{n}-y\|^{2}_{T}=\|x_{n}-y\|^{2}+\|T(x_{n}-y)\|^{2}.$
So $\|x_{n}-y\|\rightarrow 0$. This indicates that $y=\tau x=x\in D(T)$. Hence,
$\tau\in C(H,D_{T})$.
Clearly,
$\displaystyle\tau^{\dagger}$ $\displaystyle=\rho\in B(D_{T},H);$
$\displaystyle\rho x$ $\displaystyle=x,\;x\in D_{T}.$
###### Lemma 3.1
[6] Let $A\in C(L,K),B\in C(H,L)$ with $R(A),R(B),R(AB)$ closed and
$R(B)\subseteq D(A)$. Assume that $AB\in C(H,K)$. Then
$\displaystyle(AB)^{\dagger}$
$\displaystyle=P_{\ker(AB)^{\perp}}(B^{\dagger}(A^{\dagger}ABB^{\dagger})^{\dagger}A^{\dagger})\times$
$\displaystyle\\{A(A^{\dagger}ABB^{\dagger})(A^{\dagger}ABB^{\dagger})^{\dagger}A^{\dagger}+(A^{\dagger})^{*}(A^{\dagger}ABB^{\dagger})(A^{\dagger}ABB^{\dagger})^{\dagger}A^{*}-I\\}^{-1}.$
###### Lemma 3.2
Let $T\in C(H,K)$, then $T^{+}\in B(K,H)$ if and only if $T^{+}\in
B(K,D_{T})$, and in this case
$\|T^{+}\|^{2}\leq\|T^{+}\|^{2}_{T}\leq\|T^{+}\|^{2}+\|TT^{+}\|^{2}.$
Proof.
If $T^{+}\in B(K,H)$, then $TT^{+}\in B(K)$. $\forall x\in K$,
$\|T^{+}x\|^{2}_{T}=\|T^{+}x\|^{2}+\|T(T^{+}x)\|^{2}\leq(\|T^{+}\|^{2}+\|TT^{+}\|^{2})\|x\|^{2}.$
Hence, $T^{+}\in B(K,D_{T})$ and
$\|T^{+}\|^{2}_{T}\leq\|T^{+}\|^{2}+\|TT^{+}\|^{2}$.
Conversely, if $T^{+}\in B(K,D_{T})$, then $\forall x\in K$,
$\|T^{+}x\|^{2}=\|T^{+}x\|^{2}_{T}-\|TT^{+}x\|^{2}\leq\|T^{+}x\|^{2}_{T}.$
Hence, $T^{+}\in B(K,H)$ and $\|T^{+}\|\leq\|T^{+}\|_{T}$.
From the above, we have
$\|T^{+}\|^{2}\leq\|T^{+}\|^{2}_{T}\leq\|T^{+}\|^{2}+\|TT^{+}\|^{2}.$
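A finite-dimensional sanity check of these inequalities (hypothetical random matrix): viewing $T^{+}$ as a map into $(D(T),\|\cdot\|_{T})$, its norm is the spectral norm of the stacked block $[T^{+};\,TT^{+}]$, since $\|T^{+}x\|_{T}^{2}=\|T^{+}x\|^{2}+\|TT^{+}x\|^{2}$.

```python
import numpy as np

rng = np.random.default_rng(3)
T = rng.standard_normal((4, 6))     # hypothetical matrix; here T+ = pinv(T)
T_pinv = np.linalg.pinv(T)

norm = lambda A: np.linalg.norm(A, 2)
# ||T+||_T: norm of T+ as a map into (D(T), ||.||_T), i.e. of [T+; T T+].
n_T = norm(np.vstack([T_pinv, T @ T_pinv]))

assert norm(T_pinv)**2 <= n_T**2 + 1e-12
assert n_T**2 <= norm(T_pinv)**2 + norm(T @ T_pinv)**2 + 1e-12
```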
###### Lemma 3.3
Let $T\in C(H,K)$ with $R(T)$ closed. If $T$ has generalized inverse $T^{+}$,
then $T^{\dagger}\in B(K,H)$ and
$T^{\dagger}=-P_{\ker T^{\perp}}(I+P(I-P-P^{*})^{-1})T^{+}(I-Q-Q^{*})^{-1}.$
Proof. Since $R(T)$ is closed, we have $T^{+}\in B(K,H)$. So $T^{+}\in
B(K,D_{T})$ by Lemma 3.2. Thus, $Q=TT^{+}\in B(K),\;P=I-T^{+}T\in B(D_{T})$
are idempotent operators. Now we consider the Moore-Penrose inverse
$T^{\dagger}$ of $T$ on $D_{T}$. From [4], we have $T^{\dagger}\in B(K,D_{T})$
and
$T^{\dagger}=-(I+P(I-P-P^{*})^{-1})T^{+}(I-Q-Q^{*})^{-1}.$
Since $T^{\dagger}\in B(K,D_{T})$, we have $T^{\dagger}\in B(K,H)$ by Lemma
3.2. Note that $T\in C(H,K)$ is the composition of $T\in B(D_{T},K)$ and
$\tau\in C(H,D_{T})$. Therefore, by Lemma 3.1, we have
$T^{\dagger}=-P_{\ker T^{\perp}}(I+P(I-P-P^{*})^{-1})T^{+}(I-Q-Q^{*})^{-1}.$
###### Theorem 3.1
Let $T\in C(H,K)$ with $T^{\dagger}\in B(K,H)$, and let $\delta T\in B(D_{T},K)$
be such that $\bar{T}=T+\delta T$ is closed and $D(T)\subseteq D(\delta T)$. If
$I+\delta TT^{\dagger}$ is invertible and $R(\bar{T})\cap N(T^{\dagger})=\\{0\\}$, then
$\bar{T}^{\dagger}\in B(K,H)$ and
$\bar{T}^{\dagger}=-P_{\ker\bar{T}^{\perp}}(I+\bar{P}(I-\bar{P}-\bar{P}^{*})^{-1})G(I-\bar{T}G-(\bar{T}G)^{*})^{-1},$
where $G=T^{\dagger}(I+\delta TT^{\dagger})^{-1},\;\bar{P}=I-G\bar{T}$.
Proof. Since $\delta T\in B(D_{T},K)$, there is a constant $M>0$ such that
$\|\delta Tx\|\leq M\|x\|_{T}$ for all $x\in D(T)$. Thus, for all $y\in K$,
$\|\delta TT^{\dagger}y\|^{2}\leq\|\delta
T\|^{2}_{T}\|T^{\dagger}y\|^{2}_{T}\leq\|\delta
T\|^{2}_{T}(\|T^{\dagger}\|^{2}+1)\|y\|^{2}.$
Hence $G=T^{\dagger}(I+\delta TT^{\dagger})^{-1}\in B(K,H)$ is a generalized
inverse of $\bar{T}$ by Lemma 2.1. By Lemma 3.3, $\bar{T}^{\dagger}\in B(K,H)$
and
$\bar{T}^{\dagger}=-P_{\ker\bar{T}^{\perp}}(I+\bar{P}(I-\bar{P}-\bar{P}^{*})^{-1})G(I-\bar{T}G-(\bar{T}G)^{*})^{-1}.$
###### Remark 3.1
If $\delta T$ is T–bounded, i.e., there are constants $a,\,b>0$ such that
$\|\delta Tx\|\leq a\|x\|+b\|Tx\|,\quad\forall\,x\in D(T),$
then $\delta T\in B(D_{T},K)$. Indeed,
$\|\delta Tx\|^{2}\leq(a\|x\|+b\|Tx\|)^{2}\leq
2(\max(a,b))^{2}(\|x\|^{2}+\|Tx\|^{2})=2(\max(a,b))^{2}\|x\|^{2}_{T}.$
Let $M,N$ be two closed subspaces of $H$. Set
$\delta(M,N)=\sup\\{dist(\mu,N):\|\mu\|=1,\mu\in M\\}.$
We call $\hat{\delta}(M,N)=\max\\{\delta(M,N),\delta(N,M)\\}$ the gap between
the subspaces $M$ and $N$.
###### Proposition 3.1
[11]
(1)
$\delta(M,N)=0$ if and only if $M\subset N$;
(2)
$\hat{\delta}(M,N)=0$ if and only if $M=N$;
(3)
$\hat{\delta}(M,N)=\hat{\delta}(N,M)$;
(4)
$0\leq\delta(M,N)\leq 1$, $0\leq\hat{\delta}(M,N)\leq 1$;
(5)
$\hat{\delta}(M,N)=\|P-Q\|$, where $P,Q$ are the orthogonal projections onto
$M$ and $N$, respectively.
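Property (5) can be verified numerically. In the sketch below the subspaces are spanned by hypothetical random vectors, and $\delta(M,N)$ is computed as $\|(I-P_{N})P_{M}\|$, which equals the supremum in the definition:

```python
import numpy as np

rng = np.random.default_rng(2)

def ortho_proj(A):
    """Orthogonal projection onto the column space of A."""
    Q, _ = np.linalg.qr(A)
    return Q @ Q.T

# Hypothetical subspaces M, N of R^5, given via spanning columns.
P_M = ortho_proj(rng.standard_normal((5, 2)))
P_N = ortho_proj(rng.standard_normal((5, 3)))
I = np.eye(5)

def delta(P, Q):
    # delta(M,N) = sup{dist(u,N) : u in M, ||u||=1} = ||(I - P_N) P_M||
    return np.linalg.norm((I - Q) @ P, 2)

gap = max(delta(P_M, P_N), delta(P_N, P_M))
# Property (5): the gap equals the spectral norm of P_M - P_N.
assert abs(gap - np.linalg.norm(P_M - P_N, 2)) < 1e-8
```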
For convenience, we set $\|\delta T\|_{T}=\underset{\|x\|_{T}\leq
1}{\sup}\|\delta Tx\|$ if $\delta T\in B(D_{T},K)$.
###### Lemma 3.4
Under the assumptions of Theorem 3.1, we have
1. 1.
$\delta(R(T),R(\bar{T}))\leq\|\delta
T\|_{T}(\|T^{\dagger}\|^{2}+1)^{\frac{1}{2}}.$
2. 2.
$\delta(\ker T,\ker(\bar{T}))\leq\|\bar{T}^{\dagger}\|\|\delta T\|_{T}.$
Proof. Note that $\|\delta Tx\|\leq\|\delta T\|_{T}\|x\|_{T}$ and
$\bar{T}^{\dagger}\in B(K,H)$.
$(1).$ Let $u\in R(T)$ with $\|u\|=1$; then there is an $x\in D(T)$ such that
$u=Tx$.
$\displaystyle dist(u,R(\bar{T}))$
$\displaystyle\leq\|u-\bar{T}(T^{\dagger}Tx)\|=\|\delta TT^{\dagger}Tx\|$
$\displaystyle\leq\|\delta T\|_{T}\|T^{\dagger}Tx\|_{T}$
$\displaystyle\leq\|\delta T\|_{T}(\|T^{\dagger}\|^{2}+1)^{\frac{1}{2}}\|u\|.$
Hence, $\delta(R(T),R(\bar{T}))\leq\|\delta
T\|_{T}(\|T^{\dagger}\|^{2}+1)^{\frac{1}{2}}$.
$(2).$ Let $x\in\ker T$ with $\|x\|=1$; then $Tx=0$ and
$\displaystyle dist(x,\ker(\bar{T}))$
$\displaystyle\leq\|x-(I-\bar{T}^{\dagger}\bar{T})x\|=\|\bar{T}^{\dagger}\delta
Tx\|$ $\displaystyle\leq\|\bar{T}^{\dagger}\|\|\delta Tx\|$
$\displaystyle\leq\|\bar{T}^{\dagger}\|\|\delta T\|_{T}\|x\|.$
Hence, $\delta(\ker T,\ker(\bar{T}))\leq\|\bar{T}^{\dagger}\|\|\delta
T\|_{T}$.
###### Lemma 3.5
Let $M$ and $N$ be closed subspaces of $H$. Suppose that $M\cap
N^{\perp}=\\{0\\}$. Then
$\delta(M,N)=\|P_{M}-P_{N}\|.$
Proof. If $\delta(M,N)=1$, then $1=\hat{\delta}(M,N)=\|P_{M}-P_{N}\|$; thus
$\delta(M,N)=\|P_{M}-P_{N}\|$.
Assume that $\delta(M,N)=\delta<1$, then $\forall x\in M$,
$\|(I-P_{N})P_{M}x\|=dist(P_{M}x,N)\leq\|P_{M}x\|\delta(M,N)\leq\delta\|x\|.$
So by Lemma 3 of [14], we know $\delta(M,N)=\|P_{M}-P_{N}\|$.
###### Lemma 3.6
Under the assumptions of Theorem 3.1, we have
$\|TT^{\dagger}-\bar{T}\bar{T}^{\dagger}\|=\delta(R(T),R(\bar{T})).$
Proof. Since $T^{\dagger}\in B(K,H)$, we have
$\ker(T^{\dagger})=R(T)^{\perp}$. As
$\ker(T^{\dagger})=\ker(\bar{T}^{\dagger})$, this gives
$R(T)\cap\ker(\bar{T}^{\dagger})=\\{0\\}$. By Lemma 3.5, we obtain
$\|TT^{\dagger}-\bar{T}\bar{T}^{\dagger}\|=\delta(R(T),R(\bar{T})).$
###### Theorem 3.2
Under the assumptions of Theorem 3.1, we have
(1)
$\|\bar{T}^{\dagger}\|\leq\|(1+\delta
TT^{\dagger})^{-1}\|(\|T^{\dagger}\|^{2}+1)^{\frac{1}{2}}$.
(2)
$\|\bar{T}^{\dagger}-T^{\dagger}\|\leq\dfrac{1+\sqrt{5}}{2}\|\bar{T}^{\dagger}\|\|\delta
T\|_{T}(\|T^{\dagger}\|^{2}+1)^{\frac{1}{2}}$.
Proof. $(1).$ Since $\delta T\in B(D_{T},K)$, we have
$\|\delta TT^{\dagger}x\|\leq\|\delta
T\|_{T}\|T^{\dagger}x\|_{T}\leq\|\delta
T\|_{T}(\|T^{\dagger}\|^{2}+1)^{\frac{1}{2}}\|x\|.$
Hence, $\delta TT^{\dagger}\in B(K)$. By Lemma 2.1, we have
$G=T^{\dagger}(I+\delta TT^{\dagger})^{-1}\in B(K,H),\;P=I-G\bar{T}\in
B(D_{T}),\;Q=\bar{T}G\in B(K)$
and $\|(I+P(I-P-P^{*})^{-1})x\|_{T}\leq\|x\|_{T},\;x\in D(T).$ So,
$\displaystyle\|\bar{T}^{\dagger}x\|^{2}$
$\displaystyle\leq\|\bar{T}^{\dagger}x\|^{2}_{T}=\|-(I+P(I-P-P^{*})^{-1})G(I-Q-Q^{*})^{-1}x\|^{2}_{T}$
$\displaystyle\leq\|G(I-Q-Q^{*})^{-1}x\|^{2}_{T}$
$\displaystyle\leq\|T^{\dagger}(1+\delta
TT^{\dagger})^{-1}(I-Q-Q^{*})^{-1}x\|^{2}+\|TT^{\dagger}(1+\delta
TT^{\dagger})^{-1}(I-Q-Q^{*})^{-1}x\|^{2}$
$\displaystyle\leq(\|T^{\dagger}\|^{2}+1)\|(1+\delta
TT^{\dagger})^{-1}\|^{2}\|x\|^{2}.$
$(2).$ Since $D(T)$ is dense in $H$, we can extend $I-T^{\dagger}T$ to the
whole space $H$ such that $P_{\ker T}|_{D(T)}=I-T^{\dagger}T$ and $P_{\ker T}$
is the orthogonal projection from $H$ onto $\ker T$. Similarly, we extend
$I-\bar{T}^{\dagger}\bar{T}$ to the whole space $H$ such that
$P_{\ker(\bar{T})}|_{D(\bar{T})}=I-\bar{T}^{\dagger}\bar{T}$ and
$P_{\ker(\bar{T})}$ is the orthogonal projection from $H$ onto $\ker(\bar{T})$.
Clearly, $\forall x\in D(T)$,
$(T^{\dagger}T-\bar{T}^{\dagger}\bar{T})x=(P_{\ker(\bar{T})}-P_{\ker T})x.$
Noting that $\ker T\cap\ker(\bar{T})^{\perp}=\\{0\\}$, we have $\|P_{\ker
T}-P_{\ker(\bar{T})}\|=\delta(\ker T,\ker(\bar{T}))$ by Lemma 3.5.
$\forall y\in K$ and $\|y\|=1$,
$\displaystyle\bar{T}^{\dagger}y-T^{\dagger}y$
$\displaystyle=-\bar{T}^{\dagger}\delta
TT^{\dagger}TT^{\dagger}y+\bar{T}^{\dagger}(\bar{T}\bar{T}^{\dagger}-TT^{\dagger})(I-TT^{\dagger})y$
$\displaystyle+(P_{\ker(\bar{T})}-P_{\ker T})T^{\dagger}y.$
Using the proof of Proposition 7 in [14], we have
$\|\bar{T}^{\dagger}y-T^{\dagger}y\|^{2}_{T}\leq\frac{3+\sqrt{5}}{2}\|\bar{T}^{\dagger}\|^{2}\|\delta
T\|^{2}_{T}(\|T^{\dagger}\|^{2}+1).$
Since
$\|\bar{T}^{\dagger}y-T^{\dagger}y\|\leq\|\bar{T}^{\dagger}y-T^{\dagger}y\|_{T}$,
we have
$\|\bar{T}^{\dagger}-T^{\dagger}\|\leq\frac{1+\sqrt{5}}{2}\|\bar{T}^{\dagger}\|\|\delta
T\|_{T}(\|T^{\dagger}\|^{2}+1)^{\frac{1}{2}}.$
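Both bounds of Theorem 3.2 can be illustrated in finite dimensions, where $D_{T}=H$ and, since $\|x\|_{T}^{2}=\langle x,(I+T^{*}T)x\rangle$, one has $\|\delta T\|_{T}=\|\delta T\,(I+T^{*}T)^{-1/2}\|$. The matrices below are hypothetical:

```python
import numpy as np

T = np.diag([3.0, 1.0, 0.0])            # hypothetical rank-deficient operator
dT = 2e-3 * np.diag([1.0, -1.0, 0.0])   # stable perturbation of T
T_bar = T + dT

norm = lambda A: np.linalg.norm(A, 2)
I = np.eye(3)
T_pinv = np.linalg.pinv(T)
T_bar_pinv = np.linalg.pinv(T_bar)

# ||dT||_T = ||dT (I + T*T)^(-1/2)||, computed via an eigendecomposition.
w, V = np.linalg.eigh(I + T.T @ T)
dT_T = norm(dT @ (V @ np.diag(w**-0.5) @ V.T))

inv_factor = norm(np.linalg.inv(I + dT @ T_pinv))
cap = np.sqrt(norm(T_pinv)**2 + 1)

bound1 = inv_factor * cap                                      # bound (1)
bound2 = (1 + np.sqrt(5)) / 2 * norm(T_bar_pinv) * dT_T * cap  # bound (2)

assert norm(T_bar_pinv) <= bound1
assert norm(T_bar_pinv - T_pinv) <= bound2
```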
## 4 Perturbation analysis for $Tx=b$ in Hilbert spaces
In this section, we consider the perturbation of the least squares solutions
of the following two equations:
(1)
$Tx=b,$
(2)
$\bar{T}\bar{x}=\bar{b},$ $(\bar{b}=b+\delta b)$
As is well known, the solutions of
$\|Tx-b\|=\min_{z\in D(T)}\|Tz-b\|$
are $x=T^{\dagger}b+(I-T^{\dagger}T)z$ for $z\in D(T)$; the set of these
solutions is denoted by $S(T,b)$, i.e.,
$S(T,b)=\\{x:x=T^{\dagger}b+(I-T^{\dagger}T)z,\ z\in D(T)\\}.$
Similarly,
$S(\bar{T},\bar{b})=\\{\bar{x}:\bar{x}=\bar{T}^{\dagger}\bar{b}+(I-\bar{T}^{\dagger}\bar{T})z,\ z\in D(\bar{T})\\}.$
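In the matrix case, $S(T,b)$ can be generated with NumPy's `pinv`; the data below are hypothetical, with $T$ rank-deficient so that $I-T^{\dagger}T\neq 0$:

```python
import numpy as np

T = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])        # rank 1, so ker T is nontrivial
b = np.array([1.0, 2.0])

T_pinv = np.linalg.pinv(T)
x_min = T_pinv @ b                     # minimum-norm least-squares solution
P_ker = np.eye(3) - T_pinv @ T        # projector onto ker T

z = np.array([1.0, -1.0, 0.5])
x = x_min + P_ker @ z                 # another element of S(T, b)

# Every element of S(T, b) attains the same (minimal) residual.
assert np.isclose(np.linalg.norm(T @ x - b), np.linalg.norm(T @ x_min - b))
assert np.linalg.norm(x - x_min) > 1e-8   # but the solutions differ
```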
###### Theorem 4.1
Under the assumptions of Theorem 3.1, we have
$(1)$ For any solution $x=T^{\dagger}b+(I-T^{\dagger}T)z$ in $S(T,b)$, there
exists $\bar{x}\in S(\bar{T},\bar{b})$ such that
$\|\bar{x}-x\|\leq\|\bar{T}^{\dagger}\|\|(b-Tx)+(\delta b-\delta Tx)\|.$
$(2)$ For any solution
$\bar{x}=\bar{T}^{\dagger}\bar{b}+(I-\bar{T}^{\dagger}\bar{T})z$ in
$S(\bar{T},\bar{b})$, there exists $x\in S(T,b)$ such that
$\|\bar{x}-x\|\leq\|T^{\dagger}\|\|(\bar{b}-\bar{T}\bar{x})-(\delta b-\delta
T\bar{x})\|.$
Proof. $(1)$ Take
$\bar{x}=\bar{T}^{\dagger}\bar{b}+(I-\bar{T}^{\dagger}\bar{T})(T^{\dagger}b+(I-T^{\dagger}T)z).$
Then
$\displaystyle\|\bar{x}-x\|$
$\displaystyle=\|\bar{T}^{\dagger}\bar{b}+(I-\bar{T}^{\dagger}\bar{T})(T^{\dagger}b+(I-T^{\dagger}T)z)-(T^{\dagger}b+(I-T^{\dagger}T)z)\|$
$\displaystyle=\|\bar{T}^{\dagger}\bar{b}-(\bar{T}^{\dagger}\bar{T})(T^{\dagger}b+(I-T^{\dagger}T)z)\|$
$\displaystyle=\|\bar{T}^{\dagger}\delta
b+(I-\bar{T}^{\dagger}\bar{T})T^{\dagger}b-\bar{T}^{\dagger}\bar{T}(I-T^{\dagger}T)z+(\bar{T}^{\dagger}-T^{\dagger})b\|$
$\displaystyle=\|\bar{T}^{\dagger}\delta
b+(I-\bar{T}^{\dagger}\bar{T})T^{\dagger}b-\bar{T}^{\dagger}\bar{T}(I-T^{\dagger}T)z$
$\displaystyle+(-\bar{T}^{\dagger}\delta
TT^{\dagger}b+\bar{T}^{\dagger}(I-TT^{\dagger})b-(I-\bar{T}^{\dagger}\bar{T})T^{\dagger}b)\|$
$\displaystyle=\|\bar{T}^{\dagger}(\bar{b}-\bar{T}x)\|$
$\displaystyle\leq\|\bar{T}^{\dagger}\|\|(b-Tx)+(\delta b-\delta Tx)\|.$
$(2)$ Take
$x=T^{\dagger}b+(I-T^{\dagger}T)(\bar{T}^{\dagger}\bar{b}+(I-\bar{T}^{\dagger}\bar{T})z).$
By a computation similar to $(1)$, we have
$\|\bar{x}-x\|\leq\|T^{\dagger}\|\|(\bar{b}-\bar{T}\bar{x})-(\delta b-\delta
T\bar{x})\|.$
###### Theorem 4.2
Under the assumptions of Theorem 3.1, we have
$\displaystyle\|\bar{T}^{\dagger}\bar{b}-T^{\dagger}b\|$
$\displaystyle\leq\|(1+\delta
TT^{\dagger})^{-1}\|(\|T^{\dagger}\|^{2}+1)^{\frac{1}{2}}\|\delta b\|$
$\displaystyle+\frac{1+\sqrt{5}}{2}\|(1+\delta TT^{\dagger})^{-1}\|\|\delta
T\|_{T}(1+\|T^{\dagger}\|^{2})^{\frac{1}{2}}\|b\|.$
Proof. By Theorem 3.2, we have
$\displaystyle\|\bar{T}^{\dagger}\bar{b}-T^{\dagger}b\|$
$\displaystyle=\|\bar{T}^{\dagger}\delta b+(\bar{T}^{\dagger}-T^{\dagger})b\|$
$\displaystyle\leq\|\bar{T}^{\dagger}\delta
b\|+\|\bar{T}^{\dagger}-T^{\dagger}\|\|b\|$ $\displaystyle\leq\|(1+\delta
TT^{\dagger})^{-1}\|(\|T^{\dagger}\|^{2}+1)^{\frac{1}{2}}\|\delta b\|$
$\displaystyle+\frac{1+\sqrt{5}}{2}\|(1+\delta TT^{\dagger})^{-1}\|\|\delta
T\|_{T}(1+\|T^{\dagger}\|^{2})^{\frac{1}{2}}\|b\|.$
## 5 Conclusion
In this paper, we extend the perturbation analysis of the Moore-Penrose
inverse from bounded linear operators to closed operators. By virtue of a new
inner product, we give the expression of the Moore-Penrose inverse
$\bar{T}^{\dagger}$ and the upper bounds of $\|\bar{T}^{\dagger}\|$ and
$\|\bar{T}^{\dagger}-T^{\dagger}\|$. As an application, we study the
perturbation of the least squares solution. These results enrich and improve
the perturbation theory of the Moore-Penrose inverse described in [16].
## References
* [1] A. Ben-Israel, T.N.E. Greville, Generalized inverses: theory and applications. first edition., Wiley, New York, 1974 (second ed., Springer-Verlag, New York, 2003).
* [2] G. Chen, Y. Xue, Perturbation analysis for the operator equation $Tx=b$ in Banach spaces. J. Math. Anal. Appl., 212(1997), 107-125.
* [3] G. Chen, Y. Wei, Y. Xue, Perturbation analysis of the least square solution in Hilbert spaces. Linear Algebra Appl. 244(1996), 69-80.
* [4] G. Chen, Y. Xue, The expression of generalized inverse of the perturbed operators under type I perturbation in Hilbert spaces, Linear Algebra Appl. 285(1998), 1-6.
* [5] J. Ding, On the expression of generalized inverses of perturbed bounded linear operators. Missouri J. Math. Sci. 15(2003), 40-47.
* [6] F. Du, Y. Xue, The reverse order law for generalized inverse of closed operators. Chinese Quart. J. Math. (2013).
* [7] F. Du, Y. Xue, The characterizations of the stable perturbation of a closed operator by a linear operator in Banach spaces. Linear Algebra Appl. 438(2013), 2046-2053.
* [8] C.W. Groetsch, Representations of generalized inverse. J. Math. Anal. Appl., 49(1975), 154-157.
* [9] Q. Huang, W. Zhai, Perturbation and expressions for generalized inverses in Banach spaces and Moore-Penrose inverses in Hilbert spaces of closed operators. Linear Algebra Appl. 435(2011), 117-127.
* [10] Q. Huang, On perturbations for oblique projection generalized inverses of closed linear operators in Banach spaces. Linear Algebra Appl. 434(12)(2011), 2368-2474.
* [11] T. Kato, Perturbation Theory for Linear Operators. Springer-Verlag, New York, 1984.
* [12] M.Z. Nashed, Generalized inverse and Applications. Academic Press, New York, 1976.
* [13] Y. Wang, H. Zhang, Perturbation analysis for oblique projection generalized inverses of closed operators in Banach spaces. Linear Algebra Appl. 426(2007), 1-11.
* [14] Y. Xue, G. Chen, Some equivalent conditions of stable perturbation of operators in Hilbert spaces. Appl. Math. Comput. 147(2004), 65-772.
* [15] Y. Xue, Stable perturbation in Banach algebras. J. Aust. Math. Soc. 83(2007), 1-14.
* [16] Y. Xue, Stable Perturbations of Operators and Related Topics, World Scientific, 2012.
* [17] Q. Xu, W. Wei, Y. Gu, Sharp norm–estimation for Moore–Penrose inverses of stable perturbations of Hilbert $C^{*}$–module operators. SIAM J. Numer. Anal., 47(6)(2010), 4735-4758.
|
# Unraveling current-induced dissociation mechanisms in single-molecule
junctions
Yaling Ke Institute of Physics, Albert-Ludwig University Freiburg, Hermann-
Herder-Strasse 3, 79104 Freiburg, Germany André Erpenbeck School of
Chemistry, The Raymond and Beverley Sackler Center for Computational Molecular
and Materials Science, Tel Aviv University, Tel Aviv 6997801, Israel Uri
Peskin Schulich Faculty of Chemistry, Technion-Israel Institute of
Technology, Haifa 32000, Israel Michael Thoss Institute of Physics, Albert-
Ludwig University Freiburg, Hermann-Herder-Strasse 3, 79104 Freiburg, Germany
EUCOR Centre for Quantum Science and Quantum Computing, Albert-Ludwig
University Freiburg, Hermann-Herder-Strasse 3, 79104 Freiburg, Germany
###### Abstract
Understanding current-induced bond rupture in single-molecule junctions is
both of fundamental interest and a prerequisite for the design of molecular
junctions, which are stable at higher bias voltages. In this work, we use a
fully quantum mechanical method based on the hierarchical quantum master
equation approach to analyze the dissociation mechanisms in molecular
junctions. Considering a wide range of transport regimes, from off-resonant to
resonant, non-adiabatic to adiabatic transport, and weak to strong vibronic
coupling, our systematic study identifies three dissociation mechanisms. In
the weak and intermediate vibronic coupling regime, the dominant dissociation
mechanism is stepwise vibrational ladder climbing. For strong vibronic
coupling, dissociation is induced via multi-quantum vibrational excitations
triggered either by a single electronic transition at high bias voltages or by
multiple electronic transitions at low biases. Furthermore, the influence of
vibrational relaxation on the dissociation dynamics is analyzed and strategies
for improving the stability of molecular junctions are discussed.
## I Introduction
Current-induced rupture of chemical bonds is a major concern when single
molecules are being considered as electronic components in nano-scale devices.
The most widely studied architecture in this context is a molecular junction,
where a single molecule is bound to metal or semiconductor electrodes.
Molecular junctions represent a unique architecture to investigate molecules
in a distinct nonequilibrium situation and, in a broader context, to study
basic mechanisms of charge and energy transport in a many-body quantum system
at the nanoscale. Cuevas and Scheer (2010); Galperin, Ratner, and Nitzan
(2007); Bergfield and Ratner (2013); Aradhya and Venkataraman (2013); Bâldea
(2016); Su _et al._ (2016); Thoss and Evers (2018); Evers _et al._ (2020)
An important mechanism of bond rupture in molecular junctions is the coupling
of the transport electrons to the vibrations of the molecule, which gives rise
to current-induced vibrational excitation, i.e. local heating. While the level
of current-induced vibrational excitation is typically small for low voltages
in the off-resonant transport regime, it can be substantial for higher
voltages, in particular in the resonant transport regime. In that regime,
current-induced heating can cause mechanical instability of the junction and
may eventually result in bond rupture.Persson and Avouris (1997); Kim, Komeda,
and Kawai (2002); Koch _et al._ (2006); Huang _et al._ (2006, 2007); Schulze
_et al._ (2008); Ioffe _et al._ (2008); Sabater, Untiedt, and van Ruitenbeek
(2015); Li _et al._ (2015, 2016); Capozzi _et al._ (2016); Schinabeck
(2018); Gelbwaser-Klimovsky _et al._ (2018); Bi _et al._ (2020); Peiris _et
al._ (2020)
The process of current-induced bond rupture has recently been observed
experimentally in molecular junctions.Sabater, Untiedt, and van Ruitenbeek
(2015); Li _et al._ (2015, 2016); Capozzi _et al._ (2016) It is also known
from scanning tunneling microscopy studies of molecules at surfaces.Ho (2002);
Stipe _et al._ (1997); Huang _et al._ (2013) The understanding of the
underlying mechanisms of bond rupture and its implication for the stability in
molecular junctions is not only of fundamental interest, but is also crucial
for the design of molecular junctions, which are stable at higher
voltages.Gelbwaser-Klimovsky _et al._ (2018); Härtle _et al._ (2018);
Kuperman, Nagar, and Peskin (2020) Molecular junctions that are stable at
higher bias voltages are particularly relevant for possible nanoelectronic
applications. Furthermore, the understanding of current-induced bond rupture
is also crucial for current-induced chemistry and nano-scale chemical
catalysis.Li and Somorjai (2010); Kolasinski (2012); Seideman (2016)
The theoretical framework to study current-induced vibrational excitation in
molecular junctions is well established for models, which treat the
vibrational modes within the harmonic approximation. Galperin, Nitzan, and
Ratner (2006); Ryndyk, Hartung, and Cuniberti (2006); Benesch _et al._
(2008); Härtle and Thoss (2011); Schinabeck, Härtle, and Thoss (2018);
Erpenbeck _et al._ (2016) While such models have been used to investigate the
mechanical stability of molecular junctions,Härtle and Thoss (2011); Härtle
and Kulkarni (2015); Schinabeck, Härtle, and Thoss (2018) the study of bond
rupture requires going beyond the harmonic approximation and using nuclear
potentials that can describe the dissociation process explicitly. This has
been achieved within a classical treatment of the nuclei based on the
Ehrenfest approachDzhioev and Kosov (2011); Dzhioev, Kosov, and Von Oppen
(2013); Pozner, Lifshitz, and Peskin (2014); Erpenbeck _et al._ (2018a) or
using perturbative theories.Koch _et al._ (2006); Foti and Vázquez (2018)
Similarly, the study of mechanical instabilities of molecular junctions under
the influence of non-conservative current-induced forces has so far been based
on classical treatments of the nuclei and/or used the harmonic approximation
for the description of the nuclear potentials.Lu, Brandbyge, and Hedegård
(2010); Lü, Hedegård, and Brandbyge (2011); Lü _et al._ (2012); Preston,
Kershaw, and Kosov (2020); Preston, Gelin, and Kosov (2021) It is also noted
that the theoretical framework to study the related process of dissociative
electron attachment in the gas phase is well established, Domcke (1991);
Gertitschke and Domcke (1993); Čížek, Horáček, and Domcke (1999); Gallup and
Fabrikant (2011) but this problem is conceptually simpler because only a
single electron that is scattered from the molecule has to be considered.
Moreover, the processes of light-induced dissociation or desorption of
molecules at surfaces have also been studied in great detail
theoretically.Brandbyge _et al._ (1995); Saalfrank (2006); Ho (2002); Kim
_et al._ (2015); Frederiksen, Paulsson, and Ueba (2014) In that scenario,
however, typically the system is only temporarily driven out of equilibrium by
a laser pulse, while in molecular junction transport, the electrical current
driven through the molecule results in a nonequilibrium steady state.
Recently, we have developed a fully quantum mechanical theoretical framework
to study current-induced bond rupture in molecular junctions, which takes
proper account of the many-electron, nonequilibrium nature of the transport
process and is not limited to weak coupling.Erpenbeck _et al._ (2018a, 2020)
The method combines the hierarchical quantum master equation (HQME) approach
with a discrete variable representation (DVR) of the nuclear degrees of
freedom to facilitate the description of general potential energy surfaces
(PESs). The application to a model where the charged state is characterized by
a repulsive potential showed that the current-induced population of anti-
bonding states is another important mechanism, which can lead to fast bond
rupture in molecular junctions and can dominate over current-induced
heating.Erpenbeck _et al._ (2020)
In the present work, we extend our previous study and consider a scenario,
where the potential energy surface of the charged molecule also supports bound
states. In this model, both bond-rupture via direct dissociation in the
continuum states of the charged state potential energy surface or via current-
induced heating are possible. The model also accounts for additional
dissociation channels via Feshbach resonances. Furthermore, it incorporates
vibrational relaxation processes induced by coupling of the dissociative
reaction mode to other inactive modes (intramolecular vibrational relaxation),
the phonons of the leads or a possible solution environment. The detailed
analysis of this extended model provides a rather comprehensive understanding
of the mechanisms of current-induced bond rupture in molecular junctions.
The remainder of the paper is organized as follows: In Sec. II, we introduce
the model and the HQME approach used to investigate current-induced reaction
dynamics. The results are presented and discussed in detail in Sec. III,
considering a broad range of different regimes and processes, comprising off-
resonant to resonant transport, weak to strong vibronic and molecule-lead
coupling, as well as vibrational relaxation due to coupling to a phonon bath.
Furthermore, time-dependent current-voltage characteristics are presented,
implications for experiments are addressed, and strategies for improving the
stability of molecular junctions are discussed. We close with a summary in
Sec. IV.
## II Theory
### II.1 Model
Figure 1: Visualization of the molecular junction and the system energy
landscape. The molecular junction consists of a backbone coupled to electrodes
and a side-group. In case of dissociation, the side group detaches from the
backbone. The neutral [charged] state of the molecule is characterized by a
Morse potential $V_{g}(Q)$ [$V_{e}(Q)$] with equilibrium position $Q_{g}$
[$Q_{e}=Q_{g}+\Delta Q$] and fundamental oscillation frequency $\Omega_{0}$.
$E_{\rm C}$ denotes the charging energy, $E_{\rm D}$ the dissociation energy,
and $g_{\rm L/R}(Q)$ the coordinate-dependent molecule-lead coupling. The
energy levels of the bound states in the potentials $V_{g}(Q)$ $[V_{e}(Q)]$
are indicated by horizontal blue [red] lines. The gray dotted line and shaded
area indicate the complex absorbing potential $W(Q)$. The probability absorbed
by the complex absorbing potential is mapped to the representative auxiliary
grid point $Q_{\infty}$. The coupling to a phonon bath is depicted by black
wavy lines within the orange shaded round area.
We consider the molecular junction depicted in Fig. 1, where the molecule,
consisting of a backbone and a side group, is in contact with two macroscopic
electrodes and a phonon bath, which is described by the Hamiltonian
$H=H_{\rm mol}+H_{\rm leads}+H_{\rm mol-leads}+H_{\rm ph}+H_{\rm mol-ph}.$ (1)
For the molecule, a minimal model is adopted comprising a single vibrational
reaction mode, which describes the bonding of the side group, and a spinless
electronic level. The molecular Hamiltonian takes the form
$H_{\rm mol}=\frac{P^{2}}{2M}+V_{g}(Q)dd^{\dagger}+V_{e}(Q)d^{\dagger}d,$ (2)
where $Q$ and $P$ are the coordinate and momentum of the reaction mode,
respectively, and $M$ denotes the corresponding reduced nuclear mass. The
operator $d^{\dagger}$ creates an electron in the molecular electronic level,
and $d$ is its hermitian conjugate. $V_{g}(Q)$ and $V_{e}(Q)$ describe the
potential energy surfaces of the electronic ground state of the neutral and
charged molecule, respectively.
Specifically, the neutral state is described by a Morse potential,
$V_{g}(Q)=E_{\rm D}(1-e^{-a(Q-Q_{g})})^{2},$ (3)
where $Q_{g}$ denotes the equilibrium position, $E_{\rm D}$ the dissociation
energy, and $a$ the width parameter of the Morse potential
with $a_{0}$ being the Bohr radius. In the calculations reported below, the
parameters are chosen as $Q_{g}=1.78\textrm{ \AA}$, $a=1.028a_{0}^{-1}$,
$E_{\rm D}=2.38$ eV, and $M=1$ amu (atomic mass unit). The corresponding
fundamental frequency for small oscillations at the bottom of the potential
well is $\hbar\Omega_{0}=a\sqrt{2E_{\rm D}/M}=274$ meV, which is within the
typical range of molecular vibrations. For such a high frequency, nuclear
quantum effects are expected to be non-negligible even at room temperature. There
are in total 16 bound vibrational states. The choice of these parameters is
motivated by the process of H$_2$ desorption from metal surfaces.Halstead and
Holloway (1990) However, it is emphasized that the goal of this work is to
study the basic mechanisms of current-induced bond rupture and it does not
attempt to describe a specific molecule.
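As a minimal numerical check of the model parameters, the following sketch (assuming standard CODATA values for the physical constants) evaluates the Morse potential of Eq. (3) and reproduces the quoted fundamental frequency:

```python
import numpy as np

# Sketch: reproduce the Morse-potential parameters quoted in the text.
# The model values (Q_g, a, E_D, M) are taken from above; the constants
# are standard CODATA values.
HBAR = 1.054571817e-34   # J s
EV = 1.602176634e-19     # J
AMU = 1.66053906660e-27  # kg
BOHR = 0.529177210903e-10  # Bohr radius in metres

E_D = 2.38 * EV              # dissociation energy
M = 1.0 * AMU                # reduced mass of the reaction mode
a = 1.028 / BOHR             # width parameter in 1/m

def morse(Q, Qg=1.78e-10, ED=E_D, aa=a):
    """Neutral-state Morse potential V_g(Q) of Eq. (3); Q in metres."""
    return ED * (1.0 - np.exp(-aa * (Q - Qg)))**2

# Fundamental frequency at the well bottom: hbar*Omega_0 = a*sqrt(2 E_D / M)
hbar_Omega0_meV = HBAR * a * np.sqrt(2.0 * E_D / M) / EV * 1e3
print(f"hbar*Omega_0 = {hbar_Omega0_meV:.0f} meV")  # ~274 meV, as quoted
```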
The potential energy surface of the charged state is assumed to be of the same
form as in the neutral state but with a shifted equilibrium position
$Q_{e}=Q_{g}+\Delta Q$,
$V_{e}(Q)=E_{\rm D}(1-e^{-a(Q-Q_{e})})^{2}+E_{\rm C}.$ (4)
Here, $E_{\rm C}$ denotes the charging energy, which is chosen as $E_{\rm
C}=1$ eV in the calculations reported below. The displacement $\Delta Q$
determines the electronic-vibrational (vibronic) coupling strength. It should
be emphasized that, in contrast to some other models of vibrationally coupled
transport in molecular junctions based on the harmonic approximation,Galperin,
Nitzan, and Ratner (2006); Härtle and Thoss (2011); Schinabeck _et al._
(2016) the charging energy in the model employed here is independent of
$\Delta Q$. The potential energy surfaces are illustrated schematically in
Fig. 1.
The molecule is coupled to two leads ($\alpha=L/R$), which serve as electron
reservoirs and are modelled by non-interacting electrons,
$H_{\rm leads}=\sum_{\alpha}H_{\alpha}=\sum_{\alpha}\sum_{k}\epsilon_{\alpha
k}c_{\alpha k}^{\dagger}c_{\alpha k},$ (5)
where $c_{\alpha k}^{\dagger}(c_{\alpha k})$ creates (annihilates) an electron
in the $k$th state with energy $\epsilon_{\alpha k}$ in lead $\alpha$.
The molecule-lead interaction is described by
$H_{\rm mol-leads}=\sum_{\alpha}\sum_{k}g_{\alpha}(Q)(t_{\alpha k}c_{\alpha
k}^{\dagger}d+t^{*}_{\alpha k}d^{\dagger}c_{\alpha k}).$ (6)
Here, $t_{\alpha k}$ denotes the coupling strength between the electronic
state at the molecule and the $k$th state in lead $\alpha$. The dependence of
the molecule-lead interaction on the nuclear coordinate $Q$ allows the
modelling of situations where the conductance changes upon detachment of the
side group. This can, for example, result from a change of the
$\pi$-conjugation within the molecular backbone as a consequence of the side
group separating from the molecule. In this work, we employ a dependence on
the nuclear coordinate of the form
$g_{\rm
L/R}(Q)=\frac{1-q}{2}\left[1-\tanh\left(2(Q-\tilde{Q})/a_{0}\right)\right]+q,$
(7)
which is depicted by the green line in Fig. 1. The parameter $\tilde{Q}$
determines the region of the reaction mode, where the transition between
stronger and weaker molecule-lead coupling upon detachment of the side group
occurs, and is set below to $\tilde{Q}=4.0\textrm{ \AA}$. In this work, a
non-destructive dissociation is considered, that is, the molecule is still
conductive after the dissociation of the side group. The parameter $q$
determines the relative molecule-lead coupling strength for the dissociated
molecule and is chosen as $q=0.05$.
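The switching behavior of Eq. (7) can be sketched as follows (a minimal illustration using the parameter values quoted above):

```python
import numpy as np

# Sketch of the coordinate-dependent molecule-lead coupling g(Q), Eq. (7),
# with the parameters used in the text (q = 0.05, Q_tilde = 4.0 A).
A0 = 0.529177  # Bohr radius in Angstrom

def g(Q, q=0.05, Q_tilde=4.0):
    """Smooth switch from full coupling (g = 1) to residual coupling (g = q)."""
    return 0.5 * (1.0 - q) * (1.0 - np.tanh(2.0 * (Q - Q_tilde) / A0)) + q

# Near the equilibrium position the coupling is essentially 1; at Q_tilde it
# is halfway, (1 + q)/2; for a detached side group it saturates at q = 0.05.
print(g(1.78), g(4.0), g(8.0))
```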
To model vibrational relaxation of the dissociative reaction mode induced by
coupling to other inactive modes (intramolecular vibrational relaxation), the
phonons of the leads or a possible solution environment, we include the
coupling of the reaction mode to a bosonic bath (in the following referred to
as phonon bath). The phonon bath is modelled by a collection of harmonic
oscillators,
$H_{\rm
ph}=\sum_{j}\frac{p_{j}^{2}}{2m_{j}}+\frac{m_{j}\omega_{j}^{2}q_{j}^{2}}{2},$
(8)
where $\omega_{j}$ denotes the frequency of the $j$th bath oscillator;
$q_{j}$, $p_{j}$, and $m_{j}$ are the corresponding coordinate, momentum, and
mass, respectively. The interaction between the reaction mode and the phonon
bath is given by
$H_{\rm mol-
ph}=-f(Q)\sum_{j}h_{j}q_{j}+\sum_{j}\frac{\left(h_{j}f(Q)\right)^{2}}{2m_{j}\omega_{j}^{2}}.$
(9)
Here, $h_{j}$ denotes the coupling strength between the $j$th bath oscillator
and the reaction mode. The second term in Eq. (9) is a counter term, which is
introduced to avoid an extensive effect of the bath on the potential of the
system.Weiss (2012) The coupling operator $f(Q)$ is taken in the following
form
$\begin{split}f(Q)=&\frac{(Q-Q_{g})}{a_{0}}e^{-\chi\left(\frac{Q-Q_{g}}{a_{0}}\right)^{2}}dd^{\dagger}\\\
&+\frac{(Q-Q_{e})}{a_{0}}e^{-\chi\left(\frac{Q-Q_{e}}{a_{0}}\right)^{2}}d^{\dagger}d,\end{split}$
(10)
where $\chi=\frac{\hbar\Omega_{0}}{4E_{\rm D}}$ denotes the anharmonicity of
the Morse potential.Ilk and Makri (1994) This specific form ensures that the
coupling of the reaction mode to the phonon bath vanishes for large values of
$Q$. In the harmonic limit, $\chi\rightarrow 0$, it reduces to the
paradigmatic linear form.Joutsuka and Ando (2011)
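A minimal sketch of the coordinate dependence in Eq. (10) (using the parameter values quoted above; the electronic operator parts $dd^{\dagger}$ and $d^{\dagger}d$ are omitted) illustrates that the bath coupling is linear near the well minimum and vanishes at large $Q$:

```python
import numpy as np

# Sketch of the coordinate part of the system-bath coupling in Eq. (10):
# x * exp(-chi * x^2) with x = (Q - Q_g)/a_0.
A0 = 0.529177           # Bohr radius in Angstrom
hbar_Omega0 = 0.274     # eV, fundamental frequency of the Morse well
E_D = 2.38              # eV
chi = hbar_Omega0 / (4.0 * E_D)   # anharmonicity, ~0.029

def f_coord(Q, Qg=1.78):
    x = (Q - Qg) / A0
    return x * np.exp(-chi * x**2)

# Linear near the minimum, exponentially suppressed far from it, so a
# dissociated fragment decouples from the phonon bath.
print(f_coord(1.78), f_coord(20.0))
```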
### II.2 Method
To simulate the nonequilibrium dynamics of the molecular junction based on the
Hamiltonian in Eq. (1), we use the HQME method. The HQME approach, also
referred to as hierarchical equations of motion (HEOM), is a reduced density
matrix scheme, which describes the dynamics of a quantum system influenced by
an environment. In the case considered here, the molecule is the system and
the leads together with the phonon bath represent the environment. The HQME
approach generalizes perturbative quantum master equation methods by including
higher-order contributions as well as non-Markovian memory and allows for the
systematic convergence of the results. Härtle _et al._ (2013, 2015); Xu _et
al._ (2017); Trushechkin (2019) The method was originally developed by
Tanimura and Kubo to study relaxation dynamics. Tanimura and Kubo (1989);
Tanimura (2006) Later it was extended to fermionic charge transport,Jin _et
al._ (2007); Jin, Zheng, and Yan (2008); Zheng _et al._ (2009); Yan (2014);
Ye _et al._ (2016); Härtle _et al._ (2013, 2015); Wenderoth, Bätge, and
Härtle (2016) also including electronic-vibrational coupling.Schinabeck _et
al._ (2016); Schinabeck, Härtle, and Thoss (2018); Dou _et al._ (2018) For
more details about the developments and applications of the method, we refer
to the review in Ref. Tanimura, 2020.
In recent work, we have formulated and applied the HQME method to simulate
bond rupture in molecular junctions.Erpenbeck and Thoss (2019); Erpenbeck _et
al._ (2020) Moreover, we have formulated the HQME method for an open quantum
system, which is coupled to multiple fermionic and bosonic environments, as is
the case in the current application of the method.Bätge _et al._ (2021) In
the following, a brief recapitulation of the most important aspects of the
method is given.
We assume that the initial state is given by
$\rho(t=0)=\rho_{\rm s}(0)\rho_{\rm leads}^{\rm eq}\rho_{\rm ph}^{\rm eq},$
(11)
including the initial density matrices of the system, $\rho_{\rm s}(0)$, the
leads, $\rho_{\rm leads}^{\rm eq}$, and the phonon bath, $\rho_{\rm ph}^{\rm
eq}$, respectively. The latter are assumed to be in their respective
equilibrium state. Specifically, the leads are initially described by the
grand canonical distribution,
$\rho_{\rm leads}^{\rm eq}=\prod_{\alpha}\rho_{\alpha}^{\rm
eq}=\prod_{\alpha}\frac{e^{-\beta_{\alpha}(H_{\alpha}-\mu_{\alpha}N_{\alpha})}}{\mathrm{Tr}_{\alpha}\\{e^{-\beta_{\alpha}(H_{\alpha}-\mu_{\alpha}N_{\alpha})}\\}},$
(12)
where $\beta_{\alpha}=1/(k_{\rm B}T_{\alpha})$ denotes the inverse temperature
with Boltzmann constant $k_{\rm B}$, $\mu_{\alpha}$ is the chemical potential
and $N_{\alpha}=\sum_{k}c_{\alpha k}^{\dagger}c_{\alpha k}$ the occupation
number operator of lead $\alpha$, respectively. Moreover,
$\mathrm{Tr}_{\alpha}$ denotes the trace over all electronic degrees of
freedom in lead $\alpha$. The difference of the chemical potentials of the
left and right leads defines the bias voltage $\Phi$, which we assume to drop
symmetrically, i.e., $\mu_{\rm L}=-\mu_{\rm R}=e\Phi/2$. The initial
equilibrium state of the phonon bath at inverse temperature $\beta_{\rm
ph}=1/(k_{\rm B}T_{\rm ph})$ is given by
$\rho_{\rm ph}^{\rm eq}=\frac{e^{-\beta_{\rm ph}H_{\rm ph}}}{\mathrm{Tr}_{\rm
ph}\\{e^{-\beta_{\rm ph}H_{\rm ph}}\\}}.$ (13)
In the studies below, the temperatures of the leads and the phonon bath are
set to $T_{\rm L}=T_{\rm R}=T_{\rm ph}=300$ K.
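The bias convention can be illustrated with a short sketch (a minimal example, not part of the HQME implementation):

```python
import numpy as np

# Sketch: chemical potentials and Fermi functions for a symmetric bias drop,
# mu_L = -mu_R = e*Phi/2, at the temperature used in the text (300 K).
KB_T = 8.617333e-5 * 300.0   # k_B T in eV at 300 K (~25.85 meV)

def fermi(eps, mu):
    return 1.0 / (1.0 + np.exp((eps - mu) / KB_T))

Phi = 2.0                    # bias voltage in V (illustrative value)
mu_L, mu_R = +Phi / 2, -Phi / 2

# At eps = 0 the left lead is essentially filled and the right one empty,
# so the left lead injects electrons and the right lead accepts them.
print(fermi(0.0, mu_L), fermi(0.0, mu_R))
```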
Due to the Gaussian statistical properties of the electronic reservoirs and
the phonon bath, the influence of the environments on the system dynamics is
encoded in the two-time correlation functions $C_{\alpha}^{\pm}(t-\tau)$ and
$C_{\rm ph}(t-\tau)$ of the lead electrons and the bath phonons, respectively.
The two-time correlation function of the lead electrons is given by
$\begin{split}C_{\alpha}^{\sigma}(t-\tau)\equiv&\mathrm{Tr}_{\alpha}\left\\{\sum_{k}\tilde{c}_{\alpha
k}^{\sigma}(t)\tilde{c}_{\alpha k}^{\bar{\sigma}}(\tau)\rho^{\rm
eq}_{\rm\alpha}\right\\}\\\
=&\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{i\sigma\frac{\epsilon}{\hbar}(t-\tau)}f_{\alpha}^{\sigma}(\epsilon)\Gamma_{\alpha}(\epsilon)\mathrm{d}\epsilon,\end{split}$
(14)
where we have introduced the operators
$\tilde{c}_{\alpha
k}^{\sigma}(t)=e^{\frac{it}{\hbar}H_{\alpha}}t^{\sigma}_{\alpha k}c_{\alpha
k}^{\sigma}e^{-\frac{it}{\hbar}H_{\alpha}}$ (15)
and the notation $\sigma=\pm$, $\bar{\sigma}=-\sigma$, $c^{+}_{\alpha
k}=c^{\dagger}_{\alpha k}$, $c^{-}_{\alpha k}=c_{\alpha k}$, $t^{+}_{\alpha
k}=t^{*}_{\alpha k}$, $t^{-}_{\alpha k}=t_{\alpha k}$. It includes the Fermi
distribution function
$f_{\alpha}^{\sigma}(\epsilon)=1/(1+e^{\sigma\beta_{\alpha}(\epsilon-\mu_{\alpha})})$
and the coupling-weighted density of states of lead $\alpha$ (also called
level-width function),
$\Gamma_{\alpha}(\epsilon)=2\pi\sum_{k}\left|t_{\alpha
k}\right|^{2}\delta(\epsilon-\epsilon_{\alpha k}).$ (16)
Throughout this work, the leads are described within the wide-band
limit, where the level-width function is energy-independent,
$\Gamma_{\alpha}=2\pi\left|t_{\alpha}\right|^{2}$.
The influence of the phonon bath on the reduced system dynamics is determined
by the correlation function
$\begin{split}C_{\rm ph}(t-\tau)=&\mathrm{Tr}_{\rm
ph}\left\\{\sum_{j}\tilde{q}_{j}(t)\tilde{q}_{j}(\tau)\rho^{\rm eq}_{\rm
ph}\right\\}\\\ =&\frac{1}{2\pi}\int_{-\infty}^{\infty}\mathrm{d}\omega
e^{-i\omega(t-\tau)}J(\omega)f_{\rm ph}(\omega),\end{split}$ (17)
where we have introduced the operators
$\tilde{q}_{j}(t)=e^{iH_{\rm ph}t/\hbar}h_{j}q_{j}e^{-iH_{\rm ph}t/\hbar},$
(18)
the Bose distribution function $f_{\rm ph}(\omega)=1/(e^{\beta_{\rm
ph}\hbar\omega}-1)$, and the spectral density of the phonon bath,
$J(\omega)=2\pi\hbar\sum_{j}\frac{h_{j}^{2}}{2m_{j}\omega_{j}}\delta(\omega-\omega_{j}).$
(19)
In this work, we assume that the spectral density of the phonon bath is of
Lorentz-Drude form,
$J(\omega)=2\hbar\lambda_{\rm
ph}\frac{\omega\omega_{c}}{\omega^{2}+\omega_{c}^{2}},$ (20)
where $\lambda_{\rm ph}$ characterizes the coupling strength and $\omega_{c}$
is the characteristic frequency, which is the inverse of the bath correlation
time $\tau_{c}$.
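The following sketch evaluates Eq. (20) for the parameter values used in Sec. III ($\lambda_{\rm ph}=0.05$ eV, $\omega_{c}=3\Omega_{0}$), expressing frequencies as energies so that $\hbar$ drops out:

```python
import numpy as np

# Sketch of the Lorentz-Drude spectral density, Eq. (20), with frequencies
# expressed as energies (hbar*omega, in eV).
lam_ph = 0.05                 # coupling strength lambda_ph in eV
hbar_Omega0 = 0.274           # eV
omega_c = 3.0 * hbar_Omega0   # characteristic (cutoff) frequency

def J(omega):
    return 2.0 * lam_ph * omega * omega_c / (omega**2 + omega_c**2)

# J is Ohmic (linear) at low frequency, peaks at omega = omega_c with the
# value lam_ph, and falls off as 1/omega; tau_c = 1/omega_c sets the bath
# correlation time.
print(J(omega_c))
```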
To obtain a formally closed set of hierarchical equations, a central step is
to represent the two-time correlation functions as sums of exponential
functions. To this end, various sum-over-poles decomposition schemes for the
Fermi and Bose distribution functions Hu, Xu, and Yan (2010); Hu _et al._
(2011); Cui _et al._ (2019); Abe, Yamashita, and Saalfrank (2003) or the
corresponding time correlation functions Tang _et al._ (2015); Rahman and
Kleinekathöfer (2019); Erpenbeck _et al._ (2018b) have been proposed. Here,
we employ the Padé decomposition scheme, which was found to be particularly
efficient at moderate temperatures. In this way, the correlation function in
Eq. (14) can be represented as
$C_{\alpha}^{\sigma}(t-\tau)\approx\hbar\pi\delta(t-\tau)+\sum_{l=1}^{\rm
L}\eta_{\alpha,l}e^{-\gamma_{\alpha,l}^{\sigma}(t-\tau)},$ (21)
with exponents
$\hbar\gamma_{\alpha,l}^{\sigma}=\xi_{l}/\beta_{\alpha}-i\sigma\mu_{\alpha}$
and coefficients $\eta_{\alpha,l}=-2i\pi\kappa_{l}/\beta_{\alpha}$. The Padé
parameters $\kappa_{l}$ and $\xi_{l}$ are obtained using the procedures
described in Ref. Hu _et al._ , 2011. The delta-function term appears due to
the wide-band limit approximation.
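To illustrate the idea of a sum-over-poles decomposition, the following sketch uses the simpler textbook Matsubara expansion of the Fermi function rather than the Padé scheme employed here (the Padé poles require the eigenvalue procedure of Ref. Hu _et al._ , 2011); it likewise turns the correlation function into a sum of decaying exponentials, but needs far more terms at low temperature:

```python
import numpy as np

# Textbook Matsubara (sum-over-poles) expansion of the Fermi function,
#   1/(1+e^x) = 1/2 - sum_k 2x / (x^2 + ((2k-1)*pi)^2),
# with x = beta*(eps - mu).  Each pole contributes one decaying exponential
# to the correlation function in Eq. (14); the Pade scheme used in the text
# achieves the same accuracy with far fewer poles.
def fermi_matsubara(x, n_poles=2000):
    k = np.arange(1, n_poles + 1)
    nu = (2 * k - 1) * np.pi          # Matsubara poles (units of 1/beta)
    return 0.5 - np.sum(2.0 * x / (x**2 + nu**2))

x = 1.0
print(fermi_matsubara(x), 1.0 / (1.0 + np.exp(x)))
```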
Likewise, utilizing the Padé spectrum decomposition of the Bose distribution
function, the time-correlation function for the phonon bath is written as
$C_{\rm ph}(t-\tau)\approx\sum_{k=0}^{\rm
K}\eta_{k}e^{-\gamma_{k}(t-\tau)}+\sum_{k={\rm
K}+1}^{\infty}\frac{2\eta_{k}}{\gamma_{k}}\delta(t-\tau),$ (22)
with $\hbar\gamma_{0}=\omega_{c}$ and $\eta_{0}=\hbar\lambda_{\rm
ph}\omega_{c}\cot(\beta\hbar\omega_{c}/2)-i\hbar\lambda_{\rm ph}\omega_{c}$ as
well as $\hbar\gamma_{k}=\nu_{k}$ and $\eta_{k}=\frac{4\lambda_{\rm
ph}\xi_{k}}{\beta}\frac{\omega_{c}\nu_{k}}{\nu_{k}^{2}-\omega_{c}^{2}}$ for
$k>0$. The Padé parameters $\xi_{k}$ and $\nu_{k}$ for the Bose function are
all real numbers. In the above formula, it is assumed that
$\gamma_{k}e^{-\gamma_{k}|t|}\approx 2\delta(t)$ when $\gamma_{k}$ is
significantly larger than the vibrational frequency of the system
$\Omega_{0}$.Ishizaki and Tanimura (2005)
Based on the exponential expansions of the correlation functions in Eqs. (21)
and (22), the HQME method uses a set of auxiliary density operators
$\rho_{\bm{m}}^{\bm{n}}$ to describe the dynamics of the open quantum
system.Jin, Zheng, and Yan (2008); Shi _et al._ (2009); Härtle _et al._
(2013); Schinabeck, Härtle, and Thoss (2018); Bätge _et al._ (2021) The
subscript $\bm{m}$ is a bosonic index vector,
$\bm{m}=(m_{0},m_{1},\cdots,m_{K})$, where every element $m_{k}$ is a non-
negative integer. The superscript $\bm{n}$ denotes an ordered array of
fermionic multi-indices $\bm{a}_{j}=(\alpha_{j},\sigma_{j},l_{j})$, i.e.,
$\bm{n}=(\bm{a}_{1},\cdots,\bm{a}_{n})$, where $\alpha_{j}$ labels the left or
right lead, $\sigma_{j}=\pm$, and $l_{j}$ specifies the fermionic Padé pole.
The reduced density operator of the system, which is defined by tracing the
density operator of the complete system $\rho(t)$ over the environmental DOFs,
$\rho_{\rm s}(t)=\mathrm{Tr}_{{\rm leads+ph}}\\{\rho(t)\\},$ (23)
corresponds to the lowest tier auxiliary density operator, i.e., $\rho_{\rm
s}(t)=\rho_{0}^{0}(t)$.
The HQME is then given by the following equations of motion of the auxiliary
density operators,Jin, Zheng, and Yan (2008); Shi _et al._ (2009); Bätge _et
al._ (2021); Xu _et al._ (2019)
$\begin{split}\frac{\partial\rho_{\bm{m}}^{\bm{n}}(t)}{\partial
t}=&-\frac{i}{\hbar}\left[H_{\rm
mol},\rho_{\bm{m}}^{\bm{n}}(t)\right]_{-}-\frac{1}{\hbar}\mathcal{L}_{\infty}\rho_{\bm{m}}^{\bm{n}}(t)-\left(\sum_{j=1}^{n}\gamma_{a_{j}}+\sum_{k=0}^{K}m_{k}\gamma_{k}\right)\rho_{\bm{m}}^{\bm{n}}(t)\\\
&-\sum_{\alpha\in{\rm
L/R},\sigma\in{\pm}}\frac{\pi}{2\hbar}\left[g_{\alpha}(Q)d^{\overline{\sigma}},\left[g_{\alpha}(Q)d^{\sigma},\rho_{\bm{m}}^{\bm{n}}(t)\right]_{(-)^{n+1}}\right]_{(-)^{n+1}}-\frac{i}{\hbar}\sum_{j=1}^{n}(-1)^{n-j}\mathcal{C}_{a_{j}}\rho_{\bm{m}}^{\bm{n}^{-}_{j}}(t)-\frac{i}{\hbar}\sum_{a_{n+1}}\mathcal{A}_{\alpha_{n+1}}^{\overline{\sigma}_{n+1}}\rho_{\bm{m}}^{\bm{n}^{+}}(t)\\\
&-\frac{\tilde{C}_{K}}{\hbar^{2}}\left[f(Q),\left[f(Q),\rho_{\bm{m}}^{\bm{n}}(t)\right]_{-}\right]_{-}-\frac{i}{\hbar}\sum_{k=0}^{K}\sqrt{(m_{k}+1)\left|\eta_{k}\right|}\mathcal{A}_{k}\rho_{\bm{m}_{k}^{+}}^{\bm{n}}(t)-\frac{i}{\hbar}\sum_{k=0}^{K}\sqrt{\frac{m_{k}}{\left|\eta_{k}\right|}}\mathcal{C}_{k}\rho_{\bm{m}_{k}^{-}}^{\bm{n}}(t).\end{split}$
(24)
Here,
$\tilde{C}_{\rm K}=\sum_{k={\rm
K}+1}^{\infty}\frac{\eta_{k}}{\gamma_{k}}=\frac{2\lambda_{\rm ph}}{\beta\omega_{c}}-\sum_{k=0}^{K}\frac{\eta_{k}}{\gamma_{k}}$
(25)
and a few shorthand notations are used, including the bosonic index arrays
$\bm{m}_{k}^{+}=(m_{0},\cdots,m_{k}+1,\cdots m_{K})$ and
$\bm{m}_{k}^{-}=(m_{0},\cdots,m_{k}-1,\cdots,m_{K})$, the fermionic index
arrays
$\bm{n}_{j}^{-}=(\bm{a}_{1},\cdots,\bm{a}_{j-1},\bm{a}_{j+1},\cdots,\bm{a}_{n})$
and $\bm{n}^{+}=(\bm{a}_{1},\cdots,\bm{a}_{n},\bm{a}_{n+1})$ as well as the
commutator $[\mathcal{O},\rho_{\bm{m}}^{\bm{n}}]_{-}$ and anticommutator
$[\mathcal{O},\rho_{\bm{m}}^{\bm{n}}]_{+}$. Note that the first term in the
second (third) line of Eq. (24) stems from the delta-function approximation in
Eq. (21) (Eq. (22)). We found that the numerical performance benefits
significantly from these two approximations, because they reduce the
stiffness of the equations as compared to considering a very large
bandwidth or a very high Padé frequency. The superoperators
$\mathcal{A}_{\alpha}^{\sigma}$, $\mathcal{C}_{a}$, $\mathcal{A}_{k}$, and
$\mathcal{C}_{k}$ connect the different tiers of the hierarchy and are given
by
$\mathcal{A}_{\alpha}^{\sigma}\rho_{\bm{m}}^{\bm{n}}(t)=g_{\alpha}(Q)d^{\sigma}\rho_{\bm{m}}^{\bm{n}}(t)+(-1)^{n}\rho_{\bm{m}}^{\bm{n}}(t)d^{\sigma}g_{\alpha}(Q),$
(26)
$\mathcal{C}_{a}\rho_{\bm{m}}^{\bm{n}}(t)=\eta_{\alpha,l}^{\sigma}g_{\alpha}(Q)d^{\sigma}\rho_{\bm{m}}^{\bm{n}}(t)-(-1)^{n}\eta_{\alpha,l}^{\bar{\sigma}*}\rho_{\bm{m}}^{\bm{n}}(t)d^{\sigma}g_{\alpha}(Q),$
(27)
$\mathcal{A}_{k}\rho_{\bm{m}}^{\bm{n}}(t)=f(Q)\rho_{\bm{m}}^{\bm{n}}(t)-\rho_{\bm{m}}^{\bm{n}}(t)f(Q),$
(28)
$\mathcal{C}_{k}\rho_{\bm{m}}^{\bm{n}}(t)=\eta_{k}f(Q)\rho_{\bm{m}}^{\bm{n}}(t)-\rho_{\bm{m}}^{\bm{n}}(t)f(Q)\eta^{*}_{k}.$
(29)
The numerically exact description provided by the HQME approach employs an
infinite hierarchy of auxiliary density operators, which needs to be truncated
in a suitable manner. For a detailed discussion, we refer to Refs. Tanimura,
2006; Ye _et al._ , 2016.
Within the HQME formalism outlined above, all (auxiliary) density operators
are also acting on the nuclear DOF of the system, the reaction coordinate. In
order to allow for a description of nuclear DOFs with generic PESs, we employ
a discrete variable representation (DVR),Colbert and Miller (1992); Echave and
Clary (1992); Seideman and Miller (1992) which represents the nuclear reaction
coordinate effectively by a finite set of grid points $Q_{i}$.
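As an illustration of the DVR representation, the following sketch (a minimal stand-alone example, not the production code) diagonalizes the Morse reaction mode of Eq. (3) on a Colbert-Miller sinc-DVR grid:

```python
import numpy as np

# Sketch: Colbert-Miller sinc-DVR for the Morse reaction mode, using the
# model parameters of Sec. II.1 (units: eV, Angstrom, amu).
HBAR2_2M = 2.0903e-3          # hbar^2/(2M) in eV*A^2 for M = 1 amu
E_D, a, Q_g = 2.38, 1.028 / 0.529177, 1.78

n = 256
Q = np.linspace(0.8, 8.0, n)  # grid range chosen for illustration
dx = Q[1] - Q[0]

# Kinetic-energy matrix on an evenly spaced grid (Colbert & Miller, 1992):
# T_ij = hbar^2/(2M dx^2) * (-1)^(i-j) * { pi^2/3 (i=j), 2/(i-j)^2 (i!=j) }
i = np.arange(n)
dij = i[:, None] - i[None, :]
T = np.where(dij == 0, np.pi**2 / 3.0,
             2.0 * (-1.0) ** dij / np.where(dij == 0, 1, dij) ** 2)
T = HBAR2_2M / dx**2 * T

V = E_D * (1.0 - np.exp(-a * (Q - Q_g)))**2
E = np.linalg.eigvalsh(T + np.diag(V))

# Analytic Morse levels: E_v = hbar*Omega0*(v+1/2) - [hbar*Omega0*(v+1/2)]^2/(4 E_D),
# giving E_0 ~ 0.135 eV for the parameters above.
print(E[0], E[1])
```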
In order to avoid finite size effects, a complex absorbing potential (CAP),
$W(Q)$, is introduced, which absorbs the parts of the density reaching the
boundary of the DVR grid.Riss and Meyer (1996) In the calculations reported
below, a power-law form of the CAP is used,Erpenbeck _et al._ (2020)
$W(Q)=\zeta(Q-Q_{\rm CAP})^{4}\cdot\Theta(Q-Q_{\rm CAP}),$ (30)
with the Heaviside step function $\Theta$. The parameters of the CAP, $Q_{\rm
CAP}=4.0\textrm{ \AA}$ and $\zeta=5$ eV/$\textrm{\AA}^{4}$, were determined by
test calculations to ensure that the observables obtained do not depend on the
CAP.
As discussed previously,Selstø and Kvaal (2010); Kvaal (2011); Prucker _et
al._ (2018); Erpenbeck and Thoss (2019); Erpenbeck _et al._ (2020) the
introduction of the CAP may result in problems associated with the
conservation of the particle number. In particular, the action of the CAP
causes a decrease of the trace of the density matrix and consequently an
artificial loss of the number of electrons. To avoid these problems, we
compensate for the loss of populations due to the action of the CAP by
introducing an additional Lindblad-like source term into the HQME,Erpenbeck
and Thoss (2019); Erpenbeck _et al._ (2020)
$\begin{split}\mathcal{L}_{\infty}\rho_{\bm{m}}^{\bm{n}}(t)=&2C_{\infty}(Q)\rho_{\bm{m}}^{\bm{n}}(t)C^{\dagger}_{\infty}(Q)\\\
&-\left[C^{\dagger}_{\infty}(Q)C_{\infty}(Q),\rho_{\bm{m}}^{\bm{n}}(t)\right]_{+}\end{split}$
(31)
with
$C_{\infty}(Q)=\sqrt{W(Q)}\left|Q_{\infty}\right\rangle\left\langle Q\right|.$
(32)
This source term maps the probability absorbed by the CAP to an auxiliary grid
point $Q_{\infty}$ (see Fig. 1 and Ref. Erpenbeck _et al._ , 2020).
### II.3 Observables of interest
We briefly comment on the calculation of observables of interest. Any system
observable can be obtained directly from the reduced density matrix
$\rho_{s}(t)=\rho_{0}^{0}(t)$. Several system observables are considered
below. In particular, we are interested in the dissociation probability, which
can be calculated as
$P_{\infty}(t)=\mathrm{Tr}_{s}\left\\{\left|Q_{\infty}\right\rangle\left\langle
Q_{\infty}\right|\rho_{s}(t)\right\\},$ (33)
where $\mathrm{Tr}_{s}$ denotes the trace over electronic and nuclear DOF of
the system. The remaining population $1-P_{\infty}(t)$, which corresponds to
the portion of non-dissociated molecules, is called the survival probability
in the following. Assuming exponential kinetics of the dissociation process
in the long-time limit, the dissociation rate is given by
$k_{\mathrm{diss}}=-\lim_{t\rightarrow\infty}\frac{d\ln(1-P_{\infty}(t))}{dt}.$
(34)
Another important observable to characterize the dynamics is the population of
the vibrational states in the neutral and charged state of the molecule, given
by
$P^{g}_{v}=\mathrm{Tr}_{s}\left\\{dd^{\dagger}|\psi_{v}^{g}\rangle\langle\psi^{g}_{v}|\rho_{s}(t)\right\\}$
(35)
and
$P^{e}_{v}=\mathrm{Tr}_{s}\left\\{d^{\dagger}d|\psi_{v}^{e}\rangle\langle\psi^{e}_{v}|\rho_{s}(t)\right\\}.$
(36)
Here, $|\psi^{g/e}_{v}\rangle$ denotes the $v$th vibrational eigenfunction in
the neutral/charged state of the molecule. Due to dissociation, the population
of the undissociated molecule decays over time. In the analysis below,
vibrational populations are renormalized by the survival probability
$(1-P_{\infty}(t))$ to obtain a stationary distribution in the long-time limit
despite the dissociation. The renormalized average vibrational excitation can
be obtained as
$\langle
n_{\mathrm{vib}}\rangle=\sum_{v=0}v\frac{\mathrm{Tr}_{s}\left\\{|\psi_{v}^{g}\rangle\langle\psi^{g}_{v}|\rho_{s}\right\\}}{1-P_{\infty}}.$
(37)
For stable molecular junctions, the notion of an effective temperature can be
introduced to quantify the non-thermal vibrational excitation.Preston,
Kershaw, and Kosov (2020); Wang, Nian, and Lü (2020); Zhang, Zheng, and Di
Ventra (2019)
Bath-related observables of interest can be obtained from the auxiliary
density operators. For example, the electronic current from lead $\alpha$ to
the molecule is expressed in terms of the zeroth and first fermionic tier
auxiliary density operators,
$\begin{split}I_{\alpha}(t)=&-2e\frac{\mathrm{d}}{\mathrm{d}t}\left\langle\sum_{k}c_{\alpha
k}^{\dagger}c_{\alpha k}\right\rangle\\\
=&\frac{2ie}{\hbar}\sum_{l=1}\mathrm{Tr}_{s}\left\\{g_{\alpha}(Q)\left(d\rho^{(\alpha,+,l)}_{\bm{0}}(t)-d^{\dagger}\rho_{\bm{0}}^{(\alpha,-,l)}(t)\right)\right\\}\\\
&+\frac{2\pi
e}{\hbar}\mathrm{Tr}_{s}\left\\{g_{\alpha}(Q)g_{\alpha}(Q)\left[dd^{\dagger}\rho_{\bm{0}}^{0}(t)-d^{\dagger}d\rho_{\bm{0}}^{0}(t)\right]\right\\}\end{split}$
(38)
where $e$ denotes the electron charge and the spin degeneracy is taken into
account. The current passing through the molecule is given by
$I(t)=(I_{L}(t)-I_{R}(t))/2$. The contribution of the zeroth tier auxiliary
density operator in the above current expression is due to the wide-band
approximation. The details of the derivation are provided in the supplementary
material.
### II.4 Numerical details
Here, we provide some details of the numerical calculations.
The initial state of the system is chosen as the vibrational ground state of
the neutral molecule,
$\rho_{s}(0)=dd^{\dagger}\left|\psi^{g}_{v=0}\right\rangle\left\langle\psi_{v=0}^{g}\right|.$
(39)
More details about the influence of the initial state on the dynamics can be
found in the supplementary material. In short, the dissociation rate and the
quantities rescaled by the survival probability at long times are insensitive
to the initial preparation.
The HQME, Eq. (24), is solved using the propagation scheme proposed in Ref.
Wilkins and Dattani, 2015, which is based on a power-series expansion of the
propagator and makes more efficient use of memory. The numerical
calculations are performed on GPUs, and shared-memory parallel programming
is employed for further speed-up. Furthermore, it should be
emphasized that a large fraction of the auxiliary density matrices are
exactly zero due to the exclusion principle. Therefore, a sparse matrix
multiplication algorithm is used, where the sparsity of all auxiliary
density matrices and of the Hamiltonian is checked before propagation.
For all data presented below, we have tested the convergence of the
observables with respect to the number of DVR grid points, the number of Padé
poles used to represent the Fermi and Bose functions, the time step of the
integrator, and the truncation tier of the HQME. At least 64 DVR grid points
for the reaction coordinate are required. The calculations employ 20 Padé
poles for the Fermi function and two for the Bose function. We adopt a
hierarchy truncation scheme, where all auxiliary density matrices beyond the
specified truncation tier are set to zero. The converged hierarchical
truncation tier depends on the molecule-lead coupling strength, the bias
voltage, and the vibration-phonon bath interaction. Stronger molecule-lead coupling or a lower
bias voltage requires a higher fermionic hierarchical truncation tier.
Likewise, a stronger phonon bath coupling requires a higher bosonic hierarchy
tier. For most parameter sets chosen in this work, converged results are
obtained at the second or third fermionic and bosonic tier of the hierarchy.
## III Results and Discussion
In this section, we study the current-induced dissociation dynamics in single-
molecule junctions based on the methods and model introduced above. To provide
a comprehensive analysis of the underlying mechanisms, we consider in Sec.
III.1 to Sec. III.3 a broad range of different regimes and processes,
comprising off-resonant to resonant transport, weak to strong vibronic and
molecule-lead coupling, as well as vibrational relaxation due to coupling to a
phonon bath. In Sec. III.4, time-dependent current-voltage characteristics are
presented and the implications for experiments are addressed. Furthermore,
strategies for improving the stability of molecular junctions are discussed.
### III.1 Overview of dissociation mechanisms
As a basis for the subsequent detailed analysis, we first give an overview of
the most important dissociation mechanisms.
Fig. 2 summarizes the basic vibrational heating and cooling processes in a
molecular junction. When current-induced vibrational heating exceeds heat
dissipation, the energy accumulated in a certain bond can reach the
dissociation threshold and the bond ruptures. Depending on the specific bond
affected, the functionality of the junction may be destroyed. The underlying
current-induced dissociation mechanism is determined by the applied bias
voltage and intrinsic molecular properties. The bias voltage dictates whether
a process is energetically possible in principle, whereas the kinetics of a
process is controlled by specific properties of the molecular junctions, such
as molecule-lead and vibronic coupling.
A schematic diagram of relevant dissociation mechanisms is shown in Fig. 3.
When vibronic coupling is weak, stepwise vibrational ladder climbing (cf. Fig.
3, process M1) is the dominant dissociation mechanism over the whole bias
range. In this regime, the probability of inelastic transport processes, where
the vibrational mode is excited by multiple vibrational quanta, is small. In
the case of stronger vibronic coupling, multi-quantum vibrational excitations
are favored. When the bias voltage exceeds the dissociation threshold,
$e\Phi>2E_{\rm D}$, the energy of an incoming electron is sufficient to
directly excite the molecule into unbound electronic states, leading to
ultrafast direct dissociation (M2). If the bias voltage is lower, dissociation
can be induced by multiple electronic transitions via multi-quantum vibration
excitations (M3). Similar mechanisms have been invoked to explain molecule
desorption from metal surfaces using scanning tunneling microscopy.Stipe _et
al._ (1997); Lee, Sorescu, and Deng (2011); Tan _et al._ (2011); Zhao _et
al._ (2013); Chen _et al._ (2019) In this context, it was found that the
dissociation rate shows a linear (power-law) dependence on the tunneling
current when the desorption is induced by a single (multiple) electronic
transition(s).Salam, Persson, and Palmer (1994); Lee, Sorescu, and Deng
(2011); Ueba (2003); Tikhodeev and Ueba (2004); Persson and Avouris (1997)
Figure 2: Vibrational heating and cooling processes in molecular junctions.
(a), (b): Transport-related heating (a) and cooling (b) processes, where the
transport of an electron from one lead to the other is accompanied by the
emission or absorption of vibrational energy. (c): Electron-hole pair creation
process, where an electron is transferred from the valence band of one lead
onto the molecule, absorbs energy from the vibrations and then returns to the
conduction band of the same lead. (d): Vibrational relaxation process due to
the coupling to a phonon bath. Figure 3: Sketch of current-induced
dissociation mechanisms in molecular junctions under various conditions. The
horizontal green dotted line marks the boundary between off-resonant and
resonant transport regimes. Red dashed lines indicate the transition region
between dominant dissociation mechanisms.
Figure 4: Average vibrational excitation $\langle n_{\mathrm{vib}}\rangle$ as
well as dissociation rate $k_{\mathrm{diss}}$ as a function of bias voltage
for $\Delta Q=0.025\textrm{ \AA}$ (a) and $\Delta Q=0.3\textrm{ \AA}$ (b),
respectively. The close-up of the dissociation rate in the low-bias regime is
shown in the respective inset. Solid and dotted lines represent results
without and with the coupling to a thermal phonon bath, i.e., $\lambda_{\rm
ph}=$ 0 and $\lambda_{\rm ph}=$ 0.05 eV, respectively. Other parameters are
$\Gamma_{\rm L}=\Gamma_{\rm R}=$ 0.05 eV and $\omega_{c}=3\Omega_{0}$.
To demonstrate the characteristics of the different dissociation mechanisms,
Fig. 4 displays the renormalized average vibrational excitation $\langle
n_{\mathrm{vib}}\rangle$ as well as dissociation rate $k_{\mathrm{diss}}$ as a
function of bias voltage for values of the displacement between the neutral
and charged PES of $\Delta Q=0.025\textrm{\AA}$ and $\Delta
Q=0.3\textrm{\AA}$, respectively. These two values of $\Delta Q$ represent
cases of weak and strong vibronic coupling, respectively. The molecule is
coupled symmetrically to both leads and a weak molecule-lead coupling strength
of $\Gamma_{\rm L}=\Gamma_{\rm R}=0.05$ eV is chosen to minimize the influence
of electronic level broadening.
We first discuss the case of weak vibronic coupling ($\Delta Q=0.025\textrm{
\AA}$), in which stepwise vibrational ladder climbing (process M1 in Fig. 3)
is the dominant dissociation mechanism. The results in the left panel of Fig.
4 (a) show that the vibrational excitation $\langle n_{\rm vib}\rangle$
increases from negligible values in the off-resonant transport regime to
moderate values in the resonant regime upon increase of the bias voltage. A
similar behavior is observed for the dissociation rate $k_{\rm diss}$ depicted
in the right panel of Fig. 4 (a). It is noted, though, that the dissociation
rate only starts to increase at voltages of about $\Phi=2.2$ V, which is
already above the onset of resonant transport at 2 V. Additional vibrational relaxation
induced by coupling to a phonon bath ($\lambda_{\rm ph}=0.05$ eV) results in a
substantial reduction of both the vibrational excitation and the dissociation
rate.
In the off-resonant regime, electron transport is dominated by cotunneling and
thus current-induced heating is slow and ineffective. In principle, once the
bias voltage exceeds the threshold $e\Phi>E_{10}$, where $E_{10}=0.258$ eV is
the energy gap between the vibrational ground and first excited state, the
molecule can be excited by a succession of inelastic electrons tunneling
processes to high-lying vibrational bound states in a stepwise manner. In
practice, however, efficient dissociation also requires that vibrational
heating [cf. Fig. 2 (a)] must be faster than vibrational cooling [cf. Fig. 2
(b), (c) and (d)]. As a result, in the off-resonant transport regime, the
molecule remains preferentially in the vibrational ground state and the
junction is stable.
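As a consistency check, the threshold $E_{10}=0.258$ eV quoted above follows from the standard Morse-oscillator level formula, $E_{v}=\hbar\Omega_{0}(v+1/2)-[\hbar\Omega_{0}(v+1/2)]^{2}/(4E_{\rm D})$, with the parameters given elsewhere in the text ($E_{\rm D}=2.38$ eV, $\hbar\Omega_{0}=274$ meV). The short sketch below assumes exactly this textbook formula:

```python
# Morse-oscillator vibrational levels (energies measured from the well bottom):
#   E_v = hw*(v + 1/2) - (hw*(v + 1/2))^2 / (4*E_D)
# Parameters from the text: E_D = 2.38 eV, hbar*Omega_0 = 0.274 eV.
E_D = 2.38   # dissociation energy (eV)
hw = 0.274   # vibrational quantum hbar*Omega_0 (eV)

def morse_level(v):
    """Energy of Morse level v (eV), measured from the potential minimum."""
    x = v + 0.5
    return hw * x - (hw * x) ** 2 / (4.0 * E_D)

E10 = morse_level(1) - morse_level(0)   # ladder-climbing threshold e*Phi > E_10
print(round(E10, 3))
```

The anharmonic correction reduces the gap from the harmonic value of 274 meV to the quoted 258 meV.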
In the resonant transport regime, the current is significantly larger and
heating becomes significant. For bias voltages in the vicinity of the onset of
resonant transport (in the present model at 2 V), however, cooling effects due
to electron-hole pair creation [cf. Fig. 2 (c)] counteract current-induced
heating in the course of stepwise vibrational ladder climbing, such that the
onset of dissociation appears at a somewhat higher bias voltage. In the high
bias voltage regime, where electron-hole pair creation processes are fully
blocked, harmonic models predict an extremely large vibrational excitation
(vibrational instability). Mitra, Aleiner, and Millis (2004); Härtle and Thoss
(2011); Schinabeck, Härtle, and Thoss (2018) For the more realistic
dissociative model considered here, however, $\langle n_{\mathrm{vib}}\rangle$
saturates to a moderate value. This is the combined result of stepwise heating
and the presence of a dissociation threshold. Dissociation happens before the
vibrational instability can set in. Finally, when energy dissipation to a
phonon bath is efficient and fast enough to compete with current-induced
heating, dissociation is almost completely suppressed.
Next, we consider the case of strong vibronic coupling depicted in Fig. 4 (b)
for the example of $\Delta Q=0.3\textrm{ \AA}$. The results differ from those
obtained in the weak coupling regime in three aspects. First, in contrast to
the saturation observed in the high bias regime for $\Delta Q=0.025\textrm{
\AA}$, both $\langle n_{\mathrm{vib}}\rangle$ and $k_{\mathrm{diss}}$ increase
continuously with increasing bias voltage within the sampled bias window.
Second, the dissociation rate is already significant at the onset of resonant
transport, as shown in the inset of Fig. 4 (b). For example, at a bias voltage
of 1.8 V, the dissociation rate is about 0.5 $\rm{ns}^{-1}$. Furthermore, the
coupling to a phonon bath only slightly reduces the vibrational excitation and
the dissociation rate.
These distinct features point to a different dissociation mechanism. Strong
vibronic coupling allows direct vibrational excitation from low-lying bound
states to continuum states. For higher bias voltages in the resonant transport
regime this facilitates direct dissociation (process M2 in Fig. 3). Because
dissociation in this case is very fast, vibrational relaxation due to
electron-hole pair creation or coupling to a phonon bath is inefficient.
Moreover, dissociation in the off-resonant lower-voltage regime is possible
because energy exchange with only a few electrons is sufficient for the
molecule to overcome the dissociation threshold, as inelastic transport
processes are accompanied by multi-quantum vibrational excitations (process M3
in Fig. 3).
### III.2 Detailed analysis of current-induced dissociation
In the following, we analyze in detail the mechanisms of current-induced
dissociation in molecular junctions. To simplify the discussion, the coupling
to a phonon bath is neglected throughout this section. We will study the
influence of the phonon bath in Sec. III.3.2.
#### III.2.1 High bias voltage regime
Figure 5: (a) Franck-Condon factors for the system with $\Delta Q=0.01\textrm{
\AA}$. (b) Population dynamics at $\Phi=8$ V. The upper and lower panel
correspond to the charged and neutral state, respectively. In each panel, the
renormalized populations of vibrational bound states
$P^{e/g}_{v}/(1-P_{\infty})$ ($v=0-15$) are shown in color solid lines and the
summation over the populations of the continuum states is shown in black
dotted lines. The renormalization factor is the survival probability
$1-P_{\infty}(t)$, with $P_{\infty}(t)=P_{\infty}^{g}+P_{\infty}^{e}(t)$.
$P^{g}_{\infty}(t)$ and $P^{e}_{\infty}(t)$ are plotted in green dashed lines.
(c) Schematic illustration of the stepwise vibrational heating. Other
parameters are $\Gamma_{\rm L/R}=0.05eV$.
We first consider the high bias voltage regime, exemplified by $\Phi=8$ V. In
this regime, electron-hole pair creation processes are fully blocked and,
thus, only transport-related vibrational heating and cooling processes are
active. Furthermore, electron transport takes place resonantly and direct
dissociation (process M2 in Fig. 3) is energetically possible. We note that
at such high bias voltages in realistic molecular junctions, more electronic
states than the single one included in the model may enter the bias window.
This can result in additional phenomena,Härtle, Benesch, and Thoss (2009) which
will be studied in future work.
We start the analysis with the weak vibronic coupling case, $\Delta
Q=0.01\textrm{ \AA}$. The Franck-Condon transition matrix depicted in Fig. 5
(a) shows that in this regime inelastic transport processes are dominated by
single-quantum vibrational excitation and deexcitation. Fig. 5 (b) depicts the
population dynamics in the vibrational state manifold of the neutral and the
charged state for a symmetric molecule-lead coupling scenario. Starting in the
initially prepared vibrational ground state, inelastic transport processes
excite the molecule to the first vibrationally excited state after a cycle of
charging and discharging, as illustrated by the black arrows in Fig. 5 (c). In
a sequence of such inelastic transport processes higher vibrationally excited
states are sequentially populated, as shown by the short-time dynamics in Fig.
5 (b). On a timescale of tens of picoseconds, a quasi-steady distribution is
established. The population of the vibrational continuum states above the
dissociation threshold (black dotted lines) rises on the same timescale as the
population of the highest bound state, which confirms the stepwise vibrational
heating mechanism.
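The stepwise heating described above can be caricatured by a simple Pauli-type rate-equation ladder. The sketch below is not the paper's HEOM treatment: the single-quantum heating, cooling, and escape rates are illustrative placeholders, chosen only to show how population climbs the ladder and funnels into the continuum.

```python
import numpy as np

# Pauli-type rate-equation caricature of stepwise vibrational ladder climbing
# (process M1): N bound levels with single-quantum heating/cooling and an
# absorbing continuum above the top level. All rates are assumed values.
N = 16               # bound levels v = 0..15, as in Fig. 5
k_up = 0.10          # single-quantum excitation rate (1/ps), placeholder
k_down = 0.05        # single-quantum deexcitation rate (1/ps), placeholder
k_esc = 0.50         # escape rate from the top level into the continuum (1/ps)

P = np.zeros(N + 1)  # P[0..N-1]: bound levels, P[N]: dissociated population
P[0] = 1.0           # initially in the vibrational ground state
dt, steps = 0.01, 100_000   # 1 ns of dynamics, forward-Euler propagation

for _ in range(steps):
    flow = k_up * P[:N - 1] - k_down * P[1:N]   # net upward flow v -> v+1
    dP = np.zeros_like(P)
    dP[:N - 1] -= flow
    dP[1:N] += flow
    dP[N - 1] -= k_esc * P[N - 1]   # top level leaks into the continuum
    dP[N] += k_esc * P[N - 1]
    P = P + dt * dP

# With k_up > k_down the population drifts upward and eventually dissociates.
print(P[N] > 0.5)
```

Because the net upward drift must traverse all levels sequentially, the continuum population rises on the same timescale as the highest bound state, mirroring the behavior seen in Fig. 5 (b).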
Figure 6: Same as Fig. 5, but for $\Delta Q=0.3\textrm{ \AA}$.
Figure 7:
Dissociation rate as a function of current $I_{c}$ for $\Delta Q=0.3\textrm{
\AA}$ and $\Phi=8$ V. The data are obtained by varying $\Gamma_{\rm L/R}$ from
0.001 to 0.01 eV. $I_{c}$ is the plateau value of $I(t)/(1-P_{\infty}(t))$, as
shown as an example for $\Gamma_{\rm L/R}=0.005$ eV in the inset.
In the case of strong vibronic coupling ($\Delta Q=0.3$Å), depicted in Fig. 6,
excitation and deexcitation processes involving multiple vibrational quanta
are favored, as confirmed by the Franck-Condon matrix (see Fig. 6 (a)). The
population dynamics of the vibrational states displayed in Fig. 6 (b) show
that a dissociation probability of 100% (sum of green dashed lines) is reached
within one picosecond and the rescaled population of continuum states (black
dotted lines) is rather high. These observations point to the existence of two
direct dissociation pathways that are induced by a single tunneling electron.
At a voltage of $\Phi=$ 8 V, the incoming electron can excite the molecule
directly from the vibrational ground state or a low-lying vibrationally
excited state into a continuum state, as schematically illustrated by path 1
in Fig. 6 (c). Alternatively, in path 2, the molecule is first charged and
excited by an incoming electron into a high-lying vibrational bound state
($6\leq n_{v}\leq 15$), and then proceeds resonantly into a continuum state
upon discharging, corresponding to dissociation mediated by a Feshbach
resonance.Brisker and Peskin (2008)
It is known from studies of molecular desorption from metal surfaces that the
reaction rate depends linearly on the tunneling current when the desorption is
induced by a single electronic transition. Fig. 7 shows the dissociation rate
$k_{\mathrm{diss}}$ as a function of the quasi steady-state current of the
undissociated molecule $I_{c}$. Here, $I_{c}$ is obtained in our model by
taking the plateau value of $I(t)/(1-P_{\infty}(t))$, as illustrated in the
inset of Fig. 7. The data are obtained by varying the molecule-lead coupling
$\Gamma_{L/R}$ from 0.001 to 0.01 eV. A least-squares fit of
$k_{\mathrm{diss}}\propto I_{c}^{n}$ yields $n=0.98$, which corroborates our
hypothesis that in the case of strong vibronic coupling and high bias voltage
($e\Phi>2E_{\rm D}$), dissociation is induced by a single tunneling electron.
We note that deviations from the linear dependence are observed when
$\Gamma_{L/R}$ is larger than 0.02 eV. The influence of stronger molecule-lead
coupling will be discussed in Sec. III.3.1.
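The fit quoted above can be sketched as a linear least-squares regression in log-log space. The data below are synthetic, generated with $n=1$ plus small scatter purely to illustrate the procedure; they are not the paper's simulated values:

```python
import numpy as np

# Least-squares fit of k_diss ∝ I_c^n on a log-log scale, illustrated with
# synthetic data generated for n = 1 (the single-electron regime of Fig. 7).
rng = np.random.default_rng(0)
I_c = np.logspace(-2, 0, 10)                   # plateau currents (arb. units)
k_diss = 0.8 * I_c * np.exp(rng.normal(0.0, 0.02, I_c.size))  # n = 1 + noise

n_fit, log_pref = np.polyfit(np.log(I_c), np.log(k_diss), 1)
print(abs(n_fit - 1.0) < 0.05)   # recovered exponent close to the input n = 1
```

An exponent near unity, as in the $n=0.98$ fit of the text, signals a single-electron-induced process.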
Figure 8: (a) Dissociation rate $k_{\mathrm{diss}}$ as a function of $\Delta
Q$ at $\Phi=8$ V. The molecule-lead coupling is $\Gamma_{\rm L/R}=$ 0.05 eV.
(b) Franck-Condon matrix elements
$S_{01}=\left|\langle\psi^{e}_{1}|\psi^{g}_{0}\rangle\right|^{2}$ and the
summation of the transition probabilities from the vibrational ground state to
the sixth and higher vibrationally excited states, $S_{0c}=\sum_{v\geq
6}\left|\langle\psi^{e}_{v}|\psi^{g}_{0}\rangle\right|^{2}$.
Having distinguished different dissociation mechanisms in the limiting cases
of weak and strong vibronic coupling, we next analyze the intermediate regime.
To this end, Fig. 8 (a) shows the dissociation rate as a function of the
displacement $\Delta Q$. The dissociation rate exhibits a non-monotonic
dependence on the displacement with a first increase up to about $\Delta
Q=0.16\textrm{ \AA}$, followed by a slight decrease in the range of 0.16
$\textrm{ \AA}<\Delta Q<0.2\textrm{ \AA}$ and then a further increase. The
slope of the second increase is much smaller than that of the first one.
In order to explain these findings, we examine the dependence of Franck-Condon
matrix elements
$S_{vv^{\prime}}=\left|\langle\psi^{e}_{v^{\prime}}|\psi^{g}_{v}\rangle\right|^{2}$
on $\Delta Q$ in Fig. 8 (b). The excitation rate of the $v=0\rightarrow
v^{\prime}=1$ transition is proportional to $S_{01}$. Moreover,
$S_{0c}=\sum_{v\geq
6}\left|\langle\psi^{e}_{v}|\psi^{g}_{0}\rangle\right|^{2}$ is a measure for
the transition probability from the vibrational ground state to the sixth and
higher vibrationally excited states, which is related to the direct
dissociation mechanism. $S_{01}$ exhibits a turnover at $\Delta Q=0.16\textrm{
\AA}$ and $S_{0c}$ increases monotonically and exceeds $S_{01}$ at $\Delta
Q=0.23\textrm{ \AA}$. This is in line with the transitions observed in the
dissociation rate and suggests the following mechanisms in the different
regimes: In the weak to intermediate vibronic coupling regime ($0<\Delta
Q<0.16\textrm{ \AA}$), dissociation is dominated by stepwise vibrational
ladder climbing. In this mechanism, which comprises multiple consecutive
heating steps, the dissociation rate scales non-linearly with the single-step
excitation rate, $k_{s}$,Salam, Persson, and Palmer (1994); Ueba (2003) i.e.,
$k_{\mathrm{diss}}\propto k_{s}^{n}$ ($n\gg 1$). Thus, as $k_{s}$ ($\propto
S_{01}$) increases, the dissociation rate rises also. For $\Delta Q>0.16\text{
\AA}$, $S_{01}$ starts to decrease and multi-quantum vibrational excitations
are preferred. The transition of the dominant dissociation mechanism takes
place in this region. As such, the dissociation rate drops only slightly and
increases again when the dissociation is mainly induced by a single electronic
transition, which is the case for $\Delta Q>0.22\text{ \AA}$, where $S_{0c}$
is close to or larger than $S_{01}$.
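The turnover of $S_{01}$ can be rationalized in a harmonic approximation, where the Franck-Condon factors are Poissonian in the Huang-Rhys factor $g=M\Omega_{0}(\Delta Q)^{2}/(2\hbar)$ and $S_{01}=g\,e^{-g}$ peaks at $g=1$. The sketch below assumes a reduced mass of $M=1$ amu (consistent with $\hbar\Omega_{0}=274$ meV); the small shift relative to the Morse value of $0.16\textrm{ \AA}$ reflects the anharmonicity neglected here:

```python
import numpy as np

# Harmonic-approximation sketch of the S_01 turnover: for displaced harmonic
# oscillators S_0v = exp(-g) g^v / v!, with Huang-Rhys factor
# g = M*Omega*(dQ)^2/(2*hbar).  S_01 = g*exp(-g) is maximal at g = 1.
hbar = 1.054571817e-34   # J s
amu = 1.66053907e-27     # kg
eV = 1.602176634e-19     # J

M = 1.0 * amu                    # assumed reduced mass
Omega = 0.274 * eV / hbar        # frequency for hbar*Omega_0 = 274 meV

dQ = np.linspace(0.01, 0.30, 300) * 1e-10   # displacements (m)
g = M * Omega * dQ**2 / (2.0 * hbar)        # Huang-Rhys factor
S01 = g * np.exp(-g)

dQ_turn = dQ[np.argmax(S01)] * 1e10         # turnover displacement (Angstrom)
print(round(dQ_turn, 2))
```

In this harmonic sketch the turnover falls near $0.17\textrm{ \AA}$, close to the Morse-model value quoted in the text.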
#### III.2.2 Intermediate bias voltage regime
Panels: a) SYMM ($\Gamma_{\rm R}=\Gamma_{\rm L}$); b) SYMM ($\Gamma_{\rm R}=\Gamma_{\rm L}$); c) ASYMM ($\Gamma_{\rm R}=0.1\Gamma_{\rm L}$).
Figure 9: Dissociation rate $k_{\mathrm{diss}}$ as a function of $\Delta Q$
for the model SYMM in (a) and (b) and for the model ASYMM in (c). Different
lines correspond to different bias voltages. The coupling of the molecule to
the left lead is fixed at $\Gamma_{\rm L}=0.05$ eV.
Figure 10: Energy-level
scheme illustrating the suppression of electron-hole pair creation processes
with increasing bias voltage for different $\Delta Q$. Shown is the Fermi
distribution of the electrons in the left lead (yellow) for lower and higher
bias voltage, the molecular energy level as well as electron-hole pair creation
processes including the absorption of up to three vibrational quanta.
Figure 11: (a) Dissociation rate $k_{\mathrm{diss}}$ as a function of the
current $I_{c}$ for model SYMM with $\Gamma_{\rm L/R}=0.05$ eV. The colored
lines varying from dark purple to yellow correspond to $\Delta Q$ from $0.025$
to $0.3\textrm{ \AA}$ in increments of $0.025\textrm{ \AA}$. The gray
dotted lines represent a least-squares fitting. (b) Fitting parameter $n$ in
$k_{\mathrm{diss}}\propto I_{c}^{n}$ for different $\Delta Q$.
Next, we turn to the intermediate bias voltage regime, 2 V $\lesssim\Phi<4.76$
V $=2E_{\rm D}/e$. In this regime, electron transport is still resonant,
however, direct dissociation (process M2 in Fig. 3) is no longer possible.
Furthermore, electron-hole pair creation processes are active. In order to
distinguish between transport-induced vibrational mechanisms and the influence
of electron-hole pair creation processes, it is insightful to also consider
systems that display different symmetries with respect to the coupling to the
left and the right lead. Fig. 9 depicts the dissociation rates in this regime
for a symmetric (SYMM, panels (a) and (b)) and an asymmetric (ASYMM, panel
(c)) molecule-lead coupling scenario, respectively. The following analysis
focuses exclusively on positive $\Delta Q$. Results for negative $\Delta Q$
will be discussed in Sec. III.3.3.
Overall, the results reveal a more complex dependence of the dissociation rate
on the vibronic coupling $\Delta Q$, as compared to the high bias voltage
regime in Fig. 8. For moderate voltages (e.g. 4 V, depicted by the blue line
in Fig. 9 (a)), the dissociation rate rises for small $\Delta Q$, then
exhibits a turnover, followed by a local minimum and a further increase with
additional structures for larger $\Delta Q$. Lowering the voltage (e.g. to 3 V
or 2.6 V, depicted by the orange and green line in Fig. 9 (a), respectively),
the dissociation rate overall decreases and the additional structures at
larger $\Delta Q$ disappear. For even lower voltages (e.g. 2.2 V or 1.8 V
shown in Fig. 9 (b)), the dissociation rate increases monotonically with
$\Delta Q$. A similar behavior is observed for the case of asymmetric
molecule-lead coupling depicted in Fig. 9 (c).
We first analyze these observations in the weak to intermediate vibronic
coupling regime. As mentioned before, dissociation in this regime is dominated
by the stepwise vibrational ladder climbing mechanism, which is particularly
sensitive to relaxation effects. In the model considered here, vibrational
relaxation is caused by transport related deexcitation [cf. Fig. 2 (b)] and
electron-hole pair creation [cf. Fig. 2 (c)] processes. The efficiency of
electron-hole pair creation processes depends sensitively on the applied bias
voltage.Härtle and Thoss (2011); Härtle, Peskin, and Thoss (2013); Schinabeck,
Härtle, and Thoss (2018) As shown previously,Härtle and Kulkarni (2015);
Nitzan and Galperin (2018) cooling due to electron-hole pair creation
processes is particularly effective at the onset of resonant transport, but
becomes successively blocked for larger voltages. As a consequence, the
dissociation rate is significantly reduced with decreasing bias voltage, where
pair creation processes become less suppressed. Furthermore, it should be
noted that with increasing $\Delta Q$, vibrational deexcitation processes with
more vibrational quanta are enabled and become dominant in the electron-hole
pair creation processes, as illustrated in Fig. 10. The first turnover of the
dissociation rate with increasing $\Delta Q$ marks the point where electron-hole
pair creation processes are no longer largely blocked; it shifts to larger
$\Delta Q$ with increasing bias voltage.
The importance of electron-hole pair creation processes can be confirmed by
considering the case of asymmetric molecule-lead coupling (model ASYMM,
depicted in Fig. 9 (c)). As is known from the study of asymmetric models with
harmonic vibrations, the cooling efficiency of electron-hole pair creation
processes depends sensitively on the bias polarity.Härtle and Thoss (2011);
Härtle, Peskin, and Thoss (2013); Schinabeck, Härtle, and Thoss (2018) Fig. 9
(c) shows that, while the dissociation rate is independent of the bias
polarity at a large bias voltage of 8 V, where electron-hole pair creation
processes are blocked, it does depend on the bias polarity for moderate and
small voltages. For example, a bias voltage of +4 V results in a significantly smaller
dissociation rate than a voltage of -4 V. This is because for positive bias
voltage, electron-hole pairs are generated predominantly with respect to the
left lead and for negative bias voltage mostly with respect to the right lead.
Since in model ASYMM the molecule is more strongly coupled to the left lead,
the cooling due to electron-hole pair creation processes is more effective for
positive bias voltage and thus the dissociation rate is smaller. This cooling
effect becomes stronger with increasing $\Delta Q$, because more electron-hole
pair creation processes accompanied by multi-quantum vibrational deexcitations
are kinetically allowed. As a consequence, a pronounced negative slope is
observed in the intermediate vibronic coupling regime, which was also found in
Ref. Koch _et al._ , 2006.
Next, we consider the intermediate to strong vibronic coupling regime. In this
regime, the dominant dissociation mechanism undergoes a transition from
vibrational ladder climbing to multi-quantum vibrational excitations to the
continuum states. The crossover also depends on the current.Salam, Persson,
and Palmer (1994) As mentioned before, at a high bias voltage of 8 V, the
dominance of stepwise vibrational ladder climbing extends to $\Delta
Q=0.16\textrm{ \AA}$. At lower bias voltages, the current is decreased and,
simultaneously, vibrational relaxation is more effective. As a result, the
average time between electron tunneling events becomes comparable to or longer
than the vibrational relaxation time. In this situation, as shown by Salam and
coworkers,Salam, Persson, and Palmer (1994) vibrational ladder climbing is
less effective and multi-quantum vibrational excitation to the continuum
states becomes more important. As a consequence, the transition of the
dominant dissociation mechanism can take place at a smaller $\Delta Q$ for a
smaller bias voltage. The dissociation rate increases with increasing vibronic
coupling because of the increased transition probability to continuum states
and the decreased sensitivity to vibrational relaxation effects. The latter is
due to the fact that the dissociation after excitation above the dissociation
threshold is faster than the cooling rates.
In order to verify the above assertion, Fig. 11 (a) shows the dissociation
rate $k_{\mathrm{diss}}$ as a function of the quasi steady-state current of
the undissociated molecule, $I_{c}$, for various $\Delta Q$. The data were
obtained by varying the bias voltage in the range of 2.6 - 3.2 V. It is known
from studies of molecular desorption from metal surfaces that when the
reaction is induced by multiple electronic transitions, the reaction rates
exhibit a power-law dependence on the tunneling current,
$k_{\mathrm{diss}}\propto I_{c}^{n}$, and the exponent $n$ was interpreted as
the number of electrons involved in inducing the reaction.Salam, Persson, and
Palmer (1994); Ueba (2003); Tikhodeev and Ueba (2004); Persson and Avouris
(1997) The results in Fig. 11 indeed indicate a power-law relation with values
of $n$ varying from $n=60$ at $\Delta Q=0.025\text{ \AA}$ to $n=7$ at $\Delta
Q=0.3\text{ \AA}$, thus confirming the multiple-electron nature of the
mechanism. The dependence of the power $n$ on $\Delta Q$ depicted in Fig. 11
(b) reveals a pronounced decrease at about $\Delta Q=0.1\text{ \AA}$, which is
in line with the onset of multi-phonon inelastic processes, where fewer
electronic transitions are needed to reach the dissociation threshold. This
confirms a transition of the dominant dissociation mechanism from stepwise
vibrational ladder climbing to multi-quantum vibrational excitations. When
multi-quantum vibrational excitations become dominant, increasing the vibronic
coupling leads to the increased probability of the direct transition from the
lower-lying vibrational states to the continuum states. Therefore, fewer
electrons are needed to induce the dissociation and the dissociation rate
increases.
#### III.2.3 Lower bias voltage regime
Figure 12: Population dynamics for $\Delta Q=0.3\text{\AA}$ at 1.8 V. The
molecule-lead coupling is $\Gamma_{\rm L/R}=$ 0.05 eV. The upper and lower
panel corresponds to the charged and neutral state, respectively. For details,
see Fig. 5.
Finally, we consider the lower bias voltage regime before the onset of
resonant transport. In many cases studied experimentally, single-molecule
junctions are found to be rather stable at low bias voltage.Su _et al._
(2016) However, recent work has reported that bond rupture can also take place in
the off-resonant regime.Li _et al._ (2016)
We have performed a series of calculations for bias voltages $\Phi<2$ V and a
range of $\Delta Q$. At bias voltages $\Phi\lesssim 1.6$ V, the dissociation
rate is negligible over the time scale accessible with our simulations, which
employ direct time propagation. In this regime, the use of different reaction
rate approaches, based, e.g., on the flux correlation function formalism is
necessary, which will be the subject of future work.
Fig. 9 (b) shows the dissociation rate at a bias voltage of $\Phi=1.8$ V,
which is just below the threshold for resonant transport. In this regime the
dissociation rate is overall very small, but shows significant values for
$\Delta Q>0.25\text{ \AA}$. In order to analyze the dissociation mechanism in
this case, the corresponding population dynamics are shown in Fig. 12 for
$\Delta Q=0.3\text{ \AA}$. It can be seen that the molecule remains
predominantly in the neutral state in this regime. At short times (before 1
ps), the dissociation probability $P^{g}_{\infty}$ (green dashed line in the
lower panel) exhibits a step-like increase, which reflects the sudden switch-
on of bias voltage and molecule-lead coupling at the initial moment. After
this transient dynamics, through co-tunneling transport-induced vibrational
heating, a quasi-steady distribution of vibrationally excited states is formed
at about one picosecond. This steady distribution is independent of the
initial preparation. After the establishment of the quasi-steady
distribution, the dissociation probability $P^{g}_{\infty}$ increases steadily
over time. In this case of low current and strong vibrational relaxation due
to the electron-hole pair creation processes, dissociation is dominated by the
mechanism induced by multi-quantum vibrational excitations to the continuum
states. The energetic analysis suggests that at a bias voltage of $\Phi=1.8$
V, the lowest vibrationally excited state allowed to be excited into the
continuum states (via Feshbach resonance) is $v=7$. The required energy is
$\Delta E=E_{\rm D}-E^{g}_{7}=0.77$ eV, which is smaller than the chemical
potential of the left lead, $\mu_{\rm L}=0.9$ eV.
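This energetic criterion can be reproduced from the standard Morse level formula with the parameters quoted in the text ($E_{\rm D}=2.38$ eV, $\hbar\Omega_{0}=274$ meV); a minimal check:

```python
# Check of the Feshbach-resonance energetics at Phi = 1.8 V using the
# standard Morse level formula with E_D = 2.38 eV and hbar*Omega_0 = 0.274 eV.
E_D, hw = 2.38, 0.274

def morse_level(v):
    x = v + 0.5
    return hw * x - (hw * x) ** 2 / (4.0 * E_D)

dE = E_D - morse_level(7)      # energy missing from level v = 7 to threshold
print(round(dE, 2), dE < 0.9)  # ~0.77 eV, below mu_L = 0.9 eV
```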
### III.3 Further aspects
In this section, we study further aspects that are important to unravel the
mechanisms of current-induced dissociation in molecular junctions. This
includes the effects of the molecule-lead coupling strength, the influence of
vibrational relaxation induced by coupling to a phonon bath, as well as
situations where the considered bond shortens upon charging of the molecule,
i.e. $\Delta Q<0$.
#### III.3.1 Influence of molecule-lead coupling strength
Panels: a) $\Phi=$ 4 V; b) $\Delta Q=0.2$ Å; c) $\Phi=$ 4 V, $\Delta Q=0.2$ Å.
Figure 13: Dissociation rate $k_{\mathrm{diss}}$ as a function of the
molecule-lead coupling strength $\Gamma$ for different displacements $\Delta Q$
(a), bias voltages (b), and molecular vibrational frequencies (c). We assume a
symmetric molecule-lead coupling scenario, $\Gamma_{\rm L}=\Gamma_{\rm
R}=\Gamma$. The molecular vibrational frequency $\hbar\Omega_{0}=137$ meV in
panel (c) is obtained by setting the nuclear mass to $M=4$ amu but keeping the
Morse potential parameters $E_{\rm D}=2.38$ eV and $a=$1.028 $a_{0}^{-1}$
unchanged. For all other cases shown, the molecular frequency is
$\hbar\Omega_{0}=$ 274 meV.
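The frequency quoted in the caption can be checked against the harmonic frequency of a Morse potential, $\Omega_{0}=a\sqrt{2E_{\rm D}/M}$, so that $\Omega_{0}\propto M^{-1/2}$ and quadrupling the mass halves the frequency. The sketch assumes $M=1$ amu for the default case, which reproduces the quoted 274 meV:

```python
import math

# Harmonic frequency of a Morse oscillator: Omega_0 = a * sqrt(2*E_D/M),
# with E_D = 2.38 eV and a = 1.028/a_0 held fixed while the mass is varied.
hbar = 1.054571817e-34     # J s
amu = 1.66053907e-27       # kg
eV = 1.602176634e-19       # J
a0 = 5.29177210903e-11     # Bohr radius (m)

E_D = 2.38 * eV
a = 1.028 / a0             # Morse range parameter (1/m)

def hw_meV(M_amu):
    """hbar*Omega_0 in meV for a reduced mass of M_amu atomic mass units."""
    return hbar * a * math.sqrt(2.0 * E_D / (M_amu * amu)) / eV * 1e3

print(round(hw_meV(1.0)), round(hw_meV(4.0)))   # 274 and 137 meV
```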
So far, we have considered molecular junctions with weak coupling to the
electrodes. The molecule-lead coupling strength depends on the specific
molecule, the anchoring group and geometry as well as the electrode
material.Xin _et al._ (2019) For example, it has been reported that graphene
electrodes can provide strong covalent bonding to molecules.Sun _et al._
(2018); Leitherer, Papior, and Brandbyge (2019) In the following, we analyze
the influence of the molecule-lead coupling strength on dissociation dynamics
in the symmetric coupling scenario, $\Gamma_{\rm L}=\Gamma_{\rm R}$.
Fig. 13 shows the dissociation rate $k_{\mathrm{diss}}$ as a function of the
molecule-lead coupling strength $\Gamma_{\rm L}$ for different potential
surface displacements $\Delta Q$ and various bias voltages. In addition,
results for different fundamental vibrational frequencies $\hbar\Omega_{0}$,
obtained by varying the reduced mass $M$, are shown. The range of the
molecule-lead coupling $\Gamma_{\rm L/R}$ covers the transition from non-
adiabatic ($\Gamma_{\rm L/R}\ll\hbar\Omega_{0}$) to adiabatic ($\Gamma_{\rm
L/R}\gg\hbar\Omega_{0}$) transport. A turnover of the dissociation rate is
observed for all parameter sets.
Increasing the molecule-lead coupling can affect the dissociation dynamics in
different ways. On the one hand, increasing $\Gamma_{\rm L/R}$ leads to a
shorter resonance state lifetime and a larger current. On the other hand, it
also increases the adiabaticity of transport dynamics and facilitates
electron-hole pair creation processes.
The dissociation rate is determined by the number of electrons passing through
the molecule per unit time (i.e. the current) and the amount of energy
transferred per electron. When $\Gamma_{\rm L/R}\ll\hbar\Omega_{0}$, i.e. in
the non-adiabatic regime, the rate-determining factor is the current, which
increases with the molecule-lead coupling. In the adiabatic regime,
$\Gamma_{\rm L/R}\gg\hbar\Omega_{0}$, however, electrons tunnel through the
molecule much faster than the vibrational period such that the energy exchange
efficiency is very low. In this case, notwithstanding the increased current,
the energy transmitted to the molecule per tunneling electron decreases with
increasing $\Gamma_{\rm L/R}$. The trade-off between these two counteracting
effects leads to the turnover of the dissociation rate. The slight shift of
the turnover to a smaller $\Gamma$ with increasing displacement and decreasing
bias voltage is due to the electron-hole pair creation processes, which are
facilitated by stronger molecule-lead coupling.
As a side note, the results obtained for larger molecule-lead coupling
$\Gamma$ can be used to evaluate the range of validity of low-order kinetic
schemes.Härtle and Kulkarni (2015); Foti and Vázquez (2018) We compared the
results obtained using different hierarchical truncation tiers for the
parameters shown in Fig. 13 and found that when $\Gamma>\hbar\Omega_{0}/2$,
the first-tier results (equivalent to a second order perturbative treatment of
molecule-lead coupling) always overestimate the dissociation rate.
#### III.3.2 Vibrational relaxation due to coupling to a phonon bath
Panels: a) $\omega_{c}=3\Omega_{0}$; b) $\Delta Q=0.025$ Å; c) $\Delta Q=0.3$ Å.
Figure 14: Dissociation rate $k_{\mathrm{diss}}$ for a scenario which
includes vibrational relaxation due to the coupling to a phonon bath. (a)
Dissociation rate $k_{\mathrm{diss}}$ as a function of $\Delta Q$ for
$\lambda_{\rm ph}=0$ and 0.1 eV; the results without coupling to a phonon bath
($\lambda_{\rm ph}=0$) are shown by the dotted line. (b,c) Dissociation rate
$k_{\mathrm{diss}}$ as a function of $\lambda_{\rm ph}$ for $\Delta
Q=0.025\textrm{ \AA}$ (b) and $\Delta Q=0.3\textrm{ \AA}$ (c), respectively.
The molecule-lead coupling is fixed at $\Gamma_{\rm L}=\Gamma_{\rm R}=$ 0.05
eV and the bias voltage is $\Phi=4$ V.
Vibrational relaxation in molecular junctions can also be induced by coupling
of the considered reaction mode to other inactive modes (intramolecular
vibrational relaxation), the phonons of the leads, or a possible solvent
environment. Here, we consider the effect of such additional relaxation
processes on current-induced dissociation by coupling the reaction mode to a
bath of phonons characterized by a Lorentz-Drude spectral density, as
described in Sec. II.2. The influence of this coupling is determined by two
parameters, the coupling strength $\lambda_{\rm ph}$ and the characteristic
frequency of the phonon bath $\omega_{c}$. In the overview, Fig. 4 (a), it was
already shown that the coupling to a phonon bath strongly quenches heating and
dissociation. Here, we analyze this effect in more detail based on the data
depicted in Fig. 14.
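For orientation, one common convention of the Lorentz-Drude spectral density is $J(\omega)=2\lambda_{\rm ph}\,\omega\omega_{c}/(\omega^{2}+\omega_{c}^{2})$, which peaks at $\omega=\omega_{c}$ with $J(\omega_{c})=\lambda_{\rm ph}$; the exact prefactor convention of Sec. II.2 may differ, so the sketch below is only illustrative:

```python
import numpy as np

# Lorentz-Drude spectral density in one common convention (assumption; the
# prefactor convention of Sec. II.2 may differ):
#   J(w) = 2*lam*w*wc / (w^2 + wc^2),  peaked at w = wc with J(wc) = lam.
lam = 0.05             # coupling strength lambda_ph (eV)
wc = 3.0 * 0.274       # cut-off omega_c = 3*Omega_0, in energy units (eV)

def J(w):
    return 2.0 * lam * w * wc / (w**2 + wc**2)

w = np.linspace(1e-4, 5.0, 5001)
print(round(J(wc), 3), round(w[np.argmax(J(w))], 2))
```

In this form, $\lambda_{\rm ph}$ sets the overall damping strength and $\omega_{c}$ controls whether the bath responds faster or slower than the reaction mode.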
Fig. 14 (a) shows that the vibrational relaxation process induced by coupling
to the phonon bath causes a pronounced reduction of the dissociation rate, in
particular for small vibronic coupling $|\Delta Q|<0.05\textrm{ \AA}$. To
analyze this effect in more detail, Fig. 14 (b) and (c) depict the
dissociation rate as a function of the bath coupling parameter $\lambda_{\rm
ph}$ for $\Delta Q=0.025\textrm{ \AA}$ (b) and $\Delta Q=0.3\textrm{ \AA}$
(c). Two cut-off frequencies of the phonon bath are chosen, which are
smaller/larger than the vibrational frequency $\hbar\Omega_{0}$.
In the case of weak vibronic coupling, as shown in Fig. 14 (b), the
dissociation rate drops quickly to very small values upon increasing
$\lambda_{\rm ph}$, especially for a fast phonon bath,
$\omega_{c}=3\Omega_{0}$. We emphasize that, in contrast to the relaxation
effect due to electron-hole pair creation processes, this behavior is found to
be independent of the bias voltage. The remarkably effective suppression of
the dissociation is mainly due to the fact that in this regime, dissociation
is dominated by stepwise vibrational ladder climbing, which involves many
consecutive steps. Even if the rate of every step $k_{s}$ is only slightly
reduced by vibrational relaxation, the dissociation rate
$k_{\mathrm{diss}}\propto k_{s}^{n}$ can be many orders of magnitude smaller.
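A brief numerical illustration of this amplification, using exponents of the order fitted in Sec. III.2.2 ($n\approx 7$ to $60$):

```python
# Sensitivity of k_diss ∝ k_s^n to a modest reduction of the single-step
# rate k_s: a 10% reduction of k_s is raised to the n-th power.
reduction = 0.9
for n in (7, 20, 60):
    print(n, reduction ** n)
# For n = 60 the dissociation rate drops by nearly three orders of magnitude.
```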
For strong vibronic coupling, as shown in Fig. 14 (c) for $\Delta
Q=0.3\textrm{ \AA}$, the dissociation rate is reduced with increasing
$\lambda_{\rm ph}$ and $\omega_{c}$, but not as pronounced as in the weak
vibronic coupling case. We recall that dissociation in this regime is induced
by multi-quantum vibrational excitations. At the intermediate bias voltage of
4 V considered in Fig. 14 (c), more than a single electronic transition is
required for the molecule to finally reach the dissociation energy. As long as
the vibrational state of the molecule has not yet reached the dissociative
continuum, it is sensitive to vibrational relaxation, which reduces the
dissociation rate. For high bias voltages (e.g. $\Phi=8$ V, data not shown),
direct dissociation becomes the dominating process and the influence of
vibrational relaxation is even smaller.
#### III.3.3 Molecular junctions with negative displacement $\Delta Q$
Figure 15: Franck-Condon factors for the system with $\Delta Q=-0.1\textrm{
\AA}$ (a) and $\Delta Q=0.1\textrm{ \AA}$ (b), respectively. Population
dynamics for $\Delta Q=-0.1\textrm{ \AA}$ (c) and $\Delta Q=0.1\textrm{ \AA}$
(d), respectively. The bias voltage is $\Phi=2.6$ V and $\Gamma_{\rm L/R}=$
0.05 eV.
The charging of the molecule leads to a reorganization of the nuclear geometry.
Depending on the specific situation, this may result in bond stretching
($\Delta Q>0$) or compression ($\Delta Q<0$). In the above sections, we
focused on the case with positive $\Delta Q$. Here, we provide a brief
comparative discussion of the case of negative $\Delta Q$. To facilitate the
comparison, the dissociation rates for negative $\Delta Q$ are depicted in the
same plot as for positive $\Delta Q$ in Fig. 9.
It should be noted that within a harmonic model for the vibrational degrees of
freedom, the dynamics is independent of the sign of $\Delta Q$. Thus, the
dependence of the dynamics on the sign of $\Delta Q$ discussed below is also a
manifestation of the anharmonicity of the model. This was also found in a
previous study.Brisker and Peskin (2006)
The results in Fig. 9 show that the dissociation rates are rather insensitive
to the sign of $\Delta Q$ for larger bias voltages. For small to moderate
bias voltages, however, the dissociation rates for negative $\Delta Q$ are
found to be in general significantly larger than for positive $\Delta Q$. For
example, at bias voltage $\Phi=$2.6 V, the dissociation rate for $\Delta
Q=-0.1\textrm{ \AA}$ is more than three orders of magnitude larger than for
$\Delta Q=0.1\textrm{ \AA}$.
To explain the underlying mechanism in more detail, Fig. 15 (a) and (b) depict
the Franck-Condon matrices for $\Delta Q=-0.1\textrm{ \AA}$ and $0.1\textrm{
\AA}$, respectively. Both exhibit the characteristic pattern of anharmonic
models, i.e., an asymmetry with respect to the diagonal line. As an example
for a specific elementary process consider the charging of the molecule
accompanied by excitation to the sixth vibrationally excited state and a bias
voltage of $\Phi=$ 2.6 V. At this voltage, electron-hole pair creation
processes with respect to the left lead can be accompanied by the vibrational
deexcitation processes $n^{e}_{6}\rightarrow n^{g}_{v=0-4}$
($E^{e}_{6}-E^{g}_{v=0-4}>1.3$ eV). The transition probabilities of these
vibrational deexcitation processes for $\Delta Q=-0.1\textrm{ \AA}$ are
smaller than their counterparts for $\Delta Q=0.1\textrm{ \AA}$, as shown in
Fig. 15 (a) and (b). Moreover, the dissociation pathways mediated by Feshbach
resonances from the low-lying vibrationally excited states in the potential
surface of the charged state to the continuum states of the neutral molecule
are available for $\Delta Q=-0.1\textrm{ \AA}$, but are blocked for $\Delta
Q=0.1\textrm{ \AA}$. Thus, the significantly larger dissociation rate for
negative $\Delta Q$, compared to positive $\Delta Q$, is caused by less
efficient electron-hole pair creation processes and an increased transition
probability to the continuum states.
The above reasoning is confirmed by the population dynamics depicted in Fig.
15 (c) and (d). For $\Delta Q=-0.1\textrm{ \AA}$, due to the weaker electron-
hole pair creation cooling effect, a broad distribution among the vibrational
states is quickly formed and the dissociation is almost completed at 1 ps. In
contrast, for $\Delta Q=0.1\textrm{ \AA}$, the dissociation probability at 10
ps is less than 0.5%. In addition, we observe that for $\Delta
Q=-0.1\textrm{ \AA}$, the population of states above the dissociation barrier
in the neutral state, $P^{g}_{\rm continuum}$, shown as the black dotted line
in the lower panel of Fig. 15 (c), is higher than the population of high-lying
vibrational bound states $(n^{g}_{v}>8)$. This is a result of the population
transfer from the low-lying vibrationally excited states in the potential
surface of the charged molecule to continuum states of the neutral molecule,
which then leads to ultrafast dissociation. But for $\Delta Q=0.1\textrm{
\AA}$, this shortcut is blocked.
The above analysis clearly shows that in more realistic anharmonic models of
molecular junctions, in addition to the strength of the vibronic coupling, its
sign also plays an important role in the current-induced dissociation
dynamics.
### III.4 Time-dependent current-voltage characteristics and implications for
experiments
Panels: (a) $\lambda_{\rm ph}=0$ eV; (b) $\lambda_{\rm ph}=0.05$ eV; (c) $\Delta Q=0.025$ Å; (d) $\Delta Q=0.1$ Å; (e) $\Delta Q=0.3$ Å.
Figure 16: Time-dependent current-voltage characteristics. The parameters
$\Delta Q$ and $\lambda_{\rm ph}$ are given in the plot, other parameters of
the calculations are $\Gamma_{\rm L}=\Gamma_{\rm R}=$ 0.05 eV and
$\omega_{c}=3\Omega_{0}$. The vertical dashed lines in (c) - (e) indicate the
onset of resonant transport.
Current-voltage characteristics serve as an important tool for acquiring
information about charge transport in molecular junctions. For a molecular
junction which undergoes bond rupture, or a different structural change, the
conductance may vary over the time of the reactive process, as was observed in
a recent experiment on the picosecond timescale.Arielly _et al._ (2017) The
details depend on the specific process. Here, we analyze the change of the
current-voltage characteristic for a molecular junction, where the bond to a
side group ruptures and the conductance changes upon dissociation of the side
group, as described by the model introduced in Sec. II.1.
Fig. 16 shows time-dependent current-voltage characteristics for the cases of
weak and strong vibronic coupling, without (a) and with (b) coupling to a
phonon bath.
First, we consider cases without coupling to a phonon bath, depicted in Fig.
16 (a). For weak vibronic coupling, $\Delta Q=0.025\textrm{ \AA}$, transport
is dominated by elastic processes and the current-voltage characteristic at
short times ($t=20$ fs) resembles a typical IV curve of a resonant-level
model. For longer times, the current at higher bias voltages decreases,
resulting in a negative differential resistance feature in the current-voltage
characteristic. This decrease of the current is a result of the dissociation
process.
For strong vibronic coupling, $\Delta Q=0.3\textrm{ \AA}$, Franck-Condon
blockade reduces the current at the onset of the resonant transport
regime.Koch, Von Oppen, and Andreev (2006); Schinabeck _et al._ (2014) Due to
broadening and the nonequidistant level structure of the vibrational energies,
individual vibronic steps are not seen in the current-voltage characteristic.
Again, the current at higher bias voltages decreases for longer times. The
resulting negative differential resistance is much more pronounced than for
weak vibronic coupling. This is a result of the fast dissociation process in
this regime, which is almost complete within a few picoseconds.
Taking into account additional vibrational relaxation due to the coupling to a
phonon bath (Fig. 16 (b)), dissociation is suppressed for weak vibronic
coupling ($\Delta Q=0.025\textrm{ \AA}$) and thus the negative differential
resistance feature disappears. For strong vibronic coupling ($\Delta
Q=0.3\textrm{ \AA}$), on the other hand, the time-dependent current-voltage
characteristics are very similar to the case without coupling to a phonon
bath.
The behavior of the time-dependent current-voltage characteristics in the
voltage region at the onset of resonant transport is depicted in more detail
in Fig. 16 (c)-(e). For weak and strong vibronic coupling, the results show
again the decrease of the current at longer times for higher bias voltages due
to the dissociation process. Interestingly, this is not observed for
intermediate vibronic coupling. The reason for this different behavior is the
effective suppression of the dissociation due to electron-hole pair creation
processes in this parameter regime, as explained in Sec. III.2.2.
These results indicate that strategies that facilitate electron-hole pair
creation processes will help to further increase the stability of molecular
junctions at moderate bias voltages.Gelbwaser-Klimovsky _et al._ (2018);
Härtle _et al._ (2018) Other strategies to increase the stability include the
design of junctions with efficient coupling of the molecular vibrations to
electrode phonons or a solution environment and the use of anchoring groups
that provide strong molecule-lead coupling, such that the junction operates in
the adiabatic transport regime.
These findings may also be interesting in the context of recent experimental
studies which showed the change of the transport characteristics related to
bond rupture and structural changes.Li _et al._ (2015); Capozzi _et al._
(2016); Fung _et al._ (2019); Zang _et al._ (2020) Although the time-scales
observed in the experiments are significantly longer than in our study,Fung
_et al._ (2019) the basic relation between a structural change of the molecule
and the time-dependent change of the current-voltage characteristic should be
similar.
## IV Conclusion
We have investigated current-induced bond rupture in single-molecule junctions
employing a fully quantum mechanical method based on the HQME approach.
Extending our previous work,Erpenbeck _et al._ (2018a, 2020) we have
considered a model that includes more general potential energy surfaces,
accounting for both bound and continuum states of the charged molecule, as
well as vibrational relaxation processes induced by coupling of the
dissociative reaction mode to other inactive modes, the phonons of the leads
or a possible solution environment. The model also accounts for additional
dissociation channels via Feshbach resonances. Based on this model, we have
analyzed current-induced dissociation dynamics in a broad range of different
regimes, comprising off-resonant to resonant transport, weak to strong
vibronic coupling as well as non-adiabatic to adiabatic transport.
The study provides a comprehensive analysis of the reaction mechanisms
prevailing in the different regimes. Specifically, we found that for weak to
intermediate vibronic coupling, dissociation proceeds via current-induced
stepwise vibrational ladder climbing. In this case, dissociation is sensitive
to vibrational relaxation. For strong vibronic coupling, multi-quantum
vibrational excitations are favored. When the applied bias voltage is high
enough, the molecule can be directly excited into a continuum state and
dissociates. Otherwise, dissociation is induced by a few electronic
transitions. Because of fast dissociation in the continuum states,
dissociation is less sensitive to vibrational relaxation in this regime.
The analysis also revealed a turnover of the dissociation rate upon increase
of molecule-lead coupling, which arises mainly from the transition from non-
adiabatic to adiabatic transport. This shows that strong molecule-lead
coupling can stabilize a molecular junction. Moreover, the results showed that
the dissociation dynamics is affected by the sign of vibronic coupling, i.e.
it exhibits different characteristics depending on whether the charging of the
molecule leads to bond stretching or compression.
Finally, it is noted that the presented method can also be used to study other
processes of current-induced reaction dynamics, such as proton transfer or
isomerization,Hofmeister, Coto, and Thoss (2017); Weckbecker, Coto, and Thoss
(2017) which are important for the realization of molecular switches, diodes
or transistors. With further extensions, it may also be useful to investigate
more complex processes of current-induced chemistry in realistic systems. For
instance, the extension of the current model to higher dimensional systems is
possible by utilizing the low-storage matrix product state representation of
the hierarchical approach.Shi _et al._ (2018); Borrelli (2019); Yan _et al._
(2021)
## Data Availability Statement
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
## Acknowledgements
We thank C. Kaspar and J. Bätge for helpful discussions. This work was
supported by the German Research Foundation (DFG). Y.K. gratefully
acknowledges a Research Fellowship of the Alexander von Humboldt Foundation.
A.E. was supported by the Raymond and Beverly Sackler Center for Computational
Molecular and Materials Science, Tel Aviv University. U. P. wishes to
acknowledge the Freiburg Institute for Advanced Studies for its support and
stimulating atmosphere, Prof. Thoss and his group for their warm hospitality
and collaboration during his sabbatical stay in Freiburg, and the Israel
Science Foundation and the Israeli Ministry of Science and Education for
supporting this research. Furthermore, the authors acknowledge support by
the High Performance and Cloud Computing Group at the Zentrum für
Datenverarbeitung of the University of Tübingen, the state of Baden-
Württemberg through bwHPC and the German Research Foundation (DFG) through
grant no. INST 37/935-1 FUGG.
## References
* Cuevas and Scheer (2010) J. C. Cuevas and E. Scheer, _Molecular electronics: an introduction to theory and experiment_ (World Scientific, Singapore, 2010).
* Galperin, Ratner, and Nitzan (2007) M. Galperin, M. A. Ratner, and A. Nitzan, “Molecular transport junctions: vibrational effects,” J. Phys.: Condens. Matter 19, 103201 (2007).
* Bergfield and Ratner (2013) J. P. Bergfield and M. A. Ratner, “Forty years of molecular electronics: Non-equilibrium heat and charge transport at the nanoscale,” physica status solidi (b) 250, 2249–2266 (2013).
* Aradhya and Venkataraman (2013) S. V. Aradhya and L. Venkataraman, “Single-molecule junctions beyond electronic transport,” Nat. Nanotechnol. 8, 399 (2013).
* Bâldea (2016) I. Bâldea, _Molecular Electronics: An Experimental and Theoretical Approach_ (CRC Press, 2016).
* Su _et al._ (2016) T. A. Su, M. Neupane, M. L. Steigerwald, L. Venkataraman, and C. Nuckolls, “Chemical principles of single-molecule electronics,” Nat. Rev. Mater. 1, 16002 (2016).
* Thoss and Evers (2018) M. Thoss and F. Evers, “Perspective: Theory of quantum transport in molecular junctions,” J. Chem. Phys. 148, 030901 (2018).
* Evers _et al._ (2020) F. Evers, R. Korytár, S. Tewari, and J. M. van Ruitenbeek, “Advances and challenges in single-molecule electron transport,” Rev. Mod. Phys. 92, 035001 (2020).
* Persson and Avouris (1997) B. Persson and P. Avouris, “Local bond breaking via stm-induced excitations: the role of temperature,” Surf. Sci. 390, 45–54 (1997).
* Kim, Komeda, and Kawai (2002) Y. Kim, T. Komeda, and M. Kawai, “Single-molecule reaction and characterization by vibrational excitation,” Phys. Rev. Lett. 89, 126104 (2002).
* Koch _et al._ (2006) J. Koch, M. Semmelhack, F. Von Oppen, and A. Nitzan, “Current-induced nonequilibrium vibrations in single-molecule devices,” Phys. Rev. B 73, 155306 (2006).
* Huang _et al._ (2006) Z. Huang, B. Xu, Y. Chen, M. D. Ventra, and N. Tao, “Measurement of current-induced local heating in a single molecule junction,” Nano Lett. 6, 1240–1244 (2006).
* Huang _et al._ (2007) Z. Huang, F. Chen, R. D’agosta, P. A. Bennett, M. Di Ventra, and N. Tao, “Local ionic and electron heating in single-molecule junctions,” Nat. Nanotechnol. 2, 698 (2007).
* Schulze _et al._ (2008) G. Schulze, K. J. Franke, A. Gagliardi, G. Romano, C. Lin, A. Rosa, T. A. Niehaus, T. Frauenheim, A. Di Carlo, A. Pecchia, _et al._ , “Resonant electron heating and molecular phonon cooling in single c 60 junctions,” Phys. Rev. Lett. 100, 136801 (2008).
* Ioffe _et al._ (2008) Z. Ioffe, T. Shamai, A. Ophir, G. Noy, I. Yutsis, K. Kfir, O. Cheshnovsky, and Y. Selzer, “Detection of heating in current-carrying molecular junctions by raman scattering,” Nat. Nanotechnol. 3, 727–732 (2008).
* Sabater, Untiedt, and van Ruitenbeek (2015) C. Sabater, C. Untiedt, and J. M. van Ruitenbeek, “Evidence for non-conservative current-induced forces in the breaking of au and pt atomic chains,” Beilstein J. Nanotechnol. 6, 2338–2344 (2015).
* Li _et al._ (2015) H. Li, T. A. Su, V. Zhang, M. L. Steigerwald, C. Nuckolls, and L. Venkataraman, “Electric field breakdown in single molecule junctions,” J. Am. Chem. Soc. 137, 5028–5033 (2015).
* Li _et al._ (2016) H. Li, N. T. Kim, T. A. Su, M. L. Steigerwald, C. Nuckolls, P. Darancet, J. L. Leighton, and L. Venkataraman, “Mechanism for si–si bond rupture in single molecule junctions,” J. Am. Chem. Soc. 138, 16159–16164 (2016).
* Capozzi _et al._ (2016) B. Capozzi, J. Z. Low, J. Xia, Z.-F. Liu, J. B. Neaton, L. M. Campos, and L. Venkataraman, “Mapping the transmission functions of single-molecule junctions,” Nano Lett. 16, 3949–3954 (2016).
* Schinabeck (2018) C. Schinabeck, _Hierarchical quantum master equation approaches to nonequilibrium charge transport through single-molecule junctions_ , Ph.D. thesis, Universität Erlangen-Nürnberg (2018).
* Gelbwaser-Klimovsky _et al._ (2018) D. Gelbwaser-Klimovsky, A. Aspuru-Guzik, M. Thoss, and U. Peskin, “High voltage assisted mechanical stabilization of single-molecule junctions,” Nano Lett. (2018).
* Bi _et al._ (2020) H. Bi, C.-A. Palma, Y. Gong, K. Stallhofer, M. Nuber, C. Jing, F. Meggendorfer, S. Wen, C. Yam, R. Kienberger, _et al._ , “Electron–phonon coupling in current-driven single-molecule junctions,” J. Am. Chem. Soc. 142, 3384–3391 (2020).
* Peiris _et al._ (2020) C. R. Peiris, S. Ciampi, E. M. Dief, J. Zhang, P. J. Canfield, A. P. Le Brun, D. S. Kosov, J. R. Reimers, and N. Darwish, “Spontaneous s–si bonding of alkanethiols to si (111)–h: towards si–molecule–si circuits,” Chem. Sci. 20, 5246–5256 (2020).
* Ho (2002) W. Ho, “Single-molecule chemistry,” J. Chem. Phys. 117, 11033–11061 (2002).
* Stipe _et al._ (1997) B. Stipe, M. Rezaei, W. Ho, S. Gao, M. Persson, and B. Lundqvist, “Single-molecule dissociation by tunneling electrons,” Phys. Rev. Lett. 78, 4410 (1997).
* Huang _et al._ (2013) K. Huang, L. Leung, T. Lim, Z. Ning, and J. C. Polanyi, “Single-electron induces double-reaction by charge delocalization,” J. Am. Chem. Soc. 135, 6220–6225 (2013).
* Härtle _et al._ (2018) R. Härtle, C. Schinabeck, M. Kulkarni, D. Gelbwaser-Klimovsky, M. Thoss, and U. Peskin, “Cooling by heating in nonequilibrium nanosystems,” Phys. Rev. B 98, 081404 (2018).
* Kuperman, Nagar, and Peskin (2020) M. Kuperman, L. Nagar, and U. Peskin, “Mechanical stabilization of nanoscale conductors by plasmon oscillations,” Nano Lett. 20, 5531–5537 (2020).
* Li and Somorjai (2010) Y. Li and G. A. Somorjai, “Nanoscale advances in catalysis and energy applications,” Nano Lett. 10, 2289–2295 (2010).
* Kolasinski (2012) K. W. Kolasinski, _Surface science: foundations of catalysis and nanoscience_ (John Wiley & Sons, 2012).
* Seideman (2016) T. Seideman, _Current-driven phenomena in nanoelectronics_ (CRC Press, 2016).
* Galperin, Nitzan, and Ratner (2006) M. Galperin, A. Nitzan, and M. A. Ratner, “Resonant inelastic tunneling in molecular junctions,” Phys. Rev. B 73, 045314 (2006).
* Ryndyk, Hartung, and Cuniberti (2006) D. Ryndyk, M. Hartung, and G. Cuniberti, “Nonequilibrium molecular vibrons: An approach based on the nonequilibrium green function technique and the self-consistent born approximation,” Phys. Rev. B 73, 045420 (2006).
* Benesch _et al._ (2008) C. Benesch, M. Cizek, J. Klimeš, I. Kondov, M. Thoss, and W. Domcke, “Vibronic effects in single molecule conductance: First-principles description and application to benzenealkanethiolates between gold electrodes,” J. Phys. Chem. C 112, 9880–9890 (2008).
* Härtle and Thoss (2011) R. Härtle and M. Thoss, “Resonant electron transport in single-molecule junctions: Vibrational excitation, rectification, negative differential resistance, and local cooling,” Phys. Rev. B 83, 115414 (2011).
* Schinabeck, Härtle, and Thoss (2018) C. Schinabeck, R. Härtle, and M. Thoss, “Hierarchical quantum master equation approach to electronic-vibrational coupling in nonequilibrium transport through nanosystems: Reservoir formulation and application to vibrational instabilities,” Phys. Rev. B 97, 235429 (2018).
* Erpenbeck _et al._ (2016) A. Erpenbeck, R. Härtle, M. Bockstedte, and M. Thoss, “Vibrationally dependent electron-electron interactions in resonant electron transport through single-molecule junctions,” Phys. Rev. B 93, 115421 (2016).
* Härtle and Kulkarni (2015) R. Härtle and M. Kulkarni, “Effect of broadening in the weak-coupling limit of vibrationally coupled electron transport through molecular junctions and the analogy to quantum dot circuit qed systems,” Phys. Rev. B 91, 245429 (2015).
* Dzhioev and Kosov (2011) A. A. Dzhioev and D. Kosov, “Kramers problem for nonequilibrium current-induced chemical reactions,” J. Chem. Phys. 135, 074701 (2011).
* Dzhioev, Kosov, and Von Oppen (2013) A. A. Dzhioev, D. S. Kosov, and F. Von Oppen, “Out-of-equilibrium catalysis of chemical reactions by electronic tunnel currents,” J. Chem. Phys. 138, 134103 (2013).
* Pozner, Lifshitz, and Peskin (2014) R. Pozner, E. Lifshitz, and U. Peskin, “Charge transport-induced recoil and dissociation in double quantum dots,” Nano Lett. 14, 6244–6249 (2014).
* Erpenbeck _et al._ (2018a) A. Erpenbeck, C. Schinabeck, U. Peskin, and M. Thoss, “Current-induced bond rupture in single-molecule junctions,” Phys. Rev. B 97, 235452 (2018a).
* Foti and Vázquez (2018) G. Foti and H. Vázquez, “Origin of vibrational instabilities in molecular wires with separated electronic states,” J. Phys. Chem. Lett. 9, 2791–2796 (2018).
* Lu, Brandbyge, and Hedegård (2010) J.-T. Lu, M. Brandbyge, and P. Hedegård, “Blowing the fuse: Berry’s phase and runaway vibrations in molecular conductors,” Nano letters 10, 1657–1663 (2010).
* Lü, Hedegård, and Brandbyge (2011) J.-T. Lü, P. Hedegård, and M. Brandbyge, “Laserlike vibrational instability in rectifying molecular conductors,” Phys. Rev. Lett. 107, 046801 (2011).
* Lü _et al._ (2012) J.-T. Lü, M. Brandbyge, P. Hedegård, T. N. Todorov, and D. Dundas, “Current-induced atomic dynamics, instabilities, and raman signals: Quasiclassical langevin equation approach,” Phys. Rev. B 85, 245444 (2012).
* Preston, Kershaw, and Kosov (2020) R. J. Preston, V. F. Kershaw, and D. S. Kosov, “Current-induced atomic motion, structural instabilities, and negative temperatures on molecule-electrode interfaces in electronic junctions,” Phys. Rev. B 101, 155415 (2020).
* Preston, Gelin, and Kosov (2021) R. J. Preston, M. F. Gelin, and D. S. Kosov, “First-passage time theory of activated rate chemical processes in electronic molecular junctions,” J. Chem. Phys. 154, 114108 (2021).
* Domcke (1991) W. Domcke, “Theory of resonance and threshold effects in electron-molecule collisions: The projection-operator approach,” Phys. Rep. 208, 97–188 (1991).
* Gertitschke and Domcke (1993) P. Gertitschke and W. Domcke, “Time-dependent wave-packet description of dissociative electron attachment,” Phys. Rev. A 47, 1031 (1993).
* Čížek, Horáček, and Domcke (1999) M. Čížek, J. Horáček, and W. Domcke, “Associative detachment, dissociative attachment, and vibrational excitation of hcl by low-energy electrons,” Phys. Rev. A 60, 2873–2881 (1999).
* Gallup and Fabrikant (2011) G. A. Gallup and I. I. Fabrikant, “Vibrational feshbach resonances in dissociative electron attachment to uracil,” Phys. Rev. A 83, 012706 (2011).
* Brandbyge _et al._ (1995) M. Brandbyge, P. Hedegård, T. Heinz, J. Misewich, and D. Newns, “Electronically driven adsorbate excitation mechanism in femtosecond-pulse laser desorption,” Phys. Rev. B 52, 6042 (1995).
* Saalfrank (2006) P. Saalfrank, “Quantum dynamical approach to ultrafast molecular desorption from surfaces,” Chem. Rev. 106, 4116–4159 (2006).
* Kim _et al._ (2015) Y. Kim, K. Motobayashi, T. Frederiksen, H. Ueba, and M. Kawai, “Action spectroscopy for single-molecule reactions–experiments and theory,” Prog. Surf. Sci. 90, 85–143 (2015).
* Frederiksen, Paulsson, and Ueba (2014) T. Frederiksen, M. Paulsson, and H. Ueba, “Theory of action spectroscopy for single-molecule reactions induced by vibrational excitations with stm,” Phys. Rev. B 89, 035427 (2014).
* Erpenbeck _et al._ (2020) A. Erpenbeck, Y. Ke, U. Peskin, and M. Thoss, “Current-induced dissociation in molecular junctions beyond the paradigm of vibrational heating: The role of antibonding electronic states,” Phys. Rev. B 102, 195421 (2020).
* Halstead and Holloway (1990) D. Halstead and S. Holloway, “The influence of potential energy surface topologies on the dissociation of h2,” J. Chem. Phys. 93, 2859–2870 (1990).
* Schinabeck _et al._ (2016) C. Schinabeck, A. Erpenbeck, R. Härtle, and M. Thoss, “Hierarchical quantum master equation approach to electronic-vibrational coupling in nonequilibrium transport through nanosystems,” Phys. Rev. B 94, 201407 (2016).
* Weiss (2012) U. Weiss, _Quantum dissipative systems_ , Vol. 13 (World scientific, 2012).
* Ilk and Makri (1994) G. Ilk and N. Makri, “Real time path integral methods for a system coupled to an anharmonic bath,” J. Chem. Phys. 101, 6708–6716 (1994).
* Joutsuka and Ando (2011) T. Joutsuka and K. Ando, “Vibrational spectroscopy and relaxation of an anharmonic oscillator coupled to harmonic bath,” J. Chem. Phys. 134, 204511 (2011).
* Härtle _et al._ (2013) R. Härtle, G. Cohen, D. Reichman, and A. Millis, “Decoherence and lead-induced interdot coupling in nonequilibrium electron transport through interacting quantum dots: A hierarchical quantum master equation approach,” Phys. Rev. B 88, 235426 (2013).
* Härtle _et al._ (2015) R. Härtle, G. Cohen, D. Reichman, and A. Millis, “Transport through an anderson impurity: Current ringing, nonlinear magnetization, and a direct comparison of continuous-time quantum monte carlo and hierarchical quantum master equations,” Phys. Rev. B 92, 085430 (2015).
* Xu _et al._ (2017) M. Xu, L. Song, K. Song, and Q. Shi, “Convergence of high order perturbative expansions in open system quantum dynamics,” J. Chem. Phys. 146, 064102 (2017).
* Trushechkin (2019) A. Trushechkin, “Higher-order corrections to the redfield equation with respect to the system-bath coupling based on the hierarchical equations of motion,” Lobachevskii J. Math. 40, 1606–1618 (2019).
* Tanimura and Kubo (1989) Y. Tanimura and R. Kubo, “Time evolution of a quantum system in contact with a nearly gaussian-markoffian noise bath,” J. Phys. Soc. Jpn. 58, 101–114 (1989).
* Tanimura (2006) Y. Tanimura, “Stochastic liouville, langevin, fokker–planck, and master equation approaches to quantum dissipative systems,” J. Phys. Soc. Jpn. 75, 082001 (2006).
* Jin _et al._ (2007) J. Jin, S. Welack, J. Luo, X.-Q. Li, P. Cui, R.-X. Xu, and Y. Yan, “Dynamics of quantum dissipation systems interacting with fermion and boson grand canonical bath ensembles: Hierarchical equations of motion approach,” J. Chem. Phys. 126, 134113 (2007).
* Jin, Zheng, and Yan (2008) J. Jin, X. Zheng, and Y. Yan, “Exact dynamics of dissipative electronic systems and quantum transport: Hierarchical equations of motion approach,” J. Chem. Phys. 128, 234703 (2008).
* Zheng _et al._ (2009) X. Zheng, J. Jin, S. Welack, M. Luo, and Y. Yan, “Numerical approach to time-dependent quantum transport and dynamical kondo transition,” J. Chem. Phys. 130, 164708 (2009).
* Yan (2014) Y. Yan, “Theory of open quantum systems with bath of electrons and phonons and spins: Many-dissipaton density matrixes approach,” J. Chem. Phys. 140, 054105 (2014).
* Ye _et al._ (2016) L. Ye, X. Wang, D. Hou, R.-X. Xu, X. Zheng, and Y. Yan, “Heom-quick: a program for accurate, efficient, and universal characterization of strongly correlated quantum impurity systems,” WIREs Comput Mol Sci 6, 608–638 (2016).
* Wenderoth, Bätge, and Härtle (2016) S. Wenderoth, J. Bätge, and R. Härtle, “Sharp peaks in the conductance of a double quantum dot and a quantum-dot spin valve at high temperatures: A hierarchical quantum master equation approach,” Phys. Rev. B 94, 121303 (2016).
* Dou _et al._ (2018) W. Dou, C. Schinabeck, M. Thoss, and J. E. Subotnik, “A broadened classical master equation approach for treating electron-nuclear coupling in non-equilibrium transport,” J. Chem. Phys. 148, 102317 (2018).
* Tanimura (2020) Y. Tanimura, “Numerically “exact” approach to open quantum dynamics: The hierarchical equations of motion (heom),” J. Chem. Phys. 153, 020901 (2020).
* Erpenbeck and Thoss (2019) A. Erpenbeck and M. Thoss, “Hierarchical quantum master equation approach to vibronic reaction dynamics at metal surfaces,” J. Chem. Phys. 151, 191101 (2019).
* Bätge _et al._ (2021) J. Bätge, Y. Ke, C. Kaspar, and M. Thoss, “Nonequilibrium open quantum systems with multiple bosonic and fermionic environments: A hierarchical quantum master equation approach,” arXiv preprint arXiv:2102.09484 (2021).
* Hu, Xu, and Yan (2010) J. Hu, R.-X. Xu, and Y. Yan, “Communication: Padé spectrum decomposition of fermi function and bose function,” J. Chem. Phys. 133, 101106 (2010).
* Hu _et al._ (2011) J. Hu, M. Luo, F. Jiang, R.-X. Xu, and Y. Yan, “Padé spectrum decompositions of quantum distribution functions and optimal hierarchical equations of motion construction for quantum open systems,” J. Chem. Phys. 134, 244106 (2011).
* Cui _et al._ (2019) L. Cui, H.-D. Zhang, X. Zheng, R.-X. Xu, and Y. Yan, “Highly efficient and accurate sum-over-poles expansion of fermi and bose functions at near zero temperatures: Fano spectrum decomposition scheme,” J. Chem. Phys. 151, 024110 (2019).
* Abe, Yamashita, and Saalfrank (2003) A. Abe, K. Yamashita, and P. Saalfrank, “Stm and laser-driven atom switch: An open-system density-matrix study of h/si (100),” Phys. Rev. B 67, 235411 (2003).
* Tang _et al._ (2015) Z. Tang, X. Ouyang, Z. Gong, H. Wang, and J. Wu, “Extended hierarchy equation of motion for the spin-boson model,” J. Chem. Phys. 143, 224112 (2015).
* Rahman and Kleinekathöfer (2019) H. Rahman and U. Kleinekathöfer, “Chebyshev hierarchical equations of motion for systems with arbitrary spectral densities and temperatures,” J. Chem. Phys. 150, 244104 (2019).
* Erpenbeck _et al._ (2018b) A. Erpenbeck, C. Hertlein, C. Schinabeck, and M. Thoss, “Extending the hierarchical quantum master equation approach to low temperatures and realistic band structures,” J. Chem. Phys. 149, 064106 (2018b).
* Ishizaki and Tanimura (2005) A. Ishizaki and Y. Tanimura, “Quantum dynamics of system strongly coupled to low-temperature colored noise bath: Reduced hierarchy equations approach,” J. Phys. Soc. Jpn. 74, 3131–3134 (2005).
# Stranger than Metals

Philip W. Phillips, Department of Physics and Institute for Condensed Matter Theory, University of Illinois, 1110 W. Green Street, Urbana, IL 61801

Nigel E. Hussey, H. H. Wills Physics Laboratory, University of Bristol, Tyndall Avenue, Bristol BS8 1TL, United Kingdom; High Field Magnet Laboratory (HFML-EMFL) and Institute for Molecules and Materials, Radboud University, Toernooiveld 7, 6525 ED Nijmegen, Netherlands

Peter Abbamonte, Department of Physics, University of Illinois, 1110 W. Green Street, Urbana, IL 61801
###### Abstract
Although the resistivity in traditional metals increases with temperature, its
$T$ dependence vanishes at low or high temperature, albeit for different
reasons. Here, we review a class of materials, known as ‘strange’ metals, that
can violate both principles. In materials exhibiting such behavior, the change
in slope of the resistivity as the mean free path drops below the lattice
constant, or as $T\rightarrow 0$, can be imperceptible, suggesting complete
continuity between the charge carriers at low and high $T$. Since particles
cannot scatter at length scales shorter than the interatomic spacing, strange
metallicity calls into question the relevance of locality and a particle
picture of the underlying current. This review focuses on transport and
spectroscopic data on candidate strange metals with an eye to isolate and
identify a unifying physical principle. Special attention is paid to quantum
criticality, Planckian dissipation, Mottness, and whether a new gauge
principle, which has a clear experimental signature, is needed to account for
the non-local transport seen in strange metals. For the cuprates, strange
metallicity is shown to track the superfluid density, thereby making a theory
of this state the primary hurdle in solving the riddle of high-temperature
superconductivity.
To understand the essential tension between quantum mechanics and gravity,
simply imagine two electrons impinging on the event horizon of a black hole.
While classical gravity predicts that they meet at the center, quantum
mechanics forbids this should the electrons have the same spin. In essence,
classical gravity has no way of preserving Pauli exclusion. Of course
replacing classical general relativity with a quantum theory of gravity at
small enough scales resolves the problem, but what is this scale? In 1899,
Planck formulated a universal length now regarded as the scale below which a
quantum theory of gravity supplants its classical counterpart. The Planck
scale,
$\displaystyle\ell_{P}=\sqrt{\frac{\hbar G}{c^{3}}},$ (1)
is pure dimensional analysis on three fundamental constants: the speed of
light, $c$, Newton’s gravitational constant, $G$, and the quantum of
uncertainty, $\hbar$, Planck’s constant, $h$, divided by $2\pi$. This leads
naturally to a Planck time as the ratio of the Planck length to the speed of
light, $\ell_{P}/c$. Such a Planckian analysis can be extended equally to
many-body systems in contact with a heat bath. All that is necessary is to
include the temperature $T$. A similar dimensional analysis then leads to
$\displaystyle\tau_{P}=\frac{\hbar}{k_{B}T}$ (2)
as the shortest time for heat loss in a many-body system obeying quantum
mechanics with $k_{B}$, Boltzmann’s constant. As no system parameters enter
$\tau_{P}$, this quantity occupies a similar fundamental role in analogy with
the Planck length and is referred to as the Planckian dissipation time.
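The dimensional analysis above is easy to check numerically. A minimal sketch, with CODATA constant values hard-coded for self-containment:

```python
import math

# Fundamental constants (SI units, CODATA values)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # Newton's gravitational constant, m^3 kg^-1 s^-2
C    = 2.99792458e8      # speed of light, m/s
KB   = 1.380649e-23      # Boltzmann constant, J/K

def planck_length():
    """Eq. (1): l_P = sqrt(hbar * G / c^3)."""
    return math.sqrt(HBAR * G / C**3)

def planck_time():
    """Planck time: l_P / c."""
    return planck_length() / C

def planckian_dissipation_time(T):
    """Eq. (2): tau_P = hbar / (k_B * T), the shortest dissipation
    time for a many-body system at temperature T."""
    return HBAR / (KB * T)

print(f"l_P   = {planck_length():.3e} m")                  # ~1.6e-35 m
print(f"t_P   = {planck_time():.3e} s")                    # ~5.4e-44 s
print(f"tau_P = {planckian_dissipation_time(300):.3e} s")  # ~2.5e-14 s at 300 K
```

Note the separation of scales: at room temperature the Planckian dissipation time is tens of femtoseconds, some thirty orders of magnitude longer than the Planck time, yet both follow from the same style of dimensional analysis.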
Although Eq. (2) has had previous incarnations matsubara ; chn89 , in the
realm of charge transport, it defines the time scale for scale-invariant or
Planckian dissipation zaanen04 . Scale-invariance follows because there is no
scale other than temperature appearing in $\tau_{P}$. Achieving such scale
invariance necessitates a highly entangled many-body state. Such a state would
lead to a breakdown of a local single-particle description and the advent of new
collective non-local entities as the charge carriers. Precisely what the new
propagating degrees of freedom are is the key mystery of the strange metal.
While the Planck scale $\ell_{P}$ requires high-energy accelerators much
beyond anything now in use, such is not the case with physics at the Planckian
dissipation limit. Early table-top experiments on cuprate superconductors, for
example, revealed a ‘strange metal’ regime defined by a robust $T$-linear
resistivity extending to the highest temperatures measured gurvitch87 ;
martin90 ; takagi92 (see Fig. 1), a possible harbinger of Planckian
dissipation. Recall that in a Fermi liquid, the conductivity can be well
described by a Drude formula,
$\displaystyle\sigma=\frac{n_{e}e^{2}}{m}\tau_{\rm tr}$ (3)
where $n_{e}$ is the charge carrier density, $e$ and $m$ the charge and mass
of an electron, respectively, and the transport lifetime
$\displaystyle\tau_{\rm tr}=\frac{\hbar
E_{F}}{(k_{B}T)^{2}}=\frac{E_{F}}{k_{B}T}\tau_{P},$ (4)
contains the Fermi energy $E_{F}$ of the quasiparticles. No such energy scale
appears in Eq. (2). If the scattering rate in cuprates is directly
proportional to the resistivity, as it is in simple metals, $T$-linear
resistivity is equivalent to scale-invariant Planckian dissipation only if
$\tau_{tr}=\alpha_{1}\tau_{P}$ with $\alpha_{1}\sim 1$. While this state of
affairs seems to be realized in a host of correlated metals, including the
cuprates marel03 ; cooper09 ; legros19 ; bruin13 , questions that deserve
further deliberation are how accurately is $\alpha_{1}$ known and what are the
assumptions that go into its determination? Regardless of the possible
relationship with Planckian dissipation, what makes $T$-linear resistivity in
the cuprates truly novel is its persistence – from mK temperatures (in both
the electron- and hole-doped cuprates) fournier98 ; mackenzie96b up to 1000 K
(in the hole-doped cuprates) gurvitch87 ; takagi92 – and its omnipresence,
the strange metal regime dominating large swathes of the temperature vs.
doping phase diagram nagaosa92 . In normal metals iofferegel ; gurvitch81 as
well as some heavy fermions husseyMIR , the resistivity asymptotically
approaches a saturation value commensurate with the mean-free-path $\ell$
becoming comparable with the interatomic spacing $a$ – the minimum length over
which a Bloch wave and its associated Fermi velocity and wave vector can be
defined. In many correlated metals – collectively referred to as ‘bad metals’ –
$\ell<a$ at high $T$, thereby violating the so-called Mott-Ioffe-Regel (MIR)
limit iofferegel ; mott ; husseyMIR ; martin90 ; takagi92 ; hussey11 .
Remarkably, no saturation occurs in these bad metals across the MIR threshold,
implying that the whole notion of a Fermi velocity of quasiparticles breaks
down at high $T$. In certain cases, an example of which is shown in Fig. 1,
there is no discernible change in slope as the MIR limit is exceeded. While
this circumstance occurs only in a narrow doping window (in cuprates) hussey11
, such continuity does suggest that, even at low $T$, quasiparticles emkiv95
cannot be the effective propagating degrees of freedom. Evidently, in strongly
correlated electron matter, the current-carrying degrees of freedom in the IR
need not have a particle interpretation. Precisely what the charge carriers
are and the experimental delineation of the strange metal will be the subject
of this review.
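The connection between Eqs. (3) and (4) and $T$-linear resistivity can be made concrete with a short sketch. The carrier density and mass below are illustrative placeholders, not fitted values for any particular material:

```python
# Sketch: T-linear resistivity from the Drude formula (Eq. 3) with a
# Planckian transport time tau_tr = alpha_1 * hbar/(k_B * T).
HBAR = 1.054571817e-34   # J*s
KB   = 1.380649e-23      # J/K
E    = 1.602176634e-19   # elementary charge, C
ME   = 9.1093837015e-31  # electron mass, kg

def drude_resistivity(T, n_e=1e27, m=ME, alpha_1=1.0):
    """rho = m / (n_e * e^2 * tau_tr), with tau_tr = alpha_1 * hbar/(k_B*T).
    n_e, m, alpha_1 are illustrative parameters, not material-specific fits."""
    tau_tr = alpha_1 * HBAR / (KB * T)
    return m / (n_e * E**2 * tau_tr)

# A Planckian scattering time makes rho strictly linear in T:
# the slope d(rho)/dT = alpha_1 * m * k_B / (n_e * e^2 * hbar).
r100, r200 = drude_resistivity(100.0), drude_resistivity(200.0)
print(r200 / r100)  # -> 2.0 (T-linear)
```

The point of the sketch is the one made in the text: once $\tau_{\rm tr}=\alpha_{1}\tau_{P}$, the only material-dependent input is the prefactor, so extracting $\alpha_{1}$ from data requires knowing $n_{e}$ and $m$, which is where the assumptions enter.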
Over time, the label ‘strange metal’ has seemingly become ubiquitous, used to
describe any metallic system whose transport properties display behavior that
is irreconcilable with conventional Fermi-liquid or Boltzmann transport
theory. This catch-all phraseology, however, is unhelpful as it fails to
differentiate between the various types of non-Fermi-liquid behavior observed,
some of which deserve special deliberation on their own. In this review, we
attempt to bring strange metal phenomenology into sharper focus, by addressing
a number of pertinent questions. Does the term refer to the resistive behavior
of correlated electron systems at high or low temperatures or both? Does it
describe any $T$-linear resistivity associated with the Planckian timescale,
or something unique? Does it describe the physics of a doped Mott insulator or
the physics associated with quantum criticality (whose underlying origins may
or may not include Mottness as a key ingredient)? Finally, does anything local
carry the current and if not, does explicating the propagating degrees of
freedom in the strange metal require a theory as novel as quantum gravity?
Figure 1: In-plane resistivity of La2-xSrxCuO4 ($x$ = 0.21, adapted from Ref.
cooper09 ; hussey11 ). The dotted points are extrapolated from high-field
magnetoresistance data cooper09 . The shaded area shows the Mott-Ioffe-Regel
(MIR) boundary where the mean-free-path becomes comparable to the
interatomic spacing.
## I Is Strange Metallicity Ubiquitous?
Table 1: Summary of the dc transport properties of various strange metal
candidates. The first column identifies the candidate compound or family of
compounds. For the hole-doped cuprates, underdoped (UD), optimally doped (OP)
and overdoped (OD) compounds are treated separately, but individual compounds
within each sub-set are not listed as their transport properties are found to
be generic. For the electron-doped cuprates, only La2-xCexCuO4 is selected
since this is the material for which all relevant properties have been
studied, though the Pr- and Nd-based sister compounds do show similar
behavior. ‘MATBG’ stands for magic-angle twisted bilayer graphene. The second
column considers bad metallic behavior, though here a $\checkmark$ mark refers
only to those materials that exhibit $T$-linear resistivity beyond the Mott-
Ioffe-Regel (MIR) limit. Systems identified with a $\times$ show either a
tendency towards saturation or a marked reduction in slope near the MIR limit.
$\checkmark$ marks in the third column identify systems that at a singular
point in their respective phase diagram(s), exhibit $T$-linear resistivity
down to the lowest temperatures studied thus far. The ‘extended
criticality’ heading for column 4 refers to systems where a predominant
$T$-linear resistivity at low-$T$ extends over a finite region of the phase
diagram. Column 5 considers systems that exhibit a $T^{2}$ dependence of the
inverse Hall angle cot$\Theta_{\rm H}$ in the same temperature range where
$\rho(T)$ is $T$-linear. Compounds satisfying the ‘Modified Kohler’s’ label in
column 6 have a low-field magnetoresistance (MR), defined as
$[\rho(H,T)-\rho(0,T)]/\rho(0,T)$, that exhibits a similar $T$-dependence to
$\tan^{2}\Theta_{\rm H}$. The last two columns inspect the high-field MR
behavior of strange metal candidates. Note that the observation of a
$H$-linear MR at high fields does not imply the form of the MR over all fields
and temperatures exhibits quadrature scaling. La2-xSrxCuO4, for example,
displays simultaneous $H$\- and $T$-linearity but no quadrature scaling. The *
marks for FeSe1-xSx highlight the fact that the $H$-linear/quadrature MR seen
in this family coexists with a more conventional MR contribution, indicating
the presence of both strange metal and Fermi-liquid-like components in the dc
transport. The ** marks alongside YbAlB4 highlight the fact that while
$T$-linear resistivity is observed over a wide pressure range, its limiting
low-$T$ dependence is $T^{1.5}$. Finally, dash marks indicate where, as yet,
there have been no reports confirming or otherwise the considered behavior.
| $\rho\propto T$ | $\rho\propto T$ | Extended | cot$\Theta_{\rm H}\propto T^{2}$ | Modified Kohler’s | $H$-linear MR | Quadrature
---|---|---|---|---|---|---|---
| as $T$$\rightarrow\infty$ | as $T$$\rightarrow$ 0 | criticality | (at low $H$) | (at low $H$) | (at high $H$) | MR
UD $p$-cuprates | $\checkmark$ takagi92 | $\times$ proust16 | $\times$ barisic13 | $\checkmark$ carrington92 | $\checkmark$ chan14 | - | -
OP $p$-cuprates | $\checkmark$ gurvitch87 | - | - | $\checkmark$ chien91 | $\checkmark$ harris95 | $\checkmark$ giraldo18 | $\times$ boyd19
OD $p$-cuprates | $\checkmark$ takagi92 | $\checkmark$ cooper09 | $\checkmark$ cooper09 | $\checkmark$ manako92 | $\times$ ayres20 | $\checkmark$ ayres20 | $\checkmark$ ayres20
La2-xCexCuO4 | $\times$ poniatowski20 | $\checkmark$ jin11 | $\checkmark$ jin11 | $\times$ li07 | $\times$ poniatowski21 | $\checkmark$ sarkar18 | $\times$ sarkar18
Sr2RuO4 | $\checkmark$ tyler98 | $\times$ hussey98 | $\times$ barber18 | $\times$ mackenzie96 | $\times$ hussey98 | $\times$ hussey98 | $\times$ hussey98
Sr3Ru2O7 | $\checkmark$ bruin13 | $\checkmark$ bruin13 | $\times$ bruin13 | $\times$ | - | - | -
FeSe1-xSx | $\times$ kasahara14 | $\checkmark$ licci19a | $\times$ licci19a | $\checkmark$ huang20 | $\checkmark$ huang20 | $\checkmark$* licci19b | $\checkmark$* licci19b
BaFe2(As1-xPx)2 | $\times$ hu18 | $\checkmark$ analytis14 | $\times$ analytis14 | - | $\checkmark$ kasahara10 | $\checkmark$ hayes16 | $\checkmark$ hayes16
Ba(Fe1/3Co1/3Ni1/3)2As2 | - | $\checkmark$ nakajima20 | $\times$ nakajima20 | - | - | $\checkmark$ nakajima20 | $\checkmark$ nakajima20
YbRh2Si2 | $\times$ trovarelli00 | $\checkmark$ custers03 | $\checkmark$ custers10 | $\checkmark$ paschen04 | - | - | -
YbAlB4 | $\times$ tomita15 | $\checkmark^{**}$ tomita15 | $\checkmark^{**}$ tomita15 | - | - | - | -
CeCoIn5 | $\times$ nakajima07 | $\checkmark$ bianchi03 | $\times$ bianchi03 | $\checkmark$ nakajima07 | $\checkmark$ nakajima07 | - | -
CeRh6Ge4 | $\times$ shen20 | $\checkmark$ shen20 | $\times$ shen20 | - | - | - | -
(TMTSF)2PF6 | - | $\checkmark$ doiron09 | $\checkmark$ doiron09 | - | - | - | -
MATBG | $\checkmark$ polshyn19 | $\checkmark$ cao20 | $\checkmark$ cao20 | $\checkmark$ lyu20 | - | - | -
In addressing this question, we must first acknowledge the many definitions of
strange metallic behavior that exist, the simplest being a material hosting a
metallic-like resistivity in the absence of quasiparticles. A more precise, if
empirical, definition centres on the $T$-linear resistivity, specifically one
that is distinguishable from that manifest in simple metals and attributed to
electron-phonon scattering. For a metal to be classified as strange, the
$T$-linearity must extend far beyond the typical bounds associated with
phonon-mediated resistivity. At low $T$, this is typically one third of the
Debye temperature, while at high $T$, it is the point at which the magnitude of the
resistivity reaches roughly half the value commensurate with the MIR
limit. A sub-set of correlated metals, such as SrRuO3 allen96 and Sr2RuO4
tyler98 , exhibit $T$-linear resistivity at high-$T$ with a magnitude that
clearly violates the MIR limit, but as the system cools down, conventional
Fermi-liquid behavior is restored mackenzie98 ; hussey98 . Hence, while they
are bona fide bad metals – exhibiting metallic resistivity beyond the MIR
limit – they do not classify as strange gunnarsson03 ; husseyMIR .
Another subset, identified here as quantum critical metals, exhibit $T$-linear
resistivity down to the lowest temperatures studied, but only at a singular
quantum critical point (QCP) in their phase diagram associated with a
continuous quantum phase transition to a symmetry broken phase that occurs at
$T$ = 0. In most cases, the phase transition in question is associated with
finite-Q antiferromagnetism (as in pure YbRh2Si2 trovarelli00 , CeCoIn5
bianchi03 and BaFe2(As1-xPx)2 analytis14 ) though recently, similar behavior
has also been reported in systems exhibiting zero-Q order, such as nematic
FeSe1-xSx licci19a or ferromagnetic CeRh6Ge4 shen20 . Away from the QCP, the
low-$T$ resistivity recovers the canonical $T^{2}$ Fermi-liquid form, albeit
with a coefficient that is enhanced as the QCP is approached and the order
parameter fluctuations soften.
By contrast, in overdoped cuprates (both hole- cooper09 ; legros19 and
electron-doped jin11 ), Ge-doped YbRh2Si2 custers10 , YbAlB4 tomita15 and the
organic Bechgaard salts doiron09 , $\rho(T)$ is predominantly $T$-linear down
to low-$T$ not at a singular point in their respective phase diagrams but over
an extended range of the relevant tuning parameter. At first sight, this
‘extended criticality’ is difficult to reconcile with current theories of
quantum criticality, which predict a crossover to a purely $T^{2}$ resistivity
and thus a recovery of FL behavior at low $T$ everywhere except at the
(singular) QCP. Arguably, it is this feature – incompatibility with standard
Fermi-liquid and quantum critical scenarios – that distinguishes a genuine
strange metal from its aspirants. Intriguingly, in many of these systems
$\alpha_{1}$ – the coefficient of the $T$-linear resistivity – is found to
scale with the superconducting transition temperature $T_{c}$. Moreover, for
La2-xCexCuO4 jin11 and (TMTSF)2PF6 doiron09 , extended criticality emerges
beyond a spin density wave QCP, suggesting an intimate link between the
strange metal transport, superconductivity and the presence of critical or
long-wavelength spin fluctuations. In hole-doped cuprates, however, the
strange metal regime looks different, in the sense that the extended
criticality emerges beyond the end of the pseudogap regime that does not
coincide with a magnetic quantum phase transition hussey18 . Furthermore,
while the pseudogap plays host to a multitude of broken symmetry states, the
jury is still out as to whether any of these are responsible for pseudogap
formation or merely instabilities of it.
Besides $T$-linear resistivity, strange metals also exhibit anomalous behavior
in their magnetotransport, including 1) a quadratic temperature dependence of
the inverse Hall angle $\cot\Theta_{H}=\sigma_{\rm xx}/\sigma_{\rm xy}$, 2) a
transverse magnetoresistance (MR) that at low field exhibits modified Kohler’s
scaling ($\Delta\rho/\rho(0)\propto\tan^{2}\Theta_{\rm
H}\propto(1/T^{2})^{2}$ or $1/(A+BT^{2})^{2}$ harris95 ) and/or 3) a
$H$-linear MR at high fields that may or may not follow quadrature scaling
(whereby $\Delta\rho/T\propto$ $\sqrt{1+\gamma(H/T)^{2}}$) hayes16 ; licci19b
. A survey of the dc transport properties of several strange metal candidates
is presented in Table 1. The combination of a modified Kohler’s rule and
$T^{2}$ Hall angle has been interpreted to indicate the presence of distinct
relaxation times, either for different loci in momentum space carrington92 or
for relaxation processes normal and tangential to the underlying Fermi surface
chien91 . The $H$-linear MR, on the other hand, is inextricably tied to the
$T$-linear zero-field resistivity via its $H/T$ scaling relation, a relation
that can also extend over a broad range of the relevant tuning parameter
ayres20 . In some cases, this link can be obscured, either because $\rho(T)$
itself is not strictly $T$-linear ayres20 or because the quadrature MR co-
exists with a more conventional orbital MR licci19b . Both sets of behavior
highlight once again the possible coexistence of two relaxation times or two
distinct charge-carrying sectors in real materials. Curiously, quadrature
scaling does break down inside the pseudogap regime giraldo18 ; boyd19 while
modified Kohler’s scaling is recovered harris95 ; chan14 , suggesting that the
two phenomena may be mutually exclusive in single-band materials. In multiband
materials such as FeSe1-xSx, on the other hand, these different manifestations
of strange metallic transport appear side-by-side licci19b ; huang20 .
Irrespective of these caveats and complexities, what is striking about the
quadrature MR is that it occurs in systems with distinct Fermi surface
topologies, dominant interactions and energy scales, hinting at some
universal, but as yet unidentified, organizing principle.
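The quadrature form quoted above has two instructive limits, which a short sketch makes explicit. The value of $\gamma$ here is an illustrative placeholder, not a fit to any data set:

```python
import math

def quadrature_mr(H, T, gamma=1.0):
    """Quadrature magnetoresistance: delta_rho proportional to
    T * sqrt(1 + gamma*(H/T)^2). gamma is material-dependent;
    the default here is purely illustrative."""
    return T * math.sqrt(1.0 + gamma * (H / T) ** 2)

# Low-field limit (H << T): delta_rho ~ T + gamma*H^2/(2T), quadratic in H.
# High-field limit (H >> T): delta_rho ~ sqrt(gamma)*H, H-linear and
# T-independent, tying the high-field MR to the T-linear zero-field resistivity.
hi1 = quadrature_mr(1000.0, 1.0)
hi2 = quadrature_mr(2000.0, 1.0)
print(hi2 / hi1)  # -> ~2, i.e. H-linear at high field
```

The $H/T$ scaling collapse follows directly from the functional form: $\Delta\rho/T$ depends only on the ratio $H/T$, which is why the same data can appear $H$-linear at high fields and $T$-linear at zero field.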
Restricting the strange metal moniker, as done here, to materials that exhibit
low-$T$ $T$-linear resistivity over an extended region of phase space likewise
restricts strange metallicity to a select ‘club’. What shared feature binds
them together is the key question that will be explored in the coming
sections.
## II Is it Quantum Critical?
Such scale-free $T$-linear resistivity is highly suggestive of some form of
underlying quantum criticality in which the only relevant scale is the
temperature governing collisions between excitations of the order parameter
damlesachdev97 . In fact, following the advent of marginal Fermi liquid (MFL)
phenomenology with its particular charge and spin fluctuation spectra and
associated ($T,\omega$)-linear self energies varma96 , the common
interpretation of such $T$-linear resistivity was and still remains the
nucleus of ideas centered on quantum criticality. The strict definition of
quantum criticality requires the divergence of a thermodynamic quantity. In
heavy fermion metals, the electronic heat capacity ratio $C_{\rm el}/T$ indeed
grows as $\ln(1/T)$ as the antiferromagnetic correlations diverge hf1 ; hf2 ;
bianchi03 . In certain hole-doped cuprates, $C_{\rm el}/T$ also scales as
$\ln(1/T)$ at doping levels close to the end of the pseudogap regime
MichonSheat though here, evidence for a divergent length scale of an
associated order parameter is currently lacking tallon . Moreover,
photoemission suggests that at a $T$-independent critical doping $p_{c}\approx
0.19$, all signatures of incoherent spectral features that define the strange
metal cease, giving way to a more conventional coherent response chen19 . The
abruptness of the transition suggests that it is first-order, posing a
challenge to interpretations based solely on criticality.
As touched upon in the previous section, another major hurdle for the standard
criticality scenario is that the $T$-linear resistivity persists over a wide
range of the relevant tuneable parameter, be it doping as is the case for
cuprates cooper09 ; jin11 ; hussey13 ; legros19 and MATBG cao20 , pressure
for YbAlB4 tomita15 and the organics doiron09 or magnetic field for Ge-doped
YbRh2Si2 custers10 . If quantum criticality is the cause, then it is difficult
to fathom how a thermodynamic quantity can be fashioned to diverge over an
entire phase.
Despite these difficulties, it is worth exploring the connection $T$-linear
resistivity has with continuous quantum critical phenomena, which for the sake
of argument we presume to be tied to a singular point in the phase diagram.
Regardless of the origin of the QCP, universality allows us to answer a simple
question: What constraints does quantum criticality place on the
$T$-dependence of the resistivity? The answer to this question should just be
governed by the fundamental length scale for the correlations. The simplest
formulation of quantum criticality is single-parameter scaling in which the
spatial and temporal correlations are governed by the same diverging length
(see Fig. (2)). Making the additional assumption that the relevant charge
carriers are formed from the quantum critical fluctuations, a simple scaling
analysis on the singular part of the free energy results in the scaling law
chamon05
$\displaystyle\sigma(\omega=0,T)=\frac{q^{2}}{\hbar}f(\omega=0)\left(\frac{k_{B}T}{\hbar
c}\right)^{(d-2)/z}$ (5)
for the $T$-dependence of the conductivity where $f(\omega=0)$ is a non-zero
constant, $q$ is the charge and $z$ is the dynamical exponent, which from
causality must obey the inequality $z\geq 1$. Absent from this expression is
any dependence on an ancillary energy scale for example $E_{F}$ or the plasma
frequency $\omega_{p}$ as the only assumption is scale-invariant transport
irrespective of the details of the system. The analogous expression for the
optical conductivity is wen92
$\displaystyle\sigma(\omega,T=0)\propto\omega^{(d-2)/z}.$ (6)
In pure YbRh2Si2, for example, $\sigma^{-1}(\omega)$ follows an
$\omega$-linear dependence at low frequencies in the same region of the
($T,H$) phase diagram – the quantum critical ‘fan’ – where $\rho(T)$ is also
linear, consistent with this notion of single-parameter scaling prochaska . In
cuprates, on the other hand, the situation is more nuanced. At intermediate
frequencies – sometimes referred to as the mid-infrared response –
$\sigma(\omega$) exhibits a ubiquitous $\omega^{-2/3}$ dependence marel03 .
While this feature in $\sigma(\omega)$ has been interpreted in terms of
quantum critical scaling marel03 , it is inconsistent with the single-
parameter scaling described above. At any doping level, $\sigma(\omega)$ in
the cuprates exhibits a minimum at roughly the charge transfer scale of 1 eV.
This is traditionally marelcolorchange ; CooperUVIR used as the energy scale
demarcating the separation between intraband and interband transitions and
hence serves to separate the low-energy from the high-energy continua. It has
long been debated whether the broad sub-eV $\sigma(\omega)$ response in
cuprates is best analysed in terms of one or two components tannerdrude ;
CooperUVIR . In the former, the $\omega^{-2/3}$ tail is simply a consequence
of the strong $\omega$-linear dependence in 1/$\tau_{tr}(\omega)$ – à la MFL –
while in the latter, it forms part of an incoherent response that is distinct
from the coherent Drude weight centred at $\omega=0$ which itself is described
with either a constant or $\omega$-dependent scattering rate.
Returning to the dc resistivity, we find that in cuprates, where $d$ = 3, an
exponent $z=-1$ is required, a value that is strictly forbidden by causality
chamon05 . For $d$ = 2, as in the case of MATBG, the $T$-dependence vanishes.
This is of course fixed with the replacement of $d\rightarrow d^{\ast}=1$ for
both materials. While $d^{\ast}$ can be construed as the number of dimensions
shl transverse to the Fermi surface, it is difficult to justify such a
procedure here as the persistence of $T$-linearity with no change in slope
above and below the MIR requires a theory that does not rely on FL concepts
such as a Fermi velocity or energy. Furthermore, it is well known that
introducing $d^{\ast}$ yields a power law for the heat capacity, $C\propto
T^{3/2}$ which is not seen experimentally loramSH . On dimensional grounds,
the $z=-1$ result in the context of the Drude formula is a consequence of
compensating the square power of the plasma frequency with powers of $T$ so
that the scaling form Eq. (5) is maintained. A distinct possibility is that
perhaps some other form of quantum criticality beyond single-parameter
scaling, such as a non-critical form of the entropy suggested recently
zaanenentropy , is at work here. We shall return to this idea in section V.
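The exponent counting that leads to the forbidden $z=-1$ is simple enough to spell out. A sketch of the arithmetic behind Eq. (5), assuming $T$-linear resistivity means $\sigma\propto T^{-1}$:

```python
def conductivity_exponent(d, z):
    """Single-parameter scaling, Eq. (5): sigma(T) ~ T^((d-2)/z)."""
    return (d - 2) / z

def z_for_T_linear_resistivity(d):
    """Solve (d-2)/z = -1, i.e. rho ~ T; causality requires z >= 1.
    Returns (z, causal?)."""
    if d == 2:
        raise ValueError("exponent vanishes for d = 2: no z gives rho ~ T")
    z = 2 - d
    return z, z >= 1

print(z_for_T_linear_resistivity(3))  # -> (-1, False): forbidden by causality
print(z_for_T_linear_resistivity(1))  # -> (1, True): the d -> d* = 1 replacement
```

For $d=2$ the exponent $(d-2)/z$ vanishes for any $z$, which is the MATBG problem noted in the text; the $d\rightarrow d^{\ast}=1$ replacement rescues both cases with $z=1$, but at the cost of the difficulties with the heat capacity described above.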
Figure 2: Single-parameter scaling hypothesis in which all length and time
scales are governed by a single correlation length $\xi$, which diverges at the
quantum critical point as $(g-g_{c})^{-\nu}$, where $g$ is the coupling constant
for the critical interaction and $z$ is the dynamical critical exponent.
Another critical feature of the conductivity is its behavior at finite wave
vector $k$ which may be quantified by the dynamic charge susceptibility,
$\chi^{\prime\prime}(k,\omega)=-\frac{k^{2}}{\omega
e^{2}}\Re\sigma(k,\omega),$ (7)
determined from electron energy-loss spectroscopy (EELS). A restriction on
EELS is that it measures the longitudinal charge response while optics yields
the transverse. At vanishing momentum both are expected to be equal. As optics
has no momentum resolution, comparison with EELS can only be made as
$k\rightarrow 0$. The primary charge excitation in strange metals is a 1 eV
plasmon that was long believed to exhibit the same behavior as in a normal
Fermi liquid nucker1989 ; nucker1991 . Recent high-resolution M-EELS
measurements have called this belief into question, showing that the large-$k$
response is dominated by a continuum which remains flat to high energies,
roughly 2 eV vig17 ; mitrano18 ; husain19 . Such behavior is reminiscent of
the MFL varma96 scenario, except that there the continuum persists up to a
cut-off scale set by the temperature rather than by the Mott scale of 2 eV. In
addition, the continuum exhibits scale-invariant features but with a dynamical
critical exponent, $z\sim\infty$, not possible from a simple QCP.
We conclude then that no form of traditional quantum criticality can account
easily for the power laws seen in strange metallic transport (though we
recognize that $T$-linear resistivity is observed above what appear to be
genuine singular QCPs). The photoemission experiments chen19 indicating a
first-order transition pose an additional problem exacerbated by the
possibility that the criticality might be relevant to a whole region cooper09
; legros19 ; greene ; dessau ; cao20 ; tomita15 ; doiron09 ; custers10 ;
hussey18 rather than a point. Such criticality over an extended region is
reminiscent of critical charged matter kiritsis2 ; kiritsis1 arising from
dilatonic models in gauge-gravity duality. We will revisit aspects of these
ideas in a later section as they have been the most successful (see Table 2)
thus far in reproducing the various characteristics of strange metal physics.
## III Is it Planckian?
While the electrical resistivity in metals can be measured directly, the
scattering rate is entirely an inferred quantity. Herein lies the catch with
Planckian dissipation. Angle-resolved photoemission (ARPES) experiments on
cuprates as early as 1999 reported that the width of the momentum distribution
curves (MDCs) at optimal doping along the nodal direction ($(0,0)$ to
$(\pi,\pi)$) scale as a linear function of temperature and as $a_{0}+0.75\,\omega$ for frequencies that exceed $2.5k_{B}T$ valla99 . The momentum
linewidth, which in photoemission enters as Im$\Sigma$ – the imaginary part of
the self energy – can be used to define a lifetime through
$\displaystyle\hbar v_{k}\Delta k={\rm Im}\,\Sigma(\mathbf{k},\omega)=2\frac{\hbar}{\tau},$ (8)
with $v_{k}$ the group velocity for momentum state $k$. Extracting the slope
from the data in Figure (2) of Ref. valla99 and using the experimentally
reported Fermi velocity $v_{F}=1.1$ eV$\,$Å, we find that the single-particle
scattering rate $\hbar/\tau\sim 1.7k_{B}T$, i.e. of order the Planckian limit.
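The arithmetic behind this estimate is simple enough to check explicitly. In the sketch below, the MDC-width slope is a hypothetical value chosen only to reproduce the quoted result; the measured slope itself must be read off Fig. (2) of Ref. valla99 .

```python
# Unit check of Eq. (8): hbar/tau = v_F * Delta_k / 2 for an MDC width
# Delta_k that grows linearly with temperature.
# SLOPE is a hypothetical value chosen to land on the quoted ~1.7 k_B T.

K_B = 8.617e-5   # Boltzmann constant in eV/K
V_F = 1.1        # nodal Fermi velocity in eV*Angstrom (value quoted in the text)
SLOPE = 2.7e-4   # hypothetical d(Delta_k)/dT in 1/(Angstrom*K)

def planckian_ratio(v_f, slope):
    """Return (hbar/tau)/(k_B*T) for a T-linear MDC width Delta_k = slope*T."""
    # hbar/tau = v_f * Delta_k / 2  =>  ratio = v_f * slope / (2 * k_B)
    return v_f * slope / (2.0 * K_B)

ratio = planckian_ratio(V_F, SLOPE)
print(f"hbar/tau = {ratio:.2f} k_B T")  # of order the Planckian limit
```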
Similar results were obtained in subsequent ARPES studies kaminski05 ; bok10 ;
dama03 with a key extension added by Reber et al. dessau whereby the width
of nodal states was observed to obey the quadrature form indicative of a
power-law liquid, $((\hbar\omega)^{2}+(\beta k_{B}T)^{2})^{\lambda}$ where
$\lambda$ is a doping-dependent parameter equal to $1/2$ at optimal doping.
This extraction of the scattering rate from ARPES, however, is not entirely
problem-free, as $v_{F}$ is hard to define in ARPES experiments at energies
close to the Fermi level, where, for the most part, the width of the state
exceeds its energy. Indeed, the integral of the density of states using as
input the $v_{F}$ extracted from ARPES measurements is found to account for
only half of the as-measured electronic specific heat coefficient yoshida07 .
Furthermore, this reliance on Fermiology leaves open the precise meaning of
Fig. (2) of Ref. bruin13 in which $\tau$ is plotted versus $v_{F}$ for a
series of materials that violate the MIR limit at intermediate to high
temperatures. Despite this, a similar extraction by Legros and colleagues
legros19 , again using Fermiology but focusing on the low-$T$ resistivity,
also found a transport scattering rate close to the Planckian bound. This
consistency between the two analyses reflects the curious fact that the
$T$-linear slope of the dc resistivity does not vary markedly as the MIR
threshold is crossed. It does not, however, necessarily justify either
approach in validating $T$-linear scattering at the Planckian limit. Finally,
while $T$-linearity and Planckian dissipation appear synonymous in the
cuprates, this is not universally the case. In YbRh$_2$Si$_2$ prochaska , for
example, the $T$-linear scattering rate is found to deviate strongly from the
Planckian limit with $\tau_{tr}\sim 0.1\tau_{P}$ paschen04 , while in the
electron-doped cuprates, the notion of a Planckian limit to the scattering
rate has recently been challenged poniatowski21b . This certainly adds to the
intrigue regarding quantum criticality as the underlying cause of Planckian
dissipation.
In principle, the optical conductivity permits an extraction of $\tau$ without
recourse to Fermiology. Within a Drude model, the optical conductivity,
$\displaystyle\sigma(\omega)=\frac{1}{4\pi}\frac{\omega_{p}^{2}\tau_{\rm
tr}}{1+i\omega\tau_{\rm tr}},$ (9)
contains only $\tau_{\rm tr}$ and $\omega_{p}=\sqrt{4\pi n_{e}e^{2}/m}$. At
zero frequency, the Drude formula naturally yields the dc conductivity
$\sigma_{\rm dc}$ while an estimate for the relaxation rate can be extracted
from the width at half maximum of the full Drude response. However, there is
an important caveat: $\tau_{\rm tr}$ is frequency dependent in the cuprates, a
condition that is consistent with various physical models including both the
Fermi liquid and MFL scenarios as well as the large body dessau ; valla99 of
MDC analysis performed on the cuprates. While this prevents a clean separation
of the conductivity into coherent and incoherent parts, van der Marel and
colleagues marel03 were able to show that in the low-frequency limit,
$\omega<1.5k_{B}T/\hbar$, $\tau_{\rm tr}\sim 0.8\tau_{P}$, in agreement with
the dc analysis of Legros legros19 .
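As a minimal numerical sketch of the half-width procedure mentioned above: for the Drude form of Eq. (9), the real part falls to half its dc value at $\omega=1/\tau_{\rm tr}$, so the width of the peak reads off the relaxation rate directly (arbitrary units, and a constant $\tau$ is assumed, unlike the frequency-dependent cuprate case):

```python
import numpy as np

def drude_re_sigma(omega, sigma_dc, tau):
    """Real part of the Drude form, Eq. (9): sigma_dc / (1 + (omega*tau)^2)."""
    return sigma_dc / (1.0 + (omega * tau) ** 2)

tau = 0.5                                # relaxation time, arbitrary units
omega = np.linspace(0.0, 20.0, 200001)
sigma = drude_re_sigma(omega, 1.0, tau)

# the half-maximum crossing on the positive-frequency side sits at omega = 1/tau,
# so the full (omega -> -omega symmetric) Drude peak has width 2/tau at half maximum
half_width = omega[np.argmin(np.abs(sigma - 0.5))]
print(half_width, 1.0 / tau)
```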
A second key issue remains, namely: how can such a Drude analysis be justified
for those strange metals in which the MIR limit is violated and the Drude peak
shifts to finite frequencies husseyMIR ? Indeed, in the high-$T$ limit, ‘bad
metallicity’ can be ascribed to a transfer of spectral weight from low- to
high-$\omega$, rather than from an ever-increasing scattering rate (that
within a Drude picture results in a continuous broadening of the Lorentzian
fixed at zero frequency). Given the marked crossover in the form of
$\sigma(\omega)$ at low frequencies, it is indeed remarkable and mysterious
that the slope of the $T$-linear resistivity continues unabated with no
discernible change.
## IV Is it Mottness?
Table 1 encompasses a series of ground states from which $T$-linear
resistivity emerges. In some of these materials, such as the heavy fermions,
the high and low-energy features of the spectrum are relatively distinct in
the sense that spectral weight transfer from the UV to the IR is absent. On
the other hand, hole or electron doping of the parent cuprate induces a marked
transfer of spectral weight over an energy range of roughly 1-2 eV. As a result, the low-energy
spectral weight grows ctchen ; meinders ; eskes ; PhillipsRMP ; CooperUVIR at
the expense of the degrees of freedom at high energy, a trend that persists
marelcolorchange even inside the superconducting state. This is an intrinsic
feature of Mott systems, namely that the number of low-energy degrees of
freedom is derived from the high-energy spectral weight. As this physics is
distinct from that of a Fermi liquid and intrinsic to Mott physics, it is
termed ‘Mottness’ PhillipsRMP . Notably, the mid-infrared response with its
characteristic $\omega^{-2/3}$ scaling is absent from the parent Mott
insulating state. Hence, it must reflect the doping-induced spectral weight
transfer across the Mott gap. It is perhaps not a surprise then that no
low-$T_{c}$ material exhibits such a significant mid-infrared feature. In fact,
some theories of cuprate superconductivity Leggett credit its origin to the
mid-infrared scaling. We can quantify the total number of low-energy degrees
of freedom that arise from the UV-IR mixing across the Mott gap by integrating
the optical conductivity,
$\displaystyle N_{\rm eff}(\Omega)=\frac{2mV_{\rm cell}}{\pi
e^{2}}\int_{0}^{\Omega}\sigma(\omega)d\omega,$ (10)
up to the optical gap $\Omega\approx$ 1.2 eV where $V_{\rm cell}$ is the unit-
cell volume. The energy scale of 1.2 eV corresponds to the minimum of the
optical conductivity as mentioned in the previous section. In a rigid-band
semiconductor model in which such spectral weight transfer is absent, $N_{\rm
eff}=x$, where $x$ is the number of holes. In the cuprates, however, $N_{\rm
eff}$ exceeds $x$ as shown in Fig. (3). This is the defining feature of
Mottness ctchen ; meinders ; eskes ; PhillipsRMP since it is ubiquitous in
Mott systems and strictly absent in weakly correlated metals. Even in many of
the strange or quantum critical metals described in Table 1, there is little
or no evidence that Mottness is playing any significant role. Such a
distinction may thus offer a hint to the source of the uniqueness of the
cuprate strange metal. In bad metals, on the other hand, a gradual transfer of
low-frequency spectral weight out to energies of order the Mott scale is
almost universally observed with increasing temperature husseyMIR suggesting
that Mottness is one of the key components of bad metallic transport.
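Eq. (10) is simply a cutoff integral of the measured $\sigma(\omega)$. The sketch below evaluates it for an illustrative toy conductivity, not cuprate data; the prefactor lumps together $2mV_{\rm cell}/\pi e^{2}$:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoid rule, kept explicit for portability."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def n_eff(omega, sigma, prefactor=1.0):
    """N_eff(Omega) of Eq. (10): prefactor * integral_0^Omega sigma(w) dw.
    The prefactor stands in for 2*m*V_cell/(pi*e^2)."""
    return prefactor * trapezoid(sigma, omega)

tau = 5.0                                 # toy relaxation time (1/eV)
omega = np.linspace(0.0, 1.2, 10001)      # integrate up to Omega ~ 1.2 eV
sigma = 1.0 / (1.0 + (omega * tau) ** 2)  # toy Drude tail, sigma_dc = 1

numeric = n_eff(omega, sigma)
exact = float(np.arctan(1.2 * tau) / tau)  # closed form for the toy sigma
print(numeric, exact)
```

In practice one would replace the toy `sigma` array with the measured optical conductivity sampled up to the 1.2 eV minimum and restore the physical prefactor.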
The optical response in cuprates tells us that there are degrees of freedom
that couple to electromagnetism that have no interpretation in terms of doped
holes. That is, they are not local entities as they arise from the mixing of
both UV and IR degrees of freedom. It is such mixing that could account for
the lack of any distinctive energy scale PhillipsRMP , that is, the scale
invariance underlying the strange metal. Additionally, Lee et al. showed,
also from optical conductivity studies LeeEM , that throughout the underdoped
regime of the cuprate phase diagram, the effective mass remains constant. As a
result, the Mott transition proceeds by a vanishing of the carrier number
rather than the mass divergence of the Brinkman-Rice scenario BRice . (Note
that while quantum oscillation experiments on underdoped cuprates show
evidence for mass enhancement ramshaw15 , this is thought to be tied to the
charge order centred around 1/8 doping). Such dynamical mixing between the UV
and IR scales in Mott systems is well known to give rise to spectral weight in
the lower Hubbard band meinders ; eskes ; PhillipsRMP that exceeds the number
of electrons, strictly $1+x$, that the band can hold. Consequently, part of
the electromagnetic response of the strange metal at low energies has no
interpretation in terms of electron quasiparticles as it arises strictly from
UV-IR mixing. Precisely how such mixing leads to scale-invariant $T-$linear
resistivity remains open.
Figure 3: Integrated optical conductivity for electron-doped
Pr$_{2-x}$Ce$_x$CuO$_{4-\delta}$ (triangles) and hole-doped
La$_{2-x}$Sr$_x$CuO$_{4-\delta}$ (circles). The dashed line indicates what is
expected for doping a semiconductor, namely that each Ce or Sr atom contributes
just a single charge carrier. Reprinted from Ref. CooperUVIR .
## V Is it about Gravity?
Table 2: Snapshot of current theoretical modeling of the strange metal based
on consistency with $T-$ linear resistivity, $\omega^{-2/3}$ scaling of the
mid-infrared optical conductivity, quadrature magnetoresistance, extended
quantum criticality, and what predictions are made in terms of experimental
observables. The abbreviations are as follows: MFL =
marginal Fermi liquid, EFL = ersatz Fermi liquid, SYK = Sachdev-Ye-Kitaev,
AdS/CFT = anti de Sitter space/conformal field theory conjecture, AD/EMD =
anomalous dimensions/Einstein-Maxwell-dilaton, HM = Hubbard model, QMC =
quantum Monte Carlo, ED = exact diagonalization, CA = cold atoms, DMFT/EDMFT =
dynamical mean-field theory/embedded dynamical mean-field theory, A-B =
Aharonov-Bohm effect, ECFL = extremely correlated Fermi liquid, and QCP =
quantum critical point scenarios.
(Apologies to anyone whose work we did not cite.)

| | $\rho\propto T$ as $T\rightarrow 0$ | $\rho\propto T$ as $T\rightarrow\infty$ | $\sigma\propto\omega^{-2/3}$ | Quadrature MR | Extended criticality | Experimental prediction |
|---|---|---|---|---|---|---|
| **Phenomenological** | | | | | | |
| MFL | $\checkmark$ varma96 | $\times$ varma96 | $\times$ | $\times$ | $\times$ | loop currents loopvarma |
| EFL | -$^{a}$ | - | - | $\times$ | $\times$ | loop currents else1 |
| **Numerical** | | | | | | |
| ECFL | $\times$ | $\checkmark$ maishastry | - | - | $\times$ | $\times$ |
| HM (QMC/ED/CA) | - Huang987 | $\checkmark$ Huang987 ; ED ; CA ; ED1 ; ED2 | $\times$ | - | - | - |
| DMFT/EDMFT | $\checkmark$ Cha18341 | $\checkmark$ kotliarDMFT ; tremblay | $\times$ | - | $\checkmark$ tremblay | - |
| QCP | $\checkmark$ INem | - | - | - | $\times$ | - |
| **Gravity-based** | | | | | | |
| SYK | $\checkmark$ patelPM ; syk2 | $\checkmark$$^{b}$ syk2 | $\times$ | $\checkmark$$^{c}$ syk1 | - | $\times$ |
| AdS/CFT | $\checkmark$ adscftstrange | $\checkmark$ adscftstrange | $\checkmark$$^{d}$ kiritsis ; kiritsis2 | $\times$ | $\times$ | $\times$ |
| AD/EMD | $\checkmark$ hk ; gl1 ; limtra | $\checkmark$ hk ; kiritsis ; kiritsis2 ; limtra ; karch2 | $\checkmark$ karch2 ; kiritsis ; kiritsis2 | $\times$ | $\checkmark$ kiritsis | Fractional A-B limtra |

$^{a}$ $T$-linear resistivity is an input.
$^{b}$ A slope change occurs through the MIR.
$^{c}$ Quadrature scaling is obtained only for a bi-valued random resistor model syk1 with equal weights boyd19 .
$^{d}$ While this scaling was thought to arise in pure AdS with an inhomogeneous charge density horowitz , later studies langley ; donos found otherwise.
To frame the theoretical modeling of strange metallicity tabulated in Table 2,
we group the work into three principal categories: 1) phenomenological, 2)
numerical and 3) gravity-related. While both phenomenological models
considered here require (EFL) or predict (MFL) loop currents, they do so for
fundamentally different reasons. (For an explanation of the various acronyms,
please refer to the caption in Table 2). On the EFL account else1 , such
current order is needed to obtain a finite resistivity in the absence of
momentum relaxation (certainly not a natural choice given the Drude fit to the
optical conductivity discussed previously), while in MFL, loop currents
loopvarma are thought to underpin the local fluctuation spectrum varma96 .
ECFL maishastry predicts a resistivity that interpolates from Fermi-liquid-like $T^{2}$ at low $T$ to $T$-linear behavior for $T\gg T_{\rm FL}$. QMC
Huang987 ; ED ; ED1 ; ED2 as well as cold atom (CA) experiments CA on the
Hubbard model (HM) have established that at high temperatures, the resistivity
is indeed $T$-linear. The fermion sign problem, however, prevents any
definitive statement about the low-$T$ behavior in the presence of Mott
physics. Non-Fermi liquid transport in SYK models SY ; Kitaev ; K is achieved
by an all-to-all random interaction. While such interactions might seem
initially unphysical, SYK models are nevertheless natural candidates to
destroy Fermi liquids which, by their nature, permit a purely local
description in momentum space. As a result, they are impervious to repulsive
local-in-space interactions polchinski . Coupling a Fermi liquid to an array
of disordered SYK islands, however, leads syk1 ; syk2 to a non-trivial change
in the electron Green function across the MIR and hence a change in slope of
the resistivity is unavoidable syk1 , though it can be minimized through fine
tuning syk2 .
An added feature of these disordered models is that in certain limits, they
have a gravity dual Kitaev ; SY1 ; K ; dual . This state of affairs arises
because the basic propagator K ; SY1 ; Kitaev in the SYK model in imaginary
time describes the motion of fermions, with appropriate boundary conditions,
between two points of the asymptotic boundary of a hyperbolic plane. In real
time, simply replacing the hyperbolic plane with the space-time equivalent,
namely two-dimensional anti de Sitter (AdS) space (a maximally symmetric
Lorentzian manifold with constant negative curvature), accurately describes
all the correlators. It is from this realization that the duality
between a random spin model and gravity in AdS$_2$ arises sachdev ; Kitaev ; K .
Hence, although the origins of SYK were independent of gravity, its
correlators can be deduced from the asymptotics of the corresponding
spacetime. At the asymptote, only the time coordinate survives and hence
ultimately, SYK dynamics is ultra-local in space with only diverging
correlations in time, an instantiation of local quantum criticality.
Such local quantum criticality is not a new concept in condensed matter
systems and indeed lies at the heart of MFL phenomenology varma96 , DMFT
kotliarDMFT , and is consistent with the momentum-independent continuum found
in the M-EELS data discussed earlier mitrano18 . The deeper question is why
does gravity have anything to do with a spin problem with non-local
interactions? The issue comes down to criticality and to the structure of
general relativity. The second equivalence principle on which general
relativity is based states that no local measurement can detect a uniform
gravitational field. A global measurement is required. Ditto for a critical
system since no local measurement can discern criticality. Observables tied to
the diverging correlation length are required. Hence, at least conceptually,
it is not unreasonable to expect a link between critical matter and gravity.
The modern mathematical machinery which makes it possible to relate the two is
the gauge-gravity duality or the AdS/CFT (conformal field theory) conjecture.
The key claim of this duality maldacena ; witten ; gubser is that some
strongly interacting quantum theories, namely ones which are at least
conformally invariant in $d$-dimensions, are dual to a theory of gravity in a
$d+1$ spacetime that is asymptotically AdS. The radial direction represents
the energy with the quantum theory residing at the UV boundary and the IR
limit deep in the interior at the black hole horizon. Hence, intrinsic to this
construction is a separation between bulk (gravitational) and boundary
(quantum mechanical) degrees of freedom. That the boundary of a gravitational
object has features distinct from the bulk dates back to the observations of
Bekenstein beckenstein and Hawking hawking ; hawkingarea that the
information content of a black hole scales with the area, not the volume. The
requirement that the boundary theory be strongly coupled then arises by
maintaining that the AdS radius exceeds the Planck length $\ell_{P}$. More
explicitly, because the AdS radius and the coupling constant of the boundary
theory are proportional, the requirement $R\gg\ell_{P}$ translates into a
boundary theory that is strongly coupled.
The first incarnation Faulkner ; fireball ; schalm of this duality in the
context of fermion correlators involved modeling fermions at finite density in
$2+1$ dimensions. From the duality, the conformally invariant vacuum of such a
system corresponds to gravity in AdS$_4$, the extra dimension representing the
radial direction along which identical copies of the boundary CFT lie albeit
with differing energy scales. Surprisingly, what was shown Faulkner is that
the low-energy (IR) properties of such a system in the presence of a charge
density are determined by an emergent AdS${}_{2}\times R^{2}$ (with $R^{2}$
representing a plane) spacetime at the black hole horizon. The actual symmetry
includes scale invariance and is denoted by $SL(2,R)$ (a special Lie group of
real $2\times 2$ matrices with a unit determinant). Once again, the
criticality of boundary fermions is determined entirely by the fluctuations in
time, that is, local quantum criticality as seen in SYK. The temperature and
frequency dependence of the conductivity are then determined by the same
exponent Faulkner as expected from Eqs. (5) and (6) and as a result, a
simultaneous description of $T$-linearity and $\omega^{-2/3}$ is not possible,
as noted in Table 2.
This particular hurdle is overcome by the AD/EMD theories Anomdim0 ; Anomdim01
; Anomdim02 ; kiritsis ; kiritsis1 ; kiritsis2 ; cremonini ; Anomdim1 which
as indicated in Table 2, have been the most successful to date in describing
the range of physics observed in strange metals. What is new here is the
introduction of extra fields, dilatons for example, which permit hyperscaling
violations shl and anomalous dimensions Anomdim0 ; Anomdim01 ; Anomdim02 ;
kiritsis ; kiritsis1 ; kiritsis2 ; cremonini ; Anomdim1 for all operators.
Consequently, under a scale change of the coordinates, the metric is no longer
unscathed. That is, the manifold is not fixed and it is the matter fields that
determine the geometry. Such systems exhibit scale covariance rather than the
scale invariance indicative of pure AdS metrics. A consequence of this covariance is
that even the currents acquire anomalous dimensions. But how is this possible
given that a tenet of field theory is that no amount of renormalization can
change the dimension of the current gross from $d-1$? What makes this
possible is that in EMD theories, the extra radial dimension allows physics
beyond Maxwellian electro-magnetism. For example, the standard Maxwell action,
$S=\int dV_{d}F^{2}$ where $F=dA$, requires that the dimension of the gauge
field be fixed to unity, $[A]=1$ (strictly, what is required is that $[qA]=1$,
with $q$ the charge; in insisting that $[A]=1$ we are setting $q=1$, but all of
our statements refer to the product $qA$). EMD theories use instead an
action of the form $S=\int dV_{d}dyy^{a}F^{2}$ where $y$ is the radial
coordinate of the $d+1$ AdS spacetime. Comparing these two actions leads
immediately to the conclusion that the dimension of $A$ now acquires the value
$[A]=1-a/2$. Hence, even in the bulk of the geometry, the dimension of the
gauge field is not unity. Depending on the value of $a$, $a<0$ at the UV
conformal boundary or $a>0$ at the IR at the black hole horizon, the equations
of motion are non-standard and obey fractional electromagnetism gl1 ; gl2
consistent with a non-traditional dimension for the gauge field. In EMD
theories, it is precisely the anomalous dimension kiritsis ; kiritsis1 ;
kiritsis2 ; cremonini ; Anomdim0 ; Anomdim01 ; Anomdim02 for conserved
quantities that gives rise to the added freedom for extended quantum
criticality to occur, the simultaneous fitting karch2 of $T-$linearity and
$\omega^{-2/3}$ of the optical conductivity, and the basis for a proposal for
the strange metal based on $[A]=5/3$ hk .
Within these holographic systems, a Drude-like peak in the optical
conductivity can emerge both from the coherent (quasiparticle-like) sector
Davison_15 as well as the incoherent (‘un-particle unparticle ’) sector
hartnoll10 ; kiritsis15 ; chen_17 ; Davison_19 . Application of EMD theory has
also provided fresh insights into the phenomenon of ‘lifetime separation’ seen
in the dc and Hall conductivities of hole-doped cuprates carrington92 ;
chien91 ; manako92 as well as in other candidate strange metals paschen04 ;
lyu20 . For a system with broken translational invariance, the finite density
conductivity comprises two distinct components blake_donos , with the dc
resistivity being dominated by the particle-hole symmetric term – with a
vanishing Hall conductivity – and one from an explicit charge density governed
by more conventional (Umklapp) momentum relaxation that sets the
$T$-dependence of the Hall angle.
The success of EMD theories in the context of strange metal physics raises a
philosophical question: Is all of this just a game? That is, is the
construction of bulk theories with funky electromagnetism fundamental? The
answer here lies in Noether’s Second Theorem (NST) gl1 ; gl2 ; PhillipsRMP , a
theorem far less known than her ubiquitous first theorem but ultimately of
more importance as it identifies a shortcoming. To illustrate her first
theorem, consider Maxwellian electromagnetism which is invariant under the
transformation $A_{\mu}\rightarrow A_{\mu}+\partial_{\mu}\Lambda$. This
theorem states that there must be a conservation law with the same number of
derivatives as in the gauge principle. Hence the conservation law only
involves a single derivative, namely $\partial_{\mu}J_{\mu}=0$. This is
Noether’s First Theorem N in practice.
What Noether N sought to rectify in the second half of her famous paper is
that the form of the gauge transformation is not unique; hence the
conservation law is arbitrary. It is for this reason that she retained there N
all possible higher-order integer
derivatives in the gauge principle. These higher-order derivatives both add
constraints to and change the dimension of the current. Stated succinctly, NST
N dictates that the full family of generators of U(1) invariance determines
the dimension of the current. It is easy to see how this works. Suppose we can
find a quantity, $\hat{Y}$ that commutes with $\partial_{\mu}$. That is,
$\partial_{\mu}\hat{Y}=\hat{Y}\partial_{\mu}$. If this is so, then we can
insert this into the conservation law with impunity. What this does is
redefine the current:
$\partial_{\mu}\hat{Y}J^{\mu}=\partial_{\mu}\tilde{J}^{\mu}$. The new current
$\tilde{J}^{\mu}$ acquires whatever dimensions $\hat{Y}$ has such that
$[\tilde{J}^{\mu}]=d-1-d_{Y}$. But because of the first theorem, $\hat{Y}$
must have come from the gauge transformation and hence must ultimately be a
differential operator itself. That is, there is an equally valid class of
electromagnetisms with gauge transformations of the form $A_{\mu}\rightarrow
A_{\mu}+\partial_{\mu}\hat{Y}\Lambda$. For EMD theories gl1 ; gl2 ;
PhillipsRMP , $\hat{Y}$ is given by the fractional Laplacian,
$\Delta^{(\gamma-1)/2}$ with $[A_{\mu}]=\gamma$ (with $\gamma=1-a/2$ to make
contact with the EMD theories introduced earlier). For most matter as we know
it, $\gamma=1$. The success of EMD theories raises the possibility that the
strangeness of the strange metal hinges on the fact that $\gamma\neq 1$. This
can be tested experimentally using the standard Aharonov-Bohm geometry limtra
; gl1 in which a hole of radius $r$ is punched into a cuprate strange metal.
Because $[A]$ is no longer unity, the integral of $A\cdot d\ell$ is no longer
the dimensionless flux. For physically realizable gauges, this ultimately
provides an obstruction to charge quantization. As a result, deviations limtra
; gl1 from the standard $\pi r^{2}\times B$ dependence for the flux would be
the key experimental feature that a non-local gauge principle is operative in
the strange metal. An alternative would be, as Anderson anderson advocated,
the use of fractional or unparticle propagators with the standard gauge
principle. However, in the end, it all comes down to gauge invariance. The
standard gauge-invariant condition prevents the power laws in unparticle stuff
from influencing the algebraic fall-off of the optical conductivity limtragool
; karch2 as they offer just a prefactor to the polarizations Liao2008 . The
escape route, an anomalous dimension for the underlying gauge field, offers a
viable solution but the price is abandoning locality bora of the action.
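The chain of reasoning above can be condensed into display form; this is a recap of the statements already made, not a new result:

```latex
A_{\mu}\rightarrow A_{\mu}+\partial_{\mu}\hat{Y}\Lambda
\quad\Longrightarrow\quad
\partial_{\mu}\bigl(\hat{Y}J^{\mu}\bigr)=\partial_{\mu}\tilde{J}^{\mu}=0,
\qquad
[\tilde{J}^{\mu}]=d-1-d_{Y},
```

with the EMD case recovered by choosing $\hat{Y}=\Delta^{(\gamma-1)/2}$, giving $[A_{\mu}]=\gamma$; ordinary, strictly local electromagnetism is the special case $\gamma=1$.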
Figure 4: Correlation between the superfluid density $n_{s}(0)$ and the
coefficient $\alpha_{1}$ of the $T$-linear resistivity in Tl$_2$Ba$_2$CuO$_{6+\delta}$
(Tl2201) across the strange metal regime (adapted from Refs. culo21 ; putzke21
).
## VI Is it Important?
Given the immense difficulty in constructing a theory of the strange metal,
one might ask why bother? To gauge the importance of the strange metal, look
no further than Fig. (4). This figure shows that the coefficient $\alpha_{1}$
of the $T$-linear resistivity component in the strange metal regime of
overdoped hole-doped cuprates tracks the doping dependence of the $T=0$
superfluid density $n_{s}(0)$. As mentioned earlier, a similar correlation
exists between $\alpha_{1}$ and $T_{c}$ in electron-doped cuprates jin11 , the
Bechgaard salts doiron09 , and the iron pnictides doiron09 ,
establishing a fundamental link between high-temperature superconductivity and
the strange metal.
For a long time, the drop in $n_{s}(0)$ with doping in cuprates was attributed
to pair breaking, a symptom of the demise of the dominant pairing interaction
within a disordered lattice. Recent mutual inductance measurements, however,
have challenged this view, arguing that the limiting low-$T$ behavior of
$n_{s}(T)$ was incompatible with conventional pair breaking scenarios
bozovic16 . Certainly, the correlation between $\alpha_{1}$ and $n_{s}(0)$ is
unforeseen in such models. Moreover, if the strange metal regime is indeed
populated with non-quasiparticle states, then Fig. (4) indicates a pivotal
role for these states in the pairing condensate culo21 . On more general
grounds, this result informs us that the door to unlocking cuprate
superconductivity is through the strange metal and any theory which divorces
superconductivity from the strange metal is a non-starter. To conclude,
solving the strange metal kills two birds with one stone. Perhaps there is
some justice here. After all, we know from Pippard’s pippard work, which can
be reformulated gl1 ; gl2 in terms of fractional Laplacians, that even
explaining superconductivity in elemental metals necessitates a non-local
relationship between the current and the gauge field. What seems to be
potentially new about the cuprates is that now the normal state as a result of
the strange metal also requires non-locality.
## Acknowledgements
PA and PWP acknowledge support from Center for Emergent Superconductivity, a
DOE Energy Frontier Research Center, Grant No. DE-AC0298CH1088. NEH is funded
by Netherlands Organisation for Scientific Research (NWO) (‘Strange Metals’
16METL01), the European Research Council (ERC) under the European Union’s
Horizon 2020 research and innovation programme (No. 835279-Catch-22) and EPSRC
(EP/V02986X/1). The work on fractional electromagnetism was funded through
DMR-2111379.
## References
* (1) T. Matsubara, Prog. Theor. Phys. 14, 351 (1955).
* (2) S. Chakravarty, B. I. Halperin, D. R. Nelson, Phys. Rev. B 39, 2344 (1989).
* (3) J. Zaanen, Nature 430, 512 (2004).
* (4) M. Gurvitch, A. T. Fiory, Phys. Rev. Lett. 59, 1337 (1987).
* (5) S. Martin, A. T. Fiory, R. M. Fleming, L. F. Schneemeyer, J. V. Waszczak, Phys. Rev. B 41, 846 (1990).
* (6) H. Takagi, et al., Phys. Rev. Lett. 69, 2975 (1992).
* (7) D. van der Marel, et al., Nature 425, 271 (2003).
* (8) R. A. Cooper, et al., Science 323, 603 (2009).
* (9) A. Legros, et al., Nat. Phys. 15, 142 (2019).
* (10) J. A. N. Bruin, H. Sakai, R. S. Perry, A. P. Mackenzie, Science 339, 804 (2013).
* (11) P. Fournier, et al., Phys. Rev. Lett. 81, 4720 (1998).
* (12) A. P. Mackenzie, S. R. Julian, D. C. Sinclair, C. T. Lin, Phys. Rev. B 53, 5848 (1996).
* (13) N. Nagaosa, P. A. Lee, Phys. Rev. B 45, 960 (1992).
* (14) A. F. Ioffe, A. R. Regel, Prog. Semicond. 4, 237 (1960).
* (15) M. Gurvitch, Phys. Rev. B 24, 7404 (1981).
* (16) N. E. Hussey, K. Takenaka, H. Takagi, Phil. Mag. 84, 2847 (2004).
* (17) N. F. Mott, Phil. Mag. A 26, 1015 (1972).
* (18) N. E. Hussey, et al., Phil. Trans. Roy. Soc. A 369, 1626 (2011).
* (19) V. J. Emery, S. A. Kivelson, Phys. Rev. Lett. 74, 3253 (1995).
* (20) C. Proust, B. Vignolle, J. Levallois, S. Adachi, N. E. Hussey, Proc. Natl. Acad. Sci. (USA) 113, 13654 (2016).
* (21) N. Barišic, et al., Proc. Natl. Acad. Sci. (USA) 110, 12235 (2013).
* (22) A. Carrington, A. P. Mackenzie, C. T. Lin, J. R. Cooper, Phys. Rev. Lett. 69, 2855 (1992).
* (23) M. K. Chan, et al., Phys. Rev. Lett. 113, 177005 (2014).
* (24) T. R. Chien, Z. Z. Wang, N. P. Ong, Phys. Rev. Lett. 67, 2088 (1991).
* (25) J. M. Harris, et al., Phys. Rev. Lett. 75, 1391 (1995).
* (26) P. Giraldo-Gallo, et al., Science 361, 479 (2018).
* (27) C. Boyd, P. W. Phillips, Phys. Rev. B 100, 155139 (2019).
* (28) T. Manako, Y. Kubo, Y. Shimakawa, Phys. Rev. B 46, 11019 (1992).
* (29) J. Ayres, et al., Nature 575, 661 (2021).
* (30) N. R. Poniatowski, T. Sarkar, S. D. Sarma, R. L. Greene, Phys. Rev. B 103, 020501 (2021).
* (31) K. Jin, N. P. Butch, K. Kirshenbaum, J. Paglione, R. L. Greene, Nature 476, 73 (2011).
* (32) P. Li, F. F. Balakirev, R. L. Greene, Phys. Rev. Lett. 99, 047003 (2007).
* (33) N. R. Poniatowski, T. Sarkar, R. L. Greene, Phys. Rev. B 103, 125102 (2021).
* (34) T. Sarkar, P. R. Mandal, N. R. Poniatowski, M. K. Chan, R. L. Greene, Sci. Adv. 5, eaav6753 (2019).
* (35) A. W. Tyler, A. P. Mackenzie, S. NishiZaki, Y. Maeno, Phys. Rev. B 58, 10107 (1998).
* (36) N. E. Hussey, et al., Phys. Rev. B 57, 5505 (1998).
* (37) M. E. Barber, A. S. Gibbs, Y. Maeno, A. P. Mackenzie, C. W. Hicks, Phys. Rev. Lett. 120, 076602 (2018).
* (38) A. P. Mackenzie, et al., Phys. Rev. B 54, 7425 (1996).
* (39) S. Kasahara, et al., Proc. Natl. Acad. Sci. (USA) 111, 16309 (2014).
* (40) S. Licciardello, et al., Nature 567, 213 (2019).
* (41) W. K. Huang, et al., Phys. Rev. Res. 2, 033367 (2020).
* (42) S. Licciardello, et al., Phys. Rev. Res. 1, 023011 (2019).
* (43) D. Hu, et al., arXiv:1812.11902.
* (44) J. G. Analytis, et al., Nat. Phys. 10, 194 (2014).
* (45) S. Kasahara, et al., Phys. Rev. B 81, 184519 (2010).
* (46) I. M. Hayes, et al., Nat. Phys. 12, 916 (2016).
* (47) Y. Nakajima, et al., Commun. Phys. 3, 181 (2020).
* (48) O. Trovarelli, et al., Phys. Rev. Lett. 85, 626 (2000).
* (49) J. Custers, et al., Nature 424, 524 (2003).
* (50) J. Custers, et al., Phys. Rev. Lett. 104, 186402 (2010).
* (51) S. Paschen, et al., Nature 432, 881 (2004).
* (52) T. Tomita, K. Kuga, Y. Uwatoko, P. Coleman, S. Nakatsuji, Science 349, 506 (2015).
* (53) Y. Nakajima, et al., J. Phys. Soc. Japan 76, 024703 (2007).
* (54) A. Bianchi, R. Movshovich, I. Vekhter, P. Pagliuso, J. L. Sarrao, Phys. Rev. Lett. 91, 257001 (2003); J. Paglione, et al., ibid. 91, 246405 (2003).
* (55) B. Shen, et al., Nature 579, 51 (2020).
* (56) N. Doiron-Leyraud, et al., Phys. Rev. B 80, 214531 (2009).
* (57) H. Polshyn, et al., Nat. Phys. 15, 1011 (2019).
* (58) Y. Cao, et al., Phys. Rev. Lett. 124, 076801 (2020).
* (59) R. Lyu, et al., Phys. Rev. B 103, 245424 (2021).
* (60) P. B. Allen, et al., Phys. Rev. B 53, 4393 (1996).
* (61) A. P. Mackenzie, et al., Phys. Rev. B 58, 13318 (1998).
* (62) O. Gunnarsson, M. Calandra, J. E. Han, Rev. Mod. Phys. 75, 1085 (2003).
* (63) N. E. Hussey, S. Licciardello, J. Buhot, Rep. Prog. Phys. 81, 052501 (2018).
* (64) K. Damle, S. Sachdev, Phys. Rev. B 56, 8714 (1997).
* (65) C. M. Varma, P. B. Littlewood, S. Schmitt-Rink, E. Abrahams, A. E. Ruckenstein, Phys. Rev. Lett. 63, 1996 (1989).
* (66) H. v. Löhneysen, et al., Phys. Rev. Lett. 72, 3262 (1994).
* (67) O. Trovarelli, et al., Phys. Rev. Lett. 85, 626 (2000).
* (68) B. Michon, et al., Nature 567, 218 (2019).
* (69) J. G. Storey, J. L. Tallon, G. V. M. Williams, Phys. Rev. B 78, 140506 (2008).
* (70) S.-D. Chen, et al., Science 366, 1099 (2019).
* (71) N. E. Hussey, H. Gordon-Moys, J. Kokalj, R. H. McKenzie, J. Phys. Conf. Series 449, 012004 (2013).
* (72) P. W. Phillips, C. Chamon, Phys. Rev. Lett. 95, 107002 (2005).
* (73) X.-G. Wen, Phys. Rev. B 46, 2655 (1992).
* (74) L. Prochaska, et al., Science 367, 285 (2020).
* (75) H. J. A. Molegraaf, C. Presura, D. van der Marel, P. H. Kes, M. Li, Science 295, 2239 (2002).
* (76) S. L. Cooper, et al., Phys. Rev. B 41, 11605 (1990).
* (77) M. A. Quijada, et al., Phys. Rev. B 60, 14917 (1999).
* (78) S. A. Hartnoll, A. Lucas, S. Sachdev, Holographic Quantum Matter (MIT Press, 2018).
* (79) J. Loram, K. Mirza, J. Wade, J. Cooper, W. Liang, Physica 235C-240C, 134 (1994).
* (80) J. Zaanen, SciPost Phys. 6, 61 (2019).
* (81) N. Nücker, et al., Phys. Rev. B 39, 12379 (1989).
* (82) N. Nücker, U. Eckern, J. Fink, P. Müller, Phys. Rev. B 44, 7155(R) (1991).
* (83) S. Vig, et al., SciPost Phys. 3, 026 (2017).
* (84) M. Mitrano, et al., Proc. Natl. Acad. Sci. (USA) 115, 5392 (2018).
* (85) A. A. Husain, et al., Phys. Rev. X 9, 041062 (2019).
* (86) R. L. Greene, P. R. Mandal, N. R. Poniatowski, T. Sarkar, Ann. Rev. Cond. Matt. Phys. 11, 213 (2020).
* (87) T. J. Reber, et al., Nat. Commun. 10, 5737 (2019).
* (88) C. Charmousis, B. Goutéraux, B. Soo Kim, E. Kiritsis, R. Meyer, JHEP 2010, 151 (2010).
* (89) B. Goutéraux, E. Kiritsis, JHEP 2013, 53 (2013).
* (90) T. Valla, et al., Science 285, 2110 (1999).
* (91) A. Kaminski, et al., Phys. Rev. B 71, 014517 (2005).
* (92) J. M. Bok, et al., Phys. Rev. B 81, 174516 (2010).
* (93) A. Damascelli, Z. Hussain, Z.-X. Shen, Rev. Mod. Phys. 75, 473 (2003).
* (94) T. Yoshida, et al., J. Phys.: Condens. Matt. 19, 125209 (2007).
* (95) N. Poniatowski, T. Sarkar, R. Lobo, S. Das Sarma, R. L. Greene, arXiv:2109.00513.
* (96) C. T. Chen, et al., Phys. Rev. Lett. 66, 104 (1991).
* (97) M. B. J. Meinders, H. Eskes, G. A. Sawatzky, Phys. Rev. B 48, 3916 (1993).
* (98) H. Eskes, A. M. Oleś, M. B. J. Meinders, W. Stephan, Phys. Rev. B 50, 17980 (1994).
* (99) P. W. Phillips, Rev. Mod. Phys. 82, 1719 (2010).
* (100) A. J. Leggett, Proc. Natl. Acad. Sci. (USA) 96, 8365 (1999).
* (101) Y. S. Lee, et al., Phys. Rev. B 72, 054529 (2005).
* (102) W. F. Brinkman, T. M. Rice, Phys. Rev. B 2, 4302 (1970).
* (103) B. J. Ramshaw, et al., Science 348, 317 (2015).
* (104) M. E. Simon, C. M. Varma, Phys. Rev. Lett. 89, 247003 (2002).
* (105) D. V. Else, T. Senthil, Phys. Rev. Lett. 127, 086601 (2021).
* (106) B. S. Shastry, P. Mai, Phys. Rev. B 101, 115121 (2020).
* (107) E. W. Huang, R. Sheppard, B. Moritz, T. P. Devereaux, Science 366, 987 (2019).
* (108) J. Kokalj, Phys. Rev. B 95, 041110 (2017).
* (109) P. T. Brown, et al., Science 363, 379 (2019).
* (110) A. Vranić, et al., Phys. Rev. B 102, 115142 (2020).
* (111) J. Vučičević, et al., Phys. Rev. Lett. 123, 036601 (2019).
* (112) P. Cha, N. Wentzell, O. Parcollet, A. Georges, E.-A. Kim, Proc. Natl. Acad. Sci. (USA) 117, 18341 (2020).
* (113) X. Deng, et al., Phys. Rev. Lett. 110, 086401 (2013).
* (114) W. Wu, X. Wang, A. M. S. Tremblay (2021).
* (115) S. Lederer, Y. Schattner, E. Berg, S. A. Kivelson, Proc. Natl. Acad. Sci. (USA) 114, 4905 (2017).
* (116) A. A. Patel, S. Sachdev, Phys. Rev. Lett. 123, 066601 (2019).
* (117) P. Cha, A. A. Patel, E. Gull, E.-A. Kim, Phys. Rev. Res. 2, 033434 (2020).
* (118) A. A. Patel, J. McGreevy, D. P. Arovas, S. Sachdev, Phys. Rev. X 8, 021049 (2018).
* (119) T. Faulkner, N. Iqbal, H. Liu, J. McGreevy, D. Vegh, Science 329, 1043 (2010).
* (120) G. T. Horowitz, J. E. Santos, D. Tong, JHEP 2012, 168 (2012).
* (121) B. W. Langley, G. Vanacore, P. W. Phillips, JHEP 2015, 163 (2015).
* (122) A. Donos, J. P. Gauntlett, JHEP 2014, 40 (2014).
* (123) E. Kiritsis, Y. Matsuo, JHEP 2017, 41 (2017).
* (124) S. A. Hartnoll, A. Karch, Phys. Rev. B 91, 155126 (2015).
* (125) G. La Nave, K. Limtragool, P. W. Phillips, Rev. Mod. Phys. 91, 021003 (2019).
* (126) K. Limtragool, P. W. Phillips, Europhys. Lett. 121, 27003 (2018).
* (127) A. Karch, K. Limtragool, P. W. Phillips, JHEP 2016, 175 (2016).
* (128) S. Sachdev, J. Ye, Phys. Rev. Lett. 70, 3339 (1993).
* (129) A. Kitaev, KITP Talks (2015).
* (130) A. Kitaev, S. J. Suh, JHEP 2018, 183 (2018).
* (131) J. Polchinski, arXiv:hep-th/9210046v2 (1992).
* (132) S. Sachdev, Phys. Rev. Lett. 105, 151602 (2010).
* (133) J. Maldacena, D. Stanford, Phys. Rev. D 94, 106002 (2016).
* (134) S. Sachdev, Quantum Phase Transitions (Cambridge Univ. Press, 2011), second edn.
* (135) J. Maldacena, Int. J. Theor. Phys. 38, 1113 (1999).
* (136) E. Witten, Adv. Theor. Math. Phys. 2, 253 (1998).
* (137) S. Gubser, I. Klebanov, A. Polyakov, Physics Letters B 428, 105 (1998).
* (138) J. D. Bekenstein, Phys. Rev. D 7, 2333 (1973).
* (139) S. W. Hawking, Phys. Rev. D 14, 2460 (1976).
* (140) S. W. Hawking, Commun. Math. Phys. 43, 199 (1975).
* (141) T. Faulkner, N. Iqbal, H. Liu, J. McGreevy, D. Vegh, Science 329, 1043 (2010).
* (142) S.-S. Lee, Phys. Rev. D 79, 086006 (2009).
* (143) M. Čubrović, J. Zaanen, K. Schalm, Science 325, 439–444 (2009).
* (144) B. Goutéraux, Journal of High Energy Physics 2014 (2014).
* (145) A. Karch, Journal of High Energy Physics 2014 (2014).
* (146) B. Goutéraux, Journal of High Energy Physics 2014 (2014).
* (147) S. Cremonini, A. Hoover, L. Li, JHEP 2017, 133 (2017).
* (148) E. Blauvelt, S. Cremonini, A. Hoover, L. Li, S. Waskie, Phys. Rev. D 97, 061901 (2018).
* (149) D. J. Gross, Methods in Field Theory: Les Houches 1975, p. 181 (North-Holland, 1975).
* (150) G. La Nave, P. W. Phillips, Commun. Math. Phys. 366, 119 (2019).
* (151) R. A. Davison, B. Goutéraux, JHEP 2015, 90 (2015).
* (152) P. W. Phillips, B. W. Langley, J. A. Hutasoit, Phys. Rev. B 88, 115129 (2013).
* (153) S. A. Hartnoll, J. Polchinski, E. Silverstein, D. Tong, JHEP 2010, 120 (2010).
* (154) E. Kiritsis, F. Peña Benitez, JHEP 2015, 177 (2015).
* (155) C.-F. Chen, A. Lucas, Phys. Lett. B 774, 569 (2017).
* (156) R. A. Davison, S. A. Gentle, B. Goutéraux, Phys. Rev. Lett. 123, 141601 (2019).
* (157) M. Blake, A. Donos, Phys. Rev. Lett. 114, 021601 (2015).
* (158) E. Noether, Nachr. Ges. Wiss. Gottingen, Math.-Phys. Kl. 1918, 235 (1918).
* (159) P. W. Anderson, Phys. Rev. B 55, 11785 (1997).
* (160) K. Limtragool, P. W. Phillips, Phys. Rev. B 92, 155128 (2015).
* (161) Y. Liao, Euro. Phys. J. C 55, 483 (2008).
* (162) B. Basa, G. La Nave, P. W. Phillips, Phys. Rev. D 101, 106006 (2020).
* (163) M. Čulo, et al., SciPost Phys. 11, 012 (2021).
* (164) C. Putzke, et al., Nat. Phys. 17, 826 (2021).
* (165) I. Božović, X. He, J. Wu, A. T. Bollinger, Nature 536, 309 (2016).
* (166) B. Pippard, Proc. Roy. Soc. A 216, 547 (1953).
$\begin{split}&\quad\bigg{(}\frac{E(z_{1},z_{2})E(z_{1}^{\prime},z_{2}^{\prime})}{E(z_{1},z_{1}^{\prime})E(z_{1},z_{2}^{\prime})E(z_{2},z_{1}^{\prime})E(z_{2},z_{2}^{\prime})}\bigg{)}^{2}\vartheta_{\bm{\mu},\bm{\nu}}\big{(}2(\bm{u}(z_{1})-\bm{u}(z^{\prime}_{1})+\bm{u}(z_{2})-\bm{u}(z_{2}^{\prime}))\big{|}2\bm{\tau}\big{)}\vartheta_{\bm{\mu},\bm{\nu}}\big{(}\bm{0}\big{|}2\bm{\tau}\big{)}\\\
&\quad-\frac{(x_{1}-x_{2})(x_{1}^{\prime}-x_{2}^{\prime})}{(x_{1}-x_{1}^{\prime})(x_{2}-x_{2}^{\prime})}\frac{\vartheta_{\bm{\mu},\bm{\nu}}\big{(}2(\bm{u}(z_{1})-\bm{u}(z_{2}^{\prime}))\big{|}2\bm{\tau}\big{)}}{E(z_{1},z_{2}^{\prime})^{2}}\frac{\vartheta_{\bm{\mu},\bm{\nu}}\big{(}2(\bm{u}(z_{2})-\bm{u}(z_{1}^{\prime}))\big{|}2\tau\big{)}}{E(z_{1}^{\prime},z_{2})^{2}}\\\
&\quad+\frac{(x_{1}-x_{2})(x_{1}^{\prime}-x_{2}^{\prime})}{(x_{1}-x_{2}^{\prime})(x_{2}-x_{1}^{\prime})}\frac{\vartheta_{\bm{\mu},\bm{\nu}}\big{(}2(\bm{u}(z_{1})-\bm{u}(z^{\prime}_{1}))\big{|}2\bm{\tau}\big{)}}{E(z_{1},z_{1}^{\prime})^{2}}\frac{\vartheta_{\bm{\mu},\bm{\nu}}(2(\bm{u}(z_{2})-\bm{u}(z_{2}^{\prime}))\big{|}2\bm{\tau}\big{)}}{E(z_{2}^{\prime},z_{2})^{2}}\\\
&=\frac{\big{(}E(z_{1},z_{2})E(z_{1}^{\prime},z_{2}^{\prime})\eta(z_{1})\eta(z_{2})\eta(z_{1}^{\prime})\eta(z_{2}^{\prime})\big{)}^{2}}{(x_{1}-x_{1}^{\prime})(x_{1}-x_{2}^{\prime})(x_{2}-x_{1}^{\prime})(x_{2}-x_{2}^{\prime})}\vartheta_{\bm{\mu},\bm{\nu}}\big{(}2(\bm{u}(z_{1})+\bm{u}(z_{2})-\bm{u}(\infty_{-}))\big{|}2\bm{\tau}\big{)}\vartheta_{\bm{\mu},\bm{\nu}}\big{(}2(-\bm{u}(z^{\prime}_{1})-\bm{u}(z_{2}^{\prime})+\bm{u}(\infty_{-}))\big{|}2\bm{\tau}\big{)},\end{split}$
(5.15)
###### Proof.
The starting point is the simplest non-trivial identity of Theorem 2.8, namely
$m=2$ (Pfaffian of size $4$), which gives
$\begin{split}&\quad\left\langle{\frac{\det(x_{1}-\Lambda)^{2}\det(x_{2}-\Lambda)}{\det(x_{1}^{\prime}-\Lambda)^{2}\det(x_{2}^{\prime}-\Lambda)^{2}}}\right\rangle_{N}^{V}\\\
&=\frac{N}{N+1}(x_{1}-x_{1}^{\prime})(x_{2}-x_{2}^{\prime})(x_{1}-x_{2}^{\prime})(x_{2}-x_{1}^{\prime})\frac{Z_{N+1}^{V}Z_{N-1}^{V}}{(Z_{N}^{V})^{2}}\,\mathcal{K}_{N-1}^{\frac{N}{N-1}V}\big{(}\begin{smallmatrix}2&2\\\
x_{1}&x_{2}\end{smallmatrix}\big{)}\mathcal{K}_{N+1}^{\frac{N}{N+1}V}\big{(}\begin{smallmatrix}-2&-2\\\
x_{1}^{\prime}&x_{2}^{\prime}\end{smallmatrix}\big{)}\\\
&\quad-\frac{(x_{1}-x_{2}^{\prime})(x_{2}-x_{1}^{\prime})}{(x_{1}-x_{2})(x_{1}^{\prime}-x_{2}^{\prime})}\mathcal{K}_{N}^{V}\big{(}\begin{smallmatrix}2&-2\\\
x_{1}&x_{1}^{\prime}\end{smallmatrix}\big{)}\mathcal{K}_{N}^{V}\big{(}\begin{smallmatrix}2&-2\\\
x_{2}&x_{2}^{\prime}\end{smallmatrix}\big{)}+\frac{(x_{1}-x_{1}^{\prime})(x_{2}-x_{2}^{\prime})}{(x_{1}-x_{2})(x_{1}^{\prime}-x_{2}^{\prime})}\mathcal{K}_{N}^{V}\big{(}\begin{smallmatrix}2&-2\\\
x_{1}&x_{2}^{\prime}\end{smallmatrix}\big{)}\mathcal{K}_{N}^{V}\big{(}\begin{smallmatrix}2&-2\\\
x_{2}&x_{1}^{\prime}\end{smallmatrix}\big{)}.\end{split}$
We omit the details of the asymptotic analysis based on Lemmata 4.8 and 4.9:
it is very similar to the $\beta=1$ case. Instead of using them for $K=2N$,
$p=\pm 2$ and $c,\tilde{c}\in\\{-1,1\\}$, now we rather use them with $K=N$
and $p=\pm 1$ and $c,\tilde{c}\in\\{-2,2\\}$. ∎
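The size-4 Pfaffian entering the $m=2$ identity satisfies the familiar relation $\operatorname{pf}(A)^{2}=\det A$ for antisymmetric $A$; a quick numerical reminder (a sketch with a random antisymmetric matrix, unrelated to the kernels $\mathcal{K}$ above):

```python
import numpy as np

# Random 4x4 antisymmetric matrix
rng = np.random.default_rng(0)
u = rng.normal(size=6)
A = np.zeros((4, 4))
A[0, 1], A[0, 2], A[0, 3], A[1, 2], A[1, 3], A[2, 3] = u
A = A - A.T
# Pfaffian of a 4x4 antisymmetric matrix: a01*a23 - a02*a13 + a03*a12
pf = A[0, 1] * A[2, 3] - A[0, 2] * A[1, 3] + A[0, 3] * A[1, 2]
# pf(A)^2 equals det(A)
```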
###### Lemma 5.8.
Theorem 5.7 is equivalent to Theorem 5.4.
###### Proof.
We apply Theorem 5.4 to the hyperelliptic curve with matrix of periods
$\bm{\tau}^{\prime}=-\bm{\tau}^{-1}$. Then, (5.9) is an identity involving
theta functions with matrix
$\frac{\bm{\tau^{\prime}}}{2}=-\frac{\bm{\tau}^{-1}}{2}$. On the other hand,
the modular transformation of the theta function is (see [Mum07, Equation
5.1]), for any $\bm{z},\bm{\mu},\bm{\nu}\in\mathbb{R}^{g}$
$\begin{split}\vartheta_{\bm{\nu},-\bm{\mu}}\big{(}\bm{z}\big{|}-\tfrac{\bm{\tau}^{-1}}{2}\big{)}=D_{\bm{\tau}}\cdot\mathrm{e}^{2{\rm
i}\pi\bm{z}\cdot\bm{\tau}^{-1}(\bm{z})}\vartheta_{\bm{\mu},\bm{\nu}}\big{(}2\bm{z}\big{|}2\bm{\tau}\big{)}.\end{split}$
for some constant $D_{\bm{\tau}}\in\mathbb{C}^{*}$. Applying this to each term
of (5.9), all terms acquire the same prefactor and we are left with Theorem
5.7. The operation is reversible. ∎
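For genus one and zero characteristics, the modular transformation used here reduces to Jacobi's imaginary transformation $\vartheta(z/\tau\,|-1/\tau)=(-{\rm i}\tau)^{1/2}\mathrm{e}^{{\rm i}\pi z^{2}/\tau}\vartheta(z\,|\,\tau)$, which can be checked numerically (a sketch; the truncation range and the sample values of $z$, $\tau$ are arbitrary choices):

```python
import cmath

def theta(z, tau, N=60):
    """Genus-one theta: sum_n exp(i*pi*(n^2*tau + 2*n*z)), truncated at |n| <= N."""
    return sum(cmath.exp(1j * cmath.pi * (n * n * tau + 2 * n * z))
               for n in range(-N, N + 1))

tau = 2j
z = 0.3 + 0.1j
lhs = theta(z / tau, -1 / tau)
rhs = cmath.sqrt(-1j * tau) * cmath.exp(1j * cmath.pi * z * z / tau) * theta(z, tau)
# both sides agree to machine precision
```

The Gaussian decay $\mathrm{e}^{-\pi n^{2}\,\mathrm{Im}\,\tau}$ makes the truncated sum converge extremely fast for purely imaginary $\tau$.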
### 5.4 Formula for the multi-cut equilibrium energy (Proof of Proposition
4.3)
In the proof of Theorem 5.1, if we did not use Proposition 4.3 to simplify the
exponential in (5.4), the rest of the arguments would prove the identity (5.1)
with a prefactor
$e^{2\mathcal{E}[\mu_{\text{eq}}]+2\mathcal{L}[V]+\mathcal{Q}[V,V]+4{\rm
i}\pi\bm{\epsilon}^{*}\cdot(\bm{\tau}(\bm{\epsilon}^{*})+\bm{u}(\infty_{-}))}$
(5.16)
in the right-hand side, valid for any hyperelliptic curve with real Weierstraß
points and the equilibrium measure $\mu_{\text{eq}}$ of the associated
(unconstrained) $\beta=2$ ensemble. Taking all points
$z,z^{\prime},w,w^{\prime}$ to $\infty_{+}$ in this modified identity implies
that this extra factor (5.16) must be equal to $1$. The argument of the
exponential is manifestly real, except perhaps for the last term. As the curve
is hyperelliptic, a basis of the space of holomorphic forms is given by
$\differential\pi_{k}=\frac{x^{k}\differential x}{s}$ for $k\in[g]$. Recall
that $s$ takes imaginary values on the segments $[a_{h},b_{h}]$ for each
$h\in[0,g]$, and real values between the segments. This implies that the
matrix $Q_{k,h}=\oint_{\mathcal{A}_{h}}\differential\pi_{k}$ has purely
imaginary entries. Since $(\differential u_{h})_{h=1}^{g}$ is the basis dual
to $\mathcal{A}$-cycle integration, we have
$\differential
u_{h}=\sum_{k=1}^{g}Q^{-1}_{h,k}\differential\pi_{k},\qquad\text{with}\,\,Q^{-1}\,\,\text{purely
imaginary}.$
Integrating this on the $\mathcal{B}$-cycles which only run between segments
(Section 3.3.1) yields a purely imaginary matrix of periods $\bm{\tau}$. A
path from $\infty_{+}$ to $\infty_{-}$ that does not cross any of the
$\mathcal{A}$- and $\mathcal{B}$-cycles described in Section 3.3.1 is for
instance the path travelling along the real axis in $\hat{C}_{+}$ from
$-\infty$ to $a_{0}$, then along the real axis in $\hat{C}_{-}$ from $a_{0}$
to $-\infty_{-}$. In this range $s$ is real-valued, so $\bm{u}(\infty_{-})$ is
also purely imaginary. All in all, (5.16) only involves the real exponential,
and we conclude that
$2\mathcal{E}[\mu_{\text{eq}}]+2\mathcal{L}[V]+\mathcal{Q}[V,V]+4{\rm
i}\pi\bm{\epsilon}^{*}\cdot(\bm{\tau}(\bm{\epsilon}^{*})+\bm{u}(\infty_{-}))=0.$
This argument was for $\beta=2$, but we retrieve Proposition 4.3 in full
generality since it is simply the $\beta=2$ identity multiplied by
$\frac{\beta}{2}$ and taking into account the prefactor $\frac{2}{\beta}$ in
the definition of $\mathcal{Q}$, while $\mu_{\text{eq}}$ and $\mathcal{L}$ are
independent of $\beta$. So, it was justified (without circularity in the logic) to
proceed with Proposition 4.3 in the proofs of Section 5. In fact, the same
argument would establish Proposition 4.3 as a byproduct of the proof of the
$\beta=1$ Theorem 5.4 or of the $\beta=4$ Theorem 5.7 instead of Theorem 5.1.
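As a side check, the claim above that the $\mathcal{A}$-period matrix is purely imaginary for a hyperelliptic curve with real Weierstraß points can be verified numerically. Below, $s=\sqrt{\sigma}$ is built as a product of principal square roots, which is analytic off the two cuts, and $\oint\differential x/s$ is computed on a circle around one cut (a sketch; the branch points and the contour are arbitrary choices):

```python
import numpy as np

a0, b0, a1, b1 = -2.0, -1.0, 1.0, 2.0   # real Weierstrass points, two cuts
def s(x):
    # product of principal square roots: analytic off the cuts [a0,b0] and [a1,b1]
    return (np.sqrt(x - a0 + 0j) * np.sqrt(x - b0 + 0j)
            * np.sqrt(x - a1 + 0j) * np.sqrt(x - b1 + 0j))

n = 4000
th = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
x = 1.5 + 0.9 * np.exp(1j * th)          # circle enclosing only the cut [a1, b1]
dxdth = 0.9j * np.exp(1j * th)
Q = np.sum(dxdth / s(x)) * (2 * np.pi / n)   # A-cycle period of dx/s
# Re(Q) vanishes: the A-period is purely imaginary, as claimed
```

Collapsing the contour onto the cut, where $s$ takes imaginary values, shows why the real part must vanish exactly; the periodic trapezoid rule converges spectrally here since the integrand is analytic on the contour.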
## Appendix A Variation of the entropy with respect to filling fractions
(Proof of Proposition 4.4)
Consider the equilibrium measure $\mu_{\text{eq},\bm{\epsilon}}$ of a
$\beta$-ensemble with fixed filling fractions $\bm{\epsilon}$ such that
$M(x)=t_{2g+2}\prod_{h=1}^{g}(x-z_{h})$ with $z_{h}\in(b_{h-1},a_{h})$ in the
notations of Section 2.3. The density of $\mu_{\text{eq},\bm{\epsilon}}$ is
$\rho(x)=\frac{t_{2g+2}}{2\pi}\prod_{h=1}^{g}|x-z_{h}|\prod_{h=0}^{g}\sqrt{|x-a_{h}||x-b_{h}|}\cdot\mathds{1}_{S}(x).$
We need to compute for each $h\in[g]$
$v_{\text{eq},h}=\Big{(}\frac{\beta}{2}-1\Big{)}\int_{S}\partial_{\epsilon_{h}}\big{(}\rho(x)\ln\rho(x)\big{)}\differential x=\Big{(}\frac{\beta}{2}-1\Big{)}\int_{S}\big{(}\partial_{\epsilon_{h}}\rho(x)\big{)}\ln\rho(x)\differential
x.$ (A.1)
For the last equality we used that $\int_{S}\rho(x)\differential x=1$ has
vanishing $\epsilon_{h}$-derivative. The density $\rho$ can be expressed as a
jump of $W_{1}$ to rewrite
$\begin{split}v_{\text{eq},h}&=\Big{(}\frac{\beta}{2}-1\Big{)}\int_{S}\partial_{\epsilon_{h}}\frac{W_{1}(x-{\rm
i}0)-W_{1}(x+{\rm i}0)}{2{\rm i}\pi}\ln\rho(x)\differential x\\\
&=\Big{(}\frac{\beta}{2}-1\Big{)}\bigg{(}\sum_{k=1}^{g}\Upsilon_{h}(z_{k})+\frac{1}{2}\sum_{h=0}^{g}\big{(}\Upsilon_{h}(a_{h})+\Upsilon_{h}(b_{h})\big{)}\bigg{)}\end{split}$
(A.2)
in terms of the integrals
$\forall\xi\in\mathbb{R}
\qquad\Upsilon_{h}(\xi):=\int_{S}\partial_{\epsilon_{h}}\Big{(}\frac{W_{1}(x-{\rm
i}0)-W_{1}(x+{\rm i}0)}{2{\rm i}\pi}\Big{)}\ln|x-\xi|\differential x.$ (A.3)
It is well-known (see e.g. [BG24, Appendix A]) that
$\forall z\in\hat{C}_{+}\qquad\partial_{\epsilon_{h}}W_{1}(X(z))\differential
X(z)=2{\rm i}\pi\differential u_{h}(z).$
For $x\in\mathbb{C}\setminus S$ or in $S\pm{\rm i}0$, we define
$\mathfrak{z}(x)$ to be the unique point in $\overline{\hat{C}_{+}}$ such that
$X(\mathfrak{z}(x))=x$. Then:
$\Upsilon_{h}(\xi)=\int_{S}\big{(}\differential u_{h}(\mathfrak{z}(x-{\rm
i}0))-\differential u_{h}(\mathfrak{z}(x+{\rm
i}0))\big{)}\ln|x-\xi|=2\int_{S}\differential u_{h}(\mathfrak{z}(x-{\rm
i}0))\ln|x-\xi|.$
This is a differentiable function of $\xi$. For $\xi\notin S$, we can compute
$\partial_{\xi}\Upsilon_{h}(\xi)=\int_{S}\big{(}\differential
u_{h}(\mathfrak{z}(x-{\rm i}0))-\differential u_{h}(\mathfrak{z}(x+{\rm
i}0))\big{)}\frac{1}{\xi-x}=\oint_{S}\frac{\differential
u_{h}(z)}{\xi-X(z)}=2{\rm i}\pi\frac{\differential u_{h}}{\differential
X}(\mathfrak{z}(\xi)).$
For $\xi\in\mathring{S}$, we rather have
$\partial_{\xi}\Upsilon_{h}(\xi)=2\fint_{S}\frac{\differential
u_{h}(\mathfrak{z}(x-{\rm i}0))}{\xi-x}=-\frac{\differential
u_{h}}{\differential X}(\mathfrak{z}(\xi+{\rm i}0))-\frac{\differential
u_{h}}{\differential X}(\mathfrak{z}(\xi-{\rm i}0))=0.$
We now integrate this along the real line starting from
$\xi=-\infty+{\rm i}0$ and using the continuity of $\Upsilon_{h}$ on the real
axis shifted by $+{\rm i}0$. From the definition (A.3) we can see that
$\lim_{\xi\rightarrow-\infty}\Upsilon_{h}(\xi)=0$. Therefore
$\frac{\Upsilon_{h}(\xi)}{2{\rm
i}\pi}=\left\\{\begin{array}[]{lll}u_{h}(\mathfrak{z}(\xi))+\sum_{l=0}^{k-1}\big{(}u_{h}(a_{l})-u_{h}(b_{l})\big{)}&&\text{if}\quad\xi\in(b_{k-1},a_{k})\\\
u_{h}(a_{k})+\sum_{l=0}^{k-1}\big{(}u_{h}(a_{l})-u_{h}(b_{l})\big{)}&&\text{if}\quad\xi\in[a_{k},b_{k}]\end{array}\right.$
(A.4)
with the conventions $b_{-1}=-\infty$ and $a_{g+1}=+\infty$. Note that we
could start integrating along the real line coming from $+\infty$, but we
would get an equivalent expression because
$\sum_{k=0}^{g}\bm{u}(a_{k})=\sum_{k=0}^{g}\bm{u}(b_{k}).$ (A.5)
The primitive $\bm{u}$ of $\differential\bm{u}$ in $(\mathbb{C}\setminus S)$
is multivalued, because this domain is not simply-connected. Yet, for the
previous computation, it suffices to define it by integration based at
$\infty_{+}$ in the simply-connected domain $\mathbb{H}\setminus S$, and it is
extended to $S$ and hence $\overline{\mathbb{H}}$ by continuity. Inserting the
formula (A.4) in (A.2) we arrive at
$\bm{v}_{\text{eq}}=2{\rm
i}\pi\bigg{(}\frac{\beta}{2}-1\bigg{)}\bigg{[}\sum_{k=1}^{g}\big{(}\bm{u}(z_{k})+\bm{u}(a_{0})-\bm{u}(b_{0})+\cdots+\bm{u}(a_{k-1})-\bm{u}(b_{k-1})\big{)}+\sum_{k=0}^{g}\frac{\bm{u}(a_{k})+\bm{u}(b_{k})}{2}\bigg{]}.$
(A.6)
We now compute $\bm{u}(a_{k})$ and $\bm{u}(b_{k})$ as defined above. Denote
$(\bm{e}_{1},\ldots,\bm{e}_{g})$ the canonical basis of $\mathbb{C}^{g}$. Due
to the description of the representatives of the $\mathcal{A}$- and
$\mathcal{B}$-cycles in Section 3.3.1 and the fact that the hyperelliptic
involution changes the sign of $\differential\bm{u}$, we have
$\bm{u}(b_{0})-\bm{u}(a_{0})=-\frac{1}{2}\oint_{\mathcal{A}_{0}}\differential\bm{u}=\frac{1}{2}\sum_{l=1}^{g}\bm{e}_{l},$
(A.7)
and for any $k\in[g]$
$\begin{split}\bm{u}(b_{k})-\bm{u}(a_{k})&=-\frac{1}{2}\oint_{\mathcal{A}_{k}}\differential\bm{u}=-\frac{1}{2}\bm{e}_{k},\\\
\bm{u}(a_{k})-\bm{u}(b_{k-1})&=\frac{1}{2}\oint_{\mathcal{B}_{k}-\mathcal{B}_{k-1}}\differential\bm{u}=\frac{1}{2}\big{(}\bm{\tau}(\bm{e}_{k})-\bm{\tau}(\bm{e}_{k-1})\big{)},\end{split}$
(A.8)
with the conventions $\mathcal{B}_{0}=0$ and $\bm{e}_{0}=0$. Since $a_{0}$ is
the only Weierstraß point that does not belong to the $\mathcal{A}$- and
$\mathcal{B}$-cycles specified in Section 3.3.1, $\bm{u}(\infty_{-})$ can be
obtained by integrating $\differential\bm{u}$ in the first sheet from $-\infty$ on
the real line to $a_{0}$, and then from $a_{0}$ to $-\infty$ on the real line
in the second sheet. Therefore
$\bm{u}(a_{0})=\frac{1}{2}\bm{u}(\infty_{-}).$
From (A.7)-(A.8) we deduce
$\bm{u}(b_{0})=\frac{1}{2}\Big{(}\bm{u}(\infty_{-})+\sum_{l=1}^{g}\bm{e}_{l}\Big{)},$
and for $k\in[g]$
$\begin{split}\bm{u}(a_{k})=\frac{1}{2}\Big{(}\bm{u}(\infty_{-})+\sum_{l=k}^{g}\bm{e}_{l}+\sum_{l=1}^{k}\bm{\tau}(\bm{e}_{l})\Big{)},\\\
\bm{u}(b_{k})=\frac{1}{2}\Big{(}\bm{u}(\infty_{-})+\sum_{l=k+1}^{g}\bm{e}_{l}+\sum_{l=1}^{k}\bm{\tau}(\bm{e}_{l})\Big{)}.\end{split}$
Therefore
$\begin{split}\sum_{k=0}^{g}\bm{u}(a_{k})&=\sum_{k=0}^{g}\bm{u}(b_{k})=\frac{1}{2}\bigg{[}(g+1)\bm{u}(\infty_{-})+\sum_{k=1}^{g}\bigg{(}\sum_{l=k}^{g}\bm{e}_{l}+\sum_{l=1}^{k}\bm{\tau}(\bm{e}_{l})\bigg{)}\bigg{]}\\\
&=\frac{1}{2}\Big{(}(g+1)\bm{u}(\infty_{-})+\sum_{l=1}^{g}\big{(}l\bm{e}_{l}+(g+1-l)\bm{\tau}(\bm{e}_{l})\big{)}\Big{)}.\end{split}$
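For orientation, in the genus-one case $g=1$ (two cuts, scalar $\tau$, and $\bm{e}_{1}=1$) these expressions specialize to

$u(a_{0})=\tfrac{1}{2}u(\infty_{-}),\qquad u(b_{0})=\tfrac{1}{2}\big{(}u(\infty_{-})+1\big{)},\qquad u(a_{1})=\tfrac{1}{2}\big{(}u(\infty_{-})+1+\tau\big{)},\qquad u(b_{1})=\tfrac{1}{2}\big{(}u(\infty_{-})+\tau\big{)},$

so that $u(a_{0})+u(a_{1})=u(b_{0})+u(b_{1})$, in agreement with (A.5).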
We can return to the computation of $\bm{v}_{\text{eq}}$. By definition in
(4.1) it is real, so we can replace $\bm{u}$ by $\text{Im}\,\bm{u}$ in (A.6).
Since $\bm{u}(b_{l})-\bm{u}(a_{l})$ is real for any $l\in[0,g]$, we get
$\bm{v}_{\text{eq}}=2\pi\bigg{(}1-\frac{\beta}{2}\bigg{)}\bigg{[}\sum_{k=1}^{g}\Big{(}\text{Im}\,\bm{u}(z_{k})+\frac{g+1-k}{2}\,\text{Im}\,\bm{\tau}(\bm{e}_{k})\Big{)}+\frac{g+1}{2}\text{Im}\,\bm{u}(\infty_{-})\bigg{]}.$
Since we already know that $\bm{\tau}$ and $\bm{u}(\infty_{-})$ are purely
imaginary, we can drop the imaginary parts and divide by ${\rm i}$ instead, and
this gives the final formula.
## References
* [APS01] S. Albeverio, L. Pastur, and M. Shcherbina. On the $1/N$ expansion for some unitary invariant ensembles of random matrices. Commun. Math. Phys., 224:271–305, 2001.
* [BDE00] G. Bonnet, F. David, and B. Eynard. Breakdown of universality in multi-cut matrix models. J. Phys. A, 33:6739–6768, 2000. cond-mat/0003324.
* [BE12] G. Borot and B. Eynard. Geometry of spectral curves and all order dispersive integrable system. SIGMA, 8(100), 2012. math-ph/1110.4936.
* [BE17] G. Borot and B. Eynard. Spectral curves, root systems, and application to $\mathrm{SU}(N)$ Chern–Simons theory on Seifert spaces. Sel. Math. New Series, 23(2):915–1025, 2017. math-ph/1407.4500.
* [BEO15] G. Borot, B. Eynard, and N. Orantin. Abstract loop equations, topological recursion, and applications. Commun. Number Theory and Physics, 9(1):51–187, 2015. math-ph/1303.5808.
* [Ber03] M. Bertola. Free energy of the two-matrix model/dToda tau-function. Nucl. Phys. B, 669:435–461, 2003. hep-th/0306184.
* [Ber11] M. Bertola. Boutroux curves with external field: equilibrium measures without a minimization problem. Anal. Math. Phys., 1(2):167–211, 2011. math-ph/0705.3062.
* [BG13] G. Borot and A. Guionnet. Asymptotic expansion of $\beta$ matrix models in the one-cut regime. Commun. Math. Phys, 317(2):447–483, 2013. math.PR/1107.1167.
* [BG24] G. Borot and A. Guionnet. Asymptotic expansion of $\beta$ matrix models in the multi-cut regime. to appear in Forum of Mathematics, Sigma, 2024. math-ph/1303.1045.
* [BGG] G. Borot, V. Gorin, and A. Guionnet. Fluctuations for multi-cut discrete $\beta$-ensembles and application to random tilings. in preparation.
* [BGK15] G. Borot, A. Guionnet, and K. Kozlowski. Large-$N$ asymptotic expansion for mean field models with Coulomb gas interaction. Int. Math. Res. Not., (20):10451–10524, 2015. math-ph/1312.6664.
* [BIPZ78] E. Brézin, C. Itzykson, G. Parisi, and J. B. Zuber. Planar diagrams. Communications in Mathematical Physics, 59(1):35–51, January 1978.
* [BS06] A. Borodin and E. Strahov. Averages of characteristic polynomials in random matrix theory. Commun. Pure. Appl. Math., LIX:0161–0253, 2006.
* [CFWW21] C. Charlier, B. Fahs, C. Webb, and M.D. Wong. Asymptotics of Hankel determinants with a multi-cut regular potential and Fisher–Hartwig singularities. 2021. math-ph/2111.08395.
* [CGM15] T. Claeys, T. Grava, and K.D.T.-R. McLaughlin. Asymptotics for the partition function in two-cut random matrices. Commun. Math. Phys., 339(2):513–587, 2015. math-ph/1410.7001.
* [CMSP17] J. Carlson, S. Müller-Stach, and C. Peters. Period mappings and period domains. Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2nd edition, 2017.
* [EKR15] B. Eynard, T. Kimura, and S. Ribault. Random matrices. 2015. math-ph/1510.04430.
* [EM98] B. Eynard and M.L. Mehta. Matrices coupled in a chain. I. Eigenvalue correlations. J. Phys. A: Math. Gen., 31:4449, 1998. cond-math/9710230.
* [Fay70] J. Fay. Theta functions on Riemann surfaces, volume 352 of Lecture Notes in Mathematics. Springer, Berlin, 1970.
* [FK92] Hershel M. Farkas and Irwin Kra. Riemann Surfaces, volume 71 of Graduate Texts in Mathematics. Springer, New York, NY, 1992.
* [Joh98] K. Johansson. On fluctuations of eigenvalues of random hermitian matrices. Duke Math. J., 91:151–204, 1998.
* [Jur91] J. Jurkiewicz. Chaotic behaviour in one-matrix model. Phys. Lett. B, 261(3):260–268, 1991.
* [Kri77] I.M. Krichever. Methods of algebraic geometry in the theory of nonlinear equations. Russian. Math. Surveys, 32(6):185–213, 1977.
* [Mat08] V.B. Matveev. $30$ years of finite-gap integration theory. Phil. Trans. R. Soc. A, 366:837–875, 2008.
* [Meh04] M.L. Mehta. Random matrices, volume 142 of Pure and Applied Mathematics. Elsevier/Academic, Amsterdam, 3rd edition, 2004.
* [Mig83] A. A. Migdal. Loop equations and 1/N expansion. Physics Reports, 102(4):199–290, December 1983.
* [Mir41] C. Miranda. Un’osservazione su un teorema di Brouwer. Bollettino dell’Unione Matematica Italiana, 2(3):5–7, 1941.
* [Mul84] M. Mulase. Cohomological structure in soliton equations and Jacobian varieties. J. Diff. Geom., 19:403–430, 1984.
* [Mum07] D. Mumford. Tata lectures on Theta. Modern Birkhäuser Classics. Birkhäuser, Boston, 2007. I, reprint of the 1983 edition.
* [Pas06] L. Pastur. Limiting laws of linear eigenvalue statistics for Hermitian matrix models. J. Math. Phys., 47(103303), 2006. math.PR/0608719.
* [Shc13] M. Shcherbina. Fluctuations of linear eigenvalue statistics of $\beta$ matrix models in the multi-cut regime. J. Stat. Phys, 151(6):1004–1034, 2013. math-ph/1205.7062.
* [Shi86] T. Shiota. Characterization of Jacobian varieties in terms of soliton equations. Invent. Math., 83:333–382, 1986.
# Hypergeometric integrals, hook formulas and Whittaker vectors
G. Felder⋄, A. Smirnov†, V. Tarasov∘, A. Varchenko⋆
###### Abstract.
We determine the coefficient of proportionality between two multidimensional
hypergeometric integrals. One of them is a solution of the dynamical
difference equations associated with a Young diagram and the other is the
vertex integral associated with the Young diagram. The coefficient of
proportionality is the inverse of the product of weighted hooks of the Young
diagram. It turns out that this problem is closely related to the question of
describing the action of the center of the universal enveloping algebra of
$\mathfrak{gl}_{n}$ on the space of Whittaker vectors in the tensor product of
dual Verma modules with fundamental modules, for which we give an explicit
basis of simultaneous eigenvectors.
⋄ Department of Mathematics, ETH Zurich, 8092 Zurich, Switzerland
${}^{\circ}\mskip-0.99998mu$Department of Mathematical Sciences, Indiana
University–Purdue University Indianapolis
402 North Blackford St, Indianapolis, IN 46202-3216, USA
†,⋆ Department of Mathematics, University of North Carolina at Chapel Hill
Chapel Hill, NC 27599-3250, USA
Key words: Singular vectors, Young diagrams, excited diagrams, hooks, master
function, hypergeometric integrals, Whittaker vectors
2020 Mathematics Subject Classification:
††footnotetext: ${}^{\diamond}\mskip-0.99998mu$E-mail:
<EMAIL_ADDRESS>, supported in part by the SNSF under grants
196892, 205607
${}^{\dagger}\mskip-0.99998mu$E-mail: <EMAIL_ADDRESS>, supported in
part by the NSF under grant DMS-2054527 and by the RSF under grant
19-11-00062
${}^{\circ}\mskip-0.99998mu$E-mail: <EMAIL_ADDRESS>, supported in part by
the Simons Foundation under grants 430235, 852996
${}^{\star}\mskip-0.99998mu$E-mail: <EMAIL_ADDRESS>, supported in part by
the NSF under grant DMS-1954266
In memory of Igor Krichever (1950–2022)
###### Contents
1. 1 Introduction
2. 2 Singular vectors
1. 2.1 Linear function $\psi$
2. 2.2 Fundamental representations
3. 2.3 Young diagrams
4. 2.4 Singular vectors
5. 2.5 Hooks
6. 2.6 Recurrence relations
7. 2.7 Excited diagrams
8. 2.8 Proof of Theorem 2.5
9. 2.9 Change of variable and weight shift
3. 3 Applications
1. 3.1 Master function
2. 3.2 Weight function of $u_{\lambda}$
3. 3.3 Two integrals
4. 3.4 Whittaker vectors
## 1\. Introduction
We determine the coefficient of proportionality between two multidimensional
hypergeometric integrals. One of them is a solution of the dynamical
difference equations associated with a Young diagram and the other is the
vertex integral associated with the Young diagram. The coefficient of
proportionality is the inverse of the product of weighted hooks of the Young
diagram. The same coefficient appears in the problem of diagonalizing the
action of the center of the universal enveloping algebra of
$\mathfrak{gl}_{n}$ on the space of Whittaker vectors in the tensor product of
dual Verma modules with fundamental modules.
The standard basis $(u_{\lambda})$ of a fundamental $\mathfrak{gl}_{n}$-module
$U_{r}$ is labeled by the Young diagrams $\lambda$ inscribed in an
$(n-r)\times r$-rectangle. One assigns to a basis vector $u_{\lambda}$ a
system of dynamical difference equations
(1.1) $\displaystyle
I(z_{1},\dots,z_{i}+\kappa,\dots,z_{n-1},\kappa)=a_{i}(z_{1},\dots,z_{n-1},\kappa)I(z_{1},\dots,z_{n-1},\kappa),\quad
i=1,\dots,n-1,$
where $I(z_{1},\dots,z_{n-1},\kappa)$ is an unknown scalar function and
$a_{i}$ are suitable coefficients defined in terms of the
$\mathfrak{gl}_{n}$-action on $U_{r}$. The equations were introduced in [TV],
and solutions were constructed in [MV]. A solution $I_{\lambda}(z,\kappa)$ to
(1.1) is given by a hypergeometric integral of dimension equal to the number
of boxes in $\lambda$.
We also introduce another hypergeometric integral $V_{\lambda}(z,\kappa)$ of
the same dimension associated with $\lambda$. We show that it is proportional
to $I_{\lambda}(z,\kappa)$ and determine the coefficient of proportionality
between the two integrals in Theorem 3.1:
(1.2) $\displaystyle
V_{\lambda}(z,\kappa)\,=\,\frac{1}{\prod_{\square\in\lambda}h(\square)(z)}\,I_{\lambda}(z,\kappa)\,,$
where $h(\square)(z)$ is the hook-weight of a box $\square$ of $\lambda$, see
the definition in (2.5).
Our motivation for considering (1.2) is the following. On the one hand, the
enumerative geometry of quiver varieties is controlled by two important
objects: the vertex function and the capping operator of a quiver variety [O].
These are the generating functions counting quasimaps to the quiver variety
with nonsingular or relative boundary conditions.
On the other hand, with any quiver variety $X$ one can associate a
hypergeometric integral. The 3D-mirror symmetry predicts that this integral
computes the vertex function of the mirror variety $X^{!}$. The capping
operator of $X^{!}$ is obtained in the same way with additional insertion of
the stable envelope functions to the integral [AO] (also known as weight
functions in the theory of qKZ equations). For example, see [SmV1, SmV2] where
the integral formulas of this type are discussed in the case of cotangent
bundle over Grassmannian $X^{!}=T^{*}Gr(k,n)$.
Let $X$ be the zero-dimensional Nakajima quiver variety of type $A$ associated
to a Young diagram $\lambda$ [DS1, DS2]. The hypergeometric integral assigned
to $X$ is given by the function $V_{\lambda}(z,\kappa)$ and the integral for
the capping operator is given by $I_{\lambda}(z,\kappa)$. Since the cohomology
of $X_{\lambda}$ are one-dimensional, it is expected from [O] that both
integrals are proportional. Our formula (1.2) establishes the coefficient of
proportionality explicitly.
Note, however, that in this case the mirror $X^{!}_{\lambda}$ is a
$2|\lambda|$-dimensional variety which cannot be realized as a quiver
variety. Thus, the vertex function for $X^{!}_{\lambda}$ is not defined by the
methods of [O]. To clarify this point, we refer to the function
$V_{\lambda}(z,\kappa)$ as the vertex integral, instead of the “vertex
function of $X^{!}_{\lambda}$”.
We note also that the coefficient in (1.2) has a geometric meaning: the mirror
variety $X^{!}_{\lambda}$ is equipped with a torus action with unique fixed
point. The denominator of the coefficient in (1.2) is the product over the
half of the tangent weights at this point with parameters $z_{1},\dots,z_{n}$
understood as the equivariant parameters of the torus [DS2].
To determine the coefficient of proportionality we consider the tensor product
$M\otimes U_{r}$, where $M$ is a Verma module, and analyze singular weight
vectors in $M\otimes U_{r}$ of the form
$\displaystyle v(\lambda)=\sum_{\mu\leqslant\lambda}v_{\mu}\otimes
u_{\mu}\,,\qquad\operatorname{with}\ v_{\mu}\in M.$
The collection of vectors $(v_{\mu})$ is quite a nontrivial object. We
simplify it by choosing a suitable linear function $\psi:M\to{\mathbb{C}}$ and
considering instead the collection of numbers $(\psi(v_{\mu}))$. We develop
simple recurrence relations and formulas for these numbers. We also show that
$\displaystyle
V_{\lambda}(z,\kappa)\big{/}I_{\lambda}(z,\kappa)=\psi(v_{\emptyset})/\psi(v_{\lambda}).$
Together with formulas for $\psi(v_{\mu})$ this equation proves formula (1.2).
The numbers $\psi(v_{\mu})$ are functions of the highest weight of $M$
associated to the skew Young diagram $\lambda/\mu$. We show that these numbers
arise in the problem of diagonalizing the action of the center $Z$ of the
universal enveloping algebra on the space of Whittaker vectors of
$M^{\prime}\otimes U_{r}$ where $M^{\prime}$ is the dual of the Verma module
$M$.
A Whittaker vector in a $\mathfrak{gl}_{n}$-module is a vector on which the
nilpotent subalgebra of lower triangular matrices acts via a fixed regular
character, see Section 3.4. The center $Z$ acts on the space of Whittaker
vectors $\operatorname{Wh}(M^{\prime}\otimes U_{r})$. A Whittaker vector
$\beta\in M^{\prime}\otimes U_{r}$ is uniquely determined by its contraction
$\beta(v)\in U_{r}$ with the highest weight vector $v$ of $M$. For generic
highest weight of $M$, we show that a basis of eigenvectors is given by
$\beta_{\lambda}(v)=\sum_{\mu\leqslant\lambda}\sum_{\nu\in
E(\lambda/\mu)}\frac{1}{\prod_{\square\in\lambda\smallsetminus\nu}h(\square)(z)}\,u_{\mu},$
where $\lambda$ runs over the set of Young diagrams fitting in an $(n-r)\times
r$ rectangle and $z$ is an affine function of the highest weight of $M$. The
set $E(\lambda/\mu)$ is the set of Ikeda–Naruse excited diagrams, which are
subsets of $\lambda$ obtained from $\mu$ by moving boxes according to certain
rules. The coefficient of $u_{\mu}$ is $\psi(v_{\mu})$. In particular the
coefficient of $u_{\emptyset}$ is the coefficient of proportionality in (1.2)
(the coefficient of $u_{\lambda}$ is normalized to be 1).
### Acknowledgements
The fourth author thanks FIM at ETH Zurich and IHES in Bures-sur-Yvette for
hospitality in June-July 2023. The fourth author also thanks E. Mukhin for
useful discussions.
## 2\. Singular vectors
### 2.1. Linear function $\psi$
Consider the complex Lie algebra $\mathfrak{gl}_{n}$ with standard generators
$e_{ij}$, $i,j=1,\dots,n$, simple roots $\alpha_{i}$, $i=1,\dots,n-1$, half-
sum $\rho$ of positive roots. Denote $e_{i}=e_{i,i+1}$, $f_{i}=e_{i+1,i}$,
$h_{i}=e_{i,i}-e_{i+1,i+1}$ for $i=1,\dots,n-1$.
Let $M$ be a $\mathfrak{gl}_{n}$ Verma module with highest weight vector $v$.
Define a linear function $\psi:M\to{\mathbb{C}}$ as follows. Any vector
$v^{\prime}\in M$ can be written (in general non-uniquely) as a finite linear
combination of the products of elements $f_{1},\dots,f_{n-1}$ applied to $v$,
$\displaystyle v^{\prime}=\sum
c_{i_{m},i_{m-1},\dots,i_{1}}f_{i_{m}}f_{i_{m-1}}\dots f_{i_{1}}v\,,$
where $1\leqslant i_{j}\leqslant n-1$ and
$c_{i_{m},i_{m-1},\dots,i_{1}}\in{\mathbb{C}}$. Set
(2.1) $\displaystyle\psi(v^{\prime})=\sum c_{i_{m},i_{m-1},\dots,i_{1}}\,.$
The function $\psi$ is well-defined since it is zero on Serre’s relations
$f_{i}^{2}f_{i\pm 1}-2f_{i}f_{i\pm 1}f_{i}+f_{i\pm 1}f_{i}^{2}=0$. It is in
fact a Whittaker vector in the dual of $M$, as will be discussed in Section
3.4.
### 2.2. Fundamental representations
Let $U_{r}$, $r=1,\dots,n-1$, be the $r$-th fundamental representation of
$\mathfrak{gl}_{n}$. Its highest weight is $(1,\dots,1,0,\dots,0)$ with $r$
ones.
$U_{1}$ is the vector representation ${\mathbb{C}}^{n}$ with standard
basis $u_{i}$, $i=1,\dots,n$, and $U_{r}$ is the $r$-th exterior power
$\wedge^{r}{\mathbb{C}}^{n}$ of the vector representation with standard basis
(2.2) $\displaystyle u_{I}:=u_{i_{1}}\wedge u_{i_{2}}\wedge\dots\wedge
u_{i_{r}}\,,$
where $I=\\{i_{1}<i_{2}<\dots<i_{r}\\}$ is any $r$-element subset of
$\\{1,\dots,n\\}$. Denote by $\mathcal{I}_{r}$ the set of such subsets.
The decomposition
$\displaystyle U_{r}=\oplus_{I\in\mathcal{I}_{r}}{\mathbb{C}}u_{I}\,$
is the weight decomposition. We have $e_{ii}u_{I}=u_{I}$ if $i\in I$ and
$e_{ii}u_{I}=0$ otherwise. Thus, the weight $w(u_{I})$ of $u_{I}$ is the
$n$-vector whose $i$-th coordinate is 1 if $i\in I$ and is 0 otherwise. The
vector $u_{I^{min}}$ with $I^{min}=\\{1<2<\dots<r\\}$ is a highest weight
vector.
### 2.3. Young diagrams
The set $\mathcal{I}_{r}$ is identified with the set of sequences of
nonnegative integers
$\displaystyle\\{0\leqslant\lambda_{1}\leqslant\dots\leqslant\lambda_{r}\leqslant
n-r\\}$
by the formula $\\{i_{1}<i_{2}<\dots<i_{r}\\}\,\mapsto\,\\{i_{1}-1\leqslant
i_{2}-2\leqslant\dots\leqslant i_{r}-r\\}$. The set of such sequences is
identified with the set of Young diagrams inscribed in the $(n-r)\times
r$-rectangle $R$. Thus the set $\mathcal{I}_{r}$ is identified with the set of
Young diagrams inscribed in the rectangle $R$.
For example, $I^{min}$ corresponds to the empty Young diagram $\emptyset$, and
the rectangle $R$ corresponds to the subset $\\{n-r+1<n-r+2<\dots<n\\}$.
We conclude that the basis $(u_{I})$ of $U_{r}$ is labeled by the Young
diagrams. A vector $u_{I}$ will also be denoted $u_{\lambda}$ if $I$
corresponds to a Young diagram $\lambda$. The weight $w(u_{I})$ of $u_{I}$
will also be denoted by $w(\lambda)$.
The set of Young diagrams is partially ordered with respect to inclusion of
the diagrams. We write $\mu\leqslant\lambda$ if the Young diagram $\lambda$
contains the Young diagram $\mu$.
### 2.4. Singular vectors
Let $M$ be the $\mathfrak{gl}_{n}$ Verma module with highest weight $t-\rho$
and highest weight vector $v$ , where $t=(t_{1},\dots,t_{n})$.
Fix a Young diagram $\lambda\in\mathcal{I}_{r}$. Consider the vector subspace
$\cap_{i=1}^{n-1}\operatorname{Ker}e_{i}$ of singular vectors in $M\otimes
U_{r}$ of weight $w(\lambda)+t-\rho$. For generic $t$, this space is one-
dimensional with a generator of the form
(2.3) $\displaystyle v(\lambda):=v\otimes
u_{\lambda}+\sum_{\mu<\lambda}v_{\mu}\otimes u_{\mu}\,$
for suitable vectors $v_{\mu}\in M$. Recall the linear function
$\psi:M\to{\mathbb{C}}$. Define the following scalar functions
$g_{\lambda/\mu}$ of $t$:
(2.4) $\displaystyle
g_{\lambda/\mu}=\psi(v_{\mu})\quad\operatorname{for}\quad\mu<\lambda\quad\operatorname{and}\quad
g_{\lambda/\lambda}=1.$
More precisely, each $g_{\lambda/\mu}$ is a function of
$z=(z_{1},\dots,z_{n-1})$ where $z_{i}=t_{i+1}-t_{i}$.
The main results of this paper are recurrence relations and a formula for the
functions $g_{\lambda/\mu}$.
### 2.5. Hooks
The $(n-r)\times r$-rectangle $R$ lies in the positive quadrant in
${\mathbb{R}}^{2}$ and consists of unit boxes $\square_{i,j}$,
$i=1,\dots,n-r$, $j=1,\dots,r$. The center of a box $\square_{i,j}$ has
coordinates $\big{(}i-\frac{1}{2},j-\frac{1}{2}\big{)}$. Every nonempty Young
diagram $\lambda\in\mathcal{I}_{r}$ contains the corner box $\square_{1,1}$.
To every box $\square_{i,j}$ we assign one of $z_{1},\dots,z_{n-1}$ by the
rule:
$\displaystyle z(\square_{i,j})\,:=\,z_{i-j+r}\,.$
For example, $z(\square_{1,r})=z_{1}$, $z(\square_{1,1})=z_{r}$,
$z(\square_{n-r,1})=z_{n-1}$, $z(\square_{n-r,r})=z_{n-r}$. We say that
$z_{i-j+r}$ is the $z$-label of a box $\square_{i,j}$.
Recall that a hook $H_{\lambda}(\square_{i,j})$ of a box $\square_{i,j}$ in a
Young diagram $\lambda$ is the set of all boxes $\square_{a,b}$ in $\lambda$
such that $a=i$, $b\geqslant j$, or $a\geqslant i$, $b=j$. We define the hook-
weight of a box $\square$ of $\lambda$ by the formula
(2.5) $\displaystyle h(\square)=1+\sum_{\square^{\prime}\in
H_{\lambda}(\square)}z(\square^{\prime})\,.$
###### Theorem 2.1.
We have
(2.6) $\displaystyle
g_{\lambda/\emptyset}=\frac{1}{\prod_{\square\in\lambda}h(\square)}\,.$
For example, let $(r,n)=(2,4)$ and $\lambda=(2,1)$. Then $\lambda$ consists of
the three boxes $\square_{1,1}$, $\square_{1,2}$, $\square_{2,1}$, and
$\displaystyle
g_{\lambda/\emptyset}=\frac{1}{(z_{1}+z_{2}+z_{3}+1)(z_{1}+1)(z_{3}+1)}\,.$
In Section 2.7 we give a formula for all coefficients $g_{\lambda/\mu}$.
Theorem 2.1 follows from Theorem 2.5 below.
### 2.6. Recurrence relations
Let $\lambda/\mu$ be a skew-diagram. Let $k_{i}$ be the number of boxes in
$\lambda/\mu$ with $z$-label $z_{i}$, where $i=1,\dots,n-1$. We put $k_{n}=0$.
Define the $z$-content of $\lambda/\mu$ by the formula
$\displaystyle s_{\lambda/\mu}=\sum_{i=1}^{n-1}k_{i}(k_{i}-k_{i+1}+z_{i})\,.$
For example, if $(r,n)=(2,4)$, $\lambda=(2,2)$, $\mu_{1}=\emptyset$,
$\mu_{2}=(1)$, $\mu_{3}=(1,1)$, $\mu_{4}=(2)$, $\mu_{5}=(2,1)$, then
$\displaystyle\phantom{aaa}s_{\lambda/\mu_{1}}=z_{1}+2z_{2}+z_{3}+2,\qquad
s_{\lambda/\mu_{2}}=z_{1}+z_{2}+z_{3}+1,$ $\displaystyle
s_{\lambda/\mu_{3}}=z_{1}+z_{2}+1,\qquad
s_{\lambda/\mu_{4}}=z_{2}+z_{3}+1,\qquad s_{\lambda/\mu_{5}}=z_{2}+1.$
###### Theorem 2.2.
The following recurrence relations hold:
(2.7) $\displaystyle
g_{\lambda/\mu}=\frac{1}{s_{\lambda/\mu}}\sum_{\mu^{\prime}}g_{\lambda/\mu^{\prime}}\,,$
where the sum is over all the Young diagrams $\mu^{\prime}$ such that
$\mu<\mu^{\prime}\leqslant\lambda$ and the skew-diagram $\mu^{\prime}/\mu$
consists of one box.
For example, if $(r,n)=(2,4)$, $\lambda=(2,2)$, and $\mu_{i}$ are as before,
then we have
$\displaystyle g_{\lambda/\emptyset}$ $\displaystyle=$
$\displaystyle\frac{1}{z_{1}+2z_{2}+z_{3}+2}\,g_{\lambda/\mu_{2}}=\frac{1}{(z_{1}+2z_{2}+z_{3}+2)(z_{1}+z_{2}+z_{3}+1)}\,[g_{\lambda/\mu_{3}}+g_{\lambda/\mu_{4}}]$
$\displaystyle=\frac{1}{(z_{1}+2z_{2}+z_{3}+2)(z_{1}+z_{2}+z_{3}+1)}\Big{(}\frac{1}{z_{1}+z_{2}+1}+\frac{1}{z_{2}+z_{3}+1}\Big{)}g_{\lambda/\mu_{5}}$
$\displaystyle=\frac{1}{(z_{1}+z_{2}+z_{3}+1)(z_{1}+z_{2}+1)(z_{2}+z_{3}+1)(z_{2}+1)}\,$
where in the last step we used $g_{\lambda/\lambda}=1$.
###### Proof.
Denote $\beta=t-\rho-\sum_{i=1}^{n-1}k_{i}\alpha_{i}$ where $k_{i}$ are some
nonnegative integers. Let $M[\beta]$ be the weight subspace of $M$ of weight
$\beta$.
###### Lemma 2.3.
For any $v^{\prime}\in M[\beta]$, we have
(2.9)
$\displaystyle\psi((e_{1}+\dots+e_{n-1})v^{\prime})\,=\,-\sum_{i=1}^{n-1}k_{i}(k_{i}-k_{i+1}+z_{i})\,\psi(v^{\prime}).$
###### Proof.
The proof is straightforward. It is enough to check formula (2.9) for
$v^{\prime}=f_{m_{1}}\dots f_{m_{k}}v$ where $1\leqslant m_{j}\leqslant n-1$
and for any $i$ the sequence $m_{1},\dots,m_{k}$ has exactly $k_{i}$ elements
equal to $i$.
For example, for $\beta=t-\rho-2\alpha_{1}-\alpha_{2}$ and
$v^{\prime}=f_{1}f_{1}f_{2}v$ we have
$\displaystyle\psi((e_{1}+e_{2}+e_{3})v^{\prime})$ $\displaystyle=$
$\displaystyle\psi(h_{1}f_{1}f_{2}v+f_{1}h_{1}f_{2}v+f_{1}f_{1}h_{2}v)$
$\displaystyle=$
$\displaystyle-\psi((z_{1}+2)f_{1}f_{2}v+z_{1}f_{1}f_{2}v+(z_{2}+1)f_{1}f_{1}v)=-(2z_{1}+z_{2}+3),$
while $\psi(v^{\prime})=1$. ∎
To prove the theorem notice that the vector $v(\lambda)$ is singular and hence
$\psi((e_{1}+\dots+e_{n-1})v(\lambda))=0$. By Lemma 2.3, we also have
(2.10)
$\displaystyle\psi((e_{1}+\dots+e_{n-1})v(\lambda))=\sum_{\mu<\lambda}\Big{(}-s_{\lambda/\mu}\,g_{\lambda/\mu}+\sum_{\mu^{\prime}}g_{\lambda/\mu^{\prime}}\Big{)}u_{\mu}\,,$
where the second sum is over all the Young diagrams $\mu^{\prime}$ such that
$\mu<\mu^{\prime}\leqslant\lambda$ and the skew-diagram $\mu^{\prime}/\mu$
consists of one box. Since $(u_{\mu})_{\mu\in\mathcal{I}_{r}}$ is a basis of
$U_{r}$, the coefficient of each $u_{\mu}$ in (2.10) must be equal to zero.
This proves the theorem. ∎
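The computation of Lemma 2.3 can be mechanized by expanding vectors of $M$ in PBW words $f_{j_{1}}\dots f_{j_{k}}v$. The Python sketch below is illustrative only: the values $z_{i}$ are arbitrary rationals, and the normalization $(t-\rho)(h_{i})=-(z_{i}+1)$ is an assumption read off from the example in the proof of Lemma 2.3. It reproduces $\psi((e_{1}+e_{2}+e_{3})f_{1}f_{1}f_{2}v)=-(2z_{1}+z_{2}+3)$.

```python
from fractions import Fraction

def cartan(i, j):
    # alpha_j(h_i) for the gl_n simple roots
    return 2 if i == j else (-1 if abs(i - j) == 1 else 0)

def h_scalar(i, tail, z):
    # h_i acts on f_tail . v by the scalar (t-rho)(h_i) - sum_j alpha_{tail_j}(h_i);
    # assumed normalization: (t-rho)(h_i) = -(z_i + 1), as in the Lemma 2.3 example
    return -(z[i] + 1) - sum(cartan(j, i) for j in tail)

def e_action(i, vec, z):
    # e_i applied to a linear combination of PBW words f_{j1}...f_{jk} v,
    # using [e_i, f_j] = delta_{ij} h_i and e_i v = 0
    out = {}
    for word, c in vec.items():
        for p, j in enumerate(word):
            if j == i:
                rest = word[:p] + word[p + 1:]
                out[rest] = out.get(rest, 0) + c * h_scalar(i, word[p + 1:], z)
    return out

def psi(vec):
    # psi = sum of the coefficients in the PBW expansion, eq. (2.1)
    return sum(vec.values())

z = {1: Fraction(2), 2: Fraction(3), 3: Fraction(5)}   # arbitrary test values
vec = {(1, 1, 2): Fraction(1)}                         # v' = f1 f1 f2 v
total = sum(psi(e_action(i, vec, z)) for i in (1, 2, 3))
assert total == -(2 * z[1] + z[2] + 3)                 # as in the example
```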
###### Corollary 2.4.
Let $d$ be the number of boxes in $\lambda/\mu$. Then
(2.11) $\displaystyle
g_{\lambda/\mu}=\sum_{\mu=\mu_{1}<\mu_{2}<\dots<\mu_{d}<\lambda}\frac{1}{\prod_{i=1}^{d}s_{\lambda/\mu_{i}}}\,.$
See the example in Section 2.6.
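Theorem 2.1 and the recurrence of Theorem 2.2 admit a direct numerical check. In the Python sketch below (illustrative only) a diagram is encoded by its weakly decreasing row lengths, so that $\square_{i,j}\in\lambda$ iff $j\leqslant\lambda_{i}$, matching the coordinates used in the examples; the $z_{i}$ are arbitrary rational test values.

```python
from fractions import Fraction

def boxes(lam):
    # lam: weakly decreasing row lengths; box (i, j) lies in lam iff j <= lam[i-1]
    return [(i, j) for i, row in enumerate(lam, 1) for j in range(1, row + 1)]

def hook_weight(lam, box, z, r):
    # h(box)(z) = 1 + sum of z-labels z_{a-b+r} over the hook of `box`, eq. (2.5)
    i, j = box
    return 1 + sum(z[a - b + r] for (a, b) in boxes(lam)
                   if (a == i and b >= j) or (a >= i and b == j))

def s_content(lam, mu, z, r, n):
    # z-content s_{lam/mu} of the skew diagram (Section 2.6), with k_n = 0
    skew = set(boxes(lam)) - set(boxes(mu))
    k = {i: 0 for i in range(1, n + 1)}
    for (a, b) in skew:
        k[a - b + r] += 1
    return sum(k[i] * (k[i] - k[i + 1] + z[i]) for i in range(1, n))

def g(lam, mu, z, r, n):
    # g_{lam/mu} computed from the recurrence of Theorem 2.2
    if mu == lam:
        return Fraction(1)
    total = Fraction(0)
    for i in range(len(mu)):
        nxt = list(mu)
        nxt[i] += 1                       # add one box to row i, if allowed
        if nxt[i] <= lam[i] and (i == 0 or nxt[i] <= nxt[i - 1]):
            total += g(lam, tuple(nxt), z, r, n)
    return total / s_content(lam, mu, z, r, n)

r, n = 2, 4
z = {1: Fraction(2), 2: Fraction(3), 3: Fraction(5)}   # arbitrary test values
for lam in [(2, 1), (2, 2)]:
    lhs = g(lam, (0,) * len(lam), z, r, n)
    rhs = Fraction(1)
    for box in boxes(lam):
        rhs /= hook_weight(lam, box, z, r)
    assert lhs == rhs                     # Theorem 2.1, checked numerically
```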
### 2.7. Excited diagrams
Let $\lambda/\mu$ be a skew-diagram and $D$ a subset of the Young diagram
$\lambda$. A box $\square_{i,j}$ of $D$ is called active if the boxes
$\square_{i+1,j},\,\square_{i+1,j+1},\,\square_{i,j+1}$ are all in
$\lambda-D$. Let $b=\square_{i,j}$ be an active box of $D$, define $D_{b}$ to
be the set obtained by replacing $\square_{i,j}$ in $D$ by
$\square_{i+1,j+1}$. We call this replacement an elementary excitation. An
excited diagram of $\lambda/\mu$ is a subset of boxes of $\lambda$ obtained
from the Young diagram $\mu$ after a sequence of elementary excitations on
active boxes. Let $E(\lambda/\mu)$ be the set of excited diagrams of
$\lambda/\mu$, see this definition in [IN, Na, MPP].
###### Theorem 2.5.
We have
(2.12) $\displaystyle
g_{\lambda/\mu}=\frac{1}{\prod_{\square\in\lambda}h(\square)}\,\sum_{\nu\in
E(\lambda/\mu)}\prod_{\square\in\nu}h(\square)\,.$
For example, in the notation of the example in Section 2.6, the set
$E(\lambda/\mu_{2})$ consists of two elements $\\{\square_{1,1}\\}$ and
$\\{\square_{2,2}\\}$. Then
$\displaystyle
g_{\lambda/\mu_{2}}=\frac{z_{1}+2z_{2}+z_{3}+2}{(z_{1}+z_{2}+z_{3}+1)(z_{1}+z_{2}+1)(z_{2}+z_{3}+1)(z_{2}+1)}\,,$
where
$\displaystyle h(\square_{1,1})+h(\square_{2,2})=z_{1}+2z_{2}+z_{3}+2.$
###### Remark.
The equality between (2.11) and (2.12) in the case $\mu=\emptyset$ is a
generalization of the classical hook-length formula relating the number of
standard Young tableaux of shape $\lambda$ to the inverse product of hook-
lengths. It converges to it in the limit where all $z_{i}$ are equal and tend
to infinity. In the same limit for general $\mu\subset\lambda$ we obtain
Naruse’s generalization for skew diagrams [Na].
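The elementary excitations are easy to enumerate by a graph search. The Python sketch below is illustrative only: diagrams are encoded by weakly decreasing row lengths, with $\square_{i,j}\in\lambda$ iff $j\leqslant\lambda_{i}$ as in the examples, and the $z_{i}$ are arbitrary rational test values. It reproduces $E(\lambda/\mu_{2})$ for the example above and checks formula (2.12) against the closed expression for $g_{\lambda/\mu_{2}}$.

```python
from fractions import Fraction
from math import prod

def boxes(lam):
    # lam: weakly decreasing row lengths; box (i, j) lies in lam iff j <= lam[i-1]
    return {(i, j) for i, row in enumerate(lam, 1) for j in range(1, row + 1)}

def hook_weight(lam_boxes, box, z, r):
    # h(box)(z) = 1 + sum of z-labels z_{a-b+r} over the hook of `box`, eq. (2.5)
    i, j = box
    return 1 + sum(z[a - b + r] for (a, b) in lam_boxes
                   if (a == i and b >= j) or (a >= i and b == j))

def excited_diagrams(lam, mu):
    # E(lam/mu): all subsets of lam reachable from mu by elementary excitations
    lam_b = boxes(lam)
    seen, queue = set(), [frozenset(boxes(mu))]
    while queue:
        D = queue.pop()
        if D in seen:
            continue
        seen.add(D)
        for (i, j) in D:
            # (i, j) is active if these three boxes all lie in lam \ D
            if {(i + 1, j), (i + 1, j + 1), (i, j + 1)} <= lam_b - D:
                queue.append(D - {(i, j)} | {(i + 1, j + 1)})
    return seen

r, n = 2, 4
lam, mu = (2, 2), (1,)
z = {1: Fraction(2), 2: Fraction(3), 3: Fraction(5)}   # arbitrary test values
lam_b = boxes(lam)
E = excited_diagrams(lam, mu)
assert E == {frozenset({(1, 1)}), frozenset({(2, 2)})}  # as in the example

h = {b: hook_weight(lam_b, b, z, r) for b in lam_b}
g_excited = sum(prod(h[b] for b in nu) for nu in E) / prod(h.values())
g_closed = (z[1] + 2 * z[2] + z[3] + 2) / (
    (z[1] + z[2] + z[3] + 1) * (z[1] + z[2] + 1) * (z[2] + z[3] + 1) * (z[2] + 1))
assert g_excited == g_closed               # Theorem 2.5 for this skew diagram
```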
### 2.8. Proof of Theorem 2.5
Theorem 2.5 follows from Corollary 2.4 and Naruse’s formula in [Na] by a
change of parameters $z_{1},\dots,z_{n-1}$.
More precisely, let $\lambda\in\mathcal{I}_{r}$ be a nonempty Young diagram.
We say that a box $\square_{i,j}\in\lambda$ is a boundary box if
$\square_{i+1,j+1}\notin\lambda$. Let $\square_{i,j}\in\lambda$ be a boundary
box. If $\square_{i,j+1}\notin\lambda$ and $\square_{i+1,j}\notin\lambda$,
then $\square_{i,j}$ is an active boundary box according to the definition in
Section 2.7 (with $D=\lambda$). If $\square_{i,j+1}\in\lambda$ and
$\square_{i+1,j}\in\lambda$, then we say that $\square_{i,j}$ is a corner
boundary box. If $\square_{i,j+1}\in\lambda$ and
$\square_{i+1,j}\notin\lambda$, or if $\square_{i,j+1}\notin\lambda$ and
$\square_{i+1,j}\in\lambda$, then we say that $\square_{i,j}$ is a flat
boundary box.
If $\lambda$ has a box with $z$-label $z_{i}$, then $\lambda$ has exactly one
boundary box with $z$-label $z_{i}$. We define new parameters $y_{i}$ by the
following formulas:
$\displaystyle z_{i}$ $\displaystyle=$ $\displaystyle
y_{i}-1,\quad\operatorname{if\,the\,boundary\,box\,with\,label}\,z_{i}\,\operatorname{is\,an\,active\,boundary\,box}\,,$
$\displaystyle z_{i}$ $\displaystyle=$ $\displaystyle
y_{i}+1,\quad\operatorname{if\,the\,boundary\,box\,with\,label}\,z_{i}\,\operatorname{is\,a\,corner\,boundary\,box}\,,$
$\displaystyle z_{i}$ $\displaystyle=$ $\displaystyle y_{i}\,,\quad\quad\
\,\operatorname{if\,the\,boundary\,box\,with\,label}\,z_{i}\,\operatorname{is\,a\,flat\,boundary\,box}\,.$
To every box $\square_{i,j}\in\lambda$ we assign one of $y_{1},\dots,y_{n-1}$
by the rule:
$\displaystyle y(\square_{i,j})\,:=\,y_{i-j+r}\,.$
We say that $y_{i-j+r}$ is the $y$-label of a box $\square_{i,j}$.
###### Lemma 2.6.
* (i)
Let $\square$ be a box in $\lambda$ with hook-weight
$h(\square)(z)=1+z_{a}+z_{a+1}+\dots+z_{b}$ for some $a,b$. Then
(2.13) $\displaystyle h(\square)(z(y))=y_{a}+y_{a+1}+\dots+y_{b}\,.$
* (ii)
Let $\mu<\lambda$ and let
$\displaystyle s_{\lambda/\mu}(z)=\sum_{i=1}^{n-1}k_{i}(k_{i}-k_{i+1}+z_{i})$
be the $z$-content of the skew-diagram $\lambda/\mu$. Then
(2.14) $\displaystyle s_{\lambda/\mu}(z(y))=\sum_{i=1}^{n-1}k_{i}y_{i}\,.$
For example, in the notation of the example in Section 2.6, the change of
variables for $\lambda$ is
$\displaystyle z_{1}=y_{1},\qquad z_{2}=y_{2}-1,\qquad z_{3}=y_{3}\,.$
Then $s_{\lambda/\emptyset}(z)=z_{1}+2z_{2}+z_{3}+2$ and
$s_{\lambda/\emptyset}(z(y))=y_{1}+2y_{2}+y_{3}$. Similarly,
$s_{\lambda/\mu_{2}}(z)=z_{1}+z_{2}+z_{3}+1$ and
$s_{\lambda/\mu_{2}}(z(y))=y_{1}+y_{2}+y_{3}$.
###### Proof.
The proof of the lemma is straightforward. For example, we prove part (i). Let
$\square_{i,j}$ be a box in $\lambda$ with hook-weight
$h(\square_{i,j})(z)=1+z_{a}+z_{a+1}+\dots+z_{b}$ for some $a,b$. The boxes
$\square_{i,a}$ and $\square_{b,j}$ are boundary boxes of $\lambda$. Let us
walk from the box $\square_{i,a}$ to the box $\square_{b,j}$ through the
boundary boxes of $\lambda$. This walk consists of $b-a+1$ boundary boxes with
$z$-labels $z_{a},z_{a+1},\dots,z_{b}$. Let $\ell$ be the number of active
boundary boxes in this walk. Then the walk has exactly $\ell-1$ corner
boundary boxes. Hence our change of variables transforms
$h(\square)(z)=1+z_{a}+z_{a+1}+\dots+z_{b}$ to
$1+y_{a}+y_{a+1}+\dots+y_{b}-\ell+(\ell-1)=y_{a}+y_{a+1}+\dots+y_{b}$. Part
(i) is proved. ∎
Having Lemma 2.6 we rewrite Corollary 2.4 in terms of the variables $y_{i}$.
Namely, define the $y$-hook-weight of a box $\square\in\lambda$ by the formula
$\displaystyle\tilde{h}(\square)=\sum_{\square^{\prime}\in
H_{\lambda}(\square)}y(\square^{\prime})\,,$
and the $y$-content of a skew-diagram $\lambda/\mu$ by the formula
$\displaystyle\tilde{s}_{\lambda/\mu}=\sum_{i=1}^{n-1}k_{i}y_{i}\,,$
if $k_{i}$ is the number of boxes in $\lambda/\mu$ with $y$-label $y_{i}$.
Then
$\displaystyle\tilde{h}(\square)(y)=h(\square)(z(y)),\qquad\tilde{s}_{\lambda/\mu}(y)=s_{\lambda/\mu}(z(y))$
by Lemma 2.6. Formula (2.11) takes the form:
(2.15) $\displaystyle g_{\lambda/\mu}(z(y))$ $\displaystyle=$
$\displaystyle\sum_{\mu=\mu_{1}<\mu_{2}<\dots<\mu_{d}<\lambda}\frac{1}{\prod_{i=1}^{d}\tilde{s}_{\lambda/\mu_{i}}(y)}\,.$
On the other hand, H. Naruse’s formula [Na, page 13] states that
(2.16)
$\displaystyle\sum_{\mu=\mu_{1}<\mu_{2}<\dots<\mu_{d}<\lambda}\frac{1}{\prod_{i=1}^{d}\tilde{s}_{\lambda/\mu_{i}}(y)}$
$\displaystyle=$
$\displaystyle\frac{1}{\prod_{\square\in\lambda}\tilde{h}(\square)(y)}\,\sum_{\nu\in
E(\lambda/\mu)}\prod_{\square\in\nu}\tilde{h}(\square)(y)\,,$
see also [IN, MPP]. Hence,
(2.17) $\displaystyle g_{\lambda/\mu}(z(y))$ $\displaystyle=$
$\displaystyle\frac{1}{\prod_{\square\in\lambda}h(\square)(z(y))}\,\sum_{\nu\in
E(\lambda/\mu)}\prod_{\square\in\nu}h(\square)(z(y))\,,$
and Theorem 2.5 is proved.
### 2.9. Change of variable and weight shift
The change of variables $y\mapsto z=z(y)$ can be understood in terms of
weights as follows:
###### Lemma 2.7.
Let $\zeta\colon\mathbb{C}^{n}\to\mathbb{C}^{n-1}$ be the linear map
$t\mapsto(t_{2}-t_{1},\dots,t_{n}-t_{n-1})$. If $z=\zeta(t)$ then
$y=\zeta(t-w(\lambda))$.
###### Proof.
The weight corresponding to the Young diagram $\lambda$ is
$w(\lambda)=(\epsilon_{1},\dots,\epsilon_{n})$ where
$\epsilon_{i}\in\\{0,1\\}$ with $\epsilon_{i}=1$ iff
$i\in\\{i_{1}<\dots<i_{r}\\}$ where $i_{k}=\lambda_{k}+k$ ($k=1,\dots,r$), see
Section 2.3. Then for each $i=1,\dots,n-1$, we have
* •
$\epsilon_{i+1}-\epsilon_{i}=-1$ if $i=i_{k}$ for some $k\in\\{1,\dots,r\\}$
and $i_{k}<i_{k+1}-1$, where we set $i_{r+1}=n+1$,
* •
$\epsilon_{i+1}-\epsilon_{i}=1$ if $i+1=i_{k}$ for some $k\in\\{1,\dots,r\\}$
and $i_{k-1}<i_{k}-1$, where we set $i_{0}=0$,
* •
$\epsilon_{i+1}-\epsilon_{i}=0$, otherwise.
The first alternative occurs iff $i=i_{k}$ and $\lambda_{k}<\lambda_{k+1}$
where we set $\lambda_{r+1}=n-r$. This is exactly the condition for the box
with coordinates $(\lambda_{k},r-k)$, which has $z$-label $z_{i}$, to be an
active boundary box.
The second alternative occurs iff $i+1=i_{k}$ and $\lambda_{k-1}<\lambda_{k}$
where we set $\lambda_{0}=0$. This is exactly the condition for the boundary
box with coordinates $(\lambda_{k},r-k+1)$, which has $z$-label $z_{i}$, to be
a corner boundary box. ∎
## 3\. Applications
### 3.1. Master function
Let $\lambda$ be a Young diagram inscribed in the $(n-r)\times r$ rectangle
$R$. Let $\lambda$ have $k_{i}$ boxes with $z$-label $z_{i}$ for
$i=1,\dots,n-1$. Denote $k=k_{1}+\dots+k_{n-1}$.
Consider ${\mathbb{C}}^{k}$ with coordinates $x=(x_{i,j})$, $i=1,\dots,n-1$,
$j=1,\dots,k_{i}$. Define the master function (called the superpotential in
the terminology of enumerative geometry)
(3.1)
$\displaystyle\Phi_{\lambda}(x,z)=\prod_{i=1}^{n-1}\prod_{j=1}^{k_{i}}x_{i,j}^{z_{i}+1}\prod_{j=1}^{k_{r}}(x_{r,j}-1)^{-1}\prod_{i=1}^{n-1}\prod_{j<j^{\prime}}(x_{i,j}-x_{i,j^{\prime}})^{2}\prod_{i=1}^{n-2}\prod_{j=1}^{k_{i}}\prod_{j^{\prime}=1}^{k_{i+1}}(x_{i,j}-x_{i+1,j^{\prime}})^{-1}\,.$
The linear functions $x_{i,j}$, $x_{r,j}-1$, $x_{i,j}-x_{i,j^{\prime}}$,
$x_{i,j}-x_{i+1,j^{\prime}}$ appearing in the master function define an
arrangement $\mathcal{C}$ of hyperplanes in ${\mathbb{C}}^{k}$.
The group $G=S_{k_{1}}\times\dots\times S_{k_{n-1}}$ acts on
${\mathbb{C}}^{k}$ by permuting the coordinates $(x_{i,j})$ with the same
first index $i$. The arrangement $\mathcal{C}$ and master function
$\Phi_{\lambda}(x,z)$ are $G$-invariant.
For $\kappa\in{\mathbb{C}}^{\times}$, the multivalued function
$\Phi_{\lambda}(x,z)^{1/\kappa}$ defines a rank one local system
$\mathcal{L}_{\kappa}$ on the complement
$X={\mathbb{C}}^{k}\setminus\mathcal{C}$ to the arrangement. The group $G$
acts on the homology $H_{*}(X;\mathcal{L}_{\kappa})$ and cohomology
$H^{*}(X;\mathcal{L}_{\kappa})$. Let $H_{k}(X;\mathcal{L}_{\kappa})^{-}\subset
H_{k}(X;\mathcal{L}_{\kappa})$ and $H^{k}(X;\mathcal{L}_{\kappa})^{-}\subset
H^{k}(X;\mathcal{L}_{\kappa})$ be the isotypical components corresponding to
the sign representation. It is known that for generic $\kappa$, we have $\dim
H^{k}(X;\mathcal{L}_{\kappa})^{-}=\dim H_{k}(X;\mathcal{L}_{\kappa})^{-}=1$
since the space $H^{k}(X;\mathcal{L}_{\kappa})^{-}$ can be identified with the
space of singular vectors in $M\otimes U_{r}$ of weight $w(\lambda)+t-\rho$,
which is of dimension 1, see [SV].
### 3.2. Weight function of $u_{\lambda}$
Let $u_{\lambda}$ be the basis vector of $U_{r}$ corresponding to the diagram
$\lambda$. The vector $u_{\lambda}$ is related to the highest weight vector
$u_{\emptyset}$ by the formula
(3.2) $\displaystyle u_{\lambda}=f_{\ell_{k}}\dots
f_{\ell_{2}}f_{\ell_{1}}u_{\emptyset}\,,$
where $f_{\ell_{k}},\dots,f_{\ell_{2}},f_{\ell_{1}}$ is a certain (admissible)
sequence of the Chevalley generators $f_{1},\dots,f_{n-1}$ in which there are
exactly $k_{i}$ elements $f_{i}$ for every $i=1,\dots,n-1$. Let
$\mathcal{F}_{\lambda}$ be the set of all such admissible sequences.
Let $f=\\{f_{\ell_{k}},\dots,f_{\ell_{1}}\\}$ be an admissible sequence.
Define the function $W^{\circ}_{f}(x)$ by the formula
(3.3) $\displaystyle
W^{\circ}_{f}(x)=\frac{1}{(x_{a_{k},b_{k}}-x_{a_{k-1},b_{k-1}})\dots(x_{a_{3},b_{3}}-x_{a_{2},b_{2}})(x_{a_{2},b_{2}}-x_{a_{1},b_{1}})(x_{a_{1},b_{1}}-1)}$
such that
1. (i)
each variable $x_{i,j}$ is present in (3.3),
2. (ii)
if $(x_{a_{c},b_{c}}-x_{a_{c-1},b_{c-1}})$ is any of the factors, then
$a_{c}=\ell_{c}$,
3. (iii)
for any $i$ and $1\leqslant j<j^{\prime}\leqslant k_{i}$, the variable
$x_{i,j}$ appears in (3.3) to the right of the variable $x_{i,j^{\prime}}$.
These properties determine the function $W^{\circ}_{f}(x)$ uniquely.
Define
$\displaystyle
W_{\lambda}(x)=\operatorname{Sym}_{x_{1,1},\dots,x_{1,k_{1}}}\dots\operatorname{Sym}_{x_{n-1,1},\dots,x_{n-1,k_{n-1}}}\Big{[}\sum_{f\in\mathcal{F}_{\lambda}}W^{\circ}_{f}(x)\Big{]}\,,$
where we use the notation
$\operatorname{Sym}_{t_{1},\dots,t_{j}}P({t_{1},\dots,t_{j}}):=\sum_{\sigma\in
S_{j}}P(t_{\sigma(1)},\dots,t_{\sigma(j)})$. The function $W_{\lambda}(x)$ is
called the weight function of the vector $v\otimes u_{\lambda}$ in $M\otimes
U_{r}$.
For example, if $(r,n)=(2,4)$, $\lambda=(2,2)$, then
$x=(x_{1,1},x_{2,1},x_{2,2},x_{3,1})$. There are two admissible sequences
$\displaystyle
u_{\lambda}=f_{2}f_{3}f_{1}f_{2}u_{\emptyset}=f_{2}f_{1}f_{3}f_{2}u_{\emptyset}\,,$
and
$\displaystyle W_{\lambda}(x)$ $\displaystyle=$
$\displaystyle\frac{1}{(x_{2,2}-x_{3,1})(x_{3,1}-x_{1,1})(x_{1,1}-x_{2,1})(x_{2,1}-1)}$
$\displaystyle+$
$\displaystyle\frac{1}{(x_{2,1}-x_{3,1})(x_{3,1}-x_{1,1})(x_{1,1}-x_{2,2})(x_{2,2}-1)}$
$\displaystyle+$
$\displaystyle\frac{1}{(x_{2,2}-x_{1,1})(x_{1,1}-x_{3,1})(x_{3,1}-x_{2,1})(x_{2,1}-1)}$
$\displaystyle+$
$\displaystyle\frac{1}{(x_{2,1}-x_{1,1})(x_{1,1}-x_{3,1})(x_{3,1}-x_{2,2})(x_{2,2}-1)}\,.$
### 3.3. Two integrals
Let $\gamma\in H_{k}(X;\mathcal{L}_{\kappa})^{-}$ be a generator. Let
$\displaystyle\wedge_{i,j}dx_{i,j}\,$
denote the wedge product in the lexicographic order of all the differentials
$dx_{i,j}$ . Define two functions
$\displaystyle
I_{\lambda}(z,\kappa)=\int_{\gamma}\Phi_{\lambda}(x,z)^{1/\kappa}W_{\lambda}(x)\big{(}\wedge_{i,j}dx_{i,j}\big{)}\,,\qquad
V_{\lambda}(z,\kappa)=\int_{\gamma}\Phi_{\lambda}(x,z)^{1/\kappa}\frac{1}{\prod_{i,j}x_{i,j}}\big{(}\wedge_{i,j}dx_{i,j}\big{)}\,.$
Both functions are multiplied by the same nonzero constant if we choose a
different generator.
As shown in [MV], the first function is a hypergeometric solution of the
dynamical difference equations associated with the weight subspace
$U_{r}[w(\lambda)]$ of the $\mathfrak{gl}_{n}$-module $U_{r}$. The dynamical
equations were introduced in [TV]. The (hypergeometric) solutions of the
dynamical equations were constructed in [MV]. The dynamical equations is a
system of difference equations of the form
$\displaystyle
I(z_{1},\dots,z_{i}+\kappa,\dots,z_{n-1},\kappa)=a_{i}(z_{1},\dots,z_{n-1},\kappa)I(z_{1},\dots,z_{n-1},\kappa),\qquad
i=1,\dots,n-1,$
for suitable coefficients $a_{i}$ defined in terms of the
$\mathfrak{gl}_{n}$-action on $U_{r}$.
We call the second function $V_{\lambda}(z,\kappa)$ the vertex integral
associated with the weight subspace $U_{r}[w(\lambda)]$ of the
$\mathfrak{gl}_{n}$-module $U_{r}$.
###### Theorem 3.1.
We have
(3.4) $\displaystyle
V_{\lambda}(z,\kappa)\,=\,\frac{1}{\prod_{\square\in\lambda}h(\square)(z)}\,I_{\lambda}(z,\kappa)\,.$
The original goal of this project was to find the coefficient of
proportionality between the vertex integral $V_{\lambda}(z,\kappa)$ and the
hypergeometric solution $I_{\lambda}(z,\kappa)$; it turned out to be the
inverse of the product of the hook-weights of the boxes of the Young diagram
$\lambda$.
###### Proof.
In [SV], given $M\otimes U_{r}$ and $\lambda\in\mathcal{I}_{r}$, a vector
$\bar{v}(\lambda)$ is constructed,
$\displaystyle\bar{v}(\lambda):=\bar{v}_{\lambda}\otimes
u_{\lambda}+\sum_{\mu<\lambda}\bar{v}_{\mu}\otimes
u_{\mu}\,,\qquad\bar{v}_{\lambda},\bar{v}_{\mu}\in
H^{k}(X;\mathcal{L}_{\kappa})^{-}\otimes M.$
Thus $\bar{v}(\lambda)\in H^{k}(X;\mathcal{L}_{\kappa})^{-}\otimes M\otimes
U_{r}$. The vector $\bar{v}(\lambda)$ has $\mathfrak{gl}_{n}$-weight
$w(\lambda)+t-\rho$ and is singular with respect to the factors $M\otimes
U_{r}$. The vector $\bar{v}(\lambda)$ is a cohomological version of the vector
$v(\lambda)$ defined in (2.3) and studied in the previous sections.
The vector $\bar{v}_{\lambda}$ is represented by the differential form
$\displaystyle\big{(}\Phi_{\lambda}(x,z)^{1/\kappa}W_{\lambda}(x)\big{(}\wedge_{i,j}dx_{i,j}\big{)}\big{)}\otimes
v,$
see [SV].
The vector $\bar{v}_{\emptyset}$ is represented by a differential form
constructed as follows. A sequence
$f_{\ell_{k}},\dots,f_{\ell_{2}},f_{\ell_{1}}$ is called weakly admissible if
for $i=1,\dots,n-1$, the sequence contains exactly $k_{i}$ elements $f_{i}$.
Let $\mathcal{F}^{\star}_{\lambda}$ be the set of all weakly admissible
sequences.
For example, if $(r,n)=(2,4)$ and $\lambda=(2,2)$, then
$\mathcal{F}^{\star}_{\lambda}$ consists of 12 sequences:
$\\{f_{2},f_{2},f_{1},f_{3}\\}$, …, $\\{f_{3},f_{1},f_{2},f_{2}\\}$.
Let $f=\\{f_{\ell_{k}},\dots,f_{\ell_{1}}\\}$ be a weakly admissible sequence.
Define the function $W^{\star}_{f}(x)$ by the formula
(3.5) $\displaystyle
W^{\star}_{f}(x)=\frac{1}{(x_{a_{k},b_{k}}-x_{a_{k-1},b_{k-1}})\dots(x_{a_{3},b_{3}}-x_{a_{2},b_{2}})(x_{a_{2},b_{2}}-x_{a_{1},b_{1}})x_{a_{1},b_{1}}}$
such that
1. (i)
each variable $x_{i,j}$ is present in (3.5),
2. (ii)
if $(x_{a_{c},b_{c}}-x_{a_{c-1},b_{c-1}})$ is any of the factors, then
$a_{c}=\ell_{c}$,
3. (ii’)
$(a_{1},b_{1})=(\ell_{1},1)$,
4. (iii)
for any $i$ and $1\leqslant j<j^{\prime}\leqslant k_{i}$, the variable
$x_{i,j}$ appears in (3.5) to the right of the variable $x_{i,j^{\prime}}$.
These properties determine the function $W^{\star}_{f}(x)$ uniquely.
Notice that the last factors in (3.3) and (3.5) are different.
Define the function $W_{f}(x)$ by the formula
(3.6) $\displaystyle
W_{f}(x)=\operatorname{Sym}_{x_{1,1},\dots,x_{1,k_{1}}}\dots\operatorname{Sym}_{x_{n-1,1},\dots,x_{n-1,k_{n-1}}}\big{[}W^{\star}_{f}(x)\big{]}\,.$
Then the vector $\bar{v}_{\emptyset}$ is represented by the differential form
$\displaystyle\sum_{f=\\{f_{\ell_{k}},\dots,f_{\ell_{1}}\\}\in\mathcal{F}^{\star}_{\lambda}}\big{(}\Phi(x,z)^{1/\kappa}W_{f}(x)\big{(}\wedge_{i,j}dx_{i,j}\big{)}\big{)}\otimes
f_{\ell_{k}}\dots f_{\ell_{1}}v,$
see [SV].
Let $\gamma\in H_{k}(X;\mathcal{L}_{\kappa})^{-}$ be a generator. The integral
of $\bar{v}(\lambda)$ over $\gamma$ is a scalar multiple of the vector
$v(\lambda)$,
$\displaystyle\int_{\gamma}\bar{v}(\lambda)=c(z,\kappa)\,v(\lambda).$
We apply the linear function $\psi:M\to{\mathbb{C}}$ to both sides of this
equation and equate the coefficients of $u_{\lambda}$ and $u_{\emptyset}$.
Then
$\displaystyle c(z,\kappa)$ $\displaystyle=$
$\displaystyle\int_{\gamma}\Phi(x,z)^{1/\kappa}W_{\lambda}(x)\big{(}\wedge_{i,j}dx_{i,j}\big{)},$
$\displaystyle c(z,\kappa)\,g_{\lambda/\emptyset}(z)$ $\displaystyle=$
$\displaystyle\int_{\gamma}\Phi(x,z)^{1/\kappa}\Big{(}\sum_{f\in\mathcal{F}^{\star}_{\lambda}}W_{f}(x)\Big{)}\big{(}\wedge_{i,j}dx_{i,j}\big{)}\,.$
Using the formula
$\displaystyle\sum_{\sigma\in
S_{k}}\frac{1}{(s_{\sigma(k)}-s_{\sigma(k-1)})(s_{\sigma(k-1)}-s_{\sigma(k-2)})\dots(s_{\sigma(2)}-s_{\sigma(1)})s_{\sigma(1)}}=\frac{1}{\prod_{j=1}^{k}s_{j}}
and the definition of $W_{f}(x)$ we conclude that
$\displaystyle\sum_{f\in\mathcal{F}^{\star}_{\lambda}}W_{f}(x)=\frac{1}{\prod_{i,j}x_{i,j}}\,.$
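The symmetrization identity above can be verified directly for small $k$; the following Python sketch (ours, for illustration only) checks it with exact rational arithmetic:

```python
from fractions import Fraction
from itertools import permutations
from math import prod

def telescoping_sum(s):
    """Left-hand side of the identity: sum over permutations sigma of
    1 / [(s_{sigma(k)} - s_{sigma(k-1)}) ... (s_{sigma(2)} - s_{sigma(1)}) s_{sigma(1)}]."""
    total = Fraction(0)
    for sigma in permutations(range(len(s))):
        term = Fraction(1, s[sigma[0]])          # the s_{sigma(1)} factor
        for j in range(1, len(s)):
            term /= s[sigma[j]] - s[sigma[j - 1]]
        total += term
    return total

s = (3, 7, 11, 20)
assert telescoping_sum(s) == Fraction(1, prod(s))  # equals 1/(s_1 ... s_k)
```

Because the check uses exact `Fraction` arithmetic, the equality is tested exactly rather than to floating-point tolerance.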
Hence
$\displaystyle c(z,\kappa)=I_{\lambda}(z,\kappa),\qquad
c(z,\kappa)\,g_{\lambda/\emptyset}(z)=V_{\lambda}(z,\kappa).$
Now formula (2.6) implies Theorem 3.1. ∎
### 3.4. Whittaker vectors
Let $\mathfrak{n}^{-}$ be the maximal nilpotent subalgebra of
$\mathfrak{gl}_{n}$ of lower triangular matrices. It is generated by
$f_{1},\dots,f_{n-1}$. Let $\eta\colon\mathfrak{n}^{-}\to\mathbb{C}$ be the
character of the Lie algebra $\mathfrak{n}^{-}$ such that $\eta(f_{i})=-1$ for
all $i$.
A Whittaker vector in a $\mathfrak{gl}_{n}$-module $V$ is a vector $u\in V$ so
that $xu=\eta(x)u$ for all $x\in\mathfrak{n}^{-}$. This notion was introduced
and studied by B. Kostant, [Ko]. The space of Whittaker vectors in $V$ is
denoted $\operatorname{Wh}(V)$. It is a module over the center $Z$ of the
universal enveloping algebra of $\mathfrak{gl}_{n}$. A Whittaker vector $u\neq
0$ such that $zu=\chi(z)u$ for all $z\in Z$ and some character $\chi\colon
Z\to\mathbb{C}$ is said to have infinitesimal character $\chi$.
For example let $M^{\prime}=\mathrm{Hom}_{\mathbb{C}}(M,\mathbb{C})$ be the
dual of a Verma module $M$. It is a $\mathfrak{gl}_{n}$-module for the action
$(x\alpha)(m)=-\alpha(xm)$, $\alpha\in M^{\prime}$, $m\in M$, $x\in\mathfrak{gl}_{n}$.
Central elements $z\in Z$ act on $M^{\prime}$ as multiples $\chi_{M^{\prime}}(z)$ of
the identity, for some character $\chi_{M^{\prime}}\colon Z\to\mathbb{C}$. The
linear function $\psi\in M^{\prime}$ of Section 2.1 is defined by the
conditions $f_{i}\psi=-\psi$ and $\psi(v)=1$ and is in particular a Whittaker
vector. On the other hand, any Whittaker vector in $M^{\prime}$ is uniquely
determined by its value on $v$ since $v$ generates $M$ as a module over
$U(\mathfrak{n}^{-})$. Thus:
###### Lemma 3.2.
The space of Whittaker vectors $\operatorname{Wh}(M^{\prime})$ is one-
dimensional, spanned by the Whittaker vector $\psi$ of infinitesimal character
$\chi_{M^{\prime}}$.
More generally let us consider the problem of describing the $Z$-module of
Whittaker vectors in the $\mathfrak{gl}_{n}$-module $M^{\prime}\otimes
U\cong\operatorname{Hom}_{\mathbb{C}}(M,U)$ for a Verma module $M$ and a
fundamental module $U$. By definition $\alpha\colon M\to U$ is a Whittaker
vector if and only if
$x\alpha(m)=\alpha(xm)+\eta(x)\alpha(m),\quad\forall m\in
M,x\in\mathfrak{n}^{-}.$
It follows that a Whittaker vector $\alpha$ is again uniquely determined by
its value on the highest weight vector $v\in M$.
Let $M_{t-\rho}$ denote the Verma module of highest weight
$t-\rho\in\mathbb{C}^{n}$. Let $\chi(t)=\chi_{M^{\prime}_{t-\rho}}$ be the
infinitesimal character of its dual.
###### Proposition 3.3.
Let $r\in\\{1,\dots,n-1\\}$ and $t\in\mathbb{C}^{n}$ be generic and set
$z_{i}=t_{i+1}-t_{i}$, $(i=1,\dots,n-1)$. Then for each Young diagram
$\lambda\in\mathcal{I}_{r}$ there is a unique Whittaker vector
$\alpha_{\lambda,t}\in\operatorname{Hom}_{\mathbb{C}}(M_{w(\lambda)+t-\rho},U_{r})$
of infinitesimal character $\chi(t)$ such that
$\alpha_{\lambda,t}(v)=\sum_{\mu\leqslant\lambda}g_{\lambda/\mu}(z)u_{\mu},$
where
$g_{\lambda/\mu}(z)=\frac{1}{\prod_{\square\in\lambda}h(\square)}\sum_{\nu\in
E(\lambda/\mu)}\prod_{\square\in\nu}h(\square),$
and $h(\square)=1+\sum_{\square^{\prime}\in
H_{\lambda}(\square)}z(\square^{\prime})$.
###### Proof.
The morphism of $\mathfrak{gl}_{n}$-modules $M_{w(\lambda)+t-\rho}\to
M_{t-\rho}\otimes U_{r}$ sending the highest weight vector to the singular
vector $v(\lambda)=\sum_{\mu\leqslant\lambda}v_{\mu}\otimes u_{\mu}$, see
Section 2.4, induces a morphism
$M^{\prime}_{t-\rho}\to\operatorname{Hom}_{\mathbb{C}}(M_{w(\lambda)+t-\rho},U_{r}).$
The morphism property implies that it sends the Whittaker vector $\psi$ of
infinitesimal character $\chi(t)$ to a Whittaker vector $\alpha$ with the same
infinitesimal character. By definition
$\alpha(v)=\sum_{\mu\leqslant\lambda}\psi(v_{\mu})u_{\mu}$ and
$g_{\lambda/\mu}=\psi(v_{\mu})$ is given in Theorem 2.5. ∎
We thus obtain an explicit diagonalization of the action of the center $Z$ on
the space of Whittaker vectors in $M^{\prime}\otimes U$ for a generic Verma
module $M$ and a fundamental module $U$:
###### Theorem 3.4.
Let $t\in\mathbb{C}^{n}$, $r\in\\{1,\dots,n-1\\}$ and $y_{i}=t_{i+1}-t_{i}$
$(i=1,\dots,n-1)$. Then
$W=\operatorname{Wh}(\operatorname{Hom}_{\mathbb{C}}(M_{t-\rho},U_{r}))$
has dimension $\operatorname{dim}(U_{r})$. For generic $t$, $W$ decomposes
into a direct sum $W=\oplus_{\lambda\in\mathcal{I}_{r}}W_{\lambda}$ of
$Z$-invariant one-dimensional subspaces on which $Z$ acts by the character
$\chi(t-w(\lambda))$. The subspace $W_{\lambda}$ is spanned by the Whittaker
vector $\beta_{\lambda,t}$, such that
$\beta_{\lambda,t}(v)=\sum_{\mu\leqslant\lambda}\tilde{g}_{\lambda/\mu}(y)u_{\mu}$
with
$\tilde{g}_{\lambda/\mu}(y)=\frac{1}{\prod_{\square\in\lambda}\tilde{h}(\square)}\sum_{\nu\in
E(\lambda/\mu)}\prod_{\square\in\nu}\tilde{h}(\square),$
and $\tilde{h}(\square)=\sum_{\square^{\prime}\in
H_{\lambda}(\square)}y(\square^{\prime})$.
###### Proof.
By Proposition 3.3 the Whittaker vectors
$\beta_{\lambda,t}=\alpha_{\lambda,t-w(\lambda)}$ belong to $W$ and have
infinitesimal character $\chi(t-w(\lambda))$. Since
$\beta_{\lambda}(v)=u_{\lambda}$ plus a linear combination of $u_{\mu}$ with
$\mu<\lambda$, these vectors are linearly independent. We need to show that
they span $W$. Let $\beta$ be a Whittaker vector. Since the vectors
$\beta_{\lambda}(v)$ form a basis of $U_{r}$, there exist coefficients
$c_{\lambda}\in\mathbb{C}$ so that
$\gamma=\beta-\sum_{\lambda\in\mathcal{I}_{r}}c_{\lambda}\beta_{\lambda}$ is a
Whittaker vector which vanishes on $v$. Since a Whittaker vector is uniquely
determined by its value on $v$, $\gamma$ must be zero.
Let $t^{\prime}=t-w(\lambda)$ and
$z_{i}^{\prime}=t^{\prime}_{i+1}-t^{\prime}_{i}$, $(i=1,\dots,n-1)$. We need
to compute the coefficients $g_{\lambda/\mu}(z^{\prime})$. By Lemma 2.7,
$z^{\prime}=z(y)$ defined in Section 2.8 and therefore
$g_{\lambda/\mu}(z^{\prime})=\tilde{g}_{\lambda/\mu}(y)$. ∎
## References
* [AO] M. Aganagic, A. Okounkov, Quasimap counts and Bethe eigenfunctions, Moscow Mathematical Journal 17 (2017), no. 4, 565--600
* [DS1] H. Dinkins, A. Smirnov, Quasimaps to zero-dimensional $A_{\infty}$-quiver varieties, arXiv:1912.04834, 1--34
* [DS2] H. Dinkins, A. Smirnov, Capped vertex with descendants for zero dimensional $A_{\infty}$-quiver varieties, arXiv:2005.12980, 1--33
* [IN] T. Ikeda, H. Naruse, Excited Young diagrams and equivariant Schubert calculus, Trans. AMS 361 (2009), 5193--5221
* [Ko] B. Kostant, On Whittaker vectors and representation theory, Invent. Math. 48 (1978), 101--184
* [MO] D. Maulik, A. Okounkov, Quantum groups and quantum cohomology, Astérisque 408 (Société Mathématique de France, 2019), 1--277; https://doi.org/10.24033/ast.1074
* [MPP] A. Morales, I. Pak, G. Panova, Hook formulas for skew shapes I. $q$-analogues and bijections, Journal of Combinatorial Theory, Series A 154 (2018), 350--405
* [MV] Y. Markov, A. Varchenko, Hypergeometric solutions of trigonometric KZ equations satisfy dynamical difference equations, Adv. Math. 166 (2002), no. 1, 100--147
* [Na] H. Naruse, Schubert calculus and hook formula, talk slides at 73rd Sem. Lothar. Combin., Strobl, Austria, 2014; available at www.mat.univie.ac.at/~slc/wpapers/s73vortrag/naruse.pdf
* [O] A. Okounkov, Lectures on $K$-theoretic computations in enumerative geometry, IAS/Park City Math. Ser. 24, Amer. Math. Soc., Providence, RI, 2017, 251--380
* [SV] V. Schechtman, A. Varchenko, Arrangements of hyperplanes and Lie algebra homology, Invent. Math. 106 (1991), 139--194
* [SmV1] A. Smirnov, A. Varchenko, The $p$-adic approximations of vertex functions via $3D$-mirror symmetry, arXiv:2302.03092, 1--22
* [SmV2] A. Smirnov, A. Varchenko, Polynomial superpotential for Grassmannian $\operatorname{Gr}(k,n)$ from a limit of vertex function, arXiv:2305.03849, 1--16
* [TV] V. Tarasov, A. Varchenko, Difference equations compatible with trigonometric KZ differential equations, IMRN 2000, no. 15, 801--829
# The zonal-flow residual does not tend to zero in the limit of small mirror
ratio
E. Rodríguez G. G. Plunk Max Planck Institute for Plasma Physics, 17491
Greifswald, Germany
###### Abstract
The intensity of the turbulence in tokamaks and stellarators depends on its
ability to excite and sustain zonal flows. Insight into this physics may be
gained by studying the “residual”, i.e. the late-time linear response of the
system to an initial perturbation. We investigate this zonal-flow residual in
the limit of a small magnetic mirror ratio, where we find that the typical
quadratic approximation to RH (Rosenbluth & Hinton, 1998) breaks down. Barely
passing particles are in this limit central in determining the resulting level
of the residual, which we estimate analytically. The role played by the
population with large orbit width provides valuable physical insight into the
response of the residual beyond this limit. Applying this result to tokamak,
quasi-symmetric and quasi-isodynamic equilibria, using a near-axis
approximation, we identify the effect to be more relevant (although small) in
the core of quasi-axisymmetric fields, where the residual is smallest. The
analysis in the paper also clarifies the relationship between the residual and
the geodesic acoustic mode, whose typical theoretical set-ups are similar.
## 1 Introduction
There exists a strong current interest in exploring the space of stellarators
(Spitzer Jr, 1958; Boozer, 1998; Helander, 2014), three-dimensional, toroidal
magnetic confinement fields. Optimising such fields in order to achieve plasma
confinement and ultimately controlled thermonuclear fusion requires careful
design and shaping of the field so that it exhibits the desired physical properties.
In guiding this search, it is imperative to have a good understanding of the
key physics involved. Given the breadth of the stellarator concept, though,
this naturally requires stretching our understanding of physics that are
comparatively mature in the simpler case of the axisymmetric tokamak
(Mukhovatov & Shafranov, 1971; Wesson, 2011).
Amongst the critical elements that govern the behaviour of a stellarator,
turbulence is a particularly interesting and important one. Understanding the
neoclassical behaviour of stellarators has historically captivated much of the
focus of research, mainly because of its predominant role in the transport of
unoptimised stellarators through the so-called $1/\nu$ regime (Galeev et al.,
1969; Stringer, 1972; Ho & Kulsrud, 1987; Nemov et al., 1999; Mynick, 2006).
Progress over the last decades, and especially over the past years (Beidler et
al., 2021; Landreman & Paul, 2022; Goodman et al., 2023), has however brought
turbulence to the forefront, and it is now regarded as one of the key elements
determining the performance of stellarators.
Zonal flow dynamics are of particular interest in the study of turbulence
(Diamond et al., 2005), as they are understood to play a key role in
regulating turbulence by shearing eddies apart, lowering the overall intensity
of turbulent fluctuations. The description of full zonal-flow dynamics is
certainly complex, as an essentially non-linear response of the system.
However, one may learn some basic information about the ability of a given
magnetic equilibrium to sustain such flows by considering the behaviour of the
so-called zonal-flow residual (Rosenbluth & Hinton, 1998; Xiao & Catto, 2006;
Sugama & Watanabe, 2006; Monreal et al., 2016). The residual is the long-time
remnant of an initial radially varying perturbation of the electrostatic
potential. The prevalence of a large such remnant is, at least sometimes,
indicative of the system’s capacity to sustain zonal dynamics in a turbulent
state (Watanabe et al., 2008; Xanthopoulos et al., 2011). The calculation of
the residual thus serves as a reasonable starting point for the assessment of
zonal flows in a given magnetic equilibrium. The main theoretical
understanding of the residual behaviour was pioneered by Rosenbluth & Hinton
(1998), and subsequently refined and extended by others (Xiao & Catto, 2006;
Sugama & Watanabe, 2006; Monreal et al., 2016; Plunk & Helander, 2024),
including in the electromagnetic context (Catto et al., 2017).
The level of the residual depends strongly on the size of the orbit-width,
$\delta$, of the particles in the field, that is, the magnitude of the
particle deviation from flux surfaces as they move along field lines. The
dependence is so strong that, in a typical scenario (Rosenbluth & Hinton,
1998), it is the trapped particles (whose orbit widths are largest) that
contribute most to the residual. The larger the orbit widths, the lower the
residual levels, as the shielding from these becomes more effective
(Rosenbluth & Hinton, 1998; Xiao & Catto, 2006). In fact, it is conventionally
argued that in the limit of $B$ becoming flat (small mirror ratio), the large
trapped particle orbits cause the residual to vanish. Of course it is also in
this limit that there are also no trapped particles left in the problem,
somewhat complicating the asymptotic analysis.
In this paper we revisit the theoretical question of the zonal-flow residual
in this limit. An assessment is presented in Section 2, where we also draw
connections to the standard framework of geodesic-acoustic-modes (Conway et
al., 2021). We learn that barely passing particles play the dominant role in
determining the final finite value of the residual in the small mirror ratio
limit. This large-orbit-width part of the population behaves, we argue, as if
non-omnigeneous, as far as the residual is concerned. We find support for
these claims numerically through linear gyrokinetic simulations. We close the
discussion in Section 4 with an assessment of the relevance of this effect on
tokamaks and omnigeneous stellarators, which appears to be limited.
## 2 Residual calculation in the small mirror ratio limit
### 2.1 Brief derivation of the residual
Let us start our discussion on the zonal-flow residual by calculating it in
its most typical of set-ups. We follow closely the work of Rosenbluth & Hinton
(1998); Xiao & Catto (2006); Monreal et al. (2016); Plunk & Helander (2024),
but include a brief derivation for completeness and as a way of introduction
of notation.
By residual, which we denote $\phi(\infty)$, we mean the surface averaged
collisionless electrostatic potential in the long time limit. To describe it,
we take the linearised, electrostatic gyrokinetic equation as starting point
(Connor et al., 1978, 1980),
$\left(\frac{\partial}{\partial
t}+i\tilde{\omega}_{d}+v_{\parallel}\frac{\partial}{\partial\ell}\right)g=\frac{q}{T}F_{0}J_{0}\frac{\partial}{\partial
t}\phi,$ (1)
written in the ballooning formalism with the variation perpendicular to the
field line described by $\mathbf{k}_{\perp}=k_{\psi}\nabla\psi$. Here $\psi$
is the flux surface label (the toroidal flux over $2\pi$), so that the
electrostatic potential perturbation $\phi$ has a main strong off-surface
variation, which is the reason why there is no diamagnetic term in Eq. (1),
$\omega_{\star}=0$. Other symbols have their usual meaning: $F_{0}$ is the
background Maxwellian distribution, $J_{0}=J_{0}(x_{\perp}\sqrt{2b})$ the
Bessel function of the first kind representing Larmor radius effects and
$b=(k_{\psi}|\nabla\psi|\rho)^{2}/2$ the Larmor radius parameter, with
$\rho=v_{T}/\Omega$, $v_{T}=\sqrt{2T/m}$ and $\Omega=q\bar{B}/m$ (at this
point we are considering a general species of mass $m$, charge $q$ and
temperature $T$). The drift frequency
$\tilde{\omega}_{d}=\omega_{d}(v/v_{T})^{2}(1-\lambda B/2)$ and
$\omega_{d}=\mathbf{v}_{D}\cdot\mathbf{k}_{\perp}=v_{T}\rho\bar{B}k_{\psi}{\boldsymbol{\kappa}}\times\mathbf{B}\cdot\nabla\psi/B^{2}$,
with $\bar{B}$ a reference field, ${\boldsymbol{\kappa}}$ the curvature of the
field and the drift is considered in the low $\beta$ limit. The velocity space
variables are $\lambda=\mu/\mathcal{E}$ and particle velocity
$v=\sqrt{2\mathcal{E}/m}$, where $\mu$ is the first adiabatic invariant and
$\mathcal{E}$ the particle energy. The parallel velocity can then be written
as $v_{\parallel}=\sigma v\sqrt{1-\lambda B}$, where $\sigma$ is the sign of
$v_{\parallel}$.
Equation (1) is then a partial differential equation in time $t$ and the arc
length along the field line $\ell$, for the electrostatic potential $\phi$ and
the non-adiabatic part of the distribution function, $g$, with a dependence on
the velocity space variables $\\{\sigma,v,\lambda\\}$. Performing a Laplace
transform in time (Schiff, 2013, Theorem 2.7) yields
$\left(\omega-\tilde{\omega}_{d}+iv_{\parallel}\frac{\partial}{\partial\ell}\right)\hat{g}=\frac{q}{T}F_{0}J_{0}\omega\hat{\phi}+i\delta\\!F(0),$
(2)
where
$\delta\\!F(0)\stackrel{{\scriptstyle\cdot}}{{=}}g(0)-(q/T)J_{0}F_{0}\phi(0)$
can be interpreted as the initial perturbation of the system, and we are using
the hats to indicate the Laplace transform.
To eliminate the explicit $\ell$ dependence that the curvature,
$\tilde{\omega}_{d}$, brings into the equation, we shall define the orbit
width $\delta$,
$v_{\parallel}\frac{\partial}{\partial\ell}\delta=\tilde{\omega}_{d}-\overline{\tilde{\omega}_{d}}$
(3)
so that we may write,
$\left(iv_{\parallel}\frac{\partial}{\partial\ell}-\overline{\tilde{\omega}_{d}}+\omega\right)\hat{h}=\frac{q}{T}F_{0}\omega
J_{0}\hat{\phi}e^{i\delta}+ie^{i\delta}\delta\\!F(0),$ (4)
and $\hat{h}=\hat{g}e^{i\delta}$. The function $\delta$ describes the off-
surface displacement of particles (in $\psi$) as a function of $\ell$, for
each particle identified by its velocity space labels. The overline notation
indicates the bounce average,
$\overline{f}=\begin{cases}\begin{aligned}
&\frac{1}{\tau_{b}}\frac{1}{v}\int_{\mathrm{b}}\frac{1}{\sqrt{1-\lambda
B}}\sum_{\sigma}f\,\mathrm{d}\ell,\\\
&\lim_{L\rightarrow\infty}\frac{1}{\tau_{t}}\frac{1}{v}\int_{\mathrm{p}}\frac{f\,\mathrm{d}\ell}{\sqrt{1-\lambda
B}}.\end{aligned}\end{cases}$ (5)
The first expression applies to trapped particles, where the integral is taken
between the left and right bounce points and summed over both directions
($\sigma$) of the particle’s motion. The normalisation factor is the bounce
time, $\tau_{b}$, defined following $\overline{1}=1$. For passing particles,
the integral is taken over the whole flux surface (i.e. the infinite extent of
the field line explicitly indicated by the limit), and normalised by the
transit time, $\tau_{t}$.
When $\overline{\tilde{\omega}_{d}}=0$, Eq. (4) simplifies. This corresponds
to the physical interpretation of particles having no net off-surface drift.
This is the defining property of omnigeneity (Hall & McNamara, 1975a; Cary &
Shasharina, 1997; Helander & Nührenberg, 2009; Landreman & Catto, 2012), which
we shall assume to hold throughout this work. For a treatment of the non-
omnigeneous problem see Helander et al. (2011); Monreal et al. (2016).
Because we are interested in the behaviour at large time scales, we expand in
$\omega/\omega_{t}\sim\epsilon_{t}$, applying
$\hat{h}=\hat{h}^{(0)}+\hat{h}^{(1)}+\dots$ and
$\hat{\phi}=\hat{\phi}^{(0)}+\hat{\phi}^{(1)}+\dots$, and considering Eq. (4)
order by order,
$\displaystyle iv_{\parallel}\frac{\partial}{\partial\ell}\hat{h}^{(0)}$
$\displaystyle\approx 0,$ (6a) $\displaystyle
iv_{\parallel}\frac{\partial}{\partial\ell}\hat{h}^{(1)}+\omega\hat{h}^{(0)}$
$\displaystyle\approx\frac{q}{T}\omega
F_{0}J_{0}\hat{\phi}^{(0)}e^{i\delta}+ie^{i\delta}\delta\\!F(0),$ (6b)
$\displaystyle\vdots$
From Eq. (6a) it follows that,
$\hat{h}^{(0)}=\overline{\hat{h}^{(0)}}.$ (7)
Thus, bounce averaging Eq. (6b), and assuming that $\hat{\phi}^{(0)}$ is
$\ell$-independent, we may write down the leading order expression for
$\hat{g}^{(0)}$,
$\hat{g}^{(0)}=\frac{q}{T}F_{0}e^{-i\delta}\overline{J_{0}e^{i\delta}}\hat{\phi}^{(0)}+\frac{i}{\omega}e^{-i\delta}\overline{\delta\\!F(0)e^{i\delta}}.$
(8)
With this expression for $\hat{g}$, we may then apply the quasineutrality
condition (Connor et al., 1980) summing over ions and electrons. Explicitly,
and summing over electrons and ions (subscripts $e$ and $i$ respectively)
$\sum_{e,i}\int\mathrm{d}^{3}\mathbf{v}J_{0}\hat{g}=n\frac{q_{i}}{T_{i}}(1+\tau)\hat{\phi},$
(9)
where $\tau=T_{i}/ZT_{e}$ and $Z=-q_{i}/q_{e}$, then yields
$\hat{\phi}^{(0)}\approx\frac{1}{n}\left\langle\int\mathrm{d}^{3}\mathbf{v}J_{0}e^{-i\delta}\overline{J_{0}e^{i\delta}}F_{0}\right\rangle_{\psi}\hat{\phi}^{(0)}+\frac{i}{\omega}\frac{1}{n}\left\langle\int\mathrm{d}^{3}\mathbf{v}J_{0}e^{-i\delta}\overline{\delta\\!F(0)e^{i\delta}}\right\rangle_{\psi}.$
(10)
Here $\langle\dots\rangle_{\psi}$ denotes a flux surface average (Helander,
2014), and we have taken the limit of $m_{e}\ll m_{i}$, so that the limit of a
negligible electron Larmor radius and electron banana width may be taken; this
is equivalent to an adiabatic electron response
$\phi-\langle\phi\rangle_{\psi}$, making the final form of the residual
independent of electrons.
By inverse Laplace transforming this latest expression (Schiff, 2013, Theorem
2.36), we obtain,
$\phi(\infty)=\frac{\frac{1}{n}\left\langle\int\mathrm{d}^{3}\mathbf{v}J_{0}e^{-i\delta}\overline{\delta\\!F(0)e^{i\delta}}\right\rangle_{\psi}}{1-\frac{1}{n}\left\langle\int\mathrm{d}^{3}\mathbf{v}J_{0}e^{-i\delta}\overline{J_{0}e^{i\delta}}F_{0}\right\rangle_{\psi}}.$
(11)
To finalise the calculation of the residual, we must consider some initial
perturbation of the ion population. Following Rosenbluth & Hinton (1998);
Monreal et al. (2016), we perturb the density of the ions with
$\delta\\!F(0)=(\delta n/n)J_{0}F_{0}$, a perturbed Maxwellian, sidestepping
the issue of detailed initial-condition dependence of the residual, especially
important at shorter wavelengths (Monreal et al., 2016). Applying
quasineutrality at $t=0$, the density perturbation may be directly related to
the perturbed electrostatic potential $\phi(0)$. Assuming that $b$ is
independent of $\ell$ for simplicity, $\delta
n/n=\phi(0)(1-\Gamma_{0})/\Gamma_{0}$ where $\Gamma_{0}=e^{-b}I_{0}(b)$ and
$I_{0}$ is the modified Bessel function of the first kind. Therefore, the expression
for the residual at long times is,
$\frac{\phi(\infty)}{\phi(0)}\approx\frac{1-\Gamma_{0}}{\Gamma_{0}}\frac{\frac{1}{n}\left\langle\int\mathrm{d}^{3}\mathbf{v}J_{0}e^{-i\delta}\overline{J_{0}e^{i\delta}}F_{0}\right\rangle_{\psi}}{1-\frac{1}{n}\left\langle\int\mathrm{d}^{3}\mathbf{v}J_{0}e^{-i\delta}\overline{J_{0}e^{i\delta}}F_{0}\right\rangle_{\psi}}.$
(12)
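For reference, the prefactor $(1-\Gamma_{0})/\Gamma_{0}$ in Eq. (12) can be evaluated from the power series of the modified Bessel function, $I_{0}(b)=\sum_{k}(b^{2}/4)^{k}/(k!)^{2}$; in the long-wavelength (small $b$) limit it reduces to $b$. A short Python sketch (ours, for illustration only):

```python
import math

def gamma0(b, terms=40):
    """Gamma_0(b) = exp(-b) * I_0(b), with the modified Bessel function
    of the first kind evaluated from its power series
    I_0(b) = sum_k (b^2/4)^k / (k!)^2."""
    i0 = sum((b * b / 4.0) ** k / math.factorial(k) ** 2 for k in range(terms))
    return math.exp(-b) * i0

# Long-wavelength limit of the prefactor in Eq. (12): (1 - Gamma_0)/Gamma_0 -> b
b = 1e-3
prefactor = (1.0 - gamma0(b)) / gamma0(b)
```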
### 2.2 Finite orbit width
In order to proceed with the evaluation of Eq. (12) we first need to study the
orbits of our particles, namely $\delta$. These will depend critically on both
$B(\ell)$ (which controls the time spent by particles along different segments
of the field-line), and the normal curvature $\omega_{d}$ (that determines the
off-surface velocity). Although in an actual equilibrium field these functions
are connected to each other, it is formally convenient to set this equilibrium
connection aside, and treat them as largely independent quantities in the
context of a single flux tube.
Despite this independence, it is important to respect some minimal properties.
First, for the choice of functions to appropriately represent the behaviour in
an omnigeneous field, they should prevent diverging particle orbits. We
prevent this ill-behaviour by ensuring that the critical points of $B(\ell)$
match points of zero radial drift; that is, $\omega_{d}(\ell)=0$ wherever
$\mathrm{d}B(\ell)/\mathrm{d}\ell=0$. This property is known as pseudosymmetry
(Mikhailov et al., 2002; Skovoroda, 2005), and is necessary to represent an
omnigeneous field. However, it is not sufficient. In addition, we must impose
that all the orbits $\delta$ are closed; that is, that they come back to the
same $\psi$ at bounce points, or for passing particles, after a period.
With this, we may write explicitly $\delta$ integrating Eq. (3), as
$\delta=\sigma\frac{v}{v_{T}}\int_{\bar{\ell}_{0}}^{\bar{\ell}}\frac{1-\lambda
B/2}{\sqrt{1-\lambda
B}}\frac{\omega_{d}(\bar{\ell}^{\prime})}{\omega_{t}}\mathrm{d}\bar{\ell}^{\prime}$
(13)
where we have introduced a normalised length scale $\bar{\ell}$ and an
associated transit frequency $\omega_{t}=v_{T}/L$, with $L$ some reference
length scale. The integral is defined so that $\delta(\bar{\ell}_{0})=0$,
where $\bar{\ell}_{0}$ corresponds to bounce points for trapped particles, and
the point $B=B_{\mathrm{max}}$ for passing ones to guarantee continuity across
the trapped-passing boundary.111Note that it does not
matter which point of maximum $B$ or bounce point (left or right) along the
field line we choose, because $\delta=0$ at all of these by virtue of
omnigeneity. This property of omnigeneous fields is very important, and it
allows us to treat each well along the field line independently from every
other. This is so because there is no accumulation of radial displacement of
passing particles across maxima. Thus, the considerations that the paper
presents for a single well could be extended to multiple omnigeneous wells,
treating each separately, and summing their contributions when considering
flux surface averages, as needed in Eq. (12).
The regularising role of pseudosymmetry at critical points of $B(\ell)$, where
it avoids diverging behaviour, can be seen directly from Eq. (13). This allows
us to rewrite $\delta$ in a form that avoids the explicit $1/\sqrt{\cdot}$
divergence using integration by parts,
$\delta=-\sigma\frac{v}{v_{T}}\left[\frac{B^{2}}{\partial_{\bar{\ell}}B}\frac{\omega_{d}(\bar{\ell})}{\omega_{t}}\frac{\sqrt{1-\lambda
B}}{B}\right]_{\bar{\ell}_{0}}^{\bar{\ell}}+\sigma\frac{v}{v_{T}}\int_{\bar{\ell}_{0}}^{\bar{\ell}}\frac{\sqrt{1-\lambda
B}}{B}\partial_{\bar{\ell}^{\prime}}\left(\frac{B^{2}}{\partial_{\bar{\ell}^{\prime}}B}\frac{\omega_{d}(\bar{\ell}^{\prime})}{\omega_{t}}\right)\mathrm{d}{\bar{\ell}}^{\prime}.$
(14)
This integrated form of the equation is also useful to numerically compute
$\delta$ near bounce points.
These expressions are so far quite general, and we shall now specialise to a
simple representative system. In particular, we assume a single unique
magnetic well along the field line, described simply by
$B=\bar{B}\left(1-\Delta\cos\pi\bar{\ell}\right)$ and
$\omega_{d}=\omega_{d}\sin\pi\bar{\ell}$, where the domain is taken to be
$\bar{\ell}\in[-1,1]$.222Along any field line of an omnigeneous field, every
time a maximum of $B$ is crossed, one falls into a new magnetic well. In the
case of a tokamak, all those wells are identical by virtue of axisymmetry, and
thus the consideration of a single unique well is sufficient. Other optimised
configurations, though, lack this exact symmetry, which requires some
additional interpretation; some of this is discussed in Section 4. Thus the
scale $L$ can be interpreted as the connection
length in the problem, or the half-width of the well, $\Delta$ the mirror
ratio and $\omega_{d}$ the drift. This particular choice is convenient in two
ways: first, because the choice $\omega_{d}=c\partial_{\ell}B$, with $c$ some
proportionality constant, simplifies Eq. (14) and conveniently guarantees the
closure of particle orbits; and second, because many of the integrals that
ensue may be carried out exactly for such simple analytic functions. Of
course, deforming these geometric functions away from these forms (in
particular, breaking the parity in $\bar{\ell}$) will directly affect the
orbit shape $\delta$ and ultimately the residual, but this model nonetheless
includes the essential ingredients.
Figure 1: Example of passing and trapped orbits. Numerical examples of trapped
and passing orbits for different values of $\lambda$ for the model field
considered in the paper. The plots were generated for $\Delta=0.05$. The
dotted line on top and bottom correspond to the $\delta(0)$ estimate in Eq.
(18) (grey line simply indicates the reference $\delta=0$ level). Critical
points are marked with solid points.
#### 2.2.1 Passing particles
Let us start our description of the passing particle orbits by considering
their maximum deviation off the flux surface, i.e. their orbit widths
$\delta|_{\bar{\ell}=0}=\delta(0)$. By passing particles we refer to the
portion of velocity space with $\lambda\in(0,1/B_{\mathrm{max}})$, which we
may also label with the convenient shifted variable
$\hat{\lambda}=1/(1+\Delta)-\bar{B}\lambda$. In this case $\hat{\lambda}=0$
represents the trapped-passing boundary, and
$\hat{\lambda}=\bar{B}/B_{\mathrm{max}}$ is approached for the passing
particles far from the trapped-passing boundary, which we will refer to as
strongly passing. It is convenient to introduce yet an additional label for
passing particles, namely
$\kappa=2\lambda\bar{B}\Delta/[1-\lambda\bar{B}(1-\Delta)]$, which is bounded
$\kappa\in(0,1)$ and denotes barely passing particles by $\kappa=1$ and
strongly passing by $\kappa=0$.
For the model field considered, $\delta(0)$ may be evaluated exactly in terms
of $\lambda$, $\Delta$ and other parameters. However, it is more insightful to
consider some relevant asymptotic limits. In the limit of a small mirror ratio
$\Delta$, the passing population is naturally separated into three different
regimes, where we may write,
$\delta_{\mathrm{pass}}(0)\approx-\sigma\frac{v}{v_{T}}\frac{\omega_{d}}{\pi\omega_{t}}\times\begin{cases}\begin{aligned}
&\sqrt{\frac{2}{\Delta}}&\quad\text{if }\hat{\lambda}\ll\Delta,\\\
&\frac{1}{\sqrt{\hat{\lambda}}}&\quad\text{if }\Delta\ll\hat{\lambda}\ll 1,\\\
&\frac{1}{\sqrt{\hat{\lambda}}}+\sqrt{\hat{\lambda}}&\quad\text{if
}\hat{\lambda}\sim 1.\end{aligned}\end{cases}$ (15)
The orbits are widest within a layer of width $\Delta$ near the trapped-
passing boundary, where all barely passing particles have large, almost
identical orbits that scale like $\sim 1/\sqrt{\Delta}$. This is a consequence
of particles moving slowly along the field line by an amount
$v_{\parallel}\sim\sqrt{1-\lambda B}\sim\sqrt{\Delta}$. Thus, there always exists a mirror ratio small enough to slow barely passing particles down until their orbit width becomes sizeable; this is true even for a small radial drift $\epsilon\equiv\omega_{d}/\pi\omega_{t}\ll 1$.
We estimate the size of the $v$-space layer that includes particles with a
sizeable orbit width (i.e. $|\delta_{\mathrm{pass}}(0)|>1$) in the limit of
$\epsilon\ll 1$ by taking the behaviour of a typical thermal particle
$v/v_{T}\sim 1$ as reference in Eq. (15), so that
$\hat{\lambda}<\epsilon^{2}=\left(\frac{\omega_{d}}{\pi\omega_{t}}\right)^{2}.$
(16)
Such a layer can only exist if the mirror ratio is sufficiently small,
$\Delta/\epsilon^{2}\ll 1.$ (17)
Not satisfying this mirror-ratio ordering restores the standard view of passing particles having small orbit widths (as in the quadratic approximation of the residual in Rosenbluth & Hinton (1998)). The small-mirror-ratio ordering, alongside the $\epsilon\ll 1$ assumption, is henceforth assumed.
#### 2.2.2 Trapped particles
The procedure above may be repeated for trapped particles. Defining a trapped
particle label
$\bar{\kappa}=1/\kappa=[1/(\lambda\bar{B})-(1-\Delta)]/2\Delta$, deeply
trapped particles are denoted by $\bar{\kappa}=0$ and barely trapped ones by
$\bar{\kappa}=1$. The orbit width may then be written as,
$\delta_{\mathrm{trap}}(0)\approx-\sigma\frac{v}{v_{T}}\epsilon\sqrt{\frac{2\bar{\kappa}}{\Delta}},$
(18)
assuming $\Delta\ll 1$. Unlike passing particles, the majority of trapped
particles have a significant orbit width (in the $\sim 1/\sqrt{\Delta}$
sense), except for a minute fraction near the bottom of the well which barely
moves away from that point. This fraction may be estimated to be
$\bar{\kappa}<\frac{\Delta}{\epsilon^{2}},$ (19)
which we have already assumed small.
### 2.3 Evaluating the residual for small mirror ratio
Figure 2: Separation of particles into groups. The diagram depicts the
separation of the particle population into four different groups (I to IV).
Groups I and IV (light blue) represent the population with a small orbit
width, while II and III (light red) correspond to large ones. The diagram is a
schematic with the vertical representing $1/\lambda$, the horizontal
$\bar{\ell}$ and the black line representing the magnetic well
$B(\bar{\ell})$.
In the limit of a small mirror ratio, we have learned from the analysis of the
orbits that the particle population may be divided into four different groups.
Each of these groups is characterised by having a large or small $\delta$, and
thus a different contribution to Eq. (12). We refer to each of these groups by
Roman numerals I to IV, starting from strongly passing particles (see Figure
2).
To proceed with the residual integral, let us assume for simplicity that the finite-Larmor-radius quantity $b$ is small. This is compatible with $\epsilon$ being small (note that $\epsilon\propto k_{\perp}\rho_{i}$). With this, we may
write the integral in the denominator of the residual, Eq. (12),
$1-\frac{1}{n}\left\langle\int\mathrm{d}^{3}\mathbf{v}J_{0}e^{-i\delta}\overline{J_{0}e^{i\delta}}F_{0}\right\rangle_{\psi}\approx
b-\frac{1}{n}\left\langle\int\mathrm{d}^{3}\mathbf{v}\left(e^{-i\delta}\overline{e^{i\delta}}-1\right)F_{0}\right\rangle_{\psi},$
(20)
where we used,
$\frac{1}{n}\left\langle\int\mathrm{d}^{3}\mathbf{v}J_{0}^{2}F_{0}\right\rangle_{\psi}=\frac{1}{2}e^{-b}I_{0}(b),$
(21)
in the small-$b$ limit; the velocity-space integrals include all groups.
The integral remaining in Eq. (20) has been simplified by dropping finite-
Larmor radius corrections. For groups I and IV for which $\delta$ is small,
retaining $b$ would give an even smaller $O(\delta^{2}b)$ correction, which we
drop. For groups II and III, the correction would also be small in the sense
$O(b\sqrt{\Delta})$, under the assumption of small $\Delta$.
Now separating the integral left in Eq. (20) into the different group
contributions,
$I=\frac{1}{n}\left\langle\int\mathrm{d}^{3}\mathbf{v}\left(e^{-i\delta}\overline{e^{i\delta}}-1\right)F_{0}\right\rangle_{\psi}=\sum_{\mathrm{I,~{}IV}}\frac{1}{n}\left\langle\int\mathrm{d}^{3}\mathbf{v}\left(\overline{\delta}^{2}-\overline{\delta^{2}}\right)F_{0}\right\rangle_{\psi}+\sum_{\mathrm{II,~{}III}}\frac{1}{n}\left\langle\int\mathrm{d}^{3}\mathbf{v}\left(e^{-i\delta}\overline{e^{i\delta}}-1\right)F_{0}\right\rangle_{\psi}.$ (22)
This separation enables us to exploit the smallness or largeness of $\delta$
accordingly. The smallness of the orbit width for groups I and IV has already
been exploited to write the leading order contribution in powers of $\delta$
in the first term of the right-hand side of Eq. (22). This contribution should
be familiar, as it has the quadratic form in which the Rosenbluth-Hinton
residual is customarily written (Rosenbluth & Hinton, 1998; Xiao & Catto,
2006; Plunk & Helander, 2024). We set this part of the calculation aside for
now, and focus on the new contributions by groups II and III.
#### 2.3.1 Contribution from barely passing particles (group II)
Let us continue our analysis by looking at barely passing particles in group
II (see Fig. 2), and their contribution to Eq. (22),
$I_{\mathrm{II}}=\underbrace{\frac{1}{n}\left\langle\int_{\mathrm{II}}\mathrm{d}^{3}\mathbf{v}e^{-i\delta}\overline{e^{i\delta}}F_{0}\right\rangle_{\psi}}_{①}-\underbrace{\frac{1}{n}\left\langle\int_{\mathrm{II}}\mathrm{d}^{3}\mathbf{v}F_{0}\right\rangle_{\psi}}_{②}.$
(23)
First consider ①, and rewrite it following Xiao & Catto (2006) as,
$①=\frac{1}{n}\left\langle\int_{\mathrm{II}}\mathrm{d}^{3}\mathbf{v}\left(\overline{\cos\delta}^{2}+\overline{\sin\delta}^{2}\right)F_{0}\right\rangle_{\psi},$
(24)
where we have dropped terms odd in $v_{\parallel}$, annihilated by the
integral over velocity space. Note that, though one might be tempted to assume otherwise, $\overline{\sin\delta}$ is generally nonzero according to our convention for
the bounce average in Eq. (5), where each direction of the passing particles
is treated separately.
To continue with the calculation, we need to evaluate $\overline{\cos\delta}$
explicitly, exploiting that within group II, the function $\delta$ has a large
amplitude. As a result, we expect the cosine of $\delta$ to oscillate quickly
along $\bar{\ell}$ resulting in an almost exact cancellation. The non-zero
contribution may be estimated through the well-known stationary phase
approximation (Bender & Orszag, 2013, Sec. 6.5),
$\overline{\cos\delta}=\frac{1}{\tau_{t}\omega_{t}}\frac{v_{T}}{v}\Re\left\{\int_{-1}^{1}\frac{e^{i\delta}}{\sqrt{1-\lambda B}}\mathrm{d}\bar{\ell}\right\}\approx\frac{1}{\tau_{t}\omega_{t}}\frac{v_{T}}{v}\sum_{i}\sqrt{\frac{2\pi}{|\delta^{\prime\prime}(\ell_{i})|}}\frac{\cos[\delta(\ell_{i})-\pi/4]}{\sqrt{1-\lambda B(\ell_{i})}},$ (25)
where the sum is over the turning points of $\delta$ in $\bar{\ell}\in[0,1]$.
Using the details of $\delta$ developed in Sec. 2.2.1 and Appendix A,
$\overline{\cos\delta}\approx\frac{2}{\tau_{t}\omega_{t}}\left(\frac{v_{T}}{v}\right)^{3/2}\sqrt{\frac{\omega_{t}}{\omega_{d}}}\left[(4\hat{\lambda})^{-1/4}+\frac{1}{(2\Delta+\hat{\lambda})^{1/4}}\cos\left(\frac{v}{v_{T}}\frac{\epsilon}{\sqrt{\Delta/2+\hat{\lambda}}}-\frac{\pi}{4}\right)\right].$
(26)
The first term inside the square brackets comes from the edge contribution,
and the second from the point of maximum excursion.
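The stationary-phase approximation invoked in Eq. (25) can be sanity-checked on a generic model integral with a single interior stationary point (an illustration of the method only, not the specific $\delta$ of the text):

```python
import numpy as np

# Model check of the stationary-phase method: for the phase φ(x) = x²
# (stationary point at x = 0, φ'' = 2),
#   I(A) = Re ∫_{-1}^{1} e^{iA x²} dx ≈ sqrt(2π/(A|φ''|)) cos(π/4)
# for large A, up to O(1/A) endpoint corrections.
def direct(A, n=400001):
    x = np.linspace(-1.0, 1.0, n)
    return np.sum(np.cos(A * x**2)) * (x[1] - x[0])

def stationary_phase(A):
    return np.sqrt(np.pi / A) * np.cos(np.pi / 4)

A = 400.0
print(direct(A), stationary_phase(A))  # agree to a few per cent
```

The residual discrepancy at finite $A$ comes from the endpoint contributions, which decay like $1/A$; in Eq. (26) the analogous edge contribution is kept explicitly.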
Now that we have $\overline{\cos\delta}$ we must integrate over velocity
space, Eq. (24). To do so we introduce the velocity space measure in the
$\{v,\lambda,\sigma\}$ coordinate system (already summed over $\sigma$ to give a factor of 2) (Hazeltine & Meiss, 2003, Sec. 4.4),
$\mathrm{d}^{3}\mathbf{v}\rightarrow\frac{2\pi B}{\sqrt{1-\lambda B}}v^{2}\mathrm{d}v\mathrm{d}\lambda,$ (27)
and noting that, by definition, any bounce-averaged quantity is $\bar{\ell}$-independent, write for any function $f$ in our single well,
$\left\langle\int_{\mathrm{II}}\mathrm{d}^{3}\mathbf{v}\bar{f}\right\rangle_{\psi}=\pi\bar{B}\int_{v=0}^{\infty}\int_{\mathrm{II}}v^{2}\frac{v}{v_{T}}\tau_{t}\omega_{t}\bar{f}\mathrm{d}v\mathrm{d}\lambda,$
(28)
correct to leading order in $\Delta$.
The simplifying assumption of a $v$-independent boundary layer in Eq. (16) allows us to explicitly carry out the integral over $v$ first. Noting that, with the ordering $\epsilon^{2}/\Delta\gg 1$ (large $A$),
$\displaystyle\int_{0}^{\infty}ve^{-v^{2}}\cos^{2}\left(Av-\frac{\pi}{4}\right)\mathrm{d}v\approx\frac{1}{4},$
(29a) $\displaystyle\int_{0}^{\infty}ve^{-v^{2}}\mathrm{d}v=\frac{1}{2},$
(29b)
we find using the explicit form of the Maxwellian $F_{0}$,
$①\approx\frac{2}{\sqrt{\pi}}\frac{1}{\omega_{d}}\int_{0}^{\epsilon^{2}}\frac{1}{\hat{\tau}_{t}}\left(\frac{1}{\sqrt{\hat{\lambda}}}+\frac{1}{\sqrt{2\Delta+\hat{\lambda}}}\right)\mathrm{d}\hat{\lambda},$
(30)
where $\hat{\tau}_{t}=\tau_{t}(v/v_{T})$ is a function of $\lambda$. In this
form of ① we have already included the contribution from
$\overline{\sin\delta}$, which can be easily shown to be equivalent to that of
the $\overline{\cos\delta}$. To carry out the integral over $\hat{\lambda}$ we
change variables to $\kappa$, defined in Sec. 2.2.1. The integration domain
becomes $\kappa\in[2\Delta/\epsilon^{2},1]$, with an integral measure
$\frac{\mathrm{d}\kappa}{\mathrm{d}\lambda}=2\Delta\bar{B}\left(1+\frac{1-\Delta}{2\Delta}\kappa\right)^{2}.$
(31)
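Incidentally, the large-$A$ limit quoted in Eq. (29a) is quickly verified numerically: the rapidly oscillating $\cos^{2}$ averages to $1/2$, halving the bare integral of Eq. (29b). A minimal check:

```python
import numpy as np

# ∫_0^∞ v e^{-v²} cos²(Av - π/4) dv → 1/4 for large A, since the cos²
# factor averages to 1/2 and ∫_0^∞ v e^{-v²} dv = 1/2 (Eq. 29b).
def osc_integral(A, n=200001, vmax=10.0):
    v = np.linspace(0.0, vmax, n)
    f = v * np.exp(-v**2) * np.cos(A * v - np.pi / 4)**2
    return np.sum(f) * (v[1] - v[0])

for A in (2.0, 5.0, 50.0):
    print(A, osc_integral(A))
# the values approach 0.25 as A grows
```

The correction to $1/4$ is in fact exponentially small in $A$, so the limit is reached very quickly.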
The contribution from the edges of the orbit (the first term in Eq. (30)) can
be shown to be small upon integration over $\kappa$ in the limit of small
$\Delta$. All that is left is the contribution from the point of maximal
excursion, which can be approximated assuming $K(\kappa)\approx\pi/2$,
$①\approx\frac{\epsilon}{\pi^{3/2}}.$ (32)
This concludes the calculation of ①, but ② remains to be found. This
contribution corresponds to finding the fraction of phase space occupied by
the barely passing particles in group II. Using Eq. (28) and the definition of
region II, the integrals over $\kappa$ and $v$ yield,
$②\approx\epsilon.$ (33)
Altogether,
$I_{\mathrm{II}}\approx\frac{\epsilon}{\pi^{3/2}}(1-\pi^{3/2})\approx-0.26\frac{\omega_{d}}{\omega_{t}},$
(34)
yielding an overall negative contribution linear in $k_{\psi}\rho_{i}$.
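As a quick arithmetic check of the coefficient in Eq. (34), using $\epsilon=\omega_{d}/\pi\omega_{t}$:

```python
import math

# I_II = ① - ② = (ε/π^{3/2})(1 - π^{3/2}), with ε = ω_d/(π ω_t),
# so the coefficient multiplying ω_d/ω_t is (1/π^{3/2} - 1)/π.
coeff = (1.0 / math.pi**1.5 - 1.0) / math.pi
print(round(coeff, 3))  # -0.261
```

This reproduces the $-0.26$ quoted in Eq. (34) up to rounding.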
#### 2.3.2 Contribution from the bulk of trapped particles (group III)
A similar approach to that for the barely passing particles may be directly
applied to the trapped particles that constitute group III. Given the
similarities of the calculation we shall be less explicit here.
The evaluation of the integral starts once again by separating the integral
$I_{\mathrm{III}}$ into two parts, ① and ②, like in Eq. (23). In the
calculation of ①, and unlike for passing particles, we only need to consider
the $\overline{\cos\delta}$ term, as $\overline{\sin\delta}=0$ upon summing
over both particle directions, Eq. (5). The $\overline{\cos\delta}$ term may
be computed much like in the previous section, employing the stationary phase
approach. In this case, the only turning point of $\delta_{\mathrm{trap}}$ is
at the centre of the domain, $\bar{\ell}=0$. With that, using the expressions
for $\delta_{\mathrm{trap}}$ introduced in Sec. 2.2.2 and Appendix A, and
performing the integral over $v$ first,
$①\approx\frac{1}{\pi^{3/2}}\frac{\Delta}{\epsilon},$ (35)
which is a small contribution that vanishes in the limit of $\Delta\rightarrow
0$. The velocity space volume occupied by the bulk of trapped particles, ②, is
of course also small in the limit of a small mirror ratio,
$②\sim\sqrt{\Delta}$. Thus, the contribution to the residual from the trapped
population in group III is small in the limit of $\Delta\rightarrow 0$.
#### 2.3.3 Final form of the residual
Gathering the pieces of the calculation above, the integral in Eq. (22)
evaluates to,
$I\approx-0.26\frac{\omega_{d}}{\omega_{t}},$ (36)
in the limit of $\Delta\ll\epsilon^{2}\ll 1$. The latter is particularly
important to argue that the contribution from the particles of groups I and IV
is subsidiary in this limit. We do not need to compute it explicitly to argue
that it scales like $\epsilon^{2}$, and thus is one order $\epsilon$ higher
than the contribution from barely passing particles. Therefore, we may drop
those contributions in writing the result in Eq. (36).
We may now write the expression for the residual itself, going back to Eq.
(12) using the definition of $I$ in Eq. (22),
$\frac{\phi(\infty)}{\phi(0)}\approx\frac{1}{1+0.26\frac{\omega_{d}}{b\omega_{t}}},$
(37)
which in the limit of $b\ll\omega_{d}/\omega_{t}$, i.e. for very long radial wavelengths, can be expressed as
$\frac{\phi(\infty)}{\phi(0)}\approx
1.92~{}k_{\perp}\rho_{i}\left(\frac{k_{\perp}\rho_{i}}{\omega_{d}/\omega_{t}}\right).$
(38)
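The prefactor in Eq. (38) follows from expanding Eq. (37) for small $b$; the quoted 1.92 is consistent with the common convention $b=(k_{\perp}\rho_{i})^{2}/2$, which we assume here for the check:

```python
# Small-b limit of Eq. (37): φ(∞)/φ(0) ≈ (b ω_t)/(0.26 ω_d).
# With the assumed convention b = (k_perp ρ_i)² / 2 this becomes
#   φ(∞)/φ(0) ≈ [1/(2·0.26)] (k_perp ρ_i)² (ω_t/ω_d),
# reproducing the prefactor of Eq. (38).
prefactor = 1.0 / (2.0 * 0.26)
print(round(prefactor, 2))  # 1.92
```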
## 3 Analysis of the residual in the small mirror ratio limit
The preceding analysis demonstrates that in the limit of a small mirror ratio
there remains a finite residual in the problem. Barely passing particles near
the passing-trapped boundary dominate the behaviour of the residual in this
limit. This is a result of the particles in a narrow $\lambda$-space layer of width $\epsilon^{2}$ having parallel velocities slow enough that their orbits are wide. The result is a partial shielding of the potential. Their orbit width is so large, though, that their shielding is not as efficient as it would be at smaller $\delta$, and thus the residual is larger than one would a priori expect.
There are two important actors that determine the final value of the residual
in this limit: (i) the width of the layer, and (ii) the shape of the orbit.
Both of these may be identified directly in the derivation of the residual
above. The residual will be larger the smaller the layer is, as the shielding
population decreases. The shorter the time that the particles spend near the
point of maximal excursion, the larger the residual will also be; orbit shapes
that are flat near that point are detrimental to the residual.
Figure 3: Example of residual as a function of mirror ratio. The plots present
(a) the time evolution of the average electrostatic potential for different mirror ratios simulated with the gyrokinetic code stella, (b) comparison of the residual from the gyrokinetic code stella and numerical evaluation of Eq. (12), and (c) relative contribution to the residual by the passing/trapped population, and by each $\lambda$. The simulation for (a) and (b) is based on the cyclone-base-case with $|\mathbf{B}|$ modified, leaving the curvature drift unchanged. The color code in (a) corresponds to the different mirror ratios in (b), from smaller (darker) to larger (brighter) values of $\Delta$. The right plot (b) presents the residual values from stella as scatter points (with error bars indicating the variation of the potential in the last 20% of the time trace), the triangle marker shows the simulation of the flat-$B$ scenario, the solid line the numerical evaluation of Eq. (12),
the dotted black line the analytical estimate of Xiao-Catto (Xiao & Catto,
2006), and the red dotted line the asymptotic expression in Eq. (38). The
central bottom plot (c) shows the relative contribution to the residual by
trapped/passing particles. The plots left and right represent the relative
contribution to the residual by different parts of the population, where the
vertical coordinate represents $1/\lambda$, with the black line representing
$B$. The calculations are done at $k_{\perp}\rho_{i}\approx 0.048$
($k_{y}\rho_{i}=0.05$ in stella).
The behaviour of the residual at small mirror ratio can be checked against
both careful numerical integration of Eq. (12) and linear electrostatic
gyrokinetic simulations with the stella code (Barnes et al., 2019). We present
such a comparison in Figure 3. For that comparison, a local field along a flux
tube is constructed from a reference cyclone-base case (a simple Miller
geometry (Miller et al., 1998)) whose $B$ has been modified with varying
mirror ratios $\Delta$, while keeping all other elements of the geometry
unchanged. The numerical evaluation of Eq. (12) is done by careful treatment
of bounce integrals using double-exponential integration methods (Takahasi &
Mori, 1974) to appropriately deal with bounce points and logarithmic
divergences in $\lambda$-space (details on the python code may be found in the
Zenodo repository associated with this paper). The linear gyrokinetic simulations are run with large velocity-space resolution in an attempt to resolve the boundary layer in velocity space as well as is reasonably possible. This means that they must also be run for long times, on the order of the transit time of the smallest resolved velocity, in order to reach the residual. We take the residual from these simulations to be the value of the potential at the latest time simulated. [Footnote: We run these simulations in stella with $N_{v_{\parallel}}=2000$, $N_{\mu}=100$, $\Delta t=0.0125$ and $N_{t}=64000$, considered high resolutions. The smallest mirror ratio cases can be challenging to simulate and converge fully even under these extremely resolved conditions. For the semi-quantitative considerations in this paper we consider them to be sufficient, though.] In addition to these numerical
niceties, the physical oscillations of the electrostatic potential also pose
an additional limitation, as these variations are not damped completely in the
time domain of consideration for the lowest mirror ratios. This can lead to an inaccurate ‘measured’ residual, but is once again deemed sufficient for the semi-quantitative comparison considered here (see error bars in Figure 3). Having these two numerical ways of assessing the residual provides us with additional means to diagnose the results. In
particular, and given the good agreement between the simulations with the
numerical evaluation of the residual in Eq. (12), we can assess the
contribution from different regions of velocity space to the residual using
the latter (see Figure 3c).
In the small mirror ratio limit, as predicted, there is a dominant
contribution from a narrow boundary layer (group II). The analytic estimate of
the residual in the small mirror ratio, Eq. (38), agrees to a good degree
(within $\sim 5-10\%$) with the simulation and integration (see red line in
Figure 3b). As the mirror ratio increases the importance within velocity space
shifts (see Figure 3c) and the bulk of trapped particles becomes dominant (the
standard Rosenbluth & Hinton (1998) picture). In that limit the residual can
be estimated by Rosenbluth & Hinton (1998) (RH),
$\left.\frac{\phi(\infty)}{\phi(0)}\right|_{\mathrm{RH}}=\frac{1}{1+1.6\,\epsilon^{2}/\left[(k_{\perp}\rho_{i})^{2}\sqrt{\Delta}\right]},$ (39)
or more precisely by Xiao & Catto (2006), as explicitly shown in Fig. 3b
(black dotted line). The standard RH residual, Eq. (39), exhibits a stronger
dependence on the drift and transit time compared to the small mirror ratio
limit, although the physical mechanism behind the residual remains broadly
speaking the same. Namely, making the drift $\omega_{d}$ or the connection
length smaller, the orbit width becomes smaller, so does the finite orbit
polarisation and shielding power of the plasma, and thus the resulting
residual grows.
The preeminence of the RH or small mirror residual will change depending on
the parameters of both the field and perturbation. A clear example of the
latter is the dependence on $k_{\perp}\rho_{i}$. In fact, for any finite
$\Delta$, there always exists a perpendicular length-scale long enough for
which the RH scenario is recovered (formally, a value of $k_{\psi}$ below
which the ordering $\epsilon^{2}\gg\Delta$ is violated), leading to a finite
residual at small $k_{\perp}\rho_{i}$. Of course, the field parameters also
play a key role. Most clearly, the variation of the mirror ratio $\Delta$
explicitly involves a regime transition between the $\Delta$-independent
small-mirror residual, Eq. (38), and the RH residual (see Figure 3b). This
takes place when $\Delta\sim\epsilon^{2}$, which is approximately
$\Delta_{t}\approx
0.1(k_{\perp}\rho_{i})^{2}\left(\frac{\omega_{d}/\omega_{t}}{k_{\perp}\rho_{i}}\right)^{2}.$
(40)
If the orbit width of the bulk is made larger, then the small-mirror
contribution becomes relevant sooner. However, we must remain within the limit
$\epsilon^{2}\ll 1$, which we considered in the construction of our residual
calculation. Staying within that limit, the transition mirror ratio must obey
$\Delta_{t}<10^{-1}$, which implies that the transition occurs at small mirror
ratios of at most a few per cent. Of course, the exact value of this transition will generally not take so simple a form. We may compute it more accurately
by defining numerically $\Delta_{t}$ as the mirror ratio at which the low
$k_{\perp}\rho_{i}$ limit of the XC (Xiao & Catto, 2006) residual matches the
low-mirror ratio residual.
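As an illustration of this numerical definition, one may locate the crossing of the two asymptotic residual forms by bisection (a sketch using the simplified RH expression of Eq. (39) in place of the full XC estimate, and assuming the convention $b=(k_{\perp}\rho_{i})^{2}/2$):

```python
import math

# Transition mirror ratio Δ_t: where the Δ-independent small-mirror
# residual, Eq. (37), equals the RH residual, Eq. (39). We assume
# b = (k_perp ρ_i)²/2 and use eps = ω_d/(π ω_t).
def delta_t(wd_wt, krho, lo=1e-12, hi=0.5, iters=200):
    b = 0.5 * krho**2
    eps = wd_wt / math.pi
    layer = 1.0 / (1.0 + 0.26 * wd_wt / b)  # Eq. (37)
    rh = lambda d: 1.0 / (1.0 + 1.6 * eps**2 / (krho**2 * math.sqrt(d)))  # Eq. (39)
    f = lambda d: rh(d) - layer
    for _ in range(iters):  # simple bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# ω_d/ω_t = π k_perp ρ_i (effective q of unity) and k_perp ρ_i = 0.1:
# the crossing sits just below one per cent, consistent with Eq. (40).
print(delta_t(math.pi * 0.1, 0.1))
```

The crossing reproduces the scaling $\Delta_{t}\sim(k_{\perp}\rho_{i})^{2}(\omega_{d}/\omega_{t}k_{\perp}\rho_{i})^{2}$ of Eq. (40).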
Before moving to an analysis of these effects on different equilibria, let us
turn to interpreting the time dependence of the residual observed in Figure
3a. There are clearly two oscillation time-scales in the problem set-up
considered: the faster damped geodesic-acoustic modes (GAMs) (Sugama &
Watanabe, 2006; Gao et al., 2006, 2008; Conway et al., 2021) and a slower
oscillation. The former appear rather invariant under $\Delta$ (as one would
expect from a passing ion dominated phenomenon), while the latter change
significantly. In fact, this slower time scale behaviour is reminiscent of the
slower oscillations attributed to the non-omnigeneous nature of stellarator
fields (Mishchenko et al., 2008; Mishchenko & Kleiber, 2012; Helander et al.,
2011; Monreal et al., 2017; Alonso et al., 2017). This provides us with an
additional way of interpreting the boundary layer contribution to the low-
mirror residual. Because of their long transit time compared to their radial
drift, these particles behave de facto as non-omnigeneous particles, at least
in a transient sense. The result is long-time-scale oscillations with a slow damping rate. Both the damping time and the oscillation period grow as $\Delta$ becomes smaller, which we attribute to the increasingly non-omnigeneous behaviour of the particles in this limit. A more in-depth
investigation of this behaviour is left for future work.
### 3.1 Geodesic acoustic mode (GAM) connection
From the analysis of the time trace of our simulations, we observe that the
residual and GAMs are just different dynamical phases of the same system. One
then expects to see them both arise consistently in the same asymptotic limit.
GAMs are damped, oscillatory modes resulting from a balance between streaming
and off-surface drift, basic reigning elements in the residual as well. Thus,
these oscillatory modes are, like the residual, often studied as part of the
assessment of the field response to zonal flows. The basic theoretical set-up
for studying GAMs involves a flat-$|\mathbf{B}|$ field, where dynamics are
dominated by passing ions, and the only inhomogeneity along field-lines is
introduced by an oscillatory $\omega_{d}$. Under the assumption of a small
$\omega_{d}/\omega_{t}$ (equivalent to the small $\epsilon$ we have considered
in this paper), the behaviour of GAMs may be reduced to a simple dispersion
relation (Sugama & Watanabe, 2005, 2006; Gao et al., 2006, 2008). We reproduce
some of the details of this derivation and the dispersion relation in Appendix
B.
The key observation is that the limit $\omega\rightarrow 0$ of these
dispersion relations, which determine the long time behaviour of the
electrostatic potential (Schiff, 2013, Theorem 2.36), yields no residual. But
we have shown just above that actually a finite residual remains in the limit
of vanishing mirror ratio. A natural question thus arises: where is this
residual hiding? It might be tempting to identify the slow GAM mode identified
by Gao et al. (2006) with the residual, due to its similar form. This purely
damped mode reads
$\frac{\phi(t\rightarrow\infty)}{\phi(0)}\approx\frac{1}{1+\frac{\epsilon^{2}}{4b}\left(1+\frac{\pi}{2(1+\tau)}\right)}e^{-\gamma
t},$ (41)
where,
$\frac{\gamma}{\omega_{t}}=\frac{\pi^{3/2}}{2}\left[\frac{2b}{\epsilon^{2}}+\left(\frac{1}{2}+\frac{\pi}{4(1+\tau)}\right)\right]^{-1}.$
(42)
The amplitude of the mode exhibits a quadratic finite orbit width dependence
much in the fashion of the RH residual. Although the damping of the mode can be slow (with a characteristic decay time $\sim b/(\epsilon^{2}\omega_{t})$, see Eq. (42)), and can thus transiently display an effective value of the residual, it does not formally correspond to a collisionless, undamped residual.
In addition, it has a quadratic scaling rather than the linear one derived
above. To resolve this apparent inconsistency we must recognise the importance
of barely passing particles. For this subset of the population the transit
time is so long that the ordering $\omega_{t}\gg\omega_{d}$ is not accurate,
and thus the derivation of the usual GAM dispersion relation needs reworking.
We present the details of how to do this in Appendix B. Doing so, one can
recover a finite-valued residual with the same scaling as derived above, albeit with a different numerical factor: 0.20 instead of the 0.26 in Eq. (37), a discrepancy inherited from the differing set-ups of the two derivations. This reconciliation of the residual and GAM calculations is a theoretical relief.
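For orientation, the slow mode of Eqs. (41) and (42) can be evaluated for some illustrative (arbitrarily chosen) parameters, making the trade-off between transient amplitude and damping rate explicit:

```python
import math

# Slow GAM mode, Eqs. (41)-(42), for sample (assumed) parameters.
def amplitude(eps, b, tau):
    """Mode amplitude, Eq. (41), at t = 0."""
    return 1.0 / (1.0 + eps**2 / (4.0 * b) * (1.0 + math.pi / (2.0 * (1.0 + tau))))

def gamma_over_wt(eps, b, tau):
    """Damping rate γ/ω_t, Eq. (42)."""
    return (math.pi**1.5 / 2.0) / (2.0 * b / eps**2 + 0.5 + math.pi / (4.0 * (1.0 + tau)))

eps, tau = 0.05, 1.0
for b in (1e-4, 1e-3, 1e-2):
    print(b, amplitude(eps, b, tau), gamma_over_wt(eps, b, tau))
# larger b: larger transient amplitude but slower damping
```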
## 4 Field survey
In the preceding analysis of the residual problem we learned that there are two regimes in which the behaviour of the residual is quite different: one where the layer dynamics dominate, occurring at small mirror ratios ($\Delta_{t}<10^{-1}$), and the more typical RH regime, occurring at moderate values of $\Delta$, in which the bulk of the trapped particle population dominates the response of the system. We now
explore the question of which regime prevails under the conditions that arise
in different classes of magnetic equilibria.
Let us start with the simplest family of magnetic field configurations: the
circular tokamak. That is, an axisymmetric magnetic field configuration, with
circular cross-sections and thus a unique magnetic well, which is the closest
scenario to our idealised model-field. In such a scenario, we may reduce the
relevant field properties to a few parameters, namely the safety factor $q$,
the mirror ratio $\Delta$ and the radial wavenumber, $k_{\perp}\rho_{i}$. In
the context of the residual, one may think of the safety factor $q$ as
determining the ratio of the radial drift (in a tokamak $\omega_{d}\sim 1/R$)
to the connection length ($\omega_{t}^{-1}\sim qR$), explicitly
$q=\omega_{d}/(\pi k_{\perp}\rho_{i}\omega_{t})$. With that, the relevant
expressions for the residual read, following Eqs. (37) and (39),
$\left.\frac{\phi(\infty)}{\phi(0)}\right|_{\mathrm{lay}}\approx\frac{1}{1+1.63q/(k_{\perp}\rho_{i})},\quad\left.\frac{\phi(\infty)}{\phi(0)}\right|_{\mathrm{RH}}\approx\frac{1}{1+1.6q^{2}/\sqrt{\Delta}}.$
(43)
The larger the $q$, the larger the connection length, the larger the orbit width $\delta$, and the lower the residual. In terms of these tokamak
parameters, we may also rewrite the condition for the regime transition in Eq.
(40): the layer contribution becomes relevant for
$\Delta<\Delta_{t}\sim(qk_{\perp}\rho_{i})^{2}$. For a typical value of $q\sim
1$, and a wavenumber $k_{\perp}\rho_{i}\sim 0.1$, this implies mirror ratios
below a percent. This is a rather small mirror ratio, which will only be
reached sufficiently close to the magnetic axis (where $B$ is nearly constant
due to axisymmetry). For shorter wavelengths or larger safety factors (which
also reduce the residual) $\Delta_{t}$ will be larger. Because this occurs at
the expense of larger orbit width, taking this limit to its extreme will
ultimately lead to $\epsilon\sim 1$, implying $\delta>1$ for all particles, corresponding to a completely different regime. [Footnote: Large-wavenumber behaviour was explored by Xiao & Catto (2006) and Monreal et al. (2016).] Physically, as the
orbit sizes become large, they become less effective at shielding the original
potential perturbation, and the residual grows. Note however that this
large-$k_{\perp}\rho_{i}$ behaviour is more sensitive to initial conditions
(Monreal et al., 2016) and electron dynamics should be brought in for a
consistent treatment.
To extend the discussion beyond the rather simplified case of circularly
shaped tokamaks, we need some form in which to estimate the input parameters
to our residual calculation. We will focus on so-called optimised stellarator
configurations: namely, quasisymmetric (Boozer, 1983a; Nührenberg & Zille,
1988; Rodríguez et al., 2020) and quasi-isodynamic (Cary & Shasharina, 1997;
Helander & Nührenberg, 2009; Nührenberg, 2010) ones. The former can be seen as
the natural generalisation of the axisymmetric case, where the field has a
direction of symmetry on $|\mathbf{B}|$ instead of the whole vector
$\mathbf{B}$. The direction of symmetry can be toroidal (quasi-axisymmetry) or
helical (quasi-helical). This symmetry forces the magnetic wells along the
field line to be all nearly identical (same $B$ and $\omega_{d}$ (Boozer,
1983b), but different $k_{\perp}\rho_{i}$). In quasi-isodynamic fields, the
contours of $|\mathbf{B}|$ are closed poloidally, and carefully shaped to
grant omnigeneity (Bernardin et al., 1986; Cary & Shasharina, 1997; Hall &
McNamara, 1975b; Helander, 2014). As a result, wells are differently shaped,
but all share the feature of being omnigeneous; that is, the orbits described
by $\delta$ are closed as in Figure 1. The description will in that case have
to involve an average over wells.
Our approach now will be to construct effective model parameters for all of
these configuration types, that may be applied to the above familiar
expressions for the tokamak case, e.g Eqn. 43. These parameters will be
derived using the inverse-coordinate near-axis description of equilibria
(Garren & Boozer, 1991b; Landreman & Sengupta, 2019; Rodríguez et al., 2023;
Plunk et al., 2019), as detailed in Appendix C, and summarised in Table 1. We
have included the case of a shaped tokamak for comparison. Let us now discuss
the interpretation of these results.
| | Tokamak | QS | QI |
|---|---|---|---|
| $q_{\mathrm{eff}}$ | $\displaystyle\frac{1}{\iota}\frac{\eta R_{\mathrm{ax}}}{\hat{\mathcal{G}}}$ | $\displaystyle\frac{1}{\iota-N}\frac{\eta R_{\mathrm{ax}}}{\hat{\mathcal{G}}}$ | $\displaystyle\frac{2}{\pi N_{\mathrm{nfp}}}\frac{\bar{d}R_{\mathrm{ax}}}{\hat{\mathcal{G}}}$ |
| $\Delta$ | $r\eta$ | $r\eta$ | $\Delta$ |
Table 1: Characteristic near-axis residual-related parameters in optimised
stellarators. The table presents the value of the residual-relevant parameters
$q_{\mathrm{eff}}$ and $\Delta$ for tokamaks and different optimised
stellarator types, obtained using the near-axis description of the fields (see
Appendix C). The parameters are: $R_{\mathrm{ax}}$ the effective major radius
(the length of the magnetic axis divided by $2\pi$), $\iota$ the rotational
transform, $N$ the symmetry of the QS field, $N_{\mathrm{nfp}}$ number of
field periods, $\eta$ and $\bar{d}$ leading poloidal variation of
$|\mathbf{B}|$ over flux surfaces (roughly proportional to the axis curvature)
and $\hat{\mathcal{G}}$ geometric factor defined in Eq. (44).
The first important distinction between fields is with regards to the
behaviour of the mirror ratio. In tokamaks, as well as quasisymmetric
stellarators, the mirror ratio has a strong radial dependence. In particular,
because $|\mathbf{B}|$ has a direction of symmetry with a toroidal component,
$\Delta$ must decrease towards the axis and do so at a rate related to the
curvature of the field (within the near-axis description it is proportional to the distance from the axis, with $\eta\sim\kappa$; see Appendix C). This implies
the appearance of a finite region near the magnetic axis where the low-mirror
residual becomes relevant. In practice, though, this region tends to be
narrow, and thus likely unimportant (see Figure 4).
Figure 4: Residual and closeness to the residual transition as a function of
radius. The plot shows the residual (top) and the ratio of the mirror ratio
$\Delta$ to the residual regime transition value $\Delta_{t}$ (bottom) for
DIII-D (equilibrium from Austin et al. (2019), shot 170680 at 2200ms)
(tokamak), precise QA (QA stellarator) and precise QH (QH stellarator)
configurations (Landreman & Paul, 2022). The residual is computed numerically
evaluating Eq. (12) using the global equilibria of the configurations to
estimate the simplified single-well parameters for the residual calculation.
The bottom plots are evaluated computing $\Delta_{t}$ as the mirror ratio
value at which the XC estimate of the residual equals the small mirror ratio
limit of the residual. It therefore is a measure of relevance of the low-
mirror residual regime. It is clear that the centre of the QA configuration is
where the low-mirror ratio is most relevant. The residual calculation was done
for $k_{\perp}\rho_{i}=0.1$ for these.
It is particularly narrow in tokamaks, where, unlike in quasisymmetric stellarators, the safety factor decreases towards the axis and can have significant global shear (Landreman & Paul, 2022; Landreman, 2022; Rodríguez et al., 2023; Giuliani, 2024). The consequence of this is also an inversion of the radial behaviour of the residual: it tends to be largest in the core in a tokamak, but smallest there in QS stellarators (see Figure 4). QI stellarators are significantly different from both tokamaks and QS stellarators. As a result of having poloidally closed $|\mathbf{B}|$ contours, the on-axis $|\mathbf{B}|$ is not constant, and thus the mirror ratio tends to a non-zero constant on the axis. This frees $\Delta$ from its strong radial dependence, preventing the low-mirror residual region from manifesting.
In addition to the differences in $\Delta$, the changes in the magnitude of the magnetic field gradient $\nabla B$ (which affects $\omega_{d}$), the flux surface shaping (which affects $k_{\perp}$) and the connection length (which affects $\omega_{t}$) also impact the residual. All of these physical elements may be captured in a parameter $q_{\mathrm{eff}}=\omega_{d}/(\pi k_{\perp}\rho_{i}\omega_{t})$, given in Table 1. We define this parameter to play the role that the safety factor takes in the circular-cross-section scenario of the residual. In particular, one should interpret $q_{\mathrm{eff}}$ as a generalised form of $q$ in the residual expression of Eq. (43) and elsewhere. As such, a larger $q_{\mathrm{eff}}$ implies a lower residual and a higher relevance of the low-mirror residual regime. Let us discuss what determines $q_{\mathrm{eff}}$ for each case in Table 1.
We start by analysing the role played by the perpendicular geometry (in particular $\langle|\nabla\psi|^{2}\rangle$). This is captured by (see Eqs. (84) and (97)),
$\hat{\mathcal{G}}^{2}=\frac{1}{2\pi}\int_{0}^{2\pi}\frac{\mathrm{d}\varphi}{\sin 2e},$ (44)
where we define the angle $e$ such that $\mathcal{E}=\tan e$ is the elongation
of the flux surfaces in the plane normal to the axis as a function of
$\varphi$ (Rodríguez, 2023) and we have considered the limit of small mirror
ratio ($\Delta\ll 1$). The angle $e\in(0,\pi/2)$ may be interpreted as the
angle subtended by a right-angle triangle with the major and minor axes as
catheti. Thus, a circular cross-section is represented by $e=\pi/4$, and the
corresponding $\hat{\mathcal{G}}=1$. Any elliptical shape will then have a
larger $\hat{\mathcal{G}}>1$ (as $\sin 2e<1$ for $e\neq\pi/4$ in the domain
considered). Increasing the elongation of flux surfaces increases the average
flux expansion, $\langle|\nabla\psi|^{2}\rangle$, leading to a decrease of
$q_{\mathrm{eff}}$, a larger residual and a decrease in the importance of the
low-mirror residual. This is consistent with Xiao et al. (2007). Physically,
increasing elongation brings flux surfaces closer together, and thus narrows
the orbit widths in real space. Any non-axisymmetric shape will necessarily
have $\hat{\mathcal{G}}>1$ (Landreman & Sengupta, 2019; Camacho Mata et al.,
2022; Rodríguez, 2023), but variations between optimised configurations will
be moderate given that limiting flux surface shaping is often an optimisation
criterion.
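Equation (44) is straightforward to evaluate numerically. The sketch below, with assumed toy elongation profiles, checks that a circular cross-section ($\mathcal{E}=1$, i.e. $e=\pi/4$) gives exactly $\hat{\mathcal{G}}=1$, and that any elongation raises $\hat{\mathcal{G}}$ above unity.

```python
import math

def G_hat(elongation):
    """Evaluate Eq. (44) for an elongation profile E(phi) over [0, 2*pi)."""
    n = 2000
    total = 0.0
    for j in range(n):
        phi = 2.0 * math.pi * j / n
        e = math.atan(elongation(phi))   # E = tan(e), with e in (0, pi/2)
        total += 1.0 / math.sin(2.0 * e)
    G_sq = total / n                     # (1/2pi) * integral over phi
    return math.sqrt(G_sq)

print(G_hat(lambda phi: 1.0))                   # circular cross-section: exactly 1
print(G_hat(lambda phi: 2.0))                   # constant elongation 2: sqrt(5)/2
print(G_hat(lambda phi: 2.0 + math.cos(phi)))   # assumed toy toroidally varying profile
```

For constant $\mathcal{E}=2$ one has $\sin 2e=4/5$, so $\hat{\mathcal{G}}=\sqrt{5}/2\approx 1.118$, illustrating how elongation lowers $q_{\mathrm{eff}}$ through the $1/\hat{\mathcal{G}}$ factor of Table 1.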
Let us now focus on the differences in the magnitude of the magnetic drifts.
The drift is controlled by the gradients of $|\mathbf{B}|$, which decrease the
residual the larger they become. The balance between magnetic gradients (and
thus magnetic pressure) and magnetic field line tension provides an important
observation: the more curved field lines are, the stronger the gradients. In
the near axis framework, this naturally leads to a picture in which the more
strongly shaped a magnetic axis is, the larger the gradients will be. This
behaviour is represented by parameters $\eta$ and $\bar{d}$ in Table 1 (see
Appendix C for a more precise description), which typically scale like
$\eta\sim\kappa$ (Rodríguez et al., 2023), where $\kappa$ is the axis
curvature. For similarly shaped cross-sections, $\eta$ (or $\bar{d}$) will be larger for QH and QI stellarators compared to QA and tokamaks (Rodríguez et al., 2022; Camacho Mata et al., 2022), and increasingly so with the number of field periods. The drift in the QI case deserves special consideration, because the
pointwise radial drift varies from field line to field line, vanishing on some
(Helander & Nührenberg, 2009; Landreman & Catto, 2012). Thus, on ‘average’,
the drift in these configurations is smaller (see Appendix C for the details),
which can enhance the residual. In brief, QH configurations are expected to
have the largest field gradients, followed by QIs in which the field-line
averaging reduces the effective gradients, and finally QAs and tokamaks.
The last element of consideration in $q_{\mathrm{eff}}$ is the connection
length, i.e. the length along the field line of a magnetic well. The
difference in the topology of the $|\mathbf{B}|$ contours (and their alignment
to magnetic field lines) leads to the following comparative scaling,
$R_{\mathrm{ax}}/\iota:R_{\mathrm{ax}}/(\iota-N):R_{\mathrm{ax}}/N_{\mathrm{nfp}}$.
Of course, this naturally leads to ordering the connection lengths to be
largest for QA and tokamaks, smaller for QHs and the smallest for QIs. This
follows from the observation that the number of field periods serves as an
upper bound of $\iota$ for QHs in practice.
The three elements discussed above compete with each other, but the pre-eminence of the connection length in $q_{\mathrm{eff}}$ in practice leads to the relative ordering,
$q_{\mathrm{eff,tok}}\sim q_{\mathrm{eff,QA}}>q_{\mathrm{eff,QH}}\gtrsim
q_{\mathrm{eff,QI}}.$ (45)
This should be regarded as a rough guide, not as a rigid rule; a similar
ordering for the overall size of the residual is argued by Plunk & Helander
(2024).
Figure 5: Parameter $q_{\mathrm{eff}}$ for QS and QI configurations. The left plots represent the normalised (by total area) density of QH and QA configurations by their value of $q_{\mathrm{eff}}$ in the QS near-axis database of Landreman (2022), which serves as a representative population of optimised QS configurations. The density for each number of field periods (colour) is stacked vertically on top of the others, and represents the number of configurations in the database with those parameters. The rightmost plot shows the same analysis for a QI near-axis database (Plunk, 2024). This shows the rough relative ordering of $q_{\mathrm{eff}}$ between different omnigeneous fields, as indicated in the text. Most QH configurations have $N=4$, for which $q_{\mathrm{eff}}$ is the lowest of all $N$, while larger or smaller $N$ lead roughly to larger $q_{\mathrm{eff}}$. The $N=2$ population consists mainly of QA configurations.
To strengthen and illustrate this behaviour of $q_{\mathrm{eff}}$ across
different configurations, we use the large database of near-axis QS
configurations of Landreman (2022) and near-axis QI configurations of Plunk
(2024) to evaluate this parameter across configurations. This confirms that
one expects the residual to be smallest in tokamaks and QAs, with the small-mirror regime barely becoming relevant near their core. We leave a more complete analysis of these databases, and the lessons to be learned from them, for future work. We also note that more complex field shaping beyond the simple model used in this paper could change some of the exact quantitative behaviour observed, especially concerning the location of the residual transition, but we also leave this to future investigations.
## 5 Conclusions
In this paper, we have carefully analysed the behaviour of the residual in the limit of small mirror ratio. The contribution of barely passing particles provides a finite residual in this limit, changing its usual scaling and exchanging the roles of trapped and passing particles. We identify the role of such barely passing particles and provide analytical estimates, which we compare to gyrokinetic simulations. This limiting behaviour, however, is shown to occur at very small mirror ratios $\Delta<(\omega_{d}/\omega_{t})^{2}$, where $\omega_{d}$ is the radial drift frequency and $\omega_{t}$ the transit frequency of a thermal particle travelling a connection length. A near-axis analysis of this effect across tokamaks, quasisymmetric and quasi-isodynamic stellarators suggests that the centre of quasi-axisymmetric stellarators, if only barely, is the region in which some of these effects could manifest most clearly. This analysis also shows (including a cross-check through a large database of configurations) that the residual itself tends to be largest in quasi-isodynamic stellarators, followed by quasi-helical and lastly quasi-axisymmetric (and tokamak) ones.
## Data availability
The data that support the findings of this study are openly available at the
Zenodo repository with DOI/URL 10.5281/zenodo.12805697.
## Acknowledgements
We gratefully acknowledge fruitful discussion with R. Nies and W. Sengupta.
## Funding
E. R. was supported by a grant of the Alexander-von-Humboldt-Stiftung, Bonn,
Germany, through a postdoctoral research fellowship.
## Declaration of interest
The authors report no conflict of interest.
## Appendix A Additional details on the orbit widths
In this appendix we supplement the information about the finite orbit widths provided in Section 2.2, necessary to complete the residual calculation in Section 2.3.
### A.1 Passing particles
Let us consider the shape of the orbits described by the barely passing
particles living within the boundary layer defined in Section 2.2.1 (see
Figure 1). To evaluate the residual integrals in Eq. (12) we require
information about the turning points of $\delta$: in particular, besides the location and value of the extrema of $\delta$, we need their second derivative (Bender & Orszag, 2013, Sec. 6.5). The second derivative at those points is,
$\delta^{\prime\prime}_{\mathrm{pass}}=\sigma\frac{v}{v_{T}}\frac{\epsilon\pi^{2}}{2}\times\begin{cases}\dfrac{1}{\sqrt{\hat{\lambda}}},&(\bar{\ell}=\pm 1)\\ -\dfrac{1}{\sqrt{2\Delta+\hat{\lambda}}},&(\bar{\ell}=0)\end{cases}$ (46)
where we have used the definition of $\hat{\lambda}$ and $\Delta\ll 1$.
To complete the orbit description, we also need the transit time of passing
particles. In the simplified single well model, this is defined to be the time
taken by a particle to move from $\bar{\ell}=-1$ to $1$. The time can be
expressed (Helander & Sigmar, 2005, Eq. (7.27)) in terms of the elliptic
function $K$ (Olver et al., 2020, Sec. 19)(Abramowitz & Stegun, 1968, Eq.
(16.1.1)),
$\tau_{t}\omega_{t}=\frac{4}{\pi}\frac{v_{T}}{v}\frac{K(\kappa)}{\sqrt{1-\lambda\bar{B}(1-\Delta)}},$
(47)
where $\kappa=2\lambda\Delta/[1/\bar{B}-\lambda(1-\Delta)]$.
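The transit time of Eq. (47) is easy to evaluate with the complete elliptic integral computed via the arithmetic-geometric mean. In this sketch we take $\bar{B}=1$, $v=v_{T}$ and an assumed mirror ratio, and read $K$ in the parameter convention of Abramowitz & Stegun (16.1.1) (an assumption about the convention of $\kappa$); in that convention $\kappa\rightarrow 1$ at the trapped-passing boundary, where the transit time diverges logarithmically.

```python
import math

def ellipK(m):
    """Complete elliptic integral K(m), parameter convention, via the AGM."""
    a, g = 1.0, math.sqrt(1.0 - m)
    for _ in range(60):
        a, g = 0.5 * (a + g), math.sqrt(a * g)
    return math.pi / (2.0 * a)

def transit_time(lam, Delta, Bbar=1.0):
    """tau_t * omega_t from Eq. (47), for a passing particle with v = v_T."""
    kappa = 2.0 * lam * Delta / (1.0 / Bbar - lam * (1.0 - Delta))
    return (4.0 / math.pi) * ellipK(kappa) / math.sqrt(1.0 - lam * Bbar * (1.0 - Delta))

Delta = 0.1                          # assumed mirror ratio
lam_boundary = 1.0 / (1.0 + Delta)   # trapped-passing boundary, lambda * B_max = 1
for lam in (0.3, 0.6, 0.85, 0.9):
    print(lam, transit_time(lam, Delta))  # grows as lam approaches lam_boundary
```

At $\lambda=1/\bar{B}(1+\Delta)$ the argument $\kappa$ equals unity exactly, consistent with the boundary-layer role of barely passing particles in the main text.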
### A.2 Trapped particles
The orbits described by trapped particles are ostensibly different. The
function $\delta(\bar{\ell})$ has a single turning point at the centre of the
orbit, at which point the second derivative is
$\delta_{\mathrm{trap}}^{\prime\prime}(0)\approx\sigma\frac{v}{v_{T}}\frac{\epsilon\pi^{2}}{\sqrt{2\bar{\kappa}\Delta}}.$
(48)
Unlike those of passing particles, the orbits are sharp at the bounce points. This is a result of the particles spending longer at these points, where the radial drift is non-zero. This difference in how particles spend their time on different parts of their orbit also affects the expression for the orbit time, here called the bounce time (Connor et al., 1983; Helander & Sigmar, 2005, Eq. (7.28)),
$\tau_{b}\omega_{t}=\frac{2}{\pi}\frac{v_{T}}{v}\sqrt{\frac{2}{\lambda\bar{B}\Delta}}K(\bar{\kappa}).$
(49)
## Appendix B Residual in a GAM scenario
In this Appendix we present how the description of geodesic acoustic modes
(GAMs) can be made to align with the finite residual result derived in the
main text. To that end, let us start by re-writing the linearised gyrokinetic
equation in Eq. (1) and dropping the initial condition,
$iv_{\parallel}\partial_{\ell}\hat{g}+(\omega-\tilde{\omega}_{d})\hat{g}-J_{0}F_{0}\omega\frac{q\hat{\phi}}{T}=0.$
(50)
As in the residual calculation, we have written the equation for
$k_{\alpha}=0$, which leads to vanishing of the diamagnetic drive.
Because we are here interested in the GAM dynamics, it is conventional to
specialise to an artificial flat-$B$ field, one in which the sole field
property that varies along the field-line is the curvature drift (i.e.
$k_{\perp}\rho_{i}$ is also constant). Modelling
$\omega_{d}(\ell)=\omega_{d}\cos(\pi\ell/L_{d})$, we may Fourier resolve Eq.
(50) writing $\hat{g}=\sum_{n=-\infty}^{\infty}\hat{g}_{n}e^{in\pi\ell/L_{d}}$
and $\hat{\phi}=\sum_{n=-\infty}^{\infty}\hat{\phi}_{n}e^{in\pi\ell/L_{d}}$.
Taking into account the coupling through $\omega_{d}$, and
$\hat{g}\cos\left(\frac{\pi\ell}{L_{d}}\right)=\frac{1}{2}\sum_{n=-\infty}^{\infty}(\hat{g}_{n+1}+\hat{g}_{n-1})e^{in\pi\ell/L_{d}},$
(51)
we may then write Eq. (50) as,
$\left(-nx_{\parallel}+\frac{\omega}{\omega_{t}}\right)\hat{g}_{n}-\frac{\tilde{\omega}_{d}}{2\omega_{t}}\left(\hat{g}_{n-1}+\hat{g}_{n+1}\right)=F_{0}J_{0}\frac{\omega}{\omega_{t}}\frac{q\hat{\phi}_{n}}{T},$
(52)
where $\omega_{t}=\pi v_{T}/L_{d}$ is the transit frequency over the
characteristic scale of the drift variation and
$x_{\parallel}=v_{\parallel}/v_{T}$.
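The sideband structure in Eq. (51) is just the Fourier shift produced by multiplying with a cosine, which is easy to verify numerically. The sketch below builds a test function from a handful of assumed toy Fourier coefficients on the period $2L_{d}$ and checks that multiplication by $\cos(\pi\ell/L_{d})$ maps the coefficients as $\hat{g}_{n}\mapsto(\hat{g}_{n+1}+\hat{g}_{n-1})/2$.

```python
import cmath
import math

Ld = 1.0
Nn = 64                                    # grid points over one period, 2 * Ld
ell = [2.0 * Ld * j / Nn for j in range(Nn)]

# Assumed toy coefficients g_n for n = -2..2 (all others zero).
g = {-2: 0.3, -1: 1.0 - 0.5j, 0: 2.0, 1: 0.7j, 2: -0.2}

def synth(coeffs, x):
    """Synthesise the function from its Fourier coefficients."""
    return sum(c * cmath.exp(1j * n * math.pi * x / Ld) for n, c in coeffs.items())

def coeff(f, n):
    """Fourier coefficient of e^{i n pi ell / Ld} by discrete projection."""
    return sum(f(x) * cmath.exp(-1j * n * math.pi * x / Ld) for x in ell) / Nn

prod = lambda x: synth(g, x) * math.cos(math.pi * x / Ld)
for n in range(-3, 4):
    lhs = coeff(prod, n)
    rhs = 0.5 * (g.get(n + 1, 0) + g.get(n - 1, 0))
    print(n, abs(lhs - rhs))               # all differences at machine precision
```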
The system has a sideband coupling through the drift, whose overlap is
controlled by $\omega_{d}/\omega_{t}$. Thus, ordering
$\epsilon=\omega_{d}/\omega_{t}\ll 1$ is particularly convenient to regularise
the problem and be able to truncate it. In fact, if we drive the system
uniformly, meaning we assume $\hat{\phi}_{0},~{}\hat{g}_{0}\sim O(1)$, we
expect to find small sidebands. That way, we may focus on the following
reduced system of equations,
$\left(x_{\parallel}+\frac{\omega}{\omega_{t}}\right)\hat{g}_{-1}-\frac{\tilde{\omega}_{d}}{2\omega_{t}}\hat{g}_{0}\approx F_{0}J_{0}\frac{\omega}{\omega_{t}}\frac{q\hat{\phi}_{-1}}{T},$ (53a)
$\frac{\omega}{\omega_{t}}\hat{g}_{0}-\frac{\tilde{\omega}_{d}}{2\omega_{t}}(\hat{g}_{-1}+\hat{g}_{1})\approx F_{0}J_{0}\frac{\omega}{\omega_{t}}\frac{q\hat{\phi}_{0}}{T},$ (53b)
$-\left(x_{\parallel}-\frac{\omega}{\omega_{t}}\right)\hat{g}_{1}-\frac{\tilde{\omega}_{d}}{2\omega_{t}}\hat{g}_{0}\approx F_{0}J_{0}\frac{\omega}{\omega_{t}}\frac{q\hat{\phi}_{1}}{T}.$ (53c)
In addition to the gyrokinetic equation written in this form, we must complete
the eigenvalue problem with the quasineutrality condition. The condition, now
explicitly involving electrons ($e$) and ions ($i$), reads in this basis,
$\frac{T_{i}}{q_{i}}\sum_{s=e,i}\int
J_{0s}\hat{g}_{s,k}\mathrm{d}^{3}\mathbf{v}=n(1+\tau)\hat{\phi}_{k},$ (54)
where the sum is over both ions and electrons. To construct the final form of the dispersion relation we shall eventually use $b_{e}/b_{i}\sim m_{e}/m_{i}\ll 1$,
$\zeta_{e}/\zeta_{i}\sim\sqrt{m_{e}/m_{i}}\ll 1$ and
$\epsilon_{e}/\epsilon_{i}\sim\sqrt{m_{i}/m_{e}}$.
### B.1 GAM dispersion
The common form of the dispersion relation for GAMs is obtained by combining the equations in Eqs. (53) to write $\hat{g}_{0}$ explicitly as a function of $\hat{\phi}_{0}$ to leading order in $O(\epsilon^{2})$ and performing the appropriate velocity-space integrals. The result is (Gao et al., 2006, 2008; Sugama & Watanabe, 2006),
$\mathcal{D}=1-\Gamma_{0}(b)+\frac{\epsilon^{2}}{2}\left[\mathcal{D}^{(2)}-\frac{(\mathcal{D}^{(1)})^{2}}{1+\tau+\mathcal{D}^{(0)}}\right],$
(55)
where,
$\mathcal{D}^{(2)}=\frac{1}{\zeta}\left[\Gamma_{0}(b)\frac{\zeta}{2}\left(1+2\zeta^{2}(1+\zeta Z(\zeta))\right)+F_{2}(b)\zeta(1+\zeta Z(\zeta))+\frac{1}{4}F_{4}(b)Z(\zeta)\right],$ (56a)
$\mathcal{D}^{(1)}=\Gamma_{0}(b)\zeta(1+\zeta Z(\zeta))+\frac{1}{2}F_{2}(b)Z(\zeta),$ (56b)
$\mathcal{D}^{(0)}=\Gamma_{0}(b)\zeta Z(\zeta),$ (56c)
$\int F_{0}J_{0}^{2}\,\mathrm{d}^{3}\mathbf{v}=\Gamma_{0}(b),$ (56d)
and $\zeta=\omega/\omega_{t}$. The dispersion relation is consistent with
multiple modes, which have been explored in Gao et al. (2008). Note that in those works (Gao et al., 2006, 2008; Sugama & Watanabe, 2006), the problem is solved not with the Fourier resolution used here, but with the integrating-factor approach of Connor et al. (1980).
The dispersion relation in Eq. (55) can be assessed near $\zeta\rightarrow 0$, which governs the long-time response of the plasma (Schiff, 2013, Theorem 2.36). Expanding the plasma dispersion function (Fried & Conte, 2015), and taking for simplicity the small finite-Larmor-radius limit, one may show that
$\mathcal{D}\approx\frac{b}{\omega}\left[1+\frac{\epsilon^{2}}{4b}\left(1+\frac{\pi}{2(1+\tau)}\right)\right]\left(\omega-\omega_{0}\right),$
(57)
where
$\frac{\omega_{0}}{\omega_{t}}=-i\frac{\sqrt{\pi}}{2}\left[\frac{2b}{\epsilon^{2}}+\left(\frac{1}{2}+\frac{\pi}{4(1+\tau)}\right)\right]^{-1}.$
(58)
The system shows a purely damped mode, but no net residual.
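The damping can be read off numerically from Eq. (58). With assumed values $\epsilon=0.1$, $b=0.05$ and $\tau=1$ (illustrative only), the frequency is purely negative imaginary:

```python
import math

def omega0_over_omegat(eps, b, tau):
    """Damped-mode frequency of Eq. (58), normalised to omega_t."""
    bracket = 2.0 * b / eps**2 + (0.5 + math.pi / (4.0 * (1.0 + tau)))
    return -1j * (math.sqrt(math.pi) / 2.0) / bracket

w = omega0_over_omegat(eps=0.1, b=0.05, tau=1.0)
print(w)  # vanishing real part, negative imaginary part: pure damping
```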
### B.2 Revival of the residual
This no-residual conclusion is not consistent with the calculation in this paper. So, where is the residual hiding? To see how the GAM approach could have missed the residual contribution, let us go back to the truncated system of equations where the $n=0,~{}\pm 1$ modes are retained, Eqs. (53), and recombine them into
$\frac{T/q}{F_{0}J_{0}}g_{\pm}=\frac{1}{2}\frac{(\tilde{\omega}_{d}/\omega_{t})^{2}\mp 4\zeta(x_{\parallel}\pm\zeta)}{(\tilde{\omega}_{d}/\omega_{t})^{2}+2(x_{\parallel}^{2}-\zeta^{2})}\phi_{\pm}\mp\frac{\tilde{\omega}_{d}}{\omega_{t}}\frac{x_{\parallel}\pm\zeta}{(\tilde{\omega}_{d}/\omega_{t})^{2}+2(x_{\parallel}^{2}-\zeta^{2})}\phi_{0}-\frac{1}{2}\left(\frac{\tilde{\omega}_{d}}{\omega_{t}}\right)^{2}\frac{1}{(\tilde{\omega}_{d}/\omega_{t})^{2}+2(x_{\parallel}^{2}-\zeta^{2})}\phi_{\mp},$ (59a)
$\frac{T/q}{F_{0}J_{0}}g_{0}=\frac{2(x_{\parallel}^{2}-\zeta^{2})}{(\tilde{\omega}_{d}/\omega_{t})^{2}+2(x_{\parallel}^{2}-\zeta^{2})}\phi_{0}+\frac{\tilde{\omega}_{d}}{\omega_{t}}\frac{x_{\parallel}-\zeta}{(\tilde{\omega}_{d}/\omega_{t})^{2}+2(x_{\parallel}^{2}-\zeta^{2})}\phi_{-}-\frac{\tilde{\omega}_{d}}{\omega_{t}}\frac{x_{\parallel}+\zeta}{(\tilde{\omega}_{d}/\omega_{t})^{2}+2(x_{\parallel}^{2}-\zeta^{2})}\phi_{+},$ (59b)
where $\pm$ denote the $n=\pm 1$ sidebands. We did not use this full form of the equations when deriving the dispersion relation for the GAMs, but instead their limit when $\epsilon=\omega_{d}/\omega_{t}\ll 1$. Formally, this ordering was used to expand the kinetic resonant denominators
$\mathcal{R}=\frac{1}{\tilde{\omega}_{d}^{2}/\omega_{t}^{2}+2(x_{\parallel}^{2}-\zeta^{2})},$ (60)
that appear ubiquitously in Eqs. (59). For this expansion in the denominator
to be sound we must have, of course,
$x_{\parallel}^{2}-\zeta^{2}\gg\tilde{\omega}_{d}^{2}/\omega_{t}^{2}$, where
we shall not forget the velocity space dependence of
$\tilde{\omega}_{d}=\omega_{d}(x_{\parallel}^{2}+x_{\perp}^{2}/2)$. The GAM
dispersion relation thus fails to describe any physics where
$x_{\parallel}^{2}-\zeta^{2}\ll\epsilon^{2}x_{\perp}^{4}/4$. This is
especially problematic at long time scales (i.e. within a layer in
$\omega$-space where $\omega<\omega_{d}$) and for the part of the population
living within a narrow layer of order $x_{\parallel}\sim\epsilon$ in velocity
space near $x_{\parallel}=0$. I.e. the GAM description overlooks the
contribution from barely passing particles, whose transit time is
significantly longer than that of the bulk.
The question is then, how can one capture the behaviour from within this layer
properly in this GAM formalism? Can one recover a residual result like that in
Eq. (37)? To do so we must not expand in small $\tilde{\omega}_{d}$, but
instead do so in $\zeta\rightarrow 0^{+}$ (indicating approach from the
positive $\Im\\{\omega\\}$ direction). With this in mind, let us write the
quasineutrality condition applied to Eq. (59b) as
$(1+\tau-\mathcal{D}^{(2)})\hat{\phi}(0)\approx-\frac{\epsilon}{2}\left[\mathcal{D}^{(1)}_{-}\hat{\phi}(-1)-\mathcal{D}^{(1)}_{+}\hat{\phi}(1)\right],$
(61)
where
$\mathcal{D}^{(2)}=\frac{2}{\bar{n}}\int F_{0}J_{0}^{2}(x_{\parallel}^{2}-\zeta^{2})\mathcal{R}\,\mathrm{d}^{3}\mathbf{v},$ (62)
$\mathcal{D}^{(1)}_{\pm}=-\frac{2}{\bar{n}}\int F_{0}J_{0}^{2}\left(x_{\parallel}^{2}+\frac{x_{\perp}^{2}}{2}\right)(x_{\parallel}\pm\zeta)\mathcal{R}\,\mathrm{d}^{3}\mathbf{v}.$ (63)
To evaluate these integrals, we rewrite $\mathcal{R}$ by separating it into a sum over simple poles. To do so, we define (reusing the symbol $\Delta$, not to be confused with the mirror ratio),
$\Delta=\sqrt{\frac{1}{\epsilon^{2}}+x_{\perp}^{2}+2\zeta^{2}},\qquad\zeta_{\pm}=\frac{\Delta}{\epsilon}\pm\left(\frac{1}{\epsilon^{2}}+\frac{x_{\perp}^{2}}{2}\right),$ (64)
so that
$\mathcal{R}=-\frac{1}{2\epsilon\Delta}\left[\frac{1}{x_{\parallel}^{2}+\zeta_{+}}-\frac{1}{x_{\parallel}^{2}-\zeta_{-}}\right].$
(65)
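That Eqs. (64)-(65) indeed reproduce the resonant denominator of Eq. (60) can be checked numerically. The sketch below compares $\mathcal{R}$ evaluated directly, using $\tilde{\omega}_{d}/\omega_{t}=\epsilon(x_{\parallel}^{2}+x_{\perp}^{2}/2)$, against the simple-pole decomposition, for an arbitrary (assumed) complex $\zeta$:

```python
import cmath

def R_direct(xpar, xperp, zeta, eps):
    """Resonant denominator of Eq. (60), evaluated directly."""
    wd = eps * (xpar**2 + 0.5 * xperp**2)      # tilde(omega_d) / omega_t
    return 1.0 / (wd**2 + 2.0 * (xpar**2 - zeta**2))

def R_poles(xpar, xperp, zeta, eps):
    """Simple-pole decomposition of Eqs. (64)-(65)."""
    Delta = cmath.sqrt(1.0 / eps**2 + xperp**2 + 2.0 * zeta**2)   # Eq. (64)
    zp = Delta / eps + (1.0 / eps**2 + 0.5 * xperp**2)
    zm = Delta / eps - (1.0 / eps**2 + 0.5 * xperp**2)
    return -1.0 / (2.0 * eps * Delta) * (1.0 / (xpar**2 + zp) - 1.0 / (xpar**2 - zm))

r1 = R_direct(0.7, 1.1, 0.2 + 0.1j, 0.3)
r2 = R_poles(0.7, 1.1, 0.2 + 0.1j, 0.3)
print(abs(r1 - r2))   # agreement to machine precision
```

The identity is in fact independent of the branch chosen for $\Delta$, since flipping its sign merely exchanges the roles of $\zeta_{+}$ and $-\zeta_{-}$.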
Choosing the negative branch of the square root for a correct continuation
from $\Im\\{\zeta\\}>0$ to the rest of the complex plane,
$\frac{1}{x_{\parallel}^{2}\pm\zeta_{\pm}}=\frac{1}{2\sqrt{\mp\zeta_{\pm}}}\left(\frac{1}{x_{\parallel}-\sqrt{\mp\zeta_{\pm}}}-\frac{1}{x_{\parallel}+\sqrt{\mp\zeta_{\pm}}}\right),$
(66)
in such a way that the integrals Eqs. (62)-(63) explicitly involve integrals
over $x_{\parallel}$. This form of $\mathcal{R}$ allows us to express
integrals in terms of plasma dispersion functions (Fried & Conte, 2015) upon
appropriate redefinition of the sign of $x_{\parallel}$ (which will annihilate
the contribution from odd $x_{\parallel}$ terms). (We shall here not be extremely careful with the definition of branch cuts and the precise deformation of the Laplace contour in $\zeta$-space. This would be needed for a fuller description of the time response of the system, one that captures the contribution from branch cuts for example, but here we content ourselves with the $\zeta\rightarrow 0$ response.) As a result, we may write the integrals as
a combination of
$I_{nm}=\frac{1}{\bar{n}}\int x_{\parallel}^{2n}x_{\perp}^{2m}F_{0}J_{0}^{2}\mathcal{R}\,\mathrm{d}^{3}\mathbf{v}=-\frac{1}{\epsilon}\int_{0}^{\infty}x_{\perp}^{2m+1}J_{0}^{2}e^{-x_{\perp}^{2}}\frac{1}{\Delta}\left[\frac{Z_{n}(\sqrt{-\zeta_{+}})}{\sqrt{-\zeta_{+}}}-\frac{Z_{n}(\sqrt{\zeta_{-}})}{\sqrt{\zeta_{-}}}\right]\mathrm{d}x_{\perp},$ (67)
where we define,
$Z_{n}(x)=\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}\frac{x_{\parallel}^{2n}e^{-x_{\parallel}^{2}}}{x_{\parallel}-x}\mathrm{d}x_{\parallel},$
(68)
for $\Im\\{x\\}>0$, and analytically continued to the rest of the complex
plane. In particular, we may write
$\mathcal{D}_{\pm}^{(1)}=\mp 2\zeta\left(I_{10}+\frac{I_{01}}{2}\right),$ (69a)
$\mathcal{D}^{(2)}=2\left(I_{10}-\zeta^{2}I_{00}\right).$ (69b)
These integrals remain quite involved, and simplifying them is necessary to proceed analytically. A natural simplifying attempt is to use asymptotic forms of the plasma dispersion function (Fried & Conte, 2015). The argument $\zeta_{+}\approx 2/\epsilon^{2}+x_{\perp}^{2}-\epsilon^{2}x_{\perp}^{4}/8$ has a large and positive real part owing to the largeness of $1/\epsilon^{2}$, so we may use the asymptotic form (Fried & Conte, 2015, Sec. IID) $Z(x)\approx-\sum_{n=0}^{\infty}x^{-(2n+1)}(n-1/2)!/\sqrt{\pi}$ (the exponential term is exponentially small). In the case of $\zeta_{-}$, the argument $\zeta_{-}\approx\zeta^{2}-x_{\perp}^{4}\epsilon^{2}/8$ is small, and we may expand in it instead, using (Fried & Conte, 2015, Sec. IIC) $Z(x)=i\sqrt{\pi}\exp(-x^{2})-x\sum_{n=0}^{\infty}(-x^{2})^{n}\sqrt{\pi}/(n+1/2)!$. This introduces a leading-order non-zero imaginary contribution.
With the above tools in place, we may proceed and compute the required
integrals to the necessary order.
#### B.2.1 Integrals for $\mathcal{D}^{(2)}$
Let us first compute the leading-order $I_{00}$. Without going into the details of the specific branch cuts and the complex quadrant of $\zeta$, one can show (Gradshteyn & Ryzhik, 2014, Eq. 3.387.7)
$I_{00}\approx~{}\int_{0}^{\infty}x_{\perp}J_{0}^{2}e^{-x_{\perp}^{2}}\frac{\sqrt{\pi}}{\sqrt{\frac{x_{\perp}^{4}\epsilon^{2}}{8}-\zeta^{2}}}\mathrm{d}x_{\perp}[1+O(\zeta,\epsilon^{2})]\propto\frac{1}{\epsilon}\ln\left(\frac{\epsilon}{\zeta\sqrt{2}}\right),$
(70)
where for this estimate we have assumed $b\ll 1$ to approximate $J_{0}\sim 1$
and we have kept the leading order term in $\zeta$ (in the limit of small
$\zeta$). So, in the limit of $\zeta\rightarrow 0$, this integral diverges
logarithmically, but its contribution to $\mathcal{D}^{(2)}$ vanishes, Eq.
(69b).
Computing then $I_{10}$, and using $Z_{1}(x)=x[1+xZ(x)]$,
$I_{10}\approx-\int_{0}^{\infty}x_{\perp}J_{0}^{2}e^{-x_{\perp}^{2}}\left[-1+\frac{\epsilon}{2}\sqrt{\frac{\pi}{2}}x_{\perp}^{2}+\frac{\epsilon^{2}}{4}(1+2x_{\perp}^{2}-x_{\perp}^{4})+O(\epsilon^{3})\right]\mathrm{d}x_{\perp}$ (71)
$\approx\frac{1}{2}\left[\Gamma_{0}(b)-\frac{\epsilon}{2}\sqrt{\frac{\pi}{2}}F_{2}(b)-\frac{\epsilon^{2}}{4}(\Gamma_{0}(b)+2F_{2}(b)-F_{4}(b))\right]$ (72)
$\approx-\frac{1}{2}\left(b-1+\frac{\epsilon}{2}\sqrt{\frac{\pi}{2}}+\frac{\epsilon^{2}}{4}\right)=\frac{1}{2}\mathcal{D}^{(2)},$ (73)
where we used the relevant Weber integrals (Gradshteyn & Ryzhik, 2014, Eq.
6.615) and the notation
$F_{n}=2\int_{0}^{\infty}x^{n+1}e^{-x^{2}}J_{0}^{2}(x\sqrt{2b})\mathrm{d}x$,
and in the last line considered the small $b$ limit. Importantly, there is a
term linear in $\epsilon$ which comes from the pole contribution to the plasma
dispersion function.
#### B.2.2 Integrals for $\mathcal{D}_{\pm}^{(1)}$
With $\mathcal{D}^{(2)}$ constructed, we may turn to $\mathcal{D}^{(1)}$, Eq.
(69a). The integral has an overall factor of $\zeta$, and thus to leading
order, it will vanish unless there is some $\zeta$-divergence. The term
$I_{10}$, which we have just computed, does not have such divergence, and thus
its contribution will vanish. So we only need to calculate $I_{01}$, which one
may show to be $I_{01}\approx\sqrt{2}/\epsilon$ to leading order. Thus,
$\mathcal{D}^{(1)}_{\pm}\sim O(\zeta)$, and thus it will vanish in the small
$\zeta$ limit. One may safely drop the coupling terms in Eq. (61) (the sidebands $\phi_{\pm}$ do not have any divergent behaviour either).
#### B.2.3 Dispersion relation
Thus, the remaining dispersion function is,
$\mathcal{D}=1-\left[\Gamma_{0}(b)-\frac{\epsilon}{2}\sqrt{\frac{\pi}{2}}F_{2}(b)-\frac{\epsilon^{2}}{4}(\Gamma_{0}(b)+2F_{2}(b)-F_{4}(b))\right],$
(74)
where we have summed over species and taken the limit of $m_{e}/m_{i}\ll 1$, and all quantities here should now be considered to represent ions. The value of the residual can then be written, assuming $b\ll 1$ for simplicity (we are being loose here about the initial condition, but we may simply consider the RH initial condition of a uniformly perturbed potential),
$\frac{\phi(\infty)}{\phi(0)}=\frac{1}{1+\frac{\epsilon}{2b}\sqrt{\frac{\pi}{2}}+\frac{\epsilon^{2}}{4b}}.$
(75)
It includes the leading-order linear term in $\epsilon$, as does the residual expression in the main text. The difference with the main-text result is the numerical factor in front of the linear term. As opposed to the $0.26\omega_{d}/(b\omega_{t})$ obtained in the text, and noting that $\omega_{t}$ as used in this appendix is $\pi$ times that in the main text, the result here yields $(1/2\sqrt{2\pi})\omega_{d}/(b\omega_{t})\approx 0.20\omega_{d}/(b\omega_{t})$. This is a 30% discrepancy between the two estimates, but the same scaling nonetheless.
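The numerical comparison is simple enough to spell out. The sketch below evaluates the coefficient of the linear-in-$\epsilon$ term obtained in this appendix, compares it with the main-text value $0.26$, and evaluates the residual of Eq. (75) for assumed illustrative values of $\epsilon$ and $b$:

```python
import math

coeff_main = 0.26                                   # main-text estimate of the coefficient
coeff_gam = 1.0 / (2.0 * math.sqrt(2.0 * math.pi))  # this appendix: ~0.1995

print(coeff_gam)               # ~0.20
print(coeff_main / coeff_gam)  # ~1.3, i.e. the ~30% discrepancy

def residual(eps, b):
    """phi(inf)/phi(0) from Eq. (75)."""
    return 1.0 / (1.0 + (eps / (2.0 * b)) * math.sqrt(math.pi / 2.0) + eps**2 / (4.0 * b))

print(residual(eps=0.05, b=0.1))   # finite residual, between 0 and 1
```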
## Appendix C Near-axis properties in optimised configurations
In this Appendix we present the near-axis calculations necessary to obtain the expressions in Table 1 for the residual-relevant parameters in different omnigeneous magnetic fields. These should be taken as informed estimates of the amplitudes in the simple model assumed in the main text. As we shall show, this is a good fit for QS fields, but less so for QI. We assume some basic
understanding of inverse-coordinate near-axis theory (Garren & Boozer, 1991b,
a), and shall not derive the basic building elements of it. We refer the
reader to the work by Landreman & Sengupta (2019) for the general equations
for magnetohydrostatic equilibrium and in particular in a quasisymmetric
configuration, and Plunk et al. (2019); Rodríguez & Plunk (2023) for quasi-
isodynamic ones. We shall here use, with further explicit reference to those
works, the elements needed for the evaluation of the appropriate quantities.
### C.1 Quasisymmetric fields
Let us start by writing the magnetic field magnitude near the axis for a
quasisymmetric field (Garren & Boozer, 1991a, Eq. (A1)) (Landreman & Sengupta,
2019, Eq. (2.15)),
$B\approx B_{0}(1+r\eta\cos\chi),$ (76)
where $r=\sqrt{2\psi/\bar{B}}$ is a pseudo-radial coordinate normalised to a
reference $\bar{B}$, and $\chi=\theta-N\varphi$, where $N$ is the direction of
symmetry of the QS field and we are using Boozer coordinates. Because $B_{0}$
is a constant, it is clear from this form that the constant parameter $\eta$
measures the variation of the magnetic field within a surface (to leading
order). Thus, along a field line (at constant $\alpha$) the magnetic field
depends on $\chi=\alpha+\bar{\iota}\varphi$, and thus the mirror ratio is,
$\Delta=r\eta,$ (77)
as indicated in Table 1.
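The identification $\Delta=r\eta$ follows directly from Eq. (76): along a field line, $B/B_{0}=1+r\eta\cos\chi$ oscillates between $1\pm r\eta$, so the mirror ratio as used in Eq. (47) (half the relative field variation) is $r\eta$. A short numerical check, with assumed values of $r$ and $\eta$:

```python
import math

r, eta = 0.1, 0.8   # assumed pseudo-radius and near-axis parameter
B0 = 1.0
chi = [2.0 * math.pi * j / 1000 for j in range(1000)]
B = [B0 * (1.0 + r * eta * math.cos(c)) for c in chi]  # Eq. (76) along a field line

# Mirror ratio as half the relative field variation: Delta = (Bmax - Bmin) / (2 * B0).
Delta = (max(B) - min(B)) / (2.0 * B0)
print(Delta)   # -> r * eta = 0.08
```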
We now need to construct the other important input to the residual calculation,
$q_{\mathrm{eff}}=\frac{1}{\pi}\frac{1}{k_{\perp}\rho_{i}}\frac{\omega_{d}}{\omega_{t}},$
(78)
whose definition is meant to take the place of $q$ in the RH residual. See the
main text, Section 4, for more details, including its connections to banana
widths (roughly $\sim\rho_{i}q_{\mathrm{eff}}/\sqrt{\Delta}$) and the
transition between the low-mirror and RH residual regimes.
Let us start by finding the amplitude of the drift frequency
$\omega_{d}(\chi)$. The curvature drift is by definition,
$\omega_{d}(\chi)=-v_{T}\frac{\mathbf{B}\times\nabla
B\cdot\nabla\psi}{B^{3}}\bar{B}k_{\psi}\rho_{i},$ (79)
where we have defined the ion Larmor radius $\rho_{i}=m_{i}v_{T}/q_{i}\bar{B}$
with respect to some reference field $\bar{B}$. The triple vector product may be directly computed using the contravariant Boozer coordinate basis in the near-axis framework (Jorge & Landreman, 2020, Eq. (45); the expression there has an incorrect additional factor of $B_{0}$, as can be checked dimensionally, but this typo is unimportant), which yields
$\omega_{d}(\chi)=-v_{T}B_{0}r\eta k_{\psi}\rho_{i}\sin\chi+O(r^{2}).$ (80)
The coefficient $\omega_{d}$ may be directly read off from the amplitude of this expression. Note here that $\eta$ plays a primary role in controlling the magnitude of the radial drift, as it controls the size of the gradients of the magnetic field magnitude.
To make sense of the typical magnitude of $\eta$, it is convenient to
introduce the description of flux surface shapes in the near-axis framework.
Flux surfaces are defined as a function of Boozer coordinates with respect to
the magnetic axis, $\mathbf{r}_{0}$, in the Frenet-Serret basis
$\{\hat{b},\hat{\kappa},\hat{\tau}\}$ (tangent, normal and binormal) of the
latter, so that
$\mathbf{r}(\psi,\theta,\varphi)-\mathbf{r}_{0}=X\hat{\kappa}+Y\hat{\tau}+Z\hat{b}$.
Thus $X$ is a function that gives the distance from flux surfaces to the axis
along the normal to the latter. To leading order this is proportional to
$X_{1}=r\eta/\kappa$, while along the binormal it scales like
$Y_{1}\sim\kappa/\eta$ (Landreman & Sengupta, 2019, Eq. (2.13)). Thus, in
order to avoid extreme shaping, $\eta\sim\kappa$ (Rodríguez et al., 2023). As
$\kappa$ is generally a function of the toroidal angle and $\eta$ is not, the
shaping of flux surfaces will change toroidally, but one may take the
curvature as a scale for $\eta$. In the case of a circular cross section
tokamak one may show that $\eta=1/R$. This relation between the variation of
the magnetic field and the curvature of the axis (a field line, after all) is a
physical consequence of the relation between the bending of field lines and
magnetic pressure.
We now need to find an expression for the transit time
$\omega_{t}=v_{T}/L_{d}$, where $L_{d}$ is the connection length, i.e. the distance
from the trough to the top of the well. We thus need to compute $\ell$, the
distance along the field line. In quasisymmetry the length is simply a
rescaled form of the Boozer toroidal angle $\varphi$, so that (Landreman &
Sengupta, 2019, Eq. (A20))
$\frac{\mathrm{d}\chi}{\mathrm{d}\ell}\approx\frac{\bar{\iota}}{R_{\mathrm{ax}}},$
(81)
where $R_{\mathrm{ax}}=L_{\mathrm{ax}}/2\pi$ and $L_{\mathrm{ax}}$ is the
length of the magnetic axis, and $\bar{\iota}=\iota-N$. Given that in Eq. (76)
the magnetic field has a well of halfwidth $\pi$, then $L_{d}\approx\pi
R_{\mathrm{ax}}/\bar{\iota}$ and,
$\omega_{t}=\bar{\iota}\frac{v_{T}}{\pi R_{\mathrm{ax}}}.$ (82)
Finally, let us consider the normalized perpendicular wavenumber
$(k_{\perp}\rho_{i})^{2}=\langle|\nabla\psi|^{2}\rangle(k_{\psi}\rho_{i})^{2}$.
Note how we are using an averaged form of the flux expansion, which makes the
FLR parameter constant, as assumed in our model construction. The particular
form of $k_{\perp}\rho_{i}$ is motivated by the involvement of
$b=(k_{\perp}\rho_{i})^{2}/2$ in the residual, where it appears flux surface
averaged (Plunk & Helander, 2024) (including variation along the line would be
straightforward). We need $|\nabla\psi|^{2}$ from the near-axis description of
the field; using the contravariant basis once again (Jorge & Landreman, 2020,
Eq. (41)),
$|\nabla\psi|^{2}\approx
r^{2}\left(B_{0}\frac{\kappa}{\eta}\right)^{2}\left[\left(\frac{\eta}{\kappa}\right)^{4}\sin^{2}\chi+\left(\cos\chi-\sigma\sin\chi\right)^{2}\right],$
(83)
where $\sigma$ is a function of the toroidal angle $\varphi$, the result of
solving a non-linear Riccati equation (Garren & Boozer, 1991a; Landreman &
Sengupta, 2019). The flux surface average of this expression can be carried
out straightforwardly, using to leading order
$\langle\dots\rangle\approx\int\mathrm{d}\chi\mathrm{d}\varphi\dots/(4\pi^{2})$,
$\left\langle|\nabla\psi|^{2}\right\rangle\approx(rB_{0}\hat{\mathcal{G}})^{2},$
(84)
where,
$\hat{\mathcal{G}}^{2}=\frac{1}{4\pi}\int_{0}^{2\pi}\left(\frac{\kappa}{\eta}\right)^{2}\left(1+\sigma^{2}+\frac{\eta^{4}}{\kappa^{4}}\right)\mathrm{d}\varphi.$
(85)
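As a quick numerical sanity check (with toy, $\varphi$-independent shaping values assumed purely for illustration), the $\chi$-average of Eq. (83) indeed reproduces $(rB_{0}\hat{\mathcal{G}})^{2}$ with $\hat{\mathcal{G}}^{2}$ given by Eq. (85):

```python
import math

# chi-average of |grad psi|^2 from Eq. (83) vs. (r*B0*G_hat)^2 with
# G_hat^2 from Eq. (85). Toy, constant shaping values for illustration.
r, B0 = 0.1, 1.0
a, sigma = 1.5, 0.3          # a = kappa/eta, held constant along the axis

n = 2000
avg = 0.0
for k in range(n):            # uniform grid over one chi period
    chi = 2 * math.pi * k / n
    avg += (r * B0 * a) ** 2 * (
        a ** -4 * math.sin(chi) ** 2
        + (math.cos(chi) - sigma * math.sin(chi)) ** 2
    )
avg /= n                      # periodic trapezoid rule = plain mean

# phi-average is trivial here since the shaping is constant:
G_hat_sq = a ** 2 * (1 + sigma ** 2 + a ** -4) / 2
assert abs(avg - (r * B0) ** 2 * G_hat_sq) < 1e-9
print("chi-average of Eq. (83) matches (r*B0*G_hat)^2")
```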
The involvement of $\sigma$ makes this geometric quantity rather obscure. In
fact $\sigma$ is directly related to the shaping of flux surfaces as
$Y_{1}=(\kappa/\eta)(\sin\chi+\sigma\cos\chi)$ (Landreman & Sengupta, 2019,
Eq. (2.13)), but its interpretation in simple terms is difficult (Rodríguez,
2023). Although it may be understood roughly as a measure of the rotation of
the elliptical cross-sections near the axis with respect to the Frenet-Serret frame
(Rodríguez, 2023, Eq. (B4a)), it also affects the elongation of flux surfaces.
It would thus be beneficial to provide a more direct geometric
interpretation of $\hat{\mathcal{G}}$. We do so using (Rodríguez,
2023, Eq. (3.2a)) to write,
$\hat{\mathcal{G}}^{2}=\frac{1}{2\pi}\int_{0}^{2\pi}\frac{1}{\sin
2e}\mathrm{d}\varphi$ (86)
where $\mathcal{E}=\tan e$ and $\mathcal{E}$ is the elongation of the flux
surfaces in the plane normal to the axis as a function of $\varphi$. The angle
$e\in(0,\pi/2)$ may be interpreted as the angle subtended by a right-angle
triangle with the major and minor axes of the ellipse as catheti. Thus the
geometric factor $\hat{\mathcal{G}}$ is a direct measure of the flux surface
elongation. A value of $\hat{\mathcal{G}}=1$ corresponds to all cross-sections
being circular, with any amount of shaping leading to $\hat{\mathcal{G}}>1$.
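The equivalence between Eq. (85) and the elongation form Eq. (86) can be checked directly in the simplest case of an untwisted ellipse ($\sigma=0$) with constant $\kappa/\eta$, taking the elongation to be the ratio of the $Y_{1}$ and $X_{1}$ amplitudes quoted above (an assumption of this sketch):

```python
import math

# For sigma = 0 and constant kappa/eta = a, the X1 and Y1 amplitudes
# scale as 1/a and a, so we take the elongation E = tan(e) = a**2
# (an assumption for this check); Eqs. (85) and (86) should then agree.
a = 1.7
G_sq_85 = a ** 2 * (1 + a ** -4) / 2   # Eq. (85) with sigma = 0, constant a
e = math.atan(a ** 2)                  # tan(e) = elongation
G_sq_86 = 1.0 / math.sin(2 * e)        # Eq. (86), phi-independent here
assert abs(G_sq_85 - G_sq_86) < 1e-12
print("Eqs. (85) and (86) agree for an untwisted ellipse")
```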
Putting everything together into $q_{\mathrm{eff}}$,
$q_{\mathrm{eff}}=\frac{1}{\iota-N}\frac{\eta
R_{\mathrm{ax}}}{\hat{\mathcal{G}}}.$ (87)
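As a consistency check, assembling $q_{\mathrm{eff}}$ from its definition in Eq. (78) and the ingredients derived above (with arbitrary toy numbers) reproduces Eq. (87):

```python
import math

# Assemble q_eff (Eq. 78) from its QS ingredients and check that it
# reduces to Eq. (87). All numerical values are arbitrary toy inputs.
v_T, B0, r = 1.0, 1.0, 0.05
eta, iota_bar, R_ax, G_hat = 0.9, 0.4, 3.0, 1.2
k_psi_rho_i = 0.7

omega_d = v_T * B0 * r * eta * k_psi_rho_i       # amplitude of Eq. (80)
omega_t = iota_bar * v_T / (math.pi * R_ax)      # Eq. (82)
k_perp_rho_i = r * B0 * G_hat * k_psi_rho_i      # from Eq. (84)

q_eff = omega_d / (math.pi * k_perp_rho_i * omega_t)   # Eq. (78)
assert abs(q_eff - eta * R_ax / (iota_bar * G_hat)) < 1e-9
print("q_eff reduces to Eq. (87)")
```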
#### C.1.1 Tokamak limit
The case of the axisymmetric tokamak is a particularly simple limit of this.
In the limit $\kappa\rightarrow 1/R$, where $R$ is the major
radius, $R_{\mathrm{ax}}\rightarrow R$ and all quantities become
$\varphi$-independent. We may then write $q_{\mathrm{eff}}=q(\eta
R)/\hat{\mathcal{G}}_{\mathrm{tok}}$, where $q=1/\iota$ is the safety factor
and $\hat{\mathcal{G}}_{\mathrm{tok}}^{2}=1/\sin 2e$. If we then consider a
circular cross-section tokamak (where $e=\pi/4$), then $\eta=1/R$,
$\hat{\mathcal{G}}=1$, and thus $q_{\mathrm{eff}}=q$. This is why we have
defined $q_{\mathrm{eff}}$ the way we have. As a reference,
$\hat{\mathcal{G}}=2$ corresponds to $e=\pi/8$ and thus an elongation
$\mathcal{E}\approx 0.4$.
### C.2 Quasi-isodynamic fields
Let us write the magnetic field of an exactly omnigeneous, QI, stellarator-
symmetric field near the axis (Plunk et al., 2019, Eq. (6.1)) (Rodríguez &
Plunk, 2023, Eqs. (8-9a)),
$B=B_{0}(\varphi)\left[1-rd(\varphi)\sin\alpha+O(r^{2})\right],$ (88)
where $B_{0}(\varphi)$ and $d(\varphi)$ are even and odd functions of
$\varphi$ respectively. The latter is required for the fulfilment of
omnigeneity. Note that $B$ is here an explicit function of $\alpha$, which,
unless the rotational transform is an integer, makes $B$ a non-periodic function.
This is the well-known impossibility of achieving omnigeneity exactly to
leading order near the axis with poloidal $|\mathbf{B}|$ contours (Plunk et
al., 2019). Acknowledging that in practice omnigeneity will have to be broken
in some buffer region near the tops (Plunk et al., 2019; Camacho Mata et al.,
2022), we shall consider Eq. (88) as given.
Let us now consider a simple model for the magnetic field on axis,
$B_{0}(\varphi)=\bar{B}\left(1-\Delta\cos N_{\mathrm{nfp}}\varphi\right),$
(89)
where $\Delta$ is the mirror ratio and $N_{\mathrm{nfp}}$ is the number of
field periods (the toroidal $N_{\mathrm{nfp}}$-fold symmetry). Unlike in the
QS scenario, the control of the on-axis magnetic field in a QI configuration
gives complete control of the mirror ratio.
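As a small sanity check (assuming the conventional definition of the mirror ratio as $(B_{\max}-B_{\min})/(B_{\max}+B_{\min})$), the model field of Eq. (89) has mirror ratio exactly $\Delta$:

```python
import math

# Mirror ratio of the on-axis model field, Eq. (89): with the definition
# (Bmax - Bmin)/(Bmax + Bmin) (an assumed convention here), the model
# gives back the parameter Delta exactly.
B_bar, Delta, N_nfp = 1.0, 0.2, 4
B0 = [B_bar * (1 - Delta * math.cos(N_nfp * 2 * math.pi * k / 1000))
      for k in range(1000)]
mirror = (max(B0) - min(B0)) / (max(B0) + min(B0))
assert abs(mirror - Delta) < 1e-9
print("mirror ratio of Eq. (89) equals Delta")
```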
The choice of this form of $B_{0}$ requires the curvature to have vanishing
points at $\varphi=n\pi/N_{\mathrm{nfp}}$ for $n\in\mathbb{Z}$, and non-
vanishing first derivative (often referred to as a first order zero). Not
doing so would lead to the loss of trapped particles as discussed in detail in
Rodríguez & Plunk (2023). As a result, the variation in the field $d(\varphi)$
must also share those zeroes with $\kappa$ to avoid extreme shaping (the
leading order shaping is analogous to the QS scenario). For now, let us keep
it general and construct the necessary coefficients as we did with the QS
case. Starting with the drift, and using (Jorge & Landreman, 2020, Eq. (37)),
$\omega_{d}(\theta)\approx-rv_{T}\bar{B}\kappa\left(X_{1c}\sin\theta-
X_{1s}\cos\theta\right)k_{\psi}\rho_{i},$ (90)
where $X_{1c}$ and $X_{1s}$ are the cosine and sine $\theta$-harmonics of
$X_{1}$ to leading order. Following their definition in terms of $B$
(Landreman & Sengupta, 2019, Eq. (A22)), and using the expression for $B$ in
Eq. (88), for an exactly omnigeneous field,
$\displaystyle X_{1c}=$ $\displaystyle\frac{d}{\kappa}\sin\iota\varphi,$ (91a)
$\displaystyle X_{1s}=$ $\displaystyle-\frac{d}{\kappa}\cos\iota\varphi,$
(91b)
so that Eq. (90) reduces to,
$\omega_{d}(\varphi)=-rv_{T}\bar{B}d(\varphi)k_{\psi}\rho_{i}\cos\alpha.$ (92)
We need the amplitude of this function to feed into $q_{\mathrm{eff}}$. Of
course, generally the shape of this function will not be that of a simple sine
as in the QS case. However, we may choose the simple form,
$d(\varphi)=\bar{d}\sin(N_{\mathrm{nfp}}\varphi),$ (93)
to give an amplitude $\omega_{d}\approx rv_{T}\bar{d}\bar{B}k_{\psi}\rho_{i}\cos\alpha$. Note
a significant difference with respect to the QS case, which is the explicit
$\alpha$ dependence: the amplitude of the drift varies from field line to
field line. We have lost the field-line equivalence (Boozer, 1983b; Helander,
2014; Rodriguez et al., 2020) of quasisymmetry. To treat this difference
consistently within the residual treatment we would have to treat more
carefully the variation of the field over the surface. However, for a rough
estimate of the drift amplitude, let us keep it as is for now.
Let us now consider $|\nabla\psi|^{2}$ (Jorge & Landreman, 2020, Eq. (33)),
$|\nabla\psi|^{2}=r^{2}B_{0}^{2}\left[\left(X_{1c}\sin\theta-
X_{1s}\cos\theta\right)^{2}+\left(Y_{1c}\sin\theta-
Y_{1s}\cos\theta\right)^{2}\right],$ (94)
where for our ideal omnigeneous field (Landreman & Sengupta, 2019, Eq.
(A25)),
$\displaystyle Y_{1c}=$
$\displaystyle\frac{\bar{B}}{B_{0}}\frac{\kappa}{d}\left(\cos\iota\varphi+\sigma\sin\iota\varphi\right),$
(95a) $\displaystyle Y_{1s}=$
$\displaystyle-\frac{\bar{B}}{B_{0}}\frac{\kappa}{d}\left(\sigma\cos\iota\varphi-\sin\iota\varphi\right).$
(95b)
Therefore,
$|\nabla\psi|^{2}\approx
r^{2}B_{0}^{2}\left[\left(\frac{d}{\kappa}\right)^{2}\cos^{2}\alpha+\left(\frac{\kappa}{d}\frac{\bar{B}}{B_{0}}\right)^{2}\left(\sin\alpha+\sigma\cos\alpha\right)^{2}\right].$
(96)
Assuming $\Delta\ll 1$ to simplify the flux surface averages and approximate
$B_{0}\approx\bar{B}$, integrating over $\alpha$ and $\varphi$,
$\left\langle|\nabla\psi|^{2}\right\rangle\approx\left(r\bar{B}\hat{\mathcal{G}}_{\mathrm{QI}}\right)^{2},$
(97)
where,
$\hat{\mathcal{G}}_{\mathrm{QI}}^{2}=\frac{1}{4\pi}\int_{0}^{2\pi}\left(\frac{\kappa}{d}\right)^{2}\left(1+\sigma^{2}+\frac{d^{4}}{\kappa^{4}}\right)\mathrm{d}\varphi.$
(98)
Note the similarity of this expression to the QS geometric factor Eq. (85). In
fact, Eq. (98) is exactly equivalent to Eq. (86), the expression in terms of
the elongation of flux surfaces in the plane normal to the magnetic axis.
Finally we compute the connection length, which under the approximation of
$\Delta\ll 1$ we may write as $L_{d}\approx\pi
R_{\mathrm{ax}}/N_{\mathrm{nfp}}$. Putting it all together,
$q_{\mathrm{eff}}=\frac{1}{N_{\mathrm{nfp}}}\frac{\bar{d}R_{\mathrm{ax}}}{\hat{\mathcal{G}}_{\mathrm{QI}}}\cos\alpha.$
(99)
Note how this parameter changes from field line to field line. The
contribution to the total residual can be thought of as a sum over wells,
each of which can be treated separately thanks to the condition of
omnigeneity. As we move along the field line we then encounter different
wells; assuming the factor $\cos\alpha$ to be the only element that changes
from well to well, and using
$\lim_{N\rightarrow\infty}\frac{1}{N}\sum_{n=0}^{N}|\cos(2\pi\iota
n)|=\frac{1}{2\pi}\int_{0}^{2\pi}|\cos\alpha|\mathrm{d}\alpha=\frac{2}{\pi},$
(100)
by application of Weyl’s lemma (Weyl, 1916, Eq. (2)) for irrational $\iota$,
we may construct an effective parameter $q_{\mathrm{eff}}$,
$q_{\mathrm{eff}}=\frac{1}{N_{\mathrm{nfp}}}\frac{2}{\pi}\frac{\bar{d}R_{\mathrm{ax}}}{\hat{\mathcal{G}}_{\mathrm{QI}}}.$
(101)
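The limit in Eq. (100) can be verified numerically; the golden-mean value of $\iota$ below is a toy choice of irrational transform:

```python
import math

# Weyl equidistribution check for Eq. (100): for an irrational
# rotational transform, the well-by-well average of |cos(2*pi*iota*n)|
# approaches 2/pi.
iota = (math.sqrt(5) - 1) / 2   # golden mean, a toy irrational choice
N = 100000
avg = sum(abs(math.cos(2 * math.pi * iota * n)) for n in range(N)) / N
assert abs(avg - 2 / math.pi) < 1e-3
print("average converges towards 2/pi")
```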
We shall not consider here any more sophisticated approach that deals with
these variations more carefully or takes additional differences between wells
into account.
## References
* Abramowitz & Stegun (1968) Abramowitz, Milton & Stegun, Irene A 1968 Handbook of mathematical functions with formulas, graphs, and mathematical tables, vol. 55. US Government printing office.
* Alonso et al. (2017) Alonso, JA, Sánchez, E, Calvo, I, Velasco, JL, McCarthy, KJ, Chmyga, A, Eliseev, LG, Estrada, T, Kleiber, R, Krupnik, LI & others 2017 Observation of oscillatory radial electric field relaxation in a helical plasma. Physical Review Letters 118 (18), 185002.
* Austin et al. (2019) Austin, Max E, Marinoni, A, Walker, ML, Brookman, MW, Degrassie, JS, Hyatt, AW, McKee, GR, Petty, CC, Rhodes, TL, Smith, SP & others 2019 Achievement of reactor-relevant performance in negative triangularity shape in the diii-d tokamak. Physical review letters 122 (11), 115001.
* Barnes et al. (2019) Barnes, Michael, Parra, Felix I & Landreman, Matt 2019 stella: An operator-split, implicit–explicit $\delta$f-gyrokinetic code for general magnetic field configurations. Journal of Computational Physics 391, 365–380.
* Beidler et al. (2021) Beidler, CD, Smith, HM, Alonso, A, Andreeva, T, Baldzuhn, J, Beurskens, MNA, Borchardt, Matthias, Bozhenkov, SA, Brunner, Kai Jakob, Damm, Hannes & others 2021 Demonstration of reduced neoclassical energy transport in wendelstein 7-x. Nature 596 (7871), 221–226.
* Bender & Orszag (2013) Bender, Carl M & Orszag, Steven A 2013 Advanced mathematical methods for scientists and engineers I: Asymptotic methods and perturbation theory. Springer Science & Business Media.
* Bernardin et al. (1986) Bernardin, M. P., Moses, R. W. & Tataronis, J. A. 1986 Isodynamical (omnigenous) equilibrium in symmetrically confined plasma configurations. The Physics of Fluids 29 (8), 2605–2611.
* Boozer (1983a) Boozer, Allen H. 1983a Transport and isomorphic equilibria. The Physics of Fluids 26 (2), 496–499.
* Boozer (1983b) Boozer, Allen H 1983b Transport and isomorphic equilibria. The Physics of Fluids 26 (2), 496–499.
* Boozer (1998) Boozer, Allen H 1998 What is a stellarator? Physics of Plasmas 5 (5), 1647–1655.
* Camacho Mata et al. (2022) Camacho Mata, K., Plunk, G. G. & Jorge, R. 2022 Direct construction of stellarator-symmetric quasi-isodynamic magnetic configurations. Journal of Plasma Physics 88 (5), 905880503.
* Cary & Shasharina (1997) Cary, J. R. & Shasharina, S. G. 1997 Omnigenity and quasihelicity in helical plasma confinement systems. Physics of Plasmas 4 (9), 3323–3333, arXiv: https://pubs.aip.org/aip/pop/article-pdf/4/9/3323/12664528/3323_1_online.pdf.
* Catto et al. (2017) Catto, Peter J, Parra, Felix I & Pusztai, István 2017 Electromagnetic zonal flow residual responses. Journal of Plasma Physics 83 (4), 905830402.
* Connor et al. (1980) Connor, JW, Hastie, RJ & Taylor, JB 1980 Stability of general plasma equilibria. iii. Plasma Physics 22 (7), 757.
* Connor et al. (1983) Connor, J. W., Hastie, R. J. & Martin, T. J. 1983 Effect of pressure gradients on the bounce-averaged particle drifts in a tokamak. Nuclear fusion 23 (12), 1702.
* Connor et al. (1978) Connor, J W, Hastie, R J & Taylor, J B 1978 Phys. Rev. Lett. 40 (6), 396.
* Conway et al. (2021) Conway, Garrard D, Smolyakov, Andrei I & Ido, Takeshi 2021 Geodesic acoustic modes in magnetic confinement devices. Nuclear Fusion 62 (1), 013001.
* Diamond et al. (2005) Diamond, Patrick H, Itoh, SI, Itoh, K & Hahm, TS 2005 Zonal flows in plasma—a review. Plasma Physics and Controlled Fusion 47 (5), R35.
* Fried & Conte (2015) Fried, Burton D & Conte, Samuel D 2015 The plasma dispersion function: the Hilbert transform of the Gaussian. Academic press.
* Galeev et al. (1969) Galeev, Albert A, Sagdeev, RZ, Furth, HP & Rosenbluth, MN 1969 Plasma diffusion in a toroidal stellarator. Physical Review Letters 22 (11), 511.
* Gao et al. (2006) Gao, Zhe, Itoh, K, Sanuki, H & Dong, JQ 2006 Multiple eigenmodes of geodesic acoustic mode in collisionless plasmas. Physics of plasmas 13 (10).
* Gao et al. (2008) Gao, Zhe, Itoh, K, Sanuki, H & Dong, JQ 2008 Eigenmode analysis of geodesic acoustic modes. Physics of Plasmas 15 (7).
* Garren & Boozer (1991a) Garren, D. A. & Boozer, A. H. 1991a Existence of quasihelically symmetric stellarators. Physics of Fluids B: Plasma Physics 3 (10), 2822–2834.
* Garren & Boozer (1991b) Garren, D. A. & Boozer, A. H. 1991b Magnetic field strength of toroidal plasma equilibria. Physics of Fluids B: Plasma Physics 3 (10), 2805–2821.
* Giuliani (2024) Giuliani, Andrew 2024 Direct stellarator coil design using global optimization: application to a comprehensive exploration of quasi-axisymmetric devices. Journal of Plasma Physics 90 (3), 905900303.
* Goodman et al. (2023) Goodman, A.G., Camacho Mata, K., Henneberg, S.A., Jorge, R., Landreman, M., Plunk, G.G., Smith, H.M., Mackenbach, R.J.J., Beidler, C.D., Helander, P. & et al. 2023 Constructing precisely quasi-isodynamic magnetic fields. Journal of Plasma Physics 89 (5), 905890504.
* Gradshteyn & Ryzhik (2014) Gradshteyn, Izrail Solomonovich & Ryzhik, Iosif Moiseevich 2014 Table of integrals, series, and products. Academic press.
* Hall & McNamara (1975a) Hall, L. S. & McNamara, B. 1975a Three-dimensional equilibrium of the anisotropic, finite-pressure guiding-center plasma: Theory of the magnetic plasma. The Physics of Fluids 18 (5), 552–565, arXiv: https://pubs.aip.org/aip/pfl/article-pdf/18/5/552/12317924/552_1_online.pdf.
* Hall & McNamara (1975b) Hall, Laurence S. & McNamara, Brendan 1975b Three-dimensional equilibrium of the anisotropic, finite-pressure guiding-center plasma: Theory of the magnetic plasma. The Physics of Fluids 18 (5), 552–565.
* Hazeltine & Meiss (2003) Hazeltine, Richard D & Meiss, James D 2003 Plasma confinement. Courier Corporation.
* Helander (2014) Helander, P. 2014 Theory of plasma confinement in non-axisymmetric magnetic fields. Reports on Progress in Physics 77 (8), 087001.
* Helander et al. (2011) Helander, P, Mishchenko, A, Kleiber, R & Xanthopoulos, P 2011 Oscillations of zonal flows in stellarators. Plasma Physics and Controlled Fusion 53 (5), 054006.
* Helander & Nührenberg (2009) Helander, P. & Nührenberg, J. 2009 Bootstrap current and neoclassical transport in quasi-isodynamic stellarators. Plasma Physics and Controlled Fusion 51 (5), 055004.
* Helander & Sigmar (2005) Helander, Per & Sigmar, Dieter J 2005 Collisional transport in magnetized plasmas, vol. 4. Cambridge University Press.
* Ho & Kulsrud (1987) Ho, Darwin D.-M. & Kulsrud, Russell M. 1987 Neoclassical transport in stellarators. The Physics of Fluids 30 (2), 442–461.
* Jorge & Landreman (2020) Jorge, Rogerio & Landreman, Matt 2020 The use of near-axis magnetic fields for stellarator turbulence simulations. Plasma Physics and Controlled Fusion 63 (1), 014001.
* Landreman (2022) Landreman, Matt 2022 Mapping the space of quasisymmetric stellarators using optimized near-axis expansion. Journal of Plasma Physics 88 (6), 905880616.
* Landreman & Catto (2012) Landreman, M. & Catto, P. J. 2012 Omnigenity as generalized quasisymmetry. Physics of Plasmas 19 (5), 056103.
* Landreman & Paul (2022) Landreman, M. & Paul, E. 2022 Magnetic fields with precise quasisymmetry for plasma confinement. Physical Review Letters 128 (3), 035001.
* Landreman & Sengupta (2019) Landreman, M. & Sengupta, W. 2019 Constructing stellarators with quasisymmetry to high order. Journal of Plasma Physics 85 (6), 815850601.
* Mikhailov et al. (2002) Mikhailov, M. I., Shafranov, V. D., Subbotin, A. A., Isaev, M. Y., Nührenberg, J., Zille, R. & Cooper, W. A. 2002 42 (11), L23–L26.
* Miller et al. (1998) Miller, RL, Chu, MS, Greene, JM, Lin-Liu, YR & Waltz, RE 1998 Noncircular, finite aspect ratio, local equilibrium model. Physics of Plasmas 5 (4), 973–978.
* Mishchenko et al. (2008) Mishchenko, Alexey, Helander, Per & Könies, Axel 2008 Collisionless dynamics of zonal flows in stellarator geometry. Physics of Plasmas 15 (7).
* Mishchenko & Kleiber (2012) Mishchenko, Alexey & Kleiber, Ralf 2012 Zonal flows in stellarators in an ambient radial electric field. Physics of Plasmas 19 (7).
* Monreal et al. (2016) Monreal, Pedro, Calvo, Iván, Sánchez, Edilberto, Parra, Félix I, Bustos, Andrés, Könies, Axel, Kleiber, Ralf & Görler, Tobias 2016 Residual zonal flows in tokamaks and stellarators at arbitrary wavelengths. Plasma Physics and Controlled Fusion 58 (4), 045018.
* Monreal et al. (2017) Monreal, Pedro, Sánchez, Edilberto, Calvo, Iván, Bustos, Andrés, Parra, Félix I, Mishchenko, Alexey, Könies, Axel & Kleiber, Ralf 2017 Semianalytical calculation of the zonal-flow oscillation frequency in stellarators. Plasma Physics and Controlled Fusion 59 (6), 065005.
* Mukhovatov & Shafranov (1971) Mukhovatov, VS & Shafranov, VD 1971 Plasma equilibrium in a tokamak. Nuclear Fusion 11 (6), 605.
* Mynick (2006) Mynick, H. E. 2006 Transport optimization in stellarators. Physics of Plasmas 13 (5), 058102.
* Nemov et al. (1999) Nemov, V. V., Kasilov, S. V., Kernbichler, W. & Heyn, M. F. 1999 Evaluation of $1/\nu$ neoclassical transport in stellarators. Physics of Plasmas 6 (12), 4622–4632.
* Nührenberg & Zille (1988) Nührenberg, J. & Zille, R. 1988 Quasi-helically symmetric toroidal stellarators. Physics Letters A 129 (2), 113 – 117.
* Nührenberg (2010) Nührenberg, Jürgen 2010 Development of quasi-isodynamic stellarators. Plasma Physics and Controlled Fusion 52 (12), 124003.
* Olver et al. (2020) Olver, F. W. J., Daalhuis, A. B. Olde, Lozier, D. W., Schneider, B. I., Boisvert, R. F., Clark, C. W., Miller, B. R., Saunders, B. V., Cohl, H. S. & McClain, M. A., eds. 2020 NIST digital library of mathematical functions. http://dlmf.nist.gov/, Release 1.0.26 of 2020-03-15.
* Plunk & Helander (2024) Plunk, GG & Helander, P 2024 The residual flow in well-optimized stellarators. Journal of Plasma Physics 90 (2), 905900205.
* Plunk (2024) Plunk, G. G., et al 2024 A geometric approach to constructing quasi-isodynamic fields. In preparation.
* Plunk et al. (2019) Plunk, G. G., Landreman, M. & Helander, P. 2019 Direct construction of optimized stellarator shapes. part 3. omnigenity near the magnetic axis. Journal of Plasma Physics 85 (6), 905850602.
* Rodriguez et al. (2020) Rodriguez, E., Helander, P. & Bhattacharjee, A. 2020 Necessary and sufficient conditions for quasisymmetry. Physics of Plasmas 27 (6), 062501.
* Rodriguez et al. (2022) Rodriguez, E., Sengupta, W. & Bhattacharjee, A. 2022 Phases and phase-transitions in quasisymmetric configuration space. Plasma Physics and Controlled Fusion 64 (10), 105006.
* Rodríguez et al. (2023) Rodríguez, E, Sengupta, W & Bhattacharjee, A 2023 Constructing the space of quasisymmetric stellarators through near-axis expansion. Plasma Physics and Controlled Fusion 65 (9), 095004.
* Rodríguez (2023) Rodríguez, E. 2023 Magnetohydrodynamic stability and the effects of shaping: a near-axis view for tokamaks and quasisymmetric stellarators. Journal of Plasma Physics 89 (2), 905890211.
* Rodríguez et al. (2020) Rodríguez, E., Helander, P. & Bhattacharjee, A. 2020 Necessary and sufficient conditions for quasisymmetry. Physics of Plasmas 27 (6), 062501.
* Rodríguez & Plunk (2023) Rodríguez, E. & Plunk, G. G. 2023 Higher order theory of quasi-isodynamicity near the magnetic axis of stellarators. Physics of Plasmas 30 (6), 062507.
* Rosenbluth & Hinton (1998) Rosenbluth, MN & Hinton, FL 1998 Poloidal flow driven by ion-temperature-gradient turbulence in tokamaks. Physical review letters 80 (4), 724.
* Schiff (2013) Schiff, Joel L 2013 The Laplace transform: theory and applications. Springer Science & Business Media.
* Skovoroda (2005) Skovoroda, A. A. 2005 3d toroidal geometry of currentless magnetic configurations with improved confinement. Plasma Physics and Controlled Fusion 47 (11), 1911–1924.
* Spitzer Jr (1958) Spitzer Jr, Lyman 1958 The stellarator concept. The Physics of Fluids 1 (4), 253–264.
* Stringer (1972) Stringer, TE 1972 Effect of the magnetic field ripple on diffusion in tokamaks. Nuclear Fusion 12 (6), 689.
* Sugama & Watanabe (2005) Sugama, H & Watanabe, T-H 2005 Dynamics of zonal flows in helical systems. Physical review letters 94 (11), 115001.
* Sugama & Watanabe (2006) Sugama, Hideo & Watanabe, T-H 2006 Collisionless damping of zonal flows in helical systems. Physics of Plasmas 13 (1).
* Takahasi & Mori (1974) Takahasi, Hidetosi & Mori, Masatake 1974 Double exponential formulas for numerical integration. Publications of the Research Institute for Mathematical Sciences 9 (3), 721–741.
* Watanabe et al. (2008) Watanabe, T-H, Sugama, H & Ferrando-Margalet, S 2008 Reduction of turbulent transport with zonal flows enhanced in helical systems. Physical review letters 100 (19), 195002.
* Wesson (2011) Wesson, John 2011 Tokamaks; 4th ed.. International series of monographs on physics . Oxford: Oxford Univ. Press.
* Weyl (1916) Weyl, Hermann 1916 Über die gleichverteilung von zahlen mod. eins. Mathematische Annalen 77 (3), 313–352.
* Xanthopoulos et al. (2011) Xanthopoulos, P, Mischchenko, A, Helander, P, Sugama, H & Watanabe, T-H 2011 Zonal flow dynamics and control of turbulent transport in stellarators. Physical review letters 107 (24), 245002.
* Xiao & Catto (2006) Xiao, Yong & Catto, Peter J 2006 Short wavelength effects on the collisionless neoclassical polarization and residual zonal flow level. Physics of Plasmas 13 (10).
* Xiao et al. (2007) Xiao, Yong, Catto, Peter J & Dorland, William 2007 Effects of finite poloidal gyroradius, shaping, and collisions on the zonal flow residual. Physics of plasmas 14 (5).
# Direct Preference Optimization of Video Large Multimodal Models from
Language Model Reward
Ruohong Zhang∗♠ Liangke Gui ♠♢
Zhiqing Sun♠ , Yihao Feng∇ , Keyang Xu△ , Yuanhan Zhang♡ , Di Fu♢ ,
Chunyuan Li♢ , Alexander Hauptmann♠ , Yonatan Bisk♠ , Yiming Yang♠
♠CMU LTI, ♢Bytedance, ∇UT Austin, △Columbia University, ♡NTU
Project Page: https://github.com/RifleZhang/LLaVA-Hound-DPO
∗Equal contribution.
###### Abstract
Preference modeling techniques, such as direct preference optimization (DPO),
have proven effective in enhancing the generalization abilities of large
language models (LLMs). However, in tasks involving video instruction-following,
providing informative feedback, especially for detecting hallucinations in
generated responses, remains a significant challenge. Previous studies have
explored using large multimodal models (LMMs) as reward models to guide
preference modeling, but their ability to accurately assess the factuality of
generated responses compared to corresponding videos has not been conclusively
established. This paper introduces a novel framework that utilizes detailed
video captions as a proxy of video content, enabling language models to
incorporate this information as supporting evidence for scoring video Question
Answering (QA) predictions. Our approach demonstrates robust alignment with
OpenAI GPT-4V model’s reward mechanism, which directly takes video frames as
input. Furthermore, we show that applying this tailored reward through DPO
significantly improves the performance of video LMMs on video QA tasks.
## 1 Introduction
This paper addresses the challenge of aligning LMMs, particularly in tasks
that involve video instruction following. Despite recent advancements in
reinforcement learning (RL) (Ouyang et al., 2022; Bai et al., 2022; Lee et
al., 2023; Sun et al., 2023b) and DPO (Rafailov et al., 2024; Chen et al.,
2024b; Hosseini et al., 2024), which have been effective in guiding LLMs
towards generating more honest, helpful, and harmless content, their
effectiveness in multimodal contexts remains limited. The critical obstacle
lies in developing a robust reward system capable of distinguishing preferred
responses from less preferred ones, especially when such responses are
generated based on video inputs. The challenge is further complicated by the
presence of hallucinations in generated content, stemming from the scarcity of
alignment data across different modalities (Liu et al., 2023b; Sun et al.,
2023a).
While human preference data is valuable, it is challenging to scale due to its
cost and labor-intensive nature, as highlighted by the LLaVA-RLHF (Sun et al.,
2023a) paper, which collected 10k human-evaluated instances at a considerable
cost of $3000. Existing approaches for distilling preferences, such as those
for image data using GPT-4V (Li et al., 2023d), encounter scalability issues,
especially for video inputs that require analyzing multiple frames. While Ahn
et al. (2024) leverage a supervised finetuning (SFT) model for self-
evaluation, the efficacy of the SFT model remains uncertain, particularly in
accurately assessing the factuality of responses in relation to their
corresponding videos.
To tackle the aforementioned challenges, we introduce a cost-effective reward
mechanism aimed at reliably evaluating the quality of responses generated by
video LMMs, serving as a basis for further preference optimization. We
propose the use of detailed video captions as a proxy for video content,
enabling a language model to analyze video content, assess the accuracy of an
LMM’s response to a related question, and determine the presence of
hallucinations. The language model provides natural language feedback as a
chain-of-thought step, and generates a numerical score for reward,
facilitating a cost-effective feedback system.
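The scoring step can be sketched as follows; the prompt template, helper names, and the 1-5 scale here are illustrative assumptions rather than the paper's exact implementation:

```python
import re

# Sketch of the caption-as-proxy reward step: the judge language model
# sees the caption (standing in for the video), the question, and the
# candidate answer, reasons in natural language, and ends with a score.
PROMPT = (
    "Video caption: {caption}\n"
    "Question: {question}\nCandidate answer: {answer}\n"
    "Explain whether the answer is supported by the caption, then end "
    "with a line 'Score: X' where X is an integer from 1 to 5."
)

def parse_score(feedback: str) -> int:
    """Extract the numerical reward from the judge's chain-of-thought feedback."""
    match = re.search(r"Score:\s*(\d)", feedback)
    if match is None:
        raise ValueError("no score found in feedback")
    return int(match.group(1))

# Example feedback a judge model might return:
feedback = ("The answer claims the dog is swimming, but the caption only "
            "describes it running on the beach, so this is a hallucination. "
            "Score: 2")
print(parse_score(feedback))  # -> 2
```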
However, high-quality video captions are essential for this process. To
mitigate the shortage of high-quality video captions, we have developed a
comprehensive video caption dataset, ShareGPTVideo, using a novel prompting
technique with the GPT-4V model, comprising 900k captions that encompass a
wide range of video content, including temporal dynamics, world knowledge,
object attributes, and spatial relationships. With this video caption dataset
available, we verify that our reward mechanism, which utilizes video captions
as a proxy, is well-aligned with evaluations derived from the more powerful,
albeit costlier, GPT-4V model-generated rewards. Employing this reward
mechanism as the basis for the DPO algorithm, we train LLaVA-Hound-DPO, which
achieves an 8.1% accuracy improvement over the SFT counterpart. This marks a
significant advancement in video LMM alignment and represents the first
successful application of a DPO method in this domain.
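The underlying DPO objective (Rafailov et al., 2024) can be sketched as follows, operating on sequence log-probabilities of the preferred and rejected responses under the policy and a frozen reference model (all numbers are illustrative):

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss on log-probs of the preferred (w) and rejected (l) responses."""
    # margin: how much more the policy prefers the chosen response
    # than the frozen reference model does
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return math.log(1.0 + math.exp(-margin))  # -log sigmoid(margin)

# At initialization (policy == reference) the loss is log(2):
assert abs(dpo_loss(0.0, 0.0, 0.0, 0.0) - math.log(2)) < 1e-12

# Preferring the chosen response more than the reference lowers the loss:
loss = dpo_loss(logp_w=-10.0, logp_l=-12.0, ref_logp_w=-11.0, ref_logp_l=-11.5)
print(loss < math.log(2))  # -> True
```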
Our contributions are outlined as follows:
1. 1.
We develop a large-scale, detailed video caption dataset, covering a wide
array of content. This dataset serves as a foundational resource for LMM model
training and research, facilitating advancements in video understanding tasks.
2. 2.
We introduce a cost-effective method for evaluating video instruction-
following tasks, serving as an enhanced evaluation of model performance.
3. 3.
We demonstrate the effective application of DPO to improve model performance
by leveraging language model feedback as the reward, which substantially
improves the alignment of video LMMs, establishing a new benchmark for SOTA
performance in video QA tasks.
## 2 Related Work
### 2.1 Large Multi-Modal Models
LMMs (Liu et al., 2023b; a; Bai et al., 2023; Chen et al., 2023; Li et al.,
2023a) have enabled instruction following across modalities by utilizing LLM
as backbones. In the context of video understanding, LLMs have been adapted to
process video content (Lin et al., 2023a; Zhang et al., 2023a; Maaz et al.,
2023; Li et al., 2023b; Luo et al., 2023; Liu et al., 2023c; Jin et al., 2024;
Ahn et al., 2024). Our work adopts the Video-LLaVA backbone, focusing on model
enhancement through preference modeling with the DPO technique.
### 2.2 Video-text Datasets
Existing video-text datasets typically provide brief sentences or mere
keywords as captions, as indicated by Bain et al. (2021); Wang et al. (2023);
Yu et al. (2019); Jang et al. (2017); Xu et al. (2016). Shvetsova et al.
(2023) uses automatic speech recognition to extract textual content from
videos, but it encounters alignment issues when the audio does not match or is
absent from the visual content. Video-ChatGPT (Maaz et al., 2023) employs human effort to create high-quality video instructions, albeit limited to the ActivityNet domain and only 100k instruction pairs. Our work leverages the GPT-4V model with specifically crafted prompts to produce detailed video captions as a community resource for LMM training.
### 2.3 Preference Modeling for LMMs
Preference modeling techniques are employed to enhance the utility of LMMs
while mitigating the issue of hallucination. Sun et al. (2023a) leveraged
Reinforcement Learning with Human Feedback (RLHF) and incorporated caption
information into the reward model to improve the assessment of factuality.
More recently, Ahn et al. (2024) used RL on AI feedback to improve video LMM performance. For image understanding, Li et al. (2023d) and Gunjal et al. (2023) applied DPO to rewards distilled from GPT-4V over groups of model outputs, while Zhao et al. (2023) created preference data using ChatGPT to generate positive and negative pairs informed by detailed image descriptions. Our contribution extends DPO to video LMM alignment, using detailed captions as factual evidence for reward modeling.
Figure 1: Workflow diagram showing: a) the use of GPT-4V for creating a
detailed caption dataset for videos; b) generating video instruction data for
SFT; c) integrating captions into a feedback loop for factually-enhanced DPO,
improving the model’s performance on video instruction-following tasks.
## 3 Method
As shown in fig. 1, our methodology enhances video LMM alignment through the DPO method, using rewards from a language model. We elaborate on constructing a video caption dataset in section 3.1. In section 3.2, we discuss the generation of video instruction data and the fine-tuning process of our model. Lastly, section 3.3 details how generated captions are incorporated as a feedback mechanism in the DPO method to refine our model’s factual alignment in video instruction-following tasks.
### 3.1 Prompting GPT-4V Model for Detailed Video Caption Distillation
Our dataset draws videos from three sources: the WebVid and VIDAL datasets, general-domain videos sourced from YouTube with 400k and 450k sampled videos respectively, and the ActivityNet dataset, which adds 50k videos focused on human activities, yielding a comprehensive collection of 900k videos. Because GPT-4V accepts only images as input, we preprocess each video by uniformly extracting ten frames, which are concatenated into a sequence that serves as a proxy for the video. This sequence is fed to GPT-4V to generate a coherent caption for the represented video based on
the frame sequence. The prompt adheres to guidelines covering temporal
dynamics, world knowledge, object attributes, spatial relationships, aesthetic
assessments, etc., with the goal of comprehensively understanding the video
contents.
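The uniform frame-sampling step can be sketched as follows. This is an illustrative helper (the function name and the rounding scheme are our assumptions; only the count of ten frames per video comes from the text):

```python
def uniform_frame_indices(total_frames: int, num_frames: int = 10) -> list:
    """Return `num_frames` evenly spaced frame indices spanning a video.

    Sketch: ten frames per video follows the text; the authors' exact
    index-selection scheme is not specified.
    """
    if total_frames <= num_frames:
        return list(range(total_frames))
    step = (total_frames - 1) / (num_frames - 1)
    return [round(i * step) for i in range(num_frames)]
```

The frames at these indices would then be decoded, concatenated into a sequence, and sent to GPT-4V as the video proxy.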
### 3.2 SFT with Generated Video Instruction Data from Detailed Caption
To generate video instruction-following data for SFT, we adopt a methodology similar to that outlined in Video-ChatGPT (Maaz et al., 2023). Specifically, we first randomly sample 20k, 30k, and 30k captions from the ActivityNet, WebVid, and VIDAL portions of our dataset, respectively, and then employ ChatGPT to generate three question-answer pairs for each detailed video caption, resulting in a total of 240k instruction pairs for fine-tuning. This approach ensures that the
instructional data remains factually consistent with the content of the
detailed captions. The specific prompting strategy used for this instruction
generation process is detailed in fig. 13.
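A minimal sketch of this sampling-and-prompting step. The 20k/30k/30k split and three pairs per caption follow the text; the prompt wording and function names are our assumptions:

```python
import random


def build_qa_requests(captions_by_source, sample_sizes, pairs_per_caption=3, seed=0):
    """Sample captions per source and build QA-generation prompts for ChatGPT.

    Illustrative sketch; the actual prompt used by the authors is in fig. 13.
    """
    rng = random.Random(seed)
    requests = []
    for source, n in sample_sizes.items():
        for caption in rng.sample(captions_by_source[source], n):
            prompt = (
                f"Generate {pairs_per_caption} question-answer pairs that are "
                f"factually consistent with this video caption:\n{caption}"
            )
            requests.append({"source": source, "prompt": prompt})
    return requests
```

With `sample_sizes = {"ActivityNet": 20_000, "WebVid": 30_000, "VIDAL": 30_000}`, the 80k sampled captions yield 240k instruction pairs.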
Figure 2: Detailed illustration of the proposed factually-enhanced DPO method.
### 3.3 DPO with Feedback from Language Model as Reward
Acquiring high-quality preference data is both costly and labor-intensive. Although GPT-4V is an effective model for reward distillation, its high cost, slow inference, and limited accessibility hinder scalability, especially for video inputs with multiple frames. We propose a cost-efficient method to generate reward data for DPO using detailed video captions as supporting evidence, as shown in fig. 2.
Initially, we randomly select a subset of 20k instruction pairs from the
dataset described in section 3.2. The SFT model uses these sampled questions
and their corresponding videos to generate six responses per input pair at a
temperature of $1.0$. This procedure yields 120k question-answer pairs to be evaluated. Subsequently, we employ ChatGPT to process inputs comprising a question, the ground-truth answer, the model’s prediction, and a detailed description serving as supporting evidence, using the prompt in fig. 15. The output includes a natural-language explanation as a chain-of-thought step, followed by a numerical reward score on a scale from $1$ to $5$ indicating the level of factual alignment and overall quality.
For each video-question pair, we randomly select an answer with a score $\geq 3$ as the positive example and an answer scoring below $3$ as the negative example. Cases where all responses score uniformly above or below $3$ are excluded. After this selection, approximately 17k training instances remain for DPO training. Formally, the dataset is
denoted as $\mathcal{D}_{DPO}=\\{(\mathcal{V},x,y_{w},y_{l})\\}$, where
$\mathcal{V}$ is the video, $x$ is the question, $y_{w}$ and $y_{l}$ are the
positive and negative responses. The DPO objective is defined as below:
$\mathcal{L}_{\mathrm{DPO}}\left(\pi_{\theta};\pi_{\mathrm{ref}}\right)=-\mathbb{E}_{\left(\mathcal{V},x,y_{w},y_{l}\right)\sim\mathcal{D}_{DPO}}\left[\log\sigma\left(\beta\log\frac{\pi_{\theta}\left(y_{w}\mid
x,\mathcal{V}\right)}{\pi_{\text{ref }}\left(y_{w}\mid
x,\mathcal{V}\right)}-\beta\log\frac{\pi_{\theta}\left(y_{l}\mid
x,\mathcal{V}\right)}{\pi_{\text{ref }}\left(y_{l}\mid
x,\mathcal{V}\right)}\right)\right]\,,$
where $\pi_{\theta}$ is the policy model to be optimized and $\pi_{\text{ref}}$ is the base reference model; both are initialized from the SFT weights. $\sigma$ is the logistic function, and $\beta$ is set to $0.1$.
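For a single preference pair, the objective above reduces to a computation on sequence log-probabilities. The following is a direct per-example transcription (pure Python for clarity, not a training loop; function and argument names are ours):

```python
import math


def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Per-example DPO loss.

    logp_w / logp_l: log-probs of the preferred / dispreferred answer under
    the policy; ref_logp_w / ref_logp_l: the same under the frozen SFT
    reference. beta = 0.1 follows the text.
    """
    logits = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -math.log(1.0 / (1.0 + math.exp(-logits)))  # -log sigmoid(logits)
```

When the policy matches the reference, the margin is zero and the loss is log 2; it shrinks as the policy assigns relatively more probability to the preferred answer.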
Our approach to reward assignment leverages detailed captions as a proxy for
video frames, offering both cost-effectiveness and efficiency. This method
incurs costs of less than $20, under a pricing model of $1.5 per million
tokens. In comparison, previous methods of preference data collection, such as
in Sun et al. (2023a), required an expenditure of $3,000 to gather 10k human
preference data points. Additionally, the method proposed by Li et al. (2023d), which employs GPT-4V for reward data labeling, incurs a significantly higher cost of $30 per million tokens and exhibits considerably slower inference.
Figure 3: Assessing Evaluator Quality Using Captions in Place of Frames. The
left figure shows the distribution of evaluation score differences between
ChatGPT (with caption as proxy) and GPT-4V (directly on frames) evaluations.
The right figure shows the rate of preference agreement between ChatGPT and
GPT-4V as evaluators.
## 4 Assessment of Evaluator with GPT-4V Caption as Evidence
To assess the effectiveness of our proposed reward-assignment method, which uses detailed captions as a proxy for actual video frames, we conducted a comparative analysis against GPT-4V used as a video QA evaluator. The latter takes video frames, a question, and the model prediction directly as inputs, with the detailed prompt in fig. 16. Both reward systems follow the same scoring guidelines.
To compare the two methods, we sample $200$ videos from each of the WebVid,
VIDAL, and ActivityNet datasets, each associated with one question and two
model predictions from our SFT model, with one preferred and one dispreferred
by ChatGPT. This results in $1,200$ examples, which we scored with GPT-4V (version "gpt-4-vision-preview"). After filtering through the Azure API backend, $196$, $151$, and $143$ videos from the respective datasets had both answers evaluated. The average scores
of all examples from ChatGPT and GPT-4V evaluations were $2.9$ and $3.5$
respectively, indicating a tendency of GPT-4V to yield slightly positive
evaluations. The Pearson Correlation Coefficient (PCC) of $0.47$ ($p<0.01$)
suggests a moderate positive correlation. In fig. 3 (left), the distribution of score differences between ChatGPT and GPT-4V reveals that the majority ($>75\%$) of ChatGPT scores fall within one standard deviation ($\sigma=1.31$)
of GPT-4V scores. Additionally, in fig. 3 (right), the agreement on preference
between ChatGPT and GPT-4V, excluding ties, exceeded $70\%$. These findings
cautiously support our benchmark’s applicability in video QA evaluation.
Further refinements for better alignment, such as incorporating Likert scales (Zhou et al., 2023) or GPT-4 evaluation, are areas for future research.
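The two agreement statistics reported above can be computed as follows. This is a sketch with our own helper names; beyond excluding ties, the paper's exact bookkeeping is assumed:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5


def preference_agreement(chatgpt_pairs, gpt4v_pairs):
    """Fraction of answer pairs ranked the same way by both evaluators.

    Each element is a (score_a, score_b) tuple for the two candidate answers
    to one question; pairs tied under either evaluator are excluded.
    """
    agree = total = 0
    for (ca, cb), (ga, gb) in zip(chatgpt_pairs, gpt4v_pairs):
        if ca == cb or ga == gb:
            continue  # skip ties, as in the text
        total += 1
        agree += (ca > cb) == (ga > gb)
    return agree / total if total else 0.0
```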
Existing video QA benchmarks from Maaz et al. (2023):

| Methods | LLM Size | MSVD-QA Acc. | MSVD-QA Score | MSRVTT-QA Acc. | MSRVTT-QA Score | TGIF-QA Acc. | TGIF-QA Score |
|---|---|---|---|---|---|---|---|
| FrozenBiLM (Yang et al., 2022)$*$ | 1B | 32.2 | - | 16.8 | - | 41.0 | - |
| VideoLLaMA (Zhang et al., 2023a)$*$ | 7B | 51.6 | 2.5 | 29.6 | 1.8 | - | - |
| LLaMA-Adapter (Zhang et al., 2023b)$*$ | 7B | 54.9 | 3.1 | 43.8 | 2.7 | - | - |
| VideoChat (Li et al., 2023b)$*$ | 7B | 56.3 | 2.8 | 45.0 | 2.5 | 34.4 | 2.3 |
| BT-Adapter (Liu et al., 2023c)$*$ | 7B | 67.5 | 3.7 | 57.0 | 3.2 | - | - |
| Video-ChatGPT (Maaz et al., 2023) | 7B | 68.6 | 3.8 | 58.9 | 3.4 | 47.8 | 3.2 |
| Chat-UniVi (Jin et al., 2023) | 7B | 70.0 | 3.8 | 53.1 | 3.1 | 46.1 | 3.1 |
| VideoChat2 (Li et al., 2023c) | 7B | 70.0 | 3.9 | 54.1 | 3.3 | - | - |
| Video-LLaVA (Lin et al., 2023b) | 7B | 71.8 | 3.9 | 59.0 | 3.4 | 48.4 | 3.2 |
| LLaMA-VID (Li et al., 2023e) | 7B | 72.6 | 3.9 | 58.7 | 3.4 | 49.2 | 3.3 |
| LLaMA-VID (Li et al., 2023e) | 13B | 74.3 | 4.0 | 59.8 | 3.4 | 50.8 | 3.3 |
| VLM-RLAIF (Ahn et al., 2024)$*$ | 7B | 76.4 | 4.0 | 63.0 | 3.4 | - | - |
| LLaVA-Hound-SFT | 7B | 75.7 | 3.9 | 58.7 | 3.3 | 53.5 | 3.3 |
| LLaVA-Hound-DPO | 7B | 80.7 | 4.1 | 70.2 | 3.7 | 61.4 | 3.5 |
Table 1: Evaluation of Model Performance on Zero-Shot Video Question
Answering Benchmarks Using gpt-3.5-turbo-0613. Models denoted with $*$ have
their results directly sourced from their original publications. Caution is
advised when interpreting these results; see Appendix A for an in-depth
analysis of evaluation challenges. All other baseline models were reproduced
by our team.
## 5 Experimental Results
### 5.1 Model Architecture, Image Data Mix-up and Training Pipelines
We adopt Video-LLaVA (Lin et al., 2023a) as the backbone of our video LMM, though our dataset and method can be applied to other architectures as well. Specifically, Video-LLaVA employs the LanguageBind (Zhu et al., 2023) encoder for image and video frame inputs, an MLP projector with 2 fully connected layers that maps visual embeddings into the text space, and Vicuna (Chiang et al., 2023) as the large language model. During training, we first initialize the projection MLP layer with the same Video-LLaVA MLP weights, then follow the training stages below:
Caption Pre-training Stage (LLaVA-Hound-PT): At the pretraining stage, we use captioning data comprising 650k image captions from ALLaVA (Chen et al., 2024a) and our 900k distilled video captions. We freeze the LanguageBind visual encoder and fine-tune the MLP projector and LLM, with a learning rate of 2e-5 and a batch size of 128.
SFT Stage (LLaVA-Hound-SFT): We use instructional data from both the image and video domains to fine-tune the model for instruction-following ability. Our SFT model uses 600k image instruction examples from ALLaVA and our generated 240k video instruction examples, with a learning rate of 5e-6 and a batch size of 128.
DPO Training Stage (LLaVA-Hound-DPO): We use the 17k preference pairs introduced in section 3.3 for DPO training. Following Ivison et al. (2023), we train our policy model for $3$ epochs with a learning rate of 5e-7 and a batch size of 128, resulting in roughly 420 training steps. All experiments are performed on 8 A100 GPUs.
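The three stages above can be summarized as a configuration sketch. Only the hyperparameters stated in the text are included; field names are our own, and optimizer details beyond these are unspecified:

```python
# Stage hyperparameters as reported in the text; field names are our own.
TRAINING_STAGES = {
    "pretrain": {"data": "650k image + 900k video captions", "lr": 2e-5, "batch_size": 128},
    "sft": {"data": "600k image + 240k video instructions", "lr": 5e-6, "batch_size": 128},
    "dpo": {"data": "~17k preference pairs", "lr": 5e-7, "batch_size": 128, "epochs": 3},
}
```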
### 5.2 Existing Benchmark Evaluation
#### Dataset and Testing Environment
We evaluate model performance on three benchmark datasets: MSVD-QA (Chen & Dolan, 2011), MSRVTT-QA (Xu et al., 2016), and TGIF-QA (Jang et al., 2017), using ChatGPT (gpt-3.5-turbo-0613) to assess model predictions. The evaluation prompts follow Maaz et al. (2023). In our experiments, we found that the ChatGPT version has a strong impact on the absolute metric scores, though the overall ranking of models is relatively stable. We select gpt-3.5-turbo-0613 because its scores are closest to those reported in the Video-LLaVA paper. Further details on the selection rationale and evaluation pitfalls are discussed in Appendix A.
#### Baseline Selection
Our selection criteria include video LMM models that have demonstrated SOTA
performance, specifically including Video-LLaVA, which is also our choice of
architecture. We also consider contemporaneous SOTA models that report performance comparable to Video-LLaVA but have not been directly compared in prior studies. A key consideration in our selection is the
availability of models with accessible code and checkpoints, which is crucial
for ensuring reproducibility of our findings. To this end, we replicate models
including Video-ChatGPT (Maaz et al., 2023), LLaMA-VID (Li et al., 2023e) (7B
and 13B), Chat-UniVi (Jin et al., 2023), and Video-LLaVA Lin et al. (2023b).
We adopt results for additional baselines, including FrozenBiLM (Yang et al., 2022), VideoChat (Li et al., 2023b), and VideoLLaMA (Zhang et al., 2023a), from their original publications.
Figure 4: Examples from MSRVTT-QA and MSVD-QA showcase that our LLaVA-Hound-
DPO generates better responses, and reveal key limitations of the existing
benchmark evaluation.
#### Results
In table 1, our analysis shows that among the SFT models, LLaMA-VID-7B and Video-LLaVA exhibit comparable performance, with LLaMA-VID-13B performing best. Our LLaVA-Hound-SFT model achieves performance comparable to LLaMA-VID-13B. Incorporating preference modeling, LLaVA-Hound-DPO achieves an average accuracy of $70.75\%$, surpassing LLaVA-Hound-SFT’s average accuracy of $62.65\%$ by $8.1\%$. LLaVA-Hound-DPO also exceeds the accuracy VLM-RLAIF attains through reinforcement learning.
#### Error Analysis
Figure 4 illustrates two examples. In the left example, LLaVA-Hound-SFT
provides an accurate description of the video’s first half but introduces a
hallucination with the phrase “I’m not scared of space,” absent in the video
content. LLaVA-Hound-DPO yields a more accurate inference. In the right
example, both LLaVA-Hound-SFT and Video-LLaVA produce incorrect inferences, whereas LLaVA-Hound-DPO correctly identifies the subject in the video. More critically, these examples reveal two significant issues with the current benchmark: (1) the auto-generated questions in existing benchmarks may be grammatically incorrect or even nonsensical, and (2) the answers are limited to a single word, which is insufficient for evaluating LMMs with long-form text generation. Such constraints in the ground-truth answers hinder the evaluation of crucial aspects such as helpfulness and hallucination detection.
### 5.3 Proposed Benchmark Evaluation with GPT-4V Caption as Supporting
Evidence
To address the above limitations of existing benchmark evaluation, we propose a new set of test questions for the same videos in the benchmark datasets, with QA generated from detailed captions, illustrated in appendix D. Applying our reward system from section 4, we report ChatGPT scores, and a score $\geq 3$ is counted as correct for accuracy calculation. This long-form QA evaluation supports assessing diverse aspects of responses, including relevance, accuracy, clarity, and completeness (prompt in fig. 16).
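The score-to-accuracy rule is simple enough to state in code (a sketch; the function name is ours):

```python
def qa_accuracy(scores, threshold=3):
    """Benchmark accuracy: a ChatGPT score >= 3 counts as a correct answer."""
    return sum(s >= threshold for s in scores) / len(scores)
```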
Proposed video QA benchmark (in-domain):

| No. | Methods | ActivityNet-QA Acc. | ActivityNet-QA Score | VIDAL-QA Acc. | VIDAL-QA Score | WebVid-QA Acc. | WebVid-QA Score |
|---|---|---|---|---|---|---|---|
| [1] | Video-ChatGPT (Maaz et al., 2023) | 34.17 | 2.19 | 29.35 | 2.10 | 38.88 | 2.27 |
| [2] | LLaMA-VID-7B (Li et al., 2023e) | 36.54 | 2.27 | 30.58 | 2.15 | 36.99 | 2.24 |
| [3] | LLaMA-VID-13B (Li et al., 2023e) | 37.33 | 2.29 | 32.50 | 2.18 | 39.73 | 2.30 |
| [4] | Chat-UniVi (Jin et al., 2023) | 39.35 | 2.32 | 31.40 | 2.16 | 40.05 | 2.31 |
| [5] | Video-LLaVA (Lin et al., 2023b) | 41.35 | 2.38 | 34.30 | 2.24 | 42.47 | 2.39 |
| [6] | LLaVA-Hound-SFT | 66.62 | 3.05 | 60.50 | 2.88 | 71.07 | 3.17 |
| [7] | LLaVA-Hound-DPO | 76.62 | 3.18 | 70.06 | 3.04 | 79.82 | 3.29 |
| [8] | LLaVA-Hound-PT + Image Inst. | 69.31 | 3.09 | 60.57 | 2.85 | 68.03 | 3.02 |
| [9] | LLaVA-Hound-PT + VChat | 67.34 | 3.02 | 62.33 | 2.89 | 68.98 | 3.00 |
| [10] | LLaVA-Hound-DPO + training MLP | 71.89 | 3.10 | 65.57 | 2.95 | 75.37 | 3.21 |
| [11] | LLaVA-Hound-SFT + Self-play | 64.11 | 2.85 | 56.28 | 2.68 | 67.89 | 2.95 |
| [12] | LLaVA-Hound-DPO w/ lr 3e-7 | 71.13 | 3.08 | 64.90 | 2.92 | 73.25 | 3.17 |
Table 2: Our proposed video QA benchmark evaluation on in-domain dataset
using gpt-3.5-turbo-0301, with detailed captions as supporting evidence.
Proposed video QA benchmark (out-of-domain):

| Methods | MSVD-QA Acc. | MSVD-QA Score | MSRVTT-QA Acc. | MSRVTT-QA Score | TGIF-QA Acc. | TGIF-QA Score | SSV2-QA Acc. | SSV2-QA Score |
|---|---|---|---|---|---|---|---|---|
| Video-ChatGPT (Maaz et al., 2023) | 34.06 | 2.20 | 25.65 | 1.98 | 31.35 | 2.09 | 19.36 | 1.75 |
| LLaMA-VID-7B (Li et al., 2023e) | 34.14 | 2.21 | 25.02 | 1.99 | 27.18 | 2.00 | 22.16 | 1.84 |
| LLaMA-VID-13B (Li et al., 2023e) | 35.81 | 2.25 | 26.34 | 2.02 | 27.58 | 2.01 | 21.98 | 1.83 |
| Chat-UniVi (Jin et al., 2023) | 35.61 | 2.23 | 25.89 | 2.01 | 33.23 | 2.13 | 20.59 | 1.79 |
| Video-LLaVA (Lin et al., 2023b) | 39.46 | 2.37 | 30.78 | 2.15 | 32.95 | 2.18 | 24.31 | 1.90 |
| LLaVA-Hound-SFT | 66.99 | 3.09 | 57.82 | 2.85 | 66.13 | 3.07 | 35.07 | 2.23 |
| LLaVA-Hound-DPO | 73.64 | 3.12 | 68.29 | 2.98 | 74.00 | 3.12 | 48.89 | 2.53 |
| LLaVA-Hound-PT + Image Inst. | 65.19 | 2.96 | 48.66 | 2.52 | 53.83 | 2.62 | 29.60 | 2.04 |
Table 3: Our proposed video QA benchmark evaluation on out-of-domain dataset
using gpt-3.5-turbo-0301, with detailed captions as supporting evidence.
Table 2 and table 3 show the in-domain and out-of-domain evaluations. We use "gpt-3.5-turbo-0301" for evaluation, as it is the same version used to construct the DPO dataset. Model performance is more clearly distinguishable under our evaluation, with Video-LLaVA performing best among the baseline models.
Video LMM without Video Instruction: [8] in table 2 is a baseline trained with only image instruction data, fine-tuned from LLaVA-Hound-PT; it achieves an average accuracy of $65.97\%$, comparable to the LLaVA-Hound-SFT model’s $66.06\%$ on in-domain QA. However, its performance drops significantly on out-of-domain QA ($49.32\%$ vs. $56.50\%$), suggesting that video QA training enhances generalization.
Quality of Generated SFT: [9] substitutes our generated video QA data with the Video-ChatGPT dataset for Video-LLaVA fine-tuning. Comparing [9] with [6] reveals a marginal average-accuracy gap of $0.2\%$, indicating that the quality of our generated QA closely parallels that of existing video QA datasets. Given the similar SFT data quality, the large gain of [6] over [5] can reasonably be attributed to large-scale pre-training on video captions.
Unfreeze MLP: Comparing [10] with [7] reveals a significant performance drop when the MLP is unfrozen during DPO training. Despite this drop, performance remains above the SFT baseline.
Smaller Learning Rate: Comparing [12] with [7] shows that a smaller learning rate of 3e-7 (vs. 5e-7) decreases model performance, suggesting room for further gains from better hyperparameter tuning.
Self-Play vs. DPO: Chen et al. (2024b) introduced a self-play methodology for DPO training that designates ground-truth answers as preferred and model-generated responses as dispreferred. Comparing [11] with [6] shows a notable $3\%$ accuracy drop from the SFT model, suggesting that self-play may be less effective for video LMM alignment and that introducing a reward model helps.
Figure 5: The left figure shows the test-set accuracy of the DPO model w.r.t. the number of training epochs. The right figure compares the DPO model’s performance as a generator vs. as a ranker.
DPO Accuracy vs. Training Epochs. The left of fig. 5 depicts the model’s generalization performance on out-of-domain video QA tasks as a function of training epochs. We observe a consistent improvement across datasets during the first two epochs, with peak performance at around 2.5 epochs, corresponding to 350 training steps.
DPO as Ranker vs. Generator. Following Hosseini et al. (2024), we compare using the DPO model as a ranker over candidate answers produced by the SFT model at a temperature of 1.0. As depicted on the right in fig. 5, we plot test accuracy as the DPO ranker selects the best of $N$ candidates. The SFT model at temperature 1.0 yields lower accuracy (43.3%) than greedy decoding (57.8%). Performance improves steadily as the number of candidates grows, plateauing at roughly 62% accuracy with 64 candidates. This still falls short of directly using the DPO model for answer generation, which yields 68.29% accuracy, suggesting stronger generalization of the DPO model as a generator despite its being trained with a reward-classification loss. The results contradicting Hosseini et al. (2024) may stem from the difference in tasks, i.e., math vs. video QA. Refer to appendix E for more results.
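The ranker variant can be sketched via DPO's implicit reward, i.e., beta times the policy-vs-reference log-probability gap. The helper names are ours; in practice the log-probs would come from scoring each sampled answer with the DPO policy and the SFT reference:

```python
def implicit_reward(logp_policy, logp_ref, beta=0.1):
    """DPO's implicit reward for one candidate answer."""
    return beta * (logp_policy - logp_ref)


def best_of_n(candidates, policy_logps, ref_logps, beta=0.1):
    """Return the candidate with the highest implicit reward (best-of-N ranking)."""
    rewards = [implicit_reward(p, r, beta) for p, r in zip(policy_logps, ref_logps)]
    best = max(range(len(candidates)), key=rewards.__getitem__)
    return candidates[best]
```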
### 5.4 Analysis on Video Captioning Ability from Pre-training
Figure 6: Video captioning ability w.r.t. the amount of training data, evaluated on both in-domain and out-of-domain test videos using GPT-4V.
In Figure 6, we present the video captioning ability of models across various datasets, trained on up to 900k distilled caption instances. GPT-4V self-evaluation (fig. 14) serves as the upper bound, while Video-LLaVA serves as the comparison baseline. Notably, Video-LLaVA is trained on 54k video QA instances, whereas our first checkpoint, using only 10% of the data, is trained on 90k high-quality caption instances, which likely accounts for the observed performance gap on the captioning task. Our results show that incorporating more distilled data improves model performance on both in-domain and out-of-domain datasets, though a gap with GPT-4V remains. We further evaluate generalization on specific data subsets, as shown in fig. 7 in the Appendix; these subsets reveal varying degrees of generalization difficulty across dataset types. For example, the WebVid subset, which concentrates on relatively static scenes, requires less data for effective training than the VIDAL subset, which features dynamic scene transitions and diverse video themes.
## 6 Conclusion
In this study, we propose a cost-effective reward system that utilizes detailed captions as proxies for video content. Our findings demonstrate that the resulting reward scores are well aligned with GPT-4V’s evaluations, and that incorporating this reward mechanism into DPO training yields SOTA performance on video QA tasks.
## 7 Reproducibility Statement
To ensure the reproducibility of our work, we plan to release the following items:
1. Distilled video captions with corresponding frames.
2. Model weights, including the pre-trained, SFT, and DPO models.
3. Code for training and testing with both the existing benchmarks and our proposed benchmark.
## References
* Ahn et al. (2024) Daechul Ahn, Yura Choi, Youngjae Yu, Dongyeop Kang, and Jonghyun Choi. Tuning large multimodal models for videos using reinforcement learning from ai feedback. _arXiv preprint arXiv:2402.03746_ , 2024.
* Bai et al. (2023) Jinze Bai, Shuai Bai, Shusheng Yang, Shijie Wang, Sinan Tan, Peng Wang, Junyang Lin, Chang Zhou, and Jingren Zhou. Qwen-vl: A frontier large vision-language model with versatile abilities. _arXiv preprint arXiv:2308.12966_ , 2023.
* Bai et al. (2022) Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. _arXiv preprint arXiv:2212.08073_ , 2022.
* Bain et al. (2021) Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in time: A joint video and image encoder for end-to-end retrieval. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pp. 1728–1738, 2021.
* Chen & Dolan (2011) David Chen and William B Dolan. Collecting highly parallel data for paraphrase evaluation. In _Proceedings of the 49th annual meeting of the association for computational linguistics: human language technologies_ , pp. 190–200, 2011.
* Chen et al. (2024a) Guiming Hardy Chen, Shunian Chen, Ruifei Zhang, Junying Chen, Xiangbo Wu, Zhiyi Zhang, Zhihong Chen, Jianquan Li, Xiang Wan, and Benyou Wang. Allava: Harnessing gpt4v-synthesized data for a lite vision-language model. _arXiv preprint arXiv:2402.11684_ , 2024a.
* Chen et al. (2023) Keqin Chen, Zhao Zhang, Weili Zeng, Richong Zhang, Feng Zhu, and Rui Zhao. Shikra: Unleashing multimodal llm’s referential dialogue magic. _arXiv preprint arXiv:2306.15195_ , 2023.
* Chen et al. (2024b) Zixiang Chen, Yihe Deng, Huizhuo Yuan, Kaixuan Ji, and Quanquan Gu. Self-play fine-tuning converts weak language models to strong language models. _arXiv preprint arXiv:2401.01335_ , 2024b.
* Chiang et al. (2023) Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E Gonzalez, et al. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality. _See https://vicuna. lmsys. org (accessed 14 April 2023)_ , 2023.
* Gunjal et al. (2023) Anisha Gunjal, Jihan Yin, and Erhan Bas. Detecting and preventing hallucinations in large vision language models. _arXiv preprint arXiv:2308.06394_ , 2023.
* Hosseini et al. (2024) Arian Hosseini, Xingdi Yuan, Nikolay Malkin, Aaron Courville, Alessandro Sordoni, and Rishabh Agarwal. V-star: Training verifiers for self-taught reasoners. _arXiv preprint arXiv:2402.06457_ , 2024.
* Ivison et al. (2023) Hamish Ivison, Yizhong Wang, Valentina Pyatkin, Nathan Lambert, Matthew Peters, Pradeep Dasigi, Joel Jang, David Wadden, Noah A Smith, Iz Beltagy, et al. Camels in a changing climate: Enhancing lm adaptation with tulu 2. _arXiv preprint arXiv:2311.10702_ , 2023.
* Jang et al. (2017) Yunseok Jang, Yale Song, Youngjae Yu, Youngjin Kim, and Gunhee Kim. Tgif-qa: Toward spatio-temporal reasoning in visual question answering. In _CVPR_ , 2017.
* Jin et al. (2023) Peng Jin, Ryuichi Takanobu, Caiwan Zhang, Xiaochun Cao, and Li Yuan. Chat-univi: Unified visual representation empowers large language models with image and video understanding. _arXiv preprint arXiv:2311.08046_ , 2023.
* Jin et al. (2024) Yang Jin, Zhicheng Sun, Kun Xu, Liwei Chen, Hao Jiang, Quzhe Huang, Chengru Song, Yuliang Liu, Di Zhang, Yang Song, et al. Video-lavit: Unified video-language pre-training with decoupled visual-motional tokenization. _arXiv preprint arXiv:2402.03161_ , 2024.
* Lee et al. (2023) Harrison Lee, Samrat Phatale, Hassan Mansoor, Kellie Lu, Thomas Mesnard, Colton Bishop, Victor Carbune, and Abhinav Rastogi. Rlaif: Scaling reinforcement learning from human feedback with ai feedback. _arXiv preprint arXiv:2309.00267_ , 2023.
* Li et al. (2023a) Junnan Li, Dongxu Li, Silvio Savarese, and Steven Hoi. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. _arXiv preprint arXiv:2301.12597_ , 2023a.
* Li et al. (2023b) KunChang Li, Yinan He, Yi Wang, Yizhuo Li, Wenhai Wang, Ping Luo, Yali Wang, Limin Wang, and Yu Qiao. Videochat: Chat-centric video understanding. _arXiv preprint arXiv:2305.06355_ , 2023b.
* Li et al. (2023c) Kunchang Li, Yali Wang, Yinan He, Yizhuo Li, Yi Wang, Yi Liu, Zun Wang, Jilan Xu, Guo Chen, Ping Luo, et al. Mvbench: A comprehensive multi-modal video understanding benchmark. _arXiv preprint arXiv:2311.17005_ , 2023c.
* Li et al. (2023d) Lei Li, Zhihui Xie, Mukai Li, Shunian Chen, Peiyi Wang, Liang Chen, Yazheng Yang, Benyou Wang, and Lingpeng Kong. Silkie: Preference distillation for large visual language models. _arXiv preprint arXiv:2312.10665_ , 2023d.
* Li et al. (2023e) Yanwei Li, Chengyao Wang, and Jiaya Jia. Llama-vid: An image is worth 2 tokens in large language models. _arXiv preprint arXiv:2311.17043_ , 2023e.
* Lin et al. (2023a) Bin Lin, Yang Ye, Bin Zhu, Jiaxi Cui, Munan Ning, Peng Jin, and Li Yuan. Video-llava: Learning united visual representation by alignment before projection, 2023a.
* Lin et al. (2023b) Bin Lin, Bin Zhu, Yang Ye, Munan Ning, Peng Jin, and Li Yuan. Video-llava: Learning united visual representation by alignment before projection. _arXiv preprint arXiv:2311.10122_ , 2023b.
* Liu et al. (2023a) Haotian Liu, Chunyuan Li, Yuheng Li, and Yong Jae Lee. Improved baselines with visual instruction tuning. _arXiv preprint arXiv:2310.03744_ , 2023a.
* Liu et al. (2023b) Haotian Liu, Chunyuan Li, Qingyang Wu, and Yong Jae Lee. Visual instruction tuning. _arXiv preprint arXiv:2304.08485_ , 2023b.
* Liu et al. (2023c) Ruyang Liu, Chen Li, Yixiao Ge, Ying Shan, Thomas H Li, and Ge Li. One for all: Video conversation is feasible without video instruction tuning. _arXiv preprint arXiv:2309.15785_ , 2023c.
* Luo et al. (2023) Ruipu Luo, Ziwang Zhao, Min Yang, Junwei Dong, Minghui Qiu, Pengcheng Lu, Tao Wang, and Zhongyu Wei. Valley: Video assistant with large language model enhanced ability. _arXiv preprint arXiv:2306.07207_ , 2023.
* Maaz et al. (2023) Muhammad Maaz, Hanoona Rasheed, Salman Khan, and Fahad Shahbaz Khan. Video-chatgpt: Towards detailed video understanding via large vision and language models. _arXiv preprint arXiv:2306.05424_ , 2023.
* Ouyang et al. (2022) Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. _Advances in Neural Information Processing Systems_ , 35:27730–27744, 2022.
* Rafailov et al. (2024) Rafael Rafailov, Archit Sharma, Eric Mitchell, Christopher D Manning, Stefano Ermon, and Chelsea Finn. Direct preference optimization: Your language model is secretly a reward model. _Advances in Neural Information Processing Systems_ , 36, 2024.
* Shvetsova et al. (2023) Nina Shvetsova, Anna Kukleva, Xudong Hong, Christian Rupprecht, Bernt Schiele, and Hilde Kuehne. Howtocaption: Prompting llms to transform video annotations at scale, 2023.
* Sun et al. (2023a) Zhiqing Sun, Sheng Shen, Shengcao Cao, Haotian Liu, Chunyuan Li, Yikang Shen, Chuang Gan, Liang-Yan Gui, Yu-Xiong Wang, Yiming Yang, et al. Aligning large multimodal models with factually augmented rlhf. _arXiv preprint arXiv:2309.14525_ , 2023a.
* Sun et al. (2023b) Zhiqing Sun, Yikang Shen, Hongxin Zhang, Qinhong Zhou, Zhenfang Chen, David Cox, Yiming Yang, and Chuang Gan. Salmon: Self-alignment with principle-following reward models. _arXiv preprint arXiv:2310.05910_ , 2023b.
* Wang et al. (2023) Yi Wang, Yinan He, Yizhuo Li, Kunchang Li, Jiashuo Yu, Xin Ma, Xinhao Li, Guo Chen, Xinyuan Chen, Yaohui Wang, et al. Internvid: A large-scale video-text dataset for multimodal understanding and generation. _arXiv preprint arXiv:2307.06942_ , 2023.
* Xu et al. (2016) Jun Xu, Tao Mei, Ting Yao, and Yong Rui. Msr-vtt: A large video description dataset for bridging video and language. In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , pp. 5288–5296, 2016.
* Yang et al. (2022) Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, and Cordelia Schmid. Zero-shot video question answering via frozen bidirectional language models. _NeurIPS_ , 2022.
* Yu et al. (2019) Zhou Yu, Dejing Xu, Jun Yu, Ting Yu, Zhou Zhao, Yueting Zhuang, and Dacheng Tao. Activitynet-qa: A dataset for understanding complex web videos via question answering. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, pp. 9127–9134, 2019.
* Zhang et al. (2023a) Hang Zhang, Xin Li, and Lidong Bing. Video-llama: An instruction-tuned audio-visual language model for video understanding. _arXiv preprint arXiv:2306.02858_ , 2023a.
* Zhang et al. (2023b) Renrui Zhang, Jiaming Han, Aojun Zhou, Xiangfei Hu, Shilin Yan, Pan Lu, Hongsheng Li, Peng Gao, and Yu Qiao. Llama-adapter: Efficient fine-tuning of language models with zero-init attention. _arXiv preprint arXiv:2303.16199_ , 2023b.
* Zhao et al. (2023) Zhiyuan Zhao, Bin Wang, Linke Ouyang, Xiaoyi Dong, Jiaqi Wang, and Conghui He. Beyond hallucinations: Enhancing lvlms through hallucination-aware direct preference optimization. _arXiv preprint arXiv:2311.16839_ , 2023.
* Zhou et al. (2023) Xuhui Zhou, Hao Zhu, Leena Mathur, Ruohong Zhang, Haofei Yu, Zhengyang Qi, Louis-Philippe Morency, Yonatan Bisk, Daniel Fried, Graham Neubig, et al. Sotopia: Interactive evaluation for social intelligence in language agents. _arXiv preprint arXiv:2310.11667_ , 2023.
* Zhu et al. (2023) Bin Zhu, Bin Lin, Munan Ning, Yang Yan, Jiaxi Cui, HongFa Wang, Yatian Pang, Wenhao Jiang, Junwu Zhang, Zongwei Li, et al. Languagebind: Extending video-language pretraining to n-modality by language-based semantic alignment. _arXiv preprint arXiv:2310.01852_ , 2023.
## Appendix A Effect of ChatGPT Version on Official Benchmark Evaluation
gpt-3.5-turbo-0301 evaluation:

Methods | LLM Size | MSVD-QA Acc. | MSVD-QA Score | MSRVTT-QA Acc. | MSRVTT-QA Score | TGIF-QA Acc. | TGIF-QA Score | Avg Acc. | Rank
---|---|---|---|---|---|---|---|---|---
Video-ChatGPT (Maaz et al., 2023) | 7B | 78.62 | 4.00 | 71.67 | 3.63 | 56.31 | 3.45 | 68.87 | 6
LLaMA-VID (Li et al., 2023e) | 7B | 82.57 | 4.12 | 71.94 | 3.65 | 59.00 | 3.63 | 71.17 | 4
LLaMA-VID (Li et al., 2023e) | 13B | 83.72 | 4.16 | 73.63 | 3.68 | 59.72 | 3.66 | 72.36 | 3
Chat-UniVi (Jin et al., 2023) | 7B | 80.52 | 4.02 | 66.92 | 3.41 | 57.73 | 3.49 | 68.39 | 7
Video-LLaVA (Lin et al., 2023b) | 7B | 81.44 | 4.08 | 73.29 | 3.65 | 58.34 | 3.61 | 71.02 | 5
LLaVA-Hound-SFT | 7B | 85.65 | 4.10 | 73.85 | 3.62 | 64.98 | 3.65 | 74.83 | 2
LLaVA-Hound-DPO | 7B | 88.50 | 4.20 | 82.10 | 3.84 | 75.48 | 3.81 | 82.03 | 1

gpt-3.5-turbo-0613 evaluation:

Methods | LLM Size | MSVD-QA Acc. | MSVD-QA Score | MSRVTT-QA Acc. | MSRVTT-QA Score | TGIF-QA Acc. | TGIF-QA Score | Avg Acc. | Rank
---|---|---|---|---|---|---|---|---|---
Video-ChatGPT (Maaz et al., 2023) | 7B | 68.55 | 3.80 | 58.90 | 3.36 | 47.83 | 3.21 | 58.43 | 6
LLaMA-VID (Li et al., 2023e) | 7B | 72.62 | 3.92 | 58.73 | 3.38 | 49.21 | 3.28 | 60.19 | 4
LLaMA-VID (Li et al., 2023e) | 13B | 74.29 | 3.96 | 59.82 | 3.41 | 50.83 | 3.33 | 61.65 | 3
Chat-UniVi (Jin et al., 2023) | 7B | 70.01 | 3.79 | 53.08 | 3.14 | 46.09 | 3.12 | 56.39 | 7
Video-LLaVA (Lin et al., 2023b) | 7B | 71.75 | 3.88 | 58.97 | 3.39 | 48.39 | 3.24 | 59.70 | 5
LLaVA-Hound-SFT | 7B | 75.70 | 3.86 | 58.73 | 3.31 | 53.51 | 3.30 | 62.65 | 2
LLaVA-Hound-DPO | 7B | 80.73 | 4.07 | 70.15 | 3.66 | 61.38 | 3.46 | 70.75 | 1

gpt-3.5-turbo-1106 evaluation:

Methods | LLM Size | MSVD-QA Acc. | MSVD-QA Score | MSRVTT-QA Acc. | MSRVTT-QA Score | TGIF-QA Acc. | TGIF-QA Score | Avg Acc. | Rank
---|---|---|---|---|---|---|---|---|---
Video-ChatGPT (Maaz et al., 2023) | 7B | 73.02 | 4.01 | 62.09 | 3.61 | 47.76 | 3.36 | 60.96 | 6
LLaMA-VID (Li et al., 2023e) | 7B | 75.49 | 4.08 | 62.09 | 3.61 | 51.72 | 3.47 | 63.10 | 4
LLaMA-VID (Li et al., 2023e) | 13B | 76.97 | 4.10 | 63.16 | 3.61 | 52.53 | 3.50 | 64.22 | 3
Chat-UniVi (Jin et al., 2023) | 7B | 72.22 | 3.92 | 55.02 | 3.35 | 48.16 | 3.31 | 58.47 | 7
Video-LLaVA (Lin et al., 2023b) | 7B | 74.76 | 4.04 | 62.70 | 3.60 | 51.21 | 3.45 | 62.89 | 5
LLaVA-Hound-SFT | 7B | 81.09 | 4.08 | 64.13 | 3.57 | 58.05 | 3.53 | 67.76 | 2
LLaVA-Hound-DPO | 7B | 86.05 | 4.23 | 76.75 | 3.85 | 70.02 | 3.71 | 77.61 | 1
Table 4: Performance Evaluation Across ChatGPT Versions on Zero-Shot Video Question Answering Benchmarks. This table compares the performance of state-of-the-art video LMMs evaluated under different ChatGPT versions. The absolute performance metrics scored by ChatGPT vary by version, but the comparative ranking of models under the same ChatGPT version is relatively stable.
In Table 4, we show the impact of using different ChatGPT versions on metric scores within zero-shot video question answering benchmarks. Our analysis reveals significant variations in the absolute scores across ChatGPT versions, but, based on the average accuracy metric, the relative ranking of models under the same ChatGPT version remains consistent.
This comparison underscores a critical issue: many prior studies neglect to specify the ChatGPT version used, potentially leading to inaccurate conclusions during evaluation. We advocate for the explicit designation of the ChatGPT version in future evaluations. Analysis of Table 4 indicates that gpt-3.5-turbo-0613 yields scores that align most closely with the performance reported for the Video-LLaVA (Lin et al., 2023a) model, so we adopt it as the benchmark version for model performance comparison in our study.
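The ranking stability claimed above can be checked directly from the Avg Acc column of Table 4; a minimal sketch (model names abbreviated for brevity):

```python
# Check rank stability across evaluator versions using the Avg Acc column
# of Table 4. Each list follows the row order of the table.
models = ["Video-ChatGPT", "LLaMA-VID-7B", "LLaMA-VID-13B",
          "Chat-UniVi", "Video-LLaVA", "LLaVA-Hound-SFT", "LLaVA-Hound-DPO"]
avg_acc = {
    "gpt-3.5-turbo-0301": [68.87, 71.17, 72.36, 68.39, 71.02, 74.83, 82.03],
    "gpt-3.5-turbo-0613": [58.43, 60.19, 61.65, 56.39, 59.70, 62.65, 70.75],
    "gpt-3.5-turbo-1106": [60.96, 63.10, 64.22, 58.47, 62.89, 67.76, 77.61],
}

def ranking(scores):
    # Model names sorted from highest to lowest average accuracy.
    return [m for _, m in sorted(zip(scores, models), reverse=True)]

# All three evaluator versions induce the same ordering of the seven models.
assert len({tuple(ranking(s)) for s in avg_acc.values()}) == 1
```

Running this confirms that, despite large shifts in absolute accuracy, the induced model ordering is identical under all three evaluator versions in Table 4.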
## Appendix B Evaluation of Captioning Ability from pre-training
Figure 7: Training subsets exhibit varying levels of generalization difficulty. The WebVid subset (left) requires less data compared to the VIDAL subset (right).
## Appendix C Distilled Caption Demonstration
Figure 8: Dataset examples annotated by GPT-4V
## Appendix D Video QA Dataset Demonstration
Figure 9: Comparing testing QA in existing benchmark with that in our proposed
new benchmark. Figure 10: Comparing testing QA in existing benchmark with that
in our proposed new benchmark, example 2.
## Appendix E Additional DPO Results
Figure 11: Test Set Accuracy of the DPO Model vs. Training Epochs. The figure
illustrates a consistent trend in both in-domain and out-of-domain video QA,
with peak performance occurring at approximately epoch 2.5, equivalent to 350
training steps.
Figure 12: Comparison of DPO Model Performance: Ranker vs. Generator. The DPO
model serves as a ranker, assigning reward scores to candidate answers
generated by the SFT model with a temperature setting of 1.0. Employing the
DPO model directly for answer generation results in superior performance
compared to its use as a ranker.
## Appendix F Prompts for GPT-4V and ChatGPT Queries
Task Instructions:

Given a caption that summarizes the content of a video, generate three question-answer pairs that relate directly to the information and context provided in the caption. The questions should be grounded to the understanding of the video content.

Guidelines for QA Generation:

1. Helpfulness: Answers should provide sufficient detail and depth to fully address the question. They should include relevant explanations, or context where appropriate, to enhance understanding.

2. Faithfulness: The answers must accurately reflect the information presented in the video caption. Avoid speculation or the inclusion of information not contained or implied by the caption to maintain the integrity of the content.

3. Diversity: Craft questions that cover different aspects of the video caption to provide a comprehensive understanding of the content. This includes factual inquiries, inferential questions, and those that may elicit explanatory responses.

Input Video Caption:
{caption}

Output format:
Q1: <question1>
A1: <answer1>
Q2: <question2>
A2: <answer2>
Q3: <question3>
A3: <answer3>
Figure 13: ChatGPT for instruction generation.
Your role is to serve as an impartial and objective evaluator of a video caption provided by a Large Multimodal Model (LMM). Based on the input frames of a video, assess primarily on two criteria: the coverage of video elements in the caption and the absence of hallucinations in the response. In this context, ’hallucination’ refers to the model generating content not present or implied in the video, such as incorrect details about objects, actions, counts, or other aspects not evidenced in the video frames.

To evaluate the LMM’s response:

Start with a brief explanation of your evaluation process.
Then, assign a rating from the following scale:

Rating 6: Very informative with good coverage, no hallucination
Rating 5: Very informative, no hallucination
Rating 4: Somewhat informative with some missing details, no hallucination
Rating 3: Not informative, no hallucination
Rating 2: Very informative, with hallucination
Rating 1: Somewhat informative, with hallucination
Rating 0: Not informative, with hallucination

LMM Response to Evaluate
{LLM_response}

Output format:
Judgment: <your judgment>
Score: <integer value rating>
Figure 14: GPT-4V evaluation prompt for video captioning.
Given the following inputs:

1. **Ground Truth Video Caption**: {caption}
2. **Question Related to the Caption**: {question}
3. **Ground Truth Answer**: {answer}
4. **Model Predicted Answer**: {prediction}

Your task is to evaluate the model’s predicted answer against the ground truth answer, based on the context provided by the video caption and the question. Consider the following criteria for evaluation:

- **Relevance**: Does the predicted answer directly address the question posed, considering the information provided in the video caption?
- **Accuracy**: Compare the predicted answer to the ground truth answer. Does the prediction accurately reflect the information given in the ground truth answer without introducing factual inaccuracies?
- **Clarity**: Assess the clarity of the predicted answer. Look for issues such as repetition, unclear descriptions, or any grammatical errors that could hinder understanding.
- **Completeness**: Determine if the predicted answer fully covers the scope of the ground truth answer. Does it leave out critical information or does it include all necessary details?

**Output Format**:
Explanation: <brief judgement of prediction>
Score: <an integer score of quality from 1-5>
Figure 15: ChatGPT-Evaluation Prompt for Video Question Answering. This prompt
takes in a detailed caption, question, ground truth answer, and model
prediction, subsequently generating an assessment of the prediction’s quality
alongside a corresponding score based on predefined criteria. A score value
$\geq 3$ will be considered correct for accuracy calculation.
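For concreteness, the accuracy and average-score metrics derived from these per-question judgments can be computed as in the following sketch (the example scores are hypothetical):

```python
# Benchmark metrics from per-question ChatGPT quality scores (1-5):
# a prediction counts as correct when its score is >= 3, accuracy is the
# percentage of correct predictions, and "Score" is the mean quality score.
def summarize(scores):
    correct = sum(1 for s in scores if s >= 3)
    return {"Acc.": 100.0 * correct / len(scores),
            "Score": sum(scores) / len(scores)}

# Hypothetical judgments for six questions.
metrics = summarize([5, 4, 2, 3, 1, 5])
print(metrics)  # Acc. ≈ 66.7, Score ≈ 3.33
```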
Your task is to act as an impartial and objective assessor of answers generated by a Large Multimodal Model (LMM) for video-based questions. Utilizing video frames, a posed question, and the model’s provided answer, your evaluation should focus on the following aspects:

- **Relevance**: Does the predicted answer directly address the question posed, considering the information provided in the video caption?
- **Accuracy**: Compare the predicted answer to the ground truth answer. Does the prediction accurately reflect the information given in the ground truth answer without introducing factual inaccuracies?
- **Clarity**: Assess the clarity of the predicted answer. Look for issues such as repetition, unclear descriptions, or any grammatical errors that could hinder understanding.
- **Completeness**: Determine if the predicted answer fully covers the scope of the ground truth answer. Does it leave out critical information or does it include all necessary details?

**Input**:
Question: {question}
Model Predicted Answer: {prediction}

**Output Format**:
Explanation: <brief judgement of prediction>
Score: <an integer score of quality from 1-5>
Figure 16: GPT-4V Evaluation Prompt for Video Question Answering. Together with the video frame inputs to the GPT-4V API, this prompt takes in a question and a model prediction, subsequently generating an assessment of the prediction’s quality alongside a corresponding score based on predefined criteria. A score value $\geq 3$ will be considered correct for accuracy calculation. This is used to assess the quality of ChatGPT evaluation in fig. 15.
# Effect of ionization waves on dust chain formation in a DC discharge
L. S. Matthews1<EMAIL_ADDRESS>K. Vermillion1 P. Hartmann1,2 M.
Rosenberg3 S. Rostami1 E. G. Kostadinova1,4 T. W. Hyde1 M. Y. Pustylnik5
A. M. Lipaev6,7 A. D. Usachev6 A. V. Zobnin6 M. H. Thoma8 O. Petrov1,6,7
H. M. Thomas5 O. V. Novitskii9 1CASPER, Baylor University, One Bear Place
97316, Waco, TX 76798-7316, USA 2Institute for Solid State Physics and Optics,
Wigner Research Centre for Physics, P.O.Box. 49, H-1525 Budapest, Hungary
3Department of Electrical and Computer Engineering, University of California
San Diego, La Jolla, California 92093, USA 4Physics Department, Auburn
University, Auburn, Alabama, 36849, USA 5Institut für Materialphysik im
Weltraum, Deutsches Zentrum für Luft- und Raumfahrt (DLR), Münchener Straße
20, 82234 Weßling, Germany 6Institute for High Temperatures, Russian Academy
of Sciences, Izhorskaya 13/19, 125412 Moscow, Russia 7Moscow Institute of
Physics and Technology, Institutsky Lane 9, Dolgoprudny, Moscow Region, 141700
Russia 8 I. Physikalisches Institut, Justus-Liebig-Universität Gießen,
Heinrich-Buff-Ring 16, 35392 Gießen, Germany 9 Gagarin Research and Test
Cosmonaut Training Center, 141160 Star City, Moscow Region, Russia
###### Abstract
An interesting aspect of complex plasma is its ability to self-organize into a
variety of structural configurations and undergo transitions between these
states. A striking phenomenon is the isotropic-to-string transition observed
in electrorheological complex plasma under the influence of a symmetric ion
wakefield. Such transitions have been investigated using the Plasma Kristall-4
(PK-4) microgravity laboratory on the International Space Station (ISS).
Recent experiments and numerical simulations have shown that, under PK-4
relevant discharge conditions, the seemingly homogeneous DC discharge column
is highly inhomogeneous, with large axial electric field oscillations
associated with ionization waves occurring on microsecond time scales. A
multi-scale numerical model of the dust-plasma interactions is employed to
investigate the role of the electric field on the charge of individual dust
grains, the ion wakefield, and the order of string-like structures. Results
are compared to dust strings formed in similar conditions in the PK-4
experiment.
## 1 Introduction
The PK-4 system is the latest generation in the line of microgravity dusty
plasma experiments currently in operation on-board the International Space
Station (ISS). Since its installation in the Columbus module in 2014, the PK-4
experiment has produced remarkable experimental data related to dust particle
enhanced plasma emission (Usachev et al., 2016, 2018), transverse ionization
instability (Zobnin et al., 2016), transformations of dust structures
(Polyakov et al., 2017), electrorheological and demixing phenomena (Dietz et
al., 2017), particle kinetics (Liu et al., 2018), structural phase transitions
(Dietz et al., 2018), and dust density waves (Jaiswal et al., 2018). Detailed
reviews of past and recent microgravity dusty plasma activities can be found
in Dietz et al. (2018); Thomas et al. (2019).
Besides these fundamental physical investigations, analysis of the raw
experimental data has shown that under some circumstances the dust particles
show a tendency for chain formation where the particles align into lines
several tens of particles long parallel to the discharge tube axis, as
reported in Pustylnik et al. (2016); Schwabe et al. (2019) and shown in figure
1. This happens most often (but not exclusively) when polarity switching is applied, in which the positive and negative polarities of the DC electrodes are alternated at a frequency of typically 100-500 Hz, with the aim of stabilizing the dust cloud in the field of view of the observing cameras.
Several previous experiments have produced structures with aligned grains.
Dust lane formation has been reported, e.g., during phase separation in binary
complex plasmas under microgravity (Sütterlin et al., 2009; Du et al., 2012),
driven by the electrostatic interaction between the charged dust grains in
relative motion. Vertical dust particle chains can routinely be prepared in
the electrode sheath region of a radio frequency (RF) gas discharge (Kong et
al., 2011, 2014; Chen et al., 2016), where particle alignment is stabilized by
the enhanced horizontal confinement provided by an open glass box and the ion
wake field due to the fast (supersonic) streaming of ions around the particles
(Hutchinson, 2011, 2012; Kompaneets et al., 2016). The electrorheological
effect (or the homogeneous-to-string transition) can also favor dust chain
formation as demonstrated by Ivlev et al. (2008, 2011). In this case the dust
particles are surrounded by the quasi-neutral plasma bulk, but due to an
externally applied alternating electric field and consequently streaming
(subsonic) ions, the Debye screening sphere around the dust particles becomes
distorted leading to an anisotropic inter-particle repulsion. Note that this
is different than the electrorheological effect in granular suspensions, which
results from polarization of the grains themselves (Kwon et al., 2015).
Among these known chain-forming processes, the electrorheological effect is
the most probable one to be acting in the positive column region of the PK-4
discharge plasma. For a PK-4 neon discharge at $p=50$ Pa and $I=1$ mA, the
experimentally determined plasma parameters yield an axial electric field
$E_{z}\simeq 2.2\pm 0.2$ V/cm, with an electron density $n_{\rm
e}\simeq(2.2\pm 0.2)\times 10^{8}$ cm-3 and electron temperature $T_{\rm
e}\simeq 7\pm 0.5$ eV (Usachev et al., 2004; Khrapak et al., 2012). Assuming a
stable positive column and based on the well-studied equilibrium transport
behavior of Ne+ ions in neutral Ne gas (Skullerud & Larsen, 1990), one can
estimate the ion drift velocity to be $v_{\rm id}\simeq 190$ m/s resulting in
a thermal Mach-number $M_{\rm th}=v_{\rm id}/v_{\rm th}=0.54$. Here the ion
thermal velocity is defined as $v_{\rm th}=\sqrt{k_{\rm B}T_{\rm i}/m_{\rm
i}}$ assuming a temperature of $T_{\rm i}=300$ K for the neon ions. The
thermal Mach number is the key quantity for the estimation of the strength of
the electrorheological effect based on the formula derived in Ivlev et al.
(2008) for the pairwise interparticle interaction energy
$W(r,\theta)\simeq\frac{Q^{2}}{4\pi\varepsilon_{0}}\left[\frac{{\rm
e}^{-r/\lambda_{\rm D}}}{r}-0.43\frac{M_{\rm th}^{2}\lambda_{\rm
D}^{2}}{r^{3}}\left(3\cos^{2}\theta-1\right)\right],$ (1)
where $r$ is the distance between two dust grains of charge $Q$ aligned in the
direction of the ion flow, $\theta$ is the angle relative to the ion drift
direction and $\lambda_{\rm D}$ is the unperturbed Debye screening length. In
this description the isotropic Yukawa (screened Coulomb) interaction is
modified by a dipole-like term and higher order contributions are neglected.
It has been shown in Ivlev et al. (2008) that anisotropy in the particle
distribution gradually starts to develop above a critical value of the thermal
Mach-number $M_{\rm cr}\simeq 0.3$ depending on the plasma conditions and that
apparent ordered chains build up at $M_{\rm th}>1$ with increasing length and
stability as the ion drift speed is further increased. Based on these previous
findings and the assumption of a stable DC positive column, it could be
expected that given the typical PK-4 conditions discussed above, the estimated
thermal Mach number of 0.54 is insufficient to result in the formation of long
particle chains, in contrast with the observed particle behavior. However,
recent simulations and experiments have shown that the plasma column supports fast-moving ionization waves, with associated ion flow speeds of $M_{th}>1$. Although these variations in the plasma occur on the microsecond timescale, they appear to influence the dynamics of the dust grains, which evolve on a millisecond timescale.
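The estimates above can be reproduced numerically. The sketch below uses the text's definitions of $v_{\rm th}$ and Eq. (1); the charge $Q$ and screening length $\lambda_{\rm D}$ are illustrative values only, not values from this paper:

```python
import math

# Numbers quoted above: ion thermal speed and thermal Mach number for Ne+
# at T_i = 300 K drifting at v_id = 190 m/s, plus the pair energy of Eq. (1).
k_B = 1.380649e-23            # J/K
m_i = 20.18 * 1.6605e-27      # Ne+ mass, kg
e = 1.602e-19                 # C
eps0 = 8.854e-12              # F/m

v_th = math.sqrt(k_B * 300.0 / m_i)   # ion thermal speed, ~352 m/s
M_th = 190.0 / v_th                   # ~0.54, matching the estimate above

def W(r, theta, Q=-4000.0 * e, lam_D=100e-6, M=M_th):
    """Pairwise interaction energy of Eq. (1); Q and lam_D are illustrative."""
    yukawa = math.exp(-r / lam_D) / r
    dipole = 0.43 * M**2 * lam_D**2 / r**3 * (3.0 * math.cos(theta)**2 - 1.0)
    return Q**2 / (4.0 * math.pi * eps0) * (yukawa - dipole)

# Along the ion flow (theta = 0) the dipole term lowers the repulsive
# barrier relative to the perpendicular direction, favoring chains.
assert W(300e-6, 0.0) < W(300e-6, math.pi / 2.0)
```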
In this work, we examine conditions affecting dust chain structure formation
in the PK-4 experiment based on realistic gas discharge modeling, dust
particle charging simulations, and calculations of the dust-dust and dust-ion
interactions. Of particular interest is the strong electric field created by
ionization waves which travel through the discharge column with a period on a
microsecond timescale. A description of the PK-4 experiment and plasma
conditions determined by a numerical model of the gas discharge are given in
Section 2, with a description of the molecular dynamics (MD) model of the ion
and dust dynamics in Section 3. The dust charge and configuration resulting
from applying different time-averaged discharge conditions are given in
Section 4. These results are compared with observations from the PK-4
experiment in Section 5. Concluding remarks are given in Section 6.
## 2 Methods
The PK-4 experiment utilizes a long direct current (DC) discharge with an
active length of approximately 400 mm in a glass tube with inner diameter of
30 mm, equipped with both neon and argon gases (Pustylnik et al., 2016). The
experiment utilizes several tools for manipulation of the dust, including
movable radio frequency coils, a heating ring (thermal manipulator), an
auxiliary internal ring electrode (electrical manipulation), and a 20 W
continuous infrared laser (optical manipulation), which makes the system very
versatile. The DC drive is realized with a high voltage arbitrary waveform
generator with a frequency bandwidth up to 3 kHz, needed for applying polarity
switching to the electrodes. Six dust particle dispensers are available, each
filled with different mono-disperse spherical dust grains made of melamine-
formaldehyde (MF). In the experiment, the dust particles are suspended in the
center region of the discharge tube, in the bulk of the positive column. The
observation of the dust ensemble and discharge glow is realized by video
imaging, using a set of CCD cameras with an image resolution of 14.2 $\mu$m
per pixel (Schwabe et al., 2019). A detailed description of the setup and
early experiments can be found in Pustylnik et al. (2016).
Figure 1: (top) Schematic of the PK-4 experiment. Six microparticle dispensers
(D1-D6) are mounted on the sides. Cameras C1 and C2 each have a field of view
of 22.4 $\times$ 16.8 mm2 and can be moved along as well as across the plasma
chamber axis. (bottom) Dust particles within the PK-4 experiment showing the
formation of chains.
### 2.1 Gas discharge modeling
A cylindrically symmetric 2D particle-in-cell with Monte Carlo collisions (PIC/MCC) code was implemented and used to simulate the motion and collisions of electrons and Ne+ ions in neon gas and at solid surfaces in a DC discharge
matching the PK-4 operating conditions. The electric field within the
discharge tube is determined self-consistently from the boundary conditions at
the electrodes and walls of the glass cylinder and the densities of the
charged species. The simulation was used to determine the plasma
characteristics within the PK-4 experiment for a DC plasma in neon held at a
pressure of $p=40$ Pa, gas temperature $T_{g}=300$ K, discharge current
$I=0.8$ mA (with approximately 1000 V DC voltage) with optional 500 Hz
polarity switching. A detailed description of the model, its implementation
and experimental verification are presented in a separate publication
(Hartmann et al., 2020).
Figure 2: Computed spatial distributions of plasma parameters: electron
density (a), Ne+ ion density (b), axial electric field (where positive
indicates in the direction of increasing $z$) (c), radial electric field (d),
mean electron energy (e), mean Ne+ ion energy (f) at $p=40$ Pa and $I=0.8$ mA
with the cathode at $z=0$. The data acquisition time was set to a very short
$0.25\,\mu$s. The real aspect ratio of 3:40 is scaled by stretching the radial
axis by a factor of two for better visibility.
Figure 2 shows the instantaneous spatial distribution of selected plasma parameters. The global structure reproduces the traditional structure of long DC discharges: a short cathode fall with a large electric field, followed by a low-field region (including a field-reversal feature), and the weak-field positive column extending down to the anode. A dominant feature of the instantaneous global structure is the presence of ionization waves, which develop on a $\mu$s time scale and travel along the column with phase velocities ranging between 500 m s-1 and 1200 m s-1. These quasiperiodic waves are characterized by a large-amplitude modulation of the charged particle densities (figure 2(a,b)) and alternating axial electric fields (figure 2(c)).
global plasma parameters computed with the same simulation under similar
discharge conditions is presented in Hartmann et al. (2020). The time-averaged
plasma parameters in the central region are $n_{e}$ = $n_{i}$ = $2.1\times
10^{14}$ m-3, mean energies $\langle\epsilon\rangle_{e}=4.4$ eV and
$\langle\epsilon\rangle_{i}=0.04$ eV, and electric field $E=245$ V/m. The
presence of high amplitude ionization waves along the positive column makes
the time-dependence of the plasma parameters at a given position (where the
dust grains reside) of interest.
Here we focus on the local plasma environment in the central region of the
discharge at position $z=200$ mm and $r=0$. In the following graphs the time-
dependence of the plasma parameters is shown with 0.25 $\mu$s resolution
covering 250 $\mu$s total time at the central position of the cylinder. As shown in figure 3(a), the axial electric field varies in magnitude, having a small positive value between the ionization waves (about 100 V/m, where positive indicates in the direction of increasing $z$) and peaking at about -2000 V/m as an ionization front passes.
Figure 3: (a) Axial electric field at the center of the column. Drift velocity
of (b) electrons and (c) ions. The red shading indicates the times between the
ionization waves, and the regions shaded in green denote the times when the
electric field peaks within the ionization waves.
A similar structure is seen in the electron and ion velocities, which rapidly
increase in magnitude within an ionization wave. The velocities are measured
from the moments of the velocity distribution. The first moment is the average
velocity, which shows the net mean drift velocity $v_{d}$ imparted by the DC
electric field in the column (figure 3(b,c)). The second moment of the
velocity distribution gives the standard deviation, which is the average
(thermal) velocity of the plasma particles, $v_{th}$ (figure 4(a,b)). The
temperatures calculated from the time-dependent thermal velocities,
$T_{th}=\frac{2mv_{th}^{2}}{3k_{B}},$ (2)
are shown in figure 4 (c,d). The fully time-averaged electron and ion thermal
energies are $\langle\epsilon\rangle_{e}=4.4$ eV and
$\langle\epsilon\rangle_{i}=0.04$ eV. These are much greater than the average
energies calculated between the ionization waves (marked by the shaded
regions) $\langle\epsilon\rangle_{e}=3.4$ eV and
$\langle\epsilon\rangle_{i}=0.025~{}{\rm eV}=293$ K. In between the ionization
waves, the ions thermalize with the neutral gas at temperature $T_{n}=290$ K.
Note that the drift energy must be carefully taken into account in calculating
the mean thermal energy. According to Monte Carlo simulations of ion drift in
electric fields, the mean energy of the ions including the drift velocities is
given by (Robertson & Sternovsky, 2003)
$\frac{1}{2}m_{i}\langle
v_{i}^{2}\rangle=\frac{\pi}{4}m_{i}v_{d,i}^{2}+\frac{3}{2}k_{B}T_{n},$ (3)
where $T_{n}$ is the temperature of the neutral gas, leading to an expression
for the ion temperature as a function of the drift velocity (Trottenberg et
al., 2006)
$T_{dr,i}=T_{n}+(\frac{\pi-2}{6})\frac{1}{k_{B}}m_{i}v_{dr,i}^{2}.$ (4)
The ion temperature calculated in this manner is shown in figure 4(d) by the
red line. Applying Eq. 4 gives an average ion temperature of $T_{dr,i}$ = 380
K = 0.033 eV over the full time interval, greater than that between the
ionization waves, but less than that calculated without the drift correction.
Using an equation similar to Eq. (4) to calculate the average electron energy
shows that there is very little difference between the average electron energy
between the ionization waves and that averaged over the full time interval,
with $T_{dr,e}$ = 39543 K = 3.41 eV (as indicated by the red line in figure
4(c)).
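As a sanity check, Eq. (4) can be evaluated for representative drift speeds; a minimal sketch (the 380 K value quoted in the text comes from averaging over the full time series, not from a single drift speed):

```python
import math

# Eq. (4): ion temperature including the drift contribution,
# T_dr,i = T_n + ((pi - 2)/6) * m_i * v_dr**2 / k_B, for Ne+ in neutral gas.
k_B = 1.380649e-23             # J/K
m_i = 20.18 * 1.6605e-27       # Ne+ mass, kg

def T_drift(v_dr, T_n=290.0):
    return T_n + (math.pi - 2.0) / 6.0 * m_i * v_dr**2 / k_B

# Between the ionization waves (v_dr ~ 95 m/s) the drift correction is only
# a few kelvin; at the within-wave average (~489 m/s) it exceeds 100 K.
print(T_drift(95.0), T_drift(489.0))
```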
Figure 4: (a,b) Thermal velocities for the electrons and ions in the axial
(blue) and radial (green) directions calculated from the standard deviation of
the velocity distribution. The velocities in the tangential direction (not
shown) are similar to those in the radial direction. (c,d) Temperature of the
electrons and ions calculated from the thermal velocities, Eq. (2) (blue
line), and the average temperature between ionization waves plus the drift
energy, Eq. (4) (red line). The shaded areas indicate the times between the
ionization waves when the ions thermalize with the neutral gas.
Since the plasma variations occur on the microsecond timescale and the dust
dynamics occur on the millisecond timescale, it seems reasonable to use the
time-averaged parameters to set the conditions used in the dust dynamics
model. However, the drift velocity plays an important role in determining
particle charge and the strength and extent of the ion wake. The ion drift
velocity in this case is less than the sound speed of the plasma
$c_{s}=\sqrt{k_{B}T_{e}/m_{i}}$. Given subsonic ion velocities, as the drift
velocity increases the ion current to a grain’s surface decreases, causing the
grain to become more negatively charged. An increased charge causes the ions
to be more strongly focused, resulting in a stronger ion wake. The increased
flow velocity also causes the spatial extent of the ion wake to be narrower in
the radial (cross-stream) direction and extended in the direction of ion flow
(Matthews et al., 2019). The interparticle interaction energy, as given by Eq.
1, also depends on the ion flow velocity, predicting that anisotropy in the
particle distribution begins to develop when $M_{th}>0.3$ (Ivlev et al., 2008,
2011). In between the striations, the average ion drift velocity is $v_{d,i}$ = 95 m/s = 0.22 $M_{th}$. The average drift velocity over all times is 165 m/s = 0.39 $M_{th}$, which is just large enough to begin inducing anisotropy in the particle distribution. The average value within the ionization waves (times noted by the green boxes in figure 3) is $\langle v_{d,i}\rangle$ = 489 m/s = 1.14 $M_{th}$, with an average peak value of 1951 m/s = 4.05 $M_{th}$. Evidently, during the ionization waves the ion flow is fast enough to drive a transition to strongly ordered strings.
Accordingly, we are interested in investigating the effect that the increased
electric field has on the formation of ordered dust strings in the PK-4
experiment.
### 2.2 Dust and ion simulation
The plasma conditions shown in Figs. 2 and 3 are used to model the dynamics
and charging of the dust in a flowing plasma using the molecular dynamics code
DRIAD (Dynamic Response of Ions and Dust) (Matthews et al., 2019), which
solves the equations of motion of the ions and dust on their individual time
scales. Here we compare the dust dynamics given the time-averaged plasma
conditions (Case 1) to three cases where the electron and ion temperatures are
set by the temperatures between the ionization waves (denoted by the red
shaded regions in Fig. 3), but the electric field is increased to yield
different values of the ion drift speed, $M_{th}=v_{dr,i}/v_{th}$. In Case 2,
the average axial electric field without the ionization waves present is used
(denoted by the red shaded regions in Figs. 3 and 4). In Case 3, the electric
field averaged over the ionization waves (as indicated by the green boxes in
Fig. 3) is applied. In Case 4, the magnitude of the electric field is set by
the average of the half-max of the electric field in the ionization waves. In
all cases, the polarity switching of the DC electric field is set to 500 Hz
with a 50% duty cycle (modeling symmetric switching of the electrode
polarities) and the average plasma density is set to $n_{e}=n_{i}=2.1\times 10^{14}~\mathrm{m}^{-3}$. The electron and ion temperatures, time-varying axial electric
field $\tilde{E}$, and resultant time-varying ion drift velocity
$\tilde{v}_{dr,i}$ for each case are given in Table 1.
Case | 1 | 2 | 3 | 4
---|---|---|---|---
$T_{e}$ (eV,K) | 3.41, 39500 | 3.38, 39200 | 3.38, 39200 | 3.38, 39200
$T_{i}$ (eV,K) | 0.033, 380 | 0.025, 290 | 0.025, 290 | 0.025, 290
$v_{th,i}$ (m/s) | 489 | 424 | 424 | 424
$\tilde{E}$ (V/m) | 245 | 100 | 510 | 1000
$\tilde{v}_{dr,i}$ (m/s) | 165 | 93 | 429 | 719
$M_{th}$ | 0.34 | 0.22 | 1.01 | 1.69
$\langle Q_{d}\rangle$ ($e^{-}$) | 3898 | 3667 | 4191 | 4819
$\Delta$ ($\mu$m) | 396 | 392 | 401 | 402
$\langle r\rangle$ ($\mu$m) | 14.5 | 12.6 | 11.6 | 11.9
$\langle r\rangle/\Delta(\%)$ | 3.6 | 3.2 | 2.9 | 3.0
Table 1: Discharge conditions used in the ion and dust simulation and
calculated dust charge, inter-particle spacing within the chain, and average
radial displacement.
## 3 Dynamics of Ions and Dust
In each case, we simulate the motion of 20 dust grains (melamine formaldehyde)
with radius $a=3.43~{}\mu$m, which corresponds to a dust particle size available
in the PK-4 experiment. The dust particles are initially placed in a cloud
near the center of the simulation region, which has a radius of 1.5
$\lambda_{e}$ and length of 12 $\lambda_{e}$, where $\lambda_{e}=940~{}\mu$m
is the electron Debye length of the plasma calculated for Cases 2-4. The
equation of motion for the dust grains with mass $m_{d}$ and charge $Q_{d}$ is
given by
$m_{d}\frac{d\vec{v}_{d}}{dt}=\vec{F}_{dd}+\vec{F}_{id}+Q_{d}\tilde{E}+\nu^{2}Q_{d}r\hat{r}-\beta\vec{v}+\zeta(t).$
(5)
The forces between the dust particles $\vec{F}_{dd}$ are Coulomb interactions,
as the ions in the simulation provide the shielding, while the forces between
the dust and ions $\vec{F}_{id}$ are taken to be Yukawa interactions (Matthews
et al., 2019; Ashrafi et al., 2020). The ion-dust interactions are accumulated
over the elapsed ion timesteps and then averaged before calculating the dust
acceleration. The electric field $\tilde{E}$ is the axial electric field in
the DC plasma which switches direction with the polarity switching frequency.
There is a very strong confining force to keep the particles from the ends of
the simulation region where the ions are injected (the ions need to travel
approximately one Debye length to reach their equilibrium distribution). The
parabolic radial confinement potential approximates the electric field from
surrounding chains of charged dust particles, where the confining strength is $\nu^{2}\propto\bar{Q}/(4\pi\epsilon_{0}\Delta^{3})$; here $\bar{Q}$ and $\Delta$ are the average expected particle charge and particle separation, and a
constant of proportionality is used to account for the fact that there are
multiple chains providing the confinement. Depending on the number of nearest
neighbors assumed to participate in the confinement and the shielding length
of the interaction potential, this constant of proportionality can range from
$C=0.5-4.5$. Dust density wave experiments performed in the PK-4 in neon gas
at 40 Pa found an estimated particle charge of $Z_{d}\approx 2200$ for $a$ =
1.60 $\mu$m particles (Jaiswal et al., 2018); assuming the charge scales
linearly with the dust radius, the charge on a particle with radius
$a=3.43~{}\mu$m is estimated to be $Z_{d}\approx 4500$. The average
interparticle spacing, estimated from the number of particles visible in an
image frame from the PK-4 experiment, is $\Delta\approx$ 305 $\mu$m. In all
four cases simulated here, a fixed value of
$\nu^{2}=3.0\bar{Q}/(4\pi\epsilon_{0}\Delta^{3})=6.8\times 10^{5}~\mathrm{V\,m^{-2}}$ was
used. The neutral gas (density $n_{g}$ and molecular mass $m_{g}$) provides
both an energy sink and source with the neutral gas drag depending on the drag
coefficient $\beta$ = $(4\pi/3)\delta a^{2}n_{g}m_{g}\sqrt{8k_{B}T_{g}/\pi
m_{g}}$ (where $\delta$ is a material-dependent constant in the range of 1.0 -
1.44; here we used 1.44 to represent diffuse reflection with accommodation of
gas molecules from a non-conductor) and a Langevin thermostat set by
$\zeta=\sqrt{2\beta k_{B}T_{g}/\Delta t_{d}}$ (the dust time step $\Delta
t_{d}$ = 0.1 ms). The system is allowed to evolve for 1.8 s, at which time the
dust particles have reached their equilibrium configuration.
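Several of the quoted simulation parameters can be cross-checked directly from the stated inputs. The sketch below recovers the electron Debye length, the confinement strength $\nu^{2}$, and the Epstein drag coefficient $\beta$; the neutral-gas temperature (290 K) and pressure (40 Pa) are assumptions taken from the neon conditions cited above.

```python
import math

E    = 1.602176634e-19      # elementary charge (C)
EPS0 = 8.8541878128e-12     # vacuum permittivity (F/m)
KB   = 1.380649e-23         # Boltzmann constant (J/K)
M_NE = 20.18 * 1.66054e-27  # neon atomic mass (kg)

n_e   = 2.1e14              # plasma density (m^-3)
Te_eV = 3.38                # electron temperature between waves (eV)

# Electron Debye length: lambda_e = sqrt(eps0 * k_B * T_e / (n_e * e^2))
lam_e = math.sqrt(EPS0 * Te_eV * E / (n_e * E**2))
print(f"lambda_e = {lam_e*1e6:.0f} um")        # ~940 um, as quoted

# Radial confinement strength: nu^2 = C * Qbar / (4 pi eps0 Delta^3), C = 3.0
Qbar, Delta = 4500 * E, 305e-6
nu2 = 3.0 * Qbar / (4 * math.pi * EPS0 * Delta**3)
print(f"nu^2 = {nu2:.2e} V/m^2")               # ~6.8e5 V m^-2, as quoted

# Epstein drag: beta = (4 pi / 3) delta a^2 n_g m_g sqrt(8 k_B T_g / (pi m_g))
a, delta, Tg, P = 3.43e-6, 1.44, 290.0, 40.0   # Tg and P are assumed values
n_g  = P / (KB * Tg)                           # ideal-gas neutral density
beta = (4 * math.pi / 3) * delta * a**2 * n_g * M_NE \
       * math.sqrt(8 * KB * Tg / (math.pi * M_NE))
print(f"beta = {beta:.2e} kg/s")
```

The first two quantities land within about 1% of the quoted 940 $\mu$m and $6.8\times 10^{5}~\mathrm{V\,m^{-2}}$.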
The wakefield interactions are included self-consistently by solving the
equations of motion for the ions
$m_{i}\frac{d\vec{v}_{i}}{dt}=q_{i}\vec{E}+\vec{F}_{ii}+\vec{F}_{id},$ (6)
where the electric field consists of the confining electric field found within
a cylindrical cavity within a homogeneous distribution of background ions, as
well as the electric field in the DC plasma with polarity switching,
$\tilde{E}$. The ion-ion interactions $\vec{F}_{ii}$ are derived from a Yukawa
potential where the shielding is provided by the electrons, whereas the force
between the ions and dust $\vec{F}_{id}$ is taken to be Coulombic in nature.
This asymmetric treatment of the dust-ion forces has been shown to give a
reasonable quantitative agreement for the potential distribution and
interparticle forces (Piel, 2017). The ions reach equilibrium on a time scale
comparable to the ion plasma period
$2\pi/\omega_{i}=2\pi/\sqrt{n_{i}e^{2}/\epsilon_{0}m_{i}}=3.0\times 10^{-6}$
s, which is fast compared to the period of the polarity switching, 2 ms. The
effect of ion-neutral collisions is incorporated using the null collision
method (Donkó, 2011).
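The null-collision scheme can be summarized in a few lines: a candidate event is sampled at an energy-independent maximum frequency $\nu_{max}$, then accepted as a real collision with probability $\nu(v)/\nu_{max}$. The function names below are illustrative; the actual cross-section set is that of Donkó (2011).

```python
import math, random

def null_collision_step(v, dt, nu_max, nu_real):
    """One null-collision test for a single ion.

    nu_real(v) -- actual ion-neutral collision frequency at speed v;
    nu_max must bound nu_real(v) from above for all v.
    Returns True iff a *real* collision occurs during this timestep."""
    # Probability that *some* event (real or null) occurs in dt:
    p_event = 1.0 - math.exp(-nu_max * dt)
    if random.random() >= p_event:
        return False
    # An event occurred; accept it as real with ratio nu_real/nu_max,
    # otherwise it is a fictitious "null" collision and nothing happens.
    return random.random() < nu_real(v) / nu_max
```

The trick is that collision times can be sampled without re-evaluating the energy-dependent cross sections for every particle at every step.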
The charge on the dust grains is calculated self-consistently within the
plasma wakefield by summing the ion collisions over the elapsed ion timesteps
to determine the ion current. The electrons are assumed to have a Boltzmann
distribution and the electron current is set using orbital-motion-limited
(OML) theory.
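As a rough illustration of the charging scale, the collisionless OML floating potential can be found by balancing the thermal OML electron and ion currents. The sketch below assumes a quasineutral neon plasma with the between-wave temperatures of Table 1; as noted above, ion-neutral collisions reduce the actual charge well below this collisionless estimate.

```python
import math

E    = 1.602176634e-19
EPS0 = 8.8541878128e-12
ME   = 9.1093837015e-31
M_NE = 20.18 * 1.66054e-27   # neon ion mass (kg)

def oml_charge(a, Te_eV, Ti_eV, mi):
    """Collisionless OML surface potential (V) and charge number of a sphere.

    Balances thermal OML currents for n_e = n_i:
      sqrt(Te/m_e) exp(e phi / k Te) = sqrt(Ti/m_i) (1 - e phi / k Ti),
    solved by bisection in chi = -e phi / k Te."""
    r = math.sqrt((Te_eV / Ti_eV) * (mi / ME))
    f = lambda chi: r * math.exp(-chi) - (1.0 + chi * Te_eV / Ti_eV)
    lo, hi = 0.0, 20.0                 # f(lo) > 0, f(hi) < 0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    phi = -0.5 * (lo + hi) * Te_eV              # surface potential (V)
    Z = 4 * math.pi * EPS0 * a * abs(phi) / E   # charge number (units of e)
    return phi, Z

phi, Z = oml_charge(a=3.43e-6, Te_eV=3.38, Ti_eV=0.025, mi=M_NE)
print(f"phi = {phi:.2f} V, Z_OML = {Z:.0f} e")
```

This collisionless estimate (on the order of $10^{4}$ electron charges) sits well above the $\sim 4000\,e^{-}$ found in the simulation, consistent with the stated reduction by ion-gas collisions.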
## 4 Results
The resulting equilibrium dust charge and spatial configuration of the dust
are shown for the four cases in figure 5. The view shown is a projection into
the xz-plane, with the radial scale magnified to show the relative
displacement from the central axis. The ion-gas collisions cause the negative
charge of the particles in the chains (indicated by the marker color) to be
reduced from that predicted by OML theory, but the negative charge state
increases with the ion drift speed, as expected.
Figure 5: Final equilibrium charge and dust configuration for each case. Note
the scale in x is magnified to show the displacement from the central axis.
The color bar indicates the charge in units of elementary charge $e^{-}$. (a)
Case 1 $\langle Q_{d}\rangle$ = 3900 $e^{-}$, (b) Case 2 $\langle
Q_{d}\rangle$ = 3670 $e^{-}$, (c) Case 3 $\langle Q_{d}\rangle$ = 4190
$e^{-}$, (d) Case 4 $\langle Q_{d}\rangle$ = 4820 $e^{-}$.
The degree of order in the chain is evaluated using the linear pair
correlation function $g(r)$, which was calculated at each dust time step and
then averaged over the last 5000 time steps (0.5 s). The results are shown in
figure 6. In general, the order within the chain increases as the electric
field is increased (Cases 2-4). Case 2 (figure 6b) shows very little order
beyond the third peak. This is a clear indication that the enhanced wakes due
to the strong ion flow in the ionization waves contribute to the formation of
ordered chains. The fully time-averaged condition, Case 1, figure 6(a), leads
to a configuration which is more ordered than the thermal plasma without
ionization waves (Case 2), but less ordered than the other two cases.
Interestingly, the case with the highest degree of order is not that with the
greatest electric field and resultant ion flow, but Case 3, figure 6(c), which
employs the electric field averaged over the ionization waves.
Figure 6: Pair correlation functions averaged over 0.5 s for each case. (a)
Case 1, (b) Case 2, (c) Case 3, (d) Case 4.
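The linear pair correlation used here can be computed directly from the particle positions along the chain; a minimal sketch, in which the bin width and the uniform-chain normalization are illustrative choices:

```python
import numpy as np

def linear_pair_correlation(z, dr=10e-6, r_max=2e-3):
    """1D pair correlation g(r) for positions z (m) along a chain.

    Normalized by the pair count expected for an uncorrelated (uniform)
    chain of the same line density, so g(r) -> 1 for an ideal gas."""
    z = np.sort(np.asarray(z, dtype=float))
    L = z[-1] - z[0]                       # chain length
    n = len(z) / L                         # line density
    iu = np.triu_indices(len(z), k=1)
    seps = np.abs(z[:, None] - z[None, :])[iu]
    hist, edges = np.histogram(seps, bins=np.arange(0.0, r_max + dr, dr))
    r = 0.5 * (edges[:-1] + edges[1:])     # bin centers
    # Expected pairs per bin for a uniform chain (finite-size corrected):
    expected = len(z) * n * dr * (1.0 - r / L)
    return r, np.where(expected > 0, hist / expected, 0.0)

# A perfect chain with 300 um spacing gives its first peak at r = 300 um:
z = np.arange(21) * 300e-6
r, g = linear_pair_correlation(z)
```

Sharp, well-separated peaks at multiples of the lattice spacing, as in Figure 6(c), indicate a strongly ordered chain.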
Some clues to this order can be found by examining the ion density at
equilibrium and the overall electric potential of the system. In figure 7 and
figure 8, the ion density and the electric potential are shown for a slice in
the xz-plane. As shown in figure 7, each dust particle attracts a cloud of
ions. In Cases 1 and 2 with the averaged plasma conditions, a distinct ion
cloud surrounds each particle. As the electric field is increased in Cases
3-4, the ion cloud becomes elongated in the axial direction and the cloud
around a grain begins to merge with that of neighboring grains. The increased
particle charge, in addition to the increased ion flow speed, concentrates the
ions in a high-density ridge along the dust string.
Figure 7: Final equilibrium ion density, where the colorbar indicates the
density in units of the background density $n_{i}$/$n_{i0}$. The ion densities
are shown for a slice through the xz-plane averaged over 20 polarity cycles
(40 ms). (a) Case 1, (b) Case 2, (c) Case 3, (d) Case 4.
Finally, the combined electric potential from the ions and dust is shown at
equilibrium in figure 8. Note that these figures are zoomed in on the central
portion of the chain. The potentials are averaged over 20 polarity cycles
(0.04 s). The potential is measured with respect to the maximum potential just
upstream/downstream of the dust string at $z\approx\pm$ 4 mm. Profiles of the
total potential along the axial direction are compared in figure 9 just above
the dust string (in the radial direction) at $x=0.2$ mm (figure 9a) and along
the center of the dust chain at $x=0.0$ mm (figure 9b). Note that in Cases 1
and 2 the overall potential is dominated by the dust grains and is negative
over much of the region surrounding the string. In Case 3, the potential is
slightly positive just to the outside of the dust chain. In Case 4, an
alternating positive/negative potential structure starts to emerge along the
length of the chain.
Figure 8: Equilibrium electric potential, where the colorbar indicates the
difference from the maximum positive potential upstream/downstream of the
strings in mV. The potentials are shown for a slice through the xz-plane
averaged over 20 polarity cycles (40 ms). (a) Case 1, (b) Case 2, (c) Case 3,
(d) Case 4. Figure 9: Equilibrium electric potential along the axial
direction averaged over 20 polarity cycles (40 ms). In (a), the profile is
shown just outside the dust string along $x=0.2$ mm, in (b) the profile is
shown along the center of the dust string at $x=0.0$ mm. The maximum positive
potential between the dust grains are marked with symbols for each of the four
cases.
## 5 Discussion
It is expected that the radial confinement should be proportional to
$Q_{d}^{2}$, assuming that the radial confinement is due to the interaction
between neighboring strings. As given in Eq. 5, the magnitude of the radial
confining force is $\nu^{2}Q_{d}r$. For simplicity, the constant $\nu^{2}$ was
set to $6.8\times 10^{5}~\mathrm{V\,m^{-2}}$ for all of the cases, resulting in a restoring force proportional to $Q_{d}$. Thus the radial restoring force used can be considered underpredicted for Cases 3 and 4 and overpredicted for Case 2, relative to Case 1. The average radial position of
all the particles in each chain as a function of time is shown in figure 10.
After the initial Coulomb expansion of the dust cloud at the beginning of the
simulation, the particles all settle near the z-axis. As expected, the case
with the largest average particle charge (Case 4) experiences the greatest
radial restoring force and reaches the equilibrium radial position most
quickly, followed by the other cases in order of decreasing average particle
charge. However, even though the particles in Cases 3 and 4 have the greatest
average charge, these chains have the smallest average radial displacement,
representing better string structure. Notably, Case 1, with the time-averaged
plasma conditions, has the greatest average radial displacement. This is a
clear indication that the ion focusing produced by the strong axial electric
field in Cases 3 and 4 allows for a smaller inter-particle spacing within the
string, despite the increased particle charge, and enhances string alignment.
Figure 10: Average radial displacement of particles in the numerical
simulation. The dashed lines indicate the average radial displacement over the
last 0.6 s, after all of the particles have reached their equilibrium
configuration. At equilibrium, the average radial displacements are 14.5
$\mu$m, 12.6 $\mu$m, 11.6 $\mu$m, and 11.9 $\mu$m for cases 1-4, respectively.
For comparison, data from Campaign 4 of the PK-4 experiment, performed in February 2017, are shown in figure 11a: chains consisting of 6.86 $\mu$m-diameter particles observed in neon gas at 45 Pa. The
particles were trapped by the polarity-switched discharge (current 1.0 mA and
frequency 500 Hz) with a duty cycle of 0.72, in order to compensate for the
residual gas flow. A duty cycle of 0.72 corresponds to an asymmetric AC mode
with 72% of the cycle at positive voltage and 28% at negative voltage. The
linear pair correlation functions for five different chains (marked by the different symbols) were calculated and averaged over 70 frames (1.0 s), as shown in figure 11b. Qualitatively, the pair correlation functions most
closely resemble that shown for Case 3 (figure 6c) in that there are distinct,
separate peaks out to the position of the sixth-nearest neighbor. The average
inter-particle spacing for the five chains is
$\Delta=270,282,281,270,277~{}\mu$m, from top to bottom, respectively,
calculated from the first peak in $g(r)$. The average radial displacements of a chain’s particles, measured as the perpendicular distance from a linear fit to the positions of the particles in the chain, are $\langle r\rangle=11.6,20.4,16.8,13.2,17.8~{}\mu$m. In the experiment, the average
inter-particle spacing within the chain is smaller, and average radial
displacement is larger, than that found for the four cases in the numerical
model (see Table 1), such that $\langle r\rangle/\Delta$ = 4.3, 7.2, 6.0, 4.9,
6.4% for each chain, respectively. This suggests that the particle charge in
the experiment may be less than estimated, possibly due to the fact that the
dust density is great enough to deplete the electrons in the vicinity of the
dust cloud. This would result in stronger ion wake potentials along the chain
axis and weaker repulsion between neighboring chains, allowing the particles
more freedom for radial displacements. Another possible explanation for the
observed larger average radial displacement of the dust particles in the
experiment, as compared to the simulation, is that the asymmetric duty cycle
used in the experiment led to asymmetric ion focusing around the dust grains.
This could produce a stronger positive wake on one side of the dust, allowing
smaller intra-chain particle spacing, while weakening the radial restoring
force and resulting in less stable chains.
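The radial-displacement metric used above, the perpendicular distance of each particle from a straight line fitted to the chain, is straightforward to compute; a minimal sketch:

```python
import numpy as np

def mean_radial_displacement(z, x):
    """Mean perpendicular distance of particles at (z, x) from the straight
    line fitted (least squares) through their positions."""
    z, x = np.asarray(z, dtype=float), np.asarray(x, dtype=float)
    m, b = np.polyfit(z, x, 1)                # fit x ~ m*z + b
    # Perpendicular distance from the line m*z - x + b = 0:
    return float(np.mean(np.abs(m * z - x + b) / np.hypot(m, 1.0)))
```

For a perfectly straight chain this returns zero; larger values indicate particles buckling away from the chain axis.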
Figure 11: (a) Chains of dust particles observed in the PK-4 experiment.
Symbols mark particles in five different chains which remained intact over the
full time period. (b) Linear pair correlation function for each chain marked
in (a), averaged over 70 frames (1.0 s). Each line is successively offset by
2.5 for clarity.
## 6 Conclusion
A simulation of dust dynamics within a DC discharge plasma was used to
investigate the role of strong electric fields created by ionization waves on
the formation of chain-like dust structures within the plasma. A PIC/MCC code
was used to determine the plasma conditions within the discharge tube, which
were used to set the initial conditions and boundary conditions for an N-body
simulation resolving the motion of the ions and the dust. The PIC/MCC
simulation revealed that there are very strong variations in the plasma
conditions on the microsecond scale which result in a large axial electric
field with a peak magnitude of about 2000 V/m, about 20 times greater than the
background value between the ionization waves. Simulations of dust charging
and dynamics show that time-averaged plasma temperatures and axial electric
field lead to a weakly ordered string structure, indicating that the time-averaged conditions do not fully capture the plasma conditions responsible for chain formation. However, simulations using the
plasma temperatures and densities between the ionization waves with an applied
axial electric field show that the order within the string increases with the
electric field strength. The numerical results most closely resemble data from
the PK-4 experiment when the average electric field during the ionization
waves is applied. It appears that the enhanced electric field associated with
the ionization waves could play an important role in generating the string-
like structures observed in the PK-4 experiment.
These simulations were run assuming constant plasma conditions including
electron and ion temperatures and number densities. Future work will examine
the effect of the time-varying plasma parameters calculated from the PIC/MCC
simulation on the dust charging and dynamics.
All authors gratefully acknowledge the joint ESA – Roscosmos “Experiment
Plasmakristall-4” on-board the International Space Station. The microgravity
research is funded by the space administration of the Deutsches Zentrum für
Luft- und Raumfahrt e.V. with funds from the federal ministry for economy and
technology according to a resolution of the Deutscher Bundestag under Grants
No. 50WM1441 and No. 50WM2044. A. M. Lipaev and A. D. Usachev were supported
by the Russian Science Foundation Grant No. 20-12-00365 and participated in
preparation of this experiment and its execution on board the ISS. L. S.
Matthews, T.W. Hyde and M. Rosenberg received support from NASA Grant number
1571701 and NSF Grant numbers 1740203 (LSM, TWH, and MR) and the US Department
of Energy, Office of Science, Office of Fusion Energy Sciences under award
number DE-SC-0021334 (LSM and TWH). P. Hartmann gratefully acknowledges
support from the Hungarian Research, Development and Innovation Office via
grant K-134462.
## References
* Ashrafi et al. (2020) Ashrafi, K. S., Yousefi, R., Chen, M., Matthews, L. S. & Hyde, T. W. 2020 Dust as probes: determining confinement and interaction forces. Physical Review E 102, 043210.
* Chen et al. (2016) Chen, M., Dropmann, M., Zhang, B., Matthews, L. S. & Hyde, T. W. 2016 Ion-wake field inside a glass box. Phys. Rev. E 94, 033201.
* Dietz et al. (2018) Dietz, C., Bergert, R., Steinmüller, B., Kretschmer, M., Mitic, S. & Thoma, M. H. 2018 fcc-bcc phase transition in plasma crystals using time-resolved measurements. Phys. Rev. E 97, 043203.
* Dietz et al. (2017) Dietz, C., Kretschmer, M., Steinmüller, B. & Thoma, M. 2017 Recent microgravity experiments with complex direct current plasmas. Contributions to Plasma Physics 58 (1), 21–29.
* Donkó (2011) Donkó, Z. 2011 Particle simulation methods for studies of low-pressure plasma sources. Plasma Sources Sci. Technol. 20, 024001.
* Du et al. (2012) Du, C.-R., Sütterlin, K. R., Jiang, K., Räth, C., Ivlev, A. V., Khrapak, S., Schwabe, M., Thomas, H. M., Fortov, V. E., Lipaev, A. M., Molotkov, V. I., Petrov, O. F., Malentschenko, Y., Yurtschichin, F., Lonchakov, Y. & Morfill, G. E. 2012 Experimental investigation on lane formation in complex plasmas under microgravity conditions. New Journal of Physics 14 (7), 073058.
* Hartmann et al. (2020) Hartmann, P., Rosenberg, M., Juhasz, Z., Matthews, L. S., Sanford, D. L., Vermillion, K., Reyes, J. C. & Hyde, T. W. 2020 Ionization waves in the PK-4 direct current neon discharge. Plasma Sources Sci. Technol. 29 (11), 115014.
* Hutchinson (2011) Hutchinson, I. H. 2011 Nonlinear collisionless plasma wakes of small particles. Physics of Plasmas 18 (3), 032111.
* Hutchinson (2012) Hutchinson, I. H. 2012 Intergrain forces in low-mach-number plasma wakes. Phys. Rev. E 85, 066409.
* Ivlev et al. (2008) Ivlev, A. V., Morfill, G. E., Thomas, H. M., Räth, C., Joyce, G., Huber, P., Kompaneets, R., Fortov, V. E., Lipaev, A. M., Molotkov, V. I., Reiter, T., Turin, M. & Vinogradov, P. 2008 First observation of electrorheological plasmas. Phys. Rev. Lett. 100, 095003.
* Ivlev et al. (2011) Ivlev, A. V., Thoma, M. H., Räth, C., Joyce, G. & Morfill, G. E. 2011 Complex plasmas in external fields: The role of non-hamiltonian interactions. Phys. Rev. Lett. 106, 155001.
* Jaiswal et al. (2018) Jaiswal, S., Pustylnik, M. Y., Zhdanov, S., Thomas, H. M., Lipaev, A. M., Usachev, A. D., Molotkov, V. I., Fortov, V. E., Thoma, M. H. & Novitskii, O. V. 2018 Dust density waves in a dc flowing complex plasma with discharge polarity reversal. Physics of Plasmas 25 (8), 083705.
* Khrapak et al. (2012) Khrapak, S. A., Tolias, P., Ratynskaia, S., Chaudhuri, M., Zobnin, A., Usachev, A., Rau, C., Thoma, M. H., Petrov, O. F., Fortov, V. E. & Morfill, G. E. 2012 Grain charging in an intermediately collisional plasma. EPL (Europhysics Letters) 97 (3), 35001.
* Kompaneets et al. (2016) Kompaneets, R., Morfill, G. E. & Ivlev, A. V. 2016 Wakes in complex plasmas: A self-consistent kinetic theory. Phys. Rev. E 93, 063201.
* Kong et al. (2011) Kong, J., Hyde, T. W., Matthews, L., Qiao, K., Zhang, Z. & Douglass, A. 2011 One-dimensional vertical dust strings in a glass box. Phys. Rev. E 84, 016411.
* Kong et al. (2014) Kong, J., Qiao, K., Matthews, L. S. & Hyde, T. W. 2014 Interaction force in a vertical dust chain inside a glass box. Phys. Rev. E 90, 013107.
* Kwon et al. (2015) Kwon, S. H., Piao, S. H. & Choi, H. 2015 Electric field-responsive mesoporous suspensions: A review. Nanomaterials 5 (4), 2249–2267.
* Liu et al. (2018) Liu, B., Goree, J., Pustylnik, M. Y., Thomas, H. M., Fortov, V. E., Lipaev, A. M., Usachev, A. D., Molotkov, V. I., Petrov, O. F. & Thoma, M. H. 2018 Particle velocity distribution in a three-dimensional dusty plasma under microgravity conditions. AIP Conference Proceedings 1925 (1), 020005.
* Matthews et al. (2019) Matthews, L. S., Sanford, D. S., Kostadinova, E., Ashrafi, K. S., Guay, E. & Hyde, T. W. 2019 Dust charging in dynamic ion wakes. Physics of Plasmas 27, 023703.
* Piel (2017) Piel, A. 2017 Molecular dynamics simulations of ion flows around dust particles. Physics of Plasmas 24 (3), 033712.
* Polyakov et al. (2017) Polyakov, D. N., Shumova, V. V. & Vasilyak, L. M. 2017 Transformations of dust structures in glow dc discharge in neon: effect of gas temperature and discharge current. Plasma Sources Science and Technology 26 (8), 08LT01.
* Pustylnik et al. (2016) Pustylnik, M. Y., Fink, M. A., Nosenko, V., Antonova, T., Hagl, T., Thomas, H. M., Zobnin, A. V., Lipaev, A. M., Usachev, A. D., Molotkov, V. I., Petrov, O. F., Fortov, V. E., Rau, C., Deysenroth, C., Albrecht, S., Kretschmer, M., Thoma, M. H., Morfill, G. E., Seurig, R., Stettner, A., Alyamovskaya, V. A., Orr, A., Kufner, E., Lavrenko, E. G., Padalka, G. I., Serova, E. O., Samokutyayev, A. M. & Christoforetti, S. 2016 Plasmakristall-4: New complex (dusty) plasma laboratory on board the international space station. Review of Scientific Instruments 87 (9), 093505.
* Robertson & Sternovsky (2003) Robertson, S. & Sternovsky, Z. 2003 Monte carlo model of ion mobility and diffusion for low and high electric fields. Physical Review E 67, 046405.
* Schwabe et al. (2019) Schwabe, M., Rubin-Zuzic, M., Räth, C. & Pustylnik, M. 2019 Image registration with particles, examplified with the complex plasma laboratory pk-4 on board the international space station. Journal of Imaging 5 (3), 39.
* Skullerud & Larsen (1990) Skullerud, H. R. & Larsen, P. H. 1990 Mobility and diffusion of atomic helium and neon ions in their parent gases. Journal of Physics B: Atomic, Molecular and Optical Physics 23 (6), 1017.
* Sütterlin et al. (2009) Sütterlin, K. R., Wysocki, A., Ivlev, A. V., Räth, C., Thomas, H. M., Rubin-Zuzic, M., Goedheer, W. J., Fortov, V. E., Lipaev, A. M., Molotkov, V. I., Petrov, O. F., Morfill, G. E. & Löwen, H. 2009 Dynamics of lane formation in driven binary complex plasmas. Phys. Rev. Lett. 102, 085003.
* Thomas et al. (2019) Thomas, H. M., Schwabe, M., Pustylnik, M. Y., Knapek, C. A., Molotkov, I, V., Lipaev, A. M., Petrov, O. F., Fortov, V. E. & Khrapak, S. A. 2019 Complex plasma research on the International Space Station. Plasma Physics and Controlled Fusion 61 (1).
* Trottenberg et al. (2006) Trottenberg, T., Block, D. & Piel, A. 2006 Dust confinement and dust-acoustic waves in weakly magnetized anodic plasmas. Physics of Plasmas 13, 042105.
* Usachev et al. (2004) Usachev, A., Zobnin, A., Petrov, O., Fortov, V., Thoma, M., Kretschmer, M., Ratynskaia, S., Quinn, R., Hoefner, H. & Morfill, G. 2004 The project “Plasmakristall - 4” (PK-4) – a dusty plasma experiment in a combined dc/rf (i) discharge plasma under microgravity conditions. Czechoslovak Journal of Physics 54 (3), C639.
* Usachev et al. (2016) Usachev, A. D., Zobnin, A. V., Petrov, O. F., Fortov, V. E., Thoma, M. H., Pustylnik, M. Y., Fink, M. A. & Morfill, G. E. 2016 Elongated dust clouds in a uniform dc positive column of low pressure gas discharge. Plasma Sources Science and Technology 25 (3), 035009.
* Usachev et al. (2018) Usachev, A. D., Zobnin, A. V., Shonenkov, A. V., Lipaev, A. M., Molotkov, V. I., Petrov, O. F., Fortov, V. E., Pustyl’nik, M. Y., Fink, M. A., Thoma, M. A., Thomas, H. M. & Padalka, G. I. 2018 Influence of dust particles on the neon spectral line intensities at the uniform positive column of dc discharge at the space apparatus “Plasma Kristall-4”. Journal of Physics: Conference Series 946 (1), 012143.
* Zobnin et al. (2016) Zobnin, A. V., Usachev, A. D., Lipaev, A. M., Petrov, O. F., Fortov, V. E., Pustylnik, M. Y., Thomas, H. M., Fink, M. A., Thoma, M. H. & Padalka, G. I. 2016 Transverse ionization instability of the elongated dust cloud in the gas discharge uniform positive column under microgravity conditions. Journal of Physics: Conference Series 774 (1), 012174.
Department of Electrical Engineering, Iran University of Science and Technology, Narmak, Tehran 16486-13114, Iran
# A deep learning approach for inverse design of the metasurface for dual-
polarized waves
Fardin Ghorbani, Javad Shabanpour<EMAIL_ADDRESS>, Sina Beyraghi, Hossein Soleimani, Homayoon Oraizi, Mohammad Soleimani
###### Abstract
Compared to the conventional metasurface design, machine learning-based
methods have recently created an inspiring platform for an inverse realization
of the metasurfaces. Here, we have used the Deep Neural Network (DNN) for the
generation of desired output unit cell structures for both TE and TM polarized
waves, whose working frequency can reach up to 45 GHz. To automatically
generate metasurfaces over wide frequencies, we deliberately design 8 annular
models; thus, each generated meta-atoms in our dataset can produce different
notches in our desired working frequency. Compared to the general approach,
whereby the final metasurface structure may be formed by any randomly
distributed “0” and “1”, we propose here a confined output configuration. By
confining the output, the number of calculations will be decreased and the
learning speed will be increased. Establishing a DNN-confined output
configuration based on the input data for both TE and TM polarized waves is
the novelty to generate the desired metasurface structure for dual orthogonal
polarizations. Moreover, we have demonstrated that our network can attain an
accuracy of 92%. Obtaining the final unit cell directly without any time-
consuming optimization algorithms for both TE and TM polarized waves, and high
average accuracy, open beneficial ways for the inverse metasurface design;
thus, the designer is required only to focus on the design goal.
## 1 Introduction
Metamaterials have attracted widespread attention due to their peculiar ability to modify the permittivity and permeability 1, 2, 27, 28, 29. Recently, many
novel functionalities have been implemented by metamaterials and their 2D
counterpart, metasurfaces, such as intelligent surfaces for communication3, 4,
real-time wavefront manipulation5, 6, 7, 8, perfect absorption9, 10, and
machine learning metasurface design11, 12.
However, all of these works are founded on conventional approaches including
trial-and-error methods, brute force optimization methods, and parameter
sweep, which are time-consuming processes. Therefore, to solve the above
challenges and to seek a fast, effective, and programmed route for designing a
metasurface, we have benefited from machine learning. Deep learning is an
efficient approach for learning the relationship between inputs and desired outputs from samples of past experience. To be more specific, deep
learning as a special section of machine learning can infer the basic rules
based on formerly specified data; then, for different assigned inputs, the
designed network can estimate reasonable decisions. With expanding development
of machine learning and its possible future applications to address some
important problems such as signal processing13 and through the wall imaging14,
we are now witnessing the opening of machine learning in wave-interaction
phenomena. Owing to its potential capacity to provide higher accuracy, less
design time, and enhance the productivity of a modeling procedure, machine
learning has been introduced in numerous electromagnetic phenomena, for
instance, all-dielectric metasurfaces15, antenna design16, 17, acoustic
metamaterials18, 19, and computational electromagnetics20, 21.
T. Cui et al. presented an inverse metasurface realization that can recognize
the internal principles between input metasurface structures and their
S-parameters with an accuracy of 76.5%22. Zhang et al. have proposed an approach
based on machine learning for designing anisotropic coding metasurfaces with a
connection between deep learning and BPSO to explore the ideal reflection
phases of two-unit cells for the desired target23. Shan et al. have introduced
conditional deep convolutional generative adversarial networks to code the
programmable metasurface for multiple beam steering, with accuracy higher than
94%24. A machine learning-based inverse metasurface design has been provided in25, which is capable of directly computing the output metasurface structure
by entering the sought design targets into the model. Recently, a double deep
Q-learning network has been introduced for distinguishing the ideal type of
material and optimizing a hologram structure to increase its efficiency26.
Here, benefiting from Deep Neural Network (DNN), we propose a network scheme
for automatic metasurface design for dual-polarized waves with an average
accuracy of up to 92%. Our network is capable of generating any desired
metasurface based on the input data for both TE and TM polarized waves, whose working frequency can reach up to 45 GHz. The works presented in 22, 25
can generate the output structure in the frequency range of 5 to 20 GHz and
16-20 GHz, respectively. To broaden the working frequency, we consider 8
annular models with the purpose of generating single or multiple resonances in
the desired working frequency. Besides, to enhance the speed of the training,
shorten the number of computations, and boost the effectiveness of our
network, the output of the network is confined; thus, the DNN should generate
the metasurface structure by employing the proposed 8 annular models. We
demonstrate that the accuracy of this network also reaches 91%. Consuming less
computational resources, having a wide frequency band, generating desired
output metasurface without resorting to optimization procedure, and working
for both TE and TM polarized waves make our method promising for boosting the
speed of computations and designs.
## 2 Results
### 2.1 Metasurface Design
Figure 1 shows the proposed 8 annular models, each composed of three layers: a copper pattern, a substrate (FR4 with permittivity of 4.2+0.025i and thickness of 1.5 mm), and a ground plane that blocks the incident electromagnetic waves. Each annular model is 1.6 mm wide and is composed of $8\times 8$ lattices labeled “1” and “0” to indicate the spaces with and without copper. Each meta-atom consists of $4\times 4$ arbitrarily distributed annular models, so the final unit cell is 6.4 mm wide. Therefore, each unit cell generated as an input of the DNN comprises $32\times 32$ lattices with a lattice length of 0.2 mm. The reason for using 8 annular patterns is to produce single or multiple resonances in the generated S-parameters over a broad frequency range, from 4 to 45 GHz. Since it is almost impossible to derive analytically the relationship between the input “0”/“1” matrix and its equivalent S-parameters, machine learning can be considered an encouraging solution to shorten the computational operations needed to obtain the ideal results.
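The mapping from a $4\times 4$ choice of annular models to the $32\times 32$ binary lattice can be sketched as follows. The ring-shaped placeholder patterns below are illustrative stand-ins only; the paper's actual copper layouts are those of Figure 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 8x8 binary stand-ins for the paper's 8 annular models
# (the real copper layouts are defined in Figure 1): pattern k here is
# simply a ring of growing radius on the 8x8 lattice.
def annular_pattern(k):
    y, xg = np.mgrid[0:8, 0:8]
    r = np.hypot(xg - 3.5, y - 3.5)
    return ((r >= k / 2.0 - 1.0) & (r < k / 2.0 + 1.0)).astype(int)

MODELS = [annular_pattern(k) for k in range(1, 9)]

def random_unit_cell():
    """Draw a 4x4 matrix of model indices (1..8) and tile the matching
    8x8 patterns into the 32x32 binary unit-cell lattice."""
    idx = rng.integers(1, 9, size=(4, 4))
    cell = np.block([[MODELS[i - 1] for i in row] for row in idx])
    return idx, cell

idx, cell = random_unit_cell()
```

Each such `cell` is one candidate unit cell whose "1"/"0" entries mark the presence or absence of copper on the 0.2 mm lattice.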
Figure 1: Diagram of the design procedure of the confined-output configuration and the 8 annular models.
### 2.2 Deep learning
Neural network algorithms have emerged as a tool for solving fundamental challenges, especially in optimization and artificial intelligence. A schematic representation of an artificial neuron is depicted in Figure 2, in which ${A_{n}}$ denotes the input neurons. Each input $A_{i}$ is multiplied by its associated weight $W_{i}$ before entering the summation. With $\phi(x)$ as an activation function, the output of this process is determined by:
$Y=\phi(\sum\limits_{i=1}^{n}W_{i}A_{i}+b)$ (1)
In the above equation, $b$ indicates the bias value. Generally, artificial neural networks are made of distinct layers: an input layer, an output layer, and hidden layers between them. As the number of hidden layers increases, the complexity of the network grows and the neural network becomes a deep neural network.
Figure 2: A sketch of an artificial neuron.
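Eq. (1) amounts to a weighted sum followed by an activation; a minimal NumPy transcription, with sigmoid as an example $\phi$:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neuron(A, W, b, phi=sigmoid):
    """Single artificial neuron, Eq. (1): Y = phi(sum_i W_i * A_i + b)."""
    return phi(np.dot(W, A) + b)
```

For instance, with zero weights and zero bias a sigmoid neuron outputs $\phi(0)=0.5$.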
### 2.3 Confined output configuration
In this paper, we have benefited from a DNN to find the intrinsic connections between the generated output metasurface and its S-parameter features. To establish our dataset, a collection of 2000 matrices forming the unit cells is created by means of the RAND function. We then use CST Microwave Studio to determine the reflection characteristics of each unit cell under illumination by both TE and TM polarized waves. By linking CST MWS with MATLAB, the reflection characteristics (resonance frequency, resonance depth, and resonance bandwidth) are saved in a database. Note that to obtain the S-parameters of the unit cells in periodic structures, periodic boundary conditions are applied along the x- and y-directions, while open boundary conditions are applied along the propagation direction of the incoming waves.
Table 1: Elaborate data of the confined-output DNN configuration.
Figure 3: Two examples of reflection amplitude for the confined-output DNN configuration. a,d) Final metasurface structure; b,e) simulated S-parameters under illumination of TE and c,f) TM polarized waves.
Figure 4: Two further examples of reflection amplitude for the confined-output DNN configuration. a,d) Final metasurface structure; b,e) simulated S-parameters under illumination of TE and c,f) TM polarized waves.
Table 2: Pre-determined TE and TM input targets (TE / TM) for the desired S-parameters demonstrated in Figure 3 and Figure 4.
Examples | resonance frequency (GHz) | resonance depth (dB) | resonance bandwidth (GHz)
---|---|---|---
Figure 3(a-c) | 6.5, 11.5, 24 / 6.5, 11.5 | -21.5, -22.5, -25 /-12, -21.5 | 0.3, 1, 0.6 / 0.2, 1
Figure 3(d-f) | 27.5 / 14 | -30 / -22 | 0.5 / 0.2
Figure 4(a-c) | 25.5, 40 / 38 | -12, -30 / -18 | 0.4, 0.6 / 0.5
Figure 4(d-f) | 24.5 / 10, 14.5, 33 | -34.5 / -18.5, -20, -14 | 0.5 / 0.2, 0.4, 0.3
To build each dataset entry, 16 numbers from 1 to 8 are drawn at random to create a $4\times 4$ matrix. Each number denotes one of the eight annular models; thus, the output of our established model is a $32\times 32$ matrix. In the training step, we produced 2000 pairs of S-parameters (TE and TM) and their corresponding metasurface structures (70% for training, the rest for testing). Each generated unit cell can produce up to eight resonances. Since we extract three features per resonance (resonance frequency, resonance depth, and resonance bandwidth), a vector of size 24 forms the input of our devised DNN. To enhance the network speed, minimize the volume of computations, and boost the effectiveness of the learning stage, we confined the output of the network, so that the final metasurface is formed using our 8 designed annular models. Note that to create the final output vector, the eight annular models are indicated by the digital codes “000” to “111”; thus, the output of the DNN is a vector of length 48. This confined-output configuration reduces the volume of computations while maintaining the accuracy of the network. The details of the confined-output configuration are outlined in Table 1.
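The 3-bit encoding of the confined output transcribes directly to code; the particular assignment of codes “000”–“111” to models 1–8 below is an assumption for illustration:

```python
import numpy as np

def encode_output(idx):
    """Map a 4x4 matrix of annular-model indices (1..8) to the DNN's
    confined output: each model as a 3-bit code '000'..'111', giving a
    binary vector of length 16 * 3 = 48."""
    bits = [(i - 1) >> s & 1 for i in idx.flatten() for s in (2, 1, 0)]
    return np.array(bits)

def decode_output(vec):
    """Inverse map: a 48-bit vector back to the 4x4 index matrix."""
    trip = vec.reshape(16, 3)
    idx = trip[:, 0] * 4 + trip[:, 1] * 2 + trip[:, 2] + 1
    return idx.reshape(4, 4)
```

At inference time, `decode_output` turns the 48 thresholded sigmoid outputs of the network back into the 16 model choices that assemble the unit cell.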
In our model, we have used dense and dropout layers alternately, as depicted in Figure 1. To avoid misdirecting the learning and to decrease the chance of overfitting, we randomly drop a specified number of neurons in the dropout layers. The dense layers, on the other hand, are defined such that each neuron is connected to all neurons of the previous layer. Throughout training, we use the Adam optimizer to repeatedly minimize the differences between the pre-determined and output data. The loss function is the mean square error (Eq. 2); once it reaches the specified criterion, training stops.
${\rm{MSE}}=\frac{1}{m}\sum\limits_{i=1}^{m}{{{({Y_{i}}-{{\hat{Y}}_{i}})}^{2}}}$
(2)
In the above equation, $m$, $Y_{i}$ and ${{{\hat{Y}}_{i}}}$ represent the number of data points, the observed values, and the predicted values, respectively. To increase the accuracy, we adopted the sigmoid activation function in layer 11 (see Table 1), since the desired DNN output is 0 or 1. Eqs. 3 and 4 give the ReLU and sigmoid activation functions.
$f(z)=\begin{cases}0&\text{for }z<0\\ z&\text{for }z\geq 0\end{cases}$ (3)
$\sigma(z)=\frac{1}{{1+{e^{-z}}}}$ (4)
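Eqs. (2)–(4) transcribe directly to code; the network itself uses the Keras built-ins, so the following NumPy versions are only a reference sketch:

```python
import numpy as np

def mse(y, y_hat):
    """Mean squared error, Eq. (2)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.mean((y - y_hat) ** 2)

def relu(z):
    """Eq. (3): f(z) = 0 for z < 0, z for z >= 0."""
    return np.maximum(z, 0.0)

def sigmoid(z):
    """Eq. (4): squashes to (0, 1), suited to the network's 0/1 output bits."""
    return 1.0 / (1.0 + np.exp(-z))
```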
When it comes to evaluating the model, different S-parameter design targets are supplied to determine whether the trained network can generate the corresponding metasurface structures. Our model is built with the TensorFlow and Keras frameworks and implemented in Python 3.8.0. Finally, once the design steps are completed, one only needs to enter the desired S-parameter features, and the designed DNN forms the final unit cell in accordance with the information learned during training.
Table 3: Detailed information of training and evaluation time, in addition to
the model size for confined output DNN architecture.
| Confined output network
---|---
Training time | 19 minutes
Evaluation | 0.038 sec
Model size | 6 MB
To validate the efficiency of our presented DNN, four different examples are provided in the following. The output metasurface matrix is generated from the input S-parameter data for both TE and TM polarized waves. The designated reflection information, detailed in Table 2, is given for all examples as [notch frequencies; notch depths; notch bandwidths]. Numbers before and after the slash ( / ) denote the desired input data under illumination by TE and TM linearly polarized waves, respectively.
Then, by entering the final generated matrices into full-wave simulation
software, we obtained the simulated reflection amplitude under the
illumination of dual orthogonal polarizations. As an example, we intend to design a metasurface whose S-parameters contain three resonances under TE polarization and two resonances below -10 dB under TM polarization at specified frequencies (see the first row of Table 2). Observe in Figure 3(b,c) that the full-wave simulation successfully reaches the design targets.
Figure 5: Diagrams of accuracy and mean square error over 5000 epochs for the confined-output DNN configuration.
Similarly, in the last example (see Figure 4(d-f)), a metasurface is designed such that its S-parameters contain one and three resonances at pre-determined frequencies under TE and TM polarization, respectively (see the last row of Table 2). The full-wave simulation results are in accordance with our design goals. Moreover, the accuracy and loss curves of our confined-output DNN architecture are depicted in Figure 5. In addition, the training and evaluation times and the model size of our proposed DNN are presented in Table 3. These results were obtained on Google Colab with a Tesla T4 GPU and 15 GB of RAM. As can be seen, the design time of our approach for generating a unit cell is 0.038 s. Therefore, compared to the conventional method, which takes about 700 to 800 minutes, our presented approach is much faster and more efficient. Note that we cannot directly compare our method with other inverse metasurface designs, since the GPUs and RAM employed differ.
Accordingly, we have shown that our machine-learning-based approach is a promising candidate for the inverse design of metasurfaces in terms of calculation repetitions, accuracy, and speed. Establishing a DNN confined-output configuration driven by input data for both TE and TM polarized waves is the key innovation that enables forming a specified metasurface structure for dual orthogonal polarizations.
## 3 Conclusion
In conclusion, adopting a deep neural network, we have presented an inverse
metasurface design approach for dual orthogonal polarized waves. By merely
specifying four design goals for both TE and TM cases (number of notches,
notch frequencies, notch depths, and notch bandwidths), the designed DNN can
create the final metasurface structure in the output. To broaden the working
frequency, we have considered 8 annular models; so that the created unit cells
in the output can produce different resonances over wide frequencies up to 45
GHz. The numerical simulations illustrate that our DNN can successfully
generate the desired metasurface compared to our pre-determined designed
targets, with an average accuracy higher than 91%. We have shown that the speed of our presented approach is much higher than that of conventional metasurface design, and by proposing the confined-output configuration, our approach provides an encouraging platform as an efficient technique with respect to computational repetitions, training and evaluation time, and average accuracy. We believe that our deep neural network approach is a suitable candidate for inverse metasurface design for dual-polarized waves and complex wave-interaction phenomena.
### 3.1 CONFLICT OF INTEREST
The authors declare that there is no conflict of interest.
### 3.2 DATA AVAILABILITY
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
## References
* 1 Rajabalipanah, H., Abdolali, A., Shabanpour, J., Momeni, A. & Cheldavi, A. Asymmetric spatial power dividers using phaseamplitude metasurfaces driven by huygens principle. ACS Omega 4, 14340–14352 (2019).
* 2 Shabanpour, J. Full manipulation of the power intensity pattern in a large space-time digital metasurface: from arbitrary multibeam generation to harmonic beam steering scheme. Ann. Phys. 532, 2000321 (2020).
* 3 Di Renzo, Marco, et al. ”Smart radio environments empowered by reconfigurable intelligent surfaces: How it works, state of research, and the road ahead.” IEEE Journal on Selected Areas in Communications 38.11 (2020): 2450-2525.
* 4 Di Renzo, Marco, et al. ”Reconfigurable intelligent surfaces vs. relaying: Differences, similarities, and performance comparison.” IEEE Open Journal of the Communications Society 1 (2020): 798-807.
* 5 Shabanpour, J. Programmable anisotropic digital metasurface for independent manipulation of dual-polarized THz waves based on a voltage-controlled phase transition of VO 2 microwires. J. Mater. Chem. 8, 7189–7199 (2020).
* 6 Shabanpour, J., Beyraghi, S. & Cheldavi, A. Ultrafast reprogrammable multifunctional vanadium-dioxide-assisted metasurface for dynamic THz wavefront engineering. Sci. Rep. 10, 1–14 (2020).
* 7 Javad Shabanpour, Sina Beyraghi, Fardin Ghorbani, and Homayoon Oraizi, ”Implementation of conformal digital metasurfaces for THz polarimetric sensing,” OSA Continuum 4, 1372-1380 (2021)
* 8 Shabanpour, Javad, et al. ”Real-time multi-functional near-infrared wave manipulation with a 3-bit liquid crystal based coding metasurface.” Optics Express 29.10 (2021): 14525-14535.
* 9 Landy, N. I., Sajuyigbe, S., Mock, J. J., Smith, D. R. & Padilla, W. J. Perfect metamaterial absorber. Phys. Rev. Lett. 100, 207402 (2008).
* 10 Shabanpour, J., Beyraghi, S. & Oraizi, H. Reconfigurable honeycomb metamaterial absorber having incident angular stability. Sci. Rep. 10, 1–8 (2020).
* 11 Gu, M., & Goi, E. Holography enabled by artificial intelligence. In Holography, Diffractive Optics, and Applications X (Vol. 11551, p. 1155102). International Society for Optics and Photonics (2020).
* 12 Ghorbani, Fardin, et al. ”Deep neural network-based automatic metasurface design with a wide frequency range.” Scientific Reports 11.1 (2021): 1-8.
* 13 Ghorbani, Fardin, et al. ”EEGsig machine learning-based toolbox for End-to-End EEG signal processing.” arXiv preprint arXiv:2010.12877 (2020).
* 14 Ghorbani, Fardin, Hossein Soleimani, and Mohammad Soleimani. ”Deep Learning Approach for Target Locating in Through-the-Wall Radar under Electromagnetic Complex Wall.” arXiv preprint arXiv:2102.07990 (2021).
* 15 An, S. et al. A deep learning approach for objective-driven all-dielectric metasurface design. ACS Photon. 6, 3196–3207 (2019).
* 16 Cui, L., Zhang, Y., Zhang, R. & Liu, Q. H. A modified efficient KNN method for antenna optimization and design. IEEE Trans. Antennas Propag. 68, 6858–6866 (2020).
* 17 Sharma, Y., Zhang, H. H. & Xin, H. Machine learning techniques for optimizing design of double T-shaped monopole antenna. IEEE Trans. Antennas Propag. 68, 5658–5663 (2020).
* 18 Bacigalupo, Andrea, et al. ”Machine-learning techniques for the optimal design of acoustic metamaterials.” Journal of Optimization Theory and Applications (2019): 1-24.
* 19 Wu, Rih-Teng, et al. ”Design of one-dimensional acoustic metamaterials using machine learning and cell concatenation.” Structural and Multidisciplinary Optimization (2021): 1-25.
* 20 Yao, He Ming, et al. ”Machine learning methodology review for computational electromagnetics.” 2019 International Applied Computational Electromagnetics Society Symposium-China (ACES). Vol. 1. IEEE, 2019.
* 21 Yao, He Ming, E. I. Wei, and Lijun Jiang. ”Two-step enhanced deep learning approach for electromagnetic inverse scattering problems.” IEEE Antennas and Wireless Propagation Letters 18.11 (2019): 2254-2258.
* 22 Qiu, T. et al. Deep learning: a rapid and efficient route to automatic metasurface design. Adv. Sci. 6, 1900128 (2019).
* 23 Zhang, Q. et al. Machine-learning designs of anisotropic digital coding metasurfaces. Adv. Theory Simul. 2, 1800132 (2019).
* 24 Shan, T., Pan, X., Li, M., Xu, S. & Yang, F. Coding programmable metasurfaces based on deep learning techniques. IEEE J. Emerg. Sel. Topics Power Electron 10, 114–125 (2020).
* 25 Shi, X., Qiu, T., Wang, J., Zhao, X. & Qu, S. Metasurface inverse design using machine learning approaches. J. Phys. D. 53, 275105 (2020).
* 26 Sajedian, I., Lee, H. & Rho, J. Double-deep Q-learning to increase the efficiency of metasurface holograms. Sci. Rep. 9, 1–8 (2019).
* 27 Rajabalipanah, Hamid, et al. ”Addition theorem revisiting for phase/amplitude-encoded metasurfaces: Asymmetric spatial power dividers.” arXiv preprint arXiv:1901.04063 (2019).
* 28 Shabanpour, Javad, and Homayoon Oraizi. ”Some useful approximations for calculation of directivities of multibeam power patterns of large planar arrays.” arXiv preprint arXiv:2006.10423 (2020).
* 29 Gholamian, Meysam, Javad Shabanpour, and Ahmad Cheldavi. ”Highly sensitive quarter-mode spoof localized plasmonic resonator for dual-detection RF microfluidic chemical sensor.” Journal of Physics D: Applied Physics 53.14 (2020): 145401.
# The role of tunneling in the ionization of atoms by ultrashort and intense
laser pulses
Gabriel M. Lando<EMAIL_ADDRESS>Université Paris-Saclay,
CNRS, LPTMS, 91405 Orsay, France
###### Abstract
Classically allowed transport is shown to compete with quantum tunneling
during the ionization of atoms by ultrashort and intense laser pulses, despite
Keldysh parameters smaller than unity. This is done by comparing exact
probability densities with the ones obtained from purely classical propagation
using the Truncated Wigner Approximation. Not only is classical transport
capable of moving trajectories away from the core, but it can also furnish
ionization probabilities of the same order as the quantum ones for intensities
currently employed in experiments. Our results have implications ranging from
a conceptual correction to semiclassical step models in strong-field physics
to the ongoing debate about tunneling time measurements in attoclock
experiments.
_Introduction –_ Tunneling is one of the most characteristically quantum
phenomena in nature. Often depicted as a near-magical violation of classical
conservation laws, it is the basic mechanism behind several physical and
chemical processes and technologies, such as: the stability of star cores [1],
the decay of radioactive elements [2], Josephson junctions [3], the transfer
of hydrogen atoms in chemical reactions [4, *hammes2006hydrogen, *Lowdin1963,
*Trixler2013], photosynthesis [8], electron transfer in proteins [9] and flash
memory cards [10, *Bez2003], and a multitude of others. It is then remarkable
that, to this day, the use of the term “tunneling” is not uniform among all
areas of physics and chemistry. This is especially true when the phenomenon
takes place in time domain, as opposed to the static tunneling tails
penetrating classically forbidden regions in quantum eigenstates [12].
For conservative systems with a single degree of freedom (DoF), it is tempting
to interpret time-dependent tunneling as the transport of portions of the wave
function through a tall potential barrier. However, this description does not
take momentum into account, and it is well-known that a more accurate picture
is given in terms of the Wigner function [13]. Here, it is possible to
directly visualize the full energy barrier in phase space, which forms a
separatrix, instead of its misleading projection in position space only. One
then sees that the Wigner function naturally extends over the momentum
direction and presents high-energy tails [13, 12, 14, 15]. If points are
sampled on these tails and evolved according to Hamilton’s equations, they
will emerge on the other side of the barrier through classically allowed
transport. This contrasts with the case of tunneling, where transport is
necessarily classically forbidden [16, 17, 18, 19].
The matter of whether classically allowed or forbidden transport is the
dominant mechanism for some particular phenomenon is relatively
straightforward for the simple systems in the above paragraph, but becomes a
lot more blurred in the high-dimensional or time-dependent case: The former is
deceptive because splitting phase space into disjoint regions is harder in
higher dimensions [20, 21]; The latter involves non-stationary potential
barriers through which energy is added to and/or subtracted from the system,
possibly triggering chaos even for a single DoF [22, 23, 24]. For both cases a
proper definition of tunneling is quite challenging, although progress has
been recently made in this direction [19].
The interactions between atoms and ultrashort, intense laser pulses belong to
the category of time-dependent processes with strongly chaotic classical
dynamics. Here, the mechanism referred to as tunneling ionization (TI) lies at
the heart of a plethora of phenomena, forming the first stage of several
semiclassical step models [25, 26]. There are good reasons to question the
role of tunneling in this context, as the overall classical effect of the
pulse is a chaotic shaking of classical trajectories, which are then perfectly
able to escape the core _via_ classically allowed ionization (CAI) [27]. The
number of trajectories scattered by CAI is even enough to reproduce higher-
harmonics generation (HHG) spectra semiclassically, despite the propagated
state being approximately the atom’s ground state [28]. Since semiclassical
calculations rely completely on classical trajectories, this would be
impossible if the transport pathways were truly dominated by TI [16].
In this manuscript, we do not make the standard distinction between vertical
(multi-photon) and horizontal (tunneling) ionization channels [29]. Instead,
we distinguish CAI from TI by direct comparisons with purely classical
simulations, where the distribution of initial trajectories does not come from
any tweaking [30, 31, 32], but from the system’s _true_ ground state. Since
tunneling is stricter and easier to spot in systems with a single DoF, our
simulations are performed using the improved soft-core potential of Majorosi
_et al_ [33] coupled to an intense, ultrashort and linearly polarized (LP)
laser pulse in the near-infrared range. Contrary to intuition, but in line
with [20, 28], we demonstrate that CAI and TI are deeply intertwined, and
possibly inseparable. Among the implications of these results lie a correction
to semiclassical step models, since ionization is mostly unrelated to
tunneling, and an added difficulty to the ongoing debate on “tunneling” time
measurements [34, 35, 30, 36, 37, 32, 38, 39, 40].
_Model system–_ One-dimensional systems allow for visualization ease, but
often disagree with full 3 DoF simulations – at least quantitatively. The
popular soft-core potential, for example, has been shown to largely
overestimate ground-state depletion and other expectation values [33].
Nevertheless, recent work by Majorosi _et al_ has shown that the improved
soft-core system (in atomic units)
$H_{\rm{iSC}}(p_{x},x)=p_{x}^{2}/2-Z\left(Z^{-2}+4x^{2}\right)^{-1/2}\quad$
(1)
offers strikingly accurate estimates when compared to simulations in full
dimensionality using the exact Coulomb potential [33]. We therefore adopt this
potential for hydrogen, setting $Z=1$, and couple it to a linearly polarized
laser pulse in the dipole approximation:
$V(x;t)=x\,\sqrt{I_{0}}f(t)\sin\omega t\quad,$ (2)
where the envelope is given by $f(t)=\exp[-((t-t_{0})/\tau)^{2}]$. The optical frequency of the pulse is taken as $\omega\approx 0.057\,\text{au}$ (wavelength $\approx 780\,\text{nm}$), lying in the near-infrared. The FWHM is
$\tau\approx 4\,\text{fs}$, such that centering the pulse at $t_{0}\approx
10\,\text{fs}$ covers it completely for $t\in[0,20]\,\text{fs}$. Intensities
$I_{0}$ will vary between $1$ and $4\times 10^{14}\,\text{W/cm}^{2}$. The
initial state is chosen as the exact ground state for (1), which we obtain
numerically by diagonalizing the hamiltonian on a position grid ranging from
$-1000$ to $1000\,\text{au}$, with step-size $\Delta x\approx 0.1\,\text{au}$.
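As a sketch, this diagonalization step can be reproduced with a three-point finite-difference Hamiltonian. A smaller grid ($\pm 50\,\text{au}$) than the paper's is used here to keep the example light; this suffices for the bound states, which decay within a few au of the core:

```python
import numpy as np

# Improved soft-core potential, Eq. (1) with Z = 1, in atomic units.
x = np.linspace(-50.0, 50.0, 1001)
dx = x[1] - x[0]
N = x.size
V = -1.0 / np.sqrt(1.0 + 4.0 * x**2)

# Hamiltonian with the kinetic term -0.5 d^2/dx^2 as a 3-point stencil.
H = np.diag(V + 1.0 / dx**2)
off = -0.5 / dx**2 * np.ones(N - 1)
H += np.diag(off, 1) + np.diag(off, -1)

E, U = np.linalg.eigh(H)        # eigenvalues in ascending order
psi0 = U[:, 0] / np.sqrt(dx)    # ground state, sum(|psi0|^2) * dx = 1
```

The lowest eigenvalue is negative (a bound state), while the top of the spectrum is dominated by positive, continuum-like grid states.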
We note that the substitution of (1) by any other typical atomic potential, as
well as performing simulations with more DoF, do not change the message
contained in this manuscript: We are concerned only with the differences
between classical and quantum, and dynamics due to LP pulses is essentially
restricted to a single DoF [33]. Another important aspect is that quantum-
classical agreement for systems that undergo tunneling is harder to achieve in
one DoF than in more DoF. This was already remarked for a static electric
field in [20], where the author correctly states that trajectory “leakage”,
which provides the trajectories necessary to reproduce ionization
semiclassically, is more abundant in two DoF than in one. Thus, our choice of
(1) as a model both simplifies interpretation and _overestimates_ the role
played by TI.
_Methods–_ Quantum time-evolution is performed by solving the time-dependent
Schrödinger equation, where a split-operator method [41] using Blanes and
Moan’s 6th order algorithm is employed [42]. The time-step chosen is $\Delta
t=0.2\,\text{au}$, corresponding to $1/550$ of an optical cycle. To mitigate
wave reflections, we use a $\cos^{1/8}$ boundary mask placed at 10% of the
grid’s extremities [43, 44].
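A single second-order (Strang) split-operator step looks as follows; the paper composes such steps into Blanes and Moan's 6th-order scheme [42], and the boundary mask is omitted from this sketch:

```python
import numpy as np

def split_step(psi, x, dt, potential, t):
    """One Strang (2nd-order) split-operator step for the 1D TDSE in
    atomic units: half-kick by V, free flight in momentum space via FFT,
    half-kick by V. `potential(x, t)` returns V(x; t) on the grid."""
    dx = x[1] - x[0]
    p = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)   # momentum grid
    psi = np.exp(-0.5j * dt * potential(x, t)) * psi
    psi = np.fft.ifft(np.exp(-0.5j * dt * p**2) * np.fft.fft(psi))
    psi = np.exp(-0.5j * dt * potential(x, t + dt)) * psi
    return psi
```

Because every factor is unitary, the norm of the state is conserved to machine precision at each step.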
Quantum dynamics in phase space, _i.e._ the time evolution of the Wigner
function
$W_{0}(p_{x},x)=\frac{1}{\pi\hbar}\int_{\mathbb{R}}\text{d}\gamma\,\langle
x-\gamma|\psi_{0}\rangle\langle\psi_{0}|x+\gamma\rangle e^{2i\gamma
p/\hbar}\quad,$ (3)
where $|\psi_{0}\rangle$ is the system’s ground state, is dictated by Moyal’s
equation [45, 46, 15, 24]. Here, unlike in other formulations, the $\hbar\to
0$ limit of time-evolution is well-defined and results in a von Neumann-like
equation, with solution given by the Truncated Wigner Approximation (TWA)
$w(p_{x},x;t)=(W_{0}\circ\varrho_{-t})(p_{x},x)\quad.$ (4)
In the above, $\varrho_{-t}$ is the hamiltonian flow bringing the point
$(p_{x},x)$ at $\tau=0$ to its value at $\tau=-t$. Computing this backwards
flow requires us to employ a time-mirrored laser pulse in Hamilton’s
equations, which we solve using an adaptive-step algorithm with automatic
stiffness detection [47, 44].
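Eq. (3) can be evaluated on the position grid with one FFT per grid point ($\hbar=1$); a minimal sketch, treating $\psi$ as zero outside the grid:

```python
import numpy as np

def wigner(psi, dx):
    """Wigner function of a 1D state on a uniform grid (hbar = 1),
    evaluated at momenta p_j = pi * (j - N/2) / (N * dx); psi is taken
    to vanish outside the grid, so the state must be well contained."""
    N = psi.size
    ks = np.arange(-N // 2, N // 2)                    # gamma_k = k * dx
    W = np.empty((N, N))
    for i in range(N):                                 # one FFT per x point
        im, ip = i - ks, i + ks                        # indices of x -/+ gamma
        ok = (im >= 0) & (im < N) & (ip >= 0) & (ip < N)
        corr = np.zeros(N, dtype=complex)
        corr[ok] = psi[im[ok]] * np.conj(psi[ip[ok]])  # <x-g|psi><psi|x+g>
        spec = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(corr))) * N
        W[i] = spec.real * dx / np.pi                  # sum_k corr_k e^{2ip g_k}
    return W
```

With this discretization the position marginal $\sum_j W(x_i,p_j)\,\Delta p$ reproduces $|\psi(x_i)|^2$ exactly, mirroring the continuum identity.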
Expectation values using the TWA follow the recipe of the Wigner formalism:
$\langle
A(t)\rangle_{\rm{classical}}=\int_{\mathbb{R}^{2}}\text{d}x\,\text{d}p_{x}\,w(p_{x},x;t)A(p_{x},x)\quad,$
(5)
where $A$ is the Weyl transform of operator $\hat{A}$ [15]. We note that the
usual way of computing classical expectation values is to transfer the time-
dependence from $w$ to $A$, and then propagate forward in time [48]. We go
through the trouble of negative times because, just as the marginals of Wigner
functions describe quantum probability densities (PDs), the marginals obtained
from the TWA can be seen as their classical equivalents. Comparisons between
quantum and classical PDs will be our tool to determine whether or not TI is
dominant: If this is the case, the classical PDs will have empty regions when
compared to the quantum ones, indicating that the corresponding phase-space
domain cannot be accessed by classical trajectories; If not, quantum and
classical PDs will be non-zero on the same domain.
In practice, since (5) is performed by Monte Carlo, the classical (position)
PDs are nothing but histograms of final positions in the TWA. These, in turn,
come from a classically evolved, non-interacting gas of initial points usually
chosen by importance sampling. Since we are interested in evolving the ground
state Wigner function $W_{0}$, sampling the initial points according to
$|W_{0}|$ is very efficient, and we do so by employing a simple Metropolis-
Hastings scheme [49, 50]. The absolute value is necessary because the ground
state of (1) is not a gaussian and, therefore, $W_{0}$ presents negative-
valued regions (see Fig. 1 ahead) [51]. It should perhaps be mentioned that
this association between a quantum state and a classical ensemble of points is
a sensible way to establish quantum-classical correspondence for dynamics,
being far more general than Ehrenfest’s theorem [52, 53, 54, 19].
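The Metropolis-Hastings sampling of $|W_{0}|$ can be sketched as follows; for illustration a Gaussian stand-in replaces $|W_{0}|$ (the actual target is the absolute value of the ground-state Wigner function of Eq. (1)):

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis(target, n, x0, step=0.5, burn=1000):
    """Metropolis-Hastings with a symmetric Gaussian proposal: samples
    phase-space points with density proportional to `target`."""
    x = np.atleast_1d(np.asarray(x0, float))
    out = np.empty((n, x.size))
    fx = target(x)
    for i in range(-burn, n):
        y = x + step * rng.normal(size=x.size)
        fy = target(y)
        if rng.random() * fx < fy:      # accept with probability min(1, fy/fx)
            x, fx = y, fy
        if i >= 0:
            out[i] = x
    return out

# Gaussian stand-in for |W0| over (x, p): exp(-x^2 - p^2).
abs_w0 = lambda z: np.exp(-np.sum(z**2))
pts = metropolis(abs_w0, 20000, x0=[0.0, 0.0])
```

Each row of `pts` is an initial phase-space point $(x,p_x)$ for the classical ensemble; because only the ratio $f(y)/f(x)$ enters, the target need not be normalized.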
Figure 1: The ground state Wigner function, $W_{0}$. The solid black line is
the zero energy contour, so orbits lying inside (outside of) it are bounded
(scattered). The arrows mark orbits of type N1 (magenta), N2 (green) and P
(orange).
_Simulations–_ In the case of potentials such as (1), which is composed of
bounded and scattering regions (corresponding to discrete and continuous
quantum energy spectra), the initial points can have both closed and open
trajectories. For (1), closed and open trajectories have negative and positive
energies, respectively, and we shall refer to them as being of type N or type
P.
Type N trajectories are all periodic. In the absence of the pulse some of
them, which we call type N1, naturally extend far beyond the mean ground state
radius, orbiting the origin with large periods. Other type N trajectories,
which we shall call N2, remain near the origin and have small orbital periods.
Since the laser pulse is in the near-infrared range, it acts on type N2 and N1
trajectories adiabatically and non-adiabatically, respectively. Thus,
elementary classical perturbation theory [55, 23] tells us that the pulse will
keep type N2 trajectories mostly untouched, and scatter away some of type N1.
The former mechanism is responsible for the classical preservation of the
ground state, since it conserves trajectories near the origin, and the latter
is the one behind CAI. See Fig. 1 for a visual depiction of these concepts.
Figure 2: (a) Final Wigner function and (b) TWA after a pulse with intensity
$I_{0}=2.0\times 10^{14}\,\text{W/cm}^{2}$. Blue (red) colors in the Wigner
function are positive (negative), and the TWA is built on around $10^{6}$
trajectories. The trajectories in panel (b) were all ionized by the laser
field, since the ones lying on the tails of the Wigner function are absent in
the set of initial points (see text). Note that the Wigner function and the
TWA have essentially the same spread.
The situation is very different with type P trajectories, whose initial points
lie on the Wigner function’s “tails” (see Fig. 1). Due to the exponential
decay of the tails, less than 0.01% of the initial points form trajectories of
type P. As these trajectories are not bound to the core, they are scattered
away in a matter of a few attoseconds, even in the absence of the laser pulse.
Moreover, by galilean invariance, we could simply translate the pulse to a
time where type P trajectories would have already diverged, so there are many
reasons to consider that they are irrelevant. After verifying that our results
are unchanged whether or not we include them, we simply remove them. This
filtering allows us to track classical ionization probabilities exclusively to
the trajectories that _truly_ ionized, _i.e._ they started bounded and were
scattered by the field. In Fig. 2 we show the Wigner function at the end of a
pulse with intensity $I_{0}=2.0\times 10^{14}\,\text{W/cm}^{2}$ together with
the corresponding TWA, calculated using a filtered set of around $10^{6}$
initial points.
Figure 3: (Left) Heatmap of the probability of finding the electron as a
function of space and time during the laser pulse. The color scale in the left
panel is logarithmic, with green equal to $10^{-2}$ and purple to $10^{-5}$.
(Right) Classical (orange) and quantum (black) position PDs at the end of the
laser pulse. Note that the peak probability is close to $0.5$, but ionization
probabilities are no larger than $0.0006$, requiring a massive zoom in order
to be seen.
Fig. 2 shows that CAI is taking place, since the TWA extends far beyond the
core, and we now move on to quantify how much this classical spread accounts
for total ionization. In the left panel of Fig. 3 we display the quantum PD in
the system as a function of time for the same intensity as Fig. 2. Results are
shown from $5\,\text{fs}$ (which is when the laser pulse starts visibly acting
on the state) to $20\,\text{fs}$. In the right panel we show the final quantum
PD, $P(x)$, together with its classical approximation, obtained from the TWA’s
position marginal.
_Discussion–_ In most situations where purely classical approximations are
employed, what is expected is only a very rough sketch of what is obtained
quantum mechanically. Classical physics cannot reproduce quantum
superposition, which plays a fundamental role in several processes in strong-
field physics (HHG, for instance, relies strongly on it [31]). This makes it
impossible to reconstruct parts of the PD that depend strongly on phase
interference. The results of Fig. 3 are surprising because the classical
approximation is far more than a rough sketch: Not only does it correctly
estimate the (minuscule) order of atomic ionization probability, but it also
highlights the overall structure of the quantum result. Most importantly, the
classical PD extends over essentially the same range as the quantum one
despite a Keldysh parameter smaller than unity, namely
$\gamma_{\text{K}}\approx 0.78$, which should be indicative of a strong
presence of tunneling [29].
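For orientation, the quoted Keldysh parameter follows the standard estimate $\gamma_{\text{K}}=\omega\sqrt{2I_{p}}/E_{0}$, with the peak field obtained from the intensity. The sketch below is hedged: the excerpt does not state the laser frequency or ionization potential, so `OMEGA` and `I_P` are assumed values chosen only to illustrate the $\gamma_{\text{K}}\propto 1/\sqrt{I_{0}}$ scaling behind the numbers quoted in the text.

```python
import math

# Hedged sketch of the Keldysh-parameter estimate
#   gamma_K = omega * sqrt(2 * I_p) / E_0,
# with the peak field E_0 = sqrt(I_0 / I_AU) in atomic units. OMEGA and
# I_P are assumed values (the excerpt does not state them), chosen only
# to illustrate the gamma_K ~ 1/sqrt(I_0) scaling.

I_AU = 3.51e16   # atomic unit of intensity, W/cm^2
I_P = 0.5        # assumed ionization potential (a.u.)
OMEGA = 0.0585   # assumed laser frequency (a.u.)

def keldysh(intensity_w_cm2):
    """Keldysh parameter for a given peak intensity in W/cm^2."""
    e0 = math.sqrt(intensity_w_cm2 / I_AU)   # peak field in a.u.
    return OMEGA * math.sqrt(2.0 * I_P) / e0
```

With these assumed values, `keldysh(1e14)`, `keldysh(2e14)`, and `keldysh(4e14)` come out near 1.1, 0.78, and 0.55, matching the regimes quoted in the text; doubling the intensity rescales $\gamma_{\text{K}}$ by $1/\sqrt{2}$.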
The sharp peaks displayed by the classical PD in the right panel of Fig. 3 are
due to the “whorls” and “tendrils” of Fig. 2, _i.e._ the initial state is
deformed into filaments that are sheared and folded, and the classical
position marginals have peaks where the filaments lie perpendicular to the
momentum axis [56, 57]. This filamentary structure is usually not enough to
reproduce peak heights, and one must resort to semiclassical approximations:
Quantum interference is then achieved by rigorously endowing classical
trajectories with accumulated phases, which superpose and correct peak
intensities [58, 59]. What is fundamental for our purposes is that only the
extremities of the PD in Fig. 3, which are barely visible, are not classically
accessible – these are the ones that emerge exclusively from TI.
Figure 4: Quantum (black) and classical (orange) PDs at the end of the laser
pulse for: (a) $I_{0}=1.0\times 10^{14}\,\text{W/cm}^{2}$; (b)
$I_{0}=4.0\times 10^{14}\,\text{W/cm}^{2}$. Classical calculations employed
$10^{6}$ trajectories. The arrows in panel (b) point at regions where TI can
be safely disentangled from CAI.
The success of both classical and semiclassical mechanics, however, depends on
the existence of available classical trajectories. After they are all
scattered away, TI becomes the only available escape pathway from the core, as
can be seen for static fields or long laser pulses [20]. If the pulses are
strong, trajectories will be scattered faster, such that very strong fields
should also pose problems for classical/semiclassical propagation [60].
Thus, in Fig. 4 we display the final probability densities for two different
laser pulse intensities: One weaker ($\gamma_{\text{K}}\approx 1.1$) and the
other one stronger ($\gamma_{\text{K}}\approx 0.54$) than in Figs. 2 and 3.
As we can see in Fig. 4(a), for “weak” fields the quantum PD is completely
supported by the classical one, which is almost one order of magnitude larger.
This shows that, in the limit of weak fields, CAI is markedly dominant over
TI, so much so that it even resembles a case of strong localization [61].
Whether or not semiclassical corrections will be able to suppress the
classical PD and reproduce the peaks depends on how far one is in the
semiclassical regime, _i.e._ how large the actions associated with the
trajectories are with respect to $\hbar$, but a perfect reproduction of
probability densities is not the objective of this manuscript. What is
important is that there is no region in position space that classical
trajectories did not reach, so TI cannot be dominant. In Fig. 4(b), however, we
note the increasing presence of peaks falling outside the reach of classical
trajectories, and TI is seen to arise from outside in. Using Fig. 3, we can
describe this quite easily: Some portions of the wave function that ionize
near the maxima of the laser pulse, with the absolute maximum at
$t_{0}=10\,\text{fs}$, go farther. They provide peaks at the PD’s extremities
that cannot be fully reached by CAI, although this is hard to see for the
intensity used in Fig. 3 and much clearer in Fig. 4(b). In addition to the
classically unreachable extremities, it is also unlikely that semiclassical
approximations will be able to resolve most of the peaks in Fig. 4(b), as it
is clear that both TI and CAI are taking place.
Interestingly, if we interpret Fig. 4(b) together with Fig. 3, we see that the
purest tunneling contributions can be tracked to portions that ionize and
never recombine – in the terminology of the HHG literature, the classical
trajectories near these portions (if there are any) never _recollide_ with the
core [62, 29, 63]. The recolliding trajectories remain, as expected, within a
ball of quiver radius $\alpha_{0}=\sqrt{I_{0}}/\omega^{2}$ [29]. As can be
seen in Figs. 2, 3, and 4, however, this region is always well populated by
classical trajectories. This explains why HHG spectra could be obtained in
[28] through semiclassical approximations: Tunneling contributions might even
be present near the core, but the classical trajectories in the region were
enough to reproduce spectra exclusively from CAI.
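The recollision criterion used above can be made concrete with a short sketch: recolliding trajectories stay within a ball of quiver radius $\alpha_{0}=\sqrt{I_{0}}/\omega^{2}$ in atomic units, where $\sqrt{I_{0}}$ is the peak field. The frequency `OMEGA` below is an assumed value, and `can_recollide` is a hypothetical helper for illustration, not a function from the paper.

```python
import math

# Sketch of the recollision criterion: recolliding trajectories stay
# within a ball of quiver radius alpha_0 = sqrt(I_0)/omega^2 (atomic
# units). OMEGA is an assumed frequency; `can_recollide` is a
# hypothetical helper, not from the paper.

I_AU = 3.51e16   # atomic unit of intensity, W/cm^2
OMEGA = 0.0585   # assumed laser frequency (a.u.)

def quiver_radius(intensity_w_cm2):
    """alpha_0 = E_0 / omega^2, with E_0 = sqrt(I_0) in atomic units."""
    e0 = math.sqrt(intensity_w_cm2 / I_AU)
    return e0 / OMEGA**2

def can_recollide(trajectory_x, intensity_w_cm2):
    """True if all sampled positions stay inside the quiver ball."""
    return max(abs(x) for x in trajectory_x) <= quiver_radius(intensity_w_cm2)
```

Note that $\alpha_{0}\propto\sqrt{I_{0}}$, so the recollision region grows with intensity even as the Keldysh parameter shrinks.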
It is then reasonable that the first step in semiclassical step models be
called “ionization” instead of “tunneling ionization”, since one cannot
effectively know whether it came from tunneling or not. This, moreover, adds
to the previously raised concerns about the difficulty of theoretically and
experimentally resolving tunneling time ambiguities, since the contributions
from TI and CAI might be impossible to disentangle. In the end, it is possible
that the measured angular delays in attoclocks, often interpreted as the “time
spent inside a barrier”, are predominantly due to CAI and directly traceable
to transport properties of classical trajectories. This would render
“tunneling times” essentially unrelated to tunneling.
_Conclusions–_ We have used a one-dimensional atom to bring forward the fact
that isolating tunneling from classically allowed ionization can be extremely
hard, even for the simplest of systems, when they are interacting with intense
and short laser pulses. Our results show that what is known as “tunneling
ionization” in semiclassical step models in atomic physics has a major
classical fingerprint, with direct consequences for the proper interpretation
of high-harmonic generation and tunneling times, among others.
_Acknowledgements–_ I thank Alfredo Ozorio de Almeida, Andrew Hunter, Denis
Ullmo, Frank Großmann, Jessica Almeida, Jonathan Dubois, Olivier Giraud, Peter
Schlagheck, Sebastian Gemsheim and Steven Tomsovic for many stimulating
discussions. I also thank Jan-Michael Rost and the hospitality of the Max
Planck Institute for the Physics of Complex Systems, where the initial stages
of this work were carried out.
## References
* Itoh _et al._ [1979] N. Itoh, H. Totsuji, S. Ichimaru, and H. E. Dewitt, Enhancement of thermonuclear reaction rate due to strong screening. II - Ionic mixtures, Astrophys. J. 234, 1079 (1979).
* Gurney and Condon [1928] R. W. Gurney and E. U. Condon, Wave mechanics and radioactive disintegration, Nature 122, 439 (1928).
* Josephson [1962] B. Josephson, Possible new effects in superconductive tunnelling, Physics Letters 1, 251 (1962).
* Kohen and Klinman [1999] A. Kohen and J. P. Klinman, Hydrogen tunneling in biology, Chemistry & biology 6, R191 (1999).
* Hammes-Schiffer [2006] S. Hammes-Schiffer, Hydrogen tunneling and protein motion in enzyme reactions, Accounts of chemical research 39, 93 (2006).
* Löwdin [1963] P.-O. Löwdin, Proton tunneling in DNA and its biological implications, Reviews of Modern Physics 35, 724 (1963).
* Trixler [2013] F. Trixler, Quantum tunnelling to the origin and evolution of life, Curr. Org. Chem. 17, 1758 (2013).
* Peters _et al._ [1978] K. Peters, P. Avouris, and P. Rentzepis, Picosecond dynamics of primary electron-transfer processes in bacterial photosynthesis, Biophysical Journal 23, 207 (1978).
* Gray and Winkler [2003] H. B. Gray and J. R. Winkler, Electron tunneling through proteins, Quarterly reviews of biophysics 36, 341 (2003).
* Esaki [1974] L. Esaki, Long journey into tunneling, Science 183, 1149 (1974).
* Bez _et al._ [2003] R. Bez, E. Camerlenghi, A. Modelli, and A. Visconti, Introduction to flash memory, Proceedings of the IEEE 91, 489 (2003).
* Berry and Mount [1972] M. V. Berry and K. Mount, Semiclassical approximations in wave mechanics, Reports on Progress in Physics 35, 315 (1972).
* Balazs and Voros [1990] N. Balazs and A. Voros, Wigner’s function and tunneling, Annals of Physics 199, 123 (1990).
* Berry [1977] M. V. Berry, Semi-classical mechanics in phase space: a study of Wigner's function, Philosophical Transactions of the Royal Society of London. Series A, Mathematical and Physical Sciences 287, 237 (1977).
* De Almeida [1998] A. M. O. De Almeida, The Weyl representation in classical and quantum mechanics, Physics reports 295, 265 (1998).
* Maitra and Heller [1997] N. Maitra and E. Heller, Barrier tunneling and reflection in the time and energy domains: The battle of the exponentials, Physical review letters 78, 3035 (1997).
* Grossmann and Heller [1995] F. Grossmann and E. J. Heller, A semiclassical correlation function approach to barrier tunneling, Chemical physics letters 241, 45 (1995).
* Grossmann [2000] F. Grossmann, Semiclassical real-time tunneling by multiple spawning of classical trajectories, Physical review letters 85, 903 (2000).
* Wang and Tomsovic [2021] H. Wang and S. Tomsovic, Semiclassical propagation of coherent states and wave packets: hidden saddles, arXiv preprint arXiv:2107.08799 (2021).
* Spanner [2003] M. Spanner, Strong field tunnel ionization by real-valued classical trajectories, Physical review letters 90, 233005 (2003).
* Zagoya _et al._ [2014a] C. Zagoya, L. S. Schulman, and F. Grossmann, Interference nature of quantum breather oscillation, Journal of Physics A: Mathematical and Theoretical 47, 165102 (2014a).
* Chirikov [1979a] B. V. Chirikov, A universal instability of many-dimensional oscillator systems, Physics reports 52, 263 (1979a).
* Arnold [1989] V. I. Arnold, _Mathematical Methods of Classical Mechanics_ , 2nd ed. (Springer, 1989).
* de Almeida [1990] A. M. O. de Almeida, _Hamiltonian Systems: Chaos and Quantization_ , 2nd ed. (Cambridge University Press, 1990).
* Corkum [1993] P. B. Corkum, Plasma perspective on strong field multiphoton ionization, Physical review letters 71, 1994 (1993).
* Lewenstein _et al._ [1994] M. Lewenstein, P. Balcou, M. Y. Ivanov, A. L’huillier, and P. B. Corkum, Theory of high-harmonic generation by low-frequency laser fields, Physical Review A 49, 2117 (1994).
* Dubois [2019] J. Dubois, _Electron dynamics for atoms driven by intense and elliptically polarized laser pulses_ , Ph.D. thesis, Aix-Marseille (2019).
* Zagoya _et al._ [2014b] C. Zagoya, J. Wu, M. Ronto, D. Shalashilin, and C. F. de Morisson Faria, Quantum and semiclassical phase-space dynamics of a wave packet in strong fields using initial-value representations, New Journal of Physics 16, 103040 (2014b).
* Großmann [2013] F. Großmann, _Theoretical Femtosecond Physics: Atoms and Molecules in Strong Laser Fields_ , 2nd ed. (Springer, 2013).
* Ni _et al._ [2016] H. Ni, U. Saalmann, and J.-M. Rost, Tunneling ionization time resolved by backpropagation, Physical review letters 117, 023002 (2016).
* van de Sand and Rost [1999] G. van de Sand and J. M. Rost, Irregular orbits generate higher harmonics, Physical review letters 83, 524 (1999).
* Hofmann _et al._ [2019] C. Hofmann, A. S. Landsman, and U. Keller, Attoclock revisited on electron tunnelling time, Journal of Modern Optics 66, 1052 (2019).
* Majorosi _et al._ [2018] S. Majorosi, M. G. Benedict, and A. Czirják, Improved one-dimensional model potentials for strong-field simulations, Physical Review A 98, 023401 (2018).
* Landsman _et al._ [2014] A. S. Landsman, M. Weger, J. Maurer, R. Boge, A. Ludwig, S. Heuser, C. Cirelli, L. Gallmann, and U. Keller, Ultrafast resolution of tunneling delay time, Optica 1, 343 (2014).
* Torlina _et al._ [2015] L. Torlina, F. Morales, J. Kaushal, I. Ivanov, A. Kheifets, A. Zielinski, A. Scrinzi, H. G. Muller, S. Sukiasyan, M. Ivanov, _et al._ , Interpreting attoclock measurements of tunnelling times, Nature Physics 11, 503 (2015).
* Pollak [2017] E. Pollak, Quantum tunneling: the longer the path, the less time it takes, The journal of physical chemistry letters 8, 352 (2017).
* Rost and Saalmann [2019] J. M. Rost and U. Saalmann, Attoclock and tunnelling time, Nature Photonics 13, 439 (2019).
* Kheifets [2020] A. S. Kheifets, The attoclock and the tunneling time debate, Journal of Physics B: Atomic, Molecular and Optical Physics 53, 072001 (2020).
* Sainadh _et al._ [2020] U. S. Sainadh, R. Sang, and I. Litvinyuk, Attoclock and the quest for tunnelling time in strong-field physics, Journal of Physics: Photonics 2, 042002 (2020).
* Hofmann _et al._ [2021] C. Hofmann, A. Bray, W. Koch, H. Ni, and N. I. Shvetsov-Shilovski, Quantum battles in attoscience: tunnelling, The European Physical Journal D 75, 1 (2021).
* Feit _et al._ [1982] M. Feit, J. Fleck Jr, and A. Steiger, Solution of the Schrödinger equation by a spectral method, Journal of Computational Physics 47, 412 (1982).
* Blanes and Moan [2002] S. Blanes and P. C. Moan, Practical symplectic partitioned Runge–Kutta and Runge–Kutta–Nyström methods, Journal of Computational and Applied Mathematics 142, 313 (2002).
* Lorin _et al._ [2009] E. Lorin, S. Chelkowski, and A. Bandrauk, Mathematical modeling of boundary conditions for laser-molecule time-dependent Schrödinger equations and some aspects of their numerical computation—one-dimensional case, Numerical Methods for Partial Differential Equations: An International Journal 25, 110 (2009).
* Bezanson _et al._ [2017] J. Bezanson, A. Edelman, S. Karpinski, and V. B. Shah, Julia: A fresh approach to numerical computing, SIAM review 59, 65 (2017).
* Groenewold [1946] H. J. Groenewold, On the principles of elementary quantum mechanics, in _On the principles of elementary quantum mechanics_ (Springer, 1946) pp. 1–56.
* Moyal [1949] J. E. Moyal, Quantum mechanics as a statistical theory, in _Mathematical Proceedings of the Cambridge Philosophical Society_ , Vol. 45 (Cambridge University Press, 1949) pp. 99–124.
* Rackauckas and Nie [2017] C. Rackauckas and Q. Nie, DifferentialEquations.jl – a performant and feature-rich ecosystem for solving differential equations in Julia, The Journal of Open Research Software 5 (2017).
* Mittal _et al._ [2020] K. M. Mittal, O. Giraud, and D. Ullmo, Semiclassical evaluation of expectation values, Physical Review E 102, 042211 (2020).
* Metropolis _et al._ [1953] N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller, Equation of state calculations by fast computing machines, J. Chem. Phys. 21, 1087 (1953).
* Hastings [1970] W. K. Hastings, Monte Carlo sampling methods using Markov chains and their applications, Biometrika 57, 97 (1970).
* Hudson [1974] R. L. Hudson, When is the Wigner quasi-probability density non-negative?, Reports on Mathematical Physics 6, 249 (1974).
* Ballentine _et al._ [1994] L. E. Ballentine, Y. Yang, and J. Zibin, Inadequacy of Ehrenfest's theorem to characterize the classical regime, Physical review A 50, 2854 (1994).
* Drobnỳ _et al._ [1997] G. Drobnỳ, A. Bandilla, and I. Jex, Quantum description of nonlinearly interacting oscillators via classical trajectories, Physical Review A 55, 78 (1997).
* Lasser and Lubich [2020] C. Lasser and C. Lubich, Computing quantum dynamics in the semiclassical regime, Acta Numerica 29, 229 (2020).
* Chirikov [1979b] B. V. Chirikov, A universal instability of many-dimensional oscillator systems, Phys. Rep. 52, 263 (1979b).
* Berry [1979] M. V. Berry, Evolution of semiclassical quantum states in phase space, Journal of Physics A: Mathematical and General 12, 625 (1979).
* Berry _et al._ [1979] M. V. Berry, N. L. Balazs, M. Tabor, and A. Voros, Quantum maps, Annals of Physics 122, 26 (1979).
* Maslov and Fedoriuk [1981] V. P. Maslov and M. V. Fedoriuk, _Semi-Classical Approximation in Quantum Mechanics_ (Springer, 1981).
* Lando _et al._ [2019] G. M. Lando, R. O. Vallejos, G.-L. Ingold, and A. M. O. de Almeida, Quantum revival patterns from classical phase-space trajectories, Physical Review A 99, 042125 (2019).
* Note [1] Note that this is in line with standard Keldysh theory, but for different reasons.
* Anderson [1958] P. W. Anderson, Absence of diffusion in certain random lattices, Physical review 109, 1492 (1958).
* Protopapas _et al._ [1996] M. Protopapas, D. Lappas, C. H. Keitel, and P. L. Knight, Recollisions, bremsstrahlung, and attosecond pulses from intense laser fields, Physical Review A 53, R2933 (1996).
* Dubois _et al._ [2020] J. Dubois, C. Chandre, and T. Uzer, Envelope-driven recollisions triggered by an elliptically polarized pulse, Physical Review Letters 124, 253203 (2020).
\intertext{As before, the only candidate for a joint distribution with finite score is $\delta_x(X) e(Z \mid X)$. Note that the marginal of this distribution on $Z$ is $e(Z \mid x)$, since $\int_x \delta_x(X) e(Z \mid X)\;\mathrm dx = e(Z \mid x)$. Thus, our equation becomes}
&= \Ex_{\delta_x(X) e(Z \mid X)} \left[ \beta \log \frac{e(Z \mid x)}{p(z)} + \log \frac{\delta_x(X) e(Z \mid X)}{e(Z \mid x) d(x \mid Z)} \right] \\
&= \Ex_{e(Z \mid x)} \left[ \beta \log \frac{e(Z \mid x)}{p(Z)} + \log \frac{1}{ d(x \mid Z)} \right]\\
&= \kldiv{e(Z|\,x)}{p} + \mathrm{Rec}_{e,d}(x) \\
&= -\beta\text{-}\mathrm{ELBO}_{p,e,d}(x).
\end{align*}
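The chain of equalities above can be checked numerically for discrete distributions. The sketch below uses made-up values of $e(Z\mid x)$, $p(Z)$, and $d(x\mid Z)$ over a two-element latent space and verifies that the score $\beta\,D(e(Z|x)\,\|\,p) + \mathrm{Rec}_{e,d}(x)$ coincides with the expectation form, i.e. minus the $\beta$-ELBO.

```python
import math

# Numeric sanity check of the identity derived above: for a fixed x,
#   beta * KL(e(Z|x) || p) + Rec_{e,d}(x)
# equals E_{e(Z|x)}[ beta*log(e(Z|x)/p(Z)) + log(1/d(x|Z)) ],
# i.e. minus the beta-ELBO. The discrete e, p, d below are made-up
# illustrative numbers, not from the paper.

beta = 2.0
e = [0.7, 0.3]   # e(Z|x) over two latent values
p = [0.5, 0.5]   # prior p(Z)
d = [0.8, 0.4]   # d(x|Z=z) for each z

kl = sum(ez * math.log(ez / pz) for ez, pz in zip(e, p))
rec = sum(ez * math.log(1.0 / dz) for ez, dz in zip(e, d))
score = beta * kl + rec

expectation = sum(
    ez * (beta * math.log(ez / pz) + math.log(1.0 / dz))
    for ez, pz, dz in zip(e, p, d)
)
assert abs(score - expectation) < 1e-12
```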
In the main text, we defined $\PDGof{\Psi}$ to be the PDG with edges $\{ \raisebox{-0.3ex}{$\smash{\stackrel{J}{\rightarrow}}$} \mathbf X_J \}_{\mathcal J}$, cpds $p_J(\mathbf X_J) \propto \phi_J(\mathbf X_J)$, and weights $\alpha_J, \beta_J := \theta_J$.
Let $\theelt(\{x\}) := x$ be a function that extracts the unique element of a singleton set.
It was shown by richardson2020probabilistic (Corollary 4.4.1) that
\[ \theelt \bbr{\dg M_\Psi}^*_1 = \Pr\nolimits_{\Phi, \theta}(\mat x)
= \frac{1}{Z_\Psi} \prod_{J} \phi_J(\mat x_J)^{\theta_J}. \]
Recall the statement of Prop 4.6 from richardson2020probabilistic:
\begin{equation}\label{eqn:nice-score-repeated}
\bbr{\dg M}_\gamma(\mu) = \Ex_{\mat w \sim \mu}\! \Bigg\{ \sum_{ X \xrightarrow{\!\!L} Y } \bigg[\,
\!\beta_L \log \frac{1}{\bp(y^{\mat w} |x^{\mat w})} +
{\color{blue!50!red!90!black}(\gamma\alpha_L - \beta_L ) \log \frac{1}{\mu(y^{\mat w} |x^{\mat w})}} \bigg] -
\gamma \log \frac{1}{\mu(\mat w)} \Bigg\}, \\
\end{equation}
where $x^{\mat w}$
and $y^{\mat w}$ are the respective values of the variables $X$ and $Y$ in the world $\mat w$.
Note that if $\gamma = 1$ and $\alpha,\beta$ are both equal to $\theta$ in $\PDGof{\Psi}$,
the middle term (in purple) is zero. So in our case, since the edges are $\{ \xrightarrow{J} \mathbf X_J \}$ and $\bp[J](\mat X_J) = \phi_J(\mathbf X_J)$, \eqref{eqn:nice-score-repeated} reduces to the standard variational free energy
\begin{align*}
\VFE_\Psi(\mu)
&= \Ex_{\mu} \left[ ~\sum_{J\in \mathcal J} \theta_J \log \frac1{\phi_J(\mat X_J)}\right]-\H(\mu) \numberthis\label{eq:vfe}\\
&= \Ex_{\mu}
\big\langle \boldsymbol\varphi,\, \boldsymbol\theta \big\rangle_{\mathcal J}
- \H(\mu),
\quad\text{where}~\varphi_J(\mat X_J) := \log \frac1{\phi_J(\mat X_J)}.
\end{align*}
By construction, $\Pr_\Psi$ uniquely minimizes $\VFE$.
The 1-inconsistency, $\aar{\dg M_\Psi}$ is the minimum value attained. We calculate:
\begin{align*}
\aar{\dg M}_1
&= \VFE_\Psi(\Pr\nolimits_\Psi) \\
&= \Ex_{\mat x \sim \mu}\! \Bigg\{ \sum_{J \in \mathcal J} \bigg[\,
\!\theta_J \log \frac{1}{\phi_J(\mat x_J)}
\bigg] - \log \frac{1}{\Pr\nolimits_{\Phi, \theta}(\mat x) } \Bigg\}
& \Big[ ~\text{by \eqref{eq:vfe}}~ \Big]\\
&= \Ex_{\mat x \sim \mu}\! \Bigg\{ \sum_{J\in \mathcal J} \bigg[\,
\!\theta_J \log \frac{1}{\phi_J(\mat x_J)}
\bigg] - \log \frac{Z_\Psi}{\prod_{J \in \mathcal J} \phi_J(\mat x_J)^{\theta_J}} \Bigg\}
& \Big[ ~\text{definition of $\Pr_\Psi$}~ \Big]\\
&= \Ex_{\mat x \sim \mu}\! \Bigg\{ \sum_{J\in \mathcal J} \bigg[\,
\!\theta_J \log \frac{1}{\phi_J(\mat x_J)}
\bigg] - \sum_{J \in \mathcal J} \left[\theta_J \log \frac{1}{\phi_J(\mat x_J)} \right]
- \log Z_\Psi \Bigg\} \\
&= \Ex_{\mat x \sim \mu} [- \log Z_\Psi] \\
&= - \log Z_\Psi & \Big[~\text{$Z_\Psi$ is constant in $\mat x$}~\Big]
\end{align*}
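The conclusion that the variational free energy of the Gibbs distribution equals $-\log Z_\Psi$ can be verified numerically on a small example. The sketch below builds an illustrative two-variable factor graph (factors and weights are made up), forms $\Pr_\Psi \propto \prod_J \phi_J^{\theta_J}$, and checks that its VFE is $-\log Z_\Psi$.

```python
import math
from itertools import product

# Numeric check of the computation above: for a small factor graph, the
# variational free energy  E_mu[sum_J theta_J log(1/phi_J)] - H(mu)
# evaluated at the Gibbs distribution Pr(x) = (1/Z) prod_J phi_J^theta_J
# equals -log Z. The binary factors and weights below are illustrative.

theta = [1.0, 2.0]
phi0 = {0: 1.0, 1: 2.0}                                   # depends on x0
phi1 = {(0, 0): 1.0, (0, 1): 3.0, (1, 0): 2.0, (1, 1): 1.0}  # on (x0, x1)

def weight(x):
    x0, _ = x
    return phi0[x0] ** theta[0] * phi1[x] ** theta[1]

states = list(product([0, 1], repeat=2))
Z = sum(weight(x) for x in states)
mu = {x: weight(x) / Z for x in states}        # Gibbs distribution

energy = sum(
    mu[x] * (theta[0] * math.log(1 / phi0[x[0]])
             + theta[1] * math.log(1 / phi1[x]))
    for x in states
)
entropy = -sum(m * math.log(m) for m in mu.values())
vfe = energy - entropy
assert abs(vfe - (-math.log(Z))) < 1e-12
```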
Since $p$ has high confidence, and ${\tt T}$ is always equal to ${\tt t}$, the only joint distribution on $(X,{\tt T})$ with finite score is $\mu(X, {\tt T}) = p(X) \delta_{{\tt t}}({\tt T})$. We compute its score directly:
\begin{align*}
\aar*{\!\begin{tikzpicture}[center base]
\node[dpad0] (X) at (0,0) {$X$};
\node[dpad0] (2) at (1.1,0) {$\Truth$};
\draw[arr2] (X) to
node[above, pos=0.4,inner sep=2pt]{$\hat c$}
(2);
\draw[arr2, <-] (X) to
node[above, pos=0.6, inner sep=2pt]{$p$}
+(-1, 0);
\draw[arr2, <<-] (2) to
node[above, inner sep=2pt, pos=0.6]{$\delta_{\tt t}$}
+(1, 0);
\end{tikzpicture}\!}
&= \Ex_{\mu} \log \frac{\mu(X,{\tt T})}{p(X)\,\hat c({\tt t}\,|X)}
= \Ex_{p} \log \frac{1}{\hat c({\tt t}\,|X)}
= \Ex_{p} \log \frac{1}{\exp(-c(X))} \\
% \beta \Ex_{x\sim p}c(x).
&= \Ex_p \log \exp(c(X))
= \Ex_p c(X) =
\Ex_{x\sim p}\! c(x).
\end{align*}
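As a quick numeric sanity check of this computation: with $\mu(X,{\tt T}) = p(X)\,\delta_{\tt t}({\tt T})$ and $\hat c({\tt t}\,|\,x) = \exp(-c(x))$, the score reduces to the expected cost $\Ex_{x\sim p} c(x)$. The distribution and cost values below are made up for illustration.

```python
import math

# Numeric check of the final computation above: with
# mu(X,T) = p(X)*delta_t(T) and c_hat(t|x) = exp(-c(x)), the score
# E_{x~p}[log(1/c_hat(t|x))] equals the expected cost E_{x~p}[c(x)].
# p and c are made-up illustrative numbers over a 3-element domain.

p = [0.2, 0.5, 0.3]
c = [1.0, 0.25, 2.0]

score = sum(px * math.log(1.0 / math.exp(-cx)) for px, cx in zip(p, c))
expected_cost = sum(px * cx for px, cx in zip(p, c))
assert abs(score - expected_cost) < 1e-9
```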
\subsection{Additional Proofs for Unnumbered Claims}
\subsubsection{Details on the Data Processing Inequality Proof}
\tikzset{
ci2/.style={inner sep=2pt, align=center},
pstyle/.style={line width=0.9pt, pcolor!!black},
qstyle/.style={line width=1.3pt, qcolor!!black},
pqstyle/.style={line width=1.5pt, pcolor!50!qcolor!!black}
}
We now provide more details on the proof of the Data Processing Inequality that appeared in <Ref> of the main text. We repeat it now for convenience, with labeled PDGs ($\dg M_1, \ldots, \dg M_5$) and numbered (in)equalities.
% - \log \Pr\nolimits_{p,d}(X\!=\!x) ~=\qquad&\\
\aar*{\!\begin{tikzpicture}[center base]
\node[dpad0] (X) {$X$};
\draw[arr2, <-,qstyle] (X) --
% node[above,pos=0.6]{$q^{{\color{gray}(s)}}$}
node[below, pos=0.65,ci2] {\qlabel}
++(1.1, 0);
\draw[arr2, <-,pstyle] (X) --
% node[above,pos=0.6]{$p^{{\color{gray}(r)}}$}
node[below, pos=0.65, ci2] {\plabel}
++(-1.1, 0);%
\end{tikzpicture}\!}
~=~
\aar**{\!\!\begin{tikzpicture}[center base]
\node[dpad0] (X) {$X$};
\node[dpad0,above=.8 of X,align=center] (Y) {$Y$};
\draw[arr2, <-,qstyle] (X) --
node[below, pos=0.65,ci2] {\qlabel}
++(1.1, 0);
\draw[arr2, <-,pstyle] (X) --
node[below, pos=0.65,ci2] {\plabel}
++(-1.1, 0);%
\draw[arr2, pqstyle] (X) --
node[left,pos=0.45,inner sep=1pt]{$f$}
node[right, pos=0.45, inner sep=1.5pt, align=center] % below,rotate=90
\end{tikzpicture}\!\!}
~=~
\aar**{\!\!\begin{tikzpicture}[center base]
\node[dpad0] (X1) {$X_1$};
\node[dpad0, right=0.6 of X1] (X2) {$X_2$};
\node[dpad0,above=.8 of {$(X1)!.5!(X2)$},align=center] (Y) {$Y$};
\draw[arr2, -, double equal sign distance] (X1) to (X2);
\draw[arr2, <-,qstyle] (X2) --
node[below, pos=0.65,ci2] {\qlabel}
++(1.1, 0);
\draw[arr2, <-,pstyle] (X1) --
node[below, pos=0.65,ci2] {\plabel}
++(-1.1, 0);%
\draw[arr2,pstyle] (X1) to[bend left=40]
node[above left, pos=0.35, inner sep=1pt]{$f$}
node[below right=0 and 0, pos=0.45, inner sep=0pt, align=center] {\plabel}
\draw[arr2,qstyle] (X2) to[bend right=40]
node[above right, pos=0.35, inner sep=1pt]{$f$}
node[below left=0 and 0, pos=0.45, inner sep=0pt, align=center] {\qlabel}
\end{tikzpicture}\!\!}
~\ge~
\aar**{\!\!\begin{tikzpicture}[center base]
\node[dpad0] (X1) {$X_1$};
\node[dpad0, right=0.65 of X1] (X2) {$X_2$};
\node[dpad0,above=.75 of {$(X1)!.5!(X2)$},align=center] (Y) {$Y$};
\draw[arr2, <-,qstyle] (X2) --
node[below, pos=0.65,ci2] {\qlabel}
++(1.1, 0);
\draw[arr2, <-,pstyle] (X1) --
node[below, pos=0.65,ci2] {\plabel}
++(-1.1, 0);%
\draw[arr2,pstyle] (X1) to[bend left=30]
node[above left, pos=0.35, inner sep=1pt]{$f$}
node[below right=0 and 0, pos=0.45, inner sep=0pt, align=center] {\plabel}
\draw[arr2,qstyle] (X2) to[bend right=30]
node[above right, pos=0.35, inner sep=1pt]{$f$}
node[below left=0 and 0, pos=0.45, inner sep=0pt, align=center] {\qlabel}
\end{tikzpicture}\!\!}
~=~
\aar*{\!\begin{tikzpicture}[center base]
\node[dpad0] (X) {$X$};
\draw[arr2, <-,qstyle] (X) --
node[above,pos=0.7,ci2]{$ f\!\circ\! q$}
node[below, pos=0.65,ci2] {\qlabel}
++(1.1, 0);
\draw[arr2, <-,pstyle] (X) --
node[above,pos=0.6,ci2]{$ f\!\circ\! p$}
node[below, pos=0.65,ci2] {\plabel}
++(-1.1, 0);%
\end{tikzpicture}\!}
% \end{equation*}
% \]
$\dg M_1$ $\dg M_2$ $\dg M_3$ $\dg M_4$ $\dg M_5$
We now enumerate the (in)equalities to prove them.
* Let $\mu(X)$ denote the (unique) optimal distribution for $\dg M_1$.
Now, the joint distribution $\mu(X,Y) := \mu(X) f(Y|X)$ has incompatibility with $\dg M_2$ equal to
\begin{align*}
\Inc_{\dg M_2}(\mu(X,Y)) &= \beta \kldiv{\mu(X)}{p(X)} +
\zeta \kldiv{\mu(X)}{q(X)} + (\beta\!+\!\zeta)\Ex_{x\sim\mu}\big[ \kldiv{\mu(Y|x)}{f(Y|x)} \big] \\
&= \Inc_{\dg M_1}(\mu(X)) + (\beta\!+\!\zeta) \Ex_{x \sim \mu}\kldiv{\mu(Y|x)}{f(Y|x)}
% & \hspace{-2in}{\color{gray}\Big[\text{as $\mu(X)$ is optimal for $\dg M_1$}\Big]}
\\
&= \aar{\dg M_1}
& \hspace{-2in}{\color{gray}\Big[\begin{array}{c}
\text{as $\mu(Y|x) = f(Y|x)$ wherever $\mu(x)>0$,}\\
\text{and $\mu(X)$ minimizes $\Inc_{\dg M_1}$}
\end{array}\Big]}
\end{align*}
So $\mu(X,Y)$ witnesses the fact that $\aar{\dg M_2} \le \Inc_{\dg M_2}(\mu(X,Y)) = \aar{\dg M_1}$.
Furthermore, every joint distribution $\nu(X,Y)$ must have at least this incompatibility,
as it must have some marginal $\nu(X)$, which, even by itself, already gives rise to incompatibility of magnitude $\Inc_{\dg
M_1}(\nu(X)) \ge \Inc_{\dg M_1}(\mu(X)) = \aar{\dg M_1} $.
And since this is true for all $\nu(X,Y)$, we have that $\aar{\dg M_2} \ge \aar{\dg M_1}$. So $\aar{\dg M_2} = \aar{\dg M_1}$.
* The equals sign in $\dg M_3$ may be equivalently interpreted as a cpd $\mathit{eq}_{}(X_1|X_2) := x_2 \mapsto \delta_{x_2}(X_1)$, a cpd
$\mathit{eq'}_{}(X_2|X_1) := x_1 \mapsto \delta_{x_1}(X_2)$, or both at once; in each case, the effect is that a joint distribution $\mu$ with support on an outcome for which $X_1 \ne X_2$ gets an infinite penalty, so a minimizer $\mu(X_1,X_2,Y)$ of $\Inc_{\dg M_3}$ must be isomorphic to a distribution $\mu'(X,Y)$.
Furthermore, it is easy to verify that $\Inc_{\dg M_2}(\mu'(X,Y)) = \Inc_{\dg M_3}(\mu(X,X,Y))$. More formally, we have:
\begin{align*}
\aar{\dg M_3} &= \inf_{\mu(X_1,X_2, Y)} \Ex_\mu\left[
\beta \log \frac{\mu(X_1)}{p(X_1)}
+ \zeta \log \frac{\mu(X_2)}{q(X_2)}
+ \beta \log\frac{\mu(Y|X_1)}{f(Y|X_1)}
+ \zeta \log\frac{\mu(Y|X_2)}{f(Y|X_2)}
+ \log \frac{\mu(X_1|X_2)}{\mathit{eq}(X_1|X_2)}
\right]
\intertext{but if $X_1$ always equals $X_2$ (which we call simply $X$), as it must for the optimal $\mu$, this becomes}
&= \inf_{\mu(X_1=X_2=X, Y)} \Ex_\mu\left[
\beta \log \frac{\mu(X)}{p(X)}
+ \zeta \log \frac{\mu(X)}{q(X)}
+ \beta \log\frac{\mu(Y|X)}{f(Y|X)}
+ \zeta \log\frac{\mu(Y|X)}{f(Y|X)}
\right] \\
&= \inf_{\mu(X, Y)} \Ex_\mu\left[
\beta \log \frac{\mu(X)}{p(X)}
+ \zeta \log \frac{\mu(X)}{q(X)}
+ (\beta\!+\!\zeta) \log\frac{\mu(Y|X)}{f(Y|X)}
\right] \\
&= \inf_{\mu(X,Y)} \Inc_{\dg M_2}(\mu)\\
&= \aar{\dg M_2}.
% \Inc_{\dg M_2=3}(\mu'(X,Y))
\end{align*}
* Eliminating the edge or edges enforcing the equality $(X_1 = X_2)$ cannot increase inconsistency, by <Ref>.
* Although this final step of composing the edges with shared confidences looks intuitively like it should be true (and it is!), its proof may not be obvious.
We now provide a rigorous proof of this equality.
To ameliorate subscript pains, we henceforth write $X$ for $X_1$, and $Z$ for $X_2$.
We now compute:
\begin{align*}
\aar{\dg M_4}
&= \inf_{\mu(X,Z,Y)}
\Ex_\mu \left[
\beta \log \frac{\mu(X)\, \mu(Y|X)}{p(X)\,f(Y|X)}
+ \zeta \log \frac{\mu(Z)\, \mu(Y|Z)}{q(Z)\,f(Y|Z)}
\right]\\
&= \inf_{\mu(X,Z,Y)}
\Ex_\mu \left[
\beta \log \frac{\mu(Y)\, \mu(X|Y)}{p(X)\,f(Y|X)}
+ \zeta \log \frac{\mu(Y)\, \mu(Z|Y)}{q(Z)\,f(Y|Z)}
\right] & \text{[apply Bayes Rule in numerators]}
\end{align*}
By the chain rule, every distribution $\mu(X,Z,Y)$ may be specified as $\mu(Y)\mu(X|Y)\mu(Z|X,Y)$, so we can rewrite the formula above as
\begin{equation*}
\aar{\dg M_4}
= \inf_{\mu(Y)} \inf_{\mu(X|Y)} \inf_{\mu(Z|Y,X)}
\Ex_{y \sim \mu(Y)} \Ex_{x \sim \mu(X|y)} \Ex_{z \sim \mu(Z|y,x)} \left[
\beta \log \frac{\mu(y)\, \mu(x\,|\,y)}{p(x)\,f(y\,|\,x)}
+ \zeta \log \frac{\mu(y)\, \mu(z\,|\,y)}{q(z)\,f(y\,|\,z)}
\right], % \\
\end{equation*}
where $\mu(Z|Y)$ is defined in terms of the primitives $\mu(X|Y)$ and $\mu(Z|X,Y)$ as $\mu(Z|Y) := y\mapsto \Ex_{x\sim \mu(X|y)} \mu(Z|y,x)$, and is a valid cpd, since it is a mixture distribution.
Since the first term (with $\beta$) does not depend on $z$, we can take it out of the expectation, so
\begin{align*}
\aar{\dg M_4}
&= \inf_{\mu(Y)} \inf_{\mu(X|Y)} \inf_{\mu(Z|Y,X)}
\Ex_{y \sim \mu(Y)} \Ex_{x \sim \mu(X|y)} \left[
% \beta \log \frac{\mu(Y)\, \mu(X|Y)}{p(X)\,f(Y|X)}
\beta \log \frac{\mu(y)\, \mu(x\,|\,y)}{p(x)\,f(y\,|\,x)}
+~~ \zeta~ \Ex_{\substack{\vphantom{|}\\\mathclap{z\sim\mu(Z|y,x)}}}
% + \zeta \!\!\!\Ex_{\substack{\vphantom{|}\\{z\sim\mu(Z|y,x)}}} \!\!\!
\Big[ \log \frac{\mu(y)\, \mu(z\,|\,y)}{q(z)\,f(y\,|\,z)} \Big]
\right]; \\
\intertext{we can split up $\Ex_{\mu(X|y)}$ by linearity of expectation, to get}
\aar{\dg M_4}
&= \inf_{\mu(Y)} \inf_{\mu(X|Y)} \inf_{\mu(Z|Y,X)}
\Ex_{y \sim \mu(Y)} \left[
% \beta \log \frac{\mu(Y)\, \mu(X|Y)}{p(X)\,f(Y|X)}
\beta \!\! \Ex_{\substack{\vphantom{x}\\x\sim\mu(X|y)}}\!\!\Big[
\log \frac{\mu(y)\, \mu(x\,|\,y)}{p(x)\,f(y\,|\,x)} \Big]
+ \zeta\!\! \Ex_{\substack{x \sim \mu(X|y)\\ z\sim\mu(Z|y,x)}}\!\!\Big[
\log \frac{\mu(y)\, \mu(z\,|\,y)}{q(z)\,f(y\,|\,z)} \Big]
\right]
% &\text{[linearity of expectation]}
% \\
\end{align*}
Note that the quantity inside the second expectation does not depend on $x$. Therefore,
the second expectation is just an explicit way of sampling $z$ from the mixture
distribution $\Ex_{x \sim \mu(X|y)} \mu(Z|x,y)$, which is the definition of $\mu(Z|y)$.
Once we make this replacement, it becomes clear that the only feature of $\mu(Z|Y,X)$ that
matters is the mixture $\mu(Z|Y)$. Simplifying the second expectation in this way, and replacing the infimum over $\mu(Z|X,Y)$ with one over $\mu(Z|Y)$ yields:
\begin{equation*}
\aar{\dg M_4}
= \inf_{\mu(Y)} \inf_{\mu(X|Y)} \inf_{\mu(Z|Y)}
\Ex_{y \sim \mu(Y)} \left[
% \beta \log \frac{\mu(Y)\, \mu(X|Y)}{p(X)\,f(Y|X)}
\beta \!\! \Ex_{\substack{\vphantom{x}\\x\sim\mu(X|y)}}\!\!\Big[
\log \frac{\mu(y)\, \mu(x\,|\,y)}{p(x)\,f(y\,|\,x)} \Big]
+ \zeta\!\! \Ex_{\substack{\vphantom{|}\\ z\sim\mu(Z|y)}}\!\!\Big[
\log \frac{\mu(y)\, \mu(z\,|\,y)}{q(z)\,f(y\,|\,z)} \Big]
\right]
\end{equation*}
Now, a cpd $\mu(X|Y)$ is
[modulo measurability concerns that do not affect the infemum; see <Ref>]
a (possibly different) distribution $\nu_y(X)$ for every value of $Y$.
Observe that, inside the expectation over $\mu(Y)$, the cpds $\mu(X|Y)$ and $\mu(Z|Y)$ are used only for the present value of $y$, and do not reference, say, $\mu(X|y')$ for $y'\ne y$.
Because there is no interaction between the choice of cpd $\mu(X|y)$ and $\mu(X|y')$, it is not necessary to jointly optimize over entire cpds $\mu(X|Y)$ all at once.
Rather, it is equivalent to take the infimum over $\nu(X)$, separately for each $y$.
Symmetrically, we may as well take the infimum over $\lambda(Z)$ separately for each $y$, rather than jointly finding the optimal $\mu(Z|Y)$ all at once.
Operationally, this means we can pull the infima inside the expectation over $Y$. And since the first term doesn't depend on $Z$ and the second doesn't depend on $X$, we get:
\begin{equation*}
\aar{\dg M_4}
= \inf_{\mu(Y)}
\Ex_{y \sim \mu(Y)} \left[
% \beta \log \frac{\mu(Y)\, \mu(X|Y)}{p(X)\,f(Y|X)}
\inf_{\nu(X)} \beta \Ex_{\nu(X)} \Big[
\log \frac{\mu(y)\, \nu(X)}{p(X)\,f(y\,|X)} \Big]
+ \inf_{\lambda(Z)} \zeta \Ex_{\lambda(Z)} \Big[
\log \frac{\mu(y)\, \lambda(Z)}{q(Z)\,f(y\,|Z)} \Big]
\right]
\end{equation*}
Next, we pull the same trick we've used over and over: find constants so that we can regard the dependence as a relative entropy with respect to the quantity being optimized.
Grouping the quantities apart from $\nu(X)$ on the left term and normalizing them (and analogously for $\lambda(Z)$ on the right), we find that
\begin{equation*}
\aar{\dg M_4}
= \inf_{\mu(Y)}
\Ex_{y \sim \mu(Y)} \left[
\begin{array}{l}
\beta \inf_{\nu(X)}
\kldiv* {\nu(X)}{\frac{1}{C_1(y)} p(X)\frac{f(y|X)}{\mu(y)}}
- \beta \log C_1(y) \\
+ \zeta \inf_{\lambda(Z)}
\kldiv* {\lambda(Z)}{\frac{1}{C_2(y)} q(Z)\frac{f(y|Z)}{\mu(y)}}
- \zeta \log C_2(y)
\end{array}
\right],
\end{equation*}
\[
C_1(y) = \sum_x p(x)\frac{f(y|x)}{\mu(y)} = \frac{1}{\mu(y)} \Ex_{p(X)} f(y|X)
\qquad\text{and}\qquad
C_2(y) = \sum_z q(z)\frac{f(y|z)}{\mu(y)} = \frac{1}{\mu(y)} \Ex_{q(Z)} f(y|Z)
\]
are the constants required to normalize the distributions. Both relative entropies are minimized when their arguments match, at which point they contribute zero, so we have
\begin{align*}
\aar{\dg M_4}
&= \inf_{\mu(Y)}
\Ex_{y \sim \mu(Y)} \left[
% \beta \inf_{\nu(X)}
% \kldiv* {\nu(X)}{\frac{1}{C_1(y)} p(X)\frac{f(y|X)}{\mu(y)}}
\beta \log \frac1{C_1(y)}
% + \zeta \inf_{\lambda(Z)}
% \kldiv* {\lambda(Z)}{\frac{1}{C_2(y)} q(Z)\frac{f(y|Z)}{\mu(y)}}
+ \zeta \log \frac1{C_2(y)}
\right]\\
&= \inf_{\mu(Y)}
\Ex_{y \sim \mu(Y)} \left[
\beta \log \frac{\mu(y)}{\Ex_{p(X)} f(y|X)}
+ \zeta \log \frac{\mu(y)}{\Ex_{q(Z)} f(y|Z)} \right] \\
&= \inf_{\mu(Y)} \Ex_{\mu} \Big[ \beta \kldiv{\mu}{f \circ p} + \zeta \kldiv{\mu}{f\circ q} \Big] \\
&= \aar{\dg M_5}.
\end{align*}
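The inner step in this derivation — each infimum contributing exactly $-\log C_1(y)$, attained at the normalized tilt $\nu^*(x)\propto p(x)f(y|x)$ — can be spot-checked numerically. A minimal sketch in Python; the discrete $p$, likelihoods $f(y|\cdot)$, and value of $\mu(y)$ are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete instance: 5 values of X, one fixed observation y.
p = rng.dirichlet(np.ones(5))         # prior p(X)
f_y = rng.uniform(0.1, 1.0, size=5)   # likelihoods f(y|x)
mu_y = 0.3                            # an arbitrary value for mu(y)

def objective(nu):
    """E_{x~nu}[ log( mu(y) nu(x) / (p(x) f(y|x)) ) ]."""
    return float(np.sum(nu * np.log(mu_y * nu / (p * f_y))))

# Normalizer C_1(y) = (1/mu(y)) E_p[f(y|X)], as in the text.
C1 = np.dot(p, f_y) / mu_y

# The infimum is attained at nu* proportional to p(x) f(y|x),
# with value -log C_1(y).
nu_star = p * f_y
nu_star /= nu_star.sum()
assert abs(objective(nu_star) + np.log(C1)) < 1e-12

# No other distribution does better (the residual KL term is nonnegative).
for _ in range(200):
    nu = rng.dirichlet(np.ones(5))
    assert objective(nu) >= objective(nu_star) - 1e-12
```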
§.§.§ Details for Claims made in Section 8
First, the fact that
\[
% \dg M_1 := ~
\mathcal L_1 = \lambda_\dsymb\mathcal L_\datsymb + \lambda_\ssymb \mathcal L_\simsymb = \aar**{
\begin{tikzpicture}[center base]
% \node[dpad0] (Z) at (-0.2,0) {$Z$};
% \node[tpt={z0|$0$}] at (-0.5,0.1) {};
% \node[tpt={z1|$1$},right=0.15 of z0]{};
\node[tpt={z0|\simsymb}] at (-0.5,0.1) {};
\node[tpt={z1|\datsymb},right=0.35 of z0]{};
\node[Dom={$Z$[label distance=-2.5ex, xshift=1.0em] (Z)
around {\lab{z0}\lab{z1}}},yshift=0.2em ] {};
% \node[dpad0,align=center] (XY) at (1.8,0) {$XY$}; %{$X$\\[-0.3ex]$Y$};
\node[dpad0] (X) at (2.4, 0.6) {$X$};
\node[dpad0] (Y) at (2.4, -0.6) {$Y$};
\coordinate (xyz) at (1.9, 0);
\draw[arr1, <-] (Z) to
% node[above, pos=0.6]{$\hat\lambda$}
node[above, pos=0.6]{$\lambda$}
% node[above, pos=0.6]{$\frac{1}{\lambda_0+\lambda_1}[\lambda_0, \lambda_1]$}
node[below,inner sep=1pt, pos=0.6]{${\color{gray}\scriptstyle( \infty )}$}
+(-1.5, 0);
% \node at (-1,-0.6) {\small where $\lambda(Z) = \frac{\lambda_Z}{\lambda_0+\lambda_1}$};
% \node at (0,-0.6) {\small where $\lambda(Z) \propto \lambda_Z$};
\draw[arr1] (X) to node[right,pos=0.4]{$h$} (Y);
\draw[arr,-,shorten >=0pt] (Z) to[bend left=0, shorten >=0pt]
% node[fill=white, inner sep=0pt, pos=0.55]
% node[inner sep=1pt, pos=0.55]
node[above, inner sep=1pt, pos=0.55]
% {$Z?d:s$}
% {$\begin{bmatrix}d \text{ if } Z\\[-0.3ex] s \text{ else}\end{bmatrix}$}
% {$\begin{matrix}d \text{ if } Z\!\!=\!\!1\\[-0.3ex]
% % s \text{ else}\end{matrix}$}
% \text{else }s\end{matrix}$}
{$\begin{matrix}\datsymb \mapsto d \\[-0.6ex]
\simsymb \mapsto s \end{matrix}$}
% node[above, inner sep=2pt, pos=0.68]
% {${\color{gray}\scriptscriptstyle(r)}$}
node[below,inner sep=1pt]{${\color{gray}\scriptstyle( \infty )}$}
(xyz);
\draw[arr2, shorten <=0pt] (xyz) to (X);
\draw[arr2, shorten <=0pt] (xyz) to (Y);
\end{tikzpicture}}
% (\lambda_\ssymb+\lambda_\dsymb),
\]
where $\lambda(Z=\simsymb) = \lambda_\ssymb$ and
$\lambda(Z=\datsymb) = \lambda_\dsymb$
is immediate.
The two cpds with infinite confidence ensure that the only joint distribution with a finite score is $\lambda_\ssymb s + \lambda_\dsymb d$, and the inconsistency with $h$ is its surprisal, so the inconsistency of this PDG is
\begin{align*}
\Ex_{\lambda_\ssymb s + \lambda_\dsymb d} \Big[\log \frac{1}{h(Y|X)}\Big]
= - \lambda_\ssymb \Ex_{s} [\log {h(Y|X)}] - \lambda_\dsymb \Ex_{d}[\log h(Y|X)]
= \lambda_\dsymb\mathcal L_\datsymb + \lambda_\ssymb \mathcal L_\simsymb
= \mathcal L_1,
\quad\text{as promised.}
\end{align*}
The second correspondence is the least straightforward. Let $C = \int sd$ be the normalization constant required to normalize the joint density $sd$. We claim that, for large fixed $\gamma$, we have
\[
% \dg M_2 :=
\mathcal L_2 \approx
\aar**{
\begin{tikzpicture}[center base]
\node[dpad0] (X) at (0, 0.6) {$X$};
\node[dpad0] (Y) at (0, -0.6) {$Y$};
\draw[arr1] (X) to node[left, pos=0.4, inner sep=1pt]{$h$}
% node[below=0pt,inner sep=1pt,rotate=90]{${\color{gray}\scriptstyle(\!\alpha{:}0\!)}$}
%oli: NO NEED!
% node[right=0pt,inner sep=1pt]{${\color{gray}\scriptstyle
% \renewcommand{\arraystretch}{.7}
% \big(\begin{matrix}
% \scriptstyle \alpha : 0 \\ \scriptstyle \beta : 1
% \end{matrix}
% \big)}$}
\coordinate (d0) at (1.8, 0);
\coordinate (dmid) at (0.9, 0);
\coordinate (s0) at (-1.8, 0);
\coordinate (smid) at (-0.9, 0);
\draw[arr,->,shorten <=0pt] (dmid) to[bend right=25] (X);
\draw[arr,->,shorten <=0pt] (dmid) to[bend left=25] (Y);
\draw[arr1,-,shorten <=0pt] (dmid) to
node[below, inner sep=2pt]{${\color{gray}\scriptstyle
\renewcommand{\arraystretch}{.7}
\big(\begin{matrix}
\scriptstyle\alpha: 1 \\[-0.2ex] \scriptstyle\beta: \gamma
\end{matrix} \big)}$}
node[above] {$d$}
(d0);
\draw[arr,->,shorten <=0pt] (smid) to[bend left=25] (X);
\draw[arr,->,shorten <=0pt] (smid) to[bend right=25] (Y);
\draw[arr1,-,shorten <=0pt] (smid) to
node[below, inner sep=2pt]{${\color{gray}\scriptstyle
\renewcommand{\arraystretch}{.7}
\big( \begin{matrix}
\scriptstyle \alpha: 1 \\[-0.2ex] \scriptstyle \beta: \gamma
\end{matrix} \big)}$}
node[above] {$s$}
(s0);
\end{tikzpicture}}\Bigg._{\!\!\!\gamma}
% - k \log Z_{sd} + H
% - k \log C,
+ \mathit{const},
% \overbrace{ - k \log C, }^{\text{normalization constant for $sd$}}
\]
where $\mathit{const}$ does not depend on $h$. To see this, let $\dg M_2$ be the PDG above, and compute
\begin{align*}
% \aar**{\!\!\!
% \begin{tikzpicture}[center base]
% \node[dpad0] (X) at (0, 0.6) {$X$};
% \node[dpad0] (Y) at (0, -0.6) {$Y$};
% \draw[arr1] (X) to node[left, pos=0.4, inner sep=1pt]{$h$} (Y);
% \coordinate (d0) at (1.8, 0);
% \coordinate (dmid) at (0.9, 0);
% \coordinate (s0) at (-1.8, 0);
% \coordinate (smid) at (-0.9, 0);
% \draw[arr,->,shorten <=0pt] (dmid) to[bend right=25] (X);
% \draw[arr,->,shorten <=0pt] (dmid) to[bend left=25] (Y);
% \draw[arr1,-,shorten <=0pt] (dmid) to
% node[below, inner sep=2pt]{${\color{gray}\scriptstyle
% \renewcommand{\arraystretch}{.7}
% \big(\begin{matrix}
% \scriptstyle\alpha: 1 \\[-0.2ex] \scriptstyle\beta: \gamma
% \end{matrix} \big)}$}
% node[above] {$d$}
% (d0);
% %
% \draw[arr,->,shorten <=0pt] (smid) to[bend left=25] (X);
% \draw[arr,->,shorten <=0pt] (smid) to[bend right=25] (Y);
% \draw[arr1,-,shorten <=0pt] (smid) to
% node[below, inner sep=2pt]{${\color{gray}\scriptstyle
% \renewcommand{\arraystretch}{.7}
% \big( \begin{matrix}
% \scriptstyle \alpha: 1 \\[-0.2ex] \scriptstyle \beta: \gamma
% \end{matrix} \big)}$}
% node[above]{$s$}
% (s0);
% \end{tikzpicture}\!\!}\Bigg._{\!\!\!\gamma}
\aar{\dg M_2}_\gamma
&= \inf_{\mu(X,Y)} \Ex_{\mu} \bigg[
\overbracket{
\gamma \log \frac{\mu(XY)}{s(XY)}
\frac{\mu(XY)}{d(XY)}
+ \log \frac{\mu(Y|X)}{h(Y|X)} }^{\Inc(\mu)}
+ \overbracket{
\gamma \log \frac{1}{\mu(XY)}
\frac{1}{\mu(XY)}
- \gamma \log \frac{1}{\mu(XY)}
}^{\gamma\,\IDef{\dg M_2}(\mu)}
\bigg] \\
&= \inf_{\mu(X,Y)} \Ex_{\mu} \bigg[
\gamma \log \frac{\mu(XY)}{s(XY)}
\frac{\mu(XY)}{d(XY)}
\frac{1}{\mu(XY)}
\frac{1}{\mu(XY)}
\frac{\mu(XY)}{1}
+ \log \frac{\mu(Y|X)}{h(Y|X)}
\bigg] \\
&= \inf_{\mu(X,Y)} \Ex_{\mu} \bigg[
\gamma \log
\frac{\mu(XY)}{s(XY)d(XY)}
+ \log \frac{\mu(Y|X)}{h(Y|X)}
\bigg] \\
&= \inf_{\mu(X,Y)} \Ex_{\mu} \bigg[
\gamma \log
\frac{\mu(XY) C}{s(XY)d(XY)} - \gamma \log C
+ \log \frac{\mu(Y|X)}{h(Y|X)}
\bigg] \\
&= \inf_{\mu(X,Y)}
\gamma \kldiv*{\mu}{\frac1C{sd}} +
\Ex_{\mu} \bigg[ \log \frac{\mu(Y|X)}{h(Y|X)}\bigg]
- \gamma \log C \\
\end{align*}
$\thickD$ is $(\gamma m)$-strongly convex in a region around its minimizer for some $m>0$ that depends only on $s$ and $d$.
Together with our assumption that $h$ is positive, we find that when $\gamma$ becomes large,
the first term dominates, and the optimizing $\mu$ quickly approaches the normalized density $\nu := \frac1Csd$.
Plugging in $\nu$, we find that the value of the infimum approaches
\begin{align*}
\aar{\dg M_2} &\approx \Ex_{\nu} \bigg[ \log \frac1{h(Y|X)} \bigg] - H_{\nu}(Y|X) - \gamma \log C \\
&= \int_{XY} \frac1C \log \frac1{h(Y|X)} s(X,Y) d(X,Y) \quad- H_{\nu}(Y|X) - \gamma \log C \\
&= \frac{1}{C} \Ex_{s} \bigg[ d(X,Y) \log \frac1{h(Y|X)} \bigg] - H_{\nu}(Y|X) - \gamma \log C \\
&= \frac{1}{C} \mathcal L_2 - H_{\nu}(Y|X) - \gamma \log C, \\[1ex]
% \implies\qquad
\text{and therefore}\qquad
\mathcal L_2 &= C \aar{\dg M_2} + C \H_\nu(Y|X) + \gamma \, C \log C
\\ &= C \aar{\dg M_2} + \mathit{const}.
\end{align*}
Finally, we turn to
\[
% \dg M_3 :=
\mathcal L_3 := \aar**{
\begin{tikzpicture}[center base]
\node[dpad0] (X) at (0, 0.6) {$X$};
\node[dpad0] (Y) at (0, -0.6) {$Y$};
\draw[arr1] (X) to node[left=0pt,pos=0.4, inner sep=1pt]{$h$} (Y);
% \coordinate (d0) at (1.3, 0);
% \coordinate (s0) at (-1.3, 0);
% \node[above left=1pt and 0.5em of d0] {$d$};
% \node[below left=0pt and 0.2em of d0]{${\color{gray}\scriptstyle( \lambda_1 )}$};
% \node[above right=1pt and 0.5em of s0] {$s$};
% \node[below right=0pt and 0.2em of s0]{${\color{gray}\scriptstyle( \lambda_0 )}$};
\coordinate (d0) at (1.8, 0);
\coordinate (dmid) at (0.9, 0);
\coordinate (s0) at (-1.8, 0);
\coordinate (smid) at (-0.9, 0);
\draw[arr,->,shorten <=0pt] (dmid) to[bend right=25] (X);
\draw[arr,->,shorten <=0pt] (dmid) to[bend left=25] (Y);
\draw[arr1,-,shorten <=0pt] (dmid) to
node[below, inner sep=2pt]{${\color{gray}\scriptstyle(\lambda_{\dsymb})}$}
node[above] {$d$}
(d0);
\draw[arr,->,shorten <=0pt] (smid) to[bend left=25] (X);
\draw[arr,->,shorten <=0pt] (smid) to[bend right=25] (Y);
\draw[arr1,-,shorten <=0pt] (smid) to
node[below, inner sep=2pt]{${\color{gray}\scriptstyle(\lambda_{\ssymb})}$}
node[above] {$s$}
(s0);
% \unmergearr{s0}XY
% \unmergearr{d0}XY
% \draw[arr,-,shorten >=0pt] (Z) to[bend left=0, shorten >=0pt]
% node[fill=white, inner sep=0pt, pos=0.55]
% % {$Z?d:s$}
% {$\begin{bmatrix}d\\[-0.3ex] s\end{bmatrix}$}
% % node[above, inner sep=2pt, pos=0.68]
% % {${\color{gray}\scriptscriptstyle(r)}$}
% (xyz);
% \draw[arr2, shorten <=0pt] (xyz) to (X);
% \draw[arr2, shorten <=0pt] (xyz) to (Y);
% ode[dpad0] (X) {};
\end{tikzpicture}}.
\]
To see why the optimal distribution $\mu^*(XY)$ is the $\lambda$-weighted geometric mean of $s$ and $d$, let us first consider the same PDG, except without $h$.
From <Ref>, we have this loss without $h$ in closed form, and from the proof of <Ref>, we see that the optimizing distribution in this case is
the $\lambda$-weighted geometric mean $\mu^* \propto s(XY)^{\lambda_\ssymb} d(XY)^{\lambda_\dsymb}$.
Now, by <Ref>, including $h$ cannot make the PDG any less inconsistent. In particular, by choosing
\[
h^*(Y|X) := \mu^*(Y|X) \propto s(Y|X)^{\lambda_\ssymb} d(Y|X)^{\lambda_\dsymb},
\]
to be already compatible with this joint distribution, the inconsistency does not change, while choosing a different $h$ would cause the inconsistency to increase. Thus, the optimal classifier $h^*$ by this metric is indeed as we claim. Finally, it is easy to see that this loss is calibrated: if $s = d$, then the optimal joint distribution is equal to $s$ and to $d$, and the optimal classifier is $h(Y|X) = s(Y|X) = d(Y|X)$. So $\mathcal L_3$ is calibrated.
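The optimality of the weighted geometric mean can be verified numerically: with weights summing to one, the normalized $\lambda$-weighted geometric mean beats every other distribution on the weighted sum of relative entropies. A toy sketch; the discrete $s$, $d$, and the weights are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two joint distributions s, d over a small finite space, and weights
# lam_s + lam_d = 1 (illustrative values).
s = rng.dirichlet(np.ones(6))
d = rng.dirichlet(np.ones(6))
lam_s, lam_d = 0.3, 0.7

def score(mu):
    """lam_s * KL(mu||s) + lam_d * KL(mu||d)."""
    return float(lam_s * np.sum(mu * np.log(mu / s))
                 + lam_d * np.sum(mu * np.log(mu / d)))

# Claimed minimizer: the lambda-weighted geometric mean of s and d, normalized.
mu_star = s**lam_s * d**lam_d
mu_star /= mu_star.sum()

# Random competitors never do better.
for _ in range(200):
    mu = rng.dirichlet(np.ones(6))
    assert score(mu) >= score(mu_star) - 1e-12
```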
§.§.§ Details for Claims made in Section 9
Distortion Due to Inconsistency.
In the footnote on fn:logEexp, we claimed that if the model confidence $\beta_p$ were 1 rather than $\infty$, we would have obtained an inconsistency of
$ - \log \Ex_{x\sim p} \exp(- c(x)) $,
and that the optimal distribution would not have been $p(X)$.
\begin{align*}
\aar*{\!\begin{tikzpicture}[center base]
\node[dpad0] (X) at (0,0) {$X$};
\node[dpad0] (2) at (1.1,0) {$\Truth$};
\draw[arr2] (X) to
node[above, pos=0.4,inner sep=2pt]{$\hat c$}
% node[below, pos=0.4, inner sep=2pt]{${\color{gray}\scriptstyle(\beta)}$}
(2);
\draw[arr2, <-] (X) to
node[above, pos=0.6, inner sep=2pt]{$p$}
% node[below, pos=0.6, inner sep=2pt]
% {${\color{gray}\scriptscriptstyle(\mskip-2mu\infty\mskip-2mu)}$}
+(-1, 0);
\draw[arr2, <<-] (2) to
node[above, inner sep=2pt, pos=0.6]{}
+(0,1);
\end{tikzpicture}\!}
&= \inf_{\mu(X)} \Ex_{x \sim \mu} \left[ \log \frac{\mu(x)}{p(x)}
+ \log \frac{\mu(\trut\,|\,x)}{\hat c(\trut\,|\,x)} \right]\\
&= \inf_{\mu(X)} \Ex_{x \sim \mu} \left[ \log \frac{\mu(x)}{p(x)}
+ \log \frac{1}{\hat c(\trut\,|\,x)} \right] \\
&= \inf_{\mu(X)} \Ex_{x \sim \mu} \left[ \log \frac{\mu(x)}
{p(x) \exp(-c(x))}\cdot\frac{Z}{Z} \right] \\
\intertext{\raggedleft where $Z = \sum_x p(x) \exp(-c(x)) = \Ex_p \exp(-c(X))$ is the constant required to normalize the distribution $\frac{1}{Z}\,p(X)\exp(-c(X))$,}
&= \inf_{\mu(X)} \kldiv*{\mu}{\frac{1}{Z}\,p(X)\exp(-c(X))} - \log Z \\
&= - \log Z \\
&= - \log \Ex_{x\sim p} \exp(-c(x))
\end{align*}
as promised.
Note also that in the proof, we showed that the optimal distribution is proportional to $p(X) \exp(-c(X))$, which means that it equals $p(X)$ if and only if $c(X)$ is constant in $X$.
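The computation above is an instance of the Gibbs variational principle: the optimizer is the tilt $p(x)\exp(-c(x))/Z$ and the optimal value is $-\log Z$. A quick numerical check, with an arbitrary discrete $p$ and cost $c$ (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(2)
p = rng.dirichlet(np.ones(8))       # model p(X)
c = rng.uniform(0.0, 3.0, size=8)   # cost c(x)

def score(mu):
    """E_mu[ log(mu/p) + c ]: relative entropy to p plus expected cost."""
    return float(np.sum(mu * (np.log(mu / p) + c)))

# Optimal distribution: the Gibbs tilt p(x)exp(-c(x)), normalized;
# optimal value: -log E_p[exp(-c)].
Z = np.dot(p, np.exp(-c))
mu_star = p * np.exp(-c) / Z
assert abs(score(mu_star) + np.log(Z)) < 1e-12

# Every other distribution scores at least as high.
for _ in range(200):
    mu = rng.dirichlet(np.ones(8))
    assert score(mu) >= score(mu_star) - 1e-12
```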
Enforcing the Qualitative Picture.
We also claimed without careful proof in <Ref> that, if $\alpha_h = \alpha_{\datadist\xysamp} = 1$, then
\begin{equation*}
\lim_{\gamma\to\infty}
\left.
\aar**{\begin{tikzpicture}[center base]
\begin{scope}[xscale=1.2]
\node[dpad0] (X) at (0.3,0) {$X$};
\node[dpad0] (Yt) at (1,1) {$Y$};
\node[dpad0,align=center] (Yp) at (1.4,0) {$\vphantom{Y}\smash{Y'}$};
\node[dpad0] (2) at (2,1) {$\Truth$};
\coordinate (dstart) at (-0.1,0.9);
\end{scope}
\unmergearr[arr1]{dstart}{X}{Yt}
\node[above=2pt of center-dstartXYt, xshift=-2pt] {$\datadist\xysamp$};
\node[below right=2.0pt and -0.4pt of center-dstartXYt, inner sep=0pt, rotate=25]
{${\color{gray}\scriptstyle(\infty)}$};
\mergearr[arr2]{Yt}{Yp}{2}
\node[above=2pt of center-YtYp2] {$\hat\ell$};
\draw[arr2] (X) to
node[above, inner sep=2pt,pos=0.4] {$h$}
node[below, inner sep=2pt,pos=0.4]{${\color{gray}\scriptstyle(\infty)}$}
(Yp);
\draw[arr2, <<-] (2) to
node[right, inner sep=2pt, pos=0.6]{}
+(0,1);
\end{tikzpicture}}\right._{\!\!\!\gamma}
= \quad\;\;\mathop{\scalebox{1.2}{$\Ex$}}\limits_{\substack{%
\vphantom{x}\\
\mathllap{(x,y)} \sim \mathrlap{\datadist\xysamp} \\
\mathllap{y'} \sim \mathrlap{p(Y'|\,x)}} }
\;\big[\ell(y,y')\big]
\end{equation*}
Why is this? For such a setting of $\alpha$, which intuitively articulates a causal picture where $X,Y$ is generated from $\datadist\xysamp$, and $Y'$ generated by $h(Y'|X)$, the information deficiency $\IDef{\dg S}(\mu(X,Y,Y'))$ of a distribution $\mu$ is
\begin{align*}
\IDef{\dg S}(\mu(X,Y,Y')) &= -\H_\mu(X,Y,Y') + \H_\mu(X,Y) + \H_\mu(Y'|X) \\
&= \H_\mu(Y'|X) - \H_\mu(Y' | X, Y) \\
&= \I_\mu(Y;Y'|X).
\end{align*}
Both equalities in the derivation above are standard information-theoretic identities [See, for instance,][]mackay2003information. The final quantity $\I_{\mu}(Y;Y'|X)$ is the conditional mutual information between $Y$ and $Y'$ given $X$, a non-negative number that equals zero if and only if $Y$ and $Y'$ are conditionally independent given $X$.
As a result, as $\gamma\to\infty$, any distribution for which $Y'$ and $Y$ are not independent given $X$ incurs infinite cost. Since the confidences in $h$ and $\datadist\xysamp$ are also infinite, so does any violation of either cpd.
There is only one distribution that satisfies both cpds and also this independence; that distribution is $\mu(X,Y,Y') := \datadist\xysamp(X,Y)h(Y'|X)$.
Now the argument of <Ref> applies: all other cpds must be matched, and the inconsistency is the expected incompatibility with $\hat\ell$, which equals
\[ \quad\;\;\mathop{\scalebox{1.2}{$\Ex$}}\limits_{\substack{%
\vphantom{x}\\
\mathllap{(x,y)} \sim \mathrlap{\datadist\xysamp} \\
\mathllap{y'} \sim \mathrlap{p(Y'|\,x)}} }
\; \log\frac{1}{\hat\ell(\trut\,|y,y')}
= \quad\;\;\mathop{\scalebox{1.2}{$\Ex$}}\limits_{\substack{%
\vphantom{x}\\
\mathllap{(x,y)} \sim \mathrlap{\datadist\xysamp} \\
\mathllap{y'} \sim \mathrlap{p(Y'|\,x)}} }
\; \log\frac{1}{\exp(-\ell(y,y'))}
= \quad\;\;\mathop{\scalebox{1.2}{$\Ex$}}\limits_{\substack{%
\vphantom{x}\\
\mathllap{(x,y)} \sim \mathrlap{\datadist\xysamp} \\
\mathllap{y'} \sim \mathrlap{p(Y'|\,x)}} }
\;\big[ \log \exp(\ell(y,y')) \big]
= \quad\;\;\mathop{\scalebox{1.2}{$\Ex$}}\limits_{\substack{%
\vphantom{x}\\
\mathllap{(x,y)} \sim \mathrlap{\datadist\xysamp} \\
\mathllap{y'} \sim \mathrlap{p(Y'|\,x)}} }
\;\big[\ell(y,y')\big]
= \mathcal L
\]
§ MORE NOTES
§.§ Maximum A Posteriori and Priors
The usual telling of the correspondence between regularizers and priors is something like the following.
Suppose you have a parameterized family of distributions
and have observed evidence $X$, but do not know the parameter $\Theta$.
The maximum-likelihood estimate of $\Theta$ is then
\[
\theta^{\mathrm{MLE}}(X) := \arg\max_{\theta\in \Theta} \Pr(X|\theta)
= \arg\max_{\theta\in \Theta} \log \Pr(X|\theta).
\]
The logarithm is a monotonic transformation, so it does not change the argmax, but it has
nicer properties, so that function is generally used instead. (Many of the loss functions in the main body of the paper are also log-likelihoods.)
In some sense, better than a maximum-likelihood point estimate is to perform a Bayesian update with the new information, obtaining a full posterior distribution over $\Theta$.
If that is too expensive, we can simply take the parameter value with the highest posterior probability, which is called the maximum a posteriori (MAP) estimate.
For any given $\theta$, the Bayesian reading of Bayes' rule states that
\[
\text{posterior $\Pr(\Theta | X)$} = \frac
{\text{likelihood $\Pr(X|\Theta)$}\cdot\text{prior $\Pr(\Theta)$}}{\text{evidence $\Pr(X) = \sum_{\theta'} \Pr(X|\theta')\Pr(\theta')$}}.
\]
So taking a logarithm,
\[
\text{log-posterior $\log \Pr(\Theta | X)$} = \text{log-likelihood $\log \Pr(X|\Theta)$} ~+~ \text{log-prior $\log \Pr(\Theta)$}
- \text{log-evidence $\log \Pr(X)$}.
\]
The final term does not depend on $\theta$, so it is not relevant for finding the optimal $\theta$ by this metric. Swapping the signs so that we are taking a minimum rather than a maximum, the MAP estimate is then given by
\[
\theta^{\mathrm{MAP}}(X) := \arg\min_{\theta \in \Theta} \left\{ \log \frac{1}{\Pr(X|\theta)} + \log \frac1{\Pr(\theta)} \right\}.
\]
Note that if negative log-likelihood (or surprisal, $-\log \Pr(X|\theta)$) were our original loss function, we have now added an arbitrary extra term, as a function of $\Theta$, to our loss function. It is in this sense that priors classically correspond to regularizers.
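A concrete instance of this correspondence: in a linear-Gaussian model, a Gaussian prior on the parameter turns the MAP objective into least squares plus an L2 (ridge) penalty with weight $\sigma^2/\tau^2$. A minimal sketch; the variances and data below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Model: X_i = theta * t_i + Gaussian noise, with a Gaussian prior on theta.
t = rng.uniform(-1, 1, size=50)
x = 1.5 * t + 0.3 * rng.standard_normal(50)
sigma2, tau2 = 0.3**2, 1.0**2   # noise and prior variances (illustrative)

# Negative log-posterior = ||x - theta t||^2/(2 sigma2) + theta^2/(2 tau2) + const,
# i.e. least squares plus a ridge penalty; the closed-form minimizer:
theta_map = np.dot(t, x) / (np.dot(t, t) + sigma2 / tau2)

# Agrees with a brute-force minimization of the negative log-posterior.
grid = np.linspace(-5, 5, 20001)
nlp = ((x[:, None] - grid[None, :] * t[:, None])**2).sum(axis=0) / (2 * sigma2) \
      + grid**2 / (2 * tau2)
assert abs(grid[np.argmin(nlp)] - theta_map) < 1e-3
```

Taking $\tau^2 \to \infty$ (a flat prior) removes the penalty and recovers the MLE, matching the discussion above.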
§.§ Surprise
A common justification for using $\I_p(x)$ as a cost for updating a probabilistic model $p(x)$ based on an observed sample $x$, is that by minimizing it, you “maximize the probability of seeing your data”.
[this justification should not be taken too seriously without constraints on $p$, because the optimal value of $p$ is $\delta_x$, which does not generalize.]
But this explanation applies just as well to $-p(x)$. Why include the logarithm?
There are plenty of answers to this question; among them: $\I_p$ is convex in $p$, it decomposes products into arguably simpler sums, it is more numerically stable, it has a well-defended physical analog in thermodynamics, and it is a primitive of information theory.
For those after a quick and rigorous justification (as opposed to handwaving or a thermodynamics textbook), none of these answers are entirely satisfying.
They suggest that $\I_p$ has certain nice properties, but not that it enjoys them uniquely, or that no other loss function satisfies nicer ones.
Pedagogically speaking, the situation is more straightforward for us.
Although PDG semantics themselves require non-trivial justification, they give us in return uniform answers to many questions, starting with:
Why use the surprise $\I_p(x)$ to measure the loss of a model $p(X)$ on sample $x$? Because it is the inconsistency of simultaneously believing $X = x$ and $X \sim p$.
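Two of the properties listed above are easy to check concretely: surprisal turns product probabilities into sums, and $-\log p$ is convex in $p$ while $-p$ is merely linear:

```python
import numpy as np

# Products become sums: for independent events, surprisals add.
p_x, q_y = 0.2, 0.7
assert abs(-np.log(p_x * q_y) - (-np.log(p_x) - np.log(q_y))) < 1e-12

# Convexity in p: -log is below the chord, so it rewards hedged models,
# while -p is indifferent to hedging (exactly on the chord).
for a, b, w in [(0.1, 0.9, 0.3), (0.4, 0.5, 0.8), (0.05, 0.6, 0.5)]:
    mix = w * a + (1 - w) * b
    assert -np.log(mix) <= w * -np.log(a) + (1 - w) * -np.log(b) + 1e-12
    assert abs(-mix - (w * -a + (1 - w) * -b)) < 1e-12
```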
# Secure bound analysis of quantum key distribution with non-uniform random
seed of privacy amplification
Bingze Yan Yucheng Qiao Qiong Li Haokun Mao
## Abstract
Precise quantum key distribution (QKD) secure bound analysis is essential for
practical QKD systems. The effect of uniformity of random number seed for
privacy amplification is not considered in existing secure bound analysis. In
this paper, we propose and prove the quantum leftover hash lemma with non-
uniform random number seeds based on the min-entropy, and we give a precise
QKD secure bound analysis with non-uniform random number seeds on this basis.
We take the two-decoy BB84 protocol as an example to simulate the effect of
random number seed uniformity on the secure bound of a QKD system. The
experimental results indicate that when the average min-entropy of the random
number generator is below 0.95, the secure bound of a QKD system will be
seriously affected.
## Introduction
Quantum key distribution (QKD) technology provides secure communication
service with information-theoretic security [1]. With the development of QKD
technology, QKD has moved towards the practical stage. The practical security
of QKD systems has gradually attracted researchers' attention, and many ideal
assumptions in QKD security analysis have been found to be unsatisfied in
practical QKD systems [2, 3, 4]. One of these assumptions is that the random
number seeds used for privacy amplification in a QKD system must be strictly
uniformly distributed, which is very difficult to guarantee in an actual
system [5]. This gap may seriously affect the security of privacy
amplification, which in turn seriously affects the secure bound of QKD.
However, the exact extent of this impact has not been analyzed.
Privacy amplification (PA) is a necessary part of a QKD system. It is the art
of distilling an information-theoretically secure key from a partially secure
string with a hash function, by public discussion between two parties [6]. To
ensure the security of the keys, the existing PA security proof requires the
hash function to be randomly selected from a universal hash family using
random number seeds [5]. Hayashi et al. quantify the uniformity of random
number seeds with min-entropy, and analyze the effect of the min-entropy of
random number seeds on privacy amplification security under classical
information theory [7]. However, there is still a lack of a corresponding
security analysis under quantum information theory, and of an analysis of the
impact of random seed min-entropy on the secure bound of QKD.
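For concreteness, a standard universal hash family used in privacy amplification is the Toeplitz family over GF(2); in the ideal analysis its seed is drawn uniformly. The toy sketch below (sizes chosen only for illustration) checks the 2-universal collision probability $2^{-m}$ empirically:

```python
import numpy as np

rng = np.random.default_rng(4)

def toeplitz_hash(seed_bits, x_bits, m):
    """Compress len(x_bits) input bits to m bits with a Toeplitz matrix
    over GF(2). Row i of the matrix is seed_bits[i:i+n] reversed, so the
    whole matrix is determined by n + m - 1 seed bits.
    """
    n = len(x_bits)
    assert len(seed_bits) == n + m - 1
    return np.array([np.dot(seed_bits[i:i + n][::-1], x_bits) % 2
                     for i in range(m)], dtype=np.uint8)

# 2-universality: for fixed x != x', a uniformly random seed produces a
# collision with probability 2^-m; estimate that empirically.
n, m, trials = 16, 4, 20000
x = rng.integers(0, 2, n)
xp = 1 - x                      # any string different from x
collisions = sum(
    np.array_equal(toeplitz_hash(seed, x, m), toeplitz_hash(seed, xp, m))
    for seed in rng.integers(0, 2, (trials, n + m - 1))
)
assert abs(collisions / trials - 2**-m) < 0.02
```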
To address this problem, this paper proposes and proves the quantum leftover
hash lemma with non-uniform random number seeds, and derives a precise QKD
secure bound with non-uniform random number seeds on this basis.
To further analyze the influence of PA random number seeds on the secure key
rate of QKD systems, we investigated the average min-entropy of the random
number generators used in existing QKD systems. We found that most systems do
not report the average min-entropy of their random seeds. Therefore, we
surveyed and tested the min-entropy of some random number generators commonly
used in QKD systems. We found that these random number generators cannot
achieve perfect min-entropy, so they have an obvious impact on the secure key
rate.
## Results
### Quantum leftover hash lemma with non-uniform random seeds
We discuss the security of QKD under quantum information theory and universal
composable security. The security of a QKD protocol is assessed in terms of
secrecy and correctness.
Suppose the information possessed by the eavesdropper is $E$. A key is called
${\epsilon}_{sec}$-secret relative to the eavesdropping information $E$ when
the statistical distance between the key and a key that is uniformly
distributed and independent of $E$ is at most ${\epsilon}_{sec}$:
$\frac{1}{2}{\left\|{{\rho_{{\rm{SE}}}}-{S_{U}}\otimes{\rho_{E}}}\right\|_{1}}\leq{\epsilon_{sec}}.$
(1)
In universal composable security theory, key correctness represents the
probability that $S_{A}$ and $S_{B}$ are different:
${Pr}[S_{A}\neq S_{B}]\leq\epsilon_{cor}$ (2)
Considering both secrecy and correctness, when the key is
$\epsilon_{sec}$-secret and $\epsilon_{cor}$-correct, the key is
$\overline{\epsilon}$-secure:
$\overline{\epsilon}=\epsilon_{sec}+\epsilon_{cor}.$ (3)
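For a purely classical key (trivial side information $E$), the trace distance in Eq. (1) reduces to the statistical distance between the key distribution and the uniform one, which is straightforward to compute. A small sketch:

```python
import numpy as np

def secrecy_parameter(p_key):
    """Statistical distance of a classical key distribution from uniform,
    the classical special case of the secrecy criterion with trivial E."""
    p_key = np.asarray(p_key, dtype=float)
    u = np.full(len(p_key), 1.0 / len(p_key))
    return 0.5 * np.abs(p_key - u).sum()

# A perfectly uniform key is 0-secret; skewed keys are penalized.
assert secrecy_parameter([0.25, 0.25, 0.25, 0.25]) == 0.0
assert abs(secrecy_parameter([0.3, 0.3, 0.2, 0.2]) - 0.1) < 1e-12
assert abs(secrecy_parameter([1.0, 0.0, 0.0, 0.0]) - 0.75) < 1e-12
```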
We propose and prove the quantum leftover hash lemma with non-uniform random
seeds under quantum information theory and universal composable security:
Theorem 1 (Quantum Leftover Hash Lemma With Non-Uniform Random Seeds) Let
$F_{R}$ be a universal family of hash functions from $X$ to $S$, let $f_{r}$
be a hash function randomly selected from $F_{R}$ using a random seed
$R\in\\{0,1\\}^{\alpha}$, where $|F_{R}|=2^{\alpha}$ and $P_{F_{R}}$ satisfies
$H_{min}(P_{F_{R}})\geq\beta$, and let $s=f_{r}(x)$. Let
${\rho_{XE}}=\sum\limits_{x}{\left|x\right\rangle{{\left\langle
x\right|}_{X}}\otimes\rho_{E}^{[x]}}$ and cq-states
${\rho_{{F_{R}}{\rm{S}}E}}=\sum\limits_{{f_{r}}}{\sum\limits_{\rm{s}}{{P_{{F_{R}}}}\left|{{f_{r}}}\right\rangle\langle{f_{r}}{|_{{F_{R}}}}\otimes}}\left|s\right\rangle\langle
s{|_{S}}\otimes\rho_{E}^{\left[{{f_{r}},s}\right]}$. Then for any
$\epsilon\geq 0$,
$\Delta=\sum\limits_{{f_{r}}}{{P_{{F_{R}}}}({f_{r}}){D_{u}}{{(S|E)}_{{\rho^{[{f_{r}}]}}}}}\leq\frac{1}{2}\times{2^{\alpha-\beta}}\times{2^{-\frac{1}{2}(H_{\min}^{\varepsilon}({\rho_{{\rm{XE}}}}\left|E\right.)-l)}}+\varepsilon,$
(4)
where $E$ is the side information of eavesdropper.
More importantly, we further analyze the effect of random number seed
uniformity on the secure bound of a QKD protocol; the secure bound of a QKD
system with a non-uniform random number seed is obtained as follows:
$l\leq
H_{\min}^{\varepsilon}({\rho_{SE^{\prime}}}\left|{E^{\prime}}\right.)-lea{k_{EC}}-2{\log_{2}}\frac{1}{{2(\varepsilon_{sec}-\varepsilon)}}-\log_{2}{\frac{2}{\epsilon_{cor}}}-(\alpha-\beta).$
(5)
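The practical cost of the new $(\alpha-\beta)$ term is easy to quantify: if the seed has min-entropy rate $k$ per bit, then $\beta=k\alpha$ and the bound loses $(1-k)\alpha$ bits of final key. The sketch below assumes a Toeplitz hash, for which the seed length is $\alpha=n+l-1$ for an $n$-bit input and $l$-bit output; the key sizes are purely illustrative:

```python
def seed_penalty_bits(alpha, min_entropy_rate):
    """Key-length penalty alpha - beta when beta = min_entropy_rate * alpha."""
    return (1.0 - min_entropy_rate) * alpha

# Toeplitz PA on a 1 Mbit corrected key with a 0.4 Mbit output key:
n, l = 10**6, 4 * 10**5
alpha = n + l - 1
# A seed with min-entropy rate 0.95 costs roughly 70 kbit of final key,
# while a perfectly uniform seed costs nothing.
assert round(seed_penalty_bits(alpha, 0.95)) == 70000
assert seed_penalty_bits(alpha, 1.0) == 0.0
```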
To further analyze the influence of PA random number seeds on the secure key
rate of QKD systems, we investigated and tested the min-entropy of some random
number generators commonly used in QKD systems, as shown in Table 1.
Table 1: The average min-entropy of common random number generators

Random Number Generator | Type | Refer/Test | Test Scale | Average Min-entropy
---|---|---|---|---
IDQ Quantis-PCIe-40M | QRNG | Test | 100Mb | 0.990
MATLAB unifrnd | PRNG | Test | 100Mb | 0.988
Random.org | TRNG | Refer | – | 0.931
Intel DRNG | TRNG | Refer | – | 0.930
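A simple way such figures can be obtained is the most-common-value estimate: split the bit stream into fixed-size blocks and take $-\log_2$ of the most frequent block's relative frequency, divided by the block length. This is only a rough sketch with synthetic data (standards such as NIST SP 800-90B prescribe more careful estimators):

```python
import numpy as np
from collections import Counter

def min_entropy_per_bit(bits, block=4):
    """Most-common-value estimate of min-entropy per bit over fixed blocks."""
    blocks = [tuple(bits[i:i + block])
              for i in range(0, len(bits) - block + 1, block)]
    p_max = Counter(blocks).most_common(1)[0][1] / len(blocks)
    return -np.log2(p_max) / block

rng = np.random.default_rng(5)
uniform = rng.integers(0, 2, 80000)
biased = (rng.random(80000) < 0.6).astype(int)   # Pr[bit = 1] is 0.6

# A near-uniform stream scores close to 1 bit per bit; a biased one much lower.
assert min_entropy_per_bit(uniform) > 0.9
assert min_entropy_per_bit(biased) < 0.8
```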
We use a typical two-decoy BB84 protocol to simulate the effect of random
seed min-entropy on the QKD secure key rate. The simulation results are shown
in Fig. 1 and Fig. 2.
Figure 1: The relation between random seed uniformity and SKR under different
distances

Figure 2: The relation between random seed uniformity and SKR under different
distances
The above results indicate that: (1) when the average min-entropy of the
random number generator is below 0.95, the secure bound of a QKD system is
seriously affected; and (2) most random number generators commonly used in
QKD systems seriously influence the secret key rate of QKD.
## Methods
The proof of the quantum leftover hash lemma with non-uniform random seeds is
given below.
Theorem 1 (Quantum Leftover Hash Lemma With Non-Uniform Random Seeds) Let
$F_{R}$ be a universal family of hash functions from $X$ to $S$, let $f_{r}$
be a hash function randomly selected from $F_{R}$ using a random seed
$R\in\\{0,1\\}^{\alpha}$, where $|F_{R}|=2^{\alpha}$ and $P_{F_{R}}$ satisfies
$H_{min}(P_{F_{R}})\geq\beta$, and let $s=f_{r}(x)$. Let
${\rho_{XE}}=\sum\limits_{x}{\left|x\right\rangle{{\left\langle
x\right|}_{X}}\otimes\rho_{E}^{[x]}}$ and cq-states
${\rho_{{F_{R}}{\rm{S}}E}}=\sum\limits_{{f_{r}}}{\sum\limits_{\rm{s}}{{P_{{F_{R}}}}\left|{{f_{r}}}\right\rangle\langle{f_{r}}{|_{{F_{R}}}}\otimes}}\left|s\right\rangle\langle
s{|_{S}}\otimes\rho_{E}^{\left[{{f_{r}},s}\right]}$. Then for any
$\epsilon\geq 0$,
$\Delta=\sum\limits_{{f_{r}}}{{P_{{F_{R}}}}({f_{r}}){D_{u}}{{(S|E)}_{{\rho^{[{f_{r}}]}}}}}\leq\frac{1}{2}\times{2^{\alpha-\beta}}\times{2^{-\frac{1}{2}(H_{\min}^{\varepsilon}({\rho_{{\rm{XE}}}}\left|E\right.)-l)}}+\varepsilon,$
(6)
where $E$ is the side information of eavesdropper.
###### Proof.
We start from the definition,
$\Delta=\sum\limits_{{f_{r}}}{{P_{{F_{R}}}}({f_{r}}){D_{u}}{{(S|E)}_{{\rho^{[{f_{r}}]}}}}},$
(7)
Since $P_{F_{R}}$ satisfies $H_{min}(P_{F_{R}})\geq\beta$, every
${{P_{{F_{R}}}}({f_{r}})}$ satisfies ${{P_{{F_{R}}}}({f_{r}})}\leq
2^{-\beta}$, so
$\Delta=\sum\limits_{{f_{r}}}{{P_{{F_{R}}}}({f_{r}}){D_{u}}{{(S|E)}_{{\rho^{[{f_{r}}]}}}}}\leq\sum\limits_{{f_{r}}}{{2^{-\beta}}{D_{u}}{{(S|E)}_{{\rho^{[{f_{r}}]}}}}}={2^{\alpha-\beta}}\sum\limits_{{f_{\rm{r}}}}{{2^{-\alpha}}{D_{u}}{{(S|E)}_{{\rho^{[{f_{r}}]}}}}},$
(8)
Since $F_{R}$ and $F_{U}$ have the same size $2^{\alpha}$, and the uniform
distribution over $F_{U}$ satisfies $P_{F_{u}}(f_{u})=2^{-\alpha}$, it can be
obtained that:
$\Delta=\sum\limits_{{f_{r}}}{{P_{{F_{R}}}}({f_{r}}){D_{u}}{{(S|E)}_{{\rho^{[{f_{r}}]}}}}}\leq{2^{\alpha-\beta}}\sum\limits_{{f_{\rm{u}}}}{{P_{{F_{u}}}}({f_{u}}){D_{u}}{{(S|E)}_{{\rho^{[{f_{u}}]}}}}}={2^{\alpha-\beta}}{D_{u}}{(S|{F_{u}}E)_{\rho}}.$
(9)
Further, according to Lemma 1, the upper bound on $\Delta$ is obtained as:
$\Delta=\sum\limits_{{f_{r}}}{{P_{{F_{R}}}}({f_{r}}){D_{u}}{{(S|E)}_{{\rho^{[{f_{r}}]}}}}}\leq\frac{1}{2}\times{2^{\alpha-\beta}}\times{2^{-\frac{1}{2}(H_{\min}^{\varepsilon}({\rho_{{\rm{XE}}}}\left|E\right.)-l)}}+\varepsilon$
(10)
∎
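The only inequality used in this proof — replacing each seed probability by $2^{-\beta}$ and rewriting the sum as $2^{\alpha-\beta}$ times a uniform average — can be sanity-checked numerically for a seed distribution with min-entropy exactly $\beta$ (a toy sketch with arbitrary nonnegative values standing in for the per-function quantities $D_u$):

```python
import numpy as np

rng = np.random.default_rng(6)

alpha, beta = 10, 8
D = rng.uniform(0.0, 1.0, 2**alpha)   # nonnegative per-function quantities

# A seed distribution with H_min >= beta: uniform over a random 2^beta subset.
P = np.zeros(2**alpha)
P[rng.choice(2**alpha, size=2**beta, replace=False)] = 2.0**-beta

assert abs(P.sum() - 1.0) < 1e-12
assert -np.log2(P.max()) >= beta                  # min-entropy condition
# Eq. (8): sum_f P(f) D_f  <=  2^(alpha-beta) * (uniform average of D).
assert np.dot(P, D) <= 2.0**(alpha - beta) * D.mean() + 1e-12
```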
In the above proof, we directly bounded ${{D_{u}}{{(S|F_{u}E)}_{\rho}}}$ from
above. Another, more intuitive way is to directly bound the maximum collision
probability of the almost-universal hash family. The specific process is as
follows.
First, according to the following lemma, an upper bound on
${{D_{u}}{{(S|F_{u}E)}_{\rho}}}$ can be obtained.
###### Lemma 1.
Let ${\rho_{AB}}\in{S_{\leq}}({{\bf{{\rm H}}}_{AB}})$,
${\tau_{B}}\in{S_{\leq}}({{\bf{{\rm H}}}_{B}})$ and
$\sup\\{{\tau_{B}}\\}\supseteq\sup\\{{\rho_{B}}\\}$, then,
${{D_{u}}{{(S|FE)}_{\rho}}}\leq\frac{1}{2}\sqrt{{d_{A}}{\Gamma_{C}}({\rho_{AB}}|{\tau_{B}})-tr({\rho_{B}}\tau_{B}^{-1/2}{\rho_{B}}\tau_{B}^{-1/2})},$
(11)
where $d_{A}$ is the dimension of system $A$.
According to Lemma 1, an upper bound on ${{D_{u}}{{(S|F_{R}E)}_{\rho}}}$ can
be obtained:
${{D_{u}}{{(S|F_{R}E)}_{\rho}}}\leq\frac{1}{2}\sqrt{{2^{l}}{\Gamma_{C}}({\rho_{FSE}}|{\rho_{F}}\otimes{\tau_{E}})-tr({\rho_{E}}\tau_{E}^{-1/2}{\rho_{E}}\tau_{E}^{-1/2})}.$
(12)
Then, by bounding ${{\Gamma_{C}}({\rho_{FSE}}|{\rho_{F}}\otimes{\tau_{E}})}$
from above, we get:
$\begin{array}[]{*{20}{l}}{{\Gamma_{C}}({\rho_{{F_{R}}SE}}|{\rho_{F}}\otimes{\tau_{E}})}\\\
{=\sum\limits_{{f_{r}}\in{F_{R}}}{{P_{{F_{R}}}}}\sum\limits_{s}{tr\left({\left|{{f_{r}}}\right\rangle\langle{f_{r}}{|_{{F_{R}}}}\otimes\left|s\right\rangle\langle
s{|_{S}}\otimes\rho_{E}^{\left[{{f_{r}},s}\right]}\tau_{E}^{-1/2}\rho_{E}^{\left[{{f_{r}},s}\right]}\tau_{E}^{-1/2}}\right)}}\\\
{=\mathop{\rm{E}}\limits_{{f_{r}}\in{F_{R}}}\left[{\sum\limits_{s}{tr\left({\rho_{E}^{\left[{{f_{r}},s}\right]}\tau_{E}^{-1/2}\rho_{E}^{\left[{{f_{r}},s}\right]}\tau_{E}^{-1/2}}\right)}}\right]}\\\
{=\sum\limits_{x,x^{\prime}}{\mathop{\rm{E}}\limits_{{f_{r}}\in{F_{R}}}\left[{\sum\limits_{z}{{\delta_{{f_{r}}(x)=z}}{\delta_{{f_{r}}(x^{\prime})=z}}}}\right]}tr\left({\rho_{E}^{\left[x\right]}\tau_{E}^{-1/2}\rho_{E}^{\left[{x^{\prime}}\right]}\tau_{E}^{-1/2}}\right).}\end{array}$
(13)
According to the definition of the $\delta$-almost universal family, when the
random number seed follows the uniform distribution, the above expectation
satisfies $\mathop{\rm
E}\limits_{f\in{F_{u}}}\left[{\sum\limits_{z}{{\delta_{f(x)=z}}{\delta_{f(x^{\prime})=z}}}}\right]\leq\delta$.
When the random number seed does not follow the uniform distribution, the
expectation can be bounded as:
$\begin{array}[]{*{20}{l}}{\mathop{\rm{E}}\limits_{{f_{r}}\in{F_{R}}}\left[{\sum\limits_{z}{{\delta_{{f_{r}}(x)=z}}{\delta_{{f_{r}}(x^{\prime})=z}}}}\right]}\\\
{=\sum\limits_{{f_{r}}\in{F_{R}}}{{P_{{F_{R}}}}}\left[{\sum\limits_{z}{{\delta_{{f_{r}}(x)=z}}{\delta_{{f_{r}}(x^{\prime})=z}}}}\right]}\\\
{\leq\sum\limits_{{f_{r}}\in{F_{R}}}{{2^{-\beta}}}\left[{\sum\limits_{z}{{\delta_{{f_{r}}(x)=z}}{\delta_{{f_{r}}(x^{\prime})=z}}}}\right]}\\\
{={2^{\alpha-\beta}}\sum\limits_{{f_{r}}}{{2^{-\alpha}}}\left[{\sum\limits_{z}{{\delta_{{f_{r}}(x)=z}}{\delta_{{f_{r}}(x^{\prime})=z}}}}\right]}\\\
{\leq{2^{\alpha-\beta}}\times\delta.}\end{array}$
(14)
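The expectation bound above can be checked empirically for a concrete family. The sketch below (an illustration, not part of the proof) uses the standard family of random binary matrices, $f(x)=Mx \bmod 2$, which is $2$-universal and hence $2^{-l}$-almost universal, and Monte-Carlo-estimates the collision probability under a uniform seed:

```python
import numpy as np

# Monte-Carlo check of the collision bound for a concrete delta-almost
# universal family (an illustration, not part of the proof): random binary
# matrices f(x) = Mx mod 2 form a 2-universal family with delta = 2**-l.
rng = np.random.default_rng(0)
n, l = 16, 8                      # input length n, output length l
trials = 20000

collisions = 0
for _ in range(trials):
    M = rng.integers(0, 2, size=(l, n))       # uniform seed: pick f at random
    x = rng.integers(0, 2, size=n)
    xp = rng.integers(0, 2, size=n)
    while np.array_equal(x, xp):              # ensure x != x'
        xp = rng.integers(0, 2, size=n)
    if np.array_equal(M @ x % 2, M @ xp % 2): # event f(x) = f(x')
        collisions += 1

estimate = collisions / trials
print(estimate, 2 ** -l)          # estimate should sit near delta = 2**-l
```

With a non-uniform seed, the estimate would instead pick up the extra factor $2^{\alpha-\beta}$, as in (14).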
From this result, the upper bound on
${\Gamma_{C}}({\rho_{{F_{R}}SE}}|{\rho_{{F_{R}}}}\otimes{\tau_{E}})$ is
${\Gamma_{C}}({\rho_{{F_{R}}SE}}|{\rho_{{F_{R}}}}\otimes{\tau_{E}})\leq{\Gamma_{C}}({\rho_{XE}}\left|{{\tau_{E}}}\right.)+{2^{\alpha-\beta}}\times\delta\times{\rm{tr}}\left({{\rho_{E}}\tau_{E}^{-1/2}{\rho_{E}}\tau_{E}^{-1/2}}\right).$
(15)
Letting ${\rho_{E}}={\tau_{E}}$, the formula simplifies further to:
${\Gamma_{C}}({\rho_{{F_{R}}SE}}|{\rho_{{F_{R}}}}\otimes{\tau_{E}})\leq{\Gamma_{C}}({\rho_{XE}}\left|{{\tau_{E}}}\right.)+{2^{\alpha-\beta}}\times\delta\times{\rm{tr}}{\rho_{E}}.$
(16)
Then,
${{D_{u}}{{(S|F_{R}E)}_{\rho}}}\leq\frac{1}{2}\sqrt{{2^{l}}{\Gamma_{C}}({\rho_{XE}}\left|{{\rho_{E}}}\right.)+\left({{2^{\alpha-\beta}}\times\delta\times{2^{l}}-1}\right){\rm{tr}}{\rho_{E}}}.$
(17)
Substituting the smooth min-entropy, $\delta=2^{-l}$, and
${\rm{tr}}{\rho_{E}}\leq 1$, we get:
$\Delta=\sum\limits_{{f_{r}}}{{P_{{F_{R}}}}({f_{r}}){D_{u}}{{(S|E)}_{{\rho^{[{f_{r}}]}}}}}\leq\frac{1}{2}\sqrt{{2^{l-H_{\min}^{\varepsilon}({\rho_{{\rm{XE}}}}\left|E\right.)}}+{2^{\alpha-\beta}}-1}+\varepsilon$
(18)
Comparing this bound with the one obtained in the proof above shows that it is
much looser, indicating that although the scaling idea of this alternative
method is more intuitive, the scaling used in the proof of this paper yields a
tighter upper bound.
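The gap between the two bounds can be made concrete numerically. The sketch below (parameter values are hypothetical) evaluates the bound of Eq. (10) from the proof and the alternative bound of Eq. (18); when the seed is uniform ($\alpha=\beta$), the two coincide, and otherwise the alternative bound is far looser:

```python
import math

# Numerical comparison of the two upper bounds on Delta, using illustrative
# (hypothetical) parameter values: entropy H_min, output length l, seed
# min-entropy beta out of alpha seed bits, and smoothing parameter eps.
def bound_proof(H_min, l, alpha, beta, eps):
    # Eq. (10): (1/2) * 2**(alpha-beta) * 2**(-(H_min-l)/2) + eps
    return 0.5 * 2 ** (alpha - beta) * 2 ** (-(H_min - l) / 2) + eps

def bound_alternative(H_min, l, alpha, beta, eps):
    # Eq. (18): (1/2) * sqrt(2**(l-H_min) + 2**(alpha-beta) - 1) + eps
    return 0.5 * math.sqrt(2 ** (l - H_min) + 2 ** (alpha - beta) - 1) + eps

b1 = bound_proof(120, 100, 12, 10, 1e-6)        # ~2e-3: tight
b2 = bound_alternative(120, 100, 12, 10, 1e-6)  # ~0.87: much looser
b1_u = bound_proof(120, 100, 10, 10, 0)         # uniform seed (alpha = beta):
b2_u = bound_alternative(120, 100, 10, 10, 0)   # the two bounds coincide
print(b1, b2, b1_u == b2_u)
```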
# Convergence of Bi-Virus Epidemic Models with Non-Linear Rates on Networks -
A Monotone Dynamical Systems Approach
Vishwaraj Doshi, Shailaja Mallick, and Do Young Eun
Accepted for publication at IEEE/ACM Transactions on Networking in September
2022. A subset of the material in this paper appears in [1]. Vishwaraj Doshi
is with the Operations Research Graduate Program, Shailaja Mallick is with the
Department of Computer Science, and Do Young Eun is with the Department of
Electrical and Computer Engineering, North Carolina State University, Raleigh,
NC. Email: {vdoshi, smallic<EMAIL_ADDRESS>This work was supported in part
by National Science Foundation under Grant Nos. CNS-2007423 and CNS-1824518.
###### Abstract
We study convergence properties of competing epidemic models of the
Susceptible-Infected-Susceptible ($SIS$) type. The SIS epidemic model has seen
widespread popularity in modelling the spreading dynamics of contagions such
as viruses, infectious diseases, or even rumors/opinions over contact networks
(graphs). We analyze the case of two such viruses spreading on overlaid
graphs, with non-linear rates of infection spread and recovery. We call this
the non-linear bi-virus model and, building upon recent results, obtain
precise conditions for global convergence of the solutions to a trichotomy of
possible outcomes: a virus-free state, a single-virus state, and a
coexistence state. Our techniques are based on the theory of monotone
dynamical systems (MDS), in contrast to Lyapunov based techniques that have
only seen partial success in determining convergence properties in the setting
of competing epidemics. We demonstrate how the existing works have been
unsuccessful in characterizing a large subset of the model parameter space for
bi-virus epidemics, including all scenarios leading to coexistence of the
epidemics. To the best of our knowledge, our results are the first in
providing complete convergence analysis for the bi-virus system with non-
linear infection and recovery rates on general graphs.
###### Index Terms:
Epidemics on networks, bi-virus models, multi-layer graphs, monotone dynamical
systems.
## I Introduction and overview
Graph-based epidemic models are widely employed to analyze the spread of real
world phenomena such as communicable diseases [2, 3], computer viruses,
malware [4, 5, 6], product adoption [7, 8, 9], opinions, and rumors [10, 11,
12, 13]. The propagation of such phenomena (which we collectively refer to as
epidemics or viruses) usually takes place via processes such as human contact,
word-of-mouth, exchange of emails or even in social media platforms. Graph
based techniques, with edge based mechanisms to model information spread, have
therefore proven to be effective in capturing such epidemic dynamics, and have
been a research focus over the past few decades [14, 15, 16, 17]. In recent
years, the development of models which capture the competition of two or more
of such epidemics has seen a surge of interest. In particular, models
capturing the behavior of two competing epidemics of the Susceptible-Infected-
Susceptible (SIS) types, also known as the bi-virus or bi-SIS models, have
garnered significant attention over the years [8, 18, 19, 20, 21].
Epidemic models take the form of ordinary differential equations (ODEs) and
their analysis involves the identification of fixed points of the system,
their uniqueness properties, and ultimately showing the convergence of the
solution trajectories to those fixed points. The technique via Lyapunov
functions has historically been a popular method to prove convergence to fixed
points and was also used in epidemiology literature to derive the convergence
properties of the SIS epidemic model. The SIS model was originally introduced
in [2] to capture the spread of Gonorrhea due to contact between individuals
in a population, and was further developed in [22, 23, 24, 25, 26, 27, 28,
29]. The central result for SIS epidemics, originally proved using Lyapunov
functions in [2], is a dichotomy arising from the relation between the model
parameter ($\tau\\!>\\!0$) representing the effective infection rate or
strength of the virus,111$\tau=\beta/\delta$, where $\beta>0$ stands for the
infection rate of the virus and $\delta>0$ the recovery rate from the virus.
Section II provides a detailed explanation. and a threshold value
($\tau^{*}\\!>\\!0$). When $\tau\\!\leq\\!\tau^{*}$, the virus spread is not
strong enough and the system converges to a ‘virus-free’ state. When
$\tau\\!>\\!\tau^{*}$, it converges to a state where the virus infects a non-
zero portion of the population. Attempts have also been made to perform
similar convergence analysis for the bi-virus epidemic model [8, 19, 20, 21].
The key questions posed in such literature are: Can both competing epidemics
coexist over the network? If not, which one prevails? Or do both die out? This
trichotomy of possible results is what the recent literature has been trying
to characterize.
When the propagation of the two epidemics occurs over the same network [8,
30], it has been established that coexistence of two viruses is impossible
except in the rare cases where their effective strengths
($\tau_{1},\tau_{2}\\!>\\!0$ for viruses 1, 2, respectively) are equal [21, 8,
20, 19, 18]; otherwise, the virus with the larger effective strength wipes out
the other, a phenomenon sometimes referred to as winner takes all [8]. The
situation is much more complicated when the two viruses spread over two
distinct networks overlaid on the same set of nodes. This modeling approach is
more representative of the real world, where competing rumors/products/memes
may not use the same platforms to propagate, though they target the same
individuals. Recent works [18, 21, 19, 20, 31, 32, 33, 34] therefore consider
this more general setting, but unfortunately, a complete characterization of
the trichotomy of outcomes has still proven to be elusive and remains open as
of now.
While the original SIS model introduced in [2] had the aggregate infection and
recovery rates of a node as linear functions of the number of infected
neighbors, there has been a push towards studying more generalized models
where these rates are made heterogeneous (across nodes) and _non-linear_ [35,
36, 37, 38, 39]. Realistic assumptions such as infection rates tending to
saturation with continual increase in neighborhood infection [40, 41, 42, 43]
have become more commonplace, implying that the models employing strictly
linear spreading dynamics often provide overestimates to the real world
infection rates [24, 20]. This paper does not concern itself with answering
which non-linear infection rate best captures the exact dynamics, but we
direct the readers to [20] which provides simulation results comparing non-
linear rate functions to the exact Markovian dynamics for some special
randomly generated graph topologies. In some special cases, non-linear
recovery rates also have an interpretation linking them to reliability theory
in the form of infection durations with increasing failure rates (failure here
being the recovery of an infected node). Allowing for non-linear infection and
recovery rates leads to a more general version of the bi-virus model on
overlaid graphs, albeit much more complicated, and the complete convergence
criterion is yet to be fully established [19, 20]. It should be noted that
while we extensively refer to the infection and recovery rates being either
linear or non-linear in this paper, the bi-virus epidemic model itself will
always be a system of non-linear ODEs.
#### Limitations of existing works
Of all the recent works concerning the spread of SIS type bi-virus epidemics
on overlaid networks, [20] and [19] provide conditions under which the system
globally converges to the state where one virus survives while the other dies
out. [20] approaches the problem of showing global convergence by employing
the classic technique via Lyapunov functions. However, finding appropriate
Lyapunov functions is a highly non-trivial task, and as mentioned in [19], is
even more difficult due to the coupled nature of the bi-virus ODE system. This
can be seen in the condition they derive in [20] for the case where, say,
Virus 1 dies out and Virus 2 survives. When $\tau_{1}$ and $\tau_{2}$
represent the effective strengths of Virus 1 and Virus 2, respectively, their
condition translates to $\tau_{1}\\!\leq\\!\tau_{1}^{*}$ where $\tau_{1}^{*}$
is the threshold corresponding to the single-virus case, meaning that Virus 1
would not have survived even if it was the only epidemic present on the
network. More importantly, [20] is unable to characterize convergence
properties for $\tau_{1}\\!>\\!\tau_{1}^{*}$ and
$\tau_{2}\\!>\\!\tau_{2}^{*}$.
The authors in [19] take a different approach and tackle this problem by
applying their ‘qualitative analysis’ technique, which uses results from other
dynamical systems that bound the solutions of the bi-virus ODE; and provide
conditions under which the system globally converges to single-virus
equilibria. As we show later in Section V-B, however, their conditions not
only characterize just a subset of the actual space of parameters that lead to
global convergence to the single-virus equilibria (which they themselves
pointed out), but the size of this subset is highly sensitive to the graph
topology, often much smaller than what it should be in general. In other
words, a complete characterization of the _entire_ space of model parameters,
on which the system globally converges to one of the trichotomic states, has
still been recognized as an open problem in the bi-virus literature [20, 19,
21].
#### Our contributions
In this paper, we analyze the bi-virus model with _non-linear_ infection and
recovery rates (or the _non-linear bi-virus model_ in short) and provide the
complete characterization of the trichotomy of the outcomes with necessary and
sufficient conditions under which the system globally converges to one of the
three possible points: (i) a ‘virus-free’ state, (ii) a ‘single-virus’
equilibrium, or (iii) an equilibrium where both viruses coexist over the
network. While the result for convergence to the virus-free state of the bi-
SIS model is not new for non-linear infection and linear recovery rates, our
proof for the same is the most general form known to date, covering the case
with both infection _and_ recovery rates being non-linear. The proof of
convergence to the virus-free state of the bi-virus model is straightforward,
and directly follows from the convergence criterion for the single-virus SIS
model with non-linear rates. However, the convergence results for fixed points
where only one of the two viruses survives, or to the equilibrium where both
viruses coexist, are not as straightforward to establish, rendering the
typical Lyapunov based approach largely inapplicable.
In proving these results, we first show, using a specially constructed cone
based partial ordering, that the bi-virus epidemic model possesses some
inherent monotonicity properties. We then use novel techniques from the theory
of _monotone dynamical systems_ (MDS) [44] to prove our main results. In
recent control systems literature [45, 46, 47, 48, 49], techniques based on
the construction of cone based partial orderings that leverage the
monotonicity properties of dynamical systems have indeed been studied.
Dynamical systems exhibiting such monotonicity properties are also sometimes
called differentially positive systems [50] and cooperative systems [51] in the
ODE setting, with interesting applications in consensus problems for
distributed systems [52] and even neural networks [53]. In this paper, we
utilize these MDS techniques in the setting of competing epidemics, and as a
result demonstrate an alternative to Lyapunov based approaches to analyze
convergence properties of epidemic models. The MDS approach was also adopted
in [54], which uses similar techniques to analyze the bi-virus system for the
special case of linear infection and recovery rates, and which was developed
concurrently and independently of the initial version of this work [1]. This
further highlights the utility of MDS
techniques for the analysis of epidemic models on graphs.
This paper is an extension of our previous work [1], which gives necessary and
sufficient conditions for convergence to the three types of equilibria only
for the special case of the bi-virus model with _linear_ infection and
recovery rates (or the _linear bi-virus model_ in short). Our conditions
therein take a more precise form in terms of the model parameters $\tau_{1}$
and $\tau_{2}$ and one can visualize an exact partition of the model parameter
space into regions corresponding to various convergence outcomes. We note that
this partition of the model parameter space coincides with that in [18],
wherein they employed only _local_ stability results via bifurcation analysis
– concerning only solution trajectories that originate from a small
neighborhood of those fixed points. In contrast, our results in this paper
concern global stability of the system with any combination of linear as well
as more general, non-linear infection and recovery rates.
#### Structure of the paper
In Section II, we first introduce the basic notation used throughout the
paper, along with the classical (single-virus) SIS model and the bi-virus
model. We then provide the generalization to non-linear infection and recovery
rates in Section III with some key assumptions on the infection and recovery
rate functions, complemented by a discussion in Appendix C regarding a special
class of recovery rates. In Section IV, we provide a primer to the MDS theory,
and establish monotonicity results for the single-virus SIS model, proving the
convergence result for the single-virus model with non-linear infection and
recovery rates whose proofs are deferred to Appendix E. We then go on to show
in Section V-A that the non-linear bi-virus model is also a monotone dynamical
system with respect to a specially constructed cone-based partial ordering,
and include the main convergence results in Section V-B. In Section VI we take
the opportunity to provide a more intuitive version of our results by
considering the special case of linear infection and recovery rates, along
with brief comparisons with the existing literature. In Section VII, we
provide numerical results which confirm our theoretical findings. We then
conclude in Section VIII.
For better readability of the paper, all technical proofs of the main results
are deferred to Appendix F. The appendices also include some selected
definitions and results from matrix theory (Appendix A), ODE theory (Appendix
B), and from MDS theory (Appendix D), which we use as part of our proofs of
the Theorems in Section V-B.
## II Preliminaries
### II-A Basic Notations
We standardize the notations of vectors and matrices by using lower case,
bold-faced letters to denote vectors ($\mathbf{v}\\!\in\\!\mathbb{R}^{N}$),
and upper case, bold-faced letters to denote matrices
($\mathbf{M}\\!\in\\!\mathbb{R}^{N\times N}$). We denote by
$\lambda(\mathbf{M})$ the largest real part222We use the $\lambda$ notation
instead of something like $\lambda_{Re}$, since it will mostly be used in
cases where the largest eigenvalue is real, for which $\lambda$ itself is the
largest real eigenvalue. For example, $\lambda(\mathbf{A})$ becomes the
spectral radius for any non-negative matrix $\mathbf{A}$ [55, 56]. of all
eigenvalues of a square matrix $\mathbf{M}$. We use $\text{diag}(\mathbf{v})$
or $\mathbf{D}_{\mathbf{v}}$ to denote the $N\\!\\!\times\\!\\!N$ diagonal
matrix with entries of vector $\mathbf{v}\in\mathbb{R}^{N}$ on the diagonal.
Also, we denote $\mathbf{1}\\!\triangleq\\![1,\\!\cdots\\!,1]^{T}$ and
$\mathbf{0}\\!\triangleq\\![0,\\!\cdots\\!,0]^{T}$, the $N$-dimensional vector
of all ones and zeros, respectively. For vectors, we write
$\mathbf{x}\\!\leq\\!\mathbf{y}$ to indicate that $x_{i}\\!\leq\\!y_{i}$ for
all $i$; $\mathbf{x}\\!<\\!\mathbf{y}$ if $\mathbf{x}\\!\leq\\!\mathbf{y}$ and
$\mathbf{x}\\!\neq\\!\mathbf{y}$; $\mathbf{x}\\!\ll\\!\mathbf{y}$ when all
entries satisfy $x_{i}\\!<\\!y_{i}$. We use
$\mathcal{G}(\mathcal{N},\mathcal{E})$ to represent a general, undirected,
connected graph with $\mathcal{N}\triangleq\\{1,2,\cdots,N\\}$ being the set
of nodes and $\mathcal{E}$ being the set of edges. When we refer to a matrix
$\mathbf{A}\\!=\\![a_{ij}]$ as the adjacency matrix of some graph
$\mathcal{G}(\mathcal{N},\mathcal{E})$, it satisfies
$a_{ij}\triangleq\mathds{1}_{\\{(i,j)\in\mathcal{E}\\}}$ for any
$i,j\in\mathcal{N}$; we use $d_{min}(\mathbf{A})$ and $d_{max}(\mathbf{A})$ to
denote the minimum and maximum degrees of the nodes of the corresponding
graph. Since we only consider connected graphs, all the adjacency matrices in
this paper are automatically considered to be irreducible (see Definition A.1
in Appendix A).
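As a quick illustration of this notation (the graph and vectors below are arbitrary choices, not from the paper), the snippet computes $\lambda(\mathbf{A})$, the degree bounds, and the entrywise orderings for the complete graph $K_5$:

```python
import numpy as np

# Illustrative check of the notation in Section II-A (the graph K5 is an
# arbitrary choice): spectral quantities and entrywise vector orderings.
N = 5
A = np.ones((N, N)) - np.eye(N)          # adjacency matrix of K5 (irreducible)

lam = max(np.linalg.eigvals(A).real)     # lambda(A); equals the spectral
                                         # radius since A is non-negative
degrees = A.sum(axis=1)
d_min, d_max = degrees.min(), degrees.max()

x = np.array([0.1, 0.2, 0.2, 0.3, 0.4])
y = np.array([0.2, 0.3, 0.3, 0.4, 0.5])
x_leq_y = bool(np.all(x <= y))           # x <= y : entrywise
x_ll_y = bool(np.all(x < y))             # x << y : strictly entrywise
print(lam, d_min, d_max, x_leq_y, x_ll_y)
```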
### II-B $SIS$ Model with Linear rates
Consider the graph $\mathcal{G}(\mathcal{N},\mathcal{E})$, and assume that at
any given time $t\geq 0$, each node $i\in\mathcal{N}$ of the graph is either
in an _infected (I)_ , or in a _susceptible (S)_ state. An infected node can
infect each of its susceptible neighbors with rate $\beta>0$.333We say an
event occurs with some _rate_ $\alpha>0$ if it occurs after a random amount of
time, exponentially distributed with parameter $\alpha>0$. It can also, with
rate $\delta>0$, be cured from its infection and revert to being susceptible
again. We write $\mathbf{x}(t)=[x_{i}(t)]\in\mathbb{R}^{N}$, where $x_{i}(t)$
represents the probability that node $i\in\mathcal{N}$ is infected at any
given time $t\geq 0$. Then, the dynamics of the $SIS$ model can be captured
via the system of ODEs given by
$\frac{dx_{i}(t)}{dt}\triangleq\beta(1-x_{i}(t))\sum_{j\in\mathcal{N}}a_{ij}x_{j}(t)-\delta
x_{i}(t)$ (1)
for all $i\in\mathcal{N}$ and $t\geq 0$. In a matrix-vector form, this can be
written as
$\frac{d\mathbf{x}}{dt}\triangleq\beta\text{diag}(\mathbf{1}-\mathbf{x})\mathbf{A}\mathbf{x}-\delta\mathbf{x}$
(2)
where we suppress the $(t)$ notation for brevity. The system (2) is positively
invariant in the set $[0,1]^{N}$, and has $\mathbf{0}$ as a fixed point (the
virus-free equilibrium). The following result is well known from [2], which we
will generalize in Section IV-B.
###### Theorem II.1 (Theorem 3.1 in [2])
Let $\tau\\!\triangleq\\!\beta/\delta$. Then,
1. (i)
either $\tau\leq 1/\lambda(\mathbf{A})$, and $\mathbf{x}^{*}=\mathbf{0}$ is a
globally asymptotically stable fixed point of (2);
2. (ii)
or $\tau>1/\lambda(\mathbf{A})$, and there exists a unique, strictly positive
fixed point $\mathbf{x}^{*}\in(0,1)^{N}$ such that $\mathbf{x}^{*}$ is
globally asymptotically stable in
$[0,1]^{N}\setminus\\{\mathbf{0}\\}$.$\hfill\square$
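Theorem II.1 can be illustrated numerically. The following sketch integrates the ODE (2) with forward Euler on the complete graph $K_5$ (an illustrative choice, for which $\lambda(\mathbf{A})=4$), once below and once above the threshold $\tau^{*}=1/\lambda(\mathbf{A})$:

```python
import numpy as np

# Forward-Euler integration of the SIS ODE (2) on K5 to illustrate the
# dichotomy of Theorem II.1; graph and rates are illustrative choices.
N = 5
A = np.ones((N, N)) - np.eye(N)
lam = max(np.linalg.eigvals(A).real)          # lambda(A) = 4, so tau* = 1/4

def simulate(beta, delta, T=100.0, dt=0.01):
    x = 0.1 * np.ones(N)                      # non-zero initial infection
    for _ in range(int(T / dt)):
        x = x + dt * (beta * (1 - x) * (A @ x) - delta * x)
    return x

x_sub = simulate(beta=0.2, delta=1.0)   # tau = 0.2 <= 1/lam: dies out
x_sup = simulate(beta=0.5, delta=1.0)   # tau = 0.5 >  1/lam: endemic state
print(x_sub.max(), x_sup)               # x_sub ~ 0; x_sup ~ 0.5 on each node
```

In the supercritical run, every node settles at the unique positive fixed point ($x_i^{*}=0.5$ for this symmetric graph), matching part (ii) of the theorem.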
### II-C Bi-Virus Model with Linear rates
Consider two graphs $\mathcal{G}_{1}(\mathcal{N},\mathcal{E}_{1})$ and
$\mathcal{G}_{2}(\mathcal{N},\mathcal{E}_{2})$, on the same set of nodes
$\mathcal{N}$ but with different edge sets $\mathcal{E}_{1}$ and
$\mathcal{E}_{2}$. At any given time $t\geq 0$, a node $i\in\mathcal{N}$ is
either infected by Virus 1, infected by Virus 2, or is susceptible. A node
infected by Virus 1 infects each of its susceptible neighbors with rate
$\beta_{1}>0$, just like in the $SIS$ model, but does so only to nodes which
are its neighbors with respect to the graph
$\mathcal{G}_{1}(\mathcal{N},\mathcal{E}_{1})$. Nodes infected by Virus 1 also
recover with rate $\delta_{1}>0$, after which they enter the susceptible
state. Similarly, nodes infected by Virus 2 infect their susceptible
neighbors, this time with respect to the graph
$\mathcal{G}_{2}(\mathcal{N},\mathcal{E}_{2})$, with rate $\beta_{2}>0$, while
recovering with rate $\delta_{2}>0$. This competing bi-virus model of epidemic
spread, also referred to as the $SI_{1}I_{2}S$ model, can be represented by
the following ODE system:
$\begin{split}\frac{dx_{i}}{dt}&\triangleq\beta_{1}\left(1-x_{i}-y_{i}\right)\sum_{j\in\mathcal{N}}a_{ij}x_{j}-\delta_{1}x_{i}\\\
\frac{dy_{i}}{dt}&\triangleq\beta_{2}\left(1-x_{i}-y_{i}\right)\sum_{j\in\mathcal{N}}b_{ij}y_{j}-\delta_{2}y_{i}\end{split}$
(3)
for all $i\in\mathcal{N}$ and $t\geq 0$. In matrix-vector form, (3) becomes:
$\begin{split}\frac{d\mathbf{x}}{dt}&\triangleq\beta_{1}\text{diag}\left(\mathbf{1}-\mathbf{x}-\mathbf{y}\right)\mathbf{A}\mathbf{x}-\delta_{1}\mathbf{x}\\\
\frac{d\mathbf{y}}{dt}&\triangleq\beta_{2}\text{diag}\left(\mathbf{1}-\mathbf{x}-\mathbf{y}\right)\mathbf{B}\mathbf{y}-\delta_{2}\mathbf{y},\end{split}$
(4)
where $\mathbf{A}=[a_{ij}]$ and $\mathbf{B}=[b_{ij}]$ are the adjacency
matrices of graphs $\mathcal{G}_{1}(\mathcal{N},\mathcal{E}_{1})$ and
$\mathcal{G}_{2}(\mathcal{N},\mathcal{E}_{2})$, respectively.
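The "winner takes all" behavior mentioned in Section I for two viruses sharing one graph can be reproduced directly from (4). The hypothetical setup below puts both viruses above the single-virus threshold on $K_5$, with $\tau_1>\tau_2$:

```python
import numpy as np

# Euler simulation of the linear bi-virus ODE (4) with both viruses on the
# same graph (A = B = K5), a hypothetical setup illustrating "winner takes
# all": both viruses are above the single-virus threshold 1/lambda(A) = 1/4,
# but tau1 > tau2, so Virus 1 drives Virus 2 to extinction.
N = 5
A = np.ones((N, N)) - np.eye(N)
beta1, delta1 = 0.5, 1.0                      # tau1 = 0.5
beta2, delta2 = 0.3, 1.0                      # tau2 = 0.3

x = 0.1 * np.ones(N)                          # Virus 1 infection probabilities
y = 0.1 * np.ones(N)                          # Virus 2 infection probabilities
dt = 0.01
for _ in range(int(200 / dt)):
    s = 1 - x - y                             # susceptible probability per node
    x, y = (x + dt * (beta1 * s * (A @ x) - delta1 * x),
            y + dt * (beta2 * s * (A @ y) - delta2 * y))
print(x.round(3), y.round(3))                 # Virus 1 ~ 0.5, Virus 2 ~ 0
```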
## III Epidemic Models with Non-linear Infection and Recovery rates
In this section, we introduce the single-virus and bi-virus SIS models with
non-linear infection and recovery rates. Non-linearities can be attributed to
the spread and recovery from the virus being related to the susceptibility of
the disease (or its prevalence in the population) in a more complicated
manner. This is more general than simply exponential random variables with
constant rates used to model the spreading and recovery processes, which in
aggregate scale linearly with the infection probabilities.444‘Aggregate’ here
refers to the mean field approximation which is one way to derive SIS-type
ODEs. Another way is the large population mean field limit of a stochastic
process, where the connection to the corresponding ODE system is formed via
the Kurtz’s theorem [16]. In this case, linearity is induced by the uniform or
homogeneous mixing assumption which is also a subject of criticism in
epidemiology literature [35, 36, 37, 38]. This is shown to be limiting in
accurately modelling the trajectories of an infection spread; the linear
scaling of the infection and recovery rates has been shown to overestimate
what is observed in reality [20, 37]. Many works thus argue for the modelling
of these spreading processes with non-linear functions [38, 35, 36, 40]. We
first present the more general single-virus SIS model with a set of intuitive
assumptions (A1)–(A5) for the non-linear infection and recovery rates.
### III-A $SIS$ Model with Non-linear rates
In (1) the term $\sum_{j\in\mathcal{N}}a_{ij}x_{j}(t)$ denotes the overall
rate at which a susceptible node $i\in\mathcal{N}$ gets infected by its
neighbors. In what follows, we replace this by a generic function
$f_{i}(\mathbf{x}(t))$, thereby allowing the overall infection rate for each
node to be any non-linear function of $x_{j}(t)$ for all neighbors $j$ of $i$.
Similarly, we replace the term $\delta x_{i}(t)$, denoting the overall
recovery rate for any node $i\in\mathcal{N}$, by a non-linear function
$q_{i}(\mathbf{x}(t))$. This generic version of the SIS model, allowing for
non-linear infection and recovery rates, is given by the ODE
$\frac{dx_{i}(t)}{dt}=\bar{f}_{i}(\mathbf{x}(t))\triangleq(1-x_{i}(t))f_{i}(\mathbf{x}(t))-q_{i}(\mathbf{x}(t))$
(5)
for all $i\in\mathcal{N}$ and $t\geq 0$. In a matrix-vector form, this can be
written as
$\frac{d\mathbf{x}}{dt}=\bar{F}(\mathbf{x})\triangleq\text{diag}(\mathbf{1}-\mathbf{x})F(\mathbf{x})-Q(\mathbf{x})$
(6)
where $F(\mathbf{x})=[f_{i}(\mathbf{x})]\in\mathbb{R}^{N}$, and
$Q(\mathbf{x})=[q_{i}(\mathbf{x})]\in\mathbb{R}^{N}$ are the vectors of non-
linear infection and recovery rate functions, respectively. We assume that
they are continuous and twice differentiable in $[0,1]^{N}$, with
$\mathbf{J}_{F}(\mathbf{x})$ and $\mathbf{J}_{Q}(\mathbf{x})$ denoting the
Jacobians of $F$ and $Q$ respectively, evaluated at any point
$\mathbf{x}\in[0,1]^{N}$. We now make the following key assumptions:
* (A1)
$F(\mathbf{0})=\mathbf{0}$ and $Q(\mathbf{0})=\mathbf{0}$;
* (A2)
$\left[\mathbf{J}_{F}(\mathbf{x})\right]_{ij}=\frac{\partial
f_{i}(\mathbf{x})}{\partial x_{j}}>0~{}~{}\forall i\neq j$ with $a_{ij}>0$,
otherwise $\left[\mathbf{J}_{F}(\mathbf{x})\right]_{ij}=0$;
* (A3)
$\left[\mathbf{J}_{Q}(\mathbf{x})\right]_{ii}\\!=\\!\frac{\partial
q_{i}(\mathbf{x})}{\partial x_{i}}\\!>\\!0$, and
$\left[\mathbf{J}_{Q}(\mathbf{x})\right]_{ij}\\!=\\!\frac{\partial
q_{i}(\mathbf{x})}{\partial x_{j}}\\!\leq\\!0$ for all $i\\!\neq\\!j$,
$\mathbf{x}\in[0,1]^{N}$. Moreover, $\sum\limits_{j\neq
i}\left[\mathbf{J}_{Q}(\mathbf{x})\right]_{ij}\\!<\\!\left[\mathbf{J}_{Q}(\mathbf{x})\right]_{ii}$;
* (A4)
$f_{i}(\mathbf{x})$ is concave in $[0,1]^{N}\\!\\!$, that is,
$\frac{\partial^{2}f_{i}}{\partial x_{j}\partial x_{k}}\\!\leq\\!0$ for all
$i,\\!j,\\!k\\!\in\\!\mathcal{N}$;
* (A5)
$q_{i}(\mathbf{x})$ is a convex function of $x_{i}$ on $[0,1]^{N}$, and a concave
function of $x_{j}$ for all $j\neq i$. That is,
$\frac{\partial^{2}q_{i}}{\partial x_{i}^{2}}\geq 0$ and
$\frac{\partial^{2}q_{i}}{\partial x_{j}\partial x_{k}}\leq 0$ for all
$i\\!\in\\!\mathcal{N}$, and $j,k\\!\in\\!\mathcal{N}\\!\setminus\\!\\{i\\}$.
Assumption (A1) ensures that the virus-free state is a fixed point of (6),
while (A2) is a proximity assumption that models infection spread only through
edges of the underlying graph. Assumption (A3) concerns the recovery
rate, allowing it to be reduced by infected neighbors while still remaining
non-negative. (A4) and (A5) assume concavity properties of the functions
$f_{i}(\mathbf{x})$ and $q_{i}(\mathbf{x})$ in $x_{j}$ for any neighbor $j$ of
$i$. This allows the effect of neighborhood infection $x_{j}$ to saturate555As
$x_{j}$ increases for any neighbor $j$ of node $i$, the magnitude of the
resulting change in both infection rate $f_{i}(\mathbf{x})$ and recovery rate
$q_{i}(\mathbf{x})$ decreases. This is similar to the case of diminishing
returns. as $x_{j}$ increases. Assumption (A5) also assumes convexity of
$q_{i}(\mathbf{x})$ in the local infection $x_{i}$, which means that the
increase in the recovery rate caused by $x_{i}$ can grow larger as $x_{i}$ increases.
Examples for non-linear infection rates satisfying (A1)–(A5) include
logarithmic functions $f_{i}(\mathbf{x})=\sum_{j}a_{ij}\ln{(1+x_{j})}$,
similar to those in [20]. Examples of non-linear recovery rates include
polynomial functions such as $q_{i}(\mathbf{x})=(1+x_{i})^{k}-1$ for any
$k\geq 1$. A special class of the permissible non-linear recovery rates, where
the infection duration is dependent solely on local infection $x_{i}$, is
related to processes that have decreasing failure rates (DFR)666Failure rate
for a non-negative random variable is defined as the ratio between its
probability density function (PDF) and its complementary cumulative
distribution function (CCDF). In the context of infection duration, decreasing
failure rate means that nodes recover at a decreased rate the longer they stay
continuously infected. A more detailed discussion regarding the connection to
SIS recovery rates can be found in Appendix C.. This special class of recovery
processes that are DFR also includes the case of linear recovery rates. Note
that our assumptions allow $f_{i}(\mathbf{x})$ and $q_{i}(\mathbf{x})$ to be
heterogeneous across all nodes $i\in\mathcal{N}$, and the case with linear
rates in (2) readily satisfies (A1)–(A5). This also extends to the linear bi-
virus model (4) being a special case of the non-linear bi-virus model
introduced in the next subsection, with infection and recovery rate functions
therein satisfying the same assumptions (A1)–(A5).
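These example rates can be spot-checked against (A1)–(A5) with finite differences; the sketch below uses $K_5$, exponent $k=2$, and an arbitrary interior test point (all illustrative choices):

```python
import numpy as np

# Finite-difference spot-check that the example rates satisfy (A1)-(A5):
#   f_i(x) = sum_j a_ij * ln(1 + x_j)  and  q_i(x) = (1 + x_i)**2 - 1.
# The graph (K5), exponent k = 2, and test point are illustrative choices.
N = 5
A = np.ones((N, N)) - np.eye(N)
F = lambda x: A @ np.log1p(x)
Q = lambda x: (1 + x) ** 2 - 1

x = np.linspace(0.1, 0.9, N)                  # interior point of [0,1]^N
h = 1e-5
JF = np.column_stack([(F(x + h * e) - F(x - h * e)) / (2 * h) for e in np.eye(N)])
JQ = np.column_stack([(Q(x + h * e) - Q(x - h * e)) / (2 * h) for e in np.eye(N)])

a1 = np.allclose(F(np.zeros(N)), 0) and np.allclose(Q(np.zeros(N)), 0)
a2 = np.all(JF[A > 0] > 0) and np.allclose(np.diag(JF), 0)
a3 = np.all(np.diag(JQ) > 0) and np.all(JQ - np.diag(np.diag(JQ)) <= 0)

e0 = np.eye(N)[0]                             # curvature in direction x_1
d2f = (F(x + 1e-3 * e0) - 2 * F(x) + F(x - 1e-3 * e0)) / 1e-6  # (A4): <= 0
d2q = (Q(x + 1e-3 * e0) - 2 * Q(x) + Q(x - 1e-3 * e0)) / 1e-6  # (A5): >= 0
print(a1, a2, a3, bool(np.all(d2f <= 1e-8)), bool(d2q[0] > 1))
```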
Figure 1: Bi-Virus epidemic spread across overlaid graphs sharing the same set
of nodes. Red and Blue arrows denote the spread of Virus 1 and 2, respectively,
from infected nodes $j$ and $k$ (coloured Red and Blue) to the susceptible
node $i$ (uncoloured) with the instantaneous rates as shown. The infected Red
and Blue nodes also recover with a total rate of $r_{i}(\mathbf{x})$ and
$s_{i}(\mathbf{y})$ for any node $i\in\mathcal{N}$, respectively.
### III-B Bi-Virus Model with Non-linear rates
The Bi-Virus model with non-linear infection and recovery rates is given by
the following coupled system of ODEs:
$\begin{split}\frac{dx_{i}}{dt}=\bar{g}_{i}(\mathbf{x},\mathbf{y})&\triangleq\left(1-x_{i}-y_{i}\right)g_{i}(\mathbf{x})-r_{i}(\mathbf{x})\\\
\frac{dy_{i}}{dt}=\bar{h}_{i}(\mathbf{x},\mathbf{y})&\triangleq\left(1-x_{i}-y_{i}\right)h_{i}(\mathbf{y})-s_{i}(\mathbf{y})\end{split}$
(7)
for all $i\in\mathcal{N}$ and $t\geq 0$. In a matrix-vector form, (7) becomes:
$\begin{split}\frac{d\mathbf{x}}{dt}=\bar{G}(\mathbf{x},\mathbf{y})&\triangleq\text{diag}\left(\mathbf{1}-\mathbf{x}-\mathbf{y}\right)G(\mathbf{x})-R(\mathbf{x})\\\
\frac{d\mathbf{y}}{dt}=\bar{H}(\mathbf{x},\mathbf{y})&\triangleq\text{diag}\left(\mathbf{1}-\mathbf{x}-\mathbf{y}\right)H(\mathbf{y})-S(\mathbf{y}),\end{split}$
(8)
where $G(\mathbf{x})=[g_{i}(\mathbf{x})]$,
$R(\mathbf{x})=[r_{i}(\mathbf{x})]$, and $H(\mathbf{y})=[h_{i}(\mathbf{y})]$,
$S(\mathbf{y})=[s_{i}(\mathbf{y})]$ are the non-linear infection and recovery
rate functions for viruses 1 and 2, respectively. The pairs $(G,R)$ and
$(H,S)$ each satisfy the assumptions (A1)–(A5); where $G$ and $H$ specifically
satisfy (A2) with respect to their corresponding graphs with adjacency
matrices $\mathbf{A}$ and $\mathbf{B}$, respectively. Figure 1 illustrates
how these competing epidemics spread over the corresponding overlaid graphs.
Assumptions (A1)–(A5) are also more general (weaker) than those assumed in
[19, 20], where the recovery rates are restricted to being linear functions
and are thus a special case of our model. We emphasize that while the set of
assumptions for non-linear rates is mostly similar to (slightly more general
than) those in the literature, the characterization of all convergence scenarios
for their respective bi-virus models is incomplete, as we shall discuss later
in Section VI.
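A minimal sketch of the right-hand side of (8) as code may help fix ideas. The rate functions are passed in as callables, and the linear model (4) is recovered by one admissible choice of rates; the 2-node graphs and rate constants below are illustrative assumptions:

```python
import numpy as np

def bivirus_rhs(x, y, G, H, R, S):
    """Right-hand side of (8):
    dx/dt = diag(1-x-y) G(x) - R(x),  dy/dt = diag(1-x-y) H(y) - S(y)."""
    s = 1.0 - x - y                      # probability each node is susceptible
    return s * G(x) - R(x), s * H(y) - S(y)

# Linear special case (4): G(x) = beta1*A x, R(x) = delta1*x, similarly for H, S.
A = np.array([[0., 1.], [1., 0.]])
B = np.array([[0., 1.], [1., 0.]])
G = lambda x: 0.6 * A @ x
H = lambda y: 0.5 * B @ y
R = lambda x: 0.4 * x
S = lambda y: 0.3 * y

# The virus-free point (0, 0) is a fixed point of the dynamics.
dx, dy = bivirus_rhs(np.zeros(2), np.zeros(2), G, H, R, S)
assert np.allclose(dx, 0.0) and np.allclose(dy, 0.0)
```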
## IV Monotone Dynamical Systems and the Single Virus Epidemic
In this section, we provide a succinct introduction to monotone dynamical
systems (MDS) and some important definitions therein. We go on to show that
the $SIS$ model (6) is a monotone dynamical system (specifically a cooperative
system) and briefly apply these MDS techniques to epidemic models by deriving
the exact convergence result for the non-linear $SIS$ model. We also observe
that Theorem II.1 is recovered as the special case in which the infection and
recovery rates are linear.
### IV-A Monotone Dynamical Systems - A Primer
A well-known result from real analysis is that monotone sequences in compact
(closed and bounded) subsets of $\mathbb{R}^{n}$ converge in $\mathbb{R}^{n}$
[57]. This simple, yet powerful result has been fully integrated with the
theory of dynamical systems in a series of works [51, 58, 59, 60, 61, 62, 63,
64, 65, 66], which cumulatively form the theory of monotone dynamical systems
(MDS). The foundations of MDS were laid down in [51, 58, 59, 60, 61] which
study ordinary differential equations, specifically _cooperative_ ODE systems.
We here provide a brief, informal introduction to such ODE systems, with more
details in Appendix D.
A central tool in the theory of MDS is the notion of generalized cone-
orderings, which extends the concept of monotonicity in vector spaces.
###### Definition IV.1
Given a convex cone $K\subset X$ for any vector space $X$, the cone-ordering
$\leq_{K}$ ($<_{K}$, $\ll_{K}$) generated by $K$ is an order relation that
satisfies
1. (i)
$~{}\mathbf{x}\\!\leq_{K}\\!\mathbf{y}\\!\iff\\!(\mathbf{y}\\!-\\!\mathbf{x})\in
K$;
2. (ii)
$~{}\mathbf{x}\\!<_{K}\\!\mathbf{y}\\!\iff\\!\mathbf{x}\\!\leq_{K}\\!\mathbf{y}$
and $\mathbf{x}\\!\neq\\!\mathbf{y}$; and
3. (iii)
$~{}\mathbf{x}\\!\ll_{K}\\!\mathbf{y}\\!\iff\\!(\mathbf{y}\\!-\\!\mathbf{x})\in\text{int}(K)$,
for any $\mathbf{x},\mathbf{y}\in X$.
Note that, ‘$\ll_{K}$’ implies ‘$<_{K}$’ and is a stronger relation. Cone-
orderings generated by the positive orthant $K\\!=\\!\mathbb{R}^{n}_{+}$ are
simply denoted by $\leq$ ($<,\ll$), that is, without the ‘$K$’ notation.
Let $\phi_{t}(\mathbf{x})$ denote the solution of a dynamical system at some
time $t\\!>\\!0$ starting from an initial point
$\phi_{0}(\mathbf{x})\\!=\\!\mathbf{x}\\!\in\\!\mathbb{R}^{n}$.
###### Definition IV.2
Given a cone-ordering $\leq_{K}$ ($<_{K}$, $\ll_{K}$), the dynamical system is
said to be _monotone_ if for every
$\mathbf{x},\mathbf{y}\\!\in\\!\mathbb{R}^{n}$ such that
$\mathbf{x}\\!\leq_{K}\\!\mathbf{y}$, we have
$\phi_{t}(\mathbf{x})\\!\leq_{K}\\!\phi_{t}(\mathbf{y})$ for all $t\\!>\\!0$.
The system is called _strongly monotone_ if for all
$\mathbf{x},\mathbf{y}\\!\in\\!\mathbb{R}^{n}$ such that
$\mathbf{x}\\!<_{K}\\!\mathbf{y}$, we have
$\phi_{t}(\mathbf{x})\\!\ll_{K}\\!\phi_{t}(\mathbf{y})$ for all $t\\!>\\!0$.
The main result from MDS theory says that (almost) every solution trajectory
of a strongly monotone system always converges to some equilibrium point of
the system [58, 64, 65, 44]. If the system has only one stable fixed point,
then this in itself is enough to prove global convergence. Monotonicity
properties of a dynamical system can therefore be leveraged as an alternative
to constructing Lyapunov functions, which is often intractable.
Consider the following autonomous ODE system
$\dot{\mathbf{x}}=\bar{F}(\mathbf{x}),$ (9)
where $\bar{F}(\mathbf{x})=[\bar{f}_{i}(\mathbf{x})]\in\mathbb{R}^{n}$ is the
vector field. If $\phi_{t}(\mathbf{x})$ is the solution of this ODE system, we
say the system is _co-operative_ if it is monotone. One can determine whether
an ODE system is co-operative by inspecting the Jacobian of the vector field
[67]. The so-called Kamke
condition [66] says that (9) is co-operative with respect to the cone-ordering
generated by the positive orthant $K=\mathbb{R}^{n}_{+}$ if and only if
$\frac{\partial\bar{f}_{i}}{\partial x_{j}}\geq
0,~{}~{}~{}~{}~{}~{}~{}\text{for all }i\neq j.$ (10)
While it is not straightforward to obtain such a clean condition for any
general convex cone $K$, one can still deduce the co-operative property of the
ODE with respect to any one of the other orthants of $\mathbb{R}^{n}$ by
observing the signed entries of the Jacobian. We will show how this is done
for the bi-virus system (4) later in Section V-A.
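The Kamke condition (10) can be checked numerically by finite-differencing the vector field. The sketch below does so for the single-virus SIS field $\bar{F}(\mathbf{x})=\text{diag}(\mathbf{1}-\mathbf{x})F(\mathbf{x})-Q(\mathbf{x})$ with linear rates; the path graph and rate constants are illustrative assumptions:

```python
import numpy as np

# Illustrative 3-node path graph and linear rates (assumptions for this sketch).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
beta, delta = 0.6, 0.4

def Fbar(x):
    """SIS vector field: diag(1-x) * beta*A x - delta*x."""
    return (1.0 - x) * (beta * A @ x) - delta * x

def jacobian(f, x, eps=1e-6):
    """Central finite-difference Jacobian of f at x."""
    n = x.size
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        J[:, j] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

x0 = np.array([0.3, 0.2, 0.4])
J = jacobian(Fbar, x0)
# Kamke condition (10): off-diagonal entries are non-negative.
offdiag = J[~np.eye(3, dtype=bool)]
assert np.all(offdiag >= -1e-9)
```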
If the Jacobian of an ODE system is an irreducible matrix in a subset $D$ of
the state space, we say that the ODE system is irreducible in $D$ (Definition
D.2 in Appendix D). If the ODE system is co-operative in $D$ as well as
irreducible in $D$, then it is strongly monotone in $D$ (Theorem D.4 in
Appendix D). To prove convergence properties, we should ideally be able to
show that our system is strongly monotone in the entirety of the state space
it is contained in, for which we can directly apply the main MDS convergence
result. However, this is often not the case, and one needs additional results
from MDS literature to prove convergence. These details are deferred to
Appendix D.
### IV-B Monotonicity and convergence of SIS epidemic models
The following proposition establishes the monotonicity of the single-virus SIS
model with non-linear infection and recovery rates with respect to the regular
ordering relationship (cone-ordering generated by $\mathbb{R}^{N}_{+}$).
###### Proposition IV.3
The ODE system (6) is cooperative in $[0,1]^{N}$ and irreducible in
$(0,1)^{N}$ with respect to the cone-ordering generated by the positive
orthant $\mathbb{R}^{N}_{+}$.$\hfill\square$
We now state the convergence criterion for the non-linear single-virus $SIS$
model.
###### Theorem IV.4
Let $\mathbf{J}_{F}(\mathbf{x})$ and $\mathbf{J}_{Q}(\mathbf{x})$ denote the
Jacobian matrices of the vector valued infection and recovery rate functions
$F(\mathbf{x})$ and $Q(\mathbf{x})$ from (6), respectively. Then,
1. (i)
either $\lambda(\mathbf{J}_{F}(\mathbf{0})-\mathbf{J}_{Q}(\mathbf{0}))\leq 0$,
and $\mathbf{x}^{*}=\mathbf{0}$ is the globally asymptotically stable fixed point of
(6);
2. (ii)
or $\lambda(\mathbf{J}_{F}(\mathbf{0})-\mathbf{J}_{Q}(\mathbf{0}))>0$, and
there exists a unique, strictly positive fixed point $\mathbf{x}^{*}\gg 0$
such that $\mathbf{x}^{*}$ is globally asymptotically stable in
$[0,1]^{N}\setminus\\{\mathbf{0}\\}$.$\hfill\square$
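A numerical illustration of case (i) of Theorem IV.4 using the example rates from Section III, $f_{i}(\mathbf{x})=\sum_{j}a_{ij}\ln(1+x_{j})$ and $q_{i}(\mathbf{x})=(1+x_{i})^{3}-1$ (the triangle graph and exponent $k=3$ are illustrative choices): here $\mathbf{J}_{F}(\mathbf{0})=\mathbf{A}$ and $\mathbf{J}_{Q}(\mathbf{0})=3\mathbf{I}$, so $\lambda(\mathbf{J}_{F}(\mathbf{0})-\mathbf{J}_{Q}(\mathbf{0}))=\lambda(\mathbf{A})-3<0$ and the virus dies out:

```python
import numpy as np

A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])          # triangle graph, lambda(A) = 2

# Threshold quantity of Theorem IV.4 for these rates: lambda(A - 3I) = -1 < 0.
lam = max(np.linalg.eigvalsh(A - 3.0 * np.eye(3)))
assert np.isclose(lam, -1.0)          # case (i): virus-free equilibrium is GAS

# Forward-Euler integration of (6); the trajectory decays to the origin.
x = np.full(3, 0.5)
for _ in range(10000):                # time horizon t = 100 with step 0.01
    x += 1e-2 * ((1.0 - x) * (A @ np.log1p(x)) - ((1.0 + x) ** 3 - 1.0))
assert np.all(x < 1e-3)
```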
The proof for Theorem IV.4 utilizes a result from the monotone dynamical
systems literature, provided as Theorem E.1 in Appendix E. It was originally
proved and applied to linear SIS epidemics in [68] as an alternate proof of
the convergence properties of the model for Gonorrhea spread in [2], which is
a special case of our non-linear model (6). We can also see this in the
following remark.
###### Remark IV.5
For the single-virus SIS model with linear infection and recovery rates (2),
the conditions derived in Theorem IV.4 reduce to those in Theorem II.1.
###### Proof:
By substituting $F(\mathbf{x})=\beta\mathbf{A}\mathbf{x}$ and
$Q(\mathbf{x})=\delta\mathbf{x}$ in (21) (Jacobian of the single-virus system
(6), mentioned in the proof of Theorem IV.4) and evaluating at
$\mathbf{x}=\mathbf{0}$, we get
$\mathbf{J}_{\bar{F}}(\mathbf{0})=\mathbf{J}_{F}(\mathbf{0})\\!-\\!\mathbf{J}_{Q}(\mathbf{0})=\beta\mathbf{A}\\!-\\!\delta\mathbf{I}$.
The condition
$\lambda(\mathbf{J}_{F}(\mathbf{0})\\!-\\!\mathbf{J}_{Q}(\mathbf{0}))=\lambda(\beta\mathbf{A}\\!-\\!\delta\mathbf{I})>0$
$(\leq 0)$ can be rewritten as $\tau>1/\lambda(\mathbf{A})$ $\left(\leq
1/\lambda(\mathbf{A})\right)$ where $\tau=\beta/\delta$, which is the same as
in Theorem II.1. ∎
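The equivalence in Remark IV.5 is easy to confirm numerically; the triangle graph and the two rate pairs below are illustrative choices:

```python
import numpy as np

A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])          # triangle graph
lam_A = max(np.linalg.eigvalsh(A))    # lambda(A) = 2 here

# One subcritical and one supercritical (beta, delta) pair.
for beta, delta in [(0.3, 0.8), (0.9, 0.4)]:
    tau = beta / delta
    lam = max(np.linalg.eigvalsh(beta * A - delta * np.eye(3)))
    # lambda(beta*A - delta*I) > 0  <=>  tau > 1/lambda(A)
    assert (lam > 0) == (tau > 1.0 / lam_A)
```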
While Theorem IV.4 could be proved using the steps in [2], which were
reproduced in [20], that approach requires the application of two different
Lyapunov functions as well as a separate proof of the uniqueness of the positive
fixed point. Alternatively, one could apply Theorem 1 in [69] to establish the
uniqueness of the positive fixed point by first showing that the Jacobian of
$\bar{F}(\mathbf{x})$ evaluated at any point $\mathbf{x}\gg\mathbf{0}$
satisfying $\bar{F}(\mathbf{x})=\mathbf{0}$, is Hurwitz. This, combined with
Proposition IV.3, could then provide the necessary convergence criterion.
However, we maintain that using Theorem E.1 is a simpler way to derive
the same results; the proof is deferred to Appendix E.
## V Main results for the non-linear Bi-Virus model
We provide the necessary and sufficient results on the non-linear infection
and recovery rates of the bi-virus system (8) for convergence to each of the
three different kinds of equilibria: the virus-free, the single-virus
equilibrium, and the co-existence equilibrium. However, before stating the
main convergence results (proofs deferred to Appendix F in [70]), we establish
the monotonicity of the non-linear bi-virus model.
### V-A Monotonicity of the Bi-Virus epidemic models
We first revisit the Kamke condition from Section IV-A, in this instance given
for the _southeast cone-ordering_ as stated below.
#### Southeast cone-ordering and the Kamke condition
Consider the cone-ordering generated by the convex cone
$K=\\{\mathbb{R}^{N}_{+}\times\mathbb{R}^{N}_{-}\\}\subset\mathbb{R}^{2N}$.
This cone is one of the orthants of $\mathbb{R}^{2N}$, and for $N=1$, it would
correspond to the southeast orthant of $\mathbb{R}^{2}$
$\left(K=\\{\mathbb{R}_{+}\times\mathbb{R}_{-}\\}\subset\mathbb{R}^{2}\right)$.
For any two points $(\mathbf{x},\mathbf{y})$,
$(\bar{\mathbf{x}},\bar{\mathbf{y}})\in\mathbb{R}^{2N}$, it satisfies the
following:
1. (i)
$(\mathbf{x},\mathbf{y})\\!\leq_{K}\\!(\bar{\mathbf{x}},\bar{\mathbf{y}})\\!\iff\\!x_{i}\\!\leq\\!\bar{x}_{i}$
and $y_{i}\\!\geq\\!\bar{y}_{i}$ for all $i\\!\in\\!\mathcal{N}$;
2. (ii)
$(\mathbf{x},\mathbf{y})\\!\\!<_{K}\\!\\!(\bar{\mathbf{x}},\bar{\mathbf{y}})\\!\\!\iff\\!\\!(\mathbf{x},\mathbf{y})\\!\\!\leq_{K}\\!\\!(\bar{\mathbf{x}},\bar{\mathbf{y}})$
and $(\mathbf{x},\mathbf{y})\\!\neq\\!(\bar{\mathbf{x}},\bar{\mathbf{y}})$;
3. (iii)
$(\mathbf{x},\mathbf{y})\\!\ll_{K}\\!(\bar{\mathbf{x}},\bar{\mathbf{y}})\\!\iff\\!x_{i}\\!<\\!\bar{x}_{i}$
and $y_{i}\\!>\\!\bar{y}_{i}$ for all $i\\!\in\\!\mathcal{N}$.
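The ordering (i) above is a direct transcription into code (a small sketch; the sample points are arbitrary):

```python
import numpy as np

def se_leq(xy, xy_bar):
    """Southeast cone-ordering: (x,y) <=_K (xbar,ybar) iff
    x_i <= xbar_i and y_i >= ybar_i for all i."""
    (x, y), (xb, yb) = xy, xy_bar
    return np.all(x <= xb) and np.all(y >= yb)

x, y   = np.array([0.1, 0.2]), np.array([0.5, 0.4])
xb, yb = np.array([0.3, 0.2]), np.array([0.5, 0.1])
# x grows and y shrinks componentwise, so (x, y) <=_K (xb, yb) but not conversely.
assert se_leq((x, y), (xb, yb))
assert not se_leq((xb, yb), (x, y))
```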
This type of cone-ordering is often referred to as the southeast cone-
ordering, and the corresponding cone $K$ is the southeast orthant of
$\mathbb{R}^{2N}$. As shown in [67], the Kamke condition for determining
whether an ODE system is cooperative or not with respect to the positive
orthant $\mathbb{R}^{2N}_{+}$ can be generalised for cone-orderings generated
by any orthant of $\mathbb{R}^{2N}$, including the southeast orthant. Once
again, this is done by observing the Jacobian of the respective ODE system.
Consider the $2N$ dimensional system given by
$\dot{\mathbf{x}}=\bar{G}(\mathbf{x},\mathbf{y})~{}~{}\text{and}~{}~{}\dot{\mathbf{y}}=\bar{H}(\mathbf{x},\mathbf{y}),$
where $\bar{G}(\mathbf{x},\mathbf{y})=[\bar{g}_{i}(\mathbf{x},\mathbf{y})]$
and $\bar{H}(\mathbf{x},\mathbf{y})=[\bar{h}_{i}(\mathbf{x},\mathbf{y})]$ are
vector-valued functions in $\mathbb{R}^{N}$. The Kamke condition for this
system with respect to the southeast cone-ordering [67] is
$\frac{\partial\bar{g}_{i}}{\partial x_{j}}\geq
0,~{}\frac{\partial\bar{h}_{i}}{\partial y_{j}}\geq 0,~{}\forall i\neq
j,~{}~{}~{}\text{and}~{}~{}~{}\frac{\partial\bar{g}_{i}}{\partial y_{j}}\leq
0,~{}\frac{\partial\bar{h}_{i}}{\partial x_{j}}\leq 0,~{}\forall i,j.$
Roughly speaking, the Jacobian $\mathbf{J}_{\bar{G}\bar{H}}(\mathbf{x},\mathbf{y})$ of the
system, evaluated at all points in the state space, should be of the following
block matrix form (where the signs are not strict):
$\mathbf{J}_{\bar{G}\bar{H}}=\begin{bmatrix}*&+&+&-&-&-\\\ +&*&+&-&-&-\\\
+&+&*&-&-&-\\\ -&-&-&*&+&+\\\ -&-&-&+&*&+\\\ -&-&-&+&+&*\end{bmatrix}$ (11)
Note that the state space of the ODE system (4) is given by
$D\triangleq\left\\{(\mathbf{x},\mathbf{y})\in[0,1]^{2N}~{}|~{}\mathbf{x}+\mathbf{y}\leq\mathbf{1}\right\\}$.
###### Proposition V.1
The ODE system (8) (the non-linear bi-virus model) is cooperative in $D$ with
respect to the southeast cone-ordering. It is also irreducible in
$\text{Int}(D)$.
###### Proof:
For all $(\mathbf{x},\mathbf{y})\in D$ and $i\neq j\in\mathcal{N}$, we have
$\frac{\partial\bar{g}_{i}(\mathbf{x},\mathbf{y})}{\partial
x_{j}}=(1-x_{i}-y_{i})\frac{\partial g_{i}(\mathbf{x})}{\partial
x_{j}}-\frac{\partial r_{i}(\mathbf{x})}{\partial x_{j}}\geq 0,$
$\frac{\partial\bar{h}_{i}(\mathbf{x},\mathbf{y})}{\partial
y_{j}}=(1-x_{i}-y_{i})\frac{\partial h_{i}(\mathbf{y})}{\partial
y_{j}}-\frac{\partial s_{i}(\mathbf{y})}{\partial y_{j}}\geq 0$
since $\frac{\partial g_{i}(\mathbf{x})}{\partial x_{j}}\geq 0$,
$\frac{\partial r_{i}(\mathbf{x})}{\partial x_{j}}\leq 0$ and $\frac{\partial
h_{i}(\mathbf{y})}{\partial y_{j}}\geq 0$, $\frac{\partial
s_{i}(\mathbf{y})}{\partial y_{j}}\leq 0$ from assumptions (A2) and (A3), and
$(1-x_{i}-y_{i})\geq 0$. Moreover for all $i\in\mathcal{N}$,
$\frac{\partial\bar{g}_{i}}{\partial y_{i}}=-g_{i}(\mathbf{x})\leq
0~{}~{}\text{and}~{}~{}\frac{\partial\bar{h}_{i}}{\partial
x_{i}}=-h_{i}(\mathbf{y})\leq 0,$
with ${\partial\bar{g}_{i}}/{\partial y_{j}}={\partial\bar{h}_{i}}/{\partial
x_{j}}=0$. Thus, the Kamke conditions are satisfied and the system is
cooperative in $D$.
The Jacobian $\mathbf{J}_{\bar{G}\bar{H}}(\mathbf{x},\mathbf{y})$ of system
(8) is written as
$\begin{split}&\\!\\!\\!\mathbf{J}_{\bar{G}\bar{H}}(\mathbf{x},\mathbf{y})=\\\
&\\!\\!\\!\\!\\!\begin{bmatrix}\mathbf{S}_{\mathbf{x}\mathbf{y}}\mathbf{J}_{G}(\mathbf{x})\\!\\!-\\!\\!\mathbf{D}_{G(\mathbf{x})}\\!\\!-\\!\\!\mathbf{J}_{R}(\mathbf{x})&\\!\\!\\!\\!-\\!\mathbf{D}_{G(\mathbf{x})}\\\
-\\!\mathbf{D}_{H(\mathbf{y})}&\\!\\!\\!\\!\mathbf{S}_{\mathbf{x}\mathbf{y}}\mathbf{J}_{H}(\mathbf{y})\\!\\!-\\!\\!\mathbf{D}_{H(\mathbf{y})}\\!\\!-\\!\\!\mathbf{J}_{S}(\mathbf{y})\end{bmatrix}\\!\\!,\end{split}$
(12)
where
$\mathbf{S}_{\mathbf{x},\mathbf{y}}\triangleq\text{diag}(\mathbf{1}-\mathbf{x}-\mathbf{y})$,
$\mathbf{D}_{G(\mathbf{x})}\triangleq\text{diag}(G(\mathbf{x}))$ and
$\mathbf{D}_{H(\mathbf{y})}\triangleq\text{diag}(H(\mathbf{y}))$. Since the
infection rate functions satisfy assumption (A2) for their corresponding
underlying graphs, $\mathbf{J}_{G}(\mathbf{x})$ and
$\mathbf{J}_{H}(\mathbf{y})$ follow the sign structure of $\mathbf{A}$ and
$\mathbf{B}$ respectively and are irreducible. The off-diagonal blocks of
$\mathbf{J}_{\bar{G}\bar{H}}(\mathbf{x},\mathbf{y})$ are diagonal matrices
with non-zero diagonal entries for $(\mathbf{x},\mathbf{y})\in\text{Int}(D)$,
and there does not exist a permutation matrix that would transform this into a
block upper triangular matrix. Hence, by Definition D.2, the system is
irreducible in $\text{Int}(D)$, and this completes the proof. ∎
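The sign pattern (11) can be confirmed numerically for the linear special case, where $\mathbf{J}_{G}(\mathbf{x})=\beta_{1}\mathbf{A}$, $\mathbf{J}_{R}(\mathbf{x})=\delta_{1}\mathbf{I}$, and similarly for Virus 2. The 2-node graphs, rates, and evaluation point below are illustrative assumptions:

```python
import numpy as np

A = np.array([[0., 1.], [1., 0.]])
B = np.array([[0., 1.], [1., 0.]])
beta1, beta2, delta1, delta2 = 0.6, 0.5, 0.4, 0.3

x = np.array([0.2, 0.3]); y = np.array([0.1, 0.2])
Sxy = np.diag(1.0 - x - y)                 # diag(1 - x - y)
Gx, Hy = beta1 * A @ x, beta2 * B @ y

# Block Jacobian (12) for the linear model.
J = np.block([
    [Sxy @ (beta1 * A) - np.diag(Gx) - delta1 * np.eye(2), -np.diag(Gx)],
    [-np.diag(Hy), Sxy @ (beta2 * B) - np.diag(Hy) - delta2 * np.eye(2)],
])

# Sign pattern (11): off-diagonal entries of the diagonal blocks are >= 0;
# all entries of the off-diagonal blocks are <= 0.
mask = ~np.eye(2, dtype=bool)
assert np.all(J[:2, :2][mask] >= 0) and np.all(J[2:, 2:][mask] >= 0)
assert np.all(J[:2, 2:] <= 0) and np.all(J[2:, :2] <= 0)
```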
From Proposition V.1, we deduce that the non-linear bi-virus system of ODEs
(8) is co-operative in $D$, and thus strongly monotone in $\text{Int}(D)$ in
view of Theorem D.4 in Appendix D. This property also extends to the linear
bi-virus system (4) which is a special case of (8).
### V-B Convergence and Coexistence properties of the Bi-Virus model
We are now ready to establish results on convergence properties of the bi-
virus model and provide conditions for coexistence of two viruses in the non-
linear bi-virus model as in (8).
Let $\mathbf{x}^{*}$ and $\mathbf{y}^{*}$ be the globally attractive fixed
points of the single-virus SIS models that system (8) would reduce to when
Virus 2 and 1, respectively, are not present over the network. These systems
are given by
$\dot{\mathbf{x}}=F^{x}(\mathbf{x})\triangleq\bar{G}(\mathbf{x},\mathbf{0})=\text{diag}(\mathbf{1}-\mathbf{x})G(\mathbf{x})-R(\mathbf{x}),$
(13)
$\dot{\mathbf{y}}=F^{y}(\mathbf{y})\triangleq\bar{H}(\mathbf{0},\mathbf{y})=\text{diag}(\mathbf{1}-\mathbf{y})H(\mathbf{y})-S(\mathbf{y});$
(14)
and by Theorem IV.4, $\mathbf{x}^{*}\\!=\\!\mathbf{0}$
($\mathbf{y}^{*}\\!=\\!\mathbf{0}$) if
$\lambda\left(\mathbf{J}_{G}(\mathbf{0})\\!-\\!\mathbf{J}_{R}(\mathbf{0})\right)\\!\leq\\!0$
(if
$\lambda\left(\mathbf{J}_{H}(\mathbf{0})\\!-\\!\mathbf{J}_{S}(\mathbf{0})\right)\\!\leq\\!0$),
and $\mathbf{x}^{*}\\!\gg\\!\mathbf{0}$ ($\mathbf{y}^{*}\\!\gg\\!\mathbf{0}$)
otherwise.
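In the linear case, $\mathbf{x}^{*}$ can be obtained simply by integrating (13) forward in time. The sketch below uses an illustrative graph and rates chosen so that $\tau_{1}\lambda(\mathbf{A})=4>1$, hence $\mathbf{x}^{*}\gg\mathbf{0}$:

```python
import numpy as np

A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])          # triangle graph, lambda(A) = 2
beta1, delta1 = 0.8, 0.4              # tau1 * lambda(A) = 4 > 1

# Forward-Euler integration of (13): x' = diag(1-x) beta1*A x - delta1*x.
x = np.full(3, 0.01)
for _ in range(20000):                # time horizon t = 200 with step 0.01
    x += 1e-2 * ((1.0 - x) * (beta1 * A @ x) - delta1 * x)

# By symmetry the equilibrium is uniform: (1-x)*1.6 = 0.4, so x* = 0.75.
assert np.all(np.isclose(x, 0.75, atol=1e-3))
# Residual of the fixed-point equation is negligible.
assert np.allclose((1.0 - x) * (beta1 * A @ x), delta1 * x, atol=1e-6)
```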
We first state the result when the virus-free equilibrium is globally
attractive. We prove this by presenting simple arguments which require only
Theorem IV.4 for the SIS model, along with the monotonicity properties derived
in the previous section, eliminating the need for a Lyapunov-based approach.
###### Theorem V.2 (Convergence to virus-free equilibria)
If
$\lambda\left(\mathbf{J}_{G}(\mathbf{0})\\!-\\!\mathbf{J}_{R}(\mathbf{0})\right)\\!\leq\\!0$
and
$\lambda\left(\mathbf{J}_{H}(\mathbf{0})\\!-\\!\mathbf{J}_{S}(\mathbf{0})\right)\\!\leq\\!0$,
trajectories of (8) starting from any point in $D$ converge to
$(\mathbf{0},\mathbf{0})$.$\hfill\square$
We next characterize the conditions when the system globally converges to
equilibria when only one of the viruses survives over the network. Let
$\mathbf{S}_{\mathbf{x}}\\!\triangleq\\!\text{diag}(\mathbf{1}\\!-\\!\mathbf{x})$
and
$\mathbf{S}_{\mathbf{y}}\\!\triangleq\\!\text{diag}(\mathbf{1}\\!-\\!\mathbf{y})$
for any $\mathbf{x},\mathbf{y}\in\mathbb{R}^{N}$. Also denote by
$B_{x}\\!\triangleq\\!\left\\{(\mathbf{x},\mathbf{y})\in
D~{}|~{}\mathbf{x}\\!>\\!\mathbf{0}\right\\}$ the set of all points
$(\mathbf{x},\mathbf{y})\\!\in\\!D$ for which $x_{i}\\!>\\!0$ for some
$i\in\mathcal{N}$, and let
$B_{y}\\!\triangleq\\!\left\\{(\mathbf{x},\mathbf{y})\in
D~{}|~{}\mathbf{y}\\!>\\!\mathbf{0}\right\\}$ be a similar set for the $y_{i}$
entries.
###### Theorem V.3 (Convergence to single-virus equilibria)
When
$\lambda\\!\left(\mathbf{S}_{\mathbf{y}^{*}}\mathbf{J}_{G}(\mathbf{0})\\!-\\!\mathbf{J}_{R}(\mathbf{0})\right)\\!>\\!0$
and
$\lambda\\!\left(\mathbf{S}_{\mathbf{x}^{*}}\mathbf{J}_{H}(\mathbf{0})\\!-\\!\mathbf{J}_{S}(\mathbf{0})\right)\\!\leq\\!0$,
$(\mathbf{x}^{*},\mathbf{0})$ is globally attractive in $B_{x}$;777We consider
$B_{x}$ as the global domain of attraction instead of $D$ because
$\mathbf{x}=0$ for all points in the set $D\setminus B_{x}$. Starting from
such points the system is no longer a bi-virus epidemic, but a single-virus
SIS system for Virus 2. that is, every trajectory of system (8) starting from
points in $B_{x}$ converges to $(\mathbf{x}^{*},\mathbf{0})$.
Similarly, when
$\lambda\left(\mathbf{S}_{\mathbf{y}^{*}}\mathbf{J}_{G}(\mathbf{0})-\mathbf{J}_{R}(\mathbf{0})\right)\leq
0$ and
$\lambda\left(\mathbf{S}_{\mathbf{x}^{*}}\mathbf{J}_{H}(\mathbf{0})-\mathbf{J}_{S}(\mathbf{0})\right)>0$,
$(\mathbf{0},\mathbf{y}^{*})$ is globally attractive in $B_{y}$. $\hfill\square$
###### Proof:
The idea behind the proof is illustrated in Figure 2. For every
$(\mathbf{x},\mathbf{y})\\!\in\\!B_{x}$ (for example $p_{1}$ and $p_{2}$ in
Figure 2), we construct a point $(\mathbf{x}_{r},\mathbf{y}_{s})$ which
eventually bounds the trajectory starting from $(\mathbf{x},\mathbf{y})$; that
is, we have
$(\mathbf{x}_{r},\mathbf{y}_{s})\\!\ll_{K}\\!\phi_{t_{1}}(\mathbf{x},\mathbf{y})\\!\leq_{K}\\!(\mathbf{x}^{*},\mathbf{0})$888$\phi_{t}(\mathbf{x},\mathbf{y})$
denotes the solution of (4) at $t\\!\geq\\!0$, with initial point
$(\mathbf{x},\mathbf{y})$. for some $t_{1}\\!\geq\\!0$. From the monotonicity
shown in Proposition V.1, we have
$\phi_{t}(\mathbf{x}_{r},\mathbf{y}_{s})\\!\ll_{K}\\!\phi_{t+t_{1}}(\mathbf{x},\mathbf{y})\\!\leq_{K}\\!(\mathbf{x}^{*},\mathbf{0})$
for all time $t\\!\geq\\!0$. We prove that the trajectory starting from
$(\mathbf{x}_{r},\mathbf{y}_{s})$ converges to $(\mathbf{x}^{*},0)$
monotonically, with respect to the southeast cone-ordering (Figure 2(a)).
Using this, we show the convergence of trajectories starting from
$(\mathbf{x},\mathbf{y})$ via a sandwich argument (Figure 2(b)). See Appendix
F in [70] for detailed proof. ∎
(a) For every point $p_{k}$, there is a point
$(\mathbf{x}_{rk},\mathbf{y}_{sk})$ starting from which, trajectories converge
monotonically $(\leq_{K})$ to $(\mathbf{x}^{*},0)$. (b) Trajectories starting
from $p_{k}$ eventually bounded by $(\mathbf{x}_{rk},\mathbf{y}_{sk})$;
monotonicity of the system gives convergence to $(\mathbf{x}^{*},0)$.
Figure 2: Illustration of the convergence to $(\mathbf{x}^{*},0)$
(a) Limitations of the literature. (b) Complete characterization of the
convergence trichotomy.
Figure 3: Characterization of the parameter space
Finally, we give the necessary and sufficient conditions that guarantee the
co-existence of the two viruses in the long run. Let $E$ denote the set of all
fixed points of the system in (8).
###### Theorem V.4 (Convergence to coexistence equilibria)
If
$\lambda\left(\mathbf{S}_{\mathbf{y}^{*}}\mathbf{J}_{G}(\mathbf{0})\\!-\\!\mathbf{J}_{R}(\mathbf{0})\right)\\!>\\!0$
and
$\lambda\left(\mathbf{S}_{\mathbf{x}^{*}}\mathbf{J}_{H}(\mathbf{0})\\!-\\!\mathbf{J}_{S}(\mathbf{0})\right)\\!>\\!0$,
there exist fixed points of system (8)
$(\hat{\mathbf{x}},\hat{\mathbf{y}})\\!\gg\\!(\mathbf{0},\mathbf{0})$ and
$(\bar{\mathbf{x}},\bar{\mathbf{y}})\\!\gg\\!(\mathbf{0},\mathbf{0})$ such
that
$(\mathbf{0},\mathbf{y}^{*})\ll_{K}(\hat{\mathbf{x}},\hat{\mathbf{y}})\leq_{K}(\bar{\mathbf{x}},\bar{\mathbf{y}})\ll_{K}(\mathbf{x}^{*},\mathbf{0}),$
with the possibility that
$(\hat{\mathbf{x}},\hat{\mathbf{y}})=(\bar{\mathbf{x}},\bar{\mathbf{y}})$. All
trajectories of system (8) starting from $B_{x}\cap B_{y}$ converge to the set
of coexistence fixed points
$S\triangleq\left\\{(\mathbf{x}_{e},\mathbf{y}_{e})\\!\in\\!E~{}|~{}(\hat{\mathbf{x}},\hat{\mathbf{y}})\\!\leq_{K}\\!(\mathbf{x}_{e},\mathbf{y}_{e})\\!\leq_{K}\\!(\bar{\mathbf{x}},\bar{\mathbf{y}})\right\\}$.$\hfill\square$
The proof of Theorem V.4 follows arguments similar to those of the previous
theorem, and is the first convergence result for coexistence fixed points in
the competing SIS literature. Note that while we have convergence to _a_
coexistence equilibrium, it may not be unique in the state space. The global
convergence is therefore to the set of possible coexistence equilibria, and
not necessarily to a single point. Thus, via Theorems V.2, V.3 and V.4 we
cover all possible convergence scenarios of the bi-virus SIS system (8), and
successfully establish the complete theoretical characterization for the
trichotomy of possible outcomes.
## VI Linear Infection and Recovery rates - Discussion and Comparison to
Literature
We now take a look at the special case of the bi-virus epidemic model where
infection and recovery rates scale linearly with the local infection
probability. This is the most commonly analysed setting in the literature [21, 54,
31, 32, 33, 34], and allows us to provide a comprehensive discussion on the
related works. With the exception of [54], a line of work seemingly developed
concurrently to ours, we observe that most existing works only provide limited
results regarding convergence to coexistence equilibria. In what follows, we
provide corollaries of Theorems V.2, V.3 and V.4 which characterize
convergence to the trichotomy of possible outcomes for the special case of
linear infection and recovery rates. These results, along with Figure 3, are
reproduced here as they originally were in our previous work [1] which focused
only on characterizing the convergence properties in the case of linear
infection and recovery rates.
The model considered in this section is the bi-virus system (4) with
homogeneous infection and recovery rates999every infected node
$i\in\mathcal{N}$ infects its susceptible neighbor with the same rate
$\beta_{1}>0$ or $\beta_{2}>0$, and in turn recovers with the same rate
$\delta_{1}>0$ or $\delta_{2}>0$, depending on whether it is infected by Virus
1 or 2 respectively.. While at first this may seem too simplistic compared to
the case of linear, heterogeneous rates101010The adjacency matrices
$\mathbf{A}$ and $\mathbf{B}$ in (4) can be symmetric, irreducible, weighted;
with $a_{ij},b_{ij}\geq 0$ (not necessarily $0/1$ valued) multiplied by
$\beta_{1}$ and $\beta_{2}$ respectively, being the infection rates from node
$j\to i$ for Viruses 1 and 2. Recovery rates can similarly be heterogenized as
$\boldsymbol{\delta}_{1}=[\delta_{1}^{i}]$ and
$\boldsymbol{\delta}_{2}=[\delta_{2}^{i}]$ for Viruses 1 and 2; written as
recovery rate matrices $\text{diag}(\boldsymbol{\delta}_{1})$ and
$\text{diag}(\boldsymbol{\delta}_{2})$, respectively., and even generic, non-
linear rates analyzed in the literature [19, 20, 21, 54, 31, 32, 33, 34], the
discussions in the ‘Comparison to existing literature’ subsection will still
hold for these more general cases. We only stick to the bi-virus system with
homogeneous rates as in (4) so that our results can be illustrated in the form
of Figure 3, whose axes capture the parameters of the system. This visual aid
enables us to better explain our contribution and to compare our work with
some of the existing literature more effectively than any other special case
of the bi-virus model would.
Consider the linear bi-virus system (4). By setting
$G(\mathbf{x})=\beta_{1}\mathbf{A}\mathbf{x}$,
$R(\mathbf{x})=\delta_{1}\mathbf{x}$ and
$H(\mathbf{y})=\beta_{2}\mathbf{B}\mathbf{y}$,
$S(\mathbf{y})=\delta_{2}\mathbf{y}$, we get
$\mathbf{J}_{G}(\mathbf{0})\\!=\\!\beta_{1}\mathbf{A},~{}~{}\mathbf{J}_{R}(\mathbf{0})\\!=\\!\delta_{1}\mathbf{I},$
and
$\mathbf{J}_{H}(\mathbf{0})\\!=\\!\beta_{2}\mathbf{B},~{}~{}\mathbf{J}_{S}(\mathbf{0})\\!=\\!\delta_{2}\mathbf{I}.$
Defining $\tau_{1}\triangleq\beta_{1}/\delta_{1}$,
$\tau_{2}\triangleq\beta_{2}/\delta_{2}$, and plugging in the above
expressions for the Jacobians in Theorems V.2 and V.3, we have the following
Corollaries.
###### Corollary VI.1
If $\tau_{1}\lambda(\mathbf{A})\\!\leq\\!1$ and
$\tau_{2}\lambda(\mathbf{B})\\!\leq\\!1$, trajectories of (4) starting from
any point in $D$ converge to $(\mathbf{0},\mathbf{0})$.$\hfill\square$
###### Corollary VI.2
When $\tau_{1}\lambda(\mathbf{S}_{\mathbf{y}^{*}}\mathbf{A})\\!>\\!1$ and
$\tau_{2}\lambda(\mathbf{S}_{\mathbf{x}^{*}}\mathbf{B})\\!\leq\\!1$,
$(\mathbf{x}^{*},\mathbf{0})$ is globally attractive in $B_{x}$;111111We
consider $B_{x}$ as the global domain of attraction instead of $D$ because
$\mathbf{x}=0$ for all points in the set $D\setminus B_{x}$. Starting from
such points the system is no longer a bi-virus epidemic, but a single-virus
SIS system for Virus 2. that is, every trajectory of system (4) starting from
points in $B_{x}$ converges to $(\mathbf{x}^{*},\mathbf{0})$.
Similarly, when
$\tau_{1}\lambda(\mathbf{S}_{\mathbf{y}^{*}}\mathbf{A})\\!\leq\\!1$ and
$\tau_{2}\lambda(\mathbf{S}_{\mathbf{x}^{*}}\mathbf{B})\\!>\\!1$,
$(\mathbf{0},\mathbf{y}^{*})$ is globally attractive in $B_{y}$.
$\hfill\square$
From Corollary VI.2, we can deduce that the threshold values for $\tau_{1}$
and $\tau_{2}$ below which each of the viruses will die out are given by the
equations $\tau_{1}\\!=\\!1/\lambda(\mathbf{S}_{\mathbf{y}^{*}}\mathbf{A})$
and $\tau_{2}\\!=\\!1/\lambda(\mathbf{S}_{\mathbf{x}^{*}}\mathbf{B})$,
respectively. Figure 3(b) plots these threshold values for Virus 1 (in blue)
and Virus 2 (in red) for varying values of $\tau_{1}$ and $\tau_{2}$, and
partitions the entire parameter space into regions R1 – R6 as shown. When
$\tau_{1}\\!>\\!1/\lambda(\mathbf{A})$ and
$\tau_{2}\\!>\\!1/\lambda(\mathbf{B})$, that is, when the values of
$\tau_{1},\tau_{2}$ do not lie in regions R1, R2 or R3, the blue curve lies
above the red curve as in Figure 3(b). This was originally shown in [18] by
deducing that the ratio of slopes of the red and blue curves at point
$(\tau_{1},\tau_{2})=\left(1/\lambda(\mathbf{A}),1/\lambda(\mathbf{B})\right)$
is less than one. This means there exist combinations of $\tau_{1},\tau_{2}$
for which $\tau_{1}$ lies to the right of the blue curve
($\tau_{1}\lambda(\mathbf{S}_{\mathbf{y}^{*}}\mathbf{A})\\!>\\!1$), and
$\tau_{2}$ lies above the red curve
($\tau_{2}\lambda(\mathbf{S}_{\mathbf{x}^{*}}\mathbf{B})\\!>\\!1$).121212Note
that $\tau_{1}\lambda(\mathbf{S}_{\mathbf{y}^{*}}\mathbf{A})\\!\leq\\!1$ and
$\tau_{2}\lambda(\mathbf{S}_{\mathbf{x}^{*}}\mathbf{B})\\!\leq\\!1$ is only
possible in region R1, since it is the only region where $\tau_{1}$ can lie to
the left of the blue curve, and $\tau_{2}$ can lie below the red curve. This
effectively reduces the expressions to
$\tau_{1}\lambda(\mathbf{A})\\!\leq\\!1$ and
$\tau_{2}\lambda(\mathbf{B})\\!\leq\\!1$, the conditions for convergence to
the virus-free equilibrium as in Corollary VI.1. This corresponds to region R6
in Figure 3(b), and our final corollary (derived from Theorem V.4) shows that
for values of $\tau_{1},\tau_{2}$ which lie in R6, we observe convergence to
coexistence equilibria.
###### Corollary VI.3 (Convergence to coexistence equilibria)
If $\tau_{1}\lambda(\mathbf{S}_{\mathbf{y}^{*}}\mathbf{A})\\!>\\!1$ and
$\tau_{2}\lambda(\mathbf{S}_{\mathbf{x}^{*}}\mathbf{B})\\!>\\!1$, there exist
fixed points of system (4)
$(\hat{\mathbf{x}},\hat{\mathbf{y}})\\!\gg\\!(\mathbf{0},\mathbf{0})$ and
$(\bar{\mathbf{x}},\bar{\mathbf{y}})\\!\gg\\!(\mathbf{0},\mathbf{0})$ such
that
$(\mathbf{0},\mathbf{y}^{*})\ll_{K}(\hat{\mathbf{x}},\hat{\mathbf{y}})\leq_{K}(\bar{\mathbf{x}},\bar{\mathbf{y}})\ll_{K}(\mathbf{x}^{*},\mathbf{0}),$
with the possibility that
$(\hat{\mathbf{x}},\hat{\mathbf{y}})=(\bar{\mathbf{x}},\bar{\mathbf{y}})$. All
trajectories of system (4) starting from $B_{x}\cap B_{y}$ converge to the set
of coexistence fixed points
$S\triangleq\left\\{(\mathbf{x}_{e},\mathbf{y}_{e})\\!\in\\!E~{}|~{}(\hat{\mathbf{x}},\hat{\mathbf{y}})\\!\leq_{K}\\!(\mathbf{x}_{e},\mathbf{y}_{e})\\!\leq_{K}\\!(\bar{\mathbf{x}},\bar{\mathbf{y}})\right\\}$.$\hfill\square$
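The threshold quantities in Corollaries VI.1–VI.3 are directly computable. The sketch below (with illustrative graphs and $\tau$ values, not from the paper) finds the single-virus equilibria by forward integration and then evaluates both thresholds; for this particular choice $\tau_{1}\lambda(\mathbf{S}_{\mathbf{y}^{*}}\mathbf{A})>1$ while $\tau_{2}\lambda(\mathbf{S}_{\mathbf{x}^{*}}\mathbf{B})\leq 1$, placing the parameters in the region where Virus 1 survives and Virus 2 dies out (Corollary VI.2):

```python
import numpy as np

A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])   # triangle graph
B = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])   # path graph
tau1, tau2 = 2.0, 2.0

def sis_equilibrium(M, tau, steps=20000, dt=1e-2):
    """Forward-Euler integration of x' = diag(1-x) tau*M x - x."""
    x = np.full(M.shape[0], 0.01)
    for _ in range(steps):
        x += dt * ((1.0 - x) * (tau * M @ x) - x)
    return x

def specrad(M):
    """Spectral radius (= Perron root for non-negative matrices)."""
    return max(abs(np.linalg.eigvals(M)))

xs, ys = sis_equilibrium(A, tau1), sis_equilibrium(B, tau2)
t1 = tau1 * specrad(np.diag(1.0 - ys) @ A)   # tau1 * lambda(S_{y*} A)
t2 = tau2 * specrad(np.diag(1.0 - xs) @ B)   # tau2 * lambda(S_{x*} B)
assert t1 > 1.0 and t2 <= 1.0   # Corollary VI.2: (x*, 0) globally attractive
```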
| $g_{i}(\mathbf{x})$ | $h_{i}(\mathbf{y})$ | $r_{i}(\mathbf{x})$ | $s_{i}(\mathbf{y})$
---|---|---|---|---
CASE 1 | $\sum_{j}a_{ij}x_{j}$ | $\sum_{j}b_{ij}y_{j}$ | $\delta_{1}x_{i}$ | $\delta_{2}y_{i}$
CASE 2 | $\sum_{j}a_{ij}\ln(1+\alpha_{1}x_{j})$ | $\sum_{j}b_{ij}\ln(1+\alpha_{2}y_{j})$ | $\delta_{1}x_{i}$ | $\delta_{2}y_{i}$
CASE 3 | $\sum_{j}a_{ij}\ln(1+\alpha_{1}x_{j})$ | $\sum_{j}b_{ij}\ln(1+\alpha_{2}y_{j})$ | $(1+x_{i})^{2}-1$ | $(1+y_{i})^{2}-1$
Table I: Summary of infection and recovery rate functions chosen.
#### Comparison to existing literature
Now that we have established all our results, we briefly compare our work with
results from [20, 19], which also address global convergence to single-virus
equilibria. To this end, we first illustrate the limitations of the existing
conditions for global convergence in [20, 19] in Figure 3(a), and use Figure
3(b), where we provide a complete characterization of the parameter space, to
draw comparisons with our results. We then discuss the works [31, 34, 32, 33],
which consider more general models where there can be more than two viruses,
but present sharper results in the bi-virus setting. Finally, we briefly
comment on the finiteness of the coexistence equilibria, citing results from
[54].
When translated to the setting of linear infection and recovery rates as in
(4), the result from [19] says that when
$\tau_{1}d_{min}(\mathbf{A})\\!>\\!\tau_{2}d_{max}(\mathbf{B})$, Virus 2 is
sure to die out (Virus 1 could persist or die out), and similarly, when
$\tau_{1}d_{max}(\mathbf{A})\\!<\\!\tau_{2}d_{min}(\mathbf{B})$, Virus 1 is
sure to die out. We illustrate these conditions in Figure 3(a), where Virus
1 (Virus 2) is sure to die out if parameters ($\tau_{1},\tau_{2}$) lie above
(below) the blue (red) line. Therefore, the entire yellow-shaded region in
Figure 3(a), between the blue and red lines, is left uncharacterized in [19].
When $\mathbf{A}$ and $\mathbf{B}$ are regular graphs with the same degree
($d_{min}\\!=\\!d_{max}\\!=\\!d$), the blue and red lines coincide, making
coexistence infeasible. This is also noted in [18], where it is shown that for
regular graphs with the same degree the system behaves as if the two graphs
were identical, rendering coexistence impossible (in line with results in
[8]). In contrast, the maximum degree of a graph can also be much
larger than the minimum degree (e.g., power law graphs), causing the yellow-
shaded space to become very large, possibly spanning almost the entire
parameter space.
The main result in [20], when similarly translated to our setting as above,
says that when $\tau_{1}\lambda(\mathbf{A})\\!>\\!1$ and
$\tau_{2}\lambda(\mathbf{B})\\!\leq\\!1$, Virus 1 survives and Virus 2 dies
out. Similarly, when $\tau_{2}\lambda(\mathbf{B})\\!>\\!1$ and
$\tau_{1}\lambda(\mathbf{A})\\!\leq\\!1$, Virus 2 survives and Virus 1 dies
out. These correspond to regions R2 and R3 in Figure 3(b). However, their
results do not cover the convergence properties for $\tau_{1},\tau_{2}$ which
lie in regions R4 – R6. Our Theorems V.3 and V.4, through their corresponding
corollaries, do account for these values of $\tau_{1},\tau_{2}$, and show
convergence to $(\mathbf{0},\mathbf{y}^{*})$, $(\mathbf{x}^{*},\mathbf{0})$ or
to a coexistence fixed point whenever they lie in regions R4, R5, or R6,
respectively.
The works [32, 33] consider the bi-virus epidemic model with heterogeneous
linear infection and recovery rates as a special case of their respective
multi-virus models. Corollary 2 in [33], a more general version of Theorem 5
in [32] which considers the case where $N=2$, establishes existence conditions
for the coexistence equilibria. These conditions are identical to the ones
emerging out of Theorem V.4 when applied to the bi-virus model considered
therein (also identical to the conditions in Corollary VI.3 for the special
case of homogeneous, linear infection and recovery rates), and our result can
therefore be considered an extension of those in [32, 33], providing
_convergence_ results in addition to their existence results. Theorem 6 in
[34] (Theorem 8 in [31]) is another interesting result concerning coexistence
equilibria, where they show for the special case of viruses spreading over the
same (possibly weighted) graph that the survival probability vectors of both
the viruses are the same up to a constant multiple; that is, they are
parallel.
The finiteness of the number of single-virus equilibria is evident from
Theorem IV.4, which proves its uniqueness. However, Theorem V.4 and Corollary
VI.3 do not explicitly show that the coexistence equilibria are finitely many,
let alone unique. (Indeed, in Section VII we show, with the aid of simulation
results, that the coexistence equilibria are in general not unique.) For
linear, heterogeneous infection and recovery rates, Theorem 3.6 in [54] uses
novel techniques from algebraic geometry to prove that the coexistence
equilibria are finitely many for all possible values of infection and recovery
rates that do not lie in an algebraic set of measure zero. However, this
remains an open problem for general, non-linear infection and recovery rate
functions satisfying (A1)–(A5).
In summary, without our Theorems V.3 and V.4, convergence results from
literature fail to characterize a sizeable portion of the parameter space as
shown in Figure 3(a) by the ‘?’ region (part of the shaded region surrounded
by the arrows). The parameters leading to coexistence are entirely contained
in this region as well, explaining the dearth of convergence results for such
equilibria in the existing literature.
## VII Numerical Results
In this section, we present simulation results to support our theoretical
findings for the bi-virus SIS model for combinations of non-linear as well as
linear infection and recovery rates. To this end, we consider an undirected,
connected graph (103 nodes, 239 edges), called Autonomous System (AS-733),
from the SNAP repository [71]. For both the linear and non-linear bi-virus
models, we generate an additional graph, overlaid on the same set of nodes, by
modifying the original graph (AS-733-A, with
$\lambda(\mathbf{A})\\!=\\!12.16$), removing and adding edges while ensuring
connectivity. The new graph, AS-733-B, has 741 edges with
$\lambda(\mathbf{B})\\!=\\!15.53$. Since our theoretical results hold for
general graphs, we use this pair only as example graphs to numerically
demonstrate the convergence properties; similar numerical results can be
obtained for any other networks (such as social networks).
We test the convergence dynamics of the bi-virus model over a range of
combinations of linear and non-linear infection and recovery rates. To this
end, we consider three different bi-virus models, and Table I summarizes the
three cases with the corresponding infection and recovery rate functions as
shown. Note that for non-linear infection and recovery rates, we consider the
logarithmic and polynomial functions briefly mentioned in Section III, to
ensure that our three cases satisfy assumptions (A1)–(A5).
For each of the three cases, we construct combinations of parameters
($\tau_{1}$ or $\tau_{2}$ for linear rates, and $\alpha_{1}$ or $\alpha_{2}$
for non-linear rates) to develop three convergence scenarios that satisfy
the assumptions of Theorems V.3 and V.4. These three scenarios correspond to
global convergence of the bi-virus system to fixed points where (a) Virus 1 is
the surviving epidemic (which spreads on graph AS-733-A), (b) Virus 2 is the
surviving epidemic (which spreads on graph AS-733-B), and (c) both viruses
coexist (where Virus 1 spreads on graph AS-733-A and Virus 2 on AS-733-B).
Parameters corresponding to these three scenarios are provided in the table
inset in Figures 4–6(a)–(c) corresponding to the three cases.
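Scenario (a) for CASE 1 can be reproduced in miniature with an off-the-shelf ODE solver. The sketch below assumes the standard networked bi-virus SIS form; the 4-node graphs and parameter values are illustrative stand-ins (not the AS-733 data), chosen so that $\tau_{1}\lambda(\mathbf{A})\\!>\\!1$ while $\tau_{2}\lambda(\mathbf{B})\\!\leq\\!1$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical 4-node stand-ins for AS-733-A and AS-733-B.
A = np.array([[0, 1, 1, 1], [1, 0, 1, 0], [1, 1, 0, 1], [1, 0, 1, 0]], float)
B = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
tau1, tau2, delta1, delta2 = 0.9, 0.3, 1.0, 1.0  # tau1*lam(A) > 1 > tau2*lam(B)

def rhs(t, z):
    x, y = z[:4], z[4:]
    s = 1.0 - x - y                        # susceptible mass at each node
    dx = s * tau1 * (A @ x) - delta1 * x   # Virus 1: infection minus recovery
    dy = s * tau2 * (B @ y) - delta2 * y   # Virus 2: infection minus recovery
    return np.concatenate([dx, dy])

z0 = np.concatenate([0.1 * np.ones(4), 0.1 * np.ones(4)])
sol = solve_ivp(rhs, (0.0, 200.0), z0, rtol=1e-8, atol=1e-10)
xT, yT = sol.y[:4, -1], sol.y[4:, -1]      # Virus 1 persists, Virus 2 dies out
```

The trajectory converges to a single-virus equilibrium $(\mathbf{x}^{*},\mathbf{0})$, matching scenario (a).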
To visualize our system in two dimensions, we use
$avgX\\!\triangleq\\!(1/N)\sum_{i\in\mathcal{N}}x_{i}$ on the x-axis, and
$avgY\\!\triangleq\\!(1/N)\sum_{i\in\mathcal{N}}y_{i}$ on the y-axis. We plot
trajectories of the bi-virus system starting from different initial points in
the state space $D$ to observe their convergence, with red arrows representing
the trajectories’ direction of movement at various time intervals. Here, the
state space $D$ is the region that lies below the dotted line (for example, in
Figure 4), ensuring $x_{i}+y_{i}\\!\leq\\!1$ for all $i\in\mathcal{N}$, for
every initial point. To ensure that the convergences observed in our phase
plots match the conditions laid out in Theorems V.3 and V.4, we track the
eigenvalues
$\lambda(\mathbf{U})\triangleq\lambda(\mathbf{S}_{\mathbf{y}^{*}}\mathbf{J}_{G}(0)-\mathbf{J}_{R}(0))$
and
$\lambda(\mathbf{V})\triangleq\lambda(\mathbf{S}_{\mathbf{x}^{*}}\mathbf{J}_{H}(0)-\mathbf{J}_{S}(0))$.
$\lambda(\mathbf{U})$ ($\lambda(\mathbf{V})$) being positive or negative
corresponds to Virus 1 (Virus 2) surviving or dying out, respectively.
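For linear rates this eigenvalue check reduces to a short computation. The sketch below uses hypothetical 4-node graphs and a standard fixed-point iteration for the single-virus equilibria (an assumption about the numerical procedure, not taken from the paper), and confirms the sign pattern of scenario (a):

```python
import numpy as np

# Hypothetical graphs and parameters chosen so Virus 1 survives
# (lambda(U) > 0) while Virus 2 dies out (lambda(V) < 0).
A = np.array([[0, 1, 1, 1], [1, 0, 1, 0], [1, 1, 0, 1], [1, 0, 1, 0]], float)
B = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
tau1, tau2, delta1, delta2 = 0.9, 0.3, 1.0, 1.0

def single_virus_eq(tau, M, delta, iters=2000):
    # Fixed-point iteration x_i <- tau*(Mx)_i / (delta + tau*(Mx)_i);
    # converges to the all-zero state when tau*lambda(M) <= delta.
    x = 0.5 * np.ones(len(M))
    for _ in range(iters):
        f = tau * (M @ x)
        x = f / (delta + f)
    return x

x_star = single_virus_eq(tau1, A, delta1)  # Virus-1-only equilibrium (positive)
y_star = single_virus_eq(tau2, B, delta2)  # ~0, since tau2*lambda(B) <= 1
I = np.eye(4)
lam_U = np.linalg.eigvals(np.diag(1 - y_star) @ (tau1 * A) - delta1 * I).real.max()
lam_V = np.linalg.eigvals(np.diag(1 - x_star) @ (tau2 * B) - delta2 * I).real.max()
```

Here $\lambda(\mathbf{U})>0$ and $\lambda(\mathbf{V})<0$, the signature of Virus 1 surviving and Virus 2 dying out.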
(a) $\lambda(U)>0$, $\lambda(V)<0$; Virus 1 survives (b) $\lambda(U)<0$,
$\lambda(V)>0$; Virus 2 survives
(c) $\lambda(U)>0$, $\lambda(V)>0$; Both coexist
Figure 4: Phase plots for a system with linear infection and recovery rates
(CASE 1) on the AS-733 graph.
(a) $\lambda(U)>0$, $\lambda(V)<0$; Virus 1 survives (b) $\lambda(U)<0$,
$\lambda(V)>0$; Virus 2 survives
(c) $\lambda(U)>0$, $\lambda(V)>0$; Both coexist
Figure 5: Phase plots for a system with non-linear infection and linear
recovery rates (CASE 2) on the AS-733 graph.
(a) $\lambda(U)>0$, $\lambda(V)<0$; Virus 1 survives (b) $\lambda(U)<0$,
$\lambda(V)>0$; Virus 2 survives
(c) $\lambda(U)>0$, $\lambda(V)>0$; Both coexist
Figure 6: Phase plots for a system with non-linear infection and recovery
rates (CASE 3) on the AS-733 graph. Figure 7: Coexistence condition with
multiple equilibrium points.
In Figures 4–6(a)–(c), we show numerical results for the three cases,
respectively. Figures 4–6(a) and 4–6(b) show convergence to the two different
single-virus equilibria, where the parameters therein satisfy the two sets of
conditions as in Theorem V.3. Figures 4–6(c) show convergence to the
coexistence equilibria, which also satisfy the coexistence conditions as
outlined in Theorem V.4. We observe a unique coexistence equilibrium when the
viruses are competing over graphs AS-733-A and AS-733-B, for which the
eigenvalues $\lambda(\mathbf{A})$ and $\lambda(\mathbf{B})$ are significantly
different. Interestingly, we also observe multiple coexistence equilibria as
shown in Figure 7. We obtain this result by modifying the original graph
AS-733-A to create another graph, AS-733-C (259 edges, with
$\lambda(\mathbf{C})\\!=\\!12.26$), whose eigenvalue is as close as possible
to that of the original. The ‘upper left’ and
‘lower right’ coexistence fixed points characterize the set $S$ of all such
equilibria, as in Theorem V.4. This can be seen more closely in the inset in
Figure 7, where the number beside each fixed point (in red) corresponds to the
different initial starting points (in blue) of the trajectories. Thus,
convergence to set $S$ occurs globally over the state space, but exactly which
coexistence fixed point the system converges to is dependent on the initial
point. We are thus able to observe all possible convergence scenarios from
Section V-B, including multiple coexistence equilibria.
## VIII Concluding Remarks
By utilizing techniques from Monotone Dynamical Systems (MDS), we show in this
paper that a generic bi-virus epidemic model with non-linear infection and
recovery rates is monotone with respect to a specially constructed partial
ordering. This monotonicity allows us to give necessary and sufficient
conditions on the non-linear infection and recovery rates, and thus completely
characterize the entire parameter space of the bi-virus system, in contrast to
the usual Lyapunov-based approach. We bridge the gap between linear stability
properties and global convergence results (or lack thereof) for the bi-virus
model with non-linear rates (including the special case with linear rates) in
the literature, and succeed in providing a complete characterization of the
trichotomy of possible outcomes for such competing epidemics, a well-known
open problem. Our results demonstrate how powerful these alternative proof
techniques can be compared to classical Lyapunov approaches; and we note that
it may be worth exploring such monotonicity properties in other dynamics on
graphs as well, where competition is a general theme. Additionally,
establishing a rigorous relationship between the SIS ODE models with
non-linear rates as studied in this paper, and the correct probabilistic
dynamics describing these non-linear rates, is of interest in order to
complete the theoretical picture for SIS models with non-linear rates.
## References
* [1] V. Doshi, S. Mallick, and D. Y. Eun, “Competing Epidemics on Graphs - Global Convergence and Coexistence,” in _IEEE INFOCOM_ , 2021.
* [2] A. Lajmanovich and J. A. Yorke, “A deterministic model for gonorrhea in a nonhomogeneous population,” _Mathematical Biosciences_ , vol. 28, no. 3, pp. 221 – 236, 1976.
* [3] H. W. Hethcote, “The mathematics of infectious diseases,” _SIAM Review_ , vol. 42, no. 4, pp. 599–653, 2000.
* [4] M. Garetto, W. Gong, and D. Towsley, “Modeling malware spreading dynamics,” in _IEEE INFOCOM_ , San Francisco, CA, 2003.
* [5] L.-X. Yang, X. Yang, J. Liu, Q. Zhu, and C. Gan, “Epidemics of computer viruses: a complex-network approach,” _Applied Mathematics and Computation_ , vol. 219, no. 16, pp. 8705–8717, 2013.
* [6] S. Hosseini and M. A. Azgomi, “A model for malware propagation in scale-free networks based on rumor spreading process,” _Computer Networks_ , vol. 108, pp. 97–107, 2016.
  * [7] K. R. Apt and E. Markakis, “Diffusion in social networks with competing products,” in _International Symposium on Algorithmic Game Theory_ , 2011.
* [8] B. A. Prakash, A. Beutel, R. Rosenfeld, and C. Faloutsos, “Winner takes all: competing viruses or ideas on fair-play networks,” in _ACM World Wide Web_ , 2012.
* [9] S. F. Ruf, K. Paarporn, P. E. Pare, and M. Egerstedt, “Dynamics of opinion-dependent product spread,” in _IEEE Conference on Decision and Control_ , Melbourne, Australia, 2017.
* [10] D. Trpevski, W. K. Tang, and L. Kocarev, “Model for rumor spreading over networks,” _Physical Review E_ , vol. 81, no. 5, p. 056102, 2010.
* [11] L. Zhao, H. Cui, X. Qiu, X. Wang, and J. Wang, “SIR rumor spreading model in the new media age,” _Physica A: Statistical Mechanics and its Applications_ , vol. 392, no. 4, pp. 995–1003, 2013.
* [12] X. Lin, Q. Jiao, and L. Wang, “Opinion propagation over signed networks: Models and convergence analysis,” _IEEE Transactions on Automatic Control_ , vol. 64, no. 8, pp. 3431–3438, 2018.
* [13] I. Koprulu, Y. Kim, and N. B. Shroff, “Battle of opinions over evolving social networks,” _IEEE/ACM Transactions on Networking_ , vol. 27, no. 2, pp. 532–545, 2019.
* [14] S. Banerjee, A. Chatterjee, and S. Shakkottai, “Epidemic thresholds with external agents,” in _IEEE INFOCOM_ , Toronto, ON, 2014.
* [15] A. Ganesh, L. Massoulie, and D. Towsley, “The effect of network topology on the spread of epidemics,” in _IEEE INFOCOM_ , Miami, FL, 2005.
  * [16] M. Draief and L. Massoulié, _Epidemics and Rumours in Complex Networks_ , 1st ed. Cambridge University Press, 2010.
  * [17] F. Darabi Sahneh, C. Scoglio, and P. Van Mieghem, “Generalized epidemic mean-field model for spreading processes over multilayer complex networks,” _IEEE/ACM Transactions on Networking_ , vol. 21, no. 5, pp. 1609–1620, 2013.
  * [18] F. D. Sahneh and C. Scoglio, “Competitive epidemic spreading over arbitrary multilayer networks,” _Physical Review E_ , vol. 89, no. 6, p. 062817, 2014.
* [19] A. Santos, J. M. F. Moura, and J. M. F. Xavier, “Bi-virus SIS epidemics over networks: Qualitative analysis,” _IEEE Transactions on Network Science and Engineering_ , vol. 2, no. 1, pp. 17–29, 2015.
* [20] L.-X. Yang, X. Yang, and Y. Y. Tang, “A bi-virus competing spreading model with generic infection rates,” _IEEE Transactions on Network Science and Engineering_ , vol. 5, no. 1, pp. 2–13, 2017.
* [21] J. Liu, P. E. Paré, A. Nedich, C. Y. Tang, C. L. Beck, and T. Basar, “Analysis and control of a continuous-time bi-virus model,” _IEEE Transactions on Automatic Control_ , 2019.
* [22] P. Van Mieghem, “The n-intertwined SIS epidemic network model,” _Computing_ , vol. 93, no. 2–4, p. 147–169, 2011.
  * [23] J. Omic and P. Van Mieghem, “Epidemic spreading in networks—variance of the number of infected nodes,” _Delft University of Technology, Report_ , 2009.
* [24] P. Van Mieghem, J. Omic, and R. Kooij, “Virus spread in networks,” _IEEE/ACM Transactions on Networking_ , vol. 17, no. 1, pp. 1–14, 2009.
* [25] A. Gray, D. Greenhalgh, L. Hu, X. Mao, and J. Pan, “A stochastic differential equation SIS epidemic model,” _SIAM Journal on Applied Mathematics_ , vol. 71, no. 3, pp. 876–902, 2011.
* [26] C. Li, R. van de Bovenkamp, and P. Van Mieghem, “Susceptible-infected-susceptible model: A comparison of n-intertwined and heterogeneous mean-field approximations,” _Physical Review E_ , vol. 86, no. 2, p. 026116, 2012.
* [27] Y. Wang, Z. Jin, Z. Yang, Z.-K. Zhang, T. Zhou, and G.-Q. Sun, “Global analysis of an SIS model with an infective vector on complex networks,” _Nonlinear Analysis: Real World Applications_ , vol. 13, no. 2, pp. 543–557, 2012.
* [28] D. Guo, S. Trajanovski, R. van de Bovenkamp, H. Wang, and P. Van Mieghem, “Epidemic threshold and topological structure of susceptible-infectious-susceptible epidemics in adaptive networks,” _Physical Review E_ , vol. 88, no. 4, p. 042802, 2013.
* [29] M. Benaïm and M. W. Hirsch, “Differential and stochastic epidemic models,” _Fields Institute communications_ , vol. 21, pp. 31–44, 1999.
* [30] Y. Wang, G. Xiao, and J. Liu, “Dynamics of competing ideas in complex social systems,” _New Journal of Physics_ , vol. 14, no. 1, p. 013015, 2012.
* [31] P. E. Paré, J. Liu, C. L. Beck, A. Nedić, and T. Başar, “Multi-competitive viruses over static and time-varying networks,” in _IEEE American Control Conference_ , Seattle, WA, 2017.
* [32] A. Janson, S. Gracy, P. E. Paré, H. Sandberg, and K. H. Johansson, “Analysis of a Networked SIS Multi-Virus Model with a Shared Resource,” _IFAC-PapersOnLine_ , vol. 53, no. 5, pp. 797–802, 2020, 3rd IFAC Workshop on Cyber-Physical and Human Systems CPHS 2020.
* [33] A. Janson, S. Gracy, P. E. Paré, H. Sandberg, and K. H. Johansson, “Networked Multi-Virus Spread with a Shared Resource: Analysis and Mitigation Strategies,” _ArXiv_ , vol. abs/2011.07569, 2020.
* [34] P. E. Paré, J. Liu, C. L. Beck, A. Nedić, and T. Başar, “Multi-competitive viruses over time-varying networks with mutations and human awareness,” _Autom._ , vol. 123, p. 109330, 2021.
  * [35] S. Bansal, B. Grenfell, and L. Meyers, “When individual behaviour matters: homogeneous and network models in epidemiology,” _Journal of the Royal Society Interface_ , vol. 4, no. 16, pp. 879–891, 2007.
* [36] M. E. Hochberg, “Non-linear transmission rates and the dynamics of infectious disease,” _Journal of Theoretical Biology_ , vol. 153, no. 3, pp. 301–321, 1991.
* [37] H. Hu, K. Nigmatulina, and P. Eckhoff, “The scaling of contact rates with population density for the infectious disease models,” _Mathematical Biosciences_ , vol. 244, no. 2, pp. 125–134, 2013.
* [38] N. D. Barlow, “Non-linear transmission and simple models for bovine tuberculosis,” _Journal of Animal Ecology_ , vol. 69, no. 4, pp. 703–713, 2000.
* [39] C. Gan, X. Yang, W. Liu, Q. Zhu, and X. Zhang, “An epidemic model of computer viruses with vaccination and generalized nonlinear incidence rate,” _Applied Mathematics and Computation_ , vol. 222, pp. 265–274, 2013.
  * [40] W. Liu, H. Hethcote, and S. Levin, “Dynamical behavior of epidemiological models with nonlinear incidence rates,” _Journal of Mathematical Biology_ , vol. 25, no. 4, pp. 359–380, 1987.
* [41] L.-X. Yang and X. Yang, “The impact of nonlinear infection rate on the spread of computer virus,” _Nonlinear Dynamics_ , vol. 82, 05 2015.
  * [42] H. Yuan, G. Liu, and G. Chen, “On modeling the crowding and psychological effects in network-virus prevalence with nonlinear epidemic model,” _Applied Mathematics and Computation_ , vol. 219, pp. 2387–2397, 2012.
* [43] S. Ruan and W. Wang, “Dynamical behavior of an epidemic model with a nonlinear incidence rate,” _Journal of Differential Equations_ , vol. 188, no. 1, pp. 135–163, 2003.
* [44] H. L. Smith, “Monotone dynamical systems: Reflections on new advances and applications,” _Discrete and Continuous Dynamical Systems - A_ , vol. 37, p. 485, 2017.
* [45] P. De Leenheer and D. Aeyels, “Stability properties of equilibria of classes of cooperative systems,” _IEEE Transactions on Automatic Control_ , vol. 46, no. 12, pp. 1996–2001, 2001.
* [46] D. Angeli and E. D. Sontag, “Monotone control systems,” _IEEE Transactions on Automatic Control_ , vol. 48, no. 10, pp. 1684–1698, 2003.
* [47] V. S. Bokharaie, O. Mason, and M. Verwoerd, “D-stability and delay-independent stability of homogeneous cooperative systems,” _IEEE Transactions on Automatic Control_ , vol. 55, no. 12, pp. 2882–2885, 2010.
* [48] L. Van Hien and H. Trinh, “Exponential stability of two-dimensional homogeneous monotone systems with bounded directional delays,” _IEEE Transactions on Automatic Control_ , vol. 63, no. 8, pp. 2694–2700, 2018.
* [49] D. Efimov, T. Raissi, and A. Zolghadri, “Control of nonlinear and lpv systems: Interval observer-based framework,” _IEEE Transactions on Automatic Control_ , vol. 58, no. 3, pp. 773–778, 2013.
* [50] F. Forni and R. Sepulchre, “Differentially positive systems,” _IEEE Transactions on Automatic Control_ , vol. 61, no. 2, pp. 346–359, 2016.
* [51] M. W. Hirsch, “Systems of differential equations which are competitive or cooperative: I. limit sets,” _SIAM Journal on Mathematical Analysis_ , vol. 13, no. 2, pp. 167–179, 1982.
* [52] C. Altafini, “Consensus problems on networks with antagonistic interactions,” _IEEE Transactions on Automatic Control_ , vol. 58, no. 4, pp. 935–946, 2013.
* [53] M. D. Marco, M. Forti, M. Grazzini, and L. Pancioni, “Limit set dichotomy and multistability for a class of cooperative neural networks with delays,” _IEEE Transactions on Neural Networks and Learning Systems_ , vol. 23, no. 9, pp. 1473–1485, 2012.
  * [54] M. Ye, B. Anderson, and J. Liu, “Convergence and equilibria analysis of a networked bivirus epidemic model,” _arXiv preprint arXiv:2111.07507_ , 2021.
* [55] C. D. Meyer, _Matrix analysis and applied linear algebra_. SIAM, 2000, vol. 71.
* [56] A. Berman and R. J. Plemmons, _Nonnegative Matrices in the Mathematical Sciences_. SIAM, 1994.
* [57] J. Yeh, _Real Analysis_ , 2nd ed. WORLD SCIENTIFIC, 2006.
* [58] M. W. Hirsch, “Systems of differential equations that are competitive or cooperative: II. convergence almost everywhere,” _SIAM Journal on Mathematical Analysis_ , vol. 16, no. 3, pp. 423–439, 1985.
* [59] ——, “Systems of differential equations which are competitive or cooperative: III. competing species,” _Nonlinearity_ , vol. 1, no. 1, pp. 51–71, 1988.
* [60] ——, “System of differential equations that are competitive or cooperative: IV. structural stability in three-dimensional systems,” _SIAM Journal on Mathematical Analysis_ , vol. 21, no. 5, p. 1225–1234, 1990.
* [61] ——, “Systems of differential equations that are competitive or cooperative: V. convergence in 3-dimensional systems,” _Journal of Differential Equations_ , vol. 80, no. 1, pp. 94 – 106, 1989.
* [62] H. L. Smith, “Systems of ordinary differential equations which generate an order preserving flow. a survey of results,” _SIAM Review_ , vol. 30, no. 1, pp. 87–113, 1988.
* [63] H. L. Smith and H. R. Thieme, “Quasi convergence and stability for strongly order-preserving semiflows,” _SIAM Journal on Mathematical Analysis_ , vol. 21, no. 3, pp. 673–692, 1990.
* [64] ——, “Convergence for strongly order-preserving semiflows,” _SIAM Journal on Mathematical Analysis_ , vol. 22, no. 4, pp. 1081–1101, 1991.
* [65] M. W. Hirsch and H. L. Smith, “Generic Quasi-convergence for Strongly Order Preserving Semiflows: A New Approach,” _Journal of Dynamics and Differential Equations_ , vol. 16, pp. 433–439, 2004.
* [66] H. L. Smith, _Monotone dynamical systems: An introduction to the theory of competitive and cooperative systems_. American Mathematical Society, 2014.
  * [67] ——, “Is my system of ODEs cooperative?” 2012. [Online]. Available: https://math.la.asu.edu/~halsmith/identifyMDS.pdf
* [68] U. Krause and P. Ranft, “A limit set trichotomy for monotone nonlinear dynamical systems,” _Nonlinear Analysis: Theory, Methods & Applications_, vol. 19, no. 4, pp. 375 – 392, 1992.
* [69] M. Ye, J. Liu, B. D. Anderson, and M. Cao, “Applications of the Poincare-Hopf Theorem: Epidemic Models and Lotka-Volterra Systems,” _IEEE Transactions on Automatic Control_ , 2021.
* [70] V. Doshi, S. Mallick, and D. Y. Eun, “Convergence of bi-virus epidemic models with non-linear rates on networks - a monotone dynamical systems approach: Supplementary material.”
* [71] J. Leskovec and A. Krevl, “SNAP Datasets: Stanford large network dataset collection,” http://snap.stanford.edu/data, jun 2014.
* [72] L. Perko, _Differential Equations and Dynamical Systems_ , 3rd ed. Springer Science & Business Media, 2001.
* [73] S. Ross, _Stochastic Processes_. Wiley, 1996.
## Appendix A Basic Definitions and Results from Matrix Theory
We first provide some well-known results concerning irreducible square
matrices.
###### Definition A.1
[55] A square matrix $\mathbf{A}$ is reducible if there exists a permutation
matrix $\mathbf{P}$ such that $\mathbf{P}^{T}\mathbf{A}\mathbf{P}$ is a block
upper-triangular matrix. If no such permutation matrix exists, we say that
$\mathbf{A}$ is irreducible.
One way to check whether a matrix is irreducible is to inspect the underlying
directed graph, which has an edge from node $i$ to node $j$ if and only if
$a_{ij}\neq 0$. The matrix $\mathbf{A}$ is irreducible if and only if this
directed graph is strongly connected.
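This check is easy to mechanize; a small sketch using SciPy's strongly-connected-components routine (the two test matrices are illustrative):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def is_irreducible(A):
    # A square matrix is irreducible iff the directed graph with an edge
    # i -> j whenever a_ij != 0 is strongly connected, i.e. it has exactly
    # one strongly connected component.
    n_comp, _ = connected_components(csr_matrix(A != 0), directed=True,
                                     connection='strong')
    return n_comp == 1

cycle = np.array([[0, 1, 0],
                  [0, 0, 1],
                  [1, 0, 0]])   # directed 3-cycle: strongly connected
upper = np.array([[1, 1],
                  [0, 1]])      # block upper-triangular: reducible
```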
###### Definition A.2
[56] An M-matrix is a matrix with non-positive off-diagonal elements whose
eigenvalues all have non-negative real parts.
We use the following well-known result for non-negative, irreducible matrices
extensively throughout the paper.
###### Theorem A.3
(Perron-Frobenius)[55] Let $\mathbf{A}$ be a non-negative, irreducible matrix.
Then, $\lambda(\mathbf{A})$ is a strictly positive real number, and the
corresponding eigenvector $\mathbf{v}$ where
$\mathbf{A}\mathbf{v}=\lambda(\mathbf{A})\mathbf{v}$ is also strictly
positive. We call $\lambda(\mathbf{A})>0$ and $\mathbf{v}\gg\mathbf{0}$ the PF
eigenvalue and the PF eigenvector of the matrix, respectively.$\hfill\square$
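A quick numerical illustration with NumPy. The $3\times 3$ matrix is an arbitrary non-negative irreducible example; its characteristic polynomial is $t^{3}-t-6$, so its PF eigenvalue is exactly $2$:

```python
import numpy as np

# Non-negative irreducible matrix (a weighted cycle plus a chord).
A = np.array([[0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [3.0, 1.0, 0.0]])

w, V = np.linalg.eig(A)
k = np.argmax(w.real)       # the PF eigenvalue has the largest real part
pf_val = w[k].real
pf_vec = V[:, k].real
pf_vec = pf_vec / pf_vec[np.argmax(np.abs(pf_vec))]  # fix sign and scale
```

As Perron–Frobenius guarantees, `pf_val` is a strictly positive real number and `pf_vec` is strictly positive (here proportional to $(1,1,2)$).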
The following result is on irreducible M-matrices.
###### Lemma A.4
[56] Given an irreducible and non-singular M-matrix $\mathbf{M}$, its inverse
$\mathbf{M}^{-1}$ has strictly positive entries.$\hfill\square$
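As a sanity check, any matrix of the form $s\mathbf{I}-\mathbf{N}$ with $\mathbf{N}\geq 0$ irreducible and $s>\rho(\mathbf{N})$ is such an M-matrix; a minimal NumPy example (the particular $\mathbf{N}$ and $s$ are arbitrary choices):

```python
import numpy as np

# M = s*I - N with N >= 0 irreducible and s > rho(N) is an irreducible,
# non-singular M-matrix; Lemma A.4 says its inverse is strictly positive.
N = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])    # directed 3-cycle, rho(N) = 1
M = 1.5 * np.eye(3) - N
Minv = np.linalg.inv(M)            # every entry strictly positive
```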
## Appendix B Definitions and results from ODE literature
We use the following definitions and results from the ODE literature
throughout the paper.
###### Definition B.1
The ‘flow’ of a dynamical system in a metric space $X$ is a map
$\phi:X\\!\times\\!\mathbb{R}\\!\to\\!X$ such that for any $x_{0}\\!\in\\!X$
and all $s,t\in\mathbb{R}$, we have $\phi_{0}(x_{0})\\!=\\!x_{0}$ and
$\phi_{s}\left(\phi_{t}(x_{0})\right)\\!=\\!\phi_{t+s}(x_{0})$.
###### Definition B.2
A flow $\phi:X\times\mathbb{R}\to X$ is positively invariant in set $P\subset
X$ if for every $x_{0}\in P$, $\phi_{t}(x_{0})\in P$ for all $t>0$.
###### Definition B.3
Given a flow $\phi$, an ‘equilibrium’ or a ‘fixed point’ of the system is a
point $x^{*}\in X$ such that $\\{x^{*}\\}$ is a positively invariant set. For
the ODE system $\dot{x}=F(x)$, we have $F(x^{*})=0$ at the equilibrium.
For an equilibrium point $x^{*}\in X$ we say that the trajectory starting at
$x_{0}\in X$ converges to $x^{*}$ if $\lim_{t\to\infty}\phi_{t}(x_{0})=x^{*}$.
The following result holds for stable fixed points of an ODE system
$\dot{x}=F(x)$ as in Definition B.3.
###### Proposition B.4
[72] Let $\mathbf{J}F(x_{0})$ be the Jacobian of the ODE system evaluated at a
fixed point $x_{0}$ and assume it to be an irreducible matrix. Let
$\lambda\left(\mathbf{J}F(x_{0})\right)<0$ and suppose the corresponding
eigenvector $\mathbf{v}$ is strictly positive ($\mathbf{v}\gg\mathbf{0}$).
Then, there exists an $\epsilon>0$ such that $F(x_{0}+r\mathbf{v})\ll 0$ for
all $r\in(0,\epsilon]$ and $F(x_{0}+r\mathbf{v})\gg 0$ for all
$r\in[-\epsilon,0)$. (In other words, the eigenvector $\mathbf{v}$ is tangent
to the stable manifold of the ODE system at the stable fixed point
$x_{0}$.)$\hfill\square$
## Appendix C DFR Processes as Non-Linear Recovery Rates
In this appendix, we form the connection between failure rates from
reliability theory [73], and the infection duration at any node in SIS type
epidemics. To this end, we start by formally defining the term failure rate.
###### Definition C.1
[73] Let $T>0$ be any continuous random variable with distribution
$F_{T}(s)=\mathbb{P}(T\leq s)$, and density function $f_{T}(s)$ for all $s>0$,
with $\bar{F}_{T}(s)=1-F_{T}(s)=\mathbb{P}(T>s)$. Then, the failure rate at
any given time $s>0$ is defined as
$r_{T}(s)\triangleq\frac{f_{T}(s)}{\bar{F}_{T}(s)}.$ (15)
We say $T$ has a decreasing/increasing failure rate (DFR/IFR) if $r_{T}(s)$ is
a decreasing/increasing function of $s>0$.
When $T$ is the lifetime of a system, the DFR case corresponds to the system
aging negatively. This means that as time elapses, the residual time (time
till the system fails) is more likely to increase rather than decrease. $T$
could also have an interpretation in the context of node recovery. For the
linear SIS epidemic model as in (1), consider an infected node
$i\in\mathcal{N}$, and define $T$ as the (random) time taken for node $i$ to
recover, with $f_{T}(s)$ and $\bar{F}_{T}(s)$ as in Definition C.1. Loosely
speaking, we can ignore the infection rate terms in (1) to take a closer look
at the recovery process via the ODE
$\dot{x}_{i}(s)=-\delta x_{i}(s),$ (16)
with the initial condition $x_{i}(0)=1$ (implying that node $i$ is last
infected at time $s=0$). The ODE (16) has an exact solution for all $s>0$,
given by $x_{i}(s)=e^{-\delta s}.$ This solution allows us to interpret
$x_{i}$ as the complementary cumulative distribution function (CCDF) of an
exponential random variable with rate $\delta>0$ (when $T\sim\exp(\delta)$, we
have $\bar{F}_{T}(s)=P(T>s)=e^{-\delta s}$). Using this
interpretation, we have $x_{i}(s)=P(T>s)=\bar{F}_{T}(s)$, and
$-\dot{x}_{i}(s)=f_{T}(s)$. (16) can then be rewritten as
$r_{T}(s)=\frac{-\dot{x}_{i}(s)}{x_{i}(s)}=\delta,$
for any $s>0$. $T$ is thus exponentially distributed, and has a constant
failure rate (it is both DFR and IFR).
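This constant-failure-rate calculation is easy to verify numerically ($\delta=2$ is an arbitrary choice):

```python
import numpy as np

delta = 2.0
s = np.linspace(0.1, 5.0, 50)
x = np.exp(-delta * s)           # x_i(s) = Fbar_T(s), exact solution of (16)
f = delta * np.exp(-delta * s)   # f_T(s) = -dx_i/ds
r_T = f / x                      # failure rate r_T(s) = f_T(s)/Fbar_T(s)
```

The computed `r_T` is identically `delta`, confirming the constant failure rate of the exponential distribution.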
We now consider the case where the random variable $T$ is defined for the more
general SIS epidemic model with non-linear recovery rate $q_{i}(x_{i})$ for
node $i$ (this is the special case where $q_{i}$ is a function of $x_{i}$
only, not of $x_{j}$ for neighbors $j$ of node $i$). Ignoring
the infection rate terms in (5) as before, we obtain
$\dot{x}_{i}(s)=-q_{i}\left(x_{i}(s)\right),$ (17)
retaining the previous interpretation of $x_{i}$ as the CCDF of $T$. This can
be further rearranged to obtain an expression for the failure rate as
$r_{T}(s)=\frac{-\dot{x}_{i}(s)}{x_{i}(s)}=\frac{q_{i}\left(x_{i}(s)\right)}{x_{i}(s)}$
for any $s>0$. From Definition C.1 we know $T$ is DFR if $r_{T}(s)$ is
decreasing in $s>0$. Supposing $q_{i}$ is such that $T$ is indeed DFR,
$\log(r_{T}(s))$ also decreases in $s$, and we get
$\frac{d}{ds}\log\left(r_{T}(s)\right)=\frac{q_{i}^{\prime}(x_{i}(s))\dot{x}_{i}(s)}{q_{i}(x_{i}(s))}-\frac{\dot{x}_{i}(s)}{x_{i}(s)}\leq
0,$
where $q_{i}^{\prime}(x_{i}(s))$ denotes the derivative with respect to
$x_{i}$. Since $\dot{x}_{i}(s)=-q_{i}(x_{i}(s))$ from (17) and
$q_{i}^{\prime}(x_{i}(s))\geq 0$ from (A3), rearranging the previous equation
gives us the following condition for $T$ to be DFR:
$x_{i}q_{i}^{\prime}(x_{i})-q_{i}(x_{i})\geq 0.$ (18)
In (18), the $(s)$ notation has been suppressed for clarity. Since
$q_{i}(0)=0$, the convexity of $q_{i}$ with respect to $x_{i}$ implies (18).
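Condition (18) is easy to probe numerically for candidate recovery-rate functions. The sketch below (illustrative, assuming the single-node reduction above where $q_{i}$ depends on $x_{i}$ alone) checks $x\,q_{i}^{\prime}(x)-q_{i}(x)\geq 0$ on a grid: a convex rate with $q_{i}(0)=0$ passes, the linear rate sits exactly on the boundary, and a concave rate fails.

```python
def dfr_condition(q, dq, xs):
    # checks x * q'(x) - q(x) >= 0, i.e. condition (18), on a grid of points
    return all(x * dq(x) - q(x) >= -1e-12 for x in xs)

delta = 1.0
xs = [0.01 * k for k in range(1, 100)]

# convex recovery rate q(x) = delta * x**2 with q(0) = 0: condition (18) holds
assert dfr_condition(lambda x: delta * x**2, lambda x: 2 * delta * x, xs)
# linear recovery rate q(x) = delta * x: boundary case, (18) holds with equality
assert dfr_condition(lambda x: delta * x, lambda x: delta, xs)
# concave q(x) = delta * sqrt(x): (18) fails, so T is not DFR by this criterion
assert not dfr_condition(lambda x: delta * x**0.5,
                         lambda x: 0.5 * delta * x**-0.5, xs)
```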
Roughly speaking, the DFR case (which also includes linear recovery rates as
in (1)) corresponds to a subclass of the recovery rate functions
$q_{i}(\mathbf{x})$ satisfying assumptions (A1)–(A5). Even though the above
steps may not be exact, they provide intuition on how infections which fester
and grow worse with time form part of our modelling assumptions in Section III.
## Appendix D Results from MDS and Cooperative Systems
###### Definition D.1
[51, 62, 44] A flow $\phi$ is said to be monotone if for all
$\mathbf{x},\mathbf{y}\in\mathbb{R}^{n}$ such that
$\mathbf{x}\leq_{K}\mathbf{y}$ and any $t\geq 0$, we have
$\phi_{t}(\mathbf{x})\leq_{K}\phi_{t}(\mathbf{y}).$
If the flow represents the solution of an ODE system, we say that the ODE
system is co-operative.
###### Definition D.2
Consider the system (9) and let
$\mathbf{J}F(\mathbf{x})\\!\triangleq\\!\left[{df_{i}(\mathbf{x})}/{dx_{j}}\right]$
be the Jacobian of the right hand side evaluated at any point
$\mathbf{x}\\!\in\\!\mathbb{R}^{n}$. We say that (9) is an irreducible ODE in a
set $D\subset\mathbb{R}^{n}$ if for all $\mathbf{x}\in D$,
$\mathbf{J}F(\mathbf{x})$ is an irreducible matrix.
###### Definition D.3
[62, 66, 44] The flow $\phi$ is said to be strongly monotone if it is
monotone, and for all $\mathbf{x},\mathbf{y}\in\mathbb{R}^{n}$ such that
$\mathbf{x}<_{K}\mathbf{y}$, and time $t>0$, we have
$\phi_{t}(\mathbf{x})\ll_{K}\phi_{t}(\mathbf{y}).$
###### Theorem D.4
[62, 66, 44] Let (9) be irreducible and co-operative in some set
$D\subset\mathbb{R}^{n}$. Then the solution $\phi$ (restricted to $t\geq 0$)
is strongly monotone.$\hfill\square$
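The order-preservation property in Definition D.1 and Theorem D.4 can be observed numerically. The sketch below is illustrative only: a two-node linear SIS instance (a cooperative, irreducible system) integrated with a small forward-Euler step, for which the discrete update is also order-preserving; the graph and parameters are arbitrary choices. Two componentwise-ordered initial conditions remain ordered along their trajectories.

```python
import numpy as np

def sis_step(x, A, tau, delta, dt):
    # Euler step of dx/dt = (1 - x) * (tau * A x) - delta * x  (linear SIS)
    return x + dt * ((1 - x) * (tau * A @ x) - delta * x)

A = np.array([[0., 1.], [1., 0.]])   # two connected nodes (irreducible)
tau, delta, dt = 2.0, 1.0, 0.01

x = np.array([0.1, 0.2])             # x <= y componentwise at t = 0
y = np.array([0.3, 0.4])
for _ in range(2000):
    x = sis_step(x, A, tau, delta, dt)
    y = sis_step(y, A, tau, delta, dt)
    # monotonicity: the componentwise order is preserved by the flow
    assert np.all(x <= y + 1e-12)
```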
As part of the main result of monotone dynamical systems, trajectories of
strongly monotone systems, starting from almost anywhere (in the measure
theoretic sense) in the state space, converge to the set of equilibrium points
[58, 64, 65, 44]. However, systems are often strongly monotone only in the
interior of the state space rather than on all of it. In
such cases, the following results are useful.
###### Proposition D.5
(Proposition 3.2.1 in [66]) Consider the ODE system (9) which is cooperative
in a compact set $D\subset\mathbb{R}^{n}$ with respect to some cone-ordering,
and let $<_{r}$ stand for any of the order relations $\leq_{K},<_{K},\ll_{K}$.
Then,
$P_{+}\\!\triangleq\\!\left\\{\mathbf{x}\\!\in\\!D~{}|~{}\mathbf{0}\\!<_{r}\\!F(\mathbf{x})\right\\}$
and
$P_{-}\\!\triangleq\\!\left\\{\mathbf{x}\\!\in\\!D~{}|~{}F(\mathbf{x})\\!<_{r}\\!\mathbf{0}\right\\}$
are positively invariant, and the trajectory
$\left\\{\phi_{t}(\mathbf{x})\right\\}_{t\geq 0}$ for any point
$\mathbf{x}\\!\in\\!P_{+}$ or $\mathbf{x}\\!\in\\!P_{-}$ converges to an
equilibrium.$\hfill\square$
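Proposition D.5 can likewise be observed numerically: a point where $F(\mathbf{x})\gg\mathbf{0}$ generates a monotonically increasing trajectory that converges to an equilibrium. The sketch below is illustrative only, assuming a supercritical linear SIS instance on the complete graph $K_{3}$ with arbitrary parameters, integrated by forward Euler.

```python
import numpy as np

A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])  # complete graph K3
tau, delta, dt = 1.0, 1.0, 0.01
F = lambda x: (1 - x) * (tau * A @ x) - delta * x

x = np.full(3, 0.05)
assert np.all(F(x) > 0)                    # x lies in the set P_+
prev = x.copy()
for _ in range(3000):
    x = x + dt * F(x)
    assert np.all(x >= prev - 1e-12)       # trajectory is non-decreasing
    prev = x.copy()
# converges to the symmetric equilibrium x* = 0.5 of this instance
assert np.all(np.abs(x - 0.5) < 1e-2)
```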
###### Theorem D.6
(Theorem 4.3.3 in [66]) Let (9) be cooperative (with respect to some cone-
ordering $\leq_{K}$) in a compact set $D\subset\mathbb{R}^{n}$ and let
$\mathbf{x}_{0}\in D$ be an equilibrium point. Suppose that
$s\triangleq\lambda(\mathbf{J}F(\mathbf{x}_{0}))>0$ (i.e. $\mathbf{x}_{0}$ is
an unstable fixed point) and there is an eigenvector
$\mathbf{v}\gg_{K}\mathbf{0}$ such that
$\mathbf{J}F(\mathbf{x}_{0})\mathbf{v}=s\mathbf{v}$. Then, there exists
$\epsilon_{0}\in(0,\epsilon]$ and another equilibrium point $\mathbf{x}_{e}$
such that for each $r\in(0,\epsilon_{0}]$, the solution
$\phi_{t}(\mathbf{x}_{r})$ has the following properties:
* (1)
$\mathbf{x}_{r}\\!\ll_{K}\\!\phi_{t_{1}}(\mathbf{x}_{r})\\!\ll_{K}\\!\phi_{t_{2}}(\mathbf{x}_{r})\\!\ll_{K}\\!\mathbf{x}_{e}$,
for any $0\\!<\\!t_{1}\\!<\\!t_{2}$.
* (2)
${d\phi_{t}(\mathbf{x}_{r})}/{dt}\gg_{K}\mathbf{0}$, for any $t>0$.
* (3)
$\phi_{t}(\mathbf{x}_{r})\rightarrow\mathbf{x}_{e}$, as
$t\rightarrow\infty$.$\hfill\square$
## Appendix E Proofs of the results in Section IV
###### Proof:
To prove that system (6) is co-operative with respect to the positive orthant,
we show that it satisfies Kamke’s condition in (10). Differentiating the right
hand side of (5) with respect to $x_{j}$ for $j\neq i$, we get
$\displaystyle\frac{\partial\bar{f}_{i}(\mathbf{x})}{\partial
x_{j}}=(1-x_{i})\frac{\partial f_{i}(\mathbf{x})}{\partial
x_{j}}-\frac{\partial q_{i}(\mathbf{x})}{\partial
x_{j}}=(1-x_{i})\frac{\partial f_{i}(\mathbf{x})}{\partial x_{j}},$
where the last equality holds because $\partial q_{i}/\partial x_{j}=0$ for
$i\neq j$.
This corresponds to the $(ij)$’th off-diagonal entry of the Jacobian
$\mathbf{J}_{\bar{F}}(\mathbf{x})$ evaluated at $\mathbf{x}\in[0,1]^{N}$. It
is non-negative for any $i\neq j\in\mathcal{N}$ since $(1-x_{i})\geq 0$ and
due to assumption (A3), and the ODE (6) is therefore co-operative in
$[0,1]^{N}$ with respect to the regular cone ordering.
From assumption (A3), $\mathbf{J}_{\bar{F}}(\mathbf{x})_{ij}$ is also strictly
positive for any $\mathbf{x}\in(0,1)^{N}$ whenever $a_{ij}>0$. This means that
$\mathbf{J}_{\bar{F}}(\mathbf{x})$, and as a consequence the ODE system, is
irreducible for any $\mathbf{x}\in(0,1)^{N}$. ∎
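The Kamke condition established in this proof can be spot-checked with a finite-difference Jacobian. The sketch below is illustrative only, assuming the linear SIS special case $\bar{F}(\mathbf{x})=\text{diag}(\mathbf{1}-\mathbf{x})\tau\mathbf{A}\mathbf{x}-\delta\mathbf{x}$ on a small path graph with arbitrary parameters; it verifies that all off-diagonal Jacobian entries are non-negative at an interior point.

```python
import numpy as np

def rhs(x, A, tau, delta):
    # \bar F(x) for the linear SIS model: (1 - x) * (tau * A x) - delta * x
    return (1 - x) * (tau * A @ x) - delta * x

def num_jacobian(f, x, h=1e-7):
    # central-difference Jacobian of f evaluated at x
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n); e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # a path graph
x = np.array([0.2, 0.5, 0.7])
J = num_jacobian(lambda z: rhs(z, A, 1.3, 0.8), x)

# Kamke condition: all off-diagonal entries of the Jacobian are non-negative
off = J - np.diag(np.diag(J))
assert np.all(off >= -1e-6)
```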
To derive the convergence properties of the non-linear $SIS$ model, we make
use of a result from [68], rewritten below in a simpler form suitable for our
setting.
###### Theorem E.1
(Theorem 4 in [68]) Consider a generic ODE system (9) that is positively
invariant in some subset
$S\subset\mathbb{R}^{N}_{+}$, and let $\mathbf{J}_{\bar{F}}$ stand for its
Jacobian matrix. Suppose that:
* (C1)
$f_{i}(\mathbf{x})\geq 0$ for all $\mathbf{x}\geq 0$ with $x_{i}=0$;
* (C2)
for all $\mathbf{x}\gg\mathbf{0}$ in $S$, $\alpha\in(0,1)$, it satisfies
$\mathbf{J}_{\bar{F}}(\mathbf{x})_{ij}\leq\mathbf{J}_{\bar{F}}(\alpha\mathbf{x})_{ij}$
for all $i,j\in\mathcal{N}$, with strict inequality for at least one pair of
$i,j$;
* (C3)
for all $\mathbf{u}\ll\mathbf{w}$ in $S$, it satisfies
$\mathbf{J}_{\bar{F}}(\mathbf{w})\leq\mathbf{J}_{\bar{F}}(\mathbf{u})$;
* (C4)
it is co-operative in $S$ with respect to the regular ordering relation, and
irreducible in $\text{Int}(S)$.
Then, exactly one of the following outcomes occurs:
1. (i)
$\phi_{t}(\mathbf{x})$ is unbounded for all $\mathbf{x}\in
S\setminus\\{\mathbf{0}\\}$;
2. (ii)
$\phi_{t}(\mathbf{x})\rightarrow\mathbf{0}$ as $t\rightarrow\infty$, for all
$\mathbf{x}\in S\setminus\\{\mathbf{0}\\}$;
3. (iii)
There exists a unique, strictly positive fixed point
$\mathbf{x}^{*}\gg\mathbf{0}$ such that
$\phi_{t}(\mathbf{x})\rightarrow\mathbf{x}^{*}$ as $t\rightarrow\infty$, for
all $\mathbf{x}\in S\setminus\\{\mathbf{0}\\}$.$\hfill\square$
We now use the above to prove Theorem IV.4.
###### Proof:
We prove Theorem IV.4 by showing that system (6) satisfies conditions (C1)-(C4) of
Theorem E.1, and then performing stability analysis to evaluate conditions for
each of the three possible outcomes therein.
From Proposition IV.3, we know that (6) already satisfies (C4). The right
hand side of (5) satisfies (C1) because $q_{i}(x_{i})=0$ when $x_{i}=0$, and
because $(1-x_{i})$ and $f_{i}(\mathbf{x})$ are both non-negative for any
$\mathbf{x}\in[0,1]^{N}$. To check whether (C2) and (C3) are satisfied, observe
that from assumptions (A2)–(A5), we have
$\displaystyle\mathbf{J}_{F}(\mathbf{u})>\mathbf{J}_{F}(\mathbf{w})$ (19)
$\displaystyle\mathbf{J}_{Q}(\mathbf{u})<\mathbf{J}_{Q}(\mathbf{w})$ (20)
for all $\mathbf{u}<\mathbf{w}$ (the ordering between matrices
$\mathbf{M}^{a}<\mathbf{M}^{b}$ means
$\mathbf{M}^{a}_{ij}\leq\mathbf{M}^{b}_{ij}$ with the inequality being strict
for at least one pair of $i,j$). Here, $\mathbf{J}_{Q}$ is a diagonal matrix
since $\partial q_{i}/\partial x_{j}=0$ for all $i\neq j\in\mathcal{N}$.
Denote by $\mathbf{J}_{\bar{F}}$ the Jacobian matrix of system (6). Note that
for any point $\mathbf{x}\in[0,1]^{N}$, we have
$\mathbf{J}_{\bar{F}}(\mathbf{x})=\text{diag}(\mathbf{1}-\mathbf{x})\mathbf{J}_{F}(\mathbf{x})-\text{diag}\left(F(\mathbf{x})\right)-\mathbf{J}_{Q}(\mathbf{x})$
(21)
Combining the above with (19) and (20), we have for any points
$\mathbf{u}<\mathbf{w}$ that
$\displaystyle\mathbf{J}_{\bar{F}}(\mathbf{u})$
$\displaystyle=\text{diag}(\mathbf{1}-\mathbf{u})\mathbf{J}_{F}(\mathbf{u})-\text{diag}\left(F(\mathbf{u})\right)-\mathbf{J}_{Q}(\mathbf{u})$
$\displaystyle>\text{diag}(\mathbf{1}-\mathbf{w})\mathbf{J}_{F}(\mathbf{w})-\text{diag}\left(F(\mathbf{u})\right)-\mathbf{J}_{Q}(\mathbf{w})$
$\displaystyle\geq\text{diag}(\mathbf{1}-\mathbf{w})\mathbf{J}_{F}(\mathbf{w})-\text{diag}\left(F(\mathbf{w})\right)-\mathbf{J}_{Q}(\mathbf{w})$
$\displaystyle=\mathbf{J}_{\bar{F}}(\mathbf{w}),$
where the first inequality is due to
$(\mathbf{1}-\mathbf{u})>(\mathbf{1}-\mathbf{w})$ and (19) and (20). The
second inequality is from the non-negativity and monotonicity assumptions (A2)
and (A3) implying $F(\mathbf{u})\leq F(\mathbf{w})$. Since
$\mathbf{J}_{\bar{F}}(\mathbf{u})>\mathbf{J}_{\bar{F}}(\mathbf{w})$ for any
$\mathbf{u}<\mathbf{w}$, this is enough to satisfy both conditions (C2) and
(C3).
Since system (6) satisfies (C1)–(C4), Theorem E.1 applies. Because the system is
invariant in $[0,1]^{N}$, which is a bounded subset of $\mathbb{R}^{N}$,
outcome (i) of Theorem E.1 never occurs. From assumption (A1), the vector
$\mathbf{0}=[0,\cdots,0]^{T}$ (the virus-free equilibrium) is always a fixed
point of the system. We now find conditions under which trajectories of (6)
starting from anywhere in $[0,1]^{N}\setminus\\{\mathbf{0}\\}$ converge to
either zero, or to a unique strictly positive fixed point (outcomes (ii) and
(iii) in Theorem E.1, respectively), by checking the stability properties of the
system.
The virus-free fixed point zero is unstable [72] when
$\lambda(\mathbf{J}_{\bar{F}}(\mathbf{0}))=\lambda(\mathbf{J}_{F}(\mathbf{0})-\mathbf{J}_{Q}(\mathbf{0}))>0$. Under this condition, outcome (ii) in Theorem E.1 is not possible, and
there exists a unique, strictly positive fixed point
$\mathbf{x}^{*}\gg\mathbf{0}$ which is globally asymptotically stable in
$[0,1]^{N}\setminus\\{\mathbf{0}\\}$. Conversely, when zero is a stable fixed
point, that is when
$\lambda(\mathbf{J}_{\bar{F}}(\mathbf{0}))=\lambda(\mathbf{J}_{F}(\mathbf{0})-\mathbf{J}_{Q}(\mathbf{0}))\leq 0$,
it is globally attractive. ∎
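The threshold in this proof can be illustrated numerically on the linear SIS special case, where $\mathbf{J}_{F}(\mathbf{0})=\tau\mathbf{A}$ and $\mathbf{J}_{Q}(\mathbf{0})=\delta\mathbf{I}$, so the sign of $\lambda(\mathbf{J}_{F}(\mathbf{0})-\mathbf{J}_{Q}(\mathbf{0}))=\tau\lambda(\mathbf{A})-\delta$ decides between outcomes (ii) and (iii). The sketch below is illustrative only; the complete graph $K_{3}$, the rates, and the forward-Euler discretization are arbitrary choices.

```python
import numpy as np

def simulate(A, tau, delta, x0, dt=0.01, steps=20000):
    # forward-Euler integration of dx/dt = (1 - x) * tau * (A x) - delta * x
    x = x0.copy()
    for _ in range(steps):
        x = x + dt * ((1 - x) * (tau * A @ x) - delta * x)
    return x

A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])  # K3, lambda(A) = 2
delta, x0 = 1.0, np.full(3, 0.2)
lam_A = np.max(np.linalg.eigvals(A).real)

# subcritical: tau * lambda(A) - delta < 0, outcome (ii) -- extinction
tau = 0.3
assert tau * lam_A - delta < 0
assert np.all(simulate(A, tau, delta, x0) < 1e-6)

# supercritical: tau * lambda(A) - delta > 0, outcome (iii) -- endemic state
tau = 1.0
assert tau * lam_A - delta > 0
x_star = simulate(A, tau, delta, x0)
assert np.all(np.abs(x_star - 0.5) < 1e-3)  # symmetric equilibrium x* = 0.5
```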
## Appendix F Proofs of the Main Results
Throughout this Section, we use $\phi_{t}(\mathbf{x}_{0},\mathbf{y}_{0})$ to
represent the solution of (8) at time $t\geq 0$, starting from
$(\mathbf{x}_{0},\mathbf{y}_{0})\in D$. We will need the following results to
prove the theorems from Section V-B.
###### Proposition F.1
Starting from any point $D\setminus\left\\{(\mathbf{0},\mathbf{0})\right\\}$,
trajectories of (8) converge to the set
$Z\triangleq\left\\{(\mathbf{u},\mathbf{w})\in
D~{}|~{}(\mathbf{0},\mathbf{y}^{*})\leq_{K}(\mathbf{u},\mathbf{w})\leq_{K}(\mathbf{x}^{*},\mathbf{0})\right\\}.$
###### Proof:
For any $(\mathbf{r},\mathbf{s})\in D\setminus\\{(\mathbf{0},\mathbf{0})\\}$,
there exist points $\mathbf{x},\mathbf{y}\in[0,1]^{N}$ such that
$(\mathbf{0},\mathbf{y})\leq_{K}(\mathbf{r},\mathbf{s})\leq_{K}(\mathbf{x},\mathbf{0})$.
Then, from Definition D.1 of a monotone system, we have
$\phi_{t}(\mathbf{0},\mathbf{y})\leq_{K}\phi_{t}(\mathbf{r},\mathbf{s})\leq_{K}\phi_{t}(\mathbf{x},\mathbf{0})$
for any $t>0$. Since
$\phi_{t}(\mathbf{x},\mathbf{0})\rightarrow(\mathbf{x}^{*},\mathbf{0})$ and
$\phi_{t}(\mathbf{0},\mathbf{y})\rightarrow(\mathbf{0},\mathbf{y}^{*})$, we
get
$(\mathbf{0},\mathbf{y}^{*})\leq_{K}\lim_{t\rightarrow\infty}\phi_{t}(\mathbf{r},\mathbf{s})\leq_{K}(\mathbf{x}^{*},\mathbf{0})$.
Thus the trajectory $\left\\{\phi_{t}(\mathbf{r},\mathbf{s})\right\\}_{t\geq
0}$ converges to $Z$, completing the proof. ∎
The set $Z$ depends on $\mathbf{x}^{*}$ and $\mathbf{y}^{*}$, the fixed
points of systems (13) and (14), and we can determine when these fixed points
are positive or zero. Proposition F.1 thus quickly identifies a subset
of the state space to which trajectories starting from any point in
$D\\!\setminus\\!\left\\{(\mathbf{0},\mathbf{0})\right\\}$ converge.
###### Proof:
When
$\lambda\left(\mathbf{J}_{G}(\mathbf{0})-\mathbf{J}_{R}(\mathbf{0})\right)\\!\leq\\!0$
and
$\lambda\left(\mathbf{J}_{H}(\mathbf{0})-\mathbf{J}_{S}(\mathbf{0})\right)\\!\leq\\!0$,
we know from Theorem IV.4 that $\mathbf{x}^{*}=\mathbf{y}^{*}=\mathbf{0}$. Therefore,
trajectories of (8) starting from any point in
$D\setminus\left\\{(\mathbf{0},\mathbf{0})\right\\}$ converge to the set
$Z\triangleq\left\\{(\mathbf{u},\mathbf{w})\in
D~{}|~{}(\mathbf{0},\mathbf{0})\leq_{K}(\mathbf{u},\mathbf{w})\leq_{K}(\mathbf{0},\mathbf{0})\right\\}=\left\\{(\mathbf{0},\mathbf{0})\right\\}$.
Hence, the virus-free equilibrium is globally asymptotically stable in $D$,
which completes the proof. ∎
Proposition F.1 can also be applied to show that $(\mathbf{x}^{*},\mathbf{0})$
where $\mathbf{x}^{*}\\!\gg\\!\mathbf{0}$ is globally attractive when
$\lambda\left(\mathbf{J}_{G}(\mathbf{0})\\!-\\!\mathbf{J}_{R}(\mathbf{0})\right)\\!>\\!0$
and
$\lambda\left(\mathbf{J}_{H}(\mathbf{0})\\!-\\!\mathbf{J}_{S}(\mathbf{0})\right)\\!\leq\\!0$.
This is because from Theorem IV.4, we know that
$\mathbf{x}^{*}\\!\gg\\!\mathbf{0}$ and $\mathbf{y}^{*}\\!=\\!\mathbf{0}$. We
then have $Z\triangleq\left\\{(\mathbf{u},\mathbf{w})\in
D~{}|~{}(\mathbf{0},\mathbf{0})\leq_{K}(\mathbf{u},\mathbf{w})\leq_{K}(\mathbf{x}^{*},\mathbf{0})\right\\}$,
implying that the system (8) ultimately reduces to the single $SIS$ system
(13), which we know globally converges to $\mathbf{x}^{*}$. By a symmetric
argument, we also have that $(\mathbf{0},\mathbf{y}^{*})$ where
$\mathbf{y}^{*}\\!\gg\\!\mathbf{0}$ is globally attractive when
$\lambda\left(\mathbf{J}_{G}(\mathbf{0})\\!-\\!\mathbf{J}_{R}(\mathbf{0})\right)\\!\leq\\!0$
and
$\lambda\left(\mathbf{J}_{H}(\mathbf{0})\\!-\\!\mathbf{J}_{S}(\mathbf{0})\right)\\!>\\!0$.
Therefore these cases are easily analyzed by applying Proposition F.1 in
conjunction with Theorem IV.4. In terms of the linear bi-virus model, whose
parameters are easier to visualize, the values of $\tau_{1}$ and $\tau_{2}$ which
satisfy these conditions lie in regions R2 and R3 of Figure 3(b), and we
henceforth exclude them from our analysis, considering only those values of
$\tau_{1}$ and $\tau_{2}$ for which $\tau_{1}\lambda(\mathbf{A})\\!>\\!1$ and
$\tau_{2}\lambda(\mathbf{B})\\!>\\!1$ always hold; equivalently, considering
only the cases where
$\lambda\left(\mathbf{J}_{G}(\mathbf{0})-\mathbf{J}_{R}(\mathbf{0})\right)>0$
and
$\lambda\left(\mathbf{J}_{H}(\mathbf{0})-\mathbf{J}_{S}(\mathbf{0})\right)>0$
always hold for nonlinear infection and recovery rates. Thus, $\mathbf{x}^{*}$
and $\mathbf{y}^{*}$ are henceforth implied to be strictly positive vectors.
Before formally proving Theorems V.3 and V.4, we provide some additional
constructions and notations which will help simplify the proofs. As in the
proof of Theorem IV.4, the Jacobians $\mathbf{J}F^{x}(\mathbf{x})$ and
$\mathbf{J}F^{y}(\mathbf{y})$ of systems (13) and (14), respectively, are
$\displaystyle\mathbf{J}F^{x}(\mathbf{x})$
$\displaystyle=\text{diag}(\mathbf{1}\\!-\\!\mathbf{x})\mathbf{J}_{G}(\mathbf{x})-\text{diag}(G(\mathbf{x}))-\mathbf{J}_{R}(\mathbf{x}),$
$\displaystyle\mathbf{J}F^{y}(\mathbf{y})$
$\displaystyle=\text{diag}(\mathbf{1}\\!-\\!\mathbf{y})\mathbf{J}_{H}(\mathbf{y})-\text{diag}(H(\mathbf{y}))-\mathbf{J}_{S}(\mathbf{y}),$
for all $\mathbf{x},\mathbf{y}\in[0,1]^{N}$. Now recall the Jacobian
$\mathbf{J}_{\bar{G}\bar{H}}(\mathbf{x},\mathbf{y})$ of the bi-virus ODE (8)
from (12). When evaluated at $(\mathbf{x}^{*},\mathbf{0})$ and at
$(\mathbf{0},\mathbf{y}^{*})$, we get
$\begin{split}\mathbf{J}_{\bar{G}\bar{H}}(\mathbf{x}^{*},\mathbf{0})=\begin{bmatrix}\mathbf{J}F^{x}(\mathbf{x}^{*})&\mathbf{K}\\\
\mathbf{0}&\mathbf{J}_{y}\end{bmatrix}\end{split}$ (22)
where $\mathbf{K}\\!=\\!-\text{diag}(G(\mathbf{x}^{*}))$,
$\mathbf{J}_{y}\\!=\\!\text{diag}(\mathbf{1}\\!-\\!\mathbf{x}^{*})\mathbf{J}_{H}(\mathbf{0})\\!-\\!\mathbf{J}_{S}(\mathbf{0})$,
and
$\begin{split}\mathbf{J}_{\bar{G}\bar{H}}(\mathbf{0},\mathbf{y}^{*})=\begin{bmatrix}\mathbf{J}_{x}&\mathbf{0}\\\
\mathbf{L}&\mathbf{J}F^{y}(\mathbf{y}^{*})\end{bmatrix}\end{split}$ (23)
where $\mathbf{L}\\!=\\!-\text{diag}(H(\mathbf{y}^{*}))$,
$\mathbf{J}_{x}\\!=\\!\text{diag}(\mathbf{1}\\!-\\!\mathbf{y}^{*})\mathbf{J}_{G}(\mathbf{0})\\!-\\!\mathbf{J}_{R}(\mathbf{0})$.
This leads us to the following proposition, where the ordering $\leq_{K}$
($<_{K},\ll_{K}$) stands for the south-east cone-ordering.
###### Proposition F.2
When
$\lambda\\!\left(\mathbf{S}_{\mathbf{y}^{*}}\mathbf{J}_{G}(\mathbf{0})\\!-\\!\mathbf{J}_{R}(\mathbf{0})\right)\\!>\\!0$,
we have
$\lambda\left(\mathbf{J}_{\bar{G}\bar{H}}(\mathbf{0},\mathbf{y}^{*})\right)\\!=\\!\lambda(\mathbf{J}_{x})\\!>\\!0$,
and the corresponding eigenvector
$(\mathbf{u},\mathbf{v})\\!\in\\!\mathbb{R}^{2N}$ of
$\mathbf{J}_{\bar{G}\bar{H}}(\mathbf{0},\mathbf{y}^{*})$ satisfies
$(\mathbf{u},\mathbf{v})\\!\gg_{K}\\!(\mathbf{0},\mathbf{0})$.
###### Proof:
First, recall that $\mathbf{y}^{*}\gg\mathbf{0}$ is the asymptotically stable
fixed point of (14). This implies that the real parts of all eigenvalues of
the Jacobian $\mathbf{J}F^{y}(\mathbf{y}^{*})$ of (14) evaluated at
$\mathbf{y}^{*}$ are negative. Since $\mathbf{J}F^{y}(\mathbf{y}^{*})$ is an
irreducible matrix as discussed in Section V-A, with non-negative off-diagonal
elements, its PF eigenvalue (obtained by shifting the matrix by a large
multiple of the identity and applying the Perron–Frobenius theorem) is real and
negative, that is
$\lambda\left(\mathbf{J}F^{y}(\mathbf{y}^{*})\right)<0$.
From the assumption, we have
$\lambda\left(\mathbf{S}_{\mathbf{y}^{*}}\mathbf{J}_{G}(\mathbf{0})-\mathbf{J}_{R}(\mathbf{0})\right)=\lambda(\mathbf{J}_{x})>0$.
Since $\mathbf{J}_{\bar{G}\bar{H}}(\mathbf{0},\mathbf{y}^{*})$ is a block
triangular matrix, we have
$\lambda\left(\mathbf{J}_{\bar{G}\bar{H}}(\mathbf{0},\mathbf{y}^{*})\right)\\!=\\!\max\\!\left\\{\lambda(\mathbf{J}_{x}),\lambda\left(\mathbf{J}F^{y}(\mathbf{y}^{*})\right)\right\\}$,
and since $\lambda\left(\mathbf{J}F^{y}(\mathbf{y}^{*})\right)<0$, we obtain
$\lambda\left(\mathbf{J}_{\bar{G}\bar{H}}(\mathbf{0},\mathbf{y}^{*})\right)=\lambda(\mathbf{J}_{x})>0$.
Then, the corresponding eigenvector $(\mathbf{u},\mathbf{v})$ satisfies
$\mathbf{J}_{x}\mathbf{u}\\!=\\!\lambda(\mathbf{J}_{x})\mathbf{u}~{}~{}~{}~{}\text{and}~{}~{}~{}~{}\mathbf{L}\mathbf{u}\\!+\\!\mathbf{J}F^{y}(\mathbf{y}^{*})\mathbf{v}\\!=\\!\lambda(\mathbf{J}_{x})\mathbf{v}.$
From the first equation, we can tell that $\mathbf{u}$ is the eigenvector of
$\mathbf{J}_{x}$ corresponding to its PF eigenvalue, and thus satisfies
$\mathbf{u}\\!\gg\\!\mathbf{0}$. Now recall that
$\mathbf{J}F^{y}(\mathbf{y}^{*})$ has eigenvalues with strictly negative real
parts, so
$\lambda(\mathbf{J}_{x})\mathbf{I}\\!-\\!\mathbf{J}F^{y}(\mathbf{y}^{*})$ is
a matrix with eigenvalues having strictly positive real parts (since
$\lambda(\mathbf{J}_{x})\\!>\\!0$). The matrix
$\mathbf{M}\triangleq\lambda(\mathbf{J}_{x})\mathbf{I}\\!-\\!\mathbf{J}F^{y}(\mathbf{y}^{*})$
is then, by Definition A.2, an M-matrix. By construction, it is also
irreducible and invertible and from Lemma A.4, we obtain that
$\mathbf{M}^{-1}$ is a (strictly) positive matrix. The second equation in the
above can then be rewritten as
$\mathbf{v}=\mathbf{M}^{-1}\mathbf{L}\mathbf{u}\ll\mathbf{0}$, where the
inequality is because $\mathbf{L}\\!=\\!-\text{diag}(H(\mathbf{y}^{*}))$ has
strictly negative diagonal elements ($H(\mathbf{y}^{*})$ being positive from
assumptions (A2) and (A3)). Therefore, since $\mathbf{u}\gg\mathbf{0}$ and
$\mathbf{v}\ll\mathbf{0}$, we have $(\mathbf{u},\mathbf{v})\gg_{K}\mathbf{0}$,
completing the proof. ∎
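The linear-algebra steps of this proof can be mirrored on a small numerical example. The sketch below builds a toy block-triangular matrix with the same sign structure as $\mathbf{J}_{\bar{G}\bar{H}}(\mathbf{0},\mathbf{y}^{*})$ (all blocks and values are illustrative, not derived from any particular model) and checks that $\mathbf{M}^{-1}$ is positive and that the eigenvector components satisfy $\mathbf{u}\gg\mathbf{0}$ and $\mathbf{v}\ll\mathbf{0}$.

```python
import numpy as np

# Toy blocks mimicking J(0, y*): top-left J_x with lambda(J_x) > 0,
# bottom-right JFy stable (negative-real-part eigenvalues), and coupling
# L = -diag(H(y*)) with strictly negative diagonal. Values are illustrative.
Jx  = np.array([[-0.5, 1.0], [1.0, -0.5]])   # irreducible Metzler, lambda = 0.5
JFy = np.array([[-2.0, 0.3], [0.3, -2.0]])   # stable: eigenvalues -1.7, -2.3
L   = -np.diag([0.4, 0.6])

lam = np.max(np.linalg.eigvals(Jx).real)
assert lam > 0

# M = lambda * I - JFy is an irreducible nonsingular M-matrix, so M^{-1} > 0
M = lam * np.eye(2) - JFy
assert np.all(np.linalg.inv(M) > 0)

# eigenvector (u, v): u is the PF eigenvector of J_x, v = M^{-1} L u << 0
w, V = np.linalg.eig(Jx)
u = V[:, np.argmax(w.real)].real
u = u / u[np.argmax(np.abs(u))]              # fix sign so that u >> 0
v = np.linalg.inv(M) @ (L @ u)
assert np.all(u > 0) and np.all(v < 0)       # hence (u, v) >>_K 0
```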
The intention behind introducing Proposition F.2 was to satisfy the
assumptions of Theorem D.6. In particular, when
$\lambda\left(\mathbf{S}_{\mathbf{y}^{*}}\mathbf{J}_{G}(\mathbf{0})-\mathbf{J}_{R}(\mathbf{0})\right)\\!>\\!0$,
$(0,\mathbf{y}^{*})$ is an unstable fixed point; by Proposition F.2 and
Theorem D.6, there exists an $\epsilon_{1}>0$ and another fixed point
$(\mathbf{x}_{e},\mathbf{y}_{e})$ such that for any point
$(\mathbf{x}_{r},\mathbf{y}_{r})\triangleq(\mathbf{0},\mathbf{y}^{*})+r(\mathbf{u},\mathbf{v})$
where $r\in(0,\epsilon_{1}]$, we have
$\begin{split}(\mathbf{0},\mathbf{y}^{*})\\!\ll_{K}\\!(\mathbf{x}_{r},\mathbf{y}_{r})\\!\ll_{K}\\!\phi_{t}(\mathbf{x}_{r},\mathbf{y}_{r})\\!\ll_{K}\\!\phi_{s}(\mathbf{x}_{r},\mathbf{y}_{r})\\!\leq_{K}\\!(\mathbf{x}^{*},\mathbf{0})\end{split}$
for all $s\\!>\\!t\\!>\\!0$. Moreover, for all $(\mathbf{x},\mathbf{y})$ such
that
$(\mathbf{0},\mathbf{y}^{*})\\!\ll_{K}\\!(\mathbf{x},\mathbf{y})\\!\leq_{K}\\!(\mathbf{x}_{e},\mathbf{y}_{e})$,
there exists an $r\\!\in\\!(0,\epsilon_{1}]$ sufficiently small such that
$(\mathbf{x}_{r},\mathbf{y}_{r})\\!\leq_{K}\\!(\mathbf{x},\mathbf{y})\\!\leq_{K}\\!(\mathbf{x}_{e},\mathbf{y}_{e})$.
Since
$\phi_{t}(\mathbf{x}_{r},\mathbf{y}_{r})\\!\rightarrow\\!(\mathbf{x}_{e},\mathbf{y}_{e})$,
monotonicity implies
$\phi_{t}(\mathbf{x},\mathbf{y})\\!\rightarrow\\!(\mathbf{x}_{e},\mathbf{y}_{e})$
as $t\\!\to\\!\infty$.
Now, we can either have
$(\mathbf{x}_{e},\mathbf{y}_{e})\\!=\\!(\mathbf{x}^{*},\mathbf{0})$, which
occurs when $(\mathbf{x}^{*},\mathbf{0})$ is the other stable fixed point of
(8), or
$(\mathbf{x}_{e},\mathbf{y}_{e})\\!=\\!(\hat{\mathbf{x}},\hat{\mathbf{y}})\\!\gg\\!\mathbf{0}$
which occurs when $(\mathbf{x}^{*},\mathbf{0})$ is an unstable fixed point.
Note that $(\mathbf{x}^{*},\mathbf{0})$ is stable (unstable) if and only if
$\lambda\left(\mathbf{S}_{\mathbf{y}^{*}}\mathbf{J}_{G}(\mathbf{0})-\mathbf{J}_{R}(\mathbf{0})\right)\\!\leq\\!0$
($>\\!0$). We will address these two possibilities one by one; exploring them
will eventually lead to Theorems V.3 and V.4. But before we do
that, we first prove the following proposition about convergence to the fixed
point $(\mathbf{x}_{e},\mathbf{y}_{e})$ (whichever of the two it may be).
###### Proposition F.3
Trajectories of the system (8) starting from any point
$(\mathbf{x},\mathbf{y})$ such that
$(\mathbf{0},\mathbf{y}^{*})<_{K}(\mathbf{x},\mathbf{y})\leq_{K}(\mathbf{x}_{e},\mathbf{y}_{e})$
converge to $(\mathbf{x}_{e},\mathbf{y}_{e})$.$\hfill\square$
###### Proof:
Recall that we already know that for all
$(\mathbf{0},\mathbf{y}^{*})\\!\ll_{K}\\!(\mathbf{x},\mathbf{y})\\!\leq_{K}\\!(\mathbf{x}_{e},\mathbf{y}_{e})$,
$\phi_{t}(\mathbf{x},\mathbf{y})\\!\rightarrow\\!(\mathbf{x}_{e},\mathbf{y}_{e})$.
We would however like to show this for all
$(\mathbf{x},\mathbf{y})$ satisfying the weaker relation
$(\mathbf{0},\mathbf{y}^{*})<_{K}(\mathbf{x},\mathbf{y})\leq_{K}(\mathbf{x}_{e},\mathbf{y}_{e})$.
To do this, we create a set of points whose trajectories converge to
$(\mathbf{x}_{e},\mathbf{y}_{e})$, just like we created
$(\mathbf{x}_{r},\mathbf{y}_{r})$ before, and then use a monotonicity argument
to show convergence to $(\mathbf{x}_{e},\mathbf{y}_{e})$ of trajectories starting
from all points $(\mathbf{x},\mathbf{y})$ satisfying
$(\mathbf{0},\mathbf{y}^{*})<_{K}(\mathbf{x},\mathbf{y})\leq_{K}(\mathbf{x}_{e},\mathbf{y}_{e})$.
Recall that, $\mathbf{y}^{*}$ is an asymptotically stable fixed point of (14),
and from the proof of Proposition F.2 we know that
$\lambda\left(\mathbf{J}F^{y}(\mathbf{y}^{*})\right)<0$. Let
$\mathbf{w}\gg\mathbf{0}$ be the corresponding PF eigenvector. Then by
Proposition B.4, there exists an $\epsilon_{2}>0$ such that for all
$s\in(0,\epsilon_{2}]$, $F^{y}(\mathbf{y}^{*}+s\mathbf{w})\ll\mathbf{0}$. We
can then define points
$(\mathbf{x}_{r},\mathbf{y}_{s})\triangleq(r\mathbf{u},\mathbf{y}^{*}+s\mathbf{w})$
for any $r\in(0,\epsilon_{1}]$ and $s\in(0,\epsilon_{2}]$, where
$\mathbf{u}\gg\mathbf{0}$ is the eigenvector of $\mathbf{J}_{x}$ from
Proposition F.2. We will first show that trajectories starting from these
points converge to $(\mathbf{x}_{e},\mathbf{y}_{e})$. By rearranging the terms
of (8), we can rewrite it as
$\displaystyle\dot{\mathbf{x}}=$
$\displaystyle~{}\text{diag}(\mathbf{1}-\mathbf{y}^{*})G(\mathbf{x})-R(\mathbf{x})+\text{diag}(\mathbf{y}^{*}-\mathbf{x}-\mathbf{y})G(\mathbf{x})$
$\displaystyle=$
$\displaystyle~{}\text{diag}(\mathbf{1}-\mathbf{y}^{*})\mathbf{J}_{G}(\mathbf{0})\mathbf{x}-\mathbf{J}_{R}(\mathbf{0})\mathbf{x}$
$\displaystyle+\text{diag}(\mathbf{y}^{*}-\mathbf{x}-\mathbf{y})G(\mathbf{x})+O\left(\|\mathbf{x}\|^{2}\right)$
$\displaystyle=$
$\displaystyle~{}\mathbf{J}_{x}\mathbf{x}+O\left(\|\mathbf{x}\|\left[\|\mathbf{y}-\mathbf{y}^{*}\|+\|\mathbf{x}\|\right]\right),$
$\displaystyle\dot{\mathbf{y}}=$
$\displaystyle~{}\text{diag}(\mathbf{1}-\mathbf{y})H(\mathbf{y})-S(\mathbf{y})-\text{diag}(\mathbf{x})H(\mathbf{y})$
$\displaystyle=$
$\displaystyle~{}F^{y}(\mathbf{y})+O\left(\|\mathbf{x}\|\right),$
for all $(\mathbf{x},\mathbf{y})\in D$ (here, $O(x)$ is used to represent
terms which satisfy $O(x)\to 0$ as $x\to 0$), where the first equality is from a
Taylor series expansion of $G$ and $R$ around $\mathbf{0}$. For any point
$(\mathbf{x}_{r},\mathbf{y}_{s})=(r\mathbf{u},\mathbf{y}^{*}+s\mathbf{w})$,
the above equations can be written as
$\displaystyle\dot{\mathbf{x}}$
$\displaystyle=r\lambda(\mathbf{J}_{x})\mathbf{u}+rO\left(\|\mathbf{u}\|\left[s\|\mathbf{w}\|+r\|\mathbf{u}\|\right]\right)$
$\displaystyle=r\left[\lambda(\mathbf{J}_{x})\mathbf{u}+O(r+s)\right]\ $
$\displaystyle\dot{\mathbf{y}}$
$\displaystyle=F^{y}(\mathbf{y}^{*}+s\mathbf{w})+O\left(\|r\mathbf{u}\|\right)$
$\displaystyle=F^{y}(\mathbf{y}^{*}+s\mathbf{w})+O\left(r\right).$
For sufficiently small $r$ and $s$, we have $\dot{\mathbf{x}}\gg\mathbf{0}$
(since $\lambda(\mathbf{J}_{x})>0$ and $\mathbf{u}\gg\mathbf{0}$) and
$\dot{\mathbf{y}}\ll\mathbf{0}$ (since
$F^{y}(\mathbf{y}^{*}+s\mathbf{w})\ll\mathbf{0}$ for all
$s\in(0,\epsilon_{2}]$). This satisfies the conditions for Proposition D.5,
and trajectories starting from such points will be monotonically increasing
(according to the south-east cone ordering), eventually converging to the
fixed point $(\mathbf{x}_{e},\mathbf{y}_{e})$.
Now observe that for any point $(\mathbf{x},\mathbf{y})$ such that
$(\mathbf{0},\mathbf{y}^{*})\\!<_{K}\\!(\mathbf{x},\mathbf{y})\\!\leq_{K}\\!(\mathbf{x}_{e},\mathbf{y}_{e})$,
where $\mathbf{x}\\!>\\!\mathbf{0}$ and $\mathbf{y}\\!\leq\\!\mathbf{y}^{*}$,
by the nature of the ODE system (8) all zero entries of $\mathbf{x}$
will eventually become positive (if they are not already). Therefore, there exists
a time $t_{1}>0$ such that $\mathbf{x}(t_{1})\\!\gg\\!\mathbf{0}$, and there
exist $r,s$ small enough such that
$(\mathbf{x}_{r},\mathbf{y}_{s})\\!\ll_{K}\\!\phi_{t_{1}}(\mathbf{x},\mathbf{y})\\!\leq_{K}\\!(\mathbf{x}_{e},\mathbf{y}_{e})$.
Again by monotonicity, since
$\phi_{t}(\mathbf{x}_{r},\mathbf{y}_{s})\rightarrow(\mathbf{x}_{e},\mathbf{y}_{e})$,
we have
$\phi_{t+t_{1}}(\mathbf{x},\mathbf{y})\rightarrow(\mathbf{x}_{e},\mathbf{y}_{e})$
as $t\rightarrow\infty$, completing the proof. ∎
We now consider the case where
$(\mathbf{x}_{e},\mathbf{y}_{e})\\!=\\!(\mathbf{x}^{*},\mathbf{0})$ and give
the proof for Theorem V.3. We prove it only for the case
$\tau_{1}\lambda(\mathbf{S}_{\mathbf{y}^{*}}\mathbf{A})\\!>\\!1$ and
$\tau_{2}\lambda(\mathbf{S}_{\mathbf{x}^{*}}\mathbf{B})\\!\leq\\!1$, since the
other case follows by a symmetric argument.
###### Proof:
When
$\lambda\left(\mathbf{J}_{H}(\mathbf{0})\\!-\\!\mathbf{J}_{S}(\mathbf{0})\right)\\!\leq\\!0$,
$(\mathbf{x}^{*},\mathbf{0})$ is a stable fixed point of system (8), since all
eigenvalues of $\mathbf{J}_{\bar{G}\bar{H}}(\mathbf{x}^{*},\mathbf{0})$ have
non-positive real parts, and we have
$(\mathbf{x}_{e},\mathbf{y}_{e})=(\mathbf{x}^{*},\mathbf{0})$. Proposition F.3
then implies that trajectories starting from all points in
$Z\setminus\left\\{(\mathbf{0},\mathbf{y}^{*})\right\\}$ converge to
$(\mathbf{x}^{*},\mathbf{0})$. According to Proposition F.1, trajectories
starting from all points $(\mathbf{x},\mathbf{y})\in B_{x}$
eventually enter the set $Z$, and thereby converge to
$(\mathbf{x}^{*},\mathbf{0})$, giving us global convergence in $B_{x}$. ∎
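The competitive-exclusion outcome of Theorem V.3 can be observed on the linear bi-virus special case. The sketch below is illustrative only: both viruses share the complete graph $K_{3}$, with rates chosen arbitrarily so that each virus is supercritical on its own ($\tau_{i}\lambda>1$) but virus 2 is subcritical near $(\mathbf{x}^{*},\mathbf{0})$; forward-Euler integration then shows convergence to $(\mathbf{x}^{*},\mathbf{0})$.

```python
import numpy as np

def bivirus_step(x, y, A, B, t1, t2, dt):
    # Euler step of the linear bi-virus model (shared susceptible population):
    # dx/dt = (1 - x - y) * t1 * (A x) - x,  dy/dt = (1 - x - y) * t2 * (B y) - y
    free = 1 - x - y
    return (x + dt * (free * (t1 * A @ x) - x),
            y + dt * (free * (t2 * B @ y) - y))

A = B = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])  # K3, lambda = 2
t1, t2, dt = 1.5, 0.6, 0.01     # t1*lam = 3 > 1 and t2*lam = 1.2 > 1
x = np.full(3, 0.1); y = np.full(3, 0.3)
for _ in range(40000):
    x, y = bivirus_step(x, y, A, B, t1, t2, dt)

# virus 1 survives at its single-virus equilibrium x* = 1 - 1/(t1*lam) = 2/3,
# while virus 2 goes extinct: convergence to (x*, 0)
assert np.all(np.abs(x - 2/3) < 1e-2) and np.all(y < 1e-2)
```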
Similarly, we use Proposition F.3 to prove Theorem V.4.
###### Proof:
When
$\lambda\left(\mathbf{J}_{G}(\mathbf{0})\\!-\\!\mathbf{J}_{R}(\mathbf{0})\right)\\!>\\!0$
and
$\lambda\left(\mathbf{J}_{H}(\mathbf{0})\\!-\\!\mathbf{J}_{S}(\mathbf{0})\right)\\!>\\!0$,
both $(\mathbf{0},\mathbf{y}^{*})$ and $(\mathbf{x}^{*},\mathbf{0})$ are
unstable fixed points, and $(\mathbf{x}_{e},\mathbf{y}_{e})$ takes the form of
a positive fixed point $(\hat{\mathbf{x}},\hat{\mathbf{y}})\gg\mathbf{0}$ (it
cannot be $(\mathbf{x}^{*},\mathbf{0})$, which is unstable). Then from
Proposition F.3, it attracts trajectories beginning from all points
$(\mathbf{x},\mathbf{y})$ satisfying
$(\mathbf{0},\mathbf{y}^{*})\\!<_{K}\\!(\mathbf{x},\mathbf{y})\\!\leq_{K}\\!(\hat{\mathbf{x}},\hat{\mathbf{y}})$.
Similarly, a symmetric result holds when
$\tau_{2}\lambda(\mathbf{S}_{\mathbf{x}^{*}}\mathbf{B})\\!>\\!1$ (symmetric to
Proposition F.2, which assumes
$\tau_{1}\lambda(\mathbf{S}_{\mathbf{y}^{*}}\mathbf{A})\\!>\\!1$ instead), and
we can say that there exists another fixed point
$(\bar{\mathbf{x}},\bar{\mathbf{y}})\\!\gg\\!\mathbf{0}$ which attracts all
points $(\mathbf{x},\mathbf{y})$ satisfying
$(\bar{\mathbf{x}},\bar{\mathbf{y}})\\!\leq_{K}\\!(\mathbf{x},\mathbf{y})\\!<_{K}\\!(\mathbf{x}^{*},\mathbf{0})$.
By construction, we then have
$(\hat{\mathbf{x}},\hat{\mathbf{y}})\\!\leq_{K}\\!(\bar{\mathbf{x}},\bar{\mathbf{y}})$,
with the possibility of being equal.
To prove global convergence of the system to the set
$S\\!=\\!\left\\{(\mathbf{x}_{e},\mathbf{y}_{e})\\!\in\\!E~{}|~{}(\hat{\mathbf{x}},\hat{\mathbf{y}})\\!\leq_{K}\\!(\mathbf{x}_{e},\mathbf{y}_{e})\\!\leq_{K}\\!(\bar{\mathbf{x}},\bar{\mathbf{y}})\right\\}$,
observe first that as part of the proof of Proposition F.3 we showed that for
trajectories starting from any point $(\mathbf{x},\mathbf{y})$ in the state
space, there exist $r\\!>\\!0$ and $s\\!>\\!0$ small enough, and
$t_{1}\\!>\\!0$ such that
$(\mathbf{x}_{r},\mathbf{y}_{s})\\!\ll_{K}\\!\phi_{t_{1}}(\mathbf{x},\mathbf{y})\\!\leq_{K}\\!(\hat{\mathbf{x}},\hat{\mathbf{y}})$
where $(\mathbf{x}_{r},\mathbf{y}_{s})$ is a point very close to
$(\mathbf{x}^{*},\mathbf{0})$. By a parallel argument, we can find a similar
point $(\mathbf{x}_{p},\mathbf{y}_{q})$ very close to
$(\mathbf{0},\mathbf{y}^{*})$ and a time $t_{2}$ such that
$(\bar{\mathbf{x}},\bar{\mathbf{y}})\\!\leq_{K}\\!\phi_{t_{2}}(\mathbf{x},\mathbf{y})\\!\ll_{K}\\!(\mathbf{x}_{p},\mathbf{y}_{q})$.
Then, we have
$(\mathbf{x}_{r},\mathbf{y}_{s})\\!\ll_{K}\\!\phi_{\max\\{t_{1},t_{2}\\}}(\mathbf{x},\mathbf{y})\\!\ll_{K}\\!(\mathbf{x}_{p},\mathbf{y}_{q})$.
Since
$\phi_{t}(\mathbf{x}_{r},\mathbf{y}_{s})\\!\rightarrow\\!(\hat{\mathbf{x}},\hat{\mathbf{y}})\in
S$, and
$\phi_{t}(\mathbf{x}_{p},\mathbf{y}_{q})\rightarrow(\bar{\mathbf{x}},\bar{\mathbf{y}})\in
S$, we can once again, due to monotonicity of the system and by invoking a
sandwich argument, say that
$\phi_{t+\max\\{t_{1},t_{2}\\}}(\mathbf{x},\mathbf{y})$ converges to an
equilibrium point in $S$ as $t\\!\rightarrow\\!\infty$. This completes the
proof. ∎
Vishwaraj Doshi received his B.E. degree in mechanical engineering from the
University of Mumbai, Mumbai, MH, India, and master's degree in Operations
Research from North Carolina State University, Raleigh, NC, USA, in 2015 and
2017, respectively. He completed his Ph.D. degree with the Operations Research
Graduate Program at North Carolina State University in 2022, and is now a part
of the Data Science and Advanced Analytics team at IQVIA. His primary research
interests include the design of randomized algorithms on graphs, and epidemic
models on networks.
Shailaja Mallick is a Ph.D. student in the Computer Science Department at
North Carolina State University. She received her B.Tech in Computer Science
from UCE, Burla, India and master's degree in Computer Systems and Networks from
Chalmers University of Technology, Sweden. Her current research interests are
in the area of social network analysis, and network and performance modeling
using techniques from mathematical biology, graph theory, stochastic modeling
and simulation.
Do Young Eun (Senior Member, IEEE) received his B.S. and M.S. degrees in
Electrical Engineering from Korea Advanced Institute of Science and Technology
(KAIST), Taejon, Korea, in 1995 and 1997, respectively, and Ph.D. degree from
Purdue University, West Lafayette, IN, in 2003. Since August 2003, he has been
with the Department of Electrical and Computer Engineering at North Carolina
State University, Raleigh, NC, where he is currently a professor. His research
interests include distributed optimization for machine learning, machine
learning algorithms for networks, distributed and randomized algorithms for
large social networks and wireless networks, epidemic modeling and analysis,
graph analytics and mining techniques with network applications. He has been a
member of Technical Program Committee of various conferences including IEEE
INFOCOM, ICC, Globecom, ACM MobiHoc, and ACM Sigmetrics. He is serving on the
editorial board of IEEE Transactions on Network Science and Engineering, and
previously served for IEEE/ACM Transactions on Networking and Computer
Communications Journal, and was TPC co-chair of WASA’11. He received the Best
Paper Awards in the IEEE ICCCN 2005, IEEE IPCCC 2006, and IEEE NetSciCom 2015,
and the National Science Foundation CAREER Award 2006. He supervised and co-
authored a paper that received the Best Student Paper Award in ACM MobiCom
2007.
# speechocean762: An Open-Source Non-native English Speech Corpus For
Pronunciation Assessment
###### Abstract
This paper introduces a new open-source speech corpus named “speechocean762”
designed for pronunciation assessment use, consisting of 5000 English
utterances from 250 non-native speakers, where half of the speakers are
children. Five experts annotated each of the utterances at sentence-level,
word-level and phoneme-level. A baseline system is released in open source to
illustrate the phoneme-level pronunciation assessment workflow on this corpus.
This corpus is allowed to be used freely for commercial and non-commercial
purposes. It is available for free download from OpenSLR, and the
corresponding baseline system is published in the Kaldi speech recognition
toolkit.
Index Terms: corpus, computer-assisted language learning (CALL), second
language (L2)
## 1 Introduction
As an indispensable part of computer-aided language learning (CALL), computer-
aided pronunciation training (CAPT) applications with pronunciation assessment
technology are widely used in foreign language learning [1, 2] and proficiency
tests [3]. CAPT has proved very useful for improving the pronunciation of
foreign language learners [4]. Due to the acute shortage of qualified
teachers [5] and the increasing popularity of online learning, pronunciation
assessment research is receiving more and more attention [6].
Based on the features of real-world CAPT applications, we divide practical
pronunciation assessment tasks into three categories by assessment
granularity: sentence-level, word-level, and phoneme-level. The
sentence-level assessment evaluates the whole sentence. Specifically, three
types of sentence-level scores frequently appear in practical CAPT systems:
accuracy, completeness, and fluency. Accuracy indicates how correctly the
learner pronounces each word in the utterance; completeness indicates the
percentage of the words that are actually pronounced; and fluency here is in
the narrow sense [7], focusing on whether the speaker pronounces smoothly and
without unnecessary pauses. The word-level assessment
has a finer scale than the sentence-level assessment. Typical word-level
scores are accuracy and stress. Furthermore, as the finest granularity
assessment, the phoneme-level assessment evaluates each phone's pronunciation
quality in the utterance. Note that the word-level accuracy score should not
be regarded as a simple average of the phone-level accuracy scores, although
the two are strongly correlated. Take the word ``above'' (/əbʌv/) as an example.
A foreign language learner may mispronounce one of its vowels, or may
mispronounce it as /əkʌv/ (mispronouncing /b/ as /k/). In both incorrect
pronunciations exactly one phone is mispronounced, but most people would judge
the latter mispronunciation to be worse than the former.
There are some public corpora for pronunciation assessment. The ISLE Speech
Corpus [8] is an early and widely accepted [9, 10, 11] data set. It contains
mispronunciation tags at the word and phoneme levels, and the speakers are all
native speakers of German or Italian. It is free for academic use, but a fee
is charged for commercial use. ERJ [12] is another well-known non-native
English corpus for pronunciation assessment, collected from 202 Japanese
students and annotated with phonemic and prosodic symbols. ATR-Gruhn [13] is a non-native English corpus
with multiple accents. The annotations of ATR-Gruhn are speaker-level
proficiency ratings. TL-school [14] is a corpus of speech utterances collected
in northern Italy schools for assessing the performance of students learning
both English and German. The data set of a spoken CALL shared task [15] is
available to download, where Swiss students answer prompts in English, and the
students' responses are manually labeled as ``accept'' or ``reject''.
L2-ARCTIC [16] is a non-native English speech corpus with manual annotations,
which has been used in some recent studies [17, 18], and it uses substitution,
deletion, and insertion to annotate for the phoneme-level scoring. Sell-corpus
[19] is another multiple accented Chinese-English speech corpus with phoneme
substitution annotations. Some corpora, such as CU-CHLOE [20], Supra-CHLOE
[21] and COLSEC [22], have been used in many studies [23, 24, 25, 26] but are
not publicly available. Corpora for languages other than English also exist.
The Tokyo-Kikuko [27] is a non-native Japanese corpus with phonemic and
prosodic annotations. The iCALL corpus [28] is a Mandarin corpus spoken by
non-native speakers of European descent with annotated pronunciation errors.
The SingaKids-Mandarin [29] corpus focuses on mispronunciation patterns in
Singapore children’s Mandarin speech.
To our knowledge, none of the existing non-native English corpora for
pronunciation assessment contains all the following features:
* •
It is available for free download for both commercial and non-commercial
purposes.
* •
The speakers include both young children and adults.
* •
The manual annotations cover many aspects at sentence-level, word-level and
phoneme-level.
To provide these features, we created this corpus to support researchers in their
pronunciation assessment studies. The corpus is available on the OpenSLR
111https://www.openslr.org/101 website, and the corresponding baseline system
has been a part of the Kaldi speech recognition toolkit
222https://github.com/kaldi-asr/kaldi/tree/master/egs/gop_speechocean762.
The rest of this paper is organized as follows: Section 2 describes the audio
acquisition. Section 3 details how we annotated the data for the pronunciation
assessment tasks. In Section 4, a Kaldi recipe for this corpus is introduced,
which illustrates how to do phoneme-level pronunciation assessment, and the
experiment results are provided as well.
## 2 Audio Acquisition
The text script of this corpus is selected from everyday-life text, containing
about 2,600 common English words. As shown in Figure 1, speakers were asked to
hold their mobile phones 20 cm from their mouths and read the text as
accurately as possible in a quiet 3$\times$3 meter room. The mobile phones
include popular models from Apple, Samsung, Xiaomi, and Huawei. Each speaker
read 20 sentences aloud, and the total duration of the audio is about
6 hours.
The speakers are 250 English learners whose mother tongue is Mandarin. The
training set and test set are divided randomly, with 125 speakers for each.
We carefully selected the speakers considering gender, age and English
proficiency. The experts roughly rated each speaker's English pronunciation
proficiency at one of three levels: good, average, and poor. Figure 2 shows
the distribution of the speakers' English pronunciation proficiency. Figure 3
shows the distribution of the speakers' ages. The gender ratio is 1:1 for both
adults and children.
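The random, speaker-level train/test split described above can be sketched as follows (the speaker IDs and the seed are illustrative; the corpus's actual split is shipped with the data):

```python
import random

def split_speakers(speakers, n_test=125, seed=0):
    """Randomly split speaker IDs into disjoint train/test sets."""
    rng = random.Random(seed)
    shuffled = list(speakers)
    rng.shuffle(shuffled)
    return shuffled[n_test:], shuffled[:n_test]

# 250 hypothetical speaker IDs, split 125/125 as in the corpus
speakers = [f"SPEAKER{i:04d}" for i in range(250)]
train, test = split_speakers(speakers)
assert len(train) == len(test) == 125
assert set(train).isdisjoint(test)
```

Splitting at the speaker level (rather than the utterance level) keeps all 20 utterances of a speaker on the same side of the split.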
Figure 1: Recording setup. Speakers read the text holding their mobile phones
in a quiet room. Figure 2: Speaker's English pronunciation proficiency
distributions. Figure 3: Speaker's age distributions. Figure 4: The
``SpeechOcean uTrans'' Application. Before this dialog is displayed, the
experts have reached an agreement on the canonical phone sequences by voting.
For the phoneme-level scoring, the expert selects a phone symbol and then
assigns a score of 0 or 1. If a phone symbol is not selected, its score
defaults to 2.
## 3 Manual Annotation
Manual annotations are the essential part of this corpus. The annotations are
the scores that indicate the pronunciation quality. Each utterance in this
corpus is scored manually by five experts independently under the same
metrics.
### 3.1 Manual Scoring Metrics
The experts discussed and formulated the manual scoring metrics. Table 1 shows
the detailed metrics. The phoneme-level score is the pronunciation accuracy of
each phone. The word-level scores include accuracy and stress, and the
sentence-level scores include accuracy, completeness, fluency and prosody. The
sentence-level completeness score, which is not depicted in Table 1, is the
percentage of the words in the target text that are actually pronounced.
Table 1: Manual Scoring Metrics Score | Description
---|---
| Phoneme-level Accuracy
2 | The phone is pronounced correctly
1 | The phone is pronounced with a heavy accent
0 | The pronunciation is incorrect or missed
| Word-level Accuracy
10 | The pronunciation of the whole word is correct
7-9 | Most phones in the word are pronounced correctly, but the word's pronunciation has heavy accents
4-6 | No more than 30% of the phones in the word are pronounced incorrectly
2-3 | More than 30% of the phones in the word are pronounced incorrectly, or the word is mispronounced as some other word
0-1 | The whole pronunciation is hard to distinguish, or the word is missed
| Word-level Stress
10 | The stress position is correct, or the word is a mono-syllable word
5 | The stress position is incorrect
| Sentence-level Accuracy
9-10 | The overall pronunciation of the sentence is excellent without obvious mispronunciation
7-8 | The overall pronunciation of the sentence is good, with a few mispronunciations
5-6 | The pronunciation of the sentence has many mispronunciations but it is still understandable
3-4 | Awkward pronunciation with many serious mispronunciations
0-2 | The pronunciation of the whole sentence cannot be understood, or there is no voice
| Sentence-level Fluency
8-10 | Coherent speech, without noticeable pauses, repetition or stammering
6-7 | Coherent speech in general, with a few pauses, repetition and stammering
4-5 | The speech is incoherent, with many pauses, repetition and stammering
0-3 | The speaker is not able to read the sentence as a whole or there is no voice
| Sentence-level Prosody
9-10 | Correct intonation, stable speaking speed and rhythm
7-8 | Nearly correct intonation at a stable speaking speed
3-6 | Unstable speech speed, or the intonation is inappropriate
0-2 | The reading of the sentence is too halting for prosodic scoring, or there is no voice
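As a small illustration, the sentence-level completeness score described in Section 3.1 (the percentage of target words actually pronounced) amounts to (function and argument names are ours):

```python
def completeness(target_words, pronounced_flags):
    """Percentage of target words actually pronounced.

    pronounced_flags[i] is True if target_words[i] was pronounced.
    """
    if not target_words:
        return 0.0
    return 100.0 * sum(pronounced_flags) / len(target_words)

# a learner skips one word out of five
print(completeness(["we", "will", "go", "home", "now"],
                   [True, True, True, False, True]))  # → 80.0
```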
Figure 5: Building LG directly for the word ``fast'' with the canonical phone
sequence voted by the experts, with skippable silence. Figure 6: The part of
L related to the word ``fast''.
### 3.2 The Multiple Canonical Phone Sequences Problem
The phoneme-level scoring requires determining the canonical phone sequence. A
problem in practice is that the canonical phone sequence may not be unique.
Take the word ``fast'' as an example. In middle school, most Chinese students
were taught that this word should be pronounced as /fɑːst/, so a proper
canonical phone sequence is ``F AA S T'' with the phone set defined by the CMU
Dictionary [30]. However, some speakers may pronounce this word as /fæst/
following the American pronunciation. In that case, the phone ``AA'' in
the canonical phone sequence ``F AA S T'' would be misjudged with a low score.
The proper canonical phone sequence, in this case, is ``F AE S T''.
Our solution is as follows. For each word, the experts are shown several
possible canonical phone sequences before scoring. Each expert first selects
the sequence that, in his or her judgment, is closest to the actual
pronunciation. Since the sequence chosen by each expert may differ, the five
experts vote to determine the final canonical sequence.
Then all the experts use the same canonical phone sequence to score. The
canonical phone sequences are carried as a part of the corpus's meta-
information.
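The voting step can be sketched as a plurality vote over the five experts' selections (a sketch; the tie-breaking rule here, earliest selection wins, is our own assumption):

```python
from collections import Counter

def vote_canonical(selections):
    """Return the phone sequence chosen by the most experts.

    selections: one phone sequence (a tuple of phones) per expert.
    """
    counts = Counter(selections)
    # Break ties in favor of the sequence selected earliest (an assumption).
    return max(counts, key=lambda seq: (counts[seq], -selections.index(seq)))

experts = [("F", "AA", "S", "T"),
           ("F", "AE", "S", "T"),
           ("F", "AE", "S", "T"),
           ("F", "AE", "S", "T"),
           ("F", "AA", "S", "T")]
print(vote_canonical(experts))  # → ('F', 'AE', 'S', 'T')
```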
### 3.3 Scoring Workflow
We developed an application named ``SpeechOcean uTrans'' for the experts to
conveniently score the audio. The interface of the application is shown in
Figure 4.
Before scoring, the experts read the transcript and listen to the audio to
become familiar with the utterance. Then the experts are required to listen to
the audio at least three times. As we mentioned, some words have
more than one canonical phone sequence. For those words, experts need to
choose and vote to reach an agreement on the canonical phone sequence. Then
the experts score the audio following the scoring metrics expressed in Table
1. If the scores seem unreasonable, for example, the word-level score is high
but all the phone-level scores are low, the ``SpeechOcean uTrans'' application
would raise a warning message to remind the expert to recheck the scores.
### 3.4 Score Distribution
Figure 7 shows the distribution of the sentence-level scores. The phoneme-
level and word-level score distributions are shown in the Figure 8, where the
phoneme-level scores are mapped linearly to the range 0 to 10 for comparison.
The sentence-level scores range from 3 to 10, while most of the word-level
and phoneme-level scores are from 8 to 10. This behaviour stems from the
fact that a high sentence-level score relies on consistently ``good'' word and
phoneme pronunciation: even a single mispronounced word can lead to a low
overall score. Due to limited space, we suggest readers refer to the
available online corpus for detailed statistics.
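The linear mapping used for Figure 8, from the phoneme-level 0–2 scale to the 0–10 scale, is simply (function name is ours):

```python
def map_phone_score(score, src_max=2, dst_max=10):
    """Linearly map a score from [0, src_max] to [0, dst_max]."""
    return score * dst_max / src_max

# phoneme-level scores 0, 1, 2 land at 0, 5, 10 on the word/sentence scale
assert [map_phone_score(s) for s in (0, 1, 2)] == [0.0, 5.0, 10.0]
```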
Figure 7: Sentence-level score distribution. Figure 8: Score distribution in
different levels.
## 4 The Kaldi Recipe
To demonstrate how to use this corpus to score at the phoneme level, we
uploaded a recipe named ``gop_speechocean762'' to the Kaldi toolkit.
### 4.1 Pipeline
We believe that the classical method is more suitable for building the
baseline system than the latest methods, so the pipeline is built following
the neural network (NN) based goodness of pronunciation (GOP) method, which is
widely used and detailed in [31]. Here we only present some specifics of
implementing it in Kaldi. The GOP method requires an acoustic model
pre-trained on native speech, which is trained by the
``egs/librispeech/s5/local/nnet3/run_tdnn.sh'' script in Kaldi. The frame-
level posterior matrix is generated through forward propagation on the native
acoustic model, and the matrix is used for forced alignment and for computing
the GOP values and the GOP-based features, whose definitions can be found in
[31] as well. Then we train a regressor for each phone using the GOP-based
features to predict the phoneme-level scores.
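As a sketch of the GOP value computation (one common NN-based formulation in the spirit of [31]; the exact feature definitions in the recipe follow [31] and may differ):

```python
import numpy as np

def gop_value(log_post, canonical_idx):
    """GOP of one aligned phone instance: the average, over its frames, of
    the canonical phone's log posterior minus the best competing log
    posterior. Values are <= 0; 0 means the canonical phone wins every frame.

    log_post: (T, n_phones) frame-level log posteriors for the frames
              forced-aligned to this phone instance.
    """
    canonical = log_post[:, canonical_idx]
    best = log_post.max(axis=1)
    return float(np.mean(canonical - best))

rng = np.random.default_rng(0)
log_post = np.log(rng.dirichlet(np.ones(40), size=12))  # 12 frames, 40 phones
assert gop_value(log_post, 3) <= 0.0
```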
### 4.2 Alignment Graph Building without Lexicon
Kaldi's default alignment setup does not guarantee that the alignment output
is identical to the canonical phone sequence voted by the experts. We continue
to use the word ``fast'' as an example. The two possible phone sequences of this
word, which are ``F AA S T'' and ``F AE S T'' specifically, are both contained
in the lexicon finite state transducer (FST), shown in Figure 6. In that case,
the phone sequence produced by the alignment is uncertain. If the experts'
canonical phone sequence differs from the alignment result, the scores will
not be comparable with the manual scores.
Therefore, we build the lexicon-to-grammar (LG) FST directly using the
canonical phone sequence voted by the experts without composing the lexicon
FST and the grammar FST. The process of directly constructing LG is simple:
first, construct a linear FST structure, whose input labels are the canonical
phone sequences voted by the experts, whereas the output labels are the
corresponding words and epsilons [32]. Then, add skippable silence between the
words, and use the disambiguation symbol to construct the tail at the end of
LG, as shown in Figure 5.
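The direct LG construction described above can be sketched in OpenFst's text arc format (a simplified sketch: single word, no disambiguation-symbol tail, and illustrative symbol names; the actual recipe builds the FST with Kaldi's tooling):

```python
def linear_lg(phones, word, sil="SIL"):
    """Text-format arcs for a linear LG accepting one word's canonical
    phone sequence, with skippable silence before and after the word."""
    arcs, state = [], 0
    arcs.append(f"{state} {state} {sil} <eps>")   # optional leading silence
    for i, phone in enumerate(phones):
        out = word if i == 0 else "<eps>"         # word label on the first arc
        arcs.append(f"{state} {state + 1} {phone} {out}")
        state += 1
    arcs.append(f"{state} {state} {sil} <eps>")   # optional trailing silence
    arcs.append(str(state))                       # final state
    return arcs

print("\n".join(linear_lg(["F", "AE", "S", "T"], "fast")))
```

Because the FST is linear, forced alignment through it can only emit the voted phone sequence (plus optional silence), which keeps the machine scores comparable with the manual ones.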
### 4.3 Supervised Training and Data Balancing
With the GOP-based features and the corresponding manual scores, we train a
regressor for each mono phone. The model structure is a support vector
regressor (SVR) [33]. Besides, we train polynomial regression models with the
GOP values directly for each phone as an alternative lightweight method.
A problem is that the data's phoneme-level scores are quite unbalanced, as
discussed in Section 3.4. To address this issue, we supplement the training
set with high-score samples of other phones, used as low-score samples of the
current phone. For example, a good pronunciation sample of the phone AE can be
considered a poor pronunciation sample of the phone AA. For the model
training of a particular phone, we randomly select samples of other phones
with high manual scores, set their scores to zero, and add them to the
training set.
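The balancing trick described above can be sketched as follows (the data layout, threshold, and function name are our own illustration):

```python
import random

def add_negatives(per_phone, target, n_neg, min_score=2, seed=0):
    """Supplement `target`'s training set with other phones' high-score
    samples, relabeled with score 0, to serve as negative examples.

    per_phone: {phone: [(features, manual_score), ...]}
    """
    rng = random.Random(seed)
    pool = [feats
            for phone, samples in per_phone.items() if phone != target
            for feats, score in samples if score >= min_score]
    picked = rng.sample(pool, min(n_neg, len(pool)))
    return per_phone[target] + [(feats, 0) for feats in picked]

data = {"AA": [([0.9, 0.1], 2), ([0.8, 0.2], 2)],
        "AE": [([0.1, 0.9], 2), ([0.2, 0.7], 1)]}
train = add_negatives(data, "AA", n_neg=1)
assert len(train) == 3 and train[-1][1] == 0
```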
### 4.4 Results
To evaluate the recipe's performance, we compare the predicted scores with
the manual scores to calculate the mean squared error (MSE) and the Pearson
correlation coefficient (PCC). The results are shown in Table 2.
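The two evaluation metrics can be computed directly (a sketch; the score vectors below are illustrative, not corpus data):

```python
import numpy as np

def mse(pred, ref):
    """Mean squared error between predicted and manual scores."""
    pred, ref = np.asarray(pred, float), np.asarray(ref, float)
    return float(np.mean((pred - ref) ** 2))

def pcc(pred, ref):
    """Pearson correlation coefficient between the two score vectors."""
    return float(np.corrcoef(pred, ref)[0, 1])

pred = [1.8, 1.2, 0.3, 1.9]
ref  = [2.0, 1.0, 0.0, 2.0]
print(round(mse(pred, ref), 3))  # → 0.045
assert pcc(pred, ref) > 0.9
```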
As a baseline system, this recipe is based on the classical NN-based GOP
method without using the latest techniques, so the results are not
particularly strong, which is in line with our expectations.
Table 2: Performance of the recipe | MSE | PCC
---|---|---
GOP value | 0.69 | 0.25
GOP-based feature | 0.16 | 0.45
## 5 Conclusions
We released an open-source corpus for pronunciation assessment tasks. The
corpus includes both child and adult speech and is manually annotated by five
experts. The annotations are at sentence-level, word-level and phoneme-level.
A Kaldi recipe is released to illustrate the use of the classic GOP method for
phoneme-level scoring. In the future, we will expand the recipe to word-level
and sentence-level scoring.
## 6 Acknowledgements
The authors would like to thank Jan Trmal for uploading this corpus to
OpenSLR. The authors would also like to thank Heinrich Dinkel and Qinghua Wu
for their helpful suggestions.
## References
* [1] H. Franco, H. Bratt, R. Rossier, V. Rao Gadde, E. Shriberg, V. Abrash, and K. Precoda, ``Eduspeak®: A speech recognition and pronunciation scoring toolkit for computer-aided language learning applications,'' _Language Testing_ , vol. 27, no. 3, pp. 401–418, 2010.
* [2] G. Li, ``The training skills of college students’ oral English based on the computer-aided language learning environment,'' in _Journal of Physics: Conference Series_ , vol. 1578, no. 1. IOP Publishing, 2020, p. 012040.
* [3] L. Gu, L. Davis, J. Tao, and K. Zechner, ``Using spoken language technology for generating feedback to prepare for the TOEFL iBT® test: a user perception study,'' _Assessment in Education: Principles, Policy & Practice_, pp. 1–14, 2020.
* [4] J. Wang, ``On optimization of non-intelligence factors in college English teaching in computer-aided language learning environments,'' in _Applied Mechanics and Materials_ , vol. 644. Trans Tech Publ, 2014, pp. 6124–6127.
* [5] K. P. McVey and J. Trinidad, ``Nuance in the noise: The complex reality of teacher shortages.'' _Bellwether Education Partners_ , 2019.
* [6] V. C.-W. Cheng, V. K.-T. Lau, R. W.-K. Lam, T.-J. Zhan, and P.-K. Chan, ``Improving English phoneme pronunciation with automatic speech recognition using voice chatbot,'' in _International Conference on Technology in Education_. Springer, 2020, pp. 88–99.
* [7] P. Lennon, ``The lexical element in spoken second language fluency,'' in _Perspectives on fluency_. University of Michigan, 2000, pp. 25–42.
* [8] W. Menzel, E. Atwell, P. Bonaventura, D. Herron, P. Howarth, R. Morton, and C. Souter, ``The ISLE corpus of non-native spoken English,'' in _Proceedings of LREC 2000: Language Resources and Evaluation Conference, vol. 2_. European Language Resources Association, 2000, pp. 957–964.
* [9] T. Oba and E. Atwell, ``Using the HTK speech recogniser to anlayse prosody in a corpus of german spoken learner's English,'' in _UCREL Technical Paper number 16. Special issue. Proceedings of the Corpus Linguistics 2003 conference_. Lancaster University, 2003, pp. 591–598.
* [10] F. Hönig, T. Bocklet, K. Riedhammer, A. Batliner, and E. Nöth, ``The automatic assessment of non-native prosody: Combining classical prosodic analysis with acoustic modelling,'' in _Thirteenth Annual Conference of the International Speech Communication Association_ , 2012.
* [11] S. Papi, E. Trentin, R. Gretter, M. Matassoni, and D. Falavigna, ``Mixtures of deep neural experts for automated speech scoring,'' _Proc. Interspeech 2020_ , pp. 3845–3849, 2020.
* [12] N. Minematsu, Y. Tomiyama, K. Yoshimoto, K. Shimizu, S. Nakagawa, M. Dantsuji, and S. Makino, ``Development of English speech database read by Japanese to support call research,'' in _Proceedings of ICA, vol. 1_. European Language Resources Association, 2004, pp. 557–560.
* [13] R. Gruhn, T. Cincarek, and S. Nakamura, ``A multi-accent non-native English database,'' in _ASJ_ , 2004, pp. 195–196.
* [14] R. Gretter, M. Matassoni, S. Bannò, and F. Daniele, ``TLT-school: a corpus of non native children speech,'' in _Proceedings of The 12th Language Resources and Evaluation Conference_ , 2020, pp. 378–385.
* [15] C. Baur, C. Chua, J. Gerlach, E. Rayner, M. Russel, H. Strik, and X. Wei, ``Overview of the 2017 spoken call shared task,'' in _Workshop on Speech and Language Technology in Education (SLaTE)_ , 2017.
* [16] G. Zhao, S. Sonsaat, A. Silpachai, I. Lucic, E. Chukharev-Hudilainen, J. Levis, and R. Gutierrez-Osuna, ``L2-ARCTIC: A non-native English speech corpus,'' _Proc. Interspeech 2018_ , pp. 2783–2787, 2018.
* [17] B.-C. Yan, M.-C. Wu, H.-T. Hung, and B. Chen, ``An end-to-end mispronunciation detection system for L2 English speech leveraging novel anti-phone modeling,'' in _Proc. Interspeech 2020_ , 2020, pp. 3032–3036.
* [18] Y. Feng, G. Fu, Q. Chen, and K. Chen, ``SED-MDD: Towards sentence dependent end-to-end mispronunciation detection and diagnosis,'' in _IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2020, pp. 3492–3496.
* [19] Y. Chen, J. Hu, and X. Zhang, ``Sell-corpus: an open source multiple accented chinese-english speech corpus for l2 english learning assessment,'' in _IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2019, pp. 7425–7429.
* [20] K. Li, X. Qian, and H. Meng, ``Mispronunciation detection and diagnosis in L2 English speech using multidistribution deep neural networks,'' _IEEE/ACM Transactions on Audio, Speech, and Language Processing_ , vol. 25, no. 1, pp. 193–207, 2016.
* [21] M. Li, S. Zhang, K. Li, A. M. Harrison, W.-K. Lo, and H. Meng, ``Design and collection of an L2 English corpus with a suprasegmental focus for chinese learners of English.'' in _ICPhS_ , 2011, pp. 1210–1213.
  * [22] H. Yang and N. Wei, _Construction and data analysis of a Chinese learner spoken English corpus_. Shanghai Foreign Language Education Press, 2005.
* [23] D. Luo, X. Yang, and L. Wang, ``Improvement of segmental mispronunciation detection with prior knowledge extracted from large L2 speech corpus,'' in _Twelfth Annual Conference of the International Speech Communication Association_ , 2011.
* [24] K. Li, X. Qian, S. Kang, and H. Meng, ``Lexical stress detection for L2 English speech using deep belief networks.'' in _Interspeech_ , 2013, pp. 1811–1815.
* [25] K. Li, X. Wu, and H. Meng, ``Intonation classification for L2 English speech using multi-distribution deep neural networks,'' _Computer Speech & Language_, vol. 43, pp. 18–33, 2017.
* [26] K. Li, S. Mao, X. Li, Z. Wu, and H. Meng, ``Automatic lexical stress and pitch accent detection for L2 English speech using multi-distribution deep neural networks,'' _Speech Communication_ , vol. 96, pp. 28–36, 2018.
* [27] K. Nishina, Y. Yoshimura, I. Saita, Y. Takai, K. Maekawa, N. Minematsu, S. Nakagawa, S. Makino, and M. Dantsuji, ``Development of Japanese speech database read by non-native speakers for constructing call system,'' in _Proc. ICA_ , 2004, pp. 561–564.
* [28] N. F. Chen, R. Tong, D. Wee, P. Lee, B. Ma, and H. Li, ``iCALL corpus: Mandarin chinese spoken by non-native speakers of european descent,'' in _Sixteenth Annual Conference of the International Speech Communication Association_ , 2015.
* [29] G. Shang and S. Zhao, ``Singapore mandarin: Its positioning, internal structure and corpus planning,'' in _Paper presented atthe 22nd Annual Conference of the Southeast Asian Linguistics Society, Agay, France_ , 2012.
* [30] R. Weide, ``The CMU pronunciation dictionary.'' Carnegie Mellon University, 1998.
* [31] W. Hu, Y. Qian, F. K. Soong, and Y. Wang, ``Improved mispronunciation detection with deep neural network trained acoustic models and transfer learning based logistic regression classifiers,'' _Speech Communication_ , vol. 67, pp. 154–166, 2015.
* [32] M. Mohri, F. Pereira, and M. Riley, ``Speech recognition with weighted finite-state transducers,'' in _Springer handbook of speech processing_. Springer, 2008, pp. 559–584.
* [33] H. Drucker, C. J. Burges, L. Kaufman, A. Smola, V. Vapnik _et al._ , ``Support vector regression machines,'' _Advances in neural information processing systems_ , vol. 9, pp. 155–161, 1997.
L_{a_{2}^{-1}a_{3}^{-1}}R_{a_{2}}\widetilde{v}_{3}+L_{a_{2}^{-1}}\widetilde{v}_{2},R_{h_{2}^{-1}}w_{2}+L_{h_{2}}R_{h_{1}^{-1}h_{2}^{-1}}w_{1}\rangle-(v\leftrightarrow
w).\end{split}$
Therefore, by comparing them we see
$\bar{\omega}^{\mathfrak{h}}-\Phi_{2}^{*}\Omega=\delta\beta$ as desired. ∎
###### Remark 4.9.
It was proved in [88] that for any Manin triple
$(\mathfrak{g},\mathfrak{h}_{+},\mathfrak{h}_{-})$ there is a $2$-shifted
Lagrangian correspondence
$\begin{array}{ccc}pt&\longrightarrow&\mathcal{B}H_{+}\\ \downarrow&&\downarrow\\ \mathcal{B}H_{-}&\longrightarrow&\mathcal{B}G\end{array}$
This is closely related to our result in this section.
## 5. Double Lie group models of $\mathcal{B}G$
The simplicial picture introduced in the previous sections is not the only
available approach to describe $\mathcal{B}G$. In this section we will use the
language of double Lie groups to define other models for $\mathcal{B}G$ and
its symplectic structure.
### 5.1. Strict Lie 2-groups
A strict Lie $2$-group [13] is a group object in the category of Lie groupoids
and (strict) Lie groupoid morphisms; that is, it is a Lie groupoid
$G_{1}\Rightarrow G_{0}$ equipped with a multiplication functor, an
inverse functor, and an identity functor, satisfying associativity (strictly)
and the other expected group axioms. It is well known (and not hard to see)
that a strict Lie $2$-group $G_{1}\Rightarrow G_{0}$ gives rise to a double
Lie group (defined in (4.1))
$\begin{array}{ccc}G_{1}&\rightrightarrows&pt\\ \downdownarrows&&\downdownarrows\\ G_{0}&\rightrightarrows&pt\end{array}$
Such a strict Lie 2-group is also a special case of a Lie 2-group as defined
earlier in Definition 2.1 using simplicial manifolds. A strict Lie
2-group $G_{1}\Rightarrow G_{0}$ gives rise to a Lie 2-group
$\cdots\;G_{0}\times G_{0}\times_{m_{0},G_{0},{\mathsf{t}}}G_{1}\;\begin{subarray}{c}\longrightarrow\\ \longrightarrow\\ \longrightarrow\end{subarray}\;G_{0}\rightrightarrows pt,$
where $m_{0}:G_{0}\times G_{0}\to G_{0}$ is the 0-th level of the
multiplication functor. We refer to [104] for more details.
### 5.2. The de Rham triple complex of a double Lie group
Recall from Section 4.1 that a double Lie group has an associated bisimplicial
manifold $\mathcal{G}_{\bullet,\bullet}$. Therefore differential forms on
$\mathcal{G}_{\bullet,\bullet}$ live in the de Rham triple complex
$(\Omega^{\bullet}(\mathcal{G}_{\bullet,\bullet}),d,\delta^{v},\delta^{h})$
where $d:\Omega^{k}(\mathcal{G}_{j,i})\to\Omega^{k+1}(\mathcal{G}_{j,i})$ is
the usual de Rham differential and
$\delta^{v}:\Omega^{k}(\mathcal{G}_{j,i})\to\Omega^{k}(\mathcal{G}_{j+1,i}),\
\delta^{h}:\Omega^{k}(\mathcal{G}_{j,i})\to\Omega^{k}(\mathcal{G}_{j,i+1})$
are the simplicial differentials
$\delta^{h}=\sum_{l=0}^{i+1}d^{h*}_{l},\quad\delta^{v}=\sum_{l=0}^{j+1}d^{v*}_{l},\quad\text{
and }\quad\widetilde{D}=\delta^{h}+(-1)^{i}\delta^{v}+(-1)^{i+j}d$
is the differential on the total complex.
###### Definition 5.1.
A $(q,p)$-shifted $k$-form on a double Lie group
$\mathcal{G}_{\bullet,\bullet}$ is
$\alpha_{\bullet,\bullet}=\sum_{j=0}^{q}\sum_{i=0}^{p}\alpha_{j,i}\quad\text{with}\quad\alpha_{j,i}\in\Omega^{k+p+q-i-j}(\mathcal{G}_{j,i}).$
We say that $\alpha_{\bullet,\bullet}$ is closed if
$\tilde{D}\alpha_{\bullet,\bullet}=0$.
###### Remark 5.2.
Following [65] (see also [67]), we call the pair
$(\mathcal{G}_{\bullet,\bullet},\omega_{\bullet,\bullet})$ a symplectic double
Lie group if $\omega_{\bullet,\bullet}$ is a $(1,1)$-shifted $2$-form
satisfying
$\omega_{\bullet,\bullet}=\omega_{1,1}\in\Omega^{2}(\mathcal{G}_{1,1}),\quad\widetilde{D}\omega_{\bullet,\bullet}=0\quad\text{and}\quad\omega^{\sharp}_{1,1}:T^{*}\mathcal{G}_{1,1}\xrightarrow{\sim}T\mathcal{G}_{1,1}.$
As we will see, the models presented in this section are not symplectic double
Lie groups. Therefore a more general definition (that we will not introduce)
seems to be missing.
### 5.3. The models
If $G$ is a Lie group then the unit groupoid of $G$ as a strict Lie 2-group
gives rise to a double Lie group as described in Section 5.1. More concretely,
we have that
(5.1)
$\begin{array}{ccc}G&\rightrightarrows&pt\\ \downdownarrows&&\downdownarrows\\ G&\rightrightarrows&pt\end{array}$
is a double Lie group, which we denote by $G_{\bullet,\bullet}$. The vertical
structure is given by the unit groupoid of $G$ and horizontal multiplication
is given by the multiplication on the Lie group $G$.
If $(\mathfrak{g},\langle\cdot,\cdot\rangle)$ is a quadratic Lie algebra,
Theorem 3.1 states that $(NG_{\bullet},\Omega_{\bullet})$ is a $2$-shifted
symplectic Lie $1$-group. Clearly we can define a $(0,2)$-shifted $2$-form
$\Omega_{\bullet,\bullet}$ on $G_{\bullet,\bullet}$ by
$\Omega_{\bullet,\bullet}=\begin{pmatrix}0&0&0\\ \Omega&-\Theta&0\end{pmatrix},\quad\text{with}\quad\Omega_{0,2}=\Omega\in\Omega^{2}(G^{\times 2})\text{ and }\Omega_{0,1}=-\Theta\in\Omega^{3}(G).$
###### Proposition 5.3.
The $(0,2)$-shifted $2$-form $\Omega_{\bullet,\bullet}$ is closed.
###### Proof.
This follows directly from the fact that $\Omega_{\bullet}$ is closed in
$NG_{\bullet}$, $\delta^{h}|_{j=0}$ is the simplicial differential of
$NG_{\bullet}$, and $\delta^{v}|_{i=0}=0$ since
$s^{v}=t^{v}=\operatorname{id}$. Hence
$\widetilde{D}(\Omega_{\bullet,\bullet})=(-1)^{i}\delta^{v}(\Omega_{0,\bullet})+D(\Omega_{\bullet})=0.$
∎
For a given Lie group $G$ we define the Lie groupoid $\Omega
G\rightrightarrows P_{e}G$ with structure maps
(5.2) $\displaystyle s(\tau)(t)=\tau(\frac{t}{2}),\quad
t(\tau)(t)=\tau(1-\frac{t}{2}),\quad i(\tau)(t)=\tau(1-t),$ (5.7)
$\displaystyle
m(\tau_{1},\tau_{2})(t)=\left\\{\begin{array}[]{ll}\tau_{2}(t)&t\in[0,\frac{1}{2}],\\\
\tau_{1}(t)&t\in[\frac{1}{2},1],\end{array}\right.\quad\text{and}\quad
u(\gamma)(t)=\left\\{\begin{array}[]{ll}\gamma(2t)&t\in[0,\frac{1}{2}],\\\
\gamma(2-2t)&t\in[\frac{1}{2},1].\end{array}\right.$
As in the previous section, $\Omega G$ and $P_{e}G$ are completed in an
appropriate Sobolev norm so that the structure maps are smooth. Moreover, it
is not hard to verify that $\Omega G$ and $P_{e}G$ are infinite-dimensional
Lie groups under point-wise multiplication, which makes $\Omega
G\rightrightarrows P_{e}G$ into a strict Lie 2-group.
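To make the structure maps (5.2) and (5.7) concrete, here is a minimal numerical sketch, assuming the toy case $G=(\mathbb{R},+)$ with loops and paths represented as Python callables; all function names are illustrative. It checks $s\circ i=t$, $i\circ i=\operatorname{id}$, $s(u(\gamma))=t(u(\gamma))=\gamma$, and the compatibility of $m$ with source and target; the unit uses $\gamma(2-2t)$ on $[\frac{1}{2},1]$, which keeps $u(\gamma)$ continuous.

```python
import math

# Structure maps of Omega G => P_e G from (5.2) and (5.7), in the toy case
# G = (R, +) with unit e = 0.  Loops/paths are callables [0, 1] -> R.

def src(tau):       # s(tau)(t) = tau(t/2)
    return lambda t: tau(t / 2)

def tgt(tau):       # t(tau)(t) = tau(1 - t/2)
    return lambda t: tau(1 - t / 2)

def inv(tau):       # i(tau)(t) = tau(1 - t)
    return lambda t: tau(1 - t)

def unit(gamma):    # u(gamma): traverse gamma, then traverse it backwards
    return lambda t: gamma(2 * t) if t <= 0.5 else gamma(2 - 2 * t)

def mult(tau1, tau2):  # defined when src(tau1) == tgt(tau2)
    return lambda t: tau2(t) if t <= 0.5 else tau1(t)

grid = [k / 200 for k in range(201)]
close = lambda f, g: all(abs(f(t) - g(t)) < 1e-12 for t in grid)

tau = lambda t: math.sin(2 * math.pi * t)   # a based loop: tau(0) = tau(1) = 0
gamma = lambda t: t * (1 - t)               # a path starting at e = 0

assert close(src(inv(tau)), tgt(tau))       # s(i(tau)) = t(tau)
assert close(inv(inv(tau)), tau)            # i is an involution
assert close(src(unit(gamma)), gamma)       # s(u(gamma)) = gamma
assert close(tgt(unit(gamma)), gamma)       # t(u(gamma)) = gamma

# A composable pair: src(tau1) must equal tgt(tau2).
tau2 = tau
tau1 = lambda t: tau2(1 - t) if t <= 0.5 else math.sin(2 * math.pi * t)
assert close(src(tau1), tgt(tau2))          # composability
m = mult(tau1, tau2)
assert close(src(m), src(tau2)) and close(tgt(m), tgt(tau1))
```

In the nonabelian case one would replace the callables by maps into $G$ and compare pointwise in $G$; the groupoid identities are checked the same way.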
###### Proposition 5.4.
For a Lie group $G$ we have the double Lie group
${\mathbb{G}}_{\bullet,\bullet}$ given by the square
(5.8) $\begin{array}[]{ccc}\Omega G&\rightrightarrows&pt\\\ \downdownarrows&&\downdownarrows\\\ P_{e}G&\rightrightarrows&pt\end{array}$
with vertical structure maps defined by (5.2) and (5.7) and horizontal
multiplication given by
$m^{h}(\tau_{1},\tau_{2})(t)=\tau_{1}(t)\tau_{2}(t),\quad\forall\tau_{1},\tau_{2}\in\Omega
G\;\text{or}\;P_{e}G.$
###### Proof.
The fact that ${\mathbb{G}}_{\bullet,\bullet}$ is a double Lie group, i.e.
that the vertical source and target are group morphisms and that the
multiplications commute, follows by inspection. ∎
###### Remark 5.5.
When $G$ is connected and simply connected, the quotient stack $[P_{e}G/\Omega
G]\cong G$ is representable. Thus, the strict Lie 2-group $\Omega
G\rightrightarrows P_{e}G$ is Morita equivalent to the Lie 1-group
$NG_{\bullet}$. We may hence view both (5.1) and (5.8) as double Lie group
models for $\mathcal{B}G$. More explicitly, the Morita equivalence is given
through
(5.9) $\begin{array}[]{ccc}ev_{1,1}:\Omega G\to G,&&ev_{0,1}:P_{e}G\to G,\\\
ev_{1,1}(\tau)=\tau(\frac{1}{2}),&&ev_{0,1}(\gamma)=\gamma(1),\end{array}$
which may be further extended to a double Lie group morphism
$ev_{\bullet,\bullet}:{\mathbb{G}}_{\bullet,\bullet}\to G_{\bullet,\bullet}$.
When $(\mathfrak{g},\langle\cdot,\cdot\rangle)$ is a quadratic Lie algebra,
the double Lie group ${\mathbb{G}}_{\bullet,\bullet}$ is endowed with
Segal's $2$-form $\omega\in\Omega^{2}(\Omega
G)=\Omega^{2}({\mathbb{G}}_{1,1})$ defined in (3.9). But $\omega$ cannot be
multiplicative with respect to the group structure (otherwise it would be an
example of a symplectic group). Therefore, in order to obtain a closed form on
${\mathbb{G}}_{\bullet,\bullet}$ we need to introduce a new term. Define the
$1$-form
(5.10) $\eta_{(\tau_{1},\tau_{2})}((a_{1},a_{2}))=\int_{0}^{1}\langle
R_{\tau_{2}(t)^{-1}}\tau_{2}^{\prime}(t),\widehat{a}_{1}(t)\rangle dt\
\in\Omega^{1}(\Omega G^{\times 2})=\Omega^{1}({\mathbb{G}}_{1,2}),$
where $\tau_{i}\in\Omega G,\ a_{i}\in T_{\tau_{i}}\Omega G$, and
$\widehat{a}_{1}(t)=L_{\tau_{1}(t)^{-1}}a_{1}(t).$
###### Proposition 5.6.
The double Lie group ${\mathbb{G}}_{\bullet,\bullet}$ has a closed
$(1,2)$-shifted $1$-form $\omega_{\bullet,\bullet}$ defined by
$\omega_{\bullet,\bullet}=\begin{pmatrix}-\eta&\omega&0\\\
0&0&0\end{pmatrix}\quad\text{with}\quad\omega_{1,2}=-\eta\in\Omega^{1}({\mathbb{G}}_{1,2})\
\text{ and }\ \omega_{1,1}=\omega\in\Omega^{2}({\mathbb{G}}_{1,1}).$
###### Proof.
The result will follow from Theorem 5.8 which states that
$\omega_{\bullet,\bullet}=-\frac{1}{2}\big{(}\widetilde{D}(\alpha_{\bullet,\bullet})+ev^{*}_{\bullet,\bullet}\Omega_{\bullet,\bullet}\big{)},$
and the fact that $\Omega_{\bullet,\bullet}$ is closed by Proposition 5.3. ∎
It is quite surprising that in the double picture the finite and infinite
models are bridged by a $1$-form (5.10), instead of a $2$-form as in Theorem
3.15 of the simplicial picture. We believe that the term $\eta$ should be
related to the descent equations computed in [5].
### 5.4. The equivalence
Given a Lie group with a quadratic Lie algebra, we have constructed two
different double Lie groups endowed with differential forms,
$(G_{\bullet,\bullet},\Omega_{\bullet,\bullet})$ and
$({\mathbb{G}}_{\bullet,\bullet},\omega_{\bullet,\bullet})$. Here we will show
that they are equivalent.
As stated in Remark 5.5, there is a double Lie group morphism
$ev_{\bullet,\bullet}:{\mathbb{G}}_{\bullet,\bullet}\to G_{\bullet,\bullet}$
given by (5.9). As in the simplicial case, the forms
$\omega_{\bullet,\bullet}$ and
$ev^{*}_{\bullet,\bullet}\Omega_{\bullet,\bullet}$ do not agree. So we need to
introduce another form on ${\mathbb{G}}_{\bullet,\bullet}$, which we denote
$\alpha_{\bullet,\bullet}$. The $(1,2)$-shifted $0$-form
$\alpha_{\bullet,\bullet}$ is defined by
$\alpha_{\bullet,\bullet}=\begin{pmatrix}0&-\alpha&0\\\
-\operatorname{\mathbb{T}}(\Omega)&-\operatorname{\mathbb{T}}(\Theta)&0\end{pmatrix}\quad\text{where}\quad\alpha=\int_{0}^{1}\langle
L_{\tau(t)^{-1}}\tau^{\prime}(t),L_{\tau(t)^{-1}}v(t)\rangle\in\Omega^{1}(\Omega
G)$
and $\operatorname{\mathbb{T}}(\Omega)\in\Omega^{1}(P_{e}G^{\times
2})=\Omega^{1}((P_{e}G)^{\times 2})=\Omega^{1}({\mathbb{G}}_{0,2})$, and
$\operatorname{\mathbb{T}}(\Theta)\in\Omega^{2}(P_{e}G)=\Omega^{2}({\mathbb{G}}_{0,1})$
are the transgressions of the forms on $NG_{\bullet}$ introduced in (3.1).
###### Remark 5.7.
The $1$-form $\alpha\in\Omega^{1}(\Omega G)$ also has a nice interpretation in
terms of the $S^{1}$-action on $\Omega G$. The based loop group $\Omega G$
carries an $S^{1}$-action given by rotation of loops and we denote its
infinitesimal generator by $X_{S^{1}}(\tau)=\tau^{\prime}.$ Then the fixed
points of the action correspond to the critical points of the function
$\alpha(X_{S^{1}})$.
###### Theorem 5.8.
The evaluation map $ev_{\bullet,\bullet}:{\mathbb{G}}_{\bullet,\bullet}\to
G_{\bullet,\bullet}$ satisfies
$-\omega_{\bullet,\bullet}-\frac{1}{2}ev^{*}_{\bullet,\bullet}\Omega_{\bullet,\bullet}=\widetilde{D}(\frac{1}{2}\alpha_{\bullet,\bullet}).$
###### Proof.
In order to prove this result, we need to show the following equality between
matrices,
$\begin{pmatrix}0&0&0&0\\\ 0&\eta&-\omega&0\\\
0&-\frac{1}{2}ev_{0,2}^{*}\Omega&\frac{1}{2}ev^{*}_{0,1}\Theta&0\end{pmatrix}=\frac{1}{2}\begin{pmatrix}0&0&\delta^{v}\alpha&0\\\
0&-\delta^{h}\alpha-\delta^{v}\operatorname{\mathbb{T}}(\Omega)&\delta^{v}\operatorname{\mathbb{T}}(\Theta)-d\alpha&0\\\
-\delta^{h}\operatorname{\mathbb{T}}(\Omega)&-\delta^{h}\operatorname{\mathbb{T}}(\Theta)-d\operatorname{\mathbb{T}}(\Omega)&d\operatorname{\mathbb{T}}(\Theta)&0\end{pmatrix}.$
For the equality in the first row we need to compute
$\delta^{v}\alpha\in\Omega^{1}({\mathbb{G}}_{2,1})$ and show that it is zero.
Recall that ${\mathbb{G}}_{2,1}=\Omega G\times_{P_{e}G}\Omega G$, hence we
pick $\tau_{1},\tau_{2}\in\Omega G$ with
$\tau_{1}(\frac{t}{2})=\tau_{2}(1-\frac{t}{2})$ and $a_{i}\in
T_{\tau_{i}}\Omega G$ also with $a_{1}(\frac{t}{2})=a_{2}(1-\frac{t}{2})$.
Then
$\begin{split}&(\delta^{v}\alpha)_{(\tau_{1},\tau_{2})}(a_{1},a_{2})=\alpha_{\tau_{2}}(a_{2})-\alpha_{m^{v}(\tau_{1},\tau_{2})}(Tm^{v}(a_{1},a_{2}))+\alpha_{\tau_{1}}(a_{1})\\\
=&\alpha_{\tau_{2}}(a_{2})+\alpha_{\tau_{1}}(a_{1})-\int_{0}^{\frac{1}{2}}\langle
L_{\tau_{2}(t)^{-1}}\tau_{2}^{\prime}(t),L_{\tau_{2}(t)^{-1}}a_{2}(t)\rangle-\int_{\frac{1}{2}}^{1}\langle
L_{\tau_{1}(t)^{-1}}\tau_{1}^{\prime}(t),L_{\tau_{1}(t)^{-1}}a_{1}(t)\rangle\\\
=&\int_{\frac{1}{2}}^{1}\langle
L_{\tau_{2}(t)^{-1}}\tau_{2}^{\prime}(t),L_{\tau_{2}(t)^{-1}}a_{2}(t)\rangle+\int_{0}^{\frac{1}{2}}\langle
L_{\tau_{1}(t)^{-1}}\tau_{1}^{\prime}(t),L_{\tau_{1}(t)^{-1}}a_{1}(t)\rangle=0.\end{split}$
In order to verify the first equality in the middle row, we need to make
explicit computations of the differentials. We start by computing
$\delta^{h}\alpha\in\Omega^{1}({\mathbb{G}}_{1,2})$. Since
${\mathbb{G}}_{1,2}=\Omega G^{\times 2}$, we pick $\tau_{1},\tau_{2}\in\Omega
G$ and $a_{i}\in T_{\tau_{i}}\Omega G$. Then
(5.11)
$\begin{split}(\delta^{h}\alpha)_{(\tau_{1},\tau_{2})}((a_{1},a_{2}))=&\
\alpha_{\tau_{2}}(a_{2})-\alpha_{\tau_{1}\tau_{2}}(R_{\tau_{2}}a_{1}+L_{\tau_{1}}a_{2})+\alpha_{\tau_{1}}(a_{1})\\\
=&-\int_{0}^{1}\langle
L_{\tau_{1}^{-1}(t)}\tau_{1}^{\prime}(t),R_{\tau_{2}^{-1}(t)}a_{2}(t)\rangle-\eta_{(\tau_{1},\tau_{2})}((a_{1},a_{2})).\end{split}$
An easy computation shows that
$\delta^{v}\operatorname{\mathbb{T}}(\Omega)=\operatorname{\mathbb{T}}(\Omega)_{|\Omega
G}$ and by Proposition D.2 and the formula (3.2) we get that
(5.12)
$\begin{split}\delta^{v}\operatorname{\mathbb{T}}(\Omega)_{(\tau_{1},\tau_{2})}((a_{1},a_{2}))=&\operatorname{\mathbb{T}}(\Omega)_{(\tau_{1},\tau_{2})}((a_{1},a_{2}))=\int_{0}^{1}\Omega_{(\tau_{1},\tau_{2})}((\tau_{1}^{\prime},\tau_{2}^{\prime}),(a_{1},a_{2}))\\\
=&\int_{0}^{1}\langle
L_{\tau_{1}^{-1}(t)}\tau_{1}^{\prime}(t),R_{\tau_{2}^{-1}(t)}a_{2}(t)\rangle-\eta_{(\tau_{1},\tau_{2})}((a_{1},a_{2})).\end{split}$
Hence combining (5.11) and (5.12), we obtain
$\frac{1}{2}(-\delta^{h}\alpha-\delta^{v}\operatorname{\mathbb{T}}(\Omega))=\eta.$
For the second term in the middle row, we use again the fact that
$\delta^{v}\operatorname{\mathbb{T}}(\Theta)=\operatorname{\mathbb{T}}(\Theta)_{|\Omega
G}$, and by Lemma 3.14 we obtain that
$-\omega=-\omega^{P}|_{\Omega
G}=\frac{1}{2}(\operatorname{\mathbb{T}}(\Theta)-d\alpha^{P})|_{\Omega
G}=\frac{1}{2}(\delta^{v}\operatorname{\mathbb{T}}(\Theta)-d\alpha).$
Finally, the equality on the last row follows directly from the properties of
the transgression given in Propositions D.1 and D.3, the fact that
$ev_{0,1}(\gamma)=\gamma(1)$ and
$ev_{0,2}(\gamma_{1},\gamma_{2})=(\gamma_{1}(1),\gamma_{2}(1))$ and that
$D\Omega_{\bullet}=0$. Explicitly
$\displaystyle\delta^{h}\operatorname{\mathbb{T}}(\Omega)$ $\displaystyle=$
$\displaystyle-\operatorname{\mathbb{T}}(\delta\Omega)=0,$
$\displaystyle-\frac{1}{2}\delta^{h}\operatorname{\mathbb{T}}(\Theta)-\frac{1}{2}d\operatorname{\mathbb{T}}(\Omega)$
$\displaystyle=$
$\displaystyle-\frac{1}{2}\operatorname{\mathbb{T}}(d\Omega)-\frac{1}{2}ev_{1}^{*}\Omega+\frac{1}{2}\operatorname{\mathbb{T}}(d\Omega)=-\frac{1}{2}ev_{0,2}^{*}\Omega,$
$\displaystyle\frac{1}{2}d\operatorname{\mathbb{T}}(\Theta)$ $\displaystyle=$
$\displaystyle\frac{1}{2}ev^{*}_{1}\Theta-\frac{1}{2}\operatorname{\mathbb{T}}(d\Theta)=\frac{1}{2}ev^{*}_{0,1}\Theta.$
Therefore the two matrices coincide in all entries, and we have proved
the statement. ∎
## Appendix A Sobolev spaces
Here we recall some analytic facts about Sobolev spaces used in this article
(see also [29, Sect.4], [11, Sect.14]). For a finite-dimensional compact
manifold $M$, possibly with boundary and corners, let $H_{r}(M)$ denote the
order $r$ Sobolev space of functions. These functions and their weak
derivatives are $L^{2}$-functions. The $C^{\infty}$ functions are dense in
$H_{r}(M)$. The point-wise multiplication makes $H_{r}(M)$ a Banach algebra
when $r-\frac{1}{2}\dim M>0$ [2, Theorem 4.39]. If $Z\subset M$ is a
submanifold, and $r-\frac{1}{2}\operatorname{codim}(Z)>0$, then the
restriction of continuous functions from $M$ to $Z$ extends to a continuous
linear map $H_{r}(M)\to H_{r-\frac{1}{2}\operatorname{codim}(Z)}(Z)$, with a
continuous right inverse.
If $N$ is another finite-dimensional manifold and $r-\frac{1}{2}\dim(M)>0$,
one defines spaces $\operatorname{Hom}_{r}(M,N)$ of maps from $M$ to $N$ of
Sobolev class $r$ by choosing local charts for $N$. In particular, if $G$ is a
finite-dimensional Lie group and $r-\frac{1}{2}\dim M>0$, then
$\operatorname{Hom}_{r}(M,G)$ is a Banach (even Hilbert) Lie group under
point-wise multiplication [29, Sect.4].
We are particularly interested in the loop group
$LG:=\operatorname{Hom}_{r}(S^{1},G)$ and the based loop group $\Omega
G:=\\{\tau\in LG|\tau(0)=e\\}$ for a fixed $r\in\mathbb{Z}^{\geq 1}$. Besides
yielding a Banach manifold, working with the Sobolev completion of based
loops is also convenient because our constructions of face and degeneracy
maps via concatenation sometimes produce maps that are not smooth but do lie
in the Sobolev completion.
Given a domain $U$ in $\mathbb{R}^{n}$, the map sending $f\in C^{\infty}(U)$
to the evaluation of its $m$-th derivative $f^{(m)}(x)$ at a point $x\in U$
extends to a bounded linear (thus smooth) map on $H_{r}(U)$ if $r>m$.
Therefore, the de Rham differentiation operator defines a bounded linear
(thus smooth) map $H_{r+1}(U)\to H_{r}(U)$.
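As a concrete illustration (not taken from the text), the $H_{r}$-norm of a function on $S^{1}=\mathbb{R}/\mathbb{Z}$ can be computed from its Fourier coefficients via $\|f\|_{H_{r}}^{2}=\sum_{k}(1+(2\pi k)^{2})^{r}|c_{k}|^{2}$; this norm convention and the helper name below are assumptions of the sketch. For $f(t)=\sin(2\pi t)$ and $r=1$ the exact value is $(1+4\pi^{2})/2$.

```python
import numpy as np

def sobolev_norm_sq(f_samples, r):
    """H_r-norm squared on S^1 = R/Z from equispaced samples, via FFT:
    ||f||_{H_r}^2 = sum_k (1 + (2*pi*k)^2)^r |c_k|^2, with c_k the Fourier
    coefficients of f.  (Illustrative norm convention.)"""
    n = len(f_samples)
    c = np.fft.fft(f_samples) / n                  # Fourier coefficients
    k = np.fft.fftfreq(n, d=1.0 / n)               # integer frequencies
    return float(np.sum((1 + (2 * np.pi * k) ** 2) ** r * np.abs(c) ** 2))

t = np.arange(4096) / 4096
f = np.sin(2 * np.pi * t)
exact = (1 + 4 * np.pi ** 2) / 2                   # = int f^2 + int (f')^2
assert abs(sobolev_norm_sq(f, 1) - exact) < 1e-8
```

The weight $(1+(2\pi k)^{2})^{r}$ makes visible why point-wise multiplication is bounded only for $r$ large enough relative to the dimension: products spread Fourier mass to high frequencies, which the weight penalizes.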
## Appendix B Universal integration $\int\mathfrak{g}_{\bullet}$ and
truncations
Using the idea of Sullivan's spatial realisation and a suitable truncation,
Henriques shows in [54] a procedure to integrate a Lie $n$-algebra, i.e. an
$n$-term $L_{\infty}$-algebra, to a Lie $n$-group. (Notice that the
truncation procedure creates a possible obstruction to this integration
procedure: although a finite-dimensional Lie algebra can always be integrated
to a Lie group, not every Lie $n$-algebra can be integrated to a Lie
$n$-group.) In general, the Lie $n$-groups obtained by this method are
infinite dimensional. Here we recall his construction in the special case of a
Lie algebra.
Let $\mathfrak{g}=(\mathfrak{g},[\cdot,\cdot])$ be a Lie algebra and
$\operatorname{CE}(\mathfrak{g})$ its Chevalley-Eilenberg differential
complex, that is
$\operatorname{CE}(\mathfrak{g})=\wedge^{\bullet}\mathfrak{g}^{*}$ with
differential
$d_{CE}\xi(x_{1},\dots,x_{k})=\sum_{i<j}(-1)^{i+j-1}\xi([x_{i},x_{j}],x_{1},\dots,\hat{x}_{i},\dots,\hat{x}_{j},\dots,x_{k}).$
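The formula for $d_{CE}$ can be implemented directly and checked to square to zero. The sketch below is an illustration, using a hypothetical $4$-dimensional solvable Lie algebra with brackets $[e_{0},e_{1}]=e_{1}$, $[e_{0},e_{2}]=e_{2}$, $[e_{0},e_{3}]=2e_{3}$, $[e_{1},e_{2}]=e_{3}$ (one checks that these satisfy the Jacobi identity); all names are illustrative.

```python
from itertools import combinations

# Chevalley-Eilenberg differential, following the text's sign (-1)^{i+j-1}
# with 1-based indices i < j.  Test bed: a 4-dimensional solvable Lie algebra
# (an illustrative choice) with [e0,e1]=e1, [e0,e2]=e2, [e0,e3]=2e3,
# [e1,e2]=e3.  A k-form is a dict {sorted index tuple: value}.

DIM = 4
BRK = {(0, 1): {1: 1.0}, (0, 2): {2: 1.0}, (0, 3): {3: 2.0}, (1, 2): {3: 1.0}}

def bracket(i, j):
    """[e_i, e_j] as a dict {basis index: coefficient}."""
    if i == j:
        return {}
    if (i, j) in BRK:
        return BRK[(i, j)]
    return {m: -c for m, c in BRK.get((j, i), {}).items()}

def ev(xi, idx):
    """Alternating evaluation of the form xi on basis vectors e_{idx}."""
    if len(set(idx)) < len(idx):
        return 0.0
    sign, idx = 1, list(idx)
    for a in range(len(idx)):            # selection sort, tracking the sign
        b = idx.index(min(idx[a:]), a)
        if b != a:
            idx[a], idx[b] = idx[b], idx[a]
            sign = -sign
    return sign * xi.get(tuple(idx), 0.0)

def d_ce(xi, k):
    """d_CE of a k-form: sum_{i<j} (-1)^{i+j-1} xi([x_i,x_j], x_1,..^i..^j..)."""
    out = {}
    for idx in combinations(range(DIM), k + 1):
        val = 0.0
        for i, j in combinations(range(k + 1), 2):     # 0-based positions
            rest = [idx[a] for a in range(k + 1) if a not in (i, j)]
            for m, c in bracket(idx[i], idx[j]).items():
                val += (-1) ** (i + j + 1) * c * ev(xi, [m] + rest)
        if abs(val) > 1e-15:
            out[idx] = val
    return out

xi = {(0,): 1.0, (1,): -2.0, (2,): 0.5, (3,): 3.0}     # a generic 1-form
ddxi = d_ce(d_ce(xi, 1), 2)
assert not ddxi                                        # d_CE^2 = 0
```

That $d_{CE}^{2}=0$ here is exactly the Jacobi identity of the chosen brackets; changing one structure constant breaks the assertion.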
The universal object integrating it, $\int\mathfrak{g}_{\bullet}$, was
constructed in [52, 54] in the following way (we also follow the treatment in
[54, 29] of the differential structure, which is also similar to that in
[19]). For any $k\in\mathbb{Z}^{\geq 0}$ consider the standard $k$-simplex
$\Delta^{k}=\\{(t_{0},\dots,t_{k})\in\mathbb{R}^{k+1}|\sum_{i=0}^{k}t_{i}=1\\}.$
Denote by $\Omega^{\bullet}(\Delta^{k})$ the subspace of de Rham forms on
$\Delta^{k}$ of Sobolev class $r$ (for it to be a Banach algebra we need
$r>\frac{1}{2}k$ as in Appendix A), defined by
$\Omega^{\bullet}(\Delta^{k}):=\\{\alpha|\
\alpha=\sum_{I=i_{1}<\dots<i_{k}}\alpha^{I}dt_{I}\text{ where
}\alpha^{I}\;\text{is of Sobolev class}\;r,\text{ and }d\alpha\;\text{is also
in this form}\\}.$
This makes $(\Omega^{\bullet}(\Delta^{k}),d)$ into a differential graded
algebra, which we abbreviate as d.g.a. Denote by
$\operatorname{Hom}_{d.g.a.}$ the set of morphisms between d.g.a.s. Then
$\operatorname{Hom}_{d.g.a.}(\operatorname{CE}(\mathfrak{g}),\Omega^{\bullet}(\Delta^{k}))$
carries a natural Banach manifold structure [54, Theorem 5.10]. Notice that
$C^{\infty}$ functions do not form a Banach space, but their completion under
a Sobolev norm is a Banach space.
It can be helpful to view elements in
$\operatorname{Hom}_{d.g.a.}\big{(}\operatorname{CE}(\mathfrak{g}),\Omega^{\bullet}(\Delta^{k})\big{)}$
as Lie algebroid morphisms from $T\Delta^{k}$ to $\mathfrak{g}$, and we define
$\operatorname{Hom}_{\operatorname{algd}}(T\Delta^{k},\mathfrak{g}):=\operatorname{Hom}_{d.g.a.}\big{(}\operatorname{CE}(\mathfrak{g}),\Omega^{\bullet}(\Delta^{k})\big{)}.$
More precisely, a vector bundle morphism $\psi:T\Delta^{k}\to\mathfrak{g}$ can
be written explicitly as $\psi=\sum_{i=0}^{k}\psi_{i}dt_{i}$ with $\psi_{i}\in
$C^{r+1}(\Delta^{k},\mathfrak{g})$. Moreover, $\psi$ defines an element in
$\operatorname{Hom}_{\operatorname{algd}}(T\Delta^{k},\mathfrak{g})$ if it is
furthermore a Lie algebroid morphism, that is, it satisfies the Maurer-Cartan
equation
$\frac{d\psi_{i}}{dt_{j}}-\frac{d\psi_{j}}{dt_{i}}=[\psi_{i},\psi_{j}],\quad\forall
i,j\in\\{0,\dots,k\\}.$
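Solutions of this equation arise from based maps $g:\Delta^{k}\to G$ via $\psi_{i}=g^{-1}\frac{\partial g}{\partial t_{i}}$, the pullback of the Maurer-Cartan form. The following sketch, with the illustrative choice $g(t_{1},t_{2})=U(t_{1})V(t_{2})$ in $SL(2,\mathbb{R})$ and finite differences, checks the Maurer-Cartan equation numerically at one point; all numerical parameters are assumptions.

```python
import numpy as np

# For g(t1, t2) = U(t1) V(t2) in SL(2, R), the components psi_i = g^{-1} d_i g
# of the pulled-back Maurer-Cartan form satisfy
#     d psi_i / d t_j - d psi_j / d t_i = [psi_i, psi_j].
# We verify this with central finite differences (illustrative step sizes).

def g(t1, t2):
    U = np.array([[1.0, t1], [0.0, 1.0]])                  # unipotent factor
    V = np.array([[np.exp(t2), 0.0], [0.0, np.exp(-t2)]])  # diagonal factor
    return U @ V

def psi(i, t, h=1e-6):
    """psi_i(t) = g(t)^{-1} (partial_i g)(t), via a central difference."""
    e = np.zeros(2); e[i] = h
    dg = (g(*(t + e)) - g(*(t - e))) / (2 * h)
    return np.linalg.solve(g(*t), dg)                      # g^{-1} dg

t = np.array([0.3, -0.7])
h = 1e-4
# d psi_0 / d t_1 - d psi_1 / d t_0, again by central differences:
lhs = ((psi(0, t + np.array([0, h])) - psi(0, t - np.array([0, h])))
       - (psi(1, t + np.array([h, 0])) - psi(1, t - np.array([h, 0])))) / (2 * h)
p0, p1 = psi(0, t), psi(1, t)
rhs = p0 @ p1 - p1 @ p0                                    # [psi_0, psi_1]
assert np.allclose(lhs, rhs, atol=1e-4)
```

For this particular $g$ one can also compute by hand that $\psi_{1}=\begin{pmatrix}0&e^{-2t_{2}}\\ 0&0\end{pmatrix}$ and $\psi_{2}=\operatorname{diag}(1,-1)$, against which the finite differences can be cross-checked.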
The Banach manifolds
$\operatorname{Hom}_{\operatorname{algd}}(T\Delta^{\bullet},\mathfrak{g})$
form a simplicial manifold, denoted by $\int\mathfrak{g}_{\bullet}$ in [54],
with face and degeneracy maps induced by the natural ones between $\Delta^{k}$
and $\Delta^{k-1}$. The simplicial manifold $\int\mathfrak{g}_{\bullet}$ is
conjectured [53, Section 7] to be a universal integration object of
$\mathfrak{g}$, i.e. the $L_{\infty}$-group integrating $\mathfrak{g}$,
partially because its 1-truncation is the universal (connected and simply
connected) Lie group $G$ integrating $\mathfrak{g}$; here universality only
holds among Lie $1$-groups integrating $\mathfrak{g}$. Moreover, $\int$
is an exact functor with respect to a class of distinguished
fibrations—“quasi-split fibrations” [84, Section 9]. Such fibrations include
acyclic fibrations as well as fibrations that arise in string-like extensions.
In particular, $\int$ sends $L_{\infty}$ quasi-isomorphisms to weak
equivalences, quasi-split fibrations to Kan fibrations, and preserves acyclic
fibrations as well as pullbacks of acyclic/quasi-split fibrations.
Now let us give some details on truncations.
###### Definition B.1.
[See e.g. [69] for the case of simplicial sets] Given a simplicial manifold
$X_{\bullet}$ the $n$-truncation $\tau_{n}(X)_{\bullet}$ is the simplicial set
defined as
$\tau_{n}(X)_{k}=X_{k},\quad\text{if }\;k\leq
n-1,\quad\tau_{n}(X)_{k}=X_{k}/\sim^{n}_{k+1}\text{ if }\;k\geq n,$
where $x_{k}\sim^{n}_{k+1}x^{\prime}_{k}$ if and only if they are simplicially
homotopic relative to the $(n-1)$-skeleton. In other words,
$sk_{n-1}(x_{k})=sk_{n-1}(x^{\prime}_{k})$ and $\exists\
x_{k+1}\in\hom(\Delta[k]\times\Delta[1],X_{\bullet})$ such that
$x_{k+1}|_{sk_{n-1}(\Delta[k])}=sk_{n-1}(x_{k})=sk_{n-1}(x^{\prime}_{k}),\quad
x_{k+1}|_{\Delta[k]\times 0}=x_{k}\quad\text{and}\quad
x_{k+1}|_{\Delta[k]\times 1}=x^{\prime}_{k}.$
We pay special attention to the $n$-truncations of the simplicial manifold
$\int\mathfrak{g}_{\bullet}$ for $n=1,2$. In this case, the homotopies by
which we quotient can be understood in the following way:
$\psi^{i}\in\int\mathfrak{g}_{k}$ with
$\psi^{0}|_{T\partial\Delta^{k}}=\psi^{1}|_{T\partial\Delta^{k}}$ for $i=0,1$
are homotopic if there exists
$\Psi\in\operatorname{Hom}_{\operatorname{algd}}(T(\Delta^{k}\times
I),\mathfrak{g})$ such that
$\Psi(x,0)=\psi^{0},\quad\Psi(x,1)=\psi^{1},\quad\Psi(y,t)=\psi^{0}(y)=\psi^{1}(y)\text{
for }y\in sk_{n-1}\Delta^{k},k\geq n.$
Notice in particular that when $k=n$, we have
$sk_{n-1}\Delta^{k}=\partial\Delta^{k}$.
The $n$-truncation of an arbitrary simplicial manifold need not be a
simplicial manifold, as there are quotients involved. In our concrete case,
[54, Theorem 7.5] implies that $\tau_{1}(\int\mathfrak{g})_{\bullet}$ and
$\tau_{2}(\int\mathfrak{g})_{\bullet}$ are simplicial manifolds because
$\pi_{2}(G)=0$ for a finite dimensional Lie group. More explicitly we have the
following.
###### Proposition B.2.
([54, Example 7.2]) Let $G$ be the connected and simply connected Lie group
integrating $\mathfrak{g}$. Then
$\tau_{1}(\int\mathfrak{g})_{\bullet}=NG_{\bullet}$.
###### Proposition B.3 ([54]).
Let $G$ be the connected and simply connected Lie group integrating
$\mathfrak{g}$. Then the Lie 2-group $\tau_{2}(\int\mathfrak{g})_{\bullet}$ is
equal to ${\mathbb{G}}_{\bullet}$ where
${\mathbb{G}}_{k}=\operatorname{Hom}_{e}(sk_{1}\Delta^{k},G)$, which is the
space of maps of Sobolev class $r+1$ sending $(0,\dots,0,1)\in
sk_{1}\Delta^{k}$ to $e$. The face and degeneracy maps are those induced from
the simplices.
###### Proof.
The calculation is more or less done in [54, Sect.8]. We summarise here: let
us denote by $\operatorname{Hom}_{e}(\Delta^{k},G)$ the space of maps of
Sobolev class ${r+1}$ which send $(0,\dots,0,1)\in\Delta^{k}$ to $e\in G$.
When $k=1$, we also write $P_{e}G:=\operatorname{Hom}_{e}(\Delta^{1},G)$. Then
according to [54, Example 5.5] or [19, Remark 3.8],
$\operatorname{Hom}_{\operatorname{algd}}(T\Delta^{k},\mathfrak{g})=\operatorname{Hom}_{e}(\Delta^{k},G)$.
Thus $\int\mathfrak{g}_{1}=P_{e}G$.
The equivalence relation $\sim_{k}$ we take in truncation is exactly the
homotopy relative to the boundary in $G$. Thus
${\mathbb{G}}_{k}=\operatorname{Hom}_{e}(sk_{1}\Delta^{k},G)$. ∎
## Appendix C Explicit formulas for the Lie $2$-group
${\mathbb{G}}_{\bullet}$
The tangent of the Lie $2$-group ${\mathbb{G}}_{\bullet}$ is also a Lie
$2$-group (in fact the group structure is compatible with the vector bundle
structure, and hence it is more than just a Lie $2$-group) and is given by
$T{\mathbb{G}}_{\bullet}=\cdots\Omega TG\ \substack{\longrightarrow\\\ \longrightarrow\\\ \longrightarrow}\ P_{(e,0_{e})}TG\rightrightarrows pt.$
The third level is also important and is given by
(C.1) $\begin{split}T{\mathbb{G}}_{3}=\\{(a_{0},a_{1},a_{2})\in(\Omega
TG)^{\times 3}\ |\ \text{ for }t\in[0,\frac{1}{3}],\ a_{0}(t)=a_{1}(t),\\\
a_{0}(t+\frac{2}{3})=a_{2}(\frac{1}{3}-t),\
a_{1}(t+\frac{2}{3})=a_{2}(t+\frac{2}{3})\\}.\end{split}$
The face and degeneracy maps are obtained by taking variations of equations
(3.4), (3.5), and (3.6). More explicitly, the faces $Td_{i}:\Omega TG\to
P_{(e,0_{e})}TG$ are given by
(C.2) $\begin{split}&Td_{0}(a)=a(\frac{t}{3}),\quad
Td_{1}(a)=a(1-\frac{t}{3}),\\\
&Td_{2}(a)=R_{\tau(\frac{1}{3})^{-1}}a(\frac{1+t}{3})-L_{\tau(\frac{t+1}{3})\tau(\frac{1}{3})^{-1}}R_{\tau(\frac{1}{3})^{-1}}a(\frac{1}{3}).\end{split}$
The faces $Td_{i}:T{\mathbb{G}}_{3}\to\Omega TG$ are given by
(C.3) $Td_{i}(a_{0},a_{1},a_{2})=a_{i}\quad\text{ for }0\leq i\leq 3,$
where
(C.4)
$a_{3}(t)=\left\\{\begin{array}[]{ll}R_{\tau_{0}(\frac{1}{3})^{-1}}a_{0}(t+\frac{1}{3})-L_{\tau_{3}(t)}R_{\tau_{0}(\frac{1}{3})^{-1}}a_{0}(\frac{1}{3})&t\in[0,\frac{1}{3}],\\\
R_{\tau_{0}(\frac{1}{3})^{-1}}a_{2}(t)-L_{\tau_{3}(t)}R_{\tau_{0}(\frac{1}{3})^{-1}}a_{0}(\frac{1}{3})&t\in[\frac{1}{3},\frac{2}{3}],\\\
R_{\tau_{0}(\frac{1}{3})^{-1}}a_{1}(\frac{4}{3}-t)-L_{\tau_{3}(t)}R_{\tau_{0}(\frac{1}{3})^{-1}}a_{0}(\frac{1}{3})&t\in[\frac{2}{3},1].\end{array}\right.$
The degeneracies $Ts_{i}:P_{(e,0_{e})}TG\to\Omega TG$ are given by
(C.5) $Ts_{0}(u)(t)=\left\\{\begin{array}[]{ll}u(3t)&t\in[0,\frac{1}{3}],\\\
u(1)&t\in[\frac{1}{3},\frac{2}{3}],\\\
u(3-3t)&t\in[\frac{2}{3},1],\end{array}\right.\quad
Ts_{1}(u)(t)=\left\\{\begin{array}[]{ll}u(0)&t\in[0,\frac{1}{3}],\\\
u(3t-1)&t\in[\frac{1}{3},\frac{2}{3}],\\\
u(3-3t)&t\in[\frac{2}{3},1].\end{array}\right.$
Here we also include a direct proof that Segal’s 2-form is multiplicative.
###### Proposition C.1.
The $2$-shifted $2$-form $\omega_{\bullet}$ on ${\mathbb{G}}_{\bullet}$ is
closed, that is,
$d\omega=0\qquad\text{and}\qquad\delta\omega=0.$
###### Proof.
Since $\omega$ is a symplectic form on $\Omega G$ we know that $d\omega=0$, so
it remains to see that $\delta\omega=0$. We need to compute the pullback of
$\omega$ by the face maps (3.5). Pick
$\tau\equiv(\tau_{0},\tau_{1},\tau_{2})\in{\mathbb{G}}_{3}$ and
$a\equiv(a_{0},a_{1},a_{2}),b\equiv(b_{0},b_{1},b_{2})\in
T_{\tau}{\mathbb{G}}_{3}$ write
$\widehat{a}_{i}(t)=L_{\tau_{i}(t)^{-1}}a_{i}(t)$ and similarly for $b$. Then
using (3.5) and (C.3) we obtain
$(d_{i}^{*}\omega)_{\tau}(a,b)=\omega_{\tau_{i}}(a_{i},b_{i})=A_{i}+B_{i}+C_{i}\quad\text{for
}i=0,\cdots,3,$
where $A_{i}$ correspond to the first third of the loop, $B_{i}$ to the second
third and $C_{i}$ to the last third. Since $C^{\infty}$ maps are dense in
Sobolev space, we only need to verify our result at smooth loops. In order to
compare the terms $A_{i},B_{i}$, and $C_{i}$ for different values of $i$, we
use (C.1) and the face maps given by (C.3). Moreover, equation (C.4) implies
(C.6)
$\left\\{\begin{array}[]{ll}a_{0}(t)=R_{\tau_{0}(\frac{1}{3})}a_{3}(t-\frac{1}{3})+L_{\tau_{3}(t-\frac{1}{3})}a_{0}(\frac{1}{3})&t\in[\frac{1}{3},\frac{2}{3}],\\\
a_{2}(t)=R_{\tau_{0}(\frac{1}{3})}a_{3}(t)+L_{\tau_{3}(t)}a_{0}(\frac{1}{3})&t\in[\frac{1}{3},\frac{2}{3}],\\\
a_{1}(t)=R_{\tau_{0}(\frac{1}{3})}a_{3}(\frac{4}{3}-t)+L_{\tau_{3}(\frac{4}{3}-t)}a_{0}(\frac{1}{3})&t\in[\frac{1}{3},\frac{2}{3}],\\\
\end{array}\right.$
and the same identities hold for $b$.
For $i=0$ we see that the first term is given by
$A_{0}=\int_{0}^{\frac{1}{3}}\langle\widehat{a}^{\prime}_{0}(t),\widehat{b}_{0}(t)\rangle
dt=\int_{0}^{\frac{1}{3}}\langle\widehat{a}^{\prime}_{1}(t),\widehat{b}_{1}(t)\rangle
dt=A_{1},$
where the middle equality follows from the definition of $T{\mathbb{G}}_{3}$.
The second term can be rewritten using (C.6) as
$B_{0}=\int_{\frac{1}{3}}^{\frac{2}{3}}\langle\widehat{a}^{\prime}_{0}(t),\widehat{b}_{0}(t)\rangle
dt=\int_{\frac{1}{3}}^{\frac{2}{3}}\langle\alpha,\beta\rangle$
with
$\alpha=\Big{(}L_{\tau_{0}(\frac{1}{3})^{-1}\tau_{3}(t-\frac{1}{3})^{-1}}\big{(}R_{\tau_{0}(\frac{1}{3})}a_{3}(t-\frac{1}{3})+L_{\tau_{3}(t-\frac{1}{3})}a_{0}(\frac{1}{3})\big{)}\Big{)}^{\prime}=Ad_{\tau_{0}(\frac{1}{3})^{-1}}\widehat{a}^{\prime}_{3}(t-\frac{1}{3})$
and
$\beta=L_{\tau_{0}(\frac{1}{3})^{-1}\tau_{3}(t-\frac{1}{3})^{-1}}\Big{(}R_{\tau_{0}(\frac{1}{3})}b_{3}(t-\frac{1}{3})+L_{\tau_{3}(t-\frac{1}{3})}b_{0}(\frac{1}{3})\Big{)}=Ad_{\tau_{0}(\frac{1}{3})^{-1}}\widehat{b}_{3}(t-\frac{1}{3})+\widehat{b}_{0}(\frac{1}{3}).$
Setting $s=t-\frac{1}{3}$, and noticing that $\langle-,-\rangle$ is adjoint
invariant, we conclude that
$B_{0}=\int_{0}^{\frac{1}{3}}\langle\widehat{a}^{\prime}_{3}(s),\widehat{b}_{3}(s)\rangle+\langle
Ad_{\tau_{0}(\frac{1}{3})^{-1}}\widehat{a}^{\prime}_{3}(s),\widehat{b}_{0}(\frac{1}{3})\rangle
ds=A_{3}+\int_{0}^{\frac{1}{3}}\langle\widehat{a}^{\prime}_{3}(s),R_{\tau_{0}(\frac{1}{3})^{-1}}b_{0}(\frac{1}{3})\rangle
ds.$
Finally the third term
$C_{0}=\int_{\frac{2}{3}}^{1}\langle\widehat{a}^{\prime}_{0}(t),\widehat{b}_{0}(t)\rangle
dt=\int_{\frac{2}{3}}^{1}\langle\widehat{a}^{\prime}_{2}(1-t),\widehat{b}_{2}(1-t)\rangle
dt=-\int_{0}^{\frac{1}{3}}\langle\widehat{a}^{\prime}_{2}(s),\widehat{b}_{2}(s)\rangle
ds=-A_{2},$
where in the middle equation we use the definition of $T{\mathbb{G}}_{3}$ and
in the last we make the change of variable $s=1-t$. By similar computations
one shows that
$\begin{array}[]{rl}B_{1}=&-C_{3}-\int_{\frac{2}{3}}^{1}\langle\widehat{a}^{\prime}_{3}(s),R_{\tau_{0}(\frac{1}{3})^{-1}}b_{0}(\frac{1}{3})\rangle
ds,\\\ C_{1}=&C_{2},\\\
B_{2}=&B_{3}+\int_{\frac{1}{3}}^{\frac{2}{3}}\langle\widehat{a}^{\prime}_{3}(s),R_{\tau_{0}(\frac{1}{3})^{-1}}b_{0}(\frac{1}{3})\rangle
ds.\end{array}$
Thus we may conclude that
$\delta\omega_{\tau}(a,b)=\sum_{i=0}^{3}(-1)^{i}(d_{i}^{*}\omega)_{\tau}(a,b)=\sum_{i=0}^{3}(-1)^{i}(A_{i}+B_{i}+C_{i})=\int_{0}^{1}\langle\widehat{a}^{\prime}_{3}(s),R_{\tau_{0}(\frac{1}{3})^{-1}}b_{0}(\frac{1}{3})\rangle
ds=0,$
where in the last step we are using Stokes’ Theorem and the fact that
$\widehat{a}_{3}(0)=\widehat{a}_{3}(1)=0_{e}.$ ∎
## Appendix D Transgression map
In this appendix we collect some useful formulas for the transgression map,
going from $k$-forms on a manifold to $(k-1)$-forms on its path space, which
should be well known to experts.
Let $M$ be a manifold and denote by $PM$ its path space, i.e.
$PM=\\{\gamma:[0,1]\to M\ |\ \gamma\text{ of class }H_{r}\\}$, and consider
$PM$ as a Banach manifold. The evaluation map $ev:[0,1]\times PM\to M,\
ev(t,\gamma)=\gamma(t)$ at the level of forms produces the map
$ev^{*}:\Omega^{k}(M)\to\Omega^{k}([0,1]\times PM),$
and since the interval has dimension $1$ we can give the more explicit
description
(D.1) $\Omega^{k}([0,1]\times
PM)=\Omega^{0}([0,1])\otimes\Omega^{k}(PM)\oplus\Omega^{1}([0,1])\otimes\Omega^{k-1}(PM).$
The transgression map is defined as the composition of evaluation with
integration:
(D.2)
$\begin{array}[]{cccl}\operatorname{\mathbb{T}}:&\Omega^{k}(M)&\to&\Omega^{k-1}(PM)\\\
&\omega&\mapsto&\operatorname{\mathbb{T}}(\omega)=\int_{0}^{1}ev^{*}\omega.\end{array}$
Now we give short proofs for the properties of the transgression map that we
use in the article.
###### Proposition D.1.
The transgression map and de Rham differential satisfy the following relation:
$\operatorname{\mathbb{T}}(d\omega)=ev_{1}^{*}\omega-ev_{0}^{*}\omega-
d_{PM}\operatorname{\mathbb{T}}(\omega),$
where $ev_{t}(\gamma)=\gamma(t)$ for $t\in[0,1].$
###### Proof.
$\begin{split}\operatorname{\mathbb{T}}(d\omega)=&\int_{0}^{1}ev^{*}d\omega=\int_{0}^{1}d_{[0,1]\times
PM}(ev^{*}\omega)=\int_{0}^{1}d_{[0,1]}ev^{*}\omega-\int_{0}^{1}d_{PM}(ev^{*}\omega)\\\
=&ev^{*}_{1}\omega-ev^{*}_{0}\omega-
d_{PM}\operatorname{\mathbb{T}}(\omega),\end{split}$
where we use the decomposition (D.1) and Stokes’ theorem. ∎
###### Proposition D.2.
In terms of vectors the transgression map has the following explicit formula:
$\operatorname{\mathbb{T}}(\omega)_{\gamma}(v_{1},\cdots,v_{k-1})=\int_{0}^{1}\omega_{\gamma(t)}\big{(}\gamma^{\prime}(t),v_{1}(t),\cdots,v_{k-1}(t)\big{)}dt,$
where $\omega\in\Omega^{k}(M)$ and $v_{i}\in T_{\gamma}PM.$
###### Proof.
With decomposition (D.1), we write
(D.3)
$ev^{*}\omega=\omega_{0}\otimes\omega_{k}+\omega_{1}\otimes\omega_{k-1},$
where $\omega_{k}\in\Omega^{k}(PM)$, $\omega_{k-1}\in\Omega^{k-1}(PM)$,
$\omega_{1}\in\Omega^{1}([0,1])$ and $\omega_{0}\in\Omega^{0}([0,1])$. Then
integrating over $[0,1]$ via $\int_{0}^{1}$, only the second term survives.
Thus for tangent vectors $v_{i}\in T_{\gamma}PM$, $i=1,\dots,k-1$, at
$\gamma\in PM$, we have
(D.4)
$\int_{0}^{1}ev^{*}\omega_{\gamma}(v_{1},\dots,v_{k-1})=\int_{0}^{1}\omega_{1}\otimes\omega_{k-1}(v_{1},\dots,v_{k-1}).$
At the same time, for a tangent vector $(w,v)\in T_{t}[0,1]\times
T_{\gamma}PM$, we have $T_{\gamma}ev(w,v)=w\gamma^{\prime}+v$. This can be
seen by taking a variation (a small path) $(t+w\epsilon,\gamma^{\epsilon})$
representing $(w,v)$ (thus $\gamma^{0}=\gamma$). Then
$T_{\gamma}ev(w,v)=\frac{d}{d\epsilon}\bigg{|}_{\epsilon=0}(\gamma^{\epsilon}(t+w\epsilon))=\frac{d}{d\epsilon}\bigg{|}_{\epsilon=0}\gamma^{0}(t+w\epsilon)+\frac{d}{d\epsilon}\bigg{|}_{\epsilon=0}\gamma^{\epsilon}(t)=w\gamma^{\prime}(t)+v(t).$
Thus
$\begin{split}&ev^{*}\omega|_{(t,\gamma)}((w_{1},v_{1}),…,(w_{k},v_{k}))=\omega_{\gamma(t)}(T_{\gamma}ev(w_{1},v_{1}),…,T_{\gamma}ev(w_{k},v_{k}))\\\
=&\omega_{\gamma(t)}(v_{1}(t),…,v_{k}(t))+\sum_{i=1}^{k}\omega_{\gamma(t)}(v_{1}(t),…,w_{i}\gamma^{\prime}(t),…,v_{k}(t)),\end{split}$
where the terms containing two copies of $\gamma^{\prime}$ vanish since $\omega$ is alternating.
Thus comparing with (D.3), we have $\omega_{k}=\omega$, $\omega_{0}=1$,
$\omega_{k-1}=\iota(\gamma^{\prime})\omega$, and $\omega_{1}=dt$. Combining
this with (D.4) yields the desired formula. ∎
Suppose that we have a simplicial manifold $X_{\bullet}$. Then the path space
$PX_{\bullet}$ is again a simplicial manifold given by
$(PX)_{n}=P(X_{n}),\quad(Pd)_{i}(\gamma)(t)=(d_{i}\circ\gamma)(t),\quad\text{and}\quad(Ps)_{i}(\gamma)(t)=(s_{i}\circ\gamma)(t).$
###### Proposition D.3.
Let $X_{\bullet}$ be a simplicial manifold. Then the transgression commutes
with the simplicial differentials, i.e.
$\operatorname{\mathbb{T}}(\delta^{X}\omega)=\delta^{PX}\operatorname{\mathbb{T}}(\omega).$
###### Proof.
In order to prove this, we first observe that if $X_{\bullet}$ is a simplicial
manifold then $TX_{\bullet}$ is also a simplicial manifold with faces and
degeneracies given by the corresponding tangent maps. It then follows that
there is a canonical identification between $P(TX)_{\bullet}$ and
$T(PX)_{\bullet}$ as simplicial manifolds. Using this canonical identification
and the explicit formula of the transgression given by Proposition D.2, for
$\omega\in\Omega^{k}(X_{n-1}),$ $\gamma\in PX_{n},$ and $v_{j}\in
T_{\gamma}PX_{n}$ we have
$\begin{split}&\big{(}(Pd_{i})^{*}\operatorname{\mathbb{T}}(\omega)\big{)}_{\gamma}(v_{1},\cdots,v_{k-1})=\operatorname{\mathbb{T}}(\omega)_{Pd_{i}(\gamma)}\Big{(}TPd_{i}(v_{1}),\cdots,TPd_{i}(v_{k-1})\Big{)}\\\
&=\int_{0}^{1}\omega_{Pd_{i}(\gamma)(t)}\Big{(}Pd_{i}(\gamma)^{\prime}(t),TPd_{i}(v_{1})(t),\cdots,TPd_{i}(v_{k-1})(t)\Big{)}dt\\\
&=\int_{0}^{1}\omega_{d_{i}(\gamma(t))}\Big{(}Td_{i}(\gamma(t))^{\prime},PTd_{i}(v_{1})(t),\cdots,PTd_{i}(v_{k-1})(t)\Big{)}dt\\\
&=\int_{0}^{1}\omega_{d_{i}(\gamma(t))}\Big{(}Td_{i}(\gamma^{\prime}(t)),Td_{i}(v_{1}(t)),\cdots,Td_{i}(v_{k-1}(t))\Big{)}dt\\\
&=\int_{0}^{1}(d^{*}_{i}\omega)_{\gamma(t)}\Big{(}\gamma^{\prime}(t),v_{1}(t),\cdots,v_{k-1}(t)\Big{)}dt=\operatorname{\mathbb{T}}(d^{*}_{i}\omega)_{\gamma}(v_{1},\cdots,v_{k-1}).\end{split}$
Since the simplicial differentials are alternating sums of face maps, this
proves the statement. ∎
## Appendix E IM-form (by Florian Dorsch)
For completeness, we now give proofs of some very useful statements from the
unpublished lecture notes [51], which are stated there on pages 82-83 without
proof.
###### Lemma E.1.
Let $\alpha_{\bullet}$ be an $m$-shifted 2-form on a Lie $n$-groupoid
$X_{\bullet}$. Then
1. (1)
the IM-form $\lambda^{\alpha_{\bullet}}$ vanishes on degenerate vectors, that
is, for $m\geq 1$, for any point $x\in X_{0}$ we have
$\lambda^{\alpha_{\bullet}}_{x}(Ts_{i}u,w)=0,\quad\forall u\in T_{x}X_{p-1},\
w\in T_{x}X_{q},\quad 0\leq i\leq p-1.$
For $m=0$, this is an empty condition.
2. (2)
if $\alpha_{m}$ is multiplicative, that is, $\delta\alpha_{m}=0$, then the IM-form
$\lambda^{\alpha_{\bullet}}$ is infinitesimally multiplicative. That is, when
$m\geq 1$, for $v\in\mathcal{T}_{p+1}(X_{\bullet})$,
$w\in\mathcal{T}_{q}(X_{\bullet})$ with $0\leq p\leq m-1$ and $p+q=m$, we have
(E.1) $\lambda^{\alpha_{\bullet}}(\partial
v,w)+(-1)^{p+1}\lambda^{\alpha_{\bullet}}(v,\partial w)=0;$
and when $m=0$, for $v\in\mathcal{T}_{1}(X_{\bullet})$,
$w\in\mathcal{T}_{0}(X_{\bullet})=T_{0}X_{0}$ we have
(E.2) $\lambda^{\alpha_{\bullet}}(\partial v,w)=0.$
These are equivalent to (2.6) with the correct interpretation in extreme cases
of indices explained in Remark 2.15;
3. (3)
IM-forms are invariant under gauge transformation. That is, when $m\geq 1$,
$\lambda^{\alpha_{\bullet}+D\phi_{\bullet}}=\lambda^{\alpha_{\bullet}},\quad\text{for
any $(m-1)$-shifted 2-form $\phi_{\bullet}$}.$
###### Proof.
(1) To show that $\lambda^{\alpha_{\bullet}}$ vanishes on degeneracies, it is
enough to verify that
(E.3) $\alpha_{m}(Ts_{\pi(p+q-1)}\dots Ts_{\pi(p)}Ts_{i}u,Ts_{\pi(p-1)}\dots
Ts_{\pi(0)}w)=0$
for all $u\in T_{x}X_{p-1}$, $w\in T_{x}X_{q}$ and $s_{i}:X_{p-1}\rightarrow
X_{p},\,\,0\leq i\leq p-1$. We begin by making the following observation. Let
$\displaystyle i_{p-1}>\dots>i_{j}=\lambda>\dots>i_{0}\>\>\>\text{and}$
$\displaystyle i_{p+q-1}>\dots>i_{p+l}>\lambda>i_{p+l-1}>\dots>i_{p}$
be indices such that
$\displaystyle Ts_{i_{p-1}}\dots Ts_{\lambda}\dots
Ts_{i_{0}}w\>\>\>\text{and}\>\>\>Ts_{i_{p+q-1}}\dots
Ts_{i_{p+l}}Ts_{\lambda}Ts_{i_{p+l-1}}\dots Ts_{i_{p}}u$
are well-defined tangent vectors in $T_{x}X_{m}$. Then by simplicial
identities we have
(E.4) $\begin{split}&\alpha_{m}(Ts_{i_{p+q-1}}\dots
Ts_{i_{p+l}}Ts_{\lambda}Ts_{i_{p+l-1}}\dots Ts_{i_{p}}u,Ts_{i_{p-1}}\dots
Ts_{\lambda}\dots Ts_{i_{0}}w)\\\
=&\alpha_{m}(Ts_{\lambda}Ts_{i_{p+q-1}-1}\dots
Ts_{i_{p+l}-1}Ts_{i_{p+l-1}}\dots Ts_{i_{p}}u,Ts_{\lambda}Ts_{i_{p-1}-1}\dots
Ts_{i_{j+1}-1}\dots Ts_{i_{0}}w)\\\
=&s_{\lambda}^{\ast}\alpha_{m}(Ts_{i_{p+q-1}-1}\dots
Ts_{i_{p+l}-1}Ts_{i_{p+l-1}}\dots Ts_{i_{p}}u,Ts_{i_{p-1}-1}\dots
Ts_{i_{j+1}-1}\dots Ts_{i_{0}}w)\\\ =&0\end{split}$
as $\alpha_{m}$ is normalized. The rest of the argument essentially reduces to
(E.4).
First suppose that $i=\pi(p+j)\in\\{\pi(p+q-1),\dots,\pi(p)\\}$ for $0\leq
j\leq q-1$. Since $i=\pi(p+j)>\dots>\pi(p)$, it follows from the simplicial
identities that
$\displaystyle Ts_{\pi(p+q-1)}\dots Ts_{\pi(p)}Ts_{i}u=Ts_{\pi(p+q-1)}\dots
Ts_{\pi(p+j+1)}Ts_{i+j+1}Ts_{\pi(p+j)}\dots Ts_{\pi(p)}u.$
If $\pi(p+j+1)>i+j+1$, the index $i+j+1$ is not contained in
$\\{\pi(p+q-1),\dots,\pi(p)\\}$, so $i+j+1\in\\{\pi(p-1),\dots,\pi(0)\\}$ and
(E.3) follows from (E.4).
Otherwise, we distinguish between the following two cases:
1. a)
There exists a minimal $l\in\\{j+2,\dots,q-1\\}$ such that $\pi(p+l)>i+l$.
Then $i+l\in$ $\\{\pi(p-1),\dots,\pi(0)\\}$, so (E.4) applies.
2. b)
For all $l\in\\{q-1,\dots,j+1\\}:\pi(p+l)=i+l$. Then $p+q-1\geq
i+q>\pi(p+q-1)$ and $i+q\in\\{\pi(p-1),\dots,\pi(0)\\}$, so (E.4) applies.
Thus $\alpha_{m}(Ts_{\pi(p+q-1)}\dots Ts_{\pi(p)}Ts_{i}u,Ts_{\pi(p-1)}\dots
Ts_{\pi(0)}w)$ vanishes for $i\in\\{\pi(p+q-1),\dots,\pi(p)\\}$. The case when
$i\in\\{\pi(p-1),\dots,\pi(0)\\}$ works similarly.
(2) We first look at the case when $m=0$, and we want to prove Eq. (E.2).
Since $\alpha_{\bullet}$ is multiplicative, we have
$0=\delta\alpha_{0}(v,Ts_{0}w)=\alpha_{0}(Td_{0}v,Td_{0}Ts_{0}w)-\alpha_{0}(Td_{1}v,Td_{1}Ts_{0}w).$
Since $Td_{0}v=0$ and $d_{1}s_{0}=\operatorname{id}$, the desired equation
(E.2) is proven.
When $m\geq 1$, we consider the sum
$\begin{split}&\sum_{\pi\in\mathsf{Shuff}(p+1,q)}\text{sgn}(\pi)\,\delta\alpha_{m}(Ts_{\pi(p+q)}\dots
Ts_{\pi(p+1)}v,Ts_{\pi(p)}\dots Ts_{\pi(0)}w)\\\
=&\sum_{i=0}^{m+1}(-1)^{i}\sum_{\pi\in\mathsf{Shuff}(p+1,q)}\text{sgn}(\pi)\,\alpha_{m}(Td_{i}Ts_{\pi(p+q)}\dots
Ts_{\pi(p+1)}v,Td_{i}Ts_{\pi(p)}\dots Ts_{\pi(0)}w),\end{split}$
which vanishes because $\alpha_{m}$ is multiplicative. We begin by showing
that, in fact, already
$\displaystyle\sum_{\pi\in\mathsf{Shuff}(p+1,q)}\text{sgn}(\pi)\,\alpha_{m}(Td_{i}Ts_{\pi(p+q)}\dots
Ts_{\pi(p+1)}v,Td_{i}Ts_{\pi(p)}\dots Ts_{\pi(0)}w)=0$
for $i\in\\{0,\dots,m\\}$. For a fixed $i\in\\{0,\dots,m\\}$ and a shuffle
$\pi\in\mathsf{Shuff}(p+1,q)$, the index $i$ either lies in
$\\{\pi(p+q),\dots,\pi(p+1)\\}$ or in $\\{\pi(p),\dots,\pi(0)\\}$. First,
consider the case $i=\pi(p+1+j)\in\\{\pi(p+q),\dots,\pi(p+1)\\}$. From the
simplicial identities, it follows that
$\displaystyle Td_{i}Ts_{\pi(p+q)}\dots Ts_{\pi(p+1)}v=Ts_{\pi(p+q)-1}\dots
Ts_{\pi(p+1+j+1)-1}Ts_{\pi(p+j)}\dots Ts_{\pi(p+1)}v.$
Similarly, we have
(E.5) $\displaystyle\underbrace{Td_{i}Ts_{\pi(p)}\dots
Ts_{\pi(0)}w}_{(\ast\ast)}=\begin{cases}Ts_{\pi(p)}\dots
Ts_{\pi(0)}Td_{i-(p+1)}w,\>\>\>\>\>\>\text{if}\>\>i>1+\pi(p),\\\ \\\
Ts_{\pi(p)-1}\dots Ts_{\pi(0)-1}Td_{i}w,\>\>\>\>\>\>\text{if}\>\>i<\pi(0),\\\
\\\ Ts_{\pi(p)-1}\dots Ts_{\pi(l+1)-1}Ts_{\pi(l)}\dots
Ts_{\pi(0)}Td_{i-(l+1)}w,\\\
\text{if}\>\>\pi(l+1)>i>\pi(l)+1\>\>\text{for}\>\>l\in\\{0,\dots,p-1\\},\\\
\\\ Ts_{\pi(p)-1}\dots Ts_{\pi(l+1)-1}Ts_{\pi(l-1)}\dots Ts_{\pi(0)}w,\\\
\text{if}\>\>\>i=1+\pi(l)\>\>\text{for}\>\>l\in\\{0,\dots,p\\}.\end{cases}$
We consider each situation separately:
1. a)
if $i>1+\pi(p)$ then $i-(p+1)\leq p+q-(p+1)=q-1$, so $Td_{i-(p+1)}w=0$ and
$(\ast\ast)$ vanishes,
2. b)
if $i<\pi(0)$ then $i\leq p+q-(p+1)=q-1$, so $Td_{i}w=0$ and $(\ast\ast)$
vanishes,
3. c)
if $\pi(l+1)>i>\pi(l)+1$ for $l\in\\{0,\dots,p-1\\}$, then $i-(l+1)\leq
q+l-(l+1)=q-1$, so $Td_{i-(l+1)}w=0$ and $(\ast\ast)$ vanishes.
Thus we are left with the last situation in (E.5). In this case,
$i=1+\pi(l),\>l\in\\{0,\dots,p\\}$, thus we have $\pi(l)=\pi(p+1+j)-1$. We
define a new $(p+1,q)$-shuffle $\tilde{\pi}$ by
$\displaystyle\tilde{\pi}(k)=\begin{cases}i-1=\pi(l),\>\>\>\>\>\>\text{if}\>\>k=p+1+j,\\\
i=\pi(p+1+j),\>\>\>\>\>\>\text{if}\>\>k=l,\\\
\pi(k),\>\>\>\>\>\>\text{otherwise,}\end{cases}$
which can be illustrated as
$\scriptsize{\begin{pmatrix}0&\dots&\mathbf{l}&\dots&p,&p+1&\dots&\mathbf{p+1+j}&\dots&p+q\\\
\pi(0)<&\dots&<\mathbf{\pi(p+1+j)-1}<&\dots&<\pi(p),&\pi(p+1)<&\dots&<\mathbf{\pi(l)}<&\dots&<\pi(p+q)\end{pmatrix}}.$
Then clearly $\text{sgn}(\pi)=-\text{sgn}(\tilde{\pi})$ and
$\displaystyle Td_{i}Ts_{\pi(p)}\dots
Ts_{\pi(0)}w=Td_{i}Ts_{\tilde{\pi}(p)}\dots Ts_{\tilde{\pi}(0)}w,$
$\displaystyle Td_{i}Ts_{\pi(p+q)}\dots
Ts_{\pi(p+1)}v=Td_{i}Ts_{\tilde{\pi}(p+q)}\dots Ts_{\tilde{\pi}(p+1)}v.$
Notice that the terms
$\displaystyle\text{sgn}(\pi)\,\alpha_{m}(Td_{i}Ts_{\pi(p+q)}\dots
Ts_{\pi(p+1)}v,Td_{i}Ts_{\pi(p)}\dots Ts_{\pi(0)}w),$
$\displaystyle\text{sgn}(\tilde{\pi})\,\alpha_{m}(Td_{i}Ts_{\tilde{\pi}(p+q)}\dots
Ts_{\tilde{\pi}(p+1)}v,Td_{i}Ts_{\tilde{\pi}(p)}\dots Ts_{\tilde{\pi}(0)}w)$
cancel with each other. The case $i\in\\{\pi(p),\dots,\pi(0)\\}$ can be
treated similarly. Thus
$\displaystyle\sum_{\pi\in\mathsf{Shuff}(p+1,q)}\text{sgn}(\pi)\,\alpha_{m}(Td_{m+1}Ts_{\pi(p+q)}\dots
Ts_{\pi(p+1)}v,Td_{m+1}Ts_{\pi(p)}\dots Ts_{\pi(0)}w)=0.$
We now distinguish between two types of $(p+1,q)$-shuffles: shuffles
satisfying $\pi(p+q)=m$ and shuffles satisfying $\pi(p)=m$. There exists a 1-1
correspondence between $(p+1,q)$-shuffles $\pi$ with $\pi(p+q)=m$ and
$(p+1,q-1)$-shuffles $\tau$ via $\tau(k)=\pi(k)$ for
$k\in\\{0,\dots,p+q-1\\}$. Likewise there exists a 1-1 correspondence between
$(p+1,q)$-shuffles $\pi$ with $\pi(p)=m$ and $(p,q)$-shuffles $\chi$ via
$\displaystyle\chi(k)=\begin{cases}\pi(k)\>\>\>\>\>\text{if}\>\>k\in\\{0,\dots,p-1\\},\\\
\pi(k+1)\>\>\>\>\>\text{if}\>\>k\in\\{p,\dots,p+q-1\\}.\end{cases}$
With these correspondences it follows that
(E.6)
$\begin{split}0=&\sum_{\pi\in\mathsf{Shuff}(p+1,q)}\text{sgn}(\pi)\,\alpha_{m}(Td_{m+1}Ts_{\pi(p+q)}\dots
Ts_{\pi(p+1)}v,Td_{m+1}Ts_{\pi(p)}\dots Ts_{\pi(0)}w)\\\
=&\sum_{\pi\in\mathsf{Shuff}(p+1,q),\pi(p+q)=m}\text{sgn}(\pi)\,\alpha_{m}(Ts_{\pi(p+q-1)}\dots
Ts_{\pi(p+1)}v,Ts_{\pi(p)}\dots Ts_{\pi(0)}Td_{q}w)\\\
+&\sum_{\pi\in\mathsf{Shuff}(p+1,q),\pi(p)=m}\text{sgn}(\pi)\,\alpha_{m}(Ts_{\pi(p+q)}\dots
Ts_{\pi(p+1)}Td_{p+1}v,Ts_{\pi(p-1)}\dots Ts_{\pi(0)}w)\\\
=&(-1)^{q}\sum_{\tau\in\mathsf{Shuff}(p+1,q-1)}\text{sgn}(\tau)\,\alpha_{m}(Ts_{\tau(p+q-1)}\dots
Ts_{\tau(p+1)}v,Ts_{\tau(p)}\dots Ts_{\tau(0)}\partial w)\\\
+&(-1)^{p+1}\sum_{\chi\in\mathsf{Shuff}(p,q)}\text{sgn}(\chi)(-1)^{q}\,\alpha_{m}(Ts_{\chi(p+q-1)}\dots
Ts_{\chi(p)}\partial v,Ts_{\chi(p-1)}\dots Ts_{\chi(0)}w)\\\
=&(-1)^{q}\lambda^{\alpha_{\bullet}}(v,\partial
w)+(-1)^{q}(-1)^{p+1}\lambda^{\alpha_{\bullet}}(\partial v,w).\end{split}$
Thus (E.1) is proven.
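The shuffle bookkeeping used above (the partition of $(p+1,q)$-shuffles by the position of the top index $m$, and the sign factors appearing in (E.6)) can be verified directly by enumeration. The following is an illustrative sketch, not part of the proof, with $p$ and $q$ chosen arbitrarily:

```python
from itertools import combinations

def sign(perm):
    # sign of a permutation via inversion count
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def shuffles(a, b):
    # (a,b)-shuffles of {0,...,a+b-1}, written as the value sequence
    # (pi(0),...,pi(a-1), pi(a),...,pi(a+b-1)), each block increasing
    for first in combinations(range(a + b), a):
        yield first + tuple(x for x in range(a + b) if x not in first)

p, q = 2, 3
m = p + q
# (p+1,q)-shuffles permute {0,...,m}; the top index m sits either at
# position p+q (end of the second block) or at position p (end of the first)
all_pi = list(shuffles(p + 1, q))
pi_last = [s for s in all_pi if s[p + q] == m]
pi_p = [s for s in all_pi if s[p] == m]
assert len(pi_last) + len(pi_p) == len(all_pi)
# the two 1-1 correspondences from the text
assert len(pi_last) == len(list(shuffles(p + 1, q - 1)))
assert len(pi_p) == len(list(shuffles(p, q)))
# sign relations: deleting m from the last position preserves the sign;
# deleting it from position p costs q transpositions, which is the source
# of the factor (-1)^q accompanying chi in (E.6)
assert all(sign(s) == sign(s[:-1]) for s in pi_last)
assert all(sign(s) == (-1) ** q * sign(s[:p] + s[p + 1:]) for s in pi_p)
```

The two count assertions amount to Pascal's rule $\binom{m+1}{p+1}=\binom{m}{p+1}+\binom{m}{p}$.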
(3) It is enough to show that $\lambda^{D\phi_{\bullet}}$ vanishes for any
$(m-1)$-shifted 2-form $\phi_{\bullet}$. From (E.6) and the fact that
$(D\phi)_{m}=\delta\phi_{m-1}$, it follows that
$\displaystyle\lambda^{D\phi_{\bullet}}(v,w)=\lambda^{\phi_{\bullet}}(\partial
v,w)+(-1)^{p}\lambda^{\phi_{\bullet}}(v,\partial w),$
for tangent vectors $v\in T_{x_{0}}X_{p},\>w\in
T_{x_{0}}X_{q},\>p+q=m,\>\,p,q\geq 1$. The two summands on the right hand side
turn out to be equal to zero: we note that
$\displaystyle\underbrace{\lambda^{D\phi_{\bullet}}\big{(}(-1)^{p}Ts_{p-1}\partial
v,w\big{)}}_{=0}=\lambda^{\phi_{\bullet}}(\partial\,(-1)^{p}Ts_{p-1}\partial
v,w)+(-1)^{p}\underbrace{\lambda^{\phi_{\bullet}}((-1)^{p}Ts_{p-1}\partial
v,\partial w)}_{=0}$
$\displaystyle=\lambda^{\phi_{\bullet}}\big{(}(-1)^{p}Td_{p}(-1)^{p}Ts_{p-1}\partial
v,w\big{)}=\lambda^{\phi_{\bullet}}(\partial v,w).$
The terms $\lambda^{D\phi_{\bullet}}\big{(}(-1)^{p}Ts_{p-1}\partial
v,w\big{)}$ and $\lambda^{\phi_{\bullet}}\big{(}(-1)^{p}Ts_{p-1}\partial
v,\partial w\big{)}$ are zero thanks to item (1). Thus
$\lambda^{\phi_{\bullet}}(\partial v,w)=0$. Analogously
$\lambda^{\phi_{\bullet}}(v,\partial w)=0$.
It remains to be shown that $\lambda^{D\phi_{\bullet}}(v,w)=0$ if
$v\in\mathcal{T}_{p}(X_{\bullet}),\>w\in\mathcal{T}_{q}(X_{\bullet})$,
$p,q\in\\{0,m\\}$. We first consider the case $p=0,\>q=m$. Then
(E.7)
$\begin{split}\lambda^{D\phi_{\bullet}}(v,w)=&\delta\phi_{m-1}(Ts_{m-1}\dots
Ts_{0}v,w)\\\ =&\sum_{i=0}^{m}(-1)^{i}\phi_{m-1}(Td_{i}Ts_{m-1}\dots
Ts_{0}v,Td_{i}w)\\\ =&(-1)^{m}\phi_{m-1}(Td_{m}Ts_{m-1}\dots
Ts_{0}v,Td_{m}w)\\\ =&\phi_{m-1}(Ts_{m-2}\dots
Ts_{0}v,(-1)^{m}Td_{m}w)=\lambda^{\phi_{\bullet}}(v,\partial w),\end{split}$
which equals zero since
$\begin{split}0=\lambda^{D\phi_{\bullet}}\big{(}v,(-1)^{m}Ts_{m-1}\partial
w\big{)}=&\lambda^{\phi_{\bullet}}(v,\partial\,(-1)^{m}Ts_{m-1}\partial w)\\\
=&\lambda^{\phi_{\bullet}}(v,(-1)^{m}Td_{m}(-1)^{m}Ts_{m-1}\partial
w)=\lambda^{\phi_{\bullet}}(v,\partial w),\end{split}$
where we have used (E.7) for the first equality and the fact that
$\lambda^{D\phi_{\bullet}}$ vanishes on degeneracies. Analogously it can be
shown that $\lambda^{D\phi_{\bullet}}=0$ if $p=m$ and $q=0$. ∎
## References
* [1] https://mathoverflow.net/questions/24500/cotangent-bundle-of-a-differentiable-stack.
* [2] Robert A. Adams and John J. F. Fournier. Sobolev spaces, volume 140 of Pure and Applied Mathematics (Amsterdam). Elsevier/Academic Press, Amsterdam, second edition, 2003.
* [3] Anton Alekseev, Henrique Bursztyn, and Eckhard Meinrenken. Pure spinors on Lie groups. Astérisque, (327):131–199 (2010), 2009.
* [4] Anton Alekseev and Eckhard Meinrenken. On the coadjoint Virasoro action. Preprint arxiv:2211.06216.
* [5] Anton Alekseev, Florian Naef, Xiaomeng Xu, and Chenchang Zhu. Chern-Simons, Wess-Zumino and other cocycles from Kashiwara-Vergne and associators. Lett. Math. Phys., 108(3):757–778, 2018.
* [6] M. Alexandrov, A. Schwarz, O. Zaboronsky, and M. Kontsevich. The geometry of the master equation and topological quantum field theory. Internat. J. Modern Phys. A, 12(7):1405–1429, 1997.
* [7] Daniel Alvarez, Henrique Bursztyn, and Miquel Cueca. Shifted Lagrangian structures in Poisson geometry. Work in progress.
* [8] C. Angulo and M. Cueca. The Van Est homomorphism of a strict Lie 2-algebra. Work in progress.
* [9] C. Arias Abad and M. Crainic. The Weil algebra and the Van Est isomorphism. Ann. Inst. Fourier (Grenoble), 61(3):927–970, 2011.
* [10] M. Artin and B. Mazur. On the van Kampen theorem. Topology, 5:179–189, 1966.
* [11] M. F. Atiyah and R. Bott. The Yang-Mills equations over Riemann surfaces. Philos. Trans. Roy. Soc. London Ser. A, 308(1505):523–615, 1983.
* [12] John C. Baez, Alexander E. Hoffnung, and Christopher L. Rogers. Categorified symplectic geometry and the classical string. Comm. Math. Phys., 293(3):701–725, 2010.
* [13] John C. Baez and Aaron D. Lauda. Higher-dimensional algebra. V. 2-groups. Theory Appl. Categ., 12:423–491 (electronic), 2004.
* [14] John C. Baez, Danny Stevenson, Alissa S. Crans, and Urs Schreiber. From loop groups to 2-groups. Homology Homotopy Appl., 9(2):101–135, 2007.
* [15] Ruggero Bandiera, Zhuo Chen, Mathieu Stiénon, and Ping Xu. Shifted derived Poisson manifolds associated with Lie pairs. Comm. Math. Phys., 375(3):1717–1760, 2020.
* [16] David Baraglia and Pedram Hekmati. Transitive Courant algebroids, string structures and $T$-duality. Adv. Theor. Math. Phys., 19(3):613–672, 2015.
* [17] Francesco Bonechi, Nicola Ciccoli, Camille Laurent-Gengoux, and Ping Xu. Shifted Poisson structures on differentiable stacks. Int. Math. Res. Not. IMRN, (9):6627–6704, 2022.
* [18] R. Bott, H. Shulman, and J. Stasheff. On the de Rham theory of certain classifying spaces. Advances in Math., 20(1):43–56, 1976.
* [19] Olivier Brahic and Chenchang Zhu. Lie algebroid fibrations. Adv. Math., 226(4):3105–3135, 2011.
* [20] Jean-Luc Brylinski. Loop spaces, characteristic classes and geometric quantization, volume 107 of Progress in Mathematics. Birkhäuser Boston Inc., Boston, MA, 1993.
* [21] H. Bursztyn, V. Dolgushev, and S. Waldmann. Morita equivalence and characteristic classes of star products. J. Reine Angew. Math., 662:95–163, 2012.
* [22] H. Bursztyn, D. Iglesias Ponte, and P. Ševera. Courant morphisms and moment maps. Math. Res. Lett., 16(2):215–232, 2009.
* [23] Henrique Bursztyn, Marius Crainic, Alan Weinstein, and Chenchang Zhu. Integration of twisted Dirac brackets. Duke Math. J., 123(3):549–607, 2004.
* [24] Henrique Bursztyn and Thiago Drummond. Lie theory of multiplicative tensors. Math. Ann., 375(3-4):1489–1554, 2019.
* [25] Henrique Bursztyn and Rui Loja Fernandes. Picard groups of Poisson manifolds. J. Differential Geom., 109(1):1–38, 2018.
* [26] Henrique Bursztyn, David Iglesias-Ponte, and Jiang-Hua Lu. Dirac geometry and integration of Poisson homogeneous spaces. arXiv:1905.11453, page 40, 2020.
* [27] Henrique Bursztyn, Inocencio Ortiz, and Stefan Waldmann. Morita equivalence of formal Poisson structures. Int. Math. Res. Not. IMRN, (18):13703–13752, 2022.
* [28] Henrique Bursztyn and Alan Weinstein. Poisson geometry and Morita equivalence. In Poisson geometry, deformation quantisation and group representations, volume 323 of London Math. Soc. Lecture Note Ser., pages 1–78. Cambridge Univ. Press, Cambridge, 2005.
* [29] Alejandro Cabrera, M. Gualtieri, and E. Meinrenken. Dirac geometry of the holonomy fibration. Comm. Math. Phys., 355(3):865–904, 2017.
* [30] Damien Calaque. Shifted cotangent stacks are shifted symplectic. Ann. Fac. Sci. Toulouse Math. (6), 28(1):67–90, 2019.
* [31] Damien Calaque. Derived stacks in symplectic geometry. In New spaces in physics, pages 155–201. Cambridge Univ. Press, Cambridge, 2021.
* [32] Damien Calaque, Rune Haugseng, and Claudia Scheimbauer. The AKSZ Construction in Derived Algebraic Geometry as an Extended Topological Field Theory. Preprint arxiv:2108.02473.
* [33] Damien Calaque, Tony Pantev, Bertrand Toën, Michel Vaquié, and Gabriele Vezzosi. Shifted Poisson structures and deformation quantization. J. Topol., 10(2):483–584, 2017.
* [34] Alan L. Carey, Stuart Johnson, Michael K. Murray, Danny Stevenson, and Bai-Ling Wang. Bundle gerbes for Chern-Simons and Wess-Zumino-Witten theories. Comm. Math. Phys., 259(3):577–613, 2005.
* [35] A. S. Cattaneo and G. Felder. Poisson sigma models and symplectic groupoids. In Quantization of singular symplectic quotients, volume 198 of Progr. Math., pages 61–93. Birkhäuser, Basel, 2001.
* [36] Alberto S. Cattaneo, Pavel Mnev, and Konstantin Wernli. Split Chern-Simons theory in the BV-BFV formalism. In Quantization, geometry and noncommutative structures in mathematics and physics, Math. Phys. Stud., pages 293–324. Springer, Cham, 2017.
* [37] A. Coste, P. Dazord, and A. Weinstein. Groupoïdes symplectiques. In Publications du Département de Mathématiques. Nouvelle Série. A, Vol. 2, volume 87 of Publ. Dép. Math. Nouvelle Sér. A, pages i–ii, 1–62. Univ. Claude-Bernard, Lyon, 1987.
* [38] Matias del Hoyo and Cristian Ortiz. Morita equivalences of vector bundles. Int. Math. Res. Not. IMRN, (14):4395–4432, 2020.
* [39] Patrick Delorme. Classification des triples de Manin pour les algèbres de Lie réductives complexes. J. Algebra, 246(1):97–174, 2001. With an appendix by Guillaume Macey.
* [40] Robbert Dijkgraaf and Edward Witten. Topological gauge theories and group cohomology. Comm. Math. Phys., 129(2):393–429, 1990.
* [41] Johan L. Dupont. Simplicial de Rham cohomology and characteristic classes of flat bundles. Topology, 15(3):233–245, 1976.
* [42] J. Duskin. Higher-dimensional torsors and the cohomology of topoi: the abelian theory. In Applications of sheaves (Proc. Res. Sympos. Appl. Sheaf Theory to Logic, Algebra and Anal., Univ. Durham, Durham, 1977), volume 753 of Lecture Notes in Math., pages 255–279. Springer, Berlin, 1979.
* [43] John W. Duskin. Simplicial matrices and the nerves of weak $n$-categories. I. Nerves of bicategories. Theory Appl. Categ., 9:198–308 (electronic), 2001/02. CT2000 Conference (Como).
* [44] Shmuel Elitzur, Gregory Moore, Adam Schwimmer, and Nathan Seiberg. Remarks on the canonical quantization of the Chern-Simons-Witten theory. Nuclear Phys. B, 326(1):108–134, 1989.
* [45] P. Etingof and O. Schiffmann. Lectures on Quantum Groups. Lectures in Mathematical Physics. International Press, Somerville, MA, second edition, 2002.
* [46] Pavel Etingof and Alexander Varchenko. Geometry and classification of solutions of the classical dynamical Yang-Baxter equation. Comm. Math. Phys., 192(1):77–120, 1998.
* [47] Yves Félix, John Oprea, and Daniel Tanré. Algebraic models in geometry, volume 17 of Oxford Graduate Texts in Mathematics. Oxford University Press, Oxford, 2008.
* [48] Rui L. Fernandes, Du Li, Leonid Ryvkin, Arne Wessel, and Chenchang Zhu. Differentiation of higher Lie groupoids. Work in progress.
* [49] Domenico Fiorenza, Hisham Sati, and Urs Schreiber. A higher stacky perspective on Chern-Simons theory. In Mathematical aspects of quantum field theories, Math. Phys. Stud., pages 153–211. Springer, Cham, 2015.
* [50] Daniel S. Freed. Remarks on Chern-Simons theory. Bull. Amer. Math. Soc. (N.S.), 46(2):221–254, 2009.
* [51] Ezra Getzler. Differential forms on stacks [slides].
* [52] Ezra Getzler. Lie theory for nilpotent $L_{\infty}$-algebras. Annals of Mathematics. Second Series, 170(1):271–301, 2009. Available at http://arxiv.org/abs/math/0404003.
* [53] André Henriques. Integrating $L_{\infty}$-algebras. Arxiv version v1.
* [54] André Henriques. Integrating $L_{\infty}$-algebras. Compos. Math., 144(4):1017–1045, 2008.
* [55] André Henriques. What Chern-Simons theory assigns to a point. Proc. Natl. Acad. Sci. USA, 114(51):13418–13423, 2017.
* [56] Benjamin Hoffman and Reyer Sjamaar. Stacky Hamiltonian actions and symplectic reduction. Int. Math. Res. Not. IMRN, (20):15209–15300, 2021.
* [57] Zhen Huan. 2-Representations of Lie 2-groups and 2-Vector Bundles. Preprint arxiv:2208.10042.
* [58] Madeleine Jotz, Rajan Amit Mehta, and Theocharis Papantonis. Modules and representations up to homotopy of Lie n-algebroids. Preprint, arXiv:2001.01101, page 33, 2020.
* [59] Maxim Kontsevich. Formal (non)commutative symplectic geometry. In The Gel′fand Mathematical Seminars, 1990–1992, pages 173–187. Birkhäuser Boston, Boston, MA, 1993.
* [60] Y. Kosmann-Schwarzbach. Lie bialgebras, Poisson Lie groups and dressing transformations. In Integrability of nonlinear systems (Pondicherry, 1996), volume 495 of Lecture Notes in Phys., pages 104–170. Springer, Berlin, 1997.
* [61] Serge Lang. Differential and Riemannian manifolds, volume 160 of Graduate Texts in Mathematics. Springer-Verlag, New York, third edition, 1995.
* [62] Du Li. Higher Groupoid Actions, Bibundles, and Differentiation. PhD thesis, Georg-August University, Göttingen, http://ediss.uni-goettingen.de/handle/11858/00-1735-0000-0022-5F4F-A, August 2014.
* [63] David Li-Bland and Pavol Ševera. Integration of exact Courant algebroids. Electron. Res. Announc. Math. Sci., 19:58–76, 2012.
* [64] David Li-Bland and Pavol Ševera. Symplectic and Poisson geometry of the moduli spaces of flat connections over quilted surfaces. In Mathematical aspects of quantum field theories, Math. Phys. Stud., pages 343–411. Springer, Cham, 2015.
* [65] Jiang-Hua Lu and Alan Weinstein. Groupoïdes symplectiques doubles des groupes de Lie-Poisson. C. R. Acad. Sci. Paris Sér. I Math., 309(18):951–954, 1989\.
* [66] Jiang-Hua Lu and Alan Weinstein. Poisson Lie groups, dressing transformations, and Bruhat decompositions. J. Differential Geom., 31(2):501–526, 1990.
* [67] K. C. H. Mackenzie. On symplectic double groupoids and the duality of Poisson groupoids. Internat. J. Math., 10(4):435–456, 1999.
* [68] Kirill C. H. Mackenzie. General theory of Lie groupoids and Lie algebroids, volume 213 of London Mathematical Society Lecture Note Series. Cambridge University Press, Cambridge, 2005.
* [69] J. Peter May. Simplicial objects in algebraic topology. Chicago Lectures in Mathematics. University of Chicago Press, Chicago, IL, 1992. Reprint of the 1967 original.
* [70] R. A. Mehta and X. Tang. From double Lie groupoids to local Lie 2-groupoids. Bull. Braz. Math. Soc. (N.S.), 42(4):651–681, 2011.
* [71] R. A. Mehta and X. Tang. Constant symplectic 2-groupoids. Lett. Math. Phys., 108(5):1203–1223, 2018.
* [72] R. A. Mehta and X. Tang. Symplectic structures on the integration of exact Courant algebroids. J. Geom. Phys., 127:68–83, 2018.
* [73] Rajan Amit Mehta. $Q$-algebroids and their cohomology. J. Symplectic Geom., 7(3):263–293, 2009.
* [74] E. Meinrenken and C. Woodward. Moduli spaces of flat connections on $2$-manifolds, cobordism, and Witten’s volume formulas. In Advances in geometry, volume 172 of Progr. Math., pages 271–295. Birkhäuser Boston, Boston, MA, 1999.
* [75] Gregory W. Moore and Yuji Tachikawa. On 2d TQFTs whose values are holomorphic symplectic varieties. In String-Math 2011, volume 85 of Proc. Sympos. Pure Math., pages 191–207. Amer. Math. Soc., Providence, RI, 2012.
* [76] Michael Murray, David Michael Roberts, and Christoph Wockel. Quasi-periodic paths and a string 2-group model from the free loop group. J. Lie Theory, 27(4):1151–1177, 2017.
* [77] T. Pantev, B. Toën, M. Vaquié, and G. Vezzosi. Shifted symplectic structures. Publ. Math. Inst. Hautes Études Sci., 117:271–328, 2013.
* [78] Andrew Pressley and Graeme Segal. Loop groups. Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, New York, 1986. Oxford Science Publications.
* [79] J. P. Pridham. Presenting higher stacks as simplicial schemes. Adv. Math., 238:184–245, 2013.
* [80] J. P. Pridham. Shifted Poisson and symplectic structures on derived $N$-stacks. J. Topol., 10(1):178–210, 2017.
* [81] Jonathan P. Pridham. An outline of shifted Poisson structures and deformation quantisation in derived differential geometry. Preprint arxiv:1804.07622.
* [82] B. Pym and P. Safronov. Shifted symplectic Lie algebroids. Int. Math. Res. Not. IMRN, (21):7489–7557, 2020.
* [83] A. G. Reyman and M. A. Semenov-Tian-Shansky. Reduction of Hamiltonian systems, affine Lie algebras and Lax equations. Invent. Math., 54(1):81–100, 1979.
* [84] Christopher L. Rogers and Chenchang Zhu. On the homotopy theory for Lie $\infty$-groupoids, with an application to integrating $L_{\infty}$-algebras. Algebr. Geom. Topol., 20(3):1127–1219, 2020.
* [85] Stefano Ronchi. Higher Van Est theory. Ph.D. thesis in preparation, Georg-August-Universität Göttingen.
* [86] Dmitry Roytenberg. On the structure of graded symplectic supermanifolds and Courant algebroids. In Quantization, Poisson brackets and beyond (Manchester, 2001), volume 315 of Contemp. Math., pages 169–185. Amer. Math. Soc., Providence, RI, 2002.
* [87] P. Safronov. Quasi-Hamiltonian reduction via classical Chern-Simons theory. Adv. Math., 287:733–773, 2016.
* [88] P. Safronov. Poisson-Lie structures as shifted Poisson structures. Adv. Math., 381:Paper No. 107633, 68, 2021.
* [89] Albert Schwarz. Geometry of Batalin-Vilkovisky quantization. Comm. Math. Phys., 155(2):249–260, 1993.
* [90] P. Ševera. Some title containing the words “homotopy” and “symplectic”, e.g. this one. In Travaux mathématiques. Fasc. XVI, Trav. Math., XVI, pages 121–137. Univ. Luxemb., Luxembourg, 2005.
* [91] Pavol Ševera. ${L}_{\infty}$-algebras as 1-jets of simplicial manifolds (and a bit beyond). arXiv:math/0612349.
* [92] Pavol Ševera and Michal Širaň. Integration of differential graded manifolds. Int. Math. Res. Not. IMRN, (20):6769–6814, 2020.
* [93] Y. Sheng and C. Zhu. Higher extensions of Lie algebroids. Commun. Contemp. Math., 19(3):1650034, 41, 2017.
* [94] H. B. Shulman. Characteristic-Classes and Foliations. ProQuest LLC, Ann Arbor, MI, 1972. Thesis (Ph.D.)–University of California, Berkeley.
* [95] Stephan Stolz and Peter Teichner. What is an elliptic object? In Topology, geometry and quantum field theory, volume 308 of London Math. Soc. Lecture Note Ser., pages 247–343. Cambridge Univ. Press, Cambridge, 2004.
* [96] Hsian-Hua Tseng and Chenchang Zhu. Integrating Poisson manifolds via stacks. Travaux mathématique, 15:285–297, 2006.
* [97] Michel Van den Bergh. Double Poisson algebras. Trans. Amer. Math. Soc., 360(11):5711–5769, 2008.
* [98] Konrad Waldorf. Multiplicative bundle gerbes with connection. Differential Geom. Appl., 28(3):313–340, 2010.
* [99] Konrad Waldorf. String connections and Chern-Simons theory. Trans. Amer. Math. Soc., 365(8):4393–4432, 2013.
* [100] Charles A. Weibel. An introduction to homological algebra, volume 38 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1994.
* [101] A. Weinstein. The symplectic structure on moduli space. In The Floer memorial volume, volume 133 of Progr. Math., pages 627–635. Birkhäuser, Basel, 1995.
* [102] P. Xu. Momentum maps and Morita equivalence. J. Differential Geom., 67(2):289–333, 2004.
* [103] Ping Xu. Morita equivalent symplectic groupoids. In Symplectic geometry, groupoids, and integrable systems (Berkeley, CA, 1989), volume 20 of Math. Sci. Res. Inst. Publ., pages 291–311. Springer, New York, 1991.
* [104] C. Zhu. $n$-groupoids and stacky groupoids. Int. Math. Res. Not. IMRN, 2009(21):4087–4141, 2009.
* [105] Chenchang Zhu. Kan replacement of simplicial manifolds. Letters in Mathematical Physics, 90:383–405, 2009.
# Above-threshold ionization at laser intensity greater than $10^{20}$ W/cm2
A. Yandow Center for High Energy Density Science, The University of Texas at
Austin, 2515 Speedway Stop C1600, Austin, TX 78712 Lawrence Livermore National
Laboratory, Livermore, CA 94551<EMAIL_ADDRESS>T. N. Ha, C. Aniculaesei,
H. L. Smith, C. G. Richmond, M. M. Spinks, H. J. Quevedo, S. Bruce, M. Darilek,
C. Chang, D. A. Garcia, E. Gaul, M. E. Donovan, B. M. Hegelich, and T. Ditmire
Center for High Energy Density Science, The University of Texas at Austin,
2515 Speedway Stop C1600, Austin, TX 78712
###### Abstract
We present the first experimental observation of above-threshold ionization
(ATI) electrons produced by ionization of the neon K-shell in a laser field
where intensity exceeds $10^{20}$ W/cm2. An array of plastic scintillating
calorimeter detectors was used to measure the high-energy electrons at four
angles in the laser forward direction. Coarse energy resolution was obtained
using aluminum filters of several thicknesses to block lower-energy electrons.
A threshold intensity around $2\times 10^{20}$ W/cm2 is observed for
production of energetic ATI electrons in the laser forward direction, with
maximum electron energy exceeding 10 MeV. L-shell electrons with energies
$<1.4$ MeV are scattered further forward along the laser direction than
expected. We present comparisons of the measured total electron energies to
the predictions of Monte Carlo models employing the ADK-PPT ionization model
and the Augst barrier suppression ionization model.
## I Introduction
Above-threshold ionization (ATI) is the process by which an electron absorbs
many more photons than required for ionization during a laser-atom
interaction. Absorption of a single additional photon over the required
threshold was observed in 1979 by Agostini et al. [1]. The modern two-step
model of ATI was proposed by Corkum et al. to explain the absorption of
thousands of laser photons above the threshold [2], and it has been extended
to explain high-harmonic generation in gases [3][4] and nonsequential double
ionization (NSDI)[5][6][7]. The two-step model of ATI in a strong, near-
infrared laser field describes the ionization step with a quantum mechanical
tunneling model and predicts the subsequent electron dynamics by integrating
the classical Lorentz force equations. The two-step model of ATI has explained
a number of experimental observations, including relativistic electrons
gaining a momentum component in the laser forward direction[8] and the
preferential ejection of ATI electrons along the laser polarization direction
[9][10]. At present, ATI electrons with energies exceeding 1 MeV from argon
and xenon have been observed, corresponding to the absorption of one million
excess photons above the ionization threshold[10][11].
Tunneling ionization becomes the dominant ionization mechanism for near-
infrared laser fields when intensity exceeds $10^{14}$ W/cm2. Tunneling
ionization was first observed in the pioneering experiments of Augst et al.
[12][13]. The Ammosov-Delone-Krainov and Perelomov-Popov-Terent’ev (ADK-PPT)
model of tunneling ionization [14][15] has been verified with precision
measurements of argon charge states at intensity exceeding $2\times 10^{19}$
W/cm2 [16]. The highest ion charge states observed experimentally are
Ar$^{16+}$ [16][17], Xe$^{26+}$ [17][18], and Kr$^{24+}$ [18]. The probability of tunneling
ionization is a strongly nonlinear function of laser intensity, leading to the
use of high ion charge states as a direct peak laser intensity diagnostic with
varying degrees of success [18][19]. With laser intensity estimates calculated from indirect diagnostics exceeding $2\times 10^{22}$ W/cm$^2$ [20][21] and 10-PW-class laser facilities in their final stages of development [22], there has been considerable renewed interest in highly charged ions as a direct peak laser intensity diagnostic. Recent numerical studies of ionization have
developed a tunneling cascade ionization model for complex ions in a laser
field [23], demonstrated that K-shell ionization yields are the most robust
when considering different ionization models [24], and identified features of
the ionization yield curves that are robust when considering different
intensity distributions at the focal plane [25]. Monte Carlo simulations of ionization that include ion motion in the laser field demonstrate that the ions can be accelerated to energies that make conventional time-of-flight ion yield measurements impossible at intensities above $10^{21}$ W/cm$^2$ [26], so we explore the detection of the ATI electrons from these high charge states for future ionization physics experiments and direct laser intensity diagnostics.
Modulations in ATI electron energy spectra and angular distributions corresponding to ionization of different electron shells of the target atomic species have been observed for the N-, M-, and L-shells of krypton and xenon [11]. These modulations arise from the large difference in ionization potential between the atomic shells: ATI electrons produced by ionization of a deeply bound state are released only at a stronger laser electric field, and are therefore accelerated to higher energies than electrons from an outer-shell state [11]. The large ionization-potential gap between the L-shell and K-shell of neon will result in a strong modulation of the energy spectrum and angular distribution, with the K-shell electrons gaining about an order of magnitude more energy than the L-shell electrons.
This strong modulation in both the energy spectrum and the angular
distribution raises the possibility of a novel direct laser intensity
diagnostic, where the production of highly-charged ion states can be inferred
by the detection of high-energy ATI electrons ejected in the laser forward
direction. Such a diagnostic will be relatively easy to execute experimentally
using a low-density stream of noble gas as a target and a magnetic
spectrometer to detect the ATI electrons. The ionization of the K-shell in noble gases would allow for direct laser intensity measurement around $10^{20}$ W/cm$^2$ (Ne$^{9+}$), $3\times 10^{21}$ W/cm$^2$ (Ar$^{17+}$), and $10^{23}$ W/cm$^2$ (Kr$^{35+}$) in any laser field, provided the ADK-PPT tunneling ionization model can be verified experimentally on a laser system with reliable indirect diagnostics at these intensities as well.
ATI electrons hold significant promise as a direct laser intensity diagnostic between $10^{20}$ and $10^{24}$ W/cm$^2$, where ponderomotive expulsion of the ions becomes a significant obstacle to direct ionization yield measurement [26]. Recently proposed techniques using vacuum-accelerated electrons and protons [27][28] will not yield an accurate intensity estimate without a well-known pulse duration, because prepulses scatter target electrons [29] and the ions gain only a fraction of their ponderomotive potential energy [28]. The strong nonlinearity of the tunneling ionization rate prevents the K-shell electrons from being scattered by prepulses even when the laser contrast is as low as $10^{-3}$. The ceiling intensity of our method is about $10^{24}$ W/cm$^2$, above which expulsion of highly charged ions before the arrival of the peak laser field strength would prevent K-shell ionization [26]; there, an ion ponderomotive diagnostic such as that proposed in [27] would be most appropriate.
We report in this paper the observation of multi-MeV ATI electrons produced by the interaction of a low-density neon gas jet ($<3\times 10^{13}$ cm$^{-3}$) with a well-characterized laser pulse with intensity exceeding $10^{20}$ W/cm$^2$. We
observe these ATI electrons on four plastic scintillating calorimeter
detectors positioned in the laser forward direction and along the plane of
laser polarization. We compare the observed integrated ATI electron energy
yields to the predictions of an ADK-PPT Monte Carlo model of ATI and a barrier
suppression model of ATI, using methods similar to those described previously
elsewhere[26]. The measurements have qualitative similarities with the models’
predictions, including the existence of a threshold intensity above which the
ionization probability increases rapidly with intensity and a saturation
intensity above which the ATI electron energy yield is dominated by the focal
volume rather than by the probability of ionization in the center of the
focus. However, we observe poor quantitative agreement with the modeling,
which significantly underestimates the observed threshold intensity by a
factor between two and three. We also observe electrons with energies
exceeding $10$ MeV for the first time [30].
## II Experimental Design
Figure 1: Diagram of the experimental setup, showing five detectors arranged
around the laser-atom interaction region. Three detectors were placed in the
plane of laser polarization at angles of $0^{\circ}$, $30^{\circ}$, and
$53^{\circ}$ from the laser forward direction. One additional detector was
placed $60^{\circ}$ out of the plane of polarization and $43^{\circ}$ from the
laser forward direction. A control detector was placed $110^{\circ}$ from the
forward direction out of the polarization plane.
A diagram of the experimental setup is given in Fig. 1. A low-density plume of
neon is introduced in vacuum using a flow-calibrated orifice with a diameter
of 100 $\mu$m held at a backing pressure of 60 Torr located 4 mm below the
laser focus. We estimate a gas density of $3\times 10^{14}$ cm$^{-3}$ from a steady-state Ansys-Fluent simulation of the gas expansion into vacuum [31].
Five scintillating calorimeter detectors were placed around the focus. Four
detectors were placed in the laser forward direction, with one oriented along
the direction of laser propagation, two lying in the plane of laser
polarization, and one outside the polarization plane. The fifth detector was placed out of the forward direction and polarization plane, where no ATI electrons are expected, to serve as a control. We expect the higher-energy ATI electrons
to be ejected further towards the laser forward direction [8][11] and
preferentially ejected in the plane of laser polarization [9][10].
Each detector consisted of a 50 mm diameter, 40 mm long cylinder of long-
lifetime (285 ns) scintillating plastic (Eljen Technologies EJ240) coupled to
a photomultiplier tube (PMT) with a tapered voltage divider for optimal pulse
linearity. The plastic scintillators and PMTs were encased in vacuum-compatible PTFE housings made light-tight with colloidal graphite and aluminum foil. The plastic scintillator was chosen to decrease sensitivity to high-energy photons, and its long scintillation lifetime allowed each detector to function as a calorimeter, with the output signal providing an integrated measurement of the energy of all electrons incident on the detector. The
relatively large solid angle subtended by the detectors ($\sim$ 0.03
steradians) allowed several hundred ATI electrons to hit each detector,
enabling relatively accurate calorimeter measurements with only a few shots at
each laser intensity. Information about the shape of the energy spectrum was
obtained by varying the thickness of aluminum shielding in front of each
detector, which is explained further in the next section.
The scintillating detectors were calibrated by using standard pulse-height
analysis techniques to measure absorbed energy spectra from two gamma
radiation sources, Co-60 and Cs-137, at high operating voltage. An MeV-
equivalent charge was obtained by measuring the location of the Compton edges
in the acquired spectra and comparing to absorbed energy spectra calculated
using G4beamline [32], a Monte Carlo particle transport software package based
on Geant4. The uncertainty in this MeV-equivalent charge is between 20% and
25%, and is the dominant source of uncertainty in the ATI electron energy
yields. The PMT gain curves as a function of bias voltage were characterized
by exposure to a Q-switched Nd:YAG laser light source.
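The Compton-edge locations used in this calibration follow from two-body kinematics; a quick cross-check of the expected edges for the two sources (a sketch, not the analysis code used in the experiment):

```python
def compton_edge_mev(e_gamma_mev, m_e_c2=0.511):
    """Maximum energy transferred to an electron by Compton scattering
    of a photon of energy e_gamma (180-degree backscatter):
    T_max = E / (1 + m_e c^2 / (2E))."""
    return e_gamma_mev / (1.0 + m_e_c2 / (2.0 * e_gamma_mev))

# Cs-137 (0.662 MeV) and Co-60 (1.17 and 1.33 MeV) gamma lines give
# edges near 0.48, 0.96, and 1.12 MeV, respectively.
edges = [compton_edge_mev(e) for e in (0.662, 1.17, 1.33)]
```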
The output current pulse from each PMT was recorded on a Tektronix TDS5054
oscilloscope and digitally filtered to eliminate noise from electromagnetic
pulse (EMP) on each shot. The current pulse amplitude and integrated charge obey a linear relationship during normal detector operation. Above the upper saturation limit, the pulse amplitude increased without a corresponding increase in charge; this limit corresponded to $\sim 7$ GeV of integrated ATI electron energy absorbed in the scintillator.
The lower charge limit corresponded to a regime where residual ringing
disrupted the linear relationship, with large amplitude current pulses
integrating to near-zero charge, and was chosen to prevent uncertainty induced
by detector ringing from dominating over the uncertainty from the charge
calibration. Measurements falling outside these limits are excluded from the
figures presented in this paper but the lowest detector charge threshold is
marked if appropriate.
We performed a series of control tests to confirm that the observed
scintillator signal was caused by high-energy electrons. We compared the signal with minimal detector shielding at the $30^{\circ}$ and $110^{\circ}$ (control) positions and found that the signal at the forward detector position was at least two and a half orders of magnitude greater than the signal observed at the control detector position. We swapped the scintillating detectors
between these two positions several shots after the experiment start to verify
that the observed signal in the laser forward direction was not an artifact of
that particular scintillating detector. Multiple control shots were taken with no target gas to confirm the signal was not merely EMP. We also verified the effect was intensity-dependent and not energy-dependent by stretching the pulse to a length of 2 ps and observing the signal disappear. The observation of a much stronger intensity-dependent signal in
the laser forward direction means that the observed signal is generated by
high-energy electrons, as it cannot be attributed to a detector artifact. We
also included a number of helium control shots in this study to demonstrate
the contribution of any vacuum-accelerated L-shell electrons to the signal.
We used the Texas Petawatt Laser in rod-shot mode (64 mm Nd:silicate amplifier only), which allowed an increased repetition rate of 2.5 shots per hour. We installed an f/1.4 off-axis parabolic mirror (OAPM) to reach an intensity exceeding $2\times 10^{20}$ W/cm$^2$. Laser intensity was calculated using the
indirect Output Sensor Package (OSP) of the Texas Petawatt Laser, which
includes diagnostics to measure near field, equivalent far field (focal spot),
wavefront, pulse duration, and energy [20]. Pulse duration was deconvolved
from a second-order autocorrelation assuming a Gaussian pulse shape. Wavefront
was measured using a PHASICS SID4 wavefront sensor, and a deformable mirror
was used to optimize the laser wavefront before every shot.
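The deconvolution step is a single factor for the assumed pulse shape: the second-order intensity autocorrelation of a Gaussian pulse is wider than the pulse itself by $\sqrt{2}$. A minimal sketch:

```python
import math

def pulse_fwhm_from_autocorrelation(ac_fwhm_fs):
    """Deconvolve a second-order intensity autocorrelation assuming a
    Gaussian pulse shape: FWHM_pulse = FWHM_AC / sqrt(2)."""
    return ac_fwhm_fs / math.sqrt(2.0)
```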
The focal spot in the target chamber was measured using a Mitutoyo 50x plan
apochromatic long-working distance microscope objective (0.55 numerical
aperture), a 200 mm achromatic lens, and a vacuum-compatible CCD camera
mounted in a Thorlabs optical cage system. We estimate the central maximum of
the focal spot to have a full-width half-maximum (FWHM) of $2.6\pm 0.2$ $\mu$m from Gaussian fits of focal spot images. The far-field diagnostic plane
measured at OSP does not necessarily coincide with the plane of highest laser
intensity within the low-density gas jet due to defocus remaining in the
wavefront after correction, which can lead to a systematic underestimate of
the laser intensity. The focal spot profile used to compute the peak laser
intensity was calculated from the measured wavefront including all aberrations
except defocus. An inverted-field autocorrelator was used to diagnose pulse-
front tilt. We estimate a 42 fs pulse front tilt from the angular shift of the
far-field during grating optimization, for a total typical pulse duration of
170 fs. Intensity changes were achieved by inserting calibrated neutral-density (ND) filters before the rod amplifier in the TPW laser chain. The gain of the rod amplifier remained fixed to ensure that the amplified spectrum, compressed pulse duration, and laser wavefront remained the same when the pulse energy was decreased.
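As a rough cross-check of such intensity estimates, the peak intensity of an idealized Gaussian pulse focused to a Gaussian spot follows from the pulse energy, duration, and spot size. The sketch below ignores aberrations, pulse-front tilt, and energy scattered outside the central maximum, so it is a simplified estimate rather than the OSP calculation itself:

```python
import math

def peak_intensity_w_m2(energy_j, fwhm_duration_s, fwhm_spot_m):
    """Peak intensity of a Gaussian-in-time, Gaussian-in-space pulse.
    Peak power P0 = 2*sqrt(ln2/pi)*E/tau_FWHM (~0.94 E/tau);
    I0 = 2 P0 / (pi w0^2), with 1/e^2 radius w0 = FWHM/sqrt(2 ln 2)."""
    p0 = 2.0 * math.sqrt(math.log(2.0) / math.pi) * energy_j / fwhm_duration_s
    w0 = fwhm_spot_m / math.sqrt(2.0 * math.log(2.0))
    return 2.0 * p0 / (math.pi * w0 ** 2)
```

With, for example, 1 J in 170 fs focused to a 2.6 $\mu$m FWHM spot, this gives roughly $7\times 10^{19}$ W/cm$^2$.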
## III ATI Model Description
The theoretical K-shell electron yields and energy spectra were calculated
using the two-step quasi-classical models of ATI. A Monte Carlo simulation of
tunneling ionization in the laser field using the ADK-PPT model of
ionization[15][14][33] predicted the initial conditions for K-shell electrons
in the laser field. The static tunneling ionization rate for a single electron
expressed in atomic units is given by:
$W_{ADK-PPT}(t)=C_{n^{*}l^{*}}^{2}I_{p}\frac{(2l+1)(l+|m|)!}{2^{|m|}|m|!(l-|m|)!}\left(\frac{1}{2}\tilde{F}(t)\right)^{1+|m|-2n^{*}}\exp\left(-\frac{2}{3\tilde{F}(t)}\right)$ (1)
where the reduced field strength $\tilde{F}(t)$ is defined as
$\tilde{F}(t)=\frac{\sqrt{\textbf{E}^{*}(t)\textbf{E}(t)}}{(2I_{p})^{3/2}}$ (2)
Here $I_{p}$ is the ionization potential and $l,m$ are the orbital quantum
numbers. The extension of the original PPT model by Ammosov, Delone, and Krainov introduced an effective principal quantum number $n^{*}$ and an effective orbital quantum number $l^{*}$ given by
$n^{*}=\frac{Z}{\sqrt{2I_{p}}}$ (3)
$l^{*}=n^{*}_{0}-1$ (4)
where $Z$ is the residual charge ($Z=1$ for neutral atoms). The constants
$C_{n^{*}l^{*}}^{2}$ are expressed as
$C_{n^{*}l^{*}}^{2}\approx\left[\left(\frac{2e}{n^{*}}\right)^{n^{*}}\frac{1}{\sqrt{2\pi n^{*}}}\right]^{2}$ (5)
The exponential factor in Eq. (1) dominates the scaling of ionization probability with laser intensity, leading to an intensity threshold for the appearance of high ion charge states. Ion motion, although it has no significant effect on the ionization yield at $10^{20}$ W/cm$^2$, was included [26].
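Equations (1), (2), and (5) can be transcribed almost directly; the sketch below is a plain transcription in atomic units, not the experiment's production code:

```python
import math

def adk_ppt_rate(field_au, ip_au, z, l, m):
    """Static ADK-PPT tunneling rate of Eq. (1), atomic units.
    field_au: instantaneous field strength |E|; ip_au: ionization
    potential I_p; z: residual charge Z; l, m: orbital quantum numbers."""
    n_star = z / math.sqrt(2.0 * ip_au)                 # Eq. (3)
    f_tilde = field_au / (2.0 * ip_au) ** 1.5           # Eq. (2)
    c2 = ((2.0 * math.e / n_star) ** n_star
          / math.sqrt(2.0 * math.pi * n_star)) ** 2     # Eq. (5)
    angular = ((2 * l + 1) * math.factorial(l + abs(m))
               / (2 ** abs(m) * math.factorial(abs(m))
                  * math.factorial(l - abs(m))))
    return (c2 * ip_au * angular
            * (0.5 * f_tilde) ** (1 + abs(m) - 2 * n_star)
            * math.exp(-2.0 / (3.0 * f_tilde)))
```

The strongly nonlinear intensity dependence discussed in the text appears directly in the `exp(-2/(3*f_tilde))` factor.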
At each timestep, Eq. (1) was used to predict the probability of ionization and Monte Carlo methods were used to increment the ion charge state. Nonsequential double ionization (NSDI), inelastic tunneling [34][35][36], collective tunneling [37][35], and relativistic ionization effects [38][39][40][41][42] were excluded from the calculations. From these
initial conditions, the electron trajectories were calculated by integrating
the Lorentz force equations using an adaptive-timestep Runge-Kutta (RK45)
numerical method. A maximum of $10^{5}$ test electrons were simulated at each
intensity, originating within an isointensity boundary where ionization
outside could be neglected due to the strong ionization rate dependence on
intensity. We chose a series of model intensities between $3\times 10^{19}$ W/cm$^2$ and $4\times 10^{20}$ W/cm$^2$ to demonstrate the model behavior above and below the barrier suppression intensity of Ne$^{9+}$. Within each volume, we chose $2.5\times 10^{5}$ initial positions for neutral atoms. From these ionization events, we calculated the energy spectrum and angular distribution of at most $10^{5}$ ATI electrons.
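The Monte Carlo incrementing of the charge state can be sketched as follows; the per-state rate table is a hypothetical input standing in for repeated evaluations of Eq. (1) at the local field:

```python
import math
import random

def step_charge_state(q, rate_table, dt, rng=random):
    """One timestep of the ionization Monte Carlo: an ion in charge
    state q ionizes with probability 1 - exp(-W_q * dt), where W_q is
    the tunneling rate of the next electron to be removed
    (rate_table is a hypothetical per-state rate list)."""
    if q >= len(rate_table):
        return q                       # no more electrons tabulated
    w = rate_table[q]
    if rng.random() < 1.0 - math.exp(-w * dt):
        return q + 1
    return q
```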
An additional ATI Monte Carlo model using the Augst barrier suppression ionization (BSI) model [13] was developed as well. The simulations were performed similarly, except that the ionization event occurred at the first timestep where $\tilde{F}>1/(16n^{*})$ and did not occur otherwise. Figure 2 shows the K-shell ATI electron yield predicted by the Monte Carlo simulation as a function of laser intensity, assuming a gas density of $3\times 10^{14}$ cm$^{-3}$. Analytic predictions of the ADK-PPT model, the Tong-Lin-Lotstedt model for the tunneling rate near the barrier suppression regime [43][44], and the Augst BSI model compare favorably with the Monte Carlo modeling. The effect of barrier suppression corrections on the tunneling ionization rate, which is predicted to be significant for pulses shorter than 15 fs [45], can be safely neglected for this relatively long pulse duration.
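The barrier suppression condition corresponds to the familiar Augst appearance-intensity estimate $I_{BSI}\approx 4\times 10^{9}\,I_{p}^{4}/Z^{2}$ W/cm$^2$ with $I_p$ in eV; a quick check against the Ne$^{9+}$ intensity quoted in the introduction (the ionization potential below is a nominal literature value, an assumption of this sketch):

```python
def bsi_intensity_w_cm2(ip_ev, z):
    """Augst barrier-suppression appearance intensity (W/cm^2) for
    removing an electron with ionization potential ip_ev (eV),
    leaving an ion of residual charge z."""
    return 4.0e9 * ip_ev ** 4 / z ** 2

# K-shell electron of neon: nominal Ip ~ 1196 eV, residual charge Z = 9,
# giving an appearance intensity near 1e20 W/cm^2.
i_ne9 = bsi_intensity_w_cm2(1195.8, 9)
```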
Figure 2: Total number of K-shell ATI electrons predicted by different models
of ATI. Solid, dashed, and dotted curves are the predictions of the ADK-PPT
[14][15], Tong-Lin-Lotstedt model for helium-like ions [44], and barrier
suppression ionization [13]. Blue circles and green squares are from Monte
Carlo simulations using the ADK-PPT and BSI models, respectively. Gas density is assumed to be $3\times 10^{14}$ cm$^{-3}$ in the laser focus. Color figures available online.
Figure 3: a) Simulated energy spectrum of the K-shell electrons at an intensity of $1.06\times 10^{20}$ W/cm$^2$ (blue) and $4.20\times 10^{20}$ W/cm$^2$ (green, dashed). Inset shows the same curves on a linear scale. b) Simulated L-shell electron spectrum at the same two intensities. Color figures available online.
The laser focal spot is computed for every shot but we lack information on the
exact structure of the phase fronts as the laser pulse propagates through the
focal plane, so we make a considerable number of simplifying assumptions when
modeling the laser focus. Ionization rate depends strongly on intensity, so we
assume the K-shell electrons are all produced in the central maximum at the
focal plane, and we do not consider laser energy scattered outside the central
maximum in the model. We also make the assumption that we can treat this
central maximum as a Gaussian laser focus with nonparaxial corrections
included up to fifth order in the diffraction angle [46]. We assume a focus with a $1/e^{2}$ diameter of 2.25 $\mu$m, which we estimated from direct measurements in the target chamber. During the experiment rod shots, we estimate the $1/e^{2}$ spot of the central maximum was 2.2 $\pm$ 0.2 $\mu$m. We incorporate the measured pulse front tilt of 42 fs by assuming a Gaussian pulse shape with an intensity FWHM of 170 fs. Similar approaches have been
taken to model the laser fields at focus in previous experimental studies
[10][11][8]. Some particle-in-cell (PIC) methods have shown promise for
predicting the energy spectra of vacuum-accelerated electrons at an intensity of $10^{19}$ W/cm$^2$ [47], but no such methods have been applied to simulating
ATI electron dynamics in a laser field with a more complex spatial structure
than a Gaussian beam.
Figures 3a and 3b show the energy spectra of the K-shell and L-shell ATI electrons, respectively, predicted by the Monte Carlo ADK-PPT modeling at two intensities ($1.06\times 10^{20}$ W/cm$^2$ and $4.2\times 10^{20}$ W/cm$^2$). The predicted angular distributions of the ATI electrons at the same intensities are shown in Figure 4. The modeling predicts that the ATI electron energy spectra and angular distributions are strongly modulated, with the higher-energy K-shell electrons expelled at an angle around $25^{\circ}$ from the laser forward direction and the lower-energy L-shell electrons expelled at angles greater than $60^{\circ}$ from the laser forward direction.
Figure 4: Angular distribution of the K-shell electrons (no markers) and L-shell electrons (square markers) at laser intensities of $1.06\times 10^{20}$ W/cm$^2$ (solid, blue) and $4.2\times 10^{20}$ W/cm$^2$ (dashed, green). Color figures available online.
The modeled K-shell energy spectra in Figure 3a also show that the number of
high energy electrons ($>15$ MeV) produced can increase by more than an order
of magnitude as the laser intensity increases toward the maximum intensity
used in the experiment. However, other features of the K-shell electrons are relatively stable, with the peak of the energy spectrum in Fig. 3a (inset) increasing from 3.5 MeV to 4.7 MeV and the angular distribution in Figure 4 nearly unchanged. The modeling predicts that the energy yield attributed to the highest-energy electrons, which are observed on the $0^{\circ}$ detector, demonstrates a stronger scaling with laser intensity than at the other detectors.
The energy yield at this position increases with intensity due to both the
larger number of electrons generated in the focus, as shown in Figure 2, and
the larger number of electrons in the high-energy tail of the spectrum. The
energy yields detected at the $30^{\circ}$ position, where the electron
energies and angular distribution change little with increasing intensity,
will display a scaling dominated by the total number of electrons produced by
the K-shell ionization.
The L-shell electrons, shown in Figure 3b, are predicted to gain energies below 1 MeV and to be ejected from the focus at angles around $70^{\circ}$, and can therefore be separated from the K-shell electrons in both energy and angle, but we must treat these model predictions with caution. Vacuum acceleration of
electrons demonstrates very strong sensitivity to initial position in the
laser focus [48], leading to the possibility that the simulation method may
under-sample initial positions in the focus that yield L-shell electrons that
are accelerated to higher energies or ejected further towards the laser
propagation direction. The most recent study of vacuum acceleration of electrons from ionized helium in this intensity regime ($\sim 3\times 10^{20}$ W/cm$^2$) found
disagreement between the measured angular distribution of vacuum-accelerated
electrons and the angular electron distributions predicted by particle-in-cell
modeling. The authors suggested this discrepancy may be caused by poor
sampling of the focal volume in their simulations [49], and there is evidence
of L-shell electrons scattered as far forward as $30^{\circ}$ from the laser
propagation direction from the helium control shots.
The simulated ATI electron yields and energy spectra incident on each detector
are used to calculate the energy deposited in the plastic scintillator.
Several thicknesses of aluminum shielding were used in the experiment to block
lower-energy electrons. The detector efficiencies for each shielding thickness
were calculated using G4beamline [32]; the calculation includes energy deposited in the plastic scintillator by electrons, positrons, and high-energy photons generated in the interaction. The detector efficiencies are shown in Figure 5.
The detector efficiency at each energy and shielding thickness is calculated
from the simulated visible energy deposited in the scintillating plastic by a
monoenergetic beam of $10^{4}$ electrons with a divergence similar to the
incident ATI electrons.
Figure 5: Detector efficiencies at different aluminum thicknesses (right axis) alongside a simulated K-shell ATI electron spectrum using the ADK-PPT model at an intensity of $1.06\times 10^{20}$ W/cm$^2$ (left axis). Color figures available online.
The predicted energy yield in the scintillators can be computed by combining
the electron yields and energy spectra from the ATI modeling with the
calculated detector efficiencies. The predicted ATI electron energy yield is
given by
$Y_{ATI}(\theta,\phi,Z)=\int_{0}^{E_{max}}w_{V}E^{\prime}p(E^{\prime},\theta,\phi)\eta_{Al}(E^{\prime},Z)dE^{\prime}$
(6)
where $p(E,\theta,\phi)$ is the energy spectrum (count) at the detector
position, $\eta_{Al}(E,Z)$ is the detector efficiency with an aluminum
shielding thickness of Z, and $w_{V}$ is a volume weighting factor
corresponding to the number of real K-shell electrons produced per simulated ATI electron. The volume weighting factor is calculated using a gas density of $3\times 10^{14}$ cm$^{-3}$ and a confocal volume estimated by integrating a Gaussian beam volume bounded by the same isointensity shell used in the model simulations.
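Once the binned spectrum and efficiency curve are tabulated, Eq. (6) reduces to simple quadrature; a minimal sketch with illustrative array names:

```python
def predicted_energy_yield(energies, spectrum, efficiency, w_v):
    """Trapezoidal evaluation of Eq. (6): the volume-weighted integral
    of E' * p(E') * eta_Al(E') over the tabulated electron spectrum."""
    f = [e * p * eta for e, p, eta in zip(energies, spectrum, efficiency)]
    integral = sum(0.5 * (f[i] + f[i + 1]) * (energies[i + 1] - energies[i])
                   for i in range(len(energies) - 1))
    return w_v * integral
```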
We can gain some information about the energy spectrum of electrons at a given intensity by varying the shielding thickness $Z$ and measuring the energy deposited. Although the energy integration cannot be uniquely inverted to give
an electron spectrum, we can compare the predicted energy deposited to the
observed energy deposited and search for energy ranges where the ATI model
spectrum either overestimates or underestimates the electron number. From the
efficiency curve shapes, we conclude this method has poor resolution for
electron energies above $10$ MeV but can yield some energy information in the
0.3-6 MeV energy range.
## IV ATI Electron Energy Yields
A number of laser intensity scans were performed using different shielding configurations. On all unshielded detectors in the laser forward direction, the integrated electron energy increased by up to almost three orders of magnitude over the scanned intensity range. Helium control shots show that some of the electrons accelerated toward these detectors are low-energy L-shell electrons ($<$ 1.4 MeV) scattered in the laser forward direction by vacuum acceleration.
Figure 6: Measured electron energy deposited in a scintillating detector
located at $30^{\circ}$ from the laser forward direction in the polarization
plane. Unshielded and shielded (2.6 mm aluminum) intensity scans are given by
blue circles and green triangles, respectively. Helium control shots in the
unshielded configuration are given by purple crosses and the detector charge
floor for helium control shots in the shielded configuration is marked. Monte
Carlo simulations using ADK-PPT (solid) and BSI (dashed) models shown for
comparison. Color figures available online.
Figure 6 shows the energy absorbed in the scintillating detector at
$30^{\circ}$, where the number of K-shell ATI electrons is expected to be the
highest. Intensity scans with the unshielded scintillator (blue circle) and a
shielded configuration (green triangles) are shown to compare the total
integrated electron energy with the integrated energy from electrons with
energy greater than 2.8 MeV. Helium control shots (purple crosses) demonstrate
that the L-shell electrons contribute some of the deposited energy in the
unshielded configuration. The helium control shot energy yield is about an
order of magnitude lower than the neon yield at $2.5\times 10^{20}$ W/cm2,
demonstrating that the neon L-shell electrons account for $\sim 1/2$ of the
observed energy yield when accounting for the difference in electron density
at the focus. Two helium control shots taken in the shielded configuration
with the same backing pressure ($n_{a}\sim 3\times 10^{14}$ cm-3 yielded no
repeatable signal, with the dynamic range floor for these control shots marked
on Figure 6. The helium control shots establish an upper limit of 2.8 MeV for
vacuum-accelerated, which is slightly lower than the maximum energy of vacuum-
accelerated electrons by Kalashnikov near this angle in this intensity regime
[49].
The shielded measurements show a threshold intensity around $2\times 10^{20}$ W/cm$^2$, above which the probability of producing electrons with energy $>$ 2.8 MeV increases rapidly with intensity. A scaling transition around $3\times 10^{20}$ W/cm$^2$ marks the saturation intensity, where the scaling transitions from an ionization probability scaling dominated by the exponential term in Eq. (1) to a focal volume scaling. The threshold-like behavior and scaling transition are features of ATI that are mirrored in both Monte Carlo models, although neither model correctly predicts the threshold or saturation intensities and both overestimate the ionization yield.
Similar laser intensity scans at two additional positions are presented in
Figures 7 and 8, corresponding to positions $53^{\circ}$ from the laser
forward direction (in polarization plane) and $43^{\circ}$ from the laser
forward direction ($60^{\circ}$ out of the polarization plane), respectively.
These positions were chosen due to space restrictions in the target chamber and experimental setup. The helium control shots with no shielding installed are comparable to the deposited energies measured with neon in both cases, showing that the L-shell electrons contribute substantially to the signal. Installing a 1 mm aluminum shield, which blocks electrons with energy $<1.4$ MeV, decreases the electron energy yields by an order of magnitude at each detector. The electron energy yields in the shielded configuration show the same characteristic ATI features, the threshold and saturation intensities, seen in Figure 6, with the saturation effect somewhat more exaggerated because the K-shell ATI electrons will be ejected further forward in the laser direction as laser intensity continues to increase. The unshielded measurements are dominated by lower-energy electrons and do not display the clear scaling change visible in the shielded measurements, so they are likely L-shell electrons scattered in the laser forward direction by a vacuum acceleration process.
Figure 7: Measured energy deposited in a scintillating detector located at
$53^{\circ}$ from the laser forward direction in the polarization plane.
Unshielded and shielded (1 mm aluminum) intensity scans are given by blue
circles and green triangles, respectively. Helium control shots in the
unshielded configuration are given by purple crosses. Monte Carlo simulations
using ADK-PPT (solid) and BSI (dashed) models shown for comparison. Color
figures available online.
Figure 8: Measured energy deposited in a scintillating detector located at $43^{\circ}$ from the laser forward direction, $60^{\circ}$ out of the polarization plane. Unshielded and shielded (1 mm aluminum) intensity scans are given by blue circles and green triangles, respectively. Helium control shots in the unshielded configuration are given by purple crosses. Monte Carlo simulations using the ADK-PPT ionization model (solid) shown for comparison. Color figures available online.
At these larger angles, the ADK-PPT simulations tended to overestimate the
electron energy yields while the Augst BSI model tended to be an
underestimate, instead predicting a greater proportion of higher-energy ATI
electrons that would scatter further forward in the focus. The BSI model also
exhibited an unexpectedly strong polarization dependence for low-energy ATI
electrons because the probability of being “born” into the field off a laser
cycle peak is higher for ATI electrons produced by the rising edge of the
laser focus and scattered out before the arrival of peak laser intensity. An insufficient number of test electrons in the BSI simulations were scattered toward the $43^{\circ}$ detector, so only the ADK-PPT model is shown in Fig. 8.
Figure 9: Measured electron energy deposited in a scintillating detector located on the laser propagation axis. Two shielded (1 mm and 2.6 mm
aluminum) intensity scans are given by blue circles and green triangles,
respectively. Lowest detector charge floors are marked over the intensity
range where shots were taken in each shielding configuration. Monte Carlo
simulations using ADK-PPT (solid) and BSI (dashed) models shown for
comparison. Color figures available online.
Figure 9 shows the measured electron energy deposited in the scintillating
detector oriented in the laser forward direction, shielded with a minimum of 1
mm of aluminum to block electrons with energy $<$ 1.4 MeV. We observe a
threshold appearance intensity of $2\times 10^{20}$ W/cm2 for high-energy
electrons in the laser forward direction, which are found to be in the 10-16
MeV range [30]. The measured ATI electron energy yields along the laser
forward direction fall nearly two orders of magnitude lower than the ADK-PPT
and BSI simulation predictions. While a scaling transition is not obvious in
the measurements, it is important to note that the average energy of these
electrons is much higher than at other detector positions. A single 15 MeV
electron incident on this detector would yield a $\sim 500$ MeV/Sr response,
so some of these measurements between 2-3 $\times 10^{20}$ W/cm2 represent a
single-digit number of electrons, and uncertainty due to sampling statistics
obscures the scaling transition. Measurements falling below the instrument
dynamic range floor (hollow markers) at $10^{20}$ W/cm2 show that not even a
single one of these ATI electrons exceeding 10 MeV was detected below the
threshold intensity.
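The counting-statistics argument can be made concrete with a short sketch. The per-electron response ($\sim 500$ MeV/sr for a 15 MeV electron) is taken from the text above; the example energy yields are hypothetical, for illustration only:

```python
import math

# Per-electron detector response quoted in the text for a ~15 MeV electron.
RESPONSE_PER_ELECTRON = 500.0  # MeV/sr

# Hypothetical measured energy yields (MeV/sr), for illustration only.
for measured_yield in [1000.0, 2500.0, 5000.0]:
    n_electrons = measured_yield / RESPONSE_PER_ELECTRON
    rel_uncertainty = 1.0 / math.sqrt(n_electrons)  # Poisson counting error
    print(f"{measured_yield:6.0f} MeV/sr -> ~{n_electrons:.0f} electrons, "
          f"~{100 * rel_uncertainty:.0f}% shot-to-shot scatter")
```

With only a handful of electrons per shot, the expected Poisson scatter is tens of percent of the yield, which is large enough to wash out a scaling transition in the forward-direction measurements.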
We do not observe good quantitative agreement between the predicted ATI energy
yields of either Monte Carlo model and the measured energy yields, although
the measurements demonstrate self-consistent qualitative features of tunneling
ionization between the four detector positions. All show an appearance
intensity for a population of high-energy electrons above $2\times 10^{20}$
W/cm2 and three of the four detector positions show a consistent saturation
intensity. The ADK-PPT tunneling ATI model predicts these features will appear
on all detectors at about the same intensity, although the model intensity
underestimates the experimental intensity by a factor of 3-4. The BSI ATI
model predicts a narrower angular distribution of ATI electrons that broadens
as the intensity increases, the focal volume grows, and a broader range of
electron initial conditions over the focal volume and pulse duration are
sampled. This broadening of the ATI electron angular distribution as intensity
increases leads to the higher predicted appearance intensity at $53^{\circ}$.
Therefore, the measured ATI energy yields are more consistent with some form
of tunneling process that allows electrons to originate from a wider range of
initial conditions below the saturation intensity than with a true
intensity-threshold process.
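For context, the classical barrier-suppression ("appearance") intensity for the final ionization step studied here, Ne$^{8+}\rightarrow$ Ne$^{9+}$, can be estimated from the standard Augst formula [12, 13]. The He-like neon ionization potential ($\approx 1195.8$ eV) is a tabulated value, not a number from this paper:

```python
def augst_bsi_intensity(ip_ev, z_final):
    """Augst barrier-suppression intensity in W/cm^2.

    ip_ev:   ionization potential of the bound electron, in eV
    z_final: charge state of the ion left behind after ionization
    """
    return 4.0e9 * ip_ev**4 / z_final**2

# He-like neon: Ne8+ -> Ne9+, Ip ~ 1195.8 eV (tabulated value)
i_bsi = augst_bsi_intensity(1195.8, 9)
print(f"I_BSI(Ne8+ -> Ne9+) ~ {i_bsi:.1e} W/cm^2")  # ~1.0e+20 W/cm^2
```

This estimate of roughly $10^{20}$ W/cm2 sits about a factor of two below the observed appearance intensity of $2\times 10^{20}$ W/cm2, smaller than the factor of 3-4 shift seen for the ADK-PPT model intensity.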
## V Electron Energies
The limited number of laser shots and the low density of target gas necessary
to avoid collective plasma effects prevented measurement of the electron
spectrum using a magnetic spectrometer. We placed a series of aluminum filters
of different thicknesses in front of the scintillating plastic to gain
spectral information. While such a method provides only crude information
about the energy spectrum, it can be used to show the maximum ATI electron
energy is between 10-16 MeV. A comparison to the maximum energies predicted by
ATI models shows the maximum ATI energy range consistent with the measurements
falls between the predictions of relativistic and nonrelativistic
ponderomotive models [30].
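The relation between shield thickness and the electron energy needed to penetrate it can be sketched with the textbook Katz-Penfold CSDA range fit for electrons in aluminum. This bare-aluminum estimate ignores the detector housing and oblique incidence, so it only illustrates the method and comes out below the in-situ thresholds quoted in the text:

```python
import numpy as np

RHO_AL = 2.70  # aluminum density, g/cm^3

def katz_penfold_range(e_mev):
    """Empirical CSDA electron range in g/cm^2 (Katz-Penfold fit)."""
    e = np.asarray(e_mev, dtype=float)
    low = 0.412 * e**(1.265 - 0.0954 * np.log(e))  # valid 0.01-2.5 MeV
    high = 0.530 * e - 0.106                        # valid above 2.5 MeV
    return np.where(e <= 2.5, low, high)

def cutoff_energy_mev(thickness_mm):
    """Lowest energy (MeV) whose CSDA range exceeds the shield thickness."""
    energies = np.linspace(0.05, 20.0, 4000)
    ranges = katz_penfold_range(energies)  # monotonic over this window
    target = 0.1 * thickness_mm * RHO_AL   # areal density, g/cm^2
    return float(energies[np.searchsorted(ranges, target)])

for t_mm in [1.0, 2.6, 7.6]:  # shield thicknesses used in the experiment
    print(f"{t_mm} mm Al: bare-shield cutoff ~ {cutoff_energy_mev(t_mm):.1f} MeV")
```

The bare-shield cutoffs are lower than the quoted experimental thresholds (1.4 MeV for 1 mm, 2.8 MeV for 2.6 mm), presumably because additional material sits in the electron path, but the monotonic thickness-to-energy mapping is what makes the filter stack a crude spectrometer.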
Figures 10a and 10b show the measurements of integrated electron energy at the
$30^{\circ}$ position at two average laser intensities. The predictions of the
ADK-PPT Monte Carlo model and the BSI model at several intensities are marked
on Figures 10a and 10b, respectively. Figure 11 shows the measured electron
energy yield along the laser forward direction, and the predictions of the
ADK-PPT (solid) and BSI (dashed) models, and is consistent with a maximum ATI
electron energy in the 10-16 MeV range [30].
Both models show that quantitative agreement with the electron energy yield
measurements is possible only when the laser model intensity is taken to be
significantly less than the laser intensity computed using indirect laser
diagnostics. As with the laser intensity scans discussed in Section IV, we
observe the ADK-PPT ATI model provides a more consistent description of the
measurements between different detector positions, even though the model
intensity is four times lower than the estimated laser intensity in the
experiment. The BSI model does not make predictions that are consistent
between the on-axis and $30^{\circ}$ detectors, with the intensity that is
most consistent with the electron energy yields for the on-axis detector in
Figure 11 ($1.55\times 10^{20}$ W/cm2) underestimating the measurements at
$30^{\circ}$ by a factor of $\sim 5$. The ADK-PPT model shows a more
consistent model intensity around $10^{20}$ W/cm2 between the two detector
positions.
Some qualitative statements about the shape of the spectrum can be gathered by
comparing the measurements in Figure 10a to the detector efficiency curves in
Figure 5. We cannot draw many conclusions about the unshielded measurement
because of the evidence of forward-scattered L-shell electrons shown by the
control shots in Figure 6, so we cannot make a statement about the population
of electrons with energy $<2.8$ MeV. The Monte Carlo ADK-PPT model predicts a
steeper electron energy yield drop-off than the measurements, corresponding to
a model overestimate of the proportion of electrons with energy between
2.8-4.7 MeV and an underestimate of the number of electrons with energy $>6.5$
MeV. The BSI model predictions in Figure 10b show a decrease in electron
energy yield with shield thickness that is more consistent with measurements,
which could indicate an ionization process with higher onset intensity than
predicted by the ADK-PPT model.
Figure 10: Electron energy yields measured at the $30^{\circ}$ position with
varying shield thicknesses at two average intensities, $4.1\pm 0.4\times
10^{20}$ W/cm2 and $2.2\pm 0.4\times 10^{20}$ W/cm2, compared to the closest
predictions of a) the ADK-PPT Monte Carlo model (solid lines) and b) the Augst
BSI Monte Carlo model (dashed lines). Color figures available online. Figure
11: Electron energy yields measured along the laser forward direction with
varying shield thicknesses at an average intensity of $4.1\pm 0.4\times
10^{20}$ W/cm2. Solid curves (open circles) are ADK-PPT model predictions and
dashed curves (open squares) are BSI model predictions.
The electron energy yields predicted by the modeling at the on-axis position
shown in Figure 11 should be interpreted with care because the Gaussian focus
assumption made in the model is not a realistic description of the laser
fields. Higher-order spatial modes experience increased Gouy phase shifts as
the beam passes through the focus, limiting the distance over which a
relativistic electron can stay in phase with the peak of the paraxial laser
electric field to a fraction of a Rayleigh range; this should substantially
decrease the maximum ATI electron energy. The fields of higher-order spatial
modes will also scatter high-energy electrons over a larger range of angles
than expected from a Gaussian model, which could explain why the ADK-PPT model
underestimates the electron energy yield at the $30^{\circ}$ detector with 7.6
mm of shielding in Figure 10a. Further development of ATI simulation
techniques to incorporate a more realistic model of the laser fields is
necessary to further study ATI electrons and develop laser intensity
diagnostics using ATI electrons.
We performed a similar analysis for the detectors at the $53^{\circ}$ and
$43^{\circ}$ positions, and found that the installation of a 1 mm aluminum
shield decreased the energy deposited by more than an order of magnitude, as
seen in Figs. 7 and 8. No repeatable signal was observed when 2.6 mm of
shielding was used, limiting the maximum ATI electron energy at these two
angles to below 2.8 MeV.
## VI Conclusion
To the best of our knowledge, we report the first observation of ATI electrons
with energies exceeding 10 MeV, as well as the first indirect evidence of the
ionization of helium-like neon in an intense laser field. We measured the
energy deposited in an array of scintillating detectors by high-energy ATI
electrons, performed scans of laser intensity in several configurations, and
presented a comparison with two Monte Carlo models of neon K-shell
ionization. The ADK-PPT ATI model predicted roughly consistent appearance and
saturation intensities between the four detector positions, a qualitative
prediction consistent with the experimental measurements, although the ADK-PPT
model significantly underestimated these intensities. These qualitative
features were absent from the BSI Monte Carlo modeling because, unlike in the
ADK-PPT model, no probabilistic tunneling process broadens the range of
electron initial positions in the focal volume and laser phases at ionization.
Intensities derived from the ADK-PPT model using ionization-yield measurements
in prior studies have not always demonstrated consistency with laser
intensities calculated from indirect diagnostic measurements, or
self-consistency when
different atomic species are used. Ionization of lithium-like argon (Ar16+)
has been demonstrated to occur in an intensity range from $1-2\times 10^{19}$
W/cm2 in two different studies [16, 17]. Ionization yields of xenon in the
same laser field were found to give an ADK-PPT model intensity of $3.5\times
10^{18}$ W/cm2, much lower than the indirectly estimated intensity of
$2.6\times 10^{19}$ W/cm2 or argon-yield ADK-PPT model-derived intensity of
$1.3\times 10^{19}$ W/cm2 [17]. The authors emphasized the repeatability of
their results but were not able to provide a theoretical explanation for the
systematic decrease of model-derived intensity with atomic number. Chowdhury
et al. similarly calculated a model intensity from precision measurements of
argon charge states and found a similarly low model intensity, although it was
within the lower bound of their experimental intensity uncertainty [16]. An
ADK-PPT model intensity shift factor of $\sim 4$ was not expected for
ionization of helium-like neon given the simplicity of the electronic shell
structure and given how precisely helium ionization yields agree with the ADK-
PPT model [6].
Some recent modifications to the ADK-PPT model have been proposed to account
for barrier suppression effects for helium-like ions [44], but they are
typically more relevant for pulses much shorter than 170 fs [45] and L-shell
or M-shell orbitals [24], which we confirmed in the calculations presented in
Figure 2. Relativistic corrections that suppress the ionization rate are
predicted to be negligible at an intensity of $10^{20}$ W/cm2 [38, 39]. The
spectral information we were able to obtain by increasing the shielding
thickness at $30^{\circ}$ may be consistent with a higher threshold intensity
accelerating electrons ejected at this angle to higher energies but the model
of the laser fields is not realistic enough to demonstrate this agreement
conclusively. Our observation of a neon K-shell ionization intensity above
$10^{20}$ W/cm2 may be a reason why it has not been reported in previous
studies, but no study has explicitly stated that neon charge states were not
observed in this intensity range. Momentum conservation during the ionization
process will accelerate the ions to energies on the order of tens of eV, so
spectrometer design in previous studies may have been a factor as well.
Our observation of forward-scattered L-shell electrons is unexpected from the
simplified model of the laser focus used in this paper, but is consistent with
other reported experiments. Kalashnikov et al. report vacuum-accelerated
electrons from helium over a similar laser intensity range and angular
distribution [49]. They also found disagreement with the angular distributions
of vacuum-accelerated electrons predicted by both their particle-in-cell
modeling and analytical calculations, which predicted a local maximum around
$20^{\circ}$ for forward-scattered electrons. Instead they observed the
electron number to increase monotonically as angle increased from $5^{\circ}$ to $70^{\circ}$,
which they attribute to poor sampling of initial conditions in the focal
volume. A comprehensive model of the L-shell electrons in the detected energy
range ($>$ 0.3 MeV) will likely have to take into account pulse shape [29], focal
spot asymmetry [50], and a more realistic model of laser fields at the focal
plane to match experiment.
At laser intensities exceeding $10^{21}$ W/cm2, ATI electrons from the K-shell of
argon ($>3\times 10^{21}$ W/cm2) and krypton ($>10^{23}$ W/cm2) are predicted
to exceed energies of 100 MeV and 1 GeV, respectively. These ATI electrons
will be ejected very nearly in the laser forward direction and hold promise as
a low-dose ultrafast radiation source and as a direct laser intensity
diagnostic. Measurement of the ATI electron spectrum would be more
straightforward than the measurements presented in this paper, as the high
energy and low electron divergence would enable the use of a large-aperture
magnetic spectrometer located outside the vacuum chamber and along the laser
forward direction. Similar scintillating detectors could be placed behind the
magnet to detect ATI electrons in different energy ranges. Vacuum acceleration
of the L-shell electrons to comparable energies can be suppressed by
engineering a $\sim 10^{-2}$ pre-pulse that arrives a few pulse durations
before the main laser pulse [29].
###### Acknowledgements.
A. Y. acknowledges helpful conversations with E. Chowdhury regarding the
design of this experiment. This work was supported by the DOE, Office of
Science, Fusion Energy Sciences under Contract No. DE-SC0021125: LaserNetUS: A
Proposal to Advance North America’s First High Intensity Laser Research
Network, the Air Force Office of Scientific Research through Awards No.
FA9550-14-1-0045 and No. FA9550-17-1-0264, and the National Nuclear Security
Administration (NNSA) through Award No. NA0002008. This work was also performed under
the auspices of the U.S. Department of Energy by Lawrence Livermore National
Laboratory under Contract DE-AC52-07NA27344. A. Y. gratefully acknowledges the
generous support of the Jane and Michael Downer Fellowship in Laser Physics in
Memory of Glenn Bryan Focht.
## References
* Agostini _et al._ [1979] P. Agostini, F. Fabre, G. Mainfray, G. Petite, and N. K. Rahman, Free-Free Transitions Following Six-Photon Ionization of Xenon Atoms, Physical Review Letters 42, 1127 (1979).
* Corkum _et al._ [1989] P. B. Corkum, N. H. Burnett, and F. Brunel, Above-threshold ionization in the long-wavelength limit, Physical Review Letters 62, 1259 (1989).
* Corkum [1993] P. B. Corkum, Plasma perspective on strong field multiphoton ionization, Physical Review Letters 71, 1994 (1993).
* Krause _et al._ [1992] J. L. Krause, K. J. Schafer, and K. C. Kulander, High-order harmonic generation from atoms and ions in the high intensity regime, Physical Review Letters 68, 3535 (1992).
* Watson _et al._ [1997] J. B. Watson, A. Sanpera, D. G. Lappas, P. L. Knight, and K. Burnett, Nonsequential Double Ionization of Helium, Physical Review Letters 78, 1884 (1997).
* Walker _et al._ [1994] B. Walker, B. Sheehy, L. F. DiMauro, P. Agostini, K. J. Schafer, and K. C. Kulander, Precision Measurement of Strong Field Double Ionization of Helium, Physical Review Letters 73, 1227 (1994).
* Fittinghoff _et al._ [1992] D. N. Fittinghoff, P. R. Bolton, B. Chang, and K. C. Kulander, Observation of nonsequential double ionization of helium with optical tunneling, Physical Review Letters 69, 2642 (1992).
* Moore _et al._ [1995] C. I. Moore, J. P. Knauer, and D. D. Meyerhofer, Observation of the Transition from Thomson to Compton Scattering in Multiphoton Interactions with Low-Energy Electrons, Physical Review Letters 74, 2439 (1995).
* McNaught _et al._ [1998] S. J. McNaught, J. P. Knauer, and D. D. Meyerhofer, Photoelectron initial conditions for tunneling ionization in a linearly polarized laser, Physical Review A 58, 1399 (1998).
* DiChiara _et al._ [2008] A. D. DiChiara, I. Ghebregziabher, R. Sauer, J. Waesche, S. Palaniyappan, B. L. Wen, and B. C. Walker, Relativistic MeV Photoelectrons from the Single Atom Response of Argon to a 1019 W/cm2 Laser Field, Physical Review Letters 101, 173002 (2008).
* Ekanayake _et al._ [2013] N. Ekanayake, S. Luo, P. D. Grugan, W. B. Crosby, A. D. Camilo, C. V. McCowan, R. Scalzi, A. Tramontozzi, L. E. Howard, S. J. Wells, C. Mancuso, T. Stanev, M. F. Decamp, and B. C. Walker, Electron Shell Ionization of Atoms with Classical, Relativistic Scattering, Physical Review Letters 110, 203003 (2013).
* Augst _et al._ [1989] S. Augst, D. Strickland, D. D. Meyerhofer, S. L. Chin, and J. H. Eberly, Tunneling ionization of noble gases in a high-intensity laser field, Physical Review Letters 63, 2212 (1989).
* Augst _et al._ [1991] S. Augst, D. D. Meyerhofer, D. Strickland, and S. L. Chin, Laser ionization of noble gases by Coulomb-barrier suppression, Journal of the Optical Society of America B 8, 858 (1991).
* Perelomov _et al._ [1966] A. Perelomov, V. Popov, and M. Terent’ev, Ionization of Atoms in an Alternating Electric Field, Soviet Journal of Experimental and Theoretical Physics 23, 924 (1966).
* Ammosov _et al._ [1986] M. V. Ammosov, N. B. Delone, and V. P. Krainov, Tunnel ionization of complex atoms and of atomic ions in an alternating electromagnetic field, Journal of Experimental and Theoretical Physics 64, 1191 (1986).
* Chowdhury _et al._ [2001] E. A. Chowdhury, C. P. J. Barty, and B. C. Walker, “Nonrelativistic” ionization of the L -shell states in argon by a “relativistic” $10^{19}$ W/cm2 laser field, Physical Review A 63, 042712 (2001).
* Yamakawa _et al._ [2003] K. Yamakawa, Y. Akahane, Y. Fukuda, M. Aoyama, N. Inoue, and H. Ueda, Ionization of many-electron atoms by ultrafast laser pulses with peak intensities greater than $10^{19}$ W/cm2, Physical Review A 68, 065403 (2003).
* Akahane _et al._ [2006] Y. Akahane, J. Ma, Y. Fukuda, M. Aoyoma, H. Kiriyama, J. V. Sheldakova, A. V. Kudryashov, and K. Yamakawa, Characterization of wave-front corrected 100 TW, 10 Hz laser pulses with peak intensities greater than 1020 W/cm2, Review of Scientific Instruments 77, 023102 (2006).
* Link _et al._ [2006] A. Link, E. A. Chowdhury, J. T. Morrison, V. M. Ovchinnikov, D. Offermann, L. Van Woerkom, R. R. Freeman, J. Pasley, E. Shipton, F. Beg, P. Rambo, J. Schwarz, M. Geissel, A. Edens, and J. L. Porter, Development of an in situ peak intensity measurement method for ultraintense single shot laser-plasma experiments at the Sandia Z petawatt facility, Review of Scientific Instruments 77, 10E723 (2006).
* Tiwari _et al._ [2019] G. Tiwari, E. Gaul, M. Martinez, G. Dyer, J. Gordon, M. Spinks, T. Toncian, B. Bowers, X. Jiao, R. Kupfer, L. Lisi, E. McCary, R. Roycroft, A. Yandow, G. D. Glenn, M. Donovan, T. Ditmire, and B. M. Hegelich, Beam distortion effects upon focusing an ultrashort petawatt laser pulse to greater than $10^{22}$ W/cm2, Optics Letters 44, 2764 (2019).
* Yoon _et al._ [2019] J. W. Yoon, C. Jeon, J. Shin, S. K. Lee, H. W. Lee, I. W. Choi, H. T. Kim, J. H. Sung, and C. H. Nam, Achieving the laser intensity of $5.5\times 10^{22}$ W/cm2 with a wavefront-corrected multi-PW laser, Optics Express 27, 20412 (2019).
* Rus _et al._ [2017] B. Rus, P. Bakule, D. Kramer, J. Naylon, J. Thoma, M. Fibrich, J. T. Green, J. C. Lagron, R. Antipenkov, J. Bartoníček, F. Batysta, R. Baše, R. Boge, S. Buck, J. Cupal, M. A. Drouin, M. Ďurák, B. Himmel, T. Havlíček, P. Homer, A. Honsa, M. Horáček, P. Hríbek, J. Hubáček, Z. Hubka, G. Kalinchenko, K. Kasl, L. Indra, P. Korous, M. Košelja, L. Koubíková, M. Laub, T. Mazanec, A. Meadows, J. Novák, D. Peceli, J. Polan, D. Snopek, V. Šobr, P. Trojek, B. Tykalewicz, P. Velpula, E. Verhagen, Š. Vyhlídka, J. Weiss, C. Haefner, A. Bayramian, S. Betts, A. Erlandson, J. Jarboe, G. Johnson, J. Horner, D. Kim, E. Koh, C. Marshall, D. Mason, E. Sistrunk, D. Smith, T. Spinka, J. Stanley, C. Stolz, T. Suratwala, S. Telford, T. Ditmire, E. Gaul, M. Donovan, C. Frederickson, G. Friedman, D. Hammond, D. Hidinger, G. Chériaux, A. Jochmann, M. Kepler, C. Malato, M. Martinez, T. Metzger, M. Schultze, P. Mason, K. Ertel, A. Lintern, C. Edwards, C. Hernandez-Gomez, and J. Collier, ELI-beamlines: progress in development of next generation short-pulse laser systems, in _Research Using Extreme Light: Entering New Frontiers with Petawatt-Class Lasers III_, Vol. 10241, edited by G. Korn and L. O. Silva (2017) p. 102410J.
* Ciappina _et al._ [2019] M. F. Ciappina, S. V. Popruzhenko, S. V. Bulanov, T. Ditmire, G. Korn, and S. Weber, Progress toward atomic diagnostics of ultrahigh laser intensities, Physical Review A 99, 043405 (2019).
* Ciappina _et al._ [2020] M. F. Ciappina, E. E. Peganov, and S. V. Popruzhenko, Focal-shape effects on the efficiency of the tunnel-ionization probe for extreme laser intensities, Matter and Radiation at Extremes 5, 044401 (2020), arXiv:2002.11222 .
* Ciappina and Popruzhenko [2020] M. F. Ciappina and S. V. Popruzhenko, Diagnostics of ultra-intense laser pulses using tunneling ionization, Laser Physics Letters 17, 025301 (2020), arXiv:1911.11233 .
* Yandow _et al._ [2019] A. Yandow, T. Toncian, and T. Ditmire, Direct laser ion acceleration and above-threshold ionization at intensities from $5\times 10^{21}$ W/cm2 to $3\times 10^{23}$ W/cm2, Physical Review A 100, 053406 (2019), arXiv:arXiv:1909.02158v2 .
* Vais _et al._ [2020] O. E. Vais, A. G. R. Thomas, A. M. Maksimchuk, K. Krushelnick, and V. Y. Bychenkov, Characterizing extreme laser intensities by ponderomotive acceleration of protons from rarified gas, New Journal of Physics 22, 023003 (2020).
* Vais and Bychenkov [2021] O. E. Vais and V. Y. Bychenkov, Complementary diagnostics of high-intensity femtosecond laser pulses via vacuum acceleration of protons and electrons, Plasma Physics and Controlled Fusion 63, 014002 (2021).
* Vais and Bychenkov [2018] O. E. Vais and V. Y. Bychenkov, Direct electron acceleration for diagnostics of a laser pulse focused by an off-axis parabolic mirror, Applied Physics B 124, 211 (2018).
* Yandow _et al._ [2023] A. Yandow, T. N. Ha, C. Aniculaesei, H. L. Smith, C. G. Richmond, M. M. Spinks, H. J. Quevedo, S. Bruce, D. A. Garcia, M. Darilek, C. Chang, E. Gaul, M. E. Donovan, B. M. Hegelich, and T. Ditmire, Multi-MeV electrons from above-threshold ionization of the neon K-shell (2023).
* [31] Ansys Fluent, Release 12.0.
* Roberts and Kaplan [2007] T. J. Roberts and D. M. Kaplan, G4beamline simulation program for matter-dominated beamlines, in _2007 IEEE Particle Accelerator Conference (PAC)_ (IEEE, 2007) pp. 3468–3470.
* Popov [2004] V. S. Popov, Tunnel and multiphoton ionization of atoms and ions in a strong laser field (Keldysh theory), Physics-Uspekhi 47, 855 (2004).
* Kornev _et al._ [2003] A. S. Kornev, E. B. Tulenko, and B. A. Zon, Kinetics of multiple ionization of rare-gas atoms in a circularly polarized laser field, Physical Review A 68, 043414 (2003).
* Zon [1999] B. A. Zon, Many-electron tunneling in atoms, Journal of Experimental and Theoretical Physics 89, 219 (1999).
* Bryan _et al._ [2006] W. A. Bryan, S. L. Stebbings, J. McKenna, E. M. L. English, M. Suresh, J. Wood, B. Srigengan, I. C. E. Turcu, J. M. Smith, E. J. Divall, C. J. Hooker, A. J. Langley, J. L. Collier, I. D. Williams, and W. R. Newell, Atomic excitation during recollision-free ultrafast multi-electron tunnel ionization, Nature Physics 2, 379 (2006).
* Kornev _et al._ [2009] A. S. Kornev, E. B. Tulenko, and B. A. Zon, Many-body effects in multiply charged ion formation in a superstrong laser field, Physical Review A 79, 063405 (2009).
* Milosevic _et al._ [2002a] N. Milosevic, V. P. Krainov, and T. Brabec, Relativistic theory of tunnel ionization, Journal of Physics B: Atomic, Molecular and Optical Physics 35, 311 (2002a).
* Milosevic _et al._ [2002b] N. Milosevic, V. P. Krainov, and T. Brabec, Semiclassical Dirac Theory of Tunnel Ionization, Physical Review Letters 89, 193001 (2002b).
* Mur _et al._ [1998] V. D. Mur, B. M. Karnakov, and V. S. Popov, Relativistic version of the imaginary-time formalism, Journal of Experimental and Theoretical Physics 87, 433 (1998).
* Yakaboylu _et al._ [2013] E. Yakaboylu, M. Klaiber, H. Bauke, K. Z. Hatsagortsyan, and C. H. Keitel, Relativistic features and time delay of laser-induced tunnel ionization, Physical Review A 88, 063421 (2013), arXiv:1309.0610 .
* Klaiber _et al._ [2013] M. Klaiber, E. Yakaboylu, H. Bauke, K. Z. Hatsagortsyan, and C. H. Keitel, Under-the-Barrier Dynamics in Laser-Induced Relativistic Tunneling, Physical Review Letters 110, 153004 (2013), arXiv:1205.2004 .
* Tong and Lin [2005] X. M. Tong and C. D. Lin, Empirical formula for static field ionization rates of atoms and molecules by lasers in the barrier-suppression regime, Journal of Physics B: Atomic, Molecular and Optical Physics 38, 2593 (2005).
* Lötstedt _et al._ [2020] E. Lötstedt, M. F. Ciappina, and K. Yamanouchi, Static-field ionization model of He-like ions for diagnostics of light-field intensity, Physical Review A 102, 013112 (2020).
* Kostyukov and Golovanov [2018] I. Y. Kostyukov and A. A. Golovanov, Field ionization in short and extremely intense laser pulses, Physical Review A 98, 043407 (2018).
* Salamin [2007] Y. Salamin, Fields of a Gaussian beam beyond the paraxial approximation, Applied Physics B 86, 319 (2007).
* Ivanov _et al._ [2018] K. A. Ivanov, I. N. Tsymbalov, O. E. Vais, S. G. Bochkarev, R. V. Volkov, V. Y. Bychenkov, and A. B. Savel’ev, Accelerated electrons for in situ peak intensity monitoring of tightly focused femtosecond laser radiation at high intensities, Plasma Physics and Controlled Fusion 60, 105011 (2018).
* Popov _et al._ [2008] K. I. Popov, V. Y. Bychenkov, W. Rozmus, and R. D. Sydora, Electron vacuum acceleration by a tightly focused laser pulse, Physics of Plasmas 15, 013108 (2008).
* Kalashnikov _et al._ [2015] M. Kalashnikov, A. Andreev, K. Ivanov, A. Galkin, V. Korobkin, M. Romanovsky, O. Shiryaev, M. Schnuerer, J. Braenzel, and V. Trofimov, Diagnostics of peak laser intensity based on the measurement of energy of electrons emitted from laser focal region, Laser and Particle Beams 33, 361 (2015).
* Hegelich _et al._ [2023] B. M. Hegelich, L. Labun, and O. Z. Labun, Revisiting Experimental Signatures of the Ponderomotive Force, Photonics 10, 1 (2023).
††thanks: These authors contributed equally.
# Evidence of many-body localization in 2D from quantum Monte Carlo simulation
Ho-Kin Tang Centre for Advanced 2D Materials, National University of
Singapore, 6 Science Drive 2, Singapore 117546 School of Science, Harbin
Institute of Technology, Shenzhen, P. R. China 518055 N. Swain Centre for
Advanced 2D Materials, National University of Singapore, 6 Science Drive 2,
Singapore 117546 School of Physical and Mathematical Sciences, Nanyang
Technological University, 21 Nanyang Link, Singapore 637371 D. C. W. Foo
Centre for Advanced 2D Materials, National University of Singapore, 6 Science
Drive 2, Singapore 117546 B. J. J. Khor Centre for Advanced 2D Materials,
National University of Singapore, 6 Science Drive 2, Singapore 117546 G.
Lemarié MajuLab, CNRS-UCA-SU-NUS-NTU International Joint Research Unit IRL
3654, Singapore Centre for Quantum Technologies, National University of
Singapore, Singapore 117543 Laboratoire de Physique Théorique, Université de
Toulouse, CNRS, UPS, France F. F. Assaad Institut für Theoretische Physik
und Astrophysik and Würzburg-Dresden Cluster of Excellence ct.qmat,
Universität Würzburg, 97074 Würzburg, Germany S. Adam Department of
Materials Science and Engineering, National University of Singapore, 9
Engineering Drive 1, Singapore 117575 Yale-NUS College, 16 College Ave West,
Singapore 138527 Department of Physics, Faculty of Science, National
University of Singapore, 2 Science Drive 3, Singapore 117542 Centre for
Advanced 2D Materials, National University of Singapore, 6 Science Drive 2,
Singapore 117546 P. Sengupta Centre for Advanced 2D Materials, National
University of Singapore, 6 Science Drive 2, Singapore 117546 School of
Physical and Mathematical Sciences, Nanyang Technological University, 21
Nanyang Link, Singapore 637371
###### Abstract
We use the stochastic series expansion quantum Monte Carlo method, together
with the eigenstate-to-Hamiltonian construction, to map the localized Bose
glass ground state of the disordered two-dimensional Heisenberg model to
excited states of new target Hamiltonians. The localized nature of the ground
state is established by studying the participation entropy, local entanglement
entropy, and local magnetization, all known in the literature to also be
identifying characteristics of many-body localized states. Our construction
maps the ground state of the parent Hamiltonian to a single excited state of a
new target Hamiltonian, which retains the same form as the parent Hamiltonian,
albeit with correlated and large disorder. We furthermore provide evidence
that the mapped eigenstates are genuine localized states and not special
zero-measure localized states like quantum scar states. Our results provide
concrete evidence for the existence of the many-body localized phase in two
dimensions.
Introduction: Disorder and interactions induce novel phases and phenomena in
quantum many-body systems. Disordered non-interacting systems are known to
have localized states in one and two dimensions Anderson (1958); Mott and
Twose (1961); Gol’dshtein _et al._ (1977); Evers and Mirlin (2008). In recent
years, it has emerged that localization of the entire eigenspectrum persists
in the presence of interactions and strong disorder, constituting Many Body
Localization (MBL) – a phenomenon that has been subject to intense
investigation since its inception due to both fundamental and practical
reasons Basko _et al._ (2006); Nandkishore and Huse (2015). The MBL phase is
a new phase of matter that breaks ergodicity and violates the Eigenstate
Thermalization Hypothesis (ETH) Nandkishore and Huse (2015); Alet and
Laflorencie (2018); Abanin _et al._ (2019). In this phase, a closed system
does not thermalize under its own dynamics, and hence cannot be described
within the framework of conventional quantum statistical physics. At the same
time, the long memory associated with the slow dynamics makes the MBL phase
appealing for many practical applications Huse _et al._ (2013); Pekker _et
al._ (2014); Bahri _et al._ (2015).
The existence of the MBL phase in one dimension has been well established
through numerical Luitz _et al._ (2015); Serbyn and Moore (2016); Khemani
_et al._ (2016); Lim and Sheng (2016) and analytical studies Imbrie _et al._
(2017) as well as in experiments Schreiber _et al._ (2015); Smith _et al._
(2016). On the other hand, its fate in two dimensions has been contentious De
Roeck and Huveneers (2017); Doggen _et al._ (2020), though evidence is
accumulating towards the affirmative Wahl _et al._ (2019); Kshetrimayum _et
al._ (2020); Chertkov _et al._ (2021); Théveniaut _et al._ (2020); Decker
_et al._ (2021). In this work, as shown in Fig. 1, we present convincing
numerical evidence for the existence of MBL states in the 2D random-field
Heisenberg antiferromagnet. Using the Eigenstate-to-Hamiltonian construction
(EHC) described in Chertkov and Clark (2018); Qi and Ranard (2019); Dupont and
Laflorencie (2019), we map the Bose glass (BG) ground state obtained from
large scale quantum Monte Carlo (QMC) simulations of the 2D disordered
Heisenberg model to an excited state of another Hamiltonian that differs only
in terms of the configuration of disorder (see Fig. 1 (a)). The state
considered has non-ergodic properties characteristic of MBL states (see Fig. 1
(c)), while the new Hamiltonian has correlated disorder (Fig. 1 (b)). A
crucial aspect of our work is the careful determination of the conditions of
validity of this mapping. We also check that the obtained excited state is not
a zero-measure non-ergodic state such as the many-body scar states Turner _et
al._ (2018a); Serbyn _et al._ (2021) but a generic MBL state. This indicates
the possibility of MBL states in 2D systems with correlated and large
disorder.
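The EHC step can be illustrated on a toy model. For any eigenstate $|\Psi\rangle$ of a Hamiltonian $H=\sum_a c_a O_a$, the quantum covariance matrix $C_{ab}=\tfrac{1}{2}\langle O_aO_b+O_bO_a\rangle-\langle O_a\rangle\langle O_b\rangle$ annihilates the coefficient vector $c$, so null vectors of $C$ identify all Hamiltonians in the operator span sharing that eigenstate. A minimal sketch on a small 1D random-field Heisenberg chain follows (the paper works in 2D with QMC; the exact diagonalization and chain length here are for illustration only):

```python
import numpy as np
from functools import reduce

# Spin-1/2 operators
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

L = 4  # tiny chain, illustration only

def site_op(op, i):
    """Embed a single-site operator at site i of the L-site chain."""
    factors = [I2] * L
    factors[i] = op
    return reduce(np.kron, factors)

# Operator basis: Heisenberg bond terms S_i . S_{i+1} and local fields S^z_i
bonds = [sum(site_op(s, i) @ site_op(s, i + 1) for s in (sx, sy, sz))
         for i in range(L - 1)]
fields = [site_op(sz, i) for i in range(L)]
basis = bonds + fields

# Parent Hamiltonian: J = 1 on every bond, random fields h_i
rng = np.random.default_rng(0)
coeff = np.concatenate([np.ones(L - 1), rng.uniform(-1, 1, L)])
H = sum(c * O for c, O in zip(coeff, basis))

# Ground state (the role played by the QMC wavefunction in the actual method)
_, V = np.linalg.eigh(H)
psi = V[:, 0]

# Quantum covariance matrix over the operator basis
expval = np.array([np.real(psi.conj() @ O @ psi) for O in basis])
n = len(basis)
C = np.empty((n, n))
for a in range(n):
    for b in range(n):
        sym = psi.conj() @ (basis[a] @ basis[b] + basis[b] @ basis[a]) @ psi
        C[a, b] = 0.5 * np.real(sym) - expval[a] * expval[b]

# psi is an eigenstate of every Hamiltonian whose coefficient vector lies in
# the null space of C; in particular the parent couplings are annihilated.
print(np.linalg.norm(C @ coeff))  # numerically ~0
```

In the EHC one searches the null space of $C$ for coefficient vectors other than the parent one; these define target Hamiltonians of the same form for which $|\Psi_0\rangle$ is an excited eigenstate.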
Figure 1: (a) Pictorial representation of our methodology to study many-body
localization in 2D. The ground state, $|\Psi_{0}\rangle$, of a specified 2D
parent Hamiltonian, ${\cal H}$, is obtained using QMC. The EHC is used to find
a target Hamiltonian, $\tilde{\cal H}$, for which $|\Psi_{0}\rangle$ is an
approximate excited eigenstate, allowing QMC to indirectly probe excited state
properties. By construction both ${\cal H}$ and $\tilde{\cal H}$ exhibit the
same form, and differ only in the local disorder distribution. The new
disorder, $\{\tilde{h}_{i}\}$, is found to have a correlated structure, and
to be of much larger strength than the original disorder, $\{h_{i}\}$, which
is uncorrelated. (b) Averaged correlation function of the new disorder, $C(r)$
(see text), indicating strong spatial correlations of the new disorder
$\{\tilde{h}_{i}\}$. (c) Scaling of disorder-averaged participation entropy,
$S_{\infty}$ of the state $|\Psi_{0}\rangle$ with the Hilbert space size,
${\cal N}$. The state exhibits non-ergodic behavior using metrics that are
known in the literature (e.g. Ref. Macé _et al._ (2019)) to be identifying
characteristics of many-body localization.
Background: There has been persistent controversy surrounding the existence of
MBL in 2D. On one hand, the thermal avalanche argument states that rare
regions of low disorder may form thermal bubbles that precipitate an avalanche
effect, ultimately thermalising the system De Roeck and Huveneers (2017);
Ponte _et al._ (2017); Doggen _et al._ (2020). However, later works have
identified circumstances under which such avalanche events do not occur, or
are not observable in an experimentally accessible time frame Potirniche _et
al._ (2019); Foo _et al._ (2022). Moreover, the experimental signatures of
MBL are just as convincing in 2D as in 1D Schreiber _et al._ (2015); Choi
_et al._ (2016); Bordia _et al._ (2016); Sbroscia _et al._ (2020).
From a computational perspective, establishing the existence of MBL in 2D is
significantly more challenging than in 1D. Exact diagonalization (ED) is
limited to system sizes that are generally too small to provide meaningful
results (see however Théveniaut _et al._ (2020)). There is thus a need for
approximate computational approaches that allow us to analyse highly excited
states of disordered many-body systems. The main difficulty is that the
density of high-energy states is exponentially large. Nevertheless, successful
approximate methods have been developed, e.g. DMRG-X, shift-invert MPS and
tensor network methods Khemani _et al._ (2016); Yu _et al._ (2017); Wahl
_et al._ (2019).
A different line of study has emerged in recent years wherein one considers
the ground state of interacting bosons or fermions with disorder (for which
powerful numerical techniques exist in 2D and 3D) and then uses the EHC
formalism to identify a Hamiltonian for which this ground state is an
excited eigenstate. This was used in Refs. Dupont and Laflorencie (2019);
Dupont _et al._ (2019) to study MBL in the 1D disordered Heisenberg model.
Although MBL is a property of excited states, it shares several features in
common with the ground state of disordered bosons; in particular, they both
obey an area law for the entanglement entropy. The interplay between strong
interactions and disorder in the ground state of interacting bosons has been
extensively studied and results in the well-known BG phase, which is
insulating and localized Giamarchi and Schulz (1987, 1988); Fisher _et al._
(1989a); Doggen _et al._ (2017).
This paper combines the versatility of established QMC methods with the
recently proposed EHC to address the existence of MBL in 2D. The validity of
the EHC is a very subtle question that we address carefully in this paper. It
provides a way to use large scale numerical methods (such as QMC or DMRG) to
address MBL properties, which are otherwise restricted to small system sizes.
This is especially important in 2D. Furthermore, the EHC provides a way to
systematically build Hamiltonians with MBL properties. The recent discovery of
stark MBL and MBL induced by a confining potential suggests that non-trivial
potentials and correlated disorder can induce MBL properties Schulz _et al._
(2019); Doggen _et al._ (2021); Yao _et al._ (2021); Foo _et al._ (2022).
EHC provides a systematic way to build such Hamiltonians.
Model: The 2D $S=1/2$ antiferromagnetic Heisenberg model with random magnetic
fields is described by the Hamiltonian,
$\displaystyle{\cal H}=\sum_{\langle i,j\rangle}{\bf S}_{i}\cdot{\bf
S}_{j}+\sum_{i}h_{i}S_{i}^{z},$ (1)
where ${\bf S}_{i}$ is the spin operator at site $i$, $\langle i,j\rangle$
indicates a sum over nearest-neighbour sites, and $h_{i}\in[-h,h]$ represents
the local random magnetic field disorder. The Hamiltonian commutes with the
total magnetization, $S^{z}_{\text{tot}}=\sum_{i}S_{i}^{z}$, and only states
in the $S^{z}_{\text{tot}}=0$ sector are considered when evaluating the ground
state.
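As a concrete illustration, the Hamiltonian of Eq. (1) can be built and diagonalized exactly on a very small cluster. The numpy sketch below (illustrative only: a $2\times 2$ open-boundary cluster and toy parameters, far from the sizes studied here) constructs ${\cal H}$ in the $S^{z}$ basis and restricts to the $S^{z}_{\text{tot}}=0$ sector:

```python
import numpy as np

# Illustrative ED construction of Eq. (1) on a 2x2 open-boundary cluster.
# All parameters (lattice size, disorder strength, seed) are toy choices.
rng = np.random.default_rng(0)
Lx = Ly = 2
N = Lx * Ly
h = 5.0
fields = rng.uniform(-h, h, size=N)          # random fields h_i in [-h, h]

bonds = []                                   # nearest-neighbour pairs <i,j>
for x in range(Lx):
    for y in range(Ly):
        i = x * Ly + y
        if x + 1 < Lx:
            bonds.append((i, (x + 1) * Ly + y))
        if y + 1 < Ly:
            bonds.append((i, x * Ly + y + 1))

def sz(state, i):                            # S^z eigenvalue of site i
    return 0.5 if (state >> i) & 1 else -0.5

dim = 2 ** N
H = np.zeros((dim, dim))
for s in range(dim):
    # diagonal part: Ising term plus random fields
    H[s, s] = sum(sz(s, i) * sz(s, j) for i, j in bonds) \
            + sum(fields[i] * sz(s, i) for i in range(N))
    # off-diagonal part: (S^+_i S^-_j + S^-_i S^+_j)/2 flips antiparallel pairs
    for i, j in bonds:
        if ((s >> i) & 1) != ((s >> j) & 1):
            H[s ^ (1 << i) ^ (1 << j), s] += 0.5

# restrict to the S^z_tot = 0 sector and take the ground state
sector = [s for s in range(dim) if bin(s).count("1") == N // 2]
E, V = np.linalg.eigh(H[np.ix_(sector, sector)])
psi0 = V[:, 0]
```

The same bit-string bookkeeping extends to larger clusters, but ED quickly becomes intractable, which is precisely the motivation for the QMC-EHC approach used in this work.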
QMC-EHC method: We start by determining the ground state $|\Psi_{0}\rangle$
of ${\cal H}$, Eqn. (1), the parent Hamiltonian, using the stochastic series
expansion (SSE) QMC method Sandvik (1992, 1999); Sengupta and Haas (2007).
This method has been successfully used in the past to probe the superfluid to
BG transition Fisher _et al._ (1989b); Pollet _et al._ (2009); Prokof’ev and
Svistunov (2004); Álvarez Zúñiga _et al._ (2015). Due to the presence of a
finite-size gap, the ground state can be accessed in SSE QMC by using a
sufficiently large inverse temperature $\beta$ Prokof’ev and Svistunov (2004);
Álvarez Zúñiga _et al._ (2015). In our simulations, we have set $\beta=8L$ to
ensure we are in the ground state.
Next we conduct a search for a new Hamiltonian, $\tilde{{\cal H}}$, with the
same form as ${\cal H}$ in Eqn. (1), but a different disorder configuration,
for which $|\Psi_{0}\rangle$ is an eigenstate Qi and Ranard (2019); Chertkov
and Clark (2018). The target Hamiltonian, $\tilde{{\cal H}}$, is obtained by
analyzing the quantum covariance matrix, ${\mathcal{C}}_{ij}$, which is
defined in terms of the ground state expectation values of the local
Hamiltonian operators of ${\cal H}$ as
${\mathcal{C}}_{ij}=\langle{\mathcal{O}}_{i}{\mathcal{O}}_{j}\rangle-\langle{\mathcal{O}}_{i}\rangle\langle{\mathcal{O}}_{j}\rangle,$
(2)
where ${\mathcal{O}}_{0}=\sum_{\langle i,j\rangle}S_{i}^{z}S_{j}^{z}$ and
${\mathcal{O}}_{i}=S_{i}^{z},\;\;i=1,\ldots,N$. The determination of
${\mathcal{C}}_{ij}$ requires new measuring techniques in the SSE QMC approach
that we detail in the Supplementary Material.
A normalized eigenvector
$(\tilde{J},h^{{}^{\prime}}_{1},...,h^{{}^{\prime}}_{N})$ of
${\mathcal{C}}_{ij}$ contains the parameters defining a target Hamiltonian
$\tilde{\cal H}=\sum_{\langle i,j\rangle}{\bf S}_{i}\cdot{\bf
S}_{j}+\sum_{i}\tilde{h}_{i}S_{i}^{z}$ where
$\tilde{h}_{i}=h^{{}^{\prime}}_{i}/\tilde{J}$. The corresponding eigenvalue
$e$ gives the variance of the energy of $|\Psi_{0}\rangle$ with respect to
$\tilde{\cal H}$:
$e/\tilde{J}^{2}=\langle\Psi_{0}|\tilde{\cal
H}^{2}|\Psi_{0}\rangle-\langle\Psi_{0}|\tilde{\cal H}|\Psi_{0}\rangle^{2}\;.$
(3)
A vanishing eigenvalue of ${\mathcal{C}}_{ij}$ thus signals that
$|\Psi_{0}\rangle$ is an eigenstate of $\tilde{\cal H}$ with energy
$E=\langle\Psi_{0}|{\tilde{\cal H}}|\Psi_{0}\rangle$.
Working with an ordered set of eigenvalues, $e_{1},e_{2},\ldots,e_{N+1}$, of
${\mathcal{C}}_{ij}$, we note that the first two eigenvalues, $e_{1}$ and
$e_{2}$, are always zero (up to numerical precision); these correspond to the
parent Hamiltonian, ${\cal H}$, and the constraint $S^{z}_{\text{tot}}=0$. The
other eigenvalues $e_{j}$, $j>2$ will typically not be exactly zero.
Nevertheless, a sufficiently small $e_{j}$ can still be used to define a
target Hamiltonian, of which the ground state $|\Psi_{0}\rangle$ will be an
approximate eigenstate. In the following, we focus on the smallest non-zero
eigenvalue $e_{3}$. Our methodology is summarized in Fig. 1(a).
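The EHC step can be sketched in ED on a minimal example. We assume here that the operator set entering Eq. (2) consists of the full Heisenberg bond sum together with the local $S^{z}_{i}$ operators (so that the eigenvector has the form $(\tilde{J},h^{\prime}_{1},\ldots,h^{\prime}_{N})$), and we symmetrize the covariance matrix since the operators need not commute; both are interpretive assumptions about the construction, and the two-site problem is purely a toy:

```python
import numpy as np

def covariance_matrix(psi, ops):
    """Symmetrized C_ab = <O_a O_b> - <O_a><O_b> in the (real) state psi."""
    means = [psi @ (O @ psi) for O in ops]
    n = len(ops)
    C = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            sym = 0.5 * (psi @ (ops[a] @ (ops[b] @ psi))
                         + psi @ (ops[b] @ (ops[a] @ psi)))
            C[a, b] = sym - means[a] * means[b]
    return C

# Two-site toy: H = S_1.S_2 + h_1 S^z_1 + h_2 S^z_2 with h = (0.7, -0.3).
sz = np.diag([0.5, -0.5])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])      # S^+
I2 = np.eye(2)
Sz = [np.kron(sz, I2), np.kron(I2, sz)]
bond = np.kron(sz, sz) + 0.5 * (np.kron(sp, sp.T) + np.kron(sp.T, sp))
H = bond + 0.7 * Sz[0] - 0.3 * Sz[1]
psi0 = np.linalg.eigh(H)[1][:, 0]            # exact ground state

C = covariance_matrix(psi0, [bond] + Sz)
evals, evecs = np.linalg.eigh(C)
# Two eigenvalues vanish: the parent Hamiltonian and the S^z_tot constraint
# (the e_1, e_2 of the text).  Pick the zero mode with nonzero J~ component.
zero = evecs[:, :2]
v = zero[:, np.argmax(np.abs(zero[0, :]))]
h_tilde = v[1:] / v[0]                       # fields up to a shift along S^z_tot
```

On this toy problem any zero-mode eigenvector recovers the field difference $h_{1}-h_{2}$ exactly, while the overall field offset is undetermined because adding a multiple of $S^{z}_{\text{tot}}$ costs no variance.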
Bose glass ground state with characteristic non-ergodic properties of MBL
states: Our model Eq. (1) has a Bose glass ground state beyond a certain
critical disorder strength $h_{c}\approx 2.35$, see Álvarez Zúñiga _et al._
(2015) and Sup. Mat. We first show that this ground state has three distinct
non-ergodic properties which are characteristic of MBL states.
We first consider the participation entropy (see Humeniuk and Roscilde
(2012a); Luitz _et al._ (2014a, b) and Sup. Mat. for details of the
calculation) which describes the contribution of basis states to the ground
state wave function. In Fig. 1(c), we show the behavior of the disorder-
averaged
$S_{\infty}=\lim_{q\to\infty}\frac{1}{1-q}\ln\left(\sum_{i}|\langle\Psi_{0}|\phi_{i}\rangle|^{2q}\right)$,
(where $|\phi_{i}\rangle$ are basis states) with the Hilbert space volume
${\cal N}$. We observe that $S_{\infty}=D\ln{\cal N}+c$, where $D$ is a
multifractal dimension and $c$ is a constant. We clearly find $D<1$ and $c>0$
which indicates that only a vanishing fraction of states of the configuration
space contribute to the Bose glass ground state. This is a clear signature of
non-ergodic behavior which has been found in the MBL phase, see Ref. Macé _et
al._ (2019). This behavior is in marked contrast with the ETH ergodic
regime, where $D=1$ and $c<0$. Similar scaling behavior is also observed for
the second order participation entropy, $S_{2}$ (see Sup. Mat.).
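Extracting $D$ and $c$ from the scaling $S_{\infty}=D\ln{\cal N}+c$ amounts to a linear fit in $\ln{\cal N}$; a minimal sketch, with synthetic data standing in for the disorder-averaged QMC values:

```python
import numpy as np

# Linear fit S = D ln(N) + c; the data points are synthetic stand-ins for
# the disorder-averaged participation entropies measured in QMC.
lnN = np.log(np.array([1e3, 1e4, 1e5, 1e6]))
S = 0.4 * lnN + 0.8 + 0.01 * np.array([1, -1, 1, -1])   # mock data
D, c = np.polyfit(lnN, S, 1)
# D < 1 with c > 0 signals non-ergodic (multifractal) scaling;
# the ETH-ergodic regime would instead give D = 1 with c < 0.
```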
Second, we measure the local entanglement entropy
$S^{E}=-\ln\text{Tr}\rho_{loc}^{2}$ for a bipartition of the system as one
site and the rest, using the SSE extended ensemble scheme Humeniuk and
Roscilde (2012a); Luitz _et al._ (2014a, b). In Fig. 2(a), the distribution
of $S^{E}$, $P(S^{E})$ shows a sharp peak close to $S^{E}=0$. This is a
prominent feature of MBL (see Ref. Wahl _et al._ (2019)), where any given
site is almost disentangled from other sites of the lattice and its reduced
density matrix, $\rho_{\text{loc}}$ can be approximated as that of a pure
state.
Third, in Fig. 2(b), we show the distribution of local magnetization
$P(m_{z})$. We find a bipolar distribution with peak values at $m_{z}=\pm
1/2$, a signature of polarization along the on-site disordered magnetic field.
Following Refs. Dupont and Laflorencie (2019); Laflorencie _et al._ (2020),
we further look into the maximum polarisation, defined as $\delta_{\rm
min}=1/2-{\rm max}(|m_{z}^{i}|)$. We observe that the typical average of
$\delta_{\rm min}$, $\delta_{\rm min}^{\rm typ}\propto L^{-\gamma}$, with
$\gamma\sim 3.5$ for $h=5$ (see inset). This behavior is analogous to the
freezing of local moments in the MBL phase Dupont and Laflorencie (2019);
Laflorencie _et al._ (2020).
Figure 2: (a) Distribution of local entanglement entropy $P(S^{E})$ in the
ground state for various system sizes for $h=5$. $P(S^{E})$ shows a sharp peak
at $S^{E}\sim 0$ indicating that each site is almost disentangled from the
other sites, a characteristic signature of MBL Wahl _et al._ (2019). As
expected, the $S^{E}$ peak moves towards $S^{E}=0$ with increasing system
sizes. (b) (Main panel) Distribution of local magnetization $P(m_{z})$ in the
ground state for different system sizes for $h=5$. $P(m_{z})$ is strongly
peaked at the values $m_{z}=\pm 1/2$, indicative of the local moments being
fully aligned with the local random magnetic field. (Inset) Power-law decay of
maximum polarization $\delta_{min}$ (see text) with system size, another
characteristic signature of MBL Dupont and Laflorencie (2019); Laflorencie
_et al._ (2020).
Figure 3: (a) Finite-size scaling of the disorder averaged energy variance,
$\overline{e_{3}}$, obtained via ED and QMC methods. $\overline{e_{3}}$
exhibits a power law decay behavior with increased $L$. (b) Overlap of the
true ground state, $|\Psi_{0}\rangle$, of $\cal H$ and a range of eigenstates
of $\tilde{\cal H}$ close to $\tilde{E}$, obtained via ED on a $4\times 4$
lattice. The overlap is maximum ($\approx 1$), for a single eigenstate with
eigenvalue $\approx\tilde{E}$. This establishes that for sufficiently large
disorder, EHC maps the ground state to a single excited eigenstate. (c)
Comparison of relative residue, $R$ (see text) and maximum overlap, $O_{m}$,
obtained from ED. EHC holds when $O_{m}\approx 1$, i.e. even for $R\gg 1$. In
other words, the locality of MBL allows EHC to work, even if the energy
resolution is not sufficient.
Reliability of EHC mapping: EHC is an approximate method and we here assess
its reliability, see Fig. 3. We find that the disorder averaged $e_{3}$ decays
as a power-law with system size, and thus vanishes in the thermodynamic limit
(see Fig. 3(a)). This does not guarantee however that the ground state maps to
a single eigenstate of the new Hamiltonian. In fact, as the excited states of
a many-body system have an exponentially large density, the ground state could
on the contrary correspond to a superposition of eigenstates. This limitation
is common to all such approximate methods Khemani _et al._ (2016); Yu _et
al._ (2017); Wahl _et al._ (2019).
To address this question, we use exact diagonalization Weinberg and Bukov
(2017), keeping in mind the limited applicability to small system sizes (see
Sup. Mat.). We determine the eigenstates $|\Psi_{\alpha}\rangle$ of
${\cal\tilde{H}}$ close in energy to that of the ground state
$\tilde{E}=\langle\Psi_{0}|\tilde{\cal H}|\Psi_{0}\rangle$ and calculate their
corresponding overlap $O_{\alpha}=|\langle\Psi_{0}|\Psi_{\alpha}\rangle|^{2}$.
In Fig. 3(b), we observe that the maximum overlap, $O_{m}={\rm
max}(O_{\alpha})\to 1$, indicating that for strong enough disorder, the ground
state maps to a single eigenstate of ${\cal\tilde{H}}$.
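The overlap diagnostic can be sketched directly in ED: diagonalize $\tilde{\cal H}$, select the eigenstates nearest $\tilde{E}$, and take the largest squared overlap with $|\Psi_{0}\rangle$. The sketch below uses a random symmetric matrix as a stand-in for $\tilde{\cal H}$; when $|\Psi_{0}\rangle$ is itself an eigenstate, $O_{m}=1$ by construction:

```python
import numpy as np

def max_overlap(psi0, H_tilde, window=10):
    """Largest |<psi0|psi_alpha>|^2 over eigenstates of H_tilde nearest E~."""
    E_target = psi0 @ (H_tilde @ psi0)       # E~ = <psi0|H~|psi0>
    evals, evecs = np.linalg.eigh(H_tilde)
    close = np.argsort(np.abs(evals - E_target))[:window]
    return np.max(np.abs(evecs[:, close].T @ psi0) ** 2)

# Toy check: if psi0 is itself an eigenstate of H_tilde, then O_m = 1.
rng = np.random.default_rng(2)
A = rng.normal(size=(40, 40))
H_tilde = (A + A.T) / 2                      # stand-in for the mapped Hamiltonian
psi0 = np.linalg.eigh(H_tilde)[1][:, 7]      # pick an exact eigenstate
Om = max_overlap(psi0, H_tilde)
```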
An alternate figure of merit, accessible to QMC, is the relative residue,
$R=\sqrt{e_{3}/\tilde{J}^{2}}/\Delta_{L}$, with $\Delta_{L}$ the mean level
spacing of the many-body Hamiltonian. Comparison of $R$ with $O_{m}$ is shown
in Fig. 3(c). We clearly see that when $R<1$, i.e. when the error on the
energy is small compared to $\Delta_{L}$, $O_{m}\approx 1$, thus the EHC
works. However, we also observe that $O_{m}\approx 1$ for many realizations
where $R\gg 1$. In other words, even if the energy resolution is not
sufficient, the locality of MBL nevertheless allows EHC to work. This is a
consequence of the non-ergodicity of the state $|\Psi_{0}\rangle$ and of the
MBL properties of the new Hamiltonian (see below). Indeed, MBL states close by
in energy are located far apart in configuration space. This is confirmed by
the fact that EHC works better for larger disorder as seen in Fig. 3(c).
Properties of the new disorder: Unlike the original disorder which is
uncorrelated, the new disorder $\tilde{h}_{i}$, obtained from EHC, is strongly
correlated and of large amplitude. Similar to Refs. Dupont and Laflorencie
(2019); Dupont _et al._ (2019), we observe that $\tilde{h}_{i}=h_{i}+\Delta
h_{i}$, with $\Delta h_{i}$ showing strong spatial correlations of large
amplitude. This is characterized by the disorder-averaged correlation
function, $C(r)=(\sum_{d_{ij}=r}\Delta h_{i}\Delta h_{j})/(\sum_{i}\Delta
h_{i}^{2})$, where $d_{ij}$ is the distance between sites $i$ and $j$. In Fig.
1(b), we show the behavior of $C(r)$, where the correlation length is seen to
vary like $L$, a signature of strong spatial correlations. Such spatial
correlations can enhance MBL, similar to what has been seen in stark MBL
Schulz _et al._ (2019); Doggen _et al._ (2021); Foo _et al._ (2022);
Agrawal _et al._ (2022).
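The correlation function $C(r)$ can be evaluated as follows; for brevity the sketch works on a 1D ring (the 2D case only changes the binning over distances $d_{ij}$), and the toy field is a cosine so that $C(r)=\cos(2\pi r/L)$ can be checked analytically:

```python
import numpy as np

def disorder_correlation(dh):
    """C(r) = sum_i dh_i dh_{i+r} / sum_i dh_i^2 on a 1D ring of L sites."""
    dh = dh - dh.mean()                      # correlate the fluctuations
    norm = np.sum(dh**2)
    L = len(dh)
    return np.array([np.sum(dh * np.roll(dh, -r)) for r in range(L // 2)]) / norm

L = 64
dh = np.cos(2 * np.pi * np.arange(L) / L)    # toy maximally-correlated field
C = disorder_correlation(dh)                 # equals cos(2*pi*r/L) analytically
```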
MBL properties of other eigenstates: There still exists a possibility that
the mapped state is a zero-measure localized state, for example a quantum scar
state, and not a genuine MBL eigenstate. To address this, we use ED
calculations to study the localization properties of other eigenstates
($|\Psi_{\alpha}\rangle$) close by in energy to $\tilde{E}$. The inverse
participation ratios ${\rm
IPR}(\Psi_{\alpha})=\sum_{i}|\langle\Psi_{\alpha}|\phi_{i}\rangle|^{4}$ of
these eigenstates Visscher (1972) are shown in Fig. 4. They all have similar
values as that of the mapped excited state, and therefore similar localization
properties. This is in stark contrast with the case of the PXP model which is
known to host quantum scar states Sun and Robicheaux (2008); Turner _et al._
(2018a, b) (see inset of Fig. 4). This leads us to conclude that the mapped
states obtained via the EHC approach belong to a genuine MBL phase.
Figure 4: (Main panel) IPR of other eigenstates of $\tilde{\cal H}$ close by
in energy to the mapped excited state, using ED on a $4\times 4$ lattice for
$h=10$. The states exhibit similar IPR values and therefore similar
localization properties. (Inset) Similar analysis for the $S=1/2$ PXP model on
a chain of size $L=16$ which is known to host quantum scar-states Sun and
Robicheaux (2008); Turner _et al._ (2018a, b). IPR values of the scar-states
(blue filled circles) are orders of magnitude larger than those of the
remaining eigenstates (red empty circles).
Conclusions: We have developed a new method for determining highly excited
states of strongly disordered Hamiltonians of large sizes. This method, based
on a combination of Quantum Monte Carlo and the Eigenstate to Hamiltonian
Construction, allows us to map a ground state to a new Hamiltonian having this
state as an approximate eigenstate. Applied to the disordered Heisenberg
model, this method allows us to overcome the strong finite-size constraints
encountered in numerical studies of MBL and to characterize MBL in two
dimensions. At strong disorder, the Bose glass ground state has non-ergodic
properties characteristic of MBL states, and we have carefully determined the
conditions for this state to correspond to a unique excited state of the new
constructed Hamiltonian. This new Hamiltonian retains the same form as the
parent Hamiltonian, albeit with a correlated and large disorder. Furthermore,
we have provided evidence that the mapped eigenstate is a genuine localized
state and not a special zero-measure localized state like quantum scar-states.
Our work thus indicates the possibility of MBL states in 2D in systems where
the disorder is strong and correlated. 2D MBL has been much debated recently,
with some numerical Wahl _et al._ (2019); Kshetrimayum _et al._ (2020);
Chertkov _et al._ (2021); Théveniaut _et al._ (2020); Decker _et al._
(2021) and experimental Choi _et al._ (2016); Bordia _et al._ (2016);
Sbroscia _et al._ (2020) observations, but theoretical arguments exist that
suggest it is unstable De Roeck and Huveneers (2017); Doggen _et al._ (2020).
However, in the presence of correlations, as in stark MBL or with a
quasiperiodic or confining potential, there seems to be a consensus in favor
of a 2D MBL Schulz _et al._ (2019); Doggen _et al._ (2021); Foo _et al._
(2022); Agrawal _et al._ (2022). Our results confirm this and moreover
indicate how to systematically construct Hamiltonians with non-ergodic states,
a very interesting possibility for applications in quantum technologies where
non-ergodicity protects quantum information.
###### Acknowledgements.
We thank Maxime Dupont and Rubem Mondaini for helpful discussions, Miguel Dias
Costa for assistance with the parallelization of our code, and our anonymous
referees for insightful suggestions. This work is supported by the Singapore
Ministry of Education AcRF Tier 2 grant (MOE2017-T2-1-130), and made possible
by allocation of computational resources at the Centre for Advanced 2D
Materials (CA2DM), and the Singapore National Super Computing Centre (NSCC).
HKT thanks support from the Start-Up Research Funds in HITSZ (Grant No.
ZX20210478, X2022000). FFA thanks support from the Würzburg-Dresden Cluster of
Excellence on Complexity and Topology in Quantum Matter ct.qmat (EXC 2147,
project-id 390858490). GL acknowledges the support of the projects GLADYS
ANR-19- CE30-0013 and MANYLOK ANR-18-CE30-0017 of the French National Research
Agency (ANR), by the Singapore Ministry of Education Academic Research Fund
Tier I (WBS No. R-144- 000-437-114).
## Supplementary Material
## I Calculation of the Quantum Covariance Matrix with SSE QMC
The eigenstate to Hamiltonian construction (EHC) approach requires only a
collection of expectation values with respect to the ground state in order to
construct the quantum covariance matrix. In SSE QMC Sandvik _et al._ (1997);
Sandvik (1999), ground state expectation values for finite-size systems are
obtained by choosing a sufficiently large inverse temperature $\beta$ (that
depends on the system size). The spectrum of any finite-sized system is
discrete and for simulations performed at temperatures smaller than the
finite-size gap (between the ground state and the first excited state),
contributions from higher energy states are exponentially suppressed, yielding
ground state expectation values for the finite size system. Estimates for
thermodynamic quantities are then obtained through a simultaneous finite-size
and finite-temperature scaling (the temperature for each simulation is
adjusted carefully to ensure that it is smaller than the finite size gap). In
the literature, this approach has been successfully applied by all finite-
temperature QMC algorithms (SSE, determinant QMC, world line QMC, path
integral QMC) to investigate the ground state phases of interacting spins,
bosons and fermions, both with and without disorder. In our simulations, we
have set $\beta=8L$ to ensure we are in the ground state.
We calculate the quantum covariance matrix, $\bm{{\cal C}}$ with the SSE QMC
method as follows. In the Hamiltonian of the 2D $S=1/2$ antiferromagnetic
Heisenberg model considered
$\displaystyle{\cal H}=J\sum_{\langle i,j\rangle}{\bf S}_{i}\cdot{\bf
S}_{j}+\sum_{i}h_{i}S_{i}^{z},$ (4)
the Ising term ($S_{i}^{z}S_{j}^{z}$) and the magnetic field term
($h_{i}S_{i}^{z}$) are the diagonal terms ${\mathcal{O}}^{d}$, while the
exchange term ($S_{i}^{x}S_{j}^{x}+S_{i}^{y}S_{j}^{y}$) is the off-diagonal
term ${\mathcal{O}}^{od}$ Sandvik (2010). As described in the manuscript, the
calculation of the covariance matrix involves the computation of expectation
values of the product of terms of Eqn. (4). In SSE QMC, the exponential in
the partition function is Taylor expanded, so that the partition function can
be written as a sum over products of local Hamiltonian operators weighted by
powers of the inverse temperature $\beta$; each such product of operators is
usually referred to as an operator string.
### I.1 ${\mathcal{O}}^{d}{\mathcal{O}}^{d}$ terms
As both operators are diagonal, they can be measured directly on the
propagated spin state at every slice of the non-empty operator string.
$\displaystyle\langle{\mathcal{O}}^{d1}{\mathcal{O}}^{d2}\rangle=\big{\langle}\frac{1}{N_{H}}\sum_{p=1}^{N_{H}}{\mathcal{O}}^{d1}_{p}{\mathcal{O}}^{d2}_{p}\big{\rangle},$
(5)
where $N_{H}$ is the number of non-empty operators in the string at each
measuring step, $p$ is the slice index, and $\big{\langle}...\big{\rangle}$
denotes the average over Monte Carlo steps. As the spin state only changes at
off-diagonal operators, we boost efficiency by bookkeeping the spins on most
of the sites.
### I.2 ${\mathcal{O}}^{od}{\mathcal{O}}^{od}$ terms
Only the exchange-exchange term in $\bm{{\cal C}}$ belongs to this category.
We cannot directly measure the off-diagonal term from the spin state. Instead,
we use the number of appearances of consecutive operators along the
operator string to estimate its value.
$\displaystyle\langle{\mathcal{O}}^{od1}{\mathcal{O}}^{od2}\rangle=\frac{1}{\beta^{2}}\langle(N_{H}-1)N_{cons.}({\mathcal{O}}^{od1},{\mathcal{O}}^{od2})\rangle$
(6)
where $N_{cons.}$ is the number of consecutive appearances of
${\mathcal{O}}^{od1}$ and ${\mathcal{O}}^{od2}$ along the operator string in
each Monte Carlo step.
### I.3 ${\mathcal{O}}^{d}{\mathcal{O}}^{od}$ terms
For products of a diagonal and an off-diagonal term, we combine the two
techniques above: at each slice where ${\mathcal{O}}^{od}$ appears, we
measure ${\mathcal{O}}^{d}$ directly on the spin state.
$\displaystyle\langle{\mathcal{O}}^{d1}{\mathcal{O}}^{od2}\rangle=\frac{1}{\beta}\langle\sum_{{\mathcal{O}}_{p}={\mathcal{O}}^{od2}}{\mathcal{O}}^{d1}_{p}\rangle$
(7)
where the sum runs over the slices $p$ at which the operator in the string
is ${\mathcal{O}}^{od2}$.
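The three estimators, Eqs. (5)-(7), can be mocked on a single operator string to show how the bookkeeping enters; the slice values and off-diagonal flags below are random stand-ins for what a real SSE update sequence would produce:

```python
import numpy as np

# Mock operator string for one Monte Carlo step: each slice p carries the
# diagonal values O^{d1}_p, O^{d2}_p and a flag marking off-diagonal operators.
rng = np.random.default_rng(3)
beta = 4.0
n_H = 50                                     # non-empty operators in the string
Od1 = rng.uniform(-0.25, 0.25, size=n_H)     # O^{d1} evaluated at each slice
Od2 = rng.uniform(-0.25, 0.25, size=n_H)
is_od = rng.random(n_H) < 0.3                # slices occupied by O^{od}

est_dd = np.mean(Od1 * Od2)                  # Eq. (5): diagonal-diagonal
n_cons = int(np.sum(is_od[:-1] & is_od[1:])) # consecutive off-diagonal pairs
est_oo = (n_H - 1) * n_cons / beta**2        # Eq. (6): off-diag-off-diag
est_do = np.sum(Od1[is_od]) / beta           # Eq. (7): diagonal-off-diagonal
```

In an actual simulation these single-step values are further averaged over Monte Carlo steps, as in the $\big{\langle}...\big{\rangle}$ of Eqs. (5)-(7).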
## II Eigenstate-to-Hamiltonian Construction (EHC) Approach
Figure 5: Figure highlighting the overlap of the ground state,
$|\Psi_{0}\rangle$, of the parent Hamiltonian, ${\cal H}$, and the eigenstates
of the mapped Hamiltonian, $\tilde{\cal H}$, close to the energy
$\tilde{E}=\langle\Psi_{0}|\tilde{\cal H}|\Psi_{0}\rangle$, for a given
disorder configuration of strength $h=10$ (left) and $h=1$ (right) in 1D
(panels (a-b)) and 2D (panels (c-d)). As seen in the left panels, for large
disorder value, the overlap is maximum ($\approx 1$) for a single eigenstate
closest to $\tilde{E}$, indicating the EHC has successfully discovered a
$\tilde{\cal H}$ hosting $|\Psi_{0}\rangle$ as an exact eigenstate. In
contrast, for weak disorder (right panels), the overlap $\ll 1$, indicating a
failure of the EHC.
Figure 6: Distribution of the maximum overlap ($O_{m}$) of the ground state
and the eigenstates of the mapped Hamiltonian, close to the energy
$\tilde{E}$, for varying disorder strength and system sizes in 1D (panels
(a)-(c)) and 2D (panel (d)). Panel (a) shows that for weak disorder values,
the distribution of maximum overlap has a broad feature and a peak at
vanishing overlap value for large system sizes. This indicates the mapping of
the ground state to a superposition of eigenstates. Panel (b) shows that for
large disorder value, the distribution of maximum overlap is peaked at
$O_{m}\approx 1$. The peak strengthens with increasing system sizes, further
establishing that the EHC maps the ground state to a single eigenstate. Panel
(c) shows this crossover behavior as the disorder strength is gradually
varied. In panel (d), we show this behavior for 2D, where we see a
very sharp change in the behavior of the distribution of maximum overlap for
weak and strong disorder values.
Once we have computed the quantum covariance matrix, we diagonalize it and
label the eigenvalues in increasing magnitude as,
$e_{1},e_{2},\ldots,e_{N+1}$. The first two eigenvalues, $e_{1}$ and $e_{2}$
are trivially zero, corresponding to the original parent Hamiltonian and the
total spin operator. We consider the next non-zero eigenvalue, $e_{3}$ and the
associated normalized eigenvector
$\Psi_{3}=(\tilde{J},h^{{}^{\prime}}_{1},...,h^{{}^{\prime}}_{N})$ to
construct the new Hamiltonian $\tilde{\cal H}$,
$\tilde{\cal H}=\sum_{\langle i,j\rangle}{\bf S}_{i}\cdot{\bf
S}_{j}+\sum_{i}\tilde{h}_{i}S_{i}^{z}$ (8)
such that, $\tilde{h}_{i}=h^{{}^{\prime}}_{i}/\tilde{J}$. It can be shown that
the variance $\sigma^{2}(\tilde{\cal H})=e_{3}/\tilde{J}^{2}$. Further, we
show that the disorder-averaged $e_{3}$ exhibits a power-law decay with
increasing system size, thereby indicating that $|\Psi_{0}\rangle$ is an
eigenstate of $\tilde{\cal H}$ with energy
$\tilde{E}=\langle\Psi_{0}|\tilde{\cal H}|\Psi_{0}\rangle$ in the
thermodynamic limit.
While the accuracy of the EHC mapping can be inferred from the decay of the
eigenvalue with increasing system size (and thus its vanishing in the
thermodynamic limit), the density of excited eigenstates close to the energy
$\tilde{E}$ is exponentially large, so the question remains whether
$\Psi_{0}$ maps to a single eigenstate of $\tilde{\cal H}$ or to a
superposition of eigenstates. We have performed exact diagonalization (ED)
calculations in 1D and 2D, using the state-of-the-art QuSpin package
Weinberg and Bukov (2017, 2019), to address this.
In Fig. 5, we show the overlap,
$O_{\alpha}=|\langle\Psi_{0}|\Psi_{\alpha}\rangle|^{2}$ of the actual ground
state, $\Psi_{0}$ and the eigenstates of the target Hamiltonian $\tilde{\cal
H}$, close to the energy, $\tilde{E}=\langle\Psi_{0}|\tilde{\cal
H}|\Psi_{0}\rangle$, for a single disorder realisation of strong and weak
disorder values in a 1D chain of size $L=16$. We find that in the strong
disorder case, the overlap is maximum $O_{m}\approx 1$ for a single eigenstate
and vanishing for the remaining eigenstates. This indicates that the EHC maps
to only one eigenstate in the strong disorder limit. For the weak disorder
configuration, on the other hand, the overlap is finite for several
eigenstates, indicating that the ground state maps to a superposition of
eigenstates. However, as the excited states are obtained from the mapping
of the ground state, they have non-ergodic properties, which are quite
interesting to study further. We observe a very similar feature in the overlap
behavior for calculations in a 2D lattice of size $4\times 4$ (see bottom
panels of Fig. 5).
Further, we study the distribution of the maximum overlap $O_{m}$ of the
ground state and eigenstate of the mapped Hamiltonian, close to the energy,
$\tilde{E}$, for representative weak ($h=1$) and strong ($h=10$) disorder
values with varying system sizes in 1D. The distribution is computed for 5000
disorder configurations. As seen in Fig. 6, right panel, for large disorder
value, the distribution is peaked at value $O_{m}\approx 1$. The peak
strengthens with increasing system sizes, further establishing the fact that
the ground state has overlap with a single eigenstate of the mapped
Hamiltonian obtained via the EHC formalism. In contrast, for weak disorder
(left panel), the distribution peaks for vanishing values of overlap for large
system sizes. This indicates that the ground state has overlap with a
superposition of eigenstates of the new Hamiltonian obtained via the EHC
formalism. The bottom-left panel of Fig. 6 shows this crossover behavior with
gradually increasing/decreasing the disorder strength. We show this behavior
for 2D in the bottom-right panel of Fig. 6. We find that the distribution of
the maximum overlap exhibits similar behavior for the 1D chain and the 2D
lattice.
## III Participation Entropy
Figure 7: Scaling of second order Renyi entropy, $S_{2}$ with the Hilbert
space size ${\cal N}$ in the presence of disorder, demonstrating the non-
ergodic behavior of the Bose glass ground state.
The $q$-th order Rényi participation entropy of a state $\ket{\psi}$ is given
by
$S_{q}=\frac{1}{1-q}\ln\sum_{i}p^{q}_{i},$ (9)
where $p_{i}=|\braket{\psi}{\phi_{i}}|^{2}$ and the $\ket{\phi_{i}}$ are some
set of orthonormal basis states. In particular, we focus on $q=2$ and
$q\to\infty$. These two quantities provide the measure of how many states of a
configuration space contribute to a wave function.
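For a state given as an explicit vector of basis amplitudes, Eq. (9) evaluates directly; a minimal sketch with the two limiting cases:

```python
import numpy as np

def participation_entropy(psi, q):
    """Renyi participation entropy S_q of Eq. (9); q may be np.inf."""
    p = np.abs(psi) ** 2                     # p_i = |<psi|phi_i>|^2
    if np.isinf(q):
        return -np.log(p.max())              # S_inf = -ln(max_i p_i)
    return np.log(np.sum(p ** q)) / (1 - q)

# Limiting cases: a single basis state gives S_q = 0 (maximal localization),
# a uniform superposition over N states gives S_q = ln(N) (ergodic).
N = 16
uniform = np.ones(N) / np.sqrt(N)
localized = np.zeros(N)
localized[0] = 1.0
```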
We use the approaches developed in Humeniuk and Roscilde (2012b); Luitz _et
al._ (2014c, b) to calculate the participation entropy. These approaches use
the counting of occurrence for each spin configuration to calculate the
participation entropy. $S_{q}$ is found using the probability of having
identical configurations in different replica in each Monte Carlo step, while
$S_{\infty}$ is calculated using the probability of maximally occurring spin
configuration. For strong disorder, the maximally occurring spin
configuration is usually almost aligned with the local magnetic field.
In Fig. 7, we show the scaling of disorder-averaged $S_{2}$ with the Hilbert
space size, ${\cal N}$ in the localised regime. The slope of the line
$S_{2}=D_{2}\ln{\cal N}+c$ represents the multifractal dimension $D_{2}$, and
we find $D_{2}\ll 1$. This indicates that only a vanishingly small fraction of
basis states (among the exponentially large space of states in the
configuration space) contribute to the Bose glass ground state in our
simulations, highlighting its strong non-ergodic behavior. The behavior of
$S_{\infty}$ is shown in the main text.
## IV Ground state phase transition
Figure 8: (Main) Behaviour of the scaled stiffness, $L^{2}\rho_{s}$, with
varying $h$ near the transition region. The curves for different system sizes
cross at $h=h_{c}$, providing an accurate estimate of the critical disorder
strength, $h_{c}\approx 2.35$. (Inset) Finite size scaling of the spin
stiffness, $\rho_{s}$, with varying system sizes for different disorder
strengths. In the thermodynamic limit, $\rho_{s}\to 0$ for $h\geq h_{c}$,
establishing the BG phase as the ground state.
The ground state of ${\cal H}$, Eq. (4), has two distinct phases as the
disorder strength $h$ varies, with a quantum phase transition at a critical
$h_{c}$. These phases can be characterized by the spin stiffness,
$\rho_{s}=\frac{1}{N}\frac{\partial^{2}E}{\partial\phi^{2}}$, defined as the
response of the total energy $E$ to a twist by an angle $\phi$. The delocalized
superfluid (SF) phase (for $h<h_{c}$) has finite spin stiffness, whereas the
localized Bose glass (BG) phase (for $h>h_{c}$) has vanishing spin stiffness,
so $h_{c}$ can be determined from the scaling of $\rho_{s}$.
In SSE, the stiffness is measured via the fluctuations of the winding number
$W$ of the world lines as $\rho_{s}=\langle W^{2}\rangle/2\beta$, where
$\beta$ is the inverse temperature Sandvik (2010). Close to the critical point, the
stiffness obeys the scaling relation
$\rho_{s}(L,h)=L^{-z}f[(h-h_{c})L^{1/\nu}],$ (10)
where the correlation length exponent is $\nu=1$ Prokof’ev and Svistunov
(2004), and the dynamical critical exponent is found to be $z=2$. Plotting the
scaled stiffness $L^{z}\rho_{s}$ against $h$ for different system sizes
provides an accurate estimate of the critical disorder strength, $h_{c}$
Álvarez Zúñiga _et al._ (2015). The results are shown in Fig. 8, which
suggest $h_{c}\approx 2.35$. The interacting ground state changes from a
delocalized superfluid state to a localized Bose glass state for $h>h_{c}$.
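The crossing-point analysis can be sketched numerically; the scaling function below is a toy monotone stand-in for QMC winding-number data, with $z=2$, $\nu=1$ and $h_{c}=2.35$ hard-coded for illustration only:

```python
import numpy as np

z, nu, h_c = 2.0, 1.0, 2.35  # exponents and critical field (illustrative)

def scaled_stiffness(h, L):
    # Toy scaling form L^z rho_s = f[(h - h_c) L^(1/nu)] with a monotone f;
    # stands in for the QMC estimate rho_s = <W^2>/(2 beta) at size L.
    return 1.0 / (1.0 + np.exp((h - h_c) * L ** (1.0 / nu)))

h = np.linspace(2.0, 2.7, 701)
diff = scaled_stiffness(h, L=8) - scaled_stiffness(h, L=16)
# Curves for different L intersect where their difference changes sign;
# the crossing of L^z rho_s locates the critical disorder strength h_c.
h_cross = h[np.argmin(np.abs(diff))]
assert abs(h_cross - h_c) < 1e-2
```

With real data one would repeat this for several size pairs and extrapolate the crossings to $L\to\infty$.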
## References
* Anderson (1958) P. W. Anderson, Phys. Rev. 109, 1492 (1958).
* Mott and Twose (1961) N. Mott and W. Twose, Advances in Physics 10, 107 (1961).
* Gol’dshtein _et al._ (1977) I. Y. Gol’dshtein, S. A. Molchanov, and L. A. Pastur, Functional Analysis and Its Applications 11, 1 (1977).
* Evers and Mirlin (2008) F. Evers and A. D. Mirlin, Rev. Mod. Phys. 80, 1355 (2008).
* Basko _et al._ (2006) D. Basko, I. Aleiner, and B. Altshuler, Ann. Phys. (N. Y.) 321, 1126 (2006).
* Nandkishore and Huse (2015) R. Nandkishore and D. A. Huse, Annu. Rev. Condens. Matter Phys. 6, 15 (2015).
* Alet and Laflorencie (2018) F. Alet and N. Laflorencie, C. R. Phys. 19, 498 (2018).
* Abanin _et al._ (2019) D. A. Abanin, E. Altman, I. Bloch, and M. Serbyn, Rev. Mod. Phys. 91, 021001 (2019).
* Huse _et al._ (2013) D. A. Huse, R. Nandkishore, V. Oganesyan, A. Pal, and S. L. Sondhi, Phys. Rev. B 88, 014206 (2013).
* Pekker _et al._ (2014) D. Pekker, G. Refael, E. Altman, E. Demler, and V. Oganesyan, Phys. Rev. X 4, 011052 (2014).
* Bahri _et al._ (2015) Y. Bahri, R. Vosk, E. Altman, and A. Vishwanath, Nat. Commun. 6, 7341 (2015).
* Luitz _et al._ (2015) D. J. Luitz, N. Laflorencie, and F. Alet, Phys. Rev. B 91, 081103(R) (2015).
* Serbyn and Moore (2016) M. Serbyn and J. E. Moore, Phys. Rev. B 93, 041424(R) (2016).
* Khemani _et al._ (2016) V. Khemani, F. Pollmann, and S. L. Sondhi, Phys. Rev. Lett. 116, 247204 (2016).
* Lim and Sheng (2016) S. P. Lim and D. N. Sheng, Phys. Rev. B 94, 045111 (2016).
* Imbrie _et al._ (2017) J. Z. Imbrie, V. Ros, and A. Scardicchio, Annalen der Physik 529, 1600278 (2017).
* Schreiber _et al._ (2015) M. Schreiber, S. S. Hodgman, P. Bordia, H. P. Lüschen, M. H. Fischer, R. Vosk, E. Altman, U. Schneider, and I. Bloch, Science 349, 842 (2015).
* Smith _et al._ (2016) J. Smith, A. Lee, P. Richerme, B. Neyenhuis, P. W. Hess, P. Hauke, M. Heyl, D. A. Huse, and C. Monroe, Nat. Phys. 12, 907 (2016).
* De Roeck and Huveneers (2017) W. De Roeck and F. Huveneers, Phys. Rev. B 95, 155129 (2017).
* Doggen _et al._ (2020) E. V. H. Doggen, I. V. Gornyi, A. D. Mirlin, and D. G. Polyakov, Phys. Rev. Lett. 125, 155701 (2020).
* Wahl _et al._ (2019) T. B. Wahl, A. Pal, and S. H. Simon, Nat. Phys. 15, 164 (2019).
* Kshetrimayum _et al._ (2020) A. Kshetrimayum, M. Goihl, and J. Eisert, Phys. Rev. B 102, 235132 (2020).
* Chertkov _et al._ (2021) E. Chertkov, B. Villalonga, and B. K. Clark, Phys. Rev. Lett. 126, 180602 (2021).
* Théveniaut _et al._ (2020) H. Théveniaut, Z. Lan, G. Meyer, and F. Alet, Phys. Rev. Research 2, 033154 (2020).
* Decker _et al._ (2021) K. S. C. Decker, D. M. Kennes, and C. Karrasch, arXiv:2106.12861 (2021).
* Chertkov and Clark (2018) E. Chertkov and B. K. Clark, Phys. Rev. X 8, 031029 (2018).
* Qi and Ranard (2019) X. L. Qi and D. Ranard, Quantum 3, 159 (2019).
* Dupont and Laflorencie (2019) M. Dupont and N. Laflorencie, Phys. Rev. B 99, 020202(R) (2019).
* Turner _et al._ (2018a) C. J. Turner, A. A. Michailidis, D. A. Abanin, M. Serbyn, and Z. Papić, Nature Physics 14, 745 (2018a).
* Serbyn _et al._ (2021) M. Serbyn, D. A. Abanin, and Z. Papić, Nature Physics 17, 675 (2021).
* Macé _et al._ (2019) N. Macé, F. Alet, and N. Laflorencie, Phys. Rev. Lett. 123, 180601 (2019).
* Ponte _et al._ (2017) P. Ponte, C. R. Laumann, D. A. Huse, and A. Chandran, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 375, 1 (2017).
* Potirniche _et al._ (2019) I. D. Potirniche, S. Banerjee, and E. Altman, Phys. Rev. B 99, 205149 (2019).
* Foo _et al._ (2022) D. C. W. Foo, N. Swain, P. Sengupta, G. Lemarié, and S. Adam, arXiv:2022.09072 (2022).
* Choi _et al._ (2016) J. Y. Choi, S. Hild, J. Zeiher, P. Schauß, A. Rubio-Abadal, T. Yefsah, V. Khemani, D. A. Huse, I. Bloch, and C. Gross, Science 352, 1547 (2016).
* Bordia _et al._ (2016) P. Bordia, H. P. Lüschen, S. S. Hodgman, M. Schreiber, I. Bloch, and U. Schneider, Phys. Rev. Lett. 116, 140401 (2016).
* Sbroscia _et al._ (2020) M. Sbroscia, K. Viebahn, E. Carter, J.-C. Yu, A. Gaunt, and U. Schneider, Phys. Rev. Lett. 125, 200604 (2020).
* Yu _et al._ (2017) X. Yu, D. Pekker, and B. K. Clark, Phys. Rev. Lett. 118, 017201 (2017).
* Dupont _et al._ (2019) M. Dupont, N. Macé, and N. Laflorencie, Phys. Rev. B 100, 134201 (2019).
* Giamarchi and Schulz (1987) T. Giamarchi and H. J. Schulz, Europhysics Letters (EPL) 3, 1287 (1987).
* Giamarchi and Schulz (1988) T. Giamarchi and H. J. Schulz, Phys. Rev. B 37, 325 (1988).
* Fisher _et al._ (1989a) M. P. A. Fisher, P. B. Weichman, G. Grinstein, and D. S. Fisher, Phys. Rev. B 40, 546 (1989a).
* Doggen _et al._ (2017) E. V. H. Doggen, G. Lemarié, S. Capponi, and N. Laflorencie, Phys. Rev. B 96, 180202(R) (2017).
* Schulz _et al._ (2019) M. Schulz, C. A. Hooley, R. Moessner, and F. Pollmann, Phys. Rev. Lett. 122, 040606 (2019).
* Doggen _et al._ (2021) E. V. H. Doggen, I. V. Gornyi, and D. G. Polyakov, Phys. Rev. B 103, L100202 (2021).
* Yao _et al._ (2021) R. Yao, T. Chanda, and J. Zakrzewski, Phys. Rev. B 104, 014201 (2021).
* Sandvik (1992) A. W. Sandvik, J. Phys. A: Math. Gen. 25, 3667 (1992).
* Sandvik (1999) A. W. Sandvik, Phys. Rev. B 59, R14157 (1999).
* Sengupta and Haas (2007) P. Sengupta and S. Haas, Phys. Rev. Lett. 99, 050403 (2007).
* Fisher _et al._ (1989b) M. P. A. Fisher, P. B. Weichman, G. Grinstein, and D. S. Fisher, Phys. Rev. B 40, 546 (1989b).
* Pollet _et al._ (2009) L. Pollet, N. V. Prokof’ev, B. V. Svistunov, and M. Troyer, Phys. Rev. Lett. 103, 140402 (2009).
* Prokof’ev and Svistunov (2004) N. Prokof’ev and B. Svistunov, Phys. Rev. Lett. 92, 015703 (2004).
* Álvarez Zúñiga _et al._ (2015) J. P. Álvarez Zúñiga, D. J. Luitz, G. Lemarié, and N. Laflorencie, Phys. Rev. Lett. 114, 155301 (2015).
* Humeniuk and Roscilde (2012a) S. Humeniuk and T. Roscilde, Phys. Rev. B 86, 235116 (2012a).
* Luitz _et al._ (2014a) D. J. Luitz, X. Plat, N. Laflorencie, and F. Alet, Phys. Rev. B 90, 125105 (2014a).
* Luitz _et al._ (2014b) D. J. Luitz, F. Alet, and N. Laflorencie, Phys. Rev. Lett. 112, 057203 (2014b).
* Laflorencie _et al._ (2020) N. Laflorencie, G. Lemarié, and N. Macé, Phys. Rev. Research 2, 042033(R) (2020).
* Weinberg and Bukov (2017) P. Weinberg and M. Bukov, SciPost Phys. 2, 003 (2017).
  * Agrawal _et al._ (2022) U. Agrawal, R. Vasseur, and S. Gopalakrishnan, arXiv:2204.03665 (2022).
* Visscher (1972) W. Visscher, Journal of Non-Crystalline Solids 8-10, 477 (1972).
* Sun and Robicheaux (2008) B. Sun and F. Robicheaux, New J. Phys. 10, 045032 (2008).
* Turner _et al._ (2018b) C. J. Turner, A. A. Michailidis, D. A. Abanin, M. Serbyn, and Z. Papić, Phys. Rev. B 98, 155134 (2018b).
* Sandvik _et al._ (1997) A. W. Sandvik, R. R. P. Singh, and D. K. Campbell, Phys. Rev. B 56, 14510 (1997).
* Sandvik (2010) A. W. Sandvik, AIP Conference Proceedings 1297, 135 (2010).
* Weinberg and Bukov (2019) P. Weinberg and M. Bukov, SciPost Phys. 7, 20 (2019).
* Humeniuk and Roscilde (2012b) S. Humeniuk and T. Roscilde, Phys. Rev. B 86, 235116 (2012b).
* Luitz _et al._ (2014c) D. J. Luitz, X. Plat, N. Laflorencie, and F. Alet, Phys. Rev. B 90, 125105 (2014c).
# Additive Decoders for Latent Variables Identification and Cartesian-Product
Extrapolation
Sébastien Lachapelle∗, Divyat Mahajan∗, Ioannis Mitliagkas†, Simon Lacoste-Julien†
Mila & DIRO, Université de Montréal
###### Abstract
We tackle the problems of latent variables identification and “out-of-support”
image generation in representation learning. We show that both are possible
for a class of decoders that we call additive, which are reminiscent of
decoders used for object-centric representation learning (OCRL) and well
suited for images that can be decomposed as a sum of object-specific images.
We provide conditions under which exactly solving the reconstruction problem
using an additive decoder is guaranteed to identify the blocks of latent
variables up to permutation and block-wise invertible transformations. This
guarantee relies only on very weak assumptions about the distribution of the
latent factors, which might present statistical dependencies and have an
almost arbitrarily shaped support. Our result provides a new setting where
nonlinear independent component analysis (ICA) is possible and adds to our
theoretical understanding of OCRL methods. We also show theoretically that
additive decoders can generate novel images by recombining observed factors of
variations in novel ways, an ability we refer to as Cartesian-product
extrapolation. We show empirically that additivity is crucial for both
identifiability and extrapolation on simulated data.
∗ Equal contribution. † Canada CIFAR AI Chair. Correspondence to: {lachaseb, <EMAIL_ADDRESS>
### 1 Introduction
The integration of connectionist and symbolic approaches to artificial
intelligence has been proposed as a solution to the lack of robustness,
transferability, systematic generalization and interpretability of current
deep learning algorithms [38, 4, 9, 18, 14] with justifications rooted in
cognitive sciences [13, 20, 31] and causality [40, 45]. However, the problem
of extracting meaningful symbols grounded in low-level observations, e.g.
images, is still open. This problem is sometimes referred to as disentanglement
[4, 34] or causal representation learning [45]. The question of
identifiability in representation learning, which originated in works on
nonlinear independent component analysis (ICA) [46, 22, 23, 25], has been the
focus of many recent efforts [35, 47, 19, 33, 3, 6, 29]. The mathematical
results of these works provide rigorous explanations for when and why symbolic
representations can be extracted from low-level observations. In a similar
spirit, object-centric representation learning (OCRL) aims to learn a
representation in which the information about different objects is encoded
separately [12, 15, 7, 17, 11, 37, 10]. These approaches have shown impressive
results empirically, but the exact reason why they can perform this form of
segmentation without any supervision is poorly understood.
Figure 1: Left: Additive decoders model the additive structure of scenes
composed of multiple objects. Right: Additive decoders make it possible to generate novel
images never seen during training via Cartesian-product extrapolation
(Corollary 3). Purple regions correspond to latents/observations seen during
training. The blue regions correspond to the Cartesian-product extension. The
middle set is the manifold of images of balls. In this example, the learner
never saw both balls high, but these can be generated nevertheless thanks to
the additive nature of the scene. Details in Section 3.2.
#### 1.1 Contributions
Our first contribution is an analysis of the identifiability of a class of
decoders we call additive (Definition 1). Essentially, a decoder
${\bm{f}}({\bm{z}})$ acting on a latent vector
${\bm{z}}\in{\mathbb{R}}^{d_{z}}$ to produce an observation ${\bm{x}}$ is said
to be additive if it can be written as
${\bm{f}}({\bm{z}})=\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(B)}({\bm{z}}_{B})$
where ${\mathcal{B}}$ is a partition of $\\{1,\dots,d_{z}\\}$,
${\bm{f}}^{(B)}({\bm{z}}_{B})$ are “block-specific” decoders and the
${\bm{z}}_{B}$ are non-overlapping subvectors of ${\bm{z}}$. This class of
decoder is particularly well suited for images ${\bm{x}}$ that can be
expressed as a sum of images corresponding to different objects (left of
Figure 1). Unsurprisingly, this class of decoder bears similarity with the
decoding architectures used in OCRL (Section 2), which already showed
important successes at disentangling objects without any supervision. Our
identifiability results provide conditions under which exactly solving the
reconstruction problem with an additive decoder identifies the latent blocks
${\bm{z}}_{B}$ up to permutation and block-wise transformations (Theorems 1 &
2). We believe these results will be of interest to both the OCRL community,
as they partly explain the empirical success of these approaches, and to the
nonlinear ICA and disentanglement community, as it provides an important
special case where identifiability holds. This result relies on the block-
specific decoders being “sufficiently nonlinear” (Assumption 2) and requires
only very weak assumptions on the distribution of the ground-truth latent
factors of variations. In particular, these factors can be statistically
dependent and their support can be (almost) arbitrary.
Our second contribution is to show theoretically that additive decoders can
generate images never seen during training by recombining observed factors of
variations in novel ways (Corollary 3). To describe this ability, we coin the
term “Cartesian-product extrapolation” (right of Figure 1). We believe the
theoretical framework laid out in this work to understand “out-of-support”
generation is a step towards understanding theoretically why modern generative
models such as DALLE-2 [42] and Stable Diffusion [43] can be creative.
Both latent variables identification and Cartesian-product extrapolation are
validated experimentally on simulated data (Section 4). More specifically, we
observe that additivity is crucial for both by comparing against a non-
additive decoder which fails to disentangle and extrapolate.
Notation. Scalars are denoted in lower-case and vectors in lower-case bold,
e.g. $x\in{\mathbb{R}}$ and ${\bm{x}}\in{\mathbb{R}}^{n}$. We maintain an
analogous notation for scalar-valued and vector-valued functions, e.g. $f$ and
${\bm{f}}$. The $i$th coordinate of the vector ${\bm{x}}$ is denoted by
${\bm{x}}_{i}$. The first $n$ integers excluding $0$ is denoted by $[n]$.
Given a subset of indices $S\subseteq[n]$, ${\bm{x}}_{S}$ denotes the
subvector consisting of entries ${\bm{x}}_{i}$ for $i\in S$. Given a function
${\bm{f}}({\bm{x}}_{S})\in{\mathbb{R}}^{m}$ with input ${\bm{x}}_{S}$, the
derivative of ${\bm{f}}$ w.r.t. ${\bm{x}}_{i}$ is denoted by
$D_{i}{\bm{f}}({\bm{x}}_{S})\in{\mathbb{R}}^{m}$ and the second derivative
w.r.t. ${\bm{x}}_{i}$ and ${\bm{x}}_{i^{\prime}}$ is
$D^{2}_{i,i^{\prime}}{\bm{f}}({\bm{x}}_{S})\in{\mathbb{R}}^{m}$. See Table 2
in appendix for more.
Code: Our code repository can be found at this link.
### 2 Background & Literature review
Identifiability of latent variable models. The problem of latent variables
identification can be best explained with a simple example. Suppose
observations ${\bm{x}}\in{\mathbb{R}}^{d_{x}}$ are generated i.i.d. by first
sampling a latent vector ${\bm{z}}\in{\mathbb{R}}^{d_{z}}$ from a distribution
${\mathbb{P}}_{\bm{z}}$ and feeding it into a decoder function
${\bm{f}}:{\mathbb{R}}^{d_{z}}\rightarrow{\mathbb{R}}^{d_{x}}$, i.e.
${\bm{x}}={\bm{f}}({\bm{z}})$. By choosing an alternative model defined as
$\hat{\bm{f}}:={\bm{f}}\circ{\bm{v}}$ and
$\hat{\bm{z}}:={\bm{v}}^{-1}({\bm{z}})$ where
${\bm{v}}:{\mathbb{R}}^{d_{z}}\rightarrow{\mathbb{R}}^{d_{z}}$ is some
bijective transformation, it is easy to see that the distributions of
$\hat{\bm{x}}=\hat{\bm{f}}(\hat{\bm{z}})$ and ${\bm{x}}$ are the same since
$\hat{\bm{f}}(\hat{\bm{z}})={\bm{f}}\circ{\bm{v}}({\bm{v}}^{-1}({\bm{z}}))={\bm{f}}({\bm{z}})$.
The problem of identifiability is that, given only the distribution over
${\bm{x}}$, it is impossible to distinguish between the two models
$({\bm{f}},{\bm{z}})$ and $(\hat{\bm{f}},\hat{\bm{z}})$. This is problematic
when one wants to discover interpretable factors of variations since
${\bm{z}}$ and $\hat{\bm{z}}$ could be drastically different. There are
essentially two strategies to go around this problem: (i) restricting the
hypothesis class of decoders $\hat{\bm{f}}$ [46, 19, 6, 49], and/or (ii)
restricting/adding structure to the distribution of $\hat{\bm{z}}$ [23, 36,
30, 33]. By doing so, the hope is that the only bijective mapping ${\bm{v}}$
keeping $\hat{\bm{f}}$ and $\hat{\bm{z}}$ into their respective hypothesis
classes will be trivial indeterminacies such as permutations and element-wise
rescalings. Our contribution, which is to restrict the decoder function
$\hat{\bm{f}}$ to be additive (Definition 1), falls into the first category.
The restricted function classes for decoders proposed so far do not clearly
apply to images, unlike additive decoders, which nicely capture their additive
nature. Moreover, the methods that do not restrict the decoder must instead
restrict/structure the distribution of the latent factors by assuming, e.g.,
sparse temporal dependencies [22, 27, 1, 28], conditionally independent latent
variables given an observed auxiliary variable [23, 25], that interventions
targeting the latents are observed [30, 33, 5, 2, 3], or that the support of
the latents is a Cartesian-product [48, 44]. In contrast, our result makes
very mild assumptions about the distribution of the latent factors, which can
present statistical dependencies and have an almost arbitrarily shaped support,
and it does not require any interventions. Additionally, none of these works
provide extrapolation guarantees as we do in Section 3.2.
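The non-identifiability argument from the start of Section 2 can be verified numerically; here the bijection ${\bm{v}}$ is taken to be a simple linear mixing $A$ for concreteness (any bijection works), and the decoder is a toy nonlinear map:

```python
import numpy as np

rng = np.random.default_rng(2)

def f(z):               # "ground-truth" decoder (toy element-wise nonlinearity)
    return np.tanh(z) + 0.1 * z ** 3

A = np.array([[2.0, 1.0], [1.0, 1.0]])   # invertible mixing v(z) = A z
A_inv = np.linalg.inv(A)

def f_hat(z_hat):       # alternative decoder f_hat = f o v
    return f(A @ z_hat)

z = rng.normal(size=2)
z_hat = A_inv @ z       # alternative latents z_hat = v^{-1}(z)

# Both models produce exactly the same observation x, so the distribution of
# x alone cannot distinguish (f, z) from (f_hat, z_hat).
assert np.allclose(f_hat(z_hat), f(z))
```

This is precisely the ambiguity that restricting the decoder class (e.g. to additive decoders) is meant to break.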
Object-centric representation learning (OCRL). Lin et al. [32] classified OCRL
methods into two categories: scene mixture models [15, 16, 17, 37] & spatial-
attention models [12, 8, 7, 11]. Additive decoders can be seen as an
approximation to the decoding architectures used in the former category, which
typically consist of an object-specific decoder ${\bm{f}}^{(\text{obj})}$
acting on object-specific latent blocks ${\bm{z}}_{B}$ and “mixed” together
via a masking mechanism ${\bm{m}}^{(B)}({\bm{z}})$ which selects which pixel
belongs to which object. More precisely,
$\displaystyle{\bm{f}}({\bm{z}})=\sum_{B\in{\mathcal{B}}}{\bm{m}}^{(B)}({\bm{z}})\odot{\bm{f}}^{(\text{obj})}({\bm{z}}_{B})\
\text{, where}\
{\bm{m}}^{(B)}_{k}({\bm{z}})=\frac{\exp({\bm{a}}_{k}({\bm{z}}_{B}))}{\sum_{B^{\prime}\in{\mathcal{B}}}\exp({\bm{a}}_{k}({\bm{z}}_{B^{\prime}}))}\,,$
(1)
and where ${\mathcal{B}}$ is a partition of $[d_{z}]$ made of equal-size
blocks $B$ and ${\bm{a}}:{\mathbb{R}}^{|B|}\rightarrow{\mathbb{R}}^{d_{x}}$
outputs a score that is normalized via a softmax operation to obtain the masks
${\bm{m}}^{(B)}({\bm{z}})$. Many of these works also present some mechanism to
select dynamically how many objects are present in the scene and thus have a
variable-size representation ${\bm{z}}$, an important technical aspect we omit
in our analysis. Empirically, training these decoders based on some form of
reconstruction objective, probabilistic or not, yields latent blocks
${\bm{z}}_{B}$ that represent the information of individual objects
separately. We believe our work constitutes a step towards providing a
mathematically grounded explanation for why these approaches can perform this
form of disentanglement without supervision (Theorems 1 & 2). Many
architectural innovations in scene mixture models concern the encoder, but our
analysis focuses solely on the structure of the decoder ${\bm{f}}({\bm{z}})$,
which is a shared aspect across multiple methods. Generalization capabilities
of object-centric representations were studied empirically by Dittadi et al.
[10] but did not cover Cartesian-product extrapolation (Corollary 3) on which
we focus here.
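A minimal numpy sketch of the masked decoder of Equation (1), with random linear maps standing in for the learned networks ${\bm{f}}^{(\text{obj})}$ and ${\bm{a}}$ (all shapes and weights are illustrative assumptions, and the same object decoder is shared across blocks as in practice):

```python
import numpy as np

d_x, block = 6, 2                    # "pixels" and latent block size
B_list = [(0, 2), (2, 4)]            # partition of [d_z] into equal blocks
rng = np.random.default_rng(0)

# Shared object decoder f^(obj) and mask-score network a from Eq. (1),
# modeled here as random linear maps for illustration.
W_obj = rng.normal(size=(d_x, block))
W_a = rng.normal(size=(d_x, block))

def masked_decoder(z):
    scores = np.stack([W_a @ z[s:e] for s, e in B_list])    # a_k(z_B)
    masks = np.exp(scores) / np.exp(scores).sum(axis=0)     # per-pixel softmax
    objs = np.stack([W_obj @ z[s:e] for s, e in B_list])    # f^(obj)(z_B)
    return (masks * objs).sum(axis=0)                       # Eq. (1) mixing

z = rng.normal(size=4)
x = masked_decoder(z)
# The masks form a partition of unity at each pixel, which is exactly the
# normalization that breaks additivity.
```

The softmax coupling across blocks is visible in `masks`: each mask depends on every ${\bm{z}}_{B}$, which is why this decoder is not additive in the sense of Definition 1.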
Additive decoders are also closely related to the penalty introduced by
Peebles et al. [41] which consists in regularizing the Hessian of the decoder
to be diagonal. In Appendix A.2, we show that “additivity” and “diagonal
Hessian” are equivalent properties. They showed empirically that this penalty
can induce disentanglement on datasets such as CLEVR [24], which is a standard
benchmark for OCRL, but did not provide any formal justification. Our work
provides a rigorous explanation for these successes and highlights the link
between the diagonal Hessian penalty and OCRL.
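The additivity/diagonal-Hessian equivalence can be probed with finite differences: for an additive function, the mixed second derivatives across blocks vanish. A sketch with toy scalar-output functions (not the paper's decoders):

```python
import numpy as np

def f(z):
    # Additive function: each term depends on a single coordinate.
    return np.sin(z[0]) + z[1] ** 2

def mixed_partial(fn, z, i, j, eps=1e-5):
    # Central finite difference for d^2 fn / (dz_i dz_j).
    def shift(di, dj):
        w = z.copy(); w[i] += di; w[j] += dj
        return fn(w)
    return (shift(eps, eps) - shift(eps, -eps)
            - shift(-eps, eps) + shift(-eps, -eps)) / (4 * eps ** 2)

z = np.array([0.3, -1.2])
# The cross Hessian entry vanishes for the additive function...
assert abs(mixed_partial(f, z, 0, 1)) < 1e-5
# ...but not for a non-additive one such as g(z) = z_0 * z_1.
g = lambda z: z[0] * z[1]
assert abs(mixed_partial(g, z, 0, 1) - 1.0) < 1e-4
```

This is the quantity the Hessian penalty of Peebles et al. [41] drives toward zero.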
### 3 Additive decoders for disentanglement & extrapolation
Our theoretical results assume the existence of some data-generating process
describing how the observations ${\bm{x}}$ are generated and, importantly,
what are the “natural” factors of variations.
###### Assumption 1 (Data-generating process).
The set of possible observations is given by a lower dimensional manifold
${\bm{f}}({\mathcal{Z}}^{\textnormal{test}})$ embedded in
${\mathbb{R}}^{d_{x}}$ where ${\mathcal{Z}}^{\textnormal{test}}$ is an open
set of ${\mathbb{R}}^{d_{z}}$ and
${\bm{f}}:{\mathcal{Z}}^{\textnormal{test}}\rightarrow{\mathbb{R}}^{d_{x}}$ is
a $C^{2}$-diffeomorphism onto its image. We will refer to ${\bm{f}}$ as the
_ground-truth decoder_. At training time, the observations are i.i.d. samples
given by ${\bm{x}}={\bm{f}}({\bm{z}})$ where ${\bm{z}}$ is distributed
according to the probability measure
${\mathbb{P}}^{\textnormal{train}}_{\bm{z}}$ with support
${\mathcal{Z}}^{\textnormal{train}}\subseteq{\mathcal{Z}}^{\textnormal{test}}$.
Throughout, we assume that ${\mathcal{Z}}^{\textnormal{train}}$ is regularly
closed (Definition 6).
Intuitively, the ground-truth decoder ${\bm{f}}$ is effectively relating the
“natural factors of variations” ${\bm{z}}$ to the observations ${\bm{x}}$ in a
one-to-one fashion. The map ${\bm{f}}$ is a $C^{2}$-diffeomorphism onto its
image, which means that it is $C^{2}$ (has continuous second derivative) and
that its inverse (restricted to the image of ${\bm{f}}$) is also $C^{2}$.
Analogous assumptions are very common in the literature on nonlinear ICA and
disentanglement [23, 25, 30, 1].
We emphasize the distinction between ${\mathcal{Z}}^{\textnormal{train}}$,
which corresponds to the observations seen during training, and
${\mathcal{Z}}^{\textnormal{test}}$, which corresponds to the set of all
possible images. The case where
${\mathcal{Z}}^{\textnormal{train}}\not={\mathcal{Z}}^{\textnormal{test}}$
will be of particular interest when discussing extrapolation in Section 3.2.
The “regularly closed” condition on ${\mathcal{Z}}^{\textnormal{train}}$ is
mild, as it is satisfied as soon as the distribution of ${\bm{z}}$ has a
density w.r.t. the Lebesgue measure on ${\mathbb{R}}^{d_{z}}$. It is violated,
for example, when ${\bm{z}}$ is a discrete random vector. Figure 2 illustrates
this assumption with simple examples.
Objective. Our analysis is based on the simple objective of reconstructing the
observations ${\bm{x}}$ by learning an encoder
$\hat{\bm{g}}:{\mathbb{R}}^{d_{x}}\rightarrow{\mathbb{R}}^{d_{z}}$ and a
decoder $\hat{\bm{f}}:{\mathbb{R}}^{d_{z}}\rightarrow{\mathbb{R}}^{d_{x}}$.
Note that we assumed implicitly that the dimensionality of the learned
representation matches the dimensionality of the ground-truth. We define the
set of latent codes the encoder can output when evaluated on the training
distribution:
$\displaystyle\hat{\mathcal{Z}}^{\textnormal{train}}:=\hat{\bm{g}}({\bm{f}}({\mathcal{Z}}^{\textnormal{train}}))\,.$
(2)
When the images of the ground-truth and learned decoders match, i.e.
${\bm{f}}({\mathcal{Z}}^{\textnormal{train}})=\hat{\bm{f}}(\hat{\mathcal{Z}}^{\textnormal{train}})$,
which happens when the reconstruction task is solved exactly, one can define
the map
${\bm{v}}:\hat{\mathcal{Z}}^{\textnormal{train}}\rightarrow{\mathcal{Z}}^{\textnormal{train}}$
as
$\displaystyle{\bm{v}}:={\bm{f}}^{-1}\circ\hat{\bm{f}}\,.$ (3)
This function is going to be crucial throughout the work, especially to define
${\mathcal{B}}$-disentanglement (Definition 3), as it relates the learned
representation to the ground-truth representation.
Before introducing our formal definition of additive decoders, we introduce
the following notation: Given a set
${\mathcal{Z}}\subseteq{\mathbb{R}}^{d_{z}}$ and a subset of indices
$B\subseteq[d_{z}]$, let us define ${\mathcal{Z}}_{B}$ to be the projection of
${\mathcal{Z}}$ onto dimensions labelled by the index set $B$. More formally,
$\displaystyle{\mathcal{Z}}_{B}:=\\{{\bm{z}}_{B}\mid{\bm{z}}\in\mathcal{Z}\\}\subseteq{\mathbb{R}}^{|B|}\,.$
(4)
Intuitively, we will say that a decoder is additive when its output is the
summation of the outputs of “object-specific” decoders that depend only on
each latent block ${\bm{z}}_{B}$. This captures the idea that an image can be
seen as the juxtaposition of multiple images which individually correspond to
objects in the scene or natural factors of variations (left of Figure 1). The
following definition makes this precise and slightly more general by adding an
extra invertible function $\sigma$ at the output.
###### Definition 1 ($(\sigma,{\mathcal{B}})$-additive function).
Let $\sigma:{\mathbb{R}}^{d_{x}}\rightarrow{\mathbb{R}}^{d_{x}}$ be an
invertible transformation and let ${\mathcal{B}}$ be a partition of
$[d_{z}]$ (without loss of generality, we assume that the partition
${\mathcal{B}}$ is contiguous, i.e. each $B\in{\mathcal{B}}$ can be written as
$B=\\{i+1,i+2,\dots,i+|B|\\}$). A function
${\bm{f}}:{\mathcal{Z}}\rightarrow{\mathbb{R}}^{d_{x}}$ is said to be
$(\sigma,{\mathcal{B}})$-additive if there exist functions
${\bm{f}}^{(B)}:{\mathcal{Z}}_{B}\rightarrow{\mathbb{R}}^{d_{x}}$ for all
${B\in{\mathcal{B}}}$ such that
$\displaystyle\forall{\bm{z}}\in{\mathcal{Z}},{\bm{f}}({\bm{z}})=\sigma(\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(B)}({\bm{z}}_{B}))\,.$
(5)
This additivity property will be central to our analysis as it will be the
driving force of identifiability (Theorem 1 & 2) and Cartesian-product
extrapolation (Corollary 3). Since $\sigma$ and ${\mathcal{B}}$ are fixed
throughout, we will simply say that a function is additive to mean that it is
$(\sigma,{\mathcal{B}})$-additive. Note that the presence of $\sigma$ allows a
wider applicability of the result. For example, one can obtain a
“multiplicative decoder” by taking $\sigma({\bm{x}}):=\exp({\bm{x}})$, where
$\exp$ is applied element-wise.
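A minimal sketch of Definition 1 with $\sigma=\mathrm{id}$, using toy random two-layer block decoders (all shapes and weights are illustrative assumptions); additivity is checked numerically by comparing output differences:

```python
import numpy as np

rng = np.random.default_rng(1)
d_x = 5
B_list = [(0, 2), (2, 4)]  # contiguous partition B of [d_z], with d_z = 4

# Block-specific decoders f^(B): small random two-layer maps.
params = [(rng.normal(size=(8, 2)), rng.normal(size=(d_x, 8))) for _ in B_list]

def f_additive(z, sigma=lambda x: x):
    # f(z) = sigma( sum_B f^(B)(z_B) ), Eq. (5); sigma = identity here.
    total = np.zeros(d_x)
    for (s, e), (W1, W2) in zip(B_list, params):
        total += W2 @ np.tanh(W1 @ z[s:e])
    return sigma(total)

# Additivity check: changing z_{B_2} shifts the output by a term that does
# not depend on z_{B_1} (with sigma = identity).
z1 = rng.normal(size=4)
z2 = z1.copy(); z2[2:4] = rng.normal(size=2)
z3 = rng.normal(size=4); z3[2:4] = z1[2:4]
z4 = z3.copy(); z4[2:4] = z2[2:4]
assert np.allclose(f_additive(z2) - f_additive(z1),
                   f_additive(z4) - f_additive(z3))
```

Passing `sigma=np.exp` turns the same construction into the "multiplicative decoder" mentioned above.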
Differences with OCRL in practice. We point out that, although the additive
decoders make intuitive sense for OCRL, they are not expressive enough to
represent the “masked decoders” typically used in practice (Equation (1)). The
lack of additivity stems from the normalization in the masks
${\bm{m}}^{(B)}({\bm{z}})$. We hypothesize that studying the simpler additive
decoders might still reveal interesting phenomena present in modern OCRL
approaches due to their resemblance. Another difference is that, in practice,
the same object-specific decoder ${\bm{f}}^{(\text{obj})}$ is applied to every
latent block ${\bm{z}}_{B}$. Our theory allows for these functions to be
different, but also applies when functions are the same. Additionally, this
parameter sharing across ${\bm{f}}^{(B)}$ enables modern methods to have a
variable number of objects across samples, an important practical point our
theory does not cover.
#### 3.1 Identifiability analysis
We now study the identifiability of additive decoders and show how they can
yield disentanglement. Our definition of disentanglement will rely on
partition-respecting permutations:
###### Definition 2 (Partition-respecting permutations).
Let ${\mathcal{B}}$ be a partition of $\\{1,...,d_{z}\\}$. A permutation $\pi$
over $\\{1,...,d_{z}\\}$ respects ${\mathcal{B}}$ if, for all
$B\in{\mathcal{B}},\ \pi(B)\in{\mathcal{B}}$.
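Definition 2 can be checked mechanically; a small sketch with hypothetical permutations over four indices:

```python
def respects_partition(pi, partition):
    """Definition 2: pi respects B if pi maps every block of B to a block."""
    blocks = {frozenset(B) for B in partition}
    return all(frozenset(pi[i] for i in B) in blocks for B in partition)

B = [{0, 1}, {2, 3}]                    # partition of {0, ..., 3}
pi_swap = {0: 2, 1: 3, 2: 1, 3: 0}      # swaps the blocks (and within-block)
pi_mix = {0: 0, 1: 2, 2: 1, 3: 3}       # mixes the two blocks

assert respects_partition(pi_swap, B)
assert not respects_partition(pi_mix, B)
```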
Essentially, a permutation that respects ${\mathcal{B}}$ is one which can
permute blocks of ${\mathcal{B}}$ and permute elements within a block, but
cannot “mix” blocks together. We now introduce
${\mathcal{B}}$-disentanglement.
###### Definition 3 (${\mathcal{B}}$-disentanglement).
A learned decoder
$\hat{\bm{f}}:{\mathbb{R}}^{d_{z}}\rightarrow{\mathbb{R}}^{d_{x}}$ is said to
be ${\mathcal{B}}$-disentangled w.r.t. the ground-truth decoder ${\bm{f}}$
when
${\bm{f}}({\mathcal{Z}}^{\textnormal{train}})=\hat{\bm{f}}(\hat{\mathcal{Z}}^{\textnormal{train}})$
and the mapping ${\bm{v}}:={\bm{f}}^{-1}\circ\hat{\bm{f}}$ is a diffeomorphism
from $\hat{\mathcal{Z}}^{\textnormal{train}}$ to
${\mathcal{Z}}^{\textnormal{train}}$ satisfying the following property: there
exists a permutation $\pi$ respecting ${\mathcal{B}}$ such that, for all
$B\in{\mathcal{B}}$, there exists a function
$\bar{\bm{v}}_{\pi(B)}:\hat{\mathcal{Z}}^{\textnormal{train}}_{B}\rightarrow{\mathcal{Z}}^{\textnormal{train}}_{\pi(B)}$
such that, for all ${\bm{z}}\in\hat{\mathcal{Z}}^{\textnormal{train}}$,
${\bm{v}}_{\pi(B)}({\bm{z}})=\bar{\bm{v}}_{\pi(B)}({\bm{z}}_{B})$. In other
words, ${\bm{v}}_{\pi(B)}({\bm{z}})$ depends only on ${\bm{z}}_{B}$.
Thus, ${\mathcal{B}}$-disentanglement means that the blocks of latent
dimensions ${\bm{z}}_{B}$ are disentangled from one another, but that
variables within a given block might remain entangled.
###### Example 1.
To illustrate ${\mathcal{B}}$-disentanglement, imagine a scene consisting of
two balls moving around in 2D where the “ground-truth” representation is given
by ${\bm{z}}=(x^{1},y^{1},x^{2},y^{2})$ where ${\bm{z}}_{B_{1}}=(x^{1},y^{1})$
and ${\bm{z}}_{B_{2}}=(x^{2},y^{2})$ are the coordinates of each ball (here,
${\mathcal{B}}:=\\{\\{1,2\\},\\{3,4\\}\\}$). In that case, a learned
representation is ${\mathcal{B}}$-disentangled when the balls are disentangled
from one another. However, the basis in which the position of each ball is
represented might differ in both representations.
The first identifiability result (Theorem 1) shows a weaker form of
disentanglement we call local ${\mathcal{B}}$-disentanglement. It means the
Jacobian matrix of ${\bm{v}}$ has a “block-permutation” structure everywhere.
###### Definition 4 (Local ${\mathcal{B}}$-disentanglement).
A learned decoder
$\hat{\bm{f}}:{\mathbb{R}}^{d_{z}}\rightarrow{\mathbb{R}}^{d_{x}}$ is said to
be locally ${\mathcal{B}}$-disentangled w.r.t. the ground-truth decoder
${\bm{f}}$ when
${\bm{f}}({\mathcal{Z}}^{\textnormal{train}})=\hat{\bm{f}}(\hat{\mathcal{Z}}^{\textnormal{train}})$
and the mapping ${\bm{v}}:={\bm{f}}^{-1}\circ\hat{\bm{f}}$ is a diffeomorphism
from $\hat{\mathcal{Z}}^{\textnormal{train}}$ to
${\mathcal{Z}}^{\textnormal{train}}$ with a mapping
${\bm{v}}:\hat{\mathcal{Z}}^{\textnormal{train}}\rightarrow{\mathcal{Z}}^{\textnormal{train}}$
satisfying the following property: for all
${\bm{z}}\in\hat{\mathcal{Z}}^{\textnormal{train}}$, there exists a
permutation $\pi$ respecting ${\mathcal{B}}$ such that, for all
$B\in{\mathcal{B}}$, the columns of
$D{\bm{v}}_{\pi(B)}({\bm{z}})\in{\mathbb{R}}^{|B|\times d_{z}}$ outside block
$B$ are zero.
In Appendix A.3, we provide three examples where local disentanglement holds
but not global disentanglement. The first one illustrates how having a
disconnected support can allow for a permutation $\pi$ (from Definition 4)
that changes between disconnected regions of the support. The last two
examples show how, even if the permutation stays the same throughout the
support, we can still violate global disentanglement, even with a connected
support.
We now state the main identifiability result of this work which provides
conditions to guarantee local disentanglement. We will then see how to go from
local to global disentanglement in the subsequent Theorem 2. For pedagogical
reasons, we delay the formalization of the sufficient nonlinearity Assumption
2 on which the result crucially relies.
###### Theorem 1 (Local disentanglement via additive decoders).
Suppose that the data-generating process satisfies Assumption 1, that the
learned decoder
$\hat{\bm{f}}:{\mathbb{R}}^{d_{z}}\rightarrow{\mathbb{R}}^{d_{x}}$ is a
$C^{2}$-diffeomorphism, that the encoder
$\hat{\bm{g}}:{\mathbb{R}}^{d_{x}}\rightarrow{\mathbb{R}}^{d_{z}}$ is
continuous, that both ${\bm{f}}$ and $\hat{\bm{f}}$ are additive (Definition
1) and that ${\bm{f}}$ is sufficiently nonlinear as formalized by Assumption
2. Then, if $\hat{\bm{f}}$ and $\hat{\bm{g}}$ solve the reconstruction problem
on the training distribution, i.e.
${\mathbb{E}}^{\textnormal{train}}||{\bm{x}}-\hat{\bm{f}}(\hat{\bm{g}}({\bm{x}}))||^{2}=0$,
we have that $\hat{\bm{f}}$ is locally ${\mathcal{B}}$-disentangled w.r.t.
${\bm{f}}$ (Definition 4).
The proof of Theorem 1, which can be found in Appendix A.4, is inspired by
Hyvärinen et al. [23]. The essential differences are that (i) they leverage
the additivity of the conditional log-density of ${\bm{z}}$ given an auxiliary
variable ${\bm{u}}$ (i.e. conditional independence) instead of the additivity
of the decoder function ${\bm{f}}$, (ii) we extend their proof techniques to
allow for “block” disentanglement, i.e. when ${\mathcal{B}}$ is not the
trivial partition $\\{\\{1\\},\dots,\\{d_{z}\\}\\}$, (iii) the assumption
“sufficient variability” of the prior $p({\bm{z}}\mid{\bm{u}})$ of Hyvärinen
et al. [23] is replaced by an analogous assumption of “sufficient
nonlinearity” of the decoder ${\bm{f}}$ (Assumption 2), and (iv) we consider
much more general supports ${\mathcal{Z}}^{\textnormal{train}}$ which makes
the jump from local to global disentanglement less direct in our case.
Sufficient nonlinearity. The following assumption is key in proving Theorem 1,
as it requires that the ground-truth decoder is “sufficiently nonlinear”. This
is reminiscent of the “sufficient variability” assumptions found in the
nonlinear ICA literature, which usually concern the distribution of the
latent variable ${\bm{z}}$ as opposed to the decoder ${\bm{f}}$ [21, 22, 23,
25, 26, 30, 49]. We clarify this link in Appendix A.5 and provide intuitions
for why sufficient nonlinearity can be satisfied when $d_{x}\gg d_{z}$.
###### Assumption 2 (Sufficient nonlinearity of ${\bm{f}}$).
Let $q:=d_{z}+\sum_{B\in{\mathcal{B}}}\frac{|B|(|B|+1)}{2}$. For all
${\bm{z}}\in{\mathcal{Z}}^{\textnormal{train}}$, ${\bm{f}}$ is such that the
following matrix has linearly independent columns (i.e., full column rank):
$\displaystyle{\bm{W}}({\bm{z}})$
$\displaystyle:=\left[\left[D_{i}{\bm{f}}^{(B)}({\bm{z}}_{B})\right]_{i\in B}\
\left[D^{2}_{i,i^{\prime}}{\bm{f}}^{(B)}({\bm{z}}_{B})\right]_{(i,i^{\prime})\in
B_{\leq}^{2}}\right]_{B\in{\mathcal{B}}}\in{\mathbb{R}}^{d_{x}\times q}\,,$
(6)
where $B^{2}_{\leq}:=B^{2}\cap\\{(i,i^{\prime})\mid i^{\prime}\leq i\\}$. Note
this implies $d_{x}\geq q$.
The following example shows that Theorem 1 does not contradict the
nonidentifiability of linear ICA.
###### Example 2 (Importance of Assumption 2).
Suppose ${\bm{f}}({\bm{z}})={\bm{A}}{\bm{z}}$ and take
$\hat{\bm{f}}({\bm{z}}):={\bm{f}}({\bm{V}}{\bm{z}})$ where
${\bm{A}}\in{\mathbb{R}}^{d_{x}\times d_{z}}$ is full rank and
${\bm{V}}\in{\mathbb{R}}^{d_{z}\times d_{z}}$ is invertible. By construction,
${\bm{v}}({\bm{z}}):={\bm{f}}^{-1}\circ\hat{\bm{f}}({\bm{z}})={\bm{V}}{\bm{z}}$.
Also, both ${\bm{f}}({\bm{z}})$ and $\hat{\bm{f}}({\bm{z}})$ are additive
since
${\bm{f}}({\bm{z}})=\sum_{B\in{\mathcal{B}}}{\bm{A}}_{\cdot,B}{\bm{z}}_{B}$
and
$\hat{\bm{f}}({\bm{z}})=\sum_{B\in{\mathcal{B}}}({\bm{A}}{\bm{V}})_{\cdot,B}{\bm{z}}_{B}$,
even if ${\bm{V}}$ does not have a “block-permutation structure”, i.e. no
disentanglement. The reason we cannot apply Theorem 1 here is that
Assumption 2 is not satisfied. Indeed, the second derivatives of
${\bm{f}}^{(B)}({\bm{z}}_{B}):={\bm{A}}_{\cdot,B}{\bm{z}}_{B}$ are all zero
and hence ${\bm{W}}({\bm{z}})$ cannot have full column-rank.
###### Example 3 (A sufficiently nonlinear ${\bm{f}}$).
In Appendix A.6 we show numerically that the function
$\displaystyle{\bm{f}}({\bm{z}}):=[{\bm{z}}_{1},{\bm{z}}_{1}^{2},{\bm{z}}_{1}^{3},{\bm{z}}_{1}^{4}]^{\top}+[({\bm{z}}_{2}+1),({\bm{z}}_{2}+1)^{2},({\bm{z}}_{2}+1)^{3},({\bm{z}}_{2}+1)^{4}]^{\top}$
(7)
is a diffeomorphism from the square $[-1,0]\times[0,1]$ to its image that
satisfies Assumption 2.
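The claim can be reproduced with a few lines of numpy (a sketch of the idea, not the code of Appendix A.6): for this ${\bm{f}}$ we have $q=2+1+1=4=d_{x}$, and ${\bm{W}}({\bm{z}})$ stacks the first and second derivatives of each block-specific decoder.

```python
import numpy as np

def W(z1, z2):
    """W(z) from Assumption 2 for the f of Equation (7); columns are the
    first and second derivatives of f^{(1)} at z1 and f^{(2)} at z2."""
    a, b = z1, z2 + 1.0
    return np.column_stack([
        [1.0, 2 * a, 3 * a ** 2, 4 * a ** 3],  # D f^{(1)}(z1)
        [0.0, 2.0, 6 * a, 12 * a ** 2],        # D^2 f^{(1)}(z1)
        [1.0, 2 * b, 3 * b ** 2, 4 * b ** 3],  # D f^{(2)}(z2)
        [0.0, 2.0, 6 * b, 12 * b ** 2],        # D^2 f^{(2)}(z2)
    ])

rng = np.random.default_rng(0)
zs = rng.uniform([-1.0, 0.0], [0.0, 1.0], size=(1000, 2))  # sample [-1,0]x[0,1]
ranks = [np.linalg.matrix_rank(W(z1, z2)) for z1, z2 in zs]
print(min(ranks))  # 4: full column rank throughout the sample
```

Intuitively, the rank can only drop when ${\bm{z}}_{1}={\bm{z}}_{2}+1$, which never happens on this square since ${\bm{z}}_{1}\leq 0$ and ${\bm{z}}_{2}+1\geq 1$.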
##### 3.1.1 From local to global disentanglement
The following result provides additional assumptions to guarantee global
disentanglement (Definition 3) as opposed to only local disentanglement
(Definition 4). See Appendix A.7 for its proof.
###### Theorem 2 (From local to global disentanglement).
Suppose that all the assumptions of Theorem 1 hold. Additionally, assume
${\mathcal{Z}}^{\textnormal{train}}$ is path-connected (Definition 8) and that
the block-specific decoders ${\bm{f}}^{(B)}$ and $\hat{\bm{f}}^{(B)}$ are
injective for all blocks $B\in{\mathcal{B}}$. Then, if $\hat{\bm{f}}$ and
$\hat{\bm{g}}$ solve the reconstruction problem on the training distribution,
i.e.
${\mathbb{E}}^{\textnormal{train}}||{\bm{x}}-\hat{\bm{f}}(\hat{\bm{g}}({\bm{x}}))||^{2}=0$,
we have that $\hat{\bm{f}}$ is (globally) ${\mathcal{B}}$-disentangled w.r.t.
${\bm{f}}$ (Definition 3) and, for all $B\in{\mathcal{B}}$,
$\displaystyle\hat{\bm{f}}^{(B)}({\bm{z}}_{B})={\bm{f}}^{(\pi(B))}(\bar{\bm{v}}_{\pi(B)}({\bm{z}}_{B}))+{\bm{c}}^{(B)}\text{,
for all }{\bm{z}}_{B}\in\hat{\mathcal{Z}}^{\textnormal{train}}_{B}\,,$ (8)
where the functions $\bar{\bm{v}}_{\pi(B)}$ are from Definition 3 and the
vectors ${\bm{c}}^{(B)}\in{\mathbb{R}}^{d_{x}}$ are constants such that
$\sum_{B\in{\mathcal{B}}}{\bm{c}}^{(B)}=0$. We also have that the functions
$\bar{\bm{v}}_{\pi(B)}:\hat{\mathcal{Z}}_{B}^{\textnormal{train}}\rightarrow{\mathcal{Z}}_{\pi(B)}^{\textnormal{train}}$
are $C^{2}$-diffeomorphisms and have the following form:
$\displaystyle\bar{\bm{v}}_{\pi(B)}({\bm{z}}_{B})=({\bm{f}}^{(\pi(B))})^{-1}(\hat{\bm{f}}^{(B)}({\bm{z}}_{B})-{\bm{c}}^{(B)}),\
\text{for all\ }{\bm{z}}_{B}\in\hat{\mathcal{Z}}^{\textnormal{train}}_{B}\,.$
(9)
Equation (8) in the above result shows that each block-specific learned
decoder $\hat{\bm{f}}^{(B)}$ is “imitating” a block-specific ground-truth
decoder ${\bm{f}}^{(\pi(B))}$. Indeed, the “object-specific” image output by
the decoder $\hat{\bm{f}}^{(B)}$ evaluated at some
${\bm{z}}_{B}\in\hat{\mathcal{Z}}^{\textnormal{train}}_{B}$ is the same as the
image output by ${\bm{f}}^{(\pi(B))}$ evaluated at
$\bar{\bm{v}}_{\pi(B)}({\bm{z}}_{B})\in{\mathcal{Z}}^{\textnormal{train}}_{\pi(B)}$,
up to an additive constant vector ${\bm{c}}^{(B)}$. These constants cancel
each other out when summing the block-specific decoders.
Figure 2: Illustrating regularly closed sets (Definition 6) and path-connected
sets (Definition 8). Theorem 2 requires ${\mathcal{Z}}^{\textnormal{train}}$
to satisfy both properties.
Equation (9) provides an explicit form for the function
$\bar{\bm{v}}_{\pi(B)}$: the inverse of the ground-truth block-specific
decoder composed with the learned block-specific decoder (up to the constant
shift ${\bm{c}}^{(B)}$).
Additional assumptions to go from local to global. Assuming that the support
of ${\mathbb{P}}^{\textnormal{train}}_{\bm{z}}$,
${\mathcal{Z}}^{\textnormal{train}}$, is path-connected (see Definition 8 in
appendix) is useful since it prevents the permutation $\pi$ of Definition 4
from changing between two disconnected regions of
$\hat{\mathcal{Z}}^{\textnormal{train}}$. See Figure 2 for an illustration. In
Appendix A.8, we discuss the additional assumption that each ${\bm{f}}^{(B)}$
must be injective and show that, in general, it is not equivalent to the
assumption that $\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(B)}$ is injective.
#### 3.2 Cartesian-product extrapolation
Figure 3: Illustration of Definition 5.
In this section, we show how a learned additive decoder can be used to
generate images ${\bm{x}}$ that are “out of support” in the sense that
${\bm{x}}\not\in{\bm{f}}({\mathcal{Z}}^{\textnormal{train}})$, but that are
still on the manifold of “reasonable” images, i.e.
${\bm{x}}\in{\bm{f}}({\mathcal{Z}}^{\textnormal{test}})$.
To characterize the set of images the learned decoder can generate, we will
rely on the notion of “Cartesian-product extension”, which we define next.
###### Definition 5 (Cartesian-product extension).
Given a set ${\mathcal{Z}}\subseteq{\mathbb{R}}^{d_{z}}$ and partition
${\mathcal{B}}$ of $[d_{z}]$, we define the Cartesian-product extension of
${\mathcal{Z}}$ as
$\displaystyle\textnormal{CPE}_{\mathcal{B}}({\mathcal{Z}}):=\prod_{B\in{\mathcal{B}}}{\mathcal{Z}}_{B}\,,\text{where
${\mathcal{Z}}_{B}:=\\{{\bm{z}}_{B}\mid{\bm{z}}\in{\mathcal{Z}}\\}$.}$ (10)
The above definition is illustrated in Figure 3. The Cartesian-product
extension of ${\mathcal{Z}}$, $\text{CPE}_{\mathcal{B}}({\mathcal{Z}})$, is
indeed an extension of ${\mathcal{Z}}$ since ${\mathcal{Z}}$ is typically a
proper subset of $\prod_{B\in{\mathcal{B}}}{\mathcal{Z}}_{B}$.
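For a finite sample of latent points, Equation (10) can be implemented directly: project onto each block, then recombine. A minimal sketch (function names are ours):

```python
import itertools
import numpy as np

def cpe(Z, blocks):
    """Cartesian-product extension (Definition 5) of a finite set of latent
    points Z (n x d_z): project onto each block, then take the Cartesian
    product of the per-block projections."""
    projections = [Z[:, B] for B in blocks]  # Z_B = {z_B | z in Z}
    extended = []
    for combo in itertools.product(*projections):
        z = np.empty(Z.shape[1])
        for B, part in zip(blocks, combo):
            z[B] = part
        extended.append(z)
    return np.array(extended)

# The L-shaped set {(0,0), (1,0), (0,1)} extends to all 9 recombinations,
# including the never-observed corner (1,1):
Z = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(cpe(Z, [[0], [1]]).shape)  # (9, 2)
```

This makes concrete why the extension is typically strictly larger than ${\mathcal{Z}}$: any statistical dependence between blocks is erased by the product.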
Let us define
$\bar{\bm{v}}:\textnormal{CPE}_{\mathcal{B}}(\hat{\mathcal{Z}}^{\textnormal{train}})\rightarrow\textnormal{CPE}_{\mathcal{B}}({\mathcal{Z}}^{\textnormal{train}})$
to be the natural extension of the function
${\bm{v}}:\hat{\mathcal{Z}}^{\textnormal{train}}\rightarrow{\mathcal{Z}}^{\textnormal{train}}$.
More explicitly, $\bar{\bm{v}}$ is the “concatenation” of the functions
$\bar{\bm{v}}_{B}$ given in Definition 3:
$\displaystyle\bar{\bm{v}}({\bm{z}})^{\top}:=[\bar{\bm{v}}_{B_{1}}({\bm{z}}_{\pi^{-1}(B_{1})})^{\top}\cdots\bar{\bm{v}}_{B_{\ell}}({\bm{z}}_{\pi^{-1}(B_{\ell})})^{\top}]\,,$
(11)
where $\ell$ is the number of blocks in ${\mathcal{B}}$. This map is a
diffeomorphism because each $\bar{\bm{v}}_{\pi(B)}$ is a diffeomorphism from
$\hat{\mathcal{Z}}^{\textnormal{train}}_{B}$ to
${\mathcal{Z}}^{\textnormal{train}}_{\pi(B)}$ (Theorem 2).
We already know that
$\hat{\bm{f}}({\bm{z}})={\bm{f}}\circ\bar{\bm{v}}({\bm{z}})$ for all
${\bm{z}}\in\hat{\mathcal{Z}}^{\textnormal{train}}$. The following result
shows that this equality holds in fact on the larger set
$\textnormal{CPE}_{\mathcal{B}}(\hat{\mathcal{Z}}^{\textnormal{train}})$, the
Cartesian-product extension of $\hat{\mathcal{Z}}^{\textnormal{train}}$. See
right of Figure 1 for an illustration of the following corollary.
###### Corollary 3 (Cartesian-product extrapolation).
Suppose the assumptions of Theorem 2 hold. Then,
$\displaystyle\text{for all
${\bm{z}}\in\textnormal{CPE}_{\mathcal{B}}(\hat{\mathcal{Z}}^{\textnormal{train}})$,}\
\sigma(\sum_{B\in{\mathcal{B}}}\hat{\bm{f}}^{(B)}({\bm{z}}_{B}))=\sigma(\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(\pi(B))}(\bar{\bm{v}}_{\pi(B)}({\bm{z}}_{B})))\,.$
(12)
Furthermore, if
$\textnormal{CPE}_{\mathcal{B}}({\mathcal{Z}}^{\textnormal{train}})\subseteq{\mathcal{Z}}^{\textnormal{test}}$,
then
$\hat{\bm{f}}(\textnormal{CPE}_{\mathcal{B}}(\hat{\mathcal{Z}}^{\textnormal{train}}))\subseteq{\bm{f}}({\mathcal{Z}}^{\textnormal{test}})$.
Equation (12) tells us that the learned decoder $\hat{\bm{f}}$ “imitates” the
ground-truth ${\bm{f}}$ not just over
$\hat{\mathcal{Z}}^{\textnormal{train}}$, but also over its Cartesian-product
extension. This is important since it guarantees that we can generate
observations never seen during training as follows: Choose a latent vector
${\bm{z}}^{\text{new}}$ that is in the Cartesian-product extension of
$\hat{\mathcal{Z}}^{\textnormal{train}}$, but not in
$\hat{\mathcal{Z}}^{\textnormal{train}}$ itself, i.e.
${\bm{z}}^{\text{new}}\in\textnormal{CPE}_{\mathcal{B}}(\hat{\mathcal{Z}}^{\textnormal{train}})\setminus\hat{\mathcal{Z}}^{\textnormal{train}}$.
Then, evaluate the learned decoder on ${\bm{z}}^{\text{new}}$ to get
${\bm{x}}^{\text{new}}:=\hat{\bm{f}}({\bm{z}}^{\text{new}})$. By Corollary 3,
we know that
${\bm{x}}^{\text{new}}={\bm{f}}\circ\bar{\bm{v}}({\bm{z}}^{\text{new}})$, i.e.
it is the observation one would have obtained by evaluating the ground-truth
decoder ${\bm{f}}$ on the point
$\bar{\bm{v}}({\bm{z}}^{\text{new}})\in\textnormal{CPE}_{\mathcal{B}}({\mathcal{Z}}^{\textnormal{train}})$.
In addition, this ${\bm{x}}^{\text{new}}$ has never been seen during training
since
$\bar{\bm{v}}({\bm{z}}^{\text{new}})\not\in\bar{\bm{v}}(\hat{\mathcal{Z}}^{\textnormal{train}})={\mathcal{Z}}^{\textnormal{train}}$.
The experiment of Figure 4 illustrates this procedure.
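The procedure can be sketched as follows, with a toy additive decoder standing in for a trained $\hat{\bm{f}}$ (the functions and values below are illustrative, not the paper's model):

```python
import numpy as np

# Toy stand-in for a learned additive decoder f_hat(z) = f^(1)(z_1) + f^(2)(z_2);
# the specific component functions are illustrative assumptions.
def f_hat(z):
    z1, z2 = z
    return np.array([z1, z1 ** 2]) + np.array([z2, z2 ** 3])

z_a = np.array([0.9, 0.1])  # training latent: ball 1 up, ball 2 down
z_b = np.array([0.1, 0.8])  # training latent: ball 1 down, ball 2 up
# Recombine blocks across samples: z_new lies in the Cartesian-product
# extension of the training support but not in the support itself.
z_new = np.array([z_a[0], z_b[1]])
x_new = f_hat(z_new)  # a novel observation, never seen during training
```

By Corollary 3, such an `x_new` agrees with the ground-truth decoder evaluated at $\bar{\bm{v}}({\bm{z}}^{\text{new}})$, so the recombined image is faithful rather than an arbitrary hallucination.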
About the extra assumption
“$\textnormal{CPE}_{\mathcal{B}}({\mathcal{Z}}^{\textnormal{train}})\subseteq{\mathcal{Z}}^{\textnormal{test}}$”.
Recall that, in Assumption 1, we interpreted
${\bm{f}}({\mathcal{Z}}^{\textnormal{test}})$ to be the set of “reasonable”
observations ${\bm{x}}$, of which we only observe a subset
${\bm{f}}({\mathcal{Z}}^{\textnormal{train}})$. Under this interpretation,
${\mathcal{Z}}^{\textnormal{test}}$ is the set of reasonable values for the
vector ${\bm{z}}$ and the additional assumption that
$\textnormal{CPE}_{\mathcal{B}}({\mathcal{Z}}^{\textnormal{train}})\subseteq{\mathcal{Z}}^{\textnormal{test}}$
in Corollary 3 requires that the Cartesian-product extension of
${\mathcal{Z}}^{\textnormal{train}}$ consists only of reasonable values of
${\bm{z}}$. From this assumption, we can easily conclude that
$\hat{\bm{f}}(\textnormal{CPE}_{\mathcal{B}}(\hat{\mathcal{Z}}^{\textnormal{train}}))\subseteq{\bm{f}}({\mathcal{Z}}^{\textnormal{test}})$,
which can be interpreted as: “The novel observations ${\bm{x}}^{\text{new}}$
obtained via Cartesian-product extrapolation are reasonable”. Appendix A.10
describes an example where the assumption is violated, i.e.
$\textnormal{CPE}_{\mathcal{B}}({\mathcal{Z}}^{\textnormal{train}})\not\subseteq{\mathcal{Z}}^{\textnormal{test}}$.
The practical implication of this is that the new observations
${\bm{x}}^{\text{new}}$ obtained via Cartesian-product extrapolation might not
always be reasonable.
Disentanglement is not enough for extrapolation. To the best of our knowledge,
Corollary 3 is the first result that formalizes how disentanglement can induce
extrapolation. We believe it illustrates the fact that disentanglement alone
is not sufficient to enable extrapolation and that one needs to restrict the
hypothesis class of decoders in some way. Indeed, given a learned decoder
$\hat{\bm{f}}$ that is disentangled w.r.t. ${\bm{f}}$ on the training support
${\mathcal{Z}}^{\textnormal{train}}$, one cannot guarantee both decoders will
“agree” outside the training domain without further restricting $\hat{\bm{f}}$
and ${\bm{f}}$. This work has focused on “additivity”, but we believe other
types of restriction could correspond to other types of extrapolation.
| Decoders | ScalarLatents: RMSE | ScalarLatents: $\text{LMS}_{\text{Spear}}$ | ScalarLatents: $\text{RMSE}^{\text{OOS}}$ | ScalarLatents: $\text{LMS}_{\text{Spear}}^{\text{OOS}}$ | BlockLatents (independent ${\bm{z}}$): RMSE | BlockLatents (independent ${\bm{z}}$): $\text{LMS}_{\text{Tree}}$ | BlockLatents (dependent ${\bm{z}}$): RMSE | BlockLatents (dependent ${\bm{z}}$): $\text{LMS}_{\text{Tree}}$ |
|---|---|---|---|---|---|---|---|---|
| Non-add. | .01$\pm$.001 | 84.4$\pm$14.23 | .11$\pm$.06 | 82.3$\pm$.07 | .03$\pm$.01 | 70.2$\pm$17.5 | .02$\pm$.005 | 87.4$\pm$6.7 |
| Additive | .01$\pm$.002 | 96.2$\pm$11.4 | .02$\pm$.008 | 94.5$\pm$14.7 | .02$\pm$.005 | 91.9$\pm$12.5 | .02$\pm$.003 | 99.4$\pm$1.4 |
Table 1: Reporting reconstruction mean squared error (RMSE $\downarrow$) and
the Latent Matching Score (LMS $\uparrow$) for the three datasets considered:
ScalarLatents and BlockLatents with independent and dependent latents. Runs
were repeated with 10 random initializations. $\text{RMSE}^{\text{OOS}}$ and
$\text{LMS}_{\text{Spear}}^{\text{OOS}}$ are the same metrics but evaluated
out of support (see Appendix B.3 for details). While the standard deviation is
high, the differences are still clear as can be seen in their box plot version
in Appendix B.4.
### 4 Experiments
We now present empirical validations of the theoretical results presented
earlier. To achieve this, we compare the ability of additive and non-additive
decoders to both identify ground-truth latent factors (Theorems 1 & 2) and
extrapolate (Corollary 3) when trained to solve the reconstruction task on
simple images ($64\times 64\times 3$) consisting of two balls moving in space
[2]. See Appendix B.1 for training details. We consider two datasets: one
where the two ball positions can only vary along the $y$-axis (ScalarLatents)
and one where the positions can vary along both the $x$ and $y$ axes
(BlockLatents).
ScalarLatents: The ground-truth latent vector ${\bm{z}}\in{\mathbb{R}}^{2}$ is
such that ${\bm{z}}_{1}$ and ${\bm{z}}_{2}$ correspond to the height
(y-coordinate) of the first and second ball, respectively. Thus the partition
is simply ${\mathcal{B}}=\\{\\{1\\},\\{2\\}\\}$ (each object has only one
latent factor). This simple setting is interesting to study since the low
dimensionality of the latent space ($d_{z}=2$) allows for exhaustive
visualizations like Figure 4. To study Cartesian-product extrapolation
(Corollary 3), we sample the latent factor ${\bm{z}}$ uniformly from the
L-shaped support given by
${\mathcal{Z}}^{\textnormal{train}}:=([0,1]\times[0,1])\setminus([0.5,1]\times[0.5,1])$.
This means the training set does not contain images where both balls appear in
the upper half of the image.
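One way to generate such a training set is rejection sampling (a sketch under our assumptions; the paper's generation code may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_L(n):
    """Rejection-sample n latents uniformly from the L-shaped support
    [0,1]^2 minus the upper-right quadrant [0.5,1]x[0.5,1]."""
    out = []
    while len(out) < n:
        z = rng.uniform(0.0, 1.0, size=2)
        if not (z[0] > 0.5 and z[1] > 0.5):  # reject: both balls in upper half
            out.append(z)
    return np.array(out)

Z = sample_L(1000)
print(Z.shape)  # (1000, 2)
```

The rejected quadrant is exactly the region probed at test time via Cartesian-product extrapolation.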
BlockLatents: The ground-truth latent vector ${\bm{z}}\in{\mathbb{R}}^{4}$ is
such that ${\bm{z}}_{\\{1,2\\}}$ and ${\bm{z}}_{\\{3,4\\}}$ correspond to the
$xy$ position of the first and second ball, respectively (the partition is
simply ${\mathcal{B}}=\\{\\{1,2\\},\\{3,4\\}\\}$, i.e. each object has two
latent factors). Thus, this more challenging setting illustrates “block-
disentanglement”. The latent ${\bm{z}}$ is sampled uniformly from the
hypercube $[0,1]^{4}$ but the images presenting occlusion (when a ball is
behind another) are rejected from the dataset. We discuss how additive
decoders cannot model images presenting occlusion in Appendix A.11. We also
present an additional version of this dataset where we sample from the
hypercube $[0,1]^{4}$ with dependencies. See Appendix B.2 for more details
about data generation.
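A sketch of the occlusion-rejection step, with a hypothetical ball radius `r` (the paper's exact occlusion criterion may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
r = 0.1  # assumed ball radius (illustrative, not the paper's value)

def sample_block_latents(n):
    """Sample z ~ Uniform[0,1]^4, rejecting occluding scenes where the
    two balls (centers z_{1,2} and z_{3,4}) would overlap."""
    out = []
    while len(out) < n:
        z = rng.uniform(0.0, 1.0, size=4)
        if np.linalg.norm(z[:2] - z[2:]) > 2 * r:  # keep non-occluding scenes
            out.append(z)
    return np.array(out)

Z = sample_block_latents(500)
print(Z.shape)  # (500, 4)
```

Rejecting occlusions matters because an additive decoder cannot represent one object hiding another (Appendix A.11): pixel values would no longer decompose as a sum over objects.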
Evaluation metrics: To evaluate disentanglement, we compute a matrix of scores
$(s_{B,B^{\prime}})\in{\mathbb{R}}^{\ell\times\ell}$ where $\ell$ is the
number of blocks in ${\mathcal{B}}$ and $s_{B,B^{\prime}}$ is a score
measuring how well we can predict the ground-truth block ${\bm{z}}_{B}$ from
the learned latent block
$\hat{\bm{z}}_{B^{\prime}}=\hat{\bm{g}}_{B^{\prime}}({\bm{x}})$ outputted by
the encoder. The final Latent Matching Score (LMS) is computed as
$\textnormal{LMS}=\max_{\pi\in\mathfrak{S}_{\mathcal{B}}}\frac{1}{\ell}\sum_{B\in{\mathcal{B}}}s_{B,\pi(B)}$,
where $\mathfrak{S}_{\mathcal{B}}$ is the set of permutations respecting
${\mathcal{B}}$ (Definition 2). When
${\mathcal{B}}:=\\{\\{1\\},\dots,\\{d_{z}\\}\\}$ and the score used is the
absolute value of the correlation, LMS is simply the mean correlation
coefficient (MCC), which is widely used in the nonlinear ICA literature [21,
22, 23, 25, 30]. Because our theory guarantees recovery of the latents only up
to invertible and potentially nonlinear transformations, we use the Spearman
correlation, which can capture nonlinear relationships unlike the Pearson
correlation. We denote this score by $\text{LMS}_{\text{Spear}}$ and will use
it in the dataset ScalarLatents. For the BlockLatents dataset, we cannot use
Spearman correlation (because the blocks ${\bm{z}}_{B}$ are two-dimensional).
Instead, we take the score $s_{B,B^{\prime}}$ to be the $R^{2}$ score of a
regression tree. We denote this score by $\text{LMS}_{\text{Tree}}$. There are
subtleties
to take care of when one wants to evaluate $\text{LMS}_{\text{tree}}$ on a
non-additive model due to the fact that the learned representation does not
have a natural partition ${\mathcal{B}}$. We must thus search over partitions.
We discuss this and provide further details on the metrics in Appendix B.3.
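For singleton blocks, $\text{LMS}_{\text{Spear}}$ can be sketched in a few lines (our own minimal implementation; Appendix B.3 has the authoritative details):

```python
import itertools
import numpy as np

def _spearman(a, b):
    # Spearman correlation = Pearson correlation of the ranks
    # (ties are ignored, which is fine for continuous latents).
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

def lms_spearman(z_true, z_hat):
    """LMS for singleton blocks: s[i, j] = |Spearman(z_i, z_hat_j)|,
    maximized over permutations of the latent dimensions."""
    d = z_true.shape[1]
    s = np.abs([[_spearman(z_true[:, i], z_hat[:, j]) for j in range(d)]
                for i in range(d)])
    return max(np.mean([s[i, pi[i]] for i in range(d)])
               for pi in itertools.permutations(range(d)))
```

Because Spearman correlation is invariant to strictly monotone transformations, a representation matching the ground truth only up to coordinate-wise nonlinear reparametrization still scores 1, consistent with the "up to diffeomorphism" guarantee of the theory.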
#### 4.1 Results
(a) Additive decoder
(b) Non-additive decoder
Figure 4: Figure (a) shows the learned latent space,
$\hat{\mathcal{Z}}^{\textnormal{train}}$, and the corresponding reconstructed
images of the additive decoder with median $\text{LMS}_{\text{Spear}}$ among
runs performed on the ScalarLatents dataset. Figure (b) shows the same thing
for the non-additive decoder. The red dots correspond to latent factors used
to generate the images and the red square highlights extrapolated images.
(a) Additive Decoder
(b) Non-Additive Decoder
Figure 5: Latent responses for the case of independent latents in the
BlockLatent dataset. In each plot, we report the latent factors predicted from
multiple images where one ball moves along only one axis at a time. For the
additive case, at most two latents change, as expected, while more than two
latents change for the non-additive case. See Appendix B.5 for details.
Additivity is important for disentanglement. Table 1 shows that the additive
decoder obtains much higher LMS scores than its non-additive counterpart on
all three datasets considered, even though both decoders
have very small reconstruction errors. This is corroborated by the
visualizations of Figures 4 & 5. Appendix B.5 additionally shows object-
specific reconstructions for the BlockLatents dataset. We emphasize that
disentanglement is possible even when the latent factors are dependent (or
causally related), as shown on the ScalarLatents dataset (L-shaped support
implies dependencies) and on the BlockLatents dataset with dependencies (Table
1). Note that prior works have relied on interventions [3, 2, 5] or Cartesian-
product supports [48, 44] to deal with dependencies.
Additivity is important for Cartesian-product extrapolation. Figure 4
illustrates that the additive decoder can generate images that are outside the
training domain (both balls in the upper half of the image) while its
non-additive counterpart cannot. Furthermore, Table 1 corroborates this,
showing that the “out-of-support” (OOS) reconstruction MSE and
$\text{LMS}_{\text{Spear}}$ (both evaluated only on samples never seen during
training) are significantly better for the additive decoder than for the
non-additive one.
Importance of connected support. Theorem 2 requires the support of the latent
factors, ${\mathcal{Z}}^{\textnormal{train}}$, to be path-connected.
Appendix B.6 shows experiments where this assumption is violated, which yields
lower $\text{LMS}_{\text{Spear}}$ for the additive decoder, thus highlighting
the importance of this assumption.
### 5 Conclusion
We provided an in-depth identifiability analysis of additive decoders, which
bears resemblance to standard decoders used in OCRL, and introduced a novel
theoretical framework showing how this architecture can generate reasonable
images never seen during training via “Cartesian-product extrapolation”. We
validated empirically both of these results and confirmed that additivity was
indeed crucial. By studying rigorously how disentanglement can induce
extrapolation, our work highlighted the necessity of restricting the decoder
to extrapolate and set the stage for future works to explore disentanglement
and extrapolation in other function classes such as masked decoders typically
used in OCRL. We believe this line of work has the potential to expand our
understanding of creativity in generative models, ultimately resulting in
representations that generalize better.
### Acknowledgements
This research was partially supported by the Canada CIFAR AI Chair Program, by
an IVADO excellence PhD scholarship and by Samsung Electronics Co., Ltd. The
experiments were in part enabled by computational resources provided by
Calcul Québec (calculquebec.ca) and the Digital Research Alliance of Canada
(alliancecan.ca). Simon Lacoste-Julien is a CIFAR Associate Fellow in the
Learning in Machines & Brains program.
### References
* Ahuja et al. [2022a] K. Ahuja, J. Hartford, and Y. Bengio. Properties from mechanisms: an equivariance perspective on identifiable representation learning. In _International Conference on Learning Representations_ , 2022a.
* Ahuja et al. [2022b] K. Ahuja, J. Hartford, and Y. Bengio. Weakly supervised representation learning with sparse perturbations, 2022b.
* Ahuja et al. [2022c] K. Ahuja, Y. Wang, D. Mahajan, and Y. Bengio. Interventional causal representation learning. _arXiv preprint arXiv:2209.11924_ , 2022c.
* Bengio et al. [2013] Y. Bengio, A. Courville, and P. Vincent. Representation learning: A review and new perspectives. _IEEE transactions on pattern analysis and machine intelligence_ , 2013.
* Brehmer et al. [2022] J. Brehmer, P. De Haan, P. Lippe, and T. Cohen. Weakly supervised causal representation learning. In _Advances in Neural Information Processing Systems_ , 2022.
* Buchholz et al. [2022] S. Buchholz, M. Besserve, and B. Schölkopf. Function classes for identifiable nonlinear independent component analysis. In _Advances in Neural Information Processing Systems_ , 2022.
* Burgess et al. [2019] C. P. Burgess, L. Matthey, N. Watters, R. Kabra, I. Higgins, M. Botvinick, and A. Lerchner. Monet: Unsupervised scene decomposition and representation, 2019.
* Crawford and Pineau [2019] E. Crawford and J. Pineau. Spatially invariant unsupervised object detection with convolutional neural networks. _Proceedings of the AAAI Conference on Artificial Intelligence_ , 2019\.
* d’Avila Garcez and Lamb [2020] A. S. d’Avila Garcez and L. Lamb. Neurosymbolic AI: The 3rd wave. _ArXiv_ , abs/2012.05876, 2020.
* Dittadi et al. [2022] A. Dittadi, S. S. Papa, M. De Vita, B. Schölkopf, O. Winther, and F. Locatello. Generalization and robustness implications in object-centric learning. In _Proceedings of the 39th International Conference on Machine Learning_ , 2022.
* Engelcke et al. [2020] M. Engelcke, A. R. Kosiorek, O. P. Jones, and I. Posner. Genesis: Generative scene inference and sampling with object-centric latent representations. In _International Conference on Learning Representations_ , 2020.
* Eslami et al. [2016] S. M. A. Eslami, N. Heess, T. Weber, Y. Tassa, D. Szepesvari, K. Kavukcuoglu, and G. E. Hinton. Attend, infer, repeat: Fast scene understanding with generative models. In _Advances in Neural Information Processing Systems_ , 2016.
* Fodor and Pylyshyn [1988] J. A. Fodor and Z. W. Pylyshyn. Connectionism and cognitive architecture: A critical analysis. _Cognition_ , 1988.
* Goyal and Bengio [2022] A. Goyal and Y. Bengio. Inductive biases for deep learning of higher-level cognition. _Proc. R. Soc. A 478: 20210068_ , 2022.
* Greff et al. [2016] K. Greff, A. Rasmus, M. Berglund, T. Hao, H. Valpola, and J. Schmidhuber. Tagger: Deep unsupervised perceptual grouping. In _Advances in Neural Information Processing Systems_ , 2016.
* Greff et al. [2017] K. Greff, S. van Steenkiste, and J. Schmidhuber. Neural expectation maximization. In _Advances in Neural Information Processing Systems_ , 2017.
* Greff et al. [2019] K. Greff, R. L. Kaufman, R. Kabra, N. Watters, C. Burgess, D. Zoran, L. Matthey, M. Botvinick, and A. Lerchner. Multi-object representation learning with iterative variational inference. In _Proceedings of the 36th International Conference on Machine Learning_ , 2019.
* Greff et al. [2020] K. Greff, S. van Steenkiste, and J. Schmidhuber. On the binding problem in artificial neural networks. _ArXiv_ , abs/2012.05208, 2020.
* Gresele et al. [2021] L. Gresele, J. V. Kügelgen, V. Stimper, B. Schölkopf, and M. Besserve. Independent mechanism analysis, a new concept? In _Advances in Neural Information Processing Systems_ , 2021.
* Harnad [1990] S. Harnad. The symbol grounding problem. _Physica D: Nonlinear Phenomena_ , 1990.
* Hyvärinen and Morioka [2016] A. Hyvärinen and H. Morioka. Unsupervised feature extraction by time-contrastive learning and nonlinear ica. In _Advances in Neural Information Processing Systems_ , 2016.
* Hyvärinen and Morioka [2017] A. Hyvärinen and H. Morioka. Nonlinear ICA of Temporally Dependent Stationary Sources. In _Proceedings of the 20th International Conference on Artificial Intelligence and Statistics_ , 2017.
* Hyvärinen et al. [2019] A. Hyvärinen, H. Sasaki, and R. E. Turner. Nonlinear ica using auxiliary variables and generalized contrastive learning. In _AISTATS_. PMLR, 2019.
* Johnson et al. [2016] J. Johnson, B. Hariharan, L. van der Maaten, L. Fei-Fei, C. L. Zitnick, and R. B. Girshick. Clevr: A diagnostic dataset for compositional language and elementary visual reasoning. _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2016.
* Khemakhem et al. [2020a] I. Khemakhem, D. Kingma, R. Monti, and A. Hyvärinen. Variational autoencoders and nonlinear ica: A unifying framework. In _Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics_ , 2020a.
* Khemakhem et al. [2020b] I. Khemakhem, R. Monti, D. Kingma, and A. Hyvärinen. Ice-beem: Identifiable conditional energy-based deep models based on nonlinear ica. In _Advances in Neural Information Processing Systems_ , 2020b.
* Klindt et al. [2021] D. A. Klindt, L. Schott, Y. Sharma, I. Ustyuzhaninov, W. Brendel, M. Bethge, and D. M. Paiton. Towards nonlinear disentanglement in natural data with temporal sparse coding. In _9th International Conference on Learning Representations_ , 2021\.
* Lachapelle and Lacoste-Julien [2022] S. Lachapelle and S. Lacoste-Julien. Partial disentanglement via mechanism sparsity. In _UAI 2022 Workshop on Causal Representation Learning_ , 2022.
* Lachapelle et al. [2022a] S. Lachapelle, T. Deleu, D. Mahajan, I. Mitliagkas, Y. Bengio, S. Lacoste-Julien, and Q. Bertrand. Synergies between disentanglement and sparsity: a multi-task learning perspective, 2022a.
* Lachapelle et al. [2022b] S. Lachapelle, P. Rodriguez Lopez, Y. Sharma, K. E. Everett, R. Le Priol, A. Lacoste, and S. Lacoste-Julien. Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ICA. In _First Conference on Causal Learning and Reasoning_ , 2022b.
* Lake et al. [2017] B. M. Lake, T. D. Ullman, J. B. Tenenbaum, and S. J. Gershman. Building machines that learn and think like people. _Behavioral and Brain Sciences_ , 2017.
* Lin et al. [2020] Z. Lin, Y. Wu, S. V. Peri, W. Sun, G. Singh, F. Deng, J. Jiang, and S. Ahn. Space: Unsupervised object-oriented scene representation via spatial attention and decomposition. In _International Conference on Learning Representations_ , 2020.
* Lippe et al. [2022] P. Lippe, S. Magliacane, S. Löwe, Y. M. Asano, T. Cohen, and E. Gavves. CITRIS: Causal identifiability from temporal intervened sequences, 2022\.
* Locatello et al. [2019] F. Locatello, S. Bauer, M. Lucic, G. Raetsch, S. Gelly, B. Schölkopf, and O. Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. In _Proceedings of the 36th International Conference on Machine Learning_ , 2019.
* Locatello et al. [2020a] F. Locatello, B. Poole, G. Raetsch, B. Schölkopf, O. Bachem, and M. Tschannen. Weakly-supervised disentanglement without compromises. In _Proceedings of the 37th International Conference on Machine Learning_ , 2020a.
* Locatello et al. [2020b] F. Locatello, M. Tschannen, S. Bauer, G. Rätsch, B. Schölkopf, and O. Bachem. Disentangling factors of variations using few labels. In _International Conference on Learning Representations_ , 2020b.
* Locatello et al. [2020c] F. Locatello, D. Weissenborn, T. Unterthiner, A. Mahendran, G. Heigold, J. Uszkoreit, A. Dosovitskiy, and T. Kipf. Object-centric learning with slot attention. In _Advances in Neural Information Processing Systems_ , 2020c.
* Marcus [2001] G. F. Marcus. The algebraic mind : integrating connectionism and cognitive science, 2001\.
* Munkres [2000] J. R. Munkres. _Topology_. Prentice Hall, Inc., 2 edition, 2000.
* Pearl [2019] J. Pearl. The seven tools of causal inference, with reflections on machine learning. _Commun. ACM_ , 2019.
* Peebles et al. [2020] W. Peebles, J. Peebles, J.-Y. Zhu, A. A. Efros, and A. Torralba. The hessian penalty: A weak prior for unsupervised disentanglement. In _Proceedings of European Conference on Computer Vision (ECCV)_ , 2020.
* Ramesh et al. [2022] A. Ramesh, P. Dhariwal, A. Nichol, C. Chu, and M. Chen. Hierarchical text-conditional image generation with clip latents. _arXiv preprint arXiv:2204.06125_ , 2022.
* Rombach et al. [2022] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer. High-resolution image synthesis with latent diffusion models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2022.
* Roth et al. [2023] K. Roth, M. Ibrahim, Z. Akata, P. Vincent, and D. Bouchacourt. Disentanglement of correlated factors via hausdorff factorized support. In _The Eleventh International Conference on Learning Representations_ , 2023.
* Schölkopf et al. [2021] B. Schölkopf, F. Locatello, S. Bauer, N. R. Ke, N. Kalchbrenner, A. Goyal, and Y. Bengio. Toward causal representation learning. _Proceedings of the IEEE - Advances in Machine Learning and Deep Neural Networks_ , 2021.
* Taleb and Jutten [1999] A. Taleb and C. Jutten. Source separation in post-nonlinear mixtures. _IEEE Transactions on Signal Processing_ , 1999.
* Von Kügelgen et al. [2021] J. Von Kügelgen, Y. Sharma, L. Gresele, W. Brendel, B. Schölkopf, M. Besserve, and F. Locatello. Self-supervised learning with data augmentations provably isolates content from style. In _Thirty-Fifth Conference on Neural Information Processing Systems_ , 2021.
* Wang and Jordan [2022] Y. Wang and M. I. Jordan. Desiderata for representation learning: A causal perspective, 2022.
* Zheng et al. [2022] Y. Zheng, I. Ng, and K. Zhang. On the identifiability of nonlinear ICA: Sparsity and beyond. In _Advances in Neural Information Processing Systems_ , 2022.
## Appendix
Table 2: Table of Notation.
Calligraphic & indexing conventions
---
$[n]$ | $:=$ | $\\{1,2,\dots,n\\}$
$x$ | | Scalar (random or not, depending on context)
${\bm{x}}$ | | Vector (random or not, depending on context)
${\bm{X}}$ | | Matrix
${\mathcal{X}}$ | | Set/Support
$f$ | | Scalar-valued function
${\bm{f}}$ | | Vector-valued function
$Df$, $D{\bm{f}}$ | | Jacobian of $f$ and ${\bm{f}}$
$D^{2}f$ | | Hessian of $f$
$B\subseteq[n]$ | | Subset of indices
$|B|$ | | Cardinality of the set $B$
${\bm{x}}_{B}$ | | Vector formed with the coordinates ${\bm{x}}_{i}$ of ${\bm{x}}$, for all $i\in B$
${\bm{X}}_{B,B^{\prime}}$ | | Matrix formed with the entries $(i,j)\in B\times B^{\prime}$ of ${\bm{X}}$.
Given ${\mathcal{X}}\subseteq{\mathbb{R}}^{n}$, ${\mathcal{X}}_{B}$ | $:=$ | $\\{{\bm{x}}_{B}\mid{\bm{x}}\in{\mathcal{X}}\\}$ (projection of ${\mathcal{X}}$)
Recurrent notation
${\bm{x}}\in{\mathbb{R}}^{d_{x}}$ | | Observation
${\bm{z}}\in{\mathbb{R}}^{d_{z}}$ | | Vector of latent factors of variations
${\mathcal{Z}}\subseteq{\mathbb{R}}^{d_{z}}$ | | Support of ${\bm{z}}$
${\bm{f}}:{\mathbb{R}}^{d_{z}}\rightarrow{\mathbb{R}}^{d_{x}}$ | | Ground-truth decoder function
$\tilde{\bm{f}}:{\mathbb{R}}^{d_{z}}\rightarrow{\mathbb{R}}^{d_{x}}$ | | Learned decoder function
${\mathcal{B}}$ | | A partition of $[d_{z}]$ (assumed contiguous w.l.o.g.)
$B\in{\mathcal{B}}$ | | A block of the partition ${\mathcal{B}}$
$B(i)\in{\mathcal{B}}$ | | The unique block of ${\mathcal{B}}$ that contains $i$
$\pi:[d_{z}]\rightarrow[d_{z}]$ | | A permutation
$S_{\mathcal{B}}$ | $:=$ | $\bigcup_{B\in{\mathcal{B}}}B^{2}$
$S_{\mathcal{B}}^{c}$ | $:=$ | $[d_{z}]^{2}\setminus S_{{\mathcal{B}}}$
${\mathbb{R}}^{d_{z}\times d_{z}}_{S_{\mathcal{B}}}$ | $:=$ | $\\{{\bm{M}}\in{\mathbb{R}}^{d_{z}\times d_{z}}\mid(i,j)\not\in S_{\mathcal{B}}\implies{\bm{M}}_{i,j}=0\\}$
General topology
$\overline{{\mathcal{X}}}$ | | Closure of the subset ${\mathcal{X}}\subseteq{\mathbb{R}}^{n}$
${\mathcal{X}}^{\circ}$ | | Interior of the subset ${\mathcal{X}}\subseteq{\mathbb{R}}^{n}$
### Appendix A Identifiability and Extrapolation Analysis
#### A.1 Useful definitions and lemmas
###### Definition 6 (Regularly closed sets).
A set ${\mathcal{Z}}\subseteq{\mathbb{R}}^{d_{z}}$ is regularly closed if
${\mathcal{Z}}=\overline{{\mathcal{Z}}^{\circ}}$, i.e. if it is equal to the
closure of its interior.
###### Definition 7 (Connected sets).
A set ${\mathcal{Z}}\subseteq{\mathbb{R}}^{d_{z}}$ is connected if it cannot
be written as a union of two non-empty, disjoint open sets (in the subspace
topology).
###### Definition 8 (Path-connected sets).
A set ${\mathcal{Z}}\subseteq{\mathbb{R}}^{d_{z}}$ is path-connected if, for
every pair of points ${\bm{z}}^{0},{\bm{z}}^{1}\in{\mathcal{Z}}$, there exists a
continuous map $\bm{{\phi}}:[0,1]\rightarrow{\mathcal{Z}}$ such that
$\bm{\phi}(0)={\bm{z}}^{0}$ and $\bm{\phi}(1)={\bm{z}}^{1}$. Such a map is
called a path between ${\bm{z}}^{0}$ and ${\bm{z}}^{1}$.
This lemma is taken from [30].
###### Lemma 4 (Sparsity pattern of an invertible matrix contains a
permutation).
Let ${\bm{L}}\in{\mathbb{R}}^{m\times m}$ be an invertible matrix. Then, there
exists a permutation $\pi$ such that ${\bm{L}}_{i,\pi(i)}\not=0$ for all
$i\in[m]$.
###### Proof.
Since the matrix ${\bm{L}}$ is invertible, its determinant is non-zero, i.e.
$\displaystyle\det({\bm{L}}):=\sum_{\pi\in\mathfrak{S}_{m}}\text{sign}(\pi)\prod_{i=1}^{m}{\bm{L}}_{i,\pi(i)}\neq
0\,,$ (13)
where $\mathfrak{S}_{m}$ is the set of $m$-permutations. This equation implies
that at least one term of the sum is non-zero, meaning there exists
$\pi\in\mathfrak{S}_{m}$ such that for all $i\in[m]$, ${\bm{L}}_{i,\pi(i)}\neq
0$. ∎
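The argument above can be illustrated numerically: since the determinant is a sum over permutations, a brute-force search over $\mathfrak{S}_{m}$ must find a permutation supported on the non-zero entries of any invertible matrix. A minimal Python sketch (the matrix `L` below is an arbitrary example, not from the paper):

```python
import itertools

import numpy as np

def support_permutation(L):
    """Search for a permutation pi with L[i, pi(i)] != 0 for all i (Lemma 4)."""
    m = L.shape[0]
    for pi in itertools.permutations(range(m)):
        if all(L[i, pi[i]] != 0 for i in range(m)):
            return pi
    return None  # unreachable when L is invertible, by Lemma 4

# An invertible matrix with many zeros: its sparsity pattern still
# contains a permutation, as guaranteed by the non-zero determinant.
L = np.array([[0.0, 2.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 3.0]])
assert abs(np.linalg.det(L)) > 0
pi = support_permutation(L)
assert pi is not None
```

Here the search returns the permutation $0\mapsto 1$, $1\mapsto 0$, $2\mapsto 2$, matching the only non-zero term in the determinant expansion.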
###### Definition 9 (Aligned subspaces of ${\mathbb{R}}^{m\times n}$).
Given a subset $S\subseteq\\{1,...,m\\}\times\\{1,...,n\\}$, we define
$\displaystyle{\mathbb{R}}^{m\times
n}_{S}:=\\{{\bm{M}}\in{\mathbb{R}}^{m\times n}\mid(i,j)\not\in
S\implies{\bm{M}}_{i,j}=0\\}\,.$ (14)
###### Definition 10 (Useful sets).
Given a partition ${\mathcal{B}}$ of $[d_{z}]$, we define
$\displaystyle S_{\mathcal{B}}:=\bigcup_{B\in{\mathcal{B}}}B^{2}\ \ \ \ \
S_{\mathcal{B}}^{c}:=\\{1,\dots,d_{z}\\}^{2}\setminus S_{\mathcal{B}}$ (15)
###### Definition 11 ($C^{k}$-diffeomorphism).
Let ${\mathcal{Z}}\subseteq{\mathbb{R}}^{d_{z}}$ and
${\mathcal{X}}\subseteq{\mathbb{R}}^{d_{x}}$. A map
${\bm{f}}:{\mathcal{Z}}\rightarrow{\mathcal{X}}$ is said to be a
$C^{k}$-diffeomorphism if it is bijective, $C^{k}$ and has a $C^{k}$ inverse.
###### Definition 12 ($C^{k}$-diffeomorphism onto its image).
Let ${\mathcal{Z}}\subseteq{\mathbb{R}}^{d_{z}}$. A map
${\bm{f}}:{\mathcal{Z}}\rightarrow{\mathbb{R}}^{d_{x}}$ is said to be a
$C^{k}$-diffeomorphism onto its image if the restriction of ${\bm{f}}$ to its
image, i.e. $\tilde{\bm{f}}:{\mathcal{Z}}\rightarrow{\bm{f}}({\mathcal{Z}})$,
is a $C^{k}$-diffeomorphism.
Note: Differentiability is typically defined for functions that have an open
domain in ${\mathbb{R}}^{n}$. However, in the definition above, the set
${\mathcal{Z}}$ might not be open in ${\mathbb{R}}^{d_{z}}$ and
${\bm{f}}({\mathcal{Z}})$ might not be open in ${\mathbb{R}}^{d_{x}}$
(${\bm{f}}({\mathcal{Z}})$ is the domain of ${\bm{f}}^{-1}$). In the case of
an arbitrary domain $A$, it is customary to say that a function
${\bm{f}}:A\subseteq{\mathbb{R}}^{n}\rightarrow{\mathbb{R}}^{m}$ is $C^{k}$ if
there exists a $C^{k}$ function ${\bm{g}}$ defined on an open set
$U\subseteq{\mathbb{R}}^{n}$ that contains $A$ such that
${\bm{g}}\big{|}_{A}={\bm{f}}$ (i.e. ${\bm{g}}$ extends ${\bm{f}}$).
#### A.2 Relationship between additive decoders and the diagonal Hessian
penalty
###### Proposition 5 (Equivalence between additivity and diagonal Hessian).
Let ${\bm{f}}:{\mathbb{R}}^{d_{z}}\rightarrow{\mathbb{R}}^{d_{x}}$ be a
$C^{2}$ function. Then,
${\bm{f}}({\bm{z}})=\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(B)}({\bm{z}}_{B})\text{
with ${\bm{f}}^{(B)}$ being $C^{2}$}\iff\begin{array}[]{l}\forall
k\in[d_{x}],\ {\bm{z}}\in{\mathcal{Z}},\ \ D^{2}{\bm{f}}_{k}({\bm{z}})\text{
is }\\\ \text{block diagonal with blocks in ${\mathcal{B}}$}.\end{array}$ (16)
###### Proof.
We start by showing the “$\implies$” direction. Let $B$ and $B^{\prime}$ be
two distinct blocks of ${\mathcal{B}}$. Let $i\in B$ and $i^{\prime}\in
B^{\prime}$. We can compute the derivative of ${\bm{f}}_{k}$ w.r.t.
${\bm{z}}_{i}$:
$\displaystyle
D_{i}{\bm{f}}_{k}({\bm{z}})=\sum_{\bar{B}\in{\mathcal{B}}}D_{i}{\bm{f}}_{k}^{(\bar{B})}({\bm{z}}_{\bar{B}})=D_{i}{\bm{f}}_{k}^{(B)}({\bm{z}}_{B})\,,$
(17)
where the last equality holds because $i\in B$ and not in any other block
$\bar{B}$. Furthermore,
$\displaystyle
D^{2}_{i,i^{\prime}}{\bm{f}}_{k}({\bm{z}})=D^{2}_{i,i^{\prime}}{\bm{f}}^{(B)}_{k}({\bm{z}}_{B})=0\,,$
(18)
where the last equality holds because $i^{\prime}\not\in B$. This shows that
$D^{2}{\bm{f}}_{k}({\bm{z}})$ is block diagonal.
We now show the “$\impliedby$” direction. Fix $k\in[d_{x}]$,
$B\in{\mathcal{B}}$. We know that $D^{2}_{B,B^{c}}{\bm{f}}_{k}({\bm{z}})=0$
for all ${\bm{z}}\in{\mathbb{R}}^{d_{z}}$. Fix
${\bm{z}}\in{\mathbb{R}}^{d_{z}}$. Consider a continuously differentiable path
$\bm{{\phi}}:[0,1]\rightarrow{\mathbb{R}}^{|B^{c}|}$ such that
$\bm{\phi}(0)=0$ and $\bm{\phi}(1)={\bm{z}}_{B^{c}}$. As
$D^{2}_{B,B^{c}}{\bm{f}}_{k}({\bm{z}})$ is a continuous function of
${\bm{z}}$, we can use the fundamental theorem of calculus for line integrals
to get that
$\displaystyle
D_{B}{\bm{f}}_{k}({\bm{z}}_{B},{\bm{z}}_{B^{c}})-D_{B}{\bm{f}}_{k}({\bm{z}}_{B},0)=\int_{0}^{1}\underbrace{D^{2}_{B,B^{c}}{\bm{f}}_{k}({\bm{z}}_{B},\bm{\phi}(t))}_{=0}\bm{\phi}^{\prime}(t)dt=0\,,$
(19)
(where
$D^{2}_{B,B^{c}}{\bm{f}}_{k}({\bm{z}}_{B},\bm{\phi}(t))\bm{\phi}^{\prime}(t)$
denotes a matrix-vector product) which implies that
$\displaystyle
D_{B}{\bm{f}}_{k}({\bm{z}})=D_{B}{\bm{f}}_{k}({\bm{z}}_{B},0)\,.$ (20)
And the above equality holds for all $B\in{\mathcal{B}}$ and all
${\bm{z}}\in{\mathbb{R}}^{d_{z}}$.
Choose an arbitrary ${\bm{z}}\in{\mathbb{R}}^{d_{z}}$. Consider a continuously
differentiable path $\bm{\psi}:[0,1]\rightarrow{\mathbb{R}}^{d_{z}}$ such that
$\bm{\psi}(0)=0$ and $\bm{\psi}(1)={\bm{z}}$. By applying the fundamental
theorem of calculus for line integrals once more, we have that
$\displaystyle{\bm{f}}_{k}({\bm{z}})-{\bm{f}}_{k}(0)$
$\displaystyle=\int_{0}^{1}D{\bm{f}}_{k}(\bm{\psi}(t))\bm{\psi}^{\prime}(t)dt$
(21)
$\displaystyle=\int_{0}^{1}\sum_{B\in{\mathcal{B}}}D_{B}{\bm{f}}_{k}(\bm{\psi}(t))\bm{\psi}^{\prime}_{B}(t)dt$
(22)
$\displaystyle=\sum_{B\in{\mathcal{B}}}\int_{0}^{1}D_{B}{\bm{f}}_{k}(\bm{\psi}(t))\bm{\psi}^{\prime}_{B}(t)dt$
(23)
$\displaystyle=\sum_{B\in{\mathcal{B}}}\int_{0}^{1}D_{B}{\bm{f}}_{k}(\bm{\psi}_{B}(t),0)\bm{\psi}^{\prime}_{B}(t)dt\,,$
(24)
where the last equality holds by (20). We can further apply the fundamental
theorem of calculus for line integrals to each term
$\int_{0}^{1}D_{B}{\bm{f}}_{k}(\bm{\psi}_{B}(t),0)\bm{\psi}^{\prime}_{B}(t)dt$
to get
$\displaystyle{\bm{f}}_{k}({\bm{z}})-{\bm{f}}_{k}(0)$
$\displaystyle=\sum_{B\in{\mathcal{B}}}({\bm{f}}_{k}({\bm{z}}_{B},0)-{\bm{f}}_{k}(0,0))$
(25) $\displaystyle\implies{\bm{f}}_{k}({\bm{z}})$
$\displaystyle={\bm{f}}_{k}(0)+\sum_{B\in{\mathcal{B}}}({\bm{f}}_{k}({\bm{z}}_{B},0)-{\bm{f}}_{k}(0))$
(26)
$\displaystyle=\sum_{B\in{\mathcal{B}}}\underbrace{\left({\bm{f}}_{k}({\bm{z}}_{B},0)-\frac{|{\mathcal{B}}|-1}{|{\mathcal{B}}|}{\bm{f}}_{k}(0)\right)}_{{\bm{f}}_{k}^{(B)}({\bm{z}}_{B}):=}\,.$
(27)
and since ${\bm{z}}$ was arbitrary, the above holds for all
${\bm{z}}\in{\mathbb{R}}^{d_{z}}$. Note that the functions
${\bm{f}}_{k}^{(B)}({\bm{z}}_{B})$ must be $C^{2}$ because ${\bm{f}}_{k}$ is
$C^{2}$. This concludes the proof. ∎
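The "$\implies$" direction is easy to illustrate numerically: for an additive function, finite-difference estimates of the cross-block entries of the Hessian vanish up to discretization error. A minimal sketch with a hypothetical additive function over the blocks $\\{1\\}$ and $\\{2\\}$ (not an example from the paper):

```python
import numpy as np

def f(z):
    # Additive function with blocks B1 = {0}, B2 = {1}: f(z) = sin(z_0) + z_1^2.
    return np.sin(z[0]) + z[1] ** 2

def hessian(f, z, eps=1e-5):
    """Central finite-difference estimate of the Hessian of a scalar function."""
    n = z.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            zpp = z.copy(); zpp[i] += eps; zpp[j] += eps
            zpm = z.copy(); zpm[i] += eps; zpm[j] -= eps
            zmp = z.copy(); zmp[i] -= eps; zmp[j] += eps
            zmm = z.copy(); zmm[i] -= eps; zmm[j] -= eps
            H[i, j] = (f(zpp) - f(zpm) - f(zmp) + f(zmm)) / (4 * eps ** 2)
    return H

z = np.array([0.3, -1.2])
H = hessian(f, z)
# Cross-block entries vanish, as predicted by Proposition 5 ...
assert abs(H[0, 1]) < 1e-4 and abs(H[1, 0]) < 1e-4
# ... while the within-block entries are unconstrained (here -sin(z_0) and 2).
assert abs(H[1, 1] - 2.0) < 1e-3
```

Conversely, a non-additive function such as $z_{1}z_{2}$ would yield a non-zero off-diagonal estimate under the same check.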
#### A.3 Examples of local but non-global disentanglement
In this section, we provide examples of mappings
${\bm{v}}:\hat{\mathcal{Z}}^{\textnormal{train}}\rightarrow{\mathcal{Z}}^{\textnormal{train}}$
that satisfy the local disentanglement property of Definition 4, but not the
global disentanglement property of Definition 3. Note that these notions are
defined for pairs of decoders ${\bm{f}}$ and $\hat{\bm{f}}$, but here we
construct directly the function ${\bm{v}}$ which is usually defined as
${\bm{f}}^{-1}\circ\hat{\bm{f}}$. However, given ${\bm{v}}$ we can always
define ${\bm{f}}$ and $\hat{\bm{f}}$ to be such that
${\bm{f}}^{-1}\circ\hat{\bm{f}}={\bm{v}}$: Simply take
${\bm{f}}({\bm{z}}):=[{\bm{z}}_{1},\dots,{\bm{z}}_{d_{z}},0,\dots,0]^{\top}\in{\mathbb{R}}^{d_{x}}$
and $\hat{\bm{f}}:={\bm{f}}\circ{\bm{v}}$. This construction however yields a
decoder ${\bm{f}}$ that is not sufficiently nonlinear (Assumption 2). Clearly
the mappings ${\bm{v}}$ that we provide in the following examples cannot be
written as compositions of decoders ${\bm{f}}^{-1}\circ\hat{\bm{f}}$ where
${\bm{f}}$ and $\hat{\bm{f}}$ satisfy all assumptions of Theorem 2, as this
would contradict the theorem. In Examples 4 & 5, the path-connectedness
assumption of Theorem 2 is violated. In Example 6, it is less obvious which
assumption is violated.
###### Example 4 (Disconnected support with changing permutation).
Let ${\bm{v}}:\hat{\mathcal{Z}}\rightarrow{\mathbb{R}}^{2}$ s.t.
$\hat{\mathcal{Z}}=\hat{\mathcal{Z}}^{(1)}\cup\hat{\mathcal{Z}}^{(2)}\subseteq{\mathbb{R}}^{2}$
where
$\hat{\mathcal{Z}}^{(1)}=\\{{\bm{z}}\in{\mathbb{R}}^{2}\mid{\bm{z}}_{1}\leq 0\
\text{and}\ {\bm{z}}_{2}\leq 0\\}$ and
$\hat{\mathcal{Z}}^{(2)}=\\{{\bm{z}}\in{\mathbb{R}}^{2}\mid{\bm{z}}_{1}\geq 1\
\text{and}\ {\bm{z}}_{2}\geq 1\\}$. Assume
$\displaystyle{\bm{v}}({\bm{z}}):=\begin{cases}({\bm{z}}_{1},{\bm{z}}_{2}),&\text{if}\
{\bm{z}}\in\hat{\mathcal{Z}}^{(1)}\\\ ({\bm{z}}_{2},{\bm{z}}_{1}),&\text{if}\
{\bm{z}}\in\hat{\mathcal{Z}}^{(2)}\\\ \end{cases}\,.$ (28)
Step 1: ${\bm{v}}$ is a diffeomorphism. We first show it is injective. Suppose
${\bm{v}}({\bm{z}}^{1})={\bm{v}}({\bm{z}}^{2})$ for some
${\bm{z}}^{1},{\bm{z}}^{2}\in\hat{\mathcal{Z}}$. This implies that ${\bm{z}}^{1}$
and ${\bm{z}}^{2}$ lie in the same region $\hat{\mathcal{Z}}^{(i)}$. To see
this, assume w.l.o.g. that ${\bm{z}}^{1}\in\hat{\mathcal{Z}}^{(1)}$ and
${\bm{z}}^{2}\in\hat{\mathcal{Z}}^{(2)}$. This means
${\bm{v}}({\bm{z}}^{1})={\bm{v}}({\bm{z}}^{2})\implies({\bm{z}}^{1}_{1},{\bm{z}}^{1}_{2})=({\bm{z}}^{2}_{2},{\bm{z}}^{2}_{1})$,
which is a contradiction since ${\bm{z}}^{1}_{1}\leq 0$ and
${\bm{z}}^{2}_{2}\geq 1$. Because ${\bm{z}}^{1}$ and ${\bm{z}}^{2}$ are in the
same region, we have
${\bm{v}}({\bm{z}}^{1})={\bm{v}}({\bm{z}}^{2})\implies{\bm{z}}^{1}={\bm{z}}^{2}$.
Since ${\bm{v}}$ is injective, it is also bijective on its image.
The Jacobian of ${\bm{v}}$ is given by
$\displaystyle D{\bm{v}}({\bm{z}}):=\begin{cases}\begin{bmatrix}1&0\\\
0&1\end{bmatrix},&\text{if}\ {\bm{z}}\in\hat{\mathcal{Z}}^{(1)}\\\
\begin{bmatrix}0&1\\\ 1&0\end{bmatrix},&\text{if}\
{\bm{z}}\in\hat{\mathcal{Z}}^{(2)}\\\ \end{cases}\,,$ (29)
which is full rank everywhere. Hence ${\bm{v}}$ is a diffeomorphism onto its
image.
Step 2: ${\bm{v}}$ is locally disentangled. By (29), the Jacobian
$D{\bm{v}}({\bm{z}})$ is everywhere a permutation matrix, hence ${\bm{v}}$ is
locally disentangled.
Step 3: ${\bm{v}}$ is not globally disentangled. That is because
${\bm{v}}_{1}({\bm{z}}_{1},{\bm{z}}_{2})$ depends on both ${\bm{z}}_{1}$ and
${\bm{z}}_{2}$. Indeed, if ${\bm{z}}_{2}=0$, we have that
${\bm{v}}_{1}(-1,0)=-1\not=0={\bm{v}}_{1}(0,0)$. Also, if ${\bm{z}}_{1}=1$, we
have that ${\bm{v}}_{1}(1,1)=1\not=2={\bm{v}}_{1}(1,2)$.
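The example can be made concrete in a few lines of Python; the sketch below implements the map of (28) and checks that ${\bm{v}}$ acts as a region-dependent permutation while ${\bm{v}}_{1}$ nevertheless depends on both coordinates:

```python
import numpy as np

def v(z):
    """The map of Example 4: identity on Z^(1) (z <= 0), swap on Z^(2) (z >= 1)."""
    z1, z2 = z
    if z1 <= 0 and z2 <= 0:      # region Z^(1)
        return np.array([z1, z2])
    if z1 >= 1 and z2 >= 1:      # region Z^(2)
        return np.array([z2, z1])
    raise ValueError("z outside the support")

# Locally, v acts as a (region-dependent) permutation ...
assert np.allclose(v(np.array([-1.0, 0.0])), [-1.0, 0.0])   # identity on Z^(1)
assert np.allclose(v(np.array([1.0, 2.0])), [2.0, 1.0])     # swap on Z^(2)
# ... but globally v_1 depends on both z_1 and z_2: the two points below
# share the same z_1, yet give different values of v_1.
assert v(np.array([1.0, 1.0]))[0] != v(np.array([1.0, 2.0]))[0]
```

This is exactly the failure of global disentanglement described in Step 3: each region is individually disentangled, but the permutation flips between regions.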
###### Example 5 (Disconnected support with fixed permutation).
Let ${\bm{v}}:\hat{\mathcal{Z}}\rightarrow{\mathbb{R}}^{2}$ s.t.
$\hat{\mathcal{Z}}=\hat{\mathcal{Z}}^{(1)}\cup\hat{\mathcal{Z}}^{(2)}\subseteq{\mathbb{R}}^{2}$
where
$\hat{\mathcal{Z}}^{(1)}=\\{{\bm{z}}\in{\mathbb{R}}^{2}\mid{\bm{z}}_{2}\leq
0\\}$ and
$\hat{\mathcal{Z}}^{(2)}=\\{{\bm{z}}\in{\mathbb{R}}^{2}\mid{\bm{z}}_{2}\geq
1\\}$. Assume
${\bm{v}}({\bm{z}}):={\bm{z}}+\mathbbm{1}({\bm{z}}\in\hat{\mathcal{Z}}^{(2)})$.
Step 1: ${\bm{v}}$ is a diffeomorphism. We now show that ${\bm{v}}$ is
injective. Take ${\bm{z}}^{1},{\bm{z}}^{2}\in\hat{\mathcal{Z}}$ such that
${\bm{v}}({\bm{z}}^{1})={\bm{v}}({\bm{z}}^{2})$. The points ${\bm{z}}^{1}$ and
${\bm{z}}^{2}$ must belong to the same $\hat{\mathcal{Z}}^{(i)}$, since
otherwise ${\bm{z}}^{1}={\bm{z}}^{2}+\mathbbm{1}$ (assuming w.l.o.g. that
${\bm{z}}^{1}\in\hat{\mathcal{Z}}^{(1)}$ and
${\bm{z}}^{2}\in\hat{\mathcal{Z}}^{(2)}$), which implies that
$0\geq{\bm{z}}^{1}_{2}={\bm{z}}^{2}_{2}+1\geq 2$, a contradiction.
Since both ${\bm{z}}^{1}$ and ${\bm{z}}^{2}$ are in the same region, we have
that
$\mathbbm{1}({\bm{z}}^{1}\in\hat{\mathcal{Z}}^{(2)})=\mathbbm{1}({\bm{z}}^{2}\in\hat{\mathcal{Z}}^{(2)})$,
which implies that
$\displaystyle{\bm{v}}({\bm{z}}^{1})$ $\displaystyle={\bm{v}}({\bm{z}}^{2})\,$
(30)
$\displaystyle{\bm{z}}^{1}+\mathbbm{1}({\bm{z}}^{1}\in\hat{\mathcal{Z}}^{(2)})$
$\displaystyle={\bm{z}}^{2}+\mathbbm{1}({\bm{z}}^{2}\in\hat{\mathcal{Z}}^{(2)})\,$
(31)
$\displaystyle{\bm{z}}^{1}+\mathbbm{1}({\bm{z}}^{1}\in\hat{\mathcal{Z}}^{(2)})$
$\displaystyle={\bm{z}}^{2}+\mathbbm{1}({\bm{z}}^{1}\in\hat{\mathcal{Z}}^{(2)})\,$
(32) $\displaystyle{\bm{z}}^{1}$ $\displaystyle={\bm{z}}^{2}\,,$ (33)
which means ${\bm{v}}$ is injective. This, of course, means that it is
bijective on its image, which we denote by
${\mathcal{Z}}:={\bm{v}}(\hat{\mathcal{Z}})$.
The Jacobian of ${\bm{v}}$ is $D{\bm{v}}({\bm{z}})={\bm{I}}$ which is
invertible everywhere on $\hat{\mathcal{Z}}$. Hence, ${\bm{v}}$ is a
diffeomorphism from $\hat{\mathcal{Z}}$ to ${\mathcal{Z}}$.
Step 2: ${\bm{v}}$ is locally disentangled. This is clear since
$D{\bm{v}}({\bm{z}})={\bm{I}}$ everywhere.
Step 3: ${\bm{v}}$ is not globally disentangled. Indeed, the function
${\bm{v}}_{1}({\bm{z}}_{1},{\bm{z}}_{2})={\bm{z}}_{1}+\mathbbm{1}({\bm{z}}\in\hat{\mathcal{Z}}^{(2)})$
is not constant in ${\bm{z}}_{2}$.
Figure 6: Illustration of
$\hat{\mathcal{Z}}=\hat{\mathcal{Z}}^{(b)}\cup\hat{\mathcal{Z}}^{(o)}$ in
Example 6 where $\hat{\mathcal{Z}}^{(b)}$ is the blue region and
$\hat{\mathcal{Z}}^{(o)}$ is the orange region. The two black dots correspond
to $(-1/2,-1/2)$ and $(1/2,-1/2)$, where the function
${\bm{v}}_{2}({\bm{z}}_{1},{\bm{z}}_{2})$ is evaluated to show that it is not
constant in ${\bm{z}}_{1}$.
###### Example 6 (Connected support).
Let ${\bm{v}}:\hat{\mathcal{Z}}\rightarrow{\mathbb{R}}^{2}$ s.t.
$\hat{\mathcal{Z}}=\hat{\mathcal{Z}}^{(b)}\cup\hat{\mathcal{Z}}^{(o)}$ where
$\hat{\mathcal{Z}}^{(b)}$ and $\hat{\mathcal{Z}}^{(o)}$ are respectively the
blue and orange regions of Figure 6. Both regions contain their boundaries.
The function ${\bm{v}}$ is defined as follows:
$\displaystyle{\bm{v}}_{1}({\bm{z}})$ $\displaystyle:={\bm{z}}_{1}$ (34)
$\displaystyle{\bm{v}}_{2}({\bm{z}})$
$\displaystyle:=\begin{cases}\frac{({\bm{z}}_{2}+1)^{2}+1}{2},&\text{if}\
{\bm{z}}\in\hat{\mathcal{Z}}^{(b)}\\\ e^{{\bm{z}}_{2}},&\text{if}\
{\bm{z}}\in\hat{\mathcal{Z}}^{(o)}\end{cases}\,.$ (35)
We must now verify that ${\bm{v}}_{2}({\bm{z}})$ is $C^{2}$ at the frontier
between $\hat{\mathcal{Z}}^{(b)}$ and $\hat{\mathcal{Z}}^{(o)}$, i.e. when
${\bm{z}}\in[1/4,1]\times\\{0\\}$.
${\bm{v}}_{2}({\bm{z}})$ is continuous since
$\displaystyle\left.\frac{({\bm{z}}_{2}+1)^{2}+1}{2}\right|_{{\bm{z}}_{2}=0}=1=\left.e^{{\bm{z}}_{2}}\right|_{{{\bm{z}}_{2}=0}}\,.$
(36)
${\bm{v}}_{2}({\bm{z}})$ is $C^{1}$ since
$\displaystyle\left.\left(\frac{({\bm{z}}_{2}+1)^{2}+1}{2}\right)^{\prime}\right|_{{\bm{z}}_{2}=0}=\left.({\bm{z}}_{2}+1)\right|_{{\bm{z}}_{2}=0}=1=\left.e^{{\bm{z}}_{2}}\right|_{{\bm{z}}_{2}=0}=\left.(e^{{\bm{z}}_{2}})^{\prime}\right|_{{\bm{z}}_{2}=0}\,.$
(37)
${\bm{v}}_{2}({\bm{z}})$ is $C^{2}$ since
$\displaystyle\left.\left(\frac{({\bm{z}}_{2}+1)^{2}+1}{2}\right)^{\prime\prime}\right|_{{\bm{z}}_{2}=0}=1=\left.e^{{\bm{z}}_{2}}\right|_{{\bm{z}}_{2}=0}=\left.(e^{{\bm{z}}_{2}})^{\prime\prime}\right|_{{\bm{z}}_{2}=0}\,.$
(38)
The Jacobian of ${\bm{v}}$ is
$\displaystyle D{\bm{v}}({\bm{z}}):=\begin{cases}\begin{bmatrix}1&0\\\
0&{\bm{z}}_{2}+1\end{bmatrix},\ \text{if}\
{\bm{z}}\in\hat{\mathcal{Z}}^{(b)}\\\ \begin{bmatrix}1&0\\\
0&e^{{\bm{z}}_{2}}\end{bmatrix},\ \text{if}\
{\bm{z}}\in\hat{\mathcal{Z}}^{(o)}\end{cases}\,,$ (39)
which is invertible and a permutation-scaling matrix everywhere on
$\hat{\mathcal{Z}}$. Thus local disentanglement holds.
However, ${\bm{v}}_{2}({\bm{z}}_{1},{\bm{z}}_{2})$ is not constant in
${\bm{z}}_{1}$. Indeed,
$\displaystyle{\bm{v}}_{2}(-\frac{1}{2},-\frac{1}{2})=\left.\frac{({\bm{z}}_{2}+1)^{2}+1}{2}\right|_{{\bm{z}}_{2}=-1/2}=\frac{5}{8}\not=e^{-1/2}={\bm{v}}_{2}(\frac{1}{2},-\frac{1}{2})\,.$
(40)
Thus global disentanglement does not hold.
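The boundary-matching computations (36)-(38) and the evaluation (40) are straightforward to verify numerically. The sketch below treats the two branches of ${\bm{v}}_{2}$ as scalar functions of ${\bm{z}}_{2}$ (an illustrative check, not part of the proof):

```python
import math

def g_blue(t):
    # Branch of v_2 on the blue region: ((z_2 + 1)^2 + 1) / 2.
    return ((t + 1) ** 2 + 1) / 2

g_orange = math.exp  # branch of v_2 on the orange region: e^{z_2}

# The two branches agree up to second order at the frontier z_2 = 0 ...
assert g_blue(0) == g_orange(0) == 1          # values, eq. (36)
assert (0 + 1) == math.exp(0)                 # first derivatives, eq. (37)
assert 1 == math.exp(0)                       # second derivatives, eq. (38)
# ... but differ away from it, so v_2 depends on the region and hence on z_1:
assert abs(g_blue(-0.5) - 5 / 8) < 1e-12      # eq. (40), left-hand side
assert g_blue(-0.5) != g_orange(-0.5)         # 5/8 != e^{-1/2}
```

In other words, ${\bm{v}}$ is $C^{2}$ and locally disentangled everywhere, yet ${\bm{v}}_{2}(-1/2,-1/2)\neq{\bm{v}}_{2}(1/2,-1/2)$, breaking global disentanglement.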
#### A.4 Proof of Theorem 1
###### Proposition 6.
Suppose that the data-generating process satisfies Assumption 1, that the
learned decoder
$\hat{\bm{f}}:{\mathbb{R}}^{d_{z}}\rightarrow{\mathbb{R}}^{d_{x}}$ is a
$C^{2}$-diffeomorphism onto its image and that the encoder
$\hat{\bm{g}}:{\mathbb{R}}^{d_{x}}\rightarrow{\mathbb{R}}^{d_{z}}$ is
continuous. Then, if $\hat{\bm{f}}$ and $\hat{\bm{g}}$ solve the
reconstruction problem on the training distribution, i.e.
${\mathbb{E}}^{\textnormal{train}}||{\bm{x}}-\hat{\bm{f}}(\hat{\bm{g}}({\bm{x}}))||^{2}=0$,
we have that
${\bm{f}}({\mathcal{Z}}^{\textnormal{train}})=\hat{\bm{f}}(\hat{\mathcal{Z}}^{\textnormal{train}})$
and the map ${\bm{v}}:={\bm{f}}^{-1}\circ\hat{\bm{f}}$ is a
$C^{2}$-diffeomorphism from $\hat{\mathcal{Z}}^{\textnormal{train}}$ to
${\mathcal{Z}}^{\textnormal{train}}$.
###### Proof.
First note that
$\displaystyle{\mathbb{E}}^{\textnormal{train}}||{\bm{x}}-\hat{\bm{f}}(\hat{\bm{g}}({\bm{x}}))||^{2}={\mathbb{E}}^{\textnormal{train}}||{\bm{f}}({\bm{z}})-\hat{\bm{f}}(\hat{\bm{g}}({\bm{f}}({\bm{z}})))||^{2}=0\,,$
(41)
which implies that, for ${\mathbb{P}}_{\bm{z}}$-almost every
${\bm{z}}\in{\mathcal{Z}}^{\textnormal{train}}$,
${\bm{f}}({\bm{z}})=\hat{\bm{f}}(\hat{\bm{g}}({\bm{f}}({\bm{z}})))\,.$
But since the functions on both sides of the equation are continuous, the
equality holds for all ${\bm{z}}\in{\mathcal{Z}}^{\textnormal{train}}$. This
implies that
${\bm{f}}({\mathcal{Z}}^{\textnormal{train}})=\hat{\bm{f}}\circ\hat{\bm{g}}\circ{\bm{f}}({\mathcal{Z}}^{\textnormal{train}})=\hat{\bm{f}}(\hat{\mathcal{Z}}^{\textnormal{train}})$.
Let ${\bm{h}}:=\hat{\bm{g}}\circ{\bm{f}}$. Since $\hat{\bm{f}}$ is a
$C^{2}$-diffeomorphism on its image, we have
$\displaystyle\hat{\bm{f}}^{-1}\circ{\bm{f}}({\bm{z}})={\bm{h}}({\bm{z}})\,,$
(42)
which is a composition of $C^{2}$-diffeomorphisms and is thus itself a
$C^{2}$-diffeomorphism from ${\mathcal{Z}}^{\textnormal{train}}$ to
${\bm{h}}({\mathcal{Z}}^{\textnormal{train}})=\hat{\mathcal{Z}}^{\textnormal{train}}$.
This concludes the proof, since ${\bm{v}}={\bm{h}}^{-1}$. ∎
The following technical lemma can be skipped at first read.
###### Lemma 7.
Let ${\bm{f}}:{\mathbb{R}}^{d_{z}}\rightarrow{\mathbb{R}}^{d_{x}}$ and
$\tilde{\bm{f}}:{\mathbb{R}}^{d_{z}}\rightarrow{\mathbb{R}}^{d_{x}}$ be two
$C^{2}$ functions such that, for all ${\bm{z}}\in{\mathcal{Z}}$,
${\bm{f}}({\bm{z}})=\tilde{\bm{f}}({\bm{z}})$. Then, for all
${\bm{z}}\in\overline{{\mathcal{Z}}^{\circ}}$,
$D{\bm{f}}({\bm{z}})=D\tilde{{\bm{f}}}({\bm{z}})$ and, for all $k$,
$D^{2}{\bm{f}}_{k}({\bm{z}})=D^{2}\tilde{\bm{f}}_{k}({\bm{z}})$.
###### Proof.
The derivative is defined only on the interior of a domain, hence
$\displaystyle\forall{\bm{z}}\in{\mathcal{Z}}^{\circ},D{\bm{f}}({\bm{z}})$
$\displaystyle=D\tilde{\bm{f}}({\bm{z}})$ (43)
Choose ${\bm{z}}_{0}\in\overline{{\mathcal{Z}}^{\circ}}$. Because
${\bm{z}}_{0}$ is in the closure of ${\mathcal{Z}}^{\circ}$, there exists a
sequence $\\{{\bm{z}}_{k}\\}_{k=1}^{\infty}\subseteq{\mathcal{Z}}^{\circ}$
such that $\lim_{k\to\infty}{\bm{z}}_{k}={\bm{z}}_{0}$. Of course we have
$\displaystyle\lim_{k\to\infty}D{\bm{f}}({\bm{z}}_{k})$
$\displaystyle=\lim_{k\to\infty}D\tilde{\bm{f}}({\bm{z}}_{k})$ (44)
$\displaystyle D{\bm{f}}({\bm{z}}_{0})$
$\displaystyle=D\tilde{\bm{f}}({\bm{z}}_{0})\,,$ (45)
where the last step holds because the derivative itself is continuous. Hence
the derivatives are equal on $\overline{{\mathcal{Z}}^{\circ}}$. Because
${\bm{f}}$ and $\tilde{\bm{f}}$ are $C^{2}$, their derivatives are $C^{1}$. We
can thus apply a similar argument to show that their second derivatives are
equal on $\overline{\overline{{\mathcal{Z}}^{\circ}}^{\circ}}$, which can be
shown to be equal to $\overline{{\mathcal{Z}}^{\circ}}$. ∎
See Theorem 1.
###### Proof.
We can apply Proposition 6 and have that the map
${\bm{v}}:={\bm{f}}^{-1}\circ\hat{\bm{f}}$ is a $C^{2}$-diffeomorphism from
$\hat{\mathcal{Z}}^{\textnormal{train}}$ to
${\mathcal{Z}}^{\textnormal{train}}$. This allows one to write
$\displaystyle{\bm{f}}\circ{\bm{v}}({\bm{z}})$
$\displaystyle=\hat{\bm{f}}({\bm{z}})\
\forall{\bm{z}}\in\hat{\mathcal{Z}}^{\textnormal{train}}$ (46)
$\displaystyle\sigma(\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(B)}({\bm{v}}_{B}({\bm{z}})))$
$\displaystyle=\sigma(\sum_{B\in{\mathcal{B}}}\hat{\bm{f}}^{(B)}({\bm{z}}_{B}))$
(47)
$\displaystyle\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(B)}({\bm{v}}_{B}({\bm{z}}))$
$\displaystyle=\sum_{B\in{\mathcal{B}}}\hat{\bm{f}}^{(B)}({\bm{z}}_{B})\
\forall{\bm{z}}\in\hat{\mathcal{Z}}^{\textnormal{train}}\,.$ (48)
Since ${\mathcal{Z}}^{\textnormal{train}}$ is regularly closed and is
diffeomorphic to $\hat{\mathcal{Z}}^{\textnormal{train}}$,
$\hat{\mathcal{Z}}^{\textnormal{train}}$ must also be regularly closed
(topological properties are preserved by diffeomorphisms). Moreover by Lemma
7, Equation (48) implies the first and second derivatives are equal over
$\overline{(\hat{\mathcal{Z}}^{\textnormal{train}})^{\circ}}$ (the closure of
the interior of $\hat{\mathcal{Z}}^{\textnormal{train}}$). But since
$\hat{\mathcal{Z}}^{\textnormal{train}}$ is regularly closed, we have
$\overline{(\hat{\mathcal{Z}}^{\textnormal{train}})^{\circ}}=\hat{\mathcal{Z}}^{\textnormal{train}}$
and thus the first and second derivatives are equal on
$\hat{\mathcal{Z}}^{\textnormal{train}}$.
Let ${\bm{z}}\in\hat{\mathcal{Z}}^{\textnormal{train}}$. Choose some
$J\in{\mathcal{B}}$ and some $j\in J$. Differentiate both sides of the above
equation with respect to ${\bm{z}}_{j}$, which yields:
$\displaystyle\sum_{B\in{\mathcal{B}}}\sum_{i\in
B}D_{i}{\bm{f}}^{(B)}({\bm{v}}_{B}({\bm{z}}))D_{j}{\bm{v}}_{i}({\bm{z}})$
$\displaystyle=D_{j}\hat{\bm{f}}^{(J)}({\bm{z}}_{J})\,.$ (49)
Choose $J^{\prime}\in{\mathcal{B}}\setminus\\{J\\}$ and $j^{\prime}\in
J^{\prime}$. Differentiating the above w.r.t. ${\bm{z}}_{j^{\prime}}$ yields
$\displaystyle\sum_{B\in{\mathcal{B}}}\sum_{i\in
B}\left[D_{i}{\bm{f}}^{(B)}({\bm{v}}_{B}({\bm{z}}))D^{2}_{j,j^{\prime}}{\bm{v}}_{i}({\bm{z}})+\sum_{i^{\prime}\in
B}D^{2}_{i,i^{\prime}}{\bm{f}}^{(B)}({\bm{v}}_{B}({\bm{z}}))D_{j^{\prime}}{\bm{v}}_{i^{\prime}}({\bm{z}})D_{j}{\bm{v}}_{i}({\bm{z}})\right]$
$\displaystyle=0$ $\displaystyle\sum_{B\in{\mathcal{B}}}\bigg{[}\sum_{i\in
B}\Big{[}D_{i}{\bm{f}}^{(B)}({\bm{v}}_{B}({\bm{z}}))D^{2}_{j,j^{\prime}}{\bm{v}}_{i}({\bm{z}})+D^{2}_{i,i}{\bm{f}}^{(B)}({\bm{v}}_{B}({\bm{z}}))D_{j^{\prime}}{\bm{v}}_{i}({\bm{z}})D_{j}{\bm{v}}_{i}({\bm{z}})\Big{]}\bigg{.}+\quad$
$\displaystyle\bigg{.}\sum_{(i,i^{\prime})\in
B^{2}_{<}}D^{2}_{i,i^{\prime}}{\bm{f}}^{(B)}({\bm{v}}_{B}({\bm{z}}))(D_{j^{\prime}}{\bm{v}}_{i^{\prime}}({\bm{z}})D_{j}{\bm{v}}_{i}({\bm{z}})+D_{j^{\prime}}{\bm{v}}_{i}({\bm{z}})D_{j}{\bm{v}}_{i^{\prime}}({\bm{z}}))\bigg{]}$
$\displaystyle=0\,,$ (50)
where $B^{2}_{<}:=B^{2}\cap\\{(i,i^{\prime})\mid i^{\prime}<i\\}$. For the
sake of notational conciseness, we are going to refer to $S_{\mathcal{B}}$ and
$S_{\mathcal{B}}^{c}$ as $S$ and $S^{c}$ (Definition 10). Also, define
$\displaystyle S_{<}:=\bigcup_{B\in{\mathcal{B}}}B^{2}_{<}\,.$ (51)
Let us define the vectors
$\displaystyle\forall i\in\\{1,\dots,d_{z}\\},\ \vec{a}_{i}({\bm{z}})$
$\displaystyle:=(D^{2}_{j,j^{\prime}}{\bm{v}}_{i}({\bm{z}}))_{(j,j^{\prime})\in
S^{c}}$ (52) $\displaystyle\forall i\in\\{1,\dots,d_{z}\\},\
\vec{b}_{i}({\bm{z}})$
$\displaystyle:=(D_{j^{\prime}}{\bm{v}}_{i}({\bm{z}})D_{j}{\bm{v}}_{i}({\bm{z}}))_{(j,j^{\prime})\in
S^{c}}$ (53) $\displaystyle\forall B\in{\mathcal{B}},\
\forall(i,i^{\prime})\in B^{2}_{<},\ \vec{c}_{i,i^{\prime}}({\bm{z}})$
$\displaystyle:=(D_{j^{\prime}}{\bm{v}}_{i^{\prime}}({\bm{z}})D_{j}{\bm{v}}_{i}({\bm{z}})+D_{j^{\prime}}{\bm{v}}_{i}({\bm{z}})D_{j}{\bm{v}}_{i^{\prime}}({\bm{z}}))_{(j,j^{\prime})\in
S^{c}}$ (54)
This allows us to rewrite, for all $k\in\\{1,...,d_{x}\\}$
$\displaystyle\sum_{B\in{\mathcal{B}}}\left[\sum_{i\in
B}\left[D_{i}{\bm{f}}_{k}^{(B)}({\bm{v}}_{B}({\bm{z}}))\vec{a}_{i}({\bm{z}})+D^{2}_{i,i}{\bm{f}}_{k}^{(B)}({\bm{v}}_{B}({\bm{z}}))\vec{b}_{i}({\bm{z}})\right]+\sum_{(i,i^{\prime})\in
B^{2}_{<}}D^{2}_{i,i^{\prime}}{\bm{f}}_{k}^{(B)}({\bm{v}}_{B}({\bm{z}}))\vec{c}_{i,i^{\prime}}({\bm{z}})\right]$
$\displaystyle=0\,.$ (55)
We define
$\displaystyle{\bm{w}}({\bm{z}},k)$
$\displaystyle:=((D_{i}{\bm{f}}_{k}^{(B)}({\bm{z}}_{B}))_{i\in
B},(D^{2}_{i,i}{\bm{f}}_{k}^{(B)}({\bm{z}}_{B}))_{i\in
B},(D^{2}_{i,i^{\prime}}{\bm{f}}_{k}^{(B)}({\bm{z}}_{B}))_{(i,i^{\prime})\in
B_{<}^{2}})_{B\in{\mathcal{B}}}$ (56) $\displaystyle{\bm{M}}({\bm{z}})$
$\displaystyle:=[[\vec{a}_{i}({\bm{z}})]_{i\in
B},[\vec{b}_{i}({\bm{z}})]_{i\in
B},[\vec{c}_{i,i^{\prime}}({\bm{z}})]_{(i,i^{\prime})\in
B_{<}^{2}}]_{B\in{\mathcal{B}}}\,,$ (57)
which allows us to write, for all $k\in\\{1,...,d_{x}\\}$
$\displaystyle{\bm{M}}({\bm{z}}){\bm{w}}({\bm{v}}({\bm{z}}),k)=0\,.$ (58)
We can now recognize that the matrix ${\bm{W}}({\bm{v}}({\bm{z}}))$ of
Assumption 2 is given by
$\displaystyle{\bm{W}}({\bm{v}}({\bm{z}}))^{\top}=\left[{\bm{w}}({\bm{v}}({\bm{z}}),1)\
\dots\ {\bm{w}}({\bm{v}}({\bm{z}}),d_{x})\right]\,$ (59)
which allows us to write
$\displaystyle{\bm{M}}({\bm{z}}){\bm{W}}({\bm{v}}({\bm{z}}))^{\top}=0$ (60)
$\displaystyle{\bm{W}}({\bm{v}}({\bm{z}})){\bm{M}}({\bm{z}})^{\top}=0$ (61)
Since ${\bm{W}}({\bm{v}}({\bm{z}}))$ has full column-rank (by Assumption 2 and
the fact that ${\bm{v}}({\bm{z}})\in{\mathcal{Z}}^{\textnormal{train}}$),
there exist $q$ linearly independent rows, where $q$ is the number of columns
of ${\bm{W}}({\bm{v}}({\bm{z}}))$. Let $K$ be the index set of these rows, so
that ${\bm{W}}({\bm{v}}({\bm{z}}))_{K,\cdot}$ is an invertible matrix. We can
thus write
$\displaystyle{\bm{W}}({\bm{v}}({\bm{z}}))_{K,\cdot}{\bm{M}}({\bm{z}})^{\top}$
$\displaystyle=0$ (62)
$\displaystyle({\bm{W}}({\bm{v}}({\bm{z}}))_{K,\cdot})^{-1}{\bm{W}}({\bm{v}}({\bm{z}}))_{K,\cdot}{\bm{M}}({\bm{z}})^{\top}$
$\displaystyle=({\bm{W}}({\bm{v}}({\bm{z}}))_{K,\cdot})^{-1}0$ (63)
$\displaystyle{\bm{M}}({\bm{z}})^{\top}$ $\displaystyle=0\,,$ (64)
which means, in particular, that, $\forall i\in\\{1,\dots,d_{z}\\}$,
$\vec{b}_{i}({\bm{z}})=0$, i.e.,
$\displaystyle\forall i\in\\{1,\dots,d_{z}\\},\forall(j,j^{\prime})\in
S^{c},D_{j}{\bm{v}}_{i}({\bm{z}})D_{j^{\prime}}{\bm{v}}_{i}({\bm{z}})=0\ $
(65)
Since ${\bm{v}}$ is a diffeomorphism, its Jacobian matrix
$D{\bm{v}}({\bm{z}})$ is invertible everywhere. By Lemma 4, this means there
exists a permutation $\pi$ such that, for all $j$,
$D_{j}{\bm{v}}_{\pi(j)}({\bm{z}})\not=0$. This and (65) imply that
$\displaystyle\forall(j,j^{\prime})\in S^{c},\ \
D_{j}{\bm{v}}_{\pi(j^{\prime})}({\bm{z}})\underbrace{D_{j^{\prime}}{\bm{v}}_{\pi(j^{\prime})}({\bm{z}})}_{\not=0}$
$\displaystyle=0,$ (66) $\displaystyle\implies\forall(j,j^{\prime})\in S^{c},\
\ D_{j}{\bm{v}}_{\pi(j^{\prime})}({\bm{z}})$ $\displaystyle=0\,.$ (67)
To show that $D{\bm{v}}({\bm{z}})$ is a ${\mathcal{B}}$-block permutation
matrix, the only thing left to show is that $\pi$ respects ${\mathcal{B}}$.
For this, we use the fact that, $\forall
B\in{\mathcal{B}},\forall(i,i^{\prime})\in B^{2}_{<}$,
$\vec{c}_{i,i^{\prime}}({\bm{z}})=0$ (recall ${\bm{M}}({\bm{z}})=0$). Because
$\vec{c}_{i,i^{\prime}}({\bm{z}})=\vec{c}_{i^{\prime},i}({\bm{z}})$, we can
write
$\displaystyle\forall(i,i^{\prime})\in S\ \text{s.t.}\
i\not=i^{\prime},\forall(j,j^{\prime})\in
S^{c},D_{j^{\prime}}{\bm{v}}_{i^{\prime}}({\bm{z}})D_{j}{\bm{v}}_{i}({\bm{z}})+D_{j^{\prime}}{\bm{v}}_{i}({\bm{z}})D_{j}{\bm{v}}_{i^{\prime}}({\bm{z}})=0\,.$
(68)
We now show that if $(j,j^{\prime})\in S^{c}$ (indices belong to different
blocks), then $(\pi(j),\pi(j^{\prime}))\in S^{c}$ (they also belong to
different blocks). Assume this is false, i.e. there exists
$(j_{0},j^{\prime}_{0})\in S^{c}$ such that
$(\pi(j_{0}),\pi(j^{\prime}_{0}))\in S$. Then we can apply (68) (with
$i:=\pi(j_{0})$ and $i^{\prime}:=\pi(j^{\prime}_{0})$) and get
$\displaystyle\underbrace{D_{j_{0}^{\prime}}{\bm{v}}_{\pi(j_{0}^{\prime})}({\bm{z}})D_{j_{0}}{\bm{v}}_{\pi(j_{0})}({\bm{z}})}_{\not=0}+D_{j^{\prime}_{0}}{\bm{v}}_{\pi(j_{0})}({\bm{z}})D_{j_{0}}{\bm{v}}_{\pi(j^{\prime}_{0})}({\bm{z}})=0\,,$
(69)
where the first term in the sum is nonzero by the definition of
$\pi$. This implies that
$\displaystyle
D_{j_{0}^{\prime}}{\bm{v}}_{\pi(j_{0})}({\bm{z}})D_{j_{0}}{\bm{v}}_{\pi(j^{\prime}_{0})}({\bm{z}})\not=0\,,$
(70)
otherwise (69) cannot hold. But (70) contradicts (67). Thus, we have that,
$\displaystyle(j,j^{\prime})\in S^{c}\implies(\pi(j),\pi(j^{\prime}))\in
S^{c}\,.$ (71)
Taking the contrapositive gives
$\displaystyle(\pi(j),\pi(j^{\prime}))\in S\implies(j,j^{\prime})\in S$ (72)
and hence, substituting $j:=\pi^{-1}(j)$ and $j^{\prime}:=\pi^{-1}(j^{\prime})$,
$\displaystyle(j,j^{\prime})\in S\implies(\pi^{-1}(j),\pi^{-1}(j^{\prime}))\in
S\,.$ (73)
From the above, it is clear that $\pi^{-1}$ respects ${\mathcal{B}}$ which
implies that $\pi$ respects ${\mathcal{B}}$ (Lemma 8). Thus
$D{\bm{v}}({\bm{z}})$ is a ${\mathcal{B}}$-block permutation matrix. ∎
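To make the block-permutation structure just derived concrete, here is a small numerical sketch (our own illustration, not part of the proof) that tests whether a given matrix is a ${\mathcal{B}}$-block permutation matrix by searching over block-level permutations; the partition and matrices are arbitrary examples:

```python
import numpy as np
from itertools import permutations

def is_block_permutation(M, blocks):
    """Check whether M is invertible and M[pi(B'), B] == 0 for all distinct
    blocks B, B', for some permutation pi of the (equal-size) blocks."""
    if abs(np.linalg.det(M)) < 1e-12:
        return False
    n = len(blocks)
    for perm in permutations(range(n)):  # candidate block-level permutation
        ok = all(
            np.allclose(M[np.ix_(blocks[perm[b]], blocks[bp])], 0)
            for b in range(n) for bp in range(n) if bp != b
        )
        if ok:
            return True
    return False

blocks = [[0, 1], [2, 3]]
# Block-swap matrix: output block {0,1} depends only on input block {2,3}.
M = np.block([[np.zeros((2, 2)), np.eye(2)],
              [2 * np.eye(2),    np.zeros((2, 2))]])
assert is_block_permutation(M, blocks)
assert not is_block_permutation(np.ones((4, 4)), blocks)  # singular
```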
###### Lemma 8 (${\mathcal{B}}$-respecting permutations form a group).
Let ${\mathcal{B}}$ be a partition of $\\{1,\dots,d_{z}\\}$ and let $\pi$ and
$\bar{\pi}$ be permutations of $\\{1,\dots,d_{z}\\}$ that respect
${\mathcal{B}}$. The following holds:
1. 1.
The identity permutation $e$ respects ${\mathcal{B}}$.
2. 2.
The composition $\pi\circ\bar{\pi}$ respects ${\mathcal{B}}$.
3. 3.
The inverse permutation $\pi^{-1}$ respects ${\mathcal{B}}$.
###### Proof.
The first statement is trivial, since for all $B\in{\mathcal{B}}$,
$e(B)=B\in{\mathcal{B}}$.
The second statement follows since for all $B\in{\mathcal{B}}$,
$\bar{\pi}(B)\in{\mathcal{B}}$ and thus $\pi(\bar{\pi}(B))\in{\mathcal{B}}$.
We now prove the third statement. Let $B\in{\mathcal{B}}$. Since $\pi$ is
surjective and respects ${\mathcal{B}}$, there exists a
$B^{\prime}\in{\mathcal{B}}$ such that $\pi(B^{\prime})=B$. Thus,
$\pi^{-1}(B)=\pi^{-1}(\pi(B^{\prime}))=B^{\prime}\in{\mathcal{B}}$. ∎
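These three closure properties are easy to verify numerically on a toy partition. The following sketch (an illustration we add here, using an arbitrary partition of $\{1,\dots,4\}$, 0-indexed for convenience) enumerates all ${\mathcal{B}}$-respecting permutations and checks the group axioms:

```python
from itertools import permutations

# Partition B of {0, 1, 2, 3} into two blocks.
blocks = [frozenset({0, 1}), frozenset({2, 3})]

def respects(perm, blocks):
    """perm is a tuple with perm[i] = pi(i). It respects the partition if
    the image of every block is again a block of the partition."""
    return all(frozenset(perm[i] for i in B) in blocks for B in blocks)

def compose(p, q):
    # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, j in enumerate(p):
        inv[j] = i
    return tuple(inv)

respecting = [p for p in permutations(range(4)) if respects(p, blocks)]

identity = tuple(range(4))
assert respects(identity, blocks)                  # statement 1
for p in respecting:
    assert respects(inverse(p), blocks)            # statement 3
    for q in respecting:
        assert compose(p, q) in respecting         # statement 2
```

With two blocks of size 2 there are $2\cdot 2\cdot 2 = 8$ respecting permutations: each block can be permuted internally, and the two blocks can be swapped.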
#### A.5 Sufficient nonlinearity vs. sufficient variability in nonlinear ICA
with auxiliary variables
In Section 3.1, we introduced the “sufficient nonlinearity” condition
(Assumption 2) and highlighted its resemblance to the “sufficient variability”
assumptions often found in the nonlinear ICA literature [21, 22, 23, 25, 26,
30, 49]. We now clarify this connection. To make the discussion more concrete,
we consider the sufficient variability assumption found in Hyvärinen et al.
[23]. In this work, the latent variable ${\bm{z}}$ is assumed to be
distributed according to
$\displaystyle
p({\bm{z}}\mid{\bm{u}}):=\prod_{i=1}^{d_{z}}p_{i}({\bm{z}}_{i}\mid{\bm{u}})\,.$
(74)
In other words, the latent factors ${\bm{z}}_{i}$ are mutually conditionally
independent given an observed auxiliary variable ${\bm{u}}$. Define
$\displaystyle{\bm{w}}({\bm{z}},{\bm{u}}):=\left(\left(\frac{\partial}{\partial{\bm{z}}_{i}}\log
p_{i}({\bm{z}}_{i}\mid{\bm{u}})\right)_{i\in[d_{z}]}\
\left(\frac{\partial^{2}}{\partial{\bm{z}}_{i}^{2}}\log
p_{i}({\bm{z}}_{i}\mid{\bm{u}})\right)_{i\in[d_{z}]}\right)\in{\mathbb{R}}^{2d_{z}}\,.$
(75)
We now recall the assumption of sufficient variability of Hyvärinen et al.
[23]:
###### Assumption 3 (Assumption of variability from Hyvärinen et al. [23,
Theorem 1]).
For any ${\bm{z}}\in{\mathbb{R}}^{d_{z}}$, there exist $2d_{z}+1$ values of
${\bm{u}}$, denoted by
${\bm{u}}^{(0)},{\bm{u}}^{(1)},\dots,{\bm{u}}^{(2d_{z})}$ such that the
$2d_{z}$ vectors
$\displaystyle{\bm{w}}({\bm{z}},{\bm{u}}^{(1)})-{\bm{w}}({\bm{z}},{\bm{u}}^{(0)}),\dots,{\bm{w}}({\bm{z}},{\bm{u}}^{(2d_{z})})-{\bm{w}}({\bm{z}},{\bm{u}}^{(0)})\,$
(76)
are linearly independent.
To emphasize the resemblance with our assumption of sufficient nonlinearity,
we rewrite it in the special case where the partition
${\mathcal{B}}:=\\{\\{1\\},\dots,\\{d_{z}\\}\\}$. Note that, in that case,
$q:=d_{z}+\sum_{B\in{\mathcal{B}}}\frac{|B|(|B|+1)}{2}=2d_{z}$.
###### Assumption 4 (Sufficient nonlinearity (trivial partition)).
For all ${\bm{z}}\in{\mathcal{Z}}^{\textnormal{train}}$, ${\bm{f}}$ is such
that the following matrix has independent columns (i.e. full column-rank):
$\displaystyle{\bm{W}}({\bm{z}})$
$\displaystyle:=\left[\left[D_{i}{\bm{f}}^{(i)}({\bm{z}}_{i})\right]_{i\in[d_{z}]}\
\left[D^{2}_{i,i}{\bm{f}}^{(i)}({\bm{z}}_{i})\right]_{i\in[d_{z}]}\right]\in{\mathbb{R}}^{d_{x}\times
2d_{z}}\,.$ (77)
One can already see the resemblance between Assumptions 3 & 4, e.g. both have
something to do with first and second derivatives. To make the connection even
more explicit, define ${\bm{w}}({\bm{z}},k)$ to be the $k$th row of
${\bm{W}}({\bm{z}})$ (do not conflate with ${\bm{w}}({\bm{z}},{\bm{u}})$).
Also, recall the basic fact from linear algebra that the column-rank is always
equal to the row-rank. This means that ${\bm{W}}({\bm{z}})$ is full column-
rank if and only if there exist $k_{1}$, …, $k_{2d_{z}}\in[d_{x}]$ such that
the vectors ${\bm{w}}({\bm{z}},k_{1}),\dots,{\bm{w}}({\bm{z}},k_{2d_{z}})$ are
linearly independent. It is then easy to see the correspondence between
${\bm{w}}({\bm{z}},k)$ and
${\bm{w}}({\bm{z}},{\bm{u}})-{\bm{w}}({\bm{z}},{\bm{u}}^{(0)})$ (from
Assumption 3) and between the pixel index $k\in[d_{x}]$ and the auxiliary
variable ${\bm{u}}$.
#### A.6 Example of a sufficiently nonlinear additive decoder
###### Example 7 (A sufficiently nonlinear ${\bm{f}}$ \- Example 3
continued).
Consider the additive function
$\displaystyle{\bm{f}}({\bm{z}}):=\begin{bmatrix}{\bm{z}}_{1}\\\
{\bm{z}}_{1}^{2}\\\ {\bm{z}}_{1}^{3}\\\
{\bm{z}}_{1}^{4}\end{bmatrix}+\begin{bmatrix}({\bm{z}}_{2}+1)\\\
({\bm{z}}_{2}+1)^{2}\\\ ({\bm{z}}_{2}+1)^{3}\\\
({\bm{z}}_{2}+1)^{4}\end{bmatrix}\,.$ (78)
We will provide a numerical verification that this function is a
diffeomorphism from the square $[-1,0]\times[0,1]$ to its image that satisfies
Assumption 2.
The Jacobian of ${\bm{f}}$ is given by
$\displaystyle D{\bm{f}}({\bm{z}})=\begin{bmatrix}1&1\\\
2{\bm{z}}_{1}&2({\bm{z}}_{2}+1)\\\ 3{\bm{z}}_{1}^{2}&3({\bm{z}}_{2}+1)^{2}\\\
4{\bm{z}}_{1}^{3}&4({\bm{z}}_{2}+1)^{3}\\\ \end{bmatrix}\,,$ (79)
and the matrix ${\bm{W}}({\bm{z}})$ from Assumption 2 is given by
$\displaystyle{\bm{W}}({\bm{z}})=\begin{bmatrix}1&0&1&0\\\
2{\bm{z}}_{1}&2&2({\bm{z}}_{2}+1)&2\\\
3{\bm{z}}_{1}^{2}&6{\bm{z}}_{1}&3({\bm{z}}_{2}+1)^{2}&6({\bm{z}}_{2}+1)\\\
4{\bm{z}}_{1}^{3}&12{\bm{z}}_{1}^{2}&4({\bm{z}}_{2}+1)^{3}&12({\bm{z}}_{2}+1)^{2}\end{bmatrix}\,.$
(80)
Figure 7 presents a numerical verification that ${\bm{f}}$ is injective, has a
full rank Jacobian and satisfies Assumption 2. Injectivity of ${\bm{f}}$
together with a full rank Jacobian is enough to conclude that ${\bm{f}}$ is a
diffeomorphism onto its image.
Figure 7: Numerical verification that
${\bm{f}}:[-1,0]\times[0,1]\rightarrow{\mathbb{R}}^{4}$ from Example 7 is
injective (left), has a full rank Jacobian (middle) and satisfies Assumption 2
(right). The left figure shows that ${\bm{f}}$ is injective on the square
$[-1,0]\times[0,1]$ since one can recover ${\bm{z}}$ uniquely by knowing the
values of ${\bm{f}}_{1}({\bm{z}})$ and ${\bm{f}}_{2}({\bm{z}})$, i.e. knowing
the level sets. The middle figure reports the
$\det(D{\bm{f}}({\bm{z}})^{\top}D{\bm{f}}({\bm{z}}))$ and shows that it is
nonzero in the square $[-1,0]\times[0,1]$, which means the Jacobian is full
rank. The right figure shows the determinant of the matrix
${\bm{W}}({\bm{z}})$ (from Assumption 2); it is nonzero
everywhere on the square $[-1,0]\times[0,1]$.
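The grid check reported in Figure 7 can be reproduced in a few lines of code. The following sketch (our own, with an arbitrary grid resolution) evaluates $\det{\bm{W}}({\bm{z}})$ from (80) and the Gram determinant $\det(D{\bm{f}}({\bm{z}})^{\top}D{\bm{f}}({\bm{z}}))$ from (79) on the square $[-1,0]\times[0,1]$; a finite grid check is of course not a proof:

```python
import numpy as np

def W(z1, z2):
    # Matrix from (80), with b := z2 + 1.
    b = z2 + 1.0
    return np.array([
        [1.0,      0.0,       1.0,     0.0],
        [2*z1,     2.0,       2*b,     2.0],
        [3*z1**2,  6*z1,      3*b**2,  6*b],
        [4*z1**3,  12*z1**2,  4*b**3,  12*b**2],
    ])

def Df(z1, z2):
    # Jacobian from (79).
    b = z2 + 1.0
    return np.array([
        [1.0,      1.0],
        [2*z1,     2*b],
        [3*z1**2,  3*b**2],
        [4*z1**3,  4*b**3],
    ])

grid1 = np.linspace(-1.0, 0.0, 41)   # z1 range
grid2 = np.linspace(0.0, 1.0, 41)    # z2 range
min_detW = min(abs(np.linalg.det(W(a, b))) for a in grid1 for b in grid2)
min_gram = min(np.linalg.det(Df(a, b).T @ Df(a, b)) for a in grid1 for b in grid2)

assert min_detW > 0   # Assumption 2 holds on the grid
assert min_gram > 0   # Jacobian has full column rank on the grid
```

In fact, factoring out $\mathrm{diag}(1,2,3,4)$ shows $\det{\bm{W}}({\bm{z}})=24\,({\bm{z}}_{2}+1-{\bm{z}}_{1})^{4}$, a confluent Vandermonde determinant, which is at least $24$ on the square since ${\bm{z}}_{1}\leq 0<1\leq{\bm{z}}_{2}+1$.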
#### A.7 Proof of Theorem 2
We start with a simple definition:
###### Definition 13 (${\mathcal{B}}$-block permutation matrices).
A matrix ${\bm{A}}\in{\mathbb{R}}^{d\times d}$ is a ${\mathcal{B}}$-block
permutation matrix if it is invertible and can be written as
${\bm{A}}={\bm{C}}{\bm{P}}_{\pi}$ where ${\bm{P}}_{\pi}$ is the matrix
representing the ${\mathcal{B}}$-respecting permutation $\pi$
(${\bm{P}}_{\pi}{\bm{e}}_{i}={\bm{e}}_{\pi(i)}$) and
${\bm{C}}\in{\mathbb{R}}^{d\times d}_{S_{\mathcal{B}}}$ (See Definitions 9 &
10).
The following technical lemma leverages continuity and path-connectedness to
show that the block-permutation structure must remain the same across the
whole domain. It can be skipped at first read.
###### Lemma 9.
Let ${\mathcal{C}}$ be a connected subset of some topological space and let
${\bm{M}}:{\mathcal{C}}\rightarrow{\mathbb{R}}^{d\times d}$ be a continuous
function. Suppose that, for all $c\in{\mathcal{C}}$, ${\bm{M}}(c)$ is a
${\mathcal{B}}$-block permutation matrix (Definition 13). Then, there exists a
${\mathcal{B}}$-respecting permutation $\pi$ such that for all
$c\in{\mathcal{C}}$ and all distinct $B,B^{\prime}\in{\mathcal{B}}$,
${\bm{M}}(c)_{\pi(B^{\prime}),B}=0$.
###### Proof.
The reason this result is not trivial is that, even if ${\bm{M}}(c)$ is a
${\mathcal{B}}$-block permutation for all $c$, the permutation might change
for different $c$. The goal of this lemma is to show that, if ${\mathcal{C}}$
is connected and the map ${\bm{M}}(\cdot)$ is continuous, then one can find a
single permutation that works for all $c\in{\mathcal{C}}$.
First, since ${\mathcal{C}}$ is connected and ${\bm{M}}$ is continuous, its
image, ${\bm{M}}({\mathcal{C}})$, must be connected (by [39, Theorem 23.5]).
Second, from the hypothesis of the lemma, we know that
$\displaystyle{\bm{M}}({\mathcal{C}})\subseteq{\mathcal{A}}:=\left(\bigcup_{\pi\in\mathfrak{S}({\mathcal{B}})}{\mathbb{R}}^{d\times
d}_{S_{\mathcal{B}}}{\bm{P}}_{\pi}\right)\setminus\\{\text{singular
matrices}\\}\,,$ (81)
where $\mathfrak{S}({\mathcal{B}})$ is the set of ${\mathcal{B}}$-respecting
permutations and ${\mathbb{R}}^{d\times
d}_{S_{\mathcal{B}}}{\bm{P}}_{\pi}=\\{{\bm{M}}{\bm{P}}_{\pi}\mid{\bm{M}}\in{\mathbb{R}}_{S_{\mathcal{B}}}^{d\times
d}\\}$. We can rewrite the set ${\mathcal{A}}$ above as
$\displaystyle{\mathcal{A}}=\bigcup_{\pi\in\mathfrak{S}({\mathcal{B}})}\left({\mathbb{R}}^{d\times
d}_{S_{\mathcal{B}}}{\bm{P}}_{\pi}\setminus\\{\text{singular
matrices}\\}\right)\,.$ (82)
We now define an equivalence relation $\sim$ over ${\mathcal{B}}$-respecting
permutation: $\pi\sim\pi^{\prime}$ iff for all $B\in{\mathcal{B}}$,
$\pi(B)=\pi^{\prime}(B)$. In other words, two ${\mathcal{B}}$-respecting
permutations are equivalent if they send every block to the same block (note
that they can permute elements of a given block differently). We notice that
$\displaystyle\pi\sim\pi^{\prime}\implies{\mathbb{R}}^{d\times
d}_{S_{\mathcal{B}}}{\bm{P}}_{\pi}={\mathbb{R}}^{d\times
d}_{S_{\mathcal{B}}}{\bm{P}}_{\pi^{\prime}}\,.$ (83)
Let $\mathfrak{S}({\mathcal{B}})/\sim$ be the set of equivalence classes
induced by $\sim$ and let $\Pi$ stand for one such equivalence class. Thanks to
(83), we can define, for all $\Pi\in\mathfrak{S}({\mathcal{B}})/\sim$, the
following set:
$\displaystyle V_{\Pi}:={\mathbb{R}}^{d\times
d}_{S_{\mathcal{B}}}{\bm{P}}_{\pi}\setminus\\{\text{singular matrices}\\},\
\text{for some $\pi\in\Pi$}\,,$ (84)
where the specific choice of $\pi\in\Pi$ is arbitrary (any
$\pi^{\prime}\in\Pi$ would yield the same definition, by (83)). This
construction allows us to write
$\displaystyle{\mathcal{A}}=\bigcup_{\Pi\in\mathfrak{S}({\mathcal{B}})/\sim}V_{\Pi}\,.$
(85)
We now show that $\\{V_{\Pi}\\}_{\Pi\in\mathfrak{S}({\mathcal{B}})/\sim}$
forms a partition of ${\mathcal{A}}$. Choose two distinct equivalence classes
of permutations $\Pi$ and $\Pi^{\prime}$ and let $\pi\in\Pi$ and
$\pi^{\prime}\in\Pi^{\prime}$ be representatives. We note that
$\displaystyle{\mathbb{R}}^{d\times
d}_{S_{\mathcal{B}}}{\bm{P}}_{\pi}\cap{\mathbb{R}}^{d\times
d}_{S_{\mathcal{B}}}{\bm{P}}_{\pi^{\prime}}\subseteq\\{\text{singular
matrices}\\}\,,$ (86)
since any matrix that is both in ${\mathbb{R}}^{d\times
d}_{S_{\mathcal{B}}}{\bm{P}}_{\pi}$ and ${\mathbb{R}}^{d\times
d}_{S_{\mathcal{B}}}{\bm{P}}_{\pi^{\prime}}$ must have at least one row filled
with zeros. This implies that
$\displaystyle V_{\Pi}\cap V_{\Pi^{\prime}}=\emptyset\,,$ (87)
which shows that $\\{V_{\Pi}\\}_{\Pi\in\mathfrak{S}({\mathcal{B}})/\sim}$ is
indeed a partition of ${\mathcal{A}}$.
Each $V_{\Pi}$ is closed in ${\mathcal{A}}$ (with respect to the relative topology) since
$\displaystyle V_{\Pi}={\mathbb{R}}^{d\times
d}_{S_{\mathcal{B}}}{\bm{P}}_{\pi}\setminus\\{\text{singular
matrices}\\}={\mathcal{A}}\cap\underbrace{{\mathbb{R}}^{d\times
d}_{S_{\mathcal{B}}}{\bm{P}}_{\pi}}_{\text{closed in ${\mathbb{R}}^{d\times
d}$}}.$ (88)
Moreover, $V_{\Pi}$ is open in ${\mathcal{A}}$, since
$\displaystyle
V_{\Pi}={\mathcal{A}}\setminus\underbrace{\bigcup_{\Pi^{\prime}\not=\Pi}V_{\Pi^{\prime}}}_{\text{closed
in ${\mathcal{A}}$}}\,.$ (89)
Thus, for any $\Pi\in\mathfrak{S}({\mathcal{B}})/\sim$, the sets $V_{\Pi}$ and
$\bigcup_{\Pi^{\prime}\not=\Pi}V_{\Pi^{\prime}}$ form a separation (see [39,
Section 23]). Since ${\bm{M}}({\mathcal{C}})$ is a connected subset of
${\mathcal{A}}$, it must lie completely in $V_{\Pi}$ or
$\bigcup_{\Pi^{\prime}\not=\Pi}V_{\Pi^{\prime}}$, by [39, Lemma 23.2]. Since
this is true for all $\Pi$, it must follow that there exists a $\Pi^{*}$ such
that ${\bm{M}}({\mathcal{C}})\subseteq V_{\Pi^{*}}$, which completes the proof. ∎
See 2
###### Proof.
Step 1 - Showing the permutation $\pi$ does not change for different
${\bm{z}}$. Theorem 1 showed local ${\mathcal{B}}$-disentanglement, i.e. for
all ${\bm{z}}\in\hat{\mathcal{Z}}^{\textnormal{train}}$, $D{\bm{v}}({\bm{z}})$
has a ${\mathcal{B}}$-block permutation structure. The first step towards
showing global disentanglement is to show that this block structure is the
same for all ${\bm{z}}\in\hat{\mathcal{Z}}^{\textnormal{train}}$ (a priori,
$\pi$ could be different for different ${\bm{z}}$). Since ${\bm{v}}$ is
$C^{2}$, its Jacobian $D{\bm{v}}({\bm{z}})$ is continuous. Since
${\mathcal{Z}}^{\textnormal{train}}$ is path-connected,
$\hat{\mathcal{Z}}^{\textnormal{train}}$ must also be since both sets are
diffeomorphic. By Lemma 9, this means the ${\mathcal{B}}$-block permutation
structure of $D{\bm{v}}({\bm{z}})$ is the same for all
${\bm{z}}\in\hat{\mathcal{Z}}^{\textnormal{train}}$ (implicitly using the fact
that path-connected implies connected). In other words, for all
${\bm{z}}\in\hat{\mathcal{Z}}^{\textnormal{train}}$ and all distinct
$B,B^{\prime}\in{\mathcal{B}}$, $D_{B}{\bm{v}}_{\pi(B^{\prime})}({\bm{z}})=0$.
Step 2 - Linking object-specific decoders. We now show that, for all
$B\in{\mathcal{B}}$,
$\hat{\bm{f}}^{(B)}({\bm{z}}_{B})={\bm{f}}^{(\pi(B))}({\bm{v}}_{\pi(B)}({\bm{z}}))+{\bm{c}}^{(B)}$
for all ${\bm{z}}\in\hat{\mathcal{Z}}^{\textnormal{train}}$. To do this, we
rewrite (49) as
$\displaystyle
D\hat{\bm{f}}^{(J)}({\bm{z}}_{J})=\sum_{B\in{\mathcal{B}}}D{\bm{f}}^{(B)}({\bm{v}}_{B}({\bm{z}}))D_{J}{\bm{v}}_{B}({\bm{z}})\,,$
(90)
but because $B\not=\pi(J)\implies D_{J}{\bm{v}}_{B}({\bm{z}})=0$ (block-
permutation structure), we get
$\displaystyle
D\hat{\bm{f}}^{(J)}({\bm{z}}_{J})=D{\bm{f}}^{(\pi(J))}({\bm{v}}_{\pi(J)}({\bm{z}}))D_{J}{\bm{v}}_{\pi(J)}({\bm{z}})\,.$
(91)
The above holds for all $J\in{\mathcal{B}}$. Renaming the index $J$ to $B$, we
obtain
$\displaystyle
D\hat{\bm{f}}^{(B)}({\bm{z}}_{B})=D{\bm{f}}^{(\pi(B))}({\bm{v}}_{\pi(B)}({\bm{z}}))D_{B}{\bm{v}}_{\pi(B)}({\bm{z}})\,.$
(92)
Now notice that the r.h.s. of the above equation is equal to
$D({\bm{f}}^{(\pi(B))}\circ{\bm{v}}_{\pi(B)})$. We can thus write
$\displaystyle
D\hat{\bm{f}}^{(B)}({\bm{z}}_{B})=D({\bm{f}}^{(\pi(B))}\circ{\bm{v}}_{\pi(B)})({\bm{z}})\,,\text{for
all }{\bm{z}}\in\hat{\mathcal{Z}}^{\textnormal{train}}\,.$ (93)
Now choose distinct
${\bm{z}},{\bm{z}}^{0}\in\hat{\mathcal{Z}}^{\textnormal{train}}$. Since
${\mathcal{Z}}^{\textnormal{train}}$ is path-connected,
$\hat{\mathcal{Z}}^{\textnormal{train}}$ also is since they are diffeomorphic.
Hence, there exists a continuously differentiable function
$\bm{\phi}:[0,1]\rightarrow\hat{\mathcal{Z}}^{\textnormal{train}}$ such that
$\bm{\phi}(0)={\bm{z}}^{0}$ and $\bm{\phi}(1)={\bm{z}}$. We can now use (93)
together with the gradient theorem, a.k.a. the fundamental theorem of calculus
for line integrals, to show the following
$\displaystyle\int_{0}^{1}D\hat{\bm{f}}^{(B)}(\bm{\phi}_{B}(t))\cdot\bm{\phi}_{B}^{\prime}(t)dt$
$\displaystyle=\int_{0}^{1}D({\bm{f}}^{(\pi(B))}\circ{\bm{v}}_{\pi(B)})(\bm{\phi}(t))\cdot\bm{\phi}^{\prime}(t)dt$
(94)
$\displaystyle\hat{\bm{f}}^{(B)}({\bm{z}}_{B})-\hat{\bm{f}}^{(B)}({\bm{z}}_{B}^{0})$
$\displaystyle={\bm{f}}^{(\pi(B))}\circ{\bm{v}}_{\pi(B)}({\bm{z}})-{\bm{f}}^{(\pi(B))}\circ{\bm{v}}_{\pi(B)}({\bm{z}}^{0})$
(95) $\displaystyle\hat{\bm{f}}^{(B)}({\bm{z}}_{B})$
$\displaystyle={\bm{f}}^{(\pi(B))}\circ{\bm{v}}_{\pi(B)}({\bm{z}})+\underbrace{(\hat{\bm{f}}^{(B)}({\bm{z}}_{B}^{0})-{\bm{f}}^{(\pi(B))}\circ{\bm{v}}_{\pi(B)}({\bm{z}}^{0}))}_{\text{constant
in ${\bm{z}}$}}$ (96) $\displaystyle\hat{\bm{f}}^{(B)}({\bm{z}}_{B})$
$\displaystyle={\bm{f}}^{(\pi(B))}\circ{\bm{v}}_{\pi(B)}({\bm{z}})+{\bm{c}}^{(B)}\,,$
(97)
which holds for all ${\bm{z}}\in\hat{\mathcal{Z}}^{\textnormal{train}}$.
We now show that $\sum_{B\in{\mathcal{B}}}{\bm{c}}^{(B)}=0$. Take some
${\bm{z}}^{0}\in{\mathcal{Z}}^{\textnormal{train}}$. Equations (48) & (97)
tell us that
$\displaystyle\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(B)}({\bm{v}}_{B}({\bm{z}}^{0}))$
$\displaystyle=\sum_{B\in{\mathcal{B}}}\hat{\bm{f}}^{(B)}({\bm{z}}^{0}_{B})$
(98)
$\displaystyle=\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(\pi(B))}({\bm{v}}_{\pi(B)}({\bm{z}}^{0}))+\sum_{B\in{\mathcal{B}}}{\bm{c}}^{(B)}$
(99)
$\displaystyle=\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(B)}({\bm{v}}_{B}({\bm{z}}^{0}))+\sum_{B\in{\mathcal{B}}}{\bm{c}}^{(B)}$
(100) $\displaystyle\implies 0$
$\displaystyle=\sum_{B\in{\mathcal{B}}}{\bm{c}}^{(B)}$ (101)
Step 3 - From local to global disentanglement. By assumption, the functions
${\bm{f}}^{(B)}:{\mathcal{Z}}^{\textnormal{train}}_{B}\rightarrow{\mathbb{R}}^{d_{x}}$
are injective. This will allow us to show that ${\bm{v}}_{\pi(B)}({\bm{z}})$
depends only on ${\bm{z}}_{B}$. We proceed by contradiction. Suppose there
exists
$({\bm{z}}_{B},{\bm{z}}_{B^{c}})\in\hat{\mathcal{Z}}^{\textnormal{train}}$ and
${\bm{z}}^{0}_{B^{c}}$ such that
$({\bm{z}}_{B},{\bm{z}}^{0}_{B^{c}})\in\hat{\mathcal{Z}}^{\textnormal{train}}$
and
${\bm{v}}_{\pi(B)}({\bm{z}}_{B},{\bm{z}}_{B^{c}})\not={\bm{v}}_{\pi(B)}({\bm{z}}_{B},{\bm{z}}^{0}_{B^{c}})$.
This means
${\bm{f}}^{(\pi(B))}\circ{\bm{v}}_{\pi(B)}({\bm{z}}_{B},{\bm{z}}_{B^{c}})+{\bm{c}}^{(B)}=\hat{\bm{f}}^{(B)}({\bm{z}}_{B})={\bm{f}}^{(\pi(B))}\circ{\bm{v}}_{\pi(B)}({\bm{z}}_{B},{\bm{z}}^{0}_{B^{c}})+{\bm{c}}^{(B)}$
${\bm{f}}^{(\pi(B))}({\bm{v}}_{\pi(B)}({\bm{z}}_{B},{\bm{z}}_{B^{c}}))={\bm{f}}^{(\pi(B))}({\bm{v}}_{\pi(B)}({\bm{z}}_{B},{\bm{z}}^{0}_{B^{c}}))$
which is a contradiction with the fact that ${\bm{f}}^{(\pi(B))}$ is
injective. Hence, ${\bm{v}}_{\pi(B)}({\bm{z}})$ depends only on
${\bm{z}}_{B}$. We also get an explicit form for ${\bm{v}}_{\pi(B)}$:
$\displaystyle({\bm{f}}^{\pi(B)})^{-1}(\hat{\bm{f}}^{(B)}({\bm{z}}_{B})-{\bm{c}}^{(B)})$
$\displaystyle={\bm{v}}_{\pi(B)}({\bm{z}})\text{ for all
}{\bm{z}}\in\hat{\mathcal{Z}}^{\textnormal{train}}\,.$ (102)
We define the map
$\bar{\bm{v}}_{\pi(B)}({\bm{z}}_{B}):=({\bm{f}}^{\pi(B)})^{-1}(\hat{\bm{f}}^{(B)}({\bm{z}}_{B})-{\bm{c}}^{(B)})$
which is from $\hat{\mathcal{Z}}^{\textnormal{train}}_{B}$ to
${\mathcal{Z}}^{\textnormal{train}}_{\pi(B)}$. This allows us to rewrite (97)
as
$\displaystyle\hat{\bm{f}}^{(B)}({\bm{z}}_{B})$
$\displaystyle={\bm{f}}^{(\pi(B))}\circ\bar{\bm{v}}_{\pi(B)}({\bm{z}}_{B})+{\bm{c}}^{(B)}\,,\text{
for all }{\bm{z}}_{B}\in{\mathcal{Z}}^{\textnormal{train}}_{B}\,.$ (103)
Because $\hat{\bm{f}}^{(B)}$ is also injective, we must have that
$\bar{\bm{v}}_{\pi(B)}:\hat{\mathcal{Z}}^{\textnormal{train}}_{B}\rightarrow{\mathcal{Z}}^{\textnormal{train}}_{\pi(B)}$
is injective as well.
We now show that $\bar{\bm{v}}_{\pi(B)}$ is surjective. Choose some
${\bm{z}}_{\pi(B)}\in{\mathcal{Z}}^{\textnormal{train}}_{\pi(B)}$. We can
always find ${\bm{z}}_{\pi(B)^{c}}$ such that
$({\bm{z}}_{\pi(B)},{\bm{z}}_{\pi(B)^{c}})\in{\mathcal{Z}}^{\textnormal{train}}$.
Because
${\bm{v}}:\hat{\mathcal{Z}}^{\textnormal{train}}\rightarrow{\mathcal{Z}}^{\textnormal{train}}$
is surjective (it is a diffeomorphism), there exists a
${\bm{z}}^{0}\in\hat{\mathcal{Z}}^{\textnormal{train}}$ such that
${\bm{v}}({\bm{z}}^{0})=({\bm{z}}_{\pi(B)},{\bm{z}}_{\pi(B)^{c}})$. By (102),
we have that
$\displaystyle\bar{\bm{v}}_{\pi(B)}({\bm{z}}^{0}_{B})={\bm{v}}_{\pi(B)}({\bm{z}}^{0})\,,$
(104)
which means $\bar{\bm{v}}_{\pi(B)}({\bm{z}}^{0}_{B})={\bm{z}}_{\pi(B)}$.
We thus have that $\bar{\bm{v}}_{\pi(B)}$ is bijective. It is a diffeomorphism
because
$\displaystyle\det D\bar{\bm{v}}_{\pi(B)}({\bm{z}}_{B})=\det
D_{B}{\bm{v}}_{\pi(B)}({\bm{z}})\not=0\
\forall{\bm{z}}\in\hat{\mathcal{Z}}^{\textnormal{train}}$ (105)
where the first equality holds by (102) and the second holds because
${\bm{v}}$ is a diffeomorphism and has block-permutation structure, which
means it has a nonzero determinant everywhere on
$\hat{\mathcal{Z}}^{\textnormal{train}}$ and is equal to the product of the
determinants of its blocks, which implies each block $D_{B}{\bm{v}}_{\pi(B)}$
must have nonzero determinant everywhere.
Since
$\bar{\bm{v}}_{\pi(B)}:\hat{\mathcal{Z}}_{B}^{\textnormal{train}}\rightarrow{\mathcal{Z}}^{\textnormal{train}}_{\pi(B)}$
is bijective and has an invertible Jacobian everywhere, it must be a diffeomorphism.
∎
#### A.8 Injectivity of object-specific decoders vs. injectivity of their
sum
We want to explore the relationship between the injectivity of individual
object-specific decoders ${\bm{f}}^{(B)}$ and the injectivity of their sum,
i.e. $\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(B)}$.
We first show the simple fact that having each ${\bm{f}}^{(B)}$ injective is
not sufficient to have $\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(B)}$ injective.
Take ${\bm{f}}^{(B)}({\bm{z}}_{B})={\bm{W}}^{(B)}{\bm{z}}_{B}$ where
${\bm{W}}^{(B)}\in{\mathbb{R}}^{d_{x}\times|B|}$ has full column-rank for all
$B\in{\mathcal{B}}$. We have that
$\displaystyle\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(B)}({\bm{z}}_{B})=\sum_{B\in{\mathcal{B}}}{\bm{W}}^{(B)}{\bm{z}}_{B}=[{\bm{W}}^{(B_{1})}\
\cdots\ {\bm{W}}^{(B_{\ell})}]{\bm{z}}\,,$ (106)
where it is clear that the matrix $[{\bm{W}}^{(B_{1})}\ \cdots\
{\bm{W}}^{(B_{\ell})}]\in{\mathbb{R}}^{d_{x}\times d_{z}}$ is not necessarily
injective even if each ${\bm{W}}^{(B)}$ is. This is the case, for instance, if
all ${\bm{W}}^{(B)}$ have the same image.
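The counterexample in (106) can be made fully concrete. This sketch (illustrative, with blocks of size 1) uses two injective column matrices with the same image and exhibits an explicit collision of the sum:

```python
import numpy as np

# Two injective linear "decoders" with the same image (the x-axis in R^3).
W1 = np.array([[1.0], [0.0], [0.0]])   # full column rank -> injective
W2 = np.array([[2.0], [0.0], [0.0]])   # full column rank, same image

# The sum decoder z -> W1 z_1 + W2 z_2 equals [W1 W2] z, which has rank 1 < 2.
W = np.hstack([W1, W2])
assert np.linalg.matrix_rank(W1) == 1
assert np.linalg.matrix_rank(W2) == 1
assert np.linalg.matrix_rank(W) == 1   # sum decoder is NOT injective

# Explicit collision: (2, -1) and (0, 0) map to the same output.
z, z0 = np.array([2.0, -1.0]), np.array([0.0, 0.0])
assert np.allclose(W @ z, W @ z0)
```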
We now provide conditions such that $\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(B)}$
injective implies each ${\bm{f}}^{(B)}$ injective. We start with a simple
lemma:
###### Lemma 10.
If $g\circ h$ is injective, then $h$ is injective.
###### Proof.
By contradiction, assume that $h$ is not injective. Then, there exists
distinct $x_{1},x_{2}\in\text{Dom}(h)$ such that $h(x_{1})=h(x_{2})$. This
implies $g\circ h(x_{1})=g\circ h(x_{2})$, which violates injectivity of
$g\circ h$. ∎
The following Lemma provides a condition on the domain of the function
$\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(B)}$,
${\mathcal{Z}}^{\textnormal{train}}$, so that its injectivity implies
injectivity of the functions ${\bm{f}}^{(B)}$.
###### Lemma 11.
Assume that, for all $B\in{\mathcal{B}}$ and for all distinct
${\bm{z}}_{B},{\bm{z}}^{\prime}_{B}\in{\mathcal{Z}}^{\textnormal{train}}_{B}$,
there exists ${\bm{z}}_{B^{c}}$ such that
$({\bm{z}}_{B},{\bm{z}}_{B^{c}}),({\bm{z}}^{\prime}_{B},{\bm{z}}_{B^{c}})\in{\mathcal{Z}}^{\textnormal{train}}$.
Then, whenever $\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(B)}$ is injective, each
${\bm{f}}^{(B)}$ must be injective.
###### Proof.
Notice that
${\bm{f}}({\bm{z}}):=\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(B)}({\bm{z}}_{B})$ can
be written as ${\bm{f}}=\text{SumBlocks}\circ\bar{\bm{f}}$ where
$\displaystyle\bar{\bm{f}}({\bm{z}}):=\begin{bmatrix}{\bm{f}}^{(B_{1})}({\bm{z}}_{B_{1}})\\\
\vdots\\\ {\bm{f}}^{(B_{\ell})}({\bm{z}}_{B_{\ell}})\end{bmatrix}\,,\text{ and
}\text{SumBlocks}({\bm{x}}^{(B_{1})},\dots,{\bm{x}}^{(B_{\ell})}):=\sum_{B\in{\mathcal{B}}}{\bm{x}}^{(B)}$
(107)
Since ${\bm{f}}$ is injective, by Lemma 10 $\bar{\bm{f}}$ must be injective.
We now show that each ${\bm{f}}^{(B)}$ must also be injective. Take
${\bm{z}}_{B},{\bm{z}}^{\prime}_{B}\in{\mathcal{Z}}^{\textnormal{train}}_{B}$
such that
${\bm{f}}^{(B)}({\bm{z}}_{B})={\bm{f}}^{(B)}({\bm{z}}^{\prime}_{B})$. By
assumption, we know there exists a ${\bm{z}}_{B^{c}}$ s.t.
$({\bm{z}}_{B},{\bm{z}}_{B^{c}})$ and
$({\bm{z}}^{\prime}_{B},{\bm{z}}_{B^{c}})$ are in
${\mathcal{Z}}^{\textnormal{train}}$. By construction, we have that
$\bar{\bm{f}}(({\bm{z}}_{B},{\bm{z}}_{B^{c}}))=\bar{\bm{f}}(({\bm{z}}^{\prime}_{B},{\bm{z}}_{B^{c}}))$.
By injectivity of $\bar{\bm{f}}$, we have that
$({\bm{z}}_{B},{\bm{z}}_{B^{c}})=({\bm{z}}^{\prime}_{B},{\bm{z}}_{B^{c}})$,
which implies ${\bm{z}}_{B}={\bm{z}}^{\prime}_{B}$, i.e. ${\bm{f}}^{(B)}$
is injective. ∎
#### A.9 Proof of Corollary 3
See 3
###### Proof.
Pick ${\bm{z}}\in\textnormal{CPE}(\hat{\mathcal{Z}}^{\textnormal{train}})$. By
definition, this means that, for all $B\in{\mathcal{B}}$,
${\bm{z}}_{B}\in\hat{\mathcal{Z}}^{\textnormal{train}}_{B}$. We thus have
that, for all $B\in{\mathcal{B}}$,
$\displaystyle\hat{\bm{f}}^{(B)}({\bm{z}}_{B})$
$\displaystyle={\bm{f}}^{(\pi(B))}\circ\bar{\bm{v}}_{\pi(B)}({\bm{z}}_{B})+{\bm{c}}^{(B)}\,.$
(108)
We can thus sum over $B$ to obtain
$\displaystyle\sum_{B\in{\mathcal{B}}}\hat{\bm{f}}^{(B)}({\bm{z}}_{B})$
$\displaystyle=\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(\pi(B))}\circ\bar{\bm{v}}_{\pi(B)}({\bm{z}}_{B})+\underbrace{\sum_{B\in{\mathcal{B}}}{\bm{c}}^{(B)}}_{=0}\,.$
(109)
Since ${\bm{z}}\in\textnormal{CPE}(\hat{\mathcal{Z}}^{\textnormal{train}})$
was arbitrary, we have
$\displaystyle\text{for all
}{\bm{z}}\in\textnormal{CPE}(\hat{\mathcal{Z}}^{\textnormal{train}}),\
\sum_{B\in{\mathcal{B}}}\hat{\bm{f}}^{(B)}({\bm{z}}_{B})$
$\displaystyle=\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(\pi(B))}\circ\bar{\bm{v}}_{\pi(B)}({\bm{z}}_{B})$
(110)
$\displaystyle\sigma(\sum_{B\in{\mathcal{B}}}\hat{\bm{f}}^{(B)}({\bm{z}}_{B}))$
$\displaystyle=\sigma(\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(\pi(B))}\circ\bar{\bm{v}}_{\pi(B)}({\bm{z}}_{B}))$
(111)
$\displaystyle\hat{\bm{f}}({\bm{z}})={\bm{f}}\circ\bar{\bm{v}}({\bm{z}})\,,$
(112)
where
$\bar{\bm{v}}:\textnormal{CPE}_{\mathcal{B}}(\hat{\mathcal{Z}}^{\textnormal{train}})\rightarrow\textnormal{CPE}_{\mathcal{B}}({\mathcal{Z}}^{\textnormal{train}})$
is defined as
$\displaystyle\bar{\bm{v}}({\bm{z}}):=\begin{bmatrix}\bar{\bm{v}}_{B_{1}}({\bm{z}}_{\pi^{-1}(B_{1})})\\\
\vdots\\\
\bar{\bm{v}}_{B_{\ell}}({\bm{z}}_{\pi^{-1}(B_{\ell})})\end{bmatrix}\,.$ (113)
The map $\bar{\bm{v}}$ is a diffeomorphism since each $\bar{\bm{v}}_{\pi(B)}$
is a diffeomorphism from $\hat{\mathcal{Z}}^{\textnormal{train}}_{B}$ to
${\mathcal{Z}}^{\textnormal{train}}_{\pi(B)}$.
By (112) we get
$\displaystyle\hat{\bm{f}}(\textnormal{CPE}_{\mathcal{B}}(\hat{\mathcal{Z}}^{\textnormal{train}}))={\bm{f}}\circ\bar{\bm{v}}(\textnormal{CPE}_{\mathcal{B}}(\hat{\mathcal{Z}}^{\textnormal{train}}))\,,$
(114)
and since the map $\bar{\bm{v}}$ is surjective we have
$\bar{\bm{v}}(\textnormal{CPE}_{\mathcal{B}}(\hat{\mathcal{Z}}^{\textnormal{train}}))=\textnormal{CPE}_{\mathcal{B}}({\mathcal{Z}}^{\textnormal{train}})$
and thus
$\displaystyle\hat{\bm{f}}(\textnormal{CPE}_{\mathcal{B}}(\hat{\mathcal{Z}}^{\textnormal{train}}))={\bm{f}}(\textnormal{CPE}_{\mathcal{B}}({\mathcal{Z}}^{\textnormal{train}}))\,.$
(115)
Hence if
$\textnormal{CPE}_{\mathcal{B}}({\mathcal{Z}}^{\textnormal{train}})\subseteq{\mathcal{Z}}^{\textnormal{test}}$,
then
${\bm{f}}(\textnormal{CPE}_{\mathcal{B}}({\mathcal{Z}}^{\textnormal{train}}))\subseteq{\bm{f}}({\mathcal{Z}}^{\textnormal{test}})$.
∎
#### A.10 Will all extrapolated images make sense?
Here is a minimal example where the assumption
$\textnormal{CPE}_{\mathcal{B}}({\mathcal{Z}}^{\textnormal{train}})\subseteq{\mathcal{Z}}^{\textnormal{test}}$
is violated.
###### Example 8 (Violation of
$\textnormal{CPE}_{\mathcal{B}}({\mathcal{Z}}^{\textnormal{train}})\subseteq{\mathcal{Z}}^{\textnormal{test}}$).
Imagine ${\bm{z}}=({\bm{z}}_{1},{\bm{z}}_{2})$ where ${\bm{z}}_{1}$ and
${\bm{z}}_{2}$ are the $x$-positions of two distinct balls. It does not make
sense to have two balls occupying the same location in space and thus whenever
${\bm{z}}_{1}={\bm{z}}_{2}$ we have
$({\bm{z}}_{1},{\bm{z}}_{2})\not\in{\mathcal{Z}}^{\textnormal{test}}$. But if
$(1,2)$ and $(2,1)$ are both in ${\mathcal{Z}}^{\textnormal{train}}$, it
implies that $(1,1)$ and $(2,2)$ are in
$\textnormal{CPE}({\mathcal{Z}}^{\textnormal{train}})$, which is a violation
of
$\textnormal{CPE}_{\mathcal{B}}({\mathcal{Z}}^{\textnormal{train}})\subseteq{\mathcal{Z}}^{\textnormal{test}}$.
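A finite stand-in for this example makes the violation explicit. The following sketch (our own illustration with a two-point support) computes the Cartesian-product extension and checks that it escapes the test support:

```python
from itertools import product

# Finite stand-in for the training support of Example 8: two ball positions.
Z_train = {(1, 2), (2, 1)}

# Cartesian-product extension: recombine each coordinate's observed values.
Z1 = {z[0] for z in Z_train}
Z2 = {z[1] for z in Z_train}
CPE = set(product(Z1, Z2))

# The recombinations (1, 1) and (2, 2) put both balls at the same location,
# so they cannot lie in the test support even though they are in the extension.
assert (1, 1) in CPE and (2, 2) in CPE
Z_test = {(a, b) for (a, b) in CPE if a != b}
assert not CPE.issubset(Z_test)
```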
#### A.11 Additive decoders cannot model occlusion
We now explain why additive decoders cannot model occlusion. Occlusion occurs
when an object is partially hidden behind another one. Intuitively, the issue
is the following: Consider two images consisting of two objects, A and B (each
image shows both objects). In both images, the position of object A is the
same and in exactly one of the images, object B partially occludes object A.
Since the position of object $A$ did not change, its corresponding latent
block ${\bm{z}}_{A}$ is also unchanged between both images. However, the
pixels occupied by object A do change between both images because of
occlusion. The issue is that, because of additivity, ${\bm{z}}_{A}$ and
${\bm{z}}_{B}$ cannot interact to make some pixels that belonged to object A
“disappear” to be replaced by pixels of object B. In practice, object-centric
representation learning methods rely on a masking mechanism which allows
interactions between ${\bm{z}}_{A}$ and ${\bm{z}}_{B}$ (See Equation 1 in
Section 2). This highlights the importance of studying this class of decoders
in future work.
### Appendix B Experiments
#### B.1 Training Details
###### Loss Function.
We use the standard reconstruction objective of mean squared error loss
between the ground truth data and the reconstructed/generated data.
###### Hyperparameters.
For both the ScalarLatents and the BlockLatents dataset, we used the Adam
optimizer with the hyperparameters defined below. We also use early stopping
with patience 1000, i.e., the training stops if the reconstruction loss on the
validation dataset does not improve consecutively for 1000 epochs. Note that
we maintain consistent hyperparameters across both the Additive decoder and
the Non-Additive decoder method.
* •
Batch Size: $64$
* •
Learning Rate: $5\times 10^{-4}$
* •
Weight Decay: $5\times 10^{-4}$
* •
Total Epochs: $5000$
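For concreteness, here is a minimal sketch of the early-stopping rule described above, in plain Python; the model, optimizer, and data pipeline are omitted, and the small patience value in the usage line is only for illustration:

```python
def train(val_losses, patience=1000):
    """Iterate over per-epoch validation losses and stop once the loss has
    not improved for `patience` consecutive epochs. Returns the stop epoch."""
    best, epochs_since_best = float("inf"), 0
    for epoch, val_loss in enumerate(val_losses):
        if val_loss < best:
            best, epochs_since_best = val_loss, 0
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:
                return epoch  # early stop
    return len(val_losses) - 1  # ran all epochs

# With patience 3, a loss curve that plateaus triggers a stop at epoch 5.
stop = train([1.0, 0.9, 0.8, 0.8, 0.8, 0.8, 0.7], patience=3)
```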
###### Model Architecture.
We use the following architectures for Encoder and Decoder across both the
datasets (ScalarLatents, BlockLatents). Note that for the ScalarLatents
dataset we train with latent dimension $d_{z}=2$, and for the BlockLatents
dataset we train with latent dimension $d_{z}=4$, which corresponds to the
dimensionalities of the ground-truth data generating process for both
datasets.
Encoder Architecture:
* •
ResNet-18 architecture up to the penultimate layer ($512$-dimensional feature
output)
* •
Stack of 6 fully-connected layer blocks, with each block consisting of a Linear
layer (dimensions: $512\times 512$), a Batch Normalization layer, and a Leaky
ReLU activation (negative slope: $0.1$).
* •
Final Linear Layer (dimension: $512\times d_{z}$) followed by Batch
Normalization Layer to output the latent representation.
Decoder Architecture (Non-additive):
* •
Fully connected layer block with input as latent representation, consisting of
Linear Layer (dimension: $d_{z}\times 512$), Batch Normalization layer, and
Leaky ReLU activation (negative slope: $0.1$).
* •
Stack of 6 fully-connected layer blocks, with each block consisting of a Linear
layer (dimensions: $512\times 512$), a Batch Normalization layer, and a Leaky
ReLU activation (negative slope: $0.1$).
* •
Series of DeConvolutional layers, where each DeConvolutional layer is followed
by a Leaky ReLU (negative slope: $0.01$) activation.
* –
DeConvolution Layer ($c_{in}$: $64$, $c_{out}$: $64$, kernel: $4$; stride:
$2$; padding: $1$)
* –
DeConvolution Layer ($c_{in}$: $64$, $c_{out}$: $32$, kernel: $4$; stride:
$2$; padding: $1$)
* –
DeConvolution Layer ($c_{in}$: $32$, $c_{out}$: $32$, kernel: $4$; stride:
$2$; padding: $1$)
* –
DeConvolution Layer ($c_{in}$: $32$, $c_{out}$: $3$, kernel: $4$; stride: $2$;
padding: $1$)
Decoder Architecture (Additive): Recall that an additive decoder has the form
${\bm{f}}({\bm{z}})=\sum_{B\in{\mathcal{B}}}{\bm{f}}^{(B)}({\bm{z}}_{B})$.
Each ${\bm{f}}^{(B)}$ has the same architecture as the one presented above for
the non-additive case, but the input has dimensionality $|B|$ (which is 1 or
2, depending on the dataset). Note that we do not share parameters among the
functions ${\bm{f}}^{(B)}$.
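A minimal PyTorch sketch of this additive structure is given below. Toy MLP sub-decoders stand in for the deconvolutional decoder described above, and the class and parameter names are ours; the point is the sum over blocks with no parameter sharing:

```python
import torch
import torch.nn as nn

class AdditiveDecoder(nn.Module):
    """f(z) = sum_B f^(B)(z_B) over a block partition; no shared parameters."""
    def __init__(self, blocks, out_dim, width=64):
        super().__init__()
        self.blocks = blocks  # e.g. [[0, 1], [2, 3]] for the BlockLatents dataset
        self.subdecoders = nn.ModuleList(
            nn.Sequential(nn.Linear(len(B), width), nn.LeakyReLU(0.1),
                          nn.Linear(width, out_dim))
            for B in blocks
        )

    def forward(self, z):
        # each sub-decoder sees only its own latent block z_B
        return sum(f(z[:, B]) for B, f in zip(self.blocks, self.subdecoders))
```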
#### B.2 Datasets Details
We use the moving balls environment from Ahuja et al. [2] with images of
dimension $64\times 64\times 3$, with latent vector (${\bm{z}}$) representing
the position coordinates of each ball. We consider only two balls; hence,
${\bm{z}}\in{\mathbb{R}}^{4}$ with the partition
${\mathcal{B}}=\\{\\{1,2\\},\\{3,4\\}\\}$ where $({\bm{z}}_{1},{\bm{z}}_{2})$
and $({\bm{z}}_{3},{\bm{z}}_{4})$ correspond to the first and second ball,
respectively. The rendered images have pixels in the range [0, 255], which we
normalize with the mean and standard deviation per channel of the ImageNet
dataset.
###### ScalarLatents Dataset.
We fix the x-coordinate of each ball with separation between them along the
x-axis, i.e., ${\bm{z}}_{1}=0.25$ and ${\bm{z}}_{3}=0.75$. We sample the
y-coordinate of the first ball from a continuous uniform distribution as
follows: ${\bm{z}}_{2}\sim$ Uniform(0, 1). Then we sample the y-coordinate of
the second ball as per the following scheme:
${\bm{z}}_{4}\sim\begin{cases}\text{Uniform}(0,1)&\text{if}\;{\bm{z}}_{2}\leq
0.5\\\ \text{Uniform}(0,0.5)&\text{else}\end{cases}$
Since the x-coordinates (${\bm{z}}_{1},{\bm{z}}_{3}$) are fixed, we can ignore
them and treat only the y-coordinates (${\bm{z}}_{2},{\bm{z}}_{4}$) as the
effective latent variables. Hence, this leads to the L-shaped latent support,
i.e.,
${\mathcal{Z}}^{\textnormal{train}}:=[0,1]\times[0,1]\setminus[0.5,1]\times[0.5,1]$.
We use $50k$ samples for the test dataset, while we use $10k$ samples for the
train dataset along with $2.5k$ samples ($25\%$ of the train sample size) for
the validation dataset.
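The sampling scheme above can be sketched in a few lines of numpy (the function name is ours):

```python
import numpy as np

def sample_scalar_latents(n, seed=0):
    """Sample (z2, z4) on the L-shaped support described above."""
    rng = np.random.default_rng(seed)
    z2 = rng.uniform(0, 1, size=n)
    # z4 ~ U(0,1) where z2 <= 0.5, and z4 ~ U(0,0.5) otherwise
    z4 = np.where(z2 <= 0.5, rng.uniform(0, 1, n), rng.uniform(0, 0.5, n))
    return z2, z4
```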
###### BlockLatents Dataset.
For this dataset, we allow the balls to move in both the x and y directions,
but we reject images that exhibit occlusion, i.e., when one ball hides the
other. For the case of independent latents, we sample each latent component
independently and identically from a uniform distribution over $(0,1)$, i.e.,
$z_{i}\sim$ Uniform(0, 1). (Note that, in the independent-latents case, the
latents are not exactly independent, because the rejection step prevents
occlusion.)
For the case of dependent latents, we sample the latents of the first ball
from the same continuous uniform distribution, i.e., $z_{1},z_{2}\sim$
Uniform(0, 1). However, the latents of the second ball are a function of the
latents of the first ball, as follows:
${\bm{z}}_{3}\sim\begin{cases}\text{Uniform}(0,0.5)&\text{if}\;1.25\times({\bm{z}}_{1}^{2}+{\bm{z}}_{2}^{2})\geq
1.0\\\
\text{Uniform}(0.5,1)&\text{if}\;1.25\times({\bm{z}}_{1}^{2}+{\bm{z}}_{2}^{2})<1.0\end{cases}$
${\bm{z}}_{4}\sim\begin{cases}\text{Uniform}(0.5,1)&\text{if}\;1.25\times({\bm{z}}_{1}^{2}+{\bm{z}}_{2}^{2})\geq
1.0\\\
\text{Uniform}(0,0.5)&\text{if}\;1.25\times({\bm{z}}_{1}^{2}+{\bm{z}}_{2}^{2})<1.0\end{cases}$
Intuitively, this means the second ball will be placed in either the top-left
or the bottom-right quadrant based on the position of the first ball.
Finally, as mentioned above, for both the independent- and dependent-latents
cases, we perform rejection sampling to remove data points where the two balls
collide, which leads to occlusion. This is done by a simple comparison between
the distance of the ball centers and the ball diameter.
Note that our dependent BlockLatents setup is the same as the nonlinear SCM
case from Ahuja et al. [3].
We use $50k$ samples for both the train and the test dataset, along with
$12.5k$ samples ($25\%$ of the train sample size) for the validation dataset.
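The dependent-latents sampler with the occlusion-rejection step can be sketched as follows (the ball radius is a hypothetical rendering parameter, not taken from the text):

```python
import numpy as np

def sample_block_latents(n, radius=0.1, seed=0):
    """Dependent BlockLatents sampler with occlusion rejection."""
    rng = np.random.default_rng(seed)
    out = []
    while len(out) < n:
        z1, z2 = rng.uniform(0, 1, 2)
        far = 1.25 * (z1**2 + z2**2) >= 1.0
        # second ball goes to the top-left or bottom-right quadrant
        z3 = rng.uniform(0, 0.5) if far else rng.uniform(0.5, 1)
        z4 = rng.uniform(0.5, 1) if far else rng.uniform(0, 0.5)
        # reject if the center distance is below the diameter (occlusion)
        if np.hypot(z1 - z3, z2 - z4) >= 2 * radius:
            out.append((z1, z2, z3, z4))
    return np.array(out)
```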
###### Disconnected Support Dataset.
For this dataset, the setup is similar to the ScalarLatents dataset; we fix
the x-coordinates of both the balls (${\bm{z}}_{1}=0.25$, ${\bm{z}}_{3}=0.75$)
and only vary the y-coordinates (${\bm{z}}_{2},{\bm{z}}_{4}$) of each ball. We
sample the y-coordinate of the first ball (${\bm{z}}_{2}$) from a continuous
uniform distribution as follows: ${\bm{z}}_{2}\sim$ Uniform(0, 1). Then we
sample the y-coordinate of the second ball (${\bm{z}}_{4}$) from one of the
following continuous uniform distributions with equal probability: Uniform(0,
0.25) or Uniform(0.75, 1). Sampling ${\bm{z}}_{4}$ from distributions with no
support overlap leads to disconnected regions in the latent support, i.e.,
${\mathcal{Z}}^{\textnormal{train}}:=[0,1]\times\big([0,0.25]\cup[0.75,1]\big)$.
We use $50k$ samples for the test dataset, while we use $10k$ samples for the
train dataset along with $2.5k$ samples ($25\%$ of the train sample size) for
the validation dataset.
#### B.3 Evaluation Metrics
Recall that, to evaluate disentanglement, we compute a matrix of scores
$(s_{B,B^{\prime}})\in{\mathbb{R}}^{\ell\times\ell}$ where $\ell$ is the
number of blocks in ${\mathcal{B}}$ and $s_{B,B^{\prime}}$ is a score
measuring how well we can predict the ground-truth block ${\bm{z}}_{B}$ from
the learned latent block
$\hat{\bm{z}}_{B^{\prime}}=\hat{\bm{g}}_{B^{\prime}}({\bm{x}})$ outputted by
the encoder. The final Latent Matching Score (LMS) is computed as
$\textnormal{LMS}=\max_{\pi\in\mathfrak{S}_{\mathcal{B}}}\frac{1}{\ell}\sum_{B\in{\mathcal{B}}}s_{B,\pi(B)}$,
where $\mathfrak{S}_{\mathcal{B}}$ is the set of permutations respecting
${\mathcal{B}}$ (Definition 2). These scores are always computed on the test
set.
###### Metric $\text{LMS}_{\text{Spear}}$:
As mentioned in the main paper, this metric is used for the ScalarLatents
dataset where each block is 1-dimensional. Hence, this metric is almost the
same as the mean correlation coefficient (MCC), which is widely used in the
nonlinear ICA literature [21, 22, 23, 25, 30], with the only difference that
we use the Spearman correlation instead of the Pearson correlation as the
score $s_{B,B^{\prime}}$. The Spearman correlation can capture nonlinear
monotonic relations, unlike the Pearson correlation, which captures only
linear dependencies. We favor Spearman over Pearson because our
identifiability result (Theorem 2) guarantees recovery of the latents only up
to permutation and element-wise invertible transformations, which can be
nonlinear.
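For 1-dimensional blocks, $\text{LMS}_{\text{Spear}}$ can be sketched with numpy only (Spearman computed as the Pearson correlation of ranks, assuming no ties; function names are ours):

```python
import numpy as np
from itertools import permutations

def spearman(a, b):
    """Spearman correlation as the Pearson correlation of ranks (no ties)."""
    ranks = lambda v: np.argsort(np.argsort(v))
    return np.corrcoef(ranks(a), ranks(b))[0, 1]

def lms_spear(z_true, z_hat):
    """Score matrix of |Spearman| values, then the best block permutation."""
    ell = z_true.shape[1]
    s = np.array([[abs(spearman(z_true[:, i], z_hat[:, j]))
                   for j in range(ell)] for i in range(ell)])
    return max(np.mean([s[i, pi[i]] for i in range(ell)])
               for pi in permutations(range(ell)))
```
A permuted, element-wise monotonically transformed copy of the latents scores 1, exactly the equivalence class allowed by the identifiability result.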
# Generalized Eigenvalue Based Detection of Signals in Colored Noise: A Sample
Deficient Analysis
Prathapasinghe Dharmawansa1, Saman Atapattu2, Jamie Evans3, and Kandeepan
Sithamparanathan2
Email: <EMAIL_ADDRESS>; {saman.atapattu, <EMAIL_ADDRESS>;
<EMAIL_ADDRESS>
1Department of Electronic and Telecomm. Engineering, University of Moratuwa,
Moratuwa, Sri Lanka
2School of Engineering, RMIT University, Melbourne, Victoria, Australia
3Department of Electrical and Electronic Engineering, University of Melbourne,
Victoria, Australia
###### Abstract
This paper investigates the signal detection problem in colored noise with an
unknown covariance matrix. To be specific, we consider a scenario in which the
number of signal bearing samples ($n$) is strictly smaller than the
dimensionality of the signal space ($m$). Our test statistic is the leading
generalized eigenvalue of the whitened sample covariance matrix (a.k.a. the
$F$-matrix), which is constructed by whitening the signal-bearing sample
covariance matrix with the noise-only sample covariance matrix. The sample
deficiency (i.e., $m>n$) in turn makes this $F$-matrix rank deficient and thereby
singular. Therefore, an exact statistical characterization of the leading
generalized eigenvalue (l.g.e.) of a singular $F$-matrix is of paramount
importance to assess the performance of the detector (i.e., the receiver
operating characteristics (ROC)). To this end, we employ the powerful
orthogonal polynomial approach to derive a new finite dimensional c.d.f.
expression for the l.g.e. of a singular $F$-matrix. It turns out that, when
the noise-only sample covariance matrix is nearly rank deficient and the
signal-to-noise ratio is $O(m)$, the ROC profile converges to a limit.
###### Index Terms:
Colored noise, Detection, Eigenvalues, $F$-matrix, orthogonal polynomials,
Random matrix, Receiver operating characteristics (ROC), singular Wishart
matrix, Stiefel manifold
## I Introduction
The detection of signals embedded in noise is a fundamental problem with
numerous applications in various scientific disciplines [1, 2, 3, 4, 5]. In
this respect, the test statistic based on the leading sample eigenvalue of the
sample covariance matrix (a.k.a. Roy’s largest root) has been popular among
detection theorists [3, 4, 5, 6, 7, 8]. In its most basic form with the
additive white Gaussian noise assumption, this amounts to statistically
characterizing the largest eigenvalue of a Wishart matrix having a spiked
covariance structure; see, e.g., [9, 10, 5, 6, 11] and the references therein.
The white Gaussian noise assumption, though very common in the classical
setting, may not hold in certain practical scenarios [12, 13, 14, 15]. In such
situations, the generalized eigenvalues of the so-called whitened signal-plus-
noise sample covariance matrix (a.k.a. $F$-matrix) has been employed [2, 6, 4,
5]. To be specific, the whitening operation requires to have two sample
covariance matrices: noise only and signal-plus-noise [2, 4, 5, 6]. The noise-
only sample covariance matrix can easily be formed in many practical scenarios
as delineated in [2]. In this regard, one has to make sure that the number of
noise only samples $p$ is greater than or equal to the dimensionality of the
system $m$ so that the noise-only sample covariance matrix is invertible. As
for the number of signal-plus-noise samples $n$, it is common to make the
assumption that $n\geq m$. However, $n<m$ scenario (i.e., sample deficiency)
is increasingly common in modern applications (e.g., state-of-the-art radar
and sonar systems [1]). Under this setting, the signal-plus-noise sample
covariance matrix becomes rank deficient (i.e., singular) [16, 17, 18, 19,
20]. This in turn makes the whitened signal-plus-noise sample covariance
matrix also singular.
The fundamental high dimensional, high signal-to-noise-ratio (SNR), and finite
dimensional characteristics of the largest generalized sample eigenvalue based
detection in colored noise for $n\geq m$ have been thoroughly investigated in
[2], [7], and [4], respectively. Nevertheless, to the best of our knowledge, a
tractable finite dimensional analysis for $n<m$ (i.e., sample deficient)
scenario is not available in the literature. Thus, in this paper, we focus on
this sample deficient regime.
Under the Gaussian assumption with $n<m$, the largest generalized sample
eigenvalue based detection in colored noise amounts to a finite dimensional
characterization of the largest eigenvalue of a correlated complex singular
$F$-matrix. The joint eigenvalue density of the uncorrelated real singular
$F$-matrix has been derived in [16]. The joint eigenvalue density of the
complex correlated singular $F$-matrix, which contains the so-called
heterogeneous hypergeometric function of two matrix arguments, has been
reported in [21]. An expression involving the heterogeneous hypergeometric
function of one matrix argument for the largest generalized eigenvalue has
also been derived therein.
However, the algebraic complexity of these hypergeometric functions in turn
makes them less amenable to further analysis. Therefore, in this paper,
capitalizing on the powerful contour-integral approach due to [22], we present
simple and tractable closed-form solutions for the joint eigenvalue density and
the cumulative distribution function (c.d.f.) of the maximum generalized
eigenvalue of the complex correlated singular $F$-matrix when the underlying
covariance matrix assumes a single spiked structure. This new c.d.f.
expression further facilitates the analysis of the receiver operating
characteristics (ROC) of the largest root test.
The key results developed in this paper shed some light on the impact of the
system dimension ($m$), the number of signal-plus-noise samples ($n$) and
noise-only observations ($p$), and the SNR ($\gamma$) on the ROC. For
instance, the relative disparity between $m$ and $n$ degrades the ROC profile
for fixed values of the other parameters. However, when $\gamma=O(m)$ and
$p=m$ (i.e., when the noise-only sample covariance matrix is nearly rank
deficient), the ROC profile converges to a limit as $m\to\infty$.
The following notation is used throughout this paper. A complex Gaussian
random variable $X$ with zero mean and variance $\sigma^{2}$ is denoted as
$X\sim\mathcal{CN}(0,\sigma^{2})$. The superscript $(\cdot)^{\dagger}$
indicates the Hermitian transpose, $\text{det}(\cdot)$ denotes the determinant
of a square matrix, $\text{tr}(\cdot)$ represents the trace of a square
matrix, and $\text{etr}(\cdot)$ stands for
$\exp\left(\text{tr}(\cdot)\right)$. The $n\times n$ identity matrix is
represented by $\mathbf{I}_{n}$ and the Euclidean norm of a vector
$\mathbf{w}$ is denoted by $||\mathbf{w}||$. The symmetric positive definite
square root of a symmetric positive definite matrix $\mathbf{B}$ is denoted by
$\mathbf{B}^{1/2}$. A diagonal matrix with the diagonal entries
$a_{1},a_{2},\ldots,a_{n}$ is denoted by
$\text{diag}(a_{1},a_{2},\ldots,a_{n})$. We denote the $m\times m$ unitary
group by $\mathcal{U}_{m}$, whereas the set of all $m\times n$ ($m>n$) complex
matrices $\mathbf{U}_{1}$ such that
$\mathbf{U}_{1}^{\dagger}\mathbf{U}_{1}=\mathbf{I}_{n}$ (i.e., with
orthonormal columns), denoted by $\mathcal{V}_{n,m}$, is known as the complex
Stiefel manifold. Finally, we use the following notation to compactly
represent the determinant of an $n\times n$ block matrix:
$\begin{split}\det\left[a_{i}\;\;b_{i,j}\right]_{\begin{subarray}{c}i=1,2,\ldots,n\\\
j=2,3,\ldots,n\end{subarray}}&=\left|\begin{array}[]{ccccc}a_{1}&b_{1,2}&b_{1,3}&\ldots&b_{1,n}\\\
\vdots&\vdots&\vdots&\ddots&\vdots\\\
a_{n}&b_{n,2}&b_{n,3}&\ldots&b_{n,n}\end{array}\right|.\end{split}$
## II Problem formulation
Consider the following signal detection problem in colored Gaussian noise:
$\mathbf{x}=\sqrt{\rho}\mathbf{h}s+\mathbf{n}$ where
$\mathbf{x}\in\mathbb{C}^{m}$, $\mathbf{h}\in\mathbb{C}^{m}$ is an unknown
non-random vector, $\rho\geq 0$, $s\sim\mathcal{CN}(0,1)$ is the signal, and
$\mathbf{n}\sim\mathcal{CN}_{m}(\mathbf{0},\boldsymbol{\Sigma})$ denotes the
colored noise which is independent of $s$. Moreover, the noise covariance
matrix $\boldsymbol{\Sigma}$ is unknown at the detector. Now the classical
signal detection problem reduces to the following hypothesis testing problem
$\displaystyle\mathcal{H}_{0}:\;\rho=0\;\;\;\;\;\;\text{Signal is absent}$
$\displaystyle\mathcal{H}_{1}:\;\rho>0\;\;\;\;\;\text{Signal is present}.$
Noting that the covariance matrix of $\mathbf{x}$ assumes two different
structures under the two hypotheses, the above testing problem can be written
in terms of covariance matrices as
$\displaystyle\begin{array}[]{ll}\mathcal{H}_{0}:\;\boldsymbol{\Sigma}_{n}=\boldsymbol{\Sigma}&\text{Signal
is absent}\\\
\mathcal{H}_{1}:\;\boldsymbol{\Sigma}_{s}=\rho\mathbf{h}\mathbf{h}^{\dagger}+\boldsymbol{\Sigma}&\text{Signal
is present}\end{array}$
where $(\cdot)^{\dagger}$ denotes the conjugate transpose. Let us now consider
the symmetric matrix
$\boldsymbol{\Theta}=\boldsymbol{\Sigma}_{n}^{-1/2}\boldsymbol{\Sigma}_{s}\boldsymbol{\Sigma}_{n}^{-1/2}=\rho\boldsymbol{\Sigma}^{-1/2}\mathbf{h}\mathbf{h}^{\dagger}\boldsymbol{\Sigma}^{-1/2}+\mathbf{I}_{m}$
with the generalized eigenvalues
$\lambda_{1}\leq\lambda_{2}\leq\ldots\leq\lambda_{m}$. Since
$\mathbf{hh}^{\dagger}$ is a rank-$1$ matrix, we readily obtain
$\lambda_{m}=1+\rho\mathbf{h}^{\dagger}\boldsymbol{\Sigma}^{-1}\mathbf{h}>1$,
whereas $\lambda_{1}=\lambda_{2}=\ldots=\lambda_{m-1}=1$. This discrimination
power of $\lambda_{m}$ indicates its utility as a test statistic in the above
hypothesis testing problem [2, 5, 6, 7, 4].
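A quick numeric check of this discrimination property (a sketch using real matrices for simplicity, with the signal strength $\rho$ written explicitly; the paper works with complex quantities):

```python
import numpy as np

rng = np.random.default_rng(1)
m, rho = 5, 2.0
A = rng.standard_normal((m, m))
Sigma = A @ A.T + m * np.eye(m)     # a positive definite noise covariance
h = rng.standard_normal(m)

# Theta = Sigma^{-1/2} (rho h h' + Sigma) Sigma^{-1/2} = I + rho u u',
# where u = Sigma^{-1/2} h.
w, V = np.linalg.eigh(Sigma)
u = V @ np.diag(w ** -0.5) @ V.T @ h
Theta = np.eye(m) + rho * np.outer(u, u)
eigs = np.sort(np.linalg.eigvalsh(Theta))
```

All but one eigenvalue equal one, and the largest equals $1+\rho\,\mathbf{h}^{\dagger}\boldsymbol{\Sigma}^{-1}\mathbf{h}$.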
In most practical scenarios, the covariance matrices $\boldsymbol{\Sigma}_{n}$
and $\boldsymbol{\Sigma}_{s}$ are unknown so that the above procedure cannot
be trivially applied. To circumvent this difficulty, the covariance matrices
$\boldsymbol{\Sigma}_{n}$ and $\boldsymbol{\Sigma}_{s}$ are commonly replaced
by their sample estimates. To be precise, let us assume that we have $n\geq 1$
i.i.d. sample observations from the signal-plus-noise scenario, given by
$\\{\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{n}\\}$, and $p>1$ i.i.d.
sample observations from the noise-only scenario, given by
$\\{\mathbf{n}_{1},\mathbf{n}_{2},\ldots,\mathbf{n}_{p}\\}$. Consequently, the
sample estimates of $\boldsymbol{\Sigma}_{n}$ and $\boldsymbol{\Sigma}_{s}$
become
$\displaystyle\widehat{\boldsymbol{\Sigma}}_{n}=\frac{1}{p}\sum_{\ell=1}^{p}\mathbf{n}_{\ell}\mathbf{n}_{\ell}^{\dagger}\quad\text{and}\quad\widehat{\boldsymbol{\Sigma}}_{s}=\frac{1}{n}\sum_{k=1}^{n}\mathbf{x}_{k}\mathbf{x}_{k}^{\dagger}.$
(1)
Here we assume that the number of noise-only samples is at least the
dimensionality of the system (i.e., $p\geq m$), whereas the number of
available signal-plus-noise samples is strictly smaller than the
dimensionality of the system (i.e., $m>n$). This assumption makes the
estimated covariance matrix $\widehat{\boldsymbol{\Sigma}}_{s}$ rank deficient
(i.e., of rank at most $n$) and therefore singular. Consequently, following
[2, 5, 6, 7], we form the singular matrix
$\displaystyle\widehat{\boldsymbol{\Theta}}=\widehat{\boldsymbol{\Sigma}}_{n}^{-1/2}\widehat{\boldsymbol{\Sigma}}_{s}\widehat{\boldsymbol{\Sigma}}_{n}^{-1/2}$
(2)
and investigate its maximum eigenvalue as the test statistic. To be precise,
we have
$p\widehat{\boldsymbol{\Sigma}}_{n}\sim\mathcal{CW}_{m}\left(p,\boldsymbol{\Sigma}\right)$
and most importantly, $n\widehat{\boldsymbol{\Sigma}}_{s}$ assumes a singular
Wishart density (i.e., due to $m>n$) given by
$n\widehat{\boldsymbol{\Sigma}}_{s}\sim\mathcal{CW}_{m}\left(n,\boldsymbol{\Sigma}+\rho\mathbf{h}\mathbf{h}^{\dagger}\right)$.
Keeping in mind that the eigenvalues of $\widehat{\boldsymbol{\Theta}}$ do not
change under the simultaneous transformations
$\widehat{\boldsymbol{\Sigma}}_{n}\mapsto\boldsymbol{\Sigma}^{-1/2}\widehat{\boldsymbol{\Sigma}}_{n}\boldsymbol{\Sigma}^{-1/2}$,
and
$\widehat{\boldsymbol{\Sigma}}_{s}\mapsto\boldsymbol{\Sigma}^{-1/2}\widehat{\boldsymbol{\Sigma}}_{s}\boldsymbol{\Sigma}^{-1/2}$,
without loss of generality we assume that
$\boldsymbol{\Sigma}=\sigma^{2}\mathbf{I}_{m}$. Consequently, in what follows,
we statistically characterize the maximum eigenvalue of
$\widehat{\boldsymbol{\Theta}}$ for
$\displaystyle
p\widehat{\boldsymbol{\Sigma}}_{n}\sim\mathcal{CW}_{m}\left(p,\mathbf{I}_{m}\right)$
(3) $\displaystyle
n\widehat{\boldsymbol{\Sigma}}_{s}\sim\mathcal{CW}_{m}\left(n,\mathbf{I}_{m}+\gamma\mathbf{s}\mathbf{s}^{\dagger}\right)$
(4)
where $\gamma=\rho||\mathbf{h}||^{2}/\sigma^{2}$ and
$\mathbf{s}=\mathbf{h}/||\mathbf{h}||$ denotes a unit vector.
For future use, let us denote the maximum eigenvalue of
$\widehat{\boldsymbol{\Theta}}$ by $\hat{\lambda}_{\max}$. Now, to facilitate
the assessment of the performance of the maximum-eigenvalue-based detector, we
need to evaluate the detection probability (also known as the power of the
test) and the false alarm probability. They may be expressed as
$\displaystyle
P_{D}(\gamma,\lambda_{\text{th}})=\Pr\left(\hat{\lambda}_{\max}>\lambda_{\text{th}}|\mathcal{H}_{1}\right)$
(5) $\displaystyle
P_{F}(\gamma,\lambda_{\text{th}})=\Pr\left(\hat{\lambda}_{\max}>\lambda_{\text{th}}|\mathcal{H}_{0}\right)$
(6)
where $\lambda_{\text{th}}$ is the threshold. The pair $(P_{D},P_{F})$
characterizes the detector and is referred to as the ROC profile.
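Before the exact analysis, the ROC can be estimated by Monte Carlo simulation. The sketch below uses real Gaussian data and illustrative dimensions (the paper's analysis is for complex Gaussians); `gamma` plays the role of the SNR $\gamma$:

```python
import numpy as np

def lmax_samples(m, n, p, gamma, trials, rng):
    """Draw `trials` values of the leading eigenvalue of Theta_hat."""
    s = np.zeros(m); s[0] = 1.0                      # unit signal direction
    L = np.linalg.cholesky(np.eye(m) + gamma * np.outer(s, s))
    out = np.empty(trials)
    for t in range(trials):
        X = L @ rng.standard_normal((m, n))          # signal-plus-noise samples
        N = rng.standard_normal((m, p))              # noise-only samples
        Sn, Ss = N @ N.T / p, X @ X.T / n
        # eigenvalues of Sn^{-1/2} Ss Sn^{-1/2} equal those of Sn^{-1} Ss
        out[t] = np.max(np.real(np.linalg.eigvals(np.linalg.solve(Sn, Ss))))
    return out

rng = np.random.default_rng(0)
h0 = lmax_samples(m=9, n=5, p=13, gamma=0.0, trials=400, rng=rng)    # H0
h1 = lmax_samples(m=9, n=5, p=13, gamma=20.0, trials=400, rng=rng)   # H1
lam_th = np.quantile(h0, 0.9)     # threshold giving P_F of roughly 0.1
P_D = np.mean(h1 > lam_th)
```

Sweeping the threshold over the empirical $H_0$ distribution traces out the ROC profile.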
The main technical challenge here is to statistically characterize the maximum
eigenvalue of the singular matrix $\widehat{\boldsymbol{\Theta}}$, under the
alternative $\mathcal{H}_{1}$, in terms of simple algebraic functions. To this
end, capitalizing on the powerful orthogonal polynomial techniques due to
Mehta [23], we obtain an exact closed-form solution for the c.d.f. of the
maximum eigenvalue.
$\displaystyle
g(x_{1},\ldots,x_{n})=\displaystyle\sum_{k=1}^{n}\frac{1}{\displaystyle\prod_{\begin{subarray}{c}\ell=2\\\
\ell\neq
k\end{subarray}}^{n}\left(x_{k}-x_{\ell}\right)}\left[\frac{\Gamma(n+p-m+1)}{c_{\eta}^{m-1}x_{k}^{m-n}\left(1-c_{\eta}x_{k}\right)^{n+p-m+1}}-\sum_{j=0}^{m-n-1}\frac{\Gamma(p-j)}{\Gamma(m-n-j)c_{\eta}^{n+j}x_{k}^{j+1}}\right].$
(13)
## III C.D.F. of the Maximum Eigenvalue
Here we develop some fundamental results pertaining to the representation of
the joint eigenvalue density of a correlated singular $F$-matrix and the
c.d.f. of its dominant eigenvalue. To this end, we require some preliminary
results given below.
### III-A Preliminaries
Let $\mathbf{A}\sim\mathcal{W}_{m}\left(n,\boldsymbol{\Sigma}\right)$ and
$\mathbf{B}\sim\mathcal{W}_{m}\left(p,\mathbf{I}_{m}\right)$ be two
independent Wishart matrices with $p\geq m>n$. Then the matrix $\mathbf{A}$ is
said to follow a singular Wishart distribution. As such, the density of $\mathbf{A}$
is defined on the space of $m\times m$ Hermitian positive semi-definite
matrices of rank $n$ [19, 20]. Now the matrix
$\mathbf{F}=\mathbf{B}^{-1/2}\mathbf{A}\mathbf{B}^{-1/2}\in\mathbb{C}^{m\times
m}$ follows a singular $F$-distribution [21]. Therefore, $\mathbf{F}$ assumes
the eigen-decomposition
$\mathbf{F}=\mathbf{U}_{1}\boldsymbol{\Lambda}\mathbf{U}_{1}^{\dagger}$, where
$\mathbf{U}_{1}\in\mathcal{V}_{n,m}$ and
$\boldsymbol{\Lambda}=\text{diag}\left(\lambda_{1},\lambda_{2},\ldots,\lambda_{n}\right)$
contains the non-zero eigenvalues of $\mathbf{F}$, ordered such that
$0<\lambda_{1}<\lambda_{2}<\ldots<\lambda_{n}<\infty$.
###### Definition 1
The joint density of the ordered eigenvalues
$0<\lambda_{1}<\lambda_{2}<\ldots<\lambda_{n}<\infty$ of the singular matrix
$\mathbf{F}$ is given by [21]
$\displaystyle f(\lambda_{1},\cdots,\lambda_{n})$
$\displaystyle=\frac{\mathcal{K}_{1}(m,n,p)}{\text{det}^{n}\left[\boldsymbol{\Sigma}\right]}\prod_{j=1}^{n}\lambda_{j}^{m-n}\Delta_{n}^{2}(\boldsymbol{\lambda})$
$\displaystyle\hskip
5.69054pt\times\int_{\mathcal{V}_{n,m}}\frac{\left(\mathbf{U}_{1}^{\dagger}{\rm
d}\mathbf{U}_{1}\right)}{\det^{(n+p)}{\left[\mathbf{I}_{m}+\boldsymbol{\Sigma}^{-1}\mathbf{U}_{1}\boldsymbol{\Lambda}\mathbf{U}_{1}^{\dagger}\right]}}$
(7)
where $\left(\mathbf{U}_{1}^{\dagger}{\rm d}\mathbf{U}_{1}\right)$ denotes the
exterior differential form representing the uniform measure on the complex
Stiefel manifold [19, 20], $\Delta_{n}(\boldsymbol{\lambda})=\prod_{1\leq
i<j\leq n}\left(\lambda_{j}-\lambda_{i}\right)$ is the Vandermonde
determinant, and
$\mathcal{K}_{1}(m,n,p)=\frac{\pi^{n(n-m-1)}\widetilde{\Gamma}_{m}(n+p)}{2^{n}\widetilde{\Gamma}_{m}(p)\widetilde{\Gamma}_{n}(n)}$
where the complex multivariate gamma function is written in terms of the
classical gamma function $\Gamma(\cdot)$ as
$\widetilde{\Gamma}_{m}(z)=\pi^{\frac{1}{2}m(m-1)}\prod_{j=1}^{m}\Gamma\left(z-j+1\right),\;\Re{\left\\{z\right\\}}>m-1$.
###### Definition 2
Jacobi polynomials can be defined as follows [24, eq. 5.112]:
$P_{n}^{(a,b)}(x)=\sum_{k=0}^{n}\binom{n+a}{n-k}\binom{n+k+a+b}{k}\left(\frac{x-1}{2}\right)^{k}\hskip
17.07164pt$ (8)
where $a,b>-1$, $\binom{n}{k}=\frac{n!}{(n-k)!k!}$ with $n\geq k\geq 0$.
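For integer $a,b\geq 0$, the finite sum in (8) is straightforward to implement directly (a sketch; `math.comb` supplies the binomial coefficients):

```python
import math

def jacobi(n, a, b, x):
    """Jacobi polynomial P_n^{(a,b)}(x) via the finite sum in (8)."""
    return sum(math.comb(n + a, n - k) * math.comb(n + k + a + b, k)
               * ((x - 1) / 2) ** k
               for k in range(n + 1))
```

At $x=1$ only the $k=0$ term survives, which recovers the classical value $P_{n}^{(a,b)}(1)=\binom{n+a}{n}$; for $a=b=0$ the sum reduces to the Legendre polynomials.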
### III-B Finite Dimensional Characterization of the C.D.F.
Having presented the above preliminary results, we now focus on deriving a new
exact c.d.f. for the maximum eigenvalue of $\mathbf{F}$ when the covariance
matrix $\boldsymbol{\Sigma}$ takes the so-called rank-$1$ perturbation of the
identity (i.e., single-spiked) form. In this case, the covariance matrix can
be decomposed as
$\displaystyle\boldsymbol{\Sigma}=\mathbf{I}_{m}+\eta\mathbf{ss}^{\dagger}=\mathbf{S}_{u}\text{diag}\left(1+\eta,1,1,\ldots,1\right)\mathbf{S}_{u}^{\dagger}$
(9)
from which we obtain
$\displaystyle\boldsymbol{\Sigma}^{-1}=\left(\mathbf{I}_{m}+\eta\mathbf{ss}^{\dagger}\right)^{-1}=\mathbf{I}_{m}-\frac{\eta}{1+\eta}\mathbf{ss}^{\dagger}$
(10)
where
$\mathbf{S}_{u}=\left(\mathbf{s}\;\mathbf{s}_{2}\;\ldots\mathbf{s}_{m}\right)\in\mathcal{U}_{m}$
and $\eta\geq 0$. Following [21], the matrix integral in (7) can be expressed
in terms of the so-called heterogeneous hypergeometric function of two matrix
arguments (see, e.g., Theorem 2 therein). However, the utility of such
functions is limited, as they are not amenable to further analysis. To
circumvent this difficulty, capitalizing on a contour integral approach due to
[22], here we derive a new joint eigenvalue density which contains simple
algebraic functions. This new form further facilitates the use of powerful
orthogonal polynomial techniques due to Mehta [23] to derive the c.d.f. of the
dominant eigenvalue. The following corollary gives the new alternative
expression for the joint density.
$\displaystyle G^{(\alpha)}_{\eta}(x)$
$\displaystyle=\frac{(n+\alpha)!x^{n(\alpha+m)-m+1}}{c_{\eta}^{m-1}\left(1-c_{\eta}x\right)^{n+\alpha+1}}\det\left[\Omega^{(\alpha)}_{i}(x,\eta)\hskip
8.53581pt\Psi_{i,j}(x)\right]_{\begin{subarray}{c}i=1,2,...,\alpha+1\\\
j=2,3,...,\alpha+1\end{subarray}}$ $\displaystyle\hskip
170.71652pt+\frac{(-1)^{n}}{c_{\eta}^{n}}x^{n(m+\alpha-1)}\det\left[(-1)^{i-1}\Phi^{(\alpha)}_{i}(x,\eta)\hskip
8.53581pt\Psi_{i,j}(x)\right]_{\begin{subarray}{c}i=1,2,...,\alpha+1\\\
j=2,3,...,\alpha+1\end{subarray}}$ (16)
###### Corollary 1
Let
$\mathbf{A}\sim\mathcal{W}_{m}(n,\mathbf{I}_{m}+\eta\mathbf{s}\mathbf{s}^{\dagger})$
and $\mathbf{B}\sim\mathcal{W}_{m}(p,\mathbf{I}_{m})$ be independent Wishart
matrices with $p\geq m>n$ and $\eta>0$. Then the joint density of the ordered
eigenvalues $0\leq\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{n}<\infty$
of the singular matrix
$\mathbf{F}=\mathbf{B}^{-1/2}\mathbf{A}\mathbf{B}^{-1/2}$ is given by
$\displaystyle f(\lambda_{1},\cdots,\lambda_{n})$
$\displaystyle=\frac{\mathcal{K}_{2}(m,n,p)}{\left(1+\eta\right)^{n}}\prod_{j=1}^{n}\frac{\lambda_{j}^{m-n}}{(1+\lambda_{j})^{p+n}}\Delta_{n}^{2}(\boldsymbol{\lambda})$
$\displaystyle\qquad\qquad\times
g\left(\frac{\lambda_{1}}{1+\lambda_{1}},\ldots,\frac{\lambda_{n}}{1+\lambda_{n}}\right)$
(11)
where $g(x_{1},\ldots,x_{n})$ is shown in (13) at the bottom of the page with
$c_{\eta}=\frac{\eta}{\eta+1}$ and
$\mathcal{K}_{2}(m,n,p)=\frac{\pi^{n(n-1)}\Gamma(m)\widetilde{\Gamma}_{m}(n+p)}{\Gamma(n+p)\widetilde{\Gamma}_{m}(p)\widetilde{\Gamma}_{n}(n)\widetilde{\Gamma}_{n}(m)}$.
###### Proof:
Omitted due to space limitations. ∎
###### Remark 1
It is worth noting that the joint density corresponding to $\eta=0$ can easily
be obtained from (7) as
$\displaystyle
h(\lambda_{1},\ldots,\lambda_{n})=\frac{\pi^{n(n-1)}\widetilde{\Gamma}_{m}(n+p)}{\widetilde{\Gamma}_{m}(p)\widetilde{\Gamma}_{n}(n)\widetilde{\Gamma}_{n}(m)}\prod_{j=1}^{n}\frac{\lambda_{j}^{m-n}}{(1+\lambda_{j})^{p+n}}\Delta_{n}^{2}(\boldsymbol{\lambda})$
where we have used the fact that
$\int_{\mathcal{V}_{n,m}}\left(\mathbf{U}_{1}^{\dagger}{\rm
d}\mathbf{U}_{1}\right)=\frac{2^{n}\pi^{mn}}{\widetilde{\Gamma}_{n}(m)}$.
The above expression coincides with [21, Corollary 1].
We may use the new joint density given in Corollary 1 to obtain the c.d.f. of
the maximum eigenvalue of a singular $F$-matrix, as given by the following
theorem.
###### Theorem 1
Let
$\mathbf{A}\sim\mathcal{W}_{m}(n,\mathbf{I}_{m}+\eta\mathbf{ss}^{\dagger})$
and $\mathbf{B}\sim\mathcal{W}_{m}(p,\mathbf{I}_{m})$ be independent with
$p\geq m>n$ and $\eta>0$. Then the c.d.f. of the maximum eigenvalue
$\lambda_{\max}$ of the singular matrix
$\mathbf{F}=\mathbf{B}^{-1/2}\mathbf{A}\mathbf{B}^{-1/2}$ is given by
$\displaystyle
F^{(\alpha)}_{\lambda_{\max}}(x;\eta)=\Pr\left\\{\lambda_{\max}\leq
x\right\\}$
$\displaystyle=\dfrac{\mathcal{K}_{\alpha}(m,n)}{(1+\eta)^{n}}G^{(\alpha)}_{\eta}\left(\frac{x}{1+x}\right)$
where $G^{(\alpha)}_{\eta}(x)$ is shown in (III-B) at the bottom of the next
page,
$\Psi_{i,j}(x)=(m+i-1)_{j-2}P_{n+i-j}^{(j-2,m-n+j-2)}\left(\frac{2}{x}-1\right),$
$\displaystyle\Phi^{(\alpha)}_{i}(x,\eta)=\sum_{k=0}^{m-n-1}\frac{(m+\alpha-k-1)!(n+k+i-2)!}{k!(m+i-k-2)!c_{\eta}^{k}x^{k}},$
$\displaystyle\Omega^{(\alpha)}_{i}(x,\eta)$
$\displaystyle=\frac{(n+i-2)!}{(m+i-2)!}\sum_{k=0}^{n+i-2}\frac{(-1)^{k}(m+i+k-2)!}{(n+i-2-k)!k!(k+1)!}$
$\displaystyle\quad\quad\times{}_{2}F_{1}\left(n+\alpha+1,k+1;k+2;\frac{-x\eta}{1+\eta(1-x)}\right),$
${}_{2}F_{1}(a,b;c;z)$ is the Gauss hypergeometric function,
$(a)_{k}=a(a+1)(a+2)\ldots(a+k-1)$ with $(a)_{0}=1$ denotes the Pochhammer
symbol,
$\mathcal{K}_{\alpha}(m,n)=\prod_{j=1}^{\alpha}\frac{(m+n+j-2)!}{(n-1)!(m+n+2j-2)!}$,
and $\alpha=p-m$.
###### Proof:
See Appendix A. ∎
The computational complexity of the above new c.d.f. expression depends on the
size of the determinant, which is $p-m$. Clearly, when the relative difference
between $p$ and $m$ is small, irrespective of their individual magnitudes, the
c.d.f. can be computed very efficiently. This distinctive advantage is due to the
orthogonal polynomial approach that we have employed. To further highlight
this fact, in the following corollary, we present the c.d.f. corresponding to
the special case of $\eta=0$.
###### Corollary 2
The exact c.d.f. of the maximum eigenvalue of
$\mathbf{B}^{-1/2}\mathbf{A}\mathbf{B}^{-1/2}$ corresponding to $\eta=0$ is
given by
$\displaystyle F^{(\alpha)}_{\lambda_{\max}}(x;0)$
$\displaystyle=C_{\alpha}(m,n)\left(\dfrac{x}{1+x}\right)^{n(m+\alpha)}$
$\displaystyle\quad\times\det\left[\Psi_{i+1,j+1}\left(\frac{x}{1+x}\right)\right]_{i,j=1,2,...,\alpha}$
(14)
where $C_{\alpha}(m,n)=\prod_{k=1}^{\alpha}\frac{(m+n+k-1)!}{(m+n+2k-2)!}$.
Armed with the above characterization of the maximum eigenvalue of
$\mathbf{B}^{-1/2}\mathbf{A}\mathbf{B}^{-1/2}$, in what follows we focus on
the ROC of the maximum-eigenvalue-based detector.
Figure 1: $P_{D}$ vs $P_{F}$ for different SNRs when $m=9,n=5$, and $p=13$.
## IV ROC of the Largest Generalized Eigenvalue
Let us now analyze the behavior of detection and false alarm probabilities
associated with the maximum eigenvalue based test. To this end, by exploiting
the relationship between the non-zero eigenvalues of
$\widehat{\boldsymbol{\Theta}}$ and $\mathbf{F}$, given by
$\hat{\lambda}_{j}=(p/n)\lambda_{j}$ for $j=1,2,\ldots,n$, we may express the
c.d.f. of the maximum eigenvalue corresponding to
$\widehat{\boldsymbol{\Theta}}$ as $F_{\lambda_{\max}}^{(\alpha)}(\kappa
x;\gamma)$, where $\kappa=n/p$.
Now, in light of Theorem 1 along with (5) and (6), the detection and false
alarm probabilities can be written, respectively, as
$\displaystyle P_{D}(\gamma,\lambda_{\text{th}})$
$\displaystyle=1-F_{\lambda_{\max}}^{(\alpha)}(\kappa\lambda_{\text{th}};\gamma)$
(15) $\displaystyle P_{F}(\lambda_{\text{th}})$
$\displaystyle=1-F_{\lambda_{\max}}^{(\alpha)}(\kappa\lambda_{\text{th}};0).$
(16)
In general, obtaining an explicit functional relationship between $P_{D}$ and
$P_{F}$ (i.e., the ROC profile) is an arduous task. Nevertheless, in the
important case of $\alpha=0$, such an explicit relationship is possible as
shown in the following corollary.
###### Corollary 3
In the important case of $\alpha=0$ (i.e., $p=m$), the quantities $P_{D}$ and
$P_{F}$ are functionally related as
$\displaystyle
P_{D}=1-\frac{G^{(0)}_{\gamma}\left(\left[1-P_{F}\right]^{1/nm}\right)}{(n-1)!(1+\gamma)^{n}}.$
(17)
Figure 2: $P_{D}$ vs $P_{F}$ for different $m$ values when $n=4$ and $p=15$.
Figure 3: $P_{D}$ vs $P_{F}$ for different $n$ values when $m=100$, $p=100$
and $\gamma=m$. An upper bound on the limiting ROC profile is shown in dashed
line.
Since the configuration $p=m$ barely guarantees the positive definiteness of
the sample estimate of the noise-only covariance matrix [25], this represents
the worst possible ROC profile.
The ROC curves corresponding to various parameter settings are shown in Figs.
1 and 2. The ROC of the maximum generalized eigenvalue is shown in Fig. 1 for
different SNR values. The power improvement with the increasing SNR is clearly
visible in Fig. 1. The next important parameter which affects the ROC profile
is the dimensionality of the system $m$. To this end, Fig. 2 shows the effect
of $m$ for fixed $n$ in two different settings: when $\gamma$ scales with $m$
(i.e., $\gamma=\theta m$) and $\gamma$ is free of $m$. As can be seen, the
disparity between $m$ and $n$ degrades both ROC profiles. Since we operate
below the phase transition, as $m\to p$ with $\gamma$ independent of $m$, the
largest generalized eigenvalue loses its detection power, which is also
visible in the figure. However, under the same setting with $\gamma=\theta m$
(i.e., $\gamma=O(m)$), the ROC profile converges to a limit as $m\to p$. To
further highlight this, we depict the ROC profiles when $m=p$ and
$\gamma=O(m)$ for different values of $n$ and large $m$ in Fig. 3. Although
we cannot exactly quantify this limit, our numerical results suggest a tight
upper bound on this limit as $P_{D}=1-(1-P_{F})^{\theta+1}$, which is also
depicted in the figure.
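The conjectured bound is cheap to evaluate; the helper below (the function name is ours) makes its endpoint behavior easy to check:

```python
def roc_upper_bound(p_f, theta):
    """Conjectured upper bound on the limiting ROC: P_D = 1 - (1 - P_F)^(theta + 1)."""
    return 1.0 - (1.0 - p_f) ** (theta + 1.0)
```

The curve passes through $(0,0)$ and $(1,1)$ and lies above the chance diagonal for $\theta>0$, as a valid ROC bound should.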
## V Conclusion
This paper investigates the detection problem in colored noise using the
largest generalized eigenvalue of the whitened signal-plus-noise sample
covariance matrix. In particular, our focus is on the sample-deficient regime
in which the number of signal-plus-noise observations is strictly less than
the system dimension (i.e., $m>n$). We have assessed the performance of this
detector by
developing a new expression for the c.d.f. of the largest generalized
eigenvalue of a complex singular $F$-matrix. It turns out that when the noise-
only sample covariance matrix is nearly rank deficient (i.e., $p=m$) and
$\gamma=O(m)$, the ROC profile corresponding to the largest sample generalized
eigenvalue converges to a limit as $m$ increases. Since an exact evaluation of
this limit seems an arduous task, we provide a tight upper bound on this
limit.
## Appendix A Proof of the c.d.f. of the maximum eigenvalue
We find it convenient to derive the c.d.f. of the maximum of the transformed
variables $y_{j}=\lambda_{j}/(1+\lambda_{j}),\;j=1,2,\ldots,n$, since the map
$\lambda\mapsto\lambda/(1+\lambda)$ preserves the order. To this end, following Corollary 1, we
express the joint density of $y_{1}<y_{2}<\ldots<y_{n}$ as
$\displaystyle
p(y_{1},\ldots,y_{n})=f\left(\frac{y_{1}}{1-y_{1}},\ldots,\frac{y_{n}}{1-y_{n}}\right)\prod_{j=1}^{n}\frac{1}{(1-y_{j})^{2}}.$
Now by definition, the c.d.f. of $y_{\max}$ assumes the form
$\displaystyle\Pr(y_{\max}\leq x)=\idotsint\limits_{0<y_{1}<\ldots<y_{n}\leq
x}p(y_{1},\ldots,y_{n}){\rm d}y_{1}\ldots{\rm d}y_{n}.$
Consequently, we exploit the symmetry and homogeneity of each term to remove
the ordering of the integration region, which in turn yields
$\displaystyle\Pr(y_{\max}\leq x)$
$\displaystyle=K_{\alpha}(\eta)x^{m(n-1)+1}\mathcal{A}(x)$
$\displaystyle\quad-\sum_{k=0}^{m-n-1}q_{\alpha}(\eta,k)x^{n(m-1)-k}\mathcal{B}(k,x)$
(18)
where
$K_{\alpha}(\eta)=\mathcal{K}_{2}(m,n,p)(n+\alpha)!/(n-1)!c_{\eta}^{m-1}(1+\eta)^{n}$,
$q_{\alpha}(\eta,k)=\mathcal{K}_{2}(m,n,p)(m+\alpha-k-1)!/(n-1)!(1+\eta)^{n}(m-n-k-1)!c_{\eta}^{n+k}$,
$\displaystyle\mathcal{A}(x)=\int_{(0,1)^{n}}$
$\displaystyle\frac{(1-xy_{1})^{\alpha}}{(1-c_{\eta}xy_{1})^{n+\alpha+1}}\Delta_{n}^{2}(\mathbf{y})$
$\displaystyle\qquad\qquad\times\prod_{\ell=2}^{n}\frac{(1-xy_{\ell})^{\alpha}}{\left(y_{1}-y_{\ell}\right)}y_{\ell}^{m-n}{\rm
d}\mathbf{y},$ (19) $\displaystyle\mathcal{B}(k,x)=\int_{(0,1)^{n}}$
$\displaystyle y_{1}^{m-n-k-1}(1-xy_{1})^{\alpha}\Delta_{n}^{2}(\mathbf{y})$
$\displaystyle\qquad\qquad\times\prod_{\ell=2}^{n}\frac{(1-xy_{\ell})^{\alpha}}{\left(y_{1}-y_{\ell}\right)}y_{\ell}^{m-n}{\rm
d}\mathbf{y},$ (20)
$(0,1)^{n}=(0,1)\times(0,1)\times\ldots\times(0,1)$ with $\times$ denoting the
Cartesian product, and ${\rm d}\mathbf{y}={\rm d}y_{1}\ldots{\rm d}y_{n}$.
Since the above two multiple integrals are structurally similar, we focus on
the evaluation of $\mathcal{A}(x)$ while the other follows in a similar
manner. Therefore, noting the decomposition
$\Delta_{n}^{2}(\mathbf{y})=\prod_{k=2}^{n}(y_{1}-y_{k})^{2}\prod_{2\leq
i<\ell\leq n}(y_{\ell}-y_{i})^{2}$, we may rewrite (19) as
$\displaystyle\mathcal{A}(x)=\int_{0}^{1}\frac{(1-xy_{1})^{\alpha}}{(1-c_{\eta}xy_{1})^{n+\alpha+1}}\mathcal{Q}_{n-1}(y_{1},\beta,x){\rm
d}y_{1}$ (21)
where $\beta=m-n$ and
$\mathcal{Q}_{n}(y_{1},\beta,x)=\int_{[0,1]^{n}}\Delta_{n}^{2}(\mathbf{z})\prod_{j=1}^{n}z_{j}^{\beta}(1-xz_{j})^{\alpha}(y_{1}-z_{j}){\rm
d}\mathbf{z}.$
The above $n$-fold integral can be evaluated with the help of [23, Ch. 22]
followed by some tedious algebraic manipulation to yield
$\displaystyle\mathcal{Q}_{n}(y_{1},\beta,x)$
$\displaystyle\quad=K_{(\beta,n)}x^{\alpha(n+1)}(1-xy_{1})^{-\alpha}$
$\displaystyle\qquad\quad\times\det\left[P_{n+i-1}^{(0,\beta)}\left(2y_{1}-1\right)\hskip
14.22636pt\widetilde{\Psi}_{i,j}(x)\right]_{\begin{subarray}{c}i=1,2,...,\alpha+1\\\
j=2,3,...,\alpha+1\end{subarray}}$
where
$\widetilde{\Psi}_{i,j}(x)=(n+i+\beta)_{j-2}P^{(j-2,\beta+j-2)}_{n+1+i-j}\left(\frac{2}{x}-1\right)$
and
$\displaystyle K_{(\beta,n)}$
$\displaystyle=\prod_{j=1}^{\alpha+1}\frac{(n+j-1)!(n+\beta+j-1)!}{(2n+2j+\beta-2)!}$
$\displaystyle\qquad\qquad\times\prod_{j=0}^{n-1}\frac{j!(j+1)!(\beta+j)!}{(\beta+n+j)!}\prod_{j=0}^{\alpha-1}\frac{1}{j!}.$
In light of the above development and noting that only the first column of the
determinant depends on $y_{1}$, we rewrite (21) as
$\displaystyle\mathcal{A}(x)$
$\displaystyle=K_{(\beta,n-1)}x^{n\alpha}\det\left[\widetilde{\Omega}_{i}^{(\alpha)}(x,\eta)\hskip
14.22636pt\Psi_{i,j}(x)\right]_{\begin{subarray}{c}i=1,2,...,\alpha+1\\\
j=2,3,...,\alpha+1\end{subarray}}$
where
$\widetilde{\Omega}_{i}^{(\alpha)}(x,\eta)=\int_{0}^{1}\frac{P_{n+i-1}^{(0,\beta)}\left(2y_{1}-1\right)}{(1-c_{\eta}xy_{1})^{n+\alpha+1}}{\rm
d}y_{1}$. Now, following Definition 2, we may expand the denominator and
perform term by term integration with the help of [26, Eq. 3.194.1] to obtain
$\displaystyle\mathcal{A}(x)$
$\displaystyle=\frac{K_{(\beta,n-1)}x^{n\alpha}}{(1-c_{\eta}x)^{n+p-m+1}}$
$\displaystyle\qquad\times\det\left[\Omega_{i}^{(\alpha)}(x,\eta)\hskip
14.22636pt\Psi_{i,j}(x)\right]_{\begin{subarray}{c}i=1,2,...,\alpha+1\\\
j=2,3,...,\alpha+1\end{subarray}}.$ (22)
As for $\mathcal{B}(k,x)$, following similar arguments as before, with some
tedious algebraic manipulation, we obtain
$\displaystyle\mathcal{B}(k,x)$
$\displaystyle=(-1)^{n+1}K_{(\beta,n-1)}(\beta-k-1)!(k!)^{-1}x^{n\alpha}$
$\displaystyle\qquad\qquad\times\det\left[c_{i}(k)\hskip
14.22636pt\Psi_{i,j}(x)\right]_{\begin{subarray}{c}i=1,2,...,\alpha+1\\\
j=2,3,...,\alpha+1\end{subarray}}$ (23)
where $c_{i}(k)=(-1)^{i-1}\frac{(n+k+i-2)!}{(m+i-k-2)!}$. Finally, we
substitute (22) and (23) into (18) and make use of the functional relationship
$\lambda_{\max}=y_{\max}/(1-y_{\max})$ with some algebraic manipulation to
conclude the proof.
## References
* [1] R. R. Nadakuditi and A. Edelman, “Sample eigenvalue based detection of high-dimensional signals in white noise using relatively few samples,” _IEEE Trans. Signal Process._ , vol. 56, no. 7, pp. 2625–2638, Jul. 2008.
* [2] R. R. Nadakuditi and J. W. Silverstein, “Fundamental limit of sample generalized eigenvalue based detection of signals in noise using relatively few signal-bearing and noise-only samples,” _IEEE J. Sel. Topics Signal Process._ , vol. 4, no. 3, pp. 468–480, Jun. 2010.
* [3] A. Onatski, “Detection of weak signals in high-dimensional complex-valued data,” _Random Matrices: Theory and Applications_ , vol. 03, no. 01, p. 1450001, 2014.
* [4] L. D. Chamain, P. Dharmawansa, S. Atapattu, and C. Tellambura, “Eigenvalue-based detection of a signal in colored noise: Finite and asymptotic analyses,” _IEEE Trans. Inf. Theory_ , vol. 66, no. 10, pp. 6413–6433, 2020.
* [5] I. M. Johnstone and B. Nadler, “Roy’s largest root test under rank-one alternatives,” _Biometrika_ , vol. 104, no. 1, pp. 181–193, 2017.
* [6] P. Dharmawansa, B. Nadler, and O. Shwartz, “Roy’s largest root under rank-one perturbations: The complex valued case and applications,” _J. Multivar. Anal._ , vol. 174, p. 104524, 2019.
* [7] P. Dharmawansa, I. M. Johnstone, and A. Onatski, “Local asymptotic normality of the spectrum of high-dimensional spiked F-ratios,” _arXiv:1411.3875 [math.ST]_ , Nov. 2014.
* [8] Q. Wang and J. Yao, “Extreme eigenvalues of large-dimensional spiked Fisher matrices with application,” _Ann. Statist._ , vol. 45, no. 1, pp. 415–460, Feb. 2017.
* [9] J. Baik, G. B. Arous, and S. Péché, “Phase transition of the largest eigenvalue for non-null complex sample covariance matrices,” _Ann. Probab._ , vol. 33, no. 5, pp. 1643–1697, 2005.
* [10] J. Baik and J. W. Silverstein, “Eigenvalues of large sample covariance matrices of spiked population models,” _J. Multivariate Anal._ , vol. 97, no. 6, pp. 1382–1408, 2006.
* [11] R. Couillet and M. Debbah, _Random Matrix Methods for Wireless Communications_. Cambridge University Press, Sep. 2011.
* [12] E. Maris, “A resampling method for estimating the signal subspace of spatio-temporal EEG/MEG data,” _IEEE Trans. Biomed. Eng._ , vol. 50, no. 8, pp. 935–949, Aug 2003.
* [13] J. Vinogradova, R. Couillet, and W. Hachem, “Statistical inference in large antenna arrays under unknown noise pattern,” _IEEE Trans. Signal Process._ , vol. 61, no. 22, pp. 5633–5645, Nov. 2013.
* [14] S. Hiltunen, P. Loubaton, and P. Chevalier, “Large system analysis of a GLRT for detection with large sensor arrays in temporally white noise,” _IEEE Trans. Signal Process._ , vol. 63, no. 20, pp. 5409–5423, Oct. 2015.
* [15] N. Asendorf and R. R. Nadakuditi, “Improved detection of correlated signals in low-rank-plus-noise type data sets using informative canonical correlation analysis (ICCA),” _IEEE Trans. Inf. Theory_ , vol. 63, no. 6, pp. 3451–3467, Jun. 2017.
* [16] M. S. Srivastava, “Singular Wishart and multivariate beta distributions,” _Ann. Stat._ , vol. 31, no. 5, pp. 1537–1560, 2003.
* [17] R. K. Mallik, “The pseudo-Wishart distribution and its application to MIMO systems,” _IEEE Trans. Inf. Theory_ , vol. 49, no. 10, pp. 2761–2769, 2003.
* [18] H. Uhlig, “On singular Wishart and singular multivariate beta distributions,” _Ann. Stat._ , pp. 395–405, 1994.
* [19] T. Ratnarajah and R. Vaillancourt, “Complex singular Wishart matrices and applications,” _Comput. Math. with Appl._ , vol. 50, no. 3-4, pp. 399–411, 2005.
* [20] A. Onatski, “The Tracy-Widom limit for the largest eigenvalues of singular complex Wishart matrices,” _Ann. Appl. Probab._ , vol. 18, no. 2, pp. 470–490, 2008.
* [21] K. Shimizu and H. Hashiguchi, “Expressing the largest eigenvalue of a singular beta F-matrix with heterogeneous hypergeometric functions,” _Random Matrices: Theory Appl._ , vol. 11, no. 01, p. 2250005, 2022.
* [22] D. Wang, “The largest eigenvalue of real symmetric, Hermitian and Hermitian self-dual random matrix models with rank one external source, part I,” _J. Stat. Phys._ , vol. 146, no. 4, pp. 719–761, 2012.
* [23] M. L. Mehta, _Random Matrices_. Academic Press, 2004, vol. 142.
* [24] L. C. Andrews, _Special Functions of Mathematics for Engineers_. SPIE Press, 1998.
* [25] R. J. Muirhead, _Aspects of Multivariate Statistical Theory_. John Wiley & Sons, 2009, vol. 197.
* [26] I. Gradshteyn and I. Ryzhik, _Table of Integrals, Series, and Products_ , 7th ed. Boston: Academic Press, 2007.
Rita Sevastjanova and Eren Cakmak are with the University of Konstanz.
E-mail<EMAIL_ADDRESS>Shauli Ravfogel is with Bar-Ilan University.
E-mail<EMAIL_ADDRESS>Ryan Cotterell is with ETH.
E-mail<EMAIL_ADDRESS>Mennatallah El-Assady is with the ETH AI Center.
E-mail<EMAIL_ADDRESS>
# Visual Comparison of Language Model Adaptation
Rita Sevastjanova, Eren Cakmak, Shauli Ravfogel, Ryan Cotterell, and
Mennatallah El-Assady
###### Abstract
Neural language models are widely used; however, their model parameters often
need to be adapted to the specific domains and tasks of an application, which
is time- and resource-consuming. Thus, adapters have recently been introduced
as a lightweight alternative for model adaptation. They consist of a small set
of task-specific parameters with a reduced training time and simple parameter
composition. The simplicity of adapter training and composition comes along
with new challenges, such as maintaining an overview of adapter properties and
effectively comparing their produced embedding spaces. To help developers
overcome these challenges, we provide a twofold contribution. First, in close
collaboration with NLP researchers, we conducted a requirement analysis for an
approach supporting adapter evaluation and detected, among others, the need
for both intrinsic (i.e., embedding similarity-based) and extrinsic (i.e.,
prediction-based) explanation methods. Second, motivated by the gathered
requirements, we designed a flexible visual analytics workspace that enables
the comparison of adapter properties. In this paper, we discuss several design
iterations and alternatives for interactive, comparative visual explanation
methods. Our comparative visualizations show the differences in the adapted
embedding vectors and prediction outcomes for diverse human-interpretable
concepts (e.g., person names, human qualities). We evaluate our workspace
through case studies and show that, for instance, an adapter trained on the
language debiasing task according to context-0 (decontextualized) embeddings
introduces a new type of bias where words (even gender-independent words such
as countries) become more similar to female than to male pronouns. We
demonstrate that these are artifacts of context-0 embeddings, and the adapter
effectively eliminates the gender information from the contextualized word
representations.
###### keywords:
Language Model Adaptation, Adapter, Word Embeddings, Sequence Classification,
Visual Analytics
We
present a workspace that enables the evaluation and comparison of adapters –
lightweight alternatives for language model fine-tuning. After data pre-
processing (e.g., embedding extraction), users can select pre-trained
adapters, create explanations, and explore model differences through three
types of visualizations: Concept Embedding Similarity, Concept Embedding
Projection, and Concept Prediction Similarity. The explanations are provided
for single models as well as model comparisons. For each explanation, we
provide further explanation details, such as the word contexts as well as
embedding vectors themselves.
## Introduction
Language models (LMs) such as the masked language model BERT [11] are widely
used for diverse natural language processing (NLP) and understanding tasks.
Such models are capable of learning manifold language properties in an
unsupervised manner [59]. However, the model parameters typically need to be
updated before using them on downstream tasks, such as sentiment
classification. Task-specific fine-tuning [27, 55] along with domain-specific
fine-tuning [22, 21] are the most common methods for parameter adaptation.
Although fine-tuning methods commonly achieve state-of-the-art results on many
NLP tasks [55], they come along with limitations such as a high training time
and storage [32]. To overcome the shortcomings of model fine-tuning,
Houlsby et al. [26] have recently introduced adapter modules – a lightweight
alternative for LM fine-tuning. Instead of adapting the complete model,
adapters learn a small set of task-specific parameters, requiring less
training time and storage space. For a more efficient adapter training and
composition, Pfeiffer et al. [49] have proposed a modular adapter framework
called AdapterHub. It comes along with adapter-transformers – an extension of
HuggingFace’s transformers library (https://github.com/Adapter-Hub/adapter-transformers),
integrating adapters into state-of-the-art LMs. In addition to
the simple parameter adaptation, the AdapterHub framework allows sharing
adapters with the community, supporting open science practices.
The AdapterHub repository currently contains almost 400 adapters for 72 text
analysis tasks and 50 languages. To select the best adapter for a given
analysis task, one needs to be able to compare the adapters and their learned
language properties. The related work has shown that such model comparison
tasks are the focus of both model- and data-driven users working with LMs [5].
To understand more about the typical analysis setting, data, and performed
tasks when evaluating fine-tuned model properties, we conducted a literature
review and semi-structured interviews with two NLP researchers. The
requirement analysis revealed that researchers are interested in analyzing
models with respect to different human-interpretable concepts. In particular,
they investigate how specific concept representations change during fine-
tuning. The analysis is typically performed on two types of data: (1) word
embedding representations and (2) classifier prediction outcomes. Using word
embeddings, they analyze evolving concept intersections as well as newly
produced artifacts like strange word associations (e.g., biases). Prediction
outcomes are used to analyze task-adapted model behavior changes, e.g.,
whether specific word associations lead to unexpected prediction outcomes.
The adapters trained on one particular task typically have different
architectures [26, 50] and training corpora. These different learning settings
usually lead to different model performances; it is difficult, though, to keep
track of such performance variations. The continuous development of new
adapters thus dictates the need for a solution that assists the analysis and
comparison of adapter properties.
To support the NLP community in an effective adapter evaluation and
comparison, we contribute a novel visual analytics workspace. The workspace
integrates adapters from the AdapterHub repository and enables their analysis
through three types of visual explanation methods: Concept Embedding
Similarity, Concept Embedding Projection, and Concept Prediction Similarity
(see Visual Comparison of Language Model Adaptation). We support model
comparison according to their produced word embeddings and classification
predictions, i.e., both intrinsic and extrinsic evaluation methods. The
explanations are performed on diverse human-interpretable concepts related to
bias mitigation and sentiment analysis tasks (e.g., gender-related
stereotypes, human qualities). The users can upload further concepts to the
workspace to cover further analysis directions. The modular composition of
visual explanations supports such analysis extensions.
The comparison of adapter properties requires suitable comparative
visualization designs. As described by Gleicher [19], the design of
comparative visualizations is not trivial since they typically combine the
issues of representing individual objects as well as their relationships. In
order to design an appropriate solution, we rely on the comparative
visualization guidelines [19] and consider four task- and data-related
aspects: (1) comparative elements, (2) challenges related to representing
relationships between the comparative elements, (3) strategies to overcome the
challenges, and (4) a suitable design solution. The design process
consisted of several iterations in close collaboration with NLP researchers.
In section 4 we present some of the considered design alternatives; others are
provided as supplementary material to this paper.
We show the applicability of the workspace through case studies created
collaboratively with NLP researchers. In particular, we compare the properties
of six adapters related to debiasing, sentiment classification, and named
entity recognition tasks. We present new insights into model properties
related to human-interpretable concepts and show that, for instance, context-0
(decontextualized) embeddings of the adapter trained on the language debiasing
task contain a bias where words become more similar to female- than male
pronouns; however, the gender information is eliminated from the
contextualized word representations.
To summarize, the contribution of this paper is threefold. (1) We present
requirements for a visual analytics system supporting fine-tuned LM
comparison. (2) We introduce a workspace for model comparison and present
design considerations for three types of comparative, visual explanation
methods. (3) We present new insights into multiple adapter properties through
expert case studies.
Figure 1: The workspace contains three views: Adapter Composition View (A),
which lists adapters from the AdapterHub repository, Explanation Composition View
(B) for modular explanation generation, and Visual Comparison View (Workspace)
for model comparison. Here: contrary to the rotten-tomatoes model, the
context-0 embeddings of the sst-2 sentiment classifier strongly encode the two
polarities of human qualities.
## 1 Background and Related Work
In the following, we describe background information related to LM fine-tuning
and related work to explanation methods.
### 1.1 Language Model Fine-Tuning
In this paper, we analyze transformers, which are multi-layer models that use
attention mechanisms [69]. In these models, each token of the input sequence
is mapped to a high-dimensional vector (i.e., context-dependent embedding that
encodes specific context properties). These embeddings are updated in each
transformer’s layer; thus, one can extract and analyze contextualized word
embeddings layerwise (e.g., 12 layers for the BERT-base model). It has been
shown that these embeddings encode different language properties found in the
training data [59]. LMs, including transformers, are commonly fine-tuned to
capture language characteristics for specific domains or tasks. Domain-
adaptive fine-tuning is an unsupervised fine-tuning approach based on a masked
language modeling task on text from a specific domain [22]. Intermediate-task
training is a model’s fine-tuning on labeled data prior to task-specific fine-
tuning [52]. Task-specific fine-tuning deals with adapting an LM to a
particular output label distribution [27]. The fine-tuning of LMs is effective
yet time- and resource-consuming. Kirkpatrick et al. [32] also showed that
fine-tuning can lead to catastrophic forgetting of language characteristics
acquired during the model’s pre-training. To overcome these limitations,
Houlsby et al. [26] introduced adapters. They are a lightweight alternative
for model fine-tuning, only optimizing a small set of task-specific parameters
learned and stored during the adaptation phase, thus, reducing both training
time and storage space. The AdapterHub framework [49] has brought the
advantage of a simple and efficient adapter composition and reuse – one can
upload their trained adapters to the AdapterHub or
HuggingFace222https://huggingface.co/ repositories, and they are available in
the framework for interested parties, supporting the open science practice.
Adapters can be trained on masked language modeling as well as specific
downstream tasks (e.g., sentiment classification). The trained adapters can be
‘attached’ to the pre-trained model, leading to adapted model parameters. The
model with an attached task adapter can be used for the target task (e.g.,
sentiment classification). Adapters have been applied for tasks such as
natural language generation [38], machine translation [53, 31], domain
adaptation [51, 18], injection of external knowledge [35], and language
debiasing [34].
### 1.2 Visual Embedding Explanation and Comparison
With respect to explainability, most relevant work has focused on
visualizations that show how transformers work and what they learn. For
example, visual analytics systems like NLIZE [40], Seq2Seq-Vis [66], BertViz
[70], exBERT [25], SANVis [46], and Attention Flows [10] visualize the
attention layer, i.e., they highlight tokens to which the model attends in
order to solve a task. Although widely used, attention weights and their suitability
for explanation purposes are being controversially discussed in related work
(see, e.g., [28]). Other work has focused on visualizing word embeddings to
show what LMs learn. The first such tools were designed for static embeddings,
such as word2vec [44] and GloVe [47], and facilitated analogies [39] and tasks
related to local word neighborhoods [23]. Later, Berger [3] explored
correlations between embedding clusters in BERT [11]. Recent tools focus on LM
comparison tasks by visualizing multiple models simultaneously. For instance,
Strobelt et al. [67] present LMDiff – a tool that visually compares LM
probability distributions and suggests interesting text instances for the
analysis. Heimerl et al. [24] present embComb, which applies different metrics
to measure differences in the local structure around embedding objects (e.g.,
tokens). Embedding Comparator by Boggust et al. [5] is a system for embedding
comparison through small multiples. It calculates and visualizes similarity
scores for the embedded objects based on their local neighborhoods (i.e.,
shared nearest neighbors). Different from these two approaches, we provide
explanations of pre-defined human-interpretable concepts, enabling the testing
of more specific hypotheses related to embedding intersections. Sivaraman et
al.[65] present Emblaze, which uses an animated scatterplot and integrates
visual augmentations to summarize changes in the analyzed embedding spaces. In
contrast, we compare models by aligning the two spaces using juxtaposition,
superposition, and explicit encoding techniques. Our recent work called
LMFingerprints [62] applies scoring techniques to examine properties encoded
in embedding vectors and supports model as well as model layer comparison.
Embedding comparison tasks are relevant for all types of data that get
represented by embedding vectors. For instance, Li et al. [36] present a
visual analytics system for node embedding comparison (i.e., graph data), and
Arendt et al. [1] introduce a visualization technique called Parallel
Embeddings for concept-oriented model comparison on image data, to name a few.
## 2 Requirement Analysis
Before designing the visual analytics workspace, we conducted a literature
review related to LM comparison tasks (e.g., [5, 65, 24]). Furthermore, we
conducted two semi-structured interviews in an online setting with two NLP
researchers (co-authors of this paper) with expertise in language modeling
tasks to discuss further common evaluation-related analysis aspects. Our goal
was to gather specific linguistically motivated analysis tasks and research
challenges for the evaluation of adapted LMs. In the following, we describe
the gathered requirements through Models and Data and Users and Tasks [45].
### 2.1 Models and Data
The NLP research focuses not only on developing and adapting new models with
better performance but also on understanding the linguistic properties the
models implicitly capture. Probing classifiers [29, 37, 12] and adversarial
testing [20, 41, 58] are the most common methods used in computational
linguistics to understand such properties. The current research explores not
only what the models learn but also when they fail and which limitations they
have, such as different types of biases [17, 43, 4]; as well as ways to
mitigate those biases [16, 72, 14, 56, 57]. Visualizations are used to analyze
the model latent spaces to gain insights into the degree of changes in
embedding vectors [15, 61], properties encoded in embedding vectors [62], and
word neighborhood changes [24, 5, 65]. Especially, the comparison of embedding
local neighborhoods is one of the critical tasks for many users of LMs [5,
65]. For such comparisons, one first needs to select words for the analysis.
Boggust et al. [5] write that this is commonly done either in a data- or
model-driven way, for instance, by exploring specific domain-related words or
challenging words for the analyzed model. During the interviews, the NLP
researchers agreed with this statement and emphasized that evaluation methods
related to model limitations often explore specific, pre-defined human-
interpretable concepts such as gender-related stereotypes. When analyzing such
human-interpretable concepts, people commonly analyze contextualized word
embeddings. For some methods (e.g., Word Embedding Association Tests [7]),
researchers compute word-level vectors without an explicit context [34, 71].
In particular, for BERT, one can append the sequence start and the separator
token before and after the word, respectively (e.g., [CLS] word [SEP]) and
extract embeddings with context size zero [74] (also known as decontextualized
embeddings [6]). In the following, we call them context-0 embeddings. Our
experts also emphasized the need to ‘connect’ the embedding space with the
model’s behavior to inspect whether specific embedding vectors influence the
model’s predictions on downstream tasks.
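As a concrete instance of such a concept-level test, a WEAT-style effect size can be computed directly over (e.g., context-0) word vectors. The numpy sketch below is our illustration of the idea behind [7]; function names and the synthetic vectors in any usage are ours:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def weat_effect_size(X, Y, A, B):
    """WEAT effect size: association of target word sets X, Y with attribute sets A, B.

    Each argument is a list of embedding vectors (e.g., context-0 embeddings for
    female/male person names as targets and pleasant/unpleasant words as attributes).
    """
    def s(w):
        # Differential association of one word with the two attribute sets.
        return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])
    sx = [s(x) for x in X]
    sy = [s(y) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)
```

A positive effect size indicates that the words in X are, on average, closer to attribute set A (and Y to B) in the analyzed embedding space.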
### 2.2 Users and Tasks
With this work, we aim to support developers and researchers who adapt and
evaluate LMs to perform their analysis more easily by focusing on the analysis
of diverse human-interpretable concepts. To do that, we gathered task-related
requirements. NLP researchers’ work is related to comparison (i.e., baseline)
tasks. In particular, their analysis typically involves (T0) a comparison of
multiple LMs with different architectures or fine-tuning settings as well as
multiple model layers. Second, they typically analyze specific human-
interpretable concepts and try to (T1) partition the representation (e.g.,
embedding) space according to these concepts. Third, they try to (T2)
understand interactions between specific concepts, e.g., to what extent these
concepts are represented similarly in the representation (e.g., embedding)
space. They aim to (T3) detect ‘unexpected’ associations, e.g., positive
sentiment words that tend to trigger the negative sentiment because, e.g.,
they are negated. And finally, their goal is to (T4) connect the
representation space with the actual behavior of the model, e.g., to
understand whether concepts are separated in the representation space yet do
not affect the behavior of the model.
## 3 Visual Analytics Workspace: Data Processing
In this section, we present our visual analytics workspace and its three main
components: Adapter Composition View (in Figure 1 A), Explanation Composition
View (in Figure 1 B), and Visual Comparison View (in Figure 1 Workspace) for
model and layer comparison. Before introducing the workspace design in section
4, we describe the data processing.
### 3.1 Data Modeling
Motivated by the gathered requirements, we first build the data model. Since
human-interpretable concept analysis plays a crucial role in NLP research, we
start by modeling such concepts. By default, we work with concepts that are
commonly used in research related to bias
mitigation (https://github.com/cisnlp/bias-in-nlp) and sentiment analysis. The
users can upload further concepts as .json files in the interface. One concept
is represented by two word lists, each having a specific polarity. For
instance, a concept called person names consists of two word lists – male
person names and female person names, respectively. We provide the following
concepts: male/female person names, male/female pronouns, male/female-related
nouns, male/female-related stereotypes, positive/negative human qualities,
high/low-GDP countries, and words related to weak/strong, family/career,
science/arts, intelligence/appearance.
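A concept file of this kind reduces to two named word lists. The following is a hypothetical example of such a .json structure (the field names and example words are our illustration, not the workspace's actual schema):

```python
import json

# Hypothetical concept definition: one concept = two word lists with opposite polarity.
concept = {
    "name": "person names",
    "polarities": {
        "male person names": ["James", "Robert", "Michael"],
        "female person names": ["Mary", "Linda", "Patricia"],
    },
}

# Serialized form, as it could be uploaded to the workspace as a .json file.
concept_json = json.dumps(concept, indent=2)
```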
We first model each word in a concept through a list of sentences in which the
word is used. For this purpose we use the Yelp dataset [73]; the user can also
upload other datasets and use them for explanations. The associated sentences
are used for two purposes. First, we use them as an input to the (adapted) LM
to extract the word’s contextualized word embeddings. The embeddings are
extracted layerwise (i.e., layers 1-12 for BERT-base) and aggregated [6]
for each unique word (e.g., one average embedding from all occurrences of the
word Germany per layer). Second, we use these sentences as input for task
adapters for prediction making. Furthermore, we extract the word’s context-0
embedding by using the model’s special tokens and the word itself as the input
to the model (i.e., [CLS] word [SEP]). For words that do not occur in the
vocabulary, we average their sub-token embeddings.
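The two aggregation steps above, averaging a word's contextualized embeddings over all of its occurrences and averaging sub-token embeddings for words outside the vocabulary, can be sketched as follows. This is an illustrative reconstruction with assumed function names, not the workspace's actual code:

```python
import numpy as np

def aggregate_word_embeddings(occurrences):
    """Average the contextualized embeddings of all occurrences of one word
    (one vector per sentence in which the word appears), yielding a single
    per-layer vector for that word."""
    return np.mean(np.stack(occurrences), axis=0)

def merge_subtokens(subtoken_vectors):
    """Words split into sub-tokens by the tokenizer are represented by the
    average of their sub-token embeddings."""
    return np.mean(np.stack(subtoken_vectors), axis=0)
```

The same averaging would be applied per layer, e.g., once for each of the 12 BERT-base layers.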
### 3.2 Adapter Composition and Explanation Composition
We load adapters from the AdapterHub repository and list them in the Adapter
Composition View. The user can select an adapter for the analysis by clicking
on the particular icon. Currently, we have pre-processed the data for six
models: the pre-trained BERT (BERT-base-uncased), the debiasing BERT [34], and
four task adapters for BERT (sentiment classifiers sst-2, rotten-tomatoes [54],
and imdb [54], and the named entity recognizer conll2003). For a new adapter
selection, the data is first pre-processed and stored in the database.
The user defines which explanation methods to use for their analysis in the
Explanation Composition View. The explanations are constructed from available
concepts and three visualization types. The visualizations include Concept
Embedding Similarity, Concept Embedding Projection, and Concept Prediction
Similarity. The Concept Embedding Similarity requires an input of two
concepts: one is used as an anchor in the visualization and the other is
explained through the cosine similarity to the anchor. The Concept Embedding
Projection requires an input of one or two concepts (to analyze a single
concept or the relation between two (un)related concepts). The user can choose
between multiple projection techniques: Principal Component Analysis (PCA)
[30], Multidimensional Scaling (MDS) [33], t-Distributed Stochastic Neighbor
Embedding (t-SNE) [68], and Uniform Manifold Approximation and Projection
(UMAP) [42]. The Concept Prediction Similarity can be applied only on adapters
with prediction heads (e.g., sentiment classifier). The explanation requires
an input of one concept; the class labels are used as anchors in the
visualization.
The pre-computed adapters, as well as created explanations, are displayed on
top of the Visual Comparison View, represented through an icon and adapter’s
or explanation’s name. The user first selects an explanation type, then an
adapter that they would like to analyze. To guide the users toward interesting
adapters for the analysis, we display a glyph underneath the adapter’s icon.
The glyph shows the overlap between the two concept word lists for the
selected explanation. The overlap is determined using a similar algorithm to
the class consistency [64] that is commonly used to select good scatterplot
views for high-dimensional data. An example of these glyphs is shown in Figure
1. The explanation visualization is displayed in the Visual Comparison View on
a zoomable canvas; hence, one can display as many explanations on the canvas
as needed. A draggable placeholder icon marks the position where the next
selected adapter visualization will be displayed on the screen.
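The glyph's overlap score is described only as being similar to class consistency [64]; purely as an illustration, a centroid-based separation measure in that spirit could look like the following sketch (the function name and the exact criterion are our assumptions, not the workspace's algorithm):

```python
import numpy as np

def separation_score(points_a, points_b):
    """Fraction of points lying closer to their own word list's centroid than
    to the other list's centroid, in the spirit of class/distance consistency.
    1.0 indicates well-separated word lists; values near 0.5 indicate heavy
    overlap, flagging less interesting adapters for this explanation."""
    ca, cb = points_a.mean(axis=0), points_b.mean(axis=0)
    correct = 0
    for p in points_a:
        correct += np.linalg.norm(p - ca) < np.linalg.norm(p - cb)
    for p in points_b:
        correct += np.linalg.norm(p - cb) < np.linalg.norm(p - ca)
    return correct / (len(points_a) + len(points_b))
```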
## 4 Visual Analytics Workspace: Design Rationale
Figure 2: We provide two types of model comparison designs for analyzing
concept embedding similarity, i.e., juxtaposition, where two models are
displayed next to each other and superposition, where two models are displayed
in one visualization. Here: the contextualized word embeddings extracted from
layer 11 for the rotten-tomatoes and sst-2 sentiment classifiers differentiate
between positive and negative human qualities. The rotten-tomatoes model
requires context to separate the two polarities since the separation is
stronger than for context-0 embeddings (see Figure 1).
In the following, we describe the design rationale and the visual encoding for
the designed explanation visualizations. Our workspace supports the
exploration of a single model and the comparison of two models or two model
layers (T0). We apply diverse explanation methods (i.e., the similarity in the
high-dimensional space, embedding projection, and explanation details) to
detect and avoid potential artifacts generated by a single approach (e.g.,
projection artifacts). The design of the comparison visualizations was
motivated by the design guidelines by Gleicher [19] that consider the
comparative elements, challenges that may occur, strategies to overcome the
challenges, and the design solutions.
#### Global Visual Encoding
In all visualizations, we use the visual mark called point [9] (i.e.,
rectangle) to represent words. Hidden word labels are displayed by hovering
over a word’s rectangle. We use positional encoding [9] to partition the
embedding space (T1), detect concept intersections (T2), and locate
‘unexpected’ associations (T3). The position is used to show the similarity
between words according to underlying features such as different types of word
embedding vectors or prediction labels. We group words belonging to the same
concept through an additional visual mark, i.e., area/contour. The contours
are implemented using the d3-contour library
(https://github.com/d3/d3-contour) based on a two-dimensional kernel
density estimation on the point clouds. The user can specify how many contour
lines to display in the visualization by moving a slider. To support
memorization and ease readability, we use a global
color encoding [9] for concepts. In particular, we use two diverging color
pairs. One color pair represents the two word lists of a concept. The
selection of the color pairs was not trivial since the colors had two
objectives: the separability between two concepts and the separability between
two word lists of one concept. The final decision was made as follows: we
selected two warm colors (i.e., pink and yellow) representing one concept and
two cold colors (i.e., green and blue) representing the other, as shown in the
side figure. Further color alternatives are included in the supplementary
material.
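The concept contours above are produced by d3-contour from a two-dimensional kernel density estimate on the point clouds. As a language-neutral illustration of the underlying computation only (grid layout and normalization are our assumptions; the workspace performs this step in JavaScript), the density evaluation can be sketched as:

```python
import numpy as np

def kde_grid(points, grid_x, grid_y, bandwidth=5.0):
    """Evaluate a 2D Gaussian kernel density estimate on a regular grid;
    contour lines are then drawn as iso-lines of the returned density."""
    xx, yy = np.meshgrid(grid_x, grid_y)
    density = np.zeros_like(xx, dtype=float)
    inv2h2 = 1.0 / (2.0 * bandwidth ** 2)
    for px, py in points:
        # Accumulate one isotropic Gaussian bump per word position.
        density += np.exp(-((xx - px) ** 2 + (yy - py) ** 2) * inv2h2)
    # Normalize so the density integrates to roughly one.
    return density / (len(points) * 2.0 * np.pi * bandwidth ** 2)
```

The slider that controls the number of contour lines would correspond to the number of iso-levels extracted from this grid.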
#### Visual Encoding for Single Model Visualizations
By default, we display as many details as possible in the single
visualizations but avoid label overplotting. An algorithm measures whether
displaying a label would lead to overlap. The algorithm iterates through words
in both word lists of a concept and measures the bounding box of each text
element that gets added to the visualization. If the new element creates an
overlap, it is hidden in the visualization.
#### Visual Encoding for Model Comparison Visualizations
For effective model comparison, we use both the juxtaposition design (see
[19]) and either the superposition for visualizations that have a positional
anchor or explicit encoding for visualizations that lack a positional anchor
(e.g., projection techniques). By default, we show the summary [19] of the two
models to avoid datapoint overplotting. The summaries are created using the
contour library; the source model is represented through its contour in the 2D
space, and the target model is represented through its filled-out area. We use
the scan sequentially [19] strategy to show exact word positions. The filter
icons are explained in subsection 4.1.
### 4.1 Concept Embedding Similarity
This explanation displays the cosine similarity between two concepts, enabling
the user to partition the embedding space (T1), detect concept intersections
(T2), and locate ‘unexpected’ associations (T3). In this representation, one
concept is used as an anchor for explanation purposes. The other concept can
be the same as the anchor (e.g., human qualities used twice in Figure 2) or it
may differ from the anchor (e.g., person names as a concept and pronouns as an
anchor in Figure 6). We measure the average cosine similarity between a word
in the concept and the words in each pole of the selected anchor. This helps to
analyze different biases in the data, for instance, whether female
pronouns are more similar to specific stereotype words than male pronouns.
(1) Single Model Explanation – The two anchor word lists represent the two
axes in the scatterplot visualization (e.g., negative qualities represent
y-axis and positive qualities represent x-axis in Figure 2). The average
similarity values between a word in the concept and the anchors are used as
coordinates in the 2D visualization. A word’s (e.g., cheerful in Figure 2)
average similarity to the first anchor word list (e.g., negative qualities)
specifies the word’s y-position and the average similarity to the second
anchor word list (e.g., positive qualities) specifies the word’s x-position.
To support the readability, we add a diagonal line to the visualization as a
point of reference. If a word is more similar to the first word list, then it
will be located on the left-hand-side of the diagonal; if a word is more
similar to the second word list, then it will be located on the right-hand-
side of the diagonal. Words that are equally similar to both word lists are
located on the diagonal. By default, we display all words in the concept word
lists as rectangles and show non-overlapping labels. Since most of the word
lists consist of ca. 100 words, the visualization has overplotting issues that
limit the analysis of concept intersections. To overcome these issues, we add
a contour line around each pole. We use the d3-contour library and set the
bandwidth parameter to 5, which leads to larger areas for more dense
regions; however, single outlier data points are enclosed in separate, smaller
areas, enabling the detection of ‘unexpected’ associations (T3). The area is
colored in the particular concept’s color with a decreased opacity.
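The coordinate computation described above, the average cosine similarity of a concept word to each of the two anchor word lists, can be sketched as follows (function names are illustrative, not the workspace's code):

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def similarity_coordinates(word_vec, anchor_list_1, anchor_list_2):
    """Map one concept word to 2D: the average cosine similarity to the first
    anchor word list (e.g., negative qualities) gives the y-position, and the
    average similarity to the second list (e.g., positive qualities) gives
    the x-position."""
    y = np.mean([cosine(word_vec, a) for a in anchor_list_1])
    x = np.mean([cosine(word_vec, a) for a in anchor_list_2])
    return x, y
```

A word equally similar to both lists yields x = y and therefore lands on the diagonal reference line.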
(a) In layer 11, the PCA projection generates almost identical 2D spaces for
contextualized embeddings extracted from pre-trained BERT and conll2003 named
entity recognizer (see the low opacity of word rectangles in the plot on the
right hand side). In both models, the person names get separated by gender.
(b) In layer 11, the PCA projection of context-0 embeddings from conll2003
named entity recognizer produces four distinct clusters. Two clusters (with
low opacity) have similar neighborhoods in both models. These are rare person
names (e.g., Nevaeh) and long country names (e.g., Trinidad and Tobago).
Person names do not encode gender.
Figure 3: We provide two different types of model comparison designs for
analyzing concept embedding projections, i.e., juxtaposition, where two models
are displayed next to each other and explicit encoding that summarizes
embedding changes through word neighborhood overlaps.
(2) Model Comparison Explanation – As mentioned in section 2, the overall goal
of NLP researchers is to compare models or layers with respect to concept
distributions (T0). The design of comparison visualizations is not trivial, as
described by Gleicher [19]. Thus, in order to consider all relevant aspects,
we follow his design guidelines.
The comparison visualization for Concept Embedding Similarity has to display
two models or layers simultaneously, each showing the distribution of concept
words with respect to selected anchors. Two types of challenges may arise when
designing for this objective: (1) the concepts, as well as models, may
overlap, and (2) word similarity changes may produce patterns that are
difficult to outline all at once. Before we describe the strategies to
overcome these challenges, we name our design considerations. Gleicher [19]
names three design alternatives for comparison visualizations: juxtaposition,
superposition, and explicit encoding. In our workspace, each explanation can
be explored in a juxtaposition design (shown in Figure 2 left) since single
model visualizations are always displayed next to each other on the screen.
This representation has limitations, though. Since we use all the available 2D
space for a single model to reduce word overlaps, the visualizations of the
compared models often have different scales. Thus, the detailed model and
concept overlap analysis is restricted. Therefore, instead of using
juxtaposition, we place two models in the same representation using the
superposition design (shown in Figure 2, right). The superposition is a valid
alternative since the Concept Embedding Similarity visualization has anchors
(which is not the case for projection techniques, as described in the
following).
In the comparison visualization, we display the cosine similarity values
between concept words and anchors for two models simultaneously (T0). We
follow the comparative visualization guidelines and apply two strategies that
enable the analysis of overlapping concepts, models, and word similarity
patterns. First, we provide a summary of the two models. We, therefore,
display only the contours of their word positions; more details (e.g., word
exact positions) are displayed on demand. During the design process, we
created several alternative representations to visually separate the two
models. Each designed alternative was discussed with a group of visual
analytics experts to critically assess the representation’s advantages and
limitations. In particular, we created representations that showed two types
of the density of the visualized words, i.e., discrete as well as continuous.
The discrete representation displayed the density regions through triangles
arranged on a grid layout, whereby each model was represented with triangles
of different sizes and opacity (smaller triangles with higher opacity for the
target model, see design A in the side figure). The continuous representation
summarized the models through their contours (see design B in the side
figure). After several discussions, the latter was selected as the final
design due to its visual smoothness and limited clutter. The final design is
as follows: the first (i.e., source) model is displayed only through contour
borders. Since the words themselves are not visible, we use multiple contour
lines to highlight the density of the word-occurrence regions. The second
(i.e., target) model is displayed through a filled-out area of the contour
regions with transparency. In addition to the model summarization, we apply
the scan sequentially strategy to enable the analysis of word similarity
changes. For this purpose, we implemented filter buttons that can be used to
highlight words that have common properties with respect to their positional
changes (i.e., their position in the source model compared to their position
in the target model in the 2D space). In particular, we measure the angle
between the word’s position in the source and the target model. By hovering
over one of the filter buttons, words with similar positional changes are
highlighted in the visualization. The buttons themselves are colored according
to the anchor to which words in the target model become more similar in
comparison to the source model. An example of the word filtering is shown in
Figure 1.
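The angle-based grouping behind the filter buttons can be illustrated with the following sketch; the exact angle ranges assigned to each button are not specified in the text and are assumptions here:

```python
import numpy as np

def change_angle(src_pos, tgt_pos):
    """Angle (degrees, in [0, 360)) of a word's movement from its position in
    the source model to its position in the target model."""
    dx, dy = np.asarray(tgt_pos) - np.asarray(src_pos)
    return float(np.degrees(np.arctan2(dy, dx)) % 360.0)

def words_moving_towards(positions_src, positions_tgt, lo, hi):
    """Indices of words whose movement angle lies in [lo, hi), the kind of
    group a filter button could highlight, e.g., words becoming more similar
    to one anchor."""
    return [i for i, (s, t) in enumerate(zip(positions_src, positions_tgt))
            if lo <= change_angle(s, t) < hi]
```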
### 4.2 Concept Embedding Projection
The second explanation method displays the words in a 2D visualization,
whereby the 2D positions are obtained using a projection technique such as PCA
on the embedding vectors. This explanation visually partitions the
representation space (T1) and supports the analysis of concept intersections
(T2). Since in the Concept Embedding Similarity explanation we compute the
similarity on high-dimensional vectors, this representation shows the
similarity from a different modeling perspective.
(1) Single Model Explanation – The explanation displays words within one or
two concepts, depending on whether the user wants to analyze one concept or
the overlap of two (un)related concepts. Like in every visualization, we
display words as rectangles and, by default, show labels for words that do not
overlap. To support the readability of dense regions, we designed and
discussed several design alternatives. First, we displayed words using a
scatterplot technique, which is common for displaying projection data (design
A in the side figure). However, because words in the projection often overlap
and the goal of the visualization is to clearly show concept intersections
(T2), this representation was not feasible. Second, we applied a kernel density
estimation algorithm on the projected words to estimate and visualize the
densest regions in the 2D space. We
first represented the density through triangles displayed in a grid layout,
whereas the density value was mapped to the triangles’ opacity (design B in
the side figure). Similar to the simple scatterplot, it was difficult to
detect concept intersections easily. Thus, in the final design, we use
multiple contours showing the estimated density of the different regions
(Figure 3). It allows detecting not only the densest regions but also words
with unexpected associations (T3) (i.e., outliers).
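For PCA, the default of the listed projection techniques, the mapping of embedding vectors to 2D can be sketched via SVD; this sketch stands in for any of the alternatives (MDS, t-SNE, UMAP) and is not the workspace's implementation:

```python
import numpy as np

def pca_2d(embeddings):
    """Project high-dimensional embeddings (one row per word) to 2D with PCA,
    computed via SVD on the mean-centered data matrix."""
    X = embeddings - embeddings.mean(axis=0)
    # Singular vectors are ordered by explained variance, so the first two
    # right-singular vectors span the 2D projection plane.
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T
```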
(2) Model Comparison Explanation – Our goal is to display intersections and
positional changes of one or two concept word lists. The challenge of this
representation is grounded in the artifacts of the applied projection
techniques. In particular, since we rely on projection techniques to compute
word coordinates, the visualization lacks an interpretable point of reference;
projection techniques typically come with artifacts such as rotation or
flipping of the representation space, making the comparison of two spaces
difficult. Like in all other visualizations, the user can explore model
differences in a juxtaposition design since the single model explanations are
always placed next to each other on the screen (as shown in 3(b), left). The
juxtaposition has limitations, though. If the compared models produce
different embedding spaces (which is the case for most of the model and layer
comparisons), they produce 2D spaces that are difficult to align. The
insufficiency of the superposition design is depicted in the side figure.
There, we represent a word’s positional changes through lines,
whereas a line connects the word’s position in the source model with the
position in the target model. Due to rotation artifacts, the comparison of
word changes is restricted even if the changes are minor. Thus, for projection
comparison purposes, we apply the third design alternative, i.e., the explicit
encoding design (as shown in 3(b), right).
For the explicit encoding, we first define relationships to encode in the
visualization [19], i.e., we explain the projection changes through word
nearest neighbors in the 2D space. In particular, after computing the
projection’s coordinates, we compute ten nearest neighbors for each word and
store them as attributes in the data structure. When the user explores two
models according to their embedding projections, we visually explain the
neighborhood overlaps. This, according to design guidelines [19], is an
example of the summarize strategy. Unlike the Concept Embedding Similarity
visualization, we display only a single word’s instance in the visualization.
Its 2D coordinates, by default, are coordinates from the source model. The
user can change it by clicking on the model’s name in the visualization (shown
in 3(b), right). The neighborhood changes are displayed as follows. For each
word, we measure the neighborhood overlap (the number of equal neighbors in
the source and target model) and map it to the size of the word’s rectangle
representation. The higher the overlap, the larger the rectangle and the lower
the opacity. Moreover, we add horizontal
lines to the rectangle, each showing the nearest neighbors from the particular
concept’s pole. As shown in the side figure, in the pre-trained BERT the
person-name Maverick is more similar to countries (blue and green lines on the
left-hand-side) than person names; in the conll2003 named entity recognizer,
this word becomes more similar to person names (yellow and pink lines on the
right-hand-side of the rectangle). An example of two models with similar word
neighborhoods is shown in 3(a), and with different word neighborhoods in
3(b). If the word neighborhoods change, the rectangles are smaller and have a
higher opacity, as shown in 3(b). In addition to the summarize strategy, we
support the scan sequentially strategy to enable the analysis of word
neighborhood changes. The users can filter words based on their neighborhoods
by clicking on the glyph representations displayed on top of the
visualization. The filtered words are highlighted; the rest are faded out
(shown in Figure 4). On mouse over a word, its nearest neighbors in the source
model are highlighted; on click, the nearest neighbors in the target model are
highlighted, enabling a simple neighborhood comparison.
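The neighborhood-overlap computation described above (ten nearest neighbors per word in each 2D projection, with the overlap mapped to rectangle size) can be sketched as follows; this is a brute-force illustration, not the workspace's code:

```python
import numpy as np

def knn_indices(coords, k=10):
    """Indices of each point's k nearest neighbors in the 2D projection."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a point is not its own neighbor
    return np.argsort(d, axis=1)[:, :k]

def neighborhood_overlap(coords_src, coords_tgt, k=10):
    """Per word, the number of shared k-nearest neighbors between the source
    and target projections; higher overlap is rendered as a larger, more
    transparent rectangle."""
    nn_s, nn_t = knn_indices(coords_src, k), knn_indices(coords_tgt, k)
    return np.array([len(set(a) & set(b)) for a, b in zip(nn_s, nn_t)])
```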
Figure 4: Words with similar neighborhoods can be filtered by selecting
particular glyphs. In conll2003 named entity recognizer, country names Jordan
and Chad are more similar to person names than countries.
### 4.3 Concept Prediction Similarity
The third visualization can be used on adapters that have been trained on two-
class classification tasks. It explains the prediction similarity of two
models that are trained on the same task, e.g., whether two sentiment
classifiers produce similar prediction outcomes, and connects the
representation space and the model’s behavior (T4). For this task, the user
has to select one concept; the model then predicts class labels for the words’
assigned sentences.
(1) Single Model Explanation – To provide an overview of prediction
similarity, we aggregate the label information for all sentences in which the
word is used in the corpus and use the average prediction to determine the
word’s x-coordinate in the visualization. In particular, we divide the number
of sentences having the first prediction label (e.g., NEGATIVE sentiment) by
the total number of sentences for the particular word; the more predictions
with the first class label, the closer the point is to the beginning of the
x-axis. If the predictions are equal for both class labels, the word is placed
in the middle of the x-axis. The y-coordinate is determined by the word’s
position in the particular word list. The words themselves are displayed as
rectangles.
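The label aggregation described above reduces to a simple fraction; the axis direction (more first-label predictions place the word closer to the axis start) follows the text, while the function name and label strings are illustrative:

```python
def prediction_x(labels, first_label="NEGATIVE"):
    """Aggregate per-sentence predictions for one word into an x-coordinate:
    the larger the share of the first class label, the closer the word sits
    to the start of the axis (0.0); an even split lands in the middle (0.5)."""
    frac_first = sum(label == first_label for label in labels) / len(labels)
    return 1.0 - frac_first
```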
(2) Model Comparison Explanation – In the comparison visualization, our goal
is to show the prediction differences between two models (T0). Since in this
visualization we have clear anchors (the prediction labels), we can apply a
similar design approach as for the Concept Embedding Similarity plot. In
particular, we use both juxtaposition as well as superposition designs. In the
superposition design, both models are represented in the same visualization,
as shown in Figure 5. We stick to the same design as for the Concept Embedding
Similarity plot and first summarize the model predictions through contours.
The source model is represented through the contour’s borders; the target
model’s contours are filled out with a decreased opacity. The user can click
on the filtering icons displayed on top of the visualization; the prediction
changes are highlighted accordingly, supporting the scan sequentially
strategy.
### 4.4 Explanation Details
When explaining model changes, researchers usually try to find the reasons for
particular patterns in the data. Thus, we designed three visualizations to
explain patterns in the comparison visualizations.
Context Concordance View – The patterns in the Concept Embedding Similarity
visualization can be influenced by the word contexts (sentences) from which
the contextualized word embeddings are extracted. Thus, for this
visualization, we added a Context Concordance View that lists all sentences in
which a word is used in the corpus (shown in Visual Comparison of Language
Model Adaptation, right). The view is displayed when clicking on the
particular word in the Concept Embedding Similarity visualization. There, the
selected word is highlighted for a better comparison.
Projection Artifact View – We propose a dense pixel visualization to explore
the latent space and reveal semantically similar embeddings. The pixel
visualization is inspired by Shin et al.’s [63] stripe-based visualization of
word embeddings. The primary goal is to create a compact visual summary of the
embeddings with all dimensions without using dimensionality reduction methods
(e.g., PCA). The pixel visualization displays each embedding as a vertical
pixel bar, a grid-shaped column where each colored pixel (rectangle) is an
embedding feature value. To this end, we normalize the embeddings to the unit
length and color the pixels according to a diverging color scheme. Then we
place the pixel bars next to each other on the x-axis, producing a dense pixel
visualization. The y-axis displays the 768 embedding dimensions, and the rows
are ordered by the median of the visualized embedding dimensions to highlight
block and band patterns [2]. The x-axis can be reordered via linking and
brushing in the single model explanations to interactively create clusters,
which are then highlighted and displayed as a block of embeddings. Alternatively, the embeddings
can be clustered with HDBSCAN [8] based on cosine similarity to detect clusters
of similar embeddings. We can explore clusters in latent space through
clustering without relying on dimensionality reduction methods, which
typically produce some artifacts. Overall, comparing the colored pixel bars
enables us to perceive pairwise similarities between the embeddings and
generate new insights into the latent space, such as identifying groups of
similar embeddings, meaningful embedding dimensions, or outliers.
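The preprocessing for the pixel display, unit-length normalization of each embedding followed by median-based row ordering, can be sketched as follows (function name is illustrative):

```python
import numpy as np

def pixel_matrix(embeddings):
    """Prepare embeddings for the dense pixel display: normalize each
    embedding (one row of the input) to unit length, transpose so that each
    column becomes one vertical pixel bar, and order the rows (embedding
    dimensions) by their median value to bring out block and band patterns."""
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    M = E.T  # rows = embedding dimensions, columns = embeddings
    order = np.argsort(np.median(M, axis=1))
    return M[order]
```

Each cell of the returned matrix would then be colored with the diverging color scheme described above.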
Figure 5: Concept Prediction Similarity shows two sentiment classifiers (see
A). Compared to the sst-2 model (contour borders), the rotten-tomatoes model
(filled areas) classifies sentences with occurrences of positive and negative
human qualities more often as NEGATIVE (B).
Prediction View – To explore the exact prediction differences in the Concept
Prediction Similarity comparison visualization, we display the predicted
labels for all sentences assigned to a word in the Prediction View (shown in
Visual Comparison of Language Model Adaptation, right). The view is displayed
when selecting a word in the Concept Prediction Similarity visualization.
Figure 6: Context-0 embeddings are used for evaluation purposes in Word
Embedding Association Tests [71, 34]. Their produced spaces differ from the
contextualized ones, though. Although context-0 embeddings suggest that the
debiasing adapter by [34] inverts the gender bias of the pre-trained BERT, the
PCA projection on contextualized embeddings shows that the adapter
successfully eliminates the gender information.
## 5 Evaluation
We conducted expert case studies [60] with the experts from the requirement
analysis (see section 2) to gather initial feedback on the sufficiency of the
visualizations for model comparison tasks. We further gathered positive
(informal) feedback from two computational linguistic professors on the
designed workspace. We present insights created for three out of six models
introduced in subsection 3.2: the pre-trained BERT, the debiasing adapter for
BERT by Lauscher et al. [34], and the conll2003 named entity recognizer. We
plan to extend the study with more participants to quantitatively evaluate the
usability of the interface.
### 5.1 Expert Study Setup
The following insights were created collaboratively with two experts in
natural language processing tasks. The study was conducted online in the form
of a video conference. The experts had two main tasks: (1) to investigate
models related to bias and (2) to explore the limitations of a named entity
recognition model. The experts further analyzed predictions for sentiment
classifiers (T4) as described in subsection 4.3; however, they are not
included in the case study description below due to the paper’s space
considerations. The study was concluded with a semi-structured interview about
the workspace’s usability.
Data – The data for the study included the 10 human-interpretable concepts
introduced in subsection 3.1. The contextualized word embedding
representations were extracted from the Yelp dataset [73], whereby each word
in the concept list was represented by up to 300 contexts.
Tasks – For the analysis related to bias detection, the interface provides the
debiasing model trained by Lauscher et al. [35]. We use their evaluation
results as ground truth to investigate whether the insights can be replicated
using our workspace. In particular, the authors show that the model is
effective in attenuating gender biases according to most of the applied
evaluation methods. However, the results of the Word Embedding Association
Test (WEAT) [7] are less successful. The WEAT test measures the association
between two target word sets (e.g., male and female pronouns) based on their
mean cosine similarity to words from two attribute sets (e.g., science and art
terms); the similarity is measured on context-0 (i.e., static [35]) word
embeddings. Lauscher et al. observe that, according to the
WEAT test, the pre-trained BERT model shows no significant bias; however, the
debiasing adapter does not reduce the bias but instead inverts it. The
participants thus received the task to evaluate the particular adapter
regarding two specific analysis tasks: (1) to inspect how the embedding space
is partitioned for gender-related concepts (T1) and (2) to explore gender-
related concept intersections (T2).
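For reference, the WEAT [7] effect size that these analysis tasks revolve around can be sketched on static embeddings as follows; this follows the standard formulation, with variable names of our choosing:

```python
import numpy as np

def _cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def weat_effect_size(X, Y, A, B):
    """WEAT effect size: differential association of two target word sets
    X, Y (e.g., male vs. female pronouns) with two attribute word sets A, B
    (e.g., science vs. art terms), computed on static (context-0)
    embeddings. Positive values indicate X is more associated with A."""
    def s(w):
        # Association of one word: mean similarity to A minus mean sim. to B.
        return np.mean([_cos(w, a) for a in A]) - np.mean([_cos(w, b) for b in B])
    sx, sy = [s(x) for x in X], [s(y) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)
```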
Their second task was to analyze the conll2003 named entity recognizer
concerning its learning capabilities of specific named entity categories such
as person names and countries. Their particular analysis tasks were to
investigate whether the model partitions the embedding space according to the
different categories (T1), whether there are intersections between the
categories (T2), and whether the model produces ‘unexpected’ associations (T3)
between specific named entities.
### 5.2 Expert Case Studies
In the following, we describe gained insights for the specified tasks.
(Task 1) Bias in Language Models – To gain insights into the gender-related
concept representation and their intersections, the participants investigated
the Concept Embedding Similarity visualization. They selected the pre-trained
BERT and debiasing models and analyzed the word similarities between different
concepts (e.g., person names as shown in Figure 6) to pronouns that were
displayed as anchors in the visualization. The visualization revealed that in
the upper layers (e.g., layer 11) of the pre-trained BERT, context-0
embeddings for person names are slightly more similar to male pronouns than
female pronouns, but the difference is insignificant. However, in the debiasing
adapter, most of these person names (even male person names) are more similar
to female pronouns. Similar patterns could be observed for other concepts
(e.g., gender-related stereotypes, countries), which matches the observations
by Lauscher et al. [34]. It is important to notice that this ‘bias inversion’
is visible only for context-0 embeddings. When exploring the relationships
between the same concepts computed on contextualized word embeddings (in
Figure 6), both Concept Embedding Similarity and Concept Embedding Projection
visualizations show that the debiasing adapter was able to eliminate the
gender information – the visualizations show no separation between the person-
name and pronoun concepts. However, in the pre-trained BERT, female person
names are more similar to female pronouns and male person names are more
similar to male pronouns. The visualizations reveal that most of the models
obtain the gender information from the word’s context, and it is not encoded
in the word (e.g., person name) itself. The only exception is the sst-2
sentiment classifier; there, even context-0 embeddings get separated by gender
(side figure). Unlike the other adapters, the sst-2 model is trained on
phrases extracted from Stanford parse trees rather than full sentences. Thus,
words in isolation that are used to extract the context-0 embeddings present
an unnatural input to most of the models [6]; however, the input is less
unnatural for the sst-2 model since some of its training instances are one or
two words long.
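The similarity comparison described above can be sketched with plain cosine similarities against pronoun anchor vectors. The vectors and names below are illustrative stand-ins, not the workspace's actual embeddings:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pronoun_anchor_similarity(word_vecs, male_anchor, female_anchor):
    """For every word, report its similarity to the male and female
    pronoun anchors, mirroring the Concept Embedding Similarity view."""
    return {w: {"male": cosine(v, male_anchor),
                "female": cosine(v, female_anchor)}
            for w, v in word_vecs.items()}

# Toy 4-d vectors standing in for layer-11 context-0 embeddings.
rng = np.random.default_rng(42)
male, female = rng.normal(size=4), rng.normal(size=4)
names = {"Alice": rng.normal(size=4), "Bob": rng.normal(size=4)}
for name, sims in pronoun_anchor_similarity(names, male, female).items():
    lean = "male" if sims["male"] > sims["female"] else "female"
    print(f"{name}: leans {lean} ({sims['male']:.2f} vs {sims['female']:.2f})")
```

A ‘bias inversion’ like the one observed in the debiasing adapter would surface here as most person names, male ones included, leaning toward the female anchor.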
(Task 2) Named Entity Recognition – To analyze the learning capabilities of
the conll2003 named entity recognizer, the participants explored the Concept
Embedding Similarity visualization for the concept low/high-GDP countries –
two word lists, each grouping countries with a similar GDP rank according to
2020 statistics. As shown in the side
figure, the conll2003 model learns that most of the countries are similar
without encoding their welfare (see the top-right corner). By exploring the
word positions, one can see that the model does not recognize the country
Eswatini since its similarity to both low-GDP and high-GDP countries is low
(0.31) compared to other countries that have a similarity of 0.8.
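The check that flagged Eswatini can be phrased as a mean word-to-concept similarity with a cutoff; the threshold value and toy vectors below are illustrative assumptions:

```python
import numpy as np

def mean_concept_similarity(word_vec, concept_vecs):
    """Average cosine similarity of one word embedding to all words of a concept."""
    w = word_vec / np.linalg.norm(word_vec)
    sims = [float(np.dot(w, c / np.linalg.norm(c))) for c in concept_vecs]
    return float(np.mean(sims))

def unrecognized_members(word_vecs, concept_vecs, threshold=0.5):
    """Words whose mean similarity to the concept stays below the threshold
    are candidates the model fails to recognize (e.g., Eswatini at 0.31
    versus ~0.8 for the other countries)."""
    return sorted(w for w, v in word_vecs.items()
                  if mean_concept_similarity(v, concept_vecs) < threshold)
```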
Next, the participants analyzed the model’s distinction between person names
and country names – a typical task for a named entity recognizer. The Concept
Embedding Projection visualization of the two concepts is shown in Figure 3.
In the early layers, both models produce similar word neighborhoods and the
person names and country names have a poor separation. In upper layers (e.g.,
layer 11 in 3(b)), the projection of conll2003 embeddings displays four
clusters. One cluster contains country names (Figure 7 cluster A) and another
– person names (Figure 7 cluster B). The neighborhoods of the two smaller
clusters are similar to those in the pre-trained BERT, suggesting that the
conll2003 model did not capture any new properties for these particular words.
By interactively exploring the word neighborhoods, one can observe that one
cluster consists of rare person names (e.g., Nevaeh), whereas the other
contains relatively long country names (e.g., Trinidad and Tobago). Since the
visualizations show the context-0 embeddings, the person names are not
separated by gender. To investigate whether the four clusters are artifacts
generated by the PCA projection, the embeddings values were displayed in the
Projection Artifact View. Figure 7 shows that the values for embedding vectors
within one cluster produce similar patterns, suggesting that the four clusters
are not artifacts generated by the projection. The separation between long and
short country names, as well as common and rare person names, might be a
consequence of long and rare words not being in BERT’s vocabulary; thus, it
might be an artifact of averaging sub-token embedding vectors and must be
investigated further.
Figure 7: In Projection Artifact View, the user can explore embedding vectors
aligned as columns in a pixel visualization. We use a bipolar color scale to
show vector values (from min blue to max orange).
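The Projection Artifact View’s visual check can be approximated numerically: if the raw (pre-projection) vectors inside each cluster are more similar to one another than to vectors in the other cluster, the clusters are unlikely to be artifacts of the PCA projection. A rough sketch under that assumption:

```python
import numpy as np

def _normalize(vecs):
    vecs = np.asarray(vecs, dtype=float)
    return vecs / np.linalg.norm(vecs, axis=1, keepdims=True)

def mean_within(vecs):
    """Mean pairwise cosine similarity inside one cluster of raw vectors."""
    n = _normalize(vecs)
    sims = n @ n.T
    k = len(vecs)
    return float((sims.sum() - k) / (k * (k - 1)))

def clusters_survive_raw_check(cluster_a, cluster_b):
    """True if both clusters are tighter internally than across clusters,
    i.e. the separation exists in the raw embedding space and not only
    in the 2-D projection."""
    across = float((_normalize(cluster_a) @ _normalize(cluster_b).T).mean())
    return mean_within(cluster_a) > across and mean_within(cluster_b) > across
```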
### 5.3 Preliminary Expert Feedback
The experts provided positive feedback concerning the workspace’s
applicability for model evaluation and comparison tasks. They described the
interface as intuitive and easy to use. The experts found it useful to have
the option to choose between different concepts and, in particular with
respect to bias, different ways to quantify it. This allows them to evaluate
the models along ‘different axes’, which is in accordance with work showing
that bias manifests in multiple ways. The experts also
appreciated the ability to analyze both the representations and the
predictions that provide two complementary ways to explain a model: the
prediction-based view focuses on the more high level ‘interface’ (i.e.,
model’s predictions) while the representation analysis focuses on its actual
working mechanism (i.e., how these predictions are derived). The workspace
also demonstrates and makes use of one of the advantages of adapters over
other fine-tuning methods – the fact they are easily integrated into one pre-
trained model without having to fine-tune a different model per task.
One important advantage of our workspace was described by the experts as
follows. Adapters are usually tested in-domain (e.g., people train for the
sentiment task and evaluate on sentiment prediction). The ‘side-effects’ the
training has on other aspects are often unaddressed. Thus, it was appreciated
that the workspace puts emphasis on evaluating a given adapter according to
metrics that are not necessarily related to the main tasks it was trained on.
The interface with its diverse concepts brings another advantage, particularly
for the bias evaluation tasks. According to the experts, while certain notions
of bias are well studied, the more interesting cases are those which are more
subtle and less intuitive or straightforward. The workspace makes it easier to
explore the representation space of the models and potentially discover new
notions of bias, or more generally, undesired properties of the model in
question, as depicted in subsection 5.2. The limitations of the workspace
are formulated as research opportunities in the following section.
## 6 Discussion and Research Opportunities
In the previous section, we presented how we can use our workspace to gain
insights into model specificities. During the design and evaluation process,
we discovered several opportunities for future research.
Comparison of Numerous Models – Currently, our workspace supports the direct
comparison of two models at a time. An interesting research challenge would be
to display more than two models in the same comparison visualization. While
designing our visualizations, we faced challenges in selecting designs
that allow visually separating the two models. Displaying more than two
models simultaneously would require new visual design
alternatives.
Supporting Model Fine-Tuning – Our work is a step toward effectively comparing
adapter models. It is still limited to exploratory tasks and, at this point,
does not actively suggest which actions to undertake to improve adapter
performance. We see this, however, as a very important direction for future
work. The system should provide insights into the models’ strengths and
limitations and, in an ideal case, also provide hints or suggestions on which
steps should be undertaken (e.g., adapting the training dataset) to
improve the models’ performance.
Visual Explanations Combined with Probing Classifiers – During our
collaboration, the NLP researchers mentioned several potential extensions
concerning the functionality of the workspace. Since they commonly train
probing classifiers to investigate concept intersections, they suggested these as an
extension to the visual explanation methods. The two methods used in parallel
could increase their trust in the generated insights. In particular, if the
projection and the classifier produce similar results, it is more likely to be
true and less likely to be an artifact of the particular method in use.
Support for Adapter Training – Currently, our workspace supports the analysis
of adapters from the AdapterHub repository. The framework, however, supports
different adapter composition techniques, such as adapter stacking [50] as
well as their fusion [48]. We plan to extend the workspace in a way that
researchers could train new adapters in the interface by applying the
different adapter composition methods and directly evaluate their created
representation spaces, which, hopefully, would lead to better-performing
models for downstream tasks.
## 7 Conclusion
We presented a novel visual analytics workspace for the analysis and
comparison of LMs that are adapted for different masked language modeling and
downstream classification tasks. The design was motivated by requirements
gathered during a literature review and collaboration with NLP researchers. We
introduced three new comparison visualizations: Concept Embedding Similarity,
Concept Embedding Projection, and Concept Prediction Similarity that were
designed by applying the comparative visualization guidelines by Gleicher
[19]. We show the applicability of the workspace through expert case studies,
confirm findings from the related work, and generate new insights into adapter
learning properties. A demo is available as part of the LingVis framework [13]
under: https://adapters.demo.lingvis.io/.
## Acknowledgments
This paper was funded by the Deutsche Forschungsgemeinschaft (DFG, German
Research Foundation) within projects BU 1806/10-2 “Questions Visualized” of
the FOR2111, and the ETH AI Center.
## References
* [1] D. L. Arendt, N. Nur, Z. Huang, G. Fair, and W. Dou. Parallel Embeddings: A Visualization Technique for Contrasting Learned Representations. In Proc. of the 25th Int. Conf. on Intelligent User Interfaces, pp. 259–274, 2020.
* [2] M. Behrisch, B. Bach, N. Henry Riche, T. Schreck, and J.-D. Fekete. Matrix reordering methods for table and network visualization. In Computer Graphics Forum, vol. 35, pp. 693–716. Wiley Online Library, 2016.
* [3] M. Berger. Visually Analyzing Contextualized Embeddings. In IEEE Visualization Conf. (VIS), pp. 276–280. IEEE Computer Society, Los Alamitos, CA, USA, Oct. 2020.
* [4] S. L. Blodgett, S. Barocas, H. Daumé III, and H. Wallach. Language (technology) is power: A critical survey of “bias” in NLP. In Proc. of the Association for Computational Linguistics, pp. 5454–5476. Association for Computational Linguistics, Online, July 2020.
* [5] A. Boggust, B. Carter, and A. Satyanarayan. Embedding Comparator: Visualizing Differences in Global Structure and Local Neighborhoods via Small Multiples. In 27th Int. Conf. on Intelligent User Interfaces, pp. 746–766, 2022.
* [6] R. Bommasani, K. Davis, and C. Cardie. Interpreting Pretrained Contextualized Representations via Reductions to Static Embeddings. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 4758–4781. Association for Computational Linguistics, Online, July 2020. doi: 10 . 18653/v1/2020 . acl-main . 431
* [7] A. Caliskan, J. J. Bryson, and A. Narayanan. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334):183–186, 2017.
* [8] R. J. Campello, D. Moulavi, and J. Sander. Density-based clustering based on hierarchical density estimates. In Pacific-Asia Conf. on Knowledge Discovery and Data Mining, pp. 160–172. Springer, 2013.
* [9] M. S. T. Carpendale. Considering visual variables as a basis for information visualisation. PRISM, 2003.
* [10] J. F. DeRose, J. Wang, and M. Berger. Attention flows: Analyzing and comparing attention mechanisms in language models. IEEE Trans. on Visualization and Computer Graphics, 2020.
* [11] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
* [12] D. Edmiston. A Systematic Analysis of Morphological Content in BERT Models for Multiple Languages. arXiv preprint arXiv:2004.03032, 2020.
* [13] M. El-Assady, W. Jentner, F. Sperrle, R. Sevastjanova, A. Hautli, M. Butt, and D. Keim. lingvis.io – A Linguistic Visual Analytics Framework. In Proc. of the Association for Computational Linguistics: System Demonstrations, pp. 13–18, 2019.
* [14] Y. Elazar and Y. Goldberg. Adversarial removal of demographic attributes from text data. In Proc. of the 2018 Conf. on Empirical Methods in Natural Language Processing, pp. 11–21, 2018.
* [15] K. Ethayarajh. How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings. In Proc. of the Conf. on Empirical Methods in Natural Language Proc. and the Int. Joint Conf. on Natural Language Processing (EMNLP-IJCNLP), pp. 55–65. ACL, Hong Kong, China, Nov. 2019.
* [16] Y. Ganin and V. Lempitsky. Unsupervised domain adaptation by backpropagation. In Int. Conf. on Machine Learning, pp. 1180–1189. PMLR, 2015.
* [17] I. Garrido-Muñoz, A. Montejo-Ráez, F. Martínez-Santiago, and L. A. Ureña-López. A survey on bias in deep nlp. Applied Sciences, 11(7):3184, 2021.
* [18] G. Glavaš, A. Ganesh, and S. Somasundaran. Training and domain adaptation for supervised text segmentation. In Proc. of the 16th Workshop on Innovative Use of NLP for Building Educational Applications, pp. 110–116. Association for Computational Linguistics, Online, Apr. 2021.
* [19] M. Gleicher. Considerations for visualizing comparison. IEEE Trans. on Visualization and Computer Graphics, 24:413–423, 2018.
* [20] M. Glockner, V. Shwartz, and Y. Goldberg. Breaking NLI Systems with Sentences that Require Simple Lexical Inferences. In Proc. of the 56th Association for Computational Linguistics, pp. 650–655. ACL, 2018.
* [21] S. Gururangan, A. Marasović, S. Swayamdipta, K. Lo, I. Beltagy, D. Downey, and N. A. Smith. Don’t stop pretraining: Adapt language models to domains and tasks. In Proc. of the Association for Computational Linguistics, pp. 8342–8360. Association for Computational Linguistics, Online, July 2020.
* [22] X. Han and J. Eisenstein. Unsupervised domain adaptation of contextualized embeddings for sequence labeling. In EMNLP, 2019.
* [23] F. Heimerl and M. Gleicher. Interactive analysis of word vector embeddings. In Computer Graphics Forum, vol. 37, pp. 253–265. Wiley Online Library, 2018.
* [24] F. Heimerl, C. Kralj, T. Moller, and M. Gleicher. embcomp: Visual interactive comparison of vector embeddings. IEEE Trans. on Visualization and Computer Graphics, 2020.
* [25] B. Hoover, H. Strobelt, and S. Gehrmann. exBERT: A Visual Analysis Tool to Explore Learned Representations in Transformers Models. In Proc. of the Association for Computational Linguistics, System Demonstrations. ACL, 2020.
* [26] N. Houlsby, A. Giurgiu, S. Jastrzebski, B. Morrone, Q. De Laroussilhe, A. Gesmundo, M. Attariyan, and S. Gelly. Parameter-efficient transfer learning for nlp. In Int. Conf. on Machine Learning, pp. 2790–2799. PMLR, 2019.
* [27] J. Howard and S. Ruder. Universal language model fine-tuning for text classification. In Proc. of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 328–339. Association for Computational Linguistics, Melbourne, Australia, July 2018.
* [28] S. Jain and B. C. Wallace. Attention is not explanation. In NAACL, 2019.
* [29] G. Jawahar, B. Sagot, and D. Seddah. What does BERT learn about the structure of language? In Proc. of the Association for Computational Linguistics, pp. 3651–3657. ACL, Florence, Italy, July 2019.
* [30] I. T. Jolliffe and J. Cadima. Principal component analysis: a review and recent developments. Philosophical Trans. of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2065):20150202, 2016.
* [31] Y. Kim, P. Petrov, P. Petrushkov, S. Khadivi, and H. Ney. Pivot-based transfer learning for neural machine translation between non-English languages. In Proc. of the 2019 Conf. on Empirical Methods in Natural Language Processing and the 9th Int. Joint Conf. on Natural Language Processing (EMNLP-IJCNLP), pp. 866–876. Association for Computational Linguistics, Hong Kong, China, Nov. 2019. doi: 10 . 18653/v1/D19-1080
* [32] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, D. Hassabis, C. Clopath, D. Kumaran, and R. Hadsell. Overcoming catastrophic forgetting in neural networks. Proc. of the National Academy of Sciences, 114(13):3521–3526, 2017.
* [33] J. B. Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29(1):1–27, 1964.
* [34] A. Lauscher, T. Lueken, and G. Glavaš. Sustainable modular debiasing of language models. In Findings of the Association for Computational Linguistics: EMNLP 2021, pp. 4782–4797. Association for Computational Linguistics, Punta Cana, Dominican Republic, Nov. 2021.
* [35] A. Lauscher, O. Majewska, L. F. R. Ribeiro, I. Gurevych, N. Rozanov, and G. Glavaš. Common Sense or World Knowledge? Investigating Adapter-Based Knowledge Injection into Pretrained Transformers. In Proc. of Deep Learning Inside Out (DeeLIO): The First Workshop on Knowledge Extraction and Integration for Deep Learning Architectures, pp. 43–49. Association for Computational Linguistics, Online, Nov. 2020.
* [36] Q. Li, K. S. Njotoprawiro, H. Haleem, Q. Chen, C. Yi, and X. Ma. Embeddingvis: A visual analytics approach to comparative network embedding inspection. In 2018 IEEE Conf. on Visual Analytics Science and Technology (VAST), pp. 48–59. IEEE, 2018.
* [37] Y. Lin, Y. C. Tan, and R. Frank. Open Sesame: Getting inside BERT’s Linguistic Knowledge. In Proc. of the ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pp. 241–253. ACL, Florence, Italy, Aug. 2019.
* [38] Z. Lin, A. Madotto, and P. Fung. Exploring versatile generative language model via parameter-efficient transfer learning. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 441–459. Association for Computational Linguistics, Online, Nov. 2020.
* [39] S. Liu, P.-T. Bremer, J. J. Thiagarajan, V. Srikumar, B. Wang, Y. Livnat, and V. Pascucci. Visual exploration of semantic relationships in neural word embeddings. IEEE Trans. on Visualization and Computer Graphics, 24(1):553–562, 2017.
* [40] S. Liu, Z. Li, T. Li, V. Srikumar, V. Pascucci, and P.-T. Bremer. Nlize: A perturbation-driven visual interrogation tool for analyzing and interpreting natural language inference models. IEEE Trans. on Visualization and Computer Graphics, 25(1):651–660, 2018.
* [41] R. Marvin and T. Linzen. Targeted Syntactic Evaluation of Language Models. In Proc. of the Conf. on Empirical Methods in Natural Language Processing, pp. 1192–1202. ACL, Brussels, Belgium, Oct.-Nov. 2018.
* [42] L. McInnes, J. Healy, N. Saul, and L. Grossberger. UMAP: Uniform Manifold Approximation and Projection. The Journal of Open Source Software, 3(29):861, 2018.
* [43] N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan. A survey on bias and fairness in machine learning. ACM Computing Surveys (CSUR), 54(6):1–35, 2021.
* [44] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
* [45] S. Miksch and W. Aigner. A matter of time: Applying a data–users–tasks design triangle to visual analytics of time-oriented data. Computers & Graphics, 38:286–290, 2014.
* [46] C. Park, I. Na, Y. Jo, S. Shin, J. Yoo, B. C. Kwon, J. Zhao, H. Noh, Y. Lee, and J. Choo. Sanvis: Visual analytics for understanding self-attention networks. In IEEE Visualization Conf. (VIS), pp. 146–150. IEEE, 2019.
* [47] J. Pennington, R. Socher, and C. Manning. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543. Association for Computational Linguistics, Doha, Qatar, Oct. 2014. doi: 10 . 3115/v1/D14-1162
* [48] J. Pfeiffer, A. Kamath, A. Rücklé, K. Cho, and I. Gurevych. AdapterFusion: Non-destructive task composition for transfer learning. In Proc. of the 16th Conf. of the European Chapter of the Association for Computational Linguistics, pp. 487–503, 2021.
* [49] J. Pfeiffer, A. Rücklé, C. Poth, A. Kamath, I. Vulić, S. Ruder, K. Cho, and I. Gurevych. AdapterHub: A framework for adapting transformers. In Proc. of the 2020 Conf. on Empirical Methods in Natural Language Processing (EMNLP 2020): Systems Demonstrations, pp. 46–54. Association for Computational Linguistics, Online, 2020.
* [50] J. Pfeiffer, I. Vulić, I. Gurevych, and S. Ruder. MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer. In Proc. of the 2020 Conf. on Empirical Methods in Natural Language Processing (EMNLP), pp. 7654–7673. Association for Computational Linguistics, Online, Nov. 2020.
* [51] M. Q. Pham, J. M. Crego, F. Yvon, and J. Senellart. A study of residual adapters for multi-domain neural machine translation. In Proc. of the Fifth Conf. on Machine Translation, pp. 617–628. Association for Computational Linguistics, Online, Nov. 2020.
* [52] J. Phang, T. Févry, and S. R. Bowman. Sentence Encoders on STILTs: Supplementary Training on Intermediate Labeled-data Tasks. ArXiv, abs/1811.01088, 2018.
* [53] J. Philip, A. Berard, M. Gallé, and L. Besacier. Monolingual adapters for zero-shot neural machine translation. In Proc. of the 2020 Conf. on Empirical Methods in Natural Language Processing (EMNLP), pp. 4465–4470. Association for Computational Linguistics, Online, Nov. 2020.
* [54] C. Poth, J. Pfeiffer, A. Rücklé, and I. Gurevych. What to pre-train on? Efficient intermediate task selection. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pp. 10585–10605. Association for Computational Linguistics, Online and Punta Cana, Dominican Republic, Nov. 2021.
* [55] X. Qiu, T. Sun, Y. Xu, Y. Shao, N. Dai, and X. Huang. Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63(10):1872–1897, 2020.
* [56] S. Ravfogel, Y. Elazar, H. Gonen, M. Twiton, and Y. Goldberg. Null it out: Guarding protected attributes by iterative nullspace projection. In Proc. of the Association for Computational Linguistics, pp. 7237–7256, 2020.
* [57] S. Ravfogel, M. Twiton, Y. Goldberg, and R. D. Cotterell. Linear adversarial concept erasure. In K. Chaudhuri, S. Jegelka, L. Song, C. Szepesvari, G. Niu, and S. Sabato, eds., Proc. of the 39th Int. Conf. on Machine Learning, vol. 162 of Proc. of Machine Learning Research, pp. 18400–18421. PMLR, 17–23 Jul 2022.
* [58] K. Richardson, H. Hu, L. S. Moss, and A. Sabharwal. Probing Natural Language Inference Models through Semantic Fragments. In Association for the Advancement of Artificial Intelligence (AAAI), pp. 8713–8721. AAAI Press, 2020.
* [59] A. Rogers, O. Kovaleva, and A. Rumshisky. A Primer in BERTology: What We Know About How BERT Works. Trans. of the Association for Computational Linguistics, 8:842–866, 2020.
* [60] M. Sedlmair, M. Meyer, and T. Munzner. Design Study Methodology: Reflections from the Trenches and the Stacks. IEEE Trans. on Visualization and Computer Graphics, 18(12):2431–2440, Dec. 2012.
* [61] R. Sevastjanova, A.-L. Kalouli, C. Beck, H. Hauptmann, and M. El-Assady. Explaining Contextualization in Language Models using Visual Analytics. In Proc. of the Association for Computational Linguistics, ACL. ACL, 2021.
* [62] R. Sevastjanova, A.-L. Kalouli, C. Beck, H. Hauptmann, and M. El-Assady. LMFingerprints: Visual Explanations of Language Model Embedding Spaces through Layerwise Contextualization Scores. Computer Graphics Forum, 41(3):295–307, 2022.
* [63] J. Shin, A. Madotto, and P. Fung. Interpreting word embeddings with eigenvector analysis. In 32nd Conf. on Neural Information Processing Systems (NIPS 2018), IRASL workshop, 2018.
* [64] M. Sips, B. Neubert, J. P. Lewis, and P. Hanrahan. Selecting good views of high-dimensional data using class consistency. In Computer Graphics Forum, vol. 28, pp. 831–838. Wiley Online Library, 2009.
* [65] V. Sivaraman, Y. Wu, and A. Perer. Emblaze: Illuminating machine learning representations through interactive comparison of embedding spaces. In 27th Int. Conf. on Intelligent User Interfaces, pp. 418–432, 2022.
* [66] H. Strobelt, S. Gehrmann, M. Behrisch, A. Perer, H. Pfister, and A. M. Rush. Seq2Seq-Vis: A visual debugging tool for sequence-to-sequence models. IEEE Trans. on Visualization and Computer Graphics, 25(1):353–363, 2018.
* [67] H. Strobelt, B. Hoover, A. Satyanarayan, and S. Gehrmann. LMdiff: A visual diff tool to compare language models. In Proc. of the 2021 Conf. on Empirical Methods in Natural Language Processing: System Demonstrations, pp. 96–105. Association for Computational Linguistics, Online and Punta Cana, Dominican Republic, Nov. 2021.
* [68] L. Van der Maaten and G. Hinton. Visualizing data using t-SNE. Journal of machine learning research, 9(11), 2008.
* [69] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. In Proc. of the 31st Int. Conf. on Neural Information Processing Systems, NIPS’17, pp. 6000–6010. Curran Associates Inc., Red Hook, NY, USA, 2017.
* [70] J. Vig. A Multiscale Visualization of Attention in the Transformer Model. In Proc. of the Association for Computational Linguistics: System Demonstrations, pp. 37–42. Association for Computational Linguistics, Florence, Italy, July 2019.
* [71] I. Vulić, S. Baker, E. M. Ponti, U. Petti, I. Leviant, K. Wing, O. Majewska, E. Bar, M. Malone, T. Poibeau, R. Reichart, and A. Korhonen. Multi-SimLex: A Large-Scale Evaluation of Multilingual and Crosslingual Lexical Semantic Similarity. Computational Linguistics, 46(4):847–897, 02 2020. doi: 10 . 1162/coli_a_00391
* [72] Q. Xie, Z. Dai, Y. Du, E. Hovy, and G. Neubig. Controllable invariance through adversarial feature learning. Advances in Neural Information Processing Systems, 30, 2017.
* [73] X. Zhang, J. Zhao, and Y. LeCun. Character-level convolutional networks for text classification. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, eds., Advances in Neural Information Processing Systems, vol. 28. Curran Associates, Inc., 2015.
* [74] M. Zhao, P. Dufter, Y. Yaghoobzadeh, and H. Schütze. Quantifying the Contextualization of Word Representations with Semantic Class Probing. In Findings of the Association for Computational Linguistics: EMNLP 2020, pp. 1219–1234. Association for Computational Linguistics, Online, Nov. 2020.
# AffectiveNet: Affective-Motion Feature Learning for Micro Expression
Recognition
Monu Verma, Santosh Kumar Vipparthi, and Girdhari Singh Monu Verma, Santosh
Kumar Vipparthi and Girdhari Singh are with Vision Intelligence Lab at
Department of Computer Science and Engineering, Malaviya National Institute of
Technology, Jaipur, India (Email<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
Micro-expressions are hard to spot due to fleeting and involuntary movements of
facial muscles. Interpreting micro emotions from video clips is a challenging
task. In this paper, we propose affective-motion imaging, which cumulates the
rapid and short-lived variational information of micro expressions into a
single response. Moreover, we propose AffectiveNet, an affective-motion
feature learning network that can perceive subtle changes and learn the most
discriminative dynamic features to describe the emotion classes. AffectiveNet
comprises two blocks: MICRoFeat and MFL. The MICRoFeat block conserves
scale-invariant features, which allows the network to capture both coarse and
fine edge variations, while the MFL block learns micro-level dynamic
variations from two different intermediate convolutional layers. The
effectiveness of the proposed network is tested over four datasets by using
two experimental setups: person-independent (PI) and cross-dataset (CD)
validation. The experimental results show that the proposed network
outperforms state-of-the-art MER approaches by a significant margin.
###### Index Terms:
Affective-Motion Imaging, multi-scale features, AffectiveNet, micro-expression
recognition.
## 1 Introduction
Micro expressions (ME) have rich source of information to reveal the true
emotions of a person. MEs originate in high stake situations, when a person is
trying to repress his/her genuine feelings within manifested expressions
(macro expressions). Usually, macro expression active on a face for 4 to 5
seconds that can be perceived easily. whereas, MEs are active for 1/30 to 1/25
seconds. Since MEs are short-lived and fleeting in nature, it is hard to
differentiate them through naked eyes. Earlier, trained persons were able to
spot micro-expressions but achieve less than 50$\%$ accuracy. Analysis of real
emotions from video clips has a wide range of applications such as: depression
analysis, police interrogation, law enforcement, multi-media entertainment,
clinical diagnosis etc.
In the literature, many feature extraction methods (handcrafted/traditional
feature descriptors) have been proposed, such as spatiotemporal LBP with
integral projection (STLBP-IP) [1] and FDM [2], to encode spatial and temporal
changes in ME video sequences. However, handcrafted feature descriptors focus
only on superficial features and fail to capture sufficient features of MEs.
Nowadays, with the advent of technology, deep learning models [3, 4, 5] have
gained popularity in solving various computer vision tasks like image
classification, semantic segmentation, face authorization, biometric, and many
more. Recently, some deep models [6, 7, 8, 9] have been proposed to deal with
micro-expression recognition. However, most of the existing methods use a
combination of CNN and RNN or CNN and LSTM to extract the spatial and temporal
features, respectively. These methods first capture the spatial features from
the frames, which are then fed to an RNN or LSTM to fetch the temporal
features. Thereby, these approaches fail to establish a relationship between
the spatial and temporal features occurring simultaneously in frames, which
degrades performance.
Inspired by the literature [4, 5], a novel affective-motion feature learning
method is proposed to learn and classify the features of micro expressions.
AffectiveNet has the ability to capture spatial and temporal features
simultaneously from the affective-motion images. The contributions of the
proposed approach are summarized as follows:
Figure 1: Visualization of (a) input video (V) with k frames, (b) Motion
Images (MI) generated by multiplying input frames with coefficients Frame
weights (Fw) and (c) Affective-Motion Images (AMI) representing both
appearance and motion that occurred between the frames in a 2d image.
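The construction in Figure 1, multiplying frames by coefficients and cumulating them into one image, can be sketched as a weighted sum. The exact frame weights Fw are not reproduced here; as an illustrative assumption, the sketch substitutes the simplified approximate rank-pooling coefficients (2t − k − 1), which give later frames more influence:

```python
import numpy as np

def affective_motion_image(frames):
    """Cumulate a k-frame clip into one affective-motion image by a weighted
    sum of frames. The weights below are simplified approximate rank-pooling
    coefficients (2t - k - 1), used as an illustrative stand-in for the
    paper's frame weights Fw."""
    frames = np.asarray(frames, dtype=np.float64)   # shape (k, H, W)
    k = frames.shape[0]
    t = np.arange(1, k + 1)
    weights = 2.0 * t - k - 1.0                     # later frames weigh more
    ami = np.tensordot(weights, frames, axes=1)     # sum_t w_t * V_t
    # Rescale to [0, 255] so the response can be viewed as a single image.
    ami -= ami.min()
    if ami.max() > 0:
        ami *= 255.0 / ami.max()
    return ami.astype(np.uint8)
```

Because these weights sum to zero, a perfectly static clip collapses to a blank image: only the motion between frames survives into the single response.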
1. 1.
We propose an affective-motion imaging that summarizes the spatial structure
features with temporal variations into one image instance.
2. 2.
We propose AffectiveNet for micro-expression recognition by introducing two
blocks: the MICRoFeat and MFL blocks. The MICRoFeat block enhances the
learning capability of the network by capturing multi-scale features. The MFL
block increases the discriminability of the network, as it is able to learn
micro-level features.
3. 3.
The effectiveness of the proposed AffectiveNet is examined by adopting two
validation schemes, person-independent and cross-dataset, over four benchmark
datasets, and it is compared with state-of-the-art MER approaches.
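The multi-scale idea behind the MICRoFeat block can be illustrated with parallel filters of different receptive fields whose responses are stacked. The averaging kernels below are illustrative stand-ins for learned convolutions, not the block's actual filters:

```python
import numpy as np

def conv2d(img, kernel):
    """Naive 'valid' 2-D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def multiscale_features(img, kernel_sizes=(3, 5, 7)):
    """Apply parallel filters with different receptive fields, crop the
    responses to a common size, and stack them -- the multi-scale idea of
    the MICRoFeat block (averaging kernels are illustrative assumptions)."""
    responses = []
    for k in kernel_sizes:
        kernel = np.ones((k, k)) / (k * k)
        responses.append(conv2d(img, kernel))
    # Crop all responses to the smallest spatial size before stacking.
    min_h = min(r.shape[0] for r in responses)
    min_w = min(r.shape[1] for r in responses)
    return np.stack([r[:min_h, :min_w] for r in responses])
```

Small kernels respond to tiny edge variations while large kernels capture coarse structure; stacking both lets a subsequent layer use scale-invariant evidence.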
## 2 Literature Review
Feature extraction is an essential part of the MER task. Wang et al. [12]
introduced a tensor independent color space model (TICS) by representing the
image sequences in a 4D structure: the 2D structure represents the spatial
texture patterns, the 3rd dimension holds momentary variation features, and
the 4th dimension describes the RGB color components used to spot the
micro-expressions. Furthermore, they extended their work and proposed a sparse
tensor canonical correlation method [13] to analyze micro-expression
movements. Happy et
al. [14] proposed a fuzzy histogram-based optical flow orientation technique
(FHOFO) to capture temporal features of the micro-expressions. Wang et al.
[15] introduced the main directional maximal difference (MDMD) to capture the
facial expressive movements by extracting the maximal magnitude difference
between the optical flow directions.
Recently, the adoption of deep learning networks such as VGG Net [3], ResNet
[4], and MobileNet [5] has created a tremendous take-off in the field of
computer vision. The literature on MER shows that convolutional neural network
(CNN) based models also achieve impressive results to some extent.
Furthermore, evolutionary search has been applied to detect the disparities
between the frames of micro expressions. Wang et al. [6] introduced a
micro-attention module in ResNet [4] that mainly focuses on expressive regions
that include most of the action units. To capture the action units, they
utilized transfer learning from macro to micro expressions. Khor et al. [16]
adopted a CNN network and long short-term memory (LSTM) to learn the
spatio-temporal information for each image frame. Wang et al. [7] utilized a
CNN model for visual feature extraction and an LSTM for sequence learning
between the frames to spot micro expressions. Moreover, Li et al. [10]
introduced a 3D flow CNN network, which incorporates optical flow information
into a CNN network to learn deep features of the minute variations responsible
for the micro-expression class. Xia et al. [8] proposed a recurrent
convolutional neural network to capture the features of subtle changes
occurring between image sequences. Liong et al. [17] also utilized optical
flow to represent flow variations between frames and fed them to three
parallelly connected CNN layer streams that learn the salient features of
micro-expressions and classify them accordingly. Xia et al. [9] proposed an
extended recurrent convolutional network to extract the spatial-temporal
deformations of micro-expression sequences by considering appearance and
geometrical information, respectively.
Figure 2: The detailed architecture of the proposed Affective Network for
micro-expression recognition.
Figure 3: The feature maps of a happy image produced at each convolutional
layer of the MICRoFeat block.
## 3 Proposed Method
Micro-expressions appear in only a few frames of a video due to their fleeting
and short-lived nature. Therefore, interpreting the content of a video and
spotting micro-expressions between frames is a challenging task. In the
literature, state-of-the-art MER systems apply complex algorithms to represent
the video content adequately. Moreover, all benchmark datasets hold video
sequences of varying length, thus most state-of-the-art approaches utilize
time interpolation to normalize the dataset. This may lose or alter the domain
knowledge of micro-expressions by shearing or filling holes between frames. To
address these issues, in this paper we propose affective-motion imaging (AMI).
An affective-motion image represents the video content in a single instance
while preserving the high-stake active dynamics of micro-expressions. We then
use AffectiveNet to learn the dynamics of micro-expressions and interpret the
relevant emotion class.
### 3.1 Affective-Motion Imaging
Inspired by the literature [11], in this paper we introduce affective-motion
imaging (AMI). AMI interprets the content of a video by focusing on the moving
facial regions and compressing them into a single instance. Therefore, an
affective-motion image implies movement in a still image by summarizing the
spatial and temporal dynamics of all video frames. To construct a single image
instance from a video sequence, we estimate the motion between the frames and
allocate ranks to the video frames by using a ranking function. Let $LR$ be a
ranking function, which updates the rank of the frames by using Eq. 1-2.
$LR[1,i]=\frac{(2\times I[1,i])-k}{I[1,i]}$ (1) $I[1,i]=[i,i+1,i+2,...k]$ (2)
where $I$ represents an index matrix with $i\in\\{1,2,..,k\\}$ and $k$ implies
the total number of frames extracted from the video. Furthermore, a frame
weight $Fw(i)$ is assigned to each frame by using Eq. 3.
$Fw(i)=\sum_{j=1}^{k-i}{LR[1,j]}$ (3)
Moreover, motion images are computed by utilizing Eq. 4.
$MI_{i}=\nu_{i}\times Fw(i)$ (4)
where $\nu_{i}\in\nu$ represents the $i^{th}$ frame of the video $\nu$.
Specifically, the frame weights analyze the motion between the frames of a
video and quantify it with the help of the ranking function. The frames are
then amplified by multiplying with the frame weight coefficients; the results
are named motion images. Motion images magnify the temporal changes and
attenuate uniform information, as shown in Fig. 1. Finally, the
affective-motion image is computed by merging all motion images as:
$AMI=\sum_{i=1}^{k}{MI_{i}}$ (5)
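For concreteness, the construction above can be sketched in a few lines of NumPy. This follows one literal reading of Eqs. 1-5 (with the weight sum in Eq. 3 taken over $j=1,\dots,k-i$); the synthetic frame stack is illustrative, and any rescaling of the result to a valid image range is not specified in the text and is omitted here.

```python
import numpy as np

def affective_motion_image(frames):
    """Compute an affective-motion image (Eqs. 1-5) from a stack of frames.

    frames: array of shape (k, H, W) -- grayscale frames of one video.
    """
    k = len(frames)
    # Eq. 1-2: rank each frame index i = 1..k
    lr = np.array([(2.0 * i - k) / i for i in range(1, k + 1)])
    # Eq. 3: frame weight Fw(i) accumulates the ranks of the first k-i frames
    fw = np.array([lr[: k - i].sum() for i in range(1, k + 1)])
    # Eq. 4: motion images are the rank-weighted frames
    motion = frames * fw[:, None, None]
    # Eq. 5: merge all motion images into one instance
    return motion.sum(axis=0)

# Example: 5 synthetic 4x4 frames
frames = np.ones((5, 4, 4))
ami = affective_motion_image(frames)
```

With constant frames the output is simply the sum of the frame weights, which makes the weighting scheme easy to inspect before applying it to real video data.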
Samples of affective-motion images are demonstrated in Fig. 1. From Fig. 1, it
is clearly visible that the affective-motion images successfully preserve the
influencing dynamics of micro-expressions within a single frame. Moreover,
affective-motion images attenuate uniform information and help to protrude the
nonuniform variations (highlighted in red blocks) that play a decision-making
role in MER. Further, to learn effective features of micro-expressions, the
affective-motion images are forwarded to AffectiveNet.
Figure 4: The feature maps generated from two emotion classes a) Anger and b) Happy, at the 1st-level convolutional layers of the Affective Network. The region of interest (red block) shows that AffectiveNet is able to differentiate between the two expression classes (inter-class).
TABLE I: Recognition Accuracy Comparison on the CASME-I and CASME-II Datasets. *This result is from the corresponding original paper; H, S, D, R, T, P, N, O, Sa, F stand for Happy, Surprise, Disgust, Repression, Tense, Positive, Negative, Others, Sad, Fear. AffectiveNet-2 represents results evaluated by following the experimental setup used in STRCNN [9].
Method | Task | CASME-I | CASME-II
---|---|---|---
STLBP-IP*[1] | $\left(H,S,D,R,O\right)$ | N$/$A | 59.91
FDM* [2] | $\left(D,R,S,T\right)$ | 56.14 | 45.93
3D-Flow* [10] | $\left(H,S,D,R,T\right)$ | 55.44 | 59.11
TICS* [12] | $\left(P,N,S,O\right)$ | 61.86 | 61.11
FHOFO* [14] | $\left(P,N,S,O\right)$ | 65.99 | 55.86
CNN-LSTM* [16] | $\left(H,S,D,R,O\right)$ | 60.98 | N$/$A
MicroAtt* [6] | $\left(A,D,F,H,Sa,S,O\right)$ | N/A | 65.90
Sp-RCNN* [8] | $\left(P,N,S,O\right)$ | 63.20 | 65.80
STRCNN* [9] | $\left(P,N,S,O\right)$ | N$/$A | 56.00
ResNet-50 [4] | $\left(P,N,S,O\right)$ | 25.04 | 32.12
MobileNet [5] | $\left(P,N,S,O\right)$ | 33.77 | 30.25
Af-Net-KS-1 | $\left(P,N,S,O\right)$ | 56.48 | 45.64
Af-Net-KS-2 | $\left(P,N,S,O\right)$ | 60.26 | 49.58
Af-Net-LFC | $\left(P,N,S,O\right)$ | 56.51 | 53.62
Af-Net-WoMFL | $\left(P,N,S,O\right)$ | 57.94 | 60.17
Af-Net-$3\times 3$ | $\left(P,N,S,O\right)$ | 59.32 | 54.12
Af-Net-$1\times 1$ | $\left(P,N,S,O\right)$ | 56.53 | 43.88
AffectiveNet-1 | $\left(P,N,S,O\right)$ | 66.99 | 61.58
AffectiveNet-2 | $\left(P,N,S,O\right)$ | 72.64 | 68.74
### 3.2 Affective Network
In this paper we propose a portable CNN model for affective-motion feature
learning (AffectiveNet), which learns the salient features of micro-expressions
by capturing momentary changes from the affective-motion images. AffectiveNet
mainly comprises two blocks: the multi-receptive feature preservative
(MICRoFeat) block and the micro-feature learning (MFL) block, as shown in Fig. 2.
#### 3.2.1 MICRoFeat Block
Micro-level variations can be captured through affective-motion images, where
the expressive regions may spread from small to extensive regions. The
micro-level expression variations are clearly depicted in Fig. 1. Although
these changes are imperceptible, they have a high impact on identifying
micro-expressions. Therefore, a robust CNN network that can elicit both coarse
and detailed texture features is needed to acquire sufficient knowledge for
adequate emotion classification. In the literature, it has been confirmed that
inferior variations like eyebrow lifts, cheek crinkles, forehead wrinkles,
glabella, chin, eyelid and lip lines can be captured through small-sized
convolutional filters, while abstract changes like eye, nose, mouth and lip
shapes tend to respond to large-sized filters. However, most CNN-based models
like VGG Net [3], ResNet [4] and MobileNet [5] hold uniform-sized filters.
Thus, these networks degrade the performance of micro-expression recognition
as they fail to acquire enough feature variations from affective-motion
images. Therefore, in this paper, we introduce the MICRoFeat block to extract
the detailed expressive features from the affective-motion images. The
MICRoFeat block has the ability to capture detailed features from small to
extensive regions by applying four convolutional (Conv) layers with
multi-scale filters of $3\times 3$, $5\times 5$, $7\times 7$ and $11\times
11$. Let $I(u,v)$ be an input image and $\varepsilon_{S}^{x,N}\\{\cdot\\}$
represent the conv function, where $S$ implies the stride, $N$ is the depth
and $x$ stands for the size of the filter. Then, the outputs of the Conv
layers with multi-scale filters are computed by Eq. (6-7).
$Fm_{i}^{*}=\varepsilon_{1}^{p(i),16}\\{I(u,v)\\}$ (6)
where $i\in\\{1,2,3,4\\}$ represents each of the multi-scale conv layers and
$p(i)=[3,5,7,11]$ (7)
Further, the feature maps $Fm_{i}^{*}$ of each layer are forwarded to the next
aligned encapsulated feature (EncapFeat) blocks. The EncapFeat block imposes
two Conv layers with different scales, $3\times 3$ and $5\times 5$, to express
the edge variations of each muscle movement (those that provoke facial
expressions) by extracting coarse-to-fine edge variations. Furthermore, the
resultant feature maps are refined by employing a $3\times 3$ Conv layer. The
resultant feature maps $Fm_{i}^{1}$ of the EncapFeat block are computed by
using Eq. (8-10).
$Fm_{i}^{1}=EncapFeat\\{Fm_{i}^{*}\\}$ (8)
$EncapFeat\\{Fm_{i}^{*}\\}=\varepsilon_{2}^{3,64}\\{f_{1}\\{Fm_{i}^{*}\\}+f_{2}\\{Fm_{i}^{*}\\}\\}$
(9) $f_{k}\\{Fm_{i}^{*}\\}=\varepsilon_{2}^{2k+1,32}\\{Fm_{i}^{*}\\}$ (10)
Moreover, the response of each EncapFeat block is coupled and forwarded to the
next down-sampled Conv layers as:
$MICRoFeat=\\{Fm_{1}^{1}\|Fm_{2}^{1}\|Fm_{3}^{1}\|Fm_{4}^{1}\\}$ (11)
where $\|$ represents the concatenation operation. The effectiveness of the
MICRoFeat block is depicted in Fig. 3, where the red highlighted boxes
represent the expressive regions. From Fig. 3, it is clear that the small
$\left(3\times 3,5\times 5\right)$ and large $\left(7\times 7,11\times
11\right)$ sized filters are able to extract minute and high-level variations,
respectively.
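To make the multi-scale idea concrete, the following NumPy sketch convolves one image with filters of the four receptive-field sizes used above and stacks the responses. The box filters here are illustrative stand-ins, not the learned MICRoFeat kernels, and the naive loop-based convolution is for clarity rather than speed.

```python
import numpy as np

def conv2d_same(img, kernel):
    """Naive stride-1 2D convolution with zero 'same' padding."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.empty_like(img, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = (padded[y:y + kh, x:x + kw] * kernel).sum()
    return out

def multiscale_features(img, sizes=(3, 5, 7, 11)):
    """Stack responses of filters with several receptive-field sizes,
    echoing the MICRoFeat idea of capturing fine and coarse variations."""
    maps = [conv2d_same(img, np.ones((s, s)) / (s * s)) for s in sizes]
    return np.stack(maps, axis=-1)  # shape (H, W, len(sizes))

img = np.random.default_rng(0).random((16, 16))
feats = multiscale_features(img)
```

Small kernels respond to local edge detail while the $11\times 11$ response averages over a wide neighbourhood, which mirrors the minute-versus-high-level split described for the block.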
#### 3.2.2 MFL block
Inspired by the residual concept [4], we introduce the MFL module. The main
aim of this module is to refine the micro-expression regions in two parallel
stages, as shown in Fig. 2. In stage 1, low-level features of widespread
expressive regions are learned through one FC layer; these low-level features
increase the learning capability of the network. Similarly, in stage 2,
micro-variations in the high-level features are forwarded in parallel to an FC
network. Further, the resultant features of the laterally connected FC layers
are fused to capture micro-level variations in an expressive region. The
lower-layer features are effective for identifying variations in small
regions; thus, fusing these features improves the discriminative capability
between inter- and intra-class variations. Moreover, the MFL block increases
the learning capability of AffectiveNet with a minimum number of parameters as
compared to existing state-of-the-art approaches. Let FC represent a fully
connected layer and concat imply the depth concatenation function. Then, the
output feature vector $Fv$ is computed by Eq. (12-16).
$Fv=FC^{4}\left(\beta\\{Fv^{1}\|Fv^{2}\\}\right)$ (12)
$Fv^{1}=FC^{32}\left(Fm^{2}\right)$ (13) $Fv^{2}=FC^{32}\left(Fm^{3}\right)$
(14) $Fm^{2}=\varepsilon_{2}^{3,184}\\{\beta\left(MICRoFeat\right)\\}$ (15)
(14) $Fm^{2}=\varepsilon_{2}^{3,184}\\{\beta\left(MICRoFeat\right)\\}$ (15)
$Fm^{3}=\varepsilon_{2}^{3,196}\\{\varepsilon_{2}^{3,128}\\{Fm^{2}\\}\\}$ (16)
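A minimal sketch of the two-stage fusion in Eqs. 12-14 follows. Random weights stand in for the learned parameters, the feature dimensions are illustrative, and the batch-normalization step of Eq. 12 is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

def fc(x, w, b):
    """A fully connected layer: x @ W + b."""
    return x @ w + b

# Illustrative flattened feature vectors for the two parallel stages.
fm2 = rng.random(184)  # stage-1 (lower-level) features
fm3 = rng.random(196)  # stage-2 (higher-level) features

w1, b1 = rng.random((184, 32)), np.zeros(32)  # FC^32 for stage 1 (Eq. 13)
w2, b2 = rng.random((196, 32)), np.zeros(32)  # FC^32 for stage 2 (Eq. 14)
w3, b3 = rng.random((64, 4)), np.zeros(4)     # FC^4 over the fused vector (Eq. 12)

fv1 = fc(fm2, w1, b1)
fv2 = fc(fm3, w2, b2)
fused = np.concatenate([fv1, fv2])  # depth concatenation of the two stages
fv = fc(fused, w3, b3)              # 4 emotion-class scores
```

The point of the sketch is the topology: both stages are projected to the same width, concatenated, and only then classified, so the final layer sees low-level and high-level evidence jointly.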
where $\beta$ represents the batch normalization (BN) function. BN is
incorporated in the proposed network to deal with the issue of divergence in
the feature distribution that occurs due to the disproportion between the
training and testing image sets. Thereby, BN improves the strength of
AffectiveNet by normalizing the feature responses of the preceding layer,
subtracting the batch mean and dividing by the standard deviation as follows:
$p_{k}=\omega\bar{q}_{k}+\phi$ (17)
$\bar{q}_{k}=\frac{q_{k}-n_{B}}{\sqrt{S_{B}^{2}+\epsilon}}$ (18)
where $B=\\{q_{1},q_{2},...,q_{N}\\}$ is the mini-batch, $\omega$ and $\phi$
are the learnable scale and shift parameters, and $n_{B}$ and $S_{B}$ imply
the mean and standard deviation of the batch, calculated using Eq. (19-20).
$n_{B}=\frac{1}{N}\sum_{k=1}^{N}\left(q_{k}\right)$ (19)
$S_{B}^{2}=\frac{1}{N}\sum_{k=1}^{N}\left(q_{k}-n_{B}\right)^{2}$ (20)
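The normalization of Eqs. 17-20 can be sketched directly in NumPy; the scale and shift defaults below are placeholders for the learnable parameters.

```python
import numpy as np

def batch_norm(q, omega=1.0, phi=0.0, eps=1e-5):
    """Batch normalization following Eqs. 17-20: normalize each feature of a
    mini-batch by its batch mean and standard deviation, then scale and shift."""
    n_b = q.mean(axis=0)                      # Eq. 19: batch mean
    var_b = ((q - n_b) ** 2).mean(axis=0)     # Eq. 20: batch variance S_B^2
    q_bar = (q - n_b) / np.sqrt(var_b + eps)  # Eq. 18: normalized response
    return omega * q_bar + phi                # Eq. 17: scale and shift

batch = np.random.default_rng(2).normal(5.0, 3.0, size=(32, 8))
out = batch_norm(batch)
```

After normalization each feature column has approximately zero mean and unit standard deviation, which is the property that stabilizes the feature distribution between training and testing batches.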
The capability of AffectiveNet to handle variations between the anger and
happy classes is depicted in Fig. 4. Thus, we can conclude that the proposed
model is able to differentiate between two expression classes (inter-class
variation) within an active patch. Active patches are highlighted by red
boxes.
TABLE II: Recognition Accuracy Comparison on CASME2, SAMM and the CI2CII, CI2C2, CI2S, CII2CI, CII2C2, CII2S, S2CI, S2CII, S2C2 Settings for the PIE and CDE Experiments, Respectively. Here, PIE, CDE and CSM stand for Person-Independent Experiment, Cross-Dataset Experiment and CASME, respectively.
Method | Exp. | PIE | CDE
---|---|---|---
Training | CASME2 | SAMM | CASME-I | CASME-II | SAMM
Testing | CSM-II | CSM2 | SAMM | CSM-I | CSM2 | SAMM | CSM-I | CSM-II | CSM2
MicroAtt[6] | N/A | 48.50 | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A
STRCNN[9] | N/A | 54.45 | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A | N/A
ResNet-50[4] | 46.58 | 36.50 | 10.67 | 12.50 | 10.69 | 10.81 | 23.54 | 8.81 | 20.15 | 16.60 | 29.06
MobileNet [5] | 35.05 | 40.20 | 11.86 | 4.00 | 14.46 | 23.78 | 6.40 | 18.24 | 14.05 | 12.64 | 8.00
AffectiveNet-1 | 52.86 | 47.46 | 46.25 | 13.08 | 26.42 | 46.49 | 23.55 | 32.08 | 26.00 | 25.90 | 48.26
AffectiveNet-2 | 61.20 | 58.12 | - | - | - | - | - | - | - | - | -
## 4 Experimental Results and Analysis
### 4.1 Database Preprocessing
To test the effectiveness of AffectiveNet, we utilized four benchmark
datasets: CASME-I [18], CASME-II [18], CASME2 [19] and SAMM [20]. These
datasets were prepared to analyze candid expressions under various challenges
such as illumination variations, subjects with different artifacts, ethnicity
variations, age differences, gender inequalities, etc.
#### 4.1.1 CASME-I
The Chinese Academy of Sciences Micro-Expression (CASME) [18] dataset
comprises the spontaneous micro-expressions of 19 participants. The dataset
samples are labeled with eight emotion classes, contempt, tense, disgust,
happiness, surprise, fear, sadness and repression, with onset, peak and offset
frame tags. However, in the CASME-I dataset, some emotions like fear, sadness
and contempt include very few samples, and some of the emotion labels are
ambiguous. Thus, most existing approaches [1, 2] dropped these emotion classes
to address the class imbalance in the dataset. Recently, some methods [12, 14]
have created new emotion classes by merging the existing ones into positive,
negative, surprise and others. In our experimental setup we utilized the
merged emotion classes and finally gathered 187 affective-motion images:
positive: 9, negative: 50, surprise: 21 and others: 106.
Figure 5: Confusion matrices of AffectiveNet for 4-class expression
classification for a) CASME-I, b) CASME-II, c) SAMM d) CASME2 and for e)
CII2CI, f) CII2C2, g) CII2S, in PIE and CDE setups, respectively.
#### 4.1.2 CASME-II
The CASME-II [18] dataset elicits the micro-expressions of 26 participants in
a well-arranged laboratory with normal lighting to avoid the problem of
illumination variation. Each sample is annotated with one of seven emotions:
disgust, fear, happiness, others, repression, sadness and surprise. Similar to
CASME-I, we converted the CASME-II dataset into four categories and collected
251 affective-motion images: positive: 31, negative: 72, surprise: 25 and
others: 126.
#### 4.1.3 CASME2
The CASME2 [19] dataset includes the expressions of 22 subjects (6 female and
16 male) captured at 30 fps with $640\times 480$ resolution. The dataset is
annotated with three emotion classes, happy, anger and disgust, based on AUs,
the participants' self-reports and the emotion-evoking videos. In our
experimental setup, we selected a total of 345 image sequences of
micro-expressions: anger: 102, happy: 155 and disgust: 88.
#### 4.1.4 SAMM
The SAMM [20] dataset includes 159 micro-expressions of 29 subjects with the
largest ethnicity variations. The dataset is labeled with eight identified
categories of expressions: others, disgust, happiness, contempt, fear,
sadness, surprise and anger. Similar to CASME-I and CASME-II, the SAMM dataset
also holds unequal data samples per emotion class; thus we combined emotion
classes and compiled 159 affective-motion images: positive: 26, negative: 75,
surprise: 15 and others: 43.
### 4.2 Experimental Setup
To evaluate the performance of the proposed method, we chose two sets of
experiments: person-independent experiments (PIE) and cross-dataset
experiments (CDE).
#### 4.2.1 Person independent experiments
In the literature [2, 9, 17], mainly two types of evaluation techniques are
used to validate the efficiency of MER systems: leave-one-video-out (LOVO) and
leave-one-subject-out (LOSO) cross-validation. In LOVO, one expression video
is used as the testing set and all remaining videos form the training set.
Therefore, LOVO evaluates MER performance in a person-dependent manner; it is
prone to subject bias and does not validate the performance of the system
effectively. Thus, in this paper, all experiments are computed using the LOSO
strategy. In LOSO, only one subject's expressions are involved in the testing
set while all remaining subjects' expressions are used for training. This
ensures robustness to unseen faces for expression recognition.
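The LOSO protocol can be sketched as a simple split generator; the toy subject and clip identifiers below are hypothetical, not drawn from any of the datasets.

```python
from collections import defaultdict

def loso_splits(samples):
    """Generate leave-one-subject-out train/test splits.

    samples: list of (subject_id, item) pairs; each fold holds out
    every sample of one subject for testing.
    """
    by_subject = defaultdict(list)
    for subject, item in samples:
        by_subject[subject].append((subject, item))
    for held_out in sorted(by_subject):
        test = by_subject[held_out]
        train = [s for subj in sorted(by_subject) if subj != held_out
                 for s in by_subject[subj]]
        yield held_out, train, test

# Hypothetical toy data: three subjects with a few clips each
data = [("s1", "v1"), ("s1", "v2"), ("s2", "v3"), ("s3", "v4"), ("s3", "v5")]
folds = list(loso_splits(data))
```

Because every fold removes all clips of one subject from training, no identity appears on both sides of a split, which is exactly the subject-bias protection LOSO provides over LOVO.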
#### 4.2.2 Cross dataset experiments
In this paper we utilize the cross-dataset experiment (CDE) setup to evaluate
the robustness and learnability of AffectiveNet across domains. In the CDE
setup, one dataset is used to train the model and another dataset is used as
the test set. A set of different experiments is performed as follows. CI2CII:
the CASME-I dataset is used as the training set and CASME-II as the testing
set. Similarly, the CI2C2, CI2S, CII2CI, CII2C2, CII2S, S2CI, S2CII and S2C2
experiments are conducted for the other dataset combinations. Moreover, the
performance of the proposed method is measured using the recognition accuracy
calculated by Eq. 21.
$Recog.Acc.=\frac{Total\,no.\,of\,correctly\,predicted\,samples}{Total\,no.\,of\,samples}\times
100$ (21)
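Eq. 21 amounts to a one-line metric; the label sequences below are hypothetical examples, not results from the experiments.

```python
def recognition_accuracy(predicted, actual):
    """Recognition accuracy of Eq. 21: correct predictions over total, in %."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return 100.0 * correct / len(actual)

# Hypothetical predictions over eight test samples (P/N/S/O classes)
pred = ["P", "N", "S", "O", "N", "N", "S", "O"]
true = ["P", "N", "S", "O", "P", "N", "O", "O"]
acc = recognition_accuracy(pred, true)  # 6 of 8 correct -> 75.0
```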
Figure 6: Qualitative representation of feature maps generated by the existing
networks (ResNet, MobileNet) and the proposed network over different
micro-expressions of four datasets: a) CASME-I, b) CASME-II, c) CASME2 and d)
SAMM. The red blocks validate that AffectiveNet is able to capture the furrow
lines more accurately as compared to ResNet and MobileNet.
Furthermore, to examine the effectiveness of AffectiveNet, we compared the
proposed method with existing MER approaches by following two schemes.
1.
We trained the existing conventional networks, ResNet [4] and MobileNet [5],
using pre-trained weights over our experimental setup, which ensures a fair
comparison between the state of the art and the proposed method. However, in
the case of other MER approaches [12, 14], we quoted the published results
directly where we follow a similar experimental setup.
2.
Since some recent approaches [7, 8, 9, 17] follow contrasting experimental
setups in terms of the total number of samples, participants, expression
classes, etc., or dropped some emotion classes due to a smaller number of
images, to validate the effectiveness of the proposed AffectiveNet we also
compared it with the existing state-of-the-art approaches by following the
experimental setup described in [9].
Moreover, in our experiments, we augmented the generated affective-motion
images to create a large pool of data and avoid over-fitting during training.
To train the network, we used the SGD optimizer and the SoftMax loss function
with a $10^{-3}$ learning rate.
### 4.3 Quantitative Analysis
This section provides a comparative analysis of the obtained accuracy rates
between the existing and proposed network for both PIE and CDE experiments.
#### 4.3.1 Person independent experiments
Recognition accuracy results over the CASME-I, CASME-II, CASME2 and SAMM
datasets for the existing state-of-the-art approaches and AffectiveNet under
the PIE setup are tabulated in Tables I and II. Specifically, for CASME-I, the
proposed network secures 33.22% and 41.95% more accuracy as compared to
MobileNet and ResNet, respectively. Moreover, AffectiveNet also outperforms
the existing handcrafted MER approaches TICS and FHOFO by 5.13% and 1.00%,
respectively. For CASME-II, our network achieves 31.33% and 29.46% more
accuracy as compared to MobileNet and ResNet, respectively. Furthermore, the
proposed model yields 0.47% and 5.72% better accuracy rates as compared to
TICS and FHOFO, respectively. For the CASME2 dataset, our proposed method
secures 17.81% and 6.28% improvements over MobileNet and ResNet, respectively.
Similarly, for the SAMM dataset, AffectiveNet outperforms MobileNet and ResNet
by 7.26% and 10.96%, respectively.
#### 4.3.2 Cross dataset experiments
The comparative results of the conventional CNN networks and the proposed
network under the CDE setup are tabulated in Table II. From Table II, it is
clear that the proposed model delivers the best CDE results, which validates
the strength of the proposed network. Particularly, AffectiveNet gains 35.58%,
0.58%, 15.73% and 34.39%, 9.08%, 11.96% more accuracy for the CI2CII, CI2C2
and CI2S setups as compared to ResNet and MobileNet, respectively. Moreover,
AffectiveNet yields 35.68%, 0.01%, 23.27% and 22.71%, 17.15%, 13.84% better
accuracy rates for the CII2CI, CII2C2 and CII2S experiments compared to ResNet
and MobileNet, respectively. Similarly, for S2CI, S2CII and S2C2, the proposed
model attains 5.85%, 9.3%, 19.2% and 22.71%, 13.26%, 40.25% improvements as
compared to ResNet and MobileNet, respectively.
To analyze the class-wise recognition accuracy (true positives and false
positives), we present the confusion matrices for all PIE and CDE experiments
in Fig. 5.
### 4.4 Qualitative Analysis
The learning capability of the proposed network compared with the
state-of-the-art networks is shown in Fig. 6, which demonstrates the most
effective visual representations of different emotion classes: CASME-I:
disgust, CASME-II: happy, CASME2: disgust and SAMM: others. From the figure,
it is clear that the response feature maps significantly assist in preserving
the dynamic variations in different expressive regions of the facial image.
For example, in disgust the eyes, eyebrows and mouth regions, in happy the
eyebrows, lips and mouth, and in others the eyes and mouth give the maximum
affective response for the related facial expressions. Therefore, we conclude
that AffectiveNet preserves more relevant feature responses and outperforms
the existing CNN-based networks ResNet-50 and MobileNet for almost all emotion
classes.
### 4.5 Computational Complexity
This section provides a comparative analysis of the computational complexity
of the existing and proposed networks. The total number of parameters involved
in each network is tabulated in Table III. The proposed AffectiveNet has only
about 2.3 million learnable parameters, which is far fewer than other existing
benchmark models such as MobileNet (4.2M), VGG-16 (138M), VGG-19 (144M) and
ResNet-50 (26M). Moreover, the proposed network architecture has fewer depth
channels and hidden layers than the former methods. Furthermore, AffectiveNet
takes only 8.3 MB of memory storage, which is far less than MobileNet (25.3
MB), VGG-16 (515 MB), VGG-19 (535 MB) and ResNet (44 MB).
Figure 7: The neuron visualization of responses for the disgust emotion captured at the 1st multi-scale CNN layers of the ablation experiments a) Af-Net-KS-1, b) Af-Net-KS-2, c) Af-Net-LFC and the proposed AffectiveNet.
TABLE III: Computational Complexity Analysis of AffectiveNet and Existing Networks.
Network | $\\#$ Parameters (in millions) | $\\#$ Memory (in megabytes) | Speed (in seconds)
---|---|---|---
VGG-16[3] | 138 | 515 | 17.8
VGG-19 [3] | 144 | 535 | 21.2
ResNet-50 [4] | 26 | 44 | 19
MobileNet [5] | 4.2 | 25.3 | 12
Af-Net-KS-1 | 2.3 | 8.5 | 8.5
Af-Net-KS-2 | 2.5 | 9.4 | 8.6
Af-Net-LFC | 1.1 | 4.0 | 8.6
Af-Net-WoMFL | 1.0 | 3.4 | 4.5
Af-Net-$3\times 3$ | 2.1 | 7.8 | 5.4
Af-Net-$1\times 1$ | 2.1 | 8.1 | 5.1
AffectiveNet-1 | 2.2 | 8.3 | 8.7
### 4.6 Ablation Study
In order to gain deeper insights into AffectiveNet, we conducted six
supplementary ablation experiments, as presented in Table I. This section
mainly examines the effect of different kernel sizes in the EncapFeat block,
of a linearly connected fully connected layer in the MFL block, of different
filter sizes, and of the MFL block itself. First, we examined the impact of
two larger kernel-size pairs, $\left(5\times 5,7\times 7\right)$ and
$\left(7\times 7,11\times 11\right)$, instead of $\left(3\times 3,5\times
5\right)$ in the EncapFeat block, named Af-Net-KS-1 and Af-Net-KS-2,
respectively. We observed that smaller kernels are preferable for
micro-expression recognition: large-scale kernels ignore the minute
transitional information, which is quite important for micro-expressions. From
Figs. 3 and 7 it is clear that an $\left(11\times 11\right)$ kernel skips the
micro edge variations and preserves only high-level edges. Second, we analyzed
the effect of linearly connected FC layers in the MFL block of the proposed
method, named Af-Net-LFC. Af-Net-LFC fails to learn the pertinent features and
degrades the performance of the network. Third, we computed results by
dropping the MFL block, named Af-Net-WoMFL, to investigate the role of the MFL
block in learning the discriminative variations of micro-expressions. Further,
we examined the effect of the multi-scale filters by replacing all filters
with $\left(3\times 3\right)$, named Af-Net-$\left(3\times 3\right)$. Finally,
to analyse the effect of $\left(1\times 1\right)$ filters, we executed
AffectiveNet by replacing $\left(3\times 3\right)$ with $\left(1\times
1\right)$ in the MICRoFeat block. The quantitative results presented in
Table I validate the performance of AffectiveNet over the other supplementary
variants. Thus, by observing the ablation study results, we can conclude that
our proposed model generates the best results as compared to the other
combinations.
## 5 Conclusion
This paper presents AffectiveNet: affective-motion feature learning for
micro-expression recognition. First, we compute affective-motion images from
micro-expression sequences, which preserve the facial movements in a single
instance. The generated single instance is then processed through AffectiveNet
to estimate the network's performance. In AffectiveNet, two blocks are
introduced, MICRoFeat and MFL, to learn the micro-expression features. The
MICRoFeat block holds multi-scale filters of $3\times 3$, $5\times 5$,
$7\times 7$ and $11\times 11$ to extract the comprehensive and detailed edge
variations from the affective images; thus, the MICRoFeat block is responsible
for capturing edge variations from small to extensive regions. The MFL block
incorporates two-stage FC layers to learn more discriminative features of
micro-expressions and allows the network to define the disparities between
emotion classes. Moreover, the proposed network has a small number of
parameters, which reduces the training and testing time of the MER system. The
effectiveness of the system is evaluated on the benchmark datasets CASME-I,
CASME-II, CASME2 and SAMM. It is evident from the experimental results, visual
demonstrations, complexity analysis and ablation study that AffectiveNet
achieves better accuracy rates as compared to state-of-the-art approaches for
MER.
## Acknowledgment
This work was supported by the Science and Engineering Research Board (under
the Department of Science and Technology, Govt. of India) project
$\\#$SERB/F/9507/2017. The authors would like to thank our Vision Intelligence
lab group for their valuable support. We are also thankful to Shastri Indo-
Canadian Institute for their support in the form of SRS fellowship.
## References
* [1] Y. Wang, J. See, R. C. W. Phan and Y. H. Oh, _LBP with six intersection points: Reducing redundant information in LBP-TOP for micro-expression recognition_ , In Asian Conf. on Comput. Vis., pp. 525-537, 2014.
* [2] F. Xu, J. Zhang and J. Z. Wang, _Microexpression identification and categorization using a facial dynamics map_ , IEEE Trans. on Affect. Comput., vol. 8, no. 2, pp. 254-267, 2017.
* [3] K. Simonyan and A. Zisserman, _Very deep convolutional networks for large-scale image recognition_ , arXiv preprint arXiv: 1409.1556, 2014.
* [4] K. He, X. Zhang, S. Ren and J. Sun, _Deep residual learning for image recognition_ , In Proc. IEEE conf. on Comput. Vis. Pattern Recognit., pp. 770-778, 2016.
* [5] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto and H. Adam, _Mobilenets: Efficient convolutional neural networks for mobile vision applications_ , arXiv preprint arXiv:1704.04861, 2017.
* [6] C. Wang, M. Peng, T. Bi and T. Chen, _Micro-Attention for Micro-Expression recognition_ , arXiv preprint arXiv:1811.02360, 2018.
* [7] S. J. Wang, B. J. Li, Y. J. Liu, W. J. Yan, X. Ou, X. Huang, F. Xu and X. Fu, _Micro-expression recognition with small sample size by transferring long-term convolutional neural network_ , Neurocomputing, Vol. 312, pp.251-262, 2018.
* [8] Z. Xia, X. Feng, X. Hong and G. Zhao, _Spontaneous facial micro-expression recognition via deep convolutional network_ , In 2018 Eighth International Conference on Image Processing Theory, Tools and Applications (IPTA), pp. 1-6, 2018.
* [9] Z. Xia, X. Hong, X. Gao, X. Feng and G. Zhao, _Spatiotemporal recurrent convolutional networks for recognizing spontaneous micro-expressions_ , IEEE Transactions on Multimedia, 2019.
* [10] J. Li, Y. Wang, J. See and W. Liu, _Micro-expression recognition based on 3D flow convolutional neural network_ , Pattern Analysis and Applications, pp. 1-9, 2018.
* [11] H. Bilen, B. Fernando, E. Gavves, A. Vedaldi and S. Gould, “Dynamic image networks for action recognition,” In Proc. IEEE Int. Conf. Comput. Vis. Pattern Recognit., pp. 3034-3042, 2016.
* [12] S. J. Wang, W. J. Yan, X. Li, G. Zhao and X. Fu, _Micro-expression recognition using dynamic textures on tensor independent color space_ , In 22nd IEEE Conf. Pattern Recognit, pp. 4678-4683, 2014.
* [13] S. J. Wang, W. J. Yan, T. Sun, G. Zhao and X. Fu, _Sparse tensor canonical correlation analysis for micro-expression recognition_ , Neurocomputing, Vol. 214, pp. 218-232, 2016.
* [14] S. L. Happy and A. Routray, _Fuzzy histogram of optical flow orientations for micro-expression recognition_ , IEEE Trans. Affect. Comput., 2017.
* [15] S. J. Wang, S. Wu, X. Qian, J. Li and X. Fu, _A main directional maximal difference analysis for spotting facial movements from long-term videos_ , Neurocomputing, Vol. 230, pp. 382-389, 2017.
* [16] H. Q. Khor, J. See, R. C. W. Phan and W. Lin, _Enriched long-term recurrent convolutional network for facial micro-expression recognition_ , In 13th IEEE Conf. Auto. Face Gestur. Recognit. (FG 2018), pp. 667-674, 2018.
* [17] S. T. Liong, Y. S. Gan, J. See and H. Q. Khor, _A Shallow Triple Stream Three-dimensional CNN (STSTNet) for Micro-expression Recognition System_ , arXiv preprint arXiv: 1902.03634, 2019.
* [18] W. J. Yan, X. Li, S. J. Wang, G. Zhao, Y. J. Liu, Y. H. Chen and X. Fu, _CASME II: An improved spontaneous micro-expression database and the baseline evaluation_ , PloS one, vol. 9, no. 1, pp. 86041, 2014.
* [19] F. Qu, S. J. Wang, W. J. Yan, H. Li, S. Wu and X. Fu, _CAS (ME) 2: a database for spontaneous macro-expression and micro-expression spotting and recognition_, IEEE Trans. Affect. Comput., 2017.
* [20] A. K. Davison, C. Lansley, N. Costen, K. Tan and M. H. Yap, _Samm: A spontaneous micro-facial movement dataset_ , IEEE Trans. Affect. Comput., vol. 9, no. 1, pp. 116-129, 2018.
| Monu Verma received her B.Tech degree in Computer Science and Engineering
from GEC Bikaner, India, in 2013. She received her M.Tech degree in 2016 from
NIT Jalandhar, India. She is currently pursuing her Ph.D. with the Department
of Computer Science and Engineering, MNIT Jaipur, India. She is a life member
of the Vision Intelligence Lab @ MNIT, Jaipur. Her research interests include
facial expression recognition, depression analysis, micro-expression
recognition, hand posture classification and finger sign analysis.
---|---
Santosh Kumar Vipparthi received his B.E. degree in Electrical and Electronics Engineering from Andhra University, India, and his M.Tech. and Ph.D. in Systems Engineering from IIT (BHU), Varanasi, India. He is currently an assistant professor in the Department of Computer Science and Engineering, MNIT Jaipur, India, where he leads the Vision Intelligence Lab @ MNIT. His research focuses on important visual perception tasks such as object detection, human emotion recognition, aberrant event detection, image retrieval, gesture recognition, and motion analysis.
Girdhari Singh received the B.E. degree in Computer Engineering from Amravati University, Maharashtra, India, in 1990, the M.S. in Software Engineering from BITS Pilani, India, in 1996, and the Ph.D. in Computer Engineering from MNIT, Jaipur, India, in 2009. He is currently an associate professor in the Department of Computer Science and Engineering, MNIT, Jaipur, Rajasthan, India. His major fields of research are software engineering, intelligent systems, image processing, and machine learning.
# NewsQuote: A Dataset Built on Quote Extraction and Attribution for Expert
Recommendation in Fact-Checking
Wenjia Zhang,1 Lin Gui, 2 Rob Procter, 1,3 Yulan He 1,2,3
###### Abstract
To enhance the ability to find credible evidence in news articles, we propose
a novel task of expert recommendation, which aims to identify trustworthy
experts on a specific news topic. To achieve this aim, we describe the
construction of a novel NewsQuote dataset consisting of 24,031 quote-speaker
pairs that appeared in a COVID-19 news corpus. We demonstrate an automatic
pipeline for speaker and quote extraction via a BERT-based question-answering
model. We then formulate expert recommendation in two ways: as document
retrieval, which first retrieves relevant quotes as an intermediate step for
expert identification, and as expert retrieval, which directly retrieves
sources based on the probability of a query conditioned on a candidate expert.
Experimental results on NewsQuote show that document retrieval is more
effective than expert retrieval at identifying relevant experts for a given
news topic.111Our source code can be accessed at:
https://github.com/WenjiaZh/NewsQuote
## 1 Introduction
The rapid growth of misinformation in recent years has been the subject of
much attention from academia, journalists, political analysts and fact-
checking organisations and has prompted research into NLP-based techniques and
tools to support fact-checking work and evidence verification (Lazarski, Al-
Khassaweneh, and Howard 2021; Zeng, Abumansour, and Zubiaga 2021; Guo,
Schlichtkrull, and Vlachos 2022). Much of this research effort has been based
on a document-centric model of fact-checking work, where the end goal is to
provide the journalist or fact-checker with an (automated) ranked list of
documents relevant to the claim that they can then use as evidence for
determining its likely veracity (e.g., Zhao et al. (2023)).
Our recent research reveals that some fact-checkers use an expert-centric
model, whereby they search for credible and trustworthy experts who are
willing to be quoted (Procter et al. 2023). Finding such experts is a big
challenge, and journalists and fact-checkers often aim to interview several
experts, since relying on a single source may not be considered sufficiently
credible. In the case of contentious claims, they may also need to ensure
their reports are balanced (Procter et al. 2023).
There is thus an urgent need to develop a tool for journalists and fact-
checkers to search for experts based on their past record of being quoted by
news media and fact-checking organisations, and other trustworthy agencies. To
achieve this goal, we need to first automatically extract quotes and their
sources from news articles, and then second return a ranked list of experts
relevant to a query that then can be assessed by the journalist or fact-
checker. This can be formulated as two tasks: (1) quote extraction and
attribution, and (2) expert recommendation.
For the first task of quote extraction and attribution, most datasets were
built on literary narratives and are limited in size due to their reliance on
manual annotation (Zhang, Black, and Sproat 2003; Elson and McKeown 2010;
Fernandes, Motta, and Milidiú 2011; Lee and Yeung 2016), yet newswire contains
far fewer monologues and dialogues than fiction (O’Keefe et al. 2012). Early
work relied on rule-driven frameworks and manually defined linguistic
patterns, and hence mainly focused on direct quotes (Lee and Yeung 2016; Zhang
and Liu 2021; Vaucher et al. 2021). Unlike play scripts or fiction, the people
quoted in news media are not limited to a fixed list of characters. In
addition, the constantly evolving stream of events reported in news articles
and the diverse writing styles of news media outlets make it difficult to
identify experts and extract quotes using regular expressions.
For the second task of expert recommendation, much work has been conducted for
expert finding in academic research (Sun et al. 2015; Silva 2014; Wang et al.
2017), online communities (Yuan et al. 2020), and the enterprise field (Paul
2016; Askari, Verberne, and Pasi 2022). However, we are not aware of any work
searching for experts based on their track record of being quoted in news
articles.
Corpus | #Quotes | Indirect% | Entity | Data Source
---|---|---|---|---
StylisticsCorpus | 16,533 | 16 | ✗ | Fiction, Newspaper, Biographies
PARC3 | 19,712 | 72 | ✗ | Wall Street Journal
QuoteBank | 178 million | - | ✓ | News Articles
DirectQuote | 10,279 | 0 | ✓ | News Articles
NewsQuote | 24,031 | 81 | ✓ | News Articles
Table 1: Summary of large-scale (larger than 10,000) news-originated English
quotation corpora.
In this paper, we propose a semi-automatic approach to construct a news
quotation dataset, called NewsQuote, from the AYLIEN coronavirus
dataset222This data was aggregated, analyzed, and enriched by AYLIEN using the
AYLIEN’s News Intelligence Platform.
https://aylien.com/resources/datasets/coronavirus-
dataset,https://aylien.com/blog/free-coronavirus-news-dataset, which contains
over 1.5 million English news articles generated from around 440 global
sources. We utilise the semantic role labelling results of sentences in news
articles to extract the quote trigger verbs, subjects (i.e., sources) and
objects (i.e., quotes), and identify sources by their corresponding
DBpedia333https://www.dbpedia.org/ ontology class labels. The resulting
dataset contains both direct and indirect quotes, and also mixed quotes where
only part of the quotations is placed inside quotation marks. We introduce the
task of finding sources of evidence from news reports and present a set of
approaches for (1) identifying quotations and their sources from text; and (2)
recommending potential experts for given news topics. Our experimental results
illustrate the feasibility of using our constructed NewsQuote dataset for
developing an automated tool for searching and ranking subject-matter experts
for journalists and fact-checkers.
## 2 Related Work
#### Quotation Extraction and Attribution
Quotation extraction and attribution originated as a study of literary works
(Zhang, Black, and Sproat 2003), and now typically covers three sub-tasks:
identifying sources, extracting quotations, and attributing a quotation to its
source. In Table 1, we summarise several large-scale English quotations
datasets that are built on news articles. The StylisticsCorpus (Semino and
Short 2004) was designed for discourse presentation in written British
narratives. They opted for hard news (e.g., accidents, conflicts, and crimes)
(Bell 1991) as a part of the data source because of its circulation,
narrative, authenticity, and cultural prominence. Of the total data, 5407
occurrences came from the press. They classified these samples into speech,
writing, and thought. Then they divided each class into many presentation
categories, such as indirect, free indirect, direct, and free direct. The
PARC3 (Pareti 2016) project aims to fill the gap of the attribution relation
(AR). Their annotation scheme tagged three constitutive components of an AR:
source, cue, and content. They labeled the quote status as direct, indirect,
or mixed by the usage of quote marks, and looked into the depth of attribution
by the level of nesting. The inspiration for generating QuoteBank (Vaucher et
al. 2021) came from the tangled nature of contemporary news flow. Vaucher et
al. (2021) exploited duplicate reports in different media to learn the
patterns of quote-source pairs. Focusing on the attribution of direct
quotations, they proposed an end-to-end minimally supervised framework, named
Quobert, to extract and attribute quotations. Using Quobert, they generated
QuoteBank from the Spinn3r dataset (Burton et al. 2011), and linked source
entities to the Wikidata knowledge base. DirectQuote (Zhang and Liu 2021)
contains direct quotations manually annotated from online news media. Like
QuoteBank, each source can be linked to a Wikidata named entity to benefit
various downstream tasks.
Among the existing news quotation datasets, StylisticsCorpus and PARC3 contain
both direct and indirect quotes, but do not originate from multi-platform news
stories, nor do they provide source-entity linking to Wikidata. The other two
datasets, QuoteBank and DirectQuote, have each of their sources linked to a
Wikidata named entity, but they only focus on direct quotes. In comparison,
our NewsQuote contains various types of quotes including direct, indirect and
mixed quotes where only part of the quotation is inside the quotation marks.
In addition, all sources have their DBpedia entity links.
#### Expert Finding
The core task in expert finding is to identify candidates with the required
expertise for a given query (Yuan et al. 2020). Solutions therefore focus on
matching the demands of searchers with the experience of relevant experts. In
practice, this problem has been extended to different settings in which various
factors are considered. Academic expert finding accounts for up to 65% of
expert-finding research (Husain et al. 2019). When looking for academic
experts, attention is
given to topic relevance, expert quality, research connectivity (Sun et al.
2015; Silva 2014; Wang et al. 2017), as well as capacity limitation (Neshati,
Beigy, and Hiemstra 2014). Meanwhile, many expert finding systems are used on
online platforms, such as community question answering, social networks and
forums (Yuan et al. 2020; Faisal, Daud, and Akram 2017). In the enterprise
field, experts’ accessibility and productivity are considered to have
significant economic benefits (Silva et al. 2013; Paul 2016). In the medical
domain, when looking for the most suitable doctor for a particular patient,
the patient’s underlying conditions are of critical importance (Tekin, Atan,
and Van Der Schaar 2014). In lawyer finding, users may prefer candidates in
the same state or city, hence the physical location was emphasized (Askari,
Verberne, and Pasi 2022).
## 3 NewsQuote: Dataset Construction
In this section, we describe how we constructed the dataset, including details
of the data source, pre-processing steps performed, and test set annotation.
Example data entries and dataset statistics will be presented at the end.
### Data Collection
We built our NewsQuote dataset from the AYLIEN coronavirus dataset, published
between November 2019 and August 2020. We used the AYLIEN News
API444https://aylien.com/product/news-api to retrieve news articles. Apart
from text, each article is also accompanied with the meta data such as
authors, keywords, summary, source, publishing time, topical categories coded
by both the Interactive Advertising Bureau (IAB)
taxonomy555https://www.iab.com and the IPTC
NewsCodes666https://iptc.org/standards/newscodes/, as well as recognized
entities and entity links from DBpedia.
### Pre-processing
#### Data De-duplication
As the same news story may be posted by multiple sources, and the original
dataset contains exact duplicates, we removed news articles similar to ones
already published. News articles were first sorted in
chronological order. News duplicates were then detected using a RoBERTa
classifier777https://huggingface.co/vslaykovsky/roberta-news-duplicates
trained with title-body pairs using semi-supervised learning (Rücklé, Moosavi,
and Gurevych 2019). For processing efficiency, the dataset was split into 16
equal-sized subsets. For each subset, the title and first summary sentence of
each temporally ordered news article were sequentially fed as input
to the RoBERTa classifier. Any duplicates were removed. After data de-
duplication, 158,325 news articles remained. The total number of source
platforms is 258, and as shown in Figure 1(b), the top 5 source platforms are:
Daily Mail, Yahoo, Seeking Alpha, Business Insider, Reuters.
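The sequential de-duplication scan can be sketched as follows; a simple token-overlap check stands in here for the RoBERTa duplicate classifier, and the field names are illustrative rather than the actual AYLIEN schema:

```python
def jaccard(a, b):
    """Token-set Jaccard similarity; a stand-in for the RoBERTa duplicate classifier."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def deduplicate(articles, is_duplicate=None, threshold=0.8):
    """Scan chronologically ordered articles, dropping any article whose
    title + lead sentence is judged a duplicate of an earlier kept one."""
    if is_duplicate is None:
        is_duplicate = lambda x, y: jaccard(x, y) >= threshold
    kept_texts, kept = [], []
    for art in sorted(articles, key=lambda a: a["published_at"]):
        text = art["title"] + " " + art["summary_first_sentence"]
        if not any(is_duplicate(text, k) for k in kept_texts):
            kept_texts.append(text)
            kept.append(art)
    return kept

articles = [
    {"published_at": "2020-01-19", "title": "Virus spreads in city",
     "summary_first_sentence": "Officials confirmed new cases today."},
    {"published_at": "2020-01-20", "title": "Virus spreads in city",
     "summary_first_sentence": "Officials confirmed new cases today."},
    {"published_at": "2020-01-21", "title": "Markets react to outbreak",
     "summary_first_sentence": "Stocks fell sharply on the news."},
]
unique = deduplicate(articles)  # the exact duplicate on 01-20 is dropped
```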
#### Quote Trigger Word Filtering
For each of the selected articles, we segment the main body into
sentences, and then use a pre-trained BERT-based semantic role labeling model
(Shi and Lin 2019) to extract verbs (or predicates), subjects, and objects. We
obtained a candidate verb list sorted by their occurrence frequencies. After
manually checking the candidate verbs with over 100 occurrences, we identified
352 quote trigger words that are more likely indicative of direct or indirect
quotes. The list of verbs is presented in our source code repository
888https://github.com/WenjiaZh/NewsQuote/blob/main/SelectedTriggerVerbs.csv.
Some of the verbs are clearly indicative of quotes, such as ‘ _said_ ’, while
others may not be associated with quotes in a traditional sense, for example,
‘ _tweet_ ’. After identifying the quote trigger words, we only kept the
sentences with at least one trigger word, one subject and one object. The
subject is regarded as a potential source and the object is considered as a
potential quotation. To ensure that the quotations are informative, we also
require that the length of the object should be more than three words.
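Assuming the SRL output is available as simple predicate-argument frames, the filtering rule can be sketched as below; the trigger set here is a small illustrative subset of the 352 verbs, and the frame format is an assumption for illustration:

```python
# Illustrative subset of the 352 quote trigger verbs identified above.
TRIGGER_VERBS = {"said", "told", "added", "tweeted", "warned", "announced"}

def is_candidate(frame, triggers=TRIGGER_VERBS, min_quote_words=4):
    """Keep an SRL frame only if its verb is a quote trigger and it has both a
    subject (potential source) and an object (potential quote) of more than
    three words, mirroring the filtering rule described above."""
    verb = frame.get("verb", "").lower()
    subj, obj = frame.get("subject"), frame.get("object")
    return (verb in triggers
            and bool(subj)
            and obj is not None
            and len(obj.split()) >= min_quote_words)

frames = [
    {"verb": "said", "subject": "Dr. Tedros",
     "object": "the outbreak is a public health emergency"},
    {"verb": "said", "subject": "He", "object": "no comment"},   # quote too short
    {"verb": "ran",  "subject": "The minister", "object": "to the podium quickly"},
]
candidates = [f for f in frames if is_candidate(f)]  # only the first frame survives
```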
#### Source and Quote Filtering
We required that the subject of a candidate sentence should be a person or an
organisation and therefore identified potential source entities via the
accompanying DBpedia ontology
labels999http://mappings.dbpedia.org/server/ontology/classes/ in the dataset.
Our selected ontology classes are shown in our source code repository
101010https://github.com/WenjiaZh/NewsQuote/blob/main/SelectedOntologyClasses.txt.
Since each entity could have more than one ontology class, we further removed
samples with sources labeled as _Location_ , _Place_ and _Country_. As the
same subject could have multiple mentions, we use DBpedia entity links for
entity resolution and normalisation. In addition, we required a named entity
to appear at least twice in the dataset. Finally, to guard against
sentence-splitting errors, we required quotation marks to be paired in
sentences containing direct or mixed quotes.
Figure 1: Three types of quotes in our dataset. Sources are highlighted in
blue, trigger verbs are highlighted in red, and quotes are highlighted in
yellow.
### Test Set Annotation
Since in practice, given a topic, we can only identify experts based on their
previous quotes published in earlier news articles, we divide the dataset into
training, validation and testing sets by news articles publishing timestamps,
ensuring quote-source pairs in the validation and testing sets occurred later
than those in the training set. Figure 2 demonstrates the distribution of
quote-source pairs based on the publishing dates of their associated news
articles.111111There is no data between 2020-05-31 and 2020-06-21 in the
original dataset.
Figure 2: The distribution of quote-source pairs. The training set contains
samples released from 2020-01-19 to 2020-05-31, and the validation/testing set
contains samples released from 2020-06-21 to 2020-08-02.
To ensure data quality, samples in the test set were manually screened by one
annotator. We list five types of noise and corresponding examples appearing in
the raw test set in Table A1. Data falling into one of these noise categories
were removed from the test set.
### Dataset Statistics
Our data covers three categories of quotes, illustrated in Figure 1. In short,
direct quotations are placed entirely inside quotation marks, indirect
quotations are not, and mixed quotations have only part of the quotation
inside quotation marks. We roughly estimated the weight of each quotation type
in the dataset from the number and position of quotation marks: 81% indirect
quotes, 10% direct quotes, and 9% mixed quotes.
In the test set, there are 1,867 (84%) indirect quotes, 215 (10%) mixed quotes
and 143 direct quotes (6%). Table 2 shows the statistics of our final
NewsQuote dataset. In summary, we have a total of 24,031 English source-quote
pairs, with 3,246 source entities drawn from 258 global news outlets. More
related statistics and plots are presented in Appendix A.
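The quotation-mark heuristic can be sketched as follows; this is an illustrative reconstruction of the estimate, not necessarily the exact rule used to produce the figures above:

```python
DQ = '"\u201c\u201d'  # straight and curly double quotation marks

def classify_quote(quote):
    """Label a quote as direct, indirect, or mixed from the number and
    position of its quotation marks, per the heuristic described above."""
    q = quote.strip()
    marks = [c for c in q if c in DQ]
    if not marks:
        return "indirect"
    # Entirely wrapped in a single pair of marks -> direct; partial -> mixed.
    if q[0] in '"\u201c' and q[-1] in '"\u201d' and len(marks) == 2:
        return "direct"
    return "mixed"

classify_quote('"The risk remains very high."')            # direct
classify_quote('the vaccine could arrive by next year')    # indirect
classify_quote('the situation is "very serious" for all')  # mixed
```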
| Test | Valid | Train
---|---|---|---
No. of samples | 2,236 | 2,082 | 19,713
No. of articles | 1,937 | 1,766 | 14,526
No. of source entities | 1,016 | 765 | 2,963
Avg. quote length | 28.38 | 29.16 | 28.99
No. of news sources | 180 | 178 | 252
No. of news categories | 470 | 440 | 629
Avg. keywords per article | 43.23 | 44.28 | 42.17
Table 2: The NewsQuote dataset statistics.
Figure 3: Illustrations of the five approaches described in Section 5. Plot
(a) describes the QA pipeline, the sequence labelling approach, and the
rule-based Quote Annotator used for quote-source extraction. Plot (b)
introduces the document retrieval approach for expert recommendation, and plot
(c) presents the expert retrieval approach for expert recommendation.
## 4 Task Definition
In our dataset, each sample $S_{i}$ consists of a context $c_{i}$, a quote-
source pair $(q,e)_{i}$, a list of keywords $k_{i}$ and metadata $m_{i}$. The
context contains three sentences: the main sentence in which the source and
quote appear, together with its preceding and following sentences. Both keywords and
metadata are defined at the document level and are retrieved from the AYLIEN
coronavirus dataset. We propose the following two tasks on this NewsQuote
dataset:
Source and quote extraction is defined as automatically extracting the source-
quote pair $(q,e)_{i}$ from a given context $c_{i}$.
Expert recommendation involves suggesting a ranked list of experts given a
query, based on what they said in the past.
## 5 Approaches
We present approaches for source and quote extraction, and expert
recommendation. An overview of the approaches is illustrated in Figure 3.
### Source and Quote Extraction
We tackle the problem of extracting quote-source pairs using three approaches:
rule-based method, sequence labelling, and question answering.
#### Approach 1: Rule-based Quote Annotator
Regular-expression-like rules can be used to extract direct quotes. We run the
Quote Annotator 121212https://stanfordnlp.github.io/CoreNLP/quote.html from
Stanford CoreNLP (Manning et al. 2014) on our test sample sentences. It can
only extract direct quotes that are delimited by quotation marks.
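A minimal regex analogue of such a rule-based annotator illustrates why it is blind to indirect quotes; this sketch is not the CoreNLP implementation itself:

```python
import re

# A rule-based extractor can only find spans delimited by quotation marks,
# so indirect quotes are invisible to it.
QUOTE_RE = re.compile(r'[\u201c"]([^\u201c\u201d"]+)[\u201d"]')

def extract_direct_quotes(sentence):
    """Return the text inside each pair of (straight or curly) double quotes."""
    return QUOTE_RE.findall(sentence)

extract_direct_quotes('Dr. Fauci said "the data are really quite good" today.')
# -> ['the data are really quite good']
extract_direct_quotes('He said it was fine.')  # indirect quote -> []
```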
#### Approach 2: Sequence Labelling
We can label each sample in our dataset with a 5-class BIO tagging scheme. The
source is annotated by ’B-S’ and ’I-S’, denoting whether the corresponding
token indicates the beginning of a source mention, or is inside a source
mention. Similarly, the quotation is annotated by ’B-Q’ and ’I-Q’, and all the
other tokens are marked by ’O’. We then fine-tune a BERT-based token
classifier (Devlin et al. 2018) to identify sources and quotes from the
context.
#### Approach 3: Question Answering (QA) pipeline
We use a QA pipeline for source and quote extraction by asking two questions
in turn:
> Q1: Who is the source?
> Q2: What did [source] say?
During training, the [source] in Q2 is the gold standard answer for question
Q1. During inference, it is the extracted answer for Q1. The input context is
composed of a question, a left sentence, $l$, a main sentence, $s$ and a right
sentence, $r$. To extract the answer from the context, we fine-tuned the pre-
trained BERT-based extractive QA model (Devlin et al. 2018), where the input
takes the form:
> [CLS] Question [SEP] l [SEP] s [SEP] r [SEP]
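The two-turn questioning logic can be sketched as below; `answer_fn` and `toy_model` are illustrative stand-ins for the fine-tuned extractive QA model:

```python
def build_input(question, left, main, right):
    """Assemble the extractive-QA input in the [CLS]/[SEP] layout shown above."""
    return f"[CLS] {question} [SEP] {left} [SEP] {main} [SEP] {right} [SEP]"

def extract_pair(context, answer_fn):
    """Two-turn pipeline: first ask for the source, then substitute the
    predicted source into the quote question. `answer_fn` stands in for the
    fine-tuned BERT extractive-QA model."""
    left, main, right = context
    source = answer_fn(build_input("Who is the source?", left, main, right))
    quote = answer_fn(build_input(f"What did {source} say?", left, main, right))
    return source, quote

# Toy answer function standing in for the model, for illustration only.
def toy_model(inp):
    return "Dr. Tedros" if "Who is the source?" in inp else "this is a pandemic"

pair = extract_pair(("", "Dr. Tedros said this is a pandemic.", ""), toy_model)
# pair -> ('Dr. Tedros', 'this is a pandemic')
```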
### Expert Recommendation
We can formulate expert recommendation as a retrieval problem, that given a
query, we would like to retrieve sources who can comment on the topic
discussed in the query ranked by their relevance to the query. There are two
possible approaches, one is to use sources’ past quotes as documents and
perform _document retrieval_ and then return the sources of the retrieved
quotes as results, another is to perform _expert retrieval_ directly.
#### Approach 1: Document Retrieval
_Document retrieval_ aims to first retrieve relevant documents (i.e., the
context where a quote appears) given a query, and then extract the sources
from the documents as results. For document indexing, we experiment with a
sparse bag-of-words Lucene index and four kinds of dense transformer-encoded
Faiss indices via Pyserini131313https://github.com/castorini/pyserini. A BM25
ranking approach on the sparse index and a nearest-neighbor search on dense
indexes were then applied to return the top 10 most relevant documents for a
given query. Sources in the top 10 retrieved documents are then identified as
the recommended experts.
#### Approach 2: Expert Retrieval
_Expert retrieval_ directly retrieves sources based on the probability of a
query conditional on a given candidate source, $P(q|e)$. Following the
framework introduced by Balog, Azzopardi, and de Rijke (2009), we implemented
both candidate-based and document-based expert finding approaches.
Candidate-Based Expert Retrieval Assuming that each term in the query is
sampled identically and independently, also that the document and the expert
source candidate are conditionally independent, the candidate-based approach
estimates $P(q|e)$ by:
$\displaystyle P(q|e)=\prod_{t\in q}\\{(1-\lambda)\sum_{d\in
D}p(t|d)p(d|e)+\lambda p(t)\\}^{n(t,q)},$
$\displaystyle\lambda=\frac{\beta}{\beta+n(e)},\quad\beta=\frac{\sum_{E}|\\{d:n(e,d)>0\\}|\cdot|d|}{|E|},$
where $\lambda$ is the smoothing parameter, $p(t|d)$, $p(d|e)$ and $p(t)$ are
the conditional probability of a term $t$ in document $d$, the conditional
probability of a document $d$ given source $e$, and the probability of term
$t$, respectively. Both $p(t|d)$ and $p(t)$ are estimated by maximum
likelihood. The probability $p(d|e)$ is set by a Boolean model, which will be
discussed later. $|d|$ is the average document length, $n(t,q)$ is the number
of times that a term $t$ appears in the query $q$, $n(e,d)$ is the occurrence
frequency with which expert $e$ appears in document $d$, and $n(e)$ is the
total number of terms in the documents associated with source $e$.
Document-Based Expert Retrieval The document-based expert retrieval approach
searches for sources via relevant document collection. This approach assumes
the conditional independence between the query and candidate, and estimates
the probability of a term $t$ in each document:
$\displaystyle P(q|e)=\sum_{d\in D}\\{\prod_{t\in q}((1-\lambda)p(t|d)+\lambda
p(t))^{n(t,q)}\\}p(d|e),$
$\displaystyle\lambda=\frac{\beta}{\beta+n(d)},\quad\beta=|d|,$
where $n(d)$ is the length of document $d$.
In both the candidate-based and document-based expert finding approaches, the
document-candidate associations, $p(d|e)$, is estimated by a simple Boolean
model, where it is set to 1, if $n(e,d)>0$, and 0, otherwise.
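A sketch of the document-based model under the Boolean association, with maximum-likelihood term estimates and illustrative data; this is not the exact implementation used in the experiments:

```python
from collections import Counter

def document_based_pqe(query, expert, docs, doc_experts):
    """Document-based expert retrieval in the spirit of Balog et al.'s
    framework: an expert's score sums, over the documents they are associated
    with, the smoothed query likelihood of each document. p(d|e) is the simple
    Boolean association described above."""
    q = query.lower().split()
    tokenised = [d.lower().split() for d in docs]
    all_tokens = [t for d in tokenised for t in d]
    coll = Counter(all_tokens)            # collection counts for p(t)
    total = len(all_tokens)
    avgdl = total / len(tokenised)        # beta = |d|, the average doc length
    score = 0.0
    for d, experts in zip(tokenised, doc_experts):
        if expert not in experts:         # Boolean p(d|e): skip if n(e,d) = 0
            continue
        lam = avgdl / (avgdl + len(d))    # lambda = beta / (beta + n(d))
        tf = Counter(d)
        lik = 1.0
        for t in q:                       # one factor per query-term occurrence
            lik *= (1 - lam) * tf[t] / len(d) + lam * coll[t] / total
        score += lik
    return score

docs = ["the vaccine is safe and effective",
        "markets fell on virus fears",
        "vaccine rollout begins next month"]
doc_experts = [{"Anthony Fauci"}, {"Christine Lagarde"}, {"Anthony Fauci"}]
# The expert actually quoted about vaccines scores higher for a vaccine query.
```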
## 6 Experiments
### Experimental Setup
For the rule-based approach, we directly feed the raw sentences into the Quote
Annotator. To build the token classifier, we segment the input text into a
sequence of 512 tokens, and fine-tune the model for 100 epochs with an initial
learning rate of 2e-7. For the extractive QA model, the maximum length of the
extracted answer is set to 30 when questioning sources and 512 when
questioning quotes. For the question about source, we train the model for 50
epochs with an initial learning rate of 2e-6. For the question about quote, we
train the model for 100 epochs with an initial learning rate of 2e-5.
For expert recommendation, we consider two types of documents: the main
sentence where a source/quote occurred, or the main sentence together with its
surrounding context (i.e., the preceding and following sentences). For the
query to be used for expert retrieval, we use either the title of a news
article, its keywords, or the first sentence of the summary. To further
remove interference, we eliminate the source name from the input query if
there is any. For the expert retrieval method, we take only the first $w$
words in the news article title (the keyword list or the first sentence of the
news summary) as the input query to reduce the running time. After validating
the value of $w$ between 1 and 10, we finally set $w=5$.
| Overall | Direct Quotes | Indirect Quotes | Mixed Quotes
---|---|---|---|---
| Macro F1 | Exact Match | Macro F1 | Exact Match | Macro F1 | Exact Match | Macro F1 | Exact Match
Rulesource | 5.76 | 5.62 | 50.58 | 49.65 | 0.214 | 0.214 | 24.11 | 23.26
Rulequote | 7.72 | 1.93 | 82.33 | 30.07 | 0.145 | 0.00 | 23.84 | 0.00
SLsource | 98.06 | 95.37 | 98.63 | 95.80 | 97.99 | 95.34 | 98.23 | 95.35
SLquote | 95.65 | 85.17 | 97.17 | 89.51 | 95.61 | 85.11 | 95.05 | 82.79
QAsource | 98.86 | 98.61 | 99.30 | 99.30 | 98.77 | 98.50 | 99.38 | 99.07
QAquote | | | | | | | |
$~{}~{}_{w/~{}true~{}source}$ | 95.96 | 90.74 | 95.83 | 93.01 | 95.96 | 90.31 | 96.06 | 93.02
$~{}~{}_{w/~{}pred.~{}source}$ | 95.61 | 89.93 | 95.78 | 93.01 | 95.55 | 89.34 | 96.06 | 93.02
$~{}~{}_{w/~{}source~{}mask}$ | 93.92 | 85.84 | 96.53 | 90.21 | 93.56 | 85.11 | 95.28 | 89.30
Table 3: Results of source and quotation extraction on the test set. Rule $-$
the rule-based annotator, SL $-$ sequence labeling, QA $-$ the question
answering pipeline. The subscripts indicate the aim of the models, either for
${source}$ extraction or for ${quote}$ extraction. Under the QAquote,
‘${}_{w/~{}true~{}source}$’ is where we use the true source name when asking ”
_What did + [source] + say?_ ”, while ‘${}_{w/~{}pred.~{}source}$’ uses the
predicted source from the QAsource results, and ‘${}_{w/~{}source~{}mask}$’
uses the generic word ” _they_ ”.
### Evaluation Metrics
To measure model performances for quote extraction and attribution, we use two
metrics defined in SQuAD (Rajpurkar et al. 2016), the exact match and the
macro-averaged F1 score. Exact Match is equal to one if the predicted outcome
is completely identical to the ground truth, while (Macro-averaged) F1
measures the average overlap between predicted and ground truth answers at the
token-level.
For expert recommendation, we use two metrics commonly used in information
retrieval, the mean average precision (MAP) and the normalized discounted
cumulative gain (NDCG). Mean Averaged Precision is the average value of the
precision at the points where relevant documents are retrieved. Normalized
Discounted Cumulative Gain at K first discounts the gain scale at the $i$-th
rank position by $\frac{1}{\log_{2}(i)}$, then adds up the converted gain
scales up to rank $k$, and finally normalizes the result by the ideal ranking
order. In addition, we propose relaxed metrics where the retrieved expert is
considered relevant if it is in the same cluster as the true source. In the
construction of relaxed metrics, we opt for the top 100 most frequent source
DBpedia categories and embed each source as a binary vector over
them.141414In our dataset, a source is assigned to 4 to 5 DBpedia categories
on average. We
then perform $k$-means clustering on the source embeddings. We empirically set
$k=40$ according to the cluster coherence and separation scores.
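The two ranking metrics, as described above, can be sketched as:

```python
import math

def average_precision(rels):
    """AP over a ranked list of binary relevance judgements (1 = relevant)."""
    hits, total = 0, 0.0
    for i, r in enumerate(rels, 1):
        if r:
            hits += 1
            total += hits / i
    return total / hits if hits else 0.0

def ndcg_at_k(rels, k):
    """NDCG@k with the 1/log2(i) discount described above (rank 1 undiscounted)."""
    def dcg(seq):
        return sum(r / (math.log2(i) if i > 1 else 1.0)
                   for i, r in enumerate(seq[:k], 1))
    ideal = dcg(sorted(rels, reverse=True))
    return dcg(rels) / ideal if ideal else 0.0

ranking = [1, 0, 1, 0, 0]   # relevance of retrieved experts, best rank first
average_precision(ranking)  # -> (1/1 + 2/3) / 2 = 5/6
ndcg_at_k(ranking, 5)
```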
| Strict Metrics | Relaxed Metrics
---|---|---
| MAP | NDCG5 | NDCG10 | MAP | NDCG5 | NDCG10
DRsparse | 0.2903 | 0.2807 | 0.3590 | 0.4162 | 0.3925 | 0.5183
DRflat | 0.1481 | 0.1440 | 0.1939 | 0.2886 | 0.2714 | 0.3887
DRhnswpq | 0.1509 | 0.1473 | 0.1926 | 0.2966 | 0.2805 | 0.3956
DRhnsw | 0.1446 | 0.1406 | 0.1889 | 0.2865 | 0.2686 | 0.3850
DRpq | 0.1395 | 0.1363 | 0.1838 | 0.2739 | 0.2583 | 0.3734
ERcan | 0.1021 | 0.1106 | 0.1252 | 0.2306 | 0.2294 | 0.3135
ERdoc | 0.1205 | 0.1281 | 0.1418 | 0.2465 | 0.2412 | 0.3285
Table 4: Results of expert recommendation using quote context as document, and
news article keywords as query. In the first five rows, DR denotes the
document retrieval approach, and the subscripts represent 5 types of retrieval
indices mentioned in Section 5 Approach 1, Lucene sparse bag-of-words index,
Faiss flat index, Faiss HNSWPQ index, Faiss HNSW index, and Faiss PQ index.
ERcan is the candidate-based expert finding approach, and ERdoc is the
document-based expert finding approach. In document-based expert finding
approaches, the input query length is set to 5 keywords.
### Experimental Results
We first present the results of the three quote extraction and attribution
methods described in Section 5, and subsequently present the evaluation
results for the two expert recommendation approaches introduced in Section 5.
#### Quote Extraction and Attribution
Table 3 presents the performance of the rule-based annotator, sequence
labeling and the QA pipeline on the test set. It is not surprising that the
rule-based quote annotator performs the worst as it can only extract direct
quotes using regular-expression-like rules. In our test set, only 337 out of
2225 samples were identified as containing quotes. On this subset, the rule-
based annotator achieves a higher exact-match score for sources (49.65) than
for quotes, but a much higher Macro F1 for direct quote extraction than for
source extraction. On the other two categories, indirect and mixed
quotes, the rule-based annotator essentially failed to produce any sensible
results. Sequence labeling gives much better results compared to the rule-
based annotator. We notice that in terms of exact match, quote extraction
appears to be nearly 10% lower than source extraction, showing that the model
struggled with longer answer extraction. For the three categories of quotes,
the model gives the best results for quote extraction on the direct quotes,
followed by the indirect quotes, and it performs the worst on the mixed
quotes. This is expected since mixed quotes are more complex to deal with
compared to the other two categories. The QA pipeline achieves the best
performance in both identifying sources and extracting the quotations. In
testing the QA pipeline’s quote extraction capabilities, we experimented with
three scenarios by using either: the true source name in the question for
quote, the predicted source from the results of QAsource, or masking the
source with the pronoun ’ _they_ ’ to completely remove the source information
from the question. Since the accuracy of our QA model for source
identification is already high, using the true or predicted source for the
question for quote extraction does not make much difference. However, if the
source information is lost, the quote extraction performance drops by nearly
2% in Macro F1 and over 4% in exact match.
#### Expert Recommendation
We show in Table 4 the expert recommendation results from using keywords of a
news article as query, and the context of quotes (the main sentence where the
source and quote occurred, together with the preceding and the following
sentences) as the document. It can be observed that the document retrieval
(DR) approaches generally outperform the expert retrieval (ER) approaches.
Among various document indexing strategies, using Lucene sparse bag-of-words
index (DRsparse) gives superior results compared to other dense transformer-
encoded Faiss indices. As expected, using the Relaxed Metrics where a
retrieved source is considered as relevant to the true source if they reside
in the same cluster, we obtain better results compared to the strict
metrics. (Results using other document retrieval or expert retrieval
approaches based on different combinations of the formulation of documents and
queries are given in Appendix C.)
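The distinction between the Strict and Relaxed Metrics can be made concrete with a small relevance check; the function and data layout below are our own sketch, assuming precomputed source clusters:

```python
from typing import Dict

def is_relevant(retrieved: str, true_source: str,
                cluster_of: Dict[str, int], relaxed: bool = False) -> bool:
    """Strict: the retrieved source must match the true source exactly.
    Relaxed: it also counts as relevant when both sources reside in the
    same (precomputed) cluster."""
    if retrieved == true_source:
        return True
    if relaxed:
        c_ret = cluster_of.get(retrieved)
        return c_ret is not None and c_ret == cluster_of.get(true_source)
    return False

# Toy cluster assignment for illustration only.
clusters = {"Apple": 0, "Tim Cook": 0, "Google": 1}
```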
## 7 Challenges and Future Directions
We have presented our NewsQuote dataset, and introduced a new task, namely
expert recommendation in the field of journalism and fact-checking. Our
experiments demonstrated the feasibility of extracting quote-source pairs using a
question-answering pipeline as well as finding expert sources using document
retrieval and expert retrieval. Here, we outline some potential future
directions.
First, in the construction of our dataset, the quote trigger verbs are
manually selected from the most frequent group of verbs. On one hand, the
identified verb list does not cover all possible verbs that are indicative
of quotations, such as those that occur less frequently or are not closely
related to the Covid topic. On the other hand, some verbs are ambiguous and
need to be contextualized to determine whether they indeed act as trigger
words. Although we removed such ambiguous cases when examining the test set, it
is not practical to perform manual filtering on such large-scale data. Future
work could explore the possibility of leveraging other large-scale quote
corpora for training a model for the detection of quote trigger words. Also,
our dataset has been constructed from the news articles about the coronavirus.
In the future, this could be extended to cover a wide range of topics such as
business, technology, education, and politics.
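The frequency-based selection of candidate trigger verbs used in our dataset construction could be sketched as follows; the parsed-sentence representation is a simplifying assumption for illustration, not our actual preprocessing format:

```python
from collections import Counter
from typing import List, Tuple

def candidate_trigger_verbs(parsed_sentences: List[List[Tuple[str, str, bool]]],
                            top_k: int = 50) -> List[str]:
    """Count verbs that govern a quotation span and keep the most frequent
    ones as candidate quote-trigger verbs. Each parsed sentence is assumed
    to be a list of (lemma, pos_tag, governs_quote) tuples produced by an
    upstream dependency parse."""
    counts = Counter(
        lemma
        for sent in parsed_sentences
        for lemma, pos, governs_quote in sent
        if pos.startswith("VB") and governs_quote
    )
    return [verb for verb, _ in counts.most_common(top_k)]
```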
Second, co-reference resolution will be vital for increasing the quote-source
attribution data as it is common to use pronouns to refer to previously
mentioned sources in news articles. Our preliminary experiments on co-reference
resolution led to noisy quote-source attribution results. In
future work, the content similarity and/or coherence between the quote
associated with a pronoun and a quote of a candidate source could be leveraged
to improve the co-reference resolution results.
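The content-similarity idea could be prototyped with a simple bag-of-words cosine score; this is only a sketch of the proposed direction, not an evaluated system:

```python
import math
from collections import Counter
from typing import Dict

def cosine_sim(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two quote texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def resolve_pronoun_source(pronoun_quote: str, candidates: Dict[str, str]) -> str:
    """Attribute a pronoun-attributed quote to the candidate source whose
    known quote is most similar in content."""
    return max(candidates, key=lambda s: cosine_sim(pronoun_quote, candidates[s]))
```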
Third, with DBpedia links serving as source identifiers in our
dataset, external knowledge could be imported as evidence to enhance the
performance of expert recommendation.
Fourth, our framework makes it possible to build a quote-source library for
the newsroom that can help with veracity assessment, where summaries of the
comments made by each source, including who has quoted them, when and in
relation to which veracity check, can be made available to journalists and
fact-checkers, thereby reducing duplication of effort and supporting
collaboration.
Finally, it is important that journalists and fact-checkers do not become
over-reliant on tools such as the one we present here (i.e., fall victim to
so-called 'automation bias'). The results therefore need to be interpreted
with care, and the final decision on which experts to approach should always
be made by the journalist or fact-checker. It is therefore important that such
models provide evidence for their recommendations that can be assessed for
credibility and relevance by the user (Procter et al. 2023).
## 8 Conclusions
We have described the construction of a novel, large-scale dataset on quote-
source pairs retrieved from news articles. Our NewsQuote dataset comprises
direct quotations, indirect quotations and their combinations. The diversity
of quote types will encourage the development of more advanced approaches for
the challenging tasks of indirect and mixed quote extraction. Based on the
NewsQuote dataset, we have demonstrated that the QA pipeline is able to
achieve over 98% exact match for source extraction and close to 90% for quote
extraction. In addition, we have introduced the expert recommendation task and
shown that the document retrieval approach with sparse indexing gives the best
results compared to other dense retrieval approaches.
## Ethics Statement
All data we used are from open public sources. We have obtained written
consent from Aylien to download their data. As per the data owner's
requirement, we will not directly share the downloaded data; instead, we will
share the download script and all pre-processing scripts so that others can
obtain the same dataset we used in this paper from Aylien's website.
## Acknowledgements
This work was supported in part by the EPSRC (grant no. EP/V048597/1). YH is
supported by a Turing AI Fellowship funded by the UKRI (grant no.
EP/V020579/2).
## References
* Askari, Verberne, and Pasi (2022) Askari, A.; Verberne, S.; and Pasi, G. 2022. Expert Finding in Legal Community Question Answering. _arXiv preprint arXiv:2201.07667_.
* Balog, Azzopardi, and de Rijke (2009) Balog, K.; Azzopardi, L.; and de Rijke, M. 2009. A language modeling framework for expert finding. _Information Processing & Management_, 45(1): 1–19.
* Bell (1991) Bell, A. 1991. _The language of news media_. Blackwell Oxford.
  * Burton et al. (2011) Burton, K.; Java, A.; Soboroff, I.; et al. 2011. The ICWSM 2011 Spinn3r Dataset. In _Proceedings of the Fifth Annual Conference on Weblogs and Social Media (ICWSM 2011)_.
* Devlin et al. (2018) Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_.
* Elson and McKeown (2010) Elson, D. K.; and McKeown, K. R. 2010. Automatic attribution of quoted speech in literary narrative. In _Twenty-Fourth AAAI Conference on Artificial Intelligence_.
* Faisal, Daud, and Akram (2017) Faisal, M.; Daud, A.; and Akram, A. 2017. Expert Ranking using Reputation and Answer Quality of Co-existing Users. _International Arab Journal of Information Technology (IAJIT)_ , 14(1).
* Fernandes, Motta, and Milidiú (2011) Fernandes, W. P. D.; Motta, E.; and Milidiú, R. L. 2011. Quotation extraction for portuguese. In _Proceedings of the 8th Brazilian Symposium in Information and Human Language Technology_.
* Guo, Schlichtkrull, and Vlachos (2022) Guo, Z.; Schlichtkrull, M.; and Vlachos, A. 2022. A survey on automated fact-checking. _Transactions of the Association for Computational Linguistics_ , 10: 178–206.
* Husain et al. (2019) Husain, O.; Salim, N.; Alias, R. A.; Abdelsalam, S.; and Hassan, A. 2019. Expert finding systems: A systematic review. _Applied Sciences_ , 9(20): 4250.
* Lazarski, Al-Khassaweneh, and Howard (2021) Lazarski, E.; Al-Khassaweneh, M.; and Howard, C. 2021. Using nlp for fact checking: A survey. _Designs_ , 5(3): 42.
* Lee and Yeung (2016) Lee, J.; and Yeung, C. Y. 2016. An Annotated Corpus of Direct Speech. In _Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16)_ , 1059–1063. Portorož, Slovenia: European Language Resources Association (ELRA).
  * Manning et al. (2014) Manning, C.; Surdeanu, M.; Bauer, J.; Finkel, J.; Bethard, S.; and McClosky, D. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. In _Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations_, 55–60. Baltimore, Maryland: Association for Computational Linguistics.
* Neshati, Beigy, and Hiemstra (2014) Neshati, M.; Beigy, H.; and Hiemstra, D. 2014. Expert group formation using facility location analysis. _Information processing & management_, 50(2): 361–383.
* O’Keefe et al. (2012) O’Keefe, T.; Pareti, S.; Curran, J. R.; Koprinska, I.; and Honnibal, M. 2012. A sequence labelling approach to quote attribution. In _Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning_ , 790–799.
* Pareti (2016) Pareti, S. 2016. PARC 3.0: A corpus of attribution relations. In _Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16)_ , 3914–3920.
* Paul (2016) Paul, S. A. 2016. Find an expert: Designing expert selection interfaces for formal help-giving. In _Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems_ , 3038–3048.
* Procter et al. (2023) Procter, R.; Arana-catania, M.; He, Y.; Liakata, M.; Zubiaga, A.; Kochkina, E.; and Zhao, R. 2023. Some Observations on Fact Checking Work with Implications for Computational Support. In _Proceedings of the International AAAI Conference on Web and Social Media_. AAAI Press.
* Rajpurkar et al. (2016) Rajpurkar, P.; Zhang, J.; Lopyrev, K.; and Liang, P. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_ , 2383–2392. Austin, Texas: Association for Computational Linguistics.
* Rücklé, Moosavi, and Gurevych (2019) Rücklé, A.; Moosavi, N. S.; and Gurevych, I. 2019. Neural Duplicate Question Detection without Labeled Training Data. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , 1607–1617. Hong Kong, China: Association for Computational Linguistics.
* Semino and Short (2004) Semino, E.; and Short, M. 2004. _Corpus stylistics: Speech, writing and thought presentation in a corpus of English writing_. Routledge.
* Shi and Lin (2019) Shi, P.; and Lin, J. 2019. Simple bert models for relation extraction and semantic role labeling. _arXiv preprint arXiv:1904.05255_.
* Silva (2014) Silva, A. T. P. 2014. _A research analytics framework for expert recommendation in research social networks_. Ph.D. thesis, City University of Hong Kong.
* Silva et al. (2013) Silva, T.; Guo, Z.; Ma, J.; Jiang, H.; and Chen, H. 2013. A social network-empowered research analytics framework for project selection. _Decision Support Systems_ , 55(4): 957–968.
* Sun et al. (2015) Sun, J.; Xu, W.; Ma, J.; and Sun, J. 2015. Leverage RAF to find domain experts on research social network services: A big data analytics methodology with MapReduce framework. _International Journal of Production Economics_ , 165: 185–193.
* Tekin, Atan, and Van Der Schaar (2014) Tekin, C.; Atan, O.; and Van Der Schaar, M. 2014. Discover the expert: Context-adaptive expert selection for medical diagnosis. _IEEE transactions on emerging topics in computing_ , 3(2): 220–234.
* Vaucher et al. (2021) Vaucher, T.; Spitz, A.; Catasta, M.; and West, R. 2021. Quotebank: a corpus of quotations from a decade of news. In _Proceedings of the 14th ACM International Conference on Web Search and Data Mining_ , 328–336.
* Wang et al. (2017) Wang, Q.; Ma, J.; Liao, X.; and Du, W. 2017. A context-aware researcher recommendation system for university-industry collaboration on R&D projects. _Decision Support Systems_ , 103: 46–57.
* Yuan et al. (2020) Yuan, S.; Zhang, Y.; Tang, J.; Hall, W.; and Cabotà, J. B. 2020. Expert finding in community question answering: a review. _Artificial Intelligence Review_ , 53(2): 843–874.
* Zeng, Abumansour, and Zubiaga (2021) Zeng, X.; Abumansour, A. S.; and Zubiaga, A. 2021. Automated fact-checking: A survey. _Language and Linguistics Compass_ , 15(10): e12438.
* Zhang, Black, and Sproat (2003) Zhang, J. Y.; Black, A. W.; and Sproat, R. 2003. Identifying speakers in children’s stories for speech synthesis. In _Eighth European Conference on Speech Communication and Technology_.
* Zhang and Liu (2021) Zhang, Y.; and Liu, Y. 2021. DirectQuote: A Dataset for Direct Quotation Extraction and Attribution in News Articles. _arXiv preprint arXiv:2110.07827_.
* Zhao et al. (2023) Zhao, R.; Arana-Catania, M.; Zhu, L.; Kochkina, E.; Gui, L.; Zubiaga, A.; Procter, R.; Liakata, M.; and He, Y. 2023. PANACEA: An Automated Misinformation Detection System on COVID-19. _arXiv preprint arXiv:2303.01241_.
Figure A1: (a) Top frequent news article categories; the count is the number
of articles in the corresponding category. (b) Top news sources; the count is
the number of articles published by the corresponding news source. (c) Top
frequent sources in our quote-source pair dataset; the count is the number of
quotations that came from the corresponding sources.

Noise Type | Example Text
---|---
Jumble text | -=-=-=- +++lead-in-text Last August, Apple announced that it would [distribute special iPhones](https://www.
Incorrect labeling of the quote (only text in bold is marked as quote) | Furthermore, CBS reported, citing current and former diplomats with insight into the situation, that Gunter since his nomination in May 2019 has created an increasingly "untenable" working environment by "flying into a rage" and changing deputy chiefs of mission at will.
Improper trigger verb | "These are warnings that have been inevitable from the very start and exactly the reason why ICE should have, and should continue to, release people, especially those who are medically vulnerable to COVID-19, to prevent a humanitarian disaster," she said.
Improper source | We are in professional corporate relations with various companies and this helps us in digging out market data that helps us generate accurate research data tables and confirms utmost accuracy in our market forecasting.
Not an affirmative statement | Apple didn't respond to a request for comment.

Table A1: Types of noisy samples removed from the test set.
## Appendix A Data Statistics
Figure A1(a) presents 25 of the most frequent news article categories. On
average, each article has 3.92 category labels. The top five categories are:
'Personal Finance', 'Stocks', 'Business', 'Law, Gov't & Politics', and
'Travel'.
Figure A1(b) shows 25 of the most common news sources. In our dataset, the
total number of source platforms is 258. The top five source platforms are:
Daily Mail, Yahoo, Seeking Alpha, Business Insider, and Reuters.
Figure A1(c) lists 25 of the most frequent sources. In total, we have 3,246
sources. The top five sources are: Amazon, Apple, Google, Microsoft, and Reuters.
Document | Query | Method | MAP (Strict) | NDCG5 (Strict) | NDCG10 (Strict) | MAP (Relaxed) | NDCG5 (Relaxed) | NDCG10 (Relaxed)
---|---|---|---|---|---|---|---|---
| | DRsparse | 0.2903 | 0.2807 | 0.3590 | 0.4162 | 0.3925 | 0.5183
| | DRflat | 0.1481 | 0.1440 | 0.1939 | 0.2886 | 0.2714 | 0.3887
| | DRhnswpq | 0.1509 | 0.1473 | 0.1926 | 0.2966 | 0.2805 | 0.3956
Context | Keyword | DRhnsw | 0.1446 | 0.1406 | 0.1889 | 0.2865 | 0.2686 | 0.3850
| | DRpq | 0.1395 | 0.1363 | 0.1838 | 0.2739 | 0.2583 | 0.3734
| | ERcan | 0.1021 | 0.1106 | 0.1252 | 0.2306 | 0.2294 | 0.3135
| | ERdoc | 0.1205 | 0.1281 | 0.1418 | 0.2465 | 0.2412 | 0.3285
| | DRsparse | 0.1357 | 0.1367 | 0.1719 | 0.3112 | 0.2980 | 0.4048
| | DRflat | 0.1321 | 0.1304 | 0.1670 | 0.2979 | 0.2800 | 0.3917
| | DRhnswpq | 0.1206 | 0.1177 | 0.1538 | 0.2830 | 0.2659 | 0.3790
Context | Title | DRhnsw | 0.1311 | 0.1299 | 0.1659 | 0.2963 | 0.2787 | 0.3902
| | DRpq | 0.1231 | 0.1223 | 0.1570 | 0.2931 | 0.2797 | 0.3873
| | ERcan | 0.1062 | 0.1116 | 0.1289 | 0.2516 | 0.2485 | 0.3366
| | ERdoc | 0.1111 | 0.1183 | 0.1320 | 0.2532 | 0.2518 | 0.3387
| | DRsparse | 0.1498 | 0.1484 | 0.1884 | 0.3285 | 0.3115 | 0.4297
| | DRflat | 0.1435 | 0.1389 | 0.1818 | 0.3125 | 0.2945 | 0.4089
| | DRhnswpq | 0.1285 | 0.1254 | 0.1648 | 0.2969 | 0.2761 | 0.3953
Context | Summary | DRhnsw | 0.1419 | 0.1373 | 0.1803 | 0.3097 | 0.2907 | 0.4056
| | DRpq | 0.1402 | 0.1418 | 0.1773 | 0.3103 | 0.2953 | 0.4098
| | ERcan | 0.0918 | 0.0966 | 0.1157 | 0.2461 | 0.2419 | 0.3337
| | ERdoc | 0.1014 | 0.1069 | 0.1235 | 0.2570 | 0.2542 | 0.3479
| | DRsparse | 0.1356 | 0.1366 | 0.1730 | 0.3059 | 0.2938 | 0.4074
| | DRflat | 0.0721 | 0.0724 | 0.0999 | 0.2279 | 0.2127 | 0.3238
| | DRhnswpq | 0.0756 | 0.0746 | 0.1027 | 0.2501 | 0.2410 | 0.3458
Quote | Keyword | DRhnsw | 0.0713 | 0.0719 | 0.0984 | 0.2266 | 0.2124 | 0.3214
| | DRpq | 0.0692 | 0.0684 | 0.0931 | 0.2161 | 0.2036 | 0.3047
| | ERcan | 0.0685 | 0.0720 | 0.0815 | 0.1902 | 0.1902 | 0.2633
| | ERdoc | 0.0552 | 0.0588 | 0.0682 | 0.1685 | 0.1647 | 0.2404
| | DRsparse | 0.0955 | 0.0956 | 0.1204 | 0.2713 | 0.2614 | 0.3600
| | DRflat | 0.0953 | 0.0958 | 0.1227 | 0.2744 | 0.2611 | 0.3729
| | DRhnswpq | 0.0861 | 0.0865 | 0.1113 | 0.2683 | 0.2526 | 0.3654
Quote | Title | DRhnsw | 0.0943 | 0.0946 | 0.1209 | 0.2751 | 0.2620 | 0.3724
| | DRpq | 0.0934 | 0.0947 | 0.1188 | 0.2674 | 0.2565 | 0.3615
| | ERcan | 0.0644 | 0.0669 | 0.0785 | 0.1951 | 0.1935 | 0.2725
| | ERdoc | 0.0558 | 0.0582 | 0.0678 | 0.1851 | 0.1832 | 0.2598
| | DRsparse | 0.1021 | 0.1037 | 0.1306 | 0.2837 | 0.2739 | 0.3784
| | DRflat | 0.1058 | 0.1074 | 0.1338 | 0.2902 | 0.2771 | 0.3895
| | DRhnswpq | 0.0941 | 0.0911 | 0.1258 | 0.2912 | 0.2718 | 0.3933
Quote | Summary | DRhnsw | 0.1038 | 0.1056 | 0.1309 | 0.2882 | 0.2756 | 0.3866
| | DRpq | 0.1023 | 0.1013 | 0.1293 | 0.2882 | 0.2750 | 0.3854
| | ERcan | 0.0568 | 0.0585 | 0.0717 | 0.2002 | 0.1958 | 0.2778
| | ERdoc | 0.0457 | 0.0473 | 0.0569 | 0.1850 | 0.1799 | 0.2627
Table A2: DR denotes the document retrieval approach; the subscripts represent
the five types of retrieval indices mentioned in Section 5 (Approach 1): the
Lucene sparse bag-of-words index, the Faiss flat index, the Faiss HNSWPQ
index, the Faiss HNSW index, and the Faiss PQ index. ERcan is the
candidate-based expert-finding approach, and ERdoc is the document-based
expert-finding approach. In both expert-finding approaches, the input query
length is set to 5.
## Appendix B Noisy Sample
Five types of noise and their corresponding examples that appeared in the raw
test set are listed in Table A1.
## Appendix C Results: Expert Recommendation
Table A2 shows the experimental results of expert recommendation. Among all 6
document-query combinations, the document retrieval (DR) approaches outperform
the expert retrieval (ER) approaches. Among various document indexing
strategies, the Lucene sparse bag-of-words index (DRsparse) gives better
results than the dense transformer-encoded Faiss indices. On average, the
experiments perform better when we use the quote context as the documents. We
believe this is because the adjacent sentences can also contain information
about the sources and their quotations.
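For reference, the ranking metrics reported in Table A2 can be computed as follows; this is a standard sketch with binary relevance labels, not the exact evaluation script used in our experiments:

```python
import math
from typing import List

def ndcg_at_k(relevances: List[int], k: int) -> float:
    """NDCG@k for a ranked list of binary relevance labels."""
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)[:k]
    idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

def average_precision(relevances: List[int]) -> float:
    """AP for one ranked list; MAP is the mean of AP over all queries."""
    hits, total = 0, 0.0
    for i, r in enumerate(relevances):
        if r:
            hits += 1
            total += hits / (i + 1)
    return total / hits if hits else 0.0
```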
# Aliphatics and Aromatics in the Universe: The Pre-JWST Era
X.J. Yang (Department of Physics, Xiangtan University, 411105 Xiangtan, Hunan
Province, China; Department of Physics and Astronomy, University of Missouri,
Columbia, MO 65211, USA) and Aigen Li (Department of Physics and Astronomy,
University of Missouri, Columbia, MO 65211, USA)
###### Abstract
The so-called “unidentified infrared emission” (UIE) features at 3.3, 6.2,
7.7, 8.6, and 11.3$\,{\rm\mu m}$ ubiquitously seen in a wide variety of
astrophysical regions are generally attributed to polycyclic aromatic
hydrocarbon (PAH) molecules. Astronomical PAHs often have an aliphatic
component (e.g., aliphatic sidegroups like methyl –CH3 may be attached as
functional groups to PAHs) as revealed by the detection in many UIE sources of
the aliphatic C–H stretching feature at 3.4$\,{\rm\mu m}$. With its
unprecedented sensitivity and spatial resolution and its high spectral
resolution, the James Webb Space Telescope (JWST) holds great promise for
revolutionizing the studies of aliphatics and aromatics in the universe. To
facilitate analyzing JWST observations, we present a theoretical framework for
determining the aliphatic fractions ($\eta_{\rm ali}$) of PAHs, the fractions
of C atoms in aliphatic units, from the emission intensity ratios of the
3.4$\,{\rm\mu m}$ aliphatic C–H feature to the 3.3$\,{\rm\mu m}$ aromatic C–H
feature. To demonstrate the effectiveness of this framework, we compile the
3.3 and 3.4$\,{\rm\mu m}$ UIE data obtained in the pre-JWST era for an as
complete as possible sample, and then apply the framework to these pre-JWST
data. We derive a median aliphatic fraction of $\langle\eta_{\rm
ali}\rangle\approx 5.4\%$, and find that the aliphatic fractions are the
highest in protoplanetary nebulae illuminated by cool stars lacking
ultraviolet radiation. Nevertheless, the “hardness” of stellar photons is not
the only factor affecting the PAH aliphaticity, other factors such as the
starlight intensity may also play an important role.
dust, extinction — ISM: lines and bands — ISM: molecules
## 1 Introduction
Polycyclic aromatic hydrocarbon (PAH) molecules, composed of fused benzene
rings, have long been thought to be ubiquitous in the interstellar medium
(ISM), as evidenced by a series of emission bands observed at wavelengths 3.3,
6.2, 7.7, 8.6 and 11.3$\,{\rm\mu m}$, which are coincident with the
vibrational transitions of PAHs (Léger & Puget 1984, Allamandola et al. 1985).
These emission bands are often also known as the “unidentified infrared (IR)
emission” (UIE) bands. Of all interstellar carbon, $\sim\,$15% is thought to
be incorporated into PAHs (Li & Draine 2001). Their emission accounts for up
to 20% of the total IR power of the Milky Way and star-forming galaxies (see
Li 2020).
It has been generally held that astronomical PAHs are not really pure aromatic
compounds (see Kwok 2022). They may include ring defects, substituents,
partial dehydrogenation and sometimes superhydrogenation or deuteration (see
Yang et al. 2017a and references therein). Astronomical PAHs often also
include an aliphatic component (e.g., aliphatic sidegroups like methyl –CH3
may be attached as functional groups to PAHs), as revealed by the detection in
many UIE sources of a weak satellite emission feature at 3.4$\,{\rm\mu m}$
which always accompanies the 3.3$\,{\rm\mu m}$ emission feature (see Yang et
al. 2017b and references therein). While the 3.3$\,{\rm\mu m}$ feature arises
from aromatic C–H stretch, the 3.4$\,{\rm\mu m}$ feature is generally thought
to arise from aliphatic C–H stretch, although it could also be due to
anharmonicity (Barker et al. 1987) and superhydrogenation (Bernstein et al.
1996, Yang et al. 2020). In addition, some UIE sources also exhibit two
aliphatic C–H deformation bands at 6.85 and 7.25$\,{\rm\mu m}$ (see Yang et
al. 2016a and references therein). Typically, for those sources with prominent
6.85 and 7.25$\,{\rm\mu m}$ bands, the 3.4$\,{\rm\mu m}$ band is often
pronounced.
Let $\eta_{\rm ali}\equiv N_{\rm C,ali}/\left(N_{\rm C,aro}+N_{\rm
C,ali}\right)$ be the aliphatic fraction of PAHs, i.e., the ratio of the
number of C atoms in aliphatic units ($N_{\rm C,ali}$) to that in aromatic
rings ($N_{\rm C,aro}$) plus that in aliphatic units. In recent years, the PAH
aliphatic fraction has received increasing attention (e.g., see Kwok & Zhang
2011; Li & Draine 2012; Rouillé et al. 2012; Steglich et al. 2013; Pilleri et
al. 2015; Bernstein et al. 2017; Buragohain et al. 2015, 2016, 2020;
Allamandola et al. 2021; Yang et al. 2013, 2016a,b, 2017a,b). Despite the
widespread acceptance and extreme popularity of the PAH model, the exact
nature of the UIE carriers remains unknown and many candidate materials have
been proposed. All these hypotheses generally agree that the UIE bands arise
from some sort of aromatic hydrocarbon material. The major debate lies in the
exact structure of the UIE carriers: are they (i) free-flying, predominantly
aromatic gas-phase molecules like PAHs, or (ii) amorphous solids (either bulk
or nano-sized) with a mixed aromatic/aliphatic structure (e.g., see Sakata et
al. 1987, Papoular et al. 1993, Kwok & Zhang 2011, Jones et al. 2013)? One way
to address this is to examine the aliphatic fraction of the UIE carriers:
while PAHs, by definition, are predominantly aromatic, all other (proposed)
carriers are considerably aliphatic (see Yang et al. 2017b).
Prior to the launch of the James Webb Space Telescope (JWST), the
3.4$\,{\rm\mu m}$ feature, together with the 3.3$\,{\rm\mu m}$ feature, has
already been seen in a wide variety of Galactic and extragalactic regions,
including reflection nebulae, Hii regions, photodissociated regions (PDRs),
protoplanetary nebulae, planetary nebulae, protoplanetary disks around Herbig
Ae/Be stars and T Tauri stars, and external galaxies (see Yang et al. 2017b).
Undoubtedly, the high spectral resolution and unprecedented sensitivity of
JWST will bring the studies on aliphatics and aromatics to a new height.
Indeed, as illustrated in Figure 1, the 3.3 and (tentatively) 3.4$\,{\rm\mu
m}$ features were very recently seen in the mid-IR spectrum of SPT0418-47, a
galaxy at a redshift of $z\approx 4.22$, obtained with the Mid-IR Instrument
(MIRI) on board JWST (Spilker et al. 2023). The 3.3 and 3.4$\,{\rm\mu m}$
emission features have also been detected by JWST, through its Near Infrared
Camera (NIRCam), in dozens of moderately distant galaxies at redshifts $z$
$\sim$ 0.2–0.5 in the Great Observatories Origins Deep Survey–South (GOODS-S;
see Lyu et al. 2023). It is expected that JWST will accumulate a rich set of
such spectra for a wide range of astrophysical regions, particularly in the
distant universe.
We have initiated a program to explore the aliphatic and aromatic contents of
PAHs in the universe, both in the Milky Way and external galaxies, both near
and far. In this work, we focus on the 3.3 and 3.4$\,{\rm\mu m}$ emission
features detected in the pre-JWST era. This paper is organized as follows. In
§2 we present a theoretical framework for relating the aliphatic fractions of
PAHs to the emission intensity ratios of the 3.4$\,{\rm\mu m}$ feature to the
3.3$\,{\rm\mu m}$ feature. This theoretical framework will not only be used in
later sections but in the very near future also serve the JWST community as an
effective tool for quantitatively determining the aliphatic fractions of PAHs.
The 3.3 and 3.4$\,{\rm\mu m}$ emission features of various astrophysical
regions detected in the pre-JWST era will be summarized and analyzed in §3. We
will quantitatively determine the aliphatic fractions of PAHs and discuss the
results in §4. Finally, we summarize our major results in §5.
## 2 IR Emission Spectra of PAHs with Aliphatic Sidegroups: Theoretical
Framework
To facilitate the analysis of the 3.3 and 3.4$\,{\rm\mu m}$ emission detected
in the pre-JWST era, we first set up a theoretical framework to model the IR
emission of PAHs containing aliphatic sidegroups and relate the emission
intensities of the 3.3 and 3.4$\,{\rm\mu m}$ features to the PAH aliphatic
fraction. In the JWST era, this theoretical framework will also be used to
analyze JWST observations to quantitatively determine the aliphatic fractions
of PAHs.
Due to their small heat contents, PAHs are transiently heated in the ISM by
single stellar photons (see Li 2004). They will not attain an equilibrium
temperature; instead, they will experience temperature spikes and undergo
temperature fluctuations. For PAHs containing aliphatic contents (which we
call “aliphatic” PAHs), we consider PAHs attached with aliphatic sidegroups
like methylene and methyl. Following Draine & Li (2001), we will calculate the
temperature probability distribution functions and emission spectra of
aliphatic PAHs of $N_{\rm C,aro}$ aromatic C atoms, $N_{\rm H,aro}$ aromatic H
atoms, $N_{\rm C,ali}$ aliphatic C atoms, and $N_{\rm H,ali}$ aliphatic H
atoms. For such molecules, we approximate their absorption cross sections by
adding three Drude functions to that of a PAH of $N_{\rm C,aro}$ C atoms and
$N_{\rm H,aro}$ H atoms. These Drude functions represent the 3.4$\,{\rm\mu m}$
aliphatic C–H stretch, and the 6.85 and 7.25$\,{\rm\mu m}$ aliphatic C–H
deformations. The absorption cross section of an aliphatic PAH molecule of
$N_{\rm C,aro}$ aromatic C atoms, $N_{\rm H,aro}$ aromatic H atoms, $N_{\rm
C,ali}$ aliphatic C atoms, and $N_{\rm H,ali}$ aliphatic H atoms becomes
$$\begin{aligned}
C_{\rm abs}(N_{\rm C},\lambda) ={}& C^{\scriptscriptstyle\rm PAH}_{\rm abs}(N_{\rm C,aro},N_{\rm H,aro},\lambda) && (1)\\
&+ N_{\rm H,ali}\,\frac{2}{\pi}\,\frac{\gamma_{3.4}\lambda_{3.4}\,\sigma_{\rm int,3.3}\left(A_{3.4}/A_{3.3}\right)}{\left(\lambda/\lambda_{3.4}-\lambda_{3.4}/\lambda\right)^{2}+\gamma_{3.4}^{2}} && (2)\\
&+ N_{\rm H,ali}\,\frac{2}{\pi}\,\frac{\gamma_{6.85}\lambda_{6.85}\,\sigma_{\rm int,6.2}\left(A_{6.85}/A_{6.2}\right)}{\left(\lambda/\lambda_{6.85}-\lambda_{6.85}/\lambda\right)^{2}+\gamma_{6.85}^{2}} && (3)\\
&+ N_{\rm H,ali}\,\frac{2}{\pi}\,\frac{\gamma_{7.25}\lambda_{7.25}\,\sigma_{\rm int,6.2}\left(A_{7.25}/A_{6.2}\right)}{\left(\lambda/\lambda_{7.25}-\lambda_{7.25}/\lambda\right)^{2}+\gamma_{7.25}^{2}}\,, && (4)
\end{aligned}$$
where $N_{\rm C}=N_{\rm C,aro}+N_{\rm C,ali}$ is the number of C atoms
contained in an aliphatic PAH molecule; $\lambda_{3.4}=3.4\,{\rm\mu m}$,
$\lambda_{6.85}=6.85\,{\rm\mu m}$, and $\lambda_{7.25}=7.25\,{\rm\mu m}$ are
respectively the central wavelengths of the 3.4, 6.85 and 7.25$\,{\rm\mu m}$
aliphatic C–H features; $\gamma_{3.4}\lambda_{3.4}$,
$\gamma_{6.85}\lambda_{6.85}$, and $\gamma_{7.25}\lambda_{7.25}$ are
respectively the FWHMs of the 3.4, 6.85 and 7.25$\,{\rm\mu m}$ features
($\gamma_{3.4}$, $\gamma_{6.85}$, and $\gamma_{7.25}$ are dimensionless
parameters; see Draine & Li 2007); $A_{3.3}$ and $A_{3.4}$ are the intensities
of the aromatic and aliphatic C–H stretches, respectively; $A_{6.2}$ and
$A_{7.7}$ are the intensities of the C–C stretches; $A_{6.85}$ and $A_{7.25}$
are the intensities of the aliphatic C–H deformation bands; and $\sigma_{{\rm
int},3.3}$ and $\sigma_{{\rm int},6.2}$ are respectively the integrated
strengths per (aromatic) C atom of the 3.3$\,{\rm\mu m}$ aromatic C–H stretch
and 6.2$\,{\rm\mu m}$ aromatic C–C stretch (see Draine & Li 2007). We take
$A_{3.4}/A_{3.3}=1.76$ for neutrals and $A_{3.4}/A_{3.3}=3.80$ for cations as
computed by Yang et al. (2013). We take the lower limits of
$A_{6.85}/A_{6.2}\approx 5.0$ and $A_{7.25}/A_{6.2}\approx 0.5$ for neutrals,
$A_{6.85}/A_{6.2}\approx 0.5$ and $A_{7.25}/A_{6.2}\approx 0.25$ for cations
as derived in Yang et al. (2016a). We note that, with $N_{\rm C,ali}\approx
3N_{\rm H,ali}$ (suitable for methyl sidegroups), the absorption cross
sections given in Eqs. (1)–(4) are the same as those of Yang et al. (2016a).
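Each aliphatic term in the absorption cross section above is a Drude profile per aliphatic H atom. A minimal numerical sketch follows; the parameter values are illustrative placeholders, not the fitted band strengths:

```python
import math

def drude_term(lam: float, lam0: float, gamma: float,
               sigma_int: float, band_ratio: float) -> float:
    """One aliphatic Drude term of the absorption cross section, per
    aliphatic H atom (cf. the 3.4 um aliphatic C-H stretch term):
    (2/pi) * gamma * lam0 * sigma_int * band_ratio
        / ((lam/lam0 - lam0/lam)^2 + gamma^2).
    Wavelengths in microns; gamma is the dimensionless width parameter."""
    x = lam / lam0 - lam0 / lam
    return (2.0 / math.pi) * gamma * lam0 * sigma_int * band_ratio \
        / (x * x + gamma * gamma)

# Illustrative numbers only: gamma ~ 0.01, sigma_int normalized to 1,
# A3.4/A3.3 = 1.76 for neutral PAHs (Yang et al. 2013).
peak = drude_term(3.4, 3.4, 0.012, 1.0, 1.76)   # profile maximum at lam0
wing = drude_term(3.3, 3.4, 0.012, 1.0, 1.76)   # off-resonance wing
```

At $\lambda=\lambda_0$ the profile peaks at $(2/\pi)\,\lambda_0\,\sigma_{\rm int}\,(A_{3.4}/A_{3.3})/\gamma$, which the sketch reproduces.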
Let $dP$ be the probability that the temperature of the aliphatic PAH molecule
will be in $[T,T+dT]$. The emissivity (in units of ${\rm erg}\,{\rm s}^{-1}\,{\rm cm}^{-1}$) of this molecule becomes
$$j_{\lambda}(N_{\rm C})=\int C_{\rm abs}(N_{\rm C},\lambda)\,4\pi B_{\lambda}(T)\,\frac{dP}{dT}\,dT\,. \quad (5)$$
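With a discretized temperature probability distribution, the emissivity integral can be evaluated numerically. The sketch below uses CGS units and a toy two-temperature distribution; it is not the full "thermal-discrete" machinery of Draine & Li (2001):

```python
import math

# Physical constants in CGS units.
H_PLANCK = 6.62607015e-27   # erg s
C_LIGHT = 2.99792458e10     # cm s^-1
K_BOLTZ = 1.380649e-16      # erg K^-1

def planck_lambda(lam_cm: float, temp: float) -> float:
    """Planck function B_lambda(T) in erg s^-1 cm^-2 cm^-1 sr^-1."""
    return (2.0 * H_PLANCK * C_LIGHT**2 / lam_cm**5) \
        / (math.exp(H_PLANCK * C_LIGHT / (lam_cm * K_BOLTZ * temp)) - 1.0)

def emissivity(lam_cm: float, c_abs: float, temps, probs) -> float:
    """Discretized version of the emissivity integral:
    j_lambda = sum_i C_abs * 4*pi * B_lambda(T_i) * P_i,
    where probs is the temperature probability distribution (sums to 1)
    and c_abs is taken constant for simplicity."""
    return sum(c_abs * 4.0 * math.pi * planck_lambda(lam_cm, t) * p
               for t, p in zip(temps, probs))
```

A distribution weighted toward higher temperatures yields a larger emissivity at 3.4 µm, as expected from the temperature-spike picture.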
The 3–4$\,{\rm\mu m}$ interstellar UIE emitters are in the size range of
$N_{\rm C}$ $\sim\,$20–30 C atoms, as shown in Figures 6, 7 of Draine & Li
(2007). For illustrative purposes, we consider $N_{\rm C,aro}=24$ (like
coronene). For a coronene-like molecule, up to 12 methylene or methyl
sidegroups can be attached, we thus consider $N_{\rm C,ali}=0,1,2,...12$
aliphatic C atoms and $N_{\rm H,ali}=0,1,2,...36$ aliphatic H atoms. For all
molecules, $N_{\rm C,aro}=24$ is fixed. Yang et al. (2016a) have shown that
the model IR emission spectra (scaled by starlight intensity) are essentially
independent of the absolute values of the starlight intensities. Therefore, we
only consider $U=1$, with $U$ defined as
$$U\equiv\frac{\int_{912\,{\rm\AA}}^{1\,\mu{\rm m}}4\pi J_{\star}(\lambda)\,d\lambda}{\int_{912\,{\rm\AA}}^{1\,\mu{\rm m}}4\pi J_{\rm ISRF}(\lambda)\,d\lambda}\,, \quad (6)$$
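Since the $4\pi$ factors and integration limits are common to the numerator and denominator, $U$ reduces to a ratio of two integrated spectra. A minimal trapezoidal sketch, with toy arrays standing in for the real radiation fields:

```python
from typing import List

def starlight_intensity_ratio(wavelengths: List[float],
                              j_star: List[float],
                              j_isrf: List[float]) -> float:
    """Evaluate U as the ratio of the integrated starlight intensity to
    that of the ISRF over a common wavelength grid (the 4*pi factors
    cancel), using trapezoidal integration."""
    def trapz(y: List[float], x: List[float]) -> float:
        return sum((y[i] + y[i + 1]) * (x[i + 1] - x[i]) / 2.0
                   for i in range(len(x) - 1))
    return trapz(j_star, wavelengths) / trapz(j_isrf, wavelengths)
```

By construction, a field identical to the ISRF gives $U=1$ and a uniformly doubled field gives $U=2$.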
where $J_{\star}(\lambda)$ is the intensity of starlight, and $J_{\rm
ISRF}(\lambda)$ is the starlight intensity of the solar neighbourhood
interstellar radiation field (ISRF) of Mathis, Mezger & Panagia (1983; MMP83).
In addition to the MMP83 ISRF, we consider five types of radiation fields,
approximated by the stellar model atmospheric spectra of Kurucz (1979) with
effective temperatures of $T_{\rm eff}=3500$, 6000, 10,000, 22,000, and
$30,000\,{\rm K}$, like those of M2V stars, the Sun, A2V stars, B1.5V stars,
and B0V stars, respectively. The reflection nebula NGC 2023 is illuminated by
HD 37903, a B1.5V star with $T_{\rm eff}=22,000\,{\rm K}$, while IRAS
03035+5819 is illuminated by a B0V star of $T_{\rm eff}=30,000\,{\rm K}$.
We adopt the “thermal-discrete” method of Draine & Li (2001) to compute the
temperature probability distribution functions and the IR emission spectra of
both neutral and ionized aliphatic PAHs excited by starlight of different
spectra. In Figures 2 and 3 we show the model emission spectra in
3–15$\,{\rm\mu m}$ respectively for neutral and ionized aliphatic PAHs of
$N_{\rm H,ali}=0,2,6,10$ illuminated by stars of different $T_{\rm eff}$. The
3.4 and 6.85$\,{\rm\mu m}$ aliphatic C–H features are clearly visible and
become stronger as $N_{\rm H,ali}$ increases. The 7.25$\,{\rm\mu m}$
aliphatic C–H feature, however, remains hardly noticeable even for $N_{\rm
H,ali}=10$. This is because the intrinsic strength of the 7.25$\,{\rm\mu m}$
feature ($A_{7.25}$) is much weaker than those of the 3.4 and 6.85$\,{\rm\mu
m}$ features ($A_{3.4}$, $A_{6.85}$; see Yang et al. 2016a).
In the following, we will focus on the 3.3 and 3.4$\,{\rm\mu m}$ features.
Figure 4 highlights the spectra in the wavelength range of 3.2–3.6$\,{\rm\mu
m}$ for both neutral and ionized aliphatic PAHs with $N_{\rm H,ali}=0,2,6,10$.
The 3.4$\,{\rm\mu m}$ aliphatic C–H band becomes pronounced even at $N_{\rm
H,ali}=2$. At $N_{\rm H,ali}=10$, the 3.4$\,{\rm\mu m}$ feature becomes
comparable to or even stronger than the 3.3$\,{\rm\mu m}$ aromatic C–H
feature. For the same $N_{\rm H,ali}$, PAH cations emit less at 3.3 and
3.4$\,{\rm\mu m}$ than their neutral counterparts.
For a given $N_{\rm H,ali}$, we derive $\left(I_{3.4}/I_{3.3}\right)_{\rm
mod}$, the model emission intensity ratio of the 3.4$\,{\rm\mu m}$ band to the
3.3$\,{\rm\mu m}$ band, from
$\left(\frac{I_{3.4}}{I_{3.3}}\right)_{\rm mod}=\frac{\int_{3.4}\Delta
j_{\lambda}(N_{\rm C})\,d\lambda}{\int_{3.3}\Delta j_{\lambda}(N_{\rm
C})\,d\lambda}~{}~{},$ (7)
where $I_{3.4}$ and $I_{3.3}$ are respectively the calculated intensities of
the 3.4$\,{\rm\mu m}$ and 3.3$\,{\rm\mu m}$ emission features; and
$\int_{3.3}\Delta j_{\lambda}(N_{\rm C})\,d\lambda$ and $\int_{3.4}\Delta
j_{\lambda}(N_{\rm C})\,d\lambda$ are respectively the feature-integrated
excess emission of the 3.3 and 3.4$\,{\rm\mu m}$ features of aliphatic PAHs.
In Figures 5 and 6 we show the model intensity ratios
$\left(I_{3.4}/I_{3.3}\right)_{\rm mod}$ as a function of $N_{\rm
H,ali}/N_{\rm H,aro}$ for neutral and ionized PAHs, respectively. Basically,
the model band ratios $\left(I_{3.4}/I_{3.3}\right)_{\rm mod}$ are linearly
correlated with $N_{\rm H,ali}/N_{\rm H,aro}$ for both neutrals and cations.
The correlation slope, defined as $d\left(I_{3.4}/I_{3.3}\right)_{\rm
mod}/d\left(N_{\rm H,ali}/N_{\rm H,aro}\right)$, is a weak function of $T_{\rm
eff}$ and listed in Table 1. On average, $\langle
d\left(I_{3.4}/I_{3.3}\right)_{\rm mod}/d\left(N_{\rm H,ali}/N_{\rm
H,aro}\right)\rangle\approx 1.92\pm 0.09$ for neutrals and $\approx 4.13\pm
0.16$ for cations. Therefore, to first order, we obtain
$\left(I_{3.4}/I_{3.3}\right)_{\rm mod}\approx 1.92\times\left(N_{\rm
H,ali}/N_{\rm H,aro}\right)$ for neutrals and
$\left(I_{3.4}/I_{3.3}\right)_{\rm mod}\approx 4.13\times\left(N_{\rm
H,ali}/N_{\rm H,aro}\right)$ for cations. With the temperature dependence of
the correlation slope taken into account, the model band ratio
$\left(I_{3.4}/I_{3.3}\right)_{\rm mod}$ can be expressed as
$\left(\frac{I_{3.4}}{I_{3.3}}\right)_{\rm
mod}=\left(\frac{A_{3.4}}{A_{3.3}}\right)\times\left(\frac{N_{\rm
H,ali}}{N_{\rm H,aro}}\right)\times k(T_{\rm eff})~{}~{},$ (8)
where $k(T_{\rm eff})$, the correlation slope, is
$k(T_{\rm eff})\approx\begin{cases}1.20-0.122\times\left(T_{\rm
eff}/10,000\,{\rm K}\right)+0.022\times\left(T_{\rm eff}/10,000\,{\rm
K}\right)^{2}&{\rm for~neutrals},\\ 1.18-0.113\times\left(T_{\rm
eff}/10,000\,{\rm K}\right)+0.023\times\left(T_{\rm eff}/10,000\,{\rm
K}\right)^{2}&{\rm for~cations}.\end{cases}$ (9)
The correlation slope $k(T_{\rm eff})$ somewhat decreases as $T_{\rm eff}$
increases. This is because, in regions illuminated by hot stars (of higher
$T_{\rm eff}$), the stellar photons are more energetic. Upon absorbing such
an energetic photon, PAHs are excited to higher temperatures and emit more
effectively at shorter wavelengths (e.g., 3.3$\,{\rm\mu m}$) than at longer
wavelengths (e.g., 3.4$\,{\rm\mu m}$).
Therefore, for a given $N_{\rm H,ali}/N_{\rm H,aro}$, a smaller
$\left(I_{3.4}/I_{3.3}\right)_{\rm mod}$ is expected for regions illuminated
by stars of higher $T_{\rm eff}$.
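Eqs. 8 and 9 can be evaluated directly. The following sketch (our own illustration; the function names are not from the paper) encodes the fitted slope polynomials and the intrinsic band-strength ratios quoted above:

```python
A34_OVER_A33 = {"neutral": 1.76, "cation": 3.80}  # Yang et al. (2013)

def k_slope(T_eff, charge="neutral"):
    """Correlation slope k(T_eff) of Eq. 9, with t = T_eff / 10,000 K."""
    t = T_eff / 1.0e4
    if charge == "neutral":
        return 1.20 - 0.122 * t + 0.022 * t**2
    return 1.18 - 0.113 * t + 0.023 * t**2

def band_ratio_model(nh_ali_over_nh_aro, T_eff, charge="neutral"):
    """Model band ratio (I3.4/I3.3)_mod of Eq. 8."""
    return A34_OVER_A33[charge] * nh_ali_over_nh_aro * k_slope(T_eff, charge)
```

As the text notes, $k$ decreases with $T_{\rm eff}$, so a hotter illuminating star yields a smaller $\left(I_{3.4}/I_{3.3}\right)_{\rm mod}$ for the same $N_{\rm H,ali}/N_{\rm H,aro}$.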
To relate $N_{\rm C,ali}/N_{\rm C,aro}$ to $N_{\rm H,ali}/N_{\rm H,aro}$,
we assume that one aliphatic C atom corresponds to 2.5 aliphatic C–H bonds
(intermediate between methylene –CH2 and methyl –CH3) and one aromatic C atom
corresponds to 0.75 aromatic C–H bond (intermediate between benzene C6H6 and
coronene C24H12). Therefore, the ratio of the number of C atoms in aliphatic
units to that in aromatic rings is $N_{\rm C,ali}/N_{\rm
C,aro}\approx\left(0.75/2.5\right)\,\times\,N_{\rm H,ali}/N_{\rm H,aro}$. As
the 3.3 and 3.4$\,{\rm\mu m}$ C–H stretches are predominantly emitted by
neutral PAHs, we therefore recommend the following relation to estimate
$N_{\rm C,ali}/N_{\rm C,aro}$ from the observed band ratios
$\left(I_{3.4}/I_{3.3}\right)_{\rm obs}$:
$\frac{N_{\rm C,ali}}{N_{\rm
C,aro}}\approx\frac{1}{5.87}\left(\frac{I_{3.4}}{I_{3.3}}\right)_{\rm
obs}\times\left\{1.20-0.122\times\left(T_{\rm eff}/10,000\,{\rm
K}\right)+0.022\times\left(T_{\rm eff}/10,000\,{\rm
K}\right)^{2}\right\}^{-1}\,.$ (10)
In case there is no information on $T_{\rm eff}$ (e.g., the MMP83 ISRF), we
recommend
$\frac{N_{\rm C,ali}}{N_{\rm
C,aro}}\approx\frac{1}{6.40}\left(\frac{I_{3.4}}{I_{3.3}}\right)_{\rm
obs}~{}~{}.$ (11)
The aliphatic fraction of PAHs is determined from
$\eta_{\rm ali}=\left(1+N_{\rm C,aro}/N_{\rm C,ali}\right)^{-1}~{}~{}.$ (12)
There is no need to compute the temperature probability distribution functions
and the IR emission spectra of aliphatic PAHs as long as one is only
interested in the aliphatic fraction of the UIE carrier.
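Eqs. 10–12 reduce to a few lines of arithmetic. The sketch below (our own; `aliphatic_fraction` is a hypothetical helper name) maps an observed band ratio to $\eta_{\rm ali}$, falling back to the $T_{\rm eff}$-independent calibration of Eq. 11 when the illuminating star is unknown:

```python
def aliphatic_fraction(band_ratio_obs, T_eff=None):
    """eta_ali from (I3.4/I3.3)_obs via Eqs. 10-12 (neutral-PAH calibration)."""
    if T_eff is None:
        nc_ratio = band_ratio_obs / 6.40                      # Eq. 11
    else:
        t = T_eff / 1.0e4
        k = 1.20 - 0.122 * t + 0.022 * t**2                   # Eq. 9, neutrals
        nc_ratio = band_ratio_obs / (5.87 * k)                # Eq. 10
    return 1.0 / (1.0 + 1.0 / nc_ratio)                       # Eq. 12

# e.g., BD+303639 (Table 2): (I3.4/I3.3)_obs ~ 0.34, T_eff ~ 30,000 K
eta = aliphatic_fraction(0.34, T_eff=3.0e4)
```

For the entries of Table 2 this recipe agrees with the tabulated $\eta_{\rm ali}$ to within about 0.1 percentage point.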
## 3 Aliphatic and Aromatic Observations in the Pre-JWST Era
A wealth of observational spectra of the aliphatic and aromatic C–H stretches
is available in archives and in the literature, allowing an in-depth study of
the aliphatics and aromatics in the universe. We compile the aliphatic and
aromatic C–H emission data, as completely as possible, from observations made
with space telescopes such as the Infrared Space Observatory (ISO) and
AKARI, airborne telescopes such as the Kuiper Airborne Observatory (KAO), and
ground-based telescopes such as the Infrared Telescope Facility (IRTF) and
the United Kingdom Infrared Telescope (UKIRT).
In total, we find 28 sources that show both the 3.3 and 3.4$\,{\rm\mu m}$
features. These sources include Galactic PDRs, protoplanetary nebulae (PPNe),
planetary nebulae (PNe), reflection nebulae (RNe), young stellar objects
(YSOs), and HII regions, as well as external galaxies.
For each source, we fit the observed spectrum in terms of two or more Drude
profiles combined with an underlying linear continuum:
$F_{\lambda}=a_{0}+a_{1}\lambda+\sum_{j}\frac{P_{j}\,\times\,\left(2\gamma_{j}/\pi\right)}{\left(\lambda-\lambda_{{\rm
o},j}^{2}/\lambda\right)^{2}+\gamma_{j}^{2}},$ (13)
where $a_{0}$ and $a_{1}$ are the coefficients of the linear continuum;
$\lambda_{{\rm o},j}$ and $\gamma_{j}$ are the central wavelength and width of
the $j$-th Drude profile; $P_{j}$, the power emitted from the $j$-th Drude
profile (in units of $\,{\rm erg}\,{\rm s}^{-1}\,{\rm cm}^{-2}$), is obtained by
integrating the emission feature over wavelength:
$P_{j}=\int_{\lambda_{j}}\Delta F_{\lambda}\,d\lambda~{}~{}.$ (14)
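The fitting function of Eq. 13 and the power integral of Eq. 14 can be sketched as follows (an illustration with made-up profile parameters, not a fit to any observed spectrum). Integrating a single Drude profile over wavelength recovers its input power $P$ to within a few percent, since the profile integrates to $P$ over an infinite wavelength range:

```python
import math

def drude(lam, lam0, gamma, power):
    """One Drude profile of Eq. 13; lam0 and gamma in micron."""
    return power * (2.0 * gamma / math.pi) / ((lam - lam0**2 / lam) ** 2 + gamma**2)

def model(lam, a0, a1, profiles):
    """Eq. 13: linear continuum plus a sum of Drude profiles."""
    return a0 + a1 * lam + sum(drude(lam, l0, g, p) for l0, g, p in profiles)

# Eq. 14: trapezoidal integration of the continuum-subtracted feature,
# here a 3.3-micron feature with illustrative width 0.04 micron and power 1.0.
n = 20000
lams = [2.8 + 1.2 * i / n for i in range(n + 1)]
vals = [drude(l, 3.3, 0.04, 1.0) for l in lams]
p_rec = sum(0.5 * (vals[i] + vals[i + 1]) * (lams[i + 1] - lams[i])
            for i in range(n))  # close to the input power of 1.0
```

In practice the continuum coefficients $a_{0}$, $a_{1}$ and the profile parameters would be fit to each observed spectrum (e.g., by least squares) before the feature powers are integrated.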
For the Drude profiles, the 3.3 and 3.4$\,{\rm\mu m}$ features are always
included for consideration. In some objects (e.g., IRAS 21282+5050), one or
more additional weak features at 3.43, 3.47, 3.51, and 3.56$\,{\rm\mu m}$ are
also present and each of these features is also approximated as a Drude
profile. We sum up the power emitted from all these sub-features and attribute
them to the aliphatic C–H stretches. Therefore, for the ratio of the power
emitted from the aliphatic C–H stretches to that from the aromatic C–H
stretches, we take $\left(I_{3.4}/I_{3.3}\right)_{\rm
obs}=\left(P_{3.4}+P_{3.43}+P_{3.47}+P_{3.51}+P_{3.56}\right)/P_{3.3}$
provided that these subfeatures are detected. If only the 3.3 and
3.4$\,{\rm\mu m}$ features show up, we take $\left(I_{3.4}/I_{3.3}\right)_{\rm
obs}=P_{3.4}/P_{3.3}$. We note that the band ratios
$\left(I_{3.4}/I_{3.3}\right)_{\rm obs}$ have been reported in the literature
for some sources. We nevertheless prefer to derive them ourselves: there is a
certain arbitrariness in how the underlying continuum beneath the features is
defined, and the feature strengths were calculated in different ways in
different publications, so combining values from different sources could
introduce inconsistencies. We therefore derive
$\left(I_{3.4}/I_{3.3}\right)_{\rm obs}$ in a coherent way for all sources.
For each source, we follow the above procedure to fit the observed spectrum to
derive $\left(I_{3.4}/I_{3.3}\right)_{\rm obs}$. We then derive the aliphatic
fraction $\eta_{\rm ali}$ from $\left(I_{3.4}/I_{3.3}\right)_{\rm obs}$ (see
eqs. 10–12). The spectral fits are illustrated in Figures 7–11 and the derived
aliphatic fractions $\eta_{\rm ali}$ are tabulated in Table 2.
## 4 Results and Discussion
Figure 7 shows the aliphatic and aromatic C–H stretches seen in emission in
PDRs excited by B0V or earlier-type stars with $T_{\rm eff}\gtrsim
30,000\,{\rm K}$. The aliphatic C–H stretches are relatively weak and the
aliphatic fractions of PAHs are all smaller than 3%. This is understandable
since PDRs are rich in energetic photons so that the aliphatic sidegroups
attached to PAHs could easily be stripped off.
Figure 8 shows the near-IR spectra of four PNe. NGC 7027, excited by a B2.5V
star of $T_{\rm eff}\approx 20,000\,{\rm K}$, exhibits the strongest aliphatic
C–H stretches among these four PNe. As the illuminating star becomes hotter,
$\left(I_{3.4}/I_{3.3}\right)_{\rm obs}$ decreases, and so does the PAH
aliphatic fraction. With $T_{\rm eff}\approx 37,000\,{\rm K}$, IC 418 does not
show any noticeable aliphatic C–H emission. In contrast, BD+303639, excited by
a B0V star of $T_{\rm eff}\approx 30,000\,{\rm K}$, shows a broad, shallow feature
around 3.4–3.5$\,{\rm\mu m}$. By attributing this feature to aliphatic C–H
stretches, we estimate an aliphatic fraction of $\sim\,$5.3% for PAHs in
BD+303639.
The aliphatic and aromatic C–H stretches of two reflection nebulae are shown
in Figure 9. IRAS 03035+5819 exhibits a series of weak aliphatic C–H
stretching features. NGC 1333 and IRAS 03035+5819 are both excited by a B0V
star of $T_{\rm eff}\approx 30,000\,{\rm K}$ and their PAH aliphatic fractions
are comparable to that of BD+303639 (PN), but appreciably higher than that of
S106 (PDR). Both BD+303639 and S106 are illuminated by stars with $T_{\rm
eff}\approx 30,000\,{\rm K}$, just like NGC 1333 and IRAS 03035+5819.
Figure 10 shows the near-IR spectra of four YSOs. All four objects show
several sub-features at $\sim\,$3.4–3.6$\,{\rm\mu m}$ attributed to aliphatic
C–H stretches. The PAH aliphatic fraction does not show any strong dependence
on $T_{\rm eff}$.
We show in Figure 11 the aliphatic and aromatic C–H stretching features of
four PPNe. Except for the Red Rectangle, illuminated by HD 44179 of $T_{\rm
eff}\approx 7,750\,{\rm K}$, PAHs in these PPNe are rich in aliphatic content
and their aliphatic fractions are high, with $\eta_{\rm ali}\approx 25\%$ for
IRAS 04296+3429, $\eta_{\rm ali}\approx 35\%$ for IRAS 05341+0852, and
$\eta_{\rm ali}\approx 8.6\%$ for CRL 2688. All three aliphatic-rich
PPNe are excited by cool stars which lack UV photons ($T_{\rm eff}\approx
6,500\,{\rm K}$ for IRAS 04296+3429 and IRAS 05341+0852, and $T_{\rm
eff}\approx 7,000\,{\rm K}$ for CRL 2688). This suggests that, in UV-poor
regions, PAHs are capable of retaining their aliphatic sidegroups, once
acquired, without having them stripped off. Nevertheless, the Red Rectangle, also
illuminated by a cool star with little UV radiation, shows very weak emission
at 3.4$\,{\rm\mu m}$ and the PAH aliphatic fraction is only $\sim\,$0.3%. This
indicates that, in addition to the “hardness” of the exciting stellar photons,
some other factors (e.g., the starlight intensity) are also at play in
affecting the PAH aliphatic fractions.
Among our 24 Galactic sources, six objects lack information about their
illuminating stars. Their near-IR spectra are shown in Figure 12 and the
intensity of the aliphatic C–H stretching feature (relative to the aromatic
C–H feature) varies substantially, from essentially no 3.4$\,{\rm\mu m}$
emission in IRAS 16362-4845 to highly aliphatic in IRAS 12063-6259 ($\eta_{\rm
ali}\approx 20.4\%$) and in IRAS 06572-0742 ($\eta_{\rm ali}\approx 6.0\%$).
IRAS 16362-4845 exhibits a smooth, flat continuum at $\sim\,$3.4–3.6$\,{\rm\mu
m}$. The PAH aliphatic fractions of IRAS 09296+1159 ($\eta_{\rm ali}\approx
3.5\%$), IRAS 17199-3446 ($\eta_{\rm ali}\approx 4.8\%$), and IRAS 19097+0847
($\eta_{\rm ali}\approx 4.6\%$) are moderate.
Figure 13 shows the aliphatic and aromatic C–H stretches of four nearby
galaxies. They all show considerable emission at 3.4$\,{\rm\mu m}$ and their
PAH aliphatic fractions all exceed 7%. The Large Magellanic Cloud (LMC) and
NGC 253, a dusty starburst galaxy, also show a weak sub-feature at $\sim\,$3.5
and 3.6$\,{\rm\mu m}$, respectively. It is not clear if (and how) the galaxy
metallicity affects the PAH aliphatic fraction. It has been known since the
ISO era that the PAH abundance decreases as the metallicity drops (see
Li 2020 and references therein). It is unclear if the presence and the
intensity of the 3.4$\,{\rm\mu m}$ feature (relative to the 3.3$\,{\rm\mu m}$
feature) are related to the metallicity. In this respect, JWST, with its
unprecedented sensitivity and spatial resolution, will allow an in-depth
study. In principle, in low-metallicity regions, PAHs are less likely to
attain aliphatic sidegroups because of the low C/H ratio (and therefore low
–CH3 abundance) and more likely to attain extra H atoms to be
superhydrogenated. As the intrinsic strength of the 3.4$\,{\rm\mu m}$
aliphatic C–H stretch of superhydrogenated PAHs is close to that of PAHs with
aliphatic sidegroups (see Yang et al. 2020), observationally, it is difficult
to distinguish whether the 3.4$\,{\rm\mu m}$ feature arises from
superhydrogenated PAHs or from PAHs with aliphatic sidegroups. Also, it is not
clear if (and how) the 3.4$\,{\rm\mu m}$ feature is affected by the star
formation rate. We await JWST for quantitative investigations.
In Figure 14 we show the PAH aliphatic fraction distribution of our sample of
28 sources. The majority (24/28) of these sources have
$\eta_{\rm ali}<10\%$. The median PAH aliphatic fraction is
$\langle\eta_{\rm ali}\rangle\approx 5.4\%$. Two PPNe, IRAS 04296+3429
($\eta_{\rm ali}\approx 24.8\%$) and IRAS 05341+0852 ($\eta_{\rm ali}\approx
34.9\%$), are unusually rich in aliphatics. Another object, IRAS 12063-6259
($\eta_{\rm ali}\approx 20.4\%$), of which the exact nature is unknown, also
has a large aliphatic content.
We explore whether (and how) the PAH aliphatic fraction varies with
astrophysical environments (e.g., hardness of the exciting starlight photons).
As shown in Figure 15, $\eta_{\rm ali}$ appears higher in regions illuminated
by stars with lower $T_{\rm eff}$. Indeed, as discussed above, it is generally
true that in UV-poor PPNe the 3.4$\,{\rm\mu m}$ emission feature (relative to
the 3.3$\,{\rm\mu m}$ feature) is much stronger than that of PNe, RNe, and
PDRs (see Figure 11). Nevertheless, the Red Rectangle, a PPN illuminated by HD
44179 of $T_{\rm eff}\approx 7,750\,{\rm K}$, has a rather low $\eta_{\rm
ali}$, much lower than those of PDRs and RNe illuminated by stars of much
higher $T_{\rm eff}$. This implies that not only the hardness but also the intensity
of the starlight may affect the accumulation and survival of aliphatic
sidegroups attached to PAHs. This can be studied in more detail by future
JWST/NIRSpec observations of spatially-resolved PAH spectra. Previously, the
spatial variations of $\left(I_{3.4}/I_{3.3}\right)_{\rm obs}$ with the
exciting UV starlight intensities have been investigated (e.g., see Joblin et
al. 1996, Sloan et al. 1997, Goto et al. 2003). Again, with its superb
sensitivity and spatial resolution, JWST will allow us to explore the spatial
variations of the PAH aliphatic fractions, and their relation to the physical
and chemical conditions, at unprecedented depth.
While the 3.4$\,{\rm\mu m}$ aliphatic C–H emission is widely seen in various
astrophysical environments, prior to JWST, the detection of the 6.85 and
7.25$\,{\rm\mu m}$ aliphatic C–H deformation bands is rare and has so far been
reported in only about two dozen objects, mostly based on observations made
with the Infrared Spectrograph (IRS) on board the Spitzer Space Telescope and
the Short Wavelength Spectrometer (SWS) on board ISO (see Sloan et al. 2014,
Yang et al. 2016a). This will change with the launch of JWST: due to its
unprecedented sensitivity, the MIRI spectrometer is well suited for detecting
the 6.85 and 7.25$\,{\rm\mu m}$ bands (while the NIRSpec spectrograph is ideal
for detecting the 3.4$\,{\rm\mu m}$ band). A combination of the 3.4, 6.85 and
7.25$\,{\rm\mu m}$ bands would allow us to probe the aliphatic contents of
large PAHs (e.g., see Li & Draine 2012). It is interesting to note that, in
some planetary nebulae where the 3.4$\,{\rm\mu m}$ emission is observed, a
5.25$\,{\rm\mu m}$ band is often also seen. The 5.25$\,{\rm\mu m}$ band is
thought to arise from large compact PAHs (see Boersma et al. 2009). Larger
compact PAHs with aliphatic sidegroups should be more likely to survive in
environments illuminated by cool stars. It would be interesting to consider,
in the JWST era, large aliphatic PAHs in a theoretical framework similar to
that presented in §2 and see if any correlations exist between these bands.
Finally, we note that the 3.4$\,{\rm\mu m}$ band could also arise from
superhydrogenated PAHs whose edges contain excess H atoms (Bernstein et al.
1996, Sandford et al. 2013, Yang et al. 2020). The addition of excess H atoms
to PAHs converts the flat sp2 aromatic bonding of their associated C atoms
into tetrahedral sp3 aliphatic bonding, resulting in the creation of aliphatic
C–H stretching bands. Compared with methylated PAHs in which one aliphatic C
atom corresponds to three aliphatic C–H bonds, for superhydrogenated PAHs, one
“superhydrogenated” C atom corresponds to two aliphatic C–H bonds. For
superhydrogenated PAHs, the ratio of the intrinsic strength of the
3.4$\,{\rm\mu m}$ aliphatic C–H stretch to that of the 3.3$\,{\rm\mu m}$
aromatic C–H stretch, $\langle A_{3.4}/A_{3.3}\rangle\approx 1.98$ (Yang et
al. 2020), is similar to that of methylated PAHs ($\langle
A_{3.4}/A_{3.3}\rangle\approx 1.76$; Yang et al. 2013, 2017b). Therefore, the
aliphatic fraction as defined in Eq. 12 would
be higher by a factor of
$\approx\left(3/2\right)\times\left(1.76/1.98\right)\approx 1.33$, if the
observed 3.4$\,{\rm\mu m}$ emission is attributed to superhydrogenated PAHs.
## 5 Summary
To facilitate a quantitative analysis of the aliphatic and aromatic contents
of PAHs in the JWST era, we have proposed a theoretical framework for
determining the aliphatic fractions ($\eta_{\rm ali}$) of PAHs and have
applied the framework to pre-JWST UIE data. Our major results are as follows:
1.
An analytical formula for relating the PAH aliphatic fraction ($\eta_{\rm
ali}$) to the emission intensity ratio of the 3.4$\,{\rm\mu m}$ feature to the
3.3$\,{\rm\mu m}$ feature ($I_{3.4}/I_{3.3}$) is presented. This relation is
somewhat dependent on the “hardness” of the exciting stellar photons measured
by the stellar effective temperature ($T_{\rm eff}$).
2.
To demonstrate the effectiveness of this framework (of deriving $\eta_{\rm
ali}$ from $I_{3.4}/I_{3.3}$), we have compiled the 3.3 and 3.4$\,{\rm\mu m}$
UIE data obtained in the pre-JWST era for an as complete as possible sample of
28 Galactic and extragalactic sources. We have then applied the $\eta_{\rm
ali}$–$I_{3.4}/I_{3.3}$ relation to these pre-JWST data.
3.
We have derived the PAH aliphatic fraction $\eta_{\rm ali}$ for each source
from the observed band ratio $\left(I_{3.4}/I_{3.3}\right)_{\rm obs}$ and
found a median aliphatic fraction of $\langle\eta_{\rm ali}\rangle\approx
5.4\%$. Generally, the aliphatic fractions are the highest in protoplanetary
nebulae illuminated by cool stars lacking UV radiation. However, the hardness
of stellar photons is not the only factor affecting the PAH aliphaticity;
other factors, such as the starlight intensity, may also play an important role.
We thank B.T. Draine, R. Glaser, A.N. Witt and the anonymous referee for
valuable suggestions. We thank J.S. Spilker for providing us the JWST data of
SPT0418-47. XJY is supported in part by NSFC 12122302 and 11873041. AL is
supported in part by NASA grants 80NSSC19K0572 and 80NSSC19K0701.
## References
* (1) Allamandola, L.J., Tielens, A.G.G.M., & Barker, J.R. 1985, ApJ, 290, L25
* (2) Allamandola, L. J., Boersma, C., Lee, T. J., et al. 2021, ApJL, 917, L35
* (3) Barker, J. R., Allamandola, L. J., & Tielens, A.G.G.M. 1987, ApJ, 315, L61
* (4) Bernstein, L. S., Shroll, R. M., Lynch, D. K., & Clark, F. O. 2017, ApJ, 836, 229
* (5) Bernstein, M.P., Sandford, S.A., & Allamandola, L.J. 1996, ApJ, 472, L127
* (6) Boersma, C., Mattioda, A. L., Bauschlicher, C. W., et al. 2009, ApJ, 690, 1208
* (7) Buragohain, M., Pathak, A., Sarre, P., Onaka, T., & Sakon, I. 2015, MNRAS, 454, 193
* (8) Buragohain, M., Pathak, A., Sarre, P., Onaka, T., & Sakon, I. 2016, Planet. Space Sci., 133, 97
* (9) Buragohain, M., Pathak, A., Sakon, I., & Onaka, T. 2020, ApJ, 892, 11
* (10) Draine, B.T., & Li, A. 2001, ApJ, 551, 807
* (11) Draine, B.T., & Li, A. 2007, ApJ, 657, 810
* (12) Geballe, T.R., Lacy, J.H., Persson, S.E., McGregor, P. J., & Soifer, B.T. 1985, ApJ, 292, 500
* (13) Geballe, T. R. & van der Veen, W. E. C. J. 1990, A&A, 235, L9
* (14) Geballe, T. R., Tielens, A. G. G. M., Kwok, S., & Hrivnak, B. J. 1992, ApJL, 387, L89
* (15) Goto, M., Gaessler, W., Hayano, Y., et al. 2003, ApJ, 589, 419
* (16) Joblin, C., Tielens, A.G.G.M., Allamandola, L.J., & Geballe, T.R. 1996, ApJ, 458, 610
* (17) Jones, A. P., Fanciullo, L., Köhler, M., et al. 2013, A&A, 558, A62
* (18) Jourdain de Muizon, M., Geballe, T.R., d’Hendecourt, L.B., & Baas, F. 1986, ApJ, 306, L105
* (19) Jourdain de Muizon, M., d’Hendecourt, L. B., & Geballe, T. R. 1990, A&A, 227, 526
* (20) Kondo, T., Kaneda, H., Oyabu, S., et al. 2012, ApJ, 751, L18
* (21) Kurucz, R.L. 1979, ApJS, 40, 1
* (22) Kwok, S. 2022, Ap&SS, 367, 16
* (23) Kwok, S., & Zhang, Y. 2011, Nature, 479, 80
* (24) Léger, A., & Puget, J.L. 1984, A&A, 137, L5
* (25) Li, A., 2004, in Astrophysics of Dust, Witt, A.N., Clayton, G.C., & Draine, B.T. (eds.), ASP Conf. Ser., 309, 417
* (26) Li, A. 2020, Nature Astronomy, 4, 339
* (27) Li, A., & Draine, B.T. 2001, ApJ, 554, 778
* (28) Li, A., & Draine, B.T. 2012, ApJ, 760, L35
* (29) Lyu, J.W., Yang, X.J., Li, A., et al. 2023, in preparation
* (30) Mathis, J.S., Mezger, P.G., & Panagia, N. 1983, A&A, 128, 212
* (31) Mori, T. I., Onaka, T., Sakon, I., et al. 2014, ApJ, 784, 53
* (32) Onaka, T., Nakamura, T., Sakon, I., et al. 2018, ApJ, 853, 31
* (33) Pilleri, P., Joblin, C., Boulanger, F., et al. 2015, A&A, 577, A16
* (34) Papoular, R., Breton, J., Gensterblum, G., Nenner, I., Papoular, R. J., & Pireaux, J.-J. 1993, A&A, 270, L5
* (35) Rouillé, G., Steglich, M., Carpentier, Y., et al. 2012, ApJ, 752, 25.
* (36) Sakata, A., Wada, S., Onaka, T., & Tokunaga, A.T. 1987, ApJ, 320, L63
* (37) Sandford, S.A., Bernstein, M. P., & Materese, C.K. 2013, ApJS, 205, 8
* (38) Sloan, G.C., Bregman, J.D., Geballe, T.R., Allamandola, L.J., & Woodward, C.E. 1997, ApJ, 474, 735
* (39) Sloan, G. C., Jura, M., Duley, W. W., et al. 2007, ApJ, 664, 1144
* (40) Sloan, G. C., Lagadec, E., Zijlstra, A. A., et al. 2014, ApJ, 791, 28
* (41) Spilker, J. S., Phadke, K. A., Aravena, M., et al. 2023, Nature, 618, 708
* (42) Steglich, M., Jäger, C., Huisken, F., et al. 2013, ApJS, 208, 26
* (43) Yamagishi, M., Kaneda, H., Ishihara, D., et al. 2012, A&A, 541, A10
* (44) Yang, X. J., Glaser, R., Li, A., & Zhong, J. X. 2013, ApJ, 776, 110
* (45) Yang, X. J., Glaser, R., Li, A., & Zhong, J. X. 2016a, MNRAS, 462, 1551
* (46) Yang, X. J., Li, A., Glaser, R., & Zhong, J. X. 2016b, ApJ, 825, 22
* (47) Yang, X. J., Li, A., Glaser, R., & Zhong, J. X. 2017a, ApJ, 837, 171
* (48) Yang, X. J., Glaser, R., Li, A., & Zhong, J. X. 2017b, New Astron. Rev., 77, 1
* (49) Yang, X. J., Li, A., & Glaser, R. 2020, ApJS, 247, 1
Figure 1: The aromatic and aliphatic C–H stretches respectively at
$\sim\,$3.3 and 3.4$\,{\rm\mu m}$ from SPT0418-47, a galaxy at $z\approx
4.22$, detected by JWST/MIRI (Spilker et al. 2023). The MIRI spectrum has been
smoothed to a resolution of $R\approx 600$. Also shown is the JWST/NIRCam
spectrum of GOODS-S 9883, a moderately distant galaxy at $z$ $\sim$ 0.36 (Lyu
et al. 2023). To facilitate comparison, the JWST/NIRCam spectrum has been
multiplied by a factor of three. Figure 2: Model IR emission spectra of
neutral aliphatic PAHs of $N_{\rm H,ali}=0,2,6,10$ aliphatic H atoms and
$N_{\rm C,aro}=24$ aromatic C atoms illuminated by an M2V star of $T_{\rm
eff}=3,500\,{\rm K}$ (orange lines), a solar-type star of $T_{\rm
eff}=6,000\,{\rm K}$ (purple lines), an A2V star of $T_{\rm eff}=10,000\,{\rm
K}$ (magenta lines), a B1.5V star of $T_{\rm eff}=22,000\,{\rm K}$ (blue
lines), a B0V star of $T_{\rm eff}=30,000\,{\rm K}$ (cyan lines), and the
MMP83 ISRF (black lines). The starlight intensities are all set to be $U=1$.
The 3.4 and 6.85$\,{\rm\mu m}$ aliphatic C–H features are clearly seen in the
spectra of aliphatic PAHs with $N_{\rm H,ali}=2,6,10$, while the
7.25$\,{\rm\mu m}$ aliphatic C–H feature is less prominent. For clarity, their
spectra are vertically shifted. Figure 3: Same as Figure 2 but for aliphatic
PAH cations. Figure 4: Same as Figures 2 and 3 but highlighting the aromatic and
aliphatic C–H bands in the 3.2–3.6$\,{\rm\mu m}$ wavelength range. Figure 5:
Model-calculated intensity ratios $\left(I_{3.4}/I_{3.3}\right)_{\rm mod}$ as
a function of $N_{\rm H,ali}/N_{\rm H,aro}$ for neutral aliphatic PAHs of
$N_{\rm C,aro}=24$. These molecules are illuminated by an M2V star of $T_{\rm
eff}=3,500\,{\rm K}$ (orange lines), a solar-type star of $T_{\rm
eff}=6,000\,{\rm K}$ (purple lines), an A2V star of $T_{\rm eff}=10,000\,{\rm
K}$ (magenta lines), a B1.5V star of $T_{\rm eff}=22,000\,{\rm K}$ (blue
lines), a B0V star of $T_{\rm eff}=30,000\,{\rm K}$ (cyan lines), and the
MMP83 ISRF (black lines). The starlight intensities are all set to be $U=1$.
Figure 6: Same as Figure 5 but for aliphatic PAH cations.
Figure 7: Aliphatic and aromatic C–H stretching features of four PDRs (IRAS
18416-0420, Jourdain de Muizon et al. 1990; Orion Bar, Sloan et al. 1997; IRAS
19442+2427, Jourdain de Muizon et al. 1990; S106, Geballe et al. 1985). The
observed spectra are shown as black diamonds. The fitted spectra (solid red
lines) are a combination of two or more Drude profiles and a linear,
underlying continuum.
Figure 8: Same as Figure 7 but for four planetary nebulae (IC 418, Geballe et
al. 1985; BD+303639, Geballe et al. 1985; IRAS 21282+5050, Jourdain de Muizon
et al. 1986; NGC 7027, Geballe et al. 1985).
Figure 9: Same as Figure 7 but for reflection nebulae NGC 1333 (Joblin et al.
1996) and IRAS 03035+5819 (Jourdain de Muizon et al. 1986).
Figure 10: Same as Figure 7 but for four YSOs (IRAS 20293+3952, IRAS
20319+3958, IRAS 19213+1723, and IRAS 03260+3111; Jourdain de Muizon et al.
1990).
Figure 11: Same as Figure 7 but for four PPNe: the Red Rectangle illuminated
by HD 44179 (Geballe et al. 1985); CRL 2688 (Geballe et al. 1992); IRAS
04296+3429 (Geballe et al. 1992); and IRAS 05341+0852 (Geballe et al. 1990).
Figure 12: Same as Figure 7 but for six sources of which the effective
temperatures of the illuminating stars are unknown: IRAS 16362-4845 (Jourdain
de Muizon et al. 1990); IRAS 06572-0742 (Jourdain de Muizon et al. 1990); IRAS
09296+1159 (Geballe et al. 1990); IRAS 12063-6259 (Jourdain de Muizon et al.
1990); IRAS 17199-3446 (Jourdain de Muizon et al. 1990); and IRAS 19097+0847
(Jourdain de Muizon et al. 1990).
Figure 13: Same as Figure 7 but for four external galaxies: M82 (Yamagishi et
al. 2012); NGC 253 (Yamagishi et al. 2012); NGC 2782 (Onaka et al. 2018); and
LMC (Mori et al. 2014).
Figure 14: Histogram of the aliphatic fractions of PAHs for 28 UIE sources.
The median aliphatic fraction is $\langle\eta_{\rm ali}\rangle\approx 5.4\%$.
Figure 15: PAH aliphatic fraction ($\eta_{\rm ali}$) vs. stellar effective
temperature ($T_{\rm eff}$). PPN: protoplanetary nebula; PN: planetary nebula;
RN: reflection nebula; MC: molecular cloud; PDR: photodissociation region; YSO:
young stellar object.
Table 1: Slopes of the Correlation between the Model Band Ratio $\left(I_{3.4}/I_{3.3}\right)_{\rm mod}$ and $N_{\rm H,ali}/N_{\rm H,aro}$ for Neutrals and Cations as a Function of Stellar Effective Temperature. $T_{\rm eff}$ (K) | Neutral PAHs | Cationic PAHs
---|---|---
3500 | 2.06 | 4.38
6000 | 1.98 | 4.20
10000 | 1.93 | 4.14
22000 | 1.84 | 3.97
30000 | 1.81 | 3.96
Average | 1.92 | 4.13
Stdev | 0.09 | 0.16
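The average slopes quoted in the text ($1.92\pm 0.09$ for neutrals, $4.13\pm 0.16$ for cations) follow directly from the tabulated values; a quick consistency check (assuming the quoted scatter is the population standard deviation):

```python
from statistics import mean, pstdev

# Correlation slopes from Table 1, for T_eff = 3,500 ... 30,000 K
slopes_neutral = [2.06, 1.98, 1.93, 1.84, 1.81]
slopes_cation = [4.38, 4.20, 4.14, 3.97, 3.96]

avg_n, sd_n = mean(slopes_neutral), pstdev(slopes_neutral)  # ~1.92, ~0.09
avg_c, sd_c = mean(slopes_cation), pstdev(slopes_cation)    # ~4.13, ~0.16
```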
Table 2: Observed Band Ratios $\left(I_{3.4}/I_{3.3}\right)_{\rm obs}$ of Astronomical Sources Exhibiting Both the 3.3 and 3.4$\,{\rm\mu m}$ Emission, and PAH Aliphatic Fractions $\eta_{\rm ali}$ Derived from $\left(I_{3.4}/I_{3.3}\right)_{\rm obs}$ Based on Eqs. 10–12. Object | Type | $T_{\rm eff}$ | Correlation | $\left(I_{3.4}/I_{3.3}\right)_{\rm obs}$ | $\eta_{\rm ali}$
---|---|---|---|---|---
| | (K) | Slope | | (%)
IRAS 18416-0420 | PDR | 40000 | 1.87 | 0.15 | 2.33
Orion Bar | PDR | 39000 | 1.86 | 0.11 | 1.74
IRAS 19442+2427 | PDR | 36000 | 1.84 | 0.17 | 2.67
S106 | PDR | 30000 | 1.81 | 0.08 | 1.28
IC418 | PN | 36700 | 1.84 | 0.01 | 0.08
BD+303639 | PN | 30000 | 1.81 | 0.34 | 5.30
IRAS 21282+5050 | PN | 28000 | 1.81 | 0.35 | 5.42
NGC7027 | PN | 20000 | 1.84 | 0.53 | 7.94
IRAS 03035+5819 | RN | 30000 | 1.81 | 0.43 | 6.67
NGC 1333 | RN | 30000 | 1.81 | 0.36 | 5.58
IRAS 20293+3952 | YSO | 20500 | 1.83 | 0.49 | 7.47
IRAS 20319+3958 | YSO | 20500 | 1.83 | 0.44 | 6.78
IRAS 19213+1723 | YSO | 15000 | 1.87 | 0.26 | 3.93
IRAS 03260+3111 | YSO | 13000 | 1.90 | 0.34 | 5.15
HD44179 | PPN | 7750 | 1.97 | 0.02 | 0.32
CRL2688 | PPN | 7000 | 1.98 | 0.62 | 8.56
IRAS 04296+3429 | PPN | 6500 | 1.99 | 2.18 | 24.79
IRAS 05341+0852 | PPN | 6500 | 1.99 | 3.56 | 34.94
IRAS 16362-4845 | MC | 5700 | 2.00 | 0.54 | 7.54
IRAS 06572-0742 | … | 18000 | 1.85 | 0.39 | 6.00
IRAS 09296+1159 | … | … | 1.92 | 0.24 | 3.54
IRAS 12063-6259 | … | … | 1.92 | 1.64 | 20.37
IRAS 17199-3446 | … | … | 1.92 | 0.32 | 4.82
IRAS 19097+0847 | … | … | 1.92 | 0.31 | 4.59
M82 | starburst galaxy | … | 1.92 | 0.58 | 8.35
NGC253 | starburst galaxy | … | 1.92 | 0.52 | 7.48
NGC2782 | spiral galaxy | … | 1.92 | 0.81 | 11.26
LMC | irregular galaxy | … | 1.92 | 0.72 | 10.06
# Improving Reliable Navigation under Uncertainty via Predictions Informed by
Non-Local Information
Raihan Islam Arnob and Gregory J. Stein R. Arnob and G. Stein are with the
Department of Computer Science, George Mason University, USA, {rarnob,
<EMAIL_ADDRESS>
###### Abstract
We improve reliable, long-horizon, goal-directed navigation in partially-
mapped environments by using non-locally available information to predict the
goodness of temporally-extended actions that enter unseen space. Making
predictions about where to navigate in general requires non-local information:
any observations the robot has seen so far may provide information about the
goodness of a particular direction of travel. Building on recent work in
learning-augmented model-based planning under uncertainty, we present an
approach that can both rely on non-local information to make predictions (via
a graph neural network) and is reliable by design: it will always reach its
goal, even when learning does not provide accurate predictions. We conduct
experiments in three simulated environments in which non-local information is
needed to perform well. In our large-scale university building environment,
generated at scale from real-world floorplans, we demonstrate a 9.3%
reduction in cost-to-go compared to a non-learned baseline and a 14.9%
reduction compared to a learning-informed planner that can only use local
information to inform its predictions.
## I Introduction
We focus on the task of goal-directed navigation in a partially-mapped
environment, in which a robot is expected to reach an unseen goal in minimum
expected time. Often modeled as a Partially Observable Markov Decision Process
(POMDP) [1], long-horizon navigation under uncertainty is computationally
demanding, and so many strategies turn to learning to make predictions about
unseen space and thereby inform good behavior. To perform well, a robot must
understand how parts of the environment the robot cannot currently see (i.e.,
non-locally available information) inform where it should go next, a
challenging problem for many existing planning strategies that rely on
learning.
Consider the simple scenario from our _J-Intersection_ environment shown in
Fig. 1: information at the center of the map (the color of that region)
informs whether the robot should travel left or right; optimal behavior
involves following the hallway whose color matches that of the center of the
map. As this color is not visible from the intersection, a robot must remember
what the space looked like around the corner to perform well and learn how
that information relates to its decision. More generally, many real-world
environments require such understanding, a particularly challenging task for
building-scale environments. In this work, we aim to allow a robot to retain
non-local knowledge and learn to use it to make predictions that inform where
it should travel next.
Recently, learning-driven approaches—including many model-free approaches
trained via deep reinforcement learning [2, 3]—have demonstrated the capacity
to perform well in this domain. However, in the absence of an explicit map for
the robot to use to keep track of where it has yet to go, many such approaches
are unreliable, lacking guarantees that they will reach the goal [4].
Moreover, these approaches struggle to reason far enough into the future to
understand the impact of their actions; as a result, they perform poorly and
can be brittle and unreliable for long-horizon planning.
The recent Learning over Subgoals planning approach (LSP) [5] introduces a
high-level abstraction for planning in a partial map that allows for both
state-of-the-art performance and reliability-by-design. In LSP, actions
correspond to exploration of a particular region of unseen space. Learning
(via a fully-connected neural network) is used to estimate the goodness of
exploratory actions, including the likelihood an exploration will reveal the
unseen goal. These predictions inform model-based planning and are thus used
to compute expected cost. LSP overcomes two problems: (1) its state and action
abstraction allows for learning-informed reasoning far into the future and (2)
it is guaranteed to reach the goal if there exists a viable path. However, LSP
is limited: its ability to make predictions about unseen space only makes use
of locally observable information, limiting its performance.
Figure 1: Overview: non-local information is often essential for good
navigation in a partial map. Our LSP-GNN approach uses a graph neural network
to make predictions about unseen space via both local and non-local
information and integrates these into the Learning over Subgoals model-based
planning abstraction [5, 6] to improve reliable navigation.
In this paper, we extend the Learning over Subgoals Planner (LSP-Local),
replacing its learning backend with a Graph Neural Network (LSP-GNN),
affording reliable learning-informed planning capable of using both local and
non-local information to make predictions about unseen space and thus improve
performance in complex navigation scenarios in building-scale environments.
Using a graph representation of the partial map—constructed via a map skeleton
[7] so as to preserve topological structure—we demonstrate that our GNN allows
for accurate predictions of unseen space using non-local information.
Additionally, we demonstrate that our LSP-GNN planner improves performance
over the original LSP-Local planner while retaining guarantees on reliability:
i.e., the robot always reaches the goal. We show the effectiveness of our
approach in our simulated _J-Intersection_ , _Parallel Hallway_ , and
_University Building_ environments, in the latter yielding improvements of
9.3% and 14.9% (respectively) over non-learned and learned baselines.
## II Related Work
Planning under Uncertainty: POMDPs [1, 8, 9, 10] have been used to represent
navigation and exploration tasks under uncertainty, yet direct solution of the
model implicit in the POMDP is often computationally infeasible. To mitigate
this limitation, many approaches to planning rely on learning to inform
behavior [4, 11, 12], yet only plan a few time steps into the future and so
are not well-suited to long-horizon planning problems. Some reinforcement
learning approaches that deal with partially observed environments [13, 14,
15, 16, 17, 18] are also limited to fairly small-scale environments. The
MERLIN agent [2] uses a differentiable neural computer to recall information
over much longer time horizons than is typically possible for end-to-end-
trained model-free deep reinforcement learning systems. However, the
reinforcement learning approaches [2, 19, 20] can be difficult to train and
lack plan completeness, making them somewhat brittle in practice. Our proposed
work improves long-horizon planning under uncertainty by learning relational
properties from non-local observations of the environment while guaranteeing
completeness.
Graph Neural Networks and Planning: Battaglia et al. [21] present a survey of
GNN approaches, demonstrating how GNNs can be used for relational reasoning
and exhibit combinatorial generalization, opening numerous opportunities for
learning over structured and relational data. Zhou et al. [22] show how GNNs
have been used in the field of modeling physics systems, learning molecular
fingerprints, predicting protein interfaces, classifying diseases, and many
others. GNNs are fast to evaluate on sparse graphs and have shown capacity to
generalize effectively in multiple domains [21, 23, 24]. Moreover, GNNs have
recently been used to accelerate task and motion planning [25, 26] and to
inform other problems of interest to robotics: joint mapping and navigation
[27], object search in previously-seen environments [28], and modeling
physical interaction [29]. In particular, Chen et al. [30] propose a framework
that uses a GNN in conjunction with deep reinforcement learning to address the
problem of autonomous exploration under localization uncertainty for a mobile
robot with 3D range sensing.
## III Problem Formulation
Our robot is tasked to reach an unseen goal in a partially-mapped environment
in minimum expected cost (distance). The simulated robot is equipped with a
semantically-aware planar laser scanner, which it can use to both localize and
update its partial semantic-occupancy-grid map of its local surroundings,
limited by range and obstacle occlusion. As the robot navigates the partially-
mapped environment, it updates its belief state $b_{t}$ to include newly-
revealed space and its semantic class.
Formally, we represent this problem as a Partially Observable Markov Decision
Process [1, 8] (POMDP). The expected cost $Q$ under this model can be written
via a belief space variant of the Bellman equation [31]:
$Q(b_{t},a_{t})=\sum_{b_{t+1}}P(b_{t+1}|b_{t},a_{t})\Big[R(b_{t+1},b_{t},a_{t})+\min_{a_{t+1}\in\mathcal{A}(b_{t+1})}Q(b_{t+1},a_{t+1})\Big],$ (1)
where $R(b_{t+1},b_{t},a_{t})$ is the cost of reaching belief state $b_{t+1}$
from $b_{t}$ by taking action $a_{t}$ and $P(b_{t+1}|b_{t},a_{t})$ is the
transition probability.
Figure 2: Our robot’s actions correspond to boundaries between free and
unseen space. The robot can leave observed space through either boundary: via
subgoal $s_{1}$ or $s_{2}$. Upon selecting action $a_{2}$, the robot reaches
the goal with probability $P_{S}$ and incurs an expected cost $R_{S}$, or is
turned back (probability $1-P_{S}$), accumulates cost $R_{E}$ and selects
another action.
## IV Preliminaries: Model-based Planning under Uncertainty via Learning over
Subgoals
As Eq. (1) cannot be solved directly, our robot instead relies on the
recent Learning over Subgoals Planning (LSP) approach [5] to determine the
robot’s behavior. LSP introduces a model-based planning abstraction that
alleviates the computational requirements of POMDP planning, affording both
reliability and good performance informed by predictions about unseen space
from learning.
For LSP planning, actions available to the robot correspond to navigation to
_subgoals_ —each associated with a boundary between free and unknown space—and
then exploration beyond in an effort to reach the unseen goal. Consistent with
this action abstraction, planning under the LSP model is done over an abstract
belief state: a tuple $b_{t}=\\{m_{t},q_{t}\\}$, where $m_{t}$ is the current
map of the environment, and $q_{t}$ is the robot pose. Each high-level action
$a_{t}\in\mathcal{A}(\\{m_{t},q_{t}\\})$ has a binary outcome: with
probability $P_{S}(a_{t})$, the robot _succeeds_ in reaching the goal or (with
the inverse probability $1-P_{S}(a_{t})$) fails to reach the goal. Upon
selecting an action $a_{t}$, the robot must first move through known space to
the boundary, accumulating a cost $D(m_{t},q_{t},a_{t})$. If the robot
succeeds in reaching the goal, it accumulates a _success cost_ $R_{S}(a_{t})$,
the expected cost for the robot to reach the goal, and no further navigation
is necessary. Otherwise, the robot accumulates an _exploration cost_
$R_{E}(a_{t})$, the expected cost of exploring the region beyond the subgoal
of interest and needing to turn back, and must subsequently choose another
action $a_{t+1}\in
A_{t+1}\equiv\mathcal{A}(\\{m_{t},q(a_{t})\\})\setminus\\{a_{t}\\}$.
Under this LSP planning model, the expected cost of taking an action $a_{t}$
from belief state $b_{t}=\\{m_{t},q_{t}\\}$ is
$Q(\{m_{t},q_{t}\},a_{t}\in\mathcal{A})=D(m_{t},q_{t},a_{t})+P_{S}(a_{t})R_{S}(a_{t})+(1-P_{S}(a_{t}))\left[R_{E}(a_{t})+\min_{a_{t+1}}Q(\{m_{t},q(a_{t})\},a_{t+1})\right]$ (2)
While the known-space distance $D(m_{t},q_{t},a_{t})$ can be calculated
directly from the observed map using A* or RRT*, the _subgoal
properties_ $P_{S}(a_{t})$, $R_{S}(a_{t})$, and $R_{E}(a_{t})$ for each
subgoal are estimated via learning from information collected during
navigation. (The terms $P_{S}$, $R_{S}$, and $R_{E}$ are implicitly functions
of the belief, but are shown here only as functions of the chosen action for
notational simplicity.)
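To make this model concrete, the expected-cost recursion of Eq. (2) can be evaluated exactly by recursing over the set of untried subgoals. The following is a minimal sketch, not the authors' implementation; the distance table and property dictionaries are hypothetical stand-ins for the learned estimates:

```python
from functools import lru_cache

def lsp_expected_cost(subgoals, D, Ps, Rs, Re, start):
    """Evaluate the expected cost of Eq. (2) exactly by recursing over
    the set of remaining (untried) subgoals.

    subgoals -- tuple of subgoal ids
    D        -- dict: (from_id, to_id) -> known-space travel distance
    Ps/Rs/Re -- dicts: subgoal id -> P_S, R_S, R_E estimates
    start    -- id of the robot's current location
    """
    @lru_cache(maxsize=None)
    def Q(pos, remaining):
        best = float('inf')
        for a in remaining:
            rest = tuple(s for s in remaining if s != a)
            q = D[(pos, a)] + Ps[a] * Rs[a]
            p_fail = 1.0 - Ps[a]
            if p_fail > 0:
                # On failure, turn back and choose among the remaining
                # subgoals (infinite cost if none are left).
                fail = Q(a, rest) if rest else float('inf')
                q += p_fail * (Re[a] + fail)
            best = min(best, q)
        return best

    return Q(start, tuple(subgoals))
```

With two subgoals, one cheap-but-uncertain and one certain-but-costly, the recursion weighs the risk of having to turn back against the longer guaranteed route.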
In the LSP approach [5] and in other LSP-derived planners so far [32, 6],
learning has relied only on _local_ information—e.g., semantic information,
images, or local structure. However, locally-accessible information alone
cannot inform effective predictions about unseen space in general; information
revealed elsewhere in the environment may determine where a robot should
navigate next. As such, the learned models upon which existing LSP approaches
rely perform poorly in even simple environments where non-locally available
information is required. We show one example of this limitation in Sec. V and
discuss how we use Graph Neural Networks to overcome it in Sec. VI.
## V Motivating Example: A Memory Test for Navigation
Figure 3: Low cost navigation in our J-Intersection environment requires non-
local information. When the goal is either on left or right from the
intersection, we need the non-local information from the start position to
decide correctly at the intersection. Choosing always left or right or even
choosing one color over another will not reliably succeed.
Fig. 3 shows an example scenario motivating the necessity of using non-locally
observable information to make good predictions about the environment while
trying to reach the goal under uncertainty. Our _J-Intersection_ environment
contains either a red or blue square region; at the distant intersection,
which is occluded from that region, the hallway whose color matches it leads
to the goal (bottom).
Maps in this environment are structured so that the color of the hallway the
robot should follow matches the color of the center region of the map. We
randomize the color of the center map region and mirror the environment
randomly so that no systematic policy (e.g., _follow the blue hallway_ or
_turn left at the fork_) will efficiently reach the goal.
Since the LSP approach is limited to making predictions for each subgoal
using only locally observable information, it cannot learn the (simple)
defining structural characteristic of the environment: if the inside square
region is red then the path to the goal is red, and if it is blue then the
blue path leads to the goal. Instead, we will augment the LSP approach to
rely on a _graph neural network_ [21] to estimate the subgoal properties,
allowing it to use both local and non-local information to make predictions
about the goodness of actions that enter unseen space and thus perform well
across a variety of complex environments.
## VI Approach: Making Predictions about Unseen Space using Non-Local
Information
We aim to improve navigation under uncertainty by estimating task-relevant
properties of unseen space via non-locally observable information. Consistent
with our discussion in Sec. IV for modelling uncertainty via POMDP, our robot
relies on the LSP model-based planning abstraction of Stein et al. [5] for
high-level navigation through partially-revealed environments, for which
learning is used to estimate the _subgoal properties_ ($P_{S}$, $R_{S}$, and
$R_{E}$) used to determine expected cost via Eq. (2).
We will use a Graph Neural Network (GNN) to overcome the limitations of making
predictions using only local information (as discussed in Sec. V) and thus
improve both predictive power and planning performance. A graph-based
representation of the environment captures both topological structure and also
allows information to be retained and communicated over long distances [33,
34]. A GNN is a deep-learning approach that allows predictions over graph
data; to plan, we require estimates of the properties ($P_{S}$, $R_{S}$, and
$R_{E}$) for each subgoal node and so our graph neural network will output
estimates of these properties for each. In the following sections, we detail
how we convert the environment into a graph representation (Sec. VI-A), the
network and training parameters (Sec. VI-B), and how training data is
generated (Sec. VI-C).
### VI-A Computing a High-level Graph Representation
While the occupancy grid of the observed region can be used as a graph
representation of the environment, it has too many nodes for learning to be
practical. Instead, we want to generate a simplified (few-node) graph of the
environment that preserves high-level topological structure, so that nodes
exist at (i) intersections, (ii) dead-ends, and (iii) subgoals.
Graph Generation: We create this graph via a process shown in Fig. 4. We first
generate a skeleton [7, 35] over a modified version of the map in which
unknown space is marked as free yet where frontiers are masked as obstacles
except for a single point near their center. We eliminate the skeleton outside
known space and add nodes at all intersections and skeleton endpoints and
finally use the skeleton to define the edges between them. We additionally add
nodes corresponding to each subgoal and connect each new node to its nearest
structural neighbor in the graph generated from the skeletonization process.
Finally, we add a _goal node_ at the location of the goal that has an edge
connection to every other node; this _global node_ [21] allows for the
propagation of information across the entire environment.
Figure 4: Graph representations of the environment for our graph neural net
are computed from the partial map. We use an image skeleton [7] to generate a
graph from the partial occupancy grid. See Sec. VI-A for details.
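The final step above, adding a goal node with an edge to every other node, can be sketched as follows. This is a simplified stand-in (plain dictionaries, and straight-line rather than geodesic distances):

```python
import math

def add_goal_node(nodes, edges, goal_pos):
    """Append a global 'goal' node connected to every other node.

    nodes -- dict: node_id -> (x, y) position
    edges -- dict: (u, v)  -> edge feature (distance between u and v)
    Edge features here use straight-line distance as a stand-in for the
    geodesic distance described in the text.
    """
    nodes = dict(nodes)   # copy so the inputs are left untouched
    edges = dict(edges)
    gx, gy = goal_pos
    for nid, (x, y) in list(nodes.items()):
        edges[(nid, 'goal')] = math.hypot(x - gx, y - gy)
    nodes['goal'] = goal_pos
    return nodes, edges
```

Because the goal node touches every node, a single message-passing step through it can relay information across the entire partial map.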
Neural Network Input Features: Structure alone is often insufficient to inform
good predictions of unseen space. As such, we seek to not only compute a
topometric graph of the environment, but also associate semantic information
with each node. Each graph node is given a local observation—a _node feature_
—from which the subgoal properties ($P_{S}$, $R_{S}$, and $R_{E}$ in Eq. (2))
will be estimated via the graph neural network. Node features
are 6-element vectors: (i) a 3-element one-hot semantic class (or color) at
the location of the node, (ii) the number of neighbors of that node, (iii) a
binary indicator of whether or not the node is a subgoal, and (iv) a binary
indicator of whether the node is the goal node. We additionally include a
single edge feature, associated with each edge in the graph: the geodesic
distance between the nodes it connects. Owing to the presence of a goal node
connected to every other node, the edge features provide each node its
distance to the goal. To ensure a fair comparison with the LSP-Local planner,
our learned baseline that does not consider edge information, the node
features for LSP-Local are augmented to include the geodesic distance to the
goal. Conditioned upon correctly building the map, these inputs are enough to
ensure safety during navigation.
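The 6-element node feature vector described above can be assembled as in the sketch below; the semantic class names are illustrative placeholders, not necessarily those used by the authors:

```python
def node_features(semantic_class, num_neighbors, is_subgoal, is_goal,
                  classes=('red', 'blue', 'gray')):
    """Build the 6-element node feature vector described in the text:
    a 3-element one-hot semantic class, the node's neighbor count, and
    binary subgoal / goal-node indicators. The class names are
    illustrative placeholders.
    """
    one_hot = [1.0 if semantic_class == c else 0.0 for c in classes]
    return one_hot + [float(num_neighbors),
                      float(is_subgoal),
                      float(is_goal)]
```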
### VI-B Graph Neural Network Structure and Training
We use the PyTorch [36] neural network framework and Torch Geometric [37] to
define and train our graph neural network. The neural network begins with 3
locally-fully-connected layers, which are fully-connected layers that
process the features for each node in isolation, without considering the
edges or passing information to neighbors; all three have a hidden layer
dimension of 8. Next, the network has 4 GATv2Conv [38] layers, each with
hidden layer dimension of 8. Finally, a locally-fully-connected layer takes in
the 8-dimensional node features as input and produces a three-dimensional
output: a logit corresponding to $P_{S}$ and the two cost terms $R_{S}$ and
$R_{E}$. For the LSP-Local learned-baseline planner, we replace the GATv2Conv
graph neural network layers with locally-fully-connected layers, eliminating
sharing of information between nodes and thus its ability to use non-locally-
available information to make predictions about unseen space.
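To see how stacking such layers lets information travel beyond a node's immediate neighborhood, consider the toy mean-aggregation message-passing round below. This is only an illustration of multi-hop propagation, not the attention-based GATv2Conv layer the network actually uses:

```python
def message_passing_round(features, adjacency):
    """One round of mean-aggregation message passing: each node's new
    feature is the average of its own feature and its neighbors'.

    features  -- dict: node id -> scalar feature
    adjacency -- dict: node id -> list of neighbor ids
    """
    new = {}
    for node, feat in features.items():
        vals = [feat] + [features[n] for n in adjacency[node]]
        new[node] = sum(vals) / len(vals)
    return new
```

After $k$ rounds, a node's feature depends on observations up to $k$ hops away, which is precisely what allows subgoal nodes to exploit non-local information.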
Loss Function: Our loss function matches the original LSP approach of Stein et
al. [5] adapted for our graph input data. For each subgoal node, we accumulate
error according to a weighted cross-entropy loss (a classification objective)
for $P_{S}$ and an L1-loss (a regression objective) for $R_{S}$ and $R_{E}$.
Since only the properties of the subgoal nodes are needed, we mask the loss
for non-subgoal nodes and only consider the subgoal nodes’ contribution to the
loss.
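A minimal sketch of this masked loss, written in plain Python for clarity rather than as the actual PyTorch training code (the `pos_weight` class weighting is an assumed detail of the weighted cross-entropy):

```python
import math

def subgoal_loss(preds, labels, subgoal_mask, pos_weight=1.0):
    """Masked training loss: weighted binary cross-entropy on the P_S
    logit plus L1 regression losses on R_S and R_E, averaged over
    subgoal nodes only (non-subgoal nodes are masked out).

    preds[i]  -- (ps_logit, rs, re) predicted for node i
    labels[i] -- (ps in {0, 1}, rs_target, re_target) for node i
    """
    total, count = 0.0, 0
    for (logit, rs, re), (y, rs_t, re_t), is_sg in zip(preds, labels,
                                                       subgoal_mask):
        if not is_sg:
            continue  # only subgoal nodes contribute to the loss
        p = 1.0 / (1.0 + math.exp(-logit))  # sigmoid of the P_S logit
        bce = -(pos_weight * y * math.log(p) + (1 - y) * math.log(1 - p))
        total += bce + abs(rs - rs_t) + abs(re - re_t)
        count += 1
    return total / max(count, 1)
```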
Training Parameters: We train a separate network (with identical parameters)
for each environment. Training proceeds for 50k steps. The learning rate
begins at $10^{-3}$ and decays by a factor of 0.6 every 10k steps.
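The stated step-decay schedule corresponds to:

```python
def learning_rate(step, base=1e-3, decay=0.6, every=10_000):
    """Step-decay schedule from the text: start at 1e-3 and multiply
    by 0.6 every 10k training steps."""
    return base * decay ** (step // every)
```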
### VI-C Generating Training Data
To train our graph neural network, we require training data collected via
offline navigation trials from which we can learn to estimate the subgoal
properties ($P_{S}$, $R_{S}$, and $R_{E}$) for each subgoal node in the graph.
During an offline training phase, we conduct trials in which the robot
navigates from start to goal and generates labeled data at each time step.
Training data consists of environment graphs $G$—with input features
consistent with our discussion in Sec. VI-A—and labels associated with each
subgoal node.
To compute the labels for our training data, we use the underlying known map
to determine whether or not a path to the goal exists through a subgoal. Using
this information, we record a label for each subgoal that corresponds to a
sample of the probability of success $P_{S}$ and from which we can learn to
estimate $P_{S}$ using cross-entropy loss. Labels for the other subgoal
properties are computed similarly: labels for the success cost $R_{S}$
correspond to the travel distance through unknown space to reach the goal, for
when the goal can be reached, and the exploration cost $R_{E}$ is a heuristic
cost corresponding to how long it will take a robot to realize a region is a
dead end, approximated as the round-trip travel time to reach the farthest
reachable point in unseen space beyond the chosen frontier. This data and
collection process mirrors that of LSP [5]; readers are referred to their
paper for additional details.
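As a simplified stand-in for the $P_S$ label computation, one can test path existence with a breadth-first search over the underlying known occupancy grid; the grid encoding below is a hypothetical simplification of the full pipeline:

```python
from collections import deque

def ps_label(grid, subgoal, goal):
    """Binary P_S training label: 1 if a path from the subgoal to the
    goal exists through the underlying known map, else 0.

    grid    -- 2D list of bools, True for free cells
    subgoal -- (row, col) cell of the subgoal's frontier point
    goal    -- (row, col) cell of the goal
    """
    rows, cols = len(grid), len(grid[0])
    seen, frontier = {subgoal}, deque([subgoal])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == goal:
            return 1
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and grid[nr][nc]
                    and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append((nr, nc))
    return 0
```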
We repeat the data collection process for each step over hundreds of trials
for each training environment. So as to generate more diverse data, we switch
between the known-space planner and an optimistic (non-learned) planner to
guide navigation during data generation. The details of each environment can
be found in Sec. VII.
## VII Experimental Results
We conduct simulated experiments in three environments—our _J-Intersection_
(Sec. VII-A), _Parallel Hallway_ (Sec. VII-B), and _University Building_ (Sec.
VII-C)—in which a robot must navigate to a point goal in unseen space. For
each trial, we evaluate performance of 4 planners:

- _Non-Learned Baseline_ : optimistically assumes the unseen space to be free
and plans via grid-based A* search.
- _LSP-Local_ (learned baseline): plans via Eq. (2), estimating subgoal
properties via only local features, as in [5].
- _LSP-GNN_ (ours): plans via Eq. (2), yet uses our graph neural network
learning backend to estimate subgoal properties using both local and
non-local features.
- _Fully-Known Planner_ : uses the fully-known map to navigate; a lower bound
on cost.

For each planner, we compute average navigation cost across many (at least
100) random maps from each environment.
TABLE I: Avg. Cost over 100 Trials in the J-Intersection Environment

Planner | Avg. Cost (grid cell units)
---|---
Non-Learned Baseline | $303.03$
LSP-Local (learned baseline) | $323.46$
LSP-GNN (ours) | 204.85
Fully-Known Planner | $204.85$
### VII-A J-Intersection Environment
Figure 5: Planned trajectories of the benchmarked planners in a
J-Intersection environment where the goal is on the right. The left column
shows the optimal trajectory (planned using the underlying known map). The
middle column shows the identical trajectory of both the non-learned baseline
and LSP-Local, which make a systematic choice. The right column shows the
trajectory planned by LSP-GNN, which is similar to the optimal one.
We first show results in the J-Intersection environment, described in Sec. V
to motivate the importance of non-local information for good performance for
navigation under uncertainty. In this environment, the robot must choose where
to travel at a fork in the road, yet non-locally observable information is
needed to reliably make the correct choice—a blue-colored starting region
indicates that the goal can be reached by turning towards the blue hallway at
the intersection, and the same for the red-colored regions. We randomly mirror
the environment so that the robot cannot learn a systematic policy that
reaches the goal quickly without understanding this relationship.
We conduct 100 trials for each planner in this environment to evaluate their
performance and show the average cost of each planning strategy in Table I.
Across all trials, our proposed LSP-GNN planner _always_ correctly decides
where to go at the intersection and achieves near-perfect performance. By
contrast, both the LSP-Local and Non-Learned Baseline planners lack the
knowledge to determine which way is correct, resulting in poor performance in
roughly half of the trials. We highlight two example trials in Fig. 5. We do
not report prediction accuracy empirically, because prediction accuracy does
not directly reflect the gain in navigation performance.
### VII-B The Parallel Hallway Environment
Figure 6: Two sample maps from our procedurally-generated Parallel Hallway
environment. A robot is tasked to navigate from start to goal in these maps
without having access to the underlying map. The left image shows a sample map
where the red rooms connect the hallways and the right image shows where the
blue rooms connect the hallways.
Our _Parallel Hallway_ environment (Fig. 6) consists of parallel hallways
connected by rooms. We procedurally generate maps in this environment with
three hallways and two room types: (i) _dead-end_ rooms and (ii) _passage_
rooms that provide connections between neighboring parallel hallways. Only one
passage room exists between a pair of hallways, and so the robot must identify
this room if it is to travel to another hallway. Environments are generated
such that the dead-end rooms all have the same color (red or blue) distinct
from the color of the passage rooms, which are thus blue or red, respectively.
We construct the environment so that relational information can be learned:
for example, recognizing that if a room of one color is explored and found to
be a dead end, then rooms of the other color serve as pass-through rooms. If
the colors were entirely random, there would be no way to make predictions
about the unseen space. Both room types contain obstructions and are otherwise
identical, so that it is not possible to tell whether or not a room will
connect to a parallel hallway without trial-and-error or by utilizing semantic
color information from elsewhere in the map. Rooms are placed far enough apart
that the robot cannot determine from the local observations if a room will
lead to the next hallway or will be a dead end. The start and goal locations
are placed in separate hallways, so as to force the robot to understand its
surroundings to reach the goal quickly. Thus, to navigate well in this
challenging procedurally-generated environment, the robot must first explore,
trying nearby rooms to determine which color belongs to which room type, and
then retain this information to inform navigation through the rest of the
environment.
TABLE II: Avg. Cost over 500 Trials in the Parallel Hallway Environment

Planner | Avg. Cost (grid cell units)
---|---
Non-Learned Baseline | $205.93$
LSP-Local (learned baseline) | $236.47$
LSP-GNN (ours) | 141.37
Fully-Known Planner | $108.37$
Figure 7: Parallel Hallway Results: average cost over 500 trials decreases
using LSP-GNN. Our learning-informed planner outperforms both the non-learned
baseline (left) and the LSP-Local (right) planners.
We train the simulated robot on data from 2,000 distinct procedurally
generated maps and evaluate in a separate set of 500 distinct procedurally
generated maps. We show the average performance of each planning strategy in
Table II and include scatterplots of the relative performance of different
planners for each trial in Fig. 7. The robot planning with our LSP-GNN
approach is able to utilize non-local local information to improve its
predictions about how best to reach the goal, achieving a 31.3% improvement in
average cost versus the optimistic Non-Learned Baseline planner and a 40.2%
improvement over the LSP-Local planner. In addition, our approach is
_reliable_ : owing to the LSP planning abstraction, our robot is able to
successfully reach the goal in all maps.
We highlight one trial in Fig. 8, in which the robot is tasked to navigate
from the top hallway to the bottom hallway, which contains the goal. After a
brief period of trial-and-error exploration in the first (top) hallway, the
robot discovers the passage to the neighboring hall and uses the knowledge of
the semantic color to quickly locate the passage to the next hallway and reach
the goal. By contrast, the Non-Learned Baseline optimistically assumes unseen
space to be free and enters every room in the direction of the goal. The LSP-
Local planner makes predictions using only local information and, unable to
use important navigation-relevant information, cannot determine how to reach
the goal; its poor predictions result in frequent turning-back behavior as it
seeks alternate routes to the goal, reducing performance.
Figure 8: Navigation trajectories of the tested planners in one of the testing
maps from the parallel hallway environment. Using non-local information
enables LSP-GNN to perform better than both the learned (LSP-Local) and non-
learned (Dijkstra) baselines.
### VII-C University Building Floorplans
Figure 9: Three large-scale training maps from our university floorplan
environment, each generated from a real-world floor plan. The inset in the
center map shows an instance of a graph (as used to define our graph neural
network) for a partial map during a navigation trial. Plot axes are in units
of meters.
Finally, we evaluate in large-scale maps generated from real-world floorplans
of buildings from the Massachusetts Institute of Technology, including
buildings of over 100 meters in extent along either side; see Fig. 9 for
examples. We generate data from 2,000 trials across 56 training floorplans and
evaluate in 250 trials from 9 held-out test floorplans, each augmented by
procedurally generated clutter to add furniture-like obstacles to rooms. In
addition to occupancy information, _rooms_ in the map have a distinct semantic
class from _hallways_ (and other large or accessible spaces); this semantic
information is provided as input node features to the neural networks to
inform their predictions.
We show the average performance of each planning strategy in Table III and
include scatterplots of the relative performance of different planners for
each trial in Fig. 10. The robot planning with our LSP-GNN approach achieves
improvements in average cost of 9.3% versus the optimistic Non-Learned
Baseline planner and of 14.9% over the LSP-Local Learned Baseline
planner. Unlike the LSP-Local planner, which does not have enough information
to make good predictions about unseen space, our LSP-GNN approach can make use
of non-local information to inform its predictions and thus performs well
despite the complexity inherent in these large-scale testing environments.
TABLE III: Avg. Cost over 250 Trials in the University Building Floorplans

Planner | Avg. Cost (meters)
---|---
Non-Learned Baseline | $44.98$
LSP-Local (learned baseline) | $47.93$
LSP-GNN (ours) | 40.80
Fully-Known Planner | $31.77$
Figure 10: University Building Floorplan Results: average cost (meters) over
250 trials decreases using LSP-GNN. Our learning-informed planner outperforms
both the non-learned baseline (left) and the LSP-Local (right) planners.
Figure 11: Navigation trajectories of all tested planners in one of the large-
scale testing maps from the university building environment. LSP-GNN performs
better than both the learned (LSP-Local) and non-learned (Dijkstra) baselines,
deviating only a few times from the hallway to reach the faraway goal. Figure
12: Two comparisons between the navigation trajectories of our LSP-GNN and
LSP-Local. LSP-GNN exhibits the capacity to recover more quickly than
LSP-Local when both planners cannot immediately find the correct path.
Fig. 11 shows a typical navigation example in one of our test environments. In
this scenario, the shortest possible trajectory involves knowing to follow
hallways until near to the goal. Both learned planners generally exhibit
hallway-following behavior—often useful in building-like environments such as
these—and improve upon the non-learned (optimistic) baseline. However, our
LSP-GNN planner, able to make use of non-local information, can more reliably
determine which is the more productive route and more quickly reaches the
faraway goal. Fig. 12 shows two additional examples that highlight the
improvements of our LSP-GNN planner made possible by non-locally-available
information. In Fig. 12A, we highlight an example in which both learned
planners cannot immediately find the correct path, yet LSP-GNN is able to
improve its predictions about which route is most likely to lead to the unseen
goal and recover more quickly than LSP-Local does. Fig. 12B shows a more extreme
example, in which the LSP-Local planner fails to quickly turn back to seek a
promising alternate route immediately identified by LSP-GNN.
## VIII Conclusion and Future Work
We present a model-based planning approach that uses a graph neural
network (GNN) to estimate the goodness of goal-directed high-level actions from
both local and non-local information, improving navigation under uncertainty.
The GNN consumes a graph representation of the partial map and predicts the
goodness of potential routes to the goal, allowing our planner to make informed
decisions about how to reach the goal more quickly. We demonstrate improved
performance in two simulated environments in which non-local information is
required to plan well, showing the efficacy of our approach.
In future work, we envision passing more complex sensory input to the robot,
allowing it to estimate the goodness of its actions using information
collected from image sensors or semantically-segmented images.
# VIOLET : End-to-End Video-Language Transformers with
Masked Visual-token Modeling
Tsu-Jui Fu†, Linjie Li‡, Zhe Gan‡, Kevin Lin‡, William Yang Wang†, Lijuan
Wang‡, Zicheng Liu‡
†UC Santa Barbara ‡Microsoft
{tsu-juifu<EMAIL_ADDRESS>
{lindsey.li, zhe.gan, keli, lijuanw<EMAIL_ADDRESS>
###### Abstract
A great challenge in video-language (VidL) modeling lies in the disconnection
between fixed video representations extracted from image/video understanding
models and downstream VidL data. Recent studies try to mitigate this
disconnection via end-to-end training. To make it computationally feasible,
prior works tend to “imagify” video inputs, i.e., a handful of sparsely
sampled frames are fed into a 2D CNN, followed by a simple mean-pooling or
concatenation to obtain the overall video representations. Although achieving
promising results, such simple approaches may lose temporal information that
is essential for performing downstream VidL tasks. In this work, we present
VIOLET, a fully end-to-end VIdeO-LanguagE Transformer, which adopts a video
transformer to explicitly model the temporal dynamics of video inputs.
Further, unlike previous studies that found pre-training tasks on video inputs
(e.g., masked frame modeling) not very effective, we design a new pre-training
task, Masked Visual-token Modeling (MVM), for better video modeling.
Specifically, the original video frame patches are “tokenized” into discrete
visual tokens, and the goal is to recover the original visual tokens based on
the masked patches. Comprehensive analysis demonstrates the effectiveness of
both explicit temporal modeling via video transformer and MVM. As a result,
VIOLET achieves new state-of-the-art performance on 5 video question answering
tasks and 4 text-to-video retrieval tasks.111Code is available at
https://github.com/tsujuifu/pytorch_violet
## 1 Introduction
Humans are born to perceive this world from multiple modalities such as
vision, sound, and touch. Video, containing multiple modalities in nature, has
been used as an epitome to test how AI systems perceive. Video-language (VidL)
research aims at extending this ability to convey perception via language.
Popular VidL tasks have been introduced, such as text-to-video retrieval [1, 2, 3,
4, 5], video question answering [6, 7, 8, 9], text-based video moment
retrieval [10, 11, 2, 4], and video captioning [12, 13, 1, 3].
Figure 1: End-to-end VIdeO-LanguagE Transformer (VIOLET). VIOLET performs
large-scale visual-text pre-training and can be applied to various video
question answering and text-to-video retrieval tasks.
Previous works [6, 14, 15, 16, 5, 17] attempt cross-modal fusion over dense
video features and text features to tackle VidL tasks, but suffer from domain
disconnection due to offline feature extraction [18, 16]. To address this
issue, ClipBERT [18] proposes to “imagify” the dense video frame inputs.
First, it adopts a sparse sampling strategy to employ only a handful of frames
from the entire video for efficient end-to-end training. Second, the overall
video representations are obtained through mean-pooling a sequence of frame
features, each computed individually by a 2D convolutional network. Although it
obtains promising results, this naive mean pooling over individual frame
features discards the crucial temporal information in video. To improve
temporal modeling, recent works [19, 20] concatenate all sparse-sampled frame
features in chronological order, and directly enforce VidL learning along with
the text inputs. However, these methods still treat video frames as static
images, and rely heavily on cross-modal fusion module to capture both temporal
dynamics in videos and the alignment between visual and textual elements
simultaneously.
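The two "imagify" strategies contrasted above can be sketched over per-frame feature vectors; the shapes and helper names here are illustrative, not the actual models:

```python
import numpy as np

def mean_pool_frames(frame_feats):
    """ClipBERT-style pooling: average T per-frame features into one
    vector, discarding the temporal order of frames."""
    return frame_feats.mean(axis=0)

def concat_frames(frame_feats):
    """Chronological concatenation keeps the sequence, but each frame is
    still encoded independently as a static image."""
    return frame_feats.reshape(-1)

frames = np.random.randn(4, 768)   # T=4 sparsely sampled frames, d=768
pooled = mean_pool_frames(frames)  # shape (768,): temporal order is lost
stacked = concat_frames(frames)    # shape (3072,): order kept, frames independent
```

Neither operation lets information flow between frames before cross-modal fusion, which is the gap VIOLET's video transformer addresses.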
We propose fully end-to-end VIdeO-LanguagE Transformer (VIOLET) to enhance
video modeling for better VidL modeling from two perspectives: ($i$) model
architecture, and ($ii$) pre-training task design. In terms of _model
architecture_ , instead of naive mean pooling or concatenation over a sequence
of individual frame features, VIOLET contains Video Swin Transformer that
models video temporal explicitly for VidL learning [21, 22]. Since the self-
attention over spatial-temporal locality allows modeling variable sequence
lengths, our video transformer support flexible VidL learning from both videos
and static images.
In terms of _pre-training tasks_ , though the direct adoption of Masked
Language Modeling [23] has proven effective in pre-training vision-language
models, similar attempts at masked modeling on vision inputs have been less
successful. For example, Masked Region Modeling (MRM) [24] and Masked Frame
Modeling (MFM) [5] aim to recover masked image regions or video frames.
Despite the different variants of MRM/MFM that model object categories or
distilled region/frame features, these tasks suffer from imperfect patch labels
and excessive feature dimensions, rendering unsatisfactory performance [24, 5].
Recent VidL works [18, 19, 25] even completely discard such pre-training tasks
due to limited performance improvements.
To promote better video representations for VidL learning, we present a new
pre-training task: Masked Visual-token Modeling (MVM), as shown in the left of
Fig. 1. By using the pre-trained discrete VAE [26] from DALL-E [27], we
“tokenize” the video frames into discrete visual tokens, which can be used to
reconstruct the original video frames. During pre-training, we mask out a
proportion of the video input along both spatial and temporal dimensions, and
the model learns to recover the discrete visual tokens of these masked
patches. MVM improves over previous MRM/MFM in two ways: ($i$) MVM predicts
over a discrete space, which avoids falling ill with the similar training
issues of excessive feature dimensions as in [24, 5]; ($ii$) MVM is based on
latent visual tokens obtained from a self-reconstruction training procedure,
instead of distilling from a well-supervised visual backbone. Our
comprehensive comparison shows that MVM enhances the model’s capability to
better understand video scenes and in turn benefits downstream VidL tasks.
In summary, our contributions are four-fold. ($i$) We present VIOLET, a fully
end-to-end transformer to model the spatial-temporal dynamics in videos for
VidL learning. ($ii$) We propose a new pre-training task, Masked Visual-token
Modeling, which recovers the masked video frame patches into a discrete visual
token space. ($iii$) VIOLET achieves state-of-the-art results on 4 text-to-
video retrieval and 5 out of 8 video question answering tasks. ($iv$)
Comprehensive ablation studies demonstrate the necessity of temporal video
modeling and the effectiveness of MVM across different VidL pre-training
settings.
Figure 2: Overview of the proposed end-to-end VIdeO-LanguagE Transformer
(VIOLET), with Video Swin Transformer, Language Embedder, and Cross-modal
Transformer. VIOLET adopts Discrete VAE to extract discrete visual tokens to
perform Masked Visual-token Modeling along with Visual-Text Matching and
Masked Language Modeling during large-scale visual-text pre-training.
## 2 Related Work
Video-Language Understanding. Joint video-language (VidL) understanding [28,
29, 30, 14, 31, 32] aims at interpreting the physical world via both vision
and text perception. Researchers have explored such capability on VidL tasks
including text-based video retrieval [1, 2, 3, 4, 5], video question answering
[6, 7, 8, 9], moment retrieval [10, 11, 2, 4], and video captioning [12, 13,
1, 3]. Prior arts before the large-scale pre-training era [33, 34, 35, 36, 14,
9] leverage offline extracted video features [37, 38, 39, 40, 41, 42, 43, 44,
45]. Later on, VidL pre-trained models [46, 15, 5, 16] built on the above pre-
extracted features have shown promising results. To enhance the performance,
there have been parallel interests in bringing in more modalities from raw
video inputs [31, 47, 48] and end-to-end training till the raw pixels [49, 18,
19, 25], both aiming to elevate video representations for VidL modeling.
Our work further explores the second direction for general VidL understanding,
in contrast to [25] focusing on text-to-video retrieval only. Instead of
encoding each video frame individually as static image and applying simple
mean-pooling or concatenation along the temporal dimension [18, 19], we
demonstrate the necessity of temporal modeling by our video transformer over
the input video frames, even when they are sparsely sampled [18].
Masked Visual Modeling. Aligned with the success of transformer-based [50]
language pre-training models [51, 52, 53, 54, 55], image-text pre-training
[56, 57, 58, 59, 60, 61, 62, 63, 64, 65] and video-text pre-training [66, 67,
68] have shown promising results on diverse vision-language (VL) tasks.
Popular VL pre-training tasks include Visual-Text Matching and Masked Language
Modeling, which are directly adapted from language pre-training [23]. Similar
masked modeling on visual inputs [24, 5] has also been introduced to VL pre-
training, but are not as useful. We propose Masked Visual-token Modeling
(MVM), adopting the latent codes of discrete VAE [27, 69, 26] as the
reconstruction target for masked patches, which eases the auto-encoding
prediction and can lead to a more significant improvement.
Among the literature, BEiT [70] and VIMPAC [71] are two relevant studies of
masked visual modeling for image classification [42] and action recognition
[37]. Specifically, BEiT [70] proposes a BERT-like pre-training strategy to
recover the original visual tokens from some masked image patches. Our MVM
takes inspiration from BEiT, but extends to more complex video inputs with an
additional temporal dimension for VidL modeling. To prevent the model from
taking shortcuts in recovering visual tokens from its spatial or temporal
neighbors, we further introduce a combination of blockwise masking and
attended masking. VIMPAC [71] takes a step further to completely remove the
raw pixel inputs from the training procedure. It employs visual tokens as the
discrete representation of video inputs and applies a mask-then-predict pre-
training task. The removal of raw pixel inputs renders a weaker baseline on
popular action recognition tasks. In our work, we leverage visual tokens as
prediction targets for MVM, instead of replacing the raw video frame patches.
## 3 VIOLET
### 3.1 Model Architecture
Fig. 2 illustrates the overall architecture of our end-to-end video-language
transformer (VIOLET). VIOLET contains 3 components: Video Swin Transformer
(VT), Language Embedder (LE), and Cross-modal Transformer (CT). VIOLET takes
video $\mathcal{V}$ and sentence $\mathcal{X}$ as inputs. Sparse-sampled
frames $\\{f_{1},f_{2},...\\}$ from $\mathcal{V}$ are first processed by VT to
compute video features $v=\\{v_{1},v_{2},...\\}$. LE extracts the word
embeddings $w=\\{w_{1},w_{2},...\\}$ for each word token
$\\{x_{1},x_{2},...\\}$ in $\mathcal{X}$. Then CT performs cross-modal fusion
on top of $v$ and $w$ to produce joint video-language (VidL) representations
$h$ for pre-training and downstream finetuning. We explain each component in
detail below.
Video Swin Transformer (VT). Instead of mean pooling or concatenating
individual frame features, we adopt Video Swin Transformer [22] (VT) to model
$T$ sparse-sampled frames $\\{f_{t}\\}_{t=1}^{T}$ along both spatial and
temporal dimensions as video features $\\{v_{t}\\}_{t=1}^{T}$. VT first splits
each frame into non-overlapping $H\times W$ patches [72] and adopts a linear
projection layer to obtain the preliminary video patch embeddings
$u\in\mathbb{R}^{T\times H\times W\times d}$:
$u_{t}=\text{LinearProj}(f_{t}).$ (1)
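Eq. (1) corresponds to splitting a frame into non-overlapping patches and projecting each flattened patch to $d$ dimensions; a minimal numpy sketch, where the patch size, feature dimension, and helper name are illustrative assumptions:

```python
import numpy as np

def patch_embed(frame, patch, W_proj):
    """Split one frame (Hp*patch, Wp*patch, C) into non-overlapping patches
    and linearly project each flattened patch: u_t = LinearProj(f_t)."""
    Hp, Wp = frame.shape[0] // patch, frame.shape[1] // patch
    C = frame.shape[2]
    # (Hp, patch, Wp, patch, C) -> (Hp, Wp, patch, patch, C)
    patches = frame.reshape(Hp, patch, Wp, patch, C).transpose(0, 2, 1, 3, 4)
    flat = patches.reshape(Hp, Wp, patch * patch * C)
    return flat @ W_proj  # (Hp, Wp, d)

frame = np.random.randn(224, 224, 3)
W_proj = np.random.randn(32 * 32 * 3, 128)  # patch=32, d=128 (illustrative)
u_t = patch_embed(frame, 32, W_proj)        # shape (7, 7, 128)
```

The sketch covers only the projection step; the 3D shifted-window attention described next operates on top of these patch embeddings.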
The multi-layer 3D-shifted window [22] then considers different levels of
spatial-temporal attention over these video patch embeddings. We add learnable
positional embedding $p^{\text{v}}$ to $u$, including spatial
$p^{\text{s}}\in\mathbb{R}^{H\times W\times d}$ and temporal ordering
$p^{\text{t}}\in\mathbb{R}^{T\times d}$, and extract the video features $v$:
$\begin{split}p^{\text{v}}_{t}&=p^{\text{s}}+p^{\text{t}}_{t},\\\
v&=\text{VT}(\\{u_{t}+p_{t}^{\text{v}}\\}_{t=1}^{T}).\end{split}$ (2)
All patches from the $t^{\text{th}}$ frame share the same $p^{\text{t}}_{t}$,
and all patches with the same spatial position are given the same
$p^{\text{s}}$. In particular, each 3D window has size
$T^{\prime}\times M\times M$ and considers temporal context across $T^{\prime}$
$T^{\prime}\times M\times M$ and considers video temporal across $T^{\prime}$
consecutive frames. By adopting 3D windows upon blocks of video patches, VT
can model image spatial and video temporal structure simultaneously through the
self-attention computation. Note that we make a slight modification to
remove the temporal down-sampling from the original Video Swin Transformer and
ensure the same temporal dimension as the input video for Masked Visual-token
Modeling during pre-training (Sec. 3.2).
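The positional-embedding construction in Eq. (2) adds a spatial term shared across frames to a temporal term shared across the patches of a frame; with numpy broadcasting (shapes are illustrative):

```python
import numpy as np

T, H, W, d = 4, 7, 7, 128
p_s = np.random.randn(H, W, d)  # spatial term, shared across all frames
p_t = np.random.randn(T, d)     # temporal term, shared across a frame's patches

# p^v_t = p^s + p^t_t, broadcast to every patch of every frame.
p_v = p_s[None, :, :, :] + p_t[:, None, None, :]  # shape (T, H, W, d)
```

The broadcasting directly encodes the sharing described above: slice `p_v[t]` differs between frames only by `p_t[t]`, and patches at the same spatial position share `p_s`.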
VT enforces spatial-temporal modeling via 3D-shifted window to compute the
initial video representations for VidL modeling. We demonstrate the advantages
of VT over simple mean-pooling or concatenation of “imagified” frame
representations under different VidL pre-training settings in Sec. 4.3. In
addition, as VT encodes video frame patches through a fully self-attended
computation, it can support variable-length visual inputs. This video
encoding enables VIOLET to also process static images (i.e., $T$ = 1). We discuss
the flexibility of pre-training VIOLET on both large-scale image-text data and
video-text data in Sec. 5.
Language Embedder (LE). For a language input $\mathcal{X}$, we follow
WordPiece [73] and tokenize it into word tokens $\\{x_{i}\\}_{i=1}^{L}$, where
$L$ is the number of tokens in $\mathcal{X}$. LE embeds the discrete word
token $x_{i}$ into high-dimensional word representation
$w_{i}\in\mathbb{R}^{d}$:
$\\{w_{i}\\}_{i=1}^{L}=\text{LE}(\\{x_{i}\\}_{i=1}^{L}).$ (3)
Cross-modal Transformer (CT). Given video features $v$ and word features $w$,
CT performs cross-modal fusion over all $\\{v_{i}\\}_{i=1}^{T}$ and
$\\{w_{i}\\}_{i=1}^{L}$ for joint VidL learning. We add different positional
embeddings $p^{\text{v}}$ or $p^{\text{x}}$ to video features $v$ or word
features $w$, to incorporate sequence ordering and distinguish between the two
modalities. In particular, we reuse $p^{\text{v}}$ from VT, containing both
spatial position and temporal ordering information. We concatenate the video
and text representations after position embedding as the input sequence to CT.
In addition, a special [CLS] token is added to compute the global VidL
representation, used in pre-training and downstream finetuning. The joint VidL
features $h\in\mathbb{R}^{(T+1+L)\times d}$ are computed as:
$\begin{split}h&=\text{CT}([v+p^{\text{v}},\texttt{[CLS]},w+p^{\text{x}}]),\\\
[h^{\text{v}},&~{}h^{\text{c}},h^{\text{x}}]=h.\end{split}$ (4)
### 3.2 Pre-training Tasks
To benefit from large-scale data [19, 25, 74], we incorporate three pre-
training tasks, including our proposed Masked Visual-token Modeling. Masked
Language Modeling [23, 24, 5] predicts the masked word tokens to improve
language reasoning with the aid of visual perception. Masked Visual-token
Modeling recovers the masked video patches to enhance the video scene
understanding. Visual-Text Matching [24, 18, 25] learns the alignments between
video and text modality, improving the cross-modal fusion.
Masked Language Modeling (MLM). In MLM, we randomly mask out some word tokens
with a probability of 15%.222Following BERT [23], we replace 80% of masked
word tokens as the [MASK] token, 10% as a random token, and 10% as its
original token. The goal is to recover these masked tokens $x$ from the joint
VidL features $h$ modeled by Cross-modal Transformer (CT). Specifically, the
corresponding $h^{\text{x}}$ for these masked tokens are fed into a fully-
connected (FC) layer ($\text{FC}^{\text{MLM}}$) and projected to the discrete
word token space for classification:
$\begin{split}x^{\prime}_{i}&=\text{FC}^{\text{MLM}}(h^{\text{x}}_{i}),\\\
\mathcal{L}_{\text{MLM}}&=-\mathbb{E}~{}[\frac{1}{|\mathcal{M}^{\text{MLM}}|}\sum\nolimits_{i\in\mathcal{M}^{\text{MLM}}}\log
P(x_{i}~{}|~{}x^{\prime}_{i})],\end{split}$ (5)
where $\mathcal{M}^{\text{MLM}}$ denotes the index set of masked word tokens.
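The corruption scheme described in the footnote (mask 15% of tokens; of those, 80% become [MASK], 10% a random token, 10% kept unchanged) can be sketched as follows; the token representation and vocabulary size are illustrative:

```python
import random

MASK_TOKEN = "[MASK]"
VOCAB_SIZE = 30522  # illustrative vocabulary size

def corrupt_for_mlm(tokens, p_mask=0.15, seed=0):
    """BERT-style MLM corruption: return corrupted tokens and the indices
    the model must recover (the set M^MLM)."""
    rng = random.Random(seed)
    out, masked_idx = list(tokens), []
    for i in range(len(tokens)):
        if rng.random() < p_mask:
            masked_idx.append(i)
            r = rng.random()
            if r < 0.8:
                out[i] = MASK_TOKEN                 # 80%: replace with [MASK]
            elif r < 0.9:
                out[i] = rng.randrange(VOCAB_SIZE)  # 10%: random token id
            # else: 10% keep the original token
    return out, masked_idx
```

The loss in Eq. (5) is then the cross-entropy over the word vocabulary, averaged over exactly the returned `masked_idx` positions.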
Visual-Text Matching (VTM). VTM enhances the cross-modal fusion via modeling
the alignments between visual and textual inputs. At each training step, we
randomly replace the corresponding text $\mathcal{X}_{\text{pos}}$ for a given
video $\mathcal{V}$ with the text description $\mathcal{X}_{\text{neg}}$ from
a different video in the same batch. Both the positive pair
$(\mathcal{V},\mathcal{X}_{\text{pos}})$ and negative pair
$(\mathcal{V},\mathcal{X}_{\text{neg}})$ are modeled by CT, and VTM is to tell
them apart from the global VidL representation $h^{\text{c}}$ of the [CLS]
token. In particular, $h^{\text{c}}$ will be processed by a FC layer
($\text{FC}^{\text{VTM}}$) to perform binary classification:
$\begin{split}b_{\text{pos}}&=\text{FC}^{\text{VTM}}(h^{\text{c}}_{\text{pos}}),b_{\text{neg}}=\text{FC}^{\text{VTM}}(h^{\text{c}}_{\text{neg}}),\\\
\mathcal{L}_{\text{VTM}}&=-\mathbb{E}[\log(b_{\text{pos}})+\log(1-b_{\text{neg}})],\end{split}$
(6)
where $h^{\text{c}}_{\text{pos}}$ or $h^{\text{c}}_{\text{neg}}$ is
$h^{\text{c}}$ of positive or negative pairs.
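The in-batch negative pairing and the binary objective of Eq. (6) can be sketched as follows; the scores stand in for the $\text{FC}^{\text{VTM}}$ outputs on the [CLS] feature, with an explicit sigmoid making them probabilities:

```python
import numpy as np

def vtm_loss(score_pos, score_neg):
    """L_VTM = -E[log(b_pos) + log(1 - b_neg)], where b = sigmoid(score)."""
    b_pos = 1.0 / (1.0 + np.exp(-score_pos))
    b_neg = 1.0 / (1.0 + np.exp(-score_neg))
    return -(np.log(b_pos) + np.log(1.0 - b_neg)).mean()

def in_batch_negatives(rng, batch_size):
    """Pair each video with the text of a *different* sample in the batch."""
    assert batch_size > 1
    neg = np.arange(batch_size)
    while np.any(neg == np.arange(batch_size)):  # reshuffle until no self-pair
        rng.shuffle(neg)
    return neg
```

Sampling negatives within the batch avoids an extra data pass: every video already has a mismatched text available among its batch neighbors.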
Masked Visual-token Modeling (MVM). Previous Masked Region Modeling (MRM) [24]
and Masked Frame Modeling (MFM) [5] extend MLM to visual inputs but sometimes
lead to unsatisfactory performance. Different from MRM and MFM, which rely on
distilled visual categories or features from a well-supervised visual backbone
[45, 41], we present Masked Visual-token Modeling (MVM) to perform masked
visual modeling in a self-reconstruction scenario. We consider the discrete
variational autoencoder (dVAE) [26, 27] to quantize video inputs into masked
prediction targets. dVAE is learned to tokenize images into discrete visual
tokens $q$ from a finite vocabulary and then reconstruct the original visual
scene based on $q$, where $q$ should have a one-to-one correspondence with the
input image patches spatially. We first adopt dVAE to tokenize the
$t^{\text{th}}$ video frame $f_{t}$ into $q_{t}$:
$q_{t}=\text{dVAE}(f_{t}).$ (7)
Similar to MLM, we mask out some video patches by replacing the pixel values
with all zeros. MVM aims at recovering the visual tokens $q$ of those masked
video patches $v$ from the corresponding joint VidL features $h^{\text{v}}$.
$h^{\text{v}}$ is fed into a FC layer ($\text{FC}^{\text{MVM}}$) and projected
to the discrete visual token space for classification:
$\displaystyle q^{\prime}_{t,i}$
$\displaystyle=\text{FC}^{\text{MVM}}(h^{\text{v}}_{t,i}),$ (8)
$\displaystyle\mathcal{L}_{\text{MVM}}$
$\displaystyle=-\mathbb{E}~{}[\sum_{t=1}^{T}\frac{1}{|\mathcal{M}^{\text{MVM}}_{t}|}\sum\nolimits_{i\in\mathcal{M}^{\text{MVM}}_{t}}\log
P(q_{t,i}~{}|~{}q^{\prime}_{t,i})],$
where $\mathcal{M}^{\text{MVM}}_{t}$ is the index set of masked video patches
for the $t^{\text{th}}$ frame. Using discrete visual tokens as masked
prediction targets has two main advantages: ($i$) The finite vocabulary size
of these discrete visual tokens eases the learning of MVM, avoiding the
previous difficulty of training MRM/MFM with imperfect patch categories or
excessive feature dimensions; ($ii$) MVM does not require a well-supervised
visual backbone to distill the masking labels. The latent visual tokens can be
learned in a self-supervised way without human annotations.
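At the masked positions, MVM reduces to a cross-entropy over the finite visual-token vocabulary, as in Eq. (8); a minimal numpy sketch for one frame, where the 8192-entry vocabulary matches DALL-E's dVAE codebook and the other shapes are illustrative:

```python
import numpy as np

def mvm_loss(logits, target_tokens, masked_idx):
    """Cross-entropy of predicted visual tokens q' against dVAE tokens q,
    averaged over the masked patches of one frame (the set M^MVM_t)."""
    sel = logits[masked_idx]  # (|M|, vocab) logits from FC^MVM
    log_p = sel - np.log(np.exp(sel).sum(-1, keepdims=True))  # log-softmax
    return -log_p[np.arange(len(masked_idx)), target_tokens[masked_idx]].mean()

VOCAB = 8192                           # dVAE visual-token vocabulary size
logits = np.random.randn(49, VOCAB)    # e.g., 7x7 patches of one frame
targets = np.random.randint(VOCAB, size=49)  # q_t from the dVAE tokenizer
loss = mvm_loss(logits, targets, [3, 10, 25])
```

Because the prediction space is a finite codebook rather than a high-dimensional feature vector, the objective is an ordinary classification loss, which is the training-stability advantage ($i$) above.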
### 3.3 Masking Strategy of MLM and MVM
We introduce a combination of Blockwise Masking and Attended Masking to
amplify the effectiveness of MLM and MVM, as shown in Fig. 3.
Blockwise Masking (BM). Video usually presents analogous visual patterns in
spatial-temporal neighbors (i.e., nearby patches within current frame or
neighboring frames). While these neighbors make the masked video patches easy
to recover, they may lead to spurious success in MVM evaluation. To make MVM
more challenging, we adopt Blockwise Masking [71, 70] that masks blocks of
video patches along spatial-temporal dimension rather than independently
masking randomly sampled patches for each frame. Specifically, we randomly
sample an $(H^{\prime},W^{\prime},T^{\prime})$ as a masking block, where all
$H^{\prime}\times W^{\prime}$ visual patches in the following $T^{\prime}$
consecutive frames will be masked; we repeat this process until $>$15% of
video patches are masked to perform MVM pre-training. The model cannot merely
rely on similar neighboring visual cues but requires actual visual reasoning
to recover a group of missing patterns.
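The sampling loop described above can be sketched as follows; the caps on block height and width are illustrative assumptions:

```python
import numpy as np

def blockwise_mask(T, H, W, ratio=0.15, seed=0):
    """Repeatedly mask random (T', H', W') spatial-temporal blocks of
    patches until more than `ratio` of all T*H*W patches are masked."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((T, H, W), dtype=bool)
    while mask.mean() <= ratio:
        t0, h0, w0 = rng.integers(0, T), rng.integers(0, H), rng.integers(0, W)
        dt = rng.integers(1, T - t0 + 1)          # T' consecutive frames
        dh = rng.integers(1, min(H - h0, 3) + 1)  # H' <= 3 (illustrative cap)
        dw = rng.integers(1, min(W - w0, 3) + 1)  # W' <= 3 (illustrative cap)
        mask[t0:t0 + dt, h0:h0 + dh, w0:w0 + dw] = True
    return mask

mask = blockwise_mask(4, 7, 7)  # >15% of the 4x7x7 patch grid is masked
```

Masking contiguous blocks, rather than independent patches, removes the spatial-temporal neighbors that would otherwise make the recovery trivially easy.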
Figure 3: Masking Strategy of MLM and MVM, including Blockwise Masking (BM)
and Attended Masking (AM).
Attended Masking (AM). The conventional practice is to sample masked visual
patches or textual tokens with the same probability over all visual and
textual inputs. However, the important elements (e.g., visual patches
containing the main object or content words) receive the same weight as the
less relevant elements (e.g., scene background or stop words) in masked
modeling. Attended Masking tries to put more weights on the more important
elements based on the attention weights computed by Cross-modal Transformer
(CT). A similar idea has been explored in [19] for MLM. In this paper, we
extend AM to both visual and textual modalities. We first keep the video-text
inputs intact, feed them into CT to compute the attention weights, to decide
which portions in video and text are more important. We then select the top
15% of most-attended tokens to be masked in both video and text inputs to
perform MVM and MLM.
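A minimal sketch of this selection step, assuming we are given the attention weights from one clean forward pass of CT; how heads and layers are aggregated into a per-token importance score is our assumption, not stated above:

```python
import torch

def attended_mask(attn, video_len, mask_ratio=0.15):
    """Pick the top mask_ratio most-attended positions, separately for the
    video and text segments of the joint sequence. `attn` is assumed to be
    [num_heads, seq_len, seq_len] attention weights; we simply average the
    attention each position receives over heads and queries."""
    importance = attn.mean(dim=0).mean(dim=0)             # [seq_len]
    vid_scores, txt_scores = importance[:video_len], importance[video_len:]
    k_vid = max(1, int(mask_ratio * vid_scores.numel()))
    k_txt = max(1, int(mask_ratio * txt_scores.numel()))
    vid_idx = vid_scores.topk(k_vid).indices              # video patches to mask
    txt_idx = txt_scores.topk(k_txt).indices + video_len  # text tokens to mask
    return vid_idx, txt_idx
```

The returned indices are then masked for MVM (video) and MLM (text) in place of uniform random sampling.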
| Text-to-Video Retrieval
---|---
Method | MSRVTT | DiDeMo | YouCook2 | LSMDC
Models using Pre-extracted Features |
HT100M [16] | 14.9 / 40.2 / 52.8 | - | 08.2 / 24.5 / 35.3 | 07.1 / 19.6 / 27.9
MMT [31] | 26.6 / 57.1 / 67.1 | - | - | 12.9 / 29.9 / 40.1
HERO [5] | 16.8 / 43.4 / 57.7 | - | - | -
AVLnet [47] | 27.1 / 55.6 / 66.6 | - | 33.2 / 61.0 / 71.5 | 17.0 / 38.0 / 48.6
Support-Set [32] | 30.1 / 58.3 / 69.3 | - | - | -
TACo [75] | 28.4 / 57.8 / 71.2 | - | 29.6 / 59.7 / 72.7 | -
VideoCLIP [17] | 30.9 / 55.4 / 66.8 | - | 32.2 / 62.6 / 75.0 | -
Models with End-to-end Training |
ClipBERT [18] | 22.0 / 46.8 / 59.9 | 20.4 / 48.0 / 60.8 | - | -
Frozen [25] | 32.5 / 61.5 / 71.2 | 31.0 / 59.8 / 72.4 | - | 15.0 / 30.8 / 39.8
Clip4Clip [76] | 42.1 / 71.9 / 81.4 | 43.4 / 70.2 / 80.6 | - | 21.6 / 41.8 / 49.8
VIOLET | 34.5 / 63.0 / 73.4 | 32.6 / 62.8 / 74.7 | 35.7 / 66.7 / 78.2 | 16.1 / 36.6 / 41.2
(a)
| Zero-Shot Retrieval
---|---
Method | MSRVTT | DiDeMo
Models using Pre-extracted Features
HT100M [16] | 07.5 / 21.2 / 29.6 | -
MMT [31] | - / 06.9 / - | -
AVLnet [47] | 19.6 / 40.8 / 50.7 | -
Support-Set [32] | 12.7 / 27.5 / 36.2 | -
TACo [75] | 09.8 / 25.0 / 33.4 | -
VideoCLIP [17] | 10.4 / 22.2 / 30.0 | 16.6 / 46.9 / -
Models with End-to-end Training
MIL-NCE [49] | 09.9 / 24.0 / 32.4 | -
VATT [77] | - / - / 29.7 | -
Frozen [25] | 24.7 / 46.2 / 57.2 | 21.1 / 46.0 / 56.2
CLIP [78, 76] | 31.2 / 53.7 / 64.2 | -
VIOLET | 25.9 / 49.5 / 59.7 | 23.5 / 49.8 / 59.8
(b)
Table 1: Comparison with SOTA on text-to-video-retrieval tasks under different
settings: (a) pre-train then finetune and (b) pre-train then zero-shot
evaluation. All results are reported as R@1 / R@5 / R@10. All models perform
visual-text pre-training. Rows highlighted in blue use additional modalities
such as sound and speech besides video frames.
## 4 Experiments
### 4.1 Experimental Setup
Pre-training Datasets. As mentioned in Sec. 3.1, VIOLET is flexible in taking
both video and image as inputs. Hence, we follow [25] to jointly pre-train our
model on image-text and video-text data, which we briefly describe below.
($i$) YT-Temporal-180M (YT-Temporal) [19] contains 6M YouTube videos with
subtitle texts from Automatic Speech Recognition (ASR). Following [19], we
divide a long video into several video segments, with an average length of
9.29 seconds. We treat every 4 consecutive segments with their ASR as a video
clip, leading to 180M video-subtitle pairs. ($ii$) WebVid-2.5M (WebVid) [25]
scrapes 2.5M video-text pairs from the web. Different from YT-Temporal, text
data in WebVid describes the global video semantics. ($iii$)
ConceptualCaptions-3M (CC) [74] consists of 3.3M image-text pairs harvested
from the web. We compare the effects of different pre-training data on
downstream tasks in Sec. 5.
Downstream Tasks. We evaluate VIOLET on both text-to-video retrieval and video
question answering, across 12 downstream benchmarks. For text-to-video
retrieval, we report performance of Recall at K (R@K) on MSRVTT [1], DiDeMo
[79], YouCook2 [13] and LSMDC [3]. For video question answering, we consider 8
datasets in multiple-choice and open-ended settings: TGIF-Action, TGIF-
Transition and TGIF-Frame [6], MSRVTT-MC [80], MSRVTT-QA, MSVD-QA [7], LSMDC-
MC and LSMDC-FiB [81]. Accuracy is used as evaluation metric. More details are
provided in Appendix A.
Implementation Details. We initialize our Video Swin Transformer with
VideoSwin-Base [22], pre-trained on Kinetics-400 [37]. Language Embedder and
Cross-modal Transformer are initialized from pre-trained BERT-Base [59]. We
train VIOLET in an end-to-end manner for both pre-training and downstream
finetuning.
During pre-training, we sparsely sample $T$ = 4 video frames and resize them
into 224x224 to split into patches with $H$ = $W$ = 32. We use pre-trained
DALL-E [27] as our dVAE to generate discrete visual tokens for MVM. For WebVid
[25] and CC [74], we perform VTM+MLM+MVM to pre-train on videos or images with
the globally-aligned alt-text descriptions. We follow [19] to concatenate all
ASR descriptions for each middle frame as text input for YT-Temporal. VTM is
performed for each pair of middle frame and its ASR text to learn the temporal
reasoning over YT-Temporal video clips. Our implementation of VIOLET is based
on PyTorch [82]. We adopt AdamW [83] as the optimizer with an initial learning
rate of 2e-5, betas of (0.9, 0.98), and weight decay of 1e-3 for all pre-
training experiments. VIOLET follows a simple curriculum learning strategy,
where we first pre-train on YT-Temporal with noisy ASR text for 5 epochs and
then on WebVid+CC with alt-text descriptions for another 5 epochs.
For all downstream tasks, we adopt the same video frame size (224x224) and
patch size (32x32) but 5 sparse-sampled frames. Due to various data scales and
domains, we use task-specific learning rates and training epochs based on the
performance of the validation set for each downstream task.
### 4.2 Comparison to Prior Arts
Text-to-Video Retrieval. Table 1(a) summarizes results on
text-to-video retrieval. VIOLET achieves significant gain over existing VidL
pre-trained models across all text-to-video retrieval datasets considered.
Specifically, VIOLET surpasses most previous methods that focus on modeling
multi-modal fusion with pre-extracted video features. Notably, VIOLET is still
competitive even when compared with MMT [31], HERO [5], and AVLnet [47] that
use additional modalities, such as sound and speech besides video frames.
For comparisons to end-to-end pre-trained models, VIOLET outperforms ClipBERT
[18] by $+10\%$ on R@1 on both MSRVTT and DiDeMo, even though VIOLET uses
fewer frames (ours: 5 frames vs. ClipBERT: 16 frames). These results highlight
the deficiency of “imagifying” video representations. When compared with
Frozen [25], designed specifically for text-to-video retrieval tasks, VIOLET
can achieve notable performance improvements with $+2.0\%$, $+1.6\%$ and
$+1.1\%$ on R@1 for MSRVTT, DiDeMo and LSMDC, respectively. We also include
results from Clip4Clip [76] that leverages pre-trained CLIP [78] on over 400M
image-text data, which is orders of magnitude larger than our pre-training data.
VIOLET closes the gap between previous end-to-end pre-trained models and
Clip4Clip, and we believe pre-training VIOLET with larger-scale data can
further reduce the gap.
Zero-shot text-to-video retrieval. We further conduct generalizability
evaluation under the zero-shot setting on MSRVTT and DiDeMo in Table 1(b).
Similarly, VIOLET achieves remarkable performance
improvements over the existing methods by large margins. Specifically, we
observe $+6\%$ gain on R@1 over previous models using pre-extracted video
features and $+1.2-2\%$ on R@1 over end-to-end pre-trained models, excluding
CLIP [78, 76].
| TGIF | MSRVTT | LSMDC | MSVD
---|---|---|---|---
Method | Action | Transition | Frame | MC | QA | MC | FiB | QA
ClipBERT [18] | 82.8 | 87.8 | 60.3 | 88.2 | 37.4 | - | - | -
JustAsk [68] | - | - | - | - | 41.5 | - | - | 46.3
MERLOT [19] | 94.0 | 96.2 | 69.5 | 90.9 | 43.1 | 81.7 | 52.9 | -
VIOLET | 92.5 | 95.7 | 68.9 | 91.9 | 43.9 | 82.8 | 53.7 | 47.9
Table 2: Comparison with SOTA methods on video question answering. We gray out
MERLOT due to its excessive computational cost (e.g., 30K TPU hours vs. 2K GPU
hours (ours) for pre-training and frame resolution 704 vs. 224 for downstream
tasks).
Video Question Answering. We compare with prior arts on video question
answering (QA) tasks in Table 2. VIOLET surpasses ClipBERT [18] with
significant performance gain of $+9.7\%$ on TGIF-Action, $+7.9\%$ on TGIF-
Transition, $+8.6\%$ on TGIF-Frame, $+3.7\%$ on MSRVTT-MC and $+6.5\%$ on
MSRVTT-QA. These results suggest the explicit temporal modeling introduced by
our video transformer is essential for video QA tasks, and pre-training with
image-text data alone may not be sufficient for VidL modeling. We provide more
detailed discussions in Sec. 4.3.
Note that both JustAsk [68] and MERLOT [19] specifically focus on video QA.
JustAsk automatically generates 69M video-question-answer triplets from
narrated videos for training, which is hardly extendable to text-to-video
retrieval tasks. MERLOT is pre-trained for 40 epochs and with extensive
hyperparameter tuning on the frame resolution from 384x704 to 704x704 for
downstream tasks. The overall pre-training of MERLOT takes 30,720 TPU hours on
TPU v3. In contrast, we pre-train VIOLET for 5 epochs, which results in 2,240
GPU hours on V100 GPUs. We also adopt a much lower frame resolution of
224x224. With a much lower computational cost, VIOLET achieves around $+1.0\%$
performance gain over MERLOT on 4 video QA tasks for MSRVTT and LSMDC videos,
while remaining competitive on TGIF. We believe VIOLET can further improve with
a larger frame resolution and more pre-training epochs if computational
resources permit.
Video | TGIF- | TGIF- | MSRVTT- | DiDeMo-
---|---|---|---|---
Encoding | Action | Transition | Retrieval | Retrieval
Random initialized visual encoder
Mean | 72.1 | 83.5 | 08.4 / 22.7 / 35.3 | 09.1 / 24.9 / 36.7
Concat | 72.9 | 83.7 | 09.0 / 23.5 / 35.5 | 09.4 / 25.8 / 38.1
VT | 73.6 | 84.6 | 09.2 / 24.0 / 35.8 | 10.3 / 30.1 / 40.5
ImageNet pre-trained visual encoder
Mean | 77.5 | 86.5 | 09.6 / 26.7 / 39.5 | 09.5 / 27.5 / 40.9
Concat | 78.0 | 87.0 | 10.4 / 30.5 / 42.0 | 10.6 / 30.8 / 42.9
VT | 79.6 | 87.8 | 11.8 / 32.3 / 44.6 | 12.0 / 32.4 / 43.5
\+ Video-text pre-training on WebVid
Mean | 80.3 | 88.7 | 20.8 / 44.9 / 58.1 | 17.9 / 43.5 / 51.3
Concat | 82.5 | 91.2 | 23.5 / 51.9 / 63.0 | 22.2 / 50.5 / 62.6
VT | 85.8 | 92.1 | 27.0 / 56.5 / 68.8 | 26.1 / 56.9 / 68.9
Table 3: Impact of different temporal modeling methods over video inputs under
different settings: ($i$) random initialized visual encoder; ($ii$) ImageNet
[84] pre-trained visual encoder and ($iii$) Adding video-text pre-training on
WebVid [25].
### 4.3 Analysis of VIOLET
We conduct ablation experiments on two video question answering datasets
(TGIF-Action and TGIF-Transition) and two text-to-video retrieval datasets
(MSRVTT and DiDeMo) to study the factors leading to VIOLET’s success.
Impact of Temporal Video Modeling. To demonstrate the necessity of temporal
modeling even under sparse sampling, we compare three variants for temporal
modeling in Table 3. ($i$) Mean: mean-pooling over independently computed
frame features via ResNet-50 [43] as in [18]; ($ii$) Concat: concatenation of
the aforementioned frame features along the temporal dimension as in [19];
($iii$) VT: enforcing spatial-temporal modeling altogether on input video
frame sequences via Video Swin Transformer in VIOLET. The final video
representations are then concatenated with corresponding text embeddings and
fed in Cross-modal Transformer for downstream VidL modeling. We show results
under different settings: random-initialized visual encoder, ImageNet-
pretrained visual encoder, and with additional VidL pre-training on WebVid
[25].
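The three variants can be contrasted in a simplified, shapes-only sketch. The actual VT runs Video Swin Transformer over patch tokens; here a single generic transformer layer stands in purely to illustrate that frames interact, which is our simplification:

```python
import torch

def pool_frames(frame_feats, mode):
    """Contrast the three temporal-modeling variants on [T, D] per-frame
    features. Mean and Concat operate on independently computed frame
    features (as in [18]/[19]); "vt" approximates joint spatial-temporal
    modeling with one transformer layer, a stand-in for Video Swin."""
    if mode == "mean":                 # Mean: temporal order is lost entirely
        return frame_feats.mean(dim=0, keepdim=True)          # [1, D]
    if mode == "concat":               # Concat: order kept, no frame interaction
        return frame_feats                                    # [T, D]
    if mode == "vt":                   # VT-style: frames attend to each other
        layer = torch.nn.TransformerEncoderLayer(
            d_model=frame_feats.shape[-1], nhead=4, batch_first=True)
        return layer(frame_feats.unsqueeze(0)).squeeze(0)     # [T, D]
    raise ValueError(mode)
```

Only the "vt" path lets the representation of one frame depend on the others before the Cross-modal Transformer sees it, which is the property the ablation below isolates.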
VT consistently outperforms Mean and Concat over the 4 datasets across all
settings. The loss of temporal information in naive mean pooling (Mean)
results in the worst performance among the three. Although Concat can preserve
the temporal order of input video frames, it relies solely on the Cross-modal
Transformer to model both the temporal dynamics in the video and the
correspondence between visual and textual elements, which brings
unsatisfactory performance.
When taking a closer look into different pre-training settings, multimodal
pre-training on WebVid significantly boosts the model performance, compared to
unimodal pre-training of visual encoder on ImageNet. In addition, VT benefits
more from VidL pre-training, leading to a bigger performance gap when compared
to Mean or Concat, as exposure to video data during pre-training greatly
enhances the learning of temporal dynamics.
Pre-training | TGIF- | TGIF- | MSRVTT- | DiDeMo-
---|---|---|---|---
Task | Action | Transition | Retrieval | Retrieval
None | 81.9 | 88.5 | 13.0 / 36.5 / 49.6 | 18.3 / 46.4 / 56.5
VTM+MLM | 85.4 | 91.6 | 24.4 / 54.4 / 68.1 | 25.8 / 54.2 / 67.0
\+ MCM | 85.0 | 91.6 | 26.0 / 56.0 / 68.4 | 25.8 / 55.9 / 68.1
\+ MFM | 85.5 | 92.0 | 26.2 / 55.5 / 68.4 | 25.4 / 55.5 / 67.8
\+ MPM | 85.0 | 91.8 | 26.6 / 56.2 / 68.4 | 26.0 / 56.5 / 68.0
\+ MVM | 85.8 | 92.1 | 27.0 / 56.5 / 68.8 | 26.1 / 56.9 / 68.9
Table 4: Impact of self-supervised pre-training on video inputs. All pre-
training is conducted on WebVid [25].
Figure 4: MVM accuracy vs. downstream
performance. We adopt only MVM during pre-training (using 0% (w/o MVM), 10%
(8.9% MVM accuracy), 20% (13.4%), 50% (19.2%), and 100% (21.6%) of YT-Temporal
videos).
Effectiveness of MVM. To demonstrate the effectiveness of MVM, we compare
different variants of masked visual modeling when pre-trained on WebVid [25]
in Table 4. First, we establish two baselines: without pre-training (None) and
pre-training with only Visual-Text Matching and Masked Language Modeling
(VTM+MLM) following [18, 19, 25]. Then we augment VTM+MLM with different
variants of masked visual modeling tasks. Masked Classification Modeling (MCM)
mimics MRC in [24] to predict the ImageNet [84] category of the masked image
patch from a pre-trained ResNet-50 [43]; Masked Feature Modeling (MFM) [5]
distills the fixed frame features extracted from a pre-trained visual encoder.
We use the output features of the masked patches from the last CNN layer of an
ImageNet pre-trained ResNet-50 and adopt linear regression as the training
objective; Masked Patch Modeling (MPM) [5] distinguishes the correct masked
visual patch from negative patches in the same batch with Noise Contrastive
Estimation loss [85], similar to MFM-NCE in [5].
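As an illustration, the MPM-style contrastive objective can be sketched as an InfoNCE loss over in-batch patches; the temperature value and L2 normalization are our assumptions rather than details taken from [5]:

```python
import torch
import torch.nn.functional as F

def mpm_nce_loss(pred, target, temperature=0.07):
    """InfoNCE-style loss in the spirit of MPM: each prediction at a masked
    position, pred[i], should match its own target patch feature target[i]
    against all other in-batch patches as negatives."""
    pred = F.normalize(pred, dim=-1)          # [N, D] predictions at masked positions
    target = F.normalize(target, dim=-1)      # [N, D] features of the true patches
    logits = pred @ target.t() / temperature  # [N, N] similarity to all candidates
    labels = torch.arange(pred.size(0))       # the positive is on the diagonal
    return F.cross_entropy(logits, labels)
```

MCM and MFM replace this contrastive target with, respectively, a cross-entropy over ImageNet classes and a regression to fixed CNN features.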
Our results suggest that not all masked visual modeling methods bring
consistent improvement. MCM and MPM give worse results on TGIF-Action over
VTM+MLM; similar trends have been observed in [24, 5]. MFM seems to favor QA
tasks, and MPM benefits more on retrieval tasks. In contrast, MVM leads to the
best performance on all tasks, as it recovers masked patches into a finite
discrete set, making the learning of masked visual modeling easier.
We further investigate the relationship between MVM performance and downstream
performance. We pre-train VIOLET with MVM-only on 10%, 20%, 50%, and 100% of
video scenes from YT-Temporal [19], discarding the corresponding text. As
illustrated in Fig. 4, such MVM pre-training on video inputs only can greatly
lift the performance on all 4 datasets, even without text information.
Moreover, better MVM performance also leads to better downstream performance.
For example, with a 21.6% MVM accuracy on 100% YT-Temporal data, our model
achieves +2.3% improvement on TGIF-Action and +2.5% R@5 increase on MSRVTT. In
summary, results in Table 4 and Fig. 4 suggest that MVM is vital in the
success of VIOLET, as it learns a better video representation to benefit
downstream VidL tasks.
Method | TGIF- | TGIF- | MSRVTT- | DiDeMo-
---|---|---|---|---
Action | Transition | Retrieval | Retrieval
Without Pre-training |
VIOLET | 81.9 | 88.5 | 13.0 / 36.5 / 49.6 | 18.3 / 46.4 / 56.5
Pre-training on COCO+VG |
ClipBERT [18] | 82.8 | 87.8 | 22.0 / 46.8 / 59.9 | 20.4 / 48.0 / 60.8
VIOLET | 84.8 | 90.2 | 23.5 / 50.5 / 63.9 | 22.8 / 51.2 / 62.0
Pre-training on WebVid+CC |
Frozen [25] | - | - | 31.0 / 59.5 / 70.5 | 31.0 / 59.8 / 72.4
VIOLET | 87.1 | 93.6 | 34.2 / 63.5 / 73.6 | 32.9 / 63.0 / 74.5
Pre-training on YTTemporal |
MERLOT [19] | 94.0 | 96.2 | - | -
VIOLET | 91.0 | 94.7 | 25.4 / 54.3 / 64.6 | 26.7 / 56.4 / 64.6
Table 5: Impact of using different pre-training data. We gray out MERLOT due
to its excessive computational cost (e.g., 30K TPU hours vs. 2K GPU hours
(ours) for pre-training and frame resolution 384x704 vs. 224x224 (ours) for
downstream tasks).
Impact of different pre-training data. Table 5 establishes a fair comparison
to the recent SOTA methods, ClipBERT [18], Frozen [25] and MERLOT [19], with
the same pre-training data, respectively. We also compare the effects of
different pre-training data on downstream tasks.
Under fair comparison, our model consistently outperforms ClipBERT and Frozen
by large margins. When both pre-trained on COCO+VG [86, 44], VIOLET surpasses
ClipBERT by $>$+2.0% on Video QA tasks, and $>$+1.5% on R@1 for retrieval
tasks. Frozen adopts a two-stream architecture specifically designed for text-
to-video retrieval applications. VIOLET is not only applicable to video QA
tasks but also achieves a gain of $>$+1.9% on R@1 for retrieval tasks over
Frozen, when both pre-trained on WebVid+CC [25, 74]. On YT-Temporal [19],
VIOLET achieves competitive results with MERLOT on TGIF-Action and TGIF-
Transition with a much lower training cost, as discussed in Sec. 4.2. We
further examine the effect of different pre-training data on downstream tasks
with VIOLET. YT-Temporal is designed to promote video temporal reasoning and
not surprisingly leads to the best QA result. However, the noisy ASR
descriptions lead to smaller gains in retrieval tasks, with a similar
performance to COCO+VG, but much worse than WebVid+CC with a smaller data size
(5.5M vs. 180M). Therefore, we take advantage of both YT-Temporal and
WebVid+CC as our final pre-training corpus, which leads to strong performance
on both video QA and retrieval tasks as presented in Sec. 4.2.
Figure 5: Qualitative examples of self-reconstruction (highlighted with orange
bounding boxes) from predicted visual tokens during our Masked Visual-token
Modeling (MVM).
Qualitative Examples. Fig. 5 illustrates the qualitative examples of self-
reconstruction from predicted visual tokens during MVM, under both Blockwise
Masking (BM) and Attended Masking (AM). As shown, BM masks blocks of video
patches along with consecutive frames and AM masks the most-attended video
patches based on text input (e.g., drawing with “hand” and “cartoon image” in
the $2^{\text{nd}}$ row or “chicken” and “ground” in the $4^{\text{th}}$ row).
VIOLET improves visual reasoning through this video reconstruction during MVM,
and the better video scene understanding further benefits downstream VidL
tasks.
## 5 Conclusion
We present VIOLET, a fully end-to-end VIdeO-LanguagE Transformer, which
contains Video Swin Transformer to explicitly model the vital video temporal
for video-language learning. We further enhance VIOLET with a new pre-training
task, Masked Visual-token Modeling (MVM), that learns video scene
understanding through a mask-the-predict procedure with self-reconstructable
visual tokens. Experiments on various text-to-video retrieval and video
question answering tasks show that VIOLET achieves SOTA (or competitive)
performance. Comprehensive ablation studies demonstrate the necessity of
temporal video modeling and the effectiveness of MVM over previous MRM/MFM for
video-language reasoning under different pre-training settings.
## Appendix A Experimental Setup of Downstream Tasks
We evaluate our pre-trained VIOLET on text-to-video retrieval and video
question answering tasks across 12 downstream datasets. For text-to-video
retrieval, we report model performance on MSRVTT [1], DiDeMo [79], YouCook2
[13], and LSMDC [3] and use Recall at K (R@K) as the evaluation metric. For
video question answering, we consider datasets in both multiple-choice and
open-ended settings, including TGIF-Action, TGIF-Transition, TGIF-Frame [6],
MSRVTT-MC, MSRVTT-QA, MSVD-QA [7], LSMDC-MC and LSMDC-FiB [81]. We evaluate
our models using accuracy.
We follow the standard training/validation/testing splits of the original
datasets. If not otherwise stated, we sparsely sample $T$ = 5 video frames and
adopt video frame size 224 with patch size 32. We use AdamW [83] to fine-tune
VIOLET for each downstream task with an initial learning rate of 1.2e-5, betas
of (0.9, 0.98), and weight decay of 1e-3. All finetuning experiments are
conducted on Microsoft Azure [87] with 8 Nvidia V100 GPUs (32GB VRAM).
### A.1 Text-To-Video Retrieval
For text-to-video retrieval, similar to visual-text matching (VTM) during pre-
training, we treat corresponding video-text pairs as positives and all other
pairwise combinations as negatives. We adopt a fully-connected (FC) layer
(FC${}^{\text{T2V}}$) over the global VidL representation $h^{\text{c}}$ of
the [CLS] token to perform binary classification:
$\begin{split}b_{\text{pos}}&=\text{FC}^{\text{T2V}}(h^{\text{c}}_{\text{pos}}),\quad b_{\text{neg}}=\text{FC}^{\text{T2V}}(h^{\text{c}}_{\text{neg}}),\\ \mathcal{L}_{\text{T2V}}&=-\mathbb{E}[\log(b_{\text{pos}})+\log(1-b_{\text{neg}})],\end{split}$ (9)
where $h^{\text{c}}_{\text{pos}}$ or $h^{\text{c}}_{\text{neg}}$ is
$h^{\text{c}}$ of positive or negative pairs. In particular, we use pre-
trained FC${}^{\text{VTM}}$ for zero-shot text-to-video retrieval and to
initialize FC${}^{\text{T2V}}$ for further fine-tuning on each downstream
text-to-video retrieval task.
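Eq. (9) can be sketched directly in PyTorch. We apply a sigmoid to the FC output so that $b$ lies in $(0, 1)$; this detail is left implicit in the text:

```python
import torch

def t2v_loss(h_pos, h_neg, fc):
    """Sketch of Eq. (9): a shared FC head scores positive and negative
    video-text pairs, and the binary log-likelihood is maximized. `h_pos`
    and `h_neg` are [CLS] representations h^c of matched/mismatched pairs."""
    b_pos = torch.sigmoid(fc(h_pos))  # scores for matched pairs
    b_neg = torch.sigmoid(fc(h_neg))  # scores for mismatched pairs
    return -(torch.log(b_pos).mean() + torch.log(1 - b_neg).mean())
```

At retrieval time, the same head scores every candidate video for a text query and candidates are ranked by that score.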
MSRVTT [1] contains 10K YouTube videos with 200K human annotations. For fair
comparison [25, 18], we train on 9K training+validation splits and evaluate on
the 1K-A testing split. We adopt batch size 56 and train for 20 epochs.
DiDeMo [79] consists of 10K videos annotated with 40K sentences from Flickr.
Following [25, 18], we concatenate all sentences from the same video into a
paragraph and perform paragraph-to-video retrieval for DiDeMo. We adopt batch
size 48 and train for 20 epochs.
YouCook2 [13] contains 14K video clips from 2K cooking videos and 89 recipes.
Each clip is annotated with one sentence. We follow [16, 17] to report
retrieval performance on the entire validation clips. We adopt batch size 56
and train for 40 epochs.
LSMDC [3] is built upon 118K video clips from 202 movies. Each clip has a
caption from movie scripts or descriptive video services. Following [25, 16],
we evaluate on 1K testing clips that are disjoint from the training+validation
splits. We adopt batch size 56 and train for 40 epochs.
VideoQA | Task | #Option
---|---|---
Multiple- Choice | TGIF-Action [6] | 5
TGIF-Transition [6] | 5
MSRVTT-MC [7] | 5
LSMDC-MC [3] | 5
Open- Ended | TGIF-Frame [6] | 1,540
MSRVTT-QA [7] | 1,500
MSVD-QA [88] | 1,000
LSMDC-FiB [81] | 908
Table 6: Summary of video question answering tasks.
### A.2 Video Question Answering
We test our model on video question answering (QA) tasks in both multiple-
choice and open-ended settings, as summarized in Table 6. For multiple-choice
QA tasks, we concatenate the question with each answer option, separated by a
[SEP] token, to form the input text (Q+[SEP]+A). We adopt an FC layer upon
$h^{\text{c}}$ to predict the model confidence on each answer option. Cross-
entropy loss is used to train a classifier over all answer options for each
video-question pair. For open-ended QA tasks, we follow the common practice to
convert it to a classification task with a finite set of answer classes. We
build a specific answer vocabulary that can cover most common answers in the
training split of each dataset. Similarly, our model predicts the answer to a
given question over the answer vocabulary through an FC layer upon
$h^{\text{c}}$.
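The two QA heads described above can be sketched as follows; the hidden size and answer-vocabulary size are placeholders:

```python
import torch

class QAHeads(torch.nn.Module):
    """Minimal sketch of the two QA formulations. Multiple-choice: one score
    per (Q+[SEP]+A) pairing, trained with cross-entropy over the options.
    Open-ended: a single classifier over a fixed answer vocabulary."""
    def __init__(self, dim=768, num_answers=1500):
        super().__init__()
        self.mc_head = torch.nn.Linear(dim, 1)            # score one Q+A option
        self.oe_head = torch.nn.Linear(dim, num_answers)  # classify over vocab

    def multiple_choice(self, h_cls_per_option):          # [num_options, dim]
        # One h^c per answer option; cross-entropy is taken over options.
        return self.mc_head(h_cls_per_option).squeeze(-1)  # [num_options]

    def open_ended(self, h_cls):                          # [batch, dim]
        # Cross-entropy over the dataset-specific answer vocabulary.
        return self.oe_head(h_cls)                        # [batch, num_answers]
```

For a 5-way multiple-choice question, each of the 5 concatenated inputs is encoded separately and the 5 resulting scores are softmaxed against each other.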
TGIF-Action, TGIF-Transition, and TGIF-Frame [6] require spatial-temporal
reasoning to answer questions regarding GIF videos in TGIF-QA [6].
Specifically, we aim to test our model along three dimensions: ($i$) Action:
to recognize the repeated action; ($ii$) Transition: to identify the
transition between the before and after states; ($iii$) Frame: to answer
questions about a specific frame from the GIF video. Among them, TGIF-Action
and TGIF-Transition are collected under multiple-choice setting, and TGIF-
Frame is an open-ended video QA task with free-form answers. In our
implementation, we select the 1,540 most common answers as answer candidates
for TGIF-Frame. We adopt batch size 48 and train for 20 epochs.
MSRVTT-MC and MSRVTT-QA [7] are created based on videos and captions in MSRVTT
[1]. MSRVTT-MC is a multiple-choice task with videos as questions, and
captions as answers. Each video contains 5 captions, with only one positive
match. MSRVTT-QA contains 243K open-ended questions over 10K videos. We select
the 1,500 most common answers as the answer candidates. We adopt batch size 48 and
training epochs 20 for both datasets.
MSVD-QA [7] consists of 47K open-ended questions over 2K videos, based on
video-caption pairs from MSVD [88]. We use the 1,000 most common answers as the
answer vocabulary. We adopt batch size 80 and train for 40 epochs.
LSMDC-MC and LSMDC-FiB [81] are built from LSMDC dataset [3]. Similar to
MSRVTT-MC, LSMDC-MC requires the model to select the only positive caption
that describes the video from 5 caption candidates and formulates it as a
multiple-choice QA task. LSMDC-FiB replaces a word in the question sentence
with the [BLANK] token, and the model must recover the missing word. We
regard LSMDC-FiB as an open-ended video QA task. In particular, we use an FC
layer over the joint VidL representation $h$ of the [BLANK] token to predict
from 908 answer candidates. We adopt batch size 80 and train for 40 epochs.
Masking | TGIF- | TGIF- | MSRVTT- | DiDeMo-
---|---|---|---|---
Strategy | Action | Transition | Retrieval | Retrieval
Without pre-training
None | 81.9 | 88.5 | 13.0 / 36.5 / 49.6 | 18.3 / 46.4 / 56.5
Pre-train on WebVid [25] with VTM+MLM+MVM
Random | 83.7 | 90.8 | 24.3 / 54.8 / 66.7 | 24.2 / 53.5 / 67.6
BM | 85.4 | 91.8 | 27.0 / 56.2 / 68.6 | 25.8 / 56.8 / 68.8
AM | 85.5 | 91.6 | 26.8 / 56.5 / 68.7 | 26.0 / 56.8 / 68.6
BM+AM | 85.8 | 92.1 | 27.0 / 56.5 / 68.8 | 26.1 / 56.9 / 68.9
Table 7: Impact of masking strategy in MVM and MLM.
## Appendix B Impact of Masking Strategy
To amplify the effectiveness of our MLM and MVM, we introduce Blockwise
Masking (BM) and Attended Masking (AM) in Sec. 3.3. Table 7 compares different
masking strategies when pre-trained on WebVid [25]. Specifically, we compare 4
masking strategies: random masking, BM only, AM only, and BM+AM. Although it
improves over the non-pretrained baseline, random masking yields the smallest
performance improvement on both video QA and retrieval tasks. In contrast, BM
or AM alone brings more significant performance improvements on both tasks,
while each seems to benefit different tasks (e.g., 91.8% on TGIF-Transition
and 27.0% R@1 on MSRVTT-Retrieval for BM, and 85.5% on TGIF-Action and 26.0%
R@1 on DiDeMo-Retrieval for AM). Finally, leveraging both BM and AM (BM+AM)
leads to the best performance among the four.
Unlike random masking, BM cuts down the spurious success in MVM evaluation
through neighboring patches that are visually similar to the masked patches,
and AM puts more masking weights on more important video-text elements based
on the attention pattern from our Cross-modal Transformer. These results
demonstrate that both BM and AM contribute to the success of VIOLET.
## Appendix C Extending VIOLET to the Image Question Answering Task
In this section, we show that VIOLET is also extendable to image question
answering task by evaluating it on VCR [89], which requires commonsense
reasoning about the image content. We follow MERLOT [19] to draw colored
highlights around the referenced entity (e.g., [PERSON-1] and [CHAIR-2]) in
the given image and report performance on the multiple-choice Q→A subtask. To
finetune our model, we concatenate the question and each answer choice from
the 4 possible answer candidates. Similarly, an FC layer upon the global
cross-modal representation $h^{\text{c}}$ of the [CLS] token is adopted to
predict the answer, and cross-entropy loss is used to supervise the model
training. We
adopt batch size 48 and train for 20 epochs.
Method | Frame Size | VCR
---|---|---
MERLOT [19] | 384x704 | 75.1
VIOLET | 224x224 | 74.9
VIOLET | 384x384 | 76.3
Table 8: Comparison with MERLOT [19] under the same number of pre-training
epochs on VCR [89]. Pre-training is conducted on YT-Temporal [19] for 5 epochs.
The results are shown in Table 8. For a fair comparison, both VIOLET and
MERLOT are pre-trained on YT-Temporal [19] for 5 epochs. Note that MERLOT
adopts an input image resolution of 384x704. With an input image size of 224x224,
our VIOLET achieves comparable performance as MERLOT (74.9% vs. 75.1%). When
increasing the input image resolution to 384x384, though still smaller than
the input image size in MERLOT, VIOLET achieves superior performance with
an absolute gain of +1.2% over MERLOT.
As mentioned in Sec. 4.2, the full MERLOT pre-training requires excessive
computation power (30K TPU hours), while VIOLET, only pre-trained for 5 epochs
(2K GPU hours), can achieve competitive performance on video-language
downstream tasks with lower input resolution (ours: 224x224, MERLOT: 384x704).
When comparing the results from VIOLET with different input resolutions in
Table 8, we observe that higher input resolution results in better downstream
performance. It is also worth noting that longer pre-training leads to
monotonic performance improvement, as shown in [19]. Hence we believe VIOLET
can further improve with higher frame resolution and more pre-training epochs
if computational resources permit.
## Appendix D Qualitative Examples of Zero-shot Text-to-Video Retrieval
We visualize some qualitative examples of zero-shot text-to-video retrieval in
Fig. 6-9 on MSRVTT [1], DiDeMo [79], LSMDC [3], and YouCook2 [13],
respectively. These examples show that pre-training on large-scale visual-text
data (YT-Temporal [19], WebVid [25], and CC [74]) enables VIOLET to learn
cross-modal alignment to perform text-to-video retrieval in a zero-shot
scenario. For MSRVTT (Fig. 6), “grand theft auto 5” (a video game) is not a
commonly seen phrase, but we can still retrieve relevant video clips depicting
the video game. For paragraph-to-video retrieval in DiDeMo (Fig. 7), the
textual query is a concatenation of multiple sentences, much longer than the
input text during pre-training. Surprisingly, VIOLET can still retrieve videos
that contain relevant semantics mentioned in the textual query. For instance,
the top-2/3/5 of the retrieved videos on the upper left of Fig. 7 correspond
to the textual cues, such as “traveling away, comes back, red signs flaps”,
Moreover, visualizations of zero-shot text-to-video retrieval on LSMDC (Fig.
8) and YouCook2 (Fig. 9) show that VIOLET is generalizable to more specific
video domains, such as movie or cooking videos.
## Appendix E Limitation and Broader Impact
The broader impact of this paper falls in applications of video-language
(VidL) reasoning, including video question answering and text-to-video
retrieval. Our end-to-end VIdeO-LanguagE Transformer (VIOLET) has the
potential to be applied to various VidL tasks, such as video captioning and
video grounded dialogue, which are worth exploring in future work. In
addition, the newly introduced Masked Visual-token Modeling can further
improve the performance when scaling up the pre-training to even larger-scale
visual-text data. There are also several potential limitations of VIOLET that
would make for promising avenues of future work, including: 1) extending
VIOLET to model the full-length videos with densely sampled frames for
downstream VidL tasks like TGIF-Count [6]; and 2) exploring extra input
signals from videos, such as audio, into the VIOLET framework for better
performance.
We do not anticipate major ethical issues in our work. As a data-driven
system, the self-supervised method is sensitive to the distribution of the
pre-training data. Therefore, we consider diverse types of video, including
VLOGs, instructional videos, short-duration GIFs, and even static images
across YT-Temporal [19], WebVid [25], and CC [74]. The accompanying textual
inputs used to train our model are from various sources, such as alt-text
descriptions, human annotations, and ASR outputs, at different levels,
including temporally-specified and globally-aligned descriptions. We conduct a
comprehensive downstream evaluation over 12 VidL tasks, trying to mitigate the
bias of our learned cross-modal representation for better VidL reasoning.
Figure 6: Qualitative examples of zero-shot text-to-video retrieval on MSRVTT
[1].
Figure 7: Qualitative examples of zero-shot text-to-video retrieval on DiDeMo
[79].
Figure 8: Qualitative examples of zero-shot text-to-video retrieval on LSMDC
[3].
Figure 9: Qualitative examples of zero-shot text-to-video retrieval on
YouCook2 [13].
## References
* [1] Jun Xu, Tao Mei, Ting Yao, and Yong Rui. MSR-VTT: A Large Video Description Dataset for Bridging Video and Language. In Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
* [2] Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. Dense-Captioning Events in Videos. In International Conference on Computer Vision (ICCV), 2017.
* [3] Anna Rohrbach, Marcus Rohrbach, Niket Tandon, and Bernt Schiele. A Dataset for Movie Description. In Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
* [4] Jie Lei, Licheng Yu, Tamara L. Berg, and Mohit Bansal. TVR: A Large-Scale Dataset for Video-Subtitle Moment Retrieval. In European Conference on Computer Vision (ECCV), 2020.
* [5] Linjie Li, Yen-Chun Chen, Yu Cheng, Zhe Gan, Licheng Yu, and Jingjing Liu. HERO: Hierarchical Encoder for Video+Language Omni-representation Pre-training. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020.
* [6] Yunseok Jang, Yale Song, Youngjae Yu, Youngjin Kim, and Gunhee Kim. TGIF-QA: Toward Spatio-Temporal Reasoning in Visual Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
* [7] Dejing Xu, Zhou Zhao, Jun Xiao, Fei Wu, Hanwang Zhang, Xiangnan He, and Yueting Zhuang. Video Question Answering via Gradually Refined Attention over Appearance and Motion. In ACM Multimedia (ACMMM), 2017.
* [8] Jie Lei, Licheng Yu, Mohit Bansal, and Tamara L. Berg. TVQA: Localized, Compositional Video Question Answering. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018.
* [9] Jie Lei, Licheng Yu, Tamara L. Berg, and Mohit Bansal. TVQA+: Spatio-Temporal Grounding for Video Question Answering. In Annual Meeting of the Association for Computational Linguistics (ACL), 2020.
* [10] Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. Localizing Moments in Video with Natural Language. In International Conference on Computer Vision (ICCV), 2017.
* [11] Jiyang Gao, Chen Sun, Zhenheng Yang, and Ram Nevatia. TALL: Temporal Activity Localization via Language Query. In International Conference on Computer Vision (ICCV), 2017.
* [12] Xin Wang, Jiawei Wu, Junkun Chen, Lei Li, Yuan-Fang Wang, and William Yang Wang. VATEX: A Large-Scale, High-Quality Multilingual Dataset for Video-and-Language Research. In International Conference on Computer Vision (ICCV), 2019.
* [13] Luowei Zhou, Chenliang Xu, and Jason J. Corso. Towards Automatic Learning of Procedures from Web Instructional Videos. In AAAI Conference on Artificial Intelligence (AAAI), 2018.
* [14] Thao Minh Le, Vuong Le, Svetha Venkatesh, and Truyen Tran. Hierarchical Conditional Relation Networks for Video Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
* [15] Linchao Zhu and Yi Yang. ActBERT: Learning Global-Local Video-Text Representations. In Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
* [16] Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips. In International Conference on Computer Vision (ICCV), 2019.
* [17] Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, and Christoph Feichtenhofer. VideoCLIP: Contrastive Pre-training for Zero-shot Video-Text Understanding. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2021.
* [18] Jie Lei, Linjie Li, Luowei Zhou, Zhe Gan, Tamara L. Berg, Mohit Bansal, and Jingjing Liu. Less is More: ClipBERT for Video-and-Language Learning via Sparse Sampling. In Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
* [19] Rowan Zellers, Ximing Lu, Jack Hessel, Youngjae Yu, Jae Sung Park, Jize Cao, Ali Farhadi, and Yejin Choi. MERLOT: Multimodal Neural Script Knowledge Models. In Conference on Neural Information Processing Systems (NeurIPS), 2021.
* [20] Peng Wu, Xiangteng He, Mingqian Tang, Yiliang Lv, and Jing Liu. HANet: Hierarchical Alignment Networks for Video-Text Retrieval. In ACM Multimedia (ACMMM), 2021.
* [21] Gedas Bertasius, Heng Wang, and Lorenzo Torresani. Is Space-Time Attention All You Need for Video Understanding? In International Conference on Machine Learning (ICML), 2021.
* [22] Ze Liu, Jia Ning, Yue Cao, Yixuan Wei, Zheng Zhang, Stephen Lin, and Han Hu. Video Swin Transformer. In arXiv:2106.13230, 2021.
* [23] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2019.
* [24] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. UNITER: UNiversal Image-TExt Representation Learning. In European Conference on Computer Vision (ECCV), 2020.
* [25] Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval. In International Conference on Computer Vision (ICCV), 2021.
* [26] Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural Discrete Representation Learning. In Conference on Neural Information Processing Systems (NeurIPS), 2017.
* [27] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-Shot Text-to-Image Generation. In arXiv:2102.12092, 2021.
* [28] Linjie Li, Jie Lei, Zhe Gan, Licheng Yu, Yen-Chun Chen, Rohit Pillai, Yu Cheng, Luowei Zhou, Xin Eric Wang, William Yang Wang, Tamara Lee Berg, Mohit Bansal, Jingjing Liu, Lijuan Wang, and Zicheng Liu. VALUE: A Multi-Task Benchmark for Video-and-Language Understanding Evaluation. In Conference on Neural Information Processing Systems (NeurIPS), 2021.
* [29] Yang Liu, Samuel Albanie, Arsha Nagrani, and Andrew Zisserman. Use What You Have: Video Retrieval Using Representations From Collaborative Experts. In British Machine Vision Conference (BMVC), 2020.
* [30] Jianwen Jiang, Ziqiang Chen, Haojie Lin, Xibin Zhao, and Yue Gao. Divide and Conquer: Question-Guided Spatio-Temporal Contextual Attention for Video Question Answering. In AAAI Conference on Artificial Intelligence (AAAI), 2020.
* [31] Valentin Gabeur, Chen Sun, Karteek Alahari, and Cordelia Schmid. Multi-modal Transformer for Video Retrieval. In European Conference on Computer Vision (ECCV), 2020.
* [32] Mandela Patrick, Po-Yao Huang, Yuki Asano, Florian Metze, Alexander Hauptmann, Joao Henriques, and Andrea Vedaldi. Support-set bottlenecks for video-text representation learning. In International Conference for Learning Representations (ICLR), 2021.
* [33] Jiyang Gao, Runzhou Ge, Kan Chen, and Ram Nevatia. Motion-Appearance Co-Memory Networks for Video Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
* [34] Bowen Zhang, Hexiang Hu, and Fei Sha. Cross-Modal and Hierarchical Modeling of Video and Text. In European Conference on Computer Vision (ECCV), 2018.
* [35] Jie Lei, Tamara L Berg, and Mohit Bansal. QVHighlights: Detecting Moments and Highlights in Videos via Natural Language Queries. In Conference on Neural Information Processing Systems (NeurIPS), 2021.
* [36] Chenyou Fan, Xiaofan Zhang, Shu Zhang, Wensheng Wang, Chi Zhang, and Heng Huang. Heterogeneous Memory Enhanced Multimodal Attention Model for Video Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
* [37] Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang, Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Viola, Tim Green, Trevor Back, Paul Natsev, Mustafa Suleyman, and Andrew Zisserman. The Kinetics Human Action Video Dataset. In arXiv:1705.06950, 2017.
* [38] Limin Wang, Yuanjun Xiong, Zhe Wang, Yu Qiao, Dahua Lin, Xiaoou Tang, and Luc Van Gool. Temporal Segment Networks: Towards Good Practices for Deep Action Recognition. In European Conference on Computer Vision (ECCV), 2016.
* [39] Joao Carreira and Andrew Zisserman. Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
* [40] Saining Xie, Chen Sun, Jonathan Huang, Zhuowen Tu, and Kevin Murphy. Rethinking Spatiotemporal Feature Learning: Speed-Accuracy Trade-offs in Video Classification. In European Conference on Computer Vision (ECCV), 2018.
* [41] Christoph Feichtenhofer, Haoqi Fan, Jitendra Malik, and Kaiming He. SlowFast Networks for Video Recognition. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
* [42] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: a Large-Scale Hierarchical Image Database. In Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
* [43] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. In Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
* [44] Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Fei-Fei Li. Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations. In International Journal of Computer Vision (IJCV), 2017.
* [45] Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-Up and Top-Down Attention for Image Captioning and Visual Question Answering. In Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
* [46] Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. VideoBERT: A Joint Model for Video and Language Representation Learning. In International Conference on Computer Vision (ICCV), 2019.
* [47] Andrew Rouditchenko, Angie Boggust, David Harwath, Brian Chen, Dhiraj Joshi, Samuel Thomas, Kartik Audhkhasi, Hilde Kuehne, Rameswar Panda, Rogerio Feris, Brian Kingsbury, Michael Picheny, Antonio Torralba, and James Glass. AVLnet: Learning Audio-Visual Language Representations from Instructional Videos. In INTERSPEECH, 2021.
* [48] Song Liu, Haoqi Fan, Shengsheng Qian, Yiru Chen, Wenkui Ding, and Zhongyuan Wang. HiT: Hierarchical Transformer with Momentum Contrast for Video-Text Retrieval. In arXiv:2103.15049, 2021.
* [49] Antoine Miech, Jean-Baptiste Alayrac, Lucas Smaira, Ivan Laptev, Josef Sivic, and Andrew Zisserman. End-to-End Learning of Visual Representations from Uncurated Instructional Videos. In Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
* [50] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention Is All You Need. In Conference on Neural Information Processing Systems (NeurIPS), 2017.
* [51] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. RoBERTa: A Robustly Optimized BERT Pretraining Approach. In arXiv:1907.11692, 2019.
* [52] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In Conference on Neural Information Processing Systems (NeurIPS), 2019.
* [53] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. In arXiv:1910.10683, 2020.
* [54] Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In International Conference for Learning Representations (ICLR), 2020.
* [55] Kevin Clark, Minh-Thang Luong, Quoc V Le, and Christopher D Manning. ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. In International Conference for Learning Representations (ICLR), 2020.
* [56] Wonjae Kim, Bokyung Son, and Ildoo Kim. ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision. In International Conference on Machine Learning (ICML), 2021.
* [57] Karan Desai and Justin Johnson. VirTex: Learning Visual Representations from Textual Annotations. In Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
* [58] Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 12-in-1: Multi-Task Vision and Language Representation Learning. In Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
* [59] Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. VL-BERT: Pre-training of Generic Visual-Linguistic Representations. In International Conference for Learning Representations (ICLR), 2020.
* [60] Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J. Corso, and Jianfeng Gao. Unified Vision-Language Pre-Training for Image Captioning and VQA. In AAAI Conference on Artificial Intelligence (AAAI), 2020.
* [61] Gen Li, Nan Duan, Yuejian Fang, Ming Gong, Daxin Jiang, and Ming Zhou. Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training. In AAAI Conference on Artificial Intelligence (AAAI), 2020.
* [62] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. VisualBERT: A Simple and Performant Baseline for Vision and Language. In arXiv:1908.03557, 2019.
* [63] Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, and Jingjing Liu. Large-Scale Adversarial Training for Vision-and-Language Representation Learning. In Conference on Neural Information Processing Systems (NeurIPS), 2020.
* [64] Siqi Sun, Yen-Chun Chen, Linjie Li, Shuohang Wang, Yuwei Fang, and Jingjing Liu. LightningDOT: Pre-training Visual-Semantic Embeddings for Real-Time Image-Text Retrieval. In Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), 2021.
* [65] Mingyang Zhou, Luowei Zhou, Shuohang Wang, Yu Cheng, Linjie Li, Zhou Yu, and Jingjing Liu. UC2: Universal Cross-lingual Cross-modal Vision-and-Language Pre-training. In Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
* [66] Seonhoon Kim, Seohyeong Jeong, Eunbyul Kim, Inho Kang, and Nojun Kwak. Self-supervised Pre-training and Contrastive Representation Learning for Multiple-choice Video QA. In AAAI Conference on Artificial Intelligence (AAAI), 2021.
  * [67] Zekun Yang, Noa Garcia, Chenhui Chu, Mayu Otani, Yuta Nakashima, and Haruo Takemura. BERT Representations for Video Question Answering. In Winter Conference on Applications of Computer Vision (WACV), 2020.
* [68] Antoine Yang, Antoine Miech, Josef Sivic, Ivan Laptev, and Cordelia Schmid. Just Ask: Learning to Answer Questions from Millions of Narrated Videos. In International Conference on Computer Vision (ICCV), 2021.
* [69] Jason Tyler Rolfe. Discrete Variational Autoencoders. In International Conference for Learning Representations (ICLR), 2017.
* [70] Hangbo Bao, Li Dong, and Furu Wei. BEiT: BERT Pre-Training of Image Transformers. In arXiv:2106.08254, 2021.
* [71] Hao Tan, Jie Lei, Thomas Wolf, and Mohit Bansal. VIMPAC: Video Pre-Training via Masked Token Prediction and Contrastive Learning. In arXiv:2106.11250, 2021.
* [72] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In International Conference for Learning Representations (ICLR), 2021.
* [73] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin Johnson, Xiaobing Liu, Łukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rudnick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation. In arXiv:1609.08144, 2016.
* [74] Piyush Sharma, Nan Ding, Sebastian Goodman, and Radu Soricut. Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning. In Annual Meeting of the Association for Computational Linguistics (ACL), 2018.
  * [75] Jianwei Yang, Yonatan Bisk, and Jianfeng Gao. TACo: Token-aware Cascade Contrastive Learning for Video-Text Alignment. In International Conference on Computer Vision (ICCV), 2021.
* [76] Huaishao Luo, Lei Ji, Ming Zhong, Yang Chen, Wen Lei, Nan Duan, and Tianrui Li. CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval. In arXiv:2104.08860, 2021.
* [77] Hassan Akbari, Linagzhe Yuan, Rui Qian, Wei-Hong Chuang, Shih-Fu Chang, Yin Cui, and Boqing Gong. VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text. In arXiv:2104.11178, 2021.
* [78] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning Transferable Visual Models From Natural Language Supervision. In arXiv:2103.00020, 2021.
* [79] Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. Localizing Moments in Video with Natural Language. In International Conference on Computer Vision (ICCV), 2017.
* [80] Youngjae Yu, Jongseok Kim, and Gunhee Kim. A Joint Sequence Fusion Model for Video Question Answering and Retrieval. In European Conference on Computer Vision (ECCV), 2018.
* [81] Atousa Torabi, Niket Tandon, and Leonid Sigal. Learning Language-Visual Embedding for Movie Understanding with Natural-Language. In arXiv:1609.08124, 2016.
* [82] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Conference on Neural Information Processing Systems (NeurIPS), 2019.
* [83] Ilya Loshchilov and Frank Hutter. Decoupled Weight Decay Regularization. In International Conference for Learning Representations (ICLR), 2019.
* [84] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In Conference on Neural Information Processing Systems (NeurIPS), 2012.
* [85] Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the Limits of Language Modeling. In arXiv:1602.02410, 2016.
* [86] Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Saurabh Gupta, Piotr Dollar, and C. Lawrence Zitnick. Microsoft COCO Captions: Data Collection and Evaluation Server. In arXiv:1504.00325, 2015.
* [87] Microsoft Azure. https://azure.microsoft.com/.
* [88] David L. Chen and William B. Dolan. Collecting Highly Parallel Data for Paraphrase Evaluation. In Annual Meetings of the Association for Computational Linguistics (ACL), 2011.
* [89] Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From Recognition to Cognition: Visual Commonsense Reasoning. In Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
On Some Quadratic Algebras I $\boldsymbol{\frac{1}{2}}$:
Combinatorics of Dunkl and Gaudin Elements,
Schubert, Grothendieck, Fuss–Catalan,
Universal Tutte and Reduced Polynomials
Anatol N. KIRILLOV †‡§
† Research Institute of Mathematical Sciences (RIMS), Kyoto, Sakyo-ku
606-8502, Japan<EMAIL_ADDRESS>http://www.kurims.kyoto-u.ac.jp/~kirillov/
‡ The Kavli Institute for the Physics and Mathematics of the Universe (IPMU),
‡ 5-1-5 Kashiwanoha, Kashiwa, 277-8583, Japan
§ Department of Mathematics, National Research University Higher School of
Economics,
§ 7 Vavilova Str., 117312, Moscow, Russia
Received March 23, 2015, in final form December 27, 2015; Published online
January 05, 2016
We study some combinatorial and algebraic properties of certain quadratic
algebras related to dynamical classical and classical Yang–Baxter equations.
Key words: braid and Yang–Baxter groups; classical and dynamical Yang–Baxter relations;
classical Yang–Baxter, Kohno–Drinfeld and $3$-term relations algebras; Dunkl,
Gaudin and Jucys–Murphy elements; small quantum cohomology and $K$-theory of
flag varieties; Pieri rules; Schubert, Grothendieck, Schröder, Ehrhart,
Chromatic, Tutte and Betti polynomials; reduced polynomials; Chan–Robbins–Yuen
polytope; $k$-dissections of a convex $(n+k+1)$-gon, Lagrange inversion
formula and Richardson permutations; multiparameter deformations of
Fuss–Catalan and Schröder polynomials; Motzkin, Riordan, Fine, poly-Bernoulli
and Stirling numbers; Euler numbers and Brauer algebras; VSASM and CSTCPP;
Birman–Ko–Lee monoid; Kronecker elliptic sigma functions
Mathematics Subject Classification: 14N15; 53D45; 16W30
To the memory of Alain Lascoux 1944–2013, the great Mathematician, from whom I
have learned a lot about the Schubert and Grothendieck polynomials.
###### Contents
1. 1 Introduction
2. 2 Dunkl elements
1. 2.1 Some representations of the algebra $6DT_{n}$
1. 2.1.1 Dynamical Dunkl elements and equivariant quantum cohomology
2. 2.1.2 Step functions and the Dunkl–Uglov representations of the degenerate affine Hecke algebras [138]
3. 2.1.3 Extended Kohno–Drinfeld algebra and Yangian Dunkl–Gaudin elements
2. 2.2 “Compatible” Dunkl elements, Manin matrices and algebras related with weighted complete graphs $rK_{n}$
3. 2.3 Miscellany
1. 2.3.1 Non-unitary dynamical classical Yang–Baxter algebra ${\rm DCYB}_{n}$
2. 2.3.2 Dunkl and Knizhnik–Zamolodchikov elements
3. 2.3.3 Dunkl and Gaudin operators
4. 2.3.4 Representation of the algebra $3T_{n}$ on the free algebra $\mathbb{Z}\langle t_{1},\ldots,t_{n}\rangle$
5. 2.3.5 Kernel of Bruhat representation
6. 2.3.6 The Fulton universal ring [47], multiparameter quantum cohomology of flag varieties [45] and the full Kostant–Toda lattice [29, 80]
3. 3 Algebra $3HT_{n}$
1. 3.1 Modified three term relations algebra $3MT_{n}(\beta,\psi)$
1. 3.1.1 Equivariant modified three term relations algebra
2. 3.2 Multiplicative Dunkl elements
3. 3.3 Truncated Gaudin operators
4. 3.4 Shifted Dunkl elements $\mathfrak{d}_{i}$ and $\mathfrak{D}_{i}$
4. 4 Algebra $3T_{n}^{(0)}(\Gamma)$ and Tutte polynomial of graphs
1. 4.1 Graph and nil-graph subalgebras, and partial flag varieties
1. 4.1.1 Nil-Coxeter and affine nil-Coxeter subalgebras in $3T_{n}^{(0)}$
2. 4.1.2 Parabolic 3-term relations algebras and partial flag varieties
1. 4.1.3 Universal Tutte polynomials
3. 4.1.4 Quasi-classical and associative classical Yang–Baxter algebras of type $B_{n}$
2. 4.2 Super analogue of 6-term relations and classical Yang–Baxter algebras
1. 4.2.1 Six term relations algebra $6T_{n}$, its quadratic dual $(6T_{n})^{!}$, and algebra $6HT_{n}$
2. 4.2.2 Algebras $6T_{n}^{(0)}$ and $6T_{n}^{\bigstar}$
3. 4.2.3 Hilbert series of algebras ${\rm CYB}_{n}$ and $6T_{n}$
4. 4.2.4 Super analogue of 6-term relations algebra
3. 4.3 Four term relations algebras / Kohno–Drinfeld algebras
1. 4.3.1 Kohno–Drinfeld algebra $4T_{n}$ and that ${\rm CYB}_{n}$
2. 4.3.2 Nonsymmetric Kohno–Drinfeld algebra $4NT_{n}$, and McCool algebras ${\cal P}\Sigma_{n}$ and ${\cal P}\Sigma_{n}^{+}$
3. 4.3.3 Algebras $4TT_{n}$ and $4ST_{n}$
4. 4.4 Subalgebra generated by Jucys–Murphy elements in $4T_{n}^{0}$
5. 4.5 Nonlocal Kohno–Drinfeld algebra $NL4T_{n}$
1. 4.5.1 On relations among JM-elements in Hecke algebras
6. 4.6 Extended nil-three term relations algebra and DAHA, cf. [24]
7. 4.7 Braid, affine braid and virtual braid groups
1. 4.7.1 Yang–Baxter groups
2. 4.7.2 Some properties of braid and Yang–Baxter groups
3. 4.7.3 Artin and Birman–Ko–Lee monoids
5. 5 Combinatorics of associative Yang–Baxter algebras
1. 5.1 Combinatorics of Coxeter element
1. 5.1.1 Multiparameter deformation of Catalan, Narayana and Schröder numbers
2. 5.2 Grothendieck and $q$-Schröder polynomials
1. 5.2.1 Schröder paths and polynomials
2. 5.2.2 Grothendieck polynomials and $k$-dissections
3. 5.2.3 Grothendieck polynomials and $q$-Schröder polynomials
4. 5.2.4 Specialization of Schubert polynomials
5. 5.2.5 Specialization of Grothendieck polynomials
    3. 5.3 The “longest element” and Chan–Robbins–Yuen polytope (Footnote 57: Some results of this section, e.g., Theorems 5.63 and 5.65, have been proved independently and in greater generality in [102].)
1. 5.3.1 The Chan–Robbins–Yuen polytope ${\cal{CRY}}_{n}$
2. 5.3.2 The Chan–Robbins–Mészáros polytope ${\cal{P}}_{n,m}$
4. 5.4 Reduced polynomials of certain monomials
1. 5.4.1 Reduced polynomials, Motzkin and Riordan numbers
2. 5.4.2 Reduced polynomials, dissections and Lagrange inversion formula
6. A Appendixes
1. A.1 Grothendieck polynomials
2. A.2 Cohomology of partial flag varieties
3. A.3 Multiparamater 3-term relations algebras
1. A.3.1 Equivariant multiparameter 3-term relations algebras
2. A.3.2 Algebra $3QT_{n}(\beta,h)$, generalized unitary case
4. A.4 Koszul dual of quadratic algebras and Betti numbers
5. A.5 On relations in the algebra $Z_{n}^{0}$
1. A.5.1 Hilbert series ${\rm Hilb}\big{(}3T_{n}^{0},t\big{)}$ and ${\rm Hilb}\big{(}\big{(}3T_{n}^{0}\big{)}^{!},t\big{)}$: Examples
6. A.6 Summation and Duality transformation formulas [63]
7. Acknowledgments
### Extended abstract
We introduce and study a certain class of quadratic algebras, which are
nonhomogeneous in general, together with a distinguished set of mutually
commuting elements inside each of them, the so-called Dunkl elements. We describe
relations among the Dunkl elements in the case of a family of quadratic
algebras corresponding to a certain splitting of the universal classical
Yang–Baxter relations into two three term relations. This result is a further
extension and generalization of analogous results obtained in [45, 117] and
[76]. As an application we describe explicitly the set of relations among the
Gaudin elements in the group ring of the symmetric group, cf. [108]. We also
study relations among the Dunkl elements in the case of (nonhomogeneous)
quadratic algebras related to the universal dynamical classical Yang–Baxter
relations. We also point out some connections between the results obtained in
[45, 72, 75] and those obtained in [54]. Finally, we identify a subalgebra
generated by the generators corresponding to the simple roots in the extended
Fomin–Kirillov algebra with the $\rm DAHA$, see Section 4.3.
The set of generators of the algebras in question naturally corresponds to the
set of edges of the complete graph $K_{n}$ (to the set of edges and loops of
the complete graph with (simple) loops ${\widetilde{K}}_{n}$ in dynamical and
equivariant cases). More generally, starting from any subgraph $\Gamma$ of the
complete graph with simple loops ${\widetilde{K}}_{n}$ we define a (graded)
subalgebra $3T_{n}^{(0)}(\Gamma)$ of the (graded) algebra
$3T_{n}^{(0)}({\widetilde{K}}_{n})$ [70]. In the case of loop-less graphs
$\Gamma\subset K_{n}$ we state a conjecture (Conjecture 35 in the main text)
which relates the Hilbert polynomial of the abelian quotient
$3T_{n}^{(0)}(\Gamma)^{ab}$ of the algebra $3T_{n}^{(0)}(\Gamma)$ and the
chromatic polynomial of the graph $\Gamma$ we started with. (Footnote 1: We
expect that a similar conjecture is true for any finite (oriented) matroid
$\cal{M}$. Namely, one (A.K.) can define an analogue of the three term
relations algebra $3T^{(0)}({\cal{M}})$ for any (oriented) matroid $\cal{M}$.
We expect that the abelian quotient $3T^{(0)}({\cal{M}})^{ab}$ of the algebra
$3T^{(0)}({\cal{M}})$ is isomorphic to the Orlik–Terao algebra [114], denoted
by ${\rm OT}({\cal{M}})$ (known also as the even version of the Orlik–Solomon
algebra, denoted by ${\rm OS}^{+}({\cal{M}})$), associated with the matroid
$\cal{M}$ [28]. Moreover, the anticommutative quotient of the odd version of
the algebra $3T^{(0)}({\cal{M}})$, as we expect, is isomorphic to the
Orlik–Solomon algebra ${\rm OS}({\cal{M}})$ associated with the matroid
${\cal{M}}$, see, e.g., [11, 49]. In particular, $\displaystyle{\rm
Hilb}\big{(}3T^{(0)}({\cal{M}})^{ab},t\big{)}=t^{r({\cal{M}})}{\rm
Tutte}\big{(}{\cal{M}};1+t^{-1},0\big{)}.$ We expect that the Tutte polynomial
of a matroid, ${\rm Tutte}({\cal{M}},x,y)$, is related to the Betti
polynomial of the matroid $\cal{M}$. Replacing the relations $u_{ij}^{2}=0$,
$\forall\,i,j$, in the definition of the algebra $3T^{(0)}(\Gamma)$ by the
relations $u_{ij}^{2}=q_{ij}$, $(i,j)\in E(\Gamma)$, where
$\\{q_{ij}\\}_{(i,j)\in E(\Gamma)}$, $q_{ij}=q_{ji}$, is a collection of
central elements, gives rise to a quantization of the Orlik–Terao algebra
${\rm OT}(\Gamma)$. It seems an interesting task to clarify the
combinatorial/geometric significance of noncommutative versions of
Orlik–Terao algebras (as well as Orlik–Solomon ones) defined as follows:
${\cal{OT}}(\Gamma):=3T^{(0)}(\Gamma)$, its “quantization”
$3T^{({\boldsymbol{q}})}(\Gamma)^{ab}$ and the $K$-theoretic analogue
$3T^{({\boldsymbol{q}})}(\Gamma,\beta)^{ab}$, cf. Definition 3.1, in the
theory of hyperplane arrangements. Note that a small modification of the
arguments in [89] used in the proof of our Conjecture 35 gives rise to a
theorem that the algebra $3T_{n}(\Gamma)^{ab}$ is isomorphic to the
Orlik–Terao algebra ${\rm OT}(\Gamma)$ studied in [126].) (Footnote 2: In the
case of simple graphs our Conjecture 35 has been proved in [89].) We check our
conjecture for the complete graphs $K_{n}$ and the complete bipartite graphs
$K_{n,m}$. Besides, in the case of the complete multipartite graph
$K_{n_{1},\ldots,n_{r}}$, we identify the commutative subalgebra in the
algebra $3T_{N}^{(0)}(K_{n_{1},\ldots,n_{r}})$, $N=n_{1}+\cdots+n_{r}$,
generated by the elements
$\displaystyle\theta_{j,k_{j}}^{(N)}:=e_{k_{j}}\big{(}\theta_{N_{j-1}+1}^{(N)},\ldots,\theta_{N_{j}}^{(N)}\big{)},$
$\displaystyle 1\leq j\leq r,\quad 1\leq k_{j}\leq n_{j},\quad
N_{j}:=n_{1}+\cdots+n_{j},\quad N_{0}=0,$
with the cohomology ring $H^{*}({\cal{F}}l_{n_{1},\ldots,n_{r}},\mathbb{Z})$
of the partial flag variety ${\cal{F}}l_{n_{1},\dots,n_{r}}$. In other words,
the set of (additive) Dunkl elements
$\big{\\{}\theta_{N_{j-1}+1}^{(N)},\ldots,\theta_{N_{j}}^{(N)}\big{\\}}$ plays
the role of the Chern roots of the tautological vector bundles $\xi_{j}$,
$j=1,\ldots,r$, over the partial flag variety
${{\cal{F}}l}_{n_{1},\ldots,n_{r}}$, see Section 4.1.2 for details. In a
similar fashion, the set of multiplicative Dunkl elements
$\big{\\{}\Theta_{N_{j-1}+1}^{(N)},\ldots,\Theta_{N_{j}}^{(N)}\big{\\}}$ plays
the role of the $K$-theoretic version of the Chern roots of the tautological vector
bundle $\xi_{j}$ over the partial flag variety
${\cal{F}}l_{n_{1},\ldots,n_{r}}$. As a byproduct, for a given set of weights
${\boldsymbol{\ell}}=\\{\ell_{ij}\\}_{1\leq i<j\leq r}$ we compute the Tutte
polynomial $T(K_{n_{1},\ldots,n_{k}}^{({\boldsymbol{\ell}})},x,y)$ of the
${\boldsymbol{\ell}}$-weighted complete multipartite graph
$K_{n_{1},\ldots,n_{k}}^{({\boldsymbol{\ell}})}$, see Section 4, Definition
4.4 and Theorem 4.3. More generally, we introduce universal Tutte polynomial
$\displaystyle T_{n}(\\{q_{ij}\\},x,y)\in\mathbb{Z}[\\{q_{ij}\\}][x,y]$
in such a way that for any collection of non-negative integers
${\boldsymbol{m}}=\\{m_{ij}\\}_{1\leq i<j\leq n}$ and any subgraph
$\Gamma\subset K_{n}^{({\boldsymbol{m}})}$ of the weighted complete graph on
$n$ labeled vertices in which each edge $(i,j)\in K_{n}^{({\boldsymbol{m}})}$
appears with multiplicity $m_{ij}$, the specialization
$\displaystyle q_{ij}\longrightarrow 0\quad\text{if edge}\ \
(i,j)\notin\Gamma,\qquad
q_{ij}\longrightarrow[m_{ij}]_{y}:=\frac{y^{m_{ij}}-1}{y-1}\quad\text{if
edge}\ \ (i,j)\in\Gamma$
of the universal Tutte polynomial is equal to the Tutte polynomial of the graph
$\Gamma$ multiplied by $(x-1)^{\kappa(\Gamma)}$; see Section 4.1.2, Theorem
4.24, and the accompanying comments and examples for details.
We also introduce and study a family of $($super$)$ $6$-term relations
algebras, and suggest a definition of “multiparameter quantum deformation” of
the algebra of the curvature of $2$-forms of the Hermitian linear bundles over
the complete flag variety ${\cal{F}}l_{n}$. This algebra can be treated as a
natural generalization of the (multiparameter) quantum cohomology ring
$QH^{*}({\cal{F}}l_{n})$, see Section 4.2. In a similar fashion as in the case
of the three-term relations algebras, for any subgraph $\Gamma\subset K_{n}$, one
(A.K.) can also define an algebra $6T^{(0)}(\Gamma)$ and a projection333We treat
this map as an algebraic version of the homomorphism which sends the curvature
of a Hermitian vector bundle over a smooth algebraic variety to its cohomology
class, as well as a splitting of the classical Yang–Baxter relations (that is,
six-term relations) into a pair of three-term relations.
$\displaystyle\text{Ch}\colon\ 6T^{(0)}(\Gamma)\longrightarrow
3T^{(0)}(\Gamma).$
Note that the subalgebra
${\cal{A}}(\Gamma):={\mathbb{Q}}[\theta_{1},\ldots,\theta_{n}]\subset
6T^{(0)}(\Gamma)^{ab}$ generated by the additive Dunkl elements
$\displaystyle\theta_{i}=\sum_{j\atop(ij)\in E(\Gamma)}u_{ij}$
is closely related to problems that have been studied in [118, 129], …, and [137]
in the case $\Gamma=K_{n}$, see Section 4.2.2. We want to draw the reader's
attention to the following problems related to the arithmetic Schubert444See
for example [137] and the literature quoted therein. and Grothendieck calculi:
1. (i)
Describe a (natural) quotient $6T^{\dagger}(\Gamma)$ of the algebra
$6T^{(0)}(\Gamma)$ such that the natural epimorphism ${\rm
pr}\colon{\mathbb{A}}(\Gamma)\longrightarrow{\cal{A}}(\Gamma)$ turns out to be
an isomorphism, where ${\mathbb{A}}(\Gamma)$ denotes the subalgebra of
$6T^{\dagger}(\Gamma)$ generated over ${\mathbb{Q}}$ by the additive Dunkl
elements.
2. (ii)
It is not difficult to see [72] that the multiplicative Dunkl elements
$\\{\Theta_{i}\\}_{1\leq i\leq n}$ also mutually commute in the algebra
$6T^{(0)}$, cf. Section 3.2. The problem we are interested in is to describe the
commutative subalgebras generated by the multiplicative Dunkl elements in the
algebras $6T^{\dagger}(\Gamma)$ and $6T^{(0)}(\Gamma)^{ab}$. In the latter
case one arrives at the $K$-theoretic version of the algebras studied in [118],
….
Yet another objective of our paper555This part of our paper had its origin in
the study/computation of relations among the additive and multiplicative Dunkl
elements in the quadratic algebras we are interested in, as well as the
author's attempts to construct a monomial basis in the algebra $3T_{n}^{(0)}$
and to find its Hilbert series for $n\geq 6$. As far as I am aware, these problems
are still widely open. is to describe several combinatorial properties of some
special elements in the associative quasi-classical Yang–Baxter algebras [72],
including, among others, the so-called Coxeter element and the longest element.
In the case of the Coxeter element we relate the corresponding reduced polynomials,
introduced in [133, Exercise 6.C5(c)] and independently in [72], cf. [70],
with the $\beta$-Grothendieck polynomials [42] for some special permutations
$\pi_{k}^{(n)}$. More generally, we identify the $\beta$-Grothendieck
polynomial $\mathfrak{G}_{\pi_{k}^{(n)}}^{(\beta)}(X_{n})$ with a certain
weighted sum running over the set of $k$-dissections of a convex
$(n+k+1)$-gon. In particular we show that the specialization
$\mathfrak{G}_{\pi_{k}^{(n)}}^{(\beta)}(1)$ of the $\beta$-Grothendieck
polynomial $\mathfrak{G}_{\pi_{k}^{(n)}}^{(\beta)}(X_{n})$ counts the number
of $k$-dissections of a convex $(n+k+1)$-gon according to the number of
diagonals involved. When the number of diagonals in a $k$-dissection is the
maximal possible (equal to $n(2k-1)-1$), we recover the well-known fact that
the number of $k$-triangulations of a convex $(n+k+1)$-gon is equal to the
value of a certain Catalan–Hankel determinant, see, e.g., [129]. In Section
5.4.2 we study multiparameter generalizations of reduced polynomials
associated with Coxeter elements.
We also show that for a certain $5$-parameter family of vexillary
permutations, the specialization $x_{i}=1$, $\forall\,i\geq 1$, of the
corresponding $\beta$-Schubert polynomials
${{\mathfrak{S}}}_{w}^{(\beta)}(X_{n})$ turns out to coincide either with
the Fuss–Narayana polynomials and their generalizations, or with a
$(q,\beta)$-deformation of the $\rm VSASM$ or $\rm CSTCPP$ numbers, see
Corollary 5.33B. As examples we show that
1. (a)
the reduced polynomial corresponding to a monomial $x_{12}^{n}x_{23}^{m}$
counts the number of $(n,m)$-Delannoy paths according to the number of
$NE$-steps, see Lemma 5.81;
2. (b)
if $\beta=0$, the reduced polynomial corresponding to the monomial
$(x_{12}x_{23})^{n}x_{34}^{k}$, $n\geq k$, counts the number of $n$ up, $n$
down permutations in the symmetric group ${\mathbb{S}}_{2n+k+1}$, see
Proposition 5.82; see also Conjecture 5.83.
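The count in item (a) is easy to experiment with: the generating polynomial of $(n,m)$-Delannoy paths by the number of NE-steps satisfies an obvious three-term recurrence. The following sketch (the variable $t$ marking NE-steps is our own bookkeeping device, not the notation of Lemma 5.81) computes it with sympy:

```python
import sympy as sp
from functools import lru_cache

t = sp.symbols('t')  # marks NE (diagonal) steps

@lru_cache(maxsize=None)
def delannoy(n, m):
    """Generating polynomial of (n, m)-Delannoy paths (steps E, N, NE), t marking NE-steps."""
    if n == 0 or m == 0:
        return sp.Integer(1)
    # last step was either E, N, or the diagonal NE step (weighted by t)
    return sp.expand(delannoy(n - 1, m) + delannoy(n, m - 1) + t * delannoy(n - 1, m - 1))

print(delannoy(2, 2))  # equals t**2 + 6*t + 6: one path with two NE-steps, six with one, six with none
```

At $t=1$ the polynomial specializes to the Delannoy number $D(n,m)$, e.g., $D(2,2)=13$ and $D(3,3)=63$.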
We also point out a conjectural connection between the sets of maximal
compatible sequences for the permutations $\sigma_{n,2n,2,0}$ and
$\sigma_{n,2n+1,2,0}$ on one side, and the sets ${\rm VSASM}(n)$ and
${\rm CSTCPP}(n)$ respectively on the other, see Comments 5.48 for
details. Finally, in Sections 5.1.1 and 5.4.1 we introduce and study a
multiparameter generalization of reduced polynomials considered in [133,
Exercise 6.C5(c)], as well as that of the Catalan, Narayana and (small)
Schröder numbers.
In the case of the longest element we relate the corresponding reduced
polynomial with the Ehrhart polynomial of the Chan–Robbins–Yuen polytope, see
Section 5.3. More generally, we relate the $(t,\beta)$-reduced polynomial
corresponding to the monomial
$\displaystyle\prod_{j=1}^{n-1}x_{j,j+1}^{a_{j}}\prod_{j=2}^{n-2}\left(\prod_{k=j+2}^{n}x_{jk}\right),\qquad
a_{j}\in\mathbb{Z}_{\geq 0},\qquad\forall\,j,$
with positive $t$-deformations of the Kostant partition function and of the
Ehrhart polynomials of certain flow polytopes, see Section 5.3.
In Section 5.4 we investigate reduced polynomials associated with certain
monomials in the algebra $({\widehat{{\rm ACYB}}})^{ab}_{n}(\beta)$, also known
as the Gelfand–Varchenko algebra [67, 72], and study their combinatorial
properties. Our main objective in Section 5.4.2 is to study reduced
polynomials for the Coxeter element treated in a certain multiparameter
deformation of the (noncommutative) quadratic algebra ${\widehat{{\rm
ACYB}}}_{n}(\alpha,\beta)$. Namely, to each dissection of a convex $(n+2)$-gon
we associate a certain weight and consider the generating function of all
dissections of the $(n+2)$-gon, each taken with its weight. One can show that
the reduced polynomial corresponding to the Coxeter element in the deformed
algebra is equal to that generating function. We show that certain
specializations of that reduced polynomial coincide, among others, with the
Grothendieck polynomials corresponding to the permutation $1\times
w_{0}^{(n-1)}\in\mathbb{S}_{n}$ and with the Lagrange inversion formula, and
also give rise to combinatorial (i.e., positive) multiparameter
deformations of the Catalan, Fuss–Catalan, Motzkin, Riordan and Fine numbers,
and of the Schröder numbers and Schröder trees. We expect (work in progress) similar
connections between the Schubert and Grothendieck polynomials associated with the
Richardson permutations $1^{k}\times w_{0}^{(n-k)}$, the $k$-dissections of a
convex $(n+k+1)$-gon investigated in the present paper, and the $k$-dimensional
Lagrange–Good inversion formula studied from a combinatorial point of view,
e.g., in [22, 50].
## 1 Introduction
The Dunkl operators were introduced in the late 1980s by Charles Dunkl [35, 36]
as a powerful means for the study of harmonic and orthogonal polynomials
related to finite Coxeter groups. In the present paper we do not need the
definition of the Dunkl operators for arbitrary (finite) Coxeter groups, see,
e.g., [35], but only for the special case of the symmetric group
${\mathbb{S}}_{n}$.
###### Definition 1.1.
Let $P_{n}=\mathbb{C}[x_{1},\ldots,x_{n}]$ be the ring of polynomials in
variables $x_{1},\ldots,x_{n}$. The type $A_{n-1}$ (additive) rational Dunkl
operators $D_{1},\ldots,D_{n}$ are the differential-difference operators of
the following form
$\displaystyle D_{i}=\lambda{\partial\over\partial
x_{i}}+\sum_{j\not=i}{1-s_{ij}\over x_{i}-x_{j}}.$ (1.1)
Here $s_{ij}$, $1\leq i<j\leq n$, denotes the exchange (or permutation)
operator, namely,
$\displaystyle
s_{ij}(f)(x_{1},\ldots,x_{i},\ldots,x_{j},\ldots,x_{n})=f(x_{1},\ldots,x_{j},\ldots,x_{i},\ldots,x_{n}),$
${\partial\over\partial x_{i}}$ stands for the derivative with respect to the
variable $x_{i}$, and $\lambda\in\mathbb{C}$ is a parameter.
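For the symmetric group case, the commutativity of the operators (1.1) (Theorem 1.2 below) can be verified symbolically for small $n$. A minimal sympy sketch, with an arbitrary sample value of $\lambda$ and our own helper names:

```python
import sympy as sp

n = 3
X = sp.symbols('x1:4')              # x1, x2, x3
lam = sp.Rational(5, 7)             # an arbitrary sample value of the parameter lambda

def dunkl(f, i):
    """Type A rational Dunkl operator D_i = lam*d/dx_i + sum_{j != i} (1 - s_ij)/(x_i - x_j)."""
    res = lam * sp.diff(f, X[i])
    for j in range(n):
        if j != i:
            sf = f.subs({X[i]: X[j], X[j]: X[i]}, simultaneous=True)
            # (1 - s_ij) f is antisymmetric in x_i, x_j, hence divisible by x_i - x_j
            res += sp.cancel((f - sf) / (X[i] - X[j]))
    return sp.expand(res)

f = X[0]**3 * X[1] + X[1]**2 * X[2]  # a sample polynomial
assert sp.expand(dunkl(dunkl(f, 0), 1) - dunkl(dunkl(f, 1), 0)) == 0  # D_1 D_2 = D_2 D_1 on f
```

The same check passes for any polynomial and any value of $\lambda$, in line with Theorem 1.2.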
The key property of the Dunkl operators is the following result.
###### Theorem 1.2 (C. Dunkl [35]).
For any finite Coxeter group $(W,S)$, where $S=\\{s_{1},\ldots,s_{l}\\}$
denotes the set of simple reflections, the Dunkl operators $D_{i}:=D_{s_{i}}$
and $D_{j}:=D_{s_{j}}$ pairwise commute: $D_{i}D_{j}=D_{j}D_{i}$, $1\leq
i,j\leq l$.
Another fundamental property of the Dunkl operators which finds a wide variety
of applications in the theory of integrable systems, see, e.g., [56], is the
following statement: the operator
$\displaystyle\sum_{i=1}^{l}(D_{i})^{2}$
“essentially” coincides with the Hamiltonian of the rational Calogero–Moser
model related to the finite Coxeter group $(W,S)$.
###### Definition 1.3.
The truncated (additive) Dunkl operator (or the Dunkl operator at critical level),
denoted by ${\cal D}_{i}$, $i=1,\ldots,l$, is the operator of the form (1.1)
with parameter $\lambda=0$.
For example, the type $A_{n-1}$ rational truncated Dunkl operator has the
following form
$\displaystyle{\cal D}_{i}=\sum_{j\not=i}{1-s_{ij}\over x_{i}-x_{j}}.$
Clearly the truncated Dunkl operators generate a commutative algebra. An
important property of the truncated Dunkl operators is the following result
discovered and proved by C. Dunkl [36]; see also [8] for a more recent proof.
###### Theorem 1.4 (C. Dunkl [36], Yu. Bazlov [8]).
For any finite Coxeter group $(W,S)$ the algebra over $\mathbb{Q}$ generated
by the truncated Dunkl operators ${\cal D}_{1},\ldots,{\cal D}_{l}$ is
canonically isomorphic to the coinvariant algebra ${\cal{A}}_{W}$ of the
Coxeter group $(W,S)$.
Recall that for a finite crystallographic Coxeter group $(W,S)$ the
coinvariant algebra ${\cal{A}}_{W}$ is isomorphic to the cohomology ring
$H^{*}(G/B,\mathbb{Q})$ of the flag variety $G/B$, where $G$ stands for the
Lie group corresponding to the crystallographic Coxeter group $(W,S)$ we
started with.
###### Example 1.5.
In the case when $W={\mathbb{S}}_{n}$ is the symmetric group, Theorem 1.4
states that the algebra over $\mathbb{Q}$ generated by the truncated Dunkl
operators ${\cal D}_{i}=\sum\limits_{j\not=i}{1-s_{ij}\over x_{i}-x_{j}}$,
$i=1,\ldots,n$, is canonically isomorphic to the cohomology ring of the full
flag variety ${\cal F}l_{n}$ of type $A_{n-1}$
$\displaystyle\mathbb{Q}[{\cal D}_{1},\ldots,{\cal
D}_{n}]\cong\mathbb{Q}[x_{1},\ldots,x_{n}]/J_{n},$ (1.2)
where $J_{n}$ denotes the ideal generated by the elementary symmetric
polynomials $\\{e_{k}(X_{n}),\,1\leq k\leq n\\}$.
Recall that the elementary symmetric polynomials $e_{i}(X_{n})$,
$i=1,\ldots,n$, are defined through the generating function
$\displaystyle 1+\sum_{i=1}^{n}e_{i}(X_{n})t^{i}=\prod_{i=1}^{n}(1+tx_{i}),$
where we set $X_{n}:=(x_{1},\ldots,x_{n})$. It is well-known that in the case
$W=\mathbb{S}_{n}$, the isomorphism (1.2) can be defined over the ring of
integers $\mathbb{Z}$.
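Example 1.5 can be tested symbolically for small $n$: the truncated operators ${\cal D}_{i}$ pairwise commute, and every elementary symmetric polynomial $e_{k}({\cal D}_{1},\ldots,{\cal D}_{n})$ acts as zero on polynomials, in agreement with the ideal $J_{n}$ in (1.2). A minimal sketch in sympy (helper names are ours):

```python
import sympy as sp
from itertools import combinations

n = 3
X = sp.symbols('x1:%d' % (n + 1))   # x1, x2, x3

def trunc_dunkl(f, i):
    """Truncated Dunkl operator: D_i f = sum_{j != i} (f - s_ij f)/(x_i - x_j)."""
    res = sp.Integer(0)
    for j in range(n):
        if j != i:
            sf = f.subs({X[i]: X[j], X[j]: X[i]}, simultaneous=True)
            res += sp.cancel((f - sf) / (X[i] - X[j]))
    return sp.expand(res)

def apply_word(f, word):
    """Apply the product of commuting operators D_{i} over the index word."""
    for i in word:
        f = trunc_dunkl(f, i)
    return f

f = X[0]**2 * X[1] + 3 * X[2]**3    # a sample polynomial

# pairwise commutativity
assert sp.expand(apply_word(f, (0, 1)) - apply_word(f, (1, 0))) == 0

# e_k(D_1, ..., D_n) annihilates polynomials, matching the ideal J_n in (1.2)
for k in range(1, n + 1):
    ek_f = sum(apply_word(f, w) for w in combinations(range(n), k))
    assert sp.expand(ek_f) == 0
```

The vanishing of $e_{k}({\cal D})$ is exactly the statement that the relations of the coinvariant algebra hold among the operators.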
Theorem 1.4 by C. Dunkl has raised a number of natural questions:
1. (A)
What is the algebra generated by the truncated
* •
trigonometric,
* •
elliptic,
* •
super, matrix, …,
1. (a)
additive Dunkl operators?
2. (b)
Ruijsenaars–Schneider–Macdonald operators?
3. (c)
Gaudin operators?
2. (B)
Describe the commutative subalgebra generated by the Jucys–Murphy elements in
* •
the group ring of the symmetric group;
* •
the Hecke algebra;
* •
the Brauer algebra, ${\rm BMW}$ algebra, ….
3. (C)
Does there exist an analogue of Theorem 1.4 for
* •
classical and quantum equivariant cohomology and equivariant $K$-theory rings
of the partial flag varieties?
* •
cohomology and $K$-theory rings of affine flag varieties?
* •
diagonal coinvariant algebras of finite Coxeter groups?
* •
complex reflection groups?
The present paper is an extended introduction to a few items from Section 5 of
[72].
The main purpose of my paper “On some quadratic algebras, II” is to give some
partial answers to the above questions, basically in the case of the symmetric
group ${\mathbb{S}}_{n}$.
The purpose of the present paper is to draw attention to an interesting class
of nonhomogeneous quadratic algebras closely connected (still mysteriously!)
with different branches of Mathematics such as classical and quantum Schubert
and Grothendieck calculi, low-dimensional topology, classical, basic and
elliptic hypergeometric functions, algebraic combinatorics and graph theory,
integrable systems, etc.
What we try to explain in [72] is that, upon passing to a suitable
representation of the quadratic algebra in question, the subjects mentioned
above are a manifestation of certain general properties of that quadratic
algebra.
From this point of view, we treat the commutative subalgebra generated (over a
universal Lazard ring ${\mathbb{L}}_{n}$ [88]) by the additive (resp.
multiplicative) truncated Dunkl elements in the algebra $3T_{n}(\beta)$, see
Definition 3.1, as universal cohomology (resp. universal $K$-theory) ring of
the complete flag variety ${\cal F}l_{n}$. The classical or quantum cohomology
(resp. the classical or quantum $K$-theory) rings of the flag variety ${\cal
F}l_{n}$ are certain quotients of that universal ring.
For example, in [74] we have computed relations among the (truncated) Dunkl
elements $\\{\theta_{i},\,i=1,\ldots,n\\}$ in the elliptic representation of
the algebra $3T_{n}(\beta=0)$. We expect that the commutative subalgebra
obtained is isomorphic to the elliptic cohomology ring (not yet defined, but see
[48, 52]) of the flag variety ${\cal F}l_{n}$.
Another example from [72]. Consider the algebra $3T_{n}(\beta=0)$. One can
prove [72] the following identities in the algebra $3T_{n}(\beta=0)$:
1. (A)
summation formula
$\displaystyle\sum_{j=1}^{n-1}\left(\prod_{b=j+1}^{n-1}u_{b,b+1}\right)u_{1,n}\left(\prod_{b=1}^{j-1}u_{b,b+1}\right)=\prod_{a=1}^{n-1}u_{a,a+1};$
2. (B)
duality transformation formula, let $m\leq n$, then
$\displaystyle\sum_{j=m}^{n-1}\left(\prod_{b=j+1}^{n-1}u_{b,b+1}\right)\left[\prod_{a=1}^{m-1}u_{a,a+n-1}u_{a,a+n}\right]u_{m,m+n-1}\left(\prod_{b=m}^{j-1}u_{b,b+1}\right)$
$\displaystyle\qquad\quad{}+\sum_{j=2}^{m}\left[\prod_{a=j}^{m-1}u_{a,a+n-1}u_{a,a+n}\right]u_{m,n+m-1}\left(\prod_{b=m}^{n-1}u_{b,b+1}\right)u_{1,n}$
$\displaystyle\qquad{}=\sum_{j=1}^{m}\left[\prod_{a=1}^{m-j}u_{a,a+n}u_{a+1,a+n}\right]\left(\prod_{b=m}^{n-1}u_{b,b+1}\right)\left[\prod_{a=1}^{j-1}u_{a,a+n-1}u_{a,a+n}\right].$
One can check that, upon passing to the elliptic representation of the algebra
$3T_{n}(\beta=0)$ (see Section 3.1 or [74] for the definition of the elliptic
representation), the above identities (A) and (B) become, respectively,
the summation formula and the $N=1$ case of the duality
transformation formula for multiple elliptic hypergeometric series (of type
$A_{n-1}$), see, e.g., [63] or Appendix A.6 for the explicit forms of the
latter. After passing to the so-called Fay representation [72], the identities
(A) and (B) become, respectively, the summation formula and the duality
transformation formula for the Riemann theta functions of genus $g>0$ [72].
These formulas in the case $g\geq 2$ seem to be new.
It is worth mentioning that relation (A) above can be treated as a
“noncommutative analogue” of the well-known recurrence relation for the Catalan
numbers. The study of “descendant relations” in the quadratic algebras in
question was originally motivated by the author's attempts to construct a
monomial basis in the algebra $3T_{n}^{(0)}$ and to compute ${\rm
Hilb}(3T_{n}^{(0)},t)$ for $n\geq 6$. These problems are still widely open,
but they led the author to the discovery of several interesting connections with
* •
classical and quantum Schubert and Grothendieck calculi,
* •
combinatorics of reduced decomposition of some special elements in the
symmetric group,
* •
combinatorics of generalized Chan–Robbins–Yuen polytopes,
* •
relations among the Dunkl and Gaudin elements,
* •
computation of Tutte and chromatic polynomials of the weighted complete
multipartite graphs, etc.
A few words about the content of the present paper. Example 1.5 can be viewed
as an illustration of the main problems treated in Sections 2 and 3 of
the present paper, namely the following.
* •
Let $\\{u_{ij},\,1\leq i,j\leq n\\}$ be a set of generators of a certain
algebra over a commutative ring $K$. The first problem we are interested in is
to describe “a natural set of relations” among the generators
$\\{u_{ij}\\}_{1\leq i,j\leq n}$ which imply the pairwise commutativity of the
dynamical Dunkl elements
$\displaystyle\theta_{i}=\theta_{i}^{(n)}:=\sum\limits_{j=1}^{n}u_{ij},\qquad
1\leq i\leq n.$
* •
Should this be the case, we are interested in describing the algebra
generated by “the integrals of motion”, i.e., the quotient of the
algebra of polynomials $K[y_{1},\ldots,y_{n}]$ by the two-sided ideal
${\cal{J}}_{n}$ generated by the non-zero polynomials $F(y_{1},\ldots,y_{n})$ such
that $F(\theta_{1},\ldots,\theta_{n})=0$ in the algebra over the ring $K$
generated by the elements $\\{u_{ij}\\}_{1\leq i,j\leq n}$.
* •
We are looking for a set of additional relations which imply that the
elementary symmetric polynomials $e_{k}(Y_{n})$, $1\leq k\leq n$, belong to
the set of integrals of motion. In other words, the values of the elementary
symmetric polynomials $e_{k}(y_{1},\ldots,y_{n})$, $1\leq k\leq n$, on the
Dunkl elements $\theta_{1}^{(n)},\ldots,\theta_{n}^{(n)}$ do not depend on the
variables $\\{u_{ij},\,1\leq i\not=j\leq n\\}$. If so, one can define a
deformation of the elementary symmetric polynomials and, using it together with the
Jacobi–Trudi formula, define deformed Schur functions, for example. We try
to realize this program in Sections 2 and 3.
In Section 2, see Definition 2.3, we introduce the so-called dynamical
classical Yang–Baxter algebra as “a natural quadratic algebra” in which the
Dunkl elements form a pairwise commuting family. It is the study of the
algebra generated by the (truncated) Dunkl elements that is the main objective
of our investigation in [72] and the present paper. In Section 2.1 we describe
a few representations of the dynamical classical Yang–Baxter algebra ${\rm
DCYB}_{n}$ related to
* •
quantum cohomology $QH^{*}({\cal{F}}l_{n})$ of the complete flag variety
${\cal{F}}l_{n}$, cf. [41];
* •
quantum equivariant cohomology $QH^{*}_{T^{n}\times
C^{*}}(T^{*}{\cal{F}}l_{n})$ of the cotangent bundle $T^{*}{\cal{F}}l_{n}$ to
the complete flag variety, cf. [54];
* •
Dunkl–Gaudin and Dunkl–Uglov representations, cf. [108, 138].
In Section 3, see Definition 3.1, we introduce the algebra $3HT_{n}(\beta)$,
which seems to be the most general (noncommutative) deformation of the (even)
Orlik–Solomon algebra of type $A_{n-1}$, such that it is still possible to
describe relations among the Dunkl elements, see Theorem 3.8. As an
application we describe explicitly a set of relations among the (additive)
Gaudin/Dunkl elements, cf. [108]. It should be stressed here that we
treat the Gaudin elements/operators (either additive or multiplicative) as
images of the universal Dunkl elements/operators (additive or multiplicative)
in the Gaudin representation of the algebra $3HT_{n}(0)$. There are several
other important representations of that algebra, for example, the
Calogero–Moser, Bruhat, Buchstaber–Felder–Veselov (elliptic), Fay trisecant
($\tau$-functions), adjoint, and so on, considered (among others) in [72].
Specific properties of a chosen representation666For example, in the cases of
either Calogero–Moser or Bruhat representations one has an additional
constraint, namely, $u_{ij}^{2}=0$ for all $i\not=j$. In the case of Gaudin
representation one has an additional constraint $u_{ij}^{2}=p_{ij}^{2}$, where
the (quantum) parameters $\\{p_{ij}={1\over x_{i}-x_{j}},\,i\not=j\\}$,
satisfy simultaneously the Arnold and Plücker relations, see Section 2, II.
Therefore, the (small) quantum cohomology ring of the type $A_{n-1}$ full flag
variety ${{\cal F}l}_{n}$ and the Bethe subalgebra(s) (i.e., the subalgebra
generated by Gaudin elements in the algebra $3HT_{n}(0)$) correspond to
different specializations of “quantum parameters” $\\{q_{ij}:=u_{ij}^{2}\\}$
of the universal cohomology ring (i.e., the subalgebra/ring in $3HT_{n}(0)$
generated by (universal) Dunkl elements). For more details and examples, see
Section 2.1 and [72]. (e.g., the Gaudin representation) imply some additional
relations among the images of the universal Dunkl elements (e.g., the Gaudin
elements) that remain to be unveiled.
We start Section 3 with the definition of the algebra $3T_{n}(\beta)$ and its “Hecke”
$3HT_{n}(\beta)$ and “elliptic” $3MT_{n}(\beta)$ quotients. In particular we
define an elliptic representation of the algebra $3T_{n}(0)$ [74], and show
how the well-known elliptic solutions of the quantum Yang–Baxter equation due
to A. Belavin and V. Drinfeld, see, e.g., [9], S. Shibukawa and K. Ueno [130],
and G. Felder and V. Pasquier [40], can be plugged into our construction, see
Section 3.1. At the end of Section 3.1.1 we point out a mysterious (at
least to the author) appearance of the Euler numbers and “traces” of the
Brauer algebra in the equivariant Pieri rules for the algebra
$3TM_{n}(\beta,{\boldsymbol{q}},\psi)$ stated in Theorem 3.8.
In Section 3.2 we introduce a multiplicative analogue of the Dunkl elements
$\\{\Theta_{j}\in 3T_{n}(\beta)$, $1\leq j\leq n\\}$ and describe the
commutative subalgebra in the algebra $3T_{n}(\beta)$ generated by
multiplicative Dunkl elements [76]. The latter commutative subalgebra turns
out to be isomorphic to the quantum equivariant $K$-theory of the complete
flag variety ${\cal{F}}l_{n}$ [76].
In Section 3.3 we describe relations among the truncated Dunkl–Gaudin
elements. In this case the quantum parameters $q_{ij}=p_{ij}^{2}$, where
parameters $\\{p_{ij}=(z_{i}-z_{j})^{-1},\,1\leq i<j\leq n\\}$ satisfy
both the Arnold and the Plücker relations. This observation has made it possible to
describe a set of additional rational relations among the Dunkl–Gaudin
elements, cf. [108].
In Section 3.4 we introduce an equivariant version of multiplicative Dunkl
elements, called shifted Dunkl elements in our paper, and describe (some)
relations among the latter. This result is a generalization of that obtained
in Section 3.1 and [76]. However, we do not know any geometric interpretation of
the commutative subalgebra generated by the shifted Dunkl elements.
In Section 4.1 for any subgraph $\Gamma\subset K_{n}$ of the complete graph
$K_{n}$ we introduce777Independently the algebra $3T_{n}^{(0)}(\Gamma)$ has
been studied in [16], where the reader can find some examples and conjectures.
[70, 72], the algebras $3T_{n}(\Gamma)$ and $3T_{n}^{(0)}(\Gamma)$, which can be
seen as analogues of the algebras $3T_{n}$ and $3T_{n}^{(0)}$,
respectively.888To avoid confusion, it must be emphasized that the defining
relations for the algebras $3T_{n}(\Gamma)$ and $3T_{n}^{(0)}(\Gamma)$ may have
more than three terms.
We want to point out in the Introduction, cf. footnote 1, that an analog of
the algebras $3T_{n}$ and $3T_{n}^{(\beta)}$, $3HT_{n}$, etc. treated in the
present paper, can be defined for any (oriented or not) matroid $\cal{M}$. We
denote these algebras as $3T({\cal{M}})$ and $3T^{(\beta)}({\cal{M}})$. One
can show (A.K.) that the abelianization of the algebra
$3T^{(\beta)}({\cal{M}})$, denoted by ${3T^{(\beta)}({\cal{M}})}^{ab}$, is
isomorphic to the Gelfand–Varchenko algebra corresponding to a matroid
$\cal{M}$, whereas the algebra ${3T^{(\beta=0)}({\cal{M}})}^{ab}$ is
isomorphic to the (even) Orlik–Solomon algebra ${\rm OS}^{+}({\cal{M}})$ of a
matroid $\cal{M}$.999For a definition and basic properties of the
Orlik–Solomon algebra corresponding to a matroid, see, e.g., [49, 65]. We
consider and treat the algebras $3T({\cal{M}})$, $3HT({\cal{M}})$, …, as
equivariant noncommutative (or quantum) versions of the (even) Orlik–Solomon
algebras associated with matroids (including hyperplane, graphic, …
arrangements). However, a meaning of a quantum deformation of the (even or odd)
Orlik–Solomon algebra suggested in the present paper is still missing, even for the
braid arrangement of type $A_{n}$. Generalizations of the Gelfand–Varchenko
algebra have been suggested and studied in [67, 72] and in the present paper
under the name quasi-associative Yang–Baxter algebra, see Section 5.
In the present paper we basically study the abelian quotient of the algebra
$3T_{n}^{(0)}(\Gamma)$, where the graph $\Gamma$ has no loops or multiple edges,
since we expect some applications of our approach to the theory of chromatic
polynomials of planar graphs, in particular to the complete multipartite
graphs $K_{n_{1},\ldots,n_{r}}$ and the grid graphs $G_{m,n}$.101010See
http://reference.wolfram.com/language/ref/GridGraph.html for a definition of
grid graph $G_{m,n}$. Our main results hold for the complete multipartite,
cyclic and line graphs. In particular we compute their chromatic and Tutte
polynomials, see Proposition 4.19 and Theorem 4.24. As a byproduct we compute
the Tutte polynomial of the ${\boldsymbol{\ell}}$-weighted complete
multipartite graph $K_{n_{1},\ldots,n_{r}}^{({\boldsymbol{\ell}})}$, where
${\boldsymbol{\ell}}=\\{\ell_{ij}\\}_{1\leq i<j\leq r}$ is a collection of
weights, i.e., a set of non-negative integers.
More generally, for a set of variables $\\{\\{q_{ij}\\}_{1\leq i<j\leq
n},x,y\\}$ we define a universal Tutte polynomial
$T_{n}(\\{q_{ij}\\},x,y)\in\mathbb{Z}[q_{ij}][x,y]$ such that for any
collection of non-negative integers $\\{m_{ij}\\}_{1\leq i<j\leq n}$ and any
subgraph $\Gamma\subset K_{n}^{({\boldsymbol{m}})}$ of the complete graph
$K_{n}$ in which each edge $(i,j)$ comes with multiplicity $m_{ij}$, the
specialization
specialization
$\displaystyle q_{ij}\longrightarrow 0\quad\text{if~{}edge}\ \
(i,j)\notin\Gamma,\qquad
q_{ij}\longrightarrow[m_{ij}]_{y}:=\frac{y^{m_{ij}}-1}{y-1}\quad\text{if~{}edge}\
\ (i,j)\in\Gamma$
of the universal Tutte polynomial $T_{n}(\\{q_{ij}\\},x,y)$ is equal to the
Tutte polynomial of the graph $\Gamma$ multiplied by the factor
$(x-1)^{\kappa(\Gamma)}$:
$\displaystyle(x-1)^{\kappa(\Gamma)}{\rm
Tutte}(\Gamma,x,y):=T_{n}(\\{q_{ij}\\},x,y)\bigg{|}_{q_{ij}=0\,\text{if}\,(i,j)\notin\Gamma\atop
q_{ij}={[m_{ij}]}_{y}\,\text{if}\,(i,j)\in\Gamma}.$
Hereafter $\kappa(\Gamma)$ denotes the number of connected components of
a graph $\Gamma$. In other words, one can treat the universal Tutte polynomial
$T_{n}(\\{q_{ij}\\},x,y)$ as a “reproducing kernel” for the Tutte polynomials
of all (loop-less) graphs with at most $n$ vertices.
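The ingredients of this specialization are easy to experiment with. The sketch below (helper names are ours, not notation from the paper) computes the Tutte polynomial of a multigraph by deletion–contraction, implements the quantum integer $[m]_{y}$, and, as a sanity check, recovers the chromatic polynomial of $K_{3}$ via the standard relation $P(\Gamma,\lambda)=(-1)^{|V|-\kappa(\Gamma)}\lambda^{\kappa(\Gamma)}\,{\rm Tutte}(\Gamma;1-\lambda,0)$:

```python
import sympy as sp

x, y = sp.symbols('x y')

def components(vertices, edges):
    """Number of connected components via union-find."""
    parent = {v: v for v in vertices}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in vertices})

def tutte(vertices, edges):
    """Tutte polynomial of a multigraph by deletion-contraction (edges: list of pairs)."""
    if not edges:
        return sp.Integer(1)
    u, v = edges[0]
    rest = edges[1:]
    if u == v:                                                     # loop
        return sp.expand(y * tutte(vertices, rest))
    contracted = [(a if a != v else u, b if b != v else u) for a, b in rest]
    if components(vertices, rest) > components(vertices, edges):   # bridge
        return sp.expand(x * tutte(vertices - {v}, contracted))
    return sp.expand(tutte(vertices, rest) + tutte(vertices - {v}, contracted))

def qint(m):
    """Quantum integer [m]_y = (y^m - 1)/(y - 1), the weight of an m-fold edge."""
    return sp.expand(sum(y**i for i in range(m)))

T3 = tutte({1, 2, 3}, [(1, 2), (1, 3), (2, 3)])   # equals x**2 + x + y for K_3

# sanity check: chromatic polynomial of K_3 from its Tutte polynomial
lam = sp.symbols('lam')
chrom = sp.expand((-1)**(3 - 1) * lam * T3.subs({x: 1 - lam, y: 0}))
assert chrom == sp.expand(lam * (lam - 1) * (lam - 2))
```

Parallel edges are handled by listing an edge several times; for instance, the double edge on two vertices gives the Tutte polynomial $x+y$, consistent with the $[m]_{y}$-weighting of multiple edges above.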
We also state Conjecture 35 that for any loopless graph $\Gamma$ (possibly
with multiple edges) the algebra ${3T_{|\Gamma|}^{(0)}(\Gamma)}^{ab}$ is
isomorphic to the even Orlik–Solomon algebra ${\rm
OS}^{+}({\cal{A}}_{\Gamma})$ of the graphic arrangement associated with the graph
$\Gamma$ in question.111111For simple graphs, i.e., graphs without loops and multiple
edges, this conjecture has been proved in [89].
At the end we emphasize that the case of the complete graph $\Gamma=K_{n}$
reproduces the results of the present paper and those of [72], i.e., the case
of the full flag variety ${\cal F}l_{n}$. The case of the complete
multipartite graph $\Gamma=K_{n_{1},\ldots,n_{r}}$ extends the analogues of the
results stated in the present paper for the full flag variety ${\cal
F}l_{n}$ to the case of the partial flag variety ${\cal
F}l_{n_{1},\ldots,n_{r}}$, see [72] for details.
In Section 4.1.4 we sketch how to generalize our constructions and some of our
results to the case of the Lie algebras of classical types.121212One can define
an analogue of the algebra $3T_{n}^{(0)}$ for the root systems of types $BC_{n}$ and
$C_{n}^{\vee}C_{n}$ as well, but we omit these cases in the
present paper.
In Section 4.2 we briefly overview our results concerning yet another
interesting family of quadratic algebras, namely the six-term relations
algebras $6T_{n}$, $6T_{n}^{(0)}$ and related ones. These algebras also
contain a distinguished set of mutually commuting elements called Dunkl
elements $\\{\theta_{i},\,i=1,\ldots,n\\}$ given by
$\theta_{i}=\sum\limits_{j\not=i}r_{ij}$, see Definition 4.48.
In Section 4.2.2 we introduce and study the algebra $6T_{n}^{\bigstar}$ in
greater detail. In particular we introduce a “quantum deformation” of the
algebra generated by the curvature of $2$-forms of the Hermitian linear
bundles over the flag variety ${\cal{F}}l_{n}$, cf. [118].
In Section 4.2.3 we state our results concerning the classical Yang–Baxter
algebra ${\rm CYB}_{n}$ and the $6$-term relation algebra $6T_{n}$. In
particular we give formulas for the Hilbert series of these algebras. These
formulas have been obtained independently in [7]. The paper just mentioned
contains a description of a basis of the algebra $6T_{n}$, and much more.
In Section 4.2.4 we introduce a super analog of the algebra $6T_{n}$, denoted
by $6T_{n,m}$, and compute its Hilbert series.
Finally, in Section 4.3 we introduce the extended nil-three-term relations algebra
${3\mathfrak{T}}_{n}$ and describe a subalgebra inside it which is
isomorphic to the double affine Hecke algebra of type $A_{n-1}$, cf. [24].
In Section 5 we describe several combinatorial properties of some special
elements in the associative quasi-classical Yang–Baxter algebra131313The
algebra $\widehat{{\rm ACYB}}_{n}$ can be treated as “one-half” of the algebra
$3T_{n}(\beta)$. It appears that the basic relations among the Dunkl elements,
which do not mutually commute anymore, are still valid, see Lemma 5.3.,
denoted by $\widehat{{\rm ACYB}}_{n}$. The main results in that direction were
motivated by, and obtained as a by-product of, the study of the
structure of the algebra $3HT_{n}(\beta)$. More specifically, the main results
of Section 5 were obtained in the course of “hunting for descendant relations”
in the algebra mentioned, which is an important problem to be solved in order to
construct a basis in the nil-quotient algebra $3T_{n}^{(0)}$. This problem is
still widely open.
The results of Section 5.1, see Proposition 5.4, items (1)–(5), are more or
less well known among specialists in the subject, while those of item
(6) seem to be new. Namely, we show that the polynomial $Q_{n}(x_{ij}=t_{i})$
from [133, Exercise 6.C8(c)], essentially coincides with the
$\beta$-deformation [42] of the Lascoux–Schützenberger Grothendieck polynomial
[86] for some particular permutation. The results of Proposition 5.4(6) point
to a deep connection between reduced forms of monomials in the algebra
$\widehat{{\rm ACYB}}_{n}$ and the Schubert and Grothendieck calculi. This
observation was the starting point for the study of some combinatorial
properties of certain specializations of the Schubert, the
$\beta$-Grothendieck [43] and the double $\beta$-Grothendieck polynomials in
Section 5.2. One of the main results of Section 5.2 can be stated as follows.
###### Theorem 1.6.
1. $(1)$
Let $w\in\mathbb{S}_{n}$ be a permutation, consider the specialization
$x_{1}:=q$, $x_{i}=1$, $\forall\,i\geq 2$, of the $\beta$-Grothendieck
polynomial $\mathfrak{G}_{w}^{(\beta)}(X_{n})$. Then
$\displaystyle{\cal{R}}_{w}(q,\beta+1):=\mathfrak{G}_{w}^{(\beta)}(x_{1}=q,\,x_{i}=1,\,\forall\,i\geq
2)\in\mathbb{N}[q,1+\beta].$
In other words, the polynomial ${\cal{R}}_{w}(q,\beta)$ has non-negative
integer coefficients. (For a more general result see Appendix A.1,
Corollary A.7.)
For later use we define the polynomials
$\displaystyle\mathfrak{R}_{w}(q,\beta):=q^{1-w(1)}{\cal{R}}_{w}(q,\beta).$
2. $(2)$
Let $w\in\mathbb{S}_{n}$ be a permutation, consider the specialization
$x_{i}:=q$, $y_{i}=t$, $\forall\,i\geq 1$, of the double $\beta$-Grothendieck
polynomial $\mathfrak{G}_{w}^{(\beta)}(X_{n},Y_{n})$. Then
$\displaystyle\mathfrak{G}_{w}^{(\beta-1)}(x_{i}:=q,\,y_{i}:=t,\,\forall\,i\geq
1)\in\mathbb{N}[q,t,\beta].$
3. $(3)$
Let $w$ be a permutation, then
$\displaystyle\mathfrak{R}_{w}(1,\beta)=\mathfrak{R}_{1\times w}(0,\beta).$
Note that ${\cal{R}}_{w}(1,\beta)={\cal{R}}_{w^{-1}}(1,\beta)$, but
${\cal{R}}_{w}(t,\beta)\not={\cal{R}}_{w^{-1}}(t,\beta)$, in general.
For the reader's convenience we collect some basic definitions and results
concerning the $\beta$-Grothendieck polynomials in Appendix A.1.
Let us observe that $\mathfrak{R}_{w}(1,1)=\mathfrak{S}_{w}(1)$, where
$\mathfrak{S}_{w}(1)$ denotes the specialization $x_{i}:=1$, $\forall\,i\geq
1$, of the Schubert polynomial $\mathfrak{S}_{w}(X_{n})$ corresponding to
permutation $w$. Therefore, $\mathfrak{R}_{w}(1,1)$ is equal to the number of
compatible sequences [13] (or pipe dreams, see, e.g., [129]) corresponding to
permutation $w$.
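These specializations can be checked by machine. The sketch below (assuming sympy, and the standard recursion for $\beta$-Grothendieck polynomials by isobaric divided differences $\pi_{i}f=\partial_{i}((1+\beta x_{i+1})f)$, starting from $\mathfrak{G}_{w_{0}}^{(\beta)}=x_{1}^{n-1}x_{2}^{n-2}\cdots$; setting $\beta=0$ recovers Schubert polynomials) computes $\mathfrak{G}_{w}^{(\beta)}$ for small permutations; the function name `grothendieck` is ours.

```python
# Sketch (sympy assumed): beta-Grothendieck polynomials via the recursion
# G_{w0} = x1^{n-1} x2^{n-2} ... and  G_w = pi_i G_{w s_i}  at an ascent i,
# where pi_i f = d_i((1 + beta*x_{i+1}) f) is the isobaric divided difference.
# Setting beta = 0 recovers the Schubert polynomials.
import sympy as sp

beta = sp.Symbol('beta')

def grothendieck(w):
    """beta-Grothendieck polynomial of a permutation w, given as a 1-based tuple."""
    n = len(w)
    xs = sp.symbols(f'x1:{n + 1}')
    w0 = tuple(range(n, 0, -1))           # the longest element

    def d(f, i):                          # divided difference in x_i, x_{i+1} (1-based i)
        g = f.subs({xs[i - 1]: xs[i], xs[i]: xs[i - 1]}, simultaneous=True)
        return sp.cancel((f - g) / (xs[i - 1] - xs[i]))

    def rec(v):
        if v == w0:
            return sp.prod([xs[i] ** (n - 1 - i) for i in range(n)])
        for i in range(n - 1):
            if v[i] < v[i + 1]:           # ascent: swapping gives a longer permutation
                u = list(v); u[i], u[i + 1] = u[i + 1], u[i]
                return sp.expand(d((1 + beta * xs[i + 1]) * rec(tuple(u)), i + 1))

    return rec(tuple(w))
```

For instance, $w=(1,3,2)$ gives $x_{1}+x_{2}+\beta x_{1}x_{2}$; the specialization $x_{1}=q$, $x_{2}=1$ yields $1+q(1+\beta)\in\mathbb{N}[q,1+\beta]$, in agreement with Theorem 1.6(1), while $x_{i}=1$, $\beta=0$ returns the number of pipe dreams of $w$.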
###### Problem 1.7.
Let $w\in\mathbb{S}_{n}$ be a permutation and $l:=\ell(w)$ be its length.
Denote by ${\rm CS}(w)=\\{{\boldsymbol{a}}=(a_{1}\leq a_{2}\leq\cdots\leq
a_{l})\in\mathbb{N}^{l}\\}$ the set of compatible sequences [13] corresponding
to permutation $w$.
* •
Define a statistic $r({\boldsymbol{a}})$ on the set of all compatible sequences
${\rm CS}_{n}:=\coprod\limits_{{w\in\mathbb{S}_{n}}}{\rm CS}(w)$ in such a way
that
$\displaystyle\sum_{{\boldsymbol{a}}\in{\rm
CS}(w)}q^{a_{1}}\beta^{r({\boldsymbol{a}})}={\cal{R}}_{w}(q,\beta).$
* •
Find a geometric interpretation, and investigate combinatorial and
algebro-geometric properties, of the polynomials
$\mathfrak{S}_{w}^{(\beta)}(X_{n})$, where for a permutation
$w\in\mathbb{S}_{n}$ we denote by $\mathfrak{S}_{w}^{(\beta)}(X_{n})$ the
$\beta$-Schubert polynomial defined as follows
$\displaystyle\mathfrak{S}_{w}^{(\beta)}(X_{n})=\sum_{{\boldsymbol{a}}\in{\rm
CS}(w)}\beta^{r({\boldsymbol{a}})}\prod_{i=1}^{l:=\ell(w)}x_{a_{i}}.$
We expect that the polynomial $\mathfrak{S}_{w}^{(\beta)}(1)$ coincides with
the Hilbert polynomial of a certain graded commutative ring naturally
associated to the permutation $w$.
###### Remark 1.8.
It should be mentioned that, in general, the principal specialization
$\displaystyle\mathfrak{G}_{w}^{(\beta-1)}\big{(}x_{i}:=q^{i-1},\,\forall\,i\geq
1\big{)}$
of the $(\beta-1)$-Grothendieck polynomial may have negative coefficients.
Our main objective in Section 5.2 is to study the polynomials
$\mathfrak{R}_{w}(q,\beta)$ for a special class of permutations in the
symmetric group $\mathbb{S}_{\infty}$. Namely, in Section 5.2 we study some
combinatorial properties of polynomials
$\mathfrak{R}_{\varpi_{\lambda,\phi}}(q,\beta)$ for the five-parameter family
of vexillary permutations $\\{\varpi_{\lambda,\phi}\\}$ which have the shape
$\lambda:=\lambda_{n,p,b}=(p(n-i+1)+b$, $i=1,\ldots,n+1)$ and flag
$\phi:=\phi_{k,r}=(k+r(i-1),~{}i=1,\ldots,n+1)$.
This class of permutations is notable for many reasons, including that the
specialized value of the Schubert polynomial
$\mathfrak{S}_{\varpi_{\lambda,\phi}}(1)$ admits a nice product formula, see
Theorem 5.29. (One can also prove a product formula for the principal
specialization $\mathfrak{S}_{\varpi_{\lambda,\phi}}(x_{i}:=q^{i-1},\,\forall\,i\geq 1)$
of the corresponding Schubert polynomial; we do not need such a formula in the
present paper.) Moreover, we also describe some interesting connections of the
polynomials $\mathfrak{R}_{\varpi_{\lambda,\phi}}(q,\beta)$ with plane
partitions, the Fuss–Catalan numbers (we define the (generalized) Fuss–Catalan
numbers to be ${\rm FC}_{n}^{(p)}(b):={1+b\over
1+b+(n-1)p}{np+b\choose n}$; their connection with the $p$-ballot numbers
${\rm Bal}_{p}(m,n):={n-mp+1\over n+m+1}~{}{n+m+1\choose m}$ and the Rothe
numbers $R_{n}(a,b):={a\over a+bn}{a+bn\choose n}$ can be described as follows:
$\displaystyle{\rm FC}_{n}^{(p)}(b)=R_{n}(b+1,p)={\rm Bal}_{p-1}(n,(n-1)p+b)$)
and the Fuss–Narayana polynomials, $k$-triangulations and $k$-dissections of a
convex polygon, as well as a connection with two families of alternating sign
matrices (${\rm ASM}$). For example, let $\lambda=(b^{n})$ and $\phi=(k^{n})$
be rectangular shape partitions; then the polynomial
$\mathfrak{R}_{\varpi_{\lambda,\phi}}(q,\beta)$ defines a
$(q,\beta)$-deformation of the number of (ordinary) plane partitions (let
$\lambda$ be a partition; an ordinary plane partition (plane partition for
short) of shape $\lambda$ bounded by $d$ is a filling of the shape $\lambda$
by numbers from the set $\\{0,1,\ldots,d\\}$ in such a way that the numbers
along columns and rows are weakly decreasing; a reverse plane partition of
shape $\lambda$ bounded by $d$ is a filling of the shape $\lambda$ by numbers
from the set $\\{0,1,\ldots,d\\}$ in such a way that the numbers along columns
and rows are weakly increasing) sitting in the box $b\times k\times n$. It
seems an interesting problem to find an algebro-geometric interpretation of
the polynomials $\mathfrak{R}_{w}(q,\beta)$ in the general case.
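The Rothe and ballot numbers quoted above can be cross-checked numerically. The following sketch (plain Python) verifies the identity in the form $R_{n}(b+1,p)={\rm Bal}_{p-1}(n,(p-1)n+b)$, which follows from ${\rm Bal}_{p-1}(n,N)=R_{n}(N-n(p-1)+1,p)$; the exact argument convention in the source may differ.

```python
# Numerical cross-check of the Rothe (Raney) and p-ballot numbers defined above:
#   R_n(b+1, p) = Bal_{p-1}(n, (p-1)n + b),
# a consequence of Bal_{p-1}(n, N) = R_n(N - n(p-1) + 1, p).
from fractions import Fraction
from math import comb

def rothe(n, a, b):
    # R_n(a, b) = a/(a + bn) * binom(a + bn, n)
    return Fraction(a, a + b * n) * comb(a + b * n, n)

def ballot(p, m, n):
    # Bal_p(m, n) = (n - mp + 1)/(n + m + 1) * binom(n + m + 1, m)
    return Fraction(n - m * p + 1, n + m + 1) * comb(n + m + 1, m)

for n in range(1, 8):
    for p in range(2, 6):
        for b in range(0, 6):
            r = rothe(n, b + 1, p)
            assert r == ballot(p - 1, n, (p - 1) * n + b)
            assert r.denominator == 1      # Rothe (Raney) numbers are integers
```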
###### Question 1.9.
Let $a$ and $b$ be relatively prime positive integers. Does there exist a
family of permutations $w_{a,b}\in{\mathbb{S}}_{ab(a+b)}$ such that the
specialization $x_{i}=1$, $\forall\,i$, of the Schubert polynomial
${\mathfrak{S}}_{w_{a,b}}$ is equal to the rational Catalan number $C_{a/b}$?
That is
$\displaystyle{\mathfrak{S}}_{w_{a,b}}(1)={1\over a+b}{a+b\choose a}.$
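A quick computational aside: the rational Catalan number $C_{a/b}={1\over a+b}{a+b\choose a}$ is an integer for coprime $a$, $b$, symmetric in $a$ and $b$, and $C_{n/(n+1)}$ recovers the classical Catalan numbers. A minimal check in plain Python:

```python
# Rational Catalan numbers C_{a/b} = (1/(a+b)) * binom(a+b, a) for coprime a, b.
from math import comb, gcd

def rational_catalan(a, b):
    assert gcd(a, b) == 1
    num, den = comb(a + b, a), a + b
    assert num % den == 0                  # integrality holds for coprime a, b
    return num // den

# C_{n/(n+1)} recovers the classical Catalan numbers 1, 2, 5, 14, 42, ...
catalan = [rational_catalan(n, n + 1) for n in range(1, 6)]
```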
Many of the computations in Section 5.2 are based on the following
determinantal formula for $\beta$-Grothendieck polynomials corresponding to
grassmannian permutations, cf. [84].
###### Theorem 1.10 (see Comments 5.37(b)).
If $w=\sigma_{\lambda}$ is the grassmannian permutation with shape
$\lambda=(\lambda_{1},\ldots,\lambda_{n})$ and a unique descent at position
$n$, then the following formulas hold. (The equality
$\displaystyle\mathfrak{G}_{\sigma_{\lambda}}^{(\beta)}(X_{n})={\operatorname{DET}\big{|}x_{i}^{\lambda_{j}+n-j}(1+\beta
x_{i})^{j-1}\big{|}_{1\leq i,j\leq n}\over\prod\limits_{1\leq i<j\leq
n}(x_{i}-x_{j})},$ has been proved independently in [107].)
$\displaystyle({\rm
A})\quad\mathfrak{G}_{\sigma_{\lambda}}^{(\beta)}(X_{n})=\operatorname{DET}\big{|}h_{\lambda_{j}+i,j}^{(\beta)}(X_{n})\big{|}_{1\leq
i,j\leq n}={\operatorname{DET}\big{|}x_{i}^{\lambda_{j}+n-j}(1+\beta
x_{i})^{j-1}\big{|}_{1\leq i,j\leq n}\over\prod\limits_{1\leq i<j\leq
n}(x_{i}-x_{j})},$
where $X_{n}=(x_{1},\ldots,x_{n})$, and for any set of variables $X$,
$\displaystyle h_{n,k}^{(\beta)}(X)=\sum_{a=0}^{k-1}~{}{k-1\choose
a}h_{n-k+a}(X)\beta^{a},$
and $h_{k}(X)$ denotes the complete symmetric polynomial of degree $k$ in the
variables from the set $X$.
$\displaystyle({\rm
B})\quad{\mathfrak{G}}_{\sigma_{\lambda}}(X,Y)={\operatorname{DET}\Big{|}\prod\limits_{a=1}^{\lambda_{j}+n-j}(x_{i}+y_{a}+\beta
x_{i}y_{a})(1+\beta x_{i})^{j-1}\Big{|}_{1\leq i,j\leq
n}\over\prod\limits_{1\leq i<j\leq n}(x_{i}-x_{j})}.$
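Formula (A) can be verified symbolically in small cases. The sketch below (sympy assumed; helper names are ours) checks the smallest nontrivial case $\lambda=(1,0)$, $n=2$, where both sides equal $x_{1}+x_{2}+\beta x_{1}x_{2}$:

```python
# Symbolic check of formula (A) for lambda = (1, 0), n = 2, where
# G^{(beta)}_{sigma_lambda} = x1 + x2 + beta*x1*x2.
from itertools import combinations_with_replacement
import sympy as sp

beta = sp.Symbol('beta')
n, lam = 2, (1, 0)
xs = sp.symbols(f'x1:{n + 1}')

def h(m, xv):                      # complete homogeneous symmetric polynomial h_m
    if m < 0:
        return sp.Integer(0)
    return sum((sp.Mul(*c) for c in combinations_with_replacement(xv, m)),
               sp.Integer(0))

def h_beta(N, k, xv):              # h^{(beta)}_{N,k} as defined after formula (A)
    return sum(sp.binomial(k - 1, a) * h(N - k + a, xv) * beta**a
               for a in range(k))

# left-hand side: the Jacobi-Trudi-type determinant
jt = sp.Matrix(n, n, lambda i, j: h_beta(lam[j] + i + 1, j + 1, xs)).det()
# right-hand side: the bialternant ratio
num = sp.Matrix(n, n, lambda i, j:
                xs[i]**(lam[j] + n - j - 1) * (1 + beta * xs[i])**j).det()
van = sp.prod([xs[i] - xs[j] for i in range(n) for j in range(i + 1, n)])
bialt = sp.cancel(num / van)
```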
In Sections 5.2.2 and 5.4.2 we study connections between the Grothendieck
polynomial associated with the Richardson permutation $w_{k}^{(n)}=1^{k}\times
w_{0}^{(n-k)}$, $k$-dissections of a convex $(n+k+1)$-gon, the generalized
reduced polynomial corresponding to a certain monomial in the algebra
${\widehat{{\rm ACYB}}}_{n}$, and the Lagrange inversion formula. In the case
of the generalized Richardson permutation $w_{n,p}^{(k)}$ corresponding to the
$k$-shifted dominant permutation $w^{(p,n)}$ associated with the Young diagram
$\lambda_{p,n}:=p(n-1,n-2,\ldots,1)$, namely, $w_{n,p}^{(k)}=1^{k}\times
w^{(p,n)}$, we treat only the case $k=1$, see also [39]. In the case $k\geq 2$
one comes to the task of counting, and finding a lattice-path-type
interpretation for, the number of $k$-pgulations of a convex $n$-gon, that is,
the number of partitionings of a convex $n$-gon into parts which are all equal
to a convex $(p+2)$-gon, by a (maximal) family of diagonals such that each
diagonal has at most $k$ internal intersections with the other members of the
selected family of diagonals.
In Section 5.3 we give a partial answer to Question 6.C8(d) of R. Stanley
[133]. In particular, we relate the reduced polynomial corresponding to the
monomial
$\displaystyle\bigl{(}x_{12}^{a_{2}}\cdots{x_{n-1,n}}^{a_{n}}\bigr{)}\prod_{j=2}^{n-2}\prod_{k=j+2}^{n}x_{jk},\qquad
a_{j}\in\mathbb{Z}_{\geq 0},\qquad\forall\,j,$
with the Ehrhart polynomial of the generalized Chan–Robbins–Yuen polytope, if
$a_{2}=\cdots=a_{n}=m+1$, cf. [101], with a $t$-deformation of the Kostant
partition function of type $A_{n-1}$ and the Ehrhart polynomials of some flow
polytopes, cf. [103].
In Section 5.4 we investigate certain specializations of the reduced
polynomials corresponding to monomials of the form
$\displaystyle x_{12}^{m_{1}}\cdots x_{n-1,n}^{m_{n}},\qquad
m_{j}\in\mathbb{Z}_{\geq 0},\qquad\forall\,j.$
First of all we observe that the corresponding specialized reduced polynomial
appears to be a piecewise polynomial function of the parameters
${\boldsymbol{m}}=(m_{1},\ldots,m_{n})\in(\mathbb{R}_{\geq 0})^{n}$, denoted
by $P_{{\boldsymbol{m}}}$. It is an interesting problem to compute the Laplace
transform of that piecewise polynomial function. In the present paper we
compute the value of the function $P_{{\boldsymbol{m}}}$ in the dominant
chamber ${\cal{C}}_{n}=(m_{1}\geq m_{2}\geq\cdots\geq m_{n}\geq 0)$, and give
a combinatorial interpretation of the values of that function at the points
$(n,m)$ and $(n,m,k)$, $n\geq m\geq k$.
For the reader's convenience, in Appendices A.1–A.6 we collect some useful
auxiliary information about the subjects treated in the present paper.
Almost all results in Section 5 state that two specific sets have the same
number of elements. Our proofs of these results are purely algebraic. It is an
interesting problem to find bijective proofs of the results of Section 5 which
generalize and extend the remarkable bijective proofs presented in [103, 129,
135, 142] to the cases of
* •
the $\beta$-Grothendieck polynomials,
* •
the (small) Schröder numbers,
* •
$k$-dissections of a convex $(n+k+1)$-gon,
* •
special values of reduced polynomials.
We are planning to treat and present these bijections in separate
publication(s).
We expect that the reduced polynomials corresponding to the higher-order
powers of the Coxeter elements also admit an interesting combinatorial
interpretation(s). Some preliminary results in this direction are discussed in
Comments 5.67.
At the end of the introduction I want to add a few remarks.
(a) After a suitable modification of the algebra $3HT_{n}$, see [75], and the
case $\beta\not=0$ in [72], one can compute the set of relations among the
(additive) Dunkl elements (defined in Section 2, equation (2.1)). In the case
$\beta=0$ and $q_{ij}=q_{i}\delta_{j-i,1}$, $1\leq i<j\leq n$, where
$\delta_{a,b}$ is the Kronecker delta symbol, the commutative algebra
generated by additive Dunkl elements (2.3) appears to be “almost” isomorphic
to the equivariant quantum cohomology ring of the flag variety ${\cal
F}l_{n}$, see [75] for details. Using the multiplicative version of Dunkl
elements, see Section 3.2, one can extend the results from [75] to the case of
equivariant quantum $K$-theory of the flag variety ${\cal F}l_{n}$, see [72].
(b) As was pointed out previously, one can define an analogue of the algebra
$3T_{n}^{(0)}$ for any (oriented) matroid ${\cal{M}}_{n}$, and state a
conjecture which connects the Hilbert polynomial of the algebra
$3T_{n}^{(0)}(({\cal{M}}_{n})^{ab},t)$ and the chromatic polynomial of the
matroid ${\cal{M}}_{n}$. We expect that the algebra
$3T_{n}^{(\beta=1)}({\cal{M}}_{n})^{ab}$ is isomorphic to the
Gelfand–Varchenko algebra associated with the matroid ${\cal{M}}_{n}$. It is
an interesting problem to find a combinatorial meaning of the algebra
$3T_{n}^{(\beta)}({\cal{M}}_{n})$ for $\beta=0$ and $\beta\not=0$.
(c) Let $R$ be a (graded) ring (to be specified later) and
${\mathfrak{F}}_{n^{2}}$ be the free associative algebra over $R$ with the set
of generators $\\{u_{ij},\,1\leq i,j\leq n\\}$. In the subsequent text we will
distinguish the set of generators $\\{u_{ii}\\}_{1\leq i\leq n}$ from the set
$\\{u_{ij}\\}_{1\leq i\not=j\leq n}$, and set
$\displaystyle x_{i}:=u_{ii},\qquad i=1,\ldots,n.$
A guiding idea in choosing definitions and performing constructions in the
present paper is to impose a set of relations ${\cal{R}}_{n}$ among the
generators $\\{x_{i}\\}_{1\leq i\leq n}$ and $\\{u_{ij}\\}_{1\leq i\not=j\leq
n}$ which ensures the mutual commutativity of the following elements
$\displaystyle\theta_{i}^{(n)}:=\theta_{i}=x_{i}+\sum_{j\not=i}^{n}u_{ij},\qquad
i=1,\ldots,n,$
in the algebra ${\cal{F}}_{n^{2}}/{\cal{R}}_{n}$, as well as to have a good
chance to describe/compute the following.
$\bullet$ “Integrals of motion”: find a large enough set of algebraically
independent polynomials (quite possibly these polynomials are trigonometric or
elliptic ones) $I_{\alpha}^{(n)}(y_{1},\ldots,y_{n})\in R[Y_{n}]$ such that
$\displaystyle
I_{\alpha}^{(n)}\big{(}\theta_{1}^{(n)},\ldots,\theta_{n}^{(n)}\big{)}\in
R[X_{n}],\qquad\forall\,\alpha,$
in other words, the latter specialization of any integral of motion has to be
independent of all the generators $\\{u_{ij}\\}_{1\leq i\not=j\leq n}$.
$\bullet$ Give a presentation of the algebra ${\cal{I}}_{n}$ generated by the
integrals of motion, that is, find a set of defining relations among the
elements $\theta_{1},\ldots,\theta_{n}$, and describe an $R$-basis
$\big{\\{}m_{\alpha}^{(n)}\big{\\}}$ of the algebra ${\cal{I}}_{n}$.
$\bullet$ Generalized Littlewood–Richardson and Murnaghan–Nakayama problems.
Given an integral of motion $I_{\beta}^{(m)}(Y_{m})$ and an integer $n\geq m$,
find an explicit positive (if possible) expression in the quotient algebra
${\cal{F}}_{n^{2}}/{\cal{R}}_{n}$ of the element
$\displaystyle
I_{\beta}^{(m)}\big{(}\theta_{1}^{(n)},\ldots,\theta_{m}^{(n)}\big{)}.$
For example, in the case of the 3-term relations algebra $3T_{n}^{(0)}$ (as
well as its equivariant, quantum, etc. versions) the generalized
Littlewood–Richardson problem is to find a positive expression in the algebra
$3T_{n}^{(0)}$ for the element
${\mathfrak{S}}_{w}\big{(}\theta_{1}^{(n)},\ldots,\theta_{m}^{(n)}\big{)}$,
where ${\mathfrak{S}}_{w}(Y_{n})$ stands for the Schubert polynomial
corresponding to a permutation $w\in\mathbb{S}_{n}$.
The generalized Murnaghan–Nakayama problem consists in finding a combinatorial
expression in the algebra $3T_{n}^{(0)}$ for the element
$\sum\limits_{i=1}^{m}(\theta_{i}^{(n)})^{k}$.
Partial results concerning these problems have been obtained, as far as we
are aware, in [45, 70, 72, 73, 104, 117].
$\bullet$ “Partition functions”. Assume that the (graded) algebra
${\cal{I}}_{n}$ generated over $R$ by the elements
$\theta_{1},\ldots,\theta_{n}$ has finite dimension/rank, and that the
(nonzero) maximal degree component ${\cal{I}}_{\max}^{(n)}$ of that algebra
has dimension/rank one and is generated by an element $\omega$. For any element
$g\in{\cal{F}}_{n^{2}}$ let us denote by $\operatorname{Res}_{\omega}(g)$ an
element in $R$ such that
$\displaystyle\overline{g}=\operatorname{Res}_{\omega}(g)\omega,$
where we denote by $\overline{g}$ the image of element $g$ in the component
${\cal{I}}_{\max}^{(n)}$.
We define the partition function associated with the algebra ${\cal{I}}_{n}$
as follows:
$\displaystyle{\cal{Z}}({\cal{I}}_{n})=\operatorname{Res}_{\omega}\bigg{(}\exp\bigg{(}\sum_{\alpha}q_{\alpha}m_{\alpha}^{(n)}\bigg{)}\bigg{)},$
where $\\{q_{\alpha}\\}$ is a set of parameters in one-to-one correspondence
with the chosen basis $\big{\\{}m_{\alpha}^{(n)}\big{\\}}$.
We are interested in finding a closed formula for the partition function
${\cal{Z}}({\cal{I}}_{n})$, as well as one for the small partition function
$\displaystyle{\cal{Z}}^{(0)}({\cal{I}}_{n}):=\operatorname{Res}_{\omega}\bigg{(}\exp\bigg{(}\sum_{1\leq
i,j\leq n}\lambda_{ij}u_{ij}\bigg{)}\bigg{)},$
where $\\{\lambda_{ij}\\}_{1\leq i,j\leq n}$ stands for a set of parameters.
One can show [68] that the partition function ${\cal{Z}}({\cal{I}}_{n})$
associated with the algebra $3T_{n}^{{\boldsymbol{q}}}$ satisfies the famous
Witten–Dijkgraaf–Verlinde–Verlinde equations.
As preliminary steps toward realizing our guiding idea, we
1. (i)
investigate properties of the abelianization of the algebra
${\cal{F}}_{n^{2}}/{\cal{R}}_{n}$. Some unexpected connections with the theory
of hyperplane arrangements and graph theory are discovered;
2. (ii)
investigate a variety of descendant relations coming from the defining
relations. Some polynomials with interesting combinatorial properties appear
naturally.
To keep the size of the present paper reasonable, several new results are
presented as exercises.
We conclude the introduction with a short historical remark. As far as we are
aware, the commutative version of the $3$-term relations, which provided the
framework for the definition of the FK algebra ${\cal{E}}_{n}$ [45] and a
plethora of its generalizations, has been frequently used implicitly in the
theory of elliptic functions and related topics, starting at least from the
middle of the 19th century, see, e.g., [141] for references, and up to the
present day, and will surely continue to be used. The key point is that the
Kronecker sigma function
$\displaystyle\sigma_{z}(w):={\frac{\sigma(z-w)\theta^{\prime}(0)}{\sigma(z)\sigma(-w)}},$
where $\sigma(z)$ denotes the Weierstrass sigma function, satisfies a
quadratic three-term addition formula, or functional equation, discovered, as
far as we are aware, by K. Weierstrass. In fact this functional equation is
equivalent (we refer the reader to a nicely written paper by Tom H.
Koornwinder [79] for more historical information) to the famous Jacobi–Riemann
three-term relation of degree four between the Riemann theta functions
$\theta(x)$. In the rational degeneration of theta functions, the three-term
relation between Kronecker sigma functions turns into the famous three-term
Jacobi identity, which can be treated as an associative analogue of the Jacobi
identity in the theory of Lie algebras.
To the best of our knowledge, in an abstract form, that is, as a set of
defining relations in a certain algebra, an anticommutative version of the
three-term relations first appeared in a remarkable paper by V.I. Arnold [3].
Nowadays these relations are known as the Arnold relations. These relations
and their various generalizations play a fundamental role in the theory of
arrangements, see, e.g., [113], in topology, combinatorics and many other
branches of mathematics.
In the commutative setting, an abstract form of the $3$-term relations was
invented by O. Mathieu [96]. In the context of braided Hopf algebras (of type
$A$), $3$-term-relation-like algebras (as examples of Nichols algebras) have
appeared in papers by A. Milinski and H.-J. Schneider (2000), N.
Andruskiewitsch (2002), S. Majid (2004), I. Heckenberger (2005) and many
others. (We refer the reader to the site
https://en.wikipedia.org/wiki/Nichols_algebra for basic definitions and
results concerning Nichols algebras and for references to the vast literature
treating different aspects of the theory of Nichols algebras and braided Hopf
algebras.)
It is well-known that the Nichols algebra associated with the symmetric group
${\mathbb{S}}_{n}$ and trivial conjugacy class is a quotient of the algebra
$FK_{n}$. It is still an open problem to prove (or disprove) that these two
algebras are isomorphic.
## 2 Dunkl elements
Having in mind to fulfill, as far as can be done at present, the conditions
suggested by our guiding line described in the introduction, we are led to
introduce the following algebras. (Surprisingly enough, in many cases, in
order to find relations among the elements $\theta_{1},\ldots,\theta_{n}$
there is no need to require that the elements $\\{\theta_{i}\\}_{1\leq i\leq
n}$ pairwise commute.)
###### Definition 2.1 (additive Dunkl elements).
The (additive) Dunkl elements $\theta_{i}$, $i=1,\dots,n$, in the algebra
${\cal F}_{n}$ are defined to be
$\displaystyle\theta_{i}=x_{i}+\sum_{j=1\atop j\not=i}^{n}u_{ij}.$ (2.1)
We are interested in finding “natural relations” among the generators
$\\{u_{ij}\\}_{1\leq i,j\leq n}$ such that the Dunkl elements (2.1) pairwise
commute. One natural condition, commonly accepted in the theory of integrable
systems, is the following:
* •
locality conditions:
$\displaystyle(a)\quad[x_{i},x_{j}]=0\qquad\text{if}\ \ i\not=j,$
$\displaystyle(b)\quad u_{ij}u_{kl}=u_{kl}u_{ij}\qquad\text{if}\ \ i\not=j,\ \
k\not=l\ \ \text{and}\ \ \\{i,j\\}\cap\\{k,l\\}=\varnothing.$ (2.2)
###### Lemma 2.2.
Assume that the elements $\\{u_{ij}\\}$ satisfy the locality conditions (2.2).
If $i\not=j$, then
$\displaystyle[\theta_{i},\theta_{j}]=\biggl{[}x_{i}+\sum_{k\not=i,j}u_{ik},u_{ij}+u_{ji}\biggr{]}+\biggl{[}u_{ij},\sum_{k=1}^{n}x_{k}\biggr{]}+\sum_{k\not=i,j}w_{ijk},$
where
$\displaystyle
w_{ijk}=[u_{ij},u_{ik}+u_{jk}]+[u_{ik},u_{jk}]+[x_{i},u_{jk}]+[u_{ik},x_{j}]+[x_{k},u_{ij}].$
(2.3)
Therefore, in order to ensure that the Dunkl elements form a pairwise
commuting family, it is natural to assume that the following conditions hold:
* •
unitarity:
$\displaystyle[u_{ij}+u_{ji},u_{kl}]=0=[u_{ij}+u_{ji},x_{k}]\qquad\text{for
all distinct}\ \ i,\,j,\,k,\,l,$ (2.4)
i.e., the elements $u_{ij}+u_{ji}$ are central.
* •
“conservation laws”:
$\displaystyle\left[\sum_{k=1}^{n}x_{k},u_{ij}\right]=0\qquad\text{for all}\ \
i,\,j,$ (2.5)
i.e., the element $E:=\sum\limits_{k=1}^{n}x_{k}$ is central,
* •
unitary dynamical classical Yang–Baxter relations:
$\displaystyle[u_{ij},u_{ik}+u_{jk}]+[u_{ik},u_{jk}]+[x_{i},u_{jk}]+[u_{ik},x_{j}]+[x_{k},u_{ij}]=0,$
(2.6)
if $i$, $j$, $k$ are pair-wise distinct.
###### Definition 2.3 (dynamical six term relations algebra $6DT_{n}$).
We denote by $6DT_{n}$ (we will frequently also use the notation ${\rm
DCYB}_{n}$) the quotient of the algebra ${\cal F}_{n}$ by the two-sided ideal
generated by the relations (2.2)–(2.6).
Clearly, the Dunkl elements (2.1) generate a commutative subalgebra inside the
algebra $6DT_{n}$, and the sum
$\sum\limits_{i=1}^{n}\theta_{i}=\sum\limits_{i=1}^{n}x_{i}$ belongs to the
center of the algebra $6DT_{n}$.
###### Remark 2.4.
Occasionally we will call the Dunkl elements of the form (2.1) dynamical Dunkl
elements, to distinguish them from the truncated Dunkl elements, corresponding
to the case $x_{i}=0$, $\forall\,i$.
### 2.1 Some representations of the algebra $\boldsymbol{6DT_{n}}$
#### 2.1.1 Dynamical Dunkl elements and equivariant quantum cohomology
(I) (cf. [41]). Given a set $q_{1},\ldots,q_{n-1}$ of mutually commuting
parameters, define
$\displaystyle q_{ij}=\prod_{a=i}^{j-1}q_{a}\qquad\text{if}\quad i<j,$
and set $q_{ij}=q_{ji}$ in the case $i>j$. Clearly, if $i<j<k$, then
$q_{ij}q_{jk}=q_{ik}$.
Let $z_{1},\ldots,z_{n}$ be a set of (mutually commuting) variables. Denote by
$P_{n}:=\mathbb{Z}[z_{1},\ldots,z_{n}]$ the corresponding ring of polynomials.
We consider the variable $z_{i}$, $i=1,\ldots,n$, also as the operator acting
on the ring of polynomials $P_{n}$ by multiplication by the variable $z_{i}$.
Let $s_{ij}\in\mathbb{S}_{n}$ be the transposition that swaps the letters $i$
and $j$ and fixes all other letters $k\not=i,j$. We consider the
transposition $s_{ij}$ also as the operator which acts on the ring $P_{n}$ by
interchanging $z_{i}$ and $z_{j}$, and fixes all other variables. We denote by
$\displaystyle\partial_{ij}={1-s_{ij}\over
z_{i}-z_{j}},\qquad\partial_{i}:=\partial_{i,i+1},$
the divided difference operators corresponding to the transposition $s_{ij}$
and the simple transposition $s_{i}:=s_{i,i+1}$, respectively. Finally we
define the operator (cf. [41])
$\displaystyle\partial_{(ij)}:=\partial_{i}\cdots\partial_{j-2}\partial_{j-1}\partial_{j-2}\cdots\partial_{i}\qquad\text{if}\
\ i<j.$
The operators $\partial_{(ij)}$, $1\leq i<j\leq n$, satisfy (among other
things) the following set of relations (cf. [41])
* •
$[z_{j},\partial_{(ik)}]=0$ if $j\notin[i,k]$,
$\Big{[}\partial_{(ij)},\sum\limits_{a=i}^{j}z_{a}\Big{]}=0$,
* •
$[\partial_{(ij)},\partial_{(kl)}]=\delta_{jk}[z_{j},\partial_{(il)}]+\delta_{il}[\partial_{(kj)},z_{i}]$
if $i<j$, $k<l$.
Therefore, if we set $u_{ij}=q_{ij}\partial_{(ij)}$ if $i<j$, and
$u_{ij}=-u_{ji}$ if $i>j$, then for a triple $i<j<k$ we will have
$\displaystyle[u_{ij},u_{ik}+u_{jk}]+[u_{ik},u_{jk}]+[z_{i},u_{jk}]+[u_{ik},z_{j}]+[z_{k},u_{ij}]$
$\displaystyle\qquad{}=q_{ij}q_{jk}[\partial_{(ij)},\partial_{(jk)}]+q_{ik}[\partial_{(ik)},z_{j}]=0.$
Thus the elements $\\{z_{i},\,i=1,\ldots,n\\}$ and $\\{u_{ij},\,1\leq i<j\leq
n\\}$ define a representation of the algebra ${\rm DCYB}_{n}$, and therefore
the Dunkl elements
$\displaystyle\theta_{i}:=z_{i}+\sum_{j\not=i}u_{ij}=z_{i}-\sum_{j<i}q_{ji}\partial_{(ji)}+\sum_{j>i}q_{ij}\partial_{(ij)}$
form a pairwise commuting family of operators acting on the ring of
polynomials
$\displaystyle\mathbb{Z}[q_{1},\ldots,q_{n-1}][z_{1},\ldots,z_{n}],$
cf. [41]. This representation has been used in [41] to construct the small
quantum cohomology ring of the complete flag variety of type $A_{n-1}$.
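The commutativity of this family can be verified directly in small rank. A sketch (sympy assumed; the function names `d`, `d13`, `theta1`, etc. are ours) for $n=3$, with $q_{12}=q_{1}$, $q_{23}=q_{2}$, $q_{13}=q_{1}q_{2}$ and $\partial_{(13)}=\partial_{1}\partial_{2}\partial_{1}$:

```python
# Operator check for n = 3: the dynamical Dunkl elements
# theta_i = z_i - sum_{j<i} q_{ji} d_{(ji)} + sum_{j>i} q_{ij} d_{(ij)}
# built from divided differences pairwise commute (tested on a sample polynomial).
import sympy as sp

z = sp.symbols('z1:4')
q1, q2 = sp.symbols('q1 q2')

def d(f, i):                       # simple divided difference in z_i, z_{i+1}
    g = f.subs({z[i - 1]: z[i], z[i]: z[i - 1]}, simultaneous=True)
    return sp.cancel((f - g) / (z[i - 1] - z[i]))

def d13(f):                        # composition d1 d2 d1, for the transposition (1 3)
    return d(d(d(f, 1), 2), 1)

def theta1(f): return sp.expand(z[0] * f + q1 * d(f, 1) + q1 * q2 * d13(f))
def theta2(f): return sp.expand(z[1] * f - q1 * d(f, 1) + q2 * d(f, 2))
def theta3(f): return sp.expand(z[2] * f - q2 * d(f, 2) - q1 * q2 * d13(f))

f = z[0] * z[2]                    # sample polynomial
commutators = [sp.expand(a(b(f)) - b(a(f)))
               for a, b in [(theta1, theta2), (theta1, theta3), (theta2, theta3)]]
```

All three commutators vanish on the sample, as the construction of [41] predicts.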
(II) Consider the degenerate affine Hecke algebra ${\mathfrak{H}}_{n}$
generated by the central element $h$, the elements of the symmetric group
${\mathbb{S}}_{n}$, and the mutually commuting elements $y_{1},\ldots,y_{n}$,
subject to the relations
$\displaystyle s_{i}y_{i}-y_{i+1}s_{i}=h,\quad 1\leq i<n,\qquad
s_{i}y_{j}=y_{j}s_{i},\quad j\not=i,i+1,$
where $s_{i}$ stands for the simple transposition that swaps only the indices
$i$ and $i+1$. For $i<j$, let $s_{ij}=s_{i}\cdots s_{j-2}s_{j-1}s_{j-2}\cdots
s_{i}$ denote the transposition that swaps only the indices $i$ and $j$. It is
an easy
exercise to show that
* •
$[y_{j},s_{ik}]=h[s_{ij},s_{jk}]$ if $i<j<k$,
* •
$y_{i}s_{ik}-s_{ik}y_{k}=h+hs_{ik}\sum\limits_{i<j<k}s_{jk}$ if $i<k$.
Finally, consider a set of mutually commuting parameters $\\{p_{ij},\,1\leq
i\not=j\leq n,\,p_{ij}+p_{ji}=0\\}$, subject to the constraints
$\displaystyle p_{ij}p_{jk}=p_{ik}p_{ij}+p_{jk}p_{ik}+hp_{ik},\qquad i<j<k.$
###### Comments 2.5.
If parameters $\\{p_{ij}\\}$ are invertible, and satisfy relations
$\displaystyle p_{ij}p_{jk}=p_{ik}p_{ij}+p_{jk}p_{ik}+\beta p_{ik},\qquad
i<j<k,$
then one can rewrite the above displayed relations in the following form
$\displaystyle 1+{\beta\over p_{ik}}=\left(1+{\beta\over
p_{ij}}\right)\left(1+{\beta\over p_{jk}}\right),\qquad 1\leq i<j<k\leq n.$
Therefore there exist parameters $\\{q_{1},\ldots,q_{n}\\}$ such that
$1+\beta/p_{ij}=q_{i}/q_{j}$, $1\leq i<j\leq n$. In other words,
$p_{ij}={\beta q_{j}\over q_{i}-q_{j}}$, $1\leq i<j\leq n$. However, in
general, there are many other types of solutions, for example, solutions
related to the Heaviside function $H(x)$ (see
https://en.wikipedia.org/wiki/Heaviside_step_function), namely,
$p_{ij}=H(x_{i}-x_{j})$, $x_{i}\in\mathbb{R}$, $\forall\,i$, and its discrete
analogue, see Example (III) below. In both cases $\beta=-1$; see also
Comments 2.12 for other examples.
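Both solutions can be checked directly. A small sketch (sympy plus plain Python; variable names are ours) verifies the relation symbolically for $p_{ij}=\beta q_{j}/(q_{i}-q_{j})$ and numerically for the Heaviside solution at $\beta=-1$:

```python
# (1) symbolically: p_ij = beta*q_j/(q_i - q_j) satisfies
#     p_ij p_jk = p_ik p_ij + p_jk p_ik + beta p_ik;
# (2) numerically: the Heaviside solution p_ij = H(x_i - x_j) satisfies the
#     same relation with beta = -1, for any distinct reals x_i, x_j, x_k.
from itertools import permutations
import sympy as sp

beta, qi, qj, qk = sp.symbols('beta q_i q_j q_k')
p = lambda a, c: beta * c / (a - c)          # p_ij = beta*q_j/(q_i - q_j)
pij, pjk, pik = p(qi, qj), p(qj, qk), p(qi, qk)
assert sp.simplify(pij * pjk - (pik * pij + pjk * pik + beta * pik)) == 0

H = lambda t: 1 if t > 0 else 0              # Heaviside step (distinct arguments)
for xi, xj, xk in permutations([0.3, 1.7, 2.9]):
    a, b, c = H(xi - xj), H(xj - xk), H(xi - xk)
    assert a * b == c * a + b * c + (-1) * c     # beta = -1
```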
To continue the presentation of Example (II), define the elements
$u_{ij}=p_{ij}s_{ij}$, $1\leq i\not=j\leq n$.
###### Lemma 2.6 (dynamical classical Yang–Baxter relations).
$\displaystyle[u_{ij},u_{ik}+u_{jk}]+[u_{ik},u_{jk}]+[u_{ik},y_{j}]=0,\qquad
1\leq i<j<k\leq n.$ (2.7)
Indeed,
$\displaystyle
u_{ij}u_{jk}=u_{ik}u_{ij}+u_{jk}u_{ik}+hp_{ik}s_{ij}s_{jk},\qquad
u_{jk}u_{ij}=u_{ij}u_{ik}+u_{ik}u_{jk}+hp_{ik}s_{jk}s_{ij},$
and moreover, $[y_{j},u_{ik}]=hp_{ik}[s_{ij},s_{jk}]$.
Therefore, the elements
$\displaystyle\theta_{i}=y_{i}-h\sum_{j<i}u_{ij}+h\sum_{i<j}u_{ij},\qquad
i=1,\ldots,n,$
form a mutually commuting set of elements in the algebra
$\mathbb{Z}[\\{p_{ij}\\}]\otimes_{\mathbb{Z}}{\mathfrak{H}}_{n}$.
###### Theorem 2.7.
Define the matrix $M_{n}=(m_{i,j})_{1\leq i,j\leq n}$ as follows:
$\displaystyle m_{i,j}(u;z_{1},\ldots,z_{n})=\begin{cases}u-z_{i}&\text{if
$i=j$},\\\ -h-p_{ij}&\text{if $i<j$},\\\ p_{ij}&\text{if $i>j$}.\end{cases}$
Then
$\displaystyle\operatorname{DET}\big{|}M_{n}(u;\theta_{1},\ldots,\theta_{n})\big{|}=\prod_{j=1}^{n}(u-y_{j}).$
Moreover, let us set
$q_{ij}:=h^{2}(p_{ij}+p_{ij}^{2})=h^{2}q_{i}q_{j}(q_{i}-q_{j})^{-2}$, $i<j$,
then
$\displaystyle
e_{k}(\theta_{1},\ldots,\theta_{n})=e_{k}^{({\boldsymbol{q}})}(y_{1},\ldots,y_{n}),\qquad
1\leq k\leq n,$
where $e_{k}(x_{1},\ldots,x_{n})$ and
$e_{k}^{({\boldsymbol{q}})}(x_{1},\ldots,x_{n})$ denote respectively the
classical and the multiparameter quantum [45] elementary polynomials. (For the
reader's convenience we recall from [45] the definition of the quantum
elementary polynomial $e_{k}^{\boldsymbol{q}}(x_{1},\ldots,x_{n})$. Let
${\boldsymbol{q}}:=\\{q_{ij}\\}_{1\leq i<j\leq n}$ be a collection of “quantum
parameters”; then $\displaystyle
e_{k}^{\boldsymbol{q}}(x_{1},\ldots,x_{n})=\sum_{\ell}\sum_{1\leq
i_{1}<\cdots<i_{\ell}\leq n\atop
j_{1}>i_{1},\ldots,j_{\ell}>i_{\ell}}e_{k-2\ell}(X_{\overline{I\cup
J}})\prod_{a=1}^{\ell}q_{i_{a},j_{a}},$ where $I=(i_{1},\ldots,i_{\ell})$,
$J=(j_{1},\ldots,j_{\ell})$ must consist of distinct elements of the set
$\\{1,\ldots,n\\}$, and $X_{\overline{I\cup J}}$ denotes the set of variables
$x_{a}$ for which the subscript $a$ is neither one of the $i_{m}$ nor one of
the $j_{m}$.)
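The sum in the footnote runs over partial matchings of $\{1,\ldots,n\}$ by pairs $(i_{a},j_{a})$ with $i_{a}<j_{a}$. A small sketch (sympy assumed; the function names are ours) implements it directly:

```python
# The multiparameter quantum elementary polynomial from the footnote, computed
# by summing over partial matchings {(i_a, j_a)}, i_a < j_a, with all 2*ell
# matched indices distinct; e_{k-2*ell} is taken in the unmatched variables.
from itertools import combinations
import sympy as sp

def e(k, xv):                      # classical elementary symmetric polynomial
    if k < 0 or k > len(xv):
        return sp.Integer(0)
    return sum((sp.Mul(*c) for c in combinations(xv, k)), sp.Integer(0))

def e_quantum(k, xv, q):           # q: dict {(i, j): q_ij}, 1-based, i < j
    n = len(xv)
    pairs = [pr for pr in combinations(range(1, n + 1), 2) if pr in q]
    total = sp.Integer(0)
    for ell in range(n // 2 + 1):
        for sub in combinations(pairs, ell):
            used = [a for pr in sub for a in pr]
            if len(set(used)) == 2 * ell:            # pairwise disjoint pairs
                rest = [xv[a - 1] for a in range(1, n + 1) if a not in used]
                total += sp.Mul(*[q[pr] for pr in sub]) * e(k - 2 * ell, rest)
    return sp.expand(total)
```

For $n=3$ this gives $e_{3}^{\boldsymbol{q}}=x_{1}x_{2}x_{3}+q_{12}x_{3}+q_{13}x_{2}+q_{23}x_{1}$, and setting all $q_{ij}=0$ recovers the classical $e_{k}$.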
Let us stress that the elements $y_{i}$ and $\theta_{j}$ do not commute in the
algebra ${\mathfrak{H}}_{n}$, but the symmetric functions of
$y_{1},\ldots,y_{n}$, i.e., the center of the algebra ${\mathfrak{H}}_{n}$,
do.
A few remarks are in order. First of all, $u_{ij}^{2}=p_{ij}^{2}$ are central
elements. Secondly, in the case $h=0$ and $y_{i}=0$, $\forall\,i$, the
equality
$\displaystyle\operatorname{DET}\big{|}M_{n}(u;x_{1},\ldots,x_{n})\big{|}=u^{n}$
describes the set of polynomial relations among the Dunkl–Gaudin elements
(with the choice of parameters $p_{ij}=(q_{i}-q_{j})^{-1}$). And our final
remark is that according to [54, Section 8], the quotient ring
$\displaystyle{\cal{H}}_{n}^{{\boldsymbol{q}}}:=\mathbb{Q}[y_{1},\ldots,y_{n}]^{{\mathbb{S}}_{n}}\otimes\mathbb{Q}[\theta_{1},\dots,\theta_{n}]\otimes\mathbb{Q}[h]\Big{/}\bigg{\langle}M_{n}(u;\theta_{1},\ldots,\theta_{n})=\prod_{j=1}^{n}(u-y_{j})\bigg{\rangle}$
is isomorphic to the quantum equivariant cohomology ring of the cotangent
bundle $T^{*}{\cal{F}}l_{n}$ of the complete flag variety of type $A_{n-1}$,
namely,
$\displaystyle{\cal{H}}_{n}^{{\boldsymbol{q}}}\cong
QH^{*}_{T^{n}\times\mathbb{C}^{*}}(T^{*}{\cal{F}}l_{n})$
with the following choice of quantum parameters: $Q_{i}:=hq_{i+1}/q_{i}$,
$i=1,\ldots,n-1$.
On the other hand, in [75] we computed the so-called multiparameter
deformation of the equivariant cohomology ring of the complete flag variety of
type $A_{n-1}$. The deformation defined in [75] depends on parameters
$\\{q_{ij},\,1\leq i<j\leq n\\}$ with no constraints imposed. For the special
choice of parameters $\displaystyle
q_{ij}:=h^{2}{q_{i}~{}q_{j}\over(q_{i}-q_{j})^{2}}$, the multiparameter
deformation of the equivariant cohomology ring of the type $A_{n-1}$ complete
flag variety ${\cal{F}}l_{n}$ constructed in [75] is isomorphic to the ring
${\cal{H}}_{n}^{{\boldsymbol{q}}}$.
###### Comments 2.8.
Let us fix a set of independent parameters $\\{q_{1},\ldots,q_{n}\\}$ and
define new parameters
$\displaystyle\left\\{q_{ij}:=hp_{ij}(p_{ij}+h)=h^{2}{q_{i}q_{j}\over(q_{i}-q_{j})^{2}}\right\\},\quad
1\leq i<j\leq n,\\!\qquad\text{where}\quad p_{ij}={q_{j}\over
q_{i}-q_{j}},\quad i<j.$
We set $\deg(q_{ij})=2$, $\deg(p_{ij})=1$, $\deg(h)=1$.
The new parameters $\\{q_{ij}\\}_{1\leq i<j\leq n}$ are no longer free, but
satisfy rather complicated algebraic relations. We display some of these
relations below, having in mind the following question: is there some
intrinsic meaning of the algebraic variety defined by the set of defining
relations among the “quantum parameters” $\\{q_{ij}\\}$?
Let us denote by ${{\cal{A}}}_{n,h}$ the quotient ring of the ring of
polynomials $\mathbb{Q}[h][x_{ij},\,1\leq i<j\leq n]$ modulo the ideal
generated by the polynomials $f(x_{ij})$ whose specialization
$x_{ij}=q_{ij}$, namely $f(q_{ij})$, is equal to
zero. The algebra ${\cal{A}}_{n,h}$ has a natural filtration, and we denote by
${\cal{A}}_{n}=\operatorname{gr}{\cal{A}}_{n,h}$ the corresponding associated
graded algebra.
To describe (a part of) the relations among the parameters $\\{q_{ij}\\}$, let
us observe that the parameters $\\{p_{ij}\\}$ and $\\{q_{ij}\\}$ are related by
the following identity
$\displaystyle
q_{ij}q_{jk}-q_{ik}(q_{ij}+q_{jk})+h^{2}q_{ik}=2p_{ij}p_{ik}p_{jk}(p_{ik}+h)\qquad\text{if}\quad
i<j<k.$
Using this identity we can find the following relations among the parameters
in question
$\displaystyle
q_{ij}^{2}q_{jk}^{2}+q_{ij}^{2}q_{ik}^{2}+h^{4}q_{ik}^{2}q_{jk}^{2}-2q_{ij}q_{ik}q_{jk}(q_{ij}+q_{jk}+q_{ik})$
$\displaystyle\qquad{}-2h^{2}q_{ik}(q_{ij}q_{jk}+q_{ij}q_{ik}+q_{jk}q_{ik})=8hq_{ij}q_{ik}q_{jk}p_{ik},$
(2.8)
if $1\leq i<j<k\leq n$.
Finally, we come to a relation of degree $8$ among the “quantum parameters”
$\\{q_{ij}\\}$
$\displaystyle\big{(}\text{l.h.s.\ of
\eqref{eq:xdef}}\big{)}^{2}=64h^{2}q_{ij}^{2}q_{ik}^{3}q_{jk}^{2},\qquad 1\leq
i<j<k\leq n.$
There are also higher degree relations among the parameters $\\{q_{ij}\\}$
some of whose in degree $16$ follow from the deformed Plücker relation between
parameters $\\{p_{ij}\\}$:
$\displaystyle{1\over p_{ik}p_{jl}}={1\over p_{ij}p_{kl}}+{1\over
p_{il}p_{jk}}+{h\over p_{ij}p_{jk}p_{kl}},\qquad i<j<k<l.$
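Since the parameters are explicit rational functions of the $q_{i}$, the deformed Plücker relation can be spot-checked in exact arithmetic. The sketch below (an illustration, not part of the original argument) uses the normalization $p_{ij}=hq_{j}/(q_{i}-q_{j})$, an assumption under which the relation holds identically in $h$:

```python
from fractions import Fraction

# Exact spot-check of the deformed Pluecker relation.  Assumption: the
# normalization p_ij = h*q_j/(q_i - q_j), i.e., with h absorbed into p_ij.
def p(i, j, q, h):
    return h * q[j] / (q[i] - q[j])

q = {1: Fraction(1), 2: Fraction(2), 3: Fraction(3), 4: Fraction(5)}
i, j, k, l = 1, 2, 3, 4
for h in (Fraction(1), Fraction(2), Fraction(-3, 7)):
    lhs = 1 / (p(i, k, q, h) * p(j, l, q, h))
    rhs = (1 / (p(i, j, q, h) * p(k, l, q, h))
           + 1 / (p(i, l, q, h) * p(j, k, q, h))
           + h / (p(i, j, q, h) * p(j, k, q, h) * p(k, l, q, h)))
    assert lhs == rhs
```

The same check passes for any choice of pairwise distinct rational $q_{i}$, which is consistent with the relation being an identity.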
However, we do not know how to describe the algebra ${{\cal{A}}}_{n,h}$
generated by the quantum parameters $\\{q_{ij}\\}_{1\leq i<j\leq n}$, even for
$n=4$.
The algebra ${\cal{A}}_{n}=\operatorname{gr}({\cal{A}}_{n,h})$ is isomorphic
to the quotient algebra of $\mathbb{Q}[x_{ij},\,1\leq i<j\leq n]$ modulo the
ideal generated by the set of relations between “quantum parameters”
$\displaystyle\left\\{\overline{q}_{ij}:=\left({1\over
z_{i}-z_{j}}\right)^{2}\right\\}_{1\leq i<j\leq n},$
which correspond to the Dunkl–Gaudin elements $\\{\theta_{i}\\}_{1\leq i\leq
n}$, see Section 3.2 below for details. In this case the parameters
$\\{\overline{q}_{ij}\\}$ satisfy the following relations
$\displaystyle\overline{q}_{ij}^{2}\overline{q}_{jk}^{2}+\overline{q}_{ij}^{2}\overline{q}_{ik}^{2}+\overline{q}_{jk}^{2}\overline{q}_{ik}^{2}=2\overline{q}_{ij}\overline{q}_{ik}\overline{q}_{jk}(\overline{q}_{ij}+\overline{q}_{jk}+\overline{q}_{ik})$
which correspond to the relations (2.8) in the special case $h=0$. One can
find a set of relations in degrees $6$, $7$ and $8$; namely, for given
pairwise distinct integers $1\leq i,j,k,l\leq n$, one has
* •
one relation in degree $6$
$\displaystyle\overline{q}_{ij}^{2}\overline{q}_{ik}^{2}\overline{q}_{il}^{2}+\overline{q}_{ij}^{2}\overline{q}_{jk}^{2}\overline{q}_{jl}^{2}+\overline{q}_{ik}^{2}\overline{q}_{jk}^{2}\overline{q}_{kl}^{2}+\overline{q}_{il}^{2}\overline{q}_{jl}^{2}\overline{q}_{kl}^{2}$
$\displaystyle\qquad{}-2\overline{q}_{ij}\overline{q}_{ik}\overline{q}_{il}\overline{q}_{jk}\overline{q}_{jl}\overline{q}_{kl}\left({{{\overline{q}_{ij}}\over{\overline{q}_{kl}}}}+{{{\overline{q}_{kl}}\over{\overline{q}_{ij}}}}+{{{\overline{q}_{ik}}\over{\overline{q}_{jl}}}}+{{{\overline{q}_{jl}}\over{\overline{q}_{ik}}}}+{{{\overline{q}_{il}}\over{\overline{q}_{jk}}}}+{{{\overline{q}_{jk}}\over{\overline{q}_{il}}}}\right)$
$\displaystyle\qquad{}+8\overline{q}_{ij}\overline{q}_{ik}\overline{q}_{il}\overline{q}_{jk}\overline{q}_{jl}\overline{q}_{kl}=0;$
* •
three relations in degree $7$
$\displaystyle\overline{q}_{ik}\bigl{(}\overline{q}_{ij}\overline{q}_{il}\overline{q}_{kl}-\overline{q}_{ij}\overline{q}_{il}\overline{q}_{jk}+\overline{q}_{ij}\overline{q}_{jk}\overline{q}_{kl}-\overline{q}_{il}\overline{q}_{jk}\overline{q}_{kl}\bigr{)}^{2}$
$\displaystyle\qquad{}=8\overline{q}_{ij}^{2}\overline{q}_{ik}^{2}\overline{q}_{jk}\overline{q}_{kl}\bigl{(}\overline{q}_{jk}+\overline{q}_{jl}+\overline{q}_{kl}\bigr{)}-4\overline{q}_{ij}^{2}\overline{q}_{il}^{2}\overline{q}_{jl}\bigl{(}\overline{q}_{jk}^{2}+\overline{q}_{kl}^{2}\bigr{)},$
* •
one relation in degree $8$
$\displaystyle\overline{q}_{ij}^{2}\overline{q}_{il}^{2}\overline{q}_{jk}^{2}\overline{q}_{kl}^{2}+\overline{q}_{ij}^{2}\overline{q}_{ik}^{2}\overline{q}_{jl}^{2}\overline{q}_{kl}^{2}+\overline{q}_{ik}^{2}\overline{q}_{il}^{2}\overline{q}_{jk}^{2}\overline{q}_{jl}^{2}=2\overline{q}_{ij}\overline{q}_{ik}\overline{q}_{il}\overline{q}_{jk}\overline{q}_{jl}\overline{q}_{kl}\bigl{(}\overline{q}_{ij}\overline{q}_{kl}+\overline{q}_{ik}\overline{q}_{jl}+\overline{q}_{il}\overline{q}_{jk}\bigr{)},$
However, we do not know whether the list of relations displayed above contains
all the independent relations among the elements $\\{\overline{q}_{ij}\\}_{1\leq
i<j\leq n}$ in degrees $6$, $7$ and $8$, even for $n=4$. In degrees $\geq 9$
and for $n\geq 5$ some new independent relations should appear.
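The displayed relations among the parameters $\overline{q}_{ij}=1/(z_{i}-z_{j})^{2}$ are identities in the $z_{i}$, so they can be spot-checked exactly at sample points. The sketch below verifies the quadratic relation (with the symmetric sum $\overline{q}_{ij}+\overline{q}_{jk}+\overline{q}_{ik}$ on the right-hand side) together with the degree $6$ and degree $8$ relations:

```python
from fractions import Fraction
from itertools import combinations

z = {1: Fraction(0), 2: Fraction(1), 3: Fraction(3), 4: Fraction(7)}
qb = {(a, b): 1 / (z[a] - z[b]) ** 2 for a, b in combinations(range(1, 5), 2)}

i, j, k, l = 1, 2, 3, 4
# quadratic (h = 0) relation, with the symmetric sum over the three pairs
lhs4 = qb[i,j]**2*qb[j,k]**2 + qb[i,j]**2*qb[i,k]**2 + qb[j,k]**2*qb[i,k]**2
rhs4 = 2*qb[i,j]*qb[i,k]*qb[j,k]*(qb[i,j] + qb[j,k] + qb[i,k])
assert lhs4 == rhs4

# degree-6 relation
P = qb[i,j]*qb[i,k]*qb[i,l]*qb[j,k]*qb[j,l]*qb[k,l]
ratios = (qb[i,j]/qb[k,l] + qb[k,l]/qb[i,j] + qb[i,k]/qb[j,l]
          + qb[j,l]/qb[i,k] + qb[i,l]/qb[j,k] + qb[j,k]/qb[i,l])
lhs6 = (qb[i,j]**2*qb[i,k]**2*qb[i,l]**2 + qb[i,j]**2*qb[j,k]**2*qb[j,l]**2
        + qb[i,k]**2*qb[j,k]**2*qb[k,l]**2 + qb[i,l]**2*qb[j,l]**2*qb[k,l]**2)
assert lhs6 - 2*P*ratios + 8*P == 0

# degree-8 relation
lhs8 = (qb[i,j]**2*qb[i,l]**2*qb[j,k]**2*qb[k,l]**2
        + qb[i,j]**2*qb[i,k]**2*qb[j,l]**2*qb[k,l]**2
        + qb[i,k]**2*qb[i,l]**2*qb[j,k]**2*qb[j,l]**2)
rhs8 = 2*P*(qb[i,j]*qb[k,l] + qb[i,k]*qb[j,l] + qb[i,l]*qb[j,k])
assert lhs8 == rhs8
```

The check passes exactly for arbitrary pairwise distinct rational $z_{a}$.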
Notice that the parameters $\big{\\{}p_{ij}={hq_{j}\over
q_{i}-q_{j}},\,i<j\big{\\}}$ satisfy the so-called Gelfand–Varchenko
relations, see, e.g., [67]
$\displaystyle p_{ij}p_{jk}=p_{ik}p_{ij}+p_{jk}p_{ik}+hp_{ik},\qquad i<j<k,$
whereas parameters $\big{\\{}{\overline{p}}_{ij}={1\over
q_{i}-q_{j}},\,i<j\big{\\}}$ satisfy the so-called Arnold relations
$\displaystyle{\overline{p}}_{ij}{\overline{p}}_{jk}={\overline{p}}_{ik}{\overline{p}}_{ij}+{\overline{p}}_{jk}{\overline{p}}_{ik},\qquad
i<j<k.$
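Both the Gelfand–Varchenko and the Arnold relations are elementary identities of rational functions; a minimal exact check:

```python
from fractions import Fraction

q = {1: Fraction(1), 2: Fraction(2), 3: Fraction(3)}
h = Fraction(5, 7)

def p(i, j):      # p_ij = h*q_j/(q_i - q_j)
    return h * q[j] / (q[i] - q[j])

def pbar(i, j):   # pbar_ij = 1/(q_i - q_j)
    return 1 / (q[i] - q[j])

# Gelfand-Varchenko relation, i < j < k
assert p(1, 2) * p(2, 3) == p(1, 3) * p(1, 2) + p(2, 3) * p(1, 3) + h * p(1, 3)
# Arnold relation, i < j < k
assert (pbar(1, 2) * pbar(2, 3)
        == pbar(1, 3) * pbar(1, 2) + pbar(2, 3) * pbar(1, 3))
```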
###### Project 2.9.
Find the Hilbert series ${\rm Hilb}({\cal{A}}_{n},t)$ for $n\geq 4$.242424This is
a particular case of a more general problem we are interested in. Namely, let
$\\{f_{\alpha}\in\mathbb{R}[x_{1},\ldots,x_{n}]\\}_{1\leq\alpha\leq N}$ be a
collection of linear forms, and let $k\geq 2$ be an integer. Denote by
$I(\\{f_{\alpha}\\})$ the ideal in the ring of polynomials
$\mathbb{R}[z_{1},\ldots,z_{N}]$ generated by polynomials
$\Phi(z_{1},\ldots,z_{N})$ such that
$\displaystyle\Phi\big{(}f_{1}^{-k},\ldots,f_{N}^{-k}\big{)}=0.$ Compute the
Hilbert series (polynomial?) of the quotient algebra
$\mathbb{R}[z_{1},\ldots,z_{N}]/I(\\{f_{\alpha}\\})$.
For example, ${\rm Hilb}({\cal{A}}_{3},t)={(1+t)(1+t^{2})\over(1-t)^{2}}$.
Finally, if we set $q_{i}:=\exp(hz_{i})$, then in the limit $\lim\limits_{h\to
0}\frac{h^{2}q_{i}q_{j}}{(q_{i}-q_{j})^{2}}$ we obtain the
Dunkl–Gaudin parameter ${\overline{q}}_{ij}=\frac{1}{(z_{i}-z_{j})^{2}}$.
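This limit can be spot-checked numerically (a float computation at a small value of $h$, for sample numeric $z_{i}$, $z_{j}$):

```python
import math

# lim_{h -> 0} h^2 q_i q_j / (q_i - q_j)^2 with q_i = exp(h z_i)
# should equal 1/(z_i - z_j)^2.
def ratio(h, zi, zj):
    qi, qj = math.exp(h * zi), math.exp(h * zj)
    return h * h * qi * qj / (qi - qj) ** 2

for zi, zj in [(1.0, 3.0), (-2.0, 5.0)]:
    expected = 1.0 / (zi - zj) ** 2
    assert abs(ratio(1e-6, zi, zj) - expected) < 1e-8 * expected
```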
(III) Consider the following representation of the degenerate affine Hecke
algebra ${\mathfrak{H}}_{n}$ on the ring of polynomials
$P_{n}=\mathbb{Q}[x_{1},\ldots,x_{n}]$:
* •
the symmetric group ${\mathbb{S}}_{n}$ acts on $P_{n}$ by means of operators
$\displaystyle\overline{s}_{i}=1+(x_{i+1}-x_{i}-h)\partial_{i},\qquad
i=1,\ldots,n-1,$
* •
$y_{i}$ acts on the ring $P_{n}$ by multiplication by the variable $x_{i}$:
$y_{i}(f(x))=x_{i}f(x)$, $f\in P_{n}$. Clearly,
$\displaystyle
y_{i+1}{\overline{s}}_{i}-{\overline{s}}_{i}y_{i}=h\qquad\text{and}\qquad
y_{i}({\overline{s}}_{i}-1)=({\overline{s}}_{i}-1)y_{i+1}+x_{i+1}-x_{i}-h.$
In the subsequent discussion we will identify the operator of multiplication
by the variable $x_{i}$, namely the operator $y_{i}$, with $x_{i}$.
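Assuming that $\partial_{i}$ denotes the divided difference operator $\partial_{i}(f)=(f-s_{i}f)/(x_{i}-x_{i+1})$ (an assumption consistent with the second displayed commutation relation), one can spot-check pointwise that $\overline{s}_{i}$ is an involution and that $y_{i+1}\overline{s}_{i}-\overline{s}_{i}y_{i}=h$. A minimal sketch for two variables:

```python
from fractions import Fraction

h = Fraction(3, 5)

# Operators act on functions f(x1, x2).  Assumption: partial_i is the
# divided difference (f - s_i f)/(x_i - x_{i+1}).
def dd(f):
    return lambda a, b: (f(a, b) - f(b, a)) / (a - b)

def sbar(f):  # sbar_i = 1 + (x_{i+1} - x_i - h) * partial_i
    return lambda a, b: f(a, b) + (b - a - h) * dd(f)(a, b)

def y1(f):    # multiplication by x_i
    return lambda a, b: a * f(a, b)

def y2(f):    # multiplication by x_{i+1}
    return lambda a, b: b * f(a, b)

f = lambda a, b: a**3 + 2*a*b + 5*b
for a, b in [(Fraction(1), Fraction(4)), (Fraction(-2), Fraction(7, 3))]:
    assert sbar(sbar(f))(a, b) == f(a, b)                        # involution
    assert y2(sbar(f))(a, b) - sbar(y1(f))(a, b) == h * f(a, b)  # = h
```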
This time define $u_{ij}=p_{ij}(\overline{s}_{i}-1)$, if $i<j$ and set
$u_{ij}=-u_{ji}$ if $i>j$, where parameters $\\{p_{ij}\\}$ satisfy the same
conditions as in the previous example.
###### Lemma 2.10.
The elements $\\{u_{ij},\,1\leq i<j\leq n\\}$, satisfy the dynamical classical
Yang–Baxter relations displayed in Lemma 2.6, equation (2.7).
Therefore, the Dunkl elements
$\displaystyle\overline{\theta}_{i}:=x_{i}+\sum_{j\atop j\not=i}u_{ij},\qquad
i=1,\ldots,n,$
form a commutative set of elements.
###### Theorem 2.11 ([54]).
Define the matrix $\overline{M}_{n}=(\overline{m}_{ij})_{1\leq i,j\leq n}$ as
follows
$\displaystyle\overline{m}_{i,j}(u;z_{1},\ldots,z_{n})=\begin{cases}u-z_{i}+\sum\limits_{k\not=i}hp_{ik}&\text{if
$i=j$},\\\ -h-p_{ij}&\text{if $i<j$},\\\ p_{ij}&\text{if $i>j$}.\end{cases}$
Then
$\displaystyle\operatorname{DET}\big{|}\overline{M}_{n}(u;\overline{\theta}_{1},\ldots,\overline{\theta}_{n})\big{|}=\prod_{j=1}^{n}(u-x_{j}).$
###### Comments 2.12.
Let us list a few more representations of the dynamical classical Yang–Baxter
relations.
* •
Trigonometric Calogero–Moser representation. Let $i<j$, define
$\displaystyle u_{ij}={x_{j}\over
x_{i}-x_{j}}(s_{ij}-\epsilon),\qquad\epsilon=0\ \text{or}\ 1,$ $\displaystyle
s_{ij}(x_{i})=x_{j},\qquad s_{ij}(x_{j})=x_{i},\qquad
s_{ij}(x_{k})=x_{k},\qquad\forall\,k\not=i,j.$
* •
Mixed representation:
$\displaystyle
u_{ij}=\left({\lambda_{j}\over\lambda_{i}-\lambda_{j}}-{x_{j}\over
x_{i}-x_{j}}\right)(s_{ij}-\epsilon),\qquad\epsilon=0\ \text{or}\ 1,\qquad
s_{ij}(\lambda_{k})=\lambda_{k},\qquad\forall\,k.$
We set $u_{ij}=-u_{ji}$ if $i>j$. In all cases we define the Dunkl elements to
be $\theta_{i}=\sum\limits_{j\not=i}u_{ij}$.
Note that operators
$\displaystyle
r_{ij}=\left({\lambda_{i}+\lambda_{j}\over\lambda_{i}-\lambda_{j}}-{x_{i}+x_{j}\over
x_{i}-x_{j}}\right)s_{ij}$
satisfy the three term relations: $r_{ij}r_{jk}=r_{ik}r_{ij}+r_{jk}r_{ik}$,
and $r_{jk}r_{ij}=r_{ij}r_{jk}+r_{ik}r_{jk}$, and thus satisfy the classical
Yang–Baxter relations.
#### 2.1.2 Step functions and the Dunkl–Uglov representations
of the degenerate affine Hecke algebras [138]
Consider step functions $\eta^{\pm}\colon\mathbb{R}\longrightarrow\\{0,1\\}$
$\displaystyle\text{(Heaviside
function)}\qquad\eta^{+}(x)=\begin{cases}1&\text{if $x\geq 0$},\\\ 0&\text{if
$x<0$},\end{cases}\qquad\eta^{-}(x)=\begin{cases}1&\text{if $x>0$},\\\
0&\text{if $x\leq 0$}.\end{cases}$
For any two real numbers $x_{i}$ and $x_{j}$ set
$\eta_{ij}^{\pm}=\eta^{\pm}(x_{i}-x_{j})$.
###### Lemma 2.13.
The functions $\eta_{ij}^{\pm}$ satisfy the following relations
$\displaystyle\eta_{ij}^{\pm}+\eta_{ji}^{\pm}=1+\delta_{x_{i},x_{j}},\qquad(\eta_{ij}^{\pm})^{2}=\eta_{ij}^{\pm},$
$\displaystyle\eta_{ij}^{\pm}\eta_{jk}^{\pm}=\eta_{ik}^{\pm}\eta_{ij}^{\pm}+\eta_{jk}^{\pm}\eta_{ik}^{\pm}-\eta_{ik}^{\pm},$
where $\delta_{x,y}$ denotes the Kronecker delta function.
To introduce the Dunkl–Uglov operators [138] we need a few more definitions
and notation. To start with, denote by $\Delta_{i}^{\pm}$ the finite
difference operators
$\Delta_{i}^{\pm}(f)(x_{1},\ldots,x_{n})=f(\ldots,x_{i}\pm 1,\ldots)$. Let, as
before, $\\{s_{ij},\,1\leq i\not=j\leq n,\,s_{ij}=s_{ji}\\}$ denote the set
of transpositions in the symmetric group ${\mathbb{S}}_{n}$. Recall that
$s_{ij}(x_{i})=x_{j}$, $s_{ij}(x_{k})=x_{k}$, $\forall\,k\not=i,j$. Finally,
define the Dunkl–Uglov operators
$d_{i}^{\pm}$, acting on functions on $\mathbb{R}^{n}$, to be
$\displaystyle
d_{i}^{\pm}=\Delta_{i}^{\pm}+\sum_{j<i}\delta_{x_{i},x_{j}}-\sum_{j<i}\eta_{ji}^{\pm}s_{ij}+\sum_{j>i}\eta_{ij}^{\pm}s_{ij}.$
To simplify notation, set $u_{ij}^{\pm}:=\eta_{ij}^{\pm}s_{ij}$ if $i<j$, and
${\widetilde{\Delta}}_{i}^{\pm}=\Delta_{i}^{\pm}+\sum\limits_{j<i}\delta_{x_{i},x_{j}}$.
###### Lemma 2.14.
The operators $\\{u_{ij}^{\pm},\,1\leq i<j\leq n\\}$ satisfy the following
relations
$\displaystyle\big{[}u_{ij}^{\pm},u_{ik}^{\pm}+u_{jk}^{\pm}\big{]}+\big{[}u_{ik}^{\pm},u_{jk}^{\pm}\big{]}+\bigg{[}u_{ik}^{\pm},\sum_{j<i}\delta_{x_{i},x_{j}}\bigg{]}=0\qquad\text{if}\
\ i<j<k.$
From now on we assume that $x_{i}\in\mathbb{Z}$, $\forall\,i$, that is, we
will work with the restriction of all the operators defined at the beginning of
Example 2.28(c) to the subset $\mathbb{Z}^{n}\subset\mathbb{R}^{n}$. It is
easy to see that under the assumption $x_{i}\in\mathbb{Z}$, $\forall\,i$, we
have
$\displaystyle\Delta_{j}^{\pm}\eta_{ij}^{\pm}=(\eta_{ij}^{\pm}\mp\delta_{x_{i},x_{j}})\Delta_{j}^{\pm}.$
(2.9)
Moreover, using relations (2.12), (2.13) one can prove the following lemma.
###### Lemma 2.15.
* •
$[u_{ij}^{\pm},{\widetilde{\Delta}}_{i}^{\pm}+{\widetilde{\Delta}}_{j}^{\pm}]=0$,
* •
$[u_{ik}^{\pm},{\widetilde{\Delta}_{j}}^{\pm}]=\big{[}u_{ik}^{\pm},\sum\limits_{j<i}\delta_{x_{i},x_{j}}\big{]}$,
$i<j<k$.
###### Corollary 2.16.
* •
The operators $\\{u_{ij}^{\pm},\,1\leq i<j\leq n\\}$ and
${\widetilde{\Delta}}_{i}^{\pm}$, $i=1,\ldots,n$, satisfy the dynamical
classical Yang–Baxter relations
$\displaystyle\big{[}u_{ij}^{\pm},u_{ik}^{\pm}+u_{jk}^{\pm}\big{]}+\big{[}u_{ik}^{\pm},u_{jk}^{\pm}\big{]}+\big{[}u_{ik}^{\pm},{\widetilde{\Delta}}_{j}\big{]}=0\qquad\text{if}\
\ i<j<k.$
* •
The operators $s_{i}:=s_{i,i+1}$, $1\leq i<n$, and
${\widetilde{\Delta}}_{j}^{\pm}$, $1\leq j\leq n$, give rise to two
representations of the degenerate affine Hecke algebra ${\mathfrak{H}}_{n}$.
In particular, the Dunkl–Uglov operators mutually commute:
$[d_{i}^{\pm},d_{j}^{\pm}]=0$ [138].
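The commutativity $[d_{i}^{\pm},d_{j}^{\pm}]=0$ can be spot-checked directly from the displayed formula. A sketch for the “$+$” case with $n=3$, the operators acting pointwise on functions $\mathbb{Z}^{3}\to\mathbb{Q}$:

```python
from fractions import Fraction
from itertools import product

n = 3
def eta_plus(t):
    return 1 if t >= 0 else 0

def d(i, f):
    """Dunkl-Uglov operator d_i^+ (0-based index i)."""
    def g(x):
        y = list(x); y[i] += 1
        val = f(tuple(y))                                    # Delta_i^+
        val += sum(1 for j in range(i) if x[j] == x[i]) * f(x)
        for j in range(i):                                   # -eta_ji^+ s_ij
            w = list(x); w[i], w[j] = w[j], w[i]
            val -= eta_plus(x[j] - x[i]) * f(tuple(w))
        for j in range(i + 1, n):                            # +eta_ij^+ s_ij
            w = list(x); w[i], w[j] = w[j], w[i]
            val += eta_plus(x[i] - x[j]) * f(tuple(w))
        return val
    return g

f = lambda x: Fraction(x[0]**3 + 2*x[0]*x[1] + 5*x[2]**2 + x[1])
for x in product(range(-1, 3), repeat=n):
    for i in range(n):
        for j in range(i + 1, n):
            assert d(i, d(j, f))(x) == d(j, d(i, f))(x)
```

The grid includes points with coinciding coordinates, so the $\delta_{x_{i},x_{j}}$ terms are exercised as well.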
#### 2.1.3 Extended Kohno–Drinfeld algebra and Yangian Dunkl–Gaudin elements
###### Definition 2.17.
The extended Kohno–Drinfeld algebra is the associative algebra over
$\mathbb{Q}[\beta]$ generated by the elements $\\{z_{1},\ldots,z_{n}\\}$ and
$\\{y_{ij}\\}_{1\leq i\not=j\leq n}$ subject to the following set of relations
1. (i)
The elements $\\{y_{ij}\\}_{1\leq i\not=j\leq n}$ satisfy the Kohno–Drinfeld
relations
* •
$y_{ij}=y_{ji}$, $[y_{ij},y_{kl}]=0$ if $i$, $j$, $k$, $l$ are distinct,
* •
$[y_{ij},y_{ik}+y_{jk}]=0=[y_{ij}+y_{ik},y_{jk}]$ if $i<j<k$.
2. (ii)
The elements $z_{1},\ldots,z_{n}$ generate the free associative algebra
${\cal{F}}_{n}$.
3. (iii)
Crossing relations:
* •
$[z_{i},y_{jk}]=0$ if $i\not=j,k$, $[z_{i},z_{j}]=\beta[y_{ij},z_{i}]$ if
$i\not=j$.
To define the (Yangian) Dunkl–Gaudin elements, cf. [54], let us consider a set
of elements $\\{p_{ij}\\}_{1\leq i\not=j\leq n}$ subject to relations
* •
$p_{ij}+p_{ji}=\beta$, $[p_{ij},y_{kl}]=0=[p_{ij},z_{k}]$ for all $i$, $j$,
$k$, $l$,
* •
$p_{ij}p_{jk}=p_{ik}(p_{jk}-p_{ji})$ if $i<j<k$.
Let us set $u_{ij}=p_{ij}y_{ij}$, $i\not=j$, and define the (Yangian)
Dunkl–Gaudin elements as follows
$\displaystyle\theta_{i}=z_{i}+\sum_{j\not=i}u_{ij},\qquad i=1,\ldots,n.$
###### Proposition 2.18 (cf. [54, Lemma 3.5]).
The elements $\theta_{1},\ldots,\theta_{n}$ form a mutually commuting family.
Indeed, let $i<j$, then
$\displaystyle[\theta_{i},\theta_{j}]=[z_{i},z_{j}]+\beta[z_{i},y_{ij}]+p_{ij}[y_{ij},z_{i}+z_{j}]$
$\displaystyle\hphantom{[\theta_{i},\theta_{j}]=}{}+\sum_{k\not=i,j}\big{(}p_{ik}p_{jk}\big{[}y_{ij}+y_{ik},y_{jk}\big{]}+p_{ik}p_{ji}\big{[}y_{ij},y_{ik}+y_{jk}\big{]}\big{)}=0.$
A representation of the extended Kohno–Drinfeld algebra has been constructed
in [54]; namely, one can take
$\displaystyle y_{ij}:=T_{ij}^{(1)}T_{ji}^{(1)}-T_{jj}^{(1)}=y_{ji},\qquad
z_{i}:=\beta
T_{ii}^{(2)}-\frac{\beta}{2}T_{ii}^{(1)}\big{(}T_{ii}^{(1)}-1\big{)},$
$\displaystyle p_{ij}:=\frac{\beta q_{j}}{q_{i}-q_{j}},\qquad i\not=j,$
where $q_{1},\ldots,q_{n}$ stand for a set of mutually commuting quantum
parameters, and $\big{\\{}T_{ij}^{(s)}\big{\\}}_{1\leq i,j\leq n\atop
s\in\mathbb{Z}_{\geq 0}}$ denotes the set of generators of the Yangian
$Y({\mathfrak{gl}}_{n})$, see, e.g., [106].
A proof that the elements $\\{z_{i}\\}_{1\leq i\leq n}$ and
$\\{y_{ij}\\}_{1\leq i\not=j\leq n}$ satisfy the extended Kohno–Drinfeld
algebra relations is based on the following relations, see, e.g., [54, Section
3],
$\displaystyle\big{[}T_{ij}^{(1)},T_{kl}^{(s)}\big{]}=\delta_{il}T_{kj}^{(s)}-\delta_{jk}T_{il}^{(s)},\qquad
i,j,k,l=1,\ldots,n,\qquad s\in\mathbb{Z}_{\geq 0}.$
### 2.2 “Compatible” Dunkl elements, Manin matrices and algebras
related to weighted complete graphs $\boldsymbol{rK_{n}}$
Let us consider a collection of generators $\\{u_{ij}^{(\alpha)},\,1\leq
i,j\leq n,\,\alpha=1,\ldots,r\\}$, subject to the following relations
* •
either the unitarity (the case of sign “${+}$”) or the symmetry relations (the
case of sign “${-}$”)252525More generally, one can impose the $q$-symmetry
conditions $\displaystyle u_{ij}+qu_{ji}=0,\qquad 1\leq i<j\leq n,$ and ask
for relations among the local Dunkl elements which ensure the commutativity of
the global ones. As one might expect, the matrix
$Q:=\big{(}\theta_{j}^{(a)}\big{)}_{1\leq a\leq r\atop 1\leq j\leq n}$
composed of the local Dunkl elements should be a $q$-Manin matrix. See,
e.g., [25], or https://en.wikipedia.org/wiki/Manin.matrix for a definition and
basic properties of the latter.
$\displaystyle u_{ij}^{(\alpha)}\pm
u_{ji}^{(\alpha)}=0,\qquad\forall\,\alpha,i,j,$ (2.10)
* •
local $3$-term relations:
$\displaystyle
u_{ij}^{(\alpha)}u_{jk}^{(\alpha)}+u_{jk}^{(\alpha)}u_{ki}^{(\alpha)}+u_{ki}^{(\alpha)}u_{ij}^{(\alpha)}=0,\qquad
i,j,k\ \ \text{are distinct},\quad 1\leq\alpha\leq r.$ (2.11)
We define the global 3-term relations algebra $3T_{n,r}^{(\pm)}$ as a
“compatible product” of the local 3-term relations algebras. Namely, we require
that the elements
$\displaystyle
U_{ij}^{({\boldsymbol{\lambda}})}:=\sum_{\alpha=1}^{r}\lambda_{\alpha}u_{ij}^{(\alpha)},\qquad
1\leq i,j\leq n,$
satisfy the global 3-term relations
$\displaystyle
U_{ij}^{(\boldsymbol{\lambda})}U_{jk}^{(\boldsymbol{\lambda})}+U_{jk}^{(\boldsymbol{\lambda})}U_{ki}^{(\boldsymbol{\lambda})}+U_{ki}^{(\boldsymbol{\lambda})}U_{ij}^{(\boldsymbol{\lambda})}=0$
for all values of the parameters $\\{\lambda_{\alpha}\in\mathbb{R},\,1\leq\alpha\leq
r\\}$.
It is easy to check that our requirement is equivalent to the validity of the
following sets of relations among the generators
$\big{\\{}u_{ij}^{(\alpha)}\big{\\}}$
1. (a)
local $3$-term relations:
$u_{ij}^{(\alpha)}u_{jk}^{(\alpha)}+u_{jk}^{(\alpha)}u_{ki}^{(\alpha)}+u_{ki}^{(\alpha)}u_{ij}^{(\alpha)}=0$,
2. (b)
$6$-term crossing relations:
$\displaystyle
u_{ij}^{(\alpha)}u_{jk}^{(\beta)}+u_{ij}^{(\beta)}u_{jk}^{(\alpha)}+u_{jk}^{(\alpha)}u_{ki}^{(\beta)}+u_{jk}^{(\beta)}u_{ki}^{(\alpha)}+u_{ki}^{(\alpha)}u_{ij}^{(\beta)}+u_{ki}^{(\beta)}u_{ij}^{(\alpha)}=0,$
$i$, $j$, $k$ are distinct, $\alpha\not=\beta$.
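The passage from the global 3-term relations to conditions (a) and (b) is a direct expansion in the $\lambda_{\alpha}$: the $\lambda_{\alpha}^{2}$ coefficients give the local relations and the $\lambda_{\alpha}\lambda_{\beta}$ coefficients give the six-term crossing relations. A small free-algebra computation confirming this bookkeeping for $r=2$ (words of noncommuting generators are represented as tuples):

```python
from collections import defaultdict

# An element is a dict {(e1, e2, word): coeff}: (e1, e2) are the exponents
# of lambda_1, lambda_2 and word is a tuple of noncommuting generators.
def gen(i, j, alpha, e=(0, 0)):
    return {(e[0], e[1], ((i, j, alpha),)): 1}

def add(*els):
    out = defaultdict(int)
    for el in els:
        for key, c in el.items():
            out[key] += c
    return {key: c for key, c in out.items() if c}

def mul(x, y):
    out = defaultdict(int)
    for (e1, e2, w), c in x.items():
        for (f1, f2, v), d in y.items():
            out[(e1 + f1, e2 + f2, w + v)] += c * d
    return dict(out)

def U(i, j):  # U_ij = lambda_1 u_ij^(1) + lambda_2 u_ij^(2)
    return add(gen(i, j, 1, (1, 0)), gen(i, j, 2, (0, 1)))

i, j, k = 1, 2, 3
glob = add(mul(U(i, j), U(j, k)), mul(U(j, k), U(k, i)), mul(U(k, i), U(i, j)))

def local(a):  # lambda_a^2 part: the local 3-term relation
    e = (2, 0) if a == 1 else (0, 2)
    return add(mul(gen(i, j, a, e), gen(j, k, a)),
               mul(gen(j, k, a, e), gen(k, i, a)),
               mul(gen(k, i, a, e), gen(i, j, a)))

crossing = add(  # lambda_1 lambda_2 part: the six-term crossing relation
    mul(gen(i, j, 1, (1, 1)), gen(j, k, 2)), mul(gen(i, j, 2, (1, 1)), gen(j, k, 1)),
    mul(gen(j, k, 1, (1, 1)), gen(k, i, 2)), mul(gen(j, k, 2, (1, 1)), gen(k, i, 1)),
    mul(gen(k, i, 1, (1, 1)), gen(i, j, 2)), mul(gen(k, i, 2, (1, 1)), gen(i, j, 1)))

assert glob == add(local(1), local(2), crossing)
```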
Now let us consider local Dunkl elements
$\displaystyle\theta_{i}^{(\alpha)}:=\sum_{j\neq i}u_{ij}^{(\alpha)},\qquad
i=1,\ldots,n,\quad\alpha=1,\ldots,r.$
It follows from the local 3-term relations (2.11) that for a fixed
$\alpha\in[1,r]$ the local Dunkl elements
$\big{\\{}\theta_{i}^{(\alpha)}\big{\\}}_{1\leq i\leq n}$ either mutually
commute (the sign “$+$”) or pairwise anticommute (the sign “$-$”). Similarly,
the global 3-term relations imply that the global Dunkl elements
Dunkl elements
$\displaystyle\theta_{i}^{(\lambda)}:=\lambda_{1}\theta_{i}^{(1)}+\cdots+\lambda_{r}\theta_{i}^{(r)}=\sum_{j\not=i}U_{ij}^{(\lambda)},\qquad
i=1,\ldots,n,$
also either mutually commute (the case “$+$”) or pairwise anticommute (the
case “$-$”).
Now we are looking for a set of relations among the local Dunkl elements which
is a consequence of the commutativity (anticommutativity) of the global Dunkl
elements. It is quite clear that if $i<j$, then
$\displaystyle\big{[}\theta_{i}^{(\lambda)},\theta_{j}^{(\lambda)}\big{]}_{\pm}=\sum_{a=1}^{r}\lambda_{a}^{2}\big{[}\theta_{i}^{(a)},\theta_{j}^{(a)}\big{]}_{\pm}+\sum_{1\leq
a<b\leq
r}\lambda_{a}\lambda_{b}\big{(}\big{[}\theta_{i}^{(a)},\theta_{j}^{(b)}\big{]}_{\pm}+\big{[}\theta_{i}^{(b)},\theta_{j}^{(a)}\big{]}_{\pm}\big{)},$
and the commutativity (or anticommutativity) of the global Dunkl elements for
all $(\lambda_{1},\ldots,\lambda_{r})\in\mathbb{R}^{r}$ is equivalent to the
following set of relations
* •
$[\theta_{i}^{(a)},\theta_{j}^{(a)}]_{\pm}=0$,
* •
$[\theta_{i}^{(a)},\theta_{j}^{(b)}]_{\pm}+[\theta_{i}^{(b)},\theta_{j}^{(a)}]_{\pm}=0$,
$a<b$ and $i<j$, where by definition we set $[a,b]_{\pm}:=ab\mp ba$.
In other words, the matrix
$\varTheta_{n}:=\big{(}\theta_{i}^{(a)}\big{)}_{1\leq a\leq r\atop 1\leq i\leq
n}$ should be either a Manin matrix (the case “$+$”) or its super analogue
(the case “$-$”). Clearly, a similar construction can be applied to the
algebras studied in Section 2, I–III, and thus it produces some
interesting examples of Manin matrices. It is an interesting problem to
describe the algebra generated by the local Dunkl elements
$\big{\\{}\theta_{i}^{(a)}\big{\\}}_{1\leq a\leq r\atop 1\leq i\leq n}$ and the
commutative subalgebra generated by the global Dunkl elements inside the
former. It is also an interesting question whether the coefficients
$C_{1},\ldots,C_{n}$ of the column characteristic polynomial
$\operatorname{Det}^{\rm
col}|\varTheta_{n}-tI_{n}|=\sum\limits_{k=0}^{n}C_{k}t^{n-k}$ of the Manin
matrix $\varTheta_{n}$ generate a commutative subalgebra. For a definition of
the column determinant of a matrix, see, e.g., [25].
However, a closer look at this problem and at the question just posed requires
additional treatment and has been omitted from the present paper.
Here we are looking for “natural conditions” to be imposed on the set of
generators $\\{u_{ij}^{\alpha}\\}_{1\leq\alpha\leq r\atop 1\leq i,j\leq n}$ in
order to ensure that the local Dunkl elements satisfy the commutativity (or
anticommutativity) relations:
$\displaystyle\big{[}\theta_{i}^{(\alpha)},\theta_{j}^{(\beta)}\big{]}_{\pm}=0,\qquad\text{for
all}\ \ 1\leq i<j\leq n,\qquad 1\leq\alpha,\beta\leq r.$
The “natural conditions” we have in mind are
* •
locality relations:
$\displaystyle\big{[}u_{ij}^{(\alpha)},u_{kl}^{(\beta)}\big{]}_{\pm}=0,$
(2.12)
* •
twisted classical Yang–Baxter relations:
$\displaystyle\big{[}u_{ij}^{(\alpha)},u_{jk}^{(\beta)}\big{]}_{\pm}+\big{[}u_{ik}^{(\alpha)},u_{ji}^{(\beta)}\big{]}_{\pm}+\big{[}u_{ik}^{(\alpha)},u_{jk}^{(\beta)}\big{]}_{\pm}=0,$
(2.13)
if $i$, $j$, $k$, $l$ are distinct and $1\leq\alpha,\beta\leq r$.
Finally, we define a multiple analogue of the three term relations algebra,
denoted by $3T^{\pm}(rK_{n})$, to be the quotient of the global $3$-term
relations algebra $3T_{n,r}^{\pm}$ modulo the two-sided ideal generated by the
left hand sides of relations (2.12), (2.13) and that of the following
relations
* •
$\big{(}u_{ij}^{(\alpha)}\big{)}^{2}=0$,
$\big{[}u_{ij}^{(\alpha)},u_{ij}^{(\beta)}\big{]}_{\pm}=0$, for all $i\not=j$,
$\alpha\not=\beta$.
The outputs of this construction are
* •
a commutative (or anticommutative) quadratic algebra $3T^{(\pm)}(rK_{n})$
generated by the elements $\big{\\{}u_{ij}^{(\alpha)}\big{\\}}_{1\leq i<j\leq
n\atop\alpha=1,\ldots,r}$,
* •
a family of $nr$ either mutually commuting (the case “$+$”) or pairwise
anticommuting (the case “$-$”) local Dunkl elements
$\big{\\{}\theta_{i}^{(\alpha)}\big{\\}}_{i=1,\ldots,n\atop\alpha=1,\ldots,r}$.
We expect that the subalgebra generated by the local Dunkl elements in the
algebra $3T^{+}(rK_{n})$ is closely related (isomorphic for $r=2$) to the
coinvariant algebra of the diagonal action of the symmetric group
${\mathbb{S}}_{n}$ on the ring of polynomials
$\mathbb{Q}\big{[}X_{n}^{(1)},\ldots,X_{n}^{(r)}\big{]}$, where $X_{n}^{(j)}$
stands for the set of variables
$\big{\\{}x_{1}^{(j)},\ldots,x_{n}^{(j)}\big{\\}}$. The algebra
$3T^{-}(2K_{n})^{\rm anti}$ has been studied in [72] and [12]. In the present
paper we state only our old conjecture.
###### Conjecture 2.19 (A.N. Kirillov, 2000).
$\displaystyle{\rm Hilb}\big{(}3T^{-}(3K_{n})^{\rm
anti},t\big{)}=(1+t)^{n}(1+nt)^{n-2},$
where for any algebra $A$ we denote by $A^{\rm anti}$ the quotient of the
algebra $A$ by the two-sided ideal generated by the set of anticommutators
$\\{ab+ba\,|\,(a,b)\in A\times A\\}$.
According to an observation of M. Haiman [55], the number $2^{n}(n+1)^{n-2}$ is
expected to be the dimension of the space of triple coinvariants
of the symmetric group $\mathbb{S}_{n}$.
### 2.3 Miscellany
#### 2.3.1 Non-unitary dynamical classical Yang–Baxter algebra
$\boldsymbol{{\rm DCYB}_{n}}$
Let $\widetilde{{\cal A}_{n}}$ be the quotient of the algebra
${\mathfrak{F}}_{n}$ by the two-sided ideal generated by the relations (2.2),
(2.5) and (2.6). Consider elements
$\displaystyle\theta_{i}=x_{i}+\sum_{a\not=i}u_{ia}\qquad\text{and}\qquad{\bar{\theta_{j}}}=-x_{j}+\sum_{b\not=j}u_{bj},\qquad
1\leq i<j\leq n.$
Clearly, if $i<j$, then
$\displaystyle[\theta_{i},{\bar{\theta}_{j}}]+[x_{i},x_{j}]=\left[\sum_{k=1}^{n}x_{k},u_{ij}\right]+\sum_{k\not=i,j}w_{ikj},$
where the elements $w_{ijk}$, $i<j$, have been defined in Lemma 2.2, equation
(2.3).
Therefore the elements $\theta_{i}$ and ${\bar{\theta}_{j}}$ commute in the
algebra $\widetilde{{\cal A}_{n}}$.
In the case when $x_{i}=0$ for all $i=1,\ldots,n$, the relations
$\displaystyle w_{ijk}:=[u_{ij},u_{ik}+u_{jk}]+[u_{ik},u_{jk}]=0\qquad\text{if
$i$, $j$, $k$ are all distinct},$
are well-known as the non-unitary classical Yang–Baxter relations. Note that
for a given triple of pair-wise distinct $(i,j,k)$ one has in fact 6
relations. These six relations imply that $[\theta_{i},{\bar{\theta_{j}}}]=0$.
However, in general,
$\displaystyle[\theta_{i},\theta_{j}]=\biggl{[}\sum_{k\not=i,j}u_{ik},u_{ij}+u_{ji}\biggr{]}\not=0.$
Dynamical classical Yang–Baxter algebra ${\rm DCYB}_{n}$. In order to ensure
the commutativity relations among the Dunkl elements (2.1), i.e.,
$[\theta_{i},\theta_{j}]=0$ for all $i$, $j$, let us remark that if $i\not=j$,
then
$\displaystyle[\theta_{i},\theta_{j}]=[x_{i}+u_{ij},x_{j}+u_{ji}]+[x_{i}+x_{j},u_{ij}]+\left[u_{ij},\sum_{k=1}^{n}x_{k}\right]$
$\displaystyle\hphantom{[\theta_{i},\theta_{j}]=}{}+\sum_{k=1\atop
k\not=i,j}^{n}[u_{ij}+u_{ik},u_{jk}]+[u_{ik},u_{ji}]+[x_{i},u_{jk}]+[u_{ik},x_{j}]+[x_{k},u_{ij}].$
###### Definition 2.20.
Define dynamical non-unitary classical Yang–Baxter algebra ${\rm DNUCYB}_{n}$
to be the quotient of the free associative algebra
$\mathbb{Q}\langle\\{x_{i},\,1\leq i\leq n\\},\,\\{u_{ij}\\}_{1\leq
i\not=j\leq n}\rangle$ by the two-sided ideal generated by the following set
of relations
* •
zero curvature conditions:
$\displaystyle[x_{i}+u_{ij},x_{j}+u_{ji}]=0,\qquad 1\leq i\not=j\leq n,$
(2.14)
* •
conservation laws conditions:
$\displaystyle\left[u_{ij},\sum_{k=1}^{n}x_{k}\right]=0\qquad\text{for all}\ \
i\not=j.$
* •
crossing relations:
$\displaystyle[x_{i}+x_{j},u_{ij}]=0,\qquad i\not=j.$
* •
twisted dynamical classical Yang–Baxter relations:
$\displaystyle[u_{ij}+u_{ik},u_{jk}]+[u_{ik},u_{ji}]+[x_{i},u_{jk}]+[u_{ik},x_{j}]+[x_{k},u_{ij}]=0,$
$i$, $j$, $k$ are distinct.
It is easy to see that the twisted classical Yang–Baxter relations
$\displaystyle[u_{ij}+u_{ik},u_{jk}]+[u_{ik},u_{ji}]=0,\qquad i,j,k\ \
\text{are distinct},$ (2.15)
for a fixed triple of distinct indices $i$, $j$, $k$ contain in fact $3$
different relations whereas the non-unitary classical Yang–Baxter relations
$\displaystyle[u_{ij}+u_{ik},u_{jk}]+[u_{ij},u_{ik}]=0,\qquad i,j,k\ \ \text{are
distinct},$
contain $6$ different relations for a fixed triple of distinct indices $i$,
$j$, $k$.
###### Definition 2.21.
* •
Define dynamical classical Yang–Baxter algebra ${\rm DCYB}_{n}$ to be the
quotient of the algebra ${\rm DNUCYB}_{n}$ by the two-sided ideal generated by
the elements
$\displaystyle\sum_{k\not=i,j}[u_{ik},u_{ij}+u_{ji}]\qquad\text{for all}\ \
i\not=j.$
* •
Define classical Yang–Baxter algebra ${\rm CYB}_{n}$ to be the quotient of the
dynamical classical Yang–Baxter algebra ${\rm DCYB}_{n}$ by the set of
relations
$\displaystyle x_{i}=0\qquad\text{for}\ \ i=1,\dots,n.$
###### Example 2.22.
Define
$\displaystyle
p_{ij}(z_{1},\ldots,z_{n})=\begin{cases}\dfrac{z_{i}}{z_{i}-z_{j}}&\text{if
$1\leq i<j\leq n$},\vspace{1mm}\\\ -\dfrac{z_{j}}{z_{j}-z_{i}}&\text{if $n\geq
i>j\geq 1$}.\end{cases}$
Clearly, $p_{ij}+p_{ji}=0$. Now define operators $u_{ij}=p_{ij}s_{ij}$, and
the truncated Dunkl operators to be $\theta_{i}=\sum\limits_{j\not=i}u_{ij}$,
$i=1,\ldots,n$. All these operators act on the field of rational functions
$\mathbb{Q}(z_{1},\ldots,z_{n})$; the operator $s_{ij}=s_{ji}$ acts as the
exchange operator, namely, $s_{ij}(z_{i})=z_{j}$, $s_{ij}(z_{k})=z_{k}$,
$\forall\,k\not=i,j$, $s_{ij}(z_{j})=z_{i}$.
Note that this time one has
$\displaystyle p_{12}p_{23}=p_{13}p_{12}+p_{23}p_{13}-p_{13}.$
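Both claims of this example, the three-term relation for the $p_{ij}$ and the pairwise commutativity of the truncated Dunkl operators, can be spot-checked pointwise in exact arithmetic, with $p_{ij}$ taken verbatim from the displayed definition:

```python
from fractions import Fraction

n = 3
def p(i, j, z):  # the displayed definition (0-based indices)
    if i < j:
        return z[i] / (z[i] - z[j])
    return -z[j] / (z[j] - z[i])

def u(i, j, f):  # u_ij = p_ij s_ij
    def g(z):
        w = list(z); w[i], w[j] = w[j], w[i]
        return p(i, j, z) * f(tuple(w))
    return g

def theta(i, f):  # truncated Dunkl operator theta_i = sum_{j != i} u_ij
    return lambda z: sum(u(i, j, f)(z) for j in range(n) if j != i)

z0 = (Fraction(1), Fraction(2), Fraction(4))
# three-term relation p_12 p_23 = p_13 p_12 + p_23 p_13 - p_13
assert (p(0, 1, z0) * p(1, 2, z0)
        == p(0, 2, z0) * p(0, 1, z0) + p(1, 2, z0) * p(0, 2, z0) - p(0, 2, z0))

# pairwise commutativity, checked pointwise on a sample polynomial
f = lambda z: z[0]**2 * z[1] + 3*z[2] + z[1]*z[2]**3
for i in range(n):
    for j in range(i + 1, n):
        assert theta(i, theta(j, f))(z0) == theta(j, theta(i, f))(z0)
```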
It is easy to see that the operators $\\{u_{ij},\,1\leq i\not=j\leq n\\}$
satisfy relations (3.1), and therefore satisfy the twisted classical
Yang–Baxter relations (2.13). As a corollary we obtain that the truncated
Dunkl operators $\\{\theta_{i},\,i=1,\ldots,n\\}$ pairwise commute. Now
consider the Dunkl operators $D_{i}=\partial_{{z_{i}}}+h\theta_{i}$,
$i=1,\ldots,n$, where $h$ is a parameter. Clearly,
$[\partial_{{z_{i}}}+\partial_{{z_{j}}},u_{ij}]=0$, and therefore
$[D_{i},D_{j}]=0$, $\forall\,i,j$. It is easy to see that
$\displaystyle
s_{i,i+1}D_{i}-D_{i+1}s_{i,i+1}=h,\qquad[D_{i},s_{j,j+1}]=0\qquad\text{if}\ \
j\not=i,i+1.$
In such a manner we come to the well-known representation of the degenerate
affine Hecke algebra ${\mathfrak{H}}_{n}$.
#### 2.3.2 Dunkl and Knizhnik–Zamolodchikov elements
Assume that $\forall\,i$, $x_{i}=0$, and generators $\\{u_{ij},\,1\leq i<j\leq
n\\}$ satisfy the locality conditions (2.2) and the classical Yang–Baxter
relations
$\displaystyle[u_{ij},u_{ik}+u_{jk}]+[u_{ik},u_{jk}]=0\qquad\text{if}\ \ 1\leq
i<j<k\leq n.$
Let $y,z,t_{1},\ldots,t_{n}$ be parameters, consider the rational function
$\displaystyle F_{\rm CYB}(z;{\boldsymbol{t}}):=F_{\rm
CYB}(z;t_{1},\ldots,t_{n})=\sum_{1\leq i<j\leq
n}{(t_{i}-t_{j})u_{ij}\over(z-t_{i})(z-t_{j})}.$
Then
$\displaystyle[F_{\rm CYB}(z;{\boldsymbol{t}}),F_{\rm
CYB}(y;{\boldsymbol{t}})]=0\qquad\text{and}\qquad\operatorname{Res}_{z=t_{i}}F_{\rm
CYB}(z;{\boldsymbol{t}})=\theta_{i}.$
Now assume that a set of generators $\\{c_{ij},\,1\leq i\not=j\leq n\\}$
satisfy the locality and symmetry (i.e., $c_{ij}=c_{ji}$) conditions, and the
Kohno–Drinfeld relations:
$\displaystyle[c_{ij},c_{kl}]=0\qquad\text{if}\ \
\\{i,j\\}\cap\\{k,l\\}={\varnothing},$
$\displaystyle[c_{ij},c_{jk}+c_{ik}]=0=[c_{ij}+c_{ik},c_{jk}],\qquad i<j<k.$
Let $y,z,t_{1},\ldots,t_{n}$ be parameters, consider the rational function
$\displaystyle F_{\rm KD}(z;{\boldsymbol{t}}):=F_{\rm
KD}(z;t_{1},\ldots,t_{n})=\sum_{1\leq i\not=j\leq
n}{c_{ij}\over(z-t_{i})(t_{i}-t_{j})}=\sum_{1\leq i<j\leq
n}{c_{ij}\over(z-t_{i})(z-t_{j})}.$
Then
$\displaystyle[F_{\rm KD}(z;{\boldsymbol{t}}),F_{\rm
KD}(y;{\boldsymbol{t}})]=0\qquad\text{and}\qquad\operatorname{Res}_{z=t_{i}}F_{\rm
KD}(z;{\boldsymbol{t}})={\rm KZ}_{i},$
where
$\displaystyle{\rm KZ}_{i}=\sum_{j=1\atop j\not=i}^{n}{c_{ij}\over
t_{i}-t_{j}}$
denotes the truncated Knizhnik–Zamolodchikov element.
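Both residue statements amount to the partial fraction decompositions $F_{\rm CYB}(z;{\boldsymbol{t}})=\sum_{i}\theta_{i}/(z-t_{i})$ and $F_{\rm KD}(z;{\boldsymbol{t}})=\sum_{i}{\rm KZ}_{i}/(z-t_{i})$. Since both sides are linear in the $u_{ij}$ (respectively $c_{ij}$), replacing these generators by sample scalars suffices for an exact spot-check:

```python
from fractions import Fraction

t = [Fraction(1), Fraction(2), Fraction(4)]
u = {(0, 1): Fraction(3), (0, 2): Fraction(-5), (1, 2): Fraction(7, 2)}  # u_ij, i<j
c = {(0, 1): Fraction(2), (0, 2): Fraction(1, 3), (1, 2): Fraction(-4)}  # c_ij = c_ji

def uu(i, j):  # unitarity u_ji = -u_ij
    return u[(i, j)] if i < j else -u[(j, i)]

def F_cyb(z):
    return sum((t[i] - t[j]) * u[(i, j)] / ((z - t[i]) * (z - t[j]))
               for (i, j) in u)

def F_kd(z):
    return sum(c[min(i, j), max(i, j)] / ((z - t[i]) * (t[i] - t[j]))
               for i in range(3) for j in range(3) if i != j)

theta = [sum(uu(i, j) for j in range(3) if j != i) for i in range(3)]
kz = [sum(c[min(i, j), max(i, j)] / (t[i] - t[j]) for j in range(3) if j != i)
      for i in range(3)]

for z in (Fraction(7), Fraction(-3, 2), Fraction(10)):
    assert F_cyb(z) == sum(theta[i] / (z - t[i]) for i in range(3))
    assert F_kd(z) == sum(kz[i] / (z - t[i]) for i in range(3))
    # the two displayed expressions for F_KD agree
    assert F_kd(z) == sum(c[(i, j)] / ((z - t[i]) * (z - t[j])) for (i, j) in c)
```

Reading off the residue at $z=t_{i}$ from these decompositions gives $\theta_{i}$ and ${\rm KZ}_{i}$ respectively.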
#### 2.3.3 Dunkl and Gaudin operators
(a) Rational Dunkl operators. Consider the quotient of the algebra ${\rm
DCYB}_{n}$, see Definition 2.3, by the two-sided ideal generated by the elements
$\displaystyle\\{[x_{i}+x_{j},u_{ij}]\\}\qquad\text{and}\qquad\\{[x_{k},u_{ij}],\,k\not=i,j\\}.$
Clearly the Dunkl elements (2.1) mutually commute. Now let us consider the so-
called Calogero–Moser representation of the algebra ${\rm DCYB}_{n}$ on the
ring of polynomials $R_{n}:=\mathbb{R}[z_{1},\ldots,z_{n}]$ given by
$\displaystyle x_{i}(p(z))=\lambda{\partial p(z)\over\partial z_{i}},\qquad
u_{ij}(p(z))={1\over z_{i}-z_{j}}(1-s_{ij})p(z),\qquad p(z)\in R_{n}.$
The symmetric group ${\mathbb{S}}_{n}$ acts on the ring $R_{n}$ by means of
transpositions $s_{ij}\in{\mathbb{S}}_{n}$: $s_{ij}(z_{i})=z_{j}$,
$s_{ij}(z_{j})=z_{i}$, $s_{ij}(z_{k})=z_{k}$ if $k\not=i,j$.
In the Calogero–Moser representation the Dunkl elements $\theta_{i}$ become
the rational Dunkl operators [35], see Definition 1.1. Moreover, one has
$[x_{k},u_{ij}]=0$ if $k\not=i,j$, and
$\displaystyle x_{i}u_{ij}=u_{ij}x_{j}+{1\over
z_{i}-z_{j}}(x_{i}-x_{j}-u_{ij}),\qquad x_{j}u_{ij}=u_{ij}x_{i}-{1\over
z_{i}-z_{j}}(x_{i}-x_{j}-u_{ij}).$
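A minimal sympy check of the commutativity of the rational Dunkl operators, assuming the standard convention $D_{i}=\lambda\,\partial/\partial z_{i}+\sum_{j\not=i}(1-s_{ij})/(z_{i}-z_{j})$ built from the $x_{i}$ and $u_{ij}$ above; the value of $\lambda$ and the test polynomial are arbitrary choices of ours.

```python
import sympy as sp

z = sp.symbols('z1 z2 z3')
lam = sp.Rational(5, 7)        # an arbitrary value of the coupling constant

def swap(f, i, j):
    """Apply the transposition s_ij: exchange the variables z_i and z_j."""
    return f.subs({z[i]: z[j], z[j]: z[i]}, simultaneous=True)

def dunkl(f, i):
    """Rational Dunkl operator: D_i f = lam * df/dz_i + sum_{j != i} (f - s_ij f)/(z_i - z_j)."""
    out = lam * sp.diff(f, z[i])
    for j in range(len(z)):
        if j != i:
            # (1 - s_ij) f is antisymmetric in z_i, z_j, so the quotient is a polynomial
            out += sp.cancel((f - swap(f, i, j)) / (z[i] - z[j]))
    return sp.expand(out)

# check [D_i, D_j] = 0 on a sample polynomial
p = z[0]**3 * z[1] + 2 * z[1]**2 * z[2] + z[0] * z[2]
for i in range(3):
    for j in range(3):
        assert sp.expand(dunkl(dunkl(p, j), i) - dunkl(dunkl(p, i), j)) == 0
```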
(b) Gaudin operators. The Dunkl–Gaudin representation of the algebra ${\rm
DCYB}_{n}$ is defined on the field of rational functions
$K_{n}:=\mathbb{R}(q_{1},\ldots,q_{n})$ and given by
$\displaystyle x_{i}(f(q)):=\lambda{\partial f(q)\over\partial q_{i}},\qquad
u_{ij}={s_{ij}\over q_{i}-q_{j}},\qquad f(q)\in K_{n},$
but this time we assume that $w(q_{i})=q_{i}$, $\forall\,i\in[1,n]$ and for
all $w\in{\mathbb{S}}_{n}$. In the Dunkl–Gaudin representation the Dunkl
elements become the rational Gaudin operators, see, e.g., [108]. Moreover,
one has $[x_{k},u_{ij}]=0$ if $k\not=i,j$, and
$\displaystyle x_{i}u_{ij}=u_{ij}x_{j}-{u_{ij}\over q_{i}-q_{j}},\qquad
x_{j}u_{ij}=u_{ij}x_{i}+{u_{ij}\over q_{i}-q_{j}}.$
###### Comments 2.23.
It is easy to check that if $f\in\mathbb{R}[z_{1},\ldots,z_{n}]$, and
$x_{i}:={\frac{\partial}{\partial z_{i}}}$, then the following commutation
relations hold:
$\displaystyle x_{i}f=fx_{i}+{\partial f\over\partial z_{i}},\qquad
u_{ij}f=s_{ij}(f)u_{ij}+\partial_{z_{i},z_{j}}(f).$
Using these relations it is easy to check that in both cases
$({\boldsymbol{a}})$ and $({\bf b})$ the elementary symmetric polynomials
$e_{k}(x_{1},\ldots,x_{n})$ commute with all the generators
$\\{u_{ij}\\}_{1\leq i,j\leq n}$, and therefore with all the Dunkl
elements $\\{\theta_{i}\\}_{1\leq i\leq n}$. Let us stress that
$[\theta_{i},x_{k}]\not=0$ for all $1\leq i,k\leq n$.
###### Project 2.24.
Describe a commutative algebra generated by the Dunkl elements
$\\{\theta_{i}\\}_{1\leq i\leq n}$ and the elementary symmetric polynomials
$\\{e_{k}(x_{1},\ldots,x_{n})\\}_{1\leq k\leq n}$.
#### 2.3.4 Representation of the algebra $\boldsymbol{3T_{n}}$ on the free
algebra $\boldsymbol{\mathbb{Z}\langle t_{1},\ldots,t_{n}\rangle}$
Let ${\mathcal{F}}_{n}=\mathbb{Z}\langle t_{1},\ldots,t_{n}\rangle$ be the free
associative algebra over the ring of integers $\mathbb{Z}$, equipped with the
action of the symmetric group $\mathbb{S}_{n}$: $s_{ij}(t_{i})=t_{j}$,
$s_{ij}(t_{k})=t_{k}$, $\forall\,k\not=i,j$.
Define the action of $u_{ij}\in 3T_{n}$ on the set of generators of the
algebra $\mathcal{F}_{n}$ as follows
$\displaystyle u_{ij}(t_{k})=\delta_{i,k}t_{i}t_{j}-\delta_{j,k}t_{j}t_{i}.$
The action of the generator $u_{ij}$ on the whole algebra $\mathcal{F}_{n}$ is
defined by linearity and the twisted Leibniz rule:
$\displaystyle u_{ij}(1)=0,\qquad u_{ij}(a+b)=u_{ij}(a)+u_{ij}(b),\qquad
u_{ij}(ab)=u_{ij}(a)b+s_{ij}(a)u_{ij}(b).$
It is easy to see from (2.14) that
$\displaystyle s_{ij}u_{jk}=u_{ik}s_{ij},\qquad
s_{ij}u_{kl}=u_{kl}s_{ij}\qquad\text{if}\ \
\\{i,j\\}\cap\\{k,l\\}=\varnothing,\qquad u_{ij}+u_{ji}=0.$
Now let us consider the operator
$\displaystyle u_{ijk}:=u_{ij}u_{jk}-u_{jk}u_{ik}-u_{ik}u_{ij},\qquad 1\leq
i<j<k\leq n.$
###### Lemma 2.25.
$\displaystyle u_{ijk}(ab)=u_{ijk}(a)b+s_{ij}s_{jk}(a)u_{ijk}(b),\qquad
a,b\in\mathcal{F}_{n}.$
###### Lemma 2.26.
$\displaystyle u_{ijk}(a)=0\qquad\forall\,a\in\mathcal{F}_{n}.$
Indeed,
$\displaystyle
u_{ijk}(t_{i})=-u_{jk}(u_{ik}(t_{i}))-u_{ik}(u_{ij}(t_{i}))=-t_{i}u_{jk}(t_{k})-u_{ik}(t_{i})t_{j}=t_{i}(t_{k}t_{j})-(t_{i}t_{k})t_{j}=0,$
$\displaystyle
u_{ijk}(t_{k})=u_{ij}(u_{jk}(t_{k}))-u_{jk}(u_{ik}(t_{k}))=-u_{ij}(t_{k}t_{j})+u_{jk}(t_{k}t_{i})=t_{k}(t_{j}t_{i})-(t_{k}t_{j})t_{i}=0,$
$\displaystyle
u_{ijk}(t_{j})=u_{ij}(u_{jk}(t_{j}))-u_{ik}(u_{ij}(t_{j}))=u_{ij}(t_{j})t_{k}+t_{j}u_{ik}(t_{i})=-(t_{j}t_{i})t_{k}+t_{j}(t_{i}t_{k})=0.$
Therefore Lemma 2.26 follows from Lemma 2.25.
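The twisted Leibniz rule makes this representation easy to implement on words, so Lemma 2.26 can also be confirmed by brute force. The sketch below is our own encoding: words in $\mathcal{F}_{n}$ are tuples of generator indices (indexed from $0$ rather than $1$), and linear combinations are dicts mapping words to integer coefficients.

```python
from collections import defaultdict
from itertools import product

def s(word, i, j):
    """Transposition s_ij acting letterwise on a word (tuple of generator indices)."""
    m = {i: j, j: i}
    return tuple(m.get(k, k) for k in word)

def u(elt, i, j):
    """Operator u_ij on a Z-linear combination of words (dict word -> coeff):
    u_ij(t_k) = delta_ik t_i t_j - delta_jk t_j t_i on generators, extended by
    the twisted Leibniz rule u(ab) = u(a) b + s_ij(a) u(b)."""
    out = defaultdict(int)
    for word, coeff in elt.items():
        for pos, letter in enumerate(word):
            if letter == i:
                piece, sign = (i, j), 1
            elif letter == j:
                piece, sign = (j, i), -1
            else:
                continue
            # s_ij acts on the prefix, u_ij hits one letter, the suffix is untouched
            out[s(word[:pos], i, j) + piece + word[pos + 1:]] += sign * coeff
    return {w: c for w, c in out.items() if c}

def u_ijk(elt, i, j, k):
    """u_ijk = u_ij u_jk - u_jk u_ik - u_ik u_ij applied to elt."""
    out = defaultdict(int)
    for (a, b), (p, q), sign in (((i, j), (j, k), 1),
                                 ((j, k), (i, k), -1),
                                 ((i, k), (i, j), -1)):
        for w, co in u(u(elt, p, q), a, b).items():
            out[w] += sign * co
    return {w: c for w, c in out.items() if c}

# Lemma 2.26: u_ijk annihilates every short word
N = 4
for m in range(1, 4):
    for word in product(range(N), repeat=m):
        assert u_ijk({word: 1}, 0, 1, 2) == {}
```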
Let $\mathcal{F}_{n}^{\bullet}$ be the quotient of the free algebra
$\mathcal{F}_{n}$ by the two-sided ideal generated by elements
$t_{i}^{2}t_{j}-t_{j}t_{i}^{2}$, $1\leq i\not=j\leq n$. Since
$u_{i,j}^{2}(t_{i})=t_{i}t_{j}^{2}-t_{j}^{2}t_{i}$, one can define a
representation of the algebra $3T_{n}^{(0)}$ on the quotient
$\mathcal{F}_{n}^{\bullet}$. One can also define a representation of the
algebra $3T_{n}^{(0)}$ on $\mathcal{F}_{n}^{(0)}$, where
$\mathcal{F}_{n}^{(0)}$ denotes the quotient of the algebra $\mathcal{F}_{n}$
by the two-sided ideal generated by elements $\\{t_{i}^{2},\,1\leq i\leq
n\\}$. Note that
$(u_{i,k}u_{j,k}u_{i,j})(t_{k})=[t_{i}t_{j}t_{i},t_{k}]\not=0$ in the algebra
$\mathcal{F}_{n}^{(0)}$, but the elements $u_{i,j}u_{i,k}u_{j,k}u_{i,j}$,
$1\leq i<j<k\leq n$, which belong to the kernel of the Calogero–Moser
representation [72], act trivially on both algebras
$\mathcal{F}_{n}^{(0)}$ and $\mathcal{F}_{n}^{\bullet}$.
Note finally that the algebra $\mathcal{F}_{n}^{(0)}$ is Koszul and has
Hilbert series ${\rm Hilb}\big{(}\mathcal{F}_{n}^{(0)},t\big{)}={1+t\over
1-(n-1)t}$, whereas the algebra $\mathcal{F}_{n}^{\bullet}$ is not Koszul for
$n\geq 3$, and
$\displaystyle{\rm
Hilb}(\mathcal{F}_{n}^{\bullet},t)={1\over(1-t)(1-(n-1)t)(1-t^{2})^{n-1}}.$
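Since the defining ideal of $\mathcal{F}_{n}^{(0)}$ is monomial, the words with no equal adjacent letters form a $\mathbb{Z}$-basis, so the stated Hilbert series can be checked by brute-force counting (the ranges below are arbitrary; this does not, of course, check the Koszulity claim).

```python
from itertools import product

def dim_F0(n, k):
    """Dimension of the degree-k component of F_n^(0): words in n letters
    with no letter repeated in adjacent positions (t_i^2 = 0)."""
    if k == 0:
        return 1
    return sum(1 for w in product(range(n), repeat=k)
               if all(w[p] != w[p + 1] for p in range(k - 1)))

def series_coeff(n, k):
    """Coefficient of t^k in (1 + t)/(1 - (n - 1) t)."""
    return (n - 1) ** k + ((n - 1) ** (k - 1) if k >= 1 else 0)

for n in range(2, 6):
    for k in range(8):
        assert dim_F0(n, k) == series_coeff(n, k)
```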
In Appendix A.5 we apply the representation introduced in this section to the
study of relations in the subalgebra $Z_{n}^{(0)}$ of the algebra
$3T_{n}^{(0)}$ generated by the elements $u_{1,n},\ldots,u_{n-1,n}$. To
distinguish the generators $\\{u_{ij}\\}$ of the algebra $3T_{n}^{(0)}$ from
the operators $u_{ij}$ introduced in this section that act on it, in Appendix
A.5 we will denote the latter by $\nabla_{ij}:=u_{ij}$.
#### 2.3.5 Kernel of Bruhat representation
Bruhat representations, classical and quantum, of the algebras $3T_{n}^{(0)}$ and
$3QT_{n}$ can be seen as a connecting link between the commutative subalgebras
generated by either additive or multiplicative Dunkl elements in these
algebras, and the classical and quantum Schubert and Grothendieck calculi.
$(\bf Ia)$ Bruhat representation of the algebra $3T_{n}^{(0)}$, cf. [45]. Define
the action of $u_{i,j}\in 3T_{n}^{(0)}$ on the group ring of the symmetric group
$\mathbb{Z}[{\mathbb{S}}_{n}]$ as follows: let $w\in{\mathbb{S}}_{n}$, then
$\displaystyle u_{i,j}w=\begin{cases}ws_{ij}&\text{if \
$l(ws_{ij})=l(w)+1$},\\\ 0&\text{otherwise}.\end{cases}$
Let us recall that $s_{ij}\in{\mathbb{S}}_{n}$ denotes the transposition that
interchanges $i$ and $j$ and fixes each $k\not=i,j$; for each permutation
$u\in{\mathbb{S}}_{n}$, $l(u)$ denotes its length.
$(\bf Ib)$ Quantum Bruhat representation of the algebra $3QT_{n}$, cf. [45]. Let
us recall that the algebra $3QT_{n}$ is the quotient of the 3-term relations
algebra $3T_{n}$ by the two-sided ideal generated by the elements
$\displaystyle\\{u_{ij}^{2},\,|j-i|\geq
2\\}\cup\\{u_{i,i+1}^{2}-q_{i},\,i=1,\ldots,n-1\\}.$
Define the $\mathbb{Z}[q]$-linear action of $u_{i,j}\in 3QT_{n}$, $i<j$, on
the extended group ring of the symmetric group
$\mathbb{Z}[q][{\mathbb{S}}_{n}]$ as follows: let $w\in{\mathbb{S}}_{n}$, and
$q_{ij}=q_{i}q_{i+1}\cdots q_{j-1}$, $i<j$, then
$\displaystyle u_{i,j}w=\begin{cases}ws_{ij}&\text{if \
$l(ws_{ij})=l(w)+1$},\\\ q_{ij}ws_{ij}&\text{if \
$l(ws_{ij})=l(w)-l(s_{ij})$},\\\ 0&\text{otherwise}.\end{cases}$
Let us recall, see, e.g., [92], that in general one has
$\displaystyle l(ws_{ij})=\begin{cases}l(w)-2e_{ij}(w)-1&\text{if}\ \
w(i)>w(j),\\\ l(w)+2e_{ij}(w)+1&\text{if}\ \ w(i)<w(j).\end{cases}$
Here $e_{ij}(w)$ denotes the number of $k$ such that $i<k<j$ and $w(k)$ lies
between $w(i)$ and $w(j)$. In particular, $l(ws_{ij})=l(w)+1$ iff
$e_{ij}(w)=0$ and $w(i)<w(j)$; $l(ws_{ij})=l(w)-l(s_{ij})=l(w)-2(j-i)+1$ iff
$w(i)>w(j)$ and $e_{ij}(w)=j-i-1$ is the maximal possible value.
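The length formula above can be verified exhaustively for small symmetric groups (permutations in one-line notation, with $0$-indexed positions in place of the text's $1$-indexing):

```python
from itertools import permutations

def length(w):
    """Coxeter length of a permutation in one-line notation = number of inversions."""
    return sum(1 for a in range(len(w)) for b in range(a + 1, len(w)) if w[a] > w[b])

def e(w, i, j):
    """e_ij(w): number of positions k with i < k < j whose value w(k)
    lies strictly between w(i) and w(j)."""
    lo, hi = sorted((w[i], w[j]))
    return sum(1 for k in range(i + 1, j) if lo < w[k] < hi)

def right_mult(w, i, j):
    """w s_ij in one-line notation: swap the entries in positions i and j."""
    v = list(w)
    v[i], v[j] = v[j], v[i]
    return tuple(v)

for w in permutations(range(5)):
    for i in range(5):
        for j in range(i + 1, 5):
            delta = -2 * e(w, i, j) - 1 if w[i] > w[j] else 2 * e(w, i, j) + 1
            assert length(right_mult(w, i, j)) == length(w) + delta
```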
$({\bf II})$ Kernel of the Bruhat representation. It is not difficult to see
that the following elements of degree three and four belong to the kernel of
the Bruhat representation:
$\displaystyle({\bf IIa})\quad u_{i,j}u_{i,k}u_{i,j}\qquad\text{and}\qquad
u_{i,k}u_{j,k}u_{i,k}\qquad\text{if}\ \ 1\leq i<j<k\leq n;$
$\displaystyle({\bf IIb})\quad u_{i,k}u_{i,l}u_{j,l}\qquad\text{and}\qquad
u_{j,l}u_{i,l}u_{i,k};$ $\displaystyle({\bf IIc})\quad
u_{il}u_{ik}u_{jl}u_{il},\qquad u_{il}u_{ij}u_{kl}u_{il},\qquad
u_{ik}u_{il}u_{jk}u_{ik},$ $\displaystyle\hphantom{({\bf
IIc})\quad}u_{ij}u_{ik}u_{il}u_{ij},\qquad
u_{ik}u_{il}u_{ij}u_{ik}\qquad\text{if}\ \ 1\leq i<j<k<l\leq n.$
This observation motivates the following definition.
###### Definition 2.27.
The reduced 3-term relation algebra $3T_{n}^{\rm red}$ is defined to be the
quotient of the algebra $3T_{n}^{(0)}$ by the two-sided ideal generated by the
elements displayed in IIa–IIc above.
###### Example 2.28.
$\displaystyle{\rm Hilb}\big{(}3T_{3}^{\rm
red},t\big{)}=(1,3,4,1),\qquad\dim\big{(}3T_{3}^{\rm red}\big{)}=9,$
$\displaystyle{\rm Hilb}\big{(}3T_{4}^{\rm
red},t\big{)}=(1,6,19,32,19,6,1),\qquad\dim\big{(}3T_{4}^{\rm red}\big{)}=84,$
$\displaystyle{\rm Hilb}\big{(}3T_{5}^{\rm
red},t\big{)}=(1,10,55,190,383,370,227,102,34,8,1),\qquad\dim\big{(}3T_{5}^{\rm
red}\big{)}=1374.$
We expect that $\dim\big{(}3T_{n}^{\rm red}\big{)}_{{n\choose 2}-1}=2(n-1)$ if $n\geq 3$.
###### Theorem 2.29.
1. $1.$
The algebra $3T_{n}^{\rm red}$ is finite-dimensional, and its Hilbert
polynomial has degree ${n\choose 2}$.
2. $2.$
The maximal degree ${n\choose 2}$ component of the algebra $3T_{n}^{\rm red}$
has dimension one and is generated by any element which is equal to the product
$($in any order$)$ of all generators of the algebra $3T_{n}^{\rm red}$.
3. $3.$
The subalgebra in $3T_{n}^{\rm red}$ generated by the elements
$\\{u_{i,i+1},\,i=1,\ldots,n-1\\}$ is canonically isomorphic to the nil-
Coxeter algebra ${\rm NC}_{n}$. In particular, its Hilbert polynomial is equal
to $[n]_{t}!:=\prod\limits_{j=1}^{n}{(1-t^{j})\over 1-t}$, and the element
$\prod\limits_{j=1}^{n-1}\prod\limits_{a=j}^{1}u_{a,a+1}$ of degree ${n\choose
2}$ generates the maximal degree component of the algebra $3T_{n}^{\rm red}$.
4. $4.$
The subalgebra over $\mathbb{Z}$ generated by the Dunkl elements
$\\{\theta_{1},\ldots,\theta_{n}\\}$ in the algebra $3T_{n}^{\rm red}$ is
canonically isomorphic to the cohomology ring $H^{*}({\cal
F}l_{n},\mathbb{Z})$ of the type $A$ flag variety ${\cal F}l_{n}$.
A definition of the nil-Coxeter algebra ${\rm NC}_{n}$ can be found in Section
4.1.1. It is known, see [8] or Section 4.1.1, that the subalgebra generated by
the elements $\\{u_{i,i+1},\,i=1,\ldots,n-1\\}$ in the whole algebra
$3T_{n}^{(0)}$ is canonically isomorphic to the nil-Coxeter algebra ${\rm
NC}_{n}$ as well.
We expect that the kernel of the Bruhat representation of the algebra
$3T_{n}^{(0)}$ is generated by all monomials of the form
$u_{i_{1},j_{1}}\cdots u_{i_{k},j_{k}}$ such that the sequence of
transpositions $t_{i_{1},j_{1}},\ldots,t_{i_{k},j_{k}}$ does not correspond to
a path in the Bruhat graph of the symmetric group ${\mathbb{S}}_{n}$. For
example, if $1\leq i<j<k<l\leq n$, the elements $u_{i,k}u_{i,l}u_{j,l}$ and
$u_{j,l}u_{i,l}u_{i,k}$ do belong to the kernel of the Bruhat representation.
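The degree-three kernel elements of (IIa) and (IIb) can be checked by brute force on all of $\mathbb{Z}[{\mathbb{S}}_{4}]$. In the sketch below permutations are in one-line notation with $0$-indexed positions, so the text's $(i,j,k,l)$ becomes $(0,1,2,3)$.

```python
from itertools import permutations

def length(w):
    """Number of inversions = Coxeter length."""
    return sum(1 for a in range(len(w)) for b in range(a + 1, len(w)) if w[a] > w[b])

def u(elt, i, j):
    """Bruhat action on a Z-combination of permutations (dict w -> coeff):
    u_ij w = w s_ij if l(w s_ij) = l(w) + 1, and 0 otherwise."""
    out = {}
    for w, c in elt.items():
        v = list(w)
        v[i], v[j] = v[j], v[i]
        v = tuple(v)
        if length(v) == length(w) + 1:
            out[v] = out.get(v, 0) + c
    return {w: c for w, c in out.items() if c}

def act(word, w):
    """Apply the product u_{i1 j1} ... u_{im jm} to w (rightmost factor first)."""
    elt = {w: 1}
    for i, j in reversed(word):
        elt = u(elt, i, j)
    return elt

kernel_words = [((0, 1), (0, 2), (0, 1)),   # u_ij u_ik u_ij   (IIa)
                ((0, 2), (1, 2), (0, 2)),   # u_ik u_jk u_ik   (IIa)
                ((0, 2), (0, 3), (1, 3)),   # u_ik u_il u_jl   (IIb)
                ((1, 3), (0, 3), (0, 2))]   # u_jl u_il u_ik   (IIb)
for word in kernel_words:
    for w in permutations(range(4)):
        assert act(word, w) == {}
```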
###### Problem 2.30.
1. $1.$
The image of the Bruhat representation of the algebra $3T_{n}^{(0)}$ defines a
subalgebra
$\displaystyle\operatorname{Im}\big{(}3T_{n}^{(0)}\big{)}\subset\operatorname{End}_{\mathbb{Q}}(\mathbb{Q}[{\mathbb{S}}_{n}]).$
Is this image isomorphic to the algebra $3T_{n}^{\rm red}$? Compute the Hilbert
polynomials of the algebras $\operatorname{Im}\big{(}3T_{n}^{(0)}\big{)}$ and
$3T_{n}^{\rm red}$.
2. $2.$
Describe the image$($s$)$ of the affine nil-Coxeter algebra ${\widetilde{{\rm
NC}}}_{n}$, see Section 4.1.1, in the algebras $3T_{n}^{\rm red}$ and
$\operatorname{End}_{\mathbb{Q}}(\mathbb{Q}[{\mathbb{S}}_{n}])$.
#### 2.3.6 The Fulton universal ring [47], multiparameter quantum cohomology
of flag varieties [45] and the full Kostant–Toda lattice [29, 80]
Let $X_{n}=(x_{1},\ldots,x_{n})$ be a set of variables, and
$\displaystyle{\boldsymbol{g}}:={\boldsymbol{g}}^{(n)}=\\{g_{a}[b]\,|\,a\geq
1,\,b\geq 1,\,a+b\leq n\\}$
be a set of parameters; we put $\deg(x_{i})=1$ and $\deg(g_{a}[b])=b+1$, and
set $g_{k}[0]:=x_{k}$, $k=1,\ldots,n$. For a subset $S\subset[1,n]$ we denote
by $X_{S}$ the set of variables $\\{x_{i}\,|\,i\in S\\}$.
Let $t$ be an auxiliary variable, and denote by $M=(m_{ij})_{1\leq i,j\leq n}$
the $n\times n$ matrix with the following entries:
$\displaystyle m_{i,j}=\begin{cases}x_{i}+t&\text{if \ $i=j$},\\\
g_{i}[j-i]&\text{if \ $j>i$},\\\ -1&\text{if \ $i-j=1$},\\\ 0&\text{if \
$i-j>1$}.\end{cases}$
Let $P_{n}(X_{n},t)=\det(M)$.
###### Definition 2.31.
The Fulton universal ring ${\cal R}_{n-1}$ is defined to be the quotient
(here, for $P(t,X_{n})=\sum\limits_{k\geq 1}f_{k}(X_{n})t^{k}$ with polynomials
$f_{k}(X_{n})\in\mathbb{Q}[X_{n}]$, we denote by $\langle
P(t,X_{n})\rangle$ the ideal in the polynomial ring $\mathbb{Q}[X_{n}]$
generated by the coefficients $\\{f_{1},f_{2},\ldots\\}$)
$\displaystyle{\cal
R}_{n-1}=\mathbb{Z}\big{[}{\boldsymbol{g}}^{(n)}\big{]}[x_{1},\ldots,x_{n}]/\langle
P_{n}(X_{n},t)-t^{n}\rangle.$
###### Lemma 2.32.
Let $P_{n}(X_{n},t)=\sum\limits_{k=0}^{n}c_{k}(n)t^{n-k}$, $c_{0}(n)=1$. Then
$\displaystyle
c_{k}(n):=c_{k}\big{(}n;X_{n},{\boldsymbol{g}}^{(n)}\big{)}=\sum_{{1\leq
i_{1}<i_{2}<\cdots<i_{s}<n\atop j_{1}\geq 1,\ldots,j_{s}\geq 1}\atop
m:=\sum(j_{a}+1)\leq
n}\prod_{a=1}^{s}g_{i_{a}}[j_{a}]e_{k-m}\big{(}X_{[1,n]{\setminus}\bigcup\limits_{a=1}^{s}[i_{a},i_{a}+j_{a}]}\big{)},$
(2.16)
where in the summation we assume additionally that the sets
$[i_{a},i_{a}+j_{a}]:=\\{i_{a},i_{a}+1,\ldots,i_{a}+j_{a}\\}$, $a=1,\ldots,s$,
are pair-wise disjoint.
It is clear that ${\cal
R}_{n-1}=\mathbb{Z}\big{[}{\boldsymbol{g}}^{(n)}\big{]}[x_{1},\ldots,x_{n}]/\langle
c_{1}(n),\ldots,c_{n}(n)\rangle$. One can easily see that the coefficients
$c_{k}(n)$ and $g_{m}[k]$ satisfy the following recurrence relations [47]:
$\displaystyle
c_{k}(n)=c_{k}(n-1)+\sum_{a=0}^{k-1}g_{n-a}[a]c_{k-a-1}(n-a-1),\qquad
c_{0}(n)=1,$ $\displaystyle
g_{m}[k]=c_{k+1}(m+k)-c_{k+1}(m+k-1)-\sum_{a=0}^{k-1}g_{m+k-a}[a]c_{k-a}(m+k-a),$
$\displaystyle g_{m}[0]:=x_{m}.$
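The first recurrence can be checked symbolically for small $n$ (here $n=4$) by expanding the determinant with sympy; the symbol names `g_a_b` are our encoding of $g_{a}[b]$, and `c(k, m)` is taken to be zero outside $0\leq k\leq m$.

```python
import sympy as sp

n = 4
t = sp.symbols('t')
x = sp.symbols('x1:5')                                  # x1, x2, x3, x4
g = {(a, b): sp.Symbol('g_%d_%d' % (a, b))
     for a in range(1, n) for b in range(1, n) if a + b <= n}

def P(m):
    """P_m(X_m, t) = det M for the m x m matrix M defined above."""
    if m == 0:
        return sp.Integer(1)
    M = sp.zeros(m, m)
    for i in range(1, m + 1):
        for j in range(1, m + 1):
            if i == j:
                M[i - 1, j - 1] = x[i - 1] + t
            elif j > i:
                M[i - 1, j - 1] = g[(i, j - i)]
            elif i - j == 1:
                M[i - 1, j - 1] = -1
    return sp.expand(M.det())

def c(k, m):
    """Coefficient c_k(m) in P_m = sum_k c_k(m) t^{m-k}."""
    if k < 0 or k > m:
        return sp.Integer(0)
    return P(m).coeff(t, m - k)

def G(a, b):
    """g_a[b], with the convention g_k[0] := x_k."""
    return x[a - 1] if b == 0 else g[(a, b)]

for k in range(1, n + 1):
    rhs = c(k, n - 1) + sum(G(n - a, a) * c(k - a - 1, n - a - 1)
                            for a in range(k))
    assert sp.expand(c(k, n) - rhs) == 0
```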
On the other hand, let $\\{q_{ij}\\}_{1\leq i<j\leq n}$ be a set of (quantum)
parameters, and $e_{k}^{({\boldsymbol{q}})}(X_{n})$ be the multiparameter
quantum elementary polynomial introduced in [45]. We are interested in
describing a set of relations between the parameters $\\{g_{i}[j]\\}_{i\geq
1,j\geq 1\atop i+j\leq n}$ and the quantum parameters $\\{q_{ij}\\}_{1\leq
i<j\leq n}$ which imply that
$\displaystyle c_{k}(n)=e_{k}^{({\boldsymbol{q}})}(X_{n})\qquad\text{for}\quad
k=1,\ldots,n.$
To start with, let us recall the recurrence relations among the quantum
elementary polynomials, cf. [117]. To do so, consider the generating function
$\displaystyle E_{n}\big{(}X_{n};\\{q_{ij}\\}_{1\leq i<j\leq
n}\big{)}=\sum_{k=0}^{n}e_{k}^{({\boldsymbol{q}})}(X_{n})t^{n-k}.$
###### Lemma 2.33 ([41, 117]).
One has
$\displaystyle E_{n}\big{(}X_{n};\\{q_{ij}\\}_{1\leq i<j\leq
n}\big{)}=(t+x_{n})E_{n-1}\big{(}X_{n-1};\\{q_{ij}\\}_{1\leq i<j\leq
n-1}\big{)}$ $\displaystyle\hphantom{E_{n}\big{(}X_{n};\\{q_{ij}\\}_{1\leq
i<j\leq
n}\big{)}=}{}+\sum_{j=1}^{n-1}q_{jn}E_{n-2}\big{(}X_{[1,n-1]{\setminus}\\{j\\}};\\{q_{a,b}\\}_{1\leq
a<b\leq n-1\atop a\not=j,b\not=j}\big{)}.$
###### Proposition 2.34.
Parameters $\\{g_{a}[b]\\}$ can be expressed polynomially in terms of the quantum
parameters $\\{q_{ij}\\}$ and the variables $x_{1},\ldots,x_{n}$, in such a way
that
$\displaystyle c_{k}(n)=e_{k}^{({\boldsymbol{q}})}(X_{n}),\qquad\forall\,k,n.$
Moreover,
* •
$g_{a}[b]=\sum\limits_{k=1}^{a}q_{k,a+b}\prod\limits_{j=a+1}^{a+b-1}(x_{j}-x_{k})+\text{lower
degree polynomials in $x_{1},\ldots,x_{n}$}$,
* •
the quantum parameters $\\{q_{ij}\\}$ can be presented as rational functions
of the variables $x_{1},\ldots,x_{n}$ and polynomials in the
parameters $\\{g_{a}[b]\\}$ such that the equality
$c_{k}(n)=e_{k}^{({\boldsymbol{q}})}(X_{n})$ holds for all $k$, $n$.
In other words, the transformation
$\displaystyle\\{q_{ij}\\}_{1\leq i<j\leq
n}\longleftrightarrow\\{g_{a}[b]\\}_{a+b\leq n\atop a\geq 1,\,b\geq 1}$
defines a “birational transformation” between the algebra
$\mathbb{Z}[{\boldsymbol{g}}^{(n)}][X_{n}]/\langle
P_{n}(X_{n},t)-t^{n}\rangle$ and multiparameter quantum deformation of the
algebra $H^{*}({\cal{F}}l_{n},\mathbb{Z})$.
###### Example 2.35.
Clearly,
$\displaystyle g_{n-1}[1]=\sum_{j=1}^{n-1}q_{j,n},\quad n\geq
2\qquad\text{and}\qquad g_{n-2}[2]=\sum_{j=1}^{n-2}q_{jn}(x_{n-1}-x_{j}),\quad
n\geq 3.$
Moreover,
$\displaystyle
g_{1}[3]=q_{14}\big{(}(x_{2}-x_{1})(x_{3}-x_{1})+q_{23}-q_{12}\big{)}+q_{24}\big{(}q_{13}-q_{12}\big{)},$
$\displaystyle
g_{2}[3]=q_{15}\big{(}(x_{3}-x_{1})(x_{4}-x_{1})+q_{24}+q_{34}-q_{12}-q_{13}\big{)}$
$\displaystyle\hphantom{g_{1}[3]=}{}+q_{25}\big{(}(x_{3}-x_{2})(x_{4}-x_{2})+q_{14}+q_{34}-q_{12}-q_{23}\big{)}+q_{35}\big{(}q_{14}+q_{24}-q_{13}-q_{23}\big{)}.$
###### Comments 2.36.
The full Kostant–Toda lattice (FKTL for short) was introduced at the end
of the 1970s by B. Kostant and has since been extensively studied in both the
mathematical and physical literature. We refer the reader to the original
papers [29, 80] for the definition of the ${\rm FKTL}$ and its basic
properties. In the present paper we only want to point out a connection
between the Fulton universal ring, and hence the multiparameter deformation of
the cohomology ring of complete flag varieties, and the polynomial integrals of
motion of the FKTL. Namely, the polynomials
$c_{k}\big{(}n;X_{n},{\boldsymbol{g}}^{(n)}\big{)}$ defined by (2.16) coincide
with the polynomial integrals of motion of the FKTL.
It seems an interesting task to clarify the meaning of the ${\rm FKTL}$ rational
integrals of motion in the context of the universal Schubert calculus [47] and
the algebra $3HT_{n}(0)$, as well as the meaning of universal Schubert or
Grothendieck polynomials in the context of the Toda or full Kostant–Toda
lattices.
## 3 Algebra $\boldsymbol{3HT_{n}}$
Consider the twisted classical Yang–Baxter relation
$\displaystyle[u_{ij}+u_{ik},u_{jk}]+[u_{ik},u_{ji}]=0,$
where $i$, $j$, $k$ are distinct. Having in mind applications of the Dunkl
elements to combinatorics and algebraic geometry, we split the above relation
into two relations
$\displaystyle u_{ij}u_{jk}=u_{jk}u_{ik}-u_{ik}u_{ji}\qquad\text{and}\qquad
u_{jk}u_{ij}=u_{ik}u_{jk}-u_{ji}u_{ik}$ (3.1)
and impose the following unitarity constraint
$\displaystyle u_{ij}+u_{ji}=\beta,$
where $\beta$ is a central element. Summarizing, we come to the following
definition.
###### Definition 3.1.
Define algebra $3T_{n}(\beta)$ to be the quotient of the free associative
algebra
$\displaystyle\mathbb{Z}[\beta]\langle u_{ij},\,1\leq i<j\leq n\rangle$
by the set of relations
* •
locality: $u_{ij}u_{kl}=u_{kl}u_{ij}$ if $\\{i,j\\}\cap\\{k,l\\}=\varnothing$,
* •
$3$-term relations: $u_{ij}u_{jk}=u_{ik}u_{ij}+u_{jk}u_{ik}-\beta u_{ik}$, and
$u_{jk}u_{ij}=u_{ij}u_{ik}+u_{ik}u_{jk}-\beta u_{ik}$ if $1\leq i<j<k\leq n$.
|
# Spectral Prompt Tuning:
Unveiling Unseen Classes for Zero-Shot Semantic Segmentation
Wenhao Xu1, Rongtao Xu2,3, Changwei Wang2,3, Shibiao Xu1,
Li Guo1, Man Zhang1, Xiaopeng Zhang2. Shibiao Xu is the corresponding author.
###### Abstract
Recently, CLIP has found practical utility in the domain of pixel-level zero-
shot segmentation tasks. The present landscape features two-stage
methodologies beset by issues such as intricate pipelines and elevated
computational costs. While current one-stage approaches alleviate these
concerns and incorporate Visual Prompt Tuning (VPT) to uphold CLIP’s
generalization capacity, they still fall short in fully harnessing CLIP’s
potential for pixel-level unseen class demarcation and precise pixel
predictions. To further stimulate CLIP’s zero-shot dense prediction
capability, we propose SPT-SEG, a one-stage approach that improves CLIP’s
adaptability from image to pixel. Specifically, we initially introduce
Spectral Prompt Tuning (SPT), incorporating spectral prompts into the CLIP
visual encoder’s shallow layers to capture structural intricacies of images,
thereby enhancing comprehension of unseen classes. Subsequently, we introduce
the Spectral Guided Decoder (SGD), utilizing both high and low-frequency
information to steer the network’s spatial focus towards more prominent
classification features, enabling precise pixel-level prediction outcomes.
Through extensive experiments on two public datasets, we demonstrate the
superiority of our method over state-of-the-art approaches, performing well
across all classes and particularly excelling in handling unseen classes. Code
is available at: https://github.com/clearxu/SPT.
## Introduction
Semantic segmentation is one of the fundamental tasks in computer vision,
aiming to predict the class for each pixel in an image (Xu et al. 2023d,
2021b; Chen et al. 2021; Dong et al. 2021). Despite the existence of numerous
related works (Lu et al. 2020; Dong et al. 2020; Xu et al. 2023b; Wang et al.
2023a), the success of deep semantic segmentation models heavily relies on a
large amount of annotated training images, which requires significant efforts.
In recent years, interest has been growing in unsupervised or weakly
supervised semantic segmentation methods, including semi-supervised (Chen et
al. 2021), weakly supervised (Xu et al. 2023a, c; Wang et al. 2023b), few-shot
(Xie et al. 2021), and zero-shot semantic segmentation (Bucher et al. 2019;
Pastore et al. 2021; Xian et al. 2019). Among them, zero-shot semantic
segmentation tasks are particularly challenging and appealing, as they require
generating accurate semantic segmentation results with only the semantic
descriptions of the classes given.
Figure 1: (a) Our SPT-SEG method demonstrates outstanding performance across
all classes. (b) While yielding favorable results within the seen classes, it
exhibits relatively poorer performance in the unseen classes. (c) Its
performance is unsatisfactory across all classes.
To incorporate zero-shot capability into visual systems, researchers have
proposed large-scale vision-and-language pretraining models, such as CLIP
(Radford et al. 2021) and ALIGN (Jia et al. 2021a). Specifically, CLIP encodes
semantic concepts into model parameters by contrastive training on a massive
collection of image-text pairs, forming a zero-shot knowledge base for
downstream tasks. However, contrastive pretraining mainly focuses on capturing
image-level concepts. In CLIP, the training texts primarily describe the
global context of images, and the encoded image and text embeddings are used
together to compute contrastive losses. Consequently, CLIP is more suitable
for image-level classification tasks (Zhou et al. 2022b, a; Lu et al. 2022;
Zhang et al. 2022). The pretrained visual-language model CLIP (Radford et al.
2021) has recently found applications in various dense prediction tasks,
including semantic segmentation (Pakhomov et al. 2021), referring segmentation
(Wang et al. 2022), and object detection (Esmaeilpour et al. 2022). In the
zero-shot semantic segmentation task, approaches like zsseg (Xu et al. 2021a)
and Zegformer (Ding et al. 2022) adopt a similar strategy that requires two-
stage processing: first generating region proposals and then feeding the
cropped regions into CLIP for zero-shot classification. However, this strategy
involves encoding images twice, as shown in Fig. 1(c): once for proposal
generation and again for CLIP encoding of each proposal. This design introduces additional
computational overhead and fails to fully leverage the knowledge in the CLIP
encoder to guide the proposal generation stage. To streamline the process,
ZegCLIP (Zhou et al. 2023) introduces a one-stage approach by incorporating
visual prompt tuning into CLIP, thereby extending CLIP’s zero-shot capabilities
from image-level to pixel-level.
The inclusion of Visual Prompt Tuning (VPT) in CLIP significantly enhances its
downstream task generalization with few learnable parameters. However, since
the original CLIP’s training primarily revolves around image-level contrastive
learning, its features tend to emphasize only the most discriminative parts of
objects. Even with the introduction of VPT, this bias inherited from
image-level contrastive pre-training persists. Consequently, it leads to
incomplete and biased segmentation in dense prediction tasks.
Based on the aforementioned observations, we believe that further enhancing
the image-to-pixel adaptability of CLIP (Radford et al. 2021) would contribute
to improved zero-shot segmentation performance. Therefore, we propose an
innovative one-stage method called SPT-SEG, as shown in Fig. 1(a). SPT-SEG
differs from plain one-stage methods, as depicted in Fig. 1(b). In our
approach, we integrate spectral cues into the shallow layers of the CLIP
visual encoder, which provides additional structural information that enhances
the model’s comprehension of various object components. We also utilize high-
frequency and low-frequency information to guide the alignment of text and
pixels, directing the network’s spatial focus towards more salient
classification features. The synergy of these two designs enhances the model’s
semantic understanding and reasoning capabilities, effectively addressing the
issues of inadequate pixel generalization and incomplete segmentation present
in the current CLIP-based zero-shot semantic segmentation methods.
In summary, our contributions are listed as follows:
* •
We introduce Spectral Prompt Tuning (SPT), which builds upon VPT by
incorporating a small set of learnable spectral parameters. These parameters
are integrated into the shallow layers of the CLIP visual encoder to introduce
spectral information.
* •
We propose the Spectral Guided Decoder (SGD) layer, which is a novel component
that utilizes high-frequency and low-frequency information to guide the
matching process between textual and pixel representations.
* •
We comprehensively assess our method on two public datasets, and the results
clearly show that our approach significantly surpasses state-of-the-art
methods.
Figure 2: Overview of our proposed SPT-SEG. The main contribution of our work
lies in two simple but effective designs (Red marks a,b in the figure): (a)
Spectral prompt tuning which adds learnable spectral prompts to the first two
layers of the CLIP’s visual encoder; (b) Spectral guided decoder which
utilizes high- and low-frequency feature information to guide the text to
match with pixels, and decodes the predicted results.
## Related Work
Vision-Language Model. Extensive research has been conducted on Visual-
Language Models (VLM) (Hong et al. 2021; Huang et al. 2021; Kamath et al. 2021;
Kim, Son, and Kim 2021), showcasing significant advancements in downstream
vision tasks, especially in settings with unannotated or restricted data.
These tasks encompass diverse areas such as image retrieval (Liu et al. 2021),
dense prediction (Rao et al. 2022), visual referring expression (Wang et al.
2022), and visual question answering (Jiang, Liu, and Zheng 2022). CLIP
(Radford et al. 2021) is widely recognized as one of the most popular vision-
language models. It is pretrained using contrastive learning on a massive
dataset of 400 million text-image pairs. ALIGN (Jia et al. 2021b) utilized an
even larger dataset, comprising 1.8 billion pairs, for pre-training its model.
However, this larger dataset also introduced a significant amount of noise. In
more recent works, CoCa (Yu et al. 2022) and Beit-V3 (Wang et al. 2023c) have
further emphasized the superior performance of VLM pre-trained features.
Prompt Tuning. The concept of prompts originated from natural language
processing and is mainly used in VLM to enhance its understanding of
downstream specific tasks. By providing prompts, we can avoid massive
parameter learning for VLM and instead use it as a fixed knowledge base,
focusing only on task-relevant information. These prompts can be manually
created for downstream tasks or automatically learned during fine-tuning. Full
fine-tuning and linear probe (Gao et al. 2021) are two typical methods for
adapting the VLM (i.e. CLIP) to downstream tasks. Full fine-tuning leads to a
reduced VL representation of previously learned, while linear probe limits the
zero-shot capability of CLIP. Inspired by the prompt learning in NLP, many
works propose to adapt VLM by adding learnable tokens during end-to-end
training. CoOp (Zhou et al. 2022b) introduced continuous prompt learning,
where a set of continuous vectors are optimized end-to-end with down-stream
supervision . Additionally, learnable prompts are applied by CoOp on the text
encoder of CLIP to replace sub-optimal hand-crafted templates. Co-CoOp (Zhou
et al. 2022a) highlights the poor performance of CoOp on novel classes and
addresses the generalization problem by explicitly conditioning the prompts on
image instances. Recently, prompting (Jia et al. 2022; Sandler et al. 2022)
has been adapted to vision tasks. (Sandler et al. 2022) proposes memory tokens,
a set of learnable embedding vectors for each transformer layer. VPT
(Jia et al. 2022) proposes similar ideas and investigates the generality and
feasibility of visual prompting via extensive experiments spanning multiple
kinds of recognition tasks across multiple domains and backbone architectures.
Our research further extends the paradigm of visual prompt learning by
introducing spectral prompt, addressing the limitations of previous visual
prompt learning methods in fully leveraging the structural information of
images and their limited adaptability to pixel-level tasks.
Zero-shot Semantic Segmentation. It remains a challenging task to achieve
zero-shot semantic segmentation due to the presence of an imbalance problem in
seen classes. Previous studies such as SPNet (Xian et al. 2019), ZS3 (Bucher
et al. 2019), CaGNet (Gu et al. 2020), SIGN (Cheng et al. 2021), Joint (Baek,
Oh, and Ham 2021), and STRICT (Pastore et al. 2021) adopt strategies to improve
the generalization of semantic mappings from seen to unseen classes. Since the
popular pre-trained vision-language model CLIP has shown powerful zero-shot
classification capabilities, it has recently been applied to zero-shot semantic
segmentation as well. A two-stage paradigm, represented by Zegformer (Ding et
al. 2022) and zsseg (Xu et al. 2021a), leverages CLIP to classify individual
regions produced by a comprehensive proposal generator and then integrates the
resulting predictions. Although effective, this design requires two image
encoding processes, resulting in expensive computational costs. In
order to simplify the pipeline of the two stages, ZegCLIP (Zhou et al. 2023)
proposed a one-stage method that transfers CLIP’s powerful generalization
ability from images to pixel-level classification. In this work, we use a one-
stage method and achieve outstanding zero-shot segmentation performance
through two effective designs.
## Method
### Problem Definition
We adopt the generalized zero-shot semantic segmentation (GZLSS) setting (Xian
et al. 2019), which requires segmenting both seen classes $\mathcal{C}^{s}$ and
unseen classes $\mathcal{C}^{u}$ after training only on a dataset with pixel
annotations of the seen part. During training, the model generates per-pixel classification
results based on the semantic descriptions of all seen classes. During
testing, the model is evaluated on both seen and unseen classes. It is
important to note that $\mathcal{C}^{s}\cap\mathcal{C}^{u}=\varnothing$ and that
the labels of $\mathcal{C}^{u}$ are not available during training.
### SPT-SEG
The architecture of SPT-SEG is illustrated in Fig. 2. The basic one-stage
pipeline comprises four key components: the CLIP encoder, which incorporates
the text and visual encoders; the relationship descriptor between the [cls]
token and the text embedding; a decoder; and a loss function. Our enhancements
focus on two pivotal components: (1) an innovative Spectral Prompt
Tuning approach within the visual encoder, aimed at extracting structural
insights to bolster CLIP’s adaptability to dense prediction tasks; (2) a
Spectral Guided Decode Layer in the decoder, which adeptly
captures high- and low-frequency features specific to the task.
Figure 3: Overview of our proposed Spectral-Prompt Tuning. During training on
downstream tasks, only the parameters of prompts and the linear head are
updated while the whole Transformer encoder is frozen.
#### Spectral Prompt Tuning
Prompt tuning is a recently proposed fine-tuning technique that offers a
valuable approach to adapt pre-trained transformer models to target domains
(Xing et al. 2022). However, fine-tuning zero-shot segmentation models solely
on a limited set of seen classes often leads to overfitting, because the
optimization process disregards knowledge about visual concepts that cannot be
obtained from the training set. To address this issue, Visual Prompt Tuning (VPT) (Jia
et al. 2022) has emerged as a potential solution. VPT introduces a small
number of task-specific learnable parameters in the input space while keeping
the backbone frozen during downstream training. While VPT has shown promising
results in certain cases, it does not fully leverage the intrinsic properties
and structural characteristics of images, which may not be fully manifested in
the spatial domain, thereby limiting its effectiveness in handling structure-
aware tasks.
To address this limitation, we propose the Spectral Prompt Tuning (SPT)
method, as shown in Fig. 3. SPT extends the concept of VPT by incorporating
prompt parameters learned from a spectral perspective.
In contrast to VPT’s exclusive reliance on visual prompts for fine-tuning, SPT
capitalizes on frequency domain features to offer supplementary understanding
of intricate attributes and structural characteristics. The features learned
by SPT in the spectrum allow it to better capture and distinguish subtle
visual features of different classes, even for those classes that do not have
direct examples in the training data. In this way, when the model encounters
images of completely new classes, it can extract common information about
these classes from the spectrum features, enabling more accurate segmentation.
This ability can alleviate the “partial” or “ambiguous” segmentation issues
that occur in zero-shot scenarios, thus ensuring a more precise capture of
unknown classes.
The input embeddings from the $l$-th layer of the image encoder in the CLIP
model are denoted as
$\left\\{\mathbf{g}^{l},\mathbf{h}_{1}^{l},\mathbf{h}_{2}^{l},\cdots,\mathbf{h}_{N}^{l}\right\\}$.
Here, $\mathbf{g}^{l}$ represents the embedding for the [cls] token, and
$\mathbf{H}^{l}=\left\\{\mathbf{h}_{1}^{l},\mathbf{h}_{2}^{l},\cdots,\mathbf{h}_{N}^{l}\right\\}$
corresponds to the embeddings of image patches. In the context of SPT, the
CLIP image encoder’s token sequences are extended with learnable tokens
$\mathbf{V}^{l}=\left\\{\mathbf{v}_{1}^{l},\mathbf{v}_{2}^{l},\cdots,\mathbf{v}_{M}^{l}\right\\}$
in each layer. Furthermore, learnable spectral prompts
$\mathbf{S}^{l}=\left\\{\mathbf{s}_{1}^{l},\mathbf{s}_{2}^{l},\cdots,\mathbf{s}_{N}^{l}\right\\}$
are added in the first two layers. These additions enhance the model’s ability
to process image features at multiple levels of abstraction.
$\mathbf{S}^{l}$ is calculated from $\mathbf{H}^{l}$ and $\mathbf{g}^{l}$, and
a set of learnable filter parameters $\mathbf{w}_{f}$, the process can be
expressed as:
$\displaystyle\mathbf{S}^{l}=\operatorname{\mathcal{F}^{-1}}(\operatorname{\mathcal{F}}(\mathbf{H}^{l}\odot\mathbf{g}^{l})\odot\mathbf{w}_{f}),$
(1)
where $\mathcal{F}$ is the 2D fast Fourier transform (FFT) and
$\mathcal{F}^{-1}$ is the inverse FFT (IFFT). Then, when $l\leq 2$ the layer
processes the input tokens as:
$\displaystyle\left[\mathbf{g}^{l},{}_{-},\mathbf{H}^{l}\right]=\operatorname{Layer}^{l}\left(\left[\mathbf{g}^{l-1},\mathbf{V}^{l-1},\mathbf{H}^{l-1}+\mathbf{S}^{l-1}\right]\right),$
(2)
and when $l>2$ the transformer layer processes the input tokens as:
$\displaystyle\left[\mathbf{g}^{l},{}_{-},\mathbf{H}^{l}\right]=\operatorname{Layer}^{l}\left(\left[\mathbf{g}^{l-1},\mathbf{V}^{l-1},\mathbf{H}^{l-1}\right]\right).$
(3)
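As a concrete illustration, the spectral-prompt computation of Eq. (1) can be sketched in a few lines of NumPy; in the actual model $\mathbf{w}_{f}$ is a learnable (PyTorch) parameter, and the shapes and the identity-filter check below are illustrative assumptions:

```python
import numpy as np

def spectral_prompt(H, g, w_f):
    # S = IFFT( FFT(H ⊙ g) ⊙ w_f ), Eq. (1)
    x = H * g                      # modulate patch embeddings by the [cls] token
    X = np.fft.fft2(x)             # 2D FFT over the (N, D) grid
    S = np.fft.ifft2(X * w_f)      # apply the learnable filter, then invert
    return S.real                  # keep the real part as the spectral prompt

# toy shapes: N = 196 patches, D = 768 (ViT-B/16)
N, D = 196, 768
rng = np.random.default_rng(0)
H = rng.standard_normal((N, D))
g = rng.standard_normal(D)
w_f = np.ones((N, D))              # identity filter: S should reduce to H ⊙ g
S = spectral_prompt(H, g, w_f)
assert np.allclose(S, H * g)
```

With the all-ones filter the FFT/IFFT pair cancels, which is a convenient sanity check that the frequency-domain filtering is wired correctly.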
#### Spectral Guided Decode Layer
In practical semantic segmentation applications, high-quality segmentation
results are crucial for the success of the task. Recent work (Patro,
Namboodiri, and Agneeswaran 2023) combined spectral layers with multi-head
attention in a transformer architecture to capture relevant features in
initial layers. LiTv2 (Pan, Cai, and Zhuang 2022) introduced a novel attention
mechanism that separately processes high and low-frequency components in
attention layers, capturing local and global relationships effectively in
classification and segmentation tasks. Drawing inspiration from these
insights, we propose an innovative decoding method, as shown in Fig. 2(b),
that introduces frequency-domain features during the decoding stage and
significantly enhances segmentation performance. Firstly, the
frequency domain-guided decoder can balance the attention on small details and
global structure, enabling the model to focus on both local and overall
features simultaneously. Secondly, guided by frequency domain features, the
decoder can capture object boundaries and textures more accurately, thereby
improving the precision of the segmentation results. Most importantly, this
decoder exhibits stronger generalization ability on unseen classes, which is
crucial for unknown situations in real-world applications. The design
comprises the following steps:
(1) The high-frequency branch captures fine-grained local dependencies through
local window self-attention, while the low-frequency branch applies average
pooling to each window, obtaining low-frequency signals that capture the
global dependencies of the input. This high- and low-frequency capturing is
built on the multi-head self-attention (MSA) mechanism, which allows for
capturing distant relations between positions in the input sequence
$\mathbf{X}\in\mathbb{R}^{N\times D}$. Here, $N$ is the length of the
input sequence, and $D$ represents the hidden dimension. To achieve this, we
divide the $N_{h}$ heads in MSA into two groups with a split ratio $\alpha$:
$\alpha N_{h}$ heads are used for the high-frequency branch, and
the remaining $(1-\alpha)N_{h}$ heads are utilized for the low-frequency
branch. The high-frequency branch partitions the input $\mathbf{X}$ into
non-overlapping ($3\times 3$) windows, applies self-attention within each
window, and concatenates the head outputs as follows:
$\mathrm{MSA_{\alpha}}(\hat{\mathbf{X}})=\underset{h\in[\alpha
N_{h}]}{\mathrm{Concat}}[\mathrm{SA}_{h}(\hat{\mathbf{X}})],$ (4)
where $\mathrm{SA}_{h}(\hat{\mathbf{X}})$ denotes the output of the $h$-th
self-attention head, and note that $\hat{\mathbf{X}}$ denotes the input with
the non-overlapping window already divided. Meanwhile, the low-frequency
branch utilizes average pooling to extract low-frequency signals within each
window, and its computation process can be expressed as:
$\mathrm{MSA_{1-\alpha}}(\hat{\mathbf{X}})=\underset{h\in[(1-\alpha)N_{h}]}{\mathrm{Concat}}[\mathrm{SA}_{h}(\mathrm{AvgPool}(\hat{\mathbf{X}}))],$
(5)
Finally, the overall output is obtained by concatenating the outputs from each
branch as follows:
$\mathbf{z}=[\mathrm{MSA_{\alpha}}(\hat{\mathbf{X}});\mathrm{MSA_{1-\alpha}}(\hat{\mathbf{X}})],$
(6)
where $[\cdot]$ denotes the concatenation operation.
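The head-splitting scheme of Eqs. (4)-(6) can be sketched as below. This is a simplified NumPy illustration only: windowed attention is replaced by plain single-head attention with identity projections, and the pooling window size is a hypothetical choice, not the paper's $3\times 3$ spatial windows:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    # single-head scaled dot-product attention with identity Q/K/V projections
    d = X.shape[-1]
    return softmax(X @ X.T / np.sqrt(d)) @ X

def hilo_attention(X, n_heads=8, alpha=0.5, pool=4):
    """Split N_h heads with ratio alpha: the first alpha*N_h heads attend over
    the raw tokens (high-frequency branch, Eq. 4); the rest attend over
    average-pooled tokens (low-frequency branch, Eq. 5) and are broadcast
    back; outputs are concatenated (Eq. 6). Assumes pool divides N."""
    N, D = X.shape
    d_head = D // n_heads
    hi_heads = int(alpha * n_heads)
    outs = []
    for h in range(n_heads):
        Xh = X[:, h * d_head:(h + 1) * d_head]
        if h < hi_heads:                       # high-frequency heads
            outs.append(self_attention(Xh))
        else:                                  # low-frequency heads
            pooled = Xh.reshape(N // pool, pool, d_head).mean(axis=1)
            outs.append(np.repeat(self_attention(pooled), pool, axis=0))
    return np.concatenate(outs, axis=1)

X = np.random.default_rng(1).standard_normal((16, 64))
z = hilo_attention(X)
assert z.shape == X.shape
```

The split ratio $\alpha$ trades local detail (more high-frequency heads) against global context (more low-frequency heads) at fixed total head count.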
(2) We emphasize task-relevant tokens and channels through frequency-domain
feature extraction. Specifically, we perform frequency-domain feature
extraction on $\mathbf{z}\in\mathbb{R}^{N\times D}$ to identify task-related
tokens and channels, obtaining the output as:
$\displaystyle\mathbf{\hat{z}}=P\cdot\text{sim}(\mathbf{z},\xi)\cdot\mathbf{z},$
(7)
where $\xi\in\mathbb{R}^{D}$ and $P\in\mathbb{R}^{D\times D}$ are task-
specific parameters, and $\text{sim}(\cdot,\cdot)$ denotes the cosine
similarity, ranging in $[0,1]$. The resulting $\hat{\mathbf{z}}$ can be
written as
$[\hat{\mathbf{z}}_{1},\hat{\mathbf{z}}_{2},...,\hat{\mathbf{z}}_{N}]\in\mathbb{R}^{N\times
D}$, where $\hat{\mathbf{z}}_{j}$ denotes the embedding of the $j$-th patch.
The matrix
$\mathbf{t}=[\mathbf{t}^{1},\mathbf{t}^{2},...,\mathbf{t}^{C}]\in\mathbb{R}^{C\times
D}$ represents the $C$ classes, where $D$ is the feature dimension of the CLIP
model, $\mathbf{t}^{i}$ denotes the representation of the $i$-th class,
and [cls] corresponds to the global feature $\mathbf{g}\in\mathbb{R}^{D}$.
The relationship descriptor can be
represented as:
$\displaystyle\mathbf{\hat{t}}=\phi(\mathbf{[t\cdot g;t]}),$ (8)
where $\phi(\cdot)$ projects $\mathbf{[t\cdot g;t]}$ to the same dimension as
$\hat{\mathbf{z}}$.
Semantic masks are calculated using matrix product:
$\mathbf{Masks}=\mathbf{\hat{t}}\cdot\hat{\mathbf{z}}^{T}\in\mathbb{R}^{C\times
N},$ (9)
The final segmentation results are obtained by applying the $Argmax$ operation
along the class dimension of $\mathbf{Masks}$.
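Eqs. (7)-(9) amount to a token re-weighting followed by a matrix product between class descriptors and patch embeddings. A minimal NumPy sketch follows; the composition order in Eq. (7), the mapping of the raw cosine into $[0,1]$, and the realization of $\phi$ as a single linear map $W$ are our assumptions for illustration:

```python
import numpy as np

def cos_sim(z, xi):
    # cosine similarity of each token with xi, rescaled into [0, 1]
    s = (z @ xi) / (np.linalg.norm(z, axis=-1) * np.linalg.norm(xi) + 1e-8)
    return (s + 1.0) / 2.0

def segment(z, xi, P, t, g, W):
    # Eq. (7): re-weight projected tokens by their relevance score
    z_hat = cos_sim(z, xi)[:, None] * (z @ P.T)          # (N, D)
    # Eq. (8): relationship descriptor, phi realized as linear map W
    t_hat = np.concatenate([t * g, t], axis=-1) @ W      # (C, D)
    # Eq. (9): class masks, then per-patch argmax over classes
    masks = t_hat @ z_hat.T                              # (C, N)
    return masks.argmax(axis=0)

N, D, C = 196, 64, 5
rng = np.random.default_rng(2)
z = rng.standard_normal((N, D)); xi = rng.standard_normal(D)
P = rng.standard_normal((D, D)); t = rng.standard_normal((C, D))
g = rng.standard_normal(D);      W = rng.standard_normal((2 * D, D))
pred = segment(z, xi, P, t, g, W)
assert pred.shape == (N,) and pred.max() < C
```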
#### Loss Function
We employ a combination of the focal loss (Lin et al. 2017) and the
structural similarity (SSIM) loss (Wang, Simoncelli, and Bovik 2003). The
total loss $\mathcal{L}$ is a linear combination of the two:
$\mathcal{L}=\gamma\cdot\mathcal{L}_{\mathtt{focal}}+\sigma\cdot\mathcal{L}_{\mathtt{ssim}},$
(10)
where the coefficients $\gamma$ and $\sigma$ control the relative importance
of the focal loss and the SSIM loss in the overall objective.
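A minimal sketch of the combined objective in Eq. (10). The single-window SSIM here is a simplification of the multiscale version of Wang, Simoncelli, and Bovik (2003), and the coefficient values are placeholders:

```python
import numpy as np

def focal_loss(p, y, gamma_f=2.0, eps=1e-8):
    # binary focal loss averaged over pixels (Lin et al. 2017)
    pt = np.where(y == 1, p, 1 - p)
    return float(np.mean(-((1 - pt) ** gamma_f) * np.log(pt + eps)))

def ssim_loss(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # 1 - SSIM over a single global window (simplified)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
    return float(1.0 - ssim)

def total_loss(p, y, gamma=1.0, sigma=1.0):
    # Eq. (10): L = gamma * L_focal + sigma * L_ssim
    return gamma * focal_loss(p, y) + sigma * ssim_loss(p, y)

y = (np.arange(64).reshape(8, 8) % 2).astype(float)   # toy binary mask
p = np.clip(y * 0.9 + 0.05, 0.0, 1.0)                 # near-perfect prediction
assert total_loss(p, y) < total_loss(1.0 - p, y)      # worse prediction costs more
```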
## Experiments
### Datasets
We conducted extensive experiments on two benchmark datasets to evaluate the
effectiveness of our proposed method: PASCAL VOC 2012 (20 classes) and
COCO-Stuff 164K. Here are the details of each dataset:

1. PASCAL VOC 2012: This dataset consists of 10,582 augmented images for
training and 1,449 for validation. We focus on 15 seen classes, ignoring the
“background” class, and 5 unseen classes.

2. COCO-Stuff 164K: This is a large-scale dataset with 118,287 training images
and 5,000 testing images, covering 171 classes. Among them, 156 classes are
seen, and 15 classes are unseen.
### Evaluation Metrics
As in previous studies, we assess the performance using pixel-wise
classification accuracy ($pAcc$) and the mean intersection over union ($mIoU$)
for both seen and unseen classes, referred to as $mIoU(S)$ and $mIoU(U)$,
respectively. Additionally, we calculate the harmonic mean IoU ($hIoU$)
between the seen and unseen classes as in ZegCLIP (Zhou et al. 2023), which is
formulated as:
$hIoU=\frac{2*mIoU(S)*mIoU(U)}{mIoU(S)+mIoU(U)}.$ (11)
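Eq. (11) in code, checked against the SPT-SEG PASCAL VOC row of Tab. 1:

```python
def h_iou(miou_seen, miou_unseen):
    # Eq. (11): harmonic mean of seen and unseen mIoU
    return 2 * miou_seen * miou_unseen / (miou_seen + miou_unseen)

# SPT-SEG on PASCAL VOC 2012: mIoU(S) = 92.9, mIoU(U) = 87.4 -> hIoU ≈ 90.1
assert abs(h_iou(92.9, 87.4) - 90.1) < 0.1
```

The harmonic mean penalizes methods that score well on seen classes but poorly on unseen ones, which is why it is the headline zero-shot metric.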
Methods | VOC pAcc | VOC mIoU(S) | VOC mIoU(U) | VOC hIoU | COCO pAcc | COCO mIoU(S) | COCO mIoU(U) | COCO hIoU
---|---|---|---|---|---|---|---|---
SPNet (CVPR’19) | / | 78.0 | 15.6 | 26.1 | / | 35.2 | 8.7 | 14.0
ZS3 (NeurIPS’19) | / | 77.3 | 17.7 | 28.7 | / | 34.7 | 9.5 | 15.0
CaGNet (ACM MM’20) | 80.7 | 78.4 | 26.6 | 39.7 | 56.6 | 33.5 | 12.2 | 18.2
SIGN (ICCV’21) | / | 75.4 | 28.9 | 41.7 | / | 32.3 | 15.5 | 20.9
Joint (ICCV’21) | / | 77.7 | 32.5 | 45.9 | / | / | / | /
ZegFormer (CVPR’22) | / | 86.4 | 63.6 | 73.3 | / | 36.6 | 33.2 | 34.8
zsseg (arXiv’21) | 90.0 | 83.5 | 72.5 | 77.5 | 60.3 | 39.3 | 36.3 | 37.8
ZegCLIP (CVPR’23) | 94.6 | 91.9 | 77.8 | 84.3 | 62.0 | 40.2 | 41.4 | 40.8
SPT-SEG (Ours) | **96.7** (+2.1) | **92.9** (+1.0) | **87.4** (+9.6) | **90.1** (+5.8) | **62.9** (+0.9) | **40.6** (+0.4) | **43.8** (+2.4) | **42.1** (+1.3)
ZegCLIP\* (CVPR’23) | 96.3 | 92.4 | 90.9 | 91.6 | 69.9 | 40.7 | 63.2 | 49.6
SPT-SEG\* (Ours) | **97.6** | **93.6** | **92.9** | **93.2** | **72.5** | **41.6** | **66.0** | **51.0**
Table 1: Comparison with state-of-the-art methods on the PASCAL VOC 2012 and
COCO-Stuff 164K datasets. The asterisk (*) denotes training involving all
classes. The best results are highlighted in bold.
### Implementation Details
Our proposed method is implemented using the MMSegmentation open-source
toolbox (Contributors 2020) with PyTorch 1.10.1. All experiments were conducted
on two H800 GPUs using the pre-trained CLIP ViT-B/16 model. The batch size was
set to 16, and the images were resized to a resolution of $512\times 512$. We
performed a total of 20,000 training iterations on the PASCAL VOC 2012
dataset and 96,000 iterations on the COCO-Stuff 164K dataset. Following
previous works (Gu et al. 2020; Xu et al. 2021a; Ding et al. 2022;
Zhou, Loy, and Dai 2022), we set up the unseen classes. The optimizer was
AdamW, and we followed the default training schedule provided by the MMSeg
toolbox. Note that in SPT-SEG the model learns the prompts exclusively from
seen classes during training.
### Comparison with State-of-the-Art Methods
To showcase the effectiveness of our method, we present the evaluation results
in comparison with previous state-of-the-art approaches, as shown in Tab. 1.
Additionally, we include the results of fully supervised learning as an upper
bound to demonstrate the performance gap between fully supervised segmentation
and zero-shot segmentation on unseen classes. We provide qualitative results
on the COCO-Stuff 164K dataset, depicted in Fig. 4. Our proposed method
exhibits significant performance improvements, particularly for unseen
classes, surpassing previous approaches, as depicted in Tab. 1. This
highlights the superior generalization capability of our method compared to
existing methods. Particularly noteworthy are the significant mIoU gains on
unseen classes: 9.6% on the VOC dataset and 2.4% on the COCO dataset.
Fig. 4 showcases the segmentation outcomes of the ZegCLIP (Zhou et al. 2023)
and our proposed SPT-SEG, both on seen and unseen classes. With the
integration of our proposed designs, SPT-SEG demonstrates impressive
segmentation capabilities on both seen and unseen classes, effectively
distinguishing similar unseen classes. For example, our approach effectively
segments small “sports ball” target objects and achieves full recognition of
the unseen class “playing field” (Fig. 4(1)). Furthermore, our method
successfully discriminates “plastic” classes from skateboard regions (Fig.
4(2)), and accurately segments “dog” instances bearing resemblance to “horses”
(Fig. 4(3)). Overall, SPT-SEG completely segments the unseen classes(“playing
field”, “plastic”) and significantly outperforms other methods in terms of
segmentation details. These results confirm the effectiveness of our proposed
method in achieving superior segmentation performance, especially for unseen
classes.
Figure 4: Qualitative results on COCO-Stuff 164K. (a) are the original testing
images; (b) are the ground truths of each image; (c) shows the performance
of ZegCLIP; (d) are the visualization results of our proposed SPT-SEG. Note
that we have highlighted prominent regions using yellow arrows and marked
other significant areas with yellow stars for emphasis.
### Ablation Study
#### Detailed results of applying designs on baseline
To demonstrate the effectiveness of our proposed designs, we further report
the improvements of applying designs on baseline (ZegCLIP) in Tab. 2. The
addition of the SPT significantly enhances the model’s performance on unseen
data. When both SPT and SGD are utilized, the SPT-SEG model exhibits excellent
results on the VOC test dataset.
Bas. | SPT | SGD | mIoU(S) | mIoU(U) | hIoU
---|---|---|---|---|---
✓ | | | 91.9 | 77.8 | 84.3
✓ | ✓ | | 92.6 | 86.7 | 89.6
✓ | | ✓ | 92.0 | 79.9 | 85.5
✓ | ✓ | ✓ | **92.9** | **87.4** | **90.1**
Table 2: Quantitative results on VOC dataset to demonstrate the effectiveness
of our proposed two designs. Here ✓means that this component is applied. Note
that our baseline (Bas.) method is ZegCLIP (Zhou et al. 2023). The best
results are highlighted in bold.
#### Effect of the depth of SPT
Tab. 3 demonstrates the impact of SPT insertion positions and depths on
SPT-SEG performance. SPT is notably more effective when inserted in the
earlier layers than in the later ones, while applying it to all layers
performs comparably to applying it only to the first two. This finding
indicates that spectral prompts in the early transformer layers are more
significant than those in the later layers.
Results on PASCAL VOC 2012:

Depth | mIoU(S) | mIoU(U) | hIoU
---|---|---|---
1–6 | 92.5 | 86.4 | 89.3
6–12 | 92.1 | 80.9 | 86.1
1–12 | 92.6 | 86.5 | 89.4
11–12 | 92.0 | 78.3 | 84.6
1–2 | **92.9** | **87.4** | **90.1**
Table 3: Ablation on Spectral Prompt Tuning depth. The 1st layer refers to
the one closest to the input; ViT-B has 12 layers in total. The best results
are highlighted in bold.
#### Effect of Spectral Guided Decode layers
To investigate the impact of decoder layers on the performance of SPT-SEG, we
conducted an ablation study on decoder layer depth. Tab. 4 demonstrates that
within our research settings, the model achieved its optimal performance with
3 decoder layers. At this layer depth, the model exhibited excellent
performance both at the pixel-level and class-level. However, when the decoder
layers were increased to 5, we observed signs of overfitting, resulting in a
decline in performance on the test set. Conversely, employing only 1 decoder
layer significantly reduced the model’s performance.
Results on PASCAL VOC 2012:

Layers | mIoU(S) | mIoU(U) | hIoU
---|---|---|---
1 | 91.9 | 82.8 | 87.1
3 | **92.9** | **87.4** | **90.1**
5 | 92.2 | 83.7 | 87.7
Table 4: Ablation on layers of Spectral Guided Decode Layer. The best results
are highlighted in bold.
## Limitations
Limited by the recognition capability and resolution of CLIP, pixel
classification may be prone to errors in complex scenes involving, for
example, object occlusion and glass reflection (Fig. 4(5)). Additionally, the ability
to recognize details, such as object edges, also needs improvement. Resolving
these limitations and enhancing the robustness of the SPT-SEG method are
important directions for future research.
## Conclusion
In this work, we present an efficient one-stage direct zero-shot semantic
segmentation method based on the pre-trained vision-language model CLIP. We
introduce two innovative designs to transfer image classification capabilities
to dense prediction tasks while maintaining a leading edge in zero-shot
knowledge. These designs enable us to achieve competitive results on known
classes and significantly improve performance on novel classes. To demonstrate
the effectiveness of our approach, we comprehensively test its performance on
two widely-used benchmark datasets, outperforming the previous state-of-the-
art methods. Our research aims to explore the use of pre-trained visual
language models for semantic segmentation. By integrating spectral information
and enhancing the capability of CLIP, we successfully apply its zero-shot
knowledge to downstream tasks, providing a flexible and accurate solution for
zero-shot semantic segmentation.
## Acknowledgements
This work was supported by Beijing Natural Science Foundation No. JQ23014, and
in part by the National Natural Science Foundation of China (Nos. U21A20515,
62271074, and 62276031).
## References
* Baek, Oh, and Ham (2021) Baek, D.; Oh, Y.; and Ham, B. 2021. Exploiting a joint embedding space for generalized zero-shot semantic segmentation. In _Proceedings of the IEEE/CVF international conference on computer vision_ , 9536–9545.
* Bucher et al. (2019) Bucher, M.; Vu, T.-H.; Cord, M.; and Pérez, P. 2019. Zero-shot semantic segmentation. _Advances in Neural Information Processing Systems_ , 32.
* Chen et al. (2021) Chen, X.; Yuan, Y.; Zeng, G.; and Wang, J. 2021. Semi-supervised semantic segmentation with cross pseudo supervision. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2613–2622.
* Cheng et al. (2021) Cheng, J.; Nandi, S.; Natarajan, P.; and Abd-Almageed, W. 2021. Sign: Spatial-information incorporated generative network for generalized zero-shot semantic segmentation. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 9556–9566.
* Contributors (2020) Contributors, M. 2020. MMSegmentation: OpenMMLab Semantic Segmentation Toolbox and Benchmark. https://github.com/open-mmlab/mmsegmentation.
* Ding et al. (2022) Ding, J.; Xue, N.; Xia, G.-S.; and Dai, D. 2022. Decoupling zero-shot semantic segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 11583–11592.
* Dong et al. (2021) Dong, J.; Cong, Y.; Sun, G.; Fang, Z.; and Ding, Z. 2021. Where and How to Transfer: Knowledge Aggregation-Induced Transferability Perception for Unsupervised Domain Adaptation. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 1–1.
* Dong et al. (2020) Dong, J.; Cong, Y.; Sun, G.; Zhong, B.; and Xu, X. 2020. What Can Be Transferred: Unsupervised Domain Adaptation for Endoscopic Lesions Segmentation. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , 4022–4031.
* Esmaeilpour et al. (2022) Esmaeilpour, S.; Liu, B.; Robertson, E.; and Shu, L. 2022. Zero-shot out-of-distribution detection based on the pre-trained model clip. In _Proceedings of the AAAI conference on artificial intelligence_ , volume 36, 6568–6576.
* Gao et al. (2021) Gao, P.; Geng, S.; Zhang, R.; Ma, T.; Fang, R.; Zhang, Y.; Li, H.; and Qiao, Y. 2021. Clip-adapter: Better vision-language models with feature adapters. _arXiv preprint arXiv:2110.04544_.
* Gu et al. (2020) Gu, Z.; Zhou, S.; Niu, L.; Zhao, Z.; and Zhang, L. 2020. Context-aware feature generation for zero-shot semantic segmentation. In _Proceedings of the 28th ACM International Conference on Multimedia_ , 1921–1929.
* Hong et al. (2021) Hong, Y.; Wu, Q.; Qi, Y.; Rodriguez-Opazo, C.; and Gould, S. 2021. Vln bert: A recurrent vision-and-language bert for navigation. In _Proceedings of the IEEE/CVF conference on Computer Vision and Pattern Recognition_ , 1643–1653.
* Huang et al. (2021) Huang, Z.; Zeng, Z.; Huang, Y.; Liu, B.; Fu, D.; and Fu, J. 2021. Seeing out of the box: End-to-end pre-training for vision-language representation learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 12976–12985.
* Jia et al. (2021a) Jia, C.; Yang, Y.; Xia, Y.; Chen, Y.-T.; Parekh, Z.; Pham, H.; Le, Q.; Sung, Y.-H.; Li, Z.; and Duerig, T. 2021a. Scaling up visual and vision-language representation learning with noisy text supervision. In _International conference on machine learning_ , 4904–4916. PMLR.
* Jia et al. (2021b) Jia, C.; Yang, Y.; Xia, Y.; Chen, Y.-T.; Parekh, Z.; Pham, H.; Le, Q.; Sung, Y.-H.; Li, Z.; and Duerig, T. 2021b. Scaling up visual and vision-language representation learning with noisy text supervision. In _International Conference on Machine Learning_ , 4904–4916. PMLR.
* Jia et al. (2022) Jia, M.; Tang, L.; Chen, B.-C.; Cardie, C.; Belongie, S.; Hariharan, B.; and Lim, S.-N. 2022. Visual prompt tuning. In _Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXXIII_ , 709–727. Springer.
* Jiang, Liu, and Zheng (2022) Jiang, J.; Liu, Z.; and Zheng, N. 2022. Finetuning Pretrained Vision-Language Models with Correlation Information Bottleneck for Robust Visual Question Answering. _arXiv preprint arXiv:2209.06954_.
* Kamath et al. (2021) Kamath, A.; Singh, M.; LeCun, Y.; Synnaeve, G.; Misra, I.; and Carion, N. 2021. Mdetr-modulated detection for end-to-end multi-modal understanding. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 1780–1790.
* Kim, Son, and Kim (2021) Kim, W.; Son, B.; and Kim, I. 2021. Vilt: Vision-and-language transformer without convolution or region supervision. In _International Conference on Machine Learning_ , 5583–5594. PMLR.
* Lin et al. (2017) Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; and Dollár, P. 2017. Focal loss for dense object detection. In _Proceedings of the IEEE international conference on computer vision_ , 2980–2988.
* Liu et al. (2021) Liu, Z.; Rodriguez-Opazo, C.; Teney, D.; and Gould, S. 2021. Image retrieval on real-life images with pre-trained vision-and-language models. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2125–2134.
* Lu et al. (2020) Lu, X.; Wang, W.; Danelljan, M.; Zhou, T.; Shen, J.; and Van Gool, L. 2020. Video object segmentation with episodic graph memory networks. In _Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part III 16_ , 661–679. Springer.
* Lu et al. (2022) Lu, Y.; Liu, J.; Zhang, Y.; Liu, Y.; and Tian, X. 2022. Prompt distribution learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 5206–5215.
* Pakhomov et al. (2021) Pakhomov, D.; Hira, S.; Wagle, N.; Green, K. E.; and Navab, N. 2021. Segmentation in style: Unsupervised semantic image segmentation with stylegan and clip. _arXiv preprint arXiv:2107.12518_.
* Pan, Cai, and Zhuang (2022) Pan, Z.; Cai, J.; and Zhuang, B. 2022. Fast Vision Transformers with HiLo Attention. In _NeurIPS_.
* Pastore et al. (2021) Pastore, G.; Cermelli, F.; Xian, Y.; Mancini, M.; Akata, Z.; and Caputo, B. 2021. A closer look at self-training for zero-label semantic segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2693–2702.
* Patro, Namboodiri, and Agneeswaran (2023) Patro, B. N.; Namboodiri, V. P.; and Agneeswaran, V. S. 2023. SpectFormer: Frequency and Attention is what you need in a Vision Transformer. _arXiv preprint arXiv:2304.06446_.
* Radford et al. (2021) Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. 2021. Learning transferable visual models from natural language supervision. In _International conference on machine learning_ , 8748–8763. PMLR.
* Rao et al. (2022) Rao, Y.; Zhao, W.; Chen, G.; Tang, Y.; Zhu, Z.; Huang, G.; Zhou, J.; and Lu, J. 2022. Denseclip: Language-guided dense prediction with context-aware prompting. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 18082–18091.
* Sandler et al. (2022) Sandler, M.; Zhmoginov, A.; Vladymyrov, M.; and Jackson, A. 2022. Fine-tuning image transformers using learnable memory. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 12155–12164.
* Wang et al. (2023a) Wang, C.; Xu, R.; Xu, S.; Meng, W.; and Zhang, X. 2023a. Automatic polyp segmentation via image-level and surrounding-level context fusion deep neural network. _Engineering Applications of Artificial Intelligence_ , 123: 106168.
* Wang et al. (2023b) Wang, C.; Xu, R.; Xu, S.; Meng, W.; and Zhang, X. 2023b. Treating Pseudo-labels Generation as Image Matting for Weakly Supervised Semantic Segmentation. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 755–765.
* Wang et al. (2023c) Wang, W.; Bao, H.; Dong, L.; Bjorck, J.; Peng, Z.; Liu, Q.; Aggarwal, K.; Mohammed, O. K.; Singhal, S.; Som, S.; et al. 2023c. Image as a Foreign Language: BEiT Pretraining for Vision and Vision-Language Tasks. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 19175–19186.
* Wang et al. (2022) Wang, Z.; Lu, Y.; Li, Q.; Tao, X.; Guo, Y.; Gong, M.; and Liu, T. 2022. Cris: Clip-driven referring image segmentation. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 11686–11695.
* Wang, Simoncelli, and Bovik (2003) Wang, Z.; Simoncelli, E. P.; and Bovik, A. C. 2003. Multiscale structural similarity for image quality assessment. In _The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003_, volume 2, 1398–1402. IEEE.
* Xian et al. (2019) Xian, Y.; Choudhury, S.; He, Y.; Schiele, B.; and Akata, Z. 2019. Semantic projection network for zero-and few-label semantic segmentation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 8256–8265.
* Xie et al. (2021) Xie, G.-S.; Liu, J.; Xiong, H.; and Shao, L. 2021. Scale-aware graph neural network for few-shot semantic segmentation. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 5475–5484.
* Xing et al. (2022) Xing, Y.; Wu, Q.; Cheng, D.; Zhang, S.; Liang, G.; and Zhang, Y. 2022. Class-aware visual prompt tuning for vision-language pre-trained model. _arXiv preprint arXiv:2208.08340_.
* Xu et al. (2021a) Xu, M.; Zhang, Z.; Wei, F.; Lin, Y.; Cao, Y.; Hu, H.; and Bai, X. 2021a. A simple baseline for zero-shot semantic segmentation with pre-trained vision-language model. _arXiv preprint arXiv:2112.14757_.
* Xu et al. (2023a) Xu, R.; Wang, C.; Sun, J.; Xu, S.; Meng, W.; and Zhang, X. 2023a. Self Correspondence Distillation For End-to-End Weakly-Supervised Semantic Segmentation. In _Proceedings of the AAAI Conference on Artificial Intelligence_.
# Shadow of regular black hole in scalar-tensor-vector gravity theory
Subhadip <EMAIL_ADDRESS>1,2 and John W. <EMAIL_ADDRESS>3,4
1Department of Physics, Jhargram Raj College, Jhargram, West Bengal-721507
2School of Physical Sciences, Indian Association for the Cultivation of
Science,
2A & 2B Raja S. C. Mullick Road, Kolkata-700032,India
3Perimeter Institute for Theoretical Physics, Waterloo, Ontario N2L 2Y5,
Canada
4Department of Physics and Astronomy, University of Waterloo, Waterloo,
Ontario N2L 3G1, Canada
###### Abstract
We investigate the shadow cast by a regular black hole in scalar-tensor-vector
MOdified Gravity (MOG) theory. This black hole differs from a Schwarzschild-Kerr
black hole by the dimensionless parameter $\beta$. The size of the shadow
depends on this parameter. Increasing the value of the parameter $\beta$
shrinks the shadow. A critical value of the parameter $\beta$ is found to be
$\beta_{\rm crit}=0.402186$. The shadow for the horizonless dark compact object
has been analysed for the static, spherically symmetric case and compared with
M87* and Sgr A* data. Shadow observables have been determined in the context
of the regular black hole and used for obtaining the energy emission rate. The
peak of the energy emission rate shifts to lower frequency for the increasing
value of the parameter $\beta$.
## 1 Introduction
One of the most remarkable predictions of the general theory of relativity is
the occurrence of black holes. The recent observations by the Event Horizon
Telescope (EHT) collaboration[1, 2, 3, 4, 5, 6, 7] and the detection of
gravitational wave signals by the Laser Interferometer Gravitational-Wave
Observatory (LIGO) and Virgo [8, 9, 10], corroborate the existence of these
celestial objects. Despite its success, the theory of general relativity is
not flawless. The two major drawbacks of this theory are the presence of
singularities[11, 12] in the theory and the lack of observational data
verifying the existence of the dark sector[13]. The research community is
divided into two camps on this issue[14, 15, 16, 17, 18, 19]: either
dark matter exists, or Einstein’s gravitational theory has to be
modified. Despite numerous searches for the dark sector, and for dark matter
in particular, all experimental attempts at detection have so far
failed[20, 21]. This motivates us to explore the nature
of black holes in a theory in which the above-mentioned ambiguities are
removed. One successful approach towards this goal has been developed
by one of the authors[22]. The theory is known in the literature as
scalar-tensor-vector gravity (STVG) or MOdified Gravity (MOG). The
solar system observations[23], cosmological observations[24], galaxy rotation
curves[25, 26, 27] and the dynamics of galaxy clusters[28, 29] have all been
satisfactorily explained by the MOG. It has also been successful in describing
structure growth, the matter power spectrum, and cosmic microwave background
(CMB) acoustical and angular power spectrum data[30, 31, 32, 33].
Observational signatures and constraints of black holes and other compact
objects appearing in MOG theory have been discussed in the literature[34,
35, 36]. To distinguish the MOG theory from general relativity, EHT
observational data have been used to study the shadow cast by the supermassive
MOG black holes Sgr A* and M87*[37].
As a result of lensing phenomena[38, 39, 40, 41], the black hole scatters the
higher-angular-momentum photons from the source towards the distant
observer, while photons with lower angular momentum fall into the black
hole, creating a shadow zone and a possible light ring. The black hole
shadow, which develops next to the event horizon, gives us a general notion of
the fundamental geometrical structure of horizons[42]. A review of these
developments can be found in [43]. Sagittarius A*, the supermassive black hole
at the heart of our galaxy, and M87* at the galactic centre of M87 have both
been confirmed by the EHT astronomical observations[1, 2, 3, 4, 5, 6, 7]. A
two-dimensional dark disc encircled by bright rings is the black hole’s
observable appearance. The light rings are photon orbits, while the dark area
represents the black hole shadow. The accreted matter around the black hole
has an impact on how the shadow is shaped. Since the black hole’s shadow
carries the geometry of the surrounding region in its shape and size, it is
considered a helpful tool for determining the black hole’s spin and other
deformation characteristics and parameters[44, 45, 46, 47]. This in turn can
help to distinguish and test general relativity and other alternative
theories[48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64,
65]. Astrophysical black hole candidates display significant rotation, so our
main goal will be to explore rotating black holes in MOG/STVG theory. To evade the
problem of a singularity, we will focus our study on regular solutions in the
STVG/MOG theory of gravity. One of the crucial methods for obtaining
information about a black hole is the study of its shadow[66, 67, 68, 69, 70,
71]. Earlier attempts have been made to analyse the shadows of regular black
holes[72, 73, 74, 75, 76, 77, 78, 54]. Many of these regular black holes arise
from gravity coupled to non-linear electrodynamics. However, the electrical
charge of black holes is expected to have negligible effect on the geometry of
spacetime[79]. In STVG/MOG theory, the regular black hole solutions are
obtained from a purely gravitational theory and can be potential candidates
for astrophysical black holes.
The paper is organized as follows: In Section 2 a brief introduction to the
STVG/MOG gravitational action and field equations is presented. The static
regular MOG compact object is discussed in Section 3. In Section 4, we
investigate the regular MOG spherically symmetric solution and analytically
derive the critical value of the parameter $\beta$; we also derive the
parameter dependence of the black hole horizon, photon sphere and shadow.
Section 5 is dedicated to a study of the regular MOG rotating solution. In
Section 6, we determine the shape and size of the black hole shadow for the
regular MOG rotating solution, along with the observables associated with it.
Finally, in Section 7, we calculate the energy emission rate for the black
hole with the help of the associated observables.
Throughout the paper, we use the mostly plus metric signature and set the
velocity of light to unity ($c=1$).
## 2 STVG action and field Equations
The action for the MOG/STVG theory is
$\displaystyle S=S_{\rm GR}+S_{\phi}+S_{S}+S_{M}$ (1)
where
$\displaystyle S_{\rm GR}$ $\displaystyle=\dfrac{1}{16\pi}\int
d^{4}x\sqrt{-g}\dfrac{1}{G}R$ (2) $\displaystyle S_{\phi}$
$\displaystyle=-\int
d^{4}x\sqrt{-g}\left(\dfrac{1}{4}B^{\mu\nu}B_{\mu\nu}-\dfrac{1}{2}{\mu}^{2}\phi^{\mu}\phi_{\mu}-J^{\mu}\phi_{\mu}\right)$
(3) $\displaystyle S_{S}$ $\displaystyle=\int
d^{4}x\sqrt{-g}\dfrac{1}{G^{3}}\left(\dfrac{1}{2}g^{\mu\nu}\nabla_{\mu}G\nabla_{\nu}G-V(G)-JG\right)+\int
d^{4}x\sqrt{-g}\dfrac{1}{{\mu}^{2}G}\left(\dfrac{1}{2}g^{\mu\nu}\nabla_{\mu}{\mu}\nabla_{\nu}{\mu}-V(\mu)\right)$
(4)
Here $g_{\mu\nu}$ is the spacetime metric, $g$ is the determinant of the
metric, $R$ is the Ricci scalar, $\phi^{\mu}$ is a Proca-type massive vector
field such that
$B_{\mu\nu}=\partial_{\mu}\phi_{\nu}-\partial_{\nu}\phi_{\mu}$, $G(x)$ and
$\mu(x)$ are scalar fields and $V(G)$ and $V(\mu)$ are the corresponding
potentials. $S_{M}$ is the matter action. The energy-momentum tensor for the
gravitational source can be written as
$\displaystyle T_{\mu\nu}=T_{\mu\nu}^{M}+T_{\mu\nu}^{\phi}+T_{\mu\nu}^{S}$ (5)
where
$\displaystyle T_{\mu\nu}^{M}=-\dfrac{2}{\sqrt{-g}}\dfrac{\delta
S_{M}}{\delta g^{\mu\nu}}$ (6a) $\displaystyle
T_{\mu\nu}^{\phi}=-\dfrac{2}{\sqrt{-g}}\dfrac{\delta S_{\phi}}{\delta
g^{\mu\nu}}$ (6b) $\displaystyle
T_{\mu\nu}^{S}=-\dfrac{2}{\sqrt{-g}}\dfrac{\delta S_{S}}{\delta g^{\mu\nu}}$
(6c)
Here, $T_{\mu\nu}^{M}$ is the ordinary matter energy-momentum tensor,
$T_{\mu\nu}^{\phi}$ is the energy-momentum tensor for the field $\phi^{\mu}$
and the scalar contribution to the energy-momentum tensor is denoted by
$T_{\mu\nu}^{S}$. Moreover, $J^{\mu}$ and $J$ are the vector and scalar field
currents, respectively.
The Schwarzschild-MOG and Kerr-MOG black hole solutions can be found with the
following assumptions:
* •
It is assumed that the matter energy-momentum tensor $T_{\mu\nu}^{M}$ and the
vector and scalar field currents $J^{\mu}$ and $J$ are zero.
* •
Since the effect of the vector field mass $\mu$ becomes prominent only at
kiloparsec distances from the source, the mass of the vector field is
disregarded when solving the field equations for compact objects such as
black holes.
* •
The constant $G$ depends on the parameter $\beta=\alpha/(1+\alpha)$ by
$G=G_{N}(1+\alpha)=\dfrac{G_{N}}{1-\beta}$. Here, $G_{N}$ is Newton’s
gravitational constant and we assume that $\partial_{\mu}G\approx 0$. The
range of the dimensionless parameter $\beta$ is $0\leq\beta\leq 1$.
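The last relation is a one-line consequence of the definition of $\beta$:
$\displaystyle 1-\beta=1-\dfrac{\alpha}{1+\alpha}=\dfrac{1}{1+\alpha}\qquad\Longrightarrow\qquad\dfrac{G_{N}}{1-\beta}=G_{N}(1+\alpha)=G$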
The action in Eq. (1) then assumes the following form:
$\displaystyle S=\frac{1}{16\pi
G}\int d^{4}x\sqrt{-g}\left(R-\dfrac{1}{4}B^{\mu\nu}B_{\mu\nu}\right)$
(7)
Varying this action with respect to $g_{\mu\nu}$, we get the following field
equations:
$\displaystyle G_{\mu\nu}=8\pi GT_{\mu\nu}^{\phi}$ (8)
Here $G_{\mu\nu}$ is the Einstein tensor $R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R$.
The energy-momentum tensor associated with vector field $\phi_{\mu}$ is given
by
$\displaystyle
T_{\mu\nu}^{\phi}=\frac{1}{4\pi}\left(B_{\mu}^{~{}~{}\rho}B_{\nu\rho}-\dfrac{1}{4}g_{\mu\nu}B^{\alpha\beta}B_{\alpha\beta}\right)$
(9)
To obtain the dynamical equation for the vector field, we need to vary the
action in 7 with respect to the vector field $\phi_{\mu}$. Such a variation
leads to the following dynamical equation:
$\displaystyle\nabla_{\nu}B^{\mu\nu}=\dfrac{1}{\sqrt{-g}}\partial_{\nu}\left(\sqrt{-g}B^{\mu\nu}\right)=0$
(10)
One should note here that the gravitational charge $Q_{g}$ associated with the
MOG vector field is proportional to the mass of the gravitational source
as[80]
$\displaystyle Q_{g}=\sqrt{\alpha G_{N}}M=\sqrt{\beta(1-\beta)G_{N}}M_{\beta}$
(11)
where $M_{\beta}=(1+\alpha)M$. The gravitational charge $Q_{g}$ results in the
modified Newtonian acceleration for weak gravitational fields and slow
particle motion:
$\displaystyle a(r)=-\frac{G_{N}M}{r^{2}}[1+\alpha-\alpha\exp(-\mu r)(1+\mu
r)]$ (12)
For small-scale objects and weak gravitational fields $\mu r\ll 1$, the
exponential factor tends to unity and the parameter $\alpha$ cancels,
reducing the acceleration to its Newtonian value. Together with the
parametrized post-Newtonian (PPN) corrections, this guarantees that MOG is
consistent with accurate solar system experiments.
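As a quick numerical illustration of these two limits (our own sketch, not part of the original analysis; the values of $\alpha$, $\mu$ and $r$ are arbitrary):

```python
import math

G_N = 1.0  # Newton's constant in geometrized units

def mog_acceleration(r, M, alpha, mu):
    """Modified Newtonian acceleration of Eq. (12); negative = radially inward."""
    return -G_N * M / r**2 * (1.0 + alpha - alpha * math.exp(-mu * r) * (1.0 + mu * r))

M, alpha, mu = 1.0, 0.674, 1e-6
# For mu*r << 1 the Yukawa factor exp(-mu*r)*(1 + mu*r) -> 1, alpha cancels,
# and the Newtonian value -G_N*M/r^2 is recovered.
print(mog_acceleration(10.0, M, alpha, mu), -G_N * M / 10.0**2)
# For mu*r >> 1 the exponential dies off and gravity is enhanced by (1 + alpha).
print(mog_acceleration(1.0e8, M, alpha, mu) / (-G_N * M / 1.0e16))  # -> ~1.674
```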
## 3 Static regular MOG compact object
The gravitational action for the matter-free MOG theory with non-linear field
equations for the gravitational spin-1 vector field $\phi_{\mu}$ is given
by[81]
$\displaystyle S_{\rm MOG}=\dfrac{1}{16\pi G}\int
d^{4}x\sqrt{-g}\left[R-L(B)\right]$ (13)
where $R$ is the Ricci scalar, $L(B)$ describes the non-linear contribution of
$B_{\mu\nu}=\partial_{\mu}\phi_{\nu}-\partial_{\nu}\phi_{\mu}$ with
$B=\dfrac{1}{4}B_{\mu\nu}B^{\mu\nu}$. The associated field equations are
$\displaystyle G_{\mu\nu}=8\pi GT^{\phi}_{\mu\nu}$ (14a)
$\displaystyle\nabla_{\nu}\left(\dfrac{\partial L}{\partial
B}B^{\mu\nu}\right)=0$ (14b)
$\displaystyle\nabla_{\mu}\left({}^{\star}B^{\mu\nu}\right)=0$ (14c)
where ${}^{\star}B^{\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\rho\sigma}B_{\rho\sigma}$ is the
Hodge dual of $B^{\mu\nu}$. The energy-momentum tensor associated with the
theory is given by
$\displaystyle T_{\mu\nu}^{\phi}=\dfrac{1}{4\pi}\left[\dfrac{\partial
L}{\partial B}g^{\rho\sigma}B_{\mu\rho}B_{\nu\sigma}-g_{\mu\nu}L(B)\right]$
(15)
In this theory, the gravitational constant is enhanced by $G=G_{N}(1+\alpha)$.
The gravitational source charge associated with the vector field $\phi_{\mu}$
is given by
$\displaystyle Q_{g}=\sqrt{\alpha G_{N}}M$ (16)
where $M$ is the mass parameter of the theory. The gravi-electric field is
given by
$\displaystyle E_{\rm grav}(r)=B_{01}(r)=-B_{10}(r)$ (17)
The energy-momentum tensor components are given by
$\displaystyle T^{\phi 0}_{0}=T^{\phi 1}_{1}=-\dfrac{1}{4\pi}\left(E_{\rm
grav}^{2}\dfrac{\partial L}{\partial B}+L(B)\right)$ (18)
To describe the non-linear system in an alternative way, one can consider the
function $H$ obtained from the Legendre transformation. The function $H$ is
given by
$\displaystyle H=2B\dfrac{\partial L}{\partial B}-L(B)$ (19)
We assume
$\displaystyle P_{\mu\nu}=\dfrac{\partial L}{\partial B}B_{\mu\nu}$ (20)
and
$\displaystyle P=\dfrac{1}{4}P_{\mu\nu}P^{\mu\nu}=\left(\dfrac{\partial
L}{\partial B}\right)^{2}B$ (21)
Now, $H$ can be expressed as a function of $P$. For the regular spacetime
metric solution the form of the function $H(P)$ is given by
$\displaystyle
H(P)=P\dfrac{\left(1-3\sqrt{-2\alpha(1+\alpha)M^{2}P}\right)}{\left(1+\sqrt{-2\alpha(1+\alpha)M^{2}P}\right)^{3}}-\dfrac{3}{2\alpha(1+\alpha)M^{2}b}\left(\dfrac{\sqrt{-2\alpha(1+\alpha)M^{2}P}}{1+\sqrt{-2\alpha(1+\alpha)M^{2}P}}\right)$
(22)
where $b=\dfrac{\sqrt{\alpha}M}{2}$ and
$P=-\dfrac{\alpha}{(1+\alpha)}\dfrac{M^{2}}{2r^{4}}$ and we have set the
gravitational constant $G_{N}=1$. The associated Lagrangian $L$ is given by
$\displaystyle
L(P)=P\dfrac{\left(1-8\sqrt{-2\alpha(1+\alpha)M^{2}P}-6\alpha(1+\alpha)M^{2}P\right)}{\left(1+\sqrt{-2\alpha(1+\alpha)M^{2}P}\right)^{4}}-\dfrac{3\left(-2\alpha(1+\alpha)M^{2}P\right)^{5/4}\left(3-\sqrt{-2\alpha(1+\alpha)M^{2}P}\right)}{4\alpha(1+\alpha)M^{2}b\left(1+\sqrt{-2\alpha(1+\alpha)M^{2}P}\right)^{7/2}}$
(23)
## 4 Regular MOG static spherically symmetric spacetime
The MOG regular, static spherically symmetric solution can be written as[81,
82]
$\displaystyle
ds^{2}=-f(r)dt^{2}+\dfrac{1}{f(r)}dr^{2}+r^{2}\left(d\theta^{2}+\sin^{2}\theta
d\phi^{2}\right)$ (24)
with
$\displaystyle
f(r)=1-\dfrac{2(1+\alpha)Mr^{2}}{\left(r^{2}+\alpha(1+\alpha)M^{2}\right)^{3/2}}+\dfrac{\alpha(1+\alpha)M^{2}r^{2}}{\left(r^{2}+\alpha(1+\alpha)M^{2}\right)^{2}}$
(25)
Here $M$ is the mass parameter of the gravitating object. The associated
gravi-electric field is given by
$\displaystyle E_{\rm
grav}(r)=\sqrt{\alpha}Mr^{4}\left[\dfrac{r^{2}-5\alpha(1+\alpha)M^{2}}{\left\\{r^{2}+\alpha(1+\alpha)M^{2}\right\\}^{4}}+\dfrac{15}{2}\dfrac{(1+\alpha)M}{\left\\{r^{2}+\alpha(1+\alpha)M^{2}\right\\}^{7/2}}\right]$
(26)
For convenience in studying the theory, we introduce the alternative
parameter $\beta$ as
$\displaystyle\beta=\dfrac{\alpha}{1+\alpha}$ (27)
The ADM mass of the gravitating object is
$\displaystyle M_{\rm ADM}=(1+\alpha)M=\dfrac{M}{1-\beta}\equiv M_{\beta}$
(28)
We can express the metric in Eq. (24) in terms of the ADM mass with
$\displaystyle f(r)=1-\dfrac{2M_{\beta}r^{2}}{\left(r^{2}+\beta
M_{\beta}^{2}\right)^{3/2}}+\dfrac{\beta M_{\beta}^{2}r^{2}}{\left(r^{2}+\beta
M_{\beta}^{2}\right)^{2}}$ (29)
Here $M_{\beta}$ is the ADM mass of the spacetime and $\beta$ is the
enhancement parameter. The gravitational source charge in terms of the ADM
mass is given by
$\displaystyle Q_{g}=\sqrt{\beta(1-\beta)G_{N}}M_{\beta}$ (30)
The horizons of the spacetime are the zeros of the function $f(r)$, and these
can be used to determine the critical value of the parameter $\beta$.
Substituting $\dfrac{r^{2}}{M_{\beta}^{2}}+\beta=x^{2}$, the zeros of $f(r)$
are determined by the equation
$\displaystyle ax^{4}+bx^{3}+cx^{2}+dx+e=0$ (31)
where $a=1$, $b=-2$, $c=\beta$, $d=2\beta$, $e=-\beta^{2}$. The discriminant
of this quartic equation is
$\displaystyle\Delta=$ $\displaystyle
256a^{3}e^{3}-192a^{2}bde^{2}-128a^{2}c^{2}e^{2}+144a^{2}cd^{2}e-27a^{2}d^{4}$
$\displaystyle+144ab^{2}ce^{2}-6ab^{2}d^{2}e-80abc^{2}de+18abcd^{3}+16ac^{4}e$
$\displaystyle-4ac^{3}d^{2}-27b^{4}e^{2}+18b^{3}cde-4b^{3}d^{3}-4b^{2}c^{3}e+b^{2}c^{2}d^{2}$
$\displaystyle=$
$\displaystyle-16\beta^{3}\left(25\beta^{3}-28\beta^{2}+47\beta-16\right)$
(32)
The number of distinct real roots of the quartic equation, and hence the
number of horizons, is controlled by the sign of the discriminant. The
critical value of $\beta$, at which the two horizons merge into a single one,
is obtained by setting $\Delta=0$; discarding the trivial root $\beta=0$,
this reduces to the cubic equation
$\displaystyle 10800\beta^{3}-12096\beta^{2}+20304\beta-6912=0$ (33)
The only real solution of this equation is $\beta_{\rm crit}=0.402186$. For
$\beta<\beta_{\rm crit}$ there is a black hole with two horizons[80], while
for $\beta>\beta_{\rm crit}$ there is no horizon. This result is displayed in
Fig. 1.
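These statements can be cross-checked numerically. The sketch below (an illustration, not taken from the paper) extracts $\beta_{\rm crit}$ from Eq. (33) and counts horizons from the quartic Eq. (31), keeping only roots with $x\geq\sqrt{\beta}$ so that $r^{2}=M_{\beta}^{2}(x^{2}-\beta)\geq 0$:

```python
import numpy as np

def horizon_radii(beta, M=1.0):
    """Horizon radii from x^4 - 2x^3 + beta*x^2 + 2*beta*x - beta^2 = 0,
    with x^2 = r^2/M^2 + beta (Eq. (31))."""
    roots = np.roots([1.0, -2.0, beta, 2.0 * beta, -beta**2])
    x = roots.real[np.abs(roots.imag) < 1e-9]
    x = x[x >= np.sqrt(beta)]            # physical branch: r^2 = M^2*(x^2 - beta) >= 0
    return np.sort(M * np.sqrt(x**2 - beta))

# Critical value from the cubic of Eq. (33); it has a single real root.
cubic = np.roots([10800.0, -12096.0, 20304.0, -6912.0])
beta_crit = cubic.real[np.abs(cubic.imag) < 1e-9][0]

print(beta_crit)              # ~0.402186
print(horizon_radii(0.3))     # two horizons for beta < beta_crit
print(horizon_radii(0.45))    # empty: no horizon for beta > beta_crit
```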
Figure 1: The zeros of the function $f(r)$, showing that either two horizons
or no horizon are possible for the regular MOG spherically symmetric
spacetime. For $\beta=0$ the spacetime becomes Schwarzschild and has a
singularity at $r=0$. A single-horizon solution occurs at the critical value
of the parameter, $\beta_{\rm crit}\approx 0.402186$. It is easy to check
that there is no singularity for non-zero values of the parameter $\beta$.
### 4.1 Motion of photons in MOG spherically symmetric spacetime
The Lagrangian for the photon motion is given by
$\displaystyle\mathcal{L}=\dfrac{1}{2}\left[-f(r)\dot{t}^{2}+\dfrac{1}{f(r)}\dot{r}^{2}+r^{2}\dot{\theta}^{2}+r^{2}\sin^{2}\theta\dot{\phi}^{2}\right]$
(34)
For a spherically symmetric spacetime, we can always choose without loss of
generality $\theta=\pi/2$ and $\dot{\theta}=0$. The equations for $\dot{t}$
and $\dot{\phi}$ can be deduced using the symmetries of the MOG regular,
static spherically symmetric spacetime. The associated equations are
$\displaystyle f(r)\,\dot{t}$ $\displaystyle=E$ (35) $\displaystyle
r^{2}\dot{\phi}$ $\displaystyle=L$ (36)
where $E$ and $L$ are, respectively, the energy and angular momentum of the
photon. The radial equation can be written as
$\displaystyle\dot{r}^{2}+V(r)=E^{2}$ (37)
where $V(r)=L^{2}\dfrac{f(r)}{r^{2}}$. The structure of the potential
determines the presence of stable and/or unstable circular orbits. From
Fig. 2, we conclude that a stable circular orbit coexists with an unstable
circular orbit only in the range $\beta_{\rm crit}<\beta\lesssim 0.5$; a
closer inspection shows that for $\beta<\beta_{\rm crit}$ only unstable
circular orbits exist.
(a) Minima of the potential, corresponding to stable circular orbits; these
are relevant only for $\beta_{\rm crit}<\beta\lesssim 0.5$. (b) Maxima of the
potential, corresponding to unstable circular orbits, valid in the range
$0<\beta\lesssim 0.5$.
Figure 2: The variation of the potential for the photon: (a) the minima and
(b) the maxima of the potential. For $\beta_{\rm crit}<\beta\lesssim 0.5$, a
stable circular orbit exists along with an unstable one, whereas for
$\beta<\beta_{\rm crit}$ only unstable circular orbits exist for the MOG
static spherically symmetric solution.
As before, the substitution $r^{2}/M_{\beta}^{2}+\beta=x^{2}$ can be used to
find the position of the photon sphere, which is given by the largest real
root of the equation
$\displaystyle x^{6}-3x^{5}+2\beta x^{4}+6\beta x^{3}-4\beta^{2}x^{2}-3\beta^{2}x+2\beta^{3}=0$ (38)
For $\beta<\beta_{\rm crit}$, there is a photon sphere, and a gradual
increase of the enhancement parameter $\beta$ decreases both the horizon and
the shadow radius. For the dark compact object with $\beta_{\rm
crit}<\beta\lesssim 0.5$, although there is no horizon, we still have a
photon sphere. In spherically symmetric spacetimes, the shadow of the black
hole is circular. For the regular MOG black holes, the shadow is shown for
various values of the parameter $\beta$ in Fig. 3b. In the figures, $A$
and $B$ are the celestial coordinates.
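The parameter dependence described above can be reproduced with a short numerical sketch (our own illustration; the search window is arbitrary), which locates the photon sphere as the maximum of the effective potential $f(r)/r^{2}$ of Eq. (37) and then uses the standard formula $r_{sh}=r_{ph}/\sqrt{f(r_{ph})}$ for the shadow radius seen by a distant observer:

```python
import numpy as np

def f(r, beta, M=1.0):
    """Metric function of the regular MOG static solution, Eq. (29)."""
    s = r**2 + beta * M**2
    return 1.0 - 2.0 * M * r**2 / s**1.5 + beta * M**2 * r**2 / s**2

def photon_sphere_and_shadow(beta, M=1.0):
    """Photon sphere = maximum of V(r)/L^2 = f(r)/r^2; shadow radius at
    infinity is r_ph / sqrt(f(r_ph))."""
    r = np.linspace(1.0, 6.0, 200001) * M   # search outside the regular core;
    pot = f(r, beta, M) / r**2              # (for beta > beta_crit the *local*
    r_ph = r[np.argmax(pot)]                #  maximum should be selected instead)
    return r_ph, r_ph / np.sqrt(f(r_ph, beta, M))

for beta in (0.0, 0.2, 0.4):
    r_ph, r_sh = photon_sphere_and_shadow(beta)
    print(f"beta={beta:.1f}: r_ph={r_ph:.4f} M, r_sh={r_sh:.4f} M")
# beta = 0 recovers Schwarzschild, r_ph = 3M and r_sh = 3*sqrt(3) M ~ 5.196 M;
# both radii shrink as beta grows.
```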
(a) Variation of various radii with variation of the parameter $\beta$.
(b) Variation of the shadow radius with variation of parameter $\beta$.
Figure 3: (a) The variation of the horizon radius, photon sphere radius and
shadow radius as a function of the parameter $\beta$. It is interesting to
note that in the range $0.4\lesssim\beta\lesssim 0.5$ there is no event
horizon; however, this does not prevent us from defining the photon sphere
and the shadow of the compact object. In this range of parameter space, we
have both stable and unstable circular orbits. (b) The circular shadow
structures.
### 4.2 Parameter estimation using M87* and Sgr A* data
Although astrophysical black holes are rotating in nature, for a first
estimation of black hole parameters one can use the shadow of the spherically
symmetric solution. Since the shadow of a spherically symmetric black hole
does not depend on the inclination angle, we can work with the shadow of the
regular MOG black hole solution to obtain an initial estimate of the
parameter $\beta$. Moreover, the observed shadows of M87* and Sgr A* are more
or less circular. This motivates us to search for observational signatures of
a regular MOG black hole or compact object in the EHT data for M87* and
Sgr A*.
In the previous section, we calculated how the radii of the photon sphere and
of the shadow depend on the parameter $\beta$. We can now constrain the value
of $\beta$ from the angular diameter, which is defined by
$\displaystyle\tan\alpha\approx\alpha=\dfrac{r_{sh}}{D}$ (39)
where $r_{sh}$ is the radius of the black hole shadow, $D$ is the distance of
the black hole from the observer, and $2\alpha$ is the angular diameter. As
the distance between the black hole and the observer is much greater than the
radius of the black hole shadow, the small-angle approximation is justified.
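Putting in numbers (a rough sketch of the estimate; we insert the Schwarzschild-limit shadow $r_{sh}=3\sqrt{3}\,GM/c^{2}$, i.e. $\beta=0$, together with representative mass and distance values quoted below for M87*, stellar dynamics, and Sgr A*, Keck):

```python
import math

GM_SUN_KM = 1.476625            # G*M_sun/c^2 in km
PC_KM = 3.0857e13               # one parsec in km
RAD_TO_MUAS = 180.0 / math.pi * 3600.0 * 1e6

def angular_diameter_muas(r_sh_in_M, mass_solar, distance_pc):
    """Angular shadow diameter 2*alpha = 2*r_sh/D of Eq. (39), in micro-arcseconds."""
    r_sh_km = r_sh_in_M * mass_solar * GM_SUN_KM
    return 2.0 * r_sh_km / (distance_pc * PC_KM) * RAD_TO_MUAS

r_sh = 3.0 * math.sqrt(3.0)     # beta = 0 (Schwarzschild) shadow radius in units of M
print(angular_diameter_muas(r_sh, 6.2e9, 16.8e6))    # M87*:   ~37.9 muas
print(angular_diameter_muas(r_sh, 3.951e6, 7935.0))  # Sgr A*: ~51.1 muas
```

With the central M87* mass, the $\beta=0$ prediction already sits slightly below the $(42\pm 3)\mu as$ band, while the Sgr A* value falls inside the EHT error bar; this is why the mass uncertainties must be folded in, as done in the text.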
The mass and distance of M87* need to be independently measured. The mass of
M87* has been reported to be $M=3.5_{-0.3}^{+0.9}\times 10^{9}M_{\odot}$ from
model gas dynamics mass measurements[83]. However, based on model stellar
dynamics mass measurements, the mass is reported to be
$M=6.2_{-0.5}^{+1.1}\times 10^{9}M_{\odot}$[84, 85]. The distance of the
gravitating source is reported to be $D=(16.8\pm 0.8)\rm Mpc$. Given the mass
and distance of the black hole, one can define the angular gravitational
radius $\theta={GM}/{c^{2}D}$. The angular gravitational radius
$\theta_{dyn}$ measured by the stellar-dynamics process and the angular
gravitational radius $\theta_{g}$ reported by the EHT are more or less
consistent[86]. Theoretical bounds on the shadow diameter have been discussed
by Kocherlakota et al.[87], who used the M87* shadow size to place
restrictions on the physical charges of several different spinning and
non-rotating black holes. We use the stellar-dynamics mass measurement to
theoretically deduce the shadow radius of the black hole. The supermassive
black hole M87* in the core of the galaxy M87 has an angular diameter of
$(42\pm 3)\mu as$, according to the Event Horizon Telescope (EHT)
collaboration[2]. In the plots shown in Fig. 4, the central value
$42~{}\mu as$ is marked with a grey line and the error bar with dashed grey
lines. There is also an uncertainty in the mass estimation of M87* around its
central value of $6.2\times 10^{9}M_{\odot}$. The variation of the angular
diameter with $\beta$, taking the central value of the mass, is shown with a
blue line; the corresponding curves including the mass errors are shown with
dot-dashed blue lines.
Considering the error bars both for the angular diameter measurement and for
the mass measurement, there is a possibility that M87* could be a regular MOG
black hole. For the angular diameter $(42\pm 3)\mu as$ the value of the
parameter $\beta$ can be as high as approximately $\beta=0.3$, as shown with
a vertical orange line in Fig. 4a. In this case, M87* is consistent with
being a regular MOG black hole, and the possibility that M87* is a
horizonless compact object can be rejected. However, if one considers a
$10\%$ offset of the angular diameter, the parameter $\beta$ can be as high
as approximately $\beta=0.45$, in which case M87* could be a horizonless
compact object. In Fig. 4b, the theoretical range of $\beta$ is shown by a
grey shaded region, and the part of that region enclosed by the two orange
lines represents the observationally allowed range of the parameter $\beta$
for M87*.
(a) Angular diameter versus $\beta$ with observed values $(42\pm 3)\mu as$
marked in grey
(b) Angular diameter versus $\beta$ with observed values $(37.8\pm 2.7)\mu
as$ marked in grey
Figure 4: (a) The variation of the ring diameter, shown with error bars. (b)
The variation of the shadow diameter.
According to the EHT collaboration, the angular diameter of the Sgr A* shadow
is $(48.7\pm 7)\mu as$[88, 89, 90, 91, 92, 93]. The angular diameter of the
Sgr A* shadow depends on the determined mass and distance of Sgr A*. Several
groups have reported the mass and distance of Sgr A*. Keeping the redshift
parameter free, the Keck team reported the mass and distance of Sgr A* to be
$(3.975\pm 0.058\pm 0.026)\times 10^{6}M_{\odot}$ and $(7959\pm
59\pm 32)\rm pc$, respectively [94]. The same group has also reported the mass
and distance assuming the redshift parameter to be unity and these are
$(3.951\pm 0.047)\times 10^{6}M_{\odot}$ and $(7935\pm 50)\rm pc$,
respectively[94]. The mass and distance, according to the Gravity
collaboration are, respectively, $(4.261\pm 0.012)\times 10^{6}M_{\odot}$ and
$(8246.7\pm 9.3)\rm pc$[95, 96]. By accounting for optical aberrations, the
Gravity collaboration further constrained the black hole mass to $(4.297\pm
0.012\pm 0.040)\times 10^{6}M_{\odot}$ and the distance to $(8277\pm 9\pm
33)\rm pc$. In Fig. 5, we have
plotted the angular diameter as a function of the parameter $\beta$, with the
mass and distance as given by the above teams. Using the Keck team data, one
can constrain the parameter to $0<\beta\lesssim 0.4$, which essentially rules
out the possibility that Sgr A* is a horizonless compact object. However,
with the Gravity collaboration data the allowed range is $0<\beta\lesssim
0.46$, so there remains a possibility that Sgr A* is a horizonless compact
object.
Figure 5: Theoretical angular diameter for Sgr A*.
## 5 Regular MOG rotating compact object
The regular rotating MOG solution can be obtained with the help of the
modified Newman-Janis algorithm. The associated line element of the spacetime
in Boyer-Lindquist coordinates is given by[81]
$\displaystyle ds^{2}=$
$\displaystyle-f(r,\theta)dt^{2}-2a\sin^{2}\theta\left\\{1-f(r,\theta)\right\\}d\phi
dt$ $\displaystyle+\dfrac{\Sigma}{\Delta}dr^{2}+\Sigma
d\theta^{2}+\sin^{2}\theta\left[\Sigma-a^{2}\left\\{f(r,\theta)-2\right\\}\sin^{2}\theta\right]d\phi^{2}$
(40)
where
$\displaystyle f(r,\theta)$
$\displaystyle=1-\dfrac{2M_{\beta}r\sqrt{\Sigma}}{\left[\Sigma+\beta
M_{\beta}^{2}\right]^{3/2}}+\dfrac{\beta
M_{\beta}^{2}\Sigma}{\left[\Sigma+\beta M_{\beta}^{2}\right]^{2}}$ (41a)
$\displaystyle\Delta$ $\displaystyle=\Sigma f(r,\theta)+a^{2}\sin^{2}\theta$
(41b) $\displaystyle\Sigma$ $\displaystyle=r^{2}+a^{2}\cos^{2}\theta$ (41c)
Here, $M_{\beta}$ is the ADM mass of the spacetime, $\beta$ is the enhancement
parameter and $a$ is the spin parameter. Only a portion of the full
$\beta$-$a$ parameter space admits a regular rotating MOG black hole. The
parameter space is shown in Fig. 6, from which it is noticeable that a
rapidly spinning regular MOG black hole requires a relatively low value of
the parameter $\beta$.
Figure 6: The parameter space in the $(\beta,a)$ plane for the regular
rotating MOG solution. The reddish region represents black hole solutions,
and its boundary corresponds to extremal black holes.
The location and structure of the static limit surface (SLS) are obtained by
setting the prefactor of $dt^{2}$ to zero. The SLS can be determined by
solving the following equation
$\displaystyle\left(\Sigma+\beta
M_{\beta}^{2}\right)^{2}-2M_{\beta}r\sqrt{\Sigma(\Sigma+\beta
M_{\beta}^{2})}+\beta M_{\beta}^{2}\Sigma=0$ (42)
For $\beta=0$, we recover the usual Kerr scenario. The variation and
existence of the SLS for rotating regular MOG black holes is displayed in
Fig. 7.
(a) Variation of $g_{tt}$ with respect to $r$, when $\theta=0$ and
$\beta=0.1$ (b) Variation of $g_{tt}$ with respect to $r$, when $\theta=0$ and
$\beta=0.3$ (c) Variation of $g_{tt}$ with respect to $r$, when $\theta=\pi/4$
and $\beta=0.1$ (d) Variation of $g_{tt}$ with respect to $r$, when
$\theta=\pi/4$ and $\beta=0.3$
Figure 7: The nature of the static limit surface (SLS) depicted as a function
of the coordinate $r$ for various values of the spin parameter $a$. The
variation of the SLS with respect to the enhancement parameter $\beta$ can be
seen along each row of the figure matrix [i.e. (a)-(b) and (c)-(d)].
The location of the horizon is determined by setting $g^{rr}$ to zero, which
gives
$\displaystyle\Delta=\Sigma f(r,\theta)+a^{2}\sin^{2}\theta=0$ (43)
The existence of the horizon in the regular rotating MOG solution is depicted
in Fig. 8.
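The nesting of the two surfaces can be made explicit with a small root-finding sketch (our own illustration; the parameter values $a=0.5$, $\beta=0.1$, $\theta=\pi/2$ are arbitrary), solving $g_{tt}=0$ for the SLS and $\Delta=0$ for the horizon:

```python
import numpy as np

def f_rot(r, th, a, beta, M=1.0):
    """Metric function of the rotating regular MOG solution, Eq. (41a)."""
    sigma = r**2 + a**2 * np.cos(th)**2
    s = sigma + beta * M**2
    return 1.0 - 2.0 * M * r * np.sqrt(sigma) / s**1.5 + beta * M**2 * sigma / s**2

def outermost_root(g, r_lo=0.1, r_hi=6.0, n=60001):
    """Outermost sign change of g on [r_lo, r_hi], refined by bisection."""
    r = np.linspace(r_lo, r_hi, n)
    v = g(r)
    idx = np.where(np.sign(v[:-1]) != np.sign(v[1:]))[0]
    if len(idx) == 0:
        return None                       # no root: horizonless configuration
    lo, hi = r[idx[-1]], r[idx[-1] + 1]
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if g(mid) * g(lo) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a, beta, th = 0.5, 0.1, np.pi / 2
sigma = lambda r: r**2 + a**2 * np.cos(th)**2
r_sls = outermost_root(lambda r: f_rot(r, th, a, beta))            # g_tt = 0
r_hor = outermost_root(lambda r: sigma(r) * f_rot(r, th, a, beta)
                       + a**2 * np.sin(th)**2)                     # Delta = 0
print(r_hor, r_sls)  # the horizon lies inside the SLS; the ergoregion sits between them
```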
(a) Variation of $g^{rr}$ with respect to $r$ when $\beta=0.1$ (b) Variation
of $g^{rr}$ with respect to $r$, when $\beta=0.3$
Figure 8: A set of parameter values allowing black hole solutions. In these
plots, the parameters are chosen for the extremal black hole. A horizonless
compact object results from increasing the spin while keeping the enhancement
parameter $\beta$ constant. The enhancement parameter is (a) $\beta=0.1$ and
(b) $\beta=0.3$.
## 6 Analysis of the black hole shadow
To investigate the black hole shadow, we must determine the photon geodesic
equations for the metric given in Eq. (40). When using the Hamilton-Jacobi
formulation for the rotating MOG regular solution, it is particularly
challenging to separate the equations, since the function $f(r,\theta)$ has a
highly complex structure. To overcome this issue, we consider a
near-equatorial approximation for $\theta$, such that
$\theta\approx\pi/2+\epsilon$[97]. Note
that although we are focusing on photon orbits close to the equator, unstable
circular photon orbits are not limited to this region. This fact does not
invalidate the calculations that follow, because the major goal of this work
is to calculate the shadow of the black hole as seen by an observer at
infinity, which can be done using the approximation indicated above. The
trigonometric functions here have the following form: $\sin\theta\approx 1$
and $\cos\theta\approx-\epsilon$. With these approximations the function
$f(r,\theta)$ becomes $f(r)$, which is given by
$\displaystyle f(r)=1-\dfrac{2M_{\beta}r^{2}}{\left(r^{2}+\beta
M_{\beta}^{2}\right)^{3/2}}+\dfrac{\beta M_{\beta}^{2}r^{2}}{\left(r^{2}+\beta
M_{\beta}^{2}\right)^{2}}$ (44)
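This reduction is easy to verify: on the equator $\Sigma=r^{2}$, so $f(r,\theta)$ collapses to the static metric function and the spin $a$ drops out of $f$ entirely. A quick numerical check (our own, with arbitrary parameter values):

```python
import numpy as np

def f_rot(r, th, a, beta, M=1.0):
    """Metric function of the rotating solution, Eq. (41a)."""
    sigma = r**2 + a**2 * np.cos(th)**2
    s = sigma + beta * M**2
    return 1.0 - 2.0 * M * r * np.sqrt(sigma) / s**1.5 + beta * M**2 * sigma / s**2

def f_static(r, beta, M=1.0):
    """Static metric function, Eq. (44)."""
    s = r**2 + beta * M**2
    return 1.0 - 2.0 * M * r**2 / s**1.5 + beta * M**2 * r**2 / s**2

r = np.linspace(1.5, 10.0, 500)
# On the equator Sigma = r^2, so f(r, pi/2) equals the static f(r) of Eq. (44)
assert np.allclose(f_rot(r, np.pi / 2, 0.9, 0.2), f_static(r, 0.2))
# and the spin parameter a drops out of f at theta = pi/2:
assert np.allclose(f_rot(r, np.pi / 2, 0.3, 0.2), f_rot(r, np.pi / 2, 0.9, 0.2))
print("equatorial reduction verified")
```

Note that the spin still enters the metric through $\Delta$ and $g_{t\phi}$; only the function $f$ loses its $a$-dependence on the equator.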
### 6.1 Null geodesics
For a general stationary, axisymmetric metric the Lagrangian $\mathcal{L}$ can
be written as
$\displaystyle
g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}=g_{tt}\dot{t}^{2}+2g_{t\phi}\dot{t}\dot{\phi}+g_{\phi\phi}\dot{\phi}^{2}+g_{rr}\dot{r}^{2}+g_{\theta\theta}\dot{\theta}^{2}=2\mathcal{L}$
(45)
For massive and massless particles, $g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}$ is
equal to $-1$ and zero, respectively. The associated Hamiltonian is given by
$\displaystyle\mathcal{H}=p_{\mu}\dot{x}^{\mu}-\mathcal{L}=\dfrac{1}{2}g^{\mu\nu}p_{\mu}p_{\nu}=\dfrac{k}{2}$
(46)
where $k$ is related to the rest mass of the test particle and vanishes for
photons. Utilising the Hamilton-Jacobi technique, we may connect the
Hamiltonian to the action $S$ by
$\displaystyle\mathcal{H}(x^{\mu},p^{\mu})+\dfrac{\partial
S}{\partial\lambda}=0\hskip 28.45274pt{\rm with}\hskip
28.45274ptp_{\mu}=\dfrac{\partial S}{\partial x^{\mu}}$ (47)
The metric in Eq. (45) is independent of $t$ and $\phi$, so the (specific)
energy $E$ and the (specific) angular momentum $L$ are conserved quantities.
These are given by
$\displaystyle E=g_{tt}\dot{t}+g_{t\phi}\dot{\phi}$ (48a) $\displaystyle
L=g_{t\phi}\dot{t}+g_{\phi\phi}\dot{\phi}$ (48b)
Accordingly, the action may be expressed as
$\displaystyle S=-Et+L\phi+S(r,\epsilon)$ (49)
Now, for the metric given in Eq. (40), Eq. (49) is separable such that
$S(r,\epsilon)=S_{r}(r)+S_{\epsilon}(\epsilon)$. Substituting Eq. (49) into
Eq. (46), we get
$\displaystyle g^{rr}\left(\dfrac{\partial S^{r}}{\partial r}\right)^{2}+g^{\theta\theta}\left(\dfrac{\partial S^{\epsilon}}{\partial\epsilon}\right)^{2}+g^{tt}(-E)^{2}+2g^{t\phi}(-E)L+g^{\phi\phi}L^{2}=0$
(50)
For the metric in Eq. (40), the above equation takes the form:
$\displaystyle\Delta\left(\dfrac{dS^{r}}{dr}\right)^{2}+\left(\dfrac{dS^{\epsilon}}{d\epsilon}\right)^{2}-\left\\{\dfrac{1}{\Delta}\left[r^{2}+a^{2}\right]^{2}-a^{2}\right\\}E^{2}+\dfrac{2ar^{2}}{\Delta}\left\\{1-f(r)\right\\}EL+\dfrac{r^{2}f(r)}{\Delta}L^{2}=0$
(51)
It is interesting to note that the $r$ and $\epsilon$ parts of the preceding
equation separate, so that
$\displaystyle\Delta\left(\dfrac{dS^{r}}{dr}\right)^{2}-\dfrac{1}{\Delta}\left[r^{2}+a^{2}\right]^{2}E^{2}+\dfrac{2ar^{2}}{\Delta}\left\\{1-f(r)\right\\}EL+\dfrac{r^{2}f(r)}{\Delta}L^{2}+a^{2}E^{2}=-\left(\dfrac{dS^{\epsilon}}{d\epsilon}\right)^{2}=-C$
(52)
Here $C$ is the Carter constant. The left-hand side of Eq. (52) is a function
of $r$ only, whereas the right-hand side is a function of $\epsilon$ alone.
The radial part of Eq. (52) may be expressed as
$\displaystyle\left[\dfrac{dS^{r}}{dr}\right]^{2}=\dfrac{R(r)}{\Delta^{2}}$
(53)
where
$\displaystyle
R(r)=-\Delta\left[C+(L-aE)^{2}\right]+\left\\{\left[r^{2}+a^{2}\right]E-aL\right\\}^{2}$
(54)
The angular part can be written as
$\displaystyle\left(\dfrac{dS^{\epsilon}}{d\epsilon}\right)^{2}=C$ (55)
Consequently, the action adopts the form
$\displaystyle
S=-Et+L\phi+\int\dfrac{\sqrt{R(r)}}{\Delta}dr+\int\sqrt{C}d\epsilon$ (56)
The equations of motion for $r$ and $\epsilon$ are given by
$\displaystyle\dot{r}=\pm\dfrac{\sqrt{R}}{r^{2}}$ (57)
$\displaystyle\dot{\epsilon}=\pm\dfrac{\sqrt{C}}{r^{2}}$ (58)
To determine the unstable circular orbits, we introduce
$\chi=\dfrac{C}{E^{2}}$ and $\eta=\dfrac{L}{E}$. The unstable circular orbits
are obtained by setting $R(r)=0=\dfrac{dR(r)}{dr}$. Using Eq. (54) with these
conditions, one obtains
$\displaystyle\left[r^{2}f(r)+a^{2}\right]\left(\chi+\eta^{2}+a^{2}-2\eta
a\right)=\left[r^{2}+a^{2}-a\eta\right]^{2}$ (59)
$\displaystyle{\chi+\eta^{2}+a^{2}-2\eta
a}=\dfrac{4}{\left[2f(r)+rf^{\prime}(r)\right]}\left[r^{2}+a^{2}-a\eta\right]$
(60)
These two equations can be solved to get two one-parameter classes of
solutions parametrized in terms of $r$, which is the radius of unstable
circular orbits:
1. (i)
$\displaystyle\chi$ $\displaystyle=-\dfrac{r^{4}}{a^{2}}$ (61)
$\displaystyle\eta$ $\displaystyle=\frac{a^{2}+r^{2}}{a}$ (62)
2. (ii)
$\displaystyle\chi$
$\displaystyle=\frac{r^{3}\left[8a^{2}f^{\prime}(r)-r\left\\{rf^{\prime}(r)-2f(r)\right\\}^{2}\right]}{a^{2}\left\\{rf^{\prime}(r)+2f(r)\right\\}^{2}}$
(63) $\displaystyle\eta$
$\displaystyle=\dfrac{1}{a}\left[r^{2}+a^{2}-\dfrac{4(r^{2}f(r)+a^{2})}{2f(r)+rf^{\prime}(r)}\right]$
(64)
The first class of solutions is unphysical, whereas the second determines the
contour of the shadow in the $(\eta,\chi)$ plane. Furthermore, this solution
satisfies the following condition for the critical curve:
$\displaystyle
a^{2}-\chi-\eta^{2}=\frac{8\left(a^{2}+r^{2}f(r)\right)}{rf^{\prime}(r)+2f(r)}-\frac{16\left(a^{2}+r^{2}f(r)\right)}{\left(rf^{\prime}(r)+2f(r)\right)^{2}}-2r^{2}$
(65)
For the non-rotating case, i.e., the regular MOG static, spherically symmetric
solution, we have
$\displaystyle\chi+\eta^{2}=\dfrac{2r_{ph}^{2}\left[r_{ph}^{2}f^{\prime}(r_{ph})^{2}-4f(r_{ph})^{2}+8f(r_{ph})\right]}{\left\\{r_{ph}f^{\prime}(r_{ph})+2f(r_{ph})\right\\}^{2}}$
(66)
Here, $r_{ph}$ is the radius of the photon sphere. The above equation helps to
find the shadow of the regular MOG static, spherically symmetric solution.
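In the static limit this can be evaluated numerically. The sketch below (illustrative, not the paper's code; units $M_{\beta}=1$) solves the photon-sphere condition $r f^{\prime}(r)=2f(r)$, which follows from Eqs. (59)-(60) with $a=0$, and evaluates the shadow radius $\sqrt{\eta^{2}+\chi}=r_{ph}/\sqrt{f(r_{ph})}$; for $\beta=0$ it recovers the Schwarzschild value $3\sqrt{3}\,M$.

```python
# Illustrative sketch: photon-sphere radius and static shadow radius for the
# regular MOG metric, in units M_beta = 1.
# From Eqs. (59)-(60) with a = 0: r_ph solves r f'(r) = 2 f(r), and the
# shadow radius is sqrt(eta^2 + chi) = r_ph / sqrt(f(r_ph)).

def f(r, beta):
    s = r * r + beta
    return 1.0 - 2.0 * r * r / s**1.5 + beta * r * r / s**2

def fprime(r, beta, h=1e-6):
    # central finite difference for f'(r)
    return (f(r + h, beta) - f(r - h, beta)) / (2.0 * h)

def photon_sphere(beta, lo=2.0, hi=6.0):
    # bisection on g(r) = r f'(r) - 2 f(r)
    g = lambda r: r * fprime(r, beta) - 2.0 * f(r, beta)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def shadow_radius(beta):
    r_ph = photon_sphere(beta)
    return r_ph / f(r_ph, beta)**0.5

# beta = 0 recovers Schwarzschild: r_ph = 3M, shadow radius 3*sqrt(3) M.
print(shadow_radius(0.0))   # ~5.196
print(shadow_radius(0.3))   # smaller: the shadow shrinks as beta grows
```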
### 6.2 Celestial coordinates and shadow structure
We now determine the apparent shape of the rotating MOG regular black hole
shadow. For a clearer depiction, we locate the shadow using the celestial
coordinates $A_{i}$ and $B_{i}$, introduced as
$\displaystyle A_{i}$
$\displaystyle=\displaystyle\lim_{r_{0}\to\infty}\left(-r_{0}^{2}\sin\theta_{0}\dfrac{d\phi}{dr}\right)$
(67) $\displaystyle B_{i}$
$\displaystyle=\displaystyle\lim_{r_{0}\to\infty}\left(r_{0}^{2}\dfrac{d\epsilon}{dr}\right)$
(68)
where $r_{0}$ is the distance between the black hole and the distant observer,
and $\theta_{0}$ is the inclination angle, i.e., the angle between the line of
sight and the rotation axis of the black hole. Evaluating the limit
$r_{0}\to\infty$, one arrives at the following:
$\displaystyle A_{i}=-\eta$ (69) $\displaystyle B_{i}=\sqrt{\chi}$ (70)
Figure 9: Schematic diagram of lensing and formation of shadow
The photons are now parametrized by the conserved quantities $(\eta,\chi)$.
Not all light rays coming from a source placed behind the black hole reach the
observer, since the black hole blocks a portion of them through gravitational
lensing, as shown in Fig. 9. The dark patch that appears to the observer is
known as the black hole shadow, and the boundary of this shadow can be
determined by allowing the parameters $(\eta,\chi)$ all possible values. For a
spherically symmetric scenario, the celestial coordinates do not depend on the
inclination angle and the shadow appears as a perfect circle. However, the
inclusion of the rotational effect of compact objects and of the inclination
angle makes the shadow dented rather than perfectly circular. The shape and
size of the shadow can be analysed to determine the parameters of the black
hole, including its spin. The variation of the shape and size of the black
hole shadow is depicted in Fig. 10.
Figure 10: The shadow of the regular MOG rotating black hole, centred at the
origin of the coordinate system. The inclination angle is $\theta_{0}=\pi/2$.
Each panel shows the shadow for a fixed value of the spin parameter $a$. Note
that, just as in the Kerr scenario, the shadow gets dented with an increase in
the spin parameter $a$. An increase in the parameter $\beta$ causes the shadow
to shrink for a fixed value of the spin parameter.
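A numerical sketch of the contour construction (illustrative, not the paper's code) follows. It samples the critical-orbit parameters of Eqs. (63)-(64) over the photon region and maps them to the celestial plane through $A=-\eta$, $B=\sqrt{\chi}$; setting $\beta=0$ reduces $f(r)$ to the Kerr value $1-2M/r$ (units $M_{\beta}=1$), so for a small spin the contour should stay near a circle of radius $3\sqrt{3}$.

```python
# Illustrative sketch: shadow contour of the rotating solution from the
# critical-orbit parameters of Eqs. (63)-(64), via A = -eta, B = sqrt(chi).
# Units M_beta = 1; beta = 0 reduces f(r) to the Kerr case f = 1 - 2/r.
import numpy as np

def f(r, beta):
    s = r**2 + beta
    return 1.0 - 2.0 * r**2 / s**1.5 + beta * r**2 / s**2

def fprime(r, beta, h=1e-6):
    return (f(r + h, beta) - f(r - h, beta)) / (2.0 * h)

def shadow_contour(a, beta, r_lo=2.0, r_hi=4.0, n=4000):
    r = np.linspace(r_lo, r_hi, n)
    F, Fp = f(r, beta), fprime(r, beta)
    D = r * Fp + 2.0 * F
    chi = r**3 * (8.0 * a**2 * Fp - r * (r * Fp - 2.0 * F)**2) / (a**2 * D**2)
    eta = (r**2 + a**2 - 4.0 * (r**2 * F + a**2) / D) / a
    keep = chi >= 0.0                      # physical photon region only
    return -eta[keep], np.sqrt(chi[keep])  # (A, B); mirror B -> -B for full curve

A, B = shadow_contour(a=0.1, beta=0.0)
rho = np.hypot(A, B)
print(rho.min(), rho.max())   # close to 3*sqrt(3) ~ 5.196 for small spin
```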
### 6.3 Observables
For further analysis of the shape of the critical curve, we are going to
define two new observables as prescribed by Hioki and Maeda [98]. To define
these observables, we need to characterize a few points of the critical curve
while fitting it with a circular outline. Consider the circle in Fig. 11 that
passes through three extreme points of the shadow curve. These points are:
* •
the extreme right point of the shadow, i.e., $U(A_{r},0)$, at which the shadow
intersects the $A$-axis;
* •
the top-most point of the shadow, i.e., $V(A_{t},B_{t})$;
* •
the bottom-most point of the shadow, i.e., $W(A_{b},B_{b})$.
Figure 11: Schematic diagram of the black hole shadow with its characteristic
points. The solid blue curve is the outline of the black hole shadow. The
dot-dashed green curve is the fitting circle, which passes through three
points of the shadow outline. The top-most and bottom-most points of the
shadow are $V$ and $W$, respectively. The left-most and right-most points of
the shadow outline are $P$ and $U$. The separation between the points $P$ and
$Q$ measures the distortion parameter $\delta_{s}$.
As the shape of the shadow is not circular, the extreme left point of the
shadow does not coincide with the extreme left point of the associated circle.
This characterizes the distortion of the shape of the shadow from a circular
shape. The extreme left point of the shadow is $P(A_{l},0)$ and the extreme
left point of the associated circle is $Q(A_{L},0)$. We now define two
observables associated with the shadow curve:
1. (i)
The characteristic radius $R_{s}$, which can be defined as
$\displaystyle R_{s}=\dfrac{(A_{t}-A_{r})^{2}+B_{t}^{2}}{2|A_{t}-A_{r}|}$ (71)
2. (ii)
Distortion parameter $\delta_{s}$, which is defined as
$\displaystyle\delta_{s}=\dfrac{D_{s}}{R_{s}}=\dfrac{|A_{L}-A_{l}|}{R_{s}}$
(72)
For a non-rotating scenario, the distortion parameter vanishes, since the
shape of the shadow in that case is always circular. Similarly, the
characteristic radius reduces to the radius of the shadow circle in the
non-rotating scenario. Together, these two observables measure the deviation
of the shadow from circularity. Figure 12 illustrates how the parameter
$\beta$ affects the characteristic radius $R_{s}$ and the distortion parameter
$\delta_{s}$ of the spinning regular MOG black hole. The change in the
observables is shown for two fixed values of the spin parameter $a$.
(a) Variation of $R_{s}$ with $\beta$ (b) Variation of $\delta_{s}$ with
$\beta$
Figure 12: The variation of the characteristic radius $R_{s}$ and the
distortion parameter $\delta_{s}$ of the regular rotating MOG black hole as
functions of the parameter $\beta$, shown for fixed values of the spin
parameter $a$. The plots show how the observables change with the spin
parameter.
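The two observables can be computed directly from a sampled contour. Below is a small illustrative implementation (not the paper's code) of Eqs. (71)-(72): for a perfect circle it returns the circle's radius and zero distortion, and pulling the left edge inward by a distance $d$ yields $\delta_{s}=d/R_{s}$.

```python
# Illustrative sketch: Hioki-Maeda observables R_s and delta_s, Eqs. (71)-(72),
# from upper-half shadow-contour samples (A, B >= 0).
import numpy as np

def observables(A, B):
    A_r, A_l = A.max(), A.min()          # right/left extremes of the shadow
    i = int(np.argmax(B))                # top-most point
    A_t, B_t = A[i], B[i]
    R_s = ((A_t - A_r)**2 + B_t**2) / (2.0 * abs(A_t - A_r))
    A_L = A_r - 2.0 * R_s                # leftmost point of the fitting circle
    delta_s = abs(A_L - A_l) / R_s
    return R_s, delta_s

# Check on a perfect circle (nonrotating limit): R_s = radius, delta_s = 0.
phi = np.linspace(0.0, np.pi, 2001)
A = 0.2 + 5.196 * np.cos(phi)
B = 5.196 * np.sin(phi)
print(observables(A, B))   # ~(5.196, 0)

# A left-dented contour: pulling the left edge in by d gives delta_s = d/R_s.
A_dent = np.maximum(A, 0.2 - 5.196 + 0.5)
print(observables(A_dent, B))   # ~(5.196, 0.5/5.196 ~ 0.096)
```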
## 7 Energy emission rate for the rotating regular MOG black hole
For a distant observer, the shadow of the regular rotating MOG solution
corresponds to its high-energy absorption cross-section. At high energies, the
absorption cross-section of a black hole oscillates slightly around a limiting
constant value. For a nearly spherically symmetric black hole, this limiting
value is a good approximation and is given by
$\displaystyle\sigma_{lim}\approx\pi R_{s}^{2}$ (73)
The energy emission rate of the concerned black hole is given as [99]:
$\displaystyle\dfrac{d^{2}E(\omega)}{d\omega
dt}=\dfrac{2\pi^{3}R_{s}^{2}\omega^{3}}{e^{\omega/T}-1}$ (74)
where $\omega$ is the frequency of the photon and $T$ is the Hawking
temperature, which can be defined as [99]:
$\displaystyle T=\displaystyle\lim_{\theta=0,r\to
r_{+}}\dfrac{\partial_{r}\sqrt{-g_{tt}}}{2\pi\sqrt{g_{rr}}}$ (75)
Here, $r_{+}$ is the outer event horizon of the regular rotating MOG black
hole. In Fig. 13, we have plotted the energy emission rate of the black hole
as a function of the photon frequency $\omega$ for different values of the parameter
$\beta$. One notices from the figure that the peak of the energy emission rate
shifts towards a lower frequency as the parameter $\beta$ increases.
(a) Energy emission rate for spin parameter $a/M_{\beta}=0.20$. (b) Energy
emission rate for spin parameter $a/M_{\beta}=0.80$.
Figure 13: Energy emission rate as a function of frequency has been shown. For
a fixed value of the spin parameter, the increase in the parameter $\beta$
causes a shift of the peak of the spectrum to a lower frequency.
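The direction of this shift follows from Eq. (74) alone: the spectrum $\propto\omega^{3}/(e^{\omega/T}-1)$ peaks at $\omega_{\rm peak}=xT$, with $x$ the root of $x=3(1-e^{-x})\approx 2.8214$ (a Wien-type displacement law), so a lower Hawking temperature pushes the peak to lower frequency, consistent with the trend in Fig. 13. A quick illustrative check:

```python
# Illustrative sketch: location of the emission-rate peak of Eq. (74).
# Maximizing omega^3 / (exp(omega/T) - 1) gives omega_peak = x*T, with x
# solving x = 3(1 - exp(-x)), i.e. x ~ 2.8214 (Wien-type displacement law).
import numpy as np

def emission(omega, T, R_s=1.0):
    return 2.0 * np.pi**3 * R_s**2 * omega**3 / np.expm1(omega / T)

def peak_frequency(T):
    omega = np.linspace(1e-6, 20.0 * T, 400_000)
    return omega[np.argmax(emission(omega, T))]

print(peak_frequency(1.0))   # ~2.8214
print(peak_frequency(0.5))   # ~1.4107: lower T -> peak at lower frequency
```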
## 8 Conclusions
In this paper, we have explored the regular black hole solution in STVG/MOG
theory. In early papers on this theory, the parameter space extended from zero
to infinity; we have compactified it by modifying the form of the parameter
$\beta$. We first focused on the regular solution of STVG/MOG theory in the
static, spherically symmetric scenario and determined analytically the
critical value of the
parameter $\beta$. For the dimensionless parameter $\beta\lesssim 0.4$, we
have a black hole solution with two horizons in the spherically symmetric
case. For $0.4\lesssim\beta\lesssim 0.5$, there is no black hole solution as
there exists no horizon. For the critical value of the parameter $\beta\cong
0.40263$, a single horizon black hole solution can be obtained. We have also
studied the null geodesics in this spacetime as it is a prerequisite to
analyse the shadow of the black hole. For $\beta<\beta_{\rm crit}$, only
unstable circular orbits exist. However, for $\beta_{\rm crit}<\beta\lesssim
0.5$, there exists a stable circular orbit. It is also noticeable that the
radius of the photon sphere, the radii of the shadow and the event horizon
decrease as the parameter $\beta$ increases. Thus, the circular shadow shrinks
as the parameter $\beta$ is increased. Furthermore, as the shadows of M87* and
Sgr A* are more or less circular in shape, we have tried to compare the
theoretical outcomes with the observational data. For this purpose, we have
used independent mass measurements to calculate the theoretical angular
diameter. The regular MOG black holes and the possibility of horizonless
compact objects are compatible with the EHT data and mass measurements.
We have considered the regular rotating MOG black hole by studying the
behaviours of the horizon and static limit surface for a change in the
parameter $\beta$. Just like the spherically symmetric case, here also a
critical value of parameter $\beta$ exists for a fixed value of the spin
parameter $a$. We have determined the parameter space for which a rotating
regular black hole exists. As a chief goal of this paper, special emphasis has
been placed on the black hole shadow, although we have worked within the
equatorial approximation. We have determined how the shape and size of the
associated shadow change: as the spin parameter $a$ increases, the shape gets
more deformed from a circle. An increasing value of
parameter $\beta$ causes the size of the shadow to become smaller. To analyse
the shadow, the required observables have been defined and plotted. One of the
observables has been used to evaluate the energy emission rate for the
rotating regular MOG black hole. From this, we have concluded that the peak of
the energy emission rate shifts to a lower frequency for a relatively large
value of the parameter $\beta$.
It is hard to decouple the differential equations in terms of $r$ and $\theta$
without an equatorial approximation. We have reported the analysis of the
shadows using numerical techniques and the observables introduced by Hioki and
Maeda. However, one could introduce new observables, or use other existing
ones, to study the shadows.
In this work, we have demonstrated that classical regular black holes and
regular horizonless dark compact objects, generally considered to be distinct
families of astrophysical objects, form a single connected family,
continuously deformed into one another depending on the range and value of the
parameter $\beta$. Our work illustrates that different strong
gravity geometries describe alternative states of black holes and compact
astrophysical objects in their lifetime. It is expected that at the small
scale reached at the central value of the compact object when $r\rightarrow
0$, quantum gravity will take over [100]. The regular and horizonless compact
objects derived from the MOG field equations in this work are classical in
nature. The stability of photon orbits around the black hole and dark compact
object shadows will produce viability issues for the existence of these
astrophysical objects. For the MOG-Schwarzschild and MOG-Kerr solutions with
two horizons, the inner Cauchy horizon can lead to instability problems.
In future work, the gravitational collapse of stars will be investigated,
assuming a form of matter and stress-energy, by solving the time dependent MOG
field equations. Moreover, the merging of the regular and horizonless dark
compact objects, producing gravitational waves and the subsequent ringdown
phase, will be investigated. Singularity-resolving physics in photon rings can
further be studied in the context of MOG theory [101].
## Acknowledgements
Research at the Perimeter Institute is supported by the Government of Canada
through the Department of Innovation, Science and Economic Development Canada
and by the Province of Ontario through the Ministry of Research, Innovation
and Science.
## References
* [1] Event Horizon Telescope Collaboration, V. L. Fish, K. Akiyama, K. L. Bouman, A. A. Chael, M. D. Johnson, S. S. Doeleman, L. Blackburn, J. F. C. Wardle, and W. T. Freeman, “Observing—and Imaging—Active Galactic Nuclei with the Event Horizon Telescope,” Galaxies 4 no. 4, (2016) 54, arXiv:1607.03034 [astro-ph.IM].
* [2] Event Horizon Telescope Collaboration, K. Akiyama et al., “First M87 Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole,” Astrophys. J. Lett. 875 (2019) L1, arXiv:1906.11238 [astro-ph.GA].
* [3] Event Horizon Telescope Collaboration, K. Akiyama et al., “First M87 Event Horizon Telescope Results. II. Array and Instrumentation,” Astrophys. J. Lett. 875 no. 1, (2019) L2, arXiv:1906.11239 [astro-ph.IM].
* [4] Event Horizon Telescope Collaboration, K. Akiyama et al., “First M87 Event Horizon Telescope Results. III. Data Processing and Calibration,” Astrophys. J. Lett. 875 no. 1, (2019) L3, arXiv:1906.11240 [astro-ph.GA].
* [5] Event Horizon Telescope Collaboration, K. Akiyama et al., “First M87 Event Horizon Telescope Results. IV. Imaging the Central Supermassive Black Hole,” Astrophys. J. Lett. 875 no. 1, (2019) L4, arXiv:1906.11241 [astro-ph.GA].
* [6] Event Horizon Telescope Collaboration, K. Akiyama et al., “First M87 Event Horizon Telescope Results. V. Physical Origin of the Asymmetric Ring,” Astrophys. J. Lett. 875 no. 1, (2019) L5, arXiv:1906.11242 [astro-ph.GA].
* [7] Event Horizon Telescope Collaboration, K. Akiyama et al., “First M87 Event Horizon Telescope Results. VI. The Shadow and Mass of the Central Black Hole,” Astrophys. J. Lett. 875 no. 1, (2019) L6, arXiv:1906.11243 [astro-ph.GA].
* [8] LIGO Scientific, Virgo Collaboration, B. P. Abbott et al., “Observation of Gravitational Waves from a Binary Black Hole Merger,” Phys. Rev. Lett. 116 no. 6, (2016) 061102, arXiv:1602.03837 [gr-qc].
* [9] LIGO Scientific, Virgo Collaboration, R. Abbott et al., “GW190412: Observation of a Binary-Black-Hole Coalescence with Asymmetric Masses,” Phys. Rev. D 102 no. 4, (2020) 043015, arXiv:2004.08342 [astro-ph.HE].
* [10] LIGO Scientific, Virgo Collaboration, R. Abbott et al., “GW190521: A Binary Black Hole Merger with a Total Mass of $150\,M_{\odot}$,” Phys. Rev. Lett. 125 (2020) 101102. https://link.aps.org/doi/10.1103/PhysRevLett.125.101102.
* [11] R. Geroch, “What is a singularity in general relativity?,” Annals of Physics 48 no. 3, (1968) 526–540. https://www.sciencedirect.com/science/article/pii/0003491668901449.
* [12] E. T. Newman and R. Posadas, “Motion and structure of singularities in general relativity,” Phys. Rev. 187 (1969) 1784–1791. https://link.aps.org/doi/10.1103/PhysRev.187.1784.
* [13] A. J. C. de Souza, “Introductory chapter: The physics of dark sector,” in Essentials on Dark Matter, A. J. C. de Souza, ed., ch. 1. IntechOpen, Rijeka, 2018. https://doi.org/10.5772/intechopen.80234.
* [14] N. C. Martens and D. Lehmkuhl, “Dark matter = modified gravity? scrutinising the spacetime–matter distinction through the modified gravity/ dark matter lens,” Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 72 (2020) 237–250. https://www.sciencedirect.com/science/article/pii/S135521982030109X.
* [15] S. Nojiri and S. D. Odintsov, “Is the future universe singular: Dark matter versus modified gravity?,” Physics Letters B 686 no. 1, (2010) 44–48. https://www.sciencedirect.com/science/article/pii/S0370269310001917.
* [16] X. Calmet and I. Kuntz, “What is modified gravity and how to differentiate it from particle dark matter?,” Eur. Phys. J. C 77 no. 2, (2017) 132, arXiv:1702.03832 [gr-qc].
* [17] R. H. Sanders, “Modified gravity without dark matter,” Lect. Notes Phys. 720 (2007) 375–402, arXiv:astro-ph/0601431.
* [18] S. Nojiri and S. D. Odintsov, “Dark energy, inflation and dark matter from modified F(R) gravity,” TSPU Bulletin N8(110) (2011) 7–19, arXiv:0807.0685 [hep-th].
* [19] S. Nojiri and S. D. Odintsov, “Modified gravity as realistic candidate for dark energy, inflation and dark matter,” AIP Conf. Proc. 1115 no. 1, (2009) 212–217, arXiv:0810.1557 [hep-th].
* [20] L. Baudis, “The search for dark matter,” European Review 26 no. 1, (2018) 70–81.
* [21] J. Liu, X. Chen, and X. Ji, “Current status of direct dark matter detection experiments,” Nature Phys. 13 no. 3, (2017) 212–216, arXiv:1709.00688 [astro-ph.CO].
* [22] J. W. Moffat, “Scalar-tensor-vector gravity theory,” JCAP 03 (2006) 004, arXiv:gr-qc/0506021.
* [23] J. W. Moffat, “Scalar and Vector Field Constraints, Deflection of Light and Lensing in Modified Gravity (MOG),” arXiv:1410.2464 [gr-qc].
* [24] Z. Davari and S. Rahvar, “MOG cosmology without dark matter and the cosmological constant,” Mon. Not. Roy. Astron. Soc. 507 no. 3, (2021) 3387–3399, arXiv:2108.00266 [astro-ph.CO].
* [25] J. W. Moffat and S. Rahvar, “The MOG weak field approximation and observational test of galaxy rotation curves,” Monthly Notices of the Royal Astronomical Society 436 no. 2, (09, 2013) 1439–1451, https://academic.oup.com/mnras/article-pdf/436/2/1439/3933595/stt1670.pdf. https://doi.org/10.1093/mnras/stt1670.
* [26] J. W. Moffat and V. T. Toth, “Rotational velocity curves in the Milky Way as a test of modified gravity,” Phys. Rev. D 91 no. 4, (2015) 043004, arXiv:1411.6701 [astro-ph.GA].
* [27] J. W. Moffat and S. Rahvar, “The MOG weak field approximation and observational test of galaxy rotation curves,” Mon. Not. Roy. Astron. Soc. 436 (2013) 1439–1451, arXiv:1306.6383 [astro-ph.GA].
* [28] J. R. Brownstein and J. W. Moffat, “The Bullet Cluster 1E0657-558 evidence shows Modified Gravity in the absence of Dark Matter,” Mon. Not. Roy. Astron. Soc. 382 (2007) 29–47, arXiv:astro-ph/0702146.
* [29] J. W. Moffat and S. Rahvar, “The MOG weak field approximation – II. Observational test of Chandra X-ray clusters,” Monthly Notices of the Royal Astronomical Society 441 no. 4, (06, 2014) 3724–3732, https://academic.oup.com/mnras/article-pdf/441/4/3724/4059513/stu855.pdf. https://doi.org/10.1093/mnras/stu855.
* [30] J. W. Moffat, “Structure Growth and the CMB in Modified Gravity (MOG),” arXiv:1409.0853 [astro-ph.CO].
* [31] J. W. Moffat and V. T. Toth, “Modified Gravity: Cosmology without dark matter or Einstein’s cosmological constant,” arXiv:0710.0364 [astro-ph].
* [32] J. W. Moffat and V. T. Toth, “Cosmological observations in a modified theory of gravity (MOG),” Galaxies 1 (2013) 65–82, arXiv:1104.2957 [astro-ph.CO].
* [33] J. W. Moffat and V. Toth, “Scalar–Tensor–Vector Modified Gravity in Light of the Planck 2018 Data,” Universe 7 no. 10, (2021) 358, arXiv:2104.12806 [gr-qc].
* [34] S. Hu, C. Deng, D. Li, X. Wu, and E. Liang, “Observational signatures of Schwarzschild-MOG black holes in scalar-tensor-vector gravity: shadows and rings with different accretions,” Eur. Phys. J. C 82 no. 10, (2022) 885.
* [35] X. Qin, S. Chen, Z. Zhang, and J. Jing, “Polarized Image of a Rotating Black Hole in Scalar–Tensor–Vector–Gravity Theory,” Astrophys. J. 938 no. 1, (2022) 2, arXiv:2207.12034 [gr-qc].
* [36] R. Della Monica, I. de Martino, and M. de Laurentis, “Orbital precession of the S2 star in Scalar–Tensor–Vector Gravity,” Mon. Not. Roy. Astron. Soc. 510 no. 4, (2022) 4757–4766, arXiv:2105.12687 [gr-qc].
* [37] J. W. Moffat and V. T. Toth, “Masses and shadows of the black holes Sagittarius A* and M87* in modified gravity,” Phys. Rev. D 101 no. 2, (2020) 024014, arXiv:1904.04142 [gr-qc].
* [38] A. Einstein, “Lens-like action of a star by the deviation of light in the gravitational field,” Science 84 no. 2188, (1936) 506–507, https://www.science.org/doi/pdf/10.1126/science.84.2188.506. https://www.science.org/doi/abs/10.1126/science.84.2188.506.
* [39] K. S. Virbhadra and G. F. R. Ellis, “Schwarzschild black hole lensing,” Phys. Rev. D 62 (Sep, 2000) 084003. https://link.aps.org/doi/10.1103/PhysRevD.62.084003.
* [40] V. Perlick, “Gravitational Lensing from a Spacetime Perspective,” arXiv:1010.3416 [gr-qc].
* [41] P. V. P. Cunha and C. A. R. Herdeiro, “Shadows and strong gravitational lensing: a brief review,” Gen. Rel. Grav. 50 no. 4, (2018) 42, arXiv:1801.00860 [gr-qc].
* [42] T. Bronzwaer and H. Falcke, “The Nature of Black Hole Shadows,” Astrophys. J. 920 no. 2, (2021) 155, arXiv:2108.03966 [astro-ph.HE].
* [43] V. Perlick and O. Y. Tsupko, “Calculating black hole shadows: Review of analytical studies,” Phys. Rept. 947 (2022) 1–39, arXiv:2105.07101 [gr-qc].
* [44] R. Kumar and S. G. Ghosh, “Black Hole Parameter Estimation from Its Shadow,” Astrophys. J. 892 (2020) 78, arXiv:1811.01260 [gr-qc].
* [45] S. G. Ghosh, R. Kumar, and S. U. Islam, “Parameters estimation and strong gravitational lensing of nonsingular Kerr-Sen black holes,” JCAP 03 (2021) 056, arXiv:2011.08023 [gr-qc].
* [46] M. Afrin, R. Kumar, and S. G. Ghosh, “Parameter estimation of hairy Kerr black holes from its shadow and constraints from M87*,” Mon. Not. Roy. Astron. Soc. 504 (2021) 5927–5940, arXiv:2103.11417 [gr-qc].
* [47] S. G. Ghosh and M. Afrin, “Constraining Kerr-like black holes with Event Horizon Telescope results of Sgr A*,” arXiv:2206.02488 [gr-qc].
* [48] Z. Younsi, D. Psaltis, and F. Özel, “Black Hole Images as Tests of General Relativity: Effects of Spacetime Geometry,” arXiv:2111.01752 [astro-ph.HE].
* [49] D. Psaltis, “Testing General Relativity with the Event Horizon Telescope,” Gen. Rel. Grav. 51 no. 10, (2019) 137, arXiv:1806.09740 [astro-ph.HE].
* [50] Y. Mizuno, Z. Younsi, C. M. Fromm, O. Porth, M. De Laurentis, H. Olivares, H. Falcke, M. Kramer, and L. Rezzolla, “The Current Ability to Test Theories of Gravity with Black Hole Shadows,” Nature Astron. 2 no. 7, (2018) 585–590, arXiv:1804.05812 [astro-ph.GA].
* [51] A. Stepanian, S. Khlghatyan, and V. G. Gurzadyan, “Black hole shadow to probe modified gravity,” Eur. Phys. J. Plus 136 no. 1, (2021) 127, arXiv:2101.08261 [gr-qc].
* [52] R. Kumar Walia, S. G. Ghosh, and S. D. Maharaj, “Testing Rotating Regular Metrics with EHT Results of Sgr A*,” Astrophys. J. 939 (2022) 77, arXiv:2207.00078 [gr-qc].
* [53] S. Vagnozzi et al., “Horizon-scale tests of gravity theories and fundamental physics from the Event Horizon Telescope image of Sagittarius A∗,” arXiv:2205.07787 [gr-qc].
* [54] I. Banerjee, S. Sau, and S. SenGupta, “Signatures of regular black holes from the shadow of Sgr A* and M87*,” JCAP 09 (2022) 066, arXiv:2206.12125 [gr-qc].
* [55] I. Banerjee, S. Chakraborty, and S. SenGupta, “Hunting extra dimensions in the shadow of Sgr A*,” Phys. Rev. D 106 no. 8, (2022) 084051, arXiv:2207.09003 [gr-qc].
* [56] R. Shaikh, “Black hole shadow in a general rotating spacetime obtained through newman-janis algorithm,” Phys. Rev. D 100 (Jul, 2019) 024028. https://link.aps.org/doi/10.1103/PhysRevD.100.024028.
* [57] C. Bambi, K. Freese, S. Vagnozzi, and L. Visinelli, “Testing the rotational nature of the supermassive object M87* from the circularity and size of its first image,” Phys. Rev. D 100 no. 4, (2019) 044057, arXiv:1904.12983 [gr-qc].
* [58] S. Vagnozzi and L. Visinelli, “Hunting for extra dimensions in the shadow of M87*,” Phys. Rev. D 100 no. 2, (2019) 024020, arXiv:1905.12421 [gr-qc].
* [59] A. Allahyari, M. Khodadi, S. Vagnozzi, and D. F. Mota, “Magnetically charged black holes from non-linear electrodynamics and the Event Horizon Telescope,” JCAP 02 (2020) 003, arXiv:1912.08231 [gr-qc].
* [60] M. Khodadi, A. Allahyari, S. Vagnozzi, and D. F. Mota, “Black holes with scalar hair in light of the Event Horizon Telescope,” JCAP 09 (2020) 026, arXiv:2005.05992 [gr-qc].
* [61] R. Roy, S. Vagnozzi, and L. Visinelli, “Superradiance evolution of black hole shadows revisited,” Phys. Rev. D 105 no. 8, (2022) 083002, arXiv:2112.06932 [astro-ph.HE].
* [62] R. Shaikh, K. Pal, K. Pal, and T. Sarkar, “Constraining alternatives to the Kerr black hole,” Mon. Not. Roy. Astron. Soc. 506 no. 1, (2021) 1229–1236, arXiv:2102.04299 [gr-qc].
* [63] R. Ghosh, M. Rahman, and A. K. Mishra, “Regularized Stable Kerr Black Hole: Cosmic Censorships, Shadow and Quasi-Normal Modes,” arXiv:2209.12291 [gr-qc].
* [64] R. C. Bernardo and C.-Y. Chen, “Dressed black holes in the new tensor-vector-scalar theory,” arXiv:2202.08460 [gr-qc].
* [65] Event Horizon Telescope Collaboration, D. Psaltis et al., “Gravitational Test Beyond the First Post-Newtonian Order with the Shadow of the M87 Black Hole,” Phys. Rev. Lett. 125 no. 14, (2020) 141104, arXiv:2010.01055 [gr-qc].
* [66] F. Atamurotov, A. Abdujabbarov, and B. Ahmedov, “Shadow of rotating non-kerr black hole,” Phys. Rev. D 88 (Sep, 2013) 064004. https://link.aps.org/doi/10.1103/PhysRevD.88.064004.
* [67] F. Atamurotov, A. Abdujabbarov, and B. Ahmedov, “Shadow of rotating Hořava-Lifshitz black hole,” Astrophys. Space Sci. 348 (2013) 179–188.
* [68] A. Abdujabbarov, F. Atamurotov, Y. Kucukakca, B. Ahmedov, and U. Camci, “Shadow of Kerr-Taub-NUT black hole,” Astrophys. Space Sci. 344 (2013) 429–435, arXiv:1212.4949 [physics.gen-ph].
* [69] F. Atamurotov, K. Jusufi, M. Jamil, A. Abdujabbarov, and M. Azreg-Aïnou, “Axion-plasmon or magnetized plasma effect on an observable shadow and gravitational lensing of a Schwarzschild black hole,” Phys. Rev. D 104 no. 6, (2021) 064053, arXiv:2109.08150 [gr-qc].
* [70] U. Papnoi and F. Atamurotov, “Rotating charged black hole in 4D Einstein–Gauss–Bonnet gravity: Photon motion and its shadow,” Phys. Dark Univ. 35 (2022) 100916, arXiv:2111.15523 [gr-qc].
* [71] B.-H. Lee, W. Lee, and Y. S. Myung, “Shadow cast by a rotating black hole with anisotropic matter,” Phys. Rev. D 103 no. 6, (2021) 064026, arXiv:2101.04862 [gr-qc].
* [72] I. Dymnikova and K. Kraav, “Identification of a regular black hole by its shadow,” Universe 5 no. 7, (2019) 163. https://www.mdpi.com/2218-1997/5/7/163.
* [73] S. G. Ghosh, M. Amir, and S. D. Maharaj, “Ergosphere and shadow of a rotating regular black hole,” Nuclear Physics B 957 (2020) 115088. https://www.sciencedirect.com/science/article/pii/S0550321320301747.
* [74] R. Kumar, S. G. Ghosh, and A. Wang, “Shadow cast and deflection of light by charged rotating regular black holes,” Phys. Rev. D 100 (Dec, 2019) 124024. https://link.aps.org/doi/10.1103/PhysRevD.100.124024.
* [75] A. Abdujabbarov, M. Amir, B. Ahmedov, and S. G. Ghosh, “Shadow of rotating regular black holes,” Phys. Rev. D 93 (May, 2016) 104004. https://link.aps.org/doi/10.1103/PhysRevD.93.104004.
* [76] N. Tsukamoto, “Black hole shadow in an asymptotically flat, stationary, and axisymmetric spacetime: The kerr-newman and rotating regular black holes,” Phys. Rev. D 97 (Mar, 2018) 064021. https://link.aps.org/doi/10.1103/PhysRevD.97.064021.
* [77] Z. Li and C. Bambi, “Measuring the kerr spin parameter of regular black holes from their shadow,” Journal of Cosmology and Astroparticle Physics 2014 no. 01, (Jan, 2014) 041. https://dx.doi.org/10.1088/1475-7516/2014/01/041.
* [78] I. Banerjee, S. Sau, and S. SenGupta, “Do shadows of Sgr A* and M87* indicate black holes with a magnetic monopole charge?,” arXiv:2207.06034 [gr-qc].
* [79] Y. Gong, Z. Cao, H. Gao, and B. Zhang, “On neutralization of charged black holes,” Monthly Notices of the Royal Astronomical Society 488 no. 2, (07, 2019) 2722–2731, https://academic.oup.com/mnras/article-pdf/488/2/2722/29002740/stz1904.pdf. https://doi.org/10.1093/mnras/stz1904.
* [80] J. W. Moffat, “Black Holes in Modified Gravity (MOG),” Eur. Phys. J. C 75 no. 4, (2015) 175, arXiv:1412.5424 [gr-qc].
* [81] J. W. Moffat, “Regular Rotating MOG Dark Compact Object,” Eur. Phys. J. C 81 no. 2, (2021) 119, arXiv:1806.01903 [gr-qc].
* [82] E. Ayon-Beato and A. Garcia, “Regular black hole in general relativity coupled to nonlinear electrodynamics,” Phys. Rev. Lett. 80 (1998) 5056–5059, arXiv:gr-qc/9911046.
* [83] J. L. Walsh, A. J. Barth, L. C. Ho, and M. Sarzi, “The M87 Black Hole Mass from Gas-dynamical Models of Space Telescope Imaging Spectrograph Observations,” 770 no. 2, (June, 2013) 86, arXiv:1304.7273 [astro-ph.CO].
* [84] K. Gebhardt and J. Thomas, “The black hole mass, stellar mass-to-light ratio, and dark halo in m87,” The Astrophysical Journal 700 no. 2, (Jul, 2009) 1690. https://dx.doi.org/10.1088/0004-637X/700/2/1690.
* [85] N. J. McConnell, C.-P. Ma, K. Gebhardt, S. A. Wright, J. D. Murphy, T. R. Lauer, J. R. Graham, and D. O. Richstone, “Two ten-billion-solar-mass black holes at the centres of giant elliptical galaxies,” Nature 480 no. 7376, (Dec, 2011) 215–218. https://doi.org/10.1038%2Fnature10636.
* [86] Event Horizon Telescope Collaboration, P. Kocherlakota et al., “Constraints on black-hole charges with the 2017 EHT observations of M87*,” Phys. Rev. D 103 no. 10, (2021) 104047, arXiv:2105.09343 [gr-qc].
* [87] EHT Collaboration Collaboration, P. e. a. Kocherlakota, “Constraints on black-hole charges with the 2017 eht observations of m87*,” Phys. Rev. D 103 (May, 2021) 104047. https://link.aps.org/doi/10.1103/PhysRevD.103.104047.
* [88] Event Horizon Telescope Collaboration, K. Akiyama et al., “First Sagittarius A* Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole in the Center of the Milky Way,” Astrophys. J. Lett. 930 no. 2, (2022) L12.
* [89] Event Horizon Telescope Collaboration, K. Akiyama et al., “First Sagittarius A* Event Horizon Telescope Results. II. EHT and Multiwavelength Observations, Data Processing, and Calibration,” Astrophys. J. Lett. 930 no. 2, (2022) L13.
* [90] Event Horizon Telescope Collaboration, K. Akiyama et al., “First Sagittarius A* Event Horizon Telescope Results. III. Imaging of the Galactic Center Supermassive Black Hole,” Astrophys. J. Lett. 930 no. 2, (2022) L14.
* [91] Event Horizon Telescope Collaboration, K. Akiyama et al., “First Sagittarius A* Event Horizon Telescope Results. IV. Variability, Morphology, and Black Hole Mass,” Astrophys. J. Lett. 930 no. 2, (2022) L15.
* [92] Event Horizon Telescope Collaboration, K. Akiyama et al., “First Sagittarius A* Event Horizon Telescope Results. V. Testing Astrophysical Models of the Galactic Center Black Hole,” Astrophys. J. Lett. 930 no. 2, (2022) L16.
* [93] Event Horizon Telescope Collaboration, K. Akiyama et al., “First Sagittarius A* Event Horizon Telescope Results. VI. Testing the Black Hole Metric,” Astrophys. J. Lett. 930 no. 2, (2022) L17.
* [94] T. Do et al., “Relativistic redshift of the star S0-2 orbiting the Galactic center supermassive black hole,” Science 365 no. 6454, (2019) 664–668, arXiv:1907.10731 [astro-ph.GA].
* [95] GRAVITY Collaboration, R. Abuter et al., “Mass distribution in the Galactic Center based on interferometric astrometry of multiple stellar orbits,” Astron. Astrophys. 657 (2022) L12, arXiv:2112.07478 [astro-ph.GA].
* [96] GRAVITY Collaboration, R. Abuter et al., “Detection of the Schwarzschild precession in the orbit of the star S2 near the Galactic centre massive black hole,” Astron. Astrophys. 636 (2020) L5, arXiv:2004.07187 [astro-ph.GA].
* [97] A. Abdujabbarov, M. Amir, B. Ahmedov, and S. G. Ghosh, “Shadow of rotating regular black holes,” Phys. Rev. D 93 no. 10, (2016) 104004, arXiv:1604.03809 [gr-qc].
* [98] K. Hioki and K.-i. Maeda, “Measurement of the Kerr Spin Parameter by Observation of a Compact Object’s Shadow,” Phys. Rev. D 80 (2009) 024042, arXiv:0904.3575 [astro-ph.HE].
* [99] T.-C. Ma, H.-X. Zhang, P.-Z. He, H.-R. Zhang, Y. Chen, and J.-B. Deng, “Shadow cast by a rotating and nonlinear magnetic-charged black hole in perfect fluid dark matter,” Mod. Phys. Lett. A 36 no. 17, (2021) 2150112, arXiv:2010.00151 [gr-qc].
* [100] L. Modesto, J. W. Moffat, and P. Nicolini, “Black holes in an ultraviolet complete quantum gravity,” Phys. Lett. B 695 (2011) 397–400, arXiv:1010.0680 [gr-qc].
* [101] A. Eichhorn, A. Held, and P.-V. Johannsen, “Universal signatures of singularity-resolving physics in photon rings of black holes and horizonless objects,” Journal of Cosmology and Astroparticle Physics 2023 no. 01, (Jan, 2023) 043. https://dx.doi.org/10.1088/1475-7516/2023/01/043.
|
# Differentiable Particle Filters:
End-to-End Learning with Algorithmic Priors
Rico Jonschkowski, Divyam Rastogi, and Oliver Brock Robotics and Biology
Laboratory, Technische Universität Berlin, Germany
###### Abstract
We present differentiable particle filters (DPFs): a differentiable
implementation of the particle filter algorithm with learnable motion and
measurement models. Since DPFs are end-to-end differentiable, we can
efficiently train their models by optimizing end-to-end state estimation
performance, rather than proxy objectives such as model accuracy. DPFs encode
the structure of recursive state estimation with prediction and measurement
update that operate on a probability distribution over states. This structure
represents an algorithmic prior that improves learning performance in state
estimation problems while enabling explainability of the learned model. Our
experiments on simulated and real data show substantial benefits from end-to-
end learning with algorithmic priors, e.g. reducing error rates by $\sim$80%.
Our experiments also show that, unlike long short-term memory networks, DPFs
learn localization in a policy-agnostic way and thus greatly improve
generalization. Source code is available at https://github.com/tu-rbo/differentiable-particle-filters.
## I Introduction
End-to-end learning tunes all parts of a learnable system for end-to-end
performance—which is what we ultimately care about—instead of optimizing each
part individually. End-to-end learning excels when the right objectives for
individual parts are not known; it therefore has significant potential in the
context of complex robotic systems.
Compared to learning each part of a system individually, end-to-end learning
puts fewer constraints on the individual parts, which can improve performance
but can also lead to overfitting. We must therefore balance end-to-end
learning with regularization by incorporating appropriate priors. Priors can
be encoded in the form of differentiable network architectures. By defining
the network architecture and its learnable parameters, we restrict the
hypothesis space and thus regularize learning. At the same time, the
differentiability of the network allows all of its parts to adapt to each
other and to optimize their parameters for end-to-end performance.
This approach has been very successful in computer vision. Highly engineered
vision pipelines are outperformed by convolutional networks trained end-to-end
[8]. But it only works because convolutional networks [15] encode priors in
the network architecture that are suitable for computer vision—a hierarchy of
local filters shared across the image. Problems in robotics possess additional
structure, for example in physical interactions with the environment. Only by
exploiting all available structure will we be able to realize the full
potential of end-to-end learning in robotics.
_But how can we find more architectures like the convolutional network for
robotics?_ Roboticists have captured problem structure in the form of
algorithms, often combined with models of the specific task. By making these
algorithms differentiable and their models learnable, we can turn robotic
algorithms into network architectures. This approach enables end-to-end
learning while also encoding prior knowledge from algorithms, which we call
_algorithmic priors_.
Here, we apply _end-to-end learning with algorithmic priors_ to state
estimation in robotics. In this problem, a robot needs to infer the latent
state from its observations and actions. Since a single observation can be
insufficient to estimate the state, the robot needs to integrate uncertain
information over time.
Given the standard assumptions for this problem, _Bayes filters_ provide the
provably optimal algorithmic structure for solving it [21], recursively
updating a probability distribution over states with prediction and
measurement update using task-specific motion and measurement models. The
_differentiable particle filter_ (DPF) is an end-to-end differentiable
implementation of the particle filter—a Bayes filter that represents
probability distributions with samples—with learnable motion and measurement
models (see Fig. 1).
Figure 1: Differentiable particle filters. Models can be learned end-to-end by
backpropagation through the algorithm.
Since DPFs are differentiable, we can learn their models end-to-end to
optimize state estimation performance. Our experiments show that end-to-end
learning improves performance compared to using models optimized for accuracy.
Interestingly, end-to-end learning in DPFs re-discovers what roboticists found
out via trial and error: that overestimating uncertainty is beneficial for
filtering performance [21, p. 118].
Since DPFs use the Bayes filter algorithm as a prior, they have a number of
advantages. First, even with end-to-end learning, DPFs remain explainable—we
can examine the learned models and their interaction. Second, the algorithmic
prior regularizes learning, which greatly improves performance in state
estimation. Compared to generic long short-term memory networks (LSTMs) [9],
DPFs reduce the error rate by $\sim$80% or require 87% less training data for
the same error rate. And finally, the algorithmic prior improves
generalization: while LSTMs fail when tested with a different policy than used
for training, DPFs are robust to changing the policy.
## II Related Work
There is a surge of recent work that combines algorithmic priors and end-to-
end learning for planning and state estimation with histogram-based and
Gaussian belief representations.
##### Planning with known state
Tamar et al. [20] introduced value iteration networks, a differentiable
planning algorithm with models that can be optimized for value iteration.
Their key insight is that value iteration in a grid-based state space can be
represented by convolutional neural networks. Silver et al. [18] proposed the
predictron, a differentiable embedding of the TD($\lambda$) algorithm in a
learned state space. Okada and Aoshima [16] proposed path integral networks,
which encode an optimal control algorithm to learn continuous tasks.
##### State estimation (and planning) with histograms
Jonschkowski and Brock [10] introduced the end-to-end learnable histogram
filter, a differentiable Bayes filter that represents the belief with a
histogram. Shankar et al. [17] and Karkus et al. [11] combined histogram
filters and QMDP planners in a differentiable network for planning in
partially observable environments. Gupta et al. [6] combined differentiable
mapping and planning in a network architecture for navigation in novel
environments. All of these approaches use convolution to operate on a grid-based state space.
##### State estimation with Gaussians
Haarnoja et al. [7] presented a differentiable Kalman filter with a Gaussian
belief and an end-to-end learnable measurement model from visual input. Watter
et al. [22] and Karl et al. [12] learn a latent state space that facilitates
prediction. These approaches use (locally) linear dynamics models and Gaussian
beliefs.
Related work has established how to operate on _histogram-based_ belief
representations using convolution and how to work with _Gaussian_ beliefs
using linear operations. We build on this work and extend its scope to include
_sample-based_ algorithms, such as particle filters. Sample-based
representations can be advantageous because they can represent multi-modal
distributions (unlike Gaussians) while focusing the computational effort on
states of high probability (unlike histograms). But sample-based
representations introduce new challenges for differentiable implementations,
e.g. generating samples from networks, performing density estimation to
compute gradients, and handling non-differentiable resampling. These are the
challenges that we tackle in this paper.
## III Background: Bayes Filters and Their Particle-Based Approximation
We consider the problem of estimating a latent _state_ $\boldsymbol{s}$ from a
history of _observations_ $\boldsymbol{o}$ and _actions_ $\boldsymbol{a}$,
e.g. a robot’s pose from camera images and odometry. To handle uncertainty, we
estimate a probability distribution over the current state
$\boldsymbol{s}_{t}$ conditioned on the history of observations
$\boldsymbol{o}_{1:t}$ and actions $\boldsymbol{a}_{1:t}$, which is called
_belief_ ,
$\text{bel}(\boldsymbol{s}_{t})=p(\boldsymbol{s}_{t}|\boldsymbol{a}_{1:t},\boldsymbol{o}_{1:t}).$
### III-A Bayes Filters
Figure 2: Graphical model for state estimation
If we assume that our problem factorizes as shown in Fig. 2, the _Bayes
filter_ algorithm solves it optimally [21] by making use of the Markov
property of the state and the conditional independence of observations and
actions. From the Markov property follows that the last belief
$\text{bel}(\boldsymbol{s}_{t-1})$ summarizes all information contained in the
history of observations $\boldsymbol{o}_{1:t-1}$ and actions
$\boldsymbol{a}_{1:t-1}$ that is relevant for predicting the future.
Accordingly, the Bayes filter computes $\text{bel}(\boldsymbol{s}_{t})$
recursively from $\text{bel}(\boldsymbol{s}_{t-1})$ by incorporating the new
information contained in $\boldsymbol{a}_{t}$ and $\boldsymbol{o}_{t}$. From
assuming conditional independence between actions and observations given the state, it follows that Bayes filters update the belief in two steps: 1)
_prediction_ using action $\boldsymbol{a}_{t}$ and 2) _measurement update_
using observation $\boldsymbol{o}_{t}$.
1) The _prediction_ step is based on the _motion model_
$p(\boldsymbol{s}_{t}\mid\boldsymbol{s}_{t-1},\boldsymbol{a}_{t})$, which
defines how likely the robot is to enter state $\boldsymbol{s}_{t}$ if it performs
action $\boldsymbol{a}_{t}$ in $\boldsymbol{s}_{t-1}$. Using the motion model,
this step computes the _predicted belief_
$\overline{\text{bel}}(\boldsymbol{s}_{t})$ by summing over all
$\boldsymbol{s}_{t-1}$ from which $\boldsymbol{a}_{t}$ could have led to
$\boldsymbol{s}_{t}$.
$\displaystyle\overline{\text{bel}}(\boldsymbol{s}_{t})$ $\displaystyle=\int
p(\boldsymbol{s}_{t}\mid\boldsymbol{s}_{t-1},\boldsymbol{a}_{t})\,\text{bel}(\boldsymbol{s}_{t-1})\,d\boldsymbol{s}_{t-1}.$
(1)
2) The _measurement update_ uses the _measurement model_
$p(\boldsymbol{o}_{t}\mid\boldsymbol{s}_{t})$, which defines the likelihood of
an observation $\boldsymbol{o}_{t}$ given a state $\boldsymbol{s}_{t}$. Using
this model and observation $\boldsymbol{o}_{t}$, this step updates the belief using Bayes’
rule (with normalization $\eta$),
$\displaystyle\text{bel}(\boldsymbol{s}_{t})=\eta\,p(\boldsymbol{o}_{t}\mid\boldsymbol{s}_{t})\,\overline{\text{bel}}(\boldsymbol{s}_{t}).$
(2)
Any implementation of the Bayes filter algorithm for a continuous state space
must represent a continuous belief, and thereby approximate it. Different
approximations correspond to different Bayes filter implementations, for
example histogram filters, which represent the belief by a histogram, Kalman
filters, which represent it by a Gaussian, or particle filters, which
represent the belief by a set of particles [21].
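For a finite state space, the two steps reduce to a matrix-vector product and an elementwise product. The following NumPy sketch (an illustration, not part of the paper's implementation) performs one full filter step, with the transition matrix already conditioned on the action $\boldsymbol{a}_{t}$:

```python
import numpy as np

def bayes_filter_step(belief, transition, likelihood):
    """One Bayes filter step on a discrete state space.

    belief:     (n,) vector, bel(s_{t-1})
    transition: (n, n) matrix, entry [s', s] = p(s_t = s' | s_{t-1} = s, a_t)
    likelihood: (n,) vector, entry [s] = p(o_t | s_t = s)
    """
    predicted = transition @ belief      # Eq. (1): sum over s_{t-1}
    posterior = likelihood * predicted   # Eq. (2): Bayes' rule
    return posterior / posterior.sum()   # normalization eta
```

This is exactly a histogram filter; the particle filter below replaces the dense belief vector with samples.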
Figure 3: DPF overview. (a) Prediction and measurement update; boxes represent models, colored boxes are learned. (b) Computing the gradient for end-to-end learning requires density estimation from the predicted particles (gray circles; darkness corresponds to particle weight). After converting the particles into a mixture of Gaussians (blue), we can compute the belief at the true state (orange bar at red x) and maximize it. Models in (a) can be learned end-to-end by maximizing the belief of the true state (b).
### III-B Particle Filters
Particle filters approximate the belief with particles (or samples)
$\mathcal{S}_{t}=\{\boldsymbol{s}^{[1]}_{t},\boldsymbol{s}^{[2]}_{t},\dots,\boldsymbol{s}^{[n]}_{t}\}$
with weights $w^{[1]}_{t},w^{[2]}_{t},\dots,w^{[n]}_{t}$. The particle filter
updates this distribution by moving particles, changing their weights, and
resampling them, which duplicates or removes particles proportionally to their
weight. Resampling makes this Bayes filter implementation efficient by
focusing the belief approximation on probable states.
The particle filter implements the prediction step (Eq. 1) by moving each
particle stochastically, which is achieved by sampling from a generative
motion model,
$\displaystyle\forall_{i}:\;\;\boldsymbol{s}^{[i]}_{t}\sim
p(\boldsymbol{s}_{t}\mid\boldsymbol{a}_{t},\boldsymbol{s}^{[i]}_{t-1}).$ (3)
The particle filter implements the measurement update (Eq. 2) by setting the
weight of each particle to the observation likelihood—the probability of the
current observation conditioned on the state represented by the particle,
$\displaystyle\forall_{i}:\;\;w^{[i]}_{t}=p(\boldsymbol{o}_{t}\mid\boldsymbol{s}^{[i]}_{t}).$
(4)
The particle set is then resampled by randomly drawing particles
$\boldsymbol{s}^{[i]}_{t}$ proportionally to their weight $w^{[i]}_{t}$ before
the filter performs the next iteration of prediction and update.
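Stripped of all models, one iteration of the algorithm fits in a few lines of NumPy (illustrative only; `sample_motion` and `likelihood` are placeholder callables, not the learned DPF models):

```python
import numpy as np

def particle_filter_step(particles, action, observation,
                         sample_motion, likelihood, rng):
    # 1) Prediction (Eq. 3): move each particle by sampling from the motion model.
    particles = sample_motion(particles, action, rng)
    # 2) Measurement update (Eq. 4): weight particles by the observation likelihood.
    weights = likelihood(observation, particles)
    weights = weights / weights.sum()
    # 3) Resampling: duplicate or remove particles proportionally to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```

Run for a few steps on a toy 1-D problem, the particle cloud contracts around the state supported by the observations, which is the behavior the resampling step is designed to produce.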
## IV Differentiable Particle Filters
Differentiable particle filters (DPFs) are a differentiable implementation of
the particle filter algorithm with end-to-end learnable models. We can also
view DPFs as a new recurrent network architecture that encodes the algorithmic
prior from particle filters in the network structure (see Fig. 3a).
By end-to-end learning, we do not mean that every part of a system is
learned but that the objective for the learnable parts is end-to-end
performance. For efficient end-to-end learning in particle filters, we need
learnable models and the ability to backpropagate the gradient through the
particle filter algorithm—not to change the algorithm but to compute how to
change the models to improve the algorithm’s output.
This section describes our DPF implementation. Our source code based on
TensorFlow [1] and Sonnet [4] is available at https://github.com/tu-rbo/differentiable-particle-filters.
### IV-A Belief
DPFs represent the belief at time $t$ by a set of weighted particles,
$\text{bel}(\boldsymbol{s}_{t})=(S_{t},\boldsymbol{w}_{t})$, where
$S\in\mathbb{R}^{n\times d}$ describes $n$ particles in $d$-dimensional state
space with weights $\boldsymbol{w}\in\mathbb{R}^{n}$. At every time step, DPFs
update the previous belief $\text{bel}(\boldsymbol{s}_{t-1})$ with action
$\boldsymbol{a}_{t}$ and observation $\boldsymbol{o}_{t}$ to get
$\text{bel}(\boldsymbol{s}_{t})$ (see Fig. 3a).
### IV-B Prediction
The prediction step moves each particle by sampling from a probabilistic
motion model (Eq. 3). Motion models often assume deterministic environments;
they account for uncertainty by generating noisy versions of the commanded or
measured action such that a different version of the action is applied to each
particle [21, chap. 5]. We follow the same approach by splitting the motion
model into an _action sampler_ $f$, which creates a noisy action
$\hat{\boldsymbol{a}}^{[i]}$ per particle, and a _dynamics model_ $g$, which
moves each particle $i$ according to $\hat{\boldsymbol{a}}^{[i]}$.
$\displaystyle\hat{\boldsymbol{a}}^{[i]}_{t}$
$\displaystyle=\boldsymbol{a}_{t}+f_{\boldsymbol{\theta}}(\boldsymbol{a}_{t},\boldsymbol{\epsilon}^{[i]}\sim\mathcal{N}),$
(5) $\displaystyle\boldsymbol{s}^{[i]}_{t}$
$\displaystyle=\boldsymbol{s}^{[i]}_{t-1}+g(\boldsymbol{s}^{[i]}_{t-1},\hat{\boldsymbol{a}}^{[i]}_{t}),$
(6)
where $f_{\boldsymbol{\theta}}$ is a feedforward network (see Table I),
$\boldsymbol{\theta}$ are all parameters of the DPF, and
$\boldsymbol{\epsilon}^{[i]}\in\mathbb{R}^{d}$ is a noise vector drawn from a
standard normal distribution. Using the noise vector as input for a learnable
generative model is known as the reparameterization trick [14]. Here, this
trick enables $f_{\boldsymbol{\theta}}$ to learn to sample from action-
dependent motion noise. The resulting noisy actions are fed into $g$, which
simulates how these actions change the state. Since we often know the
underlying dynamics model, we can implement its equations in $g$.
Alternatively, we can replace $g$ by a feedforward network
$g_{\boldsymbol{\theta}}$ and learn the dynamics from data (tested in Section
V-A3).
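For a point robot with additive dynamics, the split into action sampler and dynamics model can be sketched as follows (a NumPy illustration, not the paper's TensorFlow code; the hand-set `noise_scale` stands in for the learned $f_{\boldsymbol{\theta}}$):

```python
import numpy as np

def predict(particles, action, noise_scale, rng):
    """Eqs. (5)-(6) for a 2-D point robot with additive dynamics.
    noise_scale is a hand-set stand-in for the learned action sampler."""
    eps = rng.standard_normal(particles.shape)                   # reparameterization noise
    noisy_actions = action + noise_scale * np.abs(action) * eps  # Eq. (5): action-dependent noise
    return particles + noisy_actions                             # Eq. (6): known dynamics g
```

Because the randomness enters only through `eps`, gradients with respect to the noise-generating parameters can flow through the sampled actions, which is the point of the reparameterization trick.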
### IV-C Measurement Update
The measurement update uses the observation to compute particle weights (Eq.
4). DPFs implement this update and additionally use the observation to propose
new particles (see Fig. 3a). The DPF measurement model consists of three
components: a shared _observation encoder_ $h$, which encodes an observation
$\boldsymbol{o}_{t}$ into a vector $\boldsymbol{e}_{t}$, a _particle proposer_
$k$, which generates new particles, and an _observation likelihood estimator_
$l$, which weights each particle based on the observation.
$\displaystyle\boldsymbol{e}_{t}$
$\displaystyle=h_{\boldsymbol{\theta}}(\boldsymbol{o}_{t}),$ (7)
$\displaystyle\boldsymbol{s}^{[i]}_{t}$
$\displaystyle=k_{\boldsymbol{\theta}}(\boldsymbol{e}_{t},\boldsymbol{\delta}^{[i]}\sim
B),$ (8) $\displaystyle w^{[i]}_{t}$
$\displaystyle=l_{\boldsymbol{\theta}}(\boldsymbol{e}_{t},\boldsymbol{s}^{[i]}_{t}),$
(9)
where $h_{\boldsymbol{\theta}}$, $k_{\boldsymbol{\theta}}$, and
$l_{\boldsymbol{\theta}}$ are feedforward networks based on parameters
$\boldsymbol{\theta}$; the input $\boldsymbol{\delta}^{[i]}$ is a dropout
vector sampled from a Bernoulli distribution. Here, dropout is not used for
regularization but as a source of randomness for sampling different particles
from the same encoding $\boldsymbol{e}_{t}$ (see Table I).
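The role of the dropout vector can be made concrete with a minimal NumPy sketch (our illustration: a single untrained linear layer `W` replaces the proposer network $k_{\boldsymbol{\theta}}$, and the encoding is a plain vector rather than a learned one):

```python
import numpy as np

def propose_particles(encoding, n, W, keep_prob, rng):
    """Eq. (8) as a sketch: one observation encoding e_t -> n distinct
    particle proposals. Dropout is the only source of randomness."""
    mask = rng.random((n, encoding.shape[0])) < keep_prob  # Bernoulli vectors delta^[i]
    dropped = (encoding * mask) / keep_prob                # n different dropped encodings
    return np.tanh(dropped @ W)                            # map into state space (cf. Table I)
```

Each row of the dropout mask yields a different proposal from the same encoding, so a single observation can seed many hypotheses.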
### IV-D Particle Proposal and Resampling
We do _not_ initialize DPFs by uniformly sampling the state space—this would
produce too few initial particles near the true state. Instead, we initialize
DPFs by proposing particles from the current observation (as described above)
for the first steps. During filtering, DPFs move gradually from particle
proposal, which generates hypotheses, to resampling, which tracks and weeds
out these hypotheses. The ratio of proposed to resampled particles follows an
exponential function $\gamma^{t-1}$, where $\gamma$ is a hyperparameter set to
$0.7$ in our experiments. We use 1000 particles for testing and 100 particles
for training (to speed up the training process). DPFs implement resampling by
stochastic universal sampling [2], which is not differentiable and leads to
limitations discussed in Section IV-F.
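Both mechanisms are simple to state concretely. The following NumPy sketch (illustrative, using the same $\gamma=0.7$ schedule) shows stochastic universal sampling and the proposal schedule:

```python
import numpy as np

def systematic_resample(weights, rng):
    """Stochastic universal sampling [2]: one random offset, n evenly spaced
    pointers into the cumulative weights. Low variance, but not differentiable."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

def num_proposed(n_particles, t, gamma=0.7):
    """Number of particles proposed from the observation at step t (1-indexed);
    the remaining particles are resampled from the previous belief."""
    return int(round(n_particles * gamma ** (t - 1)))
```

At $t=1$ all particles are proposed from the observation; by $t=3$ roughly half are, with the rest tracked by resampling.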
### IV-E Supervised Learning
DPF models can be learned from sequences of supervised data
$\boldsymbol{o}_{1:T}$, $\boldsymbol{a}_{1:T}$, $\boldsymbol{s}^{*}_{1:T}$
using maximum likelihood estimation by maximizing the belief at the _true
state_ $\boldsymbol{s}^{*}_{t}$. To estimate
$\text{bel}(\boldsymbol{s}^{*}_{t})$ from a set of particles, we treat each
particle as a Gaussian in a mixture model with weights $\boldsymbol{w}_{t}$
(see Fig. 3b). For a sensible metric across state dimensions, we scale each
dimension by dividing by the average step size
$\text{E}_{t}[\text{abs}(\boldsymbol{s}^{*}_{t}-\boldsymbol{s}^{*}_{t-1})]$.
This density estimation enables individual and end-to-end learning.
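A NumPy sketch of this density estimation (illustrative; the Gaussian bandwidth `sigma` is an assumed hyperparameter, not a value taken from the paper):

```python
import numpy as np

def belief_log_likelihood(particles, weights, true_state, scale, sigma=1.0):
    """log bel(s*_t) under a mixture of Gaussians: one isotropic Gaussian per
    particle, mixture weights w_t. Each state dimension is divided by `scale`,
    the average step size E_t[abs(s*_t - s*_{t-1})], before measuring distance."""
    d = particles.shape[1]
    diff = (particles - true_state) / scale  # (n, d), scaled dimensions
    sq = (diff ** 2).sum(axis=1)             # squared scaled distances
    comp = np.exp(-0.5 * sq / sigma ** 2) / (2 * np.pi * sigma ** 2) ** (d / 2)
    return np.log((weights * comp).sum() + 1e-16)  # maximized during training
```

Maximizing this quantity rewards placing well-weighted particles near the true state, which is the training signal for both the individual and the end-to-end objectives below.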
#### IV-E1 Individual learning of the motion model
We optimize the motion model individually to match the observed motion noise
by sampling states $\boldsymbol{s}^{[i]}_{t}$ from $\boldsymbol{s}^{*}_{t-1}$
and $\boldsymbol{a}_{t}$ using Eq. 5-6 and maximizing the data likelihood as
described above,
$\boldsymbol{\theta}^{*}_{f}=\text{argmin}_{\boldsymbol{\theta}_{f}}-\log
p(\boldsymbol{s}^{*}_{t}\mid\boldsymbol{s}^{*}_{t-1},\boldsymbol{a}_{t};\boldsymbol{\theta}_{f})$.
If the dynamics model $g$ is unknown, we train $g_{\boldsymbol{\theta}}$ by
minimizing mean squared error between
$g_{\boldsymbol{\theta}}(\boldsymbol{s}^{*}_{t-1},\boldsymbol{a}_{t})$ and
$\boldsymbol{s}^{*}_{t}-\boldsymbol{s}^{*}_{t-1}$.
#### IV-E2 Individual learning of the measurement model
The particle proposer $k_{\boldsymbol{\theta}}$ is trained by sampling
$\boldsymbol{s}^{[i]}_{t}$ from $\boldsymbol{o}_{t}$ using Eq. 7-8 and
maximizing the Gaussian mixture at $\boldsymbol{s}^{*}_{t}$.
We train the observation likelihood estimator $l_{\boldsymbol{\theta}}$ (and
$h_{\boldsymbol{\theta}}$) by maximizing the likelihood of observations in
their state and minimizing their likelihood in other states,
$\boldsymbol{\theta}^{*}_{h,l}=\text{argmin}_{\boldsymbol{\theta}_{h,l}}$
$-\log(\text{E}_{t}[l_{\boldsymbol{\theta}}(h_{\boldsymbol{\theta}}(\boldsymbol{o}_{t}),\boldsymbol{s}^{*}_{t})])-\log(1-\text{E}_{t_{1},t_{2}}[l_{\boldsymbol{\theta}}(h_{\boldsymbol{\theta}}(\boldsymbol{o}_{t_{1}}),\boldsymbol{s}^{*}_{t_{2}})]).$
#### IV-E3 End-to-end learning
For end-to-end learning, we apply DPFs on overlapping subsequences and
maximize the belief at all true states along the sequence as described above,
$\boldsymbol{\theta}^{*}=\text{argmin}_{\boldsymbol{\theta}}-\log\text{E}_{t}[\text{bel}(\boldsymbol{s}^{*}_{t};\boldsymbol{\theta})].$
### IV-F Limitations and Future Work
We compute the end-to-end gradient by backpropagation from the DPF output
through the filtering loop. _Since resampling is not differentiable, it stops
the gradient computation after a single loop iteration._ Therefore, the
gradient neglects the effects of previous prediction and update steps on the
current belief. This limits the scope of our implementation to supervised
learning, where predicting the Markov state at each time step is a useful
objective that facilitates future predictions. Differentiable resampling could
still improve supervised learning, e.g. by encouraging beliefs to overestimate
uncertainty, which reduces performance at the current step but can potentially
increase robustness of future state estimates.
Since it is difficult to generate training data that include the true state
$\boldsymbol{s}^{*}_{t}$ outside of simulation, we must work towards
unsupervised learning, which will require backpropagation through multiple
time steps because observations are generally non-Markov. Here are two
possible implementations of differentiable resampling that could be the
starting point of future work: a) Partial resampling: sample only $m$
particles in each step; keep $n-m$ particles from the previous time step; the
gradient can flow backwards through those. b) Proxy gradients: define a proxy
gradient for the weight of a resampled particle that is tied to the particle
it was sampled from; the particle pose is already connected to the pose of the
particle it was sampled from; the gradient can flow through these connections.
## V Experiments
TABLE I: Feedforward networks for learnable DPF models
| Model | Architecture |
|---|---|
| $f_{\boldsymbol{\theta}}$ | 2 x fc(32, relu), fc(3) + mean centering across particles |
| $g_{\boldsymbol{\theta}}$ | 3 x fc(128, relu), fc(3) + scaled by $\text{E}_{t}[\text{abs}(\boldsymbol{s}_{t}-\boldsymbol{s}_{t-1})]$ |
| $h_{\boldsymbol{\theta}}$ | conv(3x3, 16, stride 2, relu), conv(3x3, 32, stride 2, relu), conv(3x3, 64, stride 2, relu), dropout(keep 0.3), fc(128, relu) |
| $k_{\boldsymbol{\theta}}$ | fc(128, relu), dropout*(keep 0.15), 3 x fc(128, relu), fc(4, tanh) |
| $l_{\boldsymbol{\theta}}$ | 2 x fc(128, relu), fc(1, sigmoid scaled to range [0.004, 1.0]) |

fc: fully connected, conv: convolution, *: applied at training and test time
We evaluated DPFs in two state estimation problems in robotics: _global
localization_ and _visual odometry_. We tested global localization in
simulated 3D mazes based on vision and odometry. We focused on this task
because it requires simultaneously considering multiple hypotheses, which is
the main advantage of particle filters over Kalman filters. Here, we
evaluated: a) the effect of end-to-end learning compared to individual
learning and b) the influence of algorithmic priors encoded in DPFs by
comparing to generic LSTMs. To show the versatility of DPFs and to compare to
published results with backprop Kalman filters (BKFs) [7], we also apply DPFs
to the KITTI visual odometry task [5]. The goal is to track the pose of a
driving car based on a first-person-view video. In both tasks, DPFs use the
known dynamics model $g$ but do not assume any knowledge about the map of the
environment and learn the measurement model entirely from data.
Our global localization results show that 1) algorithmic priors enable
explainability, 2) end-to-end learning improves performance but sequencing
individual and end-to-end learning is even more powerful, 3) algorithmic
priors in DPFs improve performance compared to LSTMs, reducing the error by
$\sim$80%, and 4) algorithmic priors lead to policy invariance: while the LSTM
baseline learns localization in a way that stops working when the robot
behaves differently ($\sim$84% error rate), localization with the DPF remains
useful with different policies ($\sim$15% error rate).
In the visual odometry task, DPFs outperform BKFs even though the task exactly
fits the capabilities and limitations of Kalman filters—tracking a unimodal
belief from a known initial state. This result demonstrates the applicability
of DPFs to tasks with different properties: higher frequency, longer
sequences, a 5D state instead of a 3D state, and latent actions. The result
also shows that DPFs work on real data and are able to learn measurement
models that work for visually diverse observations based on less than 40
minutes of video.
### V-A Global Localization Task
Figure 4: Three maze environments, Maze 1 (10x5), Maze 2 (15x9), and Maze 3 (20x13), with example observations from each maze. Red lines show example trajectories of length 100. Blue circles show the first five steps, of which the observations are depicted below.
The global localization task is about estimating the pose of a robot based on
visual and odometry input. All experiments are performed in modified versions
of the navigation environments from DeepMind Lab [3], where all objects and
unique wall textures were removed to ensure partial observability. Data was
collected by letting the simulated robot wander through the mazes (see Fig.
4). The robot followed a hand-coded policy that moves in directions with high
depth values from RGB-D input and performs 10% random actions. For each maze,
we collected 1000 trajectories of 100 steps with one step per second for
training and testing. As input for localization, we only used RGB images and
odometry, both with random disturbances to make the task more realistic. For
the observations, we randomly cropped the rendered $32\times 32$ RGB images to
$24\times 24$ and added Gaussian noise ($\sigma=20$, see Fig. 4d-f). As
actions, we used odometry information that corresponds to the change in
position and orientation from the previous time step in the robot’s local
frame, corrupted with multiplicative Gaussian noise ($\sigma=0.1$). All
methods were optimized on short trajectories of length 20 with Adam [13] and
regularized using dropout [19] and early stopping. We will now look at the
results for this task.
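The input disturbances can be sketched as follows (our NumPy reconstruction of the procedure stated above, not the released data pipeline):

```python
import numpy as np

def corrupt(image, odometry, rng):
    """Disturbances for the localization data: random 24x24 crop of the 32x32
    RGB image plus additive Gaussian pixel noise (sigma=20), and multiplicative
    Gaussian noise (sigma=0.1) on the odometry."""
    y, x = rng.integers(0, 9, size=2)  # crop offsets 0..8 (32 - 24 = 8)
    crop = image[y:y + 24, x:x + 24] + rng.normal(0, 20, (24, 24, 3))
    noisy_odometry = odometry * (1 + rng.normal(0, 0.1, odometry.shape))
    return crop, noisy_odometry
```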
#### V-A1 Algorithmic priors enable explainability
Due to the algorithmic priors in DPFs, the models remain explainable even
after end-to-end learning. We can therefore examine a) the motion model, b)
the measurement model, and c) their interplay during filtering. Unless
indicated otherwise, all models were first learned individually and then end-
to-end.
##### Motion Model
Figure 5: Learned motion model. (a) shows predictions (cyan) of the state
(red) from the previous state (black). (b) compares prediction uncertainty in
x to true odometry noise (dotted line).
Fig. 5a shows subsequent robot poses together with predictions from the motion
model. These examples show that the model has learned to spread the particles
proportionally to the amount of movement, assigning higher uncertainty to
larger steps. But how does this behavior depend on whether the model was
learned individually or end-to-end?
Fig. 5b compares the average prediction uncertainty using models from
different learning schemes. The results show that individual learning produces
an accurate model of the odometry noise (compare red and the dotted black
lines). End-to-end learning generates models that overestimate the noise
(green and orange lines), which matches insights of experts in state
estimation who report that “many of the models that have proven most
successful in practical applications vastly overestimate the amount of
uncertainty” [21, p. 118].
(a,d,g) Obs. (b,e,h) Particle proposer (c,f,i) Obs. likelihood estimator
Figure 6: Learned measurement model. Observations, corresponding model output,
and true state (red). To remove clutter, the observation likelihood only shows
above average states.
Figure 7: Global localization with DPFs. One plot per time step of a test
trajectory: true state (red), 1000 particles (proposed particles have weight
0.001). Last plot: the weighted particle mean (green) matches the true state
after the first few steps.
##### Measurement Model
Fig. 6 shows three example observations and the corresponding outputs of the
measurement model: proposed particles and weights depending on particle
position. Note how the model predicts particles and estimates high weights at
the true state and other states in locally symmetric parts of the maze. We can
also see that the data distribution shapes the learned models, e.g. by
focusing on dead ends for the second observation, which is where the robot
following the hand-coded policy will look straight at a wall before turning
around. Similar to motion models, end-to-end learned measurement models are
not accurate but effective for end-to-end state estimation, as we will see
next.
##### Filtering
Figure 7 shows filtering with learned models. The DPF starts by generating
many hypotheses (top row). Then, hypotheses form clusters and incorrect
clusters vanish when they are inconsistent with observations (second row).
Finally, the remaining cluster tracks the true state.
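The filtering behaviour in Fig. 7 follows the generic particle filter recursion of predict, weight, and resample. Below is a minimal numpy sketch with hand-coded placeholder models; the learned DPF motion and measurement models are replaced by simple Gaussians, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
particles = rng.uniform(0, 10, size=(n, 2))   # random pose hypotheses
weights = np.full(n, 1.0 / n)
true_state = np.array([3.0, 7.0])

for t in range(5):
    # Predict: spread particles with motion noise (placeholder motion model).
    particles += rng.normal(0.0, 0.1, particles.shape)
    # Update: weight by a placeholder observation likelihood around the true state.
    d2 = np.sum((particles - true_state) ** 2, axis=1)
    weights *= np.exp(-0.5 * d2 / 0.5 ** 2) + 1e-300
    weights /= weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(n, size=n, p=weights)
    particles, weights = particles[idx], np.full(n, 1.0 / n)

estimate = particles.mean(axis=0)  # posterior mean (weights uniform after resampling)
```

As in the figure, incorrect hypotheses die out during resampling and the surviving cluster tracks the true state.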
(a) Maze 1 (10x5)
(b) Maze 2 (15x9)
(c) Maze 3 (20x13)
(d) Maze 1 (10x5), relative to LSTM
(e) Maze 2 (15x9), relative to LSTM
(f) Maze 3 (20x13), relative to LSTM
Figure 8: Learning curves in all mazes (a-c), also relative to LSTM baseline (d-f). ind: individual learning, e2e: end-to-end learning. Shaded areas denote standard errors.
Figure 9: Generalization between policies in maze 2. A: heuristic exploration policy, B: shortest path policy. Methods were trained using 1000 trajectories from A, B, or an equal mix of A and B, and then tested with policy A or B.
#### V-A2 End-to-end learning improves performance
To quantify the effect of end-to-end learning on state estimation performance,
we compared three different learning schemes for DPFs: individual learning of
each model (ind), end-to-end learning (e2e), and both in sequence (ind+e2e).
We evaluated performance in all three mazes and varied the amount of training
trajectories along a logarithmic scale from 32 to 1000. We measured
localization performance by _error rate_ , where we consider a prediction
erroneous if the distance to the true state, divided by
$\text{E}_{t}[\text{abs}(\boldsymbol{s}_{t}-\boldsymbol{s}_{t-1})]$, is
greater than 1.
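One plausible implementation of this error-rate metric, assuming the expectation is taken over the element-wise absolute state differences along the true trajectory (the paper does not show the exact evaluation code):

```python
import numpy as np

def error_rate(pred, true):
    """Fraction of predictions whose distance to the true state exceeds
    the mean step size E_t[abs(s_t - s_{t-1})] of the trajectory."""
    step = np.mean(np.abs(np.diff(true, axis=0)))   # mean per-step movement
    dist = np.linalg.norm(pred - true, axis=1)
    return float(np.mean(dist / step > 1.0))

true = np.cumsum(np.ones((100, 2)), axis=0)   # straight-line trajectory, unit steps
rate_small = error_rate(true + np.array([0.5, 0.5]), true)  # offset below one step
rate_large = error_rate(true + np.array([2.0, 2.0]), true)  # offset above one step
```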
The resulting learning curves in Fig. 8a-c show that end-to-end learned DPFs
(orange line) consistently outperform individually trained DPFs (red line)
across all mazes. Individual training is worst with few training trajectories
(less than 64) but also plateaus with more data (more than 125 trajectories).
In both cases, the problem is that the models are not optimized for state
estimation performance. With few data, training does not take into account how
unavoidable model errors affect filtering performance. With lots of data, the
models might be individually accurate but suboptimal for end-to-end filtering
performance. End-to-end learning consistently leads to improved performance
for the same reasons.
Performance improves even more when we sequence individual and end-to-end
learning (green line in Fig. 8a-c). Individual pretraining helps because it
incorporates additional information about the function of each model into the
learning process, while end-to-end learning incorporates information about how
these models affect end-to-end performance. Naturally, it is beneficial to
combine both sources of information.
#### V-A3 Algorithmic priors improve performance
To measure the effect of the algorithmic priors encoded in DPFs, we compare
them with a generic neural network baseline that replaces the filtering loop
with a two-layer long-short-term memory network (LSTM) [9]. The baseline
architecture uses the same convolutional network architecture as the DPF—it
embeds images using a convolutional network $h_{\boldsymbol{\theta}}$,
concatenates the embedding with the action vector and feeds the result into
2xlstm(512), 2xfc(256, relu), and fc(3)—and is trained end-to-end to minimize
mean squared error.
The comparison between DPF (ind+e2e) and the LSTM baseline (blue) in Fig. 8a-c
shows that the error rate of DPF (ind+e2e) is lower than for LSTM for all
mazes and all amounts of training data. Also, in all mazes, DPF (ind+e2e) achieves the final performance of LSTM with only 125 trajectories, $\frac{1}{8}$ of the full training set.
We performed a small ablation study in maze 2 to quantify the effect of the known dynamics model on this performance. When the dynamics model is learned, the
final error rate for DPFs increases from 1.6% to 2.7% compared to 6.0% error
rate for LSTMs. This shows that knowing the dynamics model is helpful but not
essential for DPF’s performance.
To visualize the performance relative to the baseline, we divided all learning
curves by LSTM’s performance (see Fig. 8d-f). Since DPFs encode additional
prior knowledge compared to LSTMs, we might expect them to have higher bias
and lower variance. Therefore, DPF’s relative error should be lowest with
small amounts of data and highest with large amounts of data (the green curves
in Fig. 8d-f should go up steadily from left to right until they cross the
blue lines). Surprisingly, these curves show a different trend: DPFs' performance relative to LSTMs improves with more data and converges to about
$\frac{1}{10}$ to $\frac{1}{3}$. There could be a slight upwards trend in the
end, but on a logarithmic data axis it would take a tremendous amount of data
to close the gap. This result suggests that the priors from the Bayes filter
algorithm reduce variance without adding bias— _that these algorithmic priors
capture some true structure about the problem_ , which data does not help to
improve upon.
#### V-A4 Algorithmic priors lead to policy invariance
To be useful for different tasks, localization must be policy-invariant. At
the same time, the robot must follow some policy to gather training data,
which will inevitably affect the data distribution, add unwanted correlations
between states and actions, etc.
We investigated how much the different methods overfit to these correlations
by changing the policy between training and test, using two policies A and B.
Policy A refers to the heuristic exploration policy that we used for all
experiments above (see Sec. V-A). Policy B uses the true pose of the robot,
randomly generates a goal cell in the maze, computes the shortest path to the
goal, and follows this path from cell to cell using a simple controller mixed
with 10% random actions.
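The shortest-path computation in policy B corresponds to a standard breadth-first search over maze cells; a minimal sketch (the grid encoding with 0 for free cells and 1 for walls is an assumption):

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS over free cells (0 = free, 1 = wall); returns the cell sequence start -> goal."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:   # walk predecessors back to the start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # goal unreachable

maze = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = shortest_path(maze, (0, 0), (2, 0))  # routes around the wall
```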
The results in Fig. 9 show that all methods have low error rates when tested
on their training policy (although DPFs improve over LSTMs even more on policy
B). But when we use different policies for training and test, LSTM’s error
rate jumps to over 80%, while DPF (ind+e2e) still works in most cases (5% and
26% error rate).
The LSTM baseline is not able to generalize to new policies because it does
not discriminate between actions and observations and fits to any information
that improves state estimation. If the training data includes correlations
between states and actions (e.g. because the robot moves faster in a long
hallway than in a small room), then the LSTM learns this correlation. Put
differently, the LSTM learns to infer the state from the action chosen by the
policy. The problem is that this inference fails if the policy changes. The
algorithmic priors in DPFs prevent them from overfitting to such correlations
because DPFs cannot directly infer states from actions.
DPFs generalize better from A to B than from B to A. Since generalization from
B to A is equally difficult for DPFs with individually learned models, the
error increase cannot come from overfitting to correlations in the data
through end-to-end learning but is most likely because the states visited by
policy A cover those visited by policy B but not vice versa.
The alternative approach to encoding policy invariance as a prior is to learn
it by adding this variance to the data. Our results show that if we train on
combined training data from both policies (A+B), all methods perform well in
tests with either policy. This approach in the spirit of domain randomization
and data augmentation helps DPFs because it covers the union of the visited
states and (additionally) helps LSTM by including state-action correlations
from both policies. But to make the LSTM localization truly policy invariant
such that it would work with any new policy C, the training data has to cover
the space of all policies in an unbiased way, which is difficult for any
interesting problem.
(a) Visual input (image and difference image) at time steps 100, 200, and 300
(indicated in (b) by black circles) (b) Trajectory 9; starts at (0,0)
Figure 10: Visual odometry with DPFs. Example test trajectory
### V-B Visual Odometry Task
To validate our simulation results on real data, we applied DPFs on the KITTI
visual odometry data set, which consists of data from eleven trajectories of a
real car driving in an urban area for a total of 40 minutes. The data set
includes RGB stereo camera images as well as the ground truth position and
orientation of the car in an interval of $\sim$0.1 seconds. The challenge of
this task is to generalize in a way that works across highly diverse
observations because the method is tested on roads that are never seen during
training. Since the roads are different in each trajectory, it is not possible
to extract global information about the car’s position from the images.
Instead, we need to estimate the car’s translational and angular velocity from
the stream of images and integrate this information over time to track the
car’s position and orientation.
We tackle this problem with a DPF in a five dimensional state space, which
consists of the position, orientation, forward velocity and angular velocity.
DPFs learn to perform visual odometry from a known initial state using a
simple first-order _dynamics model_ $g$ and a learnable action sampler
$f_{\boldsymbol{\theta}}$. Since there is no information about the action of
the driver, the action sampler produces zero mean motion noise on the velocity
dimensions, which is then evaluated with the measurement model. For a fair
comparison, we used the same network architecture for the observation encoder
$h_{\boldsymbol{\theta}}$ as in the backprop Kalman filter paper [7], which
takes as input the current image and the difference image to the last frame
(see Fig. 10). Our observation likelihood estimator $l_{\boldsymbol{\theta}}$
weights particles based on their velocity dimensions and the encoding
$h_{\boldsymbol{\theta}}(\boldsymbol{o}_{t})$. Since the initial state is known, we do not use a particle proposer. We train the DPF individually and
end-to-end, using only the velocity dimensions for maximum likelihood
estimation.
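A minimal numpy sketch of such a first-order dynamics model and zero-mean action sampler (the state layout, time step, and noise scales are assumptions for illustration, not the paper's actual parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
DT = 0.1  # seconds between frames (assumed, matching the data interval)

def dynamics(state):
    """First-order model g: integrate pose from the velocity dimensions.
    Assumed state layout: [x, y, heading theta, forward speed v, turn rate omega]."""
    x, y, theta, v, omega = state.T
    return np.stack([x + DT * v * np.cos(theta),
                     y + DT * v * np.sin(theta),
                     theta + DT * omega,
                     v, omega], axis=-1)

def action_sampler(particles, sigma_v=0.5, sigma_omega=0.05):
    """Zero-mean motion noise on the velocity dimensions only."""
    noise = np.zeros_like(particles)
    noise[:, 3] = rng.normal(0.0, sigma_v, len(particles))
    noise[:, 4] = rng.normal(0.0, sigma_omega, len(particles))
    return particles + noise

particles = np.tile([0.0, 0.0, 0.0, 10.0, 0.0], (100, 1))  # known initial state
particles = dynamics(action_sampler(particles))             # one prediction step
```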
We evaluated the performance following the same procedure as in the BKF paper.
We used eleven-fold cross validation where we picked one trajectory for
testing and used all others for training with subsequences of length 50. We
evaluated the trained model on the test trajectory by computing the average
error over all subsequences of 100 time steps and all subsequences of 100,
200, 400, and 800 time steps.
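One plausible reading of this per-subsequence translational error, sketched in numpy (the rotational alignment of subsequences performed by the full KITTI benchmark is omitted here for brevity):

```python
import numpy as np

def translational_error(pred, true, length):
    """End-point drift per metre travelled, averaged over all
    subsequences of `length` time steps."""
    errs = []
    for s in range(len(true) - length):
        e = s + length
        # Align subsequence starts, then compare relative end points.
        drift = np.linalg.norm((pred[e] - pred[s]) - (true[e] - true[s]))
        travelled = np.sum(np.linalg.norm(np.diff(true[s:e + 1], axis=0), axis=1))
        errs.append(drift / travelled)
    return float(np.mean(errs))

true = np.stack([np.arange(200.0), np.zeros(200)], axis=1)  # straight path, 1 m/step
pred = 1.1 * true                                           # 10% scale drift
err = translational_error(pred, true, 100)                  # -> 0.1 m/m
```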
Table II compares our results to those published for BKFs [7]. DPFs outperform BKFs, in particular for short sequences, where they reduce the error by $\sim$30%. Any improvement over BKFs in this task is surprising because
Gaussian beliefs seem sufficient to capture uncertainty in this task. The
improvement could come from the ability of particles to represent long tailed
probability distributions. These results demonstrate that DPFs generalize to
different tasks and can be successfully applied to real data.
TABLE II: KITTI visual odometry results
| | Test 100 | Test 100/200/400/800 |
|---|---|---|
| _Translational error (m/m)_ | | |
| BKF* | 0.2062 | 0.1804 |
| DPF (ind) | 0.1901 $\pm$ 0.0229 | 0.2246 $\pm$ 0.0371 |
| DPF (e2e) | 0.1467 $\pm$ 0.0149 | 0.1748 $\pm$ 0.0468 |
| DPF (ind+e2e) | 0.1559 $\pm$ 0.0280 | 0.1666 $\pm$ 0.0379 |
| _Rotational error (deg/m)_ | | |
| BKF* | 0.0801 | 0.0556 |
| DPF (ind) | 0.1074 $\pm$ 0.0199 | 0.0806 $\pm$ 0.0153 |
| DPF (e2e) | 0.0645 $\pm$ 0.0086 | 0.0524 $\pm$ 0.0068 |
| DPF (ind+e2e) | 0.0499 $\pm$ 0.0082 | 0.0409 $\pm$ 0.0060 |

Means $\pm$ standard errors; * results from [7]
## VI Conclusion
We introduced differentiable particle filters to demonstrate the advantages of
combining end-to-end learning with algorithmic priors. End-to-end learning
optimizes models for performance while algorithmic priors enable
explainability and regularize learning, which improves data-efficiency and
generalization. The use of algorithms as algorithmic priors will help to
realize the potential of deep learning in robotics. The components of the DPF
implementation, such as sample generation and density estimation, will be
useful for producing differentiable versions of other sampling-based
algorithms.
## Acknowledgments
We gratefully acknowledge financial support by the German Research Foundation
(DFG, project number 329426068).
## References
* [1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems. http://tensorflow.org/, 2015.
* [2] James E. Baker. Reducing Bias and Inefficiency in the Selection Algorithm. In Proceedings of the International Conference on Genetic Algorithms (ICGA), pages 14–21, 1987.
* [3] Charles Beattie, Joel Z. Leibo, Denis Teplyashin, Tom Ward, Marcus Wainwright, Heinrich Küttler, Andrew Lefrancq, Simon Green, Víctor Valdés, Amir Sadik, and others. DeepMind Lab. arXiv:1612.03801, 2016.
* [4] DeepMind. Sonnet: TensorFlow-Based Neural Network Library. https://github.com/deepmind/sonnet, 2017.
* [5] Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision Meets Robotics: The KITTI Dataset. The International Journal of Robotics Research, 32(11):1231–1237, 2013.
* [6] Saurabh Gupta, James Davidson, Sergey Levine, Rahul Sukthankar, and Jitendra Malik. Cognitive Mapping and Planning for Visual Navigation. arXiv:1702.03920, 2017.
* [7] Tuomas Haarnoja, Anurag Ajay, Sergey Levine, and Pieter Abbeel. Backprop KF: Learning Discriminative Deterministic State Estimators. In Advances in Neural Information Processing Systems (NIPS), pages 4376–4384, 2016.
* [8] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition. arXiv:1512.03385, 2015.
* [9] Sepp Hochreiter and Jürgen Schmidhuber. Long Short-Term Memory. Neural Computation, 9(8):1735–1780, 1997.
* [10] Rico Jonschkowski and Oliver Brock. End-To-End Learnable Histogram Filters. In Workshop on Deep Learning for Action and Interaction at the Conference on Neural Information Processing Systems (NIPS), 2016.
* [11] Peter Karkus, David Hsu, and Wee Sun Lee. QMDP-Net: Deep Learning for Planning under Partial Observability. In Advances in Neural Information Processing Systems (NIPS), pages 4697–4707, 2017.
* [12] Maximilian Karl, Maximilian Soelch, Justin Bayer, and Patrick van der Smagt. Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data. arXiv:1605.06432, 2017.
* [13] Diederik P. Kingma and Jimmy Ba. Adam: A Method for Stochastic Optimization. In Proceedings of the International Conference on Learning Representations (ICLR), 2014.
* [14] Diederik P. Kingma and Max Welling. Auto-Encoding Variational Bayes. arXiv:1312.6114, 2013.
* [15] Yann A. LeCun, Bernhard E. Boser, John S. Denker, Donnie Henderson, Richard E. Howard, Wayne E. Hubbard, and Lawrence D. Jackel. Backpropagation Applied to Handwritten Zip Code Recognition. Neural Computation, 1(4):541–551, 1989.
* [16] Masashi Okada, Luca Rigazio, and Takenobu Aoshima. Path Integral Networks: End-to-End Differentiable Optimal Control. arXiv:1706.09597, 2017.
* [17] Tanmay Shankar, Santosha K. Dwivedy, and Prithwijit Guha. Reinforcement Learning via Recurrent Convolutional Neural Networks. In Proceedings of the International Conference on Pattern Recognition (ICPR), pages 2592–2597, 2016.
* [18] David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac-Arnold, David Reichert, Neil Rabinowitz, and Andre Barreto. The Predictron: End-to-End Learning and Planning. In Proceedings of the International Conference on Machine Learning (ICML), pages 3191–3199, 2017.
* [19] Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.
* [20] Aviv Tamar, Yi Wu, Garrett Thomas, Sergey Levine, and Pieter Abbeel. Value Iteration Networks. In Advances in Neural Information Processing Systems (NIPS), pages 2154–2162, 2016.
* [21] Sebastian Thrun, Wolfram Burgard, and Dieter Fox. Probabilistic Robotics. MIT Press, 2005.
* [22] Manuel Watter, Jost Tobias Springenberg, Joschka Boedecker, and Martin Riedmiller. Embed to Control: A Locally Linear Latent Dynamics Model for Control from Raw Images. In Advances in Neural Information Processing Systems (NIPS), pages 2746–2754, 2015.
# Transient two pole accretion in the polar V496 UMa
M. R. Kennedy,1,2 C. Littlefield,3,4† and P. M. Garnavich3
1Department of Physics, University College Cork, Cork, Ireland
2Jodrell Bank Centre for Astrophysics, Department of Physics and Astronomy,
The University of Manchester, M19 9PL, UK
3Department of Physics, University of Notre Dame, Notre Dame, IN 46556 USA
4Bay Area Environmental Research Institute, Moffett Field, CA 94035 USA
E-mail:<EMAIL_ADDRESS>† These authors contributed equally to this work.
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
We report XMM-Newton and TESS observations of V496 UMa, an AM Herculis-type
cataclysmic variable. The XMM-Newton observation reveals that at times, two
poles on the white dwarf accrete simultaneously, but accretion onto the
secondary magnetic pole is erratic and can nearly cease in less than one
binary orbit (1.5 h). Modelling of the X-ray spectrum during the primary
maximum reveals no change in the accretion structures onto the primary pole
when accretion onto the secondary pole is disrupted, suggesting that the
disruption of accretion onto the secondary pole may be caused by mass-transfer
variations from the donor star. The TESS observation, which spanned eight
weeks at a two-minute cadence, shows a stable, double-humped orbital
modulation due to cyclotron emission from the post-shock region, while the
observed times of maximum light show a slow systematic drift that does not
correlate with the system’s overall brightness.
###### keywords:
accretion, accretion discs – novae, cataclysmic variables – stars: magnetic
field – X-rays: individual: V496 UMa
pubyear: 2021. pagerange: Transient two pole accretion in the polar V496 UMa–A
## 1 Introduction
Cataclysmic variables (CV) are compact binary systems with white dwarf (WD)
primaries that are accreting material from a nearby companion star which fills
its Roche lobe. When a new CV is discovered, it is important to establish a clear understanding of the accretion structures within the binary, as this gives us information about the central WD. Indeed, the path that material from the companion takes as it flows through the inner Lagrange point (L1) towards the WD is dictated by the magnetic field of the WD. In systems where the WD's surface magnetic field is large (>5 MG), material first flows as a ballistic stream towards the WD until the point at which the magnetic pressure exerted on the material by the WD's magnetic field overcomes the ram pressure inside the stream. From this point onwards, material couples to the WD's magnetic field and flows towards the WD's magnetic poles. Systems in which the magnetic field is strong enough to produce such an accretion structure are called AM Her stars (after the archetypal system) or polars, due to the high percentage of polarised optical light which they produce (Tapia, 1977). They are formed because, soon after the binary's formation, the rotational period of the WD ($P_{\rm{s}}$) synchronises to the orbital period ($P_{\rm{O}}$) of the system and becomes tidally locked due to the interaction between the WD's magnetic field and the secondary star's magnetic field.
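The coupling condition described above, magnetic pressure overcoming ram pressure, leads to the standard order-of-magnitude estimate of the magnetospheric radius, $r_{\rm mag}\sim(\mu^{4}/(2GM\dot{M}^{2}))^{1/7}$ for dipole moment $\mu=B_{\rm s}R_{\rm WD}^{3}$. A rough numeric sketch with illustrative polar-like values (none of these numbers are measurements of V496 UMa):

```python
G = 6.674e-8          # cm^3 g^-1 s^-2 (CGS)
M_sun = 1.989e33      # g

# Illustrative values only (assumptions, not fitted to V496 UMa):
M_wd = 0.8 * M_sun    # white dwarf mass
R_wd = 7e8            # white dwarf radius, cm
B_s = 30e6            # surface field, 30 MG in Gauss
mdot = 1e16           # accretion rate, g/s (~1.6e-10 M_sun/yr)

mu = B_s * R_wd**3    # magnetic dipole moment, G cm^3
r_mag = (mu**4 / (2 * G * M_wd * mdot**2)) ** (1.0 / 7.0)
print(f"r_mag ~ {r_mag / R_wd:.0f} white dwarf radii")
```

For polar-strength fields this spherical estimate exceeds the binary separation, which is why the stream couples to the field well inside the orbit and the WD is magnetically locked.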
Due to tidal locking, there is a preferential magnetic pole on the WD for
material to flow to - the one which is aligned best with material in the
ballistic stream (for the remainder of this paper, we shall refer to this as
the primary pole). During the early years of polar study, accretion was
thought to only occur onto a single pole of the white dwarf. Such accretion
onto a single pole leads to large amplitude variations at optical and X-ray
wavelengths as the accreting pole rotates in and out of our field of view.
However, after the discovery of polars which underwent changes in the sign of
the circular polarisation (e.g. VV Pup: Liebert & Stockman 1979), polars which
had two optical maxima and minima per orbit (e.g. EF Eri; Watson et al. 1980),
and the variable light curve of the archetype of polars AM Her (Heise et al.,
1985), it was quickly realised that a second pole might accrete within these
systems. This idea of 2 pole accretion was soldified by spectroscopic
observations of VV Pup, in which 2 distinct sets of cyclotron features
(corresponding to magnetic both poles of the WD) were observed (Wickramasinghe
et al., 1989). Since then, secondary pole accretion has become a common
feature of many polars.
While accretion onto this secondary pole can be constant and uninterrupted
(e.g. Reimers et al. 1999, Schwarz et al. 2001, and Schwarz et al. 2002), it
can also be transient, and manifests either as a change in the optical and
X-ray light curve (as in AM Her), or as a change in the sign of the circular
polarisation of light coming from a polar (e.g. as in VV Pup and QQ Vul;
Schwope et al. 2000). In such cases, the accretion stream can be thought of as
a probe of the WDs magnetic field, as it couples on to different fields lines
at different times, helping us to build a picture of the WDs magnetic field
structure.
For systems with a transient behaviour, there are two possible explanations.
The first is that the white dwarf is spinning with a period slightly longer or
shorter than the orbital period. There are a handful of polar systems for
which this is true, with $P_{\rm{s}}$ being $\sim 2\%$ smaller or larger than $P_{\rm{O}}$. These systems are thought to have been knocked out of synchronicity by a nova eruption on the WD, an idea developed after observations of V1500 Cyg following its nova outburst in 1975 (Stockman et al., 1988). They are expected to resynchronise after enough time has passed, and if this asynchronicity is the cause of the transient two-pole accretion, then the phenomenon should recur on a periodic timescale (e.g. as seen clearly in TESS observations of the asynchronous polar CD Ind; Hakala et al. 2019 and Littlefield et al. 2019).
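For an asynchronous polar, the accretion geometry repeats on the spin-orbit beat period, $1/P_{\rm beat}=|1/P_{\rm s}-1/P_{\rm O}|$. A quick numeric sketch using the $\sim 2\%$ offset quoted above, with V496 UMa's 91-minute orbital period inserted purely for illustration:

```python
P_orb = 91.0            # orbital period in minutes (V496 UMa, Sec. 1)
P_spin = 0.98 * P_orb   # spin period ~2% shorter, as in asynchronous polars

# Beat period between spin and orbit: 1/P_beat = |1/P_spin - 1/P_orb|
beat = 1.0 / abs(1.0 / P_spin - 1.0 / P_orb)   # minutes, ~3 days here
print(f"beat period ~ {beat / (60 * 24):.1f} days")
```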
However, there are clear cases where a polar switches between one and two pole
accretion, and is not asynchronous. One need look no further than the
archetype of polars, AM Her, to see a firm example. AM Her has been observed
in both one-pole and two-pole configurations, but the timescale for switching
between configurations is months-years. For short epochs of observations
($\sim$ months), the accretion geometry seems stable (see Schwope et al. 2020
for a thorough review on the variability seen in AM Her).
The alternate model is that the transient behaviour is caused by a change in
the mass transfer rate from the binary. When the transfer rate is high, the
ram pressure within the accretion structures is high enough such that the
penetration depth the ballistic stream achieves into the WDs magnetic field is
deep enough for material to reach field lines connected to the second pole. If
the mass transfer rate drops, the penetration depth decreases, leading to
cessation of accretion onto the secondary pole. This variable accretion model
led researchers to investigate whether the X-ray emission from the secondary
pole is not described by the typical shock model (King & Lasota 1979; Lamb & Masters 1979), but instead may be due to “blobby” accretion (Kuijpers & Pringle 1982; Frank et al. 1988), where individual blobs of accreting material penetrate below the WD photosphere, manifesting as thermal radiation. Such a model has been applied to explain the different accretion regimes within AM Her, and predicts significantly different X-ray spectra from the primary and secondary poles (Hameury & King 1988; Schwope et al. 2020).
The variation in the mass transfer rate has been attributed to stellar spots on the secondary star causing a temporary change in the accretion rate (Livio & Pringle, 1994), but the switching between one- and two-pole accretion often occurs on timescales of weeks to years.
Differentiating whether two-pole accretion is occurring due to asynchronicity or a variable mass transfer rate requires long-term monitoring to
identify any periodicity in the transitions between single and two-pole
accretion. Finally, determining whether a polar is undergoing “blobby”
accretion onto the secondary pole requires X-ray spectra of both the primary
and secondary poles.
This paper focuses on the polar V496 UMa, and on answering questions
surrounding the accretion geometry and its stability. This system has been the
subject of two dedicated studies, both of which reported time-series
photometry and optical spectroscopy. Littlefield et al. (2015) measured a
91-minute orbital period and showed that a typical orbital light curve
contains two photometric maxima, one of which peaks at $V\sim 16.5$ and the
other at $V\sim 17$. A single, low-resolution spectrum showed the H, He I, and
He II emission lines which are characteristic of a polar accreting at a high
accretion rate, along with a non-thermal continuum. Littlefield et al. (2018)
followed up with time-series spectroscopy showing that V496 UMa’s emission-
line spectrum transitions into an absorption spectrum for several minutes
during each orbit when the accretion curtain eclipses the cyclotron-emitting
region. They also established that the non-thermal continuum in the optical
spectrum is caused by smearing of the harmonics of V496 UMa’s cyclotron
spectrum. V496 UMa’s parallax from Gaia EDR3 (Gaia Collaboration et al., 2016,
2021) yields a distance of $760\pm 30$ pc using the geometric algorithm from
Bailer-Jones et al. (2021).
V496 UMa’s most distinguishing property is the intermittent nature of the
secondary maximum in its optical light curve. Littlefield et al. (2018) found
that four of the 45 secondary maxima in their photometry were either
extraordinarily weak or completely absent, with no apparent impact on the rest
of the orbital light curve (Fig. 3 in Littlefield et al., 2018). When absent,
V496 UMa could be as much as 2.5 mag fainter during the expected secondary
maxima than its normal brightness during this part of the orbit. Even more
surprisingly, a failed secondary maximum in one orbit could be followed by a
normal secondary maximum in the very next orbit, establishing that the
mechanism responsible for the failed maxima operates on timescales of less
than one orbit. Littlefield et al. (2018) speculated that the missing maxima
might arise from intermittent accretion onto the secondary magnetic pole but
lacked the observational data to test this proposal.
Motivated by the question of what is causing the missing secondary maximum,
and whether the X-ray spectrum and light curve vary in a similar manner, we
obtained X-ray data of V496 UMa using the XMM-Newton X-ray telescope. During preparation of these data (presented in Section 3), V496 UMa was also observed by the Transiting Exoplanet Survey Satellite (TESS; Ricker et al. 2015),
allowing for a unique opportunity to probe the long term nature of the
variability of the secondary maximum. These data are discussed in Section 4.
## 2 Observations
Table 1: Details of the various observations of V496 UMa.

| Facility | Start Time | End Time | Cadence |
|---|---|---|---|
| XMM-Newton | 2017-12-03 08:35:31 | 2017-12-03 16:38:51 | 100 s (X-ray), 10 s (optical) |
| SLKT | 2019-08-29 01:53:49 | 2019-08-29 03:46:07 | 33 s |
| SLKT | 2019-09-06 02:14:52 | 2019-09-06 03:44:54 | 33 s |
| SLKT | 2019-09-18 01:12:35 | 2019-09-18 02:53:00 | 33 s |
| SLKT | 2017-12-03 08:14:49 | 2017-12-03 11:59:14 | 33 s |
| AAVSO | 2017-12-02 08:18:46 | 2017-12-07 12:32:30 | |
| TESS | 2019-08-15 | 2019-10-06 | 120 s |
### 2.1 XMM-Newton
V496 UMa was observed by XMM-Newton for 29 ks starting 2017-12-03 08:35:31
(UTC). The European Photon Imaging Camera (EPIC) pn (Strüder et al., 2001), MOS1, and MOS2 (Turner et al., 2001) instruments were all operated in full
frame mode with a thin filter inserted. The Reflection Grating Spectrographs
(RGS1 and RGS2; den Herder et al. 2001) were both operated in spectroscopy
HER+SES mode. The Optical Monitor (OM; Mason et al. 2001) was operated in fast
imaging mode with a white filter inserted. Due to the OM's observing mode, there are brief gaps in coverage every $\sim 26$ min. Initial inspection of
the RGS data suggested no appreciable signal was detected, and these data will
not be discussed further.
All data were reduced using tasks in SAS v16.1.0. All data were corrected to
the solar system barycentre using Barycen. A background light curve was
inspected to look for periods of high background which may have affected the
data, but none were found. All extracted spectra and light curves are
available through an online repository.
### 2.2 TESS
TESS observed V496 UMa in two consecutive sectors at a two-minute cadence.
Observations began in Sector 14 on 2019 Aug. 15 and continued until the end of
Sector 15 on 2019 Oct. 6. The TESS data are nearly uninterrupted, except for
three downlink gaps. Both TESS light curves were extracted with lightkurve
(Lightkurve Collaboration et al., 2018). After experimenting with different
extraction apertures, we decided to use the pipeline apertures. Due to TESS’s
21-arcsec pixels, the TESS observations of V496 UMa are blended. In spite of
this blending, V496 UMa and its photometric variability were both readily
apparent in visual inspection of the TESS images.
### 2.3 Ground-Based Optical Photometry
Additional observations of V496 UMa were carried out by members of the
American Association of Variable Star Observers (AAVSO) in the days leading up
to, during, and after the XMM-Newton observations. These data were used to
identify correlations between the X-ray behaviour and optical behaviour of the
system.
Finally, we used the 80-cm Sarah L. Krizmanich Telescope (SLKT) at the
University of Notre Dame to obtain time-series photometry of V496 UMa during
the first part of the XMM-Newton observation as well as three light curves
while TESS observations were underway. Table 1 summarizes these observations.
The observations consisted of 30-second unfiltered exposures with
approximately 3 s of overhead between images. Data were debiased and
flatfielded in the usual fashion, and differential aperture photometry used to
extract the flux of V496 UMa.
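The differential aperture photometry step reduces, in essence, to Pogson's relation against a comparison star. A minimal sketch (the fluxes and the comparison-star magnitude below are made-up illustrative values, not measurements from this work):

```python
import math

def differential_mag(target_flux, comp_flux, comp_mag):
    """Differential magnitude of the target relative to a comparison
    star of known magnitude, via Pogson's relation."""
    return comp_mag - 2.5 * math.log10(target_flux / comp_flux)

# Toy example: target is one quarter as bright as a 14.0-mag comparison star.
m = differential_mag(250.0, 1000.0, 14.0)   # ~15.505
```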
## 3 X-ray
Figure 1: Light curves around the time of the XMM-Newton observations. The top
panel shows the full 0.3–10.0 keV light curve. The middle panel shows the
optical light curves using data from the OM, the SLKT, and from members of the
AAVSO community. The third panel shows the X-ray light curve split into 2 bands -
soft (0.3–2 keV) and hard (2–10 keV). The bottom panel shows the ratio of
these light curves, and highlights when we see a hard X-ray excess. Orbital
phase has been calculated using the ephemeris described in the text.
### 3.1 X-ray light curves
Light curves were extracted for 3 energy ranges - the full energy range of the
detectors (0.3-10.0 keV), a soft energy range of 0.3-2.0 keV, and a hard
energy range of 2.0-10.0 keV. The Hardness ratio
(($F_{2-10}-F_{0.3-2}$)/($F_{2-10}+F_{0.3-2}$); Worpel & Schwope 2015) for the
duration of the observations was also computed. The top panel of Figure 1
shows the 0.3-10.0 keV light curve of V496 UMa phased using the ephemeris from
Section 4.2. A total of 8 X-ray maxima were detected over 5 orbits of
observations. The light curve over a single orbital period is composed of
three features - a primary maximum at $\phi=0$ which corresponds to the
optical maximum, a secondary maximum which occurs at $\phi=0.4$, and a rapid
change in the Hardness ratio from -0.5 to +0.3 at $\phi=0.75$. The secondary
maximum is only clearly detected for the first 3 orbital periods of data,
after which its strength diminishes rapidly.
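The hardness ratio used above can be computed per time bin with a simple sketch (the count values here are illustrative only):

```python
def hardness_ratio(hard, soft):
    """(F_hard - F_soft)/(F_hard + F_soft): -1 for purely soft emission,
    +1 for purely hard emission."""
    total = hard + soft
    if total == 0:
        return float("nan")  # empty time bin
    return (hard - soft) / total

# A soft-dominated bin (HR ~ -0.5) versus an absorption-dip bin (HR ~ +0.3):
hr_soft = hardness_ratio(1.0, 3.0)
hr_dip = hardness_ratio(6.5, 3.5)
```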
### 3.2 X-ray spectra
#### 3.2.1 Spectral Extraction
Spectra covering the 0.3-10.0 keV range were extracted for several different phase
intervals as given by:
* •
An X-ray spectrum constructed from all data up until the first missing
secondary maximum (T (BJD)<245810.06). This is referred to as the “half data”
set in the rest of the text.
* •
An X-ray spectrum of the primary maximum (data with $0.85<\phi<0.2$).
* •
An X-ray spectrum of the secondary maximum (data with $0.2<\phi<0.7$, but only
for the first three orbital cycles).
* •
An X-ray spectrum of the first failed secondary maximum (data with
$0.2<\phi<0.7$, but only for the fourth orbital cycle).
* •
An X-ray spectrum of the second failed secondary maximum (data with
$0.2<\phi<0.7$, but only for the fifth orbital cycle).
* •
An X-ray spectrum of regions where the Hardness ratio was measured to be
positive (data with $0.7<\phi<0.85$). This is referred to as the absorption
dip spectrum from here onward.
In the case of each spectrum, the region for source extraction was chosen to
be a circle with a radius of 24″ around the target position. For the PN
instrument, background spectra were extracted using a circular region
centered on 13:21:32.46 +56:11:58.45 and with a radius of 57″. For both MOS
instruments, background spectra came from a circular region centered on
13:21:33.25 +56:09:17.63 and with a radius of 96″. These spectra, along with
the times of the X-ray observation they correspond to, are shown in Figure 2.
Figure 2: The extracted PN (black points), MOS1 (red points), and MOS2 (purple
points) spectra for 6 different time segments, along with the model residuals in units
of $\sigma$. Above each spectrum is the 1-10 keV light curve from the PN
instrument. The spectra in each panel were extracted using the highlighted
time ranges. The best fit spectra to each epoch of data are also plotted in
each panel as histograms, with the same colour scheme as the data. The
components which are summed together to give the best fitting model to the PN
instrument data are also shown in each panel, and consist of a blackbody
component (blue), a diffuse hot plasma (Mekal; orange) and in one instance a
6.4 keV Gaussian emission component (green).
#### 3.2.2 Spectral Fitting
The spectra were analysed using Xspec v12.10.1 (Arnaud, 1996). Each of the 6
extracted spectra was fit with a black body to account for the soft (<1.0
keV) component and a single temperature plasma emission model (mekal in Xspec;
Mewe et al. 1985; Mewe et al. 1986; Liedahl et al. 1995) to account for
emission produced in the shock above the white dwarf’s surface. Both components
were absorbed by an interstellar absorber (tbabs, the Tuebingen-Boulder ISM
absorption model; Wilms et al. 2000). Finally, the model was multiplied by a
constant which was set to a value of 1 for the PN instrument, and allowed to
vary for both the MOS1 and MOS2 instruments to allow for cross-instrument
calibration. Such a model is a common starting place for describing the
spectra of polars (e.g. Schmidt et al. 2005; Worpel & Schwope 2015). We also
included a Gaussian emission component at 6.4 keV to account for the common
appearance of the Fe fluorescence feature at these energies in some accreting
systems.
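The overall structure of this model - a cross-calibration constant times an absorber acting on the sum of a blackbody and a plasma component - can be sketched with deliberately simplified stand-ins for the real Xspec components (the power-law photoelectric cross-section and the Planck- and bremsstrahlung-like shapes below are illustrative approximations, not the tbabs or mekal tables):

```python
import math

def toy_model(E_keV, nH22, kT_bb, kT_plasma, norm_bb, norm_pl, const=1.0):
    """Illustrative stand-in for const * tbabs * (bbody + mekal).
    The functional forms are simplified approximations, not the
    real Xspec component tables."""
    sigma = 2.4e-22 * E_keV ** (-8.0 / 3.0)   # rough cm^2 per H atom
    absorb = math.exp(-nH22 * 1e22 * sigma)   # tbabs-like factor
    bbody = norm_bb * E_keV ** 2 / math.expm1(E_keV / kT_bb)
    plasma = norm_pl * math.exp(-E_keV / kT_plasma) / E_keV
    return const * absorb * (bbody + plasma)

# For a large (dip-like) column, the soft band is suppressed by orders of
# magnitude while the hard band is barely touched:
soft_ratio = (toy_model(0.5, 2.0, 0.09, 13.0, 1.0, 1.0)
              / toy_model(0.5, 0.0, 0.09, 13.0, 1.0, 1.0))
hard_ratio = (toy_model(5.0, 2.0, 0.09, 13.0, 1.0, 1.0)
              / toy_model(5.0, 0.0, 0.09, 13.0, 1.0, 1.0))
```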
For the absorption dip spectrum, we added an additional partial covering
absorption component (pcfabs), and froze all other parameters to their best
fit values from modelling the spectrum of the primary maximum, under the
assumption that the only difference between the absorption dip spectrum and
the primary maximum spectrum should be additional absorption from the
accretion stream.
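The effect of the partial covering absorber is that a fraction of the source is viewed through the absorbing column while the remainder is seen directly. A sketch, using an approximate power-law photoelectric cross-section rather than the real pcfabs tables:

```python
import math

def pcfabs_factor(E_keV, nH22, cvr_frac):
    """Partial-covering multiplicative factor, pcfabs-style: a fraction
    cvr_frac of the source is seen through the absorber, the rest
    directly. The cross-section is a rough stand-in."""
    sigma = 2.4e-22 * E_keV ** (-8.0 / 3.0)   # approximate cm^2 per H atom
    tau = nH22 * 1e22 * sigma
    return cvr_frac * math.exp(-tau) + (1.0 - cvr_frac)

# With the best-fit dip values (nH ~ 2.1e22 cm^-2, covering fraction 0.68),
# the soft band is suppressed to ~32% (the covered sightline is opaque)
# while the hard band is barely affected:
soft = pcfabs_factor(0.5, 2.1, 0.68)
hard = pcfabs_factor(5.0, 2.1, 0.68)
```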
The best fit parameters for these models were found using the default
Levenberg-Marquardt algorithm in Xspec with a maximum number of 10000
evaluations allowed and a critical delta of $1\times 10^{-4}$ required. The
parameter space was then explored to obtain errors on the parameters by using
the Goodman-Weare algorithm (Goodman & Weare, 2010) for Markov Chain Monte
Carlo as implemented within Xspec. A total of 20 walkers were used, each of
which were allowed to take 500,000 steps. The corner plots from the MCMC
analysis of each of the spectra are included as an online dataset, while the
corner plot from fitting the primary maximum is included in Appendix A.
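The Goodman & Weare (2010) stretch move underlying Xspec's chain command is compact enough to sketch directly; the one-dimensional toy problem below is purely illustrative (the real fit explores a multi-dimensional parameter space):

```python
import math
import random

def stretch_move_sampler(log_prob, walkers, nsteps, a=2.0, seed=42):
    """Minimal Goodman & Weare (2010) affine-invariant ensemble sampler,
    restricted to one dimension for brevity."""
    rng = random.Random(seed)
    walkers = list(walkers)
    n = len(walkers)
    chain = []
    for _ in range(nsteps):
        for k in range(n):
            j = rng.randrange(n - 1)
            if j >= k:
                j += 1                    # pick a different walker
            # Draw the stretch factor z from g(z) ~ 1/sqrt(z) on [1/a, a]
            z = (1.0 + (a - 1.0) * rng.random()) ** 2 / a
            proposal = walkers[j] + z * (walkers[k] - walkers[j])
            # Acceptance includes a z^(d-1) factor, which is 1 for d = 1
            if math.log(rng.random()) < log_prob(proposal) - log_prob(walkers[k]):
                walkers[k] = proposal
            chain.append(walkers[k])
    return chain

# Sample a unit Gaussian with 20 walkers:
chain = stretch_move_sampler(lambda x: -0.5 * x * x,
                             [0.1 * i for i in range(20)], 500)
```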
The results from fitting this model to each of the spectra are shown in Figure
2, and the best-fit parameter values are given in Table 2. We also include the
unabsorbed, 0.3-10.0 keV X-ray luminosity (assuming a source distance of
760$\pm$30 pc) for just the plasma component of the model (that is, excluding
the soft thermal emission from the white dwarf), which can be used as a stand-
in for the mass accretion rate.
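As a rough illustration of that stand-in, the accretion luminosity $L\approx GM\dot{M}/R$ can be inverted for $\dot{M}$; the WD mass and radius below are assumed, typical values, not quantities measured in this work:

```python
G = 6.674e-8          # cgs gravitational constant
MSUN = 1.989e33       # solar mass in g
SEC_PER_YR = 3.156e7

def mdot_from_lx(L_erg_s, M_wd=0.8 * MSUN, R_wd=7e8):
    """Invert L = G*M*Mdot/R for the accretion rate in g/s.
    M_wd and R_wd are assumed, typical polar values."""
    return L_erg_s * R_wd / (G * M_wd)

# Primary-maximum plasma luminosity from Table 2, ~2.5e32 erg/s:
mdot = mdot_from_lx(2.5e32)               # ~1.6e15 g/s
mdot_msun_yr = mdot * SEC_PER_YR / MSUN   # ~3e-11 Msun/yr
```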
Of the six data sets to which this model was applied, only two have an
unacceptable $\chi^{2}$ - the half data and primary maximum data. In the
first instance, the poor $\chi^{2}$ can be attributed to the fact that the
spectrum is the result of emission from both accreting magnetic poles of the
WD, while the model contains only a single temperature plasma. As such,
decomposing the spectrum into a primary and secondary spectrum improves the fit.
The cause of the $\chi^{2}$ of 305 for 275 degrees of freedom when fitting the
spectrum of the primary maximum is more difficult to explain. In the above, we
have assumed that the hard X-ray emission comes from a single temperature
plasma, but the reality is likely more complex. The plasma should have a range of
temperatures due to the ballistic stream coupling to the magnetic field across
a range of angles, rather than at a single point. This is what likely leads to
the high $\chi^{2}_{\rm R}$ value when modelling the primary maximum. As such,
we have also fit the primary maximum data with the mekal replaced by cemekl
(Singh et al., 1996), which allows for a multi-temperature plasma. The best
fitting parameters and their errors (as estimated using the same methods as
above) are given in Table 3. The $\chi^{2}$ of 282 for 274 d.o.f is a
significant improvement on the single temperature model, but there is a very
strong anti-correlation between the index of the power-law emissivity function
versus the maximum plasma temperature (as seen in Appendix A), making it
difficult to conclude anything physical from these models.
Table 2: Model parameters from fitting each of the spectra in Figure 2 with an absorbed black body and plasma model. Errors are given at the $1\sigma$ level, and have been calculated as described in the text. Parameters marked with $a$ were frozen when fitting. The 0.3-10 keV X-ray luminosity of the Mekal component has been calculated assuming a source distance of 760$\pm$30 pc.
Data Considered | Half data | Primary Max | Secondary Max | Failed Max # 1 | Failed Max # 2 | Absorption dip
---|---|---|---|---|---|---
$n_{\rm H}$ ($\times 10^{22}\>{\rm cm}^{-2}$) | <$0.005$ | <$0.008$ | <$0.01$ | <$0.06$ | <$0.15$ | 0.008a
$n_{\rm H,pcfabs}$ ($\times 10^{22}\>{\rm cm}^{-2}$) | - | - | - | - | - | $2.1\pm 0.4$
CvrFract | - | - | - | - | - | $0.68\pm 0.01$
$kT_{\rm BB}$ (keV) | $0.078^{+0.01}_{-0.009}$ | $0.09\pm 0.02$ | $0.06^{+0.01}_{-0.009}$ | <$0.25$ | <$0.27$ | 0.09a
$norm_{\rm BB}$ ($(\times 10^{-6})$) | $3.1^{+0.7}_{-0.5}$ | $2.8^{+0.7}_{-0.4}$ | $5^{+2}_{-1}$ | <$8$ | <$8$ | 2.8a
$kT_{\rm mekal}$ (keV) | $15\pm 1$ | $13\pm 1$ | $14\pm 2$ | $7^{+2}_{-1}$ | $20^{+20}_{-10}$ | 13a
$norm_{\rm mekal}$ $(\times 10^{-3})$ | $1.42\pm 0.02$ | $1.69\pm 0.03$ | $1.22\pm 0.03$ | $0.74\pm 0.04$ | $0.31\pm 0.05$ | 1.69a
$C_{\rm MOS1}$ | $0.95\pm 0.02$ | $0.94\pm 0.02$ | $0.96\pm 0.03$ | $0.96\pm 0.07$ | $1.0\pm 0.1$ | $0.92\pm 0.05$
$C_{\rm MOS2}$ | $0.96\pm 0.02$ | $0.97\pm 0.02$ | $0.94\pm 0.03$ | $0.90\pm 0.07$ | $1.0\pm 0.1$ | $0.89\pm 0.05$
$L_{\rm MEKAL,0.3-10keV}$ (erg/s) | | $(2.5\pm 0.2)\times 10^{32}$ | $(1.8\pm 0.1)\times 10^{32}$ | $(1.1\pm 0.1)\times 10^{32}$ | $(0.42\pm 0.05)\times 10^{32}$ |
$\chi^{2}$ (d.o.f) | 340 (294) | 305 (275) | 245 (230) | 52 (65) | 26.25 (19) | 185 (98)
Table 3: Results from applying the multi-temperature plasma model to the primary maximum data.
Data Considered | Primary Max
---|---
$n_{\rm H}$ ($\times 10^{22}\>{\rm cm}^{-2}$) | <$0.005$
$kT_{\rm BB}$ (keV) | $0.071^{+0.008}_{-0.009}$
$norm_{\rm BB}$ ($(\times 10^{-6})$) | $5.0^{+2.0}_{-1.0}$
$\alpha$ | $1.1^{+0.2}_{-0.1}$
$kT_{max}$ (keV) | $41^{+13}_{-9}$
$norm_{\rm mekal}$ $(\times 10^{-3})$ | $4.2\pm 0.5$
$C_{\rm MOS1}$ | $0.94\pm 0.02$
$C_{\rm MOS2}$ | $0.97\pm 0.02$
$\chi^{2}$ (d.o.f) | 282 (274)
### 3.3 X-ray absorption dip
The spikes in the Hardness ratio occur at the same orbital phase during which
absorption lines appear in the optical spectrum of V496 UMa. The X-ray spectra
extracted during this orbital phase show that the spikes in the hard-soft ratio
are due to a significant decrease in the soft X-ray flux (as opposed to being
an increase in the hard X-ray flux).
Modelling of this spectrum reveals that the decrease in the soft X-ray flux is
due to a sudden increase in the absorption column between us and V496 UMa. The
most likely cause of the sudden increase in absorption is the accretion stream
passing through our line of sight, temporarily blocking the view of the
accreting primary pole and absorbing a majority of the produced soft X-rays.
Since modelling of the other extracted spectra allows us to put strong upper
limits on the interstellar absorption column of $n_{\rm H}<0.01\times 10^{22}$
cm$^{-2}$ in the direction of V496 UMa, we can attribute the entirety of the
measured value of $n_{\rm H}=(2.1\pm 0.4)\times 10^{22}$ cm$^{-2}$ to absorption
by the accretion column. In terms of the system geometry, this absorption dip
suggests the accretion stream is leading the companion star, as this soft
X-ray absorption dip occurs before inferior conjunction of the companion.
If the signal-to-noise of the individual absorption dip spectra were high
enough, the measurement of the particle density of the column could be used to
directly measure variations in the mass-accretion rate over the timescale of a
single orbit. Unfortunately, these data do not have sufficient S/N to do
this, but it may be possible with future, more sensitive X-ray missions.
### 3.4 Shock temperatures and magnetic field geometry
The improvement of the multi-temperature plasma model over the single
temperature model when modelling the primary maximum is in line with the
connecting region between the ballistic stream and the primary pole’s magnetic
field spanning a range of azimuth and radii. We are not able to derive strong
constraints on the maximum plasma temperature, likely due to the low count
rate at the highest energies of our spectrum. Further X-ray data taken at high
energies (for example, with NuSTAR) will be able to better constrain the
highest shock temperature.
The very good agreement between the model and data for the secondary maximum
suggests one of two things. Either the material which is feeding this pole
comes from a very narrow connecting region, leading to a shock which is very
close to uniform in temperature or, more likely, the spectrum does not have
sufficient signal to differentiate between a single and multi-temperature
plasma. Again, observations at a higher X-ray energy will help differentiate
the two scenarios.
With the detection of two distinct maxima in the X-ray light curve, it is very
likely that the white dwarf primary in V496 UMa is accreting onto both of its
magnetic poles, as proposed by Littlefield et al. (2018). If the magnetic
field in V496 UMa were perfectly dipolar, one would naively assume that these
maxima should be 180 degrees apart, or in other words, separated by 0.5 in
orbital phase. This is very close to the observed phase separation of the two
X-ray maxima in V496 UMa when both maxima are present and stable, suggesting
the structure of the WD’s magnetic field might be reasonably approximated as
dipolar. An additional test for this can be done by measuring the magnetic
field of both poles. This is typically done by measuring the cyclotron
harmonics in the optical spectrum of both accretion regions (as was done for
e.g. V808 Aur; Worpel & Schwope 2015). However, as highlighted by Littlefield
et al. (2018), measurement of the magnetic field in V496 UMa is complicated by
significant smearing of the harmonics, hampering attempts to measure the
magnetic field of both poles.
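For reference, the cyclotron harmonic wavelengths scale as $\lambda_n\approx 10710\,$Å$/(n\,B/10^{8}\,{\rm G})$, so for an assumed field one can ask which harmonics fall in the optical. The 30 MG value below is purely illustrative, since, as noted above, V496 UMa's field is not well measured:

```python
def cyclotron_harmonic_angstrom(n, B_gauss):
    """Wavelength of the n-th cyclotron harmonic for field B.
    Fundamental: lambda_1 = 2*pi*m_e*c^2/(e*B) ~ 10710 A at 10^8 G."""
    return 10710.0 / (n * (B_gauss / 1e8))

# For an assumed 30 MG field, the harmonics landing in the optical
# (~4000-9000 A) are n = 4..8:
optical = [n for n in range(1, 15)
           if 4000 < cyclotron_harmonic_angstrom(n, 3e7) < 9000]
```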
### 3.5 Failed X-ray maximum
The detection of 2 maxima per orbital phase during the first 3 orbits of XMM-
Newton data confirms the suggestion put forward by Littlefield et al. (2018)
that accretion onto the WD in V496 UMa typically occurs via two distinct
magnetic poles. Modelling of the observed X-ray spectra during these maxima
reveals that both accretion columns have approximately the same shock
temperatures, and that both polar caps of the WD are heated to the same
degree.
The two failed secondary maxima in the optical light curve during the latter
half of the XMM-Newton observations coincide with a significant decrease in
the amplitude of the secondary maxima in the X-ray light curve. Assuming the
blackbody component of our models is coming from the WD surface, modelling of
these two failed maxima shows that the temperature of the WD surface was
unchanged (to within 1$\sigma$) when compared with the derived temperature
when the secondary maximum was present. On the other hand, the shock
temperature exhibits a rapid decrease between the times when the secondary
maximum is present ($kT=17^{+4}_{-2}$ keV) and when it is not present
($kT=7^{+2}_{-1}$ keV during the first failed maximum, and completely
unconstrained during the second failed maximum).
This suggests that accretion onto the second, less preferential magnetic pole
decreases significantly but does not cease entirely. The primary maxima before
these failed secondary maxima are not significantly brighter than the primary
maxima which occur when the secondary maximum is fully present. This rules out
the case that more material gets channeled onto the primary magnetic pole
during the failed secondary maxima as, if this were the case, we would expect
the primary maxima to increase in strength.
Rather, the data suggest an overall decrease in the mass transfer rate in the
system, which leads to less material making it to the secondary magnetic pole
while the amount of material reaching the primary pole remains the same.
The cause of this decrease in the mass transfer rate is unclear, but may be
related to activity on the surface of the secondary star. Such a model has
been invoked to explain the transient two-pole accretion seen in QS Tel
(Schwope et al., 1995; Rosen et al., 1996) and MT Dra (Schwarz et al., 2002).
## 4 Optical Photometry
### 4.1 Light curves
The TESS light curve (which can be seen in Figure 3) is generally consistent
with previous optical observations of the system (Littlefield et al., 2015;
Littlefield et al., 2018), except that the secondary maximum does not stand
out as prominently. It has a lower amplitude, and often blends with the
primary maximum. This is probably attributable to differences in the cyclotron
continua of the two poles; time-resolved spectroscopy of a binary orbit
(Littlefield et al., 2018) shows that the continuum for the primary pole shows
more variability at the longer wavelengths which TESS is sensitive to. The
variability of the second pole’s cyclotron continuum increases at shorter
wavelengths, thereby explaining why the secondary pole is more pronounced in
optical observations than in the near-infrared TESS bandpass.
Figure 3: The full TESS light curve, phased to the orbital period. The width
of the sliding window is one-eighth of a day. The three horizontal white bands
indicate gaps due to data downlink. The central gap coincides with the
transition from Sector 15 to Sector 16, and because of the changed spacecraft
pointing, there is a brightness discontinuity at that gap.
We obtained one ground-based light curve of V496 UMa with the SLKT during each
sector of TESS observations, with the aim of ascertaining whether the
variability observed in the TESS bandpass is consistent with the variability
observed in previous optical studies. The overall shape of the light curve is
consistent across the two bandpasses as can be seen in Figure 4, and the
primary photometric maximum in the TESS light curve is the same as the primary
photometric maximum in the optical light curve. However, the relative
amplitude of the variation is reduced in the TESS bandpass, a likely
consequence of blending with nearby sources. Additionally, the rapid
flickering in the SLKT light curve is not always apparent in the TESS data,
possibly because the time resolution of the SLKT was superior by a factor of
$\sim$4.
Figure 4: Comparison of simultaneous light curves of V496 UMa obtained with
TESS and the SLKT. The SLKT data were obtained without a filter and use a
Johnson $V$ zeropoint, and the TESS data were converted to an instrumental
magnitude, with an arbitrary offset added. The TESS light curve shows a
decreased amplitude of variability, attributable to a combination of blending
and a bandpass difference.
### 4.2 Optical Ephemeris
A common method of measuring the orbital period in a polar is to measure the
recurrence interval of a well-defined feature in the light curve. At first
glance, the primary photometric maximum of V496 UMa is ideal for this purpose.
We fit third-order polynomials to each of the primary photometric maxima,
visually inspected the resulting fits to ensure their adequacy, and used each
polynomial to calculate the time of maximum flux for each peak. We used a
Monte Carlo procedure to estimate the uncertainty of each timing and
calculated a best-fit linear ephemeris of
$T_{max}[BJD]=2458722.01138(4)+0.0632329(2)E$ (1)
for the TESS data. This period is significantly different from the period of
0.063235199(40) d reported in Littlefield et al. (2018). Inspection of the
residuals from the ephemeris (Fig. 5) suggest a possible explanation for this
discrepancy. The residuals from Eq. 1 show a systematic curvature consistent
with a gradual, aperiodic phase shift of the primary photometric maximum. On
relatively short timescales ($\lesssim 2$ weeks), a linear ephemeris can
compensate for this phase drift with a change in the apparent orbital period.
For example, the residuals in Fig. 5 are clustered into four groups (each
corresponding to one spacecraft orbit), and the best-fit periods for each of
the four groups differed from the Littlefield et al. (2018) period by up to
$\sim\pm 1$ s.
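The O$-$C residuals described above amount to comparing each observed time of maximum against the number of elapsed cycles of a linear ephemeris; a minimal sketch (the timing below is invented for illustration, not one of our measurements):

```python
def o_minus_c(t_max, t0, period):
    """O-C residual (same units as the inputs) of an observed time of
    maximum relative to the linear ephemeris T0 + E*period, with the
    cycle count E taken as the nearest integer."""
    cycles = round((t_max - t0) / period)
    return t_max - (t0 + cycles * period)

# A maximum arriving 1 s (~1.16e-5 d) late after 100 cycles of the
# Littlefield et al. (2018) period:
P = 0.063235199
t_obs = 100 * P + 1.0 / 86400.0
resid_sec = o_minus_c(t_obs, 0.0, P) * 86400.0   # ~1.0 s
```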
It is obviously unphysical for the binary orbit to change by such a large
amount in such a short time, but it is possible for the position of the
cyclotron-emitting region to drift across the face of the WD. Such behavior is
expected in a polar, as the location of the accretion region is not fixed to
the binary frame and depends on which field lines are channeling the infalling
matter. Variations in the mass-transfer rate could therefore cause a phase
shift of the accretion region, in which case one would also expect such a
change to produce observable luminosity variations. However, the O$-$C does
not show any significant correlation with the system’s brightness.
The detection of this oscillation is reminiscent of the aperiodic drift in the
optical maxima of the intermediate polar FO Aqr, identified by Kennedy et al.
(2017) using Kepler K2 data. Kennedy et al. (2017) demonstrated how this
effect can frustrate attempts to precisely measure the orbital period, but it
has not been previously reported in a synchronous polar. The drift observed in
V496 UMa is sufficiently small and gradual that poorly sampled ground-based
observations might struggle to distinguish between this effect and an
inaccurate measurement of the orbital period. It is unclear whether this phase
drift is a persistent feature of V496 UMa or whether it occurs in polars
generally, but as TESS continues to observe polars, it will be possible to
search for this effect in other systems.
## 5 Discussion
The failed secondary maxima in the TESS observations can be broadly classified
into two categories: those that correlate with the system’s luminosity, and
those that do not. Littlefield et al. (2018) noted that at optical
wavelengths, the primary photometric maximum appeared to be unaffected by
nearby failed secondary maxima, and a number of the failed maxima in the TESS
light curve share this property. However, the TESS light curve shows multi-
day-long depressions near BTJD = 1730 and BTJD = 1740, and during these
episodes of reduced mass transfer, the failed secondary maxima are much more
frequent, occurring in a majority of the orbital cycles.
Although the failed maxima in Littlefield et al. (2018) created the impression
that there is a relatively clear dichotomy between normal and failed maxima,
the extensive TESS data demonstrate that this is not so in the near-infrared
TESS bandpass. On the contrary, the secondary maxima observed by TESS show
such a wide range of behaviours that it can be difficult to categorize some of
the maxima. Contamination from a nearby background star of similar brightness
further complicates matters, since it means that if V496 UMa were to become
undetectably faint, there would still be a weak signal at its position.
Figure 5: Top: The brightness of each accretion region during Sectors 15 and
16 of TESS. For both the primary and secondary maxima, we calculated the
average flux within $\pm$0.1 phase units of the expected phase of maximum
light. The secondary maxima show particularly erratic variability. Bottom:
O$-$C of the primary photometric maxima with respect to the orbital period
from Littlefield et al. (2018) and a reference time $T_{0}$ from the TESS
dataset. The O$-$C values display a slow, apparently aperiodic drift that does
not correlate with the brightness of either the primary or secondary maximum.
V496 UMa joins a growing list of polars which display dips due to obscuration
of the soft X-ray producing region by the ballistic stream. The XMM-Newton
observations strongly suggest that it is a two-pole accretor with highly
intermittent accretion onto its secondary magnetic pole, and that this
intermittent behaviour is driven by changes in the mass transfer rate
from the donor star. The TESS light curve does not show the missing secondary
maxima as clearly as previous optical observations, and also suggests that the
ephemeris derived from short intervals of observations may be unreliable. We
speculate that this is because the cyclotron spectrum of the secondary
accretion region from Littlefield et al. (2018) is quite blue, so its relative
contribution in the TESS bandpass is relatively low.
### 5.1 Comparison with other systems
Two-pole accretion within polars is an informative phenomenon, as it allows us
both to study multiple accretion regions on the white dwarf’s surface and to
probe a larger volume of the WD’s magnetosphere than in the single-pole case.
is particularly powerful in systems with transient two pole accretion, where
the location within the magnetic field which the accretion stream is probing
varies. Two-pole accretion is not uncommon, and there are numerous instances
in the literature of polars that have switched between one- and two-pole
accretion, the prototype polar AM Her (Heise et al., 1985), MT Dra (Schwarz et
al., 2002), and QS Tel (Rosen et al., 1996) being excellent examples. It is
not always clear why the number of active poles changes, though Rosen et al.
(1996) and Schwarz et al. (2002) considered two hypotheses for QS Tel and MT
Dra, respectively: a change in the mass-transfer rate and asynchronous
rotation. In the former, the accretion stream’s ram pressure depends on the
mass-transfer rate, causing the stream to travel deeper into the magnetosphere
at higher mass-transfer rates. In the latter, the accretion stream would latch
onto different magnetic field lines due to the differential rotation of the
magnetosphere; these variations would occur at the beat frequency between the
binary orbital frequency and the WD’s spin frequency, which ranges from a few
days to $\sim$2 months in the known asynchronous polars.
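The beat period mentioned above follows from the difference of the two frequencies; a sketch with invented, illustrative periods:

```python
def beat_period(p_spin, p_orb):
    """Beat period between the WD spin and the binary orbit:
    1/P_beat = |1/P_spin - 1/P_orb|."""
    return 1.0 / abs(1.0 / p_spin - 1.0 / p_orb)

# An asynchronous polar whose spin period is 1% shorter than a 0.0632-d
# orbit (illustrative values) beats over ~6.3 days:
pb = beat_period(0.0632 * 0.99, 0.0632)
```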
The data presented here confirm that the transient two-pole accretion in V496
UMa is not due to asynchronous rotation of the WD. If this were the case, we
would have expected correlated changes in the primary and secondary maxima of
the X-ray light curve, and a long-term periodicity in the TESS light curve (as
observed in CD Ind; Hakala et al., 2019; Littlefield et al., 2019; Mason et
al., 2020). The absence of these effects suggests the driving force behind the
variability of the secondary pole may be more akin to what is occurring in AM
Her and other synchronous polars.
Although the variable mass-transfer-rate explanation is more promising, it
has its own shortcomings. If we use V496 UMa’s time-averaged optical
brightness as a proxy for its accretion rate, then there is no consistent
relation between its overall accretion rate and the failed secondary maxima;
in Figs. 3 and 5, the failed maxima are common during a dip near BTJD=1740,
but they also occur sporadically when V496 UMa is brightest. The lack of such
a correlation is reminiscent of the behaviour of AM Her, whose accretion
geometry does not always correlate strongly with the mass-transfer rate
(Schwope et al., 2020). Similarly, Beuermann et al. (2020) found that HY Eri
remains in a two-pole-accretion state even when the mass-transfer rate varies
by three orders of magnitude. However, BL Hyi provides a countervailing
example, as it undergoes two-pole accretion at enhanced accretion rates but
one-pole accretion in its low states (Beuermann & Schwope, 1989).
A related scenario considered by Rosen et al. (1996) to explain why QS Tel
changed between one- and two-pole accretion was that the accretion stream can
be fragmented into discrete blobs of varying densities. In such a case, the
lifetime of any individual blob depends on its density, with the densest blobs
surviving longer and traveling deeper into the WD’s magnetosphere. Based on
this picture, Rosen et al. (1996) proposed that low-density material becomes
magnetically entrained shortly after it leaves the donor star, producing a
hard X-ray-emitting region, while higher-density blobs travel to a secondary
accretion region with a softer spectrum. In this scenario, a temporary
reduction in the number of dense blobs could interrupt accretion onto the
second pole. It is worth noting that AM Her and MT Dra do conform to the Rosen
et al. (1996) scenario; when their secondary poles are active, they have
softer spectra than their primary poles (Schwope et al., 2020; Schwarz et al.,
2002).
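The density-dependent penetration depth in this picture can be caricatured by balancing a blob's free-fall ram pressure against the dipole magnetic pressure; every number below (field strength, WD mass and radius, blob densities) is an assumed, illustrative value, and the calculation is a toy, not a model of V496 UMa:

```python
import math

G = 6.674e-8          # cgs gravitational constant
MSUN = 1.989e33       # solar mass in g

def coupling_radius(rho_blob, B_surf=3e7, R_wd=7e8, M_wd=0.8 * MSUN):
    """Toy estimate of where a blob of density rho_blob (g/cm^3) couples
    to the field: the radius at which the dipole magnetic pressure
    B^2/8pi, with B = B_surf*(R_wd/r)^3, first exceeds the free-fall
    ram pressure rho*v^2, v^2 = 2GM/r. Solved by geometric bisection."""
    def excess(r):  # magnetic pressure minus ram pressure
        B = B_surf * (R_wd / r) ** 3
        return B * B / (8.0 * math.pi) - rho_blob * 2.0 * G * M_wd / r
    lo, hi = R_wd, 1e11   # magnetic pressure wins at lo, ram pressure at hi
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if excess(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Denser blobs penetrate deeper into the magnetosphere before coupling:
r_light = coupling_radius(1e-10)   # tenuous blob couples far out
r_dense = coupling_radius(1e-8)    # dense blob travels deeper
```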
However, if this mechanism were at play in V496 UMa, we would have expected to
observe a pronounced difference in the X-ray hardness of the two poles. At
most, V496 UMa may display a slight enhancement of soft X-ray emission coming
from the secondary pole when it is active and accreting; the blackbody
component may be slightly higher for the secondary maximum than for the
primary maximum, as given in Table 2. However, the enhancement is in no way
definitive, and higher signal-to-noise spectra are required to confirm it.
Indeed,
the differences between the secondary and primary spectra are not nearly as
extreme as in the case of AM Her, suggesting V496 UMa is not undergoing blobby
accretion.
While V496 UMa does not offer an obvious answer as to why the secondary
maximum occasionally disappears, this system stands out because of the time
scale over which this takes place. For some of the best studied polars which
undergo mode switching, the accretion geometry appears to be relatively stable
during individual observing epochs. In contrast, the geometry in V496 UMa
changes over the course of a single orbit, as shown here, and few synchronous
polars have been observed to show such rapid optical changes in the accretion
rate onto a secondary pole. One such example is DP Leo (Beuermann et al.,
2014), which showed an intermittent secondary photometric maximum in optical
photometry. However, there is insufficient data about this phenomenon in DP
Leo to draw any robust comparisons with V496 UMa.
Whichever mechanism is altering the accretion geometry in V496 UMa (and
presumably DP Leo) varies over a short timescale. The most obvious culprits are
stellar spots on the secondary star moving across the L1 point. Verification
that this is driving a change in the mass transfer rate from the companion
star would require optical spectroscopic and photometric observations in which
the companion star dominates. This is a difficult task when accretion
structures and cyclotron harmonics are present in the system, and Littlefield
et al. (2018) were unable to detect the secondary star spectroscopically when
V496 UMa was in a high state. As such, V496 UMa should be monitored
frequently for the onset of a low state, at which point studies of the
secondary, and indeed measurement of the WD’s magnetic field through Zeeman
splitting of the absorption lines created in the WD photosphere, would become
possible.
## Acknowledgements
We thank the TESS mission-operations personnel, particularly George Ricker and
Roland Vanderspek, for scheduling DDT observations of V496 UMa after a last-
minute pointing change made it unexpectedly observable during Sectors 15 and
16. We thank the Krizmanich Family for their generous donation to the
University of Notre Dame that funded the Sarah L. Krizmanich Telescope.
M.R.K. acknowledges support from the ERC under the European Union’s Horizon
2020 research and innovation programme (grant agreement No. 715051; Spiders),
the Royal Society in the form of a Newton International Fellowship (NIF No.
NF171019), and the Irish Research Council in the form of a Government of
Ireland Postdoctoral Fellowship (GOIPD/2021/670: Invisible Monsters).
This work made use of Astropy (Astropy Collaboration et al., 2013, 2018),
Corner (Foreman-Mackey, 2016), and PyXspec.
## Data availability
The raw X-ray data are available through the XMM-Newton Science Archive, while
the TESS data are available through the Barbara A. Mikulski Archive for Space
Telescopes. The exact X-ray spectra and light curves used in this paper can be
found at the following permanent repository:
https://zenodo.org/record/5746735.
## References
* Arnaud (1996) Arnaud K. A., 1996, in Jacoby G. H., Barnes J., eds, Astronomical Society of the Pacific Conference Series Vol. 101, Astronomical Data Analysis Software and Systems V. p. 17
* Astropy Collaboration et al. (2013) Astropy Collaboration et al., 2013, A&A, 558, A33
* Astropy Collaboration et al. (2018) Astropy Collaboration et al., 2018, AJ, 156, 123
* Bailer-Jones et al. (2021) Bailer-Jones C. A. L., Rybizki J., Fouesneau M., Demleitner M., Andrae R., 2021, AJ, 161, 147
* Beuermann & Schwope (1989) Beuermann K., Schwope A. D., 1989, A&A, 223, 179
* Beuermann et al. (2014) Beuermann K., Dreizler S., Hessman F. V., Schwope A. D., 2014, A&A, 562, A63
* Beuermann et al. (2020) Beuermann K., Burwitz V., Reinsch K., Schwope A., Thomas H. C., 2020, A&A, 634, A91
* Foreman-Mackey (2016) Foreman-Mackey D., 2016, The Journal of Open Source Software, 1, 24
* Frank et al. (1988) Frank J., King A. R., Lasota J. P., 1988, A&A, 193, 113
* Gaia Collaboration et al. (2016) Gaia Collaboration et al., 2016, A&A, 595, A1
* Gaia Collaboration et al. (2021) Gaia Collaboration et al., 2021, A&A, 649, A1
* Goodman & Weare (2010) Goodman J., Weare J., 2010, Communications in Applied Mathematics and Computational Science, 5, 65
* Hakala et al. (2019) Hakala P., Ramsay G., Potter S. B., Beardmore A., Buckley D. A. H., Wynn G., 2019, MNRAS, 486, 2549
* Hameury & King (1988) Hameury J. M., King A. R., 1988, MNRAS, 235, 433
* Heise et al. (1985) Heise J., Brinkman A. C., Gronenschild E., Watson M., King A. R., Stella L., Kieboom K., 1985, A&A, 148, L14
* Kennedy et al. (2017) Kennedy M. R., Garnavich P. M., Littlefield C., Callanan P., Mukai K., Aadland E., Kotze M. M., Kotze E. J., 2017, MNRAS, 469, 956
* King & Lasota (1979) King A. R., Lasota J. P., 1979, MNRAS, 188, 653
* Kuijpers & Pringle (1982) Kuijpers J., Pringle J. E., 1982, A&A, 114, L4
* Lamb & Masters (1979) Lamb D. Q., Masters A. R., 1979, ApJ, 234, L117
* Liebert & Stockman (1979) Liebert J., Stockman H. S., 1979, ApJ, 229, 652
* Liedahl et al. (1995) Liedahl D. A., Osterheld A. L., Goldstein W. H., 1995, ApJ, 438, L115
* Lightkurve Collaboration et al. (2018) Lightkurve Collaboration et al., 2018, Lightkurve: Kepler and TESS time series analysis in Python, Astrophysics Source Code Library (ascl:1812.013)
* Littlefield et al. (2015) Littlefield C., Garnavich P., Magno K., Murison M., Deal S., McClelland C., Rose B., 2015, Information Bulletin on Variable Stars, 6129, 1
* Littlefield et al. (2018) Littlefield C., Garnavich P., Hoyt T. J., Kennedy M., 2018, AJ, 155, 18
* Littlefield et al. (2019) Littlefield C., Garnavich P., Mukai K., Mason P. A., Szkody P., Kennedy M., Myers G., Schwarz R., 2019, ApJ, 881, 141
* Livio & Pringle (1994) Livio M., Pringle J. E., 1994, ApJ, 427, 956
* Mason et al. (2001) Mason K. O., et al., 2001, A&A, 365, L36
* Mason et al. (2020) Mason P. A., et al., 2020, Advances in Space Research, 66, 1123
* Mewe et al. (1985) Mewe R., Gronenschild E. H. B. M., van den Oord G. H. J., 1985, A&AS, 62, 197
* Mewe et al. (1986) Mewe R., Lemen J. R., van den Oord G. H. J., 1986, A&AS, 65, 511
* Reimers et al. (1999) Reimers D., Hagen H. J., Hopp U., 1999, A&A, 343, 157
* Ricker et al. (2015) Ricker G. R., et al., 2015, Journal of Astronomical Telescopes, Instruments, and Systems, 1, 014003
* Rosen et al. (1996) Rosen S. R., et al., 1996, MNRAS, 280, 1121
* Schmidt et al. (2005) Schmidt G. D., et al., 2005, ApJ, 620, 422
* Schwarz et al. (2001) Schwarz R., Schwope A. D., Staude A., 2001, A&A, 374, 189
* Schwarz et al. (2002) Schwarz R., Greiner J., Tovmassian G. H., Zharikov S. V., Wenzel W., 2002, A&A, 392, 505
* Schwope et al. (1995) Schwope A. D., Thomas H. C., Beuermann K., Burwitz V., Jordan S., Haefner R., 1995, A&A, 293, 764
* Schwope et al. (2000) Schwope A. D., Catalán M. S., Beuermann K., Metzner A., Smith R. C., Steeghs D., 2000, MNRAS, 313, 533
* Schwope et al. (2020) Schwope A. D., Worpel H., Traulsen I., Sablowski D., 2020, A&A, 642, A134
* Singh et al. (1996) Singh K. P., White N. E., Drake S. A., 1996, ApJ, 456, 766
* Stockman et al. (1988) Stockman H. S., Schmidt G. D., Lamb D. Q., 1988, ApJ, 332, 282
* Strüder et al. (2001) Strüder L., et al., 2001, A&A, 365, L18
* Tapia (1977) Tapia S., 1977, ApJ, 212, L125
* Turner et al. (2001) Turner M. J. L., et al., 2001, A&A, 365, L27
* Watson et al. (1980) Watson M. G., Mayo S. K., King A. R., 1980, MNRAS, 192, 689
* Wickramasinghe et al. (1989) Wickramasinghe D. T., Ferrario L., Bailey J., 1989, ApJ, 342, L35
* Wilms et al. (2000) Wilms J., Allen A., McCray R., 2000, ApJ, 542, 914
* Worpel & Schwope (2015) Worpel H., Schwope A. D., 2015, A&A, 583, A130
* den Herder et al. (2001) den Herder J. W., et al., 2001, A&A, 365, L7
## Appendix A Corner Plots
Figure 6: Corner plot from the MCMC analysis of the primary maximum spectrum using the single-temperature plasma and thermal blackbody model.
Figure 7: Corner plot from the MCMC analysis of the primary maximum spectrum using the multi-temperature plasma and thermal blackbody model.
# SN2023fyq: A Type Ibn Supernova With Long-standing Precursor Activity Due to
Binary Interaction
Yize Dong (董一泽) Department of Physics and Astronomy, University of California,
1 Shields Avenue, Davis, CA 95616-5270, USA Daichi Tsuna TAPIR, Mailcode
350-17, California Institute of Technology, Pasadena, CA 91125, USA Research
Center for the Early Universe (RESCEU), School of Science, The University of
Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan Stefano Valenti
Department of Physics and Astronomy, University of California, 1 Shields
Avenue, Davis, CA 95616-5270, USA David J. Sand Steward Observatory,
University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721-0065, USA
Jennifer E. Andrews Gemini Observatory, 670 North A‘ohoku Place, Hilo, HI
96720-2700, USA K. Azalee Bostroem LSSTC Catalyst Fellow Steward
Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ
85721-0065, USA Griffin Hosseinzadeh Steward Observatory, University of
Arizona, 933 North Cherry Avenue, Tucson, AZ 85721-0065, USA Emily Hoang
Department of Physics and Astronomy, University of California, 1 Shields
Avenue, Davis, CA 95616-5270, USA Saurabh W. Jha Department of Physics and
Astronomy, Rutgers, the State University of New Jersey,
136 Frelinghuysen Road, Piscataway, NJ 08854-8019, USA Daryl Janzen Department of Physics & Engineering Physics, University of Saskatchewan, 116 Science Place, Saskatoon, SK S7N 5E2, Canada Jacob E. Jencson Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218, USA Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA Michael Lundquist W. M. Keck Observatory, 65-1120 Māmalahoa Highway, Kamuela, HI 96743-8431, USA Darshana Mehta Department of Physics and Astronomy, University of California, 1 Shields Avenue, Davis, CA 95616-5270, USA Aravind P. Ravi Department of Physics and Astronomy, University of California, 1 Shields Avenue, Davis, CA 95616-5270, USA Nicolas E. Meza Retamal Department of Physics and Astronomy, University of California, 1 Shields Avenue, Davis, CA 95616-5270, USA Jeniveve Pearson Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721-0065, USA Manisha Shrestha Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721-0065, USA Alceste Z. Bonanos IAASARS, National Observatory of Athens, Metaxa & Vas. Pavlou St., 15236, Penteli, Athens, Greece D. Andrew Howell Las Cumbres Observatory, 6740 Cortona Drive, Suite 102, Goleta, CA 93117-5575, USA Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA Nathan Smith Steward Observatory, University of Arizona, 933 North Cherry Avenue, Rm. 
N204, Tucson, AZ 85721-0065, USA Joseph Farah Las Cumbres Observatory, 6740 Cortona Drive, Suite 102, Goleta, CA 93117-5575, USA Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA Daichi Hiramatsu Center for Astrophysics | Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138-1516, USA The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, USA Koichi Itagaki (板垣公一) Itagaki Astronomical Observatory, Yamagata 990-2492, Japan Curtis McCully Las Cumbres Observatory, 6740 Cortona Drive, Suite 102, Goleta, CA 93117-5575, USA Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA Megan Newsome Las Cumbres Observatory, 6740 Cortona Drive, Suite 102, Goleta, CA 93117-5575, USA Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA Estefania Padilla Gonzalez Las Cumbres Observatory, 6740 Cortona Drive, Suite 102, Goleta, CA 93117-5575, USA Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA Emmanouela Paraskeva IAASARS, National Observatory of Athens, Metaxa & Vas. Pavlou St., 15236, Penteli, Athens, Greece Craig Pellegrino Department of Astronomy, University of Virginia, Charlottesville, VA 22904, USA Giacomo Terreran Las Cumbres Observatory, 6740 Cortona Drive, Suite 102, Goleta, CA 93117-5575, USA Department of Physics, University of California, Santa Barbara, CA 93106-9530, USA Joshua Haislip Department of Physics and Astronomy, University of North Carolina, 120 East Cameron Avenue, Chapel Hill, NC 27599, USA Vladimir Kouprianov Department of Physics and Astronomy, University of North Carolina, 120 East Cameron Avenue, Chapel Hill, NC 27599, USA Daniel E. Reichart Department of Physics and Astronomy, University of North Carolina, 120 East Cameron Avenue, Chapel Hill, NC 27599, USA
###### Abstract
We present photometric and spectroscopic observations of SN 2023fyq, a type
Ibn supernova in the nearby galaxy NGC 4388 (D$\simeq$18 Mpc). In addition, we
trace long-standing precursor emission at the position of SN 2023fyq using
data from DLT40, ATLAS, ZTF, ASAS-SN, Swift, and amateur astronomer Koichi
Itagaki. Precursor activity is observed up to nearly three years before the
supernova explosion, with a relatively rapid rise in the final 100 days. The
double-peaked post-explosion light curve reaches a luminosity of $\sim
10^{43}~{}\rm erg\,s^{-1}$. The strong intermediate-width He lines observed in
the nebular spectrum of SN 2023fyq imply the interaction is still active at
late phases. We find that the precursor activity in SN 2023fyq is best
explained by mass transfer in a binary system involving a low-mass He star
and a compact companion. An equatorial disk is likely formed in this process
($\sim$0.6$\rm M_{\odot}$), and the interaction of SN ejecta with this disk
powers the main peak of the supernova. The early SN light curve reveals the
presence of dense extended material ($\sim$0.3$\rm M_{\odot}$) at
$\sim$3000$\rm R_{\odot}$ ejected weeks before the SN explosion, likely due to
final-stage core silicon burning or runaway mass transfer resulting from
binary orbital shrinking, leading to rapid rising precursor emission within
$\sim$30 days prior to explosion. The final explosion could be triggered
either by the core-collapse of the He star or by the merger of the He star
with a compact object. SN 2023fyq, along with SN 2018gjx and SN 2015G, forms a
unique class of Type Ibn SNe which originate in binary systems and are likely
to exhibit detectable long-lasting pre-explosion outbursts with magnitudes
ranging from $-$10 to $-$13.
Core-collapse supernovae (304), Circumstellar matter (241), Stellar mass loss
(1613)
††facilities: ADS, DLT40 (Prompt5, Prompt-MO), ATLAS, LCOGT (SBIG, Sinistro,
FLOYDS), Gemini:North (GMOS), Keck:I (LRIS, DEIMOS), NED, SOAR (Goodman),
Swift (UVOT), LBT (MODS) ††software: Astropy (Astropy Collaboration et al.,
2013, 2018), emcee (Foreman-Mackey et al., 2013), HOTPANTS (Becker, 2015),
Matplotlib (Hunter, 2007), NumPy (Harris et al., 2020), PYRAF (Science
Software Branch at STScI, 2012), Pandas (Wes McKinney, 2010), SciPy (Virtanen
et al., 2020), SWarp (Bertin et al., 2002),
LCOGTSNpipe (Valenti et al., 2016), Light Curve Fitting (Hosseinzadeh & Gomez,
2020), LPipe (Perley, 2019)
## 1 Introduction
Type Ibn supernovae (SNe) are a subclass of interaction-powered SNe that show
narrow helium (He) lines but not hydrogen (H) lines in their spectra (e.g.,
Smith, 2017; Modjaz et al., 2019). Although it has been more than two decades
since the discovery of the first Type Ibn SN (SN 1999cp, Matheson et al.
2000), our understanding of Type Ibn progenitors remains limited. The light
curves of Type Ibn SNe tend to be short-lived and some of them even resemble
the evolution of fast-evolving transients (Ho et al., 2023; Fox & Smith,
2019). A general interpretation is that SNe Ibn are Wolf-Rayet/He stars that
experience enhanced mass loss right before the SN explosion. The interaction
of SN ejecta with the surrounding dense He-rich circumstellar material (CSM)
powers some of the SN light curve and ionizes the outer CSM, producing the
narrow lines we observe (Pastorello et al., 2007; Hosseinzadeh et al., 2017).
Light curve modeling of Type Ibn SNe has supported the presence of dense CSM
close to the progenitors (Gangopadhyay et al., 2020; Pellegrino et al., 2022;
Ben-Ami et al., 2023). Both SNe Ibn and their H-rich counterparts, SNe IIn,
have CSM interaction signatures that point to pre-SN mass loss that is much
stronger than normal massive-star winds (Smith, 2014, 2017). However, the
mechanisms driving the enhanced mass loss near the time of explosion remain a
subject of active debate. This enhanced mass loss could be attributed to the
final-stage stellar activities of massive stars, where the dense CSM could be
produced by eruptive outbursts through pulsational pair instability (Yoshida
et al., 2016; Woosley, 2017) or wave-driven outbursts excited by late-stage
nuclear burning (Quataert & Shiode, 2012; Shiode & Quataert, 2014; Fuller,
2017; Fuller & Ro, 2018; Morozova et al., 2020). Alternatively, the dense CSM
might be generated through binary interactions (Smith, 2014; Smith & Arnett,
2014; Metzger, 2022; Wu & Fuller, 2022; Dessart et al., 2022; Tsuna et al.,
2024). In this scenario the progenitor does not necessarily have to be a very
massive star, as the mass loss would be significantly enhanced by the presence
of a binary companion.
One way to constrain the progenitor of Type Ibn SNe is by searching for
evidence of a massive star or a binary companion in deep images once the SN
fades. The absence of evidence for massive star progenitors and the possible
detection of binary companions have been reported for some Type Ibn SNe (Maund
et al., 2016; Shivvers et al., 2017; Hosseinzadeh et al., 2019).
Alternatively, a direct way to constrain the mass loss history of SN
progenitors is by searching for signs of pre-explosion activity or precursor
emission prior to the SN explosion. Precursor emission is commonly observed in
Type IIn SNe (e.g., Mauerhan et al., 2013; Smith et al., 2010; Ofek et al.,
2013; Tartaglia et al., 2016; Pastorello et al., 2013, 2018; Hiramatsu et al.,
2024). The bright precursor outbursts in Type IIn SNe may be due to eruptive
mass loss from LBV-like progenitors (e.g., Smith, 2017) or pulsational pair
instability outbursts (Smith & McCray, 2007; Woosley et al., 2007; Smith,
2014). Alternatively, these outbursts could be caused by red supergiants with
a compact object companion (Fryer & Woosley, 1998; Schrøder et al., 2020;
Smith et al., 2024; Tsuna et al., 2024), or other late-stage binary
interaction (Smith & Arnett, 2014). To date, precursor emission has been
identified in two Type Ibn SNe, SN 2006jc (Pastorello et al., 2007) and SN
2019uo (Strotjohann et al., 2021). The precursor outbursts in these events are
shorter and fainter compared to those observed in Type IIn SNe, and have been
interpreted as resulting from single massive star activities or binary
interactions (Pastorello et al., 2007; Foley et al., 2007; Smith et al., 2008;
Tsuna et al., 2024).
In this paper we present the optical observations of SN 2023fyq, one of the
closest SNe Ibn. The light curves and spectra of this object closely resemble
those of Type Ibn SNe. Notably, relatively steady precursor activity is
observed up to approximately three years prior to the SN explosion. The
detection of precursor emission in SN 2023fyq allows us to investigate the
final-stage stellar activity and the nature of its progenitor system. The pre-
explosion observations of SN 2023fyq are also presented in Brennan et al.
(2024), where they identify an asymmetric CSM structure, likely related to
unstable stellar activities of the progenitor.
The paper is organized as follows: the photometric and spectroscopic
observations are described in Section 2. We constrain the reddening and
distance of SN 2023fyq in Section 3. We describe the photometric and
spectroscopic evolution of SN 2023fyq in Sections 4 and 5. The progenitor
scenario and the physical mechanism of precursor activities are discussed in
Section 6. We summarize the main results in Section 7.
Figure 1: Composite $gri$ image of SN 2023fyq in NGC 4388 obtained with the
Las Cumbres Observatory on 2023 August 11. The position of SN 2023fyq is
indicated by white tick markers. Figure 2: Photometric limits and detections
of SN 2023fyq prior to and after explosion. Detections with S/N$>$4 are
indicated by large solid symbols, while detections with 3$<$S/N$\leq$4 are
indicated by hollow symbols. The smaller symbols are nondetection limits with
S/N$\leq$3. The precursor activities detected in Type Ibn SN 2006jc ($R$
band) and SN 2019uo ($r$ band) are indicated in the red and green rectangles,
respectively. The limits on the precursor activities on Type Ibn SN 2015G are
shown with the purple dashed line. All of the bands are in the AB magnitude
system.
## 2 Observations
SN 2023fyq was discovered on 2023 April 17 by the Zwicky Transient Facility
(ZTF) survey at RA(2000) $=$ 12h25m45$\fs$847, Dec(2000) $=+12\arcdeg
39\arcmin 48\farcs 87$ in NGC 4388 (De, 2023) (see Figure 1). On 2023 June 14
a rapid rebrightening of SN 2023fyq was observed and reported by amateur
astronomer Koichi Itagaki. On 2023 June 25 SN 2023fyq was classified as a
peculiar Type Ib due to the presence of helium lines and the lack of hydrogen
lines in the optical spectrum (Valerin et al., 2023).
In this section we present the photometric data of SN 2023fyq taken by Las
Cumbres Observatory (Brown et al., 2013) via the Global Supernova Project, the
Distance Less Than 40 Mpc (DLT40, Tartaglia et al., 2018) survey, ZTF (Bellm
et al., 2019; Graham et al., 2019), the Asteroid Terrestrial-Impact Last Alert
System (ATLAS, Tonry 2011; Tonry et al. 2018; Smith et al. 2020), the All-Sky
Automated Survey for Supernovae (ASAS-SN, Shappee et al. 2014; Kochanek et al.
2017), the Neil Gehrels Swift Observatory (Gehrels et al., 2004), and amateur
astronomer Itagaki. We also report the spectroscopic followup of SN 2023fyq
taken after the SN explosion. All spectroscopic observations from this paper
can be found at https://github.com/yizedong/SN2023fyq_data and will be
available on WISeREP (Yaron & Gal-Yam, 2012; http://www.weizmann.ac.il).
Figure 3: The light curve evolution of SN 2023fyq. The $Clear$ filter is
calibrated to the $r$ band. The hollow symbol indicates the data with
3$<$S/N$\leq$4, while the solid symbol indicates the data with S/N$>$4. Light
curves in the bottom panel have been shifted by the indicated amounts to
enhance clarity. All of the bands are in the AB magnitude system. The black
dashed line marks the epoch of the first light of the SN ($-$11 d), as adopted
in the paper.
### 2.1 Photometric Observations
For the photometry we adopt a signal-to-noise threshold of 3 for source
detections and a signal-to-noise threshold of 5 for computing the upper limit,
following the suggestions of Masci (2011). The light curves are shown in
Figures 2 and 3.
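As a concrete illustration, the S/N cuts above can be applied to forced-photometry output with a few lines of code (a minimal sketch; the function name, example values, and zero-point are placeholders, not part of any survey pipeline):

```python
import numpy as np

def classify_photometry(flux, flux_err, zp=25.0):
    """Apply the S/N cuts used in the text: S/N > 4 is a solid detection,
    3 < S/N <= 4 is a marginal detection, and everything else is reported
    as a 5-sigma upper limit (zp is a placeholder zero-point)."""
    flux = np.asarray(flux, dtype=float)
    err = np.asarray(flux_err, dtype=float)
    snr = flux / err
    labels = np.where(snr > 4, "detection",
             np.where(snr > 3, "marginal", "upper limit"))
    limits = zp - 2.5 * np.log10(5.0 * err)  # 5-sigma limiting magnitude
    return labels, limits

labels, limits = classify_photometry([50.0, 7.0, 1.0], [2.0, 2.0, 2.0])
```

With equal uncertainties, only the flux sets the class here; the limiting magnitude depends on the uncertainty alone.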
#### 2.1.1 Las Cumbres Observatory Observations
Our multiband photometric followup campaign with Las Cumbres Observatory was
initiated on 2023 July 26. The images were reduced using the PyRAF-based
photometric reduction pipeline lcogtsnpipe (Valenti et al., 2016). Apparent
magnitudes were calibrated using the APASS ($g,r,i$) and Landolt ($U,B,V$)
catalogs.
#### 2.1.2 DLT40 Observations
The DLT40 survey is a targeted one-day cadence SN search for very young
transients within 40 Mpc (Tartaglia et al., 2018; Yang et al., 2019).
DLT40 has been monitoring the field of SN 2023fyq since 2014 in the $Clear$
filter. All of the images were visually inspected to remove those of poor
quality. A deep template was made with the images taken between 2014 June 20
and 2015 February 01 using SWarp (Bertin et al., 2002). The rest of the images
were stacked in windows of 15 days and were then subtracted against the
template using HOTPANTS (Becker, 2015). We used aperture photometry at the
position of SN 2023fyq through a pipeline based on Photutils (Bradley et al.,
2022). The photometry was calibrated to the $r$ band.
#### 2.1.3 ZTF Observations
ZTF is a time-domain survey using a wide-field camera mounted on the Palomar
48-inch Schmidt telescope (Bellm et al., 2019; Graham et al., 2019). The ZTF
public survey searches for transients and variables in the northern sky with a
three-day cadence in $g$ and $r$ filters.
The position of SN 2023fyq has been monitored by ZTF since 2018. We obtained
the reference image subtracted forced photometry from the ZTF Forced
Photometry Service (Masci et al., 2023). We removed bad-quality data following
the instructions in Masci et al. (2023). For images taken after $-$300d, the
transient was bright enough to be detected in single images, and so the
observations were stacked in 1-day time bins. For images taken prior to
$-$300d, the observations were stacked in 15-day time bins to improve the
signal to noise ratio (S/N).
#### 2.1.4 ATLAS Observations
The ATLAS survey is an all-sky daily cadence survey (Smith et al., 2020)
carried out in two filters, cyan ($c$) and orange ($o$), roughly equivalent to
Pan-STARRS filters $g+r$ and $r+i$, respectively.
The position of SN 2023fyq has been monitored by ATLAS since 2015. Forced
photometry at the supernova position was obtained from the ATLAS forced
photometry server (Shingles et al., 2021). Using the method presented in Young
(2022), we stacked the measurements to improve the signal-to-noise ratio and
obtain deeper upper limits. For images taken after $-$300d, the observations
were stacked in 1-day time bins. For images taken before $-$300d, the
observations were stacked in 15-day time bins.
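The time-bin stacking described for ZTF and ATLAS can be sketched as an inverse-variance-weighted average of the flux measurements within fixed windows (the weighting scheme is an assumption for illustration; the survey pipelines may differ in detail):

```python
import numpy as np

def stack_in_bins(jd, flux, flux_err, bin_days):
    """Stack flux measurements in fixed time bins using an
    inverse-variance-weighted mean; for N equal-quality points the
    stacked uncertainty shrinks as 1/sqrt(N), deepening the limits."""
    jd, flux, err = map(np.asarray, (jd, flux, flux_err))
    bins = np.floor((jd - jd.min()) / bin_days).astype(int)
    stacked = []
    for b in np.unique(bins):
        m = bins == b
        w = 1.0 / err[m] ** 2
        stacked.append((np.average(jd[m], weights=w),        # weighted epoch
                        np.sum(w * flux[m]) / np.sum(w),     # weighted flux
                        1.0 / np.sqrt(np.sum(w))))           # stacked error
    return np.array(stacked)

# two points fall in the first 15-day bin, one in the second
binned = stack_in_bins([0.0, 0.5, 20.0], [10.0, 14.0, 5.0], [2.0, 2.0, 1.0], 15.0)
```

This is why 15-day bins were used before $-$300 d: combining many shallow exposures recovers the faint, slowly varying precursor.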
#### 2.1.5 ASAS-SN Observations
ASAS-SN is an untargeted all-sky survey to a depth of $g\sim$18.5 mag
(Shappee et al., 2014; Kochanek et al., 2017). We obtained the ASAS-SN
reference image subtracted forced photometry from the ASAS-SN sky
portal (https://asas-sn.osu.edu/).
#### 2.1.6 Swift Observations
The position of SN 2023fyq has been observed by the UVOT instrument on the
Neil Gehrels Swift Observatory (Gehrels et al., 2004) since 2015. We performed
aperture photometry at the position of SN 2023fyq on Swift UVOT images using
the High Energy Astrophysics Software (HEASoft). Background variations in
individual images were removed using an aperture placed on a blank section of
the sky. To remove the underlying galaxy background contamination, we
subtracted the flux extracted from Swift UVOT images taken on 2016 November
08. Zero-points were chosen from Breeveld et al. (2011) with time-dependent
sensitivity corrections updated in 2020.
#### 2.1.7 Koichi Itagaki’s Observations
We also incorporated observations taken with Koichi Itagaki’s Bitran BN-83M
CCD imager mounted on a 0.5m telescope in Okayama Prefecture, Japan. We solved the
astrometry of the images using Astrometry.net (Lang et al., 2010). The
aperture photometry was performed using a pipeline based on Photutils (Bradley
et al., 2022) and was calibrated to $r$-band magnitudes in the Sloan system
(Fukugita et al., 1996).
### 2.2 Spectroscopic Observations
We collected four optical spectra from the FLOYDS spectrograph (Brown et al.,
2013) on the 2m Faulkes Telescope South in Australia at the Las Cumbres
Observatory via the Global Supernova Project. The FLOYDS spectra were reduced
following standard procedures using the FLOYDS pipeline (Valenti et al.,
2014). We triggered Gemini-North Target of Opportunity (ToO) observations with
the Gemini Multi-Object Spectrograph (GMOS; Hook et al., 2004) and the B600
grating on 2023 July 27 and 2023 August 01 through proposal GN-2023A-Q-136.
The Gemini spectra were reduced by using the IRAF Gemini package. We triggered
further ToO observations with the Andalucia Faint Object Spectrograph and
Camera (ALFOSC) on the Nordic Optical Telescope (NOT) at the Spanish “Roque de
los Muchachos” Observatory (ORM) on 2023 August 04 through proposal 67-112.
The NOT ALFOSC spectrum was observed using Grism #4 and a 1.$\arcsec$0 slit
and was reduced using the PypeIt pipeline (Prochaska et al., 2020). We
obtained spectra on 2023 December 12 and 2024 May 1 from the Low-Resolution
Imaging Spectrometer (LRIS; Oke et al., 1995) on the Keck I telescope. The
LRIS spectra were reduced in a standard way using the LPipe pipeline (Perley,
2019). A low-resolution spectrum was taken on 2024 January 23 with the Goodman
High Throughput Spectrograph (GHTS) on the Southern Astrophysical Research
Telescope (SOAR; Clemens et al., 2004), and was reduced with the Goodman
pipeline (Torres et al., 2017). One spectrum was obtained with the Multi-
Object Double Spectrographs (MODS, Pogge et al., 2010) on the twin 8.4 m Large
Binocular Telescope (LBT) at Mount Graham International Observatory. The
spectrum was reduced using standard techniques, including bias subtraction and
flat-fielding using the MODSCCDred package (Pogge, 2019) and further reduced
with IRAF including cosmic ray rejection, local sky subtraction, and
extraction of one-dimensional spectra. A log of the spectroscopic observations
is presented in Table A1. We also present an unpublished nebular spectrum of
Type Ibn SN 2019kbj taken at 80 d after the peak. The spectrum was taken on
2019 September 23 with the DEep Imaging Multi-Object Spectrograph (DEIMOS,
Faber et al., 2003) on the Keck II telescope (Table A2). The DEIMOS spectrum
was reduced using the PypeIt pipeline (Prochaska et al., 2020). A
detailed analysis of SN 2019kbj has been presented in Ben-Ami et al. (2023).
Figure 4: $r/R$ Light curve comparison between SN 2023fyq, a sample of Type
Ibn SNe, and well-studied normal SESNe. The Vega magnitudes have been
converted to the AB magnitude system. The evolution of SN 2023fyq is similar
to those of Type Ibn SNe. The SNe used in this plot include Type IIb SN 1993J
(Filippenko et al., 1993), Type Ib SN 2008D (Modjaz et al., 2009), Type Ic SN
2007gr (Hunter et al., 2009), and Type Ibn SNe: SN 2015U (Tsvetkov et al.,
2015; Pastorello et al., 2015a; Hosseinzadeh et al., 2017), iPTF15ul
(Hosseinzadeh et al., 2017), iPTF14aki (Hosseinzadeh et al., 2017), iPTF15akq
(Hosseinzadeh et al., 2017), SN 2019deh (Pellegrino et al., 2022), SN 2021jpk
(Pellegrino et al., 2022), SN 2005la (Pastorello et al., 2008), SN 2020nxt
(Wang et al., 2024), SN 2018gjx (Prentice et al., 2020), ASASSN-15ed
(Pastorello et al., 2015b), SN 2010al (Pastorello et al., 2015c), SN 2015G
(Shivvers et al., 2017; Hosseinzadeh et al., 2017), SN 2006jc (Pastorello et
al., 2007), SN 2019uo (Gangopadhyay et al., 2020), and SN 2019kbj (Ben-Ami et
al., 2023). SN 2018gjx, ASASSN-15ed, SN 2010al, SN 2015G, SN 2006jc, SN
2019uo, and SN 2019kbj will be used for further comparison in the paper, while
a broader sample of SNe Ibn are shown in tan. SESNe are shown in grey.
Figure 5: The pre- and post-explosion bolometric light curve (upper two
panels) and the blackbody temperature and radius evolution (bottom panel) of
SN 2023fyq at the precursor phases and the early SN phases. The uncertainties
are indicated by the shaded area.
## 3 Observational Properties
### 3.1 Reddening
The empirical correlation between the equivalent width (EW) of the Na I D line
and the amount of gas and dust along the line of sight has often been used in
extinction estimations (Munari & Zwitter, 1997). In order to measure the line-
of-sight reddening towards SN 2023fyq, we analyzed the medium-resolution
spectrum (R$\sim$1800) taken with Gemini North on 2023 August 1. The measured
EW of the host galaxy Na I D $\lambda$5890 ($\rm D_{2}$) and Na I D
$\lambda$5896 ($\rm D_{1}$) are $0.27\pm 0.04$ Å and $0.15\pm 0.04$ Å,
respectively. The measured EW of the Galactic Na I $\rm D_{2}$ and Na I $\rm
D_{1}$ are $0.23\pm 0.02$ Å and $0.16\pm 0.01$ Å, respectively. Using Eq. 9 of
Poznanski et al. (2012) and applying the renormalization factor of 0.86 from
Schlafly et al. (2010), we found a host extinction of $E(B-V)_{\rm
host}=0.037\pm 0.01$ mag. The Milky Way extinction is measured to be
$E(B-V)_{\rm MW}=0.035\pm 0.01$ mag, which is consistent with the Milky Way
extinction of $E(B-V)_{\rm MW}$ = 0.0286 mag from the extinction map by
Schlafly & Finkbeiner (2011). We adopt the latter for the Milky Way
extinction. Throughout the paper, we will adopt a total extinction of $E(B-V)$
= $0.066\pm 0.01$ mag.
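For reference, the extinction values above follow directly from the combined D1+D2 relation of Poznanski et al. (2012), $\log_{10}E(B-V)=1.17\,\mathrm{EW}(\mathrm{D1{+}D2})-1.85$, rescaled by the 0.86 factor of Schlafly et al. (2010). A quick numerical check (illustrative only, not part of any published pipeline):

```python
def ebv_from_naid(ew_d1, ew_d2, renorm=0.86):
    """E(B-V) in mag from the Na I D1 and D2 equivalent widths (Angstrom),
    using the D1+D2 relation of Poznanski et al. (2012), Eq. 9:
    log10 E(B-V) = 1.17 * EW(D1+D2) - 1.85, times the 0.86
    renormalization factor from Schlafly et al. (2010)."""
    ew = ew_d1 + ew_d2
    return renorm * 10 ** (1.17 * ew - 1.85)

# Host: EW(D1) = 0.15 A, EW(D2) = 0.27 A  ->  ~0.037 mag
# MW:   EW(D1) = 0.16 A, EW(D2) = 0.23 A  ->  ~0.035 mag
ebv_host = ebv_from_naid(0.15, 0.27)
ebv_mw = ebv_from_naid(0.16, 0.23)
```

Both values reproduce the quoted extinctions to within the rounding of the measured equivalent widths.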
We note that Brennan et al. (2024) found a larger host extinction value
($E(B-V)_{\rm host}=0.4\pm 0.1$ mag) using the Balmer ratio measured from the
host emission lines. The disagreement probably arises because the Balmer-ratio
method measures the full column of gas toward the underlying H II region,
including material behind the SN; dust between the SN and that H II region is
likely responsible for the greater implied extinction value.
### 3.2 Distance
The distance of NGC 4388 listed on the NASA/IPAC Extragalactic Database (NED)
ranges from 13.6 Mpc to 25.7 Mpc ($\mu$ = 30.67 – 32.05 mag). We adopt the
most recent Tully-Fisher distance (based on photometry at 3.6$\mu$m with
Spitzer Space Telescope), 18.0$\pm$3.7 Mpc ($\mu$ = 31.28$\pm$0.45 mag; Tully
et al. 2016).
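The modulus-to-distance conversion used here is simply $D=10^{(\mu-25)/5}$ Mpc; a one-line check (illustrative only):

```python
def mpc_from_mu(mu):
    """Distance in Mpc from the distance modulus: D = 10**((mu - 25) / 5)."""
    return 10 ** ((mu - 25.0) / 5.0)

# mu = 31.28 mag -> ~18.0 Mpc; NED's range 30.67-32.05 mag -> ~13.6-25.7 Mpc
d = mpc_from_mu(31.28)
```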
## 4 Photometric Evolution
In Figure 2 we present the photometric evolution of SN 2023fyq dating back to
2015, illustrating our search for precursor activities. In Figure 3 we take a
closer look at the evolution from one year before the SN explosion. All phases
mentioned in the paper are with respect to the maximum light in the $r$ band,
which is measured to be at JD = 2460154 after fitting the light curve with a
spline function. At $\sim-11$ d, a sudden rise of $\sim$1.5 mag within
$\sim$17 hrs is clearly observed (see lower panel of Figure 3). As we will
discuss below, we attribute this rapid rise to the SN first light.
Consequently, we divide the photometric evolution of SN 2023fyq into two
phases: the precursor phase ($<-11$ d) and the SN phase ($>-11$ d).
### 4.1 Precursor Detections
The precursor is detected from $\sim$$-$1000 d to $\sim$$-$11 d. There are
also single detections at around $-$2300 d and $-$1300 d. These detections
have 3$<$S/N$\leq$4, and are bracketed by nondetections of similar depth.
Therefore, they are likely not true detections of precursor emission. As
illustrated in Figure 2, the precursor activities remain relatively stable at
$-10$ to $-12$ mag between $\sim-$1300 d and $\sim-$100 d. Then, starting from
$-100$ d, the object slowly brightens to $\sim$$-15$ mag. Between
$\sim$$-2500$ and $\sim$$-100$ d, the UV observations from Swift only give
nondetection limits (see Figure 2). As the precursor gets brighter, at
$\sim$$-28$ d, a source is detected in the $UVW1$ filter at $\sim$$-$13 mag,
with similar magnitudes observed in $g$ and $o$ bands. From $-300$ to $-11$ d,
the precursor light curves seem to exhibit multiple bumps, indicative of pre-
explosion activities, such as small eruptions, from the progenitor star. As
shown in Figure 2, the precursor emission detected in SN 2023fyq appears
fainter and longer-lived than that observed in Type Ibn SN 2006jc (Pastorello
et al., 2007) and SN 2019uo (Strotjohann et al., 2021), even when accounting
for uncertainties in the distance measurement of SN 2023fyq. Pre-explosion
activities were not detected for Type Ibn SN 2015G down to $-$13.3 $\pm$ 0.5
mag (Shivvers et al., 2017). It should be noted that the precursor searches
for SN 2006jc and SN 2019uo only go down to around $-$13 mag. Therefore,
fainter precursor activities like those observed in SN 2023fyq cannot be
excluded for these events.
### 4.2 SN Light Curve
The bluer-band ($UVW2$, $UVM2$, $UVW1$) light curves of SN 2023fyq exhibit a
notable bump from $-11$ d to $-4$ d, before reaching the second peak and then
falling off rapidly. This initial bump in the blue bands is likely
attributable to the cooling following shock breakout. For the rest of the
bands, the SN light curves show a fast rise and also a fast decline. The peak
$r$-band magnitude is measured to be $M_{r}=-18.5$ mag. In Figure 4, we
compare the $r$-band light curve of SN 2023fyq with the $r/R$-band light
curves of a sample of Type Ibn SNe and well-studied normal stripped-envelope
SNe (SESNe). At early times SN 2023fyq appears more luminous than the typical
SESNe, and the evolution of SN 2023fyq is overall similar to those of Type Ibn
SNe. At late times SN 2023fyq declines similarly to SN 2018gjx and SN 2015G,
but slower than SN 2006jc. The steep decline of SN 2006jc in the optical is
likely due to dust formation in the SN ejecta or in the surrounding CSM (e.g.,
Smith et al., 2008). The slower decline of SN 2023fyq, SN 2018gjx, and SN
2015G at late times could be an indication of less efficient dust formation
than in SN 2006jc. However, due to the lack of late-phase observations of Type
Ibn SNe, it is not clear if SN 2006jc is really an outlier. SN 2023fyq
declines faster than normal SESNe at nebular phases. This may be due to
inefficient trapping of $\gamma$-rays (if the light curve tail is powered by
$\rm{}^{56}Ni$ decay), to a power source other than $\rm{}^{56}Ni$ decay, or
to dust formation in SN 2023fyq.
Figure 6: Left: The optical spectroscopic evolution of SN 2023fyq. The phase
is measured from the $r$-band maximum. Right: The evolution of the He I
$\lambda$5876 line. The pre-maximum spectra marked in grey are from Brennan et
al. (2024). The He I $\lambda$5876 line shows a high-velocity component
(marked with the blue band) and a low-velocity component (marked with the red
band), which may come from the SN ejecta and He-rich CSM, respectively. The
grey bands mark the emission lines from the galaxy.

Figure 7: Optical spectral comparison of SN 2023fyq at $\sim$0 d to other Type
Ibn SNe and normal SESNe.

Figure 8: Upper: Optical spectral comparison of SN 2023fyq at $\sim$7 d to
other Type Ibn SNe and normal SESNe. Bottom: The optical spectrum taken at
$\sim$7 d compared to the mean spectra (the solid lines) and the standard
deviations (the shaded regions) of SNe Ib and Ic at $\sim$10 d from Liu et al.
(2016). SN 2023fyq has several features in common with these normal SESNe,
suggesting SN 2023fyq is likely from an explosion of a stripped star.

Figure 9: Nebular spectral comparison of SN 2023fyq to other Type Ibn SNe with
nebular spectra and normal SESNe. The phases are relative to the time of
maximum light. A continuum spectrum of the background galaxy is subtracted
from the spectrum of SN 2023fyq. At nebular phases, SNe Ibn appear to fall
into two distinct classes: one exhibiting only narrow He lines (SN 2019kbj and
SN 2006jc), and another displaying intermediate-width He lines and oxygen
lines (SN 2023fyq, SN 2015G, and SN 2018gjx).
### 4.3 Bolometric Light Curve
We constructed the bolometric light curve of SN 2023fyq using data from ZTF,
ATLAS, ASAS-SN, Swift, and Itagaki. To build the spectral energy distribution
(SED) in the regions without complete multiband coverage, we reconstruct the
multiband light curves using a neural network based light curve fitting method
presented in Demianenko et al. (2023). This method is able to capture
correlations across different observations over time and among various
passbands, and compute an approximate light curve within the specified time
and wavelength ranges. The final bolometric light curve is calculated by
fitting the SED with a blackbody function using a Markov Chain Monte Carlo
(MCMC) routine in the Light Curve Fitting package (Hosseinzadeh & Gomez,
2020). The blackbody temperatures measured from the pre-explosion spectra of
SN 2023fyq in Brennan et al. (2024) are used as priors for the SED fitting. We
present the bolometric light curve of SN 2023fyq, and the corresponding
blackbody temperature ($T_{BB}$) and radius ($R_{BB}$), in the precursor phase
and the SN phase, in Figure 5. We note that we only focus on the long-term
evolution of the bolometric light curve, and small variations in the light
curves are not reflected in the final bolometric light curve.
Before $\sim$$-100$ d, the precursor of SN 2023fyq is in a relatively stable
state with a luminosity of $\sim 1\times 10^{40}$ erg s$^{-1}$. During that
time, $T_{BB}$ and $R_{BB}$ are around 10,000 K and 600 $\rm R_{\odot}$,
respectively. After $-$100 d, SN 2023fyq shows a faster rise and, at
$\sim$$-$11 d, the luminosity suddenly increases by over an order of magnitude
(i.e., from $\sim 4\times 10^{41}$ erg s$^{-1}$ to $\sim 7\times 10^{42}$ erg s$^{-1}$).
Later, after a brief decline, the SN reaches its main peak and declines
afterwards. The decline of luminosity shortly after $\sim$$-11$ d is likely
due to the shock cooling after the shock breakout. For $T_{BB}$, after jumping
to $\sim$22,000 K at $\sim$$-$11 d, it rapidly declines until entering a brief
plateau phase between $\sim$$-5$ and $0$ d with $T_{BB}\simeq$10,000 K. The
initial rapid decrease of $T_{BB}$ is likely associated with the shock cooling
process, while the plateau phase is likely due to the recombination of He I
and will be further discussed in Section 6.1. After around $-40$ d, $R_{BB}$
shows a gradual expansion with a velocity of $\sim$700 $\rm km\,s^{-1}$. After
$-11$ d, $R_{BB}$ continuously increases, reflecting an increase of the
photospheric radius with the expansion of the SN ejecta. The expansion rate of
$R_{BB}$ is $\sim$14,000 $\rm km\,s^{-1}$ initially, which slows down to
$\sim$7000 $\rm km\,s^{-1}$ after around $-2$ d. After around 5 d, as will be
discussed in the next section, the spectra of SN 2023fyq are dominated by
absorption lines from the SN ejecta, so $R_{BB}$ may not accurately reflect
the position of the photosphere.
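As a sanity check, the quoted precursor plateau values are mutually consistent through the Stefan-Boltzmann relation $L=4\pi R_{BB}^{2}\sigma T_{BB}^{4}$; a quick sketch using the rounded values above:

```python
import math

sigma_sb = 5.670e-5   # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
R_sun = 6.957e10      # cm

# Precursor plateau values quoted above (rounded)
T_bb = 1.0e4          # K
R_bb = 600 * R_sun    # cm

L = 4 * math.pi * R_bb**2 * sigma_sb * T_bb**4
print(f"{L:.1e}")  # ~1.2e40 erg/s, consistent with the ~1e40 erg/s plateau
```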
## 5 Spectroscopic Evolution
The spectroscopic evolution of SN 2023fyq is presented in Figure 6. At $-1.3$ d,
the spectrum shows a blue continuum with a prominent He I $\lambda$5876 line.
Other He lines, such as He I $\lambda$5015, He I $\lambda$6678, He I
$\lambda$7065, and He I $\lambda$7281, are also observed. The He I
$\lambda$5876 line shows a rather asymmetric profile (right panel of Figure
6). In the blue wing, the He I $\lambda$5876 line shows a two-component
profile, with a narrow absorption feature at $\sim$$-$1000 $\rm km\,s^{-1}$
and a broad absorption feature at $\sim$$-$7000 $\rm km\,s^{-1}$. The
detection of a two-component He I line profile in SN 2023fyq is consistent
with those observed in other Type Ibn SNe (Pastorello et al., 2016), and is
likely from different emitting regions. The broad component is from the fast
moving ejecta, while the narrow component is likely from the surrounding
unshocked He-rich CSM. In the red wing, there is an additional emission
component at around 1500 $\rm km\,s^{-1}$. This component is also observed
during the pre-explosion phase of SN 2023fyq (Brennan et al., 2024), and could
be due to an asymmetric CSM structure formed before the SN explosion. A few
days later the object quickly becomes redder, and the Ca II H&K
$\lambda\lambda 3934,3969$ and Ca II $\lambda\lambda$8498, 8542, 8662 lines
appear more prominent. No broad hydrogen features are observed in the spectra
of SN 2023fyq. However, we cannot exclude the presence of narrow hydrogen
lines since the spectra are heavily contaminated by the host-galaxy emission.
At $\sim$137 d, the spectrum is dominated by strong [O I]
$\lambda\lambda$6300, 6364 and [Ca II] $\lambda\lambda$7291, 7323. He lines,
such as He I $\lambda$5876 and He I $\lambda$7065, are also strong at this
phase. Other lines, including Mg I] $\lambda$4571 and Ca II
$\lambda\lambda$8498, 8542, 8662, can be seen in the spectrum. After that, the
spectra we have are mainly dominated by the host, while weak [O I]
$\lambda\lambda$6300, 6364 lines are still present.
We compare the spectra of SN 2023fyq around 0 d and 7 d with other SNe Ibn and
normal SESNe at similar phases in Figure 7 and Figure 8. At around 0 d, other
SNe Ibn show blue continua plus narrow He I $\lambda$5876 lines in their
spectra. The velocities of those narrow He I $\lambda$5876 lines are
consistent with that of the narrow component of the He I $\lambda$5876 line in
SN 2023fyq. At around 0 d, normal SESNe are redder than SN 2023fyq and other
SNe Ibn. This is probably due to the ongoing CSM interaction in the SNe Ibn,
which is not significant in SESNe. SESNe start to show lines from iron-group
elements at this phase, whereas these features are not strong in SN 2023fyq or
other SNe Ibn at a similar phase. The He lines in Type Ib/IIb SNe are also
much broader than those shown in SN 2023fyq.
At around 7 d, SN 2023fyq is very similar to SNe Ibn SN 2018gjx, ASASSN-15ed,
SN 2010al, and SN 2015G, which start to show signatures from deeper layers of
the ejecta. The He I $\lambda$5876 lines of SN 2018gjx, ASASSN-15ed, SN
2010al, and SN 2015G grow broader, with velocities similar to that of the
broad component of He I $\lambda$5876 in SN 2023fyq. Interestingly, some
similarities between SN 2023fyq and normal SESNe are also observed at around 7
d. To better illustrate this, we flatten the spectrum of SN 2023fyq at $\sim$7
d using SNID following the procedure outlined in Blondin & Tonry (2007) and
compare the flattened spectrum with Type Ib and Ic templates at 10 d from Liu
et al. (2016) in the bottom panel of Figure 8. This comparison clearly
indicates that SN 2023fyq exhibits spectral features similar to those of Type
Ic SNe, suggesting that its progenitor is likely a stripped/He star.
When the object enters the nebular phase, the ejecta become optically thin,
providing a unique opportunity to study the core of the progenitor star.
However, it is challenging to follow up SNe Ibn at nebular phases since they
rapidly get fainter. In Figure 9, we compare the nebular spectrum of SN
2023fyq at $\sim$136.5 d with a few SNe Ibn with late-time observations and
normal SESNe at similar phases. The underlying continuum of the background
galaxy, obtained from a pre-explosion spectrum taken at $-504$ d as presented
in Brennan et al. (2024), when the signal from the host is dominant, is
subtracted from the spectrum presented here. SN 2023fyq shows strong intermediate-width
He emission lines, similar to Type Ibn SN 2018gjx and SN 2015G, but the [O I]
$\lambda\lambda$6300, 6364 line in SN 2023fyq is significantly stronger than
those in other objects. Type Ibn SN 2006jc and SN 2019kbj only show narrow He
lines and have no signatures of oxygen. SNe Ibn at nebular phases seem to fall
into two distinct classes, with one still showing only narrow lines and
another showing intermediate-width He lines and oxygen lines. This topic will
be further discussed in Section 6.4. Compared to normal SESNe SN 1993J, SN
2008D, and SN 2007gr, SN 2023fyq shows prominent He emission lines, but
otherwise SN 2023fyq is similar to those normal SESNe at the nebular phase.
Overall, the spectroscopic evolution of SN 2023fyq is similar to those of some
SNe Ibn. However, the difference between SESNe and SN 2023fyq shortly after
the light curve maximum is less evident. A transition between Type Ibn and
Type Ic is clearly observed. Similar behaviors have been reported in several
previous studies of other Type Ibn SNe (e.g., Pastorello et al., 2015b;
Prentice et al., 2020). If SN 2023fyq is indeed dominated by CSM interaction
at peak light, the transition to Type Ic could be due to the CSM-interaction
region becoming transparent over time, allowing us to see more signatures from
the SN ejecta. It is also possible that the SN ejecta have moved beyond the
dense CSM. This suggests that SN 2023fyq likely resulted from the explosion of
a stripped/He star within He-rich CSM. The He lines observed at the nebular phase indicate
that the interaction with the He-rich CSM is still ongoing. It is natural to
link the pre-existing He-rich CSM with the pre-explosion activities of the
progenitor system, which likely also produces the precursor emission observed
in SN 2023fyq. This topic will be further discussed in Section 6.3.
## 6 Discussion
The detection of sustained precursor emission in SN 2023fyq provides an
invaluable opportunity to study the progenitor system of Type Ibn SNe. Below
is a summary of the primary observed characteristics of SN 2023fyq:
1. A long-standing and continuously rising precursor emission starting from
years before the SN explosion;
2. A post-explosion light curve similar in its evolution to those of Type Ibn
SNe, with a double-peaked bolometric light curve;
3. Early- and late-phase spectra that both show narrow/intermediate-width He
lines, and nebular spectra with prominent [O I] $\lambda\lambda$6300, 6364
emission, suggesting that SN 2023fyq is likely the explosion of a stripped/He
star within He-rich CSM.
Any progenitor scenario for SN 2023fyq needs to explain the above behaviors.
In this section we will discuss the progenitor system and possible powering
mechanisms of the precursor and the SN light curve.
### 6.1 What Powers The First Peak of The SN Bolometric Light Curve?
The light curve of SN 2023fyq reaches its initial peak at around $-$11 d. The
later decrease of luminosity is associated with a prompt decline of $T_{BB}$
and a rapid expansion of $R_{BB}$. This process is likely the shock cooling
phase after the shock breakout. During this phase, the expansion of the ejecta
is nearly adiabatic, converting the thermal energy into kinetic energy. The
rapid decline of the photospheric temperature can produce a decrease in
brightness in bluer bands and an increase in brightness in redder bands as the
temperature moves through the optical bands, which is consistent with what we
see in SN 2023fyq (Figure 3). It is noteworthy that, around the shock
breakout, $R_{BB}$ is about 2000--3000 $\rm R_{\odot}$
($\sim$1--2$\times 10^{14}$ cm), so the shock breakout likely originates from
an extended envelope/CSM wind instead of from the stellar surface. A similar
conclusion is also drawn by Brennan et al. (2024) based on the pre-explosion
spectroscopic and photometric observations of SN 2023fyq.
When $T_{BB}$ drops down to $\simeq$10,000 K, it enters a brief plateau phase
(Figure 5). Interestingly this temperature is consistent with the
recombination temperature of He I ($\sim$10,000 K) (Kleiser & Kasen, 2014). In
the meantime, the expansion of $R_{BB}$ slows down. Given that the early SN
spectra are dominated by He lines, the outer envelope is likely He-rich. We
argue that this $T_{BB}$ plateau phase is due to the recombination of He I,
and the decrease of $R_{BB}$ expansion rate is due to the recession of the
photosphere into the extended envelope. After this process, the outer envelope
becomes almost transparent due to the drop of electron scattering opacity.
This is consistent with the fact that we start to see more signals, such as Ca
lines, from the deeper SN ejecta after 0 d.
In conclusion, the first peak of the SN bolometric light curve of SN 2023fyq
is likely due to shock breakout in an extended envelope/CSM wind located at
$\sim$2000--3000 $\rm R_{\odot}$.
### 6.2 What Powers The Second Peak of The SN Bolometric Light Curve?
At 0 d, SN 2023fyq reaches its second peak. It should be noted that all bands
(from UV to optical) peak at this phase, so this second peak is not an effect
of temperature evolution but instead requires an additional powering mechanism.
#### 6.2.1 Radioactive Decay (RAD)?
We first consider the possibility that the SN light curve around the second
peak is powered by the $\rm{}^{56}Ni$ decay. The early light curve evolution
of SNe is regulated by the photon diffusion time, which depends on the SN
ejecta mass, the ejecta velocity, and the opacity (Arnett, 1982). Assuming
that the rise time of the light curve is equal to the photon diffusion time
and Arnett’s law holds for this object, i.e., the peak luminosity is close to
the instantaneous decay power at the peak, we can estimate the $\rm{}^{56}Ni$
mass ($M_{Ni}$) and the ejecta mass ($M_{ej}$). We fix the optical opacity
$\kappa_{opt}$ to be 0.1 $\rm cm^{2}\,g^{-1}$. Given a peak luminosity of
$9.5\times 10^{42}$ erg s$^{-1}$, we get $M_{Ni}\simeq 0.28$ $\rm M_{\odot}$
and $M_{ej}\simeq 0.54\,\rm M_{\odot}\,(v_{ph}/7000\,km\,s^{-1})(t/10\,d)^{2}$.
Therefore, to power the light curve with only $\rm{}^{56}Ni$ decay, around
half of the ejecta is composed of $\rm{}^{56}Ni$. This ratio is much higher
than those in typical CCSNe (e.g., Lyman et al., 2016) and similar to those
found in Type Ia SNe (e.g., Könyves-Tóth et al., 2020; Graham et al., 2022).
If the ejecta is $\rm{}^{56}Ni$-rich, when the ejecta become optically thin,
the optical spectra would be dominated by forbidden lines from Fe and Co.
However, as we discussed in Section 5, the nebular spectrum of SN 2023fyq is
mainly dominated by He, O and Ca. Therefore, we disfavor the $\rm{}^{56}Ni$
decay as the dominant power source of the early light curve of SN 2023fyq.
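The Arnett estimates above can be reproduced numerically. In this sketch, the $\rm{}^{56}Ni$/$\rm{}^{56}Co$ energy-generation rates and the $\beta\approx 13.8$ diffusion constant are standard literature values rather than numbers quoted in this paper:

```python
import math

# Constants (cgs) and standard 56Ni/56Co decay parameters
DAY = 86400.0
M_SUN = 1.989e33
c = 2.998e10
eps_ni, eps_co = 3.90e10, 6.78e9     # decay energy rates, erg s^-1 g^-1
tau_ni, tau_co = 8.8 * DAY, 111.3 * DAY

t_peak = 10 * DAY                    # rise time ~ photon diffusion time
L_peak = 9.5e42                      # erg s^-1
kappa = 0.1                          # optical opacity, cm^2 g^-1
v_ph = 7000e5                        # photospheric velocity, cm s^-1

# Arnett's law: peak luminosity equals the instantaneous decay power at peak
q = eps_ni * math.exp(-t_peak / tau_ni) + eps_co * (
    math.exp(-t_peak / tau_co) - math.exp(-t_peak / tau_ni))
M_ni = L_peak / q / M_SUN

# Diffusion time: t_d^2 = 2*kappa*M_ej / (beta*c*v), with beta ~ 13.8
beta = 13.8
M_ej = beta * c * v_ph * t_peak**2 / (2 * kappa) / M_SUN

print(f"M_Ni ~ {M_ni:.2f} Msun")  # ~0.29, matching the ~0.28 quoted above
print(f"M_ej ~ {M_ej:.2f} Msun")  # ~0.54
```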
Figure 10: Upper-Left: Fits to the bolometric light curve of SN 2023fyq using
a combination of shock breakout and CSM interaction models. Bottom: Fits to
the bolometric light curve of SN 2023fyq using a combination of shock
breakout, CSM interaction, and $\rm{}^{56}Ni$ decay models. The dip observed
around 30 days in the $\rm{}^{56}Ni$ decay model is due to the transition from
the photospheric phase to the nebular phase (see Valenti et al. (2008) for
more details). The upper-right panel is a zoom-in of the bottom panel to
better illustrate the fit close to the SN peak. The initial bump is well-
fitted by the shock breakout model. The hollow point is at the precursor
phase, so it is not included in the fit.
#### 6.2.2 CSM Interaction?
Since the evolution of SN 2023fyq is similar to those of Type Ibn SNe, it is
likely that the light curve around the second peak is powered by CSM
interaction. We use the model presented in Jiang et al. (2020), which
generalizes the self-similar solution to the interaction of stellar ejecta
with surrounding CSM originally presented in (Chevalier, 1982). In this model,
the density of CSM is described by a power law, $\rho\propto qr^{-s}$, while
the ejecta are divided by an inner region ($\rho_{ej}\propto r^{-\delta}$) and
an outer region ($\rho_{ej}\propto r^{-n}$). We fix the optical opacity
($\kappa$) to be 0.1 $\rm cm^{2}\,g^{-1}$, $n=10$, $s=0$, and $\delta=1$
following Pellegrino et al. (2022). The value of $\kappa\approx 0.1\ {\rm
cm^{2}\ g^{-1}}$ is motivated by the opacity of singly-ionized He at $\sim
10^{4}$ K (e.g., Kleiser & Kasen, 2014). We also attempted to fit the data
with $s=2$ (wind-like CSM), but did not achieve a reasonable fit. This result
is consistent with the findings reported by Karamehmetoglu et al. (2017),
Gangopadhyay et al. (2020), and Ben-Ami et al. (2023). The ejecta velocity
(7,000 $\rm km\,s^{-1}$) is obtained from the velocity of the P-Cygni minimum
of the He I lines near peak. The free parameters in our fit are the explosion
epoch ($t_{exp}$), the ejecta mass ($M_{ej}$), the inner radius of the CSM
($R_{0}$), the CSM mass ($M_{csm}$), the density of the CSM at $R_{0}$
($\rho_{csm,0}$), and the conversion efficiency of the shock kinetic energy to
radiation ($\epsilon$).
To account for the initial shock cooling phase we have incorporated the shock
breakout (SBO) model presented by Margalit (2022). This model provides an
analytic solution for the shock cooling phase following shock breakout from
extended optically thick material, which is suitable for the case of SN
2023fyq. We fix the velocity of the inner envelope at 7,000 $\rm km\,s^{-1}$.
Additionally, we introduce two free parameters into our fit: the radius of the
extended material ($R_{e}$) and the mass of the extended material ($M_{e}$).
The model fit to the observed light curve is performed using an MCMC routine.
As illustrated in the upper-left panel of Figure 10, both the initial bump and
the subsequent evolution of the light curve are well-fitted by the model. The
best-fitting parameters are detailed in Table 1 (CSM+SBO model). It is
important to note that the models presented here are likely oversimplified, so
the derived parameters should only be considered order-of-magnitude estimates.
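As a rough consistency check, the $s=0$ (uniform-density) CSM fit implies an outer CSM radius obtained by inverting $M_{csm}=\frac{4\pi}{3}(R_{out}^{3}-R_{0}^{3})\,\rho_{csm,0}$. The sketch below uses the CSM+SBO values from Table 1; the inversion itself is our illustration, not a calculation performed in the paper:

```python
import math

M_SUN, R_SUN = 1.989e33, 6.957e10  # cgs

# CSM+SBO best-fit values from Table 1 (s = 0 means uniform-density CSM)
M_csm = 0.7 * M_SUN        # total CSM mass, g
rho0 = 10.0 ** (-11.9)     # CSM density, g cm^-3
R0 = 16.0e13               # inner CSM radius, cm

# M_csm = (4*pi/3) * (R_out^3 - R0^3) * rho0  ->  solve for R_out
R_out = (3.0 * M_csm / (4.0 * math.pi * rho0) + R0**3) ** (1.0 / 3.0)
print(f"R_out ~ {R_out:.1e} cm (~{R_out / R_SUN:.0f} Rsun)")
```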
The $M_{ej}$ and $M_{csm}$ derived for SN 2023fyq are roughly consistent with
those found in other studies (e.g., Pellegrino et al., 2022; Ben-Ami et al.,
2023). The low ejecta mass implies that the progenitor is likely a low-mass He
star. However, this model can only fit the light curve around the peak and
cannot explain the light curve flattening at late times (see Figure 5). At
later times, the light curve is likely powered by another source of energy.
#### 6.2.3 RAD+CSM Interaction?
Since SN 2023fyq is similar to normal SESNe shortly after peak and during
nebular phases, it is plausible that a certain amount of $\rm{}^{56}Ni$ is
produced during the explosion. Therefore, it is natural to consider
$\rm{}^{56}Ni$ decay as an additional energy source. A $\rm{}^{56}Ni$ decay
model has been employed to interpret the late-time light curves of many other
Type Ibn SNe, often revealing low $\rm{}^{56}Ni$ masses across previous
studies (Gangopadhyay et al., 2020; Pellegrino et al., 2022; Ben-Ami et al.,
2023).
We use the $\rm{}^{56}Ni$ decay model presented in Arnett (1982) and Valenti
et al. (2008). The full SN light curve is fitted by a combination of CSM
interaction, shock breakout, and $\rm{}^{56}Ni$ decay models. We fix the
optical opacity to be $\kappa=$0.1 $\rm cm^{2}\,g^{-1}$ and the $\gamma$-ray
opacity to be 0.03 $\rm cm^{2}\,g^{-1}$. The ejecta velocity is fixed to be
7,000 $\rm km\,s^{-1}$. The best-fit model is shown in the upper-right panel
and the bottom panel of Figure 10, and the best-fit parameters are presented
in Table 1 (the CSM+SBO+RAD model). Both the amount of $\rm{}^{56}Ni$
($\sim$0.02 $\rm M_{\odot}$) and the ejecta mass ($\sim$$1.2\rm M_{\odot}$)
are lower than those of SESNe (Lyman et al., 2016). The low ejecta mass
implies that the progenitor of SN 2023fyq is less massive than those of normal
SESNe right before the SN explosion. One caveat of the model is that we did
not consider the CSM interaction at late phase, so the $\rm{}^{56}Ni$ mass we
derive here can only be treated as an upper limit.
The radius of the extended material ($R_{e}$) is around $21\times 10^{13}\rm
cm$ ($\sim$3000 $\rm R_{\odot}$). This large radius is consistent with the
blackbody radius of SN 2023fyq around the shock breakout (Figure 5). This
indicates that, at the explosion, the progenitor is surrounded by an extended
envelope with a mass of 0.3 $\rm M_{\odot}$ at a radius of
$R_{e}$$\sim$3000$\rm R_{\odot}$, consistent with what we discussed in Section
6.1. Considering the width of the narrow line component in the SN spectra
(Figure 5) and the narrow lines observed pre-explosion (Brennan et al., 2024),
the extended material likely expands with a velocity of $\sim$1000 $\rm
km\,s^{-1}$. Such a velocity suggests that the material at around
$\sim$3000$~{}\rm R_{\odot}$ was formed within around 20 days before the
explosion.
In such a scenario, the pre-explosion photosphere would be located within the
extended material, where the optical depth is sufficiently high. For a wind
profile $\rho\propto r^{-2}$, $R_{BB}$ is roughly proportional to
$\dot{M}/V_{wind}$, where $\dot{M}$ is the mass-loss rate and $V_{wind}$ is
the expansion velocity of the extended material. Consequently, the expansion
of $R_{BB}$, starting from around $-$100 d (Figure 5), is likely due to an
increase in mass loss. The more pronounced rise between $\sim$$-$40 d and
$-$11 d can be attributed to more eruptive mass loss immediately preceding the
explosion. If the majority of the material characterized by $M_{e}$ is formed
during this eruptive phase, the mass-loss rate can be estimated to be
$\dot{M}\approx\frac{M_{e}V_{wind}}{R_{e}}\approx 4.5~{\rm M_{\odot}\,yr^{-1}}\left(\frac{M_{e}}{0.3\,\rm M_{\odot}}\right)\left(\frac{3000\,\rm R_{\odot}}{R_{e}}\right)\left(\frac{V_{wind}}{1000\,\rm km\,s^{-1}}\right).$ (1)
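The arithmetic behind this estimate, including the implied formation timescale $R_{e}/V_{wind}$, can be checked directly (a sketch using the fiducial values quoted above):

```python
import math

M_SUN, R_SUN, YR = 1.989e33, 6.957e10, 3.156e7  # cgs

M_e = 0.3 * M_SUN      # mass of the extended material
R_e = 3000 * R_SUN     # its radius at explosion
v_wind = 1000e5        # expansion velocity, cm/s

t_form = R_e / v_wind        # time for the material to reach R_e
mdot = M_e * v_wind / R_e    # implied mass-loss rate, g/s

print(f"t_form ~ {t_form / 86400:.0f} d")          # ~24 d before explosion
print(f"Mdot ~ {mdot * YR / M_SUN:.1f} Msun/yr")   # ~4.5 Msun/yr
```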
Interestingly, eruptive mass ejections of order $\sim$0.1--1 $\rm M_{\odot}$
are anticipated for low-mass He stars with masses of 2.5--3.2 $\rm M_{\odot}$
due to core silicon deflagration or detonation weeks prior to core
collapse (Woosley, 2019; Ertl et al., 2020). The mass and velocity of the
ejected material depend on the amount of silicon that burns (Ertl et al.,
2020). An ejection mass of $\sim$0.3 $\rm M_{\odot}$ with a velocity of
$\sim$1000$\rm\ km\,s^{-1}$ is consistent with the typical values of such
events (see figure 14 and table 4 of Woosley 2019).
The CSM characterized by $M_{CSM}$ is likely more extended and formed during
the earlier phase of the precursor activities. A detailed discussion of this
topic is provided in Section 6.3.2.
Shortly after the peak, the spectra of SN 2023fyq exhibit broad absorption
lines from the SN ejecta, indicating an optically thin CSM interaction region
between the observer and the SN ejecta. However, the model fit indicates that
the light curve is still predominantly influenced by the CSM interaction. One
possible explanation for this discrepancy is that our analytical model is
oversimplified, leading to an overestimation of the contribution from the CSM
interaction. Alternatively, the CSM may not be spherically symmetric. For
instance, if the SN were surrounded by a disk/torus-like CSM, strong CSM
interaction would mainly occur in the equatorial region. Consequently, an
observer looking along the polar direction would observe less obscured signals
from the SN ejecta while the majority of the luminosity arises from the CSM
interaction. The physical picture of this disk-like CSM scenario has been
extensively discussed in Smith (2017).
In summary, neither radioactive decay nor CSM interaction alone can be the
power source of SN 2023fyq. Approximately a few weeks before the explosion,
about 0.3 $M_{\odot}$ of material is ejected with a velocity of $\sim$1000
$\rm km\,s^{-1}$ due to an increase in mass loss from the progenitor. This
material expands to a radius of $\sim$3000 $\rm R_{\odot}$ at the time of the
explosion. After the explosion, the energy deposited by the shock breakout
from the extended material produces the initial light curve bump. Around 0 d
the light curve is at least partially powered by the interaction between the SN
ejecta and the surrounding CSM, with the kinetic energy of the ejecta
converted into thermal energy, resulting in a bright peak. After that, as the
strength of the CSM interaction decreases over time, the light curve becomes
more influenced by radioactive decay, leading to a relatively flat light
curve.
Table 1: Best-fit parameters of the CSM+Shock Breakout model and the CSM+Shock Breakout+RAD model.

Model | $t_{exp}$ (Day) | $M_{ej}$ ($\rm M_{\odot}$) | $R_{0}$ ($\rm 10^{13}\,cm$) | $M_{csm}$ ($\rm M_{\odot}$) | $\rho_{csm,0}$ ($\log_{10}\rm (g\,cm^{-3})$) | $\epsilon$ | $R_{e}$ ($\rm 10^{13}\,cm$) | $M_{e}$ ($\rm M_{\odot}$) | $M_{Ni}$ ($\rm M_{\odot}$)
---|---|---|---|---|---|---|---|---|---
CSM+SBO | $-11.5^{+0.1}_{-0.1}$ | $1.3^{+0.1}_{-0.1}$ | $16.0^{+14.2}_{-9.7}$ | $0.7^{+0.1}_{-0.1}$ | $-11.9^{+0.1}_{-0.1}$ | $5^{+0.1}_{-0.1}\times 10^{-2}$ | $24.2^{+0.6}_{-1.1}$ | $0.4^{+0.1}_{-0.1}$ | $\cdots$
CSM+SBO+RAD | $-11.4^{+0.1}_{-0.1}$ | $1.2^{+0.1}_{-0.1}$ | $15.0^{+12.5}_{-10.0}$ | $0.6^{+0.1}_{-0.1}$ | $-12.2^{+0.1}_{-0.1}$ | $5^{+0.1}_{-0.1}\times 10^{-2}$ | $21.4^{+0.7}_{-0.6}$ | $0.3^{+0.1}_{-0.1}$ | $0.02^{+0.01}_{-0.01}$
### 6.3 What Powers The Precursor of SN 2023fyq?
#### 6.3.1 Single Massive Star Activities?
SN precursors have been commonly observed in Type IIn SNe (e.g., Mauerhan et
al., 2013; Smith et al., 2010; Ofek et al., 2013, 2014; Tartaglia et al.,
2016; Pastorello et al., 2013, 2018; Strotjohann et al., 2021; Hiramatsu et
al., 2024), but are rarely found in Type Ibn SNe and Type II SNe. To date, the
pre-explosion activities for Type Ibn SNe have only been detected in SN 2006jc
(Pastorello et al., 2007) and SN 2019uo (Strotjohann et al., 2021). Searches
for precursors in other SNe Ibn yielded only upper limits, ranging from around
$-15$ to $-13$ mag (e.g., Pastorello et al., 2008; Shivvers et al., 2017;
Wang et al., 2024). This may be because those SNe Ibn had no precursors or
only fainter and shorter ones, and also because most of these events occur at
greater distances than SN 2023fyq. Compared to SN 2006jc and SN 2019uo, one
unique characteristic of SN 2023fyq is the long-standing precursor emission.
Precursor emission observed in SN 2006jc and SN 2019uo occurred hundreds of
days before the SN explosions, with durations of $\sim$10 days. The precursors
observed in these events are much shorter-lived and brighter than that of SN
2023fyq (see Figure 2).
We first consider the possibility that the precursor of SN 2023fyq is produced
by the final-stage stellar activities of a single massive star. In this case,
the precursor can be powered by mass ejection driven by wave transport during
the late-stage nuclear burning in the core (Quataert & Shiode, 2012; Shiode &
Quataert, 2014; Fuller, 2017; Fuller & Ro, 2018; Morozova et al., 2020) or
pulsational pair instability (Yoshida et al., 2016; Woosley, 2017).
Massive stars with He core masses of 30 – 64 $\rm M_{\odot}$ experience
pulsational pair instability after carbon burning, producing violent mass
ejections before their cores collapse (Woosley, 2017). Pulsational pair
instability in massive stars have been suggested to be a promising channel of
Type Ibn SNe (Yoshida et al., 2016; Woosley, 2017; Leung et al., 2019; Renzo
et al., 2020). The pulsing activities can last for hours to 10,000 years,
depending on the He core mass, before the SN explosion (Yoshida et al., 2016;
Woosley, 2017). In SN 2023fyq, precursor emission is detected for $\sim$3
years before the SN explosion. Therefore, if pulsational pair instability
powers the precursor emission of SN 2023fyq, the progenitor would be a He star
with a ZAMS mass larger than $\sim$52 $\rm M_{\odot}$ (Woosley, 2017).
However, the outbursts caused by the pulses of these more massive stars are
usually energetic and can result in sharply rising light curves, which is
inconsistent with the relatively steady precursor emission of SN 2023fyq.
Additionally, the low ejecta mass we derived in Section 6.2 does not align
with a very massive He star progenitor. Therefore, we disfavor pulsational
pair instability as the powering mechanism of precursor emission in SN
2023fyq.
Strong temperature gradients form during late-stage nuclear burning in
massive stars, driving convection that excites internal gravity waves.
The gravity waves may carry their energy to the envelope of the star and
deposit it there (Quataert & Shiode, 2012; Shiode & Quataert, 2014; Fuller,
2017; Fuller & Ro, 2018), which may trigger eruptive mass ejections (Leung &
Fuller, 2020; Matzner & Ro, 2021). The mass ejection itself and the collision
between the ejecta generated from multiple outbursts can potentially produce
SN precursor emission (Leung & Fuller, 2020; Strotjohann et al., 2021; Tsuna
et al., 2023). However, it would be difficult to reproduce the time scale of
the observed precursor with a single event of dynamical envelope ejection from
a stripped star (Tsuna et al., 2024). This is because the timescale is
regulated by radiative diffusion from the precursor ejecta, which is only
weeks to months for stripped stars; such a mechanism could therefore explain
the precursors of SN 2006jc or SN 2019uo (Tsuna et al., 2024), but not that of
SN 2023fyq. In order
to produce the precursor emission seen in SN 2023fyq, multiple fine-tuned mass
ejections would be needed. Therefore, a more plausible scenario is a
continuous mass loss over the timescale of years, with some continuous
powering mechanism for the precursor.
#### 6.3.2 Binary Interaction?
A low-mass He star in a binary system has been proposed to be a possible
progenitor scenario for Type Ibn SNe (Maund et al., 2016; Dessart et al.,
2022; Tsuna et al., 2024), which is supported by the lack of star formation at
the site of some members of the class (Sanders et al., 2013; Hosseinzadeh et
al., 2019). In this section we explore the possibility that the progenitor of
SN 2023fyq is a stripped star, such as a He star, in a binary system, and that
binary mass transfer generated the precursor activity.
The stripped SN progenitor in a binary system expands at some point in its
evolution near core-collapse, filling its Roche lobe and initiating mass
transfer onto the companion. Such a scenario is expected for stripped stars
with He core masses in the range of 2.5–3 $M_{\odot}$, which can inflate their
envelopes up to $\sim 100\ R_{\odot}$ at the oxygen/neon burning phase in
the final years to decades of their lives (e.g., Wu & Fuller, 2022, and
references therein). Thus for orbital separations of $\sim$(1–few) $\times
100\ R_{\odot}$ (orbital period of order 100 days for a companion of order
$\sim 1M_{\odot}$), we expect intense mass transfer to initiate during this
time period.
If the accretor is a compact object, the mass transfer rate is typically
orders of magnitude higher than its Eddington rate, $\dot{M}_{\rm Edd}\sim
2\times 10^{-8}\ {\rm M_{\odot}\ yr^{-1}}(M_{\rm
comp}/{1M_{\odot}})(\kappa_{opt}/{0.1\ {\rm cm^{2}\ g^{-1}}})^{-1}$ (where a
radiation efficiency of $10\%$ was assumed), and thus most of the transferred
mass actually escapes from the binary system without being accreted onto the
compact object. Even if the companion is not a compact object, for large mass
transfer rates of $\gtrsim 10^{-4}$–$10^{-3}\ M_{\odot}\ {\rm yr^{-1}}$, most
of the mass is still expected to escape through the binary’s outer Lagrange
point (Lu
et al., 2023). In either case, this escaped material becomes the CSM that
later powers the bright SN.
In Section 6.2 we found that the CSM required to power the main SN light curve
is around $0.6^{+0.1}_{-0.1}$ $\rm M_{\odot}$, which requires a time-averaged
mass loss rate of a few times $0.1\,M_{\odot}\ {\rm yr^{-1}}$, given that the
mass loss is linked to the observed 1000-day precursor.
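This time-averaged rate follows from simple arithmetic on the numbers quoted above; a minimal sketch (values from Section 6.2 and the observed precursor duration):

```python
# Rough check of the time-averaged mass loss rate implied by the CSM mass
# (~0.6 M_sun, Section 6.2) and the ~1000-day precursor duration.
M_CSM = 0.6               # CSM mass [M_sun]
PRECURSOR_DAYS = 1000.0   # precursor duration [days]

mdot = M_CSM / (PRECURSOR_DAYS / 365.25)  # [M_sun / yr]
print(f"time-averaged mass loss rate ~ {mdot:.2f} M_sun/yr")
# ~0.22 M_sun/yr, i.e. a few times 0.1 M_sun/yr
```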
Among the binary systems suggested by Wu & Fuller (2022) to exhibit such high
mass loss rates, those with orbital periods of 10–100 days are favored. These
systems have orbital velocities of $\sim$100 to a few hundred $\rm
km\,s^{-1}$. Assuming the velocity of the CSM that escapes the binary system
is $\sim$200 $\rm km\,s^{-1}$, the mass loss rate via mass transfer should be
larger than $\sim$2$\times 10^{-2}$ $M_{\odot}\ {\rm yr^{-1}}$ to power the
light curve peak (the detailed derivation is shown in Appendix A), which is
consistent with what we found in Section 6.2.
Given the required $\dot{M}$, we can consider two mechanisms to power the
precursor emission. The first is a collision of the mass-transfer outflow with
external material, which may exist due to a previous mass-transfer episode
(e.g., Pejcha et al., 2016; Metzger & Pejcha, 2017). While we remain agnostic
to the origin of the pre-existing matter, the maximum available power is given
by the kinetic luminosity of the outflow as
$L_{\rm out}\approx\frac{1}{2}\dot{M}v_{\rm CSM}^{2}\sim 1.3\times 10^{39}\ {\rm erg\ s^{-1}}\left(\frac{\dot{M}}{0.1M_{\odot}\ {\rm yr}^{-1}}\right)\left(\frac{v_{\rm CSM}}{200\ {\rm km\ s^{-1}}}\right)^{2}.$ (2)
Thus the precursor may be explained, but only for a favorably high CSM
velocity and dissipation and radiation-conversion efficiencies close to
unity.
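The normalization in Eq. (2) can be verified numerically; a quick sanity check in CGS units (the constants are standard values, not taken from the paper):

```python
# Numerical check of Eq. (2): kinetic luminosity of the mass-transfer
# outflow, L_out ~ (1/2) * Mdot * v_CSM^2, in CGS units.
MSUN_G = 1.989e33   # solar mass [g]
YR_S = 3.156e7      # year [s]

mdot = 0.1 * MSUN_G / YR_S     # 0.1 M_sun/yr in [g/s]
v_csm = 200e5                  # 200 km/s in [cm/s]

L_out = 0.5 * mdot * v_csm**2  # [erg/s]
print(f"L_out ~ {L_out:.2e} erg/s")  # ~1.3e39 erg/s, as in Eq. (2)
```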
In the case of a compact object companion, an accretion disk forming around
the compact object can be a promising energy source. While most of the
transferred mass is removed from the outer L2 point, a small fraction can
still accrete onto the companion and form a disk. The disk, if its accretion
rate is super-Eddington, can launch a fast radiation-driven wind that can
collide with the rest of the mass and dissipate its kinetic energy.
The hydrodynamics of the transferred mass has been considered recently in Lu
et al. (2023). For a neutron star companion with an orbital separation of
$a\approx$ (1–few)$\times 100\ R_{\odot}$ and mass transfer rate $\gg 10^{-3}\
M_{\odot}\ {\rm yr^{-1}}$, most of the mass is indeed lost from the L2 point
(their $f_{\rm L2}\sim 1$). However, the accretion rate can still reach
$\dot{M}_{\rm acc}\sim(3$–$7)\times 10^{-4}\ M_{\odot}\ {\rm yr^{-1}}$ (Figure
A2 of Lu et al. 2023),
which is orders of magnitude larger than the Eddington rate.
For a binary mass ratio of $q=M_{\rm NS}/M_{\rm*}\approx 0.5$, the (Keplerian)
circularization radius of the disk is found from the fitting formula in Lu et
al. (2023) as
$R_{c}\approx 0.10a\sim 7\times 10^{11}\ {\rm
cm}\left(\frac{a}{100R_{\odot}}\right).$ (3)
We expect a disk wind to be launched roughly where the local luminosity
exceeds the Eddington luminosity of the NS, within a disk radius (equation 31
of Lu et al. 2023)
$R_{\rm sph}\approx\frac{\dot{M}_{\rm acc}\kappa}{4\pi c}\sim 2\times 10^{10}\ {\rm cm}\left(\frac{\dot{M}_{\rm acc}}{5\times 10^{-4}M_{\odot}\ {\rm yr^{-1}}}\right)\left(\frac{\kappa}{0.2\ {\rm cm^{2}\ g^{-1}}}\right),$ (4)
which is typically less than $R_{\rm c}$ for an orbital separation of $a\sim
100~{}R_{\odot}$. We have taken the opacity here to be $\kappa\approx 0.2\
{\rm cm^{2}\ g^{-1}}$ as helium is expected to be fully ionized in the
interior of the disk. In line with many theoretical works that model super-Eddington
disk winds, we assume a power-law accretion rate profile
$\dot{M}(r)\propto r^{p}$ ($R_{\rm NS}<r<R_{\rm sph}$), where we adopt $R_{\rm
NS}=10$ km. This means that a fraction of the accreted mass is expelled at
each radius, and we assume that the wind velocity is equivalent to the local
disk escape velocity. Consequently, the wind kinetic luminosity, integrated
over the range of $r$, is estimated as
$L_{\rm wind}\approx\frac{p}{2(1-p)}\dot{M}_{\rm acc}\frac{GM_{\rm NS}}{R_{\rm NS}}\left(\frac{R_{\rm NS}}{R_{\rm sph}}\right)^{p}\sim 2\times 10^{40}\ {\rm erg\ s^{-1}}\left(\frac{\dot{M}_{\rm acc}}{5\times 10^{-4}M_{\odot}\ {\rm yr^{-1}}}\right)^{1/2}\left(\frac{M_{\rm NS}}{1.4M_{\odot}}\right)\left(\frac{\kappa}{0.2\ {\rm cm^{2}\ g^{-1}}}\right)^{-1/2},$ (5)
where we have adopted $p=0.5$ in the last equation while a possible range of
$0.3\leq p\leq 0.8$ is suggested (Yuan & Narayan, 2014). We thus find that the
disk wind carries the appropriate kinetic luminosity to explain the precursor
in the steady-state phase.
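The chain of estimates in Eqs. (3)–(5) can be reproduced with a few lines of arithmetic; a sketch in CGS units, using the fiducial parameters quoted in the text (standard physical constants assumed):

```python
# Order-of-magnitude check of Eqs. (3)-(5): circularization radius,
# spherization radius, and disk-wind kinetic luminosity (CGS units).
import math

G = 6.674e-8        # gravitational constant [cm^3 g^-1 s^-2]
C = 2.998e10        # speed of light [cm/s]
MSUN_G = 1.989e33   # solar mass [g]
RSUN_CM = 6.957e10  # solar radius [cm]
YR_S = 3.156e7      # year [s]

a = 100 * RSUN_CM                    # orbital separation
R_c = 0.10 * a                       # Eq. (3): circularization radius
mdot_acc = 5e-4 * MSUN_G / YR_S      # accretion rate [g/s]
kappa = 0.2                          # fully ionized He opacity [cm^2/g]
R_sph = mdot_acc * kappa / (4 * math.pi * C)  # Eq. (4)

p = 0.5                              # wind mass-loss index
M_ns = 1.4 * MSUN_G                  # neutron star mass
R_ns = 1e6                           # 10 km in cm
L_wind = (p / (2 * (1 - p))) * mdot_acc * (G * M_ns / R_ns) \
         * (R_ns / R_sph) ** p       # Eq. (5)

print(f"R_c    ~ {R_c:.1e} cm")        # ~7e11 cm
print(f"R_sph  ~ {R_sph:.1e} cm")      # ~2e10 cm, indeed < R_c
print(f"L_wind ~ {L_wind:.1e} erg/s")  # ~2e40 erg/s
```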
As the disk wind carries much less mass than the rest of the material
around the system, its kinetic energy will be efficiently dissipated in the
collision. We check that the dissipated energy can be successfully radiated
as the precursor. For a wind density profile, the diffusion timescale in the CSM is
$t_{\rm diff}\approx\frac{\kappa\dot{M}}{4\pi v_{\rm CSM}c}\sim 8\times 10^{4}\ {\rm sec}\left(\frac{\dot{M}}{0.1M_{\odot}\ {\rm yr}^{-1}}\right)\left(\frac{\kappa}{0.1\ {\rm cm^{2}\ g^{-1}}}\right)\left(\frac{v_{\rm CSM}}{200\ {\rm km\ s^{-1}}}\right)^{-1},$ (6)
and the adiabatic expansion timescale from the dissipation region, whose size
is roughly comparable to the orbital separation, is
$t_{\rm exp}\approx\frac{a}{v_{\rm CSM}}\sim 3\times 10^{5}\ {\rm sec}\left(\frac{a}{100\ R_{\odot}}\right)\left(\frac{v_{\rm CSM}}{200\ {\rm km\ s^{-1}}}\right)^{-1}.$ (7)
Since $t_{\rm diff}<t_{\rm exp}$, we expect that the dissipated energy can be
successfully radiated without significant adiabatic losses. The radiation will
be reprocessed in the CSM and finally emitted as optical radiation at
$r\approx R_{\rm BB}$. The mass
loss via the L2 point can form an equatorial disk (e.g., Lu et al., 2023). The
interaction of the equatorial disk with the SN ejecta may contribute to the
main peak of the SN light curve. In this case, the parameter $M_{\rm CSM}$
mentioned in Section 6.2 roughly characterizes the mass of the equatorial
disk. The interaction of SN ejecta with this dense CSM may still continue in
the nebular phase, producing the intermediate-width He lines we observe.
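The timescale comparison in Eqs. (6) and (7) can be checked numerically; a rough sketch in CGS units with the fiducial parameters used above (standard constants assumed):

```python
# Check of Eqs. (6)-(7): the diffusion time in the CSM is shorter than
# the expansion time from the dissipation region, so the dissipated
# energy can be radiated before adiabatic losses become important.
import math

MSUN_G = 1.989e33   # solar mass [g]
RSUN_CM = 6.957e10  # solar radius [cm]
YR_S = 3.156e7      # year [s]
C = 2.998e10        # speed of light [cm/s]

mdot = 0.1 * MSUN_G / YR_S   # mass loss rate [g/s]
kappa = 0.1                  # opacity [cm^2/g]
v_csm = 200e5                # CSM velocity [cm/s]
a = 100 * RSUN_CM            # orbital separation [cm]

t_diff = kappa * mdot / (4 * math.pi * v_csm * C)  # Eq. (6)
t_exp = a / v_csm                                  # Eq. (7)
print(f"t_diff ~ {t_diff:.1e} s, t_exp ~ {t_exp:.1e} s")
# t_diff ~ 8e4 s < t_exp ~ 3e5 s
```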
#### 6.3.3 What About The Rise After $-$100 d In The Pre-explosion Light
Curve?
As we mentioned in Section 4.3, the pre-explosion light curve shows a rapid
rise after $-$100 d, with a more pronounced rise occurring between $-$40 d and
$-$11 d. This may be associated with eruptive mass loss right before the SN
explosion. For the more pronounced rise between $-$40 d and $-$11 d, we
consider two possibilities: 1) the rise is due to orbital shrinking of the
binary, leading to runaway mass transfer and a rapidly rising pre-explosion
light curve (e.g., MacLeod et al., 2018); or 2) the rise is driven by core
silicon burning in the He star, which ejects a large amount of material and
powers the fast-rising light curve just before the core collapses.
For the first case, we consider the orbital evolution of this binary
system over the few-year timescale during which we observe the precursor.
mass loss from the Lagrange point carries away angular momentum as well, which
can affect the orbital separation of the binary. This generally leads to
shrinking of the orbit, which may manifest as the sharp rise of the
light curve as we approach the explosion epoch. From Figure 5 of Lu et al.
(2023) we find the orbital shrinking rate for mass ratio $q=0.5$ and $f_{\rm
L2}=1$ as
$\frac{\dot{a}}{a}\approx -5\,\frac{\dot{M}}{M_{*}}\sim -(6\ {\rm yr})^{-1}\left(\frac{\dot{M}}{0.1M_{\odot}\ {\rm yr}^{-1}}\right)\left(\frac{M_{*}}{3M_{\odot}}\right)^{-1},$ (8)
which means that for a mass loss rate of $\sim 0.1M_{\odot}\ {\rm yr^{-1}}$, the orbital
separation can significantly shrink in the several years that we observe the
precursor. The orbital shrinking of the binary may cause unstable mass
transfer and accretion onto the compact object, resulting in runaway mass
loss. This may explain the rapid rise after around $-$40 d in the precursor
light curve. Given the anticipated significant orbital shrinking within
several years for the system under consideration, the shallower rise in the
light curve between $-$100 d and $\sim-40$ d is likely also influenced by the
orbital shrinking. This may lead to only a gentle increase in the accretion
rate onto the compact companion, resulting in the rise of the light curve.
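The shrinking timescale in Eq. (8) reduces to a one-line estimate; a minimal sketch using the fiducial values quoted above:

```python
# Check of Eq. (8): orbital shrinking timescale from L2 mass loss,
# a / |adot| ~ M_* / (5 * Mdot), for q = 0.5 and f_L2 = 1.
mdot = 0.1      # mass loss rate [M_sun/yr]
M_star = 3.0    # donor (He star) mass [M_sun]

t_shrink = M_star / (5.0 * mdot)  # [yr]
print(f"orbital shrinking timescale ~ {t_shrink:.0f} yr")  # ~6 yr
```

A timescale of $\sim$6 yr is indeed comparable to the few-year precursor duration, as the text argues.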
In this scenario the final SN explosion can be due to the merger of the He
star with a compact object (e.g., Chevalier, 2012; Soker, 2019; Metzger,
2022). Such merger-driven explosions have been proposed to explain some long
gamma-ray bursts (Fryer & Woosley, 1998; Zhang & Fryer, 2001; Thöne et al.,
2011; Fryer et al., 2013), which are usually associated with a subtype of Type
Ic SNe that exhibit broad spectral lines. This He-merger scenario can connect
the observed rapid increase in the light curve’s brightness at the end of the
precursor phase with the following SN-like explosion. However, the
characteristics of the final explosion post-merger remain poorly understood.
For example, the predicted explosion energies are uncertain by many orders of
magnitude (Fryer & Woosley, 1998; Zhang & Fryer, 2001; Schrøder et al., 2020).
While the merger-driven explosion might explain the spectral features
observed, detailed spectral modeling of these events is still lacking.
For the second case, a core-collapse SN explosion is anticipated after
significant mass transfer over years from low-mass stripped stars ranging from
$2.5$ to $3M_{\odot}$ (Wu & Fuller, 2022). Additionally, an explosive mass
ejection weeks before the explosion due to silicon burning is indeed expected
in recent studies of low-mass He stars with masses of 2.5 – 3.2 $\rm
M_{\odot}$ (Woosley, 2019). The mass ejected can range from $10^{-2}$ to 1
$\rm M_{\odot}$ with velocities from $\sim$100 $\rm km\,s^{-1}$ to a few 1000
$\rm km\,s^{-1}$. In Section 6.2 we found that there is likely an eruptive
mass loss of $\sim$0.3 $\rm M_{\odot}$ a few weeks before the SN explosion
with a velocity of $\sim$1000 $\rm km\,s^{-1}$, which is consistent with the
silicon burning phase for low-mass He stars. The eruptive mass loss may
explain the more pronounced rise of the precursor light curve between
$\sim-$40 d and $-$11 d, and the ejected material in turn produces the first
SN peak. However, we note that detailed light curve modeling is necessary to
confirm this hypothesis. In this case, the shallower rise in the light curve
between $-$100 d and $\sim-40$ d is likely still attributable to the orbital
shrinking of the binary system, as discussed above.
In this scenario the final SN explosion results from the core collapse of the
He star. This explanation accounts for the observed spectral similarities
between SN 2023fyq and SESNe both post-peak and during the nebular phases.
Both the merger-driven and core-collapse scenarios can account for certain
observed features of SN 2023fyq. In either case, the progenitor system would
likely be asymmetric, which aligns with observations of SN 2023fyq. The
$\rm{}^{56}Ni$ yields from a merger-driven explosion are likely low (Fryer et
al., 2013; Metzger, 2022) and, similarly, low $\rm{}^{56}Ni$ production is
expected from core-collapse explosions in low-mass helium stars (Woosley,
2019). These predictions are consistent with the low $\rm{}^{56}Ni$ mass
derived from the late-time
light curves of SN 2023fyq.
An important difference between these two scenarios is that a merger-driven
explosion typically results in a single compact object in the remnant, whereas
a core-collapse explosion generally leaves behind a compact binary. In the
latter case, fallback accretion post-explosion could produce observable X-ray
emission approximately 100 to 1000 days after the explosion, which may show
time variations tied to the orbital motion of the binary (Kashiyama et al.,
2022). For SN 2023fyq, conducting X-ray follow-up years after the explosion
could be helpful in distinguishing between these two scenarios in future
studies (see Appendix B for details).
In conclusion, the timescale and brightness of the precursor observed in SN
2023fyq before $-$100 d can be attributed to mass transfer in a binary system.
The companion star is likely a compact object, as the energetics of the disk
wind launched from super-Eddington accretion onto the compact object can
naturally explain the luminosity of the precursor. An equatorial circumbinary
disk, formed during the mass transfer, later interacts with the SN ejecta,
powering the main SN peak. During the nebular phases the ongoing interaction
between the equatorial disk and the SN ejecta produces the intermediate-width
He lines observed. The rise of the light curve between $-$100 d and $\sim-40$
d is likely due to orbital shrinking. The more pronounced rise of the light
curve starting around $-$40 d may be linked to 1) an eruptive mass ejection
due to final-stage silicon burning, or 2) runaway mass transfer caused by
orbital shrinking of the binary system. In the first scenario, the subsequent
explosion would result from the core-collapse of the He star. In the second
scenario, it would result from the merger of the He star with the compact
object. Both scenarios can launch material into the polar region. The shock
breakout from this extended material and the subsequent cooling emission power
the first bright SN peak.
### 6.4 Connections to Other Transient Phenomena and Implications for the CSM
Structure
It is noteworthy that the light curve morphology (both the pre- and post-
explosion phase) of SN 2023fyq is quite similar to those of luminous red novae
(Soker & Tylenda, 2003; Tylenda et al., 2011; Mauerhan et al., 2015; Smith et
al., 2016; Blagorodnova et al., 2017), which are generally understood to be
the product of binary mergers (e.g., Metzger & Pejcha, 2017). The pre-
explosion activities in luminous red novae are often associated with binary
mass transfer (e.g., Pejcha, 2014), and the pre-explosion brightening is due
to the increase in the mass-loss rate caused by orbital shrinking. The post-
explosion light curves of luminous red novae are double-peaked, in which the
first peak is likely from the shock cooling and the second peak is from the
interaction between the ejecta and a pre-existing equatorial disc formed
during binary mass transfer (Metzger & Pejcha, 2017).
The scenario for luminous red novae is analogous to what we propose for SN
2023fyq; the primary difference is the energy source of the final explosion. Such
an asymmetric CSM structure is consistent with the multi-component profile of
the He I $\lambda$5876 line as we discussed in Section 5 and also the
asymmetric line profiles observed during the pre-explosion phase of SN 2023fyq
(Brennan et al., 2024). Similarities between luminous red novae and
interaction-powered SNe have also been reported in previous studies (e.g.,
Hiramatsu et al., 2024).
The light curve evolution of SN 2023fyq is also similar to that of ultra-stripped
SNe (De et al., 2018; Yao et al., 2020). The first bright SN light
curve peak in these ultra-stripped SNe is generally understood as a result of
shock breakout from the dense CSM ejected weeks before the SN explosion. The
second peak of these objects is usually around $10^{42}\rm erg\,s^{-1}$, much
fainter than that of SN 2023fyq, and is thought to be powered by
$\rm{}^{56}Ni$ decay (De et al., 2018; Yao et al., 2020). It may be that in
these objects the CSM is more confined and a more extended ($\sim 10^{15}$ cm)
dense equatorial disk is lacking, resulting in insufficient CSM at these radii
to power a second interaction peak like that observed in SN 2023fyq.
SNe Ibn can show a wide variety of spectral features at early phases
(Hosseinzadeh et al., 2017), which is not surprising if all SNe Ibn experience
strong interaction with asymmetric CSM (e.g., Smith et al., 2015; Smith,
2017). Only a few SNe Ibn are observed until late phases since they can
decline fast. Interestingly, as we show in Figure 9, at late times these SNe
Ibn seem to fall into two distinct classes: Class I, which shows broad lines
and shares many similarities with normal SESNe (SN 2023fyq, SN 2015G, SN
2018gjx), and Class II, which is still dominated by narrow emission lines (SN
2006jc, SN 2019kbj). Assuming the progenitors of all these SNe Ibn are He
stars, the
objects in Class II may be surrounded by more massive CSM and/or have lower
explosion energy (Dessart et al., 2022).
For the objects in Class I, the intensity of the [O I] $\lambda\lambda$6300,
6364 line can vary significantly among different objects while the other
spectral features are quite similar. If the progenitors of all these objects
are surrounded by an equatorial disk, the difference in the intensity of the
[O I] $\lambda\lambda$6300, 6364 line can be naturally explained by different
viewing angles (see Figure 11). If the system is observed from the equatorial
direction, the central [O I] $\lambda\lambda$6300, 6364 line forming region
can be obscured by the disk. Instead, a polar observer would be able to see
the whole nebular emission from the inner ejecta. For both observers,
intermediate-width He emission lines from the ongoing interaction of the SN
ejecta with the equatorial disk can be seen.
A disk/torus-like CSM is also invoked in previous studies to explain the
spectroscopic evolution of SNe Ibn (Prentice et al., 2020) and SNe IIn (e.g.,
Smith & Arnett, 2014; Smith et al., 2015; Andrews & Smith, 2018; Smith &
Andrews, 2020). Such a disk/torus-like CSM scenario could potentially explain
the diversity we see in SNe Ibn in Class I, and is consistent with the
precursor model we discussed in Section 6.3.2. This suggests that Class I SNe
Ibn may originate from a similar progenitor channel but with variations in
viewing angles.
Long-lasting and relatively stable precursor activities due to binary
interaction are commonly seen in luminous red novae (e.g., Tylenda et al.,
2011; Mauerhan et al., 2015; Blagorodnova et al., 2017). Given the similarity
of the progenitor scenario of luminous red novae and SN 2023fyq, it is
possible that precursor activities are not rare in SNe Ibn in Class I. If this
is true, the long-lasting and slowly rising pre-explosion emission may serve
as a unique early warning for this subclass of Type Ibn SNe. The evolution of
the precursor light curves may vary depending on the viewing angle, as the
emission could be obscured by the equatorial disk for observers near the
equatorial plane. Given that the viewing angle also influences the intensity
of the [O I] lines in the nebular spectra, combining the precursor emission
with late-time spectroscopy could serve as a unique probe for the progenitor
scenario we propose.
Figure 11: A sketch of the possible progenitor system of SN 2023fyq. Upper:
around a few years before the explosion, the progenitor (a He star with a mass
of $\sim$2.5 – 3 $M_{\odot}$) expands at the oxygen/neon burning phase,
filling its Roche lobe. This triggers mass transfer onto its companion compact
object, resulting in the precursor emission we observe. Around weeks before
the explosion, an eruptive mass ejection is triggered through core silicon
burning in the low-mass He star or runaway mass transfer due to orbital
shrinking, launching dense material to the polar region. The subsequent
explosion is likely triggered either by core collapse of the He star or by the
merger of the He star with its compact object companion. Bottom: Immediately
after the explosion, the shock breaks out from the dense polar material formed
weeks before the explosion, producing the first light curve peak. The
interaction of SN ejecta with the equatorial disk formed by the pre-explosion
binary interaction contributes to the second peak.
## 7 Summary
The evolution of SN 2023fyq closely resembles that of Type Ibn SNe. The optical
spectra post-peak and the nebular spectrum of SN 2023fyq share similarities
with those of normal SESNe, implying that the progenitor is a stripped/He
star. The SN light curve can be reproduced by a CSM interaction + shock
breakout + $\rm{}^{56}Ni$ decay model, implying the presence of dense CSM
around the progenitor, a low progenitor mass, and a low $\rm{}^{56}Ni$
production. The
precursor emission of SN 2023fyq is observed up to around three years before
the SN explosion. The long-duration precursor activity is best explained by
mass transfer in a binary system involving a low-mass He star.
Putting all these together, we summarize a possible timeline for SN 2023fyq:
1. 1.
$\sim$$-$1000 d to $\sim$$-$100 d (upper panel of Figure 11): A low-mass He
star (2.5 – 3 $\rm M_{\odot}$) expands substantially at the oxygen/neon
burning phase, triggering mass transfer to its companion compact object, which
produces the precursor emission we observe. The outflow via the L2 point produces
the He-rich CSM around the progenitor system and forms an equatorial disk
($\sim$0.6$\rm M_{\odot}$).
2. 2.
$\sim$$-$100 d to $\sim$$-$11 d: The shrinkage of the orbit leads to an
increase in the accretion rate onto the companion compact object, resulting in
a rise in the light curve. The more pronounced light curve rise after
$\sim-$40 d is likely due to either core silicon burning or runaway
mass transfer caused by orbital shrinking, which triggers an eruptive mass
ejection ($\sim$0.3$\rm M_{\odot}$) with a velocity of $\sim$1000$\rm
km\,s^{-1}$. This launches dense material to the polar region.
3. 3.
$\sim$$-$11 d (bottom panel of Figure 11): A SN explosion is triggered either
by the core-collapse of the He star or by the merger of the He star with a
compact object, which sends a shock through the polar material ($\sim$3000
$\rm R_{\odot}$). The energy deposited during the shock breakout produces the
initial bump of the light curve.
4. 4.
$\sim$$-$11 d to $\sim$20 d: The SN ejecta collide with the equatorial He-rich
CSM ($\sim$0.6$\rm M_{\odot}$), converting the kinetic energy of the SN ejecta
into thermal energy, contributing to the SN light curve and generating a very
blue spectrum with only prominent He lines. With the expansion of the ejecta,
the optical depth decreases so that more signals from the SN ejecta are
observed.
5. 5.
after $\sim$20 d: The strength of the CSM interaction decreases and the SN
fades, and radioactive decay likely starts to contribute more to the light
curve. Later, the ejecta become more optically thin and the object transitions
into the nebular phase. Given our proximity to the polar direction of the
system, signals from the inner part of the ejecta are revealed, which closely
resemble those of normal SESNe at nebular phases. Additionally, the continuing
interaction between the ejecta and the He-rich equatorial CSM produces strong
intermediate-width He emission lines.
Given the similarities between SN 2023fyq and other Type Ibn SNe, precursor
activities may be common for a certain subclass of Type Ibn SNe. If an
equatorial disk is indeed formed during the precursor phase, the precursor
emission and the intensity of the [O I] lines at the nebular phases for this
class of objects would be dependent on the viewing angle. It is worth noting
that this mechanism does not apply to the very brief, singular pre-explosion
outburst observed in SN 2006jc and SN 2019uo. For the upcoming LSST survey, a
single 30-second visit will achieve a 5$\sigma$ depth of approximately 24 mag
(Bianco et al., 2022). By stacking images, even deeper limits can be achieved.
This enables LSST to effectively constrain the precursors of Type Ibn SNe,
such as SN 2023fyq, within 150 Mpc, assuming a typical precursor brightness of
$-$12 mag. A sample of Type Ibn SNe with well-constrained precursor
activities, combined with the late-time spectroscopy, will test the progenitor
scenario we propose. We also encourage detailed spectral and light curve
modeling of merger-driven explosions, as well as the silicon burning phase in
low-mass He stars just prior to core collapse. By comparing these models with
a large sample of observations, we can deepen our understanding of the final
stages of stellar evolution.
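The quoted 150 Mpc reach follows from the distance modulus; a back-of-the-envelope check using the $-$12 mag precursor brightness and $\sim$24 mag single-visit depth quoted above:

```python
# Distance-modulus check: apparent magnitude of a -12 mag precursor
# at 150 Mpc, compared with the LSST single-visit depth (~24 mag).
import math

M_abs = -12.0   # typical precursor absolute magnitude
d_pc = 150e6    # 150 Mpc in parsecs

m_app = M_abs + 5 * math.log10(d_pc / 10.0)
print(f"apparent magnitude ~ {m_app:.1f}")
# ~23.9 mag, right at the ~24 mag single-visit depth
```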
## Acknowledgements
We would like to thank Jim Fuller for the assistance with the manuscript in
its early stages. We would like to thank Kyle Davis for sharing the SOAR
spectrum from his program. Y.D. would like to thank L.Z. for redesigning and
redrawing Figure 11 in the paper.
Research by Y.D., S.V., N.M.R, E.H., and D.M. is supported by NSF grant
AST-2008108. D.T. is supported by the Sherman Fairchild Postdoctoral
Fellowship at the California Institute of Technology.
Time-domain research by the University of Arizona team and D.J.S. is supported
by NSF grants AST-1821987, 1813466, 1908972, 2108032, and 2308181, and by the
Heising-Simons Foundation under grant #2020-1864.
This work makes use of data from the Las Cumbres Observatory global telescope
network. The LCO group is supported by NSF grants AST-1911225 and AST-1911151.
A.Z.B. acknowledges support from the European Research Council (ERC) under the
European Union’s Horizon 2020 research and innovation program (grant agreement
No. 772086).
This publication was made possible through the support of an LSST-DA Catalyst
Fellowship to K.A.B, funded through Grant 62192 from the John Templeton
Foundation to LSST Discovery Alliance. The opinions expressed in this
publication are those of the authors and do not necessarily reflect the views
of LSST-DA or the John Templeton Foundation.
Based on observations obtained at the international Gemini Observatory, a
program of NSF’s NOIRLab, which is managed by the Association of Universities
for Research in Astronomy (AURA) under a cooperative agreement with the
National Science Foundation. On behalf of the Gemini Observatory partnership:
the National Science Foundation (United States), National Research Council
(Canada), Agencia Nacional de Investigación y Desarrollo (Chile), Ministerio
de Ciencia, Tecnología e Innovación (Argentina), Ministério da Ciência,
Tecnologia, Inovações e Comunicações (Brazil), and Korea Astronomy and Space
Science Institute (Republic of Korea).
This work was enabled by observations made from the Gemini North telescope,
located within the Maunakea Science Reserve and adjacent to the summit of
Maunakea. We are grateful for the privilege of observing the Universe from a
place that is unique in both its astronomical quality and its cultural
significance.
This work includes observations obtained at the Southern Astrophysical
Research (SOAR) telescope, which is a joint project of the Ministério da
Ciência, Tecnologia e Inovações (MCTI/LNA) do Brasil, the US National Science
Foundation’s NOIRLab, the University of North Carolina at Chapel Hill (UNC),
and Michigan State University (MSU).
Some of the data presented herein were obtained at Keck Observatory, which is
a private 501(c)3 non-profit organization operated as a scientific partnership
among the California Institute of Technology, the University of California,
and the National Aeronautics and Space Administration. The Observatory was
made possible by the generous financial support of the W. M. Keck Foundation.
The authors wish to recognize and acknowledge the very significant cultural
role and reverence that the summit of Maunakea has always had within the
indigenous Hawaiian community. We are most fortunate to have the opportunity
to conduct observations from this mountain.
The LBT is an international collaboration among institutions in the United
States, Italy and Germany. LBT Corporation Members are: The University of
Arizona on behalf of the Arizona Board of Regents; Istituto Nazionale di
Astrofisica, Italy; LBT Beteiligungsgesellschaft, Germany, representing the
Max-Planck Society, The Leibniz Institute for Astrophysics Potsdam, and
Heidelberg University; The Ohio State University, and The Research
Corporation, on behalf of The University of Notre Dame, University of
Minnesota and University of Virginia.
This research has made use of the NASA/IPAC Extragalactic Database (NED),
which is funded by the National Aeronautics and Space Administration and
operated by the California Institute of Technology.
This research made use of Photutils, an Astropy package for detection and
photometry of astronomical sources (Bradley et al., 2022).
## Appendix A The Mass-Loss Rate in the Binary Interaction Scenario
This appendix calculates the mass loss rate needed for a binary system to
explain the observations, as discussed in Section 6.3.2. We begin by
estimating the required mass loss rate $\dot{M}$ of the CSM, which in our
scenario is equivalent to the mass transfer rate if the rate is much larger
than the Eddington rate and the companion cannot accrete most of the
transferred material. The CSM must be optically thick within the observed
blackbody radius $R_{\rm BB}\approx 600~{}R_{\odot}$ at the precursor phase.
For a mass loss rate of $\dot{M}$, the optical depth at $R_{\rm BB}$ is
$\displaystyle\tau_{\rm CSM}(r=R_{\rm BB})$
$\displaystyle\approx\frac{\kappa\dot{M}}{4\pi R_{\rm BB}v_{\rm CSM}}$
$\displaystyle\sim 60\left(\frac{\dot{M}}{0.1\ M_{\odot}\ {\rm
yr}^{-1}}\right)\left(\frac{\kappa}{0.1\ {\rm cm^{2}\
g^{-1}}}\right)\left(\frac{v_{\rm CSM}}{200\ {\rm km\ s^{-1}}}\right)^{-1}$
(A1)
where $v_{\rm CSM}$ is the velocity of the CSM that escapes the binary system.
This is typically the orbital velocity for outflows from mass transfer, which
is $\sim 200$ km s$^{-1}$ for the orbital separation of interest (see Section
6.3.2), but the arguments below do not depend strongly on the adopted value.
The value of $\kappa\approx 0.1\ {\rm cm^{2}\ g^{-1}}$ is motivated by that
of singly-ionized helium at around $10^{4}$ K (e.g., Kleiser & Kasen, 2014).
The optical depth then places a lower limit on $\dot{M}$ of
$\dot{M}\geq\dot{M}_{\rm min}\approx 2\times 10^{-3}M_{\odot}\ {\rm
yr}^{-1}\left(\frac{\kappa}{0.1\ {\rm cm^{2}\
g^{-1}}}\right)^{-1}\left(\frac{v_{\rm CSM}}{200\ {\rm km\ s^{-1}}}\right)$
(A2)
which confirms the super-Eddington mass transfer rate. (As the blackbody
temperature is $\sim 10^{4}$ K during the precursor phase, even for
$\dot{M}\gg\dot{M}_{\rm min}$ we expect that the blackbody radius would not
be much larger than the observed value: the temperature drops as a function
of radius, and the opacity at $r>R_{\rm BB}$ will rapidly drop with radius
due to helium recombination, analogous to the recombination front of a Type
II-P SN.) As a cross check, we can also roughly infer $\dot{M}$ from the
observed SN. The collision of the SN ejecta with the CSM generates a shock that
powers the SN light curve. The kinetic energy dissipation rate is
$\displaystyle L_{\rm kin}$ $\displaystyle=2\pi r^{2}\left(\frac{\dot{M}}{4\pi
r^{2}v_{\rm CSM}}\right)v_{\rm sh}^{3}$ $\displaystyle\sim 5.5\times 10^{43}\
{\rm erg\ s^{-1}}\left(\frac{\dot{M}}{0.1M_{\odot}\ {\rm
yr}^{-1}}\right)\left(\frac{v_{\rm CSM}}{200\ {\rm km\
s^{-1}}}\right)^{-1}\left(\frac{v_{\rm sh}}{7000\ {\rm km\
s^{-1}}}\right)^{3}$ (A3)
where $v_{\rm sh}$ is the forward shock velocity. Assuming that the
luminosity at the second peak is powered by interaction with the CSM ejected
during the precursor phase, we infer a mass loss rate of
$\dot{M}\sim 2\times 10^{-2}\ M_{\odot}\ {\rm
yr}^{-1}\epsilon^{-1}\left(\frac{L_{\rm rad}}{10^{43}\ {\rm erg\
s^{-1}}}\right)\left(\frac{v_{\rm CSM}}{200\ {\rm km\
s^{-1}}}\right)\left(\frac{v_{\rm sh}}{7000\ {\rm km\ s^{-1}}}\right)^{-3},$
(A4)
where $\epsilon=L_{\rm rad}/L_{\rm kin}\leq 1$ is the radiation conversion
efficiency. While this estimate is quite sensitive to the assumed $v_{\rm
sh}$, it implies that a similarly high $\dot{M}$ is also required to explain
the SN. The required mass transfer rate of $\sim 0.02$–$0.2\ M_{\odot}$ yr$^{-1}$ for
$\epsilon\approx 0.1$–$1$ roughly overlaps with the range obtained from
simulations of binaries composed of a low-mass ($2.5$–$3\ M_{\odot}$) He star
and a neutron star, years to decades before the SN (Wu & Fuller, 2022, Figure
2).
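As a numerical sanity check on Equations (A1)–(A3), the scalings above can be evaluated directly. The following Python sketch (illustrative only; the function names are ours, and CGS units are used throughout) reproduces the quoted fiducial values:

```python
import math

MSUN = 1.989e33    # solar mass [g]
RSUN = 6.957e10    # solar radius [cm]
YR   = 3.156e7     # year [s]

def tau_csm(mdot_msun_yr, r_cm, kappa=0.1, v_csm_kms=200.0):
    """Wind optical depth at radius r (Eq. A1)."""
    mdot = mdot_msun_yr * MSUN / YR              # mass loss rate [g s^-1]
    return kappa * mdot / (4.0 * math.pi * r_cm * v_csm_kms * 1e5)

def l_kin(mdot_msun_yr, v_csm_kms=200.0, v_sh_kms=7000.0):
    """Kinetic-energy dissipation rate of the CSM shock (Eq. A3) [erg s^-1]."""
    mdot = mdot_msun_yr * MSUN / YR
    return 0.5 * mdot * (v_sh_kms * 1e5) ** 3 / (v_csm_kms * 1e5)

print(tau_csm(0.1, 600 * RSUN))   # ~60, as in Eq. (A1)
print(l_kin(0.1))                 # ~5.4e43 erg/s, consistent with Eq. (A3)
```

Dividing the fiducial $L_{\rm kin}$ into the observed $L_{\rm rad}$ gives the $\dot{M}$ estimate of Equation (A4).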
## Appendix B Late-time X-ray detectability of SN 2023fyq
In this appendix, we roughly estimate the X-ray detectability of SN 2023fyq
for future follow-up. We expect X-rays to escape once the ejecta become
transparent to photoionization by the oxygen and carbon they contain. Our
modeling favors low-mass (a few $M_{\odot}$) helium stars for the progenitor, with carbon-oxygen
cores of mass $\approx 1.5$–$2~{}M_{\odot}$. For an explosion ejecta from such
progenitors, we infer the mass of carbon/oxygen-rich material to be roughly
$M_{\rm ej,C/O}\sim 0.1$–$1~{}M_{\odot}$. The lower limit applies if a neutron
star is left behind in the explosion (as in ultra-stripped SNe considered in
Kashiyama et al. 2022), and the upper limit is if the bulk of the CO-core is
disrupted (e.g., by a merger) and becomes part of the SN ejecta. Adopting an
ejecta velocity of $v_{\rm ej}=7000$ km s$^{-1}$ and the X-ray photoionization
cross section of $\sigma_{\rm X}\sim 10^{-19}\ {\rm cm^{2}}(h\nu/{\rm
keV})^{-3}$, we expect X-rays with energy $h\nu$ to be transparent at
$\displaystyle t_{\rm trans}$ $\displaystyle\sim$
$\displaystyle\sqrt{\frac{\sigma_{\rm X}M_{\rm ej,C/O}/14m_{p}}{4\pi v_{\rm
ej}^{2}}}\sim 1~{}{\rm yr}\left(\frac{M_{\rm
ej,C/O}}{0.1~{}M_{\odot}}\right)^{1/2}\left(\frac{h\nu}{5~{}{\rm
keV}}\right)^{-3/2}.$ (B1)
Thus follow-up in hard X-rays in the years after the explosion is encouraged,
although the X-ray luminosity would depend on the uncertain degree of
fallback. If the fallback is similar to the ultra-stripped SN models in
Kashiyama et al. (2022), we expect the source to be detectable by current
X-ray facilities thanks to the proximity of this event.
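Equation (B1) is straightforward to evaluate for other choices of ejecta mass and photon energy. The sketch below (an illustrative helper of our own naming, in CGS units, with mean mass number 14 for a C/O mix as assumed in the text) reproduces the fiducial $\sim 1$ yr timescale:

```python
import math

MSUN = 1.989e33    # solar mass [g]
MP   = 1.673e-24   # proton mass [g]
YR   = 3.156e7     # year [s]

def t_transparent(m_ejCO_msun, hnu_kev, v_ej_kms=7000.0):
    """X-ray transparency time of the C/O-rich ejecta (Eq. B1), in years."""
    sigma_x = 1e-19 * hnu_kev ** -3             # photoionization cross section [cm^2]
    n_atoms = m_ejCO_msun * MSUN / (14.0 * MP)  # number of C/O atoms (mean A ~ 14)
    v_ej = v_ej_kms * 1e5                       # [cm s^-1]
    return math.sqrt(sigma_x * n_atoms / (4.0 * math.pi * v_ej ** 2)) / YR

print(t_transparent(0.1, 5.0))   # ~1 yr, as in Eq. (B1)
```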
## Appendix C Spectroscopic Observations
Table A1 shows a log of the spectroscopic observations of SN 2023fyq, and
Table A2 shows a log of the spectroscopic observation of SN 2019kbj.
Table A1: Spectroscopic observations of SN 2023fyq UT Date | Julian Date (Days) | Phase (Days) | Telescope | Instrument
---|---|---|---|---
2023-07-27 | 2460152.743 | -1.3 | Gemini | GMOS
2023-07-27 | 2460152.85 | -1.1 | FTS | FLOYDS
2023-07-28 | 2460153.859 | -0.1 | FTS | FLOYDS
2023-07-31 | 2460156.851 | 2.9 | FTS | FLOYDS
2023-08-01 | 2460157.74 | 3.7 | Gemini | GMOS
2023-08-04 | 2460160.858 | 6.9 | FTS | FLOYDS
2023-08-04 | 2460161.392 | 7.4 | NOT | ALFOSC
2023-12-12 | 2460291.123 | 137.1 | Keck | LRIS
2024-01-23 | 2460332.761 | 178.8 | SOAR | GHTS
2024-03-11 | 2460380.865 | 226.9 | LBT | MODS
2024-05-01 | 2460431.943 | 277.9 | Keck | LRIS
Table A2: Spectroscopic observations of SN 2019kbj UT Date | Julian Date (Days) | Phase (Days) | Telescope | Instrument
---|---|---|---|---
2019-09-23 | 2458750.817 | 80 | Keck | DEIMOS
## References
* Andrews & Smith (2018) Andrews, J. E., & Smith, N. 2018, MNRAS, 477, 74, doi: 10.1093/mnras/sty584
* Arnett (1982) Arnett, W. D. 1982, ApJ, 253, 785, doi: 10.1086/159681
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068
* Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f
* Becker (2015) Becker, A. 2015, HOTPANTS: High Order Transform of PSF ANd Template Subtraction. http://ascl.net/1504.004
* Bellm et al. (2019) Bellm, E. C., Kulkarni, S. R., Graham, M. J., et al. 2019, PASP, 131, 018002, doi: 10.1088/1538-3873/aaecbe
* Ben-Ami et al. (2023) Ben-Ami, T., Arcavi, I., Newsome, M., et al. 2023, ApJ, 946, 30, doi: 10.3847/1538-4357/acb432
* Bertin et al. (2002) Bertin, E., Mellier, Y., Radovich, M., et al. 2002, in Astronomical Society of the Pacific Conference Series, Vol. 281, Astronomical Data Analysis Software and Systems XI, ed. D. A. Bohlender, D. Durand, & T. H. Handley, 228
* Bianco et al. (2022) Bianco, F. B., Ivezić, Ž., Jones, R. L., et al. 2022, ApJS, 258, 1, doi: 10.3847/1538-4365/ac3e72
* Blagorodnova et al. (2017) Blagorodnova, N., Kotak, R., Polshaw, J., et al. 2017, ApJ, 834, 107, doi: 10.3847/1538-4357/834/2/107
* Blondin & Tonry (2007) Blondin, S., & Tonry, J. L. 2007, ApJ, 666, 1024, doi: 10.1086/520494
* Bradley et al. (2022) Bradley, L., Sipőcz, B., Robitaille, T., et al. 2022, astropy/photutils: 1.5.0, 1.5.0, Zenodo, doi: 10.5281/zenodo.6825092
* Breeveld et al. (2011) Breeveld, A. A., Landsman, W., Holland, S. T., et al. 2011, in American Institute of Physics Conference Series, Vol. 1358, Gamma Ray Bursts 2010, ed. J. E. McEnery, J. L. Racusin, & N. Gehrels, 373–376, doi: 10.1063/1.3621807
* Brennan et al. (2024) Brennan, S. J., Sollerman, J., Irani, I., et al. 2024, arXiv e-prints, arXiv:2401.15148, doi: 10.48550/arXiv.2401.15148
* Brown et al. (2013) Brown, T. M., Baliber, N., Bianco, F. B., et al. 2013, PASP, 125, 1031, doi: 10.1086/673168
* Chevalier (1982) Chevalier, R. A. 1982, ApJ, 258, 790, doi: 10.1086/160126
* Chevalier (2012) —. 2012, ApJ, 752, L2, doi: 10.1088/2041-8205/752/1/L2
* Clemens et al. (2004) Clemens, J. C., Crain, J. A., & Anderson, R. 2004, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 5492, Ground-based Instrumentation for Astronomy, ed. A. F. M. Moorwood & M. Iye, 331–340, doi: 10.1117/12.550069
* De (2023) De, K. 2023, Transient Name Server Discovery Report, 2023-825, 1
* De et al. (2018) De, K., Kasliwal, M. M., Ofek, E. O., et al. 2018, Science, 362, 201, doi: 10.1126/science.aas8693
* Demianenko et al. (2023) Demianenko, M., Malanchev, K., Samorodova, E., et al. 2023, A&A, 677, A16, doi: 10.1051/0004-6361/202245189
* Dessart et al. (2022) Dessart, L., Hillier, D. J., & Kuncarayakti, H. 2022, A&A, 658, A130, doi: 10.1051/0004-6361/202142436
* Ertl et al. (2020) Ertl, T., Woosley, S. E., Sukhbold, T., & Janka, H. T. 2020, ApJ, 890, 51, doi: 10.3847/1538-4357/ab6458
* Faber et al. (2003) Faber, S. M., Phillips, A. C., Kibrick, R. I., et al. 2003, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 4841, Instrument Design and Performance for Optical/Infrared Ground-based Telescopes, ed. M. Iye & A. F. M. Moorwood, 1657–1669, doi: 10.1117/12.460346
* Filippenko et al. (1993) Filippenko, A. V., Matheson, T., & Ho, L. C. 1993, ApJ, 415, L103, doi: 10.1086/187043
* Foley et al. (2007) Foley, R. J., Smith, N., Ganeshalingam, M., et al. 2007, ApJ, 657, L105, doi: 10.1086/513145
* Foreman-Mackey et al. (2013) Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306, doi: 10.1086/670067
* Fox & Smith (2019) Fox, O. D., & Smith, N. 2019, MNRAS, 488, 3772, doi: 10.1093/mnras/stz1925
* Fryer et al. (2013) Fryer, C. L., Belczynski, K., Berger, E., et al. 2013, ApJ, 764, 181, doi: 10.1088/0004-637X/764/2/181
* Fryer & Woosley (1998) Fryer, C. L., & Woosley, S. E. 1998, ApJ, 502, L9, doi: 10.1086/311493
* Fukugita et al. (1996) Fukugita, M., Ichikawa, T., Gunn, J. E., et al. 1996, AJ, 111, 1748, doi: 10.1086/117915
* Fuller (2017) Fuller, J. 2017, MNRAS, 470, 1642, doi: 10.1093/mnras/stx1314
* Fuller & Ro (2018) Fuller, J., & Ro, S. 2018, MNRAS, 476, 1853, doi: 10.1093/mnras/sty369
* Gangopadhyay et al. (2020) Gangopadhyay, A., Misra, K., Hiramatsu, D., et al. 2020, ApJ, 889, 170, doi: 10.3847/1538-4357/ab6328
* Gehrels et al. (2004) Gehrels, N., Chincarini, G., Giommi, P., et al. 2004, ApJ, 611, 1005, doi: 10.1086/422091
* Graham et al. (2019) Graham, M. J., Kulkarni, S. R., Bellm, E. C., et al. 2019, PASP, 131, 078001, doi: 10.1088/1538-3873/ab006c
* Graham et al. (2022) Graham, M. L., Kennedy, T. D., Kumar, S., et al. 2022, MNRAS, 511, 3682, doi: 10.1093/mnras/stac192
* Harris et al. (2020) Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, doi: 10.1038/s41586-020-2649-2
* Hiramatsu et al. (2024) Hiramatsu, D., Matsumoto, T., Berger, E., et al. 2024, ApJ, 964, 181, doi: 10.3847/1538-4357/ad2854
* Ho et al. (2023) Ho, A. Y. Q., Perley, D. A., Gal-Yam, A., et al. 2023, ApJ, 949, 120, doi: 10.3847/1538-4357/acc533
* Hook et al. (2004) Hook, I. M., Jørgensen, I., Allington-Smith, J. R., et al. 2004, PASP, 116, 425, doi: 10.1086/383624
* Hosseinzadeh & Gomez (2020) Hosseinzadeh, G., & Gomez, S. 2020, Light Curve Fitting, v0.2.0, Zenodo, doi: 10.5281/zenodo.4312178
* Hosseinzadeh et al. (2019) Hosseinzadeh, G., McCully, C., Zabludoff, A. I., et al. 2019, ApJ, 871, L9, doi: 10.3847/2041-8213/aafc61
* Hosseinzadeh et al. (2017) Hosseinzadeh, G., Arcavi, I., Valenti, S., et al. 2017, ApJ, 836, 158, doi: 10.3847/1538-4357/836/2/158
* Hunter et al. (2009) Hunter, D. J., Valenti, S., Kotak, R., et al. 2009, A&A, 508, 371, doi: 10.1051/0004-6361/200912896
* Hunter (2007) Hunter, J. D. 2007, Computing in Science and Engineering, 9, 90, doi: 10.1109/MCSE.2007.55
* Jiang et al. (2020) Jiang, B., Jiang, S., & Ashley Villar, V. 2020, Research Notes of the American Astronomical Society, 4, 16, doi: 10.3847/2515-5172/ab7128
* Karamehmetoglu et al. (2017) Karamehmetoglu, E., Taddia, F., Sollerman, J., et al. 2017, A&A, 602, A93, doi: 10.1051/0004-6361/201629619
* Kashiyama et al. (2022) Kashiyama, K., Sawada, R., & Suwa, Y. 2022, ApJ, 935, 86, doi: 10.3847/1538-4357/ac7ff7
* Kleiser & Kasen (2014) Kleiser, I. K. W., & Kasen, D. 2014, MNRAS, 438, 318, doi: 10.1093/mnras/stt2191
* Kochanek et al. (2017) Kochanek, C. S., Shappee, B. J., Stanek, K. Z., et al. 2017, PASP, 129, 104502, doi: 10.1088/1538-3873/aa80d9
* Könyves-Tóth et al. (2020) Könyves-Tóth, R., Vinkó, J., Ordasi, A., et al. 2020, ApJ, 892, 121, doi: 10.3847/1538-4357/ab76bb
* Lang et al. (2010) Lang, D., Hogg, D. W., Mierle, K., Blanton, M., & Roweis, S. 2010, AJ, 139, 1782, doi: 10.1088/0004-6256/139/5/1782
* Leung & Fuller (2020) Leung, S.-C., & Fuller, J. 2020, ApJ, 900, 99, doi: 10.3847/1538-4357/abac5d
* Leung et al. (2019) Leung, S.-C., Nomoto, K., & Blinnikov, S. 2019, ApJ, 887, 72, doi: 10.3847/1538-4357/ab4fe5
* Liu et al. (2016) Liu, Y.-Q., Modjaz, M., Bianco, F. B., & Graur, O. 2016, ApJ, 827, 90, doi: 10.3847/0004-637X/827/2/90
* Lu et al. (2023) Lu, W., Fuller, J., Quataert, E., & Bonnerot, C. 2023, MNRAS, 519, 1409, doi: 10.1093/mnras/stac3621
* Lyman et al. (2016) Lyman, J. D., Bersier, D., James, P. A., et al. 2016, MNRAS, 457, 328, doi: 10.1093/mnras/stv2983
* MacLeod et al. (2018) MacLeod, M., Ostriker, E. C., & Stone, J. M. 2018, ApJ, 863, 5, doi: 10.3847/1538-4357/aacf08
* Margalit (2022) Margalit, B. 2022, ApJ, 933, 238, doi: 10.3847/1538-4357/ac771a
* Masci (2011) Masci, F. 2011, Computing flux upper-limits for non-detections. https://web.ipac.caltech.edu/staff/fmasci/home/mystats/UpperLimits_FM2011.pdf
* Masci et al. (2023) Masci, F. J., Laher, R. R., Rusholme, B., et al. 2023, arXiv e-prints, arXiv:2305.16279, doi: 10.48550/arXiv.2305.16279
* Matheson et al. (2000) Matheson, T., Filippenko, A. V., Chornock, R., Leonard, D. C., & Li, W. 2000, AJ, 119, 2303, doi: 10.1086/301352
* Matzner & Ro (2021) Matzner, C. D., & Ro, S. 2021, ApJ, 908, 23, doi: 10.3847/1538-4357/abd03b
* Mauerhan et al. (2013) Mauerhan, J. C., Smith, N., Filippenko, A. V., et al. 2013, MNRAS, 430, 1801, doi: 10.1093/mnras/stt009
* Mauerhan et al. (2015) Mauerhan, J. C., Van Dyk, S. D., Graham, M. L., et al. 2015, MNRAS, 447, 1922, doi: 10.1093/mnras/stu2541
* Maund et al. (2016) Maund, J. R., Pastorello, A., Mattila, S., Itagaki, K., & Boles, T. 2016, ApJ, 833, 128, doi: 10.3847/1538-4357/833/2/128
* Metzger (2022) Metzger, B. D. 2022, ApJ, 932, 84, doi: 10.3847/1538-4357/ac6d59
* Metzger & Pejcha (2017) Metzger, B. D., & Pejcha, O. 2017, MNRAS, 471, 3200, doi: 10.1093/mnras/stx1768
* Modjaz et al. (2019) Modjaz, M., Gutiérrez, C. P., & Arcavi, I. 2019, Nature Astronomy, 3, 717, doi: 10.1038/s41550-019-0856-2
* Modjaz et al. (2009) Modjaz, M., Li, W., Butler, N., et al. 2009, ApJ, 702, 226, doi: 10.1088/0004-637X/702/1/226
* Morozova et al. (2020) Morozova, V., Piro, A. L., Fuller, J., & Van Dyk, S. D. 2020, ApJ, 891, L32, doi: 10.3847/2041-8213/ab77c8
* Munari & Zwitter (1997) Munari, U., & Zwitter, T. 1997, A&A, 318, 269
* Ofek et al. (2013) Ofek, E. O., Sullivan, M., Cenko, S. B., et al. 2013, Nature, 494, 65, doi: 10.1038/nature11877
* Ofek et al. (2014) Ofek, E. O., Sullivan, M., Shaviv, N. J., et al. 2014, ApJ, 789, 104, doi: 10.1088/0004-637X/789/2/104
* Oke et al. (1995) Oke, J. B., Cohen, J. G., Carr, M., et al. 1995, PASP, 107, 375, doi: 10.1086/133562
* Pastorello et al. (2007) Pastorello, A., Smartt, S. J., Mattila, S., et al. 2007, Nature, 447, 829, doi: 10.1038/nature05825
* Pastorello et al. (2008) Pastorello, A., Quimby, R. M., Smartt, S. J., et al. 2008, MNRAS, 389, 131, doi: 10.1111/j.1365-2966.2008.13603.x
* Pastorello et al. (2013) Pastorello, A., Cappellaro, E., Inserra, C., et al. 2013, ApJ, 767, 1, doi: 10.1088/0004-637X/767/1/1
* Pastorello et al. (2015a) Pastorello, A., Tartaglia, L., Elias-Rosa, N., et al. 2015a, MNRAS, 454, 4293, doi: 10.1093/mnras/stv2256
* Pastorello et al. (2015b) Pastorello, A., Prieto, J. L., Elias-Rosa, N., et al. 2015b, MNRAS, 453, 3649, doi: 10.1093/mnras/stv1812
* Pastorello et al. (2015c) Pastorello, A., Benetti, S., Brown, P. J., et al. 2015c, MNRAS, 449, 1921, doi: 10.1093/mnras/stu2745
* Pastorello et al. (2016) Pastorello, A., Wang, X. F., Ciabattari, F., et al. 2016, MNRAS, 456, 853, doi: 10.1093/mnras/stv2634
* Pastorello et al. (2018) Pastorello, A., Kochanek, C. S., Fraser, M., et al. 2018, MNRAS, 474, 197, doi: 10.1093/mnras/stx2668
* Pejcha (2014) Pejcha, O. 2014, ApJ, 788, 22, doi: 10.1088/0004-637X/788/1/22
* Pejcha et al. (2016) Pejcha, O., Metzger, B. D., & Tomida, K. 2016, MNRAS, 455, 4351, doi: 10.1093/mnras/stv2592
* Pellegrino et al. (2022) Pellegrino, C., Howell, D. A., Vinkó, J., et al. 2022, ApJ, 926, 125, doi: 10.3847/1538-4357/ac3e63
* Perley (2019) Perley, D. A. 2019, PASP, 131, 084503, doi: 10.1088/1538-3873/ab215d
* Pogge (2019) Pogge, R. 2019, rwpogge/modsCCDRed 2.0, 2.0, Zenodo, doi: 10.5281/zenodo.2550741
* Pogge et al. (2010) Pogge, R. W., Atwood, B., Brewer, D. F., et al. 2010, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7735, Ground-based and Airborne Instrumentation for Astronomy III, ed. I. S. McLean, S. K. Ramsay, & H. Takami, 77350A, doi: 10.1117/12.857215
* Poznanski et al. (2012) Poznanski, D., Prochaska, J. X., & Bloom, J. S. 2012, MNRAS, 426, 1465, doi: 10.1111/j.1365-2966.2012.21796.x
* Prentice et al. (2020) Prentice, S. J., Maguire, K., Boian, I., et al. 2020, MNRAS, 499, 1450, doi: 10.1093/mnras/staa2947
* Prochaska et al. (2020) Prochaska, J. X., Hennawi, J. F., Westfall, K. B., et al. 2020, Journal of Open Source Software, 5, 2308, doi: 10.21105/joss.02308
* Prochaska et al. (2020) Prochaska, J. X., Hennawi, J., Cooke, R., et al. 2020, pypeit/PypeIt: Release 1.0.0, v1.0.0, Zenodo, doi: 10.5281/zenodo.3743493
* Quataert & Shiode (2012) Quataert, E., & Shiode, J. 2012, MNRAS, 423, L92, doi: 10.1111/j.1745-3933.2012.01264.x
* Renzo et al. (2020) Renzo, M., Farmer, R., Justham, S., et al. 2020, A&A, 640, A56, doi: 10.1051/0004-6361/202037710
* Sanders et al. (2013) Sanders, N. E., Soderberg, A. M., Foley, R. J., et al. 2013, ApJ, 769, 39, doi: 10.1088/0004-637X/769/1/39
* Schlafly & Finkbeiner (2011) Schlafly, E. F., & Finkbeiner, D. P. 2011, ApJ, 737, 103, doi: 10.1088/0004-637X/737/2/103
* Schlafly et al. (2010) Schlafly, E. F., Finkbeiner, D. P., Schlegel, D. J., et al. 2010, ApJ, 725, 1175, doi: 10.1088/0004-637X/725/1/1175
* Schrøder et al. (2020) Schrøder, S. L., MacLeod, M., Loeb, A., Vigna-Gómez, A., & Mandel, I. 2020, ApJ, 892, 13, doi: 10.3847/1538-4357/ab7014
* Science Software Branch at STScI (2012) Science Software Branch at STScI. 2012, PyRAF: Python alternative for IRAF, Astrophysics Source Code Library, record ascl:1207.011. http://ascl.net/1207.011
* Shappee et al. (2014) Shappee, B. J., Prieto, J. L., Grupe, D., et al. 2014, ApJ, 788, 48, doi: 10.1088/0004-637X/788/1/48
* Shingles et al. (2021) Shingles, L., Smith, K. W., Young, D. R., et al. 2021, Transient Name Server AstroNote, 7, 1
* Shiode & Quataert (2014) Shiode, J. H., & Quataert, E. 2014, ApJ, 780, 96, doi: 10.1088/0004-637X/780/1/96
* Shivvers et al. (2017) Shivvers, I., Zheng, W., Van Dyk, S. D., et al. 2017, MNRAS, 471, 4381, doi: 10.1093/mnras/stx1885
* Smith et al. (2020) Smith, K. W., Smartt, S. J., Young, D. R., et al. 2020, PASP, 132, 085002, doi: 10.1088/1538-3873/ab936e
* Smith (2014) Smith, N. 2014, ARA&A, 52, 487, doi: 10.1146/annurev-astro-081913-040025
* Smith (2017) —. 2017, in Handbook of Supernovae, ed. A. W. Alsabti & P. Murdin, 403, doi: 10.1007/978-3-319-21846-5_38
* Smith & Andrews (2020) Smith, N., & Andrews, J. E. 2020, MNRAS, 499, 3544, doi: 10.1093/mnras/staa3047
* Smith et al. (2024) Smith, N., Andrews, J. E., Milne, P., et al. 2024, MNRAS, 530, 405, doi: 10.1093/mnras/stae726
* Smith & Arnett (2014) Smith, N., & Arnett, W. D. 2014, ApJ, 785, 82, doi: 10.1088/0004-637X/785/2/82
* Smith et al. (2008) Smith, N., Foley, R. J., & Filippenko, A. V. 2008, ApJ, 680, 568, doi: 10.1086/587860
* Smith & McCray (2007) Smith, N., & McCray, R. 2007, ApJ, 671, L17, doi: 10.1086/524681
* Smith et al. (2010) Smith, N., Miller, A., Li, W., et al. 2010, AJ, 139, 1451, doi: 10.1088/0004-6256/139/4/1451
* Smith et al. (2015) Smith, N., Mauerhan, J. C., Cenko, S. B., et al. 2015, MNRAS, 449, 1876, doi: 10.1093/mnras/stv354
* Smith et al. (2016) Smith, N., Andrews, J. E., Van Dyk, S. D., et al. 2016, MNRAS, 458, 950, doi: 10.1093/mnras/stw219
* Soker (2019) Soker, N. 2019, Science China Physics, Mechanics, and Astronomy, 62, 119501, doi: 10.1007/s11433-019-9402-x
* Soker & Tylenda (2003) Soker, N., & Tylenda, R. 2003, ApJ, 582, L105, doi: 10.1086/367759
* Strotjohann et al. (2021) Strotjohann, N. L., Ofek, E. O., Gal-Yam, A., et al. 2021, ApJ, 907, 99, doi: 10.3847/1538-4357/abd032
* Tartaglia et al. (2016) Tartaglia, L., Pastorello, A., Sullivan, M., et al. 2016, MNRAS, 459, 1039, doi: 10.1093/mnras/stw675
* Tartaglia et al. (2018) Tartaglia, L., Sand, D. J., Valenti, S., et al. 2018, ApJ, 853, 62, doi: 10.3847/1538-4357/aaa014
* Thöne et al. (2011) Thöne, C. C., de Ugarte Postigo, A., Fryer, C. L., et al. 2011, Nature, 480, 72, doi: 10.1038/nature10611
* Tonry (2011) Tonry, J. L. 2011, PASP, 123, 58, doi: 10.1086/657997
* Tonry et al. (2018) Tonry, J. L., Denneau, L., Heinze, A. N., et al. 2018, PASP, 130, 064505, doi: 10.1088/1538-3873/aabadf
* Torres et al. (2017) Torres, S., Briceño, C., & Quint, B. 2017, Goodman HTS Pipeline Documentation 1.3.6. https://soardocs.readthedocs.io/projects/goodman-pipeline/
* Tsuna et al. (2024) Tsuna, D., Matsumoto, T., Wu, S. C., & Fuller, J. 2024, arXiv e-prints, arXiv:2401.02389, doi: 10.48550/arXiv.2401.02389
* Tsuna et al. (2023) Tsuna, D., Takei, Y., & Shigeyama, T. 2023, ApJ, 945, 104, doi: 10.3847/1538-4357/acbbc6
* Tsvetkov et al. (2015) Tsvetkov, D. Y., Volkov, I. M., & Pavlyuk, N. N. 2015, Information Bulletin on Variable Stars, 6140, 1, doi: 10.48550/arXiv.1504.01864
* Tully et al. (2016) Tully, R. B., Courtois, H. M., & Sorce, J. G. 2016, AJ, 152, 50, doi: 10.3847/0004-6256/152/2/50
* Tylenda et al. (2011) Tylenda, R., Hajduk, M., Kamiński, T., et al. 2011, A&A, 528, A114, doi: 10.1051/0004-6361/201016221
* Valenti et al. (2008) Valenti, S., Benetti, S., Cappellaro, E., et al. 2008, MNRAS, 383, 1485, doi: 10.1111/j.1365-2966.2007.12647.x
* Valenti et al. (2014) Valenti, S., Sand, D., Pastorello, A., et al. 2014, MNRAS, 438, L101, doi: 10.1093/mnrasl/slt171
* Valenti et al. (2016) Valenti, S., Howell, D. A., Stritzinger, M. D., et al. 2016, MNRAS, 459, 3939, doi: 10.1093/mnras/stw870
* Valerin et al. (2023) Valerin, G., Benetti, S., Elias–Rosa, N., et al. 2023, Transient Name Server Classification Report, 2023-1777, 1
* Virtanen et al. (2020) Virtanen, P., Gommers, R., Oliphant, T. E., et al. 2020, Nature Methods, 17, 261, doi: 10.1038/s41592-019-0686-2
* Wang et al. (2024) Wang, Q., Goel, A., Dessart, L., et al. 2024, MNRAS, doi: 10.1093/mnras/stae1038
* Wes McKinney (2010) Wes McKinney. 2010, in Proceedings of the 9th Python in Science Conference, ed. Stéfan van der Walt & Jarrod Millman, 56 – 61, doi: 10.25080/Majora-92bf1922-00a
* Woosley (2017) Woosley, S. E. 2017, ApJ, 836, 244, doi: 10.3847/1538-4357/836/2/244
* Woosley (2019) —. 2019, ApJ, 878, 49, doi: 10.3847/1538-4357/ab1b41
* Woosley et al. (2007) Woosley, S. E., Blinnikov, S., & Heger, A. 2007, Nature, 450, 390, doi: 10.1038/nature06333
* Wu & Fuller (2022) Wu, S. C., & Fuller, J. 2022, ApJ, 940, L27, doi: 10.3847/2041-8213/ac9b3d
* Yang et al. (2019) Yang, S., Sand, D. J., Valenti, S., et al. 2019, ApJ, 875, 59, doi: 10.3847/1538-4357/ab0e06
* Yao et al. (2020) Yao, Y., De, K., Kasliwal, M. M., et al. 2020, ApJ, 900, 46, doi: 10.3847/1538-4357/abaa3d
* Yaron & Gal-Yam (2012) Yaron, O., & Gal-Yam, A. 2012, PASP, 124, 668, doi: 10.1086/666656
* Yoshida et al. (2016) Yoshida, T., Umeda, H., Maeda, K., & Ishii, T. 2016, MNRAS, 457, 351, doi: 10.1093/mnras/stv3002
* Young (2022) Young, D. 2022, Plot Results from ATLAS Force Photometry Service. https://gist.github.com/thespacedoctor/86777fa5a9567b7939e8d84fd8cf6a76
* Yuan & Narayan (2014) Yuan, F., & Narayan, R. 2014, ARA&A, 52, 529, doi: 10.1146/annurev-astro-082812-141003
* Zhang & Fryer (2001) Zhang, W., & Fryer, C. L. 2001, ApJ, 550, 357, doi: 10.1086/319734
††thanks: Research supported by US Office of Naval Research Grant
N000141712622.
# Better Automata through Process Algebra
Rance Cleaveland Department of Computer Science, University of Maryland,
College Park MD 20742 USA<EMAIL_ADDRESS>
###### Abstract
This paper shows how the use of Structural Operational Semantics (SOS) in the
style popularized by the process-algebra community can lead to a more succinct
and useful construction for building finite automata from regular expressions.
Such constructions have been known for decades, and form the basis for the
proofs of one direction of Kleene’s Theorem. The purpose of the new
construction is, on the one hand, to show students how small automata can be
constructed, without the need for empty transitions, and on the other hand to
show how the construction method admits closure proofs of regular languages
with respect to other operators as well. These results, while not
theoretically surprising, point to an additional influence of process-
algebraic research: in addition to providing fundamental insights into the
nature of concurrent computation, it also sheds new light on old, well-known
constructions in automata theory.
###### keywords:
Process algebra; finite automata; regular expressions; operational semantics
## 1 Introduction
It is an honor to write this paper in celebration of Jos Baeten on the
occasion of the publication of his _Festschrift_. I recall first becoming
aware of Jos late in my PhD studies at Cornell University. Early in my
doctoral career I had become independently interested in process algebra,
primarily through Robin Milner’s original monograph, _A Calculus of
Communicating Systems_ [Mil80], and indeed wound up writing my dissertation on
the topic. I was working largely on my own; apart from very stimulating
interactions with Prakash Panangaden, who was at Cornell at the time, there
were no researchers in the area at Cornell. It was in this milieu that I
stumbled across the seminal papers by Jos’ colleagues, Jan Bergstra and Jan
Willem Klop, describing the Algebra of Communicating Processes [BK84, BK85]. I
was impressed with their classically algebraic approach, and their semantic
accounts based on graph constructions. This, together with Milner’s focus on
operational semantics and the Communicating Sequential Processes community’s
emphasis on denotational semantics [BHR84], finally enabled me to truly
understand the deep and satisfying links among the operational, denotational
and axiomatic approaches, not only to process algebra but to program
semantics in general.
While Jos was not a co-author of the two papers just cited, he was an early
contributor to the process-algebraic field and has remained a prolific
researcher in both theoretical and applied aspects of the discipline. I have
followed his career, and admired his interest in both foundational theory and
practical applications of process theory, since completing my PhD in 1987. It
is this broader view on the impact of process algebra that is the motivation
for this note. Indeed, I will not focus so much on new theoretical results,
satisfying though they can be. Rather, I want to recount a story about my use
of process-algebra-inspired techniques to redevelop part of an undergraduate
course on automata theory that I taught for a number of years. Specifically,
I will discuss how I have used the Structural Operational Semantics (SOS)
techniques employed extensively in process algebra to present ways of
constructing finite automata from regular expressions that I find more
satisfying than those typically covered in textbooks. Such constructions constitute a
proof of one half of Kleene’s Theorem [Kle56], which asserts a correspondence
between regular languages and those accepted by finite automata.
In the rest of this paper I present the construction and contrast it to the
constructions found in classical automata-theory textbooks such as [HMU06],
explaining why I find the work presented here preferable from a pedagogical
point of view. I also briefly situate the work in the setting of an efficient
technique [BS86] used in practice for converting regular expressions to finite
automata. The message I hope to convey is that, in addition to contributing
foundational understanding to notions of concurrent computation, process
algebra can also cast new light on well-understood automaton constructions,
and that pioneers in process algebra, such as Jos Baeten, are doubly
deserving of the accolades they receive from the research community.
## 2 Alphabets, Languages, Regular Expressions and Automata
This section reviews the definitions and notation used later in this note for
formal languages, regular expressions and finite automata. In the interest of
succinctness the definitions depart slightly from those found in automata-
theory textbooks, although notationally I try to follow the conventions used
in those books.
### 2.1 Alphabets and Languages
At their most foundational level digital computers are devices for computing
with symbols. Alphabets and languages formalize this intuition mathematically.
[Alphabet, word]
1. An _alphabet_ is a finite non-empty set $\Sigma$ of symbols.
2. A _word_ $w$ over alphabet $\Sigma$ is a finite sequence $a_{1}\ldots a_{k}$ of elements from $\Sigma$. We say that $k$ is the _length_ of $w$ in this case. If $k=0$ we say $w$ is _empty_; we write $\varepsilon$ for the (unique) empty word over $\Sigma$. Note that every $a\in\Sigma$ is also a (length-one) word over $\Sigma$. We write $\Sigma^{*}$ for the set of all words over $\Sigma$.
3. If $w_{1}=a_{1}\ldots a_{k}$ and $w_{2}=b_{1}\ldots b_{\ell}$ are words over $\Sigma$ then the _concatenation_, $w_{1}\cdot w_{2}$, of $w_{1}$ and $w_{2}$ is the word $a_{1}\ldots a_{k}b_{1}\ldots b_{\ell}$. Note that $w\cdot\varepsilon=\varepsilon\cdot w=w$ for any word $w$. We often omit $\cdot$ and write $w_{1}w_{2}$ for the concatenation of $w_{1}$ and $w_{2}$.
4. A _language_ $L$ over alphabet $\Sigma$ is a subset of $\Sigma^{*}$. The set of all languages over $\Sigma$ is the set of all subsets of $\Sigma^{*}$, and is written $2^{\Sigma^{*}}$ following standard mathematical conventions.
Since languages over $\Sigma$ are sets, general set-theoretic operations,
including $\cup$ (union), $\cap$ (intersection) and $-$ (set difference) may
be applied to them. Other, language-specific operations may also be defined.
[Language concatenation, Kleene closure] Let $\Sigma$ be an alphabet.
1. Let $L_{1},L_{2}\subseteq\Sigma^{*}$ be languages over $\Sigma$. Then the _concatenation_, $L_{1}\cdot L_{2}$, of $L_{1}$ and $L_{2}$ is defined as follows.
$L_{1}\cdot L_{2}=\{w_{1}\cdot w_{2}\mid w_{1}\in L_{1}\textnormal{ and }w_{2}\in L_{2}\}$
2. Let $L\subseteq\Sigma^{*}$ be a language over $\Sigma$. Then the _Kleene closure_, $L^{*}$, of $L$ is defined inductively as follows. (Textbooks typically define $L^{*}$ differently, by first introducing $L^{i}$ for $i\geq 0$ and then taking $L^{*}=\bigcup_{i=0}^{\infty}L^{i}$.)
* $\varepsilon\in L^{*}$
* If $w_{1}\in L$ and $w_{2}\in L^{*}$ then $w_{1}\cdot w_{2}\in L^{*}$.
### 2.2 Regular Expressions
_Regular expressions_ provide a notation for defining languages.
[Regular expression] Let $\Sigma$ be an alphabet. Then the set,
$\mathcal{R}(\Sigma)$, of _regular expressions_ over $\Sigma$ is defined
inductively as follows.
* •
$\emptyset\in\mathcal{R}(\Sigma)$.
* •
$\varepsilon\in\mathcal{R}(\Sigma)$.
* •
If $a\in\Sigma$ then $a\in\mathcal{R}(\Sigma)$.
* •
If $r_{1}\in\mathcal{R}(\Sigma)$ and $r_{2}\in\mathcal{R}(\Sigma)$ then
$r_{1}+r_{2}\in\mathcal{R}(\Sigma)$ and $r_{1}\cdot
r_{2}\in\mathcal{R}(\Sigma)$.
* •
If $r\in\mathcal{R}(\Sigma)$ then $r^{*}\in\mathcal{R}(\Sigma)$.
It should be noted that $\mathcal{R}(\Sigma)$ is a set of expressions; the
occurrences of $\emptyset,\varepsilon,+,\cdot$ and ∗ are symbols that do not
innately possess any meaning, but must instead be given a semantics. This is
done by interpreting regular expressions mathematically as languages. The
formal definition takes the form of a function,
$\mathcal{L}\in\mathcal{R}(\Sigma)\rightarrow 2^{\Sigma^{*}}$ assigning a
language $\mathcal{L}(r)\subseteq\Sigma^{*}$ to regular expression $r$.
[Language of a regular expression, regular language] Let $\Sigma$ be an
alphabet, and $r\in\mathcal{R}(\Sigma)$ a regular expression over $\Sigma$.
Then the _language_ , $\mathcal{L}(r)\subseteq\Sigma^{*}$, associated with $r$
is defined inductively as follows.
$\mathcal{L}(r)=\left\\{\begin{array}[]{lp{5cm}}\emptyset&if $r=\emptyset$\\\
\\{\varepsilon\\}&if $r=\varepsilon$\\\ \\{a\\}&if $r=a$ and $a\in\Sigma$\\\
\mathcal{L}(r_{1})\cup\mathcal{L}(r_{2})&if $r=r_{1}+r_{2}$\\\
\mathcal{L}(r_{1})\cdot\mathcal{L}(r_{2})&if $r=r_{1}\cdot r_{2}$\\\
(\mathcal{L}(r^{\prime}))^{*}&if $r=(r^{\prime})^{*}$\end{array}\right.$
A language $L\subseteq\Sigma^{*}$ is _regular_ if and only if there is a
regular expression $r\in\mathcal{R}(\Sigma)$ such that $\mathcal{L}(r)=L$.
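Since $\mathcal{L}(r)$ may be infinite, it cannot be computed outright, but the inductive clauses above can be animated by enumerating only the words of length at most $n$. The Python sketch below is my own illustration, not part of the formal development; so is the tuple encoding of regular expressions it uses (tags '0', 'e', 'sym', '+', '.', '*').

```python
# Regular expressions as nested tuples (an illustrative encoding):
#   ('0',) for the empty set, ('e',) for epsilon, ('sym', 'a') for a symbol,
#   ('+', r1, r2), ('.', r1, r2), ('*', r) for the three operators.

def lang(r, n):
    """Return {w in L(r) : len(w) <= n} as a set of strings."""
    tag = r[0]
    if tag == '0':                                  # L(0) = {}
        return set()
    if tag == 'e':                                  # L(e) = {epsilon}
        return {''}
    if tag == 'sym':                                # L(a) = {a}
        return {r[1]} if len(r[1]) <= n else set()
    if tag == '+':                                  # union
        return lang(r[1], n) | lang(r[2], n)
    if tag == '.':                                  # concatenation, truncated
        return {u + v for u in lang(r[1], n)
                      for v in lang(r[2], n) if len(u + v) <= n}
    if tag == '*':                                  # Kleene closure: iterate
        words, frontier = {''}, {''}
        while frontier:
            frontier = {u + v for u in frontier
                        for v in lang(r[1], n) if len(u + v) <= n} - words
            words |= frontier
        return words
    raise ValueError('unknown tag: %r' % tag)
```

For instance, $\mathcal{L}((abb+a)^{*})$ truncated at length 4 comes out as $\{\varepsilon,a,aa,aaa,aaaa,abb,abba,aabb\}$.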
### 2.3 Finite Automata
Traditional accounts of finite automata typically introduce three variations
of the notion: deterministic (DFA), nondeterministic (NFA), and
nondeterministic with $\varepsilon$-transitions (NFA-$\varepsilon$). I will do
the same, although I will do so in a somewhat different order than is typical.
[Nondeterministic Finite Automaton (NFA)] A _nondeterministic finite automaton_
(NFA) is a tuple $(Q,\Sigma,\delta,q_{I},F)$, where:
* •
$Q$ is a finite non-empty set of _states_ ;
* •
$\Sigma$ is an _alphabet_ ;
* •
$\delta\subseteq Q\times\Sigma\times Q$ is the _transition relation_ ;
* •
$q_{I}\in Q$ is the _initial state_ ; and
* •
$F\subseteq Q$ is the set of _accepting_ , or _final_ , states.
This definition of NFA differs slightly from e.g. [HMU06] in that $\delta$ is
given as a relation rather than as a function in $Q\times\Sigma\rightarrow 2^{Q}$. It
also defines the form of a NFA but not the sense in which it is indeed a
machine for processing words in a language. The next definition does this by
associating a language $\mathcal{L}(M)$ with a given NFA
$M=(Q,\Sigma,\delta,q_{I},F)$.
[Language of a NFA] Let $M=(Q,\Sigma,\delta,q_{I},F)$ be a NFA.
1. 1.
Let $q\in Q$ be a state of $M$ and $w\in\Sigma^{*}$ be a word over $\Sigma$.
Then $M$ _accepts_ $w$ from $q$ if and only if one of the following holds.
* •
$w=\varepsilon$ and $q\in F$; or
* •
$w=aw^{\prime}$ for some $a\in\Sigma$ and $w^{\prime}\in\Sigma^{*}$, and there
exists $(q,a,q^{\prime})\in\delta$ such that $M$ accepts $w^{\prime}$ from
$q^{\prime}$.
2. 2.
The _language_ , $\mathcal{L}(M)$, accepted by $M$ is defined as follows.
$\mathcal{L}(M)=\\{w\in\Sigma^{*}\mid M\textnormal{ accepts }w\textnormal{
from }q_{I}\\}$
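The two acceptance clauses are themselves recursive, so they transcribe directly into code. The following Python sketch is my own illustration (the representation of $\delta$ as a set of triples and words as strings is mine, not part of the definitions).

```python
def accepts_from(delta, F, q, w):
    """M accepts w from q: either w is empty and q is accepting, or
    w = a w' and some a-transition from q leads to a state accepting w'."""
    if w == '':
        return q in F
    a, rest = w[0], w[1:]
    return any(accepts_from(delta, F, q2, rest)
               for (q1, sym, q2) in delta if q1 == q and sym == a)

def nfa_accepts(M, w):
    """w is in L(M) iff M accepts w from the initial state."""
    Q, Sigma, delta, qI, F = M
    return accepts_from(delta, F, qI, w)
```

As a check, the two-state machine with transitions $(0,a,1)$ and $(1,b,0)$, initial and accepting state $0$, accepts exactly the words in $(ab)^{*}$.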
Deterministic Finite Automata (DFAs) constitute a subclass of NFAs whose
transition relation is deterministic, in a precisely defined sense.
[Deterministic Finite Automaton (DFA)] NFA $M=(Q,\Sigma,\delta,q_{I},F)$ is a
_deterministic finite automaton_ (DFA) if and only if $\delta$ satisfies the
following: for every $q\in Q$ and $a\in\Sigma$, there exists exactly one
$q^{\prime}$ such that $(q,a,q^{\prime})\in\delta$.
Since DFAs are NFAs the definition of $\mathcal{L}$ in Definition 2.3 is
directly applicable to them as well. NFAs with $\varepsilon$-transitions are now
defined as follows.
[NFAs with $\varepsilon$-Transitions] A _nondeterministic automaton with
$\varepsilon$-transitions_ (NFA-$\varepsilon$) is a tuple
$(Q,\Sigma,\delta,q_{I},F)$, where:
* •
$Q$ is a nonempty finite set of _states_ ;
* •
$\Sigma$ is an _alphabet_ , with $\varepsilon\not\in\Sigma$;
* •
$\delta\subseteq Q\times(\Sigma\cup\\{\varepsilon\\})\times Q$ is the
_transition relation_ ;
* •
$q_{I}\in Q$ is the _initial state_ ; and
* •
$F\subseteq Q$ is the set of _accepting_ , or _final_ , states.
An NFA-$\varepsilon$ is like a NFA except that some transitions can be labeled
with the empty string $\varepsilon$ rather than a symbol from $\Sigma$. The
intuition is that a transition of form $(q,\varepsilon,q^{\prime})$ can occur
without consuming any symbol as an input. Formalizing this intuition, and
defining $\mathcal{L}(M)$ for NFA-$\varepsilon$, may be done as follows.
[Language of a NFA-$\varepsilon$] Let $M=(Q,\Sigma,\delta,q_{I},F)$ be a
NFA-$\varepsilon$.
1. 1.
Let $q\in Q$ and $w\in\Sigma^{*}$. Then $M$ _accepts_ $w$ _from_ $q$ if and
only if one of the following holds.
* •
$w=\varepsilon$ and $q\in F$; or
* •
$w=aw^{\prime}$ for some $a\in\Sigma$ and $w^{\prime}\in\Sigma^{*}$ and there
exists $q^{\prime}\in Q$ such that $(q,a,q^{\prime})\in\delta$ and $M$ accepts
$w^{\prime}$ from $q^{\prime}$; or
* •
there exists $q^{\prime}\in Q$ such that $(q,\varepsilon,q^{\prime})\in\delta$
and $M$ accepts $w$ from $q^{\prime}$.
2. 2.
The _language_ , $\mathcal{L}(M)$, accepted by $M$ is defined as follows.
$\mathcal{L}(M)=\\{w\in\Sigma^{*}\mid M\textnormal{ accepts }w\textnormal{
from }q_{I}\\}$
Defining the language of a NFA-$\varepsilon$ requires redefining the notion of
a machine accepting a string from state $q$ as given in the definition of the
language of a NFA. This redefinition reflects the essential difference between
$\varepsilon$-transitions and those labeled by alphabet symbols.
The three types of automata have differences in form, but equivalent
expressive power. It should first be noted that, just as every DFA is already
a NFA, every NFA is also a NFA-$\varepsilon$, namely, a NFA-$\varepsilon$ with
no $\varepsilon$-transitions. Thus, every language accepted by some DFA is
also accepted by some NFA, and every language accepted by some NFA is accepted
by some NFA-$\varepsilon$. The next theorem establishes the converses of these
implications.
###### Theorem 1 (Equivalence of DFAs, NFAs and NFA-$\varepsilon$s).
1. 1.
Let $M$ be a NFA. Then there is a DFA $D(M)$ such that
$\mathcal{L}(D(M))=\mathcal{L}(M)$.
2. 2.
Let $M$ be a NFA-$\varepsilon$. Then there is a NFA $N(M)$ such that
$\mathcal{L}(N(M))=\mathcal{L}(M)$.
###### Proof 2.1.
The proof of Case (1) involves the well-known subset construction, whereby
each subset of states in $M$ is associated with a single state in $D(M)$. The
proof of Case (2) typically relies on defining the $\varepsilon$ closure of a
set of states, namely, the set of states reachable from the given set via a
sequence of zero or more $\varepsilon$-transitions. This notion is used to
define the transition relation of $N(M)$ as well as its set of accepting
states.
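To make the subset construction of Case (1) concrete, here is a sketch in Python (my own code, not part of the proof): each state of $D(M)$ is a frozenset of states of $M$, and only subsets reachable from $\{q_{I}\}$ are generated, so the result is often far smaller than $2^{|Q|}$ states.

```python
def subset_construction(Q, Sigma, delta, qI, F):
    """Build a DFA D(M) with L(D(M)) = L(M) from NFA M = (Q, Sigma, delta, qI, F)."""
    start = frozenset({qI})
    states, trans, work = {start}, {}, [start]
    while work:
        S = work.pop()
        for a in Sigma:
            # all NFA states reachable from some state of S on symbol a
            T = frozenset(q2 for (q1, x, q2) in delta if q1 in S and x == a)
            trans[(S, a)] = T
            if T not in states:
                states.add(T)
                work.append(T)
    accepting = {S for S in states if S & F}   # S accepts iff it meets F
    return states, trans, start, accepting

def dfa_accepts(trans, start, accepting, w):
    S = start
    for a in w:
        S = trans[(S, a)]                      # trans is total by construction
    return S in accepting
```

On the three-state NFA accepting words over $\{a,b\}$ that end in $ab$, only three subsets are reachable, rather than all eight.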
## 3 Kleene’s Theorem
Given the definitions in the previous section it is now possible to state
Kleene’s Theorem succinctly.
###### Theorem 2 (Kleene’s Theorem).
Let $\Sigma$ be an alphabet. Then $L\subseteq\Sigma^{*}$ is regular if and
only if there is a DFA $M$ such that $\mathcal{L}(M)=L$.
The proof of this theorem is usually split into two pieces. The first involves
showing that for any regular expression $r$, there is a finite automaton $M$
(DFA, NFA or NFA-$\varepsilon$) such that $\mathcal{L}(M)=\mathcal{L}(r)$.
Theorem 1 then ensures that the resulting finite automaton, if it is not
already a DFA, can be converted into one in a language-preserving manner. The
second shows how to convert a DFA $M$ into a regular expression $r$ in such a
way that $\mathcal{L}(r)=\mathcal{L}(M)$; there are several algorithms for
this in the literature, including the classic dynamic-programming-based method
of Kleene [Kle56] and equation-solving methods that rely on Arden’s Lemma
[Ard61].
From a practical standpoint, the conversion of regular expressions to finite
automata is the more important, since regular expressions are textual and are
consequently used as the basis for string search and processing. For this
reason, I believe that teaching this construction is especially key in
automata-theory classes, and this is where my complaint with the approaches in
traditional automata-theory texts originates.
To understand the basis for my dissatisfaction, let us review the construction
presented in [HMU06], which explains how to convert regular expression $r$
into NFA-$\varepsilon$ $M_{r}$ in such a way that
$\mathcal{L}(r)=\mathcal{L}(M_{r})$. The method is based on the construction
due to Ken Thompson [Tho68] and produces NFA-$\varepsilon$ $M_{r}$ with the
following properties.
* •
The initial state $q_{I}$ has no incoming transitions: that is, there exists
no $(q,\alpha,q_{I})\in\delta$.
* •
There is a single accepting state $q_{F}$, and $q_{F}$ has no outgoing
transitions: that is, $F=\\{q_{F}\\}$, and there exists no
$(q_{F},\alpha,q^{\prime})\in\delta$.
The approach proceeds inductively on the structure of $r$. For example, if
$r=(r^{\prime})^{*}$, then assume that
$M_{r^{\prime}}=(Q,\Sigma,\delta,q_{I},\\{q_{F}\\})$ meeting the above
constraints has been constructed. Then $M_{r}$ is built as follows. First, let
$q_{I}^{\prime}\not\in Q$ and $q_{F}^{\prime}\not\in Q$ be new states. Then
$M_{r}=(Q\cup\\{q_{I}^{\prime},q_{F}^{\prime}\\},\Sigma,\delta^{\prime},q_{I}^{\prime},\\{q_{F}^{\prime}\\})$,
where
$\delta^{\prime}=\delta\cup\\{(q_{I}^{\prime},\varepsilon,q_{I}),(q_{I}^{\prime},\varepsilon,q_{F}^{\prime}),(q_{F},\varepsilon,q_{I}),(q_{F},\varepsilon,q_{F}^{\prime})\\}.$
It can be shown that $M_{r}$ satisfies the requisite properties and that
$\mathcal{L}(M_{r})=(\mathcal{L}(r^{\prime}))^{*}$.
Mathematically, the construction of $M_{r}$ is wholly satisfactory: it has the
required properties and can be defined relatively easily, albeit at the cost
of introducing new states and transitions. The proof of correctness is perhaps
somewhat complicated, owing to the definition of $\mathcal{L}(M)$ and the
subtlety of $\varepsilon$-transitions, but it does acquaint students with
definitions via structural induction on regular expressions.
My concern with the construction, however, is several-fold. On the one hand,
it requires the introduction of the notion of NFA-$\varepsilon$, which is
indeed more complex than that of NFA. In particular, the definition of
acceptance requires allowing transitions that consume no symbol in the input
word. On the other hand, the accretion of new states at each step in the
construction makes it difficult to test students on their understanding of
the construction in an exam setting.
relatively small regular expressions the literal application of the
construction yields automata with too many states and transitions to be doable
during the typical one-hour midterm exam for which US students would be tested
on the material. Finally, the construction bears no resemblance to algorithms
used in practice for constructing finite automata from regular expressions. In
particular, routines such as the Berry-Sethi procedure [BS86] construct DFAs
directly from regular expressions, completely avoiding the need for
NFA-$\varepsilon$s, or indeed NFAs, altogether.
The Berry-Sethi procedure is subtle and elegant, and relies on concepts, such
as Brzozowski derivatives [Brz64], that I would view as too specialized for an
undergraduate course on automata theory. Consequently, I would not be in favor
of covering them in an undergraduate classroom setting. Instead, in the next
section I give a technique, based on operational semantics in process algebra,
for constructing NFAs from regular expressions. The resulting NFAs are small
enough for students to construct during exams, and the construction has other
properties, including the capacity for introducing other operations that
preserve regularity, that are pedagogically useful.
## 4 NFAs via Structural Operational Semantics
This section describes an approach based on _Structural Operational Semantics_
(SOS) [Plo81, Plo04] for constructing NFAs from regular expressions.
Specifically, I will define a (small-step) operational semantics for regular
expressions on the basis of the structure of regular expressions, and use the
semantics to construct the requisite NFAs. The construction requires no
$\varepsilon$-transitions and yields automata with at most one more state
than the size of the regular expression from which they are derived.
Following the conventions in the other parts of this paper I give the SOS
rules using notation typically found in automata-theory texts. In particular,
the SOS specification is given in natural language, as a collection of if-then
statements, and not via inference rules. I use this approach in the classroom
to avoid having to introduce notations for inference rules. In the appendix I
give the more traditional SOS presentation.
### 4.1 An Operational Semantics for Regular Expressions
In what follows fix alphabet $\Sigma$. The basis for the operational semantics
of regular expressions consists of a relation,
$\xrightarrow{}\subseteq\mathcal{R}(\Sigma)\times\Sigma\times\mathcal{R}(\Sigma)$,
and a predicate $\surd\subseteq\mathcal{R}(\Sigma)$. In what follows I will
write $r\xrightarrow{a}r^{\prime}$ and $r\surd$ in lieu of
$(r,a,r^{\prime})\in\,\xrightarrow{}$ and $r\in\surd$. The intuitions are as
follows.
1. 1.
$r\surd$ is intended to hold if and only if $\varepsilon\in\mathcal{L}(r)$.
This is used in defining accepting states.
2. 2.
$r\xrightarrow{a}r^{\prime}$ is intended to reflect the following about
$\mathcal{L}(r)$: one way to build a word in $\mathcal{L}(r)$ is to start with
$a\in\Sigma$ and then finish it with a word from $\mathcal{L}(r^{\prime})$.
Using these relations, I then show how to build a NFA from $r$ whose states
are regular expressions, whose transitions are given by $\xrightarrow{}$, and
whose final states are defined using $\surd$.
#### Defining $\surd$ and $\xrightarrow{}$
We now define $\surd$. [Definition of $\surd$] Predicate $r\surd$ is defined
inductively on the structure of $r\in\mathcal{R}(\Sigma)$ as follows.
* •
If $r=\varepsilon$ then $r\surd$.
* •
If $r=(r^{\prime})^{*}$ for some $r^{\prime}\in\mathcal{R}(\Sigma)$ then
$r\surd$.
* •
If $r=r_{1}+r_{2}$ for some $r_{1},r_{2}\in\mathcal{R}(\Sigma)$, and
$r_{1}\surd$, then $r\surd$.
* •
If $r=r_{1}+r_{2}$ for some $r_{1},r_{2}\in\mathcal{R}(\Sigma)$, and
$r_{2}\surd$, then $r\surd$.
* •
If $r=r_{1}\cdot r_{2}$ for some $r_{1},r_{2}\in\mathcal{R}(\Sigma)$, and
$r_{1}\surd$ and $r_{2}\surd$, then $r\surd$.
From the definition, one can see it is not the case that $\emptyset\surd$ or
$a\surd$, for any $a\in\Sigma$, while both $\varepsilon\surd$ and $r^{*}\surd$
always. This accords with the definition of $\mathcal{L}(r)$;
$\varepsilon\not\in\mathcal{L}(\emptyset)=\emptyset$, and
$\varepsilon\not\in\mathcal{L}(a)=\\{a\\}$, while
$\varepsilon\in\mathcal{L}(\varepsilon)=\\{\varepsilon\\}$ and $\varepsilon\in
L^{*}$ for any language $L\subseteq\Sigma^{*}$, and in particular for
$L=\mathcal{L}(r)$ for regular expression $r$. The other cases in the
definition reflect the fact that $\varepsilon\in\mathcal{L}(r_{1}+r_{2})$ can
only hold if $\varepsilon\in\mathcal{L}(r_{1})$ or
$\varepsilon\in\mathcal{L}(r_{2})$, since $+$ is interpreted as set union, and
that $\varepsilon\in\mathcal{L}(r_{1}\cdot r_{2})$ can only be true if
$\varepsilon\in\mathcal{L}(r_{1})$ and $\varepsilon\in\mathcal{L}(r_{2})$,
since regular-expression operator $\cdot$ is interpreted as language
concatenation. We have the following examples.
$\begin{array}[]{lp{3in}}(\varepsilon\cdot a^{*})\surd&since
$\varepsilon\surd$ and $a^{*}\surd$.\\\ \neg(a+b)\surd&since neither $a\surd$
nor $b\surd$.\\\ (01+(1+01)^{*})\surd&since $(1+01)^{*}\surd$.\\\
\neg(01(1+01)^{*})\surd&since $\neg(01)\surd$.\\\ \end{array}$
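Because the definition is by structural induction, $\surd$ is directly computable. The following Python sketch is my own rendering, using an illustrative tuple encoding of regular expressions (tags '0', 'e', 'sym', '+', '.', '*') that does not appear in the formal development.

```python
def check(r):
    """check(r) is True iff the check predicate holds of r,
    i.e. iff epsilon is in L(r) (Lemma 3)."""
    tag = r[0]
    if tag in ('e', '*'):        # epsilon and (r')* always satisfy check
        return True
    if tag in ('0', 'sym'):      # the empty set and symbols never do
        return False
    if tag == '+':               # r1 + r2: one summand suffices
        return check(r[1]) or check(r[2])
    if tag == '.':               # r1 . r2: both factors required
        return check(r[1]) and check(r[2])
    raise ValueError('unknown tag: %r' % tag)
```

The four worked examples above come out as expected: $(\varepsilon\cdot a^{*})$ and $(01+(1+01)^{*})$ satisfy the predicate, while $(a+b)$ and $(01(1+01)^{*})$ do not.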
We also use structural induction to define $\xrightarrow{}$. [Definition of
$\xrightarrow{}$] Relation $r\xrightarrow{a}r^{\prime}$, where
$r,r^{\prime}\in\mathcal{R}(\Sigma)$ and $a\in\Sigma$, is defined inductively
on $r$.
* •
If $r=a$ and $a\in\Sigma$ then $r\xrightarrow{a}\varepsilon$.
* •
If $r=r_{1}+r_{2}$ and $r_{1}\xrightarrow{a}r_{1}^{\prime}$ then
$r\xrightarrow{a}r_{1}^{\prime}$.
* •
If $r=r_{1}+r_{2}$ and $r_{2}\xrightarrow{a}r_{2}^{\prime}$ then
$r\xrightarrow{a}r_{2}^{\prime}$.
* •
If $r=r_{1}\cdot r_{2}$ and $r_{1}\xrightarrow{a}r_{1}^{\prime}$ then
$r\xrightarrow{a}r_{1}^{\prime}\cdot r_{2}$.
* •
If $r=r_{1}\cdot r_{2}$, $r_{1}\surd$ and $r_{2}\xrightarrow{a}r_{2}^{\prime}$
then $r\xrightarrow{a}r_{2}^{\prime}$.
* •
If $r=(r^{\prime})^{*}$ and $r^{\prime}\xrightarrow{a}r^{\prime\prime}$ then
$r\xrightarrow{a}r^{\prime\prime}\cdot(r^{\prime})^{*}$.
The definition of this relation is somewhat complex, but the idea that it is
trying to capture is relatively simple: $r\xrightarrow{a}r^{\prime}$ if one
can build words in $\mathcal{L}(r)$ by taking the $a$ labeling
$\xrightarrow{}$ and appending a word from $\mathcal{L}(r^{\prime})$. So we
have the rule $a\xrightarrow{a}\varepsilon$ for $a\in\Sigma$, while the rules
for $+$ follow from the fact that
$\mathcal{L}(r_{1}+r_{2})=\mathcal{L}(r_{1})\cup\mathcal{L}(r_{2})$. The cases
for $r_{1}\cdot r_{2}$ in essence state that $aw\in\mathcal{L}(r_{1}\cdot
r_{2})$ can hold either if there is a way of splitting $w$ into $w_{1}$ and
$w_{2}$ such that $aw_{1}$ is in the language of $r_{1}$ and $w_{2}$ is in the
language of $r_{2}$, or if $\varepsilon$ is in the language of $r_{1}$ and
$aw$ is in the language of $r_{2}$. Finally, the rule for $(r^{\prime})^{*}$
essentially permits “looping”. As examples, we have the following.
$\begin{array}[]{lp{8cm}}a+b\xrightarrow{a}\varepsilon&by the rules for $a$
and $+$.\\\ (abb+a)^{*}\xrightarrow{a}\varepsilon bb(abb+a)^{*}&by the rules
for $a$, $\cdot$, $+$, and ${}^{*}$.\end{array}$
In this latter example, note that applying the definition literally requires
the inclusion of the $\varepsilon$ in $\varepsilon bb(abb+a)^{*}$. This is
because the case for $a$ says that $a\xrightarrow{a}\varepsilon$, meaning that
$abb\xrightarrow{a}\varepsilon bb$, etc. However, when there are leading
instances of $\varepsilon$ like this, I will sometimes leave them out, and
write $abb\xrightarrow{a}bb$ rather than $abb\xrightarrow{a}\varepsilon
bb$.222This convention can be formalized by introducing a special case in the
definition of $\xrightarrow{}$ for $a\cdot r_{2}$ and distinguishing the
current two cases for $r_{1}\cdot r_{2}$ to apply only when
$r_{1}\not\in\Sigma.$
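Like $\surd$, the relation $\xrightarrow{}$ can be computed by structural recursion, returning for each $r$ its finite set of pairs $(a,r^{\prime})$ with $r\xrightarrow{a}r^{\prime}$. The sketch below is again my own (same illustrative tuple encoding as before); it follows the clauses literally, so leading $\varepsilon$s are not suppressed.

```python
def check(r):
    """epsilon in L(r), as in the definition of the check predicate."""
    tag = r[0]
    if tag in ('e', '*'): return True
    if tag in ('0', 'sym'): return False
    if tag == '+': return check(r[1]) or check(r[2])
    return check(r[1]) and check(r[2])       # tag == '.'

def step(r):
    """All pairs (a, r') such that r --a--> r'."""
    tag = r[0]
    if tag in ('0', 'e'):                    # no transitions
        return set()
    if tag == 'sym':                         # a --a--> epsilon
        return {(r[1], ('e',))}
    if tag == '+':                           # either summand may move
        return step(r[1]) | step(r[2])
    if tag == '.':                           # r1 moves; r2 moves if r1 check
        moves = {(a, ('.', r1p, r[2])) for (a, r1p) in step(r[1])}
        return moves | step(r[2]) if check(r[1]) else moves
    # tag == '*': move inside r' and loop back into (r')*
    return {(a, ('.', rp, r)) for (a, rp) in step(r[1])}
```

Running it on $(abb+a)^{*}$ yields exactly the two $a$-labeled transitions discussed above, including the one to $\varepsilon(abb+a)^{*}$.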
The following lemmas about $\surd$ and $\xrightarrow{}$ formally establish the
intuitive properties that they should have.
###### Lemma 3.
Let $r\in\mathcal{R}(\Sigma)$ be a regular expression. Then $r\surd$ if and
only if $\varepsilon\in\mathcal{L}(r)$.
###### Proof 4.1.
The proof proceeds by structural induction on $r$. Most cases are left to the
reader; we only consider the $r=r_{1}\cdot r_{2}$ case here. The induction
hypothesis states that $r_{1}\surd$ if and only if
$\varepsilon\in\mathcal{L}(r_{1})$ and $r_{2}\surd$ if and only if
$\varepsilon\in\mathcal{L}(r_{2})$. One reasons as follows.
$\begin{array}[]{r@{\textnormal{ iff }}lp{6cm}}r\surd&r_{1}\surd\textnormal{
and }r_{2}\surd&Definition of $\surd$\\\
&\varepsilon\in\mathcal{L}(r_{1})\textnormal{ and
}\varepsilon\in\mathcal{L}(r_{2})&Induction hypothesis\\\
&\varepsilon\in(\mathcal{L}(r_{1}))\cdot(\mathcal{L}(r_{2}))&Property of
concatenation\\\ &\varepsilon\in\mathcal{L}(r_{1}\cdot r_{2})&Definition of
$\mathcal{L}(r_{1}\cdot r_{2})$\\\ &\varepsilon\in\mathcal{L}(r)&$r=r_{1}\cdot
r_{2}$\end{array}$
###### Lemma 4.
Let $r\in\mathcal{R}(\Sigma)$, $a\in\Sigma$, and $w\in\Sigma^{*}$. Then
$aw\in\mathcal{L}(r)$ if and only if there is an
$r^{\prime}\in\mathcal{R}(\Sigma)$ such that $r\xrightarrow{a}r^{\prime}$ and
$w\in\mathcal{L}(r^{\prime})$.
###### Proof 4.2.
The proof proceeds by structural induction on $r$. We only consider the case
$r=(r^{\prime})^{*}$ in detail; the others are left to the reader. The
induction hypothesis asserts that for all $a$ and $w^{\prime}$,
$aw^{\prime}\in\mathcal{L}(r^{\prime})$ if and only if there is an
$r^{\prime\prime}$ such that $r^{\prime}\xrightarrow{a}r^{\prime\prime}$ and
$w^{\prime}\in\mathcal{L}(r^{\prime\prime})$. We reason as follows.
$\begin{array}[]{r@{\textnormal{ iff
}}lp{4.8cm}}aw\in\mathcal{L}(r)&aw\in\mathcal{L}((r^{\prime})^{*})&$r=(r^{\prime})^{*}$\\\
&aw\in(\mathcal{L}(r^{\prime}))^{*}&Definition of
$\mathcal{L}((r^{\prime})^{*})$\\\ &aw=w_{1}\cdot w_{2}\textnormal{ some
}w_{1}\in\mathcal{L}(r^{\prime}),w_{2}\in(\mathcal{L}(r^{\prime}))^{*}&Definition
of Kleene closure\\\ &w_{1}=a\cdot w_{1}^{\prime}\textnormal{ some
}w_{1}^{\prime}&Property of Kleene closure\\\
&r^{\prime}\xrightarrow{a}r^{\prime\prime}\textnormal{ some
}r^{\prime\prime}\textnormal{ with
}w_{1}^{\prime}\in\mathcal{L}(r^{\prime\prime})&Induction hypothesis\\\
&r\xrightarrow{a}r^{\prime\prime}\cdot(r^{\prime})^{*}&Definition of
$\xrightarrow{}$\\\ &w_{1}^{\prime}\cdot
w_{2}\in\mathcal{L}(r^{\prime\prime})\cdot\mathcal{L}((r^{\prime})^{*})&Definition
of concatenation\\\ &w_{1}^{\prime}\cdot
w_{2}\in\mathcal{L}(r^{\prime\prime}\cdot(r^{\prime})^{*})&Definition of
$\mathcal{L}(r^{\prime\prime}\cdot(r^{\prime})^{*})$\\\
&r\xrightarrow{a}r^{\prime\prime}\cdot(r^{\prime})^{*}\textnormal{ and
}w\in\mathcal{L}(r^{\prime\prime}\cdot(r^{\prime})^{*})&$w=w_{1}^{\prime}\cdot
w_{2}$\end{array}$
Appendix A contains definitions of $\surd$ and $\xrightarrow{}$ in the more
usual inference-rule style used in SOS specifications.
### 4.2 Building Automata using $\surd$ and $\xrightarrow{}$
That $\surd$ and $\xrightarrow{}$ may be used to build NFAs derives from how
they may be used to determine whether a string is in the language of a regular
expression. Consider the following sequence of transitions starting from the
regular expression $(abb+a)^{*}$.
$(abb+a)^{*}\xrightarrow{a}bb(abb+a)^{*}\xrightarrow{b}b(abb+a)^{*}\xrightarrow{b}(abb+a)^{*}\xrightarrow{a}(abb+a)^{*}$
Using Lemma 4 four times, we can conclude that if
$w\in\mathcal{L}((abb+a)^{*})$, then $abba\cdot w\in\mathcal{L}((abb+a)^{*})$
also. In addition, since $(abb+a)^{*}\surd$, it follows from Lemma 3 that
$\varepsilon\in\mathcal{L}((abb+a)^{*})$. Since $abba\cdot\varepsilon=abba$,
it follows that $abba\in\mathcal{L}((abb+a)^{*})$.
More generally, if there is a sequence of transitions
$r_{0}\xrightarrow{a_{1}}r_{1}\cdots\xrightarrow{a_{n}}r_{n}$ and
$r_{n}\surd$, then it follows that $a_{1}\ldots a_{n}\in\mathcal{L}(r_{0})$,
and vice versa. This observation suggests the following strategy for building
a NFA from a regular expression $r$.
1. 1.
Let the states be all possible regular expressions that can be reached by some
sequence of transitions from $r$.
2. 2.
Take $r$ to be the start state.
3. 3.
Let the transitions be given by $\xrightarrow{}$.
4. 4.
Let the accepting states be those regular expressions $r^{\prime}$ reachable
from $r$ for which $r^{\prime}\surd$ holds.
Of course, this construction is only valid if the set of all possible regular
expressions mentioned in Step (1) is finite, since NFAs are required to have a
finite number of states. In fact, a stronger result can be proved. First,
recall the definition of the size, $|r|$, of regular expression $r$.
[Size of a regular expression] The size, $|r|$, of $r\in\mathcal{R}(\Sigma)$
is defined inductively as follows.
$|r|=\left\\{\begin{array}[]{lp{8cm}}1&if $r=\varepsilon,r=\emptyset,$ or
$r=a$ for some $a\in\Sigma$\\\ |r^{\prime}|+1&if $r=(r^{\prime})^{*}$\\\
|r_{1}|+|r_{2}|+1&if $r=r_{1}+r_{2}$ or $r=r_{1}\cdot
r_{2}$\end{array}\right.$
Intuitively, $|r|$ counts the number of regular-expression operators in $r$.
The _reachability set_ of regular expression $r$ can now be defined in the
usual manner.
Let $r\in\mathcal{R}(\Sigma)$ be a regular expression. Then the set
$RS(r)\subseteq\mathcal{R}(\Sigma)$ of regular expressions _reachable from_
$r$ is defined recursively as follows.
* •
$r\in RS(r)$.
* •
If $r_{1}\in RS(r)$ and $r_{1}\xrightarrow{a}r_{2}$ for some $a\in\Sigma$,
then $r_{2}\in RS(r)$.
As an example, note that $|(abb+a)^{*}|=8$ and that
$RS((abb+a)^{*})=\\{(abb+a)^{*},\varepsilon bb(abb+a)^{*},\varepsilon
b(abb+a)^{*},\varepsilon(abb+a)^{*}\\}.$
(In this case I have not applied my heuristic of suppressing leading
$\varepsilon$ expressions.) The following can now be proved.
###### Theorem 5.
Let $r\in\mathcal{R}(\Sigma)$ be a regular expression. Then
$|RS(r)|\leq|r|+1$.
###### Proof 4.3.
The proof proceeds by structural induction on $r$. There are six cases to
consider.
$r=\emptyset$
In this case $RS(r)=\\{\emptyset\\}$, and $|RS(r)|=1=|r|<|r|+1$.
$r=\varepsilon$
In this case $RS(r)=\\{\varepsilon\\}$, and $|RS(r)|=1=|r|<|r|+1$.
$r=a$ for some $a\in\Sigma$
In this case $RS(r)=\\{a,\varepsilon\\}$, and $|RS(r)|=2=|r|+1$.
$r=r_{1}+r_{2}$
In this case, $RS(r)\subseteq RS(r_{1})\cup RS(r_{2})$, and the induction
hypothesis guarantees that $|RS(r_{1})|\leq|r_{1}|+1$ and
$|RS(r_{2})|\leq|r_{2}|+1$. It then follows that
$|RS(r)|\leq|RS(r_{1})|+|RS(r_{2})|\leq|r_{1}|+|r_{2}|+2=|r|+1.$
$r=r_{1}\cdot r_{2}$
In this case it can be shown that $RS(r)\subseteq\\{r_{1}^{\prime}\cdot
r_{2}\mid r_{1}^{\prime}\in RS(r_{1})\\}\cup RS(r_{2})$. Since
$|\\{r_{1}^{\prime}\cdot r_{2}\mid r_{1}^{\prime}\in
RS(r_{1})\\}|=|RS(r_{1})|$, similar reasoning as in the $+$ case applies.
$r=(r^{\prime})^{*}$
In this case we have that $RS(r)\subseteq\\{r\\}\cup\\{r^{\prime\prime}\cdot r\mid
r^{\prime\prime}\in RS(r^{\prime})\\}$. Thus
$|RS(r)|\leq|RS(r^{\prime})|+1\leq|r^{\prime}|+2=|r|+1.$
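The bound of Theorem 5 is easy to confirm experimentally. The sketch below is my own code (same illustrative tuple encoding as earlier, with check and step transcribing $\surd$ and $\xrightarrow{}$); it computes $|r|$ and $RS(r)$ and checks the inequality on the running example.

```python
def check(r):
    tag = r[0]
    if tag in ('e', '*'): return True
    if tag in ('0', 'sym'): return False
    if tag == '+': return check(r[1]) or check(r[2])
    return check(r[1]) and check(r[2])       # tag == '.'

def step(r):
    tag = r[0]
    if tag in ('0', 'e'): return set()
    if tag == 'sym': return {(r[1], ('e',))}
    if tag == '+': return step(r[1]) | step(r[2])
    if tag == '.':
        moves = {(a, ('.', r1p, r[2])) for (a, r1p) in step(r[1])}
        return moves | step(r[2]) if check(r[1]) else moves
    return {(a, ('.', rp, r)) for (a, rp) in step(r[1])}

def size(r):
    """|r|: the number of operators and atoms in r."""
    tag = r[0]
    if tag in ('0', 'e', 'sym'): return 1
    if tag == '*': return size(r[1]) + 1
    return size(r[1]) + size(r[2]) + 1       # '+' or '.'

def reach(r):
    """RS(r): the least set containing r and closed under transitions."""
    seen, work = {r}, [r]
    while work:
        for (_, r2) in step(work.pop()):
            if r2 not in seen:
                seen.add(r2)
                work.append(r2)
    return seen
```

For $(abb+a)^{*}$ this reports $|r|=8$ and $|RS(r)|=4$, matching the four-element reachability set computed above.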
This result shows not only that the sketched NFA construction given above
yields a finite number of states for given $r$; it in fact establishes that
this set of states is no larger than $|r|+1$. This highlights one of the main
reasons I opted to introduce this construction in my classes: small regular
expressions yield NFAs that are almost as small, and can be constructed
manually in an exam setting.
We can now formally define the construction of NFA $M_{r}$ from regular
expression $r$ as follows. Let $r\in\mathcal{R}(\Sigma)$ be a regular
expression. Then $M_{r}=(Q,\Sigma,\delta,q_{I},F)$ is the NFA defined as
follows.
* •
$Q=RS(r)$.
* •
$q_{I}=r$.
* •
$\delta=\\{(r_{1},a,r_{2})\mid r_{1}\xrightarrow{a}r_{2}\\}$.
* •
$F=\\{r^{\prime}\in Q\mid r^{\prime}\surd\\}$.
The next theorem establishes that $r$ and $M_{r}$ define the same languages.
###### Theorem 6.
Let $r\in\mathcal{R}(\Sigma)$ be a regular expression. Then
$\mathcal{L}(r)=\mathcal{L}(M_{r})$.
###### Proof 4.4.
The proof relies on the fact that Lemmas 3 and 4 guarantee that $w=a_{1}\ldots
a_{n}\in\mathcal{L}(r)$ if and only if there is a regular expression
$r^{\prime}$ such that
$r\xrightarrow{a_{1}}\cdots\xrightarrow{a_{n}}r^{\prime}$ and
$r^{\prime}\surd$.
### 4.3 Computing $M_{r}$
This section gives a routine for computing $M_{r}$. It intertwines the
computation of the reachability set from regular expression $r$ with the
updating of the transition relation and set of accepting states. It relies on
the computation of the so-called _outgoing transitions_ of $r$; these are
defined as follows.
Let $r\in\mathcal{R}(\Sigma)$ be a regular expression. Then the set of
_outgoing transitions_ from $r$ is defined as the set $\\{(a,r^{\prime})\mid
r\xrightarrow{a}r^{\prime}\\}$. The outgoing transitions from $r$ consist of
pairs $(a,r^{\prime})$ that, when combined with $r$, constitute a valid
transition $r\xrightarrow{a}r^{\prime}$. Figure 1 defines a recursive
function, out, for computing the outgoing transitions of $r$. The routine uses
the structure of $r$ and the definition of $\xrightarrow{}$ to guide its
computation. For regular expressions of the form $\emptyset,\varepsilon$ and
$a\in\Sigma$, the definition of $\xrightarrow{}$ in Definition 4.1 immediately
gives all the transitions. For regular expressions built using $+,\cdot$ and
∗, one must first recursively compute the outgoing transitions of the
subexpressions of $r$ and then combine the results appropriately, based on the
cases given in Definition 4.1.
$\textit{out}(r)=\left\\{\begin{array}[]{lp{10em}}\emptyset&if $r=\emptyset$
or $r=\varepsilon$\\\ \\{(a,\varepsilon)\\}&if $r=a\in\Sigma$\\\
\textit{out}(r_{1})\cup\textit{out}(r_{2})&if $r=r_{1}+r_{2}$\\\
\\{(a,r_{1}^{\prime}\cdot
r_{2})\mid(a,r_{1}^{\prime})\in\textit{out}(r_{1})\\}&\\\
\;\;\;\;\;\;\cup\;\\{(a,r_{2}^{\prime})\mid(a,r_{2}^{\prime})\in\textit{out}(r_{2})\land
r_{1}\surd\\}&if $r=r_{1}\cdot r_{2}$\\\
\\{(a,r^{\prime\prime}\cdot(r^{\prime})^{*})\mid(a,r^{\prime\prime})\in\textit{out}(r^{\prime})\\}&if
$r=(r^{\prime})^{*}$\end{array}\right.$ Figure 1: Calculating the outgoing
transitions of regular expressions.
The next lemma states that $\textit{out}(r)$ correctly computes the outgoing
transitions of $r$.
###### Lemma 7.
Let $r\in\mathcal{R}(\Sigma)$ be a regular expression, and let
$\textit{out}(r)$ be as defined in Figure 1. Then
$\textit{out}(r)=\\{(a,r^{\prime})\mid r\xrightarrow{a}r^{\prime}\\}$.
###### Proof 4.5.
By structural induction on $r$. The details are left to the reader.
Algorithm 1 contains pseudo-code for computing $M_{r}$. It maintains four
sets.
* •
$Q$, a set that will eventually contain the states of $M_{r}$.
* •
$F$, a set that will eventually contain the accepting states of $M_{r}$.
* •
$\delta$, a set that will eventually contain the transition relation of
$M_{r}$.
* •
$W$, the _work set_ , a subset of $Q$ containing states that have not yet had
their outgoing transitions computed or acceptance status determined.
The procedure begins by adding $r$, its input parameter, to both $Q$ and $W$.
It then repeatedly removes a state from $W$, determines if it should be added
to $F$, computes its outgoing transitions and updates $\delta$ appropriately,
and finally adds the target states in the outgoing transition set to both $Q$
and $W$ if they are not yet in $Q$ (meaning they have not yet been encountered
in the construction of $M_{r}$). The algorithm terminates when $W$ is empty.
1
2Algorithm NFA$(r)$
Input : Regular expression $r\in\mathcal{R}(\Sigma)$
Output : NFA $M_{r}=(Q,\Sigma,\delta,q_{I},F)$
3
$Q:=\\{r\\}$
// State set
$q_{I}:=r$
// Start state
$W:=\\{r\\}$
// Working set
$\delta:=\emptyset$
// Transition relation
$F:=\emptyset$
// Accepting states
4
5while _$W\neq\emptyset$_ do
6 choose $r^{\prime}\in W$
7 $W:=W-\\{r^{\prime}\\}$
8 if _$r^{\prime}\surd$ _ then
9 $F:=F\cup\\{r^{\prime}\\}$ // $r^{\prime}$ is an accepting state
$T=\textit{out}(r^{\prime})$
// Outgoing transitions of $r^{\prime}$
$\delta:=\delta\cup\\{(r^{\prime},a,r^{\prime\prime})\mid(a,r^{\prime\prime})\in
T\\}$
// Update transition relation
10
11 foreach _$(a,r^{\prime\prime})\in T$_ do
12 if _$r^{\prime\prime}\not\in Q$_ then
$Q:=Q\cup\\{r^{\prime\prime}\\}$
// $r^{\prime\prime}$ is a new expression
13 $W:=W\cup\\{r^{\prime\prime}\\}$
14
15 end foreach
16
17 end while
18
19return $M_{r}=(Q,\Sigma,\delta,q_{I},F)$
Algorithm 1 Algorithm for computing NFA $M_{r}$ from regular expression $r$
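Algorithm 1 transliterates almost line by line into executable code. The sketch below is my own Python rendering (same illustrative tuple encoding as in the earlier sketches), with out computed recursively following Figure 1; the final nfa_accepts is merely a convenience for exercising the constructed machine and is not part of the algorithm.

```python
def check(r):                                 # the check predicate on r
    tag = r[0]
    if tag in ('e', '*'): return True
    if tag in ('0', 'sym'): return False
    if tag == '+': return check(r[1]) or check(r[2])
    return check(r[1]) and check(r[2])        # tag == '.'

def out(r):                                   # outgoing transitions (Figure 1)
    tag = r[0]
    if tag in ('0', 'e'): return set()
    if tag == 'sym': return {(r[1], ('e',))}
    if tag == '+': return out(r[1]) | out(r[2])
    if tag == '.':
        t = {(a, ('.', r1p, r[2])) for (a, r1p) in out(r[1])}
        return t | out(r[2]) if check(r[1]) else t
    return {(a, ('.', rp, r)) for (a, rp) in out(r[1])}

def build_nfa(r, Sigma):
    """Algorithm NFA(r): returns M_r = (Q, Sigma, delta, qI, F)."""
    Q, W = {r}, [r]                           # state set and work set
    delta, F = set(), set()
    while W:
        rp = W.pop()                          # choose and remove r' from W
        if check(rp):
            F.add(rp)                         # r' is an accepting state
        T = out(rp)                           # outgoing transitions of r'
        delta |= {(rp, a, rpp) for (a, rpp) in T}
        for (a, rpp) in T:
            if rpp not in Q:                  # r'' is a new expression
                Q.add(rpp)
                W.append(rpp)
    return (Q, Sigma, delta, r, F)

def nfa_accepts(M, w):                        # run M on w, tracking state sets
    Q, Sigma, delta, qI, F = M
    states = {qI}
    for a in w:
        states = {q2 for (q1, x, q2) in delta if q1 in states and x == a}
    return bool(states & F)
```

Applied to $(abb+a)^{*}$, the routine produces a four-state NFA that accepts $\varepsilon$, $abba$ and $aabb$ but rejects $ab$ and $ba$.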
Figure 2 gives the NFA resulting from applying the procedure to $(abb+a)^{*}$.
Figure 3, by way of contrast, shows the result of applying the routine in
[HMU06] to produce a NFA-$\varepsilon$ from the same regular expression.
Figure 2: NFA$(r)$ for $r=(abb+a)^{*}$, with states $(abb+a)^{*}$, $\varepsilon bb(abb+a)^{*}$, $\varepsilon b(abb+a)^{*}$ and $\varepsilon(abb+a)^{*}$.
[Figure: automaton with transitions labeled $a$, $b$, and numerous $\varepsilon$-transitions.] Figure 3: NFA-$\varepsilon$ for $(abb+a)^{*}$.
## 5 Discussion
The title of this note is “Better Automata through Process Algebra,” and I
want to revisit it in order to explain in what respects I regard the method
presented here as producing “better automata.” Earlier I identified the
following motivations that prompted me to incorporate this approach in my
classroom instruction.
* •
I wanted to produce NFAs rather than NFA-$\varepsilon$s. In large part this
was due to my desire not to cover the notion of NFA-$\varepsilon$. The only place
this material is used in typical automata-theory textbooks is as a vehicle for
converting regular expressions into finite automata. By giving a construction
that avoids the use of $\varepsilon$-transitions, I could avoid covering
NFA-$\varepsilon$s and devote the newly freed lecture time to other topics. Of
course, this is only possible if the NFA-based construction does not require
more time to describe than the introduction of NFA-$\varepsilon$ and the
NFA-$\varepsilon$ construction.
* •
I wanted the construction to be one that students could apply during an exam
to generate finite automata from regular expressions. The classical
construction found in [HMU06] and other books fails this test, in my opinion;
while the inductive definitions are mathematically pleasing, they yield
automata with too many states for students to be expected to apply them in a
time-constrained setting.
* •
Related to the preceding point, I wanted a technique that students could
imagine being implemented and used in the numerous applications to which
regular expressions are applied. In such a setting, fewer states is better
than more states, all things considered.
This note has attempted to argue these points by giving a construction in
Definition 4.2 for constructing NFAs directly from regular expressions.
Theorem 5 establishes that the number of states in these NFAs is at most one
larger than the size of the regular expression from which the NFAs are
generated; this provides guidance in preparing exam questions, as the size of
the NFAs students can be asked to generate is tightly bounded by the size of
the regular expression given in the exam. Finally, Algorithm 1 gives a “close-
to-code” account of the construction that hints at its implementability.
Indeed, several years ago two students to whom I presented this material
independently implemented the algorithm.
Beyond the points mentioned above, I think this approach has two other points
in its favor. The first is that it provides a basis for defining other
operators over regular expressions and proving that the class of regular
languages is closed with respect to these operations. The ingredients for
introducing such a new operator and proving closure of regular languages with
respect to it can be summarized as follows.
1. 1.
Extend the definition of $\mathcal{L}(r)$ given in Definition 2.2 to give a
language-theoretic semantics for the operator.
2. 2.
Extend the definitions of $\surd$ and $\xrightarrow{}$ in Definition 4.1 to
give a small-step operational semantics for the operator.
3. 3.
Extend the proofs of Lemmas 3 and 4 to establish connections between the
language semantics and the operational semantics.
4. 4.
Prove that expressions extended with the new operator yield finite sets of
reachable expressions.
All of these steps involve adding new cases to the existing definitions and
lemmas, and altering Theorem 5 in the case of the last point. Once these are
done, Algorithm 1, with the definition of out given in Figure 1 suitably
modified to cover the new operator, can be used as is as a basis for
constructing NFAs from these extended classes of regular languages.
I have used parts of this approach in the classroom to ask students to prove
that synchronous product and interleaving operators can be shown to preserve
language regularity. Other operators, such as ones from process algebra, are
also candidates for these kinds of questions.
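To make this concrete, here is a sketch (mine, not from the note) of steps 1 and 2 for the interleaving operator $r_{1}\parallel r_{2}$: its language semantics takes $\mathcal{L}(r_{1}\parallel r_{2})$ to be the set of interleavings of words in $\mathcal{L}(r_{1})$ with words in $\mathcal{L}(r_{2})$, and the SOS extensions, in the style of Appendix A, are

$\frac{r_{1}\surd\;\;\;\;r_{2}\surd}{(r_{1}\parallel r_{2})\surd}\;\;\;\;\frac{r_{1}\xrightarrow{a}r_{1}^{\prime}}{r_{1}\parallel r_{2}\xrightarrow{a}r_{1}^{\prime}\parallel r_{2}}\;\;\;\;\frac{r_{2}\xrightarrow{a}r_{2}^{\prime}}{r_{1}\parallel r_{2}\xrightarrow{a}r_{1}\parallel r_{2}^{\prime}}$

Steps 3 and 4 then go through because each transition strictly decreases the combined size of the two components.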
The second feature of the approach in this paper that I believe recommends it
is that the NFA construction is “on-the-fly”; the construction of an automaton
from a regular expression does not require the _a priori_ construction of
automata from subexpressions, meaning that the actual production of the
automaton can be intertwined with other operations, such as the checking of
whether a word belongs to the regular expression’s language. One does not need
to wait for the construction of the full automaton, in other words, before
putting it to use.
Criticisms that I have heard of this approach center around two issues. The
first is that the construction of NFA $M_{r}$ from regular expression $r$ does
not use structural induction on $r$, unlike the classical constructions in
e.g. [HMU06]. I do not have much patience with the complaint, as the concepts
that $M_{r}$ is built on, namely $\surd$ and $\xrightarrow{}$, are defined
inductively, and the results proven about them require substantial use of
induction. The other complaint is that the notion of
$r\xrightarrow{a}r^{\prime}$ is “hard to understand.” It is indeed the case
that equipping regular expressions with an operational semantics is far
removed from the language-theoretic semantics typically given to these
expressions. That said, I would argue that the small-step operational
semantics considered here in fact exposes the essence of the relationship
between regular expressions and finite automata: this semantics enables
regular expressions to be executed, and in a way that can be captured via
automata.
I close this section with a brief discussion of the Berry-Sethi algorithm
[BS86], which is used in practice and produces deterministic finite automata.
This feature enables their technique to accommodate complementation, an
operation with respect to which regular languages are closed but which fits
uneasily with NFAs. From a pedagogical perspective, however, the algorithm
suffers somewhat, as the number of states in a DFA can be exponentially larger
than the size of the regular expression from which it is derived. A similar
criticism can be made of other techniques that rely on Brzozowski derivatives
[Brz64], which also produce DFAs. There are interesting connections between
our operational semantics and these derivatives, but we exploit nondeterminacy
to keep the sizes of the resulting finite automata small.
## 6 Conclusions and Directions for Future Work
In this note I have presented an alternative approach for converting regular
expressions into finite automata. The method relies on defining an operational
semantics for regular expressions, and as such draws inspiration from the work
on process algebra undertaken by pioneers in that field, including Jos Baeten.
In contrast with classical techniques, the construction here does not require
transitions labeled by the empty word $\varepsilon$, and it yields automata
whose state sets are proportional in size to the regular expressions they come
from. The procedure can also be implemented in an on-the-fly manner, meaning
that the production of the automaton can be intertwined with other analysis
procedures as well.
Other algorithms studied in process algebra also have pedagogical promise, in
my opinion. One method, the Kanellakis-Smolka algorithm for computing
bisimulation equivalence [KS90], is a case in point. Partition-refinement
algorithms for computing language equivalence of deterministic automata have
been in existence for decades, but the details underpinning them are subtle
and difficult to present in an undergraduate automata-theory class, where
instructional time is at a premium. While not as efficient asymptotically as
the best procedures, the simplicity of the K-S technique recommends it, in my
opinion, both for equivalence checking and state-machine minimization.
Simulation-checking algorithms [HHK95] can also be used as a basis for
checking language containment among finite automata; these are interesting
because they do not require determinization of both automata being compared,
in general.
## References
* [Ard61] Dean N Arden. Delayed-logic and finite-state machines. In 2nd Annual Symposium on Switching Circuit Theory and Logical Design (SWCT 1961), pages 133–151. IEEE, 1961.
* [BHR84] Stephen D. Brookes, C. A. R. Hoare, and A. W. Roscoe. A theory of communicating sequential processes. Journal of the ACM, 31(3):560–599, 1984.
* [BK84] J.A. Bergstra and J.W. Klop. Process algebra for synchronous communication. Information and Control, 60(1):109–137, 1984.
* [BK85] Jan A. Bergstra and Jan Willem Klop. Algebra of communicating processes with abstraction. Theoretical Computer Science, 37:77–121, 1985.
* [Brz64] Janusz A. Brzozowski. Derivatives of regular expressions. Journal of the ACM (JACM), 11(4):481–494, 1964.
* [BS86] Gerard Berry and Ravi Sethi. From regular expressions to deterministic automata. Theoretical Computer Science, 48:117–126, 1986.
* [HHK95] Monika Rauch Henzinger, Thomas A. Henzinger, and Peter W. Kopke. Computing simulations on finite and infinite graphs. In Proceedings of IEEE 36th Annual Foundations of Computer Science, pages 453–462. IEEE, 1995.
* [HMU06] John E. Hopcroft, Rajeev Motwani, and Jeffrey D. Ullman. Introduction to Automata Theory, Languages, and Computation (3rd Edition). Addison-Wesley Longman Publishing Co., Inc., Boston, 2006.
* [Kle56] S.C. Kleene. Representation of events in nerve nets and finite automata. In Automata Studies, pages 3–41. Princeton University Press, 1956\.
* [KS90] Paris C. Kanellakis and Scott A. Smolka. Ccs expressions, finite state processes, and three problems of equivalence. Information and Computation, 86(1):43–68, 1990.
* [Mil80] Robin Milner. A Calculus of Communicating Systems, volume 92 of Lecture Notes in Computer Science. Springer, 1980.
* [Plo81] Gordon D Plotkin. A structural approach to operational semantics. Technical report, Aarhus University, Denmark, 1981.
* [Plo04] Gordon D Plotkin. The origins of structural operational semantics. The Journal of Logic and Algebraic Programming, 60:3–15, 2004.
* [Tho68] Ken Thompson. Programming techniques: Regular expression search algorithm. Communications of the ACM, 11(6):419–422, June 1968.
## Appendix A SOS Rules for $\surd$ and $\xrightarrow{}$
Here are the inference rules used to define $\surd$. They are given in the
form
$\frac{\textit{premises}}{\textit{conclusion}}$
with $-$ denoting an empty list of premises.
$\frac{-}{\varepsilon\surd}\;\;\;\;\frac{-}{r^{*}\surd}\;\;\;\;\frac{r_{1}\surd}{(r_{1}+r_{2})\surd}\;\;\;\;\frac{r_{2}\surd}{(r_{1}+r_{2})\surd}\;\;\;\;\frac{r_{1}\surd\;\;\;\;r_{2}\surd}{(r_{1}\cdot r_{2})\surd}$
Next are the rules for $\xrightarrow{}$.
$\frac{-}{a\xrightarrow{a}\varepsilon}\;\;\;\;\frac{r_{1}\xrightarrow{a}r_{1}^{\prime}}{r_{1}+r_{2}\xrightarrow{a}r_{1}^{\prime}}\;\;\;\;\frac{r_{2}\xrightarrow{a}r_{2}^{\prime}}{r_{1}+r_{2}\xrightarrow{a}r_{2}^{\prime}}$
$\frac{r_{1}\xrightarrow{a}r_{1}^{\prime}}{r_{1}\cdot r_{2}\xrightarrow{a}r_{1}^{\prime}\cdot r_{2}}\;\;\;\;\frac{r_{1}\surd\;\;\;\;r_{2}\xrightarrow{a}r_{2}^{\prime}}{r_{1}\cdot r_{2}\xrightarrow{a}r_{2}^{\prime}}\;\;\;\;\frac{r\xrightarrow{a}r^{\prime}}{r^{*}\xrightarrow{a}r^{\prime}\cdot(r^{*})}$
# Motif Identification using CNN-based Pairwise Subsequence Alignment Score
Prediction
Ethan Moyer School of Biomedical Engineering, Science and Health Systems
Drexel University
Philadelphia, PA
https://orcid.org/0000-0002-8023-3810 Anup Das College of Engineering
Drexel University
Philadelphia, PA
https://orcid.org/0000-0002-5673-2636
###### Abstract
A common problem in bioinformatics is related to identifying gene regulatory
regions marked by relatively high frequencies of motifs, or deoxyribonucleic
acid sequences that often code for transcription and enhancer proteins.
Predicting alignment scores between subsequence k-mers and a given motif
enables the identification of candidate regulatory regions in a gene, which
correspond to the transcription of these proteins. We propose a one-
dimensional (1-D) Convolution Neural Network trained on k-mer formatted
sequences interspaced with the given motif pattern to predict pairwise
alignment scores between the consensus motif and subsequence k-mers. Our model
consists of fifteen layers with three rounds of a one-dimensional convolution
layer, a batch normalization layer, a dense layer, and a 1-D maximum pooling
layer. We train the model using mean squared error loss on four different data
sets each with a different motif pattern randomly inserted in DNA sequences:
the first three data sets have zero, one, and two mutations applied on each
inserted motif, and the fourth data set represents the inserted motif as a
position-specific probability matrix. We use a novel metric, $S_{\alpha}$,
based on the Jaccard Index, to evaluate the model’s performance. We use
10-fold cross validation to evaluate our model. Using
$S_{\alpha}$, we measure the accuracy of the model by identifying the 15
highest-scoring 15-mer indices of the predicted scores that agree with that of
the actual scores within a selected $\alpha$ region. For the best performing
data set, our results indicate on average 99.3% of the top 15 motifs were
identified correctly within a one base pair stride ($\alpha=1$) in the out of
sample data. To the best of our knowledge, this is a novel approach that
illustrates how data formatted in an intelligent way can be extrapolated using
machine learning.
###### Index Terms:
Motif Finding, Convolution Neural Network, Pairwise Sequence Alignment
## I Introduction
Measuring the similarity of two sequences is a well known problem called
sequence alignment. This topic includes a vast category of methods for
identifying regions of high similarity in biological sequences, such as those
in deoxyribonucleic acid (DNA), ribonucleic acid (RNA), and protein [7].
Specifically, DNA pairwise sequence alignment (PSA) methods are concerned with
finding the best arrangement of two DNA sequences. Some historically notable
dynamic programming PSA methods are the Needleman-Wunsch (NW) algorithm for
global alignment [1] and Smith-Waterman (SW) algorithm for local alignment
[2]. The main difference between global and local alignment is related to the
difference in length of the two sequences: global alignment attempts to find
the highest-scoring end-to-end alignment between two sequences of
approximately the same length, and local alignment searches for local regions
of high similarity between two sequences with different lengths [8]. Figure 1
shows this difference between local and global DNA alignment with two
sequences aligned in a 5’ (i.e. five prime) to 3’ direction. In molecular
biology, this orientation refers to the directionality of the carbon backbone
in DNA. The top subfigure displays global alignment where a query sequence is
aligned end-to-end with a reference. The bottom subfigure displays local
alignment where a short query sequence is most optimally aligned with a longer
reference sequence. This latter alignment displays how the query sequence is
approximately equal to a subsequence of the reference sequence.
Figure 1: Local vs. Global Alignment. In general, DNA is composed of a
permutation of the four nucleotides [adenine (A), thymine (T), cytosine (C),
guanine (G)] and an ambiguous base (N).
In this way, local alignment methods recognize approximate subsequence matches
of a query sequence with respect to a given reference sequence. One common
paradigm utilizing local alignment is to examine similarities between a query
sequence and specific k-long subsequences in a given gene, known as k-mers,
found within the reference sequence. Traditional local alignment algorithms
calculate these scores between the query sequence and each k-mer in the
reference sequence. The aim of this research is to identify where the most
likely subsequence matches of the query sequence occur in each reference
sequence using machine learning methods. One such type of query sequence that
is of high biological significance is a sequence motif, which are short
reoccurring subsequences of DNA [5]. Therefore, this research follows the
ability of machine learning methods to gauge the relative enrichment of
various representations of motifs (or motif patterns) in independent reference
sequences. More specifically, the efficacy of identifying motif enrichment in
sequences is explored using a one-dimensional (1-D) convolution neural network
(CNN).
Four different data sets are generated, each with a different motif pattern
randomly inserted in approximately 10,000 reference sequences: the first three
data sets have zero, one, and two mutations applied on each inserted motif,
and the fourth data set represents the inserted motif as a position-specific
probability matrix (PPM). In this data structure, each nucleotide position
corresponds to a frequency of nucleotides [22]. These distinct motif patterns
help display how the CNN model can recognize both subsequence matches with
exact, inexact, and probabilistic motifs. Each sample in a given data set
consists of artificial sequences enriched with a given motif pattern at a
frequency between five and fifteen occurrences per 1,000 base pairs (bp).
These samples are split into 986 overlapping 15-mers with a corresponding
calculated local alignment score from the BioPython Aligner [20]. These scores
are then predicted using a CNN with 10-fold cross validation. In order to
measure the performance of the model, the average out of sample mean squared
error (MSE), R2, and accuracy scores are reported.
While the MSE of the model trained on each data set is not representative of
the model’s effectiveness, the Jaccard Index and $S_{\alpha}$, a novel
modified version of the Jaccard Index, are better suited to capture accuracy
of the model. The standard MSE is not suitable for this problem because it
inherently only displays differences between predicted and actual values.
Since our aim is to locate those highest-scoring 15-mers, we need a metric
that determines at which positions they occur and with what accuracy (see
subsection V-A). This new metric, $S_{\alpha}$, measures the degree of
similarity between two sets where each pair of elements can be different by at
most $\alpha$. Because of the plateauing nature of this metric as seen in each
data set and the risks involved in increasing alpha, only $S_{0}$ to $S_{5}$
are reported.
In implementing this new metric, the accuracy of the model increases
dramatically across all four data sets compared to the Jaccard Index. This
indicates that while the model is not able to precisely identify the highest-
scoring k-mers exactly, it is able to accurately identify their local region.
As expected, the model’s accuracy is far higher for the data sets with
relatively simple inserted motif patterns (non-probabilistic consensus
motifs) compared to that of the data set with more complex inserted motif
patterns, such as consensus PPM.
## II Background
Clusters of motifs across a genome strongly correlate with gene regulatory
regions [18]. These regions are especially important for motif enrichment
analysis, where known motifs are identified in the regulatory sequence of a
gene in order to determine which proteins (transcription factors and
enhancers) control its transcription [6] [19]. Motif enrichment analysis is
only relevant given that the regulatory region of a gene is known, otherwise
the sequence under study may be from a non-coding region of an organism’s
genome or an untranslated region of a gene [9]. Given that the regulatory
region of a gene is unknown, one frequently used approach to identifying it is
to first locate sequences enriched with highly conserved motifs. Fortunately,
many motifs that have been discovered are common amongst genes serving a
similar role across organisms, such as a negative regulatory region for
eukaryotes [10]. Finding these conserved motifs may facilitate the
identification of the regulatory regions in a gene. For that reason,
identifying the exact or relative positions of a given motif in a gene or
sequence is a relevant inquiry in the process for classifying candidate
regulatory regions of a gene.
A software toolkit known as MEME Suite includes three different methods for
motif-sequence searching [23]: FIMO (Find Individual Motif Occurrences) [21],
GLAM2SCAN (Gapped Local Alignment of Motifs SCAN) [24], and MAST (Motif
Alignment and Search Tool) [25].
FIMO focuses on scanning both DNA and protein sequences for a given motif
represented as PPM. This software tool calculates the log-likelihood ratio
score, p-value, and q-value (false discovery rate) for each subsequence
position in a sequence database [21].
Typically, GLAM2SCAN performs a Waterman-Eggert local alignment between motifs
found by GLAM2, its companion motif-finding algorithm, and a sequence
database. These local alignment scores are generated from an aligner
programmed with position specific residue scores, deletion scores, and
insertion scores returned from the GLAM2 algorithm. The $n$ highest alignments
are returned to the user [24].
MAST locates the highest-scoring $n$ subsequences with respect to a motif
described as a position-specific score matrix. Using the QFAST algorithm, MAST
calculates the p-value of a group of motif matches. This is accomplished by
first finding the p-value of each match (the ‘position p-value’) and
normalizing it for the length of the motif (the ‘sequence p-value’). Then each
of these normalized p-values is multiplied together to find the statistical
significance across all located motifs in the database (the ‘combined p-value’)
[25].
## III Data Analysis & Curation
A single data set contains approximately 10,000 randomly generated DNA
sequences, each 1,000 bp long. The number of samples varies slightly from one
data set to another due to inconsistencies that are removed in preprocessing. A
15-mer motif is inserted into each sample anywhere from five to fifteen times.
Four separate data sets of this structure are created where a different motif
pattern is inserted randomly into each sequence. The first three data sets
have zero, one, and two mutations applied on each inserted motif. These
mutations are applied in order to determine whether the proposed model has the
potential to identify consensus motifs and non-exact consensus motifs across
many sequences. Since motifs mostly exist as profiles where each base pair
position corresponds to a frequency table of nucleotides, the fourth data set
is created where the inserted motifs are based off of a PPM [11].
Equation 1 is used to calculate the PPM indicated by matrix $M$ given a set of
candidate motifs, or sequences that are thought to be from the same motif PPM.
This equation counts the number of occurrences of each nucleotide in set
$\gamma$ for each nucleotide position across all motifs, where
$\gamma=\\{A,T,C,G\\}$; $I=\\{0,1\\}$ represents an indicator function, where
$I(x=\gamma)$ is 1 if $x=\gamma$ and 0 otherwise; $i{\displaystyle\in}$ (1, …,
L), where L is the length of each motif; and $j{\displaystyle\in}(1,...,N)$,
where N is the number of motifs.
$M_{\gamma,i}=\frac{1}{N}\sum^{N}_{j=1}I(X_{i,j}=\gamma)$ (1)
In order to apply Equation 1 on candidate motifs, the DNA sequence data must
be formatted as nucleotide position counts shown in Figure 2. This figure
illustrates the conversion of a list of candidate motifs to matrix
$M_{counts}$ and then to $PPM$ using Equation 1. While Figure 2 displays this
process for five 10-mers, the fourth data set in this work relies on profiles
built from ten 15-mers.
TACAGAGTTG
CCATAGGCGT
TGAACGCTAC
ACGGACGATA
CGAATTTACG
$\downarrow$
$M_{counts}$ =
A: 1 1 3 3 2 1 0 2 1 1
T: 2 0 0 1 1 1 1 2 2 1
C: 2 2 1 0 1 1 1 1 1 1
G: 0 2 1 1 1 2 3 0 1 2
$\downarrow$
$PPM$ =
A: 0.2 0.2 0.6 0.6 0.4 0.2 0.0 0.4 0.2 0.2
T: 0.4 0.0 0.0 0.2 0.2 0.2 0.2 0.4 0.4 0.2
C: 0.4 0.4 0.2 0.0 0.2 0.2 0.2 0.2 0.2 0.2
G: 0.0 0.4 0.2 0.2 0.2 0.4 0.6 0.0 0.2 0.4
Figure 2: The conversion of five candidate subsequence motifs to PPM using
Equation 1.
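The counting and normalization of Figure 2 can be sketched in a few lines of Python (an illustration of Equation 1; the helper name is mine):

```python
# Illustrative sketch of Equation 1: count nucleotide occurrences per
# position across candidate motifs, then divide by the number of motifs N.
motifs = ["TACAGAGTTG", "CCATAGGCGT", "TGAACGCTAC",
          "ACGGACGATA", "CGAATTTACG"]

def counts_and_ppm(motifs):
    n, length = len(motifs), len(motifs[0])
    counts = {nt: [0] * length for nt in "ATCG"}
    for m in motifs:
        for i, nt in enumerate(m):
            counts[nt][i] += 1          # the indicator I(X_{i,j} = gamma)
    ppm = {nt: [c / n for c in col] for nt, col in counts.items()}
    return counts, ppm
```

Applied to the five 10-mers above, this reproduces $M_{counts}$ and $PPM$ exactly as shown in Figure 2.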
## IV Feature & Output Selection
In order to format the sequence data into a structure that is both
recognizable and meaningful to a CNN, we first split each sequence into a list
of overlapping 15-mers. Next, we generate a one-hot encoding for each
nucleotide in the 15-mers. The resulting feature set is composed of 60 values.
Figure 3 displays this process using a small subsequence example formatted as
4-mers.
Figure 3: DNA subsequence k-mer formatting by one-hot encoding nucleotides.
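The splitting and encoding steps can be sketched as follows (an illustration of the described scheme; the A/C/G/T channel order is my assumption, as the paper does not specify it):

```python
# Sketch of the feature construction: overlapping k-mers, one-hot encoded.
NUC = {'A': 0, 'C': 1, 'G': 2, 'T': 3}   # channel order is an assumption

def kmers(seq, k=15):
    """A length-L sequence yields L - k + 1 overlapping k-mers."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

def one_hot(kmer):
    """Each nucleotide maps to 4 indicators, so a 15-mer gives 60 features."""
    vec = []
    for nt in kmer:
        col = [0, 0, 0, 0]
        if nt in NUC:        # the ambiguous base 'N' stays all-zero
            col[NUC[nt]] = 1
        vec.extend(col)
    return vec
```

With $k=15$, a 1,000 bp sample yields the 986 overlapping 15-mers mentioned above, each a 60-value feature vector.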
To obtain the target values, each of these 15-mers are pairwise aligned with
the consensus motif for the given data set motif pattern using the SW
algorithm. Given two sequences, $a$ of length $n$ and $b$ of length $m$, this
algorithm begins by defining an $n+1$ by $m+1$ matrix $H$. The first column
and first row are assigned $0$, and the following recurrence relation is
applied to assign the rest of the values in $H$.
$H(i,j)=max\begin{cases}H(i-1,j-1)+\sigma(a_{i},b_{j})\\\ H(i,j-1)+W\\\
H(i-1,j)+W\\\ 0\end{cases}$
where W is a gap score and $\sigma$ is a score matrix such that
$\sigma(a_{i},b_{j})=\begin{cases}+1&\quad\text{if }a_{i}=b_{j}\\\
-2&\quad\text{if }a_{i}\neq b_{j}\end{cases}$
In the case when $a_{i}=b_{j}$, $\sigma$ returns a match score of $+1$, and in
the case when $a_{i}\neq b_{j}$, $\sigma$ returns a mismatch score of $-2$.
The gap score, $W$, is assigned $-2.5$. The match, mismatch, and gap score can
be configured for different alignments. These parameters are used because they
are well suited to this type of local alignment [4]. Once $H$ is
assigned its values, the best alignment is obtained by finding the maximum
value in $H$ and tracing back the matrix elements that led up to this maximum.
In this way, the maximum value in $H$ defines the optimal path in $H$ for the
best alignment between sequences $a$ and $b$ [2]. The calculated alignment
scores are normalized based on the maximum alignment score in each sample.
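The recurrence translates directly into code; the following is a minimal sketch with the stated scores (the paper itself obtains these scores from the BioPython aligner):

```python
# Smith-Waterman local alignment score: fill H and return its maximum cell.
def smith_waterman(a, b, match=1.0, mismatch=-2.0, gap=-2.5):
    n, m = len(a), len(b)
    H = [[0.0] * (m + 1) for _ in range(n + 1)]  # first row/column stay 0
    best = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch  # sigma(a_i, b_j)
            H[i][j] = max(H[i - 1][j - 1] + s,   # match/mismatch
                          H[i][j - 1] + gap,     # gap in a
                          H[i - 1][j] + gap,     # gap in b
                          0.0)                   # restart alignment locally
            best = max(best, H[i][j])
    return best
```

A 15-mer identical to the consensus motif scores 15 under these parameters, so normalizing by the maximum score in a sample maps perfect matches to 1.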
## V Methods
### V-A CNN Model Evaluation
Although the MSE loss function is effective at penalizing large differences
between predicted and target values, such as outliers in the data, it does not
successfully represent the predictive power of the model given the scope of
the problem [14]. In the data, the target value from each sample ranges from
zero to one. This range already generates an inherently small MSE. Even when
the MSE for each sample is normalized, the metric is overshadowed by the
overwhelming majority of the predicted values that were approximately equal to
the global mean of each sample. In other words, the MSE as a metric does not
capture the correct information pertaining to the five to fifteen inserted
motif patterns in each sample due to a large unequal distribution of such
scores that deviate from the global mean. This problem is analogous to that of
an unequal class distribution in a classification problem.
The goal is to score the CNN based on its ability to locate the
15 highest-scoring 15-mers, because we inserted a motif pattern at most 15
times into each sample. Since this network deals with continuous values
instead of discrete classes, initially we cannot be certain of the 15-mer to
which a 15-mer score at any index $i$ corresponds. However, a higher scoring
15-mer has a greater probability of corresponding to that of a motif, whereas
the lower scoring 15-mers carry little information. This is due to the fact
that each score in the data is generated from a local alignment between 15-mer
and the given consensus motif. In this way, only the highest 15-scoring
15-mers are of interest. As previously mentioned, we indicate that there is an
unequal distribution between the number of scores corresponding to that of
each inserted motif and the global mean of each sample. Using these
observations, we reason that we only need to examine the 15 highest-scoring
indices. The generalization that the 15 highest-scoring indices correspond to
the inserted motif patterns is further supported by the fact that the
probability of observing a random 15-mer exactly equal or similar to the
inserted motifs is relatively low.
Thus, the indices of the predicted 15 highest-scoring 15-mers inherently hold
information about the positions of possible inserted motif patterns, because it
is at these indices that the local alignment is conducted. Due to the low
likelihood of observing a false positive (when a 15-mer is identified as a
motif but in fact is not one), we create a one-to-one correspondence
between the indices of the actual motif indices and that of the predicted
motifs using high local alignment scores. The accuracy of this one-to-one
correspondence can be measured using the Jaccard Index given in Equation 2.
$J(A,B)=\frac{|A\cap B|}{|A\cup B|}$ (2)
We propose a more generalized index, $S_{\alpha}$, in Equation 3 which
measures the similarity of two sets with an allowed margin of error of
$\alpha$. Because of the high locality of local alignment score predictions
and due to the fact that the highest-scoring 15-mers can still be found from
examining the immediate region of a prediction, this margin of error serves as
a heuristic for motif identification. In this metric, two items are considered
identical if they are no more than $\alpha$ away from each other. In the scope
of this work, sets $A$ and $B$ contain the indices of the 15 highest-scoring
15-mers of the actual data and predicted data, respectively. When $\alpha=0$,
$S_{0}(A,B)$ in Equation 3 is identical to $J(A,B)$ in Equation 2. Conversely,
as $\alpha$ increases, the allowed distance between indices in sets $A$ and
$B$ increases. For example, when $\alpha=2$, a predicted 15-mer index $i$ and
actual 15-mer index $i+2$ are considered the same.
$J(A,B\mid\alpha)=S_{\alpha}(A,B)=\frac{|\bigcup\limits_{\mu=-\alpha}^{\alpha}A\cap\\{x+\mu\mid
x\in B\\}|}{|A\cup B|}$ (3)
The following process is an algorithm to calculate a modified version of the
Jaccard Index. Using the $argsort$ function in NumPy, we examine the indices
that order both the actual outputs and the predicted outputs. In looping
through the each of the top $n$ indices of the predicted outputs, we count the
number of them which are contained in the list of indices of the actual
outputs. The process returns the score as count over the maximum possible
value, which in this case is $n$. This is implemented in Algorithm 1
Algorithm 1 Measuring Jaccard Index with stride $\alpha$
1: procedure $S_{\alpha}$
2: $\textit{n}\leftarrow\text{number of highest-scoring k-mers to analyze}$
3: $\textit{score}\leftarrow 0$
4: $\textit{act\\_outputs}\leftarrow\text{actual outputs}$
5: $\textit{pred\\_outputs}\leftarrow\text{outputs from CNN}$
6: $\textit{act\\_indxs}\leftarrow\text{indices that would sort }\textit{act\\_outputs}$
7: $\textit{pred\\_indxs}\leftarrow\text{indices that would sort }\textit{pred\\_outputs}$
8: _outerloop_:
9: for $i$ := 1 to $n$ do
10: $\textit{pred\\_indx}\leftarrow\textit{pred\\_indxs}(i)$
11: for $j$ := 0 to $\alpha$ do
12: if $\textit{pred\\_indx}\in\textit{act\\_indxs}-j$ then
13: $score\leftarrow score+1$
14: goto _outerloop_
15: if $\textit{pred\\_indx}\in\textit{act\\_indxs}+j$ then
16: $score\leftarrow score+1$
17: goto _outerloop_
18: $normalized\\_score\leftarrow score/n$
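Algorithm 1 can be sketched as a short Python function. The use of NumPy's `argsort` follows the text; taking the last `n` positions of the ascending sort as the highest-scoring indices is an assumption about how the sorted indices are consumed:

```python
import numpy as np

def s_alpha(act_outputs, pred_outputs, n=15, alpha=2):
    """Modified Jaccard Index with stride alpha (sketch of Algorithm 1).

    A predicted index counts as a hit if it lies within alpha positions of
    one of the n highest-scoring actual indices; the returned score is the
    hit count normalized by n.
    """
    act_indxs = set(np.argsort(act_outputs)[-n:])   # top-n actual indices
    pred_indxs = np.argsort(pred_outputs)[-n:]      # top-n predicted indices
    score = 0
    for pred_indx in pred_indxs:
        if any(pred_indx - j in act_indxs or pred_indx + j in act_indxs
               for j in range(alpha + 1)):
            score += 1
    return score / n
```

With `alpha=0` this reduces to counting exact index matches among the top `n`, i.e. the overlap measured by $S_{0}$.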
## VI Results
Each of the four data sets is characterized by 10,000 samples where each
sample contains a sequence that is 1,000 bp in length. In each sample, a motif
pattern is inserted randomly anywhere from five to fifteen times. The first
three data sets include inserted motif patterns with zero, one, and two
mutations. The fourth data set includes an inserted motif pattern represented
based on a PPM. Each data set is evaluated on out-of-sample data generated
from 10-fold cross-validation using eight metrics: MSE, $R^{2}$, and
$S_{0}$-$S_{5}$.
Table I: CNN Results. The average out-of-sample MSE, $R^{2}$, and
$S_{0}$-$S_{5}$ for each data set.
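The synthetic data sets described above can be generated along the following lines. This is a sketch with an illustrative consensus motif; the mutation and PPM-based insertion procedures of the original data sets are not reproduced here, and overlapping insertions are not prevented:

```python
import random

def make_sample(motif="ACGTACGTACGTACG", seq_len=1000, seed=None):
    """Build one 1,000-bp random sequence with the motif inserted 5-15 times."""
    rng = random.Random(seed)
    seq = [rng.choice("ACGT") for _ in range(seq_len)]
    k = len(motif)
    positions = []
    for _ in range(rng.randint(5, 15)):
        pos = rng.randrange(seq_len - k + 1)
        seq[pos:pos + k] = motif  # overwrite k bases with the motif
        positions.append(pos)
    return "".join(seq), positions

seq, positions = make_sample(seed=0)
```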
A fifth analysis is conducted with another data set using a motif
representation similar to that of the fourth data set with the MafK
transcription factor from the BATCH1 regulatory gene [26]. This motif is a
15-mer with a less conserved consensus sequence compared to that of the former
four data sets. While this data set did not perform as well as the other
four, with an $S_{9}$ of 45.3%, the analysis brought to light the aligner
scoring matrix as another hyperparameter of this work.
As it turns out, the performance of the model varies greatly with the chosen
match score, mismatch score penalty, and gap score penalty for the currently
implemented alignment method. For instance, the $S_{9}$ varies from 33.7% to
52.6% with different scoring hyperparameters. The former result is derived
from an aligner with a match score of +2.0, mismatch score penalty of -3.0,
and gap score penalty of -3.5, whereas the latter result is derived from an
aligner with a match score of +2.0, mismatch score penalty of -4.0, and gap
score penalty of -4.5. It is currently unclear which aligner hyperparameters
are optimal for this more complex data set and for the original four data
sets explored in this work. Although there is evidence to suggest that
aligner scoring matrices vary with the type of inserted motif pattern, it is
unclear whether the optimal hyperparameters change from motif to motif.
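The effect of these scoring hyperparameters can be explored with a plain Smith-Waterman local aligner [2]. This is a minimal sketch with a single linear gap penalty; an aligner such as Biopython's [20] additionally distinguishes gap-open and gap-extend costs, so absolute scores will differ:

```python
def smith_waterman(a, b, match=2.0, mismatch=-3.0, gap=-3.5):
    """Best local alignment score of strings a and b (linear gap penalty)."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0.0] * cols for _ in range(rows)]
    best = 0.0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            # Local alignment: scores are clamped at zero.
            H[i][j] = max(0.0, diag, H[i-1][j] + gap, H[i][j-1] + gap)
            best = max(best, H[i][j])
    return best

# Identical 15-mers score 15 * match; a single substitution trades one
# match for a mismatch, so the penalty choice directly shifts the score.
smith_waterman("ACGTACGTACGTACG", "ACGTACGTACGTACG")                  # 30.0
smith_waterman("ACGTACGTACGTACG", "ACGTACGAACGTACG", mismatch=-4.0)   # 24.0
```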
One possible interpretation of the dependence of the model’s chosen evaluation
metric, $S_{\alpha}$, on the aligner hyperparameters is related to the fact
that the CNN predicts alignment scores that are normalized within each sample.
Therefore, the farther the highest scores lie from the global mean, the more
likely it is that the proposed metric will be able to recognize inserted
motifs. Conversely, when analyzing a data set with a less conserved motif
consensus sequence, such as that of the MafK transcription factor, the
alignment scores are closer to the global mean of each sample. This in turn
makes recognizing the indices of the highest-scoring segments more
challenging. It follows that the aligner hyperparameters which capitalize on
increasing this difference are most favorable for all motifs, regardless of
pattern.
### VI-A Convolutional Neural Network (CNN) Architecture
A CNN is a class of deep learning models that can infer patterns from data
formatted as a grid structure, such as stock prices over time or a grid
representation of the pixels in an image. These artificial neural networks
(ANNs) use a linear mathematical operation called convolution in at least
one of their layers [3].
The convolution operation is commonly identified by the following two
equations:
$s(t)=\int x(a)w(t-a)da$ (4)
$s(t)=(x*w)(t)$ (5)
Equation 4 explicitly writes out the convolution, whereas Equation 5 shows
how an asterisk can be used to denote the operation. In both
equations, $x$ is referred to as the input. Typically, this is formatted as a
multidimensional array, or a tensor, that matches the size and dimensions of
the data. The second argument is $w$, representing a kernel, which stores
parameters for the model also formatted as a tensor. This argument is adapted
throughout the training process of the model. The output of both functions,
$s$, is called the feature map of the convolution layer. This is what is fed
into the next layer of the network [3]. Hidden layers are generated from
applying a kernel, or filter, of weights over the receptive field of the
inputs. More specifically, the hidden layer is computed based off of the
filter weights and the input layer as it strides across the feature space
[28]. This operation can either compress or expand input space depending on
the applied kernel [29]. This paradigm is followed by rounds of activations,
normalizations, and pooling [29]. The model typically ends with a fully
connected layer to compute its outputs [28]. The proposed model is
represented in Figure 4.
Figure 4: CNN model.
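In the discrete, finite setting of a Conv1D layer, the integral in Equation 4 becomes a finite sum over the kernel positions. A minimal NumPy sketch of the resulting feature map follows; note that deep-learning libraries actually implement cross-correlation, i.e. they slide the kernel without flipping it:

```python
import numpy as np

def conv1d(x, w):
    """Feature map s of a 1-D convolution layer ('valid' padding, stride 1).

    x : input signal (length L), w : kernel of weights (length k).
    Each output element is a dot product of the kernel with one window of x.
    """
    k = len(w)
    return np.array([np.dot(x[i:i + k], w) for i in range(len(x) - k + 1)])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w = np.array([1.0, 0.0, -1.0])
conv1d(x, w)                      # array([-2., -2., -2.])
np.convolve(x, w[::-1], "valid")  # true convolution of the flipped kernel: same result
```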
The model is marked by three rounds of a 1-D convolution layer, a batch
normalization layer, a dense layer, and a 1-D maximum pooling layer. After
these 12 layers, the model ends with a 50% dropout layer, a flatten layer,
and finally a fully connected layer corresponding to the 986 alignment
scores for each sample [12, 13].
The model described above is run on all four data sets for 100 epochs with a
batch size of 80 and compiled with the Adam optimizer (learning rate=0.001,
beta 1=0.9, beta 2=0.999, epsilon=1e-07). Of the 10,000 samples in each
dataset, 80% is reserved for training the network and the remaining 20% is
used for validation after each epoch. For its loss function, the model relies
on Mean Squared Error (MSE), which is calculated between predicted values
($y_{pred}$) and target values ($y_{act}$) with the following formula in
Equation 6:
$MSE(y_{pred},y_{act})=\frac{1}{n}\sum_{i=1}^{n}(y_{pred,i}-y_{act,i})^{2}$ (6)
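The MSE loss can be computed directly for a quick check outside the training loop (a sketch equivalent to the framework's built-in loss):

```python
import numpy as np

def mse(y_pred, y_act):
    """Mean squared error between predicted and target values."""
    y_pred, y_act = np.asarray(y_pred, float), np.asarray(y_act, float)
    return np.mean((y_pred - y_act) ** 2)

mse([1.0, 2.0, 3.0], [1.0, 2.0, 5.0])  # (0 + 0 + 4) / 3
```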
## VII Discussion
As displayed in this work, deep learning models, such as a CNN, have the
capacity to recognize and predict the positions of an inserted motif with
great accuracy. Furthermore, data structures can be devised to take advantage
of unequal class distributions in regression problems as highlighted by the
design of k-mer data representation in this work and the incorporation of
$S_{\alpha}$ as a novel evaluation metric.
In analyzing the results in Table I, there is a characteristic pattern between
the accuracy metrics across each data set. For instance, in comparing
$S_{0}$-$S_{5}$ for the first data set with zero mutations applied on each
inserted motif, the score monotonically increases with an increasing $\alpha$.
This is evident for the three other data sets as well. With respect to this
particular trend, it is expected that as $\alpha$ increases, the score will
also increase since $\alpha$ relates directly to the allowed margin of error,
making $S_{\alpha}$ less conservative.
Additionally, the model’s accuracy is far higher for the data sets with
relatively simple inserted motif patterns, such as nonmutated and mutated
consensus motifs, compared to that of the fourth data set with a PPM motif
pattern. This relationship can be explained by the process by which the scores
for each 15-mer are calculated. For a given 15-mer, a score is computed based
on its local alignment with a given consensus motif. For the first data set,
the local alignment scores are derived from each inserted motif, whereas in
the latter three data sets the scores are not necessarily derived from each
data set’s consensus motif, since those motif patterns permit variable
inserted motifs.
In all data sets, the largest increase in $S_{\alpha}$ appears between
$S_{0}$ and $S_{1}$; beyond a given $\alpha$, $S_{\alpha}$ plateaus. Given
that the likelihood of observing a false positive is relatively low, this
indicates that the addition of stride $\alpha$ is well-advised, because
increasing $\alpha$ only influences $S_{\alpha}$ up to a certain point. It
is expected that as $\alpha\xrightarrow{}\beta$, where $\beta$ is the
maximum $\alpha$ on either side of a given motif index,
$S_{\alpha}\xrightarrow{}1$, because all $n$ indices will be covered by the
stride ${\alpha}$. In the case that $S_{\alpha}\xrightarrow{}1$, the
certainty of each identified motif nevertheless decreases with increasing
$S_{\alpha}$; however, the absence of this limit in the data indicates that
the certainty of the identified motifs does not decrease dramatically from
$S_{0}$ to $S_{5}$. Furthermore, the presence of a
plateauing $S_{\alpha}$ supports the thought that a decrease in the certainty
of an identified motif is negligible. This analysis can be taken further by
noticing that the point at which $S_{\alpha}$ plateaus increases as the
complexity of the motif pattern increases. For a more complex motif pattern,
such as either of the PPMs, a greater $\alpha$ is required to fully capture
the accuracy of the model’s predictions; even then, the certainty of such
motif identification decreases with increasing $\alpha$.
In subsection V-A, we draw a one-to-one correspondence between the actual
motif indices and those of the predicted motifs by examining only the
indices of the 15 highest-scoring 15-mers in both the actual scores and
predicted scores. This is not a strict one-to-one correspondence, because
the number of inserted motifs varies randomly from five to fifteen from
sample to sample. By design, this is a confounding variable. When
$S_{\alpha}$ is applied to a sample with five inserted motifs, the returned
score is expected to underestimate the model’s performance, because the
function always examines the 15 highest-scoring indices of each sample. With
five inserted motifs, ten of those 15-mers would be identified as high-
scoring motifs when in reality they are random 15-mers in the sequence.
Because such scores are likely to occur throughout a sequence, there will be
less similarity between the indices of the predicted 15 highest-scoring
15-mers and those of the actual 15 highest-scoring 15-mers, which will most
likely decrease $S_{\alpha}$.
## References
* [1] Saul B. Needleman and Christian D. Wunsch. A general method applicable to the search for similarities in the amino acid sequence of two proteins. Journal of Molecular Biology, 48(3):443–453, 1970.
* [2] Temple F Smith, Michael S Waterman, et al. Identification of common molecular subsequences. Journal of molecular biology, 147(1):195–197, 1981.
* [3] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep learning, volume 1. MIT Press, 2017.
* [4] Ahmad Al Kawam, Sunil Khatri, and Aniruddha Datta. A survey of software and hardware approaches to performing read alignment in next generation sequencing. IEEE/ACM transactions on computational biology and bioinformatics, 14(6):1202–1213, 2016.
* [5] Patrik D’haeseleer. What are dna sequence motifs? Nature biotechnology, 24(4):423–425, 2006.
* [6] Robert C McLeay and Timothy L Bailey. Motif enrichment analysis: a unified framework and an evaluation on chip data. BMC bioinformatics, 11(1):165, 2010.
* [7] Waqar Haque, Alex Aravind, and Bharath Reddy. Pairwise sequence alignment algorithms: A survey. In Proceedings of the 2009 Conference on Information Science, Technology and Applications, page 96–103, 2009.
* [8] EMBL-EBI. Pairwise Sequence Alignment, 2020.
* [9] Xiaole Liu, Douglas L Brutlag, and Jun S Liu. Bioprospector: discovering conserved dna motifs in upstream regulatory regions of co-expressed genes. In Biocomputing 2001, pages 127–138. World Scientific, 2000.
* [10] Jorge A Iñiguez-Lluhí and David Pearce. A common motif within the negative regulatory regions of multiple factors inhibits their transcriptional synergy. Molecular and Cellular Biology, 20(16):6040–6050, 2000.
* [11] Modan K Das and Ho-Kwok Dai. A survey of dna motif finding algorithms. In BMC bioinformatics, volume 8, page S21. Springer, 2007.
* [12] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
* [13] Jürgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015.
* [14] Yu Qi, Yueming Wang, Xiaoxiang Zheng, and Zhaohui Wu. Robust feature learning by stacked autoencoder with maximum correntropy criterion. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6716–6720. IEEE, 2014.
* [15] Luping Ji, Xiaorong Pu, Hong Qu, and Guisong Liu. One-dimensional pairwise cnn for the global alignment of two dna sequences. Neurocomputing, 149:505–514, 2015.
* [16] Q. Zhang, L. Zhu, W. Bao, and D. Huang. Weakly-supervised convolutional neural network architecture for predicting protein-dna binding. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 17(2):679–689, 2020.
* [17] Gary D Stormo and George W Hartzell. Identifying protein-binding sites from unaligned dna fragments. Proceedings of the National Academy of Sciences, 86(4):1183–1187, 1989.
* [18] Martin C. Frith, Michael C. Li, and Zhiping Weng. Cluster-Buster: finding dense clusters of motifs in DNA sequences. Nucleic Acids Research, 31(13):3666–3668, 07 2003.
* [19] Tom Lesluyes, James Johnson, Philip Machanick, and Timothy L. Bailey. Differential motif enrichment analysis of paired chip-seq experiments. BMC Genomics, 15(1):752, 2014.
* [20] Peter J. A. Cock, Tiago Antao, Jeffrey T. Chang, Brad A. Chapman, Cymon J. Cox, Andrew Dalke, Iddo Friedberg, Thomas Hamelryck, Frank Kauff, Bartek Wilczynski, and Michiel J. L. de Hoon. Biopython: freely available python tools for computational molecular biology and bioinformatics. Bioinformatics, 25(11):1422–1423, 2009.
* [21] Charles E. Grant, Timothy L. Bailey, and William Stafford Noble. Fimo: scanning for occurrences of a given motif. Bioinformatics, 27(7):1017–1018, 2011.
* [22] Mengchi Wang, David Wang, Kai Zhang, Vu Ngo, Shicai Fan, and Wei Wang. Motto: Representing motifs in consensus sequences with minimum information loss. Genetics, page genetics.303597.2020, 08 2020.
* [23] Timothy L. Bailey, Mikael Boden, Fabian A. Buske, Martin Frith, Charles E. Grant, Luca Clementi, Jingyuan Ren, Wilfred W. Li, and William S. Noble. Meme suite: tools for motif discovery and searching. Nucleic Acids Research, 37(suppl_2):W202–W208, 2009.
* [24] Martin C. Frith, Neil F. W. Saunders, Bostjan Kobe, and Timothy L. Bailey. Discovering sequence motifs with arbitrary insertions and deletions. PLOS Computational Biology, 4(5):e1000071–, 05 2008.
* [25] T. L. Bailey and M. Gribskov. Combining evidence using p-values: application to sequence homology searches. Bioinformatics, 14(1):48–54, 1998.
* [26] Oriol Fornes, Jaime A Castro-Mondragon, Aziz Khan, Robin van der Lee, Xi Zhang, Phillip A Richmond, Bhavi P Modi, Solenne Correard, Marius Gheorghe, Damir Baranašić, Walter Santana-Garcia, Ge Tan, Jeanne Chèneby, Benoit Ballester, François Parcy, Albin Sandelin, Boris Lenhard, Wyeth W Wasserman, and Anthony Mathelier. JASPAR 2020: update of the open-access database of transcription factor binding profiles. Nucleic Acids Research, 48(D1):D87–D92, 11 2019.
* [27] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016.
* [28] Jürgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85–117, 2015.
* [29] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, 2015.
* [30] Ethan J Moyer and Anup Das. Machine learning applications to dna subsequence and restriction site analysis. arXiv preprint arXiv:2011.03544, 2020.
Table 12: Benchmarking information retrieval recall@1/5/10 on M-BEIR for CLIP
Base models. For Fashion200K and FashionIQ, we report recall@10/20/50
following the original work.
| Task | Dataset | Metric | Multi-task (✗ instruction) BLIP${}_{\text{SF}}$ M-BEIR${}_{\text{local}}$ | Multi-task BLIP${}_{\text{SF}}$ M-BEIR | Multi-task BLIP${}_{\text{FF}}$ M-BEIR${}_{\text{local}}$ | Multi-task BLIP${}_{\text{FF}}$ M-BEIR | UniIR (✓ instruction) BLIP${}_{\text{SF}}$ M-BEIR${}_{\text{local}}$ | UniIR BLIP${}_{\text{SF}}$ M-BEIR | UniIR BLIP${}_{\text{FF}}$ M-BEIR${}_{\text{local}}$ | UniIR BLIP${}_{\text{FF}}$ M-BEIR |
|---|---|---|---|---|---|---|---|---|---|---|
| 1\. $q_{t}\to c_{i}$ | VisualNews | R@1 | 7.1 | 0.0 | 7.8 | 0.0 | 6.9 | 5.5 | 7.5 | 6.9 |
| | | R@5 | 16.9 | 2.3 | 18.2 | 3.4 | 17.0 | 15.9 | 18.0 | 17.6 |
| | | R@10 | 22.9 | 5.0 | 24.3 | 6.9 | 22.8 | 21.9 | 24.3 | 23.8 |
| | MSCOCO | R@1 | 47.3 | 0.0 | 48.1 | 0.0 | 47.8 | 11.3 | 49.5 | 23.8 |
| | | R@5 | 74.6 | 15.4 | 75.7 | 10.5 | 75.6 | 65.1 | 76.6 | 70.2 |
| | | R@10 | 83.4 | 32.9 | 84.5 | 24.4 | 84.5 | 76.5 | 85.1 | 80.2 |
| | Fashion200K | R@10 | 22.1 | 3.5 | 22.3 | 0.9 | 21.6 | 21.2 | 22.8 | 22.1 |
| | | R@20 | 29.7 | 7.5 | 29.6 | 3.0 | 28.2 | 27.8 | 30.6 | 30.3 |
| | | R@50 | 42.5 | 18.2 | 41.8 | 8.7 | 40.0 | 39.7 | 41.5 | 41.1 |
| 2\. $q_{t}\to c_{t}$ | WebQA | R@1 | 50.1 | 47.9 | 49.2 | 46.9 | 51.4 | 51.2 | 50.1 | 49.7 |
| | | R@5 | 77.0 | 74.2 | 76.2 | 74.6 | 76.7 | 76.7 | 76.4 | 76.4 |
| | | R@10 | 84.0 | 82.0 | 83.4 | 81.9 | 83.3 | 83.2 | 82.7 | 82.7 |
| 3\. $q_{t}\to$ ($c_{i},c_{t}$) | EDIS | R@1 | 20.7 | 12.0 | 23.8 | 12.2 | 22.0 | 21.2 | 23.5 | 22.6 |
| | | R@5 | 40.2 | 30.0 | 47.5 | 30.4 | 44.5 | 44.1 | 46.2 | 45.9 |
| | | R@10 | 50.1 | 38.7 | 57.7 | 38.9 | 53.9 | 53.7 | 56.7 | 56.4 |
| | WebQA | R@1 | 49.0 | 45.0 | 49.1 | 45.7 | 49.9 | 49.6 | 49.5 | 49.1 |
| | | R@5 | 77.3 | 73.5 | 78.1 | 75.0 | 77.5 | 77.2 | 78.1 | 77.9 |
| | | R@10 | 86.6 | 83.4 | 85.9 | 82.6 | 86.4 | 86.2 | 85.6 | 85.5 |
| 4\. $q_{i}\to c_{t}$ | VisualNews | R@1 | 7.1 | 0.0 | 7.4 | 0.0 | 6.6 | 3.7 | 7.2 | 5.0 |
| | | R@5 | 17.1 | 3.2 | 17.1 | 2.3 | 16.8 | 14.8 | 17.1 | 15.8 |
| | | R@10 | 23.2 | 6.7 | 22.8 | 5.0 | 22.6 | 21.4 | 23.6 | 22.5 |
| | MSCOCO | R@1 | 58.3 | 0.0 | 63.7 | 0.0 | 62.4 | 39.2 | 64.6 | 51.7 |
| | | R@5 | 83.0 | 71.6 | 87.0 | 71.7 | 85.9 | 84.5 | 88.1 | 86.9 |
| | | R@10 | 90.3 | 83.3 | 93.1 | 83.9 | 92.0 | 91.5 | 93.5 | 93.1 |
| | Fashion200K | R@10 | 22.4 | 1.6 | 23.6 | 0.9 | 20.9 | 18.2 | 23.8 | 22.4 |
| | | R@20 | 30.4 | 4.0 | 32.0 | 2.4 | 29.7 | 27.7 | 32.7 | 31.5 |
| | | R@50 | 43.8 | 10.2 | 44.7 | 6.8 | 42.3 | 40.6 | 45.6 | 45.1 |
| 5\. $q_{i}\to c_{i}$ | NIGHTS | R@1 | 8.2 | 8.2 | 8.0 | 8.0 | 7.7 | 7.7 | 7.7 | 7.7 |
| | | R@5 | 30.0 | 29.7 | 31.8 | 31.6 | 30.7 | 30.6 | 30.3 | 30.3 |
| | | R@10 | 49.8 | 49.1 | 51.0 | 50.5 | 51.2 | 51.2 | 49.5 | 49.5 |
| 6\. ($q_{i},q_{t}$) $\to c_{t}$ | OVEN | R@1 | 17.6 | 16.9 | 19.6 | 20.4 | 13.9 | 16.0 | 17.9 | 20.8 |
| | | R@5 | 33.2 | 29.2 | 36.7 | 32.8 | 28.9 | 28.2 | 34.5 | 33.1 |
| | | R@10 | 40.7 | 35.2 | 44.4 | 38.9 | 36.1 | 34.1 | 42.1 | 39.0 |
| | InfoSeek | R@1 | 7.6 | 3.7 | 8.6 | 5.6 | 6.3 | 5.0 | 7.9 | 6.9 |
| | | R@5 | 17.3 | 9.5 | 21.2 | 13.6 | 16.5 | 13.3 | 18.9 | 16.5 |
| | | R@10 | 23.6 | 14.0 | 28.1 | 19.3 | 23.3 | 19.6 | 25.4 | 22.5 |
| 7\. ($q_{i},q_{t}$) $\to c_{i}$ | FashionIQ | R@10 | 22.5 | 22.1 | 25.4 | 24.9 | 20.8 | 20.1 | 23.7 | 23.0 |
| | | R@20 | 29.9 | 29.2 | 33.2 | 32.5 | 27.5 | 26.6 | 31.0 | 29.8 |
| | | R@50 | 41.2 | 40.3 | 44.8 | 43.9 | 38.2 | 36.6 | 42.1 | 40.5 |
| | CIRR | R@1 | 11.0 | 9.1 | 20.3 | 18.7 | 13.1 | 12.9 | 20.2 | 19.4 |
| | | R@5 | 39.0 | 31.8 | 45.1 | 42.0 | 42.2 | 40.7 | 46.4 | 45.1 |
| | | R@10 | 50.3 | 42.7 | 56.0 | 52.2 | 54.4 | 52.3 | 57.5 | 55.8 |
| 8\. ($q_{i},q_{t}$) $\to$ ($c_{i},c_{t}$) | OVEN | R@1 | 28.3 | 32.5 | 31.1 | 33.2 | 28.3 | 33.0 | 29.1 | 37.1 |
| | | R@5 | 47.0 | 46.8 | 49.9 | 47.5 | 47.3 | 48.2 | 48.2 | 50.6 |
| | | R@10 | 54.6 | 52.5 | 57.5 | 53.1 | 54.7 | 54.2 | 55.6 | 55.9 |
| | InfoSeek | R@1 | 11.2 | 9.4 | 13.7 | 10.9 | 12.2 | 10.8 | 13.0 | 11.8 |
| | | R@5 | 25.5 | 20.7 | 29.1 | 22.7 | 26.4 | 23.6 | 27.0 | 23.5 |
| | | R@10 | 33.6 | 27.9 | 37.3 | 29.4 | 34.2 | 30.7 | 35.0 | 30.4 |
| - | Average | R@1 | 24.9 | 14.2 | 26.9 | 15.5 | 25.3 | 20.6 | 26.8 | 24.1 |
| | | R@5 | 44.5 | 33.7 | 47.2 | 35.2 | 45.1 | 43.3 | 46.6 | 45.4 |
| | | R@10 | 47.5 | 36.3 | 49.8 | 37.1 | 47.7 | 46.0 | 49.2 | 47.8 |
Table 13: Benchmarking information retrieval recall@1/5/10 on M-BEIR for BLIP
Base models. For Fashion200K and FashionIQ, we report recall@10/20/50
following the original work.
# On the solvability of graded Novikov algebras
###### Abstract.
We show that the right ideal of a Novikov algebra generated by the square of a
right nilpotent subalgebra is nilpotent. We also prove that a $G$-graded
Novikov algebra $N$ over a field $K$ with solvable $0$-component $N_{0}$ is
solvable, where $G$ is a finite additive abelean group and the characteristic
of $K$ does not divide the order of the group $G$. We also show that any
Novikov algebra $N$ with a finite solvable group of automorphisms $G$ is
solvable if the algebra of invariants $N^{G}$ is solvable.
Ualbai Umirbaev111Department of Mathematics, Wayne State University, Detroit,
MI 48202, USA; Department of Mathematics, Al-Farabi Kazakh National
University, Almaty, 050040, Kazakhstan; and Institute of Mathematics and
Mathematical Modeling, Almaty, 050010, Kazakhstan, e-mail<EMAIL_ADDRESS>and Viktor Zhelyabin222Institute of Mathematics of the SB of RAS, Novosibirsk,
630090, Russia, e-mail<EMAIL_ADDRESS>
Mathematics Subject Classification (2010): 17D25, 17B30, 17B70
Key words: Novikov algebra, graded algebra, solvability, nilpotency,
automorphism, the ring of invariants
## 1\. Introduction
A nonassociative algebra $N$ over a field $K$ is called a Novikov algebra [17]
if it satisfies the following identities:
(1) $\displaystyle(x,y,z)=(y,x,z)\,\text{ (left symmetry)},$ (2)
$\displaystyle(xy)z=(xz)y\,\text{ (right commutativity)},$
where $(x,y,z)=(xy)z-x(yz)$ is the associator of elements $x,y,z$.
The defining identities of a Novikov algebra first appeared in the study of
Hamiltonian operators in the formal calculus of variations by I.M. Gelfand and
I.Ya. Dorfman [8]. These identities played a crucial role in the
classification of linear Poisson brackets of hydrodynamical type by A.A.
Balinskii and S.P. Novikov [1].
In 1987 E.I. Zelmanov [25] proved that all finite-dimensional simple Novikov
algebras over a field $K$ of characteristic $0$ are one-dimensional. V.T.
Filippov [6] constructed a wide class of simple Novikov algebras of
characteristic $p\geq 0$. J.M. Osborn [17, 18, 19] and X. Xu [23, 24]
continued the study of simple finite dimensional algebras over fields of
positive characteristic and simple infinite dimensional algebras over fields
of characteristic zero. A complete classification of finite dimensional simple
Novikov algebras over algebraically closed fields of characteristic $p>2$ is
given in [23].
E.I. Zelmanov also proved that if $N$ is a finite dimensional right nilpotent
Novikov algebra then $N^{2}$ is nilpotent [25]. In 2001 V.T. Filippov [7]
proved that any left-nil Novikov algebra of bounded index over a field of
characteristic zero is nilpotent. A.S. Dzhumadildaev and K.M. Tulenbaev [5]
proved that any right-nil Novikov algebra of bounded index $n$ is right
nilpotent if the characteristic $p$ of the field $K$ is $0$ or $p>n$. In 2020
I. Shestakov and Z. Zhang proved [21] that for any Novikov algebra $N$ over a
field the following conditions are equivalent:
$(i)$ $N$ is solvable;
$(ii)$ $N^{2}$ is nilpotent;
$(iii)$ $N$ is right nilpotent.
The Freiheitssatz for Novikov algebras over fields of characteristic $0$ was
proven by L. Makar-Limanov and U. Umirbaev [15]. L.A. Bokut, Y. Chen, and Z.
Zhang [3] proved that every Novikov algebra is a subalgebra of a Novikov
algebra obtained from some differential algebra by Gelfand-Dorfman
construction [8].
This paper is devoted to the study of solvable, nilpotent, and right nilpotent
Novikov algebras and graded Novikov algebras. Notice that an algebra $A$ over
a field containing all $n$th roots of unity admits an automorphism of order
$n$ if and only if $A$ admits a $\mathbb{Z}_{n}$-grading. For this reason the
study of graded algebras is related to the study of actions of finite groups.
First we recall some definitions and classical results.
Let $R$ be an algebra over a field $K$. For any automorphism $\phi$ of $R$ the
set of fixed elements
$\displaystyle R^{\phi}=\{x\in R\mid\phi(x)=x\}$
is a subalgebra of $R$ and is called the subalgebra of invariants of $\phi$.
An automorphism $\phi$ is called regular if $R^{\phi}=0$. For any group $G$ of
automorphisms of $R$ the subalgebra of invariants
$\displaystyle R^{G}=\{x\in R\mid\phi(x)=x\text{ for all }\phi\in G\}$
is defined similarly.
In 1957 G. Higman [10] published a classical result on Lie algebras which says
that if a Lie algebra $L$ has a regular automorphism $\phi$ of prime order
$p$, then $L$ is nilpotent. It was also shown that the index of nilpotency
$h(p)$ of $L$ depends only on $p$. An explicit estimation of the function
$h(p)$ was found by A.I. Kostrikin and V.A. Kreknin [12] in 1963. A little
later, V.A. Kreknin proved [13] that a finite dimensional Lie algebra with a
regular automorphism of an arbitrary finite order is solvable. In 2005 N. Yu.
Makarenko [14] proved that if a Lie algebra $L$ admits an automorphism of
prime order $p$ with a finite-dimensional fixed subalgebra of dimension $t$,
then $L$ has a nilpotent ideal of finite codimension with the index of
nilpotency bounded in terms of $p$ and the codimension bounded in terms of $t$
and $p$.
In 1973 G. Bergman and I. Isaacs [2] published a classical result on the
actions of finite groups on associative algebras. Let $G$ be a finite group of
automorphisms of an associative algebra $R$ and suppose that $R$ has no
$|G|$-torsion. If the subalgebra of invariants $R^{G}$ is nilpotent then the
Bergman-Isaacs Theorem [2] states that $R$ is also nilpotent. Since then a
very large number of papers have been devoted to the study of automorphisms of
associative rings. The central problem of these studies was to identify the
properties of rings that can be transformed from the ring of invariants to the
whole ring. In 1974 V. K. Kharchenko [11] proved that if $R^{G}$ is a PI-ring
then
$R$ is a PI-ring under the conditions of the Bergman-Isaacs Theorem.
The Bergman-Isaacs Theorem was partially generalized by W.S. Martindale and S.
Montgomery [16] in 1977 to the case of a finite group of Jordan automorphisms,
that is a finite group of automorphisms of the adjoint Jordan algebra
$R^{(+)}$.
An analogue of Kharchenko’s result for Jordan algebras was proven by A. P.
Semenov [20] in 1991. In particular, A. P. Semenov proved that if $J^{G}$ is a
solvable algebra over a field of characteristic zero, then so is the Jordan
algebra $J$. His proof uses a deep result by E.I. Zel’manov [26] which says
that every Jordan nil-algebra of bounded index over a field of characteristic
zero is solvable. If a Jordan algebra $J$ over a field of characteristic not
equal to $2,3$ admits an automorphism $\phi$ of order $2$ with solvable
$J^{\phi}$, then $J$ is solvable [27].
In the case of alternative algebras one cannot expect that nilpotency of the
invariant subalgebra implies the nilpotency of the whole algebra. There is an
example (see [4, 30]) of a solvable non-nilpotent alternative algebra with an
automorphism of order two such that its subalgebra of invariants is nilpotent.
A combination of Semenov’s result [20] and Zhevlakov’s theorem [29] gives
that, for an alternative algebra $A$ over a field of characteristic zero, the
solvability of the algebra of invariants $A^{G}$ for a finite group $G$
implies the solvability of $A$. It is also known [22] that if $A$ is an
alternative algebra over a field of characteristic not equal to $2$ with an
automorphism $\phi$ of order two, then the solvability of the algebra of
invariants $A^{\phi}$ implies the solvability of $A$. In [9] M. Goncharov
proved that an alternative $\mathbb{Z}_{3}$\- graded algebra $A=A_{0}\oplus
A_{1}\oplus A_{2}$ over a field of characteristic not equal to $2,3,5$ is
solvable if $A_{0}$ is solvable.
It was shown in [28] for every $n$ of the form $n=2^{k}3^{l}$ that a
$\mathbb{Z}_{n}$-graded Novikov algebra
$\displaystyle N=N_{0}\oplus\ldots\oplus N_{n-1}$
over a field of characteristic not equal to $2,3$ is solvable if $N_{0}$ is
solvable.
In this paper we first prove that if $L$ is a right nilpotent subalgebra of a
Novikov algebra $N$ then the right ideal of $N$ generated by $L^{2}$ is right
nilpotent (Theorem 1). This result gives a deeper explanation of the results
on the nilpotency of $N^{2}$ mentioned above. The main result of the paper
(Theorem 2) says that if $N$ is a $G$-graded Novikov algebra with solvable
$0$-component $N_{0}$, where $G$ is a finite additive abelian group, then $N$
is solvable. This result allows us to prove (Theorem 3) that if $N$ is a
Novikov algebra with solvable algebra of invariants $N^{G}$, where $G$ is a
finite solvable group of automorphisms of $N$, then $N$ is solvable. Theorems
2 and 3 are formulated for fields of characteristic $0$ or positive
characteristic $p$ that does not divide $|G|$. Notice that the solvability and
the right nilpotency of Novikov algebras are equivalent by the result of I.
Shestakov and Z. Zhang mentioned above.
The paper is organized as follows. In Section 2 we prove some identities and
Theorem 1. Sections 3–5 are devoted to the study of the structure of
$\mathbb{Z}_{n}$-graded Novikov algebras. Theorems 2 and 3 are formulated and
proven in Section 6.
## 2\. Right nilpotent subalgebras
The identities (1) and (2) easily imply the identities
(3) $\displaystyle(xy,z,t)=(x,z,t)y$
and
(4) $\displaystyle(x,yz,t)=(x,y,t)z.$
Let $A$ be an arbitrary algebra. The powers of $A$ are defined inductively by
$A^{1}=A$ and
$\displaystyle A^{m}=\sum_{i=1}^{m-1}A^{i}A^{m-i}$
for all positive integers $m\geq 2$. The algebra $A$ is called nilpotent if
$A^{m}=0$ for some positive integer $m$.
The right powers of $A$ are defined inductively by $A^{[1]}=A$ and
$A^{[m+1]}=A^{[m]}A$ for all integers $m\geq 1$. The algebra $A$ is called
right nilpotent if there exists a positive integer $m$ such that $A^{[m]}=0$.
In general, the right nilpotency of an algebra does not imply its nilpotency.
This is also true in the case of Novikov algebras.
Example 1. [25] Let $N=Fa+Fb$ be a vector space of dimension 2. The product on
$N$ is defined as
$ab=b,a^{2}=b^{2}=ba=0.$
It is easy to check that $N$ is a right nilpotent Novikov algebra, but not
nilpotent.
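For concreteness, the right powers and ordinary powers of this algebra can be written out (a routine verification, stated here for the reader's convenience):

```latex
\begin{aligned}
N^{[2]} &= NN = Fb \quad (\text{only } ab=b \text{ is nonzero}),\\
N^{[3]} &= N^{[2]}N = (Fb)N = F(ba) + F(b^{2}) = 0,\\
N^{3}   &= N^{2}N + NN^{2} = (Fb)N + N(Fb) = 0 + F(ab) = Fb,
\end{aligned}
```

and by induction $N^{m}=Fb\neq 0$ for all $m\geq 2$, so $N$ is right nilpotent but not nilpotent.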
The derived powers of $A$ are defined by $A^{(0)}=A$, $A^{(1)}=A^{2}$, and
$A^{(m)}=A^{(m-1)}A^{(m-1)}$ for all positive integers $m\geq 2$. The algebra
$A$ is called solvable if $A^{(m)}=0$ for some positive integer $m$. Every
right nilpotent algebra is solvable, and, in general, the converse is not
true. But every solvable Novikov algebra is right nilpotent [21].
It is well known that if $I$ and $J$ are ideals of a Novikov algebra $N$, then
$IJ$ is also an ideal of $N$. Consequently, if $N$ is a Novikov algebra then
$N^{m}$, $N^{[m]}$, and $N^{(m)}$ are ideals of $N$. If $S$ is a subset of a
Novikov algebra $N$, then denote by $\langle S\rangle$ the right ideal of $N$
generated by $S$. Notice that if $I$ is a right ideal of $N$, then $IS$ is a
right ideal of $N$ for any subset $S\subseteq N$ by (2).
In any algebra we denote by $x_{1}x_{2}\ldots x_{k}$ the right normed product
$(\ldots(x_{1}x_{2})\ldots)x_{k}$ of elements $x_{1},x_{2},\ldots,x_{k}$. For
any $x,y$ denote by $x\circ y=xy+yx$ the Jordan product.
###### Lemma 1.
Any Novikov algebra satisfies the following identities:
(5) $\displaystyle a(bx_{1}\ldots x_{t})=abx_{1}x_{2}\ldots
x_{t}-\sum_{i=1}^{t}(a,b,x_{i})x_{1}\ldots x_{i-1}x_{i+1}\ldots x_{t}$
for each positive integer $t\geq 1$,
(6) $\displaystyle(ax_{1}\ldots x_{s})\circ(bx_{s+1}\ldots x_{t})=(a\circ
b)x_{1}x_{2}\ldots x_{t}-\sum_{i=1}^{t}(a,b,x_{i})x_{1}\ldots
x_{i-1}x_{i+1}\ldots x_{t}$
for all nonnegative integers $0\leq s<t$, and
(7) $\displaystyle(ax_{1}\ldots x_{s})\circ(bx_{s+1}\ldots
x_{t})=a\circ(bx_{1}\ldots x_{t}).$
Proof. We prove (5) by induction on $t$. If $t=1$, then (5) is true by the
definition of the associator. By (4), we have
$\displaystyle a(bx_{1}\ldots x_{t})=a(bx_{1}\ldots
x_{t-1})x_{t}-(a,bx_{1}\ldots x_{t-1},x_{t})$ $\displaystyle=a(bx_{1}\ldots
x_{t-1})x_{t}-(a,b,x_{t})x_{1}\ldots x_{t-1}.$
Using this and the induction hypothesis, we get
$\displaystyle a(bx_{1}\ldots x_{t})=(abx_{1}x_{2}\ldots
x_{t-1}-\sum_{i=1}^{t-1}(a,b,x_{i})x_{1}\ldots x_{i-1}x_{i+1}\ldots
x_{t-1})x_{t}$ $\displaystyle-(a,b,x_{t})x_{1}\ldots
x_{t-1}=abx_{1}x_{2}\ldots x_{t}-\sum_{i=1}^{t}(a,b,x_{i})x_{1}\ldots
x_{i-1}x_{i+1}\ldots x_{t}.$
By (2), (3), and (5), we get
$\displaystyle(ax_{1}\ldots x_{s})(bx_{s+1}\ldots x_{t})=(ab)x_{1}x_{2}\ldots
x_{t}-\sum_{i=s+1}^{t}(a,b,x_{i})x_{1}\ldots x_{i-1}x_{i+1}\ldots x_{t}.$
This implies (6). The identity (7) is a direct consequence of (2), (5), and
(6). $\Box$
Let $N$ be a Novikov algebra and let $L$ be a subalgebra of $N$. Set $L_{0}=N$
and $L_{k}=\langle L^{[k]}\rangle$ for each positive integer $k$.
Consider the descending sequence of right ideals
$\displaystyle N=L_{0}\supseteq L_{1}\supseteq L_{2}\supseteq\ldots\supseteq
L_{k}\supseteq\ldots$
of the algebra $N$.
###### Lemma 2.
$L_{s}L_{t}\subseteq L_{s+t-1}$ for all positive integers $s,t$.
Proof. We prove the lemma by induction on $t$. It is true for $t=1$ by the
definition of $L_{s}$. Notice that
$\displaystyle L_{s}=L_{1}\underbrace{L\ldots L}_{s-1}$
for each $s\geq 1$ by (2).
Suppose that $t\geq 2$ and let $x\in L_{s}$ and $y=za_{1}\ldots a_{t-1}\in
L_{t}$, where $z\in L_{1}$ and $a_{1},\ldots,a_{t-1}\in L$. By (5), we get
$\displaystyle xy=xza_{1}\ldots
a_{t-1}-\sum_{i=1}^{t-1}(x,z,a_{i})a_{1}\ldots\widehat{a_{i}}\ldots a_{t-1}$
where $\widehat{a_{i}}$ means that $a_{i}$ is absent. Notice that $xz\in
L_{s}$ and
$\displaystyle xza_{1}\ldots a_{t-1}\in L_{s}\underbrace{L\ldots
L}_{t-1}=L_{s+t-1}.$
Moreover, $(x,z,a_{i})$ belongs to the right ideal generated by
$(L^{[s]},L,a_{i})$ by (3) and (4). Consequently, $(x,z,a_{i})\in L_{s+1}$ and
$(x,z,a_{i})a_{1}\ldots\widehat{a_{i}}\ldots a_{t-1}\in L_{s+t-1}$. $\Box$
In general, $L_{1}$ is not an ideal of $L_{0}=N$.
Example 2. Let $K[x,y]$ be the polynomial algebra over $K$ in the variables
$x,y$. Define a new product $\cdot$ on $K[x,y]$ by
$f\cdot g=f\frac{\partial g}{\partial x},\,f,g\in K[x,y].$
Then $N=(K[x,y],\cdot)$ is a Novikov algebra.
Let $L=Kx$. Then $L$ is a subalgebra of $N$ since $x\cdot x=x$. Let
$L_{1}=\langle L\rangle$. It is clear that $L_{1}\subseteq xK[x,y]$. Hence,
$y\cdot x=y\frac{\partial x}{\partial x}=y\not\in L_{1}.$
Consequently, $L_{1}$ is not an ideal of $L_{0}=N$.
But for each $r\geq 2$ the right ideal $L_{r}$ is an ideal of $L_{1}$ by Lemma
2.
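The computations in Example 2 are easy to verify symbolically. The following sketch (using SymPy; illustrative only) spot-checks that the product $f\cdot g=f\,\partial g/\partial x$ satisfies the defining Novikov identities on sample polynomials, and that $y\cdot x=y$ lies outside $xK[x,y]\supseteq L_{1}$:

```python
import sympy as sp

x, y = sp.symbols('x y')

def mul(f, g):
    """The product of Example 2: f . g = f * dg/dx."""
    return sp.expand(f * sp.diff(g, x))

def assoc(f, g, h):
    return sp.expand(mul(mul(f, g), h) - mul(f, mul(g, h)))

# Spot-check the two defining Novikov identities on sample polynomials:
# left symmetry of the associator, (f,g,h) = (g,f,h), and commutativity
# of right multiplications, (fg)h = (fh)g.
samples = [x, y, x * y, x**2 + 3 * y, y**2 - x]
for f in samples:
    for g in samples:
        for h in samples:
            assert sp.expand(assoc(f, g, h) - assoc(g, f, h)) == 0
            assert sp.expand(mul(mul(f, g), h) - mul(mul(f, h), g)) == 0

# L = Kx is a subalgebra: x . x = x.
assert mul(x, x) == x

# The right ideal generated by x lies in x*K[x,y], but y . x = y does not,
# so L_1 = <L> is not an ideal of N.
assert mul(y, x) == y
```

Here the associator works out to $(f,g,h)=-fg\,\partial^{2}h/\partial x^{2}$, which is visibly symmetric in $f$ and $g$, consistent with the checks above.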
###### Corollary 1.
$L_{2}^{n}\subseteq L_{n+1}$ for all $n\geq 1$.
Proof. It is trivial for $n=1$ and true for $n=2$ by Lemma 2. If
$L_{2}^{i}\subseteq L_{2+i-1}$ and $L_{2}^{j}\subseteq L_{2+j-1}$, then
$L_{2}^{i}L_{2}^{j}\subseteq L_{i+1}L_{j+1}\subseteq L_{i+j+1}$. By
induction on $n$ we get
$\displaystyle L_{2}^{n}=\sum_{i+j=n,i,j\geq 1}L_{2}^{i}L_{2}^{j}\subseteq
L_{n+1}.\ \ \ \Box$
###### Theorem 1.
Let $L$ be a right nilpotent subalgebra of a Novikov algebra $N$ over a field
$K$. Then the right ideal $L_{2}=\langle L^{2}\rangle$ of $N$ generated by
$L^{2}$ is nilpotent.
Proof. Suppose that $L^{[n]}=0$ for some $n\geq 2$. Then $L_{n}=0$. By
Corollary 1, we have $L_{2}^{n-1}\subseteq L_{n}=0$. This means that $L_{2}$
is nilpotent. $\Box$
## 3\. $\mathbb{Z}_{n}$-graded Novikov algebras
Let $\mathbb{Z}_{n}=\mathbb{Z}/n\mathbb{Z}$ be the additive cyclic group of
order $n$. Let
(8) $\displaystyle N=N_{0}\oplus N_{1}\oplus N_{2}\oplus\ldots\oplus N_{n-1},\
\ N_{i}N_{j}\subseteq N_{i+j},\ i,j\in\mathbb{Z}_{n},$
be a $\mathbb{Z}_{n}$-graded Novikov algebra over $K$.
If $f\in N_{i}$ then we say that $f$ is a homogeneous element of degree $i$.
Notice that $i$ is an element of $\mathbb{Z}_{n}$. Sometimes we consider the
subscripts $i$ of $N_{i}$ as integers satisfying the condition $0\leq i\leq
n-1$.
Obviously, $A=N_{0}$ is a subalgebra of $N$. Recall that $A^{[r]}$ is the
right $r$th power of $A$.
###### Lemma 3.
Let $i_{1},i_{2},\ldots,i_{k}\in\mathbb{Z}_{n}$ and
$i_{1}+i_{2}+\ldots+i_{k}=0$. Then
$\displaystyle A^{[r]}N_{i_{1}}N_{i_{2}}\ldots N_{i_{k}}\subseteq A^{[r]}.$
Proof. By the definition of a $\mathbb{Z}_{n}$-graded algebra, we have
$\displaystyle AN_{i_{1}}N_{i_{2}}\ldots N_{i_{k}}\subseteq A.$
Using this and (2), we get
$\displaystyle A^{[r]}N_{i_{1}}N_{i_{2}}\ldots
N_{i_{k}}=AN_{i_{1}}N_{i_{2}}\ldots N_{i_{k}}\underbrace{A\ldots
A}_{r-1}\subseteq A\underbrace{A\ldots A}_{r-1}=A^{[r]}.\ \ \Box$
Set $A^{\\{0\\}}=N$ and for any integer $r\geq 1$ denote by
$A^{\\{r\\}}=\langle A^{[r]}\rangle$ the right ideal of $N$ generated by
$A^{[r]}$. Obviously, $A^{\\{r\\}}$ is a $\mathbb{Z}_{n}$-graded algebra,
i.e.,
$\displaystyle A^{\\{r\\}}=A^{\\{r\\}}_{0}\oplus A^{\\{r\\}}_{1}\oplus
A^{\\{r\\}}_{2}\oplus\ldots\oplus A^{\\{r\\}}_{n-1}.$
###### Corollary 2.
If $r\geq 1$ and $0\leq i\leq n-1$, then
$\displaystyle
A^{\\{r\\}}_{i}=\sum_{i_{1},i_{2},\ldots,i_{k}}A^{[r]}N_{i_{1}}N_{i_{2}}\ldots
N_{i_{k}},$
where $0\leq i_{1},i_{2},\ldots,i_{k}\leq n-1$,
$i_{1}+i_{2}+\ldots+i_{k}\equiv i(\mathrm{mod}\ n)$ and
$i_{1}+i_{2}+\ldots+i_{k}<n$.
In particular, $A^{\\{r\\}}_{0}=A^{[r]}$.
Consider the descending sequence of right ideals
(9) $\displaystyle N=A^{\\{0\\}}\supseteq A^{\\{1\\}}\supseteq\ldots\supseteq
A^{\\{r\\}}\supseteq\ldots$
of the algebra $N$ and the quotient algebra
(10) $\displaystyle B=A^{\\{1\\}}/A^{\\{2\\}}=B_{0}\oplus B_{1}\oplus
B_{2}\oplus\ldots\oplus B_{n-1},\ \ B_{i}B_{j}\subseteq B_{i+j},\
i,j\in\mathbb{Z}_{n}.$
Notice that $B$ is a right $N$-module. We establish some properties of the
algebra $B$.
###### Lemma 4.
Let $B$ be the Novikov algebra defined by (10). Then
$(i)$ $B_{0}=A/A^{2}$;
$(ii)$ $BB_{0}=0$;
$(iii)$ $x\circ y=xy+yx=0$ for any $x\in B_{i}$ and $y\in B_{n-i}$.
Proof. The statement (i) is true since $A^{\\{r\\}}_{0}=A^{[r]}$ by Corollary
2. The statement (ii) is a direct consequence of the inclusion
$A^{\\{r\\}}A\subseteq A^{\\{r+1\\}}$.
Let $x=ax_{1}x_{2}\ldots x_{s}\in A^{\\{1\\}}_{i}$ and
$y=bx_{s+1}x_{s+2}\ldots x_{t}\in A^{\\{1\\}}_{n-i}$, where $a,b\in A$ and
$x_{r}\in N_{k_{r}}$ for all $1\leq r\leq t$. If $i=0$, then
$A^{\\{1\\}}_{i}=A^{\\{1\\}}_{n-i}=A$ and $xy,yx\in A^{2}$. Suppose that
$i,n-i\neq 0$. Then $\sum_{r=1}^{s}k_{r}=i\neq 0$,
$\sum_{r=s+1}^{t}k_{r}=n-i\neq 0$, and
$\sum_{r=1}^{t}k_{r}=0$. In particular, $t>s\geq 1$. By (7), we have
$\displaystyle x\circ y=(ax_{1}\ldots x_{s})\circ(bx_{s+1}\ldots
x_{t})=a\circ(bx_{1}\ldots x_{t}).$
The condition $\Sigma_{r=1}^{t}k_{r}=0$ implies that $bx_{1}\ldots x_{t}\in
A$. Consequently, $x\circ y\in A^{2}$. This proves $(iii)$. $\Box$
## 4\. Right nilpotency modulo $A^{\\{1\\}}$
In this section we show that if the $0$-component $A=N_{0}$ of a
$\mathbb{Z}_{n}$-graded Novikov algebra $N$ of the form (8) is right nilpotent,
then $N^{[m]}\subseteq A^{\\{1\\}}$ for some positive integer $m$.
###### Lemma 5.
Let $N$ be an arbitrary Novikov algebra and let $V$ be a subspace of $N$. Then
for any $r\geq 1$ we have
$\displaystyle NV^{[r]}V\subseteq\langle V^{[r]}\rangle+NV^{[r+1]}.$
Proof. By (1), we get
$\displaystyle(NV^{[r]})V\subseteq(N,V^{[r]},V)+NV^{[r+1]}\subseteq(V^{[r]},N,V)+NV^{[r+1]}\subseteq\langle
V^{[r]}\rangle+NV^{[r+1]}.\ \ \Box$
###### Corollary 3.
If $r\geq 1$, then
$\displaystyle N\underbrace{V\ldots V}_{r+1}\subseteq\langle
V^{[r]}\rangle+NV^{[r+1]}.$
Proof. It is true for $r=1$ by Lemma 5. If it is true for some $r\geq 1$, then
we get
$\displaystyle N\underbrace{V\ldots V}_{r+2}\subseteq\langle V^{[r]}\rangle
V+NV^{[r+1]}V\subseteq\langle V^{[r+1]}\rangle+NV^{[r+2]}$
by (2) and Lemma 5. $\Box$
###### Lemma 6.
Let $N$ be an arbitrary $\mathbb{Z}_{n}$-graded Novikov algebra of the form (8)
and suppose that the $0$-component $A=N_{0}$ of $N$ is right nilpotent. Then
there exists a positive integer $m$ such that $N^{[m]}\subseteq A^{\\{1\\}}$.
Proof. Suppose that $A^{[r]}=0$ for some positive integer $r$. By Corollary 3,
$\displaystyle N\underbrace{A\ldots A}_{r+1}\subseteq\langle
A^{[r]}\rangle+NA^{[r+1]}=0.$
Again, by Corollary 3, we get
$\displaystyle N\underbrace{N_{i}\ldots N_{i}}_{n}\subseteq\langle
N_{i}^{[n-1]}\rangle+NN_{i}^{[n]}.$
Notice that $N_{i}^{[n]}\subseteq A$. Consequently,
$\displaystyle N\underbrace{N_{i}\ldots N_{i}}_{n+1}\subseteq(\langle
N_{i}^{[n-1]}\rangle+NA)N_{i}\subseteq\langle
N_{i}^{[n]}\rangle+NN_{i}A\subseteq A^{\\{1\\}}+NN_{i}A.$
Using this, we can easily show by induction on $s$ that
$\displaystyle N\underbrace{N_{i}\ldots N_{i}}_{sn+1}\subseteq
A^{\\{1\\}}+NN_{i}\underbrace{A\ldots A}_{s}.$
Consequently,
$\displaystyle N\underbrace{N_{i}\ldots N_{i}}_{(r+1)n+1}\subseteq
A^{\\{1\\}}+NN_{i}\underbrace{A\ldots A}_{r+1}\subseteq A^{\\{1\\}}$
since $N\underbrace{A\ldots A}_{r+1}=0$.
Thus, every $N_{i}$ acts on $N$ nilpotently modulo $A^{\\{1\\}}$ from the
right hand side. Moreover, by (2), this action is commutative. This easily
implies the existence of an integer $m$ such that $N^{[m]}\subseteq
A^{\\{1\\}}$. $\Box$
## 5\. Right nilpotency of $B$
In this section we prove that any $\mathbb{Z}_{n}$-graded Novikov algebra $B$
defined by (10) is right nilpotent if the characteristic of $K$ does not
divide $n$. Suppose that $N$ is a $\mathbb{Z}_{n}$-graded Novikov algebra of
the form (8) satisfying the conditions
$(a)$ $NA=0$ and
$(b)$ $x\circ y=xy+yx=0$ for any $x\in N_{i}$ and $y\in N_{n-i}$ and for any
$i\in\mathbb{Z}_{n}$.
All statements in this section are formulated for the algebra $N$.
First we prove the following lemma.
###### Lemma 7.
Let $x\in N_{n-i}$, $u\in N_{i}^{[k]}$, $i\in\mathbb{Z}_{n}$, and $k\geq 1$.
Then $xu=-kux$.
Proof. We prove the statement of the lemma by induction on $k$. If $k=1$, then
it is true by $(b)$. Suppose that $k>1$ and $u=vy$, where $v\in N_{i}^{[k-1]}$
and $y\in N_{i}$. Using (1), (2), and the induction hypothesis, we get
$\displaystyle xu=x(vy)=-(x,v,y)+(xv)y=-(v,x,y)-(k-1)(vx)y$
$\displaystyle=-(vx)y+v(xy)-(k-1)(vx)y=-k(vy)x+v(xy)=-kux+v(xy).$
Notice that $xy\in N_{n-i}N_{i}\subseteq A$ and $v(xy)=0$ by the condition
$(a)$. Consequently, $xu=-kux$. $\Box$
###### Corollary 4.
If the characteristic of the field $K$ does not divide $n$, then
$N_{i}^{[n]}N_{n-i}=0$ for any $i\in\mathbb{Z}_{n}$.
Proof. Notice that $N_{i}^{[n]}\subseteq A$ and $N_{n-i}N_{i}^{[n]}=0$ by the
condition $(a)$. Then Lemma 7 gives that $nN_{i}^{[n]}N_{n-i}=0$. If the
characteristic of $K$ does not divide $n$, then this gives
$N_{i}^{[n]}N_{n-i}=0$. $\Box$
###### Lemma 8.
If the characteristic of the field $K$ does not divide $n$, then
$\displaystyle N\underbrace{N_{i}\ldots N_{i}}_{2n}=0$
for any $i\in\mathbb{Z}_{n}$.
Proof. Corollary 3 and the condition $(a)$ give that
$\displaystyle N\underbrace{N_{i}\ldots N_{i}}_{n}\subseteq\langle
N_{i}^{[n-1]}\rangle+NN_{i}^{[n]}\subseteq\langle N_{i}^{[n-1]}\rangle$
since $N_{i}^{[n]}\subseteq A$. Notice that $i(n-1)=-i=n-i$ in
$\mathbb{Z}_{n}$. This means $N_{i}^{[n-1]}\subseteq N_{n-i}$. Consequently,
$\displaystyle N\underbrace{N_{i}\ldots N_{i}}_{n}\subseteq\langle
N_{n-i}\rangle.$
Using (2) and $(a)$, we get
$\displaystyle N\underbrace{N_{i}\ldots N_{i}}_{n+1}\subseteq\langle
N_{n-i}\rangle N_{i}=\langle N_{n-i}N_{i}\rangle=\langle N_{i}N_{n-i}\rangle.$
Then
$\displaystyle N\underbrace{N_{i}\ldots N_{i}}_{2n}\subseteq\langle
N_{i}N_{n-i}\rangle\underbrace{N_{i}\ldots N_{i}}_{n-1}=\langle
N_{i}^{[n]}N_{n-i}\rangle.$
Corollary 4 implies the statement of the lemma. $\Box$
###### Proposition 1.
Let $N$ be a $\mathbb{Z}_{n}$-graded Novikov algebra of the form (8)
satisfying the conditions
$(a)$ $NA=0$ and
$(b)$ $x\circ y=xy+yx=0$ for any $x\in N_{i}$ and $y\in N_{n-i}$ and for any
$i\in\mathbb{Z}_{n}$.
If the characteristic of the field $K$ does not divide $n$, then $N$ is right
nilpotent.
Proof. By Lemma 8, every $N_{i}$ acts nilpotently on the right $N$-module $N$.
Moreover, this action is commutative by (2). Consequently, $N$ acts
nilpotently on $N$. $\Box$
## 6\. Solvability and right nilpotency
The solvability and the right nilpotency of Novikov algebras are equivalent
[21]. In this section we use these notions as synonyms.
###### Proposition 2.
Let $N$ be a $\mathbb{Z}_{n}$-graded Novikov algebra of the form (8) such that
$A=N_{0}$ is solvable. If the characteristic of the field $K$ does not divide
$n$, then $N$ is solvable.
Proof. Consider the descending sequence of right ideals (9). By Lemma 6 there
exists a positive integer $m$ such that $N^{[m]}\subseteq A^{\\{1\\}}$. The
algebra $B$ from (10) satisfies all conditions of Proposition 1 by Lemma 4. By
Proposition 1 there exists a positive integer $t$ such that $B^{[t]}=0$. This
means that $(A^{\\{1\\}})^{[t]}\subseteq A^{\\{2\\}}$. By Theorem 1, the
algebra $A^{\\{2\\}}$ is nilpotent. Consequently, $A^{\\{1\\}}$ and $N$ are
both solvable. $\Box$
Let $G$ be an additive abelian group. We say that
$\displaystyle N=\bigoplus_{g\in G}N_{g}$
is a $G$-graded algebra if $N_{g}N_{h}\subseteq N_{g+h}$ for all $g,h\in G$.
###### Theorem 2.
Let $G$ be a finite additive abelian group and let $N$ be a $G$-graded Novikov
algebra with solvable $0$-component $N_{0}$. If the characteristic of the
field $K$ does not divide the order of the group $G$, then $N$ is solvable.
Proof. We prove the statement of the theorem by induction on the order $|G|$
of $G$. If $G=\mathbb{Z}_{n}$, then $N$ is solvable by Proposition 2.
Every finite abelian group is a direct sum of cyclic subgroups. Suppose that
$G=\mathbb{Z}_{n_{1}}\oplus\mathbb{Z}_{n_{2}}\oplus\ldots\oplus\mathbb{Z}_{n_{k}}$,
where $n_{i}>1$ for all $i$ and $k\geq 2$. Then $G=\mathbb{Z}_{n_{1}}\oplus
G_{1}$, where $G_{1}=\mathbb{Z}_{n_{2}}\oplus\ldots\oplus\mathbb{Z}_{n_{k}}$.
Denote by $\mathrm{pr}$ the projection of $G$ onto the group
$\mathbb{Z}_{n_{1}}$. Set
$\displaystyle N_{i}^{\prime}=\sum_{g\in G,pr(g)=i}N_{g},$
where $i=0,1,\ldots,n_{1}-1$. It is easy to show that
$\displaystyle N=N^{\prime}_{0}\oplus\ldots\oplus N^{\prime}_{n_{1}-1}$
and $N$ is a $\mathbb{Z}_{n_{1}}$-graded algebra.
It is also clear that
$\displaystyle N_{0}^{\prime}=\sum_{g\in G,pr(g)=0}N_{g}$
is a $G_{1}$-graded algebra and the $0$-component of $N_{0}^{\prime}$ is
$N_{0}$. Since $|G_{1}|<|G|$ it follows that $N_{0}^{\prime}$ is solvable by
the induction hypothesis. Now we can apply Proposition 2 to the
$\mathbb{Z}_{n_{1}}$-graded algebra $N$. Hence $N$ is solvable. $\Box$
The statement of the next lemma is well known.
###### Lemma 9.
Let $G$ be a group of automorphisms of an arbitrary algebra $A$ and let $H$ be
a normal subgroup of $G$. Then $A^{H}$ is $G$-invariant, the quotient group
$G/H$ acts on $A^{H}$ by automorphisms, and $(A^{H})^{G/H}=A^{G}$.
Proof. Let $a\in A^{H}$ and let $g\in G$. Then $ghg^{-1}\in H$ for any $h\in
H$. Therefore, $a^{ghg^{-1}}=a$ and
$(a^{g})^{h}=a^{gh}=a^{ghg^{-1}g}=(a^{ghg^{-1}})^{g}=a^{g}.$
Consequently, the algebra $A^{H}$ is $G$-invariant. Let $g\in G$ and let
$\overline{g}$ be the image of $g$ in $G/H$. Then $\overline{g}$ defines an
automorphism of the algebra $A$ by the rule $a^{\overline{g}}=a^{g}$. This
action is well defined. Hence the quotient group $G/H$ acts on $A^{H}$. It is
easy to check that $(A^{H})^{G/H}=A^{G}$. $\Box$
###### Corollary 5.
Let $N$ be a Novikov algebra and let $G$ be a finite abelian group of
automorphisms of $N$. If the algebra $N^{G}$ is solvable and the
characteristic of the field $K$ does not divide the order of the group $G$,
then $N$ is solvable.
Proof. We may assume that $K$ is algebraically closed. We prove the statement
of the corollary by induction on the order $|G|$ of $G$. If $G$ is a simple
group, then $G\cong\mathbb{Z}_{p}$, where $p$ is a prime number. Let $\phi$ be
a generating element of the group $G$. Then $\phi^{p}=e$, where $e$ is the
identity element of $G$. Let $\epsilon$ be a primitive $p$th root of unity and
let $N_{i}=\ker(\phi-\epsilon^{i})$ for all $0\leq i\leq p-1$. The indices
$i$ may be considered as elements of $\mathbb{Z}_{p}$ since $\epsilon^{p}=1$.
Obviously,
$\displaystyle N=N_{0}\oplus\ldots\oplus N_{p-1}$
and it is easy to check that $N_{i}N_{j}\subseteq N_{i+j}$ for all
$i,j\in\mathbb{Z}_{p}$, i.e., $N$ is a $\mathbb{Z}_{p}$-graded algebra.
Moreover, $N_{0}=N^{G}$. By Proposition 2, $N$ is solvable.
Let $H$ be a nontrivial proper subgroup of $G$. Then, by Lemma 9, the quotient group
$G/H$ acts on $N^{H}$ by automorphisms and $(N^{H})^{G/H}=N^{G}$. We get that
$N^{H}$ is solvable by the induction hypothesis since $|G/H|<|G|$. Now we can
apply the induction hypothesis to the group $H$ and get that $N$ is solvable.
$\Box$
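The eigenspace decomposition used in this proof can be illustrated concretely. The sketch below (SymPy; the choice of algebra, namely the derivation product of Example 2 together with the order-2 automorphism $y\mapsto-y$, is our own) projects elements onto the $\phi$-eigenspaces and checks that the components multiply according to the resulting $\mathbb{Z}_{2}$-grading:

```python
import sympy as sp

x, y = sp.symbols('x y')

def mul(f, g):
    # The Novikov product of Example 2: f . g = f * dg/dx.
    return sp.expand(f * sp.diff(g, x))

def phi(f):
    # y -> -y has order 2 and commutes with d/dx, so it is an
    # automorphism of the product; here G = Z_2 and epsilon = -1.
    return sp.expand(f.subs(y, -y))

def comp(f, i):
    # Projection of f onto the eigenspace N_i = ker(phi - (-1)^i).
    return sp.expand((f + (-1)**i * phi(f)) / 2)

f = x**2 * y + 3 * x + y**2
f0, f1 = comp(f, 0), comp(f, 1)

# f decomposes into eigencomponents; N_0 is the fixed algebra N^G.
assert sp.expand(f0 + f1 - f) == 0
assert phi(f0) == f0 and sp.expand(phi(f1) + f1) == 0

# The eigenspaces grade the product: N_i . N_j lies in N_{(i+j) mod 2}.
g = x * y + y**3
for (u, i) in [(f0, 0), (f1, 1)]:
    for (v, j) in [(comp(g, 0), 0), (comp(g, 1), 1)]:
        w = mul(u, v)
        assert sp.expand(comp(w, (i + j) % 2) - w) == 0
```

Here $N_{0}$ consists of the polynomials even in $y$ and $N_{1}$ of those odd in $y$; the grading check reflects that $\partial/\partial x$ preserves parity in $y$.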
###### Theorem 3.
Let $N$ be a Novikov algebra and let $G$ be a finite solvable group of
automorphisms of $N$. If the algebra $N^{G}$ is solvable and the
characteristic of the field $K$ does not divide the order of the group $G$,
then $N$ is solvable.
Proof. We prove the statement of the theorem by induction on $|G|$. The case
of abelian groups is considered in Corollary 5. Suppose that $G$ is not
abelian. Then the commutator subgroup $G^{\prime}$ of the solvable finite
group $G$ is a proper normal subgroup.
By Lemma 9, $(N^{G^{\prime}})^{G/G^{\prime}}=N^{G}$. Then the algebra
$N^{G^{\prime}}$ is solvable by the induction hypothesis since
$|G/G^{\prime}|<|G|$. Applying the induction hypothesis to $G^{\prime}$, we
get that $N$ is solvable. $\Box$
## References
* [1] I.M. Balinskii, S.P. Novikov, Poisson brackets of hydrodynamic type, Frobenius algebras and Lie algebras. (Russian) Dokl. Akad. Nauk SSSR 283 (1985), no. 5, 1036–1039.
* [2] G.M. Bergman, I.M. Isaacs, Rings with fixed-point-free group actions. Proc. London Math. Soc. (3) 27 (1973), 69–87.
* [3] L.A. Bokut, Y. Chen, Z. Zhang, On free Gelfand-Dorfman-Novikov-Poisson algebras and a PBW theorem. J. Algebra 500 (2018), 153–170.
* [4] G. V. Dorofeev, An instance of a solvable, though nonnilpotent, alternative ring. (Russian) Uspehi Mat. Nauk 15 (1960), no. 3 (93), 147–150.
* [5] A.S. Dzhumadil’daev, K.M. Tulenbaev, Engel theorem for Novikov algebras. Comm. Algebra 34 (2006), no. 3, 883–888.
* [6] V.T. Filippov, A class of simple nonassociative algebras. (Russian) Mat. Zametki 45 (1989), no. 1, 101–105; translation in Math. Notes 45 (1989), no. 1–2, 68–.
* [7] V.T. Filippov, On right-symmetric and Novikov nil algebras of bounded index. (Russian) Mat. Zametki 70 (2001), no. 2, 289–295; translation in Math. Notes 70 (2001), no. 1–2, 258–263.
* [8] I. M. Gel’fand, I. Ya. Dorfman, Hamiltonian operators and algebraic structures related to them. (Russian) Funktsional. Anal. i Prilozhen 13 (1979), no. 4, 3–30.
* [9] Maxim Goncharov, On solvable $\mathbb{Z}_{3}$-graded alternative algebras. Algebra Discrete Math. 20 (2015), no. 2, 203–216.
* [10] G. Higman, Groups and rings which have automorphisms without non-trivial fixed elements. J. London Math. Soc. 32 (1957), no. 2, 321–334
* [11] V. K. Kharchenko, Galois extensions and quotient rings. Algebra and Logic 13 (1974), no 4, 265–281.
* [12] V.A. Kreknin, A.I. Kostrikin, Lie algebras with a regular automorphism. Sov. Math. Dokl. 4 (1963), 355–358.
* [13] V.A. Kreknin, Solvability of a Lie algebra containing a regular automorphism. Sib. Math. J. 8 (1967), 536–537.
* [14] N.Yu. Makarenko, A nilpotent ideal in the Lie rings with automorphism of prime order. Sib. Mat. Zh., 46 (2005), no. 6, 1360–1373.
* [15] L. Makar-Limanov, U. Umirbaev, The Freiheitssatz for Novikov algebras. TWMS J. Pure Appl. Math. 2 (2011), no. 2, 228–235.
* [16] W. S. Martindale, S. Montgomery, Fixed elements of Jordan automorphisms of associative rings. Pacific J. Math. 72 (1977), no. 1, 181–196.
* [17] J.M. Osborn, Novikov algebras. Nova J. Algebra Geom. 1 (1992), no. 1, 1–13.
* [18] J.M. Osborn, Simple Novikov algebras with an idempotent. Comm. Algebra 20 (1992), no. 9, 2729–2753.
* [19] J.M. Osborn, Infinite-dimensional Novikov algebras of characteristic $0$. J. Algebra 167 (1994), no. 1, 146–167.
* [20] A.P. Semenov, Subrings of invariants of a finite group of automorphisms of a Jordan ring. Sib. Math. J. 32 (1991), no. 1, 169–172.
* [21] I. Shestakov and Z. Zhang, Solvability and nilpotency of Novikov algebras. Comm. Algebra 48 (2020), no. 12, 5412–5420.
* [22] O.N. Smirnov, Solvability of alternative $\mathbb{Z}_{2}$-graded algebras and alternative superalgebras. Sib. Math. J. 32 (1991), no. 6, 1030–1034.
* [23] X. Xu, On simple Novikov algebras and their irreducible modules. (English summary) J. Algebra 185 (1996), no. 3, 905–934.
* [24] X. Xu, Classification of simple Novikov algebras and their irreducible modules of characteristic 0. (English summary) J. Algebra 246 (2001), no. 2, 673–707.
* [25] E.I. Zel’manov, A class of local translation-invariant Lie algebras. Dokl. Akad. Nauk SSSR 292 (1987), no. 6, 1294–1297.
* [26] E.I. Zel’manov, On solvability of Jordan nil-algebras. Sib. Adv. Math. 1 (1991), 185–203; translation from Tr. Inst. Mat., 1989, 16 , 37–54.
* [27] V. N. Zhelyabin, Jordan superalgebras with a solvable even part. Algebra and Logic 34 (1995), no. 1, 25–34.
* [28] V. Zhelyabin, U. Umirbaev, On the solvability of $\mathbb{Z}_{3}$-graded Novikov algebras. Symmetry 13 (2021), no. 2, 312.
* [29] K. A. Zhevlakov, Solvability of alternative nil-rings. Sib. Math. J. 3 (1962), 368–377.
* [30] K.A. Zhevlakov, A. M. Slinko, I. P. Shestakov, A. I. Shirshov, Rings that are Nearly Associative. Academic Press, New York, 1982.
# Improving cosmological constraints from galaxy cluster number counts with
CMB-cluster-lensing data: Results from the SPT-SZ survey and forecasts for the
future
P. S. Chaubal School of Physics, University of Melbourne, Parkville, VIC
3010, Australia C. L. Reichardt School of Physics, University of Melbourne,
Parkville, VIC 3010, Australia N. Gupta School of Physics, University of
Melbourne, Parkville, VIC 3010, Australia CSIRO Astronomy and Space Science,
PO Box 1130, Bentley WA 6102, Australia B. Ansarinejad School of Physics,
University of Melbourne, Parkville, VIC 3010, Australia K. Aylor Department
of Physics, University of California, Davis, CA, USA 95616 L. Balkenhol
School of Physics, University of Melbourne, Parkville, VIC 3010, Australia E.
J. Baxter Center for Particle Cosmology, Department of Physics and Astronomy,
University of Pennsylvania, Philadelphia, PA, USA 19104 Kavli Institute for
Cosmological Physics, University of Chicago, Chicago, IL, USA 60637
Department of Astronomy and Astrophysics, University of Chicago, Chicago, IL,
USA 60637 F. Bianchini Dept. of Physics, Stanford University, 382 Via Pueblo
Mall, Stanford, CA 94305 B. A. Benson Fermi National Accelerator Laboratory,
MS209, P.O. Box 500, Batavia, IL 60510 Kavli Institute for Cosmological
Physics, University of Chicago, Chicago, IL, USA 60637 Department of
Astronomy and Astrophysics, University of Chicago, Chicago, IL, USA 60637 L.
E. Bleem High Energy Physics Division, Argonne National Laboratory, Argonne,
IL, USA 60439 Kavli Institute for Cosmological Physics, University of
Chicago, Chicago, IL, USA 60637 S. Bocquet Faculty of Physics, Ludwig-
Maximilians-Universität, 81679 München, Germany J. E. Carlstrom Kavli
Institute for Cosmological Physics, University of Chicago, Chicago, IL, USA
60637 Department of Physics, University of Chicago, Chicago, IL, USA 60637
High Energy Physics Division, Argonne National Laboratory, Argonne, IL, USA
60439 Department of Astronomy and Astrophysics, University of Chicago,
Chicago, IL, USA 60637 Enrico Fermi Institute, University of Chicago,
Chicago, IL, USA 60637 C. L. Chang High Energy Physics Division, Argonne
National Laboratory, Argonne, IL, USA 60439 Kavli Institute for Cosmological
Physics, University of Chicago, Chicago, IL, USA 60637 Department of
Astronomy and Astrophysics, University of Chicago, Chicago, IL, USA 60637 T.
M. Crawford Kavli Institute for Cosmological Physics, University of Chicago,
Chicago, IL, USA 60637 Department of Astronomy and Astrophysics, University
of Chicago, Chicago, IL, USA 60637 A. T. Crites Kavli Institute for
Cosmological Physics, University of Chicago, Chicago, IL, USA 60637
Department of Astronomy and Astrophysics, University of Chicago, Chicago, IL,
USA 60637 California Institute of Technology, Pasadena, CA, USA 91125 T. de
Haan Department of Physics and McGill Space Institute, McGill University,
Montreal, Quebec H3A 2T8, Canada Department of Physics, University of
California, Berkeley, CA, USA 94720 M. A. Dobbs Department of Physics and
McGill Space Institute, McGill University, Montreal, Quebec H3A 2T8, Canada
Canadian Institute for Advanced Research, CIFAR Program in Cosmology and
Gravity, Toronto, ON, M5G 1Z8, Canada W. B. Everett Center for Astrophysics
and Space Astronomy, Department of Astrophysical and Planetary Sciences,
University of Colorado, Boulder, CO, 80309 B. Floyd Department of Physics
and Astronomy, University of Missouri-Kansas City, 5110 Rockhill Road, Kansas
City, MO 64110, USA E. M. George Department of Physics, University of
California, Berkeley, CA, USA 94720 European Southern Observatory, Karl-
Schwarzschild-Straße 2, 85748 Garching, Germany N. W. Halverson Center for
Astrophysics and Space Astronomy, Department of Astrophysical and Planetary
Sciences, University of Colorado, Boulder, CO, 80309 Department of Physics,
University of Colorado, Boulder, CO, 80309 W. L. Holzapfel Department of
Physics, University of California, Berkeley, CA, USA 94720 J. D. Hrubes
University of Chicago, Chicago, IL, USA 60637 L. Knox Department of Physics,
University of California, Davis, CA, USA 95616 A. T. Lee Department of
Physics, University of California, Berkeley, CA, USA 94720 Physics Division,
Lawrence Berkeley National Laboratory, Berkeley, CA, USA 94720 D. Luong-Van
University of Chicago, Chicago, IL, USA 60637 J. J. McMahon Department of
Physics, University of Michigan, Ann Arbor, MI, USA 48109 S. S. Meyer Kavli
Institute for Cosmological Physics, University of Chicago, Chicago, IL, USA
60637 Department of Astronomy and Astrophysics, University of Chicago,
Chicago, IL, USA 60637 Enrico Fermi Institute, University of Chicago,
Chicago, IL, USA 60637 Department of Physics, University of Chicago, Chicago,
IL, USA 60637 L. M. Mocanu Kavli Institute for Cosmological Physics,
University of Chicago, Chicago, IL, USA 60637 Department of Astronomy and
Astrophysics, University of Chicago, Chicago, IL, USA 60637 J. J. Mohr
Faculty of Physics, Ludwig-Maximilians-Universität, 81679 München, Germany
Excellence Cluster Universe, 85748 Garching, Germany Max-Planck-Institut für
extraterrestrische Physik, 85748 Garching, Germany T. Natoli Kavli Institute
for Cosmological Physics, University of Chicago, Chicago, IL, USA 60637
Department of Physics, University of Chicago, Chicago, IL, USA 60637 Dunlap
Institute for Astronomy & Astrophysics, University of Toronto, 50 St George
St, Toronto, ON, M5S 3H4, Canada S. Padin Kavli Institute for Cosmological
Physics, University of Chicago, Chicago, IL, USA 60637 Department of
Astronomy and Astrophysics, University of Chicago, Chicago, IL, USA 60637 C.
Pryke Department of Physics, University of Minnesota, Minneapolis, MN, USA
55455 J. E. Ruhl Physics Department, Center for Education and Research in
Cosmology and Astrophysics, Case Western Reserve University,Cleveland, OH, USA
44106 F. Ruppin Kavli Institute for Astrophysics and Space Research,
Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA
02139, USA L. Salvati INAF-Osservatorio Astronomico di Trieste, via G. B.
Tiepolo 11, I-34143 Trieste, Italy IFPU - Institute for Fundamental Physics
of the Universe, via Beirut 2, 34014 Trieste, Italy Universite Paris-Saclay,
CNRS, Institut d’Astrophysique Spatiale, 91405, Orsay, France A. Saro
Astronomy Unit, Department of Physics, University of Trieste, via Tiepolo 11,
I-3413 Trieste, Italy INAF-Osservatorio Astronomico di Trieste, via G. B.
Tiepolo 11, I-34143 Trieste, Italy IFPU - Institute for Fundamental Physics
of the Universe, via Beirut 2, 34014 Trieste, Italy INFN-Sezione di Trieste,
Trieste, Italy K. K. Schaffer Kavli Institute for Cosmological Physics,
University of Chicago, Chicago, IL, USA 60637 Enrico Fermi Institute,
University of Chicago, Chicago, IL, USA 60637 Liberal Arts Department, School
of the Art Institute of Chicago, Chicago, IL, USA 60603 E. Shirokoff
Department of Physics, University of California, Berkeley, CA, USA 94720
Kavli Institute for Cosmological Physics, University of Chicago, Chicago, IL,
USA 60637 Department of Astronomy and Astrophysics, University of Chicago,
Chicago, IL, USA 60637 Z. Staniszewski Physics Department, Center for
Education and Research in Cosmology and Astrophysics, Case Western Reserve
University,Cleveland, OH, USA 44106 Jet Propulsion Laboratory, California
Institute of Technology, Pasadena, CA 91109, USA A. A. Stark Harvard-
Smithsonian Center for Astrophysics, Cambridge, MA, USA 02138 J. D. Vieira
Astronomy Department, University of Illinois at Urbana-Champaign, 1002 W.
Green Street, Urbana, IL 61801, USA Department of Physics, University of
Illinois Urbana-Champaign, 1110 W. Green Street, Urbana, IL 61801, USA R.
Williamson Kavli Institute for Cosmological Physics, University of Chicago,
Chicago, IL, USA 60637 Department of Astronomy and Astrophysics, University
of Chicago, Chicago, IL, USA 60637
###### Abstract
We show the improvement to cosmological constraints from galaxy cluster
surveys with the addition of CMB-cluster lensing data. We explore the
cosmological implications of adding mass information from the 3.1 $\sigma$
detection of gravitational lensing of the cosmic microwave background (CMB) by
galaxy clusters to the Sunyaev-Zel’dovich (SZ) selected galaxy cluster sample
from the 2500 deg$^{2}$ SPT-SZ survey and targeted optical and X-ray follow-up data.
In the $\Lambda\mathrm{CDM}$ model, the combination of the cluster sample with
the Planck power spectrum measurements prefers
$\sigma_{8}\left(\Omega_{m}/0.3\right)^{0.5}=0.831\pm 0.020$. Adding the
cluster data reduces the uncertainty on this quantity by a factor of $1.4$,
which is unchanged whether or not the 3.1 $\sigma$ CMB-cluster lensing
measurement is included. We then forecast the impact of CMB-cluster lensing
measurements with future cluster catalogs. Adding CMB-cluster lensing
measurements to the SZ cluster catalog of the on-going SPT-3G survey is
expected to improve the expected constraint on the dark energy equation of
state $w$ by a factor of $1.3$ to $\sigma(w)=0.19$. We find the largest
improvements from CMB-cluster lensing measurements to be for $\sigma_{8}$,
where adding CMB-cluster lensing data to the cluster number counts reduces the
expected uncertainty on $\sigma_{8}$ by factors of $2.4$ and $3.6$ for SPT-3G
and CMB-S4 respectively.
cosmological parameters — cosmology: observations — cluster cosmology — large
scale structure — CMB — cluster lensing
## 1 Introduction
Galaxy clusters are the largest gravitationally collapsed structures and a key
testing ground of cosmological models of structure growth (Allen et al.,
2011). The number density of galaxy clusters depends sensitively upon
cosmological parameters, particularly those that affect late-time structure
growth such as the sum of the neutrino masses, the dark energy equation of
state, and matter density (Wang & Steinhardt, 1998; Haiman et al., 2001;
Weller et al., 2002; Weller & Battye, 2003; Holder, 2006; Shimon et al.,
2011). Upcoming surveys such as eROSITA (Merloni et al., 2012), LSST (LSST
Science Collaboration et al., 2009; The LSST Dark Energy Science Collaboration
et al., 2018) and CMB-S4 (CMB-S4 Collaboration, 2019) are expected to detect
tens of thousands of galaxy clusters at different wavelengths, and will
dramatically improve the cosmological constraints from cluster cosmology.
Galaxy clusters already yield interesting constraints on the matter density
$\Omega_{\mathrm{m}}$ and the amplitude of density fluctuations $\sigma_{8}$
(Bocquet et al., 2019; Zubeldia & Challinor, 2019; To et al., 2020). The
cosmological constraints are limited, however, by the uncertainty on the
masses of galaxy clusters and can be biased if the cluster mass-observable
scaling relations are mis-estimated. Current cluster mass estimates are
typically based on assuming a power-law scaling relationship between observed
quantities (such as the X-ray observable $Y_{\mathrm{X}}$) and cluster masses.
Observationally expensive optical weak lensing measurements are used to
normalize the scaling relation (e.g., Dietrich et al., 2019). These optical
weak lensing mass measurements should substantially improve with surveys like
LSST and Euclid (The LSST Dark Energy Science Collaboration et al., 2018;
Euclid Collaboration et al., 2019). At higher redshifts ($z\gtrsim 1$),
optical weak lensing becomes increasingly difficult due to a dearth of
background galaxies and difficulties in measuring their shapes given blending
and lower signal-to-noise. High-redshift mass information is important as
there are suggestions that scaling relations calibrated at lower redshifts may
mis-estimate the masses at higher redshifts (Zohren et al., 2019; Salvati et
al., 2018, 2019).
Galaxy clusters also gravitationally lens the cosmic microwave background
(CMB), an effect referred to as CMB-cluster lensing and first considered by
Seljak & Zaldarriaga (2000). While useful as an independent cross-check on
optical weak lensing cluster masses at low redshift, CMB-cluster lensing is
particularly useful at higher redshifts. Since all CMB photons originate at
the same extremely high redshift, $z\simeq 1100$, the signal-to-noise of CMB-
cluster lensing does not drop as the cluster redshift increases (Melin &
Bartlett, 2015). This also simplifies the measurement (and eliminates related
uncertainties), as one does not need to calculate intrinsic alignments, boost
factors, or the redshift distribution of background sources. The problem of
estimating the masses of clusters from their CMB lensing signals has been
extensively considered (Seljak & Zaldarriaga, 2000; Holder & Kosowsky, 2004;
Vale & Ostriker, 2004; Dodelson, 2004; Lewis & Challinor, 2006; Lewis & King,
2006; Hu et al., 2007; Raghunathan et al., 2017, 2019a; Gupta & Reichardt,
2020). Actual measurements of the CMB-cluster lensing signal have followed as
CMB surveys have advanced, from the first detections in 2015 (Madhavacheril et
al., 2015; Baxter et al., 2015; Planck Collaboration et al., 2016) to
$\sim$15% mass measurements of different cluster samples today (Baxter et al.,
2018; Raghunathan et al., 2019b).
In this work, we present the first cosmological analysis of the SPT-SZ galaxy
cluster sample that includes CMB-cluster lensing information. The SPT-SZ
survey detected galaxy clusters from the imprint of thermal SZ (tSZ)
signatures on the background primary CMB anisotropies (Bleem et al., 2015).
Bocquet et al. (2019, hereafter B19) presented cosmological constraints from
this sample along with X-ray observations and optical weak-lensing
measurements. We add the CMB-cluster lensing mass measurement of Baxter et al.
(2015, hereafter B15) to that dataset, and look at the implications for the
combined dataset on the $\Lambda\mathrm{CDM}$ and $w\mathrm{CDM}$ cosmological
models. We follow this by presenting forecasts for the cosmological
constraints from future CMB-cluster lensing measurements with SPT-3G (Benson
et al., 2014) and CMB-S4 (CMB-S4 Collaboration, 2019). We find that CMB-
cluster lensing mass measurements substantially improve the predicted
constraints on the dark energy equation of state parameter $w$ from future
cluster catalogs.
The paper is organized as follows. In §2, we review the datasets used in this
analysis. We describe the analysis methods in §3. In §4, we present the
cosmological constraints from the current CMB-cluster lensing measurement. In
§5, we forecast the constraints expected from the ongoing SPT-3G and future
CMB-S4 surveys. Finally, we conclude in §6. Throughout this work, we report
galaxy cluster masses in terms of either $M_{\rm 200}$ or $M_{\rm 500}$, the
mass contained within the radius where the mean density is 200 (500) times the
critical density of the Universe.
## 2 The cluster catalog from the 2500d SPT-SZ survey
The main dataset in this work is the galaxy cluster sample from the 2500d SPT-
SZ survey (Bleem et al., 2015), which provides a measure of the SZ detection
significance and redshift for each cluster in the sample. As in the previous
cosmological analysis by B19, we supplement the SZ cluster catalog with
follow-up X-ray and optical weak-lensing observations. The new addition in
this work is that we add the $3.1\sigma{}$ CMB-cluster lensing mass
measurement from B15 for a stack of 513 galaxy clusters in the sample. This
sub-sample of 513 clusters is chosen by selecting only those clusters from the
2500d SPT-SZ catalog that have measured optical redshifts. We refer to the
combination of SPT number counts, X-ray and weak lensing follow up, and CMB
cluster lensing datasets as SPT clusters. We briefly describe these datasets
in the following subsections.
For some parameter fits, we also include measurements of the CMB TT, TE and EE
power spectra from the 2018 data release of the Planck satellite (Planck
Collaboration et al., 2020). We refer to this dataset as ‘Planck’ throughout
the rest of this work. The Planck CMB data allow us to demonstrate where clusters
and CMB-cluster lensing add the most information.
### 2.1 SZ detection significance and cluster redshift
The SZ detection significance and cluster redshift (or lower limit on
redshift) are reported for all cluster candidates in the Bleem et al. (2015)
catalog and were later updated in B19. The reported significance is the
maximum across a set of matched filters (to allow for variations in the
cluster angular radius with redshift and mass), and therefore is biased high
on average. To avoid this bias in the mass estimates, we follow B19 in
using the unbiased significance $\zeta=\sqrt{\xi^{2}-3}$ as a mass proxy. A
detailed discussion on the validity of this approach can be found in
Vanderlinde et al. (2010). As in B19, we model the relationship between the
unbiased significance $\zeta$ and cluster mass $M_{\mathrm{500}}$ as:
$\zeta={A_{\mathrm{SZ}}}\left(\frac{M_{500}h_{70}}{4.3\times 10^{14}M_{\odot}}\right)^{B_{\mathrm{SZ}}}\left(\frac{E\left(z\right)}{E\left(0.6\right)}\right)^{C_{\mathrm{SZ}}}\ ,$ (1)
where ${A_{\mathrm{SZ}}}$, ${B_{\mathrm{SZ}}}$, and ${C_{\mathrm{SZ}}}$ are
free parameters in the model fits (see Table 1) and $E(z)$ is the
dimensionless Hubble parameter. Here $h_{70}$ is the Hubble constant divided
by 70 km s$^{-1}$ Mpc$^{-1}$, and $z$ is the cluster redshift. The intrinsic scatter in
$\ln\zeta$ at a fixed mass and redshift is modeled as a Gaussian
scatter with width ${\sigma_{\ln\zeta}}$ and is also left as a free parameter
of the model.
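As an illustration, the scaling relation of Eq. (1) and the significance debiasing can be sketched in a few lines. The scaling-relation values below are illustrative choices from the vicinity of the Table 2 posteriors, not the paper's fit, and a flat $\Lambda\mathrm{CDM}$ form for $E(z)$ is assumed:

```python
import math

# Sketch of the SZ observable-mass relation of Eq. (1). The fiducial
# A_SZ, B_SZ, C_SZ values are illustrative, not the paper's best fit.
OMEGA_M, OMEGA_L = 0.3, 0.7  # flat LCDM assumed

def E(z):
    """Dimensionless Hubble parameter E(z) = H(z)/H0 for flat LCDM."""
    return math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def mean_zeta(m500_h70, z, A_sz=5.3, B_sz=1.668, C_sz=1.09):
    """Mean unbiased significance for a cluster of mass M500 * h70 [Msun]."""
    return A_sz * (m500_h70 / 4.3e14) ** B_sz * (E(z) / E(0.6)) ** C_sz

def debias_xi(xi):
    """Observed (maximized) significance xi -> unbiased significance zeta."""
    return math.sqrt(xi ** 2 - 3.0)

# At the pivot mass and redshift the relation returns A_SZ by construction.
print(mean_zeta(4.3e14, 0.6))  # -> 5.3
print(debias_xi(5.0))          # sqrt(22), about 4.69
```

The pivot values ($4.3\times 10^{14}M_{\odot}$ and $z=0.6$) are chosen near the middle of the sample so that the amplitude, slope, and redshift-evolution parameters decorrelate.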
### 2.2 Weak-lensing shear profiles
Thirty-two clusters have optical weak lensing shear profiles, with 13 from the
Hubble Space Telescope and 19 from ground-based Megacam/Magellan imaging
(Schrabback et al., 2018; Dietrich et al., 2019). The shear profiles of these
clusters are compared to the expected weak lensing shear profiles under the
assumption of a Navarro-Frenk-White (NFW) profile (Navarro et al., 1997) for
the cluster density. We allow for a systematic bias $b_{WL}$ between the halo
mass $M_{\rm halo}$ and inferred lensing mass $M_{WL}$,
$M_{WL}=b_{WL}M_{\rm halo}\ .$ (2)
We refer the reader to Eqn. 9 in B19 for the breakdown of $b_{WL}$ into
different sources of uncertainty in the weak lensing observations. The priors
on these uncertainties are included in Table 1 under the _WL modeling_
section. The weak-lensing model is described in more detail by B19.
### 2.3 X-ray $Y_{\mathrm{X}}$ data
As in B19, we use X-ray observations of 89 galaxy clusters taken through a
Chandra X-ray visionary project (McDonald et al., 2013, 2017). The X-ray data
is used to estimate $Y_{\mathrm{X}}$ (the product of the gas mass and X-ray
temperature) within $r_{500}$ for each cluster. We assume a scaling relation
between $Y_{\mathrm{X}}$ and the cluster mass $M_{500}$ of the form:
$\ln\left(\frac{M_{500}h_{70}}{8.37\times 10^{13}M_{\odot}}\right)=\ln A_{Y_{\mathrm{X}}}+B_{Y_{\mathrm{X}}}\langle\ln Y_{\mathrm{X}}\rangle+B_{Y_{\mathrm{X}}}\ln\left(\frac{h_{70}^{5/2}}{3\times 10^{14}M_{\odot}\,\mathrm{keV}}\right)+C_{Y_{\mathrm{X}}}\ln E\left(z\right)\ .$ (3)
The intrinsic scatter in $\mathrm{ln}\,Y_{\mathrm{X}}$ at fixed mass and
redshift is modeled as a normal distribution with width ${\sigma_{\ln
Y_{\mathrm{X}}}}$.
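Inverting Eq. (3) gives $M_{500}$ as a function of the X-ray observable. The sketch below uses illustrative scaling-relation values placed within the Table 1 prior ranges (they are not fitted values):

```python
import math

# Sketch of the Y_X - mass relation of Eq. (3), solved for M500 given Y_X.
# A_yx, B_yx, C_yx below are placeholders within the Table 1 prior ranges.
def m500_from_yx(yx_msun_kev, z, A_yx=5.8, B_yx=0.6, C_yx=-0.3,
                 h70=1.0, omega_m=0.3):
    """M500 * h70 [Msun] from the X-ray observable Y_X [Msun keV]."""
    Ez = math.sqrt(omega_m * (1.0 + z) ** 3 + (1.0 - omega_m))
    # ln(M500 h70 / 8.37e13) = ln A + B ln(Y_X h70^{5/2} / 3e14) + C ln E(z)
    ln_m = (math.log(A_yx)
            + B_yx * math.log(yx_msun_kev * h70 ** 2.5 / 3.0e14)
            + C_yx * math.log(Ez))
    return 8.37e13 * math.exp(ln_m)

# At the pivot Y_X = 3e14 Msun keV and z = 0, the mass is 8.37e13 * A_yx.
print(m500_from_yx(3.0e14, 0.0) / 1e14)
```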
### 2.4 CMB-cluster lensing measurement
CMB photons are deflected by the gravitational pull of galaxy clusters. This
deflection remaps the CMB anisotropy, and introduces a dipole-like signal
aligned with the local gradient in the primary CMB anisotropy (Lewis &
Challinor, 2006). B15 extracted this CMB-cluster lensing signal from the SPT-
SZ survey data at the positions of clusters in the SPT-SZ sample. To avoid
being biased by the cluster’s own tSZ signal, B15 used a linear combination of
the 90, 150 and 220 GHz maps from the SPT-SZ survey to make a tSZ-free map for
the analysis. We refer the reader to B15 for further details on the
measurement.
For the SPT-SZ catalog sub-sample described in §2, B15 found the mean mass of
the stacked clusters to be $\bar{M}_{200}=(5.1\pm 2.1)\times
10^{14}M_{\odot}$. We convert $M_{200}$ to $M_{\rm 500}$ by assuming a
concentration parameter $c=3$ and the same flat $\Lambda\mathrm{CDM}$
cosmological parameters used in B15 ($\Omega_{\mathrm{m}}{}=0.3$, $h=0.7$) for
the redshift of $z=0.7$. This gives us a value of $M_{\rm 500}=\left(3.49\pm
0.74\right)\times 10^{14}M_{\odot}$, which we use in our analysis. We note that
converting the mean mass of the stack from $M_{200}$ to $M_{500}$ is not
equivalent to converting individual cluster masses before stacking, since the
concentration-mass relation is redshift dependent. For this sample, this
approximation results in a $\sim$ 2% systematic error, which is negligible at
the current statistical uncertainty, although the approximation may be
inadequate for future high-S/N mass measurements.
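The $M_{200}\to M_{500}$ conversion described above can be sketched for an NFW profile with concentration $c_{200}=3$. This is a minimal illustration, and the paper's exact procedure (cosmology, concentration definition) may differ in detail, which is why the result does not exactly match the published $3.49\times 10^{14}M_{\odot}$:

```python
import math

def _mu(x):
    """NFW enclosed-mass shape function: M(<r) is proportional to mu(r/rs)."""
    return math.log(1.0 + x) - x / (1.0 + x)

def m500_from_m200(m200, c200=3.0):
    """Convert M200c to M500c for an NFW halo of concentration c200.

    Both overdensities are defined against the critical density at the
    cluster redshift, so the mass ratio depends only on c200. A sketch of
    the kind of conversion described in the text, not the paper's code.
    """
    # y = R500/R200 solves 2.5 * y^3 = mu(c200 * y) / mu(c200); bisection
    f = lambda y: 2.5 * y ** 3 - _mu(c200 * y) / _mu(c200)
    lo, hi = 0.1, 1.0  # f(lo) < 0 < f(hi) brackets the root
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    y = 0.5 * (lo + hi)
    return m200 * 2.5 * y ** 3

# Stacked mass from B15; the published M500 differs mildly, reflecting the
# exact cosmology and conversion procedure adopted there.
print(m500_from_m200(5.1e14) / 1e14)
```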
## 3 Likelihood
As in past SPT-SZ cluster analyses (Reichardt et al., 2013; de Haan et al.,
2016; Bocquet et al., 2019), we derive cosmological constraints from galaxy
clusters by using the Cash statistic (Cash, 1979) to compare the expected
number of clusters with the observed number as a function of the SZ signal and
redshift. The number density of clusters is predicted from the matter power
spectrum and mass-observable scaling relations for each set of model
parameters. We briefly review the likelihood
(https://github.com/SebastianBocquet/SPT_SZ_cluster_likelihood) here, which is
presented in more detail by B19, before describing how we incorporate the new
CMB-cluster lensing information.
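The Cash-statistic comparison of expected and observed counts can be illustrated with a minimal sketch; the bins and counts below are toys, not the SPT data:

```python
import math

def cash_statistic(expected, observed):
    """Cash (1979) statistic for Poisson-distributed counts in bins.

    C = 2 * sum_i (mu_i - n_i * ln(mu_i)) equals -2 ln(Poisson likelihood)
    up to a model-independent constant, so minimizing C over model
    parameters maximizes the likelihood of the binned cluster counts.
    A generic sketch, not the SPT likelihood code itself.
    """
    return 2.0 * sum(mu - n * math.log(mu)
                     for mu, n in zip(expected, observed))

# toy counts in bins of SZ significance and redshift (illustrative)
model_counts = [12.0, 7.5, 3.2]
observed_counts = [10, 9, 3]
print(cash_statistic(model_counts, observed_counts))

# The statistic is minimized when the model matches the data exactly:
print(cash_statistic([10.0], [10]) < cash_statistic([11.0], [10]))  # True
```

Unlike a chi-square on binned counts, the Cash statistic remains unbiased for the sparsely populated high-mass, high-redshift bins where counts are small.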
We choose to express the likelihood function in three parts: cluster
abundances ($\mathcal{L}_{\mathrm{abund}}$), mass calibration from the weak
lensing and X-ray observations ($\mathcal{L}_{\mathrm{fol}}$), and mass
calibration from the CMB-cluster lensing observation
($\mathcal{L}_{\mathrm{CL}}$). The abundance part (which is unchanged from
B19) calculates the chance of finding a catalog of clusters with the specified
redshifts and SZ significances as a function of the cosmology and scaling
relations. As in B19, the X-ray and weak-lensing mass calibration likelihood
is expressed as:
$\mathcal{L}_{\mathrm{fol}}\equiv P(Y_{\mathrm{X}}^{\mathrm{obs}},g_{\mathrm{t}}^{\mathrm{obs}}|\xi,z,\mbox{\boldmath$p$})=\iiiint dM\,d\zeta\,dY_{\mathrm{X}}\,dM_{\mathrm{WL}}\left[P(Y_{\mathrm{X}}^{\mathrm{obs}}|Y_{\mathrm{X}})\,P(g_{\mathrm{t}}^{\mathrm{obs}}|M_{\mathrm{WL}})\,P(\xi|\zeta)\,P(\zeta,Y_{\mathrm{X}},M_{\mathrm{WL}}|M,z,\mbox{\boldmath$p$})\,P(M|z,\mbox{\boldmath$p$})\right]\ .$ (4)
This equation gives the likelihood of observing the follow-up X-ray,
$Y_{\mathrm{X}}^{\mathrm{obs}}$, and weak lensing,
$g_{\mathrm{t}}^{\mathrm{obs}}$, observables for a cluster detected with SZ
significance $\xi$. Here, $p$ represents cosmological and scaling relation
parameters. We assume the systematics in the CMB-cluster lensing measurement
to be uncorrelated with other observations. The notation adopted for other
variables is identical to that of B19.
While we could exactly mirror the approach used for including weak lensing
data, the CMB-cluster lensing signal from individual clusters is too weak to
justify the computational complexity. Instead, we take the observed mean mass
from CMB-cluster lensing $\bar{M}_{200}=(5.1\pm 2.1)\times 10^{14}M_{\odot}$
as a prior on the modeled mean mass of the sample, $\bar{M}$:
$\bar{M}=\frac{1}{N}\sum_{i}\iint d\xi_{i}dz_{i}\
M_{i}P(M_{i}|\xi_{i},z_{i})P(\xi_{i},z_{i}|\mbox{\boldmath$p$})\ .$ (5)
Given the number of clusters in the sample, we approximate the integral by
taking the mass at the peak of the posterior for each cluster in the sample.
Table 1: Parameter priors
Parameter | Prior
---|---
SZ scaling relation
${A_{\mathrm{SZ}}}$ | $\mathcal{U}(1,10)$
${B_{\mathrm{SZ}}}$ | $\mathcal{U}(1,2.0)$
${C_{\mathrm{SZ}}\ }$ | $\mathcal{U}(-1,2)$
${\sigma_{\ln\zeta}}$ | $\mathcal{U}(0.01,2.0)$
Priors for the SPT-SZ cluster catalog
X-ray $Y_{\mathrm{X}}$ scaling relation
${A_{Y_{\mathrm{X}}}}$ | $\mathcal{U}(3,10)$
${B_{Y_{\mathrm{X}}}}$ | $\mathcal{U}(0.3,0.9)$
${C_{Y_{\mathrm{X}}}}$ | $\mathcal{U}(-1,0.5)$
${\sigma_{\ln Y_{\mathrm{X}}}}$ | $\mathcal{U}(0.01,0.5)$
$d\ln M_{\mathrm{g}}/d\ln r$ | $\mathcal{N}(1.12,0.23^{2})$
WL modeling
$\delta_{\mathrm{WL,bias}}$ | $\mathcal{N}(0,1)$
$\delta_{\mathrm{Megacam}}$ | $\mathcal{N}(0,1)$
$\delta_{\mathrm{HST}}$ | $\mathcal{N}(0,1)$
$\delta_{\mathrm{WL,scatter}}$ | $\mathcal{N}(0,1)$
$\delta_{\mathrm{WL,LSS}_{\mathrm{Megacam}}}$ | $\mathcal{N}(0,1)$
$\delta_{\mathrm{WL,LSS}_{\mathrm{HST}}}$ | $\mathcal{N}(0,1)$
Correlated scatter
$\rho_{\mathrm{SZ-WL}}$ | $\mathcal{U}(-1,1)$
$\rho_{\mathrm{SZ-X}}$ | $\mathcal{U}(-1,1)$
$\rho_{\mathrm{X-WL}}$ | $\mathcal{U}(-1,1)$
| $\det(\mbox{\boldmath$\Sigma$}_{\text{multi-obs}})>0$
Priors on cluster-only chains
$\Omega_{b}h^{2}$ | $\mathcal{N}(0.02212,0.00022^{2})$
$\tau$ | $\mathcal{N}(0.0522,0.0080^{2})$
$10^{9}A_{s}$ | $\mathcal{N}(2.092,0.033^{2})$
$n_{s}$ | $\mathcal{N}(0.9626,0.0057^{2})$
Note. — The parameter priors used in this analysis are listed here. The symbol
$\mathcal{U}$ denotes a uniform prior over the given range while
$\mathcal{N}(\mu,\sigma^{2})$ denotes a Gaussian prior centered at $\mu$ with
variance $\sigma^{2}$. The SZ scaling relation priors are used for all results
in this work that include cluster data, while the cluster-only priors listed
in the bottom section are only used in cluster-only MCMCs. The priors in the
X-ray, WL modeling and Correlated scatter section are used for the SPT-SZ
cluster data, but not in forecasts for future experiments.
## 4 Parameter constraints
We now turn to the cosmological implications of the CMB-cluster lensing
measurement and cluster catalog described in §2 using the likelihood function
described in §3. All MCMC analyses use the same priors for the scaling
relations, which are listed in Table 1.
We infer cosmological constraints using the publicly available COSMOSIS
parameter estimation code (Zuntz et al., 2015), running the Boltzmann code
package CAMB (Lewis et al., 2000). We use the _Multinest_ or _emcee_ samplers
(Feroz et al., 2009; Foreman-Mackey et al., 2013) as implemented by COSMOSIS.
Multinest is run with 250 live points with a tolerance value of 0.1. We look
at two cosmological models: the standard six-parameter $\Lambda\mathrm{CDM}$
model with fixed $\sum m_{\nu}=0.06$ eV, and a well-motivated extension to
$\Lambda\mathrm{CDM}$ where the dark energy equation of state, $w$, is allowed
to vary.
### 4.1 $\Lambda\mathrm{CDM}$ Cosmology
Galaxy cluster number counts are very sensitive to the growth of matter
perturbations. Previous works have found that galaxy clusters best constrain the
parameter combination $S_{8}=\sigma_{8}\left(\Omega_{m}/0.3\right)^{0.5}$. We
find, for the SPT cluster sample combined with the Planck power spectrum measurements:
$S_{8}=0.831\pm 0.020\ .$ (6)
The uncertainty is larger than Planck-only by a factor of $1.4$, due to the
tension between the Planck data favoring $S_{8}=0.834\pm 0.016$ and the cluster
data favoring a lower $S_{8}=0.794\pm 0.049$. The result is similar to what
was found in B19, so we do not attribute it to the CMB-cluster lensing data. The
similarity is understandable since the S/N on the CMB-cluster lensing is low
compared to optical weak lensing. For instance, when the mass
normalization ${A_{\mathrm{SZ}}}$ is changed from 4.4 to 5.5, the weak-lensing log-
likelihood changes by $\Delta\ln\mathcal{L}_{\rm WL}=-5.8$, 15 times greater
than the change in the CMB-cluster lensing log-likelihood,
$\Delta\ln\mathcal{L}_{\rm CMBcl}=-0.38$, for the same shift. As noted above
for $S_{8}$, the modest tension between the cluster and Planck data leads to
slightly wider constraints for the combined dataset on $\Omega_{\mathrm{m}}$
and $\sigma_{8}$:
$\Omega_{\mathrm{m}}=0.316\pm 0.011\ ,$ (7)
$\sigma_{8}=0.8081\pm 0.0079\ .$ (8)
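As a quick consistency check, the central values of $\Omega_{\mathrm{m}}$ and $\sigma_{8}$ above approximately reproduce the quoted $S_{8}$; exact agreement is not expected, since each value is separately marginalized over the posterior:

```python
# Central values of the combined SPT clusters + Planck LCDM constraints
omega_m, sigma_8 = 0.316, 0.8081
s8 = sigma_8 * (omega_m / 0.3) ** 0.5
print(round(s8, 3))  # close to the quoted S8 = 0.831
```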
We report the parameter constraints on selected cosmological and scaling
relation parameters in Table 2.
Figure 1: Constraints on $\Omega_{\mathrm{m}}$ and $w$ in the $w\mathrm{CDM}$
model from the SPT-SZ cluster dataset (blue contours) and the Planck TTTEEE
power spectra (green contours). The SPT-SZ cluster count constraints are
obtained using CMB-cluster lensing information along with information from
follow-up datasets. The cluster data help break the degeneracy between
$\Omega_{\mathrm{m}}$ and $w$ that exists in the CMB power spectra alone.
Table 2: Parameter Constraints for the Planck and SPT-SZ surveys
Parameter | $\Lambda\mathrm{CDM}$ | $w\mathrm{CDM}$
---|---|---
| Planck | SPT Clusters | Planck | SPT Clusters
$\Omega_{\mathrm{m}}$ | $0.3165\pm 0.0084$ | $0.352\pm 0.047$ | $0.184\pm 0.045$ | $0.279\pm 0.042$
$\sigma_{8}$ | $0.8118\pm 0.0072$ | $0.737\pm 0.033$ | $0.985\pm 0.077$ | $0.772\pm 0.037$
$S_{8}$ | $0.834\pm 0.016$ | $0.794\pm 0.049$ | $0.774\pm 0.031$ | $0.743\pm 0.048$
$w$ | – | – | $-1.63\pm 0.28$ | $-1.07\pm 0.20$
${A_{\mathrm{SZ}}}$ | – | $5.3\pm 1.1$ | – | $5.1\pm 1.2$
${B_{\mathrm{SZ}}}$ | – | $1.668\pm 0.068$ | – | $1.631\pm 0.068$
${C_{\mathrm{SZ}}\ }$ | – | $1.09\pm 0.30$ | – | $0.73\pm 0.24$
${\sigma_{\ln\zeta}}$ | – | $0.168\pm 0.076$ | – | $0.176\pm 0.071$
Note. — Summary of constraints obtained from including cluster data in our
analysis for $\Lambda\mathrm{CDM}$ and $w\mathrm{CDM}$ cosmological models.
Constraints obtained from using Planck only dataset are given for comparison.
### 4.2 $w\mathrm{CDM}$
Clusters are an important probe of the late time Universe when dark energy
dominates the energy budget. We therefore consider the impact of the cluster
abundance and CMB-cluster lensing measurement on the dark energy equation of
state parameter $w$. The cluster data favors
$w=-1.07\pm 0.20\ ,$ (9)
consistent with a cosmological constant. As shown in Fig. 1, the cluster
abundance data prefers a higher value of the dark energy equation of state as
the matter density increases. The detection significance of the B15 CMB-
cluster lensing measurement is as yet too low to significantly tighten the
allowed parameter volume. While this uncertainty on $w$ is modestly tighter
than that inferred from Planck power spectra alone
($w=-1.56^{+0.19}_{-0.39}$), combining the cluster abundance and Planck CMB
data significantly reduces the allowed region to:
$w=-1.30\pm 0.10\ .$ (10)
## 5 Forecasts
We now examine the expected impact of CMB-cluster lensing on the cosmological
constraints from upcoming galaxy cluster surveys. Using the likelihood
framework from §3, we forecast the results from two surveys: the on-going
SPT-3G survey, and the planned CMB-S4 survey. We assume that SPT-3G will
survey 1500 ${\rm deg}^{2}$ with a temperature map noise level of
2.5 $\mu$K-arcmin (polarization map noise level a factor of
$\sqrt{2}$ higher) at 150 GHz (Sobrin et al., 2021) and produce a catalog of
$\sim$3600 clusters above a signal-to-noise of 4.5. After galactic cuts, we
assume the CMB-S4 survey will cover 60% of the sky with a map noise level of
1.0 $\mu$K-arcmin (polarization map noise level a
factor of $\sqrt{2}$ higher) at 150 GHz (CMB-S4 Collaboration, 2019) and
produce a catalog of $\sim$135,000 clusters above a signal-to-noise of 4.5.
CMB-S4 will survey 3% of the sky to even lower noise levels, which is expected
to add a further 17,000 clusters. Catalogs from both CMB-S4 surveys are used
in the forecasts in this work. We look at the results for the cluster
abundances alone, and in combination with mass information from optical weak
lensing or CMB-cluster lensing. The redshift bins and the uncertainties for
SPT-3G and CMB-S4 surveys are described below.
For the full SPT-3G survey, we expect CMB-cluster lensing to lead to a 4.6%
mass measurement across the entire cluster sample (Raghunathan et al., 2017).
Given the high detection significance, we choose to subdivide the cluster
catalog into four redshift bins to better constrain any redshift evolution in
the relationship between SZ flux and mass. The four redshift bins are
$[0.25,0.55)$, $[0.55,0.78)$, $[0.78,1.06)$, and $[1.06,2.0]$, chosen to achieve a
roughly equal number of clusters and lensing detection significance in each
bin. The uncertainty on the average mass of the clusters in each of the four
bins is taken to be 9.2%. For simplicity, we assume equal constraining power
in each of the bins. We do not include the effect of systematic uncertainties,
such as from tSZ contamination or errors in the assumed mass profile, but
point interested readers towards Raghunathan et al. (2017) for a discussion of
potential systematic errors and their magnitude. The potential systematic
biases are expected to be correctable to better than the mass uncertainties
assumed in this work. We conservatively assume a 5% mass calibration from
optical weak lensing at $z<0.8$, again implemented as four 10% mass
constraints on redshift bins running $[0.25,0.39)$, $[0.39,0.53)$, $[0.53,0.67)$, and
$[0.67,0.8]$, such as might be achieved from the final DES results (McClintock
et al., 2019).
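The per-bin numbers above follow from splitting a stacked constraint across independent bins: with $N$ equally constraining bins, the per-bin fractional mass uncertainty is $\sqrt{N}$ times the combined one.

```python
import math

# Splitting a stacked lensing constraint into N independent, equally
# constraining redshift bins inflates the per-bin fractional uncertainty
# by sqrt(N): 4.6% overall becomes 9.2% per bin for four bins.
def per_bin_uncertainty(combined_frac, n_bins=4):
    return combined_frac * math.sqrt(n_bins)

print(per_bin_uncertainty(0.046))  # SPT-3G CMB-cluster lensing: 0.092
print(per_bin_uncertainty(0.05))   # DES-like optical WL: 0.1
```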
The CMB-S4 survey is expected to start in the second half of this decade. As
such, we assume substantially improved optical weak lensing mass measurements
will be available from, for instance, LSST or Euclid, and provide either a 2%
(conservative) or 1% (goal) mass calibration (Grandis et al., 2019). As
before, we implement this as either a 4% or 2% mass calibration in each of
four redshift bins that cover the redshift range from z = 0.25 to 0.8. The
lower noise CMB maps will also enable tighter mass constraints from CMB-
cluster lensing. From Raghunathan et al. (2017), we estimate that the CMB-S4
wide survey will yield a 3% mass calibration in each of the four redshift
bins, while the deep survey will yield a weaker (due to fewer clusters) 5%
mass calibration in each redshift bin. As with SPT-3G, we do not include the
effect of systematic errors.
As shown in Table 3 and Fig. 2, we find that adding the mass information from
optical weak lensing and CMB-cluster lensing substantially improves
cosmological constraints from galaxy cluster abundances with SPT-3G and
CMB-S4. Assuming that the posteriors are approximately Gaussian, we calculate
the allowed parameter volume as the square root of the determinant of the
covariance matrix. The allowed parameter volume from the cluster abundance
data for the 7 parameters of the $w$CDM model is reduced by a factor of $4.1$
for SPT-3G and $6.1$ for CMB-S4 by adding the CMB-cluster and optical lensing
measurements. While the absolute mass calibration is similar between the
optical and CMB lensing channels ($\sim$ 5% for SPT-3G and $\sim$ 2-3% for
CMB-S4), the higher redshift lever arm in the CMB-cluster lensing measurement
has advantages for the SZ cluster catalogs with their high median redshifts
($\sim$ 0.8 for both the SPT-3G and CMB-S4 surveys). For the SPT-3G cluster
sample, adding only the CMB-cluster lensing measurement reduces the parameter
volume by a factor of $2.8$. Adding both CMB-cluster lensing and optical weak
lensing reduces the parameter volume by a factor of $4.1$, as stated above.
This translates to an improvement on $w$ from $\sigma(w)=0.19$ for cluster
counts to $\sigma(w)=0.15$ with CMB-cluster lensing and $\sigma(w)=0.14$ with
CMB-cluster lensing and optical weak lensing information (the latter two
uncertainties are consistent given the number of samples in the MCMC). The
expected constraint on $\sigma_{8}$ shows an even larger improvement,
tightening from $\sigma(\sigma_{8})=0.039$ for cluster counts to
$\sigma(\sigma_{8})=0.016$ with CMB-cluster lensing and
$\sigma(\sigma_{8})=0.014$ with CMB-cluster lensing and optical weak lensing
information. The story is similar for CMB-S4. The 7-parameter volume is
reduced by a factor of $4.8$ ($6.1$) by adding CMB-cluster lensing (both CMB-
cluster lensing and a 2% optical weak lensing measurement). Adding both the
optical weak lensing and CMB-cluster lensing information brings
$\sigma(w)=0.028$ down to $\sigma(w)=0.023$ for a 2% mass calibration
($\sigma(w)=0.020$ for a 1% mass calibration), a factor of $1.2$ ($1.4$)
improvement over the cluster counts alone. The CMB-cluster lensing information
substantially improves the constraint on $\sigma_{8}$ from the CMB-S4 cluster
catalog by more than a factor of three, from $\sigma(\sigma_{8})=0.016$ to
$\sigma(\sigma_{8})=0.0044$. Adding a 1% (2%) optical weak lensing mass
measurement yields consistent results (within the sampling error) of
$\sigma(\sigma_{8})=0.0046~(0.0040)$. CMB-cluster lensing cluster mass
measurements will be important to achieve the full potential of cluster
cosmology over this decade.
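The parameter-volume figure of merit used above can be sketched on toy posteriors; the draws below are illustrative, not the forecast chains:

```python
import numpy as np

def parameter_volume(samples):
    """Allowed parameter volume as sqrt(det(cov)), for an approximately
    Gaussian posterior (the approximation used in the text). `samples`
    is an (n_samples, n_params) array of MCMC draws."""
    cov = np.cov(samples, rowvar=False)
    return float(np.sqrt(np.linalg.det(cov)))

# Toy 3-parameter posteriors: shrinking each axis shrinks the volume by
# the product of the per-axis factors (here 0.5 * 0.5 * 0.25 = 1/16).
rng = np.random.default_rng(0)
wide = rng.normal(size=(20000, 3))             # counts-only toy posterior
tight = wide * np.array([0.5, 0.5, 0.25])      # lensing shrinks each axis
print(parameter_volume(wide) / parameter_volume(tight))  # ~16
```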
Figure 2: The 1 and 2 $\sigma$ contours for $\sigma_{8}$ and $w$ in the
$w\mathrm{CDM}$ model for the SPT-SZ (left panel), SPT-3G (middle panel) and
CMB-S4 (right panel) surveys. The SPT-3G and CMB-S4 contours are forecasts
from simulated cluster catalogs created for $\sigma_{8}=0.8126$ and $w=-1$.
Parameter posterior distributions from the Planck CMB data are shown in green,
while the posteriors from cluster number counts are shown in blue. The
posterior distributions from cluster number counts and CMB-cluster lensing are
shown in orange. Adding CMB-cluster lensing information significantly improves
the constraints on the dark energy equation of state parameter $w$ and on $\sigma_{8}$.
Table 3: Forecasts for Parameter Constraints for Upcoming Surveys
Survey | Data | $\Omega_{m}$ | $h$ | $w$ | $\sigma_{8}$ | $S_{8}$
---|---|---|---|---|---|---
Planck | CMB TTTEEE power spectra | 0.045 | 0.10 | 0.28 | 0.077 | 0.031
SPT-3G | Number counts | $0.026$ | $0.030$ | $0.19$ | $0.039$ | $0.051$
| + CMB-cluster lensing | $0.025$ | $0.028$ | $0.15$ | $0.016$ | $0.025$
| + CMB-cluster and optical weak lensing | $0.024$ | $0.024$ | $0.14$ | $0.014$ | $0.023$
CMB-S4 | Number counts | $0.0063$ | $0.012$ | $0.028$ | $0.016$ | $0.016$
| + CMB-cluster lensing | $0.0057$ | $0.0092$ | $0.029$ | $0.0044$ | $0.0059$
| + CMB-cluster and 2% optical weak lensing | $0.0052$ | $0.0071$ | $0.023$ | $0.0040$ | $0.0059$
| + CMB-cluster and 1% optical weak lensing | $0.0050$ | $0.0072$ | $0.020$ | $0.0046$ | $0.0059$
Note. — Cluster counts from SPT-3G and CMB-S4 can significantly improve
cosmological constraints. We report here forecasted constraints in the
7-parameter $w\mathrm{CDM}$ model for $w$, $\sigma_{8}$, and
$S_{8}\equiv\sigma_{8}\sqrt{\Omega_{m}/0.3}$. The second row has current
uncertainties from the Planck 2018 TTTEEE data shown for comparison. The third
through fifth rows give, in order, the expected uncertainties from the SPT-3G
cluster counts alone, from the cluster counts with the CMB-cluster lensing mass
measurement, and from the cluster counts with both the CMB-cluster lensing and
a DES-like optical weak lensing mass measurement. The sixth through ninth rows
are the same except for CMB-S4
and two options for an LSST-like optical survey that yields either a 1% or 2%
mass measurement. Adding the optical weak lensing mass measurements to the
CMB-S4 catalog does not improve estimates of large scale structure today (i.e.
$\sigma_{8}$) but does noticeably improve the constraints on the dark energy
equation of state.
## 6 Conclusions and Outlook
We present the first cosmological parameter constraints incorporating CMB-
cluster lensing mass estimates from the South Pole Telescope. While the CMB-
cluster lensing mass information does not yet substantively improve
cosmological constraints as compared to B19, this work serves as a
demonstration for the method which will be important for the next generation
of large galaxy cluster surveys.
We show that adding CMB-cluster lensing mass measurements should significantly
improve cosmological constraints from on-going cluster surveys such as SPT-3G.
In the 7-parameter $w$CDM cosmological model, we find that adding CMB-cluster
lensing mass estimates to cluster number counts leads to a factor of $1.3$
reduction in the uncertainty on $w$ and a factor of $2.4$ reduction in that on $\sigma_{8}$.
CMB-cluster lensing data remains significant for the larger galaxy cluster
catalog expected for CMB-S4. For CMB-S4, we find the CMB-cluster lensing data
reduces the uncertainty on $\sigma_{8}$ by a factor of $3.6$. CMB-cluster
lensing has the potential to significantly expand the cosmological information
we can extract from galaxy cluster surveys.
The South Pole Telescope program is supported by the National Science
Foundation (NSF) through award OPP-1852617. Argonne National Laboratory’s work
was supported by the U.S. Department of Energy, Office of High Energy Physics,
under contract DE-AC02-06CH11357. We also acknowledge support from the Argonne
Center for Nanoscale Materials. The Melbourne group acknowledges support from
the Australian Research Council’s Discovery Projects scheme (DP200101068). AAS
acknowledges support by U.S. National Science Foundation grant AST-1814719. AS
is supported by the FARE-MIUR grant ‘ClustersXEuclid’ R165SBKTMA, INFN InDark,
and by the ERC-StG ‘ClustersXCosmo’ grant agreement 716762. The data analysis
pipeline also uses the scientific python stack (Hunter, 2007; Jones et al.,
2001; van der Walt et al., 2011). We acknowledge the use of Spartan, a
high-performance computing facility at the University of Melbourne (Lafayette
et al., 2016).
## References
* Allen et al. (2011) Allen, S. W., Evrard, A. E., & Mantz, A. B. 2011, ARA&A, 49, 409, doi: 10.1146/annurev-astro-081710-102514
* Baxter et al. (2015) Baxter, E. J., Keisler, R., Dodelson, S., et al. 2015, ApJ, 806, 247, doi: 10.1088/0004-637X/806/2/247
* Baxter et al. (2018) Baxter, E. J., Raghunathan, S., Crawford, T. M., et al. 2018, MNRAS, 476, 2674, doi: 10.1093/mnras/sty305
* Benson et al. (2014) Benson, B. A., Ade, P. A. R., Ahmed, Z., et al. 2014, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9153, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. https://arxiv.org/abs/1407.2973
* Bleem et al. (2015) Bleem, L. E., Stalder, B., de Haan, T., et al. 2015, ApJS, 216, 27, doi: 10.1088/0067-0049/216/2/27
* Bocquet et al. (2019) Bocquet, S., Dietrich, J. P., Schrabback, T., et al. 2019, ApJ, 878, 55, doi: 10.3847/1538-4357/ab1f10
* Cash (1979) Cash, W. 1979, ApJ, 228, 939, doi: 10.1086/156922
* CMB-S4 Collaboration (2019) CMB-S4 Collaboration. 2019, arXiv e-prints, arXiv:1907.04473. https://arxiv.org/abs/1907.04473
* de Haan et al. (2016) de Haan, T., Benson, B. A., Bleem, L. E., et al. 2016, ApJ, 832, 95, doi: 10.3847/0004-637X/832/1/95
* Dietrich et al. (2019) Dietrich, J. P., Bocquet, S., Schrabback, T., et al. 2019, MNRAS, 483, 2871, doi: 10.1093/mnras/sty3088
* Dodelson (2004) Dodelson, S. 2004, Phys. Rev. D, 70, 023009, doi: 10.1103/PhysRevD.70.023009
* Euclid Collaboration et al. (2019) Euclid Collaboration, Adam, R., Vannier, M., et al. 2019, Astronomy and Astrophysics, 627, A23, doi: 10.1051/0004-6361/201935088
* Feroz et al. (2009) Feroz, F., Hobson, M., & Bridges, M. 2009, Mon. Not. Roy. Astron. Soc., 398, 1601, doi: 10.1111/j.1365-2966.2009.14548.x
* Foreman-Mackey et al. (2013) Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306, doi: 10.1086/670067
* Grandis et al. (2019) Grandis, S., Mohr, J. J., Dietrich, J. P., et al. 2019, Monthly Notices of the Royal Astronomical Society, 488, 2041, doi: 10.1093/mnras/stz1778
* Gupta & Reichardt (2020) Gupta, N., & Reichardt, C. L. 2020, arXiv e-prints, arXiv:2005.13985. https://arxiv.org/abs/2005.13985
* Haiman et al. (2001) Haiman, Z., Mohr, J. J., & Holder, G. P. 2001, ApJ, 553, 545, doi: 10.1086/320939
* Holder (2006) Holder, G. 2006, arXiv e-prints, astro-ph/0602251. https://arxiv.org/abs/astro-ph/0602251
* Holder & Kosowsky (2004) Holder, G., & Kosowsky, A. 2004, ApJ, 616, 8, doi: 10.1086/424808
* Hu et al. (2007) Hu, W., DeDeo, S., & Vale, C. 2007, New Journal of Physics, 9, 441, doi: 10.1088/1367-2630/9/12/441
* Hunter (2007) Hunter, J. D. 2007, Computing In Science & Engineering, 9, 90, doi: 10.1109/MCSE.2007.55
* Jones et al. (2001) Jones, E., Oliphant, T., Peterson, P., et al. 2001, SciPy: Open source scientific tools for Python. http://www.scipy.org/
* Lafayette et al. (2016) Lafayette, L., Sauter, G., Vu, L., & Meade, B. 2016, OpenStack Summit, Barcelona, doi: 10.4225/49/58ead90dceaaa
* Lewis & Challinor (2006) Lewis, A., & Challinor, A. 2006, Phys. Rep., 429, 1, doi: 10.1016/j.physrep.2006.03.002
* Lewis et al. (2000) Lewis, A., Challinor, A., & Lasenby, A. 2000, Astrophys. J., 538, 473
* Lewis & King (2006) Lewis, A., & King, L. 2006, Phys. Rev. D, 73, 063006, doi: 10.1103/PhysRevD.73.063006
* LSST Science Collaboration et al. (2009) LSST Science Collaboration, Abell, P. A., Allison, J., et al. 2009, arXiv e-prints, arXiv:0912.0201. https://arxiv.org/abs/0912.0201
* Madhavacheril et al. (2015) Madhavacheril, M., Sehgal, N., Allison, R., et al. 2015, Physical Review Letters, 114, 151302, doi: 10.1103/PhysRevLett.114.151302
* McClintock et al. (2019) McClintock, T., Varga, T. N., Gruen, D., et al. 2019, MNRAS, 482, 1352, doi: 10.1093/mnras/sty2711
* McDonald et al. (2013) McDonald, M., Benson, B. A., Vikhlinin, A., et al. 2013, ApJ, 774, 23, doi: 10.1088/0004-637X/774/1/23
* McDonald et al. (2017) McDonald, M., Allen, S. W., Bayliss, M., et al. 2017, ApJ, 843, 28, doi: 10.3847/1538-4357/aa7740
* Melin & Bartlett (2015) Melin, J.-B., & Bartlett, J. G. 2015, A&A, 578, A21, doi: 10.1051/0004-6361/201424720
* Merloni et al. (2012) Merloni, A., Predehl, P., Becker, W., et al. 2012, arXiv e-prints, arXiv:1209.3114. https://arxiv.org/abs/1209.3114
* Navarro et al. (1997) Navarro, J. F., Frenk, C. S., & White, S. D. M. 1997, ApJ, 490, 493, doi: 10.1086/304888
* Planck Collaboration et al. (2016) Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2016, A&A, 594, A24, doi: 10.1051/0004-6361/201525833
* Planck Collaboration et al. (2020) Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2020, A&A, 641, A5, doi: 10.1051/0004-6361/201936386
* Raghunathan et al. (2017) Raghunathan, S., Patil, S., Baxter, E. J., et al. 2017, J. Cosmology Astropart. Phys, 8, 030, doi: 10.1088/1475-7516/2017/08/030
* Raghunathan et al. (2019a) Raghunathan, S., Patil, S., Baxter, E., et al. 2019a, Phys. Rev. Lett., 123, 181301, doi: 10.1103/PhysRevLett.123.181301
* Raghunathan et al. (2019b) —. 2019b, ApJ, 872, 170, doi: 10.3847/1538-4357/ab01ca
* Reichardt et al. (2013) Reichardt, C. L., Stalder, B., Bleem, L. E., et al. 2013, ApJ, 763, 127, doi: 10.1088/0004-637X/763/2/127
* Salvati et al. (2018) Salvati, L., Douspis, M., & Aghanim, N. 2018, A&A, 614, A13, doi: 10.1051/0004-6361/201731990
* Salvati et al. (2019) Salvati, L., Douspis, M., Ritz, A., Aghanim, N., & Babul, A. 2019, A&A, 626, A27, doi: 10.1051/0004-6361/201935041
* Schrabback et al. (2018) Schrabback, T., Applegate, D., Dietrich, J. P., et al. 2018, MNRAS, 474, 2635, doi: 10.1093/mnras/stx2666
* Seljak & Zaldarriaga (2000) Seljak, U., & Zaldarriaga, M. 2000, ApJ, 538, 57, doi: 10.1086/309098
* Shimon et al. (2011) Shimon, M., Sadeh, S., & Rephaeli, Y. 2011, MNRAS, 412, 1895, doi: 10.1111/j.1365-2966.2010.18026.x
* Sobrin et al. (2021) Sobrin, J. A., Anderson, A. J., Bender, A. N., et al. 2021, arXiv e-prints, arXiv:2106.11202. https://arxiv.org/abs/2106.11202
* The LSST Dark Energy Science Collaboration et al. (2018) The LSST Dark Energy Science Collaboration, Mandelbaum, R., Eifler, T., et al. 2018, arXiv e-prints, arXiv:1809.01669. https://arxiv.org/abs/1809.01669
* To et al. (2020) To, C., Krause, E., Rozo, E., et al. 2020, arXiv e-prints, arXiv:2010.01138. https://arxiv.org/abs/2010.01138
* Vale & Ostriker (2004) Vale, A., & Ostriker, J. P. 2004, MNRAS, 353, 189, doi: 10.1111/j.1365-2966.2004.08059.x
* van der Walt et al. (2011) van der Walt, S., Colbert, S., & Varoquaux, G. 2011, Computing in Science Engineering, 13, 22, doi: 10.1109/MCSE.2011.37
* Vanderlinde et al. (2010) Vanderlinde, K., Crawford, T. M., de Haan, T., et al. 2010, ApJ, 722, 1180, doi: 10.1088/0004-637X/722/2/1180
* Wang & Steinhardt (1998) Wang, L., & Steinhardt, P. J. 1998, ApJ, 508, 483
* Weller & Battye (2003) Weller, J., & Battye, R. A. 2003, New Astronomy Review, 47, 775, doi: 10.1016/S1387-6473(03)00137-4
* Weller et al. (2002) Weller, J., Battye, R. A., & Kneissl, R. 2002, Phys. Rev. Lett., 88, 231301, doi: 10.1103/PhysRevLett.88.231301
* Zohren et al. (2019) Zohren, H., Schrabback, T., van der Burg, R. F. J., et al. 2019, MNRAS, 488, 2523, doi: 10.1093/mnras/stz1838
* Zubeldia & Challinor (2019) Zubeldia, Í., & Challinor, A. 2019, MNRAS, 489, 401, doi: 10.1093/mnras/stz2153
* Zuntz et al. (2015) Zuntz, J., Paterno, M., Jennings, E., et al. 2015, Astron. Comput., 12, 45, doi: 10.1016/j.ascom.2015.05.005
# Optimal Decision-Making for Autonomous Agents via Data Composition
Émiland Garrabé, Martina Lamberti and Giovanni Russo∗ ∗ emails:
<EMAIL_ADDRESS><EMAIL_ADDRESS>E. Garrabé and G.
Russo are with the DIEM at the University of Salerno, 84084, Salerno, Italy.
###### Abstract
We consider the problem of designing agents able to compute optimal decisions
by composing data from multiple sources to tackle tasks involving: (i)
tracking a desired behavior while minimizing an agent-specific cost; (ii)
satisfying safety constraints. After formulating the control problem, we show
that this is convex under a suitable assumption and find the optimal solution.
The effectiveness of the results, which are turned into an algorithm, is
illustrated on a connected cars application via in-silico and in-vivo
experiments with real vehicles and drivers. All the experiments confirm our
theoretical predictions and the deployment of the algorithm on a real vehicle
shows its suitability for in-car operation.
## I Introduction
We often make decisions by composing knowledge gathered from others [1] and a
challenge transversal to control and learning is to devise mechanisms allowing
autonomous decision-makers to emulate these abilities. Systems based on
sharing data [2] are examples where agents need to make decisions based on
some form of crowdsourcing [3]. Similar mechanisms can also be useful for the
data-driven control paradigm when e.g., one needs to re-use policies
synthesized on plants for which data are available to solve a control task on
a new plant, for which data are scarcely available [3, 4, 5].
Motivated by this, we consider the problem of designing decision-making
mechanisms that enable autonomous agents to compute optimal decisions by
composing information from third parties to solve tasks that involve: (i)
tracking a desired behavior while minimizing an agent-specific cost; (ii)
satisfying safety constraints. Our results enable computation of the optimal
behavior and are turned into an algorithm. This is experimentally validated on
a connected car application.
#### Related works
we briefly survey a number of works related to the results and methodological
framework of this paper. The design of context-aware switches between multiple
datasets for autonomous agents has been recently considered in [3, 4], where
the design problem, formalized as a data-driven control (DDC) problem, did not
take into account safety requirements. Results in DDC include [6, 7, 8], which
take a behavioral systems perspective, [9], which finds data-driven formulas
towards a model-free theory of geometric control. We also recall e.g., [10,
11, 12] for results inspired from MPC, [5] that considers data-driven control
policies transfer and [13] that tackles the problem of computing data-driven
minimum-energy control for linear systems. In our control problem (see Section
III) we formalize the tracking of a given behavior via Kullback-Leibler (KL)
divergence minimization and we refer to e.g., [14, 15] for examples across
learning and control that involve minimizing this functional. Further, the
study of mechanisms enabling agents to re-use data, also arises in the design
of prediction algorithms from experts [16] and of learning algorithms from
multiple simulators [17]. In a yet broader context, studies in neuroscience
[18] hint that our neocortex might implement a mechanism composing the output
of the cortical columns and this might be the basis of our ability to re-use
knowledge.
_Contributions:_ we consider the problem of designing agents that dynamically
combine data from heterogeneous sources to fulfill tasks that involve tracking
a target behavior while optimizing a cost and satisfying safety requirements
expressed as box constraints. By leveraging a probabilistic framework, we
formulate a data-driven optimal control problem and, for this problem, we: (i)
prove convexity under a suitable condition; (ii) find the optimal solution;
(iii) turn our results into an algorithm, using it to design an intelligent
parking system for connected vehicles. Validations are performed both in-
silico and in-vivo, with real cars. As such, the main purpose of this paper is
twofold: (i) we introduce, and rigorously characterize, our algorithm; (ii) we
propose a stand-alone implementation of our results, suitable for in-vivo
experiments on real cars. In-vivo validations were performed via a hardware-
in-the-loop platform that allows embedding real cars/drivers in the experiments.
Using the platform, we deploy our algorithm on a real vehicle showing its
suitability for in-car operation. All experiments confirm the effectiveness of
our approach (documented code/data for our simulations at
https://tinyurl.com/3ep4pknh).
While our results are inspired by the ones in [3, 4], our paper extends these
in several ways. First, the results in [3, 4] cannot consider box constraints
and hence cannot tackle the control problem of this paper. Second, even when
there are no box constraints, the results in [3, 4] only solve an approximate
version of the problem considered here. That is, the results from [3, 4] only
find an approximate, non-optimal, solution of the problem considered here. As
we shall see, this means that the solutions from [3, 4] cannot achieve a lower
cost than the one obtained with the results of this paper. Third, the
algorithm obtained from the results in this paper is deployed, and validated,
on a real car and this was not done in [3, 4]. The _in-vivo_ implementation is
novel.
#### Notation
sets are in calligraphic and vectors in bold. Given the measurable space
$(\mathcal{X},\mathcal{F}_{x})$, with $\mathcal{X}\subseteq\mathbb{R}^{d}$
($\mathcal{X}\subseteq\mathbb{Z}^{d}$) and $\mathcal{F}_{x}$ being a
$\sigma$-algebra on $\mathcal{X}$, a random variable on
$(\mathcal{X},\mathcal{F}_{x})$ is denoted by $\mathbf{X}$ and its realization
by $\mathbf{x}$. The probability density (resp. mass) function or pdf (pmf) of
a continuous (discrete) $\mathbf{X}$ is denoted by $p(\mathbf{x})$. The convex
subset of such probability functions (pfs) is $\mathcal{D}$. The expectation
of a function $\mathbf{h}(\cdot)$ of the continuous variable $\mathbf{X}$ is
$\mathbb{E}_{{p}}[\mathbf{h}(\mathbf{X})]:=\int\mathbf{h}(\mathbf{x})p(\mathbf{x})d\mathbf{x}$,
where the integral (in the sense of Lebesgue) is over the support of
$p(\mathbf{x})$, which we denote by $\mathcal{S}(p)$. The joint pf of
$\mathbf{X}_{1}$, $\mathbf{X}_{2}$ is $p(\mathbf{x}_{1},\mathbf{x}_{2})$ and
the conditional pf of $\mathbf{X}_{1}$ given $\mathbf{X}_{2}$ is
$p\left(\mathbf{x}_{1}\mid\mathbf{x}_{2}\right)$. Countable sets are denoted
by $\\{w_{k}\\}_{k_{1}:k_{n}}$, where $w_{k}$ is the generic element of the
set and $k_{1}:k_{n}$ is the closed set of consecutive integers between
$k_{1}$ and $k_{n}$. The KL divergence between $p(\mathbf{x})$ and
$q(\mathbf{x})$, where $p$ is absolutely continuous w.r.t. $q$, is
$\mathcal{D}_{\text{KL}}\left(p\mid\mid
q\right):=\int_{\mathcal{S}(p)}p\;\ln\left({p}/{q}\right)\,d\mathbf{x}$:
it is non-negative and $0$ if and only if $p(\mathbf{x})=q(\mathbf{x})$. In
the expressions for the expectation and KL divergence, the integral is
replaced by the sum if the variables are discrete. Finally: (i) we let
$\mathds{1}_{\mathcal{A}}(\mathbf{x})$ denote the indicator function being
equal to $1$ if $\mathbf{x}\in\mathcal{A}\subseteq\mathcal{X}$ and $0$
otherwise; (ii) set exclusion is instead denoted by $\setminus$.
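For discrete variables, the KL divergence above is a sum over $\mathcal{S}(p)$; a minimal sketch of its computation (the function name is ours, for illustration only):

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(p || q) for discrete pmfs, summing over the support of p.

    Requires p to be absolutely continuous w.r.t. q (q > 0 wherever p > 0).
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    support = p > 0  # S(p): restrict the sum to the support of p
    return float(np.sum(p[support] * np.log(p[support] / q[support])))

# Non-negativity, with zero if and only if p == q:
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
assert kl_divergence(p, p) == 0.0
assert kl_divergence(p, q) > 0.0
```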
## II The Setup
The agent seeks to craft its behavior by combining a number of sources to
fulfill a task that involves tracking a target/desired behavior while
maximizing an agent-specific reward over the time horizon $\mathcal{T}:=0:T$,
$T>0$. The agent’s state at time step $k\in\mathcal{T}$ is
$\mathbf{x}_{k}\in\mathcal{X}$ and the target behavior that the agent seeks to
track is $p_{0:T}:=p_{0}(\mathbf{x}_{0})\prod_{k\in
1:T}p(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})$. As in [3, 4], we design the
behavior of the agent by designing its joint pf
$\pi_{0:T}:=\pi(\mathbf{x}_{0},\ldots,\mathbf{x}_{T})$ and we have:
$\pi_{0:T}=\pi_{0}(\mathbf{x}_{0})\prod_{k\in
1:T}\pi(\mathbf{x}_{k}\mid\mathbf{x}_{k-1}).$ (1)
That is, the behavior of the agent can be designed via the pfs
$\pi(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})$, i.e., the transition probabilities.
To do so, the agent has access to $S$ sources and we denote by
$\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})$, with support
$\mathcal{S}(\pi)\subseteq\mathcal{X}$, the behavior made available by source
$i$, $i\in\mathcal{S}:=1:S$, at $k-1$. We also let $r_{k}(\mathbf{x}_{k})$ be
the agent’s reward for being in state $\mathbf{x}_{k}$ at $k$.
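On a finite state space, each source behavior $\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})$ is a row-stochastic transition matrix and the agent's behavior is their convex combination; a sketch (random sources are illustrative, not from the paper's application):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_stochastic_matrix(n, rng):
    """A random row-stochastic transition matrix on an n-state space."""
    m = rng.random((n, n))
    return m / m.sum(axis=1, keepdims=True)

def compose(sources, alpha):
    """The agent behavior sum_i alpha_i pi^(i) as a convex combination."""
    alpha = np.asarray(alpha, dtype=float)
    assert np.all(alpha >= 0.0) and np.isclose(alpha.sum(), 1.0)
    return np.tensordot(alpha, np.stack(sources), axes=1)

n_states, n_sources = 4, 3
sources = [random_stochastic_matrix(n_states, rng) for _ in range(n_sources)]
pi = compose(sources, [0.2, 0.5, 0.3])
assert np.allclose(pi.sum(axis=1), 1.0)  # still a valid transition kernel
```

A convex combination of row-stochastic matrices is itself row-stochastic, so the composed behavior is always a valid transition kernel.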
## III Formulation of the Control Problem
Let $\alpha_{k}^{(1)},\ldots,\alpha_{k}^{(S)}$ be weights and
$\boldsymbol{\alpha}_{k}$ be their stack. Then, the control problem we
consider can be formalized as:
###### Problem 1
find the sequence ${\left\\{\boldsymbol{\alpha}_{k}^{\ast}\right\\}_{1:T}}$
solving
$\displaystyle\underset{\left\\{\boldsymbol{\alpha}_{k}\right\\}_{1:T}}{\text{min}}$
$\displaystyle\mathcal{D}_{\text{KL}}\left(\pi_{0:T}\mid\mid
p_{0:T}\right)-\sum_{k=1}^{T}\mathbb{E}_{\pi(\mathbf{x}_{k-1})}\left[\tilde{r}_{k}(\mathbf{X}_{k-1})\right]$
s.t. $\displaystyle\
\mathbb{E}_{\pi(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}\left[\mathds{1}_{\mathcal{X}_{k}}(\mathbf{x}_{k})\right]\geq
1-\epsilon_{k},\ \ \ \forall k,$ $\displaystyle\
\pi(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})=\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1}),\
\ \ \forall k,$ $\displaystyle\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}=1,\ \
\alpha_{k}^{(i)}\in[0,1],\ \ \ \forall k.$
In Problem 1,
$\tilde{r}_{k}(\mathbf{x}_{k-1}):=\mathbb{E}_{\pi(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}\left[r_{k}(\mathbf{X}_{k})\right]$
and we note that
$\mathbb{E}_{\pi(\mathbf{x}_{k-1})}\left[\tilde{r}_{k}(\mathbf{X}_{k-1})\right]=\mathbb{E}_{\pi(\mathbf{x}_{k})}\left[r_{k}(\mathbf{X}_{k})\right]$
is the expected reward for the agent when the behavior in (1) is followed. The
problem is a finite-horizon optimal control problem with the
$\boldsymbol{\alpha}_{k}$’s as decision variables. As we shall see, these are
generated as feedback from the agent state (Section IV). We say that
$\left\\{\pi^{\ast}\left(\mathbf{x}_{k}\mid\mathbf{x}_{k-1}\right)\right\\}_{1:T}$,
with
$\pi^{\ast}\left(\mathbf{x}_{k}\mid\mathbf{x}_{k-1}\right)=\sum_{i\in\mathcal{S}}\alpha_{k}^{(i),\ast}\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})$,
is the optimal behavior for the agent, obtained by linearly combining the
sources via the $\boldsymbol{\alpha}_{k}^{\ast}$’s. In the problem, the cost
formalizes the fact that the agent seeks to maximize its reward, while
tracking (in the KL divergence sense) the target behavior. Minimizing the KL
term amounts to minimizing the discrepancy between $\pi_{0:T}$ and $p_{0:T}$.
This term can also be thought of as a divergence regularizer and, when $p_{0:T}$
is uniform, it becomes an entropic regularizer. The second and third
constraints formalize the fact that, at each $k$,
$\pi^{\ast}\left(\mathbf{x}_{k}\mid\mathbf{x}_{k-1}\right)\in\mathcal{D}$ and
that this is a convex combination of the pfs from the sources. The first
constraint is a box constraint and models the fact that the probability that
the agent behavior is, at each $k$, inside some (e.g., safety) measurable set
$\mathcal{X}_{k}\subseteq\mathcal{X}$ is at least
$1-\epsilon_{k}$, with $\epsilon_{k}\geq 0$. We now make the following
###### Assumption 1
the optimal cost of Problem 1 is bounded.
###### Remark 1
the cost in Problem 1 can be recast as
$\mathcal{D}_{\text{KL}}\left(\pi_{0:T}\mid\mid\tilde{p}_{0:T}\right)$, where
$\tilde{p}_{0:T}\propto
p_{0:T}\exp{\left(\sum_{k=1}^{T}r_{k}(\mathbf{x}_{k})\right)}$. This means
that Assumption 1 is satisfied whenever there exists some $\tilde{\pi}_{0:T}$
that is feasible for Problem 1 and that is absolutely continuous w.r.t.
$\tilde{p}_{0:T}$. See also Remark 3.
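In the discrete case, the tilted pf $\tilde{p}\propto p\exp(r)$ appearing in the remark can be formed by pointwise reweighting and normalization; a per-step sketch (naming ours):

```python
import numpy as np

def tilted_target(p, r):
    """Discrete sketch of the tilted pf: \tilde p proportional to p * exp(r)."""
    w = p * np.exp(r)
    return w / w.sum()  # normalize back to a pmf

p = np.array([0.25, 0.25, 0.5])
r = np.array([1.0, 0.0, -1.0])
p_tilde = tilted_target(p, r)
assert np.isclose(p_tilde.sum(), 1.0)
# Tilting shifts probability mass toward high-reward states:
assert p_tilde[0] > p[0]
```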
Algorithm 1 Pseudo-code
1: Input: time horizon $T$, target behavior $p_{0:T}$, reward $r_{k}(\cdot)$, sources $\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})$, box constraints (optional)
2: Output: optimal agent behavior $\left\\{\pi^{\ast}\left(\mathbf{x}_{k}\mid\mathbf{x}_{k-1}\right)\right\\}_{1:T}$
3: $\hat{r}_{T}(\mathbf{x}_{T})\leftarrow 0$
4: for $k=T:1$ do
5: $\bar{r}_{k}(\mathbf{x}_{k})\leftarrow r_{k}(\mathbf{x}_{k})-\hat{r}_{k}(\mathbf{x}_{k})$
6: $\boldsymbol{\alpha}^{\ast}_{k}(\mathbf{x}_{k-1})\leftarrow$ minimizer of the problem in (2)
7: $\pi^{\ast}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\leftarrow\sum_{i\in\mathcal{S}}\boldsymbol{\alpha}^{(i),\ast}_{k}(\mathbf{x}_{k-1})\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})$
8: $\hat{r}_{k-1}(\mathbf{x}_{k-1})\leftarrow c_{k}(\pi^{\ast}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1}))$
9: end for
## IV Main Results
We propose an algorithm to tackle Problem 1. The algorithm takes as input $T$,
the target behavior, the reward, the behaviors of the sources and the box
constraints of Problem 1 (if any). Given the inputs, it returns the optimal
behavior for the agent. The key steps of the algorithm are given as pseudo-
code in Algorithm 1. An agent that follows Algorithm 1 computes
$\left\\{\boldsymbol{\alpha}_{k}\right\\}_{1:T}$ via backward recursion (lines
$4-9$). At each $k$, the $\boldsymbol{\alpha}_{k}$’s are obtained as the
minimizers of
$\displaystyle\underset{\boldsymbol{\alpha}_{k}}{\min}$ $\displaystyle\
c_{k}(\pi(\mathbf{x}_{k}\mid\mathbf{x}_{k-1}))$ (2) s.t. $\displaystyle\
\mathbb{E}_{\pi(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}\left[\mathds{1}_{\mathcal{X}_{k}}(\mathbf{x}_{k})\right]\geq
1-\epsilon_{k}$ $\displaystyle\
\pi(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})=\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})$
$\displaystyle\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}=1,\ \
\alpha_{k}^{(i)}\in[0,1],$
where
$\begin{split}c_{k}(\pi(\mathbf{x}_{k}\mid\mathbf{x}_{k-1}))&:=\mathcal{D}_{\text{KL}}\left(\pi(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\mid\mid
p(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\right)\\\
&-\mathbb{E}_{\pi(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}\left[\bar{r}_{k}(\mathbf{X}_{k})\right],\end{split}$
(3)
with $\bar{r}_{k}(\cdot)$ iteratively built within the recursion (lines $5$,
$8$). The weights are used (line $7$) to compute
$\pi^{\ast}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})$.
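A minimal numerical sketch of this recursion on a finite state space, solving the per-step problem in (2) with an off-the-shelf solver; the SLSQP solver, the time-invariant sources, and all function names are our illustrative choices, not the paper's implementation:

```python
import numpy as np
from scipy.optimize import minimize

def kl(p, q):
    """D_KL(p || q) for discrete pmfs, summed over the support of p."""
    s = p > 0
    return float(np.sum(p[s] * np.log(p[s] / q[s])))

def solve_step(sources, p_target, r_bar, safe_mask, eps):
    """Solve the per-step problem (2) for a single state x_{k-1}.

    sources:   (S, n) rows pi^(i)(. | x_{k-1})
    p_target:  (n,) target transition pmf p(. | x_{k-1})
    r_bar:     (n,) effective reward
    safe_mask: (n,) boolean indicator of the safe set X_k
    """
    S = sources.shape[0]

    def cost(alpha):
        pi = alpha @ sources
        return kl(pi, p_target) - float(pi @ r_bar)

    cons = [
        {"type": "eq", "fun": lambda a: a.sum() - 1.0},   # simplex constraint
        {"type": "ineq",                                  # box (safety) constraint
         "fun": lambda a: (a @ sources)[safe_mask].sum() - (1.0 - eps)},
    ]
    res = minimize(cost, np.full(S, 1.0 / S), bounds=[(0.0, 1.0)] * S,
                   constraints=cons, method="SLSQP")
    return res.x, float(res.fun)

def algorithm1(sources, p_target, rewards, safe_masks, eps, T):
    """Backward recursion of Algorithm 1 on a finite state space.

    sources:    (S, n, n) source transition matrices (time-invariant here)
    p_target:   (n, n) target transition matrix
    rewards:    (T+1, n) rewards r_k(x), k = 0..T
    safe_masks: (T+1, n) boolean safe sets X_k
    Returns {k: pi*_k}, each pi*_k an (n, n) optimal transition matrix.
    """
    n = p_target.shape[0]
    r_hat = np.zeros(n)                       # line 3: hat r_T = 0
    policy = {}
    for k in range(T, 0, -1):                 # lines 4-9
        r_bar = rewards[k] - r_hat            # line 5
        pi_star = np.zeros((n, n))
        r_hat_prev = np.zeros(n)
        for x in range(n):                    # one convex problem per state
            alpha, c = solve_step(sources[:, x, :], p_target[x],
                                  r_bar, safe_masks[k], eps)
            pi_star[x] = alpha @ sources[:, x, :]   # line 7
            r_hat_prev[x] = c                       # line 8
        policy[k] = pi_star
        r_hat = r_hat_prev
    return policy
```

Each backward step solves one convex program per state; feasibility of each program is the subject of Lemma 1 below.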
###### Remark 2
results are stated for continuous variables (proofs for discrete variables
omitted for brevity). Note that integrals/summations in the cost are over
$\mathcal{S}(\pi)$.
###### Remark 3
following Remark 1, the optimal cost of the problem in (2) is bounded if there
exists some feasible $\tilde{\pi}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})$ that is
absolutely continuous w.r.t.
$\tilde{p}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\propto
p(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\exp\left(\bar{r}_{k}(\mathbf{x}_{k})\right)$.
From the design viewpoint, this can be satisfied if it holds for at least one
$\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})$.
Finally, we make the following
###### Assumption 2
$\forall i\in\mathcal{S}$ and $\forall\mathbf{x}_{k-1}\in\mathcal{X}$, there
exist some constants, say $m$ and $M$, with $0<m\leq M<+\infty$, such that
$m\leq\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\leq M$,
$\forall\mathbf{x}_{k}\in\mathcal{S}(\pi)$.
###### Remark 4
Assumption 1 is satisfied for e.g., Gaussian distributions. As we shall see
(Section V) the assumption can be fulfilled by injecting noise in the data.
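On a finite support, one way to inject noise is to mix each source pf with the uniform pf, which yields full support and the bounds of Assumption 2; a sketch (the mixing weight `lam` is our illustrative choice, not prescribed by the paper):

```python
import numpy as np

def inject_noise(pi, lam=0.05):
    """Mix a pmf with the uniform pmf on an n-point support.

    After mixing, lam/n <= pi_noisy <= (1 - lam) + lam/n, providing the
    bounds m, M of Assumption 2.
    """
    n = pi.shape[-1]
    return (1.0 - lam) * pi + lam / n

pi = np.array([0.0, 0.7, 0.3])      # violates the lower bound m > 0
pi_noisy = inject_noise(pi, lam=0.06)
assert np.isclose(pi_noisy.sum(), 1.0)
assert pi_noisy.min() >= 0.06 / 3   # m = lam / n
```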
### IV-A Properties of Algorithm 1
Before characterizing convexity of the problems recursively solved in
Algorithm 1 and optimality, we give a condition ensuring feasibility of the
problem in (2).
###### Lemma 1
the problem in (2) is feasible if and only if there exists at least one
source, say $\pi^{(j)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})$, such that
$\mathbb{E}_{\pi^{(j)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}\left[\mathds{1}_{\mathcal{X}_{k}}(\mathbf{x}_{k})\right]\geq
1-\epsilon_{k}$.
###### Proof:
the if part clearly holds. For the only if part we prove that if problem (2)
is infeasible, then
$\max_{i}\mathbb{E}_{\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}\left[\mathds{1}_{\mathcal{X}_{k}}(\mathbf{x}_{k})\right]<1-\epsilon_{k}$.
In fact, if the problem is infeasible, then for all $\boldsymbol{\alpha}_{k}$
such that $\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}=1$ and
$\alpha_{k}^{(i)}\in[0,1]$ it must hold that
$\int_{\mathcal{X}_{k}}\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})d\mathbf{x}_{k}<1-\epsilon_{k}.$
Note that this must also hold for all $\boldsymbol{\alpha}_{k}$ such that
$\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}=1$ and $\alpha_{k}^{(i)}\in\\{0,1\\}$,
as these are contained in the set of real-valued $\boldsymbol{\alpha}_{k}$’s.
We conclude the proof by noticing that, if $\boldsymbol{\alpha}_{k}$ is such
that $\alpha_{k}^{(i)}=0\forall i\neq j$ and $\alpha_{k}^{(j)}=1$, then
$\displaystyle\int_{\mathcal{X}_{k}}\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})d\mathbf{x}_{k}$
$\displaystyle=\int_{\mathcal{X}_{k}}\pi^{(j)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})d\mathbf{x}_{k}$
$\displaystyle=\mathbb{E}_{\pi^{(j)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}\left[\mathds{1}_{\mathcal{X}_{k}}(\mathbf{x}_{k})\right].$
It then follows that, $\forall j$,
$\mathbb{E}_{\pi^{(j)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}\left[\mathds{1}_{\mathcal{X}_{k}}(\mathbf{x}_{k})\right]<1-\epsilon_{k}.$
∎
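On a finite state space, the feasibility condition of Lemma 1 reduces to a one-line check over the sources; a sketch (the numbers are illustrative):

```python
import numpy as np

def step_feasible(sources, safe_mask, eps):
    """Lemma 1: problem (2) is feasible iff some source puts probability
    mass at least 1 - eps on the safe set X_k."""
    mass = np.asarray(sources)[:, safe_mask].sum(axis=1)  # E_{pi^(i)}[1_{X_k}]
    return bool(mass.max() >= 1.0 - eps)

sources = np.array([[0.6, 0.3, 0.1],    # pi^(1)
                    [0.1, 0.1, 0.8]])   # pi^(2)
safe = np.array([True, True, False])    # X_k = first two states
assert step_feasible(sources, safe, eps=0.2)       # source 1 has mass 0.9
assert not step_feasible(sources, safe, eps=0.05)  # no source reaches 0.95
```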
#### Convexity
we are now ready to prove the following
###### Proposition 1
let Assumption 2 hold. Then, the problem in (2) is convex.
###### Proof:
Clearly, the second and third constraint in the problem are convex. For the
first constraint, we get
$\displaystyle\mathbb{E}_{\pi(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}\left[\mathds{1}_{\mathcal{X}_{k}}(\mathbf{x}_{k})\right]$
$\displaystyle=\int\pi(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\mathds{1}_{\mathcal{X}_{k}}(\mathbf{x}_{k})d\mathbf{x}_{k}$
$\displaystyle=\int_{\mathcal{X}_{k}}\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})d\mathbf{x}_{k}$
$\displaystyle=\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}\int_{\mathcal{X}_{k}}\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})d\mathbf{x}_{k},$
which is therefore convex in the decision variables.
Now, we show that the cost is also convex in these variables and we do so by
explicitly computing, for each $\mathbf{x}_{k-1}$, its Hessian, say
$\mathbf{H}(\mathbf{x}_{k-1})$. Specifically, after substituting the second
constraint of the problem in (2) into the cost and differentiating with respect
to the decision variables we get, for each $j\in\mathcal{S}$:
$\begin{split}&\frac{\partial
c_{k}}{\partial\alpha_{k}^{(j)}}:=\frac{\partial}{\partial\alpha_{k}^{(j)}}\int_{{\color[rgb]{0,0,0}\mathcal{S}(\pi)}}\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\left(\log\left(\frac{\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}{p(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}\right)-\bar{r}_{k}(\mathbf{x}_{k})\right)d\mathbf{x}_{k}\\\
&=\int_{{\color[rgb]{0,0,0}\mathcal{S}(\pi)}}\frac{\partial}{\partial\alpha_{k}^{(j)}}\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\left(\log\left(\frac{\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}{p(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}\right)-\bar{r}_{k}(\mathbf{x}_{k})\right)d\mathbf{x}_{k}\\\
&=\int_{{\color[rgb]{0,0,0}\mathcal{S}(\pi)}}\pi^{(j)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\Bigg{(}\Bigg{.}\log\left(\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\right)-\log\left(p(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\right)-\bar{r}_{k}(\mathbf{x}_{k})+1\Bigg{.}\Bigg{)}d\mathbf{x}_{k}.\\\
\end{split}$
The above chain of identities was obtained by swapping integration and
differentiation, leveraging the fact that the cost is smooth in the decision
variables. Similarly, we get
$\begin{split}\frac{\partial^{2}c_{k}}{\partial\alpha_{k}^{(j)^{2}}}&=\frac{\partial}{\partial\alpha_{k}^{(j)}}\int_{{\color[rgb]{0,0,0}\mathcal{S}(\pi)}}\pi^{(j)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\Bigg{(}\Bigg{.}\log\left(\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\right)-\log\left(p(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\right)-\bar{r}_{k}(\mathbf{x}_{k})+1\Bigg{.}\Bigg{)}d\mathbf{x}_{k}\\\
&={\color[rgb]{0,0,0}\int_{\mathcal{S}(\pi)}\frac{\pi^{(j)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})^{2}}{\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}d\mathbf{x}_{k}}\\\
&=:h_{{\color[rgb]{0,0,0}jj}}(\mathbf{x}_{k-1}),\end{split}$
and, for each $m\neq j$, $m\in\mathcal{S}$,
$\frac{\partial^{2}c_{k}}{\partial\alpha_{k}^{(j)}\partial\alpha_{k}^{(m)}}=\int_{{\color[rgb]{0,0,0}\mathcal{S}(\pi)}}\frac{\pi^{(j)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\pi^{(m)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}{\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}d\mathbf{x}_{k}=:h_{{\color[rgb]{0,0,0}j}m}(\mathbf{x}_{k-1}).$
Also, following Assumption 2, $\forall j,m\in\mathcal{S}$ we have that
$\int_{{\color[rgb]{0,0,0}\mathcal{S}(\pi)}}\left|\frac{\pi^{(j)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\pi^{(m)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}{\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}\right|d\mathbf{x}_{k}\leq\frac{M}{m}$
where we used the third constraint. That is, the above integrals are well
defined and thus we can conclude the proof by computing
$\mathbf{v}^{T}\mathbf{H}(\mathbf{x}_{k-1})\mathbf{v}$ for some non-zero
$\mathbf{v}\in\mathbb{R}^{S}$:
$\displaystyle\mathbf{v}^{T}\mathbf{H}(\mathbf{x}_{k-1})\mathbf{v}$
$\displaystyle=\sum_{j,m}v_{j}v_{m}h_{jm}(\mathbf{x}_{k-1})$
$\displaystyle=\sum_{j,m}v_{j}v_{m}\int_{{\color[rgb]{0,0,0}\mathcal{S}(\pi)}}a_{jm}(\mathbf{x}_{k},\mathbf{x}_{k-1})d\mathbf{x}_{k}$
$\displaystyle=\int_{{\color[rgb]{0,0,0}\mathcal{S}(\pi)}}\sum_{j,m}v_{j}v_{m}a_{jm}(\mathbf{x}_{k},\mathbf{x}_{k-1})d\mathbf{x}_{k},$
where the $a_{jm}$’s are the elements of the matrix
$A(\mathbf{x}_{k},\mathbf{x}_{k-1}):=\bar{\pi}(\mathbf{x}_{k},\mathbf{x}_{k-1})\left[\begin{array}[]{c}\pi^{(1)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\\\
\vdots\\\
\pi^{(S)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\end{array}\right]\left[\begin{array}[]{ccc}\pi^{(1)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})&\cdots&\pi^{(S)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\end{array}\right],$
with
$\bar{\pi}(\mathbf{x}_{k},\mathbf{x}_{k-1}):={1}/\left({\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}\right)$.
The matrix $A(\mathbf{x}_{k},\mathbf{x}_{k-1})$, being a non-negative multiple of a rank-one
outer product, is positive semi-definite for each $\mathbf{x}_{k}$, $\mathbf{x}_{k-1}$ and we can draw the desired conclusion. ∎
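The positive semi-definiteness argument can be sanity-checked numerically in the discrete case, where $\mathbf{H}(\mathbf{x}_{k-1})$ becomes a sum of scaled rank-one outer products; a sketch with illustrative random pfs:

```python
import numpy as np

rng = np.random.default_rng(2)
S, n = 3, 5
# Random discrete source pfs pi^(i)(. | x_{k-1}) on a shared n-point support
P = rng.random((S, n))
P = P / P.sum(axis=1, keepdims=True)
alpha = np.array([0.5, 0.3, 0.2])
mix = alpha @ P                      # sum_i alpha_i pi^(i), strictly positive

# h_jm = sum_x pi^(j)(x) pi^(m)(x) / mix(x): discrete analogue of the Hessian
H = (P / mix) @ P.T
# H is a sum over x of (1/mix(x)) p(x) p(x)^T, hence positive semi-definite
assert np.linalg.eigvalsh(H).min() >= -1e-12
```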
#### Optimality
we can now prove the following
###### Proposition 2
let Assumption 2 and Assumption 1 hold. Then, Algorithm 1 gives an optimal
solution for Problem 1.
###### Proof:
the chain rule for the KL divergence and the linearity of expectation imply
that the cost can be written as
$\mathcal{D}_{\text{KL}}\left(\pi_{0:T-1}\mid\mid
p_{0:T-1}\right)-\sum_{k=1}^{T-1}\mathbb{E}_{\pi(\mathbf{x}_{k-1})}\left[\tilde{r}_{k}(\mathbf{X}_{k-1})\right]+\mathbb{E}_{\pi(\mathbf{x}_{T-1})}\left[c_{T}(\pi(\mathbf{x}_{T}\mid\mathbf{x}_{T-1}))\right],$
(4)
where $c_{T}(\pi(\mathbf{x}_{T}\mid\mathbf{x}_{T-1}))$ is defined as in (3)
with $\bar{r}_{T}(\mathbf{x}_{T})$ given by Algorithm 1 – see lines $3$ and
$5$ and note that, at time step $T$,
$\bar{r}_{T}(\mathbf{x}_{T})={r}_{T}(\mathbf{x}_{T})$. To obtain the above
expression, the fact that $c_{T}(\pi(\mathbf{x}_{T}\mid\mathbf{x}_{T-1}))$
only depends on $\mathbf{x}_{T-1}$ was also used. Hence, Problem 1 can be
split into the sum of two sub-problems: a first problem over $k\in 0:T-1$ and
the second for $k=T$. For this last time step, the problem can be solved
independently of the others and is given by:
$\displaystyle\underset{\boldsymbol{\alpha}_{T}}{\text{ min }}$
$\displaystyle\
\mathbb{E}_{\pi(\mathbf{x}_{T-1})}[c_{T}(\pi(\mathbf{x}_{T}\mid\mathbf{x}_{T-1}))]$
(5) s.t. $\displaystyle\
\mathbb{E}_{\pi(\mathbf{x}_{T}\mid\mathbf{x}_{T-1})}\left[\mathds{1}_{\mathcal{X}_{T}}(\mathbf{x}_{T})\right]\geq
1-\epsilon_{T},$ $\displaystyle\
\pi(\mathbf{x}_{T}\mid\mathbf{x}_{T-1})=\sum_{i\in\mathcal{S}}\alpha_{T}^{(i)}\pi^{(i)}(\mathbf{x}_{T}\mid\mathbf{x}_{T-1}),$
$\displaystyle\sum_{i\in\mathcal{S}}\alpha_{T}^{(i)}=1,\ \
\alpha_{T}^{(i)}\in[0,1].$
Using linearity of the expectation and the fact that the decision variable is
independent of $\pi(\mathbf{x}_{T-1})$, we have that the minimizer of the
problem in (5) is the same as the problem in (2) with $k=T$. Following
Proposition 1, such a problem is convex and we denote its optimal solution as
$\boldsymbol{\alpha}^{\ast}_{T}(\mathbf{x}_{T-1})$ – see line $6$ of Algorithm
1 – and the optimal cost of the problem, which is bounded by Assumption 1, is
$c_{T}(\pi^{\ast}(\mathbf{x}_{T}\mid\mathbf{x}_{T-1}))$, where:
$\pi^{\ast}(\mathbf{x}_{T}\mid\mathbf{x}_{T-1})=\sum_{i\in\mathcal{S}}\boldsymbol{\alpha}^{(i),\ast}_{T}(\mathbf{x}_{T-1})\pi^{(i)}(\mathbf{x}_{T}\mid\mathbf{x}_{T-1}).$
This gives $\hat{r}_{T-1}(\mathbf{x}_{T-1})$ in Algorithm 1 (lines $7$–$8$),
thus yielding the steps of the backward recursion of Algorithm 1 at time
step $T$. Now, the minimum value of the problem in (5) is given by
$\mathbb{E}_{\pi(\mathbf{x}_{T-1})}\left[\hat{r}_{T-1}(\mathbf{X}_{T-1})\right]$.
Hence, the cost of Problem 1 becomes
$\mathcal{D}_{\text{KL}}\left(\pi_{0:T-1}\mid\mid
p_{0:T-1}\right)-\sum_{k=1}^{T-1}\mathbb{E}_{\pi(\mathbf{x}_{k-1})}\left[\tilde{r}_{k}(\mathbf{X}_{k-1})\right]+\mathbb{E}_{\pi(\mathbf{x}_{T-1})}\left[\hat{r}_{T-1}(\mathbf{X}_{T-1})\right].$
Then, following the same reasoning used to obtain (4) and by noticing that
$\mathbb{E}_{\pi(\mathbf{x}_{T-1})}\left[\hat{r}_{T-1}(\mathbf{X}_{T-1})\right]=\mathbb{E}_{\pi(\mathbf{x}_{T-2})}\left[\mathbb{E}_{\pi(\mathbf{x}_{T-1}\mid\mathbf{x}_{T-2})}\left[\hat{r}_{T-1}(\mathbf{X}_{T-1})\right]\right],$
we get:
$\mathcal{D}_{\text{KL}}\left(\pi_{0:T-2}\mid\mid
p_{0:T-2}\right)-\sum_{k=1}^{T-2}\mathbb{E}_{\pi(\mathbf{x}_{k-1})}\left[\tilde{r}_{k}(\mathbf{X}_{k-1})\right]+\mathbb{E}_{\pi(\mathbf{x}_{T-2})}\left[c_{T-1}(\pi(\mathbf{x}_{T-1}\mid\mathbf{x}_{T-2}))\right],$
where $c_{T-1}(\pi(\mathbf{x}_{T-1}\mid\mathbf{x}_{T-2}))$ is again given in
(3) with $\bar{r}_{T-1}(\mathbf{x}_{T-1})$ again defined as in Algorithm 1. By
iterating the arguments above, we find that at each time step Problem 1 can
always be split into the sum of two sub-problems, where the last sub-problem can
be solved independently of the previous ones. Moreover, the minimizer of this
last sub-problem is always the solution of a problem of the form
$\displaystyle\underset{\boldsymbol{\alpha}_{k}}{\text{min}}$
$\displaystyle\mathcal{D}_{\text{KL}}\left(\pi(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\mid\mid
p(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})\right)-\mathbb{E}_{\pi(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}\left[\bar{r}_{k}(\mathbf{X}_{k})\right]$
(6) s.t. $\displaystyle\
\mathbb{E}_{\pi(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})}\left[\mathds{1}_{\mathcal{X}_{k}}(\mathbf{x}_{k})\right]\geq
1-\epsilon_{k},$ $\displaystyle\
\pi(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})=\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1}),$
$\displaystyle\sum_{i\in\mathcal{S}}\alpha_{k}^{(i)}=1,\ \
\alpha_{k}^{(i)}\in[0,1],$
where
$\bar{r}_{k}(\mathbf{x}_{k}):=r_{k}(\mathbf{x}_{k})-\hat{r}_{k}(\mathbf{x}_{k})$,
with
$\hat{r}_{k}(\mathbf{x}_{k}):=c_{k+1}({\pi}^{\ast}(\mathbf{x}_{k+1}\mid\mathbf{x}_{k}))$
and
$\pi^{\ast}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})=\sum_{i\in\mathcal{S}}\boldsymbol{\alpha}^{(i),\ast}_{k}(\mathbf{x}_{k-1})\pi^{(i)}(\mathbf{x}_{k}\mid\mathbf{x}_{k-1})$.
This yields the desired conclusions. ∎
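For a finite next-state space, one backward step of the recursion – i.e., solving problem (6) for a fixed $\mathbf{x}_{k-1}$ – can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the source pfs `pi_sources`, the reference pf `p_ref`, the reward `r_bar`, and the safe-set indicator `safe` are assumed to be given as arrays over the finite set of next states.

```python
import numpy as np
from scipy.optimize import minimize

def backward_step(pi_sources, p_ref, r_bar, safe, eps):
    """Solve problem (6) for a fixed x_{k-1}: minimize
    D_KL(pi || p) - E_pi[r_bar] over mixture weights alpha,
    subject to the chance constraint E_pi[1_X(x)] >= 1 - eps."""
    pi_sources = np.atleast_2d(pi_sources)  # shape (S, n_states)
    S = pi_sources.shape[0]

    def mixture(alpha):
        return np.clip(alpha @ pi_sources, 1e-12, None)

    def cost(alpha):
        pi = mixture(alpha)
        return float(np.sum(pi * (np.log(pi / p_ref) - r_bar)))

    constraints = [
        {"type": "eq", "fun": lambda a: np.sum(a) - 1.0},            # simplex
        {"type": "ineq", "fun": lambda a: mixture(a) @ safe - (1.0 - eps)},
    ]
    res = minimize(cost, np.full(S, 1.0 / S), bounds=[(0.0, 1.0)] * S,
                   constraints=constraints)
    return res.x, res.fun  # alpha*_k(x_{k-1}) and c_k(pi*)
```

The returned optimal cost is what the backward recursion propagates as $\hat{r}_{k-1}$.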
###### Remark 5
The results from [3, 4] solve an approximate version of Problem 1 when there
are no box constraints. Hence, even in this special case, the results from [3,
4] do not lead to the optimal solution found with Algorithm 1. Specifically,
the approximate solution from [3, 4] corresponds to the optimal solution of a
problem with additional binary constraints on the decision variables. As a
result, the algorithm from [3, 4] searches for solutions in a feasibility domain
that is contained in the feasibility domain of Problem 1. Hence, the solutions
found in [3, 4] cannot achieve a better cost than the ones obtained via
Algorithm 1.
## V Designing an Intelligent Parking System
We now use Algorithm 1 to design an intelligent parking system for connected
cars and validate the results via in-silico and in-vivo experiments. For the
latter set of experiments, Algorithm 1 is deployed on a real car and
validation is performed via a hardware-in-the-loop (HiL) platform inspired
by [19]. Before reporting the results, we describe the validation scenarios
and the experimental set-up. Code, maps and parameters with instructions to
replicate the simulations are given at https://tinyurl.com/3ep4pknh.
Figure 1: Campus map. The magnified areas show the obstructed road link (in
blue) and the links used within the validations. Colors online.
### V-A Validation Scenarios and Experimental Set-up
We consider the problem of routing vehicles in a given geographic area to find
parking. In all experiments we consider a morning rush scenario at the
University of Salerno campus (see Figure 1). Specifically, cars arrive at the
campus through a highway exit and, from there, users seek to park in one of two
parking locations: Biblioteca and Terminal.
In this context, vehicles are agents equipped with Algorithm 1. The set of
road links within the campus is $\mathcal{X}$ and time steps are associated to
the time instants when the vehicle changes road link. The state of the agent,
$x_{k}$, is the road link occupied by the vehicle at time step $k$. Given this
set-up, at each $k$ Algorithm 1 outputs the turning probability for the car
given the current car link, $\pi^{\ast}({x}_{k}\mid{x}_{k-1})$. The next
direction for the car is then obtained by sampling from this pf. Agents have
access to a set of sources, each providing different routes. As discussed in
[20], sources might be third-party navigation services or routes collected
from other cars/users participating in some sharing service. Agents wish to
track their target/desired behavior (driving them to the preferred parking –
Terminal in our experiments) and the reward depends on the actual road
conditions within the campus. Links adjacent to a parking lot are assigned a
constant reward of: (i) $3.8$ if the parking has available spaces; (ii) $0$ when
it becomes full. Unparked cars already at full parking lots are assigned a
target behavior leading them to another parking. In normal road conditions,
the reward for the other links is $0$ and becomes $-20$ when there is an
obstruction. In our experiments, the reward was selected heuristically so that
it would be: (i) sufficiently penalizing for links affected by obstruction;
(ii) encouraging cars to drive towards parking lots with available spaces. In
the first scenario (Scenario $1$) there are no box constraints: this is done
to benchmark Algorithm 1 against [3, 4]. To this aim, we use the campus map from
[20], in which [3, 4] were thoroughly validated via simulations. Then, we show
that by introducing box constraints Algorithm 1 can effectively regulate
access of vehicles through selected road links. This is Scenario $2$ and we
considered three situations: (A) the road towards the Biblioteca parking lot
is forbidden. To account for this, we set in Algorithm 1
$\mathcal{X}_{k}=\mathcal{X}\setminus\{l_{2}\}$, where the link ${l_{2}}$ is
shown in Figure 1, and $\epsilon_{k}=0.027$; (B) the set $\mathcal{X}_{k}$ as
before but now $\epsilon_{k}=0.5$; (C) the road towards the Terminal parking
lot is forbidden and in this case we set
$\mathcal{X}_{k}=\mathcal{X}\setminus\{l_{1}\}$, $\epsilon_{k}=0.027$ (see
Figure 1 for link ${l_{1}}$). For this last scenario, Algorithm 1 is validated
both in-silico and in-vivo. Next, we describe the simulation and HiL
experimental set-ups.
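The reward assignment described above can be sketched as a simple lookup; the function and argument names are ours, for illustration only.

```python
def link_reward(link, parking_links, has_space, obstructed_links):
    """Reward used in the experiments: 3.8 for a link adjacent to a
    parking lot with free spaces, 0 once the lot is full, -20 for an
    obstructed link, and 0 otherwise."""
    if link in parking_links:
        return 3.8 if has_space[link] else 0.0
    if link in obstructed_links:
        return -20.0
    return 0.0
```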
#### Simulation set-up
simulations were performed in SUMO [21]; see also [20] for a description of
the pipeline to import maps and traffic demands. In our simulations, each
parking lot can accommodate up to $50$ cars and we generated the traffic
demand so that $100$ cars would arrive on campus at $5$-second intervals. All
the cars seek to park and, by doing so, the parking capacity is saturated.
Given this setting, we simulated a road obstruction on the main link (in blue
in Figure 1) from the highway exit to the campus entrance. This was done by
restricting, in SUMO, the speed of the link to less than one kilometer per
hour. Information on the cars in the simulation is contained in the stand-alone
file agent.npy, while the pfs associated with the sources (described
below) are all stored in behaviors.npy.
#### HiL set-up
the platform embeds a real vehicle into a SUMO simulation. By doing so,
performance of the algorithm deployed on a real car can be assessed under
arbitrary synthetic traffic conditions generated via SUMO. The high-level
architecture of the platform is shown in Figure 2. The platform creates an
avatar of the real car in the SUMO simulation. Namely, as shown in Figure 2,
the position of the real car is embedded in SUMO by using a standard
smartphone to collect its GPS coordinates. These coordinates are then sent via
bluetooth to a computer, also hosted on the car in the current implementation.
The connection is established via an off-the-shelf app, which writes the
coordinates in an rfcomm file. By using the pySerial library, an interface was
developed to read the data in Python. Here, a script leveraging
pynmeaGPS was designed to translate the NMEA-formatted data into longitude/latitude
coordinates. With this data format, a Python script was created to place the
avatar of the real car in the position given by the coordinates. A GUI is also
included to highlight the trajectory of the real car on the map and an off-
the-shelf text-to-speech routine is used to provide audio feedback to the
driver on the vehicle.
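The coordinate-translation step can be illustrated with a stand-alone sketch (the actual implementation relies on pynmeaGPS): NMEA encodes latitude/longitude fields as (d)ddmm.mmmm plus a hemisphere letter, which must be converted to signed decimal degrees before placing the avatar.

```python
def nmea_to_decimal(coord: str, hemisphere: str) -> float:
    """Convert an NMEA latitude/longitude field, encoded as (d)ddmm.mmmm,
    into signed decimal degrees (negative for the S/W hemispheres)."""
    head, frac = coord.split(".")
    degrees = int(head[:-2])                    # leading digits are degrees
    minutes = float(head[-2:] + "." + frac)     # last two digits plus fraction
    value = degrees + minutes / 60.0
    return -value if hemisphere in ("S", "W") else value
```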
Figure 2: HiL functional architecture.
### V-B In-car Implementation of the Algorithm
For both the in-silico and in-vivo experiments, Algorithm 1 was implemented in
Python as a stand-alone class so that each car equipped with the algorithm
could function as a stand-alone agent. The class has methods accepting all the
input parameters of Algorithm 1 and providing as output the car behavior
computed by the algorithm. The optimization problems within the algorithm loop
were solved via the Python library scipy.optimize. Additionally, the class
also implements methods to compute the cost and to support receding horizon
implementations of Algorithm 1. In our experiments, we used this receding
horizon implementation: the width of the receding horizon window was $T=5$, and
every time the car entered a new link/state, computations were triggered.
The pfs from the sources in our experiments were such that, at each $k$,
feasibility of the problem was ensured (see Lemma 1). Following [20], we also
implemented a utility function that restricts the calculations of the agent
to the road links that can be reached in $T$ time steps (rather than through
the whole state space/map). With this feature, in our experiments the
algorithm took on average approximately half a second to output a behavior,
less than the typical time taken to drive through a road link. (This duration
was measured between the moment when the GUI shows the car merging onto a new
link and the moment when new directions are displayed; the simulation was run
on a modern laptop.) Finally, the pfs of the sources were obtained via built-in
routing functions in SUMO and we added noise to the routes so that Assumption
2 would be fulfilled (for each road link, $\mathcal{S}(\pi)$ is the set of
outgoing links). See our github for the details.
### V-C Experimental Results
#### Simulation results
first, we benchmarked the performance obtained by Algorithm 1 against that of
the algorithm in [3, 4], termed the crowdsourcing algorithm in what
follows. To this aim, we considered Scenario $1$ and performed two sets of
$10$ simulations. In the first set of experiments, Algorithm 1 was used to
determine the behavior of cars on campus (note that Assumption 1 is
fulfilled). In the second set of simulations, the cars instead used the
crowdsourcing algorithm. Across the simulations, we recorded the number of cars
that the algorithms were able to park. The results are illustrated in Figure 3
(top panel). The figure shows that the crowdsourcing algorithm was not able to
park all the cars within the simulation. This was instead achieved by
Algorithm 1, which outperformed the algorithm from [3, 4]. To further quantify
the performance, we also computed the average time spent by a car looking for
a parking space after it enters the simulation (ATTP: average time-to-
parking). Across the simulations, the ATTP for the algorithm in [3] was
$224.74\pm 19.67$, while for Algorithm 1 it was $151.32\pm 30.59$ (the first
quantities are means, the second standard deviations). That is,
Algorithm 1 yielded an average improvement of $32.7\%$ in the ATTP. Then, we
simulated the three cases of Scenario $2$ to verify that Algorithm 1 can
effectively regulate access through specific links. The constraints for the
three cases of Scenario $2$ were given as an input to the algorithm and in
Figure 3 (bottom panel) the optimal solution $\pi^{\ast}({x}_{k}\mid
x_{k-1}=l_{r})$ is shown. The figure shows that the optimal solution indeed
fulfills the constraints (the road link $l_{r}$ is shown in Figure 1).
Figure 3: Top: unparked cars over time for crowdsourcing and Algorithm 1.
Solid lines are means across the simulations, shaded areas are confidence
intervals (one standard deviation). Bottom: $\pi^{\ast}({x}_{k}\mid
x_{k-1}=l_{r})$ for the three cases of Scenario $2$. The pfs satisfy the
constraints. Link definitions in Figure 1.
#### HiL results
we deployed Algorithm 1 on a real vehicle using the HiL platform and validated
its effectiveness in Scenario $2$ (C): the target behavior of the agent would
make the real car reach the Terminal parking, but this route is forbidden. What
we observed in the experiment was that, once the car entered the campus,
it was re-routed towards the Biblioteca parking. The re-routing was an
effect of Algorithm 1 computing the rightmost pf in Figure 3 (bottom panel). A
video of the HiL experiment is available on our github. The video shows that
the algorithm is suitable for real car operation: it would run smoothly during
the drive, providing feedback to the driver on the vehicle. Figure 4 shows the
car’s route recorded during the experiment.
Figure 4: Route of the real vehicle. The continuous line shows the GPS
position during the HiL experiment (map from OpenStreetMaps).
## VI Conclusions
We considered the problem of designing agents able to compute optimal
decisions by re-using data from multiple sources to solve tasks involving: (i)
tracking a desired behavior while minimizing an agent-specific cost; (ii)
satisfying certain safety constraints. After formulating the control problem,
we showed that this is convex under a mild condition and computed the optimal
solution. We turned the results into an algorithm and used it to design an
intelligent parking system. We evaluated the algorithm via in-silico and
in-vivo experiments with real vehicles/drivers. All experiments confirmed the
effectiveness of the algorithm and its suitability for in-car operation.
Besides considering the use of other divergences in the cost and deploying our
results in more complex urban scenarios that would use data from pedestrians
and sensors on-board vehicles, our future research will involve devising
mechanisms for the composition of policies for the tasks with actuation
constraints in [22].
## References
* [1] B. M. Lake _et al._ , “Human-level concept learning through probabilistic program induction,” _Science_ , vol. 350, no. 6266, pp. 1332–1338, 2015.
* [2] E. Crisostomi _et al._ , _Analytics for the sharing economy: Mathematics, Engineering and Business perspectives_. Springer, 2020.
* [3] G. Russo, “On the crowdsourcing of behaviors for autonomous agents,” _IEEE Cont. Sys. Lett._ , vol. 5, pp. 1321–1326, 2020.
* [4] É. Garrabé and G. Russo, “On the design of autonomous agents from multiple data sources,” _IEEE Cont. Sys. Lett._ , vol. 6, pp. 698–703, 2021.
* [5] L. Li, C. De Persis, P. Tesi, and N. Monshizadeh, “Data-based transfer stabilization in linear systems,” 2022. [Online]. Available: https://arxiv.org/abs/2211.05536
* [6] J. Coulson, J. Lygeros, and F. Dörfler, “Data-enabled predictive control: In the shallows of the DeePC,” in _European Control Conference_ , 2019, pp. 307–312.
* [7] H. J. van Waarde, J. Eising, H. L. Trentelman, and M. K. Camlibel, “Data informativity: a new perspective on data-driven analysis and control,” _IEEE Trans. Automatic Control_ , vol. 65, pp. 4753–4768, 2020.
* [8] C. De Persis and P. Tesi, “Formulas for data-driven control: Stabilization, optimality, and robustness,” _IEEE Trans. Automatic Control_ , vol. 65, pp. 909–924, 2020.
* [9] F. Celi and F. Pasqualetti, “Data-driven meets geometric control: Zero dynamics, subspace stabilization, and malicious attacks,” _IEEE Cont. Sys. Lett._ , vol. 6, pp. 2569–2574, 2022.
* [10] U. Rosolia and F. Borrelli, “Learning model predictive control for iterative tasks. A data-driven control framework,” _IEEE Trans. Automatic Control_ , vol. 63, pp. 1883–1896, 2018.
* [11] K. P. Wabersich and M. N. Zeilinger, “Bayesian model predictive control: Efficient model exploration and regret bounds using posterior sampling,” ser. Proc. of ML Research, vol. 120, 2020, pp. 455–464.
* [12] J. Berberich, J. Kohler, M. A. Muller, and F. Allgower, “Data-driven model predictive control with stability and robustness guarantees,” _IEEE Trans. Aut. Contr._ , vol. 66, pp. 1702–1707, 2021.
* [13] G. Baggio, V. Katewa, and F. Pasqualetti, “Data-driven minimum-energy controls for linear systems,” _IEEE Cont. Sys. Lett._ , vol. 3, pp. 589–594, 2019.
* [14] É. Garrabé and G. Russo, “Probabilistic design of optimal sequential decision-making algorithms in learning and control,” _Annual Reviews in Control_ , vol. 54, pp. 81–102, 2022.
* [15] N. Cammardella, A. Busic, Y. Ji, and S. P. Meyn, “Kullback-Leibler-Quadratic optimal control of flexible power demand,” in _IEEE 58th Conference on Decision and Control_ , 2019, pp. 4195–4201.
* [16] N. Cesa-Bianchi and G. Lugosi, _Prediction, learning, and games._ Cambridge University Press, 2006.
* [17] M. Cutler, T. J. Walsh, and J. P. How, “Real-world reinforcement learning via multifidelity simulators,” _IEEE Trans. on Robotics_ , vol. 31, pp. 655–671, 2015.
* [18] V. B. Mountcastle, “The columnar organization of the neocortex.” _Brain_ , vol. 120, pp. 701–722, Apr 1997.
* [19] W. Griggs _et al._ , _A Vehicle-in-the-Loop Emulation Platform for Demonstrating Intelligent Transportation Systems_. Cham: Springer International Publishing, 2019, pp. 133–154.
* [20] É. Garrabé and G. Russo, “CRAWLING: a Crowdsourcing Algorithm on Wheels for Smart Parking,” 2022, preprint submitted to Scientific Reports. [Online]. Available: https://arxiv.org/abs/2212.02467
* [21] P. A. Lopez _et al._ , “Microscopic traffic simulation using SUMO,” in _21st IEEE International Conference on Intelligent Transportation Systems_ , 2018, pp. 2575–2582.
* [22] D. Gagliardi and G. Russo, “On a probabilistic approach to synthesize control policies from example datasets,” _Automatica_ , vol. 137, p. 110121, 2022.
## Appendix
We report here an investigation of how the time taken by Algorithm 1 changes
as a function of the number of sources, $S$, and time horizon $T$. This time
is a crucial aspect to investigate whether the approach we propose would scale
to more complex urban scenarios than the one presented in Section V, which
would e.g., include more parking locations (note that these are seen as links
by Algorithm 1) and more complex road networks together with more sources that
the agent could use to determine its optimal behavior. The time Algorithm 1
takes to output a behavior depends on the number of available sources, the
time horizon $T$, and the number of links accessible to the car within the time
horizon.
To investigate Algorithm 1’s computation time, we considered the same
implementation as in Section V-B and ran the algorithm by varying the receding
horizon window between $0$ and $5$ time steps, and the number of sources
available to the agent between $1$ and $6$. For this, additional sources were
taken from [20], where more sources were used to implement the algorithm from
[4]. For each pair of these parameters, we measured the time taken by
Algorithm 1 to output a behavior, by running the algorithm over each link in
the network on a standard computer and taking the average of such times. This
gives a fair estimate of the average runtime, as the number of states
considered depends on the density of the connections in the neighborhood of
each link.
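A sketch of this measurement loop follows; the names are illustrative, with `compute_behavior` standing in for one run of Algorithm 1 on a link.

```python
import time

def average_runtime(links, compute_behavior):
    """Average wall-clock time of one behavior computation,
    taken over all links in the network."""
    elapsed = []
    for link in links:
        t0 = time.perf_counter()
        compute_behavior(link)
        elapsed.append(time.perf_counter() - t0)
    return sum(elapsed) / len(elapsed)
```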
The results of this numerical investigation are shown in Figure 5. The figure
shows that the highest computation time is about one second, which appears to
be suitable for our reference application.
Figure 5: Computation time as a function of time horizon and number of sources
# Intersection properties of finite disk collections
Jesús F. Espinoza<EMAIL_ADDRESS>and Cynthia G. Esquer-
Pérez<EMAIL_ADDRESS>Departamento de Matemáticas, Universidad de
Sonora, México
###### Abstract.
In this work we study the intersection properties of a finite disk system in
Euclidean space. We accomplish this by utilizing subsets of spheres of
varying dimensions and analyzing specific points within them, referred to as
poles. Additionally, we introduce two applications: estimating the common
scale factor for the radii that makes the rescaled disks intersect in a
single point (this is the Čech scale), and constructing the minimal
axis-aligned bounding box (AABB) that encloses the intersection of all disks in the
system.
## 1\. Introduction
One of the new techniques developed for the analysis of large clusters of
information, known as Big Data, is Topological Data Analysis (TDA). In TDA,
simplicial complexes associated with the data are constructed. These
structures include the Vietoris-Rips complex, the Čech complex, and the
piecewise linear lower star complex, among others. Of special interest to us
is the generalized Čech complex structure. Although the standard Čech complex
is formed by intersecting a collection of disks with a fixed radius, the
generalized version allows varying radii. This flexibility enables us to
highlight specific data points by assigning them larger and/or more rapidly
expanding balls, while de-emphasizing others by
using smaller and/or slower-growing balls. This approach proves valuable for
handling noisy data sets, offering an alternative to discarding data that may
not meet a specific significance threshold [1].
Understanding the patterns of intersections and the timing of intersections
among a set of disks in $\mathbb{R}^{d}$, each with potentially different
radii, is a fundamental problem. This leads to the exploration of the
generalized Čech complex structure, which captures the intersection
information of these disks, regardless of their radii. Rescaling the radii by
the same factor, we obtain a filtered generalized Čech complex, where the
associated simplicial complexes evolve as the scale parameter varies. In
particular, in [4], algorithms are provided to calculate the generalized Čech
complex in $\mathbb{R}^{2}$, and [3] presents an algorithm to determine the
Čech scale for a collection of disks in the plane.
To establish the necessary foundation for our study, Section 2 introduces
crucial concepts and notation used throughout the article, and focuses on
analyzing the intersection of a disk system in $\mathbb{R}^{d}$. We
start by investigating the intersection of two disks in Subsection 2.1 and
then expand our analysis to a system of $m$ disks in Subsection 2.2. By
applying Helly’s Theorem, we prove that it is sufficient to examine the
intersection of all subsystems consisting of $d+1$ disks in order to determine
if the system has a nonempty intersection.
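This reduction can be sketched numerically. The sketch below is ours, not the algorithm developed later via poles: it tests each $(d+1)$-disk subsystem by minimizing the largest signed distance $\max_i(\|x-c_i\|-r_i)$, which attains a nonpositive value exactly when the disks share a point.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

def subsystem_intersects(centers, radii, tol=1e-9):
    """Closed disks have a common point iff the minimum over x of
    max_i (||x - c_i|| - r_i) is <= 0."""
    centers = np.asarray(centers, dtype=float)
    worst = lambda x: max(np.linalg.norm(x - c) - r
                          for c, r in zip(centers, radii))
    res = minimize(worst, centers.mean(axis=0), method="Nelder-Mead")
    return res.fun <= tol

def system_intersects(centers, radii):
    """By Helly's theorem in R^d, the whole system intersects iff
    every subsystem of d + 1 disks does."""
    d = len(centers[0])
    if len(centers) <= d + 1:
        return subsystem_intersects(centers, radii)
    return all(subsystem_intersects([centers[i] for i in idx],
                                    [radii[i] for i in idx])
               for idx in combinations(range(len(centers)), d + 1))
```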
In Section 3, we define Vietoris-Rips systems and Čech systems, and present
results regarding the Rips scale and the Čech scale, as well as
their connections. In Subsection 3.1, we present an algorithm that can
determine whether the intersection of the system is empty or non-empty. This
is achieved by exclusively computing the poles of subsystems of disks (or
spheres). Finally, in Subsection 3.2, we introduce the algorithm that computes
an approximation to the Čech scale using the numerical bisection method.
Additionally, in Section 4 we incorporate the concept of a minimal
axis-aligned bounding box (AABB) into our methodology. An AABB is a rectangular
parallelepiped whose faces are perpendicular to the basis vectors. These
bounding boxes frequently arise in spatial subdivision problems, such as ray
tracing [5] and collision detection [2]. In this paper, we study AABBs to
enclose the intersection of a finite collection of disks. This approach proves
valuable for discerning whether the collection intersects at a singular point
or not. In this section, we also provide an algorithm for constructing the
AABB of a disk system.
## 2\. Intersection properties of sphere systems
Throughout this work, we will refer to a $d$-disk system $M$, or simply a disk
system, as a finite collection of closed disks in $\mathbb{R}^{d}$ with
positive and not necessarily equal radii, i.e.,
$M=\\{D_{i}(c_{i};r_{i})\subset\mathbb{R}^{d}\mid
c_{i}\in\mathbb{R}^{d},r_{i}>0,1\leq i\leq m<\infty\\}.$
Moreover, in order to study the intersection properties of a disk system $M$
with the approach addressed in Sections 4 and 3 of this work, we will conduct
a study in this section of the intersection properties of the spheres
corresponding to the boundaries of each disk in $M$, which we call a sphere
system and denote by $\partial M$,
$\partial M=\\{\partial D_{i}\subset\mathbb{R}^{d}\mid D_{i}\in M\\},$
where $\partial$ denotes the topological boundary operator.
Following the notation in [6], we introduce the following generalization of
the sphere.
###### Definition 1.
An $i$-sphere in $\mathbb{R}^{d}$ is the intersection of a sphere with an
affine subspace of dimension $i$.
Of course, the notions of a sphere (as a $(d-1)$-dimensional surface) and a
$d$-sphere in $\mathbb{R}^{d}$ agree. However, an $i$-sphere in
$\mathbb{R}^{d}$ can also be viewed as the intersection of $d$-spheres. For
instance, the intersection of two spheres typically occurs in a hyperplane,
forming a $(d-1)$-sphere in $\mathbb{R}^{d}$. When another $d$-sphere
intersects this configuration, the result may be a $(d-1)$-sphere, a
$(d-2)$-sphere, a $0$-sphere (a single point), or it might even be empty, all
within the same hyperplane. For a disk system $M=\\{D_{i}(c_{i};r_{i})\\}$
composed of $m$ disks, where $\\{c_{1},\dots,c_{m}\\}$ is a set in general
position in $\mathbb{R}^{d}$, the maximum dimension of the affine subspace
associated with the $i$-sphere, obtained from the intersection of all the
spheres in $\partial M$, is at most $d-m+1$, or equivalently, $i=d-m+1$. This
conclusion is drawn from [6, Theorem 2.1] and the fact that the affine hull of
$\\{c_{1},\dots,c_{m}\\}$ is of dimension $m-1$. Consequently, the following
result is proven.
###### Lemma 2.
Let $M=\\{D_{1}(c_{1};r_{1}),\ldots,D_{m}(c_{m};r_{m})\\}$ be a disk system such
that $\\{c_{1},\dots,c_{m}\\}$ is a set in general position in
$\mathbb{R}^{d}$. Then, the possibilities for the set $\cap_{D_{i}\in
M}\partial D_{i}$ are:
1. (1)
the empty set;
2. (2)
a single point;
3. (3)
a $(d-m+1)$-sphere.
Remarkable points in $i$-spheres that will play a key role in the rest of the
article are the poles. Let $\pi_{i}:\mathbb{R}^{d}\longrightarrow\mathbb{R}$
be the canonical projection on the $i$-th factor for $i=1,...,d$, and let
$\\{e_{1},e_{2},...,e_{d}\\}$ be the standard basis of $\mathbb{R}^{d}$.
###### Definition 3.
Let $e_{q}$ be the $q$-th vector of the canonical base of $\mathbb{R}^{d}$. An
$e_{q}$-north (south) pole of an $i$-sphere $S$ in $\mathbb{R}^{d}$ is a point
on $S$ whose projection on the $q$-th coordinate is maximum (minimum). In
other words, $x\in S$ is the $e_{q}$-north pole if $\pi_{q}(y)\leq\pi_{q}(x)$
for all $y\in S-\\{x\\}$, where $\pi_{q}$ represents the projection onto the
$q$-th coordinate.
We denote the $e_{q}$-north pole of $S$ by $s_{q}^{+}$ and the $e_{q}$-south
pole by $s_{q}^{-}$.
An $i$-sphere can have a single $e_{q}$-pole (north or south) or an infinite
number of them, which occurs when a normal vector to the affine space
containing the $i$-sphere is aligned with the vector $e_{q}$. We are
interested in finding the $e_{q}$-poles of $(d-m+1)$-spheres originating from
disk systems $M=\\{D_{1}(c_{1};r_{1}),\ldots,D_{m}(c_{m};r_{m})\\}$, by taking
the intersection $\cap_{j=1}^{m}\partial D_{j}$. Such $(d-m+1)$-spheres will
be denoted by $S_{M}(c;r)$, to emphasize the disk system $M$, as well as its
center and radius.
###### Lemma 4.
Let $M=\\{D_{1},\ldots,D_{m}\\}$ be a $d$-disk system such that
$\bigcap_{j=1}^{m}D_{j}\neq\emptyset$, and let $p$ be a point in
$\bigcap_{j=1}^{m}D_{j}$ such that $\pi_{q}(p)\leq\pi_{q}(x)$ (resp.
$\pi_{q}(p)\geq\pi_{q}(x)$) for every $x$ in $\bigcap_{j=1}^{m}D_{j}$. Then,
there exists an $i$-sphere $S=\partial D_{j_{1}}\cap\cdots\cap\partial
D_{j_{i}}$ such that $p$ is in $S$ and $p$ is the $e_{q}$-south pole (resp.
$e_{q}$-north pole) of $S$.
###### Proof.
Since $\cap_{j=1}^{m}D_{j}\neq\emptyset$, we have
$\partial(\cap_{j=1}^{m}D_{j})\neq\emptyset$ and, due to the
closedness of the sets $D_{j}$ for $j=1,\ldots,m$,
$\partial(\cap_{j=1}^{m}D_{j})\subset\cap_{j=1}^{m}D_{j}$; moreover,
$p\in\partial(\cap_{j=1}^{m}D_{j})$.
On the other hand, since
$\partial(\cap_{j=1}^{m}D_{j})\subset\cup_{j=1}^{m}\partial D_{j}$, there
exist indices $j_{1},\ldots,j_{i}$ such that $p\in\partial D_{j_{r}}$ for each
$r=1,\ldots,i$; let
$\Lambda(p)=\\{j_{1},\ldots,j_{i}\\}\subseteq\\{1,\ldots,m\\}$ be a maximal
subset of indices such that $p\in\partial D_{j}$ if and only if $j\in\Lambda(p)$. We
claim that $p$ is the $e_{q}$-south pole of
$S:=\cap_{r=1}^{i}\partial D_{j_{r}}$.
In effect, let $V_{p}\subset\mathbb{R}^{d}$ be an open neighborhood of $p$
sufficiently small such that:
1. (1)
Every $x\in V_{p}\cap\partial(\cap_{j=1}^{m}D_{j})$ has as maximal set of
indices a proper subset of $\Lambda(p)$,
2. (2)
$S\cap V_{p}\subset\partial(\cap_{j=1}^{m}D_{j})$.
The first condition can be guaranteed by the finiteness of the disk system
$M$, and the second condition is a consequence of the maximality of the set
$\Lambda(p)$. Therefore, $\pi_{q}(p)\leq\pi_{q}(x)$ for every $x\in S\cap
V_{p}$, which is equivalent to the fact that $\pi_{q}(p)\leq\pi_{q}(x)$ for
every $x\in S$, in the case of $i$-spheres.
∎
### 2.1. Sphere systems with two spheres
In the following two lemmas we provide the computations to determine the
center, radius and poles for a $(d-1)$-sphere given by the intersection of two
disks in $\mathbb{R}^{d}$.
###### Lemma 5.
Let $M=\\{D_{1}(c_{1};r_{1}),D_{2}(c_{2};r_{2})\\}$ be a disk system with two
$d$-disks such that $\partial D_{1}\cap\partial D_{2}$ is a $(d-1)$-sphere
$S=S_{M}(c;r)$ with center $c$ and radius $r$. Then,
$c=\frac{1}{2}\left(1+\frac{r_{2}^{2}-r_{1}^{2}}{\|c_{2}-c_{1}\|^{2}}\right)c_{1}+\frac{1}{2}\left(1+\frac{r_{1}^{2}-r_{2}^{2}}{\|c_{2}-c_{1}\|^{2}}\right)c_{2}$
and
$r=\frac{2\sqrt{s(s-\|c_{2}-c_{1}\|)(s-r_{1})(s-r_{2})}}{\|c_{2}-c_{1}\|}$
where $s=\frac{1}{2}(\|c_{2}-c_{1}\|+r_{1}+r_{2})$.
###### Proof.
Let $\Pi$ be the hyperplane containing the $(d-1)$-sphere $S$, which is
defined by the equation:
(1)
$\sum_{i=1}^{d}(k_{i}-h_{i})x_{i}-\frac{1}{2}\sum_{i=1}^{d}(k_{i}^{2}-h_{i}^{2})=\frac{r_{1}^{2}-r_{2}^{2}}{2},$
where $c_{1}=(h_{1},\ldots,h_{d})$ and $c_{2}=(k_{1},\ldots,k_{d})$. Then the
normal vector of the hyperplane $\Pi$ is given by
$N:=c_{2}-c_{1}=(k_{1}-h_{1},\ldots,k_{d}-h_{d})$, and the center $c$ of $S$
is determined by the intersection point of the hyperplane $\Pi$ with the
perpendicular line that passes through the center $c_{1}$ of $D_{1}$. This
line can be parameterized as $\gamma:t\mapsto
c_{1}+tN=(x_{1}(t),\ldots,x_{d}(t))$, such that $\gamma(0)=c_{1}$ and
$\gamma(1)=c_{2}$. We can compute the intersection point $c=\gamma(t_{*})$ of
$\Pi$ and $\gamma([0,1])$, for any $t_{*}\in[0,1]$, by substituting it in (1),
$\sum_{i=1}^{d}(k_{i}-h_{i})(h_{i}+t_{*}(k_{i}-h_{i}))=\frac{1}{2}\left(\sum_{i=1}^{d}(k_{i}^{2}-h_{i}^{2})+(r_{1}^{2}-r_{2}^{2})\right).$
And solving for $t_{*}$, we obtain that
$t_{*}=\frac{1}{2}+\frac{r_{1}^{2}-r_{2}^{2}}{2\|c_{2}-c_{1}\|^{2}}$. Hence,
the center of $S$ is given by:
$\displaystyle
c=c_{1}+t_{*}N=\frac{1}{2}\left(1+\frac{r_{2}^{2}-r_{1}^{2}}{\|c_{2}-c_{1}\|^{2}}\right)c_{1}+\frac{1}{2}\left(1+\frac{r_{1}^{2}-r_{2}^{2}}{\|c_{2}-c_{1}\|^{2}}\right)c_{2}$
Next, we will compute the radius $r$ of $S$. This radius can be determined as
the height $r$ of the triangle with base $\|c_{2}-c_{1}\|$ formed by the
points $c_{1}$, $c_{2}$, and a point on $S$. Thus, by Heron’s formula we have
$r=\frac{2\sqrt{s(s-\|c_{2}-c_{1}\|)(s-r_{1})(s-r_{2})}}{\|c_{2}-c_{1}\|},$
where $s=\frac{1}{2}(\|c_{2}-c_{1}\|+r_{1}+r_{2})$ is the semi-perimeter. ∎
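For concreteness, the two formulas of Lemma 5 can be sketched as a short routine (plain Python; it assumes the boundaries genuinely intersect in a sphere, i.e. $|r_{1}-r_{2}|<\|c_{2}-c_{1}\|<r_{1}+r_{2}$):

```python
import math

def two_disk_sphere(c1, r1, c2, r2):
    """Center and radius of S = bd(D1) ∩ bd(D2), per Lemma 5.

    Assumes the boundaries intersect in a sphere, i.e.
    |r1 - r2| < ||c2 - c1|| < r1 + r2.
    """
    d2 = sum((b - a) ** 2 for a, b in zip(c1, c2))      # ||c2 - c1||^2
    dist = math.sqrt(d2)
    t = 0.5 + (r1 ** 2 - r2 ** 2) / (2 * d2)            # parameter t_*
    c = tuple(a + t * (b - a) for a, b in zip(c1, c2))  # c = c1 + t_* N
    s = 0.5 * (dist + r1 + r2)                          # semi-perimeter
    r = 2 * math.sqrt(s * (s - dist) * (s - r1) * (s - r2)) / dist  # Heron
    return c, r
```

For two unit disks centered at the origin and at $(1,0)$, this returns the midpoint $(0.5,0)$ and radius $\sqrt{3}/2$, the height of the equilateral triangle on the segment between the centers.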
We can proceed now to compute the poles of the $(d-1)$-sphere $\partial
D_{1}\cap\partial D_{2}$.
###### Lemma 6.
Let $D_{1}(c_{1};r_{1})$ and $D_{2}(c_{2};r_{2})$ be two $d$-disks such that
$\partial D_{1}\cap\partial D_{2}$ is a $(d-1)$-sphere $S=S(c;r)$ with center
$c$ and radius $r$. Then, the $e_{q}$-poles of $S$ are
$s_{q}^{\pm}=c\pm\sum_{i=1}^{d}x_{i}e_{i}$, where
$x_{i}=\begin{cases}\dfrac{r|\pi_{i}(c_{2}-c_{1})\pi_{q}(c_{2}-c_{1})|}{\|c_{2}-c_{1}\|\sqrt{\|c_{2}-c_{1}\|^{2}-\pi_{q}(c_{2}-c_{1})^{2}}},&i\neq
q\\\ \\\
\dfrac{r{\sqrt{\|c_{2}-c_{1}\|^{2}-\pi_{q}(c_{2}-c_{1})^{2}}}}{\|c_{2}-c_{1}\|},&i=q.\end{cases}$
###### Proof.
For simplicity, we translate the hyperplane $\Pi$, which contains the
$(d-1)$-sphere $S$, as well as the sphere itself, to the origin; in such case,
the corresponding equations are given by,
$\displaystyle\sum_{i=1}^{d}(k_{i}-h_{i})x_{i}$ $\displaystyle=0,$
$\displaystyle\sum_{i=1}^{d}x_{i}^{2}$ $\displaystyle=r^{2},$
where $h_{i}:=\pi_{i}(c_{1})$ and $k_{i}:=\pi_{i}(c_{2})$ for $i=1,\ldots,d$.
In the case that $k_{q}-h_{q}=\pi_{q}(c_{2}-c_{1})=0$, the normal vector
$N=c_{2}-c_{1}$ of the hyperplane $\Pi$ is orthogonal to the basis vector
$e_{q}$. Therefore, the $e_{q}$-poles of $S$ are $c\pm re_{q}$, which agree
with the formulae of the lemma.
On the other hand, suppose that $k_{q}-h_{q}\neq 0$. To find the $e_{q}$-poles
of $S$, we will use the Lagrange multiplier method. Consider the following
function:
(2) $x_{q}=f(x_{1},x_{2},...,\widehat{x_{q}},...,x_{d})=\frac{-\sum_{j\neq
q}^{d}(k_{j}-h_{j})x_{j}}{k_{q}-h_{q}},$
subject to the restriction:
$g(x_{1},x_{2},...,\widehat{x_{q}},...,x_{d})=\sum_{j\neq
q}^{d}x_{j}^{2}+\left(\frac{-\sum_{j\neq
q}^{d}(k_{j}-h_{j})x_{j}}{k_{q}-h_{q}}\right)^{2}-r^{2}=0$
Let $\lambda$ be the Lagrange multiplier and define
$h(x_{1},...,\widehat{x_{q}},...,x_{d},\lambda)=f(x_{1},...,\widehat{x_{q}},...,x_{d})+\lambda
g(x_{1},...,\widehat{x_{q}},...,x_{d})$
For any $i\neq q$, consider the following system of equations:
$\frac{\partial h}{\partial x_{i}}=-\frac{k_{i}-h_{i}}{k_{q}-h_{q}}+2\lambda
x_{i}+2\lambda\left(\frac{-\sum_{j\neq
q}^{d}(k_{j}-h_{j})x_{j}}{k_{q}-h_{q}}\right)\left(-\frac{k_{i}-h_{i}}{k_{q}-h_{q}}\right)=0.$
Then
$\displaystyle-\frac{k_{i}-h_{i}}{k_{q}-h_{q}}+2\lambda\left(x_{i}+\frac{k_{i}-h_{i}}{(k_{q}-h_{q})^{2}}{\sum_{j\neq
q}^{d}(k_{j}-h_{j})x_{j}}\right)$ $\displaystyle=0$
$\displaystyle-(k_{i}-h_{i})(k_{q}-h_{q})+2\lambda\left((k_{q}-h_{q})^{2}x_{i}+(k_{i}-h_{i}){\sum_{j\neq
q}^{d}(k_{j}-h_{j})x_{j}}\right)$ $\displaystyle=0$
Solving this system of equations for $\lambda$, we obtain that,
$\lambda=\frac{(k_{i}-h_{i})(k_{q}-h_{q})}{2\left((k_{q}-h_{q})^{2}x_{i}+(k_{i}-h_{i}){\sum_{j\neq
q}^{d}(k_{j}-h_{j})x_{j}}\right)}$
Comparing the last expression for two indices $i\neq\tilde{i}$, we have that,
$\displaystyle x_{i}^{2}$
$\displaystyle=\frac{r^{2}(k_{i}-h_{i})^{2}}{(k_{\tilde{i}}-h_{\tilde{i}})^{2}+\sum_{j\neq\tilde{i},q}{(k_{j}-h_{j})}^{2}+\frac{1}{(k_{q}-h_{q})^{2}}\left(\sum_{j\neq
q}(k_{j}-h_{j})^{2}\right)^{2}}$
$\displaystyle=\frac{r^{2}(k_{i}-h_{i})^{2}}{\sum_{j\neq
q}{(k_{j}-h_{j})}^{2}+\frac{1}{(k_{q}-h_{q})^{2}}\left(\sum_{j\neq
q}(k_{j}-h_{j})^{2}\right)^{2}}$
$\displaystyle=\frac{r^{2}(k_{i}-h_{i})^{2}(k_{q}-h_{q})^{2}}{\left((k_{q}-h_{q})^{2}+\sum_{j\neq
q}(k_{j}-h_{j})^{2}\right)\left(\sum_{j\neq q}{(k_{j}-h_{j})}^{2}\right)}$
$\displaystyle=\frac{r^{2}\pi_{i}(c_{2}-c_{1})^{2}\pi_{q}(c_{2}-c_{1})^{2}}{\|c_{2}-c_{1}\|^{2}\left(\|c_{2}-c_{1}\|^{2}-\pi_{q}(c_{2}-c_{1})^{2}\right)}$
Finally, for $i=q$, substituting the last expression into (2) yields the desired result. ∎
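A sketch of Lemma 6 in an equivalent projection form: the pole offset is $r$ times the unit projection of $e_{q}$ onto the hyperplane orthogonal to $N=c_{2}-c_{1}$, which reproduces the coordinate magnitudes of the lemma together with consistent signs (the absolute values in the statement fix only the magnitudes). This is an illustrative implementation, not the paper's code:

```python
import math

def eq_poles_two_disks(c, r, c1, c2, q):
    """e_q-poles of S = bd(D1) ∩ bd(D2) with center c and radius r.

    Projection form of Lemma 6: offset direction is the unit projection
    of e_q onto the hyperplane orthogonal to N = c2 - c1.  q is a
    0-based axis index.
    """
    N = [b - a for a, b in zip(c1, c2)]
    nrm2 = sum(v * v for v in N)
    w = [-N[q] * N[i] / nrm2 for i in range(len(N))]  # -(N_q/||N||^2) N
    w[q] += 1.0                                       # e_q - (N_q/||N||^2) N
    wn = math.sqrt(sum(v * v for v in w))
    if wn < 1e-12:
        raise ValueError("degenerate: e_q is normal to the sphere's hyperplane")
    x = [r * v / wn for v in w]
    return (tuple(ci - xi for ci, xi in zip(c, x)),   # e_q-south pole
            tuple(ci + xi for ci, xi in zip(c, x)))   # e_q-north pole
```

When $N\perp e_{q}$ the offset reduces to $re_{q}$, matching the special case treated first in the proof; when $N=\pm\|N\|e_{q}$ the sphere lies in a hyperplane $x_{q}=\text{const}$ and the poles are degenerate, which the guard rejects.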
### 2.2. Sphere systems with more than two spheres
Now, let us proceed with the explicit calculation of the coefficients for the
center $c$ of the $(d-m+1)$-sphere $S=\cap_{j=1}^{m}\partial D_{j}$. We can
achieve this by considering the disk system translated to $c_{m}$, denoted as
$\\{D_{j}(c_{j}-c_{m};r_{j})\\}_{j=1}^{m}$, and by defining the
$(d-m+1)$-sphere $S-\\{c_{m}\\}=\cap_{j=1}^{m}\partial
D_{j}(c_{j}-c_{m};r_{j})$. This sphere is positioned at the intersection of
hyperplanes (for more details, refer to [6]).
(3) $(c_{k}-c_{m})^{T}x=\frac{1}{2}(r_{m}^{2}+\|c_{k}-c_{m}\|^{2}-r_{k}^{2})$
for all $k=1,...,m-1$. Using the fact that the center of $S-\\{c_{m}\\}$ can be expressed as a linear combination of the centers $c_{k}-c_{m}$ and substituting into (3), we obtain an $(m-1)\times(m-1)$ linear system:
$\sum_{j=1}^{m-1}\lambda_{j}(c_{k}-c_{m})\cdot(c_{j}-c_{m})=\frac{1}{2}(r_{m}^{2}+\|c_{k}-c_{m}\|^{2}-r_{k}^{2})$
for $k=1,...,m-1$. Solving the system of equations for
$\lambda=(\lambda_{1},...,\lambda_{m-1})$, we find the center of $S$ as
follows:
$c=\lambda_{1}(c_{1}-c_{m})+...+\lambda_{m-1}(c_{m-1}-c_{m})+c_{m}$
The radius of the sphere $S$ can be computed using the equation:
$r^{2}=r_{k}^{2}-\|c-c_{k}\|^{2}$
for any $k\in\\{1,...,m-1\\}$.
Now that we have determined the center and radius of $S$, as well as the
affine space that contains it, we can proceed to compute its $e_{q}$-poles for
each $q\in\\{1,2,...,d\\}$. These poles reside in the affine space that
contains $S$ and within a set that we define below.
Let $S$ be an $i$-sphere in $\mathbb{R}^{d}$, and let $n_{1},...,n_{d-i}$ be
orthogonal vectors to the affine space $L$ that contains $S$. Consider the
space $M$ generated by these vectors together with the vector $e_{q}$ from the
canonical basis of $\mathbb{R}^{d}$. Let us denote
$n_{j}=\left(n_{l}^{(j)}\right)_{l=1}^{d}$ for each $j=1,...,d-i$. Then, we
can define $L_{0}$, the set $L$ translated to the origin, as follows:
$\displaystyle L_{0}$
$\displaystyle=\left<\\{n_{j}\\}_{j=1}^{d-i}\right>^{\perp}$
$\displaystyle=\left\\{x\in\mathbb{R}^{d}\mid x\cdot n_{j}=0,\hskip
5.69046pt\forall j=1,...,d-i\right\\}$
The set $M$ is defined as:
$\displaystyle M$
$\displaystyle=\left<\\{n_{j}\\}_{j=1}^{d-i}\cup\\{e_{q}\\}\right>$
$\displaystyle=\left\\{x=\sum_{j=1}^{d-i}\lambda_{j}n_{j}+\lambda_{d-i+1}e_{q}\in\mathbb{R}^{d}\mid\lambda_{j}\in\mathbb{R},\hskip
5.69046pt\forall j=1,...,d-i+1\right\\}$
$\displaystyle=\left\\{\left(\sum_{j=1}^{d-i}\lambda_{j}n_{1}^{(j)},...,\sum_{j=1}^{d-i}\lambda_{j}n_{q}^{(j)}+\lambda_{d-i+1},...,\sum_{j=1}^{d-i}\lambda_{j}n_{d}^{(j)}\right)\mid\lambda_{j}\in\mathbb{R}\right\\}$
Refer to Figure 1 for a visual representation of the subspaces $L$ and
$M+\\{c\\}$.
Figure 1. Visualization of the subspaces $L$ and $M+\\{c\\}$.
As mentioned above, the $e_{q}$-poles of $S$ lie at the intersection of $L$
and $M+\\{c\\}$, where $c$ is the center of $S$. To simplify the calculations,
we will utilize $L_{0}$ and $M$, and then translate them into $c$. The
intersection of $M$ and $L_{0}$ can be expressed as follows:
$\displaystyle M\cap L_{0}$ $\displaystyle=\left\\{x\in M\mid x\cdot
n_{k}=0,\hskip 5.69046pt\forall k=1,...,d-i\right\\}$
$\displaystyle=\left\\{\sum_{j=1}^{d-i}\lambda_{j}n_{j}+\lambda_{d-i+1}e_{q}\in\mathbb{R}^{d}\left|\left(\sum_{j=1}^{d-i}\lambda_{j}n_{j}+\lambda_{d-i+1}e_{q}\right)\cdot
n_{k}=0,\hskip 5.69046pt\lambda_{j}\in\mathbb{R}\hskip 5.69046pt\forall
k=1,...,d-i\right.\right\\}$
$\displaystyle=\left\\{\sum_{j=1}^{d-i}\lambda_{j}n_{j}+\lambda_{d-i+1}e_{q}\in\mathbb{R}^{d}\left|\sum_{j=1}^{d-i}\lambda_{j}n_{j}\cdot
n_{k}+\lambda_{d-i+1}n_{q}^{k}=0,\hskip
5.69046pt\lambda_{j}\in\mathbb{R}\hskip 5.69046pt\forall
k=1,...,d-i\right.\right\\}$
Let us consider a disk system in $\mathbb{R}^{d}$, denoted
$\\{D_{j}(c_{j};r_{j})\\}_{j=1}^{m}$, where $m<d$. The intersection of their
boundaries forms a $(d-m+1)$-sphere $S$. In this case, the subspace $M$ has
dimension $m$, or $\dim(M)=m-1$ if $e_{q}\in M$. We choose the normal vectors
for the affine space containing $S$ as $n_{j}=c_{j}-c_{m}$, where
$j=1,...,m-1$. Then
$M=\left\\{\left(\sum_{j=1}^{m-1}\lambda_{j}\left(c_{1}^{(j)}-c_{1}^{(m)}\right),...,\sum_{j=1}^{m-1}\lambda_{j}\left(c_{q}^{(j)}-c_{q}^{(m)}\right)+\lambda_{m},...,\sum_{j=1}^{m-1}\lambda_{j}\left(c_{d}^{(j)}-c_{d}^{(m)}\right)\right)\left|\lambda_{j}\in\mathbb{R}\right.\right\\}$
By rewriting, we have
$\displaystyle M\cap L_{0}$
$\displaystyle=\left\\{\sum_{j=1}^{m-1}\lambda_{j}n_{j}+\lambda_{m}e_{q}\in\mathbb{R}^{d}\left|\sum_{j=1}^{m-1}\lambda_{j}n_{j}\cdot
n_{k}+\lambda_{m}n_{q}^{k}=0,\hskip 5.69046pt\lambda_{j}\in\mathbb{R},\hskip
5.69046pt\forall k=1,...,m-1\right.\right\\}$
If $S(c;r)=\cap_{j=1}^{m}\partial D_{j}$ is the $(d-m+1)$-sphere with center
in $c$ and radius $r$, then the $e_{q}$-poles of $S$ are the $e_{q}$-poles of
$S-\\{c_{m}\\}$ but translated by $c$. The poles of $S-\\{c_{m}\\}$ are
located in $M\cap L_{0}$. If $p$ is an $e_{q}$-pole of $S-\\{c_{m}\\}$, then
it can be expressed as
$p=\sum_{j=1}^{m-1}\lambda_{j}n_{j}+e_{q}\lambda_{m}$
for some $\lambda_{j}\in\mathbb{R}$, $j=1,...,m$, and the following conditions hold:
$\sum_{j=1}^{m-1}\lambda_{j}n_{j}\cdot n_{k}+\lambda_{m}n_{q}^{k}=0$
for each $k=1,...,m-1$ and
$\|p\|^{2}=r^{2}$
Thus, if $p=\sum_{j=1}^{m-1}\lambda_{j}n_{j}+e_{q}\lambda_{m}$ is an
$e_{q}$-pole of $S-\\{c_{m}\\}$, the following equations are satisfied for
$\lambda_{1},\lambda_{2},...,\lambda_{m}\in\mathbb{R}$:
(4) $\sum_{j=1}^{m-1}\lambda_{j}n_{j}\cdot n_{k}+\lambda_{m}n_{q}^{k}=0$
(5)
$\sum_{i=1}^{d}\left(\sum_{j=1}^{m-1}\lambda_{j}n_{i}^{j}\right)^{2}+2\lambda_{m}\sum_{j=1}^{m-1}\lambda_{j}n_{q}^{j}+\lambda_{m}^{2}=r^{2}$
for all $k=1,...,m-1$, with $r$ the radius of the $(d-m+1)$-sphere $S$. From
(4) we have the system
$\begin{pmatrix}n_{1}\cdot n_{1}&n_{1}\cdot n_{2}&\cdots&n_{1}\cdot n_{m-1}\\\ n_{2}\cdot n_{1}&n_{2}\cdot n_{2}&\cdots&n_{2}\cdot n_{m-1}\\\ \vdots&\vdots&\ddots&\vdots\\\ n_{m-1}\cdot n_{1}&n_{m-1}\cdot n_{2}&\cdots&n_{m-1}\cdot n_{m-1}\end{pmatrix}\begin{pmatrix}\lambda_{1}\\\ \lambda_{2}\\\ \vdots\\\ \lambda_{m-1}\end{pmatrix}+\lambda_{m}\begin{pmatrix}n_{q}^{1}\\\ n_{q}^{2}\\\ \vdots\\\ n_{q}^{m-1}\end{pmatrix}=0.$
Let us denote $A$ as the matrix $(n_{i}\cdot n_{j})_{i,j}$ and $B$ as the
vector $(-n_{q}^{j})_{j=1}^{m-1}$. Then, we have $A\lambda=\lambda_{m}B$,
where $\lambda=(\lambda_{1},...,\lambda_{m-1})$. Solving for $\lambda$, we
obtain $\lambda_{j}=\lambda_{m}(A^{-1}B)[j]$ for each $j=1,...,m-1$ (where
$(A^{-1}B)[j]$ denotes the entry $j$ of the $(m-1)\times 1$ vector
$(A^{-1}B)$). By substituting the value of $\lambda_{j}$ into (5), we obtain
the quadratic equation:
$\lambda_{m}^{2}\left[\sum_{i=1}^{d}\left(\sum_{j=1}^{m-1}(A^{-1}B)[j]n_{i}^{j}\right)^{2}+2\sum_{j=1}^{m-1}(A^{-1}B)[j]n_{q}^{j}+1\right]-r^{2}=0$
Let us define
$\Gamma_{i}=\sum_{j=1}^{m-1}(A^{-1}B)[j]n_{i}^{j}$
for all $i=1,...,d$. Solving this equation, we find:
$\lambda_{m}=\frac{\pm r}{\sqrt{\sum_{i=1}^{d}\Gamma_{i}^{2}+2\Gamma_{q}+1}}$
$\lambda_{j}=\frac{\pm
r(A^{-1}B)[j]}{\sqrt{\sum_{i=1}^{d}\Gamma_{i}^{2}+2\Gamma_{q}+1}}$
for $j=1,...,m-1$. Therefore, the $e_{q}$-poles of $S$, for $q=1,...,d$, are:
$p=\sum_{j=1}^{m-1}\lambda_{j}n_{j}+e_{q}\lambda_{m}+c=\frac{\pm
r}{\sqrt{\sum_{i=1}^{d}\Gamma_{i}^{2}+2\Gamma_{q}+1}}\left(\sum_{j=1}^{m-1}(A^{-1}B)[j]n_{j}+e_{q}\right)+c$
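The computations of this subsection can be collected into one numerical routine. The sketch below (NumPy, an illustration rather than an authoritative implementation) solves the linear system for the center, computes the radius, and obtains the $e_{q}$-poles from $\Gamma$, using that $\sum_{i=1}^{d}\Gamma_{i}^{2}+2\Gamma_{q}+1=\|\Gamma+e_{q}\|^{2}$; it assumes $2\leq m\leq d$ and linearly independent normals $n_{j}=c_{j}-c_{m}$:

```python
import numpy as np

def sphere_and_poles(centers, radii, q):
    """Center, radius and e_q-poles of S = ∩_j bd(D_j) for 2 <= m <= d disks.

    Numerical sketch of the Section 2.2 formulas; assumes the translated
    normals n_j = c_j - c_m are linearly independent and the boundaries
    intersect in a (possibly degenerate) sphere.  q is a 0-based axis.
    """
    C = np.asarray(centers, dtype=float)
    R = np.asarray(radii, dtype=float)
    m, d = C.shape
    n = C[:-1] - C[-1]                               # rows n_j = c_j - c_m
    A = n @ n.T                                      # Gram matrix (n_i . n_j)
    rhs = 0.5 * (R[-1] ** 2 + np.sum(n ** 2, axis=1) - R[:-1] ** 2)
    lam = np.linalg.solve(A, rhs)                    # system (3)
    c = C[-1] + lam @ n                              # center of S
    r = np.sqrt(max(float(R[0] ** 2 - np.sum((c - C[0]) ** 2)), 0.0))
    B = -n[:, q]                                     # vector (-n_q^(j))_j
    gamma = np.linalg.solve(A, B) @ n                # Γ = Σ_j (A^{-1}B)[j] n_j
    dirv = gamma + np.eye(d)[q]                      # Γ + e_q, whose norm is
    step = r * dirv / np.linalg.norm(dirv)           # sqrt(ΣΓ_i^2 + 2Γ_q + 1)
    return c, r, (c - step, c + step)
```

On the disk system of Example 13 below, this recovers the single intersection point $(3,0,0)$ as a sphere of radius $0$, with both poles equal to the center.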
## 3\. Vietoris-Rips and Čech systems
Our goal in this section is to provide a comprehensive understanding of the
disk system, the Vietoris-Rips system, and the Čech system. Additionally, we
introduce some results that establish a certain connection between both disk
systems. Investigating the features and qualities of data and spaces can
provide us with useful knowledge about their geometric and topological
characteristics.
Before we look into the definitions of Vietoris-Rips and Čech systems, let us
give a brief overview. These systems are essential in the field of topological
data analysis for recognizing and comprehending the geometric structure of
point cloud data. The Vietoris-Rips complex and the Čech complex share the goal of capturing the topology of the underlying metric space, but they provide different ways of recognizing connections and associations among data points. The
Vietoris-Rips complex tends to be more efficient and scalable for large
datasets, while the Čech complex can be more accurate but computationally more
expensive. The choice between the two depends on the nature of the dataset and
the specific goals of the topological analysis. Now, let us move on to
defining these fundamental concepts.
###### Definition 7.
Let $M=\\{D_{1},D_{2},\ldots,D_{m}\\}$ be a $d$-disk system. We say $M$ is a
Vietoris-Rips system if $D_{i}\cap D_{j}\neq\emptyset$ for each pair
$i,j\in\\{1,2,\ldots,m\\}$. Furthermore, if the $d$-disk system $M$ has the
nonempty intersection property $\bigcap_{D_{i}\in M}D_{i}\neq\emptyset$, then
$M$ is called a Čech system.
For each $\lambda\geq 0$, we define a collection of $d$-disks $M_{\lambda}$
with the same centers as those in the $d$-disk system $M$, but with radii
rescaled by $\lambda$. When $\lambda>0$, $M_{\lambda}$ is a $d$-disk system
again. $M_{1}$ is equal to $M$, and $M_{0}$ is the set of the centers of the
$d$-disks in $M$.
In the field of topological data analysis, the Rips scale and the Čech scale
are essential parameters for determining the closeness and connectivity
between data points. These two scales offer different perspectives on how we
measure and comprehend geometric relationships within point-cloud data. To
understand their importance in capturing the underlying topological structure,
let us look at their definitions. The Vietoris-Rips scale $\nu_{M}$ of a $d$-disk system $M$ is the smallest $\lambda\in\mathbb{R}$ such that $M_{\lambda}$ is a Vietoris-Rips system. Similarly, the Čech scale $\mu_{M}$ of $M$ is the smallest $\lambda\in\mathbb{R}$ such that $M_{\lambda}$ is a Čech system. That is,
$\displaystyle\nu_{M}$ $\displaystyle=\inf\\{\lambda\in\mathbb{R}\mid
M_{\lambda}\mbox{ is a Vietoris-Rips system}\\}$ $\displaystyle\mu_{M}$
$\displaystyle=\inf\\{\lambda\in\mathbb{R}\mid M_{\lambda}\mbox{ is a \v{C}ech
system}\\}$
Next, we present some easily observable properties for both scales. It can be
easily seen that $M$ is a Vietoris-Rips system if and only if $\nu_{M}\leq 1$
(in particular $\nu_{M_{\nu_{M}}}=1$); similarly, $M$ is a Čech system if and
only if $\mu_{M}\leq 1$.
Note that for a given $d$-disk system $M=\\{D_{1},D_{2},\ldots,D_{m}\\}$ the
Vietoris-Rips scale is
$\nu_{M}=\max_{i<j}\\{\|c_{i}-c_{j}\|/(r_{i}+r_{j})\\}$
where $c_{i}$ and $r_{i}$ are the center and radius of $D_{i}$. An additional
observation is that, in cases where the disk system consists of either one or
two disks, the Vietoris-Rips scale coincides with the Čech scale. It is
evident that every Čech system is also a Vietoris-Rips system; however, the
reverse assertion, in general, is not true.
In contrast, if the $d$-disk system contains at least three disks, determining the Čech scale becomes more complex. In this context, the following remark is important and plays a key role in the implementation (see [3] for details).
###### Remark 8.
If $\mu_{M}$ is the Čech scale for $M$, then the $\mu_{M}$-rescaled system
$M_{\mu_{M}}$, has only one point in the intersection $\bigcap_{D_{i}\in
M}D_{i}(c_{i};\mu_{M}r_{i})$.
As we have mentioned, a Čech system is also a Vietoris-Rips system, but the
converse is not true. What we can affirm is that if a system is a Vietoris-
Rips system, then the system rescaled by the factor $\sqrt{2d/(d+1)}$ is also
a Čech system. This is established by the following lemma, the proof of which
can be found in [3].
###### Lemma 9.
Let $M=\\{D_{i}(c_{i};r_{i})\\}$ be a $d$-disk system in Euclidean space
$\mathbb{R}^{d}$. If $D_{i}(c_{i};r_{i})\cap D_{j}(c_{j};r_{j})\neq\emptyset$
for every pair of disks in $M$, then
$\bigcap_{D_{i}\in M}D_{i}(c_{i};\sqrt{2d/(d+1)}\,r_{i})\neq\emptyset.$
One of the implications of the previous result is that, for any given disk
system $M$, we can bound the Čech scale using the Vietoris-Rips scale
$\nu_{M}$. This is stated by the following corollary.
###### Corollary 10.
If $M$ is an arbitrary $d$-disk system and $\nu_{M}$ is its Vietoris-Rips
scale, then its Čech scale satisfies
$\nu_{M}\leq\mu_{M}\leq\sqrt{2d/(d+1)}\,\nu_{M}$. Therefore, for every
$d$-disk system $M$, the rescaled disk system $M_{\sqrt{2d/(d+1)}\,\nu_{M}}$
is always a Čech system. In particular, if $\sqrt{2d/(d+1)}\,\nu_{M}\leq 1$
then $M_{\nu_{M}}$ is a Čech system.
### 3.1. Algorithm for determining Čech system.
In the previous section, we have determined the $e_{q}$-poles for the
intersection of any number of disks in $\mathbb{R}^{d}$. If any of these poles
is in all disks of the disk system, it indicates that the system conforms to
the criteria of a Čech system. It is important to recognize that this result
streamlines our calculation process, focusing on specific points to establish
whether the system exhibits a non-empty intersection.
Given a system of $m$ disks in $\mathbb{R}^{d}$ where $m>d$, it is enough to
verify if every subsystem of $d+1$ disks qualifies as a Čech system to
conclude that the entire system of disks has a non-empty intersection. This
assertion is supported by Helly’s Theorem.
Now, we introduce an algorithm that determines whether a disk system qualifies
as a Čech system. In simpler terms, if the disk system exhibits a non-empty
intersection, the algorithm outputs "TRUE"; otherwise, it outputs "FALSE". The
algorithm operates by seeking poles within the intersections of the disk
boundaries, which, as we have observed, correspond to $i$-spheres. It
initiates the search for poles within individual disks and then progresses to
the pairwise intersections of the disk boundaries ($(d-2)$-dimensional spheres), continuing
the process iteratively. If a pole is found within the remaining disks, the
system is classified as a Čech system.
1
Input : A $d$-disk system $M=\\{D_{j}\\}_{j=1}^{m}$
Output : A logical TRUE/FALSE to indicate if $M$ is a Čech system
2 Initialize: $\texttt{Is\\_Cech\\_System}\leftarrow\texttt{FALSE}$
3 for _$k\leftarrow 1$ to $m$_ do
4 Let $\mathcal{S}$ be the set of $(d-k+1)$-spheres of $\partial M$
5 for _$S$ in $\mathcal{S}$_ do
6 for _$q\leftarrow 1$ to $d$_ do
7 Compute the set $\\{s_{q}^{\pm}\\}$ of $e_{q}$-poles of $S$
8 for _$s\leftarrow\\{s_{q}^{\pm}\\}$_ do
9 if _$s\in\cap_{j=1}^{m}D_{j}$_ then
10 $\texttt{Is\\_Cech\\_System}\leftarrow\texttt{TRUE}$
11 Go to line 16
12 end if
13
14 end for
15
16 end for
17
18 end for
19
20 end for
return (Is_Cech_System)
Algorithm 1 Cech.system
###### Theorem 11.
Let $M=\\{D_{1},\ldots,D_{m}\\}$ be a $d$-disk system. Then, $M$ is a Čech
system if and only if Cech.system$(M)=$ TRUE.
###### Proof.
If Cech.system$(M)=$ TRUE, then the Cech.system algorithm (Algorithm 1) found
a pole contained in the intersection $\cap_{j=1}^{m}D_{j}$, therefore
$\cap_{j=1}^{m}D_{j}\neq\emptyset$ and it follows that $M$ is a Čech system.
On the other hand, if $\bigcap_{j=1}^{m}D_{j}\neq\emptyset$, let $p$ be a
point in $\bigcap_{j=1}^{m}D_{j}$ satisfying $\pi_{1}(p)\leq\pi_{1}(x)$ for
every $x$ in $\bigcap_{j=1}^{m}D_{j}$. By Lemma 4, it follows that $p$ belongs
to an $i$-sphere and must be an $e_{1}$-south pole. Therefore, by the
exhaustive search of Algorithm 1 across all poles, its output is
Cech.system$(M)=$ TRUE. ∎
### 3.2. Algorithm to compute the Čech scale.
Finding the minimum parameter for which the rescaled system of disks has a
non-empty intersection is significant because it helps identify a critical
threshold at which the disks come into contact. This parameter, known as the
Čech scale, provides valuable information about the proximity or overlap of
the disks, which can be crucial in various applications such as collision
detection in computer graphics, spatial packing problems, and modeling
physical phenomena. In this section, we introduce an algorithm to compute an
approximation of the Čech scale for a system of $m$ disks in $\mathbb{R}^{d}$.
1
Input : A $d$-disk system $M$ in $\mathbb{R}^{d}$ and precision parameter
$\eta>0$
Output : $\mu_{M}$, a Čech scale approximation
2 Compute the Vietoris-Rips scale of $M$: $\nu_{M}$
3 if _Cech.system $(M_{\nu_{M}})=$ TRUE_ then
4 $\mu_{M}\leftarrow\nu_{M}$
5 Go to line 16
6
7else
8 Initialize: $\mu_{M}^{*}\leftarrow\nu_{M}$,
$\mu_{M}\leftarrow\sqrt{2d/(d+1)}\nu_{M}$
9 while _$\mu_{M}-\mu_{M}^{*} >\eta$_ do
10 Compute: $\lambda\leftarrow\dfrac{\mu_{M}^{*}+\mu_{M}}{2}$
11 if _Cech.system $(M_{\lambda})=$ TRUE_ then
12 Update: $\mu_{M}\leftarrow\lambda$
13 else
14 Update: $\mu_{M}^{*}\leftarrow\lambda$
15 end if
16
17 end while
18
19 end if
return ($\mu_{M}$)
Algorithm 2 Cech.scale
The given code presents an algorithm to compute an approximation of the Čech
scale of a disk system in Euclidean space using Algorithm 1 and a precision
parameter $\eta>0$. It first computes the Vietoris-Rips scale $\nu_{M}$. If Cech.system$(M_{\nu_{M}})=$ TRUE, the Čech scale has been found and equals $\nu_{M}$. Otherwise, we iterate, testing Cech.system on the system rescaled by a candidate factor $\lambda$.
The Čech scale is known to fall between the Rips scale and the value
$\sqrt{{2d}/{(d+1)}}\nu_{M}$ (Generalized Vietoris-Rips, Corollary 10). To
approximate the Čech scale, we employ the bisection method as long as the
interval enclosing the Čech scale has a length greater than $\eta$. Finally,
the algorithm returns an approximation of the Čech scale.
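A minimal sketch of Algorithm 2, with the Čech-system test abstracted as a caller-supplied predicate `is_cech` (a stand-in for Algorithm 1, not implemented here):

```python
import math

def cech_scale(nu, d, is_cech, eta=1e-6):
    """Bisection approximation of the Čech scale (Algorithm 2 sketch).

    nu: Vietoris-Rips scale of the system; d: ambient dimension;
    is_cech(lam): stand-in predicate deciding whether the lam-rescaled
    system is a Čech system.  Returns an upper approximation of the
    Čech scale within eta.
    """
    if is_cech(nu):
        return nu                     # Rips and Čech scales coincide here
    lo, hi = nu, math.sqrt(2 * d / (d + 1)) * nu   # Corollary 10 bracket
    while hi - lo > eta:
        mid = 0.5 * (lo + hi)
        if is_cech(mid):
            hi = mid                  # Čech scale <= mid: tighten from above
        else:
            lo = mid                  # Čech scale > mid: tighten from below
    return hi
```

The loop maintains the invariant that the rescaled system is Čech at `hi` and not at `lo`, so the true scale always lies in $(\texttt{lo},\texttt{hi}]$ and the returned value overestimates it by at most $\eta$.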
Utilizing the previously described algorithm, we can construct the filtered
generalized Čech complex for a disk system $M$. Let $\mathscr{C}(M)$ denote
the set of Čech subsystems, and $\mathscr{C}_{\lambda}(M)$ the set of Čech
subsystems for the rescaled disk system $M_{\lambda}$. The Čech filtration of
the $M$ system forms a maximal chain of Čech complexes
$\mathscr{C}_{*}(M):\mathscr{C}_{0}(M)\subsetneq\mathscr{C}_{\lambda_{1}}(M)\subsetneq\mathscr{C}_{\lambda_{2}}(M)\subsetneq...\subsetneq\mathscr{C}_{\mu_{M}}(M),$
where each $\lambda_{i}$ represents the Čech scale of the system
$M_{\lambda_{i}}$. Since the Čech scale of a disk system indicates the factor
by which we must rescale the system to make it Čech, defining a level of the
filtration, $\mathscr{C}_{\lambda}(M)$, simply requires determining the Čech
scale of the system $M_{\lambda}$.
## 4\. Minimal Axis-Aligned Bounding Box.
In this section, we introduce the concept of the minimal axis-aligned bounding
box (AABB) for the intersection of $d$-disks and present methods for its
computation. The AABB provides a simplified representation of the disk
intersection, making it easier to obtain valuable information about the disks.
This information could be useful for computing the Čech scale of a disk
system.
###### Definition 12.
Let $M$ be a disk system in $\mathbb{R}^{d}$. The minimal axis-aligned
bounding box of $M$, denoted as $AABB(M)$, is defined as the smallest axis-
aligned bounding box that contains the intersection $D=\cap_{D_{i}\in M}D_{i}$, given by
$AABB(M):=\bigcap_{D\subset\tilde{B}}\tilde{B}$
where $\tilde{B}$ ranges over all axis-aligned bounding boxes that contain
$D$.
Note that the AABB can be expressed as
$AABB(M)=\prod_{k=1}^{d}[\inf\pi_{k}(\partial D),\sup\pi_{k}(\partial D)]$
where $\pi_{k}:\mathbb{R}^{d}\longrightarrow\mathbb{R}$ is the canonical
projection onto the $k$-th factor, and $\partial D$ denotes the boundary of
$D$. In other words, the AABB of a disk system $M$ is given by the Cartesian
product of intervals, where each interval is determined by the minimum and
maximum values of the corresponding projection of the disk boundaries.
### 4.1. Minimal axis-aligned bounding box for two disks
Let’s consider the situation when the disk system $M$ is composed of two disks
$D_{1}$ and $D_{2}$ in $\mathbb{R}^{d}$. If $D_{1}\cap D_{2}\neq\emptyset$ and
$D_{1}\neq D_{2}$, the subset $\partial D_{1}\cap\partial D_{2}$ can take one
of three forms: an empty set, a single common point (when the disks are
tangent), or a $(d-2)$-dimensional sphere. In the last case, we denote the
$(d-2)$-dimensional sphere $\partial D_{1}\cap\partial D_{2}$ (the $(d-1)$-sphere in the notation of Section 2) by $S_{1,2}$. To calculate $\inf\pi_{i}(\partial(D_{1}\cap D_{2}))$ and
$\sup\pi_{i}(\partial(D_{1}\cap D_{2}))$ for each $i\in\\{1,2,...,d\\}$, we
can use:
(6) $\displaystyle\inf\pi_{i}(\partial(D_{1}\cap
D_{2}))=\begin{cases}\pi_{i}(c_{1}-r_{1}e_{i})&\text{if }c_{1}-r_{1}e_{i}\in
D_{2},\\\ \pi_{i}(c_{2}-r_{2}e_{i})&\text{if }c_{2}-r_{2}e_{i}\in D_{1},\\\
\inf(\pi_{i}(\partial D_{1}\cap\partial D_{2}))&\text{otherwise},\end{cases}$
$\displaystyle\sup\pi_{i}(\partial(D_{1}\cap
D_{2}))=\begin{cases}\pi_{i}(c_{1}+r_{1}e_{i})&\text{if }c_{1}+r_{1}e_{i}\in
D_{2},\\\ \pi_{i}(c_{2}+r_{2}e_{i})&\text{if }c_{2}+r_{2}e_{i}\in D_{1},\\\
\sup(\pi_{i}(\partial D_{1}\cap\partial D_{2}))&\text{otherwise}.\end{cases}$
Indeed, by Lemma 4, the extremes of the AABB are the projections of certain
poles, either from the $(d-1)$-sphere or some $d$-sphere. The $d$-spheres
represent the boundaries of each disk, with poles given by $c_{j}\pm
r_{j}e_{i}$, and the $(d-1)$-sphere is $S_{1,2}=\partial D_{1}\cap\partial
D_{2}$, whose poles are computed using Lemma 6. It is worth noting that there
are no further options for $(d-m+1)$-spheres in the case of a two-disk system.
To simplify the notation, we will use $B_{i,j}$ to denote the AABB of the
intersection of disks $D_{i}$ and $D_{j}$. Figure 2 illustrates the AABB of
the intersection of two disks in the plane.
Figure 2. AABB of two disks.
According to (6) and Lemma 5, we have a method to calculate the axis-aligned
bounding box (AABB) for systems of two disks.
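For $d=2$ the recipe in (6) can be sketched directly, with the two circle intersection points obtained from Lemma 5 (a plain-Python illustration assuming the circles properly overlap):

```python
import math

def aabb_two_disks(c1, r1, c2, r2):
    """AABB of D1 ∩ D2 for two properly overlapping disks in the plane.

    Implements (6) for d = 2: the first two cases test the axis-extreme
    points of each disk; otherwise the extreme is attained on
    bd(D1) ∩ bd(D2), here the two circle intersection points (Lemma 5).
    """
    def in_disk(p, c, r):
        return math.dist(p, c) <= r + 1e-12

    # Center and radius of bd(D1) ∩ bd(D2) by Lemma 5, then the two points.
    N = (c2[0] - c1[0], c2[1] - c1[1])
    dist = math.hypot(*N)
    t = 0.5 + (r1 ** 2 - r2 ** 2) / (2 * dist ** 2)
    c = (c1[0] + t * N[0], c1[1] + t * N[1])
    s = 0.5 * (dist + r1 + r2)
    r = 2 * math.sqrt(s * (s - dist) * (s - r1) * (s - r2)) / dist
    perp = (-N[1] / dist, N[0] / dist)
    pts = [(c[0] - r * perp[0], c[1] - r * perp[1]),
           (c[0] + r * perp[0], c[1] + r * perp[1])]

    box = []
    for i in range(2):                      # axis i = x, y
        ends = []
        for sign in (-1, 1):                # inf, then sup, as in (6)
            e = [0.0, 0.0]; e[i] = sign
            p1 = (c1[0] + r1 * e[0], c1[1] + r1 * e[1])
            p2 = (c2[0] + r2 * e[0], c2[1] + r2 * e[1])
            if in_disk(p1, c2, r2):
                ends.append(p1[i])
            elif in_disk(p2, c1, r1):
                ends.append(p2[i])
            elif sign < 0:
                ends.append(min(p[i] for p in pts))
            else:
                ends.append(max(p[i] for p in pts))
        box.append(tuple(ends))
    return tuple(box)
```

For the two unit disks centered at $(0,0)$ and $(1,0)$, this yields the box $[0,1]\times[-\sqrt{3}/2,\sqrt{3}/2]$: the $x$-extremes come from the first two cases of (6), the $y$-extremes from the intersection points.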
Knowing how to compute the AABB of two disks is not sufficient to determine
the AABB for a disk system with more than two disks in $\mathbb{R}^{d}$. In
the following examples, we demonstrate that the AABB of a disk system is not
simply the intersection of all AABBs of two disks in $\mathbb{R}^{3}$.
###### Example 13.
Let
$M=\\{D_{1}((4,1,0);\sqrt{2}),D_{2}((4,-1,0);\sqrt{2}),D_{3}((0,0,0);3)\\}\subset\mathbb{R}^{3}$
be a Vietoris-Rips system in $\mathbb{R}^{3}$ with the following projection
onto the $xy$-plane:
Figure 3. Disk system $M$ projected onto the $xy$-plane.
By computing the boxes $B_{i,j}$ for all $i<j$, we obtain:
$\displaystyle B_{1,2}$
$\displaystyle=[3,5]\times[-\sqrt{2},\sqrt{2}]\times[-1,1]$ $\displaystyle
B_{1,3}$ $\displaystyle=[4-\sqrt{2},3]\times[0,1.41]\times[-0.72,0.72]$
$\displaystyle B_{2,3}$
$\displaystyle=[4-\sqrt{2},3]\times[-1.4,0]\times[-0.72,0.72]$
Therefore, $\cap_{i<j}B_{i,j}=[3,3]\times[0,0]\times[-0.72,0.72]$. However,
the disk system intersects at the point $P=(3,0,0)$, which means
$AABB(M)=\\{P\\}$. In other words, $AABB(M)$ is not equal to
$\cap_{i<j}B_{i,j}$.
We know that if $D\neq\emptyset$, then $\cap_{i<j}B_{i,j}\neq\emptyset$.
However, the converse is not always true. The following example illustrates
this fact.
###### Example 14.
Let
$N=\\{D_{1},D_{2},D_{3}((0,1,0);\sqrt{10}),D_{4}((3,0,1);0.9)\\}\subset\mathbb{R}^{3}$
be a Vietoris-Rips system, where $D_{1}$ and $D_{2}$ are as in Example 13. The projection of disks $D_{1},D_{2},$ and $D_{3}$
onto the $xy$-plane is illustrated in Figure 4.
Figure 4. Disk system $N$ projected onto the $xy$-plane.
We will now compute the intersections $B_{i,j}$ for different pairs of disks:
$\displaystyle B_{1,2}$
$\displaystyle=[3,5]\times[-\sqrt{2},\sqrt{2}]\times[-1,1]$ $\displaystyle
B_{1,3}$ $\displaystyle=[4-\sqrt{2},\sqrt{10}]\times[0,2]\times[-1,1]$
$\displaystyle B_{1,4}$
$\displaystyle=[2.6,3.9]\times[-0.4,0.9]\times[0.100007,1.3]$ $\displaystyle
B_{2,3}$ $\displaystyle=[2.6,3]\times[-0.8,0]\times[-0.44,4.47]$
$\displaystyle B_{2,4}$
$\displaystyle=[2.6,3.9]\times[-0.9,0.4]\times[0.100007,1.3]$ $\displaystyle
B_{3,4}$ $\displaystyle=[2.1,3.11]\times[-0.73,0.9]\times[0.1,1.7]$
The intersection of all pairwise intersections, $\cap_{i<j}B_{i,j}$, is given
by $[3,3]\times[0,0]\times[0.1,1.7]$. However, the intersection of disks
$D_{1},D_{2},$ and $D_{3}$ is a single point $P$, which is not contained in
$D_{4}$ (by construction). Therefore, $D$ is an empty set, but
$\cap_{i<j}B_{i,j}$ is not.
These examples clearly illustrate that when dealing with the AABB of three
disks or more, knowing the AABB for pairs of disks is insufficient. In Example
13, we observe that the intersection of $AABB(\\{D_{i},D_{j}\\})$ is not equal
to the AABB of the disk system $M$. Similarly, in Example 14, we find that the
intersection of three disks is empty, yet the intersection of
$AABB(\\{D_{i},D_{j}\\})$ contains points.
Therefore, the next crucial step is to determine how to calculate the AABB of
a disk system consisting of more than two disks in $\mathbb{R}^{d}$.
### 4.2. Minimal axis-aligned bounding box for more than two disks
Given a system $M$ consisting of $m$ disks in $\mathbb{R}^{d}$, we can compute
the $e_{q}$-poles for any subcollection of disks. Using these $e_{q}$-poles,
we can determine the axis-aligned bounding box (AABB) of $M$.
If $m\leq d$, we calculate the $e_{q}$-poles of the $(d-m+1)$-sphere
$\cap\partial D_{i}$, and with these poles, we define the AABB of $M$ by
taking $\inf\pi_{q}(\partial D)=\pi_{q}(p)$, where $p$ is the $e_{q}$-south pole of the $(d-m+1)$-sphere (and similarly for $\sup\pi_{q}(\partial D)$).
In the case where $m=d+1$, we consider $AABB(M(1)),...,AABB(M(m))$ as a
collection of minimal axis-aligned boxes for the disk system
$M(i)=M-\\{D_{i}\\}$ in $\mathbb{R}^{d}$. If the intersection of any $d+1$ of
these sets is nonempty, then the intersection of the entire collection gives
us the minimal axis-aligned box of the disk system $M$. This can be expressed
as $AABB(M)=\cap AABB(M(i))$. The next theorem confirms this finding.
###### Theorem 15 (Helly’s theorem for minimal axis-aligned boxes).
Let $M$ be a Vietoris-Rips system with $d+1$ disks in $\mathbb{R}^{d}$. Then, the
minimal axis-aligned bounding box for the intersection set
$D=\cap_{j=1}^{d+1}D_{j}$ satisfies:
$AABB(M)=\bigcap_{j=1}^{d+1}AABB(M(j))$
where $M(j)=M-\\{D_{j}\\}$.
###### Proof.
We know that $AABB(M)\subseteq\bigcap_{j=1}^{d+1}AABB(M(j))$. Now, our
objective is to establish the reverse inclusion, that is,
$\bigcap_{j=1}^{d+1}AABB(M(j))\subseteq AABB(M)$. In order to derive a
contradiction, suppose that the reverse inclusion is not true. By definition,
we have:
$AABB(M)=\prod_{i=1}^{d}\left[\inf\pi_{i}(\partial D),\sup\pi_{i}(\partial
D)\right]$
and
$\bigcap_{j=1}^{d+1}AABB(M(j))=\prod_{i=1}^{d}\left[\max_{k}\\{\inf\pi_{i}(\partial\cap_{j\neq
k}D_{j})\\},\min_{k}\\{\sup\pi_{i}(\partial\cap_{j\neq k}D_{j})\\}\right]$
Without loss of generality, let’s assume that:
$\pi_{1}(p)>\max_{k}\left\\{\inf\pi_{1}(\cap_{j\neq k}\partial
D_{j})\right\\}_{k=1}^{d+1}$
where $p\in D$ satisfies $\pi_{1}(p)=\inf\pi_{1}(\partial D)$.
Let $q_{j}$ be the point in $\cap_{k\neq j}D_{k}$ such that
$\pi_{1}(q_{j})=\inf\pi_{1}(\cap_{k\neq j}D_{k})$ for each $j=1,2,...,d+1$.
Note that $q_{j}\notin D_{j}$ because $q_{j}$ is not in $D$. Now, let
$\gamma_{j}$ be the line segment that connects $q_{j}$ and $p$.
Choose $\epsilon>0$ small enough such that the hyperplane
$P:x_{1}=\pi_{1}(p)-\epsilon$ does not contain any $q_{j}$, and
$\pi_{1}(q_{j})<\pi_{1}(p)-\epsilon$ for all $j=1,2,...,d+1$ (such hyperplane
$P$ exists because $\pi_{1}(p)>\pi_{1}(q_{j})$ for all $j=1,...,d+1$). Since
$q_{j}$ is in $\cap_{k\neq j}D_{k}$, the hyperplane $P$ intersects every disk
in $M(j)$ for each $j$, and $\gamma_{j}\subset\cap_{k\neq j}D_{k}$ intersects
$P$ at a point that is in $\cap_{k\neq j}D_{k}$ (see Figure 5). Furthermore,
$D_{k}\cap P$ is a $(d-1)$-dimensional disk. Therefore, we have a collection
$\mathcal{D}=\{D_{j}\cap P\mid j=1,2,...,d+1\}$ of $d+1$ disks, each of
dimension $d-1$, such that every subset $A$ of $\mathcal{D}$ consisting of $d$
disks has the non-empty intersection property. By Helly’s Theorem, the
intersection of all $(d-1)$-disks in $\mathcal{D}$ is not empty. Therefore,
there exists a point $q\in D$ with $\pi_{1}(q)<\pi_{1}(p)$. However, this
contradicts the fact that $p\in D$ is such that
$\pi_{1}(p)=\inf\pi_{1}(\partial D)$.
Therefore, we conclude that $\bigcap_{j=1}^{d+1}AABB(M(j))=AABB(M)$.
Figure 5.
∎
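The identity in Theorem 15 can be checked numerically. The following sketch (our own illustration, not part of the paper) grid-samples a Rips system of three disks in $\mathbb{R}^{2}$ (the centres and radii are an arbitrary choice) and compares the AABB of the triple intersection against the intersection of the AABBs of the three subsystems $M(j)$:

```python
import math

def aabb_of_intersection(disks, lo=-3.0, hi=3.0, n=301):
    """Grid-sample the common intersection of `disks` ((center, radius)
    pairs) and return its axis-aligned bounding box, or None if empty."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    pts = [(x, y) for x in xs for y in xs
           if all(math.dist((x, y), c) <= r for c, r in disks)]
    if not pts:
        return None
    return (min(p[0] for p in pts), max(p[0] for p in pts),
            min(p[1] for p in pts), max(p[1] for p in pts))

# Three pairwise-intersecting disks (a Rips system with d + 1 = 3 disks).
disks = [((0.0, 0.0), 1.5), ((1.0, 0.0), 1.5), ((0.5, 1.0), 1.5)]
box_full = aabb_of_intersection(disks)

# Intersect the AABBs of the subsystems M(j) = M \ {D_j}.
sub_boxes = [aabb_of_intersection([d for k, d in enumerate(disks) if k != j])
             for j in range(3)]
box_sub = (max(b[0] for b in sub_boxes), min(b[1] for b in sub_boxes),
           max(b[2] for b in sub_boxes), min(b[3] for b in sub_boxes))

print(all(abs(a - b) < 1e-9 for a, b in zip(box_full, box_sub)))  # True
```

Both computations use the same grid, so for this example the two boxes agree exactly, as Theorem 15 predicts.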
###### Lemma 16.
Let $M$ be a Vietoris-Rips system of $d+1$ disks in $\mathbb{R}^{d}$. If
$D=\cap_{i=1}^{d+1}D_{i}=\emptyset$ and $AABB(M(j))\neq\emptyset$ for each
$j=1,...,d+1$, then $\cap_{i=1}^{d+1}AABB(M(i))$ consists only of intervals of
the form $[a,b]$ with $a>b$ (inverted intervals).
###### Proof.
Suppose, for contradiction, that $\cap_{i=1}^{d+1}AABB(M(i))$ contains a
proper (non-inverted) interval. Without loss of generality, let us assume it
is the interval
$[\inf\pi_{1}(\partial D(k)),\sup\pi_{1}(\partial D(l))]$
for some $k,l\in\{1,2,...,d+1\}$ where $D(k)=\cap_{i\neq k}D_{i}$. By
definition of $\cap_{i=1}^{d+1}AABB(M(i))$, we have $\inf\pi_{1}(\partial
D(k))\geq\inf\pi_{1}(\partial D(i))$ for all $i\neq k$ and
$\sup\pi_{1}(\partial D(l))\leq\sup\pi_{1}(\partial D(j))$ for all $j\neq l$.
Now, consider a hyperplane $P:x_{1}=p$ with $\inf\pi_{1}(\partial D(k))\leq
p\leq\sup\pi_{1}(\partial D(l))$. The hyperplane $P$ intersects $D(j)$ for
every $j=1,...,d+1$ (since $P$ cuts through all the boxes $AABB(M(j))$). We
also know that $P\cap D_{i}$ is a $(d-1)$-dimensional disk for each $i$.
Therefore, we have a collection of $d+1$ disks in a $(d-1)$-dimensional
space, every $d$ of which have non-empty intersection, and
by Helly’s Theorem, this collection must have a non-empty intersection. This
implies that $D\neq\emptyset$, which contradicts our assumption that $D$ is
empty. Hence, all intervals in $\cap_{i=1}^{d+1}AABB(M(i))$ must be inverted
intervals of the form $[a,b]$ with $a>b$. ∎
As an example, we provide the lower bounds of the AABB for a system of three
disks in $\mathbb{R}^{d}$; that is, we compute $\inf\pi_{i}(\partial D)$ for
each $i=1,...,d$ (analogously for $\sup\pi_{i}(\partial D)$) with
$D=\cap_{j=1}^{3}D_{j}$. Let $M=\\{D_{1},D_{2},D_{3}\\}$ be a Rips system in
$\mathbb{R}^{d}$, then, for each $i=1,...,d$, the computation of the AABB for
the disk system $M$ is given by:
$\displaystyle\inf\pi_{i}(\partial
D)=\begin{cases}\pi_{i}(c_{k}-r_{k}e_{i})&\text{if $c_{k}-r_{k}e_{i}\in D$}\\
\max_{j<k}\{\inf\pi_{i}(\partial D_{j}\cap\partial D_{k})\}&\text{if (*)}\\
\inf\pi_{i}(\cap_{j=1}^{3}\partial D_{j})&\text{otherwise}\end{cases}$
where (*) denotes the case in which there exists $q\in D$ with
$q\in\partial D_{j}\cap\partial D_{k}$ such that
$\pi_{i}(q)=\max_{j<k}\{\inf\pi_{i}(\partial D_{j}\cap\partial D_{k})\}.$
The preceding calculations are a result of Lemma 4. It is known that the
extremes of the AABB lie in the projections of specific poles, either from a
$d$-sphere, $(d-1)$-sphere, or the $(d-2)$-sphere.
Given a disk system $M$ in $\mathbb{R}^{d}$, we can compute the minimal axis-
aligned bounding box (AABB) for the intersection of all disks in $M$. If the
AABB degenerates to a single point, then the intersection of all disks in $M$
is exactly that point; in particular, the AABB detects when the intersection
reduces to a point. If the AABB of $M$ is
not a point, we can rescale $M$ by a scale factor $\lambda$ such that the AABB
of $M_{\lambda}$ becomes a point. The value of $\lambda$ is referred to as the
Čech scale of the system $M$.
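As a small illustration of this idea (our own, not from the paper), consider a system of two disks: the $\lambda$-scaled pair intersects precisely when the centre distance is at most $\lambda(r_{1}+r_{2})$, so the Čech scale can be approximated by bisection on $\lambda$. The function names below are hypothetical:

```python
import math

def pair_intersects(c1, r1, c2, r2, lam):
    """lam-scaled disks intersect iff the centre distance is at most
    the sum of the scaled radii."""
    return math.dist(c1, c2) <= lam * (r1 + r2)

def cech_scale_pair(c1, r1, c2, r2, tol=1e-9):
    """Bisection for the smallest lam at which the scaled pair meets;
    there the intersection (and hence its AABB) is a single point."""
    lo, hi = 0.0, 1.0
    while not pair_intersects(c1, r1, c2, r2, hi):
        hi *= 2.0              # grow the bracket until it intersects
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if pair_intersects(c1, r1, c2, r2, mid):
            hi = mid
        else:
            lo = mid
    return hi

# Unit disks with centres 3 apart: the Cech scale is 3/2.
print(cech_scale_pair((0.0, 0.0), 1.0, (3.0, 0.0), 1.0))  # ~1.5
```

For larger systems the same bisection applies, with the intersection test replaced by the emptiness check provided by the AABB computation.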
### 4.3. Minimal axis-aligned bounding box Algorithm.
Now, we present an algorithm to calculate the minimal axis-aligned bounding
box (AABB) of a system of $m$ disks in $\mathbb{R}^{d}$. As previously
explained, the AABB’s extremes are defined by projecting the poles of specific
$i$-spheres. Thus, in computing the AABB, we will determine the poles for each
$i$-sphere within the disk system.
Input: A $d$-disk system $M=\{D_{j}\}_{j=1}^{m}$
Output: The minimal AABB for the system $M$
Initialize: $P=\emptyset$
for $k\leftarrow 1$ to $m$ do
    Let $\mathcal{S}$ be the set of $(d-k+1)$-spheres of $\partial M$
    for $S$ in $\mathcal{S}$ do
        for $q\leftarrow 1$ to $d$ do
            Compute the set $\{s_{q}^{\pm}\}$ of $e_{q}$-poles of $S$
            for $s$ in $\{s_{q}^{\pm}\}$ do
                if $s\in\cap_{j=1}^{m}D_{j}$ then
                    Add: $P\leftarrow P\cup\{s\}$
                end if
            end for
        end for
    end for
end for
if $P\neq\emptyset$ then
    for $q\leftarrow 1$ to $d$ do
        $a_{q}=\min_{P}\{\pi_{q}(s_{q}^{-})\}$
        $b_{q}=\max_{P}\{\pi_{q}(s_{q}^{+})\}$
    end for
    return $\prod_{q=1}^{d}[a_{q},b_{q}]$
else
    return (The disk system does not intersect)
end if
Algorithm 3 AABB.minimal
The algorithm starts by initializing the set of poles, $P$, as an empty set.
In each iteration of the first loop, we determine the spheres formed by the
intersection of the boundaries of the disk subcollections in the system $M$
(for this, we require the center and radius, which are computed in Subsection
2.2). Next, we identify the poles of each sphere and if any of them are
present in all the disks of $M$, we add them to the set $P$. If $P$ is not
empty, we proceed to calculate the extremes for each dimension of the AABB
using the set of north poles for the upper bounds and the set of south poles
for the lower bounds. In this case, the output is the product of the intervals
defined by the computed extremes. If $P$ is empty, it indicates that there is
no intersection in the disk system.
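For concreteness, the planar case $d=2$ admits a compact implementation: the candidate poles are the axis-extremal points of each circle together with the pairwise boundary intersection points (the 0-spheres). The following Python sketch is our own illustration with hypothetical function names; in higher dimensions the algorithm would also visit deeper boundary intersections:

```python
import itertools
import math

def circle_pair_points(c1, r1, c2, r2):
    """Intersection points of two circle boundaries in R^2 (their
    0-sphere); empty if the circles do not meet."""
    d = math.dist(c1, c2)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []
    a = (r1 * r1 - r2 * r2 + d * d) / (2 * d)
    h = math.sqrt(max(r1 * r1 - a * a, 0.0))
    mx = c1[0] + a * (c2[0] - c1[0]) / d
    my = c1[1] + a * (c2[1] - c1[1]) / d
    ux, uy = (c2[1] - c1[1]) / d, (c2[0] - c1[0]) / d
    return [(mx + h * ux, my - h * uy), (mx - h * ux, my + h * uy)]

def aabb_minimal(disks, eps=1e-9):
    """Minimal AABB of the common intersection of 2-d disks, built from
    the poles that lie in every disk (None if the system is empty)."""
    cand = []
    for (cx, cy), r in disks:            # e_q-poles of each circle
        cand += [(cx + r, cy), (cx - r, cy), (cx, cy + r), (cx, cy - r)]
    for (c1, r1), (c2, r2) in itertools.combinations(disks, 2):
        cand += circle_pair_points(c1, r1, c2, r2)   # 0-sphere poles
    P = [p for p in cand if all(math.dist(p, c) <= r + eps for c, r in disks)]
    if not P:
        return None   # the disk system does not intersect
    return ((min(p[0] for p in P), max(p[0] for p in P)),
            (min(p[1] for p in P), max(p[1] for p in P)))

# The lens of two overlapping disks of radius 1.5:
box = aabb_minimal([((0.0, 0.0), 1.5), ((1.0, 0.0), 1.5)])
print(box)  # x-range [-0.5, 1.5], y-range [-sqrt(2), sqrt(2)]
```

Here the $x$-extremes come from poles of the individual circles and the $y$-extremes from the 0-sphere of the two boundaries, mirroring the case analysis given above for three disks.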
## 5. Acknowledgements
C.G.E.P acknowledges CONACYT for the financial support provided through a
National Fellowship (CVU-638165).
# A method to coarse-grain multi-agent stochastic systems with regions of multistability

Funding: This work is supported by a grant of the Obra Social La Caixa
Foundation on Collaborative Mathematics awarded to the Centre de Recerca
Matemàtica through a scholarship awarded to D.S. D.S. and T.A. have been
partially funded by the CERCA Programme of the Generalitat de Catalunya. They
also acknowledge MINECO (https://www.ciencia.gob.es/) for funding under
grants MTM2015-71509-C2-1-R and RTI2018-098322-B-I00. D.S. and T.A.
participate in project 2017SGR01735 which was awarded by AGAUR
(https://agaur.gencat.cat/en/inici/index.html) but with no actual funding. The
funders had no role in study design, data collection and analysis, decision to
publish, or preparation of the manuscript. H.M.B. and P.K.M. received no
specific funding for this work.
Daria Stepanova (Centre de Recerca Matemàtica, Bellaterra (Barcelona) 08193,
Spain; Departament de Matemàtiques, Universitat Autònoma de Barcelona,
Bellaterra (Barcelona) 08193, Spain), Helen M. Byrne (Wolfson Centre for
Mathematical Biology, Mathematical Institute, University of Oxford, Oxford
OX2 6GG, UK), Philip K. Maini (Wolfson Centre for Mathematical Biology,
Mathematical Institute, University of Oxford, Oxford OX2 6GG, UK), Tomás
Alarcón (Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona
08010, Spain; Centre de Recerca Matemàtica, Bellaterra (Barcelona) 08193,
Spain; Departament de Matemàtiques, Universitat Autònoma de Barcelona,
Bellaterra (Barcelona) 08193, Spain).<EMAIL_ADDRESS>
###### Abstract
Hybrid multiscale modelling has emerged as a useful framework for modelling
complex biological phenomena. However, when accounting for stochasticity in
the internal dynamics of agents, these models frequently become
computationally expensive. Traditional techniques to reduce the computational
intensity of such models can lead to a reduction in the richness of the
dynamics observed, compared to the original system. Here we use large
deviation theory to decrease the computational cost of a spatially-extended
multi-agent stochastic system with a region of multi-stability by coarse-
graining it to a continuous time Markov chain on the state space of stable
steady states of the original system. Our technique preserves the original
description of the stable steady states of the system and accounts for noise-
induced transitions between them. We apply the method to a bistable system
modelling phenotype specification of cells driven by a lateral inhibition
mechanism. For this system, we demonstrate how the method may be used to
explore different pattern configurations and unveil robust patterns emerging
on longer timescales. We then compare the full stochastic, coarse-grained and
mean-field descriptions via pattern quantification metrics and in terms of the
numerical cost of each method. Our results show that the coarse-grained system
exhibits the lowest computational cost while preserving the rich dynamics of
the stochastic system. The method has the potential to reduce the
computational complexity of hybrid multiscale models, making them more
tractable for analysis, simulation and hypothesis testing.
###### keywords:
Large deviation theory, coarse-graining, phenotype pattern formation,
multiscale modelling, hybrid modelling
60F10 (Large deviations), 92C15 (Developmental biology, pattern formation),
92C42 (Systems biology, networks), 92B05 (General biology and biomathematics),
92-08 (Computational methods for problems pertaining to biology)
## 1 Introduction
When modelling a biological process, one has to make choices on how detailed
the model should be in order to capture the characteristic features of the
system. At the same time, the model should be as simple as possible in order
to facilitate its analysis and numerical simulations. The evolution of systems
with large numbers of agents (e.g. molecules, cells, species) can be described
by the average behaviour of their agents, or their mean-field limits using
(ordinary or partial) differential equations ([6, 9, 34]). Dynamical systems
theory provides methods and techniques for the analysis and numerical
simulations of such systems. This description might become insufficient when
the system comprises agents with internal variables that change in time, thus
altering the agents’ behaviour, or when the system is not ‘large enough’ to be
described accurately by the mean-field equations. For these systems,
stochastic descriptions are employed [38] (for example, continuous time Markov
chains, CTMCs, or stochastic differential equations, SDEs). In biological
systems, the number of agents is finite and some level of noise is always
present which can affect the system dynamics [38]. While exhibiting richer
dynamics than deterministic systems, stochastic models are more
computationally intensive.
Furthermore, in order to formulate a theoretical model of a biological
phenomenon, it is often necessary to account for dynamics that act on
different temporal and/or spatial scales [2, 17]. This has led to the
development of hybrid multiscale models, in which different modelling
techniques may be applied at each scale and then efficient coupling algorithms
are used to integrate these models (see, e.g., [8, 16, 36] and references
therein). In many of these models, individual entities (cells, species, etc.)
are considered as discrete agents which are, themselves, equipped with models
for their internal states determining the behaviour (e.g. subcellular
signalling, cell cycle, response to extracellular stimuli). Such models have
great potential for generating insights into the behaviour of a system (e.g.,
endothelial cell rearrangements [3], cell differentiation and tissue
organization in intestinal crypts [8], and multiscale cancer modelling [10]).
However, they frequently become numerically intractable because of their
complexity (e.g. the internal dynamics of agents) [2]. This limits possible
applications of these models.
Figure 1: Cell phenotype specification. (a) Phenotype (Delta-high and Delta-
low cells) patterning of cells induced by a mechanism of lateral inhibition in
two different domains: a cell monolayer and a branching network. (b) Dynamic
time evolution of phenotype adaptation of an individual cell. Using a
phenotype proxy, e.g. level of Delta, allows for identification of a
continuous cell phenotype. (c) Phenotype switches, as in (b) (dashed vertical
lines), occur due to either a change in a cell’s microenvironment or naturally
present noise in intracellular signalling.
In this work, we explain how to reduce the computational complexity of a
hybrid model by coarse-graining the internal dynamics of its agents when these
are described by a stochastic system with multiple steady states. The method
involves applying large deviation theory (LDT) to reduce the dynamics of the
stochastic system to a continuous time Markov chain (CTMC) on the state space
of its stable steady states. LDT provides a theoretical framework with which
to quantify how small time-dependent fluctuations can lead to significant
deviations from the mean-field behaviour (rare events) such as transitions
between stable steady states which cannot occur in deterministic systems [23].
This approach has previously been used to study rare, noise-induced events in
individual stochastic systems [14, 15, 19, 38, 39, 41], but to our knowledge,
this is its first application to a multi-agent model.
In previous work, we developed a multiscale model of angiogenesis [44], the
process of growth of new blood vessels from pre-existing ones [28], which
accounts for gene expression patterns (phenotypes) of endothelial cells (ECs)
at the subcellular scale. For prescribed levels of extracellular stimuli, the
system is either monostable (i.e. only one cell phenotype exists) or bistable
(i.e. two stable steady states, cell phenotypes, coexist). Cell phenotype is
specified via contact-dependent cross-talk with neighbouring ECs via the VEGF-
Delta-Notch signalling pathway [5, 24]. VEGF, or vascular endothelial growth
factor, is the activating external stimulus; Delta and Notch are transmembrane
ligands and receptors, respectively, which can trans-bind, (i.e. a ligand on
one cell can bind to a receptor on another cell, thus allowing the two cells
to ‘communicate’). Cells adjust their gene expression in order to maintain a
pattern of two distinct phenotypes, Delta-high and Delta-low cells (see
LABEL:PhenotypeSwitch_Motivation_Config and
LABEL:PhenotypeSwitch_Motivation_Trajectory). We use the internal level of
Delta as a proxy to distinguish between the phenotypes. In angiogenesis, the
Delta-high (Delta-low) cells are referred to as tip (stalk) cells [5]. The
number of transmembrane proteins in this signalling pathway is on the order of
thousands for each cell [6]. Therefore, in order to formulate a mathematical
model, it is tempting to use deterministic mean-field equations to describe
the kinetic reactions of this signalling pathway. However, deterministic
descriptions cannot account for noise-induced transitions between stable
steady states or, in the case of this signalling pathway, phenotypic switches,
which can occur in regions of bistability (see
LABEL:PhenotypeSwitch_Motivation_Trajectory and
LABEL:PhenotypeSwitch_Motivation_Switch). Since branching patterns of vascular
networks are affected by the distribution of cells with different phenotypes,
such phenotype transitions are potentially significant. Therefore, we modelled
the subcellular signalling pathways stochastically, which increased the
computational cost of the model. This example illustrates a general problem
associated with computational and, in particular, hybrid models: in order to
preserve emergent features of the system, such as continuous cell phenotypes
and noise-induced phenotype switches, the model becomes computationally
intractable for large lattice simulations.
We illustrate the coarse-graining method by reference to the subcellular model
of the VEGF-Delta-Notch signalling pathway that defines cell phenotype. The
core Delta-Notch signalling pathway plays a key role in phenotype adaptation
in cell types which can form cell monolayers, such as epithelial sheets [35,
43], bristle patterning in Drosophila [11, 30, 13], and neural precursor cells
[22]. In all of these biological processes, the lateral inhibition mechanism
of Delta-Notch signalling generates spatial patterns of cells with alternating
fates (phenotypes). In the VEGF-Delta-Notch model, the stationary distribution
of VEGF serves as an activating extracellular stimulus for the particular case
of endothelial cells. In other cell types, which use the lateral inhibition
mechanism to communicate, the external stimulus may differ from VEGF. In this
paper we perform our simulations for two spatial geometries: a cell monolayer
and a branching network (LABEL:PhenotypeSwitch_Motivation_Config). For our
model of multicellular VEGF-Delta-Notch signalling, we show typical simulation
results of the coarse-grained system which allows us to explore different
configurations of spatial patterns in a single realisation of the model (due
to phenotypic switches). We then demonstrate how this dynamic exploration of
possible patterns may be used to uncover robust patterns emerging at long
timescales. We finally compare the spatio-temporal dynamics and computational
cost of the full stochastic CTMC, the coarse-grained and the deterministic
mean-field descriptions. Our results show that the coarse-grained model, while
preserving the continuous description of cell phenotype and rare events of
phenotype switching, is more computationally efficient than the other two
systems. Thus, it significantly reduces the computational complexity of the
model without sacrificing the rich dynamics of the original stochastic system.
The remainder of the paper is organised as follows. In Section 2, we review
the hybrid (multiscale) modelling approach (Section 2.1) and summarise large
deviation theory (Section 2.2). This provides us with the information needed
to formulate the coarse-grained model in Section 3. In Section 3.1, we start
by coarse-graining the individual agent system and checking the accuracy of
the method. We then extend the technique to a multi-agent system in Section
3.2 where we outline a general algorithm for formulating and simulating the
coarse-grained model. In Section 4, we present typical simulation results for
the model of the VEGF-Delta-Notch signalling pathway (Section 4.2) and compare
the full stochastic, coarse-grained and mean-field systems via metrics which
quantify the spatial patterns formed by the two cell phenotypes and we also
compare computational cost of simulations (Section 4.3). The paper concludes
in Section 5 with a summary of our findings and suggestions for future
research directions.
## 2 Theoretical background
### 2.1 Hybrid models
Biological systems are often highly complex, involving processes that may
interact across multiple spatial and temporal scales (see Figure 2). From a
general perspective, the subcellular scale is characterised by intracellular
chemistry (e.g. gene expression, signal transduction and receptor/ligand
dynamics). Subcellular processes determine behaviour at the cellular scale and
may generate emergent properties at the tissue scale. In addition to this
upward coupling across spatial scales, there is downward coupling whereby
extracellular chemicals and biomechanical cues influence the subcellular
chemistry/mechanics within a cell. In this way, dynamic interactions,
encompassing all the scales, can occur (Figure 2).
Figure 2: A schematic diagram illustrating characteristic spatial and
temporal scales of a typical biological process and coupling between them. The
VEGF-Delta-Notch signalling pathway, which serves as an illustrative example
for application of the CG method, acts at the subcellular scale (highlighted
in blue) on a timescale shorter than other processes (e.g. cell migration,
cell-extracellular matrix interaction at the tissue scale) involved in the
multiscale model of angiogenesis [44]. As a result, we may use LDT to
coarse-grain its dynamics.
From the theoretical perspective, models which consider only processes at a
single spatial/temporal scale do not allow for investigation of emergent
features which manifest at other scales (for example, collective migration or
phenotype patterning which arise from individual cell dynamics and govern
tissue scale organisation). Equally, difficulties associated with the physical
interpretation of parameters in phenomenological models, i.e. large scale
models which capture the overall evolution of a biological process, make it
challenging to fit the model to biological data. In particular, this abstract
parameter construct hinders model calibration/validation and limits potential
applications of the models. Multiscale models, which couple processes at
different spatial and/or temporal scales, have the potential to address these
issues [4].
A challenge in formulating a multiscale model relates to the number of
entities (protein, cells, extracellular components, etc.) that should be
included at each scale of interest. Using the same mathematical formalism to
model processes involving entities which vary in number by several orders of
magnitude may lead to the omission of essential features or make the model
computationally intractable. Hybrid approaches are increasingly being
recognised as suitable tools for trying to overcome problems of this type and
have become a key part of multiscale modelling [16, 17]. The central idea is
to employ the modelling framework most suitable to each subprocess and then to
couple them. For example, the extracellular environment and signalling cues
are usually modelled deterministically due to the large number of proteins
involved. On the other hand, cells may be treated as individual entities,
equipped with a subcellular model which determines their behaviour (e.g.
proliferation, cell polarity and migration). This framework has been used to
develop multiscale models of cancer (see reviews [16, 40] and references
therein), angiogenesis [28], collective cell migration [17], among other
examples [2].
Hybrid modelling allows for efficient parameter estimation and model
visualisation, forging interdisciplinary collaboration between researchers in
theoretical modelling and experimental biology [2, 36]. There is also the
potential of using high-throughput experimental data to develop more detailed
multiscale models. As an example, one of the aspects of biological systems
that has received little attention in theoretical modelling is the effect of
stochasticity in the response of individual entities to external stimuli [17].
Hybrid modelling allows investigation of this effect on the collective,
emergent behaviour. However, increasing computational complexity makes these
models intractable for large-scale simulations [16].
This challenge motivated us to develop a technique which reduces the
computational complexity of a model while preserving its stochasticity. The
method is applicable to systems characterised by stochastic processes which
exhibit multistability and which evolve on timescales shorter than those
associated with other system processes. The example that we study in this
paper is of this type: the subcellular dynamics of cell fate determination via
lateral inhibition (a bistable, stochastic system) act on a shorter timescale
than those associated with, for example, cell migration, and tissue scale
processes such as the dynamics of extracellular soluble factors (e.g.
diffusion, secretion by cells, degradation) [28] (Figure 2). This observation
motivates us to use large deviation theory to coarse-grain the dynamics
associated with intracellular signalling to produce a jump process (i.e. a
Markov chain) on the stable state space of the steady states of the original
system which describes the VEGF-Delta-Notch pathway.
### 2.2 Large deviation theory (LDT)
In the presence of noise, small fluctuations can drive significant deviations
from mean-field behaviour such as, for example, transitions from one stable
steady state to another. These transitions are usually referred to as rare
events since their likelihood is small. LDT is predicated on the assumption
that when rare events occur, the system follows the least unlikely paths.
Deviations from these paths occur with very small probability (i.e. smaller
than the probability of a rare event). Specifically, Freidlin-Wentzell’s
theory of large deviations predicts that the deviations are exponentially
suppressed [23], making such transitions ‘predictable’. LDT provides the means
to analyse the frequency of rare events and to identify the maximum likelihood
path (minimum action path, MAP) along which these transitions can occur.
A stochastic differential equation (SDE) of a diffusion process,
$x^{\epsilon}\in\mathbb{R}^{n}$, has the following form
(1)
$\mathrm{d}x^{\epsilon}(t)=b(x^{\epsilon})\mathrm{d}t+\sqrt{\epsilon}\,\sigma(x^{\epsilon})\mathrm{d}W,$
where $b:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}$ is a drift vector,
$a(x^{\epsilon})=(\sigma\sigma^{T})(x^{\epsilon})$ is a diffusion tensor
($\sigma:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\times\mathbb{R}^{m}$, $m$
corresponds to the number of kinetic reactions in the system), $W$ is a Wiener
process in $\mathbb{R}^{m}$ and $\epsilon=\Omega^{-1}$ is noise amplitude.
The mean-field limit of Equation 1, $x(t)\in\mathbb{R}^{n}$, solves the
following differential equation:
(2) $\frac{\mathrm{d}x}{\mathrm{d}t}=b(x).$
Assume that Equation 2 has two stable steady states,
$x_{1},~{}x_{2}\in\mathbb{R}^{n}$, whose basins of attraction form a complete
partition of $\mathbb{R}^{n}$. We are interested in transitions from
$x_{1}\rightarrow x_{2}$ (and $x_{2}\rightarrow x_{1}$) which cannot be
accounted for unless noise is present in the system.
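To make this concrete, the sketch below (our own toy example, not taken from the paper) integrates an equation of the form of Equation 1 with the Euler-Maruyama scheme for the one-dimensional bistable drift $b(x)=x-x^{3}$, whose stable states are $x=\pm 1$. With $\epsilon>0$ the trajectory hops between the two basins; the mean-field limit never does:

```python
import math
import random

def euler_maruyama(x0, drift, eps, dt=1e-3, n_steps=400_000, seed=1):
    """Euler-Maruyama discretisation of dx = b(x) dt + sqrt(eps) dW."""
    rng = random.Random(seed)
    x, path = x0, [x0]
    noise_scale = math.sqrt(eps * dt)
    for _ in range(n_steps):
        x += drift(x) * dt + noise_scale * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def bistable(x):
    return x - x ** 3          # stable states at x = -1 and x = +1

# Mean-field limit (eps = 0): the trajectory never leaves its basin.
det = euler_maruyama(-1.0, bistable, eps=0.0, n_steps=1_000)

# With noise, rare transitions between the basins do occur.
path = euler_maruyama(-1.0, bistable, eps=0.5)
switched = any(x > 0.8 for x in path) and any(x < -0.8 for x in path)
print(max(det), switched)
```

The noise amplitude here is deliberately large so that switches are frequent; in the small-$\epsilon$ regime of LDT the same transitions still occur, but on the exponentially long timescales quantified below.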
A key player in LDT is the action functional
$S_{T}(\psi)=\begin{cases}\displaystyle\int_{0}^{T}L(\psi,\dot{\psi})\,\mathrm{d}t,&\text{if $\psi\in C(0,T)$ is absolutely continuous and the integral converges,}\\ +\infty,&\text{otherwise,}\end{cases}$
which is computed for a transition path $\psi:[0,T]\rightarrow\mathbb{R}^{n}$
from $x_{1}$ to $x_{2}$ ($\psi(0)=x_{1}$ and $\psi(T)=x_{2}$, $T$ is the
transition time). Here
$L(x,y)=\displaystyle\sup_{\theta\in\mathbb{R}^{n}}\left(\langle
y,\theta\rangle-H(x,\theta)\right)$ is the large deviation Lagrangian, with
$\langle\cdot,\cdot\rangle$ being the Euclidean scalar product in
$\mathbb{R}^{n}$ and $H(x,\theta)$ being the Hamiltonian associated with
$L(x,y)$. The particular form of the Hamiltonian depends on the dynamical
system under consideration (in LABEL:supp-appendix:gMAM, we explain how to
define the Hamiltonian for an SDE such as Equation 1 and a general birth-death
CTMC).
The action functional is used to estimate the probability that a trajectory
$x^{\epsilon}(t)$ lies in a narrow neighbourhood, of width $\delta>0$, of a
given path $\psi\in C(0,T)$ (see Figure 3 for an illustration):
(3) $\operatorname{P}\left\{\sup_{0\leq t\leq T}|x^{\epsilon}(t)-\psi(t)|<\delta~\middle|~x^{\epsilon}(0)=x_{1}\right\}\approx\exp\left(-\epsilon^{-1}S_{T}(\psi)\right).$
Figure 3: An illustration of a transition path between two stable steady
states of an arbitrary bistable system. The two stable steady states, $x_{1}$
and $x_{2}$, are marked by filled red circles; an unstable saddle point is
marked by an unfilled red circle. The transition path, $\psi(t)$, from $x_{1}$
to $x_{2}$ is shown by a thick green line, whereas a single stochastic
trajectory, $x^{\epsilon}(t)$, is indicated by a thin black path. The shaded
blue region indicates a $\delta$-neighbourhood around $\psi(t)$ ($\delta$ as
defined in Equation 3).
Since the probability function in Equation 3 decreases as the action
functional, $S_{T}(\psi)$, increases, the maximum likelihood path, $\psi^{*}$,
is the minimiser of $S_{T}(\cdot)$. This leads naturally to the idea of the
quasipotential:
(4)
$V(x_{1},x_{2})=\displaystyle\inf_{T>0}~{}\inf_{\psi\in\overline{C}_{x_{1}}^{x_{2}}(0,T)}S_{T}(\psi).$
Here $\overline{C}_{x_{1}}^{x_{2}}(0,T)$ is the space of absolutely continuous
functions $f:[0,T]\rightarrow\mathbb{R}^{n}$ such that $f(0)=x_{1}$ and
$f(T)=x_{2}$. Roughly speaking, the quasipotential gives an estimate of how
‘difficult’ it is to move from $x_{1}$ to $x_{2}$.
On timescales which are much longer than those associated with relaxation to a
stable steady state, the dynamics of Equation 1 can be reduced, or coarse-
grained, to that of a CTMC on the state space of the two stable steady
states, $\{x_{1},x_{2}\}$, with transition rates
(5) $k_{x_{1}\rightarrow
x_{2}}\asymp\exp\left(-\epsilon^{-1}V(x_{1},x_{2})\right),\qquad
k_{x_{2}\rightarrow
x_{1}}\asymp\exp\left(-\epsilon^{-1}V(x_{2},x_{1})\right).$
Here $\asymp$ denotes log-asymptotic equivalence so that $f(\epsilon)\asymp
g(\epsilon)$ if and only if
${\lim_{\epsilon\rightarrow 0}\frac{\log f(\epsilon)}{\log g(\epsilon)}=1}$.
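Given the two quasipotentials, the coarse-grained dynamics of Equation 5 form a two-state jump process that can be simulated exactly with the Gillespie algorithm. The sketch below is our own illustration; the quasipotential values are hypothetical placeholders, and the empirical occupancy of $x_{1}$ is compared with the stationary value $k_{x_{2}\rightarrow x_{1}}/(k_{x_{1}\rightarrow x_{2}}+k_{x_{2}\rightarrow x_{1}})$:

```python
import math
import random

def two_state_ctmc_occupancy(k12, k21, t_end, seed=0):
    """Gillespie simulation of the CTMC on {x1, x2}; returns the
    fraction of time spent in state x1."""
    rng = random.Random(seed)
    state, t = 1, 0.0
    time_in = {1: 0.0, 2: 0.0}
    while t < t_end:
        rate = k12 if state == 1 else k21
        dwell = min(rng.expovariate(rate), t_end - t)  # exponential dwell
        time_in[state] += dwell
        t += dwell
        state = 2 if state == 1 else 1                 # jump to the other state
    return time_in[1] / t_end

eps = 0.1
k12 = math.exp(-0.2 / eps)    # hypothetical V(x1, x2) = 0.2
k21 = math.exp(-0.35 / eps)   # hypothetical V(x2, x1) = 0.35
frac1 = two_state_ctmc_occupancy(k12, k21, t_end=1e6)
print(frac1, k21 / (k12 + k21))  # empirical vs stationary occupancy
```

Because each jump costs a single exponential draw rather than many integration steps of Equation 1, simulating the coarse-grained chain is dramatically cheaper than simulating the underlying diffusion.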
In practice, most double minimisation problems, such as Equation 4, do not
have a solution for finite $T>0$. Furthermore, closed-form Lagrangians exist
for SDEs of the type defined by Equation 1 but not for general birth-death
CTMCs. Equation 4 can be reformulated in terms of a Hamiltonian system of the
form
$\frac{\mathrm{d}\phi}{\mathrm{d}t}=\frac{\partial H(\phi,\theta)}{\partial\theta},\qquad\frac{\mathrm{d}\theta}{\mathrm{d}t}=-\frac{\partial H(\phi,\theta)}{\partial\phi}.$
This problem must be solved as a boundary-value problem, i.e. $\phi(0)=x_{1}$
and $\phi(T)=x_{2}$, on an infinite time interval, $T\rightarrow\infty$, [26]
which makes it a non-trivial numerical problem. Thus the traditional LDT
methods are inapplicable in most cases.
One way to resolve these problems is to reformulate the minimisation problem
defined by Equation 4 on the space of curves (i.e. transition paths from one
stable steady state to another). In [29], Heymann and Vanden-Eijnden proved
that the minimisation problem defined by Equation 4, is equivalent to
(6)
$V(x_{1},x_{2})=\displaystyle\inf_{\phi}\widehat{S}(\phi),~~\text{with}~~\widehat{S}(\phi)=\displaystyle\sup_{\begin{subarray}{c}\hat{\theta}:[0,1]\rightarrow\mathbb{R}^{n}\\ H(\phi,\hat{\theta})=0\end{subarray}}\displaystyle\int_{0}^{1}\langle\phi^{\prime},\hat{\theta}\rangle\,\mathrm{d}\alpha,$
where $\phi:[0,1]\rightarrow\mathbb{R}^{n}$ is a curve from $x_{1}$ to $x_{2}$
parametrised by standard arc length.
The geometric reformulation, Equation 6, resolves analytically the issue of
the infinite time, $T$, in the original minimisation problem. Furthermore,
only the Hamiltonian is needed. In this respect, the method is more general as
it can be applied to SDEs, CTMCs and other systems for which the Hamiltonian
is known (see LABEL:supp-appendix:gMAM in Supplementary Material).
In [29], an algorithm was developed to efficiently compute $V(x_{1},x_{2})$
and the corresponding minimiser, $\phi^{*}$, from the geometric reformulation.
The algorithm is known as the geometric minimum action method (gMAM) and the
minimiser, $\phi^{*}$, of the action functional is referred to as the minimum
action path (MAP) (for more details see LABEL:supp-appendix:gMAM).
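As a concrete illustration (not the gMAM implementation used here), for an additive-noise SDE, $dx=b(x)\,dt+\sqrt{\epsilon}\,dW$, the geometric action reduces to $\widehat{S}(\phi)=\int_{0}^{1}\left(|\phi^{\prime}|\,|b(\phi)|-\langle\phi^{\prime},b(\phi)\rangle\right)d\alpha$. For a one-dimensional gradient drift, $b=-U^{\prime}$, the straight-line path is the minimiser, and the quasipotential between a minimum and the adjacent saddle equals $2\left[U(x_{\mathrm{saddle}})-U(x_{\min})\right]$. The Python sketch below checks this numerically for a double-well potential:

```python
import numpy as np

def U(x):
    # Double-well potential with minima at x = -1, +1 and a saddle at x = 0.
    return 0.25 * (x**2 - 1.0)**2

def drift(x):
    # Gradient drift b(x) = -U'(x).
    return -x * (x**2 - 1.0)

# Straight-line path from the left minimum (x = -1) to the saddle (x = 0);
# in one dimension this is the minimising transition path.
alpha = np.linspace(0.0, 1.0, 20001)
phi = -1.0 + alpha
dphi = np.gradient(phi, alpha)

# Geometric action for additive noise: |phi'| |b(phi)| - <phi', b(phi)>,
# integrated with the trapezoidal rule.
integrand = np.abs(dphi) * np.abs(drift(phi)) - dphi * drift(phi)
S = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(alpha)))

print(S)  # ≈ 0.5, matching 2 * (U(0) - U(-1)) = 0.5
```

The uphill segment carries the entire action; a downhill segment (where $\phi^{\prime}$ is parallel to $b$) would contribute zero.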
Once the quasipotential has been computed, the coarse-grained system is given
by a CTMC, with rates defined by Equation 5.
## 3 Coarse-graining (CG)
We now illustrate how the theory described in the previous section can be used
to coarse-grain a specific hybrid multiscale model, one for which the internal
dynamics of the agents are described by multistable stochastic systems. This
property is characteristic of, for example, systems driving cell fate
(phenotype) determination. We begin by using LDT to formulate a CG model for a
system comprising a single agent (here a cell). The subcellular signalling
pathway, which we use to illustrate the method, is the VEGF-Delta-Notch
pathway (see LABEL:supp-appendix:VEGFDeltaNotch in Supplementary Material and
[44] for details). This pathway regulates phenotypic adaptation via lateral
inhibition [12, 35]. This system meets the requirements for application of the
CG technique: (a) it is bistable; its stable steady states are associated with
cellular phenotypes (Delta-high and Delta-low cells); (b) we are interested in
its evolution on timescales longer than the typical time for relaxation to an
equilibrium since other processes (e.g. cell migration and dynamics of
extracellular matrix) act on longer timescales (see Figure 2).
We then extend the method to the general case of multi-agent systems. Here the
dynamics of each entity are coarse-grained to a CTMC on the state space of its
stable states, and coupling between the internal dynamics of individual agents
is achieved via the external variables whose dynamics depend on the states of
neighbouring agents and/or the time evolution of these variables. We outline
below how we apply this method to a monolayer of cells (motivated by phenotype
patterning via the core Delta-Notch pathway in cell monolayers [35]) and a
branching network (angiogenesis-motivated application [44]) that interact via
VEGF-Delta-Notch signalling.
### 3.1 Individual agent system
Figure 4: A flowchart of the procedure used to coarse-grain a multistable
stochastic system for an individual entity. The steady state solutions,
quasipotential and prefactor depend on the model parameters and external
variables, $v\in\mathbb{R}^{V}$ ($V$ indicates the dimension of the vector of
external variables). Here the transition rates, $k_{x_{s}\rightarrow x_{l}}$,
are defined by Equation 7, the prefactor, $C_{x_{s}\rightarrow x_{l}}$, is
determined from Equation 8b, and $\overline{\Omega}$ is given by Equation 9.
Our algorithm for coarse-graining a stochastic system with a region of
multistability involving a single entity is illustrated in Figure 4. For the
particular case of VEGF-Delta-Notch signalling, a cell’s internal state
(phenotype) depends on two model parameters (inputs) corresponding to the
extracellular levels of Delta and Notch,
$v=\left(d_{ext},n_{ext}\right)\in\mathbb{R}^{2}$ (see LABEL:supp-appendix:VEGFDeltaNotch). We fix the values of the model parameters and the
external variables, $v$ (see LABEL:supp-Params). We then use the mean-field
system defined by LABEL:supp-eq:DN_single_nondimensional to compute the steady
state solutions. For this example, the values of the external variables, $v$,
are chosen so that the system is bistable; the two stable steady states
correspond to Delta-high and Delta-low cell phenotypes,
$\{x_{1},x_{2}\}=\{\text{Delta-high},\text{Delta-low}\}$, and the third
steady state is an unstable saddle. Our goal
is to compute the transition rates of the CG system which we approximate as
follows:
(7) $k_{x_{s}\rightarrow x_{l}}\approx C_{x_{s}\rightarrow x_{l}}\exp\left(-\Omega V(x_{s},x_{l})\right),\quad s,l\in\{1,2\},\ s\neq l.$
We note that the prefactor, $C_{x_{s}\rightarrow x_{l}}$, arises from the
asymptotic equivalence relation defined by Equation 5. The system size is
given by $\Omega=\epsilon^{-1}$, where $\epsilon$ is the noise level.
We use the gMAM to compute the quasipotential values and corresponding paths
(MAPs) for transitions between the Delta-high and Delta-low phenotypes (for
more details, see LABEL:supp-appendix:MAP in Supplementary Material). An
illustrative example is shown in Figure 5, where we compare the MAPs and
sample paths of the full stochastic CTMC for an individual cell (see also
LABEL:supp-FullStochasticSystem in LABEL:supp-appendix:VEGFDeltaNotch).
Several characteristic features of the phenotype transitions are noteworthy.
First, the MAP can be split into two parts. The first part is the transition
from the steady state of origin to the saddle point (for example, from the
Delta-low phenotype to the saddle point indicated by the blue circle in
LABEL:MAP_Path_s2t), which is possible only due to the presence of noise; this
part makes the main contribution to the quasipotential. The second part, from
the unstable saddle point to the stable steady state of destination (from the
saddle point indicated by the blue circle to the Delta-high phenotype in
LABEL:MAP_Path_s2t), follows the fastest route, given by the deterministic
heteroclinic orbit connecting the two steady states (i.e. the unstable saddle
and the stable Delta-high cell state). The second noteworthy feature of the
phenotype transitions is that, as the level of noise, $\epsilon$, decreases,
the stochastic sample path follows the MAP more closely (compare
LABEL:MAP_Path_s2t and LABEL:MAP_Path_t2s for which $\Omega=\epsilon^{-1}=70$
and $\Omega=\epsilon^{-1}=450$, respectively).
Figure 5: An illustration of the minimum action paths (MAPs) and stochastic
sample paths for transitions between the Delta-high and Delta-low cell
phenotypes. We computed the MAPs (indicated by the dotted magenta lines) for
the subcellular VEGF-Delta-Notch system in an individual cell using the gMAM
for transitions from (a) Delta-low to Delta-high cell and (b) Delta-high to
Delta-low cell. The stochastic sample paths obtained by simulating the full
stochastic CTMC model (LABEL:supp-FullStochasticSystem) with the system sizes
(a) $\Omega=70$, (b) $\Omega=450$, are plotted in black. The thin grey lines
indicate streamlines of the corresponding mean-field system (LABEL:supp-
eq:DN_single_nondimensional). The Delta-high (Delta-low) cell stable steady
state is indicated by a green (red) filled circle; the unstable saddle by a
blue unfilled circle. The plots represent three-dimensional projections of the
full five-dimensional system as defined by LABEL:supp-
eq:DN_single_nondimensional. Parameter values are fixed as indicated in
LABEL:supp-Params.
To fully determine the CG transition rates, the prefactor value,
$C_{x_{s}\rightarrow x_{l}}$, must be estimated. From Equation 7, for
$s,l\in\{1,2\}$, $s\neq l$, we have
(8a) $\log\langle T^{\Omega}_{x_{s}\rightarrow x_{l}}\rangle\approx\Omega V(x_{s},x_{l})-\log C_{x_{s}\rightarrow x_{l}},$
(8b) $\log C_{x_{s}\rightarrow x_{l}}\approx\Omega V(x_{s},x_{l})-\log\langle T^{\Omega}_{x_{s}\rightarrow x_{l}}\rangle,$
where $\langle T^{\Omega}_{x_{s}\rightarrow
x_{l}}\rangle=1/k_{x_{s}\rightarrow x_{l}}$ is the mean passage time between
the stable steady states, $x_{s}$ and $x_{l}$ (Delta-high and Delta-low
phenotypes), for a fixed value of the system size, $\Omega$. $\langle
T^{\Omega}_{x_{s}\rightarrow x_{l}}\rangle$ can be determined from direct
simulation of the full stochastic model using the reaction kinetics given in
LABEL:supp-FullStochasticSystem.
Figure 6: Convergence of the quasipotential, $V(x_{s},x_{l})$, as the system
size, $\Omega$, increases. We ran 1000 realisations of the stochastic VEGF-
Delta-Notch model for an individual cell (see LABEL:supp-FullStochasticSystem)
for fixed values of $d_{ext}=0.2$, $n_{ext}=0.5$ and increasing system size,
$\Omega$. We plotted the convergence to the quasipotential value (a)
$V(\text{Delta-low},\text{Delta-high})$ and (b) $V(\text{Delta-
high},\text{Delta-low})$ as a function of $\Omega$ (black circle markers). For
these parameter values, transitions from the Delta-low to Delta-high phenotype
are less likely to occur (higher noise levels, $\epsilon=\Omega^{-1}$, and/or
longer transition times are needed) than transitions from the Delta-high to
Delta-low phenotype (see Equation 8a). Consequently, the fluctuations
associated with this rarer transition are smaller and convergence is reached
at higher noise levels (lower $\Omega$); this is why lower values of $\Omega$
in (a) suffice to accurately determine the prefactor value from Equation 8.
The blue dashed lines indicate
the value of the corresponding quasipotential computed via the gMAM; the red
dotted lines indicate $\overline{\Omega}$ from Equation 9. All other parameter
values are fixed as indicated in LABEL:supp-Params.
An accurate estimate of the quasipotential (as obtained via the gMAM) allows
us to obtain the prefactor given the mean passage time, $\langle
T^{\Omega}_{x_{s}~{}\rightarrow~{}x_{l}}\rangle$, for a single value of the
system size, $\Omega$. However, the approximate relation in Equation 8 is
valid in the limit $\Omega\rightarrow\infty$ (see Figure 6). Thus, $\Omega$
should be chosen sufficiently large to achieve convergence in Equation 8 and,
at the same time, not too large in order to ensure that transitions between
the phenotypes occur in a computationally feasible time, since the waiting
times for transitions between stable steady states increase exponentially as
$\Omega$ grows. Specifically, we fix a maximum simulation time, $T_{max}$, and
an average prefactor value, $\bar{C}$ (determined computationally from
simulations), and approximate the corresponding system size,
$\overline{\Omega}$, as:
(9) $\overline{\Omega}\approx\frac{\log T_{max}+\log\bar{C}}{V(x_{s},x_{l})}.$
Then the prefactor, $C_{x_{s}\rightarrow x_{l}}$, can be approximated using
Equation 8b with $\Omega=\overline{\Omega}$.
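The interplay between Equations 8b and 9 can be sketched with placeholder numbers (a hypothetical quasipotential $V$, time budget $T_{max}$ and rough prefactor guess $\bar{C}$; none of these are fitted to the VEGF-Delta-Notch model):

```python
import math

def omega_bar(T_max, C_bar, V):
    # Equation 9: Omega_bar ≈ (log T_max + log C_bar) / V.
    return (math.log(T_max) + math.log(C_bar)) / V

def log_prefactor(omega, V, mean_passage_time):
    # Equation 8b: log C ≈ Omega * V - log <T^Omega>.
    return omega * V - math.log(mean_passage_time)

V = 0.05                    # hypothetical gMAM quasipotential
T_max, C_bar = 1.0e5, 1.0   # simulation-time budget and rough prefactor guess
Om = omega_bar(T_max, C_bar, V)

# Pretend the measured mean passage time at system size Om corresponds to a
# true prefactor of 2; Equation 8b then recovers it.
T_mean = math.exp(Om * V) / 2.0
print(Om, math.exp(log_prefactor(Om, V, T_mean)))  # ≈ 230.26 and ≈ 2.0
```

The sketch illustrates the trade-off: $\overline{\Omega}$ is chosen just large enough that transitions remain observable within the time budget.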
From Equation 8a, we know that $\log\langle
T^{\Omega}_{x_{s}~{}\rightarrow~{}x_{l}}\rangle$ is a linear function of
$\Omega$ whose slope and intercept are given by the quasipotential,
$V(x_{s},x_{l})$, and $\left(-\log C_{x_{s}~{}\rightarrow~{}x_{l}}\right)$,
respectively. Thus, in order to check the accuracy of our estimate for the
system size, $\overline{\Omega}$ (Equation 9), we compared linear fitting of
data obtained from the full stochastic CTMC model for increasing $\Omega$,
with the estimate obtained from the gMAM quasipotential and the prefactor
extracted from simulations with system size, $\overline{\Omega}$. The results
presented in Figure 7 show that the estimates converge as $\Omega$ increases,
confirming the accuracy of the two methods.
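This fitting check can be sketched on synthetic data (hypothetical values of $V$ and $\log C$; in practice the passage times come from simulations of the full CTMC):

```python
import numpy as np

rng = np.random.default_rng(0)
V_true, logC_true = 0.05, 0.7   # hypothetical quasipotential and log-prefactor

# Synthetic data obeying Equation 8a: log<T> = Omega * V - log C (+ small noise).
omegas = np.arange(100, 501, 50, dtype=float)
logT = omegas * V_true - logC_true + rng.normal(0.0, 0.01, omegas.size)

# The slope estimates V; minus the intercept estimates log C.
slope, intercept = np.polyfit(omegas, logT, 1)
V_est, logC_est = slope, -intercept
print(V_est, logC_est)  # close to 0.05 and 0.7
```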
Figure 7: Prefactor estimation. Comparison of prefactor estimates obtained
from simulations of the full stochastic CTMC model (black circles) and
estimates obtained using the gMAM-quasipotential and mean passage times for a
single value of the system size, $\overline{\Omega}$ (blue line), see Equation
8a. The linear fit of the full stochastic data (red line) was performed for
values of $\Omega$ such that the corresponding sample
$\{T^{\Omega}_{x_{s}\rightarrow x_{l}}\}$ is exponentially
distributed (high levels of noise might affect the distribution of these
transitions). Panel (a) corresponds to the transition from Delta-low to Delta-
high phenotype; panel (b) corresponds to the transition from Delta-high to
Delta-low phenotype. The red dotted lines indicate $\overline{\Omega}$ from
Equation 9. All other parameter values are fixed as indicated in LABEL:supp-
Params.
To summarise, we coarse-grain the stochastic VEGF-Delta-Notch dynamics as
follows (see Figure 4):
I. Fix the model parameter values and the vector of external variables, $v$,
which, for this system, is given by the extracellular levels of Delta and
Notch, $v=\left(d_{ext},n_{ext}\right)$.
II. Compute the steady states of the corresponding mean-field system
(LABEL:supp-eq:DN_single_nondimensional).
III. Formulate the CG model:
i. If, for the given $v=\left(d_{ext},n_{ext}\right)$, the system is
monostable (only one of the Delta-high and Delta-low cell steady states
exists), then the quasipotential for transitions into this state is 0, and the
other quasipotential can be taken to be infinite (since the system is
monostable, the reverse transition is impossible). For example, if the only
stable steady state is the Delta-high cell, then
$V(\text{Delta-low},\text{Delta-high})=0$ and
$V(\text{Delta-high},\text{Delta-low})=\infty$. The CG model is defined by its
unique stable steady state.
ii. If the system is within the bistable regime (both Delta-high and Delta-low
steady states are stable), then the CG model is defined as a CTMC on the state
space $\{x_{s},x_{l}\}=\{\text{Delta-high},\text{Delta-low}\}$. The transition
rates are given by Equation 7. The quasipotential, $V(x_{s},x_{l})$, is
approximated using the gMAM; the prefactor, $C_{x_{s}\rightarrow x_{l}}$, is
obtained via Equation 8b from stochastic simulations of the full
VEGF-Delta-Notch model for a fixed value of the system size,
$\overline{\Omega}$, defined by Equation 9.
IV. The CG model can be simulated using any variant of the SSA, such as, for
example, the classical Gillespie algorithm [25].
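For the bistable case, steps III.ii and IV can be sketched as a two-state Gillespie simulation (the quasipotential and prefactor values below are placeholders, not tabulated values from the VEGF-Delta-Notch model):

```python
import math
import random

def cg_rates(omega, V, C):
    # Equation 7: k ≈ C * exp(-Omega * V), one escape rate per stable state.
    return {state: C[state] * math.exp(-omega * V[state]) for state in V}

def gillespie_two_state(rates, t_final, state="Delta-high", seed=1):
    # Classical Gillespie simulation of the coarse-grained two-state CTMC.
    rng = random.Random(seed)
    other = {"Delta-high": "Delta-low", "Delta-low": "Delta-high"}
    t, path = 0.0, [(0.0, state)]
    while True:
        t += rng.expovariate(rates[state])  # waiting time in the current state
        if t > t_final:
            return path
        state = other[state]
        path.append((t, state))

# Placeholder values: V[x_s] stands for V(x_s, x_l), the cost of leaving x_s.
V = {"Delta-high": 0.04, "Delta-low": 0.06}
C = {"Delta-high": 1.0, "Delta-low": 1.0}
path = gillespie_two_state(cg_rates(100, V, C), t_final=1.0e4)
print(len(path))  # initial state plus the recorded phenotype switches
```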
The above method generalises naturally for systems with an arbitrary number of
stable steady states (see Figure 4). In this case, the quasipotential and the
corresponding prefactor must be approximated for each pair of stable steady
states. The method can also be applied to systems which possess other
attractors, e.g. limit cycles [15, 23].
### 3.2 Multi-agent system
Figure 8: A flowchart of the procedure to coarse-grain a multi-agent
stochastic system with a region of multistability. Pseudocode for the
simulation algorithm for the multi-agent CG model is presented in
LABEL:supp-appendix:SimulationAlgorithm. The simulation part of the diagram illustrates
an iteration of the Gillespie algorithm for simulation of multi-agent CG
systems. Here $T_{final}$ stands for the final simulation time;
$\mathrm{Exp}(\lambda)$ is an exponential distribution of intensity,
$\lambda$.
In this section we show how the CG method can be applied to multi-agent
systems with a region of multistability. In this case, the dynamics of each
agent are coarse-grained to those of a CTMC between its stable steady states for
given values of the external variables, $v$, which establish the coupling
between the internal dynamics of individual agents ($v$ depends on the state
of agents in the local environment of the focal agent and/or time, and defines
its internal state, e.g. phenotype). If the dynamics of an individual agent
are independent of its neighbours and time (i.e. the values of the external
variables are constant) then we use the CG method described in Section 3.1
(see also Figure 4). A suitable range of values for the external variables,
$v\in\mathcal{V}$, where $\mathcal{V}\subset\mathbb{R}^{V}$, can be determined
by simulating the original multiscale model. Here $V$ indicates the dimension
of the vector of external variables, $v$. In order to reduce the computational
cost in the multi-agent CG system, it is convenient to calculate a priori
look-up tables for the steady states, quasipotential and prefactor values for
a discretisation, $\left\\{v_{j}\right\\}_{j\in\mathcal{J}}\subset\mathcal{V}$
(here, $j$ indexes entries in the generated discretisation; $\mathcal{J}$ is
the size of the discretisation). Interpolation routines can then be used to
establish an input-output relationship between an arbitrary $v\in\mathcal{V}$
and the values of the corresponding steady states and the transition rates
between them. Therefore, we split the general CG method for multi-agent
systems into two steps (see Figure 8):
(i) Pre-simulation: calculate look-up tables for the system steady states,
quasipotential and prefactor values for each entry in a discretisation,
$\{v_{j}\}_{j\in\mathcal{J}}$, for a range of values of the external
variables, $\mathcal{V}\subset\mathbb{R}^{V}$.
(ii) Simulation: the CG model is simulated (via, e.g., the Gillespie
algorithm) as a CTMC on a state space defined by the steady states of all of
its entities, with the coupling maintained via the external variables, $v$,
updated at each simulation step according to entities’ local environments
and/or time.
We now provide more details on the pre-simulation and simulation steps.
#### 3.2.1 Pre-simulation: look-up tables
Pre-computed look-up tables of system steady states, quasipotential and
prefactor values are used to interpolate the values of the system steady
states and the CG transition rates between them for an arbitrary set of values
of the external variables, $v$, without calculating them explicitly at each
step during simulations of the CG model. In a general setting, the dimension
of each table is equal to $V$, the dimension of the vector of external
variables.
The steady states must be computed numerically for each entry $v_{j}$ in the
discretisation, $\left\\{v_{j}\right\\}_{j\in\mathcal{J}}$, using the mean-
field limit for an individual entity (as described in Section 3.1). For values
of $v_{j}$ that fall within the multistability region, the quasipotential is
computed via the gMAM in a pair-wise manner, for each pair of stable steady
states, $\{x_{s}\}_{s=1}^{\mathcal{S}}$. The last look-up table
corresponds to the prefactor, $C_{x_{s}\rightarrow x_{l}}$,
$s,l\in\{1,\ldots,\mathcal{S}\}$, which must be approximated for each $v_{j}$ within the
multistability region. The prefactor values are obtained from Equation 8b as
before, using the mean passage times, $\langle
T^{\Omega}_{x_{s}~{}\rightarrow~{}x_{l}}\rangle$, which are determined by
simulating the full stochastic model with the system size,
$\Omega=\overline{\Omega}$, defined by Equation 9.
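The look-up-table idea can be sketched as follows, with a smooth stand-in function in place of the real tabulated quantities and a hand-rolled bilinear interpolant over a regular grid:

```python
import numpy as np

def table_fn(d_ext, n_ext):
    # Hypothetical smooth stand-in for a tabulated quantity (e.g. a log-rate)
    # as a function of v = (d_ext, n_ext); the real tables come from the
    # mean-field solver, the gMAM and stochastic prefactor estimates.
    return np.exp(-3.0 * d_ext) * (1.0 + n_ext)

# Pre-simulation: tabulate on a regular grid covering V = [0, 1] x [0, 1].
d_grid = np.linspace(0.0, 1.0, 101)
n_grid = np.linspace(0.0, 1.0, 101)
table = table_fn(d_grid[:, None], n_grid[None, :])

def bilinear(d, n):
    # Bilinear interpolation of the look-up table at an arbitrary (d, n).
    i = max(min(np.searchsorted(d_grid, d) - 1, d_grid.size - 2), 0)
    j = max(min(np.searchsorted(n_grid, n) - 1, n_grid.size - 2), 0)
    td = (d - d_grid[i]) / (d_grid[i + 1] - d_grid[i])
    tn = (n - n_grid[j]) / (n_grid[j + 1] - n_grid[j])
    return ((1 - td) * (1 - tn) * table[i, j] + td * (1 - tn) * table[i + 1, j]
            + (1 - td) * tn * table[i, j + 1] + td * tn * table[i + 1, j + 1])

print(bilinear(0.205, 0.513), table_fn(0.205, 0.513))  # nearly identical
```

At simulation time, each table query costs a constant number of array look-ups, independently of how expensive the underlying gMAM or stochastic computations were.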
#### 3.2.2 Simulation algorithm
Once all the look-up tables have been computed, the multi-agent CG system can
be simulated using the standard Gillespie algorithm (or one of its variants,
e.g., the Next Subvolume method [20]), in which the total propensity, $P$, at
each time step is computed as the sum of the rates,
$k^{e}_{x_{s}\rightarrow x_{l}}$, for each entity, $e$, to switch its (stable)
state (see Figure 8). The steady
states corresponding to each entity (and the transition rates between them)
for the exact value of the external variables, $v^{e}\in\mathcal{V}$, ($v^{e}$
has to be computed for each entity, $e$, according to its microenvironment)
are interpolated via appropriate numerical routines. We present pseudocode for
the simulation procedure in LABEL:supp-appendix:SimulationAlgorithm.
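One iteration of this loop might look as follows (a Python sketch; `switch_rate` is a hypothetical stand-in for rates interpolated from the look-up tables):

```python
import random

def switch_rate(state, neighbour_states):
    # Hypothetical stand-in for a rate interpolated from the look-up tables:
    # here, a Delta-high cell surrounded by Delta-high cells is destabilised.
    n_high = sum(1 for s in neighbour_states if s == "Delta-high")
    return 0.1 * (1 + n_high) if state == "Delta-high" else 0.1 / (1 + n_high)

def gillespie_step(states, adjacency, rng):
    # One iteration of the multi-agent CG loop (cf. Figure 8): total
    # propensity, exponential waiting time, then pick one entity to switch
    # with probability proportional to its rate.
    rates = [switch_rate(states[e], [states[m] for m in adjacency[e]])
             for e in range(len(states))]
    P = sum(rates)
    tau = rng.expovariate(P)        # waiting time ~ Exp(P)
    r, acc = rng.random() * P, 0.0
    for e, k in enumerate(rates):
        acc += k
        if acc >= r:
            break
    states[e] = "Delta-low" if states[e] == "Delta-high" else "Delta-high"
    return tau

rng = random.Random(2)
states = ["Delta-high", "Delta-low", "Delta-high", "Delta-low"]
adjacency = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # a four-cell ring
t = sum(gillespie_step(states, adjacency, rng) for _ in range(10))
print(t, states)
```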
Note that our CG method does not account for the initial, relatively short
(compared to the LDT timescale), relaxation time during which the system
relaxes onto the timescale on which the CG approximation is valid. Thus, it is
necessary to obtain an initial stable steady state configuration, i.e to pre-
pattern the system, using either the full stochastic CTMC or the mean-field
model (see Figure 8 and line 5 in LABEL:supp-SimulationAlgorithm). The final
simulation time for the pre-patterning should be large enough to ensure that
the system relaxes to an equilibrium. Since this procedure is performed only
once, it does not affect the computational complexity of the CG simulations.
We have chosen to use the mean-field system to pre-pattern our simulations
since it is less time-consuming and the stochasticity (i.e. transitions
between phenotypes) is preserved later in the CG simulation loop.
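The pre-patterning step can be sketched with a toy lateral-inhibition mean-field model relaxed by explicit Euler (the right-hand side below is a hypothetical stand-in for the full VEGF-Delta-Notch mean-field system):

```python
import numpy as np

def rhs(d, neighbour_mean):
    # Toy lateral-inhibition mean field: a cell's Delta production is
    # repressed by the average Delta level of its neighbours (a stand-in for
    # the full VEGF-Delta-Notch mean-field system).
    return 1.0 / (1.0 + 10.0 * neighbour_mean**2) - d

# Explicit Euler relaxation on a ring of 12 cells, starting near-uniform.
rng = np.random.default_rng(3)
d = 0.5 + 0.01 * rng.standard_normal(12)
dt = 0.01
for _ in range(20000):
    nb = 0.5 * (np.roll(d, 1) + np.roll(d, -1))  # nearest-neighbour average
    d = d + dt * rhs(d, nb)

print(np.round(d, 2))  # a non-uniform pattern of high and low Delta levels
```

The homogeneous state is unstable to alternating perturbations in this toy model, so the relaxation produces a contrasting high/low pattern suitable as an initial condition for the CG loop.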
## 4 Results
For illustrative purposes, we consider the specific example of spatial
phenotype patterning via the Delta-Notch lateral inhibition mechanism in
response to an external signalling cue (VEGF). First, we provide more details
about our implementation of the CG model and present typical simulation
results and the robust patterns that emerge at long times. We then discuss the
relative merits of the CG method, using a variety of metrics to compare its
performance with the original stochastic and mean-field systems. We used the
Next Subvolume method [20] for simulations of the full stochastic CTMC and the
explicit Euler method for the numerical integration of the mean-field
equations.
### 4.1 CG model of spatial cell phenotype patterning
The multicellular VEGF-Delta-Notch (i.e. the Delta-Notch signalling pathway
coupled with external VEGF stimulation) model is bistable (see LABEL:supp-
appendix:VEGFDeltaNotch in Supplementary Material). When simulated in a two-
dimensional geometry, it produces ‘salt-and-pepper’ patterns in which the
phenotypes of neighbouring cells alternate between Delta-high and Delta-low
states [44]. For this model, cross-talk between individual cells is achieved
via external variables, $d_{ext}$ and $n_{ext}$, which represent the levels of
Delta and Notch, respectively, summed over cells in a circular neighbourhood
with a fixed interaction radius, $R_{s}$ (see Figure 9 and LABEL:supp-
appendix:VEGFDeltaNotch). Hence, for this system,
$v=\left(d_{ext},n_{ext}\right)$ defines a cell’s internal state (phenotype)
and the dimension of the pre-computed look-up tables is 2 (see Section 3.2.1).
We determined a suitable range,
$\mathcal{V}=\left[0,d_{ext}^{max}\right]\times\left[0,n_{ext}^{max}\right]\subset\mathbb{R}^{2}$,
for these variables by running 100 realisations of the multiscale model of
angiogenesis (the number of realisations depends on the model of interest).
Figure 9: A schematic diagram showing the non-local interactions in the
multicellular VEGF-Delta-Notch model. Cell-to-cell interactions may be non-
local (i.e. beyond immediate neighbours on a given lattice) provided they lie
within an interaction radius, $R_{s}$. The diagram illustrates the weights of
interactions between the focal cell (highlighted in blue) and cells in its
neighbourhood, for a regular hexagonal lattice (the weights are defined as a
normalised area of the overlap between a neighbouring voxel and the circular
neighbourhood of the focal cell, see LABEL:supp-ExtDN in LABEL:supp-
appendix:VEGFDeltaNotch).
We then generated a regular discretisation of $\mathcal{V}$,
$\{v_{j}\}_{j\in\mathcal{J}}$, on a $100\times 100$ grid. For
each $v_{j}$ in this grid, we computed the steady states for the mean-field
limit defined by LABEL:supp-eq:DN_single_nondimensional using non-linear
solvers from the C++ GNU Scientific Library (GSL). We note that, once the
steady states of the full system have been computed, the subcellular variables
$\iota$, $r_{2}$ and $r_{2}^{*}$, corresponding to the Notch intracellular
domain, VEGF receptor 2 (VEGFR2) and VEGF-VEGFR2 complexes, respectively, (see
definitions in LABEL:supp-appendix:VEGFDeltaNotch in Supplementary Material)
are redundant; it is not necessary to track these variables because the input-
output relationship between $v=\left(d_{ext},n_{ext}\right)$, and the steady
states completely defines the configuration of the system.
Figure 10: An illustration of the quasipotential surfaces. Upper panels: a
noise-induced transition from Delta-high (in magenta) to Delta-low (in black)
phenotype of a single cell during a simulation of the angiogenesis model [44]
plotted as a function of the focal cell’s (a) Delta and (b) Notch levels. The
external Delta, $d_{ext}$, (Notch, $n_{ext}$) for the focal cell is computed
as a weighted sum of the Delta (Notch) levels of its neighbours as defined by
LABEL:supp-MulticellularMeanField. Lower panels: 2D projections of the
quasipotential surfaces (c) $V(\text{Delta-low},\text{Delta-high})$ and (d)
$V(\text{Delta-high},\text{Delta-low})$ as functions of $d_{ext}$ and
$n_{ext}$. The monostability region in which the unique stable steady state
corresponds to a Delta-high (Delta-low) cell is coloured green (red). The
colour bar indicates the value of the corresponding quasipotential. The
trajectory (as in panels (a) and (b)) plotted on the quasipotential surfaces
(in (c) and (d)) illustrates that phenotype switches are more likely to occur
for lower values of the quasipotential. Parameter values are fixed as
indicated in LABEL:supp-Params.
For values of $v_{j}$ that fall within the bistability region, we computed the
quasipotential values of the transitions between phenotypes (see Figure 10),
using the gMAM (see LABEL:supp-appendix:gMAM in Supplementary Material). We
also used the full stochastic system to identify the values of the
quasipotential for which a phenotype switch is more likely to occur. As
expected, most phenotype transitions occur close to the boundary of the
bistability region, where values of the quasipotential are lower. For example,
LABEL:QasipotentialIllustration_trajectory_D and
LABEL:QasipotentialIllustration_trajectory_N show a sample path of the full
stochastic system for an individual cell during a simulation of the multi-
agent model [44]. The cell undergoes a noise-induced switch from a Delta-high
to a Delta-low phenotype. LABEL:QasipotentialIllustration_s2t and
LABEL:QasipotentialIllustration_t2s show the same sample path projected onto
the quasipotential surfaces. These plots show that phenotypic switches are
more likely to occur when the values of external Delta and Notch,
$\left(d_{ext},n_{ext}\right)$, are such that the quasipotential,
$V(x_{1},x_{2})=V(\text{Delta-high},\text{Delta-low})$, is small.
We constructed a look-up table of prefactor values, $C_{x_{s}\rightarrow x_{l}}$,
$x_{s},x_{l}\in\{\text{Delta-high},\text{Delta-low}\}$, by
approximating the mean passage times, $\langle
T^{\overline{\Omega}}_{x_{s}\rightarrow x_{l}}\rangle$, (sample size of 1000
realisations) for an individual cell to switch its phenotype from simulations
of the full stochastic CTMC (LABEL:supp-FullStochasticSystem) with the system
size, $\overline{\Omega}$, given by Equation 9.
We then implemented the CG model in C++ using LABEL:supp-SimulationAlgorithm.
In order to establish an input-output relationship between an arbitrary
$v=\left(d_{ext},n_{ext}\right)$ and the corresponding cell phenotypes and
transition rates, we used bilinear interpolation routines from the C++ GNU
Scientific Library (GSL) (gsl_interp2d routines). The model was then simulated
using the standard Gillespie algorithm. In all our simulations, we used
no-flux boundary conditions when computing the extracellular levels of Delta
and Notch for each cell.
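The neighbourhood computation can be sketched as follows (the linear distance-decay weights below are a simplified stand-in for the normalised overlap-area weights used on the hexagonal lattice; border cells simply have fewer neighbours within $R_{s}$, consistent with no-flux conditions):

```python
import numpy as np

def external_delta(delta, positions, R_s):
    # d_ext for each cell: weighted average of neighbours' Delta within the
    # interaction radius R_s. The linear distance-decay weights are a
    # simplified stand-in for the normalised overlap-area weights.
    n = len(positions)
    d_ext = np.zeros(n)
    for i in range(n):
        r = np.linalg.norm(positions - positions[i], axis=1)
        mask = (r > 0.0) & (r <= R_s)   # exclude the focal cell itself
        w = 1.0 - r[mask] / R_s
        if w.sum() > 0.0:
            d_ext[i] = np.dot(w / w.sum(), delta[mask])
    return d_ext

# A 4 x 4 square lattice of cells with spacing h = 5 (microns) and R_s = 15,
# so interactions extend beyond immediate neighbours; border cells simply
# have fewer neighbours, mimicking no-flux boundary conditions.
h = 5.0
xs, ys = np.meshgrid(np.arange(4) * h, np.arange(4) * h)
positions = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
delta = np.tile([1.0, 0.0], 8)          # striped Delta levels for illustration
d_ext = external_delta(delta, positions, R_s=15.0)
print(np.round(d_ext, 2))
```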
### 4.2 Spatial patterning in the CG model
In order to illustrate the CG model, we first ran numerical simulations on a
small cell monolayer ($10\times 12$ voxels). The results presented in
LABEL:PatternConfigurations_S1, LABEL:PatternConfigurations_S2,
LABEL:PatternConfigurations_S3, and LABEL:PatternConfigurations_S4 show how
the distribution of Delta-high and Delta-low cells changes over time during a
typical CG realisation (see also Movie S1). Starting from an initial pre-
pattern (LABEL:PatternConfigurations_S1), noise-induced phenotype transitions
enable the system to explore different pattern configurations for the given
geometry, while the proportion of Delta-high cells remains on average constant
(see LABEL:PatternConfigurations_TipProportion).
Figure 11: Different pattern configurations explored by the CG model. (a)-(d)
Series of plots showing how the distribution of cell phenotypes changes over
time during a single simulation of the CG model. The colour bar indicates the
level of Delta. (a) $t=0$; (b) $t=40$; (c) $t=260$; (d) $t=410$ minutes. (e)
Time evolution of the Delta-high cell proportion (defined as a ratio of cells
with the Delta-high phenotype to the total cell number) for a single
simulation of the CG model (blue line) and averaged over $1000$ realisations
(red line). For these simulations, the interaction radius and system size were
fixed at $R_{s}=15\mu m$ and $\Omega=100$, respectively; the values of the
remaining parameters were fixed as indicated in LABEL:supp-Params.
The mean proportion of Delta-high cells (and, thus, the spatial pattern)
during simulations of the CG system depends on the interaction radius,
$R_{s}$. For values of $R_{s}$ corresponding to nearest-neighbour interactions
($R_{s}\leq 1.5h$, where $h$ is the voxel width), we observe classical
patterns of alternating Delta-high and Delta-low cells (i.e. the so-called
salt-and-pepper pattern [12]; see LABEL:supp-PatternVaryingR10). As $R_{s}$
increases, the number of Delta-low cells that may be inhibited by a focal
Delta-high cell increases, causing the proportion of Delta-high cells in the
spatial patterns to decrease [44]. Thus, for larger values of $R_{s}$
($R_{s}>1.5h$), Delta-high cells are separated by larger distances (see
LABEL:supp-PatternVaryingR20, LABEL:supp-PatternVaryingR30, and LABEL:supp-
PatternVaryingR40). These results for CG simulations are consistent with those
obtained for the full multicellular stochastic model of the VEGF-Delta-Notch
signalling pathway [44]. The ability of the CG system to explore different
spatial patterns increases as the size of the interaction radius, $R_{s}$,
grows, and the corresponding emerging patterns are more diverse (see
LABEL:PatternConfigurations_S1, LABEL:PatternConfigurations_S2,
LABEL:PatternConfigurations_S3, LABEL:PatternConfigurations_S4, LABEL:supp-
PatternVaryingR20, LABEL:supp-PatternVaryingR30, and LABEL:supp-
PatternVaryingR40).
It is noteworthy that spatial patterns explored in simulations of the CG model
differ in their robustness to noise. In particular, the mean waiting time for
a phenotype switch (and thus for a change in the pattern), which is equal to
the inverse of the total propensity, $P$, depends on the values of the
quasipotential, $V(x_{s},x_{l})$, for all entities in the system. Here,
the total propensity, $P$, for a phenotype switch event is defined as a sum of
transition rates, $k^{e}_{x_{s}\rightarrow x_{l}}$, for each cell with index,
$e$, to change its state from $x_{s}$ to $x_{l}$, see Figure 8. When, via
random exploration, the system finds a configuration for which the values of
$V(x_{s},x_{l})$ are larger, the waiting time for a phenotype switch increases
and the configuration is more resilient to further changes.
This feature of the CG method facilitates exploration of new robust spatial
patterns which cannot practically be achieved using other numerical
frameworks: (i) simulations of the full stochastic model are too
computationally intensive, which makes the exploration of these patterns
infeasible because of the longer timescales needed; (ii) the deterministic
framework does not allow for transitions between stable steady states, which
makes this exploration impossible; (iii) the complexity of analytic methods
needed to verify the stability of a pattern of a system with non-local
interactions does not permit exploration of complex pattern configurations
[37].
Figure 12: Emergence of robust pattern configurations in simulations of the CG
model. At long times, via exploration of different pattern configurations, the
dynamics of the CG system evolve to a robust pattern in which any further
phenotype switches are unlikely. (a) A typical emergent pattern for a single
realisation of the CG model (the colour bar indicates the level of Delta, $d$,
for each cell). (b) The time evolution of the total propensity, $P$, for a
phenotype switch to occur. Cells in the border rim (three-cell width) are
excluded from $P$ since, due to the model geometry, they do not possess a
‘robust’ configuration of neighbours. As $P$ decreases to $0$, the waiting
time for a phenotype switch to occur approaches infinity, and the pattern
becomes more robust to change. (c)-(d) The dynamics of an individual cell
(outlined in cyan in (a)) during this simulation. (c) Temporal evolution of
the internal level of Delta, $d$, (defining cell phenotype: high (low) values
of $d$ correspond to Delta-high (Delta-low) phenotype) and that in its
microenvironment, $d_{ext}$. (d) Temporal evolution of transition rates for a
phenotype switch for this cell. We note that the large difference in order of
magnitude between the total propensity, $P$, of the lattice ($O(10^{6})$),
plot (b), and the transition rates of an individual cell
($O(10^{-17})-O(10^{-3})$), plot (d), arises from the contribution to $P$ of
transition rates of cells which, for the given values of the external
variables, lie on the border of the bistability region (see Figure 10). For these
simulations, the interaction radius and system size were fixed at $R_{s}=15\mu
m$ and $\Omega=1000$, respectively; the values of all remaining parameters
were fixed as indicated in LABEL:supp-Params.
We now present simulation results which illustrate the ability of the CG
method to uncover new spatial patterns for the VEGF-Delta-Notch system at long
times. We fixed the interaction radius at $R_{s}=3.0h=15\mu m$ ($h=5\mu m$ is
the voxel width), so that interactions occur between cells that are first and
second order neighbours in the lattice; the noise amplitude was fixed at
$\epsilon=\Omega^{-1}=0.001$. We ran a CG simulation on a medium size
monolayer of cells (see LABEL:OptimalPattern_CellHighlighted and Movie S2).
Starting from the initial pre-pattern, the CG model explores various patterns
until it eventually settles on a more robust configuration (shown in
LABEL:OptimalPattern_CellHighlighted). In order to confirm our prediction
regarding pattern robustness, we plotted the temporal evolution of the total
propensity of the lattice, $P$, in LABEL:OptimalPattern_TotalPropensity. As
its value decreases, $P\rightarrow 0$, the mean waiting time for a change in
the spatial pattern becomes infinite, which accounts for the robustness of the
emerging pattern. We also considered the dynamics of an individual cell (its
position in the monolayer is highlighted by a cyan line in
LABEL:OptimalPattern_CellHighlighted). LABEL:OptimalPattern_DeltaLevel shows
how the phenotype of this cell changes over time: at early times, the cell
switches between Delta-high and Delta-low phenotypes (low (high) values of
subcellular Delta, $d$, correspond to Delta-low (Delta-high) phenotype). As
the spatial pattern settles to a robust configuration, the cell’s environment,
i.e. the levels of Delta of its neighbours, $d_{ext}$, stop changing and the
cell acquires a Delta-high phenotype that remains unchanged for the rest of
the simulation. The transition rates for phenotype switches for this cell
(LABEL:OptimalPattern_TransitionRate) exhibit similar dynamics to the total
propensity, $P$, of the whole lattice (LABEL:OptimalPattern_TotalPropensity).
Our CG simulation results show that this robust pattern configuration is not
unique. However, we note that the spatial patterns tend to have a regular
structure; for example, Delta-high cells may be organised in similar clusters
comprising two or three cells as in the pattern shown in
LABEL:OptimalPattern_CellHighlighted. These configurations have lower values
of the total propensity, $P$. Cells on the border of the lattice undergo
phenotype switches (see Movie S2), since they cannot attain this ‘more robust’
combination of neighbours for the given geometry (since we use no-flux
boundary conditions in our simulations).
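To illustrate how the interaction radius enters the multicellular model, the sketch below enumerates the lattice neighbours strictly within $R_{s}$ of a cell and averages their Delta levels as a proxy for the extracellular variable $d_{ext}$. The plain averaging and the periodic wrapping are simplifying assumptions for illustration (the simulations above use no-flux boundaries, and the precise definition of $d_{ext}$ is given in the Supplementary Material).

```python
import numpy as np

def neighbours_within(Rs, h=5.0):
    """Lattice offsets (di, dj) whose centre-to-centre distance is
    strictly within the interaction radius Rs (voxel width h).
    For Rs = 15 and h = 5 this yields the 24 first- and second-order
    neighbours."""
    m = int(Rs // h) + 1
    return [(di, dj)
            for di in range(-m, m + 1)
            for dj in range(-m, m + 1)
            if (di, dj) != (0, 0) and np.hypot(di * h, dj * h) < Rs]

def d_ext(d, i, j, offsets):
    """Mean Delta level over the neighbours of cell (i, j): an
    illustrative proxy for the extracellular variable d_ext.
    Periodic wrapping is used here for brevity."""
    n, m = d.shape
    return float(np.mean([d[(i + di) % n, (j + dj) % m]
                          for di, dj in offsets]))
```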
### 4.3 Comparison of the full stochastic, coarse-grained and mean-field
frameworks
We compared the dynamics of the multicellular VEGF-Delta-Notch model using
three frameworks: (i) full stochastic CTMC, (ii) CG, and (iii) mean-field
descriptions. When simulated on a 2D domain using any of these frameworks, the
model produces a characteristic pattern of ECs with two cell phenotypes (see,
for example, Figures 11 and LABEL:supp-PatternVaryingR). Since the CG
approximation describes the long-term behaviour of the system, when its
evolution is dominated by the timescale associated with phenotypic switches,
it does not account for the initial relaxation onto a quasi-steady state
pattern. Thus, the three frameworks cannot be compared with respect to their
behaviour at early evolution times. Instead, we quantified the final pattern
and the computational cost of simulations. The final simulation time,
$t=T_{final}$, was chosen sufficiently large to ensure that a steady state
pattern had been established for the mean-field simulations (since stochastic
systems do not have a steady state pattern in a classical sense). In order to
systematically compare the three frameworks, we used the same final simulation
time, $t=T_{final}$, for the other two systems.
We used the following set of metrics to compare the dynamics of the three
mathematical descriptions (Supplementary Material):
* •
Delta-high cell proportion, which is defined as the ratio of the number of
cells with Delta-high phenotype to the total number of cells in the system;
* •
distribution of Delta-high cell clusters, which provides a breakdown of sizes
of Delta-high cell clusters (adjacent cells with Delta-high phenotype, e.g., a
single Delta-high cell, two adjacent Delta-high cells, etc.) in a steady
pattern configuration;
* •
computational cost, which is defined as the average CPU time (in seconds) to
perform a single realisation of model simulation.
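The first two metrics can be computed directly from a snapshot of the lattice. The sketch below (an illustration, not the paper's code) takes a boolean field of phenotypes and extracts cluster sizes by flood fill; 4-connectivity is an assumption, as the text does not specify the adjacency used.

```python
from collections import Counter, deque
import numpy as np

def pattern_metrics(high):
    """Return (Delta-high cell proportion, Counter of cluster sizes)
    for a boolean lattice `high` (True = Delta-high phenotype).
    Clusters are connected groups of adjacent Delta-high cells;
    4-connectivity is assumed here for illustration."""
    high = np.asarray(high, dtype=bool)
    n, m = high.shape
    proportion = float(high.sum()) / high.size
    seen = np.zeros_like(high)
    sizes = Counter()
    for i in range(n):
        for j in range(m):
            if high[i, j] and not seen[i, j]:
                size, queue = 0, deque([(i, j)])
                seen[i, j] = True
                while queue:                      # flood fill one cluster
                    x, y = queue.popleft()
                    size += 1
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        u, v = x + dx, y + dy
                        if 0 <= u < n and 0 <= v < m and high[u, v] and not seen[u, v]:
                            seen[u, v] = True
                            queue.append((u, v))
                sizes[size] += 1
    return proportion, sizes
```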
Since the pre-calculated look-up tables for the CG simulations (Section 3.2.1)
were computed for a fixed set of model parameters (see LABEL:supp-Params), we
held them fixed for all simulations. However, the cell-to-cell interaction
radius, $R_{s}$, which is used in the multicellular simulations to determine
for each cell, $e$, the vector of extracellular variables,
$v^{e}=\left(d^{e}_{ext},n^{e}_{ext}\right)$, may vary. In our simulations, we
used $R_{s}\in\left\\{5,~{}7.5,~{}10,~{}12.5,~{}15\right\\}~{}\mu m$, values
consistent with experimental observations of the distance over which cell-to-
cell interaction can occur in endothelial cells [18] (up to three cells in the
interaction circle). Nonetheless, from a theoretical
point of view, this quantity can take any value greater than the half-width of
a voxel, $R_{s}>0.5h$, where $h$ is the voxel width (we fix $h=5\mu m$ in our
simulations). In addition, for the full stochastic CTMC and CG descriptions,
we vary the noise amplitude, $\epsilon=1/\Omega$, by changing the system size
parameter, $\Omega$. We used
$\Omega\in\left\\{50,~{}100,~{}200,~{}500,~{}1000\right\\}$. The larger the
value of $\Omega$, the closer will be the dynamics of a stochastic system to
its mean-field description. For each numerical setup ($R_{s}$ and $\Omega$),
we ran 100 realisations.
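A minimal driver for this sweep might look as follows; `run_realisation` is a hypothetical placeholder standing in for a single CG (or CTMC) simulation and is not implemented here.

```python
import itertools

# Parameter grids from the text; Rs in micrometres, noise eps = 1/Omega.
RS_VALUES = [5.0, 7.5, 10.0, 12.5, 15.0]
OMEGA_VALUES = [50, 100, 200, 500, 1000]
N_REALISATIONS = 100

def sweep(run_realisation):
    """Run N_REALISATIONS realisations for every (Rs, Omega) pair.
    `run_realisation(Rs, Omega, seed)` is a hypothetical callback
    standing in for one model simulation."""
    return {(Rs, Omega): [run_realisation(Rs, Omega, seed=k)
                          for k in range(N_REALISATIONS)]
            for Rs, Omega in itertools.product(RS_VALUES, OMEGA_VALUES)}
```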
We considered two simulation geometries: a 2D cell monolayer and a branching
network.
##### Setup 1: a cell monolayer
We first ran numerical simulations on a cell monolayer (see LABEL:supp-
InitialSetup_M). This spatial geometry was motivated by the biological process
of cell fate specification induced by lateral inhibition via Delta-Notch
signalling in flat domains. Examples of such cell fate specification include
bristle patterning in Drosophila notum [11, 30, 13], and differentiation of
neural precursors in neurogenesis [22] (see [7, 35] and references therein for
other examples). The fixed stationary distribution of the VEGF serves as an
external stimulus which enhances lateral inhibition via Delta-Notch
signalling. We chose VEGF as an illustrative example, although, depending on
the specific system, other extracellular signals will provide cell stimulus.
Figure 13: Comparison of the dynamics of the multicellular VEGF-Delta-Notch
model simulated on a cell monolayer using the full stochastic (CTMC), CG, and
mean-field descriptions. (a) The Delta-high cell proportion as a function of
the cell-to-cell interaction radius, $R_{s}$, for varying noise amplitude,
$\epsilon=1/\Omega$ (the value of $\Omega$ is indicated in the title of each
plot), for the full stochastic CTMC (black), CG (blue) and mean-field (red)
descriptions. To explore different possible patterns in the deterministic
mean-field system, we applied a small perturbation to the initial
configuration (LABEL:supp-InitialSetup_M). (b) A series of barplots showing
how the long-time distribution of Delta-high cell clusters changes as the
interaction radius, $R_{s}$, varies for the full stochastic CTMC (left panel),
CG (middle panel), and mean-field (right panel) systems. The number of single
Delta-high cells in the final pattern (i.e. at a fixed final simulation time)
is shown in blue; the number of clusters with 2, 3, and 4 adjacent Delta-high
cells is shown in yellow, green, and red, respectively. For these simulations,
we fixed $\Omega=1000$ ($\epsilon=0.001$). The results are averaged over 100
realisations. The remaining parameter values were fixed as indicated in
LABEL:supp-Params.
We began by considering the dynamics of the Delta-high cell proportion for
this spatial geometry (see LABEL:ResultsMonolayer_TipProportion). Consistent
with the previous results [44], for all simulation frameworks (i.e. the full
stochastic (CTMC), CG, and mean-field descriptions), the Delta-high cell
proportion decreases as the cell interaction radius, $R_{s}$, increases.
LABEL:ResultsMonolayer_TipProportion confirms that, as expected, differences
in this metric between the three systems decrease as the level of noise is
reduced (i.e. as $\Omega$ increases). In particular, for high noise levels
(i.e. lower values of $\Omega$), the patterns generated by the stochastic
systems (full CTMC and CG frameworks) are more diverse, and the Delta-high
cell proportions differ from those for the associated mean-field description.
We note that the dynamics of the Delta-high cell proportion for the mean-field
system (red lines) are identical in all subplots in
LABEL:ResultsMonolayer_TipProportion since noise is absent in deterministic
systems (i.e. the system size parameter, $\Omega$, is irrelevant).
We also quantified the size distribution of the Delta-high cell clusters
associated with the final patterns established on the cell monolayers. Since
the dynamics of the three systems converge for larger values of the system
size, $\Omega$ (as shown in LABEL:ResultsMonolayer_TipProportion),
LABEL:ResultsMonolayer_Clusters shows results for this metric computed for
simulations with $\Omega=1000$. The distributions are in good quantitative
agreement for the three systems. The discrepancy for simulations with larger
cell interaction radius (e.g. $R_{s}=15\mu m$) arises because (for this value
of $\Omega$) the CG system is more likely to explore long timescale patterns
which have a more ‘regular’ structure and are more robust to noise (cells with
Delta-high phenotype organised in similar clusters, see Section 4.2).
##### Setup 2: a branching network
We next considered a more complex spatial geometry of a small branching
network (see LABEL:supp-InitialSetup_N) extracted from a simulation of a
hybrid model of angiogenesis [44]. LABEL:supp-SnapshotsCGVascularNetwork shows
a series of patterns explored by the CG system at different time points during
a typical simulation for this configuration (for the full simulation, see
Movie S3).
For this spatial configuration, we compared the three simulation frameworks
using the same metrics as for the cell monolayer. The results for the Delta-
high cell proportion are presented in LABEL:supp-
ResultsVascularNetwork_TipProportion. We find that the number of possible
patterns generated by lateral inhibition is lower for the branching network
geometry than for the cell monolayer (see LABEL:supp-
SnapshotsCGVascularNetwork). Consequently, the Delta-high cell proportions
converge for smaller values of $\Omega$ (compare
LABEL:ResultsMonolayer_TipProportion and LABEL:supp-
ResultsVascularNetwork_TipProportion). We also note that, since, in the
network configuration, cells have fewer neighbours, the values of this metric
are higher than those computed for a cell monolayer.
LABEL:supp-ResultsVascularNetwork_Clusters shows the size distribution of
Delta-high cell clusters for simulations on the branching network. We note
that, for this configuration, isolated Delta-high cells (i.e. cells not
adjacent to another Delta-high cell) are predominant in the final spatial
patterns and the patterns generated by the three frameworks are comparable.
Regarding the computational cost (see technical specifications of the
computers used in File S1), the CG method substantially reduced the average
CPU time for a single realisation compared to the original stochastic system
(see LABEL:supp-ResultsCPU). Whereas the numerical cost of
simulations of the full stochastic system (LABEL:supp-ResultsCPU, left panels)
increases exponentially as the system size, $\Omega$, grows, simulations of
the CG system decrease in average computational time as $\Omega$ increases
(LABEL:supp-ResultsCPU, middle panels). This is because, as the noise level
decreases (i.e. $\Omega$ increases), fewer transitions occur in a CG
simulation for a fixed final simulation time. Interestingly, the CG
simulations are also faster than the mean-field system (LABEL:supp-ResultsCPU,
right panels). The numerical integration of the mean-field system (we used an
explicit Euler scheme, although other numerical integration schemes may
perform better) required evaluation of the
non-linear right-hand-side of the equations of the mean-field description (see
LABEL:supp-MulticellularMeanField in Supplementary Material) at each time step
for every voxel in the lattice, whereas for the CG simulations only one voxel
undergoes a change (i.e. a phenotype switch) at each iteration.
To summarise, the CG method, while preserving stochasticity of transitions
between cell phenotypes and producing spatial patterns comparable to those
generated using the original stochastic and mean-field descriptions,
significantly reduces computational time of simulations.
## 5 Discussion and conclusions
Hybrid (multiscale) models of complex biological phenomena are often
computationally inefficient, which hinders their potential utility. To address
this issue, we have developed a coarse-graining (CG) method that reduces the
numerical cost of simulations of multi-agent stochastic systems with multiple
stable steady states. The CG technique is based on large deviation theory that
allows the dynamics of a stochastic system to be reduced to a jump process
(i.e. a continuous time Markov chain) on a discrete state space which
comprises the stable steady states of all agents in the system. The CG system
operates on a timescale on which transitions between these steady states take
place. This allows the method to be applied to models whose dynamics act on
timescales longer than the typical timescale for relaxation to an equilibrium
(e.g., molecular or subcellular processes typically relax to equilibrium
faster than processes at higher spatial scales, such as cell migration or the
dynamics of extracellular cues). Our results show good qualitative and quantitative
agreement between CG simulations and other simulation methods (Figures 13 and
LABEL:supp-ResultsVascularNetwork). Furthermore, the CG algorithm is
numerically more efficient in terms of CPU time even when compared with the
corresponding mean-field simulations (see LABEL:supp-ResultsCPU). Likewise,
the CG framework allows exploration of new emergent properties of the system,
such as long timescale patterns in multicellular systems (Figure 12).
The implementation of the CG method requires pre-calculation of several look-
up tables (for stable steady state solutions of the system that is being
coarse-grained, quasipotential values for transitions between them and the
corresponding prefactor of these transitions) which are used later in
simulations. To do this, the values of model parameters must be fixed (except
for the external variables). However, in order to perform sensitivity analysis
with respect to any specific parameter, this parameter may be added to the set
of external variables (thus, adding a new dimension to the look-up tables).
Since the procedure of pre-calculating the look-up tables is done once, prior
to model simulation, it does not increase the numerical cost of the algorithm.
Likewise, the computational cost of computing the quasipotential via the
geometric minimum action method (gMAM) is independent of the system size,
$\Omega$, and an accurate estimate of the required prefactor can be obtained
from simulations of the full stochastic model at a single value of the
system-size parameter, $\Omega$ (see Equation
9 and Figure 7). Then the CG model can be efficiently simulated using the
standard Gillespie algorithm for any value of $\Omega$ (or, equivalently,
noise level, $\epsilon=1/\Omega$).
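For concreteness, one Gillespie iteration of the CG jump process can be sketched as below. This is an illustration under simplifying assumptions: phenotypes are encoded as binary labels, and `rates` stands in for a look-up of the pre-computed CG transition rates for the current configuration.

```python
import numpy as np

def gillespie_step(state, rates, rng):
    """One iteration of the standard Gillespie algorithm for the CG
    jump process: draw the waiting time from the total propensity,
    pick the switching cell with probability proportional to its
    transition rate, and toggle its phenotype."""
    k = rates(state)                 # per-cell switch rates (look-up)
    P = float(k.sum())
    if P == 0.0:
        return state, np.inf         # frozen (robust) pattern
    tau = rng.exponential(1.0 / P)   # time to the next switch
    cell = rng.choice(len(k), p=k / P)
    new_state = state.copy()
    new_state[cell] = 1 - new_state[cell]   # Delta-high <-> Delta-low
    return new_state, tau
```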
After introducing the CG method (Section 3), we applied it to a multi-agent
model of phenotypic specification of cells via the VEGF-Delta-Notch signalling
pathway. For this system, we demonstrated how the spatial patterning of cells
with different phenotypes changes as CG transitions between these steady
states (phenotypes) occur (Figure 11). We then confirmed that the patterns
generated by the CG system are quantitatively similar to steady state
configurations of the original stochastic system and the associated mean-field
limit for this model (see Figures 13 and LABEL:supp-ResultsVascularNetwork).
We conclude that the CG method preserves the continuous cell phenotypes and
stochasticity of the original system, while reducing the computational cost of
simulations by several orders of magnitude (as compared to the numerical cost
of simulations of the full stochastic system, see LABEL:supp-ResultsCPU).
In this paper, we used the VEGF-Delta-Notch model to illustrate the benefits
of the CG method. We note, however, that the CG method can be applied to a
wider class of multi-agent models in which the behaviour of the agents is
regulated by stochastic models with multiple stable attractors (e.g. steady
states, limit cycles) and whose dynamics are controlled by external cues (e.g.
morphogens, growth factors, levels of specific ligands/receptors in
neighbouring cells, etc.). Examples of systems with subcellular dynamics which
satisfy the requirements for application of the CG method include fate
specification of cells in intestinal crypts [32, 8], epithelial to mesenchymal
phenotypic transition (and its reverse) in cancer invasion [31] and
development [42], cell differentiation in neurogenesis [22], and a general
class of models describing cell decision switches [27]. These models are
multistable and the timescale of simulations is longer than the timescale of
the relevant subcellular signalling pathway. Nonetheless, the spectrum of
models which are suitable for coarse-graining via the CG algorithm is not
restricted to intracellular signalling pathways in animal cells; other
examples include vegetation patterning in arid ecosystems [33] or plant
morphogenesis mediated via the auxin hormone [1, 21]. The exact implementation
of the CG system for the aforementioned models is beyond the scope of this
paper.
To conclude, the CG method developed in this paper paves the way for a
systematic reduction of the dynamics of a wide class of multistable stochastic
models. It allows for investigation of their behaviour on longer timescales
than is possible with other frameworks (e.g. full stochastic simulations or
deterministic equations). To our knowledge, this is the first example in which
large deviation theory has been used to coarse-grain the dynamics of a multi-
agent system. In future work we intend to further investigate the performance
of the CG method by incorporating the CG system for the VEGF-Delta-Notch
signalling into a multiscale model of angiogenesis [44].
## Data management
All of the computational data output is included in the manuscript and/or in
the supplementary material. The code of the numerical procedures used in this
work is available upon request.
## Supplementary materials
Text
##### Supplementary Material
The file contains a more detailed description of the VEGF-Delta-Notch model,
implementation of the CG method, and additional figures and tables.
##### File S1
Technical specifications of the computers used to perform simulations in this
work.
##### Movie S1
A simulation movie showing different pattern configurations explored by the CG
system in a small 2D cell monolayer. The colour bar indicates the levels of
Delta. For this simulation, the interaction radius and system size were fixed
at $R_{s}=15\mu m$ and $\Omega=100$, respectively; the values of the remaining
parameters were fixed as indicated in LABEL:supp-Params.
##### Movie S2
A simulation movie showing emergence of robust pattern configurations in
simulations of the CG system. The colour bar indicates the levels of Delta. A
single cell, whose dynamics is shown in LABEL:OptimalPattern_DeltaLevel and
LABEL:OptimalPattern_TransitionRate, is outlined in cyan. For this simulation,
the interaction radius and system size were fixed at $R_{s}=15\mu m$ and
$\Omega=1000$, respectively; the values of all remaining parameters were fixed
as indicated in LABEL:supp-Params.
##### Movie S3
A simulation movie showing different pattern configurations explored by the CG
system in a branching network. The colour bar indicates the levels of Delta.
For this simulation, the interaction radius and system size were fixed at
$R_{s}=15\mu m$ and $\Omega=100$, respectively; the values of the remaining
parameters were fixed as indicated in LABEL:supp-Params.
## References
* [1] V. Baldazzi, N. Bertin, H. de Jong, and M. Génard, Towards multiscale plant models: integrating cellular networks, Trends in plant science, 17 (2012), pp. 728–736.
* [2] R. Bardini, G. Politano, A. Benso, and S. Di Carlo, Multi-level and hybrid modelling approaches for systems biology, Computational and Structural Biotechnology Journal, 15 (2017), pp. 396–402.
* [3] K. Bentley, C. A. Franco, A. Philippides, R. Blanco, M. Dierkes, V. Gebala, F. Stanchi, M. Jones, I. M. Aspalter, G. Cagna, et al., The role of differential VE-cadherin dynamics in cell rearrangement during angiogenesis, Nature cell biology, 16 (2014), p. 309.
* [4] S. Bernard, How to build a multiscale model in biology, Acta biotheoretica, 61 (2013), pp. 291–303.
* [5] R. Blanco and H. Gerhardt, VEGF and Notch in tip and stalk cell selection, Cold Spring Harbor Perspectives in Medicine, 3 (2013), p. a006569.
* [6] M. Boareto, M. K. Jolly, M. Lu, J. N. Onuchic, C. Clementi, and E. Ben-Jacob, Jagged-Delta asymmetry in Notch signaling can give rise to a sender/receiver hybrid phenotype, Proceedings of the National Academy of Sciences, 112 (2015), pp. E402–E409.
* [7] F. Bocci, J. N. Onuchic, and M. K. Jolly, Understanding the principles of pattern formation driven by Notch signaling by integrating experiments and theoretical models, Frontiers in Physiology, 11 (2020).
* [8] P. Buske, J. Galle, N. Barker, G. Aust, H. Clevers, and M. Loeffler, A comprehensive model of the spatio-temporal stem cell and tissue organisation in the intestinal crypt, PLoS Comput Biol, 7 (2011), p. e1001045.
* [9] H. Byrne and M. Chaplain, Mathematical models for tumour angiogenesis: numerical simulations and nonlinear wave solutions, Bulletin of mathematical biology, 57 (1995), pp. 461–486.
* [10] M. A. Chaplain, Multiscale modelling of cancer: Micro-, meso-and macro-scales of growth and spread, in Approaching Complex Diseases, Springer, 2020, pp. 149–168.
* [11] M. Cohen, M. Georgiou, N. L. Stevenson, M. Miodownik, and B. Baum, Dynamic filopodia transmit intermittent Delta-Notch signaling to drive pattern refinement during lateral inhibition, Developmental cell, 19 (2010), pp. 78–89.
* [12] J. R. Collier, N. A. Monk, P. K. Maini, and J. H. Lewis, Pattern formation by lateral inhibition with feedback: a mathematical model of Delta-Notch intercellular signalling, Journal of Theoretical Biology, 183 (1996), pp. 429–446.
* [13] F. Corson, L. Couturier, H. Rouault, K. Mazouni, and F. Schweisguth, Self-organized Notch dynamics generate stereotyped sensory organ patterns in Drosophila, Science, 356 (2017).
* [14] R. de la Cruz, P. Guerrero, J. Calvo, and T. Alarcón, Coarse-graining and hybrid methods for efficient simulation of stochastic multi-scale models of tumour growth, Journal of computational physics, 350 (2017), pp. 974–991.
* [15] R. De La Cruz, R. Perez-Carrasco, P. Guerrero, T. Alarcon, and K. M. Page, Minimum action path theory reveals the details of stochastic transitions out of oscillatory states, Physical review letters, 120 (2018), p. 128102.
* [16] T. S. Deisboeck, Z. Wang, P. Macklin, and V. Cristini, Multiscale cancer modeling, Annual review of biomedical engineering, 13 (2011), pp. 127–155.
* [17] A. Deutsch, P. Friedl, L. Preziosi, and G. Theraulaz, Multi-scale analysis and modelling of collective migration in biological systems, 2020.
* [18] Y. Du, S. C. Herath, Q.-g. Wang, D.-a. Wang, H. H. Asada, and P. C. Chen, Three-dimensional characterization of mechanical interactions between endothelial cells and extracellular matrix during angiogenic sprouting, Scientific Reports, 6 (2016), p. 21362.
* [19] M. I. Dykman, E. Mori, J. Ross, and P. Hunt, Large fluctuations and optimal paths in chemical kinetics, The Journal of chemical physics, 100 (1994), pp. 5735–5750.
* [20] J. Elf and M. Ehrenberg, Spontaneous separation of bi-stable biochemical systems into spatial domains of opposite phases, Systems Biology, 1 (2004), pp. 230–236.
* [21] E. Farcot, C. Lavedrine, and T. Vernoux, A modular analysis of the auxin signalling network, PLoS One, 10 (2015), p. e0122231.
* [22] P. Formosa-Jordan, M. Ibañes, S. Ares, and J.-M. Frade, Lateral inhibition and neurogenesis: novel aspects in motion, International Journal of Developmental Biology, 57 (2013), pp. 341–350.
* [23] M. I. Freidlin and A. D. Wentzell, Random perturbations, in Random perturbations of dynamical systems, Springer, 1998, pp. 15–43.
* [24] H. Gerhardt, M. Golding, M. Fruttiger, C. Ruhrberg, A. Lundkvist, A. Abramsson, M. Jeltsch, C. Mitchell, K. Alitalo, D. Shima, et al., VEGF guides angiogenic sprouting utilizing endothelial tip cell filopodia, The Journal of Cell Biology, 161 (2003), pp. 1163–1177.
* [25] D. T. Gillespie, A general method for numerically simulating the stochastic time evolution of coupled chemical reactions, Journal of Computational Physics, 22 (1976), pp. 403–434.
* [26] T. Grafke, T. Schäfer, and E. Vanden-Eijnden, Long term effects of small random perturbations on dynamical systems: Theoretical and computational tools, in Recent Progress and Modern Challenges in Applied Mathematics, Modeling and Computational Science, Springer, 2017, pp. 17–55.
* [27] R. Guantes and J. F. Poyatos, Multistable decision switches for flexible control of epigenetic differentiation, PLoS Comput Biol, 4 (2008), p. e1000235.
* [28] T. Heck, M.-M. Vaeyens, and H. Van Oosterwyck, Computational models of sprouting angiogenesis and cell migration: towards multiscale mechanochemical models of angiogenesis, Mathematical Modelling of Natural Phenomena, 10 (2015), pp. 108–141.
* [29] M. Heymann and E. Vanden-Eijnden, The geometric minimum action method: A least action principle on the space of curves, Communications on Pure and Applied Mathematics: A Journal Issued by the Courant Institute of Mathematical Sciences, 61 (2008), pp. 1052–1117.
* [30] G. L. Hunter, Z. Hadjivasiliou, H. Bonin, L. He, N. Perrimon, G. Charras, and B. Baum, Coordinated control of Notch/Delta signalling and cell cycle progression drives lateral inhibition-mediated tissue patterning, Development, 143 (2016), pp. 2305–2310.
* [31] M. K. Jolly, M. Boareto, B. Huang, D. Jia, M. Lu, E. Ben-Jacob, J. N. Onuchic, and H. Levine, Implications of the hybrid epithelial/mesenchymal phenotype in metastasis, Frontiers in oncology, 5 (2015), p. 155.
* [32] S. K. Kay, H. A. Harrington, S. Shepherd, K. Brennan, T. Dale, J. M. Osborne, D. J. Gavaghan, and H. M. Byrne, The role of the Hes1 crosstalk hub in Notch-Wnt interactions of the intestinal crypt, PLoS computational biology, 13 (2017), p. e1005400.
* [33] S. Kéfi, M. B. Eppinga, P. C. de Ruiter, and M. Rietkerk, Bistability and regular spatial patterns in arid ecosystems, Theoretical Ecology, 3 (2010), pp. 257–269.
* [34] P. Macklin, S. McDougall, A. R. Anderson, M. A. Chaplain, V. Cristini, and J. Lowengrub, Multiscale modelling and nonlinear simulation of vascular tumour growth, Journal of mathematical biology, 58 (2009), pp. 765–798.
* [35] N. A. Monk, J. A. Sherratt, and M. R. Owen, Spatiotemporal patterning in models of juxtacrine intercellular signalling with feedback, Mathematical Models for Biological Pattern Formation, (2001), pp. 165–192.
* [36] J. M. Osborne, A. Walter, S. Kershaw, G. Mirams, A. Fletcher, P. Pathmanathan, D. Gavaghan, O. Jensen, P. Maini, and H. Byrne, A hybrid approach to multi-scale modelling of cancer, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 368 (2010), pp. 5013–5028.
* [37] R. D. O’Dea and J. R. King, Continuum limits of pattern formation in hexagonal-cell monolayers, Journal of mathematical biology, 64 (2012), pp. 579–610.
* [38] R. Perez-Carrasco, P. Guerrero, J. Briscoe, and K. M. Page, Intrinsic noise profoundly alters the dynamics and steady state of morphogen-controlled bistable genetic switches, PLoS computational biology, 12 (2016), p. e1005154.
* [39] G. L. Poppe Jr, Physical applications of the geometric minimum action method, (2018).
* [40] K. A. Rejniak and A. R. Anderson, State of the art in computational modelling of cancer, Mathematical medicine and biology: a journal of the IMA, 29 (2012), pp. 1–2.
* [41] D. M. Roma, R. A. O’Flanagan, A. E. Ruckenstein, A. M. Sengupta, and R. Mukhopadhyay, Optimal path to epigenetic switching, Physical Review E, 71 (2005), p. 011902.
* [42] Y. Sha, D. Haensel, G. Gutierrez, H. Du, X. Dai, and Q. Nie, Intermediate cell states in epithelial-to-mesenchymal transition, Physical biology, 16 (2019), p. 021001.
* [43] D. Sprinzak, A. Lakhanpal, L. LeBon, L. A. Santat, M. E. Fontes, G. A. Anderson, J. Garcia-Ojalvo, and M. B. Elowitz, Cis-interactions between Notch and Delta generate mutually exclusive signalling states, Nature, 465 (2010), p. 86.
* [44] D. Stepanova, H. M. Byrne, P. K. Maini, and T. Alarcón, A multiscale model of complex endothelial cell dynamics in early angiogenesis, PLOS Computational Biology, 17 (2021), p. e1008055.
# Precision cosmology with the 21-cm signal from the dark ages

Rajesh Mondal <EMAIL_ADDRESS> Rennan Barkana <EMAIL_ADDRESS>

School of Physics and Astronomy, Tel Aviv University, Tel Aviv, 69978, Israel

These authors contributed equally to this work.

2023
###### Abstract
The 21-cm signal from the dark ages provides a potential new probe of
fundamental cosmology. While exotic physics could be discovered, here we
quantify the expected benefits within the standard cosmology. A measurement of
the global (sky-averaged) 21-cm signal to the precision of thermal noise from
a 1,000 hour integration would yield a 10% measurement of a combination of
cosmological parameters. A 10,000 hour integration would improve this to 3.2%,
and constrain the cosmic Helium fraction to 9.9%. Precision cosmology with
21-cm fluctuations requires a collecting area of 10 km$^2$ (which corresponds to
400,000 stations), which with a 1,000 hour integration would exceed the same
global case by a factor of $\sim 2$. Enhancing the collecting area or
integration time $\times 10$ would yield a 0.5% measurement of the parameter combination, a Helium
measurement five times better than Planck, and a constraint on the neutrino
mass as good as Planck. Our analysis sets a baseline for upcoming lunar and
space-based dark ages experiments.
## 1 Main
Observation of the redshifted 21-cm signal due to the hyperfine transition of
neutral hydrogen (HI) is a promising method to study its three-dimensional
(3D) distribution in the Universe Sunyaev1972 ; Hogan ; Scott1990 . The era
from the epoch of recombination (redshift $z\sim 1100$), when matter decoupled
from the radiation and these photons free-streamed as the cosmic microwave
background (CMB), until the formation of the first substantial population of
luminous objects ($z\sim 30$) is referred to as the ‘Dark Ages’. After
decoupling, the temperature of the gas ($T_{\rm g}$) declined adiabatically as
$(1+z)^{2}$ whereas the CMB temperature ($T_{\gamma}$) fell as $(1+z)$. The
spin temperature $T_{\rm s}$ (an effective temperature that measures the ratio
of the population densities of the upper and lower states of the 21-cm
transition) was strongly coupled to $T_{\rm g}$ through collisional coupling
until $z\sim 70$ madau97 . After this time, the collisional process became
ineffective and $T_{\rm s}$ began to approach $T_{\gamma}$. Thus, during the
dark ages $T_{\rm s}$ remained significantly lower than $T_{\gamma}$ over a
wide redshift range, $30\lesssim z\lesssim 300$, during which the HI is
expected to produce absorption features in the CMB spectrum.
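The temperature scalings described above can be sketched numerically. The sharp thermal decoupling redshift $z_{\rm dec}\approx 150$ used below is an illustrative assumption (in reality Compton scattering detaches the gas temperature from the CMB gradually), so this is only a cartoon of the behavior:

```python
# Cartoon of the dark-ages temperature evolution:
# T_gamma falls as (1+z); after thermal decoupling the gas cools as (1+z)^2.
T_CMB0 = 2.725   # CMB temperature today [K]
Z_DEC = 150.0    # approximate thermal decoupling redshift (illustrative assumption)

def T_gamma(z):
    """CMB temperature [K], falling as (1+z)."""
    return T_CMB0 * (1.0 + z)

def T_gas(z):
    """Gas kinetic temperature [K]: tied to the CMB before z ~ Z_DEC,
    then cooling adiabatically as (1+z)^2."""
    if z >= Z_DEC:
        return T_gamma(z)
    return T_gamma(Z_DEC) * ((1.0 + z) / (1.0 + Z_DEC)) ** 2

# In this cartoon, at z = 50 the gas is colder than the CMB by the
# factor (1+50)/(1+150) ~ 0.34, which is why the dark-ages HI appears
# in absorption against the CMB.
```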
The dark ages are potentially a critically important window in the
evolutionary history of the Universe. Since cosmic evolution at this time was
not yet significantly affected by astrophysical processes, the dark ages offer
a clean probe of fundamental cosmology similar to the CMB. This is in contrast
to cosmological probes in the modern Universe, such as those based on the
galaxy distribution or on the statistics of Lyman-$\alpha$ absorption lines,
that suffer an irreducible systematic uncertainty due to the potential
influence of complex astrophysics (including star formation, radiative
feedback from stars and stellar remnants, and supernova feedback). The dark
ages can be probed using the redshifted HI 21-cm signal, either by measuring
the global (or mean) signal or by measuring the fluctuations at various length
scales, i.e., the power spectrum. Unlike the CMB which comes to us from a
single cosmic time, the 21-cm signal can be observed over a range of cosmic
times, as each frequency 1420 MHz$/(1+z)$ corresponds to a different look-back
time. The 21-cm data set is thus 3D; moreover, since small-scale fluctuations
that are washed out in the CMB are available in the 21-cm power spectrum, the
latter contains potentially far more cosmological information Loeb2004 .
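The frequency-redshift mapping that makes the 21-cm dataset three-dimensional is simple to sketch; the conversions below reproduce the values quoted later in the text (16.3 MHz at $z=86$, 34 MHz at $z\approx 41$, and 6.8 MHz at $z\approx 207$):

```python
# Map between redshift z and observed 21-cm frequency (MHz),
# using nu = 1420 MHz / (1 + z) as in the text.
NU_REST_MHZ = 1420.0

def nu_obs(z):
    """Observed frequency (MHz) of the 21-cm line emitted at redshift z."""
    return NU_REST_MHZ / (1.0 + z)

def z_of_nu(nu_mhz):
    """Redshift corresponding to an observed frequency (MHz)."""
    return NU_REST_MHZ / nu_mhz - 1.0

# Examples quoted in the text:
#   z = 86   ->  nu ~ 16.3 MHz  (peak absorption of the global signal)
#   nu = 34  ->  z  ~ 41        (redshift of maximum S/N)
#   nu = 6.8 ->  z  ~ 207       (where S/N drops to unity, 100,000 hrs)
```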
Prior to recombination, the coupling of the baryons to the photons kept the
baryon density and temperature fluctuations negligible on sub-horizon scales.
The 21-cm power spectrum during the dark ages probes the era of baryonic
infall into the dark matter potential wells barkana2005 and is quite
sensitive to the values of the $\Lambda$CDM cosmological parameters
barkana2005 , particularly on the scale of the baryon acoustic oscillations
(BAO) that are a remnant of the pre-recombination baryon-photon fluid.
Importantly, the fluctuations on these scales are still quite linear during
the dark ages, and thus modeling them does not need to deal with complex non-
linearity as is the case for probes in the more recent Universe. The 21-cm
fluctuations probe fluctuations of the baryon density, peculiar velocity
bharadwaj04 ; barkana05 , and baryon temperature (as determined by the
fluctuating sound speed naoz05 ; barkana2005 ). A number of smaller
contributions must be included in a precise calculation Lewis2007 ; Ali-Ha2014
. All of this assumes the standard cosmology. Besides this, there are various
studies on non-standard possibilities during the dark ages, such as DM-baryon
interactions Tashiro14 ; Munoz2015 ; Barkana2018 or features in the
primordial power spectrum 2016JCAP . It has also been shown that the 21-cm
signal can potentially be a powerful probe of primordial non-Gaussianity, at
the levels expected from cosmic inflation (see, e.g., Loeb2004 ; Pillepich2007
; Joudaki2011 ; Floss2022 ; Balaji2022 ), but below we find that it would be
difficult observationally to reach the high wavenumbers where this is most
promising. While some types of exotic (non-standard) cosmology would be easier
to detect, we focus in this work on the safest case of standard cosmology.
Observing the 21-cm signal from the dark ages using radio telescopes on Earth
would be nearly impossible due to the ionosphere, which heavily distorts and
eventually blocks very low frequencies. This necessitates lunar or space-based
experiments, and these are being rapidly developed as part of the
international race to return to the moon, with efforts including NCLE
(https://www.ru.nl/astrophysics/radboud-radio-lab/projects/netherlands-china-
low-frequency-explorer-ncle) (Netherlands-China), DAPPER
(https://www.colorado.edu/project/dark-ages-polarimeter-pathfinder) (USA),
FARSIDE (https://www.colorado.edu/project/lunar-farside) (USA), DSL
(https://www.astron.nl/dsl2015) (China), PRATUSH
(https://wwws.rri.res.in/DISTORTION/pratush.html) (India), FarView
(https://www.colorado.edu/ness/projects/farview-lunar-far-side-radio-
observatory) (USA), SEAMS (India), LuSee Night (https://www.lusee-
night.org/night) (USA), ALO
(https://www.astron.nl/dailyimage/main.php?date=20220131) (Europe), and ROLSES
(https://www.colorado.edu/ness/projects/radiowave-observations-lunar-surface-
photoelectron-sheath-rolses) (USA). These missions will probe either the
global signal or the spatial fluctuations of the dark ages 21-cm signal. We
note that most of these experiments are at an early design stage, and that
measuring the dark ages power spectrum is a substantially more futuristic
prospect than measuring the global signal.
Given the great potential for precision cosmology, as well as the rapid
observational developments, in this paper we study the use of the 21-cm signal
(both the global signal and power spectrum) during the dark ages to constrain
the cosmological parameters. In this prediction it is important to account for
observational limitations, and not just assume the theoretical limiting case
of a full-sky, cosmic variance limited experiment. Indeed, an inevitable
obstacle is thermal noise, which rises rapidly with redshift as well as
wavenumber. For the power spectrum, another observational barrier is the
angular resolution, since for an interferometer with a given set of antennae,
reaching a higher angular resolution reduces the array’s sensitivity. In
general in 21-cm cosmology, an interferometer requires a much greater
investment of resources than a simple antenna for measuring the global signal;
the potential reward of the former is also substantially greater due to the
richer information content available in measuring the power spectrum versus
redshift. We note that the just-mentioned observational obstacles worsen more
rapidly with redshift for the 21-cm power spectrum; the signal also declines
faster for the power spectrum, since the dark ages universe was more
homogeneous, density fluctuations were smaller, and there were no galaxies
around to amplify the 21-cm fluctuations. These factors make the global signal
relatively advantageous at least as an initial probe of the dark ages.
There are other practical difficulties faced by 21-cm experiments, including
removing terrestrial radio-frequency interference (RFI), accounting for the
effect of the ionosphere, and removing or avoiding foreground emission (coming
from synchrotron radiation from our own Milky Way as well as other galaxies).
As these obstacles are increasingly overcome, this will allow for deeper
integrations for which the noise remains dominated by the thermal noise.
Techniques for achieving this are an active area of research and are
continuously improving, as reflected in the best current constraints from
global experiments such as EDGES Bowman:2018 and SARAS SARAS , and
interferometers such as LOFAR LOFAR-EoR:2020 , MWA Trott:2020 and HERA HERA .
Thus, for the 21-cm signal from the dark ages, the thermal noise for various
integration times serves as a fiducial benchmark for future experiments. Note
that going to the moon could present substantial practical advantages beyond
just avoiding the Earth’s ionosphere: a potentially benign environment that is
extremely dry and stable, plus the blocking out of terrestrial RFI (on the
lunar far-side).
## 2 Calculating the 21-cm signal
As previously noted, the 21-cm differential brightness temperature relative to
the CMB, ${T}_{\rm b}$, from each redshift $z$ is observed at a wavelength of
$21\times(1+z)$ cm. Global experiments measure the cosmic mean 21-cm
brightness temperature; since these are relatively simple and are advantageous
at the highest redshifts, we consider them first in Sec. 3.1. In addition, the
21-cm brightness temperature fluctuates spatially, mainly due to the
fluctuations in the gas density and temperature. As noted above, the gas
retains some memory of the early BAO, on the scale traversed by sound waves in
the photon-baryon fluid (wavenumber $k\sim 0.1\,{\rm Mpc}^{-1}$; scales are
comoving unless indicated otherwise). The signature of these oscillations can
be detected in the 21-cm power spectrum from the dark ages (see Sec. 3.2).
We use the standard CAMB (http://camb.info) Lewis2007 ; CAMB cosmological
perturbation code to precisely generate the 21-cm global signal from the dark
ages and, after accounting for the anisotropy due to redshift space
distortions BLlos , the 3D 21-cm power spectrum. While the line-of-sight
anisotropy is in principle measurable, foreground removal is expected to make
this difficult, so here we conservatively consider only the spherically-
averaged power spectrum. We also add the Alcock-Paczyński effect AP1979 ;
AliAP ; Nusser ; APeffect and the light-cone effect barkana06 ; mondal18 to
our power spectrum calculations, and account for the effect of the field of
view and angular resolution of radio interferometers (see Supplementary
Information). We use the latest measurements (based mainly on the CMB)
Planck:2018 to set our fiducial values of the cosmological parameters. The
21-cm global signal and power spectra during the dark ages for this fiducial
model are shown, respectively, in Figs. 1 and 2, which are explained below in
further detail.
## 3 Results
### 3.1 The global 21-cm signal
As noted above, measuring the 21-cm global signal requires a single, well-
calibrated antenna. Fig. 1 shows the 21-cm global signal from the dark ages as
a function of $\nu$ (and $z$), for the fiducial cosmological model. The
expected signal dips to a maximum absorption of $-40.2$ mK at $z=86$
($\nu=16.3$ MHz). Also shown in Fig. 1 is the instrumental noise for $t_{\rm
int}=1$,000 hrs (a standard fiducial integration time, also equal to 11.4% of
a year) and 100,000 hrs, for a bin of width $\Delta(\ln\nu)=1$ around each
$\nu$. The noise increases sharply with redshift, yielding a maximum signal-
to-noise ratio (S/N) of 11.6 for $t_{\rm int}=1$,000 hrs and 116 for 100,000
hrs (both at $z=41$ or $\nu=34$ MHz); in the latter case, the S/N for the
global signal remains above unity up to $z=207$ ($\nu=6.8$ MHz). We note that
if other difficulties are overcome and integration time becomes the main
issue, then multiple copies of a global experiment effectively increase
$t_{\rm int}$ in proportion to the number of copies (as long as they are not
placed spatially too close together).
Figure 1: The 21-cm global signal from the dark ages (blue solid line) as a
function of $\nu$ (or $z$ as the top $x$-axis). We also show the expected
thermal noise for a global signal experiment observing for integration time
1,000 hrs (orange dashed line) or 100,000 hrs (green dotted line) for a bin
with $\Delta(\ln\nu)=1$.
By fitting the standard cosmological model to the global signal with its
expected errors, we find the constraints that can be obtained on the
cosmological parameters. While the amplitude of the global signal depends
significantly on some of the parameters, the shape is highly insensitive,
which means that the signal essentially measures a single quantity, the
overall amplitude, which depends on a combination of cosmological parameters.
Specifically, the global signal depends significantly only on the parameters
$\Omega_{\rm b}h^{2}$ and $\Omega_{\rm m}h^{2}$, where $\Omega_{\rm b}$ and
$\Omega_{\rm m}$ are the cosmic mean densities of baryons and (total) matter,
respectively, in units of the critical density, and $h$ is the Hubble constant
in units of 100 km s$^{-1}$ Mpc$^{-1}$. Given the strong degeneracy between the two
parameters, the constraint is on the combination (see Supplementary
Information)
$C_{\rm Global}\equiv\frac{\Omega_{\rm b}h^{2}}{(\Omega_{\rm
m}h^{2})^{0.248}}\ .$ (1)
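As a quick sketch, this combination can be evaluated at the fiducial parameter values listed in the Supplementary Information ($\Omega_{\rm b}h^{2}=0.02242$, $\Omega_{\rm m}h^{2}=0.14240$):

```python
# Evaluate the constrained combination of eq. (1),
# C_Global = Omega_b h^2 / (Omega_m h^2)^0.248,
# at the fiducial (Planck-based) values used in this paper.
OMEGA_B_H2 = 0.02242
OMEGA_M_H2 = 0.14240

def c_global(omega_b_h2, omega_m_h2):
    return omega_b_h2 / omega_m_h2 ** 0.248

# The quoted 10.1% measurement (1,000 hr integration) corresponds to an
# absolute 1-sigma error of about 0.101 * c_global(OMEGA_B_H2, OMEGA_M_H2)
# on this combination.
```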
The relative errors in $C_{\rm Global}$ for three different values of $t_{\rm
int}$ are shown (along with our other main results) in Table 1 and Fig. 3. We
account for the fact that the presence of the bright synchrotron foreground
means that a signal component of the same shape cannot be distinguished from
the foreground (see Supplementary Information). A measurement of the global
21-cm signal to the precision of thermal noise from a 1,000 hour integration
would yield a 10.1% measurement. This would be a remarkable achievement for
cosmological concordance, since it would be independent of other cosmological
probes and come from a previously unexplored cosmological era. Increasing the
integration time would improve this precision as the inverse square root, so
that sub-percent precision in $C_{\rm Global}$ (comparable to the typical
Planck precision on each cosmological parameter) would require a considerable
integration time exceeding 100,000 hrs.
Table 1: The relative errors in % and the limits on the total mass of
neutrinos (all are $1\sigma$). For the Helium fraction ($Y_{\rm P}$) and the
neutrino mass, we compare to constraints based on Planck CMB measurements
alone and also (Planck + BAO) those that include BAO measurements from galaxy
clustering Planck:2018 .
| Global signal | Planck $+$ BAO | Planck | 100,000 hrs | 10,000 hrs | 1,000 hrs |
|---|---|---|---|---|---|
| $C_{\rm Global}$ | | | 1.01 | 3.18 | 10.1 |
| $Y_{\rm P}$ | 4.96 | 5.44 | 3.14 | 9.94 | 31.4 |
| $\sum m_{\nu}\,[{\rm eV}]$ | $<0.0578$ | $<0.108$ | $<0.746$ | | |

| Power spectrum | Planck $+$ BAO | Planck | D | C | B | A | G |
|---|---|---|---|---|---|---|---|
| $C_{\rm PowSpec}$ | | | 0.0457 | 0.382 | 0.462 | 4.59 | 10.1 |
| $Y_{\rm P}$ | 4.96 | 5.44 | 0.116 | 0.981 | 1.20 | 11.9 | 26.6 |
| $\sum m_{\nu}\,[{\rm eV}]$ | $<0.0578$ | $<0.108$ | $<0.0100$ | $<0.0839$ | $<0.107$ | $<1.06$ | |
In addition to the minimal, standard set of cosmological parameters, the
global signal can also provide additional constraints, of which we consider a
couple of examples. For these additional parameters, we consider the favorable
approach in which we fix the standard set of parameters at their fiducial
values (e.g., based on Planck with its small errors) and explore the power of
the 21-cm signal to constrain these specific extended parameters. In
particular, since the 21-cm signal depends directly on hydrogen and not just
the total baryon density, the first additional parameter is the fraction of
the baryonic mass in helium, usually denoted $Y_{\rm P}$. This parameter is
currently constrained by Planck (even when combined with galaxy clustering) at
a level that is an order of magnitude worse than the precision on the standard
parameters. For $t_{\rm int}=1$,000 hrs, the global 21-cm signal would yield
an independent 31.4% constraint on $Y_{\rm P}$; 10,000 hrs would measure
$Y_{\rm P}$ to 9.94%, and the best-case scenario of 100,000 hrs would beat
Planck by a factor of 1.7 (Table 1 and Fig. 3). Similarly, we find constraints
on the neutrino mass, though for this purpose the global signal would not be
competitive with Planck, even for $t_{\rm int}=100$,000 hrs.
### 3.2 The 21-cm power spectrum
As noted above, compared to the global 21-cm signal from the dark ages, it
would take a substantially larger effort to measure the power spectrum.
However, looking towards the future, the power spectrum has a far greater
potential to become a ground-breaking cosmological probe, as it is a much
richer dataset. Fig. 2 shows the spherically-averaged power spectrum of 21-cm
brightness fluctuations as a function of wavenumber at various redshifts
during the dark ages. The signal rises initially as the adiabatic expansion
cools the gas faster than the CMB, creating an absorption signal that
strengthens with time. Eventually, though, the declining density reduces the
collisional coupling coefficient $x_{\rm c}$ below unity and the 21-cm signal
weakens. For example, the maximum squared fluctuation $\Delta^{2}$ at
$k=0.1\,{\rm Mpc}^{-1}$ is 0.44 mK$^{2}$ at $z=51$.
Figure 2: The spherically-averaged (total) power spectrum of 21-cm brightness
fluctuations as a function of wavenumber $k$ during the dark ages, at
redshifts $z=[150,125,75,50,40,30]$. The dotted lines show the power spectrum
at $z=75$ and 40 when accounting for the effect of angular resolution (for our
A or B configuration). We also show the 1$\sigma$ noise (thermal plus cosmic
variance) for our A (short dashed lines) and B (long dashed lines)
configurations, at $z=75$ and 40 (for bins with $\Delta(\ln\nu)=1$ and
$\Delta(\ln k)$=1).
For the observational setup, we assume a minimal case for which a 1,000 hour
integration would significantly improve on the constraint from the
corresponding global measurement (by a factor of $\sim$2); such an improvement
would be required to justify the much greater effort involved in building an
interferometer. We
find that this would (approximately) require a collecting area of $A_{\rm
coll}=10\,{\rm km}^{2}$, which along with $t_{\rm int}=1$,000 hrs we adopt as
our minimal, A configuration. The collecting area of 10 km$^2$ corresponds to
400,000 stations, each with an effective collecting area of 25 m$^2$ (see
Supplementary Information). This is quite futuristic but we hope that our
theoretical work helps motivate new ideas to achieve this more quickly. The
four observational configurations that we use to illustrate measurements of
the 21-cm power spectrum are listed in Table 2 (for reference, we also include
a G configuration that yields constraints roughly equal to the 1,000 hour
global case). Fig. 2 shows the 1$\sigma$ noise expected for our A and B
configurations, when we include the (dominant) thermal noise as well as cosmic
variance (see Supplementary Information). The figure also shows the power
spectrum when accounting for the effect of angular resolution. The thermal
noise increases rapidly with redshift, and so the maximum S/N (without the
effect of angular resolution) occurs at the minimum redshift we consider
($z=30$), and is 13.3 for the A configuration and 133 for the B configuration,
both at $k=0.091\,{\rm Mpc}^{-1}$.
Table 2: The observational configurations that we use to illustrate
measurements of the 21-cm power spectrum, in terms of the collecting area
$A_{\rm coll}$ and integration time $t_{\rm int}$ (see Supplementary
Information).
| Configuration | D | C | B | A | G |
|---|---|---|---|---|---|
| $A_{\rm coll}$ [km$^{2}$] | 100 | 100 | 10 | 10 | 5 |
| $t_{\rm int}$ [hrs] | 10,000 | 1,000 | 10,000 | 1,000 | 1,000 |
Figure 3: We show graphically the main results that are listed in Table 1,
namely the relative errors in % and the limits on the total mass of neutrinos
(all are $1\sigma$). For the Helium fraction ($Y_{\rm P}$) and the neutrino
mass, we compare to constraints based on Planck (with or without BAO from
galaxy clustering). Note that we abbreviate “configuration” as “Conf.”.
Given measurements of the 21-cm power spectrum throughout the dark ages
($z>30$), we carry out a Fisher analysis with five cosmological parameters
(see Supplementary Information). The relative errors in the $\Lambda$CDM
cosmological parameters are rather large due to significant degeneracies, and
even configuration D approaches the accuracy level of Planck only in some of
the parameters (see Supplementary Information). As with the global signal, it
is more useful to consider parameter combinations that are well constrained.
In particular, we focus on the minimum variance combination (see Supplementary
Information), which for configuration A is
$C_{\rm PowSpec}\equiv\Omega_{\rm b}h^{2}\frac{(A_{\rm
s}e^{-2\tau})^{0.307}(0.9950)^{n_{\rm s}}}{(\Omega_{\rm
m}h^{2})^{0.464}H_{0}^{0.0753}}\,.$ (2)
Here the additional parameters Planck:2018 are the Hubble constant $H_{0}$
(in units of km s$^{-1}$ Mpc$^{-1}$), the primordial amplitude $A_{\rm s}$, the total
reionization optical depth to the CMB $\tau$, and the scalar spectral index
$n_{\rm s}$. We note that the form of $C_{\rm PowSpec}$ (eq. 2) changes
slightly for different scenarios (see Supplementary Information).
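As with $C_{\rm Global}$, the fiducial value of this combination follows directly from the parameter values in the Supplementary Information; a sketch (the exponents are those quoted in eq. 2 for configuration A):

```python
# Evaluate C_PowSpec (eq. 2) at the fiducial parameter values given in the
# Supplementary Information. Exponents are specific to configuration A.
OMEGA_B_H2 = 0.02242
OMEGA_M_H2 = 0.14240
H0 = 67.66            # Hubble constant [km/s/Mpc]
AS_E2TAU = 1.881e-9   # A_s * exp(-2 tau)
N_S = 0.9665          # scalar spectral index

def c_powspec(ob, om, h0, as_e2tau, ns):
    return (ob * as_e2tau ** 0.307 * 0.9950 ** ns
            / (om ** 0.464 * h0 ** 0.0753))

# Configuration A's 4.59% constraint is a relative error on this number.
```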
The relative errors in $C_{\rm PowSpec}$ for the various observational
configurations are shown in Table 1 and Fig. 3. Configuration A would yield a
4.59% measurement of the parameter combination $C_{\rm PowSpec}$, which would
be observationally independent of the global signal constraint and thus
provide a powerful cross-check. More importantly, there would be a great
potential for future improvement, as Configurations B and (the slightly
better) C would improve this by an order of magnitude (reaching the typical
Planck precision on each cosmological parameter), and D by a further order of
magnitude. Just as for the global signal, we consider constraints on
additional parameters while fixing the standard set of parameters. Here
configuration A would measure $Y_{\rm P}$ to 11.9%, B and C would do 5 times
better than the Planck constraint, and D would do almost 50 times better than
Planck. It is reasonable to again consider these constraints while fixing the
standard parameters based on Planck, since Planck constrains the standard
parameters better than any of our 21-cm configurations. Finally, the
constraint on the total neutrino mass would not be competitive with Planck for
configuration A, but would roughly match Planck for configurations B or C, and
beat it by an order of magnitude for configuration D. This constraint is
driven by the suppression of small-scale power due to neutrino free-streaming.
Here we emphasize a major advantage for probing the neutrino effect on
gravitational clustering during the dark ages: the corresponding scales were
still in the regime of linear fluctuations, and were not yet affected by the
complex astrophysics of galaxies.
## 4 Conclusions
Observations of the redshifted 21-cm signal from the dark ages have great
cosmological potential. While various exotic, non-standard scenarios could be
easily detected (or ruled out), here we considered the safe, conservative case
of standard cosmology, studying the potential for creating a powerful new
cosmological probe. We found constraints on the $\Lambda$CDM cosmological
parameters, independently considering the two major types of 21-cm
measurements, the global (or mean) signal as a function of frequency and the
spherically-averaged power spectrum as a function of both frequency and scale.
We used CAMB and added to it redshift space distortions, the Alcock-Paczyński
effect, the light-cone effect, and the effect of angular resolution. For the
error estimates, we considered different levels of thermal noise (plus cosmic
variance), meant to serve as a benchmark for experiments which face additional
practical challenges including foreground removal.
With global 21-cm signal measurements, we found that a combination of
cosmological parameters, $C_{\rm Global}$ (eq. 1), can be effectively
constrained. An integration time of 1,000 hrs would yield a relative error in
$C_{\rm Global}$ of 10.1%, with improvement to a best-case precision of 1.01%
for 100,000 hrs. In the case of the 21-cm power spectrum, it would take a
greater effort to achieve comparable constraints, but there are better
prospects for future advances. The parameter combination $C_{\rm PowSpec}$
(eq. 2) can be constrained to 4.59% in our configuration A (a 1,000 hr
integration with an array of collecting area 10 km$^2$), but the precision can
improve to 0.0457% in our configuration D (a 10,000 hr integration with a
collecting area of 100 km$^2$).
Fixing the standard set of cosmological parameters to their fiducial values,
we found constraints on separately varying two other important parameters.
Given the direct dependence of the 21-cm signal on hydrogen, the fraction of
the baryonic mass in helium $Y_{\rm P}$ would be constrained to 31.4% with a
1,000 hr integration of the global signal; 10,000 hrs would measure it to
9.94%, and the best-case scenario of 100,000 hrs would beat Planck by a factor
of 1.7. Using the power spectrum, configuration A would measure $Y_{\rm P}$ to
11.9%, B and C would do 5 times better than the Planck constraint, and D would
do almost 50 times better than Planck. Regarding limits on the total mass of
neutrinos, constraints that are competitive with Planck would be possible only
with the 21-cm power spectrum, for which configurations B or C would roughly
match Planck, and configuration D would beat it by an order of magnitude.
Our analysis highlights the potential of the 21-cm signal as a probe of
cosmology, and suggests a focus on the global signal as the first step, with
the 21-cm power spectrum being much more promising in the long run. Our
results set a baseline reference for many upcoming and future lunar and space-
based dark ages experiments.
## Correspondence
Correspondence and requests for materials should be addressed to R. Mondal
([email protected]).
## Acknowledgments
The authors would like to thank Antony Lewis and Léon V.E. Koopmans for their
useful discussions. RM is supported by the Israel Academy of Sciences and
Humanities & Council for Higher Education Excellence Fellowship Program for
International Postdoctoral Researchers. The authors acknowledge the support of
the Israel Science Foundation (grant No. 2359/20).
## Author contributions
R.B. initiated the project. R.M. performed the calculations, made the figures,
and wrote the paper, in consultation with R.B..
## Data availability
The data are available upon request from the corresponding author.
## Code availability
CAMB is available at http://camb.info. emcee is available at
https://github.com/dfm/emcee. corner is available at
https://github.com/dfm/corner.py. The analyses are done in Python, extensively
using publicly available routines in NumPy (https://numpy.org) and Matplotlib
(https://matplotlib.org). All other codes used are available upon request from
the corresponding author.
## Competing interests
The authors declare no competing interests.
## Appendix A Supplementary note
In this Supplementary Note, we first (Sec. A.1) briefly summarize our methods
and add some details and technical notes. We next describe in detail how our
predicted signal accounts for several effects: the Alcock-Paczyński effect
(Sec. A.2), the light-cone effect (Sec. A.3), and the effect of angular
resolution (Sec. A.4). We then note our method for constructing a useful
(minimum variance) linear combination of correlated parameters (Sec. A.5), and
present some additional results and discussion (Sec. A.6). Finally, we briefly
discuss the effect of foregrounds (Sec. A.7).
### A.1 Summary of our methods
The main quantity for 21-cm observations, the excess brightness temperature
relative to the CMB from redshift $z$, is
$T_{\rm b}=(T_{\rm s}-T_{\gamma})\frac{1-e^{-\tau_{21}}}{1+z}\ ,$ (3)
where $\tau_{21}$ is the optical depth of the 21-cm transition. Assuming
$\tau_{21}\ll 1$, this can be expressed in the simpler form madau97 ;
Furlanetto2006
${T}_{\rm b}\simeq 54.0\,{\rm mK}\,\frac{\rho_{\rm HI}}{\bar{\rho}_{\rm
H}}\left(\frac{\Omega_{\rm b}h^{2}}{0.02242}\right)\left(\frac{\Omega_{\rm
m}h^{2}}{0.1424}\right)^{-\frac{1}{2}}\left(\frac{1+z}{40}\right)^{\frac{1}{2}}\frac{x_{\rm
c}}{1+x_{\rm c}}\left(1-\frac{T_{\gamma}}{T_{\rm g}}\right),$ (4)
where $\rho_{\rm HI}$ is the neutral hydrogen density and $\bar{\rho}_{\rm H}$
is the cosmic mean density of hydrogen, and also $x_{\rm c}$ is the
collisional coupling coefficient. During the dark ages, CMB scattering pulls
$T_{\rm s}\xrightarrow[]{}T_{\gamma}$, whereas atomic collisions pull $T_{\rm
s}\xrightarrow[]{}T_{\rm g}$.
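At the fiducial cosmological parameters the density-parameter ratios in eq. (4) equal unity, so the brightness temperature reduces to a function of $z$, $x_{\rm c}$, and the temperature ratio. A sketch, with $x_{\rm c}$ and $T_{\gamma}/T_{\rm g}$ supplied externally (computing them requires the collisional rate coefficients and the thermal history, which are beyond this snippet):

```python
import math

def t_b_mk(z, x_c, t_gamma_over_t_gas, rho_hi_ratio=1.0,
           omega_b_h2=0.02242, omega_m_h2=0.14240):
    """21-cm brightness temperature [mK] from eq. (4).
    x_c and the ratio T_gamma/T_gas must be supplied by the caller."""
    return (54.0 * rho_hi_ratio
            * (omega_b_h2 / 0.02242)
            * (omega_m_h2 / 0.1424) ** -0.5
            * math.sqrt((1.0 + z) / 40.0)
            * x_c / (1.0 + x_c)
            * (1.0 - t_gamma_over_t_gas))

# Limiting behavior: with no collisional coupling (x_c = 0) the signal
# vanishes; with perfect coupling (x_c >> 1) and a gas colder than the
# CMB (T_gamma/T_gas > 1) the signal is in absorption (T_b < 0).
```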
As noted in the main text, we use the standard CAMB (http://camb.info)
Lewis2007 ; CAMB cosmological perturbation code to generate the predicted
21-cm signal. Note that CAMB does not directly yield the 21-cm global signal
(i.e., the mean 21-cm brightness temperature) as a function of redshift, so we
extract it indirectly by running once with temperature (mK) units on and once
with temperature units off, and taking the ratio of the transfer functions in
the two cases. Also, CAMB outputs the 2D angular power spectrum, inspired by
CMB analyses but less relevant to 21-cm data that naturally constitute a 3D
dataset both theoretically and observationally. CAMB yields the transfer
function for the 21-cm monopole power spectrum, to which we add by hand the
redshift space distortions using the transfer function of baryon density,
based on Ref. BLlos . Having obtained the anisotropic 3D power spectrum, we
then average over angle in this linear-theory case (this is shown explicitly
within the derivation in Sec. A.2 below). After this, we add several effects
that are presented in detail in the next few sections.
In CAMB and throughout the paper, for the cosmological parameters we use
fiducial values (based mainly on the CMB) Planck:2018 of $H_{0}=67.66$ km
s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm b}h^{2}=0.02242$, $\Omega_{\rm m}h^{2}=0.14240$, $A_{\rm
s}e^{-2\tau}=1.881\times 10^{-9}$, and $n_{\rm s}=0.9665$. We note that
$\Omega_{\rm m}=\Omega_{\rm c}+\Omega_{\rm b}+\Omega_{\nu}$, where
$\Omega_{\rm c}h^{2}=0.11933$ is the contribution of cold dark matter and the
fiducial neutrino contribution (based on the minimal total mass of 0.06 eV
allowed by neutrino oscillation experiments) is $\Omega_{\nu}h^{2}=6.451\times
10^{-4}$.
In this paper, our variables in the global signal case are $\log(\Omega_{\rm
b}h^{2})$ and $\log(\Omega_{\rm m}h^{2})$. For the 21-cm power spectrum we
have three additional variables: $\log(A_{\rm s}e^{-2\tau})$, $n_{\rm s}$, and
$\log(H_{0})$. We assume a flat Universe, where the rest of the energy density
($1-\Omega_{\rm m}$) is given by a cosmological constant. We note that in the
21-cm power spectrum from the dark ages (which, like cosmic recombination in
the case of the CMB, occurred long before any significant reionization), the
amplitude that is directly probed is $(A_{\rm s}e^{-2\tau})$ and not $A_{\rm
s}$. This is since the re-scattering of the 21-cm photons during reionization
damps the fluctuations as in the case of sub-horizon CMB fluctuations, i.e.,
the brightness temperature relative to the mean gets multiplied by a factor of
$e^{-\tau}$. Unlike the CMB, there is no separate information on $A_{\rm s}$
and $\tau$, since in the CMB there is information on the largest scales and on
polarization, but both of these are not expected in 21-cm measurements (at
least in the near future). We also note that there is no logarithm on $n_{\rm
s}$ since it is a power, i.e., it effectively is already defined as a
logarithmic variable. For the power spectrum $P(k)$, we express the results in
terms of the squared fluctuation $\Delta^{2}\equiv k^{3}P(k)/(2\pi^{2})$.
The thermal noise in a global signal measurement is Shaver1999
$\Delta T=\frac{T_{\rm sys}}{\sqrt{\Delta\nu\,t_{\rm int}}}\ ,$ (5)
where $\Delta\nu$ is the bandwidth, $t_{\rm int}$ is the integration time, and
we assume that the system temperature $T_{\rm sys}$ is approximately equal to
the sky brightness temperature $T_{\rm sky}=180\times(\nu/180\,{\rm
MHz})^{-2.6}$ K Furlanetto2006 .
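Eq. (5) is straightforward to evaluate; the sketch below assumes $\Delta\nu\approx\nu$ for a logarithmic bin of width $\Delta(\ln\nu)=1$ (an approximation to the bins used in Fig. 1):

```python
import math

def t_sky(nu_mhz):
    """Sky brightness temperature [K]: T_sky = 180 * (nu / 180 MHz)^-2.6."""
    return 180.0 * (nu_mhz / 180.0) ** -2.6

def global_noise_mk(nu_mhz, t_int_hr, dln_nu=1.0):
    """Thermal noise [mK] of a global-signal measurement (eq. 5),
    with T_sys ~ T_sky and bandwidth dnu ~ nu * dln_nu (an approximation
    for a logarithmic bin of width dln_nu)."""
    dnu_hz = nu_mhz * 1e6 * dln_nu
    t_int_s = t_int_hr * 3600.0
    return 1e3 * t_sky(nu_mhz) / math.sqrt(dnu_hz * t_int_s)

# At nu = 34 MHz (z ~ 41) and t_int = 1,000 hrs this gives ~1.24 mK;
# the quoted maximum S/N of 11.6 then implies a global signal of
# roughly 14 mK at that redshift.
```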
Next we estimate the observational errors for the 21-cm power spectrum.
Although it is negligible in most practical cases, for completeness (and for
comparison with previous theoretically-motivated work) we include the error
due to cosmic variance (CV), which, for the power spectrum measured in a bin centered at wavenumber $k$ and frequency $\nu$, can be expressed as (following eq. 30 of Ref. mondal16 )
$\delta P_{\rm cv}(k,\nu)=\frac{2\pi P(k,\nu)}{\sqrt{V(\nu)k^{3}\Delta(\ln
k)}}\ ,$ (6)
where the survey volume for the frequency (redshift) bin $V(\nu)$, in the
limit of a thin bin, is given by $\Omega_{\rm FoV}r_{\nu}^{2}\Delta
r_{\Delta\nu}$, where $\Omega_{\rm FoV}$ is the field of view, $r_{\nu}$ the
comoving distance to the bin center, and $\Delta r_{\Delta\nu}$ is the
comoving length corresponding to the bandwidth $\Delta\nu$. Note that the
field of view (FoV) is $[21(1+z)\,{\rm cm}]^{2}/A_{\rm eff}$ for an antenna
with an effective collecting area of $A_{\rm eff}$. For example, $\Omega_{\rm
FoV}=8.89$ at $z\sim 70$ assuming $A_{\rm eff}=25\,{\rm m}^{2}$, compared to
the whole sky which is $4\pi=12.6$. Thus, given the small effective area and
large $z$, for a dark ages array the FoV is typically a significant fraction
of the sky; we set a cutoff of half the sky as the maximum solid angle
available to an interferometer.
The dominant error that we include in the power spectrum measurement is that
due to thermal noise, which can be expressed as mellema13
$\delta P_{\rm thermal}=\frac{2}{\pi}\left(\frac{k^{3}\,V}{\Delta(\ln
k)}\right)^{1/2}\,\frac{T^{2}_{\rm sys}}{\Delta\nu~{}t_{\rm
int}}\,\frac{1}{N^{2}}\,\frac{A_{\rm core}}{A_{\rm eff}}\,,$ (7)
where $N$ is the total number of stations and $A_{\rm core}$ is the core area
of the telescope array. A reasonable plan for an upcoming lunar array (Leon
Koopmans, personal communication) consists of $N=128^{2}=16,384$ stations with
$A_{\rm eff}=25\,{\rm m}^{2}$ and a core area equal to the collecting area,
i.e., $A_{\rm core}=A_{\rm coll}=N\times A_{\rm eff}$. This gives a total collecting area of $0.4096\,{\rm km}^{2}$. We keep all of these relations fixed but modify
$N$, getting a total dependence of $\delta P_{\rm thermal}\propto 1/N$. The number of stations must be increased by a factor of 24.4, to $N=400{,}000$, to give our A configuration in the main text (with a collecting area of $10\,{\rm km}^{2}$).
We note though that the smaller collecting area would suffice to put new
strict limits on various non-standard models, and it would approach the
performance of our A configuration if used with an integration time of a few
tens of thousands of hours.
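Under the stated relations ($A_{\rm core}=A_{\rm coll}=N\,A_{\rm eff}$), the factor $(1/N^{2})(A_{\rm core}/A_{\rm eff})$ in eq. (7) reduces to $1/N$; a two-line sketch of the resulting scaling:

```python
def thermal_noise_scaling(n_stations):
    # With A_core = A_coll = N * A_eff, eq. (7) gives delta P proportional
    # to (1/N^2) * (A_core / A_eff) = 1/N.
    return 1.0 / n_stations

# Noise-reduction factor from the baseline lunar-array plan to configuration A:
reduction = thermal_noise_scaling(128**2) / thermal_noise_scaling(400_000)
# reduction ~ 24.4, the factor quoted in the text
```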
The noise estimates shown above are independent of binning; for the overall S/N we adopted a bin size of order the central value, i.e., $\Delta(\ln\nu)=1$ as well as $\Delta(\ln k)=1$. For the Fisher matrix
predictions, we used 8 frequency/redshift bins in the range $5.81\leq\nu\leq 45.81\,{\rm MHz}$ with a bin width of $\Delta\nu=5\,{\rm MHz}$, which corresponds to
central redshifts of [170, 106, 76.6, 59.9, 49.2, 41.6, 36.1, 31.8]; the upper
end of the frequency range was chosen at $z=30$, the typical redshift where
galaxies at cosmic dawn form in sufficient numbers to significantly affect the
21-cm signal subtle . For the power spectrum, we used 11 logarithmic $k$ bins
covering the range $0.00779\leq k<1.91\,{\rm Mpc}^{-1}$ with bin width
$\Delta(\ln k)=0.5$. We checked that the results are insensitive to increasing
these binning resolutions. See also sec. A.6 where these ranges are varied.
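The bin centers and corresponding redshifts quoted above follow directly from the rest-frame 21-cm frequency $\nu_{21}=1420.4$ MHz; a minimal check:

```python
NU21_MHZ = 1420.4  # rest-frame 21-cm frequency

def bin_centers_and_redshifts(nu_min=5.81, nu_max=45.81, dnu=5.0):
    # Central frequencies (MHz) of the 8 bins, and redshifts via 1 + z = NU21 / nu.
    n_bins = round((nu_max - nu_min) / dnu)
    centers = [nu_min + (i + 0.5) * dnu for i in range(n_bins)]
    redshifts = [NU21_MHZ / nu - 1.0 for nu in centers]
    return centers, redshifts

centers, zs = bin_centers_and_redshifts()
# zs is approximately [170, 106, 76.6, 59.9, 49.2, 41.6, 36.1, 31.8]
```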
### A.2 The Alcock-Paczyński effect
When using the 21-cm power spectrum for constraining cosmological parameters,
it is important to account for the fact that the 3D power spectrum depends on
distances, but these are usually not directly measurable in cosmology.
Instead, redshifts are measured along the line of sight, while angles are
measured on the sky. The conversions of these quantities to comoving distances
depend on the values of the cosmological parameters, which themselves are
being constrained by the data. This leads to the so-called Alcock-Paczyński
effect AP1979 , which is important also for the 21-cm signal AliAP ; Nusser ;
APeffect .
Following the analysis of this effect on the 21-cm power spectrum in Ref.
APeffect , the setup is that we have a true cosmology (which we take as that
given by the central, fiducial values of the cosmological parameters), and a
different assumed cosmology (for example, where one of the parameters is
varied from its fiducial value in order to find the resulting derivative of
the signal, for the Fisher matrix calculation). The conversion to distances at
redshift $z$ involves (on the sky) the angular diameter distance $D_{A}$ and
(for small distances along the line of sight) the Hubble constant $H$ at $z$.
The ratio $D_{A}$(true)/$D_{A}$(assumed) we designate $1+\alpha_{\perp}$, and
the ratio $[HD_{A}]$(true)/$[HD_{A}]$(assumed) we designate $1+\alpha$. Note
that these standard scalings, as written, are for physical distances, while we
are interested in comoving distances, but this does not matter here since the
difference is a redshift factor which in 21-cm cosmology is known precisely,
independently of the cosmological parameters. Now, instead of using the full
(complicated) equations in Ref. APeffect , we show here how to implement the
effect of the changing distance measures in two steps. Note that to linear
order the effects of $\alpha_{\perp}$ and of $\alpha$ are independent APeffect
.
The first step is to include the effect of $\alpha_{\perp}$ assuming
$\alpha=0$, which corresponds to assuming that the scalings from angle to
perpendicular distance and from frequency to line-of-sight distance are the
same, in terms of the true parameters relative to the assumed parameters. This
case is isotropic and is simple to do exactly (without a linear approximation)
APeffect . The formulas simplify further when applied to the dimensionless
combination $k^{3}P(k)$ (which is proportional to $\Delta^{2}$):
$k^{3}P(k)=k_{\rm{true}}^{3}P_{\rm{true}}(k_{\rm{true}})\ ,$ (8)
where $k_{\rm{true}}=k/(1+\alpha_{\perp})$.
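In practice, eq. (8) amounts to evaluating the tabulated dimensionless power spectrum at the rescaled wavenumber. A minimal sketch, where the power-law spectrum itself is a hypothetical stand-in and only the rescaling is from eq. (8):

```python
import numpy as np

def apply_isotropic_ap(k, k3p, alpha_perp):
    # Eq. (8): the observed k^3 P(k) at k equals the true k^3 P evaluated at
    # k_true = k / (1 + alpha_perp); implemented by interpolation on a grid.
    k_true = k / (1.0 + alpha_perp)
    return np.interp(k_true, k, k3p)

k = np.logspace(-2, 0.3, 200)          # Mpc^-1
k3p_true = 1e-3 * (k / 0.1) ** 2.0      # hypothetical smooth k^3 P(k), mK^2
k3p_obs = apply_isotropic_ap(k, k3p_true, alpha_perp=0.01)
```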
Figure 4: The logarithmic dependence of the squared fluctuation on the two
parameters of relevance for the Alcock-Paczyński (AP) effect; i.e.,
$\frac{1}{\Delta^{2}}\frac{\partial\Delta^{2}}{\partial(\ln H_{0})}$ (left
panel) and
$\frac{1}{\Delta^{2}}\frac{\partial\Delta^{2}}{\partial(\ln[\Omega_{\rm
m}h^{2}])}$ (right panel), shown at $z=40$. We show the results in three
cases: without the AP effect, with the first (isotropic $\alpha_{\perp}$) step
only, and with the full AP effect.
The second effect, that of $\alpha\neq 0$, is anisotropic, but here we insert
it only for the case of interest, i.e., the simplified case of the effect on
the spherically-averaged power spectrum, to linear order in changes of the
parameters (i.e., to first order in $\alpha$). The result uses the angular
decomposition of the 21-cm power spectrum in linear theory, including the
effect of line-of-sight velocity gradients BLlos :
$P(k,\mu)=\mu^{4}P_{\mu^{4}}(k)+\mu^{2}P_{\mu^{2}}(k)+P_{\mu^{0}}(k)\ ,$ (9)
where $\mu=k_{z}/k$ is the cosine of the angle between the $\vec{k}$ vector
and the line of sight. Here and subsequently, the components (in the
decomposition in powers of $\mu$) of $P(k,\mu)$ refer to the result of eq. 8
(note that $\mu$ is left unchanged by the $\alpha_{\perp}$ rescaling). We note
that $P_{\mu^{0}}(k)$ is the (monopole) 21-cm power spectrum without velocity
effects, $P_{\mu^{4}}(k)$ is simply the power spectrum of the baryon density
(the dimensionless power spectrum times the global temperature squared, to get units of mK$^{2}$), and $P_{\mu^{2}}(k)$ is the cross term of the 21-cm fluctuation
and the baryon density fluctuation. The total spherically-averaged 21-cm power
spectrum is then:
$P(k)=\frac{1}{5}P_{\mu^{4}}(k)+\frac{1}{3}P_{\mu^{2}}(k)+P_{\mu^{0}}(k)\ .$
(10)
Now, from Ref. APeffect we find that the second effect, that of $\alpha\neq
0$, on the spherically-averaged 21-cm power spectrum, is the addition to
$k^{3}P(k)$ of:
$\alpha\,\frac{\partial}{\partial\log
k}\left[\frac{1}{7}\,k^{3}P_{\mu^{4}}(k)+\frac{1}{5}\,k^{3}P_{\mu^{2}}(k)+\frac{1}{3}\,k^{3}P_{\mu^{0}}(k)\right]\
,$ (11)
where again the use of the dimensionless combination $k^{3}P(k)$ simplified
the result. Note that here the $\partial/\partial\log k$ refers to a
derivative at fixed cosmological parameters (as the change in the parameters
is captured through $\alpha_{\perp}$ and $\alpha$).
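The two pieces above, eq. (10) for the spherical average and eq. (11) for the $\alpha$ correction, can be assembled numerically; here the three $\mu$-components are hypothetical smooth power laws, used only to illustrate the bookkeeping:

```python
import numpy as np

k = np.logspace(-2, 0.3, 400)  # Mpc^-1
# Hypothetical mu-decomposition components of eq. (9), in mK^2:
p_mu4 = (k / 0.1) ** -1.0
p_mu2 = 2.0 * (k / 0.1) ** -1.2
p_mu0 = 4.0 * (k / 0.1) ** -1.5

# Eq. (10): the spherically-averaged power spectrum.
p_sph = p_mu4 / 5.0 + p_mu2 / 3.0 + p_mu0

# Eq. (11): linear-order addition to k^3 P(k) for alpha != 0.
alpha = 0.01
bracket = k**3 * (p_mu4 / 7.0 + p_mu2 / 5.0 + p_mu0 / 3.0)
correction = alpha * np.gradient(bracket, np.log(k))  # d/d(log k), fixed cosmology
k3p_total = k**3 * p_sph + correction
```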
As noted in the main text, the Alcock-Paczyński effect is important in fitting
the dark ages 21-cm power spectrum, since it introduces a dependence on
$H_{0}$ that is separate from the dependence on the other cosmological
parameters; it also modifies the dependence of the 21-cm signal on
$\Omega_{\rm m}h^{2}$. Fig. 4 illustrates the logarithmic dependence of the
power spectrum on these two parameters that are relevant for the Alcock-
Paczyński effect. Both steps (in the above two-step procedure) have a
comparable contribution to the $H_{0}$ dependence (which would otherwise be
completely absent), while mainly the first (isotropic $\alpha_{\perp}$) step
significantly enhances the dependence on $\Omega_{\rm m}h^{2}$.
### A.3 The light-cone effect
In measurements of the 21-cm power spectrum, since different positions along
the line of sight correspond to different redshifts (i.e., what is observed
are points along our past light cone), this results in anisotropy in the 21-cm
power spectrum barkana06 ; mondal18 . Here we are interested only in the
spherically-averaged power spectrum, as averaged over the redshift span of
each frequency bin, and the effect is then simply to average the signal over
this redshift range.
To understand how this averaging works, it is easier to consider the
correlation function, which is of course closely related to the power
spectrum. For the correlation function at some (comoving) distance $r$, what
we do is average over all pairs separated by $r$ in the observed volume. Let
us call the line-of-sight comoving distance $x$ in this subsection (to avoid
confusion with the redshift $z$). In the pair, let us call the two points #1
and #2; for each point #1, we average over points #2 in a spherical shell at a
distance $r$ from point #1. Actually, the shell is partially cut off at the
near and far edges of the radial bin, but this can be neglected as long as the
bin is large compared to the scales $1/k$ that we are interested in. Each
spherical shell is symmetric about point #1, so there is a cancellation as
long as we can treat the power spectrum as a linear function of $x$, over
distances of order $1/k$. We indeed assume this linear case, consistently with
our overall approach.
What remains is the averaging over points #1, so the result is simply an
average of the power spectrum over comoving volume:
$\frac{\int x^{2}\Omega(x)P(k,x)dx}{\int x^{2}\Omega(x)dx}\ .$ (12)
Here the solid angle $\Omega(x)$ at each $x$ is the same as before (a function of the corresponding redshift $z$, but no more than half the sky). Also, $P(k,x)$ denotes the
power spectrum at $k$ at the redshift corresponding to comoving distance $x$
(note that $x=(1+z)D_{A}(z)$ in terms of the angular diameter distance). For
each frequency bin, the integrals are over the range of $x$ corresponding to
the bin.
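A sketch of the volume-weighted average in eq. (12), using toy numbers for the bin (the distances and the mild evolution of $P$ are illustrative assumptions):

```python
import numpy as np

def _trapezoid(y, x):
    # Simple trapezoidal integration (avoids NumPy version differences).
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def lightcone_average(x, omega, p_along_bin):
    # Eq. (12): average of P(k, x) weighted by the comoving volume element
    # x^2 Omega(x) dx across the frequency bin.
    w = x**2 * omega
    return _trapezoid(w * p_along_bin, x) / _trapezoid(w, x)

x = np.linspace(12000.0, 12100.0, 50)    # toy comoving distances, Mpc
omega = np.full_like(x, 2.0 * np.pi)      # half-sky cap on the solid angle
p = 10.0 * (1.0 + 0.001 * (x - x[0]))     # toy slowly-evolving P(k, x), mK^2
p_avg = lightcone_average(x, omega, p)
```

For a $P$ that is constant across the bin the average returns it unchanged; for the evolving toy spectrum the $x^{2}$ weighting pulls the average slightly toward the far side of the bin.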
We find that the light-cone effect in our analysis is fairly small given our
bandwidth of $\Delta\nu=5$ MHz. E.g., when fitting the power spectrum with
configuration A, if we do not include the light-cone effect, the error in
$C_{\rm PowSpec}$ changes from 4.59% to 4.95%.
### A.4 Angular resolution
When radio interferometry is done at increasingly high redshifts, it becomes
more difficult to achieve a given angular resolution. Thus, the resolution is
a significant limiting factor in measuring the 21-cm fluctuations from the
dark ages. We account for the effect of angular resolution analytically, as
follows. Based on simulations of future radio arrays as well as experience
with current arrays (Koopmans, personal communication), it is a good
approximation to assume a Gaussian point-spread function (PSF), with a full-width at half-maximum (FWHM) corresponding to $0.6\lambda/D$, where $\lambda$ is
the wavelength and $D$ is the maximum diameter of the array (which we find
assuming that the collecting area is a full circle).
Thus, if the comoving coordinates are $X$, $Y$, and $Z$ (with the latter being
the line-of-sight direction in this subsection), angular resolution smooths
the 21-cm map with a window function
$W=\frac{1}{2\pi R^{2}}\,e^{-(X^{2}+Y^{2})/(2R^{2})}\delta_{D}(Z)\ ,$ (13)
where $\delta_{D}$ is a Dirac delta function, the pre-factor ensures
normalization to a volume integral of unity, and $R$ is the comoving distance
corresponding to angle $\theta_{D}$ (i.e., $R=(1+z)D_{A}(z)\theta_{D}$ in
terms of the angular diameter distance $D_{A}$), where the above yields an
angle
$\theta_{D}=\frac{0.6\lambda/D}{2\sqrt{2\ln 2}}=0.25\,\frac{\lambda}{D}=9.1^{\prime}\left(\frac{1+z}{50}\right)\left(\frac{D}{1\,\mathrm{km}}\right)^{-1}\ .$ (14)
Then the Fourier transform of $W$ is
$\tilde{W}=\int
d^{3}r\,We^{-i\vec{k}\cdot\vec{r}}=e^{-\frac{1}{2}k^{2}R^{2}(1-\mu^{2})}\ ,$
(15)
where $\mu=\cos{\theta}$ in terms of the angle $\theta$ between $\vec{k}$ and
the line of sight. The power spectrum is multiplied by the square of
$\tilde{W}$.
Finally, since here we are only considering the spherically-averaged power
spectrum, we average over angle, which multiplies the power spectrum by the
factor $F(kR)$, where
$F(\alpha)=\frac{1}{2}\int_{-1}^{1}d\mu\,e^{\alpha^{2}(\mu^{2}-1)}\ .$ (16)
This integral is related to the error function, but note that the coefficient
of $\mu^{2}$ in the exponent is positive. This function is shown in Fig. 5.
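For reference, $F(\alpha)$ can be evaluated by direct quadrature; it also appears to admit a closed form via the Dawson function, $F(\alpha)=D(\alpha)/\alpha$ (our identification, obtained by substituting $t=\alpha\mu$ in eq. 16), which the sketch below checks numerically:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import dawsn

def f_resolution(alpha):
    # Eq. (16): F(alpha) = (1/2) * integral over mu in [-1, 1] of
    # exp(alpha^2 * (mu^2 - 1)).
    val, _ = quad(lambda mu: np.exp(alpha**2 * (mu**2 - 1.0)), -1.0, 1.0)
    return 0.5 * val

# Cross-check against the Dawson-function form F(alpha) = dawsn(alpha) / alpha:
print(f_resolution(1.0), dawsn(1.0) / 1.0)  # the two should agree closely
```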
Figure 5: The function $F(\alpha)$, which captures the effect of angular resolution, shown as a function of $\alpha$.
As an example, when fitting the power spectrum with configuration A, if we do
not include the effect of angular resolution, the error in $C_{\rm PowSpec}$
changes from 4.59% to 3.53%. We note, however, that this treatment is somewhat overly conservative: in our calculations we compare the theoretically-predicted power spectrum, including the effect of angular resolution, to the standard expression for the thermal noise, whereas in reality the limited angular resolution will also smooth out the power spectrum of the thermal noise (an effect that we do not include).
### A.5 Method for constructing a minimum variance linear combination of
correlated parameters
Given that the fitting of the 21-cm signal to cosmological parameters results
in significant degeneracies among the parameters, we found it useful to
construct combinations of the parameters that have a minimum variance. This
best captures the constraining power of the data, especially since these
combinations are unique to the 21-cm signal and are substantially different
from the combinations that are best constrained by other cosmological
datasets.
Fitting the 21-cm global signal is a case of two parameters. In general, let
the parameters be $x$ and $y$, and assume we know
$\sigma_{x}\equiv\sqrt{\langle(\Delta x)^{2}\rangle}$ (where $\Delta x\equiv
x-\langle x\rangle$), $\sigma_{y}\equiv\sqrt{\langle(\Delta y)^{2}\rangle}$,
and the correlation coefficient $r=\langle\Delta x\Delta
y\rangle/(\sigma_{x}\sigma_{y})$. We treat $x$ as the primary variable, which
in practice should be chosen as the parameter that the signal is most
sensitive to; naturally, this parameter will have the largest coefficient in
the linear combination below. Then the linear combination of $x$ and $y$ with
minimum variance, normalized so that $x$ has a coefficient of unity, is:
$C=x-\alpha y\ ,$ (17)
where
$\alpha=r\frac{\sigma_{x}}{\sigma_{y}}\ ,$ (18)
and $C$ has a standard deviation of
$\sigma_{C}=\sigma_{x}\sqrt{1-r^{2}}\ .$ (19)
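Equations (17)-(19) are easy to verify with a quick Monte Carlo draw from a correlated Gaussian (the numbers below are arbitrary test values, not fitted errors):

```python
import numpy as np

rng = np.random.default_rng(0)

sigma_x, sigma_y, r = 1.0, 2.0, 0.9   # arbitrary test values
cov = [[sigma_x**2, r * sigma_x * sigma_y],
       [r * sigma_x * sigma_y, sigma_y**2]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=200_000).T

alpha = r * sigma_x / sigma_y                  # eq. (18)
c = x - alpha * y                              # eq. (17)
sigma_c_pred = sigma_x * np.sqrt(1.0 - r**2)   # eq. (19)
# c.std() and sigma_c_pred agree to Monte Carlo accuracy
```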
Fitting the 21-cm power spectrum is a case of $n=5$ parameters. In general,
let the parameters be $x_{i}$, with $i=1$ through $n$. Then we desire to find
the weights $w_{i}$ so that the parameter combination
$C=\sum_{i}x_{i}w_{i}\,,$ (20)
has minimum variance, where in the weight vector $w$, we fix $w_{1}=1$, thus
treating $x_{1}$ as the primary variable. Assume we know the covariance matrix
$S_{ij}=\langle\Delta x_{i}\Delta x_{j}\rangle$. Then to get the solution, we
remove the first row and column and obtain the reduced $(n-1)\times(n-1)$
matrix $U$, which is simply $S_{ij}$ for $i,j>1$. Also the covariances of the
other $x_{j}$ (for $j>1$) with $x_{1}$, i.e., $S_{j1}$ for $j>1$, we will call
the $(n-1)\times 1$ vector $v$. In addition, the $n-1$ weights, $w_{j}$ for
$j>1$, are a reduced weight vector $z$. Now we solve: $Uz=-v$, so that the
solution is:
$z=-U^{-1}v\ .$ (21)
We construct the full vector $w$ from this solution for $z$, and the resulting
minimum variance is
$\sigma_{C}^{2}=w^{T}Sw=\sum_{i,j}S_{ij}w_{i}w_{j}\ ,$ (22)
where $w^{T}$ is the transpose of $w$, and $i,j$ go from 1 to $n$.
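The general recipe above reduces to a few lines of linear algebra; the sketch below checks it against the two-parameter formulas (eqs. 17-19) for an example covariance matrix:

```python
import numpy as np

def min_variance_combination(S):
    # Eqs. (20)-(22): weights w (with w[0] fixed to 1) minimizing
    # Var(sum_i w_i x_i) for covariance matrix S.
    S = np.asarray(S, dtype=float)
    U = S[1:, 1:]                 # reduced covariance of x_2..x_n
    v = S[1:, 0]                  # covariances of x_2..x_n with x_1
    z = -np.linalg.solve(U, v)    # eq. (21): z = -U^{-1} v
    w = np.concatenate(([1.0], z))
    return w, float(w @ S @ w)    # weights and minimum variance (eq. 22)

# Example: sigma_x = 1, sigma_y = 2, r = 0.9, so eq. (18) gives alpha = 0.45
# and eq. (19) gives sigma_C^2 = 1 - 0.9^2 = 0.19.
S2 = [[1.0, 1.8], [1.8, 4.0]]
w, var = min_variance_combination(S2)
```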
We note that it is a general result that if only the parameter $x_{1}$ is fit
to the data with all the other parameters held fixed, then the resulting error
$\sigma_{1}$ in $x_{1}$ is in fact equal to the just-written expression for
$\sigma_{C}$.
### A.6 Additional results and discussion
In this section we present a number of additional results and checks, along
with additional discussion. We begin with the global signal. We focused on the
parameter combination $C_{\rm Global}$, where the power in the denominator
indicates the power-law dependence of the global signal amplitude on
$\Omega_{\rm m}h^{2}$ relative to $\Omega_{\rm b}h^{2}$. It is naturally
expected to be near 1/4, given eq. (4) (with its terms directly suggesting a
power of 1/2) plus the fact that most of the signal-to-noise comes from the
relatively low redshifts where $x_{c}$ is significantly below 1, and this
coefficient is proportional to the collision rate per atom, and thus to
$\Omega_{\rm b}h^{2}$; this suggests a total dependence roughly proportional
to $(\Omega_{\rm b}h^{2})^{2}/(\Omega_{\rm m}h^{2})^{1/2}$, and $C_{\rm
Global}$ then goes as the square root of this (since we fix the dependence on
the primary parameter, $\Omega_{\rm b}h^{2}$, to the power of unity). In Table
3 we also show the global signal constraints on the two relevant cosmological
parameters. The errors are very large, in general and also compared to the
Planck measurements. There is a nearly complete degeneracy, in that the
correlation coefficient between $\Omega_{\rm b}h^{2}$ and $\Omega_{\rm
m}h^{2}$, for example for $t_{\rm int}=10{,}000$ hrs, is 0.994. We note that for
any parameter $x$, $\sigma[\ln(x)]$ equals the relative error in $x$ when
$\sigma\ll 1$; this relation is only approximately true when $\sigma$ is not
small, but for simplicity, we always quote $\sigma[\ln(x)]$ as the relative
error in $x$.
Table 3: For the global signal, the 1$\sigma$ relative errors (in %) on cosmological parameters, compared to Planck. The last three columns give the global-signal errors for each integration time.
 | Planck $+$ BAO | Planck | 100,000 hrs | 10,000 hrs | 1,000 hrs
---|---|---|---|---|---
$\Omega_{\rm b}h^{2}$ | 0.624 | 0.671 | 9.76 | 30.9 | 97.6
$\Omega_{\rm m}h^{2}$ | 0.611 | 0.769 | 39.2 | 124 | 392
Figure 6: Global 21-cm signal constraints based on MCMC fitting for $t_{\rm int}=10{,}000$ hrs. Here we use two basic variables, $\ln(\Omega_{\rm b}h^{2})$
and the logarithm of $C_{\rm Global}\equiv\Omega_{\rm b}h^{2}/(\Omega_{\rm
m}h^{2})^{0.248}$. The panels show the posterior distributions (1D and 2D) of
the two parameters.
As the errors on the parameters are large while that on $C_{\rm Global}$ is
small, we run an MCMC chain in one case ($t_{\rm int}=10{,}000$ hrs) to verify
that the non-linear individual errors are not leading to a breakdown of the
Fisher matrix approach as applied to the important parameter, $C_{\rm
Global}$. The results are shown in Fig. 6. In the 2D posterior panel, we see
that there is almost no correlation between $\Omega_{\rm b}h^{2}$ and $C_{\rm
Global}$, which justifies the choice of $C_{\rm Global}$ as the second
parameter. We find a 1$\sigma$ constraint on $\ln{C_{\rm Global}}$ of
$-3.310^{+0.033}_{-0.035}$, equivalent to a relative error of 3.4% in $C_{\rm
Global}$, compared to the Fisher approach that gave a relative uncertainty of
$3.18\%$, an error that is close to the MCMC limits. Also the 2$\sigma$ MCMC
constraint on $\ln{C_{\rm Global}}$ is $-3.310^{+0.067}_{-0.070}$, which is
roughly double the 1$\sigma$ range but shows slight asymmetry. We conclude
that the Fisher approach is good enough for approximate answers in this first
analysis, but full MCMC is needed for higher precision when the errors in some
of the underlying parameters are large. Fig. 6 was generated using the python
packages emcee Foreman-Mackey2013 and corner corner .
Moving to the 21-cm power spectrum, in Fig. 2 in the main text we showed
slices through this 2D dataset at fixed redshifts. Fig. 7 shows slices in the
opposite direction, namely the variation with $\nu$ (or $z$), at fixed
wavenumber values $k=[0.01$, 0.04, 0.1, 0.4, 1.0, $4.0]\,{\rm Mpc}^{-1}$, for
the fiducial cosmological model. As expected, the power increases as we go
from large scales to small scales. The power (for all the curves that are
shown) peaks at $z=51$. These slices show the smooth evolution with redshift
at each $k$. We also show the 1$\sigma$ noise curves (thermal plus cosmic
variance) for the A and B configurations.
Figure 7: The spherically-averaged (total) power spectrum of 21-cm brightness
fluctuations as a function of $\nu$ (or $z$ as the top $x$-axis) at wavenumber
values $k=[0.01$, 0.04, 0.1, 0.4, 1.0, $4.0]\,{\rm Mpc}^{-1}$. We also show
the 1$\sigma$ noise (thermal plus cosmic variance) for our A (short dashed lines) and B (long dashed lines) configurations, at $k=0.1\,{\rm Mpc}^{-1}$ and $1.0\,{\rm Mpc}^{-1}$. The effect of the angular resolution is not shown here.
We now consider the results for the cosmic variance (CV) only case, which
corresponds to the limit of infinite collecting area or integration time. This
is a purely theoretical limit of some interest as a comparison case, given its
role in some previous work Floss2022 ; mondal17 . We assume in this limiting
case no thermal noise, perfect angular resolution, and a full sky (i.e.,
$\Omega_{\rm FoV}=4\pi$). The relative error in $C_{\rm PowSpec}$ for the CV-
only case would be $7.72\times 10^{-5}\,$%. The relative error in $Y_{\rm P}$
(fixing all other parameters) would be $2.18\times 10^{-4}\,$%, and the sum of
the neutrino masses would be constrained to $\sum m_{\nu}<2.40\times 10^{-5}$
eV. Fixing the other parameters would not be an appropriate assumption in this
case with such minuscule errors, but we include this here for comparison with
the other cases considered in the main text.
As noted in the main text, there are strong correlations among the
cosmological parameters, which is what led us to focus on the combination
$C_{\rm PowSpec}$. The values of the correlation coefficients are illustrated
in Table 4, for Configuration A and for the CV-only case. Some of the
coefficients approach unity in absolute value.
We summarize the coefficients for configuration A as [0.307, 0.9950, 0.464,
0.0753] for the power of $(A_{\rm s}e^{-2\tau})$, the base of $n_{\rm s}$, and
the powers in the denominator of $\Omega_{\rm m}h^{2}$ and $H_{0}$,
respectively. While the dependence of the 21-cm power spectrum on the
cosmological parameters is complex, we can try to roughly understand what
drives the various powers in the combination $C_{\rm PowSpec}$. As discussed
in the first paragraph of this section, the global signal is roughly
proportional to $(\Omega_{\rm b}h^{2})^{2}/(\Omega_{\rm m}h^{2})^{1/2}$. The
power spectrum goes as the global signal squared times the dimensionless
(i.e., relative) squared fluctuation level. This is proportional to the
primordial amplitude $A_{\rm s}$, reduced by post-reionization scattering (as
for all sub-horizon scales in the CMB) by the factor $e^{-2\tau}$. Then, the
growth of fluctuations (squared) from the early Universe down to the cosmic
dark ages is roughly proportional to the growth factor (squared) at the dark
ages relative to matter-radiation equality (which is when significant matter
fluctuation growth begins). Fixing as before the dependence on the primary
parameter, $\Omega_{\rm b}h^{2}$, to a power of unity, this would suggest a
power of 0.25 for $A_{\rm s}e^{-2\tau}$ and 0.75 in the denominator for
$\Omega_{\rm m}h^{2}$. The actual powers are changed by various additional
complications, including a strong scale dependence in the sensitivity to
$\Omega_{\rm m}h^{2}$ and a weak separate sensitivity to the Hubble constant
introduced by the Alcock-Paczyński effect (see Sec. A.2). In addition, the
weak dependence on $n_{\rm s}$ in $C_{\rm PowSpec}$ means that the effective
scale that is being constrained by the 21-cm power spectrum is close to the
pivot scale $k=0.05\,{\rm Mpc}^{-1}$ at which $A_{\rm s}$ is defined
Planck:2018 .
As we noted in the main text, the form of $C_{\rm PowSpec}$ changes for
different scenarios. The coefficients for configuration G are [0.304, 0.0488,
0.484, 0.0698], for configuration B: [0.307, 0.9986, 0.461, 0.0751], for
configuration C: [0.311, 1.118, 0.382, 0.0811], for configuration D: [0.315,
1.233, 0.300, 0.0760], and for the CV-only case: [0.335, 2.97, -0.292,
0.0223]. Thus, the coefficients for configurations A and B are nearly
identical (since both are strongly dominated by the thermal noise), but things
change with C and D (the angular resolution is now higher, and the CV plays
some role, particularly for D), and big changes happen for CV-only (as the
detailed shape of the power spectrum now plays a major role, and much smaller
scales come into play).
Table 4: The correlation coefficients in the fits of the 21-cm power spectrum, for the CV-only case (first four columns) and Configuration A (last four columns). Note that the actual parameters used in the fitting are the logarithms of the parameters listed here (except for $n_{\rm s}$).
 | $H_{0}$ | $\Omega_{\rm b}h^{2}$ | $\Omega_{\rm m}h^{2}$ | $A_{\rm s}e^{-2\tau}$ | $H_{0}$ | $\Omega_{\rm b}h^{2}$ | $\Omega_{\rm m}h^{2}$ | $A_{\rm s}e^{-2\tau}$
---|---|---|---|---|---|---|---|---
$\Omega_{\rm b}h^{2}$ | 0.538 | | | | -0.661 | | |
$\Omega_{\rm m}h^{2}$ | -0.965 | -0.423 | | | -0.779 | 0.317 | |
$A_{\rm s}e^{-2\tau}$ | 0.917 | 0.271 | -0.897 | | 0.633 | -0.992 | -0.225 |
$n_{\rm s}$ | 0.814 | 0.260 | -0.911 | 0.674 | -0.0812 | 0.575 | -0.487 | -0.658
In fitting the 21-cm power spectra from the dark ages, in the main text we
focused on $C_{\rm PowSpec}$ as well as constraints on Helium and neutrinos.
The relative errors in the standard cosmological parameters are listed in
Table 5 and shown in Fig. 8. We do not show configuration A, for which the errors are significantly larger even than for the 1,000 hr global signal case. For configuration D some of the errors approach Planck levels, while
the ultimate CV-only case is in principle better than Planck by between 1 and
3 orders of magnitude.
Table 5: For the 21-cm power spectrum, the relative 1$\sigma$ errors in %,
compared to Planck. Note that we include the CV-only case (which has an extra
factor of $10^{-2}$ as indicated). We also list here the errors on
$\Omega_{\rm c}h^{2}$ since this is one of the standard input parameters in
CAMB.
 | Planck $+$ BAO | Planck | CV only ($\times 10^{-2}$) | Config. D | Config. C | Config. B
---|---|---|---|---|---|---
$H_{0}$ | $0.621$ | 0.802 | 8.10 | 4.76 | 40.4 | 42.4
$\Omega_{\rm b}h^{2}$ | 0.624 | 0.671 | 0.105 | 1.62 | 13.8 | 18.4
$\Omega_{\rm m}h^{2}$ | 0.611 | 0.769 | 1.21 | 0.968 | 7.85 | 8.60
$A_{\rm s}e^{-2\tau}$ | 0.532 | 0.584 | 0.859 | 5.82 | 49.3 | 62.9
$n_{\rm s}$ | $0.393$ | 0.435 | 0.237 | 0.687 | 5.58 | 7.32
$\Omega_{\rm c}h^{2}$ | 0.762 | 1.00 | 1.46 | 1.09 | 8.73 | 9.69
Figure 8: The relative 1$\sigma$ errors in % from fitting to the 21-cm power
spectrum from the dark ages. We show graphically the main results listed in
Table 5. Note that ‘Conf.’ stands for configuration.
Finally, we explore the dependence of our power spectrum results on varying
the assumed observational ranges, for configuration A. For $k$ this is of
interest since observational limitations (such as foreground removal) could
limit the available range. Table 6 shows that the results are insensitive as
long as we include the scales around the first few BAO, where the S/N is
maximized. We also explore various $\nu$ ranges, keeping $\Delta\nu=5$ MHz and
removing low redshifts. This is interesting since in rare models the 21-cm
signal can be affected by galaxies at redshifts almost up to 35 subtle , and it is also of interest to understand to what degree the lower redshifts dominate the fitting. As shown in Table 7, since the S/N is maximized at the lowest
redshift, the cutoff redshift indeed has a substantial effect on the results;
the minimum redshifts corresponding to the tabulated cases are 30, 33.8, 38.7,
and 45.1. The precise high-redshift cutoff is less important given the low S/N
at that end.
Table 6: The relative (1$\sigma$) errors in %, for various $k$ ranges, when fitting the power spectrum with configuration A. In all cases we maintain an integer number of bins with $\Delta(\ln k)=0.5$.
$k$ range [${\rm Mpc}^{-1}$] | Fiducial [0.00779 - 1.91] | [0.0234 - 2.10] | [0.0779 - 2.58] | [0.00779 - 5.18]
---|---|---|---|---
$C_{\rm PowSpec}$ | 4.59 | 4.65 | 7.76 | 4.59
Table 7: The relative (1$\sigma$) errors in %, for various $\nu$ ranges, when
fitting the power spectrum with configuration A. In all cases we maintain an
integer number of bins with $\Delta\nu=5$ MHz.
$\nu$ range [MHz] | Fiducial $[5.81-45.81]$ | $[5.81-40.81]$ | $[5.81-35.81]$ | $[5.81-30.81]$
---|---|---|---|---
$C_{\rm PowSpec}$ | 4.59 | 6.57 | 12.2 | 32.7
### A.7 Discussion of foregrounds
The brightness temperature of the foreground sky emission at $z=40$ is
expected to be around 13,070 K. While this is significantly higher than at
lower redshifts, the thermal noise is proportional to the sky brightness for
the global signal (eq. 5), and the square of the sky brightness for the power
spectrum (eq. 7); thus, the relative accuracy needed for foreground removal,
in order for the foreground residuals to fall below the thermal noise, is
independent of redshift (for a fixed integration time and frequency bin size).
For example, for the global signal with $t_{\rm int}$ = 1,000 hrs, the
foreground must be removed to an accuracy of a part in $10^{6}$ or better
(depending on the frequency bin size). This is challenging, but the cosmic
dawn experiments are making steady progress, and as explained in the
introduction of the main text, we expect the lunar environment to make this task significantly easier than the terrestrial environment does.
We wish to account for foreground removal while fitting the global signal, at
least in the best-case scenario. Thus we add a free parameter $A$ to the model
that we fit to the data, in the shape of the synchrotron foreground, i.e.,
$A\,\nu^{-2.6}$. In practice, in current global signal experiments more
polynomial terms are usually added for a more realistic foreground modeling.
As we noted, there are reasons to hope that less of this will be required on the Moon, but even in the best-case scenario, a signal component of the same
shape as the foreground cannot be distinguished from it. To illustrate the
impact, we note that the error in $C_{\rm Global}$ for $t_{\rm int}=1{,}000$ hrs, which is 10.1%, would instead be 5.54% without this additional foreground term.
In the case of the 21-cm power spectrum, in addition to foreground removal,
which can never be perfect, another method of dealing with foregrounds is to
avoid them. Foreground contamination is expected to be largely restricted to
within a wedge-shaped region in the 2D $(k_{\perp},k_{\parallel})$ Fourier
space, where these are the components of the wavevector perpendicular and
parallel to the line of sight, respectively. Thus, it may be easier to achieve
an extremely high accuracy of foreground removal outside the wedge. For a
rough estimate of the effect of foreground avoidance, we consider this
foreground wedge with different levels of contamination. We calculate the
wedge boundary using Datta2010 ; dillon14 ; pober14 ; jensen15
$k_{\parallel}=\left(\frac{r_{\nu}\sin{\theta_{\rm
L}}}{r_{\nu}^{\prime}\nu}\right)k_{\perp}\,,$ (23)
where $r_{\nu}$ is the comoving distance to the bin center,
$r_{\nu}^{\prime}=\frac{dr}{d\nu}$ at the bin center, and $\theta_{\rm L}$ is
the angle on the sky with respect to the zenith from which the foregrounds
contaminate the power of the 21-cm signal. At $z=40$, we assume two scenarios.
The first assumes $\theta_{\rm L}=2\times$ FWHM (optimistic). We estimate that
roughly $1/10$ of the $(k_{\perp},k_{\parallel})$ space is affected in this
case. Assuming a more pessimistic case of $\theta_{\rm L}=\pi/2$, we find that
roughly 1/2 of the S/N can be lost due to foreground contamination. Thus, the
effect of foreground avoidance can be significant but is most likely not a
game changer.
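As a rough numerical illustration of Eq. (23), the sketch below evaluates the wedge slope for the two scenarios; the numerical values (comoving distance, its frequency derivative, beam angle) are illustrative assumptions, not quantities taken from the text.

```python
import math

def wedge_kparallel(k_perp, r_nu, dr_dnu, nu, theta_L):
    """Foreground-wedge boundary k_parallel for a given k_perp (Eq. 23).

    r_nu    : comoving distance to the bin center [Mpc]
    dr_dnu  : |dr/dnu| at the bin center [Mpc/MHz]
    nu      : frequency of the bin center [MHz]
    theta_L : sky angle from zenith from which foregrounds leak [rad]
    """
    return (r_nu * math.sin(theta_L) / (dr_dnu * nu)) * k_perp

# Illustrative round numbers for z ~ 40 (21 cm redshifted to ~34.6 MHz).
r_nu, dr_dnu, nu = 1.2e4, 650.0, 34.6
slope_pess = wedge_kparallel(1.0, r_nu, dr_dnu, nu, math.pi / 2)       # theta_L = pi/2
slope_opt = wedge_kparallel(1.0, r_nu, dr_dnu, nu, math.radians(4.0))  # ~2x a 2-deg FWHM
print(slope_pess, slope_opt)
```

The pessimistic slope ($\theta_{\rm L}=\pi/2$) is much steeper, so a much larger fraction of the $(k_{\perp},k_{\parallel})$ plane falls inside the wedge.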
## References
* * (1) Sunyaev, R. A. & Zeldovich, Y. B. Formation of Clusters of Galaxies; Protocluster Fragmentation and Intergalactic Gas Heating. _A &A_ 20, 189 (1972) .
* (2) Hogan, C. J. & Rees, M. J. Spectral appearance of non-uniform gas at high z. _MNRAS_ 188, 791–798 (1979). 10.1093/mnras/188.4.791 .
* (3) Scott, D. & Rees, M. J. The 21-cm line at high redshift: a diagnostic for the origin of large scale structure. _MNRAS_ 247, 510 (1990) .
* (4) Madau, P., Meiksin, A. & Rees, M. J. 21 Centimeter Tomography of the Intergalactic Medium at High Redshift. _ApJ_ 475, 429 (1997). arXiv:astro-ph/9608010 .
* (5) Loeb, A. & Zaldarriaga, M. Measuring the Small-Scale Power Spectrum of Cosmic Density Fluctuations through 21cm Tomography Prior to the Epoch of Structure Formation. _Phys. Rev. Lett._ 92 (21), 211301 (2004). 10.1103/PhysRevLett.92.211301, arXiv:astro-ph/0312134 [astro-ph].
* (6) Barkana, R. & Loeb, A. Probing the epoch of early baryonic infall through 21-cm fluctuations. _MNRAS_ 363 (1), L36–L40 (2005). 10.1111/j.1745-3933.2005.00079.x, arXiv:astro-ph/0502083 [astro-ph].
* (7) Bharadwaj, S. & Ali, S. S. The cosmic microwave background radiation fluctuations from HI perturbations prior to reionization. _MNRAS_ 352, 142–146 (2004). arXiv:astro-ph/0401206 .
* (8) Barkana, R. & Loeb, A. A Method for Separating the Physics from the Astrophysics of High-Redshift 21 Centimeter Fluctuations. _ApJ_ 624, L65–L68 (2005). arXiv:astro-ph/0409572 .
* (9) Naoz, S. & Barkana, R. Growth of linear perturbations before the era of the first galaxies. _MNRAS_ 362 (3), 1047–1053 (2005). 10.1111/j.1365-2966.2005.09385.x, arXiv:astro-ph/0503196 [astro-ph].
* (10) Lewis, A. & Challinor, A. 21cm angular-power spectrum from the dark ages. _Phys. Rev. D_ 76 (8), 083005 (2007). 10.1103/PhysRevD.76.083005, arXiv:astro-ph/0702600 [astro-ph].
* (11) Ali-Haïmoud, Y., Meerburg, P. D. & Yuan, S. New light on 21 cm intensity fluctuations from the dark ages. _Phys. Rev. D_ 89 (8), 083506 (2014). 10.1103/PhysRevD.89.083506, arXiv:1312.4948 [astro-ph.CO].
* (12) Tashiro, H., Kadota, K. & Silk, J. Effects of dark matter-baryon scattering on redshifted 21 cm signals. _Phys. Rev. D_ 90 (8), 083522 (2014). 10.1103/PhysRevD.90.083522, arXiv:1408.2571 [astro-ph.CO].
* (13) Muñoz, J. B., Kovetz, E. D. & Ali-Haïmoud, Y. Heating of baryons due to scattering with dark matter during the dark ages. _Phys. Rev. D_ 92 (8), 083528 (2015). 10.1103/PhysRevD.92.083528, arXiv:1509.00029 [astro-ph.CO].
* (14) Barkana, R. Possible interaction between baryons and dark-matter particles revealed by the first stars. _Nature_ 555 (7694), 71–74 (2018). 10.1038/nature25791, arXiv:1803.06698 [astro-ph.CO].
* (15) Chen, X., Meerburg, P. D. & Münchmeyer, M. The future of primordial features with 21 cm tomography. _J. Cosmology Astropart. Phys_ 2016 (9), 023 (2016). 10.1088/1475-7516/2016/09/023, arXiv:1605.09364 [astro-ph.CO].
* (16) Pillepich, A., Porciani, C. & Matarrese, S. The Bispectrum of Redshifted 21 Centimeter Fluctuations from the Dark Ages. _ApJ_ 662 (1), 1–14 (2007). 10.1086/517963, arXiv:astro-ph/0611126 [astro-ph].
* (17) Joudaki, S., Doré, O., Ferramacho, L., Kaplinghat, M. & Santos, M. G. Primordial Non-Gaussianity from the 21 cm Power Spectrum during the Epoch of Reionization. _Phys. Rev. Lett._ 107 (13), 131304 (2011). 10.1103/PhysRevLett.107.131304, arXiv:1105.1773 [astro-ph.CO].
* (18) Flöss, T., de Wild, T., Meerburg, P. D. & Koopmans, L. V. E. The Dark Ages’ 21-cm trispectrum. _J. Cosmology Astropart. Phys_ 2022 (6), 020 (2022). 10.1088/1475-7516/2022/06/020, arXiv:2201.08843 [astro-ph.CO].
* (19) Balaji, S., Ragavendra, H. V., Sethi, S. K., Silk, J. & Sriramkumar, L. Observing Nulling of Primordial Correlations via the 21-cm Signal. _Phys. Rev. Lett._ 129 (26), 261301 (2022). 10.1103/PhysRevLett.129.261301, arXiv:2206.06386 [astro-ph.CO].
* (20) Bowman, J. D., Rogers, A. E. E., Monsalve, R. A., Mozdzen, T. J. & Mahesh, N. An absorption profile centred at 78 megahertz in the sky-averaged spectrum. _Nature_ 555 (7694), 67–70 (2018). 10.1038/nature25792, arXiv:1810.05912 [astro-ph.CO].
* (21) Singh, S. _et al._ On the detection of a cosmic dawn signal in the radio background. _Nature Astronomy_ 6, 607–617 (2022). 10.1038/s41550-022-01610-5, arXiv:2112.06778 [astro-ph.CO].
* (22) Mertens, F. G. _et al._ Improved upper limits on the 21 cm signal power spectrum of neutral hydrogen at z $\approx$ 9.1 from LOFAR. _MNRAS_ 493 (2), 1662–1685 (2020). 10.1093/mnras/staa327, arXiv:2002.07196 [astro-ph.CO].
* (23) Trott, C. M. _et al._ Deep multi-redshift limits on Epoch of Reionisation 21 cm Power Spectra from Four Seasons of Murchison Widefield Array Observations. _MNRAS_ (2020). 10.1093/mnras/staa414, arXiv:2002.02575 [astro-ph.CO].
* (24) The HERA Collaboration _et al._ Improved Constraints on the 21 cm EoR Power Spectrum and the X-Ray Heating of the IGM with HERA Phase I Observations. _arXiv e-prints_ arXiv:2210.04912 (2022). arXiv:2210.04912 [astro-ph.CO].
* (25) Lewis, A. & Bridle, S. Cosmological parameters from CMB and other data: A Monte Carlo approach. _Phys. Rev. D_ 66, 103511 (2002). 10.1103/PhysRevD.66.103511, arXiv:astro-ph/0205436 [astro-ph].
* (26) Barkana, R. & Loeb, A. A Method for Separating the Physics from the Astrophysics of High-Redshift 21 Centimeter Fluctuations. _ApJ_ 624 (2), L65–L68 (2005). 10.1086/430599, arXiv:astro-ph/0409572 [astro-ph].
* (27) Alcock, C. & Paczynski, B. An evolution free test for non-zero cosmological constant. _Nature_ 281, 358 (1979). 10.1038/281358a0 .
* (28) Ali, S. S., Bharadwaj, S. & Pandey, B. What will anisotropies in the clustering pattern in redshifted 21-cm maps tell us? _MNRAS_ 363 (1), 251–258 (2005). 10.1111/j.1365-2966.2005.09444.x, arXiv:astro-ph/0503237 [astro-ph].
* (29) Nusser, A. The Alcock-Paczyński test in redshifted 21-cm maps. _MNRAS_ 364 (2), 743–750 (2005). 10.1111/j.1365-2966.2005.09603.x, arXiv:astro-ph/0410420 [astro-ph].
* (30) Barkana, R. Separating out the Alcock-Paczyński effect on 21-cm fluctuations. _MNRAS_ 372 (1), 259–264 (2006). 10.1111/j.1365-2966.2006.10882.x, arXiv:astro-ph/0508341 [astro-ph].
* (31) Barkana, R. & Loeb, A. Light-cone anisotropy in 21-cm fluctuations during the epoch of reionization. _MNRAS_ 372, L43–L47 (2006). 10.1111/j.1745-3933.2006.00222.x, astro-ph/0512453 .
* (32) Mondal, R., Bharadwaj, S. & Datta, K. K. Towards simulating and quantifying the light-cone EoR 21-cm signal. _MNRAS_ 474, 1390–1397 (2018). 10.1093/mnras/stx2888, arXiv:1706.09449 .
* (33) Planck Collaboration _et al._ Planck 2018 results. VI. Cosmological parameters. _A &A_ 641, A6 (2020). 10.1051/0004-6361/201833910, arXiv:1807.06209 [astro-ph.CO].
* (34) Furlanetto, S. R., Oh, S. P. & Briggs, F. H. Cosmology at low frequencies: The 21 cm transition and the high-redshift Universe. _Phys. Rep._ 433 (4-6), 181–301 (2006). 10.1016/j.physrep.2006.08.002, arXiv:astro-ph/0608032 [astro-ph].
* (35) Shaver, P. A., Windhorst, R. A., Madau, P. & de Bruyn, A. G. Can the reionization epoch be detected as a global signature in the cosmic background? _A &A_ 345, 380–390 (1999). arXiv:astro-ph/9901320 [astro-ph].
* (36) Mondal, R., Bharadwaj, S. & Majumdar, S. Statistics of the epoch of reionization 21-cm signal \- I. Power spectrum error-covariance. _MNRAS_ 456, 1936–1947 (2016). 10.1093/mnras/stv2772, arXiv:1508.00896 .
* (37) Mellema, G. _et al._ Reionization and the Cosmic Dawn with the Square Kilometre Array. _Experimental Astronomy_ 36, 235–318 (2013). 10.1007/s10686-013-9334-5, arXiv:1210.0197 [astro-ph.CO].
* (38) Reis, I., Fialkov, A. & Barkana, R. The subtlety of Ly $\alpha$ photons: changing the expected range of the 21-cm signal. _MNRAS_ 506 (4), 5479–5493 (2021). 10.1093/mnras/stab2089, arXiv:2101.01777 [astro-ph.CO].
* (39) Foreman-Mackey, D., Hogg, D. W., Lang, D. & Goodman, J. emcee: The MCMC Hammer. _PASP_ 125 (925), 306 (2013). 10.1086/670067, arXiv:1202.3665 [astro-ph.IM].
  * (40) Foreman-Mackey, D. corner.py: Scatterplot matrices in python. _The Journal of Open Source Software_ 1 (2), 24 (2016). 10.21105/joss.00024 .
* (41) Mondal, R., Bharadwaj, S. & Majumdar, S. Statistics of the epoch of reionization (EoR) 21-cm signal - II. The evolution of the power-spectrum error-covariance. _MNRAS_ 464, 2992–3004 (2017). 10.1093/mnras/stw2599, arXiv:1606.03874 .
* (42) Datta, A., Bowman, J. D. & Carilli, C. L. Bright Source Subtraction Requirements for Redshifted 21 cm Measurements. _ApJ_ 724 (1), 526–538 (2010). 10.1088/0004-637X/724/1/526, arXiv:1005.4071 [astro-ph.CO].
* (43) Dillon, J. S. _et al._ Overcoming real-world obstacles in 21 cm power spectrum estimation: A method demonstration and results from early Murchison Widefield Array data. _Phys. Rev. D_ 89 (2), 023002 (2014). 10.1103/PhysRevD.89.023002, arXiv:1304.4229 [astro-ph.CO].
* (44) Pober, J. C. _et al._ What Next-generation 21 cm Power Spectrum Measurements can Teach us About the Epoch of Reionization. _ApJ_ 782, 66 (2014). 10.1088/0004-637X/782/2/66, arXiv:1310.7031 .
  * (45) Jensen, H. _et al._ The wedge bias in reionization 21-cm power spectrum measurements. _MNRAS_ 456 (1), 66–70 (2015). 10.1093/mnras/stv2679 .
# Speed-Oblivious Online Scheduling:
Knowing (Precise) Speeds is not Necessary
Alexander Lindermayr (University of Bremen, Faculty of Mathematics and
Computer Science, Germany;<EMAIL_ADDRESS>Nicole Megow (University of
Bremen, Faculty of Mathematics and Computer Science, Germany) Martin Rapp
(Faculty for Informatics, Karlsruhe Institute of Technology,
Germany;<EMAIL_ADDRESS>
###### Abstract
We consider online scheduling on unrelated (heterogeneous) machines in a
_speed-oblivious_ setting, where an algorithm is unaware of the exact job-
dependent processing speeds. We show strong impossibility results for
clairvoyant and non-clairvoyant algorithms and overcome them in models
inspired by practical settings: (i) we provide competitive _learning-
augmented_ algorithms, assuming that (possibly erroneous) predictions on the
speeds are given, and (ii) we provide competitive algorithms for the _speed-
ordered_ model, where a single global order of machines according to their
unknown job-dependent speeds is known. We prove strong theoretical guarantees
and evaluate our findings on a representative heterogeneous multi-core
processor. These seem to be the first empirical results for scheduling
algorithms with predictions that are evaluated in a non-synthetic hardware
environment.
## 1 Introduction
Heterogeneous processors are getting more and more common in various domains.
For several years now, efficiency and performance gains in smartphone chips
have depended crucially on the combination of high-performance and low-
performance (but energy-efficient) cores [ARM13]. Heterogeneity has recently
been introduced also to the desktop market with Intel Alder Lake (Q1’2022)
[RYR+22] and AMD Zen 5 (announced for 2023). Further, jobs differ in their
instruction mix and memory access patterns, and hence may not benefit
uniformly from the high-performance cores, which typically feature larger
caches, out-of-order execution, and a higher CPU frequency. Figure 1 shows
job-dependent speed varieties in common benchmark suites (_PARSEC-3.0_ ,
_SPLASH-3_ , _Polybench_) running on _big_ and _LITTLE_ cores of a Kirin 970
smartphone SoC with Arm big.LITTLE architecture.
Figure 1: The execution time and speedup of the _big_ over _LITTLE_ cores on
an Arm big.LITTLE heterogeneous processor varies strongly between jobs and
different input data. Variations of the speedup w.r.t. input data are large
for some jobs (e.g., _water-nsquared_) but small for others (e.g., _fmm_).
These advances show the demand for schedulers that respect job-dependent
heterogeneity. Formally, the _(processing) speed_ $s_{ij}$ of job $j$ on
machine $i$ is the amount of processing that $j$ receives when running on $i$
for one time unit. Despite the relevance of values $s_{ij}$ for high-
performance scheduling, there is a big discrepancy between how theory and
practice handle them: while scheduling theory most commonly assumes that
speeds are known to an algorithm, this is typically not the case in practice.
Hence, algorithms that perform well in theory are often not applicable in
practice.
In this work, we propose new models and algorithms to bridge this gap. In
particular, we introduce _speed-oblivious_ algorithms, which do not rely on
knowing (precise) speeds. Thereby we focus on (non-)clairvoyant scheduling
subject to minimizing the total (weighted) completion time.
Formally, an instance of this scheduling problem is composed of a set $J$ of
$n$ jobs, a set $I$ of $m$ machines, and a time-discretization. The
characteristics of a job $j\in J$ are its processing requirement $p_{j}$, its
weight $w_{j}$, and for every machine $i\in I$ its individual processing speed
$s_{ij}>0$. A job $j$ arrives online at its release date $r_{j}$, i.e., an
algorithm is unaware of its existence before that time. A schedule assigns for
every unfinished job $j\in J$ and for every machine $i\in I$ at any time
$t\geq r_{j}$ a _(machine) rate_ $y_{ijt}\in[0,1]$, which induces the progress
$q_{jt}=\sum_{i}s_{ij}y_{ijt}$ of $j$ at time $t$. The completion time $C_{j}$
of a job $j$ in a fixed schedule is the first point in time $t$ that satisfies
$\sum_{t^{\prime}=r_{j}}^{t}q_{jt^{\prime}}\geq p_{j}$. A schedule is feasible
if there exists a progress-preserving actual schedule, where at any
infinitesimal time a job is being processed on at most one machine. This
applies if the rates satisfy $\sum_{i\in I}y_{ijt}\leq 1$ for all $j\in J$ and
$\sum_{j\in J}y_{ijt}\leq 1$ for all $i\in I$ at any time $t$ [IKM18]. The
task is to compute a feasible schedule that minimizes $\sum_{j\in
J}w_{j}C_{j}$.
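The model above can be made concrete with a small simulator; the following is a sketch with an assumed toy instance (the helper name and unit-step discretization are illustrative, not from the paper):

```python
def completion_times(rates, s, p, r):
    """rates[t][i][j] in [0,1]; s[i][j] speeds; p[j] sizes; r[j] release dates.

    Returns {job: completion time}, the first t with cumulative progress >= p_j.
    """
    n = len(p)
    done, work = {}, [0.0] * n
    for t, y in enumerate(rates):
        # feasibility: total rate at most 1 per job and per machine
        for j in range(n):
            assert sum(y[i][j] for i in range(len(s))) <= 1 + 1e-9
        for i in range(len(s)):
            assert sum(y[i]) <= 1 + 1e-9
        for j in range(n):
            if j in done or t < r[j]:
                continue
            work[j] += sum(s[i][j] * y[i][j] for i in range(len(s)))  # q_jt
            if work[j] >= p[j] - 1e-9:
                done[j] = t
    return done

# Two machines, one job with speed 2 on machine 0: p_j = 4 finishes at t = 1.
s = [[2.0], [1.0]]
rates = [[[1.0], [0.0]], [[1.0], [0.0]]]  # job 0 fully on machine 0 at t = 0, 1
print(completion_times(rates, s, p=[4.0], r=[0]))  # {0: 1}
```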
An algorithm is called non-migratory, if it assigns for each job $j$ positive
rates only on a single machine $i_{j}$, and migratory otherwise. Further, it
is called _non-preemptive_ if for all jobs $j$, machines $i$, and times $t$, a
rate $y_{ijt}>0$ implies $y_{ijt^{\prime}}=1$ for all times $t^{\prime}$ with
$t\leq t^{\prime}\leq C_{j}$. We say that the machines are _related_ if
$s_{i}=s_{ij}$ for all jobs $j$ and machines $i$, i.e., speeds are not job-
dependent.
#### Models and state-of-the-art in theory
Scheduling jobs (offline) on machines with job-dependent heterogeneity (called
_unrelated_ machine scheduling) to minimize the total weighted completion time
is a prominent NP-hard problem; several approximation algorithms are known,
e.g., [HSSW97, SS02b, Li20, BSS21, IL23]. Well-studied online models include
_online_ job arrival [PST04], i.e., a job is unknown to an algorithm until its
release date $r_{j}$, and _non-clairvoyance_ [MPT94], i.e., an algorithm has
no knowledge about the total processing requirement $p_{j}$ of a job (as
opposed to _clairvoyant_ schedulers). In particular, online algorithms cannot
revert previous decisions. The performance of an online algorithm is typically
evaluated by its _competitive ratio_ [BE98], i.e., the worst-case ratio
between the algorithm’s objective value and the optimal objective value (given
full information upfront) for every instance. We say that an algorithm is
$\rho$-competitive if its competitive ratio is at most $\rho$. Known online
results include [HSSW97, CGKM09, AGK12, IKMP14, IKM18, GMUX20, Jäg21, BKL21,
LM22].
To the best of our knowledge, unrelated machine scheduling has been studied
only in a _speed-aware_ setting, where an algorithm knows the speeds $s_{ij}$
for available jobs. It is not difficult to see that there are prohibitive
lower bounds for speed-oblivious scheduling on (un-)related machines: consider
an instance with a single unit-sized job $j$ which makes substantial progress
only on one machine. This means that in the worst-case, the first $m-1$
machines tried by the algorithm have speed $\epsilon$ and $j$ makes no
substantial progress. Thus, the algorithm spends at least $m$ time units to
complete it. Knowing this fast machine upfront allows an optimal solution to
complete the job immediately. This implies a competitive ratio of at least
$\Omega(m)$ for $m$ machines:
###### Observation 1.1.
Any speed-oblivious algorithm has a competitive ratio of at least $\Omega(m)$
for minimizing the total (weighted) completion time on $m$ related machines,
even if the algorithm is clairvoyant.
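The adversarial argument behind Observation 1.1 can be sketched in toy code (an illustration of the counting, not an implementation from the paper): whatever order a speed-oblivious algorithm probes the machines in, the adversary declares every probed machine slow, so the unit job completes only once the last machine is reached.

```python
def oblivious_finish_time(m, probe_order):
    """Probe one machine per unit of time; the adversary makes every machine
    slow except the last one the algorithm tries, so the job finishes only
    after all m machines have been probed."""
    tried, t = set(), 0
    for i in probe_order:
        t += 1
        tried.add(i)
        if len(tried) == m:  # only the final untried machine has speed 1
            return t
    return t

m = 8
alg = oblivious_finish_time(m, list(range(m)))  # m time units for the algorithm
opt = 1                                         # speed-aware optimum: ~1 time unit
print(alg / opt)                                # ratio grows linearly with m
```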
#### Models and state-of-the-art in practice
Practical scheduling algorithms commonly operate in open systems [FR98], where
jobs arrive online, are non-clairvoyant, and, in contrast to the assumption in
theory, their exact processing speeds on every core are _unknown_ upfront.
Therefore, state-of-the-practice schedulers usually ignore heterogeneity
between jobs, e.g., Linux Energy-Aware Scheduling [The19]. State-of-the-art
schedulers rely on prior knowledge about jobs [KPSH15], which is not always
available, or rely on predictions of job characteristics to leverage this
information gap. Such predictions could be based on prior executions of
repeating jobs or on machine-learning-based techniques [GBA+18, RPMH21]. They
are often quite precise, but can be highly inaccurate due to varying and
unpredictable input data as shown in Figure 1. To the best of our knowledge,
all these approaches are evaluated only empirically. In particular, there are
no theoretical guarantees on the performance in worst-case scenarios or with
respect to a prediction’s quality.
### 1.1 Our Results
We initiate the theoretical study of speed-oblivious algorithms. Since strong
lower bounds rule out good worst-case guarantees for speed-oblivious unrelated
machine scheduling without further assumptions, we propose two (new) models
which are motivated by data-driven machine-learned models and modern
heterogeneous hardware architectures:
  * •
Speed predictions give algorithms access to values $\hat{s}_{ij}$ for
every machine $i$ at the release date of every job $j$. We measure the
accuracy of such a prediction by the _distortion error_ $\mu$, where
$\mu=\mu_{1}\cdot\mu_{2}$ and
$\mu_{1}=\max_{i\in I,j\in
J}\left\\{\frac{\hat{s}_{ij}}{s_{ij}}\right\\}\text{ and
}\mu_{2}=\max_{i\in I,j\in
J}\left\\{\frac{s_{ij}}{\hat{s}_{ij}}\right\\}.$
* •
Speed-ordered machines assume an order on $I$ such that for all
$i,i^{\prime}\in I$ and jobs $j\in J$ holds $s_{ij}\geq s_{i^{\prime}j}$ if
and only if $i\leq i^{\prime}$. Algorithms are aware of this order.
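For concreteness, the two models can be sketched in a few lines (the toy speed tables and helper names below are illustrative assumptions, not from the paper):

```python
def distortion(s_hat, s):
    """Distortion error mu = mu1 * mu2 of predicted speeds s_hat vs. true s."""
    pairs = [(i, j) for i in range(len(s)) for j in range(len(s[0]))]
    mu1 = max(s_hat[i][j] / s[i][j] for i, j in pairs)  # over-prediction factor
    mu2 = max(s[i][j] / s_hat[i][j] for i, j in pairs)  # under-prediction factor
    return mu1 * mu2

def is_speed_ordered(s):
    """Machine i is (weakly) faster than machine i+1 for every job j."""
    return all(s[i][j] >= s[i + 1][j]
               for i in range(len(s) - 1) for j in range(len(s[0])))

s_true = [[4.0, 2.0], [2.0, 1.0]]
s_pred = [[4.0, 3.0], [1.0, 1.0]]
print(distortion(s_pred, s_true))  # mu1 = 1.5, mu2 = 2.0 -> mu = 3.0
print(is_speed_ordered(s_true))    # True: machine 0 dominates machine 1
```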
Finally, we compare algorithms for these models with heuristics in experiments
on an actual modern heterogeneous chip. These are the first empirical results
which show the benefit of learning-augmented algorithms and validate
theoretical findings on _real_ hardware. In particular, we initiate the
investigation into the practical applicability of theoretical scheduling
algorithms in realistic hardware environments.
We now give a more detailed overview of our results.
#### Learning-augmented algorithms for speed predictions
We provide the first learning-augmented algorithms with job-dependent speed
predictions and prove error-dependent performance guarantees w.r.t. the
distortion error $\mu$. This gives formal evidence for why algorithms perform
well in practice, even if the assumed speeds slightly diverge from the true
speeds. We further show that a competitive ratio linear in $\mu$ is best
possible, even for migratory algorithms and related machines. We emphasize
that the algorithms do not have access to $\mu$ upfront for the given
instance.
###### Theorem 1.2.
For minimizing the total weighted completion time on unrelated machines, there
exist speed-oblivious online algorithms with speed predictions that are
1. (i)
clairvoyant and $8\mu$-competitive,
2. (ii)
clairvoyant, non-preemptive and $7.216\mu^{2}$-competitive,
3. (iii)
non-clairvoyant and $108\mu$-competitive.
For $(i)$, we design a novel and efficient clairvoyant algorithm, which might
be of independent interest. It always schedules the subset of jobs that
maximizes the total (predicted) density in a feasible job-to-machine
assignment, where the density of a job $j$ on machine $i$ is equal to
$\frac{w_{j}s_{ij}}{p_{j}}$. We show that it is $8$-competitive in the speed-
aware setting. Interestingly, this algorithm reduces to Smith’s rule on a
single machine [S+56] and preemptive variants [SS02a, MS04].
On the technical side, we prove upper bounds on the competitive ratios using
the _dual-fitting_ technique [JMM+03, AGK12]. There, we lower bound the
optimal solution by the dual of a linear programming (LP) relaxation, and then
show that a specific feasible dual assignment has an objective value which is
close to the algorithm’s objective value. The main difficulty is therefore to
come up with good dual assignment. For $(i)$, we present a new dual setup,
which we believe could be helpful for future dual-fitting approaches. The
algorithms and proofs for $(ii)$ and $(iii)$ are are inspired by previous work
(Greedy WSPT [GMUX20], Proportional Fairness [IKM18]). However, for $(iii)$ we
achieve better constants via optimized duals, even for the speed-aware case.
In all proofs, we use scalable properties of duals to convert bad decisions
due to imprecise predictions into scaled bounds on the competitive ratio.
#### Novel algorithms for speed-ordered machines
The strong lower bound of $\Omega(m)$ on the competitive ratio for speed-
oblivious algorithms for $m$ machines crucially relies on accelerating the
machine that an algorithm tries last. This argument becomes infeasible in the
speed-ordered setting, because the machines are distinguishable upfront.
Designing an algorithm is yet still challenging, as precise factors between
speeds remain unknown. On the negative side, we show that any constant-
competitive algorithm must migrate jobs. This is even true for clairvoyant
algorithms and related machines. On the positive side, we present two
algorithms:
###### Theorem 1.3.
There is a clairvoyant speed-oblivious online algorithm for minimizing the
total weighted completion time on speed-ordered related machines with a
competitive ratio of at most $8$.
We show that this algorithm is not competitive on unrelated machines. Somewhat
surprisingly, our non-clairvoyant algorithm achieves non-trivial competitive
ratios for both related and unrelated machines, as the following theorem
states.
###### Theorem 1.4.
There is a non-clairvoyant speed-oblivious online algorithm for minimizing the
total completion time
1. (i)
on speed-ordered related machines with a competitive ratio of at most $216$,
and
2. (ii)
on speed-ordered unrelated machines with a competitive ratio of
$\Theta(\log(\min\\{n,m\\}))$.
A crucial observation for deriving these algorithms is that in the speed-
ordered setting certain speed-aware algorithms use strategies which can be
formulated _even without_ precise speed values. An additional challenge is the
few-job regime, i.e., there are less jobs than machines, where we have to
ensure that the algorithms prefer the fast machines.
### 1.2 Further Related Work
Uncertainty about machine speeds or, more generally, the machine environment
has hardly been studied in scheduling theory. Some works consider scheduling with
unknown non-availability periods, i.e., periods with speed $0$ [AS01, DJST09],
permanent break-downs of a subset of machines [SZ20], or more generally
arbitrarily changing machine speed for a single machine [ELM+12], but not on
heterogeneous machines. In scheduling with testing, unknown processing
requirements of a job (and thus its machine-dependent speed) can be explored
by making queries, e.g., [DEMM20, AE20, ABK+18], but also here heterogeneous
processors are not considered.
Mitigating pessimistic lower bounds of classic worst-case analysis via
untrusted predictions [MV22, LM23] has been successfully applied to various
scheduling problems [PSK18, LLMV20, ALT21, ALT22, IKQP21, LX21, LM22, AGS22,
DIL+22]. While all these results concentrate on the uncertainty of online
arrival and non-clairvoyance, Balkanski et al. [BOSW22] consider a robust
scheduling problem where machine speeds are only predicted and jobs have to be
grouped to be scheduled together before knowing the true machine speeds; such
problems without predictions were introduced in [EHM+21, SZ20]. In contrast,
in our model an algorithm will never learn about a job’s true speed(s) before
its completion and, further, the speeds might be job-dependent.
## 2 Algorithms with Speed Predictions
In this section, we investigate the model with speed predictions. We first
rule out any sublinear error-dependency.
###### Theorem 2.1.
Any speed-oblivious algorithm with speed predictions has a competitive ratio
of at least $\Omega(\min\\{\mu,m\\})$ for minimizing the total (weighted)
completion time, even if the algorithm is clairvoyant and machines are
related.
###### Proof.
Let $\mu_{1},\mu_{2}\geq 1$ and $\mu=\mu_{1}\cdot\mu_{2}$. Consider an
instance $J=\\{j\\}$ with $p_{j}=2\mu$ and $m\geq 2\mu$ machines such that
$\hat{s}_{i}=\mu_{1}$ for all $1\leq i\leq m$. The algorithm cannot
distinguish the machines. For the first $2\mu-1$ machines $i$ on which the
algorithm processes $j$, the adversary fixes $s_{i}=1$. Thus, at time
$2\mu-1$, the remaining processing requirement of $j$ is at least
$2\mu-(2\mu-1)=1$ and there exists a machine $i^{\prime}$ on which $j$ has not
been processed yet. Thus, the adversary can set $s_{i^{\prime}}=\mu$ and
complete $j$ on $i^{\prime}$ within two time units, implying a competitive
ratio of at least $\Omega(\min\\{\mu,m\\})$. ∎
Observe that this construction already works for two machines when migration
is forbidden.
### 2.1 A Clairvoyant Algorithm
We firstly present a novel migratory algorithm for the clairvoyant setting
with known processing requirements for both the speed-aware setting as well as
speed predictions. Sequencing jobs by Smith’s rule by non-increasing density
$\frac{w_{j}}{p_{j}}$ (aka Weighted-Shortest-Processing-Time, WSPT) is optimal
on a single machine [S+56]. In the online setting with release dates, this
policy is $2$-competitive when applied preemptively on the available
unfinished jobs [SS02a]. It can be extended to identical parallel machines
[MS04], by processing at any time the (at most) $m$ jobs with highest
densities. However, this approach is infeasible on unrelated machines, because
jobs can have different densities on every machine.
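On a single machine, the preemptive WSPT rule just described can be sketched as follows (a unit-step toy scheduler for illustration, not the paper's implementation):

```python
def preemptive_wspt(jobs):
    """jobs: list of (release r_j, size p_j, weight w_j); returns sum w_j C_j.

    At each unit time step, run the available unfinished job of maximum
    density w_j / p_j (Smith's rule, applied preemptively)."""
    remaining = [p for _, p, _ in jobs]
    objective, t = 0.0, 0
    while any(x > 0 for x in remaining):
        avail = [j for j, (r, _, _) in enumerate(jobs)
                 if r <= t and remaining[j] > 0]
        if avail:
            j = max(avail, key=lambda j: jobs[j][2] / jobs[j][1])  # density
            remaining[j] -= 1
            if remaining[j] == 0:
                objective += jobs[j][2] * (t + 1)  # C_j at end of this step
        t += 1
    return objective

# Two jobs released at time 0 with densities 3/1 and 1/2: run job 0 first.
print(preemptive_wspt([(0, 1, 3), (0, 2, 1)]))  # 3*1 + 1*3 = 6.0
```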
Inspired by the power of densities, we compute a subset of at most $m$ jobs
that instead maximizes the total density, that is, the sum of the densities of
the job-to-machine assignment. This can be done efficiently by computing at
any time $t$ a matching $M_{t}$ between alive jobs $j\in J(t)=\\{j\in J\mid
r_{j}\leq t\leq C_{j}\\}$ and machines $i\in I$ with edge weights
$\hat{\delta}_{ij}=\frac{w_{j}\hat{s}_{ij}}{p_{j}}$ using, e.g.,
the Hungarian algorithm [Kuh55]. In the analysis, we crucially exploit the
local optimality of any two matched job-machine pairs via exchange arguments.
Algorithm 1 Maximum Density
0: time $t$, speed (predictions) $\\{\hat{s}_{ij}\\}$
1: Construct a complete bipartite graph $G_{t}=I\cup J(t)$ where an edge
$(i,j)\in I\times J(t)$ has a weight equal to
$\hat{\delta}_{ij}=\frac{w_{j}\hat{s}_{ij}}{p_{j}}$.
2: Compute a maximum-weight matching $M_{t}$ for $G_{t}$.
3: Schedule jobs to machines according to $M_{t}$ at time $t$.
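The matching step of Algorithm 1 can be illustrated by brute force on tiny instances (in place of the Hungarian algorithm; the function name and the toy density table are assumptions for illustration):

```python
import itertools

def max_density_matching(densities):
    """densities[i][j] = w_j * s_hat[i][j] / p_j for machine i, job j.

    Returns (assignment, value): assignment[i] is the job matched to machine i
    (None = machine left idle), maximizing the total predicted density."""
    m, n = len(densities), len(densities[0])
    pool = list(range(n)) + [None] * m  # each job used at most once
    best, best_val = None, -1.0
    for assign in itertools.permutations(pool, m):
        val = sum(densities[i][j] for i, j in enumerate(assign) if j is not None)
        if val > best_val:
            best, best_val = assign, val
    return best, best_val

densities = [[6.0, 1.0], [5.0, 0.5]]         # machine 0 is best for both jobs
print(max_density_matching(densities))       # ((0, 1), 6.5): j0->m0, j1->m1
```

Note that the matching spreads the jobs across machines even though machine 0 dominates: assigning job 0 to machine 0 and job 1 to machine 1 yields total density $6+0.5=6.5$, beating any single-machine assignment.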
###### Theorem 2.2.
Algorithm 1 has a competitive ratio of at most $8\mu$ for minimizing the total
weighted completion time on unrelated machines with speed predictions.
This theorem implies immediately the following corollary for the speed-aware
setting ($\mu=1$).
###### Corollary 2.3.
Algorithm 1 has a competitive ratio of at most $8$ for minimizing the total
weighted completion time on unrelated machines in the speed-aware setting.
The remaining section is devoted to proof of Theorem 2.2, which uses a dual-
fitting argumentation. To this end, we state the standard migratory linear
programming relaxation for our objective function [SS02b]. In fact, we state a
variant where the machines of an optimal solution run at a lower speed of
$\frac{1}{\alpha}$ for $\alpha\geq 1$ [IKM18].
min $\displaystyle\sum_{i\in I}\sum_{j\in J}\sum_{t\geq 0}w_{j}\cdot
t\cdot\frac{x_{ijt}s_{ij}}{p_{j}}$ ($\text{LP}_{\alpha}$) s.t.
$\displaystyle\sum_{i\in I}\sum_{t\geq 0}\frac{x_{ijt}s_{ij}}{p_{j}}\geq 1$
$\displaystyle\forall j\in J$ $\displaystyle\sum_{j\in J}\alpha\cdot
x_{ijt}\leq 1$ $\displaystyle\forall i\in I,t\geq 0$ $\displaystyle\sum_{i\in
I}\alpha\cdot x_{ijt}\leq 1$ $\displaystyle\forall j\in J,t\geq r_{j}$
$\displaystyle x_{ijt}\geq 0$ $\displaystyle\forall i\in I,j\in J,t\geq r_{j}$
$\displaystyle x_{ijt}=0$ $\displaystyle\forall i\in I,j\in J,t<r_{j}$
Let ${\textsc{Opt}}_{\alpha}$ denote the optimal objective value in this
restricted setting. The dual of ($\text{LP}_{\alpha}$) can be written as
follows. (From now on we omit obvious set constraints in the notation for an
improved readability.)
max $\displaystyle\sum_{j}a_{j}-\sum_{i,t}b_{it}-\sum_{j,t\geq r_{j}}c_{jt}$
($\text{DLP}_{\alpha}$) s.t. $\displaystyle\frac{a_{j}s_{ij}}{p_{j}}-\alpha
b_{it}-\alpha c_{jt}\leq w_{j}\frac{s_{ij}t}{p_{j}}\qquad$
$\displaystyle\forall i,j,t\geq r_{j}$ $\displaystyle
a_{j},b_{it},c_{jt^{\prime}}\geq 0\qquad$ $\displaystyle\forall i,j,t\;\forall
t^{\prime}\geq r_{j}$
Fix an instance and the algorithm’s schedule. Let $\kappa\geq 1$ be a
constant. We define for every machine $i$ and any time $t$
$\beta_{it}=\begin{cases}\hat{\delta}_{ij}&\text{ if }i\text{ is
matched to }j\in J(t)\text{ in }M_{t}\\\ 0&\text{ otherwise,}\end{cases}$
and for every job $j$ and any time $t$
$\gamma_{jt}=\begin{cases}\hat{\delta}_{ij}&\text{ if }j\text{ is
matched to }i\in I\text{ in }M_{t}\\\ 0&\text{ otherwise.}\end{cases}$
Consider the following values:
  * •
$\bar{a}_{j}=w_{j}C_{j}$ for every job $j$,
  * •
$\bar{b}_{it}=\frac{1}{\kappa}\sum_{t^{\prime}\geq t}\beta_{it^{\prime}}$ for every machine $i$ and time $t$, and
  * •
$\bar{c}_{jt}=\frac{1}{\kappa}\sum_{t^{\prime}\geq t}\gamma_{jt^{\prime}}$ for every job $j$ and time $t\geq r_{j}$.
We show in Lemma 2.5 that these values define a feasible solution for the dual
problem ($\text{DLP}_{\alpha}$), and that the corresponding dual objective
value is at least a certain fraction of the algorithm’s solution value (Lemma
2.4). Weak LP duality then implies Theorem 2.2. Let
${\textsc{Alg}}=\sum_{j}w_{j}C_{j}$.
###### Lemma 2.4.
$(1-\frac{2\mu_{1}}{\kappa}){\textsc{Alg}}\leq\sum_{j}\bar{a}_{j}-\sum_{i,t}\bar{b}_{it}-\sum_{j,t\geq r_{j}}\bar{c}_{jt}$
In the following, let $U_{t}$ be the set of unfinished jobs at time $t$, i.e.,
all jobs $j$ with $t\leq C_{j}$, and let $W_{t}=\sum_{j\in U_{t}}w_{j}$.
###### Proof.
Fix a time $t$ and a job $j$. If $j\in U_{t}$, let
$i^{j}_{1},\ldots,i^{j}_{z(j)}$ be the sequence of individual machine
assignments of $j$ between time $t$ and $C_{j}$. Let
$\hat{\delta}(i,j):=\hat{\delta}_{ij}$. Note that
$\sum_{\ell=1}^{z(j)}\hat{\delta}(i^{j}_{\ell},j)=\sum_{\ell=1}^{z(j)}\hat{s}_{i^{j}_{\ell},j}\frac{w_{j}}{p_{j}}\leq\mu_{1}\sum_{\ell=1}^{z(j)}s_{i^{j}_{\ell},j}\frac{w_{j}}{p_{j}}\leq\mu_{1}w_{j}.$
Therefore, $\sum_{i}\bar{b}_{it}=\frac{1}{\kappa}\sum_{j\in
U_{t}}\sum_{\ell=1}^{z(j)}\hat{\delta}(i^{j}_{\ell},j)\leq\frac{\mu_{1}}{\kappa}W_{t}$.
Similarly,
$\bar{c}_{jt}=\frac{1}{\kappa}\sum_{\ell=1}^{z(j)}\hat{\delta}(i^{j}_{\ell},j)\leq\frac{\mu_{1}}{\kappa}w_{j}$.
If $j\in J\setminus U_{t}$, then $\bar{c}_{jt}=0$. Hence, $\sum_{j\in
J}\bar{c}_{jt}\leq\frac{\mu_{1}}{\kappa}W_{t}$. Finally, we conclude
$\sum_{i,t}\bar{b}_{it}\leq\frac{\mu_{1}}{\kappa}{\textsc{Alg}}$ and
$\sum_{j,t\geq r_{j}}\bar{c}_{jt}\leq\frac{\mu_{1}}{\kappa}{\textsc{Alg}}$. ∎
###### Lemma 2.5.
Assigning $a_{j}=\bar{a}_{j}$, $b_{it}=\bar{b}_{it}$ and
$c_{jt}=\bar{c}_{jt}$ is feasible for ($\text{DLP}_{\alpha}$) if
$\alpha=\mu_{2}\kappa$.
###### Proof.
First note that the dual assignment is non-negative. Let $i\in I,j\in J$ and
$t\geq r_{j}$. The definition of $\bar{a}_{j}$ yields
$\bar{a}_{j}\frac{s_{ij}}{p_{j}}-w_{j}t\frac{s_{ij}}{p_{j}}\leq\sum_{t^{\prime}=t}^{C_{j}}\frac{w_{j}s_{ij}}{p_{j}}.$
By using the fact that
$\frac{w_{j}s_{ij}}{p_{j}}\leq\mu_{2}\frac{w_{j}\hat{s}_{ij}}{p_{j}}$,
the definitions of $\bar{b}_{it}$ and $\bar{c}_{jt}$, and the
value of $\alpha$, it remains to validate for every $t\leq t^{\prime}\leq
C_{j}$ that
$\frac{w_{j}\hat{s}_{ij}}{p_{j}}=\hat{\delta}_{ij}\leq\beta_{it^{\prime}}+\gamma_{jt^{\prime}}$.
We distinguish five cases:
1. (i)
If $(i,j)\in M_{t^{\prime}}$, then
$\hat{\delta}_{ij}=\beta_{it^{\prime}}=\gamma_{jt^{\prime}}$.
2. (ii)
If $(i,j^{\prime})\in M_{t^{\prime}}$ and $(i^{\prime},j)\in M_{t^{\prime}}$
s.t. $i^{\prime}\neq i$ (and thus $j^{\prime}\neq j$), we know by the
optimality of $M_{t^{\prime}}$ that
$\displaystyle\hat{\delta}_{ij}\leq\hat{\delta}_{ij}+\hat{\delta}_{i^{\prime}j^{\prime}}\leq\hat{\delta}_{i^{\prime}j}+\hat{\delta}_{ij^{\prime}}=\gamma_{jt^{\prime}}+\beta_{it^{\prime}}.$
3. (iii)
If $(i^{\prime},j)\in M_{t^{\prime}}$ and $i$ is not matched in
$M_{t^{\prime}}$, we conclude
$\hat{\delta}_{ij}\leq\hat{\delta}_{i^{\prime}j}=\gamma_{jt^{\prime}}.$
4. (iv)
If $(i,j^{\prime})\in M_{t^{\prime}}$ and $j$ is not matched in
$M_{t^{\prime}}$, we conclude
$\hat{\delta}_{ij}\leq\hat{\delta}_{ij^{\prime}}=\beta_{it^{\prime}}.$
5. (v)
The case where $\hat{s}_{ij}>0$ and $w_{j}>0$, but both $i$ and $j$ are
unmatched in $M_{t^{\prime}}$, contradicts the optimality of $M_{t^{\prime}}$,
as $t^{\prime}\leq C_{j}$ implies that $j$ is still alive. Otherwise,
$\hat{\delta}_{ij}=0$ holds, and we conclude since the right-hand side of the
inequality is non-negative. ∎
###### Proof of Theorem 2.2.
Weak LP duality implies that the objective value of any feasible solution of
($\text{DLP}_{\alpha}$) is at most the optimal objective value of
($\text{LP}_{\alpha}$). Being the optimal value of a relaxation, the latter
is a lower bound on ${\textsc{Opt}}_{\alpha}$, which in turn is at most
$\alpha{\textsc{Opt}}$ by scaling completion times, where Opt denotes the
optimal objective value of the original problem. This implies via Lemma 2.4
and Lemma 2.5
$\displaystyle\mu_{2}\kappa\cdot{\textsc{Opt}}\geq{\textsc{Opt}}_{\mu_{2}\kappa}\geq\sum_{j}\bar{a}_{j}-\sum_{i,t}\bar{b}_{it}-\sum_{j,t\geq
r_{j}}\bar{c}_{jt}\geq\left(1-\frac{2\mu_{1}}{\kappa}\right)\cdot{\textsc{Alg}}.$
Choosing $\kappa=4\mu_{1}$, we conclude ${\textsc{Alg}}\leq
8\mu\cdot{\textsc{Opt}}$. ∎
### 2.2 A Clairvoyant Non-Preemptive Algorithm
Algorithm 2 Greedy WSPT
0: speed predictions $\\{\hat{s}_{ij}\\}$
function UponJobArrival(job $j$)
Assign job $j$ to machine $g(j)=\operatorname*{arg\,min}_{i\in
I}\hat{Q}_{ij}$.
end function
function UponMachineIdle(machine $i$, time $t$)
Start processing the job $j$ with largest $\hat{\delta}_{ij}$ among all
alive jobs assigned to $i$ which satisfy $\hat{r}_{ij}\leq t$.
end function
In many applications, job migration or preemption are not possible. In this
section, we show that the non-preemptive Greedy WSPT algorithm by [GMUX20]
achieves an error-dependent competitive ratio when using predicted speeds to
make decisions (Algorithm 2). The intuition of this algorithm is to greedily
assign arriving jobs to machines, where they are then scheduled in WSPT order,
i.e., on machine $i$ by non-increasing predicted density
$\frac{w_{j}\hat{s}_{ij}}{p_{j}}$. The greedy
job-to-machine assignment intuitively minimizes the increase of the objective
value that scheduling the job on a machine incurs in the current state.
Additionally, the execution of job $j$ is delayed depending on its processing
time $\frac{p_{j}}{s_{ij}}$ on the assigned machine $i$. This is necessary due
to simple lower bounds in the non-preemptive setting [LSS03].
To make this precise, for every $j\in J$, let $M_{i}(j)$ be the set of jobs,
excluding job $j$, which are assigned to machine $i$ at time $r_{j}$, but have
not been started yet. As this definition is ambiguous if there are two jobs
$j$ and $j^{\prime}$ with $r_{j}=r_{j^{\prime}}$ being assigned to $i$, we
assume that we assign them in the order of their index. For all machines $i$,
jobs $j$ and a constant $\theta>0$, which we will set to $\theta=\frac{2}{3}$, we
define
$\hat{r}_{ij}=\max\\{r_{j},\theta\frac{p_{j}}{\hat{s}_{ij}}\\}$
and $\hat{Q}_{ij}$ as
$w_{j}\Bigg{(}\hat{r}_{ij}+\frac{\hat{r}_{ij}}{\theta}+\frac{p_{j}}{\hat{s}_{ij}}+\sum_{\begin{subarray}{c}j^{\prime}\in
M_{i}(j)\\\
\hat{\delta}_{ij^{\prime}}\geq\hat{\delta}_{ij}\end{subarray}}\frac{p_{j^{\prime}}}{\hat{s}_{ij^{\prime}}}\Bigg{)}+\frac{p_{j}}{\hat{s}_{ij}}\sum_{\begin{subarray}{c}j^{\prime}\in
M_{i}(j)\\\
\hat{\delta}_{ij^{\prime}}<\hat{\delta}_{ij}\end{subarray}}w_{j^{\prime}}.$
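The assignment rule of Algorithm 2 can be sketched as follows; the function and variable names are hypothetical, and `pending` encodes the jobs in $M_i(j)$ as triples of weight, volume, and predicted speed.

```python
from math import inf

def q_hat(w, p, s_hat, r, pending, theta=2/3):
    """Estimated cost increase Q^_ij of assigning an arriving job (weight w,
    volume p, predicted speed s_hat on machine i, release date r) to machine
    i, following the definition above; `pending` holds (w', p', s_hat') of
    jobs already assigned to i but not yet started."""
    r_hat = max(r, theta * p / s_hat)   # delayed earliest start of the new job
    d = w * s_hat / p                   # predicted density of the new job
    cost = w * (r_hat + r_hat / theta + p / s_hat)
    for w2, p2, s2 in pending:
        if w2 * s2 / p2 >= d:           # denser pending jobs run first: j waits
            cost += w * (p2 / s2)
        else:                           # less dense pending jobs wait for j
            cost += (p / s_hat) * w2
    return cost

def greedy_assign(w, p, s_hats, r, pending_per_machine, theta=2/3):
    """Greedy WSPT machine choice: argmin_i Q^_ij (Algorithm 2)."""
    best, best_cost = None, inf
    for i, s_hat in enumerate(s_hats):
        c = q_hat(w, p, s_hat, r, pending_per_machine[i], theta)
        if c < best_cost:
            best, best_cost = i, c
    return best
```

For a single job with $w=p=1$, $r=0$ and predicted speeds $(2,1)$ on two empty machines, the faster machine minimizes $\hat{Q}_{ij}$, as expected.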
We prove in Section A.1 the following theorem.
###### Theorem 2.6.
Algorithm 2 has a competitive ratio of at most
$\frac{368}{51}\mu^{2}<7.216\mu^{2}$ for minimizing the total weighted
completion time on unrelated machines with speed predictions.
### 2.3 A Non-Clairvoyant Algorithm
In the non-clairvoyant setting, any constant-competitive algorithm for
minimizing the total completion time on unrelated machines has to migrate and
preempt jobs [MPT94, GIK+12]. Since such algorithms cannot compute densities,
a common strategy is to run all jobs simultaneously at a rate proportional to
their weight [MPT94, KC03]. On unrelated machines with job-dependent speeds,
the Proportional Fairness Algorithm (PF) develops this idea further by
respecting job-dependent speeds [IKM18]. It is known that PF has a competitive
ratio of at most $128$ for minimizing the total weighted completion time
[IKM18]. In the following, we show that PF has a linear error-dependency in
$\mu$ when computing rates via predicted speeds. As a byproduct, we slightly
improve the upper bound on the speed-aware competitive ratio of PF via
optimized duals to $108$.
Algorithm 3 Proportional Fairness
0: time $t$, speed predictions $\\{\hat{s}_{ij}\\}$
Use solution $\\{y_{ijt}\\}_{i,j}$ of ($\text{CP}_{t}$) as rates at time $t$.
###### Theorem 2.7.
Algorithm 3 has a competitive ratio of at most $108\mu$ for minimizing the
total weighted completion time on unrelated machines with predicted speeds.
At every time $t$, Algorithm 3 schedules jobs $J(t)$ with rates computed via
the following convex program ($\text{CP}_{t}$) with variables
$\hat{y}_{ijt}$ for every machine $i$ and job $j\in J(t)$.
max $\displaystyle\sum_{j\in J(t)}w_{j}\log\left(\sum_{i\in
I}\hat{s}_{ij}\hat{y}_{ijt}\right)$ ($\text{CP}_{t}$) s.t.
$\displaystyle\sum_{j\in J(t)}\hat{y}_{ijt}\leq 1$
$\displaystyle\forall i\in I$ $\displaystyle\sum_{i\in
I}\hat{y}_{ijt}\leq 1$ $\displaystyle\forall j\in J(t)$
$\displaystyle\hat{y}_{ijt}\geq 0$ $\displaystyle\forall i\in I,j\in
J(t)$
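A minimal numerical sketch of solving ($\text{CP}_{t}$), assuming SciPy's SLSQP solver is acceptable; the paper does not prescribe a solver, and the function name is hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def pf_rates(w, s_hat):
    """Solve (CP_t): maximize sum_j w_j * log(sum_i s_hat[i,j] * y[i,j])
    subject to row sums <= 1 (machines), column sums <= 1 (jobs), y >= 0.
    Returns the m x n rate matrix."""
    m, n = s_hat.shape
    w = np.asarray(w, dtype=float)

    def neg_obj(x):
        y = x.reshape(m, n)
        per_job = (s_hat * y).sum(axis=0)
        return -np.sum(w * np.log(np.maximum(per_job, 1e-12)))  # guard log(0)

    cons = (
        {"type": "ineq", "fun": lambda x: 1.0 - x.reshape(m, n).sum(axis=1)},
        {"type": "ineq", "fun": lambda x: 1.0 - x.reshape(m, n).sum(axis=0)},
    )
    x0 = np.full(m * n, 1.0 / max(m, n))   # feasible, strictly positive start
    res = minimize(neg_obj, x0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * (m * n), constraints=cons)
    return res.x.reshape(m, n)
```

For two unit-weight jobs on one machine with equal predicted speeds, the optimum splits the machine evenly, matching the proportional-fairness intuition.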
We now give an overview of the proof of Theorem 2.7 and defer further
details to Section A.2.
Fix an instance and PF’s schedule. Let $\kappa\geq 1$ and $0<\lambda<1$ be
constants which we fix later. In the following, we assume by scaling that all
weights are integers. For every time $t$, let $Z^{t}$ be the sorted
(ascending) list of length $W_{t}$ composed of $w_{j}$ copies of
$\frac{q_{jt}}{p_{j}}$ for every $j\in U_{t}$. We define $\zeta_{t}$ as the
value at the index $\lfloor\lambda W_{t}\rfloor$ in $Z^{t}$. Let
$\\{\eta_{it}\\}_{i,t}$ and $\\{\theta_{jt}\\}_{j\in J(t),t}$ be the KKT
multipliers of the first two constraint sets of the optimal solution
$\\{y_{ijt}\\}_{i,j}$. Let $\mathds{1}[\varphi]$ be the indicator variable of
the formula $\varphi$, and consider the following duals:
* •
$\bar{a}_{j}=\sum_{t^{\prime}=0}^{C_{j}}w_{j}\cdot\mathds{1}\left[\frac{q_{jt^{\prime}}}{p_{j}}\leq\zeta_{t^{\prime}}\right]$
for every job $j$,
* •
$\bar{b}_{it}=\frac{1}{\kappa}\sum_{t^{\prime}\geq
t}\zeta_{t^{\prime}}\eta_{it^{\prime}}$ for every machine $i$ and time $t$,
and
* •
$\bar{c}_{jt}=\frac{1}{\kappa}\sum_{t^{\prime}=t}^{C_{j}}\zeta_{t^{\prime}}\theta_{jt^{\prime}}$
for every job $j$ and time $t\geq r_{j}$.
We show that this assignment has an objective value which lower bounds a
fraction of PF’s objective value, and that it is feasible for
($\text{DLP}_{\alpha}$) for some values of $\alpha$.
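The threshold $\zeta_t$ is the entry at index $\lfloor\lambda W_t\rfloor$ of the weighted list $Z^t$; a small sketch, where the 0-based indexing is an assumption about the intended convention.

```python
def zeta(values, weights, lam):
    """Value at index floor(lam * W_t) (0-based) of the ascending list Z^t
    that contains weights[k] copies of values[k] for every unfinished job k;
    values[k] plays the role of q_{kt} / p_k, weights[k] of the integer w_k."""
    pairs = sorted(zip(values, weights))
    idx = int(lam * sum(weights))   # floor(lam * W_t)
    seen = 0
    for v, w in pairs:
        seen += w                   # w copies of v occupy indices seen-w..seen-1
        if seen > idx:
            return v
    return pairs[-1][0]
```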
###### Lemma 2.8.
$(\lambda-\frac{4}{(1-\lambda)\kappa}){\textsc{Alg}}\leq\sum_{j}\bar{a}_{j}-\sum_{i,t}\bar{b}_{it}-\sum_{j,t\geq
r_{j}}\bar{c}_{jt}$
###### Lemma 2.9.
Assigning $a_{j}=\bar{a}_{j}$, $b_{it}=\bar{b}_{it}$ and
$c_{jt}=\bar{c}_{jt}$ is feasible for ($\text{DLP}_{\alpha}$) if
$\alpha=\kappa\mu$.
###### Proof of Theorem 2.7.
Weak duality, Lemma 2.8 and Lemma 2.9 imply
$\displaystyle\kappa\mu\cdot{\textsc{Opt}}\geq{\textsc{Opt}}_{\kappa\mu}\geq\sum_{j}\bar{a}_{j}-\sum_{i,t}\bar{b}_{it}-\sum_{j,t\geq
r_{j}}\bar{c}_{jt}\geq\left(\lambda-\frac{4}{(1-\lambda)\kappa}\right)\cdot{\textsc{Alg}}.$
Setting $\kappa=36$ and $\lambda=\frac{2}{3}$ implies ${\textsc{Alg}}\leq
108\mu\cdot{\textsc{Opt}}$. ∎
## 3 Algorithms for Speed-Ordered Machines
This section contains our results on speed-ordered machines. In the first
subsection, we present a clairvoyant algorithm, and in the second subsection a
non-clairvoyant algorithm. But first, we observe that in this model migration
is necessary for speed-oblivious algorithms.
###### Theorem 3.1.
Any non-migratory speed-oblivious algorithm has a competitive ratio of at
least $\Omega(m)$ for minimizing the total completion time on $m$ speed-
ordered machines, even if it is clairvoyant and the machines are related.
###### Proof.
Consider the execution of some algorithm on an instance of $n$ unit-weight
jobs, each with processing requirement $n^{2}m$, on machines with speed $s_{1}=n^{2}m$.
If at some point in time, the algorithm starts a job on machines $2,\ldots,m$,
the adversary sets $s_{2}=\ldots=s_{m}=1$ to enforce an objective value of at
least $\Omega(n^{2}m)$, while scheduling all jobs on the first machine gives
an objective value of at most $\mathcal{O}(n^{2})$. If this does not happen,
the algorithm must have scheduled all jobs on the first machine. But then the
adversary sets $s_{2}=\ldots=s_{m}=n^{2}m$ and achieves an objective value of
$\mathcal{O}(\frac{n^{2}}{m})$ by distributing the jobs evenly to all
machines, while the algorithm has an objective value of $\Omega(n^{2})$. ∎
### 3.1 A Clairvoyant Algorithm
Algorithm 4 Maximum Density for speed-ordered machines
0: time $t$, speed-ordered machines $s_{1}\geq\ldots\geq s_{m}$
1: $\sigma_{t}\leftarrow$ order of $J(t)$ with non-increasing
$\frac{w_{j}}{p_{j}}$.
2: $M_{t}=\\{(k,\sigma_{t}(k))\\}_{k\in[\ell]}$ where $\ell=\min\\{m,\lvert
J(t)\rvert\\}$
3: Schedule jobs to machines according to $M_{t}$ at time $t$.
Our clairvoyant algorithm for speed-ordered related machines is motivated by
the following observation. If the machines are related and speed-ordered,
Algorithm 1, given correct speed predictions, will assign jobs by non-
increasing order of $\frac{w_{j}}{p_{j}}$ to machines in speed order, because
this clearly maximizes the total scheduled density, i.e., sum of assigned
$\frac{w_{j}s_{i}}{p_{j}}$. Algorithm 4 can therefore emulate this schedule of
maximum density _without_ having to compute a maximum matching, and thus does
not require (predicted) speeds. These observations also suggest that the
analysis must be similar. Indeed, we can use a similar dual-fitting as for
Theorem 2.2 to prove the following theorem. We mainly present new ideas for
proving the dual feasibility. Note that this observation does not hold for
unrelated machines.
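One step of Algorithm 4 can be sketched as follows; the job tuples and 1-based machine indices are illustrative choices.

```python
def max_density_matching(jobs, m):
    """One step of Algorithm 4: `jobs` lists the alive jobs at time t as
    (w_j, p_j, job_id) tuples; machines 1..m are speed-ordered with
    s_1 >= ... >= s_m.  Returns the matching M_t as (machine, job_id) pairs,
    assigning the densest jobs to the fastest machines."""
    order = sorted(jobs, key=lambda x: x[0] / x[1], reverse=True)  # w/p desc.
    ell = min(m, len(jobs))
    return [(k + 1, order[k][2]) for k in range(ell)]
```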
###### Theorem 3.2.
Algorithm 4 has a competitive ratio of at most $8$ for minimizing the total
weighted completion time on speed-ordered related machines.
We use a dual-fitting analysis based on ($\text{DLP}_{\alpha}$) to prove this
theorem. Fix an instance and the algorithm’s schedule, and observe that the
algorithm ensures at every time $t$ that $M_{t}$ is a matching between alive
jobs and machines. Recall that for related machines, $s_{i}=s_{ij}$ for every
job $j$ and every machine $i$. Let $\kappa\geq 1$ be a constant. We define for
every machine $i$ and any time $t$
$\beta_{it}=\begin{cases}\frac{w_{j}s_{i}}{p_{j}}&\text{ if }i\text{ is
matched to }j\in J(t)\text{ in }M_{t}\\\ 0&\text{ otherwise,}\end{cases}$
and for every job $j$ and any time $t$
$\gamma_{jt}=\begin{cases}\frac{w_{j}s_{i}}{p_{j}}&\text{ if }j\text{ is
matched to }i\in I\text{ in }M_{t}\\\ 0&\text{ otherwise.}\end{cases}$
Using these values, we have the following dual assignment:
* •
$\bar{a}_{j}=w_{j}C_{j}$ for every job $j$,
* •
$\bar{b}_{it}=\frac{1}{\kappa}\sum_{t^{\prime}\geq
t}\beta_{it^{\prime}}$ for every machine $i$ and time $t$, and
* •
$\bar{c}_{jt}=\frac{1}{\kappa}\sum_{t^{\prime}\geq
t}\gamma_{jt^{\prime}}$ for every job $j$ and time $t\geq r_{j}$.
We first observe that the dual objective of this assignment is close to the
algorithm’s objective. The proof is analogous to the proof of Lemma 2.4.
###### Lemma 3.3.
$(1-\frac{2}{\kappa}){\textsc{Alg}}\leq\sum_{j}\bar{a}_{j}-\sum_{i,t}\bar{b}_{it}-\sum_{j,t\geq
r_{j}}\bar{c}_{jt}$
###### Lemma 3.4.
Assigning $a_{j}=\bar{a}_{j}$, $b_{it}=\bar{b}_{it}$ and
$c_{jt}=\bar{c}_{jt}$ is feasible for ($\text{DLP}_{\alpha}$) if
$\alpha=\kappa$ and $s_{i}=s_{ij}$ for every job $j$ and every machine $i$.
###### Proof.
Since the dual assignment is clearly non-negative, we now show that it
satisfies the dual constraint. Let $i\in I,j\in J$ and $t\geq r_{j}$. We first
observe that
$\displaystyle\bar{a}_{j}\frac{s_{i}}{p_{j}}-w_{j}t\frac{s_{i}}{p_{j}}\leq\sum_{t^{\prime}=t}^{C_{j}}\frac{w_{j}s_{i}}{p_{j}}.$
Using $\alpha=\kappa$, it remains to validate for every $t\leq t^{\prime}\leq
C_{j}$ that
$\frac{w_{j}s_{i}}{p_{j}}\leq\beta_{it^{\prime}}+\gamma_{jt^{\prime}}$. We
distinguish five cases:
1. (i)
If $(i,j)\in M_{t^{\prime}}$, then
$\frac{w_{j}s_{i}}{p_{j}}=\beta_{it^{\prime}}=\gamma_{jt^{\prime}}$.
2. (ii)
If $(i,j^{\prime})\in M_{t^{\prime}}$ and $(i^{\prime},j)\in M_{t^{\prime}}$
s.t. $i\neq i^{\prime}$, we have two cases. If $i<i^{\prime}$, it must be that
$\sigma_{t^{\prime}}(j^{\prime})<\sigma_{t^{\prime}}(j)$ and, thus,
$\frac{w_{j^{\prime}}}{p_{j^{\prime}}}\geq\frac{w_{j}}{p_{j}}$. But then,
$\frac{w_{j}s_{i}}{p_{j}}\leq\frac{w_{j^{\prime}}s_{i}}{p_{j^{\prime}}}$.
Otherwise, that is, $i>i^{\prime}$, we know by the speed order that $s_{i}\leq
s_{i^{\prime}}$, and, thus,
$\frac{w_{j}s_{i}}{p_{j}}\leq\frac{w_{j}s_{i^{\prime}}}{p_{j}}$. Put together,
$\frac{w_{j}s_{i}}{p_{j}}\leq\frac{w_{j^{\prime}}s_{i}}{p_{j^{\prime}}}+\frac{w_{j}s_{i^{\prime}}}{p_{j}}=\beta_{it^{\prime}}+\gamma_{jt^{\prime}}.$
3. (iii)
If $(i^{\prime},j)\in M_{t^{\prime}}$ and $i$ is not matched in
$M_{t^{\prime}}$, it follows $i^{\prime}<i$, which gives
$\frac{w_{j}s_{i}}{p_{j}}\leq\frac{w_{j}s_{i^{\prime}}}{p_{j}}=\gamma_{jt^{\prime}}.$
4. (iv)
If $(i,j^{\prime})\in M_{t^{\prime}}$ and $j$ is not matched in
$M_{t^{\prime}}$, it follows
$\sigma_{t^{\prime}}(j^{\prime})<\sigma_{t^{\prime}}(j)$, and hence
$\frac{w_{j}}{p_{j}}\leq\frac{w_{j^{\prime}}}{p_{j^{\prime}}}$. This
immediately yields
$\frac{w_{j}s_{i}}{p_{j}}\leq\frac{w_{j^{\prime}}s_{i}}{p_{j^{\prime}}}=\beta_{it^{\prime}}.$
5. (v)
The case where both $i$ and $j$ are unmatched in $M_{t^{\prime}}$ contradicts
the definition of $M_{t^{\prime}}$ in Algorithm 4.
∎
###### Proof of Theorem 3.2.
Weak duality, Lemma 3.4 and Lemma 3.3 imply
$\displaystyle\kappa\cdot{\textsc{Opt}}\geq{\textsc{Opt}}_{\kappa}\geq\sum_{j}\bm{\bar{}}{a}_{j}-\sum_{i,t}\bm{\bar{}}{b}_{it}-\sum_{j,t\geq
r_{j}}\bm{\bar{}}{c}_{jt}\geq\left(1-\frac{2}{\kappa}\right)\cdot{\textsc{Alg}}.$
Using $\kappa=4$ concludes
${\textsc{Alg}}\leq\frac{\kappa}{1-2/\kappa}\cdot{\textsc{Opt}}=8\cdot{\textsc{Opt}}.$
∎
We finally observe that Algorithm 4 indeed cannot achieve a good competitive
ratio if speeds are job-dependent.
###### Lemma 3.5.
Algorithm 4 has a competitive ratio of at least $\Omega(n)$ for minimizing the
total weighted completion time on speed-ordered unrelated machines, even on
two machines and if $w_{j}=1$ for all jobs $j$.
###### Proof.
Let $0<\epsilon<1$. Consider an instance composed of $n$ jobs and $2$
machines, where $w_{j}=1$ for all jobs $j$, $p_{1}=1$ and $p_{j}=1+\epsilon$
for all $2\leq j\leq n$. The processing speeds are given by
$s_{11}=s_{21}=\epsilon$, and $s_{1j}=1$ and $s_{2j}=\epsilon$ for all $2\leq
j\leq n$. Note that the machines are speed-ordered. Algorithm 4 completes job
$1$ on machine $1$ at time $\frac{1}{\epsilon}$, before any other job. Thus,
${\textsc{Alg}}\geq\frac{n}{\epsilon}$. Another solution is to schedule jobs
$2,\ldots,n$ on machine $1$, and job $1$ on machine $2$, giving an objective
of at most $n^{2}+\frac{1}{\epsilon}$. For $\epsilon<n^{-2}$, this concludes
that $\frac{{\textsc{Alg}}}{{\textsc{Opt}}}\geq\Omega(n)$. ∎
### 3.2 A Non-Clairvoyant Algorithm
The non-clairvoyant setting is more difficult, because the schedules of
speed-aware algorithms, such as PF, are not as easy to describe as in the
clairvoyant case. However, for unit weights, related
machines and many alive jobs, i.e., $\lvert J(t)\rvert\geq m$, one solution of
($\text{CP}_{t}$) is to schedule all jobs on all machines with the same rate,
i.e., do Round Robin on every machine. We can describe this schedule without
knowing anything about the speeds. However, in the few-job regime, i.e.,
$\lvert J(t)\rvert<m$, this approach violates the packing constraints of the
jobs, i.e., $\sum_{i}y_{ijt}>1$. This is where the speed order comes into
play: we partition a job’s available rate only to the $\lvert J(t)\rvert$
fastest machines. For the final algorithm (Algorithm 5), we prove below a
guarantee for unrelated machines, and a constant upper bound for related
machines in Section B.2.
Algorithm 5 Round Robin for speed-ordered machines
0: time $t$, speed-ordered machines $s_{1j}\geq\ldots\geq s_{mj}$
Use rates $y_{ijt}=\lvert J(t)\rvert^{-1}\cdot\mathds{1}\left[i\leq\lvert
J(t)\rvert\right]$ at time $t$.
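The rate formula of Algorithm 5 in code; since all alive jobs receive identical rates, this sketch returns the per-machine rates of a single (arbitrary) alive job.

```python
def round_robin_rates(num_alive, m):
    """Rates of Algorithm 5 at time t: every alive job gets rate 1/|J(t)| on
    each of the min(m, |J(t)|) fastest machines (machines are speed-ordered)
    and rate 0 elsewhere; num_alive is |J(t)|."""
    return [1.0 / num_alive if i < num_alive else 0.0 for i in range(m)]
```

Note that both packing constraints hold: each machine runs at most `num_alive * (1/num_alive) = 1` total rate, and each job receives at most `m / num_alive <= 1` when `num_alive >= m`.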
###### Theorem 3.6.
Algorithm 5 has a competitive ratio of at most
$\mathcal{O}(\log(\min\\{n,m\\}))$ for minimizing the total completion time on
speed-ordered unrelated machines.
We prove Theorem 3.6 via dual-fitting based on ($\text{DLP}_{\alpha}$), where
$w_{j}=1$ for every job $j$. Fix an instance and the algorithm’s schedule. For
every time $t$, we write $m_{t}=\min\\{m,\lvert J(t)\rvert\\}$, and we define
$\beta_{it}=\frac{1}{i}\cdot\lvert J(t)\rvert\cdot\mathds{1}\left[i\leq\lvert
J(t)\rvert\right]$ for every machine $i$, and
$\gamma_{jt}=\mathds{1}\left[j\in J(t)\right]$ for every job $j$.
Let $\kappa=\Theta(\log(\min\\{n,m\\}))$. Intuitively, this factor upper
bounds $\sum_{i=1}^{m_{t}}\frac{1}{i}$, which will be necessary when handling
$\sum_{i}\beta_{it}$. For related machines, we can alter the definition of
$\beta_{it}$ and thus have a constant $\kappa$, which eventually implies a
constant upper bound on the competitive ratio.
For every time $t$, consider the sorted (ascending) list $Z^{t}$ composed of
values $\frac{q_{jt}}{p_{j}}$ for every $j\in U_{t}$. We define $\zeta_{t}$ as
the value at the index $\lfloor\frac{1}{2}\lvert U_{t}\rvert\rfloor$ in
$Z^{t}$. Consider the following duals:
* •
$\bar{a}_{j}=\sum_{t^{\prime}=0}^{C_{j}}\mathds{1}\left[\frac{q_{jt^{\prime}}}{p_{j}}\leq\zeta_{t^{\prime}}\right]$
for every job $j$,
* •
$\bar{b}_{it}=\frac{1}{\kappa}\sum_{t^{\prime}\geq
t}\beta_{it^{\prime}}\zeta_{t^{\prime}}$ for every machine $i$ and time $t$,
and
* •
$\bar{c}_{jt}=\frac{1}{\kappa}\sum_{t^{\prime}\geq
t}\gamma_{jt^{\prime}}\zeta_{t^{\prime}}$ for every job $j$ and time $t\geq
r_{j}$.
We prove the following bound on Alg in Section B.1.
###### Lemma 3.7.
$\Omega(1)\cdot{\textsc{Alg}}\leq\sum_{j}\bar{a}_{j}-\sum_{i,t}\bar{b}_{it}-\sum_{j,t\geq
r_{j}}\bar{c}_{jt}$
This lemma, weak LP duality, and the feasibility of the crafted duals (Lemma
3.8) imply Theorem 3.6 for $\alpha=\kappa$.
###### Lemma 3.8.
Assigning $a_{j}=\bar{a}_{j}$, $b_{it}=\bar{b}_{it}$ and
$c_{jt}=\bar{c}_{jt}$ is feasible for ($\text{DLP}_{\alpha}$) if
$\alpha=\kappa$.
###### Proof.
First observe that the dual assignment is non-negative. Let $i\in I,j\in J$
and $t\geq r_{j}$. Since the rates of Algorithm 5 imply
$q_{jt}=\sum_{\ell=1}^{m_{t}}\frac{s_{\ell j}}{\lvert J(t)\rvert}$, we have
$\displaystyle\frac{\bar{a}_{j}s_{ij}}{p_{j}}-\frac{s_{ij}\cdot
t}{p_{j}}$
$\displaystyle\leq\sum_{t^{\prime}=t}^{C_{j}}\frac{s_{ij}}{p_{j}}\cdot\mathds{1}\left[\frac{q_{jt^{\prime}}}{p_{j}}\leq\zeta_{t^{\prime}}\right]=\sum_{t^{\prime}=t}^{C_{j}}\frac{s_{ij}}{q_{jt^{\prime}}}\cdot\frac{q_{jt^{\prime}}}{p_{j}}\cdot\mathds{1}\left[\frac{q_{jt^{\prime}}}{p_{j}}\leq\zeta_{t^{\prime}}\right]\leq\sum_{t^{\prime}=t}^{C_{j}}\frac{s_{ij}}{\sum_{\ell=1}^{m_{t^{\prime}}}\frac{s_{\ell
j}}{\lvert J(t^{\prime})\rvert}}\cdot\zeta_{t^{\prime}}.$ (1)
Consider any time $t^{\prime}$ with $t\leq t^{\prime}\leq C_{j}$. If
$i\leq\lvert J(t^{\prime})\rvert$, by the speed order,
$\sum_{\ell=1}^{m_{t^{\prime}}}s_{\ell j}\geq\sum_{\ell=1}^{i}s_{\ell j}\geq
i\cdot s_{ij}$, and thus
$\frac{s_{ij}}{\sum_{\ell=1}^{m_{t^{\prime}}}s_{\ell j}}\cdot\lvert
J(t^{\prime})\rvert\cdot\zeta_{t^{\prime}}\leq\frac{1}{i}\cdot\lvert
J(t^{\prime})\rvert\cdot\zeta_{t^{\prime}}=\beta_{it^{\prime}}\cdot\zeta_{t^{\prime}}.$
Otherwise, that is, $i>\lvert J(t^{\prime})\rvert$, we conclude by the speed
order, $\sum_{\ell=1}^{m_{t^{\prime}}}s_{\ell j}\geq\sum_{\ell=1}^{\lvert
J(t^{\prime})\rvert}s_{\ell j}\geq\lvert J(t^{\prime})\rvert\cdot s_{ij}$.
Therefore,
$\frac{s_{ij}}{\sum_{\ell=1}^{m_{t^{\prime}}}s_{\ell j}}\cdot\lvert
J(t^{\prime})\rvert\cdot\zeta_{t^{\prime}}\leq\frac{\lvert
J(t^{\prime})\rvert}{\lvert
J(t^{\prime})\rvert}\cdot\zeta_{t^{\prime}}=\gamma_{jt^{\prime}}\cdot\zeta_{t^{\prime}},$
because $t^{\prime}\leq C_{j}$. Put together, (1) is at most
$\displaystyle\sum_{t^{\prime}=t}^{C_{j}}\beta_{it^{\prime}}\zeta_{t^{\prime}}+\sum_{t^{\prime}=t}^{C_{j}}\gamma_{jt^{\prime}}\zeta_{t^{\prime}}\leq\kappa(\bar{b}_{it}+\bar{c}_{jt}),$
which verifies the dual constraint. ∎
###### Lemma 3.9.
Algorithm 5 has a competitive ratio of at least $\Omega(\log(\min\\{n,m\\}))$
for minimizing the total completion time on speed-ordered unrelated machines,
even if processing speeds are exclusively from $\\{0,1\\}$.
###### Proof.
Consider an instance of $m$ unit-sized jobs $[m]$ and $m$ machines $[m]$.
Every job $j\in[m]$ has on machine $i\in[m]$ a processing speed equal to
$s_{ij}=\mathds{1}\left[i\leq m-j+1\right]$. First observe that
${\textsc{Opt}}\leq m$, because we can process and complete every job
$j\in[m]$ exclusively on machine $m-j+1$ at time $1$. We now calculate the
algorithm’s objective value. To this end, we argue that in the algorithm’s
schedule holds $C_{j}=1+\sum_{i=1}^{j-1}\frac{1}{m-i+1}$ for every job $j$.
Then, ${\textsc{Alg}}=\sum_{j=1}^{m}C_{j}=\Omega(m\log m)$ concludes the
statement.
We first observe that $C_{1}=1$, because job $1$ receives in interval
$I_{1}=[0,C_{1})$ on every machine a rate equal to $\frac{1}{m}$. We now argue
iteratively for $j=2,\ldots,m$ that $C_{j}=1+\sum_{i=1}^{j-1}\frac{1}{m-i+1}$.
Consequently, exactly the jobs $j,\ldots,m$ are alive in interval
$I_{j}=[C_{j-1},C_{j})$. Fix a job $j$ with $2\leq j\leq m$ and let $2\leq i\leq
j$. Since $j$ receives progress on exactly $m-j+1$ machines, there are $m-i+1$
alive jobs in $I_{i}$, and $I_{i}$ has length $\frac{1}{m-i+2}$, the total
progress of $j$ in $I_{i}$ is equal to $\frac{m-j+1}{(m-i+1)(m-i+2)}$. Further, $j$’s
progress is equal to $\frac{m-j+1}{m}$ in $I_{1}$. Summing over all intervals
$I_{i}$ with $1\leq i\leq j$ concludes that $j$’s progress until the end of
$I_{j}$ is equal to
$\frac{m-j+1}{m}+\sum_{i=2}^{j}\frac{m-j+1}{(m-i+1)(m-i+2)}=1,$
asserting that $1+\sum_{i=1}^{j-1}\frac{1}{m-i+1}$ is indeed $j$’s completion
time in the algorithm’s schedule. ∎
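The telescoping progress identity at the end of the proof can be verified numerically; a small sketch using exact rationals.

```python
from fractions import Fraction

def progress(j, m):
    """Total progress of job j (speeds s_ij = 1[i <= m-j+1]) up to the end
    of interval I_j under the schedule described in the proof: rate
    1/(m-i+1) per machine in interval I_i, which has length 1 for i = 1 and
    1/(m-i+2) for i >= 2."""
    total = Fraction(m - j + 1, m)            # progress in I_1 on m-j+1 machines
    for i in range(2, j + 1):
        total += Fraction(m - j + 1, (m - i + 1) * (m - i + 2))
    return total
```

Every job's progress equals exactly $1$ at the end of its interval, confirming the claimed completion times.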
## 4 Experimental Evaluation
Figure 2: Real experiments on a _HiKey 970_ board. The experiments are each
repeated 3 times with the same workload but different random noise for speed
predictions. Shaded areas show the standard deviation.
#### Setup
We perform experiments on real hardware running representative jobs, which
enables us to perform a realistic evaluation. The setup uses a _HiKey 970_
board [Lin] with a _Kirin 970_ Arm big.LITTLE SoC featuring 4 _big_ cores and
4 _LITTLE_ cores, running Android 8.0. This is a representative smartphone
platform. The _big_ cores always offer a higher performance than the _LITTLE_
cores (speed-ordered) because they support out-of-order execution and run at
higher frequencies with larger caches (see also Figure 1, all speedups are $>1$). Our
workload comprises 100 randomly selected single-threaded jobs from the well-
established _PARSEC-3.0_ [ZBBL16], _SPLASH-3_ [SLKR16], and Polybench [YP15]
benchmark suites. These benchmarks represent various use cases from video
transcoding, rendering, compression, etc. The arrival times are drawn from a
Poisson distribution with varying rate parameter to study different system
loads. We characterized all jobs offline to get accurate speed $s_{ij}$ and
job volume $p_{j}$ values. Speed predictions are created with controllable
error by $\hat{s}_{ij}=s_{ij}\cdot y_{ij}$, where $y_{ij}$ follows a
log-normal distribution $\ln(y_{ij})\sim\mathcal{N}(0,\sigma^{2})$. Note that
the predictions do not consider slowdown effects on real hardware, e.g., due
to shared resource contention, adding additional inaccuracy.
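This noise model is straightforward to reproduce with NumPy; the function name and the `rng` seed parameter are illustrative additions.

```python
import numpy as np

def noisy_predictions(s, sigma, rng=None):
    """Multiply true speeds s (any array shape) by i.i.d. log-normal noise:
    s_hat = s * y with ln(y) ~ N(0, sigma^2).  sigma = 0 reproduces the true
    speeds exactly; larger sigma yields noisier predictions."""
    rng = np.random.default_rng(rng)
    return s * rng.lognormal(mean=0.0, sigma=sigma, size=np.shape(s))
```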
Additionally, we perform synthetic experiments (Appendix C), which use similar
workload and core configurations, but are only simulated. An advantage is that
rates do not need to be transformed into actual schedules. The results are in
line with the results of our hardware experiments.
#### Algorithms
We consider all algorithms presented in previous sections. Additionally, we
consider Round Robin (RR), which distributes a job evenly over all machines,
and Iterative Greedy (Algorithm 6), which at any time iteratively schedules
the job $j$ on machine $i$ which has the maximum $w_{j}\hat{s}_{ij}$ among
all unassigned alive jobs and free machines. We show that Iterative Greedy is
not competitive (lower bound of $\Omega(n)$).
Algorithm 6 Iterative Greedy
0: time $t$, speed predictions $\\{\hat{s}_{ij}\\}$
1: $I^{\prime}\leftarrow I,J^{\prime}\leftarrow J(t)$
2: while $I^{\prime}\neq\emptyset\land J^{\prime}\neq\emptyset$ do
3: $(i,j)=\operatorname*{arg\,max}_{i\in I^{\prime},j\in
J^{\prime}}w_{j}\hat{s}_{ij}$
4: $I^{\prime}\leftarrow I^{\prime}\setminus\\{i\\},J^{\prime}\leftarrow
J^{\prime}\setminus\\{j\\}$
5: Schedule job $j$ on machine $i$ with rate $y_{ijt}=1$ at time $t$.
6: end while
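A sketch of one invocation of Iterative Greedy; the dictionary return type is an illustrative choice.

```python
def iterative_greedy(weights, s_hat):
    """One step of Iterative Greedy (Algorithm 6): repeatedly pick the pair
    (machine i, alive job j) maximizing weights[j] * s_hat[i][j] among free
    machines and unassigned jobs.  Returns {machine: job} for this time step;
    each assigned pair runs with rate 1."""
    free = set(range(len(s_hat)))            # free machine indices
    unassigned = set(range(len(weights)))    # unassigned alive job indices
    assignment = {}
    while free and unassigned:
        i, j = max(((i, j) for i in free for j in unassigned),
                   key=lambda ij: weights[ij[1]] * s_hat[ij[0]][ij[1]])
        assignment[i] = j
        free.remove(i)
        unassigned.remove(j)
    return assignment
```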
###### Lemma 4.1.
Algorithm 6 has a competitive ratio of at least $\Omega(n)$ for minimizing the
total completion time on unrelated machines, even if
$s_{ij}=\hat{s}_{ij}$ for all jobs $j$ and machines $i$.
###### Proof.
Let $\epsilon>0$ and $n>m\geq 2$ such that $\frac{n-1}{m-1}$ is an integer.
Consider a unit-weight instance of one job with $p_{1}=\frac{n-1}{m-1}$,
$s_{11}=1+\epsilon$ and $s_{i1}=1$ for $2\leq i\leq m$, and $n-1$ jobs with
$p_{j}=\epsilon$ and $s_{1j}=1$ and $s_{ij}=\epsilon$ for $2\leq j\leq n,2\leq
i\leq m$. Algorithm 6 first schedules job 1 on machine 1, and the $n-1$ others
on the remaining $m-1$ machines. Since the completion time of job $1$ is equal
to $\frac{n-1}{(1+\epsilon)(m-1)}$, jobs $2,\ldots,n$ will complete at time at
least $\frac{n-1}{m-1}$ only on machines $2,\ldots,m$ if
$\epsilon<\frac{m}{n-m-1}$, hence this allocation will remain until the end of
the instance. This implies a total completion time of
$\Omega(\frac{n^{2}}{m})$ for jobs $2,\ldots,n$. Another solution is to
schedule all jobs $2,\ldots,n$ on machine 1 with a total completion time of at
most $\mathcal{O}(\epsilon n^{2})$, and job $1$ latest at time
$\mathcal{O}(\frac{n}{m})$ on any other machine. This implies that Algorithm 6
has a competitive ratio of at least $\Omega(n)$. ∎
#### Results
Figure 2 presents the results of the hardware experiments. We exclude PF
because it produces fractional schedules which are often difficult to convert
into real schedules [IKM18], and Greedy WSPT, because, given incorrect
predictions, it significantly underperforms in synthetic experiments. We
repeat each experiment 3 times with the same workload (jobs and arrival times)
but different random noisy speed predictions and plot the average and standard
deviation of the average completion times.
Under low system load (Figure 2a), the number of active jobs is mostly $\leq$
4, i.e., it is mostly feasible to only use the _big_ cores. Consequently, the
algorithms that exploit the speed-ordered property (red) consistently perform
best. Algorithms with speed predictions (blue) perform equally well for
accurate predictions but their performance deteriorates for very noisy
predictions. RR always uses all cores and thus performs poorly.
Under high system load (Figure 2b), the number of active jobs is mostly $>$ 4,
thus, _LITTLE_ cores have to be used. RR and speed-ordered RR perform
similarly, as both mostly use the same cores. For low prediction noise
($\sigma<1$), Maximum Density performs best, but also requires most
information (speed predictions and clairvoyance). For higher prediction noise,
speed-ordered Maximum Density is better because too noisy speed predictions
result in bad schedules. Iterative Greedy performs best among the non-
clairvoyant algorithms, but does not offer any theoretical guarantees.
Figure 3: Distribution of the system load with speed-ordered Round Robin
(Algorithm 5).
#### Load analysis
Figure 3 shows the distribution of system load during the experiments with
speed-ordered Round Robin (Algorithm 5). At low job arrival rate (1 task/min),
the system load is $\leq$ 4 during 87 % of the time. This means that during
the majority of the time, it is possible to only use the _big_ cores,
explaining why speed predictions or clairvoyance bring little benefit over the
speed-ordered setting as in Figure 2a. In contrast, the system load is $\leq$
4 during 43 % of the time at a high job arrival rate (4 tasks/min), reaching
up to 46. Accurate speed and job volume predictions are much more beneficial
in this case, explaining the larger differences between algorithms in Figure
2b.
#### Summary
Speed predictions are beneficial in the average case if they are relatively
accurate. With inaccurate predictions, relying on the speed-ordering instead
is beneficial. _In summary, our experiments show the power of speed
predictions and speed-ordering for online scheduling in real-world settings._
## 5 Conclusion and Future Directions
We initiated research on speed-oblivious algorithms with two models motivated
by real-world observations. Future directions include settling the asymptotic
competitive ratio for (non-)clairvoyant speed-oblivious algorithms on speed-
ordered unrelated machines, shrinking the upper bound of PF to a small
constant, and investigating speed-oblivious algorithms for other objective
functions such as the total flow time, potentially also in the speed-scaling
model.
## References
* [ABK+18] Luciana Arantes, Evripidis Bampis, Alexander V. Kononov, Manthos Letsios, Giorgio Lucarelli, and Pierre Sens. Scheduling under uncertainty: A query-based approach. In IJCAI, pages 4646–4652, 2018.
* [AE20] Susanne Albers and Alexander Eckl. Explorable uncertainty in scheduling with non-uniform testing times. In WAOA, volume 12806 of Lecture Notes in Computer Science, pages 127–142. Springer, 2020.
* [AGK12] S. Anand, Naveen Garg, and Amit Kumar. Resource augmentation for weighted flow-time explained by dual fitting. In SODA, pages 1228–1241. SIAM, 2012.
* [AGS22] Antonios Antoniadis, Peyman Jabbarzade Ganje, and Golnoosh Shahkarami. A novel prediction setup for online speed-scaling. In SWAT, volume 227 of LIPIcs, pages 9:1–9:20. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022.
* [ALT21] Yossi Azar, Stefano Leonardi, and Noam Touitou. Flow time scheduling with uncertain processing time. In STOC, pages 1070–1080. ACM, 2021.
* [ALT22] Yossi Azar, Stefano Leonardi, and Noam Touitou. Distortion-oblivious algorithms for minimizing flow time. In SODA, pages 252–274. SIAM, 2022.
* [ARM13] ARM Limited. big.LITTLE Technology: The Future of Mobile, 2013.
* [AS01] Susanne Albers and Günter Schmidt. Scheduling with unexpected machine breakdowns. Discret. Appl. Math., 110(2-3):85–99, 2001.
* [BE98] Allan Borodin and Ran El-Yaniv. Online computation and competitive analysis. Cambridge University Press, 1998.
* [BKL21] Marcin Bienkowski, Artur Kraska, and Hsiang-Hsuan Liu. Traveling repairperson, unrelated machines, and other stories about average completion times. In ICALP, volume 198 of LIPIcs, pages 28:1–28:20. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021.
* [BOSW22] Eric Balkanski, Tingting Ou, Clifford Stein, and Hao-Ting Wei. Scheduling with speed predictions. CoRR, abs/2205.01247, 2022.
* [BSS21] Nikhil Bansal, Aravind Srinivasan, and Ola Svensson. Lift-and-round to improve weighted completion time on unrelated machines. SIAM J. Comput., 50(3), 2021.
* [CGKM09] Jivitej S. Chadha, Naveen Garg, Amit Kumar, and V. N. Muralidhara. A competitive algorithm for minimizing weighted flow time on unrelated machines with speed augmentation. In STOC, pages 679–684. ACM, 2009.
* [DEMM20] Christoph Dürr, Thomas Erlebach, Nicole Megow, and Julie Meißner. An adversarial model for scheduling with testing. Algorithmica, 82(12):3630–3675, 2020.
* [DIL+22] Michael Dinitz, Sungjin Im, Thomas Lavastida, Benjamin Moseley, and Sergei Vassilvitskii. Algorithms with prediction portfolios. In Alice H. Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho, editors, Advances in Neural Information Processing Systems, 2022.
* [DJST09] Florian Diedrich, Klaus Jansen, Ulrich M. Schwarz, and Denis Trystram. A survey on approximation algorithms for scheduling with machine unavailability. In Algorithmics of Large and Complex Networks, volume 5515 of Lecture Notes in Computer Science, pages 50–64. Springer, 2009.
* [EHM+21] Franziska Eberle, Ruben Hoeksma, Nicole Megow, Lukas Nölke, Kevin Schewior, and Bertrand Simon. Speed-robust scheduling - sand, bricks, and rocks. In IPCO, volume 12707 of Lecture Notes in Computer Science, pages 283–296. Springer, 2021.
* [ELM+12] Leah Epstein, Asaf Levin, Alberto Marchetti-Spaccamela, Nicole Megow, Julián Mestre, Martin Skutella, and Leen Stougie. Universal sequencing on an unreliable machine. SIAM J. Comput., 41(3):565–586, 2012.
* [FR98] Dror G. Feitelson and Larry Rudolph. Metrics and benchmarking for parallel job scheduling. In JSSPP, volume 1459 of Lecture Notes in Computer Science, pages 1–24. Springer, 1998.
* [GBA+18] Ujjwal Gupta, Manoj Babu, Raid Ayoub, Michael Kishinevsky, Francesco Paterna, and Ümit Y. Ogras. STAFF: online learning with stabilized adaptive forgetting factor and feature selection algorithm. In DAC, pages 177:1–177:6. ACM, 2018.
* [GIK+12] Anupam Gupta, Sungjin Im, Ravishankar Krishnaswamy, Benjamin Moseley, and Kirk Pruhs. Scheduling heterogeneous processors isn’t as easy as you think. In SODA, pages 1242–1253. SIAM, 2012.
* [GMUX20] Varun Gupta, Benjamin Moseley, Marc Uetz, and Qiaomin Xie. Greed works - online algorithms for unrelated machine stochastic scheduling. Math. Oper. Res., 45(2):497–516, 2020.
* [HSSW97] Leslie A. Hall, Andreas S. Schulz, David B. Shmoys, and Joel Wein. Scheduling to minimize average completion time: Off-line and on-line approximation algorithms. Math. Oper. Res., 22(3):513–544, 1997.
* [IKM18] Sungjin Im, Janardhan Kulkarni, and Kamesh Munagala. Competitive algorithms from competitive equilibria: Non-clairvoyant scheduling under polyhedral constraints. J. ACM, 65(1):3:1–3:33, 2018.
* [IKMP14] Sungjin Im, Janardhan Kulkarni, Kamesh Munagala, and Kirk Pruhs. Selfishmigrate: A scalable algorithm for non-clairvoyantly scheduling heterogeneous processors. In FOCS, pages 531–540. IEEE Computer Society, 2014.
* [IKQP21] Sungjin Im, Ravi Kumar, Mahshid Montazer Qaem, and Manish Purohit. Non-clairvoyant scheduling with predictions. In SPAA, pages 285–294. ACM, 2021.
* [IL23] Sungjin Im and Shi Li. Improved approximations for unrelated machine scheduling. In SODA, pages 2917–2946. SIAM, 2023.
* [Jäg21] Sven Joachim Jäger. Approximation in deterministic and stochastic machine scheduling. PhD thesis, Technical University of Berlin, Germany, 2021.
* [JMM+03] Kamal Jain, Mohammad Mahdian, Evangelos Markakis, Amin Saberi, and Vijay V. Vazirani. Greedy facility location algorithms analyzed using dual fitting with factor-revealing LP. J. ACM, 50(6):795–824, 2003.
* [KC03] Jae-Hoon Kim and Kyung-Yong Chwa. Non-clairvoyant scheduling for weighted flow time. Inf. Process. Lett., 87(1):31–37, 2003.
* [KPSH15] Heba Khdr, Santiago Pagani, Muhammad Shafique, and Jörg Henkel. Thermal constrained resource management for mixed ILP-TLP workloads in dark silicon chips. In DAC, pages 179:1–179:6. ACM, 2015.
  * [Kuh55] H. W. Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2(1-2):83–97, 1955.
* [Li20] Shi Li. Scheduling to minimize total weighted completion time via time-indexed linear programming relaxations. SIAM J. Comput., 49(4), 2020.
  * [Lin] Linaro 96Boards. HiKey970. https://96boards.org/product/hikey970/.
* [LLMV20] Silvio Lattanzi, Thomas Lavastida, Benjamin Moseley, and Sergei Vassilvitskii. Online scheduling via learned weights. In SODA, pages 1859–1877. SIAM, 2020.
* [LM22] Alexander Lindermayr and Nicole Megow. Permutation predictions for non-clairvoyant scheduling. In SPAA, pages 357–368. ACM, 2022.
* [LM23] Alexander Lindermayr and Nicole Megow. Repository of papers on algorithms with predictions, 2023. URL: https://algorithms-with-predictions.github.io/.
* [LSS03] Xiwen Lu, René Sitters, and Leen Stougie. A class of on-line scheduling algorithms to minimize total completion time. Oper. Res. Lett., 31(3):232–236, 2003.
* [LX21] Shi Li and Jiayi Xian. Online unrelated machine load balancing with predictions revisited. In ICML, volume 139 of Proceedings of Machine Learning Research, pages 6523–6532. PMLR, 2021.
* [MPT94] Rajeev Motwani, Steven J. Phillips, and Eric Torng. Non-clairvoyant scheduling. Theor. Comput. Sci., 130(1):17–47, 1994.
* [MS04] Nicole Megow and Andreas S. Schulz. On-line scheduling to minimize average completion time revisited. Oper. Res. Lett., 32(5):485–490, 2004.
* [MV22] Michael Mitzenmacher and Sergei Vassilvitskii. Algorithms with predictions. Commun. ACM, 65(7):33–35, 2022.
* [PSK18] Manish Purohit, Zoya Svitkina, and Ravi Kumar. Improving online algorithms via ML predictions. In NeurIPS, pages 9684–9693, 2018.
* [PST04] Kirk Pruhs, Jirí Sgall, and Eric Torng. Online scheduling. In Handbook of Scheduling. Chapman and Hall/CRC, 2004.
* [RPMH21] Martin Rapp, Anuj Pathania, Tulika Mitra, and Jörg Henkel. Neural network-based performance prediction for task migration on S-NUCA many-cores. IEEE Trans. Computers, 70(10):1691–1704, 2021.
* [RYR+22] Efraim Rotem, Adi Yoaz, Lihu Rappoport, Stephen J Robinson, Julius Yuli Mandelblat, Arik Gihon, Eliezer Weissmann, Rajshree Chabukswar, Vadim Basin, Russell Fenger, et al. Intel Alder Lake CPU Architectures. IEEE Micro, 42(3):13–19, 2022.
* [S+56] Wayne E Smith et al. Various optimizers for single-stage production. Naval Research Logistics Quarterly, 3(1-2):59–66, 1956.
* [SLKR16] Christos Sakalis, Carl Leonardsson, Stefanos Kaxiras, and Alberto Ros. Splash-3: A properly synchronized benchmark suite for contemporary research. In ISPASS, pages 101–111. IEEE Computer Society, 2016.
* [SS02a] Andreas S. Schulz and Martin Skutella. The power of $\alpha$-points in preemptive single machine scheduling. Journal of Scheduling, 5(2):121–133, 2002.
* [SS02b] Andreas S. Schulz and Martin Skutella. Scheduling unrelated machines by randomized rounding. SIAM J. Discret. Math., 15(4):450–469, 2002.
* [SZ20] Clifford Stein and Mingxian Zhong. Scheduling when you do not know the number of machines. ACM Trans. Algorithms, 16(1):9:1–9:20, 2020.
* [The19] The kernel development community. Energy Aware Scheduling – The Linux Kernel Documentation, 2019. https://www.kernel.org/doc/html/v5.3/scheduler/sched-energy.html.
* [YP15] Tomofumi Yuki and Louis-Noël Pouchet. Polybench 4.0, 2015.
* [ZBBL16] Xusheng Zhan, Yungang Bao, Christian Bienia, and Kai Li. PARSEC3.0: A multicore benchmark suite with network stacks and SPLASH-2X. SIGARCH Comput. Archit. News, 44(5):1–16, 2016.
## Appendix A Details on Algorithms with Speed Predictions
### A.1 Full Analysis of Greedy WSPT with Speed Predictions
In this section, we present an error-dependent competitive ratio for Greedy
WSPT with speed predictions and eventually prove Theorem 2.6. The analysis is
inspired by [GMUX20], but uses a different approach for proving the
feasibility of the crafted duals. In particular, we need fewer scaling
parameters than Gupta et al. See 2.6
Fix an instance and the algorithm’s schedule. Let $\kappa\geq 1$ and
$0<\theta<1$ be constants. We assume w.l.o.g. by scaling the instance that all
processing requirements and release dates are integer multiples of $\kappa$.
Recall that $\hat{\delta}_{ij}=\frac{w_{j}\hat{s}_{ij}}{p_{j}}$ and
$\hat{r}_{ij}=\max\\{r_{j},\theta\frac{p_{j}}{\hat{s}_{ij}}\\}$.
We write for every job $j$ and machine $i$
$Q_{ij}=w_{j}\Bigg{(}\hat{r}_{ij}+\mu_{1}\frac{\hat{r}_{ij}}{\theta}+\frac{p_{j}}{s_{ij}}+\sum_{\begin{subarray}{c}j^{\prime}\in M_{i}(j)\\\ \hat{\delta}_{ij^{\prime}}\geq\hat{\delta}_{ij}\end{subarray}}\frac{p_{j^{\prime}}}{s_{ij^{\prime}}}\Bigg{)}+\frac{p_{j}}{s_{ij}}\sum_{\begin{subarray}{c}j^{\prime}\in M_{i}(j)\\\ \hat{\delta}_{ij^{\prime}}<\hat{\delta}_{ij}\end{subarray}}w_{j^{\prime}}.$
Also, recall that the algorithm uses the values $\hat{Q}_{ij}$ to assign a job
$j$ at time $r_{j}$ to machine
$g(j)=\operatorname*{arg\,min}_{i}\hat{Q}_{ij}$:
$\hat{Q}_{ij}=w_{j}\Bigg{(}\hat{r}_{ij}+\frac{\hat{r}_{ij}}{\theta}+\frac{p_{j}}{\hat{s}_{ij}}+\sum_{\begin{subarray}{c}j^{\prime}\in M_{i}(j)\\\ \hat{\delta}_{ij^{\prime}}\geq\hat{\delta}_{ij}\end{subarray}}\frac{p_{j^{\prime}}}{\hat{s}_{ij^{\prime}}}\Bigg{)}+\frac{p_{j}}{\hat{s}_{ij}}\sum_{\begin{subarray}{c}j^{\prime}\in M_{i}(j)\\\ \hat{\delta}_{ij^{\prime}}<\hat{\delta}_{ij}\end{subarray}}w_{j^{\prime}}.$
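The assignment rule reads naturally as code. The sketch below mirrors the formula for $\hat{Q}_{ij}$ with $\hat{r}_{ij}=\max\\{r_{j},\theta p_{j}/\hat{s}_{ij}\\}$; the data layout and names are illustrative assumptions, not the paper's implementation.

```python
def q_hat(i, j, jobs, pending, s_hat, theta=2/3):
    """Predicted cost proxy Q^_ij for assigning job j to machine i.

    jobs[j] = (w_j, p_j, r_j); pending = M_i(j), the jobs already assigned
    to machine i but not yet started; s_hat[i][j] = predicted speed.
    """
    w, p, r = jobs[j]
    s = s_hat[i][j]
    r_hat = max(r, theta * p / s)          # predicted earliest-start proxy

    def dens(k):                           # predicted density delta^_ik
        wk, pk, _ = jobs[k]
        return wk * s_hat[i][k] / pk

    high = sum(jobs[k][1] / s_hat[i][k] for k in pending if dens(k) >= dens(j))
    low = sum(jobs[k][0] for k in pending if dens(k) < dens(j))
    return w * (r_hat + r_hat / theta + p / s + high) + (p / s) * low

def greedy_assign(j, jobs, pending_per_machine, s_hat):
    """g(j) = argmin_i Q^_ij, evaluated at the release date of job j."""
    return min(pending_per_machine,
               key=lambda i: q_hat(i, j, jobs, pending_per_machine[i], s_hat))

# One job (w=1, p=2, r=0); machine 0 is predicted twice as fast as machine 1.
jobs = {0: (1.0, 2.0, 0.0)}
s_hat = {0: {0: 2.0}, 1: {0: 1.0}}
assert greedy_assign(0, jobs, {0: [], 1: []}, s_hat) == 0
```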
We now introduce a linear programming relaxation of our problem. As we
consider a non-preemptive scheduling problem here, we can define a stronger
linear program relaxation than ($\text{LP}_{\alpha}$) [SS02b]:
min $\displaystyle\sum_{i,j,t}w_{j}\cdot
x_{ijt}\cdot\left(\frac{1}{2}+\frac{s_{ij}}{p_{j}}\cdot\left(t+\frac{1}{2}\right)\right)$
(NP-LP) s.t. $\displaystyle\sum_{i,t\geq r_{j}}\frac{x_{ijt}s_{ij}}{p_{j}}\geq
1$ $\displaystyle\forall j$ $\displaystyle\sum_{j}x_{ijt}\leq 1$
$\displaystyle\forall i,t$ $\displaystyle x_{ijt}\geq 0$ $\displaystyle\forall
i,j,t$ $\displaystyle x_{ijt}=0$ $\displaystyle\forall i,j,t<r_{j}$
This relaxation has an integrality gap of $2$ [SS02b]. The dual of (NP-LP) can
be written as follows:
max $\displaystyle\quad\sum_{j}a_{j}-\sum_{i,t}b_{it}$ (NP-DLP) s.t.
$\displaystyle\frac{a_{j}s_{ij}}{p_{j}}-b_{it}\leq
w_{j}\left(s_{ij}\frac{t+1/2}{p_{j}}+\frac{1}{2}\right)\quad\forall i,j,t\geq
r_{j}$ (2) $\displaystyle a_{j},b_{it}\geq 0\qquad\forall i,j,t$
We define a solution for (NP-DLP) which depends on the schedule produced by
the algorithm. Let $U_{i}(t)=\\{j\in J\mid g(j)=i\land t<C_{j}\\}$. Note that
$U_{i}(t)$ includes unreleased jobs at time $t$. Consider the following dual
assignment:
  * •
$\bar{a}_{j}=Q_{g(j)j}$ for every job $j$ and
  * •
$\bar{b}_{it}=\mu\cdot\sum_{j\in U_{i}(\kappa\cdot t)}w_{j}$ for every machine $i$ and time $t$.
We first show that the objective value of (NP-DLP) for
$(\bar{a}_{j},\bar{b}_{it})$ is close to the objective value of the algorithm.
###### Lemma A.1.
$\sum_{j}\bm{\bar{}}{a}_{j}\geq{\textsc{Alg}}$
###### Proof.
Consider the algorithm’s schedule. Let $x_{i}(t)$ denote the amount of time
(not volume) the currently processed job on machine $i$ requires to complete.
If there is no job running on machine $i$ at time $t$, we define $x_{i}(t)=0$.
We now calculate the contribution of some job $j$ to the algorithm’s objective
value Alg. Suppose that $j$ gets assigned to $g(j)=i$. Then, $j$ might delay
other jobs with smaller predicted density which have been already assigned to
$i$, i.e., are part of $M_{i}(j)$. Further, $j$ might be delayed by jobs which
have higher predicted density and are part of $M_{i}(j)$. Finally, $j$’s
completion time cannot be less than $\hat{r}_{ij}+\frac{p_{j}}{s_{ij}}$ due to
the definition of the algorithm, and this value might be delayed further by
$x_{i}(\hat{r}_{ij})$. In total, we conclude that the contribution of $j$ to
Alg is at most
$w_{j}\Bigg{(}\hat{r}_{ij}+x_{i}(\hat{r}_{ij})+\frac{p_{j}}{s_{ij}}+\sum_{\begin{subarray}{c}j^{\prime}\in M_{i}(j)\\\ \hat{\delta}_{ij^{\prime}}\geq\hat{\delta}_{ij}\end{subarray}}\frac{p_{j^{\prime}}}{s_{ij^{\prime}}}\Bigg{)}+\frac{p_{j}}{s_{ij}}\sum_{\begin{subarray}{c}j^{\prime}\in M_{i}(j)\\\ \hat{\delta}_{ij^{\prime}}<\hat{\delta}_{ij}\end{subarray}}w_{j^{\prime}}.$
This value is indeed at most $Q_{ij}$, because if at time $\hat{r}_{ij}$ some
job $k$ is being processed, it must be that $\hat{r}_{ik}\leq\hat{r}_{ij}$,
and thus
$x_{i}(\hat{r}_{ij})\leq\frac{p_{k}}{s_{ik}}\leq\mu_{1}\frac{p_{k}}{\hat{s}_{ik}}\leq\mu_{1}\frac{\hat{r}_{ik}}{\theta}\leq\mu_{1}\frac{\hat{r}_{ij}}{\theta}.$
The statement then follows by summation of all jobs and the observation that
this contribution only affects jobs that were handled before job $j$. ∎
###### Lemma A.2.
$\sum_{i,t}\bar{b}_{it}=\frac{\mu}{\kappa}{\textsc{Alg}}$
###### Proof.
Since we assumed that all release dates and processing times in $J$ are
integer multiples of $\kappa$, all job completions occur at integer multiples
of $\kappa$. Thus, $\sum_{t}\sum_{j\in U_{i}(\kappa\cdot
t)}w_{j}=\frac{1}{\kappa}\sum_{t}\sum_{j\in U_{i}(t)}w_{j}$ for every machine
$i$, and we conclude
$\sum_{i,t}\bar{b}_{it}=\mu\sum_{i,t}\sum_{j\in U_{i}(\kappa\cdot t)}w_{j}=\frac{\mu}{\kappa}\sum_{i,t}\sum_{j\in U_{i}(t)}w_{j}=\frac{\mu}{\kappa}\cdot{\textsc{Alg}}.$
∎
These two lemmas give the following corollary.
###### Corollary A.3.
$\sum_{j}\bar{a}_{j}-\sum_{i,t}\bar{b}_{it}\geq\left(1-\frac{\mu}{\kappa}\right)\cdot{\textsc{Alg}}$.
Second, we show that scaling the crafted duals makes them feasible for (NP-
DLP).
###### Lemma A.4.
Assigning $a_{j}=\bar{a}_{j}/\lambda$ and $b_{it}=\bar{b}_{it}/\lambda$ gives
a feasible solution for (NP-DLP) for a constant $\lambda>0$ that satisfies
$\lambda\geq 2\mu(2+\theta)$ and
$\lambda\geq\mu_{1}(\frac{1}{\theta}+\mu_{2}\cdot\kappa)$.
###### Proof.
Since our defined variables are non-negative by definition, it suffices to
show that this assignment satisfies (2). Fix a job $j$, a machine $i$ and a
time $t\geq r_{j}$. We assume that no new job arrives after $j$, since such a
job may only increase $\bar{b}_{it}$ while $\bar{a}_{j}$ stays unchanged. We
define a partition of $M_{i}(j)$ into high-priority and low-priority jobs with
respect to $j$, and into completed and unfinished jobs with respect to time
$\kappa\cdot t$:
  * •
$H_{U}=\\{j^{\prime}\in M_{i}(j):\hat{\delta}_{ij^{\prime}}\geq\hat{\delta}_{ij}\land C_{j^{\prime}}>\kappa\cdot t\\}$ and $H_{C}=\\{j^{\prime}\in M_{i}(j):\hat{\delta}_{ij^{\prime}}\geq\hat{\delta}_{ij}\land C_{j^{\prime}}\leq\kappa\cdot t\\}$,
  * •
$L_{U}=\\{j^{\prime}\in M_{i}(j):\hat{\delta}_{ij^{\prime}}<\hat{\delta}_{ij}\land C_{j^{\prime}}>\kappa\cdot t\\}$ and $L_{C}=\\{j^{\prime}\in M_{i}(j):\hat{\delta}_{ij^{\prime}}<\hat{\delta}_{ij}\land C_{j^{\prime}}\leq\kappa\cdot t\\}$.
We write $H=H_{C}\cup H_{U}$, $L=L_{C}\cup L_{U}$ and
$\delta_{ij}=\frac{w_{j}s_{ij}}{p_{j}}$. Due to the choice of $g(j)$ in the
algorithm, $\hat{Q}_{g(j)j}\leq\hat{Q}_{i^{\prime}j}$ for every machine
$i^{\prime}$. Hence, we have
$\bar{a}_{j}=Q_{g(j)j}\leq\mu_{1}\cdot\hat{Q}_{g(j)j}\leq\mu_{1}\cdot\hat{Q}_{ij}$,
and using that,
$\displaystyle\frac{\bar{a}_{j}\cdot s_{ij}}{\lambda p_{j}}$ $\displaystyle\leq\mu_{1}\frac{\hat{Q}_{ij}\cdot s_{ij}}{\lambda p_{j}}$
$\displaystyle=\delta_{ij}\frac{\mu_{1}}{\lambda}\left(\hat{r}_{ij}+\frac{\hat{r}_{ij}}{\theta}+\frac{p_{j}}{\hat{s}_{ij}}+\sum_{j^{\prime}\in H}\frac{p_{j^{\prime}}}{\hat{s}_{ij^{\prime}}}\right)+\frac{\mu_{1}}{\lambda}\frac{s_{ij}}{\hat{s}_{ij}}\sum_{j^{\prime}\in L}w_{j^{\prime}}$
$\displaystyle\leq\delta_{ij}\frac{\mu_{1}}{\lambda}\left(\left(1+\frac{1}{\theta}\right)r_{j}+\sum_{j^{\prime}\in H}\frac{p_{j^{\prime}}}{\hat{s}_{ij^{\prime}}}\right)+\mu_{1}\frac{s_{ij}w_{j}}{\lambda p_{j}}(2+\theta)\frac{p_{j}}{\hat{s}_{ij}}+\frac{\mu_{1}}{\lambda}\frac{s_{ij}}{\hat{s}_{ij}}\sum_{j^{\prime}\in L}w_{j^{\prime}}$
$\displaystyle\leq\delta_{ij}\frac{\mu_{1}}{\lambda}\left(\left(1+\frac{1}{\theta}\right)r_{j}+\sum_{j^{\prime}\in H}\frac{p_{j^{\prime}}}{\hat{s}_{ij^{\prime}}}\right)+\mu\frac{w_{j}}{\lambda}\left(2+\theta\right)+\frac{\mu_{1}}{\lambda}\frac{s_{ij}}{\hat{s}_{ij}}\sum_{j^{\prime}\in L}w_{j^{\prime}}$
$\displaystyle\leq\delta_{ij}\frac{\mu_{1}}{\lambda}\left(\left(1+\frac{1}{\theta}\right)r_{j}+\sum_{j^{\prime}\in H}\frac{p_{j^{\prime}}}{\hat{s}_{ij^{\prime}}}\right)+\frac{w_{j}}{2}+\frac{\mu_{1}}{\lambda}\frac{s_{ij}}{\hat{s}_{ij}}\sum_{j^{\prime}\in L}w_{j^{\prime}},$
where the second inequality is due to
$(1+\frac{1}{\theta})\hat{r}_{ij}\leq(1+\frac{1}{\theta})r_{j}+(1+\theta)\frac{p_{j}}{\hat{s}_{ij}}$,
which follows from the definition of $\hat{r}_{ij}$, and the last inequality
requires $\lambda\geq 2\mu(2+\theta)$. Thus, asserting the dual constraint (2)
reduces to proving
$\delta_{ij}\frac{\mu_{1}}{\lambda}\left(\left(1+\frac{1}{\theta}\right)r_{j}+\sum_{j^{\prime}\in H}\frac{p_{j^{\prime}}}{\hat{s}_{ij^{\prime}}}\right)+\frac{\mu_{1}}{\lambda}\frac{s_{ij}}{\hat{s}_{ij}}\sum_{j^{\prime}\in L}w_{j^{\prime}}\leq\delta_{ij}t+\frac{\bar{b}_{it}}{\lambda}.$
To this end, first note that for all $j^{\prime}\in L$ it holds that
$w_{j^{\prime}}\frac{\hat{s}_{ij^{\prime}}}{p_{j^{\prime}}}=\hat{\delta}_{ij^{\prime}}<\hat{\delta}_{ij}=\frac{w_{j}\hat{s}_{ij}}{p_{j}}=\frac{\delta_{ij}\hat{s}_{ij}}{s_{ij}}\Longrightarrow\frac{s_{ij}}{\delta_{ij}\hat{s}_{ij}}w_{j^{\prime}}\leq\frac{p_{j^{\prime}}}{\hat{s}_{ij^{\prime}}},$
(3)
and for all $j^{\prime}\in H$
$\delta_{ij}=\frac{\hat{\delta}_{ij}s_{ij}}{\hat{s}_{ij}}\leq\frac{\hat{\delta}_{ij^{\prime}}s_{ij}}{\hat{s}_{ij}}=\frac{w_{j^{\prime}}\hat{s}_{ij^{\prime}}}{p_{j^{\prime}}}\cdot\frac{s_{ij}}{\hat{s}_{ij}}\Longrightarrow\delta_{ij}\frac{p_{j^{\prime}}}{\hat{s}_{ij^{\prime}}}\leq\frac{s_{ij}}{\hat{s}_{ij}}w_{j^{\prime}}.$ (4)
Using these two inequalities gives
$\displaystyle\delta_{ij}\frac{\mu_{1}}{\lambda}\left(\left(1+\frac{1}{\theta}\right)r_{j}+\sum_{j^{\prime}\in H_{C}}\frac{p_{j^{\prime}}}{\hat{s}_{ij^{\prime}}}+\sum_{j^{\prime}\in H_{U}}\frac{p_{j^{\prime}}}{\hat{s}_{ij^{\prime}}}\right)+\frac{\mu_{1}}{\lambda}\frac{s_{ij}}{\hat{s}_{ij}}\sum_{j^{\prime}\in L_{C}}w_{j^{\prime}}+\frac{\mu_{1}}{\lambda}\frac{s_{ij}}{\hat{s}_{ij}}\sum_{j^{\prime}\in L_{U}}w_{j^{\prime}}$
$\displaystyle=\delta_{ij}\frac{\mu_{1}}{\lambda}\left(\left(1+\frac{1}{\theta}\right)r_{j}+\sum_{j^{\prime}\in H_{C}}\frac{p_{j^{\prime}}}{\hat{s}_{ij^{\prime}}}+\frac{s_{ij}}{\delta_{ij}\hat{s}_{ij}}\sum_{j^{\prime}\in L_{C}}w_{j^{\prime}}\right)+\delta_{ij}\frac{\mu_{1}}{\lambda}\sum_{j^{\prime}\in H_{U}}\frac{p_{j^{\prime}}}{\hat{s}_{ij^{\prime}}}+\frac{\mu_{1}}{\lambda}\frac{s_{ij}}{\hat{s}_{ij}}\sum_{j^{\prime}\in L_{U}}w_{j^{\prime}}$
$\displaystyle\leq\delta_{ij}\frac{\mu_{1}}{\lambda}\left(\left(1+\frac{1}{\theta}\right)r_{j}+\sum_{j^{\prime}\in H_{C}}\frac{p_{j^{\prime}}}{\hat{s}_{ij^{\prime}}}+\sum_{j^{\prime}\in L_{C}}\frac{p_{j^{\prime}}}{\hat{s}_{ij^{\prime}}}\right)+\frac{\mu_{1}}{\lambda}\frac{s_{ij}}{\hat{s}_{ij}}\sum_{j^{\prime}\in H_{U}}w_{j^{\prime}}+\frac{\mu_{1}}{\lambda}\frac{s_{ij}}{\hat{s}_{ij}}\sum_{j^{\prime}\in L_{U}}w_{j^{\prime}}$
$\displaystyle\leq\delta_{ij}\frac{\mu_{1}}{\lambda}\left(\frac{r_{j}}{\theta}+\mu_{2}\left(r_{j}+\sum_{j^{\prime}\in M_{i}(j):\kappa\cdot t\geq C_{j^{\prime}}}\frac{p_{j^{\prime}}}{s_{ij^{\prime}}}\right)\right)+\frac{\mu_{1}}{\lambda}\mu_{2}\sum_{j^{\prime}\in M_{i}(j):\kappa\cdot t<C_{j^{\prime}}}w_{j^{\prime}}$
$\displaystyle\leq\delta_{ij}\frac{\mu_{1}}{\lambda}\left(\frac{t}{\theta}+\mu_{2}\cdot\kappa\cdot t\right)+\frac{\mu}{\lambda}\sum_{j^{\prime}\in U_{i}(\kappa\cdot t)}w_{j^{\prime}}$
$\displaystyle\leq\delta_{ij}t+\frac{\bar{b}_{it}}{\lambda}.$
In the first inequality we use (4) and (3). In order to understand the third
inequality, first recall that $M_{i}(j)$ contains all jobs that are assigned
to machine $i$ but unstarted at time $r_{j}$. Thus, the total processing
duration of these jobs that are completed within time $\kappa\cdot t$ can be
at most $\kappa\cdot t-r_{j}$. The last inequality follows from
$\lambda\geq\mu_{1}(\frac{1}{\theta}+\mu_{2}\cdot\kappa)$ and the definition
of $\bar{b}_{it}$. ∎
###### Proof of Theorem 2.6.
We set $\kappa=\frac{23}{6}\mu$, $\theta=\frac{2}{3}$ and
$\lambda=\frac{16}{3}\mu^{2}$. Then, weak duality, Corollary A.3 and Lemma A.4
imply
$\displaystyle{\textsc{Opt}}\geq\sum_{j}a_{j}-\sum_{i,t}b_{it}=\frac{1}{\lambda}\left(\sum_{j}\bar{a}_{j}-\sum_{i,t}\bar{b}_{it}\right)\geq\left(\frac{1-\mu/\kappa}{\lambda}\right)\cdot{\textsc{Alg}}.$
Since $\kappa>\mu$ and $\lambda>0$, we conclude that
${\textsc{Alg}}\leq\frac{\frac{16}{3}\cdot\mu^{2}}{1-\frac{6}{23}}\cdot{\textsc{Opt}}=\frac{368}{51}\cdot\mu^{2}\cdot{\textsc{Opt}}.$
∎
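The arithmetic behind this choice of constants can be checked mechanically. The sketch below verifies, with exact rational arithmetic, that $\kappa=\frac{23}{6}\mu$, $\theta=\frac{2}{3}$, $\lambda=\frac{16}{3}\mu^{2}$ satisfy both conditions of Lemma A.4 and yield the $\frac{368}{51}\mu^{2}$ ratio; the test values of $\mu_{1},\mu_{2}$ are arbitrary.

```python
from fractions import Fraction

def check_constants(mu1, mu2):
    """For mu = mu1*mu2 >= 1, verify the two conditions of Lemma A.4 for
    kappa = 23/6*mu, theta = 2/3, lambda = 16/3*mu^2, and that the ratio
    lambda / (1 - mu/kappa) equals 368/51*mu^2 as in Theorem 2.6."""
    mu = mu1 * mu2
    kappa = Fraction(23, 6) * mu
    theta = Fraction(2, 3)
    lam = Fraction(16, 3) * mu ** 2
    assert lam >= 2 * mu * (2 + theta)               # lambda >= 2*mu*(2+theta)
    assert lam >= mu1 * (1 / theta + mu2 * kappa)    # lambda >= mu1*(1/theta + mu2*kappa)
    ratio = lam / (1 - mu / kappa)
    assert ratio == Fraction(368, 51) * mu ** 2
    return ratio

check_constants(Fraction(1), Fraction(1))      # accurate predictions: ratio 368/51
check_constants(Fraction(3, 2), Fraction(2))   # distorted predictions
```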
### A.2 Full Analysis of Proportional Fairness with Speed Predictions
This section contains the detailed analysis of PF with speed predictions, and
thus the proof of Theorem 2.7. It is based on the analysis of the speed-aware
PF given in [IKM18].
See 2.7
Fix an instance and PF’s schedule. Let $\kappa\geq 1$ and $0<\lambda<1$ be
constants which we fix later. Recall that $q_{jt}$ denotes the progress of job
$j$ at time $t$. For every $t$, consider the sorted (ascending) list $Z^{t}$
composed of $w_{j}$ copies of $\frac{q_{jt}}{p_{j}}$ for every $j\in U_{t}$.
Note that $Z^{t}$ has length $W_{t}$. We define $\zeta_{t}$ as the value at
the index $\lfloor\lambda W_{t}\rfloor$ in $Z^{t}$.
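The threshold $\zeta_{t}$ is a weighted quantile of the progress ratios. A minimal sketch, assuming integer weights and 0-based indexing of $Z^{t}$ for illustration:

```python
def zeta(progress_ratios, weights, lam=0.5):
    """zeta_t: the value at index floor(lam * W_t) of the ascending list
    Z^t, which contains w_j copies of q_jt/p_j for every unfinished job j.
    Integer weights and 0-based indexing assumed for this sketch (0 < lam < 1)."""
    z = sorted(r for r, w in zip(progress_ratios, weights) for _ in range(w))
    return z[int(lam * len(z))]

# Z^t = [0.1, 0.5, 0.5, 0.9], W_t = 4, index floor(0.5*4) = 2  =>  zeta_t = 0.5.
assert zeta([0.1, 0.5, 0.9], [1, 2, 1]) == 0.5
```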
We first state the KKT conditions with multipliers $\\{\eta_{it}\\}_{i}$ and
$\\{\theta_{jt}\\}_{j\in J(t)}$ of the optimal solution $\\{y_{ijt}\\}_{i,j}$
of ($\text{CP}_{t}$) the algorithm uses at time $t$:
$\displaystyle\frac{\hat{s}_{ij}w_{j}}{\sum_{i^{\prime}}\hat{s}_{i^{\prime}j}y_{i^{\prime}jt}}$
$\displaystyle\leq\theta_{jt}+\eta_{it}\quad\forall t,\forall i,\forall j\in
J(t)$ (5) $\displaystyle
y_{ijt}\left(\frac{\hat{s}_{ij}w_{j}}{\sum_{i^{\prime}}\hat{s}_{i^{\prime}j}y_{i^{\prime}jt}}-(\theta_{jt}+\eta_{it})\right)$
$\displaystyle=0\quad\forall t,\forall i,\forall j\in J(t)$ (6)
$\displaystyle\theta_{jt}\left(\sum_{i}y_{ijt}-1\right)$
$\displaystyle=0\quad\forall t,\forall j\in J(t)$ (7)
$\displaystyle\eta_{it}\left(\sum_{j}y_{ijt}-1\right)$
$\displaystyle=0\quad\forall t,\forall i$ (8)
$\displaystyle\theta_{jt},\eta_{it}$ $\displaystyle\geq 0\quad\forall
t,\forall i,\forall j\in J(t)$ (9)
We have the following dual assignment:
  * •
$\bar{a}_{j}=\sum_{t^{\prime}=0}^{C_{j}}\bar{a}_{jt^{\prime}}$, where
$\bar{a}_{jt^{\prime}}=w_{j}\cdot\mathds{1}\left[\frac{q_{jt^{\prime}}}{p_{j}}\leq\zeta_{t^{\prime}}\right]$,
for every job $j$,
  * •
$\bar{b}_{it}=\frac{1}{\kappa}\sum_{t^{\prime}\geq t}\zeta_{t^{\prime}}\eta_{it^{\prime}}$ for every machine $i$ and time $t$, and
  * •
$\bar{c}_{jt}=\frac{1}{\kappa}\sum_{t^{\prime}=t}^{C_{j}}\zeta_{t^{\prime}}\theta_{jt^{\prime}}$ for every job $j$ and time $t\geq r_{j}$.
The following three lemmas will conclude that the dual objective value of this
assignment is close to the algorithm's objective value, and thus prove 2.8.
###### Lemma A.5.
$\sum_{j}\bar{a}_{j}\geq\lambda\cdot{\textsc{Alg}}$
###### Proof.
Consider a time $t$ and the list $Z^{t}$. Observe that $\sum_{j\in
U_{t}}\bar{a}_{jt}$ contains for every job $j$ which satisfies
$\frac{q_{jt}}{p_{j}}\leq\zeta_{t}$ its weight $w_{j}$. By the definitions of
$Z^{t}$ and $\zeta_{t}$, we conclude that this is at least $\lambda W_{t}$,
i.e., $\sum_{j\in U_{t}}\bar{a}_{jt}\geq\lambda W_{t}$. The statement then
follows by summing over all times $t$. ∎
###### Lemma A.6.
At any time $t$, $\sum_{i}\eta_{it}+\sum_{j\in J(t)}\theta_{jt}\leq W_{t}$.
###### Proof.
At any time $t$, it holds that
$\displaystyle\sum_{i}\eta_{it}+\sum_{j\in J(t)}\theta_{jt}$
$\displaystyle=\left(\sum_{i}\eta_{it}\sum_{j\in
J(t)}y_{ijt}\right)+\left(\sum_{j\in J(t)}\theta_{jt}\sum_{i}y_{ijt}\right)$
$\displaystyle=\sum_{i}\sum_{j\in J(t)}y_{ijt}(\eta_{it}+\theta_{jt})$
$\displaystyle=\sum_{i}\sum_{j\in J(t)}y_{ijt}\frac{\hat{s}_{ij}w_{j}}{\sum_{i^{\prime}}\hat{s}_{i^{\prime}j}y_{i^{\prime}jt}}$
$\displaystyle=\sum_{j\in J(t)}\sum_{i}\hat{s}_{ij}y_{ijt}\frac{w_{j}}{\sum_{i^{\prime}}\hat{s}_{i^{\prime}j}y_{i^{\prime}jt}}=\sum_{j\in J(t)}w_{j}\leq W_{t}.$
The first equality is due to (7) and (8), and the third equality due to (6). ∎
###### Lemma A.7.
At any time $t$, $\sum_{i}\bar{b}_{it}+\sum_{j\in J:t\geq
r_{j}}\bar{c}_{jt}\leq\frac{4}{(1-\lambda)\kappa}W_{t}$.
###### Proof.
Fix a time $t$. Observe that for every $t^{\prime}\geq t$ the definitions of
$Z^{t^{\prime}}$ and $\zeta_{t^{\prime}}$ imply
$(1-\lambda)W_{t^{\prime}}\leq\sum_{j\in
U_{t^{\prime}}}w_{j}\cdot\mathds{1}\left[\frac{q_{jt^{\prime}}}{p_{j}}\geq\zeta_{t^{\prime}}\right]$.
Thus,
$\zeta_{t^{\prime}}\cdot(1-\lambda)W_{t^{\prime}}\leq\sum_{j\in
U_{t^{\prime}}}w_{j}\cdot\zeta_{t^{\prime}}\cdot\mathds{1}\left[\frac{q_{jt^{\prime}}}{p_{j}}\geq\zeta_{t^{\prime}}\right]\leq\sum_{j\in
U_{t^{\prime}}}w_{j}\cdot\frac{q_{jt^{\prime}}}{p_{j}}\cdot\mathds{1}\left[\frac{q_{jt^{\prime}}}{p_{j}}\geq\zeta_{t^{\prime}}\right].$
(10)
We define a partition $\\{M_{k}\\}_{k\geq 1}$ of the time interval
$[t,\infty)$ such that the total weight of unfinished jobs at all times during
$M_{k}$ lies in $(\frac{1}{2^{k}}W_{t},\frac{1}{2^{k-1}}W_{t}]$. Fix a
$k\geq 1$. Rearranging (10) and estimating the total weight of unfinished jobs
in a part $M_{k}$ against both its upper and lower bounds yields
$\displaystyle\sum_{t^{\prime}\in M_{k}}\zeta_{t^{\prime}}$
$\displaystyle\leq\sum_{t^{\prime}\in M_{k}}\frac{1}{1-\lambda}\sum_{j\in
U_{t^{\prime}}}\frac{w_{j}}{W_{t^{\prime}}}\cdot\frac{q_{jt^{\prime}}}{p_{j}}\cdot\mathds{1}\left[\frac{q_{jt^{\prime}}}{p_{j}}\geq\zeta_{t^{\prime}}\right]$
$\displaystyle\leq\frac{1}{1-\lambda}\sum_{t^{\prime}\in M_{k}}\sum_{j\in
U_{t^{\prime}}}\frac{w_{j}}{W_{t^{\prime}}}\cdot\frac{q_{jt^{\prime}}}{p_{j}}$
$\displaystyle\leq\frac{2^{k}}{(1-\lambda)W_{t}}\sum_{t^{\prime}\in
M_{k}}\sum_{j\in U_{t^{\prime}}}w_{j}\cdot\frac{q_{jt^{\prime}}}{p_{j}}$
$\displaystyle\leq\frac{2^{k}\cdot W_{t}}{(1-\lambda)W_{t}\cdot
2^{k-1}}=\frac{2}{1-\lambda}.$
The definitions of $\bar{b}_{it}$ and $\bar{c}_{jt}$ and Lemma A.6 imply
$\displaystyle\sum_{i}\bar{b}_{it}+\sum_{j\in J:t\geq r_{j}}\bar{c}_{jt}$
$\displaystyle=\left(\sum_{i}\frac{1}{\kappa}\sum_{t^{\prime}\geq t}\eta_{it^{\prime}}\cdot\zeta_{t^{\prime}}\right)+\left(\sum_{j\in J:t\geq r_{j}}\frac{1}{\kappa}\sum_{t^{\prime}=t}^{C_{j}}\theta_{jt^{\prime}}\cdot\zeta_{t^{\prime}}\right)$
$\displaystyle=\frac{1}{\kappa}\sum_{t^{\prime}\geq
t}\zeta_{t^{\prime}}\left(\sum_{i}\eta_{it^{\prime}}+\sum_{j\in
J(t^{\prime})}\theta_{jt^{\prime}}\right)\leq\frac{1}{\kappa}\sum_{t^{\prime}\geq
t}\zeta_{t^{\prime}}W_{t^{\prime}}.$
By dividing the time after $t$ into the partition $\\{M_{k}\\}_{k\geq 1}$ and
using our bound on $\sum_{t^{\prime}\in M_{k}}\zeta_{t^{\prime}}$, we conclude
that this is at most
$\displaystyle\frac{1}{\kappa}\sum_{k\geq 1}\sum_{t^{\prime}\in
M_{k}}\zeta_{t^{\prime}}W_{t^{\prime}}\leq\frac{1}{\kappa}\sum_{k\geq
1}\frac{W_{t}}{2^{k-1}}\sum_{t^{\prime}\in
M_{k}}\zeta_{t^{\prime}}\leq\frac{2}{\kappa(1-\lambda)}W_{t}\sum_{k\geq
1}\frac{1}{2^{k-1}}\leq\frac{4}{\kappa(1-\lambda)}W_{t}.$
The last inequality uses a bound on the geometric series. ∎
See 2.8
###### Proof.
Follows directly from Lemmas A.5 and A.7. ∎
See 2.9
###### Proof.
First observe that for every $t$ and $j$ it holds that
$\sum_{i}\hat{s}_{ij}y_{ijt}\leq\mu_{1}\sum_{i}s_{ij}y_{ijt}=\mu_{1}\cdot q_{jt}.$ (11)
Fix a job $j$, a machine $i$ and a time $t\geq r_{j}$.
$\displaystyle\frac{\bar{a}_{j}s_{ij}}{p_{j}}-w_{j}\cdot\frac{t\cdot s_{ij}}{p_{j}}$ $\displaystyle\leq s_{ij}\cdot\sum_{t^{\prime}=t}^{C_{j}}\frac{\bar{a}_{jt^{\prime}}}{p_{j}}$
$\displaystyle=s_{ij}\cdot\sum_{t^{\prime}=t}^{C_{j}}\frac{w_{j}}{p_{j}}\cdot\mathds{1}\left[\frac{q_{jt^{\prime}}}{p_{j}}\leq\zeta_{t^{\prime}}\right]$
$\displaystyle=s_{ij}\cdot\sum_{t^{\prime}=t}^{C_{j}}\frac{w_{j}}{\sum_{i^{\prime}}\hat{s}_{i^{\prime}j}y_{i^{\prime}jt^{\prime}}}\cdot\frac{\sum_{i^{\prime}}\hat{s}_{i^{\prime}j}y_{i^{\prime}jt^{\prime}}}{q_{jt^{\prime}}}\cdot\frac{q_{jt^{\prime}}}{p_{j}}\cdot\mathds{1}\left[\frac{q_{jt^{\prime}}}{p_{j}}\leq\zeta_{t^{\prime}}\right]$
$\displaystyle\leq\mu_{1}\cdot\mu_{2}\cdot\sum_{t^{\prime}=t}^{C_{j}}\frac{\hat{s}_{ij}w_{j}}{\sum_{i^{\prime}}\hat{s}_{i^{\prime}j}y_{i^{\prime}jt^{\prime}}}\cdot\frac{q_{jt^{\prime}}}{p_{j}}\cdot\mathds{1}\left[\frac{q_{jt^{\prime}}}{p_{j}}\leq\zeta_{t^{\prime}}\right]$
$\displaystyle\leq\mu\cdot\sum_{t^{\prime}=t}^{C_{j}}\left(\eta_{it^{\prime}}+\theta_{jt^{\prime}}\right)\cdot\frac{q_{jt^{\prime}}}{p_{j}}\cdot\mathds{1}\left[\frac{q_{jt^{\prime}}}{p_{j}}\leq\zeta_{t^{\prime}}\right]$
$\displaystyle\leq\mu\cdot\sum_{t^{\prime}=t}^{C_{j}}\left(\eta_{it^{\prime}}+\theta_{jt^{\prime}}\right)\cdot\zeta_{t^{\prime}}$
$\displaystyle\leq\mu\kappa\cdot\frac{1}{\kappa}\left(\sum_{t^{\prime}\geq t}\eta_{it^{\prime}}\cdot\zeta_{t^{\prime}}\right)+\mu\kappa\cdot\left(\frac{1}{\kappa}\sum_{t^{\prime}=t}^{C_{j}}\theta_{jt^{\prime}}\cdot\zeta_{t^{\prime}}\right)$
$\displaystyle=\mu\kappa\cdot\bar{b}_{it}+\mu\kappa\cdot\bar{c}_{jt}.$
The second inequality uses (11) and the third inequality uses (5). Since
$\alpha=\kappa\mu$, this dual assignment indeed satisfies the constraint of
($\text{DLP}_{\alpha}$). ∎
## Appendix B Details on Round Robin for Speed-Ordered Machines
### B.1 Missing Details for the Analysis for Unrelated Machines
This section contains missing details for the proof of Theorem 3.6, which we first restate:
See 3.6
###### Proposition B.1.
At any time $t$,
$\sum_{i}\beta_{it}\leq\mathcal{O}(\log(\min\\{n,m\\}))\cdot\lvert
U_{t}\rvert$.
###### Proof.
At any time $t$,
$\sum_{i\in I}\beta_{it}=\sum_{i=1}^{m_{t}}\frac{1}{i}\cdot\lvert
J(t)\rvert\leq\lvert
U_{t}\rvert\sum_{i=1}^{m_{t}}\frac{1}{i}\leq\mathcal{O}(\log(\min\\{n,m\\}))\cdot\lvert
U_{t}\rvert,$
where in the last inequality we use that $m_{t}=\min\\{m,\lvert
J(t)\rvert\\}\leq\min\\{m,n\\}$. ∎
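The last inequality can be made explicit via the standard bound on the harmonic number:

```latex
\sum_{i=1}^{m_t}\frac{1}{i}
 \;=\; H_{m_t}
 \;\leq\; 1+\ln m_t
 \;\leq\; 1+\ln\min\{m,n\}
 \;=\; \mathcal{O}(\log(\min\{n,m\})).
```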
###### Proposition B.2.
At any time $t$, $\sum_{j\in J:r_{j}\geq t}\gamma_{jt}\leq\lvert U_{t}\rvert$.
###### Lemma B.3.
$\sum_{j}\bar{a}_{j}\geq\frac{1}{2}\cdot{\textsc{Alg}}$.
###### Proof.
Analogous to the proof of Lemma A.5. ∎
###### Lemma B.4.
At any time $t$, $\sum_{i}\bar{b}_{it}\leq\mathcal{O}(1)\cdot\lvert U_{t}\rvert$.
###### Proof.
Analogous to the proof of Lemma A.7 when using Proposition B.1 and the fact
that $\kappa=\Theta(\log(\min\\{m,n\\}))$. ∎
###### Lemma B.5.
At any time $t$, $\sum_{j\in J:r_{j}\geq t}\bar{c}_{jt}\leq\mathcal{O}(1)\cdot\lvert U_{t}\rvert$.
###### Proof.
Analogous to the proof of Lemma A.7 when using Proposition B.2. ∎
Observe that Lemma B.3, Lemma B.4 and Lemma B.5 imply Lemma 3.7. It remains to prove Theorem 3.6:
###### Proof of Theorem 3.6.
Weak duality, Lemma 3.7 and Lemma 3.8 imply
$\displaystyle\kappa\cdot{\textsc{Opt}}$
$\displaystyle\geq{\textsc{Opt}}_{\kappa}\geq\sum_{j}\bar{a}_{j}-\sum_{i,t}\bar{b}_{it}-\sum_{j,t\geq r_{j}}\bar{c}_{jt}\geq\Omega(1)\cdot{\textsc{Alg}}.$
We conclude the proof by noting that $\kappa=\Theta(\log(\min\\{m,n\\}))$. ∎
### B.2 Full Analysis of Round Robin for Speed-Ordered Related Machines
###### Theorem B.6.
Algorithm 5 has a competitive ratio of at most 216 for minimizing the total
completion time on speed-ordered related machines.
We prove this theorem using a dual-fitting proof based on
($\text{DLP}_{\alpha}$), where $w_{j}=1$ and $s_{i}=s_{ij}$ for every job $j$
and every machine $i$. Fix an instance and the algorithm’s schedule. For every
time $t$ we write $m_{t}=\min\\{m,\lvert J(t)\rvert\\}$. We define for every
machine $i$ and any time $t$
$\beta_{it}=\frac{s_{i}}{\sum_{\ell=1}^{m_{t}}s_{\ell}}\cdot\lvert
J(t)\rvert\cdot\mathds{1}\left[i\leq\lvert J(t)\rvert\right],$
and $\gamma_{jt}=\mathds{1}\left[j\in J(t)\right]$ for every job $j$ and any
time $t$.
Observe the following bounds when summing up these values:
###### Proposition B.7.
At any time $t$, $\sum_{i}\beta_{it}\leq\lvert U_{t}\rvert$.
###### Proposition B.8.
At any time $t$, $\sum_{j\in J(t)}\gamma_{jt}\leq\lvert U_{t}\rvert$.
Let $\kappa\geq 1$ and $0<\lambda<1$ be constants. For every $t$, consider the
sorted (ascending) list $Z^{t}$ composed of values $\frac{q_{jt}}{p_{j}}$ for
every $j\in U_{t}$. We define $\zeta_{t}$ as the value at the index
$\lfloor\lambda\lvert U_{t}\rvert\rfloor$ in $Z^{t}$. Consider the following
duals:
* •
$\bar{a}_{j}=\sum_{t^{\prime}=0}^{C_{j}}\mathds{1}\left[\frac{q_{jt^{\prime}}}{p_{j}}\leq\zeta_{t^{\prime}}\right]$ for every job $j$,
* •
$\bar{b}_{it}=\frac{1}{\kappa}\sum_{t^{\prime}\geq t}\beta_{it^{\prime}}\zeta_{t^{\prime}}$ for every $i$ and $t$, and
* •
$\bar{c}_{jt}=\frac{1}{\kappa}\sum_{t^{\prime}\geq t}\gamma_{jt^{\prime}}\zeta_{t^{\prime}}$ for every $j$ and $t\geq r_{j}$.
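The threshold $\zeta_{t}$ is simply the $\lambda$-quantile of the ratios $q_{jt}/p_{j}$ over the unfinished jobs. A small illustrative sketch (the function name, the dictionary-based inputs, and the 1-based interpretation of the index are our own assumptions):

```python
import math

def zeta(unfinished, q, p, lam=2/3):
    """Threshold zeta_t: the value at index floor(lam * |U_t|) of the
    ascending list Z^t of q_jt / p_j over j in U_t.
    The index is taken as 1-based (an assumption; the text does not
    specify the convention).  `q[j]` gives q_{jt}, `p[j]` gives p_j."""
    z = sorted(q[j] / p[j] for j in unfinished)
    idx = math.floor(lam * len(z))
    return z[max(idx - 1, 0)]
```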
###### Lemma B.9.
$\sum_{j}\bar{a}_{j}\geq\lambda\cdot{\textsc{Alg}}$.
###### Proof.
Analogous to the proof of Lemma A.5. ∎
###### Lemma B.10.
At any time $t$, $\sum_{i}\bar{b}_{it}\leq\frac{4}{(1-\lambda)\kappa}\lvert U_{t}\rvert$.
###### Proof.
Analogous to the proof of Lemma A.7 when using Proposition B.7. ∎
###### Lemma B.11.
At any time $t$, $\sum_{j\in J:r_{j}\geq t}\bar{c}_{jt}\leq\frac{4}{(1-\lambda)\kappa}\lvert U_{t}\rvert$.
###### Proof.
Analogous to the proof of Lemma A.7 when using Proposition B.8. ∎
Lemmas B.9, B.10 and B.11 yield the following bound relating Alg to the objective value of the crafted duals.
###### Lemma B.12.
$(\lambda-\frac{8}{(1-\lambda)\kappa})\cdot{\textsc{Alg}}\leq\sum_{j}\bar{a}_{j}-\sum_{i,t}\bar{b}_{it}-\sum_{j,t\geq r_{j}}\bar{c}_{jt}.$
We finally prove that the crafted duals are feasible under certain conditions.
###### Lemma B.13.
Assigning $a_{j}=\bar{a}_{j}$, $b_{it}=\bar{b}_{it}$ and $c_{jt}=\bar{c}_{jt}$ is feasible for ($\text{DLP}_{\alpha}$) if $\alpha=\kappa$ and $s_{i}=s_{ij}$ for all machines $i$ and jobs $j$.
###### Proof.
First observe that the dual assignment is non-negative. Let $i\in I,j\in J$
and $t\geq r_{j}$. Since the rates of Algorithm 5 imply
$q_{jt}=\sum_{\ell=1}^{m_{t}}\frac{s_{\ell}}{\lvert J(t)\rvert}$, we have
$\displaystyle\frac{\bar{a}_{j}s_{i}}{p_{j}}-\frac{s_{i}\cdot t}{p_{j}}\leq\sum_{t^{\prime}=t}^{C_{j}}\frac{s_{i}}{p_{j}}\cdot\mathds{1}\left[\frac{q_{jt^{\prime}}}{p_{j}}\leq\zeta_{t^{\prime}}\right]=\sum_{t^{\prime}=t}^{C_{j}}\frac{s_{i}}{q_{jt^{\prime}}}\cdot\frac{q_{jt^{\prime}}}{p_{j}}\cdot\mathds{1}\left[\frac{q_{jt^{\prime}}}{p_{j}}\leq\zeta_{t^{\prime}}\right]\leq\sum_{t^{\prime}=t}^{C_{j}}\frac{s_{i}}{\sum_{\ell=1}^{m_{t^{\prime}}}\frac{s_{\ell}}{\lvert J(t^{\prime})\rvert}}\cdot\zeta_{t^{\prime}}$ (12)
Consider any time $t^{\prime}$ with $t\leq t^{\prime}\leq C_{j}$. If
$i\leq\lvert J(t^{\prime})\rvert$, the definition of $\beta_{it^{\prime}}$
yields
$\frac{s_{i}}{\sum_{\ell=1}^{m_{t^{\prime}}}s_{\ell}}\cdot\lvert
J(t^{\prime})\rvert\cdot\zeta_{t^{\prime}}=\beta_{it^{\prime}}\cdot\zeta_{t^{\prime}}.$
Otherwise, $i>\lvert J(t^{\prime})\rvert$, and the fact that $s_{1}\geq\ldots\geq
s_{m}$ implies
$\sum_{\ell=1}^{m_{t^{\prime}}}s_{\ell}\geq\sum_{\ell=1}^{\lvert
J(t^{\prime})\rvert}s_{\ell}\geq\lvert J(t^{\prime})\rvert\cdot s_{i}$, and
thus
$\frac{s_{i}}{\sum_{\ell=1}^{m_{t^{\prime}}}s_{\ell}}\cdot\lvert
J(t^{\prime})\rvert\cdot\zeta_{t^{\prime}}\leq\frac{\lvert
J(t^{\prime})\rvert}{\lvert
J(t^{\prime})\rvert}\cdot\zeta_{t^{\prime}}=\gamma_{jt^{\prime}}\cdot\zeta_{t^{\prime}},$
because $t^{\prime}\leq C_{j}$. Put together, (12) is at most
$\displaystyle\sum_{t^{\prime}=t}^{C_{j}}\beta_{it^{\prime}}\zeta_{t^{\prime}}+\sum_{t^{\prime}=t}^{C_{j}}\gamma_{jt^{\prime}}\zeta_{t^{\prime}}\leq\kappa(\bar{b}_{it}+\bar{c}_{jt}),$
which verifies the dual constraint. ∎
###### Proof of Theorem B.6.
Weak duality, Lemma B.12 and Lemma B.13 imply
$\displaystyle\kappa\cdot{\textsc{Opt}}\geq{\textsc{Opt}}_{\kappa}\geq\sum_{j}\bar{a}_{j}-\sum_{i,t}\bar{b}_{it}-\sum_{j,t\geq r_{j}}\bar{c}_{jt}\geq\left(\lambda-\frac{8}{(1-\lambda)\kappa}\right)\cdot{\textsc{Alg}}.$
Setting $\kappa=72$ and $\lambda=\frac{2}{3}$ concludes ${\textsc{Alg}}\leq
216\cdot{\textsc{Opt}}$. ∎
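For these constants the factor in the last display is easy to check:

```latex
\lambda-\frac{8}{(1-\lambda)\kappa}
 =\frac{2}{3}-\frac{8}{\tfrac{1}{3}\cdot 72}
 =\frac{2}{3}-\frac{1}{3}
 =\frac{1}{3},
\qquad\text{hence}\quad
{\textsc{Alg}}\leq 3\,{\textsc{Opt}}_{\kappa}\leq 3\kappa\cdot{\textsc{Opt}}=216\cdot{\textsc{Opt}}.
```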
## Appendix C Further Details on Experimental Results
### C.1 Implementation Details
We implemented the schedulers as separate applications running in _userspace_, scheduling jobs via Linux _affinity masks_, which indicate for each process on which core it may be executed. The schedulers compute a schedule every 2 s
based on the currently active jobs and their (predicted) characteristics. The
schedulers are provided with the process IDs of the tasks in the workload,
potentially along with predictions, and only manage these processes via
affinity masks. Other processes may run on any core, but their load is
negligible.
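As a concrete illustration of this control loop, the following sketch pins managed processes using Linux affinity masks via Python's `os.sched_setaffinity`; the function names, the shape of `schedule`, and the loop structure are our own assumptions, not the authors' code:

```python
import os
import time

def apply_schedule(schedule):
    """Pin each managed process to its assigned cores.
    `schedule` maps a process ID to the set of core indices it may run
    on; os.sched_setaffinity is the Linux affinity-mask syscall wrapper."""
    for pid, cores in schedule.items():
        os.sched_setaffinity(pid, cores)

def scheduler_loop(compute_schedule, active_jobs, interval=2.0):
    """Recompute and apply a schedule every `interval` seconds (2 s in
    the setup described above), based on the currently active jobs."""
    while active_jobs():
        apply_schedule(compute_schedule(active_jobs()))
        time.sleep(interval)
```

Unmanaged processes keep their default (full) affinity mask, matching the observation that other processes may run on any core.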
We use the _native_ input set for the _PARSEC-3.0_ jobs. For the _SPLASH-3_ jobs,
we use both the _large_ input set and custom input sizes to study the impact
of different input data (see Figure 1). The _Polybench_ jobs use their
standard hard-coded inputs. We discard jobs that execute for less than 30 s on
a _big_ core to reduce measurement noise, and discard jobs that use more than
512 MB RAM because the _HiKey 970_ board has only 6 GB RAM and Android does
not support swap. Overall, this results in 43 combinations of jobs and input
data, i.e., some jobs are repeated in the workloads.
### C.2 Simulation Experiments
The experiments on real hardware are slow, hence we can only study a limited
set of workload scenarios. We therefore additionally test many different
scenarios in synthetic experiments in simulation. These synthetic experiments
also model an 8-core processor. We create 20 random workloads with 100
synthetic jobs, whose arrival times are drawn from a Poisson distribution and
with random characteristics: 4 _LITTLE_ cores with speed 1, 4 _big_ cores with
job-dependent speeds from $\mathcal{U}(2,6)$, and
$p_{j}\sim\mathcal{U}(60,600)$. Speed predictions are the same as in the hardware experiments, i.e., $\hat{s}_{ij}=s_{ij}\cdot y_{ij}$.
Figure 4: Synthetic experiments. The experiments are each repeated 20 times
with different random workloads.
Figure 4 shows the results of the synthetic experiments, including the
fractional schedulers Greedy WSPT and PF. Unlike the real experiments, we are
not restricted to a single workload and instead run 20 different workloads and
plot the average results. Inaccurate speed predictions in Greedy WSPT result in large idle times, greatly deteriorating its performance. PF performs similarly to or worse than Maximum Density, depending on the system load. The other algorithms perform similarly to the experiments on the real platform, confirming that the results do not depend on a specific workload.
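The synthetic workload generation described above can be sketched as follows; the Poisson arrival rate (here a mean inter-arrival time of 30 time units) and drawing one speed per _big_ core are assumptions where the text leaves details open:

```python
import random

def make_workload(n_jobs=100, little=4, big=4, seed=0):
    """Generate one random workload: Poisson job arrivals (exponential
    inter-arrival times), `little` LITTLE cores of speed 1, `big` big
    cores with job-dependent speeds from U(2, 6), and processing
    volumes p_j ~ U(60, 600)."""
    rng = random.Random(seed)
    jobs, t = [], 0.0
    for j in range(n_jobs):
        t += rng.expovariate(1 / 30)  # mean inter-arrival time: assumed
        speeds = [1.0] * little + [rng.uniform(2, 6) for _ in range(big)]
        jobs.append({"id": j, "release": t,
                     "p": rng.uniform(60, 600), "speeds": speeds})
    return jobs
```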
Monte Carlo Optimization for Solving
Multilevel Stackelberg Games
Pravesh Koirala, Forrest Laine

Vanderbilt University
Stackelberg games originate where there are market leaders and followers, and the actions of leaders influence the behavior of the followers. Mathematical modelling of such games results in what's called a Bilevel Optimization problem. There is an entire area of research dedicated to analyzing and solving Bilevel Optimization problems which are often complex, and finding solutions for such problems is known to be NP-Hard. A generalization of Stackelberg games is a Multilevel Stackelberg game where we may have nested leaders and followers, such that a follower is, in turn, a leader for all lower-level players. These problems are much more difficult to solve, and existing solution approaches typically require extensive cooperation between the players (which generally can't be assumed) or make restrictive assumptions about the structure of the problem. In this paper, we present a stochastic algorithm to approximate the local equilibrium solutions for these Multilevel games. We then construct a few examples of such Multilevel problems, including: a) a nested toll-setting problem; and b) an adversarial initial condition determination problem for Robust Trajectory Optimization. We test our algorithm on our constructed problems as well as some trilevel problems from the literature, and show that it is able to approximate the optimum solutions for these problems within a reasonable error margin. We also provide an asymptotic proof for the convergence of the algorithm and empirically analyze its accuracy and convergence speed for different parameters. Lastly, we compare it with existing solution strategies from the literature and demonstrate that it outperforms them.
Stackelberg games Multilevel Optimization Monte-Carlo algorithm Trajectory optimization Adversarial optimization
§ INTRODUCTION
Stackelberg equilibria are well-known and extensively studied economic phenomena. In their most rudimentary form, they occur when there is a market leader whose decision influences one or many market followers. These leaders and followers are constrained in their own way and are assumed to be rational players who seek to minimize their costs (or maximize their profits) while satisfying their constraints. Mathematical modeling of these games gives rise to a Bilevel Optimization problem of the following form:
\begin{align*}
\min_{x_1\in \real^{n_1}, x_2 \in \real^{n_2}} ~~&f^1(x_1, x_2) \\
s.t. ~~&g^1(x_1, x_2) \ge 0\\
&x_2 \in \arg\min_{x_2 \in \real^{n_2}} ~~f^2(x_1, x_2) \\
&~~~~~~~~~~~~~~~~~s.t.~~g^2(x_1, x_2)\ge 0
\end{align*}
where the upper-level player with the objective $f^1(x_1, x_2): \real^{n_1+n_2} \mapsto \real$ and constraints $g^1(x_1, x_2): \real^{n_1+n_2}\mapsto \real^{m_1}$ optimizes over $x_1 \in \real^{n_1}$, knowing that its choice of $x_1$ causes the lower-level player to adapt its response variable $x_2\in \real^{n_2}$ to minimize its objective $f^2(x_1, x_2): \real^{n_1+n_2}\mapsto \real$ subject to its constraints $g^2(x_1, x_2):\real^{n_1+n_2}\mapsto\real^{m_2}$. It is generally assumed that the objective functions $f^1$ and $f^2$ and the constraints $g^1$ and $g^2$ are twice differentiable. But even with these assumptions, the solution sets of problems of this form are in general not only non-convex but also non-smooth manifolds. In fact, finding an equilibrium point, or a solution, of these problems is known to be NP-Hard [Ben-Ayed and Blair, 1990, Blair, 1992]. Popular strategies to solve these problems include Vertex Enumeration methods [Bialas and Karwan, 1984], Complementary Pivoting methods [Júdice and Faustino, 1992], Mixed Integer Programming or Branch and Bound methods [Bard and Moore, 1990], and meta-heuristics based methods such as Genetic Algorithms [Oduguwa and Roy, 2002] and Particle Swarm Optimization [Han et al., 2016].
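To make the bilevel structure concrete, consider a toy instance (not from the literature above): the leader minimizes $(x_1-1)^2+x_2^2$ while the follower plays $x_2\in\arg\min_{x_2}(x_2-x_1)^2$, so the follower's rational reaction is $x_2=x_1$ and the leader's anticipated optimum is $x_1=x_2=1/2$. A brute-force nested grid search recovers this:

```python
def best_response(x1, grid):
    # follower minimizes f2(x1, x2) = (x2 - x1)^2 over the grid
    return min(grid, key=lambda x2: (x2 - x1) ** 2)

def solve_bilevel(n=401):
    # uniform grid over [-1, 1]; n chosen so that 0.5 lies on the grid
    grid = [-1 + 2 * i / (n - 1) for i in range(n)]
    # leader minimizes f1(x1, x2) = (x1 - 1)^2 + x2^2, anticipating the
    # follower's best response to each candidate x1
    x1 = min(grid,
             key=lambda x: (x - 1) ** 2 + best_response(x, grid) ** 2)
    return x1, best_response(x1, grid)
```

This exhaustive anticipation of the follower is exactly what makes bilevel problems expensive; the cost grows exponentially with the number of levels.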
The problem discussed above is called a Bilevel problem because it has two levels of optimizers (alternatively referred to as decision makers or players in this text) with their own sets of decision variables and constraints. A natural extension of such a leader-follower game is, then, a Multilevel Stackelberg game that can be modeled as:
\begin{align*}
&Level_1& &~~~~~~~~~~~\hdots\\
&& &~~~~~~~~~~~~~\vdots\\
&Level_l & \min_{x_l, x_{l+1}..., x_L} ~~&f^l(X) \\
&& s.t. ~~&g^l(X) \ge 0\\
&Level_{l+1} & &x_{l+1},...,x_L \in \arg\min_{x_{l+1},...,x_L} ~~f^{(l+1)}(X) \\
&& &~~~~~~~~~~~~~~~~~~~~s.t.~~g^{(l+1)}(X)\ge 0\\
&& &~~~~~~~~~~~~~\vdots\\
&Level_{L}& &~~~~~~~~~~~\hdots\\
\end{align*}
where $l \in \{1, 2, ..., L\}$ indicates the player level and $x_l \in \real^{n_l}$ is the variable that player $l$ controls. Similarly, $X \in \real^{n}$ (where $n=n_1+n_2+...+n_L$) is the concatenation of all $x_l$'s and, therefore, spans the entire search space of the problem. The objectives and constraints for each player are defined as $f^l: \real^{n}\mapsto\real$ and $g^l:\real^{n}\mapsto\real^{m_l}$. It is often assumed that no two players share any degrees of freedom (or decision variables), but we make no such assumption in this work. To be precise, any player at level $l$ is a Stackelberg follower of all preceding players at levels $1, 2, ...,~l-1$ and is simultaneously a Stackelberg leader for all players at levels $l+1, ...,~L$. As before, each player cares only about its own objective and has its own constraints. To define the solution of the problem, we start with the concept of a rational reaction set for the final player $L$, $\phi^L(x_1, ... x_{L-1})$, defined as:
\begin{align*}
\phi^L(x_1, ... x_{L-1}) &:= \arg\min_{x_L} f^L(X) \\
&~~~~s.t.~~g^L(X) \ge 0
\end{align*}
Then, the rational reaction set for any player $l$, i.e. $\phi^l(x_1...x_{l-1})$ can be recursively defined as:
\begin{align*}
\phi^l(x_1...x_{l-1}) &:= \arg\min_{x_l, x_{l+1},~...~x_L} f^l(X) \\
&~~~~~~~~~s.t.~~g^l(X) \ge 0 \\
&~~~~~~~~~~(x_{l+1}...x_L) \in \phi^{l+1}(x_1...x_l)
\end{align*}
The solution to the entire problem is then:
\begin{align*}
\phi^1 &:= \arg\min_{x_1,~...~x_L} f^1(X) \\
&~~~~s.t.~~g^1(X) \ge 0 \\
&~~~~~~~~~~(x_{2}...x_L) \in \phi^2(x_1)
\end{align*}
It must be noted that in general, $\phi^l$ may not be a singleton, and therefore, there can be multiple local solutions for the problem.
These problems are not new and have been researched over the years in domains such as Economics, Optimal Control, Operations Research, and Decision Programming, for example, to model multi-stakeholder fund allocation, supply chain networks, inventory management, and power system security [Han et al., 2015, Yao et al., 2007, Fard and Hajiaghaei-Keshteli, 2018, Cassidy et al., 1971]. There are further generalizations of multilevel problems that include multiple players at each level (sometimes called Multilevel Decentralized problems) who have equal deciding power amongst themselves but are, as before, followers of the players above them and leaders of the players below them. In this work, we restrict ourselves to multilevel optimization problems with only a single decision maker at each level and introduce a Monte Carlo sampling based method to find solutions of such multilevel optimization problems. We also model a robust trajectory optimization problem and a generalized version of the toll-setting problem as multilevel problems and use our algorithm to find solutions for them. In summary, the main contributions of this paper are:
* A simple yet powerful Monte Carlo method to solve multilevel problems.
* Modeling adversarial initial condition determination and nested toll-setting problem as multilevel optimization problems and obtaining their solutions via our proposed algorithm.
The remainder of this paper is structured as follows: In section <ref>, we explore some of the works related to such multilevel optimization problems, including some of the algorithms proposed to solve them. In section <ref>, we propose a stochastic algorithm to solve problems of this kind. Then, in section <ref>, we construct two such multilevel problems: a) a robust optimization problem of finding adversarial initial condition, and b) a nested toll-setting problem, and discuss the nature of their solutions. Then, in section <ref>, we apply this algorithm to solve a few problems from existing literature in addition to the constructed problems from section <ref> and compare the obtained solutions. In section <ref>, we perform empirical comparisons to study the convergence speed and computation time of the proposed algorithm. Finally, in section <ref> we pave the way for further research by outlining some of the possible improvements we envision in this domain and proceed to conclude the work with a brief recap.
§ LITERATURE REVIEW
Stackelberg games and Bilevel Optimizations are well-researched problems, and we refer readers to [Dempe, 2020] in lieu of attempting a survey ourselves. Henceforth, we limit ourselves to works related to trilevel or general multilevel problems.
§.§ Linear Multilevel Problems
[Cassidy et al., 1971] first modeled the flow of resources between the federal, state, and municipal levels as a trilevel problem and provided a recursive dynamic algorithm for solving such problems. [Bard, 1984] later established stationarity conditions for trilevel linear optimization problems, generalized them to p-level problems, and devised a cutting-plane algorithm to solve them. [Ue-Pyng and Bialas, 1986] devised a hybrid method based on the K-th best algorithm and the Parametric Complementary Pivot algorithm to solve trilevel linear problems. [Anandalingam, 1988] devised another method for solving trilevel linear problems by first obtaining and embedding the first-order necessary conditions (FONCs) of the third-level problem into the second-level problem, then obtaining the FONCs of the resulting problem and embedding them into the first-level problem. [Benson, 1989] investigated the specific case where linear multilevel problems are unconstrained and performed a rigorous geometric analysis; their major result was to show that the feasible solution set of such problems is a union of connected polyhedral regions. [White, 1997] modified the method of [Bard, 1984] by changing the first step of the algorithm and claimed a qualitative improvement in the overall results.
§.§ Fuzzy Set / Goal Programming-Based Approaches
[Lai, 1996] considered a fuzzy set based algorithm to model and solve linear bilevel and multilevel problems. [Shih et al., 1996] later improved it to model problems that are not just hierarchical but also decentralized, or both, in nature. [Pramanik and Roy, 2007] modeled the multilevel problem as a fuzzy goal programming problem to solve it. [Zhang et al., 2010] presented a kth-best algorithm to solve linear trilevel programming problems and solved a constructed problem of annual budget allocation in a company with CEO, branch heads, and group supervisors.
§.§ Meta-heuristics based approaches
[Woldemariam and Kassa, 2015] developed a genetic algorithm based method to solve arbitrarily deep multilevel problems for bounded decision variables. [Han et al., 2016] devised a particle swarm optimization based method to solve bilevel problems and used it to solve a trilevel problem as well by embedding the stationarity conditions of the last level problem into the second level problem and converting the entire structure into a bilevel programming problem. At this point, we must also mention [Lu et al., 2016]'s survey of multilevel decision-making problems, which, although a bit dated, is an excellent resource for multilevel problems, algorithms, and applications developed until 2016.
§.§ Applications
[Han et al., 2017] used a Vertex Enumeration method to solve a decentralized supply chain network involving manufacturers, logistic companies, and consumers, modeled as a trilevel decentralized programming problem. [Fard and Hajiaghaei-Keshteli, 2018] modeled a multi-stakeholder supply chain problem as a trilevel problem and used five different meta-heuristic algorithms to solve it by solving each level in a turn-based fashion. They also later modeled a tire closed-loop supply chain network as a trilevel problem and solved it using a similar approach [Fard et al., 2018]. [Tilahun et al., 2012] developed a turn-based optimization strategy similar to [Fard and Hajiaghaei-Keshteli, 2018] to solve general multilevel problems and later generalized it to solve fuzzy multilevel, multi-objective problems with collaboration [Tilahun, 2019]. [Tian et al., 2019] formulated a coordinated cyber-attack scenario as a trilevel problem and used the column-and-constraint generation method to obtain a solution. [Luo et al., 2020] modeled an energy scheduling problem as a trilevel optimization problem and exploited its structure to obtain a closed analytical expression. [Laine et al., 2023] later developed a general algorithm to find solutions to Generalized Feedback Nash Equilibrium problems, which can be modeled as Multilevel Stackelberg problems.
From the literature review, it is clear that multiple methods exist to solve trilevel problems, but only a few of these can be generalized to solve an arbitrarily deep multilevel problem. Even then, we find that each method has its own limitations. For instance, fuzzy set based methods ([Lai, 1996, Shih et al., 1996, Pramanik and Roy, 2007]) implicitly assume some degree of cooperation from lower levels, which is not an assumption that holds for every problem. Similarly, turn-based methods of [Tilahun et al., 2012, Tilahun, 2019, Fard and Hajiaghaei-Keshteli, 2018, Fard et al., 2018] are iterative best response algorithms that are more suited to find solutions to Nash equilibrium problems, and since they do not take into account the rational reactions of lower-level players, they do not converge towards the Stackelberg equilibrium. [Woldemariam and Kassa, 2015]'s genetic algorithm is quite promising, but it only works for bounded variables, which makes it inapplicable for a wide class of problems. Similarly, [Laine et al., 2023]'s algorithm is applicable only under assumptions of strong complementarity.
In light of these facts, we propose an algorithm in section <ref> that solves all of the outlined concerns above. Furthermore, we demonstrate in section <ref> that even though it's simple and intuitive, it outperforms the existing methods of similar nature. Compared to other algorithms, our proposed algorithm has the advantage that:
* It can handle problems with unbounded decision variables and, thus, is applicable to a wider class of problems.
* It can handle problems with non-differentiable objectives, as long as the final objective is differentiable.
* It can handle equality constraints present at the final level, unlike other Meta-heuristic algorithms, which fail to handle any equality constraints at all without any reformulations.
* It does not require any reformulations of the objective functions and, thus, can solve problems that can't be approached via KKT or Value function based reformulations.
* It's an anytime algorithm and can be tuned to obtain arbitrary accuracy at the expense of computation.
§ MONTE CARLO MULTILEVEL OPTIMIZATION (MCMO)
Some of the notations used in the algorithm are as follows:
\begin{align*}
L \in \mathbb{N} &: \text{Number of players}.\\
n_l \in \mathbb{N} &: \text{Number of variables for player }l\\
x_l \in \real^{n_l} &: \text{Variables that $l$-th player controls}\\
C^l &: \text{Feasible region for player }l\\
X \in \real^{n} &: \text{Concatenation of all $x_l$'s}\\
\end{align*}
\begin{align*}
C = \bigcap_{l=1}^{L} C^l &: \text{Feasible region for the problem}\\
x_s &: \text{Initially feasible point s.t. } x_s \in C \\
f^l &: \text{Objective function of player }l ~(\mathbb{R}^{n}\mapsto \mathbb{R}) \\
D^l := \real^{n_l} &: \text{Subspace spanned by the variables of player } l\\
\alpha^l \in \real^+ &: \text{Step size for player } l\\
N^l \in \mathbb{N} &: \text{Number of samples generated for player }l\\
M^l \in \mathbb{N} &: \text{Number of sampling iterations for player } l
\end{align*}
Apart from the notations above, we use some colloquial array notations as follows:
\begin{align*}
[~] &: \text{Empty array}\\
X[a:b] &: \text{Slice of X from index a to b inclusive}\\
X[a:end] &: \text{Slice of X from index a to the length}\\
&~~~ \text{of X inclusive} \\
X~.+b &: \text{A broadcasting summation operator.}\\
\end{align*}
MCMO is a sampling based algorithm. It iteratively refines any given approximate solution by generating samples in its neighborhood. These samples are successively passed down to each lower-level players, who generate samples of their own and pass them down to their lower-level players. This continues until the very last level, where a solver is used to obtain the solution for $x_L$ given the variables $x_1, ... x_{L-1}$ for the corresponding objective and constraints. Once these solutions are obtained, they are returned to upper-level players who evaluate them, select the best among them for their own objectives, and subsequently return them to their upper levels. At level 1, all returned solutions are evaluated, and the best among them is kept as the current estimate of the solution. In this way, MCMO acts as a gradient-free solver and does not require gradient information for any objective or constraint function except the last one. Similarly, since the last level is always solved by using a solver, MCMO can accommodate both equality / inequality constraints for that level so long as it's supported by the solver. MCMO is described in Algorithm <ref>. In essence, it takes an initially feasible point and continuously searches in its neighborhood for a better feasible and optimal point for a specified number of iterations. When the desired number of iterations is reached, MCMO returns a smoothed result from the last $k$ obtained iterates, as outlined in subsection <ref>.
$MCMO~(x_s, k)$:

    $X \gets x_s$
    $P \gets [X]$
    for $i \in [1 \ldots maxiter]$:
        $X \gets OPTIMIZE(X, 1) ~or~ X$  # stick with the same point if no better point is found
        $P \gets P \cup X$
    return $SMOOTHEN(P, k)$
The Optimize function defined in Algorithm <ref> takes as input an initially feasible point $x_s$ and a level $l$ ($=1$ for initial call). For the final player ($l=L$), this function uses a solver, Ipopt [Wächter and Biegler, 2006] in this case, to optimize for the final objective $f^L$ subject to the constraints $C^L$. In all other cases, it generates $N^l+1$ random directions (including the zero direction) in the subspace $D^l$ to obtain new candidate points, which are then recursively passed to the optimizers of the lower-level player, i.e., $l+1$. These passed candidate points are then recursively perturbed by the lower-level players and returned. Out of all the returned values, player $l$ keeps the perturbed candidate that is best for its objective and satisfies its feasibility constraints. This process is repeated by player $l$ $M^{l}$ times, where, at the end of each such sampling iteration, it chooses the point that is the best among all obtained candidates in that iteration and uses it for the next iteration. At the end, it returns the final obtained best candidate to the upper player $l-1$. In the event that no feasible point can be found at any iteration, the last known best candidate point is retained and used for the next iteration. If no feasible point can be found even after $M^l$ iterations, the function returns $null$. In this way, this function can obtain solutions for multilevel problems with arbitrarily many levels.
The algorithm uses three sub-procedures SOLVE_FULL, ARGMIN, and RAND_DIRECTIONS. These sub-procedures are intuitive, and thus, we only explain but do not explicitly outline them here. SOLVE_FULL takes, in order, an initial point, an objective, a set of constraints, and the player level (to determine the degrees of freedom to optimize on) and uses a solver to fully solve it to completion. Similarly, ARGMIN takes, in order, a list of candidate points and the player level $l$, and determines the best point according to the objective function $f^l$, ignoring any null points in the given list. Finally, RAND_DIRECTIONS generates $N^l$ random directions from a uniform hypercube (of length 1) centered at the origin in the subspace $D^l$.
$OPTIMIZE~(X, l)$:

    Require: $f^l, C^l, M^l, \alpha^l, N^l, D^l$
    $X_R \gets null$
    if $l = L$:
        $X_R \gets SOLVE\_FULL(X, f^L, C^L, L)$
        if $X_R \not\in C^L$: return null
        return $X_R$
    for $k \in \{1, ..., M^l\}$:
        # generate candidate points
        $X_C \gets X ~{.+}~ \{\alpha^l \cdot RAND\_DIRECTIONS(N^l, D^l) \cup \textbf{0}\}$
        for $x \in X_C$:
            $x \gets OPTIMIZE(x, l+1)$
            if $x \not\in C^l$: $x \gets null$
            $X_R \gets ARGMIN(X_R, x, l)$
        $X \gets X_R ~or~ X$
    return $X_R$
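A compact, unconstrained Python sketch of the two procedures above; the constraint checks and the Ipopt call are replaced by a user-supplied `solve_full`, and all names, defaults, and the uniform sampling cube are illustrative assumptions rather than the authors' implementation:

```python
import random

def optimize(x, level, problem, rng, N=20, M=5, alpha=0.5):
    """Recursive sampling step of MCMO (simplified, unconstrained).
    `problem` supplies objectives f[level], per-level variable indices
    dims[level], the number of levels L, and solve_full for level L."""
    f, dims, L = problem["f"], problem["dims"], problem["L"]
    if level == L:
        return problem["solve_full"](x)       # solver for the last level
    x_best = None
    for _ in range(M):
        base = x_best if x_best is not None else x
        candidates = [list(base)]             # zero direction included
        for _ in range(N):
            c = list(base)
            for i in dims[level]:             # perturb only own variables
                c[i] += alpha * rng.uniform(-0.5, 0.5)
            candidates.append(c)
        results = [optimize(c, level + 1, problem, rng) for c in candidates]
        x_best = min((r for r in results if r is not None),
                     key=f[level], default=x_best)
    return x_best

def mcmo(x_s, problem, maxiter=30, seed=0):
    rng = random.Random(seed)
    X, history = x_s, [x_s]
    for _ in range(maxiter):
        X = optimize(X, 1, problem, rng) or X
        history.append(X)
    # best-objective smoothing over the trailing iterates
    return min(history[-10:], key=problem["f"][1])
```

On the toy bilevel instance with leader objective $(x_1-1)^2+x_2^2$ and follower reaction $x_2=x_1$ solved in closed form at the last level, this sketch converges near the equilibrium $x_1=x_2=1/2$.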
Ideally, MCMO should be used with a high number of samples and sampling iterations, i.e. $N^l, M^l$ to obtain accurate results, as only by doing so can we solve all lower levels to completion before optimizing any upper-level problem. But this can result in a lot of computational overhead, as outlined in section <ref>. So, in practice, we select a reasonable $N^l, M^l$ for each level while still keeping the problem computationally tractable. But this will result in stochastic estimates (as opposed to true solutions), which is precisely what limits MCMO to an approximate algorithm.
§.§ Initialization
MCMO requires that an initially feasible (not necessarily optimal) point $x_s \in C$ be provided. A viable option to achieve such an initially feasible point is to solve the following problem:
\begin{align*}
x_s = &\arg\min_{X}~~ 0\\
&~~~~~~~~ s.t. ~X \in C
\end{align*}
However, a heuristic to achieve a reasonably optimal starting point for a non-trivial problem is to take the weighted sum of the objectives; that is, we solve the following optimization problem to obtain such an initially feasible point:
\begin{align*}
x_s = &\arg\min_{X}~~ \sum_{l=1}^{L} w_l f^l(X)\\
&~~~~~~~~~~~ s.t. ~X \in C
\end{align*}
where the weights $w_l$ are chosen as required. This heuristic yields better starting points in cases where the true solution lies close to the Pareto front of the involved objective functions.
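As an illustration, the weighted-sum initialization can be sketched on a toy two-player problem (the objectives $f_1, f_2$, the constraint set $C$, the weights, and the crude grid-search stand-in for a constrained solver are all assumptions of this sketch):

```python
import itertools

# Illustrative two-player objectives and constraint set C
f1 = lambda X: (X[0] - 1.0) ** 2          # leader's objective
f2 = lambda X: (X[1] - 2.0) ** 2          # follower's objective
in_C = lambda X: X[0] + X[1] <= 2.5       # feasible set C
w = (1.0, 1.0)                            # weights chosen as required

# crude stand-in for a constrained solver: minimize the weighted sum on a grid
grid = [i / 10 for i in range(31)]
x_s = min((X for X in itertools.product(grid, grid) if in_C(X)),
          key=lambda X: w[0] * f1(X) + w[1] * f2(X))
# x_s is feasible (x_s[0] + x_s[1] <= 2.5) and close to the Pareto front
```

Any feasible minimizer of the weighted sum serves as $x_s$; optimality for the multilevel problem is not required, only feasibility.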
§.§ Smoothing
Since MCMO is a stochastic algorithm, it can only provide approximate solutions, as the lower-level problems are not solved to completion. This is especially true when the number of samples generated ($N^l$) or the number of sampling iterations ($M^l$) is too low and $\alpha^l$ is high. Therefore, at the end of the algorithm, the last $k<maxiter$ points are used to obtain a more stable approximation of the equilibrium point via a smoothing scheme. The choice of smoothing scheme may depend upon the problem; in this work, we use the following scheme:
Best Objective Smoothing Scheme: $X^*$ is approximated as $X^*=\arg\min_{X \in \{X_1, X_2, ..., X_k\}} f^1 (X)$, where $f^1$ is the objective function of the first player. This scheme is guaranteed to produce a feasible point, since all of $X_1, X_2, ..., X_k$ are feasible, as shown in subsection <ref>.
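The scheme amounts to a one-line selection over the final iterates; in this sketch the first-player objective $f^1$ and the stored points are illustrative:

```python
# Best-objective smoothing: among the last k feasible iterates, keep the one
# minimizing the first player's objective (f1 here is illustrative).
f1 = lambda X: (X[0] - 3.0) ** 2
last_k = [[2.0], [3.2], [2.9], [3.6]]   # last k feasible points from MCMO
X_star = min(last_k, key=f1)            # best-objective smoothed estimate
```

Since every stored iterate is feasible, the smoothed estimate is feasible by construction.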
§.§ Practical Considerations
The performance of the algorithm depends on the number of samples $N^l$ generated per level, the number of sampling iterations $M^l$, the choice of $\alpha^l$, and the number of iterations $maxiter$. In general, more samples and sampling iterations improve the accuracy of the solution, but at the expense of computation cost. Similarly, a large $\alpha^l$ may prevent convergence, whereas a small $\alpha^l$ delays it. An appropriate way of running MCMO is thus to start with low $N^l, M^l$ and high $\alpha^l$, and then fine-tune the result to the desired accuracy with a lower $\alpha^l$, a larger sample size $N^l$, and more sampling iterations $M^l$. Due to the nature of multilevel optimization problems, lower-level players must be provided with greater deciding power than any upper-level player. This is especially true when degrees of freedom are shared between upper- and lower-level players. Consider, for example, the problem
\begin{align*}
&\max_{x}~~x \\
&~~~~ s.t.~~x \in \arg\min_{x}~~x\\
&~~~~~~~~~~~~~~~~ s.t.~~x \in [l, u]
\end{align*}
The solution of this problem is $x = l$. In game terms, whenever the first-level player chooses some $x=x'$, the second-level player chooses $x=l$, overriding any choice of the variable $x$ made by the upper-level player.
Therefore, to achieve true solutions, $\alpha^l, N^l, M^l$ should be increased for each subsequent level. Additionally, the choice of $N^l$ should account for the degrees of freedom: a player controlling two variables must be allowed to sample more directions than one controlling a single decision variable. This ensures that the sampling is fair across levels.
However, for simple problems, it may be desirable to use the same $\alpha$ for every player to keep the parameter space lean. If bounds on a player's variables are known, they can guide the choice of $\alpha$.
§.§ Computation Time
MCMO is a recursive sampling-based algorithm, and thus its computation time increases exponentially with each additional level. The computation time also depends on the parameters $N^l, M^l, maxiter$ and on the nature of the problem itself. While parallelizing the implementation may provide speedups, we do not attempt such efforts in this work and leave them for future improvements. A detailed empirical analysis of computation time can be found in section <ref>.
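The exponential growth follows from the algorithm's structure: each non-final level spawns $M^l \cdot (N^l + 1)$ recursive calls ($N^l$ sampled directions plus the zero direction) per invocation. A back-of-the-envelope count of final-level solver invocations per call of $OPTIMIZE(\cdot, 1)$, under the simplifying assumption of uniform $N$ and $M$ across levels, can be sketched as:

```python
def leaf_solver_calls(L, N, M):
    """Final-level solver invocations per top-level OPTIMIZE call,
    assuming uniform N and M across the L levels (an illustrative model)."""
    calls = 1
    for _ in range(L - 1):
        calls *= M * (N + 1)   # each non-final level fans out M*(N+1) times
    return calls
```

For example, `leaf_solver_calls(6, 6, 1)` gives $7^5 = 16807$ solver calls per iteration, consistent with the sharp growth in runtime observed for six-level problems in section <ref>.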
An implementation of the algorithm can be found on GitHub (https://github.com/VAMPIR-Lab/MCMO).
§.§ Proofs
This section presents proofs for the feasibility and convergence of the MCMO algorithm.
§.§.§ Proof of Feasibility
Any non-null point $X$ returned from a function call of the form $X = OPTIMIZE(\cdot, l)$ is feasible for level $l$, i.e., $X \in C^l$.
For the final level $l=L$, this is easy to see from Algorithm <ref> lines 4–8. If a point is infeasible for that level, the if condition on line 4 causes a null return. Otherwise, a feasible point is returned in line 7. For levels $l\neq L$, a non-null result can only be returned if $X_R$, which is initially null, is set with a non-null value $x$ in line 18. But line 18 can only execute if the feasibility condition of line 15 was satisfied, which means that the returned non-null value $X\in C^l$.
Any non-null point X returned from a function call of the form $X=$ $OPTIMIZE(\cdot, l)$ is always obtained from a lower-level function call of the form $OPTIMIZE(\cdot, l+1)$ for $l < L$.
Since $l \neq L$, following arguments similar to lemma <ref>, it must have been set by line 18. But any such point is clearly obtained in line 14 by a function call of the form $OPTIMIZE(\cdot, l+1)$. Hence the claim holds.
We can now prove the following claim:
Each iteration in MCMO function obtains a feasible point.
From lemma <ref>, we know that any non-null point obtained from a function call of the form $OPTIMIZE(\cdot, 1)$ is obtained from $OPTIMIZE(\cdot, 2)$, $OPTIMIZE(\cdot, 3)$, and so on until $OPTIMIZE(\cdot, L)$. Similarly, we also know from lemma <ref> that any non-null point thus obtained must be feasible for levels 1, 2, ..., $L-1$, and $L$. Therefore, any non-null point obtained from an iteration of the MCMO algorithm is feasible for all levels. Furthermore, if a null point is obtained at any stage, MCMO retains the last non-null point, which is either $x_s$, an initially feasible point, or a non-null point obtained in a previous iteration, which has already been shown to be feasible.
§.§.§ Proof of Convergence
Analytical reasoning about general multilevel problems is decidedly hard, and for stochastic or meta-heuristic algorithms the difficulty only increases. Thus, we present only an asymptotic proof of convergence for a narrow class of problems satisfying the following simplifying assumptions:
* The rational reaction set $\phi^l(x_1, ..., x_{l-1})$ (as defined in section <ref>) for player $l$ is a point-to-point map, i.e., all rational reactions are unique for given upper-level decisions.
* A solution exists for the given problem, and the solver used for the final level can always find solutions when they exist.
In general, assumption 1 may not hold, but it does if the upper-level constraints are restrictive enough, or if the topmost objective is strongly convex and we want to solve an optimistic multilevel optimization problem, i.e., one in which lower levels cooperate with the topmost player when rational reactions are ambiguous. Furthermore, this is a simplification that multiple analytic treatments of this problem [Liu, 1998, Woldemariam and Kassa, 2015] have made, as arguing about the problem in general is intractable.
Under our assumptions, for any multilevel Stackelberg problem, the optimization that player $l$ solves, say $P^l(x_1, ..., x_{l-1})$, condenses to:
\begin{align*}
P^l(x_1, ... x_{l-1}) := \min_{x_l} &f^l(x_1, ..., x_l, \phi^{l+1}(x_1, ..., x_l)) \\
&~~~~s.t.~~g^l(X) \ge 0 \\
\end{align*}
$OPTIMIZE(\cdot~, L)$ solves $P^L(x_1, ... x_{L-1})$.
For the last level, i.e., $l=L$, this function uses a solver to obtain the solution. Since it's assumed that a solution exists and that the solver can find it, this is trivially true.
If $OPTIMIZE(\cdot~, l+1)$ solves $P^{l+1}(x_1, ..., x_l)$, then $OPTIMIZE(\cdot~, l)$ solves $P^l(x_1, ..., x_{l-1})$, given $N^l, M^l \to \infty$.
Since infinitely many samples and sampling iterations are assumed, and a unique rational reaction (say $x_l^*$) is assumed to exist for $P^l$, we claim that the sampling process eventually converges to $x_l^*$. To show this, suppose instead that the sampling does not converge to the optimum $x_l^*$. This can only mean one of the following:
* The algorithm cycles between points $x_l^1, x_l^2, ..., x_l^i$. Defining $v(x_l):= f^l(x_1, ..., x_l, \phi^{l+1}(x_1, ..., x_l))$, this would mean $v(x_l^1) > v(x_l^2) > ... > v(x_l^i) > v(x_l^1)$, which is a contradiction.
* The algorithm gets stuck at some $x_l$ such that no $x_l'$ exists in its neighborhood with $v(x_l')<v(x_l)$ and $x_l'$ satisfying the appropriate constraints. However, such a point is by definition a local optimum for player $l$ and thus, by our assumption, is the same as $x_l^*$, which again results in a contradiction.
MCMO eventually converges upon the unique solution.
Under our framework, the overall problem reduces to $P^1$. From lemmas <ref> and <ref>, we have a proof by induction that MCMO solves $P^1$ when $N^l, M^l \to \infty$ for all $l$, by calling $OPTIMIZE(\cdot~, 1)$.
§ SOME MULTILEVEL PROBLEMS
§.§ Adversarial Initial Condition (AIC) determination problem
We can loosely define a trajectory as a continuous path in some space. In robotics and control, such paths are generally produced from some initial conditions (start point, environment, etc.) by a set of rules or functions, usually called a policy.
This problem concerns finding a worst-case initial condition for a given policy: an initial point from which the trajectory generated by the policy a) comes as close to touching the obstacle as possible, and b) has as high a length cost as possible. Figure <ref> depicts the problem we construct here.
We consider a 2D plane to be our environment. The blue circular region is the feasible region $\chi \subset \real^2$ where any start point $x \in \real^2$ is allowed to reside. A fixed and known policy $\Pi$ then generates a trajectory $\Tau = \Pi(x) = \tau^0, \tau^1, ...~, \tau^i \in \real^2, ~ \tau^0 = x$ up to the finishing line $D \in \real$ using the start point, such that some cost $f(\Tau) \in \real$ (modeled here as the horizontal length of the trajectory, i.e., $f(\Tau) = D-\tau^0_1$) is minimized and certain feasibility conditions are satisfied for each trajectory point, i.e., $g(\tau^i) \ge 0~\forall \tau^i \in \Tau$.
In this example, the condition of feasibility for a trajectory $\Tau$ is that all trajectory points lie outside the obstacle region $\mathcal{O} \subset \real^2$. Modeling $\mathcal{O}$ as a circle centered at $o$ with radius $r$, the feasibility condition for each trajectory point becomes: $g(\tau^i) = ||o-\tau^i||^2 - r^2 \ge 0 ~~ \forall \tau^i \in \Tau$. The problem considered in this work is to find an adversarial initial point $x^a$ such that, for a given policy $\Pi$, the generated trajectory $\Tau^a = \Pi(x^a)$ is as close to infeasibility and sub-optimality as possible. The rationale is that, with such a point, we could iterate on our policy to improve it under even the most adverse initial conditions. We do not attempt policy training in this text and leave it for future work.
The environment for the AIC problem. $\Tau$ is a sampled sinusoidal trajectory.
We can model this problem as a trilevel game as follows:
\begin{align}
\label{eq:trilevel}
&\max_{x,T\in\real}~~f(\Tau) \\
&~~~~ s.t~~x,T\in\arg\min_{x, T}~ T\\
&~~~~~~~~~~~~~~~~~~~ s.t.~~ x \in \chi \\
&~~~~~~~~~~~~~~~~~~T \in \arg\max_{T} ~ T\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~s.t~T \ge 0\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \Tau = \Pi(x)\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~g(\tau^i) \ge T;~~ \forall \tau^i \in \Tau
\end{align}
An interpretation of the trilevel problem is as follows: the first player picks the initial point $x\in \chi$ of the trajectory to maximize the trajectory cost $f(\Tau)$. The second player, meanwhile, wants to bring $T$ close to 0 by manipulating $x$. But $T$ is the minimum of all feasibility scores $g(\tau^i)$, i.e., the score of the point closest to violation. So when $T\to 0$, the trajectory point closest to the obstacle gets even closer, and as a result the trajectory $\Tau$ touches the obstacle $\mathcal{O}$. Here, the first and second players share the same degree of freedom, i.e., the variable $x$. Generally, multilevel games such as these are posed with non-overlapping degrees of freedom; as mentioned previously, we make no such assumption and design a general algorithm that can handle all such scenarios.
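The quantity $T$ is easy to compute directly for a sampled trajectory; the following sketch (with an illustrative obstacle and a hypothetical horizontal trajectory, not an output of any policy above) shows the feasibility-score computation:

```python
# Feasibility margin of a sampled trajectory against a circular obstacle:
# g(tau) = ||o - tau||^2 - r^2, and T is the margin of the closest point.
o, r = (15.0, 13.0), 2.0                      # one obstacle from the experiments
tau = [(float(x), 10.0) for x in range(21)]   # illustrative horizontal path
g = lambda p: (p[0] - o[0]) ** 2 + (p[1] - o[1]) ** 2 - r ** 2
T = min(g(p) for p in tau)                    # trajectory feasible iff T >= 0
# here T = 5.0: the closest sample (15, 10) stays outside the obstacle
```

The third-level player's maximization of $T$ subject to $g(\tau^i) \ge T$ recovers exactly this minimum, and the second-level player then drives it toward 0.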
§.§ Nested Toll-Setting problem
A toll-setting problem is a well-known bilevel optimization problem in which a toll-setter decides the toll for a road segment. To maximize total income, they can neither set the toll too high, or drivers will avoid the segment due to exorbitant fees, nor too low, or income will suffer. We refer readers to [Labbé et al., 1998] for a more detailed treatment of this problem. Here, we instead focus on a generalization of this problem, the Nested Toll-Setting problem, shown in figure <ref>.
The nested toll-setting problem. The numbers in black above a road segment represent the percentage of traffic on that segment. The numbers in red below a road segment represent the cost of taking that segment. Costs are the sum of toll price, congestion, and other extra factors.
We consider two toll stations $T_1$ and $T_2$ established to oversee their respective tolled segments (red for $T_1$, and yellow for $T_2$). Any vehicle arriving at $T_1$ has the option to either take the tolled segment (red) by paying $t_1$ cost per unit traffic or take the non-tolled segment (black) for free. We assume that a fraction $p_1$ of the original fleet takes the red tolled segment. Similarly, any vehicle arriving at $T_2$ has the option to either take the tolled segment (yellow) by paying $t_2$ per unit traffic or take the non-tolled segment. We assume that a fraction $p_2$ of the original fleet takes the yellow tolled segment and a fraction $p_3$ takes the final free segment. It must be clarified that $p_1, p_2,$ and $p_3$ are all fractions of the fleet that first arrives at $T_1$, so that $p_1 + p_2 + p_3 = 1$.
From the perspective of the fleet, the cost of travelling through any segment is the sum of a) the toll on the segment, b) congestion on the segment, and c) additional costs associated with the segment. For the purpose of this problem, we establish the congestion cost for any segment to be $\sigma \cdot p$, where $p$ is the percentage traffic on the segment and $\sigma$ is a constant. This is to say that the congestion cost increases linearly with the traffic on the segment. We further simplify this problem by setting $\sigma=1$, thereby setting the congestion cost at each segment equal to the traffic percentage at that segment. Finally, we assume that none of the road segments have any additional costs except for the final free segment, which has an extra cost $D$. This extra cost could theoretically model road length, road conditions, traffic lights, or a myriad of other factors. For this problem, we allow $D$ to be less than 0, allowing it to model a reward or a subsidy as well. Figure <ref> shows the costs associated with each road segment below the segments in red.
Once the toll has been set for each segment, the fleet divides its traffic between the tolled road and the free road to minimize its total cost across the road segments. If, at each toll station, the fleet makes a greedy decision, i.e., decides whether or not to take the tolled road without considering any future toll stations on the way, then this problem can be written as the following trilevel problem:
\begin{align*}
&\max_{t_1, t_2, p_1, p_2, p_3}~~p_1 \cdot t_1 + p_2 \cdot t_2 \\
&~~~~ s.t.~~t_1, t_2 \ge 0 \\
&~~~~ p_1, p_2, p_3 \in \arg\min_{p_1, p_2, p_3}~~p_1 \cdot (p_1 + t_1) + (p_2 + p_3)^2 \\
&~~~~~~~~~~~~ s.t.~~p_1 \in [0, 1] \\
&~~~~~~~~~~~~ p_2, p_3 \in \arg\min_{p_2, p_3}~~p_2 \cdot (p_2 + t_2) + p_3 \cdot (p_3 + D) \\
&~~~~~~~~~~~~~~~~~~~~ s.t.~~p_2 \in [0, 1] \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~ p_3 \in [0, 1] \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~ p_1 + p_2 + p_3 = 1
\end{align*}
Here, the first level corresponds to the toll-setter who decides on $t_1, t_2$ to maximize their total income. The second level is the fleet's decision at station $T_1$. It chooses the percentage of traffic to balance total congestion costs and toll costs. The third level is the remaining fleet's decision at station $T_2$ for the same.
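The three objectives can be written out directly; the following sketch transcribes the formulation above (the function name is ours) and evaluates it at the known $D=6$ equilibrium listed in the next subsection:

```python
# The three players' objectives in the nested toll-setting problem (sigma = 1),
# transcribed from the trilevel formulation above.
def toll_objectives(t1, t2, p1, p2, p3, D):
    income = p1 * t1 + p2 * t2                   # level 1: toll-setter's income
    cost_T1 = p1 * (p1 + t1) + (p2 + p3) ** 2    # level 2: fleet's cost at T1
    cost_T2 = p2 * (p2 + t2) + p3 * (p3 + D)     # level 3: fleet's cost at T2
    return income, cost_T1, cost_T2

# At the known D = 6 equilibrium (t1, t2, p1, p2, p3) = (2, 4, 0, 1, 0),
# the toll-setter's income is 0*2 + 1*4 = 4.
income, cost_T1, cost_T2 = toll_objectives(2, 4, 0, 1, 0, 6)
```

Note that only the toll-setter's income is maximized; the two fleet levels minimize their respective congestion-plus-toll costs.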
§.§.§ Nature of the solution
It can be argued with relative ease that for a very high $D$, it is beneficial for the toll-setter to redirect all traffic to $T_2$, whereas for a very low $D$, the toll-setter is better off exacting all tolls at $T_1$ instead. In fact, two equilibrium points $X = (t_1, t_2, p_1, p_2, p_3)$ are known for this problem at particular values of $D$ (see Appendix <ref>):
* At $D = \olsi D = 6, \olsi X = (\ge 2, 4, 0, 1, 0)$
* At $D = \underline D = -1.5, \underline X = (1, \ge 0, 0.25, 0, 0.75)$
§ EXPERIMENTS AND RESULTS
In this section, we solve some existing multilevel problems from the literature, in addition to our constructed problems, i.e., the adversarial initial condition (AIC) problem and the nested toll-setting problem of section <ref>, using the MCMO algorithm of section <ref>. To keep the parameter space restricted and the experiments simple, we run each problem with the same value of $\alpha$ across levels. Furthermore, we set all $M^l=1$ and configure the algorithm solely through $\alpha^l$ and the number of samples $N^l$. All examples were run on a personal computer with an Intel Core i5 8400 processor at 2.8 GHz and 32 GB of DDR4 RAM.
§.§ Solving AIC using MCMO
We now solve the AIC problem for two policies, as described below.
§.§.§ Linear Policy: $\Pi^l$
We define the linear policy $\Pi^l: \real^2 \mapsto \real^2$ as follows:
$$\Pi^l([x_1, x_2]^T) = [x_1+\delta, x_2]^T$$
where $\delta \in \real$ is a step size. Intuitively, this policy takes a point $x^i$ and generates a point $x^{i+1}$ by stepping a distance $\delta$ along the $x_1$ axis while leaving $x_2$ unchanged, producing a horizontal line parallel to the $x_1$ axis.
§.§.§ Non-Linear Policy: $\Pi^n$
We define $\Pi^n : \real^2 \mapsto \real^2$ as follows:
\begin{align*}
\Pi^n([x_1, x_2]^T) = [ &x_1+\delta, \\&x_2 + A \left( \sin( B (x_1 + \delta)) - \sin( B (x_1)) \right)]^T
\end{align*}
where $\delta \in \real$ is a step size, $A \in \real$ is an amplitude parameter, and $B \in \real$ is a frequency parameter. This generates a sinusoidal trajectory advancing along the $x_1$ axis.
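Both policies are simple enough to transcribe directly; the following sketch rolls them out from a start point up to the finish line (the rollout helper is ours; $\delta=1$, $A=0.5$, $B=3$, and $D=20$ match the experimental setup described below):

```python
import math

delta, A, B = 1.0, 0.5, 3.0   # parameters matching the experimental setup

def pi_linear(x):
    """Linear policy: step delta along x1, leaving x2 unchanged."""
    return (x[0] + delta, x[1])

def pi_nonlinear(x):
    """Sinusoidal policy: step delta along x1, shifting x2 along a sinusoid."""
    x1, x2 = x
    return (x1 + delta,
            x2 + A * (math.sin(B * (x1 + delta)) - math.sin(B * x1)))

def rollout(policy, x0, D=20.0):
    """Illustrative rollout helper: apply the policy until the finish line D."""
    traj = [x0]
    while traj[-1][0] < D:
        traj.append(policy(traj[-1]))
    return traj

traj = rollout(pi_linear, (5.0, 5.0))
# the linear policy traces a horizontal line: 16 points from (5, 5) to (20, 5)
```

For the non-linear policy the $x_2$ increments telescope, so the trajectory follows $x_2(x_1) = x_2(0) + A(\sin(B x_1) - \sin(B x_1^0))$, a sinusoid whose phase depends on the start point.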
§.§.§ Setup for AIC
For both policies, we apply MCMO to obtain adversarial points for different placements of obstacle circles of radius $r = 2$. For all experiments, the feasible region is a circle of radius 5 centered at $[5, 5]^T$. The number of trajectory points is fixed at $N^\tau = 20$, and the destination plane is set to $D = 20$. For both policies, the step size $\delta$ is set to 1, and for the non-linear policy $\Pi^n$, $A, B$ are set to $0.5, 3$, respectively. We run MCMO for a maximum of 150 iterations and use the best objective smoothing scheme with 10 final samples. The step parameter $\alpha$ is set to 3. For both policies, $N^1$ was chosen to be $2$ and $N^2$ to be $10$. Choosing $N^2>N^1$ is in accordance with subsection <ref>; in addition, player 2 has more degrees of freedom than player 1. Furthermore, players 1 and 2 share the two degrees of freedom $(x_1, x_2)$, so whatever player 1 chooses is subsequently modified by player 2, leaving player 1 with little influence to begin with. As discussed previously, we set $M^1=M^2=1$. For initialization, we found that the weights $w_1 = 10^5, w_2 = 10^{-5}, w_3 = 1$ (<ref>) gave us feasible starts; in general, suitable weights depend on the problem being solved. The initial points produced for the linear policy are shown in figure <ref>, and those for the non-linear policy in figure <ref>.
Adversarial initial points for linear trajectories for obstacle centers $o=(15, 13)$ (top-left), $o=(12, 9)$ (top-right), $o=(15, 5)$ (bottom-left), and $o=(12, -3)$ (bottom-right). The time taken for the solutions is, in order, 220.8, 269.47, 262.9, and 267.39 seconds; the differences indicate that the final-level solver converged quickly for some instances. Ideal points are as far left as possible and either touch the obstacle or come as close to touching it as possible. Red lines indicate the path of the solution in the feasible region.
Adversarial initial points for non-linear (sinusoidal) trajectories for obstacle centers $o=(15, 13)$ (top-left), $o=(12, 9)$ (top-right), $o=(15, 5)$ (bottom-left), and $o=(12, -3)$ (bottom-right). The time taken for the solutions is, in order, 2272.13, 2624.19, 2408.66, and 2356.23 seconds. Ideal points are as far left as possible and either touch the obstacle or come as close to touching it as possible. Red lines indicate the path of the solution in the feasible region. All obtained solutions are quite close to optimal. The top-left instance may not look optimal, but the phase of the sinusoid and our sampling strategy can run counter to intuition.
§.§ Discussion on Results
As can be seen, in all cases MCMO generates proper adversarial initial conditions for this problem. For the linear policy (figure <ref>), all instances except the bottom-right one achieve optimal results. For the bottom-right instance, the obtained point is not optimal, but the error is $\approx 7.5\%$, which is not unreasonable considering the stochastic nature of the algorithm. Accuracy can be increased to desired bounds by running further iterations of MCMO with more samples and sampling iterations and a lower $\alpha$. The path taken by the solution at each iteration is traced by the red line. Unsurprisingly, for problems whose initial feasible solutions were closer to optimality, the algorithm converged in very few iterations. Where the initial feasible solution was far from optimality, the path appears repetitive and chaotic before eventually converging; note, however, that the plotted path is a projection onto $(x_1, x_2)$ of the true decision space $(x_1, x_2, T)$.
For the non-linear policy (figure <ref>), almost all instances converge to the optimum. While the true trajectory (represented by the sinusoid) does intersect the obstacle, this is expected, because our problem formulation is posed over the discrete samples to begin with, and those behave as expected. It may also appear that the top-right instance is not optimal, as the generated trajectory is not as close to the obstacle as possible. But since the sinusoid's phase depends upon the initial point, and taking into account our sampling strategy, moving the point upward does not, in fact, bring the trajectory any closer to the obstacle.
§.§ Solving Nested Toll-Setting Problem using MCMO
A comparison of the known solutions for the edge cases with the solutions obtained by MCMO is tabulated in Table <ref>. The parameters used for both instances of the problem are $N^1=7, N^2=7, X_s=[0, 0, 1, 0, 0], \alpha=0.15, maxiter=100$. The smoothing scheme used is best objective smoothing with $k=10$. From the table, we can see that MCMO achieves errors (w.r.t. $f_1$) of $0.025\%$ and $1.76\%$ for $D=6$ and $D=-1.5$, taking 119.11 and 129.73 seconds, respectively. The achieved results are quite satisfactory and can be made more accurate by decreasing the step size $\alpha$ and increasing the number of samples and sampling iterations as required.
$D$      $X^*$            $t_1$     $t_2$    $p_1$    $p_2$    $p_3$    $f_1$
$6$      $\olsi X$        $\ge 2$   4        0        1        0        4
         $X_{M}$          2.387     4.032    0.006    0.989    0.005    4.001
$-1.5$   $\underline X$   1         $\ge 0$  0.25     0        0.75     0.25
         $X_{M}$          1.281     0.15     0.199    0        0.801    0.254

A comparison of the solution obtained via MCMO ($X_M$) with the known analytical solution $X^*$ for the Nested Toll-Setting Problem for parameters $D=\olsi D = 6$ and $D = \underline D = -1.5$. The achieved errors are $0.025\%$ and $1.76\%$, respectively.
§.§ Numerical Examples from the Literature
The following problem, derived from [Sinha, 2003], is a trilevel linear problem:
\begin{align*}
\label{eq:sinha}
&\max_{x_1, x_2,x_3,x_4}~~7x_1 + 3x_2 -4x_3+2x_4 \\
&~~~~ s.t.~x_3,x_4\in\arg\max_{x_3, x_4}~ x_2 + 3x_3 + 4x_4\\
&~~~~~~~~~~~s.t.~x_4\in\arg\max_{x_4} ~ 2x_1+x_2+x_3+x_4\\
&~~~~~~~~~~~~~s.t.~x_1+x_2+x_3+x_4\le 5 \\
&~~~~~~~~~~~~~~~~~~x_1+x_2-x_3-x_4\le2 \\
&~~~~~~~~~~~~~~~~~~x_1+x_2+x_3 \ge 1\\
&~~~~~~~~~~~~~~~~~~x_1-x_2+x_3+2x_4\le 4\\
&~~~~~~~~~~~~~~~~~~x_1,x_2,x_3,x_4 \ge 0
\end{align*}
The optimum $f_1^* = 16.25$ for this problem is reported at $(2.25, 0, 0, 0.25)$. MCMO obtains $16.145$ at $(2.205, 0.06, 0, 0.265)$ when run with the following parameters: $N^1=6, N^2=3, M^1=M^2=1, X_s=[0.4, 0.4, 0.4, 0.4], \alpha=1, maxiter=100$. The sample sizes were chosen to reflect the difference in the number of variables per level, while an initial feasible point was found by observation. The smoothing scheme used is best objective smoothing with $k=10$. The relative error in objective value for this example is $< 1\%$. The time taken to obtain the solution is 59.26 seconds.
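The point MCMO reports can be checked directly against the problem's constraints and leader objective; this quick verification sketch is ours:

```python
# Feasibility and objective check for the point MCMO reports on the
# [Sinha, 2003] trilevel example.
x1, x2, x3, x4 = 2.205, 0.06, 0.0, 0.265
f1 = 7 * x1 + 3 * x2 - 4 * x3 + 2 * x4       # leader's objective
tol = 1e-9                                    # tolerance for float round-off
assert abs(f1 - 16.145) < tol                 # matches the stated value
assert x1 + x2 + x3 + x4 <= 5 + tol
assert x1 + x2 - x3 - x4 <= 2 + tol           # this constraint is tight here
assert x1 + x2 + x3 >= 1 - tol
assert x1 - x2 + x3 + 2 * x4 <= 4 + tol
assert min(x1, x2, x3, x4) >= 0
```

Notably, the second constraint is active at both the reported optimum and MCMO's point, which is consistent with the two objective values being so close.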
The second problem is taken from [Tilahun et al., 2012] and is defined as:
\begin{align*}
&\min_{x,y,z}~~-x + 4y \\
&~~~~ s.t. ~x+y \le 1\\
&~~~~ y,z\in\arg\min_{y,z}~ 2y+z\\
&~~~~~~~~~~s.t.~ -2x+y \le -z\\
&~~~~~~~~~~~z\in\arg\min_{z} ~ -z^2+y\\
&~~~~~~~~~~~~~~~~~~s.t.~~z\le x; x\in[0, 0.5]; y\in[0,1]; \\
\end{align*}
The reported optimum $f_1^*=-0.5$ is at $(0.5, 0, 0.0095)$, whereas MCMO obtains $f_1=-0.498$ at $(0.498, 0, 0.498)$ when run with the following parameters: $N^1=5, N^2=5, M^1=M^2=1, X_s=[0, 0, 0], \alpha=0.2, maxiter=100$. The choice of $\alpha$ was guided by the bounds on the decision variables, and the initial feasible solution was obtained by observation. The smoothing scheme used is best objective smoothing with $k=10$.
The obtained minimizer disagrees with the reported one, but the reported minimizer $(0.5, 0, 0.0095)$ can be seen to be incorrect, the actual minimizer being $(0.5, 0, 0.5)$: once $x,y=0.5, 0$ are chosen, the last player can clearly increase $z$ (up to $x$) to further minimize its objective. Moreover, [Woldemariam and Kassa, 2015] agrees with our results on the same problem and reports $f_1=-0.4929$ at $(0.4994, 0.0016, 0.4988)$. For this problem, MCMO achieves a relative error of $< 1\%$ in 20.85 seconds.
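The correction argued here is easy to verify numerically; this small check (ours) evaluates the innermost objective at both candidate values of $z$:

```python
# With x, y = 0.5, 0 fixed, the innermost player minimizes -z^2 + y subject
# to z <= x, so z = 0.5 strictly improves on the reported z = 0.0095, while
# the leader's objective -x + 4y is unaffected by z.
f3 = lambda z, y=0.0: -z ** 2 + y     # innermost player's objective
f1 = lambda x, y: -x + 4 * y          # leader's objective
assert f3(0.5) < f3(0.0095)           # -0.25 < -9.025e-05
assert f1(0.5, 0.0) == -0.5           # leader's value is unchanged
```

So the leader's optimal value $f_1^* = -0.5$ is unaffected, but the rational reaction of the last player is $z = x = 0.5$, not $z = 0.0095$.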
§ COMPARISONS
§.§ Comparisons with Existing Works
We compare the results obtained in subsection <ref> for the Nested Toll-Setting problem with some existing methods from the literature. We chose [Tilahun et al., 2012] and [Woldemariam and Kassa, 2015] as baselines because these methods were also proposed for arbitrarily deep multilevel optimization problems. Since neither method can handle equality constraints, we reformulate the Nested Toll-Setting problem to remove the equality constraint as follows:
\begin{align*}
% \label{eq:newtoll}
&\max_{t_1, t_2,p_1,p_2}~~p_1 \cdot t_1 + p_2 \cdot t_2 \\
&~~~~ s.t.~ t_1, t_2 \in [0,10] \\
&~~~~ p_1,p_2 \in\arg\min_{p_1,p_2}~~p_1 \cdot (p_1 + t_1) + (1-p_1)^2\\
&~~~~~~~~~~~~~~~~~~ s.t. ~p_1 \in [0, 1] \\
&~~~~~~~~~~~~~~~~~~~~~~~~p_2\in\arg\min_{p_2} ~ p_2 \cdot (p_2+t_2) ~+ \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~(1-p_1-p_2) \cdot (1-p_1-p_2 + D)\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~s.t.~p_2 \in [0,1-p_1]\\
\end{align*}
We note that, in general, it may not always be possible to reformulate a problem to remove a particularly tricky equality constraint. For both methods, we generate a very large number of samples per iteration, i.e., $10^6$ over the entire decision space, and run both algorithms for 100 iterations. For $D=-1.5$, the method of [Tilahun et al., 2012] obtains $f_1=2.53\times 10^{-7}$ at $X=(27.65, 36.93, 1.21\times10^{-9}, 5.95\times10^{-9})$, and for $D=6$ it obtains $f_1=2.43\times10^{-6}$, taking 875.2 and 883.41 seconds, respectively. These results have a very high error compared to the theoretical best; the reason is that this algorithm does not truly solve a multilevel Stackelberg problem at all. It is instead an iterative-best-response-type algorithm, which is only appropriate for finding Nash equilibria.
The approach of [Woldemariam and Kassa, 2015] only works for bounded decision variables, so we add two additional constraints, $t_1\in[0,10]$ and $t_2\in[0,10]$. For $D=6$, it obtains $f_1=4.75$ at $X=(9.64, 5.13, 0.066, 0.8)$ in 500.76 seconds, which overestimates the theoretical maximum by a relative error of $18.75\%$; for $D=-1.5$, it obtains $f_1=0.729$ at $X=(9.8, 6.72, 0.066, 0.0113)$ in about 465.02 seconds, a relative error of $191.6\%$. Even though this method works much better than the former, it still tends to overestimate the leader's objective. This is because its update rule only ever replaces the current solution when the new one is better for the leader alone: if a solution with high complementarity error but a better leader objective is found, it is kept regardless of what is found in subsequent iterations.
§.§ Timing Comparison for Arbitrary Levels
To compare the time MCMO requires to solve a given problem against its complexity, we introduce the following arbitrarily deep multilevel problem, parameterized by $w\in(\real^+)^{n}$.
\begin{align*}
% \label{eq:nridgelin}
&\min_{x_1,...,x_n\in\real^n}~~ \norm{\column{x_1 \\ x_2 \\ \vdots \\x_n}-\column{w_1\\ w_2\\ \vdots\\ w_n}}\\
&~~~~~~~~~~~~ s.t.~~x_2, ..., x_n \in \arg\min_{x_2,...,x_n\in\real^{n-1}}~~\norm {\column{x_2 \\ \vdots \\x_n} - \column{w_2\\ \vdots\\ w_n}}\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ s.t.~~x_2 \le x_1\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \vdots\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~x_n \in\arg\min_{x_n\in\real} ~ (x_n-w_n)^2\\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~s.t. ~~ x_n\le x_{n-1}
\end{align*}
While this problem generally degenerates to a single-level problem of the form $\min_x \norm{x-w}$ s.t. $x_i\le x_{i-1}~\forall i$, it is still an ideal test problem because the dimension of the decision variable increases linearly with the level. When solved with parameters $M^i=1, \alpha^i=0.25~\forall i$ for 50 iterations, for different sample sizes per level $N^i$, the obtained timing results are tabulated in Table <ref>, and the corresponding graph is shown in Figure 5. The execution time grows exponentially as the number of levels increases, as expected of a recursive algorithm.
                  Time (s)
Levels    N=3      N=4       N=5       N=6
2         1.33     1.68      1.71      2.33
3         3.84     5.52      7.71      11.86
4         10.03    21.57     39.41     77.194
5         29.84    101.81    433.91    1356.82
6         97.15    1230.33   7463.52   10938.59

Time taken as the number of levels increases, for different sample sizes $N$.
[Figure 5: Problem complexity vs. time for different $N^i$ — execution time for 50 iterations (in seconds) against the number of levels, for $N^i=3, 4, 5, 6$. The curves grow exponentially as the number of levels increases linearly, which is as expected.]
§.§ Accuracy Comparison
In general, we expect the accuracy of MCMO to increase with the number of samples per level $N^i$. For this experiment, we use a five-level version of the problem introduced in the previous subsection with a randomly generated $w = (3, 8, 7, 7, 3)$. We fix $M^i=1, \alpha^i=0.25$, start from the initial guess $x_s=(0,0,0,0,0)$, and plot the per-iteration convergence of the algorithm in Figure 6. As the sample sizes increase, convergence improves, but it is eventually capped by the step size $\alpha^i$. The effect of increasing the step size for the same problem, with sample sizes fixed at $N^i=6$ and $\alpha^i=0.25, 0.5, 1$, is shown in Figure 7. Increasing $\alpha^i$, given an appropriate number of samples, increases the speed of convergence to the optimum. Once the optimum is approached, the convergence plateaus. This demonstrates that MCMO is stable: once it approaches the neighborhood of a stable solution, it remains there (provided enough samples are taken at each level).
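Since this instance degenerates to a single-level problem, the quoted minimizer $X=(6.25, 6.25, 6.25, 6.25, 3)$ can be checked independently as the Euclidean projection of $w$ onto the non-increasing cone $\{x_i \le x_{i-1}\}$, computed by pool-adjacent-violators on the reversed sequence (this verification sketch is ours, not part of MCMO):

```python
def project_nonincreasing(w):
    """Euclidean projection of w onto {x : x[i] <= x[i-1]} via
    pool-adjacent-violators run on the reversed sequence."""
    blocks = []                                   # [sum, count] per block
    for v in reversed(w):                         # fit non-decreasing, reversed
        blocks.append([float(v), 1])
        # merge while the previous block's mean exceeds the current one's
        while (len(blocks) > 1 and
               blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]):
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)                   # expand block means
    return out[::-1]                              # undo the reversal

x = project_nonincreasing([3, 8, 7, 7, 3])
# x == [6.25, 6.25, 6.25, 6.25, 3.0]; the squared deviation from w sums
# to 14.75, matching the quoted leader objective if it is read as the
# squared norm.
```

This agrees with the minimizer quoted in the Figure 7 caption, giving an independent check of the convergence target.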
[Figure 6: Convergence for different $N^i$; x-axis: iteration, y-axis: objective of the first player $f^1$; one curve per $N^i=2,3,4,5,6$.]
Convergence for different sample sizes. After a certain threshold, the step size $\alpha^i$ caps the performance.
[Figure 7: Convergence for different $\alpha^i$; x-axis: iteration, y-axis: objective of the first player $f^1$; one curve per $\alpha^i=0.25, 0.5, 1$, plus the true optimum.]
Convergence for different step sizes for $w=(3,8,7,7,3)$. It can be shown that the minimum for this problem is at $X=(6.25, 6.25, 6.25, 6.25, 3)$ with the leader's objective $f^1=14.75$ (shown in the plot by the cyan horizontal line).
§ CONCLUSION AND FUTURE WORK
Stackelberg games arise in many real-world scenarios, and conversely, many interesting economic, control, and other causal phenomena can be naturally modeled as Stackelberg games. Multilevel Stackelberg games provide a further generalization that expands the range of interactions that can be modeled. However, solving them is non-trivial and can present a major challenge to those who seek to model and solve such problems. In this paper, we introduced two example problems that can naturally be modeled using a multilevel formulation: a) the Adversarial Initial Condition determination problem, where we find a challenging initial condition for any provided policy, and b) the Nested Toll-Setting problem, which is a generalization of the well-known Bilevel Toll-Setting problem. We then presented MCMO, a stochastic algorithm that can solve problems involving an arbitrary number of leaders and followers (i.e., arbitrarily deep multilevel games) up to a desired accuracy, and presented proofs of its feasibility and (under certain assumptions) optimality. We then used this algorithm to solve the multilevel problems we constructed, as well as a few problems from the literature for comparison, achieving satisfactory results in each case.
Future work in this direction includes improving the convergence speed and accuracy of the algorithm. Furthermore, a desirable generalization would be one that handles multiple leaders and multiple followers at every level (the so-called Multilevel Decentralized Problem). This would enable solving a wider variety of problems involving numerous stakeholders with varying levels of power among themselves. Finally, for applications where an exact solution is required, we want to explore methods to obtain one by leveraging the approximate solution provided by MCMO.
§ NESTED TOLL-SETTING PROBLEM
The third level can be reformulated as:
\begin{align}
\min_{p_2} ~&p_2 \cdot (p_2 + t_2) + (1-p_1-p_2) \cdot (1-p_1-p_2+D)\\
\text{s.t.}~&0\le p_2 \le 1-p_1
\end{align}
The unconstrained stationary point of this problem satisfies:
\begin{align}
\nonumber p_2 + p_2 + &t_2 - (1-p_1-p_2)\\&- (1-p_1-p_2+D) = 0\\
&4p_2 + 2p_1 +t_2-D-2=0\\
&p_2 = \frac{D+2-2p_1-t_2}{4}
\end{align}
From the theory of constrained minimization, the response of the third level can then be written as:
\begin{align}
\label{eq:p2response}
p_2(t_1, t_2) = \begin{cases}
0 & \text{if} ~ D+2-2p_1(t_1)\le t_2 \\
1-p_1(t_1) & \text{if} ~D-2+2p_1(t_1)\ge t_2 \\
\frac{2+D-2p_1(t_1)-t_2}{4} & \text{otherwise}
\end{cases}
\end{align}
Similarly, we can obtain the response of the second level as:
\begin{align}
\label{eq:p1response}
p_1(t_1) = \begin{cases}
0 & \text{if} ~ t_1 \ge 2 \\
1 & \text{if} ~ t_1 \le -2\\
\frac{1}{4}(2-t_1) & \text{otherwise}
\end{cases}
\end{align}
From equations <ref> and <ref>, we can define the following parameterized constraint sets:
\begin{align*}
\mathcal{C}_1(D) := \{ & p_1=0 \wedge t_1 \ge 2, \\
& p_1=1 \wedge t_1\le-2, \\
&p_1=\frac{1}{4}(2-t_1) \wedge -2< t_1< 2\}
\end{align*}
\begin{align*}
\mathcal{C}_2(D) := \{ &p_2=0 \wedge D+2-2p_1\le t_2, \\
&p_2=1-p_1 \wedge D-2+2p_1\ge t_2, \\
&p_2=\frac{2+D-2p_1-t_2}{4} \wedge \\
&~~~~~~~D-2+2p_1< t_2 < D+2-2p_1 \}
\end{align*}
The solution of the Nested Toll-Setting problem is then simply:
\begin{align}
\label{finaleq}
\max_{t_1, t_2, p_1, p_2} &p_1 \cdot t_1 + p_2 \cdot t_2 \\
\nonumber &t_1 \ge 0, t_2 \ge 0\\
\nonumber &(t_1,t_2,p_1,p_2) \in \mathcal{C}_1(D)\cap\mathcal{C}_2(D)
\end{align}
Equation <ref> is a standard quadratic programming problem defined over a union of polyhedral regions. It can be solved for each of the polyhedral regions using a standard solver to obtain the optimum value for the problem as follows:
* For $D = 6$, the obtained maximum is $4$ for $p_1=0, t_1=14, p_2=1, t_2=4$. This implies that the toll-setter benefits when no traffic takes the tolled road at station $T_1$ (i.e., $p_1=0$) and all traffic takes the tolled road at station $T_2$.
It can be seen from equation <ref>a that the same objective can be realized for any $t_1\ge2$ (as this makes $p_1=0$).
* For $D = -1.5$, the obtained maximum is $0.25$ for $p_1=0.25, t_1=1, p_2=0, t_2=4.53$. In this case, the toll-setter must obtain all income from station $T_1$, as no traffic takes station $T_2$ due to the incentive on the non-tolled road, i.e., $p_2=0$. As before, from equation <ref>a, for the given values of $p_1$ and $D$, any $t_2\ge0$ is a solution, as it yields $p_2=0$ with the same objective.
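The closed-form responses above can be checked numerically. The following sketch (a brute-force grid search over the toll space, not the standard solver used in the paper) implements the two best-response functions and recovers the reported optimum for $D=6$:

```python
def p1_response(t1):
    # Second-level best response p1(t1).
    if t1 >= 2:
        return 0.0
    if t1 <= -2:
        return 1.0
    return (2.0 - t1) / 4.0

def p2_response(t1, t2, D):
    # Third-level best response p2(t1, t2).
    p1 = p1_response(t1)
    if D + 2.0 - 2.0 * p1 <= t2:
        return 0.0
    if D - 2.0 + 2.0 * p1 >= t2:
        return 1.0 - p1
    return (2.0 + D - 2.0 * p1 - t2) / 4.0

def revenue(t1, t2, D):
    # Toll-setter's objective p1*t1 + p2*t2 under the followers' responses.
    return p1_response(t1) * t1 + p2_response(t1, t2, D) * t2

D = 6.0
grid = [i / 10.0 for i in range(151)]  # t1, t2 in [0, 15], step 0.1
best = max((revenue(t1, t2, D), t1, t2) for t1 in grid for t2 in grid)
# best[0] is 4.0, attained e.g. at any t1 >= 2 (so p1 = 0) with t2 = 4
```

The same search with $D=-1.5$ reproduces the second case, with all revenue collected at station $T_1$.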
# Embracing Uncertainty: Adaptive Vague Preference Policy Learning for Multi-
round Conversational Recommendation
Gangyi Zhang1, Chongming Gao1, Wenqiang Lei2, Xiaojie Guo3, Shijun Li1,
Lingfei Wu3, Hongshen Chen4, Zhuozhi Ding4, Sulong Xu4 and Xiangnan He1
1University of Science and Technology of China, 2 Sichuan University, 3 JD.COM
Silicon Valley Research Center, 4 JD.COM
###### Abstract.
Conversational recommendation systems (CRS) effectively address information
asymmetry by dynamically eliciting user preferences through multi-turn
interactions. Existing CRS widely assumes that users have clear preferences,
i.e., users have a firm belief about the fine-grained preference for one or
multiple target items. Under this assumption, the agent will completely trust
the user feedback and treat the accepted or rejected signals as strong
indicators to filter items and reduce the candidate space, which may lead to
the problem of over-filtering. However, in reality, users’ preferences are
often vague and volatile, with uncertainty about their desires and changing
decisions during interactions.
To address this issue, we introduce a novel scenario called Vague Preference
Multi-round Conversational Recommendation (VPMCR), which considers users’
vague and volatile preferences in CRS. VPMCR employs a soft estimation
mechanism to assign a non-zero confidence score for all candidate items to be
displayed, naturally avoiding the over-filtering problem. In the VPMCR
setting, we introduce a solution called Adaptive Vague Preference Policy
Learning (AVPPL), which consists of two main components: Uncertainty-aware
Soft Estimation (USE) and Uncertainty-aware Policy Learning (UPL). USE
estimates the uncertainty of users’ vague feedback and captures their dynamic
preferences using a choice-based preferences extraction module and a time-
aware decaying strategy. UPL leverages the preference distribution estimated
by USE to guide the conversation and adapt to changes in users’ preferences to
make recommendations or ask for attributes.
Our extensive experiments demonstrate the effectiveness of our method in the
VPMCR scenario, highlighting its potential for practical applications and
improving the overall performance and applicability of CRS in real-world
settings, particularly for users with vague or dynamic preferences.
Conversational Recommendation; Vague Preference; Policy Learning
Figure 1. A realistic user simulation example
## 1\. Introduction
Conversational recommendation systems (CRS) have drawn a lot of research
attention recently. These systems interact with users to elicit preferences,
understand motivations, and address the long-standing information asymmetry
problem (Gao et al., 2021). Despite considerable progress, CRS is far from
mature, and researchers have focused on specific scenarios (Sun and Zhang,
2018; Lei et al., 2020a; Zhang et al., 2022b) to address particular
challenges.
One widely adopted scenario (Lei et al., 2020a; Lei et al., 2020b; Xu et al.,
2021) is Multi-round Conversational Recommendation (MCR), where the system can
ask for attributes or make recommendations multiple times, and the user
accepts or rejects accordingly. However, MCR assumes that users have a single clearly preferred item in mind, which may not be realistic, as users often consider more than one item. To address this, the Multi-Interest
Multi-round Conversational Recommendation (MIMCR) scenario (Zhang et al.,
2022b) was proposed, allowing users to have multiple preferences. In this
setting, a user may accept multiple attribute instances (e.g., red and black)
of an attribute type (e.g., color).
Despite the improvement, MIMCR can still fall short because it assumes that
users have clear preferences in mind during the conversation. This can be
impractical as users’ preferences can be vague or change dynamically over
time, leading to randomness in their answers and potential regret for previous
choices.
In practical applications, users exhibit vague or dynamic preferences, but
MIMCR (or MCR) fails to account for the uncertainty in users’ feedback,
treating it as a hard indicator to filter the candidate item set. This results
in over-filtering, as numerous potential items are removed when the user
selects or does not select corresponding attributes. In Fig. 1 (a), we
illustrate a toy example showing a conversation (tailored for vague settings)
under the MIMCR scenario. The CRS incorrectly interprets the user’s non-
clicking attributes (i.e., “plaid” in the first turn) and removes potential
target items (i.e., “item-1” in the first turn), causing the user’s preference
distribution over items to collapse suddenly as shown in the left side of Fig.
1 (b). This wrong inference will naturally affect the reasoning of the
subsequent conversation, leading to the wrong preference estimation (i.e., in
Fig. 1 (a), the “black” color of “item-1” was not displayed in the third
turn).
To address over-filtering in MIMCR and MCR and maintain diversity and accuracy
in the CRS, we propose a new scenario called Vague Preference Multi-round
Conversational Recommendation (VPMCR). This scenario uses a soft estimation
mechanism to account for users’ vague or dynamic preferences by assigning non-
zero confidence scores to all candidate items, avoiding the rigid filtering
strategy of MIMCR and MCR. Fig. 1 (c) shows an example of the VPMCR, which, in
contrast to MIMCR, captures changes in preference distribution of the entire
item space as shown in the right side of Fig. 1 (b).
In the VPMCR scenario, several challenges need to be addressed, including
estimating the uncertainty of the user’s vague feedback, capturing the user’s
dynamic preference throughout the conversation, and making conversational
decisions that consider the user’s vague or dynamic preferences.
To tackle these challenges, we propose an enhanced solution called Adaptive
Vague Preference Policy Learning (AVPPL), which consists of:
1\. Uncertainty-aware Soft Estimation (USE): USE estimates the uncertainty of
the user’s vague feedback in each turn using a choice-based preference
extraction method. It captures both explicit and implicit preferences
(distinguished based on whether the user explicitly clicks the item),
effectively estimating the uncertainty of users’ vague feedback. To capture
users’ dynamic preferences, USE employs a time-aware preference decay
strategy, which gives more weight to recent preferences while gradually
reducing the influence of historical preferences.
2\. Uncertainty-aware Policy Learning (UPL): Leveraging the preference
distribution estimated by USE, UPL implements a unified policy learning
framework to guide the conversation and adapt to changes in the user’s
preferences to make recommendations or ask for attributes. The soft estimation
scores from USE’s preference distribution are utilized as edge weights to
construct a dynamic heterogeneous graph of the conversation. We also introduce
a preference-guided action pruning strategy to expedite the RL sampling
process. To address the challenges in the VPMCR scenario, particularly
considering the uncertainty of users’ vague feedback, we employ a Deep
Q-Network (DQN) algorithm for UPL.
In summary, our contributions are as follows:
* •
We identify the limitations of existing CRS settings and introduce the VPMCR
scenario, which accounts for users’ vague and volatile preferences in CRS.
* •
We propose the AVPPL solution for the VPMCR setting, utilizing a unified
policy learning framework to make decisions that consider users’ current vague
preferences and account for their fading historical preferences.
* •
Our extensive experiments on four real-world datasets demonstrate the
effectiveness of AVPPL in the VPMCR scenario, highlighting its potential for
practical applications.
## 2\. Related Work
We briefly introduce the related works in conversational recommendation,
reinforcement learning, and graph learning.
### 2.1. Conversational recommendation system
Conversational recommendation systems (CRSs) are a novel approach to recommendation that leverages natural language to effectively elicit dynamic user preferences aligned with users' real needs through multiple rounds of real-time interaction. CRS is considered to be a
cutting-edge discipline that incorporates dialogue systems, recommendation
systems, and interactive systems (Gao et al., 2021). According to the focus on
different functions and settings, existing CRS methods can be roughly divided
into two types: dialogue-based recommendation (Li et al., 2018; Zhou et al.,
2020c; Chen et al., 2019; Zhou et al., 2022; Wu et al., 2019) and multi-round
conversational recommendation (MCR) (Lei et al., 2020b; Deng et al., 2021; Xu
et al., 2021; He et al., 2022; Gao et al., 2022b; Li et al., 2021). In this
work, we focus on the MCR setting.
MCR is considered to be the most realistic setting in CRS. Unlike dialogue-
based recommenders that need to extract information or generate responses
through raw natural language (Wang et al., 2022b), MCR focuses on the core
logic of the interaction strategy, which involves asking questions (Zou et al., 2020; Ren et al., 2022) and making recommendations. The traditional MCR
setting allows users to select only one preferred attribute value at a time,
which restricts users’ expression in the interaction. To overcome this issue,
Zhang et al. (2022b) propose the MIMCR setting, where a user is allowed to
select multiple options for a certain attribute. Though effective, they follow
the recommendation philosophy of MCR and directly filter out items whose attributes the user has not mentioned, which can fail because users may not know precisely what they want. In our proposed VPMCR setting, we
specifically consider users’ vague preferences and adjust the recommendation
mechanism to consider the items with unmentioned attributes, which better
reflect users’ needs.
### 2.2. RL-based Recommendation
Reinforcement Learning (RL) is a type of Machine Learning. It considers how an
agent (e.g., a machine) should automatically make decisions within a specific
context to pursue a long-term goal. The agent learns and adjusts its policy
based on the reward feedback (i.e., reinforcement signals) given by the
environment. Recently, RL has shown its effectiveness in recommendation (Afsar
et al., 2022; Deffayet et al., 2023; Gao et al., 2023). As fitting user
interest is not a bottleneck for now, recommenders care more about users’
long-term satisfaction (Xue et al., 2023; Zhang et al., 2022a; Wang et al.,
2022a). For instance, Montazeralghaem and Allan (2022) use RL to generate the
proper questions that can maximally make the system help users search desired
products. Gao et al. (2022a) integrate causal inference into offline RL to
maximize users’ long-term satisfaction by removing filter bubbles. Sadeghi
Eshkevari et al. (2022) propose an RL-based dispatching solution for ride-
hailing platforms that can conduct robust and efficient on-policy learning and
inference while being adaptable for full-scale deployment. In this work, we
use RL to learn a policy that can automate question-asking and item
recommendation.
### 2.3. Graph-based Recommendation
Graph-based recommender systems have drawn a lot of research attention (Chen
et al., 2022; Liu et al., 2022; Guo et al., 2021, 2021). By arranging the
various entities (e.g., users, items, and attributes) in a heterogeneous
graph, we can leverage lots of properties in modeling the collaborative
signals. In CRS, the knowledge graph is utilized in enriching the system with
additional knowledge (Lei et al., 2020b; Xu et al., 2020; Zhou et al., 2020b;
Moon et al., 2019; Zhou et al., 2020a). For example, to better understand
concepts that a user mentioned, Zhou et al. (2020b) propose to incorporate two
external knowledge graphs (KGs): a word-oriented KG providing relations (e.g.,
synonyms, antonyms, or co-occurrence) between words and an item-oriented KG
carrying structured facts regarding the attributes of items. As the number of nodes grows, the computational overhead becomes too large to satisfy the requirement of real-time interaction. Hence, we propose a pruning strategy to overcome this issue.
## 3\. Problem Definition
Vague Preference Multi-round Conversational Recommendation (VPMCR). In the
VPMCR scenario, we consider a dynamic conversation between a user and a
conversational recommendation system (CRS). The user has a clear preference
space, denoted as $\mathcal{C}_{CI}$ (e.g., ”style” in Fig. 1), and a vague
preference space, denoted as $\mathcal{C}_{VI}$ (e.g., ”color” and ”pattern”
in Fig. 1).
The conversation begins with the user specifying a query attribute $p_{0}$
(e.g., ”T-shirt”), which initializes the candidate item set containing all
relevant items (e.g., all ”T-shirts”) and the candidate attribute set
containing all attributes of those items.
During the conversation, the CRS can either ask questions about attributes or
provide recommendations. When the CRS asks questions, the user responds
accordingly with their behavior depending on whether the attribute type $c$
belongs to their clear or vague preference space. If $c\in\mathcal{C}_{CI}$,
the user _honestly_ accepts or rejects the displayed attributes. However, if
$c\in\mathcal{C}_{VI}$, the user may _randomly_ accept or reject a potentially
preferred attribute. When the CRS provides recommendations, the user can
accept or reject one or more items from the recommended set
$\mathcal{V}_{rec}$.
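This response model can be sketched as follows (a hypothetical simulator helper; the 0.5 click probability for vague attribute types is an illustrative assumption, since the scenario only specifies that such clicks are random):

```python
import random

def user_feedback(attr_type, displayed, preferred, clear_types, rng, p_click=0.5):
    # Simulated VPMCR user response to an attribute question (sketch).
    liked = [a for a in displayed if a in preferred]
    if attr_type in clear_types:
        return liked  # clear preference space: honest accepts/rejects
    # vague preference space: each potentially preferred attribute is
    # clicked only with some probability (illustrative assumption)
    return [a for a in liked if rng.random() < p_click]

rng = random.Random(7)
clicks = user_feedback("color", ["red", "blue"], {"red"}, {"style"}, rng)
# "color" is vague here, so clicks is a random subset of {"red"}
```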
The conversation proceeds through multiple iterations of the CRS
asking/recommending and the user responding, until a successful recommendation
is made or the maximum number of turns is reached. The VPMCR scenario differs
from previous MCR or MIMCR settings in that it does not filter
$\mathcal{V}_{cand}$ based on the user’s clicking or non-clicking attributes.
Instead, it only removes $\mathcal{V}_{rec}$ from $\mathcal{V}_{cand}$ when
the recommendation fails. Additionally, all candidate attributes linked to
candidate items are maintained in $\mathcal{P}_{cand}$.
The main challenges in the VPMCR scenario include estimating the uncertainty
of the user’s vague feedback, capturing the user’s dynamic preference
throughout the conversation, and making conversational decisions that consider
the user’s vague or dynamic preferences.
Figure 2. Adaptive Vague Preference Policy Learning (AVPPL) solution for VPMCR
scenario.
## 4\. METHODOLOGY
To address the challenges in the Vague Preference Multi-round Conversational
Recommendation (VPMCR) scenario, we propose the _Adaptive Vague Preference
Policy Learning (AVPPL)_ solution. AVPPL consists of two main components:
Uncertainty-aware Soft Estimation (USE) and Uncertainty-aware Policy Learning
(UPL). The USE component estimates the uncertainty of users’ vague feedback
and captures their dynamic preferences, while the UPL component leverages the
preference distribution estimated by USE to guide the conversation and adapt
to changes in users’ preferences. By incorporating the VPMCR scenario and the
AVPPL solution, we aim to improve the overall performance and applicability of
conversational recommendation systems in real-world settings, particularly for
users with vague or dynamic preferences.
### 4.1. Uncertainty-aware Soft Estimation
Uncertainty-aware Soft Estimation (USE) aims to estimate the uncertainty of
the user’s vague feedback in each turn by considering both explicit and
implicit preferences. USE focuses on understanding users’ decision-making
processes (Bettman et al., 1998), which reflect the trade-offs they make when
providing non-binary feedback. To capture users’ dynamic preferences
throughout the conversation, USE employs a time-aware preference decay
strategy that combines users’ recent preferences with fading historical
preferences.
In the VPMCR setting, we model the signals of clicking and non-clicking
separately based on the decision-making consciousness of users in choice-based
questions. For each turn, preference implied by clicking and non-clicking
choices is extracted, then the decay mechanism is used to weaken the
preference of historical turns. Finally, in the soft estimation, we derive the
user’s preference distribution toward items and attributes.
#### 4.1.1. Preference Extraction with Choice-based Approach
In each turn of interaction, user preference can be divided into personalized
user preference and choice-based preference. We adopt a common personalization
modeling strategy (Lei et al., 2020a) to represent the static preference of
user $u$ for item $v$ as:
(1) $w_{v\mbox{-}u}=e_{u}^{\top}e_{v},$
where $e_{u}$ and $e_{v}$ denote the embedding vectors of user $u$ and item
$v$, respectively.
To model users’ decision-making processes, USE employs a choice-based
preference extraction method that considers the trade-offs users make when
providing non-binary feedback. This approach captures both _explicit
preferences_ (when users actively select an attribute) and _implicit
preferences_ (when users do not select an attribute but may still have some
preference for it) by estimating the importance of clicking choices and non-
clicking choices separately.
For item $v$, we estimate the importance of clicking choices and non-clicking
choices, respectively. In turn $t$, the formula for capturing the user’s
explicit preference towards clicking choices
$\mathcal{P}_{\text{click}}^{(t)}$ and implicit preference towards non-
clicking choices $\mathcal{P}_{\text{noclick}}^{(t)}$ are shown as follows:
(2)
$\begin{split}w_{v\mbox{-}click}^{(t)}=\frac{1}{\lvert\mathcal{P}_{\text{click}}^{(t)}\rvert}\sum_{p\in\mathcal{P}_{\text{click}}^{(t)}}(e_{v}^{\top}e_{p}-w_{v\mbox{-}avg}^{(t)}),\\ w_{v\mbox{-}noclick}^{(t)}=\frac{1}{\lvert\mathcal{P}_{\text{noclick}}^{(t)}\rvert}\sum_{p\in\mathcal{P}_{\text{noclick}}^{(t)}}(e_{v}^{\top}e_{p}-w_{v\mbox{-}avg}^{(t)}),\end{split}$
where $\lvert\mathcal{P}_{\text{click}}^{(t)}\rvert$ and $\lvert\mathcal{P}_{\text{noclick}}^{(t)}\rvert$ denote the numbers of clicked and non-clicked attributes in turn $t$, respectively.
$w_{v\mbox{-}avg}^{(t)}$ measures the average preference towards all unshown
attribute types and is used to mitigate over-estimation of the system-
displayed choices, which is defined as:
(3)
$w_{v\mbox{-}avg}^{(t)}=\sum_{p\in\mathcal{P}_{\text{noshow}}^{(t)}}e_{v}^{\top}e_{p}\bigg{/}\lvert\mathcal{P}_{\text{noshow}}^{(t)}\rvert,$
where $e_{v}$ and $e_{p}$ represent the embedding vectors of item $v$ and
attribute $p$, respectively, and $\mathcal{P}_{\text{noshow}}^{(t)}$ refers to
the set of all unshown attributes associated with the specified attribute type
in turn $t$.
By considering both the personalized preferences and the choice-based
preference in turn $t$, the users’ preference for item $v$ in turn $t$ can be
calculated as:
(4)
$w_{v}^{(t)}=\sigma(w_{v\mbox{-}u}+\lambda_{1}w_{v\mbox{-}click}^{(t)}+\lambda_{2}w_{v\mbox{-}noclick}^{(t)}),$
where $\sigma$ is the sigmoid function, and $\lambda_{1}$ and $\lambda_{2}$
are information-intensity coefficients weighting the signals carried by the
user’s clicked and non-clicked attributes, respectively.
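As an illustration, the scoring in Eqs. (2)–(4) can be sketched in a few lines of numpy; the function name, the scalar personalized score `w_u`, and the embedding matrices are illustrative stand-ins, not the paper’s implementation:

```python
import numpy as np

def choice_based_preference(e_v, E_click, E_noclick, E_noshow,
                            w_u=0.0, lam1=0.1, lam2=0.01):
    """Sketch of Eqs. (2)-(4): score item embedding e_v against the
    attributes the user clicked, skipped, and was never shown."""
    # Eq. (3): average affinity to the unshown attributes of the asked type
    w_avg = np.mean(E_noshow @ e_v)
    # Eq. (2): explicit (clicked) and implicit (non-clicked) signals,
    # each centered by w_avg to offset the system-display bias
    w_click = np.mean(E_click @ e_v - w_avg)
    w_noclick = np.mean(E_noclick @ e_v - w_avg)
    # Eq. (4): fuse with the personalized score w_u via a sigmoid
    return 1.0 / (1.0 + np.exp(-(w_u + lam1 * w_click + lam2 * w_noclick)))
```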
#### 4.1.2. Time-aware Preference Decay
In dynamic conversational interactions, the user’s global preference should be
viewed as a combination of preferences across all turns. We employ a decay
mechanism to adjust the influence of historical preferences, enabling the
model to focus on the user’s real-time feedback in the current turn while
mitigating the over-emphasis of earlier clicking behavior.
To combine the user’s current preference with historical decay preferences,
the user’s global preference toward the item is estimated as follows:
(5) $w_{v}^{(t)}\leftarrow w_{v}^{(t)}+\gamma\,w_{v}^{(t-1)},$
which can be unfolded as:
(6) $w_{v}^{(t)}=\sum_{i=0}^{t}\gamma^{t-i}\,w_{v}^{(i)},$
where $\gamma$ is a decay factor satisfying $0\leq\gamma\leq 1$. The farther
the interaction history is from the current turn, the less impact it will have
on the current turn. $\gamma$ should be carefully chosen to balance the
influence of historical preferences and the user’s real-time feedback.
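The decay update of Eq. (5) can be sketched as a simple fold over per-turn scores; the helper below is a hypothetical illustration, and the closed form it unrolls to is the $\gamma$-weighted sum over all turns:

```python
def decayed_preference(turn_scores, gamma=0.1):
    """Sketch of Eq. (5) as a running update: fold each turn's fresh
    score into a gamma-decayed accumulation of the history."""
    w = 0.0
    for w_t in turn_scores:
        w = w_t + gamma * w  # current turn dominates; history decays by gamma
    return w
```

Older turns are discounted by one more power of $\gamma$ each turn, so with a small $\gamma$ the current turn’s feedback dominates.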
Finally, for turn $t$, the user’s global preference distribution for items
$f_{u}^{(t)}(v)$ can be calculated by estimating the user’s global preference
$w$ for each item $v$ in the candidate item set $\mathcal{V}_{\text{cand}}$.
When the size of the candidate item set is $n$, the soft estimation
distribution for items is shown as follows:
(7) $f_{u}^{(t)}(v)=\{w_{v_{1}}^{(t)},w_{v_{2}}^{(t)},\ldots,w_{v_{n}}^{(t)}\}.$
Similarly, by replacing items with attributes in the aforementioned equations,
we derive the user’s global preference distribution towards the candidate
attribute set $\mathcal{P}_{\text{cand}}$. When the size of the candidate
attribute set is $m$, the soft estimation for attributes is depicted by the
following distribution:
(8) $f_{u}^{(t)}(p)=\{w_{p_{1}}^{(t)},w_{p_{2}}^{(t)},\ldots,w_{p_{m}}^{(t)}\}.$
### 4.2. Uncertainty-aware Policy Learning (UPL)
The Uncertainty-aware Policy Learning (UPL) module addresses the challenge of
making conversational decisions under users’ vague or dynamic preferences. It
uses the preference distributions estimated by the Uncertainty-aware Soft
Estimation (USE) module to guide the conversation and adapt to preference
changes. By constructing a dynamic heterogeneous graph and employing a
preference-guided action pruning strategy, we streamline the reinforcement
learning (RL) sampling process, and we adopt a Deep Q-Network (DQN)
algorithm, which is effective for learning action policies in dynamic
environments.
#### 4.2.1. Graph-based Conversation Modeling
We represent the current state of the conversation at turn $t$ using a
dynamic undirected graph
$\mathcal{G}_{u}^{(t)}=(\mathcal{N}^{(t)},\mathbf{A}^{(t)})$, a subgraph of
the heterogeneous graph consisting of users, items, and attributes. Unlike
previous work (Deng et al., 2021; Zhang et al., 2022b), the dynamic graph is
constructed based on the preference distribution estimated by the
Uncertainty-aware Soft Estimation (USE) module.
The nodes in the graph, $\mathcal{N}^{(t)}$, are defined as follows:
(9)
$\mathcal{N}^{(t)}=\{u\}\cup\mathcal{P}_{\text{click}}\cup\mathcal{P}_{\text{noclick}}\cup\mathcal{P}_{\text{cand}}^{(t)}\cup\mathcal{V}_{\text{sample}}^{(t)}$
Here, $\mathcal{P}_{\text{click}}$ and $\mathcal{P}_{\text{noclick}}$
represent the user’s clicked and non-clicked attributes accumulated
throughout the conversation. $\mathcal{P}_{\text{cand}}^{(t)}$ and
$\mathcal{V}_{\text{sample}}^{(t)}$ denote the candidate attribute set and
the randomly sampled candidate item set at turn $t$, respectively.
The weighted adjacency matrix, $\mathbf{A}^{(t)}$, is defined as:
(10)
$A_{i,j}^{(t)}=\begin{cases}w_{v}^{(t)},&\text{if }n_{i}=u,\ n_{j}\in\mathcal{V}\\ 1,&\text{if }n_{i}\in\mathcal{V},\ n_{j}\in\mathcal{P}\\ 0,&\text{otherwise}\end{cases}$
The weight $w_{v}^{(t)}$ denotes the user’s estimated preference for item
$v$, calculated via Eq. (6) within the USE module. The weight of each edge
between an item and its associated attributes is set to $1$.
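The adjacency rule of Eq. (10) can be sketched as follows; the node ids and the dictionary-based inputs are illustrative assumptions rather than the paper’s data structures:

```python
import numpy as np

def build_adjacency(nodes, user, item_pref, item_attrs):
    """Sketch of Eq. (10): weighted adjacency for the turn-t graph.
    nodes: ordered node ids; item_pref: {item: w_v^(t)}; item_attrs:
    {item: [attributes]}."""
    idx = {n: i for i, n in enumerate(nodes)}
    A = np.zeros((len(nodes), len(nodes)))
    for v, w in item_pref.items():        # user--item edges carry w_v^(t)
        A[idx[user], idx[v]] = A[idx[v], idx[user]] = w
    for v, attrs in item_attrs.items():   # item--attribute edges are 1
        for p in attrs:
            if p in idx:
                A[idx[v], idx[p]] = A[idx[p], idx[v]] = 1.0
    return A
```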
To address the issue of a large number of candidate items in the VPMCR
setting, we implement a sampling strategy for candidate items
$\mathcal{V}_{\text{sample}}^{(t)}$ by randomly selecting from the candidate
items in each turn $t$. This node sampling strategy is similar to node
dropout in graph learning (Wu et al., 2021) and helps reduce the scale of the
dynamic graph while enhancing training convergence and the robustness of
graph learning.
We employ a Graph Convolutional Network (GCN) (Kipf and Welling, 2016) to
refine all node representations $\mathcal{E}_{\text{node}}$ by capturing the
information of changing interrelationships for the current conversation state
$\mathcal{G}_{u}^{(t)}$:
(11) $\mathcal{E}_{\text{node}}=\text{GCN}(\mathcal{G}_{u}^{(t)}).$
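Eq. (11)’s GCN step can be illustrated with the standard normalized propagation rule of Kipf and Welling (2016); this numpy sketch is a single layer with given weights, not the paper’s trained network:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^{-1/2} (A+I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
```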
Following the design from Deng et al. (Deng et al., 2021), the explicit
clicking history session $\mathcal{P}_{\text{click}}$ is encoded by a
Transformer (Vaswani et al., 2017) to learn the sequence information of the
conversation history $\mathcal{I}_{his}^{(t)}$:
(12)
$\mathcal{I}_{\text{his}}^{(t)}=\text{Transformer}(e_{\text{click}}^{1},e_{\text{click}}^{2},\ldots,e_{\text{click}}^{l}).$
Here, $l=\lvert\mathcal{P}_{\text{click}}\rvert$ denotes the sequence length,
i.e., the number of clicked attributes over the whole conversation. The input
to the Transformer is the embedding sequence corresponding to the sequence of
clicked attributes $\mathcal{P}_{\text{click}}$, where each embedding
$e_{\text{click}}$ is taken from the node representations
$\mathcal{E}_{\text{node}}$.
Finally, the conversation state representation $s_{\text{conv}}^{(t)}$ is
obtained by a mean pooling layer:
(13) $s_{\text{conv}}^{(t)}=\text{MeanPool}(\mathcal{I}_{\text{his}}^{(t)}).$
#### 4.2.2. Preference-guided Action Pruning
In the unified policy learning framework (Deng et al., 2021), the action space
includes all candidate attributes and all candidate items. Such a large action
space in reinforcement learning can negatively impact sampling efficiency. To
address this issue, we propose an effective action-pruning strategy based on
user preferences.
As described in Section 4.1, we can estimate the user’s preference
distribution $f_{u}$: items $v$ or attributes $p$ with higher confidence
values are more likely to be preferred by the user.
To construct the pruning action space $\mathcal{A}_{\text{action}}^{(t)}$, we
first calculate the user’s preference distributions over items and attributes
using Eqs. (7) and (8) in USE. Then, we select the top-N items
$\mathcal{V}_{\text{top}}^{(t)}$ with the
highest confidence and include them in the pruning action space. Additionally,
we select the top-N attributes $\mathcal{P}_{\text{top}}^{(t)}$ with the
highest confidence and add them to the pruning action space. The pruning
action space is defined as:
(14)
$\mathcal{A}_{\text{action}}^{(t)}=\mathcal{V}_{\text{top}}^{(t)}\cup\mathcal{P}_{\text{top}}^{(t)}$
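The pruning in Eq. (14) amounts to a top-N selection over the USE scores; a minimal sketch (the dictionary inputs are illustrative):

```python
def prune_action_space(item_scores, attr_scores, n=10):
    """Sketch of Eq. (14): keep the top-N items and top-N attributes by
    USE confidence scores as the RL action space."""
    top_items = sorted(item_scores, key=item_scores.get, reverse=True)[:n]
    top_attrs = sorted(attr_scores, key=attr_scores.get, reverse=True)[:n]
    return top_items + top_attrs
```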
#### 4.2.3. Deep Q-Network for Policy Learning
Following UNICORN (Deng et al., 2021), we adopt a unified policy learning
framework that systematically integrates the conversation and recommendation
components to solve the decision-making problem in CRS.
We employ a Deep Q-Network (DQN) algorithm to address the challenge of making
conversational decisions that consider users’ vague or dynamic preferences in
CRS. The DQN algorithm has been proven effective in learning action policies
in dynamic environments, such as Markov Decision Processes (MDPs), making it
well-suited for predicting the next decision based on a series of historical
choices.
The Q-value function $Q\left(s_{t},a_{t}\right)$ of a policy $\pi$ is defined
to measure the expectation of the accumulated rewards based on the state $s$
and the action $a$. We adopt the same Dueling DQN and prioritized experience
replay as in UNICORN (Deng et al., 2021) to optimize the Q-function
$Q^{\ast}\left(s_{t},a_{t}\right)$:
(15)
$Q^{*}(s_{t},a_{t})=\max_{\pi}\mathbb{E}[R_{t+1}+\gamma\max_{a}Q^{\pi}(s_{t+1},a)|s_{t},a_{t}]$
where $\pi$ is the policy, $R_{t+1}$ is the reward at turn $t+1$, $\gamma$
here denotes the RL discount factor (distinct from the preference decay
factor in Eq. (5)), and $Q^{\pi}(s_{t+1},a)$ is the estimated action-value
function for the next state and action.
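For a single transition, the optimization target in Eq. (15) reduces to the familiar TD target; a minimal sketch (the `done` flag for terminal turns is an assumption):

```python
import numpy as np

def dqn_target(reward, q_next, gamma=0.999, done=False):
    """TD target for Eq. (15): r_{t+1} + gamma * max_a Q(s_{t+1}, a)."""
    return reward if done else reward + gamma * np.max(q_next)
```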
For policy learning, the input conversation state $s_{\text{conv}}^{(t)}$ is
learned by the graph-based conversation modeling module. The pruning action
space $\mathcal{A}_{\text{action}}^{(t)}$ is determined by employing a
preference-guided action pruning strategy, which expedites the RL sampling
process. The reward $R$ follows the previous MCR setting (Lei et al., 2020b),
and the detailed settings will be described in the experimental section.
## 5\. Experiments
In this section, we evaluate the proposed method in VPMCR. We use the
following research questions (RQs) to guide our experiment.
* •
RQ1. How does our AVPPL method perform in comparison to state-of-the-art CRS
methods in the VPMCR scenario?
* •
RQ2. How do the key components contribute to the overall performance of our
AVPPL method?
* •
RQ3. How do the hyperparameters of our method affect its performance?
* •
RQ4. Can AVPPL effectively make recommendations based on users’ vague
preferences during the conversation?
### 5.1. Dataset Description
Table 1. Statistics of datasets.
Dataset | Yelp | LastFM | Amazon-Book | MovieLens
---|---|---|---|---
#Users | 27,675 | 1,801 | 30,291 | 20,892
#Items | 70,311 | 7,432 | 17,739 | 16,482
#Interactions | 1,368,609 | 76,693 | 478,099 | 454,011
#Attributes | 590 | 8,438 | 988 | 1,498
#Attribute-types | 29 | 34 | 40 | 24
#Entities | 98,576 | 17,671 | 49,018 | 38,872
#Relations | 3 | 4 | 2 | 2
#Triplets | 2,533,827 | 228,217 | 565,068 | 380,016
We introduce four datasets, whose statistics are shown in Table 1.
* •
Yelp and LastFM (Lei et al., 2020a): The Yelp (https://www.yelp.com/dataset/)
and LastFM (https://grouplens.org/datasets/hetrec-2011/) datasets are used
for business and music artist recommendations, respectively. We follow the
multiple attribute question settings, retaining the original attribute
instances in LastFM and Yelp, and extracting the attribute types they depend
on. In Yelp, we utilize the 2-layer taxonomy designed by (Lei et al., 2020a),
resulting in 29 categories in the first layer as attribute types and 590
attributes in the second layer as attribute instances. For LastFM, we follow
(Zhang et al., 2022b), retaining the original 8,438 attributes as attribute
instances and employing clustering to obtain 34 attribute types.
* •
Amazon-Book (Wang et al., 2019): Amazon-Book
(http://jmcauley.ucsd.edu/data/amazon) is a widely used product
recommendation dataset. We retain users and items with at least 10 interaction
records and consider entities (e.g., science fiction) and relations (e.g.,
genre) in the knowledge graph as attribute instances and attribute types,
respectively.
* •
MovieLens: MovieLens is a movie rating dataset. We adopt the MovieLens-20M
(https://grouplens.org/datasets/movielens/) dataset, following
(Zhang et al., 2022b), and retain interactions with ratings greater than 3. We
select entities and relations in the knowledge graph (KG) as attribute
instances and attribute types, respectively.
### 5.2. Experimental Setup
#### 5.2.1. User Simulator in VPMCR
Conversational recommendation systems (CRSs) are interactive and require
training and evaluation through user interactions. However, obtaining data
directly from users in a research lab is impractical, so employing a user
simulator is a common practice (Chandramohan et al., 2011). The user
simulator generates user responses based on the interaction records in the
training and test sets.
In the VPMCR scenario, we adopt a user simulation strategy similar to that in
MIMCR (Zhang et al., 2022b), considering the reasonableness of the multi-
interest setting. For a given observed user-items interaction pair
$(u,\mathcal{V}_{u})$, we simulate a conversation session. Each item $v$ in
$\mathcal{V}_{u}$ is treated as a ground-truth target item, and the union of
attribute types and attributes associated with each item are considered as the
user’s ground-truth intent space $\mathcal{C}_{u}$ and ground-truth attribute
space $\mathcal{P}$, respectively. The conversation session is initialized
when the user specifies an attribute $p_{0}$ common to all items in $\mathcal{V}_{u}$,
and the user’s clear preference space $\mathcal{C}_{CI}$ and user’s vague
preference space $\mathcal{C}_{VI}$ are randomly initialized from the ground-
truth intent space $\mathcal{C}_{u}$.
During the interaction, we use the ground-truth attribute space $\mathcal{P}$
as a criterion for the user simulator’s acceptance or rejection. The detailed
interaction process follows the “system asks or recommends and user responds”
rules outlined in Section 3.
#### 5.2.2. Action Inference
The action inference involves either recommending items or asking an
attribute-related question.
(1) Recommendation: If an item $v$ in the action space has the highest
Q-value, the CRS makes a recommendation, resulting in a new action space
$\mathcal{A}^{(t)}=\mathcal{V}_{\text{top}}^{(t)}$.
(2) Questioning: If an attribute $p$ in the action space has the highest
Q-value, the CRS asks a question. In a multiple-choice setting, a two-level
decision process is employed: first selecting an attribute type, then
presenting several attributes within that type. A sum-based strategy (Zhang et
al., 2022b) is used to determine the attribute type for questioning.
Specifically, Q-values of all attributes within the attribute action space
$\mathcal{P}_{top}^{(t)}$ are summed and allocated to their respective
attribute types. The attribute type with the highest total value is selected
for questioning, and the top $K$ attributes with the highest Q-values within
that type are presented to the user.
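The sum-based two-level questioning strategy can be sketched as follows; the dictionary inputs mapping attributes to Q-values and to their types are illustrative:

```python
def select_question(attr_q_values, attr_type_of, k=2):
    """Sum-based two-level questioning: pick the attribute type whose
    member attributes have the largest summed Q-value, then show its
    top-K attributes."""
    totals = {}
    for p, q in attr_q_values.items():
        totals[attr_type_of[p]] = totals.get(attr_type_of[p], 0.0) + q
    best_type = max(totals, key=totals.get)
    members = [p for p in attr_q_values if attr_type_of[p] == best_type]
    return best_type, sorted(members, key=attr_q_values.get, reverse=True)[:k]
```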
#### 5.2.3. Baselines
We use the following baselines. For fairness, all baselines are compared in
the VPMCR scenario.
* •
Max Entropy. It selects the attribute with the maximum information entropy
and makes a recommendation with probability inversely related to the number
of candidate items.
* •
CRM (Sun and Zhang, 2018). It employs a belief tracker to record user
preferences as conversation state representation vectors and applies them to a
reinforcement learning decision module and factorization machine (FM)
recommendation modules.
* •
EAR (Lei et al., 2020a). This method adopts the three-stage solution framework
to enhance the interaction between the conversation component and the
recommendation component.
* •
SCPR (Lei et al., 2020b). SCPR leverages graph-based path reasoning to prune
useless candidate attributes. It separates attribute selection from
reinforcement learning, which is only used for determining when to ask and
recommend.
* •
UNICORN (Deng et al., 2021). A state-of-the-art method for the MCR scenario
that proposes a unified policy learning framework using dynamic graphs to
model conversation states and employs a preference-based scoring to reduce
reinforcement learning action space.
* •
MCMIPL (Zhang et al., 2022b). It considers the user’s multi-interest space and
extends the MCR scenario to a more realistic MIMCR scenario. This method also
follows the graph-based unified reinforcement learning framework and employs
the multi-interest encoder to learn the conversation state.
#### 5.2.4. Training Details
We divide each dataset into training, validation, and testing sets using a
7:1.5:1.5 ratio. In the user simulator, we set the maximum conversation turn
$T$ to 15 and the number of target item sets $\mathcal{V}_{u}$ for the user to
2. We initialize the user’s vague preference space and clear preference space
using uniform sampling.
In the Uncertainty-aware Soft Estimation (USE) module, we set the information
intensity coefficients $\lambda_{1}$ and $\lambda_{2}$ to 0.1 and 0.01,
respectively, and the decay discount factor to 0.1.
In the Uncertainty-aware Policy Learning (UPL) module, when constructing the
dynamic graph, random sampling is employed to select candidate items when the
available number of candidates exceeds 5000. The graph-based conversation
modeling architecture consists of two GNN layers and one Transformer layer. We
fix the embedding size and hidden size at 64 and 100, respectively. For action
pruning in RL, we set the size of the item space and attribute space to 10
(i.e., $N=10$). For action inference, we set the number of attributes
displayed to the user to 2 (i.e., $K=2$). Following (Deng et al., 2021), we
use TransE (Bordes et al., 2013), implemented through OpenKE (Han et al., 2018), to
pre-train the graph node embeddings. During DQN training, we ensure a fair
comparison with other benchmarks by conducting online training for 10,000
episodes and adopting the same reward setting with $r_{\text{rec-suc}}=1$,
$r_{\text{rec-fail}}=-0.01$, $r_{\text{ask-suc}}=-0.1$, $r_{\text{ask-fail}}=-0.1$,
and $r_{\text{quit}}=-0.3$. We set the experience replay buffer to 50,000 and
the mini-batch size to 128. The learning rate is fixed at 1e-4 with an L2
regularization of 1e-6, using the Adam optimization algorithm.
#### 5.2.5. Evaluation Metrics
This study employs success rate (SR@$T$) and average turn (AT) to evaluate the
recommendation performance. SR@$T$ measures the percentage of successful
recommendations within $T$ turns. A higher SR@$T$ indicates better
performance. AT measures the average number of turns in a conversation. A
lower AT demonstrates greater efficiency.
We also use hierarchical normalized discounted cumulative gain (hDCG@($T,K$))
to evaluate the ranking performance of the top-$K$ recommendations within $T$
turns. hDCG assigns higher scores to recommendations that are more relevant to
the user. A higher hDCG@($T,K$) indicates better ranking performance.
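The SR@$T$ and AT metrics can be sketched as below, assuming (as is common in the MCR literature) that failed sessions are counted at the maximum turn $T$; the session tuple format is an illustrative assumption:

```python
def success_rate_and_avg_turn(sessions, max_turn=15):
    """SR@T: fraction of sessions that succeed within T turns.
    AT: mean number of turns, with failed sessions counted as max_turn.
    Each session is (succeeded: bool, turns_used: int)."""
    n = len(sessions)
    sr = sum(1 for ok, t in sessions if ok and t <= max_turn) / n
    at = sum(min(t, max_turn) if ok else max_turn for ok, t in sessions) / n
    return sr, at
```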
Table 2. Performance comparison of different models in the VPMCR scenario. hDCG stands for hDCG@(15, 10).
Models | Yelp | | LastFM | | Amazon-Book | | MovieLens
---|---|---|---|---|---|---|---
SR@15 | AT | hDCG | | SR@15 | AT | hDCG | | SR@15 | AT | hDCG | | SR@15 | AT | hDCG
Max Entropy | 0.062 | 14.44 | 0.030 | | 0.376 | 11.25 | 0.189 | | 0.180 | 12.91 | 0.107 | | 0.448 | 9.93 | 0.315
CRM | 0.212 | 13.27 | 0.070 | | 0.372 | 12.26 | 0.126 | | 0.296 | 12.34 | 0.109 | | 0.780 | 5.96 | 0.341
EAR | 0.232 | 13.05 | 0.080 | | 0.414 | 11.61 | 0.146 | | 0.324 | 12.14 | 0.119 | | 0.792 | 5.50 | 0.361
SCPR | 0.322 | 12.34 | 0.115 | | 0.596 | 10.18 | 0.206 | | 0.374 | 11.62 | 0.139 | | 0.806 | 4.90 | 0.387
UNICORN | 0.314 | 12.11 | 0.140 | | 0.632 | 9.17 | 0.280 | | 0.396 | 11.05 | 0.193 | | 0.810 | 4.81 | 0.548
MCMIPL | 0.322 | 12.16 | 0.136 | | 0.634 | 9.52 | 0.267 | | 0.412 | 10.90 | 0.205 | | 0.820 | 4.39 | 0.579
AVPPL | 0.398 | 11.26 | 0.175 | | 0.686 | 8.58 | 0.306 | | 0.424 | 10.75 | 0.206 | | 1.000 | 1.60 | 0.689
Figure 3. SR* of compared methods at different turns on four datasets (RQ1)
### 5.3. Performance comparison of AVPPL with existing models (RQ1)
Table 2 reports the SR@15, AT, and hDCG@($15,10$) for AVPPL and the baseline
models. AVPPL significantly outperformed all baselines on every metric and
dataset (note that a lower AT is better), demonstrating its effectiveness in
the VPMCR scenario. The performance gap was largest on MovieLens, likely
because movie recommendation is a relatively simple task and AVPPL better
models user preferences for items.
Fig. 3 shows the relative success rate (SR*) of each model at every turn
compared to the MCMIPL baseline (represented by the dark green line at $y=0$).
Observing the variation trend of curves in Fig. 3, we have the following
findings:
* •
AVPPL almost consistently and substantially surpassed all baselines over the
entire conversation session across datasets. Specifically, AVPPL achieved a
high recommendation success rate in the first few turns on MovieLens,
demonstrating its ability to precisely capture users’ preferences.
* •
As the conversation continues, the performance gap between AVPPL and the
other baselines widens, especially relative to Max Entropy. The lack of an
adaptive policy causes Max Entropy to require excessive turns, while AVPPL
dynamically predicts the best action at each turn based on the user responses
and the personalized recommendation policy learned via reinforcement
learning.
* •
Reinforcement learning-based methods like CRM and EAR lag behind more advanced
models, as they directly apply RL to a large decision space without
effectively representing the conversation state, hindering optimal policy
learning. In contrast, graph-based models such as SCPR, UNICORN, and MCMIPL
leverage graph structures to achieve state-of-the-art performance on some
datasets, but still fall short of AVPPL’s performance.
### 5.4. Evaluating Key Design in AVPPL (RQ2)
Table 3. Ablation study of AVPPL in VPMCR (top) and comparison of AVPPL with
other baselines in MIMCR (bottom).
| Yelp | | LastFM | | Amazon-Book | | MovieLens
---|---|---|---|---|---|---|---
SR@15 | AT | hDCG | | SR@15 | AT | hDCG | | SR@15 | AT | hDCG | | SR@15 | AT | hDCG
AVPPL - (VPMCR) | 0.398 | 11.26 | 0.175 | | 0.686 | 8.58 | 0.306 | | 0.424 | 10.75 | 0.206 | | 1.000 | 1.60 | 0.689
(a) - w/o USE Item.Score | 0.328 | 12.04 | 0.144 | | 0.618 | 9.35 | 0.271 | | 0.386 | 11.17 | 0.189 | | 0.852 | 3.84 | 0.593
(b) - w/o USE Attr.Score | 0.354 | 11.88 | 0.149 | | 0.614 | 9.44 | 0.267 | | 0.412 | 10.91 | 0.199 | | 1.000 | 1.75 | 0.663
(c) - w/o Personalized Preference | 0.142 | 13.84 | 0.060 | | 0.444 | 10.79 | 0.211 | | 0.284 | 12.10 | 0.142 | | 0.858 | 5.22 | 0.492
(d) - w/o Average Preference | 0.368 | 11.38 | 0.169 | | 0.630 | 9.24 | 0.269 | | 0.416 | 10.84 | 0.199 | | 1.000 | 1.77 | 0.668
(e) - w/o Decaying Preference | 0.382 | 11.56 | 0.163 | | 0.628 | 9.15 | 0.280 | | 0.410 | 11.05 | 0.190 | | 1.000 | 1.49 | 0.708
AVPPL - (MIMCR) | 0.636 | 10.68 | 0.210 | | 0.840 | 7.33 | 0.350 | | 0.610 | 9.81 | 0.251 | | 0.988 | 2.42 | 0.640
MCMIPL - (MIMCR) | 0.552 | 10.95 | 0.204 | | 0.856 | 7.21 | 0.342 | | 0.544 | 10.32 | 0.239 | | 0.838 | 4.23 | 0.602
UNICORN - (MIMCR) | 0.454 | 11.01 | 0.188 | | 0.832 | 7.42 | 0.350 | | 0.530 | 10.23 | 0.231 | | 0.832 | 4.35 | 0.567
SCPR - (MIMCR) | 0.452 | 12.52 | 0.136 | | 0.688 | 10.27 | 0.220 | | 0.450 | 11.10 | 0.167 | | 0.834 | 4.80 | 0.392
#### 5.4.1. Key Components of AVPPL
We examine the effectiveness of Uncertainty-aware Soft Estimation (USE), our
framework’s main design, in guiding conversations and adapting to user
preference changes in VPMCR scenarios. We separately remove the USE module for
items and attributes (Section 4.1) and replace them with a preference-based
scoring strategy (Deng et al., 2021; Zhang et al., 2022b), which models user
preferences using historical click or non-click attributes as mixed signals.
Table 3 rows (a-b) display the ablation study results. Removing the USE
module for either items or attributes significantly degrades performance
across all datasets, emphasizing the importance of considering user
preference
uncertainty. The USE module allows our model to learn a sophisticated
conversational state representation and prune a more reasonable action space
for the Unified Policy Learning (UPL) module, enhancing the upper bound for
unified policy learning.
We also find that the USE component is more effective in measuring user
preferences for items than attributes in VPMCR scenarios, suggesting that
click behavior provides more direct item-related information.
#### 5.4.2. Key Components of USE
Table 3 rows (c-e) present the ablation experiments for the USE component. Row
(c) shows that personalized information for user modeling is crucial; without
it, the model cannot capture personalized preferences, severely limiting
performance. Removing the average preference in Equation 3 (Row (d)) degrades
performance across all datasets, with LastFM suffering the most. This may be
due to LastFM’s numerous attributes and the significant impact of non-
displayed attribute information on user preference estimation. Additionally,
we remove the historical decay preference in time-aware preference decay (Row
(e)), leading to performance degradation on three datasets except for
MovieLens. On MovieLens, USE without decaying information reliably estimates
preferences in the current turn, and recommendations succeed within 1-2
rounds. Thus, introducing historical decay preference in short interactive
rounds may weaken preference inference on MovieLens.
Overall, the results confirm the USE module’s importance and the proposed
AVPPL framework’s effectiveness.
#### 5.4.3. VPMCR vs. MIMCR Scenarios
To comprehensively evaluate AVPPL’s effectiveness in modeling user preferences
based on click behaviors, we relax the scenario assumption and employ the
MIMCR scenario involving multi-choice question interactions. In MIMCR, user
feedback signals are treated as strong indicators to filter items.
Table 3 compares AVPPL’s performance with advanced baselines in the MIMCR
scenario. Our method shows significant advantages on Yelp, Amazon-book, and
Movielens datasets. On LastFM, although slightly inferior to MCMIPL in SR and
AT, AVPPL outperforms all w.r.t. hDCG. These results confirm AVPPL’s
effectiveness in eliciting user preferences in multi-choice question
scenarios, demonstrating its universality and effectiveness in handling both
VPMCR and MIMCR scenarios.
Figure 4. Comparative performance analysis of success rate with varying decay
factor (left) and proportion of vague preferences (right) hyperparameters
(RQ3).
### 5.5. Model Parameter Analysis (RQ3)
Table 4. The impact of the coefficient of information intensity w.r.t. SR@15.
Dataset | Yelp | | Amazon-Book
---|---|---|---
$\lambda_{1}\backslash\lambda_{2}$ | 0.01 | 0.1 | 1 | | 0.01 | 0.1 | 1
0.01 | 0.414 | 0.408 | 0.328 | | 0.424 | 0.430 | 0.400
0.1 | 0.398 | 0.410 | 0.344 | | 0.424 | 0.414 | 0.384
1 | 0.394 | 0.370 | 0.302 | | 0.420 | 0.398 | 0.406
Previous work on graph-based policy learning (Deng et al., 2021) has
conducted relevant hyperparameter analysis regarding policy learning. Here we
focus on the analysis of the hyperparameter impact of the core module (USE) in
AVPPL in the VPMCR scenario. Due to the limited space, we only present results
for Yelp and Amazon-Book, but note that LastFM and Movielens exhibit similar
trends.
#### 5.5.1. Hyperparameter Analysis in USE
We identified two key hyperparameters: (1) The information intensity
coefficients $\lambda_{1}$ and $\lambda_{2}$ control the importance of
explicit versus implicit preferences. The results presented in Table 4 show
that larger $\lambda_{1}$ and smaller $\lambda_{2}$ resulted in higher success
rates, indicating that explicit preferences ($\lambda_{1}$) are more crucial
than implicit preferences ($\lambda_{2}$) in VPMCR. Notably, performance
decreases when both $\lambda_{1}$ and $\lambda_{2}$ are large, especially for
sparser datasets like Yelp, posing a challenge to the model’s robustness. (2)
The decay factor $\gamma$ controls the trade-off between recent and historical
preferences. Fig. 4 shows that a moderate decay factor (0.6-0.8) performs
best, suggesting that a balance between recent and historical preferences is
optimal. Extreme values (0.1 and 1.0) perform poorly, indicating that
disregarding historical preferences or solely relying on recent ones is
suboptimal.
#### 5.5.2. Proportion of Vague Preferences
We conducted experiments with varying vague preference proportions (0.1 to 1).
In Fig. 4, higher success rates occurred at moderate vague preference
proportions. With a moderate level of vague preferences (around 40-50%), the
model balances the ability to utilize both vague and explicit preferences,
resulting in better recommendations. However, when vague preferences dominated
(over 70-80%), the model struggled to accurately determine user needs,
hampering performance.
### 5.6. Case Study (RQ4)
Figure 5. The left figure displays a sample conversation generated by our
AVPPL and the right figure illustrates the changes in the user preference
distribution during the conversation.
In this case study from the Yelp dataset (Fig. 5), the user initiated a
conversation with a clear preference of finding a beverage shop, prompting the
initialization of the user’s distribution space across all potential
locations. The user had a clear preference for “_tea $\&$ coffee_” but was
vague about their preferences for “_price_” and “_leisure food_”. Our
proposed method takes into account the user’s click/non-click behavior to
update the user’s preference distribution on all beverage establishments
accordingly. This is in contrast to the traditional approach of filtering out
items based on click/non-click signals.
After the third turn of conversation, the combination of the user’s immediate
feedback (clicking on “_dessert_” and not clicking on “_smoothies_”) and
historical feedback (“_price_” and “_tea $\&$ coffee_”) resulted in
identifying two target items, “ID:69761” and “ID:25587”, with the highest
preference estimates.
## 6\. Conclusion
We propose the realistic Vague Preference Multi-round Conversational
Recommendation (VPMCR) scenario, which considers the user’s vague and
volatile preferences. By addressing the limitations of existing CRS scenarios and
incorporating the VPMCR scenario and AVPPL solution, we aim to improve the
overall performance and applicability of CRS in real-world settings,
particularly for users with vague or dynamic preferences. We hope the findings
will provide valuable insights into developing user-centric CRSs that can
handle users’ vague and dynamic preferences. In future work, we plan to
explore more sophisticated vague preference modeling and more efficient policy
learning techniques to further enhance the performance and generalizability of
AVPPL in VPMCR.
## References
* Afsar et al. (2022) M Mehdi Afsar, Trafford Crump, and Behrouz Far. 2022. Reinforcement Learning based Recommender Systems: A Survey. _Comput. Surveys_ 55, 7 (2022), 1–38.
* Bettman et al. (1998) James R Bettman, Mary Frances Luce, and John W Payne. 1998. Constructive consumer choice processes. _Journal of consumer research_ 25, 3 (1998), 187–217.
* Bordes et al. (2013) Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multi-relational data. _Advances in neural information processing systems_ 26 (2013).
* Chandramohan et al. (2011) Senthilkumar Chandramohan, Matthieu Geist, Fabrice Lefevre, and Olivier Pietquin. 2011. User simulation in dialogue systems using inverse reinforcement learning. In _Interspeech 2011_. 1025–1028.
* Chen et al. (2019) Qibin Chen, Junyang Lin, Yichang Zhang, Ming Ding, Yukuo Cen, Hongxia Yang, and Jie Tang. 2019. Towards Knowledge-Based Recommender Dialog System. In _EMNLP-IJCNLP_. 1803–1813.
* Chen et al. (2022) Yankai Chen, Huifeng Guo, Yingxue Zhang, Chen Ma, Ruiming Tang, Jingjie Li, and Irwin King. 2022. Learning Binarized Graph Representations with Multi-Faceted Quantization Reinforcement for Top-K Recommendation. In _Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_ _(KDD ’22)_. Association for Computing Machinery, New York, NY, USA, 168–178. https://doi.org/10.1145/3534678.3539452
* Deffayet et al. (2023) Romain Deffayet, Thibaut Thonet, Jean-Michel Renders, and Maarten de Rijke. 2023. Offline Evaluation for Reinforcement Learning-based Recommendation: A Critical Issue and Some Alternatives. _arXiv preprint arXiv:2301.00993_ (2023).
* Deng et al. (2021) Yang Deng, Yaliang Li, Fei Sun, Bolin Ding, and Wai Lam. 2021. Unified conversational recommendation policy learning via graph-based reinforcement learning. In _Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval_. 1431–1441.
* Gao et al. (2023) Chongming Gao, Kexin Huang, Jiawei Chen, Yuan Zhang, Biao Li, Peng Jiang, Shiqi Wang, Zhong Zhang, and Xiangnan He. 2023. Alleviating Matthew Effect of Offline Reinforcement Learning in Interactive Recommendation. In _Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval_ _(SIGIR ’23)_. 11. https://doi.org/10.1145/3539618.3591636
* Gao et al. (2022a) Chongming Gao, Wenqiang Lei, Jiawei Chen, Shiqi Wang, Xiangnan He, Shijun Li, Biao Li, Yuan Zhang, and Peng Jiang. 2022a. CIRS: Bursting Filter Bubbles by Counterfactual Interactive Recommender System. _arXiv preprint arXiv:2204.01266_ (2022).
* Gao et al. (2021) Chongming Gao, Wenqiang Lei, Xiangnan He, Maarten de Rijke, and Tat-Seng Chua. 2021. Advances and Challenges in Conversational Recommender Systems: A Survey. _AI Open_ 2 (2021), 100–126. https://doi.org/10.1016/j.aiopen.2021.06.002
* Gao et al. (2022b) Chongming Gao, Shijun Li, Wenqiang Lei, Jiawei Chen, Biao Li, Peng Jiang, Xiangnan He, Jiaxin Mao, and Tat-Seng Chua. 2022b. KuaiRec: A Fully-observed Dataset and Insights for Evaluating Recommender Systems. In _Proceedings of the 31st ACM International Conference on Information and Knowledge Management_ _(CIKM ’22)_. 11.
* Guo et al. (2021) Wei Guo, Rong Su, Renhao Tan, Huifeng Guo, Yingxue Zhang, Zhirong Liu, Ruiming Tang, and Xiuqiang He. 2021\. Dual Graph Enhanced Embedding Neural Network for CTR Prediction. In _Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining_ _(KDD ’21)_. Association for Computing Machinery, New York, NY, USA, 496–504. https://doi.org/10.1145/3447548.3467384
* Han et al. (2018) Xu Han, Shulin Cao, Xin Lv, Yankai Lin, Zhiyuan Liu, Maosong Sun, and Juanzi Li. 2018. Openke: An open toolkit for knowledge embedding. In _Proceedings of the 2018 conference on empirical methods in natural language processing: system demonstrations_. 139–144.
* He et al. (2022) Zhankui He, Handong Zhao, Tong Yu, Sungchul Kim, Fan Du, and Julian McAuley. 2022\. Bundle MCR: Towards Conversational Bundle Recommendation. In _Proceedings of the 16th ACM Conference on Recommender Systems_ _(RecSys ’22)_. Association for Computing Machinery, New York, NY, USA, 288–298. https://doi.org/10.1145/3523227.3546755
* Kipf and Welling (2016) Thomas N Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. _arXiv preprint arXiv:1609.02907_ (2016).
* Lei et al. (2020a) Wenqiang Lei, Xiangnan He, Yisong Miao, Qingyun Wu, Richang Hong, Min-Yen Kan, and Tat-Seng Chua. 2020a. Estimation–Action–Reflection: Towards Deep Interaction Between Conversational and Recommender Systems. In _WSDM_. 304–312.
* Lei et al. (2020b) Wenqiang Lei, Gangyi Zhang, Xiangnan He, Yisong Miao, Xiang Wang, Liang Chen, and Tat-Seng Chua. 2020b. Interactive Path Reasoning on Graph for Conversational Recommendation. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_ _(KDD ’20)_. 2073–2083.
* Li et al. (2018) Raymond Li, Samira Ebrahimi Kahou, Hannes Schulz, Vincent Michalski, Laurent Charlin, and Chris Pal. 2018. Towards Deep Conversational Recommendations. In _Proceedings of the 32nd International Conference on Neural Information Processing Systems_ _(NeurIPS ’18)_. 9748–9758.
* Li et al. (2021) Shijun Li, Wenqiang Lei, Qingyun Wu, Xiangnan He, Peng Jiang, and Tat-Seng Chua. 2021\. Seamlessly Unifying Attributes and Items: Conversational Recommendation for Cold-Start Users. _ACM Trans. Inf. Syst._ 39, 4, Article 40 (aug 2021), 29 pages. https://doi.org/10.1145/3446427
* Liu et al. (2022) Dugang Liu, Mingkai He, Jinwei Luo, Jiangxu Lin, Meng Wang, Xiaolian Zhang, Weike Pan, and Zhong Ming. 2022\. User-Event Graph Embedding Learning for Context-Aware Recommendation. In _Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_ _(KDD ’22)_. Association for Computing Machinery, New York, NY, USA, 1051–1059. https://doi.org/10.1145/3534678.3539458
* Montazeralghaem and Allan (2022) Ali Montazeralghaem and James Allan. 2022. Learning Relevant Questions for Conversational Product Search Using Deep Reinforcement Learning. In _Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining_ _(WSDM ’22)_. Association for Computing Machinery, New York, NY, USA, 746–754. https://doi.org/10.1145/3488560.3498526
* Moon et al. (2019) Seungwhan Moon, Pararth Shah, Anuj Kumar, and Rajen Subba. 2019\. OpenDialKG: Explainable Conversational Reasoning with Attention-based Walks over Knowledge Graphs. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ _(ACL ’19)_. 845–854.
* Ren et al. (2022) Zhaochun Ren, Zhi Tian, Dongdong Li, Pengjie Ren, Liu Yang, Xin Xin, Huasheng Liang, Maarten de Rijke, and Zhumin Chen. 2022. Variational Reasoning about User Preferences for Conversational Recommendation. In _Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval_ _(SIGIR ’22)_. Association for Computing Machinery, New York, NY, USA, 165–175. https://doi.org/10.1145/3477495.3532077
* Sadeghi Eshkevari et al. (2022) Soheil Sadeghi Eshkevari, Xiaocheng Tang, Zhiwei Qin, Jinhan Mei, Cheng Zhang, Qianying Meng, and Jia Xu. 2022\. Reinforcement Learning in the Wild: Scalable RL Dispatching Algorithm Deployed in Ridehailing Marketplace. In _Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_ _(KDD ’22)_. Association for Computing Machinery, New York, NY, USA, 3838–3848. https://doi.org/10.1145/3534678.3539095
* Sun and Zhang (2018) Yueming Sun and Yi Zhang. 2018. Conversational Recommender System. In _SIGIR_. 235–244.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. _Advances in neural information processing systems_ 30 (2017).
* Wang et al. (2022a) Shiqi Wang, Chongming Gao, Min Gao, Junliang Yu, Zongwei Wang, and Hongzhi Yin. 2022a. Who Are the Best Adopters? User Selection Model for Free Trial Item Promotion. _IEEE Transactions on Big Data_ (2022).
* Wang et al. (2019) Xiang Wang, Xiangnan He, Yixin Cao, Meng Liu, and Tat-Seng Chua. 2019. Kgat: Knowledge graph attention network for recommendation. In _Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_. 950–958.
* Wang et al. (2022b) Xiaolei Wang, Kun Zhou, Ji-Rong Wen, and Wayne Xin Zhao. 2022b. Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning. In _Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_ _(KDD ’22)_. Association for Computing Machinery, New York, NY, USA, 1929–1937. https://doi.org/10.1145/3534678.3539382
* Wu et al. (2021) Jiancan Wu, Xiang Wang, Fuli Feng, Xiangnan He, Liang Chen, Jianxun Lian, and Xing Xie. 2021. Self-supervised graph learning for recommendation. In _Proceedings of the 44th international ACM SIGIR conference on research and development in information retrieval_. 726–735.
* Wu et al. (2019) Wenquan Wu, Zhen Guo, Xiangyang Zhou, Hua Wu, Xiyuan Zhang, Rongzhong Lian, and Haifeng Wang. 2019. Proactive human-machine conversation with explicit conversation goals. _arXiv preprint arXiv:1906.05572_ (2019).
* Xu et al. (2020) Hu Xu, Seungwhan Moon, Honglei Liu, Bing Liu, Pararth Shah, Bing Liu, and Philip Yu. 2020. User Memory Reasoning for Conversational Recommendation. In _Proceedings of the 28th International Conference on Computational Linguistics_ _(COLING ’20)_. 5288–5308.
* Xu et al. (2021) Kerui Xu, Jingxuan Yang, Jun Xu, Sheng Gao, Jun Guo, and Ji-Rong Wen. 2021. Adapting User Preference to Online Feedback in Multi-Round Conversational Recommendation. In _Proceedings of the 14th ACM International Conference on Web Search and Data Mining_ _(WSDM ’21)_. 364–372.
* Xue et al. (2023) Wanqi Xue, Qingpeng Cai, Ruohan Zhan, Dong Zheng, Peng Jiang, and Bo An. 2023\. ResAct: Reinforcing Long-term Engagement in Sequential Recommendation with Residual Actor. In _International Conference on Learning Representations_ _(ICLR ’23)_.
* Zhang et al. (2022a) Qihua Zhang, Junning Liu, Yuzhuo Dai, Yiyan Qi, Yifan Yuan, Kunlun Zheng, Fan Huang, and Xianfeng Tan. 2022a. Multi-Task Fusion via Reinforcement Learning for Long-Term User Satisfaction in Recommender Systems. In _Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_ _(KDD ’22)_. Association for Computing Machinery, New York, NY, USA, 4510–4520. https://doi.org/10.1145/3534678.3539040
* Zhang et al. (2022b) Yiming Zhang, Lingfei Wu, Qi Shen, Yitong Pang, Zhihua Wei, Fangli Xu, Bo Long, and Jian Pei. 2022b. Multiple Choice Questions based Multi-Interest Policy Learning for Conversational Recommendation. In _Proceedings of the ACM Web Conference 2022_. 2153–2162.
* Zhou et al. (2020b) Kun Zhou, Wayne Xin Zhao, Shuqing Bian, Yuanhang Zhou, Ji-Rong Wen, and Jingsong Yu. 2020b. Improving Conversational Recommender Systems via Knowledge Graph based Semantic Fusion. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_ _(SIGKDD’ 20)_. 1006–1014.
* Zhou et al. (2020c) Kun Zhou, Yuanhang Zhou, Wayne Xin Zhao, Xiaoke Wang, and Ji-Rong Wen. 2020c. Towards Topic-Guided Conversational Recommender System. In _Proceedings of the 28th International Conference on Computational Linguistics_ _(COLING ’2020)_.
* Zhou et al. (2020a) Sijin Zhou, Xinyi Dai, Haokun Chen, Weinan Zhang, Kan Ren, Ruiming Tang, Xiuqiang He, and Yong Yu. 2020a. Interactive Recommender System via Knowledge Graph-Enhanced Reinforcement Learning. In _Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval_ _(SIGIR’ 20)_. 179–188.
* Zhou et al. (2022) Yuanhang Zhou, Kun Zhou, Wayne Xin Zhao, Cheng Wang, Peng Jiang, and He Hu. 2022\. C²-CRS: Coarse-to-Fine Contrastive Learning for Conversational Recommender System. In _Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining_ _(WSDM ’22)_. Association for Computing Machinery, New York, NY, USA, 1488–1496. https://doi.org/10.1145/3488560.3498514
* Zou et al. (2020) Jie Zou, Yifan Chen, and Evangelos Kanoulas. 2020. Towards Question-Based Recommender Systems. In _Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval_ _(SIGIR ’20)_. 881–890.
|
Creating new air combat tactics and discovering novel maneuvers can require numerous hours of expert pilots' time. Additionally, the same strategies may not work across different combat scenarios, since small changes in equipment performance can drastically change the air combat outcome. For this reason, we created a reinforcement learning environment to help investigate potential air combat tactics in the field of beyond-visual-range (BVR) air combat: the BVR Gym. This type of air combat is important since long-range missiles are often the first weapon used in aerial engagements. Some existing environments provide high-fidelity simulations but are either not open source or not adapted to the BVR air combat domain; others are open source but use less accurate simulation models. Our work provides a high-fidelity environment based on the open-source flight dynamics simulator JSBSim, adapted to the BVR air combat domain. This article describes the building blocks of the environment and some use cases.
Reinforcement Learning, Beyond Visual Range Air Combat
\(h\) Agent's altitude
\(v\) Agent's airspeed
\(v_{D}\) Agent's downward velocity
\(\psi\) Agent's heading
\(\rho\) Distance to the firing position
\(\nu\) Initial launch velocity
\(\tau\) Time since missile launch
\(\eta\) Relative angle to the firing position
\(\beta\) Altitude of the firing position
\(MD\) Miss distance
\(a_{Head}\) Set heading
\(a_{Alt}\) Set altitude
\(a_{Thr}\) Set thrust
§ INTRODUCTION
The nature of air combat has changed dramatically over the past half-century. Improvements in sensors, armaments, and communications allow pilots to engage hostile aircraft at ever greater distances, shifting combat from within-visual-range (WVR) to beyond-visual-range (BVR) engagements. Given these technological advancements, BVR air warfare is currently the most effective type of air combat [1]. When pilots train, it is vital that they are exposed to a large variety of situations and opponent tactics.
Manually creating such situations and tactics can be difficult and time-consuming.
One possible way to alleviate this problem is to use Reinforcement Learning (RL).
RL has been applied to a large variety of problem domains. A recent work [2] focuses on providing a customizable environment where the agent can learn team strategies for football. Other environments present challenges in agriculture, traffic management, and product recommendations [3, 4, 5]. In [6], the authors noted that there is a lack of standard environments for aerospace problems. For this reason, the latter work developed an Aerospace SafeRL Framework that includes environments for aircraft formation flight and spacecraft docking in both 2D and 3D environments.
Below, we discuss some noteworthy recent high-fidelity flight dynamics simulation engines. AirSim [7] is one of the most cited flight-dynamics simulation environments used for AI research in the aerospace domain. It is an open-source platform built upon the Unreal Engine game platform and aims to narrow the gap between simulation and reality. Gazebo [8] is another high-fidelity simulation framework that is popular among robotics researchers and extends to the aerospace domain, with a focus on multi-rotor drones. X-Plane, for which NASA provides the XPlaneConnect toolbox [<https://github.com/nasa/XPlaneConnect>], is a further high-fidelity simulation environment. In [9], the authors used the X-Plane flight dynamics engine for data collection, which was later used to train an autopilot in the form of a feed-forward neural network (FNN). The authors of [10] used the Double Deep Q-Network (DDQN) approach to train an agent for attitude control and used X-Plane to verify the trained agent's ability to deal with complex environments. Another high-fidelity flight dynamics simulation environment is JSBSim [11]. One important recent event within the air combat domain was the DARPA AlphaDogfight Trials [<https://www.darpa.mil/news-events/2020-08-26>], where teams competed with algorithms capable of performing simulated WVR air combat maneuvering, ultimately facing experienced Air Force F-16 pilots. The authors of [12] participated in the trials and used RL to train an agent for this specific competition. The simulation environment used within the competition was based on the JSBSim flight dynamics engine, operating in a WVR air combat setting.
Given the currently available RL environments, as seen in Table <ref>, there is a need for an open-source high-fidelity environment to explore tactics within the BVR air combat domain.
Table: Overview of simulation environments

| Simulation environment | High fidelity | Open source | BVR |
|---|---|---|---|
| AirSim [7] | Yes | Yes | No |
| Gazebo [8] | Yes | Yes | No |
| X-Plane [<https://github.com/nasa/XPlaneConnect>] | Yes | No | No |
| WUKONG [13] | No | Yes | Yes |
| JSBSim [11] | Yes | Yes | No |
| General Motion Model [14] | No | Yes | Yes |
| BVRGym [15] (our approach) | Yes | Yes | Yes |
The main contribution of this paper is a BVR air combat environment based on the high-fidelity flight dynamics simulator JSBSim. Its key features are listed below:
* it is open source
* it provides a set of BVR scenarios with easy integration into different RL algorithms
* it provides a BVR missile model with a Proportional Navigation (PN) guidance law
* it gives the ability to customize and create new scenarios
The library and additional documentation are available here[<https://github.com/xcwoid/BVRGym>], and Table <ref> above provides a comparison to related simulation environments.
§ BACKGROUND
This framework consists of the following components: (i) Tactical units that are used to conduct BVR air combat. We use a military aircraft model and a long-range missile for this work. These models are explicitly adapted for BVR air combat. (ii) To enable the participation of units applying manually designed policies in the scenarios, we include a behavior tree implementing a simple but extendable BVR policy. (iii) To facilitate the use of a wide range of RL algorithms, we developed a simple OpenAI-Gym-like interface [16].
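To make the third component concrete, a minimal sketch of such a Gym-like loop is shown below. The class name, state layout, and placeholder dynamics are illustrative assumptions, not the real BVRGym API; the actual environment advances JSBSim in `step`.

```python
import numpy as np

class BVRGymLikeEnv:
    """Illustrative sketch of an OpenAI-Gym-style reset/step interface.

    The state mirrors the missile-evasion observation
    (h, v_D, v, psi, nu, tau, eta, beta, rho); dynamics are placeholders.
    """

    def __init__(self, max_steps=500):
        self.max_steps = max_steps
        self.t = 0
        self.state = np.zeros(9)

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.t = 0
        self.state = np.zeros(9)
        return self.state.copy()

    def step(self, action):
        """Apply (set heading, set altitude, set thrust) for one step."""
        a_head, a_alt, a_thr = action
        self.t += 1
        # Placeholder dynamics: the real environment steps JSBSim here.
        self.state[3] = a_head   # heading responds to the command
        self.state[0] = a_alt    # altitude responds to the command
        done = self.t >= self.max_steps
        reward = self._miss_distance() if done else 0.0  # sparse terminal reward
        return self.state.copy(), reward, done, {}

    def _miss_distance(self):
        return float(self.state[8])  # placeholder for the recorded MD
```

An RL algorithm then interacts with the environment only through `reset` and `step`, which is what makes swapping in different learners straightforward.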
Additional effort has been made to make the environment resemble actual BVR air combat training. In such scenarios, the participants are usually split into two groups, the blue and red teams, the red team being the adversary. Since the distances implied by BVR combat are generally up to 100 km, aircraft use onboard radars to track the position of the opposing team. When the opposing team launches a missile, the launch itself can be detected: the ignition flash of the missile engine may be captured by Infrared Search and Track (IRST) sensors, and the detection can be associated with the tracked aircraft. In this case, it is possible to estimate from which adversarial aircraft the missile was launched. Tracking the missile itself, on the other hand, is a much more challenging task, since the missile is much smaller than the aircraft that launched it. For this reason, our training environments only expose the location from which the missile was launched, not the current position of the missile.
Additionally, BVR air combat evolves on a slower time scale than WVR air combat, which makes typical RL rewards sparse and training more challenging: fewer events occur over extended periods of time, and the exploration space is large.
As noted above, the BVR Gym enables the manual design of policies using a behavior tree (BT), a switching structure that has been shown to be optimally modular [17] and used to create extendable hierarchical control structures in robotics [18, 19].
Below, we describe the basic concepts of RL and the BTs.
§.§ Reinforcement learning
Reinforcement learning (RL) is a subfield of machine learning in which knowledge is obtained through dynamic interaction with an environment, offering a powerful method for training an agent to make intelligent decisions. The agent is the learning entity, and at its core is a policy that directs its decision-making. Unlike supervised learning, where a model is trained on a labeled dataset, RL provides no explicit guidance: the agent must discover effective behavior from reward signals over a sequence of discrete time steps. At each time step $t$, the agent perceives the environment through a state representation $s_t$, selects an action $a_t$ from a set of possible actions, and receives a numerical reward $r_{t}$ provided by the environment [20].
The obtained reward can be used to assess how good or bad a particular action was in a given state. The agent's goal is thus to find a policy $\pi(a|s)$ that maximizes these long-term rewards given its current state. The problem can be mathematically formulated as
\pi^* = \arg\max_\pi \mathbb{E}\left(\sum_{t=0}^{\infty} \gamma^t r_{t+1} \mid \pi \right),
where $\gamma$ is the discount factor balancing immediate rewards against future rewards, and the expectation $\mathbb{E}$ is taken over all possible sequences of states, actions, and rewards under the policy $\pi$.
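The discounted sum inside the expectation can be computed for a single sampled episode as follows; this is a small illustrative helper, not part of the BVRGym API.

```python
def discounted_return(rewards, gamma=0.99):
    """Compute sum_t gamma^t * r_{t+1} for one episode.

    rewards[0] corresponds to r_1, the reward after the first action.
    Iterating backwards gives the usual recursive form G_t = r + gamma * G_{t+1}.
    """
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```

With the sparse rewards used in BVRGym, most entries of `rewards` are zero and only the terminal reward contributes, scaled by $\gamma^{T-1}$.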
The two main families of RL algorithms are on-policy and off-policy methods. Q-learning is a form of off-policy learning, which separates the policy being learned from the policy used to explore the environment [20]. On-policy algorithms such as REINFORCE, on the other hand, update the policy based on experience gathered while following the current policy. Off-policy approaches benefit from the broader range of experiences produced by various policies. Finding the right balance between exploitation and exploration is crucial in reinforcement learning: to maximize immediate benefits, the agent must exploit its present knowledge, while it must also explore to uncover optimal behaviors. In this work, we use an on-policy method, the Proximal Policy Optimization (PPO) algorithm [21]. One reason for selecting PPO is the robustness of its hyper-parameter selection across different tasks, along with its reported performance when applied to real-world problems such as attitude control for fixed-wing and quad-rotor aircraft [22, 23].
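For reference, the core of PPO is its clipped surrogate objective. The sketch below shows only that objective in isolation; the function name is illustrative, and `eps=0.2` is the clip range commonly used in the PPO paper, not necessarily the value used in our experiments.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective of PPO (to be maximized).

    ratio     : pi_theta(a|s) / pi_theta_old(a|s) for a sampled action
    advantage : estimate of how much better the action was than average
    The clip keeps the new policy from moving too far from the old one
    in a single update.
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped)
```

Taking the minimum of the clipped and unclipped terms makes the objective pessimistic: large policy changes are only rewarded up to the clip boundary.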
§.§ Behavior Trees
Behavior Trees (BTs) are a hierarchical and modular [17] method used in robotics, artificial intelligence, and video games to describe the policies of autonomous entities. BTs were first created by video game programmers, but they have since been adopted in many other fields where complex decision-making and task accomplishment are crucial [19].
BT structure controlling the F-16 aircraft belonging to Team $\mathcal{R}$.
A behavior tree is a hierarchical structure depicting an agent's decision-making process. Compared with alternative frameworks, BTs offer a more adaptable and modular way of describing and organizing behaviors. We use BTs in our work to create manual policies; modularity is useful because BVR air combat can be decomposed into a number of complex behaviors. An example of a simple BT is illustrated in Figure <ref>. Each node in the tree represents a specific action or decision-making step, and the directed graph of interconnected nodes guides the agent through a series of decisions and actions. Control and execution nodes are the two basic categories of nodes that comprise BTs [24]. Control nodes oversee the execution flow, choosing when and how to execute child nodes, while execution nodes stand for discrete actions or decision points. There are two types of execution nodes (Action and Condition) and three primary categories of control nodes (Sequence, Fallback, and Parallel); in this work, we do not use the Parallel node. Below, we briefly describe the nodes of a given BT.
Sequences, illustrated by a box containing the label $\rightarrow$, execute their child nodes in sequence until one fails. Fallbacks, illustrated by a box containing the label $?$, execute their child nodes in sequence until one succeeds. Action nodes perform actions, such as avoiding a missile, engaging an enemy, or guiding a missile to its target, and Condition nodes check whether a given condition is satisfied. BTs have the advantage of being easily readable and modifiable: designers may manipulate the tree structure to visualize and change the decision-making process. This makes BTs a very useful tool in fields where quick iteration and prototyping are crucial.
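The Sequence and Fallback semantics described above can be sketched in a few lines. The node classes and the tiny evade-or-engage tree below are illustrative assumptions, not the exact tree of Figure <ref>.

```python
SUCCESS, FAILURE = "success", "failure"

class Sequence:
    """Ticks children in order until one fails (the -> node)."""
    def __init__(self, children):
        self.children = children
    def tick(self, bb):
        for child in self.children:
            if child.tick(bb) == FAILURE:
                return FAILURE
        return SUCCESS

class Fallback:
    """Ticks children in order until one succeeds (the ? node)."""
    def __init__(self, children):
        self.children = children
    def tick(self, bb):
        for child in self.children:
            if child.tick(bb) == SUCCESS:
                return SUCCESS
        return FAILURE

class Condition:
    """Execution node: succeeds iff the predicate holds on the blackboard."""
    def __init__(self, pred):
        self.pred = pred
    def tick(self, bb):
        return SUCCESS if self.pred(bb) else FAILURE

class Action:
    """Execution node: performs its effect and succeeds."""
    def __init__(self, fn):
        self.fn = fn
    def tick(self, bb):
        self.fn(bb)
        return SUCCESS

# Defensive behavior on the left, offense as the fallback --
# mirroring the safety-first layout described for the adversary BT.
tree = Fallback([
    Sequence([Condition(lambda bb: bb["missile_incoming"]),
              Action(lambda bb: bb.update(cmd="evade"))]),
    Action(lambda bb: bb.update(cmd="engage")),
])
```

Ticking `tree` with a blackboard dict selects "evade" whenever a missile is incoming and "engage" otherwise, which is exactly the left-to-right priority that Fallback encodes.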
§ TACTICAL UNITS
Two critical components of the BVR air combat are military aircraft, such as jet fighters with long-range detection systems, and long-range missiles. This section briefly describes the tactical units used within this simulation environment, namely the F-16 aircraft and the BVR missile and their properties.
§.§ F-16 Aircraft
We utilize the JSBSim F-16 flight dynamics model[<https://github.com/JSBSim-Team/jsbsim/tree/master/aircraft/f16>] for this training environment. While the F-16 model has its own predefined controllers to keep the inherently unstable aircraft stable, we added an additional high-level controller to adapt the unit for BVR air combat. In general, BVR air combat does not involve aggressive maneuvering, since pilots conserve the energy of their aircraft; for this reason, we added an autopilot controller to steer the aircraft in the desired direction. This allows the agent to set the desired heading, altitude, and throttle instead of controlling the attitude rates, reducing the RL search space and promoting faster convergence.
Thus, if the agent chooses to set a desired direction, the lower-level controllers automatically roll the aircraft and turn it to the desired direction. Similarly, if the agent chooses to change altitude, lower-level controllers automatically stabilize the aircraft, adjust the pitch angle, and execute the maneuver to achieve the desired altitude.
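The split between the agent's high-level commands and the inner-loop stabilization can be illustrated with a toy outer-loop heading controller. The gain and bank limit below are illustrative assumptions; the real environment relies on JSBSim's own controllers.

```python
import math

def heading_outer_loop(psi, psi_ref, k_p=1.5, roll_limit=math.radians(60)):
    """Map a desired heading to a roll (bank) command.

    psi, psi_ref : current and commanded heading in radians
    k_p          : illustrative proportional gain
    roll_limit   : bank saturation so the turn stays gentle (BVR style)
    """
    # Wrap the heading error to [-pi, pi) so the aircraft always takes
    # the shorter turn direction.
    err = (psi_ref - psi + math.pi) % (2.0 * math.pi) - math.pi
    # Proportional roll command, saturated at the bank limit; the
    # inner-loop controllers would then track this roll angle.
    return max(-roll_limit, min(roll_limit, k_p * err))
```

A small heading error produces a proportional bank, while a heading reversal simply saturates at the bank limit instead of commanding an aggressive maneuver.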
Since aircraft are complex systems, it is helpful to see the exact behavior of the unit of interest. For this reason, we added the possibility of studying aircraft behavior before deploying it to an RL environment. Figure <ref> captures aircraft dynamics while performing an evasive maneuver, including a decrease in altitude (to increase air density to promote missile deceleration) and a change in direction (to maximize the distance from the missile).
F-16 evasive maneuver. A drop in altitude initiates the maneuver, followed by a turn to evade the incoming missile.
§.§ BVR Missile
The capacity to engage enemy fleets at long distances, up to 100 km, is crucial in BVR air combat. For this reason, we developed a BVR missile model. Since the exact performance of real missiles is highly classified, our missile model was inspired by performance estimates available from open sources[<https://en.wikipedia.org/wiki/AIM-120_AMRAAM>]. After launch, a stage of acceleration is followed by the missile's ascent to a higher altitude. Since the air is less dense at higher altitudes, the flight range can be increased by maintaining a high altitude for as long as possible. To keep the missile model simple but realistic, we implemented a Proportional Navigation (PN) guidance law [25], which steers the missile toward the target after it reaches the desired cruise velocity and altitude.
The PN guidance law's solution provides the desired acceleration to change the missile heading to intercept a moving target. The acceleration is converted to the appropriate velocity vector and then sent to a lower-level controller to execute the turn. Like the F-16 unit, the user can change the missile's characteristics, including its precise launching location, initial velocity, altitude, cruise altitude, and target. More tools are available to aid with aligning the missile's initial heading in the direction of the target.
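The vector form of the PN command can be sketched as follows. The navigation constant `N=4` is a typical textbook choice, not necessarily the value used in BVRGym, and the function is a standalone illustration rather than the environment's implementation.

```python
import numpy as np

def pn_acceleration(r_missile, v_missile, r_target, v_target, N=4.0):
    """Proportional-navigation lateral acceleration, a = N * (Omega x v_m).

    r_*, v_* : 3D position/velocity vectors of missile and target
    Omega    : rotation rate of the line of sight, cross(r, v_rel) / |r|^2
    The command is perpendicular to the missile velocity and drives the
    line-of-sight rotation toward zero (a collision course).
    """
    r = r_target - r_missile                 # line-of-sight vector
    v_rel = v_target - v_missile             # relative velocity
    omega = np.cross(r, v_rel) / np.dot(r, r)  # LOS rotation rate
    return N * np.cross(omega, v_missile)    # commanded acceleration
```

On a constant-bearing (head-on) geometry the line of sight does not rotate, so the commanded acceleration is zero; a crossing target produces a lateral command that leads the intercept.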
Similar to aircraft models, missiles are complex systems; hence, observing the missile's behavior before deploying it in an RL training environment is beneficial. Figure <ref> depicts the missile's flight dynamics properties while the target performs an evasive maneuver away from the missile.
Missile response to F-16 evasive maneuver. Initialized by acceleration stage to Mach 4 and an ascent in altitude.
§ SCENARIOS
This section introduces a set of example BVR problems for the agent to solve. We start with a problem where a single aircraft faces a single incoming missile. We then present a more challenging problem in which the aircraft has to evade two incoming missiles launched from different locations. Finally, we present a one-vs-one BVR air combat scenario, where the agent needs to figure out how to defeat an adversarial aircraft.
§.§ Evading a BVR Missile
Consider a situation with one unit on each team: a single F-16 aircraft on the blue team $\mathcal{B}$ and a missile launched by the red team $\mathcal{R}$. In this environment, the agent aims to find a policy that maximizes the miss distance (MD) between itself and the incoming missile.
The following observations are available to the agent at each time step
s_t = (h, v_{D}, v, \psi, \nu, \tau, \eta, \beta, \rho).
These observations are chosen to represent parts of a realistic air combat scenario. In such cases, when a missile is launched, the knowledge of the current missile location is usually not available since the missile is too small for the radar to detect at long ranges. However, tracking the adversary aircraft that launched it is much less complicated. Thus, an assumption can be made: if we track an aircraft and detect a sudden flash (usually representing a missile launch), we can assume that the missile was launched from the aircraft where the flash occurred. Modern military aircraft are equipped with Missile Approach Warning systems (MAW) that may detect the flash associated with the missile launch.
When a missile launch has occurred, pilots tend to perform an evasive maneuver to evade the incoming missile. In BVR air combat, super maneuverability is not a must; thus, in most cases, complex maneuvers are not used in order to preserve aircraft momentum. For this reason, the action space can be broken down into the following actions.
a_t = (a_{Head}, a_{Alt}, a_{Thr}).
At each time step, the agent receives a reward $r_t = R(s_t,a_t)$ that is $r_t = 0$ except for the last step, when the missile has either hit the target or has depleted all fuel and speed so that it cannot get closer to it. In the case of a successful missile evasion, the agent then receives a reward $r_T > 0$ equivalent to the MD, which is equal to the smallest encountered distance between the agent and the missile.
If the missile hits the agent, this distance is zero, resulting in a zero reward.
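This sparse terminal reward can be sketched as a small helper; the argument names are hypothetical, and in the environment the minimum distance would be tracked over the episode rather than passed in.

```python
def evasion_reward(done, hit, min_distance_m):
    """Sparse terminal reward for the missile-evasion scenario.

    Zero at every intermediate step. On termination the reward equals
    the miss distance (smallest missile-aircraft distance encountered),
    which is zero when the missile hits the agent.
    """
    if not done:
        return 0.0
    return 0.0 if hit else min_distance_m
```

Because the signal appears only at the end of an episode, the agent must propagate credit backwards through the whole evasive maneuver, which is part of what makes the BVR setting hard for RL.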
§.§ Evading two BVR Missiles
We now consider an agent subjected to two incoming missiles launched simultaneously from different locations. This is an interesting problem to study since pilots in such situations must carefully consider their options when making maneuvers to avoid missiles. Focusing purely on evading one of the two missiles can expose the aircraft to the other one. Thus, multiple threats require splitting the focus and finding solutions that can avoid both. Additionally, more missiles usually mean more maneuvering, which makes controlling energy depletion challenging.
The observation assumptions in the two-missile case are similar to those in the single-missile case. The agent has access to the launch locations of both missiles, M1 and M2, and the state observation
s_t = (h, v_{D}, v, \psi, \nu^{M1}, \tau^{M1}, \eta^{M1}, \beta^{M1}, \rho^{M1}, \nu^{M2}, \tau^{M2}, \eta^{M2}, \beta^{M2}, \rho^{M2}),
with the same action space as in the single-missile problem. The reward is provided in a similar manner and is proportional to the smallest encountered distance between the agent and either of the two missiles. At every time step, the agent receives rewards $r_t^{M1} = R(s_t,a_t)$ and $r_t^{M2} = R(s_t,a_t)$, both equal to $0$ except at the last step. In the case of a successful missile evasion, the agent receives a reward $\min(r_{T}^{M1}, r_{T}^{M2}) > 0$ equivalent to the MD, the smallest encountered distance between the agent and the incoming missiles.
§.§ BVR DogFight
The aircraft's radar and other sensors play a critical role in the efficacy of BVR engagements. Challenges may arise from a sensor's limited precision, range, or susceptibility to electronic countermeasures. To simplify this, we assume that the location of the enemy aircraft is known without any interference. One of the crucial decisions during air combat is the timing of when to fire and when to hold fire: engaging too soon could disclose the aircraft's location, while waiting too long could make the aircraft the target of the initial attack. BVR engagements also involve controlling the aircraft's altitude and speed to maximize missile performance after launch.
This scenario aims to find possible counterattack strategies against an enemy with known behavior.
To capture the enemy policy, we use BTs, which have been shown to be an effective tool for creating sophisticated behaviors and have been used to define behaviors within the air combat domain [19]. Since BVR combat is rarely observable in practice, with little historical data available, much of its possibility space must be assessed through simulation [26]. For this reason, it is valuable to study potential solutions against a given adversary behavior.
We have equipped the adversary aircraft with a BT that dictates the actions to take during air combat. The strategy is visualized in Figure <ref>. The main focus of this BT is to prioritize one's own safety; for this reason, missile evasion tactics are placed on the left-hand side of the tree, while offensive tactics are located on the right-hand side. The following state observation is available to the agent: $s_t = (\rho^{\mathcal{B}\mathcal{R}}, \nu^{\mathcal{B}\mathcal{R}}, \psi^{\mathcal{B}}, \rho^{\mathcal{B}\mathcal{M}}_{0}, \nu^{\mathcal{B}\mathcal{M}}_{0}, v^{\mathcal{M}}_{0}, h^{\mathcal{M}}_{0})$.
The components $(\rho^{\mathcal{B}\mathcal{M}}_{0}, \nu^{\mathcal{B}\mathcal{M}}_{0}, v^{\mathcal{M}}_{0}, h^{\mathcal{M}}_{0})$ indicate observations with respect to the launch location. If there are no active missiles launched by the $\mathcal{R}$ team aircraft, then $(\rho^{\mathcal{B}\mathcal{M}}_{0}, \nu^{\mathcal{B}\mathcal{M}}_{0}, v^{\mathcal{M}}_{0}, h^{\mathcal{M}}_{0})$ is equal to $(\rho^{\mathcal{B}\mathcal{R}}, \nu^{\mathcal{B}\mathcal{R}}, v^{\mathcal{R}}, h^{\mathcal{R}})$, indicating that the missile is located at the same place as the aircraft carrying it. The following action space is available to the agent: $a_t = (a_{Head}, a_{Alt}, a_{Thr}, a_{l})$, where the additional element is the missile launch action $a_{l}$.
In this environment, the agent receives a reward $r_t = R(s_t,a_t)$, which is equal to $0$ except at the last step. In the case of a successful adversary kill, the agent receives a reward $r_{T} = 1$. If the adversary shoots down the agent, the agent hits the ground, or the scenario time runs out, a reward of $r_{T} = -1$ is provided.
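A minimal sketch of this terminal reward, with illustrative names of our own:

```python
# Sketch of the BVR terminal reward: +1 for killing the adversary; -1 for
# being shot down, hitting the ground, or running out of time; 0 otherwise.
def bvr_terminal_reward(adversary_killed: bool, agent_down: bool,
                        timed_out: bool) -> float:
    if adversary_killed:
        return 1.0
    if agent_down or timed_out:
        return -1.0
    return 0.0  # non-terminal step

print(bvr_terminal_reward(False, False, True))  # -> -1.0
```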
§ NUMERICAL RESULTS
This section presents results obtained from training an agent in different environments. We first consider the environment where the agent is faced with one and two incoming missiles, followed by a BVR air combat scenario.
§.§ One and two missile scenario
Figure <ref> shows the training results for both the one- and two-missile scenarios. The initial conditions of the agent and the missiles are randomized; the ranges for the missiles' launch conditions and the agent's initial state are presented in Table <ref>.
Table of initial conditions.

Parameter | Value
---|---
Initial velocity: Agent | $300-365$ [m/s]
Initial velocity: M1, M2 | $280-320$ [m/s]
Initial altitude: Agent | $6000-10000$ [m]
Initial altitude: M1, M2 | $9000-11000$ [m]
Firing distance: M1, M2 | $40-80$ [km]
Initial heading: Agent | $0-360$ [deg]
Initial pitch/roll: Agent | $0, 0$ [deg]
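Assuming uniform sampling within these ranges (the sampling distribution is not stated in the text, and the dictionary keys are our own), the randomization could be sketched as:

```python
import random

# Draw one set of randomized initial conditions from the ranges listed in
# the table above; uniform sampling is our assumption.
def sample_initial_conditions(rng: random.Random) -> dict:
    return {
        "agent_velocity_mps":   rng.uniform(300.0, 365.0),
        "missile_velocity_mps": rng.uniform(280.0, 320.0),
        "agent_altitude_m":     rng.uniform(6000.0, 10000.0),
        "missile_altitude_m":   rng.uniform(9000.0, 11000.0),
        "firing_distance_km":   rng.uniform(40.0, 80.0),
        "agent_heading_deg":    rng.uniform(0.0, 360.0),
        "agent_pitch_deg":      0.0,  # pitch and roll are fixed at zero
        "agent_roll_deg":       0.0,
    }

ic = sample_initial_conditions(random.Random(0))
print(sorted(ic))  # keys of one sampled configuration
```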
A closer look at Figure <ref> reveals that the agent typically achieves a greater separation in the single-missile scenario, since it is easier to establish a strategy without the conflicting objectives present in the two-missile scenario. The trained model improved the evasive maneuver in both situations, increasing the distance between the missile and the aircraft and preventing a missile hit. The primary tactic employed is the same one pilots practice in training: lowering the aircraft's altitude into denser air, which causes the missile to slow down, while flying away from the missile's launch point. Air combat problems with high-fidelity models typically require large computational budgets, as in [12]. We decrease the search space by setting the throttle to maximum, leaving the agent with the action space $a_t = (a_{Head}, a_{Alt})$. Such changes, in combination with a lower-level flight controller, reduce the need for such a budget. Additional changes have been made compared to our previous work [27] to speed up convergence, and all parameters can be found in [15]. About one day of computing is needed to solve the problem using an Intel Core processor with ten CPU cores operating in parallel and one NVIDIA GeForce GTX 1080 GPU for neural network optimization. Changing the default settings, such as the step time, the ability to control thrust, or the size of the observation space, may significantly increase computation time.
Training results for evading one and two missile scenarios. The y-axis shows the average distance, in kilometers, by which the missile misses its target.
§.§ BVR Dogfight
In this scenario, we let two aircraft face each other with two BVR missiles each. Each scenario lasts up to 16 minutes, slightly longer than the 12-minute cap used in the related work [26]. The agent's goal in this scenario is to explore tactical policies to defeat the opposing aircraft, which behaves according to the BT formulation in Figure <ref>. The training progress is shown in Figure <ref>.
One vs. one BVR air combat training results. The figure shows the average accumulated reward over ten episodes executed in parallel.
An illustration of the success rate of utilizing the first missile.
An illustration of the success rate of utilizing the second missile.
Examining Figure <ref>, we observe that at the beginning the agent is consistently shot down by the adversary. After a period of training, the agent begins to perform better and wins more engagements than it loses. A closer view of the learning process is given by the weapon usage shown in Figures <ref> - <ref>. Figure <ref> shows the frequency of the agent's missile usage. Initially, the missiles were not fired, since the agent did not approach the enemy closely enough to get within firing range. After some training, the first missile was launched regularly and, in the later phases of training, the second missile as well. Looking at the adversary's actions, we observe that it consistently, and with a high likelihood of success, used its first missile. Once the agent became adept at dodging the first missile, the adversary began to use its second missile more frequently.
BVR air combat incorporating several tactical units often leads to computationally intensive problems. To speed up the process, we introduced the following steps: (i) reduce the search space relative to the previous environments, as described in Section <ref>; (ii) let the agent make decisions once every 10 seconds; (iii) use fixed starting positions for the agent and the adversary; and (iv) automate the missile launch, meaning the missile is launched automatically when the launch conditions are satisfied. The main reason for (iv) is to limit the exploration space, since a missile launch is a one-time action lasting approximately 2-4 minutes of simulation time, depending on the distance; if the missile is launched outside its range, it can be considered lost. The same computational resources were used as in Section <ref> above, and all parameters and the code can be found in [15].
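Step (ii) can be realized with a simple action-repeat wrapper. The sketch below uses our own naming, assumes an internal simulation step of 0.1 s, and follows a Gym-style `step` interface [16]; it is not the repository's actual implementation:

```python
class DecisionInterval:
    """Hold each chosen action for `interval` seconds of simulation time,
    so the agent makes a decision only once every `interval` seconds."""
    def __init__(self, env, interval: float = 10.0, dt: float = 0.1):
        self.env = env
        self.repeat = int(interval / dt)  # inner simulation steps per decision

    def step(self, action):
        total_reward = 0.0
        obs, done, info = None, False, {}
        for _ in range(self.repeat):
            obs, reward, done, info = self.env.step(action)
            total_reward += reward
            if done:          # stop early if the episode terminates mid-interval
                break
        return obs, total_reward, done, info
```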
§ CONCLUSION
In this work, we presented a high-fidelity environment for investigating tactics in Beyond Visual Range (BVR) air combat. We showed three case-study scenarios exploring different aspects of BVR air combat, and we suggested configuration parameters that enable users with limited computational resources to investigate these problems.
§ ACKNOWLEDGMENT
The authors gratefully acknowledge funding from Vinnova, NFFP7, dnr 2017-04875.
[1]
John Stillion.
Trends in air-to-air combat: Implications for future air superiority.
Center for Strategic and Budgetary Assessments, 2015.
[2]
Karol Kurach, Anton Raichuk, Piotr Stańczyk, Michał Zając, Olivier
Bachem, Lasse Espeholt, Carlos Riquelme, Damien Vincent, Marcin Michalski,
Olivier Bousquet, et al.
Google research football: A novel reinforcement learning environment.
In Proceedings of the AAAI conference on artificial
intelligence, volume 34, pages 4501–4510, 2020.
[3]
Hiske Overweg, Herman NC Berghuijs, and Ioannis N Athanasiadis.
Cropgym: a reinforcement learning environment for crop management.
arXiv preprint arXiv:2104.04326, 2021.
[4]
Huichu Zhang, Siyuan Feng, Chang Liu, Yaoyao Ding, Yichen Zhu, Zihan Zhou,
Weinan Zhang, Yong Yu, Haiming Jin, and Zhenhui Li.
Cityflow: A multi-agent reinforcement learning environment for large
scale city traffic scenario.
In The world wide web conference, pages 3620–3624, 2019.
[5]
David Rohde, Stephen Bonner, Travis Dunlop, Flavian Vasile, and Alexandros Karatzoglou.
Recogym: A reinforcement learning environment for the problem of
product recommendation in online advertising.
arXiv preprint arXiv:1808.00720, 2018.
[6]
Umberto J Ravaioli, James Cunningham, John McCarroll, Vardaan Gangal, Kyle
Dunlap, and Kerianne L Hobbs.
Safe reinforcement learning benchmark environments for aerospace
control systems.
In 2022 IEEE Aerospace Conference (AERO), pages 1–20. IEEE, 2022.
[7]
Shital Shah, Debadeepta Dey, Chris Lovett, and Ashish Kapoor.
Airsim: High-fidelity visual and physical simulation for autonomous vehicles.
In Field and Service Robotics: Results of the 11th International
Conference, pages 621–635. Springer, 2018.
[8]
Nathan Koenig and Andrew Howard.
Design and use paradigms for gazebo, an open-source multi-robot simulator.
In 2004 IEEE/RSJ international conference on intelligent robots
and systems (IROS)(IEEE Cat. No. 04CH37566), volume 3, pages 2149–2154.
IEEE, 2004.
[9]
Jérémy Pinguet, Philippe Feyel, and Guillaume Sandou.
A neural autopilot training platform based on a matlab and x-plane
In 2021 International Conference on Unmanned Aircraft Systems
(ICUAS), pages 1200–1209. IEEE, 2021.
[10]
David J Richter and Ricardo A Calix.
Using double deep q-learning to learn attitude control of fixed-wing aircraft.
In 2022 16th International Conference on Signal-Image Technology
& Internet-Based Systems (SITIS), pages 646–651. IEEE, 2022.
[11]
Jon Berndt.
Jsbsim: An open source flight dynamics model in c++.
In AIAA Modeling and Simulation Technologies Conference and
Exhibit, page 4923, 2004.
[12]
Adrian P Pope, Jaime S Ide, Daria Mićović, Henry Diaz, Jason C Twedt,
Kevin Alcedo, Thayne T Walker, David Rosenbluth, Lee Ritholtz, and Daniel Javorsek.
Hierarchical reinforcement learning for air combat at darpa's
alphadogfight trials.
IEEE Transactions on Artificial Intelligence, 2022.
[13]
Haiyin Piao, Zhixiao Sun, Guanglei Meng, Hechang Chen, Bohao Qu, Kuijun Lang,
Yang Sun, Shengqi Yang, and Xuanqi Peng.
Beyond-visual-range air combat tactics auto-generation by
reinforcement learning.
In 2020 international joint conference on neural networks
(IJCNN), pages 1–8. IEEE, 2020.
[14]
Zhen Yang, Deyun Zhou, Haiyin Piao, Kai Zhang, Weiren Kong, and Qian Pan.
Evasive maneuver strategy for ucav in beyond-visual-range air combat
based on hierarchical multi-objective evolutionary algorithm.
IEEE Access, 8:46605–46623, 2020.
[15]
Edvards Scukins, Markus Klein, and Lars Kroon.
<https://github.com/xcwoid/BVRGym>, 2024.
[16]
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman,
Jie Tang, and Wojciech Zaremba.
Openai gym.
arXiv preprint arXiv:1606.01540, 2016.
[17]
Oliver Biggar, Mohammad Zamani, and Iman Shames.
On modularity in reactive control architectures, with an application
to formal verification.
ACM Transactions on Cyber-Physical Systems (TCPS), 6(2):1–36, 2022.
[18]
Petter Ögren and Christopher I Sprague.
Behavior trees in robot control systems.
Annual Review of Control, Robotics, and Autonomous Systems,
5:81–107, 2022.
[19]
Matteo Iovino, Edvards Scukins, Jonathan Styrud, Petter Ögren, and
Christian Smith.
A survey of behavior trees in robotics and ai.
Robotics and Autonomous Systems, 154:104096, 2022.
[20]
Richard S Sutton and Andrew G Barto.
Reinforcement learning: An introduction.
MIT press, 2018.
[21]
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov.
Proximal policy optimization algorithms.
arXiv preprint arXiv:1707.06347, 2017.
[22]
William Koch, Renato Mancuso, Richard West, and Azer Bestavros.
Reinforcement learning for uav attitude control.
ACM Transactions on Cyber-Physical Systems, 3(2):1–21, 2019.
[23]
Eivind Bøhn, Erlend M Coates, Signe Moe, and Tor Arne Johansen.
Deep reinforcement learning attitude control of fixed-wing uavs using
proximal policy optimization.
In 2019 International Conference on Unmanned Aircraft Systems (ICUAS). IEEE, 2019.
[24]
Michele Colledanchise and Petter Ögren.
Behavior trees in robotics and AI: An introduction.
CRC Press, 2018.
[25]
Rafael T Yanushevsky.
Modern missile guidance.
CRC Press, 2018.
[26]
Joao PA Dantas, Andre N Costa, Diego Geraldo, Marcos ROA Maximo, and Takashi Yoneyama.
Engagement decision support for beyond visual range air combat.
In 2021 Latin American Robotics Symposium (LARS), 2021 Brazilian
Symposium on Robotics (SBR), and 2021 Workshop on Robotics in Education
(WRE), pages 96–101. IEEE, 2021.
[27]
Edvards Scukins, Markus Klein, and Petter Ögren.
Enhancing situation awareness in beyond visual range air combat with
reinforcement learning-based decision support.
In 2023 International Conference on Unmanned Aircraft Systems
(ICUAS), pages 56–62. IEEE, 2023.
# $B^{*}_{c}$ meson parameters and radiative decay width within the covariant
confined quark model
Aidos Issadykov<EMAIL_ADDRESS>Sayabek K. Sakhiyev The Institute of
Nuclear Physics,
Ministry of Energy of the Republic of Kazakhstan, 050032 Almaty, KAZAKHSTAN
###### Abstract
In this work we predict the parameters of the $B^{*}_{c}$ meson. Simple
assumptions yield the parameters $m_{B_{c}^{*}}=6329\pm 10$ MeV and
$f_{B_{c}^{*}}=535.5\pm 57.8$ MeV (for $\Lambda_{B_{c}^{*}}=2.26\pm 0.14$ GeV
in the covariant confined quark model). We calculated the widths of the
radiative decays of $B^{*}_{q}$ mesons, where $q=u/d,s,c$, and compared them
with other theoretical works. It is shown that the width of the $B_{c}^{*}$
meson is, as expected, very sensitive to the mass $m_{B_{c}^{*}}$ and less so
to the size parameter $\Lambda_{B_{c}^{*}}$.
###### pacs:
12.39.Ki, 13.30.Ce, 14.40.Nd
## I Introduction
The decay mode $B_{c}\to J/\psi\ell\nu$ of the $B_{c}$ meson shows a
disagreement of about two standard deviations between experimental data and
theoretical predictions Aaij:2017tyk . Meanwhile, its vector partner
$B_{c}^{*}$ has still not been found. The mass difference is expected to be
too small for it to decay strongly into the $B_{c}$ meson and a light meson.
Thus, the $B_{c}^{*}$ meson cannot decay strongly but can decay only weakly
and electromagnetically. As a result, the partial widths of the
electromagnetic decay channels, especially the single-photon decay channel,
are dominant. Since the $B^{*}_{c}$ meson has not yet been observed, there are
several theoretical predictions of its mass and leptonic decay constant: in
the relativistic quark model Ebert:2002pp , Lattice QCD Dowdall:2012ab ;
Colquhoun:2015oha , QCD Sum Rules Wang:2012kw and the nonrelativistic
renormalization group Penin:2004xi . The properties of the $B^{*}_{c}$ meson
in the relativistic quark model Ebert:2002pp are as follows:
$\displaystyle m_{B_{c}^{*}}=6332\quad~{}\text{MeV},\qquad
f_{B_{c}^{*}}=503\quad~{}\text{MeV}.$ (1)
The mass and leptonic decay constant of the $B^{*}_{c}$ meson in Lattice
QCD Dowdall:2012ab ; Colquhoun:2015oha are:
$\displaystyle m_{B_{c}^{*}}=6332\pm 9\quad~{}\text{MeV},\qquad
f_{B_{c}^{*}}=422\pm 13\quad~{}\text{MeV}.$ (2)
The mass and leptonic decay constant of the $B^{*}_{c}$ meson from QCD Sum
Rules Wang:2012kw are:
$\displaystyle m_{B_{c}^{*}}=6337\quad~{}\text{MeV},\qquad
f_{B_{c}^{*}}=384\quad~{}\text{MeV}.$ (3)
The nonrelativistic renormalization group Penin:2004xi gives a prediction
for the mass difference of the $B^{*}_{c}$ and $B_{c}$ mesons:
$\Delta m_{({B_{c}^{*}-B_{c}})}=50\pm 17^{+15}_{-12}\quad~{}\text{MeV}.$ (4)
The radiative decay of the $B_{c}^{*}$ meson was calculated in Chang:2020xvu ;
Simonis:2018rld ; Jena:2002is ; Priyadarsini:2016tiu ; Patnaik:2017cbl ;
Ebert:2002xz ; Ebert:2002pp ; Lahde:1999ih ; Lahde:2002wj ; Choi:2007se ;
Choi:2009ai ; Eichten:1994gt ; Kiselev:1994rc ; Fulcher:1998ka ; Nobes:2000pm
; Monteiro:2016rzi ; AbdElHady:2005bv , with predicted partial widths of less
than 1 keV, so the branching ratios of its weak decay modes may be within the
detection ability of current experiments. Several works are dedicated to
investigating the semileptonic decays of $B_{c}^{*}$ Wang:2012hu ; Dai:2018vzz ;
Wang:2018ryc ; Chang:2020xvu . The purpose of this paper is to extend our
model and predict the model parameters of the unobserved $B_{c}^{*}$. We
studied $b\to c$, $b\to s$ and $b\to d(u)$ transitions in the framework of the
covariant confined quark model (CCQM) in our previous works Soni:2021fky ;
Soni:2020bvu ; Issadykov:2018myx ; Dubnicka:2016nyy ; Issadykov:2015iba .
## II Model
The covariant confined quark model Efimov:1988yd ; Efimov:1993ei ;
Branz:2009cd is an effective quantum field approach to hadronic interactions
based on an interaction Lagrangian of hadrons interacting with their
constituent quarks.
The effective Lagrangian describing the transition of a meson
$M(q_{1}\bar{q}_{2})$ into its constituent quarks $q_{1}$ and $\bar{q}_{2}$ reads
$\displaystyle{\mathcal{L}}_{\rm int}(x)$ $\displaystyle=$ $\displaystyle
g_{M}M(x)\cdot J_{M}(x)+{\rm h.c.},$ $\displaystyle J_{M}(x)$ $\displaystyle=$
$\displaystyle\int\\!\\!dx_{1}\\!\\!\int\\!\\!dx_{2}F_{M}(x,x_{1},x_{2})\bar{q}_{2}(x_{2})\Gamma_{M}q_{1}(x_{1})$
(5)
with $\Gamma_{M}$ a Dirac matrix which projects onto the spin quantum number
of the meson field $M(x)$. The vertex function $F_{M}$ characterizes the
finite size of the meson. Translational invariance requires the function
$F_{M}$ to fulfill the identity
$F_{M}(x+a,x_{1}+a,x_{2}+a)=F_{M}(x,x_{1},x_{2})$ for any four-vector $a$. A
specific form for the vertex function is adopted
$F_{M}(x,x_{1},x_{2})=\delta(x-w_{1}x_{1}-w_{2}x_{2})\Phi_{M}((x_{1}-x_{2})^{2}),$
(6)
where $\Phi_{M}$ is the correlation function of the two constituent quarks
with masses $m_{q_{1}}$ and $m_{q_{2}}$. The ratios of the quark masses
$w_{i}$ are defined as
$w_{q_{1}}=\frac{m_{q_{1}}}{m_{q_{1}}+m_{q_{2}}},\quad
w_{q_{2}}=\frac{m_{q_{2}}}{m_{q_{1}}+m_{q_{2}}},\quad w_{1}+w_{2}=1.$ (7)
A simple Gaussian form of the vertex function $\bar{\Phi}_{M}(-\,k^{2})$ is
selected
$\bar{\Phi}_{M}(-\,k^{2})=\exp\left(k^{2}/\Lambda_{M}^{2}\right)$ (8)
with the parameter $\Lambda_{M}$ linked to the size of the meson. The minus
sign in the argument is chosen to indicate that we are working in the
Minkowski space. Since $k^{2}$ turns into $-\,k_{E}^{2}$ in the Euclidean
space, the form (8) has the appropriate fall-off behavior in the Euclidean
region. Any choice for $\Phi_{M}$ is appropriate as long as it falls off
sufficiently fast in the ultraviolet region of the Euclidean space to render
the corresponding Feynman diagrams ultraviolet finite. We choose a Gaussian
form for calculational convenience.
The coupling constant $g_{M}$ in Eq. (5) is determined by the so-called
compositeness condition. The compositeness condition requires that the
renormalization constant $Z_{B}$ of the elementary meson field $B(x)$ is set
to zero, i.e.
$Z_{B}=1-\widetilde{\Pi}^{\prime}_{B}(p^{2})=0,\qquad(p^{2}=m^{2}_{B})$ (9)
where $\Pi^{\prime}_{B}(p^{2})$ is the derivative of the mass function.
$S$-matrix elements are described by the quark-loop diagrams which are the
convolution of the vertex functions and quark propagators. In the evaluation
of the quark-loop diagrams we use the local Dirac propagator
$S_{q}(k)=\frac{1}{m_{q}-\not\\!k-i\epsilon}=\frac{m_{q}+\not\\!k}{m^{2}_{q}-k^{2}-i\epsilon}$
(10)
with an effective constituent quark mass $m_{q}$.
The meson functions in the case of the pseudoscalar and vector meson are
written as
$\displaystyle\widetilde{\Pi}_{P}(p^{2})$ $\displaystyle=$ $\displaystyle
N_{c}g_{P}^{2}\int\frac{d^{4}k}{(2\pi)^{4}i}\widetilde{\Phi}^{2}_{P}(-k^{2})\mbox{\rm{tr}}\Big{(}\gamma^{5}S_{1}(k+w_{1}p)\gamma^{5}S_{2}(k-w_{2}p)\Big{)},$
(11) $\displaystyle\widetilde{\Pi}^{\mu\nu}_{V}(p^{2})$ $\displaystyle=$
$\displaystyle
N_{c}g_{V}^{2}\int\frac{d^{4}k}{(2\pi)^{4}i}\widetilde{\Phi}^{2}_{V}(-k^{2})\mbox{\rm{tr}}\Big{(}\gamma^{\mu}S_{1}(k+w_{1}p)\gamma^{\nu}S_{2}(k-w_{2}p)\Big{)}$
(12) $\displaystyle=$ $\displaystyle
g^{\mu\nu}\widetilde{\Pi}_{V}(p^{2})+p^{\mu}p^{\nu}\widetilde{\Pi}^{\parallel}_{V}(p^{2}).$
Here $N_{c}=3$ is the number of colors. Since the vector meson is on its mass shell, $\epsilon_{V}\cdot p=0$, we only need to keep the part
$\widetilde{\Pi}_{V}(p^{2})$. Substituting the derivative of the mass
functions into Eq. (9) one can determine the coupling constant $g_{B}$ as a
function of other model parameters. The loop integrations in Eqs. (11) and
(12) proceed by using the Fock-Schwinger representation of quark propagators
$S_{q}(k+wp)=\frac{1}{m_{q}-\not\\!k-w\not\\!p}=(m_{q}+\not\\!k+w\not\\!p)\int\limits_{0}^{\infty}\\!\\!d\alpha\,e^{-\alpha[m_{q}^{2}-(k+wp)^{2}]}.$
(13)
In the obtained integrals over the Fock-Schwinger parameters
$0\leq\alpha_{i}<\infty$ we introduce an additional integration over the
proper time which converts the set of Fock-Schwinger parameters into a
simplex. In the general case one has
$\prod\limits_{i=1}^{n}\int\limits_{0}^{\infty}\\!\\!d\alpha_{i}f(\alpha_{1},\ldots,\alpha_{n})=\int\limits_{0}^{\infty}\\!\\!dtt^{n-1}\prod\limits_{i=1}^{n}\int\\!\\!d\alpha_{i}\delta\left(1-\sum\limits_{i=1}^{n}\alpha_{i}\right)f(t\alpha_{1},\ldots,t\alpha_{n}).$
(14)
Finally, we cut the integration over the proper time at the upper limit by
introducing an infrared cutoff $\lambda$. One has
$\int\limits_{0}^{\infty}dt(\ldots)\to\int\limits_{0}^{1/\lambda^{2}}dt(\ldots).$
(15)
This procedure allows us to remove all possible thresholds present in the
initial quark diagram. Thus the infrared cutoff parameter $\lambda$
effectively guarantees the confinement of quarks within hadrons. This method
is quite general and can be used for diagrams with an arbitrary number of
loops and propagators. In the CCQM the infrared cutoff parameter $\lambda$ is
taken to be universal for all physical processes.
The model parameters are determined by fitting calculated quantities of basic
processes to available experimental data or lattice simulations (for details,
see Ref. Branz:2009cd ).
## III Matrix elements and one-photon radiative decay width
The free Lagrangian of quarks is gauged in the standard manner by using
minimal substitution which gives
$\mathcal{L}^{\rm em}_{\rm int}(x)=e\,A_{\mu}(x)\,J^{\mu}_{\rm em}(x),\qquad
J^{\mu}_{\rm
em}(x)=e_{b}\,\bar{b}(x)\gamma^{\mu}b(x)+e_{q}\,\bar{q}(x)\gamma^{\mu}q(x)$
(16)
where $e_{b}$ and $e_{q}$ are the quark charges in units of the positron
charge. The radiative decays of a vector mesons into a pseudoscalar meson and
photon $X_{1}\to X_{2}\gamma$ are described by the Feynman diagrams shown in
Fig. 1.
Figure 1: Feynman diagrams contributing in leading order to the dominant one-
photon radiative transitions $X_{1}(p)\to\gamma(q_{2})+X_{2}(q_{1})$
Ganbold:2021nvj .
The invariant matrix element for the one-photon radiative transition
$X_{1}\to\gamma X_{2}$ reads
$\displaystyle{\cal
M}_{{X_{1}}\to\gamma{X_{2}}}(p;p^{\prime},q)=eg_{X_{1}}g_{X_{2}}\epsilon^{V}_{\nu}(p)\epsilon^{\gamma}_{\mu}(q)\int\\!\\!dx\\!\\!\int\\!\\!dy\\!\\!\int\\!\\!dz\,e^{-ipx+ip^{\prime}y+iqz}\langle\,T\\{\bar{J}_{X_{1}}^{\nu}(x)J^{\mu}_{\rm
em}(z)J_{X_{2}}(y)\\}\rangle_{0},$ (17)
One has to note that there is an additional piece in the Lagrangian related to
the gauging of the nonlocal interactions of hadrons with their constituents
Branz:2009cd . This piece gives additional contributions to electromagnetic
processes; however, they are identically zero for the process
$X_{1}\to X_{2}\gamma$ due to its anomalous nature.
Using the Fourier transforms of the quark currents, we come to the final
result
$\displaystyle{\cal M}_{{X_{1}}\to\gamma{X_{2}}}(p;p^{\prime},q)$
$\displaystyle=$
$\displaystyle(2\pi)^{4}i\,\delta(p-p^{\prime}-q)M(p,p^{\prime}),$
$\displaystyle M(p,p^{\prime})$ $\displaystyle=$
$\displaystyle(-3i)eg_{X_{1}}g_{X_{2}}\epsilon^{V}_{\nu}(p)\epsilon^{\gamma}_{\mu}(q)\,\left(e_{b}M^{\mu\nu}_{b}+e_{q}M^{\mu\nu}_{q}\right)$
$\displaystyle M^{\mu\nu}_{b}$ $\displaystyle=$
$\displaystyle\int\\!\\!\frac{dk}{(2\pi)^{4}i}\widetilde{\Phi}_{X_{1}}(-\ell_{1}^{2})\widetilde{\Phi}_{X_{2}}(-\ell_{2}^{2})\mbox{\rm{tr}}\left[S_{q}(k)\gamma^{\nu}S_{b}(k-p)\gamma^{\mu}S_{b}(k-p^{\prime})\gamma^{5}\right]$
$\displaystyle M^{\mu\nu}_{q}$ $\displaystyle=$
$\displaystyle\int\\!\\!\frac{dk}{(2\pi)^{4}i}\widetilde{\Phi}_{X_{1}}(-\ell_{3}^{2})\widetilde{\Phi}_{X_{2}}(-\ell_{4}^{2})\mbox{\rm{tr}}\left[S_{q}(k+p^{\prime})\gamma^{\mu}S_{q}(k+p)\gamma^{\nu}S_{b}(k)\gamma^{5}\right]$
(18)
where $\ell_{1}=k-w_{2}\,p$, $\ell_{2}=k-w_{2}\,p^{\prime}$,
$\ell_{3}=k+w_{1}\,p$, and $\ell_{4}=k+w_{1}\,p^{\prime}$. The ratios of quark
masses are defined by Eq. (7). Now one has $m_{q_{1}}=m_{b}$ and
$m_{q_{2}}=m_{q}$ with $q=u,d,s$. Using standard calculation techniques and
taking into account the transversality conditions
$\epsilon^{\gamma}_{\mu}(q)q^{\mu}=0$ and $\epsilon^{V}_{\nu}(p)p^{\nu}=0$,
one arrives at the standard form of the matrix element
$M(p,p^{\prime})=e\,g_{X_{1}X_{2}\gamma}\,\varepsilon^{pq\mu\nu}\epsilon^{\gamma}_{\mu}(q)\epsilon^{V}_{\nu}(p),$
(19)
where
$g_{X_{1}X_{2}\gamma}=e_{b}I_{b}(m^{2}_{X_{1}},m^{2}_{X_{2}})+e_{q}I_{q}(m^{2}_{X_{1}},m^{2}_{X_{2}})$
is the radiative decay constant. The quantities $I_{b,q}$ are defined by
twofold integrals which are calculated numerically. The electromagnetic decay
width is written as
$\Gamma(X_{1}\to
X_{2}+\gamma)=\frac{\alpha}{24}m_{X_{1}}^{3}\left(1-\frac{m_{X_{2}}^{2}}{m_{X_{1}}^{2}}\right)^{3}g_{X_{1}X_{2}\gamma}^{2}\,.$
(20)
Here $\alpha=e^{2}/4\pi=1/137.036$ is the fine-structure constant.
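As a numerical illustration of Eq. (20): the coupling value below is back-solved from the central width quoted later in Table 3 and is purely illustrative, not a model input.

```python
ALPHA = 1.0 / 137.036  # fine-structure constant

def gamma_keV(m1_gev: float, m2_gev: float, g_per_gev: float) -> float:
    """Eq. (20): width of X1 -> X2 + gamma.
    m1, m2 in GeV; g in GeV^-1, so the width comes out in GeV."""
    width_gev = (ALPHA / 24.0) * m1_gev**3 \
        * (1.0 - (m2_gev / m1_gev)**2)**3 * g_per_gev**2
    return width_gev * 1.0e6  # 1 GeV = 10^6 keV

# B_c* -> B_c gamma with an illustrative coupling g ~ 0.34 GeV^-1:
print(round(gamma_keV(6.329, 6.27447, 0.34), 3))  # -> 0.045 keV
```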
## IV Numerical results
The model parameters comprise the constituent quark masses and the meson size
parameters. They are fixed by fitting basic quantities, such as leptonic decay
widths, to experimental data or lattice simulations; the differences are taken
as the absolute uncertainties of the respective parameters. The parameters are
determined by minimizing the functional
$\chi^{2}=\sum\limits_{i}\frac{(y_{i}^{\rm expt}-y_{i}^{\rm
theor})^{2}}{\sigma^{2}_{i}}$, where $\sigma_{i}$ is the experimental
uncertainty. If $\sigma_{i}$ is too small, we take its value to be 10$\%$ of
the measurement. We have also observed that the errors of the fitted
parameters are of the order of 10$\%$. Thus, the theoretical error of the CCQM
is estimated to be of the order of 10$\%$ at the level of matrix elements and
of the order of 15$-$20$\%$ at the level of widths. For the present
computations, we use the model parameters obtained with the updated
least-squares fit performed in Refs. Ivanov:2015tru ; Ganbold:2014pua ;
Dubnicka:2016nyy .
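The fit criterion just described can be sketched as follows (function names are ours; the 10% floor on small uncertainties is applied as stated in the text):

```python
def chi_squared(y_expt, y_theor, sigma):
    """Least-squares functional with a 10% floor on small experimental
    uncertainties, as described above."""
    total = 0.0
    for ye, yt, s in zip(y_expt, y_theor, sigma):
        s_eff = max(s, 0.10 * abs(ye))  # floor: at least 10% of the measurement
        total += ((ye - yt) / s_eff) ** 2
    return total

# Example with the rho -> pi gamma row of Table 1 (widths in keV):
print(round(chi_squared([67.0], [75.7], [7.5]), 2))  # -> 1.35
```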
Table 1: Input values for some basic electromagnetic decay widths and our least-squares fit values (in keV).

Process | Fit Values | Data ParticleDataGroup:2020ssz
---|---|---
$\rho^{\pm}\to\pi^{\pm}\gamma$ | 75.7$\pm$ 15.1 | 67 $\pm$ 7.5
$\omega\to\pi^{0}\gamma$ | 679$\pm$ 135.8 | 713 $\pm$ 26
$K^{\ast\pm}\to K^{\pm}\gamma$ | 55.8$\pm$ 11.2 | 46.8 $\pm$ 4.7
$K^{\ast 0}\to K^{0}\gamma$ | 132$\pm$ 26.4 | 116 $\pm$ 10
$D^{\ast\pm}\to D^{\pm}\gamma$ | 0.75$\pm$ 0.15 | 1.33 $\pm$ 0.37
$J/\psi\to\eta_{c}\gamma$ | 1.77$\pm$ 0.35 | 1.58 $\pm$ 0.37
The results of the least-squares fit used in the present study are given in
Table 1. The agreement between the fit and the experimental data is quite
satisfactory. The result for $J/\psi\to\eta_{c}\gamma$ agrees with the one
given in Ganbold:2021nvj (see Table II therein).
We expect a strong relation between the pseudoscalar $B_{q}$ and vector
$B^{*}_{q}$ mesons. Table 2 lists the leptonic decay constants and masses of
$B_{q}^{(*)}$ mesons from the PDG ParticleDataGroup:2020ssz and the
corresponding fitted size parameters from our previous works in the CCQM
Issadykov:2015iba ; Dubnicka:2016nyy ; Dubnicka:2017job ; Issadykov:2017wlb ;
Issadykov:2018myx .
The leptonic decay constants in the CCQM are defined by Eq. (10) of
Issadykov:2017wlb .
Table 2: Leptonic decay constants and meson masses (in MeV) from the PDG ParticleDataGroup:2020ssz (except the $B^{*}_{c}$ meson parameters) and the corresponding model size parameter $\Lambda$ (in GeV) from our previous works Issadykov:2015iba ; Dubnicka:2016nyy ; Dubnicka:2017job ; Issadykov:2017wlb ; Issadykov:2018myx .

 | $B_{c}$ | $B^{*}_{s}$ | $B_{s}$ | $B^{*0}$ | $B^{0}$ | $B^{+}$
---|---|---|---|---|---|---
$m$ | $6274.47\pm 0.32$ | $5415.4^{+1.8}_{-1.5}$ | $5366.88\pm 0.14$ | $5324.70\pm 0.21$ | $5279.65\pm 0.12$ | $5279.34\pm 0.12$
$f$ | 489 | 229 | 238.7 | 196 | 193 | 193
$\Lambda$ | 2.73 | 1.79 | 2.05 | 1.80 | 1.96 | 1.96
From Table 2 one finds the following mass differences between the pseudoscalar
and vector mesons:
$\displaystyle\Delta m_{({B_{s}^{*}-B_{s}})}=49\quad~{}\text{MeV},$ (21)
$\displaystyle\Delta m_{({B^{*0}-B^{0}})}=45\quad~{}\text{MeV},$ (22)
so the mass of the $B_{c}^{*}$ meson is assumed as:
$\displaystyle\Delta m_{({B_{c}^{*}-B_{c}})}=55\pm
10\quad~{}\text{MeV,}\quad~{}\text{then}\quad m_{B_{c}^{*}}=6329\pm
10\quad~{}\text{MeV,}$ (23)
which is within the predictions of other modelsEbert:2002pp ; Dowdall:2012ab ;
Colquhoun:2015oha ; Wang:2012kw ; Penin:2004xi .
The ratios of the size parameters of the $B_{q}^{(*)}$ mesons from our
previous works Issadykov:2015iba ; Dubnicka:2016nyy ; Dubnicka:2017job ;
Issadykov:2017wlb ; Issadykov:2018myx are as follows:
$\displaystyle\Delta\Lambda_{({B_{s}^{*}/B_{s}})}=0.876,$ (24)
$\displaystyle\Delta\Lambda_{({B^{*0}/B^{0}})}=0.921,$ (25)
so the size parameter $\Lambda_{B_{c}^{*}}$ is assumed as:
$\displaystyle\Delta\Lambda_{({B_{c}^{*}/B_{c}})}=0.83\pm
0.05,\quad~{}\text{then}\quad\Lambda_{B_{c}^{*}}=2.26\pm
0.14\quad~{}\text{GeV.}$ (26)
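A quick numerical check of these two assumptions, using the $B_{c}$ inputs from Table 2:

```python
m_Bc   = 6274.47  # PDG B_c mass [MeV], from Table 2
lam_Bc = 2.73     # fitted B_c size parameter [GeV], from Table 2

# Assumed hyperfine splitting and size-parameter ratio from the text:
m_Bc_star   = m_Bc + 55.0    # 6329.47 MeV, consistent with 6329 +- 10 MeV
lam_Bc_star = 0.83 * lam_Bc  # 2.2659 GeV, consistent with 2.26 +- 0.14 GeV
print(m_Bc_star, lam_Bc_star)
```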
Taking these two parameters into account, we calculated the width of the
radiative decay $\Gamma(B^{*+}_{c}\to B^{+}_{c}\gamma)$ and the leptonic decay
constant $f_{B^{*}_{c}}$; the results are given in Table 3 as functions of the
mass (6319$-$6339 MeV) and size parameter $\Lambda$ (2.12$-$2.40 GeV) of the
$B^{*}_{c}$ meson.
Table 3: The widths of the radiative decay of the $B^{*}_{c}$ meson as functions of the mass and $\Lambda$ parameters.

$m_{B_{c}^{*}}=6319$ MeV | $\Gamma(B^{*+}_{c}\to B^{+}_{c}\gamma),~{}\text{(keV)}$ | $f_{B_{c}^{*}},~{}\text{(MeV)}$
---|---|---
$\Lambda=2.12$ | 0.023 | 481
$\Lambda=2.19$ | 0.024 | 508.5
$\Lambda=2.26$ | 0.025 | 536.4
$\Lambda=2.33$ | 0.026 | 564.6
$\Lambda=2.40$ | 0.027 | 593.3
$m_{B_{c}^{*}}=6324$ MeV | $\Gamma(B^{*+}_{c}\to B^{+}_{c}\gamma),~{}\text{(keV)}$ | $f_{B_{c}^{*}},~{}\text{(MeV)}$
$\Lambda=2.12$ | 0.032 | 479.9
$\Lambda=2.19$ | 0.033 | 507.3
$\Lambda=2.26$ | 0.034 | 535
$\Lambda=2.33$ | 0.035 | 563.1
$\Lambda=2.40$ | 0.036 | 591.6
$m_{B_{c}^{*}}=6329$ MeV | $\Gamma(B^{*+}_{c}\to B^{+}_{c}\gamma),~{}\text{(keV)}$ | $f_{B_{c}^{*}},~{}\text{(MeV)}$
$\Lambda=2.12$ | 0.042 | 478.8
$\Lambda=2.19$ | 0.044 | 506
$\Lambda=2.26$ | 0.045 | 533.6
$\Lambda=2.33$ | 0.047 | 561.6
$\Lambda=2.40$ | 0.048 | 589.9
$m_{B_{c}^{*}}=6339$ MeV | $\Gamma(B^{*+}_{c}\to B^{+}_{c}\gamma),~{}\text{(keV)}$ | $f_{B_{c}^{*}},~{}\text{(MeV)}$
$\Lambda=2.12$ | 0.069 | 476.5
$\Lambda=2.19$ | 0.072 | 503.5
$\Lambda=2.26$ | 0.074 | 530.8
$\Lambda=2.33$ | 0.077 | 558.5
$\Lambda=2.40$ | 0.079 | 586.5
As expected, the width of the $\Gamma(B^{*+}_{c}\to B^{+}_{c}\gamma)$ decay in
our calculations depends much more strongly on the choice of the $B^{*}_{c}$
meson mass than on the choice of $\Lambda_{B^{*}_{c}}$, as shown in Figure 2,
whereas the leptonic decay constant $f_{B^{*}_{c}}$ depends mainly on the
choice of $\Lambda_{B^{*}_{c}}$.
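This mass sensitivity is driven by the kinematic factor $m_{X_{1}}^{3}(1-m_{X_{2}}^{2}/m_{X_{1}}^{2})^{3}$ in Eq. (20); at fixed coupling, this factor alone reproduces the spread across Table 3:

```python
m_Bc = 6.27447  # PDG B_c mass [GeV]

def kin(m1: float) -> float:
    """Kinematic factor m1^3 (1 - m2^2/m1^2)^3 from Eq. (20)."""
    return m1**3 * (1.0 - (m_Bc / m1)**2)**3

# Width ratio between the extreme mass assumptions at fixed coupling:
print(round(kin(6.339) / kin(6.319), 2))  # ~3.0, cf. 0.069/0.023 in Table 3
```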
Figure 2: The width $\Gamma(B^{*+}_{c}\to B^{+}_{c}\gamma)$ in dependence on
the choice of the $B^{*}_{c}$ meson mass and the size parameter
$\Lambda_{B^{*}_{c}}$.
In Table 4 we compare the radiative decay widths of the $B^{*}_{q}$ mesons
obtained within the covariant confined quark model with other theoretical
predictions. For $\Gamma(B^{*+}_{c}\to B^{+}_{c}\gamma)$ we used the central
values of the assumed parameters ($m_{B_{c}^{*}}=6329$ MeV and
$\Lambda_{B_{c}^{*}}=2.26$ GeV).
Table 4: The widths of radiative decays of $B^{*}_{q}$ mesons in units of keV. | $\Gamma(B^{*0}\to B^{0}\gamma)$ | $\Gamma(B^{*+}\to B^{+}\gamma)$ | $\Gamma(B^{*0}_{s}\to B^{0}_{s}\gamma)$ | $\Gamma(B^{*+}_{c}\to B^{+}_{c}\gamma)$
---|---|---|---|---
This work | $0.117\pm 0.022$ | $0.362\pm 0.072$ | $0.094\pm 0.018$ | $0.045\pm 0.009$
Ebert:2002xz ; Ebert:2002pp | 0.070 | 0.19 | 0.054 | 0.033
Simonis:2018rld | 0.165 | 0.520 | 0.115 | 0.039
Jena:2002is | 0.14 | 0.52 | 0.06 | 0.030
Chang:2020xvu | $0.116\pm 0.006$ | $0.349\pm 0.018$ | $0.084^{+11}_{-9}$ | $0.049^{+28}_{-21}$
Priyadarsini:2016tiu ; Patnaik:2017cbl | 0.181 | 0.577 | 0.119 | 0.023
Lahde:1999ih ; Lahde:2002wj | 0.0096 | 0.0674 | 0.148 | 0.034
Choi:2007se ; Choi:2009ai | 0.13 | 0.4 | 0.068 | 0.022
Eichten:1994gt | | | | 0.135
Kiselev:1994rc | | | | 0.060
Fulcher:1998ka | | | | 0.059
Nobes:2000pm | | | | 0.050
Monteiro:2016rzi | | | | 0.019
AbdElHady:2005bv | | | | 0.019
## V CONCLUSION
In this work we made naive assumptions for the $B_{c}^{*}$ meson mass and size
parameter $\Lambda_{B_{c}^{*}}$, namely $m_{B_{c}^{*}}=6329\pm 10$ MeV and
$\Lambda_{B_{c}^{*}}=2.26\pm 0.14$ GeV. Using these numbers, we calculated the
leptonic decay constant of the $B_{c}^{*}$ meson and the widths of the
radiative decays of the $B^{*}_{q}$ mesons, where $q=u/d,s,c$. Table 3 and
Fig. 2 show that, as expected, the width $\Gamma(B^{*+}_{c}\to
B^{+}_{c}\gamma)$ is very sensitive to the mass $m_{B_{c}^{*}}$ and less so to
the size parameter $\Lambda_{B_{c}^{*}}$, while the leptonic decay constant
$f_{B^{*}_{c}}$ depends strongly on the choice of $\Lambda_{B^{*}_{c}}$. There
is significant scatter among the decay-width values collected in Table 4;
their experimental measurement would therefore substantially constrain the
existing theoretical approaches to these processes.
## VI ACKNOWLEDGEMENTS
We would like to thank Prof. Mikhail A. Ivanov for useful discussions of some
aspects of this work. This research has been funded by the Science Committee
of the Ministry of Education and Science of the Republic of Kazakhstan (Grant
No. AP09057862).
## References
* (1) R. Aaij et al. [LHCb], Phys. Rev. Lett. 120 (2018) no.12, 121801 doi:10.1103/PhysRevLett.120.121801 [arXiv:1711.05623 [hep-ex]].
* (2) D. Ebert, R. N. Faustov and V. O. Galkin, Phys. Rev. D 67 (2003) 014027
* (3) R. J. Dowdall, C. T. H. Davies, T. C. Hammant and R. R. Horgan, Phys. Rev. D 86 (2012) 094510 [arXiv:1207.5149 [hep-lat]].
* (4) B. Colquhoun et al. [HPQCD Collaboration], Phys. Rev. D 91 (2015) no.11, 114509 [arXiv:1503.05762 [hep-lat]].
* (5) Z. G. Wang, Eur. Phys. J. A 49 (2013) 131
* (6) A. A. Penin, A. Pineda, V. A. Smirnov and M. Steinhauser, Phys. Lett. B 593 (2004) 124 Erratum: [Phys. Lett. B 677 (2009) no.5, 343]
* (7) V. Simonis, arXiv:1803.01809 [hep-ph].
* (8) S. N. Jena, P. Panda and T. C. Tripathy, Nucl. Phys. A 699 (2002) 649.
* (9) Q. Chang, X. L. Wang, J. Zhu and X. N. Li, Adv. High Energy Phys. 2020 (2020), 3079670 doi:10.1155/2020/3079670 [arXiv:2003.08600 [hep-ph]].
* (10) M. Priyadarsini, P. C. Dash, S. Kar, S. P. Patra and N. Barik, Phys. Rev. D 94 (2016) no.11, 113011.
* (11) S. Patnaik, P. C. Dash, S. Kar, S. Patra and N. Barik, Phys. Rev. D 96 (2017) no.11, 116010
* (12) D. Ebert, R. N. Faustov and V. O. Galkin, Phys. Lett. B 537 (2002) 241
* (13) T. A. Lahde, C. J. Nyfalt and D. O. Riska, Nucl. Phys. A 674 (2000) 141
* (14) T. A. Lahde, Nucl. Phys. A 714 (2003) 183
* (15) H. M. Choi, Phys. Rev. D 75 (2007) 073016
* (16) H. M. Choi and C. R. Ji, Phys. Rev. D 80 (2009) 054016
* (17) E. J. Eichten and C. Quigg, Phys. Rev. D 49 (1994) 5845
* (18) S. S. Gershtein, V. V. Kiselev, A. K. Likhoded and A. V. Tkabladze, Phys. Rev. D 51 (1995) 3613
* (19) L. P. Fulcher, Phys. Rev. D 60 (1999) 074006
* (20) M. A. Nobes and R. M. Woloshyn, J. Phys. G 26 (2000) 1079
* (21) A. P. Monteiro, M. Bhat and K. B. Vijaya Kumar, Phys. Rev. D 95 (2017) no.5, 054016 [arXiv:1608.05782 [hep-ph]].
* (22) A. Abd El-Hady, J. R. Spence and J. P. Vary, Phys. Rev. D 71 (2005) 034006
* (23) Z. G. Wang, Commun. Theor. Phys. 61 (2014) no.1, 81-88 doi:10.1088/0253-6102/61/1/13 [arXiv:1209.1157 [hep-ph]].
* (24) L. R. Dai, X. Zhang and E. Oset, Phys. Rev. D 98 (2018) no.3, 036004 doi:10.1103/PhysRevD.98.036004 [arXiv:1806.09583 [hep-ph]].
* (25) T. Wang, Y. Jiang, T. Zhou, X. Z. Tan and G. L. Wang, J. Phys. G 45 (2018) no.11, 115001 doi:10.1088/1361-6471/aae14a [arXiv:1804.06545 [hep-ph]].
* (26) N. R. Soni, A. Issadykov, A. N. Gadaria, Z. Tyulemissov, J. J. Patel and J. N. Pandya, Eur. Phys. J. Plus 138, no.2, 163 (2023) doi:10.1140/epjp/s13360-023-03779-8 [arXiv:2110.12740 [hep-ph]].
* (27) N. R. Soni, A. Issadykov, A. N. Gadaria, J. J. Patel and J. N. Pandya, Eur. Phys. J. A 58 (2022) no.3, 39 doi:10.1140/epja/s10050-022-00685-y [arXiv:2008.07202 [hep-ph]].
* (28) A. Issadykov and M. A. Ivanov, Phys. Lett. B 783 (2018), 178-182 doi:10.1016/j.physletb.2018.06.056 [arXiv:1804.00472 [hep-ph]].
* (29) S. Dubnička, A. Z. Dubničková, A. Issadykov, M. A. Ivanov, A. Liptaj and S. K. Sakhiyev, Phys. Rev. D 93 (2016) no.9, 094022 doi:10.1103/PhysRevD.93.094022 [arXiv:1602.07864 [hep-ph]].
* (30) A. Issadykov, M. A. Ivanov and S. K. Sakhiyev, Phys. Rev. D 91 (2015) no.7, 074007 doi:10.1103/PhysRevD.91.074007 [arXiv:1502.05280 [hep-ph]].
* (31) G. V. Efimov and M. A. Ivanov, Int. J. Mod. Phys. A 4 (1989) no.8, 2031-2060 doi:10.1142/S0217751X89000832
* (32) G. V. Efimov and M. A. Ivanov, The Quark Confinement Model of Hadrons, (CRC Press, Boca Raton, 1993).
* (33) T. Branz, A. Faessler, T. Gutsche, M. A. Ivanov, J. G. Korner and V. E. Lyubovitskij, Phys. Rev. D 81 (2010), 034010 doi:10.1103/PhysRevD.81.034010 [arXiv:0912.3710 [hep-ph]].
* (34) G. Ganbold, T. Gutsche, M. A. Ivanov and V. E. Lyubovitskij, Phys. Rev. D 104 (2021) no.9, 094048 doi:10.1103/PhysRevD.104.094048 [arXiv:2107.08774 [hep-ph]].
* (35) M. A. Ivanov, J. G. Körner and C. T. Tran, Phys. Rev. D 92 (2015) no.11, 114022 doi:10.1103/PhysRevD.92.114022 [arXiv:1508.02678 [hep-ph]].
* (36) G. Ganbold, T. Gutsche, M. A. Ivanov and V. E. Lyubovitskij, J. Phys. G 42 (2015) no.7, 075002 doi:10.1088/0954-3899/42/7/075002 [arXiv:1410.3741 [hep-ph]].
* (37) P. A. Zyla et al. [Particle Data Group], PTEP 2020 (2020) no.8, 083C01 doi:10.1093/ptep/ptaa104
* (38) S. Dubnička, A. Z. Dubničková, A. Issadykov, M. A. Ivanov and A. Liptaj, Phys. Rev. D 96 (2017) no.7, 076017 doi:10.1103/PhysRevD.96.076017 [arXiv:1708.09607 [hep-ph]].
* (39) A. Issadykov, M. A. Ivanov and G. Nurbakova, EPJ Web Conf. 158 (2017), 03002 doi:10.1051/epjconf/201715803002 [arXiv:1907.13210 [hep-ph]].
# Frobenius–Schur indicators for twisted Real representation theory and two
dimensional unoriented topological field theory
Levi Gagnon-Ririe Department of Mathematics and Statistics
Utah State University
Logan, Utah 84322
USA<EMAIL_ADDRESS>and Matthew B. Young Department of Mathematics
and Statistics
Utah State University
Logan, Utah 84322
USA<EMAIL_ADDRESS>
###### Abstract.
We construct a two dimensional unoriented open/closed topological field theory
from a finite graded group $\pi:\hat{G}\twoheadrightarrow\\{1,-1\\}$, a
$\pi$-twisted $2$-cocycle $\hat{\theta}$ on $B\hat{G}$ and a character
$\lambda:\hat{G}\rightarrow U(1)$. The underlying oriented theory is a twisted
Dijkgraaf–Witten theory. The construction is based on the
$(\hat{G},\hat{\theta},\lambda)$-twisted Real representation theory of
$\ker\pi$. In particular, twisted Real representations are boundary conditions
and the generalized Frobenius–Schur element is its crosscap state.
###### Key words and phrases:
Real representation theory. Topological field theory.
###### 2020 Mathematics Subject Classification:
Primary: 20C25; Secondary: 81T45.
## Introduction
Associated to a finite group $G$ and a $U(1)$-valued $2$-cocycle $\theta$ on
its classifying space $BG$ is a two dimensional topological gauge theory known
as Dijkgraaf–Witten theory [DW90]. This is an oriented open/closed topological
quantum field theory (TFT) $\mathcal{Z}_{(G,\theta)}$ with boundary conditions
the category $\textup{\text{Rep}}^{\theta}(G)$ of finite dimensional
$\theta$-twisted complex representations of $G$ [Fre94, MS06]. In particular,
$\mathcal{Z}_{(G,\theta)}$ assigns a partition function to each compact
oriented $2$-manifold with boundary components labelled by twisted
representations. Open/closed TFT was introduced as a framework to axiomatize
the structure of topological D-branes in string theory [Laz01, KR04, MS06] and
has found a variety of applications in pure mathematics [Cos07, BCT09, Abo10].
The open/closed structure of Dijkgraaf–Witten theory plays an important role
in the descriptions of D-branes in orbifold string theory [DW90], generalized
symmetries in quantum field theory [Sha15, HLS21] and boundary degrees of
freedom in topological phases of matter [SR17].
Open/closed TFTs on unoriented—and possibly non-orientable—manifolds play a
central role in orientifold string theory [HW08] and related mathematics
[You20, FH21, GI21, NY22]. In condensed matter physics, unoriented TFTs in
general, and Dijkgraaf–Witten theory in particular, model topological phases
of matter with time reversal symmetry [FM13, KT17, BBC+20]. The main result of
this paper is an algebraic construction of a class of unoriented lifts of the
oriented open/closed Dijkgraaf–Witten theories $\mathcal{Z}_{(G,\theta)}$.
###### Theorem A (Theorem 3.7).
A triple $(\hat{G},\hat{\theta},\lambda)$ consisting of a short exact sequence
of finite groups
$1\rightarrow
G\rightarrow\hat{G}\xrightarrow[]{\pi}C_{2}=\\{1,-1\\}\rightarrow 1,$
a $\pi$-twisted $2$-cocycle $\hat{\theta}$ on $B\hat{G}$ which restricts to
$\theta$ on $BG$ and a character $\lambda:\hat{G}\rightarrow U(1)$ defines a
two dimensional unoriented open/closed topological field theory
$\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}$ whose oriented sector is a
subtheory of $\mathcal{Z}_{(G,\theta)}$.
A number of authors have studied
$\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}$ under the assumption that
$\hat{G}=G\times C_{2}$ is the trivial extension, $\hat{\theta}$ is in the
image of the map $H^{2}(BG;C_{2})\rightarrow H^{2}(B\hat{G};U(1)_{\pi})$ and
$\lambda$ is trivial [KM97, AN06, Tur07, LN11, Sny17]. For general
$(\hat{G},\hat{\theta})$ and trivial $\lambda$, a topological construction of
the closed sector of $\mathcal{Z}_{(\hat{G},\hat{\theta},1)}$, and its higher
dimensional analogues, was given in [You20] while a $G$-equivariant extension
of the closed sector of $\mathcal{Z}_{(\hat{G},\hat{\theta},1)}$ was given in
[KT17]. We emphasize that for the applications of unoriented Dijkgraaf–Witten
theory mentioned before Theorem A, general input data
$(\hat{G},\hat{\theta},\lambda)$ is required; see Remark 3.8. As explained
below, general input data is also natural from the representation theoretic
and $K$-theoretic perspectives.
Theorem A is proved using an algebraic characterization of unoriented TFTs,
Theorem 3.4, which builds off characterizations of oriented closed and
open/closed TFTs [Dij89, Abr96, Laz01, MS06, AN06, LP08], unoriented closed
TFTs [TT06] and unoriented open/closed TFTs with a single boundary condition
[AN06]. The algebraic data required to define an unoriented open/closed TFT
includes:
* •
A commutative Frobenius algebra $A$; this defines the oriented closed sector.
* •
A Calabi–Yau category $\mathcal{B}$; this defines the oriented open sector.
* •
An isometric involution $p:A\rightarrow A$ and a _crosscap state_ $Q\in A$,
the latter corresponding to the value of the TFT on the compact Möbius strip;
this defines the unoriented closed sector.
* •
A strict contravariant involution of $\mathcal{B}$, that is, a functor
$P:\mathcal{B}^{\textup{\text{op}}}\rightarrow\mathcal{B}$ which squares to
the identity and is moreover required to be the identity on objects; this
defines the unoriented open sector.
The data (and that which we have omitted here) is required to satisfy a number
of coherence conditions. The oriented theory $\mathcal{Z}_{(G,\theta)}$ is
defined by the commutative Frobenius algebra
$HH_{\bullet}(\textup{\text{Rep}}^{\theta}(G))\simeq
Z(\mathbb{C}^{\theta^{-1}}[G])$ with the Haar bilinear form
$\langle-,-\rangle_{G}$ and Calabi–Yau category
$\textup{\text{Rep}}^{\theta}(G)$. Motivated by the search for the data
required to define the unoriented lift
$\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}$, in Section 2 we construct and
study a contravariant involution
$(P^{(\hat{G},\hat{\theta},\lambda)},\Theta^{(\hat{G},\hat{\theta},\lambda)})$
of $\textup{\text{Rep}}^{\theta}(G)$. The functor
$P^{(\hat{G},\hat{\theta},\lambda)}$ acts non-trivially on objects and so is
not an admissible choice for the defining data of
$\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}$. A key representation theoretic
observation is that the homotopy fixed points of
$(P^{(\hat{G},\hat{\theta},\lambda)},\Theta^{(\hat{G},\hat{\theta},\lambda)})$
is the category of $(\hat{G},\hat{\theta},\lambda)$-twisted Real
representations of $G$. The Real representation theory of $G$ was originally
studied by Wigner [Wig59] and Dyson [Dys62] as a generalization of real and
quaternionic representation theory in the context of anti-unitary symmetries
in quantum mechanics. More recently, Real representation theory has been
developed from the related perspective of twisted equivariant $KR$-theory
[AS69, Kar70, FM13, NY22] and categorical representation theory [You21, RY21,
RT22]. In the $K$-theoretic setting, general pairs $(\hat{\theta},\lambda)$
are required to realize all $KR$-theory twists. Motivated by the above
perspectives, we consider the element
$\nu_{(\hat{G},\hat{\theta},\lambda)}=\sum_{\varsigma\in\hat{G}\setminus
G}\frac{\lambda(\varsigma)}{\hat{\theta}([\varsigma|\varsigma])}l_{\varsigma^{2}}\in
HH_{\bullet}(\textup{\text{Rep}}^{\theta}(G)).$
The role of $\nu_{(\hat{G},\hat{\theta},\lambda)}$ in Real representation
theory is summarized by the next result.
###### Theorem B (Theorem 2.7 and Corollary 2.8).
Let $V$ be a $\theta$-twisted representation of $G$ with character $\chi_{V}$.
Then $\langle\chi_{V},\nu_{(\hat{G},\hat{\theta},\lambda)}\rangle_{G}$ is
equal to the trace of the involution
$\textup{\text{Hom}}_{G}(V,P^{(\hat{G},\hat{\theta},\lambda)}(V))\rightarrow\textup{\text{Hom}}_{G}(V,P^{(\hat{G},\hat{\theta},\lambda)}(V)),\qquad
f\mapsto
P^{(\hat{G},\hat{\theta},\lambda)}(f)\circ\Theta^{(\hat{G},\hat{\theta},\lambda)}_{V}.$
In particular, if $V$ is irreducible, then
$\langle\chi_{V},\nu_{(\hat{G},\hat{\theta},\lambda)}\rangle_{G}=\begin{cases}1&\mbox{if
and only if $V$ lifts to a $(\hat{G},\hat{\theta},\lambda)$-twisted Real
representation},\\\ -1&\mbox{if and only if $V$ lifts to a
$(\hat{G},\delta\hat{\theta},\lambda)$-twisted Real representation},\\\
0&\mbox{otherwise},\end{cases}$
where $\delta$ is a representative of the generator of
$H^{2}(BC_{2};U(1)_{\pi})\simeq C_{2}$.
The element $\nu_{(\hat{G},\hat{\theta},\lambda)}$ recovers under various
specializations of the data $(\hat{G},\hat{\theta},\lambda)$ other generalized
Frobenius–Schur elements [FS06, Gow79, Tur07, IT23]. In particular, the second
statement in Theorem B shows that $\nu_{(\hat{G},\hat{\theta},\lambda)}$ is a
generalization to twisted Real representation theory of the classical
Frobenius–Schur element. Theorem B and a complete understanding of the
$\theta$-twisted representation theory of $G$ suffice to understand the
$(\hat{G},\hat{\theta},\lambda)$-twisted Real representation theory of $G$.
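To make the specializations concrete, consider the simplest input data
$\hat{G}=G\times C_{2}$ with $\hat{\theta}$ and $\lambda$ trivial (a worked
example spelling out one of the specializations cited above). Every
$\varsigma\in\hat{G}\setminus G$ has the form $(g,-1)$ with
$\varsigma^{2}=(g^{2},1)$, so
$\nu_{(G\times C_{2},1,1)}=\sum_{g\in G}l_{g^{2}},\qquad\langle\chi_{V},\nu_{(G\times C_{2},1,1)}\rangle_{G}=\frac{1}{|G|}\sum_{g\in G}\chi_{V}(g^{2}),$
recovering the classical Frobenius–Schur indicator of $V$.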
Returning to the proof of Theorem A, we take for $\mathcal{B}$ the Calabi–Yau
category of $(\hat{G},\hat{\theta},\lambda)$-twisted Real representations of
$G$ and their $G$-equivariant linear maps. We view this as an orientifold-type
construction, with $\textup{\text{Rep}}^{\theta}(G)$ seen as the category of
$D$-branes in an oriented string theory and $\mathcal{B}$ the category of
$D$-branes which survive the orientifold projection defined by
$(P^{(\hat{G},\hat{\theta},\lambda)},\Theta^{(\hat{G},\hat{\theta},\lambda)})$.
The category of twisted Real representations is a non-full subcategory of
$\mathcal{B}$ and the forgetful functor
$\mathcal{B}\rightarrow\textup{\text{Rep}}^{\theta}(G)$ respects Calabi–Yau
structures.
is the identity on objects and
$A=HH_{\bullet}(\textup{\text{Rep}}^{\theta}(G))$ inherits an isometric
involution $p$. We take for the crosscap state $Q$ the generalized
Frobenius–Schur element $\nu_{(\hat{G},\hat{\theta},\lambda)}$. It remains to
verify the coherence conditions. A mild generalization of the first equality
in Theorem B (proved in Theorem 2.7) is the unoriented counterpart of the
famous Cardy condition, asserting the equality of two ways of evaluating a
Möbius strip diagram with boundary condition $V$. The remaining coherence
conditions required of the crosscap state, involution $p$ and boundary-bulk
and bulk-boundary maps are verified using the calculus of twisted cocycles. In
Section 3.3, we compute partition functions of
$\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}$.
### Acknowledgements
The work of M. B. Y. was supported by National Science Foundation grant
DMS-2302363 and a Simons Foundation Collaboration Grant for Mathematicians
(Award ID 853541).
## 1\. Background material
Throughout the paper the ground field is $\mathbb{C}$ and vector spaces are
finite dimensional. Linear duality is
$(-)^{\vee}=\textup{\text{Hom}}_{\mathbb{C}}(-,\mathbb{C})$. Denote by $U(1)$
the group of unit norm complex numbers and $C_{n}$ the cyclic group of order
$n$, seen as a multiplicative group.
### 1.1. Group cohomology
Let $K$ be a finite group and $M$ a left $K$-module. We regard the underlying
abelian group of $M$ as multiplicative. Let $C^{\bullet}(BK;M)$ be the complex
of normalized simplicial cochains on $BK$ with coefficients in $M$. An element
$\theta\in C^{n}(BK;M)$ is a function
$\theta:K^{n}\rightarrow
M,\qquad(k_{n},\dots,k_{1})\mapsto\theta([k_{n}|\cdots|k_{1}])$
whose value is the identity if any $k_{i}$ is the identity. The differential
$d\theta$ of an $(n-1)$-cochain $\theta$ is defined so that
$d\theta([k_{n}|\cdots|k_{1}])$ is equal to
$k_{n}\cdot\theta([k_{n-1}|\cdots|k_{1}])\prod_{j=1}^{n-1}\theta([k_{n}|\cdots|k_{j+1}k_{j}|\cdots|k_{1}])^{(-1)^{n-j}}\times\theta([k_{n}|\cdots|k_{2}])^{(-1)^{n}}.$
Write $Z^{\bullet}(BK;M)$ and $H^{\bullet}(BK;M)$ for the cocycles and
cohomologies of $C^{\bullet}(BK;M)$.
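For example, unwinding the differential of a $2$-cochain $\theta$ (the case
$n=3$) gives
$d\theta([k_{3}|k_{2}|k_{1}])=k_{3}\cdot\theta([k_{2}|k_{1}])\,\theta([k_{3}|k_{2}k_{1}])\,\theta([k_{3}k_{2}|k_{1}])^{-1}\,\theta([k_{3}|k_{2}])^{-1},$
so the $2$-cocycle condition $d\theta=1$ takes the familiar form
$k_{3}\cdot\theta([k_{2}|k_{1}])\,\theta([k_{3}|k_{2}k_{1}])=\theta([k_{3}k_{2}|k_{1}])\,\theta([k_{3}|k_{2}])$.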
When $M=U(1)$ with trivial $K$-action, write $C^{\bullet}(BK)$ for
$C^{\bullet}(BK;M)$. When $\pi:\hat{G}\rightarrow C_{2}$ is a group
homomorphism and $M=U(1)$ with $\hat{G}$-action $\omega\cdot
z=z^{\pi(\omega)}$, write $C^{\bullet+\pi}(B\hat{G})$ for
$C^{\bullet}(B\hat{G};M)$. If $\pi:C_{2}\rightarrow C_{2}$ is the identity
map, then $H^{2+\pi}(BC_{2})\simeq C_{2}$; a cocycle representative $\delta$
for the generator is given by
$\delta([\varsigma_{2}|\varsigma_{1}])=\begin{cases}-1&\mbox{if
}\varsigma_{1}=\varsigma_{2}=-1,\\\ 1&\mbox{otherwise}.\end{cases}$
We use the same notation for $\delta$ and its image under
$\pi^{*}:Z^{2+\pi}(BC_{2})\rightarrow Z^{2+\pi}(B\hat{G})$.
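As a quick numerical sanity check (a sketch assuming only the definitions above, not part of the original text), one can verify that $\delta$ is killed by the twisted differential:

```python
# Verify that delta is a pi-twisted 2-cocycle on B C_2, where pi is the
# identity map C_2 -> C_2 and omega acts on U(1) by z -> z**pi(omega).
C2 = [1, -1]

def delta(s2, s1):
    # delta([s2|s1]) = -1 if s1 = s2 = -1, and 1 (the identity of U(1)) otherwise.
    return -1 if (s1 == -1 and s2 == -1) else 1

def d_delta(k3, k2, k1):
    # Twisted differential of a 2-cochain (the n = 3 case of the general formula):
    # (k3 . delta([k2|k1])) * delta([k3|k2 k1]) * delta([k3 k2|k1])^-1 * delta([k3|k2])^-1
    return (delta(k2, k1) ** k3) * delta(k3, k2 * k1) / (delta(k3 * k2, k1) * delta(k3, k2))

# The cocycle condition d(delta) = 1 holds for all triples in C_2.
assert all(d_delta(a, b, c) == 1 for a in C2 for b in C2 for c in C2)
print("delta is a pi-twisted 2-cocycle")
```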
###### Lemma 1.1.
Let $\pi:\hat{G}\rightarrow C_{2}$ be a $C_{2}$-graded finite group and
$\hat{\theta}\in Z^{2+\pi}(B\hat{G})$. For all $g_{i}\in G$,
$\omega\in\hat{G}$ and $\varsigma\in\hat{G}\setminus G$, the following
equalities hold:
$\frac{\hat{\theta}([\omega g_{2}\omega^{-1}|\omega
g_{1}\omega^{-1}])}{\hat{\theta}([g_{2}|g_{1}])^{\pi(\omega)}}=\frac{\hat{\theta}([\omega
g_{2}\omega^{-1}|\omega])}{\hat{\theta}([\omega|g_{2}])}\frac{\hat{\theta}([\omega
g_{1}\omega^{-1}|\omega])}{\hat{\theta}([\omega|g_{1}])}\left(\frac{\hat{\theta}([\omega
g_{2}g_{1}\omega^{-1}|\omega])}{\hat{\theta}([\omega|g_{2}g_{1}])}\right)^{-1}$
(1)
$\frac{\hat{\theta}([\omega\varsigma\omega^{-1}|\omega\varsigma\omega^{-1}])}{\hat{\theta}([\varsigma|\varsigma])^{-\pi(\omega)}}=\frac{\hat{\theta}([\omega|\varsigma^{2}])}{\hat{\theta}([\omega\varsigma^{2}\omega^{-1}|\omega])}.$
(2)
###### Proof.
Both equalities follow from repeated use of the $2$-cocycle condition on
$\hat{\theta}$. ∎
### 1.2. Twisted representation theory
We recall background on twisted representation theory following [Kar85]. Let
$G$ be a finite group and $\theta\in Z^{2}(BG)$.
###### Definition 1.2.
A _$\theta$ -twisted_ (or _$\theta$ -projective_) _representation of $G$_ is
a pair $(V,\rho)$ consisting of a vector space $V$ and a map $\rho:G\rightarrow
GL(V)$ which satisfies $\rho(e)=\textup{\text{id}}_{V}$ and
$\rho(g_{2})\circ\rho(g_{1})=\theta([g_{2}|g_{1}])\rho(g_{2}g_{1}),\qquad
g_{1},g_{2}\in G.$
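A concrete example (an illustration chosen here, not drawn from the text): the Pauli matrices furnish a $\theta$-twisted representation of $G=C_{2}\times C_{2}$ with the sign-valued $2$-cocycle $\theta([g_{2}|g_{1}])=(-1)^{n_{2}m_{1}}$ for $g_{i}=(m_{i},n_{i})$:

```python
import numpy as np

# Pauli matrices X and Z; note ZX = -XZ.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# G = C2 x C2 with elements (m, n), m, n in {0, 1}; set rho(m, n) = X^m Z^n.
def rho(m, n):
    return np.linalg.matrix_power(X, m) @ np.linalg.matrix_power(Z, n)

# Normalized 2-cocycle theta([(m2, n2)|(m1, n1)]) = (-1)^(n2 * m1),
# the sign picked up when commuting Z^n2 past X^m1.
def theta(g2, g1):
    return (-1) ** (g2[1] * g1[0])

# Check rho(g2) rho(g1) = theta([g2|g1]) rho(g2 g1) for all group elements.
G = [(m, n) for m in (0, 1) for n in (0, 1)]
for g2 in G:
    for g1 in G:
        g21 = ((g2[0] + g1[0]) % 2, (g2[1] + g1[1]) % 2)
        assert np.allclose(rho(*g2) @ rho(*g1), theta(g2, g1) * rho(*g21))
print("theta-twisted representation check passed")
```

This projective representation does not lift to an ordinary representation of the abelian group $C_{2}\times C_{2}$, reflecting that $[\theta]$ generates $H^{2}(B(C_{2}\times C_{2}))\simeq C_{2}$.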
We often write $V$ or $\rho_{V}$ for $(V,\rho)$. The category
$\textup{\text{Rep}}^{\theta}(G)$ of $\theta$-twisted representations and
their $G$-equivariant linear maps is $\mathbb{C}$-linear finite semisimple.
The $\theta$-twisted group algebra $\mathbb{C}^{\theta}[G]$ is the
$\mathbb{C}$-algebra with basis $\\{l_{g}\mid g\in G\\}$ and multiplication
$l_{g_{2}}\cdot l_{g_{1}}=\theta([g_{2}|g_{1}])l_{g_{2}g_{1}}$. The category
of finite dimensional $\mathbb{C}^{\theta}[G]$-modules is equivalent to
$\textup{\text{Rep}}^{\theta}(G)$. We sometimes interpret
$\mathbb{C}^{\theta}[G]$ as functions on $G$, in which case $l_{g}$ is the
$\delta$-function at $g$. The centre $Z(\mathbb{C}^{\theta}[G])$ consists of
elements $\sum_{g\in G}a_{g}l_{g}$ whose coefficients satisfy
$a_{hgh^{-1}}=\uptau(\theta)([h]g)^{-1}a_{g},\qquad g,h\in G.$
Here $\uptau(\theta)([h]g)=\frac{\theta([hgh^{-1}|h])}{\theta([h|g])}$ are the
components of a $1$-cocycle $\uptau(\theta)$ on the loop groupoid of $BG$
called the _loop transgression_ of $\theta$ [Wil08, Theorem 3]. Define a non-
degenerate symmetric bilinear form on $\mathbb{C}^{\theta}[G]$ by
$\langle\sum_{g\in G}a_{g}l_{g},\sum_{h\in
G}b_{h}l_{h}\rangle_{G,\theta}=\frac{1}{|G|}\sum_{g\in
G}\theta([g^{-1}|g])a_{g^{-1}}b_{g}.$
The character of $(V,\rho)\in\textup{\text{Rep}}^{\theta}(G)$ is the function
$\chi_{V}:G\rightarrow\mathbb{C}$, $g\mapsto\textup{\text{tr}}_{V}\,\rho(g)$.
A short calculation shows that
$\chi_{V}(hgh^{-1})=\uptau(\theta)([h]g)\chi_{V}(g)$. Functions
$G\rightarrow\mathbb{C}$ with this conjugation equivariance are elements of
$Z(\mathbb{C}^{\theta^{-1}}[G])$ and are called $\theta$-twisted class
functions. Characters of irreducible $\theta$-twisted representations form an
orthonormal basis of $Z(\mathbb{C}^{\theta^{-1}}[G])$ with respect to
$\langle-,-\rangle_{G}:=\langle-,-\rangle_{G,\theta^{-1}}$.
Given $(V,\rho_{V})\in\textup{\text{Rep}}^{\theta}(G)$, define
$(V^{\vee},\rho_{V^{\vee}})\in\textup{\text{Rep}}^{\theta^{-1}}(G)$ by
$\rho_{V^{\vee}}(g)=(\rho_{V}(g)^{-1})^{\vee}$. For ease of notation, we write
$\rho_{V}(g)^{-\vee}$ for $(\rho_{V}(g)^{-1})^{\vee}$.
### 1.3. Categories with duality
###### Definition 1.3.
1. (1)
A _category with duality_ is a triple $(\mathcal{C},P,\Theta)$ consisting of a
category $\mathcal{C}$, a functor
$P:\mathcal{C}^{\textup{\text{op}}}\rightarrow\mathcal{C}$ and a natural
isomorphism $\Theta:\textup{\text{id}}_{\mathcal{C}}\Rightarrow P\circ
P^{\textup{\text{op}}}$ whose components satisfy
$P(\Theta_{V})\circ\Theta_{P(V)}=\textup{\text{id}}_{P(V)},\qquad
V\in\mathcal{C}.$ (3)
The duality structure $(P,\Theta)$ is _strict_ if $\Theta$ is the identity
natural transformation.
2. (2)
A _homotopy fixed point_ of $(\mathcal{C},P,\Theta)$ is a pair $(V,\psi_{V})$
consisting of an object $V\in\mathcal{C}$ and an isomorphism
$\psi_{V}:V\rightarrow P(V)$ which satisfies
$P(\psi_{V})\circ\Theta_{V}=\psi_{V}$.
We interpret $(P,\Theta)$ as defining a categorical $C_{2}$-action on
$\mathcal{C}$ in which the generator acts contravariantly. Motivated by this,
let $\mathcal{C}^{hC_{2}}$, $\mathcal{C}^{\tilde{h}C_{2}}$ be the categories
with objects homotopy fixed points and morphisms
$\textup{\text{Hom}}_{\mathcal{C}^{hC_{2}}}((V,\psi_{V}),(W,\psi_{W}))=\\{\phi\in\textup{\text{Hom}}_{\mathcal{C}}(V,W)\mid\psi_{V}=P(\phi)\circ\psi_{W}\circ\phi\\},$
$\textup{\text{Hom}}_{\mathcal{C}^{\tilde{h}C_{2}}}((V,\psi_{V}),(W,\psi_{W}))=\textup{\text{Hom}}_{\mathcal{C}}(V,W).$
Let
$P^{\tilde{h}C_{2}}:(\mathcal{C}^{\tilde{h}C_{2}})^{\textup{\text{op}}}\rightarrow\mathcal{C}^{\tilde{h}C_{2}}$
be the identity on objects and send a morphism
$\phi:(V,\psi_{V})\rightarrow(W,\psi_{W})$ to
$P^{\tilde{h}C_{2}}(\phi)=\psi_{V}^{-1}\circ P(\phi)\circ\psi_{W}$. Let
$\Theta^{\tilde{h}C_{2}}:\textup{\text{id}}_{\mathcal{C}^{\tilde{h}C_{2}}}\Rightarrow
P^{\tilde{h}C_{2}}\circ(P^{\tilde{h}C_{2}})^{\textup{\text{op}}}$ be the
identity natural transformation.
###### Lemma 1.4.
The triple
$(\mathcal{C}^{\tilde{h}C_{2}},P^{\tilde{h}C_{2}},\Theta^{\tilde{h}C_{2}})$ is
a category with strict duality. Moreover, $P^{\tilde{h}C_{2}}$ is the identity
on objects.
## 2\. A Frobenius–Schur indicator for twisted Real representation theory
### 2.1. Twisted Real representation theory
The Real representation theory of a finite group has been studied by many
authors as a generalization of representation theory over $\mathbb{R}$ or
$\mathbb{H}$ [Wig59, Dys62, AS69, Kar70, FM13, You21]. We establish relevant
aspects of the twisted form of this theory following [You21, §3.2].
Let $\pi:\hat{G}\rightarrow C_{2}$ be a $C_{2}$-graded finite group with $\pi$
surjective. Fix $\hat{\theta}\in Z^{2+\pi}(B\hat{G})$ and a character
$\lambda:\hat{G}\rightarrow U(1)$. Note that $\lambda$ can be interpreted as
an element of $Z^{1}(B\hat{G})$. Denote by $G=\ker\pi$ and $\theta\in
Z^{2}(BG)$ the restriction of $\hat{\theta}$ along $BG\rightarrow B\hat{G}$.
An element $\varsigma\in\hat{G}\backslash G$ determines a $\mathbb{C}$-linear
exact duality structure
$(P^{(\hat{\theta},\lambda,\varsigma)},\Theta^{(\hat{\theta},\lambda,\varsigma)})$
on $\textup{\text{Rep}}^{\theta}(G)$. On objects, we have
$P^{(\hat{\theta},\lambda,\varsigma)}(V,\rho)=(V^{\vee},\rho^{(\hat{\theta},\lambda,\varsigma)})$,
where
$\rho^{(\hat{\theta},\lambda,\varsigma)}(g)=\frac{\lambda(g)}{\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\varsigma]g)}\rho(\varsigma
g^{-1}\varsigma^{-1})^{\vee},\qquad g\in G.$
The coefficients
$\uptau^{\textup{\text{refl}}}_{\pi}(\hat{\theta})([\omega]g)=\hat{\theta}([g^{-1}|g])^{\frac{\pi(\omega)-1}{2}}\frac{\hat{\theta}([\omega
g^{\pi(\omega)}\omega^{-1}|\omega])}{\hat{\theta}([\omega|g^{\pi(\omega)}])},\qquad
g\in G,\;\omega\in\hat{G}$
are best understood in terms of orientation-twisted loop transgression [NY22,
Theorem 2.8], which is a cochain map
$\uptau^{\textup{\text{refl}}}_{\pi}:C^{\bullet+\pi}(B\hat{G})\rightarrow
C^{\bullet-1}(B(G/\\!\\!/_{R}\hat{G})).$
The codomain is simplicial cochains on the classifying space of the quotient
groupoid $G/\\!\\!/_{R}\hat{G}$ resulting from the Real conjugation action of
$\hat{G}$ on $G$: $\omega\cdot g=\omega g^{\pi(\omega)}\omega^{-1}$,
$\omega\in\hat{G},\;g\in G$. In geometric terms, $G/\\!\\!/_{R}\hat{G}$ is the
unoriented loop groupoid of $B\hat{G}$, that is, the quotient of the loop
groupoid of $BG$ by the $C_{2}$-action which reverses orientation of loops and
acts on $BG$ by deck transformations. Continuing, on morphisms
$P^{(\hat{\theta},\lambda,\varsigma)}$ is $\mathbb{C}$-linear duality. The
natural isomorphism $\Theta^{(\hat{\theta},\lambda,\varsigma)}$ is defined by
its components
$\Theta^{(\hat{\theta},\lambda,\varsigma)}_{V}=\frac{\lambda(\varsigma)}{\hat{\theta}([\varsigma|\varsigma])}\textup{\text{ev}}_{V}\circ\rho(\varsigma^{2})^{-1},\qquad(V,\rho)\in\textup{\text{Rep}}^{\theta}(G)$
where $\textup{\text{ev}}_{V}:V\rightarrow V^{\vee\vee}$ is the evaluation
isomorphism of underlying vector spaces. The normalization of
$\Theta^{(\hat{\theta},\lambda,\varsigma)}_{V}$ ensures that the coherence
condition (3) holds.
###### Definition 2.1.
The _category $\textup{\text{RRep}}^{(\hat{\theta},\lambda)}(G)$ of
$(\hat{\theta},\lambda)$-twisted Real representations of $G$_ is the homotopy
fixed point category $\textup{\text{Rep}}^{\theta}(G)^{hC_{2}}$ of
$(P^{(\hat{\theta},\lambda,\varsigma)},\Theta^{(\hat{\theta},\lambda,\varsigma)})$.
Up to equivalence,
$(P^{(\hat{\theta},\lambda,\varsigma)},\Theta^{(\hat{\theta},\lambda,\varsigma)})$
depends only on $(\hat{G},[\hat{\theta}],\lambda)$. The same is therefore true
of $\textup{\text{RRep}}^{(\hat{\theta},\lambda)}(G)$ and we drop $\varsigma$
from the notation if it is fixed or the particular realization of the duality
structure is not important.
Concretely, an object
$(V,\psi_{V})\in\textup{\text{RRep}}^{(\hat{\theta},\lambda)}(G)$ is an
isomorphism $\psi_{V}:V\rightarrow P^{(\hat{\theta},\lambda)}(V)$ in
$\textup{\text{Rep}}^{\theta}(G)$ which satisfies
$P^{(\hat{\theta},\lambda)}(\psi_{V})\circ\Theta^{(\hat{\theta},\lambda)}_{V}=\psi_{V}$.
A morphism $\phi:(V,\psi_{V})\rightarrow(W,\psi_{W})$ in
$\textup{\text{RRep}}^{(\hat{\theta},\lambda)}(G)$ is a morphism in
$\textup{\text{Rep}}^{\theta}(G)$ which satisfies
$P^{(\hat{\theta},\lambda)}(\phi)\circ\psi_{W}\circ\phi=\psi_{V}$. Note that
$\phi$ is necessarily injective and
$\textup{\text{RRep}}^{(\hat{\theta},\lambda)}(G)$ is neither linear nor
abelian.
A more standard representation theoretic interpretation of
$\textup{\text{RRep}}^{(\hat{\theta},\lambda)}(G)$ is as follows. Given a
vector space $V$ and sign $\epsilon\in C_{2}$, introduce the notation
${{}^{\epsilon}}V=\begin{cases}V&\mbox{if }\epsilon=1,\\\ V^{\vee}&\mbox{if
}\epsilon=-1\end{cases}$
with similar notation for linear maps. A $(\hat{\theta},\lambda)$-twisted Real
representation of $G$ is then a vector space $V$ together with linear maps
$\rho(\omega):\prescript{\pi(\omega)}{}{V}\rightarrow V$, $\omega\in\hat{G}$,
which satisfy $\rho(e)=\textup{\text{id}}_{V}$ and
$\rho(\omega_{2})\circ\prescript{\pi(\omega_{2})}{}{\rho(\omega_{1})}^{\pi(\omega_{2})}\circ\textup{\text{ev}}_{V}^{\delta_{\pi(\omega_{1}),\pi(\omega_{2}),-1}}=\lambda(\omega_{1})^{\frac{\pi(\omega_{2})-1}{2}}\hat{\theta}([\omega_{2}|\omega_{1}])\rho(\omega_{2}\omega_{1}).$
(4)
The notation
$\textup{\text{ev}}_{V}^{\delta_{\pi(\omega_{1}),\pi(\omega_{2}),-1}}$
indicates that $\textup{\text{ev}}_{V}$ is included exactly when
$\pi(\omega_{1})=\pi(\omega_{2})=-1$. The equivalence of this interpretation
with that of homotopy fixed points follows from noting that a homotopy fixed
point $((V,\rho_{V}),\psi_{V})$ determines an extension of $\rho_{V}$ to
$\hat{G}\setminus G$ by
$\rho_{V}(\omega)=\hat{\theta}([\omega\varsigma^{-1}|\varsigma])^{-1}\rho_{V}(\omega\varsigma^{-1})\circ\psi_{V}^{-1},\qquad\omega\in\hat{G}\setminus
G.$
A third interpretation of twisted Real representations will also be useful.
###### Proposition 2.2.
Fix $\varsigma\in\hat{G}\setminus G$. A $(\hat{\theta},\lambda)$-twisted Real
representation of $G$ is equivalent to the data of a $\theta$-twisted
representation of $G$ on $V$ together with a non-degenerate bilinear form
$\langle-,-\rangle:V\times V\rightarrow\mathbb{C}$ which satisfies the twisted
$G$-invariance condition
$\langle\rho(g)v_{1},\rho(\varsigma
g\varsigma^{-1})v_{2}\rangle=\lambda(g)\frac{\hat{\theta}([\varsigma|g])}{\hat{\theta}([\varsigma
g\varsigma^{-1}|\varsigma])}\langle v_{1},v_{2}\rangle,\qquad g\in G$
and the twisted symmetry condition
$\langle
v_{1},v_{2}\rangle=\lambda(\varsigma)\theta([\varsigma^{-1}|\varsigma^{-1}])\langle\rho(\varsigma^{-2})v_{2},v_{1}\rangle$
for all $v_{1},v_{2}\in V$.
###### Proof.
Let $(V,\rho)$ be a $(\hat{\theta},\lambda)$-twisted Real representation of
$G$. Fix $\varsigma\in\hat{G}\setminus G$ and define a non-degenerate bilinear
form on $V$ by
$\langle v_{1},v_{2}\rangle=\rho(\varsigma^{-1})^{-1}(v_{1})v_{2}.$ (5)
With this definition, $\langle\rho(g)v_{1},\rho(\varsigma
g\varsigma^{-1})v_{2}\rangle$ is equal to
$\displaystyle\rho(\varsigma^{-1})^{-1}(\rho(g)(v_{1}))(\rho(\varsigma
g\varsigma^{-1})v_{2})$ $\displaystyle=$ $\displaystyle\lambda(\varsigma
g)\hat{\theta}([\varsigma^{-1}|\varsigma g])^{-1}\rho(\varsigma
g)^{-\vee}(\textup{\text{ev}}_{V}(v_{1}))(\rho(\varsigma
g\varsigma^{-1})v_{2})$ $\displaystyle=$ $\displaystyle\lambda(\varsigma
g)\lambda(\varsigma^{-1})\hat{\theta}([\varsigma^{-1}|\varsigma
g])^{-1}\hat{\theta}([\varsigma g|\varsigma^{-1}])^{-1}\rho(\varsigma
g\varsigma^{-1})^{-\vee}\rho(\varsigma^{-1})^{-1}(v_{1})(\rho(\varsigma
g\varsigma^{-1})v_{2})$ $\displaystyle=$
$\displaystyle\lambda(g)\frac{\hat{\theta}([\varsigma|g])}{\hat{\theta}([\varsigma
g\varsigma^{-1}|\varsigma])}\langle v_{1},v_{2}\rangle.$
The first two equalities follow from equation (4) and the third from the
$2$-cocycle condition on $\hat{\theta}$. Similarly, we compute
$\displaystyle\langle v_{1},v_{2}\rangle$ $\displaystyle=$
$\displaystyle\lambda(\varsigma)\hat{\theta}([\varsigma^{-1}|\varsigma])^{-1}\rho(\varsigma)^{-\vee}(\textup{\text{ev}}_{V}(v_{1}))v_{2}$
$\displaystyle=$
$\displaystyle\lambda(\varsigma)\hat{\theta}([\varsigma^{-1}|\varsigma])^{-1}\rho(\varsigma^{-2})^{-\vee}\circ\rho(\varsigma)^{-\vee}(\textup{\text{ev}}_{V}(v_{1}))(\rho(\varsigma^{-2})v_{2})$
$\displaystyle=$
$\displaystyle\lambda(\varsigma)\hat{\theta}([\varsigma^{-1}|\varsigma])^{-1}\hat{\theta}([\varsigma^{-2}|\varsigma])^{-1}\textup{\text{ev}}_{V}(v_{1})(\rho(\varsigma^{-1})^{-1}\circ\rho(\varsigma^{-2})v_{2})$
$\displaystyle=$
$\displaystyle\lambda(\varsigma)\hat{\theta}([\varsigma^{-1}|\varsigma^{-1}])\langle\rho(\varsigma^{-2})v_{2},v_{1}\rangle.$
Conversely, given $(V,\rho)\in\textup{\text{Rep}}^{\theta}(G)$ with non-degenerate bilinear form $\langle-,-\rangle$ satisfying the conditions of the proposition, define $\rho(\varsigma^{-1})$ by equation (5) and set
$\rho(\omega)=\hat{\theta}([\omega\varsigma|\varsigma^{-1}])^{-1}\rho(\omega\varsigma)\circ\rho(\varsigma^{-1}),\qquad\omega\in\hat{G}\setminus
G.$
The verification that $\rho$ is a $(\hat{\theta},\lambda)$-twisted Real
representation of $G$ mirrors the calculations from the previous paragraph. ∎
A $(\hat{\theta},\lambda)$-twisted Real representation is called _irreducible_
if it has no non-trivial Real subrepresentations. The direct sum
$(V,\psi_{V})\oplus(W,\psi_{W})=(V\oplus W,\psi_{V}\oplus\psi_{W})$ allows for
the following formulation of a Real analogue of Maschke’s lemma.
###### Proposition 2.3.
Let $V\in\textup{\text{RRep}}^{(\hat{\theta},\lambda)}(G)$ be irreducible.
Then the restriction of $V$ to $G$ is irreducible or of the form $U\oplus
P^{(\hat{\theta},\lambda)}(U)$ for an irreducible
$U\in\textup{\text{Rep}}^{\theta}(G)$.
###### Proof.
Interpret $V$ as a $\theta$-twisted representation of $G$ with compatible
bilinear form $\langle-,-\rangle$, as in Proposition 2.2, and suppose that the
restriction $V_{|G}$ has a non-trivial irreducible $\theta$-twisted
subrepresentation $U$. The twisted $G$-invariance of $\langle-,-\rangle$
implies that the orthogonal complement $U^{\perp}$ is a $\theta$-twisted
subrepresentation of $V_{|G}$ and $V_{|G}=U\oplus U^{\perp}$ as
$\theta$-twisted representations. Since $V$ is irreducible, the map
$\rho(\varsigma):V^{\vee}\rightarrow V$ restricts to a map
$\rho(\varsigma):U^{\vee}\rightarrow U^{\perp}$ which defines an isomorphism
$P^{(\hat{\theta},\lambda)}(U)\xrightarrow[]{\sim}U^{\perp}$ of
$\theta$-twisted representations. ∎
### 2.2. A Frobenius–Schur indicator
Keep the notation of Section 2.1.
###### Definition 2.4.
The _$(\hat{\theta},\lambda)$ -twisted Frobenius–Schur element_ is
$\nu_{(\hat{\theta},\lambda)}=\sum_{\varsigma\in\hat{G}\setminus
G}\frac{\lambda(\varsigma)}{\hat{\theta}([\varsigma|\varsigma])}l_{\varsigma^{2}}\in\mathbb{C}^{\theta^{-1}}[G].$
When $(\hat{\theta},\lambda)$ is clear from the context, we write $\nu$ for
$\nu_{(\hat{\theta},\lambda)}$. Note that
$\nu_{(\hat{\theta},\lambda)}=-\nu_{(\delta\hat{\theta},\lambda)}=-\nu_{(\hat{\theta},\pi\lambda)}.$
###### Lemma 2.5.
The element $\nu_{(\hat{\theta},\lambda)}$ is a $\theta$-twisted class
function on $G$.
###### Proof.
The statement amounts to the identity
$\hat{\theta}([h\varsigma h^{-1}|h\varsigma
h^{-1}])^{-1}=\uptau(\theta)([h]\varsigma^{2})\hat{\theta}([\varsigma|\varsigma])^{-1},\qquad
h\in G,\;\varsigma\in\hat{G}\backslash G,$
which is seen to hold using equation (2). ∎
We require the following elementary result from linear algebra.
###### Lemma 2.6.
Let $V$ be a finite dimensional vector space and
$\phi\in\textup{\text{Hom}}_{\mathbb{C}}(V,V)$. Then
$\textup{\text{tr}}_{V}\,\phi$ is equal to the trace of the map
$\iota_{\phi}:\textup{\text{Hom}}_{\mathbb{C}}(V,V^{\vee})\rightarrow\textup{\text{Hom}}_{\mathbb{C}}(V,V^{\vee}),\qquad
f\mapsto f^{\vee}\circ\textup{\text{ev}}_{V}\circ\phi.$
###### Proof.
Let $\dim_{\mathbb{C}}V=v$. Fix a basis of $V$ with induced basis
$\\{E_{ij}\\}_{i,j=1}^{v}$ of $\textup{\text{Hom}}_{\mathbb{C}}(V,V)$. Writing
$\phi=\sum_{i,j=1}^{v}\phi_{ij}E_{ij}$ in this basis, we compute
$\iota_{\phi}(E_{ij})=\sum_{k=1}^{v}\phi_{ik}E_{jk}$ so that
$\textup{\text{tr}}_{\textup{\text{Hom}}_{\mathbb{C}}(V,V^{\vee})}\,\iota_{\phi}=\sum_{i,j=1}^{v}\iota_{\phi}(E_{ij})_{ij}=\sum_{i,j,k=1}^{v}\phi_{ik}(E_{jk})_{ij}=\sum_{i,j,k=1}^{v}\phi_{ik}\delta_{ji}\delta_{kj}=\textup{\text{tr}}_{V}\,\phi.\qed$
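The coordinate computation in the proof can be checked numerically. The following sketch (ours, not from the paper) identifies $V^{\vee}$ with $V$ via the chosen basis, so that $\textup{Hom}_{\mathbb{C}}(V,V^{\vee})$ has basis $\{E_{ab}\}$, and sums the diagonal coefficients of $\iota_{\phi}$:

```python
import random

# Lemma 2.6 in coordinates (our sketch): with V^vee identified with V via a
# basis, iota_phi acts on matrices by (iota_phi F)[k][j] = sum_m phi[m][j] F[m][k],
# and its trace over the n^2-dimensional space Hom(V, V^vee) equals tr(phi).

n = 3
phi = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

def iota(phi, F):
    # f |-> f^vee o ev_V o phi, written on matrices
    return [[sum(phi[m][j] * F[m][k] for m in range(n)) for j in range(n)]
            for k in range(n)]

def basis_matrix(a, b):
    return [[1 if (r, c) == (a, b) else 0 for c in range(n)] for r in range(n)]

# trace of the operator iota_phi = sum over basis matrices E_ab of the (a,b)
# coefficient of iota_phi(E_ab)
tr_iota = sum(iota(phi, basis_matrix(a, b))[a][b]
              for a in range(n) for b in range(n))
tr_phi = sum(phi[a][a] for a in range(n))
print(abs(tr_iota - tr_phi) < 1e-12)  # True
```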
Let $V\in\textup{\text{Rep}}^{\theta}(G)$ and
$\phi\in\textup{\text{Hom}}_{G}(V,V)$. Consider the map
$\iota_{\phi}:\textup{\text{Hom}}_{G}(V,P^{(\hat{\theta},\lambda,\varsigma)}(V))\rightarrow\textup{\text{Hom}}_{G}(V,P^{(\hat{\theta},\lambda,\varsigma)}(V)),\qquad
f\mapsto
P^{(\hat{\theta},\lambda,\varsigma)}(f)\circ\Theta^{(\hat{\theta},\lambda,\varsigma)}_{V}\circ\phi.$
Independence, up to equivalence, of the duality structure on the choice of $\varsigma\in\hat{G}\setminus G$ implies that $\iota_{\phi}$ is independent of $\varsigma$. The coherence condition (3) implies that $\iota:=\iota_{\textup{\text{id}}_{V}}$ is an involution.
For each $V\in\textup{\text{Rep}}^{\theta}(G)$, define
$\tau^{V}:\textup{\text{Hom}}_{G}(V,V)\rightarrow
Z(\mathbb{C}^{\theta^{-1}}[G]),\qquad\phi\mapsto\sum_{g\in
G}\textup{\text{tr}}_{V}(\phi\circ\rho_{V}(g))l_{g}.$
Note that $\tau^{V}(\textup{\text{id}}_{V})=\chi_{V}$.
###### Theorem 2.7.
For each $V\in\textup{\text{Rep}}^{\theta}(G)$ and
$\phi\in\textup{\text{Hom}}_{G}(V,V)$, there is an equality
$\textup{\text{tr}}_{\textup{\text{Hom}}_{G}(V,P^{(\hat{\theta},\lambda)}(V))}\,\iota_{\phi}=\langle\tau^{V}(\phi),\nu_{(\hat{\theta},\lambda)}\rangle_{G}.$
###### Proof.
Write $(P,\Theta)$ for
$(P^{(\hat{\theta},\lambda,\varsigma)},\Theta^{(\hat{\theta},\lambda,\varsigma)})$.
We compute
$\displaystyle\textup{\text{tr}}_{\textup{\text{Hom}}_{G}(V,P(V))}\,\iota_{\phi}$
$\displaystyle=$
$\displaystyle\textup{\text{tr}}_{\textup{\text{Hom}}_{G}(V,P(V))}\,(f\mapsto\frac{\lambda(\varsigma)}{\hat{\theta}([\varsigma|\varsigma])}f^{\vee}\circ\textup{\text{ev}}_{V}\circ\rho(\varsigma^{2})^{-1}\circ\phi)$
$\displaystyle=$
$\displaystyle\frac{\lambda(\varsigma)}{\hat{\theta}([\varsigma|\varsigma])}\textup{\text{tr}}_{V}\,(\rho(\varsigma^{2})^{-1}\circ\phi),$
the second equality following from Lemma 2.6. Since
$\textup{\text{tr}}_{\textup{\text{Hom}}_{G}(V,P(V))}\,\iota_{\phi}$ is
independent of the choice $\varsigma\in\hat{G}\setminus G$ used in the
definition of $\iota_{\phi}$, we average over all such choices to obtain
$\textup{\text{tr}}_{\textup{\text{Hom}}_{G}(V,P(V))}\,\iota_{\phi}=\frac{1}{|G|}\sum_{\varsigma\in\hat{G}\backslash
G}\frac{\lambda(\varsigma)}{\hat{\theta}([\varsigma|\varsigma])}\textup{\text{tr}}_{V}\,(\rho(\varsigma^{2})^{-1}\circ\phi).$
On the other hand, we have
$\displaystyle\langle\tau^{V}(\phi),\nu\rangle_{G}$ $\displaystyle=$
$\displaystyle\frac{1}{|G|}\sum_{\varsigma\in\hat{G}\backslash
G}\frac{\lambda(\varsigma)}{\hat{\theta}([\varsigma|\varsigma])}\theta([\varsigma^{2}|\varsigma^{-2}])^{-1}\textup{\text{tr}}_{V}\,\left(\phi\circ\rho_{V}(\varsigma^{-2})\right)$
$\displaystyle=$
$\displaystyle\frac{1}{|G|}\sum_{\varsigma\in\hat{G}\backslash
G}\frac{\lambda(\varsigma)}{\hat{\theta}([\varsigma|\varsigma])}\textup{\text{tr}}_{V}\,\left(\rho_{V}(\varsigma^{2})^{-1}\circ\phi\right),$
thereby proving the desired equality. ∎
Recall that $\delta$ is a cocycle representative of the generator of
$H^{2+\pi}(BC_{2})\simeq C_{2}$.
###### Corollary 2.8.
Let $V$ be an irreducible $\theta$-twisted representation of $G$. Then
$\langle\chi_{V},\nu_{(\hat{\theta},\lambda)}\rangle_{G}=\begin{cases}1&\mbox{if
and only if $V$ lifts to a $(\hat{\theta},\lambda)$-twisted Real
representation},\\\ -1&\mbox{if and only if $V$ lifts to a
$(\delta\hat{\theta},\lambda)$-twisted Real representation},\\\
0&\mbox{otherwise}.\end{cases}$
When $\langle\chi_{V},\nu_{(\hat{\theta},\lambda)}\rangle_{G}=\pm 1$, the
twisted Real structure on $V$ is unique up to isomorphism.
###### Proof.
Schur’s Lemma for $\textup{\text{Rep}}^{\theta}(G)$ implies that
$\textup{\text{Hom}}_{G}(V,P^{(\hat{\theta},\lambda)}(V))\simeq\mathbb{C}$ if
$P^{(\hat{\theta},\lambda)}(V)\simeq V$ and
$\textup{\text{Hom}}_{G}(V,P^{(\hat{\theta},\lambda)}(V))=0$ otherwise. Hence,
if $\textup{\text{Hom}}_{G}(V,P^{(\hat{\theta},\lambda)}(V))=0$, then $V$ does
not lift to a Real representation and the statement follows by applying
Theorem 2.7 with $\phi=\textup{\text{id}}_{V}$. If
$\textup{\text{Hom}}_{G}(V,P^{(\hat{\theta},\lambda)}(V))\simeq\mathbb{C}$,
then a non-zero element
$\psi_{V}\in\textup{\text{Hom}}_{G}(V,P^{(\hat{\theta},\lambda)}(V))$ is an
isomorphism which, by Theorem 2.7, satisfies
$P^{(\hat{\theta},\lambda)}(\psi_{V})\circ\Theta_{V}=\langle\chi_{V},\nu_{(\hat{\theta},\lambda)}\rangle_{G}\cdot\psi_{V}.$
The first statement of the corollary now follows from the homotopy fixed point
interpretation of twisted Real representations. Uniqueness of the Real
structure up to isomorphism follows from one dimensionality of
$\textup{\text{Hom}}_{G}(V,P^{(\hat{\theta},\lambda)}(V))$. ∎
In the setting of Corollary 2.8, if
$\langle\chi_{V},\nu_{(\hat{\theta},\lambda)}\rangle_{G}=0$, then $V$
determines an irreducible $(\hat{\theta},\lambda)$-twisted Real representation
by $H^{(\hat{\theta},\lambda)}(V)=V\oplus P^{(\hat{\theta},\lambda)}(V)$ with
its hyperbolic homotopy fixed point structure [You21, §7.3]. Note that
$H^{(\hat{\theta},\lambda)}(V)\simeq
H^{(\hat{\theta},\lambda)}(P^{(\hat{\theta},\lambda)}(V))$.
###### Corollary 2.9.
There are finitely many isomorphism classes of irreducible
$(\hat{\theta},\lambda)$-twisted Real representations.
###### Proof.
This follows from Proposition 2.3, finiteness of isomorphism classes of
irreducible $\theta$-twisted representations and the final statement of
Corollary 2.8. ∎
###### Corollary 2.10.
There is an equality
$\nu_{(\hat{\theta},\lambda)}=\sum_{\begin{subarray}{c}V\in\textup{\text{Irr}}^{\theta}(G)\\\
P^{(\hat{\theta},\lambda)}(V)\simeq
V\end{subarray}}\langle\chi_{V},\nu_{(\hat{\theta},\lambda)}\rangle_{G}\chi_{V}.$
###### Proof.
This follows from the second part of Corollary 2.8 and the fact that the set
$\\{\chi_{V}\\}_{V\in\textup{\text{Irr}}^{\theta}(G)}$ of irreducible
$\theta$-twisted characters is an orthonormal basis of the space of
$\theta$-twisted class functions on $G$. ∎
Various instances of the element $\nu_{(\hat{\theta},\lambda)}$ and Corollary
2.8 are known:
1. (1)
The classical setting of Frobenius and Schur [FS06] corresponds to taking
$\hat{G}=G\times C_{2}$ with $\pi$ the projection to the second factor and the
cohomological data $\hat{\theta}$ and $\lambda$ trivial. The conditions on the
bilinear form $\langle-,-\rangle$ from Proposition 2.2 reduce to
$G$-invariance and symmetry; if $\hat{\theta}=\delta$, then
$\langle-,-\rangle$ is skew-symmetric. Corollary 2.8 then gives the standard
necessary and sufficient condition for $V$ to admit a $G$-invariant bilinear
form and so be defined over $\mathbb{R}$ (in the symmetric case) or
$\mathbb{H}$ (in the skew-symmetric case).
2. (2)
For a general $C_{2}$-graded group $\hat{G}$, taking $\hat{\theta}$ and $\lambda$ to be trivial recovers Gow's generalized
Frobenius–Schur element used in the character theoretic study of
$2$-regularity of finite groups [Gow79, §2]. For representation theoretic
applications, see [RT21].
3. (3)
Take $\hat{G}=G\times C_{2}$. In this case, there is a homomorphism
$G\rightarrow\hat{G}$ which splits $\pi$. When $\hat{\theta}$ is in the image
of the resulting map $H^{2}(BG;C_{2})\rightarrow H^{2+\pi}(B\hat{G})$ and
$\lambda$ is trivial, $\nu_{(\hat{\theta},1)}$ recovers Turaev’s generalized
Frobenius–Schur element studied in the context of closed unoriented TFT
[Tur07]. See [IT23] for a generalization in the setting of closed
$\textup{\text{Pin}}_{2}^{-}$ TFT.
4. (4)
When $\phi=\textup{\text{id}}_{V}$, the trace of Theorem 2.7 is an instance of
Shimizu's Frobenius–Schur indicator in a category with duality [Shi12].
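The classical case in item (1) is easy to check by machine. The following sketch (ours, not from the paper) computes the classical Frobenius–Schur indicator $\frac{1}{|G|}\sum_{g\in G}\chi_{V}(g^{2})$, i.e. the case $\hat{G}=G\times C_{2}$ with trivial $\hat{\theta}$ and $\lambda$, for the two dimensional irreducible representation of the quaternion group $Q_{8}$:

```python
# A minimal numerical sketch (ours): the classical Frobenius-Schur indicator
# (1/|G|) sum_g chi_V(g^2) for the 2-dimensional irreducible representation
# of Q_8, realized by the unit quaternions as 2x2 complex matrices.

def mat_mul(a, b):
    return [[sum(a[r][t] * b[t][c] for t in range(2)) for c in range(2)]
            for r in range(2)]

def trace(a):
    return a[0][0] + a[1][1]

def neg(a):
    return [[-x for x in row] for row in a]

one = [[1, 0], [0, 1]]
i = [[1j, 0], [0, -1j]]
j = [[0, 1], [-1, 0]]
k = mat_mul(i, j)

Q8 = [one, i, j, k] + [neg(m) for m in (one, i, j, k)]

indicator = sum(trace(mat_mul(g, g)) for g in Q8) / len(Q8)
print(indicator.real)  # -1.0: quaternionic type, no symmetric invariant form
```

The value $-1$ reflects the skew-symmetric case of item (1): the two dimensional irreducible representation of $Q_{8}$ carries no symmetric invariant bilinear form.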
###### Example 2.11.
Let $G=C_{n}$ with generator $r$ and $\zeta=e^{\frac{2\pi\sqrt{-1}}{n}}$. The
one dimensional representations $\\{\rho_{k}\mid 0\leq k\leq n-1\\}$, defined
by $\rho_{k}(r)=\zeta^{k}$, constitute a complete set of irreducible
representations of $G$. Take $\hat{\theta}$ and $\lambda$ to be trivial in
this example.
1. (1)
Let $\hat{G}=C_{n}\times C_{2}$ with $\pi$ projection to the second factor. We
have
$\langle\chi_{k},\nu\rangle_{G}=\begin{cases}1&\mbox{if $k=0$ or
$k=\frac{n}{2}$},\\\ 0&\text{otherwise},\end{cases}$
whence the trivial representation and, when $n$ is even, the sign representation admit Real structures. These are precisely the irreducible representations
which are defined over $\mathbb{R}$.
2. (2)
Let $\hat{G}=C_{2n}$ with generator $\varsigma$ satisfying $\varsigma^{2}=r$
and $C_{2}$-grading $\pi:\hat{G}\rightarrow C_{2}$ determined by
$\pi(\varsigma)=-1$. Assume that $n$ is even, as otherwise $\hat{G}\simeq
C_{n}\times C_{2}$ as $C_{2}$-graded groups. We have
$\nu=2\sum_{j=0}^{\frac{n}{2}-1}l_{r^{2j+1}}$ from which we compute
$\langle\chi_{k},\nu\rangle_{G}=\frac{2}{n}\sum_{j=0}^{\frac{n}{2}-1}\zeta^{k(2j+1)}=\begin{cases}1&\mbox{if }k=0,\\ -1&\mbox{if }k=\frac{n}{2},\\ 0&\mbox{otherwise}.\end{cases}$
The Real structure on $\rho_{0}$ is given by
$\rho_{0}(\varsigma)(1^{\vee})=1$. The same formula gives the $\delta$-twisted
Real structure on $\rho_{\frac{n}{2}}$.
3. (3)
Let $\hat{G}$ be the dihedral group $D_{2n}=\langle r,s\mid
r^{n}=s^{2}=e,\,srs=r^{-1}\rangle$ with $\pi:\hat{G}\rightarrow C_{2}$
determined by $\pi(r)=1$ and $\pi(s)=-1$. We have $\nu=nl_{e}$ from which we
compute $\langle\chi_{k},\nu\rangle_{G}=1$. Each irreducible representation
$\rho_{k}$ can therefore be extended to a Real representation by the formula
$\rho_{k}(s)(1^{\vee})=1$.∎
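The indicators in parts (2) and (3) can be checked numerically. The following sketch (ours) evaluates $\langle\chi_{k},\nu\rangle_{G}=\frac{1}{|G|}\sum_{\varsigma\in\hat{G}\setminus G}\chi_{k}(\varsigma^{-2})$, as in the proof of Theorem 2.7, with $n$ even in the $C_{2n}$ case:

```python
import cmath

# Our numerical check of parts (2) and (3): <chi_k, nu>_G is computed as
# (1/|G|) * sum over odd elements varsigma of chi_k(varsigma^{-2}),
# with hat{theta} and lambda trivial.

def indicators_C2n(n):
    # hat{G} = C_{2n} = <varsigma> with varsigma^2 = r (n even); the odd
    # elements are the odd powers varsigma^m, and (varsigma^m)^2 = r^m.
    zeta = cmath.exp(2j * cmath.pi / n)
    return [sum(zeta ** (-k * m) for m in range(1, 2 * n, 2)).real / n
            for k in range(n)]

def indicators_D2n(n):
    # hat{G} = D_{2n}; the n reflections s r^j all square to e = r^0.
    zeta = cmath.exp(2j * cmath.pi / n)
    squares = [0] * n
    return [sum(zeta ** (-k * m) for m in squares).real / n for k in range(n)]

print([round(x) for x in indicators_C2n(6)])  # [1, 0, 0, -1, 0, 0]
print([round(x) for x in indicators_D2n(6)])  # [1, 1, 1, 1, 1, 1]
```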
###### Example 2.12.
Let $\hat{G}=Q_{8}$ be the quaternion group with $C_{2}$-grading given on the
standard generators by $\pi(i)=1$ and $\pi(j)=-1$. Then $G\simeq C_{4}$ is
generated by $i$. We have $\nu=4l_{-1}$ so that
$\langle\chi_{k},\nu\rangle=(-1)^{k}$. The Real structure on $\rho_{k}$, which
is $\delta$-twisted precisely when $k$ is odd, is determined by
$\rho_{k}(j)(1^{\vee})=1$. ∎
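This example can also be checked directly (our sketch): the four odd elements $\pm j,\pm k$ of $Q_{8}$ all square to $-1=i^{2}$, so the indicator reduces to $\chi_{k}(i^{-2})$.

```python
# Our numerical check: G = C_4 = <i> with chi_k(i^m) = (sqrt(-1))^{km}; the
# four odd elements of Q_8 square to -1 = i^2, so
# <chi_k, nu>_G = (1/4) * 4 * chi_k(i^{-2}) = (-1)^k.

chi = lambda k, m: 1j ** (k * m)          # chi_k(i^m)
squares = [2, 2, 2, 2]                    # (+-j)^2 = (+-k)^2 = i^2
indicators = [sum(chi(k, -m) for m in squares).real / 4 for k in range(4)]
print([round(x) for x in indicators])  # [1, -1, 1, -1]
```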
###### Example 2.13.
Let $G=A_{4}$ be the alternating group on $4$ letters. The irreducible
representations of $G$ are the trivial representation $U$, two non-trivial one
dimensional representations $U^{\prime}$ and $U^{\prime\prime}$ and a three
dimensional representation $V$. Writing $\zeta=e^{\frac{2\pi\sqrt{-1}}{3}}$,
we take the convention that their characters are
$\displaystyle\chi_{U^{\prime}}(123)$ $\displaystyle=\zeta,$
$\displaystyle\chi_{U^{\prime}}(132)$ $\displaystyle=\zeta^{2},$
$\displaystyle\chi_{U^{\prime}}((12)(34))$ $\displaystyle=1,$
$\displaystyle\chi_{U^{\prime\prime}}(123)$ $\displaystyle=\zeta^{2},$
$\displaystyle\chi_{U^{\prime\prime}}(132)$ $\displaystyle=\zeta,$
$\displaystyle\chi_{U^{\prime\prime}}((12)(34))$ $\displaystyle=1,$
$\displaystyle\chi_{V}(123)$ $\displaystyle=0,$ $\displaystyle\chi_{V}(132)$
$\displaystyle=0,$ $\displaystyle\chi_{V}((12)(34))$ $\displaystyle=-1.$
1. (1)
Taking $\hat{G}=A_{4}\times C_{2}$ with $\pi$ the projection to the second
factor gives $\nu=4l_{(1)}+\sum_{3\mbox{\tiny-cycles}\;\sigma}l_{\sigma}$.
Using this, we compute
$\langle\chi_{U},\nu\rangle=1,\qquad\langle\chi_{U^{\prime}},\nu\rangle=0,\qquad\langle\chi_{U^{\prime\prime}},\nu\rangle=0,\qquad\langle\chi_{V},\nu\rangle=1.$
Hence, only $U$ and $V$ admit Real structures.
2. (2)
Taking $\hat{G}=S_{4}$ the symmetric group with $\pi$ the sign representation
gives $\nu=6l_{(1)}+2(l_{(12)(34)}+l_{(13)(24)}+l_{(14)(23)})$. Using this, we
compute
$\langle\chi_{U},\nu\rangle=1,\qquad\langle\chi_{U^{\prime}},\nu\rangle=1,\qquad\langle\chi_{U^{\prime\prime}},\nu\rangle=1,\qquad\langle\chi_{V},\nu\rangle=1.$
Hence, every irreducible representation admits a Real structure. Taking
$\lambda$ to be non-trivial, that is, $\lambda=\pi$, replaces $\nu$ with its
negative and leads to $\delta$-twisted Real structures on all irreducible
representations.∎
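Both parts of this example can be re-derived by machine. The following sketch (ours) computes $\langle\chi,\nu\rangle_{G}=\frac{1}{|A_{4}|}\sum_{\varsigma\in\hat{G}\setminus G}\chi(\varsigma^{-2})$ directly from Definition 2.4, with $\hat{\theta}$ and $\lambda$ trivial ($\chi_{U^{\prime\prime}}$ is the conjugate of $\chi_{U^{\prime}}$ and is omitted):

```python
import cmath
from itertools import permutations

def compose(p, q):  # (p o q)(x) = p[q[x]]
    return tuple(p[q[x]] for x in range(4))

def inverse(p):
    q = [0] * 4
    for x, y in enumerate(p):
        q[y] = x
    return tuple(q)

def sign(p):
    return (-1) ** sum(p[a] > p[b] for a in range(4) for b in range(a + 1, 4))

S4 = list(permutations(range(4)))
A4 = [p for p in S4 if sign(p) == 1]
V4 = [p for p in A4 if compose(p, p) == tuple(range(4))]  # Klein four-group
c3inv = (2, 0, 1, 3)                                      # inverse of (123)
omega = cmath.exp(2j * cmath.pi / 3)

def chi_U(p):                # trivial character
    return 1

def chi_Uprime(p):           # A_4 -> A_4/V_4 = C_3 -> C^*, (123) |-> omega
    q, t = p, 0
    while q not in V4:
        q, t = compose(q, c3inv), t + 1
    return omega ** t

def chi_V(p):                # three dimensional character: fixed points - 1
    return sum(p[x] == x for x in range(4)) - 1

def indicator(chi, odd):     # odd = elements whose squares land in A_4
    return sum(chi(inverse(compose(s, s))) for s in odd).real / 12

odd_S4 = [p for p in S4 if sign(p) == -1]
# (1) hat{G} = A_4 x C_2: the odd elements (g, -1) square to (g^2, 1).
print([round(indicator(chi, A4)) for chi in (chi_U, chi_Uprime, chi_V)])
# (2) hat{G} = S_4: the odd elements are the odd permutations.
print([round(indicator(chi, odd_S4)) for chi in (chi_U, chi_Uprime, chi_V)])
```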
## 3\. Two dimensional unoriented open/closed topological field theory
### 3.1. Algebraic characterization
Following Lazaroiu [Laz01] and Moore and Segal [MS06], we begin by recalling
an algebraic characterization of two dimensional oriented open/closed
topological field theories (TFTs). See also [AN06, LP08]. In topological
terms, such a TFT is a symmetric monoidal functor
$\mathcal{Z}:\textup{\text{Bord}}_{2}^{\textup{\text{or}},D}\rightarrow\textup{\text{Vect}}_{\mathbb{C}}$.
Here $\textup{\text{Bord}}_{2}^{\textup{\text{or}},D}$ is the two dimensional
open/closed bordism category [LP08, §3]. Objects are compact oriented
$1$-manifolds with boundary components labelled by elements of a given set
$D$. Morphisms are isomorphism classes of oriented bordisms with corners whose
free boundaries are $D$-labelled compatibly with the incoming and outgoing
boundaries. The monoidal structure of
$\textup{\text{Bord}}_{2}^{\textup{\text{or}},D}$ is disjoint union.
###### Theorem 3.1 ([MS06, Theorem 1]).
Two dimensional oriented open/closed TFTs are classified by the following
data:
1. (1)
A commutative Frobenius algebra $A$ with identity $1_{A}$ and trace
$\langle-\rangle_{0}:A\rightarrow\mathbb{C}$.
2. (2)
A Calabi–Yau category $\mathcal{B}$, that is, a $\mathbb{C}$-linear additive
category with cyclic traces
$\langle-\rangle_{V}:\textup{\text{Hom}}_{\mathcal{B}}(V,V)\rightarrow\mathbb{C}$,
$V\in\mathcal{B}$, whose associated pairings
$\langle-,-\rangle_{V,W}:\textup{\text{Hom}}_{\mathcal{B}}(W,V)\otimes\textup{\text{Hom}}_{\mathcal{B}}(V,W)\xrightarrow[]{\circ}\textup{\text{Hom}}_{\mathcal{B}}(V,V)\xrightarrow[]{\langle-\rangle_{V}}\mathbb{C}$
are non-degenerate.
3. (3)
For each $V\in\mathcal{B}$, a linear _boundary-bulk_ map
$\tau^{V}:\textup{\text{Hom}}_{\mathcal{B}}(V,V)\rightarrow A$ and linear
_bulk-boundary_ map
$\tau_{V}:A\rightarrow\textup{\text{Hom}}_{\mathcal{B}}(V,V)$.
This data is required to satisfy the following conditions:
1. (i)
$\tau_{V}$ is a unital algebra homomorphism.
2. (ii)
$\tau_{W}(a)\circ\phi=\phi\circ\tau_{V}(a)$ for all $a\in A$ and
$\phi\in\textup{\text{Hom}}_{\mathcal{B}}(V,W)$.
3. (iii)
$\langle\phi,\tau_{V}(a)\rangle_{V,V}=\langle\tau^{V}(\phi),a\rangle_{0}$ for
all $a\in A$ and $\phi\in\textup{\text{Hom}}_{\mathcal{B}}(V,V)$.
4. (iv)
(The _oriented Cardy condition_) Let $\\{\psi_{i}\\}_{i}$ be a basis of
$\textup{\text{Hom}}_{\mathcal{B}}(V,W)$ and $\\{\psi^{i}\\}_{i}$ the basis of
$\textup{\text{Hom}}_{\mathcal{B}}(W,V)$ which is dual with respect to
$\langle-,-\rangle_{V,W}$. Then $\tau_{V}\circ\tau^{W}$ is equal to the map
$\textup{\text{Hom}}_{\mathcal{B}}(W,W)\rightarrow\textup{\text{Hom}}_{\mathcal{B}}(V,V),\qquad\phi\mapsto\sum_{i}\psi^{i}\circ\phi\circ\psi_{i}.$
###### Remarks 3.2.
1. (1)
When $\mathcal{B}$ has a single object, the algebraic data of Theorem 3.1 is
called a _Cardy–Frobenius_ or _knowledgeable_ Frobenius algebra [AN06, LP08].
2. (2)
Let $\mathcal{Z}$ be an oriented open/closed TFT with object set $D$. The
category $\mathcal{B}$ has objects $D$, morphisms
$\textup{\text{Hom}}_{\mathcal{B}}(V,W)$ given by the value of $\mathcal{Z}$
on the closed interval labelled by $V$ and $W$ and oriented from $V$ to $W$,
and composition defined by the value of $\mathcal{Z}$ on the flattened pair of
pants. (Since $\mathcal{B}$ is assumed to be additive, it may be necessary to
formally add elements to $D$ to ensure the existence of direct sums; see
[MS06, §2.5].) The value of $\mathcal{Z}$ on the flattened cap defines the
Calabi–Yau traces.
3. (3)
By non-degeneracy of the Calabi–Yau pairings, the oriented Cardy condition
holds if and only if
$\textup{\text{tr}}_{\textup{\text{Hom}}_{\mathcal{B}}(V,W)}\,(f\mapsto\psi\circ
f\circ\phi)=\langle\tau^{W}(\psi),\tau^{V}(\phi)\rangle_{0}$ (6)
for all $\phi\in\textup{\text{Hom}}_{\mathcal{B}}(V,V)$ and
$\psi\in\textup{\text{Hom}}_{\mathcal{B}}(W,W)$. Following [CW10, §7.4], we
refer to equation (6) as the _baggy oriented Cardy condition_. Topologically,
the oriented Cardy condition asserts the equality of two ways of evaluating
the TFT on the annulus with boundary components labelled by $V$ and $W$.
We are interested in the extension of Theorem 3.1 to the unoriented bordism
category $\textup{\text{Bord}}_{2}^{D}$, defined analogously to
$\textup{\text{Bord}}_{2}^{\textup{\text{or}},D}$ except that objects and
morphisms are unoriented. Upon restriction to the closed sector, the extension
is known.
###### Theorem 3.3 ([TT06, Proposition 2.9]).
Two dimensional unoriented TFTs are classified by the data of an _unoriented
Frobenius algebra_ , that is, a commutative Frobenius algebra
$(A,1_{A},\langle-\rangle_{0})$ with an isometric algebra involution
$p:A\rightarrow A$ and an element $Q\in A$, the _crosscap state_ , which
satisfy the following conditions:
1. (i)
$p(Qa)=Qa$ for all $a\in A$.
2. (ii)
(_The Klein condition_) Given a basis $\\{a_{i}\\}_{i}$ of $A$ with basis
$\\{a^{i}\\}_{i}$ of $A$ dual with respect to $\langle-\rangle_{0}$, the
equality $Q^{2}=\sum_{i}p(a^{i})a_{i}$ holds.
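For concreteness, here is a toy instance (our construction, not from [TT06]) where the Klein condition can be verified by hand: $A=\mathbb{C}[C_{2}]$ with basis $\{1,s\}$, $s^{2}=1$, trace $\langle a+bs\rangle_{0}=a$, trivial involution $p=\textup{id}$ and crosscap state $Q=\sqrt{2}\cdot 1$.

```python
import math

# Toy unoriented Frobenius algebra (our illustrative choice): A = C[C_2] with
# basis {1, s}, s^2 = 1, trace <a + b s>_0 = a, p = id and Q = sqrt(2) * 1.
# We verify the Klein condition Q^2 = sum_i p(a^i) a_i; the dual basis equals
# the basis itself because the Gram matrix <a_i a_j>_0 is the identity.

def mul(x, y):  # (x0 + x1 s)(y0 + y1 s), using s^2 = 1
    return (x[0] * y[0] + x[1] * y[1], x[0] * y[1] + x[1] * y[0])

basis = [(1, 0), (0, 1)]
dual = basis                       # Gram matrix is the identity
p = lambda x: x                    # trivial involution
Q = (math.sqrt(2), 0)

lhs = mul(Q, Q)
rhs = tuple(sum(mul(p(ai), a)[c] for ai, a in zip(dual, basis))
            for c in range(2))
print(abs(lhs[0] - rhs[0]) < 1e-12, lhs[1] == rhs[1])  # True True
```

Condition (i) holds trivially since $p=\textup{id}$; the check above confirms $Q^{2}=2\cdot 1=\sum_{i}p(a^{i})a_{i}$.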
In terms of bordisms, $Q$ is the image under $\mathcal{Z}$ of the compact
Möbius strip $\mathbb{RP}^{2}\setminus\mathring{D}^{2}$,
viewed as a bordism $\varnothing\rightarrow S^{1}$ (figure omitted),
and $p$ is the image of the mapping cylinder of circle reflection,
viewed as a bordism $S^{1}\rightarrow S^{1}$ (figure omitted).
\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope{}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope\hss}}\lxSVG@closescope\endpgfpicture}}:S^{1}\rightarrow
S^{1}.$
The Klein condition is illustrated in Figure 1.
Figure 1. The equality of bordisms responsible for the Klein condition.
We now come to the main classification result.
###### Theorem 3.4.
Two dimensional unoriented open/closed TFTs are classified by the data of an
underlying closed theory, as in Theorem 3.3, together with the data of a
$\mathbb{C}$-linear strict duality $P$ on $\mathcal{B}$. This data is required
to satisfy the following conditions:
1. (i)
The functor $P$ is the identity on objects.
2. (ii)
$\langle P(\phi)\rangle_{V}=\langle\phi\rangle_{V}$ for all
$\phi\in\textup{\text{Hom}}_{\mathcal{B}}(V,V)$.
3. (iii)
$P\circ\tau_{V}=\tau_{V}\circ p$ for all $V\in\mathcal{B}$.
4. (iv)
$p\circ\tau^{V}=\tau^{V}\circ P$ for all $V\in\mathcal{B}$.
5. (v)
(The _unoriented Cardy condition_) Let $\\{\psi_{i}\\}_{i}$ be a basis of
$\textup{\text{Hom}}_{\mathcal{B}}(V,V)$ with dual basis $\\{\psi^{i}\\}_{i}$
with respect to $\langle-,-\rangle_{V,V}$. Then there is an equality
$\tau_{V}(Q)=\sum_{i}\psi^{i}\circ P(\psi_{i}).$ (7)
###### Proof.
The theorem is proved in [AN06, §4] under the assumption that $\mathcal{B}$
has a single object, where the above algebraic data is known as a _structure
algebra_. This proof generalizes immediately to allow for $\mathcal{B}$ to
have many objects, in the same way as the analogous generalization in the
oriented case [LP08, §5]. ∎
Topologically, $P$ is the image under $\mathcal{Z}$ of the mapping cylinder of
the reflection of the closed interval, so that
$P_{V,W}:\textup{\text{Hom}}_{\mathcal{B}}(V,W)\rightarrow\textup{\text{Hom}}_{\mathcal{B}}(W,V)$
comes from the bordism
[bordism diagram with boundary labels $V$ and $W$, drawn in two equivalent ways]
As indicated on the right, we will picture this bordism as embedded in
$\mathbb{R}^{3}$ with a half-twist. That $P$ is a strict involution follows
from the fact that reflection of the closed interval is an involution. We
record two basic consequences of Theorem 3.4.
###### Proposition 3.5.
1. (1)
The equality $\langle Q^{2}\rangle_{0}=\textup{\text{tr}}_{A}\,p$ holds.
2. (2)
For any $V\in\mathcal{B}$ and $\phi\in\textup{\text{Hom}}_{\mathcal{B}}(V,V)$,
the equality
$\langle\tau^{V}(\phi),Q\rangle_{0}=\textup{\text{tr}}_{\textup{\text{Hom}}_{\mathcal{B}}(V,V)}\,\iota_{\phi}$
holds, where $\iota_{\phi}$ is defined analogously to Section 2.2.
###### Proof.
Since $p$ is an involution, there exists a basis $\\{a_{i}\\}_{i}$ of $A$ such
that $p(a_{i})=s_{i}a_{i}$ with $s_{i}\in\\{1,-1\\}$. Let $\\{a^{i}\\}_{i}$ be
a dual basis, so that $\langle a^{j},a_{i}\rangle_{0}=\delta^{j}_{i}$. Since
$p$ is an isometry of $\langle-\rangle_{0}$, we have $p(a^{i})=s_{i}a^{i}$.
With these preliminaries, we compute
$\langle
Q^{2}\rangle_{0}=\langle\sum_{i}p(a^{i})a_{i}\rangle_{0}=\sum_{i}s_{i}\langle
a^{i}a_{i}\rangle_{0}=\sum_{i}s_{i}=\textup{\text{tr}}_{A}\,p.$
For the second statement, we compute
$\langle\tau^{V}(\phi),Q\rangle_{0}=\langle\phi,\tau_{V}(Q)\rangle_{V,V}=\sum_{i}\langle\phi\circ\psi^{i}\circ
P(\psi_{i})\rangle_{V}=\\\
\sum_{i}\langle\psi^{i}\circ\iota_{\phi}(\psi_{i})\rangle_{V}=\textup{\text{tr}}_{\textup{\text{Hom}}_{\mathcal{B}}(V,V)}\,\iota_{\phi}.$
The first equality is the adjointness of $\tau^{V}$ and $\tau_{V}$, the second
is the unoriented Cardy condition and the third is cyclicity of traces. ∎
By non-degeneracy of the Calabi–Yau pairings, the unoriented Cardy condition
is equivalent to the second equality from Proposition 3.5, which we term the
_baggy unoriented Cardy condition_. The unoriented Cardy condition reflects
the equality of two ways of evaluating the TFT on the Möbius strip with
boundary component labelled by $V$. See Figure 2.
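The first identity of Proposition 3.5 admits a direct numerical check in the simplest untwisted setting. The sketch below is an illustration only (not part of the formalism above): it assumes $G=S_{3}$, $\hat{G}=G\times C_{2}$ and trivial $\hat{\theta}$ and $\lambda$, in which case $Q=\sum_{g}l_{g^{2}}$, $p(l_{g})=l_{g^{-1}}$ and $\langle-\rangle_{0}$ extracts the coefficient of $l_{e}$ divided by $|G|$; the helper names are ours.

```python
from itertools import permutations

# Illustrative check of <Q^2>_0 = tr_A p in the untwisted case G = S_3,
# Ghat = G x C_2.  Here Q = sum_g l_{g^2}, p(l_g) = l_{g^{-1}}, and
# <x>_0 = (coefficient of l_e in x)/|G|.
G = list(permutations(range(3)))          # S_3 as permutation tuples

def mul(g, h):                            # (g*h)(i) = g(h(i))
    return tuple(g[h[i]] for i in range(3))

def inv(g):
    out = [0] * 3
    for i, gi in enumerate(g):
        out[gi] = i
    return tuple(out)

e = (0, 1, 2)

# <Q^2>_0 = (1/|G|) #{(g,h) : g^2 h^2 = e}
lhs = sum(1 for g in G for h in G
          if mul(mul(g, g), mul(h, h)) == e) / len(G)

# tr_A p: p permutes the class-sum basis by O -> O^{-1}, so its trace
# counts the real (self-inverse) conjugacy classes of G.
classes = {frozenset(mul(mul(t, g), inv(t)) for t in G) for g in G}
rhs = sum(1 for O in classes if frozenset(inv(g) for g in O) == O)

print(lhs, rhs)                           # both equal 3 for S_3
```

All three conjugacy classes of $S_{3}$ are real, matching the $18/6=3$ pairs counted on the left.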
The next result constructs the algebraic input of Theorem 3.4 from a
Calabi–Yau category with a contravariant involution which need not act
trivially on objects.
Figure 2. The equality of bordisms responsible for the unoriented Cardy
condition (7). All boundaries are labelled by the object $V\in\mathcal{B}$.
###### Proposition 3.6.
Let $(\mathcal{B},\tau^{\bullet},\tau_{\bullet},A)$ define a two dimensional
oriented open/closed TFT and $(p,Q)$ an unoriented lift of $A$. Let
$(P,\Theta)$ be a duality structure on $\mathcal{B}$ such that $\langle
P(\phi)\rangle_{P(V)}=\langle\phi\rangle_{V}$ for all
$\phi\in\textup{\text{Hom}}_{\mathcal{B}}(V,V)$ and
$P\circ\tau_{V}=\tau_{P(V)}\circ p$ and $p\circ\tau^{V}=\tau^{P(V)}\circ P$
for all $V\in\mathcal{B}$. If the equality
$\textup{\text{tr}}_{\textup{\text{Hom}}_{\mathcal{B}}(V,P(V))}\,\iota_{\phi}=\langle\tau^{V}(\phi),Q\rangle_{0}$
(8)
holds for all $\phi\in\textup{\text{Hom}}_{\mathcal{B}}(V,V)$, then
$(\mathcal{B}^{\tilde{h}C_{2}},\tau^{\bullet},\tau_{\bullet},A,p,Q)$ defines a
two dimensional unoriented open/closed TFT.
###### Proof.
By Lemma 1.4, the triple
$(\mathcal{B}^{\tilde{h}C_{2}},P^{\tilde{h}C_{2}},\Theta^{\tilde{h}C_{2}})$ is
a category with strict duality and $P^{\tilde{h}C_{2}}$ acts trivially on
objects. The category $\mathcal{B}^{\tilde{h}C_{2}}$ inherits a Calabi–Yau
structure from $\mathcal{B}$ with traces
$\langle-\rangle_{(V,\psi_{V})}:=\langle-\rangle_{V}$. Define boundary-bulk
and bulk-boundary maps for $\mathcal{B}^{\tilde{h}C_{2}}$ by
$\tau^{(V,\psi_{V})}=\tau^{V}$ and $\tau_{(V,\psi_{V})}=\tau_{V}$. The
assumption that $P$ preserves the Calabi–Yau structure and that $P$ and $p$
are compatible with $\tau_{\bullet}$ and $\tau^{\bullet}$ verifies conditions
(ii)-(iv) of Theorem 3.4 for $P^{\tilde{h}C_{2}}$. It remains to verify the
unoriented Cardy condition. Let $(V,\psi_{V})\in\mathcal{B}^{\tilde{h}C_{2}}$.
Let $\\{\psi_{i}\\}_{i}$ be a basis of
$\textup{\text{Hom}}_{\mathcal{B}}(V,P(V))$ with dual basis
$\\{\psi^{i}\\}_{i}$ of $\textup{\text{Hom}}_{\mathcal{B}}(P(V),V)$. Then
$\\{\psi_{V}^{-1}\circ\psi_{i}\\}_{i}$ is a basis of
$\textup{\text{Hom}}_{\mathcal{B}}(V,V)$ with dual basis
$\\{\psi^{i}\circ\psi_{V}\\}_{i}$. We compute
$\tau_{(V,\psi_{V})}(Q)=\sum_{i}\psi^{i}\circ P(\psi_{i})\circ\Theta_{V}=\\\
\sum_{i}\psi^{i}\circ\psi_{V}\circ\psi^{-1}_{V}\circ P(\psi_{i})\circ
P(\psi_{V}^{-1})\circ\psi_{V}=\sum_{i}\psi^{i}\circ\psi_{V}\circ
P^{\tilde{h}C_{2}}(\psi_{V}^{-1}\circ\psi_{i}).$
For the first equality, note that the discussion following Proposition 3.5
shows that equation (8) implies that $\tau_{V}(Q)=\sum_{i}\psi^{i}\circ
P(\psi_{i})\circ\Theta_{V}$. The second equality follows from the coherence
condition on homotopy fixed points and the final equality from the definition
of $(P^{\tilde{h}C_{2}},\Theta^{\tilde{h}C_{2}})$. ∎
We comment on the physical interpretation of Proposition 3.6. As mentioned in
the introduction, the Calabi–Yau category $\mathcal{B}$ should be seen as a
model for the category of D-branes in an oriented string theory. With this
interpretation, a duality structure $(P,\Theta)$ which preserves the
Calabi–Yau pairings is the categorical data of the orientifold construction;
see [DGRKS07, HW08] in the setting of orientifolds of IIB string theory and
Landau–Ginzburg theory. In this context, the quantity
$\textup{\text{tr}}_{\textup{\text{Hom}}_{\mathcal{B}}(V,P(V))}\,\iota_{\phi}$
is a _parity-twisted Witten index_ [BH04, §2] and it is through its
computation via closed sector quantities, namely equation (8), that the
crosscap state $Q$ naturally appears. The D-branes which survive the
orientifold projection are the homotopy fixed points of $(P,\Theta)$, that is,
objects of the category $\mathcal{B}^{\tilde{h}C_{2}}$ above. With these
remarks in mind, Proposition 3.6 is an orientifold-type construction of an
unoriented open/closed TFT from an oriented open/closed TFT.
### 3.2. The Frobenius–Schur element as a crosscap state
We give an algebraic construction of a two dimensional unoriented open/closed
TFT from twisted Real representation theory. When $\hat{G}=G\times C_{2}$ and
the cohomological data $(\hat{\theta},\lambda)$ is trivial, this generalizes
results of [AN06, LN11]. When $\lambda$ is trivial, a topological construction
of the closed sector of this theory was given in [You20, §4.4].
Fix group theoretic data $(\hat{G},\hat{\theta},\lambda)$ as in Section 2.1.
Let $A=Z(\mathbb{C}^{\theta^{-1}}[G])$ with Frobenius pairing
$\langle-,-\rangle_{G}$ and $\mathcal{B}=\textup{\text{Rep}}^{\theta}(G)$ the
Calabi–Yau category with traces
$\langle\phi\rangle_{V}=\frac{1}{|G|}\textup{\text{tr}}_{V}\,\phi$. The
boundary-bulk map $\tau^{V}$ is as in Section 2.2 and the bulk-boundary map is
defined by
$\tau_{V}\Big{(}\sum_{g\in G}a_{g}l_{g}\Big{)}=\sum_{g\in
G}a_{g}\theta([g|g^{-1}])^{-1}\rho_{V}(g^{-1}).$
This data defines a two dimensional oriented open/closed TFT
$\mathcal{Z}_{(G,\theta)}$ via Theorem 3.1. See [MS06, Tur07, Kho11]. The main
axiom to be verified is the oriented Cardy condition which, in the present
setting, is a mild generalization of the orthogonality of characters of
irreducible $\theta$-twisted representations.
###### Theorem 3.7.
The data $(\hat{G},\hat{\theta},\lambda)$ defines a two dimensional unoriented
open/closed TFT $\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}$ whose oriented
sector is a sub TFT of $\mathcal{Z}_{(G,\theta)}$.
We will prove Theorem 3.7 using the orientifold construction of Proposition
3.6. We take $(P^{(\hat{\theta},\lambda)},\Theta^{(\hat{\theta},\lambda)})$
for the duality structure on $\mathcal{B}=\textup{\text{Rep}}^{\theta}(G)$ and
$Q=\nu_{(\hat{\theta},\lambda)}$ for the candidate crosscap state. We compute
$\langle
P(\phi)\rangle_{P(V)}=\frac{1}{|G|}\textup{\text{tr}}_{P(V)}\,P(\phi)=\frac{1}{|G|}\textup{\text{tr}}_{V^{\vee}}\,\phi^{\vee}=\langle\phi\rangle_{V},$
which verifies the open sector assumption of Proposition 3.6. The remainder of
the proof of Theorem 3.7 is divided into closed sector computations and
verification of the open/closed coherence conditions required to apply
Proposition 3.6.
The oriented open sector of $\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}$ is
the full theory $\mathcal{Z}_{(G,\theta)}$ precisely when the forgetful
functor
$\textup{\text{Rep}}^{\theta}(G)^{\tilde{h}C_{2}}\rightarrow\textup{\text{Rep}}^{\theta}(G)$
is essentially surjective. By Corollary 2.8, this is the case when
$\langle\chi_{V},\nu_{(\hat{\theta},\lambda)}\rangle_{G}=1$ for each
irreducible $V\in\textup{\text{Rep}}^{\theta}(G)$. Otherwise, the oriented
open sector is a strict subtheory of $\mathcal{Z}_{(G,\theta)}$. In the
context of Example 2.11, the forgetful functor is essentially surjective only
in subexample (3).
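The criterion $\langle\chi_{V},\nu_{(\hat{\theta},\lambda)}\rangle_{G}=1$ can be made concrete in the untwisted case $\hat{G}=G\times C_{2}$, where the pairing reduces to the classical Frobenius–Schur indicator $\frac{1}{|G|}\sum_{g}\chi_{V}(g^{2})$. The following sketch (illustrative, with $G=S_{3}$ and a hand-coded character table; helper names are ours) shows every irreducible has indicator $+1$, so in this example the forgetful functor is essentially surjective.

```python
from itertools import permutations

# Frobenius-Schur indicators (1/|G|) sum_g chi_V(g^2) for G = S_3.
# Character values are hand-coded on the classes
# (identity, transpositions, 3-cycles).
G = list(permutations(range(3)))

def mul(g, h):
    return tuple(g[h[i]] for i in range(3))

def cls(g):                 # conjugacy class, read off the cycle type
    fixed = sum(1 for i in range(3) if g[i] == i)
    return {3: 'e', 1: 'transposition', 0: '3-cycle'}[fixed]

chars = {                   # character table of S_3
    'trivial':  {'e': 1, 'transposition':  1, '3-cycle':  1},
    'sign':     {'e': 1, 'transposition': -1, '3-cycle':  1},
    'standard': {'e': 2, 'transposition':  0, '3-cycle': -1},
}

indicators = {V: sum(chi[cls(mul(g, g))] for g in G) // len(G)
              for V, chi in chars.items()}
print(indicators)           # every indicator is +1: all irreps are real
```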
###### Remark 3.8.
We comment on the relation of $\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}$
to categories of D-branes in orientifold string theory on global quotients.
Recall that the spacetime of an orientifold string theory is an orbifold
double cover $\pi:\mathcal{X}\rightarrow\hat{\mathcal{X}}$. Additional data
required to define the theory includes the (gauge equivalence class of a)
$B$-field $\check{B}\in\check{H}^{3+\pi}(\hat{\mathcal{X}})$ [DFM11], which is
a class in the $\pi$-twisted differential cohomology of $\hat{\mathcal{X}}$,
and a complex line bundle with connection
$\check{L}\in\check{H}^{2}(\hat{\mathcal{X}})$ [GH10, §8.4.1]. The underlying
(oriented) orbifold string theory depends only on
$(\mathcal{X},\pi^{*}\check{B})$. Consider now the particular case in which
the spacetime is a global quotient $\pi:X/\\!\\!/G\rightarrow
X/\\!\\!/\hat{G}$ associated to a finite $C_{2}$-graded group $\hat{G}$ acting
on a smooth manifold $X$. A special class of $B$-fields arises through the
composition
$H^{2+\pi}(B\hat{G})\rightarrow
H^{2+\pi}(X/\\!\\!/\hat{G})\hookrightarrow\check{H}^{3+\pi}(X/\\!\\!/\hat{G}),\qquad\hat{\theta}\mapsto\check{B}_{\hat{\theta}},
where the first map is restriction along the canonical morphism
$X/\\!\\!/\hat{G}\rightarrow B\hat{G}$ and the second is the inclusion of flat
$B$-fields. Similarly, a class $\lambda\in H^{1}(B\hat{G})$ defines a flat
line bundle $\check{L}_{\lambda}\in\check{H}^{2}(X/\\!\\!/\hat{G})$. The pair
$(\check{B}_{\hat{\theta}},\check{L}_{\lambda})$ can be seen as defining
universal twists for global $\hat{G}$-orientifolds. The unoriented TFT
$\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}$ is a precise mathematical
description of the effects of the twists
$(\check{B}_{\hat{\theta}},\check{L}_{\lambda})$ on partition functions. See
[BS02], [Sha11, §5], [NY22, §4.5] for detailed discussions of these effects in
the closed sector.
We return to the proof of Theorem 3.7.
#### 3.2.1. Closed sector
Denote by
$\textup{\text{Aut}}^{\textnormal{gen}}(\mathbb{C}^{\theta^{-1}}[G])$ the
group of algebra automorphisms and algebra anti-automorphisms of
$\mathbb{C}^{\theta^{-1}}[G]$. The group
$\textup{\text{Aut}}^{\textnormal{gen}}(\mathbb{C}^{\theta^{-1}}[G])$ is
$C_{2}$-graded by sending anti-automorphisms to $-1$.
###### Lemma 3.9.
The function
$p:\hat{G}\rightarrow\textup{\text{Aut}}^{\textnormal{gen}}(\mathbb{C}^{\theta^{-1}}[G])$,
$\omega\mapsto p^{\omega}$, where
$p^{\omega}(l_{g})=\lambda(g)^{\frac{\pi(\omega)-1}{2}}\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\omega]g)l_{\omega
g^{\pi(\omega)}\omega^{-1}},\qquad g\in G$
is a $C_{2}$-graded group homomorphism. Moreover, each $p^{\omega}$ is an
isometry of $\langle-\rangle_{G}$.
###### Proof.
We prove that $p^{\omega}$, $\omega\in\hat{G}\backslash G$, is an anti-
automorphism and omit the easier calculation that $p^{g}$, $g\in G$, is an
automorphism. For $g,h\in G$, direct calculations give
$p^{\omega}(l_{g}\cdot
l_{h})=\frac{\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\omega]gh)}{\lambda(gh)\theta([g|h])}l_{\omega(gh)^{-1}\omega^{-1}}$
and
$p^{\omega}(l_{h})\cdot
p^{\omega}(l_{g})=\frac{\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\omega]h)\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\omega]g)}{\lambda(g)\lambda(h)\theta([\omega
h^{-1}\omega^{-1}|\omega g^{-1}\omega^{-1}])}l_{\omega
h^{-1}g^{-1}\omega^{-1}}.$
It therefore suffices to prove that
$\frac{\theta([g|h])}{\theta([\omega h^{-1}\omega^{-1}|\omega
g^{-1}\omega^{-1}])}=\frac{\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\omega]gh)}{\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\omega]h)\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\omega]g)}.$
A short calculation using equation (1) shows that this identity indeed holds.
That $p^{\omega}$ is an isometry follows from the equalities
$\langle p^{\omega}(l_{g})\rangle_{G}=\frac{1}{|G|}\delta_{e,g}\frac{\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\omega]e)}{\lambda(e)}=\frac{\delta_{e,g}}{|G|}=\langle l_{g}\rangle_{G}.$
It remains to prove the homomorphism property, $p^{\omega_{2}}\circ
p^{\omega_{1}}=p^{\omega_{2}\omega_{1}}$. Recall from Section 2.1 that
$\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})$ is a $1$-cocycle on the
groupoid $G/\\!\\!/_{R}\hat{G}$ whose objects are elements of $G$ and whose
morphisms are $\omega:g\rightarrow\omega g^{\pi(\omega)}\omega^{-1}$,
$\omega\in\hat{G}$. With this description, closedness of
$\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})$ becomes the equalities
$\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\omega_{2}]\omega_{1}g^{\pi(\omega_{1})}\omega_{1}^{-1})\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\omega_{1}]g)=\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\omega_{2}\omega_{1}]g),\qquad
g\in G,\;\omega_{i}\in\hat{G}$
which are immediately seen to imply the homomorphism property. ∎
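In the untwisted case the formula of Lemma 3.9 simplifies to $p^{(t,\epsilon)}(l_{g})=l_{tg^{\epsilon}t^{-1}}$, and both the graded homomorphism property and anti-multiplicativity of odd elements can be verified by brute force. A minimal sketch, assuming $\hat{G}=S_{3}\times C_{2}$ with trivial $\hat{\theta}$ and $\lambda$ (helper names are ours):

```python
from itertools import permutations, product

# Untwisted instance of Lemma 3.9: p^{(t,eps)}(l_g) = l_{t g^eps t^{-1}}
# for Ghat = S_3 x C_2, graded by eps in {1, -1}.
G = list(permutations(range(3)))

def mul(g, h):
    return tuple(g[h[i]] for i in range(3))

def inv(g):
    out = [0] * 3
    for i, gi in enumerate(g):
        out[gi] = i
    return tuple(out)

def p(omega, g):                       # action on the basis vector l_g
    t, eps = omega
    g = g if eps == 1 else inv(g)
    return mul(mul(t, g), inv(t))

Ghat = list(product(G, [1, -1]))       # (t, eps) with product (t1 t2, e1 e2)

# homomorphism property: p^{w2} o p^{w1} = p^{w2 w1}
hom = all(p(w2, p(w1, g)) == p((mul(w2[0], w1[0]), w2[1] * w1[1]), g)
          for w1 in Ghat for w2 in Ghat for g in G)

# odd elements reverse products: p^w(l_g l_h) = p^w(l_h) p^w(l_g)
anti = all(p(w, mul(g, h)) == mul(p(w, h), p(w, g))
           for w in Ghat if w[1] == -1 for g in G for h in G)
print(hom, anti)                       # True True
```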
###### Proposition 3.10.
For each $\varsigma\in\hat{G}\backslash G$, the map $p^{\varsigma}$ restricts
to an algebra involution of $Z(\mathbb{C}^{\theta^{-1}}[G])$. Moreover, this
involution is independent of $\varsigma$.
###### Proof.
Using the explicit descriptions of the centre $Z(\mathbb{C}^{\theta^{-1}}[G])$
from Section 1.2 and the $G$-action on $\mathbb{C}^{\theta^{-1}}[G]$ from
Lemma 3.9 we see that
$\mathbb{C}^{\theta^{-1}}[G]^{G}=Z(\mathbb{C}^{\theta^{-1}}[G])$. It follows
that the generalized $\hat{G}$-action on $\mathbb{C}^{\theta^{-1}}[G]$ from
Lemma 3.9 induces an action of $C_{2}\simeq\hat{G}/G$ by algebra automorphisms
on $Z(\mathbb{C}^{\theta^{-1}}[G])$. ∎
Denote by $p$ the algebra involution of $Z(\mathbb{C}^{\theta^{-1}}[G])$
induced by any $\varsigma\in\hat{G}\backslash G$.
###### Remark 3.11.
Using functoriality of Hochschild homology and invariance under taking
opposites, we form the composition
$HH_{\bullet}(\textup{\text{Rep}}^{\theta}(G))\xrightarrow[]{\sim}HH_{\bullet}(\textup{\text{Rep}}^{\theta}(G)^{\textup{\text{op}}})\xrightarrow[]{HH_{\bullet}(P^{(\hat{\theta},\lambda)})}HH_{\bullet}(\textup{\text{Rep}}^{\theta}(G)).$
(9)
Since $\textup{\text{Rep}}^{\theta}(G)$ is finite semisimple,
$HH_{\bullet}(\textup{\text{Rep}}^{\theta}(G))$ is concentrated in degree
zero, where it is isomorphic to $Z(\mathbb{C}^{\theta^{-1}}[G])$. Under this
isomorphism, the map (9) is $p$. The $+1$ (resp. $-1$) eigenspace of $p$ is
then the involutive (resp. skew-involutive) Hochschild homology of
$(\textup{\text{Rep}}^{\theta}(G),P^{(\hat{\theta},\lambda)},\Theta^{(\hat{\theta},\lambda)})$.
See [Bra14, Theorem 2.14] for an analogous result in the setting of strictly
involutive $A_{\infty}$-algebras. The first part of Proposition 3.5 therefore
shows that the Klein condition computes the difference in dimensions of
involutive and skew-involutive Hochschild homologies.
###### Proposition 3.12.
The element $\nu_{(\hat{\theta},\lambda)}\in Z(\mathbb{C}^{\theta^{-1}}[G])$
is $p$-invariant.
###### Proof.
We have seen in Lemma 2.5 that $\nu_{(\hat{\theta},\lambda)}\in
Z(\mathbb{C}^{\theta^{-1}}[G])$. For $p$-invariance, we have
$p^{\varsigma}(\nu_{(\hat{\theta},\lambda)})=\sum_{\mu\in\hat{G}\backslash
G}\frac{\lambda(\mu)\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\varsigma]\mu^{2})}{\lambda(\mu^{2})\hat{\theta}([\mu|\mu])}l_{\varsigma\mu^{-2}\varsigma^{-1}}.$
Equation (2) gives
$\hat{\theta}([\varsigma\mu^{-1}\varsigma^{-1}|\varsigma\mu^{-1}\varsigma^{-1}])=\frac{\hat{\theta}([\varsigma|\mu^{-2}])}{\hat{\theta}([\mu^{-1}|\mu^{-1}])\hat{\theta}([\varsigma\mu^{-2}\varsigma^{-1}|\varsigma])}$
so that $p^{\varsigma}(\nu_{(\hat{\theta},\lambda)})$ is equal to
$\sum_{\mu\in\hat{G}\backslash
G}\lambda(\mu)^{-1}\frac{\hat{\theta}([\varsigma\mu^{-2}\varsigma^{-1}|\varsigma])}{\hat{\theta}([\mu|\mu])\hat{\theta}([\mu^{2}|\mu^{-2}])\hat{\theta}([\varsigma|\mu^{-2}])}l_{\varsigma\mu^{-2}\varsigma^{-1}}\\\
=\sum_{\mu\in\hat{G}\backslash
G}\lambda(\mu)^{-1}\hat{\theta}([\mu|\mu])^{-1}\hat{\theta}([\mu^{2}|\mu^{-2}])^{-1}\hat{\theta}([\mu^{-1}|\mu^{-1}])^{-1}\hat{\theta}([\varsigma\mu^{-1}\varsigma^{-1}|\varsigma\mu^{-1}\varsigma^{-1}])^{-1}l_{\varsigma\mu^{-2}\varsigma^{-1}}.$
A short calculation shows that
$\hat{\theta}([\mu^{-1}|\mu^{-1}])\hat{\theta}([\mu|\mu])\hat{\theta}([\mu^{2}|\mu^{-2}])=1$,
whence
$p^{\varsigma}(\nu_{(\hat{\theta},\lambda)})=\sum_{\mu\in\hat{G}\backslash
G}\lambda(\mu^{-1})\hat{\theta}([\varsigma\mu^{-1}\varsigma^{-1}|\varsigma\mu^{-1}\varsigma^{-1}])^{-1}l_{\varsigma\mu^{-2}\varsigma^{-1}}=\nu_{(\hat{\theta},\lambda)}.\qed$
###### Lemma 3.13.
The following equality holds for all $g\in G$ and $\mu\in\hat{G}\backslash G$:
$\hat{\theta}([\mu|\mu])^{-1}l_{\mu^{2}}\cdot
a_{g}l_{g}=\lambda(g)p^{\mu}(a_{g}l_{g})\cdot\hat{\theta}([\mu g|\mu
g])^{-1}l_{(\mu g)^{2}}.$
###### Proof.
This can be verified directly from the twisted $2$-cocycle condition on
$\hat{\theta}$. ∎
###### Proposition 3.14.
The equality $p(\nu_{(\hat{\theta},\lambda)}f)=\nu_{(\hat{\theta},\lambda)}f$
holds for all $f\in Z(\mathbb{C}^{\theta^{-1}}[G])$.
###### Proof.
Write $\sum_{g\in G}a_{g}l_{g}$ for $f\in Z(\mathbb{C}^{\theta^{-1}}[G])$.
Lemma 3.13 gives
$\nu_{(\hat{\theta},\lambda)}\sum_{g\in
G}a_{g}l_{g}=\sum_{\begin{subarray}{c}g\in G\\\ \mu\in\hat{G}\backslash
G\end{subarray}}\lambda(\mu g)p^{\mu}(a_{g}l_{g})\hat{\theta}([\mu g|\mu
g])^{-1}l_{(\mu g)^{2}}$
from which we find that
$p^{\varsigma}(\nu_{(\hat{\theta},\lambda)}\sum_{g}a_{g}l_{g})$ is equal to
$\displaystyle\sum_{\begin{subarray}{c}g\in G\\\ \mu\in\hat{G}\backslash
G\end{subarray}}\lambda(\mu
g)p^{\varsigma}(p^{\mu}(a_{g}l_{g})\cdot\hat{\theta}([\mu g|\mu
g])^{-1}l_{(\mu g)^{2}})$ $\displaystyle=$
$\displaystyle\sum_{g,\mu}\lambda(\mu g)p^{\varsigma}(\hat{\theta}([\mu g|\mu
g])^{-1}l_{(\mu g)^{2}})p^{\varsigma}p^{\mu}(a_{g}l_{g})$ $\displaystyle=$
$\displaystyle\sum_{g,\mu}\lambda(\mu g)^{-1}\hat{\theta}([\varsigma
g^{-1}\mu^{-1}\varsigma^{-1}|\varsigma
g^{-1}\mu^{-1}\varsigma^{-1}])^{-1}l_{\varsigma(g^{-1}\mu^{-1})^{2}\varsigma^{-1}}p^{\varsigma}p^{\mu}(a_{g}l_{g})$
$\displaystyle=$ $\displaystyle\sum_{g,\mu}\lambda(\mu
g)\hat{\theta}([\varsigma g^{-1}\mu^{-1}\varsigma^{-1}|\varsigma
g^{-1}\mu^{-1}\varsigma^{-1}])^{-1}l_{\varsigma(g^{-1}\mu^{-1})^{2}\varsigma^{-1}}\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\varsigma\mu]g)^{-1}a_{g}l_{\varsigma\mu
g(\varsigma\mu)^{-1}}$ $\displaystyle=$ $\displaystyle\sum_{g,\mu}\lambda(\mu
g)\hat{\theta}([\varsigma g^{-1}\mu^{-1}\varsigma^{-1}|\varsigma
g^{-1}\mu^{-1}\varsigma^{-1}])^{-1}l_{\varsigma(g^{-1}\mu^{-1})^{2}\varsigma^{-1}}a_{\varsigma\mu
g(\varsigma\mu)^{-1}}l_{\varsigma\mu g(\varsigma\mu)^{-1}}$ $\displaystyle=$
$\displaystyle\sum_{\begin{subarray}{c}h\in G\\\ \eta\in\hat{G}\backslash
G\end{subarray}}\frac{\lambda(\eta)}{\hat{\theta}([\eta|\eta])}l_{\eta^{2}}a_{h}l_{h},$
which is $\nu_{(\hat{\theta},\lambda)}\sum_{h\in G}a_{h}l_{h}$. The first
equality follows from the fact that $p^{\varsigma}$ is an anti-homomorphism
(Lemma 3.9), the second from Proposition 3.12, the third from Lemma 3.9 and
the definition of $p$, the fourth from the assumed centrality of
$\sum_{g}a_{g}l_{g}$ and the fifth from the change of variables
$\eta=\varsigma g^{-1}\mu^{-1}\varsigma^{-1}$ and $h=\varsigma\mu
g\mu^{-1}\varsigma^{-1}$. ∎
Recall that a conjugacy class $\mathcal{O}\subset G$ is called _$\theta$
-regular_ if $\frac{\theta([g|h])}{\theta([h|g])}=1$ for all $g\in\mathcal{O}$
and $h\in C_{G}(g)$.
###### Proposition 3.15.
The Klein condition holds.
###### Proof.
The vector space $Z(\mathbb{C}^{\theta^{-1}}[G])$ has a basis
$\\{l_{\mathcal{O}}\\}_{\mathcal{O}}$ labelled by $\theta$-regular conjugacy
classes of $G$. For convenience, set $l_{\mathcal{O}}=0$ if $\mathcal{O}$ is
not $\theta$-regular. Writing
$l_{\mathcal{O}}=\sum_{g\in\mathcal{O}}a_{g}l_{g}$ and
$l_{\mathcal{O}^{-1}}=\sum_{h\in\mathcal{O}}b_{h^{-1}}l_{h^{-1}}$, we have
$\langle
l_{\mathcal{O}},l_{\mathcal{O}^{-1}}\rangle_{G}=\frac{1}{|G|}\sum_{g\in\mathcal{O}}\theta([g|g^{-1}])^{-1}a_{g}b_{g^{-1}}.$
Centrality of $l_{\mathcal{O}^{\pm 1}}$ implies that the function
$\mathcal{O}\rightarrow\mathbb{C}$,
$g\mapsto\theta([g|g^{-1}])^{-1}a_{g}b_{g^{-1}}$, is constant; denote its
(necessarily non-zero) value by $c_{\mathcal{O}}$. We also have $\langle
l_{\mathcal{O}},l_{\mathcal{O}^{\prime}}\rangle_{G}=0$ if
$\mathcal{O}^{\prime}\neq\mathcal{O}^{-1}$. It follows that
$l_{\mathcal{O}}^{\vee}=\frac{|G|}{c_{\mathcal{O}}|\mathcal{O}|}l_{\mathcal{O}^{-1}}$.
With this notation, the right-hand side of the Klein condition is
$R:=\sum_{\mathcal{O}\in\pi_{0}(G/\\!\\!/G)}l_{\mathcal{O}}p^{\varsigma}(l_{\mathcal{O}}^{\vee})$.
We compute
$\displaystyle R$ $\displaystyle=$
$\displaystyle|G|\sum_{\mathcal{O}\in\pi_{0}(G/\\!\\!/G)}\sum_{g,h\in\mathcal{O}}\frac{a_{g}l_{g}p^{\varsigma}(b_{h^{-1}}l_{h^{-1}})}{c_{\mathcal{O}}|\mathcal{O}|}$
$\displaystyle=$
$\displaystyle|G|\sum_{\mathcal{O}\in\pi_{0}(G/\\!\\!/G)}\sum_{g,h\in\mathcal{O}}\frac{a_{g}b_{h^{-1}}\lambda(h)\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\varsigma]h^{-1})l_{g\varsigma
h\varsigma^{-1}}}{c_{\mathcal{O}}|\mathcal{O}|\hat{\theta}([g|\varsigma
h\varsigma^{-1}])}$ $\displaystyle=$
$\displaystyle\sum_{\mathcal{O}\in\pi_{0}(G/\\!\\!/G)}\sum_{g\in\mathcal{O}}\sum_{t\in
G}\frac{a_{g}b_{tg^{-1}t^{-1}}\lambda(g)\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\varsigma]tg^{-1}t^{-1})l_{g\varsigma
tgt^{-1}\varsigma^{-1}}}{c_{\mathcal{O}}\hat{\theta}([g|\varsigma
tgt^{-1}\varsigma^{-1}])}$ $\displaystyle=$ $\displaystyle\sum_{g,t\in
G}\frac{a_{g}b_{tg^{-1}t^{-1}}}{c_{\mathcal{O}}}\lambda(g)\frac{\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\varsigma]tg^{-1}t^{-1})}{\hat{\theta}([g|\varsigma
tgt^{-1}\varsigma^{-1}])}l_{g\varsigma tgt^{-1}\varsigma^{-1}}.$
Above we have set $h=tgt^{-1}$. As
$b_{tg^{-1}t^{-1}}=\uptau(\theta)([t]g^{-1})b_{g^{-1}}$, we can write
$\displaystyle R$ $\displaystyle=$ $\displaystyle\sum_{g,t\in
G}\frac{a_{g}b_{g^{-1}}}{c_{\mathcal{O}}}\lambda(g)\frac{\uptau(\theta)([t]g^{-1})\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\varsigma]tg^{-1}t^{-1})}{\hat{\theta}([g|\varsigma
tgt^{-1}\varsigma^{-1}])}l_{g\varsigma tgt^{-1}\varsigma^{-1}}$
$\displaystyle=$ $\displaystyle\sum_{g,t\in
G}\lambda(g)\frac{\theta([g|g^{-1}])}{\hat{\theta}([g|\varsigma
tgt^{-1}\varsigma^{-1}])}\uptau(\theta)([t]g^{-1})\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\varsigma]tg^{-1}t^{-1})l_{g\varsigma
tgt^{-1}\varsigma^{-1}}$ $\displaystyle=$ $\displaystyle\sum_{g,t\in
G}\lambda(g)\frac{\theta([g|g^{-1}])}{\hat{\theta}([g|\varsigma
tgt^{-1}\varsigma^{-1}])}\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\varsigma
t]g^{-1})l_{g\varsigma tgt^{-1}\varsigma^{-1}}.$
The second equality follows from the definition of $c_{\mathcal{O}}$ and the
final from closedness of $\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})$,
as in the proof of Lemma 3.9. Define $\mu,\xi\in\hat{G}\backslash G$ by
$\mu=g\varsigma t$ and $\xi=t^{-1}\varsigma^{-1}$, so that
$t=\varsigma^{-1}\xi^{-1}$ and $g=\mu\xi$. Making these substitutions, the
coefficient of $l_{g\varsigma tgt^{-1}\varsigma^{-1}}=l_{\mu^{2}\xi^{2}}$ in
$R$ is
$\lambda(\mu\xi)\hat{\theta}([\mu\xi|\xi^{-1}\mu\xi^{2}])^{-1}\frac{\hat{\theta}([\xi^{-1}\mu\xi^{2}|\xi^{-1}])}{\hat{\theta}([\xi^{-1}|\mu\xi])}=\lambda(\mu\xi)\hat{\theta}([\mu|\mu])^{-1}\hat{\theta}([\xi|\xi])^{-1}\hat{\theta}([\mu^{2}|\xi^{2}])^{-1}.$
It follows that
$R=\sum_{\mu,\xi\in\hat{G}\backslash
G}\lambda(\mu\xi)\hat{\theta}([\mu|\mu])^{-1}\hat{\theta}([\xi|\xi])^{-1}\hat{\theta}([\mu^{2}|\xi^{2}])^{-1}l_{\mu^{2}\xi^{2}},$
which is exactly $\nu_{(\hat{\theta},\lambda)}^{2}$. ∎
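The Klein condition of Proposition 3.15 can also be tested numerically. In the untwisted case $G=S_{3}$, $\hat{G}=G\times C_{2}$, every class is $\theta$-regular, $c_{\mathcal{O}}=1$, $\nu=\sum_{g}l_{g^{2}}$, $l_{\mathcal{O}}^{\vee}=\frac{|G|}{|\mathcal{O}|}l_{\mathcal{O}^{-1}}$ and $p^{\varsigma}(l_{\mathcal{O}^{-1}})=l_{\mathcal{O}}$, so the condition reads $\nu^{2}=\sum_{\mathcal{O}}\frac{|G|}{|\mathcal{O}|}l_{\mathcal{O}}^{2}$. An illustrative check (helper names are ours):

```python
from collections import Counter
from itertools import permutations

# Untwisted Klein condition nu^2 = sum_O l_O p(l_O^vee) for G = S_3,
# with nu = sum_g l_{g^2} and p(l_g) = l_{g^{-1}}.
G = list(permutations(range(3)))

def mul(g, h):
    return tuple(g[h[i]] for i in range(3))

def inv(g):
    out = [0] * 3
    for i, gi in enumerate(g):
        out[gi] = i
    return tuple(out)

def alg_mul(a, b):                     # multiplication in C[G]
    c = Counter()
    for g, x in a.items():
        for h, y in b.items():
            c[mul(g, h)] += x * y
    return c

nu = Counter(mul(g, g) for g in G)     # crosscap state, untwisted
lhs = alg_mul(nu, nu)                  # nu^2

classes = {frozenset(mul(mul(t, g), inv(t)) for t in G) for g in G}
rhs = Counter()
for O in classes:
    l_O = Counter({g: 1 for g in O})
    # p(l_O^vee) = (|G|/|O|) p(l_{O^{-1}}) = (|G|/|O|) l_O
    p_of_dual = Counter({g: len(G) // len(O) for g in O})
    rhs += alg_mul(l_O, p_of_dual)
print(lhs == rhs)                      # True
```

Both sides equal $18\,l_{e}+9\,l_{(123)}+9\,l_{(132)}$ for $S_{3}$.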
#### 3.2.2. Open/closed coherence
Note that Theorem 2.7 verifies equation (8).
###### Proposition 3.16.
The maps $\tau^{\bullet}$, $\tau_{\bullet}$, $P$ and $p$ satisfy the
assumptions of Proposition 3.6.
###### Proof.
We compute
$P\circ\tau_{V}(p(\sum_{g\in G}a_{g}l_{g}))=\sum_{g\in
G}a_{g}\frac{\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\varsigma]g)}{\lambda(g)}\rho_{V}(\varsigma
g^{-1}\varsigma^{-1})^{\vee}=\tau_{P(V)}(\sum_{g\in G}a_{g}l_{g}),$
that is, $P\circ\tau_{V}\circ p=\tau_{P(V)}$. Since $p$ is an involution, this
implies $P\circ\tau_{V}=\tau_{P(V)}\circ p$.
We also have
$\displaystyle p\circ\tau^{V}(\phi)$ $\displaystyle=$ $\displaystyle\sum_{g\in
G}\frac{\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\varsigma]g)}{\lambda(g)}\textup{\text{tr}}_{V}(\phi\circ\rho_{V}(g^{-1}))l_{\varsigma
g^{-1}\varsigma^{-1}}$
and
$\displaystyle\tau^{P(V)}(P(\phi))$ $\displaystyle=$ $\displaystyle\sum_{g\in
G}\frac{\lambda(g)}{\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\varsigma]g)}\textup{\text{tr}}_{V^{\vee}}(\phi^{\vee}\circ\rho_{V}(\varsigma
g^{-1}\varsigma^{-1})^{\vee})l_{g}$ $\displaystyle=$ $\displaystyle\sum_{g\in
G}\frac{\lambda(\varsigma
g^{-1}\varsigma^{-1})}{\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\varsigma]\varsigma
g^{-1}\varsigma^{-1})}\textup{\text{tr}}_{V^{\vee}}(\phi^{\vee}\circ\rho_{V}(\varsigma^{2}g\varsigma^{-2})^{\vee})l_{\varsigma
g^{-1}\varsigma^{-1}}.$
Closedness of $\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})$ implies
$\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\varsigma]g)\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\varsigma]\varsigma
g^{-1}\varsigma^{-1})=\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\varsigma^{2}]g).$
Since $\phi$ is $G$-equivariant and
$\rho_{V}(\varsigma^{2})\circ\rho_{V}(g)\circ\rho_{V}(\varsigma^{-2})=\theta([\varsigma^{2}|g])\theta([\varsigma^{2}g|\varsigma^{-2}])\rho_{V}(\varsigma^{2}g\varsigma^{-2}),$
we have
$\textup{\text{tr}}_{V^{\vee}}(\phi^{\vee}\circ\rho_{V}(\varsigma^{2}g\varsigma^{-2})^{\vee})=\frac{\theta([\varsigma^{-2}|\varsigma^{2}])}{\theta([\varsigma^{2}|g])\theta([\varsigma^{2}g|\varsigma^{-2}])}\textup{\text{tr}}_{V}(\phi\circ\rho_{V}(g)).$
The coefficient of $\textup{\text{tr}}_{V}(\phi\circ\rho_{V}(g))l_{\varsigma
g^{-1}\varsigma^{-1}}$ in $\tau^{P(V)}(P(\phi))$ is therefore
$\lambda(g)^{-1}\frac{\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\varsigma]g)}{\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\varsigma^{2}]g)}\frac{\theta([\varsigma^{-2}|\varsigma^{2}])}{\theta([\varsigma^{2}|g])\theta([\varsigma^{2}g|\varsigma^{-2}])}=\frac{\uptau_{\pi}^{\textup{\text{refl}}}(\hat{\theta})([\varsigma]g)}{\lambda(g)}.$
We conclude that $p\circ\tau^{V}=\tau^{P(V)}\circ P$. ∎
This completes the proof of Theorem 3.7.
### 3.3. Partition functions
The algebraic construction of $\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}$
allows for the explicit computation of the partition function of an arbitrary
surface.
#### 3.3.1. Closed surfaces
For the real projective plane, we have
$\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}(\mathbb{RP}^{2})=\langle\nu_{(\hat{\theta},\lambda)}\rangle_{G}=\frac{1}{|G|}\sum_{\begin{subarray}{c}\mu\in\hat{G}\backslash
G\\\ \mu^{2}=e\end{subarray}}\frac{\lambda(\mu)}{\hat{\theta}([\mu|\mu])},$
the first equality reflecting that $\mathbb{RP}^{2}$ is a Möbius strip glued
to a disk. In particular,
$\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}(\mathbb{RP}^{2})$ vanishes
unless $\pi:\hat{G}\rightarrow C_{2}$ splits. Realizing the Klein bottle as
two cylinders glued together, with one gluing by circle reflection, and using
that $Z(\mathbb{C}^{\theta^{-1}}[G])=\mathbb{C}^{\theta^{-1}}[G]^{G}$ (see the
proof of Proposition 3.10), we compute
$\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}(\mathbb{K})=\frac{1}{|G|}\sum_{h\in
G}\textup{\text{tr}}_{\mathbb{C}^{\theta^{-1}}[G]}p^{h\varsigma}=\frac{1}{|G|}\sum_{\begin{subarray}{c}g\in
G\\\ \omega\in\hat{G}\backslash G\\\ \omega
g^{-1}\omega^{-1}=g\end{subarray}}\frac{1}{\lambda(g)\hat{\theta}([g^{-1}|g])}\frac{\hat{\theta}([g|\omega])}{\hat{\theta}([\omega|g^{-1}])}.$
In general, a formula for the partition function of a closed connected non-
orientable surface $\Sigma$ can be written in terms of $\hat{\theta}$-weighted
counts of $C_{2}$-graded homomorphisms from the fundamental group of the total
space of the orientation double cover $\Sigma^{\textup{\text{or}}}\rightarrow\Sigma$ to
$\hat{G}$. See [You20, §4.4]. Alternatively, the primitive orthogonal
idempotents of the semisimple algebra $Z(\mathbb{C}^{\theta^{-1}}[G])$ can be
used to evaluate the partition functions. Proceeding in this way and writing
the crosscap state as in Corollary 2.10, we find
$\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}(\Sigma)=\sum_{\begin{subarray}{c}V\in\textup{\text{Irr}}^{\theta}(G)\\\
P^{(\hat{\theta},\lambda)}(V)\simeq
V\end{subarray}}\left(\frac{\langle\chi_{V},\nu_{(\hat{\theta},\lambda)}\rangle_{G}\dim_{\mathbb{C}}V}{|G|}\right)^{\chi(\Sigma)}.$
For example,
$\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}(\mathbb{RP}^{2})=\frac{1}{|G|}\sum_{\begin{subarray}{c}V\in\textup{\text{Irr}}^{\theta}(G)\\\
P^{(\hat{\theta},\lambda)}(V)\simeq
V\end{subarray}}\langle\chi_{V},\nu_{(\hat{\theta},\lambda)}\rangle_{G}\dim_{\mathbb{C}}V$
and
$\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}(\mathbb{K})=|\\{V\in\textup{\text{Irr}}^{\theta}(G)\mid
P^{(\hat{\theta},\lambda)}(V)\simeq V\\}|.$
Equating these expressions for the partition function of $\Sigma$ relates
weighted counts of $C_{2}$-graded homomorphisms
$\pi_{1}(\Sigma^{\textup{\text{or}}})\rightarrow\hat{G}$ to Real character
theoretic sums. Various specializations of these identities are known [FS06,
KM97, MY05, Sny17, BBC+20, You20] and provide non-orientable counterparts of
Mednykh’s formulae [Med78].
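As an illustration of such a specialization (our own sketch, not from the text): in the split untwisted case, with $\hat{G}=G\times C_{2}$ and both $\hat{\theta}$ and $\lambda$ trivial, the two expressions for $\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}(\mathbb{K})$ reduce to the statement that $\frac{1}{|G|}|\{(g,\omega)\in G\times G\mid\omega g^{-1}\omega^{-1}=g\}|$ equals the number of self-dual complex irreducibles of $G$, which in turn equals the number of conjugacy classes closed under inversion (classical character theory). The snippet below checks this numerically for $G=S_{3}$; all function names are ours.

```python
from itertools import permutations

def s3():
    """S_3 as permutation tuples acting on {0,1,2}."""
    return [tuple(p) for p in permutations(range(3))]

def compose(a, b):
    # (a o b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(len(b)))

def inverse(a):
    inv = [0] * len(a)
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

def klein_state_sum(G):
    """Untwisted lattice sum: (1/|G|) #{(g, w) : w g^{-1} w^{-1} = g}."""
    count = sum(1 for g in G for w in G
                if compose(compose(w, inverse(g)), inverse(w)) == g)
    return count / len(G)

def num_real_classes(G):
    """Conjugacy classes closed under inversion; their number equals the
    number of self-dual complex irreducibles."""
    classes = {frozenset(compose(compose(w, g), inverse(w)) for w in G)
               for g in G}
    return sum(1 for c in classes
               if frozenset(inverse(g) for g in c) == c)

G = s3()
assert klein_state_sum(G) == num_real_classes(G) == 3
```

For $S_{3}$ both sides equal $3$: all three irreducible characters are real-valued, and the pair count is $18/6=3$.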
#### 3.3.2. Surfaces with boundary
Let $\Sigma$ be a compact connected non-orientable surface with $b\geq 1$
boundary components. To begin, label each boundary component by the same
irreducible twisted Real representation
$V\in\textup{\text{Rep}}^{\theta}(G)^{\tilde{h}C_{2}}$. The partition function
$\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}(\Sigma;V):\textup{\text{Hom}}_{\textup{\text{Rep}}^{\theta}(G)^{\tilde{h}C_{2}}}(V,V)^{\otimes
b}\rightarrow\mathbb{C}$
can be computed as follows. By Proposition 2.3, there are two cases to
consider. If $V$ is irreducible as a twisted representation, then the
primitive orthogonal idempotent of $\mathbb{C}^{\theta^{-1}}[G]$ corresponding
to $V$ is $e_{V^{\vee}}=\frac{\dim_{\mathbb{C}}V}{|G|}\chi_{V}$, whence
$\chi_{V}^{b}=\left(\frac{\dim_{\mathbb{C}}V}{|G|}\right)^{-b}e_{V^{\vee}}$.
Using this and the fact that
$\langle\chi_{V},\nu_{(\hat{\theta},\lambda)}\rangle_{G}=1$ (see Corollary
2.8), we compute
$\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}(\Sigma;V)(\textup{\text{id}}_{V}^{\otimes
b})=\left(\frac{\dim_{\mathbb{C}}V}{|G|}\right)^{\chi(\Sigma)}.$
If instead the underlying twisted representation of $V$ is reducible, then
$V\simeq H^{(\hat{\theta},\lambda)}(U)$ with
$U\in\textup{\text{Rep}}^{\theta}(G)$ irreducible. It follows that
$\chi_{H^{(\hat{\theta},\lambda)}(U)}=\frac{|G|}{\dim_{\mathbb{C}}U}(e_{U^{\vee}}+e_{P^{(\hat{\theta},\lambda)}(U)^{\vee}})$
and
$\chi_{H^{(\hat{\theta},\lambda)}(U)}^{b}=\left(\frac{\dim_{\mathbb{C}}U}{|G|}\right)^{-b}(e_{U^{\vee}}+e_{P^{(\hat{\theta},\lambda)}(U)^{\vee}}).$
There are two further sub-cases:
* •
$P^{(\hat{\theta},\lambda)}(U)\not\simeq U$, in which case
$\langle\chi_{U},\nu_{(\hat{\theta},\lambda)}\rangle_{G}=\langle\chi_{P^{(\hat{\theta},\lambda)}(U)},\nu_{(\hat{\theta},\lambda)}\rangle_{G}=0$.
Since $\nu_{(\hat{\theta},\lambda)}$ has no $\chi_{U}$ or
$\chi_{P^{(\hat{\theta},\lambda)}(U)}$ components, we find
$\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}(\Sigma;H^{(\hat{\theta},\lambda)}(U))(\textup{\text{id}}_{H^{(\hat{\theta},\lambda)}(U)}^{\otimes
b})=0.$
* •
$P^{(\hat{\theta},\lambda)}(U)\simeq U$, in which case
$\langle\chi_{U},\nu_{(\hat{\theta},\lambda)}\rangle_{G}=-1$ and
$\mathcal{Z}_{(\hat{G},\hat{\theta},\lambda)}(\Sigma;H^{(\hat{\theta},\lambda)}(U))(\textup{\text{id}}_{H^{(\hat{\theta},\lambda)}(U)}^{\otimes
b})=2\left(-\frac{\dim_{\mathbb{C}}U}{|G|}\right)^{\chi(\Sigma)}.$
A formula in the general case, with boundary components labelled by arbitrary
twisted Real representations $V_{i}$, $i=1,\dots,b$, can be deduced from the
previous formulae by writing $\chi_{V_{i}}$ as a linear combination of
primitive idempotents.
## References
* [Abo10] M. Abouzaid, _A geometric criterion for generating the Fukaya category_ , Publ. Math. Inst. Hautes Études Sci. (2010), no. 112, 191–240.
* [Abr96] L. Abrams, _Two-dimensional topological quantum field theories and Frobenius algebras_ , J. Knot Theory Ramifications 5 (1996), no. 5, 569–587.
* [AN06] A. Alexeevski and S. Natanzon, _Noncommutative two-dimensional topological field theories and Hurwitz numbers for real algebraic curves_ , Selecta Math. (N.S.) 12 (2006), no. 3-4, 307–377.
* [AS69] M. Atiyah and G. Segal, _Equivariant $K$-theory and completion_, J. Differential Geometry 3 (1969), 1–18.
* [BBC+20] M. Barkeshli, P. Bonderson, M. Cheng, C.-M. Jian, and K. Walker, _Reflection and time reversal symmetry enriched topological phases of matter: path integrals, non-orientable manifolds, and anomalies_ , Comm. Math. Phys. 374 (2020), no. 2, 1021–1124.
* [BCT09] A. Blumberg, R. Cohen, and C. Teleman, _Open-closed field theories, string topology, and Hochschild homology_ , Alpine perspectives on algebraic topology, Contemp. Math., vol. 504, Amer. Math. Soc., Providence, RI, 2009, pp. 53–76.
* [BH04] I. Brunner and K. Hori, _Orientifolds and mirror symmetry_ , J. High Energy Phys. (2004), no. 11, 005, 119 pp.
* [Bra14] C. Braun, _Involutive $A_{\infty}$-algebras and dihedral cohomology_, J. Homotopy Relat. Struct. 9 (2014), no. 2, 317–337.
* [BS02] V. Braun and B. Stefanski, Jr., _Orientifolds and $K$-theory_, Cargese 2002, Progress in String, Field and Particle Theory, NATO Science Series II: Mathematics, Physics and Chemistry, Springer Netherlands, 2002, pp. 369–372.
* [Cos07] K. Costello, _Topological conformal field theories and Calabi-Yau categories_ , Adv. Math. 210 (2007), no. 1, 165–214.
* [CW10] A. Căldăraru and S. Willerton, _The Mukai pairing. I. A categorical approach_ , New York J. Math. 16 (2010), 61–98.
* [DFM11] J. Distler, D. Freed, and G. Moore, _Spin structures and superstrings_ , Surveys in differential geometry. Volume XV. Perspectives in mathematics and physics, Surv. Differ. Geom., vol. 15, Int. Press, Somerville, MA, 2011, pp. 99–130.
* [DGRKS07] D.-E. Diaconescu, A. Garcia-Raboso, R. Karp, and K. Sinha, _D-brane superpotentials in Calabi-Yau orientifolds_ , Adv. Theor. Math. Phys. 11 (2007), no. 3, 471–516.
* [Dij89] R. Dijkgraaf, _A geometrical approach to two-dimensional conformal field theory_ , 1989, Thesis (Ph.D.)–Utrecht University.
* [DW90] R. Dijkgraaf and E. Witten, _Topological gauge theories and group cohomology_ , Comm. Math. Phys. 129 (1990), no. 2, 393–429.
* [Dys62] F. Dyson, _The Threefold Way. Algebraic Structure of Symmetry Groups and Ensembles in Quantum Mechanics_ , Journal of Mathematical Physics 3 (1962), no. 6, 1199–1215.
* [FH21] D. Freed and M. Hopkins, _Consistency of M-theory on non-orientable manifolds_ , Q. J. Math. 72 (2021), no. 1-2, 603–671.
* [FM13] D. Freed and G. Moore, _Twisted equivariant matter_ , Ann. Henri Poincaré 14 (2013), no. 8, 1927–2023.
* [Fre94] D. Freed, _Higher algebraic structures and quantization_ , Comm. Math. Phys. 159 (1994), no. 2, 343–398.
* [FS06] G. Frobenius and I. Schur, _Über die reellen Darstellungen der endlichen Gruppen_ , Sitzungsberichte der königlich preussischen Akademie der Wissenschaften zu Berlin (1906), 198–208.
* [GH10] D. Gao and K. Hori, _On the structure of the Chan–Paton factors for D-branes in type II orientifolds_ , arXiv:1004.3972, 2010.
* [GI21] P. Georgieva and E.-N. Ionel, _A Klein TQFT: the local real Gromov-Witten theory of curves_ , Adv. Math. 391 (2021), Paper No. 107972, 70.
* [Gow79] R. Gow, _Real-valued and $2$-rational group characters_, J. Algebra 61 (1979), no. 2, 388–413.
* [HLS21] T.-C. Huang, Y.-H. Lin, and S. Seifnashri, _Construction of two-dimensional topological field theories with non-invertible symmetries_ , J. High Energy Phys. (2021), no. 12, 43 pp.
* [HW08] K. Hori and J. Walcher, _D-brane categories for orientifolds—the Landau-Ginzburg case_ , J. High Energy Phys. 4 (2008), 030, 36.
* [IT23] T. Ichikawa and Y. Tachikawa, _The super Frobenius–Schur indicator and finite group gauge theories on $\textnormal{Pin}^{-}$ surfaces_, Comm. Math. Phys. 400 (2023), no. 1, 417–428.
* [Kar70] M. Karoubi, _Sur la $K$-théorie équivariante_, Séminaire Heidelberg-Saarbrücken-Strasbourg sur la K-théorie (1967/68), Lecture Notes in Mathematics, Vol. 136, Springer, Berlin, 1970, pp. 187–253.
* [Kar85] G. Karpilovsky, _Projective representations of finite groups_ , Monographs and Textbooks in Pure and Applied Mathematics, vol. 94, Marcel Dekker, Inc., New York, 1985.
* [Kho11] V. Khoi, _On Turaev’s theorem about Dijkgraaf-Witten invariants of surfaces_ , J. Knot Theory Ramifications 20 (2011), no. 6, 837–846.
* [KM97] V. Karimipour and A. Mostafazadeh, _Lattice topological field theory on nonorientable surfaces_ , J. Math. Phys. 38 (1997), no. 1, 49–66.
* [KR04] A. Kapustin and L. Rozansky, _On the relation between open and closed topological strings_ , Comm. Math. Phys. 252 (2004), no. 1-3, 393–414.
* [KT17] A. Kapustin and A. Turzillo, _Equivariant topological quantum field theory and symmetry protected topological phases_ , J. High Energy Phys. (2017), no. 3, 006, front matter+19.
* [Laz01] C. Lazaroiu, _On the structure of open-closed topological field theory in two dimensions_ , Nuclear Phys. B 603 (2001), no. 3, 497–530.
* [LN11] S. Loktev and S. Natanzon, _Klein topological field theories from group representations_ , SIGMA Symmetry Integrability Geom. Methods Appl. 7 (2011), Paper 070, 15.
* [LP08] A. Lauda and H. Pfeiffer, _Open-closed strings: two-dimensional extended TQFTs and Frobenius algebras_ , Topology Appl. 155 (2008), no. 7, 623–666.
* [Med78] A. Mednykh, _Determination of the number of nonequivalent coverings over a compact Riemann surface_ , Dokl. Akad. Nauk SSSR 239 (1978), no. 2, 269–271.
* [MS06] G. Moore and G. Segal, _D-branes and $K$-theory in 2D topological field theory_, arXiv:hep-th/0609042, 2006.
* [MY05] M. Mulase and J. Yu, _Non-commutative matrix integrals and representation varieties of surface groups in a finite group_ , Ann. Inst. Fourier (Grenoble) 55 (2005), no. 6, 2161–2196.
* [NY22] B. Noohi and M. Young, _Twisted loop transgression and higher Jandl gerbes over finite groupoids_ , Algebr. Geom. Topol. 22 (2022), no. 4, 1663–1712.
* [RT21] D. Rumynin and J. Taylor, _Real representations of $C_{2}$-graded groups: the antilinear theory_, Linear Algebra Appl. 610 (2021), 135–168.
* [RT22] by same author, _Real representations of $C_{2}$-graded groups: the linear and hermitian theories_, High. Struct. 6 (2022), no. 1, 359–374.
* [RY21] D. Rumynin and M. Young, _Burnside rings for Real 2-representation theory: The linear theory_ , Commun. Contemp. Math. 23 (2021), no. 5, 2050012, 54.
* [Sha11] E. Sharpe, _Notes on discrete torsion in orientifolds_ , J. Geom. Phys. 61 (2011), no. 6, 1017–1032.
* [Sha15] by same author, _Notes on generalized global symmetries in QFT_ , Fortschr. Phys. 63 (2015), no. 11-12, 659–682.
* [Shi12] K. Shimizu, _Frobenius–Schur indicator for categories with duality_ , Axioms 1 (2012), no. 3, 324–364.
* [Sny17] N. Snyder, _Mednykh’s formula via lattice topological quantum field theories_ , Proceedings of the 2014 Maui and 2015 Qinhuangdao conferences in honour of Vaughan F. R. Jones’ 60th birthday, Proc. Centre Math. Appl. Austral. Nat. Univ., vol. 46, Austral. Nat. Univ., Canberra, 2017, pp. 389–398.
* [SR17] K. Shiozaki and S. Ryu, _Matrix product states and equivariant topological field theories for bosonic symmetry-protected topological phases in $(1+1)$ dimensions_, J. High Energy Phys. (2017), no. 4, 100, front matter+46.
* [TT06] V. Turaev and P. Turner, _Unoriented topological quantum field theory and link homology_ , Algebr. Geom. Topol. 6 (2006), 1069–1093.
* [Tur07] V. Turaev, _Dijkgraaf-Witten invariants of surfaces and projective representations of groups_ , J. Geom. Phys. 57 (2007), no. 11, 2419–2430.
* [Wig59] E. Wigner, _Group theory and its application to the quantum mechanics of atomic spectra_ , Pure and Applied Physics. Vol. 5, Academic Press, New York-London, 1959.
* [Wil08] S. Willerton, _The twisted Drinfeld double of a finite group via gerbes and finite groupoids_ , Algebr. Geom. Topol. 8 (2008), no. 3, 1419–1457.
* [You20] M. Young, _Orientation twisted homotopy field theories and twisted unoriented Dijkgraaf–Witten theory_ , Comm. Math. Phys. 374 (2020), no. 3, 1645–1691.
* [You21] by same author, _Real representation theory of finite categorical groups_ , High. Struct. 5 (2021), no. 1, 18–70.
\tau_\infty(T,w,f) = \;& \int_{[0,1]} F(w, f)(\xi, \rd z_1,\dots, \rd z_{|T|}) \;\rd \xi.
\end{aligned}
\end{equation*}
For any $\varphi \in C_c(\R^{|T|})$, by Lemma <ref>,
\begin{equation*}
\begin{aligned}
\;& \lim_{N \to \infty} \int_{z \in \R^{|T|}} \varphi(z_1,\dots,z_{|T|}) \; \tau_\infty(T,\tilde w_{N;\#},\tilde f_{N;\#}) (\rd z_1,\dots, \rd z_{|T|})
\\
= \;& \lim_{N \to \infty} \int_{[0,1]} \int_{z \in \R^{|T|}} \varphi(z_1,\dots,z_{|T|}) F(\tilde w_N, \tilde f_N)(\Phi_N(\xi), \rd z_1,\dots, \rd z_{|T|}) \;\rd \xi
\\
= \;& \int_{[0,1]} \int_{z \in \R^{|T|}} \varphi(z_1,\dots,z_{|T|}) F(w, f)(\xi, \rd z_1,\dots, \rd z_{|T|}) \;\rd \xi
\\
= \;& \int_{z \in \R^{|T|}} \varphi(z_1,\dots,z_{|T|}) \; \tau_\infty(T,w,f) (\rd z_1,\dots, \rd z_{|T|}).
\end{aligned}
\end{equation*}
Since $\varphi \in C_c(\R^{|T|})$ is arbitrary, we conclude (<ref>), restated here:
\begin{equation*}
\begin{aligned}
\tau_\infty(T,\tilde w_N, \tilde f_N) \overset{\ast}{\rightharpoonup} \tau_\infty(T,w,f) \in \mathcal{M}(\R^{|T|}), \quad \forall T \in \mathcal{T}.
\end{aligned}
\end{equation*}
§ PROOFS OF THE QUANTITATIVE RESULTS
§.§ The hierarchy of equations
This subsection contains the proofs of Propositions <ref>, <ref> and <ref>, which derive the hierarchy of equations from the Liouville equation (<ref>) and the Vlasov equation (<ref>)-(<ref>).
We begin with the proof of Proposition <ref>, showing that the observables corresponding to the laws of $(X^{1;N}_0,\dots, X^{N;N}_0)$ solving (<ref>) satisfy the extended BBGKY hierarchy (<ref>)-(<ref>).
Since the coefficients are bounded Lipschitz, the well-posedness of the SDE system (<ref>) and the Liouville-type equation (<ref>) are classical results.
For simplicity of the presentation, we avoid using weak formulations but only present a formal calculation.
Consider any distinct indices $i_1,\dots,i_k \in \{1,\dots,N\}$. It is easy to verify the following identity, which recovers the marginal laws from the full joint law,
\begin{equation*}
\begin{aligned}
f_{N}^{i_1,\dots, i_k}(t,z_1,\dots,z_{k}) \defeq \;& \law (X^{i_1;N}_t,\dots, X^{i_{k};N}_t)
\\
=\;& \bigg(\int_{\R^{N - k}} f_{N}(t,x_1,\dots,x_N) \textstyle{\prod_{i \neq i_1,\dots, i_k} \rd x_i}\bigg)\bigg|_{\forall l = 1,\dots, k, \; x_{i_l} = z_l}.
\end{aligned}
\end{equation*}
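As a toy illustration of this identity (our own sketch, with continuous densities replaced by a discrete law so that integrals become sums), marginalizing a joint law over the remaining coordinates recovers the marginal of the selected ones:

```python
import itertools
import random

random.seed(1)
# Joint law f_N of (X^1, X^2, X^3) on the toy state space {0,1}^3.
states = list(itertools.product([0, 1], repeat=3))
weights = [random.random() for _ in states]
total = sum(weights)
f_N = {s: w / total for s, w in zip(states, weights)}

def marginal(f, idx):
    """Marginal law of the coordinates listed in idx (0-based indices)."""
    m = {}
    for s, p in f.items():
        key = tuple(s[i] for i in idx)
        m[key] = m.get(key, 0.0) + p
    return m

f_13 = marginal(f_N, (0, 2))  # law of (X^1, X^3): sum out the middle coordinate
assert abs(sum(f_13.values()) - 1.0) < 1e-12
```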
By integrating the Liouville equation (<ref>) along the spatial directions $i \notin \{i_1,\dots,i_{k}\}$ and treating the sums over $i \in \{i_1,\dots,i_{k}\}$ and $i \notin \{i_1,\dots,i_{k}\}$ separately, we obtain equations for the marginals,
\begin{equation} \label{eqn:hierarchy_equation_marginal_integrate}
\begin{aligned}
& \partial_t f_{N}^{i_1,\dots, i_{k}}(t,z_1,\dots,z_{k})
\\
&\quad = \sum_{m = 1}^{k} \Bigg\{ \bigg[ - \partial_{z_m}(\mu(z_m) f_{N}^{i_1,\dots, i_{k}}(t,z)) + \frac{\sigma^2}{2} \partial_{z_m}^2 f_{N}^{i_1,\dots, i_{k}}(t,z)
\\
&\qquad - \nu(z_m) f_{N}^{i_1,\dots, i_{k}}(t,z) + \delta_0(z_m) \bigg( \int_{\R} \nu(u_m) f_{N}^{i_1,\dots, i_{k}}(t,u - {w_{N;i_m}^{i_1,\dots, i_{k}}}) \;\rd u_m \bigg)\bigg|_{\forall n \neq m,\, u_n = z_n} \bigg] \Bigg\}
\\
&\qquad\ + \sum_{i \neq i_1,\dots,i_{k}} \int_{\R} \nu(z_{k+1}) \bigg( f_{N}^{i_1,\dots, i_{k}, i}(t,z - {w_{N;i}^{i_1,\dots, i_{k}, i}}) - f_{N}^{i_1,\dots, i_{k}, i}(t,z) \bigg) \;\rd z_{k+1}.
\end{aligned}
\end{equation}
We can reformulate the last line as
\begin{equation*}
\begin{aligned}
& \sum_{i \neq i_1,\dots,i_{k}} \int_{\R} \nu(z_{k+1}) \bigg( f_{N}^{i_1,\dots, i_{k}, i}(t,z - {w_{N;i}^{i_1,\dots, i_{k}, i}}) - f_{N}^{i_1,\dots, i_{k}, i}(t,z) \bigg) \;\rd z_{k+1}
\\
&\quad= \sum_{i \neq i_1,\dots,i_{k}} \int_{\R} \nu(z_{k+1}) \bigg( \int_0^1 \sum_{m=1}^{k} - w_{i_m,i;N} \partial_{z_m} f_{N}^{i_1,\dots, i_{k}, i}(t,z - r {w_{N;i}^{i_1,\dots, i_{k}, i}}) \;\rd r \bigg) \;\rd z_{k+1}
\\
&\quad = \sum_{m=1}^{k} - \partial_{z_m} \bigg[ \sum_{i \neq i_1,\dots,i_{k}} w_{i_m,i;N} \int_{\R} \nu(z_{k+1}) \bigg( \int_0^1 f_{N}^{i_1,\dots, i_{k}, i}(t,z - r {w_{N;i}^{i_1,\dots, i_{k}, i}}) \;\rd r \bigg) \;\rd z_{k+1} \bigg],
\end{aligned}
\end{equation*}
which turns it into an additional advection term $\partial_{z_m}[\dots]$ in the equation.
We introduce the simple identity
\begin{equation*}
\begin{aligned}
f_{N}^{i_1,\dots, i_{k}}(u - {w_{N;i_m}^{i_1,\dots, i_{k}}}) = f_{N}^{i_1,\dots, i_{k}}(u) - \big\{f_{N}^{i_1,\dots, i_{k}}(u) - f_{N}^{i_1,\dots, i_{k}}(u - {w_{N;i_m}^{i_1,\dots, i_{k}}})\big\},
\end{aligned}
\end{equation*}
and proceed to do the same for $f_{N}^{i_1,\dots, i_{k}, i}(z - r {w_{N;i}^{i_1,\dots, i_{k}, i}})$, so that
the marginal equations (<ref>) now read
\begin{equation} \label{eqn:hierarchy_equation_marginal}
\begin{aligned}
& \partial_t f_{N}^{i_1,\dots, i_{k}}(z_1,\dots,z_{k})
\\
&\quad = \sum_{m = 1}^{k} \Bigg\{ \bigg[ - \partial_{z_m}(\mu(z_m) f_{N}^{i_1,\dots, i_{k}}(z)) + \frac{\sigma^2}{2} \partial_{z_m}^2 f_{N}^{i_1,\dots, i_{k}}(z) - \nu(z_m) f_{N}^{i_1,\dots, i_{k}}(z)
\\
&\qquad + \delta_0(z_m) \bigg( \int_{\R} \nu(u_m) \Big( f_{N}^{i_1,\dots, i_{k}}(u) - \big\{f_{N}^{i_1,\dots, i_{k}}(u) - f_{N}^{i_1,\dots, i_{k}}(u - {w_{N;i_m}^{i_1,\dots, i_{k}}})\big\} \Big) \;\rd u_m \bigg)\bigg|_{\forall n \neq m,\, u_n = z_n} \bigg]
\\
& \qquad- \partial_{z_m} \bigg[ \sum_{i \neq i_1,\dots,i_{k}} w_{i_m,i;N} \int_{\R} \nu(z_{k+1}) \bigg( \int_0^1 f_{N}^{i_1,\dots, i_{k}, i}(z) \\
&\qquad\qquad\qquad\qquad\qquad- \big\{f_{N}^{i_1,\dots, i_{k}, i}(z) - f_{N}^{i_1,\dots, i_{k}, i}(z - r {w_{N;i}^{i_1,\dots, i_{k}, i}})\big\} \;\rd r \bigg) \;\rd z_{k+1} \bigg] \Bigg\},
\end{aligned}
\end{equation}
where we omit variable $t$ for simplicity.
By taking the time derivative of the definition of the observables (<ref>), restated here,
\begin{equation*}
\begin{aligned}
\tau_N (T,w_N,f_N)(t,z) \defeq \;& \frac{1}{N} \sum_{i_1,\dots, i_{|T|} = 1}^N w_{N,T}(i_1,\dots, i_{|T|}) f_{N}^{i_1,\dots, i_{|T|}}(t,z_1,\dots,z_{|T|})
\end{aligned}
\end{equation*}
and substituting the right-hand side $\partial_t f_{N}^{i_1,\dots, i_{|T|}}$ using the marginal equation (<ref>) with $k = |T|$, we obtain that
\begin{equation*}
\begin{aligned}
& \partial_t \bigg( \frac{1}{N} \sum_{i_1,\dots, i_{|T|} = 1}^N w_{N,T}(i_1,\dots, i_{|T|}) f_{N}^{i_1,\dots, i_{|T|}}(z_1,\dots,z_{|T|}) \bigg)= \frac{1}{N} \sum_{i_1,\dots, i_{|T|} = 1}^N w_{N,T}(i_1,\dots, i_{|T|}) \sum_{m = 1}^{|T|} \Bigg\{ \\
&\ \bigg[ - \partial_{z_m}(\mu(z_m) f_{N}^{i_1,\dots, i_{|T|}}(z)) + \frac{\sigma^2}{2} \partial_{z_m}^2 f_{N}^{i_1,\dots, i_{|T|}}(z) - \nu(z_m) f_{N}^{i_1,\dots, i_{|T|}}(z)\\
&\ + \delta_0(z_m) \bigg( \int_{\R} \nu(u_m) \Big( f_{N}^{i_1,\dots, i_{|T|}}(u) - \big\{f_{N}^{i_1,\dots, i_{|T|}}(u)- f_{N}^{i_1,\dots, i_{|T|}}(u - {w_{N;i_m}^{i_1,\dots, i_{|T|}}})\big\} \Big) \;\rd u_m \bigg)\bigg|_{\forall n \neq m,\, u_n = z_n} \bigg]
\\
& - \partial_{z_m} \bigg[ \sum_{i \neq i_1,\dots,i_{|T|}} \!\!\!\! w_{i_m,i;N} \!\! \int_{\R} \! \nu(z_{|T|+1}) \! \bigg( \int_0^1 f_{N}^{i_1,\dots, i_{|T|}, i}(z) - \big\{f_{N}^{i_1,\dots, i_{|T|}, i}(z) \\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad - f_{N}^{i_1,\dots, i_{|T|}, i}(z - r {w_{N;i}^{i_1,\dots, i_{|T|}, i}})\big\} \rd r \bigg) \rd z_{|T|+1} \bigg] \Bigg\}.
\end{aligned}
\end{equation*}
Noticing the identity $w_{N,T + j}(i_1,\dots, i_{|T| + 1}) = w_{N,T}(i_1,\dots, i_{|T|}) w_{i_j, i_{|T|+1};N}$,
we see that all the marginals, except the two terms of the form
$\{f_N^{\dots}(\cdot) - f_N^{\dots}(\cdot - w)\}$, are in the correct form to be rewritten as observables, which yields (<ref>) as the approximate hierarchy and (<ref>) as the explicit form of the remainders.
We now turn to the proof of Proposition <ref>. It is worth noting that the main Gronwall estimate could also be written in the probabilistic language of Itô calculus. However, we prefer to keep the approach and notation consistent with the rest of the proofs.
To simplify the argument, we only present a formal calculation in which the tensorized weight $\eta^{\otimes |T|}$ is used directly as the test function, while, strictly speaking, valid test functions for distributional solutions should have compact support. Since the remaining coefficients are bounded Lipschitz and all terms in the subsequent calculation are non-negative, passing to the limit to justify the use of an unbounded weight on the dual side poses no problem.
The weighted total variation $\||\tau_N|(T) \eta^{\otimes |T|}\|_{\mathcal{M}(\R^{|T|})}$ can be decomposed as
\begin{equation*}
\begin{aligned}
\||\tau_N|(T)(t,\cdot) \eta^{\otimes |T|}\|_{\mathcal{M}(\R^{|T|})} = \;& \int_{\R^{|T|}} \frac{1}{N} \sum_{i_1,\dots, i_{|T|} = 1}^N \big| w_{N,T}(i_1,\dots, i_{|T|}) \big| f_{N}^{i_1,\dots, i_{|T|}}(t,z) \eta^{\otimes |T|}(z) \;\rd z
\\
= \;& \frac{1}{N} \sum_{i_1,\dots, i_{|T|} = 1}^N \big| w_{N,T}(i_1,\dots, i_{|T|}) \big| \int_{\R^{|T|}} f_{N}^{i_1,\dots, i_{|T|}}(t,z) \eta^{\otimes |T|}(z) \;\rd z.
\end{aligned}
\end{equation*}
For any distinct indices $i_1,\dots,i_{|T|}$, we have
\begin{equation*}
\begin{aligned}
\int_{\R^{|T|}} f_{N}^{i_1,\dots, i_{|T|}}(t,z) \eta^{\otimes |T|}(z) \;\rd z = \int_{\R^N} f_{N}(t,x) {\textstyle \prod_{l = 1}^{|T|} \eta(x_{i_l})} \;\rd x.
\end{aligned}
\end{equation*}
The forthcoming estimate is not exclusive to our specific choice $\eta = \eta_\alpha$, but holds for any weight function of the form
\begin{equation*}
\begin{aligned}
\eta(x) = \exp( h(x)), \quad \forall x \in \R
\end{aligned}
\end{equation*}
such that $\|h'\|_{L^\infty}$, $\|h''\|_{L^\infty}$ are bounded and $h(0) \leq h(x)$.
Our choice of $\eta = \eta_\alpha$ is clearly included by choosing
$h(x) = \sqrt{1 + \alpha^2 x^2}$, resulting in $\|h'\|_{L^\infty} \leq \alpha$ and $\|h''\|_{L^\infty} \leq \alpha^2$.
The following inequalities are immediate consequences of the chain rule and the fundamental theorem of calculus.
For any weight function of form $\eta(x) = \exp( h(x))$ such that $\|h'\|_{L^\infty}$, $\|h''\|_{L^\infty}$ are bounded and $h(0) \leq h(x)$, one has that
\begin{equation*}
\begin{aligned}
|\eta'/\eta|(x) \leq \|h'\|_{L^\infty}, \quad |\eta''/\eta|(x) \leq \|h''\|_{L^\infty} + \|h'\|_{L^\infty}^2,
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
\eta(x + y) - \eta(x) %\leq [ \exp(\|h'\|_{L^\infty} |y|) - 1 ] \eta(x)
\leq \|h'\|_{L^\infty} |y| \exp(\|h'\|_{L^\infty} |y|) \eta(x).
\end{aligned}
\end{equation*}
The last inequality can be extended to the tensorized case $\eta^{\otimes k}(x) = \prod_{l = 1}^{k} \eta(x_l)$ as
\begin{equation*}
\begin{aligned}
\eta^{\otimes k}(x + y) - \eta^{\otimes k}(x) %\leq [ \exp(\|h'\|_{L^\infty} \|y\|_{\ell^1}) - 1 ] \eta^{\otimes k}(x)
\leq \|h'\|_{L^\infty} \|y\|_{\ell^1} \exp(\|h'\|_{L^\infty} \|y\|_{\ell^1}) \eta^{\otimes k}(x).
\end{aligned}
\end{equation*}
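These bounds are elementary but easy to sanity-check numerically. The sketch below (our own illustration, not from the text) verifies $|\eta'/\eta| \leq \|h'\|_{L^\infty}$ and the increment bound for the concrete choice $h(x) = \sqrt{1+\alpha^2 x^2}$, for which $\|h'\|_{L^\infty} \leq \alpha$:

```python
import math
import random

alpha = 0.7
h   = lambda x: math.sqrt(1 + (alpha * x) ** 2)
hp  = lambda x: alpha ** 2 * x / math.sqrt(1 + (alpha * x) ** 2)  # h'(x)
eta = lambda x: math.exp(h(x))

lip = alpha  # ||h'||_{L^inf} <= alpha for this choice of h

random.seed(0)
for _ in range(10_000):
    x = random.uniform(-40.0, 40.0)
    y = random.uniform(-40.0, 40.0)
    # |eta'/eta| = |h'| <= ||h'||_{L^inf}
    assert abs(hp(x)) <= lip + 1e-12
    # eta(x+y) - eta(x) <= ||h'|| |y| exp(||h'|| |y|) eta(x)
    lhs = eta(x + y) - eta(x)
    rhs = lip * abs(y) * math.exp(lip * abs(y)) * eta(x)
    assert lhs <= rhs * (1 + 1e-12) + 1e-9
```

The increment bound follows from $\eta(x+y)-\eta(x) = \eta(x)\,(e^{h(x+y)-h(x)}-1)$ together with $e^{t}-1 \leq t\,e^{t}$ for $t \geq 0$.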
We are now ready to prove Proposition <ref> under the more general assumption that $\eta(x) = \exp( h(x))$.
Since $f_N$ solves (<ref>) in the distributional sense, it is easy to verify that
\begin{equation*}
\begin{aligned}
& \int_{\R^N} f_{N}(t,x) {\textstyle \prod_{l = 1}^{|T|} \eta(x_{i_l})} \;\rd x
= \int_{\R^N} f_{N}(0,x) {\textstyle \prod_{l = 1}^{|T|} \eta(x_{i_l})} \;\rd x
\\
&\quad + \int_0^t \int_{\R^N} f_{N}(s,x) \Bigg[ \sum_{m=1}^{|T|} \bigg( \mu(x_{i_m}) (\eta'/\eta) (x_{i_m}) + \frac{1}{2}\sigma^2 (\eta''/\eta) (x_{i_m}) \bigg) {\textstyle \prod_{l = 1}^{|T|} \eta(x_{i_l})}
\\
&\quad + \sum_{j = i_1,\dots, i_{|T|}} \nu(x_j) \bigg( \frac{\eta(0)}{\eta(x_{j})} {\textstyle \prod_{l = 1}^{|T|} \eta(x_{i_l} + w_{i_l,j;N})} - {\textstyle \prod_{l = 1}^{|T|} \eta(x_{i_l})} \bigg)
\\
&\quad + \sum_{j \neq i_1,\dots, i_{|T|}} \nu(x_j) \bigg( {\textstyle \prod_{l = 1}^{|T|} \eta(x_{i_l} + w_{i_l,j;N})} - {\textstyle \prod_{l = 1}^{|T|} \eta(x_{i_l})} \bigg)
\Bigg] \;\rd x \rd s.
\end{aligned}
\end{equation*}
By Lemma <ref>, we have that
\begin{equation*}
\begin{aligned}
& \int_{\R^N} f_{N}(t,x) {\textstyle \prod_{l = 1}^{|T|} \eta(x_{i_l})} \;\rd x
\leq \int_{\R^N} f_{N}(0,x) {\textstyle \prod_{l = 1}^{|T|} \eta(x_{i_l})} \;\rd x
\\
&\qquad + \int_0^t \int_{\R^N} f_{N}(s,x) \Bigg[ \sum_{m=1}^{|T|} \bigg( \|\mu\|_{L^\infty} \|h'\|_{L^\infty} + \frac{1}{2}\sigma^2 (\|h''\|_{L^\infty} + \|h'\|_{L^\infty}^2) \bigg) {\textstyle \prod_{l = 1}^{|T|} \eta(x_{i_l})}
\\
&\qquad + \sum_{j = 1}^{N} \|\nu\|_{L^\infty} \; \|h'\|_{L^\infty} {\textstyle \sum_{m=1}^{|T|} |w_{i_m,j;N}|} \exp\Big(\|h'\|_{L^\infty} \; {\textstyle \max_{j} \sum_{i} |w_{i,j;N}|}\Big) {\textstyle \prod_{l = 1}^{|T|} \eta(x_{i_l})} \Bigg] \;\rd x \rd s
\\
&\quad= \int_{\R^N} f_{N}(0,x) {\textstyle \prod_{l = 1}^{|T|} \eta(x_{i_l})} \;\rd x
+ \Bigg[ \sum_{m=1}^{|T|} \bigg( \|\mu\|_{L^\infty} \|h'\|_{L^\infty} + \frac{1}{2}\sigma^2 (\|h''\|_{L^\infty} + \|h'\|_{L^\infty}^2) \bigg)
\\
& \qquad+ \sum_{j = 1}^{N} \sum_{m=1}^{|T|} |w_{i_m,j;N}| \; \|\nu\|_{L^\infty} \|h'\|_{L^\infty} \exp\Big(\|h'\|_{L^\infty} \; {\textstyle \max_{j} \sum_{i} |w_{i,j;N}|}\Big) \Bigg] \int_0^t \int_{\R^N} f_{N}(s,x) {\textstyle \prod_{l = 1}^{|T|} \eta(x_{i_l})} \;\rd x \rd s,
\end{aligned}
\end{equation*}
where the summations of $j = i_1,\dots, i_{|T|}$ and $j \neq i_1,\dots, i_{|T|}$ are combined together by the simple fact that $h(0) \leq h(x_j)$, hence ${\eta(0)}/{\eta(x_{j})} \leq 1$.
Furthermore, we have that
\begin{equation*}
\begin{aligned}
\sum_{j = 1}^{N} \sum_{m=1}^{|T|} |w_{i_m,j;N}| \leq |T| \; {\textstyle \max_{i} \sum_{j} |w_{i,j;N}|}.
\end{aligned}
\end{equation*}
Hence by choosing
\begin{equation*}
\begin{aligned}
C_\mathcal{W} = \;& \textstyle \max\left(\max_{i} \sum_{j} |w_{i,j;N}| ,\ \max_{j} \sum_{i} |w_{i,j;N}|\right),
\\
A_\eta = \;& \Big( \|\mu\|_{L^\infty} \|h'\|_{L^\infty} + \frac{1}{2}\sigma^2 (\|h''\|_{L^\infty} + \|h'\|_{L^\infty}^2) + \|\nu\|_{L^\infty} \|h'\|_{L^\infty} C_{\mathcal{W}} \exp(\|h'\|_{L^\infty} C_{\mathcal{W}}) \Big),
\end{aligned}
\end{equation*}
we conclude that
\begin{equation*}
\begin{aligned}
& \int_{\R^N} f_{N}(t,x) {\textstyle \prod_{l = 1}^{|T|} \eta(x_{i_l})} \;\rd x \\
&\quad \leq \int_{\R^N} f_{N}(0,x) {\textstyle \prod_{l = 1}^{|T|} \eta(x_{i_l})} \;\rd x + \int_0^t |T| A_\eta \int_{\R^N} f_{N}(s,x) {\textstyle \prod_{l = 1}^{|T|} \eta(x_{i_l})} \;\rd x \;\rd s.
\end{aligned}
\end{equation*}
By Gronwall lemma, this implies that
\begin{equation*}
\begin{aligned}
\;& \int_{\R^N} f_{N}(t,x) {\textstyle \prod_{l = 1}^{|T|} \eta(x_{i_l})} \;\rd x \leq \exp \big( |T| A_\eta t \big) \int_{\R^N} f_{N}(0,x) {\textstyle \prod_{l = 1}^{|T|} \eta(x_{i_l})} \;\rd x.
\end{aligned}
\end{equation*}
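The Gronwall step can be illustrated by a discrete iteration that saturates the integral inequality $u(t) \leq u(0) + A \int_0^t u(s)\,ds$: the resulting iterates satisfy $u_k = u(0)(1 + A\,\Delta t)^k$ and stay below the exponential bound $u(0)e^{At}$ (a schematic check with our own variable names, not from the text):

```python
import math

# Saturate the integral inequality u(t) <= u0 + A * int_0^t u(s) ds on a
# grid, using left-endpoint quadrature; by induction u_k = u0*(1 + A*dt)^k,
# which lies below the Gronwall bound u0 * exp(A * k * dt).
u0, A, T, n = 1.0, 2.0, 1.0, 100_000
dt = T / n

u = [u0]
integral = 0.0
for _ in range(n):
    integral += u[-1] * dt
    u.append(u0 + A * integral)

bound = u0 * math.exp(A * T)
assert all(u[k] <= u0 * math.exp(A * k * dt) + 1e-9 for k in range(n + 1))
print(f"saturated iterate: {u[-1]:.6f}, Gronwall bound: {bound:.6f}")
```

As the grid is refined, the saturated iterate approaches the Gronwall bound from below, showing that the exponential rate is sharp.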
Taking the summation over $i_1,\dots, i_{|T|}$, we have that
\begin{equation*}
\begin{aligned}
\||\tau_N|(T)(t,\cdot) \eta^{\otimes |T|}\|_{\mathcal{M}(\R^{|T|})} \leq \;& \exp \big( |T| A_\eta t \big) \||\tau_N|(T)(0,\cdot) \eta^{\otimes |T|}\|_{\mathcal{M}(\R^{|T|})}
\\
\leq \;& C_\eta \big( M_\eta \exp(A_\eta t_*) \big)^{|T|}.
\end{aligned}
\end{equation*}
Finally, by applying Lemma <ref> to the left hand side, we immediately obtain (<ref>), restated here,
\begin{equation*}
\begin{aligned}
\| |\tau_N| (T) (t,\cdot) \|_{H^{-1\otimes |T|}_\eta} \leq C_\eta(T) \big(\|K\|_{L^2(\R)} \exp(A_\eta t_*) \big)^{|T|}
\end{aligned}
\end{equation*}
for all $T \in \mathcal{T}, t \in [0,t_*]$.
Finally, we give the proof of Proposition <ref>.
We show the well-posedness of Vlasov equation (<ref>)-(<ref>) by a classical fixed point argument. Let us first define the mapping $f \mapsto \mathcal{L} f$ as the solution of
\begin{equation*}
\begin{aligned}
\partial_t \mathcal{L} f(t,\xi,x) + \partial_x \Big(\mu^*_{f}(t,\xi,x) \mathcal{L} f(t,\xi,x) \Big) - \frac{\sigma^2}{2} \partial_{xx} \Big( \mathcal{L} f(t,\xi,x) \Big)
\\
+ \nu(x) \mathcal{L} f(t,\xi,x) - \delta_0(x) J_f(t,\xi) = 0.
\end{aligned}
\end{equation*}
If $f$ is given, then $J_f$ and $\mu^*_{f}$ are determined, making the above identity a linear equation with respect to $\mathcal{L} f$.
We are going to see that if $f \in L^\infty([0,t_*] \times [0,1] ; H^{-1}_\eta \cap \mathcal{M}_+(\R) )$, then $\mathcal{L} f$ belongs to the same space.
By multiplying the equation by the weight function $\eta$ and applying the Leibniz formula, we obtain that
\begin{equation*}
\begin{aligned}
& \partial_t \mathcal{L} f(t,\xi,x) \eta(x)
\\
& \quad=- \partial_x \Big(\mu^*_{f}(t,\xi,x) \mathcal{L} f(t,\xi,x) \eta(x) \Big) + \frac{\sigma^2}{2} \partial_{xx} \Big( \mathcal{L} f(t,\xi,x) \eta(x) \Big) - \nu(x) \mathcal{L} f(t,\xi,x) \eta(x)\\
&\qquad + \delta_0(x) \eta(0) J_f(t,\xi) + \mu^*_{f}(t,\xi,x) (\eta'/\eta)(x) \mathcal{L} f(t,\xi,x) \eta(x)\\
&\qquad + \frac{\sigma^2}{2} \bigg[ - \partial_x \Big( 2(\eta'/\eta)(x) \mathcal{L} f(t,\xi,x) \eta(x) \Big) + (\eta''/\eta)(x) \mathcal{L} f(t,\xi,x) \eta(x) \bigg].
\end{aligned}
\end{equation*}
We begin the a priori estimates of the linear mapping $\mathcal{L}$ with the total mass.
It is straightforward to verify that
\begin{equation} \label{eqn:a_priori_invariance_1}
\begin{aligned}
\| \mathcal{L}f(t,\xi,\cdot)\|_{\mathcal{M}(\R)} \leq \;& \| f(0,\xi,\cdot)\|_{\mathcal{M}(\R)} + \int_0^t J_f(s,\xi) \;\rd s
\\
\leq \;& \| f(0,\xi,\cdot)\|_{\mathcal{M}(\R)} + \int_0^t \|\nu\|_{L^\infty} \| f(s,\xi,\cdot) \|_{\mathcal{M}(\R)} \;\rd s.
\end{aligned}
\end{equation}
Note that by choosing $t_1 = 1/(2\|\nu\|_{L^\infty})$, we have that
\begin{equation*}
\begin{aligned}
\sup_{t \in [0,t_1]} \| f(t,\xi,\cdot)\|_{\mathcal{M}(\R)} \leq 2 \| f(0,\xi,\cdot)\|_{\mathcal{M}(\R)} \implies \sup_{t \in [0,t_1]} \| \mathcal{L}f(t,\xi,\cdot)\|_{\mathcal{M}(\R)} \leq 2 \| f(0,\xi,\cdot)\|_{\mathcal{M}(\R)}.
\end{aligned}
\end{equation*}
Next, consider the $\eta$-weighted total moment,
\begin{equation*}
\begin{aligned}
& \| \mathcal{L}f(t,\xi,\cdot) \eta\|_{\mathcal{M}(\R)}
\\
&\quad \leq \| f(0,\xi,\cdot) \eta\|_{\mathcal{M}(\R)} + \int_0^t \Big\{\eta(0)J_f(s,\xi) + \Big[ \|\mu_f^*(s,\xi,\cdot)\|_{L^\infty} \|\eta'/\eta\|_{L^\infty} + \frac{\sigma^2}{2} \|\eta''/\eta\|_{L^\infty} \Big] \\
&\hspace{330pt} \|\mathcal{L}f(s,\xi,\cdot) \eta\|_{\mathcal{M}(\R)}\Big\} \;\rd s
\\
&\quad \leq \| f(0,\xi,\cdot) \eta\|_{\mathcal{M}(\R)} + \int_0^t \Big\{\eta(0)\|\nu\|_{L^\infty} \| f(s,\xi,\cdot) \|_{\mathcal{M}(\R)}
\\
& \qquad+ \Big[ \big( \|\mu\|_{L^\infty} + \|w\|_{\mathcal{W}} \|\nu\|_{L^\infty} \| f(s,\cdot,\cdot) \|_{L^\infty_\xi \mathcal{M}_x} \big) \|\eta'/\eta\|_{L^\infty} + \frac{\sigma^2}{2} \|\eta''/\eta\|_{L^\infty} \Big] \| \mathcal{L}f(s,\xi,\cdot) \eta\|_{\mathcal{M}(\R)}\Big\} \;\rd s.
\end{aligned}
\end{equation*}
By taking the supremum over $\xi \in [0,1]$, we have, for $t \in [0,t_1]$,
\begin{equation} \label{eqn:a_priori_invariance_2}
\begin{aligned}
& \| \mathcal{L}f(t,\cdot,\cdot) \eta\|_{L^\infty_\xi \mathcal{M}_x}
\leq \| f(0,\cdot,\cdot) \eta\|_{L^\infty_\xi \mathcal{M}_x} + \int_0^t \eta(0)\|\nu\|_{L^\infty} \| f \|_{L^\infty_{t,\xi} \mathcal{M}_x}
\\
& \qquad + \Big[ \big( \|\mu\|_{L^\infty} + \|w\|_{\mathcal{W}} \|\nu\|_{L^\infty} \| f \|_{L^\infty_{t,\xi} \mathcal{M}_x} \big) \|\eta'/\eta\|_{L^\infty} + \frac{\sigma^2}{2} \|\eta''/\eta\|_{L^\infty} \Big] \| \mathcal{L}f(s,\cdot,\cdot) \eta\|_{L^\infty_\xi \mathcal{M}_x} \;\rd s
\\
& \quad \leq \bigg( \| f(0,\cdot,\cdot) \eta\|_{L^\infty_\xi \mathcal{M}_x} + \int_0^t \eta(0)\|\nu\|_{L^\infty} \| f \|_{L^\infty_{t,\xi} \mathcal{M}_x} \;\rd s\bigg)
\\
& \qquad\qquad\exp \bigg( \Big[ \big( \|\mu\|_{L^\infty} + \|w\|_{\mathcal{W}} \|\nu\|_{L^\infty} \| f \|_{L^\infty_{t,\xi} \mathcal{M}_x} \big) \|\eta'/\eta\|_{L^\infty} + \frac{\sigma^2}{2} \|\eta''/\eta\|_{L^\infty} \Big] t \bigg),
\end{aligned}
\end{equation}
where the $L^\infty_t$ should be understood as the supremum over $t \in [0,t_1]$.
We construct the invariant set and establish $\mathcal{L}$-contractivity on it by the following procedure:
For any $R > R_0 \defeq \| f(0,\cdot,\cdot) \eta\|_{L^\infty_\xi \mathcal{M}_x}$ and any $t > 0$, denote
\begin{equation*}
\begin{aligned}
E_{R;t} \defeq \{ f \in \mathcal{M}_+ : \sup_{s \in [0,t]} \|f (s,\cdot,\cdot) \eta\|_{L^\infty_{\xi} \mathcal{M}_x} < R\}.
\end{aligned}
\end{equation*}
By taking $t_2$ sufficiently small, for example
\begin{equation*}
\begin{aligned}
t_2 \leq \min\bigg( \frac{1}{2 \|\nu\|_{L^\infty}} , \frac{R - R_0}{2 \eta(0) \|\nu\|_{L^\infty} R} , \frac{\log \frac{2R}{R+R_0}}{\big( \|\mu\|_{L^\infty} + \|w\|_{\mathcal{W}} \|\nu\|_{L^\infty} R \big) \|\eta'/\eta\|_{L^\infty} + \frac{\sigma^2}{2} \|\eta''/\eta\|_{L^\infty}} \bigg),
\end{aligned}
\end{equation*}
we can make $E_{R;t_2}$ an invariant set, i.e., $\mathcal{L}(E_{R;t_2}) \subset E_{R;t_2}$.
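The choice of $t_2$ can be checked directly against \eqref{eqn:a_priori_invariance_2}: assuming for this sketch that the unweighted norm is controlled as well, $\| f \|_{L^\infty_{t,\xi} \mathcal{M}_x} \leq R$, and abbreviating by $\kappa$ the bracketed coefficient in the exponential of \eqref{eqn:a_priori_invariance_2}, the second constraint on $t_2$ bounds the prefactor while the third bounds the exponential factor,
\begin{equation*}
\begin{aligned}
\| \mathcal{L}f(t,\cdot,\cdot) \eta\|_{L^\infty_\xi \mathcal{M}_x}
\leq \big( R_0 + t_2 \, \eta(0) \|\nu\|_{L^\infty} R \big)\, e^{\kappa t_2}
\leq \frac{R + R_0}{2} \cdot \frac{2R}{R + R_0} = R.
\end{aligned}
\end{equation*}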
To show that $f \mapsto \mathcal{L} f$ is contracting in the $H^{-1}_\eta$-sense, we consider the following energy estimate: Along each fiber $\xi \in [0,1]$,
\begin{equation*}
\begin{aligned}
& \frac{\rd}{\rd t} \bigg( \frac{1}{2} \int_{\R} \Big[ \Lambda \star \big((\mathcal{L} f - \mathcal{L} g) \eta\big) \Big] (\mathcal{L} f - \mathcal{L} g) \eta \;\rd x \bigg) = \int_{\R} \Big[ \Lambda \star \big( (\mathcal{L} f - \mathcal{L} g) \eta \big) \Big] \partial_t (\mathcal{L} f - \mathcal{L} g) \eta \;\rd x
\\
&\quad= \int_{\R} - \frac{\sigma^2}{2} \Big[ \Lambda \star \partial_x \big( (\mathcal{L} f - \mathcal{L} g) \eta\big) \Big] \Big[ \partial_x \big( (\mathcal{L} f - \mathcal{L} g) \eta\big) \Big]
\\
&\qquad + \Big[ \Lambda \star \partial_x \big( (\mathcal{L} f - \mathcal{L} g) \eta\big) \Big] \bigg[ \mu^*_{f} (\mathcal{L} f - \mathcal{L} g) \eta + (\mu^*_{f} - \mu^*_{g}) (\mathcal{L} g) \eta + \sigma^2 (\eta'/\eta) (\mathcal{L} f - \mathcal{L} g) \eta \bigg]
\\
&\qquad + \Big[ \Lambda \star \big( (\mathcal{L} f - \mathcal{L} g) \eta\big) \Big] \bigg[ - \nu (\mathcal{L} f - \mathcal{L} g) \eta + \delta_0 \eta(0) (J_f - J_g)
\\
& \qquad + \mu^*_{f} (\eta'/\eta) (\mathcal{L} f - \mathcal{L} g) \eta + (\mu^*_{f} - \mu^*_{g}) (\eta'/\eta) (\mathcal{L} g) \eta + \frac{\sigma^2}{2} (\eta''/\eta) (\mathcal{L} f - \mathcal{L} g) \eta \bigg] \;\rd x.
\end{aligned}
\end{equation*}
Applying the Cauchy-Schwarz inequality, we obtain that
\begin{equation*}
\begin{aligned}
& \frac{\rd}{\rd t} \bigg( \int_{\R} \Big[ \Lambda \star \big((\mathcal{L} f - \mathcal{L} g) \eta\big) \Big] (\mathcal{L} f - \mathcal{L} g) \eta \;\rd x \bigg)
\\
&\quad \leq \frac{4}{\sigma^2} \|\mu^*_{f} (\mathcal{L} f - \mathcal{L} g)\|_{H^{-1}_\eta}^2 + \frac{4}{\sigma^2} \|(\mu^*_{f} - \mu^*_{g}) (\mathcal{L} g)\|_{H^{-1}_\eta}^2 + 4 \| (\eta'/\eta) (\mathcal{L} f - \mathcal{L} g)\|_{H^{-1}_\eta}^2
\\
& \qquad+ \Big( 4 + \frac{\sigma^2}{2} \Big) \|(\mathcal{L} f - \mathcal{L} g)\|_{H^{-1}_\eta}^2 + \|\nu (\mathcal{L} f - \mathcal{L} g)\|_{H^{-1}_\eta}^2 + \|\delta_0 (J_f - J_g)\|_{H^{-1}_\eta}^2
\\
& \qquad+ \|\mu^*_{f} (\eta'/\eta) (\mathcal{L} f - \mathcal{L} g)\|_{H^{-1}_\eta}^2 + \|(\mu^*_{f} - \mu^*_{g}) (\eta'/\eta) (\mathcal{L} g)\|_{H^{-1}_\eta}^2 + \frac{\sigma^2}{2} \|(\eta''/\eta) (\mathcal{L} f - \mathcal{L} g)\|_{H^{-1}_\eta}^2.
\end{aligned}
\end{equation*}
Applying Lemma <ref>, we further have that
\begin{equation*}
\begin{aligned}
& \frac{\rd}{\rd t} \| (\mathcal{L} f - \mathcal{L} g)\|_{H^{-1}_\eta}^2 \leq \bigg( \frac{16}{\sigma^2} \|\mu^*_{f}\|_{W^{1,\infty}}^2 + 16 \|\eta'/\eta\|_{W^{1,\infty}}^2 + \Big( 4 + \frac{\sigma^2}{2} \Big)
\\
&\qquad + 4 \|\nu\|_{W^{1,\infty}}^2 + 4 \|\mu^*_{f}\|_{W^{1,\infty}}^2 \|\eta'/\eta\|_{W^{1,\infty}}^2 + 2 \sigma^2 \|\eta''/\eta\|_{W^{1,\infty}}^2 \bigg) \| (\mathcal{L} f - \mathcal{L} g)\|_{H^{-1}_\eta}^2
\\
&\qquad + \bigg( \frac{4}{\sigma^2} \| (\mathcal{L} g)\|_{H^{-1}_\eta}^2 + 4\|\eta'/\eta\|_{W^{1,\infty}}^2 \| (\mathcal{L} g)\|_{H^{-1}_\eta}^2 \bigg) |\mu^*_{f} - \mu^*_{g}|^2 + \|\delta_0\|_{H^{-1}_\eta}^2 |J_f - J_g|^2.
\end{aligned}
\end{equation*}
Now let us consider the integration over $\xi \in [0,1]$. Firstly, using that $w \in \mathcal{W}$ combined with classical interpolation,
\begin{equation*}
\begin{aligned}
& \int_{[0,1]} |\mu^*_{f}(t,\xi,x) - \mu^*_{g}(t,\xi,x)|^2 \;\rd \xi = \int_{[0,1]} \bigg( \int_{[0,1]} w(\xi, \zeta) \big( J_f(t,\zeta) - J_g(t,\zeta) \big) \;\rd \zeta \bigg)^2 \;\rd \xi
\\
&\qquad \leq \|w\|_{\mathcal{W}}^2 \|J_f(t,\cdot) - J_g(t,\cdot)\|_{L^2_\xi}^2.
\end{aligned}
\end{equation*}
Secondly, by Lemma <ref>,
\begin{equation*}
\begin{aligned}
\big| J_f(t,\xi) - J_g(t,\xi) \big| \leq \bigg| \int_{\R} \nu(x) \big( f(t,\xi,x) - g(t,\xi,x) \big) \;\rd x \bigg| \leq C (\alpha) \|\nu\|_{W^{1,\infty}} \|f(t,\cdot,\xi) - g(t,\cdot,\xi)\|_{H^{-1}_\eta}.
\end{aligned}
\end{equation*}
Hence, we have that
\begin{equation*}
\begin{aligned}
& \|J_f(t,\cdot) - J_g(t,\cdot)\|_{L^2_\xi}^2 = \int_{[0,1]} \big| J_f(t,\xi) - J_g(t,\xi) \big|^2 \;\rd \xi\\
&\quad \leq C(\alpha)^2 \|\nu\|_{W^{1,\infty}}^2 \int_{[0,1]} \|f(t,\cdot,\xi) - g(t,\cdot,\xi)\|_{H^{-1}_\eta}^2 \;\rd \xi
= C(\alpha)^2 \|\nu\|_{W^{1,\infty}}^2 \|f - g\|_{L^2_\xi (H^{-1}_\eta)_x}^2.
\end{aligned}
\end{equation*}
Therefore, by integrating over $\xi \in [0,1]$,
\begin{equation} \label{eqn:Vlasov_Gronwall}
\begin{aligned}
\| (\mathcal{L} f - \mathcal{L} g)(t,\cdot,\cdot)\|_{L^2_\xi (H^{-1}_\eta)_x}^2
\leq \;& \int_0^t M_0 \| (\mathcal{L} f - \mathcal{L} g) (s,\cdot,\cdot)\|_{L^2_\xi (H^{-1}_\eta)_x}^2 + M_1 \|(f - g) (s,\cdot,\cdot)\|_{L^2_\xi (H^{-1}_\eta)_x}^2 \;\rd s
\\
\leq \;& \exp(M_0 t) \int_0^t M_1 \|(f - g) (s,\cdot,\cdot)\|_{L^2_\xi (H^{-1}_\eta)_x}^2 \;\rd s
\end{aligned}
\end{equation}
where $M_0,M_1$ are required to satisfy that
\begin{equation*}
\begin{aligned}
M_0 \geq \;&\sup_{t \in [0,t_2]} \bigg( \frac{16}{\sigma^2} \|\mu^*_{f}\|_{L^\infty_\xi W^{1,\infty}_x}^2 + 16 \|\eta'/\eta\|_{W^{1,\infty}}^2 + \Big( 4 + \frac{\sigma^2}{2} \Big)
\\
\;&\quad + 4 \|\nu\|_{W^{1,\infty}}^2 + 4 \|\mu^*_{f}\|_{L^\infty_\xi W^{1,\infty}_x}^2 \|\eta'/\eta\|_{W^{1,\infty}}^2 + 2 \sigma^2 \|\eta''/\eta\|_{W^{1,\infty}}^2 \bigg)
\\
M_1 \geq \;& \sup_{t \in [0,t_2]} \bigg[ \bigg( \frac{4}{\sigma^2} \| (\mathcal{L} g)\|_{L^\infty_\xi (H^{-1}_\eta)_x}^2 + 4\|\eta'/\eta\|_{W^{1,\infty}}^2 \| (\mathcal{L} g)\|_{L^\infty_\xi (H^{-1}_\eta)_x}^2 \bigg) \|w\|_{\mathcal{W}}^2 + \|\delta_0\|_{H^{-1}_\eta}^2 \bigg] C(\alpha)^2 \|\nu\|_{W^{1,\infty}}^2.
\end{aligned}
\end{equation*}
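The second inequality in \eqref{eqn:Vlasov_Gronwall} is simply the integral form of Grönwall's lemma: writing $u(t) \defeq \| (\mathcal{L} f - \mathcal{L} g)(t,\cdot,\cdot)\|_{L^2_\xi (H^{-1}_\eta)_x}^2$ and $b(t) \defeq M_1 \|(f - g)(t,\cdot,\cdot)\|_{L^2_\xi (H^{-1}_\eta)_x}^2$, the first line reads $u(t) \leq \int_0^t M_0 u(s) + b(s) \;\rd s$ with $u(0) = 0$, whence
\begin{equation*}
\begin{aligned}
u(t) \leq \int_0^t e^{M_0 (t-s)} b(s) \;\rd s \leq e^{M_0 t} \int_0^t b(s) \;\rd s.
\end{aligned}
\end{equation*}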
In addition, by $w \in \mathcal{W}$ and Lemma <ref>, we can derive
\begin{equation*}
\begin{aligned}
\|\mu^*_{f}\|_{L^\infty_\xi W^{1,\infty}_x} \leq \;& \|\mu\|_{W^{1,\infty}_x} + \sup_{\xi \in [0,1]} \bigg| \int_0^1 w(\xi, \zeta) J_f(t,\zeta) \;\rd \zeta \bigg|
\\
\leq \;& \|\mu\|_{W^{1,\infty}_x} + \|w\|_{\mathcal{W}} \; \|J_f(t,\cdot)\|_{L^\infty}
\\
\leq \;& \|\mu\|_{W^{1,\infty}_x} + \|w\|_{\mathcal{W}} \; C (\alpha) \|\nu\|_{W^{1,\infty}} \|f\|_{L^\infty_\xi (H^{-1}_\eta)_x}.
\end{aligned}
\end{equation*}
When $f,g,\mathcal{L}f,\mathcal{L}g \in E_{R;t_2}$, by Lemma <ref>, we have that
\begin{equation*}
\begin{aligned}
\|f\|_{L^\infty_\xi (H^{-1}_\eta)_x} \leq \frac{R}{2}, \quad \|(\mathcal{L}g)\|_{L^\infty_\xi (H^{-1}_\eta)_x} \leq \frac{R}{2},
\end{aligned}
\end{equation*}
for $t \in [0,t_2]$.
Hence $M_0,\; M_1$ in (<ref>) can be chosen such that they only depend on $R$ and the regularity of the various fixed coefficients in the system. By choosing sufficiently small $t_* > 0$, for example,
\begin{equation*}
\begin{aligned}
t_* \leq \min\left(t_2,\ \frac{1}{3M_1},\ \frac{\log 2}{M_0}\right),
\end{aligned}
\end{equation*}
by (<ref>) we conclude that $\mathcal{L}$ is contracting on the set $E_{R;t_*}$ for the $L^2_\xi (H^{-1}_\eta)_x$ norm.
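Concretely, for $f, g \in E_{R;t_*}$, the constraints $t_* \leq 1/(3M_1)$ and $t_* \leq (\log 2)/M_0$ inserted into \eqref{eqn:Vlasov_Gronwall} give the explicit contraction factor
\begin{equation*}
\begin{aligned}
\sup_{t \in [0,t_*]} \| (\mathcal{L} f - \mathcal{L} g)(t,\cdot,\cdot)\|_{L^2_\xi (H^{-1}_\eta)_x}^2
\leq \;& e^{M_0 t_*}\, M_1 t_* \sup_{s \in [0,t_*]} \|(f - g)(s,\cdot,\cdot)\|_{L^2_\xi (H^{-1}_\eta)_x}^2
\\
\leq \;& \frac{2}{3} \sup_{s \in [0,t_*]} \|(f - g)(s,\cdot,\cdot)\|_{L^2_\xi (H^{-1}_\eta)_x}^2.
\end{aligned}
\end{equation*}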
Repeating the argument allows extending the weak solution to any finite time interval as usual, since the a priori estimates (<ref>) and (<ref>) do not blow up in finite time.
We now turn to the derivation of the limiting hierarchy.
Taking the time derivative of $\tau_\infty(T) = \tau_\infty(T,w,f)$ in Definition <ref>, we first obtain
\begin{equation*}
\begin{aligned}
&\partial_t \tau_\infty(T,w,f)(t,z) = \sum_{m=1}^{|T|} \bigg[ -\partial_{z_m}\Big( \mu(z_m) \tau_\infty(T)(t,z)\Big) + \frac{\sigma^2}{2} \partial_{z_m}^2 \tau_\infty(T)(t,z)
\\
&\qquad -\nu(z_m) \tau_\infty(T)(t,z) + \delta_0(z_m) \bigg( \int_{\R} \nu(u_m) \tau_\infty(T)(t,u) \;\rd u_m \bigg) \bigg|_{\forall n \neq m, u_n=z_n}
\\
& \qquad - \partial_{z_m} \bigg( \int_{[0,1]^{|T|}} w_T(\xi_1,\dots,\xi_{|T|}) f^{\otimes |T|}(t,z_1,\xi_1,\dots,z_{|T|},\xi_{|T|})
\\
& \qquad \bigg( \int_0^1 w(\xi_m,\xi_{|T|+1}) \int_{\R} \nu(z_{|T|+1}) f(t,z_{|T|+1},\xi_{|T|+1}) \;\rd z_{|T|+1} \rd \xi_{|T|+1} \bigg) \;\rd \xi_1 \cdots \rd \xi_{|T|} \bigg) \bigg].
\end{aligned}
\end{equation*}
The last term can be rewritten by using the observables with one more leaf, resulting in the limiting hierarchy (<ref>), restated here:
\begin{equation*}
\begin{aligned}
& \partial_t \tau_\infty (T)(t,z)
\\
&\quad= \sum_{m = 1}^{|T|} \Bigg\{ \bigg[ - \partial_{z_m}(\mu(z_m) \tau_\infty (T)(t,z)) + \frac{\sigma^2}{2} \partial_{z_m}^2 \tau_\infty (T)(t,z)
\\
&\qquad - \nu(z_m) \tau_\infty (T)(t,z) + \delta_0(z_m) \bigg( \int_{\R} \nu(u_m) \tau_\infty (T)(t,u) \;\rd u_m \bigg)\bigg|_{\forall n \neq m,\, u_n = z_n} \bigg]
\\
&\qquad - \partial_{z_m} \bigg[ \int_{\R} \nu(z_{|T|+1}) \tau_\infty (T+m)(t,z) \;\rd z_{|T|+1} \bigg] \Bigg\}.
\end{aligned}
\end{equation*}
§.§ Quantitative stability
This subsection focuses on the proof of the main quantitative estimate of the article. The technical Lemma <ref> about recursive differential inequalities is given separately in the next subsection.
For simplicity, let us recall the notation
\begin{equation*}
\begin{aligned}
\nu_m = 1 \otimes \dots \otimes \nu \otimes \dots \otimes 1,
\end{aligned}
\end{equation*}
where $\nu$ appears in the $m$-th coordinate, i.e. $\nu_m(z) = \nu(z_m)$. The same convention applies to $\mu$ and $\eta$.
Define the difference $\Delta_N (T) (t,z) \defeq \tau_N (T) (t,z) - \tau_\infty (T) (t,z)$.
By subtracting (<ref>) from (<ref>), one has that
\begin{equation*}
\begin{aligned}
& \partial_t \Delta_N (T)(t,z) \\
&\quad = \sum_{m = 1}^{|T|} \Bigg\{ \bigg[ - \partial_{z_m}(\mu(z_m) \Delta_N (T)(t,z)) + \frac{\sigma^2}{2} \partial_{z_m}^2 \Delta_N (T)(t,z) \\
&\qquad - \nu(z_m) \Delta_N (T)(t,z) + \delta_0(z_m) \bigg( \int_{\R} \nu(u_m) \Big( \Delta_N (T)(t,u) + \mathscr{R}_{N,T,m} (t,u) \Big) \;\rd u_m \bigg)\bigg|_{\forall n \neq m,\, u_n = z_n} \bigg] \\
&\qquad - \partial_{z_m} \bigg[ \int_{\R} \nu(z_{|T|+1}) \Big( \Delta_N (T+m)(t,z) + \mathscr{\tilde R}_{N,T+m,|T|+1} (t,z) \Big) \;\rd z_{|T|+1} \bigg] \Bigg\}, \quad \forall T \in \mathcal{T}.
\end{aligned}
\end{equation*}
We highlight that, for any fixed $N < \infty$, the above equalities and later inequalities involving $\Delta_N (T)$ can be understood as recursive relations that hold for all $T \in \mathcal{T}$.
At first glance, one may think that the approximate hierarchy (<ref>) is only defined for observables $\tau_N(T)$ with $|T| \leq N$.
Nevertheless, by our formal definition that $f_{N}^{i_1,\dots, i_k} \equiv 0$ if there are duplicated indices among $i_1,\dots,i_k$, it is easy to verify that for any tree $T$ such that $|T| > N$,
\begin{equation*}
\begin{aligned}
\tau_N (T,w_N,f_N)(t,z) \defeq \frac{1}{N} \sum_{i_1,\dots, i_{|T|} = 1}^N w_{N,T}(i_1,\dots, i_{|T|}) f_{N}^{i_1,\dots, i_{|T|}}(t, z_1,\dots,z_{|T|})
\equiv 0
\end{aligned}
\end{equation*}
since in each marginal there must be duplicated indices. By a similar discussion, we see that $\mathscr{R}_{N,T,m} \equiv 0$ and $\mathscr{\tilde R}_{N,T+m,|T|+1} \equiv 0$ when $|T| > N$. With this formal convention, it is then straightforward to show that the approximate hierarchy (<ref>) holds for all $T \in \mathcal{T}$.
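As a concrete illustration, take $N = 1$ and any tree with $|T| = 2$: the only index pair is $(i_1,i_2) = (1,1)$, which is duplicated, so $f_1^{1,1} \equiv 0$ and
\begin{equation*}
\begin{aligned}
\tau_1 (T,w_1,f_1)(t,z_1,z_2) = w_{1,T}(1,1)\, f_1^{1,1}(t,z_1,z_2) \equiv 0.
\end{aligned}
\end{equation*}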
Multiplying by the weight function $\eta^{\otimes |T|}$, we obtain that
\begin{equation*}
\begin{aligned}
& \Big( \partial_t \Delta_N (T)(t,z) \Big) \eta^{\otimes |T|}(z) = \sum_{m = 1}^{|T|} \Bigg\{ - \partial_{z_m}\Big(\mu_m \Delta_N (T) \eta^{\otimes |T|}\Big)(t,z) + \frac{\sigma^2}{2} \partial_{z_m}^2 \Big( \Delta_N (T) \eta^{\otimes |T|}\Big)(t,z) \\
&\qquad - \Big(\nu_m \Delta_N (T) \eta^{\otimes |T|}\Big)(t,z) + \Big( \mu_m (\eta_m'/\eta_m) \Delta_N (T) \eta^{\otimes |T|} \Big)(t,z)\\
&\qquad + \delta_0(z_m) \eta(z_m) \bigg( \int_{\R} \Big( (\nu_m/\eta_m) ( \Delta_N (T) + \mathscr{R}_{N,T,m} ) \eta^{\otimes |T|} \Big)(t,u) \;\rd u_m \bigg)\bigg|_{\forall n \neq m,\, u_n = z_n}
\\
&\qquad - \partial_{z_m} \bigg[ \int_{\R} \Big( (\nu_{|T|+1}/\eta_{|T|+1}) ( \Delta_N (T+m) + \mathscr{\tilde R}_{N,T+m,|T|+1} ) \eta^{\otimes |T|+1} \Big)(t,z) \;\rd z_{|T|+1} \bigg] \\
&\qquad + \frac{\sigma^2}{2} \bigg[ \partial_{z_m} \Big( -2(\eta_m'/\eta_m) \Delta_N (T) \eta^{\otimes |T|}\Big) + (\eta_m''/\eta_m) \Delta_N (T) \eta^{\otimes |T|}
\bigg](t,z) \Bigg\}.
\end{aligned}
\end{equation*}
Substituting $\big(\partial_t \Delta_N (T)\big) \eta^{\otimes |T|}$ into the right-hand side of
\begin{equation*}
\begin{aligned}
\;& \frac{\rd}{\rd t} \bigg( \frac{1}{2} \int_{\R^{|T|}} \Big( K^{\otimes |T|} \star \big(\Delta_N (T)\eta^{\otimes |T|}\big) (t,z)\Big)^2 \; \rd z \bigg)
\\
= \;& \int_{\R^{|T|}} \bigg(K^{\otimes |T|} \star \big(\Delta_N (T)\eta^{\otimes |T|}\big) (t,z)\bigg) \bigg(K^{\otimes |T|} \star \big( \partial_t \Delta_N (T)\eta^{\otimes |T|} \big) (t,z)\bigg) \; \rd z,
\end{aligned}
\end{equation*}
yields the lengthy expression
\begin{equation*}
\begin{aligned}
& \frac{\rd}{\rd t} \bigg( \frac{1}{2} \int_{\R^{|T|}} \Big( K^{\otimes |T|} \star \big(\Delta_N (T)\eta^{\otimes |T|}\big) (t,z)\Big)^2 \; \rd z \bigg)
\\
&\quad = \int_{\R^{|T|}} \sum_{m = 1}^{|T|} \Bigg\{ - \frac{\sigma^2}{2} \bigg[ \partial_{z_m} K^{\otimes |T|} \star \big( \Delta_N (T)\eta^{\otimes |T|} \big) (t,z) \bigg]^2
\\
&\qquad + \bigg[ K^{\otimes |T|} \star \big( \Delta_N (T)\eta^{\otimes |T|} \big) (t,z) \bigg] \bigg[- K^{\otimes |T|} \star \big( \nu_m \Delta_N (T)\eta^{\otimes |T|} \big)(t,z)
\\
&\qquad + K(z_m)\eta(0) \bigg( \int_{\R} K^{\otimes |T|} \star \big( (\nu_m/\eta_m) \Delta_N (T) \eta^{\otimes |T|} \big)(t,u) \;\rd u_m \bigg)\bigg|_{\forall n \neq m,\, u_n = z_n}
\\
&\qquad + K(z_m)\eta(0) \bigg( \int_{\R} K^{\otimes |T|} \star \big( (\nu_m/\eta_m) \mathscr{R}_{N,T,m} \eta^{\otimes |T|} \big) (t,u) \;\rd u_m \bigg)\bigg|_{\forall n \neq m,\, u_n = z_n}
\\
&\qquad + K^{\otimes |T|} \star \big( \mu_m (\eta_m'/\eta_m) \Delta_N (T) \eta^{\otimes |T|} \big)(t,z) + \frac{\sigma^2}{2} K^{\otimes |T|} \star \big( (\eta_m''/\eta_m) \Delta_N (T) \eta^{\otimes |T|} \big)(t,z)
\bigg]
\\
&\qquad + \bigg[ \partial_{z_m} K^{\otimes |T|} \star \big( \Delta_N (T) \eta^{\otimes |T|} \big) (t,z) \bigg] \bigg[ K^{\otimes |T|} \star \big( \mu_m \Delta_N (T) \eta^{\otimes |T|} \big)(t,z)
\\
&\qquad + \int_{\R} K^{\otimes |T|+1} \star \big( (\nu_{|T|+1}/\eta_{|T|+1}) \Delta_N (T+m) \eta^{\otimes |T|+1} \big) (t,z) \;\rd z_{|T|+1}
\\
&\qquad + \int_{\R} K^{\otimes |T|+1} \star \big( (\nu_{|T|+1}/\eta_{|T|+1}) \mathscr{\tilde R}_{N,T+m,|T|+1} \eta^{\otimes |T|+1} \big) (t,z) \;\rd z_{|T|+1}
\\
&\qquad + \frac{\sigma^2}{2} K^{\otimes |T|} \star \big( 2(\eta_m'/\eta_m) \Delta_N (T) \eta^{\otimes |T|} \big)(t,z)
\bigg] \Bigg\}
\; \rd z.
\end{aligned}
\end{equation*}
We then apply the Cauchy-Schwarz inequality to obtain
\begin{equation} \label{eqn:energy_Cauchy_Schwartz}
\begin{aligned}
& \frac{\rd}{\rd t} \bigg( \frac{1}{2} \|\Delta_N (T)\|_{H^{-1 \otimes |T|}_\eta}^2 \bigg)
\leq \sum_{m = 1}^{|T|} \Bigg\{ \Big(2 + \frac{\sigma^2}{4}\Big) \|\Delta_N (T)\|_{H^{-1 \otimes |T|}_\eta}^2 + \frac{1}{2} \|\nu_m \Delta_N (T)\|_{H^{-1 \otimes |T|}_\eta}^2
\\
&\quad + \frac{1}{2}\|\mu_m (\eta_m'/\eta_m) \Delta_N (T)\|_{H^{-1 \otimes |T|}_\eta}^2 + \frac{\sigma^2}{4} \|(\eta_m''/\eta_m)\Delta_N (T)\|_{H^{-1 \otimes |T|}_\eta}^2
\\
&\quad + \frac{1}{2} \|K\|_{L^2}^2 \eta(0)^2 \int_{\R^{|T|-1}} \bigg( \int_{\R} K^{\otimes |T|} \star \big( (\nu_m/\eta_m) \Delta_N (T) \eta^{\otimes |T|} \big)(t,z) \;\rd z_m \bigg)^2 \prod_{n \neq m} \;\rd z_n
\\
&\quad + \frac{1}{2} \|K\|_{L^2}^2 \eta(0)^2 \int_{\R^{|T|-1}} \bigg( \int_{\R} K^{\otimes |T|} \star \big( (\nu_m/\eta_m) \mathscr{R}_{N,T,m} \eta^{\otimes |T|} \big)(t,z) \;\rd z_m \bigg)^2 \prod_{n \neq m} \;\rd z_n
\\
&\quad + \frac{2}{\sigma^2} \|\mu_m \Delta_N (T)\|_{H^{-1 \otimes |T|}_\eta}^2 + \frac{\sigma^2}{2} \|2(\eta_m'/\eta_m) \Delta_N (T)\|_{H^{-1 \otimes |T|}_\eta}^2
\\
&\quad + \frac{2}{\sigma^2} \int_{\R^{|T|}} \bigg( \int_{\R} K^{\otimes |T|+1} \star \big( (\nu_{|T|+1}/\eta_{|T|+1}) \Delta_N (T+m) \eta^{\otimes |T|+1} \big) (t,z) \;\rd z_{|T|+1} \bigg)^2 \prod_{n=1}^{|T|} \;\rd z_n
\\
&\quad + \frac{2}{\sigma^2} \int_{\R^{|T|}} \bigg( \int_{\R} K^{\otimes |T|+1} \star \big( (\nu_{|T|+1}/\eta_{|T|+1}) \mathscr{\tilde R}_{N,T+m,|T|+1} \eta^{\otimes |T|+1} \big) (t,z) \;\rd z_{|T|+1} \bigg)^2 \prod_{n=1}^{|T|} \;\rd z_n \Bigg\}.
\end{aligned}
\end{equation}
This is where the proper choice of weak distance becomes critical, as we need to bound the various terms in the right-hand side by the norm $ \|\Delta_N (T)\|_{H^{-1 \otimes |T|}_\eta}^2$. The commutator estimate in Lemma <ref> directly bounds all the terms with an explicit $H^{-1 \otimes |T|}_\eta$-norm, since the coefficients $\mu,\nu$ are $W^{1,\infty}$ and $\eta$ is smooth. For example,
\[
\|\nu_m \Delta_N (T)\|_{H^{-1 \otimes |T|}_\eta}^2\leq 4\,\|\nu\|_{W^{1,\infty}(\R)}^2\,\|\Delta_N (T)\|_{H^{-1 \otimes |T|}_\eta}^2.
\]
This leads to the simplified expression for some constant $\tilde C_0$,
\begin{equation} \label{eqn:energy_Cauchy_Schwartz2}
\begin{aligned}
& \frac{\rd}{\rd t} \bigg( \frac{1}{2} \|\Delta_N (T)\|_{H^{-1 \otimes |T|}_\eta}^2 \bigg)
\leq \sum_{m = 1}^{|T|} \Bigg\{ \tilde C_0\, \|\Delta_N (T)\|_{H^{-1 \otimes |T|}_\eta}^2
\\
&\quad + \frac{1}{2} \|K\|_{L^2}^2 \eta(0)^2 \int_{\R^{|T|-1}} \bigg( \int_{\R} K^{\otimes |T|} \star \big( (\nu_m/\eta_m) \Delta_N (T) \eta^{\otimes |T|} \big)(t,z) \;\rd z_m \bigg)^2 \prod_{n \neq m} \;\rd z_n
\\
&\quad + \frac{1}{2} \|K\|_{L^2}^2 \eta(0)^2 \int_{\R^{|T|-1}} \bigg( \int_{\R} K^{\otimes |T|} \star \big( (\nu_m/\eta_m) \mathscr{R}_{N,T,m} \eta^{\otimes |T|} \big)(t,z) \;\rd z_m \bigg)^2 \prod_{n \neq m} \;\rd z_n
\\
&\quad + \frac{2}{\sigma^2} \int_{\R^{|T|}} \bigg( \int_{\R} K^{\otimes |T|+1} \star \big( (\nu_{|T|+1}/\eta_{|T|+1}) \Delta_N (T+m) \eta^{\otimes |T|+1} \big) (t,z) \;\rd z_{|T|+1} \bigg)^2 \prod_{n=1}^{|T|} \;\rd z_n
\\
&\quad + \frac{2}{\sigma^2} \int_{\R^{|T|}} \bigg( \int_{\R} K^{\otimes |T|+1} \star \big( (\nu_{|T|+1}/\eta_{|T|+1}) \mathscr{\tilde R}_{N,T+m,|T|+1} \eta^{\otimes |T|+1} \big) (t,z) \;\rd z_{|T|+1} \bigg)^2 \prod_{n=1}^{|T|} \;\rd z_n \Bigg\}.
\end{aligned}
\end{equation}
The remaining integral terms in (<ref>) can be bounded by first applying Lemma <ref> and then Proposition <ref>.
For example, consider the first remainder term and write, by Lemma <ref>,
\begin{equation*}
\begin{aligned}
& \int_{\R^{|T|-1}} \bigg( \int_{\R} K^{\otimes |T|} \star \big( (\nu_m/\eta_m) \mathscr{R}_{N,T,m} \eta^{\otimes |T|} \big)(t,z) \;\rd z_m \bigg)^2 \prod_{n \neq m} \;\rd z_n
\\
&\qquad \leq C (\alpha)^2 \|\nu\|_{W^{1,\infty}}^2 \|\mathscr{R}_{N,T,m}\|_{H^{-1 \otimes |T|}_\eta}^2.
\end{aligned}
\end{equation*}
Next, apply Proposition <ref> to the right-hand side to conclude that
\begin{equation*}
\begin{aligned}
& \int_{\R^{|T|-1}} \bigg( \int_{\R} K^{\otimes |T|} \star \big( (\nu_m/\eta_m) \mathscr{R}_{N,T,m} \eta^{\otimes |T|} \big)(t,z) \;\rd z_m \bigg)^2 \prod_{n \neq m} \;\rd z_n
\\
&\qquad\leq C (\alpha)^2 \|\nu\|_{W^{1,\infty}}^2 \big[ \exp \big( (2 + 2\alpha) c(w,|T|) \big) - 1 \big] \||\tau_N|(T)\|_{H^{-1 \otimes |T|}_\eta}^2.
\end{aligned}
\end{equation*}
The same method applies to the other integral terms in (<ref>), which yields
\begin{equation*}
\begin{aligned}
& \int_{\R^{|T|-1}} \bigg( \int_{\R} K^{\otimes |T|} \star \big( (\nu_m/\eta_m) \Delta_N (T) \eta^{\otimes |T|} \big)(t,z) \;\rd z_m \bigg)^2 \prod_{n \neq m} \;\rd z_n
\\
&\qquad \leq C (\alpha)^2 \|\nu\|_{W^{1,\infty}}^2 \|\Delta_N (T)\|_{H^{-1 \otimes |T|}_\eta}^2,
\\
& \int_{\R^{|T|}} \bigg( \int_{\R} K^{\otimes |T|+1} \star \big( (\nu_{|T|+1}/\eta_{|T|+1}) \Delta_N (T+m) \eta^{\otimes |T|+1} \big) (t,z) \;\rd z_{|T|+1} \bigg)^2 \prod_{n=1}^{|T|} \;\rd z_n
\\
&\qquad \leq C (\alpha)^2 \|\nu\|_{W^{1,\infty}}^2 \|\Delta_N (T+m)\|_{H^{-1 \otimes (|T|+1)}_\eta}^2,
\end{aligned}
\end{equation*}
together with
\begin{equation*}
\begin{aligned}
& \int_{\R^{|T|}} \bigg( \int_{\R} K^{\otimes |T|+1} \star \big( (\nu_{|T|+1}/\eta_{|T|+1}) \mathscr{\tilde R}_{N,T+m,|T|+1} \eta^{\otimes |T|+1} \big) (t,z) \;\rd z_{|T|+1} \bigg)^2 \prod_{n=1}^{|T|} \;\rd z_n
\\
&\qquad \leq C (\alpha)^2 \|\nu\|_{W^{1,\infty}}^2 \big[ \exp \big( (2 + 2\alpha) c(w,|T|) \big) - 1 \big] \||\tau_N|(T)\|_{H^{-1 \otimes |T|}_\eta}^2.
\end{aligned}
\end{equation*}
Inserting those bounds into the energy estimate (<ref>), we obtain a recursive differential inequality: for all $T \in \mathcal{T}$,
\begin{equation} \label{eqn:differential_inequality_hierarchy}
\begin{aligned}
& \frac{\rd}{\rd t} \|\Delta_N (T) (t,\cdot) \|_{H^{-1 \otimes |T|}_\eta}^2
\leq \sum_{m = 1}^{|T|} \Bigg\{ \tilde C_0 \| \Delta_N (T) (t,\cdot) \|_{H^{-1 \otimes |T|}_\eta}^2 + \tilde C_1 \| \Delta_N (T+m) (t,\cdot) \|_{H^{-1 \otimes (|T|+1)}_\eta}^2
\\
& \qquad + \varepsilon_0(T) \| |\tau_N| (T) (t,\cdot) \|_{H^{-1 \otimes |T|}_\eta}^2 + \varepsilon_1(T) \| |\tau_N| (T+m) (t,\cdot) \|_{H^{-1 \otimes (|T|+1)}_\eta}^2 \Bigg\},
\end{aligned}
\end{equation}
where we can even provide the explicit expressions for the constants
\begin{equation*}
\begin{aligned}
\;& \tilde C_0 = 4 + \frac{\sigma^2}{2} + 4\bigg(\|\nu\|_{W^{1,\infty}}^2 + \|\mu(\eta'/\eta)\|_{W^{1,\infty}}^2 + \frac{\sigma^2}{2} \|\eta''/\eta\|_{W^{1,\infty}}^2 + \frac{4}{\sigma^2} \|\mu\|_{W^{1,\infty}}^2 + 2\sigma^2 \|(\eta'/\eta)\|_{W^{1,\infty}}^2 \bigg)
\\
\;& \quad\quad + \|K\|_{L^2}^2 \eta(0)^2 C (\alpha)^2 \|\nu\|_{W^{1,\infty}}^2,
\\
\;& \tilde C_1 = \frac{4C(\alpha)^2}{\sigma^2} \|\nu\|_{W^{1,\infty}}^2,
\\
\;& \varepsilon_0(T) = \|K\|_{L^2}^2 \eta(0)^2 C (\alpha)^2 \|\nu\|_{W^{1,\infty}}^2 \big[ \exp \big( (2 + 2\alpha) c(w,|T|) \big) - 1 \big],
\\
\;& \varepsilon_1(T) = \frac{4 C (\alpha)^2}{\sigma^2} \|\nu\|_{W^{1,\infty}}^2 \big[ \exp \big( (2 + 2\alpha) c(w,|T|) \big) - 1 \big].
\end{aligned}
\end{equation*}
We can now restrict the recursion relations by truncating them at any given depth $n \geq 1$, meaning that we only consider the inequalities (<ref>) for all $T \in \mathcal{T}$ such that $|T| \leq n - 1$.
In such a case, since
\begin{equation*}
\begin{aligned}
c(w,|T|) \leq |T| \big(\max_{i,j}|w_{i,j;N}| \big) \leq n \bar w_N,
\end{aligned}
\end{equation*}
the coefficients $\varepsilon_0(T)$, $\varepsilon_1(T)$ can be replaced by the uniform bounds
\begin{equation*}
\begin{aligned}
\;& \varepsilon_0(n) = \|K\|_{L^2}^2 \eta(0)^2 C (\alpha)^2 \|\nu\|_{W^{1,\infty}}^2 \big[ \exp \big( (2 + 2\alpha) n \bar w_N \big) - 1 \big],
\\
\;& \varepsilon_1(n) = \frac{4 C (\alpha)^2}{\sigma^2} \|\nu\|_{W^{1,\infty}}^2 \big[ \exp \big( (2 + 2\alpha) n \bar w_N \big) - 1 \big].
\end{aligned}
\end{equation*}
For a fixed depth $n \geq 1$, $\varepsilon_0(n)$ and $\varepsilon_1(n)$ now vanish as $\bar w_N \to 0$.
Let us now rescale the energy inequality through some $\lambda^{|T|}$ factor: For all $T \in \mathcal{T}$ such that $|T| \leq n - 1$,
\begin{equation} \label{eqn:recursive_energy_estimate}
\begin{aligned}
& \frac{\rd}{\rd t} \lambda^{|T|} \|\Delta_N (T) (t,\cdot) \|_{H^{-1 \otimes |T|}_\eta}^2
\\
&\quad \leq \sum_{m = 1}^{|T|} \Bigg\{ \tilde C_0 \lambda^{|T|} \| \Delta_N (T) (t,\cdot) \|_{H^{-1 \otimes |T|}_\eta}^2 + (\tilde C_1/\lambda) \lambda^{|T| + 1} \| \Delta_N (T+m) (t,\cdot) \|_{H^{-1 \otimes (|T|+1)}_\eta}^2
\\
&\qquad + \varepsilon_0(n) \lambda^{|T|} \| |\tau_N| (T) (t,\cdot) \|_{H^{-1 \otimes |T|}_\eta}^2 + (\varepsilon_1(n)/\lambda) \lambda^{|T| + 1} \| |\tau_N| (T+m) (t,\cdot) \|_{H^{-1 \otimes (|T|+1)}_\eta}^2 \Bigg\}.
\end{aligned}
\end{equation}
We also recall the a priori bound (<ref>) for $\tau_N,\tau_\infty$ assumed in Theorem <ref>:
\begin{equation*}
\sup_{t\leq t_*} \ \max_{|T| \leq \max(n,\ |T_*|)} \lambda^{\frac{|T|}{2}}\, \Big( \||\tau_N|(T,w_N,f_N)(t,\cdot)\|_{H^{-1\otimes |T|}_\eta} + \|\tau_\infty(T)(t,\cdot)\|_{H^{-1\otimes |T|}_\eta} \Big) \leq C_{\lambda;\eta},
\end{equation*}
where $T_* \in \mathcal{T}$ is the tree index in the final estimate (<ref>).
By the triangle inequality, this implies the following uniform bound on $\Delta_N$,
\begin{equation} \label{eqn:uniform_energy_bound}
\begin{aligned}
\sup_{t\leq t_*} \ \max_{|T| \leq \max(n,\ |T_*|)} \lambda^{|T|} \|\Delta_N (T) (t,\cdot) \|_{H^{-1\otimes |T|}_\eta}^2 \leq C_{\lambda;\eta}^2.
\end{aligned}
\end{equation}
Let us now define
\begin{equation*}
\begin{aligned}
M_k(t) = \;& \max_{|T| \leq k} \lambda^{|T|} \|\Delta_N (T) (t,\cdot) \|_{H^{-1\otimes |T|}_\eta}^2,
\\
C = \;& \tilde C_0 + \tilde C_1/\lambda,
\\
\varepsilon = \;& \big[ \varepsilon_0(n) + \varepsilon_1(n)/\lambda \big] C_{\lambda;\eta}^2,
\\
L = \;& C_{\lambda;\eta}^2,
\\
n = \;& n, \quad n' = |T_*|,
\end{aligned}
\end{equation*}
so that (<ref>) and (<ref>) can be summarized as follows,
\begin{align} \label{eqn:differential_inequality_recursion}
\frac{\rd}{\rd t} M_k(t)
\leq \;& k \Big( C M_{k+1}(t) + \varepsilon \Big), && \forall 1 \leq k \leq n-1,
\\ \label{eqn:bound_inequality_recursion}
M_k(t) \leq \;& L, && \forall 1 \leq k \leq \max(n,\ n'), \; t \in [0,t_*].
\end{align}
We now invoke the following result.
Consider a sequence of non-negative functions $(M_k(t))_{k=1}^\infty$ on $t \in [0,t_*]$ that satisfies the inequalities
(<ref>)-(<ref>) with $\big[ \varepsilon/CL + (2\theta)^n \big] \leq 1$. Then the estimate
\begin{equation} \label{eqn:convergence_inequality_hierarchy}
\begin{aligned}
\max_{1 \leq k \leq \max(n , \ n')} \big[ \theta^k M_k(t) \big]
\leq \;& L (Ct/\theta + 2) \,\max\left( \big[\varepsilon/CL + (2\theta)^n \big],\ \max_{1 \leq k \leq n - 1} \big[ \theta^k M_k(0) \big] / L \right)^{\frac{1}{p^{(Ct/\theta + 1)}}},
\end{aligned}
\end{equation}
holds for any $1 < p < \infty$, $0 < \theta < 2^{-p'}$ where $1/p + 1/p' = 1$, and any $t \in [0,t_*]$.
Assume for the time being that Lemma <ref> holds and apply it to (<ref>) and (<ref>).
Choose $p = 2$, $\theta = 1/8$ and substitute $\varepsilon, C, L$ by their explicit expressions to find that
\begin{equation*}
\begin{aligned}
\varepsilon/CL = \frac{\varepsilon_0(n) + \varepsilon_1(n)/\lambda}{\tilde C_0 + \tilde C_1/\lambda} = C_1 \big[ \exp \big( (2 + 2\alpha) n \bar w_N \big) - 1 \big],
\end{aligned}
\end{equation*}
where $C_1$ depends only on $\lambda$, the $W^{1,\infty}$-regularity of the coefficients $\mu$, $\nu$ and the constant $\sigma > 0$ in (<ref>), but neither on $\bar w_N$ nor on $n$. Setting $C_0 = C/\theta$ and using that $\bar w_N \to 0$ as $N \to \infty$, we deduce that for $N$ large enough
\begin{equation*}
\begin{aligned}
\bar{\varepsilon} = \varepsilon/CL + (2\theta)^n = C_1 \big[ \exp \big( (2 + 2\alpha) n \bar w_N \big) - 1 \big] + (1/4)^n \leq 1.
\end{aligned}
\end{equation*}
The conclusion of Lemma <ref> hence holds, showing that
\begin{equation*}
\begin{aligned}
& \max_{|T| \leq \max(n , \ |T_*|)} (\lambda / 8)^{|T|} \| \tau_N(T,w_N,f_N)(t,\cdot) - \tau_\infty(T)(t,\cdot) \|_{H^{-1\otimes |T|}_\eta}^2
\\
&\quad \leq C_{\lambda;\eta}^2\, \Big( C_0 t + 2 \Big)\,\max \left( \bar{\varepsilon},\ \max_{|T| \leq n-1} (\lambda / 8)^{|T|} \| \tau_N(T,w_N,f_N)(0,\cdot) - \tau_\infty(T)(0,\cdot) \|_{H^{-1\otimes |T|}_\eta}^2 / C_{\lambda;\eta}^2 \right)^{\frac{1}{2^{(C_0 t + 1)}}}.
\end{aligned}
\end{equation*}
This can be further simplified to (<ref>) by keeping only the term $T = T_*$ on the left-hand side, taking the maximum on the right-hand side over $|T| \leq \max(n,\ |T_*|)$, and choosing $C_2$ in $\eqref{eqn:stable_power_norm}$ as $C_2 = \max\big(C_0 t + 2, \ 2^{(C_0 t + 1)} \big)$.
§.§ Proof of Lemma <ref>
Let us restate here the recursive differential inequality (<ref>),
\begin{equation*}
\begin{aligned}
\frac{\rd}{\rd t} M_k(t)
\leq \;& k \Big( C M_{k+1}(t) + \varepsilon \Big), && \forall 1 \leq k \leq n-1,
\end{aligned}
\end{equation*}
which directly yields
\begin{equation*}
\begin{aligned}
\frac{\rd}{\rd t} \Big( M_k(t) + (\varepsilon/C) \Big)
\leq k C \Big( M_{k+1}(t) + (\varepsilon/C) \Big), && \forall 1 \leq k \leq n-1.
\end{aligned}
\end{equation*}
For any $1 \leq k \leq n - 1$ and $t \in [0,t_*]$, by inductively integrating the inequalities in time, we obtain that
\begin{equation*}
\begin{aligned}
\Big( M_k(t) + (\varepsilon/C) \Big) \leq \;&
C^{n - k} \int_s^t \begin{pmatrix} n-1 \\ k-1 \end{pmatrix} (n-k) (t - r)^{n-k-1} \Big( M_n(r) + (\varepsilon/C) \Big) \;\rd r
\\
\;& + \sum_{l = k}^{n-1} C^{l - k} \begin{pmatrix} l-1 \\ k-1 \end{pmatrix} (t - s)^{l-k} \Big( M_l(s) + (\varepsilon/C) \Big).
\end{aligned}
\end{equation*}
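As a sanity check, the case $k = n-1$ is exactly one integration of the differential inequality over $[s,t]$:
\begin{equation*}
\begin{aligned}
M_{n-1}(t) + (\varepsilon/C) \leq \Big( M_{n-1}(s) + (\varepsilon/C) \Big) + (n-1) C \int_s^t \Big( M_n(r) + (\varepsilon/C) \Big) \;\rd r,
\end{aligned}
\end{equation*}
matching the integral term with $\binom{n-1}{n-2} = n-1$ and the single $l = n-1$ summand.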
We estimate the increase on $M_k$ within time steps of size
\begin{equation*}
\begin{aligned}
t-s = \theta / C.
\end{aligned}
\end{equation*}
First, we bound the constant terms,
\begin{equation*}
\begin{aligned}
& C^{n - k} \int_s^t \begin{pmatrix} n-1 \\ k-1 \end{pmatrix} (n-k) (t - r)^{n-k-1} (\varepsilon/C) \;\rd r
+ \sum_{l = k}^{n-1} C^{l - k} \begin{pmatrix} l-1 \\ k-1 \end{pmatrix} (t - s)^{l-k} (\varepsilon/C)
\\
&\quad = (\varepsilon/C) \bigg\{ C^{n - k} \begin{pmatrix} n-1 \\ k-1 \end{pmatrix} (t - s)^{n-k}
+ \sum_{l = k}^{n-1} C^{l - k} \begin{pmatrix} l-1 \\ k-1 \end{pmatrix} (t - s)^{l-k} \bigg\}
\\
&\quad = (\varepsilon/C) \sum_{l = k}^n C^{l - k} \begin{pmatrix} l-1 \\ k-1 \end{pmatrix} (t - s)^{l-k}
\\
&\quad \leq \theta^{-k} (\varepsilon/C) \sum_{l = k}^n \begin{pmatrix} l-1 \\ k-1 \end{pmatrix} \theta^l,
\end{aligned}
\end{equation*}
where the last inequality uses our choice of time step $(t-s) \leq \theta / C$.
On the other hand, for $\theta \leq 1/2$,
\begin{equation*}
\begin{aligned}
\sum_{l = k}^{\infty} \begin{pmatrix} l-1 \\ k-1 \end{pmatrix} \theta^l = \frac{1}{(\theta^{-1} - 1)^{k}} \leq 1.
\end{aligned}
\end{equation*}
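The identity follows from the negative binomial series after shifting the index $l = k + j$:
\begin{equation*}
\begin{aligned}
\sum_{l = k}^{\infty} \begin{pmatrix} l-1 \\ k-1 \end{pmatrix} \theta^l
= \theta^k \sum_{j = 0}^{\infty} \begin{pmatrix} k-1+j \\ k-1 \end{pmatrix} \theta^j
= \frac{\theta^k}{(1 - \theta)^{k}}
= \frac{1}{(\theta^{-1} - 1)^{k}},
\end{aligned}
\end{equation*}
and $\theta^{-1} - 1 \geq 1$ precisely when $\theta \leq 1/2$.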
Hence,
\[
C^{n - k} \int_s^t \begin{pmatrix} n-1 \\ k-1 \end{pmatrix} (n-k) (t - r)^{n-k-1} (\varepsilon/C) \;\rd r
+ \sum_{l = k}^{n-1} C^{l - k} \begin{pmatrix} l-1 \\ k-1 \end{pmatrix} (t - s)^{l-k} (\varepsilon/C) \leq \theta^{-k} \,\frac{\varepsilon}{C}.
\]
We now turn to the terms involving $M_l(s)$ and $M_n(r)$ (with $s \leq r \leq t$).
For $M_n(r)$ we have no choice but to take
\begin{equation*}
\begin{aligned}
M_n(r) \leq L.
\end{aligned}
\end{equation*}
But for $M_l(s)$, $k \leq l \leq n-1$, we have
\begin{equation*}
\begin{aligned}
M_l(s) \leq \min\left(L,\ \max_{1 \leq m \leq n-1} \big[ \theta^m M_m(s) \big] \theta^{-l}\right),
\end{aligned}
\end{equation*}
as well as any geometric average of the two bounds. Choosing exponents with $\frac{1}{p} + \frac{1}{p'} = 1$, we obtain
\begin{equation*}
\begin{aligned}
M_l(s) \leq \;& L^{\frac{1}{p'}} \Big( \max_{1 \leq m \leq n-1} \big[ \theta^m M_m(s) \big] \theta^{-l} \Big)^{\frac{1}{p}}
\\
= \;& L^{\frac{1}{p'}} \max_{1 \leq m \leq n-1} \big[ \theta^m M_m(s) \big]^{\frac{1}{p}} \big( \theta^{\frac{1}{p}} \big)^{-l}.
\end{aligned}
\end{equation*}
Then we may write
\begin{equation*}
\begin{aligned}
& C^{n - k} \int_s^t \begin{pmatrix} n-1 \\ k-1 \end{pmatrix} \frac{(t - r)^{n-k-1}}{n-k-1} M_n(r) \;\rd r
+ \sum_{l = k}^{n-1} C^{l - k} \begin{pmatrix} l-1 \\ k-1 \end{pmatrix} (t - s)^{l-k} M_l(s)
\\
&\quad\leq C^{n - k} \begin{pmatrix} n-1 \\ k-1 \end{pmatrix} (t - s)^{n-k} L
+ \sum_{l = k}^{n-1} C^{l - k} \begin{pmatrix} l-1 \\ k-1 \end{pmatrix} (t - s)^{l-k} L^{\frac{1}{p'}} \max_{1 \leq m \leq n-1} \big[ \theta^m M_m(s) \big]^{\frac{1}{p}} \big( \theta^{\frac{1}{p}} \big)^{-l}
\\
&\quad \leq \theta^{-k} \bigg\{ L \begin{pmatrix} n-1 \\ k-1 \end{pmatrix} \theta^n + L^{\frac{1}{p'}} \max_{1 \leq m \leq n-1} \big[ \theta^m M_m(s) \big]^{\frac{1}{p}} \sum_{l = k}^{n-1} \begin{pmatrix} l-1 \\ k-1 \end{pmatrix} \big( \theta^{\frac{1}{p'}} \big)^{l} \bigg\},
\end{aligned}
\end{equation*}
where we again use our choice of time step $(t-s) \leq \theta / C$ in the last inequality.
Observe that
\begin{equation*}
\begin{aligned}
\begin{pmatrix} n-1 \\ k-1 \end{pmatrix} \theta^n \leq 2^{n-1} \theta^n, \quad \sum_{l = k}^{n-1} \begin{pmatrix} l-1 \\ k-1 \end{pmatrix} \big( \theta^{\frac{1}{p'}} \big)^{l} \leq \frac{1}{(\theta^{-\frac{1}{p'}} - 1)^{k}} \leq 1
\end{aligned}
\end{equation*}
when choosing $\theta^{\frac{1}{p'}} \leq 1/2$, so that
\[\begin{split}
&C^{n - k} \int_s^t \begin{pmatrix} n-1 \\ k-1 \end{pmatrix} \frac{(t - r)^{n-k-1}}{n-k-1} M_n(r) \;\rd r
+ \sum_{l = k}^{n-1} C^{l - k} \begin{pmatrix} l-1 \\ k-1 \end{pmatrix} (t - s)^{l-k} M_l(s)\\
&\qquad\leq \theta^{-k}\,\left(L (2\theta)^n + L^{\frac{1}{p'}} \max_{1 \leq m \leq n - 1} \big[ \theta^m M_m(s) \big]^{\frac{1}{p}} \right).
\end{split}\]
Combining those bounds, provided that $\theta^{\frac{1}{p'}} \leq 1/2$, we have that, for all $1 \leq k \leq n-1$,
\begin{equation*}
\begin{aligned}
M_k(t) \leq \;& \theta^{-k} \bigg\{(\varepsilon/C) + L (2\theta)^n + L^{\frac{1}{p'}} \max_{1 \leq m \leq n - 1} \big[ \theta^m M_m(s) \big]^{\frac{1}{p}} \bigg\}.
\end{aligned}
\end{equation*}
On the other hand, for $n \leq k \leq \max(n,\ n')$, we simply have $M_k(t) \leq L$. As $\theta^{-k + n} \geq 1$,
\begin{equation*}
\begin{aligned}
M_k(t) \leq L \leq \theta^{-k} \bigg\{L (2\theta)^n \bigg\},
\end{aligned}
\end{equation*}
and we can combine the two cases to obtain that
\begin{equation*}
\begin{aligned}
\max_{1 \leq k \leq \max(n,\ n')} \big[ \theta^k M_k(t) \big] \leq \;& (\varepsilon/C) + L (2\theta)^n + L^{\frac{1}{p'}} \max_{1 \leq k \leq n-1} \big[ \theta^k M_k(s) \big]^{\frac{1}{p}}.
\end{aligned}
\end{equation*}
If $t \leq \theta/C$ we are done; otherwise we need to iterate the bound over successive time steps. Denote $t_j = j\,\theta/C$. Applying the previous bound on each interval $[t_{j-1}, t_j]$, we have that
\begin{equation*}
\begin{aligned}
& \max_{1 \leq k \leq \max(n,\ n')} \big[ \theta^k M_k(t_j) \big]\\
&\quad \leq (\varepsilon/C) + L (2\theta)^n + L^{\frac{1}{p'}} \max_{1 \leq k \leq n-1} \big[ \theta^k M_k(t_{j-1}) \big]^{\frac{1}{p}}\\
&\quad \leq (\varepsilon/C) + L (2\theta)^n + L^{\frac{1}{p'}} \bigg\{ (\varepsilon/C) + L (2\theta)^n + L^{\frac{1}{p'}} \max_{1 \leq k \leq n-1} \big[ \theta^k M_k(t_{j-2}) \big]^{\frac{1}{p}} \bigg\}^{\frac{1}{p}}\\
&\quad \leq (\varepsilon/C) + L (2\theta)^n + L^{\frac{1}{p'}} \bigg\{ (\varepsilon/C) + L (2\theta)^n \bigg\}^{\frac{1}{p}} + L^{1-\frac{1}{p^2}} \max_{1 \leq k \leq n-1} \big[ \theta^k M_k(t_{j-2}) \big]^{\frac{1}{p^2}}\\ &\qquad \dots \\
&\quad \leq \sum_{i=0}^{j-1} L^{1-\frac{1}{p^i}} \bigg\{ (\varepsilon/C) + L (2\theta)^n \bigg\}^{\frac{1}{p^i}} \; + \; L^{1-\frac{1}{p^j}} \max_{1 \leq k \leq n-1} \big[ \theta^k M_k(0) \big]^{\frac{1}{p^j}},
\end{aligned}
\end{equation*}
where we use that $(a+b)^{1/p} \leq a^{1/p} + b^{1/p}$ by concavity.
For any $t\geq 0$, we hence have with $j(t)=\bigg\lfloor \frac{C t}{\theta} \bigg\rfloor + 1$,
\begin{equation} \label{eqn:convergence_inequality_hierarchy_tight}
\begin{aligned}
\max_{1 \leq k \leq \max(n, \ n')} \big[ \theta^k M_k(t) \big] \leq \;& \sum_{i=0}^{j(t)-1} L^{1-\frac{1}{p^i}} \bigg\{ \varepsilon/C + L (2\theta)^n \bigg\}^{\frac{1}{p^i}} \; + \; L^{1-\frac{1}{p^{j(t)}}} \max_{1 \leq k \leq n-1} \big[ \theta^k M_k(0) \big]^{\frac{1}{p^{j(t)}}}.
\end{aligned}
\end{equation}
Finally, by the assumption that $\big[ \varepsilon/CL + (2\theta)^n \big] \leq 1$,
\begin{equation*}
\begin{aligned}
\forall i \leq j, \quad L^{1-\frac{1}{p^i}} \bigg\{ \varepsilon/C + L (2\theta)^n \bigg\}^{\frac{1}{p^i}} = L \bigg\{ \varepsilon/CL + (2\theta)^n \bigg\}^{\frac{1}{p^i}} \leq L \bigg\{ \varepsilon/CL + (2\theta)^n \bigg\}^{\frac{1}{p^j}}.
\end{aligned}
\end{equation*}
Hence we can replace every $i$ and every $j(t)$ in (<ref>) by $(Ct/\theta + 1)$, which gives the looser bound (<ref>), restated here
\begin{equation*}
\begin{aligned}
\max_{1 \leq k \leq \max(n, \ n')} \big[ \theta^k M_k(t) \big]
\leq \;& L (Ct/\theta + 2) \max\bigg( \big[\varepsilon/CL + (2\theta)^n \big] , \sup_{1 \leq k \leq n-1} \big[ \theta^k M_k(0) \big] / L \bigg)^{\frac{1}{p^{(Ct/\theta + 1)}}}.
\end{aligned}
\end{equation*}
The discrete indices are extended to $[0,1]$ in the following way:
For any $N$-dim vector $v_N = (v_{1;N},\dots,v_{N;N})^{\top}$ and $N \times N$ matrix $w_N \defeq (w_{i,j;N})_{i,j =1}^N$, consider the extension to piecewise constant function and kernel over $[0,1]$ as
\begin{equation*}
\begin{aligned}
\tilde v_N(\xi) \defeq \;& \sum_{i = 1}^N v_{i;N} \mathbbm{1}_{[\frac{i-1}{N}, \frac{i}{N})}(\xi), && \forall \xi \in [0,1],
\\
\tilde w_N(\xi,\zeta) \defeq \;& \sum_{i,j = 1}^N N w_{i,j;N} \mathbbm{1}_{[\frac{i-1}{N}, \frac{i}{N})}(\xi) \mathbbm{1}_{[\frac{j-1}{N}, \frac{j}{N})}(\zeta), && \forall \xi,\zeta \in [0,1].
\end{aligned}
\end{equation*}
Then for the vector $u_N = (u_{1;N},\dots,u_{N;N})^{\top}$ given by matrix multiplication $u_N = w_N v_N$, one has
\begin{equation*}
\begin{aligned}
\tilde u_N(\xi) = \int_{\zeta} \tilde w_N(\xi,\zeta) \tilde v_N(\zeta) \;\rd \zeta,
\end{aligned}
\end{equation*}
which translates matrix multiplication to kernel operation on $[0,1]$.
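As a concrete illustration (our own, with arbitrary data), the following NumPy snippet checks that the piecewise-constant extension turns matrix multiplication into the kernel operation above:

```python
import numpy as np

# u_N = w_N v_N agrees with integrating the piecewise-constant kernel:
# on each cell the kernel equals N * w[i, j] and each cell has width 1/N,
# so the integral over zeta reduces to sum_j (N * w[i, j]) * v[j] * (1/N).
rng = np.random.default_rng(0)
N = 5
v = rng.normal(size=N)
w = rng.normal(size=(N, N))
u = w @ v

u_kernel = np.array([
    sum(N * w[i, j] * v[j] * (1.0 / N) for j in range(N))
    for i in range(N)
])
assert np.allclose(u, u_kernel)
```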
# Modality-Balanced Embedding for Video Retrieval
Xun Wang1 Bingqing Ke1 Xuanping Li1 Fangyu Liu2
Mingyu Zhang1 Xiao Liang1 Qiushi Xiao1 Cheng Luo1 Yue Yu1
1 Kuaishou 2 University of Cambridge
(2022)
###### Abstract.
Video search has become the main routine for users to discover videos relevant
to a text query on large short-video sharing platforms. When training a
query-video bi-encoder model on online search logs, we identify a modality
bias phenomenon: the video encoder relies almost entirely on text matching,
neglecting other modalities of the videos such as vision and audio. This
modality imbalance results from a) modality gap: the relevance
between a query and a video text is much easier to learn as the query is also
a piece of text, with the same modality as the video text; b) data bias: most
training samples can be solved solely by text matching. Here we share our
practices to improve the first retrieval stage including our solution for the
modality imbalance issue. We propose MBVR (short for Modality Balanced Video
Retrieval) with two key components: manually generated modality-shuffled (MS)
samples and a dynamic margin (DM) based on visual relevance. They can
encourage the video encoder to pay balanced attention to each modality.
Through extensive experiments on a real-world dataset, we show empirically
that our method is both effective and efficient in solving the modality bias
problem. We have also deployed MBVR on a large video platform and observed a
statistically significant boost over a highly optimized baseline in an A/B
test and manual GSB evaluations.
video retrieval; modality-shuffled negatives; dynamic margin
journalyear: 2022; copyright: acmlicensed; conference: 45th International ACM
SIGIR Conference on Research and Development in Information Retrieval, July
11–15, 2022, Madrid, Spain; booktitle: Proceedings of the 45th International
ACM SIGIR Conference on Research and Development in Information Retrieval
(SIGIR ’22), July 11–15, 2022, Madrid, Spain; price: 15.00; doi:
10.1145/3477495.3531899; isbn: 978-1-4503-8732-3/22/07; ccs: Information
systems, Video search; ccs: Computing methodologies, Neural networks
## 1\. Introduction
Video search, which aims to find the relevant videos of a query from billions
of videos, is essential to video-sharing platforms (e.g., TikTok, Likee, and
Kuaishou). To be efficient, most video search systems adopt a multi-stage
pipeline that gradually shrinks the number of candidates. The first stage,
known as retrieval, recalls thousands of candidates from billions efficiently,
determining the upper bound of the overall performance of a search engine. The
subsequent pre-ranking stages further shrink the candidates to the size of
hundreds, and the final ranking server then scores and selects videos to
display for users. In this paper, we focus on improving the retrieval stage of
video search with multimodal embedding learning.
With the recent development of embedding learning (Bengio et al., 2013) and
pre-trained language models (Liu et al., 2019; Devlin et al., 2019; Zhan et
al., 2020), embedding-based retrieval approaches have obtained promising
results in web (i.e., document) retrieval (Liu et al., 2021b; Huang et al.,
2013; Guu et al., 2020; Karpukhin et al., 2020; Zhan et al., 2021; Khattab and
Zaharia, 2020) and product search (Li et al., 2021; Zhang et al., 2020). Most
of them adopt a bi-encoder architecture and are trained on labeled data or
online logs. _However, when training a query-video bi-encoder with online
search logs, we have identified a bothering modality imbalance phenomenon_: a
video’s embedding overly relies on its associated text contents, neglecting
its visual information. Such models would falsely recall videos that only
matching the query textually, but with irrelevant vision contents. Notice that
on video-sharing platforms, a video is usually composed of multiple modalities
including video frames, audio, text (e.g., title and hashtag), and etc. In
this paper, we select two modalities to represent the video content: the text
modality from the title, banners, and hashtags, and the vision modality from
a key frame or the cover. (Other modalities, such as audio, are dropped here
because their information is either extremely noisy or negligible in our
scenario.)
This modality imbalance phenomenon results from: 1) modality gap: both query
and text are of the same textual modality, thus their relevance is easier to
grasp than other video’s modalities; 2) data bias: current search engines are
mostly based on text matching, at lexical or semantic level, thus the online
search logs, which are used as the training data, are heavily biased towards
examples with high query-text similarities.
Recent research in video retrieval mostly focuses on designing more
sophisticated architectures (Sun et al., 2019; Lei et al., 2021; Huang et al.,
2020b; Liu et al., 2021a) or stronger cross-modal fusion operations (Gabeur et
al., 2020; Qu et al., 2021; Xu et al., 2021). They require large-scale clean
training data and heavy computational resources, making them suitable for only
specific settings. Moreover, in a real-world scenario with modality-biased
data such as video search logs, they unavoidably suffer from the modality
imbalance problem.
To bridge this gap, our paper offers a feasible solution named MBVR for
learning modality-balanced video embeddings from noisy search logs: a
bi-encoder framework with two key components, illustrated in Fig. 1.
Modality-Shuffled negatives. To correct the modality imbalance bias, we
generate novel modality-shuffled (MS) negative samples that train the model
adversarially. An MS negative consists of a relevant text and an irrelevant
video frame w.r.t. a query. MS negatives can be mistakenly ranked at the top
if a model overly relies on a single modality (see Fig. 2(b)). We add an
additional objective to explicitly penalize wrongly ranked MS negatives.
Dynamic margin. We further enhance the model with a margin that changes
dynamically with the visual relevance. The dynamic margin amplifies the loss
for positive query-video pairs whose text and visual content are both
relevant. Thus, models with the dynamic margin pull visually relevant videos
closer to the query.
We conduct extensive offline experiments and an ablation study on the key
components to validate the effectiveness of MBVR over a strong baseline and
recent methods related to modality balancing (Kim et al., 2020; Lamb et al.,
2019). Furthermore, we deploy an online A/B test and GSB evaluations on a
large video-sharing platform to show that MBVR improves the relevance and
user satisfaction of video search.
Figure 1. A graphical illustration of MBVR.
## 2\. MBVR
In this section, we first introduce the model architecture and the training
of a strong baseline model. Then we illustrate the modality imbalance issue
with a statistical analysis, and present our MBVR with generated negatives
and the dynamic margin.
Figure 2. (a) $R_{vt}$ (the ratio of the vision modality influence to the
text modality influence) distribution of the base model and MBVR. (b)
Similarity scores between the queries and the positives/MS negatives of the
base model and (c) those of MBVR.
### 2.1. Model Architecture
Our model architecture follows the popular two-tower (i.e., bi-encoder)
formulation, as (Huang et al., 2013; Li et al., 2021; Zhang et al., 2020; Liu
et al., 2021b; Zhan et al., 2021; Liu et al., 2021c), with a transformer-based
text encoder of query and a multi-modal encoder of video.
Query encoder $\mathcal{F}_{q}$ can be summarised as `RBT3+average+FC`. We
utilize the RoBERTa (Liu et al., 2019) model with three transformer layers as
the backbone (we also tried larger BERT encoders with 6 and 12 transformer
layers, but they bring only negligible gains at much heavier computational
cost, so we chose the 3-layer RoBERTa as our text encoder) and use
`average+FC` (i.e., average pooling followed by a fully connected layer) to
compress the final token embeddings to $d$-dim
($d=64$).
Multimodal encoder $\mathcal{F}_{m}$ consists of a text encoder
$\mathcal{F}_{t}$, a vision encoder $\mathcal{F}_{v}$ and a fusion module
$\mathcal{H}$. For a video sample $m$, its embedding is computed as
(1)
$\displaystyle\mathcal{F}_{m}(m)=\mathcal{F}_{m}(t,v)=\mathcal{H}(\mathcal{F}_{t}(t),~{}\mathcal{F}_{v}(v)),$
where $t$ is the text input and $v$ is the vision input. The text encoder
$\mathcal{F}_{t}$ shares weights with $\mathcal{F}_{q}$ (such weight sharing
brings several benefits, e.g., reducing model parameters to save memory and
computational cost, and introducing a prior query-text matching relation to
regularize the training (Firat et al., 2016; Xia et al., 2018; Liu et al.,
2021b)). The vision encoder $\mathcal{F}_{v}$ adopts the classical ResNet-50
(He et al., 2016) network. For the fusion module $\mathcal{H}$, we adopt the
multi-head self-attention (MSA) (Vaswani et al., 2017) to dynamically
integrate the two modalities and aggregate the outputs of MSA with an average
pooling. We have also tried other feature fusion operations (e.g., direct
addition and concatenation-MLP) and discovered that MSA works the best.
### 2.2. Base Model
Most existing works, e.g., (Huang et al., 2013; Liu et al., 2021b; Huang et
al., 2020a), train bi-encoders with the approximated query-to-document
retrieval objective. Specifically, given a query $q$, its relevant videos
$\mathcal{M}^{+}_{q}$, and its irrelevant videos $\mathcal{M}^{-}_{q}$, the
query-to-document objective is as below:
(2)
$\displaystyle\mathcal{L}_{qm}=-\log\Big{(}\frac{\exp(s(q,m)/\tau)}{\exp(s(q,m)/\tau)+\sum_{\hat{m}\in
M^{-}_{q}}\exp(s(q,\hat{m})/\tau)}\Big{)},$
where $\tau$ is a temperature hyper-parameter set to 0.07, and $s(q,m)$ is the
cosine similarity of a query-video pair $(q,m)$ (i.e.,
$s(q,m)=cos<\mathcal{F}_{q}(q),\mathcal{F}_{m}(m)>$). Notably, here we adopt
in-batch random negative sampling, which means $\mathcal{M}^{-}_{q}$ is all
the videos in current mini-batch except the positive sample $m$.
Following recent works (Liu et al., 2021c; Liu et al., 2021a) that add a
reversed document-to-query retrieval loss, we formulate a corresponding
$\mathcal{L}_{mq}$ as below:
(3)
$\displaystyle\mathcal{L}_{mq}=-\log\Big{(}\frac{\exp(s(q,m)/\tau)}{\exp(s(q,m)/\tau)+\sum_{\hat{q}\in\mathcal{Q}^{-}_{m}}\exp(s(\hat{q},m)/\tau)}\Big{)},$
where $\mathcal{Q}^{-}_{m}$ denotes the irrelevant queries of the video $m$,
i.e., all the queries in current mini-batch except $q$.
The sum of the query-to-document loss and the reversed document-to-query loss
results in the main bidirectional objective:
(4) $\displaystyle\mathcal{L}_{bi}=\mathcal{L}_{qm}+\mathcal{L}_{mq}.$
Besides the above bidirectional objective optimizing the relevance between
the query embedding and the video’s multimodal embedding, we also add an
auxiliary task $\mathcal{L}_{t}$ ($\mathcal{L}_{v}$) with a similar
formulation to optimize the relevance between the query and the video’s text
modality (vision modality). The whole objective $\mathcal{L}_{base}$ for the
base model is:
(5)
$\displaystyle\mathcal{L}_{base}=\mathcal{L}_{bi}+\alpha\mathcal{L}_{v}+\beta\mathcal{L}_{t},$
where $\alpha=\beta=0.1$, are the weight hyper-parameters.
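For concreteness, a minimal NumPy sketch (our own, not the authors' implementation) of the bidirectional in-batch objective of Eqs. (2)-(4) could look as follows, where each row of `q_emb` is a query embedding and the same-index row of `m_emb` is its positive video embedding:

```python
import numpy as np

def bidirectional_infonce(q_emb, m_emb, tau=0.07):
    """L_bi = L_qm + L_mq with in-batch negatives (Eqs. (2)-(4))."""
    q = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
    m = m_emb / np.linalg.norm(m_emb, axis=1, keepdims=True)
    logits = q @ m.T / tau                    # cosine similarities / tau
    n = len(q)

    def ce(lg):                               # cross-entropy, positives on the diagonal
        lg = lg - lg.max(axis=1, keepdims=True)
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    return ce(logits) + ce(logits.T)          # query->video + video->query
```

With perfectly aligned embeddings the loss approaches zero, e.g. `bidirectional_infonce(np.eye(4), np.eye(4))` is below `1e-4`.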
### 2.3. Statistical Analysis of Modality Imbalance
To identify the modality imbalance, we define an indicator ${R}_{vt}$ as
below,
(6)
$\displaystyle{R}_{vt}=\frac{cos<\mathcal{F}_{v}(v),\mathcal{F}_{m}(m)>}{cos<\mathcal{F}_{t}(t),\mathcal{F}_{m}(m)>},$
where the definitions of $\mathcal{F}_{m},\mathcal{F}_{v},\mathcal{F}_{t}$ are
given in Eq. (1). $R_{vt}$ is the ratio between the cosine similarity of
vision-video and that of text-video and measures the extent of modality bias
of the multi-modal encoder $\mathcal{F}_{m}$.
For the base model in Eq. (5), we compute $R_{vt}$ for a randomly sampled set
of videos and plot the density histogram in Fig. 2(a). As observed, most
values satisfy $R_{vt}<0.3$, indicating that the model suffers from the modality
imbalance problem and that the multimodal embeddings are heavily biased toward
the text contents.
Consequently, when retrieving videos with the base model, visually irrelevant
videos can be falsely recalled with even higher similarities than videos that
are textually and visually relevant to the queries. The fundamental cause of
modality imbalance is that text matching provides a shortcut for the bi-
encoder: the query-text relation is easier to grasp, and most samples in the
training set can be solved by lexical relevance alone.
### 2.4. Modality-Shuffled Negatives
To eliminate the shortcut, we generate novel Modality-Shuffled (MS for short)
negative samples, whose vision modality is irrelevant to the query while the
text is relevant to it. As illustrated in Fig. 2(b), such MS negatives act as
strong adversarial attacks on the base model, which cannot distinguish MS
negatives from the real positives. This inspires our loss design for MS
negatives as below:
(7)
$\displaystyle\mathcal{L}_{ms}=-\log\Big{(}\frac{\exp(s(q,m)/\tau)}{\exp(s(q,m)/\tau)+\sum_{\hat{m}\in\mathcal{M}_{ms}}\exp(s(q,\hat{m})/\tau)}\Big{)},$
where $\mathcal{M}_{ms}$ denotes the set of generated MS negatives.
$\mathcal{L}_{ms}$ directly encourages the model to disentangle the MS
negatives from the real positives, as shown in Fig. 2(c). Note that when both
$R_{vt}$ and $R_{\hat{v}t}$ are close to 0, $\mathcal{F}_{m}(t,v)$ will be
close to its MS negative $\mathcal{F}_{m}(t,\hat{v})$. Thus, MBVR with
$\mathcal{L}_{ms}$ also pushes $R_{vt}$ away from 0, as shown in Fig. 2(a),
which indicates that $\mathcal{L}_{ms}$ effectively alleviates the modality
imbalance problem. Consequently, the information of both text and vision is
well preserved in the final video embedding.
To generate MS negatives efficiently, we re-combine text embeddings and vision
embeddings within the mini-batch, as in Fig. 1. For a mini-batch of size $n$,
the vision and text embeddings of the $k$-th video are $\mathcal{F}_{v}(v_{k})$
and $\mathcal{F}_{t}(t_{k})$. An MS negative of the $k$-th video can then be
computed as $\mathcal{H}(\mathcal{F}_{v}(v_{l}),\mathcal{F}_{t}(t_{k}))$, where
$l$ is a randomly selected integer from $1$ to $n$ excluding $k$. This design
generates one MS negative for each video with only one MSA operation, which is
extremely efficient. By repeating the above process $M$ times, we generate $M$
MS negatives for each video. Empirically, more MS negatives result in better
performance; we set $M=32$ to balance effectiveness and efficiency.
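The recombination step can be sketched as below. This is our illustration, not the paper's code: `fuse` stands in for the fusion module $\mathcal{H}$ (MSA in the paper), and the cyclic-shift shuffle is a simplification that guarantees $l \neq k$ for every row.

```python
import numpy as np

def modality_shuffled_negatives(v_emb, t_emb, fuse, num_ms=2, rng=None):
    """Build MS negatives by pairing each video's text embedding with a
    randomly chosen *other* video's vision embedding, then fusing.

    v_emb, t_emb: (n, d) vision / text embeddings of a mini-batch;
    fuse: callable taking (vision_batch, text_batch) and returning the
    fused embeddings. Returns a list of num_ms fused negative batches.
    """
    rng = rng or np.random.default_rng()
    n = v_emb.shape[0]
    negatives = []
    for _ in range(num_ms):
        shift = rng.integers(1, n)        # cyclic shift guarantees l != k
        idx = (np.arange(n) + shift) % n
        negatives.append(fuse(v_emb[idx], t_emb))
    return negatives
```

One call to `fuse` per repetition yields one MS negative per video, matching the efficiency argument above.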
### 2.5. Dynamic Margin
To further address the modality bias, we apply a dynamic margin $\lambda$ on
the positive pair $(q,m)$ of $\mathcal{L}_{qm}$ as below:
(8)
$\displaystyle\mathcal{L}_{qm}=-\log\Big{(}\frac{\exp((s(q,m)-\lambda)/\tau)}{\exp((s(q,m)-\lambda)/\tau)+\sum_{\hat{m}\in\mathcal{M}^{-}_{q}}\exp(s(q,\hat{m})/\tau)}\Big{)}.$
$\lambda$ is computed from the visual relevance of $(q,m)$ through a scale and
shift transformation:
(9)
$\displaystyle\lambda=w\sigma(cos<\mathcal{F}_{v}(v),\mathcal{F}_{q}(q)>)+b,$
where $w=0.3,b=-0.1$, and $\sigma$ denotes the sigmoid function, i.e.,
$\sigma(x)=\frac{1}{1+e^{-x}}$. Then the margin $\lambda$ varies in
$(-0.1,0.2)$ and monotonically increases w.r.t. the visual relevance (i.e.,
$cos<\mathcal{F}_{v}(v),\mathcal{F}_{q}(q)>$). We also apply the same
modification to the video-to-query loss $\mathcal{L}_{mq}$ and the MS loss
$\mathcal{L}_{ms}$. The main objective $\mathcal{L}_{bi}$ in Eq. 4 with
dynamic margin is then referred to as $\widetilde{\mathcal{L}}_{bi}$, and
$\mathcal{L}_{ms}$ with dynamic margin as $\widetilde{\mathcal{L}}_{ms}$. Note
that the gradient of the margin $\lambda$ is detached during model training.
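Eq. (9) can be transcribed directly; only the function name is our choice.

```python
import numpy as np

def dynamic_margin(vision_query_cos, w=0.3, b=-0.1):
    """Dynamic margin lambda: a scale-and-shift of the sigmoid of the
    query-vision cosine similarity. With w=0.3 and b=-0.1 the margin
    lies in (-0.1, 0.2) and grows with visual relevance."""
    sigma = 1.0 / (1.0 + np.exp(-vision_query_cos))
    return w * sigma + b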
Intuitively, when $\lambda>0$, the margin can be viewed as moving the positive
video $m$ slightly away from the query before the loss is computed, which
yields a larger loss value, as in Fig. 1. The dynamic margin therefore
encourages the model to produce even higher similarities for visually related
query-video pairs.
Finally, the overall learning objective of our MBVR framework is as follows,
(10)
$\displaystyle\mathcal{L}=\widetilde{\mathcal{L}}_{bi}+\alpha\mathcal{L}_{v}+\beta\mathcal{L}_{t}+\gamma\widetilde{\mathcal{L}}_{ms},$
where $\gamma$ is a weight hyper-parameter set to $0.01$. In summary, MBVR
addresses the modality imbalance issue from two collaborative aspects:
penalizing videos with unrelated visual content via the MS component, and
reinforcing query-video pairs with related visual content via the dynamic
margin.
## 3\. Experiments
### 3.1. Experimental Setup
In this section, we describe our datasets and evaluation metrics.
Training Dataset. The training dataset contains about 5 million queries, 42
million videos, and 170 million relevant query-video pairs mined from the most
recent nine months of search logs.
Evaluation Datasets. We perform offline evaluations on two test datasets. Manual:
a manually annotated dataset of two million query-video pairs used to evaluate
the relevance of recalled videos; Auto: a dataset of five million randomly
sampled query-video pairs automatically collected with Wilson CTR from search
logs to simulate online performance. The datasets used in the paper will be
released only as embedding vectors to protect the privacy of users.
For all models, we compute their Precision@K and MRR@K on Auto and PNR on
Manual, which are defined as below:
Precision@K. Given a query $q$, $\mathcal{V}_{q}$ is the set of relevant
videos, and the top $K$ documents returned by a model are denoted as
$\mathcal{R}_{q}=\{r_{1},\cdots,r_{K}\}$. The metric of Precision@$K$ is
defined as
(11) $\displaystyle Precision@K=\frac{1}{K}\sum_{i=1}^{K}\mathcal{I}_{r_{i}\in\mathcal{V}_{q}},$
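A direct transcription of Eq. (11); the helper names are ours, with `ranked` the ordered result list $\mathcal{R}_{q}$ and `relevant` the set $\mathcal{V}_{q}$.

```python
def precision_at_k(ranked, relevant, k):
    """Precision@K: fraction of the top-K returned videos that are in
    the relevant set V_q."""
    top_k = ranked[:k]
    return sum(1 for r in top_k if r in relevant) / k
```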
MRR@K. Mean Reciprocal Rank at K (MRR@K) is defined as
(12) $\displaystyle
MRR@K=\frac{1}{K}\sum_{i=1}^{K}\mathcal{I}_{r_{i}\in\mathcal{V}_{q}}\cdot\frac{1}{i},$
where $\mathcal{I}_{\mathcal{A}}$ is an indicator function: it is 1 if
$\mathcal{A}$ holds and 0 otherwise. Compared with Precision@K, MRR@K reflects
the order of the top-$K$ results. Note that MRR@K is usually computed when
there is only one positive sample per query, in which case its value is at
most 1; since each query here can have more than one relevant video, MRR@K can
be larger than 1.
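Eq. (12) can likewise be transcribed directly (helper names are ours); note how multiple relevant videos in the top-$K$ let the value exceed 1.

```python
def mrr_at_k(ranked, relevant, k):
    """MRR@K: sum of reciprocal ranks of relevant videos in the top-K.
    With more than one relevant video per query, the value can exceed 1."""
    return sum(1.0 / (i + 1) for i, r in enumerate(ranked[:k]) if r in relevant)
```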
PNR. For a given query $q$ and its associated videos $\mathcal{D}_{q}$, the
positive-negative ratio (PNR) can be defined as
(13) $\displaystyle
PNR=\frac{\sum_{d_{i},d_{j}\in\mathcal{D}_{q}}\mathcal{I}(y_{i}>y_{j})\cdot\mathcal{I}(s(q,d_{i})>s(q,d_{j}))}{\sum_{d_{i^{\prime}},d_{j^{\prime}}\in\mathcal{D}_{q}}\mathcal{I}(y_{i^{\prime}}>y_{j^{\prime}})\cdot\mathcal{I}(s(q,d_{i^{\prime}})<s(q,d_{j^{\prime}}))},$
where $y_{i}$ represents the manual label of $d_{i}$, and $s(q,d_{i})$ is the
predicted score between $q$ and $d_{i}$ given by the model. PNR measures the
consistency between labels and predictions.
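A sketch of Eq. (13) over one query's labeled videos; helper names are ours, and returning infinity when no discordant pair exists is our convention, as the paper does not specify the zero-denominator case.

```python
from itertools import permutations

def pnr(labels, scores):
    """Positive-negative ratio: over all ordered pairs with
    labels[i] > labels[j], the count of concordant score pairs
    divided by the count of discordant ones."""
    concordant = discordant = 0
    for i, j in permutations(range(len(labels)), 2):
        if labels[i] > labels[j]:
            if scores[i] > scores[j]:
                concordant += 1
            elif scores[i] < scores[j]:
                discordant += 1
    return concordant / discordant if discordant else float("inf")
```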
### 3.2. Offline Evaluations
Table 1. Offline experimental results of compared methods on the Auto and Manual test sets.

| Method | MRR@10 (Auto) | Precision@10 (%) (Auto) | PNR (Manual) |
|---|---|---|---|
| _Text_ | 1.406 | 45.58 | 2.188 |
| _Vision_ | 0.603 | 17.55 | 1.942 |
| _Base model_ | 1.446 | 46.53 | 2.230 |
| _+IAT (Lamb et al., 2019)_ | 1.452 | 46.22 | 2.243 |
| _+CDF (Kim et al., 2020)_ | 1.463 | 47.31 | 2.253 |
| _+MS_ | 1.600 | 50.52 | 2.311 |
| _+DM_ | 1.491 | 47.75 | 2.267 |
| _MBVR_ | 1.614 | 51.10 | 2.310 |
Compared Methods. We compare our method with the highly optimized baseline and
recent state-of-the-art modality-balancing techniques, IAT (Lamb et al., 2019)
and CDF (Kim et al., 2020). _IAT_ trains a robust model under the adversarial
attacks of PGD (Madry et al., 2018), and _CDF_ aims to retrieve videos with
one modality missing.
* •
_Base_ is Eq. 5 without MS negatives and dynamic margin.
* •
_Text_ (_Vision_) only uses the text (vision) modality of videos.
* •
_+IAT_ equips _Base_ with IAT (Lamb et al., 2019).
* •
_+CDF_ equips _Base_ with CDF (Kim et al., 2020).
* •
_+MS_ equips _Base_ with MS negatives $\mathcal{L}_{ms}$.
* •
_+DM_ equips _Base_ with dynamic margin $\widetilde{\mathcal{L}}_{bi}$.
* •
_MBVR_ is the full model with both DM and _MS_.
Table 1 shows the experimental results of the compared methods on both the
Auto and Manual test sets. The drastic performance difference between _Vision_
and _Text_ results from the dominant role of the text modality among the
video’s modalities. _+IAT_ and _+CDF_ bring only marginal improvements over
_Base_. Both _+MS_ and _+DM_ boost performance significantly over the strong
baseline _Base_. _MS_ is extremely effective, bringing a nearly 4% absolute
gain over _Base_ ($46.53\%\longrightarrow 50.52\%$). Furthermore, the full
model _MBVR_, with both _MS_ and _DM_, achieves the best performance. The
offline evaluations verify the effectiveness and compatibility of both
components of MBVR (_MS_ and _DM_).
### 3.3. Online Evaluations
For the online test, the control baseline is the current online search engine,
a highly optimized system with multiple retrieval routes (text-embedding-based
ANN retrieval, text matching with inverted indexing, etc.) that provide
thousands of candidates, plus several pre-ranking and ranking models to rank
these candidates. The experimental variant adds our MBVR multimodal-embedding-
based retrieval as an additional route.
Online A/B Test. We conducted online A/B experiments over 10% of the entire
traffic for one week. Watch time increased by 1.509%; the long watch rate
increased by 2.485%; and the query changed rate decreased by 1.174% (the
decrease is positive, as it means users find relevant videos without changing
the query to trigger a new search request). This is a statistically
significant improvement and verifies the effectiveness of MBVR. MBVR has since
been deployed online and serves the main traffic.
Manual Evaluation. We conduct a manual side-by-side comparison of the top-4
videos between the baseline and the experiment. We randomly sample 200 queries
whose top-4 videos differ, and then ask several human experts to judge whether
the experiment’s results are more relevant than the baseline’s. The Good _vs._
Same _vs._ Bad (GSB) metric is G=45, S=126, B=29, where G (or B) denotes the
number of queries whose experimental results are more (or less) relevant than
the baseline’s. This GSB result indicates that MBVR recalls more relevant
videos and thus better satisfies users’ search requests.
## 4\. Conclusions and Discussions
In this paper, we identify the challenging modality bias issue in multimodal
embedding learning based on online search logs and propose our solution MBVR.
The main contributions of MBVR are the modality-shuffled (MS) negatives and
the dynamic margin (DM), which force the model to pay more balanced attention
to each modality. Our experiments verify that the proposed MBVR significantly
outperforms a strong baseline and recent modality balanced techniques on
offline evaluation and improves the highly optimized online video search
system.
As an early exploration of building a multimodal retrieval system for short-
video platforms, our MBVR adopts a succinct scheme (i.e., we keep
engineering/architecture design choices simple). There are potential
directions to further enhance the system, e.g., using more frames, designing
more sophisticated cross-modal fusion modules, and adopting smarter data
cleaning techniques, which can be explored in the future.
## Acknowledgments
We thank Xintong Han for early discussions of MBVR. We thank Xintong Han,
Haozhi Zhang, Yu Gao, Shanlan Nie for paper review. We also thank Tong Zhao,
Yue Lv for preparing training datasets for the experiments.
## References
* (1)
* Bengio et al. (2013) Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013\. Representation learning: A review and new perspectives. _TPAMI_ 35, 8 (2013).
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In _NAACL_. 4171–4186.
* Firat et al. (2016) Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T Yarman Vural, and Kyunghyun Cho. 2016\. Zero-Resource Translation with Multi-Lingual Neural Machine Translation. In _EMNLP_. 268–277.
* Gabeur et al. (2020) Valentin Gabeur, Chen Sun, Karteek Alahari, and Cordelia Schmid. 2020. Multi-modal Transformer for Video Retrieval. In _ECCV_.
* Guu et al. (2020) Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. Realm: retrieval-augmented language model pre-training. _arXiv preprint arXiv:2002.08909_ (2020).
* He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016\. Deep residual learning for image recognition. In _CVPR_. 770–778.
* Huang et al. (2020a) Jui-Ting Huang, Ashish Sharma, Shuying Sun, Li Xia, David Zhang, Philip Pronin, Janani Padmanabhan, Giuseppe Ottaviano, and Linjun Yang. 2020a. Embedding-based retrieval in facebook search. In _KDD_. 2553–2561.
* Huang et al. (2013) Po-Sen Huang, X. He, Jianfeng Gao, L. Deng, A. Acero, and Larry Heck. 2013\. Learning deep structured semantic models for web search using clickthrough data. In _CIKM_.
* Huang et al. (2020b) Zhicheng Huang, Zhaoyang Zeng, Bei Liu, Dongmei Fu, and Jianlong Fu. 2020b. Pixel-bert: Aligning image pixels with text by deep multi-modal transformers. _arXiv preprint arXiv:2004.00849_ (2020).
* Karpukhin et al. (2020) Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. _EMNLP_ (2020).
* Khattab and Zaharia (2020) Omar Khattab and Matei Zaharia. 2020. Colbert: Efficient and effective passage search via contextualized late interaction over bert. In _Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval_. 39–48.
* Kim et al. (2020) Hyounghun Kim, Hao Tan, and Mohit Bansal. 2020. Modality-balanced models for visual dialogue. In _AAAI_ , Vol. 34. 8091–8098.
* Lamb et al. (2019) Alex Lamb, Vikas Verma, Juho Kannala, and Yoshua Bengio. 2019\. Interpolated adversarial training: Achieving robust neural networks without sacrificing too much accuracy. In _Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security_. 95–103.
* Lei et al. (2021) Jie Lei, Linjie Li, Luowei Zhou, Zhe Gan, Tamara L Berg, Mohit Bansal, and Jingjing Liu. 2021. Less is more: Clipbert for video-and-language learning via sparse sampling. In _CVPR_. 7331–7341.
* Li et al. (2021) Sen Li, Fuyu Lv, Taiwei Jin, Guli Lin, Keping Yang, Xiaoyi Zeng, Xiao-Ming Wu, and Qianli Ma. 2021\. Embedding-based Product Retrieval in Taobao Search. _KDD_ (2021).
* Liu et al. (2021a) Song Liu, Haoqi Fan, Shengsheng Qian, Yiru Chen, Wenkui Ding, and Zhongyuan Wang. 2021a. Hit: Hierarchical transformer with momentum contrast for video-text retrieval. _ICCV_ (2021).
* Liu et al. (2021b) Yiding Liu, Guan Huang, Weixue Lu, Suqi Cheng, Daiting Shi, Shuaiqiang Wang, Zhicong Cheng, and Dawei Yin. 2021b. Pre-trained Language Model for Web-scale Retrieval in Baidu Search. _KDD_ (2021).
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. _arXiv preprint arXiv:1907.11692_ (2019).
* Liu et al. (2021c) Yiqun Liu, Kaushik Rangadurai, Yunzhong He, Siddarth Malreddy, Xunlong Gui, Xiaoyi Liu, and Fedor Borisyuk. 2021c. Que2Search: Fast and Accurate Query and Document Understanding for Search at Facebook. In _KDD_. 3376–3384.
* Madry et al. (2018) Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018\. Towards Deep Learning Models Resistant to Adversarial Attacks. In _ICLR_.
* Qu et al. (2021) Leigang Qu, Meng Liu, Jianlong Wu, Zan Gao, and Liqiang Nie. 2021. Dynamic Modality Interaction Modeling for Image-Text Retrieval. In _SIGIR_. 1104–1113.
* Sun et al. (2019) Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. 2019. Videobert: A joint model for video and language representation learning. In _ICCV_. 7464–7473.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. In _NeurIPS_. 5998–6008.
* Xia et al. (2018) Yingce Xia, Xu Tan, Fei Tian, Tao Qin, Nenghai Yu, and Tie-Yan Liu. 2018\. Model-level dual learning. In _ICML_. PMLR, 5383–5392.
* Xu et al. (2021) Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko, Armen Aghajanyan, Florian Metze, Luke Zettlemoyer, and Christoph Feichtenhofer. 2021. Videoclip: Contrastive pre-training for zero-shot video-text understanding. _EMNLP_ (2021).
* Zhan et al. (2021) Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Jiafeng Guo, Min Zhang, and Shaoping Ma. 2021\. Optimizing Dense Retrieval Model Training with Hard Negatives. _SIGIR_ (2021).
* Zhan et al. (2020) Jingtao Zhan, Jiaxin Mao, Yiqun Liu, Min Zhang, and Shaoping Ma. 2020. RepBERT: Contextualized Text Embeddings for First-Stage Retrieval. _arXiv preprint arXiv:2006.15498_ (2020).
* Zhang et al. (2020) Han Zhang, Songlin Wang, Kang Zhang, Zhiling Tang, Yunjiang Jiang, Yun Xiao, Weipeng Yan, and Wen-Yun Yang. 2020\. Towards personalized and semantic retrieval: An end-to-end solution for e-commerce search via embedding learning. In _SIGIR_. 2407–2416.
ORCID: 0000-0001-9914-5434, 0000-0003-1285-9878
Affiliations: 1. School of Computer Science, Carleton University, Ottawa,
Canada; 2. Data-driven Analysis of Software (DAS) Lab, Concordia University,
Montreal, Canada
# A Machine Learning Approach to Determine the Semantic Versioning Type of npm
Packages Releases
Rabe Abdalkareem <EMAIL_ADDRESS>, Md Atique Reza Chowdhury
<EMAIL_ADDRESS>, Emad Shihab <EMAIL_ADDRESS>
###### Abstract
Semantic versioning policy is widely used to indicate the level of changes in
a package release. Unfortunately, there are many cases where developers do not
respect the semantic versioning policy, leading to the breakage of dependent
applications. To reduce such cases, we proposed using machine learning (ML)
techniques to effectively predict the new release type, i.e., patch, minor, or
major, in order to properly determine the semantic versioning type. To perform
our prediction, we mined and used a number of features about a release, such
as the complexity of the changed code, change types, and development
activities. We then used four ML classifiers. To evaluate the performance of
the proposed ML classifiers, we conducted an empirical study on 31 JavaScript
packages containing a total of approximately 6,260 releases. We started by
extracting 41 release-level features from historical data of packages’ source
code and repositories. Then, we used four machine learning classifiers, namely
XGBoost, Random Forest, Decision Tree, and Logistic Regression. We found that
the XGBoost classifiers performed the best achieving median ROC-AUC values of
0.78, 0.69, and 0.74 for major, minor, and patch releases, respectively. We
also found that features related to the change types in a release are the best
predictors group of features in determining the semantic versioning type.
Finally, we studied the generalizability of determining the semantic
versioning type by applying a cross-package validation. Our results showed
that the general classifier achieved median ROC-AUC values of 0.76, 0.69, and
0.75 for major, minor, and patch releases.
###### keywords:
npm Package Releases Semantic Version Mining Software Repository Machine
Learning
## 1 Introduction
Semantic versioning is a commonly used versioning approach to signal a
change’s compatibility through version numbers. Prior work showed that
properly adopting semantic versioning increases developers’ trust in the
packages they depend on and decreases the chance of facing backward-
compatibility breakage [58, 11]. Therefore, most language-specific package
managers encourage the use of semantic versioning (e.g., npm for JavaScript,
Cargo for Rust, Gems for Ruby, among others) [23, 24]. Likewise, some of the
biggest software producers, such as Microsoft, Netflix, Facebook, and Google,
make extensive use of semantic versioning to tag their new software releases
[43, 54, 29]. In addition, a survey of two thousand developers shows that
developers heavily rely on semantic versioning to determine their projects’
release types [9].
However, misuse of semantic versioning can cause many problems. Developers may
incorrectly identify the semantic versioning type and may tag a new release as
minor or patch even though it introduces breaking changes, especially for
packages that are continuously releasing [11, 4]. One example of such a
problem is in the context of the web browser Firefox and the font selection
library fontconfig [4]. At some point, the fontconfig’s developers decided to
change its implementation so that blank file names would no longer be
permitted. They chose to mark this change as a minor release. However, this
release of fontconfig caused Firefox to fail to render text for any
application that used that minor release. In addition, this issue of release
tagging can be particularly problematic for oversized packages or projects
that receive many contributions and perform many changes in one release
development duration. Therefor, this problem can negatively affect both the
developers of the packages and software applications that directly or
indirectly depend on these packages [11, 58].
Due to the increased adoption of semantic versioning, most previous work
focused on empirically studying its usage and benefits (e.g., [11, 42, 70]).
However, very few studies tried to improve the efficiency of applying semantic
versioning in practice. More importantly, most prior studies took reactive
approaches and tried to detect breaking changes of a package after it was
released, through the use of source code analysis (e.g., [49, 58, 48, 71]).
Thus, we argue that prior approaches have two key limitations. First, they
tackled the issue of wrongly tagged releases only after the releases were out
and being integrated by dependent applications. Second, they heavily relied on
source code analysis, which suffers from high false-positive rates and is
incapable of detecting runtime changes, especially for packages written in
dynamically typed languages such as JavaScript [55, 5].
Therefore, the main goal of our work is to automatically determine the type of
a new package release, i.e., patch, minor, or major. To do so, we proposed the
use of machine learning (ML) techniques to predict the semantic versioning
type. We started by analyzing the npm package manager and selected 31 packages
with 6,268 releases whose developers properly use semantic versioning to tag
their releases. We then analyzed the source code and mined the development
into six dimensions, namely, change types, development activities, complexity
and code, time, dependency, and text dimensions. Next, we built four different
machine learning classifiers, namely XGBoost, Random Forest, Decision Tree,
and Logistic Regression, to determine the semantic versioning type of the
releases. Finally, to evaluate the effectiveness of using the ML techniques,
we performed an empirical study to answer the following questions:
RQ1: Can we effectively determine the semantic versioning type of a new
package release? We built four different ML classifiers using 41 features
extracted from packages’ repositories and source code. We then compared their
performance to the baseline, which is the ZeroR classifier. Our results showed
that XGBoost classifiers achieved average ROC-AUC values of 0.77, 0.69, and
0.74 (median $=$ 0.78, 0.69, and 0.74) for major, minor, and patch releases,
respectively. This corresponds to an average improvement of 1.58$X$, 1.38$X$,
and 1.49$X$ over our baseline for the major, minor, and patch releases.
Then, we examined the most important dimension of features used by the ML
classifiers to determine the semantic versioning type of a new package release
in order to provide insights to practitioners as to what features best
indicate the new package release type. This led us to ask the question; RQ2:
Which dimension of features are most important in determining the semantic
versioning type of a new package release? We built different classifiers based
on each dimension of features and evaluated and compared their performance.
Our results showed that change types (e.g., the number of JavaScript files
added in a release) and the complexity of the release’s source code are the
most important dimensions of features in determining the type of a new
release.
Lastly, to examine the generalizability of the proposed technique, we
investigated the effectiveness of the ML techniques in determining the
semantic versioning type of a new package release using cross-packages
validation. In particular, we asked the question; RQ3: How effective are the
machine learning techniques when applied on cross-packages? We built general
classifiers and evaluated their performance using cross-package validation.
The results showed that the classifier achieves average ROC-AUC values of
0.74, 0.68, and 0.75 (median $=$ 0.76, 0.69, and 0.75) for major, minor, and
patch releases. These results also showed that cross-package classifiers’
performances correspond to an average ROC-AUC improvement of 1.5$X$, 1.4$X$,
and 1.5$X$ over our baseline.
In general, our work made the following key contributions:
1. 1.
We formulated the problem of predicting semantic versioning for JavaScript
packages. To the best of our knowledge, this is the first work to use ML
techniques to determine the semantic versioning type for JavaScript packages.
We envision that our approach can be used to predict releases that are likely
to be breakage releases.
2. We proposed features that can be mined from JavaScript package repositories
and source code to predict semantic versioning type of a new package release.
We used the proposed features to predict semantic versioning accurately and
studied the features that best indicate the semantic versioning type.
3. We performed an empirical study on 31 open-source JavaScript packages, and our
experimental results showed that the use of ML techniques can achieve an
improvement over our baseline approach, which is the ZeroR classifier.
Structure of the paper: The remainder of the paper is organized as follows.
Section 2 provides background on semantic versioning. We describe our case
study design in Section 3 and present our case study results in Section 4.
The work related to our study is discussed in Section 5, and the threats to
the validity of our work in Section 6. Finally, Section 7 concludes the
paper.
## 2 Semantic Versioning
Since the primary goal of our work is to determine the semantic versioning
type of a new npm package release, it is essential first to provide background
on the concept of semantic versioning and how it is used to tag new package
releases.
Semantic Versioning is considered the de-facto versioning standard for many
software ecosystems, including node package manager (npm) and Python package
index (PyPI), to name a few. Semantic Versioning was introduced by the co-
founder of GitHub, Tom Preston-Werner, in 2011. In our study, we focused on
semantic versioning 2.0, which was released in 2013 [56]. The purpose of
semantic versioning is twofold. It first allows package developers to
communicate the extent of backward-incompatible changes in their new releases
to application dependents. Also, it allows for dependents of a package to
specify how restrictive or permissive they want to be in automatically
accepting new versions of the packages.
In general, semantic versioning proposes three dot-separated numbers
indicating the major, minor, and patch versions of a release. Those numbers
help identify the type of changes in the newly released package. To explain
how semantic versioning works, take the release number m1.n1.p1 as an
example: m1 is the major version, n1 the minor version, and p1 the patch
version. Semantic versioning also defines rules that determine which of the
three numbers should be incremented when a new release comes out. In
particular, any backward-incompatible change (e.g., one that breaks the API)
requires an update to the major version. Thus, a major release must increment
the major version and reset the minor and patch numbers, for example from
m1.n1.p1 to m2.0.0. A minor release should be published when some new
backward-compatible change is introduced (e.g., adding or supporting new
functionality that does not create backward incompatibility); it increments
the minor version and resets the patch number (e.g., from m2.n1.p1 to
m2.n2.0). Finally, a patch release should be published when the release
contains only backward-compatible fixes (e.g., fixing a bug); it increments
the patch version, for example from m2.n2.p1 to m2.n2.p2. In addition, there
are some optional tags, for example for specifying pre-releases (e.g.,
1.2.3-beta).
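These increment rules can be sketched as a small helper that classifies a release by comparing two version numbers (the function name and parsing are illustrative, not part of the specification):

```python
def release_type(prev: str, new: str) -> str:
    """Classify a release as 'major', 'minor', or 'patch' by comparing
    two semantic version numbers. Optional tags such as '-beta' are
    stripped before parsing."""
    def parse(v):
        return tuple(int(x) for x in v.split("-")[0].split("."))
    (pm, pn, pp), (nm, nn, npp) = parse(prev), parse(new)
    if nm > pm:
        return "major"
    if nn > pn:
        return "minor"
    if npp > pp:
        return "patch"
    raise ValueError("new version does not increment prev")
```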
Although adopting semantic versioning is not mandatory, prior studies showed
that most packages in npm comply with this specification (e.g., [23, 37]).
The mechanism to resolve a provided version relies on the precedence between
version numbers since npm needs to know if a particular version number is
greater than, less than, or equal to another version number. Similar to
decimal numbers, semantic version numbers are compared initially by the
magnitude of their major type, then by their minor and finally by patch types.
For example, version 3.2.1 is lower than versions 4.0.0 (by a major), 3.3.1
(by a minor), and 3.2.2 (by a patch), but greater than versions 2.2.1 (by a
major), 3.1.1 (by a minor), and 3.2.0 (by a patch).
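This precedence reduces to ordinary lexicographic comparison of the three numeric components, as the following sketch shows:

```python
def semver_key(version: str):
    """Map 'MAJOR.MINOR.PATCH' to an integer tuple so that Python's
    built-in tuple comparison matches semantic-versioning precedence."""
    return tuple(int(part) for part in version.split("."))
```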
While semantic versioning is a promising technique to specify the type of
changes in a new package release, and even though it is recommended by
ecosystem maintainers [27], it is not always straightforward to use in
practice. For example, a package developer can mistakenly flag a new release
as a patch release while it is actually a major release. Such a mistake can
lead to many problems, most notably breaking the applications that depend on
the package. In this paper, we formulated the determination of the semantic
versioning type of a new package release as a research problem, aiming to
help npm package developers find the right semantic versioning type for their
new releases. As a result, this will increase trust in packages and reduce
the breakage of applications that depend on them.
## 3 Case Study Design
Table 1: The selection steps of the studied JavaScript packages that are published on npm. Selection Step | # Packages
---|---
Most starred packages | 100
Packages without post- and pre- releases | 96
Packages with more than 50 releases | 77
Packages without breakage releases | 36
The main goal of our study is to automatically determine the semantic
versioning type of a new release of a JavaScript package. To achieve this
goal, we proposed the use of machine learning techniques. We began by
selecting JavaScript packages that have a sufficient number of releases and
whose developers use semantic versioning to identify the type of new releases.
Next, we used the selected npm packages as a labelled dataset. Then, we mined
the source code and development history of the selected JavaScript packages to
extract release-level features and used them as independent variables in our
machine learning classifiers. In the following subsections, we detail our
labelled dataset, data extraction and processing steps, and the training of
our classifiers.
Table 2: Statistics of the studied JavaScript packages. The Table shows the
name, number of commits, releases, analyzed releases, percentage of major,
minor, patch releases of the studied packages.
Package | Commits | Release | Analyzed | %Major | %Minor | %Patch
---|---|---|---|---|---|---
renovate | 5,226 | 2293 | 1156 | 0.61 | 23.44 | 75.95
turtle.io | 1,110 | 413 | 294 | 2.38 | 8.16 | 89.46
sweetalert2 | 1,924 | 327 | 266 | 2.63 | 20.68 | 76.69
seek-style-guide | 579 | 280 | 222 | 10.81 | 39.19 | 50.00
oui | 722 | 226 | 207 | 4.35 | 5.31 | 90.34
react-isomorphic-render | 977 | 286 | 176 | 5.68 | 6.82 | 87.50
reactive-di | 625 | 133 | 107 | 6.54 | 8.41 | 85.05
module-deps | 492 | 135 | 104 | 5.77 | 30.77 | 63.46
express-processimage | 595 | 122 | 102 | 7.84 | 39.22 | 52.94
sku | 340 | 122 | 101 | 5.94 | 31.68 | 62.38
bittorrent-dht | 633 | 115 | 97 | 8.25 | 38.14 | 53.61
nightwatch-cucumber | 634 | 132 | 97 | 9.28 | 21.65 | 69.07
socketcluster-server | 282 | 111 | 94 | 12.77 | 27.66 | 59.57
eslint-config-canonical | 360 | 133 | 90 | 14.44 | 22.22 | 63.33
patchbay | 2,031 | 108 | 87 | 6.90 | 43.68 | 49.43
penseur | 210 | 95 | 81 | 8.64 | 50.62 | 40.74
mongo-sql | 511 | 87 | 78 | 7.69 | 12.82 | 79.49
pacote | 615 | 102 | 77 | 10.39 | 20.78 | 68.83
octokit/routes | 645 | 99 | 77 | 15.58 | 29.87 | 54.55
box-ui-elements | 1,329 | 88 | 72 | 9.72 | 52.78 | 37.50
rtc-quickconnect | 661 | 92 | 72 | 9.72 | 47.22 | 43.06
terrestris/react-geo | 2,846 | 73 | 69 | 11.59 | 46.38 | 42.03
rtcpeerconnection | 311 | 82 | 67 | 8.96 | 26.87 | 64.18
speakingurl | 429 | 78 | 66 | 19.70 | 28.79 | 51.52
license-checker | 377 | 70 | 65 | 35.38 | 18.46 | 46.15
octokit/fixtures | 378 | 81 | 64 | 12.50 | 51.56 | 35.94
repofs | 574 | 73 | 63 | 11.11 | 23.81 | 65.08
jsonrpc-bidirectional | 511 | 97 | 62 | 11.29 | 40.32 | 48.39
nes | 370 | 67 | 61 | 14.75 | 34.43 | 50.82
zapier-platform-cli | 1,003 | 69 | 61 | 11.48 | 27.87 | 60.66
rtc-signaller | 546 | 79 | 60 | 10.00 | 41.67 | 48.33
Mean | 898.30 | 202.20 | 138.50 | 10.09 | 29.72 | 60.20
Median | 595.00 | 102.00 | 81.00 | 9.72 | 28.79 | 59.57
### 3.1 Test Dataset
To perform our study, we needed to obtain a number of JavaScript packages that
follow semantic versioning guidelines to mark their release types. To build
our labelled dataset, we started by looking at JavaScript packages that are
published on the Node Package Manager (npm). We chose npm package manager as
it is the official registry and repository for JavaScript packages.
To collect our dataset, we resorted to the public repository of npm that
contains a list of all the published packages on npm [52]. The npm repository
contains metadata about every published package, such as the different
releases of a package, the date of each release, and the release type. Since
there is a large number of packages published on npm and some of them are not
of high quality [2], we applied filtering steps to select the packages that we
wanted to study. We used four main criteria to
ensure that our dataset contains high-quality packages. The summary statistics
of these steps are shown in Table 1.
The first criterion in our selection process is to select mature and popular
packages. To do so, we chose the top 100 npm packages in our dataset based on
the number of stars they received on Github. We chose to use the number of
stars since prior work shows that the number of stars can provide a good proxy
for the popularity and maturity of software applications and packages [12,
22].
Second, we eliminated from the dataset any package that contains at least one
release labelled as a pre-release or post-release. We chose packages that do
not have pre- or post-releases since this is a good indicator that the
developers of those packages are familiar with semantic versioning practices
[23]. Eliminating those packages also simplified our classification process,
since we then have only the three semantic versioning types as labels in our
dataset.
The third criterion was to select packages with a sufficient number of
releases. We filtered out from our dataset any package that does not have at
least five releases of each semantic versioning type and at least 50 releases
in total. We
excluded packages with a small number of releases since we wanted to use ML
techniques to determine the type of semantic versioning. Thus, we wanted to
have a sufficient number of labelled releases so that we could build robust ML
classifiers.
Figure 1: Our approach for identifying the release period in the development history on GitHub.
We finally excluded packages that have any breakage releases identified by
developers. It is important to note that we performed this filtration step to
ensure that the developers of our studied packages understand semantic
versioning and use it adequately in practice. Thus, we had a high-quality
labelled dataset. To examine this criterion, for every npm package in our
dataset, we searched on Github for the applications that use these packages.
Then, we analyzed the development history of those applications. After that,
we examined them to see whether the developers of those applications that use
the package had downgraded a version of that package and indicated that they
performed the downgrade due to a breakage in the release of the package.
Mainly, we analyzed the historical data of these applications and identified
the commits where the developers rolled back a version of the selected
packages. We then manually examined those commits to determine if developers
rolled back a version of the selected packages due to a breaking release that
is not correctly specified by the right semantic versioning tag. Finally, we
removed any package from our dataset containing at least one case of such a
rollback. At the end of this step, we ended up having 36 packages in our
dataset.
### 3.2 Dataset Preparation
Once we decided which npm packages we would use in our study, we cloned them
locally and collected their metadata information from the npm registry. Then,
we built a semantic versioning parser that analyzes every sequential release
of every package and labels the release type, i.e., whether a release is a
major, minor, or patch release based on the prior release. For example,
suppose a package has an older release with the semantic version number
3.2.6, and the subsequent release (by date) has the version number 3.3.6; in
that case, we labelled the latter as a minor release. It is worth mentioning
that by following this process, we were able to identify and eliminate any
backport releases from our dataset.
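A minimal sketch of this labelling step (the data shapes and the backport check are illustrative; this is not the exact parser we built):

```python
def label_releases(releases):
    """Given (iso_date, version) pairs, label each release as major,
    minor, or patch relative to its predecessor in date order.
    Releases whose version is not greater than the previous one
    (e.g., backports) are skipped."""
    labelled, prev = [], None
    for _, version in sorted(releases):  # sort by release date
        key = tuple(int(x) for x in version.split("."))
        if prev is not None and key > prev:
            if key[0] > prev[0]:
                labelled.append((version, "major"))
            elif key[1] > prev[1]:
                labelled.append((version, "minor"))
            else:
                labelled.append((version, "patch"))
        if prev is None or key > prev:
            prev = key
    return labelled
```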
In the next step and since we wanted to extract features based on the source
code and the development history of the packages’ releases in our study, we
needed to have the source code and the development history of each package in
our dataset. Therefore, for each package in our dataset, we started by
collecting their metadata information and source code from the public
repository of npm. To do so, for each npm package in our dataset, we
downloaded the appropriate ‘tar’ file that contains the source code of every
release of that package. In addition, we collected the release date for every
release of the packages and the GitHub repository URL of the packages.
Now, we had the source code of each release. Next, we wanted to collect the
historical development data from the GitHub repository of each package. We
used the provided URL link to the GitHub repository to access the development
history. Then, we cloned the GitHub repository of each package and analyzed
it. However, we could not clone two package repositories because their GitHub
repositories either no longer exist or were changed to private. In addition,
based on our research experience with the npm registry, we noted that more
than one npm package can be hosted in the same GitHub repository (i.e., a
monorepo). Thus, we manually examined the selected packages and removed three
packages whose GitHub repository contains more than one npm package.
Once we collected the release information from npm and GitHub repositories, we
used a heuristic approach based on the release date to link each release to
its development history on the GitHub repository. Figure 1 shows the overall
approach. First, we obtained the release date from the npm registry for each
package release in our dataset. Then, we extracted all the commits and their
metadata, including the commit date. Based on the release dates, we
identified the first commit and the last commit for each release (i.e., the
release timeframe). Having the source code and the development history of
each package release in our dataset, we analyzed these data to extract a
comprehensive set of features. We describe our process for extracting the
studied features for the npm packages in our dataset in the next section
(Section 3.3).
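The date-based linking heuristic can be sketched as follows (a simplification: each commit is bucketed into the earliest release dated on or after it):

```python
from datetime import date

def commits_per_release(release_dates, commit_dates):
    """Bucket commits into release timeframes: each commit is assigned
    to the earliest release whose date is on or after the commit's date."""
    releases = sorted(release_dates)
    buckets = {r: [] for r in releases}
    for c in sorted(commit_dates):
        for r in releases:
            if c <= r:
                buckets[r].append(c)
                break
    return buckets
```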
Table 2 presents various statistics of our studied JavaScript packages from
npm. It shows first the name of the package and the number of commits. In
addition, the Table shows the total number of releases, the number of analyzed
releases of the studied packages, and the percentage of major, minor, and
patch releases of the studied packages. In total, there are 31 packages in our
dataset.
### 3.3 Features for Semantic Versioning Classification
Since our goal is to perform release-level predictions to determine the
semantic versioning type of a new package release, we resorted to using some
of the most commonly used release-level features. Some of these features were
used in prior software engineering tasks to identify post-release defects [63]
or used to determine crashing releases of mobile apps [74]. Therefore, we
believed that some of these features can be used to determine the level of
complexity of a new package release, hence, providing useful information as to
determine the type of a new release.
To perform our study of determining the semantic versioning type of a new
release, we resorted to using release-level features. In total, we extracted
41 features that are categorized into six dimensions. We distinguished between
these feature categories since: 1) it allowed us to observe the contribution
of different types of features, and 2) these categories let us organize how we
created and interpreted features related to determining the semantic
versioning type. In general, we extracted these features from analyzing the
source code and the development activities of each new package release in our
dataset. Table 3 presents the names and the definition of the extracted
features, and the rationale for examining them. In the following subsections,
we presented the detailed process of extracting the studied features in each
of the six dimensions.
Change Type Features: Change type features present the source code elements
that may impact the semantic versioning type of a new package release. To
extract change type features, we resorted to using source code analysis to
calculate these features (described in Table 3). Thus, we analyzed the changes
made after each release and extracted fine-grained source code change types.
To extract the features from code changes, we used the GumTree code
differencing algorithm [30]. GumTree takes as input the pair of revision files
and creates two Abstract Syntax Trees (ASTs) that are used to compare those
different revisions. As a result, GumTree outputs a list of fine-grained
source code changes (e.g., an update in a method invocation or rename). Then,
we wrote scripts that extract the fine-grained source code change types based
on the GumTree algorithm.
To extract change types features based on code that happened in each release,
we needed to have the complete version of the JavaScript files before and
after the release. To do so, we ran the diff command line between two
consecutive releases. Then, we extracted all the JavaScript files where the
files’ names have a .js extension (i.e., JavaScript source file). Once we had
the two revisions of each changed file in two consecutive releases, we ran the
GumTree tool on them. After that, we analyzed the results of GumTree to
extract the change-type features. Since the GumTree tool’s output is in a JSON
format, we parsed the resulting JSON files to retrieve the differences between
the before and after file versions. Based on the results of this step, we
counted the number of element changes between every two revisions of a file
and then summed them up to get a change-type value for each release.
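The counting step can be sketched as below; note that the `actions`/`action` field names are assumptions about the JSON layout used for illustration, not guaranteed by GumTree:

```python
import json
from collections import Counter

def count_change_types(diff_json: str) -> Counter:
    """Tally fine-grained change actions from a GumTree-style JSON diff.
    The field names used here ('actions', 'action') are illustrative
    assumptions about the tool's output schema."""
    actions = json.loads(diff_json).get("actions", [])
    return Counter(a.get("action", "unknown") for a in actions)
```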
Dependencies Features: Dependency features present the dependencies change
activities that occurred while developing a new package release. To calculate
the dependency-related features, we analyzed the changes that happened to the
package.json file. First, we analyzed the package.json file since it is the
configuration file used in the studied packages to manage and configure
dependencies. Then, we calculated the number of commits that touch the
package.json file and the number of commits that added, deleted, or updated
packages in the package.json file. We built a tool that analyzes the
package.json file at every release and compares it with the previous releases
to identify dependencies that were changed.
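The comparison amounts to diffing the dependency maps of two package.json revisions, roughly as sketched here (feature names PA/PD/PU follow Table 3):

```python
def dependency_changes(prev_deps, new_deps):
    """Count packages added (PA), deleted (PD), and version-updated (PU)
    between the 'dependencies' maps of two package.json revisions."""
    added = sum(1 for p in new_deps if p not in prev_deps)
    deleted = sum(1 for p in prev_deps if p not in new_deps)
    updated = sum(1 for p in new_deps
                  if p in prev_deps and new_deps[p] != prev_deps[p])
    return added, deleted, updated
```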
Complexity and Code Features: Complexity and code features represent the
package’s source code changes in each release. To calculate the complexity and
code features (e.g., the average difference in cyclomatic complexity and the
total lines of code added and deleted) for each examined release in our dataset, we
analyzed the release’s source code and computed the diff of the analyzed
release with the previous releases. To achieve this, we ran the Understand
tool [62] on every release for the examined packages in our dataset and
calculated the difference between the current release and the one before.
Time Feature: The time feature presents the time that a new release takes to
be developed and published. We counted the number of days a new release takes
to be published since the previous release date to calculate the time feature.
Development Features: Development features present the development activities
performed during the development of a new release of a package. To calculate
the development features, we analyzed the GitHub repository of each package in
our dataset. Then we measured the number of commits, unique developers, open
issues, closed pull requests, and open pull requests that occurred during that
release development timeframe.
Textual Features: Text features present extracted information from the commit
change logs that the developers have written during the development of a new
release. To extract the text features, we analyzed the commit message and
looked for specific keywords, “major”, “patch”, “break”, and then counted the
number of commits containing these keywords in each release. To identify
bug-fixing commits, we used a well-known approach based on
examining the appearance of a pre-defined set of keywords that include “bug”,
“fix”, “defect”, “error”, “issue”, and their variants in commit messages [64,
69]. Then, we counted those commits in every studied release.
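The keyword matching can be sketched as a simple case-insensitive substring check (so variants such as "fixes" or "bugs" also count):

```python
# Keyword set taken from the text; matching is illustrative.
BUG_KEYWORDS = ("bug", "fix", "defect", "error", "issue")

def count_keyword_commits(messages, keywords):
    """Count commit messages containing any of the given keywords,
    using case-insensitive substring matching so that variants count."""
    return sum(1 for m in messages
               if any(k in m.lower() for k in keywords))
```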
Table 3: Features used to determine the semantic versioning type of a new npm
package release.
Dim. | Name | Definition | Rationale
---|---|---|---
Change type | AJF | The number of JavaScript files added between two releases. | Releases that modify several JavaScript files or functions, or change the code structure of an npm package, tend to be major releases rather than minor or patch releases, so these change types can provide good indications of the semantic versioning type of a new npm package release. In other words, releases that add new JavaScript functionality are not small releases and are more likely to be major releases. For example, if several JavaScript files are deleted in a new package release, that release is not expected to be a patch or a minor release. As another example, if several non-JavaScript files are changed (i.e., added, deleted, or modified) in a new package release, then the release is likely to be a patch or a minor release.
MJF | The number of JavaScript files modified between two releases.
DJF | The number of JavaScript files deleted between two releases.
ANJF | The number of non-JavaScript files added between two releases.
DNJF | The number of non-JavaScript files deleted between two releases.
MNJF | The number of non-JavaScript files modified between two releases.
ADM | The number of methods that are added between two releases. |
DEM | The number of methods that are deleted between two releases. |
MOM | The number of methods that are moved between two releases. |
MNC | The number of methods whose names are changed between two releases. |
MPC | The number of methods whose input parameters are changed between two releases. |
MPD | The number of methods whose input parameters are deleted between two releases. |
MLA | The number of logics in methods are added between two releases. |
MLM | The number of logics in methods are moved between two releases. |
MLD | The number of logics in methods are deleted between two releases. |
GVA | The number of global variables added in JavaScript files between two releases. |
GVD | The number of global variables deleted in JavaScript files between two releases. |
ICC | The number of total code comments added between two releases. |
DCC | The number of total code comments deleted between two releases. |
MCC | The number of total code comments modified between two releases. |
Dependency | TCPJ | The number of changes to the package.json file. | The releases that have more updates to the package dependencies list are more likely not to be patch releases. For example, adding more dependencies into the package dependencies list in the new release can indicate that this release is a major release. Another example, the changes that delete more dependencies in the new release can indicate a major release rather than a minor or a patch release.
PA | The number of used packages added between two releases.
PD | The number of used packages deleted between two releases.
PU | The number of used packages’ versions changed between two releases.
Complexity | ACYCD | The difference average of Cyclomatic between two consecutive releases. | We expect that the complexity and code features provide strong indicators of the semantic versioning type of the new release. If the complexity and the package size change a lot in the new release, these changes will likely present the type of semantic versioning release. For example, a large diff number of lines between two releases indicate that the new release introduces more code and is more likely not to be a patch or a minor release.
CLCJD | The difference of lines of code between two consecutive releases.
CYCD | The difference Cyclomatic between two consecutive releases.
LA | The total line of code added between two releases. |
LD | The total line of code deleted between two releases. |
Time | RDTD | The timestamp difference between two consecutive releases. | A release whose development takes a long time tends to contain many changes, and is therefore not likely to be a patch release.
Development | TCM | The total number of commits between two releases. | The semantic versioning type of a new package release heavily depends on the amount of development activity in that release. For example, many commits or many closed pull requests during a release's development indicate that the release is not a patch release but tends to be a major or a minor package release.
TAU | The total number of authors made changes between two releases.
POI | The total number of open issue between two releases.
PCI | The total number of closed issue between two releases.
PCPR | The total number of closed pull request between two releases.
POPR | The total number of open pull request between two releases.
Textual | NBF | The total number of bug-fixing commits between two releases. | The commit message describes the purpose of a commit. For example, several commit messages containing the keywords major changes or breaking changes in a release's development history provide a strong indication that the release is a major release. On the other hand, releases whose commit messages contain the word minor tend to be minor or patch releases.
KWM | The total number of commits that have keyword major in commit message in the release.
KWP | The total number of commits that have keyword patch in commit message in the release.
KWB | The total number of commits that have keyword break in commit message in the release.
AML | The average commit message length in commits happened in the release.
### 3.4 Classification Algorithms
To perform our classification task, we chose four different machine learning
algorithms. In particular, we chose to use XGBoost (XGB), Random Forest (RF),
Decision Tree (DT), and Logistic Regression (LR) algorithms to classify
whether a new package release is a major, minor, or patch. We resorted to
using these ML algorithms since they 1) have different assumptions on the
examined dataset, 2) show different characteristics in terms of dealing with
overfitting and execution speed [18], and 3) provide an intuitive and
straightforward explanation of the classification, which enables developers to
easily understand why a decision to determine the type of package release was
made [41]. In addition, they have been commonly used in the past in other
software engineering studies and datasets (e.g., [32, 38, 6, 73, 67, 36,
35]). We then compared the performances of these different supervised
classifiers to determine the type of release. We now briefly describe the
four examined machine learning algorithms.
XGBoost (XGB): The XGBoost classifier is an extended and innovative
application of gradient boosting algorithm proposed by Chen et al. [21].
Gradient boosting is an algorithm in which new models are created that predict
the residuals of prior models and then added together to make the final
prediction. Models are added recursively until no noticeable improvements can
be detected. This approach supports both regression and classification.
XGBoost has proven to push the limits of computing power for boosted tree
algorithms. Furthermore, prior work showed that applying the XGBoost
classifier on software engineering data produced good performance (e.g., [28,
46]).
Random Forest (RF): The Random Forest classifier is a type of combination
approach, which is bagging and random subsets meta classifier based on a
decision tree classifier [15]. Random Forest combines multiple decision trees
for prediction. First, each decision tree is built based on the value of an
independent set of random vectors. Then, the Random Forest classifier adopts
the mode of the class labels output by individual trees. Also, prior work
showed that it performs well on software engineering problems (e.g., [59,
75]).
Decision Tree (DT): The decision trees classifier first creates a decision
tree based on the feature values of the training data where internal nodes
denote the different features [57]. The branches correspond to the value of a
particular feature, and the leaf nodes correspond to the classification of the
dependent variable. Then, the decision tree is made recursively by identifying
the feature(s) that discriminate the various instances most clearly, i.e.,
having the highest information gain [34]. Once a decision tree is built, the
classification for a new instance is performed by checking the respective
features and their values.
Logistic Regression (LR): The Logistic Regression is used to estimate the
probability of a binary response based on one or more independent variables
(i.e., features). Previous work showed that regression-based classifiers,
especially logistic regression, usually achieve high performance on software
engineering classification tasks (e.g., [32, 38]).
Baseline: Finally, to put our ML classification results in perspective, we
chose to use a simpler classifier as a baseline. In our study, we decided to
use the ZeroR (ZR) classifier, which is a primitive classifier [13]. It
basically predicts the majority class in the training data for all cases in
the test data without considering the independent features.
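A ZeroR baseline is simple enough to state in a few lines (a sketch; scikit-learn's `DummyClassifier` with `strategy="most_frequent"` behaves the same way):

```python
from collections import Counter

class ZeroR:
    """Baseline that always predicts the majority class of the training
    labels, ignoring every independent feature."""
    def fit(self, X, y):
        self.majority_ = Counter(y).most_common(1)[0][0]
        return self

    def predict(self, X):
        return [self.majority_] * len(X)
```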
### 3.5 Training and Testing Classifiers
To conduct our experiments and answer our research questions, we constructed
an ML pipeline to build three different groups of classifiers. We first built
within-package classifiers where we used all the six dimensions of features to
train and test data from one package. Second, we built within-package
classifiers for each package based on each feature’s dimensions (i.e., for
each package, we built six classifiers). Finally, we built cross-package
classifiers, where for each package, a cross-package classifier is trained on
data from all packages except one and tested on the remaining one package.
Since, in our case, we have a multi-class ML problem (major, minor, or
patch), we reformulated it as binary classification problems. In other words,
we used a one-versus-the-rest approach [50], which eases the interpretation
of our classifiers’ outcomes. In our study, we built three
one-versus-the-rest classifiers, one for each release type: a major release
or not, a minor release or not, and a patch release or not. This requires
creating three different ML classifiers and training each of them with true
positives and true negatives (e.g., true minor releases and non-minor
releases). Furthermore, to train and
test our classifiers, we used the 5-fold cross-validation technique. In each
5-fold cross-validation, we divided the dataset into five folds. Then, four
folds are used to train the classifier, while the remaining one fold is used
to evaluate the performance of the built classifier. This process is repeated
five times so that each fold is used exactly once as the testing set. We
resorted to using 5-fold cross-validation to reduce the bias due to random
training data selection [8]. We finally reported the average performance
across these test runs. The reported results are the average of 5-fold cross-
validation, such that each sample in the total dataset was included exactly in
one test set. We implemented our examined classifiers using scikit-learn [53].
We also used the default scikit-learn configuration to set the different
parameters of the examined classifiers.
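The pipeline above can be sketched with scikit-learn; the data here is synthetic and logistic regression stands in for any of the four classifiers, so the numbers are illustrative only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 200
X = rng.normal(size=(n, 6))  # stand-in for the six feature dimensions
release = rng.choice(["major", "minor", "patch"], size=n, p=[0.1, 0.3, 0.6])

# One-versus-the-rest: one binary problem per release type.
mean_auc = {}
for kind in ["major", "minor", "patch"]:
    y = (release == kind).astype(int)  # e.g. "minor release or not"
    clf = LogisticRegression()  # default configuration, as in the study
    scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    mean_auc[kind] = scores.mean()  # average over the five folds
```

With real release-level features in `X`, `mean_auc` would correspond to one row of Table 4 for one classifier.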
Furthermore, as shown in Table 2, our dataset contains on average 10.09%,
29.72%, and 60.20% major, minor, and patch releases, respectively, which
indicates that the dataset is imbalanced. Data imbalance occurs when one class
occurs much more frequently than the others, causing trained classifiers to
learn mostly from the features of the majority class rather than those of the
minority classes [65]. To deal with the imbalance
problem in our experiments, we applied the synthetic minority oversampling
technique (SMOTE). SMOTE is an oversampling method that can effectively boost
a classifier's performance on an imbalanced dataset such as ours [20]. We
applied this sampling technique because it balances the class sizes, allowing
us to report standard performance measures and to interpret our results more
reliably. It is essential to highlight
that we only applied this sampling technique to the training dataset. We did
not re-sample the testing dataset since we want to evaluate our classifier in
a real-life scenario, where the data might be imbalanced.
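The study applied an off-the-shelf SMOTE implementation; the core idea, interpolating between a minority sample and one of its k nearest minority neighbors, can be sketched in a few lines of numpy (a simplified illustration, not the full algorithm), applied to the training split only:

```python
import numpy as np

def smote_like(X_min, n_new, k=5, rng=None):
    """Generate synthetic minority samples by interpolating between each
    chosen sample and one of its k nearest minority neighbors.
    Simplified sketch of the SMOTE idea, not a full implementation."""
    rng = rng if rng is not None else np.random.default_rng(0)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        dist = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(dist)[1:k + 1]  # skip the sample itself
        j = rng.choice(neighbors)
        lam = rng.random()  # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

rng = np.random.default_rng(1)
X_train_minority = rng.normal(size=(20, 4))  # minority class, training split only
synthetic = smote_like(X_train_minority, n_new=30, rng=rng)
```

Only the training folds are resampled this way; the test fold keeps its natural, possibly imbalanced, distribution.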
### 3.6 Performance Measures
To evaluate the performance of the used four machine learning classifiers and
compare their performance to our baseline, the ZeroR classifier, we calculated
the Area Under the Receiver Operating Characteristic curve (ROC-AUC). ROC-AUC
is a well-known evaluation measurement that is considered statistically
consistent. In the ROC curve, the true positive rate (TPR) is plotted as a
function of the false positive rate (FPR) across all thresholds. More
importantly, ROC-AUC is a threshold-independent measure [14]. A threshold is
the probability cutoff for deciding whether an instance is classified as
positive or negative. The threshold is usually set to 0.5, and other
performance measures, such as the F1-score, depend heavily on its choice.
However, some situations, such as class imbalance, call for a different
threshold. Thus, we used ROC-AUC to avoid the threshold-setting problem, since
it measures classification performance across all thresholds (i.e., from 0 to 1).
Likewise, ROC-AUC has the advantage of being robust towards class
distributions [44, 51].
The ROC-AUC takes a value between 0 and 1, where 1 indicates perfect
classification and 0 indicates completely wrong classification. It is
important to note that prior work shows that a ROC-AUC value of 0.5 means the
classifier performs no better than random, while a value equal to or greater
than 0.7 indicates acceptable classifier performance on software engineering
datasets [51, 44, 75].
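ROC-AUC's threshold independence follows from the fact that it depends only on how the scores rank the samples, not on their absolute values. A small illustration with made-up labels and scores:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])

auc = roc_auc_score(y_true, scores)
# A monotone rescaling changes every score but preserves the ranking,
# so the ROC-AUC is unchanged -- no 0.5 cutoff is involved.
auc_rescaled = roc_auc_score(y_true, scores / 10.0)
```

Here 8 of the 9 (positive, negative) pairs are ranked correctly, giving an AUC of 8/9 in both cases.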
## 4 Case Study Results
Table 4: The performance of the examined four ML classifiers for determining
the release type - major, minor, and patch. The results are reported for
XGBoost (XGB), Random Forest (RF), Decision Tree (DT), and Logistic Regression
(LR). In addition, the table shows the results of our baseline classifier,
the ZeroR (ZR). The best performance values are highlighted in bold.
Packages | Major | Minor | Patch
---|---|---|---
XGB | RF | ZR | DT | LR | XGB | RF | ZR | DT | LR | XGB | RF | ZR | DT | LR
sweetalert2 | 0.85 | 0.92 | 0.44 | 0.59 | 0.76 | 0.73 | 0.71 | 0.49 | 0.56 | 0.59 | 0.74 | 0.74 | 0.52 | 0.61 | 0.65
renovate | 0.93 | 0.89 | 0.43 | 0.49 | 0.67 | 0.87 | 0.84 | 0.50 | 0.69 | 0.66 | 0.86 | 0.81 | 0.51 | 0.71 | 0.67
speakingurl | 0.73 | 0.73 | 0.50 | 0.60 | 0.73 | 0.44 | 0.34 | 0.53 | 0.46 | 0.65 | 0.74 | 0.72 | 0.45 | 0.64 | 0.63
license-checker | 0.62 | 0.64 | 0.47 | 0.52 | 0.46 | 0.59 | 0.50 | 0.49 | 0.52 | 0.39 | 0.73 | 0.75 | 0.52 | 0.63 | 0.62
bittorrent-dht | 0.86 | 0.87 | 0.42 | 0.54 | 0.65 | 0.51 | 0.61 | 0.54 | 0.49 | 0.59 | 0.67 | 0.74 | 0.48 | 0.60 | 0.53
nes | 0.48 | 0.42 | 0.44 | 0.56 | 0.49 | 0.82 | 0.76 | 0.51 | 0.66 | 0.63 | 0.68 | 0.66 | 0.53 | 0.60 | 0.67
box-ui-elements | 0.84 | 0.89 | 0.42 | 0.64 | 0.68 | 0.68 | 0.60 | 0.49 | 0.61 | 0.61 | 0.74 | 0.76 | 0.53 | 0.63 | 0.83
sku | 0.86 | 0.73 | 0.50 | 0.60 | 0.50 | 0.79 | 0.75 | 0.51 | 0.66 | 0.56 | 0.78 | 0.70 | 0.44 | 0.67 | 0.64
mongo-sql | 0.83 | 0.68 | 0.48 | 0.70 | 0.50 | 0.64 | 0.78 | 0.43 | 0.61 | 0.72 | 0.65 | 0.68 | 0.43 | 0.62 | 0.62
pacote | 0.93 | 0.90 | 0.52 | 0.78 | 0.84 | 0.82 | 0.81 | 0.46 | 0.61 | 0.77 | 0.85 | 0.87 | 0.45 | 0.71 | 0.66
seek-style-guide | 0.72 | 0.62 | 0.48 | 0.55 | 0.42 | 0.76 | 0.76 | 0.51 | 0.63 | 0.55 | 0.75 | 0.73 | 0.49 | 0.67 | 0.61
nightwatch-cucumber | 0.76 | 0.81 | 0.48 | 0.56 | 0.46 | 0.73 | 0.80 | 0.53 | 0.61 | 0.65 | 0.76 | 0.83 | 0.50 | 0.70 | 0.65
zapier-platform-cli | 0.87 | 0.85 | 0.54 | 0.75 | 0.69 | 0.78 | 0.75 | 0.54 | 0.57 | 0.65 | 0.82 | 0.83 | 0.48 | 0.73 | 0.64
patchbay | 0.68 | 0.60 | 0.51 | 0.45 | 0.33 | 0.69 | 0.72 | 0.47 | 0.62 | 0.59 | 0.68 | 0.73 | 0.48 | 0.60 | 0.57
module-deps | 0.75 | 0.80 | 0.57 | 0.51 | 0.64 | 0.65 | 0.60 | 0.47 | 0.59 | 0.43 | 0.68 | 0.61 | 0.48 | 0.59 | 0.49
turtle.io | 0.77 | 0.88 | 0.53 | 0.56 | 0.79 | 0.80 | 0.76 | 0.49 | 0.58 | 0.64 | 0.81 | 0.85 | 0.54 | 0.72 | 0.77
rtcpeerconnection | 0.75 | 0.62 | 0.50 | 0.57 | 0.71 | 0.59 | 0.55 | 0.54 | 0.55 | 0.57 | 0.62 | 0.44 | 0.51 | 0.55 | 0.63
react-isomorphic-render | 0.82 | 0.80 | 0.55 | 0.54 | 0.59 | 0.74 | 0.75 | 0.48 | 0.55 | 0.47 | 0.80 | 0.80 | 0.51 | 0.73 | 0.60
rtc-quickconnect | 0.78 | 0.85 | 0.58 | 0.60 | 0.78 | 0.72 | 0.66 | 0.51 | 0.66 | 0.58 | 0.78 | 0.78 | 0.50 | 0.64 | 0.63
terrestris/react-geo | 0.64 | 0.75 | 0.45 | 0.50 | 0.53 | 0.67 | 0.66 | 0.45 | 0.61 | 0.60 | 0.71 | 0.66 | 0.58 | 0.63 | 0.62
eslint-config-canonical | 0.82 | 0.83 | 0.50 | 0.75 | 0.56 | 0.64 | 0.69 | 0.48 | 0.55 | 0.49 | 0.74 | 0.74 | 0.51 | 0.63 | 0.58
repofs | 0.80 | 0.91 | 0.47 | 0.57 | 0.57 | 0.72 | 0.84 | 0.49 | 0.58 | 0.42 | 0.76 | 0.83 | 0.49 | 0.65 | 0.58
penseur | 0.64 | 0.76 | 0.49 | 0.49 | 0.61 | 0.68 | 0.66 | 0.57 | 0.58 | 0.56 | 0.75 | 0.75 | 0.45 | 0.71 | 0.73
octokit/routes | 0.82 | 0.65 | 0.49 | 0.68 | 0.65 | 0.71 | 0.59 | 0.52 | 0.55 | 0.56 | 0.63 | 0.67 | 0.53 | 0.57 | 0.57
socketcluster-server | 0.78 | 0.80 | 0.42 | 0.58 | 0.73 | 0.45 | 0.45 | 0.46 | 0.49 | 0.46 | 0.70 | 0.73 | 0.47 | 0.68 | 0.63
oui | 0.88 | 0.96 | 0.54 | 0.69 | 0.65 | 0.95 | 0.84 | 0.44 | 0.70 | 0.64 | 0.91 | 0.94 | 0.55 | 0.83 | 0.75
express-processimage | 0.67 | 0.39 | 0.46 | 0.48 | 0.47 | 0.62 | 0.61 | 0.46 | 0.60 | 0.51 | 0.69 | 0.68 | 0.50 | 0.59 | 0.61
octokit/fixtures | 0.75 | 0.71 | 0.57 | 0.77 | 0.61 | 0.74 | 0.70 | 0.52 | 0.70 | 0.65 | 0.70 | 0.62 | 0.48 | 0.61 | 0.52
jsonrpc-bidirectional | 0.62 | 0.50 | 0.49 | 0.61 | 0.53 | 0.63 | 0.59 | 0.58 | 0.58 | 0.67 | 0.57 | 0.62 | 0.48 | 0.51 | 0.60
reactive-di | 0.84 | 0.80 | 0.43 | 0.66 | 0.69 | 0.56 | 0.59 | 0.52 | 0.46 | 0.44 | 0.75 | 0.73 | 0.49 | 0.63 | 0.70
rtc-signaller | 0.81 | 0.85 | 0.59 | 0.51 | 0.59 | 0.63 | 0.64 | 0.57 | 0.61 | 0.57 | 0.80 | 0.76 | 0.52 | 0.64 | 0.65
Average | 0.77 | 0.76 | 0.49 | 0.59 | 0.61 | 0.69 | 0.67 | 0.50 | 0.59 | 0.58 | 0.74 | 0.73 | 0.50 | 0.65 | 0.63
Median | 0.78 | 0.80 | 0.49 | 0.57 | 0.61 | 0.69 | 0.69 | 0.50 | 0.59 | 0.59 | 0.74 | 0.74 | 0.50 | 0.63 | 0.63
Relative ROC-AUC | 1.58$X$ | 1.55$X$ | – | 1.21$X$ | 1.25$X$ | 1.38$X$ | 1.36$X$ | – | $1.18X$ | 1.15$X$ | 1.49$X$ | 1.48$X$ | – | 1.31$X$ | 1.28$X$
In this section, we present our case study results for our three research
questions. For each research question, we present the motivation behind it,
the approach to answering it, and the results.
### 4.1 RQ1: Can we effectively determine the semantic versioning type of a
new package release?
Motivation: Prior work showed that determining the type of new package release
is challenging [11]. Even though prior work proposed techniques to detect
semantic breaking API changes through static analysis for languages such as
Java [71, 58], such techniques require a clear definition of the public and
private API. Such a distinction does not explicitly exist in many dynamic
languages such as JavaScript. In this question, we aimed to effectively
determine the semantic versioning type of a new JavaScript package release,
since automatically determining the type can help guide package maintainers
in deciding the versioning type of a new release. In this RQ, we examined the
use of machine learning techniques to do so.
Method: For each package in our dataset, we used the extracted 41 release-
level features that are presented in Table 3 to train the four classifiers to
determine whether a new package release is a major, minor, or patch release.
We reformulated this classification task as a one-versus-the-rest
classification problem, since it is a multi-class classification problem [50].
One-versus-the-rest classifiers help us adequately interpret our classifiers'
results. We built one one-versus-the-rest classifier for each new release
type: a major release or not, a minor release or not, and a patch release or
not. Thus, we built three different classifiers, one per release type, where
the positive class is the examined release type (e.g., true minor releases vs.
non-minor releases).
After that, for each package, we used 5-fold cross-validation [8]. First, we
divided the dataset of each package into five folds. Then, we used four folds
(i.e., 80% of the data) to train the four ML classifiers and the remaining
fold (i.e., 20% of the data) to evaluate their performance. We repeated this
process five times so that each fold served once as the test set. In our
study, we used the four ML classifiers described in Section 3.4: XGBoost,
Random Forest, Decision Tree, and Logistic Regression.
Finally, to evaluate and compare the performance of the four ML classifiers in
determining the semantic versioning type of a new package release, we computed
the Area Under the Receiver Operating Characteristic curve (ROC-AUC). Then, to
obtain a single value per package, we averaged the ROC-AUC over the five folds
for every package in our examined dataset.
Table 5: Mann-Whitney test ($p$-value) and Cliff's Delta ($d$) for the results of the four classifiers vs. the baseline classifier for the three different semantic versioning release types.
ML | Major | Minor | Patch
---|---|---|---
p-value | d | p-value | d | p-value | d
XGB | 7.973e-11 | 0.96 | 1.061e-08 | 0.85 | 1.468e-11 | 0.99
RF | 9.392e-09 | 0.85 | 1.474e-08 | 0.84 | 2.16e-10 | 0.94
DT | 3.077e-06 | 0.69 | 3.382e-07 | 0.75 | 4.802e-11 | 0.97
LR | 4.105e-05 | 0.61 | 0.000254 | 0.54 | 2.81e-10 | 0.93
Since one of the main goals of using machine learning techniques is to help
determine the semantic versioning type of a new release, we measured how much
better the performance of the four used classifiers is compared to the
baseline for each package. In our case, the baseline classifier is a
classifier that always reports the class of interest based on the majority,
which is the ZeroR classifier. In this case, the ZeroR classifier will achieve
100% recall and a precision equal to the rate of the examined release type
(i.e., major, minor, or patch). We followed the previously described process to
train and test the ZeroR classifier.
Then, we compared the ROC-AUC values of the four classifiers against the
baseline by calculating the relative ROC-AUC (i.e.,
$\mathrm{Relative\ ROC\text{-}AUC}=\frac{\mathrm{Examined\ Classifier\ ROC\text{-}AUC}}{\mathrm{Baseline\ ROC\text{-}AUC}}$).
Relative ROC-AUC shows how much
better our classifiers perform compared to the baseline. For instance, if a
baseline achieves a ROC-AUC of 10%, while the XGBoost classifier, for example,
achieves a ROC-AUC of 20%, then the relative ROC-AUC is $\frac{20}{10}=2X$. In
other words, the XGBoost classifier performs twice as accurately as the
baseline classifier. It is important to note that the higher the relative ROC-
AUC value, the better the classifier is in determining the semantic versioning
type.
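The worked example above reduces to a single ratio:

```python
def relative_roc_auc(classifier_auc, baseline_auc):
    """Relative ROC-AUC = examined classifier ROC-AUC / baseline ROC-AUC."""
    return classifier_auc / baseline_auc

# The example from the text: 0.20 against a 0.10 baseline gives 2X,
# i.e., the classifier performs twice as well as the baseline.
factor = relative_roc_auc(0.20, 0.10)
```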
Finally, to examine whether the achieved improvement over the baseline
classifier is statistically significant, we performed a non-parametric Mann-
Whitney test [45] to compare, for each classifier, the distribution of its
results against that of the baseline and to determine whether the difference
is statistically significant, with a $p$-value $<$ 0.05 [45]. We also used
Cliff's Delta ($d$), a non-
parametric effect size measure to interpret the effect size between the four
classifier results and our baseline. We then interpreted the effect size value
to be small for d $<$ 0.33 (for positive or negative values), medium for 0.33
$\leq d$ $<$ 0.474 and large for $d\geq$ 0.474 [33].
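This comparison can be sketched with SciPy's Mann-Whitney test and a direct pairwise implementation of Cliff's delta; the per-package ROC-AUC values below are hypothetical, not the study's numbers:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def cliffs_delta(a, b):
    """Cliff's delta: P(a > b) - P(a < b) over all pairs (a_i, b_j)."""
    a, b = np.asarray(a), np.asarray(b)
    greater = (a[:, None] > b[None, :]).sum()
    less = (a[:, None] < b[None, :]).sum()
    return (greater - less) / (len(a) * len(b))

def magnitude(d):
    """Interpretation thresholds used in the text [33]."""
    d = abs(d)
    if d < 0.33:
        return "small"
    if d < 0.474:
        return "medium"
    return "large"

# Hypothetical per-package ROC-AUC values: a classifier vs. the ZeroR baseline.
clf_auc = np.array([0.85, 0.93, 0.73, 0.62, 0.86, 0.78, 0.84])
base_auc = np.array([0.44, 0.43, 0.50, 0.47, 0.42, 0.48, 0.41])

stat, p = mannwhitneyu(clf_auc, base_auc, alternative="two-sided")
d = cliffs_delta(clf_auc, base_auc)
```

With complete separation between the two samples, as here, Cliff's delta reaches 1.0 and falls in the "large" band, mirroring the pattern in Table 5.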
Result: Table 4 presents the ROC-AUC values of the four ML classifiers for
determining the release type of major, minor, and patch releases. Table 4
shows the results for XGBoost (XGB), Random Forest (RF), ZeroR (ZR), Decision
Tree (DT), and Logistic Regression (LR) for the 31 studied npm packages in our
dataset. Overall, we observe that for all three semantic versioning types
(i.e., major, minor, and patch), the four examined classifiers achieve
acceptable performance in terms of ROC-AUC values [51, 44].
First, to determine the major release type, Table 4 shows that the XGBoost
classifier achieves ROC-AUC values ranging between 0.48 and 0.93, with an
average of 0.77 (median$=$0.78). The Random Forest classifier achieves
comparable performance in classifying major release types, with an average
ROC-AUC of 0.76. Second, for minor releases, we observed that the XGBoost and
Random Forest classifiers again perform better than the Decision Tree and
Logistic Regression classifiers, with average ROC-AUC values of 0.69 and 0.67,
respectively. Lastly, the highest ROC-AUC values for determining
the patch release types obtained by the XGBoost classifier range between 0.57
and 0.91, with an average of 0.74 (median$=$0.74). In contrast, the second
highest average ROC-AUC for determining the patch release type is achieved by
Random Forest with ROC-AUC values ranging between 0.44 and 0.94 and with an
average value of 0.73 (median $=$ 0.74). In general, the achieved ROC-AUC
values indicate that the XGBoost classifier effectively determines the
different semantic versioning types compared to the other examined ML
classifiers.
Furthermore, Table 4 shows the average relative ROC-AUC values when comparing
the performance of the four classifiers to our baseline. Overall, the computed
relative ROC-AUC shows a significant improvement over the baseline. In
particular, for all the 31 packages, the XGBoost outperforms the baseline with
average relative ROC-AUC values of 1.58$X$, 1.38$X$, and 1.49$X$ for major,
minor, and patch release types, respectively.
Finally, Table 5 presents the adjusted $p$-values and effect sizes according
to the Cliff’s delta ($d$) test. We observed that the differences are
statistically significant in the three semantic versioning types and with a
large effect size ($d>$ 0.474).
Our machine learning classifiers achieved promising performance for
determining the semantic versioning type of a new package release. They also
outperformed our baseline classifier in terms of ROC-AUC values. Out of the
four examined ML classifiers, XGBoost tended to achieve the best performance
with an average ROC-AUC of 0.77, 0.69, and 0.74 for the major, minor, and
patch releases. These results translated to an improvement of 58%, 38%, and
49% compared to our baseline.
### 4.2 RQ2: Which dimensions of features are the most important in determining the semantic versioning type of a new package release?
Motivation: After determining the type of package release with adequate ROC-
AUC values and achieving a good improvement compared to our baseline, we are
now interested in understanding what dimensions of features impact determining
the type of new package releases the most. In our study, we have 41
release-level features grouped into six dimensions. Therefore, knowing which
dimension of features most influences the determination of a new release type
can provide a deeper understanding of these six dimensions. It also helps us
provide developers with actionable recommendations; in particular, developers
can learn which dimensions of features they should examine most carefully when
specifying the type of a new release.
(a) Major Releases
(b) Minor Releases
(c) Patch Releases
Figure 2: The distributions of the ROC-AUC values for the different built
classifiers.
Method: To identify the dimension of release-level features that are the most
important indicators of determining the semantic versioning type of a new
package release, we built several classifiers for each dimension. In
particular, for each package release type (i.e., major, minor, patch release),
we built six classifiers (one for each dimension of features). In total, we
built eighteen classifiers. For example, we built a classifier to determine
the major release using the change type dimension of features. To build and
evaluate these classifiers, we followed the same steps described in Section
3.5. Since we found in our previous question that the XGBoost classifier
achieves the best performance, we used it as the classifier in this analysis.
Furthermore, to compare and evaluate the performance of the classifiers built
on the different dimensions of features, we again used ROC-AUC. We then used
violin plots to compare the distributions of our results; the vertical curves
of the violin plots summarize and compare the distributions of the different
ROC-AUC results.
Result: Figure 2 shows violin plots of the ROC-AUC values for the built
XGBoost classifier for each dimension of features for the three semantic
versioning release types. Violin plots are an effective way of presenting the
distribution of data. We also superimposed box plots to highlight the key
statistics of our results.
From Figure 2, we observed that all the six dimensions of features in our
study appear to be important in determining the semantic versioning type of a
new package release. However, one dimension, the change type dimension, tended
to be a particularly strong indicator of the semantic versioning release type.
Notably, for the major release type, Figure 2(a) shows that
the best dimension of features to determine the major release type is the
change type dimension with an average ROC-AUC value equal to 0.72 (median $=$
0.72).
As for the minor release, the violin plots in Figure 2(b) show that the built
XGBoost classifiers using the change type dimension outperformed other built
classifiers in most of the studied npm packages. Furthermore, our results
showed that the classifiers built on the complexity and code dimension of
features achieved performance comparable to the change type classifiers, with
average ROC-AUC values of 0.70 for the change type dimension and 0.68 for the
complexity and code dimension.
For determining the patch release type, from Figure 2(c), we observed that two
built classifiers seemed to have comparable results, which are the classifiers
that were built using change type and complexity dimensions. These two built
classifiers achieved an average ROC-AUC value equal to 0.73 for each. Overall,
our built classifiers based on the six dimensions of features in determining
the patch release type tended to achieve better performance in terms of
average ROC-AUC compared to classifiers built to determine the major and minor
release.
Interestingly, some dimensions of features appeared to be good determinants of
specific release types. For example, the dependency-related features
identified patch releases with good performance. However, classifiers built on
the dependency dimension of features to determine major and minor releases did
not perform as well.
Our investigation showed that the built XGBoost classifiers using the change
type dimension of features tended to perform the best when used to determine
the semantic versioning release type compared to other built classifiers.
However, using all six dimensions of features together still achieved better
performance.
### 4.3 RQ3: How effective are the machine learning techniques when applied across packages?
Motivation: Building an ML classifier to determine the semantic versioning
release type on package-level requires having a sufficient amount of labelled
data to train on. However, many packages do not have enough historical
labelled data to build a classifier (e.g., packages that newly adopted
semantic versioning and/or new packages). Therefore, it would be impossible to
train a machine learning classifier to determine the semantic versioning type
of a new release on data from such packages alone. In this research question,
we investigated to what extent, and with what performance, the semantic
versioning type of a new package release can be automatically determined using
cross-package machine learning classification. In addition, answering this
question allowed us to
evaluate the generalizability of the built classifiers and their applications
when applied to other packages.
Table 6: Performance of cross-package classification. The results are
reported for the XGBoost (XGB) and ZeroR (ZR) classifiers.
Package | Major | Minor | Patch
---|---|---|---
XGB | ZR | XGB | ZR | XGB | ZR
sweetalert2 | 0.83 | 0.59 | 0.70 | 0.48 | 0.75 | 0.49
renovate | 0.58 | 0.47 | 0.79 | 0.45 | 0.83 | 0.51
speakingurl | 0.71 | 0.61 | 0.56 | 0.62 | 0.68 | 0.39
license-checker | 0.61 | 0.52 | 0.56 | 0.33 | 0.72 | 0.48
bittorrent-dht | 0.89 | 0.49 | 0.63 | 0.64 | 0.75 | 0.42
nes | 0.59 | 0.49 | 0.75 | 0.49 | 0.75 | 0.56
box-ui-elements | 0.65 | 0.57 | 0.62 | 0.46 | 0.76 | 0.40
sku | 0.70 | 0.51 | 0.80 | 0.49 | 0.80 | 0.49
mongo-sql | 0.76 | 0.40 | 0.55 | 0.54 | 0.60 | 0.59
pacote | 0.92 | 0.47 | 0.86 | 0.54 | 0.90 | 0.52
seek-style-guide | 0.64 | 0.48 | 0.75 | 0.46 | 0.77 | 0.48
nightwatch-cucumber | 0.78 | 0.53 | 0.80 | 0.58 | 0.82 | 0.53
zapier-platform-cli | 0.82 | 0.43 | 0.75 | 0.53 | 0.82 | 0.42
patchbay | 0.53 | 0.51 | 0.77 | 0.53 | 0.76 | 0.56
module-deps | 0.82 | 0.62 | 0.53 | 0.50 | 0.61 | 0.49
turtle.io | 0.88 | 0.46 | 0.82 | 0.52 | 0.88 | 0.44
rtcpeerconnection | 0.86 | 0.59 | 0.56 | 0.45 | 0.63 | 0.49
react-isomorphic-render | 0.66 | 0.62 | 0.59 | 0.57 | 0.63 | 0.44
rtc-quickconnect | 0.84 | 0.45 | 0.62 | 0.36 | 0.70 | 0.58
terrestris/react-geo | 0.76 | 0.53 | 0.65 | 0.63 | 0.74 | 0.59
eslint-config-canonical | 0.70 | 0.56 | 0.68 | 0.41 | 0.78 | 0.42
repofs | 0.86 | 0.62 | 0.78 | 0.41 | 0.84 | 0.49
penseur | 0.82 | 0.28 | 0.57 | 0.46 | 0.72 | 0.50
octokit/routes | 0.61 | 0.44 | 0.70 | 0.64 | 0.63 | 0.55
socketcluster-server | 0.70 | 0.52 | 0.61 | 0.57 | 0.75 | 0.50
oui | 0.79 | 0.63 | 0.58 | 0.52 | 0.71 | 0.50
express-processimage | 0.69 | 0.45 | 0.69 | 0.56 | 0.72 | 0.53
octokit/fixtures | 0.78 | 0.52 | 0.86 | 0.55 | 0.82 | 0.46
jsonrpc-bidirectional | 0.62 | 0.61 | 0.70 | 0.54 | 0.73 | 0.45
reactive-di | 0.80 | 0.47 | 0.60 | 0.49 | 0.74 | 0.48
rtc-signaller | 0.84 | 0.50 | 0.75 | 0.55 | 0.79 | 0.47
Average | 0.74 | 0.52 | 0.68 | 0.51 | 0.75 | 0.49
Median | 0.76 | 0.51 | 0.69 | 0.52 | 0.75 | 0.49
Relative ROC-AUC | 1.5$X$ | - | 1.4$X$ | - | 1.5$X$ | -
Method: To better understand how well a classifier trained on data from other
packages generalizes to a new package, we conducted a cross-package
validation. In particular, we experimented with $n$-fold cross-package
validation, where $n$ is the number of packages in our dataset (i.e., 31
packages). We trained a classifier on data from thirty packages and used it to
determine the semantic versioning type in the remaining package, similar to
the method used in prior work [7, 31, 1]. We repeated this process 31 times,
once for each package in our dataset. To build the classifier, we trained the
XGBoost classifier following the same approach described in Section 3.5. Once
again, we computed ROC-AUC values to measure the performance of the generated
classifiers. Finally, to compare the cross-package classifiers' performance
with our baseline, the ZeroR classifier, we computed the relative ROC-AUC
values.
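The leave-one-package-out loop can be sketched with scikit-learn's `LeaveOneGroupOut` splitter, using one group id per package; the data here is synthetic and logistic regression stands in for XGBoost:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.default_rng(7)
n_pkgs, per_pkg = 5, 60  # toy stand-in for the 31 packages
X = rng.normal(size=(n_pkgs * per_pkg, 6))
y = rng.integers(0, 2, size=n_pkgs * per_pkg)  # e.g. "major release or not"
groups = np.repeat(np.arange(n_pkgs), per_pkg)  # one group id per package

# Train on every package except one, test on the held-out package.
aucs = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = LogisticRegression().fit(X[train_idx], y[train_idx])
    proba = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], proba))
```

Each entry of `aucs` corresponds to one held-out package, matching the per-package rows of Table 6.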
Result: Table 6 presents the results of our experiment. It shows the ROC-AUC
values for each package for the different semantic versioning types. In
general, we observed that the built cross-package classifiers achieved good
performance. The built classifiers have average ROC-AUC values of 0.74, 0.68,
and 0.75 for the major, minor, and patch releases, respectively. With an
average ROC-AUC of 0.74 (median$=$0.75), the cross-package classifier performs
particularly well when used to determine the major release type. For example,
seventeen packages in our dataset have ROC-AUC values greater than 0.75, which
is an acceptable performance [51, 44, 75]. We observed similar performance for
determining the minor and patch release types.
Moreover, we compared the performance of the cross-packages classifiers to the
baseline for all the three semantic versioning release types (i.e., major,
minor, and patch). Our results showed that the cross-package classifiers
achieve an improvement of 50%, 40%, and 50% on average over the baseline for
the major, minor, and patch semantic versioning release types, respectively.
Table 7: Mann-Whitney test ($p$-value) and Cliff's Delta ($d$) for the results of the XGBoost vs. ZeroR classifiers for the three different version types.
Version type | p-value | d
---|---|---
Major | 4.982e-10 | 0.92 (large)
Minor | 1.42e-08 | 0.84 (large)
Patch | 1.353e-11 | 1.00 (large)
Finally, we investigated whether the achieved improvements by the built
classifiers over the baseline classifiers for the different semantic
versioning types are statistically significant. Table 7 shows the p-values and
effect size values. It shows that for all semantic versioning types, the
differences are statistically significant, having p-values $<$ 0.05. Also, the
effect size values are large. These results showed that the cross-package
classifiers outperform the baseline classifier, and the differences are
statistically significant.
Our results indicated that cross-package machine learning classifiers can
provide comparable performances to within-package classifiers for determining
the semantic versioning type. For all packages in our dataset, cross-package
classifiers achieved average ROC-AUC values of 0.74, 0.68, and 0.75, with an
overall improvement over the baseline of 50%, 40%, and 50% for the major,
minor, and patch releases, respectively.
## 5 Related Work
In this paper, we proposed using machine learning techniques to effectively
determine the semantic versioning type of npm packages. Thus, our work is
mainly related to two areas of prior studies; work related to the use of
semantic versioning and work related to identifying breakage changes in third-
party packages.
Semantic versioning: Due to the importance of semantic versioning, several
studies have examined it. One of the first works that looked at the use of
semantic versioning is the work by Raemaekers et al. [58]. They investigated
the use of semantic versioning in a dataset of 22K Java packages published on
Maven spanning seven years. Their results showed that breaking changes occur
in 30% of the studied releases, including minor releases and patches. As a
result, several packages used strict dependency constraints, and package
maintainers avoided upgrading their dependencies. In addition, Kula et al.
[42] found that developers tend not to update the packages they depend on,
even when these updates add new features or patch vulnerabilities.
Interestingly, Raemaekers et al. [58]'s approach relies on a
tool called tclirr, which detects breaking API changes through static analysis
of Java code. While a similar tool could be developed for other languages, it
requires a clear separation between the public and private API. Such a
distinction does not explicitly exist in dynamic languages such as JavaScript,
making the accurate detection of breaking changes much more difficult.
Moreover, fundamental differences between JavaScript and other programming
languages such as Java, for example dynamic versus static typing and the
language's dynamic nature, make such studies difficult for JavaScript.
Dietrich et al. [25] also studied dependency versioning practices across
seventeen package manager ecosystems and found that many ecosystems support
flexible versioning practices and that the adoption of semantic versioning is
increasing. In the
same line, Decan and Mens [23] empirically studied semantic versioning
compliances in four ecosystems (Cargo, npm, Packagist, and Rubygems) by
analyzing the packages dependency constraints. Their findings showed that the
proportion of compliant dependency constraints increases over time in all
studied ecosystems.
In the same direction, Wittern et al. [70] studied the evolution of a subset
of JavaScript packages in npm, analyzing characteristics such as their
dependencies, update frequency, and semantic versioning number. They observed
that the versioning conventions that maintainers use for their packages are
not always compatible with semantic versioning. Also, Bogart et al. [11]
conducted a qualitative comparison of npm, CRAN, and Eclipse, to understand
the impact of community values, tools, and policies on breaking changes. They
found two main types of mitigation strategies to reduce the exposure to
changes in dependencies: limiting the number of dependencies and depending
only on “trusted packages”. In follow-up work, they interviewed more than
2,000 developers about values and practices in 18 ecosystems [10]. Among other
findings, they observed that package maintainers are frequently exposed to
breaking changes and mainly discover them at build time.
Our work is motivated by these aforementioned research efforts. The difference
is that our work focuses on proposing machine learning classifiers to identify
the semantic versioning type of a new npm package release.
Identifying breaking changes in third-party packages: Several studies
investigated API evolution and stability and proposed techniques to detect
breaking changes [47, 72, 26, 39, 37].
Mujahid et al. [49] proposed the idea of using other’s tests to identify
breaking changes of JavaScript packages. They examined the accuracy of their
proposed approach on ten cases of breaking updates. Their experimental results
showed that their approach identified six breaking updates. Similarly, Xavier
et al. [72] performed a large-scale analysis on Java packages. Their results
showed that 14.78% of the API changes are incompatible with previous versions.
They also found that packages with a higher frequency of breaking changes are
larger, more popular, and more active. Also, Businge et al. [16, 17] studied
Eclipse interface usage by Eclipse third-party plug-ins and evaluated the
effect of API changes and non-API changes. Mostafa et al. [48] detected
backward compatibility problems in Java packages by performing regression
tests on version pairs and by inspecting bug reports related to version
upgrades. What our work shares with these aforementioned works is the idea of
identifying the type of changes in a new package release. However, to the best
of our knowledge, ours is the first work to investigate the use of ML
techniques for this purpose.
## 6 Threats to Validity
There are a few important limitations to our work that need to be considered
when interpreting our findings. In this section, we describe the threats to
the validity of our study.
Internal validity: Threats to internal validity concern factors that could
have influenced our study setup. First, we extracted the change type features
from the AST difference between two versions of the source code. To do this,
we used the GumTree differencing algorithm [30]; thus, we might be limited by
the accuracy and correctness of this tool. However, previous studies have used
GumTree to calculate the differences between source code versions. The GumTree
documentation also mentions that the algorithm is prone to some errors in the
context of JavaScript, so it might miss some instances when extracting the
differences between JavaScript source files.
To parse the output of the GumTree tool, we developed a parser that extracts
fine-grained source code changes. This process could introduce errors; thus,
to mitigate this threat, we manually analyzed 300 randomly selected change
types, and our manual examination showed that the implemented parser correctly
extracts all the cases.
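As an illustration, the tallying step of such a parser can be sketched as follows. The JSON shape used here (an "actions" list whose entries carry an action kind and a labeled tree node) is modeled on GumTree-style diff output, but the exact field names and node labels are assumptions for illustration, not our actual parser:

```python
import json
from collections import Counter

def count_change_types(diff_json: str) -> Counter:
    """Tally fine-grained change types from a GumTree-style JSON diff.

    Assumes entries like {"action": "insert-node",
    "tree": "FunctionDeclaration [120,340]"}; the exact schema may
    differ between tool versions.
    """
    counts = Counter()
    for act in json.loads(diff_json).get("actions", []):
        kind = act.get("action", "unknown")
        # Keep only the node type, dropping the position suffix.
        node = act.get("tree", "").split(" [")[0] or "unknown"
        counts[f"{kind}:{node}"] += 1
    return counts

example = ('{"actions": ['
           '{"action": "update-node", "tree": "Identifier [10,14]"},'
           '{"action": "insert-node", "tree": "FunctionDeclaration [120,340]"}]}')
print(count_change_types(example))
```

Aggregating the resulting counters per release pair yields the change-type feature vector for a release.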
In addition, to answer our research questions and to extract the features of
the complexity and code dimension between two consecutive releases, we used
the Understand tool [68]. Therefore, we were limited by its accuracy. That
said, the Understand tool is a widely used analysis tool in both research and
industry [2, 60, 19, 3]. Also, a recent study showed that the Understand tool
analyzes JavaScript code with good accuracy [61], which mitigates this threat.
Construct validity: Threats to construct validity concern the relationship
between theory and observation, in case the measured variables do not capture
the actual factors. The labeled package releases (i.e., patch, minor, or
major) that we examined are releases that are explicitly marked as such by the
package developers in our dataset. In some cases, developers might mislabel
the releases. To mitigate this threat, we applied different filtration
criteria (see Section 3.1) that include selecting mature and popular packages.
Also, we filtered out any package whose users reported at least one breaking
release that its developers had tagged as a minor or patch release.
Also, to extract the development features, we opted to analyze the commits in
the Git system. Similar to prior work (e.g., [40, 66]), to identify the
commits between two consecutive releases, we considered all commits that
occurred on the main trunk of the versioning system, based on the release
dates. It is worth mentioning that these dates could be approximations, as
developers could start working on a release even before it is issued.
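This date-based commit attribution can be sketched minimally as follows; the names and data are illustrative, not the scripts used in the study:

```python
from datetime import datetime

def commits_between(commits, prev_release, release):
    """Attribute commits to a release by date: every main-trunk commit
    after the previous release date and up to (and including) the new
    release date. `commits` is a list of (sha, datetime) pairs."""
    return [sha for sha, ts in commits if prev_release < ts <= release]

# Toy commit log (sha, author date).
log = [
    ("a1", datetime(2021, 1, 5)),
    ("b2", datetime(2021, 2, 1)),
    ("c3", datetime(2021, 3, 9)),
]
print(commits_between(log, datetime(2021, 1, 10), datetime(2021, 3, 1)))
```

As the text notes, release dates are only approximations of when work on a release started, so this window is a heuristic rather than an exact attribution.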
External validity: Threats to external validity concern the generalization of
our findings. Our dataset consists only of JavaScript packages published on
the npm package manager. Hence, our findings might not hold for packages
published on other package managers or written in other programming languages.
That said, prior work (e.g., [24]) showed that npm packages are commonly used,
and npm is one of the largest and most rapidly growing package managers, which
makes it an ideal case to study.
In this study, we extracted features from both the code changes and the
development history of open-source JavaScript packages. The method used to
extract the studied features is specific to JavaScript, so our classifiers may
not generalize to other programming languages. Different programming languages
might also require different feature extraction methods due to their semantic
differences. However, our data collection and analysis approach could be
easily applied to packages written in any language.
In addition, our dataset contains only open-source packages whose source code
is hosted on GitHub, which might not reflect closed-source packages. Also, our
dataset contains 31 npm JavaScript packages, which may not represent the whole
population of JavaScript packages; examining a larger number of packages may
yield different results.
## 7 Conclusion
In this paper, our goal is to use ML techniques to determine the semantic
versioning type of a new package release. We used 41 release-level features
extracted by analyzing the source code and the development activities of the
releases of 31 JavaScript packages published on npm. Then, we built four ML
classifiers. We found that XGBoost can effectively determine the semantic
versioning type, with average ROC-AUC values of 0.77, 0.69, and 0.74 for
major, minor, and patch releases, respectively. It also showed improvements of
58%, 38%, and 49% over our baseline, the ZeroR classifier. Regarding the most
important features used by the XGBoost classifiers to determine the semantic
versioning release type, we found that features from the change type and the
complexity and code dimensions are the most important indicators of the new
release type. Additionally, we investigated the generalizability of
determining the semantic versioning type using cross-package validation. Our
results showed that cross-package validation achieves acceptable performance
compared to within-package validation.
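The per-class ROC-AUC evaluation summarized above can be illustrated with a small, dependency-free sketch of one-vs-rest ROC-AUC; the data below are made up, and this is not the evaluation code used in the study:

```python
def roc_auc(labels, scores, positive):
    """One-vs-rest ROC-AUC via the Mann-Whitney statistic: the
    probability that a randomly chosen positive example is scored
    above a randomly chosen negative one (ties count one half)."""
    pos = [s for l, s in zip(labels, scores) if l == positive]
    neg = [s for l, s in zip(labels, scores) if l != positive]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical classifier scores for the "major" class on six releases.
y = ["major", "patch", "major", "minor", "patch", "major"]
s = [0.9, 0.2, 0.5, 0.4, 0.6, 0.8]
print(round(roc_auc(y, s, "major"), 3))  # → 0.889
```

Averaging this quantity over the three classes (major, minor, patch) gives a per-class view of the classifier's ranking quality, which is how the reported 0.77/0.69/0.74 figures should be read.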
## References
* Abdalkareem et al. [2020] Abdalkareem, R., Mujahid, S., Shihab, E., 2020. A machine learning approach to improve the detection of ci skip commits. IEEE Transactions on Software Engineering, 1–1.
* Abdalkareem et al. [2017] Abdalkareem, R., Nourry, O., Wehaibi, S., Mujahid, S., Shihab, E., 2017. Why do developers use trivial packages? an empirical case study on npm, in: Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering, Association for Computing Machinery, New York, NY, USA. p. 385–395. URL: https://doi.org/10.1145/3106237.3106267, doi:10.1145/3106237.3106267.
* Ahasanuzzaman et al. [2020] Ahasanuzzaman, M., Hassan, S., Hassan, A.E., 2020. Studying ad library integration strategies of top free-to-download apps. IEEE Transactions on Software Engineering .
* Alfassa [2013] Alfassa, E., 2013. 857922 - fontconfig change breaks webfonts rendering under linux. https://bugzilla.mozilla.org/show_bug.cgi?id=857922. (accessed on 02/25/2022).
* Andreasen et al. [2017] Andreasen, E., Gong, L., Møller, A., Pradel, M., Selakovic, M., Sen, K., Staicu, C.A., 2017. A survey of dynamic analysis and test generation for javascript. ACM Comput. Surv. 50.
* Bacchelli et al. [2012] Bacchelli, A., Dal Sasso, T., D’Ambros, M., Lanza, M., 2012. Content classification of development emails, in: Proceedings of the 34th International Conference on Software Engineering, IEEE Press. pp. 375–385.
* Bacchelli et al. [2012] Bacchelli, A., Dal Sasso, T., D’Ambros, M., Lanza, M., 2012. Content classification of development emails, in: 2012 34th International Conference on Software Engineering (ICSE), IEEE. pp. 375–385.
* Bengio and Grandvalet [2004] Bengio, Y., Grandvalet, Y., 2004. No unbiased estimator of the variance of k-fold cross-validation. Journal of machine learning research 5, 1089–1105.
* Bogart et al. [2017a] Bogart, C., Filippova, A., Kastner, C., Herbsleb, J., 2017a. How ecosystem cultures differ: Results from a survey on values and practices across 18 software ecosystems. http://breakingapis.org/survey/. (accessed on 11/17/2020).
* Bogart et al. [2017b] Bogart, C., Filippova, A., Kästner, C., Herbsleb, J., 2017b. How ecosystem cultures differ: Results from a survey on values and practices across 18 software ecosystems. [Online]. Available: http://breakingapis.org/survey/. (Accessed on 08/10/2020).
* Bogart et al. [2016] Bogart, C., Kästner, C., Herbsleb, J., Thung, F., 2016\. How to break an api: Cost negotiation and community values in three software ecosystems, in: Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering, Association for Computing Machinery, New York, NY, USA. p. 109–120. URL: https://doi.org/10.1145/2950290.2950325, doi:10.1145/2950290.2950325.
* Borges and Valente [2018] Borges, H., Valente, M.T., 2018\. What’s in a github star? understanding repository starring practices in a social coding platform. Journal of Systems and Software 146, 112 – 129.
* Bouckaert et al. [2013] Bouckaert, R.R., Frank, E., Hall, M., Kirkby, R., Reutemann, P., Seewald, A., Scuse, D., 2013. WEKA Manual for Version 3-7-8. (accessed on 02/28/2021).
* Bradley [1997] Bradley, A.P., 1997. The use of the area under the roc curve in the evaluation of machine learning algorithms. Pattern recognition 30, 1145–1159.
* Breiman [2001] Breiman, L., 2001. Random forests. Machine learning 45, 5–32.
* Businge et al. [2012] Businge, J., Serebrenik, A., van den Brand, M.G.J., 2012. Survival of eclipse third-party plug-ins, in: Proceedings of the 28th IEEE International Conference on Software Maintenance, IEEE, New York, NY, USA. pp. 368–377. doi:10.1109/ICSM.2012.6405295.
* Businge et al. [2015] Businge, J., Serebrenik, A., van den Brand, M.G.J., 2015. Eclipse api usage: The good and the bad. Software Quality Journal 23, 107–141. doi:10.1007/s11219-013-9221-3.
* Caruana and Niculescu-Mizil [2006] Caruana, R., Niculescu-Mizil, A., 2006. An empirical comparison of supervised learning algorithms, in: Proceedings of the 23rd International Conference on Machine Learning, ACM. pp. 161–168.
* Castelluccio et al. [2019] Castelluccio, M., An, L., Khomh, F., 2019. An empirical study of patch uplift in rapid release development pipelines. Empirical Software Engineering 24, 3008–3044.
* Chawla et al. [2002] Chawla, N.V., Bowyer, K.W., Hall, L.O., Kegelmeyer, W.P., 2002\. Smote: synthetic minority over-sampling technique. Journal of artificial intelligence research 16, 321–357.
* Chen and Guestrin [2016] Chen, T., Guestrin, C., 2016. Xgboost: A scalable tree boosting system, in: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Association for Computing Machinery, New York, NY, USA. p. 785–794. URL: https://doi.org/10.1145/2939672.2939785, doi:10.1145/2939672.2939785.
* Dabbish et al. [2012] Dabbish, L., Stuart, C., Tsay, J., Herbsleb, J., 2012. Social coding in github: Transparency and collaboration in an open software repository, in: Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, ACM. pp. 1277–1286.
* Decan and Mens [2019] Decan, A., Mens, T., 2019. What do package dependencies tell us about semantic versioning? IEEE Transactions on Software Engineering, 1–15.
* Decan et al. [2019] Decan, A., Mens, T., Grosjean, P., 2019. An empirical comparison of dependency network evolution in seven software packaging ecosystems. Empirical Software Engineering , 381–416.
* Dietrich et al. [2019] Dietrich, J., Pearce, D., Stringer, J., Tahir, A., Blincoe, K., 2019. Dependency versioning in the wild, in: 2019 IEEE/ACM 16th International Conference on Mining Software Repositories (MSR), pp. 349–359. doi:10.1109/MSR.2019.00061.
* Dig and Johnson [2006] Dig, D., Johnson, R., 2006. How do apis evolve? a story of refactoring. Journal of Software Maintenance 18, 83–107. doi:10.1002/smr.328.
* [27] npm documentation, . About semantic versioning | npm docs. https://docs.npmjs.com/about-semantic-versioning. (accessed on 03/08/2022).
* Esteves et al. [2020] Esteves, G., Figueiredo, E., Veloso, A., Viggiato, M., Ziviani, N., 2020. Understanding machine learning software defect predictions. Automated Software Engineering 27, 369–392.
* FaceBook [2016] FaceBook, 2016. Yarn: A new package manager for javascript - facebook engineering. https://engineering.fb.com/2016/10/11/web/yarn-a-new-package-manager-for-javascript/. (accessed on 03/13/2021).
* Falleri et al. [2014] Falleri, J., Morandat, F., Blanc, X., Martinez, M., Monperrus, M., 2014. Fine-grained and accurate source code differencing, in: ACM/IEEE International Conference on Automated Software Engineering, ASE ’14, Vasteras, Sweden - September 15 - 19, 2014, pp. 313–324. URL: http://doi.acm.org/10.1145/2642937.2642982, doi:10.1145/2642937.2642982.
* Fukushima et al. [2014] Fukushima, T., Kamei, Y., McIntosh, S., Yamashita, K., Ubayashi, N., 2014. An empirical study of just-in-time defect prediction using cross-project models, in: Proceedings of the 11th Working Conference on Mining Software Repositories, Association for Computing Machinery. p. 172–181.
* Ghotra et al. [2015] Ghotra, B., , S., Hassan, A.E., 2015. Revisiting the impact of classification techniques on the performance of defect prediction models, in: Proceedings of the 37th International Conference on Software Engineering, IEEE Press. pp. 789–800.
* Grissom and Kim [2005] Grissom, R.J., Kim, J.J., 2005. Effect sizes for research: A broad practical approach. Lawrence Erlbaum Associates Publishers.
* Hall et al. [2009] Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., Witten, I.H., 2009. The weka data mining software: an update. ACM SIGKDD explorations newsletter 11, 10–18.
* He et al. [2012] He, Z., Shu, F., Yang, Y., Li, M., Wang, Q., 2012. An investigation on the feasibility of cross-project defect prediction. Automated Software Engineering 19, 167–199.
* Iba [1996] Iba, H., 1996. Random tree generation for genetic programming, in: Proceedings of the 4th International Conference on Parallel Problem Solving from Nature, Springer-Verlag, London, UK, UK. pp. 144–153. URL: http://dl.acm.org/citation.cfm?id=645823.670546.
* Javan Jafari et al. [2021] Javan Jafari, A., Costa, D.E., Abdalkareem, R., Shihab, E., Tsantalis, N., 2021. Dependency smells in javascript projects. IEEE Transactions on Software Engineering, 1–1. doi:10.1109/TSE.2021.3106247.
* Kamei et al. [2013] Kamei, Y., Shihab, E., Adams, B., Hassan, A.E., Mockus, A., Sinha, A., Ubayashi, N., 2013. A large-scale empirical study of just-in-time quality assurance. IEEE Transactions on Software Engineering 39, 757–773.
* Kapur et al. [2010] Kapur, P., Cossette, B., Walker, R.J., 2010. Refactoring references for library migration. ACM SIGPLAN Notices 45, 726–738. doi:10.1145/1932682.1869518.
* Khomh et al. [2015] Khomh, F., Adams, B., Dhaliwal, T., Zou, Y., 2015. Understanding the impact of rapid releases on software quality. Empirical Softw. Engg. 20, 336–373. URL: https://doi.org/10.1007/s10664-014-9308-x, doi:10.1007/s10664-014-9308-x.
* Kotsiantis et al. [2006] Kotsiantis, S.B., Zaharakis, I.D., Pintelas, P.E., 2006. Machine learning: A review of classification and combining techniques. Artif. Intell. Rev. 26, 159–190.
* Kula et al. [2017] Kula, R.G., German, D.M., Ouni, A., Ishio, T., Inoue, K., 2017. Do developers update their library dependencies?: An empirical study on the impact of security advisories on library migration. doi:10.1007/s10664-017-9521-5, arXiv:1709.04621.
* Lauinger et al. [2018] Lauinger, T., Chaabane, A., Wilson, C., 2018. Thou shalt not depend on me: A look at javascript libraries in the wild. Queue 16, 62–82.
* Lessmann et al. [2008] Lessmann, S., Baesens, B., Mues, C., Pietsch, S., 2008\. Benchmarking classification models for software defect prediction: A proposed framework and novel findings. IEEE Transactions on Software Engineering 34, 485--496.
* Mann and Whitney [1947] Mann, H.B., Whitney, D.R., 1947\. On a test of whether one of two random variables is stochastically larger than the other. The annals of mathematical statistics , 50--60.
* Mariano et al. [2019] Mariano, R.V.R., dos Santos, G.E., V. de Almeida, M., Brandão, W.C., 2019. Feature changes in source code for commit classification into maintenance activities, in: 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA), IEEE. pp. 515–518.
* Mostafa et al. [2017a] Mostafa, S., Rodriguez, R., Wang, X., 2017a. A Study on Behavioral Backward Incompatibility Bugs in Java Software Libraries, in: Proceedings of the 39th International Conference on Software Engineering Companion, IEEE, New York, NY, USA. pp. 127--129. doi:10.1109/ICSE-C.2017.101.
* Mostafa et al. [2017b] Mostafa, S., Rodriguez, R., Wang, X., 2017b. Experience paper: A study on behavioral backward incompatibilities of java software libraries, in: Proceedings of the 26th ACM SIGSOFT International Symposium on Software Testing and Analysis, Association for Computing Machinery, New York, NY, USA. p. 215–225. URL: https://doi.org/10.1145/3092703.3092721, doi:10.1145/3092703.3092721.
* Mujahid et al. [2020] Mujahid, S., Abdalkareem, R., Shihab, E., McIntosh, S., 2020. Using others’ tests to identify breaking updates, 1–12.
* Murphy [2012] Murphy, K.P., 2012. Machine learning: a probabilistic perspective. MIT press.
* Nam and Kim [2015] Nam, J., Kim, S., 2015. Clami: Defect prediction on unlabeled datasets, in: Proceedings of the 30th IEEE/ACM International Conference on Automated Software Engineering, IEEE Press. p. 452–463.
* [52] npm, . npm-registry | npm documentation. https://docs.npmjs.com/using-npm/registry.html. (Accessed on 08/13/2020).
* Pedregosa et al. [2011] Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., et al., 2011. Scikit-learn: Machine learning in python. The Journal of Machine Learning Research 12, 2825–2830.
* Potvin and Levenberg [2016] Potvin, R., Levenberg, J., 2016. Why google stores billions of lines of code in a single repository. Communications of the ACM 59, 78–87.
* Pradel et al. [2015] Pradel, M., Schuh, P., Sen, K., 2015. Typedevil: Dynamic type inconsistency analysis for javascript, in: 2015 IEEE/ACM 37th IEEE International Conference on Software Engineering, IEEE. pp. 314--324.
* Preston-Werner [2019] Preston-Werner, T., 2019. Semantic versioning 2.0. URL: https://semver.org/.
* Quinlan [1993] Quinlan, R., 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, San Mateo, CA.
* Raemaekers et al. [2017] Raemaekers, S., van Deursen, A., Visser, J., 2017. Semantic versioning and impact of breaking changes in the maven repository. Journal of Systems and Software 129, 140--158.
* Rahman et al. [2017] Rahman, M.M., Roy, C.K., Kula, R.G., 2017. Predicting usefulness of code review comments using textual features and developer experience, in: Proceedings of the 14th International Conference on Mining Software Repositories, IEEE Press. pp. 215--226.
* Rahman et al. [2019] Rahman, M.T., Rigby, P.C., Shihab, E., 2019. The modular and feature toggle architectures of google chrome. Empirical Software Engineering 24, 826--853.
* Reza Chowdhury et al. [2021] Reza Chowdhury, M.A., Abdalkareem, R., Shihab, E., Adams, B., 2021. On the untriviality of trivial packages: An empirical study of npm javascript packages. IEEE Transactions on Software Engineering, 1–1.
* [62] SciTools-Documentation, . Understand static code analysis tool. https://www.scitools.com/. (accessed on 03/08/2022).
* Shihab et al. [2010] Shihab, E., Jiang, Z.M., Ibrahim, W.M., Adams, B., Hassan, A.E., 2010. Understanding the impact of code and process metrics on post-release defects: A case study on the eclipse project, in: Proceedings of the 2010 ACM-IEEE International Symposium on Empirical Software Engineering and Measurement.
* Śliwerski et al. [2005] Śliwerski, J., Zimmermann, T., Zeller, A., 2005. When do changes induce fixes? ACM sigsoft software engineering notes 30, 1--5.
* Song et al. [2019] Song, Q., Guo, Y., Shepperd, M., 2019. A comprehensive investigation of the role of imbalanced learning for software defect prediction. IEEE Transactions on Software Engineering 45, 1253--1269. doi:10.1109/TSE.2018.2836442.
* Souza et al. [2014] Souza, R., Chavez, C., Bittencourt, R.A., 2014. Do rapid releases affect bug reopening? a case study of firefox, in: 2014 Brazilian Symposium on Software Engineering, pp. 31--40. doi:10.1109/SBES.2014.10.
* Thung et al. [2012] Thung, F., Lo, D., Jiang, L., Lucia, Rahman, F., Devanbu, P.T., 2012. When would this bug get reported?, in: Proceedings of the 28th IEEE International Conference on Software Maintenance, IEEE. pp. 420--429.
* [68] Understand, S., . Scitools.com. https://scitools.com/. (Accessed on 08/13/2020).
* Williams and Spacco [2008] Williams, C., Spacco, J., 2008\. Szz revisited: verifying when changes induce fixes, in: Proceedings of the 2008 workshop on Defects in large software systems, pp. 32--36.
* Wittern et al. [2016] Wittern, E., Suter, P., Rajagopalan, S., 2016. A look at the dynamics of the javascript package ecosystem, in: Proceedings of the 13th International Conference on Mining Software Repositories, Association for Computing Machinery, New York, NY, USA. p. 351–361. URL: https://doi.org/10.1145/2901739.2901743, doi:10.1145/2901739.2901743.
* Xavier et al. [2017] Xavier, L., Brito, A., Hora, A., Valente, M.T., 2017. Historical and impact analysis of api breaking changes: A large-scale study, in: 2017 IEEE 24th International Conference on Software Analysis, Evolution and Reengineering (SANER), IEEE. pp. 138–147.
* Xavier et al. [2017] Xavier, L., Brito, A., Hora, A., Valente, M.T., 2017. Historical and impact analysis of api breaking changes: A large-scale study, in: Proceedings of the 24th International Conference on Software Analysis, Evolution and Reengineering, IEEE, New York, NY, USA. pp. 138–147. doi:10.1109/SANER.2017.7884616.
* Xia et al. [2016a] Xia, X., Shihab, E., Kamei, Y., Lo, D., Wang, X., 2016a. Predicting crashing releases of mobile applications, in: Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, ACM. pp. 29:1–29:10.
* Xia et al. [2016b] Xia, X., Shihab, E., Kamei, Y., Lo, D., Wang, X., 2016b. Predicting crashing releases of mobile applications, in: Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM’16).
* Yan et al. [2019] Yan, M., Xia, X., Shihab, E., Lo, D., Yin, J., Yang, X., 2019. Automating change-level self-admitted technical debt determination. IEEE Transactions on Software Engineering 45, 1211–1229.
# HI–shielding of ${\bf H_{2}}$ in UV–irradiated protogalaxies: suppression of
the photodissociation rate
Meredith Neyer1,2 ID and Jemma Wolcott-Green1 ID
1Department of Physics, University of California Santa Barbara, MC 9530, Santa
Barbara, CA 93106, USA
2Department of Physics and Kavli Institute for Astrophysics and Space
Research, Massachusetts Institute of Technology,
Cambridge, MA 02139, USA. E-mail: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>
###### Abstract
We study the impact of neutral hydrogen absorption on ${\rm H_{2}}$
photodissociation in protogalactic haloes exposed to soft-UV radiation. Lyman-
series absorption can significantly deplete dissociating photons as line
overlap with the ${\rm H_{2}}$ Lyman-Werner bands occurs for neutral column
densities exceeding $10^{22}~{}{\rm cm^{-2}}$, but this effect has not been
previously included in studies of protogalactic haloes. We use high–resolution
three–dimensional hydrodynamic simulations to investigate this “HI–shielding”
in three metal–free atomic cooling haloes collapsing at redshift $z\sim
10-20$. We use cloudy modeling to update a previous fitting formula for
HI–shielding so that it better models the shielding of non–ground-state ${\rm
H_{2}}$ rovibrational populations, and we implement the new fit in our
simulations. We find that the inclusion of HI–shielding increases the
“critical flux” for suppression of ${\rm H_{2}}$ cooling in these haloes by
$\sim 60-100$ per cent. The larger critical flux has implications in
particular for the predicted numbers of candidate haloes in which “direct
collapse” could seed massive ($\sim 10^{5}~{}{\rm M_{\odot}}$) black holes at
$z\sim 15$.
###### keywords:
cosmology: theory – early Universe – galaxies: formation – molecular processes
– stars: Population III
††pagerange: HI–shielding of ${\bf H_{2}}$ in UV–irradiated protogalaxies:
suppression of the photodissociation rate–References††pubyear: 2022
## 1 Introduction
Molecular hydrogen, ${\rm H_{2}}$, has been extensively studied in the context
of the first generation of stars and galaxies, in which it plays a crucial
role as the primary coolant of primordial gas below $\sim 10^{4}$K (for a
review, see Bromm & Yoshida, 2011). Prior to the production and dispersion of
metals by the first supernovae, the thermodynamic evolution of pristine primordial
gas depends sensitively on the ${\rm H_{2}}$ abundance and therefore on the
photodissociation of ${\rm H_{2}}$, which occurs in the presence of soft UV
photons in the “Lyman–Werner” (LW) bands (11.1-13.6 eV).
Depletion of ${\rm H_{2}}$ by LW radiation has been shown to raise the minimum
mass of protogalactic haloes in which gas is able to condense and cool, thus
delaying star formation in smaller “minihaloes” (Haiman et al., 1997, 2000;
Machacek et al., 2001; Yoshida et al., 2003; Mesinger et al., 2006; Wise &
Abel, 2007; O’Shea & Norman, 2008; Kulkarni et al., 2020; Schauer et al.,
2021). In more massive haloes, with virial temperatures $\gtrsim 10^{4}$K,
cooling by
neutral hydrogen allows gas to condense in haloes even in the absence of
significant ${\rm H_{2}}$ cooling, rendering these “atomic cooling haloes”
(ACHs) less vulnerable to feedback from a cosmological background LW radiation
(e.g. Oh & Haiman, 2002). Typically, the column densities of ${\rm H_{2}}$ in
ACHs grow large enough that ${\rm H_{2}}$ becomes self–shielding: systematic
depletion of LW band photons in the outer layers of the halo depresses
photodissociation of ${\rm H_{2}}$ in the core, allowing the gas to cool to
temperatures of a few hundred Kelvin. However, sufficiently strong LW
radiation fields have been shown to suppress the ${\rm H_{2}}$ abundance and
thereby to prevent gas in ACHs from cooling below the virial temperature of
the halo (see Inayoshi et al., 2020, and references therein). This threshold
LW flux strength is commonly referred to as the critical flux or “$J_{\rm
crit}$” and has been typically found in hydrodynamic simulations to be in the
range $10^{3-4}$ in the customary units $10^{-21}~{}{\rm
erg~{}s^{-1}~{}cm^{-2}~{}Hz^{-1}~{}sr^{-1}}$. While this is orders of
magnitude larger than the expected cosmological background (e.g. Dijkstra et
al., 2008), a collapsing halo near a particularly bright neighboring galaxy
with recently-formed Pop III stars may be exposed to a such a flux (Visbal et
al., 2014; Regan et al., 2017); in this ”synchronized collapse” scenario, if
the two collapse within a short period of time – of order a few Myr – the
second halo to cross the atomic cooling threshold may have ${\rm
H_{2}}$–cooling entirely suppressed.
The presence of a super–critical flux has implications for the formation of
massive seed black holes, $\sim 10^{4-5}M_{\odot}$, via rapid accretion in
ACHs that remain near the virial temperature; these “heavy seeds” could help
explain the existence of the earliest supermassive black holes, observed to
have masses $\gtrsim 10^{9}{\rm M_{\odot}}$ at redshifts $z\gtrsim 6$ and as
high as $z\gtrsim 7.5$ (Fan et al., 2001; Fan et al., 2003; Morganson et al.,
2012; Mazzucchelli et al., 2017; Wang et al., 2019; Yang et al., 2019; Wang et
al., 2021). These heavy seeds are commonly referred to as “direct collapse”
black holes, though they are thought to form via an intermediary supermassive
star phase (e.g. Haemmerlé et al., 2018).
Extensive work has been done to constrain the value of $J_{\rm crit}$ using
hydrodynamic simulations of ACHs, which relies on detailed modeling of the
${\rm H_{2}}$ chemistry. Since the fraction of haloes exposed to a
super–critical UV flux depends sensitively on $J_{\rm crit}$, even small
changes in the photodissociation rate significantly alters the predicted
prevalence of direct collapse halo candidates (Dijkstra et al., 2008; Ahn et
al., 2009; Agarwal et al., 2012; Dijkstra et al., 2014; Chon et al., 2016).
### 1.1 The ${\bf H_{2}}$ photodissociation rate
Self–shielding by ${\rm H_{2}}$ occurs as the LW bands become optically thick
at column densities $\gtrsim 10^{13}~{}{\rm cm}^{-2}$, suppressing the
photodissociation rate (e.g. Draine & Bertoldi, 1996). The optically–thick
rate in general depends on the column density, gas temperature, rovibrational
populations of ${\rm H_{2}}$ (Wolcott-Green & Haiman, 2019), and details of
the incident spectrum (Agarwal & Khochfar, 2014; Sugimura et al., 2014;
Wolcott-Green et al., 2017), and is prohibitively computationally expensive to
calculate on–the–fly in simulations, due to the large number of LW
transitions.
Simulations most often therefore implement a fitting formula to model the
optically–thick rate and rely on local estimates of the column density
(Wolcott-Green et al. 2011, but see Hartwig et al. 2015).
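For concreteness, a widely used self-shielding fit of this kind (the Draine & Bertoldi 1996 form, with the shallower power-law index advocated by Wolcott-Green et al. 2011 for warm halo gas) can be written as a short function. This is a sketch for orientation only, not the implementation used in the simulations described here:

```python
import math

def f_shield_h2(N_H2, T, alpha=1.1):
    """H2 self-shielding factor of the Draine & Bertoldi (1996) form:
    f = 0.965/(1+x/b5)^alpha + 0.035/(1+x)^0.5 * exp[-8.5e-4 (1+x)^0.5],
    with x = N_H2 / 5e14 cm^-2 and b5 the Doppler parameter in km/s.
    alpha=1.1 is the Wolcott-Green et al. (2011) modification; alpha=2
    recovers the original fit. N_H2 in cm^-2, T in K.
    """
    x = N_H2 / 5.0e14
    # Thermal Doppler parameter b = sqrt(2 k_B T / m_H2), in 1e5 cm/s.
    b5 = math.sqrt(2.0 * 1.380649e-16 * T / (2.0 * 1.6726e-24)) / 1.0e5
    return (0.965 / (1.0 + x / b5) ** alpha
            + 0.035 / math.sqrt(1.0 + x)
            * math.exp(-8.5e-4 * math.sqrt(1.0 + x)))

# Optically thin column: negligible suppression; thick column: strong.
print(f_shield_h2(1e12, 3000.0), f_shield_h2(1e18, 3000.0))
```

The optically-thick dissociation rate is then the optically-thin rate multiplied by this factor (and, in the present work, by the additional HI-shielding factor $f_{\rm shield,HI}$, whose fit is developed in the text).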
In addition to self–shielding, absorption of LW photons by neutral hydrogen
Lyman series resonances can decrease the rate of ${\rm
H_{2}}$–photodissociation. Processing of the cosmological UV background by HI
in the pre–reionization IGM has been studied in detail (Haiman et al., 1997,
2000); however, the effects of HI absorption within protogalactic haloes have
not previously been included in 3D simulations. Using one–zone models,
Wolcott-Green & Haiman (2011, hereafter WH11) found that this HI–shielding of
${\rm H_{2}}$ can significantly decrease the LW photodissociation rate when
the column density exceeds $N_{\rm HI}\gtrsim 10^{22}~{}{\rm cm^{-2}}$, and
provided an analytic fit for the suppression factor $f_{\rm shield,HI}$.
In a study of the escape fraction of LW radiation out of ACHs, Schauer et al.
(2015) used the WH11 fitting formula to quantify the effect of HI absorption
of LW photons emitted by stars within the halo. They found the LW escape
fraction was reduced by a factor of three, owing to the large neutral column
density. In a later study of more massive haloes, $10^{7-8}M_{\odot}$, Schauer
et al. (2017) found a significantly smaller effect, with escape fractions
reduced by up to $\sim 29$ per cent due to HI absorption, possibly because
greater ionization by stellar clusters in those haloes results in lower
neutral column densities. Nevertheless, these results point to the possible
importance of
HI–shielding of ${\rm H_{2}}$ in primordial ACHs irradiated by an external LW
field.
In this study, we use the three–dimensional hydrodynamic simulation code enzo
to test the effect of absorption by HI on $J_{\rm crit}$ in three
UV–irradiated protogalaxies collapsing at $z\sim 10$. We use a modified
version of the WH11 fitting formula for HI–shielding of ${\rm H_{2}}$ that we
updated to better fit non–ground state ${\rm H_{2}}$ rovibrational
populations, which become important at the temperatures and densities of
gravitationally collapsing ACHs. Our modified fitting formula, obtained using
data for the ${\rm H_{2}}$ rovibrational populations from cloudy, is accurate
to within $\sim 30$ per cent at $T=500-8000$K, $n=10^{0-5}~{}{\rm cm^{-3}}$,
and column densities $N_{\rm HI}\lesssim 10^{24}~{}{\rm cm^{-2}}$, and can be
easily implemented in chemical models for future studies.
The rest of this paper is organized as follows. In § 2 we provide details of
our simulations and calculations of the HI–shielding; we discuss our results
in § 3 and conclude with a summary in § 4.
## 2 Numerical Modeling
Figure 1: The effect of shielding by HI on the ${\rm H_{2}}$ photodissociation rate is
parameterized by a shielding factor $f_{\rm HI}=k_{\rm diss}(n,T,N_{\rm
H_{2}},N_{\rm HI})/k_{\rm diss}(n,T,N_{\rm H_{2}})$. The blue dashed lines
show the HI–shielding factor from the full calculation, $f_{\rm exact}$, with
cloudy–derived rovibrational populations for ${\rm H_{2}}$. The magenta solid
lines show our fit. The black dot-dashed line in each lower panel shows the
ratio of our fit to the exact calculation. In order to isolate the accuracy of
the HI fitting formula from that of the self–shielding fit, we calculate
$f_{\rm fit}=f_{\rm HI,fit}\times f_{\rm H_{2},exact}$. All are shown for
$\log(n/{\rm cm^{-3}})=3$ and $\log(N_{\rm H_{2}}/{\rm cm^{-2}})=16$.

Figure 2: Spherically–averaged radial profiles for Halo B at the collapse
redshift showing density (upper left), temperature (upper right), electron
fraction (lower left), and ${\rm H_{2}}$ fraction (lower right). Lyman-Werner
fluxes are $J_{21}=20,000$ (sub–critical; blue solid lines) and
$J_{21}=32,000$ (super–critical; magenta dashed lines). The radial distance is
measured in physical units.

Figure 3: Phase plots showing Halo B at the collapse redshift with
sub–critical flux (left) and super–critical flux (right).

Figure 4: Histogram of HI column densities along lines of sight from
the center of Halo B just before cooling occurs (sub–critical flux). Column
densities exceed the $N_{\rm HI}=10^{22}\ {\rm cm}^{-2}$ threshold, above
which HI–shielding of ${\rm H_{2}}$ becomes significant.
### 2.1 Simulations
We run simulations of three atomic cooling halos using enzo, a publicly-
available three-dimensional adaptive mesh refinement (AMR) hydrodynamic code
(Bryan et al., 2014). Initial conditions for a box $1h^{-1}$ Mpc on a side and
$128^{3}$ root grid were generated using music (Hahn & Abel, 2011). In order
to select haloes for higher resolution “zoom–in” simulations, we performed an
initial dark-matter only enzo run from $z=99$ to $z=10$. We used the rockstar
halo finder package (Behroozi et al., 2013) to identify a halo with mass above
the atomic cooling threshold at $z=10$; we then added three additional levels
of refinement using nested grids which enclose the Lagrangian volume for the
halo of interest, yielding an effective $1024^{3}$ resolution and dark matter
particle mass $\sim 85{\rm M_{\odot}}$.
Each halo is then re-run in a full hydrodynamic $+$ DM “zoom-in” simulation,
initialized at $z=99$, with a maximum refinement level of 18, which results in
the highest resolution regions having a minimum cell size of
$0.0298\,h^{-1}$ pc. Additional refinement
is added each time the baryon or dark matter mass exceeds four times that of
the most refined cell. We also imposed that the local Jeans length is resolved
by at least 16 cells to prevent spurious fragmentation (Truelove et al.,
1997).
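The Jeans-resolution criterion above can be sketched as a per-cell check. This is an illustrative stand-alone version, not enzo's actual implementation; the mean molecular weight `mu` and the cgs constants are assumed values for the sketch:

```python
import numpy as np

# cgs constants (assumed values for this sketch)
G_CGS = 6.674e-8       # gravitational constant [cm^3 g^-1 s^-2]
K_B = 1.3807e-16       # Boltzmann constant [erg K^-1]
M_H = 1.6726e-24       # hydrogen mass [g]

def needs_refinement(rho, T, dx, mu=1.22, n_cells=16):
    """Return True if a cell of size dx [cm], density rho [g cm^-3], and
    temperature T [K] fails to resolve the local Jeans length by at least
    n_cells cells (Truelove-style criterion)."""
    cs2 = K_B * T / (mu * M_H)                       # isothermal sound speed^2
    jeans_length = np.sqrt(np.pi * cs2 / (G_CGS * rho))
    return jeans_length < n_cells * dx
```

In the simulations this check is applied alongside the baryon/dark-matter overdensity criterion each time refinement is considered.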
We utilize the 9–species non–equilibrium primordial chemistry network within
enzo to model the chemical evolution of the gas. The cooling function from
Galli & Palla (1998) is implemented to model the radiative cooling by ${\rm
H_{2}}$. Several of the reaction rate calculations have been modified in the
enzo chemistry code as described in Wolcott-Green et al. (2021) (see their
Appendix A for details). For ${\rm H_{2}}$ self–shielding, we use the local
column density from the “Sobolev–like” method described in WGHB11 and their
fitting formula for the optically–thick rate. (This fit has since been
updated by Wolcott-Green & Haiman (2019) to account for non-ground state
rovibrational populations; however, using the updated fit would not affect our
conclusions, since we are interested here in the change in $J_{\rm crit}$ due
to HI–shielding, rather than the precise value of the critical flux.) We assume
a blackbody incident radiation field at temperature $T=10^{5}$ K.
In order to determine the impact of HI–shielding, we first run realizations of
each halo with ${\rm H_{2}}$ self-shielding only, using the Newton-Raphson
method to find $J_{\rm crit}$, and then run each with HI–shielding included,
using our new fitting formula (§ 2.2).
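The search for $J_{\rm crit}$ can be sketched as a bracketing search in flux. The actual runs use Newton-Raphson; here the `cooled` predicate is a hypothetical placeholder standing in for one full simulation at a given flux:

```python
import numpy as np

def find_J_crit(cooled, J_lo, J_hi, rtol=0.05):
    """Bracketing search for the critical LW flux.

    `cooled(J)` stands in for a full simulation at flux J and reports
    whether runaway H2 cooling occurred (a hypothetical placeholder).
    J_lo must be a known sub-critical flux and J_hi a known
    super-critical one; the bracket shrinks geometrically until its
    relative width falls below rtol."""
    while J_hi / J_lo > 1.0 + rtol:
        J_mid = np.sqrt(J_lo * J_hi)   # geometric midpoint in flux
        if cooled(J_mid):
            J_lo = J_mid               # still sub-critical: raise the floor
        else:
            J_hi = J_mid               # cooling suppressed: lower the ceiling
    return J_hi
```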
We use the publicly-available package yt (YT10) for simulation data analysis
and visualization (yt-project.org). Throughout, we adopt the cosmological
parameters from the Planck 2018 collaboration (Planck Collaboration et al.,
2018), $\Omega_{\rm m}=0.315$, $\Omega_{\Lambda}=0.685$, $\Omega_{b}=0.0493$,
$h=0.674$, $\sigma_{8}=0.811$, and $n_{s}=0.965$.
### 2.2 HI–shielding of ${\bf H_{2}}$
In order to find the exact optically–thick photodissociation rate, we use the
method described in detail in WGH19 and summarized briefly here. The rate
calculation includes contributions from LW transitions originating in the 301
bound rovibrational levels of the electronic ground state. We use the spectral
synthesis code cloudy (Ferland et al., 2017) to model the ${\rm H_{2}}$
rovibrational populations at $T=(500-8000)$ K, densities $n=10^{(0-5)}~{\rm
cm^{-3}}$, $N_{\rm HI}=10^{(20-25)}~{\rm cm^{-2}}$, and $N_{\rm
H_{2}}=10^{(14-17)}~{\rm cm^{-2}}$. The fractional populations are then input
into the rate calculation for each density and temperature combination.
In order to determine the impact of HI–shielding, the rate is calculated with
and without HI Lyman series absorption for each ${\rm n,T,N_{H2}}$. We define
the dimensionless HI shield factor as:
$f_{\rm sh,HI}=\frac{k_{\rm diss}(n{\rm,T,N_{HI},N_{H2}})}{k_{\rm diss}(n,{\rm
T,N_{H2}})}.$ (1)
In order to develop our fit, we began with the form used in WH11,
$f_{\rm sh,HI}=\frac{\chi}{(1+x)^{\delta}}\exp{(-\alpha x)}$ (2)
which was used in that study for photodissociation of ${\rm H_{2}}$ in the
ground rovibrational state only; we modified the parameters using the downhill
simplex method (amoeba) provided in Numerical Recipes.
Here, $x=N_{\rm HI}/\zeta$ and our best fit parameters are:
$\displaystyle\alpha=1.45\times 10^{-1},\qquad\delta=1.5,\qquad\zeta=2.85\times 10^{23}~{\rm cm^{-2}},$
$\displaystyle\chi=\begin{cases}1,&N_{\rm HI}<10^{22}~{\rm cm^{-2}}\\ 0.95,&N_{\rm HI}\geq 10^{22}~{\rm cm^{-2}}\end{cases}$
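Equation (2) with these best-fit parameters is straightforward to implement; a minimal sketch (the numerical values are transcribed from the fit above):

```python
import numpy as np

# Best-fit parameters of the HI-shielding formula, Eq. (2)
ALPHA = 1.45e-1
DELTA = 1.5
ZETA = 2.85e23   # cm^-2

def f_shield_HI(N_HI):
    """HI-shielding factor for H2 photodissociation, Eq. (2);
    N_HI is the neutral hydrogen column density in cm^-2."""
    x = np.asarray(N_HI, dtype=float) / ZETA
    chi = np.where(np.asarray(N_HI) < 1e22, 1.0, 0.95)
    return chi / (1.0 + x) ** DELTA * np.exp(-ALPHA * x)
```

The factor is close to unity below $N_{\rm HI}\sim 10^{22}~{\rm cm^{-2}}$ and falls off steeply above it.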
Figure 1 shows the new HI-shielding factor fit, $f_{\rm fit}$ (Equation 2),
and the exact shielding factor from the full calculation, $f_{\rm exact}$, at
fixed $\log(n/{\rm{cm^{-3}}})=3$, $\log(N_{\rm{H_{2}}}/{\rm{cm^{-2}}})=16$ and
a range of temperatures. In the lower part of each panel is the ratio between
$f_{\rm fit}$ and $f_{\rm exact}$. The fitting formula for the shielding
function of ${\rm H_{2}}$ by HI is robust in the range of temperatures studied
here, $500-8000$ K, and is accurate to within a factor of two for column
densities $10^{20-24}~{\rm cm^{-2}}$.
## 3 Results
Table 1: Critical fluxes with and without HI–shielding in $J_{21}$ units. Virial masses and collapse redshifts are indicated for $J<J_{\rm crit}$ runs with HI–shielding.

Halo | $M/10^{7}M_{\odot}$ | $z_{\rm coll}$ | $T_{\rm vir}/\rm{K}$ | $J_{\rm crit}/10^{3}$ | $J_{\rm crit,HI}/10^{3}$
---|---|---|---|---|---
A | $2.8$ | $13.0$ | $8,296$ | $11$ | $22$
B | $8.2$ | $10.9$ | $14,418$ | $17$ | $32$
C | $2.4$ | $18.0$ | $10,231$ | $10$ | $16$
Figure 2 shows spherically–averaged radial profiles of density, temperature,
electron fraction, and ${\rm H_{2}}$ fraction for Halo B at the collapse
redshift. Results for both sub–critical ($J_{21}=20,000$) and super–critical
($J_{21}=32,000$) LW fluxes are shown. Our halos follow the well–known
behavior of ACHs cooling in the presence of a photodissociating flux: with
$J<J_{\rm crit}$, the ${\rm H_{2}}$ fraction in the halo’s dense core reaches
$\sim 10^{-3}$, the standard “freeze–out” value (Oh & Haiman, 2002); ${\rm
H_{2}}$ cooling is efficient and the gas temperature falls below $10^{3}$K in
the dense core. Irradiation by a super–critical flux results in a suppressed
${\rm H_{2}}$ fraction, $\sim 10^{-7}$, and the temperature remains at $T_{\rm
vir}\sim 10^{4}\rm{K}$ throughout the halo.
To determine $J_{\rm crit}$, we run the zoom-in simulations with varied levels
of incident $J_{\rm LW}$ to find the minimum flux that suppresses ${\rm
H_{2}}$–cooling and prevents the gas from cooling below the virial temperature. The
resulting $J_{\rm crit}$ values for each of the three haloes are listed in
Table 1. We find that the critical flux with HI–shielding of ${\rm H_{2}}$ is
$\sim$60-100 percent larger than without HI–shielding. The $J_{\rm crit,21}$
values without HI–shielding for these halos are within the range
$(10-17)\times 10^{3}$, comparable to those found in previous studies, and
with HI–shielding $J_{\rm crit,21}$ are within the range $(16-32)\times
10^{3}$. Figure 3 shows phase plots for both sub–critical and super–critical
fluxes for Halo B at the collapse redshift.
The increase in $J_{\rm crit}$ with HI–shielding of ${\rm H_{2}}$ indicates
that the neutral column densities in these ACHs are sufficient for Lyman
series absorption to be important, which occurs at $N_{\rm HI}\gtrsim
10^{22}~{\rm cm^{-2}}$ (see Figure 1). In order to verify this,
we find the column densities at the critical density (above which collisional
dissociation dominates the total ${\rm H_{2}}$ destruction rate) by summing
along sightlines extending from the densest point of the halo out to a radius
of $100$ pc. We show a histogram of 25 such sightlines in Figure 4 at the time
when the gas in the core has reached the critical density, just before runaway
cooling occurs. (Results are shown here for Halo B; those from the other two
halos are similar.) We see that indeed the neutral columns have reached the
threshold at which HI–shielding becomes significant.
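The sightline column densities can be sketched with a crude ray march on a uniform grid. This is an illustrative stand-alone version; `n_HI` as a 3D array of neutral densities in ${\rm cm^{-3}}$ and `dx` as the cell size in cm are assumptions of the sketch, not the simulation's data layout:

```python
import numpy as np

def sightline_columns(n_HI, dx, center, n_rays=25, seed=0):
    """Sum n_HI * dx cell by cell along isotropically drawn rays from
    `center` of a uniform grid until each ray leaves the grid, returning
    one column density [cm^-2] per ray."""
    rng = np.random.default_rng(seed)
    nx, ny, nz = n_HI.shape
    columns = []
    for _ in range(n_rays):
        v = rng.normal(size=3)
        v /= np.linalg.norm(v)                    # isotropic direction
        pos, N = np.array(center, dtype=float), 0.0
        while True:
            i, j, k = np.floor(pos).astype(int)
            if not (0 <= i < nx and 0 <= j < ny and 0 <= k < nz):
                break
            N += n_HI[i, j, k] * dx               # contribution of this cell
            pos = pos + v                         # step one cell length
        columns.append(N)
    return np.array(columns)
```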
## 4 Conclusions
In this study, we examined the impact of HI–shielding of ${\rm H_{2}}$ on the
thermal evolution of protogalactic atomic cooling haloes exposed to
photodissociating UV radiation using three–dimensional hydrodynamic
simulations. We find that incorporation of HI–shielding raised the value of
the critical flux to suppress ${\rm H_{2}}$–radiative cooling, $J_{\rm crit}$,
by $\sim 60-100\%$ in the three ACHs we studied. This increase may have
important implications for the predicted number of candidate halos that could
seed massive black holes at $z\sim 10$ via direct collapse, which is sensitive
to the critical flux.
We used an updated fitting formula to model the suppression of ${\rm H_{2}}$
photodissociation by HI, which can be used in future simulations. The modified
fitting function is accurate to within $\sim 30$ per cent at $T=500-8000$ K,
$n=10^{(0-5)}~{\rm cm^{-3}}$, and $N_{\rm HI}=10^{(20-24)}~{\rm cm^{-2}}$.
## Acknowledgments
We thank Zoltán Haiman and S. Peng Oh for helpful discussions during the
course of this work. Meredith Neyer acknowledges funding from an Edison STEM
summer research program grant at University of California Santa Barbara. This
material is based upon work supported by the National Science Foundation under
Award No. 1903935. This work used the Extreme Science and Engineering
Discovery Environment (XSEDE; allocation TG-PHY200043), which is supported by
National Science Foundation grant number ACI-1548562.
## 5 Data availability
The data underlying this paper will be shared on reasonable request to the
corresponding author.
## References
* Agarwal & Khochfar (2014) Agarwal B., Khochfar S., 2014, MNRAS, submitted, e-print ArXiv:1407.4115
* Agarwal et al. (2012) Agarwal B., Khochfar S., Johnson J. L., Neistein E., Dalla Vecchia C., Livio M., 2012, MNRAS, 425, 2854
* Ahn et al. (2009) Ahn K., Shapiro P. R., Iliev I. T., Mellema G., Pen U., 2009, ApJ, 695, 1430
* Behroozi et al. (2013) Behroozi P. S., Wechsler R. H., Wu H.-Y., 2013, ApJ, 762, 109
* Bromm & Yoshida (2011) Bromm V., Yoshida N., 2011, ARA&A, 49, 373
* Bryan et al. (2014) Bryan G. L., Norman M. L., O’Shea B. W., Abel T., Wise J. H., Turk M. J., Reynolds D. R., Collins D. C., Wang P., Skillman S. W., 2014, ApJS, 211, 19
* Chon et al. (2016) Chon S., Hirano S., Hosokawa T., Yoshida N., 2016, ApJ, 832, 134
* Dijkstra et al. (2014) Dijkstra M., Ferrara A., Mesinger A., 2014, MNRAS, 442, 2036
* Dijkstra et al. (2008) Dijkstra M., Haiman Z., Mesinger A., Wyithe J. S. B., 2008, MNRAS, 391, 1961
* Draine & Bertoldi (1996) Draine B. T., Bertoldi F., 1996, ApJ, 468, 269
* Fan et al. (2001) Fan X., Narayanan V. K., Lupton R. H., Strauss M. A., Knapp G. R., Becker R. H., White R. L., Pentericci L., Leggett S. K., Haiman Z., Gunn J. E., Ivezić Ž., Schneider D. P., Anderson S. F., Brinkmann J., Bahcall N. A., Connolly A. J., Csabai I., Doi M., Fukugita M., Geballe T., Grebel E. K., Harbeck D., Hennessy G., Lamb D. Q., Miknaitis G., Munn J. A., Nichol R., Okamura S., Pier J. R., Prada F., Richards G. T., Szalay A., York D. G., 2001, AJ, 122, 2833
* Fan et al. (2003) Fan X., Strauss M. A., Schneider D. P., Becker R. H., White R. L., Haiman Z., Gregg M., Pentericci L., Grebel E. K., Narayanan V. K., Loh Y.-S., Richards G. T., Gunn J. E., Lupton R. H., Knapp G. R., Ivezić Ž., Brandt W. N., Collinge M., Hao L., Harbeck D., Prada F., Schaye J., Strateva I., Zakamska N., Anderson S., Brinkmann J., Bahcall N. A., Lamb D. Q., Okamura S., Szalay A., York D. G., 2003, AJ, 125, 1649
* Ferland et al. (2017) Ferland G. J., Chatzikos M., Guzmán F., Lykins M. L., van Hoof P. A. M., Williams R. J. R., Abel N. P., Badnell N. R., Keenan F. P., Porter R. L., Stancil P. C., 2017, Rev. Mex. Astron. Astrofis, 53, 385
* Galli & Palla (1998) Galli D., Palla F., 1998, A&A, 335, 403
* Haemmerlé et al. (2018) Haemmerlé L., Woods T. E., Klessen R. S., Heger A., Whalen D. J., 2018, MNRAS, 474, 2757
* Hahn & Abel (2011) Hahn O., Abel T., 2011, MNRAS, 415, 2101
* Haiman et al. (2000) Haiman Z., Abel T., Rees M. J., 2000, ApJ, 534, 11
* Haiman et al. (1997) Haiman Z., Rees M. J., Loeb A., 1997, ApJ, 476, 458
* Hartwig et al. (2015) Hartwig T., Glover S. C. O., Klessen R. S., Latif M. A., Volonteri M., 2015, MNRAS, 452, 1233
* Inayoshi et al. (2020) Inayoshi K., Visbal E., Haiman Z., 2020, ARA&A, 58, 27
* Kulkarni et al. (2020) Kulkarni M., Visbal E., Bryan G. L., 2020, arXiv e-prints, p. arXiv:2010.04169
* Machacek et al. (2001) Machacek M. E., Bryan G. L., Abel T., 2001, ApJ, 548, 509
* Mazzucchelli et al. (2017) Mazzucchelli C., Bañados E., Venemans B. P., Decarli R., Farina E. P., Walter F., Eilers A. C., Rix H. W., Simcoe R., Stern D., Fan X., Schlafly E., De Rosa G., Hennawi J., Chambers K. C., Greiner J., Burgett W., Draper P. W., Kaiser N., Kudritzki R. P., Magnier E., Metcalfe N., Waters C., Wainscoat R. J., 2017, ApJ, 849, 91
* Mesinger et al. (2006) Mesinger A., Bryan G. L., Haiman Z., 2006, ApJ, 648, 835
* Morganson et al. (2012) Morganson E., De Rosa G., Decarli R., Walter F., Chambers K., McGreer I., Fan X., Burgett W., Flewelling H., Greiner J., Hodapp K., Kaiser N., Magnier E., Price P., Rix H.-W., Sweeney B., Waters C., 2012, AJ, 143, 142
* Oh & Haiman (2002) Oh S. P., Haiman Z., 2002, ApJ, 569, 558
* Omukai (2001) Omukai K., 2001, ApJ, 546, 635
* O’Shea & Norman (2008) O’Shea B. W., Norman M. L., 2008, ApJ, 673, 14
* Planck Collaboration et al. (2018) Planck Collaboration Aghanim N., Akrami Y., Ashdown M., Aumont J., Baccigalupi C., Ballardini M., Banday A. J., Barreiro R. B., Bartolo N., 2018, A&A, submitted, e-print arXiv:1807.06209
* Regan et al. (2017) Regan J. A., Visbal E., Wise J. H., Haiman Z., Johansson P. H., Bryan G. L., 2017, Nature Astronomy, 1, 0075
* Schauer et al. (2017) Schauer A. T. P., Agarwal B., Glover S. C. O., Klessen R. S., Latif M. A., Mas-Ribas L., Rydberg C.-E., Whalen D. J., Zackrisson E., 2017, MNRAS, 467, 2288
* Schauer et al. (2015) Schauer A. T. P., Whalen D. J., Glover S. C. O., Klessen R. S., 2015, MNRAS, 454, 2441
* Shang et al. (2010) Shang C., Bryan G. L., Haiman Z., 2010, MNRAS, 402, 1249
* Sugimura et al. (2014) Sugimura K., Omukai K., Inoue A. K., 2014, MNRAS, 445, 544
* Truelove et al. (1997) Truelove J. K., Klein R. I., McKee C. F., Holliman John H. I., Howell L. H., Greenough J. A., 1997, ApJL, 489, L179
* Visbal et al. (2014) Visbal E., Haiman Z., Bryan G. L., 2014, MNRAS, 445, 1056
* Wang et al. (2021) Wang F., Yang J., Fan X., Hennawi J. F., Barth A. J., Banados E., Bian F., Boutsia K., Connor T., Davies F. B., Decarli R., Eilers A.-C., Farina E. P., Green R., Jiang L., Li J.-T., Mazzucchelli C., Nanni R., Schindler J.-T., Venemans B., Walter F., Wu X.-B., Yue M., 2021, ApJL, 907, L1
* Wang et al. (2019) Wang F., Yang J., Fan X., Wu X.-B., Yue M., Li J.-T., Bian F., Jiang L., Bañados E., Schindler J.-T., Findlay J. R., Davies F. B., Decarli R., Farina E. P., Green R., Hennawi J. F., Huang Y.-H., Mazzuccheli C., McGreer I. D., Venemans B., Walter F., Dye S., Lyke B. W., Myers A. D., Haze Nunez E., 2019, ApJ, 884, 30
* Wise & Abel (2007) Wise J. H., Abel T., 2007, ApJ, 671, 1559
* Wolcott-Green & Haiman (2011) Wolcott-Green J., Haiman Z., 2011, MNRAS, 412, 2603
* Wolcott-Green & Haiman (2019) Wolcott-Green J., Haiman Z., 2019, MNRAS, 484, 2467
* Wolcott-Green et al. (2011) Wolcott-Green J., Haiman Z., Bryan G. L., 2011, MNRAS, 418, 838
* Wolcott-Green et al. (2017) Wolcott-Green J., Haiman Z., Bryan G. L., 2017, MNRAS, 469, 3329
* Wolcott-Green et al. (2021) Wolcott-Green J., Haiman Z., Bryan G. L., 2021, MNRAS, 500, 138
* Yang et al. (2019) Yang J., Wang F., Fan X., Yue M., Wu X.-B., Li J.-T., Bian F., Jiang L., Bañados E., Beletsky Y., 2019, AJ, 157, 236
* Yoshida et al. (2003) Yoshida N., Abel T., Hernquist L., Sugiyama N., 2003, ApJ, 592, 645
# Efficient Penalized Generalized Linear Mixed Models for Variable Selection
and Genetic Risk Prediction in High-Dimensional Data
JULIEN ST-PIERRE
Department of Epidemiology, Biostatistics and Occupational Health,
McGill University, Montreal, Quebec, Canada
<EMAIL_ADDRESS>
KARIM OUALKACHA
Département de Mathématiques,
Université du Québec à Montréal, Montreal, Quebec, Canada
SAHIR RAI BHATNAGAR
Department of Epidemiology, Biostatistics and Occupational Health,
McGill University, Montreal, Quebec, Canada To whom correspondence should be
addressed.
###### Abstract
Sparse regularized regression methods are now widely used in genome-wide
association studies (GWAS) to address the multiple testing burden that limits
discovery of potentially important predictors. Linear mixed models (LMMs) have
become an attractive alternative to principal components (PC) adjustment to
account for population structure and relatedness in high-dimensional penalized
models. However, their use in binary trait GWAS relies on the invalid assumption
that the residual variance does not depend on the estimated regression
coefficients. Moreover, LMMs use a single spectral decomposition of the
covariance matrix of the responses, which is no longer possible in generalized
linear mixed models (GLMMs). We introduce a new method called pglmm, a
penalized GLMM that simultaneously selects genetic markers and estimates
their effects, accounting for between-individual correlations and the
binary nature of the trait. We develop a computationally efficient algorithm
based on PQL estimation that scales regularized mixed models to high-
dimensional binary trait GWAS ($\sim 300,000$ SNPs). We show through
simulations that, compared to pglmm, penalized LMMs and logistic regression
with PC adjustment fail to correctly select important predictors and/or lose
prediction accuracy for a binary response when the dimensionality of the
relatedness matrix is high. Further, we demonstrate through the analysis
of two polygenic binary traits in the UK Biobank data that our method can
achieve higher predictive performance, while also selecting fewer predictors
than a sparse regularized logistic lasso with PC adjustment. Our method is
available as a Julia package PenalizedGLMM.jl.
## 1 Introduction
Genome-wide association studies (GWAS) have led to the identification of
hundreds of common genetic variants, or single nucleotide polymorphisms
(SNPs), associated with complex traits (Visscher et al., 2017) and are
typically conducted by testing association on each SNP independently. However,
these studies are plagued with the multiple testing burden that limits
discovery of potentially important predictors. Moreover, GWAS have brought to
light the problem of missing heritability, that is, identified variants only
explain a low fraction of the total observed variability for traits under
study (Manolio et al., 2009). Multivariable regression methods, on the other
hand, simultaneously fit many SNPs in a single model and are exempt from the
multiple testing burden. In both simulations and analysis of high-dimensional
data, sparse regularized logistic models have shown to achieve lower false-
positive rates and higher precision than methods based on univariable GWAS
summary statistics in case-control studies (Hoggart et al.,, 2008; Privé et
al.,, 2019). Contrary to univariable methods which implicitly assume that SNPs
are independent, a regularized model makes use of the linkage disequilibrium
(LD) structure between different loci, assigning weights to SNPs based on
their relative importance after accounting for all other SNPs already in the
model.
Confounding due to population structure or subject relatedness is another
major issue in genetic association studies. Modern large scale cohorts will
often include participants from different ethnic groups as well as admixed
individuals, that is, subjects with individual-specific proportions of
ancestries, or individuals with known or unknown familial relatedness, defined
as cryptic relatedness (Sul et al., 2018). Confounding comes from the fact
that allele frequencies can differ greatly between individuals who do not
share similar ancestry. When ignored, population structure and subject
relatedness can decrease power and lead to spurious associations (Astle and
Balding, 2009; Price et al., 2010). Common practice is still to drop samples
by applying filters for relatedness or genetic ancestry, which can result in
decreasing the sample size by nearly 30% (Loh et al., 2018) in the full UK
Biobank data set (Bycroft et al., 2018).
Principal component analysis (PCA) can control for the confounding effect due
to population structure by including the top eigenvectors of a genetic
similarity matrix (GSM) as fixed effects in the regression model (Price et
al., 2006). With admixture and population structure being low dimensional
fixed-effects processes, they can correctly be accounted for by using a
relatively small number of PCs (e.g. 10) (Astle and Balding, 2009; Novembre
and Stephens, 2008). However, using too few PCs can result in residual bias
leading to false positives, while adding too many PCs as covariates can lead
to a loss of efficiency (Zhao et al., 2018). Alternatively, using mixed
models (MMs), one can model population structure and/or closer relatedness by
including a polygenic random effect with variance-covariance structure
proportional to the GSM (Yu et al., 2005). Indeed, kinship is a high-
dimensional process, such that it cannot be fully captured by a few PCs
(Hoffman, 2013). Thus, it would require the inclusion of too many PCs as
covariates, relative to the dimension of the sample size. Hence, while both
PCA and MMs share the same underlying model, MMs are more robust in the sense
that they do not require distinguishing between the different types of
confounders (Price et al., 2010). Moreover, MMs alleviate the need to
evaluate the optimal number of PCs to retain in the model as fixed effects.
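The PC-adjustment baseline described above can be sketched as follows: build a genetic similarity matrix from a standardized genotype matrix and take its leading eigenvectors. This is an illustrative stand-alone version, not the code of any package discussed here:

```python
import numpy as np

def top_pcs(G, k=10):
    """Leading k principal components of the genetic similarity matrix
    built from an n x p genotype matrix G of 0/1/2 minor-allele counts."""
    sd = G.std(axis=0)
    sd[sd == 0] = 1.0                      # guard against monomorphic SNPs
    Gs = (G - G.mean(axis=0)) / sd         # standardize each SNP
    K = Gs @ Gs.T / G.shape[1]             # genetic similarity matrix (GSM)
    evals, evecs = np.linalg.eigh(K)       # eigenvalues in ascending order
    return evecs[:, ::-1][:, :k]           # top-k eigenvectors as columns
```

The returned columns would then enter the regression model as fixed-effect covariates.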
Several authors have proposed to combine penalized quasi-likelihood (PQL)
estimation with sparsity inducing regularization to perform selection of fixed
and/or random effects in generalized linear mixed models (GLMMs) (Groll and
Tutz, 2014; Hui et al., 2017). However, none of these methods are currently
scalable for modern large-scale genome-wide data, nor can they directly
incorporate relatedness structure through the use of a kinship matrix. Indeed,
the computational efficiency of recent multivariable methods for high-
dimensional MMs rely on performing a spectral decomposition of the covariance
matrix to rotate the phenotype and design matrix such that the transformed
data become uncorrelated (Bhatnagar et al., 2020b; Rakitsch et al., 2012).
These methods are typically restricted to linear models since in GLMMs, it is
no longer possible to perform a single spectral decomposition to rotate the
phenotype and design matrix, as the covariance matrix depends on the sample
weights which in turn depend on the estimated regression coefficients that are
being iteratively updated. This limits the application of high-dimensional MMs
to analysis of binary traits in genetic association studies.
In this paper, we introduce a new method called pglmm that simultaneously
selects variables and estimates their effects, accounting for
between-individual correlations and the binary nature of the trait. We develop a
scalable algorithm based on PQL estimation which makes it possible, for the
first time, to fit penalized GLMMs on high-dimensional GWAS data. To speed up
computation, we first estimate the variance components and dispersion parameter
of the model under the null hypothesis of no genetic effect. Second, we use
upper-bound for the inverse variance-covariance matrix in order to perform a
single spectral decomposition of the GSM and greatly reduce memory usage.
Finally, we implement an efficient block coordinate descent algorithm in order
to find the optimal estimates for the fixed and random effects parameters. Our
method is implemented in an open source Julia programming language (Bezanson
et al., 2017) package called PenalizedGLMM.jl and freely available at
https://github.com/julstpierre/PenalizedGLMM.
The rest of this paper is structured as follows. In Section 2 we present our
model and describe the block coordinate gradient descent algorithm that is
used to estimate the model parameters. We also discuss several approaches to
select the optimal tuning parameter in regularized models, and we detail how
predictions are obtained in GLMs with PC adjustment versus our proposed mixed
model. In Section 3, we show through simulations that both LMM and logistic
model with PC adjustment fail to correctly select important predictors and
estimate their effects when the dimensionality of the kinship matrix is high.
Further, we demonstrate through the analysis of two polygenic binary traits in
the UKBB data that our method achieves higher predictive performance, while
also selecting consistently fewer predictors than a logistic lasso with PC
adjustment. We finish with a discussion of our results, some limitations and
future directions in Section 4.
## 2 Methods
### 2.1 Model
We consider the following GLMM
$\displaystyle
g(\mu_{i})=\eta_{i}=\bm{X}_{i}\bm{\alpha}+\bm{G}_{i}\bm{\gamma}+b_{i},$ (1)
for $i=1,..,n$, where
$\mu_{i}=\mathbb{E}(y_{i}|\bm{X}_{i},\bm{G}_{i},b_{i})$, $\bm{X}_{i}$ is a
$1\times m$ row vector of covariates for subject $i$, $\bm{\alpha}$ is a
$m\times 1$ column vector of fixed covariate effects including the intercept,
$\bm{G}_{i}$ is a $1\times p$ row vector of genotypes for subject $i$ taking
values $\\{0,1,2\\}$ as the number of copies of the minor allele, and
$\bm{\gamma}$ is a $p\times 1$ column vector of fixed additive genotype
effects. We assume that
$\bm{b}=(b_{1},...,b_{n})^{\intercal}\sim\mathcal{N}(0,\sum_{s=1}^{S}\tau_{s}\bm{V}_{s})$
is an $n\times 1$ column vector of random effects,
$\bm{\tau}=(\tau_{1},...,\tau_{S})^{\intercal}$ are variance component
parameters, and $\bm{V}_{s}$ are known $n\times n$ relatedness matrices. The
phenotypes $y_{i}$ are assumed to be conditionally independent and identically
distributed given $(\bm{X}_{i},\bm{G}_{i},\bm{b})$ and follow any exponential
family distribution with canonical link function $g(\cdot)$, mean
$\mathbb{E}(y_{i}|\bm{b})=\mu_{i}$ and variance $\text{Var}(y_{i}|\bm{b})=\phi
a_{i}^{-1}\nu(\mu_{i}),$ where $\phi$ is a dispersion parameter, $a_{i}$ are
known weights and $\nu(\cdot)$ is the variance function. In order to estimate
the parameters of interest and perform variable selection, we need to use an
approximation method to obtain a closed analytical form for the marginal
likelihood of model (1). Following the derivation of Chen et al. (2016), we
propose to fit (1) using a penalized quasi-likelihood (PQL) method, from where
the log integrated quasi-likelihood function is equal to
$\displaystyle
ql(\bm{\alpha},\bm{\gamma},\phi,\bm{\tau})=-\frac{1}{2}\text{log}\left|\sum_{s=1}^{S}\tau_{s}\bm{V}_{s}\bm{W}+\bm{I}_{n}\right|+\sum_{i=1}^{n}ql_{i}(\bm{\alpha},\bm{\gamma}|\bm{\tilde{b}})-\frac{1}{2}\bm{\tilde{b}}^{\intercal}\left(\sum_{s=1}^{S}\tau_{s}\bm{V}_{s}\right)^{-1}\bm{\tilde{b}},$
(2)
where
$\bm{W}=\textrm{diag}\left\\{\frac{a_{i}}{\phi\nu(\mu_{i})[g^{\prime}(\mu_{i})]^{2}}\right\\}$
is a diagonal matrix containing weights for each observation,
$ql_{i}(\bm{\alpha,\gamma}|\bm{b})=\int_{y_{i}}^{\mu_{i}}\frac{a_{i}(y_{i}-\mu)}{\phi\nu(\mu)}d\mu$
is the quasi-likelihood for the $i$th individual given the random effects
$\bm{b}$, and $\tilde{\bm{b}}$ is the solution which maximizes (2).
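To make model (1) concrete, the binary-trait case with a logit link and a single relatedness matrix ($S=1$) can be simulated as follows. All dimensions, the toy block-diagonal kinship, and the parameter values are arbitrary illustrations, not settings used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, p = 200, 2, 50                         # subjects, covariates, SNPs
X = rng.normal(size=(n, m))                  # covariate matrix
Gmat = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 genotypes

# toy kinship: 50 families of 4, within-family correlation 0.5
V = np.kron(np.eye(n // 4), 0.5 * np.ones((4, 4)) + 0.5 * np.eye(4))
tau = 1.0
b = rng.multivariate_normal(np.zeros(n), tau * V)      # polygenic random effect

alpha = np.array([0.5, -0.3])                # fixed covariate effects
gamma = np.zeros(p)
gamma[:3] = [0.8, -0.8, 0.5]                 # a sparse set of causal SNPs

eta = X @ alpha + Gmat @ gamma + b           # linear predictor, Eq. (1)
mu = 1.0 / (1.0 + np.exp(-eta))              # inverse logit link
y = rng.binomial(1, mu)                      # binary phenotype
```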
In typical genome-wide studies, the number of predictors is much greater than
the number of observations ($p>n$), and the parameter vector $\bm{\gamma}$
becomes underdetermined when modelling all SNPs jointly. Thus, we propose to
add a lasso regularization term (Tibshirani, 1996) to the negative quasi-
likelihood function in (2) to seek a sparse subset of $\bm{\gamma}$ that gives
an adequate fit to the data. Because
$ql(\bm{\alpha},\bm{\gamma},\phi,\bm{\tau})$ is a non-convex loss function, we
propose a two-step estimation method to reduce the computational complexity.
First, we obtain the variance component estimates $\hat{\phi}$ and
$\bm{\hat{\tau}}$ under the null hypothesis of no genetic effect
($\bm{\gamma}=\bm{0}$) using the AI-REML algorithm (Gilmour et al., 1995;
Chen et al., 2016) detailed in Appendix A of the supplementary material.
Assuming that the weights in $\bm{W}$ vary slowly with the conditional mean,
we drop the first term in (2) (Breslow and Clayton, 1993) and define the
following objective function which we seek to minimize with respect to
$(\bm{\alpha},\bm{\gamma},\tilde{\bm{b}})$:
$\displaystyle(\hat{\bm{\alpha}},\hat{\bm{\gamma}},\hat{\bm{b}})$
$\displaystyle=\underset{\bm{\alpha},\bm{\gamma},\tilde{\bm{b}}}{\text{argmin
}}Q_{\lambda}(\bm{\alpha},\bm{\gamma},\tilde{\bm{b}}),$ $\displaystyle
Q_{\lambda}(\bm{\alpha},\bm{\gamma},\tilde{\bm{b}})$
$\displaystyle=-\sum_{i=1}^{n}ql_{i}(\bm{\alpha},\bm{\gamma}|\bm{\tilde{b}})+\frac{1}{2}\bm{\tilde{b}}^{\intercal}\left(\sum_{s=1}^{S}\hat{\tau}_{s}\bm{V}_{s}\right)^{-1}\bm{\tilde{b}}+\lambda\sum_{j}v_{j}|\gamma_{j}|$
$\displaystyle:=-\ell_{PQL}(\bm{\alpha},\bm{\gamma},\hat{\phi},\hat{\bm{\tau}})+\lambda\sum_{j}v_{j}|\gamma_{j}|,$
(3)
where $\lambda$ is a nonnegative regularization parameter, and $v_{j}$ is a
penalty factor for the $j$th predictor.
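The lasso penalty enters the coordinate-wise updates through the soft-thresholding operator, the proximal map of $\lambda v_{j}|\gamma_{j}|$; a minimal sketch:

```python
import numpy as np

def soft_threshold(z, t):
    """S(z, t) = sign(z) * max(|z| - t, 0), the proximal operator of the
    absolute-value penalty used in each lasso coordinate update."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
```

In a coordinate descent step, the unpenalized univariate solution $z_{j}$ for $\gamma_{j}$ is shrunk to $S(z_{j},\lambda v_{j})$, up to the usual curvature normalization.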
In Appendix B, we detail our proposed general purpose block coordinate
gradient descent (CGD) algorithm to solve (3) and obtain regularized PQL
estimates for $\bm{\alpha},\bm{\gamma}$ and $\tilde{\bm{b}}$. Briefly, our
algorithm is equivalent to iteratively solve the two penalized generalized
least squares (GLS)
$\underset{\tilde{\bm{b}}}{\textrm{argmin}}\left(\tilde{\bm{Y}}-\tilde{\bm{X}}{\bm{\beta}}-\tilde{\bm{b}}\right)^{\intercal}\bm{W}^{-1}\left(\tilde{\bm{Y}}-\tilde{\bm{X}}{\bm{\beta}}-\tilde{\bm{b}}\right)+\tilde{\bm{b}}^{\intercal}\left(\sum_{s=1}^{S}\hat{\tau}_{s}\bm{V}_{s}\right)^{-1}\tilde{\bm{b}},$
and
$\underset{\bm{\beta}}{\textrm{argmin}}\left(\tilde{\bm{Y}}-\tilde{\bm{X}}\bm{\beta}\right)^{\intercal}\bm{\Sigma}^{-1}\left(\tilde{\bm{Y}}-\tilde{\bm{X}}\bm{\beta}\right)+\lambda\sum_{j}v_{j}|\beta_{j}|,$
where $\bm{\Sigma}=\bm{W}^{-1}+\sum_{s=1}^{S}\hat{\tau}_{s}\bm{V}_{s}$ is the
covariance matrix of the working response vector $\tilde{\bm{Y}}$,
$\tilde{\bm{X}}=\left[\bm{X};\ \bm{G}\right]$ and
$\bm{\beta}=(\bm{\alpha}^{\intercal},\bm{\gamma}^{\intercal})^{\intercal}$. We
use the spectral decomposition of $\bm{\Sigma}$ to rotate $\tilde{\bm{Y}}$,
$\tilde{\bm{X}}$ and $\tilde{\bm{b}}$ such that the transformed data become
uncorrelated. For binary data, because the covariance matrix $\bm{\Sigma}$
depends on the sample weights $\bm{W}$, we use an upper-bound on
$\bm{\Sigma}^{-1}$ to ensure a single spectral decomposition is performed
(Böhning and Lindsay, 1988). By cycling through the coordinates and
minimizing the objective function with respect to one parameter at a time,
$\tilde{\bm{b}}$ can be estimated by fitting a generalized ridge-like model
with a diagonal penalty matrix equal to the inverse of the eigenvalues of
$\sum_{s=1}^{S}\hat{\tau}_{s}\bm{V}_{s}$. Then, conditional on
$\tilde{\bm{b}}$, ${\bm{\beta}}$ is estimated by solving a weighted least
squares (WLS) problem with a lasso regularization term. All calculations and
algorithmic steps are detailed in Appendix B.
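As an illustration, the following Python sketch mimics the alternation described above on synthetic data, with an identity link, unit working weights and a single variance component; the function names and all numerical values are hypothetical, and this is a toy sketch rather than the PenalizedGLMM implementation:

```python
import numpy as np

def soft_threshold(z, t):
    """Lasso soft-thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def regularized_pql_sketch(Y, X, V, tau, lam, n_iter=50):
    """Toy two-step alternation: a ridge-like update for the random effects b
    (penalized by the inverse of tau*V) and a lasso coordinate-descent update
    for the fixed effects beta. Identity link and W = I for simplicity."""
    n, p = X.shape
    d, U = np.linalg.eigh(V)            # spectral decomposition V = U diag(d) U^T
    d = np.clip(d, 1e-8, None)          # guard against tiny negative eigenvalues
    beta, b = np.zeros(p), np.zeros(n)
    for _ in range(n_iter):
        # Ridge update of b in the rotated (uncorrelated) coordinates:
        # argmin_b ||r - b||^2 + b^T (tau V)^{-1} b, with r the partial residual
        r = Y - X @ beta
        b = U @ ((tau * d / (tau * d + 1.0)) * (U.T @ r))
        # Lasso coordinate descent for beta on the partial residuals Y - b
        resid = Y - b - X @ beta
        for j in range(p):
            resid += X[:, j] * beta[j]                 # add back column j
            zj = X[:, j] @ resid / n
            beta[j] = soft_threshold(zj, lam) / (X[:, j] @ X[:, j] / n)
            resid -= X[:, j] * beta[j]
    return beta, b

rng = np.random.default_rng(1)
n, p = 200, 50
X = rng.standard_normal((n, p))
A = rng.standard_normal((n, n))
V = A @ A.T / n                         # synthetic similarity matrix
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
Y = X @ beta_true + rng.standard_normal(n)
beta_hat, b_hat = regularized_pql_sketch(Y, X, V, tau=0.5, lam=0.1)
print(np.flatnonzero(np.abs(beta_hat) > 0.5))   # indices of the strongest recovered signals
```

The diagonal shrinkage factors $\tau d_{i}/(\tau d_{i}+1)$ in the random-effects update are the rotated-coordinate analogue of the generalized ridge penalty with matrix $\left(\sum_{s}\hat{\tau}_{s}\bm{V}_{s}\right)^{-1}$.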
### 2.2 Model selection
Approaches to selecting the optimal tuning parameter in regularized models are
of primary interest since in real data analysis, the underlying true model is
unknown. A popular strategy is to select the value of the tuning parameter
that minimizes out-of-sample prediction error, e.g., cross-validation (CV),
which is asymptotically equivalent to the Akaike information criterion (AIC)
(Akaike, 1998; Yang, 2005). While conceptually attractive, CV becomes
computationally expensive for very high-dimensional data. Moreover, in studies
where the proportion of related subjects is high, whether through known or
cryptic relatedness, the CV prediction error is no longer an unbiased
estimator of the generalization error (Rabinowicz and Rosset, 2020). Through
simulation studies and real data analysis, Wang et al. (2020) found that
differences in LD and minor allele frequencies (MAF) between ancestries could
explain between 70 and 80% of the loss of relative accuracy of European-based
prediction models in individuals of African ancestry for traits like body mass
index and type 2 diabetes. Thus, there is no clear approach to how multiple
admixed and/or similar populations should be split when using CV to minimize
out-of-sample prediction error.
Alternatively, we can use the generalized information criterion (GIC) to
choose the optimal value of the tuning parameter $\lambda$, defined as
$\displaystyle\textrm{GIC}_{\lambda}=-2\ell_{PQL}+a_{n}\cdot\hat{df}_{\lambda},$
(4)
where $\ell_{PQL}$ is defined in (2.1), and $\hat{df}_{\lambda}=|\\{1\leq
k\leq p:\hat{\beta}_{k}\neq 0\\}|+\textrm{dim}(\hat{\bm{\tau}})$ is the number
of nonzero fixed-effects coefficients (Zou et al., 2007) plus the number of
variance components. Special cases of the GIC include AIC ($a_{n}=2$) and the
Bayesian information criterion (BIC) (Schwarz, 1978) ($a_{n}=\log(n)$).
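As a toy illustration of (4), the sketch below scores a hypothetical solution path under the BIC penalty $a_{n}=\log(n)$; the log-quasi-likelihood values and degrees of freedom are made-up numbers:

```python
import numpy as np

def gic(loglik, df, n, criterion="BIC"):
    """GIC = -2*loglik + a_n * df, with a_n = 2 (AIC) or a_n = log(n) (BIC)."""
    a_n = 2.0 if criterion == "AIC" else np.log(n)
    return -2.0 * loglik + a_n * df

# Hypothetical solution path over a lambda grid: each entry is the PQL
# log-quasi-likelihood and df = nonzero fixed effects + variance components.
n = 1000
path = [(-520.0, 3 + 1), (-495.0, 8 + 1), (-488.0, 20 + 1), (-486.0, 45 + 1)]
scores = [gic(ll, df, n, "BIC") for ll, df in path]
best = int(np.argmin(scores))
print(best)   # BIC favors the sparse second model over the denser fits
```

Switching the criterion to `"AIC"` penalizes each coefficient by only 2 instead of $\log(1000)\approx 6.9$, which typically moves the optimum toward a larger model.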
### 2.3 Prediction
It is often of interest in genetic association studies to make predictions on
a new set of individuals, e.g., the genetic risk of developing a disease for a
binary response or the expected outcome in the case of a continuous response.
In what follows, we compare how predictions are obtained in pglmm versus a GLM
with PC adjustment.
#### 2.3.1 pglmm
For the sake of comparison with the GLM with PC adjustment, we suppose a
sampling design where a single variance component is needed such that
$\bm{b}\sim\mathcal{N}(\bm{0},\tau_{1}\bm{V_{1}})$ where $\bm{V_{1}}$ is the
GSM between the $n$ subjects used to fit the GLMM (1). We iteratively fit
the working linear mixed model on a training set:
$\tilde{\bm{Y}}=\tilde{\bm{X}}\bm{\beta}+\bm{b}+\bm{\epsilon},$
where
$\bm{\epsilon}=g^{\prime}(\bm{\mu})(\bm{y}-\bm{\mu})\sim\mathcal{N}(0,\bm{W}^{-1})$.
Let $\tilde{\bm{Y}}_{s}$ be the latent working vector in a set of individuals
with predictor set $\tilde{\bm{X}}_{s}$ that were not used in the model
training, and let $n_{s}$ denote the number of observations in the testing set and $n$
the number of observations in the training set. Similarly to Bhatnagar et al.
(2020b), we assume that the marginal joint distribution of
$\tilde{\bm{Y}}_{s}$ and $\tilde{\bm{Y}}$ is multivariate normal:
$\displaystyle\begin{bmatrix}\tilde{\bm{Y}}_{s}\\\
\tilde{\bm{Y}}\end{bmatrix}\sim\mathcal{N}\left(\begin{bmatrix}\tilde{\bm{X}}_{s}\bm{\beta}\\\
\tilde{\bm{X}}\bm{\beta}\end{bmatrix},\begin{bmatrix}\bm{\Sigma}_{11}&\bm{\Sigma}_{12}\\\
\bm{\Sigma}_{21}&\bm{\Sigma}_{22}\end{bmatrix}\right),$
where $\bm{\Sigma}_{12}=\tau_{1}\bm{V}_{12}$ and $\bm{V}_{12}$ is the
$n_{s}\times n$ GSM between the testing and training individuals. It follows
from standard normal theory that
$\displaystyle\tilde{\bm{Y}}_{s}|\tilde{\bm{Y}},\phi,\tau_{1},\bm{\beta},\tilde{\bm{X}},\tilde{\bm{X}}_{s}\sim\mathcal{N}\left(\tilde{\bm{X}}_{s}\bm{\beta}+\bm{\Sigma}_{12}\bm{\Sigma}_{22}^{-1}(\tilde{\bm{Y}}-\tilde{\bm{X}}\bm{\beta}),\bm{\Sigma}_{11}-\bm{\Sigma}_{12}\bm{\Sigma}_{22}^{-1}\bm{\Sigma}_{21}\right).$
The estimated mean response $\hat{{\bm{\mu}}}_{s}$ for the testing set is
given by
$\displaystyle
g^{-1}\left(\mathbb{E}[\tilde{\bm{Y}}_{s}|\tilde{\bm{Y}},\hat{\phi},\hat{\tau}_{1},\hat{\bm{\beta}},\tilde{\bm{X}},\tilde{\bm{X}}_{s}]\right)$
$\displaystyle=g^{-1}\left(\tilde{\bm{X}}_{s}\hat{\bm{\beta}}+\bm{\Sigma}_{12}\bm{\Sigma}_{22}^{-1}(\tilde{\bm{Y}}-\tilde{\bm{X}}\hat{\bm{\beta}})\right)$
$\displaystyle=g^{-1}\left(\tilde{\bm{X}}_{s}\hat{\bm{\beta}}+\hat{\tau}_{1}\bm{V}_{12}\left(\bm{W}^{-1}+\hat{\tau}_{1}\bm{V}_{1}\right)^{-1}(\tilde{\bm{Y}}-\tilde{\bm{X}}\hat{\bm{\beta}})\right)$
$\displaystyle=g^{-1}\left(\tilde{\bm{X}}_{s}\hat{\bm{\beta}}+\bm{V}_{12}\bm{U}\left(\frac{1}{\hat{\tau}_{1}}\bm{D}+\tilde{\bm{U}}^{\intercal}\bm{W}\tilde{\bm{U}}\right)^{-1}\tilde{\bm{U}}^{\intercal}\bm{W}(\tilde{\bm{Y}}-\tilde{\bm{X}}\hat{\bm{\beta}})\right),$
(5)
where $g(\cdot)$ is a link function and $\tilde{\bm{U}}=\bm{UD}$ is the
$n\times n$ matrix of PCs obtained from the spectral decomposition of the GSM
for training subjects.
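The conditional-mean predictor in (5) can be sketched numerically. In this toy example $\bm{W}$, $\tau_{1}$ and $\bm{\beta}$ are taken as known, the data are synthetic, and the identity link is used, so the direct solve is used in place of the spectral-decomposition shortcut:

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_s, p = 150, 30, 5
X = rng.standard_normal((n, p))
X_s = rng.standard_normal((n_s, p))

# Joint similarity matrix over training (first n) and testing (last n_s) subjects
L = rng.standard_normal((n + n_s, n + n_s))
V_full = L @ L.T / (n + n_s)
V1 = V_full[:n, :n]                       # training-training GSM
V12 = V_full[n:, :n]                      # testing-training GSM (n_s x n)

tau1, beta = 0.8, rng.standard_normal(p)
W_inv = np.eye(n)                         # working-weight covariance, identity here
Y = X @ beta + rng.multivariate_normal(np.zeros(n), W_inv + tau1 * V1)

# E[Y_s | Y] = X_s beta + tau1 * V12 (W^{-1} + tau1 V1)^{-1} (Y - X beta)
mu_s = X_s @ beta + tau1 * V12 @ np.linalg.solve(W_inv + tau1 * V1, Y - X @ beta)
print(mu_s.shape)
```

The second term is the kriging-type correction: test-set predictions borrow strength from training residuals in proportion to the genetic similarity encoded in $\bm{V}_{12}$.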
#### 2.3.2 GLM with PC adjustment
Another approach to control for population structure and/or subjects
relatedness is to use the first $r$ columns of $\tilde{\bm{U}}$ as unpenalized
fixed effects covariates. This leads to the following GLM
$\displaystyle
g(\bm{\mu})=\tilde{\bm{X}}\bm{\beta}+\tilde{\bm{U}}_{r}\bm{\delta},$
where $\tilde{\bm{X}}=[\bm{X};\bm{G}]$ is the $n\times(m+p)$ design matrix for
non-genetic and genetic predictors, $\bm{\beta}\in\mathbb{R}^{m+p}$ is the
corresponding sparse vector of fixed effects, $\tilde{\bm{U}}_{r}$ is the
$n\times r$ design matrix for the first $r$ PCs and $\bm{\delta}\in\mathbb{R}^{r}$
is the corresponding vector of fixed effects. Letting
$\tilde{\bm{Y}}=\tilde{\bm{X}}\bm{\beta}+\tilde{\bm{U}}_{r}\bm{\delta}+g^{\prime}(\bm{\mu})(\bm{y-\mu})$
be the working response vector, one can show that
$\displaystyle\hat{\bm{\delta}}=\left(\tilde{\bm{U}}_{r}^{\intercal}\bm{W}\tilde{\bm{U}}_{r}\right)^{-1}\tilde{\bm{U}}_{r}^{\intercal}\bm{W}\left(\tilde{\bm{Y}}-\tilde{\bm{X}}\hat{\bm{\beta}}\right),$
(6)
where $\bm{W}$ is the diagonal matrix of GLM weights. Let $\bm{V}_{12}$ be the
$n_{s}\times n$ GSM between a testing set of $n_{s}$ individuals and $n$
training individuals such that the projected PCs on the testing subjects are
equal to $\bm{V}_{12}\bm{U}_{r}$. Then, the estimated mean response
$\hat{{\bm{\mu}}}_{s}$ for the testing set is given by
$\displaystyle\hat{\bm{\mu}}_{s}=g^{-1}\left(\tilde{\bm{X}_{s}}\hat{\bm{\beta}}+\bm{V}_{12}\bm{U}_{r}\hat{\bm{\delta}}\right)=g^{-1}\left(\tilde{\bm{X}_{s}}\hat{\bm{\beta}}+\bm{V}_{12}\bm{U}_{r}\left(\tilde{\bm{U}}_{r}^{\intercal}\bm{W}\tilde{\bm{U}}_{r}\right)^{-1}\tilde{\bm{U}}_{r}^{\intercal}\bm{W}\left(\tilde{\bm{Y}}-\tilde{\bm{X}}\hat{\bm{\beta}}\right)\right).$
(7)
By comparing (2.3.1) and (7), we see that both GLM with PC adjustment and
pglmm use a projection of the training PCs on the testing set to predict new
responses, but with different coefficients for the projected PCs. For the
former, the estimated coefficients for the first $r$ projected PCs in (6) are
obtained by iteratively solving generalized least squares (GLS) on the partial
working residuals $\tilde{\bm{Y}}-\tilde{\bm{X}}\hat{\bm{\beta}}$. For pglmm,
the estimated coefficients for all projected PCs are also obtained by
iteratively solving GLS on the partial working residuals
$\tilde{\bm{Y}}-\tilde{\bm{X}}\hat{\bm{\beta}}$, with an extra ridge penalty
for each coefficient equal to $\hat{\tau}_{1}^{-1}\Lambda_{i}$, where
$\Lambda_{i}$ is the $i^{th}$ eigenvalue of $\bm{V}_{1}$ associated with the
$i^{th}$ PC.
From a Bayesian point of view, the fixed-effects GLM assumes that each of the
$r$ selected PCs has equal prior probability, while the remaining $n-r$
components are given a prior with unit mass at zero (Astle and Balding,
2009). In contrast, the MM puts a Gaussian prior on the regression coefficients
with variances proportional to the corresponding eigenvalues. This implies
that the PCs with the largest eigenvalues have a higher prior probability of
explaining the phenotype. Hence, pglmm shrinks the PC coefficients in a smooth
way, while the fixed-effects GLM uses a thresholding approach: the first $r$
predictors, those with the largest eigenvalues, are kept intact, and the
others are completely removed. The latter approach implies that the
confounding effect of population structure and/or relatedness on the phenotype
is fully captured by the first $r$ PCs. As we show in the simulation results,
departure from this assumption leads to less accurate coefficient estimation,
lower prediction accuracy and higher variance in predictions.
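This contrast can be made concrete with a small numeric sketch. With a common working weight $w$, the pglmm posterior shrinks the $i^{th}$ PC coefficient by the smooth factor $\tau\Lambda_{i}/(\tau\Lambda_{i}+1/w)$, while the fixed-effects GLM applies an all-or-nothing factor; the eigenvalues and parameter values below are made up:

```python
import numpy as np

# Hypothetical eigenvalues of the similarity matrix, largest first
lams = np.array([8.0, 4.0, 2.0, 1.0, 0.5, 0.1])
tau, w = 0.5, 1.0            # variance component and common working weight

# pglmm: smooth, eigenvalue-dependent shrinkage of each PC coefficient
smooth = tau * lams / (tau * lams + 1.0 / w)
# fixed-effects GLM with r = 2 PCs: first r kept intact, the rest dropped
r = 2
hard = np.where(np.arange(len(lams)) < r, 1.0, 0.0)
print(np.round(smooth, 2))   # graded shrinkage, heavier for small eigenvalues
print(hard)                  # all-or-nothing thresholding
```

When the confounding signal extends beyond the first $r$ components, the hard factors discard it entirely, whereas the smooth factors still retain a damped contribution from every PC.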
### 2.4 Simulation design
We evaluated the performance of our proposed method against that of a lasso
LMM, using the R package ggmix (Bhatnagar et al., 2020a), and a logistic
lasso, using the Julia package GLMNet, which wraps the Fortran code from the
original R package glmnet (Friedman et al., 2010). We compared glmnet both
with and without the first 10 PCs included in the model (glmnetPC). We
performed a total of 50 replications for each simulation scenario, drawing new
genotypes and simulated traits each time. Values for all simulation parameters
are presented in Table 1.
#### 2.4.1 Simulated genotype from the admixture model
In the first scenario, we studied the performance of all methods for different
population structures by simulating random genotypes from the BN-PSD admixture
model for 10 or 20 subpopulations with 1D geography or independent
subpopulations using the bnpsd package in R (Ochoa and Storey, 2016a, ; Ochoa
and Storey, 2016b, ). Sample size was set to $n=2500$. We simulated $p$
candidate SNPs and randomly selected $p\times c$ to be causal. The kinship
matrix $\bm{V}$ and PCs were calculated using a set of $50,000$ additional
simulated SNPs. We simulated covariates for age and sex using Normal and
Binomial distributions, respectively.
#### 2.4.2 Real genotypes from the UK Biobank data
In the second scenario, we compared the performance of all methods when a high
proportion of related individuals are present, using real genotype data from
the UK Biobank. We retained a total of 6731 subjects of White British ancestry
having estimated 1st, 2nd or 3rd degree relationships with at least one other
individual. We sampled $p$ candidate SNPs among all chromosomes and randomly
selected $p\times c$ to be causal. We used the PCs provided with the data set.
These were computed using a set of unrelated samples and high-quality markers
pruned to minimize LD (Bycroft et al., 2018). Then, all subjects were
projected onto the principal components using the corresponding loadings.
Since the markers that were used to compute the PCs were potentially sampled
as candidate causal markers in our simulations, we included all candidate SNPs
in the set of markers used for calculating the kinship matrix $\bm{V}$. We
simulated age using a Normal distribution and used the sex covariate as
provided with the data.
#### 2.4.3 Simulation model
The number of candidate SNPs and fraction of causal SNPs were set to $p=5000$
and $c=0.01$ respectively. Let $S$ be the set of candidate causal SNPs, with
$|S|=p\times c$, then the causal SNPs fixed effects $\beta_{j}$ were generated
from a Gaussian distribution $\mathcal{N}(0,h^{2}_{g}\sigma^{2}/|S|)$, where
$h^{2}_{g}$ is the fraction of variance on the logit scale that is due to
total additive genetic fixed effects. That is, we assumed the candidate causal
markers explained a fraction of the total polygenic heritability, and the rest
was explained by a random polygenic effect
$b\sim\mathcal{N}(0,h^{2}_{b}\sigma^{2}\bm{V})$. We simulated a
signal-to-noise ratio (SNR) equal to 1 for the fixed genetic effects
($h^{2}_{g}=50\%$) under strong random polygenic effects ($h^{2}_{b}=40\%$).
Finally, we simulated a binary phenotype using a
logistic link function
$\displaystyle\text{logit}(\pi)=\text{logit}(\pi_{0})-\text{log}(1.3)\times
Sex+\text{log}(1.05)\times Age/10+\sum_{j\in S}\beta_{j}\cdot\widetilde{G}_{j}+b,$
(8)
where the parameter $\pi_{0}$ was chosen to specify the prevalence under the
null, and $\widetilde{G}_{j}$ is the $j^{th}$ column of the standardized
genotype matrix, with $\tilde{g}_{ij}=(g_{ij}-2p_{j})/\sqrt{2p_{j}(1-p_{j})}$,
where $p_{j}$ is the MAF of the $j^{th}$ SNP.
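A minimal Python sketch of the generative model in (8); the parameter values are assumed for illustration, and, unlike in the UK Biobank scenario, the GSM is computed here from the candidate markers themselves:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, c = 500, 200, 0.05
maf = rng.uniform(0.05, 0.5, p)
G = rng.binomial(2, maf, size=(n, p)).astype(float)
G_std = (G - 2 * maf) / np.sqrt(2 * maf * (1 - maf))   # standardized genotypes

# Sparse causal effects scaled so they explain a fraction h2g of the variance
causal = rng.choice(p, int(p * c), replace=False)
h2g, h2b, sigma2 = 0.5, 0.4, np.pi ** 2 / 3            # logistic residual variance
beta = np.zeros(p)
beta[causal] = rng.normal(0.0, np.sqrt(h2g * sigma2 / len(causal)), len(causal))

# Random polygenic effect with covariance proportional to the GSM
V = G_std @ G_std.T / p
b = rng.multivariate_normal(np.zeros(n), h2b * sigma2 * V)

sex = rng.binomial(1, 0.5, n)
age = rng.normal(55.0, 8.0, n)
pi0 = 0.1
eta = (np.log(pi0 / (1 - pi0)) - np.log(1.3) * sex
       + np.log(1.05) * age / 10 + G_std @ beta + b)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))        # binary phenotype
print(y.mean())
```

The $\pi^{2}/3$ term is the variance of the standard logistic distribution, so $h^{2}_{g}$ and $h^{2}_{b}$ partition heritability on the logit scale.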
#### 2.4.4 Metric of comparison
For each replication, subjects were partitioned into training and test sets
using an 80/20 ratio, ensuring all related individuals were assigned into the
same set in the second scenario. Variable selection and coefficient estimation
were performed on training subjects for all methods. We compared each method
at a fixed number of predictors, ranging from 5 to 50, the latter
corresponding to the number of true causal SNPs. Comparisons were based on
three criteria: the ability to retrieve the causal predictors, measured by the
true positive rate ${\textrm{TPR}=|\\{1\leq k\leq p:\hat{\beta}_{k}\neq
0\cap{\beta}_{k}\neq 0\\}|/|\\{1\leq k\leq p:{\beta}_{k}\neq 0\\}|}$; the
ability to accurately estimate coefficients, measured by the root mean squared
error
${\textrm{RMSE}=\sqrt{\frac{1}{p}\sum_{k=1}^{p}(\hat{\beta}_{k}-\beta_{k})^{2}}}$;
and the ability to predict outcomes in the test sets, measured by the area
under the ROC curve (AUC).
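The three criteria can be sketched as follows. The coefficient vectors and predicted scores are toy values; TPR is computed as the fraction of truly causal predictors that are selected, and the AUC uses the rank-sum (Mann-Whitney) formula, ignoring ties:

```python
import numpy as np

def selection_and_error_metrics(beta_hat, beta_true):
    """TPR: fraction of truly causal predictors recovered; RMSE over all p."""
    sel, causal = beta_hat != 0, beta_true != 0
    tpr = np.sum(sel & causal) / max(np.sum(causal), 1)
    rmse = np.sqrt(np.mean((beta_hat - beta_true) ** 2))
    return tpr, rmse

def auc(scores, labels):
    """Rank-based AUC: probability a random case outscores a random control."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n1 = labels.sum()
    n0 = len(labels) - n1
    return (ranks[labels == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

beta_true = np.array([1.0, -0.5, 0.0, 0.0])
beta_hat = np.array([0.8, 0.0, 0.2, 0.0])
print(selection_and_error_metrics(beta_hat, beta_true))   # TPR = 0.5, RMSE ~ 0.287
labels = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8])
print(auc(scores, labels))                                # 0.75
```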
In addition, we evaluated the performance of our proposed method when using
either AIC, BIC or cross-validation as model selection criteria, rather than
fixing the number of predictors in the model. For this, subjects from the real
UKBB data were randomly split into training (40%), validation (30%) and test
(30%) sets, again ensuring all related individuals were assigned into the same
set. For cross-validation, the full lasso solution path was fitted on the
training set, and the regularization parameter was obtained on the model which
maximized AUC on the validation set. We compared the methods' performance on
the basis of TPR, AUC on the test sets and RMSE. Additionally, we compared each
model selection approach on the total number of predictors selected and on the
model precision, which is defined as the proportion of selected predictors
that are true positives.
### 2.5 Real data application
We used the real UK Biobank data set presented in Section 2.4 to illustrate
the potential advantages of pglmm over logistic lasso with PC adjustment for
constructing a PRS on two highly heritable binary traits, asthma and high
cholesterol, in a set of related individuals. Asthma is a common respiratory
disease characterized by inflammation and partial obstruction of the bronchi
in the lungs that results in difficulty breathing (Anderson, 2008). High
cholesterol can lead to plaques and fatty deposits forming on the walls of the
arteries, thus preventing blood from circulating to the heart and brain. It is
one of the main controllable risk factors for coronary artery disease, heart
attack and stroke (Kathiresan et al., 2008).
After removing SNPs with a missing rate greater than $0.01$, a MAF below
$0.05$, or a p-value for the Hardy–Weinberg exact test below $10^{-6}$, a
total of 320K genotyped SNPs remained. To better understand the
contribution of the PRS for predicting asthma and high cholesterol, we fitted
for each trait a null model with only age, sex, genotyping array and the first
10 PCs as main effects. For our proposed pglmm method, we did not include any
PC since kinship is accounted for by a random effect. Finally, we compared
with a logistic lasso in which the top 10 PCs were included as unpenalized
covariates in addition to age, sex and genotyping array. To find the optimal
regularization parameter for both methods, we split the subjects into training
(60%), validation (20%) and test (20%) sets a total of 40 times. For each
replication, the full lasso solution path was fitted on the training set, and
the regularization parameter was obtained on the model which maximized AUC on
the validation set. We compared mean prediction accuracy on the test sets as
well as the median number of predictors included in all models. Finally, we
also compared our method’s performance when the best model was chosen using
BIC on the training fit.
## 3 Results
### 3.1 Simulation results from the admixture model
Results for selection of important predictors in the first simulation
scenario, as measured by the mean TPR in 50 replications, are presented in
Figure 1. For both 1D linear admixture and independent subpopulations, glmnet
without PC adjustment failed to retrieve causal markers compared to all other
methods. This is expected under population stratification; SNPs that differ in
frequency between subpopulations are identified as important predictors
because prevalence is not constant across each group. When the first 10 PCs
were added as unpenalized covariates, glmnetPC’s ability to select causal
predictors was inferior to that of pglmm and ggmix for the 20 independent
subpopulations. In this case, the rank of the GSM used to infer the PCs is
close to 20, and including only 10 PCs in the model does not correctly capture
the confounding structure. Because there is less overlap between
subpopulations in the admixture data compared to the independent populations
(Reisetter and Breheny, 2021), a greater proportion of the simulated
polygenic random effect is explained by the GSM and including only 10 PCs is
enough to correct for confounding even when $K=20$ (bottom-left panel of
Figure 1). On the other hand, including a random effect with variance-
covariance structure proportional to the GSM correctly adjusts for population
structure in all scenarios while alleviating the burden of choosing the right
number of fixed predictors to include in the model. Even though ggmix assumes
a standard LMM for the binary trait, it was able to identify causal markers at
the same rate as pglmm.
Results for estimation of SNP effects as measured by the mean RMSE in 50
replications are presented in Figure 2. Results are consistent with TPR
results in that glmnet without PC adjustment performed poorly in all
scenarios, while pglmm outperformed all other methods for the 20 independent
subpopulations and performed comparably with glmnetPC for all other settings.
As expected, ggmix had higher RMSE compared to pglmm and glmnetPC. Thus, even
though ggmix was able to identify causal markers at the same rate as other
methods that accounted for the binary nature of the response, resulting
estimates for the SNP effects were not accurate.
For both 1D linear admixture and independent subpopulations, ggmix and glmnet
had poor predictive performance for $K=10$ and $K=20$, as reported in Figure
3. Also, the predictive performance of glmnetPC was greatly reduced when
$K=20$ for both admixture and independent populations. In the case of the
admixture data, the RMSE for estimation of SNP effects was comparable for
glmnetPC and pglmm. This means that the observed discrepancy in predictive
accuracy is due to the difference in how each method handles the confounding
effects. Using only 10 PCs as fixed effects when $K=20$ likely results in
overfitted coefficients, which may potentially decrease prediction accuracy
and increase variance of predictions in independent subjects. By using a
ridge-like estimator for the random effects, pglmm is less likely to overfit
the confounding effects compared to glmnetPC. This is supported by the results
of Table 2, where the relative decrease in AUC standard deviation for the
predictions obtained by pglmm could be as high as $16\%$ for $K=20$
subpopulations.
### 3.2 Simulation results from real genotype data
Results for selection of important predictors, estimation of SNPs effects and
prediction accuracy in the second simulation scenario are presented in Figure
4. We compared the ability of glmnetPC and pglmm to adjust for potential
confounding stemming from subject relatedness. The two methods' ability to
retrieve important predictors was comparable as measured by mean TPR, with
pglmm having a slight advantage. In terms of predictor effect estimation,
pglmm had lower reported mean RMSE. Furthermore, pglmm outperformed glmnetPC
when making predictions in independent test sets. Once again, this is
explained by the fact that pglmm uses a random effect parameterized by the
$n$-dimensional kinship matrix, which we have shown in Section 2.3 to be
equivalent to including all PCs as predictors in the model and shrinking their
coefficients proportionally to their relative importance in a smooth way. On
the other hand, for glmnetPC, only the first $10$ PCs with the largest
eigenvalues are kept intact, and the others are completely removed. As the
confounding effect from relatedness on the phenotype cannot be fully captured
by using only the first $10$ PCs, prediction accuracy is greatly reduced.
### 3.3 Model selection
Boxplots of the model selection simulation results are presented in Figure 5.
As expected, BIC tended to choose sparser models with very high precision
values, compared to AIC and CV, which tended to select larger models with
negligibly higher prediction performance. Thus, using BIC as a model selection
criterion resulted in trading a bit of prediction accuracy for a large boost in
the model precision. In many situations where it is of interest to identify a
smaller set containing the most important predictors, BIC should be preferred
over AIC and CV. Moreover, BIC alleviates the computational challenge of
performing out-of-sample predictions, which includes identifying pedigrees to
ensure independence between training, validation and testing sets.
### 3.4 PRS for the UK Biobank
Results for asthma and high cholesterol PRSs are summarized in Table 3. For
asthma, pglmm with either BIC or CV as model selection criteria performed
better than glmnetPC and the null model with covariates only when comparing
mean AUC on the test sets. The median number of predictors selected by
glmnetPC was four times higher than for pglmm when using CV for both methods.
Moreover, the variability in the predictors selected was greater for
glmnetPC, as reported by an IQR value equal to 486, compared to 145 for our
method. pglmm with BIC selected 1 predictor (IQR: 1) compared to 16 (IQR: 145)
for pglmm with CV. This is consistent with our simulation results showing that
BIC results in sparser models with comparable predictive power. For high
cholesterol, very few genetic predictors were selected by any of the models, which
suggests that it may not be a highly polygenic trait. In fact, using only the
non-genetic covariates and first 10 PCs resulted in the best model for high
cholesterol based on mean test sets AUC.
## 4 Discussion
We have introduced a new method called pglmm based on regularized PQL
estimation, for selecting important predictors and estimating their effects in
high-dimensional GWAS data, while accounting for population structure, close
relatedness and the binary nature of the trait. Through a variety of simulations,
using both simulated and real genotype data, we showed that pglmm was markedly
better than a logistic lasso with PC adjustment when the number of
subpopulations was greater than the number of PCs included, or when a high
proportion of related subjects were present. We also showed that a lasso LMM
was unable to estimate predictor effects with accuracy for binary responses,
which greatly decreased its predictive performance. Performance assessment was
based on TPR of selected predictors, RMSE of estimated effects and AUC of
predictions. These results strongly advocate for using methods that explicitly
account for the binary nature of the trait while effectively controlling for
population structure and relatedness in genetic studies.
When the dimensionality of the confounding structure was low, we showed that
pglmm was equivalent to a logistic lasso with PC adjustment. Hence, adjusting a
GLM with PCA is at best equivalent to pglmm, but with the additional burden of
selecting an appropriate number of PCs to retain in the model. Estimating the
dimensionality of real datasets, and thus the number of PCs to include as
fixed effects in a regression model, can prove challenging because
estimated eigenvalues have biased distributions (Yao and Ochoa, 2022).
Another strategy involves selecting the appropriate number of PCs based on the
Tracy-Widom test (Tracy and Widom, 1994). However, it is known that this test
tends to select a very large number of PCs (Lin and Zeng, 2011), causing
convergence problems when fitting too many predictors. On the other hand,
modeling the population structure by using a random polygenic effect correctly
accounts for low and high-dimensional confounding structures, while only
fitting one extra variance component parameter.
We used real genotype data from the UK Biobank to simulate binary responses
and showed that BIC effectively selected sparser models with very high
precision and prediction accuracy, compared to AIC and CV. Using the same data
set, we illustrated the potential advantages of pglmm over a logistic lasso
with PC adjustment for constructing a PRS on two highly heritable binary
traits in a set of related individuals. Results showed that pglmm had higher
predictive performance for asthma, while also selecting consistently fewer
predictors as reported by median and IQR values.
In this study, we focused solely on the lasso as a regularization penalty for
the genetic markers' effects. However, it is known that effects estimated by
the lasso can be substantially biased because the resulting shrinkage is
constant irrespective of the magnitude of the effects. Alternative
regularization penalties such as the smoothly clipped absolute deviation
(SCAD) (Fan and Li, 2001) should be explored. We note, however, that the
number of nonzero coefficients in the SCAD estimates is no longer an unbiased
estimate of its degrees of freedom. Other alternatives include implementation
of the relaxed lasso, which has been shown to produce sparser models with
equal or lower prediction loss than the regular lasso estimator for
high-dimensional data (Meinshausen, 2007). It would also be of interest to
explore whether tuning of the
generalized ridge penalty term on the random effects that arises from the PQL
loss could result in better predictive performance.
A limitation of pglmm compared to a logistic lasso with PC adjustment is the
computational cost of the repeated matrix calculations required to estimate
the variance components under the null. Indeed, at each iteration, we perform
a matrix inversion based on the Cholesky decomposition with complexity
$O(n^{3})$ and matrix multiplications with complexity
$O(mn^{2}+S^{2}n^{2}+p^{2}n)$, where $n$ is the sample size, $m$ is the number
of non-genetic covariates, and $S$ is the number of variance components. Then,
we need to perform a spectral decomposition of the covariance matrix with a
computation time of $O(n^{3})$. These computations become prohibitive for
large cohorts such as the full UK Biobank with a total of 500K samples. One
avenue for increasing computation speed and decreasing memory usage would be
the use of conjugate gradient methods with a diagonal preconditioner matrix,
as proposed by Zhou et al. (2018).
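As a sketch of this idea, the preconditioned conjugate gradient below solves a symmetric positive-definite system such as $(\bm{W}^{-1}+\hat{\tau}_{1}\bm{V}_{1})\bm{x}=\bm{r}$ using only matrix-vector products and a diagonal (Jacobi) preconditioner; this is a generic textbook routine on synthetic data, not the implementation of Zhou et al. (2018):

```python
import numpy as np

def pcg(A, rhs, M_inv_diag, tol=1e-8, max_iter=500):
    """Conjugate gradient for A x = rhs with a diagonal (Jacobi) preconditioner,
    avoiding the O(n^3) cost of a direct factorization; only matrix-vector
    products with A are needed."""
    x = np.zeros_like(rhs)
    r = rhs - A @ x
    z = M_inv_diag * r
    p_dir = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p_dir
        alpha = rz / (p_dir @ Ap)
        x += alpha * p_dir
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p_dir = z + (rz_new / rz) * p_dir
        rz = rz_new
    return x

rng = np.random.default_rng(4)
n = 300
B = rng.standard_normal((n, n))
A = B @ B.T / n + np.eye(n)              # SPD system, e.g. W^{-1} + tau*V
b = rng.standard_normal(n)
x = pcg(A, b, 1.0 / np.diag(A))
print(np.allclose(A @ x, b, atol=1e-6))
```

In a large-cohort setting the matrix $A$ would never be formed explicitly; only the product $A\bm{v}$ is needed, which can exploit the low-rank structure of the GSM.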
Finally, since our framework allows for multiple random effects to account for
complex sampling designs, pglmm can be extended to a wide variety of models,
for example, building a PRS for a bivariate binary trait that explicitly
accounts for the shared causal pathways of many diseases or complex traits.
Moreover, pglmm could be used in models where there is interest in selecting
both main genetic and gene-environment interaction (GEI) effects. Due to the
hierarchical structure between the main genetic and GEI effects, we would have
to consider using a lasso for hierarchical structures (Zemlianskaia et al., 2022).
## Software
Our Julia package, PenalizedGLMM, is available on GitHub at
https://github.com/julstpierre/PenalizedGLMM.
## Funding
This work was supported by the Fonds de recherche Québec-Santé [267074 to
K.O.]; and the Natural Sciences and Engineering Research Council of Canada
[RGPIN-2019-06727 to K.O., RGPIN-2020-05133 to S.B.].
## Acknowledgments
This research has been conducted using the UK Biobank Resource under
Application Number 20802. This study was enabled in part by support provided
by Calcul Québec (https://www.calculquebec.ca/) and Compute Canada
(https://www.computecanada.ca/). We thank the UK Biobank and all participants
for providing information.
## Data availability statement
Simulated data are available on GitHub at
https://github.com/julstpierre/PenalizedGLMM/data. UK Biobank data are
available via application directly to UK Biobank
(https://www.ukbiobank.ac.uk/enable-your-research). The current study was
conducted under UK Biobank application number 20802.
Conflict of Interest: None declared.
## References
* Akaike, (1998) Akaike, H. (1998). Information Theory and an Extension of the Maximum Likelihood Principle, pages 199–213. Springer New York, New York, NY.
* Anderson, (2008) Anderson, G. P. (2008). Endotyping asthma: new insights into key pathogenic mechanisms in a complex, heterogeneous disease. The Lancet, 372(9643):1107–1119.
* Astle and Balding, (2009) Astle, W. and Balding, D. J. (2009). Population structure and cryptic relatedness in genetic association studies. Statistical Science, 24(4):451–471.
* Bezanson et al., (2017) Bezanson, J., Edelman, A., Karpinski, S., and Shah, V. B. (2017). Julia: A fresh approach to numerical computing. SIAM review, 59(1):65–98.
* Bhatnagar et al., (2020a) Bhatnagar, S. R., Yang, Y., and Greenwood, C. M. T. (2020a). ggmix: Variable selection in linear mixed models for SNP data. R package version 0.0.1.
* Bhatnagar et al., (2020b) Bhatnagar, S. R., Yang, Y., Lu, T., Schurr, E., Loredo-Osti, J., Forest, M., Oualkacha, K., and Greenwood, C. M. T. (2020b). Simultaneous SNP selection and adjustment for population structure in high dimensional prediction models. PLOS Genetics, 16(5):e1008766.
* Böhning and Lindsay, (1988) Böhning, D. and Lindsay, B. G. (1988). Monotonicity of quadratic-approximation algorithms. Annals of the Institute of Statistical Mathematics, 40(4):641–663.
* Breslow and Clayton, (1993) Breslow, N. E. and Clayton, D. G. (1993). Approximate Inference in Generalized Linear Mixed Models. Journal of the American Statistical Association, 88(421):9–25.
* Bycroft et al., (2018) Bycroft, C., Freeman, C., Petkova, D., Band, G., Elliott, L. T., Sharp, K., Motyer, A., Vukcevic, D., Delaneau, O., O’Connell, J., Cortes, A., Welsh, S., Young, A., Effingham, M., McVean, G., Leslie, S., Allen, N., Donnelly, P., and Marchini, J. (2018). The UK biobank resource with deep phenotyping and genomic data. Nature, 562(7726):203–209.
* Chen et al., (2016) Chen, H., Wang, C., Conomos, M. P., Stilp, A. M., Li, Z., Sofer, T., Szpiro, A. A., Chen, W., Brehm, J. M., Celedón, J. C., Redline, S., Papanicolaou, G. J., Thornton, T. A., Laurie, C. C., Rice, K., and Lin, X. (2016). Control for population structure and relatedness for binary traits in genetic association studies via logistic mixed models. The American Journal of Human Genetics, 98(4):653–666.
* Fan and Li, (2001) Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. Journal of the American Statistical Association, 96(456):1348–1360.
* Friedman et al., (2010) Friedman, J., Hastie, T., and Tibshirani, R. (2010). Regularization paths for generalized linear models via coordinate descent. Journal of Statistical Software, 33(1):1–22.
* Gilmour et al., (1995) Gilmour, A. R., Thompson, R., and Cullis, B. R. (1995). Average information REML: An efficient algorithm for variance parameter estimation in linear mixed models. Biometrics, 51(4):1440.
* Groll and Tutz, (2014) Groll, A. and Tutz, G. (2014). Variable selection for generalized linear mixed models by L 1-penalized estimation. Statistics and Computing, 24(2):137–154.
* Hoffman, (2013) Hoffman, G. E. (2013). Correcting for population structure and kinship using the linear mixed model: Theory and extensions. PLoS ONE, 8(10):e75707.
* Hoggart et al., (2008) Hoggart, C. J., Whittaker, J. C., Iorio, M. D., and Balding, D. J. (2008). Simultaneous analysis of all SNPs in genome-wide and re-sequencing association studies. PLoS Genetics, 4(7):e1000130.
* Hui et al., (2017) Hui, F. K. C., Müller, S., and Welsh, A. H. (2017). Joint Selection in Mixed Models using Regularized PQL. Journal of the American Statistical Association, 112(519):1323–1333.
* Kathiresan et al., (2008) Kathiresan, S., Melander, O., Anevski, D., Guiducci, C., Burtt, N. P., Roos, C., Hirschhorn, J. N., Berglund, G., Hedblad, B., Groop, L., Altshuler, D. M., Newton-Cheh, C., and Orho-Melander, M. (2008). Polymorphisms Associated with Cholesterol and Risk of Cardiovascular Events. New England Journal of Medicine, 358(12):1240–1249.
* Lin and Zeng, (2011) Lin, D. Y. and Zeng, D. (2011). Correcting for population stratification in genomewide association studies. Journal of the American Statistical Association, 106(495):997–1008.
* Loh et al., (2018) Loh, P.-R., Kichaev, G., Gazal, S., Schoech, A. P., and Price, A. L. (2018). Mixed-model association for biobank-scale datasets. Nature Genetics, 50(7):906–908.
* Manolio et al., (2009) Manolio, T. A., Collins, F. S., Cox, N. J., Goldstein, D. B., Hindorff, L. A., Hunter, D. J., McCarthy, M. I., Ramos, E. M., Cardon, L. R., Chakravarti, A., Cho, J. H., Guttmacher, A. E., Kong, A., Kruglyak, L., Mardis, E., Rotimi, C. N., Slatkin, M., Valle, D., Whittemore, A. S., Boehnke, M., Clark, A. G., Eichler, E. E., Gibson, G., Haines, J. L., Mackay, T. F. C., McCarroll, S. A., and Visscher, P. M. (2009). Finding the missing heritability of complex diseases. Nature, 461(7265):747–753.
* Meinshausen, (2007) Meinshausen, N. (2007). Relaxed lasso. Computational Statistics & Data Analysis, 52(1):374–393.
* Novembre and Stephens, (2008) Novembre, J. and Stephens, M. (2008). Interpreting principal component analyses of spatial population genetic variation. Nature Genetics, 40(5):646–649.
* (24) Ochoa, A. and Storey, J. D. (2016a). FST and kinship for arbitrary population structures i: Generalized definitions.
* (25) Ochoa, A. and Storey, J. D. (2016b). FST and kinship for arbitrary population structures II: Method-of-moments estimators.
* Price et al., (2006) Price, A. L., Patterson, N. J., Plenge, R. M., Weinblatt, M. E., Shadick, N. A., and Reich, D. (2006). Principal components analysis corrects for stratification in genome-wide association studies. Nature Genetics, 38(8):904–909.
* Price et al., (2010) Price, A. L., Zaitlen, N. A., Reich, D., and Patterson, N. (2010). New approaches to population stratification in genome-wide association studies. Nature Reviews Genetics, 11(7):459–463.
* Privé et al., (2019) Privé, F., Aschard, H., and Blum, M. G. B. (2019). Efficient implementation of penalized regression for genetic risk prediction. Genetics, 212(1):65–74.
* Rabinowicz and Rosset, (2020) Rabinowicz, A. and Rosset, S. (2020). Cross-validation for correlated data. Journal of the American Statistical Association, pages 1–14.
* Rakitsch et al., (2012) Rakitsch, B., Lippert, C., Stegle, O., and Borgwardt, K. (2012). A lasso multi-marker mixed model for association mapping with population structure correction. Bioinformatics, 29(2):206–214.
* Reisetter and Breheny, (2021) Reisetter, A. C. and Breheny, P. (2021). Penalized linear mixed models for structured genetic data. Genetic Epidemiology.
* Schwarz, (1978) Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2).
* Sul et al., (2018) Sul, J. H., Martin, L. S., and Eskin, E. (2018). Population structure in genetic studies: Confounding factors and mixed models. PLOS Genetics, 14(12):e1007309.
* Tibshirani, (1996) Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), 58(1):267–288.
* Tracy and Widom, (1994) Tracy, C. A. and Widom, H. (1994). Level-spacing distributions and the airy kernel. Communications in Mathematical Physics, 159(1):151–174.
* Visscher et al., (2017) Visscher, P. M., Wray, N. R., Zhang, Q., Sklar, P., McCarthy, M. I., Brown, M. A., and Yang, J. (2017). 10 years of GWAS discovery: Biology, function, and translation. The American Journal of Human Genetics, 101(1):5–22.
* Wang et al., (2020) Wang, Y., Guo, J., Ni, G., Yang, J., Visscher, P. M., and Yengo, L. (2020). Theoretical and empirical quantification of the accuracy of polygenic scores in ancestry divergent populations. Nature Communications, 11(1).
* Yang, (2005) Yang, Y. (2005). Can the strengths of aic and bic be shared? a conflict between model indentification and regression estimation. Biometrika, 92(4):937–950.
* Yao and Ochoa, (2022) Yao, Y. and Ochoa, A. (2022). Limitations of principal components in quantitative genetic association models for human studies.
* Yu et al., (2005) Yu, J., Pressoir, G., Briggs, W. H., Bi, I. V., Yamasaki, M., Doebley, J. F., McMullen, M. D., Gaut, B. S., Nielsen, D. M., Holland, J. B., Kresovich, S., and Buckler, E. S. (2005). A unified mixed-model method for association mapping that accounts for multiple levels of relatedness. Nature Genetics, 38(2):203–208.
* Zemlianskaia et al., (2022) Zemlianskaia, N., Gauderman, W. J., and Lewinger, J. P. (2022). A scalable hierarchical lasso for gene-environment interactions. Journal of Computational and Graphical Statistics, pages 1–36.
* Zhao et al., (2018) Zhao, H., Mitra, N., Kanetsky, P. A., Nathanson, K. L., and Rebbeck, T. R. (2018). A practical approach to adjusting for population stratification in genome-wide association studies: principal components and propensity scores (PCAPS). Statistical Applications in Genetics and Molecular Biology, 17(6).
* Zhou et al., (2018) Zhou, W., Nielsen, J. B., Fritsche, L. G., Dey, R., Gabrielsen, M. E., Wolford, B. N., LeFaive, J., VandeHaar, P., Gagliano, S. A., Gifford, A., Bastarache, L. A., Wei, W.-Q., Denny, J. C., Lin, M., Hveem, K., Kang, H. M., Abecasis, G. R., Willer, C. J., and Lee, S. (2018). Efficiently controlling for case-control imbalance and sample relatedness in large-scale genetic association studies. Nature Genetics, 50(9):1335–1341.
* Zou et al., (2007) Zou, H., Hastie, T., and Tibshirani, R. (2007). On the “degrees of freedom” of the lasso. The Annals of Statistics, 35(5).
## Tables
Table 1: Simulation parameters

Parameter | Definition | Value
---|---|---
$M$ | Number of replications | 50
$h^{2}_{g}$ | Fraction of variance due to fixed genetic effects (logit scale) | 0.5
$h^{2}_{b}$ | Fraction of variance due to random genetic effects (logit scale) | 0.4
$\pi_{0}$ | Prevalence under the null | 0.1
$p$ | Number of SNPs | 5,000
$c$ | Fraction of causal SNPs | 0.01
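For concreteness, the generative model implied by Table 1 can be sketched in Python. The sample size, random seed, uniform allele frequencies and independent-SNP genotypes below are illustrative assumptions (the paper's actual simulations use 1d linear admixture and independent-subpopulation structure), not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed

# Parameters from Table 1 (n is an illustrative choice, not from the table)
n, p, c = 2500, 5000, 0.01          # samples, SNPs, fraction of causal SNPs
h2_g, h2_b, pi0 = 0.5, 0.4, 0.1     # variance fractions (logit scale), null prevalence

# Toy genotypes: independent SNPs with uniform MAFs
maf = rng.uniform(0.05, 0.5, p)
G = rng.binomial(2, maf, size=(n, p)).astype(float)
G = (G - G.mean(axis=0)) / G.std(axis=0)       # column-standardize

# Sparse fixed effects on c*p causal SNPs, rescaled to explain h2_g of variance
causal = rng.choice(p, int(c * p), replace=False)
beta = np.zeros(p)
beta[causal] = rng.normal(size=causal.size)
g = G @ beta
g *= np.sqrt(h2_g / g.var())

# Random polygenic effect capturing relatedness (iid here, for simplicity)
b = rng.normal(size=n) * np.sqrt(h2_b)

# Logit-scale liability with intercept set by the null prevalence pi0
eta = np.log(pi0 / (1 - pi0)) + g + b
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta)))
```

By construction the fixed genetic effects explain exactly $h^{2}_{g}$ of the liability variance after the rescaling step.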
Table 2: Mean and standard deviation of AUCs in test sets for 50 replications of the simulated genotype data. Model size represents the number of genetic predictors that are selected by each model. $K$ represents the number of intermediate subpopulations in the 1d linear admixture data, and the number of independent subpopulations in the independent data. $\%\Delta_{std}$ represents the relative decrease in AUC standard deviation for the predictions obtained by pglmm.

 | | 1d linear admixture | | | independent | |
---|---|---|---|---|---|---|---
K | Model size | glmnetPC | pglmm | $\%\Delta_{std}$ | glmnetPC | pglmm | $\%\Delta_{std}$
10 | 5 | 0.765 (0.0456) | 0.769 (0.0443) | 3.0 | 0.801 (0.0496) | 0.802 (0.0494) | 0.4
| 10 | 0.790 (0.0350) | 0.794 (0.0344) | 1.5 | 0.817 (0.0445) | 0.817 (0.0454) | -2.1
| 15 | 0.804 (0.0313) | 0.808 (0.0305) | 2.6 | 0.826 (0.0417) | 0.827 (0.0425) | -2.0
| 20 | 0.814 (0.0272) | 0.817 (0.0275) | -1.1 | 0.831 (0.0404) | 0.832 (0.0418) | -3.7
| 25 | 0.820 (0.0253) | 0.821 (0.0262) | -3.4 | 0.834 (0.0395) | 0.835 (0.0409) | -3.7
| 30 | 0.823 (0.0247) | 0.824 (0.0248) | -0.5 | 0.836 (0.0390) | 0.837 (0.0401) | -2.8
| 35 | 0.825 (0.0243) | 0.827 (0.0245) | -0.7 | 0.838 (0.0386) | 0.839 (0.0400) | -3.6
| 40 | 0.827 (0.0241) | 0.828 (0.0242) | -0.3 | 0.840 (0.0382) | 0.840 (0.0395) | -3.3
| 45 | 0.829 (0.0239) | 0.830 (0.0238) | 0.2 | 0.841 (0.0381) | 0.842 (0.0394) | -3.5
| 50 | 0.830 (0.0238) | 0.831 (0.0238) | -0.1 | 0.842 (0.0380) | 0.843 (0.0390) | -2.7
20 | 5 | 0.751 (0.0431) | 0.764 (0.0419) | 2.7 | 0.771 (0.0430) | 0.807 (0.0387) | 9.9
| 10 | 0.775 (0.0383) | 0.788 (0.0358) | 6.5 | 0.789 (0.0387) | 0.822 (0.0355) | 8.3
| 15 | 0.789 (0.0356) | 0.802 (0.0313) | 12.1 | 0.801 (0.0375) | 0.830 (0.0333) | 11.0
| 20 | 0.798 (0.0336) | 0.811 (0.0301) | 10.4 | 0.808 (0.0368) | 0.835 (0.0316) | 14.0
| 25 | 0.803 (0.0327) | 0.816 (0.0299) | 8.5 | 0.815 (0.0367) | 0.838 (0.0308) | 16.1
| 30 | 0.807 (0.0321) | 0.819 (0.0295) | 8.0 | 0.819 (0.0361) | 0.840 (0.0305) | 15.6
| 35 | 0.810 (0.0315) | 0.821 (0.0297) | 5.6 | 0.822 (0.0354) | 0.842 (0.0303) | 14.4
| 40 | 0.812 (0.0310) | 0.823 (0.0293) | 5.5 | 0.826 (0.0346) | 0.843 (0.0301) | 13.0
| 45 | 0.814 (0.0309) | 0.824 (0.0293) | 5.1 | 0.829 (0.0341) | 0.844 (0.0298) | 12.4
| 50 | 0.816 (0.0302) | 0.825 (0.0290) | 3.9 | 0.831 (0.0336) | 0.845 (0.0297) | 11.9
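For reference, the $\%\Delta_{std}$ column is the relative reduction in AUC standard deviation achieved by pglmm, i.e. $100\times(1-\mathrm{sd}_{pglmm}/\mathrm{sd}_{glmnetPC})$; this inferred definition is consistent with the tabulated values, as a quick check against the $K=20$, model size 15, 1d linear admixture row shows:

```python
def rel_std_decrease(std_glmnet_pc: float, std_pglmm: float) -> float:
    """Relative decrease (%) in AUC standard deviation achieved by pglmm."""
    return 100.0 * (1.0 - std_pglmm / std_glmnet_pc)

# K = 20, model size 15, 1d linear admixture row of Table 2
print(round(rel_std_decrease(0.0356, 0.0313), 1))  # 12.1, matching the table
```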
Table 3: PRS results (AUC values) for asthma and high cholesterol in the UK Biobank. We report the mean and standard deviation of the AUC over 40 random splits. For model size, we report the median and interquartile range of the number of genetic predictors selected. For pglmm, we compare performance when the model is selected using BIC or CV. For BIC, the best model is chosen based on training fit; for CV, it is chosen based on maximum AUC on the validation set.

Model | AUCval | AUCtest | Size
---|---|---|---
Asthma | | |
Covariates + 10PCs | 0.5232 (0.019) | 0.5254 (0.026) | -
glmnetPC (CV) | 0.5410 (0.018) | 0.5253 (0.027) | 67.5 (486)
pglmm (CV) | 0.5539 (0.023) | 0.5385 (0.026) | 16 (145)
pglmm (BIC) | - | 0.5452 (0.025) | 1 (1)
High cholesterol | | |
Covariates + 10PCs | 0.7183 (0.017) | 0.7215 (0.017) | -
glmnetPC (CV) | 0.7196 (0.018) | 0.7196 (0.018) | 0.5 (15.5)
pglmm (CV) | 0.7213 (0.018) | 0.7202 (0.019) | 3 (33.5)
pglmm (BIC) | - | 0.7212 (0.017) | 0 (0)
## Figures
Figure 1: Mean of 50 TPRs for the simulated genotype data. $K$ represents the number of intermediate subpopulations in the 1d linear admixture data (left panel), and the number of independent subpopulations in the independent data (right panel).

Figure 2: Mean of 50 RMSEs for the simulated genotype data. $K$ represents the number of intermediate subpopulations in the 1d linear admixture data (left panel), and the number of independent subpopulations in the independent data (right panel).

Figure 3: Mean of 50 AUCs in test sets of the simulated genotype data. $K$ represents the number of intermediate subpopulations in the 1d linear admixture data (left panel), and the number of independent subpopulations in the independent data (right panel).

Figure 4: Mean of 50 AUCs, RMSEs and TPRs for the UK Biobank genotype data with related subjects.

Figure 5: Boxplots of the model selection simulation results for 50 replications of the UK Biobank genotype data with related subjects. For each replication, the best model for pglmm was chosen using either AIC, BIC or CV.
# Single production of an exotic vector-like $Y$ quark
at future high energy $pp$ colliders
Liangliang Shang1,2, Yuxiao Yan1, Stefano Moretti2,3, Bingfang Yang1

1School of Physics, Henan Normal University, Xinxiang
453007, PR China
2Department of Physics and Astronomy, Uppsala University, Box 516, SE-751 20
Uppsala, Sweden
3School of Physics and Astronomy, University of Southampton, Highfield,
Southampton SO17 1BJ, UK
###### Abstract
Vector-like quarks have been predicted in various new physics scenarios beyond
the Standard Model (SM). In a simplified model of a $(B,Y)$ doublet
including a vector-like quark $Y$ with charge $-\frac{4}{3}e$, there are only
two free parameters: the $Y$ coupling $\kappa_{Y}$ and mass $m_{Y}$. In the
five flavor scheme, we investigate the single production of the $Y$ state
decaying into $Wb$ at the Large Hadron Collider (LHC) Run-III and High-
Luminosity LHC (HL-LHC) operating at $\sqrt{s}$ = 14 TeV, the possible High-
Energy LHC (HE-LHC) with $\sqrt{s}$ = 27 TeV as well as the Future Circular
Collider in hadron-hadron mode (FCC-hh) with $\sqrt{s}$ = 100 TeV. Through
detailed signal-to-background analyses and detector simulations, we assess the
exclusion capabilities of the $Y$ state at the different colliders. We find
that this can be improved significantly with increasing collision energy,
especially at the HE-LHC and FCC-hh, both of which show a clear advantage
over the HL-LHC at high $m_{Y}$. Assuming a 10%
systematic uncertainty on the background event rate, the exclusion
capabilities are summarized as follows: (1) the LHC Run-III can exclude the
correlated regions of $\kappa_{Y}\in[0.044,0.5]$ and $m_{Y}\in[1000\text{
GeV},3099\text{ GeV}]$ with integrated luminosity $L=300\text{ fb}^{-1}$; (2)
the HL-LHC can exclude the correlated regions of $\kappa_{Y}\in[0.027,0.5]$
and $m_{Y}\in[1000\text{ GeV},3653\text{ GeV}]$ with $L=3$ ab-1; (3) the HE-
LHC can exclude the correlated regions of $\kappa_{Y}\in[0.030,0.5]$ and
$m_{Y}\in[1000\text{ GeV},4936\text{ GeV}]$ with $L=3$ ab-1; (4) the FCC-hh
can exclude the correlated regions of $\kappa_{Y}\in[0.051,0.5]$ and
$m_{Y}\in[1000\text{ GeV},6610\text{ GeV}]$ with $L=3$ ab-1.
## I Introduction
In 2012, the ATLAS and CMS experiments at the Large Hadron Collider (LHC) made
a significant discovery by confirming the existence of the Higgs boson,
thereby providing further validation for the Standard Model (SM) ATLAS:2012yve
; CMS:2012qbp . However, the SM has certain limitations in addressing several
prominent issues, such as neutrino masses, the gauge hierarchy, dark matter and
dark energy. In various new physics scenarios like little Higgs models Arkani-
Hamed:2002ikv ; Han:2003wu ; Chang:2003vs ; Cao:2007pv , extra dimensions 4 ,
composite Higgs models Agashe:2004rs ; Bellazzini:2014yua ; Low:2015nqa ;
Bian:2015ota ; He:2001fz ; He:1999vp and other extended models 6 ; 7 ; 8 ,
Vector-Like Quarks (VLQs) are predicted to play a role in resolving the gauge
hierarchy problem by mitigating the quadratic divergences of the Higgs field.
Such VLQs are fermions with spin $\frac{1}{2}$ whose left- and right-handed
components transform identically under the Electro-Weak (EW) symmetry group of
the SM 9 .
Unlike chiral quarks, VLQs do not acquire masses through Yukawa couplings to
the Higgs field and therefore have the potential to counterbalance loop
corrections to the Higgs boson mass stemming from the top quark of the SM.
Furthermore, VLQs can generate characteristic signatures at colliders and have
been widely studied (see, for example, Banerjee:2023upj ; Benbrik:2023xlo ;
Zeng:2023ljl ; Canbay:2023vmj ; Belyaev:2023yym ; Shang:2023ebe ; Yang:2023wnv
; Bhardwaj:2022nko ; Bhardwaj:2022wfz ; Bardhan:2022sif ; Shang:2022tkr ;
Freitas:2022cno ; Benbrik:2022kpo ; Corcella:2021mdl ; VLX2021 ;
Belyaev:2021zgq ; Deandrea:2021vje ; Dasgupta:2021fzw ; King:2020mau ;
Liu:2019jgp ; Benbrik:2019zdp ; Xie:2019gya ; Bizot:2018tds ;
Cacciapaglia:2018qep ; Cacciapaglia:2018lld ; Carvalho:2018jkq ; CMS:2018kcw ;
Barducci:2017xtw ; CMS:2017voh ; Chen:2016yfv ; Arhrib:2016rlj ;
Cacciapaglia:2015ixa ; Angelescu:2015kga ; Panizzi:2014dwa ; Panizzi:2014tya ;
Cacciapaglia:2012dd ; Okada:2012gy ; Cacciapaglia:2011fx ; delAguila:1989rq ).
A VLQ model typically introduces four new states: $T$, $B$, $X$ and $Y$, their
electric charges being $+\frac{2}{3}$, $-\frac{1}{3}$, $+\frac{5}{3}$ and
$-\frac{4}{3}$, respectively. In such models, VLQs can be categorized
into three types: singlets $(T)$ and $(B)$, doublets $(X,T)$, $(T,B)$ and
$(B,Y)$, and triplets $(X,T,B)$ and $(T,B,Y)$. Notably, the $Y$ quark cannot exist as a
singlet. Further, it is expected to decay with a 100% Branching Ratio (BR)
into a $b$ quark and $W$ boson when $Y$ is lighter than the other VLQs,
whether in a doublet or triplet.
In this study, we will focus on the observability of single $Y$ production at
the Large Hadron Collider (LHC) Run-III, the High-Luminosity LHC (HL-LHC)
Gianotti:2002xx ; Apollinari:2017lan , the High-Energy LHC (HE-LHC)
FCC:2018bvk and the Future Circular Collider operating in hadron-hadron mode
(FCC-hh) FCC:2018vvp , specifically, within the $(B,Y)$ doublet realisation.
The ATLAS Collaboration conducted a search for a VLQ $Y$ at 13 TeV with an
integrated luminosity of 36.1 fb-1 ATLAS:2018dyh . They found that the upper
limits on the mixing angle are as small as $\left|\sin{\theta_{R}}\right|$ =
0.17 for a $Y$ quark with a mass of 800 GeV in the $(B,Y)$ doublet model, and
$\left|\sin{\theta_{L}}\right|$ = 0.16 for a $Y$ quark with a mass of 800 GeV
in the $(T,B,Y)$ triplet model. The CMS Collaboration also conducted a search
for $Y$ states in the $Wb$ channel at 13 TeV using 2.3 fb-1 of data
CMS:2017fpk . They searched for final states involving one electron or muon,
at least one $b$-tagged jet with large transverse momentum, at least one jet
in the forward region of the detector plus (sizeable) missing transverse
momentum. Their findings indicate that the observed (expected) lower mass
limits are 1.40 (1.0) TeV for a VLQ $Y$ with a coupling value of 0.5 and a
BR($Y\to W^{-}b$) = 1. The ATLAS Collaboration recently presented a search for
the pair-production of VLQ $T$ in the lepton+jets final state using 140 fb-1
at 13 TeV ATLAS:2023shy . They pointed out that the most stringent limits are
set for the scenario BR($T\to W^{+}b$)$=1$, for which $T$ masses below 1700
GeV (1570 GeV) are observed (expected) to be excluded at 95% Confidence Level
(CL). These limits also apply to a VLQ $Y$ with BR($Y\to W^{-}b$)$=1$.
All such limits stem from VLQ pair production, induced by Quantum Chromo-
Dynamics (QCD).
Furthermore, there are comparable exclusion limits on the mixing parameter
$\sin{\theta_{R}}$ from EW Precision Observables (EWPOs). For example, within
the $(B,Y)$ doublet model, Ref. 9 found that the upper limits on
$\sin{\theta_{R}}$ are approximately 0.21 and 0.15 at $m_{Y}=1000\text{ GeV}$
and 2000 GeV respectively at 95% CL from the oblique parameters $S$ and $T$.
Ref. Cao:2022mif highlighted that, considering the $W$ boson mass measurement
by the CDF collaboration CDF:2022hxs , the $2\sigma$ bounds on
$\sin{\theta_{R}}$ from the oblique parameters $S,T$ and $U$ are approximately
$[0.15,0.23]$ and $[0.09,0.13]$ at $m_{Y}=1000\text{ GeV}$ and 3000 GeV in a
conservative average scenario, respectively. They also pointed out that the
constraints from the $Zb\bar{b}$ coupling are weaker than those from the EWPOs
for about $m_{Y}>1600\text{ GeV}$.
The single production of a VLQ is instead model dependent, as the couplings
involved are EW ones, yet it may contribute significantly to the total VLQ
production cross section compared to pair production, owing to reduced phase
space suppression at high VLQ masses. In this work,
we will in particular focus on the process $pp\to Y(\to W^{-}b)\bar{b}j\to
l^{-}\bar{\nu}_{l}b\bar{b}j$ (with $l^{-}$ standing for electron or muon and
$j$ standing for first two-generation quark jets), combined with its charged
conjugated process $pp\to\bar{Y}bj$. We expect that the forthcoming results
will provide complementary information to the one provided by VLQ pair
production in the quest to detect a doublet $Y$ quark at the aforementioned
future colliders.
The paper is structured as follows. In Section II, we introduce the simplified
VLQ model used in our simulations. In Section III, we analyze the properties
of the signal process and SM backgrounds. Subsequently, we conduct simulations
and calculate the $Y$ state exclusion and discovery capabilities at the HL-
LHC, HE-LHC and FCC-hh. Finally, in Section IV, we provide a summary. (We also
have an Appendix where we map the $Y$ state of our simplified model onto the
$(B,Y)$ doublet representation.)
## II Doublet $Y$ VLQ in a simplified model
As mentioned, in a generic VLQ model, one can include four types of states
called $T$, $B$, $X$ and $Y$, with electric charges $+\frac{2}{3}$,
$-\frac{1}{3}$, $+\frac{5}{3}$ and $-\frac{4}{3}$, respectively. Under the SM
gauge group, $SU(3)$C $\times$ $SU(2)$L $\times$ $U(1)$Y, there are seven
possible representations of VLQs as shown in Table 1.
| $T$ | $B$ | $(T,B)$ | $(B,Y)$ | $(X,T)$ | $(T,B,Y)$ | $(X,T,B)$
---|---|---|---|---|---|---|---
$SU(3)_{C}$ | 3 | 3 | 3 | 3 | 3 | 3 | 3
$SU(2)_{L}$ | 1 | 1 | 2 | 2 | 2 | 3 | 3
$U(1)_{Y}$ | $\frac{2}{3}$ | $-\frac{1}{3}$ | $\frac{1}{6}$ | $-\frac{5}{6}$ | $\frac{7}{6}$ | $-\frac{1}{3}$ | $\frac{2}{3}$
Table 1: Representations of VLQs and their quantum numbers under the SM gauge
group.
These representations allow for couplings between VLQs and SM gauge bosons and
quarks. The kinetic and mass terms of the VLQs are described as Cao:2022mif ,
$\mathcal{L}=\sum_{F}\bar{F}(i\not{D}-M_{F})F$ (1)
where $F=\left\\{U,D,Q_{1},Q_{5},Q_{7},T_{1},T_{2}\right\\}$ and
$D_{\mu}=\partial_{\mu}+ig_{1}Y_{F}B_{\mu}+ig_{2}S^{I}W_{\mu}^{I}+ig_{s}T^{A}G^{A}_{\mu}$,
with $T^{A}=\frac{1}{2}\lambda^{A}$ and $S^{I}=\frac{1}{2}\tau^{I}$, where
$\lambda^{A}$ ($A=1,2,\cdots,8$) are the Gell-Mann matrices and $\tau^{I}$
($I=1,2,3$) the Pauli matrices. In our simplified model, we use an
effective Lagrangian framework for the interactions of a VLQ $Y$ with the SM
quarks through $W$ boson exchange, including as $Y$ free parameters
$\kappa^{i,L/R}_{Y}$ (couplings) and $m_{Y}$ (mass) Buchkremer:2013bha :
$\mathcal{L}=\left\\{\kappa_{Y}^{i,L/R}\sqrt{\frac{\zeta_{i}}{\Gamma_{W}^{0}}}\frac{g}{\sqrt{2}}\left[\bar{Y}_{L/R}W_{\mu}^{-}\gamma^{\mu}d^{i}_{L/R}\right]+\text{H.c.}\right\\}+m_{Y}\bar{Y}Y,$
(2)
where $d^{i}_{L/R}$($i=1,2,3$) represent the three types of quarks in the SM
while $L$ and $R$ stand for the left-handed and right-handed chiralities,
respectively. We assume that the $Y$ only couples to the third generation
quarks of the SM, that is, $Y$ decays 100% into $Wb$ and therefore
$\zeta_{1}=\zeta_{2}=0,\zeta_{3}=1$. Considering that the $Y$ mass is much
greater than any SM quark mass ($m_{q}$), that is, $m_{Y}\gg m_{q}$, the
kinematic function can be approximated as $\Gamma^{0}_{W}=1$
Buchkremer:2013bha , so that the above Lagrangian can be simplified as
$\mathcal{L}=\left\\{\frac{g\kappa_{Y}^{3,L/R}}{\sqrt{2}}\left[\bar{Y}_{L/R}W_{\mu}^{-}\gamma^{\mu}b_{L/R}\right]+\text{H.c.}\right\\}+m_{Y}\bar{Y}Y,$
(3)
where $g$ is the EW coupling constant. Comparing the Lagrangian for the
$(B,Y)$ doublet and $(T,B,Y)$ triplet, we observe that the relationship
between the coupling $\kappa_{Y}^{3,L/R}$ and mixing angle $\theta^{L/R}$ is
$\sin\theta^{L/R}=\kappa_{Y}^{3,L/R}$ for the doublet and
$\sin\theta^{L/R}=\sqrt{2}\kappa_{Y}^{3,L/R}$ for the triplet, with details to
be found in Appendix A. Taking into account the relationship
$\tan\theta^{L}=\frac{m_{b}}{m_{B}}\tan\theta^{R}$ for the doublet and
$\tan\theta^{R}=\frac{m_{b}}{m_{B}}\tan\theta^{L}$ for the triplet, as well as
the condition $m_{B}\gg m_{b}$, we can assume $\kappa_{Y}^{3,L}=0$ for the doublet and
$\kappa_{Y}^{3,R}=0$ for the triplet. (In the subsequent content, we will use
$\kappa_{Y}$ to denote $\kappa_{Y}^{3,R}$ for the sake of simplicity.) The
decay width of the VLQ $Y$ can be expressed as Cetinkaya:2020yjf ,
$\Gamma(Y\to
Wq)=\frac{\alpha_{\rm EM}\kappa^{2}_{Y}}{16\sin^{2}\theta_{W}}\frac{(m^{2}_{W}-m^{2}_{Y})^{2}(2m^{2}_{W}+m^{2}_{Y})}{m^{2}_{W}m^{3}_{Y}},$
(4)
where $\alpha_{\rm EM}=\frac{{g^{\prime}}^{2}}{4\pi}$, with $g^{\prime}$ the
Electro-Magnetic (EM) coupling constant and $\theta_{W}$ the EW mixing angle.
In this paper, we solely focus on the Narrow Width Approximation (NWA), which
we use for the purpose of simplifying scattering amplitude calculations.
However, it is worth noting that several studies Carvalho:2018jkq ;
Berdine:2007uv ; Moretti:2016gkr ; Deandrea:2021vje have highlighted the
limitations of the NWA in scenarios involving new physics with VLQs.
Specifically, it becomes imperative to consider a finite width when this
becomes larger than $\alpha_{\rm EM}\approx 1\%$, given the substantial
interference effects emerging between VLQ production and decay channels,
coupled with their interactions with the corresponding irreducible
backgrounds. To address the limitations of our approach then, we will also
present the ratio $\Gamma_{Y}/m_{Y}$ in our subsequent results and we
emphasise already here that, crucially, for the region where
$\Gamma_{Y}/m_{Y}>1\%$, our sensitivities may be under- or over-estimated, as
such interferences could be positive or negative, respectively. Also, before
starting with our numerical analysis, we remind the reader that one can apply
the results of our forthcoming simulations to a specific VLQ representation,
such as, e.g., $(B,Y)$ or $(T,B,Y)$, by utilizing the aforementioned
relationships.
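As a quick numerical check, Eq. (4) can be evaluated to see where the $\Gamma_{Y}/m_{Y}>1\%$ region begins. The numerical inputs below ($\alpha_{\rm EM}\approx 1/128$, $\sin^{2}\theta_{W}\approx 0.231$, $m_{W}=80.4$ GeV) are illustrative assumptions rather than the exact values used in our simulations:

```python
def width_Y_to_Wb(kappa_Y: float, m_Y: float) -> float:
    """Decay width Gamma(Y -> W b) in GeV, following Eq. (4)."""
    ALPHA_EM = 1.0 / 128.0   # assumed EM coupling at the EW scale
    SIN2_THETA_W = 0.231     # assumed weak mixing angle
    M_W = 80.4               # W boson mass in GeV
    pref = ALPHA_EM * kappa_Y**2 / (16.0 * SIN2_THETA_W)
    kin = (m_Y**2 - M_W**2) ** 2 * (2.0 * M_W**2 + m_Y**2) / (M_W**2 * m_Y**3)
    return pref * kin

# NWA sanity check at m_Y = 1000 GeV
for kappa in (0.1, 0.5):
    g = width_Y_to_Wb(kappa, 1000.0)
    print(f"kappa={kappa}: Gamma={g:.2f} GeV, Gamma/m_Y={g / 1000.0:.2%}")
```

For $m_{Y}=1000$ GeV this gives $\Gamma_{Y}/m_{Y}\approx 0.3\%$ at $\kappa_{Y}=0.1$ but $\approx 8\%$ at $\kappa_{Y}=0.5$, illustrating why the NWA caveat above matters at large couplings.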
Figure 1: Representative Feynman diagram of single $Y$ (in red) production
followed by its subsequent decay $Y\to W^{-}(\to l^{-}\bar{\nu}_{l})b$. Here,
$q$ in the initial state represents a quark of the first two generations or a
bottom quark, $j$ in the final state represents a jet from the first two
generations, $b$ in the intermediate (final) state represents a $b$-quark
(jet) while $l^{-}$ represents either an electron or muon. Notice that, since
we use the five flavor scheme, the $g\to b\bar{b}$ splitting in the diagram is
actually accounted for through the PDF evolution.
In Figure 1, we show a representative Feynman diagram of the signal production
$pp\to Y\bar{b}j$ and decay chain $Y\to W^{-}(\to l^{-}\bar{\nu}_{l})b$. We
expect the $W$ boson and the high-momentum $b$-jet to exhibit a back-to-back
alignment in the transverse plane, originating from the decay of the massive
$Y$ quark. The topology also encompasses an outgoing light quark, often
resulting in a forward jet within the detector. Furthermore, the second
$b$-jet arising from the splitting of a gluon into a pair of $b$-quarks can be
observed in either the forward or central region. According to these features
of signal events, the primary SM backgrounds include $pp\to t\bar{b}j$, $pp\to
W^{+}W^{-}b$, $pp\to Zbj$, $pp\to W^{+}bj$, and their charge conjugated
processes. Among them, $pp\to t\bar{b}j$ and $pp\to W^{+}W^{-}b$ are
irreducible backgrounds, while the others are reducible backgrounds. We have
also assessed additional backgrounds, such as $pp\to t\bar{t}$, and found that
their contribution can be ignored based on the selection criteria that will be
discussed later.
The signal production cross section is determined not only by the mass $m_{Y}$
but also by the coupling strength $\kappa_{Y}$. The cross section is directly
proportional to $\kappa_{Y}^{2}$ for a fixed $m_{Y}$ as long as the NWA is met
Moretti:2016gkr . In Figure 2, we show the tree-level cross sections for
single $Y$ production as a function of the mass $m_{Y}$. We can see that, as
$m_{Y}$ increases, the cross section gradually decreases due to a smaller
phase space.
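Given this quadratic dependence, a cross section computed once at a reference coupling can be rescaled to any other coupling within the NWA regime; a minimal sketch, where the reference point is a placeholder rather than a value taken from this paper:

```python
def rescale_xsec(sigma_ref: float, kappa_ref: float, kappa: float) -> float:
    """Rescale a single-production cross section, valid while the NWA holds
    (sigma is proportional to kappa_Y**2 at fixed m_Y)."""
    return sigma_ref * (kappa / kappa_ref) ** 2

# Placeholder reference point: sigma = 10 fb at kappa_Y = 0.1
print(rescale_xsec(10.0, 0.1, 0.5))  # ~250 fb, i.e. 25x the reference
```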
Figure 2: The tree-level cross sections for single $Y$ production as a
function of the mass $m_{Y}$ for various values of the coupling $\kappa_{Y}$.
The charge conjugated process has also been taken into account.
In Figure 3, we show the tree-level cross sections for the signal benchmarks
$m_{Y}=1000\text{ GeV}$ (labeled as $Y_{1000}$) and $m_{Y}=1500\text{ GeV}$
(labeled as $Y_{1500}$) with $\kappa_{Y}=0.1$ and $\kappa_{Y}=0.5$ as well as
the tree-level cross sections for the background processes. It is evident that
the rates for the latter are significantly larger than those for the former.
Consequently, we should design efficient selection criteria (in terms of
kinematic cuts) to reduce the number of background events while preserving the
signal events. Furthermore, the cross sections for both signal and backgrounds
increase with increasing collider energy.
Figure 3: The tree-level cross sections as a function of the center-of-mass
energy $\sqrt{s}$ for the signal benchmarks and backgrounds. Solid lines
represent the signal processes and dashed lines represent the background
processes. The cross sections also include the corresponding charge conjugated
processes.
The Next-to-Leading Order (NLO) (or even higher order) QCD corrections for the
SM background cross sections at the LHC have been extensively explored in
Refs. Czakon:2012zr ; Campbell:2005zv ; Campbell:2006cu ; Kidonakis:2018ncr ;
Boos:2012vm . The $K$ factors associated with the background cross sections
adopted in our calculations are summarized in Table 2. (Note that, although
they change somewhat with energy, we neglect here changes of the $K$ factor
values at different colliders, as in Ref. Yang:2022wfa .)
Processes | $Zbj$ | $W^{+}bj$ | $W^{+}W^{-}b$ | $t\bar{b}j$
---|---|---|---|---
$K$ factor | 1.3 Campbell:2005zv | 1.9 Campbell:2006cu | 2.1 Campbell:2006cu | 1.4 Kidonakis:2018ncr ; Boos:2012vm
Table 2: $K$ factors representing the QCD corrections for the background
processes.
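These corrections enter the background normalization simply as $\sigma_{\rm NLO}\approx K\,\sigma_{\rm LO}$; a minimal sketch, where the LO cross sections are placeholders for illustration only, not values from this paper:

```python
# K factors from Table 2 (QCD corrections to LO background cross sections)
K_FACTORS = {"Zbj": 1.3, "W+bj": 1.9, "W+W-b": 2.1, "tbj": 1.4}

def nlo_xsec(process: str, sigma_lo_pb: float) -> float:
    """Approximate the NLO cross section as K * sigma_LO (in pb)."""
    return K_FACTORS[process] * sigma_lo_pb

# Placeholder LO values in pb
lo = {"Zbj": 100.0, "W+bj": 50.0, "W+W-b": 2.0, "tbj": 20.0}
nlo = {proc: nlo_xsec(proc, sigma) for proc, sigma in lo.items()}
```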
There are stringent limits from the oblique parameters $S$, $T$ and $U$ in
EWPOs Hollik:1988ii ; Peskin:1990zt ; Grinstein:1991cd ; Peskin:1991sw ;
Lavoura:1992np ; Burgess:1993mg ; Maksymyk:1993zm ; Cynolter:2008ea ; 9 ;
Chen:2017hak ; Cao:2022mif ; He:2022zjz ; Arsenault:2022xty . These oblique
parameters relate to the weak isospin current $J^{\mu}_{1,2,3}$ and the
electromagnetic current $J^{\mu}_{Q}=J^{\mu}_{3}+J^{\mu}_{Y}$, involving their
vacuum-polarization amplitudes as defined in references Peskin:1990zt ;
Peskin:1991sw :
$\displaystyle S\equiv-\frac{16\pi}{m_{Z}^{2}}\left\\{\Sigma_{33}(m_{Z}^{2})-\Sigma_{33}(0)-\Sigma_{3Q}(m_{Z}^{2})\right\\}=\frac{16\pi}{m_{Z}^{2}}\left\\{\Sigma_{3Y}(m_{Z}^{2})-\Sigma_{3Y}(0)\right\\},$ (5)
$\displaystyle T\equiv\frac{4\pi}{\sin^{2}\theta_{W}\cos^{2}\theta_{W}m_{Z}^{2}}\left\\{\Sigma_{33}(0)-\Sigma_{11}(0)\right\\},$ (6)
$\displaystyle U\equiv\frac{16\pi}{m_{Z}^{2}}\left\\{\Sigma_{33}(m_{Z}^{2})-\Sigma_{33}(0)\right\\}-\frac{16\pi}{m_{W}^{2}}\left\\{\Sigma_{11}(m_{W}^{2})-\Sigma_{11}(0)\right\\},$ (7)
where $m_{W}$ and $m_{Z}$ denote the masses of the $W$ and $Z$ bosons,
respectively. The $Z$-boson current is
$e(J^{\mu}_{3}-\sin^{2}\theta_{W}J^{\mu}_{Q})/(\sin\theta_{W}\cos\theta_{W})$,
where $e$ is related to the fine-structure constant $\alpha$ through
$e^{2}\equiv 4\pi\alpha$. Consequently, the oblique parameters can be
reformulated using
the vacuum polarizations of the SM gauge bosons as:
$\displaystyle\alpha T$ $\displaystyle=$ $\displaystyle\frac{\Sigma_{ZZ}^{\rm
new}\left(0\right)}{m_{Z}^{2}}-\frac{\Sigma_{WW}^{\rm
new}\left(0\right)}{m_{W}^{2}}$ (8)
$\displaystyle\frac{\alpha}{\sin^{2}2\theta_{W}}\,S$ $\displaystyle=$
$\displaystyle-\frac{\Sigma_{ZZ}^{\rm
new}\left(m_{Z}^{2}\right)-\Sigma_{ZZ}^{\rm
new}\left(0\right)}{m_{Z}^{2}}+\left.\frac{\partial\Sigma_{\gamma\gamma}^{\rm
new}\left(p^{2}\right)}{\partial p^{2}}\right|_{p^{2}=0}+\frac{\cos
2\theta_{W}}{\cos\theta_{W}\sin\theta_{W}}\left.\frac{\partial\Sigma_{\gamma
Z}^{\rm new}\left(p^{2}\right)}{\partial p^{2}}\right|_{p^{2}=0}$ (9)
$\displaystyle\simeq$ $\displaystyle\left.-\frac{\partial\Sigma_{ZZ}^{\rm
new}\left(p^{2}\right)}{\partial
p^{2}}\right|_{p^{2}=0}+\left.\frac{\partial\Sigma_{\gamma\gamma}^{\rm
new}\left(p^{2}\right)}{\partial p^{2}}\right|_{p^{2}=0}+\frac{\cos
2\theta_{W}}{\cos\theta_{W}\sin\theta_{W}}\left.\frac{\partial\Sigma_{\gamma
Z}^{\rm new}\left(p^{2}\right)}{\partial p^{2}}\right|_{p^{2}=0}$
$\displaystyle\frac{\alpha}{4\sin^{2}\theta_{W}}\,U$ $\displaystyle=$
$\displaystyle-\frac{\Sigma_{WW}^{\rm
new}\left(m_{W}^{2}\right)-\Sigma_{WW}^{\rm
new}\left(0\right)}{m_{W}^{2}}+\cos^{2}\theta_{W}\,\frac{\Sigma_{ZZ}^{\rm
new}\left(m_{Z}^{2}\right)-\Sigma_{ZZ}^{\rm new}\left(0\right)}{m_{Z}^{2}}$
(10)
$\displaystyle+\sin^{2}\theta_{W}\left.\frac{\partial\Sigma_{\gamma\gamma}^{\rm
new}\left(p^{2}\right)}{\partial p^{2}}\right|_{p^{2}=0}+\sin
2\theta_{W}\left.\frac{\partial\Sigma_{\gamma Z}^{\rm
new}\left(p^{2}\right)}{\partial p^{2}}\right|_{p^{2}=0}$
$\displaystyle\simeq$ $\displaystyle-\frac{\partial\Sigma_{WW}^{\rm
new}\left(p^{2}\right)}{\partial
p^{2}}|_{p^{2}=0}+\cos^{2}\theta_{W}\frac{\partial\Sigma_{ZZ}^{\rm
new}\left(p^{2}\right)}{\partial
p^{2}}|_{p^{2}=0}+\sin^{2}\theta_{W}\frac{\partial\Sigma_{\gamma\gamma}^{\rm
new}\left(p^{2}\right)}{\partial p^{2}}|_{p^{2}=0}$ $\displaystyle+\sin
2\theta_{W}\frac{\partial\Sigma_{\gamma Z}^{\rm
new}\left(p^{2}\right)}{\partial p^{2}}|_{p^{2}=0}.$
The contributions in the doublet $(B,Y)$ model to these oblique parameters can
be approximated as follows Cao:2022mif :
$S\simeq\frac{1}{2\pi}\left\\{-\frac{2}{3}\kappa_{Y}^{2}\ln\frac{\mathcal{M}^{2}}{m_{b}^{2}}+\frac{11}{3}\kappa_{Y}^{2}\right\\},\,U\simeq-\frac{\kappa_{Y}^{2}}{2\pi},\,T\simeq\frac{3m_{t}^{2}}{8\pi\sin^{2}\theta_{W}m_{W}^{2}}\kappa_{Y}^{4}\frac{2\mathcal{M}^{2}}{3m_{t}^{2}}$
(11)
Here, $\mathcal{M}^{2}=(m_{Y}^{2}-m_{b}^{2}\kappa_{Y}^{2})/(1-\kappa_{Y}^{2})$
and $m_{W}=m_{Z}\cos\theta_{W}$. In the numerical calculation, the $2\sigma$
limits correspond to a $\chi^{2}$ of the oblique-parameter fit below 8.02 for
three degrees of freedom. The current fit gives ${S}=-0.02\pm 0.10$,
${T}=0.03\pm 0.12$ and ${U}=0.01\pm 0.11$, with a strong correlation of 92%
between the $S$ and $T$ parameters, while $U$ is anti-correlated with $S$
($T$) at the level of $-80\%$ ($-93\%$) ParticleDataGroup:2022pth . The
numerical values of the input parameters are given in Eq. (12).
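As an illustration of how such a fit constrains the model, the doublet contributions of Eq. (11) and the $\chi^{2}$ built from the fit values and correlations quoted above can be sketched as follows; the function names are ours, and $m_{W}$ is derived from $m_{Z}$ and $\sin^{2}\theta_{W}$ as in the text.

```python
import math

# Fit values quoted in the text: central values, uncertainties, correlations
CENTRAL = (-0.02, 0.03, 0.01)   # (S, T, U)
SIGMA = (0.10, 0.12, 0.11)
RHO_ST, RHO_SU, RHO_TU = 0.92, -0.80, -0.93
CHI2_2SIGMA_3DOF = 8.02         # 95.45% quantile of chi^2 with 3 d.o.f.

def stu_doublet(kappa_y, m_y, m_b=4.18, m_t=172.69, m_z=91.1876, sw2=0.22339):
    """Approximate (S, T, U) of the (B, Y) doublet, Eq. (11)."""
    m_w2 = m_z**2 * (1.0 - sw2)  # m_W = m_Z cos(theta_W)
    msq = (m_y**2 - m_b**2 * kappa_y**2) / (1.0 - kappa_y**2)
    s = (-(2.0 / 3.0) * kappa_y**2 * math.log(msq / m_b**2)
         + (11.0 / 3.0) * kappa_y**2) / (2.0 * math.pi)
    u = -kappa_y**2 / (2.0 * math.pi)
    t = (3.0 * m_t**2 / (8.0 * math.pi * sw2 * m_w2)
         * kappa_y**4 * 2.0 * msq / (3.0 * m_t**2))
    return s, t, u

def _det3(m):
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
            - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
            + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def chi2_stu(s, t, u):
    """chi^2 of (S, T, U) against the fit, using the full 3x3 covariance."""
    sS, sT, sU = SIGMA
    cov = [[sS*sS, RHO_ST*sS*sT, RHO_SU*sS*sU],
           [RHO_ST*sS*sT, sT*sT, RHO_TU*sT*sU],
           [RHO_SU*sS*sU, RHO_TU*sT*sU, sU*sU]]
    d = [s - CENTRAL[0], t - CENTRAL[1], u - CENTRAL[2]]
    det = _det3(cov)
    # solve cov @ x = d by Cramer's rule, then chi^2 = d . x
    x = []
    for i in range(3):
        m = [row[:] for row in cov]
        for r in range(3):
            m[r][i] = d[r]
        x.append(_det3(m) / det)
    return sum(di * xi for di, xi in zip(d, x))
```

A parameter point $(\kappa_{Y},m_{Y})$ is then allowed at $2\sigma$ when `chi2_stu(*stu_doublet(kappa_y, m_y)) < CHI2_2SIGMA_3DOF`.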
## III Signal to background analysis
The signal model file is generated with FeynRules feynruls and parton-level
events are produced using MadGraph5_aMC$@$NLO Alwall:2014hca with the
NNPDF23LO1 parton distribution functions (PDFs) NNPDF . Dynamic factorization
and renormalization scales, set as default in MadEvent website_factor , are
utilized. Subsequently, fast detector simulations are conducted using Delphes
3.4.2 deFavereau:2013fsa with the built-in detector configurations of the LHC
Run-III, HL-LHC, HE-LHC website_hllhc and FCC-hh website_fcchh . Jets are
clustered by FastJet Cacciari:2011ma employing the anti-$k_{t}$ algorithm
Cacciari:2005hq with a distance parameter of $\Delta R=0.4$. Furthermore,
MadAnalysis 5 Conte:2012fm is used to analyze both signal and background
events. Finally, the EasyScan_HEP package Shang:2023gfy is utilized to
connect these programs and scan the VLQ parameter space.
The numerical values of the input SM parameters are taken as follows
ParticleDataGroup:2022pth :
$\displaystyle m_{b}=4.18{\rm~{}GeV},\quad m_{t}=172.69{\rm~{}GeV},\quad
m_{Z}=91.1876{\rm~{}GeV},$
$\displaystyle\sin^{2}\theta_{W}=0.22339,\quad\alpha(m_{Z})=\frac{1}{127.951},\quad\alpha_{s}(m_{Z})=0.1179.$
(12)
Considering the general detection capabilities of detectors, the following
basic cuts are chosen:
$\displaystyle\Delta R(x,y)>0.4$ $\displaystyle(x,y=l,j,b),$ $\displaystyle
p^{l}_{T}>25\ {\rm GeV},$ $\displaystyle\ \ |\eta_{l}|<2.5,$ $\displaystyle
p^{j}_{T}>20\ {\rm GeV},$ $\displaystyle\ \ |\eta_{j}|<5.0,$ $\displaystyle
p^{b}_{T}>25\ {\rm GeV},$ $\displaystyle\ \ |\eta_{b}|<2.5,$
where $\Delta R=\sqrt{\Delta\phi^{2}+\Delta\eta^{2}}$ denotes the separation
in the pseudorapidity ($\eta$)–azimuth ($\phi$) plane.
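Computing this separation requires wrapping the azimuthal difference into $(-\pi,\pi]$; a minimal sketch (the function name is ours):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Delta R = sqrt(deta^2 + dphi^2), with dphi wrapped into [-pi, pi)."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    deta = eta1 - eta2
    return math.sqrt(deta * deta + dphi * dphi)
```

The basic isolation cut then reads `delta_r(...) > 0.4` for every pair of reconstructed objects.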
To account for the relatively small numbers of signal ($s$) and background
($b$) events, we use the median significance $\mathcal{Z}$ to estimate the
expected discovery and exclusion reaches Cowan:2010js ; Kumar:2015tna ,
$\displaystyle\mathcal{Z}_{excl}=\sqrt{2\left[s-b\ln\left(\frac{b+s+x}{2b}\right)-\frac{1}{\delta^{2}}\ln\left(\frac{b-s+x}{2b}\right)\right]-(b+s-x)\left(1+\frac{1}{\delta^{2}b}\right)},$
(13)
$\displaystyle\mathcal{Z}_{disc}=\sqrt{2\left[(s+b)\ln\left(\frac{(s+b)(1+\delta^{2}b)}{b+(s+b)\delta^{2}b}\right)-\frac{1}{\delta^{2}}\ln\left(1+\frac{\delta^{2}s}{1+\delta^{2}b}\right)\right]},$
(14) $x=\sqrt{(s+b)^{2}-\frac{4\delta^{2}sb^{2}}{1+\delta^{2}b}},$ (15)
where $\delta$ is the systematic uncertainty on the background measurement. In
the ideal case of $\delta=0$, Eqs. (13) and (14) simplify to, respectively:
$\mathcal{Z}_{excl}=\sqrt{2\left[s-b\ln\left(1+\frac{s}{b}\right)\right]},$
(16)
and
$\mathcal{Z}_{disc}=\sqrt{2\left[(s+b)\ln\left(1+\frac{s}{b}\right)-s\right]}.$
(17)
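The two significance formulas translate directly into code; a minimal sketch, with the $\delta=0$ branch falling back to Eqs. (16)–(17):

```python
import math

def _x(s, b, delta):
    """Auxiliary x of Eq. (15)."""
    return math.sqrt((s + b)**2 - 4.0 * delta**2 * s * b**2 / (1.0 + delta**2 * b))

def z_excl(s, b, delta=0.0):
    """Median exclusion significance, Eq. (13); Eq. (16) when delta = 0."""
    if delta == 0.0:
        return math.sqrt(2.0 * (s - b * math.log(1.0 + s / b)))
    x = _x(s, b, delta)
    return math.sqrt(
        2.0 * (s - b * math.log((b + s + x) / (2.0 * b))
               - math.log((b - s + x) / (2.0 * b)) / delta**2)
        - (b + s - x) * (1.0 + 1.0 / (delta**2 * b)))

def z_disc(s, b, delta=0.0):
    """Median discovery significance, Eq. (14); Eq. (17) when delta = 0."""
    if delta == 0.0:
        return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))
    return math.sqrt(2.0 * (
        (s + b) * math.log((s + b) * (1.0 + delta**2 * b)
                           / (b + (s + b) * delta**2 * b))
        - math.log(1.0 + delta**2 * s / (1.0 + delta**2 * b)) / delta**2))
```

As expected, a nonzero $\delta$ always degrades both significances, and the full expressions approach the simplified ones smoothly as $\delta\to 0$.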
### III.1 LHC Run-III and HL-LHC
First, we establish a trigger that emulates the LHC Run-III and HL-LHC
detector response based on the count of final state particles detected in each
event. Given the limited efficiency of the detector in identifying jets, we
adopt a lenient approach towards the number of jets. Consequently, the final
trigger criteria are defined as follows: $N_{l}=1$, $N_{j}\geq 2$, $N_{j}\leq
4$ and $N_{b}\geq 2$.
Considering that the mass of $Y$ is notably greater than that of its decay
products, the latter exhibit distinct spatial characteristics in
pseudorapidity $\eta$ and spatial separation $\Delta R$ compared to
backgrounds. These differences inform our selection criteria.
Cuts | $Y_{1500}$ (fb) | $Y_{1800}$ (fb) | $t\bar{b}j$ (fb) | $W^{+}bj$ (fb) | $W^{+}W^{-}b$ (fb) | $Zbj$ (fb)
---|---|---|---|---|---|---
Basic Cuts | 1.99 | 0.97 | 13855.00 | 15016.00 | 18967.00 | 13897.00
Trigger | 0.29 | 0.13 | 2227.40 | 775.10 | 1251.50 | 312.80
Cut 1 | 0.25 | 0.12 | 40.09 | 11.95 | 39.12 | 2.63
Cut 2 | 0.23 | 0.11 | 7.46 | 4.07 | 8.07 | 0.63
Cut 3 | 0.16 | 0.08 | 4.51 | 3.02 | 4.93 | 0.39
Cut 4 | 0.08 | 0.05 | 0.08 | 1.35 | 1.18 | 0.15
Cut 5 | 0.08 | 0.04 | 0.08 | 1.00 | 0.89 | 0.13
Cut 6 | 0.04 | 0.03 | 0.01 | 0.02 | 0.05 | 0.00
Cut 7 | 0.03 | 0.02 | 0.01 | 0.00 | 0.03 | 0.00
Table 3: Cut flows of the signal with $\kappa_{Y}=0.1$ and backgrounds at the
14 TeV HL-LHC, where the conjugate processes $pp\to\bar{t}bj$,
$W^{-}\bar{b}j$, $W^{+}W^{-}\bar{b}$, $Z\bar{b}j$ have been included.
Furthermore, since $Y$ is much heavier than the particles produced in the
background processes, we anticipate that the transverse momenta (denoted
$\vec{p}_{T}$, with magnitude $p_{T}$) of the $Y$ decay products will be
substantially larger than those of the same particles from background
processes. In addition, we consider the variables $\not{E}_{T}$,
$\not{H}_{T}$ and $M_{T}$ to distinguish the signal from the background. Here,
$\not{E}_{T}$ is the magnitude of the vector sum of the transverse momenta of
all visible final-state particles, $\not{H}_{T}$ is defined analogously but
includes only the visible hadronic momenta, while the transverse mass $M_{T}$
is defined as follows:
$\displaystyle M_{T}^{2}$
$\displaystyle\equiv[E_{T}(1)+E_{T}(2)]^{2}-[\vec{p}_{T}(1)+\vec{p}_{T}(2)]^{2}$
$\displaystyle=m_{1}^{2}+m_{2}^{2}+2[E_{T}(1)E_{T}(2)-\vec{p}_{T}(1)\cdot\vec{p}_{T}(2)],$
where $E_{T}(i)=\sqrt{p^{2}_{T}(i)+m^{2}_{i}}$ and $m^{2}_{i}=p_{i}^{2}$ with
$p_{i}$ representing a 4-vector.
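Since $M_{T}$ depends only on the transverse momenta, azimuths and masses of the two objects, it can be evaluated without the longitudinal components; a minimal sketch (the function name is ours):

```python
import math

def transverse_mass(pt1, phi1, m1, pt2, phi2, m2):
    """M_T^2 = m1^2 + m2^2 + 2(E_T(1)E_T(2) - pT(1).pT(2)), E_T = sqrt(pT^2 + m^2)."""
    et1 = math.sqrt(pt1 * pt1 + m1 * m1)
    et2 = math.sqrt(pt2 * pt2 + m2 * m2)
    pt_dot = pt1 * pt2 * math.cos(phi1 - phi2)  # transverse-plane dot product
    mt2 = m1 * m1 + m2 * m2 + 2.0 * (et1 * et2 - pt_dot)
    return math.sqrt(max(mt2, 0.0))  # guard against tiny negative round-off
```

For two massless objects, $M_{T}$ is maximal when they are back-to-back in azimuth and vanishes when they are collinear.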
In Figure 4, we present the normalized distributions of $p_{T}^{j_{1}}$,
$M_{b_{1}l_{1}}$, $M_{j_{1}j_{2}}$, $M_{T}^{b_{2}l_{1}}$,
$M_{T}^{b_{1}l_{1}}$, $\Delta R_{j_{1},b_{1}}$, $\not{H}_{T}$ and
$\not{E}_{T}$ for both $m_{Y}=1500\text{ GeV}$ and $m_{Y}=1800\text{ GeV}$
with $\kappa_{Y}=0.1$ as well as for the background processes. Based on these
distributions, we have devised the following selection criteria to distinguish
the signal from the various backgrounds (the subscripts on the particle
symbols order the objects by decreasing transverse momentum: e.g., for
$b$-jets, $p_{T}^{b_{1}}>p_{T}^{b_{2}}$):
* •
Trigger: $N_{l}=1$, $N_{j}\geq 2$, $N_{j}\leq 4$, and $N_{b}\geq 2$;
* •
Cut-1: $p_{T}^{j_{1}}>300\text{ GeV}$;
* •
Cut-2: $M_{b_{1}l_{1}}>500\text{ GeV}$;
* •
Cut-3: $M_{j_{1}j_{2}}>500\text{ GeV}$;
* •
Cut-4: $M_{T}^{b_{1}l_{1}}>200\text{ GeV}$ and $M_{T}^{b_{2}l_{1}}>200\text{
GeV}$;
* •
Cut-5: $\Delta R_{j_{1},b_{1}}<1.0$;
* •
Cut-6: $\not{H}_{T}>600\text{ GeV}$;
* •
Cut-7: $\not{E}_{T}>200\text{ GeV}$.
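The trigger and Cuts 1–7 above amount to a sequential cut flow over per-event kinematic summaries. A sketch follows; the event-dictionary keys are our own hypothetical naming for quantities computed from the reconstructed objects, not MadAnalysis 5 syntax:

```python
def cut_flow(events, cuts):
    """Apply named cuts sequentially; return the survivor count after each."""
    counts, surviving = [], list(events)
    for name, cut in cuts:
        surviving = [ev for ev in surviving if cut(ev)]
        counts.append((name, len(surviving)))
    return counts

# HL-LHC selection of the text; keys like "pt_j1" are illustrative
CUTS = [
    ("Trigger", lambda ev: ev["n_l"] == 1 and 2 <= ev["n_j"] <= 4
                           and ev["n_b"] >= 2),
    ("Cut 1",   lambda ev: ev["pt_j1"] > 300.0),
    ("Cut 2",   lambda ev: ev["m_b1l1"] > 500.0),
    ("Cut 3",   lambda ev: ev["m_j1j2"] > 500.0),
    ("Cut 4",   lambda ev: ev["mt_b1l1"] > 200.0 and ev["mt_b2l1"] > 200.0),
    ("Cut 5",   lambda ev: ev["dr_j1b1"] < 1.0),
    ("Cut 6",   lambda ev: ev["ht_miss"] > 600.0),
    ("Cut 7",   lambda ev: ev["et_miss"] > 200.0),
]
```

Dividing each survivor count by the pre-selection count reproduces the per-cut efficiencies reported in the cut-flow tables.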
By applying these cuts, we can see that the signal efficiencies for
$m_{Y}=1500\text{ GeV}$ and $m_{Y}=1800\text{ GeV}$ are 1.35% and 2.41%,
respectively. The higher efficiency for the latter can be attributed to the
larger transverse boost of the final state originating from a heavier $Y$.
Meanwhile, the background processes are significantly suppressed. For
reference, we provide the cut flows in Table 3.
We present the exclusion capability ($\mathcal{Z}_{\text{excl}}=2$) and
discovery potential ($\mathcal{Z}_{\text{disc}}=5$) for $Y$ with two different
integrated luminosities, 1000 fb-1 and 3000 fb-1, at the HL-LHC, as shown in
the top row of Figure 7. This analysis considers both the ideal scenario
without systematic uncertainties and the case with a 10% systematic
uncertainty. In the presence of 10% systematic uncertainty, the $Y$ can be
excluded in the correlated parameter space of $\kappa_{Y}\in[0.044,0.5]$ and
$m_{Y}\in[1000\text{ GeV},3099\text{ GeV}]$ with an integrated luminosity of
$L=300\text{ fb}^{-1}$, which corresponds to the maximum achievable integrated
luminosity during LHC Run-III. If the integrated luminosity is raised to 3000
fb-1, aligning with the maximum achievable at the HL-LHC, the excluded
parameter zones extend to $\kappa_{Y}\in[0.027,0.5]$ and $m_{Y}\in[1000\text{
GeV},3653\text{ GeV}]$. Furthermore, the discovery regions are
$\kappa_{Y}\in[0.072,0.5]$ ([0.047, 0.5]) and $m_{Y}\in[1000\text{
GeV},2621\text{ GeV}]$ ($[1000\text{ GeV},3047\text{ GeV}]$) with $L=300\text{
fb}^{-1}$ (3000 fb-1).
Figure 4: Normalized distributions for the signals of $m_{Y}=$ 1500 GeV and
1800 GeV and SM backgrounds at the HL-LHC. The conjugated processes have been
included.
### III.2 27 TeV HE-LHC
Cuts | $Y_{1500}$ (fb) | $Y_{1800}$ (fb) | $t\bar{b}j$ (fb) | $W^{+}bj$ (fb) | $W^{+}W^{-}b$ (fb) | $Zbj$ (fb)
---|---|---|---|---|---|---
Basic Cuts | 16.86 | 10.01 | 41398.00 | 38670.00 | 69303.00 | 69658.00
Trigger | 1.78 | 0.10 | 6224.50 | 2149.10 | 4445.40 | 1700.70
Cut 1 | 1.50 | 0.91 | 86.07 | 29.51 | 133.60 | 10.73
Cut 2 | 1.36 | 0.85 | 18.30 | 11.14 | 29.52 | 2.37
Cut 3 | 0.95 | 0.62 | 12.83 | 9.05 | 19.27 | 1.53
Cut 4 | 0.35 | 0.27 | 0.17 | 2.94 | 3.10 | 0.35
Cut 5 | 0.33 | 0.25 | 0.17 | 2.01 | 2.36 | 0.28
Cut 6 | 0.12 | 0.16 | 0.00 | 0.04 | 0.37 | 0.00
Cut 7 | 0.09 | 0.12 | 0.00 | 0.04 | 0.14 | 0.00
Table 4: Cut flows of the signal with $\kappa_{Y}=0.1$ and backgrounds at the
27 TeV HE-LHC.
Figure 5: Normalized distributions for the signals with $m_{Y}=$ 1500 GeV and
1800 GeV and backgrounds at the HE-LHC.
This section delves into the prospective signal of $Y$ at the future 27 TeV
HE-LHC. In Figure 5, we exhibit the normalized distributions for both signal
and background processes, forming the basis for our distinctive selection
criteria:
* •
Trigger: $N_{l}=1$, $N_{j}\geq 2$, $N_{j}\leq 4$, and $N_{b}\geq 2$;
* •
Cut-1: $p_{T}^{j_{1}}>350\text{ GeV}$;
* •
Cut-2: $M_{b_{1}l_{1}}>550\text{ GeV}$;
* •
Cut-3: $M_{j_{1}j_{2}}>550\text{ GeV}$;
* •
Cut-4: $M_{T}^{b_{2}l_{1}}>250\text{ GeV}$ and $M_{T}^{b_{1}l_{1}}>250\text{
GeV}$;
* •
Cut-5: $\Delta R_{j_{1},b_{1}}<0.5$;
* •
Cut-6: $\not{H}_{T}>650\text{ GeV}$;
* •
Cut-7: $\not{E}_{T}>200\text{ GeV}$.
The kinematic variables remain consistent with those of the 14 TeV case, but
the cut threshold values for transverse momentum-based variables, such as
$\not{H}_{T}>650\text{ GeV}$, are higher than those in the 14 TeV case. This
adjustment accounts for the increased center-of-mass energy. Detailed cut
flows are outlined in Table 4 and the exclusion capability and discovery
potential are shown in the second row of Figure 7. The $Y$ quark can be
excluded within the correlated parameter space of $\kappa_{Y}\in[0.033,0.5]$
and $m_{Y}\in[1000\text{ GeV},4783\text{ GeV}]$ with 10% systematic
uncertainty for $L=1000\text{ fb}^{-1}$. If the integrated luminosity is
raised to the highest design value of 10 ab-1, the excluded parameter regions
can be extended to $\kappa_{Y}\in[0.029,0.5]$ and $m_{Y}\in[1000\text{
GeV},4987\text{ GeV}]$. For $L=3000\text{ fb}^{-1}$, the discovery regions are
$\kappa_{Y}\in[0.053,0.5]$ and $m_{Y}\in[1000\text{ GeV},3885\text{ GeV}]$. If
the integrated luminosity is raised to the highest design value of 10 ab-1, the
discovery parameter regions can be extended to $\kappa_{Y}\in[0.051,0.5]$ and
$m_{Y}\in[1000\text{ GeV},3943\text{ GeV}]$.
### III.3 100 TeV FCC-hh
Cuts | $Y_{1500}$ (fb) | $Y_{1800}$ (fb) | $t\bar{b}j$ (fb) | $W^{+}bj$ (fb) | $W^{+}W^{-}b$ (fb) | $Zbj$ (fb)
---|---|---|---|---|---|---
Basic Cuts | 261.26 | 183.18 | 237538.00 | 206093.00 | 573258.00 | 291603.00
Trigger | 13.44 | 8.42 | 33633.00 | 17209.00 | 40939.00 | 6363.90
Cut 1 | 6.37 | 4.20 | 209.30 | 112.70 | 605.90 | 18.95
Cut 2 | 5.63 | 3.89 | 54.16 | 48.43 | 163.30 | 7.58
Cut 3 | 3.30 | 2.51 | 3.33 | 23.91 | 53.74 | 3.21
Cut 4 | 3.14 | 2.43 | 3.33 | 17.72 | 45.12 | 3.21
Cut 5 | 1.40 | 1.70 | 0.48 | 1.65 | 6.15 | 0.00
Cut 6 | 0.81 | 1.16 | 0.24 | 0.21 | 2.87 | 0.00
Table 5: Cut flows of the signal with $\kappa_{Y}=0.1$ and backgrounds at the
100 TeV FCC-hh.
Figure 6: Normalized distributions for the signals with $m_{Y}=$ 1500 GeV and
1800 GeV, and backgrounds at the FCC-hh.
Here, we explore the anticipated signal of ${Y}$ in the context of the future
100 TeV FCC-hh. Figure 6 shows the normalized distributions for
both signal and background processes, laying the groundwork for our
distinctive selection criteria:
* •
Trigger: $N_{l}=1$, $N_{j}\geq 2$, $N_{j}\leq 4$, and $N_{b}\geq 2$;
* •
Cut-1: $p_{T}^{j_{1}}>350\text{ GeV}$, $|\eta_{j_{1}}|<1$;
* •
Cut-2: $M_{b_{1}l_{1}}>550\text{ GeV}$;
* •
Cut-3: $M_{T}^{b_{2}l_{1}}>150\text{ GeV}$ and $M_{T}^{b_{1}l_{1}}>250\text{
GeV}$;
* •
Cut-4: $\Delta R_{j_{1},b_{1}}<0.5$;
* •
Cut-5: $\not{H}_{T}>650\text{ GeV}$;
* •
Cut-6: $\not{E}_{T}>300\text{ GeV}$.
Compared to previous cases, an additional variable, $\eta_{j_{1}}$, is
introduced here. Upon analyzing the distributions of $\eta_{j_{1}}$, it is
apparent that the signal tends to be more central than the backgrounds. Thus,
we require $|\eta_{j_{1}}|<1$. The signal efficiencies for $m_{Y}=1500\text{
GeV}$ and $m_{Y}=1800\text{ GeV}$ are 0.20% and 0.45%, respectively. Notably,
there is a significant suppression in the background processes. Comprehensive
cut flows are provided in Table 5. The exclusion capability and discovery
potential are illustrated in the final row of Figure 7. Systematic
uncertainty has a considerable impact on the results: with a 10% systematic
uncertainty, the accessible parameter space shrinks significantly. Accounting
for this uncertainty, the $Y$ quark can be
excluded within the correlated parameter space of $\kappa_{Y}\in[0.051,0.5]$
and $m_{Y}\in[1000\text{ GeV},6610\text{ GeV}]$ at the highest design value of
luminosity, $L=30\text{ ab}^{-1}$. Additionally, the $Y$ state can be
discovered within $\kappa_{Y}\in[0.088,0.5]$ and $m_{Y}\in[1000\text{
GeV},4624\text{ GeV}]$ at $L=30\text{ ab}^{-1}$.
Figure 7: The exclusion capability ($\mathcal{Z}_{\text{excl}}=2$) and
discovery potential ($\mathcal{Z}_{\text{disc}}=5$) for the $Y$ state at the
LHC Run-III and HL-LHC, $\sqrt{s}=27\text{ TeV}$ HE-LHC and
$\sqrt{s}=100\text{ TeV}$ FCC-hh. Solid lines represent the ideal scenario
without systematic uncertainty, while the dotted lines represent the scenario with a
10% systematic uncertainty. Dashed lines denote the contours of
$\Gamma_{Y}/m_{Y}$. The blue (grey) shaded area indicates the exclusion region
of the current LHC at $\sqrt{s}=$ 13 TeV with $L=$36.1 fb-1 (140 fb-1), as
reported in Ref. ATLAS:2018dyh (Ref. ATLAS:2023shy ). Meanwhile, the yellow
shaded area denotes the allowed region for the oblique parameters $S,T$ and
$U$, considering the current measurements in Ref. ParticleDataGroup:2022pth .
## IV Summary
Colliders | $L/\text{fb}^{-1}$ | Uncertainty | Exclusion $\kappa_{Y}$ | Exclusion $m_{Y}$ (GeV) | Discovery $\kappa_{Y}$ | Discovery $m_{Y}$ (GeV)
---|---|---|---|---|---|---
LHC Run-III | 300 | 0 | [0.043,0.5] | [1000,3111] | [0.069,0.5] | [1000,2665]
 | 300 | 10% | [0.044,0.5] | [1000,3099] | [0.072,0.5] | [1000,2621]
14 TeV HL-LHC | 1000 | 0 | [0.031,0.5] | [1000,3486] | [0.049,0.5] | [1000,2988]
 | 3000 | 0 | [0.023,0.5] | [1000,3820] | [0.037,0.5] | [1000,3267]
 | 1000 | 10% | [0.033,0.5] | [1000,3398] | [0.055,0.5] | [1000,2880]
 | 3000 | 10% | [0.027,0.5] | [1000,3653] | [0.047,0.5] | [1000,3047]
27 TeV HE-LHC | 1000 | 0 | [0.026,0.5] | [1000,5213] | [0.042,0.5] | [1000,4359]
 | 3000 | 0 | [0.020,0.5] | [1000,5811] | [0.031,0.5] | [1000,4863]
 | 10000 | 0 | [0.015,0.5] | [1000,6476] | [0.024,0.5] | [1000,5513]
 | 1000 | 10% | [0.033,0.5] | [1000,4783] | [0.057,0.5] | [1000,3783]
 | 3000 | 10% | [0.030,0.5] | [1000,4936] | [0.053,0.5] | [1000,3885]
 | 10000 | 10% | [0.029,0.5] | [1000,4987] | [0.051,0.5] | [1000,3943]
100 TeV FCC-hh | 1000 | 0 | [0.022,0.5] | [1000,9953] | [0.035,0.5] | [1000,7933]
 | 3000 | 0 | [0.016,0.5] | [1000,11259] | [0.026,0.5] | [1000,9000]
 | 10000 | 0 | [0.014,0.5] | [1000,12254] | [0.021,0.5] | [1000,10425]
 | 30000 | 0 | [0.010,0.5] | [1000,13771] | [0.015,0.5] | [1000,11649]
 | 1000 | 10% | [0.051,0.5] | [1000,6610] | [0.088,0.5] | [1000,4624]
 | 3000 | 10% | [0.051,0.5] | [1000,6610] | [0.088,0.5] | [1000,4624]
 | 10000 | 10% | [0.051,0.5] | [1000,6610] | [0.088,0.5] | [1000,4624]
 | 30000 | 10% | [0.051,0.5] | [1000,6610] | [0.088,0.5] | [1000,4624]
Table 6: Summary for 2$\sigma$ exclusion limits and 5$\sigma$ signal
discoveries at the LHC Run-III and HL-LHC, $\sqrt{s}=27\text{ TeV}$ HE-LHC and
$\sqrt{s}=100\text{ TeV}$ FCC-hh.
In a simplified model, we have investigated the single production of a doublet
VLQ denoted by $Y$ in the $Wb$ decay channel at the $\sqrt{s}=14$ TeV HL-LHC,
$\sqrt{s}=27$ TeV HE-LHC and $\sqrt{s}=100$ TeV FCC-hh, following its
production via $pp\to Ybj$, with the $W$ decaying leptonically (into electrons
and muons plus their respective neutrinos). We have performed a detector-level
simulation for the signal and relevant SM backgrounds. Considering a
systematic uncertainty of 10% with an integrated luminosity of 3000 fb-1, the
exclusion and discovery capabilities, as displayed in Table 6, can be
described as follows: (1) The HL-LHC can exclude (discover) the correlated
regions of $\kappa_{Y}\in[0.027,0.5]$ $([0.047,0.5])$ and $m_{Y}\in[1000\text{
GeV},3653\text{ GeV}]$ $([1000\text{ GeV},3047\text{ GeV}])$; (2) The HE-LHC
can exclude (discover) the correlated regions of $\kappa_{Y}\in[0.030,0.5]$
$([0.053,0.5])$ and $m_{Y}\in[1000\text{ GeV},4936\text{ GeV}]$ $([1000\text{
GeV},3885\text{ GeV}])$; (3) The FCC-hh can exclude (discover) the correlated
regions of $\kappa_{Y}\in[0.051,0.5]$ $([0.088,0.5])$ and $m_{Y}\in[1000\text{
GeV},6610\text{ GeV}]$ $([1000\text{ GeV},4624\text{ GeV}])$.
Furthermore, we highlight that the stringent constraint on the VLQ $Y$,
derived from the $Y$ pair production search with BR($Y\to W^{-}b$) $=1$,
imposes $m_{Y}>1700\text{ GeV}$. In this context, we reassess the potential of
LHC Run-III to explore the VLQ $Y$, revealing that the associated parameter
regions of $\kappa_{Y}\in[0.044,0.5]$ $([0.072,0.5])$ and $m_{Y}\in[1000\text{
GeV},3099\text{ GeV}]$ $([1000\text{ GeV},2621\text{ GeV}])$ can be excluded
(discovered) based on LHC Run-III luminosity. We foresee that our
investigation will spur complementary explorations for a potential $Y$ quark
at forthcoming $pp$ colliders.
## Acknowledgements
This work of LS, YY and BY is supported by the Natural Science Foundation of
Henan Province under Grant No. 232300421217, the National Research Project
Cultivation Foundation of Henan Normal University under Grant No. 2021PL10,
the China Scholarship Council under Grant No. 202208410277 and also powered by
the High Performance Computing Center of Henan Normal University. The work of
SM is supported in part through the NExT Institute, the Knut and Alice
Wallenberg Foundation under the Grant No. KAW 2017.0100 (SHIFT) and the STFC
Consolidated Grant No. ST/L000296/1.
## Appendix A Appendix: Relationship between Eq. (1) and Eq. (3)
In this Appendix, we provide the relationship between the $(B,Y)$ doublet
representation and the simplified model used in the simulation. We do not
present the corresponding relationship for the $(X,B,Y)$ triplet
representation, as it can easily be derived from what follows.
The Lagrangian for the $Y$ coupling with the SM gauge fields and the $Y$ mass
term is
$\displaystyle\mathcal{L}=\bar{Q}_{5}(i\not{D}-M_{F})Q_{5}$ (A.1)
where one has
$\displaystyle Q_{5}=\begin{pmatrix}B_{0}\\\
Y_{0}\end{pmatrix},\bar{Q}_{5}=(\bar{B}_{0},\bar{Y}_{0}),\not{D}=\gamma^{\mu}D_{\mu},D_{\mu}=\partial_{\mu}+i{g}^{\prime}Y_{F}B_{\mu}+\frac{i}{2}g\tau^{I}W_{\mu}^{I}$
(A.2)
where $g$ and $g^{\prime}$ are the SU(2)$_{L}$ and U(1)$_{Y}$ gauge couplings,
respectively, and $Y_{F}$ is the hypercharge of the doublet. We use a
subscript $0$ to denote the interaction eigenstates. The unphysical fields $B_{\mu}$ and $W^{I}_{\mu}$
($I=1,2,3$) can be transformed into the physical fields of the photon
$A_{\mu}$, the neutral $Z$ boson $Z_{\mu}$ and charged $W$ bosons
$W^{\pm}_{\mu}$ via the following equations:
$\displaystyle B_{\mu}$
$\displaystyle=\cos\theta_{W}A_{\mu}-\sin\theta_{W}Z_{\mu},W^{3}_{\mu}=\sin\theta_{W}A_{\mu}+\cos\theta_{W}Z_{\mu},$
$\displaystyle W^{1}_{\mu}$
$\displaystyle=\frac{1}{\sqrt{2}}(W^{+}_{\mu}+W^{-}_{\mu}),W^{2}_{\mu}=\frac{i}{\sqrt{2}}(W^{+}_{\mu}-W^{-}_{\mu})$
(A.3)
where $\theta_{W}$ is the Weinberg angle, which can be expressed via
$\sin\theta_{W}=\frac{e}{g}$ and $\cos\theta_{W}=\frac{e}{g^{\prime}}$. Here,
$M_{F}$ is a free mass parameter. Considering the charge of $Y$, the
Lagrangian for the $Y$ coupling with the SM gauge fields is
$\displaystyle\mathcal{L}_{Q_{5}Q_{5}V}$
$\displaystyle=\bar{Q}_{5}\left(\frac{5}{6}{g}^{\prime}B_{\mu}-\frac{g}{2}\begin{bmatrix}W_{\mu}^{3}&W_{\mu}^{1}-iW_{\mu}^{2}\\\
W_{\mu}^{1}+iW_{\mu}^{2}&-W_{\mu}^{3}\end{bmatrix}\right)\gamma^{\mu}Q_{5}$
$\displaystyle=\bar{Q}_{5}\begin{bmatrix}\frac{1}{3}eA_{\mu}-\frac{g}{2\cos\theta}\left(1+\frac{2}{3}\sin^{2}\theta\right)Z_{\mu}&-\frac{g}{\sqrt{2}}W_{\mu}^{+}\\\
-\frac{g}{\sqrt{2}}W_{\mu}^{-}&\frac{4}{3}eA_{\mu}+\frac{g}{2\cos\theta}\left(1-\frac{8}{3}\sin^{2}\theta\right)Z_{\mu}\end{bmatrix}\gamma^{\mu}Q_{5}$
$\displaystyle=\frac{1}{3}e\bar{B}_{0}A_{\mu}\gamma^{\mu}B_{0}-\frac{g}{2\cos\theta}\left(1+\frac{2}{3}\sin^{2}\theta\right)\bar{B}_{0}Z_{\mu}\gamma^{\mu}B_{0}$
$\displaystyle\,\,\,\,\,\,+\frac{4}{3}e\bar{Y}_{0}A_{\mu}\gamma^{\mu}Y_{0}+\frac{g}{2\cos\theta}\left(1-\frac{8}{3}\sin^{2}\theta\right)\bar{Y}_{0}Z_{\mu}\gamma^{\mu}Y_{0}$
$\displaystyle\,\,\,\,\,\,-\frac{g}{\sqrt{2}}\bar{Y}_{0}W^{-}_{\mu}\gamma^{\mu}B_{0}-\frac{g}{\sqrt{2}}\bar{B}_{0}W^{+}_{\mu}\gamma^{\mu}Y_{0}.$
(A.4)
In our study, $(B,Y)$ states exclusively couple with the third-generation
quarks of the SM. Therefore, the Lagrangian for the mass term of the bottom
quark mass eigenstate $b$ and its partner mass eigenstate $B$ can be written
as
$\displaystyle\mathcal{L}_{\text{mass}}=-\begin{pmatrix}\bar{b}_{0}^{L}&\bar{B}_{0}^{L}\end{pmatrix}\begin{pmatrix}y_{33}^{d}\frac{v}{\sqrt{2}}&y_{34}^{d}\frac{v}{\sqrt{2}}\\\
y_{43}^{d}\frac{v}{\sqrt{2}}&M^{0}\end{pmatrix}\begin{pmatrix}b_{0}^{R}\\\
B_{0}^{R}\end{pmatrix}+\text{H.c.}$ (A.5)
where $v=246\text{ GeV}$ is the Vacuum Expectation Value (VEV) of the Higgs
field, $M^{0}$ is a bare mass term, $y_{33}^{d}$ and $y_{43}^{d}$ are Yukawa
coupling coefficients while $y_{34}^{d}=0$ for doublets. The mass matrix can
be diagonalized by the two mixing matrices $V^{L}$ and $V^{R}$, as follows:
$\displaystyle\begin{pmatrix}b_{0}^{L,R}\\\
B_{0}^{L,R}\end{pmatrix}=V^{L,R}\begin{pmatrix}b^{L,R}\\\
B^{L,R}\end{pmatrix}$ (A.6)
where $L$ and $R$ stand for the left-handed and right-handed chiralities,
respectively. In addition, $B_{0}=B_{0}^{L}+B_{0}^{R}$ and
$Y_{0}=Y_{0}^{L}+Y_{0}^{R}$. The $2\times 2$
unitary matrices $V^{L}$ and $V^{R}$ can be parameterized by the mixing angles
$\theta^{L}$ and $\theta^{R}$, respectively, as
$\displaystyle V^{L,R}=\begin{pmatrix}\cos\theta^{L,R}&\sin\theta^{L,R}\\\
-\sin\theta^{L,R}&\cos\theta^{L,R}\end{pmatrix}$ (A.7)
We can then determine the expressions
$B_{0}^{L,R}=-\sin\theta^{L,R}b^{L,R}+\cos\theta^{L,R}B^{L,R}$. For $Y$, one
simply has $Y_{0}^{L,R}=Y^{L,R}$, where $Y$ represents the mass eigenstate,
because the SM contains no quarks of electric charge $\pm 4/3$ with which it
could mix. Therefore, we can
derive the interactions between the $Y$, $W$ and $b$ states as follows:
$\displaystyle\mathcal{L}_{YW^{\pm}b}$
$\displaystyle=-\frac{g}{\sqrt{2}}\left(\bar{Y}^{L}+\bar{Y}^{R}\right)W^{-}_{\mu}\gamma^{\mu}\left(-\sin\theta^{L}b^{L}-\sin\theta^{R}b^{R}\right)+\text{H.c.}$
$\displaystyle=\frac{g}{\sqrt{2}}\sin\theta^{L}\bar{Y}^{L}W^{-}_{\mu}\gamma^{\mu}b^{L}+\frac{g}{\sqrt{2}}\sin\theta^{R}\bar{Y}^{R}W^{-}_{\mu}\gamma^{\mu}b^{R}+\text{H.c.}$
(A.8)
Using unitary matrices, we can finally obtain
$\displaystyle
V^{L}\begin{pmatrix}y_{33}^{d}\frac{v}{\sqrt{2}}&y_{34}^{d}\frac{v}{\sqrt{2}}\\\
y_{43}^{d}\frac{v}{\sqrt{2}}&M^{0}\end{pmatrix}(V^{R})^{\dagger}=\begin{pmatrix}m_{b}&0\\\
0&m_{B}\end{pmatrix}$ (A.9)
After some algebra involving trigonometric identities, we obtain (for the
$(T,B,Y)$ triplet, where $y_{43}^{d}=0$, one instead deduces
$\tan\theta^{R}=\frac{m_{b}}{m_{B}}\tan\theta^{L}$):
$\displaystyle\tan\theta^{L}=\frac{m_{b}}{m_{B}}\tan\theta^{R}$ (A.10)
Since $m_{B}\gg m_{b}$, Eq. (A.10) implies $\sin\theta^{R}\gg\sin\theta^{L}$
in the ($B,Y$) doublet. Therefore, our study primarily concentrates on the
right-handed coupling part of the interactions involving the $Y$, $W$ and $b$
states.
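The relation in Eq. (A.10) can be checked numerically by diagonalizing the mass matrix of Eq. (A.5) with $y_{34}^{d}=0$: the left (right) mixing angle follows from the eigenvectors of $MM^{T}$ ($M^{T}M$). A sketch with illustrative, not fitted, parameter values:

```python
import math

def mixing_angles(y33, y43, M0, v=246.0):
    """Mass eigenvalues and mixing angles of the 2x2 (b, B) mass matrix
    of Eq. (A.5) with y34 = 0 (doublet case)."""
    a = y33 * v / math.sqrt(2.0)  # (1,1) entry
    c = y43 * v / math.sqrt(2.0)  # (2,1) entry
    d = M0                        # (2,2) entry
    # singular values from the invariants tr(M M^T) and det M
    s2 = a * a + c * c + d * d
    p = a * d
    m_b2 = (s2 - math.sqrt(s2 * s2 - 4.0 * p * p)) / 2.0
    m_b, m_B = math.sqrt(m_b2), p / math.sqrt(m_b2)
    # rotation angles diagonalising M M^T (left) and M^T M (right)
    th_l = 0.5 * math.atan2(2.0 * a * c, c * c + d * d - a * a)
    th_r = 0.5 * math.atan2(2.0 * c * d, d * d - a * a - c * c)
    return m_b, m_B, th_l, th_r
```

For any such input, the extracted angles satisfy $\tan\theta^{L}=(m_{b}/m_{B})\tan\theta^{R}$, confirming Eq. (A.10).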
## References
* (1) ATLAS Collaboration, G. Aad et al., “Observation of a new particle in the search for the Standard Model Higgs boson with the ATLAS detector at the LHC,” Phys. Lett. B 716 (2012) 1–29, arXiv:1207.7214 [hep-ex].
* (2) CMS Collaboration, S. Chatrchyan et al., “Observation of a New Boson at a Mass of 125 GeV with the CMS Experiment at the LHC,” Phys. Lett. B 716 (2012) 30–61, arXiv:1207.7235 [hep-ex].
* (3) N. Arkani-Hamed, A. G. Cohen, E. Katz, and A. E. Nelson, “The Littlest Higgs,” JHEP 07 (2002) 034, arXiv:hep-ph/0206021.
* (4) T. Han, H. E. Logan, B. McElrath, and L.-T. Wang, “Phenomenology of the little Higgs model,” Phys. Rev. D 67 (2003) 095004, arXiv:hep-ph/0301040.
* (5) S. Chang and H.-J. He, “Unitarity of little Higgs models signals new physics of UV completion,” Phys. Lett. B 586 (2004) 95–105, arXiv:hep-ph/0311177.
* (6) Q.-H. Cao and C.-R. Chen, “Signatures of extra gauge bosons in the littlest Higgs model with T-parity at future colliders,” Phys. Rev. D 76 (2007) 075007, arXiv:0707.0877 [hep-ph].
* (7) K. Agashe, G. Perez, and A. Soni, “Collider Signals of Top Quark Flavor Violation from a Warped Extra Dimension,” Phys. Rev. D 75 (2007) 015002, arXiv:hep-ph/0606293.
* (8) K. Agashe, R. Contino, and A. Pomarol, “The Minimal composite Higgs model,” Nucl. Phys. B 719 (2005) 165–187, arXiv:hep-ph/0412089.
* (9) B. Bellazzini, C. Csáki, and J. Serra, “Composite Higgses,” Eur. Phys. J. C 74 no. 5, (2014) 2766, arXiv:1401.2457 [hep-ph].
* (10) M. Low, A. Tesi, and L.-T. Wang, “Twin Higgs mechanism and a composite Higgs boson,” Phys. Rev. D 91 (2015) 095012, arXiv:1501.07890 [hep-ph].
* (11) L. Bian, D. Liu, and J. Shu, “Low scale composite Higgs model and 1.8 $\sim$2 TeV diboson excess,” Int. J. Mod. Phys. A 33 no. 11, (2018) 1841007, arXiv:1507.06018 [hep-ph].
* (12) H.-J. He, C. T. Hill, and T. M. P. Tait, “Top Quark Seesaw, Vacuum Structure and Electroweak Precision Constraints,” Phys. Rev. D 65 (2002) 055006, arXiv:hep-ph/0108041.
* (13) H.-J. He, T. M. P. Tait, and C. P. Yuan, “New top flavor models with seesaw mechanism,” Phys. Rev. D 62 (2000) 011702, arXiv:hep-ph/9911266.
* (15) X.-F. Wang, C. Du, and H.-J. He, “LHC Higgs Signatures from Topflavor Seesaw Mechanism,” Phys. Lett. B 723 (2013) 314–323, arXiv:1304.2257 [hep-ph].
* (17) J. A. Aguilar-Saavedra, R. Benbrik, S. Heinemeyer, and M. Pérez-Victoria, “Handbook of vectorlike quarks: Mixing and single production,” Phys. Rev. D 88 no. 9, (2013) 094010, arXiv:1306.0572 [hep-ph].
* (18) A. Banerjee, V. Ellajosyula, and L. Panizzi, “Heavy vector-like quarks decaying to exotic scalars: a case study with triplets,” arXiv:2311.17877 [hep-ph].
* (19) R. Benbrik, M. Berrouj, M. Boukidi, A. Habjia, E. Ghourmin, and L. Rahili, “Search for single production of vector-like top partner T → H+b and H$\pm$→tb¯ at the LHC Run-III,” Phys. Lett. B 843 (2023) 138024.
* (20) Q.-G. Zeng, Y.-S. Pan, and J. Zhang, “Search for the signal of vector-like bottom quark at LHeC in the final state with 3 $b$-jets,” Nucl. Phys. B 995 (2023) 116347.
* (21) A. C. Canbay and O. Cakir, “Investigating the single production of vectorlike quarks decaying into a top quark and W boson through hadronic channels at the HL-LHC,” Phys. Rev. D 108 no. 9, (2023) 095006, arXiv:2307.12883 [hep-ph].
* (22) A. Belyaev, R. S. Chivukula, B. Fuks, E. H. Simmons, and X. Wang, “Vectorlike top quark production via an electroweak dipole moment at a muon collider,” Phys. Rev. D 108 no. 3, (2023) 035016, arXiv:2306.11097 [hep-ph].
* (23) L. Shang and K. Sun, “Single vector-like quark X production in the tW channel at high energy pp colliders,” Nucl. Phys. B 990 (2023) 116185.
* (24) B. Yang, S. Wang, X. Sima, and L. Shang, “Singlet vector-like $T$ quark production in association with $W$b at the CLIC,” Commun. Theor. Phys. 75 no. 3, (2023) 035202.
* (25) A. Bhardwaj, T. Mandal, S. Mitra, and C. Neeraj, “Roadmap to explore vectorlike quarks decaying to a new scalar or pseudoscalar,” Phys. Rev. D 106 no. 9, (2022) 095014, arXiv:2203.13753 [hep-ph].
* (26) A. Bhardwaj, K. Bhide, T. Mandal, S. Mitra, and C. Neeraj, “Discovery prospects of a vectorlike top partner decaying to a singlet boson,” Phys. Rev. D 106 no. 7, (2022) 075024, arXiv:2204.09005 [hep-ph].
* (27) J. Bardhan, T. Mandal, S. Mitra, and C. Neeraj, “Machine learning-enhanced search for a vectorlike singlet B quark decaying to a singlet scalar or pseudoscalar,” Phys. Rev. D 107 no. 11, (2023) 115001, arXiv:2212.02442 [hep-ph].
* (28) L. Shang, C. Chen, S. Wang, and B. Yang, “Single production of vector-like B quark decaying into bZ at future ep colliders,” Nucl. Phys. B 984 (2022) 115977.
* (29) F. F. Freitas, J. a. Gonçalves, A. P. Morais, and R. Pasechnik, “Phenomenology at the large hadron collider with deep learning: the case of vector-like quarks decaying to light jets,” Eur. Phys. J. C 82 no. 9, (2022) 826, arXiv:2204.12542 [hep-ph].
* (30) R. Benbrik, M. Boukidi, and S. Moretti, “Probing Light Charged Higgs Bosons in the 2-Higgs Doublet Model Type-II with Vector-Like Quarks,” arXiv:2211.07259 [hep-ph].
* (31) G. Corcella, A. Costantini, M. Ghezzi, L. Panizzi, G. M. Pruna, and J. Šalko, “Vector-like quarks decaying into singly and doubly charged bosons at LHC,” JHEP 10 (2021) 108, arXiv:2107.07426 [hep-ph].
* (32) G. Corcella, A. Costantini, M. Ghezzi, L. Panizzi, G. M. Pruna, and J. Šalko, “Vector-like quarks decaying into singly and doubly charged bosons at LHC,” JHEP 10 (2021) 108, arXiv:2107.07426 [hep-ph].
* (33) A. Belyaev, R. S. Chivukula, B. Fuks, E. H. Simmons, and X. Wang, “Vectorlike top quark production via a chromomagnetic moment at the LHC,” Phys. Rev. D 104 no. 9, (2021) 095024, arXiv:2107.12402 [hep-ph].
* (34) A. Deandrea, T. Flacke, B. Fuks, L. Panizzi, and H.-S. Shao, “Single production of vector-like quarks: the effects of large width, interference and NLO corrections,” JHEP 08 (2021) 107, arXiv:2105.08745 [hep-ph]. [Erratum: JHEP 11, 028 (2022)].
* (35) S. Dasgupta, R. Pramanick, and T. S. Ray, “Broad toplike vector quarks at LHC and HL-LHC,” Phys. Rev. D 105 no. 3, (2022) 035032, arXiv:2112.03742 [hep-ph].
* (36) S. J. D. King, S. F. King, S. Moretti, and S. J. Rowley, “Discovering the origin of Yukawa couplings at the LHC with a singlet Higgs and vector-like quarks,” JHEP 21 (2020) 144, arXiv:2102.06091 [hep-ph].
* (37) Y.-B. Liu and S. Moretti, “Search for single production of a top quark partner via the $T\to th$ and $h\to WW^{\ast}$ channels at the LHC,” Phys. Rev. D 100 no. 1, (2019) 015025, arXiv:1902.03022 [hep-ph].
* (38) R. Benbrik et al., “Signatures of vector-like top partners decaying into new neutral scalar or pseudoscalar bosons,” JHEP 05 (2020) 028, arXiv:1907.05929 [hep-ph].
* (39) K.-P. Xie, G. Cacciapaglia, and T. Flacke, “Exotic decays of top partners with charge 5/3: bounds and opportunities,” JHEP 10 (2019) 134, arXiv:1907.05894 [hep-ph].
* (40) N. Bizot, G. Cacciapaglia, and T. Flacke, “Common exotic decays of top partners,” JHEP 06 (2018) 065, arXiv:1803.00021 [hep-ph].
* (41) G. Cacciapaglia, A. Carvalho, A. Deandrea, T. Flacke, B. Fuks, D. Majumder, L. Panizzi, and H.-S. Shao, “Next-to-leading-order predictions for single vector-like quark production at the LHC,” Phys. Lett. B 793 (2019) 206–211, arXiv:1811.05055 [hep-ph].
* (42) G. Cacciapaglia, A. Deandrea, N. Gaur, D. Harada, Y. Okada, and L. Panizzi, “The LHC potential of Vector-like quark doublets,” JHEP 11 (2018) 055, arXiv:1806.01024 [hep-ph].
* (43) A. Carvalho, S. Moretti, D. O’Brien, L. Panizzi, and H. Prager, “Single production of vectorlike quarks with large width at the Large Hadron Collider,” Phys. Rev. D 98 no. 1, (2018) 015029, arXiv:1805.06402 [hep-ph].
* (44) CMS Collaboration, A. M. Sirunyan et al., “Search for single production of vector-like quarks decaying to a b quark and a Higgs boson,” JHEP 06 (2018) 031, arXiv:1802.01486 [hep-ex].
* (45) D. Barducci and L. Panizzi, “Vector-like quarks coupling discrimination at the LHC and future hadron colliders,” JHEP 12 (2017) 057, arXiv:1710.02325 [hep-ph].
* (46) CMS Collaboration, A. M. Sirunyan et al., “Search for single production of a vector-like T quark decaying to a Z boson and a top quark in proton-proton collisions at $\sqrt{s}$ = 13 TeV,” Phys. Lett. B 781 (2018) 574–600, arXiv:1708.01062 [hep-ex].
* (47) C.-H. Chen and T. Nomura, “Single production of $X_{\pm 5/3}$ and $Y_{\mp 4/3}$ vectorlike quarks at the LHC,” Phys. Rev. D 94 no. 3, (2016) 035001, arXiv:1603.05837 [hep-ph].
* (48) A. Arhrib, R. Benbrik, S. J. D. King, B. Manaut, S. Moretti, and C. S. Un, “Phenomenology of 2HDM with vectorlike quarks,” Phys. Rev. D 97 (2018) 095015, arXiv:1607.08517 [hep-ph].
* (49) G. Cacciapaglia, A. Deandrea, N. Gaur, D. Harada, Y. Okada, and L. Panizzi, “Interplay of vector-like top partner multiplets in a realistic mixing set-up,” JHEP 09 (2015) 012, arXiv:1502.00370 [hep-ph].
* (50) A. Angelescu, A. Djouadi, and G. Moreau, “Vector-like top/bottom quark partners and Higgs physics at the LHC,” Eur. Phys. J. C 76 no. 2, (2016) 99, arXiv:1510.07527 [hep-ph].
* (51) L. Panizzi, “Vector-like quarks: $t^{\prime}$ and partners,” Nuovo Cim. C 037 no. 02, (2014) 69–79.
* (52) L. Panizzi, “Model-independent Analysis of Scenarios with Vector-like Quarks,” Acta Phys. Polon. Supp. 7 no. 3, (2014) 631.
* (53) G. Cacciapaglia, A. Deandrea, L. Panizzi, S. Perries, and V. Sordini, “Heavy Vector-like quark with charge 5/3 at the LHC,” JHEP 03 (2013) 004, arXiv:1211.4034 [hep-ph].
* (54) Y. Okada and L. Panizzi, “LHC signatures of vector-like quarks,” Adv. High Energy Phys. 2013 (2013) 364936, arXiv:1207.5607 [hep-ph].
* (55) G. Cacciapaglia, A. Deandrea, L. Panizzi, N. Gaur, D. Harada, and Y. Okada, “Heavy Vector-like Top Partners at the LHC and flavour constraints,” JHEP 03 (2012) 070, arXiv:1108.6329 [hep-ph].
* (56) F. del Aguila, L. Ametller, G. L. Kane, and J. Vidal, “Vector Like Fermion and Standard Higgs Production at Hadron Colliders,” Nucl. Phys. B 334 (1990) 1–23.
* (57) F. Gianotti et al., “Physics potential and experimental challenges of the LHC luminosity upgrade,” Eur. Phys. J. C 39 (2005) 293–333, arXiv:hep-ph/0204087.
* (58) “High-Luminosity Large Hadron Collider (HL-LHC): Technical Design Report V. 0.1,”.
* (59) FCC Collaboration, A. Abada et al., “HE-LHC: The High-Energy Large Hadron Collider: Future Circular Collider Conceptual Design Report Volume 4,” Eur. Phys. J. ST 228 no. 5, (2019) 1109–1382.
* (60) FCC Collaboration, A. Abada et al., “FCC-hh: The Hadron Collider: Future Circular Collider Conceptual Design Report Volume 3,” Eur. Phys. J. ST 228 no. 4, (2019) 755–1107.
* (61) ATLAS Collaboration, M. Aaboud et al., “Search for single production of vector-like quarks decaying into $Wb$ in $pp$ collisions at $\sqrt{s}=13$ TeV with the ATLAS detector,” JHEP 05 (2019) 164, arXiv:1812.07343 [hep-ex].
* (62) CMS Collaboration, A. M. Sirunyan et al., “Search for single production of vector-like quarks decaying into a b quark and a W boson in proton-proton collisions at $\sqrt{s}=$ 13 TeV,” Phys. Lett. B 772 (2017) 634–656, arXiv:1701.08328 [hep-ex].
* (63) ATLAS Collaboration, “Search for pair-production of vector-like quarks in lepton+jets final states containing at least one $b$-jet using the Run 2 data from the ATLAS experiment,”.
* (64) J. Cao, L. Meng, L. Shang, S. Wang, and B. Yang, “Interpreting the W-mass anomaly in vectorlike quark models,” Phys. Rev. D 106 no. 5, (2022) 055042, arXiv:2204.09477 [hep-ph].
* (65) CDF Collaboration, T. Aaltonen et al., “High-precision measurement of the $W$ boson mass with the CDF II detector,” Science 376 no. 6589, (2022) 170–176.
* (66) M. Buchkremer, G. Cacciapaglia, A. Deandrea, and L. Panizzi, “Model Independent Framework for Searches of Top Partners,” Nucl. Phys. B 876 (2013) 376–417, arXiv:1305.4172 [hep-ph].
* (67) V. Cetinkaya, A. Ozansoy, V. Ari, O. M. Ozsimsek, and O. Cakir, “Single production of vectorlike Y quarks at the HL-LHC,” Nucl. Phys. B 973 (2021) 115580, arXiv:2012.15308 [hep-ph].
* (68) D. Berdine, N. Kauer, and D. Rainwater, “Breakdown of the Narrow Width Approximation for New Physics,” Phys. Rev. Lett. 99 (2007) 111601, arXiv:hep-ph/0703058.
* (69) S. Moretti, D. O’Brien, L. Panizzi, and H. Prager, “Production of extra quarks at the Large Hadron Collider beyond the Narrow Width Approximation,” Phys. Rev. D 96 no. 7, (2017) 075035, arXiv:1603.09237 [hep-ph].
* (70) M. Czakon and A. Mitov, “NNLO corrections to top-pair production at hadron colliders: the all-fermionic scattering channels,” JHEP 12 (2012) 054, arXiv:1207.0236 [hep-ph].
* (71) J. M. Campbell, R. K. Ellis, F. Maltoni, and S. Willenbrock, “Production of a $Z$ boson and two jets with one heavy-quark tag,” Phys. Rev. D 73 (2006) 054007, arXiv:hep-ph/0510362. [Erratum: Phys.Rev.D 77, 019903 (2008)].
* (72) J. M. Campbell, R. K. Ellis, F. Maltoni, and S. Willenbrock, “Production of a $W$ boson and two jets with one $b^{-}$ quark tag,” Phys. Rev. D 75 (2007) 054015, arXiv:hep-ph/0611348.
* (73) N. Kidonakis, “Single-top production in the Standard Model and beyond,” in 13th Conference on the Intersections of Particle and Nuclear Physics. 8, 2018. arXiv:1808.02934 [hep-ph].
* (74) E. Boos and L. Dudko, “The Single Top Quark Physics,” Int. J. Mod. Phys. A 27 (2012) 1230026, arXiv:1211.7146 [hep-ph].
* (75) B. Yang, X. Sima, S. Wang, and L. Shang, “Single vectorlike top quark production in the tZ channel at high energy pp colliders,” Phys. Rev. D 105 no. 9, (2022) 096010.
* (76) W. F. L. Hollik, “Radiative Corrections in the Standard Model and their Role for Precision Tests of the Electroweak Theory,” Fortsch. Phys. 38 (1990) 165–260.
* (77) M. E. Peskin and T. Takeuchi, “A New constraint on a strongly interacting Higgs sector,” Phys. Rev. Lett. 65 (1990) 964–967.
* (78) B. Grinstein and M. B. Wise, “Operator analysis for precision electroweak physics,” Phys. Lett. B 265 (1991) 326–334.
* (79) M. E. Peskin and T. Takeuchi, “Estimation of oblique electroweak corrections,” Phys. Rev. D 46 (1992) 381–409.
* (80) L. Lavoura and J. P. Silva, “The Oblique corrections from vector - like singlet and doublet quarks,” Phys. Rev. D 47 (1993) 2046–2057.
* (81) C. P. Burgess, S. Godfrey, H. Konig, D. London, and I. Maksymyk, “A Global fit to extended oblique parameters,” Phys. Lett. B 326 (1994) 276–281, arXiv:hep-ph/9307337.
* (82) I. Maksymyk, C. P. Burgess, and D. London, “Beyond S, T and U,” Phys. Rev. D 50 (1994) 529–535, arXiv:hep-ph/9306267.
* (83) G. Cynolter and E. Lendvai, “Electroweak Precision Constraints on Vector-like Fermions,” Eur. Phys. J. C 58 (2008) 463–469, arXiv:0804.4080 [hep-ph].
* (84) C.-Y. Chen, S. Dawson, and E. Furlan, “Vectorlike fermions and Higgs effective field theory revisited,” Phys. Rev. D 96 no. 1, (2017) 015006, arXiv:1703.06134 [hep-ph].
* (85) S.-P. He, “Leptoquark and vector-like quark extended model for simultaneousexplanation of W boson mass and muon g–2 anomalies*,” Chin. Phys. C 47 no. 4, (2023) 043102, arXiv:2205.02088 [hep-ph].
* (86) A. Arsenault, K. Y. Cingiloglu, and M. Frank, “Vacuum stability in the Standard Model with vectorlike fermions,” Phys. Rev. D 107 no. 3, (2023) 036018, arXiv:2207.10332 [hep-ph].
* (87) Particle Data Group Collaboration, R. L. Workman et al., “Review of Particle Physics,” PTEP 2022 (2022) 083C01.
* (88) A. Alloul, N. D. Christensen, C. Degrande, C. Duhr, and B. Fuks, “FeynRules 2.0 - A complete toolbox for tree-level phenomenology,” Comput. Phys. Commun. 185 (2014) 2250–2300, arXiv:1310.1921 [hep-ph].
* (89) J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, H. S. Shao, T. Stelzer, P. Torrielli, and M. Zaro, “The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations,” JHEP 07 (2014) 079, arXiv:1405.0301 [hep-ph].
* (90) NNPDF Collaboration, R. D. Ball et al., “Parton distributions from high-precision collider data,” Eur. Phys. J. C 77 no. 10, (2017) 663, arXiv:1706.00428 [hep-ph].
* (91) J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, H. S. Shao, T. Stelzer, P. Torrielli, and M. Zaro, “What are the default dynamic factorization and renormalization scales in madevent?” 2011. https://cp3.irmp.ucl.ac.be/projects/madgraph/wiki/FAQ-General-13. Accessed on 2023-12-25.
* (92) DELPHES 3 Collaboration, J. de Favereau, C. Delaere, P. Demin, A. Giammanco, V. Lemaître, A. Mertens, and M. Selvaggi, “DELPHES 3, A modular framework for fast simulation of a generic collider experiment,” JHEP 02 (2014) 057, arXiv:1307.6346 [hep-ex].
* (93) CERN Collaboration, M. Selvaggi, “Delphes cards for LHC Run-III, HL-LHC and HE-LHC.” December 6th, 2017. https://github.com/delphes/delphes/blob/master/cards/delphes_card_HLLHC.tcl. Accessed on 2023-12-25.
* (94) CERN Collaboration, M. Selvaggi, “Delphes card for FCC-hh.” October 14th, 2020. https://github.com/delphes/delphes/blob/master/cards/FCC/FCChh.tcl. Accessed on 2023-12-25.
* (95) M. Cacciari, G. P. Salam, and G. Soyez, “FastJet User Manual,” Eur. Phys. J. C 72 (2012) 1896, arXiv:1111.6097 [hep-ph].
* (96) M. Cacciari and G. P. Salam, “Dispelling the $N^{3}$ myth for the $k_{t}$ jet-finder,” Phys. Lett. B 641 (2006) 57–61, arXiv:hep-ph/0512210.
* (97) E. Conte, B. Fuks, and G. Serret, “MadAnalysis 5, A User-Friendly Framework for Collider Phenomenology,” Comput. Phys. Commun. 184 (2013) 222–256, arXiv:1206.1599 [hep-ph].
* (98) L. Shang and Y. Zhang, “EasyScan_HEP: A tool for connecting programs to scan the parameter space of physics models,” Comput. Phys. Commun. 296 (2024) 109027, arXiv:2304.03636 [hep-ph].
* (99) G. Cowan, K. Cranmer, E. Gross, and O. Vitells, “Asymptotic formulae for likelihood-based tests of new physics,” Eur. Phys. J. C 71 (2011) 1554, arXiv:1007.1727 [physics.data-an]. [Erratum: Eur.Phys.J.C 73, 2501 (2013)].
* (100) N. Kumar and S. P. Martin, “Vectorlike Leptons at the Large Hadron Collider,” Phys. Rev. D 92 no. 11, (2015) 115018, arXiv:1510.03456 [hep-ph].
|
$\displaystyle+\bigg{|}\mathbb{E^{\prime}}\int_{0}^{T}\left\langle
m^{\prime}(s)\times\Delta
m^{\prime}_{n}(s),\phi(s)\right\rangle_{L^{2}}-\left\langle
m^{\prime}(s)\times\Delta
m^{\prime}(s),\phi(s)\right\rangle_{L^{2}}\,ds\bigg{|}$ $\displaystyle=$
$\displaystyle\bigg{|}\mathbb{E^{\prime}}\int_{0}^{T}\left\langle\nabla
m^{\prime}_{n}(s),\nabla\phi(s)\times
m^{\prime}_{n}(s)\right\rangle_{L^{2}}-\left\langle\nabla
m^{\prime}_{n}(s),\nabla\phi(s)\times
m^{\prime}(s)\right\rangle_{L^{2}}\,ds\bigg{|}$
$\displaystyle+\bigg{|}\mathbb{E^{\prime}}\int_{0}^{T}\left\langle\nabla
m^{\prime}_{n}(s),\nabla\phi(s)\times
m^{\prime}(s)\right\rangle_{L^{2}}-\left\langle\nabla
m^{\prime},\nabla\phi(s)\times m^{\prime}\right\rangle_{L^{2}}\,ds\bigg{|}$
$\displaystyle=$
$\displaystyle\bigg{|}\mathbb{E^{\prime}}\int_{0}^{T}\left\langle\nabla
m^{\prime}_{n}(s),\nabla\phi(s)\times(m^{\prime}_{n}(s)-m^{\prime}(s))\right\rangle_{L^{2}}\,ds\bigg{|}$
$\displaystyle+\bigg{|}\mathbb{E^{\prime}}\int_{0}^{T}\left\langle\nabla(m^{\prime}_{n}(s)-m^{\prime}(s)),\nabla\phi(s)\times
m^{\prime}(s)\right\rangle_{L^{2}}\,ds\bigg{|}$ $\displaystyle\leq$
$\displaystyle\mathbb{E^{\prime}}\int_{0}^{T}|\nabla
m^{\prime}_{n}(s)|_{L^{2}}|\nabla\phi(s)|_{L^{2}}|m^{\prime}_{n}(s)-m^{\prime}(s)|_{L^{\infty}}ds$
$\displaystyle+\bigg{|}\mathbb{E^{\prime}}\int_{0}^{T}\left\langle\nabla(m^{\prime}_{n}(s)-m^{\prime}(s)),\nabla\phi(s)\times
m^{\prime}(s)\right\rangle_{L^{2}}\,ds\bigg{|}$ $\displaystyle\leq$
$\displaystyle
C\mathbb{E^{\prime}}\sup_{t\in[0,T]}|m^{\prime}_{n}(t)|_{H^{1}}\int_{0}^{T}|m^{\prime}_{n}(s)-m^{\prime}(s)|_{H^{1}}^{\frac{1}{2}}|m^{\prime}_{n}(s)-m^{\prime}(s)|_{L^{2}}^{\frac{1}{2}}\left|\phi(s)\right|_{H^{1}}\,ds$
$\displaystyle+\bigg{|}\mathbb{E^{\prime}}\int_{0}^{T}\left\langle\nabla(m^{\prime}_{n}(s)-m^{\prime}(s)),\nabla\phi\times
m^{\prime}(s)\right\rangle_{L^{2}}\,ds\bigg{|}$ $\displaystyle\leq$
$\displaystyle
C\mathbb{E^{\prime}}\sup_{t\in[0,T]}|m^{\prime}_{n}(t)|_{H^{1}}\int_{0}^{T}(|m^{\prime}_{n}(s)|_{H^{1}}+|m^{\prime}(s)|_{H^{1}})^{\frac{1}{2}}|m^{\prime}_{n}(s)-m^{\prime}(s)|_{L^{2}}^{\frac{1}{2}}\left|\phi(s)\right|_{H^{1}}\,ds$
$\displaystyle+\bigg{|}\mathbb{E^{\prime}}\int_{0}^{T}\left\langle\nabla(m^{\prime}_{n}(s)-m^{\prime}(s)),\nabla\phi\times
m^{\prime}(s)\right\rangle_{L^{2}}\,ds\bigg{|}$ $\displaystyle\leq$
$\displaystyle
C\mathbb{E^{\prime}}\sup_{t\in[0,T]}|m^{\prime}_{n}(t)|_{H^{1}}\sup_{t\in[0,T]}\left(|m^{\prime}_{n}(t)|_{H^{1}}+|m^{\prime}(t)|_{H^{1}}\right)^{\frac{1}{2}}\left(\int_{0}^{T}\left|\phi(s)\right|_{H^{1}}^{2}\,ds\right)^{\frac{1}{2}}$
$\displaystyle\quad\centerdot\left(\int_{0}^{T}|m^{\prime}_{n}(s)-m^{\prime}(s)|_{L^{2}}\,ds\right)^{\frac{1}{2}}$
$\displaystyle+\bigg{|}\mathbb{E^{\prime}}\int_{0}^{T}\left\langle\nabla(m^{\prime}_{n}(s)-m^{\prime}(s)),\nabla\phi\times
m^{\prime}(s)\right\rangle_{L^{2}}\,ds\bigg{|}.$
The bounds (5.4) and (5.5) along with the convergence of $m^{\prime}_{n}$
imply that the first term in the above inequality goes to $0$ as $n$ goes to
$\infty$.
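The interpolation step in the chain of inequalities above, bounding $|m^{\prime}_{n}(s)-m^{\prime}(s)|_{L^{\infty}}$ by $|m^{\prime}_{n}(s)-m^{\prime}(s)|_{H^{1}}^{\frac{1}{2}}|m^{\prime}_{n}(s)-m^{\prime}(s)|_{L^{2}}^{\frac{1}{2}}$, is an instance of the Agmon-type inequality
$|v|_{L^{\infty}}\leq C\,|v|_{L^{2}}^{\frac{1}{2}}\,|v|_{H^{1}}^{\frac{1}{2}},\qquad v\in H^{1},$
which holds, for instance, when the spatial domain is a bounded interval; this is consistent with the embedding $H^{1}\hookrightarrow L^{\infty}$ used throughout, though the precise domain assumptions are those fixed earlier in the paper.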
Due to the continuous embedding $H^{1}\hookrightarrow L^{\infty}$, there
exists a constant $C>0$ such that
$\displaystyle\mathbb{E}^{\prime}\int_{0}^{T}\left|\nabla\phi(s)\times
m^{\prime}(s)\right|_{L^{2}}^{2}\,ds$
$\displaystyle\leq\mathbb{E}^{\prime}\int_{0}^{T}\left|\nabla\phi(s)\right|_{L^{2}}^{2}\left|m^{\prime}(s)\right|_{L^{\infty}}^{2}\,ds$
$\displaystyle\leq
C\mathbb{E}^{\prime}\left[\left(\sup_{t\in[0,T]}|m^{\prime}(t)|_{H^{1}}^{2}\right)\int_{0}^{T}|\nabla\phi(s)|_{L^{2}}^{2}\,ds\right]$
$\displaystyle\leq$ $\displaystyle
C\left[\mathbb{E}^{\prime}\left(\sup_{t\in[0,T]}|m^{\prime}(t)|_{H^{1}}^{4}\right)\right]^{\frac{1}{2}}\left[\mathbb{E}^{\prime}\left(\int_{0}^{T}|\nabla\phi(s)|_{L^{2}}^{2}\,ds\right)^{2}\right]^{\frac{1}{2}}<\infty.$
The above inequality shows that $\nabla\phi\times m^{\prime}$ belongs to
$L^{2}(\Omega^{\prime}\times[0,T];L^{2})$; together with the weak convergence
of $\nabla m^{\prime}_{n}$ to $\nabla m^{\prime}$, this implies that the second
term also goes to $0$ as $n$ goes to $\infty$, thus concluding the proof.
∎
###### Lemma 5.12.
Let $\phi\in L^{4}(\Omega^{\prime};L^{4}(0,T;H^{1}))$. Then
$\displaystyle\lim_{n\rightarrow\infty}$
$\displaystyle\mathbb{E^{\prime}}\int_{0}^{T}\left\langle
m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times\Delta
m^{\prime}_{n}(s)),\phi\right\rangle_{L^{2}}\,ds$
$\displaystyle=\mathbb{E^{\prime}}\int_{0}^{T}\left\langle
m^{\prime}(s)\times(m^{\prime}(s)\times\Delta
m^{\prime}(s)),\phi\right\rangle_{L^{2}}\,ds.$
###### Proof of Lemma 5.12.
By the triangle inequality, we have
$\displaystyle\bigg{|}\mathbb{E^{\prime}}\int_{0}^{T}\left[\left\langle
m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times\Delta
m^{\prime}_{n}(s)),\phi\right\rangle_{L^{2}}-\left\langle
m^{\prime}(s)\times(m^{\prime}(s)\times\Delta
m^{\prime}(s)),\phi\right\rangle_{L^{2}}\right]\,ds\bigg{|}$ $\displaystyle\leq$
$\displaystyle\bigg{|}\mathbb{E^{\prime}}\int_{0}^{T}\left\langle(m^{\prime}_{n}(s)-m^{\prime}(s))\times(m^{\prime}_{n}(s)\times\Delta
m^{\prime}_{n}(s)),\phi\right\rangle_{L^{2}}\,ds\bigg{|}$
$\displaystyle+\bigg{|}\mathbb{E^{\prime}}\int_{0}^{T}\left\langle
m^{\prime}(s)\times(m^{\prime}_{n}(s)\times\Delta
m^{\prime}_{n}(s)-m^{\prime}(s)\times\Delta
m^{\prime}(s)),\phi\right\rangle_{L^{2}}\,ds\bigg{|}.$ (5.39)
The first term of (5.39) goes to $0$ as $n$ goes to infinity, as follows.
$\displaystyle\bigg{|}\mathbb{E^{\prime}}\int_{0}^{T}\left\langle(m^{\prime}_{n}(s)-m^{\prime}(s))\times(m^{\prime}_{n}(s)\times\Delta
m^{\prime}_{n}(s)),\phi(s)\right\rangle_{L^{2}}\,ds\bigg{|}$
$\displaystyle\leq\mathbb{E^{\prime}}\int_{0}^{T}|\left\langle(m^{\prime}_{n}(s)-m^{\prime}(s))\times(m^{\prime}_{n}(s)\times\Delta
m^{\prime}_{n}(s)),\phi(s)\right\rangle_{L^{2}}|\,ds$
$\displaystyle\leq\left(\mathbb{E^{\prime}}\int_{0}^{T}\left|m^{\prime}_{n}(s)-m^{\prime}(s)\right|_{L^{4}}^{4}\,ds\right)^{\frac{1}{4}}\left(\mathbb{E^{\prime}}\int_{0}^{T}\left|\phi(s)\right|_{L^{4}}^{4}\,ds\right)^{\frac{1}{4}}\left(\mathbb{E^{\prime}}\int_{0}^{T}\left|m^{\prime}_{n}(s)\times\Delta
m^{\prime}_{n}(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}$
$\displaystyle\leq
C\left(\mathbb{E}^{\prime}\int_{0}^{T}|m^{\prime}_{n}(s)-m^{\prime}(s)|_{L^{4}}^{4}\,ds\right)^{\frac{1}{4}}.$
(5.40)
By the convergence in (5.24), the right hand side of the inequality (5.40)
goes to $0$ as $n$ goes to infinity.
The above bound uses the inequality (5.36), followed by the generalized Hölder
inequality. More precisely,
$|v_{1}v_{2}v_{3}|_{L^{1}}\leq|v_{1}|_{L^{4}}|v_{2}|_{L^{4}}|v_{3}|_{L^{2}}\
\text{for}\ v_{1},v_{2}\in L^{4}\ \text{and}\ v_{3}\in L^{2}.$
For the second term, we have the following.
$\displaystyle\bigg{|}\mathbb{E^{\prime}}\int_{0}^{T}\left\langle
m^{\prime}(s)\times(m^{\prime}_{n}(s)\times\Delta
m^{\prime}_{n}(s)-m^{\prime}(s)\times\Delta
m^{\prime}(s)),\phi(s)\right\rangle_{L^{2}}\,ds\bigg{|}$ $\displaystyle=$
$\displaystyle\bigg{|}\mathbb{E^{\prime}}\int_{0}^{T}\left\langle(m^{\prime}_{n}(s)\times\Delta
m^{\prime}_{n}(s)-m^{\prime}(s)\times\Delta
m^{\prime}(s)),m^{\prime}(s)\times\phi(s)\right\rangle_{L^{2}}\,ds\bigg{|}.$
(5.41)
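The equality (5.41) uses the scalar triple product identity: for vector fields $a,b,c$ with the relevant products in $L^{2}$,
$\left\langle a\times b,c\right\rangle_{L^{2}}=-\left\langle b,a\times c\right\rangle_{L^{2}},$
and the sign is immaterial here, since both sides of (5.41) appear inside an absolute value.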
For $v_{1},v_{2}\in H^{1}$, using the continuous embedding
$H^{1}\hookrightarrow L^{\infty}$, we can show that there exists a constant
$C>0$ such that
$\left|v_{1}v_{2}\right|_{H^{1}}\leq
C\left|v_{1}\right|_{H^{1}}\left|v_{2}\right|_{H^{1}}.$ (5.42)
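A brief justification of (5.42): by the Leibniz rule and the embedding $H^{1}\hookrightarrow L^{\infty}$ (with $C$ depending only on the embedding constant),
$\left|v_{1}v_{2}\right|_{L^{2}}\leq\left|v_{1}\right|_{L^{\infty}}\left|v_{2}\right|_{L^{2}}\leq C\left|v_{1}\right|_{H^{1}}\left|v_{2}\right|_{H^{1}},$
$\left|\nabla(v_{1}v_{2})\right|_{L^{2}}\leq\left|\nabla v_{1}\right|_{L^{2}}\left|v_{2}\right|_{L^{\infty}}+\left|v_{1}\right|_{L^{\infty}}\left|\nabla v_{2}\right|_{L^{2}}\leq C\left|v_{1}\right|_{H^{1}}\left|v_{2}\right|_{H^{1}},$
and adding the two bounds gives (5.42).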
Therefore,
$\displaystyle\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}(s)\times\phi(s)\right|_{H^{1}}^{2}\,ds$
$\displaystyle\leq
C\,\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}(s)\right|_{H^{1}}^{2}\left|\phi(s)\right|_{H^{1}}^{2}\,ds$
$\displaystyle\leq
C\,\mathbb{E}^{\prime}\left[\sup_{s\in[0,T]}\left|m^{\prime}(s)\right|_{H^{1}}^{2}\int_{0}^{T}\left|\phi(s)\right|_{H^{1}}^{2}\,ds\right]$
$\displaystyle\leq\left(\mathbb{E}^{\prime}\sup_{s\in[0,T]}\left|m^{\prime}(s)\right|_{H^{1}}^{4}\right)^{\frac{1}{2}}\left(\mathbb{E}^{\prime}\int_{0}^{T}\left|\phi(s)\right|_{H^{1}}^{4}\,ds\right)^{\frac{1}{2}}$
$\displaystyle<\infty.$
The finiteness of the right hand side is due to the bound (5.22) and the
assumption on $\phi$. Hence, taking $\psi=m^{\prime}\times\phi$ as the test
function, the right hand side of (5.41) goes to $0$ by the weak convergence
(5.10) of $m^{\prime}_{n}\times\Delta m^{\prime}_{n}$.
This concludes the proof of Lemma 5.12. ∎
###### Lemma 5.13.
For $\phi\in L^{4}(\Omega^{\prime};H^{1})$, the following convergences hold.
$\lim_{n\rightarrow\infty}\mathbb{E^{\prime}}\int_{0}^{T}\left\langle
m^{\prime}_{n}(s)\times
u^{\prime}_{n}(s),\phi\right\rangle_{L^{2}}\,ds=\mathbb{E^{\prime}}\int_{0}^{T}\left\langle
m^{\prime}(s)\times u^{\prime}(s),\phi\right\rangle_{L^{2}}\,ds$
$\displaystyle\lim_{n\rightarrow\infty}\mathbb{E^{\prime}}\int_{0}^{T}\psi(|m^{\prime}_{n}(s)|_{L^{\infty}})\left\langle
m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times
u^{\prime}_{n}(s)),\phi\right\rangle_{L^{2}}\,ds$
$\displaystyle\quad=\mathbb{E^{\prime}}\int_{0}^{T}\psi(|m^{\prime}(s)|_{L^{\infty}})\left\langle
m^{\prime}(s)\times(m^{\prime}(s)\times
u^{\prime}(s)),\phi\right\rangle_{L^{2}}\,ds.$
###### Proof of Lemma 5.13.
We prove the second convergence. The first one can be shown similarly.
$\displaystyle\mathbb{E^{\prime}}\int_{0}^{T}\psi(|m^{\prime}_{n}(s)|_{L^{\infty}})\left\langle
m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times
u^{\prime}_{n}(s)),\phi\right\rangle_{L^{2}}\,ds$
$\displaystyle\quad-\mathbb{E^{\prime}}\int_{0}^{T}\psi(|m^{\prime}(s)|_{L^{\infty}})\left\langle
m^{\prime}(s)\times(m^{\prime}(s)\times
u^{\prime}(s)),\phi\right\rangle_{L^{2}}\,ds$
$\displaystyle=\mathbb{E^{\prime}}\int_{0}^{T}\bigl{[}\psi(m^{\prime}_{n}(s))-\psi\bigl{(}m^{\prime}(s)\bigr{)}\bigr{]}\left\langle
m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times
u^{\prime}_{n}(s)),\phi\right\rangle_{L^{2}}\,ds$
$\displaystyle\quad+\mathbb{E^{\prime}}\int_{0}^{T}\psi(m^{\prime}(s))\left\langle\left[m^{\prime}_{n}(s)\times\bigl{(}m^{\prime}_{n}(s)\times
u^{\prime}_{n}(s)\bigr{)}-m^{\prime}(s)\times\bigl{(}m^{\prime}(s)\times
u^{\prime}(s)\bigr{)}\right],\phi\right\rangle_{L^{2}}\,ds.$ (5.43)
Combining Lemma 5.10 and the bound (5.9) in Proposition 5.6, the first term on
the right hand side of the equality (5.43) goes to $0$ as $n$ goes to infinity.
For the second term, we have the following.
$\displaystyle\bigg{|}\mathbb{E^{\prime}}\int_{0}^{T}\psi(m^{\prime}(s))\left\langle
m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times
u^{\prime}_{n}(s))-m^{\prime}(s)\times(m^{\prime}(s)\times
u^{\prime}(s)),\phi\right\rangle_{L^{2}}\,ds\bigg{|}$ $\displaystyle\leq$
$\displaystyle\mathbb{E^{\prime}}\int_{0}^{T}|\psi(m^{\prime}(s))\left\langle(m^{\prime}_{n}(s)-m^{\prime}(s))\times(m^{\prime}_{n}(s)\times
u^{\prime}_{n}(s)),\phi\right\rangle_{L^{2}}|\,ds$
$\displaystyle+\mathbb{E^{\prime}}\int_{0}^{T}|\psi(m^{\prime}(s))\left\langle
m^{\prime}(s)\times((m^{\prime}_{n}(s)-m^{\prime}(s))\times
u^{\prime}_{n}(s)),\phi\right\rangle_{L^{2}}|\,ds$
$\displaystyle+\left|\mathbb{E^{\prime}}\int_{0}^{T}\psi(m^{\prime}(s))\left\langle
m^{\prime}(s)\times(m^{\prime}(s)\times(u^{\prime}_{n}(s)-u^{\prime}(s))),\phi\right\rangle_{L^{2}}\,ds\right|.$
(5.44)
Claim: all three terms on the right hand side of the above inequality (5.44)
go to $0$ as $n$ goes to infinity. We use the assumption on $\phi$ along with
the fact that the space $H^{1}$ is continuously embedded into the space
$L^{\infty}$. By (5.24), the sequence $m^{\prime}_{n}$ converges to
$m^{\prime}$ in
$L^{4}\left(\Omega^{\prime};L^{4}\left(0,T;L^{4}\right)\right)$. Hence, for the
first term on the right hand side of (5.44), it is sufficient to show that
$\left(m^{\prime}_{n}\times u^{\prime}_{n}\right)\times\phi\in
L^{\frac{4}{3}}\left(\Omega^{\prime};L^{\frac{4}{3}}\left(0,T;L^{\frac{4}{3}}\right)\right)$.
Note that
$\displaystyle\mathbb{E}^{\prime}\int_{0}^{T}|\left(m^{\prime}_{n}(s)\times
u^{\prime}_{n}(s)\right)\times\phi|_{L^{\frac{4}{3}}}^{\frac{4}{3}}\ ds$
$\displaystyle\leq\mathbb{E}^{\prime}\int_{0}^{T}|m^{\prime}_{n}(s)|_{L^{4}}^{\frac{4}{3}}|u^{\prime}_{n}(s)|_{L^{2}}^{\frac{4}{3}}|\phi|_{L^{\infty}}^{\frac{4}{3}}\
ds$ $\displaystyle\leq
C\mathbb{E}^{\prime}|\phi|_{H^{1}}^{\frac{4}{3}}\int_{0}^{T}|m^{\prime}_{n}(s)|_{H^{1}}^{\frac{4}{3}}|u^{\prime}_{n}(s)|_{L^{2}}^{\frac{4}{3}}\
ds\ (\text{Since}\ H^{1}\hookrightarrow L^{\infty}\hookrightarrow L^{4})$
$\displaystyle\leq
C\left(\mathbb{E}^{\prime}\left|\phi\right|_{H^{1}}^{4}\right)^{\frac{1}{3}}\left(\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}_{n}(s)\right|_{H^{1}}^{4}\,ds\right)^{\frac{1}{3}}\left(\mathbb{E}^{\prime}\left(\int_{0}^{T}\left|u^{\prime}_{n}(s)\right|_{L^{2}}^{2}\,ds\right)^{2}\right)^{\frac{1}{3}}.$
The right hand side of the above inequality is finite by the bounds (5.17) and
(5.25). The second term follows similarly.
The third term goes to zero due to the cut-off function and the weak
convergence (5.30). Hence all three terms on the right hand side of the
inequality (5.44) go to $0$ as $n$ goes to infinity, and the claim holds. ∎
The following lemma establishes the convergence of the terms corresponding to
$G_{n}(m^{\prime}_{n})$.
###### Lemma 5.14.
$\displaystyle\lim_{n\rightarrow\infty}\mathbb{E}^{\prime}\sup_{s\in[0,T]}\left|G_{n}(m^{\prime}_{n}(s))-G(m^{\prime}(s))\right|_{L^{2}}^{2}=0.$
###### Proof of Lemma 5.14.
The proof follows from Lemma 5.9 and Lemma 5.10. ∎
Define the following $L^{2}$-valued random variables
$\{M_{n}(t)\}_{t\in[0,T]}$ and $\{M_{n}^{\prime}(t)\}_{t\in[0,T]}$ on
$(\Omega,\mathbb{F},\mathbb{P})$ and
$(\Omega^{\prime},\mathbb{F}^{\prime},\mathbb{P}^{\prime})$, respectively, by
$\displaystyle M_{n}(t):=$ $\displaystyle
m_{n}(t)-m_{n}(0)-\int_{0}^{t}\bigg{[}F_{n}^{1}(m_{n}(s))-\alpha\,F_{n}^{2}(m_{n}(s))+F_{n}^{3}(m_{n}(s))$
$\displaystyle+\frac{1}{2}\psi\bigl{(}m_{n}(s)\bigr{)}^{2}\left[DG\bigl{(}m_{n}(s)\bigr{)}\right]\left[G_{n}\bigl{(}m_{n}(s)\bigr{)}\right]\bigg{]}\,ds,$
(5.45)
and
$\displaystyle M^{\prime}_{n}(t):=$ $\displaystyle
m^{\prime}_{n}(t)-m^{\prime}_{n}(0)-\int_{0}^{t}\bigg{[}F_{n}^{1}(m^{\prime}_{n}(s))-\alpha\,F_{n}^{2}(m^{\prime}_{n}(s))+F_{n}^{3}(m^{\prime}_{n}(s))$
$\displaystyle+\frac{1}{2}\psi\bigl{(}m^{\prime}_{n}(s)\bigr{)}^{2}\left[DG\bigl{(}m^{\prime}_{n}(s)\bigr{)}\right]\left[G_{n}\bigl{(}m^{\prime}_{n}(s)\bigr{)}\right]\bigg{]}\,ds.$
(5.46)
The aim here is to show that for each $t\in[0,T]$, $M^{\prime}_{n}(t)$
converges in some sense to $M^{\prime}(t)$, where $M^{\prime}(t)$ is defined
as
$\displaystyle
M^{\prime}(t):=m^{\prime}(t)-m^{\prime}_{0}-\int_{0}^{t}\bigg{[}m^{\prime}(s)\times\Delta
m^{\prime}(s)-\alpha\,m^{\prime}(s)\times(m^{\prime}(s)\times\Delta
m^{\prime}(s))+m^{\prime}(s)\times u^{\prime}(s)$
$\displaystyle-\alpha\,\psi(m^{\prime}(s))m^{\prime}(s)\times\bigl{(}m^{\prime}(s)\times
u^{\prime}(s)\bigr{)}+\frac{1}{2}\psi\bigl{(}m^{\prime}(s)\bigr{)}^{2}\left[DG\bigl{(}m^{\prime}(s)\bigr{)}\right]\bigl{[}G\bigl{(}m^{\prime}(s)\bigr{)}\bigr{]}\bigg{]}\,ds.$
(5.47)
The main contents of the remainder of this section will be as follows:
1. (1)
Showing the convergence of $M^{\prime}_{n}(t)$ to $M^{\prime}(t)$ in some
sense (Lemma 5.15).
2. (2)
Showing that the process $W^{\prime}$, obtained as a limit of the Wiener
processes $W_{n}^{\prime}$, is itself a Wiener process (Lemma 5.16).
3. (3)
Showing that the limit $M^{\prime}$ is indeed an Itô integral (with respect
to the process $W^{\prime}$), as required. This will be done in two steps:
first we prove Lemma 5.17, which shows that $M^{\prime}_{n}$ converges to the
required stochastic integral; comparing this with Lemma 5.15 then gives the
required result.
###### Lemma 5.15.
For $\phi\in L^{4}(\Omega^{\prime};H^{1})$ and $t\in[0,T]$,
$\mathbb{E^{\prime}}\left\langle
M_{n}^{\prime}(t),\phi\right\rangle_{L^{2}}\rightarrow\mathbb{E^{\prime}}\left\langle
M^{\prime}(t),\phi\right\rangle_{L^{2}}\ \text{as}\ n\to\infty.$
###### Proof of Lemma 5.15.
We show the convergence of the terms individually. The previously stated
lemmata, viz. Lemma 5.9, Lemma 5.10, Lemma 5.11, Lemma 5.12, Lemma 5.13, and
Lemma 5.14, show the convergence of some of the terms. The terms that remain
are the ones corresponding to the Stratonovich-to-Itô correction term, whose
convergence follows from the convergence described in Lemma 5.9 and Lemma 5.10.
We show the calculations for one term; the rest of the terms follow similarly.
Claim:
$\displaystyle\lim_{n\rightarrow\infty}$
$\displaystyle\mathbb{E^{\prime}}\int_{0}^{T}\bigg{[}\big{|}\psi^{2}(m^{\prime}_{n}(s))P_{n}\bigl{(}P_{n}(m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times
h))\times(m^{\prime}_{n}(s)\times h)\bigr{)}$
$\displaystyle-\psi^{2}\bigl{(}m^{\prime}(s)\bigr{)}m^{\prime}(s)\times(m^{\prime}(s)\times
h)\times(m^{\prime}(s)\times h)\big{|}_{(H^{1})^{\prime}}^{2}\bigg{]}\,ds=0.$
Let $v_{1},v_{2},w_{1},w_{2}\in H_{n}$. Then
$\displaystyle\left|\psi(v_{1})w_{1}-\psi(v_{2})w_{2}\right|_{L^{2}}\leq\left|\left[\psi(v_{1})-\psi(v_{2})\right]w_{1}\right|_{L^{2}}+\left|\psi(v_{2})\left[w_{1}-w_{2}\right]\right|_{L^{2}}.$
The convergence in the claim can be split into two parts: one for the
cut-off function and one for the remaining term. For the convergence of the
cut-off function, we have Lemma 5.10; we therefore continue with the remaining
part. Note that the function $\psi$ need not be written out below, since it
takes values in $[0,1]$ and hence does not affect the inequalities.
The convergence can be split up into the following parts.
$\displaystyle|P_{n}(P_{n}(m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times
h))\times(m^{\prime}_{n}(s)\times h))-m^{\prime}(s)\times(m^{\prime}(s)\times
h)\times(m^{\prime}(s)\times h)|_{(H^{1})^{\prime}}$ $\displaystyle\leq$
$\displaystyle|P_{n}(P_{n}(m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times
h))\times(m^{\prime}_{n}(s)\times
h))-P_{n}(m^{\prime}(s)\times(m^{\prime}(s)\times h)\times(m^{\prime}(s)\times
h))|_{(H^{1})^{\prime}}$
$\displaystyle+|P_{n}(m^{\prime}(s)\times(m^{\prime}(s)\times
h)\times(m^{\prime}(s)\times h))-m^{\prime}(s)\times(m^{\prime}(s)\times
h)\times(m^{\prime}(s)\times h)|_{(H^{1})^{\prime}}$ $\displaystyle\leq$
$\displaystyle|P_{n}(P_{n}(m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times
h))\times(m^{\prime}_{n}(s)\times h)-m^{\prime}(s)\times(m^{\prime}(s)\times
h)\times(m^{\prime}(s)\times h))|_{(H^{1})^{\prime}}$
$\displaystyle+|P_{n}(m^{\prime}(s)\times(m^{\prime}(s)\times
h)\times(m^{\prime}(s)\times h))-m^{\prime}(s)\times(m^{\prime}(s)\times
h)\times(m^{\prime}(s)\times h)|_{(H^{1})^{\prime}}$ $\displaystyle\leq$
$\displaystyle|P_{n}(m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times
h))\times(m^{\prime}_{n}(s)\times h)-m^{\prime}(s)\times(m^{\prime}(s)\times
h)\times(m^{\prime}(s)\times h)|_{(H^{1})^{\prime}}$
$\displaystyle+|P_{n}(m^{\prime}(s)\times(m^{\prime}(s)\times
h)\times(m^{\prime}(s)\times h))-m^{\prime}(s)\times(m^{\prime}(s)\times
h)\times(m^{\prime}(s)\times h)|_{(H^{1})^{\prime}}.$
Thus,
$\displaystyle\mathbb{E^{\prime}}$
$\displaystyle\int_{0}^{T}|P_{n}(m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times
h))\times(m^{\prime}_{n}(s)\times h)-m^{\prime}(s)\times(m^{\prime}(s)\times
h)\times(m^{\prime}(s)\times h)|_{(H^{1})^{\prime}}ds$ $\displaystyle\leq$
$\displaystyle\mathbb{E^{\prime}}\int_{0}^{T}|P_{n}(m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times
h))\times(m^{\prime}_{n}(s)\times h)-(m^{\prime}(s)\times(m^{\prime}(s)\times
h))\times(m^{\prime}_{n}(s)\times h)$
$\displaystyle+(m^{\prime}(s)\times(m^{\prime}(s)\times
h))\times(m^{\prime}_{n}(s)\times h)-m^{\prime}(s)\times(m^{\prime}(s)\times
h)\times(m^{\prime}(s)\times h)|_{(H^{1})^{\prime}}ds$ $\displaystyle\leq$
$\displaystyle\mathbb{E^{\prime}}\int_{0}^{T}|(P_{n}(m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times
h))-(m^{\prime}(s)\times(m^{\prime}(s)\times
h)))\times(m^{\prime}_{n}(s)\times h)|_{(H^{1})^{\prime}}ds$
$\displaystyle+\mathbb{E^{\prime}}\int_{0}^{T}|(m^{\prime}(s)\times(m^{\prime}(s)\times
h))\times(m^{\prime}_{n}(s)\times h)-m^{\prime}(s)\times(m^{\prime}(s)\times
h)\times(m^{\prime}(s)\times h)|_{(H^{1})^{\prime}}\,ds.$
Using the following inequality
$|v_{1}v_{2}v_{3}|_{(H^{1})^{\prime}}\leq C|v_{1}v_{2}v_{3}|_{L^{1}}\leq
C|v_{1}|_{L^{2}}|v_{2}|_{L^{2}}|v_{3}|_{L^{\infty}},$ (5.48)
(for $v_{1},v_{2}\in L^{2}$ and $v_{3}\in L^{\infty}$) we observe that for
$s\in[0,T]$ and $n\in\mathbb{N}$,
$\displaystyle|(P_{n}(m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times
h))-(m^{\prime}(s)\times(m^{\prime}(s)\times
h)))\times(m^{\prime}_{n}(s)\times h)|_{(H^{1})^{\prime}}$
$\displaystyle\leq|P_{n}(m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times
h))-(m^{\prime}(s)\times(m^{\prime}(s)\times
h))|_{L^{2}}|m^{\prime}_{n}(s)|_{L^{2}}|h|_{L^{\infty}}$ $\displaystyle\leq
C(h)\sup_{s\in[0,T]}|P_{n}(m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times
h))-(m^{\prime}(s)\times(m^{\prime}(s)\times
h))|_{L^{2}}\sup_{s\in[0,T]}|m^{\prime}_{n}(s)|_{L^{2}}.$
The right hand side of the above inequality goes to $0$ as $n$ goes to
infinity. This follows from the convergence argument given next, together with
the Lebesgue dominated convergence theorem, whose application is justified by
the uniform bound established in the following steps.
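The duality estimate (5.48) used above can be sketched as follows; we assume here the embedding $H^{1}\hookrightarrow L^{\infty}$, which is used elsewhere in this section.

```latex
% Sketch of (5.48): for v_1, v_2 \in L^2, v_3 \in L^\infty and any test
% function \varphi \in H^1, Hölder's inequality and H^1 \hookrightarrow
% L^\infty give
\left| \langle v_1 v_2 v_3, \varphi \rangle \right|
  \leq |v_1 v_2 v_3|_{L^1}\, |\varphi|_{L^\infty}
  \leq |v_1|_{L^2}\, |v_2|_{L^2}\, |v_3|_{L^\infty}\, C\, |\varphi|_{H^1},
% so, taking the supremum over |\varphi|_{H^1} \leq 1,
|v_1 v_2 v_3|_{(H^1)'} \leq C\, |v_1|_{L^2}\, |v_2|_{L^2}\, |v_3|_{L^\infty}.
```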
Using the fact that $P_{n}$ is a projection operator on $L^{2}$ and the Hölder
inequality, we get
$\displaystyle\mathbb{E^{\prime}}\sup_{s\in[0,T]}|P_{n}(m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times
h))|_{L^{2}}$
$\displaystyle\leq\mathbb{E^{\prime}}\sup_{s\in[0,T]}|m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times
h)|_{L^{2}}$
$\displaystyle\leq\mathbb{E^{\prime}}\sup_{s\in[0,T]}|m^{\prime}_{n}(s)|_{L^{2}}|m^{\prime}_{n}(s)|_{L^{\infty}}|h|_{L^{\infty}}$
$\displaystyle\leq
C\mathbb{E^{\prime}}\sup_{s\in[0,T]}|m^{\prime}_{n}(s)|_{L^{2}}|m^{\prime}_{n}(s)|_{H^{1}}|h|_{L^{\infty}}$
$\displaystyle\leq
C|h|_{L^{\infty}}|m(0)|_{L^{2}}\mathbb{E^{\prime}}\sup_{s\in[0,T]}|m^{\prime}_{n}(s)|_{H^{1}}.$
This, along with the bound (5.5), gives us a uniform bound that justifies the
use of the Lebesgue dominated convergence theorem. Next, we estimate the
second term:
$\displaystyle|(m^{\prime}(s)\times(m^{\prime}(s)\times
h))\times(m^{\prime}_{n}(s)\times h-m^{\prime}(s)\times
h)|_{(H^{1})^{\prime}}$
$\displaystyle\leq|(m^{\prime}(s)\times(m^{\prime}(s)\times
h))|_{L^{2}}|m^{\prime}_{n}(s)-m^{\prime}(s)|_{L^{2}}|h|_{L^{\infty}}$
$\displaystyle\leq\sup_{s\in[0,T]}|m^{\prime}(s)|_{L^{2}}\sup_{s\in[0,T]}|m^{\prime}(s)|_{L^{\infty}}\sup_{s\in[0,T]}|m^{\prime}_{n}(s)-m^{\prime}(s)|_{L^{2}}|h|_{L^{\infty}}$
$\displaystyle\leq
C\sup_{s\in[0,T]}|m^{\prime}(s)|_{L^{2}}\sup_{s\in[0,T]}|m^{\prime}(s)|_{H^{1}}\sup_{s\in[0,T]}|m^{\prime}_{n}(s)-m^{\prime}(s)|_{L^{2}}|h|_{L^{\infty}}$
$\displaystyle\leq
CC(h)|m_{0}|_{L^{2}}\sup_{s\in[0,T]}|m^{\prime}_{n}(s)-m^{\prime}(s)|_{L^{2}}.$
Thus,
$\displaystyle\mathbb{E}^{\prime}$
$\displaystyle|(m^{\prime}(s)\times(m^{\prime}(s)\times
h))\times(m^{\prime}_{n}(s)\times h-m^{\prime}(s)\times
h)|_{(H^{1})^{\prime}}^{2}$ $\displaystyle\leq
CC(h)|m_{0}|^{2}_{L^{2}}\mathbb{E^{\prime}}\sup_{s\in[0,T]}|m^{\prime}_{n}(s)-m^{\prime}(s)|_{L^{2}}^{2}.$
The right hand side of the above inequality goes to $0$ by Lemma 5.9. Hence
$\displaystyle\lim_{n\rightarrow\infty}\mathbb{E^{\prime}}\int_{0}^{T}\left|(P_{n}(m^{\prime}_{n}(s)\times(m^{\prime}_{n}(s)\times
h))-(m^{\prime}(s)\times(m^{\prime}(s)\times
h)))\times(m^{\prime}_{n}(s)\times h)\right|_{(H^{1})^{\prime}}\,ds=0$
and
$\displaystyle\lim_{n\rightarrow\infty}\mathbb{E^{\prime}}\int_{0}^{T}|(m^{\prime}(s)\times(m^{\prime}(s)\times
h))\times(m^{\prime}_{n}(s)\times h-m^{\prime}(s)\times
h)|_{(H^{1})^{\prime}}\,ds=0.$
Concerning the remaining term, the calculations can be done as follows. For
$s\in[0,T]$,
$\lim_{n\rightarrow\infty}|P_{n}(m^{\prime}(s)\times(m^{\prime}(s)\times
h))-m^{\prime}(s)\times(m^{\prime}(s)\times h)|_{L^{2}}=0.$
The above pointwise convergence and the uniform bound
$\displaystyle\mathbb{E^{\prime}}\int_{0}^{T}|m^{\prime}(s)\times(m^{\prime}(s)\times
h)|_{(H^{1})^{\prime}}\,ds\leq C(h)\mathbb{E^{\prime}}|m_{0}|_{L^{2}}^{2}$
together with the Lebesgue Dominated Convergence Theorem gives
$\displaystyle\lim_{n\rightarrow\infty}$
$\displaystyle\mathbb{E^{\prime}}\int_{0}^{T}\left|P_{n}\bigl{(}m^{\prime}(s)\times\left(m^{\prime}(s)\times
h\right)\times\bigl{(}m^{\prime}(s)\times
h\bigr{)}\bigr{)}-m^{\prime}(s)\times\bigl{(}m^{\prime}(s)\times
h\bigr{)}\times\bigl{(}m^{\prime}(s)\times
h\bigr{)}\right|_{(H^{1})^{\prime}}\,ds$ $\displaystyle=0.$
Combining the above calculations with Lemma 5.10 justifies the claim.
∎
We now show that the driving process $W^{\prime}$ is a Wiener process.
###### Lemma 5.16.
The process $W^{\prime}$ is a Wiener process on the space
$(\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{P}^{\prime})$. Also,
$W_{n}^{\prime}(t)-W_{n}^{\prime}(s)$ is independent of the $\sigma$-algebra
generated by $m^{\prime}_{n}(r),u^{\prime}_{n}(r),W_{n}^{\prime}(r)$ for $0\leq
r\leq s<t$.
###### Proof of Lemma 5.16.
$W^{\prime}_{n}$ converges to $W^{\prime}$ in $C([0,T];\mathbb{R})$
$\mathbb{P}^{\prime}$-a.s., so $W^{\prime}$ has almost surely continuous
trajectories. We proceed as follows: we first show that $W_{n}^{\prime}$ is a
Wiener process on $(\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{P}^{\prime})$
for each $n\in\mathbb{N}$. Recall that the processes $W_{n}^{\prime}$ and $W$
have the same laws on the space $C([0,T];\mathbb{R})$.
Let $\phi_{i},\zeta_{i}$, $i=1,\dots,k$, be continuous and bounded real-valued
functions on $(H^{1})^{\prime}$.
Let $\psi,\psi_{i}$, $i=1,\dots,k$, be continuous and bounded real-valued
functions on $\mathbb{R}$. Let $0<r_{1}<\dots<r_{k}\leq s\leq t$ and
$0<s_{1}<\dots<s_{k}\leq s\leq t$.
Now for each $n\in\mathbb{N}$
$\displaystyle\mathbb{E}^{\prime}$
$\displaystyle\left[\prod_{j=1}^{k}\phi_{j}\big{(}m^{\prime}_{n}(r_{j})\big{)}\prod_{j=1}^{k}\zeta_{j}\big{(}u^{\prime}_{n}(r_{j})\big{)}\prod_{j=1}^{k}\psi_{j}\big{(}W_{n}^{\prime}(s_{j})\big{)}\psi\big{(}W_{n}^{\prime}(t)-W_{n}^{\prime}(s)\big{)}\right]$
$\displaystyle=\mathbb{E}\left[\prod_{j=1}^{k}\phi_{j}\big{(}m_{n}(r_{j})\big{)}\prod_{j=1}^{k}\zeta_{j}\big{(}u_{n}(r_{j})\big{)}\prod_{j=1}^{k}\psi_{j}\big{(}W(s_{j})\big{)}\psi\big{(}W(t)-W(s)\big{)}\right]$
$\displaystyle=\mathbb{E}\left[\prod_{j=1}^{k}\phi_{j}\big{(}m_{n}(r_{j})\big{)}\prod_{j=1}^{k}\zeta_{j}\big{(}u_{n}(r_{j})\big{)}\prod_{j=1}^{k}\psi_{j}\big{(}W(s_{j})\big{)}\right]\mathbb{E}\left[\psi\big{(}W(t)-W(s)\big{)}\right]$
$\displaystyle=\mathbb{E}^{\prime}\left[\prod_{j=1}^{k}\phi_{j}\big{(}m^{\prime}_{n}(r_{j})\big{)}\prod_{j=1}^{k}\zeta_{j}\big{(}u^{\prime}_{n}(r_{j})\big{)}\prod_{j=1}^{k}\psi_{j}\big{(}W_{n}^{\prime}(s_{j})\big{)}\right]\mathbb{E}^{\prime}\left[\psi\big{(}W_{n}^{\prime}(t)-W_{n}^{\prime}(s)\big{)}\right].$
Thus, $W_{n}^{\prime}(t)-W_{n}^{\prime}(s)$ is independent of the
$\sigma$-algebra generated by
$m^{\prime}_{n}(r),u^{\prime}_{n}(r),W_{n}^{\prime}(r)$ for $r\leq s$.
Taking the limit as $n$ goes to infinity, we get
$\displaystyle\lim_{n\rightarrow\infty}\mathbb{E}^{\prime}$
$\displaystyle\left[\prod_{j=1}^{k}\phi_{j}\big{(}m^{\prime}_{n}(r_{j})\big{)}\prod_{j=1}^{k}\zeta_{j}\big{(}u^{\prime}_{n}(r_{j})\big{)}\prod_{j=1}^{k}\psi_{j}\big{(}W_{n}^{\prime}(s_{j})\big{)}\psi\big{(}W_{n}^{\prime}(t)-W_{n}^{\prime}(s)\big{)}\right]$
$\displaystyle=\lim_{n\rightarrow\infty}\mathbb{E}^{\prime}\left[\prod_{j=1}^{k}\phi_{j}\big{(}m^{\prime}_{n}(r_{j})\big{)}\prod_{j=1}^{k}\zeta_{j}\big{(}u^{\prime}_{n}(r_{j})\big{)}\prod_{j=1}^{k}\psi_{j}\big{(}W_{n}^{\prime}(s_{j})\big{)}\right]\mathbb{E}^{\prime}\left[\psi\big{(}W_{n}^{\prime}(t)-W_{n}^{\prime}(s)\big{)}\right].$
By the Lebesgue dominated convergence theorem, we have
$\displaystyle\mathbb{E}^{\prime}$
$\displaystyle\left[\prod_{j=1}^{k}\phi_{j}\bigl{(}m^{\prime}(r_{j})\bigr{)}\prod_{j=1}^{k}\zeta_{j}\big{(}u^{\prime}(r_{j})\big{)}\prod_{j=1}^{k}\psi_{j}\big{(}W^{\prime}(s_{j})\big{)}\psi\big{(}W^{\prime}(t)-W^{\prime}(s)\big{)}\right]$
$\displaystyle=\mathbb{E}^{\prime}\left[\prod_{j=1}^{k}\phi_{j}\big{(}m^{\prime}(r_{j})\big{)}\prod_{j=1}^{k}\zeta_{j}\big{(}u^{\prime}(r_{j})\big{)}\prod_{j=1}^{k}\psi_{j}\big{(}W^{\prime}(s_{j})\big{)}\right]\mathbb{E}^{\prime}\left[\psi\big{(}W^{\prime}(t)-W^{\prime}(s)\big{)}\right].$
Thus, $W^{\prime}(t)-W^{\prime}(s)$ is independent of the $\sigma$-algebra
generated by $m^{\prime}(r),u^{\prime}(r),W^{\prime}(r)$ for $r\leq s\leq t$.
Now let $k\in\mathbb{N}$, $s_{0}=0<s_{1}<\dots<s_{k}\leq T$, and
$(t_{1},\dots,t_{k})\in\mathbb{R}^{k}$. Then for each $n\in\mathbb{N}$, we
have
$\displaystyle\mathbb{E}^{\prime}\left[e^{i\sum_{j=1}^{k}t_{j}\big{(}W^{\prime}_{n}(s_{j})-W^{\prime}_{n}(s_{j-1})\big{)}}\right]$
$\displaystyle=\mathbb{E}\left[e^{i\sum_{j=1}^{k}t_{j}\bigl{(}W(s_{j})-W(s_{j-1})\bigr{)}}\right]$
$\displaystyle=e^{-\frac{1}{2}\sum_{j=1}^{k}t_{j}^{2}\big{(}s_{j}-s_{j-1}\big{)}}.$
Thus
$\displaystyle\lim_{n\rightarrow\infty}\mathbb{E}^{\prime}\left[e^{i\sum_{j=1}^{k}t_{j}\big{(}W^{\prime}_{n}(s_{j})-W^{\prime}_{n}(s_{j-1})\big{)}}\right]=\lim_{n\rightarrow\infty}e^{-\frac{1}{2}\sum_{j=1}^{k}t_{j}^{2}(s_{j}-s_{j-1})}$
and by the Lebesgue dominated convergence theorem,
$\displaystyle\mathbb{E}^{\prime}\left[e^{i\sum_{j=1}^{k}t_{j}\big{(}W^{\prime}(s_{j})-W^{\prime}(s_{j-1})\big{)}}\right]=e^{-\frac{1}{2}\sum_{j=1}^{k}t_{j}^{2}(s_{j}-s_{j-1})}.$
Hence, the increments are normally distributed. ∎
###### Lemma 5.17.
For each $t\in[0,T]$, $M_{n}^{\prime}(t)$ converges to
$\int_{0}^{t}\psi\left(m^{\prime}(s)\right)G\big{(}m^{\prime}(s)\big{)}\,dW^{\prime}(s)$
in $L^{2}(\Omega^{\prime};(H^{1})^{\prime})$. In particular,
$M^{\prime}(t)=\int_{0}^{t}\psi(m^{\prime}(s))G(m^{\prime}(s))\,dW^{\prime}(s),\
\mathbb{P}^{\prime}-a.s.$ (5.49)
###### Idea of proof of Lemma 5.17.
We first give a brief idea of the proof in mainly two steps. We then go on to
justify the steps.
1. (1)
Let us choose and fix $t\in[0,T]$ and $n\in\mathbb{N}$. We show that
$M_{n}^{\prime}(t)=\int_{0}^{t}\psi(m^{\prime}_{n}(s))G_{n}\big{(}m^{\prime}_{n}(s)\big{)}\,dW_{n}^{\prime}(s),\
\mathbb{P}^{\prime}-a.s.$ (5.50)
2. (2)
Again, let us choose and fix $t\in[0,T]$. Then, using step (1), we show that
$M_{n}^{\prime}(t)$ converges to
$\int_{0}^{t}\psi(m^{\prime}(s))G(m^{\prime}(s))\,dW^{\prime}(s)$ as
$n\to\infty$ in $L^{2}(\Omega^{\prime};(H^{1})^{\prime})$, and hence, in
particular, weakly in $L^{\frac{4}{3}}(\Omega^{\prime};(H^{1})^{\prime})$.
From Lemma 5.15, we know that $M_{n}^{\prime}(t)$ converges to $M^{\prime}(t)$
weakly in $L^{\frac{4}{3}}(\Omega^{\prime};(H^{1})^{\prime})$. Combining this
convergence with the convergence from step (2), we have,
$M^{\prime}(t)=\int_{0}^{t}\psi(m^{\prime}(s))G(m^{\prime}(s))\,dW^{\prime}(s),\
\mathbb{P}^{\prime}-a.s.$ (5.51)
∎
###### Proof of Lemma 5.17.
Proof of Step 1: Let $k,n\in\mathbb{N}$, and let $t\in[0,T]$.
Let
$\mathcal{P}_{k}:=\left\\{s_{j}^{k}:s_{j}^{k}=\frac{jT}{k},j=0,\dots,k\right\\}$
be a partition of $[0,T]$.
Claim:
$M_{n}^{\prime}(t)=\int_{0}^{t}\psi(m^{\prime}_{n}(s))G_{n}\big{(}m^{\prime}_{n}(s)\big{)}\,dW_{n}^{\prime}(s).$
(5.52)
For $n\in\mathbb{N}$ and $t\in[0,T]$, consider the following random variables.
$\displaystyle
M_{n}(t)-\sum_{j=0}^{k-1}\psi(m_{n}(s_{j}^{k}))G_{n}\bigl{(}m_{n}(s_{j}^{k})\bigr{)}\bigl{(}W(s_{j+1}^{k}\wedge
t)-W(s_{j}^{k}\wedge t)\bigr{)},$ (5.53)
and
$\displaystyle
M_{n}^{\prime}(t)-\sum_{j=0}^{k-1}\psi(m^{\prime}_{n}(s_{j}^{k}))G_{n}\bigl{(}m^{\prime}_{n}(s_{j}^{k})\bigr{)}\bigl{(}W_{n}^{\prime}(s_{j+1}^{k}\wedge
t)-W_{n}^{\prime}(s_{j}^{k}\wedge t)\bigr{)}.$ (5.54)
Sub-claim: For each $t\in[0,T]$ and $n\in\mathbb{N}$, we have the following
convergence. The random variable
$\displaystyle\sum_{j=0}^{k-1}\psi(m_{n}(s_{j}^{k}\wedge
t))G_{n}\bigl{(}m_{n}(s_{j}^{k}\wedge
t)\bigr{)}\bigl{(}W(s_{j+1}^{k}\wedge t)-W(s_{j}^{k}\wedge t)\bigr{)}$
$\displaystyle=\int_{0}^{t}\sum_{j=0}^{k-1}\chi_{[s^{k}_{j},s^{k}_{j+1})}(s)\psi(m_{n}(s_{j}^{k}\wedge
t))G_{n}(m_{n}(s_{j}^{k}\wedge t))\,dW(s),$ (5.55)
converges to the random variable
$\int_{0}^{t}\psi(m_{n}(s))G_{n}(m_{n}(s))\,dW(s),$ (5.56)
in the space $L^{2}(\Omega;L^{2})$ as $k\to\infty$. By the equality in
(5.55), the random variable in (5.53) therefore converges to $0$ (in the limit
as $k\to\infty$) $\mathbb{P}$-a.s.
###### Proof of the sub-claim.
Firstly, for any $f\in C([0,T];H_{n})$, we have the following
$\lim_{k\to\infty}\int_{0}^{t}\left|\chi_{[s^{k}_{j},s^{k}_{j+1})}(s)f(s_{j}^{k}\wedge
t)-f(s)\right|_{L^{2}}^{2}\,ds=0.$ (5.57)
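A brief justification of (5.57): writing $\bar{s}_{k}(s)$ for the left endpoint $s_{j}^{k}$ of the partition interval containing $s$ (our notation, introduced only for this sketch), uniform continuity of $f$ on the compact interval $[0,T]$ gives the following.

```latex
% For f \in C([0,T]; H_n), with \bar{s}_k(s) := s_j^k for
% s \in [s_j^k, s_{j+1}^k):
\int_0^t \left| f(\bar{s}_k(s) \wedge t) - f(s) \right|_{L^2}^2 \, ds
  \leq T \sup_{\substack{r, r' \in [0,T] \\ |r - r'| \leq T/k}}
        |f(r) - f(r')|_{L^2}^2
  \xrightarrow[k \to \infty]{} 0,
% since f is uniformly continuous on [0,T] and the mesh T/k vanishes.
```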
Now, observe that $\psi(m_{n})G_{n}(m_{n})\in C([0,T];H_{n})$. Therefore, for
$f(\cdot)=\psi(m_{n}(\cdot))G_{n}(m_{n}(\cdot))\in C([0,T];H_{n})$, we have
$\lim_{k\to\infty}\int_{0}^{t}\left|\sum_{j=0}^{k-1}\chi_{[s^{k}_{j},s^{k}_{j+1})}(s)\psi(m_{n}(s_{j}^{k}\wedge
t))G_{n}(m_{n}(s_{j}^{k}\wedge
t))-\psi(m_{n}(s))G_{n}(m_{n}(s))\right|_{L^{2}}^{2}\,ds=0,\ \mathbb{P}-a.s.$
(5.58)
Moreover, by Lemma 4.9, there exists a constant $C$ independent of $k$ such
that
$\displaystyle\mathbb{E}\left[\int_{0}^{t}\left|\sum_{j=0}^{k-1}\chi_{[s^{k}_{j},s^{k}_{j+1})}(s)\psi(m_{n}(s_{j}^{k}\wedge
t))G_{n}(m_{n}(s_{j}^{k}\wedge
t))-\psi(m_{n}(s))G_{n}(m_{n}(s))\right|_{L^{2}}^{2}\,ds\right]^{2}$ (5.59)
$\displaystyle\leq$ $\displaystyle
4\mathbb{E}\int_{0}^{t}\left|\sum_{j=0}^{k-1}\chi_{[s^{k}_{j},s^{k}_{j+1})}(s)\psi(m_{n}(s_{j}^{k}\wedge
t))G_{n}(m_{n}(s_{j}^{k}\wedge
t))\right|_{L^{2}}^{4}\,ds+4\mathbb{E}\int_{0}^{t}\left|\psi(m_{n}(s))G_{n}(m_{n}(s))\right|_{L^{2}}^{4}\,ds\leq
C.$ (5.60)
Therefore by the Vitali Convergence Theorem, we have the following
convergence.
$\lim_{k\to\infty}\mathbb{E}\int_{0}^{t}\left|\sum_{j=0}^{k-1}\chi_{[s^{k}_{j},s^{k}_{j+1})}(s)\psi(m_{n}(s_{j}^{k}\wedge
t))G_{n}(m_{n}(s_{j}^{k}\wedge
t))-\psi(m_{n}(s))G_{n}(m_{n}(s))\right|_{L^{2}}^{2}\,ds=0.$ (5.61)
To complete the proof of the sub-claim, we consider the following difference.
By the Itô isometry, we have
$\displaystyle\mathbb{E}\left|\int_{0}^{t}\sum_{j=0}^{k-1}\chi_{[s^{k}_{j},s^{k}_{j+1})}(s)\psi(m_{n}(s_{j}^{k}\wedge
t))G_{n}(m_{n}(s_{j}^{k}\wedge
t))-\psi(m_{n}(s))G_{n}(m_{n}(s))\,dW(s)\right|_{L^{2}}^{2}$
$\displaystyle=\mathbb{E}\int_{0}^{t}\left|\sum_{j=0}^{k-1}\chi_{[s^{k}_{j},s^{k}_{j+1})}(s)\psi(m_{n}(s_{j}^{k}\wedge
t))G_{n}(m_{n}(s_{j}^{k}\wedge
t))-\psi(m_{n}(s))G_{n}(m_{n}(s))\right|_{L^{2}}^{2}\,ds.$ (5.62)
The right hand side, and hence the left hand side, of the above equality
converges to $0$ as $k\to\infty$. This completes the proof of the sub-claim. ∎
Note that the two random variables in (5.53) and (5.54) are obtained by
applying the same measurable transformations to $(m_{n},W)$ and
$(m^{\prime}_{n},W_{n}^{\prime})$, respectively, and hence have the same
distributions. Strong convergence of the random variable in (5.53) implies
convergence of the corresponding laws; since (5.53) and (5.54) have the same
laws, the laws of the random variables in (5.54) converge to the law of the
limit of those in (5.53). But since
$M_{n}(t)-\int_{0}^{t}\psi(m_{n}(s))G_{n}(m_{n}(s))\,dW(s)=0,\
\mathbb{P}$-a.s. (because $m_{n}$ is a solution to (4)), we have
$\displaystyle\lim_{k\to\infty}\left[M_{n}^{\prime}(t)-\int_{0}^{t}\sum_{j=0}^{k-1}\chi_{[s^{k}_{j},s^{k}_{j+1})}(s)\psi(m^{\prime}_{n}(s_{j}^{k}\wedge
t))G_{n}(m^{\prime}_{n}(s_{j}^{k}\wedge t))\,dW_{n}^{\prime}(s)\right]=0,\
\mathbb{P}^{\prime}-a.s.$ (5.63)
Thus,
$M_{n}^{\prime}(t)=\int_{0}^{t}\psi(m^{\prime}_{n}(s))G_{n}\big{(}m^{\prime}_{n}(s)\big{)}\,dW_{n}^{\prime}(s),\
\mathbb{P}^{\prime}-a.s.$
This proves the claim and concludes step 1.
Proof of Step 2: In the second step, we show the convergence of
$M_{n}^{\prime}(t)$ to the stochastic integral
$\int_{0}^{t}\psi(m^{\prime}(s))G(m^{\prime}(s))\,dW^{\prime}(s)$ as
$n\to\infty$. In step 1, we have shown that
$M_{n}^{\prime}(t)=\int_{0}^{t}\psi(m^{\prime}_{n}(s))G_{n}(m^{\prime}_{n}(s))\,dW_{n}^{\prime}(s),\ \mathbb{P}^{\prime}$-a.s.
Adding and subtracting intermediate terms, together with the triangle
inequality, gives the following bound (up to a multiplicative constant).
$\displaystyle\mathbb{E^{\prime}}$
$\displaystyle\left|\int_{0}^{t}\psi(m^{\prime}_{n}(s))G_{n}\big{(}m^{\prime}_{n}(s)\big{)}\,dW_{n}^{\prime}(s)-\int_{0}^{t}\psi(m^{\prime}(s))G\big{(}m^{\prime}(s)\big{)}\,dW^{\prime}(s)\right|_{(H^{1})^{\prime}}^{2}$
$\displaystyle\leq$
$\displaystyle\mathbb{E^{\prime}}\left[\left|\int_{0}^{t}\psi(m^{\prime}_{n}(s))G_{n}\big{(}m^{\prime}_{n}(s)\big{)}\,dW_{n}^{\prime}(s)-\int_{0}^{t}\psi(m^{\prime}(s))G_{n}\big{(}m^{\prime}_{n}(s)\big{)}\,dW_{n}^{\prime}(s)\right|^{2}_{(H^{1})^{\prime}}\right]$
$\displaystyle+\mathbb{E^{\prime}}\left[\left|\int_{0}^{t}\psi(m^{\prime}(s))G_{n}\big{(}m^{\prime}_{n}(s)\big{)}\,dW_{n}^{\prime}(s)-\int_{0}^{t}\psi(m^{\prime}(s))G\big{(}m^{\prime}(s)\big{)}\,dW_{n}^{\prime}(s)\right|^{2}_{(H^{1})^{\prime}}\right]$
$\displaystyle+\mathbb{E^{\prime}}\left[\left|\int_{0}^{t}\psi(m^{\prime}(s))G\big{(}m^{\prime}(s)\big{)}\,dW_{n}^{\prime}(s)-\int_{0}^{t}\psi(m^{\prime}(s))G\big{(}m^{\prime}(s)\big{)}\,dW^{\prime}(s)\right|^{2}_{(H^{1})^{\prime}}\right].$
(5.64)
The first term on the right hand side converges to $0$ as $n\to\infty$. This
follows from using the convergences in Lemma 5.10 and some standard arguments.
For the second term, note that since $L^{2}\hookrightarrow(H^{1})^{\prime}$,
there exists a constant $C>0$ such that
$\displaystyle\mathbb{E^{\prime}}\left[\left|\int_{0}^{t}\psi(m^{\prime}(s))G_{n}\big{(}m^{\prime}_{n}(s)\big{)}\,dW_{n}^{\prime}(s)-\int_{0}^{t}\psi(m^{\prime}(s))G\big{(}m^{\prime}(s)\big{)}\,dW_{n}^{\prime}(s)\right|^{2}_{(H^{1})^{\prime}}\right]$
$\displaystyle=$
$\displaystyle\mathbb{E^{\prime}}\left[\left|\int_{0}^{t}\psi(m^{\prime}(s))\left[G_{n}\big{(}m^{\prime}_{n}(s)\big{)}-G\big{(}m^{\prime}(s)\big{)}\right]\,dW_{n}^{\prime}(s)\right|^{2}_{(H^{1})^{\prime}}\right]$
$\displaystyle\leq$ $\displaystyle
C\mathbb{E^{\prime}}\left[\left|\int_{0}^{t}\psi(m^{\prime}(s))\left[G_{n}\big{(}m^{\prime}_{n}(s)\big{)}-G\big{(}m^{\prime}(s)\big{)}\right]\,dW_{n}^{\prime}(s)\right|^{2}_{L^{2}}\right]$
$\displaystyle\leq$ $\displaystyle
C\mathbb{E^{\prime}}\left[\int_{0}^{t}\left|\left[G_{n}\big{(}m^{\prime}_{n}(s)\big{)}-G\big{(}m^{\prime}(s)\big{)}\right]\right|^{2}_{L^{2}}\,ds\right].$
In the last inequality, we have used the fact that $\psi\leq 1$, along with
the Itô isometry. By the convergence in Lemma 5.14, the right hand side
converges to $0$ as $n\to\infty$. In particular, for every $\varepsilon>0$, we
can choose $N_{\varepsilon}$ large enough so that this term is bounded by
$\frac{\varepsilon}{4}$ for each $n\geq N_{\varepsilon}$. For the third term,
we approximate the integrals by finite sums; up to a multiplicative constant,
we have
$\displaystyle\mathbb{E^{\prime}}\left[\left|\int_{0}^{t}\psi(m^{\prime}(s))G(m^{\prime}(s))\,dW_{n}^{\prime}(s)-\int_{0}^{t}\psi(m^{\prime}(s))G(m^{\prime}(s))\,dW^{\prime}(s)\right|^{2}_{(H^{1})^{\prime}}\right]$
$\displaystyle\leq$
$\displaystyle\mathbb{E^{\prime}}\left[\left|\int_{0}^{t}\psi(m^{\prime}(s))G(m^{\prime}(s))\,dW_{n}^{\prime}(s)-\sum_{j=0}^{k-1}\psi(m^{\prime}(s_{j}^{k}))G(m^{\prime}(s_{j}^{k}))\,\left(W_{n}^{\prime}(s_{j+1}^{k})-W_{n}^{\prime}(s_{j}^{k})\right)\right|^{2}_{(H^{1})^{\prime}}\right]$
$\displaystyle+\mathbb{E^{\prime}}\bigg{[}\bigg{|}\sum_{j=0}^{k-1}\psi(m^{\prime}(s_{j}^{k}))G(m^{\prime}(s_{j}^{k}))\,\left(W_{n}^{\prime}(s_{j+1}^{k})-W_{n}^{\prime}(s_{j}^{k})\right)$
$\displaystyle\quad-\sum_{j=0}^{k-1}\psi(m^{\prime}(s_{j}^{k}))G(m^{\prime}(s_{j}^{k}))\,\left(W^{\prime}(s_{j+1}^{k})-W^{\prime}(s_{j}^{k})\right)\bigg{|}^{2}_{(H^{1})^{\prime}}\bigg{]}$
$\displaystyle+\mathbb{E^{\prime}}\left[\left|\sum_{j=0}^{k-1}\psi(m^{\prime}(s_{j}^{k}))G(m^{\prime}(s_{j}^{k}))\,\left(W^{\prime}(s_{j+1}^{k})-W^{\prime}(s_{j}^{k})\right)-\int_{0}^{t}\psi(m^{\prime}(s))G(m^{\prime}(s))\,dW^{\prime}(s)\right|^{2}_{(H^{1})^{\prime}}\right].$
Since the mentioned sums approximate the corresponding Itô integrals in
$L^{2}(\Omega^{\prime};(H^{1})^{\prime})$, the first and the third terms
converge to $0$ as $k\to\infty$. Convergence of the processes $W_{n}^{\prime}$
to $W^{\prime}$, along with uniform integrability, implies that the second
term goes to $0$ as $n$ goes to infinity.
Combining the convergences concludes step 2, and hence the proof of the lemma.
∎
## 6\. Continuation of the proof of Theorem 3.3: verification of the
constraint condition
After showing the existence of a solution to the equation, we now show that
the obtained process $m$ satisfies the constraint condition (3.9). We use the
version of the Itô formula from Pardoux [53, Theorem 1.2].
For $t\in[0,T]$, consider the equation in $(H^{1})^{\prime}$
$\displaystyle m^{\prime}(t)=$ $\displaystyle\
m^{\prime}_{0}+\int_{0}^{t}\bigg{[}m^{\prime}(s)\times\Delta
m^{\prime}(s)-\alpha\,m^{\prime}(s)\times\bigl{(}m^{\prime}(s)\times\Delta
m^{\prime}(s)\bigr{)}+m^{\prime}(s)\times u^{\prime}(s)$
$\displaystyle-\alpha\,\psi(m^{\prime}(s))m^{\prime}(s)\times\bigl{(}m^{\prime}(s)\times
u^{\prime}(s)\bigr{)}+\frac{1}{2}\psi(m^{\prime}(s))^{2}\left[DG\bigl{(}m^{\prime}(s)\bigr{)}\right]\bigl{[}G\bigl{(}m^{\prime}(s)\bigr{)}\bigr{]}\bigg{]}\,ds$
$\displaystyle+\int_{0}^{t}\psi(m^{\prime}(s))G\bigl{(}m^{\prime}(s)\bigr{)}\,dW^{\prime}(s).$
Let $M^{2}(0,T;L^{2})$ be the space of all $L^{2}$-valued processes $v$ that
satisfy
$\mathbb{E^{\prime}}\left[\int_{0}^{T}|v(t)|^{2}_{L^{2}}\,dt\right]<\infty.$
That is, $M^{2}(0,T;L^{2})=L^{2}(\Omega^{\prime};L^{2}(0,T;L^{2}))$. Similarly
define $M^{2}(0,T;H^{1})$, $M^{2}(0,T;(H^{1})^{\prime})$. (For details see
Section 1.3 in [53]).
Let $\phi\in C_{c}^{\infty}(\mathcal{O})$, with $\phi$ taking values in
$\mathbb{R}^{+}$. Define $\phi_{4}:L^{2}\rightarrow\mathbb{R}$ by
$\phi_{4}(v)=\frac{1}{2}\left\langle\phi v,v\right\rangle_{L^{2}}.$
This can be written as
$\displaystyle\phi_{4}(v)=\frac{1}{2}\int_{\mathcal{O}}\phi(x)\langle
v(x),v(x)\rangle_{\mathbb{R}^{3}}dx.$
First, we present the Fréchet derivatives
$\phi_{4}^{\prime},\phi_{4}^{\prime\prime}$ of $\phi_{4}$. Let $v_{i}\in
L^{2},i=1,2,3$. Then we have
$\phi_{4}^{\prime}(v_{1})(v_{2})=\left\langle\phi
v_{1},v_{2}\right\rangle_{L^{2}}.$
Similarly,
$\phi_{4}^{\prime\prime}(v_{1})(v_{2},v_{3})=\left\langle\phi
v_{2},v_{3}\right\rangle_{L^{2}}.$
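These expressions follow from expanding the quadratic form; for $v_{1},v_{2}\in L^{2}$ and $\varepsilon\in\mathbb{R}$,

```latex
\phi_4(v_1 + \varepsilon v_2)
  = \tfrac{1}{2}\,\langle \phi (v_1 + \varepsilon v_2),
      v_1 + \varepsilon v_2 \rangle_{L^2}
  = \phi_4(v_1)
    + \varepsilon\, \langle \phi v_1, v_2 \rangle_{L^2}
    + \tfrac{\varepsilon^2}{2}\, \langle \phi v_2, v_2 \rangle_{L^2},
% so the first-order term in \varepsilon gives \phi_4'(v_1)(v_2) and the
% second-order term gives \phi_4''(v_1)(v_2, v_2); here we used that \phi is
% scalar-valued, so \langle \phi v_1, v_2 \rangle = \langle \phi v_2, v_1 \rangle.
```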
Using the bound (5.22) and the assumption on the initial data $m_{0}$, one can
show that the following hold.
1. (1)
$m^{\prime}\in L^{2}(0,T;H^{1});$
2. (2)
$m_{0}^{\prime}\in H^{1};$
3. (3)
$m^{\prime}\times\Delta m^{\prime}\in M^{2}(0,T;(H^{1})^{\prime});$
4. (4)
$m^{\prime}\times(m^{\prime}\times\Delta m^{\prime})\in
M^{2}(0,T;(H^{1})^{\prime});$
5. (5)
$m^{\prime}\times u^{\prime}\in M^{2}(0,T;(H^{1})^{\prime});$
6. (6)
$m^{\prime}\times(m^{\prime}\times u^{\prime})\in
M^{2}(0,T;(H^{1})^{\prime});$
7. (7)
$\left[DG(m^{\prime})\right]\big{(}G(m^{\prime})\big{)}\in
M^{2}(0,T;(H^{1})^{\prime});$
8. (8)
$G(m^{\prime})\in M^{2}(0,T;L^{2}).$
Thus, the Itô formula can be applied to the function $\phi_{4}$ defined above.
The calculations that follow are similar to the ones in the proof of Lemma
4.9. On applying the Itô formula [53, Theorem 1.2], the terms containing the
bump (cut-off) function cancel with the correction term arising from the Itô
formula, and the remaining terms vanish, except for the terms stated in the
following equality. For a similar computation, see Remark 3.2 in [14]. An
application of the Itô formula thus yields
$\displaystyle\phi_{4}(m^{\prime}(t))=$
$\displaystyle\phi_{4}(m_{0})+\int_{0}^{t}\left\langle
m^{\prime}(s)\times\Delta
m^{\prime}(s),m^{\prime}(s)\right\rangle_{L^{2}}\,ds$
$\displaystyle-\alpha\,\int_{0}^{t}\left\langle
m^{\prime}(s)\times\bigl{(}m^{\prime}(s)\times\Delta
m^{\prime}(s)\bigr{)},m^{\prime}(s)\right\rangle_{H^{1}}\,ds$
$\displaystyle+\int_{0}^{t}\left\langle m^{\prime}(s)\times
u^{\prime}(s),m^{\prime}(s)\right\rangle_{H^{1}}\,ds$
$\displaystyle-\alpha\,\int_{0}^{t}\left\langle\psi(m^{\prime}(s))m^{\prime}(s)\times\bigl{(}m^{\prime}(s)\times
u^{\prime}(s)\bigr{)},m^{\prime}(s)\right\rangle_{H^{1}}\,ds$
$\displaystyle+\frac{1}{2}\int_{0}^{t}\left\langle\psi^{2}(m^{\prime}(s))\left[DG\bigl{(}m^{\prime}(s)\bigr{)}\right]\bigl{[}G\bigl{(}m^{\prime}(s)\bigr{)}\bigr{]},m^{\prime}(s)\right\rangle_{L^{2}}\,ds$
$\displaystyle+\frac{1}{2}\int_{0}^{t}\psi^{2}(m^{\prime}(s))\left[\phi_{4}^{\prime\prime}(m^{\prime}(s))\right]\left\langle\big{(}G(m^{\prime}(s)),G(m^{\prime}(s))\big{)}\right\rangle_{L^{2}}\,ds$
$\displaystyle+\int_{0}^{t}\left\langle\psi(m^{\prime}(s))G\bigl{(}m^{\prime}(s)\bigr{)},m^{\prime}(s)\right\rangle_{L^{2}}\,dW^{\prime}(s)$
$\displaystyle=$ $\displaystyle\phi_{4}(m_{0})+\sum_{i=1}^{7}I_{i}(t).$ (6.1)
Our first observation for the integrals on the right hand side of (6.1) is that
$I_{i}(t)=0,\ \text{for}\ i=1,2,3,4,\ \text{and}\ 7.$ (6.2)
We give a brief justification. We mainly use the fact that for vectors
$a,b\in\mathbb{R}^{3}$, we have
$\left\langle a\times b,a\right\rangle_{\mathbb{R}^{3}}=0.$
For any $p\geq 1$, the above equality gives
$\ {}_{L^{p^{\prime}}}\left\langle a\times b,a\right\rangle_{L^{p}}=0,$ (6.3)
with $\ {}_{L^{p^{\prime}}}\langle\,\cdot,\cdot\rangle_{L^{p}}$ denoting the
$L^{p}$ duality pairing.
Observe that not all the inner products on the right hand side of (6.1) are
$L^{2}$ inner products. To use the above equality, we replace the
$(H^{1})^{\prime}-H^{1}$ duality pairing by the $L^{p}$ duality pairing for
some convenient $p$. To see this, first note that the space $H^{1}$ is
continuously embedded into the spaces $L^{4}$ and $L^{6}$. Therefore, the
$(H^{1})^{\prime}-H^{1}$ duality pairing can be replaced by the
$L^{\frac{4}{3}}-L^{4}$ (for $I_{2},I_{3}$) and $L^{\frac{6}{5}}-L^{6}$ (for
$I_{4}$) duality pairings.
For the triple product term $m^{\prime}\times\left(m^{\prime}\times\Delta
m^{\prime}\right)$ (inside the integral $I_{2}$), note that
$\left|m^{\prime}\times\left(m^{\prime}\times\Delta
m^{\prime}\right)\right|_{L^{\frac{4}{3}}}\leq
C\left|m^{\prime}\right|_{L^{4}}\left|m^{\prime}\times\Delta
m^{\prime}\right|_{L^{2}}.$
A similar bound holds for $m^{\prime}\times u^{\prime}$ in $I_{3}$.
For $m^{\prime}\times\left(m^{\prime}\times u^{\prime}\right)$ (inside the
integral $I_{4}$), note that
$\left|m^{\prime}\times\left(m^{\prime}\times
u^{\prime}\right)\right|_{L^{\frac{6}{5}}}\leq
C\left|m^{\prime}\right|_{L^{6}}^{2}\left|u^{\prime}\right|_{L^{2}}.$
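As a pointwise sanity check (not part of the proof) of the orthogonality underlying $I_{7}=0$: for $G(m)=m\times h-\alpha\,m\times(m\times h)$, both cross-product terms are orthogonal to $m$ in $\mathbb{R}^{3}$, so $\langle G(m),m\rangle_{\mathbb{R}^{3}}=0$. A minimal numerical sketch, with vector values and $\alpha$ chosen arbitrarily:

```python
def cross(a, b):
    # Cross product in R^3
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def G(m, h, alpha):
    # G(m) = m x h - alpha * m x (m x h), pointwise in R^3
    m_x_h = cross(m, h)
    m_x_m_x_h = cross(m, m_x_h)
    return tuple(p - alpha * q for p, q in zip(m_x_h, m_x_m_x_h))

# Both m x h and m x (m x h) are orthogonal to m, hence <G(m), m> = 0
m, h, alpha = (0.3, -1.2, 0.5), (1.0, 0.7, -0.4), 0.1
print(abs(dot(G(m, h, alpha), m)) < 1e-12)  # True (up to rounding)
```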
Now, the terms that remain are $I_{5}$ and $I_{6}$. Note that
$\left[\phi_{4}^{\prime\prime}(m^{\prime})\right]\left(G(m^{\prime}),G(m^{\prime})\right)=\left\langle
G(m^{\prime}),G(m^{\prime})\right\rangle_{L^{2}}=\left|G(m^{\prime})\right|_{L^{2}}^{2}.$
Moreover, the following equality holds by Lemma B.2 in [14]:
$\left\langle\left[DG(m^{\prime})\right]\big{(}G(m^{\prime})\big{)},m^{\prime}\right\rangle_{L^{2}}=-\left|G(m^{\prime})\right|_{L^{2}}^{2}.$
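This identity can also be motivated by differentiating the pointwise orthogonality $\langle G(m),m\rangle=0$ (which holds identically in $m$) in the direction $G(m)$; this is a heuristic sketch, not a replacement for the cited lemma.

```latex
% Differentiating m \mapsto \langle G(m), m \rangle \equiv 0 in a direction v:
0 = \langle [DG(m)]\, v, m \rangle + \langle G(m), v \rangle ;
% choosing v = G(m) yields
\langle [DG(m)]\bigl(G(m)\bigr), m \rangle = - \left| G(m) \right|^2 .
```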
Therefore,
$I_{5}(t)+I_{6}(t)=0,\ \forall t\in[0,T].$
Hence, the equality (6.1) reduces to
$\displaystyle\phi_{4}\big{(}m^{\prime}(t)\big{)}=\phi_{4}(m_{0}),$
for each $t\in[0,T]$. That is,
$\int_{\mathcal{O}}\phi(x)|m^{\prime}(t,x)|_{\mathbb{R}^{3}}^{2}\,dx=\int_{\mathcal{O}}\phi(x)|m_{0}(x)|_{\mathbb{R}^{3}}^{2}\,dx.$
(6.4)
Now, the equality (6.4) holds for all $\phi\in C_{c}^{\infty}(\mathcal{O})$.
Hence, we have the following
$|m^{\prime}(t,x)|_{\mathbb{R}^{3}}^{2}=|m_{0}(x)|_{\mathbb{R}^{3}}^{2}=1,\
\text{for Leb.-a.e.}\ x\in\mathcal{O},\ \text{for all}\ t\in[0,T],\
\mathbb{P}^{\prime}\text{-a.s.}$ (6.5)
Thus, the constraint condition (3.9) is satisfied.
###### Remark 6.1.
Now that the constraint condition has been verified, we observe that the
cut-off $\psi$ only takes the value $1$, and hence can be removed from the
equation. This completes the proof of the existence of a weak martingale
solution to the problem (3.7), as per Definition 3.2.
## 7\. Proof of Theorems 3.4 and 7.5 about the pathwise uniqueness and the
existence of a unique strong solution
For this section, let us fix a probability space
$\left(\Omega,\mathcal{F},\mathbb{P}\right)$ and a Wiener process $W$ on this
space, as in Definition 3.2. The existence theorem (Theorem 3.3) states that
the process $m$ satisfies the equation (3.7) in a weak sense, i.e., tested
against test functions. The following result, which is a corollary of Theorem
3.3, states that the equation also makes sense in the strong (PDE) form.
###### Corollary 7.1.
Let us assume that the process $u$ is a control process such that (3.1) holds.
Let $\left(\Omega,\mathcal{F},\mathbb{P},W,m,u\right)$ be a weak martingale
solution of (3.7) corresponding to the control process $u$, satisfying the
properties stated in Theorem 3.3. Then the following equation is satisfied in
the strong (PDE) sense in the space $L^{2}$ for each $t\in[0,T]$.
$\displaystyle m(t)$ $\displaystyle=m_{0}+\int_{0}^{t}m(s)\times\Delta
m(s)\,ds-\alpha\,\int_{0}^{t}m(s)\times(m(s)\times
u(s))\,ds-\alpha\,\int_{0}^{t}m(s)\times\left(m(s)\times\Delta
m(s)\right)\,ds$ $\displaystyle+\int_{0}^{t}m(s)\times
u(s)\,ds+\frac{1}{2}\int_{0}^{t}\left[DG\left(m(s)\right)\right]\left[G\big{(}m\left(s\right)\big{)}\right]\,ds+\int_{0}^{t}G\big{(}m(s)\big{)}\,dW(s),\
\mathbb{P}-a.s.$
(7.1)
###### Proof of Corollary 7.1.
The proof of the above corollary follows once we note that each of the
integrands in the equality lies in the space
$L^{2}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right)$. This can be verified
by using the bounds established in Section 6 and in Lemmas 5.6 and 5.7.
By Theorem 3.3, the process $m\times\Delta m$ lies in the space
$L^{2}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right)$. By
the constraint condition (3.9),
$\displaystyle\mathbb{E}\int_{0}^{T}\left|m(t)\times(m(t)\times\Delta
m(t))\right|_{L^{2}}^{2}\,dt$ $\displaystyle\leq
C\mathbb{E}\int_{0}^{T}\left|m(t)\right|_{L^{\infty}}^{2}\left|m(t)\times\Delta
m(t)\right|_{L^{2}}^{2}\,dt$
$\displaystyle=C\mathbb{E}\int_{0}^{T}\left|m(t)\times\Delta
m(t)\right|_{L^{2}}^{2}\,dt<\infty.$ (7.2)
Hence the process $m\times(m\times\Delta m)$ also lies in the space
$L^{2}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right)$.
Arguing similarly, by the constraint condition (3.9) and part
$(4)$ of Assumption 3.1 on the process $u$, we have
$\displaystyle\mathbb{E}\int_{0}^{T}\left|m(t)\times
u(t)\right|_{L^{2}}^{2}\,dt$
$\displaystyle\leq\mathbb{E}\int_{0}^{T}\left|m(t)\right|^{2}_{L^{\infty}}\left|u(t)\right|_{L^{2}}^{2}\,dt$
$\displaystyle=\mathbb{E}\int_{0}^{T}\left|u(t)\right|_{L^{2}}^{2}\,dt<\infty.$
(7.3)
Again from the constraint condition (3.9) and the above inequality,
$\displaystyle\mathbb{E}\int_{0}^{T}\left|m(t)\times(m(t)\times
u(t))\right|_{L^{2}}^{2}\,dt$ $\displaystyle\leq
C\mathbb{E}\int_{0}^{T}\left|m(t)\right|^{2}_{L^{\infty}}\left|m(t)\times
u(t)\right|_{L^{2}}^{2}\,dt$
$\displaystyle=C\mathbb{E}\int_{0}^{T}\left|m(t)\times
u(t)\right|_{L^{2}}^{2}\,dt\ (\text{by }(3.9))$
$\displaystyle<\infty.$ (7.4)
We recall that
$G(m)=m\times h-\alpha\,m\times(m\times h).$
It thus suffices to verify the required $L^{2}$ bound for each of the two
terms individually. We also recall that $h$ is assumed to be in $H^{1}$. The
continuous embedding $H^{1}\hookrightarrow L^{\infty}$ implies that there
exists a constant $C>0$ such that
$\left|h\right|_{L^{\infty}}\leq C\left|h\right|_{H^{1}}<\infty.$
Thus,
$\displaystyle\mathbb{E}\int_{0}^{T}\left|m(t)\times h\right|_{L^{2}}^{2}\,dt$
$\displaystyle\leq\mathbb{E}\int_{0}^{T}\left|m(t)\right|_{L^{2}}^{2}\left|h\right|_{L^{\infty}}^{2}\,dt$
$\displaystyle\leq
T\left|h\right|_{L^{\infty}}^{2}\mathbb{E}\sup_{t\in[0,T]}\left|m(t)\right|_{L^{2}}^{2}<\infty.$
(7.5)
The right hand side of the last inequality is finite because of the constraint
condition. Similarly,
$\displaystyle\mathbb{E}\int_{0}^{T}\left|m(t)\times(m(t)\times
h)\right|_{L^{2}}^{2}\,dt$
$\displaystyle\leq\mathbb{E}\int_{0}^{T}\left|m(t)\right|_{L^{\infty}}^{2}\left|m(t)\times
h\right|_{L^{2}}^{2}\,dt<\infty.$ (7.6)
The right hand side of the above inequality is finite by the constraint
condition (3.9) and the assumption on $h$. Hence $G(m)$ takes values in the
space $L^{2}\left(0,T;L^{2}\right)$, $\mathbb{P}$-a.s. It remains to verify
the bound for the correction term, that is, to show that the term
$\left(DG(m)\right)\left(G(m)\right)$ also lies in the space
$L^{2}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right)$.
Recall that Proposition 2.2 shows that the correction term is locally
Lipschitz. Also, by the definition of the term
$\left[DG(m)\right]\left(G(m)\right)$, we have
$\left[DG(0)\right]\left(G(0)\right)=0.$
The constraint condition (3.9) implies that the process $m$ takes values in
the unit ball in the space $L^{\infty}$. Hence there exists a constant $C>0$
such that
$\displaystyle\left|DG\big{(}m(t)\big{)}\big{[}G\big{(}m(t)\big{)}\big{]}\right|_{L^{2}}\leq
C\left|m(t)\right|_{L^{2}}.$
Hence
$\displaystyle\mathbb{E}\int_{0}^{T}\left|DG\big{(}m(t)\big{)}\big{[}G\big{(}m(t)\big{)}\big{]}\right|_{L^{2}}^{2}\,dt\leq
C\mathbb{E}\int_{0}^{T}\left|m(t)\right|_{L^{2}}^{2}\,dt<\infty.$
The right hand side of the last inequality is finite by Theorem 3.3. This
concludes the proof of Corollary 7.1. ∎
Before starting the proof of Theorem 3.4, we state a proposition, followed by
a corollary, both of which will be used in the proof.
###### Proposition 7.2.
Let $v\in H^{1}$. Further assume that
$|v(x)|_{\mathbb{R}^{3}}=1\ \text{for Leb. a.a.}\ x\in\mathcal{O}.$ (7.7)
Then the following equality holds in $(H^{1})^{\prime}$.
$\displaystyle v\times(v\times\Delta v)=-\Delta v-|\nabla
v|_{\mathbb{R}^{3}}^{2}v.$ (7.8)
###### Proof of Proposition 7.2.
We begin by verifying that each side of the equality (7.8) belongs to the
space $(H^{1})^{\prime}$. Using the equality in (3.2), we first show that
$\Delta v$ takes values in the space $\left(H^{1}\right)^{\prime}$. Let
$\phi\in H^{1}$. Then
$\displaystyle\left|\ {}_{\left(H^{1}\right)^{\prime}}\left\langle\Delta
v,\phi\right\rangle_{H^{1}}\right|$ $\displaystyle=\left|-\left\langle\nabla
v,\nabla\phi\right\rangle_{L^{2}}\right|$
$\displaystyle=\left|\left\langle\nabla
v,\nabla\phi\right\rangle_{L^{2}}\right|$ $\displaystyle\leq\left|\nabla
v\right|_{L^{2}}\left|\nabla\phi\right|_{L^{2}}.$
The assumptions on $v$ and $\phi$ imply that the right hand side, and hence
the left hand side of the above inequality, is finite.
The second term on the right hand side of (7.8) is interpreted as follows:
$\displaystyle\ _{(H^{1})^{\prime}}\left\langle\left|\nabla
v\right|_{\mathbb{R}^{3}}^{2}v,\phi\right\rangle_{H^{1}}=\int_{\mathcal{O}}\left|\nabla
v(x)\right|_{\mathbb{R}^{3}}^{2}\left\langle
v(x),\phi(x)\right\rangle_{\mathbb{R}^{3}}\,dx.$ (7.9)
To show that the right hand side of the above equality makes sense, we observe
that $\phi\in H^{1}$ implies that $\phi\in L^{\infty}$. This, together with
the equality (7.7), yields
$\displaystyle\left|\int_{\mathcal{O}}\left|\nabla
v(x)\right|_{\mathbb{R}^{3}}^{2}\left\langle
v(x),\phi(x)\right\rangle_{\mathbb{R}^{3}}\,dx\right|\leq
C\int_{\mathcal{O}}\left|\nabla v(x)\right|_{\mathbb{R}^{3}}^{2}\,dx.$
The right hand side of the above inequality is finite since $v\in H^{1}$. The
left hand side of the equality (7.8) is in $(H^{1})^{\prime}$ by the way the
triple product is understood in (3.6).
Hence both the terms on the right hand side of the equality (7.8) belong to
the space $\left(H^{1}\right)^{\prime}$. We now proceed to show the equality.
Let $\phi\in H^{1}$. The proof uses the following identity in
$\mathbb{R}^{3}$:
$a\times(b\times c)=b\left\langle
a,c\right\rangle_{\mathbb{R}^{3}}-c\left\langle
a,b\right\rangle_{\mathbb{R}^{3}},\ a,b,c\in\mathbb{R}^{3}.$ (7.10)
By (3.6), we have
$\ {}_{(H^{1})^{\prime}}\left\langle v\times(v\times\Delta
v),\phi\right\rangle_{H^{1}}$ $\displaystyle=\left\langle
v\times\nabla(\phi\times v),\nabla v\right\rangle_{L^{2}}$
$\displaystyle=\left\langle v\times(\nabla\phi\times v),\nabla
v\right\rangle_{L^{2}}+\left\langle v\times(\phi\times\nabla v),\nabla
v\right\rangle_{L^{2}}$
$\displaystyle=\left\langle\nabla\phi\,|v|_{\mathbb{R}^{3}}^{2}-v\left\langle
v,\nabla\phi\right\rangle_{\mathbb{R}^{3}},\nabla
v\right\rangle_{L^{2}}+\left\langle\left\langle\nabla
v,v\right\rangle_{\mathbb{R}^{3}}\phi-\nabla
v\left\langle\phi,v\right\rangle_{\mathbb{R}^{3}},\nabla v\right\rangle_{L^{2}}\ (\text{by }(7.10))$
$\displaystyle=\left\langle\nabla\phi\,|v|_{\mathbb{R}^{3}}^{2},\nabla
v\right\rangle_{L^{2}}-\left\langle\nabla
v\left\langle\phi,v\right\rangle_{\mathbb{R}^{3}},\nabla
v\right\rangle_{L^{2}}\ (\text{by }(7.11))$
$\displaystyle=\left\langle\nabla\phi,\nabla
v\right\rangle_{L^{2}}-\left\langle\nabla
v\left\langle\phi,v\right\rangle_{\mathbb{R}^{3}},\nabla
v\right\rangle_{L^{2}}.\ (\text{by }(7.7))$
In view of the equalities (3.2) and (7.9), the right hand side of the above
equality equals
$-\Delta v-\left|\nabla v\right|_{\mathbb{R}^{3}}^{2}v$
in $(H^{1})^{\prime}$.
The following equality has been used in the calculations above:
$\displaystyle\left\langle v,\nabla v\right\rangle_{\mathbb{R}^{3}}$
$\displaystyle=\frac{1}{2}\nabla|v|_{\mathbb{R}^{3}}^{2}=0.$ (7.11)
The right hand side of the above equality is $0$ since by (7.7),
$\left|v\right|_{\mathbb{R}^{3}}^{2}$ is constant.
Hence
$\displaystyle v\times(v\times\Delta v)=-\Delta v-|\nabla
v|_{\mathbb{R}^{3}}^{2}v.$
This concludes the proof of Proposition 7.2. ∎
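For orientation, the identity (7.8) can also be checked pointwise for smooth $v$ with $|v(x)|_{\mathbb{R}^{3}}=1$; the weak formulation above makes this rigorous for $v\in H^{1}$. The pointwise sketch reads:

```latex
% Pointwise sketch of (7.8) for smooth v with |v(x)|_{R^3} = 1.
% By the triple product identity (7.10) with a = b = v, c = \Delta v:
v \times (v \times \Delta v)
  = v\,\langle v, \Delta v\rangle_{\mathbb{R}^{3}}
    - \Delta v\,\underbrace{|v|_{\mathbb{R}^{3}}^{2}}_{=1}.
% Differentiating \langle v, \nabla v\rangle_{\mathbb{R}^{3}} = 0 (see (7.11))
% once more gives
\langle v, \Delta v\rangle_{\mathbb{R}^{3}} = -|\nabla v|_{\mathbb{R}^{3}}^{2},
% and substituting yields
v \times (v \times \Delta v) = -\Delta v - |\nabla v|_{\mathbb{R}^{3}}^{2}\,v.
```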
We have the following result as a corollary of the above proposition.
###### Corollary 7.3.
Let $\left(\Omega,\mathcal{F},\mathbb{P},W,m,u\right)$ be a weak martingale
solution of (3.7) corresponding to the control process $u$, as in Corollary
7.1. Then the following equality holds in $\left(H^{1}\right)^{\prime}$ for
every $t\in[0,T]$:
$\displaystyle m(t)\times(m(t)\times\Delta m(t))=-\Delta m(t)-|\nabla
m(t)|_{\mathbb{R}^{3}}^{2}m(t),\ \mathbb{P}-a.s.$
###### Proof of Corollary 7.3.
To prove this corollary, it is sufficient to show that the process $m$
satisfies the assumptions in Proposition 7.2. Theorem 3.3 implies that, in
particular, for each $t\in[0,T]$, $m(t)\in H^{1}$, $\mathbb{P}$-a.s. Also, the
constraint condition (3.9) implies that
$\left|m(t,x)\right|_{\mathbb{R}^{3}}=1$ for Leb-a.a. $x\in\mathcal{O}$ and all
$t\in[0,T]$, $\mathbb{P}$-a.s. Hence the corollary follows by applying
Proposition 7.2 to $m(t)$ for each $t\in[0,T]$. ∎
Using the above corollary, we proceed to prove pathwise uniqueness.
###### Proof of Theorem 3.4.
Let us choose and fix a control process $u$ satisfying Assumption 3.1 and two
weak martingale solutions $(\Omega,\mathcal{F},\mathbb{P},W,m_{1},u)$ and
$(\Omega,\mathcal{F},\mathbb{P},W,m_{2},u)$ corresponding to $u$ as in
Definition 3.2 and satisfying the properties stated in Theorem 3.3.
Let us first observe that in view of Corollary 7.3, for each $i=1,2$, the
following identity holds in $(H^{1})^{\prime}$:
$\displaystyle m_{i}(t)=$ $\displaystyle\,m_{0}+\alpha\,\int_{0}^{t}\Delta
m_{i}(s)\,ds+\alpha\,\int_{0}^{t}|\nabla
m_{i}(s)|_{\mathbb{R}^{3}}^{2}m_{i}(s)\,ds$
$\displaystyle+\int_{0}^{t}m_{i}(s)\times\Delta
m_{i}(s)\,ds+\int_{0}^{t}m_{i}(s)\times
u(s)\,ds-\alpha\,\int_{0}^{t}m_{i}(s)\times(m_{i}(s)\times u(s))\,ds$
$\displaystyle+\frac{1}{2}\int_{0}^{t}\left[DG\bigl{(}m_{i}(s)\bigr{)}\right]\left[G\big{(}m_{i}\left(s\right)\big{)}\right]\,ds+\int_{0}^{t}G\big{(}m_{i}(s)\big{)}\,dW(s),$
(7.12)
for all $t\in[0,T]$, $\mathbb{P}$-a.s. The above equation is the same as the
equation in Corollary 7.1, except that the triple product term is expressed,
via Corollary 7.3, as a sum of two terms. Since the equality in Corollary 7.3
holds in $(H^{1})^{\prime}$, equation (7.12) also holds in $(H^{1})^{\prime}$.
It thus suffices to show that both of the new integrands lie in the space
$L^{2}\left(0,T;(H^{1})^{\prime}\right)$.
Following the arguments in Proposition 7.2, we can prove that for $v\in
L^{2}(0,T;H^{1})$ and $t\in[0,T]$,
$\displaystyle\int_{0}^{t}\ {}_{(H^{1})^{\prime}}\left\langle\Delta
m_{i}(s),v(s)\right\rangle_{H^{1}}\,ds=-\int_{0}^{t}\left\langle\nabla
m_{i}(s),\nabla v(s)\right\rangle_{L^{2}}\,ds.$
Thus by the Cauchy-Schwarz inequality,
$\displaystyle\left|\int_{0}^{t}\left\langle\nabla m_{i}(s),\nabla
v(s)\right\rangle_{L^{2}}\,ds\right|\leq\left(\int_{0}^{t}\left|\nabla m_{i}(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}\left(\int_{0}^{t}\left|\nabla v(s)\right|_{L^{2}}^{2}\,ds\right)^{\frac{1}{2}}<\infty.$
The right hand side of the above inequality is finite because of the
assumptions on $m_{i}$ and $v$.
We now show that the remaining (second) term also takes values in the space
$(H^{1})^{\prime}$.
$\displaystyle\int_{0}^{T}\left|\left|\nabla
m_{i}(t)\right|_{\mathbb{R}^{3}}^{2}m_{i}(t)\right|_{(H^{1})^{\prime}}^{2}\,dt$
$\displaystyle\leq C\int_{0}^{T}\left|\nabla
m_{i}(t)\right|_{L^{2}}^{4}\left|m_{i}(t)\right|^{2}_{L^{\infty}}\,dt$
$\displaystyle=C\int_{0}^{T}\left|\nabla
m_{i}(t)\right|_{L^{2}}^{4}\,dt\ (\text{since }\left|m_{i}(t)\right|_{L^{\infty}}=1\text{ by }(3.9))$
$\displaystyle\leq CT\sup_{t\in[0,T]}\left|\nabla
m_{i}(t)\right|_{L^{2}}^{4}.$
Hence
$\displaystyle\mathbb{E}\left[\int_{0}^{T}\left|\left|\nabla
m_{i}(t)\right|_{\mathbb{R}^{3}}^{2}m_{i}(t)\right|_{\left(H^{1}\right)^{\prime}}^{2}\,dt\right]\leq
C\mathbb{E}\left[\sup_{t\in[0,T]}\left|\nabla
m_{i}(t)\right|_{L^{2}}^{4}\right]<\infty.$
The last inequality follows from Theorem 3.3. This justifies writing
equation (7.12).
Define a process $m$ by
$m(t)=m_{1}(t)-m_{2}(t)\ \text{for}\ t\in[0,T].$
We now consider the equation (7.12) satisfied by each $m_{i}$ for $i=1,2$.
Taking the difference of (7.12) for $i=1$ and $i=2$ and simplifying, we
obtain the equation satisfied by the process $m$:
$\displaystyle m(t)=\alpha\,\int_{0}^{t}\Delta
m(s)\,ds+\alpha\,\int_{0}^{t}|\nabla m_{1}(s)|_{\mathbb{R}^{3}}^{2}m(s)\,ds$
$\displaystyle+\alpha\,\int_{0}^{t}\left\langle\big{(}\nabla m_{1}(s)-\nabla
m_{2}(s)\big{)},\big{(}\nabla m_{1}(s)+\nabla
m_{2}(s)\big{)}\right\rangle_{\mathbb{R}^{3}}m_{2}(s)\,ds$
$\displaystyle+\int_{0}^{t}m(s)\times\Delta
m_{1}(s)\,ds+\int_{0}^{t}m_{2}(s)\times\Delta m(s)\,ds+\int_{0}^{t}m(s)\times
u(s)\,ds$ $\displaystyle-\alpha\,\bigg{[}\int_{0}^{t}m(s)\times(m_{1}(s)\times
u(s))\,ds+\int_{0}^{t}m_{2}(s)\times\big{(}m(s)\times
u(s)\big{)}\,ds\bigg{]}+\int_{0}^{t}V_{n}(s)\,ds$
$\displaystyle+\int_{0}^{t}(m(s)\times
h)\,dW(s)-\alpha\,\bigg{[}\int_{0}^{t}m(s)\times\big{(}m_{1}(s)\times
h\big{)}\,dW(s)+\int_{0}^{t}m_{2}(s)\times\big{(}m(s)\times
h\big{)}\,dW(s)\bigg{]},$ (7.13)
where
$\displaystyle\int_{0}^{t}V_{n}(s)\,ds=\int_{0}^{t}(m(s)\times
h)\,ds+\frac{1}{2}\int_{0}^{t}((m(s)\times h)\times
h)\,ds-\frac{1}{2}\alpha\,\bigg{[}\int_{0}^{t}(m(s)\times(m_{1}(s)\times
h))\times h\,ds$ $\displaystyle+\int_{0}^{t}(m_{2}(s)\times(m(s)\times
h))\times h\,ds\bigg{]}$
$\displaystyle-\frac{1}{2}\alpha\,\bigg{[}\int_{0}^{t}m(s)\times((m_{1}(s)\times
h)\times h)\,ds+\int_{0}^{t}m_{2}(s)\times((m(s)\times h)\times
h)\,ds\bigg{]}$
$\displaystyle+\frac{1}{2}\alpha^{2}\bigg{[}\int_{0}^{t}(m(s)\times(m_{1}(s)\times
h))\times(m_{1}(s)\times h)\,ds$
$\displaystyle+\int_{0}^{t}(m_{2}(s)\times(m(s)\times h))\times(m_{1}(s)\times
h)\,ds+\int_{0}^{t}(m_{2}(s)\times(m_{2}(s)\times h))\times(m(s)\times
h)\,ds\bigg{]}$
$\displaystyle+\frac{1}{2}\alpha^{2}\bigg{[}\int_{0}^{t}m(s)\times((m_{1}(s)\times(m_{1}(s)\times
h))\times h)\,ds$
$\displaystyle+\int_{0}^{t}m_{2}(s)\times((m(s)\times(m_{1}(s)\times h))\times
h)\,ds+\int_{0}^{t}m_{2}(s)\times((m_{2}(s)\times(m(s)\times h))\times
h)\,ds\bigg{]}.$ (7.14)
For convenience of notation, let us write equation (7.13) as
$\displaystyle
m(t)=\sum_{i=1}^{9}\int_{0}^{t}C_{i}\,z_{i}(s)\,ds+\sum_{i=10}^{12}\int_{0}^{t}C_{i}\,z_{i}(s)\,dW(s).$
(7.15)
Here $C_{i}$, $i=1,\dots,12$, are the constants multiplying the corresponding
integrals.
Consider the function $\phi_{5}:L^{2}\to\mathbb{R}$ defined by
$v\mapsto\frac{1}{2}|v|_{L^{2}}^{2}.$
Consider the process $m$ defined above. We apply the Itô formula [53] to
$\phi_{5}$ and $m$. That the integrands on the right hand side of the equation
(7.15) satisfy the conditions required in [53] can be verified as in Section
6. Applying the Itô formula gives the following equation:
$\displaystyle\frac{1}{2}\left|m(t)\right|_{L^{2}}^{2}=$
$\displaystyle\frac{1}{2}\left|m(0)\right|_{L^{2}}^{2}+\sum_{i=1}^{9}\int_{0}^{t}C_{i}\left\langle
z_{i}(s),m(s)\right\rangle_{L^{2}}\,ds+\sum_{i=10}^{12}\int_{0}^{t}C_{i}\left\langle
z_{i}(s),m(s)\right\rangle_{L^{2}}\,dW(s)$
$\displaystyle+\frac{1}{2}\int_{0}^{t}\left|G\big{(}m_{1}(s)\big{)}-G\big{(}m_{2}(s)\big{)}\right|_{L^{2}}^{2}\,ds,$
(7.16)
for all $t\in[0,T]$, $\mathbb{P}$-a.s. Let us denote the last term on the
right hand side of the above equality by $Z_{13}$. Note that since $m_{1}$ and
$m_{2}$ have the same initial data, $m(0)=0$, $\mathbb{P}$-a.s. For the sake
of simplicity, we carry out some of the calculations separately and then
combine them to obtain the desired result.
Calculation for $z_{1}$.
For each $t\in[0,T]$, the following equality holds $\mathbb{P}$-a.s., see
(3.2):
$\displaystyle\int_{0}^{t}\ {}_{(H^{1})^{\prime}}\left\langle\Delta
m(s),m(s)\right\rangle_{H^{1}}\,ds=-\int_{0}^{t}\left|\nabla
m(s)\right|_{L^{2}}^{2}\,ds.$
The negative sign means that this term moves to the left hand side of the
equality with a positive coefficient, and can hence be used to absorb the
$\int_{0}^{t}|\nabla m(s)|_{L^{2}}^{2}\,ds$ terms arising from the estimates
below.
Calculations for the terms $z_{2}$ and $z_{3}$.
The bound on the terms is calculated below. By Hölder’s inequality,
$\displaystyle\int_{0}^{t}\left\langle\left|\nabla
m_{1}(s)\right|_{\mathbb{R}^{3}}^{2}m(s),m(s)\right\rangle_{L^{2}}\,ds$
$\displaystyle\leq C\int_{0}^{t}\left|\nabla
m_{1}\right|_{L^{2}}^{2}\left|m\right|_{L^{\infty}}^{2}\,ds$ (By Agmon’s
inequality) $\displaystyle\leq C\int_{0}^{t}\left|\nabla
m_{1}\right|_{L^{2}}^{2}\left|m\right|_{L^{2}}\left|m\right|_{H^{1}}\,ds$
$\displaystyle\leq C\int_{0}^{t}\left|\nabla
m_{1}\right|_{L^{2}}^{2}\left|m\right|_{L^{2}}\left[\left|m\right|_{L^{2}}+\left|\nabla
m\right|_{L^{2}}\right]\,ds$ $\displaystyle\leq C\int_{0}^{t}\left|\nabla
m_{1}\right|_{L^{2}}^{2}\left|m\right|_{L^{2}}^{2}\,ds+C\int_{0}^{t}\left|\nabla
m_{1}\right|_{L^{2}}^{2}\left|m\right|_{L^{2}}\left|\nabla
m\right|_{L^{2}}\,ds$ (By Young’s inequality) $\displaystyle\leq
C\int_{0}^{t}\left|\nabla
m_{1}\right|_{L^{2}}^{2}\left|m\right|_{L^{2}}^{2}\,ds+C^{2}\frac{C(\varepsilon)}{2}\int_{0}^{t}\left|\nabla
m_{1}\right|_{L^{2}}^{4}\left|m(s)\right|_{L^{2}}^{2}\,ds$
$\displaystyle\quad+\frac{\varepsilon}{2}\int_{0}^{t}\left|\nabla
m(s)\right|_{L^{2}}^{2}\,ds.$
Here $\varepsilon>0$ will be chosen later. The above sequence of inequalities
uses the inequality (5.36) along with Young’s inequality.
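The interpolation used here, and repeatedly below, can be recorded schematically; it combines the Agmon-type estimate (5.36) with the splitting of the $H^{1}$ norm and Young's inequality:

```latex
% Agmon-type interpolation (cf. (5.36)):
|m|_{L^{\infty}}^{2} \le C\,|m|_{L^{2}}\,|m|_{H^{1}}
  \le C\,|m|_{L^{2}}\big(|m|_{L^{2}} + |\nabla m|_{L^{2}}\big).
% Hence, for any \varepsilon > 0, Young's inequality
% ab \le \tfrac{C(\varepsilon)}{2}a^{2} + \tfrac{\varepsilon}{2}b^{2} gives
|\nabla m_{1}|_{L^{2}}^{2}\,|m|_{L^{2}}\,|\nabla m|_{L^{2}}
  \le \frac{C(\varepsilon)}{2}\,|\nabla m_{1}|_{L^{2}}^{4}\,|m|_{L^{2}}^{2}
      + \frac{\varepsilon}{2}\,|\nabla m|_{L^{2}}^{2}.
```

The gradient term produced on the right is later absorbed by the $\alpha\int_{0}^{t}|\nabla m(s)|_{L^{2}}^{2}\,ds$ term coming from $z_{1}$.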
$\displaystyle\int_{0}^{t}\left\langle\left\langle\nabla m_{1}(s),\nabla
m(s)\right\rangle_{\mathbb{R}^{3}}m_{2}(s),m(s)\right\rangle_{L^{2}}\,ds$
$\displaystyle\leq\int_{0}^{t}\left|\nabla
m_{1}(s)\right|_{L^{2}}\left|m_{2}(s)\right|_{L^{\infty}}\left|\nabla
m(s)\right|_{L^{2}}\left|m(s)\right|_{L^{\infty}}\,ds$ (Since
$\left|m_{2}(s)\right|_{L^{\infty}}=1$, Agmon’s inequality) $\displaystyle\leq
C\int_{0}^{t}\left|\nabla m_{1}(s)\right|_{L^{2}}\left|\nabla
m(s)\right|_{L^{2}}\left|m(s)\right|_{L^{2}}^{\frac{1}{2}}\left|m(s)\right|_{H^{1}}^{\frac{1}{2}}\,ds$
$\displaystyle\leq C\int_{0}^{t}\left|\nabla
m_{1}(s)\right|_{L^{2}}\left|\nabla
m(s)\right|_{L^{2}}\left|m(s)\right|_{L^{2}}^{\frac{1}{2}}\bigg{[}\left|m(s)\right|_{L^{2}}^{\frac{1}{2}}$
$\displaystyle\qquad+\left|\nabla
m(s)\right|_{L^{2}}^{\frac{1}{2}}\bigg{]}\,ds$ $\displaystyle\leq
C\int_{0}^{t}\left|\nabla m_{1}(s)\right|_{L^{2}}\left|\nabla
m(s)\right|_{L^{2}}\left|m(s)\right|_{L^{2}}\,ds$
$\displaystyle\quad+C\int_{0}^{t}\left|\nabla
m_{1}(s)\right|_{L^{2}}\left|m(s)\right|_{L^{2}}^{\frac{1}{2}}\left|\nabla
m(s)\right|_{L^{2}}^{\frac{3}{2}}\,ds$ $\displaystyle(\text{By Young's
inequality for}\ p=q=2)$ $\displaystyle\leq
C\frac{C(\varepsilon)}{2}\int_{0}^{t}\left|\nabla
m_{1}(s)\right|_{L^{2}}^{2}\left|m(s)\right|_{L^{2}}^{2}\,ds$
$\displaystyle\quad+\frac{\varepsilon}{2}\int_{0}^{t}\left|\nabla
m(s)\right|_{L^{2}}^{2}\,ds$ $\displaystyle(\text{By Young's inequality for}\
p=4,q=\frac{4}{3})$
$\displaystyle\quad+C^{4}\frac{C(\varepsilon)}{4}\int_{0}^{t}\left|\nabla
m_{1}(s)\right|_{L^{2}}^{4}\left|m(s)\right|_{L^{2}}^{2}\,ds$
$\displaystyle\quad+\frac{3\varepsilon}{4}\int_{0}^{t}\left|\nabla
m(s)\right|_{L^{2}}^{2}\,ds$
$\displaystyle=C(\varepsilon)\int_{0}^{t}\left|m(s)\right|_{L^{2}}^{2}\left[\left|\nabla
m_{1}(s)\right|_{L^{2}}^{2}+\left|\nabla
m_{1}(s)\right|_{L^{2}}^{4}\right]\,ds$
$\displaystyle\quad+\frac{5\varepsilon}{4}\int_{0}^{t}\left|\nabla
m(s)\right|_{L^{2}}^{2}\,ds.$
Similarly,
$\displaystyle\int_{0}^{t}\bigl{\langle}\left\langle\nabla m_{2}(s),\nabla
m(s)\right\rangle_{\mathbb{R}^{3}}m_{2}(s),m(s)\bigr{\rangle}_{L^{2}}\,ds\leq$
$\displaystyle
C(\varepsilon)\int_{0}^{t}\left|m(s)\right|_{L^{2}}^{2}\left[\left|\nabla
m_{2}(s)\right|_{L^{2}}^{2}+\left|\nabla
m_{2}(s)\right|_{L^{2}}^{4}\right]\,ds$
$\displaystyle+\frac{5\varepsilon}{4}\int_{0}^{t}\left|\nabla
m(s)\right|_{L^{2}}^{2}\,ds.$
Note: All the constants have been condensed into $C(\varepsilon)$.
Hence
$\displaystyle\int_{0}^{t}\left\langle\left|\nabla
m_{1}(s)\right|_{\mathbb{R}^{3}}^{2}m_{1}(s)-\left|\nabla
m_{2}(s)\right|_{\mathbb{R}^{3}}^{2}m_{2}(s),m(s)\right\rangle_{L^{2}}\,ds$
$\displaystyle\leq
C(\varepsilon)\int_{0}^{t}\left|m(s)\right|_{L^{2}}^{2}\left[\left|\nabla
m_{1}(s)\right|_{L^{2}}^{2}+\left|\nabla
m_{1}(s)\right|_{L^{2}}^{4}+\left|\nabla
m_{2}(s)\right|_{L^{2}}^{2}+\left|\nabla
m_{2}(s)\right|_{L^{2}}^{4}\right]\,ds$
$\displaystyle\quad+\frac{5\varepsilon}{2}\int_{0}^{t}\left|\nabla
m(s)\right|_{L^{2}}^{2}\,ds.$
Calculation for the terms $z_{4}$ and $z_{5}$.
$\int_{0}^{t}\left\langle z_{4}(s),m(s)\right\rangle_{L^{2}}\,ds=0.$
$\displaystyle\left|\int_{0}^{t}\left\langle
z_{5}(s),m(s)\right\rangle_{L^{2}}\,ds\right|$
$\displaystyle\leq\int_{0}^{t}\left|\left\langle m_{2}(s)\times\Delta
m(s),m(s)\right\rangle_{L^{2}}\right|\,ds$
$\displaystyle=\int_{0}^{t}\left|\left\langle\nabla m_{2}(s)\times m(s),\nabla
m(s)\right\rangle_{L^{2}}\right|\,ds$ $\displaystyle(\text{H\"{o}lder's and
Young's inequalities})$
$\displaystyle\leq\frac{C(\varepsilon)}{2}\int_{0}^{t}\left|\nabla
m_{2}(s)\right|_{L^{2}}^{2}\left|m(s)\right|_{L^{\infty}}^{2}\,ds+\frac{\varepsilon}{2}\int_{0}^{t}\left|\nabla
m(s)\right|_{L^{2}}^{2}\,ds$ $\displaystyle(\text{By Agmon's inequality})$
$\displaystyle\leq\frac{C(\varepsilon)}{2}\int_{0}^{t}\left|\nabla
m_{2}(s)\right|_{L^{2}}^{2}\left|m(s)\right|_{L^{2}}\left|m(s)\right|_{H^{1}}\,ds$
$\displaystyle\quad+\frac{\varepsilon}{2}\int_{0}^{t}\left|\nabla
m(s)\right|_{L^{2}}^{2}\,ds$
$\displaystyle\leq\frac{C(\varepsilon)}{2}\int_{0}^{t}\left|\nabla
m_{2}(s)\right|_{L^{2}}^{2}\left|m(s)\right|_{L^{2}}\left[\left|m(s)\right|_{L^{2}}+\left|\nabla
m(s)\right|_{L^{2}}\right]\,ds$
$\displaystyle\quad+\frac{\varepsilon}{2}\int_{0}^{t}\left|\nabla
m(s)\right|_{L^{2}}^{2}\,ds$ $\displaystyle(\text{By Young's inequality})$
$\displaystyle\leq\frac{C(\varepsilon)}{2}\int_{0}^{t}\left|\nabla
m_{2}(s)\right|_{L^{2}}^{2}\left|m(s)\right|_{L^{2}}^{2}\,ds$
$\displaystyle\quad+\frac{C(\varepsilon)}{2}\int_{0}^{t}\left|\nabla
m_{2}(s)\right|_{L^{2}}^{2}\left|m(s)\right|_{L^{2}}\left|\nabla
m(s)\right|_{L^{2}}\,ds$
$\displaystyle\quad+\frac{\varepsilon}{2}\int_{0}^{t}\left|\nabla
m(s)\right|_{L^{2}}^{2}\,ds$ $\displaystyle(\text{By Young's inequality})$
$\displaystyle\leq\frac{C(\varepsilon)}{2}\int_{0}^{t}\left|\nabla
m_{2}(s)\right|_{L^{2}}^{2}\left|m(s)\right|_{L^{2}}^{2}\,ds$
$\displaystyle\quad+\frac{C(\varepsilon)}{2}\frac{C(\varepsilon)^{2}}{4}\int_{0}^{t}\left|\nabla
m_{2}(s)\right|_{L^{2}}^{4}\left|m(s)\right|_{L^{2}}^{2}\,ds$
$\displaystyle\quad+\frac{\varepsilon}{2}\int_{0}^{t}\left|\nabla
m(s)\right|_{L^{2}}^{2}\,ds+\frac{\varepsilon}{2}\int_{0}^{t}\left|\nabla
m(s)\right|_{L^{2}}^{2}\,ds.$
Here $\varepsilon>0$ will be chosen later. The second equality follows from
the way $m\times\Delta m$ is interpreted (as an element of
$(H^{1})^{\prime}$). The fourth inequality uses Young's
$\varepsilon$-inequality.
Combining the constants into one constant $C(\varepsilon)$, we get
$\displaystyle\bigg{|}\int_{0}^{t}\left\langle
z_{4}(s)+z_{5}(s),m(s)\right\rangle_{L^{2}}$ $\displaystyle\,ds\bigg{|}\leq
C(\varepsilon)\int_{0}^{t}\bigg{[}\left|\nabla m_{2}(s)\right|_{L^{2}}^{2}$
$\displaystyle+\left|\nabla
m_{2}(s)\right|_{L^{2}}^{4}\bigg{]}\left|m(s)\right|_{L^{2}}^{2}\,ds+\varepsilon\int_{0}^{t}\left|\nabla
m(s)\right|_{L^{2}}^{2}\,ds.$ (7.17)
Calculation for $z_{6}$.
Concerning the first term with the control process $u$, that is $z_{6}$, we
observe that
$\displaystyle\int_{0}^{t}\left\langle
z_{6}(s),m(s)\right\rangle_{L^{2}}\,ds=\int_{0}^{t}\left\langle m(s)\times
u(s),m(s)\right\rangle_{L^{2}}\,ds=0.$
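Both of the cancellations above (for $z_{4}$ and $z_{6}$) rest on the elementary orthogonality of the cross product:

```latex
% For all a, b in R^3, the cross product is orthogonal to its factors:
\langle a \times b, a \rangle_{\mathbb{R}^{3}} = 0.
% Applied pointwise with a = m(s,x) and b = \Delta m_{1}(s,x) (for z_4)
% or b = u(s,x) (for z_6), and then integrated over the domain, this gives
\left\langle m(s)\times \Delta m_{1}(s),\, m(s)\right\rangle_{L^{2}} = 0,
\qquad
\left\langle m(s)\times u(s),\, m(s)\right\rangle_{L^{2}} = 0.
```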
Calculation for $z_{7},z_{8}$.
For the remaining terms involving the control process $u$, the following
estimate holds. Hölder's inequality, followed by Agmon's and then Young's
inequalities, implies that for every $\varepsilon>0$ there exist constants
$C,C(\varepsilon)>0$ such that for $t\in[0,T]$,
$\displaystyle\int_{0}^{t}$ $\displaystyle|\left\langle
m_{1}(s)\times\big{(}m_{1}(s)\times
u(s)\big{)}-m_{2}(s)\times\big{(}m_{2}(s)\times
u(s)\big{)},m(s)\right\rangle_{L^{2}}|\,ds$ $\displaystyle\leq
C\int_{0}^{t}\left(1+\left|u(s)\right|_{L^{2}}^{2}\right)\left|m(s)\right|_{L^{2}}^{2}\,ds+\frac{\varepsilon}{2}\int_{0}^{t}\left|\nabla
m(s)\right|_{L^{2}}^{2}\,ds.$
The terms that remain are those corresponding to the noise term $G(m)$ (the
terms $z_{10},z_{11},z_{12}$), the Itô-Stratonovich correction term
$DG(m)(G(m))$ (the term $z_{9}$), and the last term on the right hand side of
(7.16), i.e. $Z_{13}$.
Calculations for the terms $z_{9}$ and $Z_{13}$.
By Lemma 2.1 and Proposition 2.2, the mappings defining $z_{9}$ and $Z_{13}$
are Lipschitz on balls. Hence it is sufficient to show that the processes
$m_{1}$ and $m_{2}$ take values in a ball in the space $L^{2}$. In this
direction, by the continuous embedding $L^{\infty}\hookrightarrow L^{2}$, the
constraint condition (3.9) and Theorem 3.3, there exists a constant $C>0$
such that
$|m_{i}|_{L^{2}}\leq C|m_{i}|_{L^{\infty}}\leq 2C$ (7.18)
for $i=1,2$. The processes $m_{1}(s)$ and $m_{2}(s)$ thus take values in a
ball in $L^{2}$. Hence there exists a constant $C_{1}>0$ such that for each
$s\in[0,T]$,
$\displaystyle|G(m_{1}(s))-G(m_{2}(s))|_{L^{2}}\leq
C_{1}|m_{1}(s)-m_{2}(s)|_{L^{2}}=C_{1}|m(s)|_{L^{2}}.$
Similarly, there exists another constant $C_{2}$ such that for each
$s\in[0,T]$,
$\displaystyle\left|DG\big{(}m_{1}(s)\big{)}\left[G(m_{1})(s)\right]-DG\big{(}m_{2}(s)\big{)}\left[G\big{(}m_{2}(s)\big{)}\right]\right|_{L^{2}}$
$\displaystyle\leq
C_{2}\left|G(m_{1}(s))-G\big{(}m_{2}(s)\big{)}\right|_{L^{2}}$
$\displaystyle\leq C_{1}C_{2}|m_{1}(s)-m_{2}(s)|_{L^{2}}$
$\displaystyle=C_{1}C_{2}|m(s)|_{L^{2}}.$
Hence by the Cauchy-Schwarz inequality and the above estimate, we have
$\displaystyle\int_{0}^{T}\left\langle
G(m_{1})-G(m_{2}),m(s)\right\rangle_{L^{2}}\,ds$
$\displaystyle\leq\int_{0}^{T}\left|G(m_{1})-G(m_{2})\right|_{L^{2}}\left|m(s)\right|_{L^{2}}\,ds$
$\displaystyle\leq C_{1}\int_{0}^{T}\left|m(s)\right|_{L^{2}}^{2}\,ds.$
Similarly,
$\displaystyle\int_{0}^{t}\left\langle
DG\big{(}m_{1}(s)\big{)}\left[G\big{(}m_{1}(s)\big{)}\right]-DG\big{(}m_{2}(s)\big{)}\left[G\big{(}m_{2}(s)\big{)}\right],m(s)\right\rangle_{L^{2}}\,ds$
$\displaystyle\leq\int_{0}^{t}\left|DG\big{(}m_{1}(s)\big{)}\left[G\big{(}m_{1}(s)\big{)}\right]-DG\big{(}m_{2}(s)\big{)}\left[G\big{(}m_{2}(s)\big{)}\right]\right|_{L^{2}}\left|m(s)\right|_{L^{2}}\,ds$
$\displaystyle\leq C_{1}C_{2}\int_{0}^{t}\left|m(s)\right|_{L^{2}}^{2}\,ds.$
Regarding the correction term $Z_{13}$ that appears after the use of the Itô
formula, the local Lipschitz continuity of $G$ gives a constant $C>0$ such
that
$\displaystyle\int_{0}^{t}\left|G(m_{1}(s))-G(m_{2}(s))\right|_{L^{2}}^{2}\,ds\leq
C\int_{0}^{t}\left|m(s)\right|_{L^{2}}^{2}\,ds.$
We now combine (7.16) with the above estimates, collecting the integrals with
similar integrands and combining the corresponding constants to simplify the
presentation. Thus there exists a constant $C>0$ such that
$\displaystyle\left|m(t)\right|_{L^{2}}^{2}+\left(\alpha\,-4\varepsilon\right)\int_{0}^{t}\left|\nabla
m(s)\right|_{L^{2}}^{2}\,ds\leq$ $\displaystyle\left|m(0)\right|_{L^{2}}^{2}$
$\displaystyle+\int_{0}^{t}\left|m(s)\right|_{L^{2}}^{2}\bigg{[}C+C\bigg{(}\left|\nabla
m_{1}(s)\right|_{L^{2}}^{2}+\left|\nabla m_{1}(s)\right|_{L^{2}}^{4}$
$\displaystyle+\left|\nabla m_{2}(s)\right|_{L^{2}}^{2}+\left|\nabla
m_{2}(s)\right|_{L^{2}}^{4}\bigg{)}+\left|u(s)\right|_{L^{2}}+\left|u(s)\right|_{L^{2}}^{2}\bigg{]}\,ds$
$\displaystyle+\int_{0}^{t}\left\langle
G\big{(}m_{1}(s)\big{)}-G\big{(}m_{2}(s)\big{)},m(s)\right\rangle_{L^{2}}\,dW(s).$
We choose $\varepsilon>0$ such that $\left(\alpha\,-4\varepsilon\right)>0$.
We recall that the processes $m_{1}$ and $m_{2}$ have the same initial
condition $m_{0}$. Hence $\left|m(0)\right|_{L^{2}}=0$. Also by the choice of
$\varepsilon$, the term
$\left(\alpha\,-4\varepsilon\right)\int_{0}^{t}\left|\nabla
m(s)\right|_{L^{2}}^{2}\,ds$ is non-negative.
Let $C>0$ be a constant. For $t\in[0,T]$, let
$\displaystyle\Phi_{C}(t)=C+C\left(\left|\nabla
m_{1}(t)\right|_{L^{2}}^{2}+\left|\nabla
m_{1}(t)\right|_{L^{2}}^{4}+\left|\nabla
m_{2}(t)\right|_{L^{2}}^{2}+\left|\nabla
m_{2}(t)\right|_{L^{2}}^{4}\right)+\left|u(t)\right|_{L^{2}}+\left|u(t)\right|_{L^{2}}^{2}.$ (7.19)
Hence
$\displaystyle\left|m(t)\right|_{L^{2}}^{2}$
$\displaystyle\leq\int_{0}^{t}\Phi_{C}(s)\left|m(s)\right|_{L^{2}}^{2}\,ds+\int_{0}^{t}\left\langle
G\big{(}m_{1}(s)\big{)}-G\big{(}m_{2}(s)\big{)},m(s)\right\rangle_{L^{2}}\,dW(s).$
(7.20)
The application of the Itô formula gives
$\left|m(t)\right|_{L^{2}}^{2}e^{-\int_{0}^{t}\Phi_{C}(s)\,ds}\leq\int_{0}^{t}e^{-\int_{0}^{s}\Phi_{C}(r)\,dr}\left\langle
G\big{(}m_{1}(s)\big{)}-G\big{(}m_{2}(s)\big{)},m(s)\right\rangle_{L^{2}}\,dW(s).$
(7.21)
Some details of this calculation are given in the Appendix B. A similar idea
has been used in [14, 60].
By the definition of $\Phi_{C}$, $\Phi_{C}(t)\geq 0$ for each $t\in[0,T]$,
$\mathbb{P}$-a.s., and the bounds established in Theorem 3.3 imply that for
any $t\in[0,T]$,
$\int_{0}^{t}\Phi_{C}(s)\,ds<\infty,\ \mathbb{P}-\text{a.s.}$ (7.22)
Hence $\mathbb{P}-$a.s.,
$e^{-\int_{0}^{t}\Phi_{C}(s)\,ds}\leq 1.$ (7.23)
The mapping $G$ is Lipschitz on balls. The processes $m_{1},m_{2}$ satisfy the
constraint condition (3.9) and are hence uniformly bounded. Therefore the
process $m$ is also uniformly bounded. This implies that the stochastic
integral on the right hand side of the inequality (7.21) is a martingale.
Thus, taking the expectation on both sides of the inequality (7.21), we get
$\displaystyle\mathbb{E}\left|m(t)\right|_{L^{2}}^{2}e^{-\int_{0}^{t}\Phi_{C}(s)\,ds}\leq\mathbb{E}\int_{0}^{t}e^{-\int_{0}^{s}\Phi_{C}(r)\,dr}\left\langle
G\big{(}m_{1}(s)\big{)}-G\big{(}m_{2}(s)\big{)},m(s)\right\rangle_{L^{2}}\,dW(s)=0.$
(7.24)
Hence
$\mathbb{E}\left|m(t)\right|_{L^{2}}^{2}e^{-\int_{0}^{t}\Phi_{C}(s)\,ds}\leq
0.$
But for each $t\in[0,T]$, $e^{-\int_{0}^{t}\Phi_{C}(s)\,ds}>0$,
$\mathbb{P}$-a.s., by (7.22).
Hence
$\left|m(t)\right|_{L^{2}}^{2}=0\ \mathbb{P}-\text{a.s.}$ (7.25)
Since $t\in[0,T]$ was arbitrary, we conclude that $m_{1}(t)=m_{2}(t)$,
$\mathbb{P}$-a.s., for every $t\in[0,T]$. This concludes the proof of Theorem
3.4.
∎
We now define what we mean by a strong solution to the problem (3.7).
###### Definition 7.4 (Strong solution).
The problem (3.7) is said to admit a strong solution if the following holds:
given a filtered probability space
$\left(\Omega,\mathcal{F},\mathbb{F},\mathbb{P}\right)$ carrying a real-valued
Wiener process $W$, together with initial data $m_{0}$ and a control process
$u$ satisfying Assumption 3.1, there exists an $\mathbb{F}$-adapted process
$m$ on this space such that the tuple
$\left(\Omega,\mathcal{F},\mathbb{F},\mathbb{P},W,m,u\right)$ is a weak
martingale solution to the problem (3.7) according to Definition 3.2.
The existence of a strong solution now follows as a consequence, which is
stated in the following result.
###### Theorem 7.5.
The problem (3.7), for given initial data $m_{0}$ and a control process $u$
satisfying the assumptions of Theorem 3.3, has a pathwise unique strong
solution in the sense of Definition 7.4. Moreover, the strong solution is
unique in law.
###### Proof of Theorem 7.5.
To prove the existence of a strong solution, we apply Theorem 2 from [52],
which is a special case of Theorem 12.1 in the same reference.
First, Theorem 3.3 ensures that the problem (3.7) admits a weak martingale
solution for initial data and control process satisfying Assumption 3.1.
Further, Theorem 3.4 ensures that the obtained solution is pathwise unique in
the following sense. Let
$\left(\Omega,\mathcal{F},\mathbb{F},\mathbb{P},m_{1},u,W\right)$ and
$\left(\Omega,\mathcal{F},\mathbb{F},\mathbb{P},m_{2},u,W\right)$ be two weak
martingale solutions corresponding to the same initial data $m_{0}$ and
control $u$, on the same probability space, with $m_{1}$ and $m_{2}$
satisfying the bounds in $(5)$ of Definition 3.2. Then for each $t\in[0,T]$,
we have $m_{1}(t)=m_{2}(t)$, $\mathbb{P}$-a.s.
Let $C_{0}([0,T];\mathbb{R})$ denote the space
$\left\{v\in C([0,T];\mathbb{R}):v(0)=0\right\}.$
By part $(3)$ of Theorem 12.1, Theorem 13.2 and Lemma E of [52], there exists
a Borel measurable map
$J:C_{0}([0,T];\mathbb{R})\to C([0,T];L^{2})\cap L^{2}(0,T;H^{1})$
such that the following holds. Let
$\left(\Omega,\mathcal{F},\mathbb{F},\mathbb{P}\right)$ be a given filtered
probability space along with a control process $u$, all satisfying Assumption
3.1. Let $W=\left(W(t)\right)_{t\in[0,T]}$ be an arbitrary real valued Wiener
process on the said space. Let $m=J\circ W$. That is,
$m:\Omega\ni\omega\mapsto J(W(\omega))\in C([0,T];L^{2})\cap
L^{2}(0,T;H^{1}).$
Then, the tuple $\left(\Omega,\mathcal{F},\mathbb{F},\mathbb{P},W,m,u\right)$
is a weak martingale solution to the problem (3.7) on the space
$\left(\Omega,\mathcal{F},\mathbb{F},\mathbb{P}\right)$.
Therefore, given a filtered probability space
$\left(\Omega,\mathcal{F},\mathbb{F},\mathbb{P}\right)$ together with initial
data $m_{0}$ and a control process $u$ satisfying Assumption 3.1, we have
shown that there exists an $\mathbb{F}$-adapted process $m$ such that the
tuple $\left(\Omega,\mathcal{F},\mathbb{F},\mathbb{P},W,m,u\right)$ is a weak
martingale solution to the problem (3.7), thus showing the existence of a
strong solution according to Definition 7.4.
∎
## 8\. Further regularity: Proof of Theorem 3.5
So far we have shown that there exists a strong solution to the problem (3.7)
with the initial condition and the given control satisfying the assumptions
given in Theorem 3.3. This section is dedicated to proving further regularity
for the above mentioned strong solution.
Recall that by definition
$Av=-\Delta v\ \text{for}\ v\in D(A),$
and
$D(A)=\left\\{v\in H^{2}:\frac{\partial v}{\partial\nu}=0\ \text{on}\
\partial\mathcal{O}\right\\},$
where $\nu$ denotes the outward pointing normal vector and
$\partial\mathcal{O}$ denotes the boundary of $\mathcal{O}$. In other words,
the domain of $A$ is the subspace of elements of $H^{2}$ that satisfy the
Neumann boundary condition.
We also recall that
$A_{1}=I_{L^{2}}+A.$
Here $I_{L^{2}}$ denotes the identity operator on the space $L^{2}$. Thus it
suffices to show the bound for $\Delta m$, since $m$ is already bounded in
$L^{2}$.
The existence of the process $m$ is guaranteed by Theorem 7.5. What remains to
show is that $m$ satisfies the inequality (3.13).
Let $\\{e^{-tA}\\}_{t\in[0,T]}$ denote the semigroup generated by the operator
$A$. The solution $m$ to the problem (3.7) can be written in mild form, see
for example, Section 6 in [25], or the proof of first part of Theorem 9.15 in
[56], as
$\displaystyle m(t)=$ $\displaystyle
e^{-\alpha\,tA}m_{0}+\alpha\,\int_{0}^{t}e^{-\alpha(t-s)A}(|\nabla
m(s)|_{\mathbb{R}^{3}}^{2})m(s)\,ds+\int_{0}^{t}e^{-\alpha(t-s)A}\left(m(s)\times\Delta
m(s)\right)\,ds$
$\displaystyle-\alpha\,\int_{0}^{t}e^{-\alpha(t-s)A}\left[m(s)\times\big{(}m(s)\times
u(s)\big{)}\right]\,ds+\int_{0}^{t}e^{-\alpha(t-s)A}\bigl{[}\big{(}m(s)\times
u(s)\big{)}\bigr{]}\,ds$
$\displaystyle+\alpha\,\int_{0}^{t}e^{-\alpha(t-s)A}\big{(}m(s)\times(m(s)\times
h)\big{)}\,dW(s)+\int_{0}^{t}e^{-\alpha(t-s)A}(m(s)\times h)\,dW(s)$
$\displaystyle+\frac{1}{2}\int_{0}^{t}e^{-\alpha(t-s)A}\left[DG\big{(}m(s)\big{)}\right]G\big{(}m(s)\big{)}\,ds.$
(8.1)
Idea of the proof of (3.13): The proof consists of two main steps.
Step 1 shows the bound on the first term in the inequality (3.13). We consider
the above mentioned mild formulation (8.1). Instead of showing the bound
directly on the process $m$, the bound will be shown on each term on the right
hand side of (8.1). Step 2 will use the bound so obtained to show a bound on
the second term in the inequality (3.13).
The following properties of the operators $A,A_{1}$ will be used throughout
the proof.
1. (1)
$e^{-tA}$ is ultracontractive, see Section 7.2.7 in [4]. That is, for $1\leq
p\leq q\leq\infty$, there exists a constant $C>0$ such that
$\left|e^{-tA}f\right|_{L^{q}}\leq\frac{C}{t^{\frac{1}{2}\left(\frac{1}{p}-\frac{1}{q}\right)}}\left|f\right|_{L^{p}}\
\text{for}\ f\in L^{p},\ t>0.$ (8.2)
2. (2)
$A$ has the maximal regularity property. Let $f\in
L^{2}\left(0,T;L^{2}\right)$ and
$\displaystyle v(t)=\int_{0}^{t}e^{-(t-s)A}f(s)\,ds,\,\quad t\in[0,T].$
Then we have
$\displaystyle\int_{0}^{t}\left|Av(t)\right|_{L^{2}}^{2}\,dt\leq
C\int_{0}^{t}\left|f(t)\right|_{L^{2}}^{2}\,dt.$ (8.3)
3. (3)
The operator $A_{1}=I+A$ generates a semigroup (denoted by $e^{-tA_{1}}$), see
Theorem 1.1 in [55].
Thus using (8.2) for $f\in L^{p}$ and $t>0$, we get
$\displaystyle\left|e^{-tA_{1}}f\right|_{L^{q}}$
$\displaystyle=\left|e^{-tA}e^{-tI}f\right|_{L^{q}}$ $\displaystyle\leq
C\left|e^{-tA}f\right|_{L^{q}}$
$\displaystyle\leq\frac{C}{t^{\frac{1}{2}\left(\frac{1}{p}-\frac{1}{q}\right)}}\left|f\right|_{L^{p}}.$
(8.4)
4. (4)
The operators $A^{\delta}e^{-tA}$ and $A_{1}^{\delta}e^{-tA_{1}}$ are bounded
on $L^{2}$, see Theorem 6.13 in [55]. Moreover, there exists a constant $C>0$
such that
$\displaystyle\left|A^{\delta}e^{-tA}\right|\leq\frac{C}{t^{\delta}}$ (8.5)
and
$\displaystyle\left|A_{1}^{\delta}e^{-tA_{1}}\right|\leq\frac{C}{t^{\delta}}.$
(8.6)
Here $\left|A^{\delta}e^{-tA}\right|$ and
$\left|A_{1}^{\delta}e^{-tA_{1}}\right|$ denote the operator norms of
$A^{\delta}e^{-tA}$ and $A_{1}^{\delta}e^{-tA_{1}}$ respectively.
Step 1: We show that
$\displaystyle\mathbb{E}\int_{0}^{T}|\nabla m(t)|_{L^{4}}^{4}\,dt<\infty.$
(8.7)
The following Sobolev embedding holds for
$\delta\in\left(\frac{5}{8},\frac{3}{4}\right)$, see Lemma C.1:
$\displaystyle X^{\delta}\hookrightarrow W^{1,4}.$
It is thus sufficient to prove the following stronger estimate to show (8.7).
$\displaystyle\mathbb{E}\int_{0}^{T}\left|A_{1}^{\delta}m(t)\right|_{L^{2}}^{4}\,dt<\infty.$
(8.8)
We recall that for $v\in X^{\delta}=D(A_{1}^{\delta})$,
$\left|v\right|_{X^{\delta}}=\left|A_{1}^{\delta}v\right|_{L^{2}}.$
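Explicitly, the reduction of (8.7) to (8.8) goes as follows, with $C$ denoting the embedding constant from Lemma C.1:
$\displaystyle\int_{0}^{T}|\nabla m(t)|_{L^{4}}^{4}\,dt\leq\int_{0}^{T}\left|m(t)\right|_{W^{1,4}}^{4}\,dt\leq C^{4}\int_{0}^{T}\left|m(t)\right|_{X^{\delta}}^{4}\,dt=C^{4}\int_{0}^{T}\left|A_{1}^{\delta}m(t)\right|_{L^{2}}^{4}\,dt,$
so that, taking expectations, (8.8) implies (8.7).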
The step will be further divided into three sub-steps. The first deals with
the first two terms appearing in the equality (8.1). In the second sub-step,
we consider a function $f$ satisfying certain bounds and show the bounds for
this $f$. The idea is that the remaining terms in (8.1) (except the terms with
the stochastic integral) fall into this category and hence it suffices to show
the calculations for $f$. The third sub-step deals with the terms that contain
the stochastic integral.
Sub step 1:
Consider the first term $e^{-tA}m_{0}$.
$\displaystyle|A_{1}^{\delta}e^{-tA}m_{0}|_{L^{2}}^{4}$
$\displaystyle=|\left(I+A\right)^{\delta}e^{tI_{L^{2}}}e^{-t\left(A+I\right)}m_{0}|_{L^{2}}^{4}\
(\text{Since}\ A_{1}=I_{L^{2}}+A)$ $\displaystyle\leq
Ce^{t}|A_{1}^{\delta}e^{-tA_{1}}m_{0}|_{L^{2}}^{4}\ (\text{Since}\
\left|e^{tI_{L^{2}}}\right|\leq Ce^{t})$ $\displaystyle\leq
Ce^{T}|A_{1}^{\delta-\frac{1}{2}}e^{-tA_{1}}A_{1}^{\frac{1}{2}}m_{0}|_{L^{2}}^{4}\
(\text{Since}\ \delta=\delta-\frac{1}{2}+\frac{1}{2})$
$\displaystyle\leq\frac{C}{t^{4(\frac{2\delta-1}{2})}}\left|A_{1}^{\frac{1}{2}}m_{0}\right|_{L^{2}}^{4}\
(\text{By}\ \eqref{Norm A1 delta e to the power A1 bound})$
$\displaystyle\leq\frac{C}{t^{4\delta-2}}\left|m_{0}\right|_{H^{1}}^{4}.\
(\text{Since}\
\left|A_{1}^{\frac{1}{2}}\cdot\right|_{L^{2}}=\left|\cdot\right|_{H^{1}})$
Hence
$\displaystyle\int_{0}^{T}|A_{1}^{\delta}e^{-tA}m_{0}|_{L^{2}}^{4}\,dt\leq\left|m_{0}\right|_{H^{1}}^{4}\int_{0}^{T}\frac{C}{t^{4\delta-2}}\,dt.$
Since $\delta<\frac{3}{4}$, the integral on the right hand side of the above
inequality is finite. Hence there exists a constant $C>0$ such that
$\int_{0}^{T}|A_{1}^{\delta}e^{-tA}m_{0}|_{L^{2}}^{4}\,dt\leq C.$ (8.9)
And hence
$\mathbb{E}\int_{0}^{T}|A_{1}^{\delta}e^{-tA}m_{0}|_{L^{2}}^{4}\,dt\leq C.$
(8.10)
For the second term, first we observe the following. Let $t\in[0,T]$. Since
the solution takes values on the unit sphere,
$\left|m(t,x)\right|_{\mathbb{R}^{3}}=1$ for a.e. $x\in\mathcal{O}$, and hence
$\displaystyle\int_{\mathcal{O}}\left|\nabla
m(t,x)\right|_{\mathbb{R}^{3}}^{2}\left|m(t,x)\right|_{\mathbb{R}^{3}}\,dx=\int_{\mathcal{O}}\left|\nabla
m(t,x)\right|_{\mathbb{R}^{3}}^{2}\,dx\leq\left|m(t)\right|_{H^{1}}^{2}.$
Hence
$\displaystyle\sup_{t\in[0,T]}\int_{\mathcal{O}}\left|\nabla
m(t,x)\right|_{\mathbb{R}^{3}}^{2}\left|m(t,x)\right|_{\mathbb{R}^{3}}\,dx\leq\sup_{t\in[0,T]}\left|m(t)\right|_{H^{1}}^{2}.$
For simplicity of notation, let $g(s)=\left|\nabla
m(s)\right|_{\mathbb{R}^{3}}^{2}m(s)$.
$\displaystyle\left|A_{1}^{\delta}e^{-(t-s)A}g(s)\right|_{L^{2}}$
$\displaystyle\leq C\left|A_{1}^{\delta}e^{-(t-s)A_{1}}g(s)\right|_{L^{2}}$
$\displaystyle=C\left|A_{1}^{\delta}e^{-\frac{(t-s)}{2}A_{1}}e^{-\frac{(t-s)}{2}A_{1}}g(s)\right|_{L^{2}}$
$\displaystyle\leq
C\left|A_{1}^{\delta}e^{-\frac{(t-s)}{2}A_{1}}\right|\left|e^{-\frac{(t-s)}{2}A_{1}}g(s)\right|_{L^{2}}$
$\displaystyle\leq
C\left|A_{1}^{\delta}e^{-\frac{(t-s)}{2}A_{1}}\right|\frac{1}{(t-s)^{\frac{1}{4}}}\left|g(s)\right|_{L^{1}}\
(\text{By}\ \eqref{Norm of e^-tA_1}\ \text{with}\ p=1,q=2)$
$\displaystyle\leq\frac{C}{\left(t-s\right)^{\delta+\frac{1}{4}}}\left|g(s)\right|_{L^{1}}\
\text{By}\ \eqref{Norm A1 delta e to the power A1 bound}$
$\displaystyle\leq\frac{C}{\left(t-s\right)^{\delta+\frac{1}{4}}}\left|m(s)\right|_{H^{1}}^{2}.$
Therefore,
$\displaystyle\int_{0}^{T}\left|\int_{0}^{t}A_{1}^{\delta}e^{-(t-s)A}g(s)\,ds\right|_{L^{2}}^{4}\,dt$
$\displaystyle\leq
C\sup_{s\in[0,T]}\left|m(s)\right|_{H^{1}}^{8}\int_{0}^{T}\left(\int_{0}^{t}\frac{1}{\left(t-s\right)^{\delta+\frac{1}{4}}}\,ds\right)^{4}\,dt.$
Since $\delta<\frac{3}{4}$, that is, $\delta+\frac{1}{4}<1$, the integral on
the right hand side is finite.
Hence there exists a constant $C>0$ such that
$\mathbb{E}\int_{0}^{T}\left|\int_{0}^{t}A_{1}^{\delta}e^{-(t-s)A}g(s)\,ds\right|_{L^{2}}^{4}\,dt\leq
C.$ (8.11)
Sub step 2:
Consider a function $f\in
L^{4}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right)$.
There exist constants $C_{1},C_{2}>0$ such that
$\displaystyle\left|A_{1}^{\delta}e^{-(t-s)A}f(s)\right|_{L^{2}}$
$\displaystyle=\left|A_{1}^{\delta}e^{-(t-s)A_{1}}e^{(t-s)I_{L^{2}}}f(s)\right|_{L^{2}}$
$\displaystyle\leq\left|A_{1}^{\delta}e^{-(t-s)A_{1}}e^{(t-s)I_{L^{2}}}\right|\left|f(s)\right|_{L^{2}}$
$\displaystyle\leq
C_{1}\left|A_{1}^{\delta}e^{-(t-s)A_{1}}\right|\left|f(s)\right|_{L^{2}}\
(\text{Since}\ \left|e^{(t-s)I_{L^{2}}}\right|\leq C_{1})$
$\displaystyle\leq\frac{C_{2}}{(t-s)^{\delta}}\left|f(s)\right|_{L^{2}}.\
(\text{By}\ \eqref{Norm A1 delta e to the power A1 bound})$
Therefore replacing the constants $C_{1},C_{2}$ above by a suitable constant
$C$, we get
$\displaystyle\int_{0}^{T}\left(\int_{0}^{t}\left|A_{1}^{\delta}e^{-(t-s)A}f(s)\right|_{L^{2}}\,ds\right)^{4}\,dt\leq
C\int_{0}^{T}\left(\int_{0}^{t}\frac{1}{(t-s)^{\delta}}\left|f(s)\right|_{L^{2}}\,ds\right)^{4}\,dt,$
Using Young’s convolution inequality for $p=\frac{4}{3}$ and $q=2$, we get
$\displaystyle\int_{0}^{T}\left(\int_{0}^{t}\frac{1}{(t-s)^{\delta}}\left|f(s)\right|_{L^{2}}\,ds\right)^{4}\,dt\leq\left(\int_{0}^{T}s^{-\frac{4\delta}{3}}\,ds\right)^{\left(\frac{3}{4}\right)\left(4\right)}\left(\int_{0}^{T}\left|f(s)\right|_{L^{2}}^{2}\,ds\right)^{2}.$
Since $\delta<\frac{3}{4}$, we have $\frac{4\delta}{3}<1$, and hence the first
integral on the right hand side of the above inequality is finite. Therefore
$\displaystyle\int_{0}^{T}\left(\int_{0}^{t}\frac{1}{(t-s)^{\delta}}\left|f(s)\right|_{L^{2}}\,ds\right)^{4}\,dt\leq
C\left(\int_{0}^{T}\left|f(s)\right|_{L^{2}}^{2}\,ds\right)^{2}.$
Therefore
$\displaystyle\mathbb{E}\int_{0}^{T}\left(\int_{0}^{t}\frac{1}{(t-s)^{\delta}}\left|f(s)\right|_{L^{2}}\,ds\right)^{4}\,dt\leq
C\mathbb{E}\left(\int_{0}^{T}\left|f(s)\right|_{L^{2}}^{2}\,ds\right)^{2}<\infty.$
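For completeness, we record how the exponents in Young's convolution inequality fit together here. With $\varphi(s)=s^{-\delta}\mathbf{1}_{(0,T]}(s)$ and $\psi(s)=\left|f(s)\right|_{L^{2}}\mathbf{1}_{[0,T]}(s)$, the estimate above reads $\left|\varphi\ast\psi\right|_{L^{4}(0,T)}\leq\left|\varphi\right|_{L^{4/3}(0,T)}\left|\psi\right|_{L^{2}(0,T)}$, which is admissible since
$\displaystyle 1+\frac{1}{4}=\frac{3}{4}+\frac{1}{2},$
and
$\displaystyle\left|\varphi\right|_{L^{4/3}(0,T)}^{4}=\left(\int_{0}^{T}s^{-\frac{4\delta}{3}}\,ds\right)^{3}<\infty,\ \text{since}\ \frac{4\delta}{3}<1.$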
Now consider the remaining terms on the right hand side of the equality (8.1),
except for the terms with the Itô integral.
By Theorem 3.3, the solution $m$ takes values on the unit sphere in
$\mathbb{R}^{3}$. By the bounds mentioned in Theorem 3.3 and Assumption
3.1 on the control process $u$, we have
$\displaystyle m\times\Delta m\in
L^{4}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right).$ (8.12)
The constraint condition (3.9) implies that
$\displaystyle m\times\left(m\times\Delta m\right)\in
L^{4}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right).$ (8.13)
Assumption 3.1 on $u$, along with the constraint condition (3.9) and the
assumption on the function $h$, implies that
$\displaystyle m\times u\in
L^{4}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right),$ (8.14)
$\displaystyle m\times\left(m\times u\right)\in
L^{4}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right),$ (8.15)
and
$\displaystyle DG\left(m\right)\bigl{(}G(m)\bigr{)}\in
L^{4}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right).$ (8.16)
Note that Assumption 3.1 has been applied here with $p=2$.
Hence each of the integrands (except for the terms with the Itô integral)
takes values in
$L^{4}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right)$. Hence by replacing $f$
in the above calculations by the integrands, one can show that each of the
terms also satisfies the required bounds.
Sub step 3:
What remains now is the Itô integral term. Recall that by Proposition 2.2 and
the bound on the process $m$ in Theorem 3.3,
$\displaystyle\mathbb{E}\int_{0}^{T}\left|G(m(t))\right|_{H^{1}}^{4}\,dt\leq
C\mathbb{E}\int_{0}^{T}\left|m(t)\right|_{H^{1}}^{4}\,dt<\infty.$ (8.17)
$\displaystyle\mathbb{E}\left|\int_{0}^{t}A_{1}^{\delta}e^{-(t-s)A_{1}}G(m(s))\,dW(s)\right|_{L^{2}}^{4}$
$\displaystyle\leq
C\mathbb{E}\left(\int_{0}^{t}\left|A_{1}^{\delta}e^{-(t-s)A_{1}}G(m(s))\right|^{2}_{L^{2}}\,ds\right)^{2}$
$\displaystyle\quad(\text{See Proposition 7.3
\cite[cite]{[\@@bibref{}{Prato+Zabczyk}{}{}]}})$ $\displaystyle\leq
C\mathbb{E}\left(\int_{0}^{t}\left|A_{1}^{\delta-\frac{1}{2}}e^{-(t-s)A_{1}}A_{1}^{\frac{1}{2}}G(m(s))\right|^{2}_{L^{2}}\,ds\right)^{2}$
$\displaystyle\quad(\text{Since}\ \delta=\delta-\frac{1}{2}+\frac{1}{2})$
$\displaystyle\leq
C\mathbb{E}\left(\int_{0}^{t}\frac{1}{(t-s)^{2\delta-1}}\left|A_{1}^{\frac{1}{2}}G(m(s))\right|^{2}_{L^{2}}\,ds\right)^{2}\
(\text{By}\ \eqref{Norm A1 delta e to the power A1 bound})$ $\displaystyle\leq
C\mathbb{E}\left(\int_{0}^{t}\frac{1}{(t-s)^{2\delta-1}}\left|G(m(s))\right|^{2}_{H^{1}}\,ds\right)^{2}$
$\displaystyle\quad(\text{Since}\
\left|A_{1}^{\frac{1}{2}}\centerdot\right|_{L^{2}}=\left|\centerdot\right|_{H^{1}})$
$\displaystyle\leq
C\left(\int_{0}^{t}\frac{1}{(t-s)^{4\delta-2}}\,ds\right)\mathbb{E}\left(\int_{0}^{t}\left|G(m(s))\right|^{4}_{H^{1}}\,ds\right).$
(Cauchy–Schwarz inequality; note that the first integral is deterministic)
Here $\delta<\frac{3}{4}$ implies that $4\delta-2<1$. Hence the first integral
is finite. The second integral is finite because of the inequality (8.17).
Integrating in $t$ over $[0,T]$ and combining all the inequalities, the bound
(8.8), and hence (8.7), follows.
Step 2: This step uses the following identity. Let $a,b\in\mathbb{R}^{3}$. Then
$\displaystyle\left|a\times b\right|_{\mathbb{R}^{3}}^{2}+\left|\left\langle
a,b\right\rangle_{\mathbb{R}^{3}}\right|^{2}=\left|a\right|_{\mathbb{R}^{3}}^{2}\left|b\right|_{\mathbb{R}^{3}}^{2}.$
(8.18)
Brief proof of the equality (8.18):
$\displaystyle\left|a\times b\right|_{\mathbb{R}^{3}}^{2}$
$\displaystyle=\left\langle a\times b,a\times
b\right\rangle_{\mathbb{R}^{3}}=\left\langle a,b\times\left(a\times
b\right)\right\rangle_{\mathbb{R}^{3}}.$
We expand the right hand side using the triple product formula and simplify to
get the identity (8.18).
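In detail, the vector triple product formula $b\times\left(a\times b\right)=a\left|b\right|_{\mathbb{R}^{3}}^{2}-b\left\langle a,b\right\rangle_{\mathbb{R}^{3}}$ gives
$\displaystyle\left\langle a,b\times\left(a\times b\right)\right\rangle_{\mathbb{R}^{3}}=\left|a\right|_{\mathbb{R}^{3}}^{2}\left|b\right|_{\mathbb{R}^{3}}^{2}-\left|\left\langle a,b\right\rangle_{\mathbb{R}^{3}}\right|^{2},$
which is the identity (8.18).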
For Lebesgue-a.e. $x\in\mathcal{O}$ and $t\in[0,T]$, the following equality
holds $\mathbb{P}$-a.s.
$\displaystyle\left|m(t,x)\times\Delta
m(t,x)\right|_{\mathbb{R}^{3}}^{2}+\left|\left\langle m(t,x),\Delta
m(t,x)\right\rangle_{\mathbb{R}^{3}}\right|^{2}$
$\displaystyle=\left|m(t,x)\right|_{\mathbb{R}^{3}}^{2}\left|\Delta
m(t,x)\right|_{\mathbb{R}^{3}}^{2}$ $\displaystyle=\left|\Delta
m(t,x)\right|_{\mathbb{R}^{3}}^{2}.$
Hence to show the bound on the second term, it is sufficient to show the
corresponding bound on the two terms on the left hand side of the above
equality.
For the second term,
$\displaystyle\mathbb{E}\int_{0}^{T}\int_{\mathcal{O}}\left|\left\langle
m(t,x),\Delta
m(t,x)\right\rangle_{\mathbb{R}^{3}}\right|^{2}\,dx\,dt=\mathbb{E}\int_{0}^{T}\int_{\mathcal{O}}\left|\nabla
m(t,x)\right|_{\mathbb{R}^{3}}^{4}\,dx\,dt.$
The right hand side of the above equality is finite because of the bound (8.7)
in Step 1. This, along with the bound in Theorem 3.3 (for the first term)
concludes the proof of the bound on the second term.
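For completeness, we justify the pointwise equality used for the second term. Since $\left|m(t,x)\right|_{\mathbb{R}^{3}}^{2}=1$ for Lebesgue-a.e. $x\in\mathcal{O}$, differentiating twice in each spatial direction and summing gives
$\displaystyle\left\langle m(t,x),\Delta m(t,x)\right\rangle_{\mathbb{R}^{3}}=-\left|\nabla m(t,x)\right|_{\mathbb{R}^{3}}^{2},$
whence $\left|\left\langle m(t,x),\Delta m(t,x)\right\rangle_{\mathbb{R}^{3}}\right|^{2}=\left|\nabla m(t,x)\right|_{\mathbb{R}^{3}}^{4}$.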
Hence the proof of Theorem 3.5 is complete.
###### Lemma 8.1.
The process $m$ lies in the space $C\left(\left[0,T\right];H^{1}\right)$,
$\mathbb{P}$-a.s.
We postpone the proof of this lemma to Appendix A.
## 9\. Proof of Theorem 3.7 : Optimal control
The objective of this section is to show that there exists an optimal control
to the problem (3.7), with an appropriate admissibility criterion. We fix a
probability space $(\Omega,\mathcal{F},\mathbb{P})$ as in Section 3.
###### Outline of the section.
We start by giving an equivalent equation (9.1) to equation (3.7). We follow it
up with the definition of a strong martingale solution to the problem in
Definition 9.1. Assumption 9.3 outlines the assumption that is required on the
control processes. The class $\mathcal{U}_{ad}(m_{0},T)$ of admissible
solutions is then defined. This is followed by a proof of Theorem 3.7. ∎
For the remainder of this section, we will consider the following equation.
For $t\in[0,T]$
$\displaystyle m(t)=$ $\displaystyle m_{0}+\int_{0}^{t}m(s)\times\Delta
m(s)\,ds-\alpha\,\int_{0}^{t}m(s)\times(m(s)\times u(s))\,ds$
$\displaystyle+\alpha\,\int_{0}^{t}\Delta m(s)\,ds+\alpha\,\int_{0}^{t}|\nabla
m(s)|_{\mathbb{R}^{3}}^{2}m(s)\,ds$ $\displaystyle+\int_{0}^{t}m(s)\times
u(s)\,ds+\frac{1}{2}\int_{0}^{t}\left[DG\left(m(s)\right)\right]\left(G(m\left(s\right))\right)\,ds+\int_{0}^{t}G(m(s))\,dW(s),\
\mathbb{P}-a.s.$ (9.1)
Recall that by Corollary 7.3, the equation (3.7) and the above equation (9.1)
are equivalent in $(H^{1})^{\prime}$, since $m$ satisfies the constraint
condition.
###### Definition 9.1 (Strong martingale solution).
Let the initial data $m_{0}$, the function $h$ and time $T$ be fixed. A strong
martingale solution of (9.1) is a tuple
$\pi=(\Omega,\mathcal{F},\mathbb{F},\mathbb{P},W,m,u)$
such that $\pi$ is a weak martingale solution as in Definition 3.2 and the
process $m$ satisfies the additional regularity property (3.13), i.e.
$\displaystyle\mathbb{E}\left(\int_{0}^{T}|\nabla
m(t)|_{L^{4}}^{4}\,dt+\int_{0}^{T}|A_{1}m(t)|_{L^{2}}^{2}\,dt\right)<\infty.$
(9.2)
###### Remark 9.2.
A weak martingale solution is defined for the problem (3.7). By Corollary 7.3
the equations (3.7) and (9.1) are equivalent in $(H^{1})^{\prime}$. Hence the
above definition makes sense.
Hence Theorem 7.5 implies that the problem (9.1), with the initial data
$m_{0}$, has a strong solution corresponding to any control process satisfying
Assumption 3.1.
###### Assumption 9.3 (Admissibility criterion for the control process).
We say that a given control process $u$ satisfies the admissibility criterion
if for each $p\geq 1$ there exists a constant $K_{p}>0$ such that
$\mathbb{E}\left(\int_{0}^{T}\left|u(t)\right|_{L^{2}}^{2}\,dt\right)^{p}\leq
K_{p}.$ (9.3)
In particular, we assume (9.3) for $p=4$.
We now describe the class of admissible solutions over which the cost function
will be minimized. Let us fix the law of the initial data $m_{0}$ such that it
satisfies the assumptions in Theorem 3.3. Also fix the function $h\in H^{1}$.
Fix $T<\infty$. Consider a tuple
$\pi=(\Omega,\mathcal{F},\mathbb{F},\mathbb{P},W,m,u)$ which is a strong
martingale solution to (9.1) as defined in Definition 9.1. Let the control
process $u$ also satisfy Assumption 9.3 for $p=4$. Hence the process $m$
satisfies the bounds mentioned in Theorem 3.3. Such a tuple $\pi$ will be
called an admissible solution and the space of all such admissible solutions
will be denoted by $\mathcal{U}_{ad}(m_{0},T)$.
###### Remark 9.4.
Even if the tuples are strong martingale solutions, the equations still make
sense in $(H^{1})^{\prime}$, (and even in $L^{2}$, see Corollary 7.1) due to
the regularity proved in Theorem 3.5.
We recall the optimal control problem here for the reader’s convenience.
The cost functional is defined as follows. Let
$\pi=\left(\Omega,\mathcal{F},\mathbb{F},\mathbb{P},W,m,u\right)\in\mathcal{U}_{ad}(m_{0},T)$.
Assume that the terminal cost $\Psi$ is continuous on $L^{2}$. For a given
process (desired state) $\bar{m}\in L^{2}(\Omega;L^{2}(0,T;\mathcal{S}^{2}))$,
define
$J(\pi)=\mathbb{E}\left[\int_{0}^{T}\left(\left|m(t)-\bar{m}(t)\right|_{H^{1}}^{2}+\left|u(t)\right|_{L^{2}}^{2}\right)\,dt+\Psi\left(m(T)\right)\right].$
(9.4)
Our aim is to minimize the above mentioned cost functional over the space
$\mathcal{U}_{ad}(m_{0},T)$.
Stated formally, the optimal control problem is to find an admissible solution
$\pi^{*}\in\mathcal{U}_{ad}(m_{0},T)$ such that
$J(\pi^{*})=\inf_{\pi\in\mathcal{U}_{ad}(m_{0},T)}J(\pi).$ (9.5)
Let us denote the infimum of the cost functional by $\Lambda$. That is
$\inf_{\pi\in\mathcal{U}_{ad}(m_{0},T)}J(\pi)=\Lambda.$ (9.6)
###### Idea of the proof of Theorem 3.7.
First, we show that the set of admissible solutions is non-empty. Hence the
infimum $\Lambda$ is finite. This implies the existence of a minimizing
sequence $\\{\pi_{n}\\}_{n\in\mathbb{N}}$. Lemma 9.6 and Lemma 9.7 show that
the minimizing sequence $\\{\pi_{n}\\}_{n\in\mathbb{N}}$ is uniformly bounded.
Lemma 9.8 shows that the minimizing sequence is bounded in the maximal
regularity space. Further, Lemma 9.9 shows that the sequence of laws of
$\left(m_{n},u_{n}\right)$ is tight on the space $L^{2}(0,T;H^{1})\cap
C([0,T];L^{2})\times L^{2}_{w}(0,T;L^{2})$. In Proposition 9.10, we use
Jakubowski’s version of the Skorohod Theorem to obtain another sequence
$\\{\left(m^{\prime}_{n},u^{\prime}_{n}\right)\\}_{n\in\mathbb{N}}$ of
processes, along with random variables $m^{\prime},u^{\prime},W^{\prime}$,
possibly on a different probability space
$\left(\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{F}^{\prime},\mathbb{P}^{\prime}\right)$.
As before, we denote the tuples
$\pi_{n}^{\prime}:=\left(\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{F}^{\prime},\mathbb{P}^{\prime},m^{\prime}_{n},u^{\prime}_{n},W^{\prime}_{n}\right)$,
$n\in\mathbb{N}$, and
$\pi^{\prime}:=\left(\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{F}^{\prime},\mathbb{P}^{\prime},m^{\prime},u^{\prime},W^{\prime}\right)$.
Proposition 9.10 further gives us pointwise convergence of the processes
$m^{\prime}_{n},u^{\prime}_{n}$ and $W_{n}^{\prime}$ to their corresponding
limits in $\pi^{\prime}$, in appropriate spaces. Lemma 9.13, Lemma 9.14 and
Lemma 9.15 establish uniform bounds on the newly obtained processes
$m^{\prime}_{n},n\in\mathbb{N}$ and $m^{\prime}$. Then arguing similarly to
Section 5, we show that the obtained tuple $\pi^{\prime}$ is a strong
martingale solution of the problem (9.1). A main difference in the calculations
is that in Section 5 we consider processes that have values in finite
dimensional spaces, whereas that cannot be assumed here. One needs to be
careful while applying the Kuratowski Theorem. Some more details are given in
Remark 9.12. Moreover, we go on to show that the obtained tuple $\pi^{\prime}$
is an admissible solution. Then we show that the infimum for the cost $J$ is
attained at $\pi^{\prime}$, thus showing the existence of an optimal control
and completing the proof. ∎
###### Remark 9.5.
Before we begin with the proof of Theorem 3.7, we make a small comment.
Theorem 7.5, combined with Remark 9.2, gives the existence of a strong
solution for the problem (9.1). That is, given a filtered probability space
$\left(\Omega,\mathcal{F},\mathbb{F},\mathbb{P}\right)$, a Wiener process $W$,
initial data and a control process $u$ on the given space, there exists a
process $m$ which is a solution of the problem (9.1). The optimization problem
can then be posed by fixing the given probability space and Wiener process,
and then finding a tuple $\left(m^{*},u^{*}\right)$ such that:
1. (1)
$m^{*}$ is a solution of the problem (9.1) corresponding to the control
process $u^{*}$.
2. (2)
The tuple $\left(m^{*},u^{*}\right)$ minimizes the cost (9.4) on the given
probability space.
This could be one way of formulating the problem. However, as it does not
contribute significantly to the results that follow, we do not pursue it here.
###### Proof of Theorem 3.7.
Theorem 3.3 along with Theorem 3.5 shows that the space
$\mathcal{U}_{ad}(m_{0},T)$ is non-empty. Hence $\Lambda<\infty$, and there
exists a minimizing sequence $\\{\pi_{n}\\}_{n\in\mathbb{N}}$ of strong
martingale solutions,
$\pi_{n}=(\Omega_{n},\mathcal{F}_{n},\mathbb{F}_{n},\mathbb{P}_{n},W_{n},m_{n},u_{n}).$
That is
$\lim_{n\rightarrow\infty}J(\pi_{n})=\Lambda.$ (9.7)
Since $\\{\pi_{n}\\}_{n\in\mathbb{N}}$ is a minimizing sequence, there exists
a constant $R>0$ such that for each $n\in\mathbb{N}$,
$J(\pi_{n})\leq R.$ (9.8)
Hence there exists a constant $C>0$ such that for any $n\in\mathbb{N}$,
$\mathbb{E}^{n}\int_{0}^{T}\left|m_{n}(t)\right|_{H^{1}}^{2}\,dt\leq C$ (9.9)
and
$\mathbb{E}^{n}\int_{0}^{T}\left|u_{n}(t)\right|_{L^{2}}^{2}\,dt\leq K_{1}.$
(9.10)
Here $\mathbb{E}^{n}$ denotes the expectation with respect to the probability
space $\left(\Omega_{n},\mathcal{F}_{n},\mathbb{P}_{n}\right)$.
Before we continue with the main line of the proof we formulate and prove some
essential auxiliary results.
∎
###### Lemma 9.6.
There exists a constant $C>0$ such that for each $n\in\mathbb{N}$, the
following bounds hold.
$\displaystyle\mathbb{E}^{n}\int_{0}^{T}\left|m_{n}(t)\right|_{H^{1}}^{2}\,dt\leq
C,$ (9.11)
$\displaystyle\mathbb{E}^{n}\sup_{t\in[0,T]}\left|m_{n}(t)\right|_{H^{1}}^{4}\leq
C,$ (9.12)
$\mathbb{E}^{n}\int_{0}^{T}\left|m_{n}(s)\times\Delta
m_{n}(s)\right|_{L^{2}}^{2}\,ds\leq C,$ (9.13)
$\mathbb{E}^{n}\int_{0}^{T}\left|m_{n}(s)\times\left(m_{n}(s)\times\Delta
m_{n}(s)\right)\right|_{L^{2}}^{2}\,ds\leq C,$ (9.14)
$\mathbb{E}^{n}\int_{0}^{T}\left|m_{n}(s)\times
u_{n}(s)\right|_{L^{2}}^{2}\,ds\leq C,$ (9.15)
$\mathbb{E}^{n}\int_{0}^{t}\left|m_{n}(s)\times\left(m_{n}(s)\times
u_{n}(s)\right)\right|_{L^{2}}^{2}\,ds\leq C.$ (9.16)
###### Proof of Lemma 9.6.
The first inequality (9.11) follows from the fact that $\pi_{n}$ is a
minimizing sequence and the inequality (9.9).
The following equation is satisfied by the process $m_{n}$ for all $t\in[0,T]$
$\displaystyle m_{n}(t)$
$\displaystyle=m_{n}(0)+\int_{0}^{t}m_{n}(s)\times\Delta
m_{n}(s)\,ds-\alpha\,\int_{0}^{t}m_{n}(s)\times\left(m_{n}(s)\times\Delta
m_{n}(s)\right)\,ds$ $\displaystyle+\int_{0}^{t}m_{n}(s)\times
u_{n}(s)\,ds-\alpha\,\int_{0}^{t}m_{n}(s)\times\left(m_{n}(s)\times
u_{n}(s)\right)\,ds$
$\displaystyle+\frac{1}{2}\int_{0}^{t}\left[DG(m_{n}(s))\right]\left(G(m_{n}(s))\right)\,ds+\int_{0}^{t}G(m_{n}(s))\,dW_{n}(s),\
\mathbb{P}_{n}-a.s.$ (9.17)
Let $\bar{\phi}:H^{1}\to\mathbb{R}$ be given by
$\bar{\phi}(v)=\frac{1}{2}\left|\nabla v\right|_{L^{2}}^{2}.$ (9.18)
We now apply the Itô Lemma to the above function. The calculations are
similar to the proofs of Lemma 4.9 and Lemma 4.10, and hence are skipped. A
difference is that the calculations here are in infinite dimensions, for which
we apply the Itô formula from [53]. It is therefore sufficient to show that
the integrands on the right hand side of the equality (9.17) lie in appropriate
spaces, see [53], so that the Itô formula can be applied. Theorem 3.3 implies
that the terms $m_{n}\times\Delta m_{n},m_{n}\times\left(m_{n}\times\Delta
m_{n}\right)\in M^{2}(0,T;L^{2})$. For the definition of the space, see
Section 6, see also [53].
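For orientation, we record the formal cancellations behind the estimate (9.13); this is a formal computation, the rigorous version follows the lines of Lemma 4.9. Applying the Itô formula to $\bar{\phi}$ and integrating by parts (using the Neumann boundary condition), the term $m_{n}\times\Delta m_{n}$ contributes
$\displaystyle-\left\langle\Delta m_{n},m_{n}\times\Delta m_{n}\right\rangle_{L^{2}}=0,$
while, using $\left|m_{n}\right|_{\mathbb{R}^{3}}=1$ and the identity (8.18), the term $-\alpha\,m_{n}\times\left(m_{n}\times\Delta m_{n}\right)$ contributes
$\displaystyle\alpha\left\langle\Delta m_{n},m_{n}\times\left(m_{n}\times\Delta m_{n}\right)\right\rangle_{L^{2}}=-\alpha\left|m_{n}\times\Delta m_{n}\right|_{L^{2}}^{2}\leq 0.$
It is this dissipative term that yields the bound (9.13).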
By the constraint condition (3.9),
$\displaystyle\mathbb{E}^{n}\int_{0}^{T}\left|m_{n}(t)\times
u_{n}(t)\right|_{L^{2}}^{2}\,dt$
$\displaystyle\leq\mathbb{E}^{n}\int_{0}^{T}\left|u_{n}(t)\right|_{L^{2}}^{2}\,dt<\infty.$
The last inequality holds by (9.10).
Similarly, the constraint condition (3.9) implies that
$\displaystyle\mathbb{E}^{n}\int_{0}^{T}\left|m_{n}(t)\times\bigl{(}m_{n}(t)\times
u_{n}(t)\bigr{)}\right|_{L^{2}}^{2}\,dt$
$\displaystyle\leq\mathbb{E}^{n}\int_{0}^{T}\left|m_{n}(t)\times
u_{n}(t)\right|_{L^{2}}^{2}\,dt<\infty.$
By the assumption $h\in H^{1}$, the embedding $H^{1}\hookrightarrow
L^{\infty}$ and the constraint condition (3.9), we have
$\displaystyle\mathbb{E}^{n}\int_{0}^{T}\left|\big{[}DG(m_{n}(t))\big{]}\bigl{[}G\big{(}m_{n}(t)\big{)}\bigr{]}\right|_{L^{2}}^{2}\,dt<\infty.$
Hence $m_{n}\times u_{n},\ m_{n}\times(m_{n}\times u_{n}),\
\left[DG\big{(}m_{n}\big{)}\right]\left[G\big{(}m_{n}\big{)}\right]\in
M^{2}(0,T;L^{2})$.
Also, the constraint condition implies that
$\displaystyle\mathbb{E}^{n}\int_{0}^{T}\left|G(m_{n}(t))\right|_{L^{2}}^{2}\,dt<\infty.$
Hence $G(m_{n})\in M^{2}(0,T;L^{2})$. The inequalities (9.12), (9.13) then
follow by applying the Itô formula. The inequalities (9.14), (9.15) and (9.16)
follow from the assumption on $u_{n}$ and the constraint condition (3.9). ∎
###### Lemma 9.7.
Let $\gamma\in\left(0,\frac{1}{2}\right)$ and $p\geq 2$. Then there exists a
constant $C>0$ such that for each $n\in\mathbb{N}$, the following bound holds.
$\mathbb{E}^{n}\left[\left|m_{n}\right|^{2}_{W^{\gamma,p}(0,T;L^{2})}\right]\leq
C.$ (9.19)
###### Proof of Lemma 9.7.
The proof is similar to the proof of Lemma 4.10.
The idea of the proof is to show a stronger bound (in $W^{1,2}(0,T;L^{2})$)
for the terms without the stochastic integral, as done in the proof of Lemma
4.10. Then we use the embedding
$W^{1,2}(0,T;L^{2})\hookrightarrow W^{\gamma,p}(0,T;L^{2}),$ (9.20)
to conclude the bound. For the stochastic integral, the proof is similar to
the proof in Lemma 4.10, using Lemma C.2. ∎
Combining the bound (9.12) in Lemma 9.6 with Lemma 9.7, we have that
the sequence $\\{m_{n}\\}_{n\in\mathbb{N}}$ is bounded in the space
$L^{2}(\Omega;L^{\infty}(0,T;H^{1}))\cap
L^{2}(\Omega;W^{\gamma,p}(0,T;L^{2}))$.
That each $m_{n}$ satisfies (3.13) follows from Theorem 3.5. The aim here is
to show that the bound is uniform in $n\in\mathbb{N}$.
###### Lemma 9.8.
There exists a constant $C>0$ such that for all $n\in\mathbb{N}$,
$\displaystyle\mathbb{E}^{n}\left(\int_{0}^{T}|\nabla
m_{n}(t)|_{L^{4}}^{4}\,dt+\int_{0}^{T}|A_{1}m_{n}(t)|_{L^{2}}^{2}\,dt\right)\leq
C.$ (9.21)
###### Idea of the proof of Lemma 9.8.
That $m_{n}$ is a strong martingale solution for each $n\in\mathbb{N}$ implies
that the left hand side of the inequality (9.21) is finite for each
$n\in\mathbb{N}$. The aim of this lemma is to show that the constant on the
right hand side is independent of $n$. One can verify from the proof of
Theorem 3.5 that the bounds on the right hand side depend only on
$\mathbb{E}\left|u\right|_{L^{2}(0,T;L^{2})}^{2p}$, the initial data $m_{0}$
and the fixed time $T$. By Assumption 3.1 and the fact that
$\\{\pi_{n}\\}_{n\in\mathbb{N}}$ is a minimizing sequence, we can conclude the
lemma. ∎
###### An outline of the proof of Lemma 9.8.
To prove the lemma, we will follow Step 1 and Step 2 (Section 8) of the proof
of Theorem 3.5 and show that the bound on the right hand side does not depend
on $n$. In that direction, first we recall that by Lemma 9.6, the bounds on
$m_{n},u_{n}$ are independent of $n$.
We now recall Step 1 in the proof of Theorem 3.5. The bound on
$\mathbb{E}\int_{0}^{T}\left|A_{1}^{\delta}m_{n}(t)\right|_{L^{2}}^{4}\,dt$
depends only on the choice of $\delta$ and the
$L^{4}\left(\Omega;L^{2}\left(0,T;L^{2}\right)\right)$ norms of the functions
on the right hand side of (8.1). Following the above arguments, we can show
that the required bounds do not depend on $n\in\mathbb{N}$.
For the Itô integral term, we observe that the bound depends on the time $T$,
the choice of $\delta$ and the norm
$\mathbb{E}\int_{0}^{T}\left|G(m_{n}(t))\right|_{H^{1}}^{2}\,dt$, which again
depends on the norm
$\mathbb{E}\int_{0}^{T}\left|m_{n}(t)\right|_{H^{1}}^{2}\,dt$, the constraint
condition and the fixed function $h$. Hence, from the above arguments, this
bound also does not depend on $n\in\mathbb{N}$.
Going back to Step 2 of the proof of Theorem 3.5, we observe that, along with
Step 1, it is sufficient to bound the term $m_{n}\times\Delta m_{n}$ in order
to complete the proof of (9.21). Hence, combining the arguments above, we
conclude that the bound (9.21) is independent of $n\in\mathbb{N}$. ∎
From the bounds established in Lemma 9.8, we can prove that the sequence
$\\{m_{n}\\}_{n\in\mathbb{N}}$ is bounded in the space
$L^{2}(\Omega;L^{2}(0,T;H^{2}))\cap L^{2}(\Omega;W^{\gamma,p}(0,T;L^{2}))$.
We use the uniform bounds to show that the sequence of laws of $m_{n}$ is
tight on the space $L^{2}(0,T;H^{1})\cap C([0,T];L^{2})$. Similarly, we use
the uniform bound on the sequence of control processes $u_{n}$ to obtain
tightness of the corresponding laws on a suitable space. This is outlined in
the following lemma.
###### Lemma 9.9.
The sequence of laws of
$\left\\{\left(m_{n},u_{n}\right)\right\\}_{n\in\mathbb{N}}$ is tight on the
space $L^{2}(0,T;H^{1})\cap C([0,T];L^{2})\times L^{2}_{w}(0,T;L^{2})$.
###### Proof of Lemma 9.9.
The proof will be similar to the proof of Lemma 4.11. This lemma shows
tightness on a smaller (more regular) space than the previous counterpart. For
completeness, we give some details here. We show calculations for the sequence
$\\{m_{n}\\}_{n\in\mathbb{N}}$. Tightness for the sequence of laws of
$\\{u_{n}\\}_{n\in\mathbb{N}}$ follows similarly to Lemma 4.11. The main idea is
to show that the laws of $m_{n},n\in\mathbb{N}$ are concentrated inside a ball
in the space $L^{\infty}(0,T;H^{1})\cap L^{2}(0,T;H^{2})\cap
W^{\gamma,p}(0,T;L^{2})$, which is compactly embedded into the space
$L^{2}(0,T;H^{1})\cap C([0,T];L^{2})$.
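In symbols, the compact embedding underlying this strategy (valid for $\gamma p>1$; this merely restates how Lemmas C.7 and C.9 are used below, not a new claim) reads:

```latex
% Compact embedding used in the tightness argument (for \gamma p > 1):
L^{\infty}(0,T;H^{1}) \cap L^{2}(0,T;H^{2}) \cap W^{\gamma,p}(0,T;L^{2})
  \;\hookrightarrow\hookrightarrow\;
  L^{2}(0,T;H^{1}) \cap C([0,T];L^{2}).
```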
Towards that, let $r>0$ be arbitrary and fixed.
$\displaystyle\mathbb{P}_{n}\left(\left|m_{n}\right|_{L^{\infty}(0,T;H^{1})\cap
L^{2}(0,T;H^{2})\cap W^{\gamma,p}(0,T;L^{2})}\geq r\right)$
$\displaystyle\leq\mathbb{P}_{n}\left(\left|m_{n}\right|_{L^{\infty}(0,T;H^{1})}\geq\frac{r}{3}\right)+\mathbb{P}_{n}\left(\left|m_{n}\right|_{L^{2}(0,T;H^{2})}\geq\frac{r}{3}\right)+\mathbb{P}_{n}\left(\left|m_{n}\right|_{W^{\gamma,p}(0,T;L^{2})}\geq\frac{r}{3}\right)$
$\displaystyle\leq\frac{9}{r^{2}}\mathbb{E}^{n}\left|m_{n}\right|_{L^{\infty}(0,T;H^{1})}^{2}+\frac{9}{r^{2}}\mathbb{E}^{n}\left|m_{n}\right|_{L^{2}(0,T;H^{2})}^{2}+\frac{9}{r^{2}}\mathbb{E}^{n}\left|m_{n}\right|_{W^{\gamma,p}(0,T;L^{2})}^{2}$
$\displaystyle\leq\frac{C}{r^{2}}.$ (9.22)
The second-to-last inequality follows from the Chebyshev inequality, and for
the last one, Lemmas 9.7 and 9.8 provide the constant $C>0$. Observe that the
right hand side of the above inequality, and hence the left hand side, can be
made as small as desired by choosing $r$ large enough.
Let
$\displaystyle B_{r}:=\left\\{v\in L^{\infty}(0,T;H^{1})\cap L^{2}(0,T;H^{2})\cap W^{\gamma,p}(0,T;L^{2}):\left|v\right|_{L^{\infty}(0,T;H^{1})\cap L^{2}(0,T;H^{2})\cap W^{\gamma,p}(0,T;L^{2})}\geq r\right\\}.$ (9.23)
Let $\varepsilon>0$ be given. In order to show tightness of the laws, it
suffices to show that there exists a compact set $B^{\varepsilon}\subset
L^{2}(0,T;H^{1})\cap C([0,T];L^{2})$ such that for each $n\in\mathbb{N}$,
$\mathbb{P}_{n}\left(m_{n}\in B^{\varepsilon}\right)>1-\varepsilon.$ (9.24)
In (9.22), we choose $r$ such that $r^{2}>\frac{C}{\varepsilon}$. Therefore
$\mathbb{P}_{n}\left(m_{n}\in B_{r}\right)\leq\frac{C}{r^{2}}<\varepsilon.$ (9.25)
Let $B^{\varepsilon}$ denote the closure of the complement of this $B_{r}$.
Therefore for each $n\in\mathbb{N}$, we have
$\mathbb{P}_{n}\left(m_{n}\in B^{\varepsilon}\right)\geq
1-\mathbb{P}_{n}\left(m_{n}\in B_{r}\right)>1-\varepsilon.$ (9.26)
By Lemma C.7 and Lemma C.9, for $\gamma p>1$, the set $B^{\varepsilon}$ is a
compact subset of $L^{2}(0,T;H^{1})\cap C([0,T];L^{2})$. Hence the sequence of
laws $\left\\{\mathcal{L}(m_{n})\right\\}_{n\in\mathbb{N}}$ is tight on the
space $L^{2}(0,T;H^{1})\cap C([0,T];L^{2})$.
The proof for the tightness of the sequence of laws of $u_{n}$ on the space
$L^{2}_{w}(0,T;L^{2})$ is similar to the proof of Lemma 4.11. ∎
Note that each strong martingale solution has its own Wiener process. The
processes $W_{n}$ all have the same law on $C([0,T];\mathbb{R})$. Hence it
suffices to show that the law of a single $W_{n}$ is tight on the space
$C([0,T];\mathbb{R})$.
Let $n\in\mathbb{N}$. Since $C([0,T];\mathbb{R})$ is a Radon space, every
Borel probability measure on it is tight. Hence, given $\varepsilon>0$, there
exists a compact set $K_{\varepsilon}\subset C([0,T];\mathbb{R})$ such that
$\displaystyle\mathbb{P}_{n}\left(W_{n}\in K_{\varepsilon}\right)\geq
1-\varepsilon.$ (9.27)
Since $W_{n}$ and $W_{k}$ have the same laws on the space
$C([0,T];\mathbb{R})$, for any $n,k\in\mathbb{N}$,
$\displaystyle\mathbb{P}_{n}\left(W_{n}\in
K_{\varepsilon}\right)=\mathbb{P}_{k}\left(W_{k}\in K_{\varepsilon}\right)\geq
1-\varepsilon.$ (9.28)
Hence the sequence of laws of $\\{W_{n}\\}_{n\in\mathbb{N}}$ is tight on the
space $C([0,T];\mathbb{R})$.
Now that we have shown the tightness, we proceed as done in Section 5.
###### Proposition 9.10.
There exists a probability space
$\left(\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{P}^{\prime}\right)$ and a
sequence $\left(m^{\prime}_{n},u^{\prime}_{n},W_{n}^{\prime}\right)$ of
$L^{2}(0,T;H^{1})\cap C([0,T];L^{2})\times L^{2}_{w}(0,T;L^{2})\times
C([0,T];\mathbb{R})$-valued random variables, along with random variables
$\left(m^{\prime},u^{\prime},W^{\prime}\right)$ defined on $\Omega^{\prime}$
such that for each $n\in\mathbb{N}$, the law of
$\left(m_{n},u_{n},W_{n}\right)$ equals the law of
$\left(m^{\prime}_{n},u^{\prime}_{n},W_{n}^{\prime}\right)$ on
$L^{2}(0,T;H^{1})\cap C([0,T];L^{2})\times L^{2}_{w}(0,T;L^{2})\times
C([0,T];\mathbb{R})$ and the following convergences hold
$\mathbb{P}^{\prime}$-a.s. as $n$ goes to infinity.
$m^{\prime}_{n}\to m^{\prime}\ \text{in}\ L^{2}(0,T;H^{1})\cap
C([0,T];L^{2}),$ (9.29) $u^{\prime}_{n}\to u^{\prime}\ \text{in}\
L^{2}_{w}(0,T;L^{2}),$ (9.30) $W_{n}^{\prime}\to W^{\prime}\ \text{in}\
C([0,T];\mathbb{R}).$ (9.31)
###### Proof of Proposition 9.10.
The proof, similar to the proof of Proposition 5.1, follows from the
Jakubowski version of the Skorohod Theorem, see Theorem 3.11 in [19].
∎
###### Remark 9.11.
The processes $m^{\prime}$ and $u^{\prime}$ obtained in Proposition 9.10 are
Borel measurable. Let the filtration
$\mathbb{F}^{\prime}=\left(\mathcal{F}_{t}^{\prime}\right)_{t\in[0,T]}$ be defined by
$\displaystyle\mathcal{F}_{t}^{\prime}=\sigma\\{m^{\prime}(s),u^{\prime}(s),W^{\prime}(s):0\leq
s\leq t\\}.$
Hence $m^{\prime},u^{\prime}$ are $\mathbb{F}^{\prime}$-adapted. Thus, the
processes $m^{\prime}$ and $u^{\prime}$ have progressively measurable
modifications, see Proposition 1.12, [43]. From now on, these progressively
measurable modifications will be considered.
###### Remark 9.12.
This remark is written in the same spirit as that of Remark 5.5. The main
difference between Remark 5.5 and this Remark 9.12 is that we cannot use the
finite dimensionality of the spaces $H_{n}$ here. Let us show how we need to
modify the previous argument. First, we discuss the laws of
$m_{n},n\in\mathbb{N}$, and next we discuss the laws of
$u_{n},n\in\mathbb{N}$.
1. (1)
Note that the spaces $C([0,T];L^{2})$, $C([0,T];H^{1})$, $L^{2}(0,T;H^{1})$,
$L^{4}(0,T;W^{1,4})$, and $L^{2}(0,T;H^{2})$ are Polish spaces. In particular,
since the embedding of $C([0,T];H^{1})$ into the space $C([0,T];L^{2})\cap
L^{2}(0,T;H^{1})$ is continuous and injective, by using the Kuratowski
Theorem, Lemma C.10, we infer that the Borel subsets of $C([0,T];H^{1})$ are
also Borel subsets of $C([0,T];L^{2})\cap L^{2}(0,T;H^{1})$. Now, by Lemma
8.1, $\mathbb{P}_{n}\left\\{m_{n}\in C([0,T];H^{1})\right\\}=1$ for each
$n$; moreover, $m_{n}$ and $m^{\prime}_{n}$ have the same laws on
$C([0,T];L^{2})\cap L^{2}(0,T;H^{1})$, and $C([0,T];H^{1})$ is a Borel subset
of this space. We therefore deduce that
$\mathbb{P}^{\prime}\left\\{m^{\prime}_{n}\in C([0,T];H^{1})\right\\}=1,\
\text{for each}\ n\in\mathbb{N}.$
Arguing similarly (i.e. using the continuous embedding of the spaces
$C([0,T];H^{1})$, $L^{2}(0,T;H^{1})$, $L^{4}(0,T;W^{1,4})$, and
$L^{2}(0,T;H^{2})$ into the space $C([0,T];L^{2})\cap L^{2}(0,T;H^{1})$), we
can prove that the processes $m^{\prime}_{n},n\in\mathbb{N}$ satisfy the same
bounds as the processes $m_{n},n\in\mathbb{N}$, in particular the bounds
$(1)$, $(2)$ and $(3)$ in Lemma 4.9.
2. (2)
Regarding the control processes corresponding to the processes $u_{n}$ and
$u^{\prime}_{n}$, we have the following. Firstly, the space
$L^{2}_{w}(0,T;L^{2})$ is the space $L^{2}(0,T;L^{2})$ endowed with the weak
topology, which is weaker than the norm topology. Hence every open set in
$L^{2}_{w}(0,T;L^{2})$ is also open in $L^{2}(0,T;L^{2})$, so the Borel
sigma-algebra of $L^{2}_{w}(0,T;L^{2})$ is contained in the Borel
sigma-algebra of $L^{2}(0,T;L^{2})$. Moreover, by Theorem 7.19 in [66], see
also page 112 in [12], the two Borel sigma-algebras are in fact equal. By
Proposition 9.10,
we infer that for each $n\in\mathbb{N}$, the law of the process
$u_{n}^{\prime}$ is equal to the law of the process $u_{n}$ on the space
$L^{2}_{w}(0,T;L^{2})$. In particular, the following holds for any
constant $K>0$.
$\mathbb{P}_{n}\left\\{\left|u_{n}\right|_{L^{2}(0,T;L^{2})}\leq
K\right\\}=\mathbb{P}^{\prime}\left\\{\left|u^{\prime}_{n}\right|_{L^{2}(0,T;L^{2})}\leq
K\right\\}.$
Hence we infer that the processes $u^{\prime}_{n}$ satisfy the same bounds as
the processes $u_{n}$.
The processes $m^{\prime}_{n}$ and $u^{\prime}_{n}$, therefore, satisfy the
same bounds as the processes $m_{n}$ and $u_{n}$ respectively, for each
$n\in\mathbb{N}$. We state this in the following two lemmata.
###### Lemma 9.13.
There exists a constant $C>0$ such that for all $n\in\mathbb{N}$, the
following bounds hold.
$\displaystyle\mathbb{E}^{\prime}\sup_{t\in[0,T]}\left|m^{\prime}_{n}(t)\right|_{H^{1}}^{2}\leq
C,$ (9.32)
$\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}_{n}(s)\times\Delta
m^{\prime}_{n}(s)\right|_{L^{2}}^{2}\,ds\leq C,$ (9.33)
$\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}_{n}(s)\times\left(m^{\prime}_{n}(s)\times\Delta
m^{\prime}_{n}(s)\right)\right|_{L^{2}}^{2}\,ds\leq C,$ (9.34)
$\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}_{n}(s)\times
u^{\prime}_{n}(s)\right|_{L^{2}}^{2}\,ds\leq C,$ (9.35)
$\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}_{n}(s)\times\left(m^{\prime}_{n}(s)\times
u^{\prime}_{n}(s)\right)\right|_{L^{2}}^{2}\,ds\leq C.$ (9.36)
###### Proof of Lemma 9.13.
The proof of this Lemma is similar to the proof of Proposition 5.6. It follows
from the bounds established in Lemma 9.6. ∎
We now use Lemma 9.8 along with Remark 9.12 to obtain the following lemma.
###### Lemma 9.14.
There exists a constant $C>0$ such that for all $n\in\mathbb{N}$,
$\displaystyle\mathbb{E}^{\prime}\left(\int_{0}^{T}\left|m^{\prime}_{n}(t)\right|_{W^{1,4}}^{4}\,dt+\int_{0}^{T}|m^{\prime}_{n}(t)|_{H^{2}}^{2}\,dt\right)\leq
C.$ (9.37)
###### Proof of Lemma 9.14.
The proof follows from Lemma 9.8 and Remark 9.12. ∎
Having shown uniform estimates for the sequence
$\\{m^{\prime}_{n}\\}_{n\in\mathbb{N}}$, we show similar bounds for the limit
process $m^{\prime}$.
###### Lemma 9.15.
The process $m^{\prime}$ satisfies the following bounds.
1. (1)
$\sup_{0\leq t\leq T}\left|m^{\prime}(t)\right|_{L^{2}}\leq|m_{0}|_{L^{2}},\
\mathbb{P}^{\prime}-\text{a.s.}$ (9.38)
2. (2)
$\mathbb{E}^{\prime}\sup_{0\leq t\leq
T}\left|m^{\prime}(t)\right|^{4}_{H^{1}}<\infty,$ (9.39)
3. (3)
$\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}(t)\right|_{W^{1,4}}^{4}\,dt<\infty,$
(9.40)
4. (4)
$\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}(t)\right|_{H^{2}}^{2}\,dt<\infty.$
(9.41)
###### Proof.
The proof is essentially the same as that of Lemma 5.7; we sketch the proofs
of the last two inequalities here.
For the last inequality, we first extend the norm $\left|\cdot\right|_{H^{2}}$
to the space $H^{1}$ as follows.
$\displaystyle\left|v\right|_{H^{2}}=\begin{cases}\left|v\right|_{H^{2}},&\mbox{ if }\ v\in H^{2},\\\ \infty,&\mbox{ if }\ v\in H^{1}\ \text{and}\ v\notin H^{2}.\end{cases}$
This extended norm is lower semicontinuous with respect to the $H^{1}$
topology. Therefore, $\mathbb{P}^{\prime}$-a.s., the following holds for
almost every $t\in[0,T]$.
$\displaystyle\left|m^{\prime}(t)\right|_{H^{2}}^{2}\leq\liminf_{n\rightarrow\infty}\left|m^{\prime}_{n}(t)\right|_{H^{2}}^{2}.$
Hence by the Fatou Lemma,
$\displaystyle\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}(t)\right|_{H^{2}}^{2}\,dt\leq\liminf_{n\to\infty}\mathbb{E}^{\prime}\int_{0}^{T}\left|m^{\prime}_{n}(t)\right|_{H^{2}}^{2}\,dt.$
The bound in Lemma 9.14 implies that the right hand side of the above
inequality is finite. This concludes the proof.
For the second-to-last inequality, we extend the norm
$\left|\cdot\right|_{L^{4}(0,T;W^{1,4})}$ to the space $L^{2}(0,T;L^{2})$
as follows.
$\displaystyle\left|v\right|_{L^{4}(0,T;W^{1,4})}=\begin{cases}\left|v\right|_{L^{4}(0,T;W^{1,4})},&\text{if}\ v\in L^{4}(0,T;W^{1,4}),\\\ \infty,&\text{if}\ v\in L^{2}(0,T;L^{2})\ \text{and}\ v\notin L^{4}(0,T;W^{1,4}).\end{cases}$
The map so defined is lower semicontinuous. Therefore the following holds
$\mathbb{P}^{\prime}$-a.s.
$\displaystyle\left|m^{\prime}\right|_{L^{4}(0,T;W^{1,4})}\leq\liminf_{n\rightarrow\infty}\left|m^{\prime}_{n}\right|_{L^{4}(0,T;W^{1,4})}.$
Hence by the Fatou Lemma,
$\displaystyle\mathbb{E}^{\prime}\left|m^{\prime}\right|^{4}_{L^{4}(0,T;W^{1,4})}\leq\liminf_{n\rightarrow\infty}\mathbb{E}^{\prime}\left|m^{\prime}_{n}\right|^{4}_{L^{4}(0,T;W^{1,4})}<\infty.$
(9.42)
This concludes the proof of Lemma 9.15. ∎
This concludes the auxiliary results. We now use them to complete the proof of Theorem 3.7.
###### Continuation of the proof of Theorem 3.7.
We now show that the limit obtained above is a strong martingale solution to
the problem (9). To this end, we first show that it is a weak martingale
solution, for which we need to verify that the process $m^{\prime}$ satisfies
(9) on the corresponding probability space.
# Efficient and robust high-dimensional sparse logistic regression via
nonlinear primal-dual hybrid gradient algorithms
Jérôme Darbon and Gabriel P. Langlois
###### Abstract
Logistic regression is a widely used statistical model to describe the
relationship between a binary response variable and predictor variables in
data sets. It is often used in machine learning to identify important
predictor variables. This task, variable selection, typically amounts to
fitting a logistic regression model regularized by a convex combination of
$\ell_{1}$ and $\ell_{2}^{2}$ penalties. Since modern big data sets can
contain hundreds of thousands to billions of predictor variables, variable
selection methods depend on efficient and robust optimization algorithms to
perform well. State-of-the-art algorithms for variable selection, however,
were not traditionally designed to handle big data sets; they either scale
poorly in size or are prone to produce unreliable numerical results. It
therefore remains challenging to perform variable selection on big data sets
without access to adequate and costly computational resources. In this paper,
we propose a nonlinear primal-dual algorithm that addresses these
shortcomings. Specifically, we propose an iterative algorithm that provably
computes a solution to a logistic regression problem regularized by an elastic
net penalty in $O(T(m,n)\log(1/\epsilon))$ operations, where
$\epsilon\in(0,1)$ denotes the tolerance and $T(m,n)$ denotes the number of
arithmetic operations required to perform matrix-vector multiplication on a
data set with $m$ samples each comprising $n$ features. This result improves
on the known complexity bound of $O(\min(m^{2}n,mn^{2})\log(1/\epsilon))$ for
first-order optimization methods such as the classic primal-dual hybrid
gradient or forward-backward splitting methods.
#### Significance statement
Logistic regression is a widely used statistical model to describe the
relationship between a binary response variable and predictor variables in
data sets. With the trends in big data, logistic regression is now commonly
applied to data sets whose predictor variables range from hundreds of
thousands to billions. State-of-the-art algorithms for fitting logistic
regression models, however, were not traditionally designed to handle big data
sets; they either scale poorly in size or are prone to produce unreliable
numerical results. This paper proposes a nonlinear primal-dual algorithm that
provably computes a solution to a logistic regression problem regularized by
an elastic net penalty in $O(T(m,n)\log(1/\epsilon))$ operations, where
$\epsilon\in(0,1)$ denotes the tolerance and $T(m,n)$ denotes the number of
arithmetic operations required to perform matrix-vector multiplication on a
data set with $m$ samples each comprising $n$ features. This result improves
on the known complexity bound of $O(\min(m^{2}n,mn^{2})\log(1/\epsilon))$ for
first-order optimization methods such as the classic primal-dual hybrid
gradient or forward-backward splitting methods.
## 1 Introduction
Logistic regression is a widely used statistical model to describe the
relationship between a binary response variable and predictor variables in
data sets [36]. It is often used in machine learning to identify important
predictor variables [22, 76]. This task, variable selection, typically amounts
to fitting a logistic regression model regularized by a convex combination of
$\ell_{1}$ and $\ell_{2}^{2}$ penalties. Variable selection is frequently
applied to problems in medicine [2, 9, 30, 52, 56, 71, 77], natural language
processing [6, 48, 29, 55, 62], economics [46, 64, 74, 75], and social science
[1, 39, 50], among others.
Since modern big data sets can contain up to billions of predictor variables,
variable selection methods require efficient and robust optimization
algorithms to perform well [47]. State-of-the-art algorithms for variable
selection methods, however, were not traditionally designed to handle big data
sets; they either scale poorly in size [14] or are prone to produce unreliable
numerical results [8, 45, 72, 73]. These shortcomings in terms of efficiency
and robustness make variable selection methods on big data sets essentially
impossible without access to adequate and costly computational resources [18,
57]. Further exacerbating this problem is that machine learning applications
to big data increasingly rely on computing power to make progress [19, 37, 47,
42]. Without efficient and robust algorithms to minimize monetary and energy
costs, these shortcomings prevent scientific discoveries. Indeed, it is
expected that progress will rapidly become economically and environmentally
unsustainable as computational requirements become a severe constraint [65].
This paper proposes a novel optimization algorithm that addresses the
shortcomings of state-of-the-art algorithms used for variable selection. Our
proposed algorithm is an accelerated nonlinear variant of the classic primal-
dual hybrid gradient (PDHG) algorithm, a first-order optimization method
initially developed to solve imaging problems [23, 54, 78, 12, 35, 13]. Our
proposed accelerated nonlinear PDHG algorithm, which builds on the authors'
recent work in [16], uses the Kullback–Leibler divergence to
efficiently fit a logistic regression model regularized by a convex
combination of $\ell_{1}$ and $\ell_{2}^{2}$ penalties. Specifically, our
algorithm provably computes a solution to a logistic regression problem
regularized by an elastic net penalty in $O(T(m,n)\log(1/\epsilon))$
operations, where $\epsilon\in(0,1)$ denotes the tolerance and $T(m,n)$
denotes the number of arithmetic operations required to perform matrix-vector
multiplication on a data set with $m$ samples each comprising $n$ features.
This result improves on the known complexity bound of
$O(\min(m^{2}n,mn^{2})\log(1/\epsilon))$ for first-order optimization methods
such as the classic primal-dual hybrid gradient or forward-backward splitting
methods.
### Organization of this preprint
In Section 2, we describe how variable selection works with logistic
regression regularized by the elastic net penalty, why this problem is
challenging, what the state-of-the-art algorithms are, and what their
limitations are. In Section 3, we describe our approach for solving this
problem using the Kullback–Leibler divergence, we derive an explicit algorithm
for solving this problem, and we explain why our algorithm overcomes the
limitations of current state-of-the-art algorithms. We also describe how our
approach can be adapted to solve a broad class of logistic regression problems
regularized by an appropriate convex penalty, including, for example, the
Lasso penalty. Finally, Section 4 provides a detailed derivation of the
explicit algorithms described in Section 3.
## 2 Preliminaries
### Description of the problem
Suppose we receive $m$ independent samples
$\\{(\boldsymbol{x}_{i},y_{i})\\}_{i=1}^{m}$, each comprising an
$n$-dimensional vector of predictor variables
$\boldsymbol{x}_{i}\in\mathbb{R}^{n}$ and a binary response variable
$y_{i}\in\\{0,1\\}$. The predictor variables are encoded in an $m\times n$
matrix $\bm{A}$ whose rows are the vectors
$\boldsymbol{x}_{i}=(x_{i1},\dots,x_{in})$, and the binary response variables
are encoded in an $m$-dimensional vector $\boldsymbol{y}$. The goal of
variable selection is to identify which of the $n$ predictor variables best
describe the $m$ response variables. A common approach to do so is to fit a
logistic regression model regularized by a convex combination of $\ell_{1}$
and $\ell_{2}^{2}$ penalties:
$\inf_{\boldsymbol{\theta}\in\mathbb{R}^{n}}f(\boldsymbol{\theta};\alpha,\lambda)=\inf_{\boldsymbol{\theta}\in\mathbb{R}^{n}}\left\\{\frac{1}{m}\sum_{i=1}^{m}\log\left(1+\exp{((\bm{A}\boldsymbol{\theta})_{i})}\right)-\frac{1}{m}\left\langle\boldsymbol{y},\bm{A}\boldsymbol{\theta}\right\rangle+\lambda\left(\alpha\left\|{\boldsymbol{\theta}}\right\|_{1}+\frac{1-\alpha}{2}\left\|{\boldsymbol{\theta}}\right\|_{2}^{2}\right)\right\\},$
(1)
where $\lambda>0$ is a tuning parameter and $\alpha\in(0,1)$ is a fixed
hyperparameter. The function
$\boldsymbol{\theta}\mapsto\lambda(\alpha\left\|{\boldsymbol{\theta}}\right\|_{1}+(1-\alpha)\left\|{\boldsymbol{\theta}}\right\|_{2}^{2}/2)$
is called the elastic net penalty [79]. It is a compromise between the ridge
penalty ($\alpha=0$) [34] and the lasso penalty ($\alpha=1$) [66]. The choice
of $\alpha$ depends on the desired prediction model; for variable selection
its value is often chosen to be close to but not equal to one [63].
The elastic net regularizes the logistic regression model in three ways.
First, it ensures that the logistic regression problem (1) has a unique
solution (global minimum) [21, Chapter II, Proposition 1.2]. Second, the
$\ell_{2}^{2}$ penalty shrinks the coefficients of correlated predictor
variables toward each other (and toward zero), which alleviates adverse
effects (e.g., high variance) arising from highly correlated predictor variables.
Third, the $\ell_{1}$ penalty promotes sparsity in the solution of (1); that
is, the global minimum of (1) has a number of entries that are identically
zero [26, 22, 76]. We note that other penalties are sometimes used in practice
to promote sparsity, including, for example, the group lasso penalty [49]. In
any case, the non-zero entries are identified as the important predictor
variables, and the zero entries are discarded. The number of non-zero entries
itself depends on the value of the fixed hyperparameter $\alpha$ and the
tuning parameter $\lambda$.
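As a concrete illustration of the objective in (1), the following sketch evaluates $f(\boldsymbol{\theta};\alpha,\lambda)$ with numpy; the function name and the use of `logaddexp` for numerical stability are our choices for this example, not notation from the paper.

```python
import numpy as np

def elastic_net_logistic_objective(theta, A, y, alpha, lam):
    """Evaluate f(theta; alpha, lambda) from Eq. (1): the mean logistic
    loss minus the linear term in y, plus the elastic net penalty."""
    m = A.shape[0]
    z = A @ theta
    # log(1 + exp(z_i)) computed stably as logaddexp(0, z_i)
    loss = np.mean(np.logaddexp(0.0, z)) - (y @ z) / m
    penalty = lam * (alpha * np.abs(theta).sum()
                     + 0.5 * (1.0 - alpha) * (theta @ theta))
    return loss + penalty
```

At $\boldsymbol{\theta}=0$ the penalty vanishes and each logistic term equals $\log 2$, so the objective reduces to $\log 2$ regardless of the data.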
In most applications, the desired value of $\lambda$ proves challenging to
estimate. To determine an appropriate value for it, variable selection methods
first compute a sequence of minimums $\boldsymbol{\theta}^{*}(\lambda)$ of
problem (1) from a chosen sequence of values of the parameter $\lambda$ and
then choose the parameter that gives the preferred minimum [8, 28]. Variable
selection methods differ in how they choose the sequence of parameters
$\lambda$ and how they repeatedly compute global minimums of problem (1), but
the procedure is generally the same. The sequence of minimums thus computed
is called a regularization path [28].
Unfortunately, computing a regularization path to problem (1) can be
prohibitively expensive for big data sets. To see why, fix $\alpha\in(0,1)$
and $\lambda>0$, and let
$\boldsymbol{\theta}_{\epsilon}(\alpha,\lambda)\in\mathbb{R}^{n}$ with
$\epsilon>0$ denote an $\epsilon$-approximate solution to the true global
minimum $\boldsymbol{\theta}^{*}(\alpha,\lambda)$ in (1), i.e.,
$f(\boldsymbol{\theta}_{\epsilon}(\alpha,\lambda);\alpha,\lambda)-f(\boldsymbol{\theta}^{*}(\alpha,\lambda);\alpha,\lambda)<\epsilon.$
Then the best achievable rate of convergence for computing
$\boldsymbol{\theta}_{\epsilon}(\alpha,\lambda)$ in the Nesterov class of
optimal first-order methods is linear, that is, $O(\log(1/\epsilon))$ in the
number of iterations [51]. While optimal, this rate of convergence is
difficult to achieve in practice because it requires a precise estimate of the
largest singular value of the matrix $\bm{A}$, a quantity essentially
impossible to compute for large matrices due to its prohibitive computational
cost of $O(\min{(m^{2}n,mn^{2})})$ operations [31]. This issue generally makes
solving problem (1) difficult and laborious. As computing a regularization
path entails repeatedly solving problem (1) for different values of $\lambda$,
this process can become particularly time consuming and resource intensive for
big data sets.
In summary, variable selection methods work by repeatedly solving an
optimization problem that can be prohibitively computationally expensive for
big data sets. This issue has driven much research in the development of
robust and efficient algorithms to minimize costs and maximize performance.
### Algorithms for variable selection methods and their shortcomings
The state of the art for computing regularization paths to problem (1) is
based on coordinate descent algorithms [27, 28, 32, 60, 61, 67, 70, 73]. These
algorithms are implemented, for example, in the popular glmnet software
package [32], which is available in the Python, MATLAB, and R programming
languages. Other widely used variable selection methods include those based on
the least angle regression algorithm and its variants [20, 33, 41, 68, 79],
and those based on the forward-backward splitting algorithm and its variants
[5, 11, 17, 58, 59]. Here, we focus on these algorithms, but before doing so
we wish to stress that many more algorithms have been developed to compute
minimums of (1); see [7, 22, 43, 69, 76] for recent surveys and comparisons of
different methods and models.
Coordinate descent algorithms are considered the state of the art because they
are scalable, with steps in the algorithms generally having an asymptotic
time complexity of at most $O(mn)$ operations. Some coordinate descent
algorithms, such as those implemented in the glmnet software [32], also offer
options for parallel computing. Despite these advantages, coordinate descent
algorithms generally lack robustness and good convergence properties. For
example, the glmnet implementation depends on the sparsity of the matrix
$\bm{A}$ to converge fast [79], and it is known to be slowed down when the
predictor variables are highly correlated [27]. This situation often occurs in
practice, and it would be desirable to have a fast algorithm for this case.
Another issue is that the glmnet implementation approximates the logarithm
term in problem (1) with a quadratic in order to solve the problem
efficiently. Without costly step-size optimization, which glmnet avoids to
improve performance, the glmnet implementation may not converge [28, 41]. Case
in point, Yuan et al. [72] provides two numerical experiments in which glmnet
does not converge. Although some coordinate descent algorithms recently
proposed in [10] and in [24] can provably solve the logistic regression
problem (1) (with parameter $\alpha=1$), in the first case, the convergence
rate is strictly less than the achievable rate, and in the second case, the
method fails to construct meaningful regularization paths to problem (1), in
addition to having large memory requirements.
The least angle regression algorithm is another popular tool for computing
regularization paths to problem (1). This algorithm, however, scales poorly
with the size of data sets because the entire sequence of steps for computing
regularization paths has an asymptotic time complexity of at most
$O(\min{(m^{2}n+m^{3},mn^{2}+n^{3})})$ operations [20]. It also lacks
robustness because, under certain conditions, it fails to compute meaningful
regularization paths to problem (1) [8, 45]. Case in point, Bringmann et al.
[8] provides an example for which the least angle regression algorithm fails
to converge.
The forward-backward splitting algorithm and its variants are widely used
because they are robust and can provably compute $\epsilon$-approximate
solutions of (1) in at most $O(\log(1/\epsilon))$ iterations. To achieve this
convergence rate, the step size parameter in the algorithm needs to be fine-
tuned using a precise estimate of the largest singular value of the matrix
$\bm{A}$. As mentioned before, however, computing this estimate is essentially
impossible for large matrices due to its prohibitive computational cost of
$O(\min{(m^{2}n,mn^{2})})$ operations. Line search methods and other
heuristics are often employed to bypass this problem, but they come at the
cost of slowing down the convergence of the forward-backward splitting
algorithm. Another approach is to compute a crude estimate of the largest
singular value of the matrix $\bm{A}$, but doing so dramatically reduces the
speed of convergence of the algorithm. This problem makes regularization path
construction methods based on the forward-backward splitting algorithm and its
variants generally inefficient and impractical for big data sets.
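To make the cost trade-off concrete: a common way to approximate the largest singular value without the $O(\min(m^{2}n,mn^{2}))$ cost of a full decomposition is power iteration on $\bm{A}^{T}\bm{A}$, at $O(mn)$ per iteration. The sketch below is a standard technique, not an algorithm from this paper; its accuracy depends on the number of iterations and on the gap between the top two singular values.

```python
import numpy as np

def estimate_largest_singular_value(A, iters=200, seed=0):
    """Estimate sigma_max(A) by power iteration on A^T A.
    Each iteration costs two matrix-vector products, i.e. O(mn)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A.T @ (A @ v)  # one step of the power method
        v = w / np.linalg.norm(w)
    return np.linalg.norm(A @ v)
```

A crude estimate obtained with few iterations underestimates $\sigma_{\max}$, which is exactly the regime in which, as noted above, the resulting step size slows down forward-backward-type methods.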
In summary, state-of-the-art and other widely used variable selection methods
for computing regularization paths to problem (1) either scale poorly in size
or are prone to produce unreliable numerical results. These shortcomings in
terms of efficiency and robustness make it challenging to perform variable
selection on big data sets without access to adequate and costly computational
resources. This paper proposes an efficient and robust optimization algorithm
for solving problem (1) that addresses these shortcomings.
## 3 Methodology
We consider the problem of solving the logistic regression problem (1) with
$\alpha\in(0,1)$. Our approach is to reformulate problem (1) as a saddle-point
problem and solve the latter using an appropriate primal-dual algorithm. Based
on work the authors recently provided in [16], we propose to use a nonlinear
PDHG algorithm with Bregman divergence terms tailored to the logistic
regression model and the elastic net penalty in (1). Specifically, we propose
to use the Bregman divergence generated from the negative sum of $m$ binary
entropy functions. This divergence is the function
$D_{H}\colon\mathbb{R}^{m}\times\mathbb{R}^{m}\to[0,+\infty]$ given by
$D_{H}(\boldsymbol{s},\boldsymbol{s}^{\prime})=\begin{dcases}&\sum_{i=1}^{m}s_{i}\log\left(\frac{s_{i}}{s_{i}^{\prime}}\right)+(1-s_{i})\log\left(\frac{1-s_{i}}{1-s_{i}^{\prime}}\right)\quad\mathrm{if}\,\boldsymbol{s},\boldsymbol{s}^{\prime}\in[0,1]^{m},\\\
&+\infty,\quad\mathrm{otherwise}.\end{dcases}$ (2)
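For concreteness, $D_{H}$ in (2) can be computed as follows. This is an illustrative numpy sketch; for simplicity it restricts to the open cube $(0,1)^{m}$, sidestepping the $0\log 0$ boundary conventions of (2).

```python
import numpy as np

def bregman_binary_entropy(s, s_prime):
    """D_H(s, s') from Eq. (2) for s, s' in the open cube (0,1)^m;
    returns +inf outside, matching the 'otherwise' branch."""
    s = np.asarray(s, dtype=float)
    s_prime = np.asarray(s_prime, dtype=float)
    inside = np.all((s > 0) & (s < 1)) and np.all((s_prime > 0) & (s_prime < 1))
    if not inside:
        return np.inf
    return float(np.sum(s * np.log(s / s_prime)
                        + (1.0 - s) * np.log((1.0 - s) / (1.0 - s_prime))))
```

As expected for a Bregman divergence, $D_{H}(\boldsymbol{s},\boldsymbol{s})=0$ and $D_{H}\geq 0$.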
We also show how to adapt our approach for solving the logistic regression
problem (1) with the lasso penalty ($\alpha=1$) and a broad class of convex
penalties, including for example the group lasso.
### Numerical optimization algorithm
The starting point of our approach is to express the logistic regression
problem (1) in saddle-point form. To do so, we use the convex conjugate
formula of the sum of logarithms that appears in (1), namely
$\psi(\boldsymbol{s})=\sup_{\boldsymbol{u}\in\mathbb{R}^{m}}\left\\{\left\langle\boldsymbol{s},\boldsymbol{u}\right\rangle-\sum_{i=1}^{m}\log(1+\exp{(u_{i})})\right\\}=\begin{dcases}&\sum_{i=1}^{m}s_{i}\log(s_{i})+(1-s_{i})\log(1-s_{i})\quad\mathrm{if}\,\boldsymbol{s}\in[0,1]^{m},\\\
&+\infty,\quad\mathrm{otherwise}.\end{dcases}$ (3)
Hence we have the representation
$\sum_{i=1}^{m}\log(1+\exp{((\bm{A}\boldsymbol{\theta})_{i})})=\sup_{\boldsymbol{s}\in[0,1]^{m}}\left\\{\left\langle\boldsymbol{s},\bm{A}\boldsymbol{\theta}\right\rangle-\psi(\boldsymbol{s})\right\\},$
and from it we can express problem (1) in saddle-point form as
$\inf_{\boldsymbol{\theta}\in\mathbb{R}^{n}}\sup_{\boldsymbol{s}\in[0,1]^{m}}\left\\{-\frac{1}{m}\psi(\boldsymbol{s})-\frac{1}{m}\left\langle\boldsymbol{y}-\boldsymbol{s},\bm{A}\boldsymbol{\theta}\right\rangle+\lambda\left(\alpha\left\|{\boldsymbol{\theta}}\right\|_{1}+\frac{1-\alpha}{2}\left\|{\boldsymbol{\theta}}\right\|_{2}^{2}\right)\right\\}.$
(4)
A solution to the convex-concave saddle-point problem (4) is called a saddle
point. For $\alpha\in(0,1)$, the saddle-point problem (4) has a unique saddle
point $(\boldsymbol{\theta}^{*},\boldsymbol{s}^{*})$, whose component
$\boldsymbol{\theta}^{*}$ is the unique global minimizer of the original
problem (1) [21, Proposition 3.1, page 57]. Hence for our purpose it suffices
to compute a solution to the saddle-point problem (4), and to do so we can
take advantage of the fact that the saddle point
$(\boldsymbol{\theta}^{*},\boldsymbol{s}^{*})$ satisfies the following
optimality conditions:
$\frac{1}{m}\bm{A}^{T}(\boldsymbol{y}-\boldsymbol{s}^{*})-\lambda(1-\alpha)\boldsymbol{\theta}^{*}\in\lambda\alpha\partial\left\|{\boldsymbol{\theta}^{*}}\right\|_{1}\quad\mathrm{and}\quad
s_{i}^{*}=\frac{1}{1+\exp{(-(\bm{A}\boldsymbol{\theta}^{*})_{i})}}\quad\mathrm{for}\,i\in\\{1,\dots,m\\}.
(5)
The next step of our approach is to split the infimum and supremum problems in
(4) with an appropriate primal-dual scheme. We propose to alternate between a
nonlinear proximal ascent step using the Kullback–Leibler divergence (2) and a
proximal descent step using a quadratic function:
$\displaystyle\boldsymbol{s}^{(k+1)}=\operatorname*{arg\,max}_{\boldsymbol{s}\in(0,1)^{m}}\left\\{-\psi(\boldsymbol{s})+\left\langle\boldsymbol{s},\bm{A}(\boldsymbol{\theta}^{(k)}+\rho(\boldsymbol{\theta}^{(k)}-\boldsymbol{\theta}^{(k-1)}))\right\rangle-\frac{1}{\sigma}D_{H}(\boldsymbol{s},\boldsymbol{s}^{(k)})\right\\},$
(6)
$\displaystyle\boldsymbol{\theta}^{(k+1)}=\operatorname*{arg\,min}_{\boldsymbol{\theta}\in\mathbb{R}^{n}}\left\\{\left(\lambda_{1}\left\|{\boldsymbol{\theta}}\right\|_{1}+\frac{\lambda_{2}}{2}\left\|{\boldsymbol{\theta}}\right\|_{2}^{2}\right)+\left\langle\boldsymbol{s}^{(k+1)}-\boldsymbol{y},\bm{A}\boldsymbol{\theta}\right\rangle+\frac{1}{2\tau}\left\|{\boldsymbol{\theta}-\boldsymbol{\theta}^{(k)}}\right\|_{2}^{2}\right\\},$
where $\lambda_{1}=m\lambda\alpha$, $\lambda_{2}=m\lambda(1-\alpha)$, and
$\rho,\sigma,\tau>0$ are parameters to be specified in the next step of our
approach. The scheme starts from initial values
$\boldsymbol{s}^{(0)}\in(0,1)^{m}$ and
$\boldsymbol{\theta}^{(-1)}=\boldsymbol{\theta}^{(0)}\in\mathbb{R}^{n}$.
The key element in this primal-dual scheme is the choice of the
Kullback–Leibler divergence (2) in the first line of (6). This choice is
motivated by two facts. First, $D_{H}$ is generated from the negative sum of
$m$ binary entropy functions that appears _explicitly_ in the saddle-point
problem (4) as the function $\psi$ defined in (3), i.e.,
$D_{H}(\boldsymbol{s},\boldsymbol{s}^{\prime})=\psi(\boldsymbol{s})-\psi(\boldsymbol{s}^{\prime})-\left\langle\boldsymbol{s}-\boldsymbol{s}^{\prime},\nabla\psi(\boldsymbol{s}^{\prime})\right\rangle.$
(7)
This fact makes the maximization step in (6) easy to evaluate. Second,
$D_{H}$ is 1-strongly convex with respect to the $\ell_{1}$-norm, in that
$D_{H}(\boldsymbol{s},\boldsymbol{s}^{\prime})\geqslant\frac{1}{2}\left\|{\boldsymbol{s}-\boldsymbol{s}^{\prime}}\right\|_{1}^{2}$
for every $\boldsymbol{s},\boldsymbol{s}^{\prime}\in[0,1]^{m}$, which is a
direct consequence of a fundamental result in information theory known as
Pinsker’s inequality [4, 15, 38, 40, 53].
The latter fact, notably, implies that the primal-dual scheme (6)
alternates between solving a 1-strongly concave problem over the
space $(\mathbb{R}^{m},\left\|{\cdot}\right\|_{1})$ and a
$\lambda_{2}$-strongly convex problem over the space
$(\mathbb{R}^{n},\left\|{\cdot}\right\|_{2})$. The choice of these spaces is
significant, for it induces the matrix norm
$\left\|{\bm{A}}\right\|_{op}=\sup_{\left\|{\boldsymbol{s}}\right\|_{1}=1}\left\|{\bm{A}^{T}\boldsymbol{s}}\right\|_{2}=\max_{i\in\\{1,\dots,m\\}}\sqrt{\sum_{j=1}^{n}{A_{ij}^{2}}}=\max_{i\in\\{1,\dots,m\\}}\left\|{\boldsymbol{x}_{i}}\right\|_{2},$
(8)
which can be computed in _optimal_ $\Theta(mn)$ time. This is unlike most
first-order optimization methods, such as the forward-backward splitting
algorithm, where instead the matrix norm is the largest singular value of the
matrix $\bm{A}$, which takes $O(\min{(m^{2}n,mn^{2})})$ operations to compute.
This point is _crucial_ : the smaller computational cost makes it easy and
efficient to estimate all the parameters of the nonlinear PDHG algorithm,
which is needed to achieve an optimal rate of convergence.
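Concretely, (8) says that $\left\|{\bm{A}}\right\|_{op}$ is simply the largest Euclidean row norm of $\bm{A}$, computable in a single pass over the data (a minimal sketch; the function name is ours):

```python
import numpy as np

def induced_norm(A):
    """Induced matrix norm of eq. (8): the operator norm of A^T from
    (R^m, ||.||_1) to (R^n, ||.||_2), i.e. the largest Euclidean row
    norm of A, computed in Theta(m * n) time."""
    A = np.asarray(A, dtype=float)
    return float(np.sqrt((A ** 2).sum(axis=1)).max())
```

By contrast, the largest singular value of $\bm{A}$, the norm relevant to forward-backward splitting, requires an SVD-type computation.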
The last step of our approach is to choose the parameters $\rho$, $\sigma$,
and $\tau$ so that the iterations in the primal-dual scheme (6) converge.
Based on the analysis of accelerated nonlinear PDHG
algorithms the authors recently provided in [16, Section 5.4], the choice of
parameters
$\rho=1-\frac{\lambda_{2}}{2\left\|{\bm{A}}\right\|_{op}^{2}}\left(\sqrt{1+\frac{4\left\|{\bm{A}}\right\|_{op}^{2}}{\lambda_{2}}}-1\right),\quad\sigma=\frac{1-\rho}{\rho},\quad\mathrm{and}\quad\tau=\frac{(1-\rho)}{\lambda_{2}\rho},$
ensure that the iterations converge to the unique saddle point
$(\boldsymbol{\theta}^{*},\boldsymbol{s}^{*})$ of problem (4). In particular,
the rate of convergence is linear in the number of iterations, with
$\frac{1}{2}\left\|{\boldsymbol{\theta}^{*}-\boldsymbol{\theta}^{(k)}}\right\|_{2}^{2}\leqslant\rho^{k}\left(\frac{1}{2}\left\|{\boldsymbol{\theta}^{*}-\boldsymbol{\theta}^{(0)}}\right\|_{2}^{2}+\frac{1}{\lambda_{2}}D_{H}(\boldsymbol{s}^{*},\boldsymbol{s}^{(0)})\right).$
(9)
This convergence rate is optimal: it is the best rate achievable within
Nesterov's class of first-order methods [51].
An important feature of our proposed algorithm is that the maximization and
minimization steps in (6) can be computed exactly. Specifically, with the
auxiliary variables $\boldsymbol{u}^{(k)}=\bm{A}\boldsymbol{\theta}^{(k)}$ and
$v_{i}^{(k)}=\log\left(s_{i}^{(k)}/(1-s_{i}^{(k)})\right)$ for
$i\in\\{1,\dots,m\\}$, the steps in scheme (6) can be expressed
explicitly as follows:
$\begin{dcases}\boldsymbol{v}^{(k+1)}&=\frac{1}{1+\sigma}\left(\sigma\boldsymbol{u}^{(k)}+\sigma\rho\left(\boldsymbol{u}^{(k)}-\boldsymbol{u}^{(k-1)}\right)+\boldsymbol{v}^{(k)}\right)\\\
s^{(k+1)}_{i}&=\frac{1}{1+\exp{\left(-v_{i}^{(k+1)}\right)}}\quad\mathrm{for}\,i\in\left\\{1,\dots,m\right\\},\\\
\hat{\boldsymbol{\theta}}^{(k+1)}&=\boldsymbol{\theta}^{(k)}-\tau\bm{A}^{T}\left(\boldsymbol{s}^{(k+1)}-\boldsymbol{y}\right)\\\
\theta^{(k+1)}_{j}&=\operatorname{sign}\left(\hat{\theta}^{(k+1)}_{j}\right)\max{\left(0,\frac{\left|\hat{\theta}^{(k+1)}_{j}\right|-\lambda_{1}\tau}{1+\lambda_{2}\tau}\right)}\quad\mathrm{for}\,j\in\\{1,\dots,n\\}\\\
\boldsymbol{u}^{(k+1)}&=\bm{A}\boldsymbol{\theta}^{(k+1)}.\end{dcases}$ (10)
In addition, from the auxiliary variables and the optimality condition on the
right in (5), we have the limit
$\lim_{k\to+\infty}\left\|{\boldsymbol{u}^{(k)}-\boldsymbol{v}^{(k)}}\right\|_{2}=0,$
which can serve as a convergence criterion. We refer to Material and Methods
for the derivation of algorithm (10) from the iterations in scheme (6).
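As an illustration, the explicit iterations (10), together with the parameter choices $\rho$, $\sigma$, and $\tau$ given above, can be sketched as follows (a minimal NumPy sketch under our own naming, assuming $0<\alpha<1$; the fixed iteration cap is illustrative):

```python
import numpy as np

def npdhg_elastic_net_logistic(A, y, lam, alpha, iters=5000):
    """Sketch of the explicit nonlinear PDHG iterations (10) for the
    elastic-net regularized logistic regression problem (1), 0 < alpha < 1."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    m, n = A.shape
    lam1, lam2 = m * lam * alpha, m * lam * (1 - alpha)

    # Induced matrix norm (8): largest Euclidean row norm of A.
    L = np.sqrt((A ** 2).sum(axis=1)).max()

    # Parameter choices ensuring the linear convergence rate (9).
    rho = 1 - (lam2 / (2 * L**2)) * (np.sqrt(1 + 4 * L**2 / lam2) - 1)
    sigma = (1 - rho) / rho
    tau = (1 - rho) / (lam2 * rho)

    theta = np.zeros(n)
    u = u_prev = A @ theta                   # u^(k) = A theta^(k)
    s = np.full(m, 0.5)                      # s^(0) in (0, 1)^m
    v = np.log(s / (1 - s))                  # logits of s^(k)

    for _ in range(iters):
        # Dual ascent step (first two lines of (10)).
        v = (sigma * u + sigma * rho * (u - u_prev) + v) / (1 + sigma)
        s = 1.0 / (1.0 + np.exp(-v))
        # Primal descent step: gradient move + elastic-net soft thresholding.
        theta_hat = theta - tau * (A.T @ (s - y))
        theta = np.sign(theta_hat) * np.maximum(
            0.0, (np.abs(theta_hat) - lam1 * tau) / (1 + lam2 * tau))
        u_prev, u = u, A @ theta
    return theta
```

In practice, the loop would exit once $\left\|{\boldsymbol{u}^{(k)}-\boldsymbol{v}^{(k)}}\right\|_{2}$ falls below a tolerance, following the convergence criterion above.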
Our proposed explicit nonlinear PDHG algorithm (10) offers many advantages in
terms of efficiency and robustness. First, the computational bottlenecks in
algorithm (10) consist of matrix-vector multiplications and the estimation of
the induced matrix norm $\left\|{\bm{A}}\right\|_{op}$ given by (8). If
$T(m,n)$ denotes the number of arithmetic operations required to perform
matrix-vector multiplication with the matrix $\bm{A}$, then computing the
iterations in algorithm (10), as well as the induced matrix norm
$\left\|{\bm{A}}\right\|_{op}$, takes at most $O(T(m,n))$ operations. As
mentioned before, this contrasts with most first-order optimization methods,
such as the forward-backward splitting algorithm, whose relevant matrix norm
is the largest singular value of $\bm{A}$ and takes
$O(\min{(m^{2}n,mn^{2})})$ operations to compute. This smaller computational
cost makes it easy and efficient to estimate all the parameters of the
nonlinear PDHG algorithm, which is needed to achieve an optimal rate of
convergence.
Another advantage of our algorithm is that it exhibits scalable parallelism
because the matrix-vector multiplication operations can be implemented via
parallel algorithms. This makes it possible to implement our proposed
algorithms in a way that takes advantage of emerging hardware, such as
field-programmable gate array architectures.
Finally, our algorithm also provably computes an $\epsilon$-approximate
solution of (1) in $O(\log(1/\epsilon))$ iterations [16, Section 5.4]. The
size of the parameter $\rho$ dictates this linear rate of convergence; it
depends on the matrix $\bm{A}$, the tuning parameter $\lambda$, and the
hyperparameter $\alpha$. Hence the overall complexity required to compute a
global minimum of the elastic net regularized logistic regression problem (1)
with tolerance $\epsilon\in(0,1)$ is on the order of
$O(T(m,n)\log(1/\epsilon))$ operations.
With these advantages, algorithm (10) overcomes the limitations of the state-
of-the-art and other widely-used algorithms for solving the logistic
regression problem (1). We are unaware of any other algorithm that offers
these advantages in terms of efficiency and robustness simultaneously.
In general, the nonlinear PDHG algorithm (10) can be adapted to any
regularized logistic regression problem for which the penalty is strongly
convex on the space $(\mathbb{R}^{n},\left\|{\cdot}\right\|_{2})$. To do so,
substitute this penalty for the elastic net penalty in the minimization
problem of the scheme (6) and use its solution in place of
the third and fourth lines in the explicit algorithm (10).
### Special case: Logistic regression with the lasso penalty
In some situations, it may be desirable to fit the regularized logistic
regression model (1) without the $\ell_{2}^{2}$ penalty ($\alpha=1$). In this
case, algorithm (10) does not apply because it depends on the strong convexity
of the $\ell_{2}^{2}$ penalty. We present here an algorithm for fitting a
logistic regression model regularized by an $\ell_{1}$ penalty or, in
principle, any convex penalty that is not strongly convex, such as the group
lasso.
The $\ell_{1}$-regularized logistic regression problem is
$\inf_{\boldsymbol{\theta}\in\mathbb{R}^{n}}\left\\{\frac{1}{m}\sum_{i=1}^{m}\log\left(1+\exp{((\bm{A}\boldsymbol{\theta})_{i})}\right)-\frac{1}{m}\left\langle\boldsymbol{y},\bm{A}\boldsymbol{\theta}\right\rangle+\lambda\left\|{\boldsymbol{\theta}}\right\|_{1}\right\\},$
(11)
and its associated saddle-point problem is
$\inf_{\boldsymbol{\theta}\in\mathbb{R}^{n}}\sup_{\boldsymbol{s}\in[0,1]^{m}}\left\\{-\frac{1}{m}\psi(\boldsymbol{s})-\frac{1}{m}\left\langle\boldsymbol{y}-\boldsymbol{s},\bm{A}\boldsymbol{\theta}\right\rangle+\lambda\left\|{\boldsymbol{\theta}}\right\|_{1}\right\\}.$
(12)
The $\ell_{1}$ penalty in (11) guarantees that problem (11) has at least one
solution. Accordingly, the saddle-point problem (12) also has at least one
saddle point. As before, we split the infimum and supremum problems in (12) by
alternating between a nonlinear proximal ascent step using the
Kullback–Leibler divergence (2) and a proximal descent step using a quadratic
function, but this time we also update the stepsize parameters at each
iteration:
$\displaystyle\boldsymbol{s}^{(k+1)}=\operatorname*{arg\,max}_{\boldsymbol{s}\in(0,1)^{m}}\left\\{-\psi(\boldsymbol{s})+\left\langle\boldsymbol{s},\bm{A}(\boldsymbol{\theta}^{(k)}+\rho^{(k)}(\boldsymbol{\theta}^{(k)}-\boldsymbol{\theta}^{(k-1)}))\right\rangle-\frac{1}{\sigma^{(k)}}D_{H}(\boldsymbol{s},\boldsymbol{s}^{(k)})\right\\},$
(13)
$\displaystyle\boldsymbol{\theta}^{(k+1)}=\operatorname*{arg\,min}_{\boldsymbol{\theta}\in\mathbb{R}^{n}}\left\\{m\lambda\left\|{\boldsymbol{\theta}}\right\|_{1}+\left\langle\boldsymbol{s}^{(k+1)}-\boldsymbol{y},\bm{A}\boldsymbol{\theta}\right\rangle+\frac{1}{2\tau^{(k)}}\left\|{\boldsymbol{\theta}-\boldsymbol{\theta}^{(k)}}\right\|_{2}^{2}\right\\},$
$\displaystyle\rho^{(k+1)}=1/\sqrt{1+\sigma^{(k)}},\quad\sigma^{(k+1)}=\rho^{(k+1)}\sigma^{(k)},\quad\tau^{(k+1)}=\tau^{(k)}/\rho^{(k+1)}.$
The scheme starts from initial stepsize parameters $\rho^{(0)}\in(0,1)$,
$\tau^{(0)}>0$ and
$\sigma^{(0)}=1/(\tau^{(0)}\left\|{\bm{A}}\right\|_{op}^{2})$, and initial
values $\boldsymbol{s}^{(0)}\in(0,1)^{m}$ and
$\boldsymbol{\theta}^{(-1)}=\boldsymbol{\theta}^{(0)}\in\mathbb{R}^{n}$.
The following accelerated nonlinear PDHG algorithm computes a global minimum
of (11):
$\begin{dcases}\boldsymbol{v}^{(k+1)}&=\frac{1}{1+\sigma^{(k)}}\left(\sigma^{(k)}\boldsymbol{u}^{(k)}+\sigma^{(k)}\rho^{(k)}\left(\boldsymbol{u}^{(k)}-\boldsymbol{u}^{(k-1)}\right)+\boldsymbol{v}^{(k)}\right)\\\
s^{(k+1)}_{i}&=\frac{1}{1+\exp{\left(-v_{i}^{(k+1)}\right)}}\quad\mathrm{for}\,i\in\left\\{1,\dots,m\right\\},\\\
\hat{\boldsymbol{\theta}}^{(k+1)}&=\boldsymbol{\theta}^{(k)}-\tau^{(k)}\bm{A}^{T}\left(\boldsymbol{s}^{(k+1)}-\boldsymbol{y}\right)\\\
\theta^{(k+1)}_{j}&=\operatorname{sign}\left(\hat{\theta}^{(k+1)}_{j}\right)\max{\left(0,\left|\hat{\theta}^{(k+1)}_{j}\right|-m\lambda\tau^{(k)}\right)}\quad\mathrm{for}\,j\in\\{1,\dots,n\\},\\\
\boldsymbol{u}^{(k+1)}&=\bm{A}\boldsymbol{\theta}^{(k+1)},\\\
\rho^{(k+1)}&=1/\sqrt{1+\sigma^{(k)}},\quad\sigma^{(k+1)}=\rho^{(k+1)}\sigma^{(k)},\quad\tau^{(k+1)}=\tau^{(k)}/\rho^{(k+1)}.\end{dcases}$
(14)
In addition, from the auxiliary variables and the optimality condition on the
right in (5), we have the limit
$\lim_{k\to+\infty}\left\|{\boldsymbol{u}^{(k)}-\boldsymbol{v}^{(k)}}\right\|_{2}=0,$
which can serve as a convergence criterion. The derivation of algorithm (14)
from the iterations in scheme (13) follows from the derivation of
algorithm (10) from the iterations in scheme (6) described in
Material and Methods by setting $\alpha=1$. According to results provided by
the authors in [16, Proposition 5.2], the sequence of iterates
$\\{(\boldsymbol{\theta}^{(k)},\boldsymbol{s}^{(k)})\\}_{k=1}^{+\infty}$
converges to a saddle point $(\boldsymbol{\theta}^{*},\boldsymbol{s}^{*})$ of
(12) at a sublinear rate of $O(1/k^{2})$ in the number of iterations.
Moreover, this sublinear rate satisfies the lower bound
$\frac{2\tau^{(0)}\left\|{\bm{A}}\right\|^{2}_{op}}{1+2\tau^{(0)}\left\|{\bm{A}}\right\|^{2}_{op}}k+\frac{2\tau^{(0)}}{(1+2\tau^{(0)}\left\|{\bm{A}}\right\|^{2}_{op})^{2}}k^{2}.$
In particular, the constant term multiplying $k^{2}$ is maximized when
$\tau^{(0)}=1/(2\left\|{\bm{A}}\right\|_{op}^{2})$. This suggests a practical
choice for the free parameter $\tau^{(0)}$.
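For concreteness, the adaptive-stepsize iterations (14), initialized with the suggested $\tau^{(0)}=1/(2\left\|{\bm{A}}\right\|_{op}^{2})$, can be sketched as follows (our own naming; the fixed iteration cap is illustrative):

```python
import numpy as np

def npdhg_lasso_logistic(A, y, lam, iters=20000):
    """Sketch of the accelerated nonlinear PDHG iterations (14) for the
    l1-regularized logistic regression problem (11)."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(y, dtype=float)
    m, n = A.shape
    L2 = ((A ** 2).sum(axis=1)).max()    # ||A||_op^2 from (8)

    tau = 1.0 / (2.0 * L2)               # maximizes the coefficient of k^2
    sigma = 1.0 / (tau * L2)             # sigma^(0) = 1 / (tau^(0) ||A||_op^2)
    rho = 0.5                            # any rho^(0) in (0, 1)

    theta = np.zeros(n)
    u = u_prev = A @ theta
    s = np.full(m, 0.5)
    v = np.log(s / (1 - s))

    for _ in range(iters):
        v = (sigma * u + sigma * rho * (u - u_prev) + v) / (1 + sigma)
        s = 1.0 / (1.0 + np.exp(-v))
        theta_hat = theta - tau * (A.T @ (s - y))
        theta = np.sign(theta_hat) * np.maximum(
            0.0, np.abs(theta_hat) - m * lam * tau)
        u_prev, u = u, A @ theta
        # Stepsize updates from the last line of (14).
        rho = 1.0 / np.sqrt(1.0 + sigma)
        sigma, tau = rho * sigma, tau / rho
    return theta
```

The only structural differences from the elastic-net sketch are the plain soft thresholding step and the stepsize updates at the end of each iteration.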
In general, the nonlinear PDHG algorithm (14) can be adapted to any
regularized logistic regression problem for which the penalty is proper, lower
semicontinuous, and convex, and for which a solution exists. To do so,
substitute this penalty for the $\ell_{1}$ penalty in the minimization problem
of the scheme (13) and use its solution in place of the third and fourth
lines in the explicit algorithm (14).
## 4 Material and Methods
### Derivation of the explicit algorithm (10)
We derive here the explicit algorithm (10) from the iterations in the
primal-dual scheme (6). Consider the first line of (6). This
maximization problem has a unique maximum inside the interval $(0,1)^{m}$ [3,
Proposition 3.21-3.23, Theorem 3.24, Corollary 3.25], and the objective
function is differentiable. Thus it suffices to compute the gradient with
respect to $\boldsymbol{s}$ and solve for $\boldsymbol{s}$ to compute its
global maximum. To do so, it helps to first rearrange the objective function.
Substitute $\boldsymbol{s}^{(k)}$ for $\boldsymbol{s}^{\prime}$ in equation
(2), use equation (7), and rearrange to obtain the objective function
$\displaystyle-\psi(\boldsymbol{s})$
$\displaystyle+\left\langle\boldsymbol{s},\bm{A}(\boldsymbol{\theta}^{(k)}+\rho(\boldsymbol{\theta}^{(k)}-\boldsymbol{\theta}^{(k-1)}))\right\rangle-\frac{1}{\sigma}D_{H}(\boldsymbol{s},\boldsymbol{s}^{(k)})$
$\displaystyle=-\psi(\boldsymbol{s})+\left\langle\boldsymbol{s},\bm{A}(\boldsymbol{\theta}^{(k)}+\rho(\boldsymbol{\theta}^{(k)}-\boldsymbol{\theta}^{(k-1)}))\right\rangle-\frac{1}{\sigma}\left(\psi(\boldsymbol{s})-\psi(\boldsymbol{s}^{(k)})-\left\langle\boldsymbol{s}-\boldsymbol{s}^{(k)},\nabla\psi(\boldsymbol{s}^{(k)})\right\rangle)\right)$
$\displaystyle=-\left(1+\frac{1}{\sigma}\right)\psi(\boldsymbol{s})+\left\langle\boldsymbol{s},\bm{A}(\boldsymbol{\theta}^{(k)}+\rho(\boldsymbol{\theta}^{(k)}-\boldsymbol{\theta}^{(k-1)}))\right\rangle+\frac{1}{\sigma}\left\langle\boldsymbol{s}-\boldsymbol{s}^{(k)},\nabla\psi(\boldsymbol{s}^{(k)})\right\rangle+\psi(\boldsymbol{s}^{(k)}).$
The optimality condition is then
$\nabla\psi(\boldsymbol{s}^{(k+1)})=\frac{\sigma}{1+\sigma}\left(\bm{A}(\boldsymbol{\theta}^{(k)}+\rho(\boldsymbol{\theta}^{(k)}-\boldsymbol{\theta}^{(k-1)}))\right)+\frac{1}{1+\sigma}\nabla\psi(\boldsymbol{s}^{(k)}),$
where
$(\nabla\psi(\boldsymbol{s}))_{i}=\log\left(s_{i}/(1-s_{i})\right)$
for $i\in\\{1,\dots,m\\}$ and $\boldsymbol{s}\in(0,1)^{m}$. With the auxiliary
variables $\boldsymbol{u}^{(k)}=\bm{A}\boldsymbol{\theta}^{(k)}$ and
$v_{i}^{(k)}=\log\left(s_{i}^{(k)}/(1-s_{i}^{(k)})\right)$ for
$i\in\\{1,\dots,m\\}$, the optimality condition can be written as
$\boldsymbol{v}^{(k+1)}=\frac{1}{1+\sigma}\left(\sigma\boldsymbol{u}^{(k)}+\sigma\rho\left(\boldsymbol{u}^{(k)}-\boldsymbol{u}^{(k-1)}\right)+\boldsymbol{v}^{(k)}\right).$
This gives the first line in (10). The second line follows upon solving for
$\boldsymbol{s}^{(k+1)}$ in terms of $\boldsymbol{v}^{(k+1)}$. The fifth line
follows from the definition of the auxiliary variable $\boldsymbol{u}^{(k)}$.
Now, consider the second line of (LABEL:eq:kl-nPDHG-alg). Complete the square
and multiply by $\tau/(1+\lambda_{2}\tau)$ to get the equivalent minimization
problem
$\boldsymbol{\theta}^{(k+1)}=\operatorname*{arg\,min}_{\boldsymbol{\theta}\in\mathbb{R}^{n}}\left\\{\frac{\lambda_{1}\tau}{1+\lambda_{2}\tau}\left\|{\boldsymbol{\theta}}\right\|_{1}+\frac{1}{2}\left\|{\boldsymbol{\theta}-\left(\boldsymbol{\theta}^{(k)}-\tau\bm{A}^{T}(\boldsymbol{s}^{(k+1)}-\boldsymbol{y})\right)/(1+\lambda_{2}\tau)}\right\|_{2}^{2}\right\\}.$
The unique minimum is computed using the soft thresholding operator [17, 25,
44]. With the notation
$\hat{\boldsymbol{\theta}}^{(k+1)}=\boldsymbol{\theta}^{(k)}-\tau\bm{A}^{T}\left(\boldsymbol{s}^{(k+1)}-\boldsymbol{y}\right),$
the soft thresholding operator is defined component-wise by
$\theta^{(k+1)}_{j}=\operatorname{sign}\left(\hat{\theta}^{(k+1)}_{j}\right)\max{\left(0,\frac{\left|\hat{\theta}^{(k+1)}_{j}\right|-\lambda_{1}\tau}{1+\lambda_{2}\tau}\right)}\quad\mathrm{for}\,j\in\\{1,\dots,n\\}.$
The third and fourth lines of (10) are precisely these two equations.
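The two component-wise formulas above are exactly the proximal operator of the elastic net penalty; a minimal sketch (the function name is ours):

```python
import numpy as np

def prox_elastic_net(theta_hat, lam1, lam2, tau):
    """Soft thresholding at level lam1 * tau followed by shrinkage by
    1 / (1 + lam2 * tau), as in the third and fourth lines of (10)."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    return np.sign(theta_hat) * np.maximum(
        0.0, (np.abs(theta_hat) - lam1 * tau) / (1 + lam2 * tau))
```

Setting `lam2 = 0` recovers the plain soft thresholding used in the fourth line of (14).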
## References
* Achia et al. [2010] Thomas NO Achia, Anne Wangombe, and Nancy Khadioli. A logistic regression model to identify key determinants of poverty using demographic and health survey data. _European Journal of Social Sciences_ , 13(1), 2010.
* Bagley et al. [2001] Steven C Bagley, Halbert White, and Beatrice A Golomb. Logistic regression in the medical literature: Standards for use and reporting, with particular attention to one medical domain. _Journal of Clinical Epidemiology_ , 54(10):979–985, 2001. ISSN 0895-4356. doi: https://doi.org/10.1016/S0895-4356(01)00372-9.
* Bauschke et al. [2003] Heinz H Bauschke, Jonathan M Borwein, and Patrick L Combettes. Bregman monotone optimization algorithms. _SIAM Journal on control and optimization_ , 42(2):596–636, 2003.
* Beck and Teboulle [2003] Amir Beck and Marc Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. _Operations Research Letters_ , 31(3):167–175, 2003.
* Beck and Teboulle [2009] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. _SIAM journal on imaging sciences_ , 2(1):183–202, 2009.
* Berger et al. [1996] Adam Berger, Stephen A Della Pietra, and Vincent J Della Pietra. A maximum entropy approach to natural language processing. _Computational linguistics_ , 22(1):39–71, 1996\.
* Bertsimas et al. [2019] Dimitris Bertsimas, Jean Pauphilet, and Bart Van Parys. Sparse regression: Scalable algorithms and empirical performance. _arXiv preprint arXiv:1902.06547_ , 2019.
* Bringmann et al. [2018] Björn Bringmann, Daniel Cremers, Felix Krahmer, and Michael Möller. The homotopy method revisited: Computing solution paths of $\ell_{1}$-regularized problems. _Mathematics of Computation_ , 87(313):2343–2364, 2018. URL https://doi.org/10.1090/mcom/3287.
* Bursac et al. [2008] Zoran Bursac, C Heath Gauss, David Keith Williams, and David W Hosmer. Purposeful selection of variables in logistic regression. _Source code for biology and medicine_ , 3(1):1–8, 2008.
* Catalina et al. [2018] Alejandro Catalina, Carlos M Alaíz, and José R Dorronsoro. scho. In _2018 International Joint Conference on Neural Networks (IJCNN)_ , pages 1–8. IEEE, 2018.
* Chambolle and Pock [2016a] A. Chambolle and T. Pock. An introduction to continuous optimization for imaging. _Acta Numer._ , 25:161–319, 2016a.
* Chambolle and Pock [2011] Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems with applications to imaging. _Journal of mathematical imaging and vision_ , 40(1):120–145, 2011.
* Chambolle and Pock [2016b] Antonin Chambolle and Thomas Pock. On the ergodic convergence rates of a first-order primal–dual algorithm. _Mathematical Programming_ , 159(1-2):253–287, 2016b.
* Chu et al. [2007] Cheng Chu, Sang Kyun Kim, Yian Lin, YuanYuan Yu, Gary Bradski, Andrew Y Ng, and Kunle Olukotun. Map-reduce for machine learning on multicore. _Advances in neural information processing systems_ , 19:281, 2007.
* Csiszár [1967] Imre Csiszár. Information-type measures of difference of probability distributions and indirect observation. _studia scientiarum Mathematicarum Hungarica_ , 2:229–318, 1967.
* Darbon and Langlois [2021] Jérôme Darbon and Gabriel Provencher Langlois. Accelerated nonlinear primal-dual hybrid gradient algorithms with applications to machine learning, 2021.
* Daubechies et al. [2004] Ingrid Daubechies, Michel Defrise, and Christine De Mol. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. _Communications on Pure and Applied Mathematics: A Journal Issued by the Courant Institute of Mathematical Sciences_ , 57(11):1413–1457, 2004.
* Demchenko et al. [2013] Yuri Demchenko, Paola Grosso, Cees De Laat, and Peter Membrey. Addressing big data issues in scientific data infrastructure. In _2013 International Conference on Collaboration Technologies and Systems (CTS)_ , pages 48–55. IEEE, 2013.
* Dhar [2020] Payal Dhar. The carbon impact of artificial intelligence. _Nat Mach Intell_ , 2:423–5, 2020.
* Efron et al. [2004] Bradley Efron, Trevor Hastie, Iain Johnstone, Robert Tibshirani, et al. Least angle regression. _The Annals of statistics_ , 32(2):407–499, 2004\.
* Ekeland and Temam [1999] Ivar Ekeland and Roger Temam. _Convex analysis and variational problems_. SIAM, 1999.
* El Guide et al. [2020] M El Guide, K Jbilou, C Koukouvinos, and A Lappa. Comparative study of l 1 regularized logistic regression methods for variable selection. _Communications in Statistics-Simulation and Computation_ , pages 1–16, 2020.
* Esser et al. [2010] Ernie Esser, Xiaoqun Zhang, and Tony F Chan. A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science. _SIAM Journal on Imaging Sciences_ , 3(4):1015–1046, 2010.
* Fercoq and Richtárik [2016] Olivier Fercoq and Peter Richtárik. Optimization in high dimensions via accelerated, parallel, and proximal coordinate descent. _SIAM Review_ , 58(4):739–771, 2016.
* Figueiredo and Nowak [2001] Mário AT Figueiredo and Robert D Nowak. Wavelet-based image estimation: an empirical Bayes approach using Jeffrey’s noninformative prior. _IEEE Transactions on Image Processing_ , 10(9):1322–1331, 2001.
* Foucart and Rauhut [2013] Simon Foucart and Holger Rauhut. _Sparse Solutions of Underdetermined Systems_ , pages 41–59. Springer New York, New York, NY, 2013. doi: 10.1007/978-0-8176-4948-7˙2.
* Friedman et al. [2007] Jerome Friedman, Trevor Hastie, Holger Höfling, Robert Tibshirani, et al. Pathwise coordinate optimization. _The annals of applied statistics_ , 1(2):302–332, 2007.
* Friedman et al. [2010] Jerome Friedman, Trevor Hastie, and Rob Tibshirani. Regularization paths for generalized linear models via coordinate descent. _Journal of statistical software_ , 33(1):1, 2010\.
* Genkin et al. [2007] Alexander Genkin, David D Lewis, and David Madigan. Large-scale bayesian logistic regression for text categorization. _Technometrics_ , 49(3):291–304, 2007.
* Greene et al. [2014] Casey S Greene, Jie Tan, Matthew Ung, Jason H Moore, and Chao Cheng. Big data bioinformatics. _Journal of cellular physiology_ , 229(12):1896–1900, 2014.
* Hastie et al. [2009] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. _The elements of statistical learning_. Springer Series in Statistics. Springer, New York, second edition, 2009\. doi: 10.1007/978-0-387-84858-7. Data mining, inference, and prediction.
* Hastie et al. [2021] Trevor Hastie, Junyang Qian, and Kenneth Tay. An introduction to glmnet. Available at https://glmnet.stanford.edu/articles/glmnet.html. Accessed on 17 February 2021, 2021.
* Hesterberg et al. [2008] Tim Hesterberg, Nam Hee Choi, Lukas Meier, Chris Fraley, et al. Least angle and $l^{1}$ penalized regression: A review. _Statistics Surveys_ , 2:61–93, 2008.
* Hoerl and Kennard [1970] Arthur E Hoerl and Robert W Kennard. Ridge regression: Biased estimation for nonorthogonal problems. _Technometrics_ , 12(1):55–67, 1970.
* Hohage and Homann [2014] Thorsten Hohage and Carolin Homann. A generalization of the chambolle-pock algorithm to banach spaces with applications to inverse problems. _arXiv preprint arXiv:1412.0126_ , 2014.
* Hosmer Jr et al. [2013] David W Hosmer Jr, Stanley Lemeshow, and Rodney X Sturdivant. _Applied logistic regression_ , volume 398. John Wiley & Sons, 2013.
* Kambatla et al. [2014] Karthik Kambatla, Giorgos Kollias, Vipin Kumar, and Ananth Grama. Trends in big data analytics. _Journal of parallel and distributed computing_ , 74(7):2561–2573, 2014.
* Kemperman [1969] Johannes HB Kemperman. On the optimum rate of transmitting information. In _Probability and information theory_ , pages 126–169. Springer, 1969.
* King and Zeng [2001] Gary King and Langche Zeng. Logistic regression in rare events data. _Political analysis_ , 9(2):137–163, 2001.
* Kullback [1967] Solomon Kullback. A lower bound for discrimination information in terms of variation (corresp.). _IEEE transactions on Information Theory_ , 13(1):126–127, 1967.
* Lee et al. [2006] Su-In Lee, Honglak Lee, Pieter Abbeel, and Andrew Y Ng. Efficient l~ 1 regularized logistic regression. In _Aaai_ , volume 6, pages 401–408, 2006.
* Leiserson et al. [2020] Charles E Leiserson, Neil C Thompson, Joel S Emer, Bradley C Kuszmaul, Butler W Lampson, Daniel Sanchez, and Tao B Schardl. There’s plenty of room at the top: What will drive computer performance after moore’s law? _Science_ , 368(6495), 2020.
* Li et al. [2020] Xiaoping Li, Yadi Wang, and Rubén Ruiz. A survey on sparse learning models for feature selection. _IEEE Transactions on Cybernetics_ , 2020.
* Lions and Mercier [1979] Pierre-Louis Lions and Bertrand Mercier. Splitting algorithms for the sum of two nonlinear operators. _SIAM Journal on Numerical Analysis_ , 16(6):964–979, 1979.
* Loris [2008] Ignace Loris. L1packv2: A mathematica package for minimizing an l1-penalized functional. _Computer physics communications_ , 179(12):895–902, 2008.
* Lowe and Parvar [2004] David J Lowe and Jamshid Parvar. A logistic regression approach to modelling the contractor’s decision to bid. _Construction Management and Economics_ , 22(6):643–653, 2004.
* L’heureux et al. [2017] Alexandra L’heureux, Katarina Grolinger, Hany F Elyamany, and Miriam AM Capretz. Machine learning with big data: Challenges and approaches. _Ieee Access_ , 5:7776–7797, 2017.
* Manning and Klein [2003] Christopher Manning and Dan Klein. Optimization, maxent models, and conditional estimation without magic. In _Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology: Tutorials-Volume 5_ , pages 8–8, 2003.
* Meier et al. [2008] Lukas Meier, Sara Van De Geer, and Peter Bühlmann. The group lasso for logistic regression. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_ , 70(1):53–71, 2008.
* Muchlinski et al. [2016] David Muchlinski, David Siroky, Jingrui He, and Matthew Kocher. Comparing random forest with logistic regression for predicting class-imbalanced civil war onset data. _Political Analysis_ , pages 87–103, 2016.
* Nesterov [2018] Yurii Nesterov. _Lectures on Convex Optimization_. Springer International Publishing, 2018.
* Pereira et al. [2009] Francisco Pereira, Tom Mitchell, and Matthew Botvinick. Machine learning classifiers and fmri: A tutorial overview. _NeuroImage_ , 45(1, Supplement 1):S199–S209, 2009. ISSN 1053-8119. Mathematics in Brain Imaging.
* Pinsker [1964] Mark S Pinsker. _Information and information stability of random variables and processes_. Holden-Day, 1964.
* Pock et al. [2009] Thomas Pock, Daniel Cremers, Horst Bischof, and Antonin Chambolle. An algorithm for minimizing the mumford-shah functional. In _2009 IEEE 12th International Conference on Computer Vision_ , pages 1133–1140. IEEE, 2009.
* Pranckevičius and Marcinkevičius [2017] Tomas Pranckevičius and Virginijus Marcinkevičius. Comparison of naive bayes, random forest, decision tree, support vector machines, and logistic regression classifiers for text reviews classification. _Baltic Journal of Modern Computing_ , 5(2):221, 2017.
|
# Phase Space Analysis of Cardiac Spectra
Onder Pekcan
Department of Molecular Biology and Genetics
Kadir Has University
Istanbul, Turkey
<EMAIL_ADDRESS>
Taner Arsan
Department of Computer Engineering
Kadir Has University
Istanbul, Turkey
<EMAIL_ADDRESS>
Corresponding Author
###### Abstract
Cardiac diseases are among the leading causes of mortality in modern,
industrialized societies, and they impose high costs on public health
systems. It is therefore important to develop analytical methods to improve
cardiac diagnostics. The electric activity of the heart was first modeled
using a set of nonlinear differential equations. Later, variations of cardiac
spectra originating from deterministic dynamics were investigated. Analysis of
the power spectra of a normal human heart shows that the His-Purkinje network
possesses a fractal-like structure. Phase space trajectories are extracted
from the time series graph of the ECG. Lower values of the fractal dimension
$D$ indicate more coherent dynamics; when $D$ takes non-integer values greater
than two, the system becomes chaotic, i.e., a strange attractor. Recently, the
development of a fast and robust method, which can be applied to multichannel
physiologic signals, was reported. This manuscript investigates two different
ECG systems produced from normal and abnormal human hearts in order to
introduce an auxiliary phase space method, used in conjunction with ECG
signals, for the diagnosis of heart diseases. Here, the data for each person
comprise two signals, based on lead $V_{4}$ and modified lead III (MLIII),
respectively. A fractal analysis is applied to the trajectories constructed in
phase space, from which the fractal dimension $D$ is obtained using the box
counting method. It is observed that MLIII signals have larger $D$ values than
the first signals ($V_{4}$), indicating more randomness but also more
information. The lowest value of $D$ (1.708) indicates the perfect oscillation
of the normal heart, and the highest value of $D$ (1.863) reflects the
randomness of the abnormal heart. Our significant finding is that the phase
space picture presents the distribution of the peak heights from the ECG
spectra, giving valuable information about heart activities in conjunction
with the ECG.
_Keywords_ Electrocardiography $\cdot$ Analysis $\cdot$ Diagnostic method
$\cdot$ Cardiac electrophysiology $\cdot$ Computer-based model
## 1 Introduction
It is well known that the human heart has an electric activity, which can be
detected by measuring the potential difference between various points on the
surface of the body. The measured electric potential versus time is called the
electrocardiogram (ECG), which possesses three separate parts: the $P$-wave
represents the excitation of the atria, the $QRS$ complex corresponds to the
ventricles (His-Purkinje network), and the $T$-wave is associated with the
recovery of the initial electrical state of the ventricles (see Figure 1(a)).
Although the ECG exhibits periodic behavior, some irregularities can be seen
in the details of the record. In fact, these irregularities belong to the
intrinsic part of heart activity and/or to the random noise that can be found
in such systems. These activities in ECG spectra are highly important for
understanding the cardiac dynamics. Cardiac oscillations are sometimes
perturbed by unpredictable contributions which are part of the cardiac
dynamics and therefore physiologically important. These findings indicate that
the heart is not a perfect oscillator and/or that the cardiac muscles do not
always vibrate harmonically.
The transformation of a sequence of values in time into a geometrical object
in space is a well-established technique, discussed in detail by Liebovitch
(1998). This procedure replaces an analysis in time with an analysis in space.
Here, the space is called phase space, and the procedure of transforming the
time series into the object in space is called an embedding. Afterwards,
topological properties of the object are determined based on its fractal
dimension. The fractal dimension characterizes the properties of the phase
space set, not the original time series. The dimension measured in the phase
space set determines whether a data set is generated by random or
deterministic processes. A large value of the fractal dimension indicates
random generation of the time series: the number of variables and equations is
so large that there is no way to predict future values from earlier
parameters. In other words, the multiplicity of interacting factors precludes
the possibility of understanding how the underlying mechanisms work. On the
other hand, a low fractal dimension value indicates that the data are
generated by deterministic mechanisms, based on a small number of independent
variables, which helps to understand how values in the past can be used to
predict values in the future. If the mechanism generating the data is
deterministic, but the time series behaves as if generated by random
processes, the system is considered chaotic. A chaotic system is deterministic
but not predictable in the long range. In a deterministic system that is not
chaotic, the value of a variable at a given time can be used to generate the
value of that variable at all future times.
The box counting technique is generally used to determine the fractal
dimension of the phase space set generated from the time series data. The
value of the fractal dimension gives the number of independent variables
needed to construct the time series from which the phase space is generated.
The box counting algorithm analyzes the phase space set generated from the
points with coordinates $x(t)$, $x(t+\Delta t)$, . . . ,
$x(t+(n-1)\Delta t)$, where $x(t)$ are the time series values and $\Delta t$
is the lag.
Figure 1: $PQRST$ in ECG and phase space. (a) ECG (b) Phase space.
Early on, the electric activity of the heart was modeled using a set of
nonlinear differential equations van der Pol Jun Docts. Sc. and van der Mark
(1928); Katholi et al. (1977); West et al. (1985). Afterwards, variations of
cardiac spectra originating from deterministic dynamics, called "chaotic
dynamics" because of their high sensitivity to the heart's initial conditions,
were analyzed by Babloyantz et al. (1985); Babloyantz and Destexhe (1986).
Analysis of the power spectra of a normal human heart shows that the His-
Purkinje network possesses a fractal-like structure L.Goldberger et al.
(1985), and the existence of chaotic behavior of the heart can be inferred
from the phase space picture by evaluating a fractal dimension, $D$, where
phase space trajectories are extracted from the time series graph of the ECG
(Figure 1(b)) Babloyantz and Destexhe (1988). Lower values of $D$ indicate
more coherent dynamics. If $D=1$, the oscillation is periodic and the phase
space picture shows a limit cycle. However, $D$ becomes larger than one when a
limit cycle is perturbed by random noise Bergé et al. (1984), and $D$ takes
non-integer values greater than two when the system becomes chaotic, i.e., a
strange attractor Bergé et al. (1984). In this case, although trajectories in
time do not converge towards a limit cycle, they stay in a bounded region of
the phase space. Instead of $D$, Babloyantz and Destexhe (1988) evaluate the
correlation dimension $D_{2}$ from a time series of finite length using
existing algorithms, Grassberger and Procaccia (1983a, b). The correlation
dimensions are obtained from a total of 36 ECG leads taken from 4 normal
resting persons. Within the range of computational errors, they find values of
$D_{2}$ ranging from $3.6$ to $5.2$. These values suggest that the normal
cardiac oscillations follow deterministic dynamics of a chaotic nature.
It has also been shown that short-term heart rate variability analysis yields
prognostic value in risk stratification, independent of clinical and
functional variables, Rovere et al. (2003). However, the detailed description
and classification of dynamical changes using time and frequency measures are
often not sufficient, especially in dynamical diseases as characterized by
Mackey and Glass (1977, 1979).
Wessel et al. (2009) try to answer the question: is the normal heart rate
chaotic due to respiration? In their work, they give an example of the
influence of respiration on heartbeat dynamics, showing that the observed
fluctuations can mostly be explained by respiratory modulations of heart rate
and blood pressure. Recently, the development of a fast and robust method that
can be applied to multichannel physiologic signals was reported by Wilson and
Haueisen (2017). This method enables either removing a selected interfering
signal or separating signals that arise from temporally correlated and
spatially distributed sources, such as maternal or fetal ECG spectra.
Convolutional neural networks (CNNs) have also been applied to patient-
specific ECG classification for real-time heart monitoring, Kiranyaz et al.
(2016).
Nowadays, it is well understood that cardiac diseases are among the leading
causes of mortality in modern, industrialized societies, and they impose high
costs on public health systems. It is therefore important to develop
analytical methods to improve cardiac diagnostics. In this work, we
investigate two different ECG systems taken from normal and abnormal human
hearts, Goldberger et al. (2000). Our aim is to introduce an auxiliary phase
space method, used in conjunction with ECG signals, to diagnose heart
diseases. We apply fractal analysis to the given data through trajectories
produced in phase space, from which the fractal dimension $D$ is obtained
using the box counting method, Liebovitch and Toth (1989).
## 2 Methods
### 2.1 Data
The data are taken from the European $ST$-$T$ Database, which is intended for
the evaluation of algorithms for the analysis of $ST$ and $T$-wave changes,
Goldberger et al. (2000). We have selected three different ECG records for
each of two persons. Person 1, considered as having a normal heart, is a man
aged 51 with resting angina and normal coronary arteries. Person 2, considered
as having an abnormal heart, is a man aged 58 with resting angina, anterior
myocardial infarction, 1-vessel disease (LAD) and aortic valvular
regurgitation. The ECG records e0118, e0121 and e0122 of person 1 (Figure 2)
and the ECG records e0123, e0125 and e0126 of person 2 (Figure 3) are
examined. Each record has two signals, registered based on lead $V_{4}$ and
modified lead III (MLIII), respectively. For each signal, 200,000 samples are
used.
Figure 2: First and second ECG signals of the normal heart, taken at two
different records of the same person. Figure 3: First and second ECG signals
of the abnormal heart, taken at two different records of the same person.
### 2.2 Algorithms
#### 2.2.1 Phase space
We construct the phase space from the heart voltage values over time (i.e.,
$V(t)$) and their first derivative (i.e., $dV(t)/dt$). Figures 4 and 5 show
the phase spaces, $V(t)$ versus $dV(t)/dt$.
To obtain the first derivative of the function $V(t)$, we use the third-order
forward difference Taylor series approximation of $f^{\prime}(x)$
Burden and Faires (1993); Khan and Ohba (1999); Ronco et al. (1999). The
simple approximation of the first derivative of a function $f$ at a point $x$
is defined as the limit of a difference quotient as follows:
$f^{\prime}(x)=\lim_{h\rightarrow 0}\frac{f(x+h)-f(x)}{h}$ (1)
Figure 4: Phase space of normal heart for the first and the second signal.
Figure 5: Phase space of abnormal heart for the first and the second signal.
If $h>0$, meaning that $h$ is a finite positive number, then
$f^{\prime}(x)=\frac{f(x+h)-f(x)}{h}$ (2)
is called the first-order forward difference approximation of $f^{\prime}(x)$.
A higher-order approximation of $f^{\prime}(x)$ can be obtained by combining
Taylor series expansions. If the values $(x_{i+1},f_{i+1})$,
$(x_{i+2},f_{i+2})$ and $(x_{i+3},f_{i+3})$ are known, the first derivative
$f^{\prime}_{i}$ can be calculated as follows.
$f_{(i+1)}=f_{i}+\frac{f^{\prime}_{i}}{1!}h+\frac{f^{\prime\prime}_{i}}{2!}h^{2}+\frac{f^{\prime\prime\prime}_{i}}{3!}h^{3}+\ldots+\frac{f_{i}^{n}}{n!}h^{n}+R_{n}$
(3) $(x_{i+1},f_{i+1})\rightarrow
f_{i+1}=f_{i}+\frac{f^{\prime}_{i}}{1!}h+\frac{f^{\prime\prime}_{i}}{2!}h^{2}\rightarrow
f_{i+1}=f_{i}+hf^{\prime}_{i}+\frac{h^{2}}{2}f^{\prime\prime}_{i}$ (4)
$(x_{i+2},f_{i+2})\rightarrow
f_{i+2}=f_{i}+\frac{f^{\prime}_{i}}{1!}(2h)+\frac{f^{\prime\prime}_{i}}{2!}(2h)^{2}\rightarrow
f_{i+2}=f_{i}+2hf^{\prime}_{i}+2h^{2}f^{\prime\prime}_{i}$ (5)
$(x_{i+3},f_{i+3})\rightarrow
f_{i+3}=f_{i}+\frac{f^{\prime}_{i}}{1!}(3h)+\frac{f^{\prime\prime}_{i}}{2!}(3h)^{2}\rightarrow
f_{i+3}=f_{i}+3hf^{\prime}_{i}+\frac{9}{2}h^{2}f^{\prime\prime}_{i}$ (6)
The second derivative $f^{\prime\prime}_{i}$ is canceled by adding Equations
4, 5 and 6 after multiplying them by $18$, $-9$, and $2$, respectively.
Therefore, we obtain:
$18f_{i+1}-9f_{i+2}+2f_{i+3}=11f_{i}+6hf^{\prime}_{i}$ (7)
Finally, third-order forward difference Taylor series approximation of
$f^{\prime}(x)$ is obtained as follows:
$f^{\prime}_{i}=\frac{1}{6h}(-11f_{i}+18f_{i+1}-9f_{i+2}+2f_{i+3})$ (8)
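Equation 8 is straightforward to apply to a sampled signal; the sketch below (function name illustrative) evaluates it vectorially over the whole sample array, and exploits the fact that, since the derivation cancels terms through $h^{2}$, the formula is exact for cubic polynomials:

```python
import numpy as np

def forward_diff3(f, h):
    """Third-order forward difference of Eq. (8):
    f'_i = (-11 f_i + 18 f_{i+1} - 9 f_{i+2} + 2 f_{i+3}) / (6h).
    Returns the derivative at the first len(f) - 3 sample points.
    """
    f = np.asarray(f, dtype=float)
    return (-11 * f[:-3] + 18 * f[1:-2] - 9 * f[2:-1] + 2 * f[3:]) / (6 * h)

# Check on f(x) = x^3, whose derivative 3x^2 is reproduced exactly.
h = 0.1
x = np.arange(0, 1, h)
d = forward_diff3(x ** 3, h)
```

In our setting `f` would be the sampled ECG voltage $V(t)$ and `h` the sampling interval, so that the pairs $(V_i, V'_i)$ form the phase space points of Figures 4 and 5.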
#### 2.2.2 Box counting
The pseudocode of the box counting algorithm is shown in Figure 6. The samples
of the signal constitute the $x$ coordinates of the time series. A $y$
coordinate is calculated from four consecutive $x$ values as explained in the
previous section (line 3). The $x$ and $y$ values are then normalized (lines
6-9). A rectangle is specified by the ($x$,$y$) coordinates of its top-left
corner, its width and its height (e.g., line 10). The for loop in lines 11-35
gives the number of boxes containing points in the phase space. The algorithm
starts with the smallest rectangle containing all points in the space (line
10). In each iteration, the rectangles in the set Rectangle are divided into
four rectangles (lines 12-24). In lines 25-33, we count the number of boxes
containing at least one point in the space. The number of boxes per iteration
is output at the end (line 34).
Figure 6: Box Counting Algorithm.
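The same counting can be written compactly with dyadic box sizes on a fixed grid. The sketch below (names illustrative; a simplification of Figure 6's recursive rectangle splitting) also performs the log-log fit of $\log N$ versus $\log(1/r)$ used later in Equation 9 and Figures 7 and 8:

```python
import numpy as np

def box_counting_dimension(points, n_scales=6):
    """Estimate the fractal dimension D of a 2-D point set.

    Points are normalized to the unit square; for each box size
    r = 1/2^k the number N(r) of occupied boxes is counted, and D is
    the fitted slope of log N versus log(1/r) (cf. Eq. 9).
    """
    p = np.asarray(points, dtype=float)
    p = (p - p.min(axis=0)) / (np.ptp(p, axis=0) + 1e-12)  # normalize to [0, 1]^2
    sizes = 1.0 / 2 ** np.arange(1, n_scales + 1)
    counts = []
    for r in sizes:
        cells = np.floor(p / r).astype(int)       # grid cell index of each point
        counts.append(len({tuple(c) for c in cells}))  # occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

# A straight line of points should give D close to 1 (cf. the limit cycle,
# D = 1, discussed in the Introduction).
t = np.linspace(0, 1, 5000)
D = box_counting_dimension(np.column_stack([t, t]))
```

For a phase space trajectory, `points` would be the $(V, dV/dt)$ pairs of Figures 4 and 5; the fitted slope then plays the role of the $D$ values reported in Table 1.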
## 3 Results and discussions
Figures 4 and 5 present the phase space pictures of the ECG time series
produced from Figures 2 and 3 for the normal and abnormal hearts,
respectively. The phase space pictures of the normal heart for the first
signal in Figure 4 show deterministic behavior, meaning that this signal
possesses a perfect oscillatory character.
When the phase space pictures in Figure 4 are compared with the phase space
pictures of the first signal of the abnormal heart in Figure 5, random
behavior is observed. The phase space pictures in Figure 5 imply that several
perturbations develop on top of the heart oscillations at the same time, which
makes it difficult to understand the individual mechanisms at work while
producing the first signal of the abnormal heart. Figure 4 (d) to (f) present
the second signal of the normal heart, which shows a slight deviation from
oscillatory behavior but still obeys the deterministic character. The
broadening of this signal is most probably due to the resting angina, which
affects the heart oscillation. A similar broadening may be expected for the
abnormal heart. However, as seen in Figure 5, all the phase space pictures
have a strong random character, implying that the abnormal heart has serious
heart failure due to anterior myocardial infarction, 1-vessel disease (LAD)
and aortic valvular regurgitation. In other words, in phase space terminology,
numerous simultaneous factors are behind this random behavior of the abnormal
heart. In short, these phase space pictures imply that the abnormal heart is
unable to perform normal oscillatory action due to its cardiac deficiencies.
Figures 4 and 5 can also be interpreted with the help of the peak height,
$V(t)$, patterns in the ECG in Figures 2 and 3. In the phase space pictures,
the $R$ peaks always have larger values than the $P$, $Q$, $S$ and $T$ peaks.
On the other hand, the $S$ peaks have the smallest values only in the first
signal of both normal and abnormal hearts, while in the second signals the $Q$
peaks have the smallest values compared to the other peaks. Moreover, the
phase space picture in Figure 4(b) provides well-localized $R$, $P$, $T$, $S$
and $Q$ values, implying that the peak heights of the ECG signals in Figure
2(b) have almost the same values and are distinguished from each other by
their location in the phase space picture. The spreading of the $R$, $P$,
$T$, $S$ and $Q$ values is more pronounced in the second signal's phase space
pictures in Figure 4(e), due to a single defect, i.e., resting angina, in the
normal heart. On the other hand, as seen in Figure 5(a-b-c), especially the
spreading of the $R$ values reflects the randomness of the peak heights of $R$
in the ECG pattern, implying serious heart failure due to anterior myocardial
infarction, 1-vessel disease (LAD) and aortic valvular regurgitation of the
abnormal heart. This randomness of the peak heights of $R$, $P$, $T$, $S$ and
$Q$ increases tremendously for the second signal of the abnormal heart, as
seen in Figure 5(d-e-f), where the $R$, $P$, $T$, $Q$ and $S$ values spread in
all directions and become indistinguishable from each other in phase space,
showing a strong random character. In that sense, one of the significant
outcomes of our work is that phase space analysis can be useful for the
diagnosis of heart diseases in conjunction with ECG patterns, because the
broadening of the $R$, $P$, $T$, $S$ and $Q$ values readily reflects the
irregularities in the ECG patterns and provides information on the peak height
distribution.
The data in Figures 4 and 5 are analyzed using the box counting method of
Figure 6, where the following equation (9) is used
$D=\frac{\log N}{\log\displaystyle\frac{1}{r}}$ (9)
to produce the fractal dimension, $D$. In this equation, $N$ is the minimum
number of boxes needed to cover the set of points and $r$ is the box size. The
plots of $\log N$ versus $\log r$ are given in Figures 7 and 8, from which the
$D$ values are calculated.
Figure 7: $\log N$ versus $\log r$ plots and best fit for normal heart data.
Figure 8: $\log N$ versus $\log r$ plots and best fit for abnormal heart data.
The results are shown in Table 1 together with the correlation coefficients,
$R^{2}$, which are found to be in a reasonable range.
The fractal dimensions, $D$, calculated from the phase space pictures in
Figures 4 and 5 using the box counting method are listed in Table 1. The $D$
values calculated from the normal heart data (i.e., 1.787, 1.749, 1.708 for
the first signal and 1.804, 1.816, 1.821 for the second signal) are smaller
than the abnormal heart dimensions (i.e., 1.816, 1.814, 1.816 for the first
signal and 1.863, 1.861, 1.860 for the second signal), supporting the
deterministic behavior of the normal heart as compared to the random behavior
of the abnormal heart. In other words, the former is a good oscillator, while
the latter has serious difficulty producing perfect harmonic oscillations.
Table 1: Fractal dimensions produced from the phase space pictures in Figures 4 and 5 and the fitting procedures in Figures 7 and 8.

| | | Fr. Dim. $D$ | | Fitting $R^{2}$ | |
|---|---|---|---|---|---|
| | Patient | Signal 1 | Signal 2 | Signal 1 | Signal 2 |
| Normal | e0118 | 1.787 | 1.804 | 0.9981 | 0.9974 |
| | e0121 | 1.749 | 1.816 | 0.9992 | 0.9980 |
| | e0122 | 1.708 | 1.821 | 0.9992 | 0.9987 |
| Abnormal | e0123 | 1.816 | 1.863 | 0.9978 | 0.9983 |
| | e0125 | 1.814 | 1.861 | 0.9982 | 0.9980 |
| | e0126 | 1.816 | 1.860 | 0.9982 | 0.9977 |
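The group separation in Table 1 can be checked directly (a trivial illustration; the $D$ values are copied from the table, and the grouping code is our own):

```python
# Fractal dimensions D from Table 1 for the second signal (MLIII).
normal = [1.804, 1.816, 1.821]    # records e0118, e0121, e0122
abnormal = [1.863, 1.861, 1.860]  # records e0123, e0125, e0126

mean = lambda xs: sum(xs) / len(xs)
# Every abnormal value exceeds every normal value, so the means separate cleanly.
print(round(mean(normal), 3), round(mean(abnormal), 3))  # 1.814 1.861
```

The same holds for the $V_{4}$ column except for the single overlap at 1.816 (record e0121 aside, normal values stay below abnormal ones), which is why the MLIII signal is emphasized in the Conclusion.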
## 4 Conclusion
The most crucial observation of this study is the behavior of the second
signals (MLIII), which give more information than the first signals ($V_{4}$)
on the action of normal and abnormal hearts. The second signals have larger
fractal dimension $D$ values than the first signals, indicating more
randomness but also more information in the MLIII measurements. The lowest
value of $D$ (i.e., 1.708) indicates a perfect oscillation of the heart, and
the highest value of $D$ (i.e., $1.863$) reflects randomness of the heart. In
fact, the phase space picture presents the distribution of the peak heights in
the ECG spectra, giving valuable information about heart activities in
conjunction with the ECG itself. In future work, we plan to apply both the
fractal dimension and the peak height distribution analysis in phase space to
various abnormal human hearts, in order to improve this novel diagnostic
method for heart diseases.
#####
Acknowledgment
We would like to thank Dr. Eliya Buyukkaya from Wageningen University for
visualizing the ECG data as well as for her fruitful discussions with us.
#####
Conflict of interest
The authors declare no conflict of interest.
#####
Author’s contribution
The authors contributed equally to this work, and read and approved the final
manuscript.
#####
Availability of supporting data
All data used in this paper are obtained from European ST-T Database. It is
available at https://physionet.org/physiobank/database/edb/.
#####
Ethical approval and consent to participate
Kadir Has University sees no ethical conflict in this study and approves the
submission of this manuscript to scientific journals. The need for university
consent was waived.
## References
* Liebovitch [1998] Larry S. Liebovitch. _Fractals and chaos simplified for the life sciences_. New York: Oxford University Press, 1998.
* van der Pol Jun Docts. Sc. and van der Mark [1928] Balth van der Pol Jun Docts. Sc. and J. van der Mark. Lxxii. the heartbeat considered as a relaxation oscillation, and an electrical model of the heart. _Philosophical Magazine Series 1_ , 6:763–775, 1928.
* Katholi et al. [1977] Charles R. Katholi, F. Urthaler, J. Macy, and T.N. James. A mathematical model of automaticity in the sinus node and av junction based on weakly coupled relaxation oscillators. _Computers and Biomedical Research_ , 10(6):529–543, 1977. ISSN 0010-4809. doi:https://doi.org/10.1016/0010-4809(77)90011-8. URL https://www.sciencedirect.com/science/article/pii/0010480977900118.
* West et al. [1985] Bruce J. West, Ary L. Goldberger, Galina Rovner, and Valmik Bhargava. Nonlinear dynamics of the heartbeat: I. the av junction: Passive conduit or active oscillator? _Physica D: Nonlinear Phenomena_ , 17(2):198–206, 1985. ISSN 0167-2789. doi:https://doi.org/10.1016/0167-2789(85)90004-1. URL https://www.sciencedirect.com/science/article/pii/0167278985900041.
* Babloyantz et al. [1985] Agnessa Babloyantz, J.M. Salazar, and C. Nicolis. Evidence of chaotic dynamics of brain activity during the sleep cycle. _Physics Letters A_ , 111(3):152–156, 1985. ISSN 0375-9601. doi:https://doi.org/10.1016/0375-9601(85)90444-X. URL https://www.sciencedirect.com/science/article/pii/037596018590444X.
* Babloyantz and Destexhe [1986] Agnessa Babloyantz and Alain Destexhe. Low-dimensional chaos in an instance of epilepsy. _Proceedings of the National Academy of Sciences of the United States of America_ , 83(10):3513–3517, 1986.
* L.Goldberger et al. [1985] Ary L. Goldberger, Valmik Bhargava, Bruce J. West, and Arnold J. Mandell. On a mechanism of cardiac electrical stability. The fractal hypothesis. _Biophysical journal_ , 48(3):525–528, 1985. doi:https://doi.org/10.1016/S0006-3495(85)83808-X.
* Babloyantz and Destexhe [1988] Agnessa Babloyantz and Alain Destexhe. Is the normal heart a periodic oscillator? _Biological Cybernetics_ , 58(3):203–211, 1988\. doi:https://doi.org/10.1007/BF00364139.
* Bergé et al. [1984] Pierre Bergé, Yves Pomeau, and Christian Vidal. _L’ordre dans le chaos: Vers une approche déterministe de la turbulence_. Hermann, 1984.
* Grassberger and Procaccia [1983a] Peter Grassberger and Itamar Procaccia. Measuring the strangeness of strange attractors. _Physica D: Nonlinear Phenomena_ , 9(1):189–208, 1983a.
* Grassberger and Procaccia [1983b] Peter Grassberger and Itamar Procaccia. Estimation of the kolmogorov entropy from a chaotic signal. _Physical Review A_ , 28:2591–2593, 1983b.
* Rovere et al. [2003] Maria Teresa La Rovere, Gian Domenico Pinna, Roberto Maestri, Andrea Mortara, Soccorso Capomolla, Oreste Febo, Roberto Ferrari, Mariella Franchini, Marco Gnemmi, Cristina Opasich, Pier Giorgio Riccardi, Egidio Traversi, and Franco Cobelli. Short-term heart rate variability strongly predicts sudden cardiac death in chronic heart failure patients. _Circulation_ , 107(4):565–570, 2003.
* Mackey and Glass [1977] Michael C. Mackey and Leon Glass. Oscillation and chaos in physiological control systems. _Science_ , 197(4300):287–289, 1977.
* Mackey and Glass [1979] Michael C. Mackey and Leon Glass. Pathological conditions resulting from instabilities in physiological control systems. _Annals of the New York Academy of Sciences_ , 316:214–235, 1979.
* Wessel et al. [2009] Niels Wessel, Maik Riedl, and Jurgen Kurths. Is the normal heart rate ”chaotic” due to respiration? _Chaos: An Interdisciplinary Journal of Nonlinear Science_ , 19(2), 2009.
* Wilson and Haueisen [2017] James D. Wilson and Jens Haueisen. Separation of physiological signals using minimum norm projection operators. _IEEE Transactions on Biomedical Engineering_ , 64(4):904–916, 2017.
* Kiranyaz et al. [2016] Serkan Kiranyaz, Turker Ince, and Moncef Gabbouj. Real-time patient-specific ecg classification by 1-d convolutional neural networks. _IEEE Transactions on Biomedical Engineering_ , 63(3):664–675, 2016.
* Goldberger et al. [2000] Ary L. Goldberger, Luis A. N. Amaral, Leon Glass, Jeffrey M. Hausdorff, Plamen Ch. Ivanov, Roger G. Mark, Joseph E. Mietus, George B. Moody, Chung-Kang Peng, and H. Eugene Stanley. Physiobank, physiotoolkit, and physionet: Components of a new research resource for complex physiologic signals. _Circulation_ , 101(23), 2000.
* Liebovitch and Toth [1989] Larry S. Liebovitch and Tibor Toth. A fast algorithm to determine fractal dimensions by box counting. _Physics Letters A_ , 141(8):386–390, 1989.
* Burden and Faires [1993] Richard L. Burden and J. Douglas Faires. _Numerical Analysis_. PWS-Kent Pub. Co., Boston, 1993.
* Khan and Ohba [1999] Ishtiaq Rasool Khan and Ryoji Ohba. Closed form expressions for the finite difference approximations of first and higher derivatives based on taylor series. _J. Comp. Appl. Math._ , 107:179–193, 1999.
* Ronco et al. [1999] Eric Ronco, Taner Arsan, and Peter J. Gawthrop. Open-loop intermittent feedback control: Practical continuous-time gpc. _IEE Proceedings - Control Theory and Applications_ , 146(5):426–434, 1999.
|
1 Department of Physical Sciences, Oklahoma State University, Stillwater, Oklahoma 74078, USA
2 LAPTh, Université Savoie Mont Blanc, CNRS, B.P. 110, F-74941 Annecy Cedex, France
3 Centre for High Energy Physics, Indian Institute of Science, Bengaluru 560012, India
# Is the light neutralino thermal dark matter in the MSSM ruled out?
Rahool Kumar Barman1, Genevieve Bélanger2, Biplob Bhattacherjee3, Rohini M.
Godbole3, Rhitaja Sengupta3<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
We explore the parameter space of the phenomenological Minimal Supersymmetric
Standard Model (pMSSM) with a light neutralino thermal dark matter
($M_{\widetilde{\chi}_{1}^{0}}\leq m_{h}/2$) for both positive and negative
values of the higgsino mass parameter ($\mu$) that is consistent with current
collider and astrophysical constraints. Our investigation shows that the
recent experimental results from the LHC as well as from direct detection
searches for dark matter by the LUX-ZEPLIN collaboration basically rule out
the $\mu>0$ scenario while only allowing a very narrow region with light
electroweakinos in the $\mu<0$ scenario. These are well within the reach of
the Run-3 of LHC and dedicated efforts to probe this region should be pursued.
## 1 Introduction
The R-parity conserving (RPC) scenario of the minimal supersymmetric extension
of the Standard Model (MSSM) has been among the most favoured frameworks for
exploring physics beyond the Standard Model (BSM). The RPC-MSSM scenario
alleviates the “naturalness” problem Gildener:1976ih ; PhysRevD.20.2619 of the
Standard Model (SM), while also providing a SM-like Higgs boson ($h$) with
mass $m_{h}\sim 125~{}\mathrm{GeV}$ and a stable lightest supersymmetric
particle (LSP), typically the neutralino $\widetilde{\chi}_{1}^{0}$, which can
be a cold dark matter (DM) candidate. The case of a light neutralino,
$m_{\tilde{\chi}_{1}^{0}}\leq m_{h}/2$, is of special interest since it is
kinematically feasible for the Higgs boson to decay invisibly through
$h\to\tilde{\chi}_{1}^{0}\tilde{\chi}_{1}^{0}$, thus providing an additional
signature for dark matter in the Higgs sector. Several studies have explored
the prospect of a light neutralino DM in the MSSM (in the constrained MSSM,
cMSSM and the phenomenological MSSM, pMSSM) considering the various
experimental constraints of the time PhysRevD.37.719 ; Djouadi:1996mj ;
Belanger:2000tg ; Belanger:2001am ; Belanger:2003wb ; Calibbi:2011ug ;
Dreiner:2012ex ; Ananthanarayan:2013fga ; Calibbi:2013poa ; Belanger:2013pna ;
Han:2014nba ; Belanger:2015vwa ; Hamaguchi:2015rxa ; Barman:2017swy ;
Pozzo:2018anw ; Wang:2020dtb ; KumarBarman:2020ylm ; VanBeekveld:2021tgn .
Collider experiments such as ATLAS and CMS have released the latest
results of searches for heavy Higgs bosons ATLAS:2020zms , direct searches for
charginos and neutralinos CMS:2020bfa ; ATLAS:2021moa ; ATLAS:2021yqv ;
CMS:2022sfi , as well as the invisible decay of the SM Higgs boson
ATLAS:2022yvh . The XENON-1T, PICO-60, PandaX-4T, and LUX-ZEPLIN (LZ)
collaborations have also published limits on the DM direct detection (DD)
cross-sections, both spin-dependent (SD) and spin-independent (SI)
XENON:2018voc ; XENON:2019rxp ; PICO:2019vsc ; PandaX-4T:2021bab ;
Aalbers:2022fxq . Among these, the results from the LZ collaboration are the
most stringent ones for the SI DD cross-sections Aalbers:2022fxq . In light of
these results, it becomes important to revisit the MSSM
parameter space of a light neutralino DM which can contribute to the invisible
decay of the SM Higgs boson.
In this paper, we study the current status of the light neutralino DM in MSSM
for both positive and negative values of the higgsino mass parameter, $\mu$.
It should be noted that a positive value of $\mu$ is favoured by a
supersymmetric explanation of the discrepancy between the experimentally
measured value of $(g-2)_{\mu}$ and the SM prediction Muong-2:2021ojo .
Given the prevailing uncertainties in the estimation of the hadronic
contributions to the SM prediction, we remain agnostic about
the sign of $\mu$. As the Large Hadron Collider (LHC) is gearing up
for Run-3 and will soon start collecting data, a careful study of the overall
status of this scenario is very timely, to identify the interesting regions of
the parameter space that can be focal points of the LHC searches at Run-3.
## 2 Current status of light neutralino dark matter in the MSSM
We consider the pMSSM parameter space with the parameters defined at the
electroweak scale. Our focus is the light neutralino sector with
$m_{\widetilde{\chi}_{1}^{0}}\leq m_{h}/2$, such that it is kinematically
possible for the SM-like Higgs boson to decay into a $\widetilde{\chi}_{1}^{0}$
pair, potentially contributing to the invisible decay mode of the Higgs boson.
The input parameters which capture the physics of the Higgs and electroweakino
sectors are: $M_{1}$, the bino mass; $M_{2}$, the wino mass ($M_{1}$ and
$M_{2}$ are collectively referred to as the gaugino masses); $\mu$, the
higgsino mass; $\tan\beta$, the ratio of the two Higgs vacuum expectation
values; $M_{A}$, the pseudoscalar mass; $M_{\tilde{Q}_{3L}}$,
$M_{\tilde{t}_{R}}$, and $M_{\tilde{b}_{R}}$, the third-generation squark
masses; $A_{t}$, the stop trilinear coupling; and $M_{3}$, the gluino mass. A
random scan is performed over the following ranges of input parameters:
$\displaystyle 30~{\rm GeV}<M_{1}<100~{\rm GeV},\quad 1~{\rm TeV}<M_{2}<3~{\rm TeV},$
$\displaystyle 100~{\rm GeV}<|\mu|<2~{\rm TeV},\quad 2<\tan\beta<50,$
$\displaystyle 100~{\rm GeV}<M_{A}<5~{\rm TeV},\quad 3~{\rm TeV}<M_{\tilde{Q}_{3L}}<10~{\rm TeV},$
$\displaystyle 3~{\rm TeV}<M_{\tilde{t}_{R}}<10~{\rm TeV},\quad 3~{\rm TeV}<M_{\tilde{b}_{R}}<10~{\rm TeV},$
$\displaystyle -10~{\rm TeV}<A_{t}<10~{\rm TeV},\quad 2~{\rm TeV}<M_{3}<5~{\rm TeV}.$
Since we are interested in a light neutralino, it is dominantly bino-like
($\tilde{B}$); we have therefore scanned $M_{1}$ in the very low-mass region.
The coupling of the $Z$ and $h$ bosons to a pair of
$\tilde{\chi}_{1}^{0}$ also depends on its higgsino ($\tilde{H}$) and wino
($\tilde{W}$) components; therefore, to avoid an overabundance of
$\tilde{\chi}_{1}^{0}$ as the DM candidate, we require some $\tilde{H}$ or
$\tilde{W}$ admixture. In the present work we are mostly interested in a
higgsino-like next-to-lightest supersymmetric particle (NLSP), owing to the
stronger existing limits on wino-like NLSPs; hence $M_{2}$ is varied starting
from 1 TeV, while $\mu$ is varied starting from the comparatively low value of
100 GeV. We cannot push $\mu$ below 100 GeV because of the existing limits on
charginos from the Large Electron Positron (LEP) collider experiments. We have
scanned the parameter space with both positive and negative values of $\mu$.
We have fixed the masses of the first- and second-generation squarks at 5 TeV,
the masses of all three generations of sleptons at 2 TeV, and all their
trilinear couplings to zero, in order to keep them decoupled from the particle
spectrum. We additionally perform a dedicated scan in which we dynamically tune
the $M_{1}$ parameter to keep $m_{\tilde{\chi}_{1}^{0}}$ within 5 GeV of half
the $Z$ mass and within 3 GeV of half the calculated $h$ mass, so that the
funnel regions are properly populated.
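The random scan above can be sketched as follows (an illustrative snippet with our own variable names, not the actual scan code used in this work):

```python
import random

random.seed(1)

# Scan ranges (GeV, except tan_beta), mirroring the ranges quoted above.
ranges = {
    "M1": (30.0, 100.0),        # bino mass
    "M2": (1.0e3, 3.0e3),       # wino mass
    "abs_mu": (100.0, 2.0e3),   # |mu|, higgsino mass
    "tan_beta": (2.0, 50.0),
    "MA": (100.0, 5.0e3),       # pseudoscalar mass
    "MQ3L": (3.0e3, 10.0e3),    # third-generation squark masses
    "MtR": (3.0e3, 10.0e3),
    "MbR": (3.0e3, 10.0e3),
    "At": (-10.0e3, 10.0e3),    # stop trilinear coupling
    "M3": (2.0e3, 5.0e3),       # gluino mass
}

def draw_point(sign_mu=+1):
    """Draw one pMSSM point uniformly within the scan ranges."""
    p = {k: random.uniform(lo, hi) for k, (lo, hi) in ranges.items()}
    p["mu"] = sign_mu * p.pop("abs_mu")  # both signs of mu are scanned
    return p

point = draw_point(sign_mu=-1)
```

Each drawn point is then passed to the spectrum generator and the constraint chain described below.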
We have used FeynHiggs 2.18.1 Heinemeyer:1998yj ; Heinemeyer:1998np ;
Degrassi:2002fi ; Frank:2006yh ; Hahn:2013ria ; Bahl:2016brp ; Bahl:2017aev ;
Bahl:2018qog to generate the SUSY spectra corresponding to the various sets
of input parameters, and to calculate the Higgs boson mass and the decays in
the Higgs sector. We assume that the lightest CP-even Higgs boson of the MSSM
is the SM-like Higgs boson observed by the ATLAS and CMS collaborations, whose
combined measured mass is $m_{h}=125.09\pm 0.21({\rm stat})\pm
0.11({\rm syst})$ GeV ATLAS:2015yey . We apply a conservative constraint on
its theoretically calculated mass, $122{\rm~{}GeV}<m_{h}<128{\rm~{}GeV}$.
Since our scan starts from very low $\tan\beta$ values, where reproducing
the observed Higgs boson mass requires large stop masses and $A_{t}$
values, we must ensure that the latter are not so large as to induce charge-
and color-breaking (CCB) minima Camargo-Molina:2013sta ; Chowdhury:2013dka ;
Blinov:2013fta . We have implemented the constraint from Ref.
Chowdhury:2013dka , which showed that the electroweak vacuum becomes metastable
or unstable for $|X_{t}|\gtrsim\sqrt{6m_{\tilde{t}_{1}}m_{\tilde{t}_{2}}}$
unless $\mu\sim m_{\tilde{t}_{L},\tilde{t}_{R}}$, where
$X_{t}=A_{t}-\mu/\tan\beta$.
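This vacuum-stability condition is straightforward to encode as a filter on scan points (a minimal sketch with illustrative function names; all masses in GeV, and the $\mu\sim m_{\tilde{t}}$ caveat noted above is ignored here):

```python
import math

def x_t(A_t, mu, tan_beta):
    """Stop mixing parameter X_t = A_t - mu / tan(beta)."""
    return A_t - mu / tan_beta

def ccb_safe(A_t, mu, tan_beta, m_stop1, m_stop2):
    """True if |X_t| stays below sqrt(6 m_stop1 m_stop2), i.e. the point
    avoids the metastable/unstable region of Ref. Chowdhury:2013dka."""
    return abs(x_t(A_t, mu, tan_beta)) < math.sqrt(6.0 * m_stop1 * m_stop2)
```

Points failing this test are discarded before the constraint chain is applied.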
Next, we apply limits on the partial decay width of the invisible decay of
$Z$-boson from new physics, $\Gamma_{\rm inv}^{\rm new}<2$ MeV ALEPH:2005ab , the
chargino mass, $m_{\tilde{\chi}_{1}^{\pm}}>103$ GeV OPAL:2003wxm , and the cross-section
of associated production of neutralinos in final states with jets,
$\sigma(e^{+}e^{-}\rightarrow\tilde{\chi}_{1}^{0}\tilde{\chi}_{2}^{0})\times{\rm
Br}(\tilde{\chi}_{2}^{0}\rightarrow\tilde{\chi}_{1}^{0}+{\rm
jets})+\sigma(e^{+}e^{-}\rightarrow\tilde{\chi}_{1}^{0}\tilde{\chi}_{3}^{0})\times{\rm
Br}(\tilde{\chi}_{3}^{0}\rightarrow\tilde{\chi}_{1}^{0}+{\rm jets})<0.1$ pb
OPAL:2003wxm , as obtained from the experiments at LEP. Subsequently, we add
the flavor physics constraints on various observables, namely the branching
fractions of the processes $b\rightarrow s\gamma$,
$B_{s}\rightarrow\mu^{+}\mu^{-}$, and $B\rightarrow\tau\nu$, which are required
to satisfy $3.00\times 10^{-4}<{\rm Br}(b\rightarrow s\gamma)<3.64\times
10^{-4}$ HFLAV:2016hnz , $1.66\times 10^{-9}<{\rm
Br}(B_{s}\rightarrow\mu^{+}\mu^{-})<4.34\times 10^{-9}$ CMS:2014xfa , and
$0.78<({\rm Br}(B\rightarrow\tau\nu))_{\rm obs}/({\rm
Br}(B\rightarrow\tau\nu))_{\rm SM}<1.78$ Belle:2010xzn , respectively. We have
used MicrOMEGAS 5.2.13 Belanger:2004yn ; Belanger:2006is ; Belanger:2008sj ;
Belanger:2010gh ; Belanger:2013oya ; Belanger:2020gnr to calculate both the
LEP and flavor physics observables. Constraints from flavor physics push
$M_{A}$ above $\sim 800$ GeV, irrespective of $\tan\beta$.
On the remaining parameter-space points, we apply the limits from the signal
strength measurements of the SM Higgs boson implemented in HiggsSignals 2.6.2
Bechtle:2013xfa ; Stal:2013hwa ; Bechtle:2014ewa , as well as limits from
searches for heavy Higgs bosons in various final-state channels at
collider experiments such as ATLAS and CMS, using the HiggsBounds 5.10.0
Bechtle:2008jh ; Bechtle:2011sb ; Bechtle:2012lvg ; Bechtle:2013wla ;
Bechtle:2015pma package. The recent search for heavy Higgs bosons decaying to
$\tau$ leptons at ATLAS ATLAS:2020zms excludes a large part of the
high-$\tan\beta$ region for $M_{A}\lesssim 1$ TeV. The parameter space also has to
satisfy the recent limit on the branching fraction of the SM Higgs boson
decaying into invisible particles. We apply the strongest limit, which comes
from the recent search for invisible decays of the Higgs boson produced in
vector-boson fusion (VBF), using an integrated luminosity of 139
fb-1 at ATLAS ATLAS:2022yvh , which restricts
Br($h\rightarrow$ invisible) $<0.145$. For simplicity, we hereafter refer to all
Higgs-related constraints collectively as the “Higgs constraints”: the SM-like
Higgs mass constraint and the constraints from HiggsSignals 2.6.2, HiggsBounds
5.10.0, and the invisible decay of the SM-like Higgs boson.
The lightest supersymmetric particle (LSP), $\tilde{\chi}_{1}^{0}$, is a
viable DM candidate in the MSSM, produced via thermal freeze-out in the
early Universe. In standard cosmology, we require the relic density of the
LSP ($\Omega_{\rm LSP}$) to equal the observed DM relic density
measured by the PLANCK collaboration, $\Omega^{\rm obs}_{\rm DM}h^{2}=0.120\pm
0.001$ Aghanim:2018eyx , which at the $2\sigma$ level spans 0.118-0.122.
Relaxing the requirement that the neutralino LSP forms 100% of
the observed DM relic, owing to the possibility of multicomponent DM, we
modify the relic density constraint to $\Omega_{\rm LSP}\lesssim 0.122$. We use
MicrOMEGAS 5.2.13 to compute the relic density of
$\tilde{\chi}_{1}^{0}$.
In addition to the relic density constraint, we need to take into account the
results of the current DM DD experiments. These experiments
constrain the spin-dependent DM-neutron (SDn) and DM-proton (SDp) as well as
the spin-independent direct-detection cross-sections of the lightest
neutralino ($\tilde{\chi}_{1}^{0}$), the DM candidate, as a function
of its mass. We use MicrOMEGAS 5.2.13 to compute these cross-sections for the
LSP and then compare them with the 90% confidence level (CL) upper limits
quoted by the PICO-60 (SDp PICO:2019vsc ), XENON-1T (SI XENON:2018voc and SDn
XENON:2019rxp ), and PandaX-4T (SI PandaX-4T:2021bab ) experiments. The DD
limits are placed assuming that a single DM candidate constitutes the entire
relic. Therefore, if the neutralino DM is underabundant, i.e., $\Omega_{\rm
LSP}<0.122$, then the DD limits are applied on the scaled cross-sections,
where the scaling factor is as follows:
$\xi=\frac{\Omega_{\rm LSP}}{0.122}$ (1)
Being underabundant, the $\tilde{\chi}_{1}^{0}$ component of DM might have
evaded the DD experiments; the DD limits on the cross-section of the
lightest neutralino are therefore weakened in this scenario.
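The rescaling of Eq. (1) can be written compactly (an illustrative sketch; the function names are ours):

```python
def xi(omega_lsp, omega_max=0.122):
    """Scaling factor of Eq. (1) for an underabundant LSP, capped at 1
    for points saturating the observed relic density."""
    return min(omega_lsp / omega_max, 1.0)

def scaled_sigma(sigma, omega_lsp):
    """Scaled cross-section (SI or SD) to be compared with the
    experimental 90% CL upper limit."""
    return xi(omega_lsp) * sigma
```

A point with, e.g., half the observed relic density therefore faces DD limits weakened by a factor of two.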
Moreover, we need to consider the results of direct searches for charginos and
neutralinos at the ATLAS and CMS experiments of the LHC. We use the SModelS
2.2.0 Kraml:2013mwa ; Ambrogi:2017neo ; Dutta:2018ioj ; Heisig:2018kfq ;
Ambrogi:2018ujg ; Khosa:2020zar ; Alguero:2020grj ; Alguero:2021dig package
to implement the electroweakino search constraints on our scanned parameter
space. This version of SModelS includes implementation of results from the
recent search for electroweakinos in the leptonic final states at CMS
CMS:2020bfa and ATLAS ATLAS:2021moa and in the hadronic final states at
ATLAS ATLAS:2021yqv , all of which play significant roles in excluding a large
range of $m_{\tilde{\chi}_{2}^{0}}$, especially with the ATLAS analysis
extending the sensitivity to high masses with the hadronic final states.
Figure 1: Scaled SI DM-nucleon cross-section ($\sigma_{SI}\times\xi$) for
$\mu>0$ as a function of the mass of the LSP neutralino DM in the region of
parameter space satisfying LEP, flavor, Higgs, relic density and direct
detection (DD) constraints (grey), along with the regions surviving the present
electroweakino searches and the invisible branching fraction of the Higgs
boson, with $m_{\tilde{\chi}_{2}^{0}}$ in the colorbar.
We apply the constraints on our scanned parameter space in two steps: first
“Set A”, with constraints from LEP, flavor, the Higgs constraints, relic
density, and the DD experiments XENON-1T, PICO-60, and PandaX-4T; then
“Set B”, with the electroweakino constraints from the LHC. Fig. 1 shows the scaled ($\xi$ as
defined in Eq. 1) SI DD cross-sections for $\mu>0$ of the allowed parameter
space after applying all the constraints from “Set A” in grey, and then after
adding constraints from “Set B” in colored points, with the colorbar showing
the mass of $\tilde{\chi}_{2}^{0}$. We observe that the recent electroweakino
constraints play a crucial role in completely excluding the $Z$-funnel
region for positive $\mu$. However, the colored points in the $h$-funnel region
indicate that the electroweakino searches, as implemented in the SModelS
framework, still allow regions of the parameter space where
$\tilde{\chi}_{2}^{0}$ can be very light ($\sim$ 146-158 GeV) or heavy ($\sim$
855-1330 GeV). Once we overlay the recent 90% CL upper limits on $\sigma_{SI}$
from the LZ experiment Aalbers:2022fxq and the projected limits from the
XENON-nT experiment on Fig. 1, we find that the current LZ result is strong
enough to exclude the entire allowed parameter region, even in the $h$-funnel,
which survived the present-day electroweakino constraints.
We point out a caveat: the strong LZ bounds can be evaded if the relic-density
condition is further relaxed, since the relevant couplings can then be small
enough to satisfy the stringent DD constraint. For instance, if the relic
density were theoretically overestimated by, say, 20%, we should actually
compare 80% of the $\Omega_{\rm LSP}$ value from MicrOMEGAS with 0.122,
which leaves a narrow region of parameter space allowed even by the LZ limits.
However, these points lie just below the current limit and should be easily
accessible to the full exposure of the LZ experiment. Another way to relax the
upper limit on the relic density of the LSP DM can be provided in scenarios of
non-standard cosmology since the observed relic can be satisfied due to the
presence of other physical states which decay later in the Universe, thus,
increasing the entropy density of the Universe. We observe a
linear relationship between the factor by which the relic density is relaxed in
the non-standard cosmological model and the improvement in the DD experimental
limit needed to fully probe that scenario: e.g., completely probing a
non-standard-cosmology scenario in which the relic density is relaxed by a
factor of 5 requires a five-fold improvement over the current LZ limit. We end
up with heavy higgsinos in both these cases, and the electroweakino searches
further push them beyond 850 GeV.
Figure 2: Scaled SD DM-neutron cross-section ($\sigma_{SDn}\times\xi$) (left)
and scaled SI DM-nucleon cross-section ($\sigma_{SI}\times\xi$) (right) for
$\mu<0$ as a function of the mass of the LSP neutralino DM in the region of
parameter space satisfying LEP, flavor, Higgs, relic density, and direct
detection (DD) constraints (grey), along with the regions surviving the present
electroweakino searches and the invisible branching fraction of the Higgs
boson, with $m_{\tilde{\chi}_{2}^{0}}$ in the colorbar.
Let us now investigate the negative-$\mu$ scenario. Fig. 2
shows the analogous plots of the scaled SDn (left) and SI (right) DD
cross-sections for $\mu<0$ over the allowed parameter space, with colors as
in Fig. 1. From the left panel of Fig. 2, we observe
that parameter regions remain in both the $Z$- and $h$-funnels which
satisfy all the constraints from “Set A” and “Set B”, where the latter
restricts $m_{\tilde{\chi}_{2}^{0}}$ to either very small values ($\sim$
130-142 GeV and $\sim$ 147-157 GeV in the $Z$- and $h$-funnel, respectively) or
high values ($\sim$ 856-1385 GeV in the $h$-funnel). The right panel shows the
allowed parameter space in the $m_{\tilde{\chi}_{1}^{0}}$-$\sigma_{\rm SI}$
plane, and we observe that the recent LZ result Aalbers:2022fxq , based on
an exposure of 60 days, excludes a large region in the
$h$-funnel, leaving behind a narrow marginally allowed region where
$m_{\tilde{\chi}_{2}^{0}}$ is very light ($\sim$ 149-155 GeV), which we can
expect to be probed in the near future by the LZ experiment with its full
1000-day exposure. On the other hand, the $Z$-funnel is not much affected by the LZ
limits. The projected upper limit on $\sigma_{SDn}$ from the future XENON-nT
experiment (left panel of Fig. 2) shows that it can probe the parameter space
with light $\tilde{\chi}_{2}^{0}$ in both the $Z$ and $h$-funnel regions.
Figure 3: Left: Allowed parameter space for $\mu<0$ after satisfying the LEP,
flavor, and Higgs constraints, relic density, and DM DD constraints from the
XENON-1T, PICO-60, and PandaX-4T experiments (“Set A”, yellow circles),
overlaid with additional constraints from electroweakino searches at the LHC
(“Set B”, light green circles) and the recent constraint on the SI DD
cross-section from the LZ experiment (dark green stars) in the
$m_{\tilde{\chi}_{1}^{0}}$-$m_{\tilde{\chi}_{2}^{0}}$ plane; Right: $R$-values
from SModelS for the allowed parameter space.
The left plot of Fig. 3 shows the parameter space for $\mu<0$ in the
$m_{\tilde{\chi}_{1}^{0}}$-$m_{\tilde{\chi}_{2}^{0}}$ plane, where we can
observe that although the electroweakino searches allow heavy higgsinos in the
$h$-funnel, the very recent LZ DD limit restricts the allowed parameter space
to only light higgsinos, which are particularly interesting to probe at
collider experiments. The right plot of Fig. 3 shows the SModelS $R$-values in
the colorbar, varying from 0 to 1, for the parameter space allowed by all the
current constraints, giving the reader an idea of how far these points are
from the presently available limits from the LHC experiments: a smaller
$R$-value indicates that the parameter-space point lies far from the
current limit, whereas an $R$-value close to 1 indicates that it lies just on
the border of the limit. The allowed points evade the ATLAS searches for
charginos and neutralinos in the leptonic final states at $\sqrt{s}=8$ TeV
with 20.3 fb-1 of data ATLAS:2014zve ; ATLAS:2014ikz and at $\sqrt{s}=13$ TeV
with 139 fb-1 of data ATLAS:2021moa , with some of them having very small
$R$-values.
These regions with light charginos and neutralinos, which evade the
present constraints due to off-shell $Z$ or $h$ bosons, will be very important
for Run-3. To assess their prospects, in this work we perform an analysis of
the low-mass higgsino-like electroweakinos in the leptonic final state at
$\sqrt{s}=14$ TeV using the XGBOOST framework. We consider the
process
process
$pp\rightarrow\tilde{\chi}_{1}^{\pm}\tilde{\chi}_{2}^{0}/\tilde{\chi}_{1}^{\pm}\tilde{\chi}_{3}^{0},~{}\tilde{\chi}_{1}^{\pm}\rightarrow
W^{\pm}\tilde{\chi}_{1}^{0},~{}\tilde{\chi}_{2}^{0}/\tilde{\chi}_{3}^{0}\rightarrow
f\bar{f}\tilde{\chi}_{1}^{0}$ with $m_{\tilde{\chi}_{1}^{\pm}}=142.4$ GeV,
$m_{\tilde{\chi}_{2}^{0}}=149.7$ GeV, $m_{\tilde{\chi}_{3}^{0}}=151.9$ GeV,
and $m_{\tilde{\chi}_{1}^{0}}=61.2$ GeV, where $f$ is a SM fermion. We study 11
background processes: $lll\nu$ ($l\equiv e,\mu,\tau$),
$ZZ$, $t\bar{t}$, $VVV$, $Wh$, $Zh$, ggF and VBF production of $h$ with
$h\rightarrow ZZ^{*}$, $t\bar{t}h$, $t\bar{t}W$, and $t\bar{t}Z$.
We restrict ourselves to the leptonic final state, which is cleaner for a light
benchmark such as ours. We analyze the $3l+\rm E{\\!\\!\\!/}_{T}$ final state,
requiring exactly three leptons satisfying $p_{T}>25,25,20$ GeV and
$|\eta|<2.4$, and we veto $b$-jets with $p_{T}>30$ GeV and $|\eta|<2.5$. Since
our signal benchmark has no on-shell $Z$-boson, we also veto events in which
the invariant mass of a pair of same-flavor opposite-sign (SFOS) leptons lies
within a 10 GeV window of $m_{Z}=91.2$ GeV. After these preselections, we train
our signal and background samples using XGBOOST
(https://xgboost.readthedocs.io/en/stable/) with a set of 21 variables: the
transverse momenta ($p_{T}$) of the three
leptons, transverse mass ($M_{T}$) and contransverse mass ($M_{CT}$) of each
of the three leptons with the $\rm E{\\!\\!\\!/}_{T}$, minimum and maximum
values of $\Delta R$ between opposite sign lepton pairs along with their
$\Delta\eta$ values, invariant mass of the opposite sign lepton pairs with
minimum and maximum $\Delta R$, missing transverse momentum, number of jets in
the event with the $p_{T}$ of the two leading jets, scalar sum of $p_{T}$ of
all the jets in the event ($H_{T}$), and the invariant mass of the three
leptons. We combine the backgrounds, weighted according to their cross-
sections, and use unit weight for the signal. (The hyperparameters of the
XGBOOST model are: ‘objective’:‘multi:softprob’,
‘colsample_bytree’:0.3, ‘learning_rate’:0.1, ‘num_class’:12, ‘max_depth’:7,
‘alpha’:5, ‘eval_metric’:‘mlogloss’, ‘num_round’:1000,
‘early_stopping_rounds’:3.) After training and validating the model, we use
it to discriminate the signal benchmark from each of the background classes by
computing the significance of observing the signal over the background events.
At a 14 TeV collider with 137 fb-1 of integrated luminosity, we expect
310 signal events ($N_{S}$) and a total of 331 background events
($N_{B}$) for a threshold of 0.9 on the XGBOOST output, which, upon including a
20% (50%) systematic uncertainty, translates to a significance (using the
formula in Ref. Adhikary:2020cli ) of 3.6 (1.5). The result thus depends
sensitively on the systematic uncertainty, which might be dominant for
light electroweakinos. These might either already be ruled out by Run-2
data in analyses that are not yet published and implemented in SModelS, or
they can be probed in Run-3 of the LHC, provided the systematic uncertainties
are brought under control.
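The quoted significances can be reproduced with the standard Asimov formula including a fractional background uncertainty, which we assume is the formula used in Ref. Adhikary:2020cli (an illustrative sketch; the function name is ours):

```python
import math

def significance(n_s, n_b, sys_frac):
    """Median significance of n_s signal events over n_b background events,
    with an absolute background uncertainty sigma_b = sys_frac * n_b
    (Asimov formula with a background uncertainty)."""
    sb2 = (sys_frac * n_b) ** 2  # background variance
    n = n_s + n_b
    t1 = n * math.log(n * (n_b + sb2) / (n_b ** 2 + n * sb2))
    t2 = (n_b ** 2 / sb2) * math.log(1.0 + sb2 * n_s / (n_b * (n_b + sb2)))
    return math.sqrt(2.0 * (t1 - t2))

# Numbers quoted in the text: 310 signal and 331 background events.
print(round(significance(310, 331, 0.20), 1))  # 3.6
print(round(significance(310, 331, 0.50), 1))  # 1.5
```

The steep drop between the two systematic assumptions illustrates how background control drives the Run-3 reach for this benchmark.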
Future lepton colliders like the ILC and CEPC will be crucial for precision
measurements of the Higgs boson. The projected upper limit on the invisible
branching fraction of the Higgs boson is 0.4% at the ILC Asner:2013psa and
0.3% at the CEPC An:2018dwb . Although these can probe a significant part of
the allowed parameter space in the $\mu<0$ case, regions with
Br($h\rightarrow$ invisible) $<0.003$ remain in both the $Z$- and $h$-funnels.
In the $\mu<0$ case, the partial decay width of the $Z$ boson to
$\tilde{\chi}_{1}^{0}$ pairs ($\Gamma_{\rm inv}^{\rm new}$) is always below 0.1
MeV in the allowed parameter region that we obtain. Therefore, we do not
expect the Giga-$Z$ option of the ILC, which is expected to offer a modest
improvement over LEP Carena:2003aj , to be sensitive to this region.
It is interesting to note that if future DD experiments discover a light DM
candidate in the $Z$-funnel, it would most likely indicate a negative value of
$\mu$. Although we have seen in the present study that the $\mu>0$ scenario is
in severe tension with the experimental results, we have also discussed some
caveats by which it can evade these constraints. In the $h$-funnel region, one
major difference between the $\mu>0$ and $\mu<0$ cases is that in the former
the experimental limits, together with the relaxed relic-density condition,
allow the electroweakinos to be heavy (above a TeV), unlike the much lighter
electroweakinos (around 150 GeV) obtained for the latter. Thus, an observation
of a DM signal in the $h$-funnel at DD experiments would require an additional
observation of a signal at collider experiments to determine the sign of
the $\mu$ parameter. Further, an observation of heavy higgsinos at collider
experiments might even be a harbinger of non-standard cosmology.
## 3 Conclusion
In summary, this work shows that current experiments, especially the
recent results from the electroweakino searches at the LHC and the LZ dark
matter DD experiment, have severely constrained the positive-$\mu$ scenario,
and have squeezed the parameter space to light electroweakinos in the
negative-$\mu$ scenario, for a light neutralino DM in the MSSM. Such light
higgsinos are also motivated by naturalness arguments. Run-3 of the LHC
should target these low-mass electroweakinos with dedicated analyses to
close this narrow gap. The experimental collaborations should look for any
loopholes in the previous analyses of light electroweakino searches
and design analyses to cover them. They should also exhaustively explore the
possibility of additional light, nearly degenerate SUSY particles, which might
lead to signatures that are difficult to observe at collider experiments. To
conclude, we are at present at a very exciting juncture, where the experiments
lined up might exclude the possibility of a light neutralino DM in the MSSM
altogether, or we might be very close to observing the first hints of new
physics.
## Acknowledgement
B.B. and R.S. thank Prabhat Solanki and Camellia Bose for useful discussions.
R.K.B. thanks the U.S. Department of Energy for the financial support, under
grant number DE-SC0016013.
## References
* (1) E. Gildener and S. Weinberg, “Symmetry Breaking and Scalar Bosons,” Phys. Rev. D, vol. 13, p. 3333, 1976.
* (2) L. Susskind, “Dynamics of spontaneous symmetry breaking in the weinberg-salam theory,” Phys. Rev. D, vol. 20, pp. 2619–2625, Nov 1979.
* (3) K. Griest and H. E. Haber, “Invisible decays of higgs bosons in supersymmetric models,” Phys. Rev. D, vol. 37, pp. 719–728, Feb 1988.
* (4) A. Djouadi, P. Janot, J. Kalinowski, and P. M. Zerwas, “SUSY decays of Higgs particles,” Phys. Lett. B, vol. 376, pp. 220–226, 1996.
* (5) G. Belanger, F. Boudjema, F. Donato, R. Godbole, and S. Rosier-Lees, “SUSY Higgs at the LHC: Effects of light charginos and neutralinos,” Nucl. Phys. B, vol. 581, pp. 3–33, 2000.
* (6) G. Belanger, F. Boudjema, A. Cottrant, R. M. Godbole, and A. Semenov, “The MSSM invisible Higgs in the light of dark matter and g-2,” Phys. Lett. B, vol. 519, pp. 93–102, 2001.
* (7) G. Belanger, F. Boudjema, A. Cottrant, A. Pukhov, and S. Rosier-Lees, “Lower limit on the neutralino mass in the general MSSM,” JHEP, vol. 03, p. 012, 2004.
* (8) L. Calibbi, T. Ota, and Y. Takanishi, “Light Neutralino in the MSSM: a playground for dark matter, flavor physics and collider experiments,” JHEP, vol. 07, p. 013, 2011.
* (9) H. K. Dreiner, J. S. Kim, and O. Lebedev, “First LHC Constraints on Neutralinos,” Phys. Lett. B, vol. 715, pp. 199–202, 2012.
* (10) B. Ananthanarayan, J. Lahiri, P. N. Pandita, and M. Patra, “Invisible decays of the lightest Higgs boson in supersymmetric models,” Phys. Rev. D, vol. 87, no. 11, p. 115021, 2013.
* (11) L. Calibbi, J. M. Lindert, T. Ota, and Y. Takanishi, “Cornering light Neutralino Dark Matter at the LHC,” JHEP, vol. 10, p. 132, 2013.
* (12) G. Bélanger, G. Drieu La Rochelle, B. Dumont, R. M. Godbole, S. Kraml, and S. Kulkarni, “LHC constraints on light neutralino dark matter in the MSSM,” Phys. Lett. B, vol. 726, pp. 773–780, 2013.
* (13) T. Han, Z. Liu, and S. Su, “Light Neutralino Dark Matter: Direct/Indirect Detection and Collider Searches,” JHEP, vol. 08, p. 093, 2014.
* (14) G. Belanger, D. Ghosh, R. Godbole, and S. Kulkarni, “Light stop in the MSSM after LHC Run 1,” JHEP, vol. 09, p. 214, 2015.
* (15) K. Hamaguchi and K. Ishikawa, “Prospects for Higgs- and Z-resonant Neutralino Dark Matter,” Phys. Rev. D, vol. 93, no. 5, p. 055009, 2016.
* (16) R. K. Barman, G. Belanger, B. Bhattacherjee, R. Godbole, G. Mendiratta, and D. Sengupta, “Invisible decay of the Higgs boson in the context of a thermal and nonthermal relic in MSSM,” Phys. Rev., vol. D95, no. 9, p. 095018, 2017.
* (17) G. Pozzo and Y. Zhang, “Constraining resonant dark matter with combined LHC electroweakino searches,” Phys. Lett. B, vol. 789, pp. 582–591, 2019.
* (18) K. Wang and J. Zhu, “Funnel annihilations of light dark matter and the invisible decay of the Higgs boson,” Phys. Rev. D, vol. 101, no. 9, p. 095028, 2020.
* (19) R. Kumar Barman, G. Belanger, and R. M. Godbole, “Status of low mass LSP in SUSY,” Eur. Phys. J. ST, vol. 229, no. 21, pp. 3159–3185, 2020.
* (20) M. Van Beekveld, W. Beenakker, M. Schutten, and J. De Wit, “Dark matter, fine-tuning and $(g-2)_{\mu}$ in the pMSSM,” SciPost Phys., vol. 11, no. 3, p. 049, 2021.
* (21) G. Aad et al., “Search for heavy Higgs bosons decaying into two tau leptons with the ATLAS detector using $pp$ collisions at $\sqrt{s}=13$ TeV,” Phys. Rev. Lett., vol. 125, no. 5, p. 051801, 2020.
* (22) A. M. Sirunyan et al., “Search for supersymmetry in final states with two oppositely charged same-flavor leptons and missing transverse momentum in proton-proton collisions at $\sqrt{s}=$ 13 TeV,” JHEP, vol. 04, p. 123, 2021.
* (23) G. Aad et al., “Search for chargino–neutralino pair production in final states with three leptons and missing transverse momentum in $\sqrt{s}=13$ TeV pp collisions with the ATLAS detector,” Eur. Phys. J. C, vol. 81, no. 12, p. 1118, 2021.
* (24) G. Aad et al., “Search for charginos and neutralinos in final states with two boosted hadronically decaying bosons and missing transverse momentum in $pp$ collisions at $\sqrt{s}$ = 13 TeV with the ATLAS detector,” Phys. Rev. D, vol. 104, no. 11, p. 112010, 2021.
* (25) “Search for electroweak production of charginos and neutralinos at $\sqrt{s}$ =13 TeV in final states containing hadronic decays of WW, WZ, or WH and missing transverse momentum,” 5 2022.
* (26) G. Aad et al., “Search for invisible Higgs-boson decays in events with vector-boson fusion signatures using 139 $\text{fb}^{-1}$ of proton-proton data recorded by the ATLAS experiment,” 2 2022.
* (27) E. Aprile et al., “Dark Matter Search Results from a One Ton-Year Exposure of XENON1T,” Phys. Rev. Lett., vol. 121, no. 11, p. 111302, 2018.
* (28) E. Aprile et al., “Constraining the spin-dependent WIMP-nucleon cross sections with XENON1T,” Phys. Rev. Lett., vol. 122, no. 14, p. 141301, 2019.
* (29) C. Amole et al., “Dark Matter Search Results from the Complete Exposure of the PICO-60 C3F8 Bubble Chamber,” Phys. Rev. D, vol. 100, no. 2, p. 022001, 2019.
* (30) Y. Meng et al., “Dark Matter Search Results from the PandaX-4T Commissioning Run,” Phys. Rev. Lett., vol. 127, no. 26, p. 261802, 2021.
* (31) J. Aalbers et al., “First Dark Matter Search Results from the LUX-ZEPLIN (LZ) Experiment,” 7 2022.
* (32) B. Abi et al., “Measurement of the Positive Muon Anomalous Magnetic Moment to 0.46 ppm,” Phys. Rev. Lett., vol. 126, no. 14, p. 141801, 2021.
* (33) S. Heinemeyer, W. Hollik, and G. Weiglein, “FeynHiggs: A Program for the calculation of the masses of the neutral CP even Higgs bosons in the MSSM,” Comput. Phys. Commun., vol. 124, pp. 76–89, 2000.
* (34) S. Heinemeyer, W. Hollik, and G. Weiglein, “The Masses of the neutral CP - even Higgs bosons in the MSSM: Accurate analysis at the two loop level,” Eur. Phys. J. C, vol. 9, pp. 343–366, 1999.
* (35) G. Degrassi, S. Heinemeyer, W. Hollik, P. Slavich, and G. Weiglein, “Towards high precision predictions for the MSSM Higgs sector,” Eur. Phys. J. C, vol. 28, pp. 133–143, 2003.
* (36) M. Frank, T. Hahn, S. Heinemeyer, W. Hollik, H. Rzehak, and G. Weiglein, “The Higgs Boson Masses and Mixings of the Complex MSSM in the Feynman-Diagrammatic Approach,” JHEP, vol. 02, p. 047, 2007.
* (37) T. Hahn, S. Heinemeyer, W. Hollik, H. Rzehak, and G. Weiglein, “High-Precision Predictions for the Light CP -Even Higgs Boson Mass of the Minimal Supersymmetric Standard Model,” Phys. Rev. Lett., vol. 112, no. 14, p. 141801, 2014.
* (38) H. Bahl and W. Hollik, “Precise prediction for the light MSSM Higgs boson mass combining effective field theory and fixed-order calculations,” Eur. Phys. J. C, vol. 76, no. 9, p. 499, 2016.
* (39) H. Bahl, S. Heinemeyer, W. Hollik, and G. Weiglein, “Reconciling EFT and hybrid calculations of the light MSSM Higgs-boson mass,” Eur. Phys. J. C, vol. 78, no. 1, p. 57, 2018.
* (40) H. Bahl, T. Hahn, S. Heinemeyer, W. Hollik, S. Paßehr, H. Rzehak, and G. Weiglein, “Precision calculations in the MSSM Higgs-boson sector with FeynHiggs 2.14,” Comput. Phys. Commun., vol. 249, p. 107099, 2020.
* (41) G. Aad et al., “Combined Measurement of the Higgs Boson Mass in $pp$ Collisions at $\sqrt{s}=7$ and 8 TeV with the ATLAS and CMS Experiments,” Phys. Rev. Lett., vol. 114, p. 191803, 2015.
* (42) J. E. Camargo-Molina, B. O’Leary, W. Porod, and F. Staub, “Stability of the CMSSM against sfermion VEVs,” JHEP, vol. 12, p. 103, 2013.
* (43) D. Chowdhury, R. M. Godbole, K. A. Mohan, and S. K. Vempati, “Charge and Color Breaking Constraints in MSSM after the Higgs Discovery at LHC,” JHEP, vol. 02, p. 110, 2014. [Erratum: JHEP 03, 149 (2018)].
* (44) N. Blinov and D. E. Morrissey, “Vacuum Stability and the MSSM Higgs Mass,” JHEP, vol. 03, p. 106, 2014.
* (45) S. Schael et al., “Precision electroweak measurements on the $Z$ resonance,” Phys. Rept., vol. 427, pp. 257–454, 2006.
* (46) G. Abbiendi et al., “Search for chargino and neutralino production at s**(1/2) = 192-GeV to 209 GeV at LEP,” Eur. Phys. J. C, vol. 35, pp. 1–20, 2004.
* (47) Y. Amhis et al., “Averages of $b$-hadron, $c$-hadron, and $\tau$-lepton properties as of summer 2016,” Eur. Phys. J. C, vol. 77, no. 12, p. 895, 2017.
* (48) V. Khachatryan et al., “Observation of the rare $B^{0}_{s}\to\mu^{+}\mu^{-}$ decay from the combined analysis of CMS and LHCb data,” Nature, vol. 522, pp. 68–72, 2015.
* (49) K. Hara et al., “Evidence for $B^{-}->\tau^{-}\bar{\nu}$ with a Semileptonic Tagging Method,” Phys. Rev. D, vol. 82, p. 071101, 2010.
* (50) G. Belanger, F. Boudjema, A. Pukhov, and A. Semenov, “micrOMEGAs: Version 1.3,” Comput. Phys. Commun., vol. 174, pp. 577–604, 2006.
* (51) G. Belanger, F. Boudjema, A. Pukhov, and A. Semenov, “MicrOMEGAs 2.0: A Program to calculate the relic density of dark matter in a generic model,” Comput. Phys. Commun., vol. 176, pp. 367–382, 2007.
* (52) G. Belanger, F. Boudjema, A. Pukhov, and A. Semenov, “Dark matter direct detection rate in a generic model with micrOMEGAs 2.2,” Comput. Phys. Commun., vol. 180, pp. 747–767, 2009.
* (53) G. Belanger, F. Boudjema, P. Brun, A. Pukhov, S. Rosier-Lees, P. Salati, and A. Semenov, “Indirect search for dark matter with micrOMEGAs2.4,” Comput. Phys. Commun., vol. 182, pp. 842–856, 2011.
* (54) G. Belanger, F. Boudjema, A. Pukhov, and A. Semenov, “micrOMEGAs$\\_$3: A program for calculating dark matter observables,” Comput. Phys. Commun., vol. 185, pp. 960–985, 2014.
* (55) G. Belanger, A. Mjallal, and A. Pukhov, “Recasting direct detection limits within micrOMEGAs and implication for non-standard Dark Matter scenarios,” Eur. Phys. J. C, vol. 81, no. 3, p. 239, 2021.
* (56) P. Bechtle, S. Heinemeyer, O. Stål, T. Stefaniak, and G. Weiglein, “$HiggsSignals$: Confronting arbitrary Higgs sectors with measurements at the Tevatron and the LHC,” Eur. Phys. J. C, vol. 74, no. 2, p. 2711, 2014\.
* (57) O. Stål and T. Stefaniak, “Constraining extended Higgs sectors with HiggsSignals,” PoS, vol. EPS-HEP2013, p. 314, 2013.
* (58) P. Bechtle, S. Heinemeyer, O. Stål, T. Stefaniak, and G. Weiglein, “Probing the Standard Model with Higgs signal rates from the Tevatron, the LHC and a future ILC,” JHEP, vol. 11, p. 039, 2014.
* (59) P. Bechtle, O. Brein, S. Heinemeyer, G. Weiglein, and K. E. Williams, “HiggsBounds: Confronting Arbitrary Higgs Sectors with Exclusion Bounds from LEP and the Tevatron,” Comput. Phys. Commun., vol. 181, pp. 138–167, 2010.
* (60) P. Bechtle, O. Brein, S. Heinemeyer, G. Weiglein, and K. E. Williams, “HiggsBounds 2.0.0: Confronting Neutral and Charged Higgs Sector Predictions with Exclusion Bounds from LEP and the Tevatron,” Comput. Phys. Commun., vol. 182, pp. 2605–2631, 2011.
* (61) P. Bechtle, O. Brein, S. Heinemeyer, O. Stal, T. Stefaniak, G. Weiglein, and K. Williams, “Recent Developments in HiggsBounds and a Preview of HiggsSignals,” PoS, vol. CHARGED2012, p. 024, 2012.
* (62) P. Bechtle, O. Brein, S. Heinemeyer, O. Stål, T. Stefaniak, G. Weiglein, and K. E. Williams, “$\mathsf{HiggsBounds}-4$: Improved Tests of Extended Higgs Sectors against Exclusion Bounds from LEP, the Tevatron and the LHC,” Eur. Phys. J. C, vol. 74, no. 3, p. 2693, 2014.
* (63) P. Bechtle, S. Heinemeyer, O. Stal, T. Stefaniak, and G. Weiglein, “Applying Exclusion Likelihoods from LHC Searches to Extended Higgs Sectors,” Eur. Phys. J. C, vol. 75, no. 9, p. 421, 2015.
* (64) N. Aghanim et al., “Planck 2018 results. VI. Cosmological parameters,” 2018.
* (65) S. Kraml, S. Kulkarni, U. Laa, A. Lessa, W. Magerl, D. Proschofsky-Spindler, and W. Waltenberger, “SModelS: a tool for interpreting simplified-model results from the LHC and its application to supersymmetry,” Eur. Phys. J. C, vol. 74, p. 2868, 2014.
* (66) F. Ambrogi, S. Kraml, S. Kulkarni, U. Laa, A. Lessa, V. Magerl, J. Sonneveld, M. Traub, and W. Waltenberger, “SModelS v1.1 user manual: Improving simplified model constraints with efficiency maps,” Comput. Phys. Commun., vol. 227, pp. 72–98, 2018.
* (67) J. Dutta, S. Kraml, A. Lessa, and W. Waltenberger, “SModelS extension with the CMS supersymmetry search results from Run 2,” LHEP, vol. 1, no. 1, pp. 5–12, 2018.
* (68) J. Heisig, S. Kraml, and A. Lessa, “Constraining new physics with searches for long-lived particles: Implementation into SModelS,” Phys. Lett. B, vol. 788, pp. 87–95, 2019.
* (69) F. Ambrogi et al., “SModelS v1.2: long-lived particles, combination of signal regions, and other novelties,” Comput. Phys. Commun., vol. 251, p. 106848, 2020.
* (70) C. K. Khosa, S. Kraml, A. Lessa, P. Neuhuber, and W. Waltenberger, “SModelS Database Update v1.2.3,” LHEP, vol. 2020, p. 158, 2020.
* (71) G. Alguero, S. Kraml, and W. Waltenberger, “A SModelS interface for pyhf likelihoods,” Comput. Phys. Commun., vol. 264, p. 107909, 2021.
* (72) G. Alguero, J. Heisig, C. Khosa, S. Kraml, S. Kulkarni, A. Lessa, H. Reyes-González, W. Waltenberger, and A. Wongel, “Constraining new physics with SModelS version 2,” 12 2021.
* (73) G. Aad et al., “Search for direct production of charginos, neutralinos and sleptons in final states with two leptons and missing transverse momentum in $pp$ collisions at $\sqrt{s}=$ 8 TeV with the ATLAS detector,” JHEP, vol. 05, p. 071, 2014.
* (74) G. Aad et al., “Search for direct production of charginos and neutralinos in events with three leptons and missing transverse momentum in $\sqrt{s}=$ 8TeV $pp$ collisions with the ATLAS detector,” JHEP, vol. 04, p. 169, 2014.
* (75) A. Adhikary, N. Chakrabarty, I. Chakraborty, and J. Lahiri, “Probing the $H^{\pm}W^{\mp}Z$ interaction at the high energy upgrade of the LHC,” Eur. Phys. J. C, vol. 81, no. 6, p. 554, 2021.
* (76) D. M. Asner et al., “ILC Higgs White Paper,” in Community Summer Study 2013: Snowmass on the Mississippi, 10 2013.
* (77) F. An et al., “Precision Higgs physics at the CEPC,” Chin. Phys. C, vol. 43, no. 4, p. 043002, 2019.
* (78) M. Carena, A. de Gouvea, A. Freitas, and M. Schmitt, “Invisible Z boson decays at e+ e- colliders,” Phys. Rev. D, vol. 68, p. 113007, 2003.
|
# Exact solvability and two-frequency Rabi oscillation in cavity-QED setup
with moving emitter
Mingzhu Weng Center for Quantum Sciences and School of Physics, Northeast
Normal University, Changchun 130024, China Zhihai Wang<EMAIL_ADDRESS>Center for Quantum Sciences and School of Physics, Northeast Normal
University, Changchun 130024, China
###### Abstract
In this paper, we investigate the energy spectrum and coherent dynamics of a
cavity-QED setup with a moving emitter, which is subject to a harmonic
potential. We find that the vibration of the emitter induces effective Kerr
and optomechanical interactions. Generalizing the Bogoliubov operators
approach, originally developed for the quantum Rabi model, to our cavity-
emitter-vibration system, we obtain the energy spectrum exactly. Furthermore,
we show that the dynamics of the system exhibits a two-frequency Rabi
oscillation, which we explain via the optomechanical-interaction-induced
quantum transitions between emitter-cavity dressed states. We hope that the
interaction between the cavity mode and a moving emitter will provide a
versatile platform to explore more exotic effects and potential applications
in cavity-QED scenarios.
## I Introduction
Light-matter interaction is a fundamental topic in modern physics, ranging
from quantum optics and quantum information processing to condensed matter
physics. A cavity is usually used to tailor the emission of an emitter,
leading to the Purcell effect purcell1946 , which is a central concept in the
field of cavity quantum electrodynamics (QED). In the strong coupling regime,
the cavity-QED setup can be described by the Jaynes-Cummings (JC) model JC1963
, and single- and multiple-photon quantum Rabi oscillations have been studied
broadly Agarwal1985 ; Brune1996 ; Garziano2015 .
In traditional investigations of cavity-QED and waveguide-QED systems, the
emitter is usually assumed to be static under the dipole approximation.
However, the vibrational degrees of freedom of quantum emitters have recently
received more and more attention. For example, in the waveguide-QED setup, the
waveguide-induced interaction between moving emitters has been studied in
depth for the cases in which the emitters move faster or slower than the
photons in the waveguide GC2017 ; ES2020 . The motion of the emitter also
leads to a recoil effect QL2013 ; DB2008 ; FD2016 , which manifests in a
modulated single-photon scattering line shape. In the cavity-QED setup, the
motion of the emitter likewise induces many interesting phenomena and
applications which are absent for static emitters, for example, the
oscillation collapse and revival of the atomic transition probability XG1995 ;
LX2000 , spatial decoherence LZ2005 ; LZ2010 ; LY2001 , motional $n$-phonon
bundle states YG2021 ; CS2014 , exotic photon statistics YZ2015 ; KM2015 , as
well as the dynamical Casimir effect AA2021 ; SS2009 ; OD2019 ; HW2019 ;
WQ2018 ; VM2018 ; SF2015 , just to name a few. On the other hand, in recent
cavity-QED experiments, the Rydberg atom is usually subject to a harmonic
potential generated by laser or magneto-optical techniques Anderson2011 ;
Tikman2016 ; Bounds2018 . It therefore naturally motivates us to investigate
the exact energy spectrum and dynamical evolution of a cavity-QED setup with a
moving emitter confined in a harmonic potential.
In this work, we focus on the quantum effect of the vibration of a two-level
emitter on the energy spectrum and Rabi oscillation of a cavity-QED setup. Our
model is similar to the trapped-ion system, which has been broadly studied in
pursuit of applications in quantum information processing FM2018 ; FM2012 ;
LD2018 ; LD2019 . Here, instead, we aim to find the exact energy diagram and
study the coherent dynamics of the system, in order to achieve a basic
understanding of the model. With the assistance of a unitary transformation,
we find the system can be effectively described by an emitter-optomechanical-
cavity Hamiltonian kippenberg2013 ; liu2014 with a negligible Kerr term. We
find that the effective Hamiltonian possesses the same mathematical structure
as the quantum Rabi model DB2011 , and borrow the Bogoliubov operators
approach QC2012 ; QH2011 ; QH2012 to obtain the exact energy spectrum. It
shows that the optomechanical interaction induces a sideband effect, and
within each sideband we observe a Rabi splitting which originates from the
emitter-cavity coupling. We also find that the effective optomechanical
interaction leads to a two-frequency Rabi oscillation, which we explain in the
dressed-state representation.
## II Model and Hamiltonian
Figure 1: Schematic diagram of the model: a single mode cavity couples to a
two-level moving emitter which is subject to a harmonic potential.
As schematically shown in Fig. 1, the system we consider is composed of a
single-mode cavity and a movable but spatially confined two-level emitter. The
emitter is characterized by its mass $M$ and the internal energy-level spacing
$\Omega$ between the ground state $|g\rangle$ and excited state $|e\rangle$.
The emitter is confined by a harmonic potential with oscillator frequency
$\omega$. Considering that the spatial motion (vibration) of the emitter is
along the $x$ axis, which is perpendicular to the wall of the cavity, the
Hamiltonian is given by XG1995 ; LX2000
$\displaystyle H$ $\displaystyle=$
$\displaystyle\frac{p^{2}}{2M}+\frac{1}{2}M\omega^{2}x^{2}+\hbar\omega_{a}a^{\dagger}a+\hbar\Omega|e\rangle\langle
e|$ (1) $\displaystyle+\hbar g(a^{\dagger}\sigma_{-}e^{-ikx}+{\rm H.c.}).$
Here, $x$ and $p$ are the emitter’s position and momentum operators,
$\omega_{a}=ck$ is the cavity frequency with $k$ being the photon wave vector,
and $c$ being the velocity of light.
$\sigma_{-}=(\sigma_{+})^{\dagger}=|g\rangle\langle e|$ is the Pauli operator
of the emitter, $a\,({a}^{\dagger})$ is the annihilation (creation) operator
of the cavity field. $g$ is the coupling strength between the emitter and
cavity. In the Hamiltonian Eq. (1), we have applied the rotating wave
approximation by considering $g\ll\\{\omega_{a},\Omega\\}$. It is convenient
to introduce the creation (annihilation) operator $b^{\dagger}\,(b)$, which
satisfies $x=\alpha(b^{\dagger}+b),p=i\hbar(b^{\dagger}-b)/(2\alpha)$
($\alpha=\sqrt{\hbar/2M\omega}$) and the Hamiltonian can be rewritten as
$\displaystyle H$ $\displaystyle=$ $\displaystyle\hbar\omega
b^{\dagger}b+\hbar\omega_{a}a^{\dagger}a+\hbar\Omega|e\rangle\langle e|$ (2)
$\displaystyle+\hbar g[a^{\dagger}\sigma_{-}e^{-ik\alpha(b^{\dagger}+b)}+{\rm
H.c.}]$
by neglecting the constant term.
The operator in the exponential term can be eliminated by performing a unitary
transformation $\tilde{H}=UHU^{\dagger}$ where
$U=e^{ik\alpha(b^{\dagger}+b)a^{\dagger}a}$, and it yields
$\tilde{H}=\tilde{H}_{1}+\tilde{H}_{2}$ with
$\displaystyle\tilde{H}_{1}$ $\displaystyle=$
$\displaystyle\hbar\chi(a^{\dagger}a)^{2}+\hbar\omega_{a}a^{\dagger}a+\hbar\Omega|e\rangle\langle
e|$ (3) $\displaystyle+\hbar g(a^{\dagger}\sigma_{-}+a\sigma_{+})+\hbar\omega
b^{\dagger}b,$ $\displaystyle\tilde{H}_{2}$ $\displaystyle=$ $\displaystyle
i\hbar\eta a^{\dagger}a(b-b^{\dagger}),$ (4)
where
$\chi=k^{2}\alpha^{2}\omega,\,\eta=k\alpha\omega.$ (5)
The vibrational motion of the emitter thus induces two effects. The first one
is the Kerr effect, as shown by the first term of $\tilde{H}_{1}$, with the
strength $\chi=k^{2}\alpha^{2}\omega=\hbar k^{2}/(2M)$, which is independent
of the oscillator frequency $\omega$. Physically speaking, the motion of the
emitter is described by the creation and annihilation of phonons in the
second-quantization representation, which is accompanied by the creation and
absorption of photons in the cavity, as shown by the emitter-photon
interaction Hamiltonian $\hbar
g[a^{\dagger}\sigma_{-}e^{-ik\alpha(b^{\dagger}+b)}+{\rm H.c.}]$. Therefore,
it naturally introduces a self-phase modulation to the photons in the cavity.
The other effect is an effective coupling between the vibrational degree of
freedom of the emitter and the cavity mode. As given by $\tilde{H}_{2}$, it is
actually an effective optomechanical interaction law1995 ; marquardt2014 with
strength $\eta=k\alpha\omega$, which depends on the parameters of both the
emitter and the harmonic potential. For a typical cavity-QED system with a
Rydberg atom, we take $\Omega=\omega_{a}=10^{5}$ GHz, $\omega=1\,{\rm
GHz},k=10^{7}\,{\rm m^{-1}},M=10^{-27}\,{\rm kg},g=100\,{\rm MHz}$. With these
parameters, we have $\chi=0.05g$ and $\eta=g/\sqrt{2}$.
The above cavity-QED model can be experimentally realized on the Rydberg-atom
platform, where such parameters are achievable Anderson2011 ; Tikman2016 ;
Bounds2018 . Furthermore, the trap of the atom can be realized by optical-
tweezer technologies, and the harmonic-trap frequency $\omega$ can reach
hundreds of MHz LTC2012 ; ZY2013 ; LH2017 . Within these parameters, the
strength of the Kerr effect is much weaker than that of the emitter-cavity
coupling, that is, $\chi\ll g$.
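As a quick sanity check, the quoted ratios $\chi\approx 0.05g$ and $\eta\approx g/\sqrt{2}$ follow directly from these numbers; a minimal sketch (values as in the text; treating all quoted frequencies as angular frequencies is an assumption of this sketch):

```python
import math

hbar = 1.0545718e-34  # J*s

# Parameters quoted in the text (Rydberg-atom cavity QED)
k = 1e7        # photon wave vector, m^-1
M = 1e-27      # emitter mass, kg
omega = 1e9    # trap frequency, rad/s (1 GHz)
g = 1e8        # emitter-cavity coupling, rad/s (100 MHz)

alpha = math.sqrt(hbar / (2 * M * omega))  # oscillator length scale

chi = k**2 * alpha**2 * omega   # Kerr strength = hbar*k^2/(2M)
eta = k * alpha * omega         # optomechanical coupling strength

print(chi / g)   # ~0.053, i.e. chi ≈ 0.05 g
print(eta / g)   # ~0.73,  i.e. eta ≈ g/sqrt(2) ≈ 0.71 g
```

Note that $\chi=\hbar k^{2}/(2M)$ indeed drops the $\omega$ dependence, while $\eta\propto\sqrt{\omega}$ grows with the trap frequency.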
Figure 2: The energy spectrum diagram for $m=0,1$. The solid lines are the
eigenstates of $\tilde{H}_{1}$ and the dashed lines represent the energy-level
transitions induced by $\tilde{H}_{2}$.
The Hamiltonian $\tilde{H}_{1}$ is completely solvable due to the conservation
of the excitation number. The eigenvalues are
$\displaystyle E_{\pm}^{(m,n)}$ $\displaystyle=$
$\displaystyle(m^{2}+m+\frac{1}{2})\hbar\chi+(m+\frac{1}{2})\hbar\omega_{a}+\frac{1}{2}\hbar\Omega+n\hbar\omega$
(6)
$\displaystyle\pm\hbar\sqrt{[(m+\frac{1}{2})\chi+\frac{1}{2}\omega_{a}-\frac{1}{2}\Omega]^{2}+(m+1)g^{2}}$
and the corresponding eigenfunctions can be obtained as
$\displaystyle|\psi_{+}^{(m,n)}\rangle$ $\displaystyle=$
$\displaystyle\cos\frac{\theta}{2}|m,n,e\rangle+\sin\frac{\theta}{2}|m+1,n,g\rangle,$
(7) $\displaystyle|\psi_{-}^{(m,n)}\rangle$ $\displaystyle=$
$\displaystyle-\sin\frac{\theta}{2}|m,n,e\rangle+\cos\frac{\theta}{2}|m+1,n,g\rangle,$
(8)
where $\tan\theta=2\sqrt{m+1}g/[\Omega-\omega_{a}-(2m+1)\chi]$, and
$|m,n,\sigma\rangle:=|m\rangle_{c}\otimes|n\rangle_{v}\otimes|\sigma\rangle_{a}$
($|\sigma\rangle=|e\rangle,|g\rangle$) represents the state in which the
cavity mode (vibrational mode) is in the bosonic Fock state with $m$ ($n$)
excitations while the emitter is in the state $|\sigma\rangle$. In Fig. 2, we
illustrate the energy diagram for $m=0,1$. Here, the black solid lines are the
eigenstates of $\tilde{H}_{1}$ and the blue dashed lines represent the energy-
level transitions between $|\psi_{\pm}^{(m,n)}\rangle$ and
$|\psi_{\pm}^{(m,n\pm 1)}\rangle$, which are induced by $\tilde{H}_{2}$. At
first glance, the full Hamiltonian seems solvable only by perturbation theory
in the presence of the $\tilde{H}_{2}$-induced transitions. However, thanks to
the conservation of the excitation number shared by the internal degree of
freedom of the emitter and the photons in the cavity, that is,
$[a^{\dagger}a+|e\rangle\langle e|,H]=0$, the whole system is still exactly
solvable, and the exact energy spectrum can be obtained as we discuss in what
follows.
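The closed-form eigenvalues $E_{\pm}^{(m,n)}$ of $\tilde{H}_{1}$ can be cross-checked by direct diagonalization in each two-dimensional subspace $\{|m,n,e\rangle,|m+1,n,g\rangle\}$; a minimal sketch ($\hbar=1$; the parameter values are illustrative, not those of the figures):

```python
import numpy as np

def E_analytic(m, n, chi, wa, Omega, w, g):
    """Closed-form E_pm^{(m,n)} (hbar = 1)."""
    base = (m**2 + m + 0.5) * chi + (m + 0.5) * wa + 0.5 * Omega + n * w
    split = np.sqrt(((m + 0.5) * chi + 0.5 * wa - 0.5 * Omega) ** 2
                    + (m + 1) * g**2)
    return base - split, base + split

def E_numeric(m, n, chi, wa, Omega, w, g):
    """Diagonalize H1 restricted to the subspace {|m,n,e>, |m+1,n,g>}."""
    Hee = chi * m**2 + wa * m + Omega + w * n      # <m,n,e|H1|m,n,e>
    Hgg = chi * (m + 1)**2 + wa * (m + 1) + w * n  # <m+1,n,g|H1|m+1,n,g>
    off = np.sqrt(m + 1) * g                       # sqrt(m+1) g coupling
    return np.sort(np.linalg.eigvalsh([[Hee, off], [off, Hgg]]))

pars = dict(chi=0.05, wa=1000.0, Omega=1000.0, w=10.0, g=1.0)  # illustrative
print(E_analytic(0, 0, **pars))
print(E_numeric(0, 0, **pars))  # agrees with the closed form
```

The average of the two diagonal entries reproduces the first line of Eq. (6), and half their difference reproduces the detuning term under the square root.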
## III The solution of the Hamiltonian
Now, we derive the exact energy spectrum of the Hamiltonian $\tilde{H}$.
First, we introduce $\tilde{b}=ib$, $\tilde{b^{\dagger}}=-ib^{\dagger}$, then
the Hamiltonian $\tilde{H}$ becomes
$\displaystyle\tilde{H}=$
$\displaystyle\hbar\omega\tilde{b^{\dagger}}\tilde{b}+\hbar\omega_{a}a^{\dagger}a+\hbar\Omega|e\rangle\langle
e|+\hbar g(a^{\dagger}\sigma_{-}+a\sigma_{+})$ (9) $\displaystyle+\hbar
k\alpha\omega a^{\dagger}a(\tilde{b}+\tilde{b^{\dagger}})+\hbar
k^{2}\alpha^{2}\omega(a^{\dagger}a)^{2}.$
In what follows, we will still use the symbol $b$ to represent $\tilde{b}$ for
the sake of simplicity since it does not affect the final result. In the
cavity-emitter basis $\\{|m+1,g\rangle,\,|m,e\rangle\\}$, the Hamiltonian can
be expressed as
$\tilde{H}=\left(\begin{array}[]{cc}H_{11}&\hbar\sqrt{m+1}g\\\
\hbar\sqrt{m+1}g&H_{22}\end{array}\right),$ (10)
where
$\displaystyle H_{11}$ $\displaystyle=$ $\displaystyle\hbar\omega
b^{\dagger}b+(m+1)\hbar\omega_{a}$ $\displaystyle+(m+1)\hbar
k\alpha\omega(b+b^{\dagger})+(m+1)^{2}\hbar k^{2}\alpha^{2}\omega,$
$\displaystyle H_{22}$ $\displaystyle=$ $\displaystyle\hbar\omega
b^{\dagger}b+m\hbar\omega_{a}+\hbar\Omega$ (12) $\displaystyle+m\hbar
k\alpha\omega(b+b^{\dagger})+m^{2}\hbar k^{2}\alpha^{2}\omega.$
The Hamiltonian has the same mathematical structure as that of the quantum
Rabi model [see Eq. (2) in Ref. QC2012 ]. It motivates us to apply the
Bogoliubov operators approach to solve the eigenspectrum. The basic idea is
that we can introduce two Bogoliubov transformations to diagonalize the
Hamiltonians $H_{11}$ and $H_{22}$ respectively, so that the wave function of
the whole Hamiltonian $\tilde{H}$ can be obtained in two different
representations. Since they correspond to the same eigenvalue, the two
expressions can differ only by a complex constant, which allows us to build a
transcendental equation for the eigenenergy. Following the procedure given in
the appendix (a similar calculation can also be found in Ref. QC2012 ), the
transcendental equation is obtained as $G_{m}(E)=0$, where
$\displaystyle G_{0}(E)$ $\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}\left[\frac{g^{2}}{(-n\omega+\gamma+E/\hbar)(\gamma+E/\hbar)}-1\right]f_{n}(k\alpha)^{n},$
for $m=0$ and
$\displaystyle G_{m}(E)$ $\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}e_{n}[k\alpha(m+1)]^{n}\sum_{n=0}^{\infty}e_{n}(k\alpha
m)^{n}$ (14)
$\displaystyle-\sum_{n=0}^{\infty}f_{n}[k\alpha(m+1)]^{n}\sum_{n=0}^{\infty}f_{n}(k\alpha
m)^{n},$
for $m>0$.
The coefficients $e_{n}$ and $f_{n}$ are defined recursively as
$\displaystyle e_{n}$ $\displaystyle=$
$\displaystyle\frac{-\sqrt{m+1}gf_{n}}{n\omega-\gamma-E/\hbar},$ (15)
$\displaystyle nf_{n}$ $\displaystyle=$ $\displaystyle
K_{n-1}f_{n-1}-f_{n-2},$ (16)
with the initial conditions $f_{0}=1$ , $f_{1}=K_{0}$, and
$K_{n}=\frac{1}{k\alpha\omega}[(n\omega+\beta-E/\hbar)-\frac{(m+1)g^{2}}{n\omega-\gamma-E/\hbar}].$
(17)
Here, $\gamma:=-(m+1)\omega_{a}$ and $\beta:=k^{2}\alpha^{2}\omega+m\omega_{a}+\Omega$, which reduces to $k^{2}\alpha^{2}\omega+(m+1)\omega_{a}$ in the resonant case $\Omega=\omega_{a}$ considered below.
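The three-term recursion above is easy to iterate when scanning $G_{m}(E)$ for zeros; a minimal sketch ($\hbar=1$; parameter names mirror the text, and $\beta$ is taken in the appendix form, which coincides with the resonant value for $\Omega=\omega_{a}$):

```python
def coefficients(E, m, w, wa, Omega, g, k_alpha, N=20):
    """Iterate n f_n = K_{n-1} f_{n-1} - f_{n-2}, with f_0 = 1, f_1 = K_0 (hbar = 1)."""
    gamma = -(m + 1) * wa
    beta = k_alpha**2 * w + m * wa + Omega  # = k^2 a^2 w + (m+1) wa on resonance

    def K(n):
        return ((n * w + beta - E)
                - (m + 1) * g**2 / (n * w - gamma - E)) / (k_alpha * w)

    f = [1.0, K(0)]
    for n in range(2, N):
        f.append((K(n - 1) * f[n - 1] - f[n - 2]) / n)
    # e_n follows from f_n through e_n = -sqrt(m+1) g f_n / (n w - gamma - E)
    e = [-(m + 1) ** 0.5 * g * fn / (n * w - gamma - E) for n, fn in enumerate(f)]
    return e, f
```

These coefficients are the ingredients of $G_{m}(E)$; sweeping $E$ and bracketing the sign changes of $G_{m}$ with a standard root finder then yields the spectrum.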
Figure 3: (a) $G_{0}$ and (b) $G_{1}$ for $g=100\,{\rm
MHz},\Omega=\omega_{a}=10^{5}\,{\rm GHz},k=2\pi/\lambda=10^{7}\,{\rm m^{-1}}$
and $\omega=1\,{\rm GHz}=10g$.
In Fig. 3 (a) and (b), we plot the $G_{m}(E)$ functions for $m=0$ and $m=1$ by
the blue curves, respectively. Meanwhile, the red curves demonstrate the
divergent behavior at $E/\hbar=n\omega+(m+1)\omega_{a}$, which is implied by
Eq. (17). Therefore, the zeros of the blue curves correspond to the
eigenenergies of the system. As shown in the figure, where we have set the
transition frequency to be resonant with the cavity, that is,
$\omega_{a}=\Omega$, we can clearly observe the sidebands near $n\hbar\omega$,
which are induced by the vibration of the emitter. Near each sideband, we
observe a Rabi splitting given by the emitter-cavity coupling term $\hbar
g(a^{\dagger}\sigma_{-}+a\sigma_{+})$ in the Hamiltonian.
Recall that, in Fig. 2, we plotted the eigenstates of $\tilde{H}_{1}$ by the
black solid lines, where the energy-level spacing between the states
$|\psi_{+}^{(m,n)}\rangle$ and $|\psi_{-}^{(m,n)}\rangle$ is
$\Delta_{m,n}=2\hbar\sqrt{[(m+\frac{1}{2})\chi]^{2}+(m+1)g^{2}}.$ (18)
For the parameter regime considered in Fig. 3, this spacing reaches
$\Delta_{0,n}\approx 2.11\hbar g$ for $m=0$, which is close to the spacing
$\tilde{\Delta}_{0,n}\approx 1.99\hbar g$ in Fig. 3 (a). A similar result is
obtained for $m=1$: the spacing obtained from Eq. (6) is close to the exact
value given in Fig. 3 (b), with
$|\Delta_{1,n}-\tilde{\Delta}_{1,n}|\approx 0.03\hbar g$. Therefore, the
energy-level transitions introduced by $\tilde{H}_{2}$ produce only a slight
shift of the energy spectrum of the system.
## IV The Rabi oscillation
From now on, we numerically discuss the dynamical evolution of the system,
i.e., we study the Rabi oscillation behavior. Recall that the effective
Hamiltonian $\tilde{H}$ was obtained by a unitary transformation;
correspondingly, we also need to perform the same unitary transformation on
the quantum state. Therefore, preparing the initial pure state as
$|\psi(0)\rangle$, the dynamics of the system is governed by
$|\psi(t)\rangle=U^{\dagger}e^{-i\tilde{H}t/\hbar}U|\psi(0)\rangle,$ (19)
and the average value of an arbitrary operator $\hat{A}$ reads
$\langle\hat{A}\rangle={\rm Tr}[\hat{A}\rho(t)],$ (20)
where the density matrix is $\rho(t)=|\psi(t)\rangle\langle\psi(t)|$.
Figure 4: The Rabi oscillation of the system (a,c) and the corresponding
frequency spectrum (b,d). The initial state is set as
$|\psi(0)\rangle=|m+1,0,g\rangle$ with $m=0$ for (a) and $m=1$ for (c),
respectively. The parameters are set to be same as those in Fig. 3.
As is well known, for the traditional JC model, the emitter and the cavity
field exchange the excitation, which leads to a perfect Rabi oscillation.
However, for a moving emitter, even when the vibrational mode is in the ground
state, the oscillation behavior is changed dramatically. In Fig. 4, we plot
the average value $P=\langle\hat{A}\rangle$ with
$\hat{A}=|m,g\rangle\langle m,g|$ for the initial state of the system
$|\psi(0)\rangle=|m+1,0,g\rangle$. In Fig. 4 (a) and (c), we illustrate the
results for $m=0$ and $m=1$, respectively.
As shown in the figure, for a deep harmonic potential $\omega=10g$, the blue
curves demonstrate perfect Rabi oscillations with periods $T=\pi/g$ and
$T=\pi/(\sqrt{2}g)$ in Fig. 4 (a) and (c), respectively. In this situation,
the emitter is confined tightly by the harmonic potential, and the behavior is
similar to that of the standard JC model with a static atom. However, for a
shallow harmonic potential, as shown by the red dashed curves in Fig. 4 (a)
and (c), the dynamics deviates from the standard Rabi oscillation and exhibits
a two-frequency oscillation character, which can be analyzed via the numerical
Fourier transformation
$f(\omega_{0})=\frac{1}{\sqrt{2\pi}}\int dtP(t)e^{-i\omega_{0}t};$ (21)
the spectral strengths corresponding to (a) and (c) are given in (b) and (d),
respectively. Here, we clearly observe the spectral splitting, which is
indicated by the red dashed lines.
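The spectral analysis of Eq. (21) can be mimicked with a discrete Fourier transform; a minimal sketch on a hypothetical two-frequency signal carrying the splitting quoted below ($g=1$; the signal itself is synthetic, not the full dynamics):

```python
import numpy as np

g = 1.0
w_plus, w_minus = 2.15 * g, 1.85 * g   # illustrative two-frequency splitting

t = np.linspace(0.0, 400.0 / g, 2**14)
P = 0.5 + 0.25 * np.cos(w_plus * t) + 0.25 * np.cos(w_minus * t)

# Discrete analogue of f(w0) = (2 pi)^{-1/2} \int dt P(t) exp(-i w0 t)
spec = np.abs(np.fft.rfft(P - P.mean()))
freqs = 2 * np.pi * np.fft.rfftfreq(t.size, d=t[1] - t[0])  # angular frequencies

def peak_near(w0, half_width=0.15):
    """Location of the spectral maximum within a window around w0."""
    mask = np.abs(freqs - w0) < half_width
    return freqs[mask][np.argmax(spec[mask])]

print(peak_near(w_minus), peak_near(w_plus))  # two peaks near 1.85 g and 2.15 g
```

The mean is subtracted so that the DC component does not dominate the spectrum, and the long time window sets the frequency resolution of the two peaks.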
The splitting can be understood from the energy-level diagram in Fig. 2, which
shows the energy-level transitions between different sidebands. Taking the
subspaces with $m=0$ and $n=0,1$ as an example, the states
$|\psi_{\pm}^{(0,0)}\rangle$ couple to the states $|\psi_{\pm}^{(0,1)}\rangle$
simultaneously, forming the four transition channels shown by the dashed
lines. However, in the parameter regime we consider, the coupling between
$|\psi_{+}^{(0,0)}\rangle$ and $|\psi_{-}^{(0,1)}\rangle$ plays the most
important role due to its smallest energy spacing. As a result, a simplified
energy-level diagram is given in Fig. 5 (a). Neglecting the effective Kerr
interaction, whose strength $\chi$ is much smaller than the emitter-cavity
coupling $g$, the
$|\psi_{+}^{(0,0)}\rangle\leftrightarrow|\psi_{-}^{(0,1)}\rangle$ transition
intensity $\mu$ is
$\displaystyle\hbar\mu\approx\langle\psi_{+}^{(0,0)}|\hbar\eta
a^{\dagger}a(b-b^{\dagger})|\psi_{-}^{(0,1)}\rangle=\frac{1}{2}\hbar\eta,$
(22)
with
$\displaystyle|\psi_{+}^{(0,0)}\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}(|0,0,e\rangle+|1,0,g\rangle),$
$\displaystyle|\psi_{-}^{(0,1)}\rangle$ $\displaystyle=$
$\displaystyle\frac{1}{\sqrt{2}}(-|0,1,e\rangle+|1,1,g\rangle),$ (23)
where we have considered $\omega_{a}=\Omega$. As a result, the coupling forms
two new dressed states $|\Psi_{+}\rangle$ and $|\Psi_{-}\rangle$, which are
superpositions of $|\psi_{+}^{(0,0)}\rangle$ and $|\psi_{-}^{(0,1)}\rangle$,
as shown in Fig. 5 (b). Therefore, the Rabi oscillation process can be
approximately regarded as the oscillation between the states
$|\psi_{-}^{(0,0)}\rangle$ and $|\Psi_{\pm}\rangle$; the corresponding
transition frequencies $\omega_{\pm}$ can be obtained as follows.
Figure 5: (a) The original simplified energy-level diagram. (b) The
interpretation for the two-frequency Rabi oscillation behavior.
Neglecting the Kerr interaction and considering the situation with
$\omega_{a}=\Omega$, we will have
$\displaystyle E_{+}^{(0,0)}$ $\displaystyle=$
$\displaystyle\hbar(\omega_{a}+g),$ (24a) $\displaystyle E_{-}^{(0,1)}$
$\displaystyle=$ $\displaystyle\hbar(\omega_{a}+\omega-g),$ (24b)
$\displaystyle E_{-}^{(0,0)}$ $\displaystyle=$
$\displaystyle\hbar(\omega_{a}-g).$ (24c)
Therefore, the eigen energies of $|\Psi_{\pm}\rangle$ are obtained as
$\displaystyle E_{\pm}$ $\displaystyle=$
$\displaystyle\hbar\omega_{a}+\frac{1}{2}\hbar\omega\pm\hbar\sqrt{(g-\frac{1}{2}\omega)^{2}+\mu^{2}}.$
(25)
As a result, the energy level transition frequencies of the system shown in
Fig. 5(b) are
$\displaystyle\hbar\omega_{\pm}$ $\displaystyle=$ $\displaystyle
E_{\pm}-E_{-}^{(0,0)}$ (26) $\displaystyle=$ $\displaystyle\hbar
g+\frac{1}{2}\hbar\omega\pm\hbar\sqrt{(g-\frac{1}{2}\omega)^{2}+\mu^{2}}.$
For the parameters $\omega=2g$ considered in Fig. 4 (a) and (b), the coupling
strength reaches $\mu\approx 0.15g$, and $\omega_{\pm}\approx(2\pm 0.15)g$,
which coincides with the two peaks in Fig. 4 (b) (see the red dashed curve).
Similar results can be obtained for $m=1$: the two-peak structure of the
spectral strength, given by the red dashed curve in Fig. 4 (d), is likewise
predicted, with transition frequencies approximately
$\omega_{\pm}^{\prime}\approx(2.9\pm 0.23)g$.
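Plugging numbers into Eqs. (22) and (26) for the shallow trap $\omega=2g$ reproduces the quoted splitting; a minimal sketch (other parameters as in Sec. II, all frequencies treated as angular):

```python
import math

hbar = 1.0545718e-34   # J*s
k, M = 1e7, 1e-27      # wave vector (m^-1), emitter mass (kg)
g = 1e8                # emitter-cavity coupling (rad/s, 100 MHz)
w = 2 * g              # shallow trap, omega = 2g

alpha = math.sqrt(hbar / (2 * M * w))
eta = k * alpha * w            # optomechanical coupling
mu = eta / 2                   # transition intensity, Eq. (22)

# Transition frequencies, Eq. (26), on resonance (omega_a = Omega)
root = math.sqrt((g - w / 2) ** 2 + mu**2)
w_pm = (g + w / 2 + root, g + w / 2 - root)

print(mu / g)                    # ~0.16, consistent with mu ≈ 0.15 g
print(w_pm[0] / g, w_pm[1] / g)  # ≈ 2.16 and 1.84, i.e. (2 ± 0.16) g
```

For $\omega=2g$ the detuning $g-\omega/2$ vanishes, so the two peaks sit symmetrically at $2g\pm\mu$.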
## V Conclusion
In this paper, we investigate the energy spectrum and the Rabi oscillation
behavior in a light-matter interaction model with a moving emitter. We
introduce a harmonic potential to confine the vibration degree of the emitter,
and show that the vibration of the emitter will induce an effective Kerr
interaction and opto-mechanical coupling. With the assistance of Bogliubov
operators approach, we obtain the exact energy spectrum of the system.
Furthermore, with a shallow potential, we find that the Rabi oscillation will
exhibit a two-frequency character, which is dramatically different from that
of a static emitter.
In the previous studies, it was shown that the mechanical squeezing can be
realized in the optomechanical system wollman2015 . Therefore, we hope our
study about the vibration induced optomechanical interaction can be applied in
the squeezed state preparation and is furthermore beneficial for quantum
precision measurement and sensing.
Note: During the preparation of this work, we found a similar investigation of
the optomechanical strong coupling between a single cavity photon and a single
atom chang2021 .
###### Acknowledgements.
We thank Profs. X. X. Yi, Y. Li and L.-P. Yang for helpful discussion. This
work is supported by the funding from Ministry of Science and Technology of
China (No. 2021YFE0193500) and the Natural Science Foundation of China (Nos.
11875011 and 12047566).
## Appendix A Exact solution
In this appendix, we give the detailed derivation of the $G$-function, whose
zeros yield the eigenenergies of the system. The same approach, applied to the
quantum Rabi model, can be found in Ref. QC2012 .
Based on Eq. (10) in the main text, we introduce the Bogoliubov operators,
$b^{\dagger}=B^{\dagger}-k\alpha(m+1),\,b=B-k\alpha(m+1),$ (27)
to generate the new bosonic operators $B$ and $B^{\dagger}$. Thus, we can
remove the linear term of the diagonal elements of the Hamiltonian matrix and
simplify it to
$\displaystyle\tilde{H}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{cc}\hbar\omega
B^{\dagger}B-\hbar\gamma&\hbar\sqrt{m+1}g\\\ \hbar\sqrt{m+1}g&\hbar\omega
B^{\dagger}B-\hbar k\alpha\omega(B^{\dagger}+B)+\hbar\beta\end{array}\right).$
(30)
where
$\gamma=-(m+1)\omega_{a},\beta=k^{2}\alpha^{2}\omega+m\omega_{a}+\Omega.$
The wave function can then be assumed as
$\displaystyle|\Phi\rangle_{B}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{c}\sum_{n=0}^{\infty}\sqrt{n!}e_{n}|n\rangle_{B}\\\
\sum_{n=0}^{\infty}\sqrt{n!}f_{n}|n\rangle_{B}\end{array}\right),$ (34)
where $e_{n}$ and $f_{n}$ are the expansion coefficients. $|n\rangle_{B}$ is
an extended coherent state. It has the following properties
$\displaystyle|n\rangle_{B}$ $\displaystyle=$
$\displaystyle\frac{(B^{\dagger})^{n}}{\sqrt{n!}}|0\rangle_{B}=\frac{(b^{\dagger}+k\alpha(m+1))^{n}}{\sqrt{n!}}|0\rangle_{B},$
(35) $\displaystyle|0\rangle_{B}$ $\displaystyle=$ $\displaystyle
e^{-\frac{1}{2}k^{2}\alpha^{2}(m+1)^{2}-k\alpha(m+1)b^{\dagger}}|0\rangle_{b}.$
(36)
Here the vacuum state $|0\rangle_{B}$ associated with the Bogoliubov operator
is annihilated by $B$; equivalently, it is a coherent state of the original
annihilation operator $b$ with eigenvalue $-k\alpha(m+1)$.
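Both defining properties of the displaced vacuum in (36) can be verified directly in a truncated Fock space. The sketch below is not from the paper; the displacement value `c` (standing in for $k\alpha(m+1)$) and the truncation `N` are hypothetical, chosen only to exercise the algebra.

```python
import math

c = 0.4   # hypothetical stand-in for k*alpha*(m+1)
N = 40    # Fock-space truncation

# |0>_B in the number basis: exp(-c^2/2 - c b^dagger)|0>_b has components
# psi_n = e^{-c^2/2} (-c)^n / sqrt(n!)
psi = [math.exp(-c**2 / 2) * (-c)**n / math.sqrt(math.factorial(n))
       for n in range(N)]

# (b psi)_n = sqrt(n+1) * psi_{n+1}, so B = b + c should annihilate psi,
# i.e. b|0>_B = -c |0>_B (coherent state of b with eigenvalue -c)
residual = max(abs(math.sqrt(n + 1) * psi[n + 1] + c * psi[n])
               for n in range(N - 1))

# the state should also be normalized (up to truncation error)
norm = sum(p * p for p in psi)
```

The residual vanishes up to floating-point error, confirming that the displaced vacuum is an exact eigenstate of $b$ and is annihilated by $B$.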
Applying the Schrödinger equation, we have
$\displaystyle\sum\limits_{n=0}^{\infty}\hbar(n\omega-\gamma)\sqrt{n!}e_{n}|n\rangle_{B}+\hbar\sqrt{m+1}g\sum\limits_{n=0}^{\infty}\sqrt{n!}f_{n}|n\rangle_{B}$
$\displaystyle=E\sum\limits_{n=0}^{\infty}\sqrt{n!}e_{n}|n\rangle_{B},$
$\displaystyle\hbar\sqrt{m+1}g\sum\limits_{n=0}^{\infty}\sqrt{n!}e_{n}|n\rangle_{B}+\sum\limits_{n=0}^{\infty}\hbar(n\omega+\beta)\sqrt{n!}f_{n}|n\rangle_{B}$
$\displaystyle-\hbar
k\alpha\omega\sum\limits_{n=0}^{\infty}(\sqrt{n}f_{n}\sqrt{n!}|n-1\rangle_{B}+\sqrt{n+1}f_{n}\sqrt{n!}|n+1\rangle_{B})$
$\displaystyle=E\sum\limits_{n=0}^{\infty}\sqrt{n!}f_{n}|n\rangle_{B}.$
Left-multiplying both sides of the above equation by ${}_{B}\langle l|$ gives
$(l\omega-\gamma-E/\hbar)e_{l}=-\sqrt{m+1}gf_{l},$ (38)
$(l\omega+\beta-E/\hbar)f_{l}-k\alpha\omega(l+1)f_{l+1}-k\alpha\omega f_{l-1}=-\sqrt{m+1}ge_{l}.$
(39)
The coefficients $e_{l}$ and $f_{l}$ have the following relationship
$\displaystyle e_{l}$ $\displaystyle=$
$\displaystyle\frac{-\sqrt{m+1}gf_{l}}{l\omega-\gamma-E/\hbar},$ (40)
$\displaystyle lf_{l}$ $\displaystyle=$ $\displaystyle
K_{l-1}f_{l-1}-f_{l-2},$ (41)
where
$K(l)=\frac{1}{k\alpha\omega}[(l\omega+\beta-E/\hbar)-\frac{(m+1)g^{2}}{l\omega-\gamma-E/\hbar}].$
(42)
with $f_{0}=1$ and $f_{1}=K_{0}$.
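The three-term recursion above lends itself to direct numerical evaluation. The following sketch is not from the paper: all parameter values ($\omega$, $\omega_a$, $\Omega$, $g$, $k\alpha$, $m$, $E$) are hypothetical, chosen away from the poles of $K(l)$, and $\hbar$ is set to $1$.

```python
# Generate the expansion coefficients f_l from the three-term recursion
#   l * f_l = K(l-1) * f_{l-1} - f_{l-2},   f_0 = 1,  f_1 = K(0),
# with K(l) as in Eq. (42). All parameter values below are hypothetical.
omega, omega_a, Omega, g = 1.0, 0.5, 0.5, 0.3
ka, m, E = 0.2, 0, -0.7          # ka stands for k*alpha; hbar = 1

gamma = -(m + 1) * omega_a
beta = ka**2 * omega + m * omega_a + Omega

def K(l):
    return ((l * omega + beta - E)
            - (m + 1) * g**2 / (l * omega - gamma - E)) / (ka * omega)

def f_coeffs(n_max):
    f = [1.0, K(0)]
    for l in range(2, n_max + 1):
        f.append((K(l - 1) * f[-1] - f[-2]) / l)
    return f

f = f_coeffs(10)
```

For instance, the recursion gives $f_2=(K_1K_0-1)/2$ in closed form, which the code reproduces.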
Similarly, we can define another Bogoliubov operator $A$ via
$b^{\dagger}=A^{\dagger}-k\alpha m$. The transformed Hamiltonian then reads
$\displaystyle\tilde{H}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{cc}\hbar\omega A^{\dagger}A+\hbar
k\alpha\omega(A^{\dagger}+A)+\hbar\beta^{\prime}&\hbar\sqrt{m+1}g\\\
\hbar\sqrt{m+1}g&\hbar\omega
A^{\dagger}A-\hbar\gamma^{\prime}\end{array}\right),$ (45)
where
$\gamma^{\prime}=-m\omega_{a}-\Omega,\
\beta^{\prime}=k^{2}\alpha^{2}\omega+(m+1)\omega_{a}.$
The wave function can also be written as
$\displaystyle|\Phi\rangle_{A}$ $\displaystyle=$
$\displaystyle\left(\begin{array}[]{c}\sum_{n=0}^{\infty}(-1)^{n}\sqrt{n!}f_{n}^{\prime}|n\rangle_{A}\\\
\sum_{n=0}^{\infty}(-1)^{n}\sqrt{n!}e_{n}^{\prime}|n\rangle_{A}\end{array}\right),$
(49)
and they obey the properties
$\displaystyle|n\rangle_{A}$ $\displaystyle=$
$\displaystyle\frac{(A^{\dagger})^{n}}{\sqrt{n!}}|0\rangle_{A}=\frac{(b^{\dagger}+k\alpha
m)^{n}}{\sqrt{n!}}|0\rangle_{A},$ (50) $\displaystyle|0\rangle_{A}$
$\displaystyle=$ $\displaystyle e^{-\frac{1}{2}k^{2}\alpha^{2}m^{2}-k\alpha
mb^{\dagger}}|0\rangle_{b}.$ (51)
Following the previous steps, the relationship between the two coefficients
can be obtained as
$\displaystyle
e_{l}^{\prime}=\frac{-\sqrt{m+1}gf_{l}^{\prime}}{l\omega-\gamma^{\prime}-E/\hbar}.$
(52)
The corresponding recursive relationship is
$\displaystyle lf_{l}^{\prime}$ $\displaystyle=$ $\displaystyle
K_{l-1}^{\prime}f_{l-1}^{\prime}-f_{l-2}^{\prime},$ (53) $\displaystyle
K^{\prime}(l)$ $\displaystyle=$
$\displaystyle\frac{1}{k\alpha\omega}[(l\omega+\beta^{\prime}-E/\hbar)-\frac{(m+1)g^{2}}{l\omega-\gamma^{\prime}-E/\hbar}],$
with $f_{0}^{\prime}=1$ and $f_{1}^{\prime}=K_{0}^{\prime}$.
Since the wave functions (34) and (49) are both eigenfunctions for the same
nondegenerate eigenvalue $E$, they must be proportional to each other; that
is, $|\Phi\rangle_{B}=r|\Phi\rangle_{A}$, where $r$ is a complex constant.
Projecting both sides of this identity onto the original vacuum state
${}_{b}\langle 0|$, we have
$\displaystyle\sum_{n=0}^{\infty}\sqrt{n!}e_{n}{}_{b}\langle 0|n\rangle_{B}$
$\displaystyle=$ $\displaystyle
r\sum_{n=0}^{\infty}(-1)^{n}\sqrt{n!}f_{n}^{\prime}{}_{b}\langle
0|n\rangle_{A},$ $\displaystyle\sum_{n=0}^{\infty}\sqrt{n!}f_{n}{}_{b}\langle
0|n\rangle_{B}$ $\displaystyle=$ $\displaystyle
r\sum_{n=0}^{\infty}(-1)^{n}\sqrt{n!}e_{n}^{\prime}{}_{b}\langle
0|n\rangle_{A},$
and from (36) and (51), we obtain
$\displaystyle\sqrt{n!}\,{}_{b}\langle 0|n\rangle_{B}$ $\displaystyle=$
$\displaystyle(k\alpha(m+1))^{n}e^{-\frac{1}{2}k^{2}\alpha^{2}(m+1)^{2}},$
$\displaystyle(-1)^{n}\sqrt{n!}\,{}_{b}\langle 0|n\rangle_{A}$ $\displaystyle=$
$\displaystyle(-k\alpha m)^{n}e^{-\frac{1}{2}k^{2}\alpha^{2}m^{2}}.$ (56)
We now consider the cases $m=0$ and $m\neq 0$ separately.
When $m=0$, eliminating the ratio constant $r$ gives
$\sum_{n=0}^{\infty}e_{n}(k\alpha)^{n}\sum_{n=0}^{\infty}e_{n}^{\prime}0^{n}=\sum_{n=0}^{\infty}f_{n}(k\alpha)^{n}\sum_{n=0}^{\infty}f_{n}^{\prime}0^{n},$
(57)
which yields
$\sum_{n=0}^{\infty}e_{n}(k\alpha)^{n}e_{0}^{\prime}=\sum_{n=0}^{\infty}f_{n}(k\alpha)^{n}f_{0}^{\prime}.$
(58)
With (40) and (52), we get
$\sum_{n=0}^{\infty}\frac{-gf_{n}}{n\omega-\gamma-E/\hbar}(k\alpha)^{n}\frac{gf_{0}^{\prime}}{\gamma^{\prime}+E/\hbar}-\sum_{n=0}^{\infty}f_{n}(k\alpha)^{n}f_{0}^{\prime}=0.$
(59)
Setting $\Omega=\omega_{a}$, we obtain the transcendental equation for the
eigenenergy $E$ as
$\displaystyle G_{0}(E)$ $\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}\frac{g^{2}f_{n}}{(-n\omega+\gamma+E/\hbar)(\gamma+E/\hbar)}(k\alpha)^{n}$
(60) $\displaystyle-\sum_{n=0}^{\infty}f_{n}(k\alpha)^{n}=0.$
For $m\neq 0$, we similarly arrive at
$\displaystyle G_{m}(E)$ $\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}e_{n}[k\alpha(m+1)]^{n}\sum_{n=0}^{\infty}e_{n}^{\prime}(-k\alpha
m)^{n}$
$\displaystyle-\sum_{n=0}^{\infty}f_{n}[k\alpha(m+1)]^{n}\sum_{n=0}^{\infty}f_{n}^{\prime}(-k\alpha
m)^{n}=0,$
which are Eq. (LABEL:Gfunction0) and Eq. (14) in the main text for $m=0$ and
$m\neq 0$, respectively.
## References
* (1) E. M. Purcell, Phys. Rev. 69, 681 (1946).
* (2) E. T. Jaynes and F. W. Cummings, Proc. IEEE 51, 89 (1963).
* (3) G. S. Agarwal, J. Opt. Soc. Am. B 2, 480 (1985).
* (4) M. Brune, F. S.-Kaler, A. Maali, J. Dreyer, E. Hagley, J. M. Raimond, and S. Haroche, Phys. Rev. Lett. 76, 1800 (1996).
* (5) L. Garziano, R. Stassi, V. Macrí, A. F. Kockum, S. Savasta and F. Nori, Phys. Rev. A 92, 063830 (2015).
* (6) G. Calajó and P. Rabl, Phys. Rev. A 95, 043824 (2017).
* (7) E. S.-Burillo, A. G.-Tudela, and C. G.-Ballestero, Phys. Rev. A 102, 013726 (2020).
* (8) Q. Li, D. Z. Xu, C. Y. Cai, and C. P. Sun, Sci. Rep. 3, 3144 (2013).
* (9) D. Braun and J. Martin, Phys. Rev. A 77, 032102 (2008).
* (10) F. Damanet, D. Braun, and J. Martin, Phys. Rev. A 93, 022124 (2016).
* (11) X. G. Wang and C. P. Sun, J. Mod. Optics 42, 515 (1995).
* (12) L. X. Cen and S. J. Wang, J. Phys. A: Math. Gen. 33, 3697 (2000).
* (13) L. Zheng, C. Li, Y. Li, and C. P. Sun, Phys. Rev. A 71, 062101 (2005).
* (14) L. Zheng, C. P. Yang, and F. Nori, Phys. Rev. A 82, 062106 (2010).
* (15) L. You, Phys. Rev. A 64, 012302 (2001).
* (16) Y. G. Deng, T. Shi, and S. Yi, Photon. Res. 9, 1289 (2021).
* (17) C. S. Muñoz, E. d. Valle, A. G. Tudela, K. Müller, S. Lichtmannecker, M. Kaniber, C. Tejedor, J. J. Finley, and F. P. Laussy, Nat. Photon. 8, 550 (2014).
* (18) Y. Zhang, J. Zhang, S. X. Wu, and C. S. Yu, Ann. Phys. 361, 563 (2015).
* (19) K. M. Birnbaum, A. Boca, R. Miller, A. D. Boozer, T. E. Northup, and H. J. Kimble, Nature 436, 87 (2005).
* (20) A. Agustí, L. G. Álvarez, E. Solano, and C. Sabín, Phys. Rev. A 103, 062201 (2021).
* (21) S. Scheel and S. Y. Buhmann, Phys. Rev. A 80, 042902 (2009).
* (22) O. D. Stefano, A. Settineri, V. Macrí, A. Ridolfo, R. Stassi, A. F. Kockum, S. Savasta, and F. Nori, Phys. Rev. Lett. 112, 030402 (2019).
* (23) H. Wang, M. P. Blencowe, C. M. Wilson, and A. J. Rimberg, Phys. Rev. A 99, 053833 (2019).
* (24) W. Qin, V. Macrí, A. Miranowicz, S. Savasta, and F. Nori, Phys. Rev. A 100, 062501 (2019).
* (25) V. Macrí, A. Ridolfo, O. D. Stefano, A. F. Kockum, F. Nori, and S. Savasta, Phys. Rev. X 8, 011031 (2018).
* (26) S. Felicetti, C. Sabín, I. Fuentes, L. Lamata, G. Romero, and E. Solano, Phys. Rev. B 92, 064501 (2015).
* (27) S. E. Anderson, K. C. Younge, and G. Raithel, Phys. Rev. Lett. 107, 263001 (2011).
* (28) Y. Tikman, I. Yavuz, M. F. Ciappina, A. Chacón, Z. Altun, and M. Lewenstein, Phys. Rev. A 93, 023410 (2016).
* (29) A. D. Bounds, N. C. Jackson, R. K. Hanley, R. Faoro, E. M. Bridge, P. Huillery, and M. P. A. Jones, Phys. Rev. Lett. 120, 183401 (2018).
* (30) F. Zhou, L. Yan, S. Gong, Z. Ma, J. He, T. Xiong, L. Chen, W. Yang, M. Feng and V. Vedral, Sci. Adv., 2, e1600578 (2018).
* (31) L. Chen, W. Wan, Y. Xie, F. Zhou and M. Feng, Chin. Phys. Lett., 29, 033701 (2012).
* (32) C. J. Trout, M. Li, M. Gutiérrez, Y. Wu, S.-T. Wang, L. Duan and K. R. Brown, New J. Phys., 20, 043038 (2018).
* (33) K. A. Landsman, Y. Wu, P. H. Leung, D. Zhu, N. M. Linke, K. R. Brown, L. Duan and C. Monroe, Phys. Rev. A, 100, 022332 (2019).
* (34) T. Ramos, V. Sudhir, K. Stannigel, P. Zoller, and T. J. Kippenberg, Phys. Rev. Lett. 110, 193602 (2013).
* (35) H. Wang, X. Gu, Y.-x. Liu, A. Miranowicz, and F. Nori, Phys. Rev. A 90, 023817 (2014).
* (36) D. Braak, Phys. Rev. Lett. 107, 100401 (2011).
* (37) Q. H. Chen, C. Wang, S. He, T. Liu, and K. Wang, Phys. Rev. A 86, 023822 (2012).
* (38) Q. H. Chen, T. Liu, Y. Y. Zhang, and K. L. Wang, Europhys. Lett. 96, 14003 (2011).
* (39) Q. H. Chen, L. Li, T. Liu, and K. L. Wang, Chin. Phys. Lett. 29, 014208 (2012).
* (40) T. Li, Z.-X. Gong, Z.-Q. Yin, H. T. Quan, X. Yin, P. Zhang, L.-M. Duan and X. Zhang, Phys. Rev. Lett., 109, 163001 (2012).
* (41) Z.-Q. Yin, T. Li, X. Zhang and L. M. Duan, Phys. Rev. A, 88, 033614 (2013).
* (42) H.-K. Li, E. Urban, C. Noel, A. Chuang, Y. Xia, A. Ransford, B. Hemmerling, Y. Wang, T. Li, H. Häffner and X. Zhang, Phys. Rev. Lett., 118, 053001 (2017).
* (43) C. K. Law, Phys. Rev. A 51, 2537 (1995).
* (44) M. Aspelmeyer, T. J. Kippenberg, and F. Marquardt, Rev. Mod. Phys. 86, 1391 (2014).
* (45) E. E. Wollman, C. U. Lei, A. J. Weinstein, J. Suh, A. Kronwald, F. Marquardt, A. A. Clerk, and K. C. Schwab, Science 349, 952 (2015).
* (46) J. A.-Luengo and D. E. Chang, arXiv: 2108.03526 (2021).
where in the last line we used (6.12). Plugging this bound back in (6.8) we
get
$\displaystyle\mathbb{P}_{t}(A)$
$\displaystyle\geq(1-\delta)\mathbb{E}\left[\mathbf{1}\\{\mathsf{Fav}_{t}(\delta)\\}\mathbb{P}_{\operatorname{free},t}(A)\right]-(1-\delta)\delta$
$\displaystyle\geq(1-\delta)\mathbb{E}\left[\mathbf{1}\\{\mathsf{Fav}_{t}(\delta)\\}\gamma(A)-\delta\right]-(1-\delta)\delta$
$\displaystyle=\gamma(A)(1-\delta)\mathbb{P}(\mathsf{Fav}_{t}(\delta))-2\delta(1-\delta),$
where the inequality in the penultimate line follows from (6.13). Taking
$\liminf$ both sides as $t\to\infty$, in view of (6.7) we see that
$\displaystyle\liminf_{t\to\infty}\mathbb{P}_{t}(A)\geq(1-\delta)(1-\varepsilon)\gamma(A)-2\delta(1-\delta).$
Letting $\delta\downarrow 0$ and using the fact that $\varepsilon$ is
arbitrary, we get that $\liminf_{t\to\infty}\mathbb{P}_{t}(A)\geq\gamma(A)$.
Similarly for the upper bound, on the event $\mathsf{Fav}_{t}(\delta)$ we have
$\displaystyle\frac{\mathbb{E}_{\operatorname{free},t}\left[W\mathbf{1}_{A}\right]}{\mathbb{E}_{\operatorname{free},t}\left[W\right]}$
$\displaystyle\leq\frac{\mathbb{P}_{\operatorname{free},t}(A)}{(1-\delta)\mathbb{P}_{\operatorname{free},t}(W\geq
1-\delta)}\leq\frac{1}{(1-\delta)^{2}}\mathbb{P}_{\operatorname{free},t}(A),$
where we again use (6.12) for the last inequality. Inserting the above bound
in (6.9) we get
$\displaystyle\mathbb{P}_{t}(A)$
$\displaystyle\leq\frac{1}{(1-\delta)^{2}}\mathbb{E}\left[\mathbf{1}\\{\mathsf{Fav}_{t}(\delta)\\}\mathbb{P}_{\operatorname{free},t}(A)\right]+\mathbb{P}\left(\neg\mathsf{Fav}_{t}(\delta)\right)$
$\displaystyle\leq\frac{\delta}{(1-\delta)^{2}}+\frac{1}{(1-\delta)^{2}}\mathbb{E}\left[\mathbf{1}\\{\mathsf{Fav}_{t}(\delta)\\}\gamma(A)\right]+\mathbb{P}\left(\neg\mathsf{Fav}_{t}(\delta)\right)$
$\displaystyle\leq\frac{\delta}{(1-\delta)^{2}}+\frac{1}{(1-\delta)^{2}}\gamma(A)+\mathbb{P}\left(\neg\mathsf{Fav}_{t}(\delta)\right).$
The inequality in the penultimate line above follows from (6.13). Taking
$\limsup$ both sides as $t\to\infty$, in view of (6.7) we see that
$\displaystyle\limsup_{t\to\infty}\mathbb{P}_{t}(A)\leq\frac{\delta}{(1-\delta)^{2}}+\frac{1}{(1-\delta)^{2}}\gamma(A)+\varepsilon.$
As before, letting $\delta\downarrow 0$ and using the fact that
$\varepsilon$ is arbitrary, we get that
$\limsup_{t\to\infty}\mathbb{P}_{t}(A)\leq\gamma(A)$. Together with the
matching lower bound for the $\liminf$ derived above, we thus arrive at (6.1),
completing the proof of Theorem 1.11.
Step 4. In this step we prove (6.7). Fix any $\delta>0$. Recall the
distributional convergence of the KPZ line ensemble to the Airy line ensemble
from Proposition 2.7. By the Skorokhod representation theorem, we may assume
that our probability space is equipped with $\mathcal{A}_{1}(x)$,
$\mathcal{A}_{2}(x)$, such that as $t\to\infty$, almost surely we have
$\displaystyle\max_{i=1,2}\sup_{x\in[-1,1]}|2^{1/3}\mathfrak{h}_{t}^{(i)}(2^{1/3}x)-\mathcal{A}_{i}(x)|\to
0.$ (6.14)
Figure 6. In the above figure, $\mathsf{Gap}_{t}(\delta)$ defined in (6.3)
denotes the event that the value of the blue point is smaller than the value
of each of the red points by at least $\delta$. The
$\mathsf{Rise}_{t}(\delta)$ event defined in (6.4) requires that no point on
the whole blue curve (restricted to ${I}_{t}=(-t^{-\alpha},t^{-\alpha})$)
exceeds the value of the blue point by more than $\frac{1}{4}\delta$ (i.e.,
there is no significant rise). The $\mathsf{Tight}_{t}(\delta)$ event defined
in (6.5) ensures the values of the red points are within
$[-\delta^{-1},\delta^{-1}]$. The $\mathsf{Fluc}_{t}^{(i)}(\delta)$ event
defined in (6.15) signifies that every point on the $i$-th curve (restricted
to ${I}_{t}$) is within distance $\frac{1}{4}\delta$ of its value at the left
boundary: $\mathfrak{h}_{t}^{(i)}(-t^{-\alpha})$. Finally, the
$\mathsf{Sink}_{t}(\delta)$ event defined in (6.20) denotes the event that no
point on the black curve (restricted to ${I}_{t}$) drops below the value of
the red points by more than $\frac{1}{4}\delta$ (i.e., there is no significant
sink).
For $i=1,2$, consider the event
$\displaystyle\mathsf{Fluc}_{t}^{(i)}(\delta):=\left\\{\sup_{x\in
I_{t}}|\mathfrak{h}_{t}^{(i)}(x)-\mathfrak{h}_{t}^{(i)}(-t^{-\alpha})|\leq\tfrac{1}{4}\delta\right\\}.$
(6.15)
See Figure 6 and its caption for an interpretation of this event. We claim
that for every $\delta>0$,
$\displaystyle\liminf_{t\to\infty}\mathbb{P}\left(\mathsf{Fluc}_{t}^{(i)}(\delta)\right)=1.$
(6.16)
Let us complete the proof of (6.7) assuming (6.16). Fix any $\varepsilon>0$.
Note that
$\\{|\mathfrak{h}_{t}^{(1)}(-t^{-\alpha})-\mathfrak{h}_{t}^{(1)}(t^{-\alpha})|\leq\tfrac{1}{4}\delta\\}\supset\mathsf{Fluc}_{t}^{(1)}(\delta)$.
Recall $\mathsf{Gap}_{t}(\delta)$ from (6.3). Observe that
$\displaystyle\neg\mathsf{Gap}_{t}(\delta)\cap\left\\{|\mathfrak{h}_{t}^{(1)}(-t^{-\alpha})-\mathfrak{h}_{t}^{(1)}(t^{-\alpha})|\leq\tfrac{1}{4}\delta\right\\}$
$\displaystyle\subset\left\\{\mathfrak{h}_{t}^{(2)}(-t^{-\alpha})-\mathfrak{h}_{t}^{(1)}(-t^{-\alpha})\geq-\tfrac{5}{4}\delta\right\\}$
$\displaystyle\subset\left\\{\inf_{x\in[-1,0]}[\mathfrak{h}_{t}^{(2)}(x)-\mathfrak{h}_{t}^{(1)}(x)]\geq-\tfrac{5}{4}\delta\right\\}.$
Using these two preceding set relations, by union bound we have
$\displaystyle\mathbb{P}\left(\neg\mathsf{Gap}_{t}(\delta)\right)$
$\displaystyle\leq\mathbb{P}\left(|\mathfrak{h}_{t}^{(1)}(-t^{-\alpha})-\mathfrak{h}_{t}^{(1)}(t^{-\alpha})|\geq\tfrac{1}{4}\delta\right)+\mathbb{P}\left(\neg\mathsf{Gap}_{t}(\delta)\cap|\mathfrak{h}_{t}^{(1)}(-t^{-\alpha})-\mathfrak{h}_{t}^{(1)}(t^{-\alpha})|\leq\tfrac{1}{4}\delta\right)$
$\displaystyle\leq\mathbb{P}\left(\neg\mathsf{Fluc}_{t}^{(1)}(\delta)\right)+\mathbb{P}\left(\inf_{x\in[-1,0]}[\mathfrak{h}_{t}^{(2)}(x)-\mathfrak{h}_{t}^{(1)}(x)]\geq-\tfrac{5}{4}\delta\right).$
As $t\to\infty$, the first term goes to zero due to (6.16), and by Proposition
2.7 the second term goes to
$\mathbb{P}\left(\inf_{x\in[-1,0]}[\mathcal{A}_{2}(2^{-1/3}x)-\mathcal{A}_{1}(2^{-1/3}x)]\geq-\tfrac{5}{4\cdot
2^{1/3}}\delta\right).$
But by (2.1) we know the Airy line ensemble is strictly ordered. Thus the
above probability can be made arbitrarily small by choosing $\delta$ small
enough. In particular, there exists a $\delta_{1}\in(0,1)$ such that for all
$\delta\in(0,\delta_{1})$ the above probability is always less than
$\frac{\varepsilon}{2}$. This forces
$\displaystyle\liminf_{t\to\infty}\mathbb{P}\left(\mathsf{Gap}_{t}(\delta)\right)\geq
1-\tfrac{\varepsilon}{2}.$ (6.17)
Recall $\mathsf{Rise}_{t}(\delta)$ from (6.4). Clearly
$\mathsf{Fluc}_{t}^{(2)}(\delta)\subset\mathsf{Rise}_{t}(\delta)$. Thus for
every $\delta>0$,
$\displaystyle\liminf_{t\to\infty}\mathbb{P}(\mathsf{Rise}_{t}(\delta))=1.$
(6.18)
Finally, using Proposition 2.8 (a) and (b) we see that
$\mathfrak{h}_{t}^{(1)}(-t^{-\alpha}),\mathfrak{h}_{t}^{(1)}(t^{-\alpha})$ are
tight. Thus there exists $\delta_{2}\in(0,1)$ such that for all
$\delta\in(0,\delta_{2})$, we have
$\displaystyle\liminf_{t\to\infty}\mathbb{P}\left(\mathsf{Tight}_{t}(\delta)\right)\geq
1-\tfrac{\varepsilon}{2}.$ (6.19)
Combining (6.17), (6.18), (6.19), and recalling the definition of
$\mathsf{Fav}_{t}(\delta)$ from (6.6), by union bound we get (6.7) for all
$\delta\in(0,\min\\{\delta_{1},\delta_{2}\\})$.
Let us now prove (6.16). Recall $\mathsf{Fluc}_{t}^{(i)}(\delta)$ from (6.15).
Define the event:
$\displaystyle\mathsf{Conv}_{t}(\delta):=\left\\{\sup_{x\in[-1,1]}|\mathfrak{h}_{t}^{(i)}(x)-2^{-1/3}\mathcal{A}_{i}(2^{-1/3}x)|\leq\tfrac{1}{16}\delta\right\\}.$
Observe that
$\displaystyle\left\\{\neg\mathsf{Fluc}_{t}^{(i)}(\delta),\mathsf{Conv}_{t}(\delta)\right\\}\subset\left\\{\sup_{|x|\leq
2^{-1/3}t^{-\alpha}}\left[\mathcal{A}_{i}(x)-\mathcal{A}_{i}(-2^{-1/3}t^{-\alpha})\right]\geq\tfrac{2^{1/3}}{8}\delta\right\\}.$
Thus by union bound
$\displaystyle\mathbb{P}\left(\neg\mathsf{Fluc}_{t}^{(i)}(\delta)\right)$
$\displaystyle\leq\mathbb{P}\left(\neg\mathsf{Conv}_{t}(\delta)\right)+\mathbb{P}\left(\neg\mathsf{Fluc}_{t}^{(i)}(\delta),\mathsf{Conv}_{t}(\delta)\right)$
$\displaystyle\leq\mathbb{P}\left(\neg\mathsf{Conv}_{t}(\delta)\right)+\mathbb{P}\left(\sup_{|x|\leq
2^{-1/3}t^{-\alpha}}\left[\mathcal{A}_{i}(x)-\mathcal{A}_{i}(-2^{-1/3}t^{-\alpha})\right]\geq\tfrac{2^{1/3}}{8}\delta\right).$
By (6.14), the first term above goes to zero as $t\to\infty$, whereas the
second term goes to zero as $t\to\infty$ by the modulus of continuity of the
Airy line ensemble from Proposition 2.4. Note that in Proposition 2.4 the
modulus of continuity is stated for $\mathcal{A}_{i}(x)+x^{2}$. However, since
we deal with a vanishing interval
$[-2^{-1/3}t^{-\alpha},2^{-1/3}t^{-\alpha}]$, the parabolic term does not play
any role. This establishes (6.16).
Step 5. In this step we prove (6.12). Let us consider the event
$\displaystyle\mathsf{Sink}_{t}(\delta):=\left\\{\inf_{x\in
I_{t}}\mathfrak{h}_{t}^{(1)}(x)\geq-\tfrac{1}{4}\delta+\min\\{\mathfrak{h}_{t}(-t^{-\alpha}),\mathfrak{h}_{t}(t^{-\alpha})\\}\right\\}.$
(6.20)
See Figure 6 and its caption for an interpretation of this event. Recall
$\mathsf{Gap}_{t}(\delta)$ and $\mathsf{Rise}_{t}(\delta)$ from (6.3) and
(6.4). Observe that on the event
$\mathsf{Gap}_{t}(\delta)\cap\mathsf{Rise}_{t}(\delta)$, we have $\sup_{x\in
I_{t}}\mathfrak{h}_{t}^{(2)}(x)\leq\min\\{\mathfrak{h}_{t}(-t^{-\alpha}),\mathfrak{h}_{t}(t^{-\alpha})\\}-\frac{3}{4}\delta$.
Thus on
$\mathsf{Gap}_{t}(\delta)\cap\mathsf{Rise}_{t}(\delta)\cap\mathsf{Sink}_{t}(\delta)$,
we have
$\inf_{x\in{I}_{t}}\left[\mathfrak{h}_{t}^{(1)}(x)-\mathfrak{h}_{t}^{(2)}(x)\right]\geq\tfrac{1}{2}\delta.$
Recall $W$ from (6.11). On the event $\\{\inf_{x\in
I_{t}}\left[\mathfrak{h}_{t}^{(1)}(x)-\mathfrak{h}_{t}^{(2)}(x)\right]\geq\tfrac{1}{2}\delta\\}$
we have the pointwise inequality
$W>\exp(-2t^{2/3-\alpha}e^{-\frac{1}{2}t^{1/3}\delta})\geq 1-\delta,$
where we choose a $t_{1}(\delta)>0$ so that the last inequality is true for
all $t\geq t_{1}$. Thus for all $t\geq t_{1}$,
$\displaystyle\mathbf{1}\\{\mathsf{Fav}_{t}(\delta)\\}\mathbb{P}_{\operatorname{free},t}(W\geq
1-\delta)\geq\mathbf{1}\\{\mathsf{Fav}_{t}(\delta)\\}\mathbb{P}_{\operatorname{free},t}(\mathsf{Sink}_{t}(\delta)).$
(6.21)
Recall that $\mathbb{P}_{\operatorname{free},t}$ denotes the law of a Brownian
bridge $B_{1}(\cdot)$ on $I_{t}$ starting at
$B_{1}(-t^{-\alpha})=\mathfrak{h}_{t}(-t^{-\alpha})$ and ending at
$B_{1}(t^{-\alpha})=\mathfrak{h}_{t}(t^{-\alpha})$. Let us consider another
Brownian bridge $\widetilde{B}_{1}(\cdot)$ on $I_{t}$ starting and ending at
$\min\\{\mathfrak{h}_{t}(-t^{-\alpha}),\mathfrak{h}_{t}(t^{-\alpha})\\}$. By
standard estimates for Brownian bridge (see Lemma 2.11 in [CH16] for example)
$\displaystyle\mathbb{P}\left(\inf_{x\in
I_{t}}\widetilde{B}_{1}(x)\geq-\tfrac{1}{4}\delta+\min\\{\mathfrak{h}_{t}(-t^{-\alpha}),\mathfrak{h}_{t}(t^{-\alpha})\\}\right)=1-\exp\left(-\tfrac{\delta^{2}}{8|I_{t}|}\right)=1-\exp\left(-\tfrac{\delta^{2}}{16}t^{\alpha}\right).$
Note that $B_{1}(\cdot)$ is stochastically larger than
$\widetilde{B}_{1}(\cdot)$. Since the above event is increasing, we thus have
$\mathbb{P}_{\operatorname{free},t}\left(\mathsf{Sink}_{t}(\delta)\right)$ is
at least $1-\exp\left(-\tfrac{\delta^{2}}{16}t^{\alpha}\right)$. We now choose
$t_{2}(\delta)>0$ such that
$1-\exp\left(-\tfrac{\delta^{2}}{16}t^{\alpha}\right)\geq 1-\delta$ for all
$t\geq t_{2}$. Taking
$t_{0}=\max\\{t_{1},t_{2}\\}$, we thus get (6.12) from (6.21).
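As a numerical sanity check (not part of the proof), the two thresholds of Step 5 can be computed explicitly for hypothetical values $\delta=0.1$ and $\alpha=\frac{1}{6}$: $t_1(\delta)$ makes the pointwise bound $W>\exp(-2t^{2/3-\alpha}e^{-\frac{1}{2}t^{1/3}\delta})\geq 1-\delta$ hold, and $t_2(\delta)$ makes the Brownian-bridge bound $1-\exp(-\frac{\delta^{2}}{16}t^{\alpha})\geq 1-\delta$ hold.

```python
import math

# Sanity check of the thresholds t_1, t_2 in Step 5, with hypothetical
# values delta = 0.1 and alpha = 1/6.
delta, alpha = 0.1, 1.0 / 6.0

# W-bound: W > exp(-2 t^{2/3 - alpha} exp(-t^{1/3} delta / 2))
def w_lower_bound(t):
    return math.exp(-2 * t ** (2.0 / 3.0 - alpha)
                    * math.exp(-0.5 * t ** (1.0 / 3.0) * delta))

# bridge bound 1 - exp(-delta^2 t^alpha / 16) >= 1 - delta holds iff
# t^alpha >= 16 log(1/delta) / delta^2; solve for the critical t
t2 = (16 * math.log(1 / delta) / delta ** 2) ** (1 / alpha)

# evaluate the bridge bound at a t strictly above the critical value
bridge_prob = 1 - math.exp(-delta ** 2 * (2 * t2) ** alpha / 16)
```

The doubly exponential factor in the $W$-bound makes it effective only for very large $t$ (here $t=10^9$ suffices for $\delta=0.1$), consistent with the asymptotic nature of the argument.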
Step 6. In this step we prove (6.13). As before consider the Brownian bridge
$B_{1}(\cdot)$ on $I_{t}$ starting at
$B_{1}(-t^{-\alpha})=\mathfrak{h}_{t}(-t^{-\alpha})$ and ending at
$B_{1}(t^{-\alpha})=\mathfrak{h}_{t}(t^{-\alpha})$. We may write $B_{1}$ as
$B_{1}(x)=\mathfrak{h}_{t}^{(1)}(-t^{-\alpha})+\frac{x+t^{-\alpha}}{2t^{-\alpha}}(\mathfrak{h}_{t}^{(1)}(t^{-\alpha})-\mathfrak{h}_{t}^{(1)}(-t^{-\alpha}))+\overline{B}(x),$
where $\overline{B}$ is a Brownian bridge on $I_{t}$ starting and ending at
zero. Thus,
$\displaystyle
t^{1/3}(B_{1}(t^{-2/3}x)-B_{1}(0))=t^{1/3}\left[\overline{B}(t^{-2/3}x)-\overline{B}(0)\right]+\tfrac{1}{2}{t^{\alpha-1/3}x}(\mathfrak{h}_{t}^{(1)}(t^{-\alpha})-\mathfrak{h}_{t}^{(1)}(-t^{-\alpha})).$
(6.22)
Recall that $\alpha=\frac{1}{6}$. By Brownian scaling,
$B_{*}(x):=t^{1/3}\overline{B}(t^{-2/3}x)$ is a Brownian bridge on the large
interval $[-\sqrt{t},\sqrt{t}]$ starting and ending at zero. By computing the
covariances, it is easy to check that as $t\to\infty$, $B_{*}(x)-B_{*}(0)$
converges weakly to a two-sided Brownian motion $B(\cdot)$ on $[-a,a]$. This
gives us the weak limit for the first term on the r.h.s. of (6.22). For the
second term, recall the event $\mathsf{Tight}_{t}(\delta)$ from (6.5). As
$|x|\leq a$, on $\mathsf{Tight}_{t}(\delta)$, we have
$\tfrac{1}{2}{t^{\alpha-1/3}x}(\mathfrak{h}_{t}^{(1)}(t^{-\alpha})-\mathfrak{h}_{t}^{(1)}(-t^{-\alpha}))\leq{t^{-1/6}a}\delta^{-1}.$
This gives a uniform bound (uniform over the event
$\mathsf{Fav}_{t}(\delta)$) on the second term in (6.22). Thus, as long as the
boundary data is in $\mathsf{Tight}_{t}(\delta)$,
$\mathbb{P}_{\operatorname{free},t}(A)\to\gamma(A)$, where
$\gamma(A)=\mathbb{P}(B(\cdot)\in A)$. This proves (6.13).
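The covariance computation mentioned above can be checked directly. The sketch below (not from the paper; the value of $L$ is hypothetical) uses the standard bridge covariance $\operatorname{Cov}(\overline{B}(u),\overline{B}(v))=\frac{(u+L)(L-v)}{2L}$ for $-L\le u\le v\le L$, with $L=\sqrt{t}$, and verifies that the increment covariance of $B_{*}$ converges to that of a two-sided Brownian motion: $\min(|x|,|y|)$ when $x,y$ have the same sign, and $0$ otherwise.

```python
# Check that for B_*(x) = t^{1/3} \bar B(t^{-2/3} x), a Brownian bridge on
# [-L, L] with L = sqrt(t), the covariance of B_*(x) - B_*(0) converges to
# the two-sided Brownian motion covariance as L -> infinity.
def bridge_cov(u, v, L):
    u, v = min(u, v), max(u, v)
    return (u + L) * (L - v) / (2 * L)

def incr_cov(x, y, L):
    return (bridge_cov(x, y, L) - bridge_cov(x, 0, L)
            - bridge_cov(0, y, L) + bridge_cov(0, 0, L))

L = 1e8  # L = sqrt(t) for a hypothetical large t
same_sign = incr_cov(0.5, 1.5, L)   # expect ~ min(0.5, 1.5) = 0.5
opposite = incr_cov(-0.5, 1.5, L)   # expect ~ 0
```

Algebraically, for $0\le x\le y$ the increment covariance equals $x(1-\frac{y}{2L})\to x$, while for $x<0<y$ it equals $-\frac{xy}{2L}\to 0$, matching the independence of the two halves of the two-sided Brownian motion.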
### 6.2. Dyson Behavior around joint maximum
In this subsection we state and prove Proposition 6.1.
###### Proposition 6.1 (Dyson behavior around joint maximum).
Fix $p\in(0,1)$. Set $q=1-p$. Consider two independent copies of the KPZ
equation $\mathcal{H}_{\uparrow}(x,t)$, and $\mathcal{H}_{\downarrow}(x,t)$,
both started from the narrow wedge initial data. Let $\mathcal{M}_{p,t}$ be
the almost sure unique maximizer of the process
$x\mapsto(\mathcal{H}_{\uparrow}(x,pt)+\mathcal{H}_{\downarrow}(x,qt))$ which
exists via Lemma 3.1. Set
$\displaystyle D_{1}(x,t)$
$\displaystyle:=\mathcal{H}_{\uparrow}(\mathcal{M}_{p,t},pt)-\mathcal{H}_{\uparrow}(x+\mathcal{M}_{p,t},pt),$
(6.23) $\displaystyle D_{2}(x,t)$
$\displaystyle:=\mathcal{H}_{\downarrow}(x+\mathcal{M}_{p,t},qt)-\mathcal{H}_{\downarrow}(\mathcal{M}_{p,t},qt).$
As $t\to\infty$, we have the following convergence in law
$\displaystyle(D_{1}(x,t),D_{2}(x,t))\stackrel{{\scriptstyle
d}}{{\to}}(\mathcal{D}_{1}(x),\mathcal{D}_{2}(x))$ (6.24)
in the uniform-on-compact topology. Here
$\mathcal{D}=(\mathcal{D}_{1},\mathcal{D}_{2}):\mathbb{R}\to\mathbb{R}^{2}$ is
a two-sided $\mathsf{DBM}$, that is,
$\mathcal{D}_{+}(\cdot):=\mathcal{D}(\cdot)\mid_{[0,\infty)}$ and
$\mathcal{D}_{-}(\cdot):=\mathcal{D}(-\cdot)\mid_{(-\infty,0]}$ are
independent copies of $\mathsf{DBM}$ defined in Definition 5.1.
For clarity, the proof is completed over several subsections (Sections
6.2.1-6.2.9) below, and we refer to Figure 7 for the structure of the proof.
Figure 7. Structure of Section 6.2: recasting Proposition 6.1 in the KPZ line
ensemble framework (Section 6.2.1); heuristics and outline of the proof of
Proposition 6.1 (Section 6.2.2); reducing the global maximizer to the local
maximizer (Section 6.2.3); defining “nice” events that happen with high
probability (Lemma 6.2, Section 6.2.4); conditioning w.r.t. large boundaries
to obtain the Brownian bridge law (Section 6.2.5); conditioning w.r.t. the max
data and small boundaries to obtain the
$\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$ law (Section 6.2.6); obtaining
matching upper and lower bounds for (6.31) and the desired convergence
(Section 6.2.7); proof of Lemma 6.2 (Section 6.2.8); proofs of Lemmas 6.6 and
6.7 (Section 6.2.9).
#### 6.2.1. KPZ line ensemble framework
In this subsection, we convert Proposition 6.1 into the language of scaled KPZ
line ensemble defined in Proposition 2.5. We view
$\mathcal{H}_{\uparrow}(x,t)=\mathcal{H}_{t,\uparrow}^{(1)}(x),\mathcal{H}_{\downarrow}(x,t)=\mathcal{H}_{t,\downarrow}^{(1)}(x)$
as the top curves of two (unscaled) KPZ line ensembles:
$\\{\mathcal{H}_{t,\uparrow}^{(n)}(x),\mathcal{H}_{t,\downarrow}^{(n)}(x)\\}_{n\in\mathbb{N},x\in\mathbb{R}}$.
Following (2.5) we define their scaled versions:
$\displaystyle\mathfrak{h}_{t,\uparrow}^{(n)}(x)$
$\displaystyle:=t^{-1/3}\left(\mathcal{H}_{t,\uparrow}^{(n)}(t^{2/3}x)+\tfrac{t}{24}\right),\qquad\mathfrak{h}_{t,\downarrow}^{(n)}(x):=t^{-1/3}\left(\mathcal{H}_{t,\downarrow}^{(n)}(t^{2/3}x)+\tfrac{t}{24}\right).$
Along with the full maximizer $\mathcal{M}_{p,t}$, we will also consider local
maximizer defined by
$\displaystyle\mathcal{M}_{p,t}^{M}:=\mathop{\mathrm{argmax}}_{x\in[-Mt^{2/3},Mt^{2/3}]}(\mathcal{H}_{pt,\uparrow}^{(1)}(x)+\mathcal{H}_{qt,\downarrow}^{(1)}(x)),\qquad
M\in[0,\infty].$ (6.25)
For each $M>0$, $\mathcal{M}_{p,t}^{M}$ is unique almost surely by the
$\mathbf{H}_{t}$-Brownian Gibbs property. We now set
$\displaystyle Y_{M,t,\uparrow}^{(n)}(x)$
$\displaystyle:=p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(n)}\big{(}(pt)^{-2/3}\mathcal{M}_{p,t}^{M}\big{)}-p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(n)}\big{(}p^{-2/3}x\big{)},$
(6.26) $\displaystyle Y_{M,t,\downarrow}^{(n)}(x)$
$\displaystyle:=q^{1/3}\mathfrak{h}_{qt,\downarrow}^{(n)}\big{(}q^{-2/3}x\big{)}-q^{1/3}\mathfrak{h}_{qt,\downarrow}^{(n)}\big{(}(qt)^{-2/3}\mathcal{M}_{p,t}^{M}\big{)}.$
Taking into account (6.23) and all the above new notation, it can now be
checked that for each $t>0$,
$\displaystyle D_{1}(x,t)\stackrel{{\scriptstyle
d}}{{=}}t^{1/3}Y_{\infty,t,\uparrow}^{(1)}\big{(}t^{-2/3}(\mathcal{M}_{p,t}^{\infty}+x)\big{)},\quad
D_{2}(x,t)\stackrel{{\scriptstyle
d}}{{=}}t^{1/3}Y_{\infty,t,\downarrow}^{(1)}\big{(}t^{-2/3}(\mathcal{M}_{p,t}^{\infty}+x)\big{)},$
(6.27)
both as functions in $x$. Thus it is equivalent to verify Proposition 6.1 for
the above $Y_{\infty,t,\uparrow}^{(1)},Y_{\infty,t,\downarrow}^{(1)}$
expressions. In our proof we will mostly deal with local maximizer version,
and so for convenience we define:
$\displaystyle
D_{M,t,\uparrow}(x):=t^{1/3}Y_{M,t,\uparrow}^{(1)}\big{(}t^{-2/3}(\mathcal{M}_{p,t}^{M}+x)\big{)},\quad
D_{M,t,\downarrow}(x):=t^{1/3}Y_{M,t,\downarrow}^{(1)}\big{(}t^{-2/3}(\mathcal{M}_{p,t}^{M}+x)\big{)},$
(6.28)
where $Y_{M,t,\uparrow}^{(1)},Y_{M,t,\downarrow}^{(1)}$ are defined in (6.26).
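The scaling identity behind (6.27) is a purely algebraic cancellation, which can be checked numerically: substituting the definition of the scaled ensemble into (6.26) and evaluating at $t^{-2/3}(\mathcal{M}+x)$ should reproduce $\mathcal{H}_{\uparrow}(\mathcal{M},pt)-\mathcal{H}_{\uparrow}(\mathcal{M}+x,pt)$ exactly. In the sketch below (not from the paper), the profile `H` is an arbitrary hypothetical stand-in and all numeric values are hypothetical.

```python
import math

# Check the algebra behind (6.27): with
#   h(y) = (pt)^{-1/3} (H((pt)^{2/3} y) + pt/24)   and
#   Y(u) = p^{1/3} [h((pt)^{-2/3} M) - h(p^{-2/3} u)],
# one should have  t^{1/3} Y(t^{-2/3}(M + x)) = H(M) - H(M + x).
def H(z):                             # hypothetical stand-in profile
    return math.sin(z) + z * z / 7.0

t, p, M, x = 50.0, 0.3, 1.7, -0.9     # hypothetical numeric values

def h(y):
    return (p * t) ** (-1.0 / 3.0) * (H((p * t) ** (2.0 / 3.0) * y)
                                      + p * t / 24.0)

def Y(u):
    return p ** (1.0 / 3.0) * (h((p * t) ** (-2.0 / 3.0) * M)
                               - h(p ** (-2.0 / 3.0) * u))

lhs = t ** (1.0 / 3.0) * Y(t ** (-2.0 / 3.0) * (M + x))
rhs = H(M) - H(M + x)
```

The $pt/24$ shifts cancel between the two terms of $Y$, and the prefactors collapse to $1$, so the two sides agree to floating-point accuracy for any choice of profile.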
We will introduce several other notations and parameters later in the proof.
For the moment, the minimal set of notations introduced here facilitates our
discussion of the ideas and the outline of the proof of Proposition 6.1 in the
next subsection.
#### 6.2.2. Ideas and Outline of Proof of Proposition 6.1
Before embarking on a rather lengthy proof, in this subsection we explain the
core ideas behind the proof and provide an outline for the remaining
subsections.
First we contrast the proof idea with that of Theorem 1.11. Indeed, similar to
the proof of Theorem 1.11, from (6.27) we see that at the level
$Y_{\infty,t,\uparrow}^{(1)},Y_{\infty,t,\downarrow}^{(1)}$ we are interested
in understanding their laws restricted to a very small symmetric interval of
order $O(t^{-2/3})$ around the point $t^{-2/3}\mathcal{M}_{p,t}^{\infty}$.
However, the key difference from the conceptual argument presented at the
beginning of the proof of Theorem 1.11 is that the centered point
$t^{-2/3}\mathcal{M}_{p,t}^{\infty}$ is random and it does not go to zero.
Rather by Theorem 1.8 it converges in distribution to a nontrivial random
quantity (namely $\Gamma(p\sqrt{2})$). Hence one must take additional care of
this random point. This makes the argument significantly more challenging
compared to that of Theorem 1.11.
Figure 8. An overview of the proof for Proposition 6.1. The top and bottom
black curves are $Y_{M,t,\uparrow}^{(1)}$ and $Y_{M,t,\downarrow}^{(1)}$
respectively. Note that the way they are defined in (6.26),
$Y_{M,t,\uparrow}^{(1)}(x)\geq Y_{M,t,\downarrow}^{(1)}(x)$ with equality at
$x=\Phi=t^{-2/3}\mathcal{M}_{p,t}^{M}$ labelled as the red dot in the above
figure. The blue curves are $Y_{M,t,\uparrow}^{(1)},Y_{M,t,\downarrow}^{(2)}$.
There is no such ordering within blue curves. They may intersect among
themselves as well as with the black curves. With $\alpha=\frac{1}{6}$, we
consider the interval $K_{t}=(\Phi-t^{-\alpha},\Phi+t^{-\alpha})$. In this
vanishing interval around $\Phi$, the curves will be ordered with high
probability. In fact, with high probability, there will be a uniform
separation. For instance, for small enough $\delta$, we will have
$Y_{M,t,\uparrow}^{(2)}(x)-Y_{M,t,\uparrow}^{(1)}(x)\geq\frac{1}{4}\delta$,
and
$Y_{M,t,\downarrow}^{(1)}(x)-Y_{M,t,\downarrow}^{(2)}(x)\geq\frac{1}{4}\delta$,
for all $x\in K_{t}$ with high probability. This will allow us to conclude
that the black curves behave approximately like two-sided
$\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$s in that narrow window. Then, upon
zooming into an even smaller window of $O(t^{-2/3})$, the two-sided
$\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$s turn into a two-sided
$\mathsf{DBM}$.
We now give a road-map of our proof. At this point, readers are also invited
to look into Figure 8 alongside the explanation offered in its caption.
* •
As noted in Lemma 3.1, the random centering
$t^{-2/3}\mathcal{M}_{p,t}^{\infty}$ has decay properties and can be
approximated by $t^{-2/3}\mathcal{M}_{p,t}^{M}$ by taking $M$ large enough.
Hence, on a heuristic level, it suffices to work with the local maximizers
instead. In Subsection 6.2.3, this heuristic will be justified rigorously. We
will show there how to pass from
$Y_{\infty,t,\uparrow}^{(1)},Y_{\infty,t,\downarrow}^{(1)}$ defined in (6.27)
to their finite centering analogs:
$Y_{M,t,\uparrow}^{(1)},Y_{M,t,\downarrow}^{(1)}$. The rest of the proof then
boils down to analyzing the laws of the latter.
* •
We now fix an $M>0$ for the rest of the proof. Our analysis will now operate
with $\mathcal{M}_{p,t}^{M}$. For simplicity, let us also use the notation
$\displaystyle\Phi:=t^{-2/3}\mathcal{M}_{p,t}^{M}$ (6.29)
for the rest of the proof. We now perform several conditioning on the laws of
the curves. Recall that by Proposition 2.5,
$\\{\mathfrak{h}_{pt,\uparrow}^{(n)}(\cdot)\\}_{n\in\mathbb{N}}$
$\\{\mathfrak{h}_{qt,\downarrow}^{(n)}(\cdot)\\}_{n\in\mathbb{N}}$ satisfy the
$\mathbf{H}_{pt}$-Brownian Gibbs property and the $\mathbf{H}_{qt}$-Brownian
Gibbs property respectively with $\mathbf{H}_{t}$ given by (2.4). Conditioned
on the end points of $\mathfrak{h}_{pt,\uparrow}^{(1)}(\pm Mp^{-2/3})$ and
$\mathfrak{h}_{qt,\downarrow}^{(1)}(\pm Mq^{-2/3})$ and the second curves
$\mathfrak{h}_{pt,\uparrow}^{(2)}(\cdot)$ and
$\mathfrak{h}_{qt,\downarrow}^{(2)}(\cdot)$, the laws of
$\mathfrak{h}_{pt,\uparrow}^{(1)}(\cdot)$ and
$\mathfrak{h}_{qt,\downarrow}^{(1)}(\cdot)$ are absolutely continuous w.r.t.
Brownian bridges with appropriate end points. This conditioning is done in
Subsection 6.2.5.
* •
We then condition further on the max data:
$\mathcal{M}_{p,t}^{M},\mathfrak{h}_{pt,\uparrow}^{(1)}((pt)^{-2/3}\mathcal{M}_{p,t}^{M}),\mathfrak{h}_{qt,\downarrow}^{(1)}((qt)^{-2/3}\mathcal{M}_{p,t}^{M})$.
Under this conditioning, via the decomposition result in Proposition 4.10, the
underlying Brownian bridges mentioned in the previous point, when viewed from
the joint maximizer, become two-sided
$\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$s as defined in Definition 4.4. This
viewpoint from the joint maximizer is given by
$Y_{M,t,\uparrow}^{(1)},Y_{M,t,\downarrow}^{(1)}$. See Figure 8 for more
details.
* •
We emphasize the fact that the deduction of
$\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$s done above is only for the
underlying Brownian law. One still needs to analyze the Radon-Nikodym (RN)
derivative. As we are interested in the laws of
$Y_{M,t,\uparrow}^{(1)},Y_{M,t,\downarrow}^{(1)}$ on an interval of order
$t^{-2/3}$ around $\Phi$, we analyze the RN derivative only on a small
interval around $\Phi$. To be precise, we consider a slightly larger yet
vanishing interval of length $2t^{-\alpha}$ for $\alpha=\frac{1}{6}$ around
the random point $\Phi$. We show that the RN derivative on this small random
patch is close to $1$. Thus, upon further conditioning on the boundary data of
this small random interval, the trajectories of $Y_{M,t,\uparrow}^{(1)}$ and
$Y_{M,t,\downarrow}^{(1)}$ defined in (6.26) around $\Phi$ turn out to be
close to a two-sided $\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$ with appropriate
(random) endpoints.
* •
Finally, we zoom further into a tiny interval of order $O(t^{-2/3})$ symmetric
around the random point $\Phi$. Utilizing Lemma 5.3, we convert the two-sided
$\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$s to two-sided $\mathsf{DBM}$s.
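The zoom-in carried out in the last two bullet points can be mimicked numerically. Below is a minimal, purely illustrative sketch (not part of the proof): a single Brownian bridge stands in for the top curve, its maximizer plays the role of $\Phi$, and we examine the path on a window of width $t^{-2/3}$ around it after blowing up by $t^{1/3}$. All names and parameter choices here are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_bridge(n, a=-1.0, b=1.0):
    """Sample a standard Brownian bridge on [a, b] over n + 1 grid points."""
    x = np.linspace(a, b, n + 1)
    dt = (b - a) / n
    w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
    return x, w - (x - a) / (b - a) * w[-1]  # pin both endpoints to zero

t = 1e6                        # large parameter playing the role of t
x, h = brownian_bridge(200000)
i_max = np.argmax(h)           # discrete analog of the maximizer
phi = x[i_max]                 # plays the role of Phi

# zoom into a window of width ~ t^{-2/3} around phi, blown up by t^{1/3},
# mirroring the scaling used in the definition of D_1, D_2
window = np.abs(x - phi) <= t ** (-2 / 3)
zoomed = t ** (1 / 3) * (h[i_max] - h[window])

assert zoomed.min() >= 0.0     # viewed from its maximum, the path is nonnegative
```

At this resolution the recentered path is simply locally Brownian; the content of the proof is that, for the actual pair of ensembles, the rescaled limit is exactly a two-sided $\mathsf{DBM}$.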
We now provide an outline of the rest of the subsections. In Subsection 6.2.3
we reduce our proof from understanding laws around global maximizers to that
of local maximizers. As explained in the above road-map, the proof follows by
performing several successive conditioning. On a technical level, this
requires defining several high probability events on which we can carry out
our conditional analysis. These events are all defined in Subsection 6.2.4 and
are claimed to happen with high probability in Lemma 6.2. We then execute the
first layer of conditioning in Subsection 6.2.5. The two other layers of
conditioning are done in Subsection 6.2.6. Lemma 6.6 and Lemma 6.7 are the
precise technical expressions for the heuristic claims in the last two bullet
points of the road-map. Assuming them, we complete the final steps of the
proof in Subsection 6.2.7. The proof of Lemma 6.2 is then presented in
Subsection 6.2.8. Finally, in Subsection 6.2.9, we prove the remaining
lemmas: Lemmas 6.6 and 6.7.
#### 6.2.3. Global to Local maximizer
We now fill out the technical details of the road-map presented in the
previous subsection. Fix any $a>0$. Consider any Borel set $A$ of
$C([-a,a]\to\mathbb{R}^{2})$ which is a continuity set of a two-sided
$\mathsf{DBM}$ $\mathcal{D}(\cdot)$ restricted to $[-a,a].$ By the Portmanteau
theorem, it is enough to show that
$\displaystyle\mathbb{P}((D_{1}(\cdot,t),D_{2}(\cdot,t))\in
A)\rightarrow\mathbb{P}(\mathcal{D}(\cdot)\in A),$ (6.30)
where $D_{1},D_{2}$ are defined in (6.23). In this subsection, we describe how
it suffices to check (6.30) with $\mathcal{M}_{p,t}^{M}$. Recall
$D_{M,t,\uparrow}(\cdot),D_{M,t,\downarrow}(\cdot)$ from (6.28). We claim that
for all $M>0$:
$\displaystyle\lim_{t\to\infty}\mathbb{P}((D_{M,t,\uparrow}(\cdot),D_{M,t,\downarrow}(\cdot))\in
A)=\mathbb{P}(\mathcal{D}(\cdot)\in A).$ (6.31)
Note that when $\mathcal{M}_{p,t}^{\infty}=\mathcal{M}_{p,t}^{M}$,
$(D_{M,t,\uparrow}(\cdot),D_{M,t,\downarrow}(\cdot))$ is exactly equal to
$\big{(}t^{1/3}Y_{\infty,t,\uparrow}^{(1)}\big{(}t^{-2/3}(\mathcal{M}_{p,t}^{\infty}+\cdot)\big{)},t^{1/3}Y_{\infty,t,\downarrow}^{(1)}\big{(}t^{-2/3}(\mathcal{M}_{p,t}^{\infty}+\cdot)\big{)}\big{)},$
which via (6.27) is the same in distribution as $(D_{1}(\cdot,t),D_{2}(\cdot,t))$.
Thus,
$\displaystyle{\left|\mathbb{P}((D_{1}(\cdot,t),D_{2}(\cdot,t))\in
A)-\mathbb{P}((D_{M,t,\uparrow}(\cdot),D_{M,t,\downarrow}(\cdot))\in
A)\right|\leq 2\mathbb{P}(\mathcal{M}_{p,t}^{\infty}\neq\mathcal{M}_{p,t}^{M}).}$
Now given any $\varepsilon>0$, by Lemma 3.1, we can take $M=M(\varepsilon)>0$
large enough so that
$2\mathbb{P}(\mathcal{M}_{p,t}^{\infty}\neq\mathcal{M}_{p,t}^{M})\leq\varepsilon$. Then
upon taking $t\to\infty$ in the above equation, in view of (6.31), we see that
$\limsup_{t\to\infty}\left|\mathbb{P}((D_{1}(\cdot,t),D_{2}(\cdot,t))\in
A)-\mathbb{P}(\mathcal{D}(\cdot)\in A)\right|\leq\varepsilon.$
As $\varepsilon$ is arbitrary, this proves (6.30). The rest of the proof is
now devoted to proving (6.31).
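The comparison above rests on the elementary coupling bound $|\mathbb{P}(X\in A)-\mathbb{P}(Y\in A)|\leq\mathbb{P}(X\neq Y)$. A quick numerical sanity check of this bound, with uniform random variables standing in for the two recentered processes (everything in this sketch is our own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200000
x = rng.uniform(0.0, 1.0, n)                     # plays the role of D_1
flip = rng.uniform(0.0, 1.0, n) < 0.05           # small discrepancy event
y = np.where(flip, rng.uniform(0.0, 1.0, n), x)  # agrees with x off `flip`

for a in (0.1, 0.3, 0.7):
    lhs = abs(np.mean(x < a) - np.mean(y < a))   # |P(X in A) - P(Y in A)|
    rhs = np.mean(x != y)                        # P(X != Y)
    assert lhs <= rhs + 1e-12                    # coupling bound, holds samplewise
```

The inequality holds samplewise here: the two empirical measures can differ only on the samples where `x` and `y` disagree, which is the mechanism the text exploits via $\mathbb{P}(\mathcal{M}_{p,t}^{\infty}\neq\mathcal{M}_{p,t}^{M})$.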
#### 6.2.4. Nice events
In this subsection, we focus on defining several events that are collectively
‘nice’ in the sense that they happen with high probability. We fix an $M>0$
for the rest of the proof and work with the local maximizer
$\mathcal{M}_{p,t}^{M}$ defined in (6.25). We will also make use of the
notation $\Phi$ defined in (6.29) heavily in this and subsequent subsections.
We now proceed to define a few events based on the location and value of the
maximizer and values at the endpoints of an appropriate interval. Fix any
arbitrary $\delta>0$. Let us consider the event:
$\displaystyle\mathsf{ArMx}(\delta):=\left\\{\Phi\in[-M+\delta,M-\delta]\right\\}.$
(6.32)
The event $\mathsf{ArMx}(\delta)$ controls the location of the local maximizer
$\Phi$. Set $\alpha=\frac{1}{6}$. We define tightness events that correspond
to the boundary of the interval of length $2t^{-\alpha}$ around $\Phi:$
$\displaystyle\mathsf{Bd}_{\uparrow}(\delta)$
$\displaystyle:=\mathsf{Bd}_{+,\uparrow}(\delta)\cap\mathsf{Bd}_{-,\uparrow}(\delta),\quad\mathsf{Bd}_{\downarrow}(\delta):=\mathsf{Bd}_{+,\downarrow}(\delta)\cap\mathsf{Bd}_{-,\downarrow}(\delta),$
(6.33)
where
$\displaystyle\mathsf{Bd}_{\pm,\uparrow}(\delta)$
$\displaystyle:=\left\\{\left|\mathfrak{h}_{pt,\uparrow}^{(1)}\big{(}p^{-2/3}(\Phi\pm
t^{-\alpha})\big{)}-\mathfrak{h}_{pt,\uparrow}^{(1)}\big{(}\Phi
p^{-2/3}\big{)}\right|\leq\tfrac{1}{\delta}t^{-\alpha/2}\right\\}$ (6.34)
$\displaystyle\mathsf{Bd}_{\pm,\downarrow}(\delta)$
$\displaystyle:=\left\\{\left|\mathfrak{h}_{qt,\downarrow}^{(1)}\big{(}q^{-2/3}(\Phi\pm
t^{-\alpha})\big{)}-\mathfrak{h}_{qt,\downarrow}^{(1)}\big{(}\Phi
q^{-2/3}\big{)}\right|\leq\tfrac{1}{\delta}t^{-\alpha/2}\right\\}.$
Finally, we consider the gap events that provide a gap between the first curve
and the second curve for each of the line ensembles:
$\displaystyle\mathsf{Gap}_{M,\uparrow}(\delta)$
$\displaystyle:=\left\\{p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(1)}\big{(}\Phi
p^{-2/3}\big{)}\geq p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(2)}\big{(}\Phi
p^{-2/3}\big{)}+\delta\right\\},$ (6.35)
$\displaystyle\mathsf{Gap}_{M,\downarrow}(\delta)$
$\displaystyle:=\left\\{q^{1/3}\mathfrak{h}_{qt,\downarrow}^{(1)}\big{(}\Phi
q^{-2/3}\big{)}\geq q^{1/3}\mathfrak{h}_{qt,\downarrow}^{(2)}\big{(}\Phi
q^{-2/3}\big{)}+\delta\right\\}.$ (6.36)
We next define the ‘rise’ events, which roughly say that the second curves
$\mathfrak{h}_{pt,\uparrow}^{(2)}$ and $\mathfrak{h}_{qt,\downarrow}^{(2)}$ of
the line ensembles do not rise too much on a small interval of length
$2t^{-\alpha}$ around $\Phi p^{-2/3}$ and $\Phi q^{-2/3}$ respectively.
$\displaystyle\mathsf{Rise}_{M,\uparrow}(\delta)$
$\displaystyle:=\left\\{\sup_{x\in[-t^{-\alpha},t^{-\alpha}]}p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(2)}\big{(}\Phi
p^{-2/3}+x\big{)}\leq p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(2)}\big{(}\Phi
p^{-2/3}\big{)}+\tfrac{\delta}{4}\right\\},$ (6.37)
$\displaystyle\mathsf{Rise}_{M,\downarrow}(\delta)$
$\displaystyle:=\left\\{\sup_{x\in[-t^{-\alpha},t^{-\alpha}]}q^{1/3}\mathfrak{h}_{qt,\downarrow}^{(2)}\big{(}\Phi
q^{-2/3}+x\big{)}\leq q^{1/3}\mathfrak{h}_{qt,\downarrow}^{(2)}\big{(}\Phi
q^{-2/3}\big{)}+\tfrac{\delta}{4}\right\\}.$ (6.38)
$\mathsf{Bd}$, $\mathsf{Gap}$, and $\mathsf{Rise}$ type events and their
significance are discussed in greater detail in Subsection 6.2.8. See
also Figure 9 and its caption for an explanation of some of these events. We
put all the above events into one final event:
$\displaystyle\mathsf{Nice}_{M}(\delta):=\left\\{\mathsf{ArMx}(\delta)\cap\bigcap_{x\in\\{\uparrow,\downarrow\\}}\mathsf{Bd}_{x}(\delta)\cap\mathsf{Gap}_{M,x}(\delta)\cap\mathsf{Rise}_{M,x}(\delta)\right\\}.$
(6.39)
All the above events depend on $t$, but we have suppressed this dependence
from the notation. The event $\mathsf{Nice}_{M}(\delta)$ turns out to be
favorable. We isolate this fact as a lemma below.
###### Lemma 6.2.
For any $M>0$, under the above setup we have
$\displaystyle\liminf_{\delta\downarrow
0}\liminf_{t\to\infty}\mathbb{P}_{t}\left(\mathsf{Nice}_{M}(\delta)\right)=1.$
(6.40)
We postpone the proof of this technical lemma to Subsection 6.2.8 and for the
moment we continue with the proof of Proposition 6.1 assuming its validity.
#### 6.2.5. Conditioning with respect to large boundaries
As alluded to in Subsection 6.2.2, the proof involves conditioning on different
$\sigma$-fields successively. We now specify all the different $\sigma$-fields
that we will use throughout the proof. Set $\alpha=\frac{1}{6}$. We consider
the random interval
$\displaystyle K_{t}:=(\Phi-t^{-\alpha},\Phi+t^{-\alpha}).$ (6.41)
Let us define:
$\displaystyle\mathcal{F}_{1}$
$\displaystyle:=\sigma\left(\left\\{\mathfrak{h}_{pt,\uparrow}^{(1)}(p^{-2/3}x),\mathfrak{h}_{qt,\downarrow}^{(1)}(q^{-2/3}x)\right\\}_{x\in(-M,M)^{c}},\left\\{\mathfrak{h}_{pt,\uparrow}^{(2)}(x),\mathfrak{h}_{qt,\downarrow}^{(2)}(x)\right\\}_{x\in\mathbb{R}}\right)$
(6.42) $\displaystyle\mathcal{F}_{2}$
$\displaystyle:=\sigma\left(\Phi,\mathfrak{h}_{pt,\uparrow}^{(1)}(\Phi
p^{-2/3}),\mathfrak{h}_{qt,\downarrow}^{(1)}(\Phi q^{-2/3})\right),$ (6.43)
$\displaystyle\mathcal{F}_{3}$
$\displaystyle:=\sigma\left(\left\\{\mathfrak{h}_{pt,\uparrow}^{(1)}(p^{-2/3}x),\mathfrak{h}_{qt,\downarrow}^{(1)}(q^{-2/3}x)\right\\}_{x\in
K_{t}^{c}}\right).$ (6.44)
In this step we perform conditioning w.r.t. $\mathcal{F}_{1}$ for the
expression on the l.h.s. of (6.31). We denote
$\mathbb{P}_{t}(A):=\mathbb{P}\big{(}(D_{M,t,\uparrow}(\cdot),D_{M,t,\downarrow}(\cdot))\in
A\big{)}$. Taking the $\mathsf{Nice}_{M}(\delta)$ event defined in (6.39)
under consideration, upon conditioning with $\mathcal{F}_{1}$ we have the
following upper and lower bounds:
$\displaystyle\mathbb{P}_{t}(A)$
$\displaystyle\geq\mathbb{P}_{t}(\mathsf{Nice}_{M}(\delta),A)=\mathbb{E}_{t}\left[\mathbb{P}_{t}(\mathsf{Nice}_{M}(\delta),A\mid\mathcal{F}_{1})\right],$
(6.45) $\displaystyle\mathbb{P}_{t}(A)$
$\displaystyle\leq\mathbb{P}_{t}(\mathsf{Nice}_{M}(\delta),A)+\mathbb{P}_{t}(\neg\mathsf{Nice}_{M}(\delta))=\mathbb{E}_{t}\left[\mathbb{P}_{t}(\mathsf{Nice}_{M}(\delta),A\mid\mathcal{F}_{1})\right]+\mathbb{P}_{t}(\neg\mathsf{Nice}_{M}(\delta)).$
(6.46)
Note that the underlying measure consists of the mutually independent
$\mathfrak{h}_{pt,\uparrow}^{(1)}(\cdot)$ and
$\mathfrak{h}_{qt,\downarrow}^{(1)}(\cdot)$, which by Proposition 2.5 satisfy
the $\mathbf{H}_{pt}$- and $\mathbf{H}_{qt}$-Brownian Gibbs properties
respectively. Applying the respective Brownian Gibbs properties and following
(2.3) we have
$\displaystyle\mathbb{P}_{t}(\mathsf{Nice}_{M}(\delta),A\mid\mathcal{F}_{1})=\frac{\mathbb{E}_{\operatorname{free},t}[\mathbf{1}_{\mathsf{Nice}_{M}(\delta),A}W_{\uparrow}W_{\downarrow}]}{\mathbb{E}_{\operatorname{free},t}[W_{\uparrow}W_{\downarrow}]}.$
(6.47)
Here
$\displaystyle
W_{\uparrow}:=\exp\left(-t^{2/3}\int_{-M}^{M}\exp\left(t^{1/3}\big{[}p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(2)}(p^{-2/3}x)-p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(1)}(p^{-2/3}x)\big{]}\right)\mathrm{d}x\right)$
(6.48)
and
$\displaystyle
W_{\downarrow}:=\exp\left(-t^{2/3}\int_{-M}^{M}\exp\left(t^{1/3}\big{[}q^{1/3}\mathfrak{h}_{qt,\downarrow}^{(2)}(q^{-2/3}x)-q^{1/3}\mathfrak{h}_{qt,\downarrow}^{(1)}(q^{-2/3}x)\big{]}\right)\mathrm{d}x\right).$
(6.49)
In (6.47), $\mathbb{P}_{\operatorname{free},t}$ and
$\mathbb{E}_{\operatorname{free},t}$ are the probability and the expectation
operator respectively corresponding to the joint ‘free’ law for
$(p^{1/3}\mathfrak{h}_{pt,\uparrow}(p^{-2/3}x),q^{1/3}\mathfrak{h}_{qt,\downarrow}(q^{-2/3}x))_{x\in[-M,M]}$
which by Brownian scaling is given by a pair of independent Brownian bridges
$(B_{1}(\cdot),B_{2}(\cdot))$ on $[-M,M]$ with starting points
$(p^{1/3}\mathfrak{h}_{pt,\uparrow}(-Mp^{-2/3}),q^{1/3}\mathfrak{h}_{qt,\downarrow}(-Mq^{-2/3}))$
and endpoints
$(p^{1/3}\mathfrak{h}_{pt,\uparrow}(Mp^{-2/3}),q^{1/3}\mathfrak{h}_{qt,\downarrow}(Mq^{-2/3})).$
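The weights $W_{\uparrow},W_{\downarrow}$ act as soft non-intersection penalties: when the first curve stays a fixed gap above the second, the inner exponential in (6.48) is uniformly tiny and $W\approx 1$, whereas any crossing drives $W$ to $0$ as $t\to\infty$. A small numerical sketch of this mechanism (the cosine curves and all parameter choices here are our own, purely for illustration):

```python
import numpy as np

def soft_penalty(h1, h2, x, t):
    """W = exp(-t^(2/3) * integral of exp(t^(1/3) * (h2 - h1)) dx), cf. (6.48)."""
    integrand = np.exp(t ** (1 / 3) * (h2 - h1))
    # trapezoid rule for the integral over the grid x
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))
    return np.exp(-t ** (2 / 3) * integral)

x = np.linspace(-1.0, 1.0, 2001)
h_top = np.cos(x)           # stand-in for the top curve h^(1)
h_gap = h_top - 0.5         # second curve keeping a uniform gap of 0.5 below
h_cross = h_top + 0.1 * x   # second curve crossing the top one at x = 0

w_gap = soft_penalty(h_top, h_gap, x, t=1e6)
w_cross = soft_penalty(h_top, h_cross, x, t=1e6)
# with a positive gap the penalty is negligible (w_gap close to 1);
# a crossing makes the weight vanish, enforcing non-intersection as t grows
```

This is exactly why the $\mathsf{Gap}$ and $\mathsf{Rise}$ events of Subsection 6.2.4 are engineered: they keep the curves separated near $\Phi$, so that the local Radon-Nikodym factor $W_{\uparrow,1}W_{\downarrow,1}$ is close to $1$.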
#### 6.2.6. Conditioning with respect to maximum data and small boundaries
In this subsection we perform conditioning on the numerator of the r.h.s. of
(6.47) w.r.t. $\mathcal{F}_{2}$ and $\mathcal{F}_{3}$ defined in (6.43) and
(6.44). Recall that by Proposition 4.10, upon conditioning Brownian bridges on
$\mathcal{F}_{2}$, the conditional laws around the joint local maximizer
$\Phi$ over $[-M,M]$ are now given by two
$\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$s (defined in Definition 4.4) with
appropriate lengths and endpoints. Indeed, based on Proposition 4.10, given
$\mathcal{F}_{1},\mathcal{F}_{2}$, we may construct the conditional laws for
the two functions on $[-M,M]$:
###### Definition 6.3 ($\mathsf{Nlarge}$ Law).
Consider two independent $\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$s
$V_{\ell}^{\mathsf{large}}$ and $V_{r}^{\mathsf{large}}$ with the following
description:
1. (1)
$V_{\ell}^{\mathsf{large}}$ is a $\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$ on
$[0,\Phi+M]$ ending at
$\left(p^{1/3}\left[\mathfrak{h}_{pt,\uparrow}^{(1)}(\Phi
p^{-2/3})-\mathfrak{h}_{pt,\uparrow}^{(1)}(-Mp^{-2/3})\right],q^{1/3}\left[\mathfrak{h}_{qt,\downarrow}^{(1)}(-Mq^{-2/3})-\mathfrak{h}_{qt,\downarrow}^{(1)}(\Phi
q^{-2/3})\right]\right),$
2. (2)
$V_{r}^{\mathsf{large}}$ is a $\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$ on
$[0,M-\Phi]$ ending at
$\left(p^{1/3}\left[\mathfrak{h}_{pt,\uparrow}^{(1)}(\Phi
p^{-2/3})-\mathfrak{h}_{pt,\uparrow}^{(1)}(Mp^{-2/3})\right],q^{1/3}\left[\mathfrak{h}_{qt,\downarrow}^{(1)}(Mq^{-2/3})-\mathfrak{h}_{qt,\downarrow}^{(1)}(\Phi
q^{-2/3})\right]\right).$
We then define ${B}^{\mathsf{large}}:[-M,M]\to\mathbb{R}^{2}$ as follows:
$\displaystyle{B}^{\mathsf{large}}(x)=\begin{cases}V_{\ell}^{\mathsf{large}}(\Phi-x)&x\in[-M,\Phi]\\\
V_{r}^{\mathsf{large}}(x-\Phi)&x\in[\Phi,M]\end{cases}.$
We denote the expectation and probability operators under the above law for
${B}^{\mathsf{large}}$ (which depends on $\mathcal{F}_{1},\mathcal{F}_{2}$) as
$\mathbb{E}_{\mathsf{Nlarge|2,1}}$ and $\mathbb{P}_{\mathsf{Nlarge|2,1}}$.
Thus we may write
$\displaystyle\mathbb{E}_{\operatorname{free},t}[\mathbf{1}_{\mathsf{Nice}_{M}(\delta),A}W_{\uparrow}W_{\downarrow}]$
$\displaystyle=\mathbb{E}_{\operatorname{free},t}[\mathbb{E}_{\mathsf{Nlarge|2,1}}[\mathbf{1}_{\mathsf{Nice}_{M}(\delta),A}W_{\uparrow}W_{\downarrow}]].$
(6.50)
Since $\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$s are Markovian, we may
condition further upon $\mathcal{F}_{3}$ to get
$\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$s again but on a smaller interval.
To precisely define the law, we now give the following definitions:
###### Definition 6.4 ($\mathsf{Nsmall}$ law).
Consider two independent $\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$s
$V_{\ell}^{\mathsf{small}}$ and $V_{r}^{\mathsf{small}}$ with the following
descriptions:
1. (1)
$V_{\ell}^{\mathsf{small}}$ is a $\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$ on
$[0,t^{-\alpha}]$ ending at
$\left(p^{1/3}\left[\mathfrak{h}_{pt,\uparrow}^{(1)}(\Phi
p^{-2/3})-\mathfrak{h}_{pt,\uparrow}^{(1)}(p^{-2/3}(\Phi-t^{-\alpha}))\right],q^{1/3}\left[\mathfrak{h}_{qt,\downarrow}^{(1)}(q^{-2/3}(\Phi-t^{-\alpha}))-\mathfrak{h}_{qt,\downarrow}^{(1)}(\Phi
q^{-2/3})\right]\right),$
2. (2)
$V_{r}^{\mathsf{small}}$ is a $\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$ on
$[0,t^{-\alpha}]$ ending at
$\left(p^{1/3}\left[\mathfrak{h}_{pt,\uparrow}^{(1)}(\Phi
p^{-2/3})-\mathfrak{h}_{pt,\uparrow}^{(1)}(p^{-2/3}(\Phi+t^{-\alpha}))\right],q^{1/3}\left[\mathfrak{h}_{qt,\downarrow}^{(1)}(q^{-2/3}(\Phi+t^{-\alpha}))-\mathfrak{h}_{qt,\downarrow}^{(1)}(\Phi
q^{-2/3})\right]\right).$
We then define
${B}^{\mathsf{small}}:[\Phi-t^{-\alpha},\Phi+t^{-\alpha}]\to\mathbb{R}^{2}$ as
follows:
$\displaystyle{B}^{\mathsf{small}}(x)=\begin{cases}V_{\ell}^{\mathsf{small}}(\Phi-x)&x\in[\Phi-t^{-\alpha},\Phi]\\\
V_{r}^{\mathsf{small}}(x-\Phi)&x\in[\Phi,\Phi+t^{-\alpha}]\end{cases}.$
We denote the expectation and probability operators under the above law
for ${B}^{\mathsf{small}}$ (which depends on
$\mathcal{F}_{1},\mathcal{F}_{2},\mathcal{F}_{3}$) as
$\mathbb{E}_{\mathsf{Nsmall|3,2,1}}$ and $\mathbb{P}_{\mathsf{Nsmall|3,2,1}}$
respectively.
We thus have
r.h.s. of (6.50)
$\displaystyle=\mathbb{E}_{\operatorname{free},t}[\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}\mathbb{E}_{\mathsf{Nsmall|3,2,1}}[\mathbf{1}_{A}W_{\uparrow}W_{\downarrow}]].$
(6.51)
The $\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}$ comes out of the interior
expectation above as $\mathsf{Nice}_{M}(\delta)$ is measurable w.r.t.
$\mathcal{F}_{1}\cup\mathcal{F}_{2}\cup\mathcal{F}_{3}$ (see its definition in
(6.39)).
Next note that, due to the definitions of $W_{\uparrow},W_{\downarrow}$ in
(6.48) and (6.49), we may extract certain parts of them which are measurable
w.r.t. $\mathcal{F}_{1}\cup\mathcal{F}_{2}\cup\mathcal{F}_{3}$. Indeed, we can
write $W_{\uparrow}=W_{\uparrow,1}W_{\uparrow,2}$ and
$W_{\downarrow}=W_{\downarrow,1}W_{\downarrow,2}$ where
$\displaystyle
W_{\uparrow,1}:=\exp\left(-t^{2/3}\int_{K_{t}}\exp\left(t^{1/3}\big{[}p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(2)}(p^{-2/3}x)-p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(1)}(p^{-2/3}x)\big{]}\right)\mathrm{d}x\right)$
(6.52) $\displaystyle W_{\uparrow,2}:=\exp\left(-t^{2/3}\int_{[-M,M]\cap
K_{t}^{c}}\exp\left(t^{1/3}\big{[}p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(2)}(p^{-2/3}x)-p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(1)}(p^{-2/3}x)\big{]}\right)\mathrm{d}x\right),$
and
$\displaystyle
W_{\downarrow,1}:=\exp\left(-t^{2/3}\int_{K_{t}}\exp\left(t^{1/3}\big{[}q^{1/3}\mathfrak{h}_{qt,\downarrow}^{(2)}(q^{-2/3}x)-q^{1/3}\mathfrak{h}_{qt,\downarrow}^{(1)}(q^{-2/3}x)\big{]}\right)\mathrm{d}x\right).$
(6.53) $\displaystyle W_{\downarrow,2}:=\exp\left(-t^{2/3}\int_{[-M,M]\cap
K_{t}^{c}}\exp\left(t^{1/3}\big{[}q^{1/3}\mathfrak{h}_{qt,\downarrow}^{(2)}(q^{-2/3}x)-q^{1/3}\mathfrak{h}_{qt,\downarrow}^{(1)}(q^{-2/3}x)\big{]}\right)\mathrm{d}x\right),$
where recall $K_{t}$ from (6.41). The key observation is that
$W_{\uparrow,2},W_{\downarrow,2}$ are measurable w.r.t.
$\mathcal{F}_{1}\cup\mathcal{F}_{2}\cup\mathcal{F}_{3}$. Thus we have
r.h.s. of (6.51)
$\displaystyle=\mathbb{E}_{\operatorname{free},t}[\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}W_{\uparrow,2}W_{\downarrow,2}\cdot\mathbb{E}_{\mathsf{Nsmall|3,2,1}}[\mathbf{1}_{A}W_{\uparrow,1}W_{\downarrow,1}]].$
(6.54)
###### Remark 6.5.
It is crucial to note that in (6.51) the event $\mathsf{Nice}_{M}(\delta)$
includes the event $\mathsf{ArMx}(\delta)$ defined in (6.32). Indeed, the
$\mathsf{ArMx}(\delta)$ event is measurable w.r.t.
$\mathcal{F}_{1}\cup\mathcal{F}_{2}$ and ensures that
$[\Phi-t^{-\alpha},\Phi+t^{-\alpha}]\subset[-M,M]$ for all large enough $t$,
which is essential for going from $\mathsf{Nlarge}$ law to $\mathsf{Nsmall}$
law. Thus such a decomposition is not possible for
$\mathbb{E}_{\operatorname{free},t}[W_{\uparrow}W_{\downarrow}]$ which appears
in the denominator of r.h.s. of (6.47). Nonetheless, we may still provide a
lower bound for
$\mathbb{E}_{\operatorname{free},t}[W_{\uparrow}W_{\downarrow}]$ as follows:
$\displaystyle\mathbb{E}_{\operatorname{free},t}[W_{\uparrow}W_{\downarrow}]\geq\mathbb{E}_{\operatorname{free},t}[\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}W_{\uparrow}W_{\downarrow}]=\mathbb{E}_{\operatorname{free},t}[W_{\uparrow,2}W_{\downarrow,2}\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}\cdot\mathbb{E}_{\mathsf{Nsmall|3,2,1}}[W_{\uparrow,1}W_{\downarrow,1}]].$
(6.55)
With the deductions in (6.54) and (6.55), we now come to the task of analyzing
$W_{\uparrow,1}W_{\downarrow,1}$ under the $\mathsf{Nsmall}$ law. The following
lemma ensures that on $\mathsf{Nice}_{M}(\delta)$,
$W_{\uparrow,1}W_{\downarrow,1}$ is close to $1$ under the $\mathsf{Nsmall}$ law.
###### Lemma 6.6.
There exists $t_{0}(\delta)>0$ such that for all $t\geq t_{0}$ we have
$\displaystyle\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}\mathbb{P}_{\mathsf{Nsmall|3,2,1}}(W_{\uparrow,1}W_{\downarrow,1}>1-\delta)\geq\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}(1-\delta).$
(6.56)
This allows us to ignore $W_{\uparrow,1}W_{\downarrow,1}$ in
$\mathbb{E}_{\mathsf{Nsmall|3,2,1}}[\mathbf{1}_{A}W_{\uparrow,1}W_{\downarrow,1}]$.
Hence it suffices to study $\mathbb{P}_{\mathsf{Nsmall|3,2,1}}(A)$. The
following lemma then compares this conditional probability with that of
$\mathsf{DBM}$.
###### Lemma 6.7.
There exists $t_{0}(\delta)>0$ such that for all $t\geq t_{0}$ we have
$\displaystyle\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}|\mathbb{P}_{\mathsf{Nsmall|3,2,1}}(A)-\tau(A)|\leq\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}\cdot\delta,$
(6.57)
where $\tau(A):=\mathbb{P}(\mathcal{D}(\cdot)\in A)$, $\mathcal{D}$ being a
two-sided $\mathsf{DBM}$ defined in the statement of Proposition 6.1.
We prove these two lemmas in Subsection 6.2.9. For now, we proceed with the
proof of (6.31) in the next subsection.
#### 6.2.7. Matching Lower and Upper Bounds
In this subsection, we complete the proof of (6.31) by providing matching
lower and upper bounds in the two steps below. We assume throughout this
subsection that $t$ is large enough so that (6.56) and (6.57) hold.
Step 1: Lower Bound. We start with (6.45). Following the expression in (6.47),
and our deductions in (6.50), (6.51), (6.54) we see that
$\displaystyle\mathbb{P}_{t}(A)$
$\displaystyle\geq\mathbb{E}_{t}\left[\mathbb{P}_{t}(\mathsf{Nice}_{M}(\delta),A\mid\mathcal{F}_{1})\right]$
$\displaystyle=\mathbb{E}\left[\frac{\mathbb{E}_{\operatorname{free},t}[\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}W_{\uparrow,2}W_{\downarrow,2}\cdot\mathbb{E}_{\mathsf{Nsmall|3,2,1}}[\mathbf{1}_{A}W_{\uparrow,1}W_{\downarrow,1}]]}{\mathbb{E}_{\operatorname{free},t}[W_{\uparrow}W_{\downarrow}]}\right]$
(6.58)
$\displaystyle\geq(1-\delta)\mathbb{E}_{t}\left[\frac{\mathbb{E}_{\operatorname{free},t}[\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}W_{\uparrow,2}W_{\downarrow,2}\cdot\mathbb{P}_{\mathsf{Nsmall|3,2,1}}(A,W_{\uparrow,1}W_{\downarrow,1}>1-\delta)]}{\mathbb{E}_{\operatorname{free},t}[W_{\uparrow}W_{\downarrow}]}\right]$
(6.59)
where in the last inequality we used the fact
$W_{\uparrow,1}W_{\downarrow,1}\leq 1$. Now applying Lemma 6.6 and Lemma 6.7
successively we get
$\displaystyle\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}\mathbb{P}_{\mathsf{Nsmall|3,2,1}}(A,W_{\uparrow,1}W_{\downarrow,1}>1-\delta)$
$\displaystyle\geq\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}[\mathbb{P}_{\mathsf{Nsmall|3,2,1}}(A)-\mathbb{P}_{\mathsf{Nsmall|3,2,1}}(W_{\uparrow,1}W_{\downarrow,1}\leq
1-\delta)]$
$\displaystyle\geq\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}[\mathbb{P}_{\mathsf{Nsmall|3,2,1}}(A)-\delta]$
$\displaystyle\geq\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}[\tau(A)-2\delta]$
where recall $\tau(A)=\mathbb{P}(\mathcal{D}(\cdot)\in A)$. As
$W_{\uparrow,1}W_{\downarrow,1}\leq 1$ and probabilities are nonnegative,
following the above inequalities we have
$\displaystyle\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}\mathbb{P}_{\mathsf{Nsmall|3,2,1}}(A,W_{\uparrow,1}W_{\downarrow,1}>1-\delta)\geq\max\\{0,\tau(A)-2\delta\\}\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}W_{\uparrow,1}W_{\downarrow,1}.$
Substituting the above bound back into (6.59) and using the fact that
$W_{\uparrow,2}W_{\downarrow,2}W_{\uparrow,1}W_{\downarrow,1}=W_{\uparrow}W_{\downarrow}$,
we get
$\displaystyle\mathbb{P}_{t}(A)$
$\displaystyle\geq(1-\delta)\max\\{0,\tau(A)-2\delta\\}\mathbb{E}_{t}\left[\frac{\mathbb{E}_{\operatorname{free},t}[\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}W_{\uparrow}W_{\downarrow}]}{\mathbb{E}_{\operatorname{free},t}[W_{\uparrow}W_{\downarrow}]}\right]$
$\displaystyle=(1-\delta)\max\\{0,\tau(A)-2\delta\\}\mathbb{P}_{t}(\mathsf{Nice}_{M}(\delta)).$
In view of Lemma 6.2, taking $\liminf_{t\to\infty}$ followed by
$\liminf_{\delta\downarrow 0}$ we get that
$\liminf_{t\to\infty}\mathbb{P}_{t}(A)\geq\tau(A)$. This proves the lower
bound.
Step 2: Upper Bound. We start with (6.46). Using the equality in (6.58) we get
$\displaystyle\mathbb{P}_{t}(A)$
$\displaystyle\leq\mathbb{E}\left[\frac{\mathbb{E}_{\operatorname{free},t}[\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}W_{\uparrow,2}W_{\downarrow,2}\cdot\mathbb{E}_{\mathsf{Nsmall|3,2,1}}[\mathbf{1}_{A}W_{\uparrow,1}W_{\downarrow,1}]]}{\mathbb{E}_{\operatorname{free},t}[W_{\uparrow}W_{\downarrow}]}\right]+\mathbb{P}_{t}(\neg\mathsf{Nice}_{M}(\delta))$
$\displaystyle\leq\mathbb{E}\left[\frac{\mathbb{E}_{\operatorname{free},t}[\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}W_{\uparrow,2}W_{\downarrow,2}\cdot\mathbb{P}_{\mathsf{Nsmall|3,2,1}}(A)]}{\mathbb{E}_{\operatorname{free},t}[W_{\uparrow}W_{\downarrow}]}\right]+\mathbb{P}_{t}(\neg\mathsf{Nice}_{M}(\delta))$
$\displaystyle\leq(\tau(A)+\delta)\mathbb{E}\left[\frac{\mathbb{E}_{\operatorname{free},t}[\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}W_{\uparrow,2}W_{\downarrow,2}]}{\mathbb{E}_{\operatorname{free},t}[W_{\uparrow}W_{\downarrow}]}\right]+\mathbb{P}_{t}(\neg\mathsf{Nice}_{M}(\delta)).$
(6.60)
Let us briefly justify the inequalities presented above. Going from the first
line to the second line we used the fact that
$W_{\uparrow,1}W_{\downarrow,1}\leq 1$. The last inequality follows from Lemma
6.7, where recall that $\tau(A)=\mathbb{P}(\mathcal{D}(\cdot)\in A).$ Now note
that by Lemma 6.6, on $\mathsf{Nice}_{M}(\delta)$,
$\displaystyle\mathbb{E}_{\mathsf{Nsmall|3,2,1}}[W_{\uparrow,1}W_{\downarrow,1}]$
$\displaystyle\geq\mathbb{E}_{\mathsf{Nsmall|3,2,1}}[\mathbf{1}_{W_{\uparrow,1}W_{\downarrow,1}\geq
1-\delta}\cdot W_{\uparrow,1}W_{\downarrow,1}]$
$\displaystyle\geq(1-\delta)\mathbb{P}_{\mathsf{Nsmall|3,2,1}}({W_{\uparrow,1}W_{\downarrow,1}\geq
1-\delta})\geq(1-\delta)^{2}.$
Using the expression from (6.55) we thus have
$\displaystyle\mathbb{E}_{\operatorname{free},t}[W_{\uparrow}W_{\downarrow}]$
$\displaystyle\geq\mathbb{E}_{\operatorname{free},t}[\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}W_{\uparrow,2}W_{\downarrow,2}\cdot\mathbb{E}_{\mathsf{Nsmall|3,2,1}}[W_{\uparrow,1}W_{\downarrow,1}]]$
$\displaystyle\geq(1-\delta)^{2}\mathbb{E}_{\operatorname{free},t}[\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}W_{\uparrow,2}W_{\downarrow,2}].$
Going back to (6.60), this forces
$\displaystyle\mbox{r.h.s.\ of
(6.60)}\leq\frac{\tau(A)+\delta}{(1-\delta)^{2}}+\mathbb{P}_{t}(\neg\mathsf{Nice}_{M}(\delta)).$
In view of Lemma 6.2, taking $\limsup_{t\to\infty}$ followed by
$\limsup_{\delta\downarrow 0}$ in the above inequality, we get that
$\limsup_{t\to\infty}\mathbb{P}_{t}(A)\leq\tau(A).$ Along with the matching
lower bound obtained in Step 1 above, this establishes (6.31).
#### 6.2.8. Proof of Lemma 6.2
Recall from (6.39) that the $\mathsf{Nice}_{M}(\delta)$ event is an
intersection of several kinds of events. To show (6.40), it suffices to prove
the same for each of the events. That is, given an event $\mathsf{E}$ which is
part of $\mathsf{Nice}_{M}(\delta)$, we will show
$\displaystyle\liminf_{\delta\downarrow
0}\liminf_{t\to\infty}\mathbb{P}(\mathsf{E})=1.$
(6.61)
Below we analyze each such possible choice for $\mathsf{E}$ separately.
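The reduction to individual events is just the union bound $\mathbb{P}(\bigcap_{i}\mathsf{E}_{i})\geq 1-\sum_{i}\mathbb{P}(\neg\mathsf{E}_{i})$: if each complement has vanishing probability in the iterated limit, so does the complement of the intersection. A quick Monte Carlo illustration (the five independent events and their common probability are our own stand-ins):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500000
# five independent high-probability events standing in for the pieces of Nice_M(delta)
events = [rng.uniform(size=n) < 0.98 for _ in range(5)]

p_intersection = np.mean(np.logical_and.reduce(events))
union_bound = 1.0 - sum(1.0 - np.mean(e) for e in events)

# P(intersection) >= 1 - sum of complement probabilities (holds samplewise,
# since the indicator of the union of complements is at most the sum of indicators)
assert p_intersection >= union_bound
```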
$\mathsf{ArMx}(\delta)$ event. Recall $\mathsf{ArMx}(\delta)$ event from
(6.32). As noted in (3.9),
$\mathcal{M}_{p,t}^{M}\stackrel{{\scriptstyle
d}}{{\to}}\mathop{\mathrm{argmax}}_{x\in[-M,M]}\mathcal{A}(x),$
where $\mathcal{A}$ is defined in (3.8). Since $\mathcal{A}$ restricted to
$[-M,M]$ is absolutely continuous w.r.t. Brownian motion with an appropriate
diffusion coefficient,
$\mathop{\mathrm{argmax}}_{x\in[-M,M]}\mathcal{A}(x)\in(-M,M)$ almost surely.
In other words, the maximum is almost surely not attained on the boundary. But
then
$\displaystyle\liminf_{\delta\downarrow
0}\liminf_{t\to\infty}\mathbb{P}(\mathsf{ArMx}(\delta))$
$\displaystyle=\liminf_{\delta\downarrow
0}\mathbb{P}(\mathop{\mathrm{argmax}}_{x\in[-M,M]}\mathcal{A}(x)\in[-M+\delta,M-\delta])$
$\displaystyle=\mathbb{P}(\mathop{\mathrm{argmax}}_{x\in[-M,M]}\mathcal{A}(x)\in(-M,M))=1.$
This proves (6.61) with $\mathsf{E}\mapsto\mathsf{ArMx}(\delta).$
$\mathsf{Bd}_{\uparrow}(\delta),\mathsf{Bd}_{\downarrow}(\delta)$ events. We
first define
$\displaystyle\mathsf{Tight}_{\pm,\uparrow}(\lambda):=\left\\{p^{1/3}\left|\mathfrak{h}_{pt,\uparrow}^{(1)}(\Phi
p^{-2/3})-\mathfrak{h}_{pt,\uparrow}^{(1)}(\pm
Mp^{-2/3})\right|\leq\tfrac{1}{\lambda}\right\\},$
$\displaystyle\mathsf{Tight}_{\pm,\downarrow}(\lambda):=\left\\{q^{1/3}\left|\mathfrak{h}_{qt,\downarrow}^{(1)}(\Phi
q^{-2/3})-\mathfrak{h}_{qt,\downarrow}^{(1)}(\pm
Mq^{-2/3})\right|\leq\tfrac{1}{\lambda}\right\\},$
and set
$\displaystyle\mathsf{Sp}(\lambda):=\mathsf{ArMx}(\lambda)\cap\mathsf{Tight}_{+,\uparrow}(\lambda)\cap\mathsf{Tight}_{-,\uparrow}(\lambda)\cap\mathsf{Tight}_{+,\downarrow}(\lambda)\cap\mathsf{Tight}_{-,\downarrow}(\lambda)$
(6.62)
where $\mathsf{ArMx}(\lambda)$ is defined in (6.32). We claim that
$\displaystyle\limsup_{\lambda\downarrow
0}\limsup_{t\to\infty}\mathbb{P}(\neg\mathsf{Sp}(\lambda))=0.$ (6.63)
Let us assume (6.63) for the time being and consider the main task of
analyzing the probability of the events
$\mathsf{Bd}_{\uparrow}(\delta),\mathsf{Bd}_{\downarrow}(\delta)$ defined in
(6.33). We have
$\mathsf{Bd}_{\uparrow}(\delta)=\mathsf{Bd}_{+,\uparrow}(\delta)\cap\mathsf{Bd}_{-,\uparrow}(\delta)$
where $\mathsf{Bd}_{\pm,\uparrow}(\delta)$ is defined in (6.34). Let us focus
on $\mathsf{Bd}_{+,\uparrow}(\delta)$. Recall the $\sigma$-fields
$\mathcal{F}_{1},\mathcal{F}_{2}$ from (6.42) and (6.43). As described in
Subsection 6.2.6, upon conditioning on $\mathcal{F}_{1}\cup\mathcal{F}_{2}$,
the conditional law on $[-M,M]$ is the $\mathsf{Nlarge}$ law of
Definition 6.3, which is made up of the
$\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$s
$V_{\ell}^{\mathsf{large}},V_{r}^{\mathsf{large}}$.
Note that, applying the Markov inequality conditionally, we have
$\displaystyle\mathbf{1}_{\mathsf{Sp}(\lambda)}\mathbb{P}\left(\neg\mathsf{Bd}_{+,\uparrow}(\delta)\mid\mathcal{F}_{1},\mathcal{F}_{2}\right)$
$\displaystyle=\mathbf{1}_{\mathsf{Sp}(\lambda)}\cdot\mathbb{P}\left(|\mathfrak{h}_{pt,\uparrow}^{(1)}(p^{-2/3}(\Phi+t^{-\alpha}))-\mathfrak{h}_{pt,\uparrow}^{(1)}(\Phi
p^{-2/3})|>\tfrac{1}{\delta}t^{-\alpha/2}\mid\mathcal{F}_{1},\mathcal{F}_{2}\right)$
$\displaystyle\leq\mathbf{1}_{\mathsf{Sp}(\lambda)}\cdot\delta^{4}t^{2\alpha}\cdot\mathbb{E}_{\mathsf{Nlarge}|2,1}\left[[V_{r,1}^{\mathsf{large}}(p^{-2/3}t^{-\alpha})]^{4}\right]$
However, on the event $\mathsf{Sp}(\lambda)$, the
$\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$ has length bounded away from zero
and the endpoints are tight. Applying (5.20) with $K\mapsto 2,t\mapsto
1,s\mapsto 0,n\mapsto p^{2/3}t^{\alpha},M\mapsto 1/\lambda$, for all large
enough $t$ we get
$\mathbb{E}_{\mathsf{Nlarge}|2,1}\left[[V_{r,1}^{\mathsf{large}}(p^{-2/3}t^{-\alpha})]^{4}\right]\leq\mathrm{C}_{p,\lambda}t^{-2\alpha}$.
Thus,
$\displaystyle\limsup_{t\to\infty}\mathbb{P}\left(\neg\mathsf{Bd}_{+,\uparrow}(\delta)\right)\leq\limsup_{t\to\infty}\mathbb{P}(\neg\mathsf{Sp}(\lambda))+\delta^{4}\mathrm{C}_{p,\lambda}.$
Taking $\delta\downarrow 0$, followed by $\lambda\downarrow 0$, in view of
(6.63) we get $\limsup_{\delta\downarrow
0}\limsup_{t\to\infty}\mathbb{P}(\neg\mathsf{Bd}_{+,\uparrow}(\delta))=0$.
Similarly one can conclude $\limsup_{\delta\downarrow
0}\limsup_{t\to\infty}\mathbb{P}(\neg\mathsf{Bd}_{-,\uparrow}(\delta))=0$.
These two together yield $\liminf_{\delta\downarrow
0}\liminf_{t\to\infty}\mathbb{P}(\mathsf{Bd}_{\uparrow}(\delta))=1$. By
exactly the same approach one can derive that
$\mathbb{P}(\mathsf{Bd}_{\downarrow}(\delta))$ goes to $1$ under the same
iterated limit. Thus it remains to show (6.63).
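The fourth-moment Markov bound used above, $\mathbb{P}(|X|>a)\leq a^{-4}\mathbb{E}[X^{4}]$, can be illustrated numerically. The following sketch (illustration only; the standard Gaussian is a toy choice unrelated to the bridge laws in the proof) compares the empirical tail probability against the fourth-moment bound:

```python
import numpy as np

# Illustration of the fourth-moment Markov inequality
#   P(|X| > a) <= E[X^4] / a^4
# on a toy example: X standard Gaussian (purely illustrative).
rng = np.random.default_rng(7)
x = rng.normal(size=1_000_000)
a = 3.0
empirical = np.mean(np.abs(x) > a)   # tail probability P(|X| > 3)
markov4 = np.mean(x**4) / a**4       # fourth-moment Markov bound E[X^4]/81
assert empirical <= markov4          # the bound holds (~0.003 vs ~0.037)
print(empirical, markov4)
```

The bound is loose here, but in the proof it suffices since the right-hand side is driven to zero by the moment estimate on the bridge.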
Let us recall from (6.62) that the event $\mathsf{Sp}(\lambda)$ is composed of
four tightness events and one event about the $\mathop{\mathrm{argmax}}$. We
first claim that $\liminf_{\lambda\downarrow
0}\liminf_{t\to\infty}\mathbb{P}(\mathsf{Tight}_{x,y}(\lambda))=1$ for each
$x\in\\{+,-\\}$ and $y\in\\{\uparrow,\downarrow\\}$. Together with the earlier
analysis of the $\mathsf{ArMx}(\lambda)$ event in Subsection 6.2.8, this claim
implies (6.63). Since all the tightness events are similar, it suffices to
prove any one of them, say for $\mathsf{Tight}_{+,\uparrow}(\lambda)$. By
Proposition 2.5 we have the distributional
convergence of $2^{1/3}\mathfrak{h}_{pt,\uparrow}^{(1)}(2^{1/3}x)$ to
$\mathcal{A}_{1}(x)$ in the uniform-on-compact topology, where
$\mathcal{A}_{1}(\cdot)$ is the parabolic $\operatorname{Airy}_{2}$ process.
As $\Phi\in[-M,M]$, we thus have
$\displaystyle\liminf_{t\to\infty}\mathbb{P}(\mathsf{Tight}_{+,\uparrow}(\lambda))$
$\displaystyle\geq\liminf_{t\to\infty}\mathbb{P}\left(p^{1/3}\sup_{x\in[-M,M]}\left|\mathfrak{h}_{pt,\uparrow}^{(1)}(xp^{-2/3})-\mathfrak{h}_{pt,\uparrow}^{(1)}(Mp^{-2/3})\right|\leq\tfrac{1}{\lambda}\right)$
$\displaystyle=\mathbb{P}\left(p^{1/3}\sup_{|x|\leq
2^{-1/3}M}\left|\mathcal{A}_{1}(xp^{-2/3})-\mathcal{A}_{1}(2^{-1/3}Mp^{-2/3})\right|\leq\tfrac{2^{1/3}}{\lambda}\right).$
For fixed $p,M$, by tightness of parabolic $\operatorname{Airy}_{2}$ process
on a compact interval, the last expression goes to one as $\lambda\downarrow
0$, which is precisely what we wanted to show.
$\mathsf{Gap}_{M,\uparrow}(\delta),\mathsf{Gap}_{M,\downarrow}(\delta)$
events. Recall the definitions of $\mathsf{Gap}_{M,\uparrow}(\delta)$ and
$\mathsf{Gap}_{M,\downarrow}(\delta)$ from (6.35) and (6.36). We begin with the
proof for $\mathsf{Gap}_{M,\uparrow}(\delta)$. Let
$\mathsf{Diff}_{M,\uparrow}(\delta):=\left\\{\inf_{|x|\leq
M}p^{1/3}\left(\mathfrak{h}_{pt,\uparrow}^{(1)}(p^{-2/3}x)-\mathfrak{h}_{pt,\uparrow}^{(2)}(p^{-2/3}x)\right)\geq\delta\right\\}.$
Note that $\Phi\in[-M,M]$. Thus
$\mathsf{Gap}_{M,\uparrow}(\delta)\supset\mathsf{Diff}_{M,\uparrow}(\delta).$
Thus to show (6.61) with $\mathsf{E}\mapsto\mathsf{Gap}_{M,\uparrow}(\delta)$
it suffices to prove
$\displaystyle\liminf_{\delta\downarrow
0}\liminf_{t\rightarrow\infty}\mathbb{P}(\mathsf{Diff}_{M,\uparrow}(\delta))=1.$
(6.64)
We recall from Proposition 2.7 the distributional convergence of the KPZ line
ensemble to the Airy line ensemble in the uniform-on-compact topology. By
Skorokhod representation theorem, we may assume that our probability space is
equipped with $\mathcal{A}_{1}(\cdot)$ and $\mathcal{A}_{2}(\cdot)$ such that
almost surely as $t\to\infty$
$\displaystyle\max_{i=1,2}\sup_{|x|\leq
Mp^{-2/3}}|2^{1/3}\mathfrak{h}_{pt,\uparrow}^{(i)}(2^{1/3}x)-\mathcal{A}_{i}(x)|\rightarrow
0.$ (6.65)
We thus have
$\displaystyle\liminf_{t\rightarrow\infty}\mathbb{P}(\mathsf{Diff}_{M,\uparrow}(\delta))=\mathbb{P}\left(\inf_{|x|\leq
M2^{-1/3}p^{-2/3}}p^{1/3}\left(\mathcal{A}_{1}(x)-\mathcal{A}_{2}(x)\right)\geq
2^{1/3}\delta\right).$ (6.66)
As the Airy line ensemble is absolutely continuous w.r.t. non-intersecting
Brownian motions, it is strictly ordered with touching probability zero (see
(2.1)). Hence the r.h.s. of (6.66) goes to one as $\delta\downarrow 0$. This
proves (6.64). The proof is similar for $\mathsf{Gap}_{M,\downarrow}(\delta).$
$\mathsf{Rise}_{M,\uparrow}(\delta),\mathsf{Rise}_{M,\downarrow}(\delta)$
events. Recall the
$\mathsf{Rise}_{M,\uparrow}(\delta),\mathsf{Rise}_{M,\downarrow}(\delta)$ events
from (6.37) and (6.38). Due to their similarities, we only analyze the
$\mathsf{Rise}_{M,\uparrow}(\delta)$ event. As with the previous case, we
assume that our probability space is equipped with $\mathcal{A}_{1}(\cdot)$
and $\mathcal{A}_{2}(\cdot)$ (first two lines of the Airy line ensemble) such
that almost surely as $t\to\infty$ (6.65) holds. Applying union bound we have
$\displaystyle\mathbb{P}\left(\neg\mathsf{Rise}_{M,\uparrow}(\delta)\right)$
$\displaystyle\leq\mathbb{P}\left(\sup_{{|x|\leq
Mp^{-2/3}}}p^{1/3}|2^{1/3}\mathfrak{h}_{pt,\uparrow}^{(2)}(2^{1/3}x)-\mathcal{A}_{2}(x)|\geq\tfrac{\delta}{16}\right)$
$\displaystyle\hskip
56.9055pt+\mathbb{P}\left(\neg\mathsf{Rise}_{M,\uparrow}(\delta),\sup_{|x|\leq
Mp^{-2/3}}p^{1/3}|2^{1/3}\mathfrak{h}_{pt,\uparrow}^{(2)}(2^{1/3}x)-\mathcal{A}_{2}(x)|\leq\tfrac{\delta}{16}\right)$
$\displaystyle\leq\mathbb{P}\left(\sup_{|x|\leq
Mp^{-2/3}}p^{1/3}|2^{1/3}\mathfrak{h}_{pt,\uparrow}^{(2)}(2^{1/3}x)-\mathcal{A}_{2}(x)|\geq\tfrac{\delta}{16}\right)$
$\displaystyle\hskip
56.9055pt+\mathbb{P}\bigg{(}\sup_{\begin{subarray}{c}x,y\in[-M,M]\\\ |x-y|\leq
t^{-\alpha}\end{subarray}}p^{1/3}|\mathcal{A}_{2}(x)-\mathcal{A}_{2}(y)|\geq\tfrac{\delta}{8}\bigg{)}.$
On the r.h.s. of the above display, the first term goes to zero as $t\to\infty$
by (6.65). The second term goes to zero as $t\to\infty$ by the
modulus of continuity estimates for the Airy line ensemble from Proposition 2.4.
This shows
$\lim_{t\to\infty}\mathbb{P}(\mathsf{Rise}_{M,\uparrow}(\delta))=1$. Similarly
one has $\lim_{t\to\infty}\mathbb{P}(\mathsf{Rise}_{M,\downarrow}(\delta))=1$
as well. This proves (6.61) for
$\mathsf{E}\mapsto\mathsf{Rise}_{M,\uparrow}(\delta),\mathsf{Rise}_{M,\downarrow}(\delta)$.
We have thus shown (6.61) for all the events listed in (6.39). This
establishes (6.40) concluding the proof of Lemma 6.2.
#### 6.2.9. Proof of Lemmas 6.6 and 6.7
In this subsection we prove Lemmas 6.6 and 6.7.
Proof of Lemma 6.6. Recall $W_{\uparrow,1}$ and $W_{\downarrow,1}$ from (6.52)
and (6.53) respectively. We claim that for all large enough $t$, on
$\mathsf{Nice}_{M}(\delta)$ we have
$\displaystyle\mathbb{P}_{\mathsf{Nsmall}|3,2,1}(W_{\uparrow,1}>\sqrt{1-\delta})\geq
1-\tfrac{1}{2}\delta,\quad\mathbb{P}_{\mathsf{Nsmall}|3,2,1}(W_{\downarrow,1}>\sqrt{1-\delta})\geq
1-\tfrac{1}{2}\delta$ (6.67)
simultaneously. The bound (6.56) then follows via a union bound. Hence we focus
on proving (6.67); below we only prove its first part, as the second follows
analogously. We now define the ‘sink’ event:
$\displaystyle\mathsf{Sink}_{\uparrow}(\delta)$
$\displaystyle:=\left\\{\inf_{x\in[-t^{-\alpha},t^{-\alpha}]}p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(1)}(\Phi
p^{-2/3}+x)\geq p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(1)}(\Phi
p^{-2/3})-\tfrac{\delta}{4}\right\\}.$ (6.68)
Figure 9. In the above figure we have plotted the curves
$f(x):=p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(1)}(p^{-2/3}x)$ (black) and
$g(x):=p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(2)}(p^{-2/3}x)$ (blue) restricted
to the interval $K_{t}:=(\Phi-t^{-\alpha},\Phi+t^{-\alpha})$. For convenience,
we have marked two points $(A,f(A))$ and $(B,g(B))$ (in blue) along with their
values. The event $\mathsf{Gap}_{M,\uparrow}(\delta)$ defined in (6.35) denotes
the event that these two points are separated by $\delta$, i.e.,
$f(A)-g(B)\geq\delta$. The event $\mathsf{Rise}_{M,\uparrow}(\delta)$ defined in
(6.37) ensures that no point on the blue curve (restricted to $K_{t}$) has value
larger than $g(B)+\frac{1}{4}\delta$ (that is, no significant rise). The
$\mathsf{Bd}_{\uparrow}(\delta)$ event defined in (6.33) indicates that the red
points on the black curve lie within
$[f(A)-\frac{1}{\delta}t^{-\alpha/2},f(A)+\frac{1}{\delta}t^{-\alpha/2}]$. The
$\mathsf{Sink}_{\uparrow}(\delta)$ event defined in (6.68) ensures that all
points on the black curve (restricted to $K_{t}$) have values larger than
$f(A)-\frac{1}{4}\delta$ (that is, no significant sink). Clearly then, on
$\mathsf{Sink}_{\uparrow}(\delta)\cap\mathsf{Rise}_{M,\uparrow}(\delta)\cap\mathsf{Gap}_{M,\uparrow}(\delta)$,
for all $x\in K_{t}$ we have $f(x)-g(x)\geq
f(A)-\frac{1}{4}\delta-g(B)-\frac{1}{4}\delta\geq\frac{1}{2}\delta$.
Recall $\mathsf{Rise}_{M,\uparrow}(\delta)$ and
$\mathsf{Gap}_{M,\uparrow}(\delta)$ from (6.37) and (6.35). Note that on
$\mathsf{Sink}_{\uparrow}(\delta)\cap\mathsf{Rise}_{M,\uparrow}(\delta)\cap\mathsf{Gap}_{M,\uparrow}(\delta)$
we have uniform separation between $\mathfrak{h}_{pt,\uparrow}^{(1)}$ and
$\mathfrak{h}_{pt,\uparrow}^{(2)}$ on the interval $p^{-2/3}{K}_{t}$, that
is
$\displaystyle\inf_{x\in[\Phi-t^{-\alpha},\Phi+t^{-\alpha}]}\left[p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(1)}(p^{-2/3}x)-p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(2)}(p^{-2/3}x)\right]\geq\tfrac{\delta}{2}.$
(6.69)
See Figure 9 alongside its caption for further explanation of the above fact.
But then (6.69) forces
$W_{\uparrow,1}\geq\exp(-t^{2/3}2t^{-\alpha}e^{-\frac{1}{4}t^{1/3}\delta})$
which can be made strictly larger than $\sqrt{1-\delta}$ for all large enough
$t$. Thus,
$\displaystyle\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}\mathbb{P}_{\mathsf{Nsmall|3,2,1}}(W_{\uparrow,1}>\sqrt{1-\delta})\geq\mathbf{1}_{\mathsf{Nice}_{M}(\delta)}\mathbb{P}_{\mathsf{Nsmall|3,2,1}}(\mathsf{Sink}_{\uparrow}(\delta)).$
(6.70)
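To see why the displayed lower bound on $W_{\uparrow,1}$ eventually exceeds $\sqrt{1-\delta}$, note that with $\alpha=\frac{1}{6}$ (as fixed in this section) the exponential factor beats the polynomial one:

```latex
t^{2/3}\cdot 2t^{-\alpha}\, e^{-\frac{1}{4}t^{1/3}\delta}
  = 2\, t^{1/2}\, e^{-\frac{1}{4}t^{1/3}\delta}
  \;\longrightarrow\; 0
  \qquad \text{as } t\to\infty,
```

so $\exp\big(-2t^{2/3-\alpha}e^{-\frac{1}{4}t^{1/3}\delta}\big)\to 1>\sqrt{1-\delta}$ for any fixed $\delta\in(0,1)$.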
Now we divide the sink event into two parts:
$\mathsf{Sink}_{\uparrow}(\delta)=\mathsf{Sink}_{+,\uparrow}(\delta)\cap\mathsf{Sink}_{-,\uparrow}(\delta)$
where
$\displaystyle\mathsf{Sink}_{\pm,\uparrow}(\delta)$
$\displaystyle:=\left\\{\inf_{x\in[0,t^{-\alpha}]}p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(1)}(\Phi
p^{-2/3}\pm x)\geq p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(1)}(\Phi
p^{-2/3})-\tfrac{\delta}{4}\right\\}.$
In view of (6.70), to prove the first part of (6.67), it suffices to show that
for all large enough $t$, on $\mathsf{Nice}_{M}(\delta)$ we have
$\displaystyle\mathbb{P}_{\mathsf{Nsmall|3,2,1}}(\mathsf{Sink}_{+,\uparrow}(\delta))\geq
1-\tfrac{\delta}{4},\quad\mathbb{P}_{\mathsf{Nsmall|3,2,1}}(\mathsf{Sink}_{-,\uparrow}(\delta))\geq
1-\tfrac{\delta}{4}.$ (6.71)
We only prove the first part of (6.71) below. Towards this end, recall
$Y_{M,t,\uparrow}^{(1)},Y_{M,t,\downarrow}^{(1)}$ from (6.26). Observe that
$\displaystyle
Y_{M,t,\uparrow}^{(1)}(\Phi+x)=p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(1)}(\Phi
p^{-2/3})-p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(1)}(\Phi p^{-2/3}+x).$
Recall the $\mathsf{Nsmall}$ law from Definition 6.4. Our discussion in Subsection
6.2.6 implies that under $\mathbb{P}_{\mathsf{Nsmall}|3,2,1}$,
$(Y_{M,t,\uparrow}^{(1)},Y_{M,t,\downarrow}^{(1)})(\Phi+\cdot)|_{[0,t^{-\alpha}]}\stackrel{{\scriptstyle
d}}{{=}}V_{r}^{\mathsf{small}}(\cdot),\quad(Y_{M,t,\uparrow}^{(1)},Y_{M,t,\downarrow}^{(1)})(\Phi+\cdot)|_{[-t^{-\alpha},0]}\stackrel{{\scriptstyle
d}}{{=}}V_{\ell}^{\mathsf{small}}(-\cdot),$
where recall that $V_{\ell}^{\mathsf{small}}$ and $V_{r}^{\mathsf{small}}$ are
conditionally independent $\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$ on
$[0,t^{-\alpha}]$ with appropriate end points, defined in Definition 6.4. In
particular we have,
$\displaystyle\mathbb{P}_{\mathsf{Nsmall|3,2,1}}(\mathsf{Sink}_{+,\uparrow}(\delta))=\mathbb{P}_{\mathsf{Nsmall|3,2,1}}\left(\sup_{x\in[0,t^{-\alpha}]}V_{r,1}^{\mathsf{small}}(x)\leq\tfrac{1}{4}\delta\right)$
(6.72)
where
$V_{r}^{\mathsf{small}}=(V_{r,1}^{\mathsf{small}},V_{r,2}^{\mathsf{small}})$.
Recall the $\mathsf{Nice}_{M}(\delta)$ event from (6.39). It contains the
$\mathsf{Bd}_{\uparrow}(\delta)$ event defined in (6.33). On this event,
$-\frac{1}{\delta}t^{-\alpha/2}\leq
V_{r,1}^{\mathsf{small}}(t^{-\alpha}),V_{r,2}^{\mathsf{small}}(t^{-\alpha})\leq\frac{1}{\delta}t^{-\alpha/2}$.
We consider another $\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$
$U=(U_{1},U_{2})$ on $[0,t^{-\alpha}]$ with non-random endpoints
$U_{1}(t^{-\alpha})=U_{2}(t^{-\alpha})=\frac{1}{\delta}t^{-\alpha/2}$. On
$\mathsf{Bd}_{\uparrow}(\delta)$ event, by monotonicity of non-intersecting
Brownian bridges (Lemma 2.6 in [CH14]), one may couple $U=(U_{1},U_{2})$ and
$V_{r}^{\mathsf{small}}$ so that $U_{i}$ always lies above
$V_{r,i}^{\mathsf{small}}$ for $i=1,2$. Thus on
$\mathsf{Bd}_{\uparrow}(\delta)$ event,
$\displaystyle\mathbb{P}_{\mathsf{Nsmall|3,2,1}}\left(\sup_{x\in[0,t^{-\alpha}]}V_{r,1}^{\mathsf{small}}(x)\leq\lambda
t^{-\alpha/2}\right)\geq\mathbb{P}\left(\sup_{x\in[0,1]}t^{\alpha/2}U_{1}(xt^{-\alpha})\leq\lambda\right)\geq
1-\tfrac{\delta}{4},$
where the last inequality is true by taking $\lambda$ large enough. This
choice of $\lambda$ is possible as by Brownian scaling,
$t^{\alpha/2}U_{1}(xt^{-\alpha}),t^{\alpha/2}U_{2}(xt^{-\alpha})$ is
$\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$ on $[0,1]$ ending at
$(\frac{1}{\delta},\frac{1}{\delta})$. Taking $t$ large enough one can ensure
$\lambda t^{-\alpha/2}\leq\frac{\delta}{4}$. Using the equality in (6.72) we
thus establish the first part of (6.71). The second part is analogous. This
proves the first part of (6.67). The second part of (6.67) follows similarly.
This completes the proof of Lemma 6.6.
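The Brownian scaling step invoked above can be sanity-checked numerically. The following sketch (illustration only; the grid size, number of paths, and interval length $T$ are arbitrary choices) simulates Brownian bridges on a short interval $[0,T]$ and verifies that the diffusively rescaled paths $x\mapsto T^{-1/2}B(xT)$ behave like standard bridges on $[0,1]$, whose variance at the midpoint is $\frac{1}{2}(1-\frac{1}{2})=\frac{1}{4}$:

```python
import numpy as np

# Numerical sanity check of Brownian scaling for bridges:
# if B is a Brownian bridge on [0, T], then x -> T^{-1/2} B(xT)
# is a Brownian bridge on [0, 1].
rng = np.random.default_rng(1)
T = 0.01                        # short interval, playing the role of t^{-alpha}
n_steps, n_paths = 200, 20_000
dt = T / n_steps
incr = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
W = np.cumsum(incr, axis=1)     # Brownian motions sampled on a grid of [0, T]
x = dt * np.arange(1, n_steps + 1)
B = W - (x / T) * W[:, -1:]     # bridges: B(x) = W(x) - (x/T) W(T)
U = B / np.sqrt(T)              # diffusively rescaled bridges on [0, 1]
mid_var = U[:, n_steps // 2 - 1].var()  # empirical variance at the midpoint
assert abs(mid_var - 0.25) < 0.02       # matches Var = 1/4 for a standard bridge
print(mid_var)
```

The same scaling applied componentwise underlies the reduction of the non-intersecting bridges $U=(U_{1},U_{2})$ from $[0,t^{-\alpha}]$ to $[0,1]$ in the argument above.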
Proof of Lemma 6.7. The idea behind this proof is Proposition 5.8, which
states that a $\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$, after Brownian
rescaling, converges in distribution to a $\mathsf{DBM}$. The following fills
in the details. Recall that
$\mathbb{P}_{\mathsf{Nsmall}|3,2,1}(A)=\mathbb{P}_{\mathsf{Nsmall}|3,2,1}\left((D_{M,t,\uparrow}(\cdot),D_{M,t,\downarrow}(\cdot))\in
A\right).$
Recall from (6.28) that $D_{M,t,\uparrow},D_{M,t,\downarrow}$ is a diffusive
scaling of $Y_{M,t,\uparrow}^{(1)},Y_{M,t,\downarrow}^{(1)}$ when centering at
$\Phi$, where $Y_{M,t,\uparrow}^{(1)},Y_{M,t,\downarrow}^{(1)}$ are defined in
(6.26). Recall the $\mathsf{Nsmall}$ law from Definition 6.4. Our discussion in
Subsection 6.2.6 implies that under $\mathbb{P}_{\mathsf{Nsmall}|3,2,1}$,
$(Y_{M,t,\uparrow}^{(1)},Y_{M,t,\downarrow}^{(1)})(\Phi+\cdot)|_{[0,t^{-\alpha}]}\stackrel{{\scriptstyle
d}}{{=}}V_{r}^{\mathsf{small}}(\cdot),\quad(Y_{M,t,\uparrow}^{(1)},Y_{M,t,\downarrow}^{(1)})(\Phi+\cdot)|_{[-t^{-\alpha},0]}\stackrel{{\scriptstyle
d}}{{=}}V_{\ell}^{\mathsf{small}}(-\cdot),$
where $V_{\ell}^{\mathsf{small}}$ and $V_{r}^{\mathsf{small}}$ are
conditionally independent $\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$ on
$[0,t^{-\alpha}]$ with appropriate end points defined in Definition 6.4. Using
Brownian scaling, we consider
$V_{\ell}^{0}(x):=t^{\alpha/2}V_{\ell}^{\mathsf{small}}(xt^{-\alpha}),\quad
V_{r}^{0}(x):=t^{\alpha/2}V_{r}^{\mathsf{small}}(xt^{-\alpha}),$
which are now $\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$s on $[0,1]$. Note that
on $\mathsf{Bd}_{\uparrow}(\delta),\mathsf{Bd}_{\downarrow}(\delta)$ (defined
in (6.33)) the endpoints of $V_{\ell}^{0},V_{r}^{0}$ lie in
$[-\frac{1}{\delta},\frac{1}{\delta}]$. Thus, as $\alpha=\frac{1}{6}$,
performing another diffusive scaling by Proposition 5.8 we see that as
$t\to\infty$
$t^{1/4}V_{\ell}^{0}(xt^{-1/2})\ ,\ t^{1/4}V_{r}^{0}(xt^{-1/2})$
converge to two independent copies of $\mathsf{DBM}$ (defined in Definition
5.1) in the uniform-on-compact topology. Hence we get two-sided $\mathsf{DBM}$
convergence for the pair $(D_{M,t,\uparrow},D_{M,t,\downarrow})$ under
$\mathbb{P}_{\mathsf{Nsmall}|3,2,1}$, as long as
$\mathsf{Nice}_{M}(\delta)$ holds. This proves (6.57).
### 6.3. Proof of Theorem 1.10
We take $p\mapsto\frac{1}{2}$ and $t\mapsto 2t$ in Proposition 6.1. Then by
Lemma 3.2, $\mathcal{P}_{2,t}$ defined in the statement of Theorem 1.10 is the
same as $\mathcal{M}_{\frac{1}{2},2t}$ considered in Proposition 6.1. Its
uniqueness is already justified in Lemma 3.1. Furthermore,
$R_{2}(x,t)\stackrel{{\scriptstyle d}}{{=}}D_{1}(x,t)-D_{2}(x,t),$
as functions in $x$, where $R_{2}(x,t)$ is defined in (1.11) and $D_{1},D_{2}$
are defined in (6.24). By Proposition 6.1 and Lemma 5.3 we get that
$D_{1}(x,t)-D_{2}(x,t)\stackrel{{\scriptstyle d}}{{\to}}\mathcal{R}_{2}(x)$ in
the uniform-on-compact topology. This proves Theorem 1.10 in the $k=2$ case.
For the $k=1$ case, by Lemma 3.2, $\mathcal{P}_{1,t}$ is the same as
$\mathcal{M}_{*,t}$, which is unique almost surely by Lemma 3.1. This
guarantees $\mathcal{P}_{1,t}$ is unique almost surely as well. Thus we are
left to show
$\displaystyle\mathcal{H}(\mathcal{P}_{1,t},t)-\mathcal{H}(x+\mathcal{P}_{1,t},t)\stackrel{{\scriptstyle
d}}{{\to}}\mathcal{R}_{1}(x),$ (6.73)
where $\mathcal{R}_{1}(x)$ is a two-sided Bessel process with diffusion
coefficient $1$ defined in Definition 5.2. The proof of (6.73) closely
parallels that of Proposition 6.1, with a few minor alterations listed below.
1. (1)
Just as in Subsection 6.2.1, one may put the problem in (6.73) under the
framework of the KPZ line ensemble. Compared to Subsection 6.2.1, in this case
there is just one line ensemble.
2. (2)
Given the decay estimates for $\mathcal{M}_{*,t}$ from Lemma 3.1, it boils
down to showing Bessel behavior around local maximizers. The rigorous
justification follows from a soft argument analogous to the one in
Subsection 6.2.3.
3. (3)
In the spirit of Subsection 6.2.4, one can define a similar
$\mathsf{Nice}^{\prime}_{M}(\delta)$ event but now for a single line ensemble.
$\mathsf{Nice}^{\prime}_{M}(\delta)$ will contain similar events, such as:
* •
control on the location of local maximizer (analog of $\mathsf{ArMx}(\delta)$
event (6.32)),
* •
control on the gap between first curve and second curve at the maximizer
(analog of $\mathsf{Gap}_{M,\uparrow}(\delta)$ event (6.35)),
* •
fluctuations of the first curve on a small interval, say $I$, around the
maximizer (analog of the $\mathsf{Rise}_{M,\uparrow}(\delta)$ event (6.37)),
* •
and control on the value of the endpoints of $I$ (analog of
$\mathsf{Bd}_{\uparrow}(\delta)$ event (6.33)).
On the $\mathsf{Nice}^{\prime}_{M}(\delta)$ event, the conditional analysis can
be performed in the same manner.
4. (4)
Next, as in the proof of Proposition 6.1, we proceed by three layers of
conditioning. For the first layer, we use the $\mathbf{H}_{t}$ Brownian Gibbs
property of the single line ensemble under consideration. Next, conditioning
on the location and values of the maximizer, we similarly apply the same
Bessel bridge decomposition result from Proposition 4.8 to convert the
conditional law to that of the Bessel bridges over a large interval (see
Subsection 6.2.5). Finally, analogous to Subsection 6.2.6, the third layer of
conditioning reduces large Bessel bridges to smaller ones following the
Markovian property of Bessel bridges, see Lemma 4.2.
5. (5)
Since a Bessel bridge, say on $[0,1]$, is a Brownian bridge conditioned to stay
positive on $[0,1]$, it has the Brownian scaling property and admits
monotonicity w.r.t. its endpoints. These are the two crucial tools that went
into the proof of Lemma 6.6 in Subsection 6.2.9. Thus the Bessel analogue of
Lemma 6.6 can be derived using the scaling property and monotonicity stated
above in exactly the same way. Finally, the Bessel analogue of Lemma 6.7 can be obtained from
Corollary 5.9. Indeed, Corollary 5.9 ensures that small Bessel bridges
converge to the Bessel process under appropriate diffusive limits on the
$\mathsf{Nice}^{\prime}_{M}(\delta)$ event.
Executing all the above steps in exactly the same manner as in the proof of
Proposition 6.1 establishes (6.73). This completes the proof of Theorem 1.10.
## 7\. Proof of localization theorems
In this section we prove our main results: Theorem 1.4 and Theorem 1.5. In
Section 7.1 we study certain tail properties (Lemma 7.1 and Proposition 7.2)
of the quantities that we are interested in and prove Theorem 1.4. The proof
of Proposition 7.2 is then completed in Section 7.2, along with the proof of
Theorem 1.5.
### 7.1. Tail Properties and proof of Theorem 1.4
We first settle the question of finiteness of the Bessel integral appearing in
the statements of Theorems 1.4 and 1.5 in the following lemma.
###### Lemma 7.1.
Let $R_{\sigma}(\cdot)$ be a Bessel process with diffusion coefficient
$\sigma>0$, defined in Definition 5.2. Then
$\displaystyle\mathbb{P}\left(\int_{\mathbb{R}}e^{-R_{\sigma}(x)}\mathrm{d}x\
{\in(0,\infty)}\right)=1.$
###### Proof.
Since $R_{\sigma}(\cdot)$ has continuous paths,
$\sup_{x\in[0,1]}R_{\sigma}(x)$ is finite almost surely. Thus almost surely we
have
$\int_{\mathbb{R}}e^{-R_{\sigma}(x)}\mathrm{d}x\geq\int_{0}^{1}e^{-R_{\sigma}(x)}\mathrm{d}x>0.$
On the other hand, by the classical result from [Mot59] it is known that
$\mathbb{P}(R_{\sigma}(x)<x^{1/4}\mbox{ infinitely often})=0.$
Thus, there exists $\Omega$ such that $\mathbb{P}(\Omega)=1$ and for all
$\omega\in\Omega$, there exists $x_{0}(\omega)\in(0,\infty)$ such that
$\displaystyle R_{\sigma}(x)(\omega)\geq x^{1/4}\mbox{ for all }x\geq
x_{0}(\omega).$
Hence for this $\omega$,
$\displaystyle{\int_{0}^{\infty}e^{-R_{\sigma}(x)(\omega)}\mathrm{d}x=\int_{0}^{x_{0}(\omega)}e^{-R_{\sigma}(x)(\omega)}\mathrm{d}x+\int_{x_{0}(\omega)}^{\infty}e^{-R_{\sigma}(x)(\omega)}\mathrm{d}x<x_{0}(\omega)+\int_{0}^{\infty}e^{-x^{1/4}}\mathrm{d}x<\infty.}$
This establishes that $\int_{0}^{\infty}e^{-R_{\sigma}(x)}\mathrm{d}x$ is
finite almost surely; the integral over $(-\infty,0]$ is handled in the same
way, so $\int_{\mathbb{R}}e^{-R_{\sigma}(x)}\mathrm{d}x$ is finite almost
surely. ∎
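The conclusion of Lemma 7.1 can also be probed by simulation. The following Monte Carlo sketch (illustration only, not part of the proof; realizing the Bessel process as the norm of a standard 3D Brownian motion is an assumed normalization, and the grid, horizon, and path count are arbitrary) checks that the running integral of $e^{-R(x)}$ stabilizes, with a negligible contribution from the far tail:

```python
import numpy as np

# Monte Carlo illustration of Lemma 7.1: for a 3D Bessel process R
# (here: the norm of a standard 3D Brownian motion; the diffusion
# coefficient only changes constants), the integral of exp(-R) over
# [0, L] stabilizes as L grows, consistent with a.s. finiteness.
rng = np.random.default_rng(0)
dx, L, n_paths = 0.01, 200.0, 50
n = int(L / dx)
incr = rng.normal(scale=np.sqrt(dx), size=(n_paths, n, 3))
W = np.cumsum(incr, axis=1)                    # 3D Brownian motions on a grid
R = np.linalg.norm(W, axis=2)                  # Bessel(3)-type paths
partial = np.cumsum(np.exp(-R), axis=1) * dx   # running integrals of exp(-R)
totals = partial[:, -1]
tails = totals - partial[:, n // 2]            # contribution from x in (L/2, L]
assert np.all(np.isfinite(totals)) and np.all(totals > 0)
assert tails.mean() < 0.5 * totals.mean()      # the far tail contributes little
print(totals.mean(), tails.mean())
```

This mirrors the proof: the bulk of the integral comes from a neighborhood of the origin, while the growth $R_{\sigma}(x)\geq x^{1/4}$ eventually kills the tail.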
Our next result studies the tail of the integral of the pre-limiting process.
###### Proposition 7.2.
Fix $p\in(0,1)$. Set $q=1-p$. Consider two independent copies of the KPZ
equation, $\mathcal{H}_{\uparrow}(x,t)$ and $\mathcal{H}_{\downarrow}(x,t)$,
both started from the narrow wedge initial data. Let $\mathcal{M}_{p,t}$ be
the almost sure unique maximizer of the process
$x\mapsto(\mathcal{H}_{\uparrow}(x,pt)+\mathcal{H}_{\downarrow}(x,qt))$ which
exists via Lemma 3.1. Set
$\displaystyle D_{1}(x,t)$
$\displaystyle:=\mathcal{H}_{\uparrow}(\mathcal{M}_{p,t},pt)-\mathcal{H}_{\uparrow}(x+\mathcal{M}_{p,t},pt),$
(7.1) $\displaystyle D_{2}(x,t)$
$\displaystyle:=\mathcal{H}_{\downarrow}(x+\mathcal{M}_{p,t},qt)-\mathcal{H}_{\downarrow}(\mathcal{M}_{p,t},qt).$
For all $\rho>0$ we have
$\displaystyle\limsup_{K\to\infty}\limsup_{t\to\infty}\mathbb{P}\left(\int_{[-K,K]^{c}}e^{D_{2}(x,t)-D_{1}(x,t)}\,\mathrm{d}x\geq\rho\right)=0.$
(7.2)
As a corollary, we derive that for any $p\in(0,1)$ the $pt$-point density of
point-to-point $\mathsf{CDRP}$ of length $t$ indeed concentrates in a
microscopic region of size $O(1)$ around the favorite point.
###### Corollary 7.3.
Recall the definition of $\mathsf{CDRP}$ and the notation $\mathbb{P}^{\xi}$
from Definition 1.1. Fix $p\in(0,1)$. Suppose $X\sim\mathsf{CDRP}(0,0;0,t)$.
Consider $\mathcal{M}_{p,t}$ the almost sure unique mode of $f_{p,t}$, the
quenched density of $X(pt)$. We have
$\displaystyle\limsup_{K\to\infty}\limsup_{t\to\infty}\mathbb{P}^{\xi}\left(|X(pt)-\mathcal{M}_{p,t}|\geq
K\right)=0,\mbox{ in probability}.$
One also has an analogous version of Proposition 7.2 involving a single
copy of the KPZ equation viewed around its maximum. This leads to a similar
corollary about tightness of the quenched endpoint distribution for point-to-
line $\mathsf{CDRP}$ (see Definition 1.2) when re-centered around its mode.
The details are skipped for brevity.
The proof of Proposition 7.2 is heavily technical and relies on the tools and
notation from the proof of Proposition 6.1. For clarity, we first prove
Corollary 7.3 and Theorem 1.4 assuming the validity of Proposition 7.2. The
proof of Proposition 7.2 is then presented in Section 7.2.
###### Proof of Corollary 7.3.
We have $\mathcal{Z}(0,0;x,pt)\stackrel{{\scriptstyle
d}}{{=}}e^{\mathcal{H}_{\uparrow}(x,pt)}$ and by time reversal property
$\mathcal{Z}(x,pt;0,t)\stackrel{{\scriptstyle
d}}{{=}}e^{\mathcal{H}_{\downarrow}(x,qt)}$ as functions in $x$, where
$\mathcal{H}_{\uparrow},\mathcal{H}_{\downarrow}$ are independent copies of
KPZ equation started from narrow wedge initial data. The uniqueness of the
mode $\mathcal{M}_{p,t}$ for $f_{p,t}$ is already settled in Lemma 3.1. Thus,
the quenched density of $X(pt)-\mathcal{M}_{p,t}$ is given by
$\displaystyle
f_{p,t}(x+\mathcal{M}_{p,t})=\frac{\exp(D_{2}(x,t)-D_{1}(x,t))}{\int\limits_{\mathbb{R}}\exp(D_{2}(y,t)-D_{1}(y,t))\mathrm{d}y},$
(7.3)
where $D_{i}(x,t),i=1,2$ are defined in (7.1). Thus,
$\displaystyle\mathbb{P}^{\xi}\left(|X(pt)-\mathcal{M}_{p,t}|\geq
K\right)=\frac{\int\limits_{[-K,K]^{c}}e^{D_{2}(x,t)-D_{1}(x,t)}\,\mathrm{d}x}{\int\limits_{\mathbb{R}}e^{D_{2}(x,t)-D_{1}(x,t)}\,\mathrm{d}x}\leq\frac{\int\limits_{[-K,K]^{c}}e^{D_{2}(x,t)-D_{1}(x,t)}\,\mathrm{d}x}{\int\limits_{[-K,K]}e^{D_{2}(x,t)-D_{1}(x,t)}\,\mathrm{d}x}.$
(7.4)
Notice that by (7.2) the numerator on the r.h.s. of (7.4) goes to zero in
probability under the iterated limit $\limsup_{t\to\infty}$ followed by
$\limsup_{K\to\infty}$, whereas, due to Proposition 6.1, under the iterated
limit the denominator converges in distribution to
$\int_{\mathbb{R}}e^{-\mathcal{R}_{2}(x)}\mathrm{d}x$, which is strictly
positive by Lemma 7.1. Thus overall the r.h.s. of (7.4) goes to zero in
probability under the iterated limit. This completes the proof. ∎
###### Proof of Theorem 1.4.
Fix any $p\in(0,1).$ Set $q=1-p$. Recall from (7.3) that
$\displaystyle
f_{p,t}(x+\mathcal{M}_{p,t})=\frac{\exp(D_{2}(x,t)-D_{1}(x,t))}{\int\limits_{\mathbb{R}}\exp(D_{2}(y,t)-D_{1}(y,t))\mathrm{d}y}$
(7.5)
where $D_{i}(x,t),i=1,2$ are defined in (7.1). Note that by Proposition 6.1,
the continuous mapping theorem immediately implies that for any $K<\infty$
$\displaystyle\frac{\exp(D_{2}(x,t)-D_{1}(x,t))}{\int_{-K}^{K}\exp(D_{2}(y,t)-D_{1}(y,t))\mathrm{d}y}\stackrel{{\scriptstyle
d}}{{\rightarrow}}\frac{e^{-\mathcal{R}_{2}(x)}}{\int_{-K}^{K}e^{-\mathcal{R}_{2}(y)}\mathrm{d}y}$
(7.6)
in the uniform-on-compact topology. Here $\mathcal{R}_{2}$ is a 3D Bessel
process with diffusion coefficient $2$. For simplicity, we denote
$\displaystyle\Lambda_{t}(x):=\exp(D_{2}(x,t)-D_{1}(x,t))\text{ and
}\Lambda(x)=\exp(-\mathcal{R}_{2}(x)).$
We can then rewrite (7.5) as a product of four factors:
$\displaystyle
f_{p,t}(x+\mathcal{M}_{p,t})=\frac{\Lambda_{t}(x)}{\int_{\mathbb{R}}\Lambda_{t}(y)\mathrm{d}y}=\frac{\int_{-K}^{K}\Lambda_{t}(y)\mathrm{d}y}{\int_{\mathbb{R}}\Lambda_{t}(y)\mathrm{d}y}\cdot\frac{\int_{\mathbb{R}}\Lambda(y)\mathrm{d}y}{\int_{-K}^{K}\Lambda(y)\mathrm{d}y}\cdot\frac{\int_{-K}^{K}\Lambda(y)\mathrm{d}y}{\int_{\mathbb{R}}\Lambda(y)\mathrm{d}y}\cdot\frac{\Lambda_{t}(x)}{\int_{-K}^{K}\Lambda_{t}(y)\mathrm{d}y}.$
Corollary 7.3 ensures
$\frac{\int_{-K}^{K}\Lambda_{t}(y)\mathrm{d}y}{\int_{\mathbb{R}}\Lambda_{t}(y)\mathrm{d}y}=\mathbb{P}^{\xi}(|X(pt)-\mathcal{M}_{p,t}|\leq
K)\stackrel{{\scriptstyle p}}{{\to}}1$
as $t\to\infty$ followed by $K\to\infty$. Lemma 7.1 with $\sigma=2$ yields
that
$\int_{[-K,K]^{c}}\Lambda(y)\mathrm{d}y=\int_{[-K,K]^{c}}e^{-\mathcal{R}_{2}(y)}\mathrm{d}y\stackrel{{\scriptstyle
p}}{{\rightarrow}}0$ as $K\rightarrow\infty.$ Thus as $K\rightarrow\infty$
$\displaystyle\frac{\int_{\mathbb{R}}\Lambda(y)\mathrm{d}y}{\int_{-K}^{K}\Lambda(y)\mathrm{d}y}\stackrel{{\scriptstyle
p}}{{\rightarrow}}1.$
Meanwhile, (7.6) yields that as $t\rightarrow\infty,$
$\displaystyle\frac{\int_{-K}^{K}\Lambda(y)\mathrm{d}y}{\int_{\mathbb{R}}\Lambda(y)\mathrm{d}y}\cdot\frac{\Lambda_{t}(x)}{\int_{-K}^{K}\Lambda_{t}(y)\mathrm{d}y}\stackrel{{\scriptstyle
d}}{{\rightarrow}}\frac{\int_{-K}^{K}\Lambda(y)\mathrm{d}y}{\int_{\mathbb{R}}\Lambda(y)\mathrm{d}y}\cdot\frac{\Lambda(x)}{\int_{-K}^{K}\Lambda(y)\mathrm{d}y}=\frac{\Lambda(x)}{\int_{\mathbb{R}}\Lambda(y)\mathrm{d}y}.$
in the uniform-on-compact topology. Thus, overall we get that
$f_{p,t}(x+\mathcal{M}_{p,t})\stackrel{{\scriptstyle
d}}{{\to}}\frac{\Lambda(x)}{\int_{\mathbb{R}}\Lambda(y)\mathrm{d}y},$ in the
uniform-on-compact topology. This establishes (1.7), completing the proof of
Theorem 1.4. ∎
### 7.2. Proof of Proposition 7.2 and Theorem 1.5
Coming to the proof of Proposition 7.2, we note that its setup is the same as
that of Proposition 6.1. Hence all the discussion pertaining to Proposition
6.1 applies here. In particular, to prove Proposition 7.2, we will use a few
notations and results from the proof of Proposition 6.1.
###### Proof of Proposition 7.2.
Fix any $M>0$. The proof of (7.2) proceeds by dividing the integral into two
parts depending on the range:
$\displaystyle U_{1}$
$\displaystyle:=[-t^{2/3}M-\mathcal{M}_{p,t},t^{2/3}M-\mathcal{M}_{p,t}]^{c},$
(Deep Tail) $\displaystyle U_{2}$
$\displaystyle:=[-K,K]^{c}\cap[-t^{2/3}M-\mathcal{M}_{p,t},t^{2/3}M-\mathcal{M}_{p,t}],$
(Shallow Tail)
and controlling each of them individually. See Figure 10 for details. In the
following two steps, we control these two kinds of tails respectively.
Figure 10. Illustration for the proof of Proposition 7.2. In the Deep Tail
region we use the parabolic decay of the KPZ line ensemble, and in the Shallow
Tail region we use the non-intersecting Brownian bridge separation estimates
from Proposition 5.6.
Step 1. In this step, we control the Deep Tail region: $U_{1}$. The goal of
this step is to show
$\displaystyle\limsup_{t\to\infty}\mathbb{P}\left(\int_{U_{1}}e^{D_{2}(x,t)-D_{1}(x,t)}\,\mathrm{d}x\geq\tfrac{\rho}{2}\right)\leq\mathrm{C}\exp(-\tfrac{1}{\mathrm{C}}M^{3}),$
(7.7)
for some constant $\mathrm{C}=\mathrm{C}(p)>0$. We now recall the framework of
KPZ line ensemble discussed in Subsection 6.2.1. We define
$\displaystyle\mathcal{S}_{p,t}(x):=p^{1/3}\mathfrak{h}^{(1)}_{pt,\uparrow}(p^{-2/3}x)+q^{1/3}\mathfrak{h}^{(1)}_{qt,\downarrow}(q^{-2/3}x)$
(7.8)
where $\mathfrak{h}_{t,\uparrow},\mathfrak{h}_{t,\downarrow}$ are scaled KPZ
line ensembles corresponding to
$\mathcal{H}_{\uparrow},\mathcal{H}_{\downarrow}$, see (2.6). Observe that
$\displaystyle D_{2}(x,t)-D_{1}(x,t)\stackrel{{\scriptstyle
d}}{{=}}t^{1/3}\left[\mathcal{S}_{p,t}(t^{-2/3}(x+\mathcal{M}_{p,t}))-\sup_{z\in\mathbb{R}}\mathcal{S}_{p,t}(z)\right],$
where $D_{1},D_{2}$ are defined in (7.1). Thus we have
$\displaystyle\int_{U_{1}}\exp(D_{2}(x,t)-D_{1}(x,t))\mathrm{d}x\stackrel{{\scriptstyle
d}}{{=}}\int_{|x|\geq
M}\exp\left(t^{1/3}\left[\mathcal{S}_{p,t}(x)-\sup_{z\in\mathbb{R}}\mathcal{S}_{p,t}(z)\right]\right)\mathrm{d}x$
where $U_{1}$ is defined in (Deep Tail). Towards this end, we define two
events
$\displaystyle\mathsf{A}:=\left\\{\sup_{z\in\mathbb{R}}\mathcal{S}_{p,t}(z)\leq-\tfrac{M^{2}}{4}\right\\},\quad\mathsf{B}:=\left\\{\sup_{x\in\mathbb{R}}\left(\mathcal{S}_{p,t}(x)+x^{2}\right)>\tfrac{M^{2}}{4}\right\\},$
Note that on $\neg A\cap\neg B$, for all $|x|\geq M$, we have
$\displaystyle\mathcal{S}_{p,t}(x)-\sup_{z\in\mathbb{R}}\mathcal{S}_{p,t}(z)\leq\tfrac{M^{2}}{4}+\tfrac{M^{2}}{4}-x^{2}\leq\tfrac{M^{2}}{2}-\tfrac{3M^{2}}{4}-\tfrac{x^{2}}{4}\leq-\tfrac{M^{2}}{4}-\tfrac{x^{2}}{4}.$
This forces
$\displaystyle\int_{|x|\geq
M}\exp\left(t^{1/3}\left[\mathcal{S}_{p,t}(x)-\sup_{z\in\mathbb{R}}\mathcal{S}_{p,t}(z)\right]\right)\mathrm{d}x\leq\int_{[-M,M]^{c}}\exp\left(-t^{1/3}(\tfrac{M^{2}}{4}+\tfrac{y^{2}}{4})\right)\mathrm{d}y,$
which goes to zero as $t\to\infty.$ Hence, for all large enough $t$, the
probability on the l.h.s. of (7.7) is at most
$\mathbb{P}(\mathsf{A})+\mathbb{P}(\mathsf{B})$. Thus it suffices to show
$\displaystyle\mathbb{P}(\mathsf{A})\leq\mathrm{C}\exp\left(-\tfrac{1}{\mathrm{C}}M^{3}\right),\quad\mathbb{P}(\mathsf{B})\leq\mathrm{C}\exp\left(-\tfrac{1}{\mathrm{C}}M^{3}\right).$
(7.9)
To prove the first part of (7.9), note that
$\displaystyle\mathbb{P}\left(\mathsf{A}\right)$
$\displaystyle\leq\mathbb{P}\left(\mathcal{S}_{p,t}(0)\leq-\tfrac{M^{2}}{4}\right)$
$\displaystyle\leq\mathbb{P}\left(p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(1)}(0)\leq-\tfrac{M^{2}}{8}\right)+\mathbb{P}\left(q^{1/3}\mathfrak{h}_{qt,\downarrow}^{(1)}(0)\leq-\tfrac{M^{2}}{8}\right)\leq\mathrm{C}\exp(-\tfrac{1}{\mathrm{C}}{M^{3}}).$
where the last inequality follows by Proposition 2.8 (b), for some constant
$\mathrm{C}=\mathrm{C}(p)>0$. This proves the first part of (7.9). For the
second part of (7.9), following the definition of $\mathcal{S}_{p,t}(x)$ from
(7.8), and using the elementary inequality $\frac{1}{4p}+\frac{1}{4q}\geq 1$
by a union bound we have
$\displaystyle\mathbb{P}\left(\sup_{x\in\mathbb{R}}\left(\mathcal{S}_{p,t}(x)+x^{2}\right)>\tfrac{M^{2}}{4}\right)$
$\displaystyle\leq\mathbb{P}\left(\sup_{x\in\mathbb{R}}\left(p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(1)}(p^{-2/3}x)+\tfrac{x^{2}}{4p}\right)>\tfrac{M^{2}}{8}\right)$
(7.10) $\displaystyle\hskip
28.45274pt+\mathbb{P}\left(\sup_{x\in\mathbb{R}}\left(q^{1/3}\mathfrak{h}_{qt,\downarrow}^{(1)}(q^{-2/3}x)+\tfrac{x^{2}}{4q}\right)>\tfrac{M^{2}}{8}\right).$
Applying Proposition 2.8 (c) with $\beta=\frac{1}{2}$, we get that each of
the terms on the r.h.s. of (7.10) is at most
$\mathrm{C}\exp(-\frac{1}{\mathrm{C}}M^{3})$ where
$\mathrm{C}=\mathrm{C}(p)>0$. This establishes the second part of (7.9)
completing the proof of (7.7).
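The vanishing-integral fact used in Step 1, namely that $\int_{[-M,M]^{c}}\exp(-t^{1/3}(c+\tfrac{y^{2}}{4}))\,\mathrm{d}y\to 0$ as $t\to\infty$ for any fixed $c\geq 0$, can be sanity-checked numerically. In the sketch below (the values of $M$, $c$ and the quadrature grid are arbitrary choices, not taken from the proof), a trapezoid quadrature of this integral is compared with its error-function closed form $2\sqrt{\pi}\,t^{-1/6}e^{-t^{1/3}c}\operatorname{erfc}(Mt^{1/6}/2)$:

```python
import math
import numpy as np

def deep_tail_quadrature(t, M, c, L=50.0, N=400_001):
    """Trapezoid rule for \int_{|y| >= M} exp(-t^{1/3} (c + y^2/4)) dy,
    truncated at |y| = L where the integrand is negligible."""
    y = np.linspace(M, L, N)
    f = np.exp(-t ** (1 / 3) * (c + y ** 2 / 4))
    h = (L - M) / (N - 1)
    return 2 * (np.sum(f) - (f[0] + f[-1]) / 2) * h  # factor 2 by symmetry

def deep_tail_closed_form(t, M, c):
    """Same integral written with the complementary error function."""
    return 2 * math.sqrt(math.pi) * t ** (-1 / 6) \
        * math.exp(-t ** (1 / 3) * c) * math.erfc(M * t ** (1 / 6) / 2)

M, c = 2.0, 1.0  # c plays the role of the constant in the exponent
vals = [deep_tail_closed_form(t, M, c) for t in (1.0, 10.0, 100.0, 1000.0)]
assert all(a > b for a, b in zip(vals, vals[1:]))  # decays to zero in t
assert math.isclose(deep_tail_quadrature(10.0, M, c),
                    deep_tail_closed_form(10.0, M, c), rel_tol=1e-6)
```

The super-exponential decay in $t^{1/3}$ is visible already for moderate $t$.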
Step 2. In this step, we control the Shallow Tail region $U_{2}$. We first
lay out the heuristic idea behind the Shallow Tail estimates. We recall
the nice event $\mathsf{Sp}(\lambda)$ from (6.62), which occurs with high
probability. Assuming $\mathsf{Sp}(\lambda)$ holds, we apply the
$\mathbf{H}_{t}$ Brownian Gibbs property of the KPZ line ensembles and
analyze the desired integral
$\int_{U_{2}}e^{D_{2}(x,t)-D_{1}(x,t)}\mathrm{d}x$
under the ‘free’ Brownian bridge law. Further conditioning on the information
of the maximizer converts the free law into the law of the
$\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$ (defined in Definition 4.4). On
$\mathsf{Sp}(\lambda)$, we may apply Proposition 5.6 to obtain the desired
estimates for the ‘free’ law. One then obtains the desired estimates for the
KPZ law using the lower bound for the normalizing constant from Proposition
2.5 (b).
We now expand upon the technical details. In what follows we will only work
with the right tail:
$\displaystyle
U_{+,2}:=[-t^{2/3}M-\mathcal{M}_{p,t},t^{2/3}M-\mathcal{M}_{p,t}]\cap[K,\infty)=[K,t^{2/3}M-\mathcal{M}_{p,t}]$
and the argument for the left part of the shallow tail is analogous. Note that
we also implicitly assumed $t^{2/3}M-\mathcal{M}_{p,t}\geq K$ above; otherwise
there is nothing to prove. As before we utilize the notations defined in
Subsection 6.2.1. Recall the local maximizer $\mathcal{M}_{p,t}^{M}$ defined
in (6.25). Recall $Y_{M,t,\uparrow}^{(1)},Y_{M,t,\downarrow}^{(1)}$ from
(6.26). Set
$\displaystyle\Gamma_{t,M,K}$
$\displaystyle:=\int_{K}^{Mt^{2/3}-\mathcal{M}_{p,t}}e^{-t^{1/3}\left[Y_{M,t,\uparrow}^{(1)}(t^{-2/3}(\mathcal{M}_{p,t}^{M}+x))-Y_{M,t,\downarrow}^{(1)}(t^{-2/3}(\mathcal{M}_{p,t}^{M}+x))\right]}\mathrm{d}x$
(7.11)
$\displaystyle=\int_{K}^{Mt^{2/3}-\mathcal{M}_{p,t}}\exp(-D_{M,t,\uparrow}(x)+D_{M,t,\downarrow}(x))\mathrm{d}x,$
where the last equality follows from the definition of
$D_{M,t,\uparrow},D_{M,t,\downarrow}$ from (6.28). Recall that the only
difference between $D_{1},D_{2}$ (defined in (6.27)) and
$D_{M,t,\uparrow},D_{M,t,\downarrow}$ is that the former are defined using the
global maximizer $\mathcal{M}_{p,t}$ and the latter using the local maximizer
$\mathcal{M}_{p,t}^{M}$. However, Lemma 3.1 implies that with probability at
least $1-\mathrm{C}\exp(-\frac{1}{\mathrm{C}}M^{3}),$ we have
$\mathcal{M}_{p,t}=\mathcal{M}_{p,t}^{M}$. Next, fix $\lambda>0$ and consider
the event $\mathsf{Sp}(\lambda)$ defined in (6.62). We thus have
$\displaystyle\mathbb{P}\left(\int_{U_{+,2}}e^{D_{2}(x,t)-D_{1}(x,t)}\mathrm{d}x\geq\frac{\rho}{4}\right)\leq\mathrm{C}\exp(-\tfrac{1}{\mathrm{C}}M^{3})+\mathbb{P}(\neg\mathsf{Sp}(\lambda))+\mathbb{P}\left(\Gamma_{t,M,K}\geq\tfrac{\rho}{4},\mathsf{Sp}(\lambda)\right).$
(7.12)
We recall the $\sigma$-fields $\mathcal{F}_{1},\mathcal{F}_{2}$ defined in
(6.42) and (6.43). We first condition on $\mathcal{F}_{1}$. As noted in
Subsection 6.2.5, since $\mathfrak{h}_{pt,\uparrow}^{(1)}$ and
$\mathfrak{h}_{qt,\downarrow}^{(1)}$ are independent, applying the
$\mathbf{H}_{pt}$ and $\mathbf{H}_{qt}$ Brownian Gibbs property from
Proposition 2.5 to $\mathfrak{h}_{pt,\uparrow}^{(1)}$ and
$\mathfrak{h}_{qt,\downarrow}^{(1)}$ respectively, we have
$\displaystyle\mathbb{P}\left(\Gamma_{t,M,K}\geq\tfrac{\rho}{4},\mathsf{Sp}(\lambda)\right)=\mathbb{E}\left[\frac{\mathbb{E}_{\operatorname{free},t}[\mathbf{1}_{\Gamma_{t,M,K}\geq\tfrac{\rho}{4},\mathsf{Sp}(\lambda)}W_{\uparrow}W_{\downarrow}]}{\mathbb{E}_{\operatorname{free},t}[W_{\uparrow}W_{\downarrow}]}\right],$
(7.13)
where $W_{\uparrow}$, $W_{\downarrow}$ are defined in (6.48) and (6.49). Here
$\mathbb{P}_{\operatorname{free},t}$ and $\mathbb{E}_{\operatorname{free},t}$
are the probability and the expectation operator respectively corresponding to
the joint ‘free’ law for
$(p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(1)}(p^{-2/3}x),q^{1/3}\mathfrak{h}_{qt,\downarrow}^{(1)}(q^{-2/3}x))_{x\in[-M,M]}$,
which by Brownian scaling is given by a pair of independent Brownian bridges
$(B_{1}(\cdot),B_{2}(\cdot))$ on $[-M,M]$ with starting points
$(p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(1)}(-Mp^{-2/3}),q^{1/3}\mathfrak{h}_{qt,\downarrow}^{(1)}(-Mq^{-2/3}))$
and endpoints
$(p^{1/3}\mathfrak{h}_{pt,\uparrow}^{(1)}(Mp^{-2/3}),q^{1/3}\mathfrak{h}_{qt,\downarrow}^{(1)}(Mq^{-2/3})).$
In addition, from the last part of Proposition 2.5 we know that for any given
$\lambda>0$, there exists $\delta(M,p,\lambda)>0$ such that
$\displaystyle\mathbb{P}(\mathbb{E}_{\operatorname{free},t}[W_{\uparrow}W_{\downarrow}]>\delta)\geq
1-\lambda.$ (7.14)
Since the weight $W_{\uparrow}W_{\downarrow}\in[0,1],$ (7.13) and (7.14) give
us
$\displaystyle\mbox{r.h.s.~{}of
\eqref{c16}}\leq\mathrm{C}\exp(-\tfrac{1}{\mathrm{C}}M^{3})+\mathbb{P}(\neg\mathsf{Sp}(\lambda))+\lambda+{\frac{1}{\delta}}\mathbb{E}\left[\mathbb{P}_{\operatorname{free},t}\left(\Gamma_{t,M,K}\geq\tfrac{\rho}{4},\mathsf{Sp}(\lambda)\right)\right].$
(7.15)
Next we condition on $\mathcal{F}_{2}$ defined in (6.43). By Proposition 4.10,
upon this conditioning, the free measure of the two Brownian bridges, viewed
around the maximizer, is given by a $\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$
(defined in Definition 4.4). The precise law is the ${\mathsf{Nlarge}}$
law defined in Definition 6.3. Note that $\mathsf{Sp}(\lambda)$ is measurable
w.r.t. $\mathcal{F}_{1}\cup\mathcal{F}_{2}$. By Reverse Fatou’s Lemma and the
tower property of conditional expectations, we obtain that
$\displaystyle\limsup_{K\rightarrow\infty}\limsup_{t\rightarrow\infty}\mathbb{E}\left[\mathbb{P}_{\operatorname{free},t}\left(\Gamma_{t,M,K}\geq\tfrac{\rho}{4},\mathsf{Sp}(\lambda)\right)\right]$
$\displaystyle\leq\mathbb{E}\left[\limsup_{K\rightarrow\infty}\limsup_{t\to\infty}\mathbf{1}_{\mathsf{Sp}(\lambda)}\mathbb{P}_{\mathsf{Nlarge}|2,1}\left(\Gamma_{t,M,K}\geq\tfrac{\rho}{4}\right)\right].$
(7.16)
Following Definition 6.3 and (7.11), we see that under the $\mathsf{Nlarge}$
law,
$\displaystyle\Gamma_{t,M,K}\stackrel{{\scriptstyle
d}}{{=}}\int_{K}^{Mt^{2/3}-\mathcal{M}_{p,t}}e^{-t^{1/3}\left[V_{r,1}^{\mathsf{large}}(t^{-2/3}x)-V_{r,2}^{\mathsf{large}}(t^{-2/3}x)\right]}\mathrm{d}x,$
(7.17)
where
$V_{r}^{\mathsf{large}}=(V_{r,1}^{\mathsf{large}},V_{r,2}^{\mathsf{large}})$
is a $\mathsf{NonInt}\mbox{-}\mathsf{BrBridge}$ defined in Definition 6.3. Now
notice that by the definition in (6.62), on the $\mathsf{Sp}(\lambda)$ event,
the lengths of the Brownian bridges considered are bounded from below and
above and the endpoints are tight. Following the equality in distribution in
(7.17), the technical result of Proposition 5.6 tells us precisely that the
term inside the expectation on the r.h.s. of (7.16) is zero. Thus, going back
to (7.15) we get that
$\displaystyle\limsup_{K\to\infty}\limsup_{t\to\infty}\mathbb{P}\left(\int_{U_{+,2}}e^{D_{2}(x,t)-D_{1}(x,t)}\mathrm{d}x\geq\frac{\rho}{4}\right)\leq\mathrm{C}\exp(-\tfrac{1}{\mathrm{C}}M^{3})+\limsup_{t\to\infty}\mathbb{P}(\neg\mathsf{Sp}(\lambda))+\lambda.$
Taking $\limsup_{\lambda\downarrow 0}$, in view of (6.63), we get that the
last two terms on the r.h.s. of the above equation are zero. Similarly, one
can show the same bound for the integral over
$U_{-,2}:=[-t^{2/3}M-\mathcal{M}_{p,t},t^{2/3}M-\mathcal{M}_{p,t}]\cap(-\infty,-K]$.
Together with (7.7), we thus have
$\displaystyle\limsup_{K\to\infty}\limsup_{t\to\infty}\mathbb{P}\left(\int_{[-K,K]^{c}}e^{D_{2}(x,t)-D_{1}(x,t)}\mathrm{d}x\geq\rho\right)\leq\mathrm{C}\exp(-\tfrac{1}{\mathrm{C}}M^{3}).$
Taking $M\to\infty$ we get (7.2) completing the proof. ∎
###### Proof of Theorem 1.5.
Recall from (1.6) that
$\displaystyle
f_{*,t}(x)=\frac{\mathcal{Z}(0,0;x,t)}{\mathcal{Z}(0,0;*,t)}=\frac{e^{\mathcal{H}(x,t)}}{\int_{\mathbb{R}}e^{\mathcal{H}(y,t)}\mathrm{d}y}.$
The uniqueness of the mode $\mathcal{M}_{*,t}$ for $f_{*,t}$ is already proved
in Lemma 3.1. Thus, we have
$\displaystyle
f_{*,t}(x+\mathcal{M}_{*,t})=\frac{\exp\left(\mathcal{H}(\mathcal{M}_{*,t}+x,t)-\mathcal{H}(\mathcal{M}_{*,t},t)\right)}{\int\limits_{\mathbb{R}}\exp\left(\mathcal{H}(\mathcal{M}_{*,t}+y,t)-\mathcal{H}(\mathcal{M}_{*,t},t)\right)\mathrm{d}y}.$
Just like in Proposition 7.2, we claim that
$\displaystyle\limsup_{K\to\infty}\limsup_{t\to\infty}\mathbb{P}\left(\int_{[-K,K]^{c}}e^{\mathcal{H}(\mathcal{M}_{*,t}+y,t)-\mathcal{H}(\mathcal{M}_{*,t},t)}\mathrm{d}y\geq\rho\right)=0.$
(7.18)
The proof of (7.18) is exactly the same as that of (7.2): we divide the
integral in (7.18) into a deep tail and a shallow tail and bound them
individually. To avoid repetition, we just add a few pointers for the readers.
Indeed, the two key steps of the proof of Proposition 7.2 that bound the deep
and shallow tails can be carried out for the (7.18) case. The deep tail regime
follows exactly the same strategy as Step 1 of the proof of Proposition 7.2
and utilizes the same parabolic decay of the KPZ equation from Proposition
2.8. The shallow tail regime also follows in a similar manner, using the
uniform separation estimate for Bessel bridges from Corollary 5.7.
Now note that by Theorem 1.10 with $k=1$, we have
$\displaystyle\mathcal{H}(\mathcal{M}_{*,t}+x,t)-\mathcal{H}(\mathcal{M}_{*,t},t)\stackrel{{\scriptstyle
d}}{{\to}}\mathcal{R}_{1}(x),$ (7.19)
in the uniform-on-compact topology. Here $\mathcal{R}_{1}$ is a 3D-Bessel
process with diffusion coefficient $1$. With the tail decay estimate in (7.18)
and the same for the Bessel process from Proposition 7.1, in view of (7.19)
one can show
$f_{*,t}(x+\mathcal{M}_{*,t})\to\frac{e^{-\mathcal{R}_{1}(x)}}{\int_{\mathbb{R}}e^{-\mathcal{R}_{1}(y)}\mathrm{d}y}$
in the uniform-on-compact topology by following the analogous argument from
the proof of Theorem 1.4. This completes the proof. ∎
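For intuition about the limit law, the two-sided 3D Bessel process $\mathcal{R}_{1}$ can be realized as the Euclidean norm of a 3D Brownian motion on each side of the origin, and the limiting mode-centered density $e^{-\mathcal{R}_{1}(x)}/\int_{\mathbb{R}}e^{-\mathcal{R}_{1}(y)}\mathrm{d}y$ is then easy to sample. The sketch below (truncation window, step size and seed are arbitrary choices, and the normalization is approximated on the truncated grid) simulates one realization of this random density:

```python
import numpy as np

rng = np.random.default_rng(0)

def two_sided_bessel3(L=10.0, dx=1e-3):
    """One sample of a two-sided 3D Bessel process R(x), x in [-L, L], with
    R(0) = 0, realized as |B(x)| for an independent 3D Brownian motion on
    each side of the origin (diffusion coefficient 1)."""
    n = int(L / dx)
    halves = []
    for _ in range(2):
        steps = rng.normal(scale=np.sqrt(dx), size=(n, 3))
        path = np.cumsum(steps, axis=0)
        halves.append(np.linalg.norm(path, axis=1))
    return np.concatenate([halves[0][::-1], [0.0], halves[1]])

L, dx = 10.0, 1e-3
R = two_sided_bessel3(L, dx)
weights = np.exp(-R)
density = weights / (weights.sum() * dx)  # e^{-R(x)}, normalized on the grid

assert density.min() >= 0.0
assert abs(density.sum() * dx - 1.0) < 1e-9   # a probability density
assert np.argmax(density) == len(R) // 2      # mode at x = 0, where R = 0
```

Since $\mathcal{R}_{1}\geq 0$ with $\mathcal{R}_{1}(0)=0$, every sampled density automatically has its mode at the origin, matching the mode-centering in the theorem.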
## Appendix A Non-intersecting random walks
In this section we prove Lemma 4.7, which establishes the convergence of
non-intersecting random walks to non-intersecting Brownian motions. We remark
that similar results are already known in the literature, such as [EK08],
where the authors considered random walks started at different locations.
Since our walks start at the same point, additional care is required. We now
recall Lemma 4.7 for the readers’ convenience.
###### Lemma A.1.
Let $X_{j}^{i}$ be i.i.d. $\operatorname{N}(0,1)$ random variables. Let
$S_{0}^{(i)}=0$ and $S_{k}^{(i)}=\sum_{j=1}^{k}X_{j}^{i}.$ Consider
$Y_{n}(t)=(Y_{n,1}(t),Y_{n,2}(t)):=(\frac{S_{nt}^{(1)}}{\sqrt{n}},\frac{S_{nt}^{(2)}}{\sqrt{n}})$
an $\mathbb{R}^{2}$-valued process on $[0,1]$, where the in-between points are
defined by linear interpolation. Then, conditioned on the non-intersecting
event $\Lambda_{n}:=\cap_{j=1}^{n}\{S_{j}^{(1)}>S_{j}^{(2)}\},$
$Y_{n}\stackrel{{\scriptstyle d}}{{\to}}W$, where $W(t)=(W_{1}(t),W_{2}(t))$
is distributed as $\mathsf{NonInt}\mbox{-}\mathsf{BM}$ defined in Definition
4.3.
###### Proof of Lemma A.1.
To show weak convergence, it suffices to show finite dimensional convergence
and tightness. Based on the availability of exact joint densities for
non-intersecting random walks from the Karlin-McGregor formula [KM59], the
verification of weak convergence is straightforward. So, we only highlight the
major steps of the computations below.
Step 1. One point convergence at $t=1$. Note that
$\displaystyle\mathbb{P}\left(|{\sqrt{n}Y_{n,i}(t)}-S_{\lfloor
nt\rfloor}^{(i)}|>\sqrt{n}\varepsilon\mid\Lambda_{n}\right)\leq\frac{1}{\mathbb{P}(\Lambda_{n})}\mathbb{P}\left(|X_{\lfloor
nt\rfloor+1}|>\sqrt{n}\varepsilon\right)\leq\tfrac{\mathrm{C}}{\varepsilon^{2}\sqrt{n}}.$
The last inequality above follows by Markov’s inequality and the classical
result of Spitzer [Spi60] that $\mathbb{P}(\Lambda_{n})\geq\tfrac{\mathrm{C}}{\sqrt{n}}$.
Thus it suffices to show finite dimensional convergence for
the càdlàg process:
$\displaystyle(Z_{nt}^{(1)},Z_{nt}^{(2)}):=\frac{1}{\sqrt{n}}(S_{\lfloor
nt\rfloor}^{(1)},S_{\lfloor nt\rfloor}^{(2)}).$ (A.1)
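The bound $\mathbb{P}(\Lambda_{n})\geq\tfrac{\mathrm{C}}{\sqrt{n}}$ can also be seen very concretely: $S_{j}^{(1)}-S_{j}^{(2)}$ is a random walk with symmetric, continuous increments, so the Sparre Andersen identity gives the distribution-free formula $\mathbb{P}(\Lambda_{n})=\binom{2n}{n}4^{-n}\sim 1/\sqrt{\pi n}$ (this identity is a classical fluctuation-theory fact, not part of the argument above). A sketch checking the resulting $n^{-1/2}$ decay (the chosen values of $n$ are arbitrary):

```python
from math import comb, pi, sqrt

def p_lambda(n):
    """P(Lambda_n) for a walk whose steps are symmetric with a continuous
    distribution (Sparre Andersen: exact and distribution-free)."""
    return comb(2 * n, n) / 4 ** n

# sqrt(n) * P(Lambda_n) increases to 1/sqrt(pi), so P(Lambda_n) >= C/sqrt(n):
vals = [sqrt(n) * p_lambda(n) for n in (10, 100, 1000, 10_000)]
assert all(a < b for a, b in zip(vals, vals[1:]))   # monotone for these n
assert all(v > 1 / (2 * sqrt(pi)) for v in vals)    # crude uniform lower bound
assert abs(vals[-1] - 1 / sqrt(pi)) < 1e-3
```

In particular the acceptance rate of naive rejection sampling of the conditioned walks decays only polynomially in $n$.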
We assume that $n$ is large enough so that $\frac{n-1}{M\sqrt{n}}\geq 1$ for
some $M>0$ to be chosen later. When $t=1$, applying the Karlin-McGregor
formula, we obtain that
$\mathbb{P}(Z_{n}^{(1)}\in\mathrm{d}y_{1},Z_{n}^{(2)}\in\mathrm{d}y_{2}\mid\Lambda_{n})=\tau_{n}\cdot
f_{n,1}(y_{1},y_{2})\mathrm{d}y_{1}\mathrm{d}y_{2},$
where
$\displaystyle
f_{n,1}(y_{1},y_{2}):=\int\limits_{a_{1}>a_{2}}p_{1}(a_{1})p_{1}(a_{2})\det(p_{n-1}(a_{i}-y_{j}\sqrt{n}))_{i,j=1}^{2}\mathrm{d}a_{1}\mathrm{d}a_{2},$
and
$\displaystyle\tau_{n}^{-1}:=\int_{r_{1}>r_{2}}\int_{a_{1}>a_{2}}p_{1}(a_{1})p_{1}(a_{2})\det(p_{n-1}(a_{i}-r_{j}\sqrt{n}))_{i,j=1}^{2}\mathrm{d}a_{1}\mathrm{d}a_{2}\mathrm{d}r_{1}\mathrm{d}r_{2}.$
(A.2)
Note that here the Karlin-McGregor formula is applied after conditioning on
the first step of the random walks, with $X_{1}^{1}=a_{1}>X_{1}^{2}=a_{2}.$
We will now show that $\frac{(n-1)^{2}}{\sqrt{n}}\tau_{n}^{-1}$ and
$\frac{(n-1)^{2}}{\sqrt{n}}f_{n,1}(y_{1},y_{2})$ converge to nontrivial
limits. Observe that
$\displaystyle\tfrac{(n-1)^{2}}{\sqrt{n}}\det(p_{n-1}(a_{i}-y_{j}\sqrt{n}))_{i,j=1}^{2}$
$\displaystyle=(n-1)p_{n-1}(a_{1}-y_{2}\sqrt{n})p_{n-1}(a_{2}-y_{1}\sqrt{n})$
(A.3) $\displaystyle\hskip
71.13188pt\cdot\tfrac{n-1}{\sqrt{n}}[e^{\frac{\sqrt{n}(a_{1}-a_{2})(y_{1}-y_{2})}{n-1}}-1].$
Thus, as $n\to\infty$, we have
$\displaystyle\tfrac{(n-1)^{2}}{\sqrt{n}}\det(p_{n-1}(a_{i}-y_{j}\sqrt{n}))_{i,j=1}^{2}$
$\displaystyle\to p_{1}(y_{1})p_{1}(y_{2})(a_{1}-a_{2})(y_{1}-y_{2}).$ (A.4)
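The convergence in (A.4) is already visible numerically. In the sketch below (the test values of $a_{i},y_{j}$ and the choice $n=10^{8}$ are arbitrary), the rescaled $2\times 2$ determinant of heat kernels is compared with its limit:

```python
import math

def p(m, x):
    """Heat kernel p_m(x) = exp(-x^2 / (2m)) / sqrt(2 pi m)."""
    return math.exp(-x * x / (2 * m)) / math.sqrt(2 * math.pi * m)

def scaled_det(n, a1, a2, y1, y2):
    """(n-1)^2 / sqrt(n) * det( p_{n-1}(a_i - y_j sqrt(n)) )_{i,j=1,2}."""
    rn = math.sqrt(n)
    det = (p(n - 1, a1 - y1 * rn) * p(n - 1, a2 - y2 * rn)
           - p(n - 1, a1 - y2 * rn) * p(n - 1, a2 - y1 * rn))
    return (n - 1) ** 2 / rn * det

a1, a2, y1, y2 = 1.0, 0.3, 0.8, -0.5
limit = p(1, y1) * p(1, y2) * (a1 - a2) * (y1 - y2)  # r.h.s. of (A.4)
assert abs(scaled_det(10 ** 8, a1, a2, y1, y2) - limit) < 1e-3 * limit
```

The error decays like $n^{-1/2}$, consistent with the cross terms of order $\sqrt{n}/(n-1)$ in (A.3).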
Next we proceed to find a uniform bound for the expression in (A.3). Note
that for $x,r\geq 1$, one has the elementary inequality $x^{r}\geq x^{r}-1\geq
r(x-1)$. Now taking $r=\frac{n-1}{M\sqrt{n}}$ and
$x=\exp(\frac{\sqrt{n}}{n-1}(a_{1}-a_{2})(y_{1}-y_{2}))$ we get
r.h.s. of (A.3)
$\displaystyle\leq\frac{1}{2\pi}\exp\left(-\tfrac{(a_{1}-y_{2}\sqrt{n})^{2}}{2n-2}-\tfrac{(a_{2}-y_{1}\sqrt{n})^{2}}{2n-2}+\tfrac{1}{M}(a_{1}-a_{2})(y_{1}-y_{2})\right)$
$\displaystyle\leq\frac{1}{2\pi}\exp\left(-\tfrac{y_{2}^{2}}{4}-\tfrac{y_{1}^{2}}{4}+\tfrac{1}{M}(a_{1}-a_{2})(y_{1}-y_{2})+\tfrac{1}{M}(|a_{1}y_{2}|+|a_{2}y_{1}|)\right)$
$\displaystyle\leq\frac{1}{2\pi}\exp\left(-\tfrac{y_{2}^{2}}{4}-\tfrac{y_{1}^{2}}{4}+\tfrac{2(a_{1}^{2}+y_{1}^{2}+a_{2}^{2}+y_{2}^{2})}{M}\right),$
(A.5)
where the last inequality follows by several applications of the elementary
inequality $|xy|\leq\frac{1}{2}(x^{2}+y^{2})$. One can choose $M$ large enough
so that the uniform bound in (A.5) is integrable w.r.t. the measure
$p_{1}(a_{1})p_{1}(a_{2})\mathrm{d}a_{1}\mathrm{d}a_{2}$. With the pointwise
limit from (A.4), by the dominated convergence theorem we have
$\displaystyle\tfrac{(n-1)^{2}}{\sqrt{n}}f_{n,1}(y_{1},y_{2})$
$\displaystyle=\tfrac{(n-1)^{2}}{\sqrt{n}}\int_{a_{1}>a_{2}}p_{1}(a_{1})p_{1}(a_{2})\det(p_{n-1}(a_{i}-y_{j}\sqrt{n}))_{i,j=1}^{2}\mathrm{d}a_{1}\mathrm{d}a_{2}$
$\displaystyle\hskip 56.9055pt\to
p_{1}(y_{1})p_{1}(y_{2})(y_{1}-y_{2})\int_{a_{1}>a_{2}}(a_{1}-a_{2})p_{1}(a_{1})p_{1}(a_{2})\mathrm{d}a_{1}\mathrm{d}a_{2}.$
Similarly one can compute the pointwise limit for the integrand in
$\tau_{n}^{-1}$ (defined in (A.2)) and the uniform bound in (A.5) works for
the denominator as well. We thus have
$\displaystyle\tfrac{(n-1)^{2}}{\sqrt{n}}\tau_{n}^{-1}\to\int_{a_{1}>a_{2}}\int_{r_{1}>r_{2}}p_{1}(r_{1})p_{1}(r_{2})(r_{1}-r_{2})(a_{1}-a_{2})p_{1}(a_{1})p_{1}(a_{2})\mathrm{d}a_{1}\mathrm{d}a_{2}\mathrm{d}r_{1}\mathrm{d}r_{2}.$
(A.6)
Plugging these limits back in, we arrive at (4.1) (the one point density
formula for $\mathsf{NonInt}\mbox{-}\mathsf{BM}$) as the limiting density of
the process in (A.1) at $t=1$.
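As a sanity check on this normalization, the Gaussian integral in (A.6) has a closed form (this closed form is an aside, not used in the proof): writing $A_{1}-A_{2}\sim\operatorname{N}(0,2)$, each of the two factors equals $\mathbb{E}[(A_{1}-A_{2})^{+}]=\sqrt{2}/\sqrt{2\pi}=1/\sqrt{\pi}$, so the limit in (A.6) is $1/\pi$. A quadrature sketch of one factor (grid and truncation are arbitrary):

```python
import numpy as np

# The gap D = A1 - A2 of two independent N(0, 1) variables is N(0, 2), so
#   \int_{a_1 > a_2} (a_1 - a_2) p_1(a_1) p_1(a_2) da_1 da_2
#     = E[(D)^+] = \int_0^\infty d * exp(-d^2/4) / sqrt(4 pi) dd.
d = np.linspace(0.0, 20.0, 200_001)          # tail beyond d = 20 is negligible
f = d * np.exp(-d ** 2 / 4) / np.sqrt(4 * np.pi)
h = 20.0 / 200_000
val = (f.sum() - (f[0] + f[-1]) / 2) * h     # trapezoid rule

assert abs(val - 1 / np.sqrt(np.pi)) < 1e-6  # closed form: 1 / sqrt(pi)
```

Hence $\frac{(n-1)^{2}}{\sqrt{n}}\tau_{n}^{-1}\to 1/\pi$, the square of this constant.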
Step 2. One point convergence at $0<t<1$. When $0<t<1$, with the
Karlin-McGregor formula, we similarly obtain
$\displaystyle\mathbb{P}(Z_{nt}^{(1)}\in\mathrm{d}y_{1},Z_{nt}^{(2)}\in\mathrm{d}y_{2}\mid\Lambda_{n})=\tau_{n}\cdot
f_{n,t}(y_{1},y_{2})\mathrm{d}y_{1}\mathrm{d}y_{2}$ (A.7)
where $\tau_{n}$ is defined in (A.2) and
$\displaystyle f_{n,t}(y_{1},y_{2})$
$\displaystyle=\int_{r_{1}>r_{2}}\int_{a_{1}>a_{2}}p_{1}(a_{1})p_{1}(a_{2})\left[\det(p_{\lfloor
nt\rfloor-1}(a_{i}-y_{j}\sqrt{n}))_{i,j=1}^{2}\right.$ (A.8)
$\displaystyle\hskip 85.35826pt\left.n\cdot\det(p_{n-\lfloor
nt\rfloor}(\sqrt{n}y_{i}-\sqrt{n}r_{j}))_{i,j=1}^{2}\right]\mathrm{d}a_{1}\mathrm{d}a_{2}\mathrm{d}r_{1}\mathrm{d}r_{2}.$
One can check that as $n\to\infty$, we have
$\displaystyle n^{3/2}\det(p_{\lfloor
nt\rfloor-1}(a_{i}-y_{j}\sqrt{n}))_{i,j=1}^{2}$
$\displaystyle\to\tfrac{1}{t}p_{t}(y_{1})p_{t}(y_{2})(a_{1}-a_{2})(y_{1}-y_{2}),$
$\displaystyle n\cdot\det(p_{n-\lfloor
nt\rfloor}(\sqrt{n}y_{i}-\sqrt{n}r_{j}))_{i,j=1}^{2}$
$\displaystyle\to\det(p_{1-t}(y_{i}-r_{j}))_{i,j=1}^{2}.$
One can provide a uniformly integrable bound for the integrand in
$f_{n,t}(y_{1},y_{2})$ in a similar fashion. Thus, by the dominated
convergence theorem,
$\displaystyle n^{3/2}f_{n,t}(y_{1},y_{2})$
$\displaystyle\to\tfrac{1}{t}p_{t}(y_{1})p_{t}(y_{2})(y_{1}-y_{2})\int_{a_{1}>a_{2}}p_{1}(a_{1})p_{1}(a_{2})(a_{1}-a_{2})\mathrm{d}a_{1}\mathrm{d}a_{2}$
$\displaystyle\hskip
85.35826pt\int_{r_{1}>r_{2}}\det(p_{1-t}(y_{i}-r_{j}))_{i,j=1}^{2}\mathrm{d}r_{1}\mathrm{d}r_{2}.$
Using (A.6) we get that $\tau_{n}\cdot f_{n,t}(y_{1},y_{2})$ converges to
(4.2), the one point density formula for $\mathsf{NonInt}\mbox{-}\mathsf{BM}$.
Step 3. Transition density convergence. For the transition densities, let
$0<t_{1}<t_{2}<1,$ and fix $x_{1}>x_{2}$. Another application of the
Karlin-McGregor formula tells us
$\displaystyle\mathbb{P}(Z_{nt_{2}}^{(1)}\in\mathrm{d}y_{1},Z_{nt_{2}}^{(2)}\in\mathrm{d}y_{2}\mid
Z_{nt_{1}}^{(1)}=x_{1},Z_{nt_{1}}^{(2)}=x_{2})$ (A.9)
$\displaystyle=n\det(p_{\lfloor nt_{2}\rfloor-\lfloor
nt_{1}\rfloor}(\sqrt{n}y_{i}-\sqrt{n}x_{j}))_{i,j=1}^{2}$ $\displaystyle\hskip
56.9055pt\cdot\frac{\int\limits_{r_{1}>r_{2}}\det(p_{n-\lfloor
nt_{2}\rfloor}(\sqrt{n}y_{i}-\sqrt{n}r_{j}))_{i,j=1}^{2}\mathrm{d}r_{1}\mathrm{d}r_{2}\mathrm{d}y_{1}\mathrm{d}y_{2}}{\int\limits_{r_{1}>r_{2}}\det(p_{n-\lfloor
nt_{1}\rfloor}(\sqrt{n}x_{i}-\sqrt{n}r_{j}))_{i,j=1}^{2}\mathrm{d}r_{1}\mathrm{d}r_{2}}.$
One can check that as $n\to\infty$,
$\displaystyle\text{ r.h.s of
}\eqref{trst1}\to\frac{\det(p_{t_{2}-t_{1}}(y_{i}-x_{j}))_{i,j=1}^{2}\int_{r_{1}>r_{2}}\det(p_{1-t_{2}}(y_{i}-r_{j}))_{i,j=1}^{2}\mathrm{d}r_{1}\mathrm{d}r_{2}\mathrm{d}y_{1}\mathrm{d}y_{2}}{\int_{r_{1}>r_{2}}\det(p_{1-t_{1}}(x_{i}-r_{j}))_{i,j=1}^{2}\mathrm{d}r_{1}\mathrm{d}r_{2}}$
which is the same as the transition density for
$\mathsf{NonInt}\mbox{-}\mathsf{BM}$ shown in (4.3). This proves finite
dimensional convergence.
Step 4. Tightness. To show tightness, by the Kolmogorov tightness criterion,
it suffices to show that there exist $K>0$ and $n_{0}\in\mathbb{N}$ such that
for all $n\geq n_{0}$,
$\displaystyle\mathbb{E}\left[|Y_{n,i}(t)-Y_{n,i}(s)|^{K}\mid\Lambda_{n}\right]\leq\mathrm{C}_{K,n_{0}}\cdot(t-s)^{2}$
(A.10)
holds for all $0\leq s<t\leq 1.$
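For orientation, note why a bound of this type is plausible: away from the interpolation scale, the unconditioned increment $Y_{n,i}(t)-Y_{n,i}(s)$ is approximately $\operatorname{N}(0,t-s)$, whose $K$-th absolute moment scales as $(t-s)^{K/2}$, so any $K>4$ beats the required $(t-s)^{2}$ once the $\sqrt{n}$ conditioning cost is absorbed as below. A Monte Carlo sketch for $K=6$ (sample size and seed are arbitrary; this illustrates only the Gaussian moment scaling, not the conditioned process):

```python
import numpy as np

rng = np.random.default_rng(7)

def sixth_moment(gap, samples=2_000_000):
    """Monte Carlo estimate of E|W|^6 for W ~ N(0, gap), the approximate law
    of a Gaussian-walk increment over a macroscopic time gap t - s."""
    w = rng.normal(scale=np.sqrt(gap), size=samples)
    return np.mean(w ** 6)

for gap in (0.1, 0.2, 0.4):
    exact = 15 * gap ** 3                 # Gaussian sixth moment: 15 sigma^6
    assert abs(sixth_moment(gap) - exact) < 0.1 * exact
```

The remaining work in this step is exactly to control the small gaps, where the interpolation and the conditioning matter.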
Recall that $\mathbb{P}(\Lambda_{n})\geq\frac{\mathrm{C}}{\sqrt{n}}$. For
$t-s\leq\frac{1}{n}$ with $K\geq 5$ we have
$\displaystyle\mathbb{E}\left[|Y_{n,i}(t)-Y_{n,i}(s)|^{K}\mid\Lambda_{n}\right]$
$\displaystyle\leq\mathrm{C}\cdot\sqrt{n}\mathbb{E}\left[|Y_{n,i}(t)-Y_{n,i}(s)|^{K}\right]$
$\displaystyle\leq\mathrm{C}\cdot\sqrt{n}\frac{(nt-
ns)^{K}}{n^{K/2}}\mathbb{E}[|X_{1}^{1}|^{K}]\leq\mathrm{C}n^{\frac{1-K}{2}}(nt-
ns)^{2}\leq\mathrm{C}_{K}(t-s)^{2}.$
Thus we may assume $t-s\geq 1/n$. Then it is enough to show (A.10) for
$Z_{nt}^{(i)}$ (defined in (A.1)) instead. Note that if
$t-s\in[n^{-1},{n^{-1/4}}]$, we may take $K$ large enough so that
$\frac{1}{4}(K-4)\geq 1$. Then we have
$\displaystyle\mathbb{E}\left[|Z_{nt}^{(i)}-Z_{ns}^{(i)}|^{K}\mid\Lambda_{n}\right]$
$\displaystyle\leq\mathrm{C}\cdot\sqrt{n}\mathbb{E}\left[|Z_{nt}^{(i)}-Z_{ns}^{(i)}|^{K}\right]$
$\displaystyle\leq\mathrm{C}\cdot\sqrt{n}(t-s)^{K/2}\leq\mathrm{C}\cdot
n^{1/2-(K-4)/8}(t-s)^{2}$
where in the last line we used the fact $(t-s)^{(K-4)/2}\leq n^{-(K-4)/8}$. As
$\frac{1}{4}(K-4)\geq 1$, we have
$\mathbb{E}\left[|Z_{nt}^{(i)}-Z_{ns}^{(i)}|^{K}\mid\Lambda_{n}\right]\leq\mathrm{C}(t-s)^{2}$
in this case. So, we are left with the case $t-s\geq n^{-1/4}$.
Let us first treat the case where one endpoint is the origin, say $t=0$ and
$s\geq n^{-\frac{1}{4}}$. As $ns\geq n^{3/4}\to\infty$, we will no longer make
the distinction between $ns$ and $\lfloor ns\rfloor$ in our computations. We
use the pdf formula from (A.7) and (A.8) to get
$\displaystyle\mathbb{E}[|Z_{ns}^{(i)}|^{5}]$
$\displaystyle\leq\tau_{n}\int_{y_{1}>y_{2}}|y_{i}|^{5}\int_{r_{1}>r_{2}}\int_{a_{1}>a_{2}}p_{1}(a_{1})p_{1}(a_{2})\left[\det(p_{ns-1}(a_{i}-y_{j}\sqrt{n}))_{i,j=1}^{2}\right.$
(A.11) $\displaystyle\hskip
85.35826pt\left.\cdot\,n\cdot\det(p_{n-ns}(\sqrt{n}y_{i}-\sqrt{n}r_{j}))_{i,j=1}^{2}\right]\mathrm{d}a_{1}\mathrm{d}a_{2}\mathrm{d}r_{1}\mathrm{d}r_{2}\mathrm{d}y_{1}\mathrm{d}y_{2}.$
For the last determinant we may use
$\displaystyle
n\cdot\det(p_{n-ns}(\sqrt{n}y_{i}-\sqrt{n}r_{j}))_{i,j=1}^{2}\mathrm{d}r_{1}\mathrm{d}r_{2}$
$\displaystyle\leq n\cdot
p_{n-ns}(\sqrt{n}y_{1}-\sqrt{n}r_{1})p_{n-ns}(\sqrt{n}y_{2}-\sqrt{n}r_{2})\mathrm{d}r_{1}\mathrm{d}r_{2}$
which integrates to $1$ irrespective of the values of $y_{1},y_{2}$. Thus
r.h.s. of (A.11)
$\displaystyle\leq\tau_{n}\int_{y_{1}>y_{2}}|y_{i}|^{5}\int_{a_{1}>a_{2}}p_{1}(a_{1})p_{1}(a_{2})\det(p_{ns-1}(a_{i}-y_{j}\sqrt{n}))_{i,j=1}^{2}\mathrm{d}a_{1}\mathrm{d}a_{2}\mathrm{d}y_{1}\mathrm{d}y_{2}.$
(A.12)
Making the change of variable $y_{i}=\sqrt{s}z_{i}$ and setting $m=ns$, we
have
$\mbox{r.h.s.~{}of \eqref{a04}}\leq\tau_{n}\cdot
s^{\frac{5}{2}+1}\mathcal{I}_{m},$
where
$\displaystyle\mathcal{I}_{m}:=\int_{z_{1}>z_{2}}|z_{i}|^{5}\int_{a_{1}>a_{2}}p_{1}(a_{1})p_{1}(a_{2})\det(p_{m-1}(a_{i}-z_{j}\sqrt{m}))_{i,j=1}^{2}\mathrm{d}a_{1}\mathrm{d}a_{2}\mathrm{d}z_{1}\mathrm{d}z_{2}.$
We claim that $\frac{(m-1)^{2}}{\sqrt{m}}{\mathcal{I}_{m}}\leq\mathrm{C}$ for
some universal constant $\mathrm{C}>0$. Clearly this integral is finite for
each $m$. And by the exact same approach as in Step 1, one can show that as
$m\to\infty$,
$\frac{(m-1)^{2}}{\sqrt{m}}\mathcal{I}_{m}\to\int_{z_{1}>z_{2}}|z_{i}|^{5}\int_{a_{1}>a_{2}}p_{1}(z_{1})p_{1}(z_{2})p_{1}(a_{1})p_{1}(a_{2})(a_{1}-a_{2})(z_{1}-z_{2})\mathrm{d}a_{1}\mathrm{d}a_{2}\mathrm{d}z_{1}\mathrm{d}z_{2}.$
Thus, $\frac{(m-1)^{2}}{\sqrt{m}}\mathcal{I}_{m}\leq\mathrm{C}$ for all $m\geq 1$.
Thus following (A.11), (A.12), in view of the above estimate we get
$\displaystyle\mathbb{E}[|Z_{ns}^{(i)}|^{5}]\leq\mathrm{C}\tau_{n}\frac{\sqrt{m}}{(m-1)^{2}}s^{\frac{5}{2}+1}.$
Moreover, by Step 1, $n^{3/2}\tau_{n}^{-1}$ converges to a finite positive
constant. As $m=ns$, we thus get that the above term is at most
$\mathrm{C}\cdot s^{2}$. The case $t\neq 0$ can be checked similarly using the
formulas from (A.7) and (A.8) as well as the transition density formula (A.9).
This completes the proof. ∎
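As a numerical illustration of Lemma A.1 (a sketch; the walk length, batch sizes and seed are arbitrary choices), one can rejection-sample the conditioned walks and compare an endpoint statistic with its $\mathsf{NonInt}\mbox{-}\mathsf{BM}$ prediction: under the limiting $t=1$ density, proportional to $p_{1}(y_{1})p_{1}(y_{2})(y_{1}-y_{2})$ on $y_{1}>y_{2}$, the gap $Y_{n,1}(1)-Y_{n,2}(1)$ has mean $\sqrt{\pi}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100                                    # number of walk steps

gaps = []
for _ in range(8):                         # 8 batches of 25_000 walk pairs
    # D_j = S_j^{(1)} - S_j^{(2)} is itself a Gaussian walk with N(0,2)
    # steps, and Lambda_n is the event that D stays strictly positive.
    D = np.cumsum(rng.normal(scale=np.sqrt(2.0), size=(25_000, n)), axis=1)
    keep = (D > 0).all(axis=1)
    gaps.append(D[keep, -1] / np.sqrt(n))  # Y_{n,1}(1) - Y_{n,2}(1)
gaps = np.concatenate(gaps)

assert len(gaps) > 5_000                   # acceptance rate ~ 1/sqrt(pi n)
assert abs(gaps.mean() - np.sqrt(np.pi)) < 0.15
```

The tolerance absorbs both Monte Carlo error and the finite-$n$ bias of the conditioned walk relative to its Brownian limit.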
## References
* [ACQ11] G. Amir, I. Corwin, and J. Quastel. Probability distribution of the free energy of the continuum directed random polymer in 1+1 dimensions. Communications on Pure and Applied Mathematics, 64(4):466–537, 2011.
* [AKQ14a] T. Alberts, K. Khanin, and J. Quastel. The continuum directed random polymer. Journal of Statistical Physics, 154(1):305–326, 2014.
* [AKQ14b] T. Alberts, K. Khanin, and J. Quastel. The intermediate disorder regime for directed polymers in dimension $1+1$. The Annals of Probability, 42(3):1212–1256, 2014.
* [Bat18] E. Bates. Localization of directed polymers with general reference walk. Electronic Journal of Probability, 23:1–45, 2018.
* [Bat19] E. Bates. Localization and free energy asymptotics in disordered statistical mechanics and random growth models. Stanford University, 2019.
* [Bat21] E. Bates. Full-path localization of directed polymers. Electronic Journal of Probability, 26:1–24, 2021.
* [BC95] L. Bertini and N. Cancrini. The stochastic heat equation: Feynman-Kac formula and intermittence. Journal of Statistical Physics, 78(5):1377–1401, 1995.
* [BC20a] E. Bates and S. Chatterjee. The endpoint distribution of directed polymers. The Annals of Probability, 48(2):817–871, 2020.
* [BC20b] E. Bates and S. Chatterjee. Localization in Gaussian disordered systems at low temperature. The Annals of Probability, 48(6):2755–2806, 2020.
* [Bol89] E. Bolthausen. A note on the diffusion of directed polymers in a random environment. Communications in Mathematical Physics, 123(4):529–534, 1989.
* [CC13] F. Comets and M. Cranston. Overlaps and pathwise localization in the Anderson polymer model. Stochastic Processes and their Applications, 123(6):2446–2471, 2013.
* [CG20a] I. Corwin and P. Ghosal. KPZ equation tails for general initial data. Electronic Journal of Probability, 25:1–38, 2020.
* [CG20b] I. Corwin and P. Ghosal. Lower tail of the KPZ equation. Duke Mathematical Journal, 169(7):1329–1395, 2020.
* [CGH21] I. Corwin, P. Ghosal, and A. Hammond. KPZ equation correlations in time. Ann. Probab., 49(2):832–876, 2021.
* [CH02] P. Carmona and Y. Hu. On the partition function of a directed polymer in a Gaussian random environment. Probability Theory and Related Fields, 124(3):431–457, 2002.
* [CH14] I. Corwin and A. Hammond. Brownian Gibbs property for Airy line ensembles. Invent. Math., 195(2):441–508, 2014.
* [CH16] I. Corwin and A. Hammond. KPZ line ensemble. Probability Theory and Related Fields, 166(1-2):67–185, 2016.
* [Cha19] S. Chatterjee. Proof of the path localization conjecture for directed polymers. Communications in Mathematical Physics, 5(370):703–717, 2019.
* [CHH19] J. Calvert, A. Hammond, and M. Hegde. Brownian structure in the KPZ fixed point. arXiv preprint arXiv:1912.00992, 2019.
* [CHHM21] I. Corwin, A. Hammond, M. Hegde, and K. Matetski. Exceptional times when the KPZ fixed point violates Johansson’s conjecture on maximizer uniqueness. arXiv preprint arXiv:2101.04205, 2021.
* [CN16] F. Comets and V.-L. Nguyen. Localization in log-gamma polymers with boundaries. Probability Theory and Related Fields, 166(1):429–461, 2016.
* [Com17] F. Comets. Directed polymers in random environments. Springer, 2017.
* [Cor12] I. Corwin. The Kardar–Parisi–Zhang equation and universality class. Random Matrices: Theory Appl., 1(01):1130001, 2012.
* [CS19] I. Corwin and H. Shen. Some recent progress in singular stochastic PDEs. arXiv:1904.00334, 2019.
* [CSY03] F. Comets, T. Shiga, and N. Yoshida. Directed polymers in a random environment: path localization and strong disorder. Bernoulli, 9(4):705–723, 2003.
* [CW17] A. Chandra and H. Weber. Stochastic PDEs, regularity structures, and interacting particle systems. In Annales de la faculté des sciences de Toulouse Mathématiques, volume 26, pages 847–909, 2017.
* [CY06] F. Comets and N. Yoshida. Directed polymers in random environment are diffusive at weak disorder. The Annals of Probability, 34(5):1746–1770, 2006.
* [Dau22] D. Dauvergne. Non-uniqueness times for the maximizer of the KPZ fixed point. arXiv preprint arXiv:2202.01700, 2022.
* [Den84] I. Denisov. A random walk and a Wiener process near a maximum. Theory of Probability & Its Applications, 28(4):821–824, 1984.
* [DFF+21] E. Dimitrov, X. Fang, L. Fesser, C. Serio, C. Teitler, A. Wang, and W. Zhu. Tightness of Bernoulli Gibbsian line ensembles. Electronic Journal of Probability, 26:1–93, 2021.
* [Dim21] E. Dimitrov. Characterization of ${H}$-Brownian Gibbsian line ensembles. arXiv preprint arXiv:2103.01186, 2021.
* [DM21] E. Dimitrov and K. Matetski. Characterization of Brownian Gibbsian line ensembles. The Annals of Probability, 49(5):2477–2529, 2021.
* [DOV18] D. Dauvergne, J. Ortmann, and B. Virag. The directed landscape. arXiv preprint arXiv:1812.00309, 2018.
* [DSV20] D. Dauvergne, S. Sarkar, and B. Virág. Three-halves variation of geodesics in the directed landscape. arXiv preprint arXiv:2010.12994, 2020.
* [Dub04] J. Dubédat. Reflected planar Brownian motions, intertwining relations and crossing probabilities. In Annales de l’Institut Henri Poincare (B) Probability and Statistics, volume 40, pages 539–552. Elsevier, 2004.
* [DV21a] D. Dauvergne and B. Virág. Bulk properties of the Airy line ensemble. The Annals of Probability, 49(4):1738–1777, 2021.
* [DV21b] D. Dauvergne and B. Virág. The scaling limit of the longest increasing subsequence. arXiv preprint arXiv:2104.08210, 2021.
* [Dys62] F. J. Dyson. A Brownian-motion model for the eigenvalues of a random matrix. Journal of Mathematical Physics, 3(6):1191–1198, 1962.
* [EK08] P. Eichelsbacher and W. König. Ordered random walks. Electronic Journal of Probability, 13:1307–1336, 2008.
* [Flo14] G. R. M. Flores. On the (strict) positivity of solutions of the stochastic heat equation. The Annals of Probability, 42(4):1635–1643, 2014.
* [FS10] P. L. Ferrari and H. Spohn. Random growth models. arXiv:1003.0881, 2010.
* [Gia07] G. Giacomin. Random polymer models. Imperial College Press, London., 2007.
* [GIP15] M. Gubinelli, P. Imkeller, and N. Perkowski. Paracontrolled distributions and singular PDEs. In Forum of Mathematics, Pi, volume 3. Cambridge University Press, 2015.
* [GJ14] P. Gonçalves and M. Jara. Nonlinear fluctuations of weakly asymmetric interacting particle systems. Archive for Rational Mechanics and Analysis, 212(2):597–644, 2014.
* [GP17] M. Gubinelli and N. Perkowski. KPZ reloaded. Communications in Mathematical Physics, 349(1):165–269, 2017.
* [GP18] M. Gubinelli and N. Perkowski. Energy solutions of KPZ are unique. Journal of the American Mathematical Society, 31(2):427–471, 2018.
* [Hai13] M. Hairer. Solving the KPZ equation. Annals of Mathematics, pages 559–664, 2013.
* [Hai14] M. Hairer. A theory of regularity structures. Inventiones mathematicae, 198(2):269–504, 2014.
* [HH85] D. A. Huse and C. L. Henley. Pinning and roughening of domain walls in Ising systems due to random impurities. Physical Review Letters, 54(25):2708, 1985.
* [HHF85] D. A. Huse, C. L. Henley, and D. S. Fisher. Huse, Henley, and Fisher respond. Physical Review Letters, 55(26):2924, 1985.
* [HM18] M. Hairer and J. Mattingly. The strong Feller property for singular stochastic PDEs. In Annales de l’Institut Henri Poincaré, Probabilités et Statistiques, volume 54, pages 1314–1340. Institut Henri Poincaré, 2018.
* [Igl74] D. L. Iglehart. Functional central limit theorems for random walks conditioned to stay positive. The Annals of Probability, 2(4):608–619, 1974.
* [IS88] J. Z. Imbrie and T. Spencer. Diffusion of directed polymers in a random environment. Journal of Statistical Physics, 52(3):609–626, 1988.
* [Joh00] K. Johansson. Transversal fluctuations for increasing subsequences on the plane. Probab. Theory Related Fields, 116:445–456, 2000.
* [KM59] S. Karlin and J. McGregor. Coincidence probabilities. Pacific Journal of Mathematics, 9(4):1141–1164, 1959.
* [KPZ86] M. Kardar, G. Parisi, and Y.-C. Zhang. Dynamic scaling of growing interfaces. Phys. Rev. Lett., 56(9):889, 1986.
* [KS91] J. Krug and H. Spohn. Kinetic roughening of growing surfaces. Solids far from equilibrium: growth, morphology and defects (C. Godreche, ed.), Cambridge University Press, pages 479–582, 1991.
* [LNP96] C. Licea, C. Newman, and M. Piza. Superdiffusivity in first-passage percolation. Probability Theory and Related Fields, 106:559–591, 1996.
* [Mej04] O. Mejane. Upper bound of a volume exponent for directed polymers in a random environment. Ann. Inst. H. Poincaré Probab. Statist., 40:299–308, 2004.
* [Mil78] P. Millar. A path decomposition for Markov processes. The Annals of Probability, 6(2):345–348, 1978.
* [Mot59] M. Motoo. Proof of the law of iterated logarithm through diffusion equation. Ann. Inst. Statist. Math., 10:21–28, 1959.
* [MQR13] G. F. Moreno, J. Quastel, and D. Remenik. Endpoint distribution of directed polymers in 1+1 dimensions. Communications in Mathematical Physics, 317(2):363–380, 2013.
* [MQR21] K. Matetski, J. Quastel, and D. Remenik. The KPZ fixed point. Acta Mathematica, 227(1):115–203, 2021.
* [OY02] N. O’Connell and M. Yor. A representation for non-colliding random walks. Electronic communications in probability, 7:1–12, 2002.
* [Pim17] L. P. Pimentel. Ergodicity of the KPZ fixed point. arXiv preprint arXiv:1708.06006, 2017.
* [Piz97] M. Piza. Directed polymers in a random environment: Some results on fluctuations. J.Statist.Phys, 89:581–603, 1997.
* [PS02] M. Prähofer and H. Spohn. Scale invariance of the png droplet and the airy process. Journal of statistical physics, 108(5):1071–1106, 2002.
* [QR15] J. Quastel and D. Remenik. Tails of the endpoint distribution of directed polymers. In Annales de l’IHP Probabilités et statistiques, volume 51, pages 1–17, 2015.
* [QS15] J. Quastel and H. Spohn. The one-dimensional KPZ equation and its universality class. J. Stat. Phys., 160(4):965–984, 2015.
* [QS20] J. Quastel and S. Sarkar. Convergence of exclusion processes and KPZ equation to the KPZ fixed point. arXiv preprint arXiv:2008.06584, 2020.
* [Qua11] J. Quastel. Introduction to KPZ. Current developments in mathematics, 2011(1), 2011.
* [RY13] D. Revuz and M. Yor. Continuous martingales and Brownian motion, volume 293. Springer Science & Business Media, 2013.
* [Sep12] T. Seppäläinen. Scaling for a one-dimensional directed polymer with boundary conditions. The Annals of Probability, 40(1):19–73, 2012.
* [Spi60] F. Spitzer. A tauberian theorem and its probability interpretation. Transactions of the American Mathematical Society, 94(1):150–169, 1960.
* [SV21] S. Sarkar and B. Virág. Brownian absolute continuity of the KPZ fixed point with arbitrary initial condition. The Annals of Probability, 49(4):1718–1737, 2021.
* [Var07] V. Vargas. Strong localization and macroscopic atoms for directed polymers. Probability theory and related fields, 138(3):391–410, 2007.
* [Vir20] B. Virág. The heat and the landscape i. arXiv preprint arXiv:2008.07241, 2020.
* [Wal86] J. B. Walsh. An introduction to stochastic partial differential equations. In École d’Été de Probabilités de Saint Flour XIV-1984, pages 265–439. Springer, 1986.
* [War07] J. Warren. Dyson’s brownian motions, intertwining and interlacing. Electronic Journal of Probability, 12:573–590, 2007.
* [Wu21] X. Wu. Tightness and local fluctuation estimates for the KPZ line ensemble. arXiv preprint arXiv:2106.08051, 2021.
|
# Optimal Network Charge for Peer-to-Peer Energy Trading: A Grid Perspective
Yu Yang, Yue Chen,
Guoqiang Hu, and Costas J. Spanos

This work was supported by the Republic of Singapore's National Research Foundation through a grant to the Berkeley Education Alliance for Research in Singapore (BEARS) for the Singapore-Berkeley Building Efficiency and Sustainability in the Tropics (SinBerBEST) Program. BEARS has been established by the University of California, Berkeley as a center for intellectual excellence in research and education in Singapore. Y. Yang is with SinBerBEST, Berkeley Education Alliance for Research in Singapore, Singapore 138602 (e-mail: [email protected]). Y. Chen is with the Department of Mechanical and Automation Engineering, The Chinese University of Hong Kong, Hong Kong SAR, China (e-mail: [email protected]). The work of Y. Chen was supported by a CUHK research startup fund. G. Hu is with the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798 (e-mail: [email protected]). C. J. Spanos is with the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720 USA (e-mail: [email protected]).
###### Abstract
Peer-to-peer (P2P) energy trading is a promising market scheme to accommodate the increasing penetration of distributed energy resources (DERs). However, how P2P trading can be integrated into existing power systems remains to be investigated. In this paper, we apply a network charge as a means for the grid operator to attribute transmission loss and enforce network constraints while enabling P2P transactions. The interaction between the grid operator and the prosumers is modeled as a Stackelberg game, which yields a bi-level optimization problem. We prove that the Stackelberg game admits an _equilibrium_ network charge price. Moreover, we propose a method to compute the network charge price by converting the bi-level optimization into a single-level mixed-integer quadratic program (MIQP), which can handle a reasonable number of prosumers efficiently. Simulations on IEEE bus systems show that the proposed optimal network charge is favorable: it benefits both the grid operator and the prosumers in the P2P market, and it achieves _near-optimal_ social welfare. Moreover, the results show that the presence of energy storage makes the prosumers more sensitive to changes in the network charge price.
###### Index Terms:
Peer-to-peer (P2P) transaction, network charge, transmission loss, Stackelberg
game, bi-level optimization.
## I Introduction
Driven by technological advances and the pressure to move toward a low-carbon society, power systems are experiencing a steady increase in distributed energy resources (DERs), such as home batteries, roof-top solar panels, and on-site wind turbines [1]. As a result, traditional centralized energy management is being challenged, as the DERs on the customer side are beyond the control of the power grid operator. In this context, peer-to-peer (P2P) energy trading has emerged as a promising mechanism to accommodate the DERs [2]. P2P trading aims for a consumer-centric electricity market that allows consumers with DERs (i.e., prosumers) to trade energy surplus or deficiency mutually [3, 4]. The vision of P2P is to empower prosumers to balance supply and demand autonomously and economically by leveraging their complementary and flexible generation and consumption. P2P energy trading benefits both the power grid operator and the prosumers. Specifically, P2P can bring monetary value to the prosumers by allowing them to sell surplus local renewable generation to their neighbors, and vice versa [5]. P2P also favors power grid operation in terms of reducing the cost of generation and transmission expansion needed to serve the yearly increasing demand, as well as reducing transmission loss by driving local self-sufficiency [6].
Given this widespread prospect, the P2P energy trading mechanism has attracted extensive interest from the research community. A large body of work has addressed the matching of supply and demand bids for prosumers with customized preferences or interests, usually under the term market clearing mechanisms. The mechanisms in discussion are diverse and can be broadly categorized into optimization-based approaches [7, 8], auction schemes [6, 9], and bilateral contract negotiations [10, 11]. Several comprehensive and systematic reviews have documented these market clearing mechanisms, e.g., [12, 5, 13]. On top of that, a line of work has discussed trusted, secure, and transparent implementations of the P2P market scheme by combining it with the well-known blockchain technology, e.g., [14, 15].
The above studies mainly focus on the business models of energy trading in the virtual layer and from the prosumers' perspective. However, the energy exchanges in a P2P market require physical delivery by the power grid operator, who is responsible for securing the transmission capacity constraints and compensating the transmission loss. In this regard, effective interaction between the prosumers making energy transactions in the virtual layer and the power grid operator delivering the trades in the physical layer is essential for the successful deployment of the P2P market scheme. The interaction must secure the economic benefit of the prosumers in the P2P market as well as ensure the operational feasibility for the power grid operator. This has been identified as a key issue that remains to be addressed [16].
A network charge, which allows the grid operator to impose grid-related costs on the prosumers for their energy exchanges, has been advocated as a promising tool to bridge this interaction. A network charge is reasonable and natural in several respects. First of all, it is necessary for the power grid to attribute the network investment cost and the transmission loss [17]. In traditional power systems where customers trade energy with the power grid, such cost has been internalized in the electricity price; it is therefore natural to pass the corresponding cost under P2P trading to the prosumers via some price mechanism. Besides, a network charge can shape the P2P energy trading market so as to ensure the feasible delivery of trades in the physical layer by the grid operator [18]. Since the network charge is levied on the trades, it can be used to guide the behavior of the prosumers in the P2P market. As a result, several recent works have relied on network charges to account for grid-related costs or to shape P2P markets, such as [19, 17, 11]. Specifically, [19] incorporates a network charge in a decentralized P2P market clearing mechanism. The work [17] comparatively simulates three network charge models (i.e., a unique model, an electrical-distance-based model, and a zonal model) for shaping the P2P market. The work [11] relies on a network charge model to achieve _ex-post_ transmission loss allocation across the prosumers. These works demonstrate that a network charge can effectively shape the P2P transaction market and, in addition, can attribute the grid-related cost and transmission loss that are actually borne by the grid operator. However, the existing works mainly study how the network charge affects the behavior of prosumers in a P2P market, rather than how the network charge price should be designed when the grid operator and the prosumers act as independent stakeholders playing different roles.
This paper fills the gap by jointly considering the power grid operator, who provides the transmission service, and the prosumers, who make energy transactions in a P2P market, and proposes an optimal network charge mechanism. In particular, considering that the power grid operator and the prosumers are independent stakeholders with different objectives, we model their interaction as a Stackelberg game. First, the grid operator decides on the optimal network charge price to trade off the network charge revenue against the transmission loss subject to the network constraints; then the prosumers optimize their energy management (i.e., energy consuming, storing, and trading) for maximum economic benefit. Our main contributions are:
* (C1)
We propose a Stackelberg game model to capture the interaction between the power grid operator, which imposes the network charge price, and the prosumers, who make energy transactions in a P2P market. Distributed renewable generators and energy storage (ES) devices on the prosumer side are considered. We prove that the Stackelberg game admits an _equilibrium_ network charge price.
* (C2)
To deal with the computational challenge of obtaining the network charge price, we convert the bi-level optimization problem yielded by the Stackelberg game into a single-level mixed-integer quadratic program (MIQP) by exploiting the problem structure. The method can handle a reasonable number of prosumers efficiently.
* (C3)
By simulating IEEE bus systems, we demonstrate that the network charge mechanism is favorable: it benefits both the grid operator and the prosumers in the P2P market, and it provides _near-optimal_ social welfare. In addition, we find that the presence of ES makes the prosumers more sensitive to changes in the network charge price.
The rest of this paper is organized as follows: in Section II, we present the Stackelberg game formulation; in Section III, we propose a single-level conversion method; in Section IV, we examine the proposed network charge mechanism via case studies; in Section V, we conclude the paper and discuss future work.
## II Problem Formulation
Figure 1: Interaction between the grid operator and a P2P energy trading
market.
Fig. 1 shows the interaction between the grid operator and a P2P energy trading market as discussed in this paper. By providing the transmission service and compensating the transmission loss that enables P2P trading, the grid operator plays the leading role by deciding the network charge price. In response, the prosumers with DERs (e.g., solar panels, wind turbines, ES, etc.) in the P2P market optimize their energy management (i.e., energy consuming, storing, and trading) for maximum economic benefit. In this paper, we assume the grid operator and the prosumers are independent stakeholders and are both profit-oriented, each seeking to maximize its own profit through the interaction. For the grid operator, the profit is the network charge revenue minus the cost of transmission loss. For the prosumers, the profit is quantified by the utility (i.e., satisfaction) of consuming a certain amount of energy minus the energy cost, such as the network charge payment. The objective of this paper is to determine the optimal network charge price that maximizes the grid profit while securing the prosumers' profit in the P2P market.
### II-A Network Charge Model
How to charge P2P energy trades a network utilization fee is still an open issue. One approach under extensive discussion is based on the electrical distance and the volume of the transaction. Specifically, if prosumer $i$ buys $p_{ij}$ [$\mathrm{kW}$] units of power from prosumer $j$ over an electrical distance of $d_{ij}$ [$\mathrm{km}$], the network charge is calculated as
$\begin{split}&T(p_{ij})=\gamma d_{ij}p_{ij}\\\ \end{split}$ (1)
where $\gamma$ [S\$/($\mathrm{kW}\cdot\mathrm{km}$)] is the network charge price determined by the grid operator, representing the network utilization fee per unit of energy transaction over per unit of electrical distance.
The electrical distance is determined by the electrical network topology and the measure used. For a given electrical network, there are several popular ways to measure electrical distances, as discussed in [20]. One of them is the _Power Transfer Distribution Factor_ (PTDF), which has been mostly used for network charge calculations (see [19, 17, 11] for examples). We therefore use the PTDF to measure electrical distances. For an electrical network characterized by transmission lines $\mathcal{L}$, the electrical distance between any trading peers $i,j$ based on the PTDF is defined as
$\displaystyle d_{ij}=\sum_{\ell\in\mathcal{L}}|{\rm PTDF}_{\ell,ij}|$ (2)
where ${\rm PTDF}_{\ell,ij}$ denotes the PTDF of prosumers $i,j$ with respect to transmission line $\ell\in\mathcal{L}$, which characterizes the estimated power flow change on line $\ell$ caused by per unit of energy transaction between prosumer $i$ and prosumer $j$ according to DC power flow sensitivity analysis.
The PTDF is directly derived from the DC power flow equations; the details can be found in [21]. In the following, we summarize only the main calculation steps. For an electrical network with $N$ buses and $L$ transmission lines, we first form the nodal susceptance matrix:
$\displaystyle B_{ij}=\begin{cases}\sum_{k=1}^{N}\frac{1}{x_{ik}},&{\rm if}~{}j=i,\\\ -\frac{1}{x_{ij}},&{\rm if}~{}j\neq i,\\\ \end{cases}$
where $x_{ij}$ denotes the reactance of the line connecting bus $i$ and bus $j$.
We denote by $\mathbf{B}_{r}$ the sub-matrix of $\mathbf{B}$ obtained by eliminating the row and column of the reference bus $r$. Without loss of generality, we take bus $N$ as the reference bus, so that $\mathbf{B}_{r}=\mathbf{B}[1:N-1,1:N-1]$, and compute the inverse $\mathbf{X}_{r}=\mathbf{B}_{r}^{-1}$. By inserting a _zero_ row and column for the reference bus $r=N$, we obtain the augmented matrix:
$\displaystyle\mathbf{X}=\begin{pmatrix}\mathbf{X}_{r}&\mathbf{0}\\\ \mathbf{0}^{\top}&0\end{pmatrix}$
Using the matrix $\mathbf{X}$, the PTDF is calculated by
$\displaystyle{\rm PTDF}_{\ell,ij}=\frac{X_{mi}-X_{mj}-X_{ni}+X_{nj}}{x_{\ell}}$ (3)
where $X_{mi},X_{mj},X_{ni},X_{nj}$ are the elements of matrix $\mathbf{X}$ at rows $m,n$ and columns $i,j$, and $\ell$ is the transmission line connecting bus $m$ and bus $n$.
An illustrative example: we use the 5-bus system in Fig. 2 to illustrate the interpretation of PTDF-based electrical distances. Based on (2)-(3) and the reactance parameters $\mathbf{x}$, we obtain the electrical distances $\mathbf{d}$ shown in Fig. 2 (b). In particular, the electrical distance between bus $1$ and bus $3$ is $d_{13}=0.2958+0.4930+0.2113+0.2958+0.2113=1.5072$, i.e., the sum of the absolute PTDFs of buses $1$ and $3$ with respect to the 5 transmission lines. As shown in Fig. 2 (a), this can be interpreted as the total power flow change across all transmission lines caused by per unit of energy transaction between buses $1$ and $3$ according to DC power flow analysis.
Figure 2: (a) Power flow changes of all transmission lines if bus $1$
transfers 1 $\mathrm{kW}$ power to bus $3$ based on DC power flow analysis.
(b) The electrical distances between the buses based on PTDF for the 5-bus
system.
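To make the computation in (2)-(3) concrete, the following pure-Python sketch builds the nodal susceptance matrix from line reactances, inverts the reduced matrix with a zero row/column re-inserted at the slack bus, and sums absolute PTDFs into electrical distances. The 3-bus triangle network used in the usage note is an illustrative assumption (not the 5-bus system of Fig. 2), and buses are 0-indexed.

```python
def inv(A):
    # Gauss-Jordan inverse of a small dense matrix (list of lists).
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        piv = M[c][c]
        M[c] = [v / piv for v in M[c]]
        for r in range(n):
            if r != c:
                f = M[r][c]
                M[r] = [v - f * w for v, w in zip(M[r], M[c])]
    return [row[n:] for row in M]

def electrical_distances(n_bus, lines, x, ref=None):
    """d_ij = sum_l |PTDF_{l,ij}| per Eqs. (2)-(3); buses are 0-indexed."""
    ref = n_bus - 1 if ref is None else ref
    # Nodal susceptance matrix B of the DC power flow.
    B = [[0.0] * n_bus for _ in range(n_bus)]
    for (m, n), xl in zip(lines, x):
        B[m][m] += 1.0 / xl
        B[n][n] += 1.0 / xl
        B[m][n] -= 1.0 / xl
        B[n][m] -= 1.0 / xl
    # Invert the reduced matrix, re-insert zero row/column at the slack bus.
    keep = [i for i in range(n_bus) if i != ref]
    Xr = inv([[B[r][c] for c in keep] for r in keep])
    X = [[0.0] * n_bus for _ in range(n_bus)]
    for a, r in enumerate(keep):
        for b, c in enumerate(keep):
            X[r][c] = Xr[a][b]
    # PTDF_{l,ij} = (X_mi - X_mj - X_ni + X_nj) / x_l, Eq. (3).
    d = [[0.0] * n_bus for _ in range(n_bus)]
    for (m, n), xl in zip(lines, x):
        for i in range(n_bus):
            for j in range(n_bus):
                d[i][j] += abs(X[m][i] - X[m][j] - X[n][i] + X[n][j]) / xl
    return d
```

For a symmetric 3-bus triangle with unit reactances, `electrical_distances(3, [(0, 1), (1, 2), (0, 2)], [1.0, 1.0, 1.0])` returns a symmetric matrix with zero diagonal, matching the interpretation of $d_{ij}$ as a distance.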
### II-B Stackelberg Game Formulation
As discussed, the interaction between the grid operator and the prosumers
shows a hierarchical structure. This corresponds well to a Stackelberg game
where the power grid behaves as the _leader_ and the prosumers are
_followers_. Before the formulation, we first define the main notations in
TABLE I.
TABLE I: Main notations Notation | Definition
---|---
$i,j$ | Prosumer/bus index.
$t$ | Time index.
$\gamma$ | Network charge price.
$p_{ij,t}^{+}/p_{ij,t}^{-}$ | Traded (buy or sell) energy between prosumer $i,j$.
$\theta_{i,t}$ | Phase angle at bus $i$.
$\mathcal{P}_{i,t}$ | Consumed or generated power of prosumer $i$.
$P_{i,t}$ | Injected power at bus $i$.
$p_{i,t}^{\rm ch}/p_{i,t}^{\rm dis}$ | Charged/discharged power of prosumer $i$’s ES.
$e_{i,t}$ | Stored energy of prosumer $i$’s ES.
$U_{i,t}(\mathcal{P}_{i,t})$ | Utility function of prosumer $i$.
$T(p_{ij,t}^{+})$ | Network charge for trading $p_{ij,t}^{+}$ units of energy.
$F_{ij}^{\max}$ | Transmission network capacity for line $(i,j)\in\mathcal{L}$.
$p_{i,t}^{\rm r}$ | Renewable generation of prosumer $i$.
$C_{ij}^{\max}$ | Max. trading power between prosumer $i,j$.
$e_{i}^{\min}/e_{i}^{\max}$ | Min./max. stored energy of prosumer $i$’s ES.
$\mathcal{P}_{i,t}^{\min}/\mathcal{P}_{i,t}^{\max}$ | Min./max. consumption/generation of prosumer $i$.
#### II-B1 Leader
In the upper level, the power grid operator optimizes the network charge price $\gamma$ to trade off the network charge revenue against the transmission loss, subject to the transmission network constraints. The network charge revenue is calculated by (1), and the power transmission loss is consolidated via the DC power flow of the transmission network [22]. The power grid operator's problem is:
$\displaystyle\max_{\mathbf{x}_{U}}~{}$ $\displaystyle{\rm Profit}=\sum_{t}\sum_{i}\sum_{j}\big{(}T(p_{ij,t}^{+})+T(p_{ij,t}^{-})\big{)}/2$ (${\rm P}_{U}$)
$\displaystyle\quad\quad~{}~{}-\rho\sum_{t}\sum_{(i,j)\in\mathcal{L}}b_{ij}(\theta_{i,t}-\theta_{j,t})^{2}$
$\displaystyle{\rm s.t.}$
$\displaystyle~{}\gamma_{\min}\leq\gamma\leq\gamma_{\max}.$ (4a)
$\displaystyle\mathbf{B}\bm{\theta}_{t}=\mathbf{P}_{t},\forall t.$ (4b)
$\displaystyle\theta_{r,t}=0,\forall t.$ (4c)
$\displaystyle|(\theta_{i,t}-\theta_{j,t})b_{ij}|\leq
F_{ij}^{\max},\forall(i,j)\in\mathcal{L},t.$ (4d) $\displaystyle P_{i,t}={\textstyle\sum}_{j}p_{ij,t}^{-}-{\textstyle\sum}_{j}p_{ij,t}^{+},\forall i,t.$ (4e)
$\displaystyle\mathbf{P}^{\min}\leq\mathbf{P}_{t}\leq\mathbf{P}^{\max},\forall
t.$ (4f)
where the decision variables for the power grid operator are $\mathbf{x}_{\rm U}=[\gamma,\theta_{i,t}],\forall i,t$. We use $\mathbf{P}_{t}=[P_{i,t}],\forall i$ to denote the power injections at the buses, and $b_{ij}$ denotes the susceptance of the line connecting bus $i$ and bus $j$. The bounds $\gamma_{\min},\gamma_{\max}>0$ characterize the range of the network charge price. The term $\rho{\textstyle\sum}_{(i,j)\in\mathcal{L}}b_{ij}(\theta_{i,t}-\theta_{j,t})^{2}=\rho{\textstyle\sum}_{(i,j)\in\mathcal{L}}P_{ij,t}^{2}/b_{ij}$, which depends on the power flows, quantifies the consolidated transmission loss over the transmission network $\mathcal{L}$, where $\rho$ is the transmission loss cost coefficient [22]. Constraints (4b) are the DC power flow equations. Constraint (4c) fixes the phase angle of the reference bus $r$. Constraints (4d) model the transmission line capacity limits. In this paper, we use the DC power flow model to account for the transmission constraints and transmission loss. The proposed framework can be readily extended to an AC power flow model by replacing (4b)-(4d) with the DistFlow [23] or modified DistFlow [24] model; the nonconvex AC power flow model can be further convexified into a second-order cone program (SOCP) or a semi-definite program (SDP), and the proposed method can still be used to solve the problem, though with increased complexity.
#### II-B2 Followers
In the lower level, the prosumers in the P2P market respond to the network charge price $\gamma$ for maximal economic benefit. We use $U_{i,t}(\mathcal{P}_{i,t})$ to denote the utility function of prosumer $i$. Due to the presence of DERs, a prosumer can be a consumer or a producer. Accordingly, $U_{i,t}(\mathcal{P}_{i,t})$ may represent the satisfaction of a consumer from using $\mathcal{P}_{i,t}$ units of power or the cost of a producer from generating $\mathcal{P}_{i,t}$ units of energy. We also include the distributed renewable generators and ES devices on the prosumer side in the formulation. In this paper, we assume the prosumers cooperate with each other in the P2P market and formulate the problem as a centralized optimization, as many existing works have shown that cooperation can make all prosumers better off under suitable _ex-post_ profit allocation mechanisms (see [25, 26, 27] for examples). Since the network charge is measured by the traded power regardless of direction, we distinguish the power purchased and sold between prosumer $i$ and prosumer $j$ as $p_{ij,t}^{+}$ and $p_{ij,t}^{-}$, respectively. The problem of optimizing the total prosumer profit considering the network charge payment is presented below.
$\displaystyle\max_{\mathbf{x}_{L}}$ $\displaystyle~{}~{}{\rm
Profit}=\sum_{t}\sum_{i}U_{i,t}(\mathcal{P}_{i,t})$ (${\rm P}_{L}$)
$\displaystyle\quad\quad\quad-\sum_{t}\sum_{i}\sum_{j}\big{(}T(p^{+}_{ij,t})+T(p^{-}_{ij,t})\big{)}/2$
s.t. $\displaystyle p_{ij,t}^{+}=p^{-}_{ji,t},~{}~{}~{}\forall i,j,t.$ (5a)
$\displaystyle 0\leq p_{ij,t}^{+}\leq C_{ij}^{\max},~{}~{}\forall i,j,t.$ (5b)
$\displaystyle 0\leq p_{ij,t}^{-}\leq C_{ij}^{\max},~{}~{}\forall i,j,t.$ (5c)
$\displaystyle\mathcal{P}_{i,t}\leq p_{i,t}^{\rm r}\\!+\\!p_{i,t}^{\rm
dis}\\!-\\!p_{i,t}^{\rm
ch}\\!+\\!{\textstyle\sum}_{j}p_{ij,t}^{+}\\!-\\!{\textstyle\sum}_{j}p_{ij,t}^{-},\forall
i,t.$ (5d)
$\displaystyle\mathcal{P}_{i,t}^{\min}\leq\mathcal{P}_{i,t}\leq\mathcal{P}_{i,t}^{\max},~{}\forall
i,t.$ (5e) $\displaystyle e_{i,t+1}=e_{i,t}+p_{i,t}^{\rm ch}\eta-p_{i,t}^{\rm
dis}/\eta,~{}\forall i,t.$ (5f) $\displaystyle 0\leq p_{i,t}^{\rm ch}\leq
P_{i}^{\rm ch,\max},~{}~{}\forall i,t.$ (5g) $\displaystyle 0\leq p_{i,t}^{\rm
dis}\leq P_{i}^{\rm dis,\max},~{}~{}\forall i,t.$ (5h) $\displaystyle
e_{i}^{\min}\leq e_{i,t}\leq e_{i}^{\max},\forall i,t.$ (5i)
where the decision variables for the prosumers are $\mathbf{x}_{L}=[p_{ij,t}^{+},p_{ij,t}^{-},\mathcal{P}_{i,t},p_{i,t}^{\rm ch},p_{i,t}^{\rm dis},e_{i,t}],\forall i,t$. Constraints (5a) model the consistency of energy transactions between sellers and buyers: since the transmission loss is compensated by the power grid operator, the amount of energy that prosumer $i$ buys from prosumer $j$ equals the amount that prosumer $j$ sells to prosumer $i$. Constraints (5b)-(5c) impose the transaction limits between trading peers. Constraints (5d) ensure the load balance of each prosumer; we use an inequality to capture the case where some renewable generation is curtailed. Constraints (5e) characterize the demand or supply flexibility of the prosumers. Constraints (5f) track the stored energy of the prosumers' ES, with $\eta\in(0,1)$ denoting the charging/discharging efficiency. Constraints (5g)-(5i) impose the charging, discharging, and stored energy capacity limits. In this paper, we focus on the energy trading among the prosumers in the P2P market. For the case where the prosumers also trade electricity with the power grid, the proposed model can be readily extended by adding the cost or revenue of trading with the grid to the prosumers' objective in the lower-level problem (${\rm P}_{L}$).
#### II-B3 Piece-wise linear utility function
Figure 3: (a) Piece-wise linear (PWL) utility function for a consumer, $\nabla U_{i}(\mathcal{P}_{i})\geq 0$. (b) Piece-wise linear (PWL) utility function for a producer, $\nabla U_{i}(\mathcal{P}_{i})\leq 0$ (the time index $t$ is omitted).
This paper employs concave piece-wise linear (PWL) utility functions to capture the prosumers' demand or supply flexibility, as shown in Fig. 3. The motivation is that PWL functions are universal and can approximate other types of utility functions, such as quadratic and logarithmic ones [28]. The PWL utility functions can be obtained by linearizing non-linear utility functions or learned directly from data [29]. Due to the presence of DERs, a prosumer can be a consumer with an energy deficiency or a producer with an energy surplus. Both cases can be formulated uniformly by PWL utility functions that differ only in the sign of the slopes. Fig. 3 (a) and (b) show the two scenarios (the time index $t$ is omitted): if the slopes of the PWL utility function are non-negative, $\nabla U_{i}(\mathcal{P}_{i})\geq 0$, the prosumer plays the role of a consumer; the prosumer plays the role of a producer if $\nabla U_{i}(\mathcal{P}_{i})\leq 0$. As shown in Fig. 3, a general PWL utility function composed of $K$ segments is characterized by its transition points $\mathcal{P}^{k}_{i}$ and slopes $\beta^{k}_{i},k=1,2,\cdots,K$. The function associated with the $k$-th segment can be described as
$\displaystyle U^{k}_{i}(\mathcal{P}_{i})\\!=$ $\displaystyle\alpha_{i}\\!+\\!\sum_{\ell=1}^{k-1}\beta^{\ell}_{i}\left(\mathcal{P}^{\ell}_{i}\\!-\\!\mathcal{P}^{\ell-1}_{i}\right)\\!\\!+\\!\\!\beta^{k}_{i}\left(\mathcal{P}_{i}\\!-\\!\mathcal{P}^{k-1}_{i}\right),\forall i,k.$ (6)
where $\alpha_{i}$ is the constant component of prosumer $i$'s utility function, which can represent the satisfaction level of a prosumer consuming zero units of energy or the start-up generation cost of a producer. Note that $U_{i}(\mathcal{P}_{i})=U_{i}^{k}(\mathcal{P}_{i})$ if $\mathcal{P}_{i}\in[\mathcal{P}^{k-1}_{i},\mathcal{P}^{k}_{i})$.
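Evaluating Eq. (6) amounts to accumulating each segment's slope over the portion of the segment covered by $\mathcal{P}_{i}$. A minimal sketch, where the breakpoints and slopes in the usage note are hypothetical (for a consumer the slopes would be non-negative and non-increasing, for a producer non-positive):

```python
def pwl_utility(P, alpha, points, slopes):
    """Evaluate the concave PWL utility of Eq. (6).

    points : [P^0, P^1, ..., P^K] transition points, non-decreasing.
    slopes : [beta^1, ..., beta^K], non-increasing so the function is concave.
    """
    u = alpha  # utility at P = P^0 (constant term alpha_i)
    for k, beta in enumerate(slopes):
        lo, hi = points[k], points[k + 1]
        if P <= lo:
            break
        # Accumulate beta^k over the portion of segment k covered by P.
        u += beta * (min(P, hi) - lo)
    return u
```

For example, with `alpha=0`, `points=[0, 1, 2]`, and `slopes=[2, 1]`, the utility is `2*P` on the first segment and `2 + (P - 1)` on the second, consistent with (6).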
For the proposed Stackelberg game, we have the following results regarding the
existence of _equilibrium_.
###### Theorem 1.
The Stackelberg game (${\rm P}_{U}$)-(${\rm P}_{L}$) admits an equilibrium.
###### Proof.
For the lower-level problem (${\rm P}_{L}$), we note that the feasible set is compact and the problem is convex for any given network charge price $\gamma$. This implies that an optimal solution to the lower-level problem (${\rm P}_{L}$) always exists and can be expressed as $\mathbf{x}_{L}(\gamma)$. By substituting the solution $\mathbf{x}_{L}(\gamma)$ into the upper-level problem (${\rm P}_{U}$), and by expressing the phase angle decision variables $\bm{\theta}$ in terms of the power flows determined by the lower-level solution $\mathbf{x}_{L}(\gamma)$, we obtain a single-level optimization problem for the Stackelberg game whose only decision variable is the bounded price $\gamma\in[\gamma^{\min},\gamma^{\max}]$; this problem admits at least one optimal solution. Hence the proposed Stackelberg game admits at least one Stackelberg _equilibrium_. ∎
###### Remark 1.
The existence of the Stackelberg _equilibrium_ implies that the proposed optimal network charge model yields an optimal network charge price that maximizes the profit of the grid operator while accounting for the cost-aware behaviors of the prosumers in the P2P market.
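The leader-follower structure behind Theorem 1 can be illustrated with a deliberately simplified toy, not the model (${\rm P}_{U}$)-(${\rm P}_{L}$) itself: a single prosumer with linear utility $\beta p$ trades at full capacity exactly when the marginal utility exceeds the per-unit network charge $\gamma d$, and the grid searches its bounded price range for the profit-maximizing $\gamma$. All numbers and the closed-form follower response are illustrative assumptions.

```python
def follower_response(gamma, beta, d, cap):
    # Linear-utility prosumer: trade at full capacity iff the marginal
    # utility beta exceeds the per-unit network charge gamma * d.
    return cap if beta > gamma * d else 0.0

def grid_profit(gamma, beta, d, cap, rho):
    # Leader objective in the spirit of (P_U): charge revenue minus a loss
    # cost, proxied here by rho * p^2 for the single trade.
    p = follower_response(gamma, beta, d, cap)
    return gamma * d * p - rho * p ** 2

def best_price(beta, d, cap, rho, lo=0.0, hi=1.0, steps=1000):
    # Leader's grid search over the bounded price gamma in [lo, hi].
    candidates = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
    return max(candidates, key=lambda g: grid_profit(g, beta, d, cap, rho))
```

In this toy, raising $\gamma$ increases revenue until the prosumer stops trading, so the equilibrium price sits just below $\beta/d$; the bounded search always returns some maximizer, mirroring the existence argument in the proof.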
## III Methodology
Note that the optimal network charge corresponds to the _equilibrium_ of the Stackelberg game (${\rm P}_{U}$)-(${\rm P}_{L}$), which yields a bi-level optimization problem. Bi-level optimization is generally NP-hard and computationally intensive [30]. This section proposes a method that converts the bi-level problem into a single-level problem that can accommodate a reasonable number of prosumers by exploiting the problem structure. To this end, we first restate the lower-level problem (${\rm P}_{L}$) as
$\displaystyle\max_{\mathbf{x}_{L}}$ $\displaystyle~{}{\rm
Profit}=\sum_{t}\sum_{i}u_{i,t}-\sum_{t}\sum_{i}\sum_{j}T(p_{ij,t}^{+})$
(${\rm P}^{{}^{\prime}}_{L}$) $\displaystyle{\rm s.t.}~{}$ $\displaystyle
0\leq p_{ij,t}^{+}\leq
C_{ij}^{\max}:~{}\quad\quad~{}~{}\underline{\nu}_{ij,t},\overline{\nu}_{ij,t}\geq
0,~{}\forall i,j,t.$ (7a)
$\displaystyle\mathcal{P}_{i,t}\\!\leq\\!p_{i,t}^{\rm r}\\!+\\!p_{i,t}^{\rm
dis}\\!-\\!p_{i,t}^{\rm
ch}\\!+\\!{\textstyle\sum}_{j}p_{ij,t}^{+}\\!-\\!\\!{\textstyle\sum}_{j}p_{ji,t}^{+}:$
$\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad~{}~{}\mu_{i,t}\geq
0,\forall i,t.$ (7b)
$\displaystyle\mathcal{P}_{i}^{\min}\leq\mathcal{P}_{i,t}\leq\mathcal{P}^{\max}_{i}:~{}\quad\quad~{}~{}\underline{\sigma}_{i,t},\overline{\sigma}_{i,t}\geq
0,~{}\forall i,t.$ (7c) $\displaystyle e_{i,t+1}=e_{i,t}+p_{i,t}^{\rm ch}\eta-
p_{i,t}^{\rm dis}/\eta:\quad~{}\mu_{i,t}^{\rm e}\in\mathbb{R},~{}\forall i,t.$
(7d) $\displaystyle 0\leq p_{i,t}^{\rm ch}\leq P_{i}^{\rm
ch,\max}:\quad\quad\quad~{}~{}\underline{\mu}^{\rm
ch}_{i,t},\overline{\mu}_{i,t}^{\rm ch}\geq 0,~{}\forall i,t.$ (7e)
$\displaystyle 0\leq p_{i,t}^{\rm dis}\leq P_{i}^{\rm
dis,\max}:\quad\quad~{}~{}~{}\underline{\mu}_{i,t}^{\rm
dis},\overline{\mu}_{i,t}^{\rm dis}\geq 0,~{}\forall i,t.$ (7f) $\displaystyle
e_{i}^{\min}\\!\leq\\!e_{i,t}\leq
e_{i}^{\max}:\quad\quad\quad~{}~{}~{}\underline{\mu}_{i,t}^{\rm
e},\overline{\mu}_{i,t}^{\rm e}\geq 0,~{}\forall i,t.$ (7g) $\displaystyle
u_{i,t}\leq
U_{i}^{k}(\mathcal{P}_{i,t}):\quad\quad\quad\quad\quad\quad\delta_{i,k,t}\geq
0,~{}\forall i,k,t.$ (7h)
where the decision variables $\mathbf{P}^{-}=[p_{ij,t}^{-}],\forall i,j,t$ are eliminated using $p_{ij,t}^{+}=p_{ji,t}^{-},\forall i,j,t$. Besides, auxiliary variables $u_{i,t}$ are introduced to relax the non-smooth prosumer utility functions. Since the utility functions are concave, it is easy to show that (${\rm P}^{{}^{\prime}}_{L}$) is equivalent to (${\rm P}_{L}$). Additionally, the dual variables of the constraints are listed on their right-hand sides.
For the reformulated lower-level problem (${\rm P}^{{}^{\prime}}_{L}$), we can write the Karush–Kuhn–Tucker (KKT) conditions [31]. We first have the first-order stationarity conditions:
$\displaystyle\partial\mathbb{L}/\partial\mathcal{P}_{i,t}\\!=\\!\mu_{i,t}\\!-\\!\underline{\sigma}_{i,t}\\!+\\!\overline{\sigma}_{i,t}\\!-\\!{\textstyle\sum}_{k}\delta_{i,k,t}\nabla U_{i,t}^{k}(\mathcal{P}_{i,t})\\!=\\!0$ (8a)
$\displaystyle\partial\mathbb{L}/\partial p_{ij,t}^{+}\\!=\\!\gamma
d_{ij}-\underline{\nu}_{ij,t}+\overline{\nu}_{ij,t}-\mu_{i,t}+\mu_{j,t}=0$
(8b) $\displaystyle\partial\mathbb{L}/\partial
u_{i,t}\\!=\\!-1+{\textstyle\sum}_{k}\delta_{i,k,t}=0$ (8c)
$\displaystyle\partial\mathbb{L}/\partial p_{i,t}^{\rm
ch}\\!=\\!\mu_{i,t}-\mu_{i,t}^{\rm e}\eta-\underline{\mu}_{i,t}^{\rm ch}=0$
(8d) $\displaystyle\partial\mathbb{L}/\partial p_{i,t}^{\rm
dis}\\!=\\!-\mu_{i,t}+\mu_{i,t}^{\rm e}/\eta-\underline{\mu}_{i,t}^{\rm
dis}+\overline{\mu}_{i,t}^{\rm dis}=0$ (8e) $\displaystyle\partial\mathbb{L}/\partial e_{i,t}\\!=\\!-\mu_{i,t}^{\rm e}\\!+\\!\mu_{i,t-1}^{\rm e}\\!-\\!\underline{\mu}_{i,t-1}^{\rm e}\\!+\\!\overline{\mu}_{i,t-1}^{\rm e}=0,\forall t>1$ (8f)
where $\mathbb{L}$ denotes the Lagrangian function associated with (${\rm P}^{{}^{\prime}}_{L}$). Based on (6), we have $\nabla U_{i,t}^{k}(\mathcal{P}_{i,t})=\beta_{i}^{k}$, which is the slope of prosumer $i$'s utility function over the $k$-th segment.
In addition, we have the complementary constraints for the inequality constraints (7a)-(7h). Using (7d) as an example, the complementary constraint reads:
$\displaystyle\mu_{i,t}\big{(}\mathcal{P}_{i,t}-p_{i,t}^{\rm r}-p_{i,t}^{\rm dis}+p_{i,t}^{\rm ch}-{\textstyle\sum}_{j}p_{ij,t}^{+}+{\textstyle\sum}_{j}p_{ji,t}^{+}\big{)}=0,~\forall i,t.$ (9)
The general way to handle the non-linear complementary constraints is to introduce binary variables to relax them (see [32] for an example). This could be problematic for problem (${\rm P}^{{}^{\prime}}_{L}$) due to its large number of inequality constraints. To deal with the computational challenge, we exploit the linear programming (LP) structure of problem (${\rm P}^{{}^{\prime}}_{L}$). For an LP, _strong duality_ holds and is interchangeable with the _complementary constraints_ (see [33], Ch. 4, p. 147 for a detailed proof). Therefore, we use the strong duality condition of problem (${\rm P}^{{}^{\prime}}_{L}$) to replace the complementary constraints such as (9). The strong duality condition for problem (${\rm P}^{{}^{\prime}}_{L}$) reads:
$\begin{split}&-\sum_{t}\sum_{i}\sum_{j}\overline{\nu}_{ij,t}C_{ij}^{\max}-\sum_{t}\sum_{i}\mu_{i,t}p_{i,t}^{\rm r}+\sum_{t}\sum_{i}\underline{\sigma}_{i,t}\mathcal{P}_{i,t}^{\min}\\
&-\sum_{t}\sum_{i}\overline{\sigma}_{i,t}\mathcal{P}_{i,t}^{\max}-\sum_{t}\sum_{i}\overline{\mu}_{i,t}^{\rm ch}P_{i}^{\rm ch,\max}-\sum_{t}\sum_{i}\overline{\mu}_{i,t}^{\rm dis}P_{i}^{\rm dis,\max}\\
&+\sum_{t}\sum_{i}\underline{\mu}_{i,t}^{\rm e}e_{i}^{\min}-\sum_{t}\sum_{i}\overline{\mu}_{i,t}^{\rm e}e_{i}^{\max}-\sum_{t}\sum_{i}\sum_{k}\delta_{i,k,t}U_{i}^{k}(0)\\
&=\sum_{t}\sum_{i}\sum_{j}T(p_{ij,t}^{+})-\sum_{t}\sum_{i}u_{i,t}\end{split}$ (10)
Note that the strong duality condition (10) eliminates the large number of non-linear complementary constraints, but requires tackling the bi-linear terms related to the network charge calculation: $T(p_{ij,t}^{+})=\gamma d_{ij}p_{ij,t}^{+}$. To handle such bi-linear terms, we discretize the network charge price and convert the non-linear terms into mixed-integer constraints. Specifically, we first define an auxiliary variable $Z$:
$Z=\sum_{t}\sum_{i}\sum_{j}d_{ij}p_{ij,t}^{+}$
The total network charge for P2P transactions is thus:
$\begin{split}\sum_{t}\sum_{i}\sum_{j}T(p_{ij,t}^{+})=\gamma Z\end{split}$ (11)
We discretize the range of the network charge price $[\gamma_{\min},\gamma_{\max}]$ into $L$ levels $\{\gamma_{1},\gamma_{2},\cdots,\gamma_{L}\}$ with an equal interval $\Delta\gamma=(\gamma_{\max}-\gamma_{\min})/L$. Accordingly, we introduce the binary variables $\mathbf{x}=[x_{\ell}],\ell=1,2,\cdots,L$ to indicate which level of the network charge price is selected; we thus have
$\displaystyle\gamma Z={\textstyle\sum}_{\ell=1}^{L}x_{\ell}\gamma_{\ell}Z$ (12)
$\displaystyle{\textstyle\sum}_{\ell=1}^{L}x_{\ell}=1,~x_{\ell}\in\{0,1\}$ (13)
Note that the network charge calculation relies on the product of the binary variable $x_{\ell}$ and the continuous variable $Z$. This product can be equivalently expressed by the following integer algebra:
$\displaystyle\quad -Mx_{\ell}\leq Y_{\ell}\leq Mx_{\ell}$ (14)
$\displaystyle-M(1-x_{\ell})\leq Z-Y_{\ell}\leq M(1-x_{\ell})$ (15)
where $\gamma Z=\sum_{\ell=1}^{L}\gamma_{\ell}Y_{\ell}$ and $M$ is a sufficiently large positive constant.
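A minimal sketch of this big-M linearization, with hypothetical values for $Z$, $\gamma_{\ell}$, and $M$: when $x_{\ell}=1$ the constraints pin $Y_{\ell}=Z$, and when $x_{\ell}=0$ they pin $Y_{\ell}=0$, so $\sum_{\ell}\gamma_{\ell}Y_{\ell}$ reproduces $\gamma Z$ exactly.

```python
# Sketch of the big-M linearization of the product x_l * Z (hypothetical values).
# Given binary selectors x with sum(x) == 1, constraints (14)-(15) force
# Y_l = Z when x_l = 1 and Y_l = 0 otherwise, so sum_l gamma_l * Y_l == gamma * Z.
M = 1e4          # big-M: any bound with M >= |Z| works
Z = 137.5        # continuous total of distance-weighted trades
gammas = [0.0, 0.12, 0.2, 0.5]   # discretized price levels gamma_l
x = [0, 0, 1, 0]                 # level gamma_3 = 0.2 selected

def feasible(Y):
    """Check constraints (14)-(15) for a candidate vector Y."""
    ok14 = all(-M * x[l] <= Y[l] <= M * x[l] for l in range(len(x)))
    ok15 = all(-M * (1 - x[l]) <= Z - Y[l] <= M * (1 - x[l]) for l in range(len(x)))
    return ok14 and ok15

# The unique feasible choice puts Z on the selected level and 0 elsewhere:
Y = [Z if x[l] == 1 else 0.0 for l in range(len(x))]
assert feasible(Y)
linearized = sum(g * y for g, y in zip(gammas, Y))
assert abs(linearized - 0.2 * Z) < 1e-9   # equals gamma * Z
```

Any $M\geq|Z|$ is valid; a tight $M$ generally strengthens the continuous relaxation of the resulting MIQP.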
By plugging ${\textstyle\sum}_{t}{\textstyle\sum}_{i}{\textstyle\sum}_{j}T(p_{ij,t}^{+})=\gamma Z={\textstyle\sum}_{\ell=1}^{L}\gamma_{\ell}Y_{\ell}$ into (10), and by replacing the lower-level problem (${\rm P}^{{}^{\prime}}_{L}$) with its KKT conditions, we obtain the following single-level mixed-integer quadratic program (MIQP):
$\displaystyle\max_{\mathbf{x}_{U},\mathbf{x}_{L},\bm{\lambda}}~$ $\displaystyle\text{Profit}=\sum_{\ell=1}^{L}\gamma_{\ell}Y_{\ell}-\rho\sum_{(i,j)\in\mathcal{L}}b_{ij}(\theta_{i}-\theta_{j})^{2}$ ($P$)
$\displaystyle\text{s.t.}~\text{(7a)-(7h)}\quad\quad\text{Primal constraints}$
$\displaystyle\quad~\text{(8a)-(8f)}\quad\quad\text{KKT conditions}$
$\displaystyle\quad~\text{(10), (13)-(15)}\quad\text{Strong duality}$
where $\bm{\lambda}=[\bm{\underline{\nu}},\bm{\overline{\nu}},\bm{\mu},\bm{\underline{\sigma}},\bm{\overline{\sigma}},\bm{\mu}^{\rm e},\bm{\underline{\mu}}^{\rm ch},\bm{\overline{\mu}}^{\rm ch},\bm{\underline{\mu}}^{\rm dis},\bm{\overline{\mu}}^{\rm dis},\bm{\underline{\mu}}^{\rm e},\bm{\overline{\mu}}^{\rm e},\bm{\delta}]$ collects the dual variables. Note that this single-level conversion favors computation, as the number of binary variables ($L$) is determined only by the granularity of the network charge discretization and is independent of the number of prosumers, making it possible to accommodate a realistic number of prosumers.
## IV Case Studies
In this section, we evaluate the performance of the proposed network charge mechanism via simulations. We first use the IEEE 9-bus system to evaluate the effectiveness of the solution method, the existence of an _equilibrium_ network charge price, and the social welfare. We then evaluate the performance on larger electrical networks, including the IEEE 39-bus, 57-bus, and 118-bus systems. In particular, we compare the results with and without ES on the prosumer side in the case studies.
TABLE II: Simulation set-ups Param. | Definition | Value
---|---|---
$T$ | Time periods | 24
$\alpha_{i,t}$ | Prosumer PWL utility constant | 0
$\beta_{i,t}^{k}$ | Prosumer PWL utility slopes | $[0,1]$
$K$ | Prosumer PWL utility segments | 2 or 3
$[\gamma^{\min},\gamma^{\max}]$ | Network charge price range | [0, 1] s$/(kW·km)
$\Delta\gamma$ | Network charge price discretization | 0.02 s$/(kW·km)
$L$ | Network charge discretization levels | 51
$e_{i}^{\min}/e_{i}^{\max}$ | Min./max. stored energy of ES | 0/60 kWh
$P_{i}^{\rm ch,\max}$ | Max. charging power | 50 kW
$P_{i}^{\rm dis,\max}$ | Max. discharging power | 50 kW
$\eta$ | Charging/discharging efficiency | 0.9
$\rho$ | Transmission loss cost coefficient | 0.01
### IV-A Simulation Set-ups
We set up the case studies by rescaling real building load profiles [34] and renewable generation profiles (i.e., wind and solar) [35]. To capture the demand flexibility, we set the lower prosumer demand to $\mathcal{P}_{i,t}^{\min}=0$ (we focus on the flexible demand) and the upper prosumer demand to $\mathcal{P}_{i,t}^{\max}=\text{\emph{demand profile}}_{i,t}+30$ $\mathrm{kW}$. For each time period $t$, we uniformly generate the slopes of the prosumer PWL utility functions in $\beta_{i,t}^{k}\in[0,1]$ with $K=2$ or $3$ segments (we only consider customers in the following studies; producers can be included by setting $\beta_{i,t}^{k}\in[-1,0]$ if they exist). We set the constant component of the PWL utility function to $\alpha_{i,t}=0$ for all customers.
Correspondingly, we equally divide the ranges of prosumer demand
$[\mathcal{P}^{\min}_{i,t},\mathcal{P}^{\max}_{i,t}]$ into $K=2$ or $3$
segments to obtain the PWL utility function transition points
$\mathcal{P}^{k}_{i,t}$. We simulate the P2P market over 24 periods with a decision interval of one hour. The settings for the above parameters and the prosumers' ES are gathered in TABLE II. In particular, we set the range of the network charge price as $\gamma^{\min}=0$ and $\gamma^{\max}=1.0$ s$/(kW·km) and the discretization interval as $\Delta\gamma=0.02$ s$/(kW·km) based on the simulation results of Section IV-B, which suggest that these settings provide solutions with sufficiently high accuracy. Besides, the electrical distances measured by PTDF for the studied bus systems are obtained directly with the method in Section II-A.
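The utility-generation procedure above can be sketched as follows (a hypothetical helper, not the authors' code; sorting the sampled slopes in decreasing order is our assumption, made to keep the PWL utility concave, since the text does not state it explicitly):

```python
# Sketch of the prosumer utility set-up: draw K slopes uniformly in [0, 1] and
# split the demand range [p_min, p_max] into K equal segments to obtain the
# PWL transition points. Slopes are sorted in decreasing order (assumption)
# so the resulting piecewise-linear utility is concave.
import random

def make_pwl(p_min, p_max, K, rng):
    slopes = sorted((rng.uniform(0.0, 1.0) for _ in range(K)), reverse=True)
    width = (p_max - p_min) / K
    breakpoints = [p_min + k * width for k in range(K + 1)]
    return slopes, breakpoints

rng = random.Random(42)                      # fixed seed for reproducibility
slopes, bps = make_pwl(0.0, 60.0, K=3, rng=rng)
assert len(slopes) == 3 and len(bps) == 4
assert all(slopes[k] >= slopes[k + 1] for k in range(2))   # concavity
assert bps[0] == 0.0 and bps[-1] == 60.0
```

The same helper covers producers by drawing slopes in $[-1,0]$ instead of $[0,1]$.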
In this paper, we refer to the P2P market with the proposed network charge as _Optimal P2P_. The network charge price is obtained by solving problem ($P$) with _off-the-shelf_ solvers. In the following studies, we compare _Optimal P2P_ with _No P2P_ (P2P transactions are forbidden), _Free P2P_ (P2P transactions are allowed without any network charge imposed on the prosumers), and _Social P2P_ (P2P transactions are determined by maximizing the social profit, i.e., the sum of the grid operator profit and the prosumer profit defined in (${\rm P}_{U}$) and (${\rm P}_{L}$)). Note that the network charge with _Social P2P_ is internalized, as the grid operator and the prosumers are treated as a whole. For _Free P2P_, the grid operator imposes no control on the P2P market and the optimal transactions can be determined by directly solving the lower-level problem (${\rm P}_{L}$) with the network charge components removed (to ensure the uniqueness of the solution, we keep the network charge but set it to a sufficiently small value). In addition, we examine the different markets without and with ES on the prosumer side. For the case without ES, we set $e_{i}^{\max},P_{i}^{\rm ch,\max},P_{i}^{\rm dis,\max}$ to _zero_. For the case with ES, we assume each prosumer has an ES with the configuration shown in TABLE II. The market configurations for comparison are shown in TABLE III. We highlight _Optimal P2P_ and _Optimal P2P + ES_ as our main focus.
TABLE III: Market configurations for comparison Market | ES | P2P | Network charge
---|---|---|---
No P2P | | |
Free P2P | | $\checkmark$ |
Social P2P | | $\checkmark$ | Internalized
Optimal P2P | | $\checkmark$ | $\checkmark$
No P2P + ES | $\checkmark$ | |
Free P2P + ES | $\checkmark$ | $\checkmark$ |
Social P2P + ES | $\checkmark$ | $\checkmark$ | Internalized
Optimal P2P + ES | $\checkmark$ | $\checkmark$ | $\checkmark$
Figure 4: IEEE-9-bus system with 9 prosumers (P1-P9).
Figure 5: Grid profit w.r.t. network charge price $\gamma$ for IEEE 9-bus
system: (a) P2P + No ES. (b) P2P + With ES. ($\gamma_{\rm L}$: minimum network
charge price for the grid to attribute transmission loss. $\gamma_{\rm opt}$:
optimal network charge price for maximum grid profit. $\gamma_{\rm U}$:
maximum network charge price that the prosumers would take.)
### IV-B IEEE 9-bus system
We first use the small-scale IEEE 9-bus system with 9 prosumers shown in Fig. 4 to evaluate the proposed optimal network charge model. By solving problem ($P$), we obtain the optimal network charge price $\gamma_{\rm opt}=0.2$ s$/(kW·km) (No ES) and $\gamma_{\rm opt}=0.12$ s$/(kW·km) (With ES). To verify the solution accuracy, we compare the obtained solutions with those identified by simulating the range of network charge prices $\gamma\in[0,1]$ s$/(kW·km) in increments of $\Delta\gamma=0.01$ s$/(kW·km).
For each simulated network charge price, we evaluate the grid profit, network charge, and transmission loss defined in (${\rm P}_{U}$) and display their changes w.r.t. the network charge price in Fig. 5. From the results, the optimal network charge price can be identified as the price at which the grid profit is maximized, namely $\gamma_{\rm opt}=0.2$ s$/(kW·km) (No ES) and $\gamma_{\rm opt}=0.12$ s$/(kW·km) (With ES), corresponding well to the obtained solutions. This demonstrates the effectiveness of the proposed solution method. By further examining the simulation results, we can draw the following main conclusions.
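The brute-force sweep used for validation can be mimicked on a deliberately tiny toy instance (all numbers hypothetical, not from the paper): one seller with a fixed renewable surplus, one flexible buyer whose lower-level problem is a one-dimensional LP solved at a vertex, and a quadratic loss cost. Sweeping $\gamma$ for maximum grid profit reproduces the increase-then-drop shape seen in Fig. 5:

```python
# Toy grid-search over the network charge price (hypothetical 2-prosumer case):
# seller surplus 10 kW, buyer PWL utility slope beta = 0.75, unit electrical
# distance, loss cost rho * q^2 with rho = 0.01. The lower-level problem
# reduces to a 1-D LP whose optimum sits at a vertex of [0, q_max].
beta, q_max, rho, d = 0.75, 10.0, 0.01, 1.0

def lower_level(gamma):
    """Prosumers' best response: maximize (beta - gamma*d) * q over [0, q_max]."""
    return q_max if beta - gamma * d > 0 else 0.0

def grid_profit(gamma):
    q = lower_level(gamma)
    return gamma * d * q - rho * q ** 2   # network charge revenue minus loss cost

prices = [round(0.1 * l, 2) for l in range(11)]   # gamma in {0.0, 0.1, ..., 1.0}
gamma_opt = max(prices, key=grid_profit)
print(gamma_opt, grid_profit(gamma_opt))   # 0.7 6.0
```

Profit rises linearly while trades persist, then collapses to zero once $\gamma$ exceeds the buyer's marginal utility, mirroring the $\gamma_{\rm L}$, $\gamma_{\rm opt}$, $\gamma_{\rm U}$ pattern discussed below.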
#### IV-B1 The network charge model admits an equilibrium network charge
price
From Fig. 5 (a) (No ES) and Fig. 5 (b) (With ES), we observe that the grid profit first increases and then drops after reaching the optimal network charge price $\gamma_{\rm opt}$, where the grid profit is maximized. Since for any given network charge price $\gamma$ there exists an optimal energy management strategy for the prosumers (i.e., an optimal solution of the lower-level problem (${\rm P}_{L}$)), we conclude that $\gamma_{\rm opt}$ is the _equilibrium_ network charge price. This demonstrates the existence of an _equilibrium_ for the proposed Stackelberg game, in line with Theorem 1.
Besides, the results imply that there exists a minimal network charge price for the grid operator to cover the transmission loss. This minimal network charge price occurs where the network charge revenue equals the transmission loss (i.e., the grid operator makes _zero_ profit). Specifically, the minimal network charge price is $\gamma_{\rm L}=0.03$ s$/(kW·km) both with and without ES for the tested case. In addition, we note that there also exists a maximal network charge price that the prosumers will accept, which is $\gamma_{\rm U}=0.94$ s$/(kW·km) (No ES) and $\gamma_{\rm U}=0.6$ s$/(kW·km) (With ES) for the tested case. When the network charge price exceeds this maximum, no transactions happen in the P2P market. We note that when the prosumers have ES, they accept only a lower network charge price. This is reasonable, as the prosumers can use the ES to shift surplus renewable generation to future use in addition to trading in the P2P market. This can be perceived from Fig. 6, which shows the total P2P trades in the P2P market w.r.t. the network charge price both with and without ES. Fewer trades are made when the prosumers have ES than without ES at any given network charge price. Moreover, the total trades drop faster w.r.t. the network charge price when the prosumers have ES. This implies that the deployment of ES makes the prosumers more sensitive to the network charge price and affects the optimal network charge price.
Figure 6: Total P2P trades w.r.t. network charge price $\gamma$ for IEEE 9-bus
system ($\gamma_{\rm opt}$: optimal network charge price).
#### IV-B2 The network charge can benefit both the grid operator and the
prosumers
From the above results, we conclude that the proposed optimal network charge provides positive profit to the grid operator. This implies that the grid operator can benefit from empowering P2P energy trading. An interesting question is how the economic benefit of P2P is shared between the grid operator and the prosumers under the proposed network charge mechanism. To answer this question, we use _No P2P_ as the base and define the increased profit of the grid operator and the prosumers as the _benefit_ harnessed from a specific P2P market. We compare the benefit of the two stakeholders under _Optimal P2P_ and _Free P2P_. For _Optimal P2P_, we impose the obtained optimal network charge price $\gamma_{\rm opt}=0.2$ s$/(kW·km) (No ES) and $\gamma_{\rm opt}=0.12$ s$/(kW·km) (With ES). For _Free P2P_, we set a sufficiently small network charge price $\gamma=10^{-7}$ s$/(kW·km) to ensure the uniqueness of the solution, as mentioned above. We evaluate the benefit of the grid operator and the prosumers for each time period and display the results over the 24 periods in Fig. 7 (No ES) and Fig. 8 (With ES). We note that when there is no network charge (i.e., _Free P2P_ and _Free P2P + ES_), the prosumers gain considerable benefit from P2P transactions, whereas the grid operator has to absorb the transmission loss, which totals 116.94 s$ (No ES) and 103.38 s$ (With ES). Comparatively, the proposed optimal network charge (i.e., _Optimal P2P_, _Optimal P2P + ES_) provides positive benefit to both the grid operator and the prosumers, i.e., 206.77 s$ vs. 158.67 s$ (No ES) and 103.01 s$ vs. 76.50 s$ (With ES). We therefore conclude that the optimal (_equilibrium_) network charge price that maximizes the grid profit also secures the prosumers' profit. Moreover, the benefit of P2P is almost equally shared between the grid operator and the prosumers (i.e., 57.38% vs. 42.62%).
Figure 7: Benefit of the grid operator and prosumers with _Optimal P2P_ and
_Free P2P_ for IEEE 9-bus system (using No P2P as benchmark).
Figure 8: Benefit of the grid operator and prosumers with _Free P2P + ES_ and
_Optimal P2P + ES_ for IEEE 9-bus system (using No P2P as benchmark).
#### IV-B3 The network charge provides near-optimal social welfare
_Social welfare_ is one of the most important measures in market design. For the concerned P2P market involving the grid operator and the prosumers, the _social welfare_ refers to the sum of the grid operator profit and the prosumer profit (i.e., the social profit), defined as ${\rm Social~profit}={\textstyle\sum}_{t}{\textstyle\sum}_{i}U_{i,t}(\mathcal{P}_{i,t})-\rho{\textstyle\sum}_{t}{\textstyle\sum}_{(i,j)\in\mathcal{L}}b_{ij}(\theta_{i,t}-\theta_{j,t})^{2}$. In this part, we study the social profit yielded by the proposed network charge model. To identify the social optimality gap, we compare _Optimal P2P_ with _Social P2P_. We evaluate the social profit for each time period under _Optimal P2P_ and _Social P2P_, and display the results over the 24 periods in Fig. 9 (a) (No ES) and Fig. 9 (b) (With ES). To visualize the social optimality gap, we fill the difference between the social profit curves of _Optimal P2P_ and _Social P2P_ in blue; the _positive area_ can be interpreted as the social optimality gap. From the results, we conclude that the social optimality gap is about 4.70% (No ES) and 1.32% (With ES). Notably, although we observe a larger shaded area for the case with ES (see Fig. 9 (b)), the accumulated positive area is quite small. This implies that the proposed network charge mechanism provides _near-optimal_ social welfare.
We further study how the social profit is affected by the network charge price. Similarly, we simulate the range of network charge prices $\gamma\in[0,1]$ s$/(kW·km) in increments of $\Delta\gamma=0.01$ s$/(kW·km). For each simulated network charge price, we evaluate the total social profit over the 24 periods. As shown in Fig. 10, the social profit first increases w.r.t. the network charge price and begins to drop after the social optimum is reached. Besides, although the obtained optimal network charge price does not coincide with the social optimum, the social optimality gap is quite small: only 4.70% (No ES) and 1.32% (With ES), as discussed.
Figure 9: Social profit for each time period for IEEE-9-bus system: (a) No ES.
(b) With ES. (Positive shaded area represents the social profit loss of
_Optimal P2P_ compared with _Social P2P_).
Figure 10: Social profit w.r.t. network charge price for IEEE-9-bus system.
(P2P + With ES: social optimality gap 1.32%. P2P + No ES: social optimality
gap 4.70%.)
#### IV-B4 The network charge favors localized transactions and curbs long-distance transactions
In this part, we study how the proposed network charge shapes the P2P markets. We compare _Optimal P2P_ with _Free P2P_, both with and without ES on the prosumer side. For each market, we calculate the aggregated transactions (in $\mathrm{kW}$) between the trading peers over the 24 periods and visualize them in Fig. 11. The circles with IDs indicate the prosumers and the line thickness represents the amount of P2P transaction. We observe that the network charge has an obvious impact on the behaviors of prosumers in the P2P markets. Specifically, for the case without ES (Fig. 11 (a)), we notice that the network charge does not affect transactions between prosumers in close proximity (e.g., 3-6, 7-8, 2-8, 1-4) but markedly discourages long-distance transactions (e.g., 4-6, 1-6, 1-9). This is reasonable, as the network charge depends on the electrical distance. Therefore, the proposed network charge model favors localized transactions and curbs long-distance transactions, which makes sense considering the transmission losses associated with long-distance transactions. For the case with ES (Fig. 11 (b)), we can draw a similar conclusion.
Figure 11: Total P2P trades of 24 periods across the prosumers for IEEE 9-bus
system (line thickness represents the amounts of transaction).
### IV-C IEEE 39-bus, 57-bus and 118-bus systems
TABLE IV: Outcomes of different P2P markets System | Market | Grid-wise | Prosumer-wise | System-wise
---|---|---|---|---
Transmission loss | Network charge | Grid profit | Prosumer profit | Total transaction | Social profit
$\times 10^{2}$ [s$] | $\times 10^{2}$ [s$] | $\times 10^{2}$ [s$] | $\times 10^{2}$ [s$] | $\times 10^{2}$ [kWh] | $\times 10^{2}$ [s$]
9-bus | No P2P | 0 | 0 | 0 | 22.42 | 0 | 22.42
Free P2P | 1.17 | 0 | -1.17 | 28.25 | 17.43 | 27.10
Social P2P | – | – | – | – | – | 27.35
Optimal P2P | 0.19 | 2.26 | 2.07 | 24.00 | 7.82 | 26.08
No P2P+ES | 0 | 0 | 0 | 25.57 | 0 | 25.57
Free P2P+ES | 1.03 | 0 | -1.03 | 28.54 | 15.57 | 27.51
Social P2P+ES | – | – | – | – | – | 27.87
Optimal P2P+ES | 0.28 | 1.22 | 0.94 | 26.56 | 7.64 | 27.50
39-bus | No P2P | 0 | 0 | 0 | 110.86 | 0 | 110.86
Free P2P | 3.27 | 0 | -3.27 | 151.03 | 112.64 | 147.76
Social P2P | – | – | – | – | – | 148.15
Optimal P2P | 0.55 | 14.79 | 14.24 | 123.44 | 52.33 | 137.68
No P2P+ES | 0 | 0 | 0 | 124.69 | 0 | 124.69
Free P2P + ES | 2.62 | 0 | -2.62 | 154.38 | 107.69 | 151.76
Social P2P + ES | – | – | – | – | – | 152.18
Optimal P2P+ES | 0.60 | 11.26 | 10.65 | 134.07 | 45.17 | 144.73
57-bus | No P2P | 0 | 0 | 0 | 191.63 | 0 | 191.63
Free P2P | 65.52 | 0 | -65.52 | 262.05 | 185.61 | 196.53
Social P2P | – | – | – | – | – | 232.05
Optimal P2P | 4.72 | 22.40 | 17.68 | 205.94 | 61.16 | 223.61
No P2P + ES | 0 | 0 | 0 | 212.50 | 0 | 212.50
Free P2P + ES | 66.20 | 0 | -66.20 | 272.92 | 177.23 | 206.75
Social P2P + ES | – | – | – | – | – | 240.19
Optimal P2P+ES | 5.39 | 18.49 | 13.10 | 222.52 | 52.45 | 235.62
118-bus | No P2P | 0 | 0 | 0 | 427.68 | 0 | 427.68
Free P2P | 111.20 | 0 | -111.20 | 567.38 | 367.09 | 455.64
Social P2P | – | – | – | – | – | 530.84
Optimal P2P | 5.33 | 46.97 | 41.63 | 467.55 | 149.59 | 509.18
No P2P + ES | 0 | 0 | 0 | 474.84 | 0 | 474.84
Free P2P + ES | 177.47 | 0 | -177.47 | 603.83 | 391.05 | 426.35
Social P2P + ES | – | – | – | – | – | 557.79
Optimal P2P + ES | 5.46 | 37.43 | 31.97 | 505.38 | 128.27 | 537.35
We further examine the performance of the proposed network charge mechanism by
simulating the IEEE 39-bus, 57-bus, and 118-bus systems. We follow the same
simulation set-ups in Section IV-A and compare the different markets in TABLE
III. We report the results for different markets and bus systems in TABLE IV.
Particularly, we group the results by _Grid-wise_ , _Prosumer-wise_ and
_System-wise_. For _Grid-wise_ , we study the total transmission loss, network
charge revenue and the grid profit. For _Prosumer-wise_ , we are concerned
with the total prosumer profit and total P2P transaction. For _System-wise_ ,
we evaluate the social profit (i.e., grid operator profit plus prosumers’
profit). Note that the results for IEEE 9-bus system are also included for
completeness. The results associated with _Optimal P2P_ and _Optimal P2P + ES_
have been highlighted in bold as our main focus. Overall, for the larger electrical networks, we can draw conclusions similar to those in Section IV-B.
Specifically, the proposed optimal network charge provides positive profit to both the power grid and the prosumers, as shown by _Optimal P2P_ and _Optimal P2P + ES_ in TABLE IV. In contrast, _Free P2P_ and _Free P2P + ES_ only favor the prosumers with a considerable profit increase over _No P2P_ and displease the grid operator, which has to absorb the uncovered transmission loss (i.e., the grid operator earns negative profit). This implies that, from the perspective of economic benefit, the network charge is necessary for the successful deployment of a P2P market in the existing power system.
Besides, we can conclude that the proposed network charge is favorable in terms of how the benefit of P2P is shared between the grid operator and the prosumers. Similarly, using _No P2P_ as the benchmark, we define the profit increase of the grid operator and the prosumers as the benefit. Based on the results in TABLE IV, we report the benefit of the grid operator and the prosumers in Fig. 12 (a) (No ES) and Fig. 12 (b) (With ES). Notably, the grid operator and the prosumers achieve almost equal benefit from the P2P market in all cases. Specifically, the benefit split between the prosumers and the grid operator is about 49.0% vs. 51.0% (No ES) and 48.4% vs. 51.6% (With ES) for the tested IEEE 118-bus system. This makes sense considering the balance of the P2P market.
Figure 12: Benefit of grid operator and the prosumers with _Optimal P2P_ : (a)
No ES. (b) With ES. (using _No P2P_ as benchmark)
In addition, we conclude that the network charge can be used to shape the P2P markets. For the tested bus systems, we compare the P2P transactions of _Free P2P_ and _Optimal P2P_, both with and without ES. Similarly, we visualize the total transactions over the 24 periods between the trading peers in Figs. 14 (39-bus), 15 (57-bus), and 16 (118-bus). The circles with IDs indicate prosumers located at the buses and the line thickness represents the amount of transactions. We note that the imposed network charge has an obvious impact on the energy trading behaviors of prosumers in the P2P market. When there is no network charge and the grid operator imposes no control on the P2P market, the prosumers trade regardless of the electrical distances, leading to massive long-distance trades. This could be problematic considering the high transmission loss and the possible network violations borne by the grid operator. Comparatively, the proposed network charge favors localized transactions and discourages long-distance transactions, yielding much lower transmission loss, as reported in TABLE IV (see Column 3). More importantly, the network charge is necessary for the grid operator to enforce the network constraints.
Last but not least, we conclude that the proposed network charge favors social welfare. By examining the _System-wise_ performance indicated by the social profit in TABLE IV (Column 8), we notice that P2P transactions (i.e., _Optimal P2P_, _Free P2P_ and _Social P2P_) improve the social profit over _No P2P_. More notably, we find that _Optimal P2P_ and _Optimal P2P + ES_ provide near-optimal social profit compared with _Social P2P_. Specifically, the overall social optimality gap is less than 7% (No ES) and less than 5% (With ES) with _Optimal P2P_, as reported in Fig. 13.
Figure 13: Social optimality gap of _Optimal P2P_.
Figure 14: Total P2P trades of 24 periods across the prosumers for IEEE 39-bus
system (line thickness represents the amount of transaction).
Figure 15: Total P2P trades of 24 periods across the prosumers for IEEE 57-bus
system (line thickness represents the amount of transaction).
Figure 16: Total P2P trades of 24 periods across the prosumers for IEEE
118-bus system (line thickness represents the amount of transaction).
## V Conclusion and Future Works
This paper discussed the integration of the P2P market scheme into the
existing power systems from the perspective of network charge design. We used
network charge as a means for the grid operator to attribute grid-related cost
(i.e., transmission loss) and ensure network constraints for empowering P2P
transaction. We characterized the interaction between the power grid operator
and the prosumers in a P2P market as a Stackelberg game. The grid operator
first decides on the optimal network charge price to trade off the network
charge revenue and the transmission loss considering the network constraints,
and then the prosumers optimize their energy management (i.e., energy
consuming, storing, and trading) for maximum economic benefit. We proved that the Stackelberg game admits an _equilibrium_ network charge price. Besides, we proposed a solution method to obtain the _equilibrium_ network charge price by converting the bi-level optimization problem into a single-level optimization problem. By simulating the IEEE bus systems, we demonstrated that the proposed network charge mechanism can benefit both the grid operator and the prosumers and achieves _near-optimal_ social welfare. In addition, we found that the presence of ES on the prosumer side makes the prosumers more sensitive to increases in the network charge price.
In this paper, we have studied the optimal network charge with deterministic
supply and demand and found that the network charge is effective in shaping
the behaviors of prosumers in a P2P market. Some future works along this line
include: 1) designing optimal network charge price considering the
uncertainties of prosumer supply and demand; 2) using the network charge as a
tool to achieve demand response.
# Latency Analysis of Consortium Blockchained Federated Learning
Pengcheng Ren and Tongjiang Yan
###### Abstract
This paper proposes a decentralized federated learning architecture for
Business-to-Business scenarios by introducing a consortium blockchain. We
introduce a model verification mechanism to ensure the quality of the local
models trained by participants. To analyze the latency of the system, a
latency model is constructed from the workflow of the architecture. Finally,
experimental results show that our latency model quantifies the actual delays
well.
###### Index Terms:
Federated learning, consortium blockchain, model verification, latency.
††footnotetext: This work was supported by Fundamental Research Funds for the
Central Universities (20CX05012A), the Major Scientific and Technological
Projects of CNPC under Grant(ZD2019-183-008), and Shandong Provincial Natural
Science Foundation of China (ZR2019MF070). (Corresponding author: Tongjiang
Yan.) The authors are with College of Science, China University of Petroleum,
Qingdao 266555, China (email<EMAIL_ADDRESS>[email protected]).
## I Introduction
Quantities of data are generated continuously and have become a new type of
fuel that drives production. However, data security must be considered when
different participants train a model cooperatively. In this regard, the
federated learning (FL) architecture was proposed [1].
The original FL system needs a central server to collect and distribute the
model weights and gradient parameters (called the local model updates). This
centralized architecture introduces some problems. First, the global model
that all participants receive depends on a single central server; if a failure
happens on the server, each participant would receive an inaccurate global
model. Second, because all the model updates are stored on the server, the
whole system would collapse if the server were attacked.
To avoid the negative effects of the centralized architecture, decentralized
architectures were proposed that replace the server with a blockchain [5].
FL based on blockchain has been applied to the Internet of Things (IoT) [2],
the Internet of Vehicles (IoV) [3], Mobile Edge Computing (MEC) [4], and so
on. It supports not only these Device-to-Device (D2D) applications but also
Business-to-Business (B2B) scenarios. Enterprises that own masses of data,
such as banks, securities firms and hospitals, would like to discover the
intrinsic value hidden in their data by collaborating with others. In this
paper, we present a blockchain-based FL architecture for these B2B scenarios.
Considering the efficiency of the architecture, a consortium blockchain should
be used for decentralized federated learning [6, 7], because only authorized
peers can join the network and access the data stored in the distributed
ledger. The consensus protocol of a consortium blockchain is usually not PoW
(Proof of Work) but a consensus algorithm such as PBFT (Practical Byzantine
Fault Tolerance) [8] or Raft [9], which is more efficient and suitable for a
multi-center network.
The verification mechanism of a blockchain is often used to authenticate the
identities of peers, but in FL the quality of the models is especially
important. Thus we introduce a model verification mechanism to ensure the
quality of the local model updates trained by participants.
The efficiency of a blockchained federated learning system is a key issue for
practical application, so it is important to analyse the latency of the
system. Most existing works explain the system delay via numerical
simulation. These empirical analyses are too costly to yield accurate
results. Furthermore, the underlying networks on which permissioned
blockchains are deployed have a great impact on the results, so the results
are not comparable and lack generality [10]. A theoretical latency analysis
providing a quantitative model is therefore imperative. The main
contributions of this paper are as follows:
* •
A decentralized federated learning architecture based on a consortium
blockchain, called CBFL, is proposed to train a classification model by
logistic regression on a horizontally partitioned dataset in B2B scenarios.
* •
We introduce a model verification mechanism for CBFL to validate the quality
of the local model updates trained by participants.
* •
The theoretical latency model for the CBFL system is divided into three
parts, each involving several subdivisions for fine-grained analysis.
* •
Through the latency model, we obtain an optimal throughput configuration for
PBFT that improves efficiency in practical applications.
## II CBFL Architecture and Operation
Let $E=\\{E_{i}\\}_{i=1}^{N_{E}}$ be a set of enterprises collaborating with
each other in CBFL. The enterprises manage two types of nodes: compute nodes
$\\{C_{i}\\}_{i=1}^{N_{E}}$ and communication peers
$\\{P_{i}\\}_{i=1}^{N_{E}}$. The compute nodes have enough computing power to
train the models, and the communication peers are responsible for maintaining
the blockchain.
The CBFL architecture is organized as two layers: the model update layer and
the blockchain layer as shown in Fig. 1. In the model update layer, compute
nodes train the local models using its own raw data samples locally and upload
the local model updates to the corresponding communication peers in the
blockchain layer. Each peer verifies all the local model updates gathered from
other peers and operates the consensus algorithm to generate a new block.
Finally, the local model updates recorded in the newest block are aggregated
locally by each compute node. So all the participators achieve data
collaboration without leaking the raw data.
Figure 1: CBFL Architecture.
### II-A Model Update Layer
Suppose that the $i$-th enterprise $E_{i}$ owns a set of data $D_{i}$ which
includes $n$ features and $N_{i}$ samples, where $i\in\\{1,2,\cdots,N_{E}\\}$.
Let $D=\bigcup_{i=1}^{N_{E}}D_{i}$ be the entire dataset of all enterprises in
CBFL, where $|D|=N_{D}=\sum_{i=1}^{N_{E}}N_{i}$.
Our CBFL architecture focuses on the classification problem by using the
logistic regression with the horizontally partitioned data [11]. Let
$\left\\{x_{k},y_{k}\right\\}\in D_{i}$ be the $k$-th data sample, where
$x_{k}\in\mathbb{R}^{n}$ and $y_{k}\in\\{-1,1\\}$. The goal of logistic
regression is to train a linear model for classification by solving the
following optimization problem [12]:
$\min\frac{1}{~{}N_{D}}\sum_{i=1}^{N_{E}}\sum_{k=1}^{N_{i}}f_{k}\left(\omega;x_{k},y_{k}\right),$
(1)
where $\omega$ is the model parameter vector and
$f_{k}(\omega)\triangleq
f_{k}\left(\omega;x_{k},y_{k}\right)=\log\left(1+\exp\left(-y_{k}\cdot\omega^{T}x_{k}\right)\right).$
In order to solve the optimization problem (1), the model is locally trained
with the stochastic variance reduced gradient(SVRG) [1]:
$w_{i}^{t,\ell}=w_{i}^{t-1,\ell}-\frac{\beta}{N_{i}}\left(\left[\nabla
f_{k}\left(w_{i}^{t-1,\ell}\right)-\nabla
f_{k}\left(w^{\ell}\right)\right]+\nabla f\left(w^{\ell}\right)\right),$ (2)
where $\omega_{i}^{t,\ell}\in\mathbb{R}^{n}$ is the local weight at the $t$-th
iteration of the $\ell$-th cycle and $\beta>0$ is the step size. Let
$\omega_{i}^{\ell}$ be the local weight after the last local iteration of the
$\ell$-th cycle. So $C_{i}$ gets the local model update
$\left\\{\omega_{i}^{\ell},\left\\{\nabla
f_{k}\left(\omega^{\ell}\right)\right\\}\right\\}\triangleq tx$. Then $C_{i}$
aggregates all the $tx$s to get the global model by
$\omega^{\ell}=\omega^{\ell-1}+\sum_{i=1}^{N_{E}}\frac{N_{i}}{N_{D}}\left(\omega_{i}^{\ell}-\omega^{\ell-1}\right),$
(3)
where $\omega^{\ell}$ is the global model weight of the $\ell$-th cycle.
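To make the update rules concrete, the local SVRG step (2) and the aggregation rule (3) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' implementation; the function names and the toy weights below are hypothetical.

```python
import numpy as np

def svrg_local_step(w_local, grad_fk_local, grad_fk_global, full_grad_global,
                    beta, n_i):
    """One local SVRG iteration per Eq. (2): a variance-reduced gradient step
    with step size beta, scaled by the local sample count n_i."""
    correction = grad_fk_local - grad_fk_global + full_grad_global
    return w_local - (beta / n_i) * correction

def aggregate_global(w_prev, local_weights, sample_counts):
    """Global aggregation per Eq. (3): each compute node adds the
    sample-weighted average of the local weight deltas to the previous
    global weight."""
    n_total = sum(sample_counts)
    return w_prev + sum((n_i / n_total) * (w_i - w_prev)
                        for w_i, n_i in zip(local_weights, sample_counts))

# Toy example: two enterprises holding 1 and 3 samples respectively.
w_prev = np.zeros(3)
local_ws = [np.array([1.0, 1.0, 1.0]), np.array([3.0, 3.0, 3.0])]
w_new = aggregate_global(w_prev, local_ws, [1, 3])  # each entry 2.5
```

Note that the aggregation reduces to a FedAvg-style weighted mean of the local weights when applied to the previous global model.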
### II-B Blockchain Layer with Model Verification Mechanism
In the blockchain layer, the size of each block is set to $h+\delta_{m}N_{B}$,
where $h$ is the block header size, $\delta_{m}$ is the size of a single $tx$
and $N_{B}$ is the maximum number of $tx$s within a block. Reaching consensus
with a consortium blockchain is more efficient than with a public blockchain.
Besides, since peers in a consortium blockchain are authorized, the data
stored in blocks are more secure.
The consensus protocol is the core component of a blockchain. In this paper,
PBFT is adopted to get the consensus for the consortium blockchain network. It
can reach consensus among $N_{P}$ peers with up to $f$ faulty peers, where
$f=\lfloor\frac{N_{P}-1}{3}\rfloor$.
PBFT includes three phases, as shown in Fig. 2. A leader $L$ is chosen among
all peers beforehand to create a candidate block in which $tx$s are sorted by
timestamp. Then $L$ disseminates the candidate block to all other peers in a
pre-prepare message during the pre-prepare phase. If a peer receives and
accepts the message, it stores the message and enters the prepare phase,
broadcasting prepare messages. Peers then wait for a quorum of prepare
messages, i.e., at least $2f+1$ prepare messages that match the stored
pre-prepare message. In the third phase, peers broadcast commit messages to
all others. If a peer then collects another quorum of commit messages that
match the previously collected prepare messages, it commits the state
transition and replies to the compute node [8].
Figure 2: Three phases of PBFT.
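The fault-tolerance bound and quorum size that the three phases rely on can be computed directly; this small helper is our own sketch, not part of the paper.

```python
def pbft_params(n_peers: int):
    """PBFT tolerates f faulty peers when n_peers >= 3f + 1, so the largest
    tolerable f is (n_peers - 1) // 3; a quorum is 2f + 1 matching messages."""
    f = (n_peers - 1) // 3
    quorum = 2 * f + 1
    return f, quorum

# 4 peers tolerate 1 fault and need a quorum of 3 matching messages.
assert pbft_params(4) == (1, 3)
assert pbft_params(7) == (2, 5)
```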
Relying on the blockchain network, we can build a secure data sharing
platform where enterprises exchange model updates to achieve secure data
collaboration. Compute nodes can obtain all the local updates from the newest
block and aggregate them locally instead of downloading the global model
update from a central server, which is more robust than centralized FL. In
the B2B application scenario the number of peers is not large, so PBFT can
reach consensus efficiently. Furthermore, PBFT needs fewer computing
resources than PoW and avoids forking [13].
In a normal blockchain, peers verify the validity of a transaction with
digital signature technology. But under the privacy-protection mechanism of
federated learning, some dishonest or even malicious participants [14] may
provide mendacious local model updates trained on made-up data, which
degrades the global model. In our CBFL, the communication peers verify the
$tx$s not only by checking the digital signatures but also by verifying the
quality of the models.
We leverage classification accuracy to quantify the performance of the local
model updates. More specifically, the accuracy is the proportion of correctly
classified samples. Suppose each $E_{i}$ owns $T$ testing instances for
quantifying the accuracy of the local model updates. The classification
accuracy $e_{j}$ of the $j$-th received local model update is given by
$e_{j}=\frac{n_{j}}{T}$, where $n_{j}$ is the number of correctly classified
samples. When the communication peer $P_{i}$ receives local model updates
from other peers, it admits the $j$-th local model update only if its
classification accuracy satisfies $e_{j}\geq e_{0}$, where $e_{0}$ is a
predetermined threshold.
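The verification rule $e_{j}=n_{j}/T\geq e_{0}$ is simple to state in code; the following sketch (with hypothetical names) checks one received local model update against the threshold.

```python
def verify_local_update(predictions, labels, e0):
    """Return (accepted, accuracy) for the j-th received local model update:
    accuracy e_j = n_j / T, accepted iff e_j >= the threshold e0."""
    T = len(labels)
    n_j = sum(1 for p, y in zip(predictions, labels) if p == y)
    e_j = n_j / T
    return e_j >= e0, e_j

# 3 of 4 test instances classified correctly: e_j = 0.75 >= e0 = 0.7.
accepted, acc = verify_local_update([1, -1, 1, 1], [1, -1, -1, 1], e0=0.7)
```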
With the model verification mechanism, only models trained on truthful data
can be recorded in the distributed ledger of the blockchain. In this way,
unnecessary oscillations are avoided in the process of training the global
model, which improves the efficiency of the whole system by reducing the
number of training rounds.
### II-C One-Cycle CBFL Operation
As depicted in Fig. 1, the CBFL operation can be described by the following
six steps:
1. Step 1
Local model update: Each $C_{i}$ computes (2) with its own data to get the
local model update $tx$.
2. Step 2
Local model upload: $C_{i}$ uploads the $tx$ to its corresponding $P_{i}$.
3. Step 3
Cross-verification: $P_{i}$ broadcasts the $tx$ obtained from $C_{i}$. At the
same time, $P_{i}$ verifies the $tx$s received from other peers with our model
verification mechanism.
4. Step 4
Consensus: The verified $tx$s are recorded in the candidate block by the
leader $L$. The candidate block is not generated until the block size
$h+\delta_{m}N_{B}$ or the maximum waiting time $\tau$ is reached. The leader
$L$ multicasts the candidate block to all peers to start the three-phase PBFT
and reach consensus among all peers.
5. Step 5
Global model download: When a peer $P_{i}$ receives $2f+1$ commit messages, it
sends the newest block, which stores all participants' $tx$s, to the
corresponding $C_{i}$ as the reply.
6. Step 6
Global model update: Every $C_{i}$ computes the global model update by using
(3) with all $tx$s recorded in the block.
Steps 1 to 6 constitute one cycle of CBFL. This process repeats until
$\left|\omega^{\ell}-\omega^{\ell-1}\right|\leq\varepsilon$.
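The six-step cycle and its stopping criterion can be summarized in a short control loop. The three callbacks below stand in for Steps 1-2, Steps 3-5 and Step 6 respectively; they are placeholders of our own, not the paper's implementation.

```python
import numpy as np

def run_cbfl(train_local, exchange_via_blockchain, aggregate,
             w0, eps, max_cycles=100):
    """Repeat the one-cycle CBFL process until |w^l - w^(l-1)| <= eps."""
    w = w0
    for _ in range(max_cycles):
        txs = train_local(w)                     # Steps 1-2: train and upload
        verified = exchange_via_blockchain(txs)  # Steps 3-5: verify, consensus, download
        w_next = aggregate(w, verified)          # Step 6: local aggregation
        if np.linalg.norm(w_next - w) <= eps:
            return w_next
        w = w_next
    return w
```

For instance, with a toy `train_local` that halves the weight each cycle and an identity blockchain exchange, the loop converges geometrically toward zero and stops once the cycle-to-cycle change falls below `eps`.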
## III One-Cycle Operation Latency Analysis
We aim to build a latency analysis model to quantify the time consumption of
the CBFL. Before building the latency model, some reasonable assumptions are
made as follows:
* •
The compute nodes and communication peers have stable and enough computing
resources for model training and verification.
* •
The communication peers have certain communication and storage capabilities to
ensure $tx$ sharing. Peers process received messages on a FIFO basis, and the
processing time at each peer follows an exponential distribution with rate
$\mu$.
* •
The arrival of new $tx$s follows a Poisson process with arrival rate
$\lambda$.
Let $T_{0}^{\ell}$ be the total time during $\ell$-th cycle process at a fixed
enterprise $E_{0}$ and
$T_{0}^{\ell}=T_{update}+T_{commun}+T_{consensus},$
where $T_{update}$, $T_{consensus}$ and $T_{commun}$ are model update,
consensus and communication delays respectively.
1) Model update latency: The model update delays are generated by Step 1 and
Step 6. Let $\delta_{d}$ be a single data sample size and $f_{c}$ be the clock
speed. So the local model update latency in Step 1 is evaluated as
$T_{local,0}^{\ell}=\delta_{d}N_{i}/f_{c}$ [5]. And the global model update
latency $T_{global,0}^{\ell}$ in Step 6 can be given as
$T_{global,0}^{\ell}=\delta_{m}N_{B}/f_{c}$ [5], where $\delta_{m}$ is the
size of a local model update $tx$. The model update latency can be calculated
by
$\displaystyle T_{update}=T_{local,0}^{\ell}+T_{global,0}^{\ell}.$ (4)
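Eq. (4) amounts to two divisions by the clock speed; a minimal sketch (function name ours):

```python
def model_update_latency(delta_d, n_i, delta_m, n_b, f_c):
    """Eq. (4): local training delay (Step 1) plus global aggregation delay
    (Step 6), both inversely proportional to the clock speed f_c."""
    t_local = delta_d * n_i / f_c   # T_local  = delta_d * N_i / f_c
    t_global = delta_m * n_b / f_c  # T_global = delta_m * N_B / f_c
    return t_local + t_global
```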
2) Consensus latency: The consensus delays are incurred in Step 3 and Step 4,
and the latency of the PBFT consensus is derived from its three-phase
workflow.
Let $N(\tau)$ be the number of $tx$s that arrive within the maximum waiting
time $\tau$. The leader $L$ sends the pre-prepare message to the other peers
once the number of arrived $tx$s reaches $b$, according to the conditions in
Step 4, where $b=\min\\{N(\tau),N_{B}\\}$. The collection, verification and
batching of $tx$s at $L$ can be modeled as an $M/M/1$ queue. By Little's law,
the average waiting time of each $tx$ is $\frac{1}{\mu-\lambda}$. Thus, the
total latency of the pre-prepare phase is
$\displaystyle T_{preprepare}=\frac{\min\\{N(\tau),N_{B}\\}}{\mu-\lambda}.$
For an arbitrary fixed peer $P_{o}$, the process of receiving prepare messages
is a Poisson process with intensity $\lambda$. Thus, the time lag $t_{i}$
between two adjacent prepare messages follows an exponential distribution
with mean $1/\lambda$. The average waiting time of $P_{o}$ can be denoted as
$\displaystyle
T_{wait}=E[\sum_{i=1}^{2f}t_{i}]=\sum_{i=1}^{2f}E\left[t_{i}\right]=\frac{2f}{\lambda}.$
The total processing time in this phase is calculated as
$\displaystyle T_{process}=\frac{2f+1}{\mu},$
so the latency of prepare phase is
$T_{prepare}=T_{wait}+T_{process}.$
The latency $T_{commit}$ of the commit phase has the same form as the prepare delay.
The total latency of consensus phase is
$\displaystyle T_{consensus}=T_{preprepare}+T_{prepare}+T_{commit}.$ (5)
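Putting the three phase delays together, Eq. (5) can be evaluated numerically. The sketch below assumes a stable $M/M/1$ queue ($\lambda<\mu$); the function name is ours.

```python
def consensus_latency(lam, mu, f, b):
    """Eq. (5): pre-prepare (M/M/1 sojourn of b txs) plus the prepare and
    commit phases, each a waiting term 2f/lam plus a processing term (2f+1)/mu."""
    assert lam < mu, "M/M/1 queue requires arrival rate below service rate"
    t_preprepare = b / (mu - lam)
    t_phase = 2 * f / lam + (2 * f + 1) / mu
    return t_preprepare + 2 * t_phase  # prepare and commit have the same form

t_total = consensus_latency(lam=100.0, mu=200.0, f=1, b=100)  # about 1.07
```

The result agrees term by term with the closed form $\frac{(b-4f)\lambda+4f\mu}{\lambda(\mu-\lambda)}+\frac{4f+2}{\mu}$ used in Theorem 1.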
Figure 3: Mean time to consensus for large number of peers.
3) Communication latency: The communication delays are contributed by Step 2
and Step 5. The local model upload latency in Step 2 is computed as
$T_{up,0}^{\ell}=\delta_{m}/\left[W_{up}\log_{2}\left(1+\gamma_{up}\right)\right],$
where $W_{up}$ is the bandwidth between $C_{0}$ and $P_{0}$, $\gamma_{up}$ is
the signal-to-noise ratio [5]. Similarly, global model download delay in Step
5 is calculated as
$T_{dn,0}^{\ell}=\left(h+b\delta_{m}\right)/\left[W_{dn}\log_{2}\left(1+\gamma_{dn}\right)\right].$
So the latency of communication can be calculated as
$\displaystyle T_{commun}=T_{up,0}^{\ell}+T_{dn,0}^{\ell}.$ (6)
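Eq. (6) uses Shannon-rate transmission delays; a direct transcription follows, with names and toy parameter values of our own choosing.

```python
import math

def comm_latency(delta_m, h, b, w_up, snr_up, w_dn, snr_dn):
    """Eq. (6): upload a single tx of size delta_m, then download a block of
    size h + b*delta_m, each over a channel of rate W*log2(1 + SNR)."""
    t_up = delta_m / (w_up * math.log2(1 + snr_up))
    t_dn = (h + b * delta_m) / (w_dn * math.log2(1 + snr_dn))
    return t_up + t_dn

# 1 Mbit tx over a 1 MHz uplink at SNR 3 (rate 2 Mbit/s), then a block of
# 10 txs plus a 10 kbit header over a 2 MHz downlink at SNR 3 (rate 4 Mbit/s).
t_comm = comm_latency(1e6, 1e4, 10, 1e6, 3, 2e6, 3)  # 0.5 + 2.5025 seconds
```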
###### Theorem 1
If the algorithms for training local model updates and global model updates
are confirmed, $T_{local,0}^{\ell}$ and $T_{global,0}^{\ell}$ are constants.
$T_{up,0}^{\ell}$ and $T_{dn,0}^{\ell}$ are also constants when the underlying
network is determined. Thus, the total latency of CBFL can be modeled as
$\displaystyle T_{0}^{\ell}$
$\displaystyle=T_{update}+T_{commun}+T_{consensus}$
$\displaystyle=T_{constant}+\frac{(b-4f)\lambda+4f\mu}{\lambda(\mu-\lambda)}+\frac{4f+2}{\mu}.$
(7)
###### Proof:
According to the work flow of CBFL in Section II, $T_{0}^{\ell}$ is the sum of
$T_{update}$, $T_{commun}$ and $T_{consensus}$. Let
$T_{constant}=T_{update}+T_{commun}$. And
$\displaystyle T_{consensus}$
$\displaystyle=T_{preprepare}+T_{prepare}+T_{commit}$
$\displaystyle=\frac{b}{\mu-\lambda}+2(\frac{2f}{\lambda}+\frac{2f+1}{\mu})$
$\displaystyle=\frac{(b-4f)\lambda+4f\mu}{\lambda(\mu-\lambda)}+\frac{4f+2}{\mu}.$
(8)
∎
###### Theorem 2
In the case where the leader starts PBFT when the maximum block size is
reached, i.e. $b=N_{B}$, the optimal $\lambda^{*}$ for PBFT can be
given by
$\displaystyle\lambda^{*}=\frac{-8f\mu+4\mu\sqrt{fN_{B}}}{2(N_{B}-4f)}.$ (9)
###### Proof:
According to (8), the first and second derivatives of $T_{consensus}$ with
respect to $\lambda$ are
$\displaystyle T_{consensus}^{{}^{\prime}}$
$\displaystyle=\frac{(N_{B}-4f)\lambda^{2}+8f\mu\lambda-4f\mu^{2}}{\lambda^{2}(\mu-\lambda)^{2}}.$
$\displaystyle T_{consensus}^{{}^{\prime\prime}}$
$\displaystyle=\frac{2N_{B}}{(\mu-\lambda)^{3}}+\frac{8f}{\lambda^{3}}.$
Thus $T_{consensus}$ is convex in $\lambda$ for $0<\lambda<\mu$. The optimum
$\lambda^{*}$ is directly derived. ∎
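Theorem 2's closed form is easy to check numerically: compute $\lambda^{*}$ from Eq. (9) and confirm that perturbing it increases the consensus latency (8). The helpers below are our own sketch and assume $N_{B}\neq 4f$.

```python
import math

def optimal_lambda(mu, f, n_b):
    """Optimal transaction arrival rate per Eq. (9); valid for n_b != 4f."""
    return (-8 * f * mu + 4 * mu * math.sqrt(f * n_b)) / (2 * (n_b - 4 * f))

def t_consensus(lam, mu, f, n_b):
    """Consensus latency per Eq. (8) with b = N_B."""
    return (((n_b - 4 * f) * lam + 4 * f * mu) / (lam * (mu - lam))
            + (4 * f + 2) / mu)

mu, f, n_b = 200.0, 1, 100
lam_star = optimal_lambda(mu, f, n_b)  # = 6400 / 192, about 33.3 tps
```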
## IV Numerical Results and Conclusion
(a) Latency with varying $\lambda$
(b) Comparison of latency results
Figure 4: Blockchain latency versus the transaction arrival rate $\lambda$.
The time consumption data were fitted with Exponential, Weibull, Gamma,
Hypoexponential, LogNormal and Pareto distributions using the MLE (Maximum
Likelihood Estimation) technique in [15]. The best-fit model of $T_{prepare}$
is a Weibull distribution ($shape=2.092$, $scale=0.8468$). On the other hand,
$T_{prepare}$ is a sum of independent exponentially distributed random
variables, as derived in Section III, so it follows a Gamma distribution.
Since the Gamma and Weibull families are closely related (both generalize the
exponential distribution), our latency analyses of $T_{prepare}$ and
$T_{commit}$ are consistent with the fit.
As the number of peers increases, the probability that faulty peers occur
also increases. In Fig. 3 [15], the mean latency to consensus grows with the
number of peers, which is consistent with what our model (7) shows.
Fig. 4 shows the impact of the transaction arrival rate $\lambda$ on the
average blockchain completion latency. The relationship between the latency
and $\lambda$ is approximately an inverse proportional function, as shown in
Fig. 4-a, which is consistent with our latency model. In Fig. 4-b, we compare
the latency results from the simulation [10] with those of the latency model.
To ensure comparability, the same configuration as in [10] is adopted:
$N_{B}=100$, $N_{P}=4$, $f=1$, and the transaction arrival rate $\lambda$
ranges from 50 tps to 250 tps. The model predicts the experimental
measurements with an error below 3.1%.
In B2B scenarios, each enterprise can upgrade its computing and communication
equipment to improve model update and communication efficiency. It is
therefore crucial to optimize CBFL with respect to latency, computing and
storage requirements by improving the underlying networks, which reduces the
corresponding delays according to our latency analysis model.
In conclusion, the consensus latency, especially the latency of waiting for
ordering, is a bottleneck of the whole system. According to (7), the latency
of the system is proportional to the number of faulty nodes and inversely
proportional to the system TPS. The throughput can be adjusted to reduce
latency according to the actual situation. In addition, faulty nodes can be
reduced by establishing a reward system and a node selection mechanism.
## V Acknowledgement
The authors thank Professor Debiao He of Wuhan University and Professor
Xiaohong Huang of Beijing University of Posts and Telecommunications for
their valuable suggestions to improve this paper.
## References
* [1] J. Konečný, H. B. McMahan, D. Ramage and P. Richtárik, “Federated optimization: distributed machine learning for on-device intelligence,” [Online]. Available: https://arxiv.org/abs/1610.02527.
* [2] Y. Lu, X. Huang, Y. Dai, S. Maharjan and Y. Zhang, ”Blockchain and federated learning for privacy-preserved data sharing in industrial IoT,” IEEE Transactions on Industrial Informatics, vol. 16, no. 6, pp. 4177-4186, June 2020, doi: 10.1109/TII.2019.2942190.
* [3] Y. Lu, X. Huang, K. Zhang, S. Maharjan and Y. Zhang, “Blockchain empowered asynchronous federated learning for secure data sharing in internet of vehicles,” IEEE Transactions on Vehicular Technology, vol. 69, no. 4, pp. 4298-4311, 2020.
* [4] Y. Zhao, J. Zhao, L. Jiang, R. Tan, D. Niyato, Z. Li, L. Lyu and Y. Liu, “Mobile edge computing, blockchain and reputation based crowdsourcing federated learning: a secure, decentralized and privacy-preserving system,” [Online]. Available: https://arxiv.org/abs/1906.10893.
* [5] H. Kim, J. Park, M. Bennis and S. Kim, “Blockchained on-device federated learning,” IEEE Communications Letters, vol. 24, no. 6, pp. 1279-1283, 2020.
* [6] M. Shen, J. Zhang, L. Zhu, K. Xu and X. Tang, “Secure SVM training over vertically-partitioned datasets using consortium blockchain for vehicular social networks,” IEEE Transactions on Vehicular Technology, vol. 69, no. 6, pp. 5773-5783, 2020.
* [7] I. Martinez, S. Francis and A. Senhaji Hafid, “Record and reward federated learning contributions with blockchain,” International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC), Guilin, China, pp. 50-57, 2019.
* [8] M. Castro and B. Liskov, “Practical byzantine fault tolerance and proactive recovery,” ACM Transactions on Computer Systems, vol. 20, no. 4, pp. 398-461, 2002.
* [9] D. Ongaro and J. Ousterhout, “In search of an understandable consensus algorithm,” USENIX Annual Technical Conference, Philadelphia, pp. 305-319, 2014.
* [10] X. Xu, G. Sun, L. Luo, H. Cao, H. Yu and A. V. Vasilakos, “Latency performance modeling and analysis for hyperledger fabric blockchain network,” Information Processing and Management, vol. 58, no. 1, 2021.
* [11] S. Hardy, W. Henecka, H. Ivey-Law, R. Nock, G. Patrini, G. Smith and B. Thorne, “Private federated learning on vertically partitioned data via entity resolution and additively homomorphic encryption,” [Online]. Available: https://arxiv.org/abs/1711.10677.
* [12] B. McMahan, E. Moore, D. Ramage, S. Hampson and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized Data,” 20th International Conference on Artificial Intelligence and Statistics, vol. 54, pp. 1273-1282, 2017.
* [13] Y. Hao, Y. Li, X. Dong, L. Fang and P. Chen, “Performance analysis of consensus algorithm in private blockchain,” IEEE Intelligent Vehicles Symposium, Changshu, pp. 280-285, 2018.
* [14] A. N. Bhagoji, S. Chakraborty, P. Mittal, and S. Calo, “Analyzing federated learning through an adversarial lens,” 36th International Conference on Machine Learning, pp. 634-643, 2019.
* [15] H. Sukhwani, J. M. Martínez, X. Chang, K. S. Trivedi and A. Rindos, “Performance modeling of PBFT consensus process for permissioned blockchain network (Hyperledger Fabric),” IEEE 36th Symposium on Reliable Distributed Systems, Hong Kong, pp. 253-255, 2017.
# Mobile Augmented Reality with Federated Learning in the Metaverse
Xinyu Zhou, Jun Zhao The authors are all with Nanyang Technological
University, Singapore. Emails<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract
The Metaverse is deemed the next evolution of the Internet and has received
much attention recently. Metaverse applications via mobile augmented reality
(MAR) require rapid and accurate object detection to mix digital data with the
real world. As mobile devices evolve, they become more potent in computing.
Hence, their computational resources can be leveraged to train machine
learning models. In light of the increasing concerns of user privacy and data
security, federated learning (FL) has become a promising distributed learning
framework for privacy-preserving analytics. In this article, FL and MAR are
brought together in the Metaverse. We discuss the necessity and rationality of
the combination of FL and MAR. The prospective technologies that power FL and
MAR in the Metaverse are also identified. In addition, existing challenges
that prevent the fulfilment of FL and MAR in the Metaverse and several
application scenarios are presented. Finally, two case studies of Metaverse
FL-MAR systems are demonstrated.
###### Index Terms:
Metaverse, augmented reality, virtual reality, federated learning.
## I Introduction
The Metaverse has become a hot topic recently. Mark Zuckerberg made the term
famous in 2021 when he announced that Facebook would change its name to Meta
and shift its future to build Metaverse technologies. The Metaverse integrates
augmented reality (AR), virtual reality (VR) and 3D technologies to create a
fully immersive virtual world. Mobile augmented reality (MAR) brings the
Metaverse to mobile user equipments. With the development of mobile
technologies, it has been increasingly common for mobile users to leverage AR
services to interact and entertain themselves in the real world. As machine
learning technologies are applied to mobile devices, people are developing
more intelligent MAR applications in the Metaverse scenarios such as daily
communications, entertainment, medical care, travel, transportation, etc.
Fig. 1 depicts a scenario where users can see descriptions of each building.
Such MAR applications require rapid and accurate object detection to mix
digital data with the real world. With the fast development of wearable and
mobile devices (e.g., Google Glass, Microsoft HoloLens), such a scenario is
not a figment of the imagination. Researchers have implemented effective
object detection algorithms on mobile devices [1].
Usually, current machine learning models require large amounts of data for
training to achieve good performance, whereas it is a challenging task to
collect those data. As concerns about data security and privacy increase,
related regulations and laws are being introduced one after another. Hence,
considering the growing difficulty of collecting sensitive datasets to a
server for centralized training, distributed learning is a big trend in the
future. In an MAR system, devices can train their object detection models
separately. However, a mobile device can only store or collect limited data,
resulting in a less accurate model. To address this, federated learning (FL)
can be incorporated into the MAR system. FL was proposed by Google in 2017
[2]. It allows each device to train a shared model collaboratively without
sharing local data with others. As shown in Fig. 1, after a few local
iterations, each device uploads its local model parameters to a central base
station, and the station sends back an updated global model to each device
for continued training. We refer to this system as the FL-MAR system.
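The local-train/upload/aggregate cycle described above can be sketched as a minimal federated-averaging loop. The toy linear model and synthetic data below are illustrative assumptions, not the actual FL-MAR implementation:

```python
import numpy as np

def local_update(weights, data, lr=0.1, iters=5):
    """A few local gradient steps on one device (toy linear regression)."""
    X, y = data
    w = weights.copy()
    for _ in range(iters):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def fedavg_round(global_w, device_data):
    """One FL round: devices train locally; the station averages the
    uploaded parameters, weighted by local dataset size."""
    local_models = [local_update(global_w, d) for d in device_data]
    sizes = [len(d[1]) for d in device_data]
    return np.average(local_models, axis=0, weights=sizes)

# Synthetic data spread over four devices; no raw data ever leaves a device.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(4):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    devices.append((X, y))

w = np.zeros(2)
for _ in range(30):                          # 30 communication rounds
    w = fedavg_round(w, devices)
```

After a few dozen rounds the shared model approaches the ground-truth parameters even though each device only ever sees its own 50 samples.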
Figure 1: The general system model of mobile augmented reality (MAR) with
federated learning (FL) in the Metaverse.
Integrating FL into the design of the MAR system poses some challenges. First,
limited communication and computing resources can lead to latency between the
server and users. Second, achieving a satisfactory model often requires many
local iterations and communication times, which also adds a large amount of
energy consumption for mobile devices. Moreover, latency and energy
consumption are often in conflict. It is necessary to find an appropriate
resource allocation strategy to optimize latency and energy consumption.
Besides, in the FL-MAR system, the video frame resolution affects the object
recognition accuracy, and it influences the computation energy and time when
training on each device. As a result, to minimize latency and energy
consumption while maximizing detection accuracy, we must determine how to
allocate communication and computation resources (i.e., transmission power,
bandwidth, CPU frequency and video frame resolution) for each device.
This article first discusses promising technologies for FL-MAR in the
Metaverse in Section II. Then, in Section III, we present existing challenges
for the applications. Section IV demonstrates various application scenarios of
FL-MAR in the Metaverse. Finally, Section V elaborates two case studies for
optimizing resource allocation of FL-MAR systems in the Metaverse. The
framework of this article is illustrated in Fig. 2.
Figure 2: The framework of this article.
The contributions are as follows:
* •
We present the incorporation of FL and MAR into the Metaverse and further
discuss prospective scenarios for this combination.
* •
Challenges in applying FL-MAR to the Metaverse are also listed from
different aspects, including limited communication/computation resources,
security and privacy, etc.
* •
To demonstrate practicality, we present two case studies of FL-MAR
systems in the Metaverse. One is FDMA-enabled, and the other is based on NOMA.
## II Enabling Technologies for FL-MAR in the Metaverse
This section lists technologies needed for applying FL and MAR to the
Metaverse.
### II-A Channel access methods
Frequency Division Multiple Access (FDMA). FDMA is a channelization protocol
that divides the frequency band into non-overlapping channels of equal
bandwidth. Besides, each channel is assigned to one user only for the
conversation period. FDMA is one of the most commonly used analog multiple
access methods. It has some advantages: 1) FDMA systems are technically easy
to implement. 2) Signals can be transmitted simultaneously without
interfering with each other. 3) The capacity can be increased by decreasing
the information bitrate and leveraging efficient numerical codes. Moreover,
FDMA also has some disadvantages: 1) Bandwidth utilization is limited since
channels will be idle if users do not utilize them. 2) If many signals of
different frequencies are transmitted simultaneously, intermodulation
distortion may occur at the transponder.
Time Division Multiple Access (TDMA). TDMA allows different users to share the
same frequency by dividing each channel into different time intervals. At each
time interval, the frequency is used for one user exclusively. Compared to
FDMA, the advantages of TDMA are: 1) The transmission rate is flexible because
multiple slots can be allocated to one user. 2) It can handle the changeable
bit rate. However, the disadvantages include the implementation complexity and
the requirement of synchronization.
Non-orthogonal Multiple Access (NOMA). NOMA has been seen as a promising
technology for enhancing throughput in future wireless systems. Unlike
conventional orthogonal multiple access (OMA), it enables multiple users on
the same channel to be multiplexed to maximize the throughput and lower the
latency of the system. It adopts superposition coding at the transmitter and
utilizes successive interference cancellation (SIC) at the receiver to
distinguish users’ signals. Hence, its achievable rate region contains that of OMA.
For other channel access schemes, such as CDMA, SDMA and OFDMA, interested
readers can refer to [3].
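To make the FDMA/NOMA comparison concrete, the sketch below computes Shannon rates for two users under both schemes. The channel gains, powers, and unit bandwidth/noise density are illustrative assumptions:

```python
import math

def fdma_rates(p1, p2, g1, g2, B=1.0, N0=1.0):
    """FDMA: each user gets half the band and sees no interference."""
    r1 = (B / 2) * math.log2(1 + p1 * g1 / (N0 * B / 2))
    r2 = (B / 2) * math.log2(1 + p2 * g2 / (N0 * B / 2))
    return r1, r2

def noma_rates(p1, p2, g1, g2, B=1.0, N0=1.0):
    """Power-domain NOMA with SIC: user 1 (stronger channel, g1 > g2)
    cancels user 2's signal first; user 2 treats user 1's signal as noise."""
    r1 = B * math.log2(1 + p1 * g1 / (N0 * B))
    r2 = B * math.log2(1 + p2 * g2 / (p1 * g2 + N0 * B))
    return r1, r2

# Same total power (2.0) under both schemes; user 1 has the stronger channel.
sum_fdma = sum(fdma_rates(1.0, 1.0, g1=10.0, g2=0.5))
sum_noma = sum(noma_rates(0.5, 1.5, g1=10.0, g2=0.5))
```

With a large channel-gain disparity, the NOMA sum rate exceeds the FDMA sum rate at the same total power, illustrating the enlarged rate region mentioned above.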
### II-B Semantic Communication
Semantic communication has been deemed a breakthrough beyond Shannon’s paradigm,
as it transmits only the semantic information relevant to the specific source
rather than aiming at the exact transmission of bit sequences. For
example, when transmitting a figure, semantic communications will extract the
relevant features of the figure for transmission by using semantic encoding
[4]. Hence, the data traffic can be significantly lowered, which is why it can
be one of the promising solutions leading to an efficient communication
network in the Metaverse. Studies of semantic communication for the Metaverse
are in the early stage [5]. Since VR/AR applications in the Metaverse require
a seamless experience, the requirements of low latency and high-speed
transmission may be satisfied by semantic communications.
### II-C Over-the-Air Computation
Over-the-air computation (AirComp) realizes a computation function through the
superposition of analog waves in a multiple-access channel. By exploiting
interference to carry out the computation, the wireless channel itself can be
used as a computer. In a distributed system, the signals sent by mobile devices
are superposed over the air and aggregated by the receiver as a weighted sum,
where the weights represent the channel coefficients [6].
In the Metaverse, numerous devices are connected to communicate through the
communication network. Large amounts of data are transmitted by various
devices simultaneously while devices wait for immediate feedback. The advent
of over-the-air computation may help to build low-latency communication
networks in the Metaverse.
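A minimal numerical sketch of the aggregation principle: if each device pre-inverts its (known, real-valued) channel coefficient, the superposed signal received over the multiple-access channel is exactly the sum of the local vectors. The noiseless real-valued channel here is an idealizing assumption:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 5                                 # number of devices
x = rng.normal(size=(K, 8))           # each device's local update vector
h = rng.uniform(0.5, 2.0, size=K)     # real channel coefficients (known to devices)

tx = x / h[:, None]                   # each device pre-inverts its channel
rx = (h[:, None] * tx).sum(axis=0)    # superposition in the multiple-access channel
```

The received vector `rx` equals `x.sum(axis=0)`: the channel has computed the aggregate "for free", without the receiver decoding any individual signal.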
### II-D Mobile Edge Computing
Before the emergence of edge computing, cloud computing was the new paradigm
of computing at that time. Cloud computing refers to computing, network
control and storage in the clouds. Generally, clouds are servers (e.g., data
centers) that can be approached through the Internet. However, such servers
are usually located far away from user devices, which results in long latency.
In 2014, the European Telecommunications Standards Institute (ETSI) proposed
the concept of mobile edge computing (MEC). In the IT service environment, MEC
brings cloud-computing capabilities to the edge of mobile networks, in the
vicinity of mobile users. It aims to reduce latency and provide highly
efficient network operations and services [5].
MEC is deployed at access points, such as small base stations, edge servers,
users’ computers, etc. FL-MAR systems involve massive numbers of mobile
devices. Hence, by utilizing the idle computing resources of mobile devices
through MEC, the energy consumption and latency of communication networks in
the Metaverse can be significantly reduced.
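The offloading trade-off behind MEC can be captured by a simple latency comparison. The criterion and constants below are an illustrative sketch, not a complete MEC scheduler (energy is ignored for brevity):

```python
def should_offload(task_cycles, task_bits, f_local, f_edge, rate):
    """Offload if uploading the input plus computing at the edge
    finishes sooner than executing the task locally."""
    t_local = task_cycles / f_local                    # seconds on the device
    t_edge = task_bits / rate + task_cycles / f_edge   # upload + edge compute
    return t_edge < t_local

# A heavy task (1e9 cycles) over a fast 10 Mbit/s link is worth offloading,
# but not over a 1 Mbit/s link where the upload dominates.
fast = should_offload(1e9, 1e6, f_local=1e9, f_edge=1e10, rate=1e7)
slow = should_offload(1e9, 1e6, f_local=1e9, f_edge=1e10, rate=1e6)
```

In the first case the edge finishes in 0.2 s versus 1 s locally; in the second the 1 s upload erases the edge's compute advantage.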
### II-E Blockchain and Cryptocurrency
Blockchain, which is a distributed ledger, first appeared in 1991. Blockchain
became widely known when Bitcoin emerged in 2009. It stores digital data in
blocks, and the blocks are strung together via cryptography. Cryptographic
hash functions make the recorded data effectively irreversible and
tamper-evident. With the merits of
transparency and security, blockchain has numerous prospective applications in
finance, government, commerce, etc. In the Metaverse, blockchain technology
can be utilized to build a decentralized virtual world, as shown by Kang et
al. [7]. To protect user privacy, cryptocurrencies are also essential in the
Metaverse. Cryptocurrencies (e.g., Bitcoin, Litecoin, Ethereum) are powered by
blockchain, and Bitcoin is the most well-known cryptocurrency. Additionally,
cryptocurrencies are not physical; they exist only in the decentralized
network, and their creation is determined by an algorithm (or protocol). The
benefits of using cryptocurrencies in the Metaverse include:
* •
Transactions are recorded permanently.
* •
Privacy is protected. There is no third-party bank verifying transactions, so
users do not provide sensitive information.
* •
Fairness and transparency can be achieved since transactions can be
scrutinized.
Blockchain also enables play-to-earn games. Players can earn cryptocurrencies
or digital assets through these games. Financial incentives can stimulate more
users to join the game. The more users participate in the game, the more
valuable the assets accumulated by senior players will be. Hence, incentive
mechanisms can be incorporated into play-to-earn games to attract more users.
This breathes new life into the Metaverse world as well since some people
could make a living on the revenue of these games.
## III Challenges for FL-MAR in the Metaverse
This section discusses the potential challenges of FL-MAR systems in the
Metaverse.
### III-A Limited communication resources
The demands of bandwidth in the Metaverse are much more significant than
current ordinary games and social entertainment. This is because the Metaverse
virtual world has to render massive surroundings (e.g., trees, flowers),
buildings, avatars (or people), etc., simultaneously.
Besides, to achieve the immersive experience in the Metaverse, haptic
technology is indispensable to create the experience of touch and receive the
reactions of the virtual world. Haptic technology tolerates only about 1 ms of
latency, yet current LTE networks sustain only around 25 ms [8].
Thus, limited communication resources are one of the barriers to the
widespread deployment of the Metaverse. To address the limitation,
technologies mentioned in Section II could be utilized, such as semantic
communications and MEC. Semantic communication can lower the data traffic, and
thus help to build an efficient communication network. MEC, located at the
edge of mobile networks, can reduce the communication latency and assist
mobile devices in handling complex tasks that exceed their capabilities.
### III-B Limited computation resources
The Metaverse will generate a vast volume of data, and thus it is in dire need
of powerful computing resources [9]. The resource-constrained devices worn by
users not only have to train their own FL model but also are responsible for
processing newly generated data and converting the raw data into the 3D
virtual world. Apparently, today’s mobile devices do not have the computing
capability to finish those complex tasks efficiently.
This existing obstacle might be addressed by better mobile edge computing
mechanisms. Mobile edge computing assists applications that have requirements
of low latency and high bandwidth close to the data source. For example,
devices can offload complex tasks to edge servers in proximity to save energy
consumption and computation resources. Since edge servers are much closer to
users than cloud servers, they could provide services with low latency, which
is suitable for the Metaverse to provide a real-time and stable immersive
VR/AR experience. However, if the computing tasks are pretty complex and
energy-consuming, they can be uploaded to cloud servers. Hence, the
hierarchical cloud-edge-device computing paradigm can be utilized. Cloud
computing provides robust computing and storage resources. The appropriate
combination of edge- and cloud-based applications is essential to maximize the
system performance.
### III-C Security and Privacy
Our physical world is being transformed into a digital one as time passes. In
the Metaverse, people’s lives are changing in the areas of shopping,
education, tourism, medical care, etc. There will be new forms of security
risks, including the threats of scams, identity leakage, and data protection.
Although FL can protect user privacy and data security to a certain extent,
users in the Metaverse still expose their sensitive information to the virtual
world. Besides, in FL, the model parameter transmission has the potential to
leak user privacy. In the following, some security techniques are discussed.
* •
Differential privacy (DP). DP is a mathematical definition of privacy. If a
differentially private algorithm publishes aggregate information about a
database, others cannot infer from the output whether any specific
individual’s record was in the original dataset. DP thus ensures that
individual information in the database will not be compromised. DP has also
been applied in industry; for instance, Apple adopts local DP for behavioral
analytics of iPhone users [10].
* •
Secure multi-party computation (SMC). SMC is a cryptographic protocol in which
several parties jointly compute an agreed function without exposing each
party’s data. In SMC, data can be shared among distributed parties without a
third-party organization while remaining protected. Hence, SMC can be deployed
in distributed systems in the Metaverse.
* •
Homomorphic encryption (HE). HE is an encryption technique that enables users
to perform calculations without having to decrypt the data. When the resulting
computations are decrypted, they produce the same output as the result
calculated by the unencrypted data. Therefore, if outsourcing data to a third
party for analytics, HE can be utilized to ensure the data will not be
analyzed in its original form.
* •
Consensus mechanism. A consensus mechanism is any fault-tolerant mechanism,
typically used in blockchain systems, for reaching agreement on a data value
or a network state among distributed systems (e.g., cryptocurrencies). Hence,
consensus mechanisms are frequently used to attain trust and security among
distributed parties.
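As a concrete example of the first technique, the Laplace mechanism below releases a counting query under $\epsilon$-DP. The query and parameters are illustrative, not Apple's deployed mechanism:

```python
import numpy as np

def laplace_count(true_count, epsilon, rng):
    """Release a count under epsilon-DP. A counting query has
    sensitivity 1, so Laplace noise with scale 1/epsilon suffices."""
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(42)
noisy = laplace_count(130, epsilon=0.5, rng=rng)   # one private release
```

Any single release is off by a few counts (noise scale $1/\epsilon=2$ here), which is enough to hide whether one individual's record is present, while the noise averages out over many releases on large populations.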
## IV Applications of FL-MAR in the Metaverse
This section lists some scenarios in which MAR and FL are applied to the
Metaverse.
### IV-A Autonomous Driving
According to the National Motor Vehicle Crash Causation Survey (NMVCCS)
conducted by the National Highway Traffic Safety Administration of the United
States, 94% of NMVCCS crashes were caused by drivers [11]. Hence, autonomous
vehicles are becoming a feasible solution for transportation in the future.
Recently, deep learning has been a popular approach to the application of
autonomous driving in terms of object detection, obstacle avoidance and so
forth. Considering the fact that the capabilities of hardware storage and
computation are improving, training models locally is not only beneficial for
data security and user privacy but also reduces network energy consumption and
latency.
Fig. 3 depicts that several autonomous cars are driving on the road while
training their models locally in the Metaverse. Since different cars
experience various environments, such as weather and lighting conditions,
incorporating FL into this scenario will help each vehicle build a more
accurate model.
Figure 3: The architecture of autonomous driving with FL in the Metaverse.
Additionally, since the Metaverse supports immediate interaction between the
physical and virtual worlds, it can simulate various kinds of driving situations,
including some rare cases. Therefore, it helps test whether self-driving cars
are safe and reliable in various extreme conditions. Besides, it is no longer
an unrealistic fantasy. Oxbotica, which is an autonomous vehicle software
company, announced its AI-powered software MetaDriver in June 2022. MetaDriver
collects plentiful scenarios to test and improve autonomous vehicle behaviours
in the Metaverse without physically driving them [12].
### IV-B Shopping
Online shopping has become a part of people’s lives. As the number of MAR
applications grows, online shopping also enjoys this convenience. For example,
IKEA, the company that sells ready-to-assemble furniture, appliances and home
services, equips its app with AR capabilities. The AR function allows users to
virtually place true-to-scale 3D models of furniture in their homes,
which facilitates online shopping and saves users’ time. Adidas also launched
the AR footwear try-on function in its iOS app. Fig. 4 illustrates these two
example applications.
Figure 4: Two examples of MAR applications.
However, it is evident that the MAR applications still need improvement. Each
person has different looks and various living places. Therefore, by
incorporating FL, the mobile device can learn its own model to fit the
specific person. Additionally, since the Metaverse mixes virtuality with the
real world, people’s virtual lives will increasingly resemble their real lives
as they spend more time in virtual worlds. Hence, new shopping
models will appear in the future. Products such as digital clothing,
furniture, cosmetics and so forth may have a similar status to purchases in
the real world.
### IV-C Education
The Metaverse will transform the educational environment in the future [13].
Different from traditional in-person learning and online learning, Metaverse-
based learning will be an environment which is a mixture of the virtual and
real world. It allows students to interact with each other in a virtual and
decentralized setting and join various complex learning activities. In light
of the profusion of online learning experiences during the Covid-19 period,
Metaverse-based learning is much needed now. For example, in geography
classes, students can immerse themselves through VR headsets and
experience the differences between climates in different regions.
## V Case Studies of FL-MAR in the Metaverse
In this section, we present two case studies of FL-MAR in the Metaverse. One
FL-MAR system’s channel access scheme is FDMA, and the other one uses NOMA. We
study the system from the perspective of how to optimize the resource
allocation in the system to save energy and time consumption.
### V-A FDMA-enabled FL-MAR in the Metaverse
First, we investigate a basic FL-MAR system via FDMA. In [14], we formulate a
weighted sum of total energy, time consumption and accuracy by using three
weight parameters. We optimize the allocation of the bandwidth, transmission
power, CPU frequency setting and MAR video frame resolution for each
participating mobile device in the Metaverse. By setting different weight
parameters, our resource allocation algorithm can adapt to different
requirements of the FL-MAR system, either time-sensitive or energy-hungry.
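The effect of the weights can be seen in a stripped-down version of this trade-off: computation energy grows as $f^2$ while computation time shrinks as $1/f$, so the optimal CPU frequency moves with $(w_1, w_2)$. All constants below are illustrative, not those of [14]:

```python
import numpy as np

kappa = 1e-28        # effective switched capacitance (illustrative)
workload = 1e9       # CPU cycles for one local training round (illustrative)

def cost(f, w1, w2):
    energy = kappa * workload * f**2   # dynamic CPU energy
    time = workload / f                # computation latency
    return w1 * energy + w2 * time

freqs = np.linspace(2e8, 2e9, 200)
f_energy = freqs[np.argmin([cost(f, 0.9, 0.1) for f in freqs])]
f_time   = freqs[np.argmin([cost(f, 0.1, 0.9) for f in freqs])]
```

Energy-leaning weights select a lower CPU frequency than time-leaning weights, mirroring the behaviour of the bars in Fig. 5.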
We assume there are $40$ users in the system. Fig. 5 contains two subfigures.
One shows the total energy consumption, and the other is the total time
consumption. We choose three pairs of weight parameters to compare our
resource allocation algorithm with a random allocation strategy under
different maximum transmit power limits. Note that $w_{1}$ is the weight
parameter of energy consumption, and $w_{2}$ is the weight parameter of time
consumption. The weight parameter of the model accuracy is fixed because we
focus on the energy and time consumption here. If $w_{1}$ (resp., $w_{2}$)
becomes larger, our resource allocation algorithm will emphasize minimizing
the energy cost (resp., time consumption). Hence, it can be seen obviously
from each bar that as $w_{1}$ (resp. $w_{2}$) increases, the energy (resp.
time) consumption will decrease. In addition, the random strategy adjusts all
video frame resolutions as the minimum, and thus it saves much computation
energy consumption while sacrificing the model accuracy. Considering this
condition, our algorithm still achieves better total energy consumption than
the random strategy, even at $w_{1}=0.1$ (i.e., the algorithm mainly
stresses the minimization of the total time consumption). In short, the
results clearly illustrate the superiority of our proposed joint optimization
algorithm.
Figure 5: FDMA-enabled FL-MAR system: simulation results under different
transmit power limits.
### V-B NOMA-enabled FL-MAR in the Metaverse
Here, we study the NOMA-enabled FL-MAR system in the Metaverse. We also devise
a resource allocation algorithm and show the validity of our algorithm, which
jointly optimizes the weighted sum of energy and time consumption. Assume
there are $40$ users and $20$ channels. There are $2$ users multiplexed on one
channel.
Figure 6: NOMA-enabled FL-MAR system: simulation results under different
transmit power limits.
In Fig. 6, we compare three pairs of weight parameters
$(w_{1},w_{2})=(0.9,0.1),(0.5,0.5)$ and $(0.1,0.9)$ with a random allocation
strategy. $(w_{1},w_{2})=(0.9,0.1)$ refers to situations where devices are low
on battery. $(w_{1},w_{2})=(0.5,0.5)$ stands for equal consideration
of energy and time optimization. Besides, $(w_{1},w_{2})=(0.1,0.9)$ stresses
the minimization of total time consumption.
Fig. 6 contains comparisons under different maximum transmission power limits
of the total energy consumption and time consumption. It can be concluded that
when the maximum transmission power increases, total time consumption slightly
decreases. Due to the expansion of the range of the maximum transmission
power, there will be a more optimal solution to decrease the time consumption.
Our resource allocation algorithm performs better than the random allocation
strategy in the aspect of energy optimization. In terms of total time
consumption, the proposed algorithm performs worse than the random allocation
when $(w_{1}=0.9,w_{2}=0.1)$. This is because the case of $w_{1}=0.9$
emphasizes energy optimization more and time minimization less. The
simulations show the effectiveness of our approach for
different weight parameters.
### V-C Analysis of the difference between FDMA-enabled and NOMA-enabled FL-
MAR system
It could be concluded from Fig. 5 and Fig. 6 that, in terms of total energy
consumption, there is little difference between the performance of FDMA-
enabled and NOMA-enabled FL-MAR system. Regarding total time consumption, the
FDMA-enabled system performs slightly better when $w_{1}=0.9$ and $w_{2}=0.1$.
When $(w_{1}=0.5,w_{2}=0.5)$ and $(w_{1}=0.1,w_{2}=0.9)$, their performance is
similar. In addition, the random scheme under NOMA outperforms the random
scheme under FDMA. From the simulation results and our theoretical analyses,
when the system has ample channel resources, careful optimization of FDMA has
comparable performance with NOMA, so there is no need to implement the more
complex NOMA for resource-rich scenarios. Yet, when the system has limited
channel resources, NOMA can improve the system performance by leveraging
power-domain multiplexing. Our findings are also consistent with recent
results in the literature [15]. In Metaverse applications where devices demand
considerable communication resources, designing NOMA to improve system
performance requires not only simulation but also real-world experiments. We
hope our preliminary simulation can motivate more real-world NOMA experiments
for the Metaverse in the research community.
## VI Conclusion
In conclusion, this article gives insights into the necessity and rationality
of a federated learning enabled mobile augmented reality system (FL-MAR) in
the Metaverse. With the development of mobile devices, they can support more
and more complex tasks and operations. The combination of FL and MAR in the
Metaverse not only helps protect user privacy to a certain extent but also
utilizes the available computing resources on mobile devices. Besides, this
article lists and explains the promising technologies that enable FL-MAR
systems in the Metaverse. For example, MEC can be integrated into FL-MAR
systems to perform heavy tasks for users and reduce network latency.
Blockchain and cryptocurrencies facilitate the functioning of the Metaverse
world in terms of commerce, entertainment, etc. Some application scenarios are
also given in this article, including autonomous driving, shopping, education
and so forth. Finally, two case studies are evaluated. One is FDMA-enabled,
and the other uses NOMA. We envision that our paper will motivate more
research on leveraging FL for the Metaverse and on designing more efficient
channel access mechanisms to enable the Metaverse on mobile user equipment.
## References
* [1] Y. Cai, H. Li, G. Yuan, W. Niu, Y. Li, X. Tang, B. Ren, and Y. Wang, “Yolobile: Real-time object detection on mobile devices via compression-compilation co-design,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 35, no. 2, 2021, pp. 955–963.
* [2] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-Efficient Learning of Deep Networks from Decentralized Data,” in _Artificial Intelligence and Statistics_ , 2017, pp. 1273–1282.
* [3] A. Kumar and K. Kumar, “Multiple access schemes for cognitive radio networks: A survey,” _Physical Communication_ , vol. 38, p. 100953, 2020.
* [4] H. Xie, Z. Qin, G. Y. Li, and B.-H. Juang, “Deep learning enabled semantic communication systems,” _IEEE Transactions on Signal Processing_ , vol. 69, pp. 2663–2675, 2021.
* [5] M. Xu, W. C. Ng, W. Y. B. Lim, J. Kang, Z. Xiong, D. Niyato, Q. Yang, X. S. Shen, and C. Miao, “A full dive into realizing the edge-enabled metaverse: Visions, enabling technologies, and challenges,” _IEEE Communications Surveys & Tutorials_, 2022.
* [6] G. Zhu, J. Xu, K. Huang, and S. Cui, “Over-the-air computing for wireless data aggregation in massive IoT,” _IEEE Wireless Communications_ , vol. 28, no. 4, pp. 57–65, 2021.
* [7] J. Kang, D. Ye, J. Nie, J. Xiao, X. Deng, S. Wang, Z. Xiong, R. Yu, and D. Niyato, “Blockchain-based federated learning for industrial metaverses: Incentive scheme with optimal AoI,” in _2022 IEEE International Conference on Blockchain (Blockchain)_. IEEE, 2022, pp. 71–78.
* [8] S. Sukhmani, M. Sadeghi, M. Erol-Kantarci, and A. El Saddik, “Edge caching and computing in 5G for mobile AR/VR and tactile internet,” _IEEE MultiMedia_ , vol. 26, no. 1, pp. 21–30, 2018.
* [9] H. Ning, H. Wang, Y. Lin, W. Wang, S. Dhelim, F. Farha, J. Ding, and M. Daneshmand, “A survey on metaverse: the state-of-the-art, technologies, applications, and challenges,” _arXiv preprint arXiv:2111.09673_ , 2021.
* [10] “Apple differential privacy technical overview,” https://www.apple.com/privacy/docs/Differential_Privacy_Overview.pdf.
* [11] S. Singh, “Critical reasons for crashes investigated in the national motor vehicle crash causation survey,” Tech. Rep., 2015.
* [12] “Oxbotica metadriver uses ‘metaverse’ to detect rare and unusual scenarios 1,000 times faster than actual driving,” https://www.oxbotica.com/insight/oxbotica-metadriver-uses-metaverse-to-detect-rare-and-unusual-scenarios-1000-times-faster-than-actual-driving/.
* [13] X. Zhang, Y. Chen, L. Hu, and Y. Wang, “The metaverse in education: Definition, framework, features, potential applications, challenges, and future research topics,” _Frontiers in Psychology_ , vol. 13, 2022.
* [14] X. Zhou, C. Liu, and J. Zhao, “Resource allocation of federated learning for the metaverse with mobile augmented reality,” _arXiv preprint arXiv:2211.08705_ , 2022.
* [15] X. Li, Z. Xie, Z. Chu, V. G. Menon, S. Mumtaz, and J. Zhang, “Exploiting benefits of IRS in wireless powered NOMA networks,” _IEEE Transactions on Green Communications and Networking_ , vol. 6, no. 1, pp. 175–186, 2022.
## Biographies
Xinyu Zhou is currently pursuing a Ph.D. degree at Nanyang Technological
University (NTU) in Singapore. Her research interests include federated
learning and Metaverse.
Jun Zhao is currently an Assistant Professor in the School of Computer Science
and Engineering at Nanyang Technological University (NTU) in Singapore. He
received a PhD degree in May 2015 in Electrical and Computer Engineering from
Carnegie Mellon University (CMU) in the USA (advisors: Virgil Gligor, Osman
Yagan; collaborator: Adrian Perrig), affiliating with CMU’s CyLab Security &
Privacy Institute, and a bachelor’s degree in July 2010 from Shanghai Jiao
Tong University in China. Before joining NTU first as a postdoc with Xiaokui
Xiao and then as a faculty member, he was a postdoc at Arizona State
University as an Arizona Computing PostDoc Best Practices Fellow (advisors:
Junshan Zhang, Vincent Poor). His research interests include federated
learning, edge/fog computing, and Metaverse.
# Flow states and heat transport in Rayleigh–Bénard convection with different
sidewall boundary conditions
Philipp Reiter1${\ddagger}$<EMAIL_ADDRESS>Xuan Zhang1${\ddagger}$ Olga Shishkina1<EMAIL_ADDRESS>
${\ddagger}$These authors contributed equally
1Max Planck Institute for Dynamics and Self-Organization, Am Fassberg 17, 37077 Göttingen, Germany
###### Abstract
This work addresses the effects of different thermal sidewall boundary
conditions on the formation of flow states and heat transport in two- and
three-dimensional Rayleigh–Bénard convection (RBC) by means of direct
numerical simulations and steady-state analysis for Rayleigh numbers $Ra$ up
to $4\times 10^{10}$ and Prandtl numbers $Pr=0.1,1$ and $10$. We show that a
linear temperature profile imposed at the conductive sidewall leads to a
premature collapse of the single-roll state, whereas a sidewall maintained at
a constant temperature enhances its stability. The collapse is caused by
accelerated growth of the corner rolls with two distinct growth rate regimes
determined by diffusion or convection for small or large $Ra$, respectively.
Above the collapse of the single-roll state, we find the emergence of a
double-roll state in two-dimensional RBC and a double-toroidal state in three-
dimensional cylindrical RBC. These states are most prominent in RBC with
conductive sidewalls. The different states are reflected in the global heat
transport, so that the different thermal conditions at the sidewall lead to
significant differences in the Nusselt number for small to moderate $Ra$.
However, for larger $Ra$, heat transport and flow dynamics become increasingly
alike for different sidewalls and are almost indistinguishable for
$Ra>10^{9}$. This suggests that the influence of imperfectly insulated
sidewalls in RBC experiments is insignificant at very high $Ra$, provided
that the mean sidewall temperature is controlled.
###### keywords:
Rayleigh–Bénard convection, Turbulent convection, Computational methods
## 1 Introduction
Understanding thermally induced convection as it arises in the earth’s
atmospheric/oceanic circulations and deducing its fundamental aspects from
laboratory experiments is an ongoing endeavour which motivated numerous
experimental and theoretical studies. In this realm, Rayleigh–Bénard
convection (RBC), i.e. a fluid held between two parallel plates heated from
below and cooled from above, is the most thoroughly investigated model system
to study the complex physics behind natural convection such as pattern
formation and the transition to turbulence (Bodenschatz et al., 2000; Ahlers
et al., 2009b; Lohse & Xia, 2010).
Most of the early theoretical advances were made by considering the system as
infinitely extended in the lateral direction. For instance, conventional
linear-stability analysis predicts the formation of two-dimensional rolls
(Chandrasekhar, 1961), while a weakly non-linear analysis reveals the
stability regimes of these rolls and their path to subsequent oscillatory or
stationary type bifurcations (Schlüter et al., 1965; Busse, 1967, 1978). In
laboratory experiments, however, we must resort to laterally confined systems
where our understanding is far less complete. In particular, when the lateral
size of the container is close to or less than the height of the cell, the
presence of sidewalls plays an important role (Roche, 2020; Shishkina, 2021).
Therefore, this study focuses on the effects of different thermal sidewall
boundary conditions on heat transfer and the emergence of different flow
states.
Different sidewalls are known to affect the critical Rayleigh number $Ra_{c}$
above which convection sets in (Buell & Catton, 1983; Hébert et al., 2010),
and perfectly conducting sidewalls have been found to delay the onset compared
to adiabatic sidewalls. In an attempt to better understand the flow regimes
above onset, bifurcation analyses were performed in a cubic domain for
adiabatic (Puigjaner et al., 2004) and perfectly conducting sidewalls
(Puigjaner et al., 2008). The bifurcation diagrams for the conducting
sidewalls are generally more complex, and double-toroidal states predominate
over the classical single-roll structure found for adiabatic sidewalls.
Sidewalls also have a strong influence on pattern formation (Cross &
Hohenberg, 1993; de Bruyn et al., 1996; Bodenschatz et al., 2000) and
different sidewall boundary conditions lead to differences in observable
patterns even in cells with large aspect ratio (Hu et al., 1993).
In RBC experiments, spurious sidewall heat fluxes are a major practical
difficulty that can substantially bias global heat transport measurements.
Ahlers (2000) reported that naive sidewall corrections can overstate Nusselt
number measurements by up to $20\%$ and underestimate the scaling of the
Nusselt number $Nu$ with the Rayleigh number $Ra$ ($Nu\sim Ra^{\lambda}$),
reflected in a reduction of the scaling exponent $\lambda$ by about $2\%$,
underscoring the importance of more sophisticated sidewall corrections.
Roche et al. (2001) further emphasized this conclusion by showing
that the sidewall corrections can be considerably larger than assumed, leading
to scaling exponents closer to the turbulent scaling of $Nu\sim Ra^{1/3}$
(Grossmann & Lohse, 2000, 2001, 2004) than previously measured. Probably the
most important question in convection today is whether the ultimate regime in
confined geometries has the same scaling as predicted for unbounded domains,
i.e. $Nu\sim Ra^{1/2}$ (up to different logarithmic corrections), as proposed
by Kraichnan (1962) and Grossmann & Lohse (2011). Another important question
is when and how exactly the transition to the ultimate regime takes place in
confined geometries. Laboratory experiments (Chavanne et al., 1997; Niemela et
al., 2000; Chavanne et al., 2001; Ahlers et al., 2009a, 2012; He et al., 2012;
Urban et al., 2014; Roche, 2020) in this extremely high $Ra$ regime are
notoriously difficult to perform and potentially sensitive to several unknowns
of the system, one of which is the influence of imperfectly insulated
(non-adiabatic) sidewalls.
Numerical simulations were performed incorporating thermal conduction in the
solid sidewall to clarify the differences between an ideal adiabatic setup and
a finite thermal conductivity sidewall (Verzicco, 2002; Stevens et al., 2014;
Wan et al., 2019). The results of these studies suggest that different thermal
properties of the sidewall alter the mean flow structure, leading to
significant differences in global heat transport in the low to mid $Ra$ range.
However, this effect vanishes for larger $Ra$, at least when the sidewall
temperature is constant and maintained at the arithmetic mean of upper and
lower plate temperatures. Conversely, if the sidewall temperature deviates
from the arithmetic mean, differences in heat transport persist even for large
$Ra$. This indicates that it is more important to keep the environment at the
correct temperature than to shield the interior of the cell from its
surroundings.
Despite extensive previous work, the spatial distribution of flow and heat
transport in confined geometries with different thermal boundary conditions
has not been exhaustively studied, especially for conditions close to those of
real experimental sidewalls. In the present work, we investigate RBC with the
following thermal sidewall boundary conditions: adiabatic, constant
temperature (isothermal) and linear temperature. In the first part of the
results, we focus on a steady-state analysis based on an adjoint descent
algorithm (Farazmand, 2016) to identify different flow states, their
properties and their evolution over $Ra$. In the second part, the analysis is
complemented and extended to higher $Ra$ into the turbulent regime by a set of
DNS for a 2D box and 3D cylindrical setup, covering a range of
$10^{3}<Ra<10^{11}$ and $10^{3}<Ra<10^{9}$, respectively, aiming for a more
complete picture. We first present our numerical methods, discuss the results
and conclude with our main findings.
## 2 Numerical methods
### 2.1 Governing equations
The dimensionless control parameters in RBC are the Rayleigh number
$Ra\equiv\alpha g\Delta H^{3}/(\kappa\nu)$, the Prandtl number
$Pr\equiv\nu/\kappa$, and the width-to-height aspect ratio of the box,
$\Gamma\equiv L/H$. Here, $\alpha$ denotes the isobaric thermal expansion
coefficient, $\nu$ the kinematic viscosity, $\kappa$ the thermal diffusivity
of the fluid, $g$ the acceleration due to gravity, $\Delta\equiv T_{+}-T_{-}$
the difference between the temperatures at the lower ($T_{+}$) and upper
($T_{-}$) plates, $H$ the distance between the parallel plates (the container
height), and $L$ the length of the container or the diameter in the case of a
cylindrical setup. In this study, we focus on variations with $Ra$, while
$Pr=1$ is fixed for most results in this paper except for a $Pr$-dependence
study in section 4.5, and $\Gamma=1$ is held constant throughout the study.
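As a quick numerical illustration of these definitions, the control parameters can be computed from dimensional fluid properties. The sketch below uses generic, water-like property values chosen for illustration only; they are not taken from this study.

```python
def rayleigh(alpha, g, delta_T, H, kappa, nu):
    """Ra = alpha * g * delta_T * H^3 / (kappa * nu)."""
    return alpha * g * delta_T * H**3 / (kappa * nu)

def prandtl(nu, kappa):
    """Pr = nu / kappa."""
    return nu / kappa

# Illustrative, water-like property values (approximate, room temperature):
alpha = 2.1e-4   # 1/K, isobaric thermal expansion coefficient
g     = 9.81     # m/s^2, gravitational acceleration
dT    = 1.0      # K, temperature difference between the plates
H     = 0.1      # m, cell height
kappa = 1.4e-7   # m^2/s, thermal diffusivity
nu    = 1.0e-6   # m^2/s, kinematic viscosity

Ra = rayleigh(alpha, g, dT, H, kappa, nu)   # ~1.5e7 for these values
Pr = prandtl(nu, kappa)                     # ~7 for water
```

A centimetre-scale water cell with a one-kelvin temperature difference thus already sits in the range $Ra\sim 10^{7}$ considered in the steady-state analysis below.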
The governing equations in the Oberbeck–Boussinesq approximation for the
dimensionless, incompressible velocity ${\bf u}$, temperature $\theta$ and
kinematic pressure $p$ read as follows:
$\displaystyle\partial{\bf u}/\partial t+{\bf u}\cdot{\bm{\nabla}}{\bf
u}+{\bm{\nabla}}{p}$ $\displaystyle=$
$\displaystyle\sqrt{Pr/Ra}{\bm{\nabla}}^{2}{\bf u}+{\theta}{\bf e}_{z},$
$\displaystyle\partial{\theta}/\partial t+{\bf u}\cdot{\bm{\nabla}}{\theta}$
$\displaystyle=$ $\displaystyle
1/\sqrt{PrRa}{\bm{\nabla}}^{2}{\theta},\quad{\bm{\nabla}}\cdot{\bf u}=0.$ (1)
The equations were made dimensionless using the free-fall velocity
$u_{ff}\equiv(\alpha g\Delta H)^{1/2}$, the free-fall time $t_{ff}\equiv
H/u_{ff}$, the temperature difference $\Delta\equiv T_{+}-T_{-}$ between
bottom ($T_{+}$) and top ($T_{-}$) plates and $H$ the cell height. Here ${\bf
e}_{z}$ is the unit vector in the vertical $z$-direction. This set of
equations is solved with the direct numerical solver goldfish, which uses a
fourth-order finite volume discretization on a staggered grid and a third
order Runge–Kutta time scheme. The code has been widely used in previous
studies and validated against other direct numerical simulation codes (Kooij
et al., 2018; Reiter et al., 2021a).
### 2.2 Boundary conditions
Figure 1: 2D numerical setup of $(a)$ adiabatic, $(b)$
linear and $(c)$ constant sidewall temperature boundary conditions. $(d)$
Sketch of cylindrical domain. Profiles next to $(b)$ and $(c)$ show the
imposed sidewall temperature distribution.
We study 2D RBC in a square box and 3D RBC in a cylindrical domain. The setups
and profiles of the sidewall (SW) boundary conditions (BCs) used are shown in
figure 1. The adiabatic, linear and constant conditions for the sidewall
region $\delta V_{S}$ are defined by
adiabatic: $\displaystyle\quad\partial\theta/\partial{\chi}=0,$ (2) linear:
$\displaystyle\quad\theta=\theta_{+}+z\left(\theta_{-}-\theta_{+}\right),$ (3)
constant:
$\displaystyle\quad\theta=\begin{cases}\frac{-k(2z-1)}{k+2z}\left(\theta_{+}-\theta_{m}\right),&0\leq
z\leq 1/2,\\\
\frac{k(2z-1)}{k-2z+2}\left(\theta_{-}-\theta_{m}\right),&1/2<z\leq
1,\end{cases}$ (4)
with the temperature of the lower plate $\theta_{+}=1/2$, the temperature of
the upper plate $\theta_{-}=-1/2$, their arithmetic mean $\theta_{m}=0$,
$z\equiv z/H\in[0,1]$ and $\chi=x$ for box and $\chi=r$ for cylinder,
respectively. As for the constant temperature conditions, most of the sidewall
is kept at a nearly uniform temperature ($\theta_{m}$), except for the
transition regions in the vicinity of the top and bottom plates to ensure a
smooth temperature distribution. The parameter $0<k\ll 1$ in eq. (4) defines
the thickness of the transition layer. Here we used $k=0.01$, which gives a
fairly sharp albeit sufficiently smooth transition, as can be seen in figure 1
$(c)$. Moreover, the velocity no-slip conditions apply to all walls, i.e.
$\mathbf{u}|_{\text{wall}}=0$.
### 2.3 Adjoint descent method
A complementary analysis to direct numerical simulations is the study of the
Boussinesq equations by means of their invariant solutions. Hopf (1948)
conjectured that solutions of the Navier–Stokes equations can be understood in
terms of a finite, but possibly large, number of invariant solutions, and that
turbulence, from this point of view, is the migration from the neighbourhood
of one solution to another. While highly chaotic systems seem hopelessly
complex to understand, laminar or weakly chaotic flows can often be captured
quite well with this approach. In this work, we focus solely on steady-state
(equilibrium) solutions.
Determining steady-state solutions can be quite difficult, especially when the
number of dimensions is large, as is the case for most fluid mechanical
problems. The most commonly used numerical method for this task is Newton’s
method, which usually uses the generalized minimal residual (GMRES) algorithm
to solve the corresponding systems of linear equations (Saad & Schultz, 1986).
This method generally shows fast convergence rates when the initial estimate
is close to the equilibrium point. However, if the initial estimate is too far
from the equilibrium, Newton’s method often fails. In particular, for fluid
mechanics, the basin of attraction of Newton’s method can be quite small,
making the search for steady-states highly dependent on the initial guess.
Here we consider an alternative approach recently proposed by Farazmand (2016)
based on an adjoint method. Farazmand (2016) has shown that this adjoint-
descent method can significantly improve the chance of convergence compared to
the Newton–descent method, and thus more reliably capture equilibrium states
from a given initial state, but at the cost of a generally slower convergence
rate. A detailed derivation of the algorithm can be found in Farazmand (2016).
Below we sketch the idea of the method.
Suppose we want to find equilibrium solutions of a particular PDE (in our case
the Boussinesq equations)
$\partial_{t}{\bf u}=F({\bf u}),$ (5)
with ${\bf u}={\bf u}(\mathbf{x},t)$. The equilibria of $F({\bf u})$ are
generally unstable and therefore difficult to detect. The idea is to seek a
new PDE, i.e.
$\partial_{\tau}{\bf u}=G({\bf u}),$ (6)
whose solutions always converge to the equilibrium solutions of (5) as the
fictitious time $\tau$ goes to infinity,
$\norm{F({\bf u})}_{\mathcal{A}}^{2}\rightarrow
0\quad\text{as}\quad\tau\rightarrow\infty,$ (7)
with the weighted energy norm
$\norm{\cdot}_{\mathcal{A}}\equiv\langle\cdot,\cdot\rangle_{\mathcal{A}}\equiv\langle\cdot,\mathcal{A}\cdot\rangle$
for a certain real self-adjoint and positive definite operator $\mathcal{A}$.
$F({\bf u})$ evolves along a trajectory ${\bf u}^{\prime}$ in accordance with
$\frac{1}{2}\partial_{\tau}\norm{F({\bf u})}_{\mathcal{A}}^{2}=\langle\delta
F({\bf u},{\bf u}^{\prime}),F({\bf u})\rangle_{\mathcal{A}},$ (8)
where $\delta F({\bf u},{\bf u}^{\prime})\equiv\lim\limits_{\varepsilon\to
0}\frac{F({\bf u}+\varepsilon{\bf u}^{\prime})-F({\bf u})}{\varepsilon}$
is the Gateaux (functional) derivative of $F({\bf u})$ at ${\bf u}$ in the
direction ${\bf u}^{\prime}$. In the Newton-descent method, the search
direction ${\bf u}^{\prime}$ is approximated from $\delta F({\bf u},{\bf
u}^{\prime})=-F({\bf u})$ by using, for example, a GMRES iterative algorithm.
For the adjoint-descent method, on the other hand, we rewrite eq. (8) in the
form
$\frac{1}{2}\partial_{\tau}\norm{F({\bf u})}_{\mathcal{A}}^{2}=\langle{\bf
u}^{\prime},\delta F^{\dagger}({\bf u},F({\bf u}))\rangle_{\mathcal{A}},$ (9)
where $\delta F^{\dagger}$ is the adjoint operator of the functional
derivative $\delta F$. For ${\bf u}^{\prime}=-\delta F^{\dagger}({\bf
u},F({\bf u}))$ one guarantees that $\norm{F({\bf u})}_{\mathcal{A}}^{2}$
decays to zero along the trajectory ${\bf u}^{\prime}$, since then
$\frac{1}{2}\partial_{\tau}\norm{F({\bf u})}_{\mathcal{A}}^{2}=-\norm{\delta
F^{\dagger}({\bf u},F({\bf u}))}_{\mathcal{A}}^{2}$. Letting ${\bf u}$ evolve
along the adjoint search direction ensures the convergence to an equilibrium,
thus we find the desired PDE $G({\bf u})\equiv{\bf u}^{\prime}$, i.e.
$G({\bf u})=-\delta F^{\dagger}({\bf u},F({\bf u})).$ (10)
The choice of the norm $\norm{\cdot}_{\mathcal{A}}$ is important for the
algorithm to be numerically stable and is explained in more detail in the
appendix. As mentioned, the operator $\mathcal{A}$ should be real-valued,
positive-definite and self-adjoint. Following Farazmand (2016), we use an
operator $\mathcal{A}$ that is closely related to the inverse Laplacian, i.e.
$\mathcal{A}=(I-\alpha{\bm{\nabla}}^{2})^{-1}$ where $I$ is the identity
operator and $\alpha$ is a non-negative scalar parameter. For $\alpha=0$ this
norm converges to the $L^{2}$-norm and for $\alpha>0$ it effectively dampens
smaller scales and provides a better numerical stability.
The linear adjoint equations for the Boussinesq equations (1) read
$\displaystyle-\partial_{\tau}{\bf u}$
$\displaystyle=\left({\bm{\nabla}}\tilde{{\bf
u}}^{\prime\prime}+({\bm{\nabla}}\tilde{{\bf
u}}^{\prime\prime})^{\text{T}}\right){\bf
u}+\theta{\bm{\nabla}}\tilde{\theta}^{\prime\prime}-{\bm{\nabla}}p^{\prime\prime}+\sqrt{Pr/Ra}{\bm{\nabla}}^{2}\tilde{{\bf
u}}^{\prime\prime},$ $\displaystyle-\partial_{\tau}\theta$ $\displaystyle={\bf
u}\cdot{\bm{\nabla}}\tilde{\theta}^{\prime\prime}+1/\sqrt{PrRa}{\bm{\nabla}}^{2}\tilde{\theta}^{\prime\prime}+\mathbf{\mathbf{e}}_{z}\cdot\tilde{{\bf
u}}^{\prime\prime},$ $\displaystyle{\bm{\nabla}}\cdot{\bf u}^{\prime\prime}$
$\displaystyle=0,\quad{\bm{\nabla}}\cdot{\bf u}=0$ (11)
(see derivations in the appendix). Here the double prime fields ${\bf
u}^{\prime\prime}$ and $\theta^{\prime\prime}$ denote the residuals of the
Navier–Stokes eq. (1), i.e.
$\displaystyle{\bf u}^{\prime\prime}$ $\displaystyle\equiv-{\bf
u}\cdot{\bm{\nabla}}{\bf u}-{\bm{\nabla}}p+\sqrt{Pr/Ra}{\bm{\nabla}}^{2}{\bf
u}+\mathbf{\mathbf{e}}_{z}\theta,$ $\displaystyle\theta^{\prime\prime}$
$\displaystyle\equiv-{\bf
u}\cdot{\bm{\nabla}}\theta+1/\sqrt{PrRa}{\bm{\nabla}}^{2}\theta,$ (12)
and $\tilde{{\bf u}}^{\prime\prime}\equiv\mathcal{A}\mathbf{{\bf
u}}^{\prime\prime}$ as well as
$\tilde{\theta}^{\prime\prime}\equiv\mathcal{A}\mathbf{\theta}^{\prime\prime}$.
For simplicity, let $\mathbf{q}\equiv({\bf u},\theta)$, then the adjoint
descent method consists of three steps
1. Find the residuals $\mathbf{q}^{\prime\prime}$ according to eq. (12).
2. Solve $\tilde{\mathbf{q}}^{\prime\prime}=\mathcal{A}\mathbf{q}^{\prime\prime}$ for $\tilde{\mathbf{q}}^{\prime\prime}$.
3. Update $\mathbf{q}$ according to eq. (11).
In step (i), we solve the time-stepping eq. (1), where we use a standard
pressure projection method and treat the diffusion term implicitly. The time
step size $\Delta t$ can be chosen independently of the artificial time step
size $\Delta\tau$ of the adjoint equations. For step (ii), using the energy
norm $\norm{\cdot}_{\mathcal{A}}$ with the operator
$\mathcal{A}=(I-\alpha{\bm{\nabla}}^{2})^{-1}$, we solve the Helmholtz-type
equation
$(I-\alpha{\bm{\nabla}}^{2})\tilde{\mathbf{q}}^{\prime\prime}=\mathbf{q}^{\prime\prime}$.
The integration of the adjoint equations in step (iii) is similar to step (i),
but all terms are treated explicitly. Through tests, we found that the
artificial time step $\Delta\tau$ can be chosen much larger than $\Delta t$ in
some cases, i.e. for large $Ra$.
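The essence of these steps can be illustrated on a toy finite-dimensional problem. The sketch below (with $\mathcal{A}=I$, i.e. the plain $L^{2}$ norm, and a scalar $F$) shows how descending along $-\delta F^{\dagger}({\bf u},F({\bf u}))$ drives $\norm{F({\bf u})}^{2}$ to zero and reaches an equilibrium that forward time stepping would be repelled from; it is a conceptual toy, not the PDE implementation described above:

```python
# Toy adjoint descent for du/dt = F(u) with F(u) = u*(1-u).
# The equilibrium u = 0 is unstable under forward time stepping (F'(0) > 0),
# but gradient descent on (1/2)*||F(u)||^2 started nearby converges to it.

def F(u):
    return u * (1.0 - u)

def dF(u):                        # Jacobian of F (a scalar here)
    return 1.0 - 2.0 * u

u, dtau = 0.1, 0.1
for _ in range(2000):
    u -= dtau * dF(u) * F(u)      # u' = -dF^T(u) F(u): the adjoint search direction

# The residual ||F(u)|| has decayed toward zero at the unstable equilibrium u = 0.
assert abs(F(u)) < 1e-6
```

Forward integration of $\dot{u}=F(u)$ from the same initial condition would instead run away to the stable equilibrium $u=1$, which is precisely why the adjoint descent is useful for capturing unstable steady states.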
The boundary conditions of $\tilde{{\bf u}}^{\prime\prime}$ and
$\tilde{\theta}^{\prime\prime}$ result from integration by parts in the
derivation of the adjoint equations. Evaluation of the adjoint operator of the
diffusion terms yields
$\displaystyle\int_{V}\tilde{{\bf u}}^{\prime\prime}{\bm{\nabla}}^{2}{\bf
u}^{\prime}dV=\int_{V}{\bf u}^{\prime}{\bm{\nabla}}^{2}\tilde{{\bf
u}}^{\prime\prime}dV+\int_{S}{\bf u}^{\prime}({\bm{\nabla}}\tilde{{\bf
u}}^{\prime\prime}\cdot\mathbf{n})dS-\int_{S}\tilde{{\bf
u}}^{\prime\prime}({\bm{\nabla}}{\bf u}^{\prime}\cdot\mathbf{n})dS,$ (13)
where we see the occurrence of two additional boundary terms (the last two
terms) evaluated on the boundary domain $S$. The first boundary term vanishes
since the search direction ${\bf u}^{\prime}$ is zero on the boundaries. The
second term can be eliminated if we also choose homogeneous Dirichlet boundary
conditions for the adjoint field $\tilde{{\bf u}}^{\prime\prime}$ on $S$. The
same logic applies to homogeneous Neumann conditions. For the pressure field
$p^{\prime\prime}$, we apply Neumann boundary conditions on all
walls. In this study, all flow states showed good overall convergence
($\norm{F({\bf u})}_{\mathcal{A}}^{2}\leq 10^{-5}$) and the velocity fields
were almost divergence-free ($\norm{\divergence{{\bf u}}}_{L^{2}}\leq
10^{-3}$). However, the rigorous verification of the chosen pressure BCs has
yet to be performed. Another interesting point, reserved for later
investigation, is whether a vorticity-streamfunction formulation might be
better suited to resolve issues with the boundary conditions.
Figure 2: Convergence of the adjoint-descent method for three different $Ra$,
starting from the same initial field. The time-step size for which the
algorithm is just stable increased with $Ra$, i.e., for these cases we used
$\Delta\tau=0.5$ ($Ra=10^{4}$), $\Delta\tau=2.0$ ($Ra=10^{5}$) and
$\Delta\tau=5.0$ ($Ra=10^{6}$). All three cases converged to large-scale
circulation flow states as described in section 3.2.
For the steady-state analysis, we use a Galerkin method with Chebyshev bases
in $x$ and $z$ directions and a quasi-inverse matrix diagonalization strategy
for better efficiency (Shen, 1995; Julien & Watson, 2009; Oh, 2019; Mortensen,
2018). The code is publicly available (Reiter, 2021). We use an implicit
backward Euler time discretization and alias the fields using the $2/3$ rule
by setting the last $1/3$ high-frequency spectral coefficients to zero after
evaluating the nonlinear terms. When used as a direct numerical solver, we
found excellent agreement with our finite-volume code goldfish. In addition,
the steady-states from the adjoint descent method showed excellent agreement
with those found by an alternative Newton–GMRES iteration. Figure 2 shows the
convergence rates for three different $Ra$, starting from the same initial
state. Overall, we find that the chance of convergence is improved over the
Newton-descent method, although the convergence rate suffers, and larger $Ra$
are either not feasible with the current approach as implemented in our code
or diverge after some time. Therefore, we restrict the steady-state analysis
to flows in the range $Ra\leq 10^{7}$ and investigate larger $Ra$ using direct
numerical simulations. One conceivable problem with the current approach is
that the currently used energy norm with the operator
$\mathcal{A}\equiv(I-\alpha{\bm{\nabla}}^{2})^{-1}$ dampens smaller scales in
order to increase the stability of the algorithm. But for larger $Ra$, smaller
scales become important to resolve the boundary layers sufficiently, so the
algorithm is likely to take longer to converge or the damping of the smaller
scales is too severe to reach convergence overall. Using smaller values of
$\alpha$ could lead to better results in that case, as it emphasizes smaller
scales more. Preliminary analysis suggests that $\alpha=10^{-3}$ leads to
better convergence to a steady-state than $\alpha=1$, but requires smaller
time steps $\Delta\tau$, which currently makes it too costly to apply to a
wider range of parameters. In the future, the convergence rate might be
improved by employing a hybrid adjoint-descent and Newton-GMRES approach, as
proposed by Farazmand (2016). Alternative gradient optimization techniques are
also conceivable to boost convergence speed.
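The $2/3$ rule mentioned above amounts to zeroing the last third of the coefficient array after each nonlinear evaluation. A minimal 1D sketch is given below; the actual code applies this to Chebyshev coefficients in both directions:

```python
import numpy as np

def dealias_23(coeffs):
    """2/3-rule dealiasing: zero out the highest one-third of a 1D array of
    spectral coefficients (ordered from lowest to highest frequency)."""
    c = coeffs.copy()
    n = c.size
    cut = (2 * n) // 3        # keep the first 2/3 of the modes
    c[cut:] = 0.0
    return c

c = np.arange(12, dtype=float)    # 12 spectral coefficients, low to high
cd = dealias_23(c)
# The first 8 coefficients are kept; coefficients 8..11 are set to zero.
```

Zeroing these modes after evaluating products prevents aliasing errors from the quadratic nonlinear terms from contaminating the retained spectrum.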
## 3 Steady-state analysis
Figure 3: Growth rates $\sigma$ as determined from linear stability analysis
for the four most unstable modes at the onset of convection in the 2D cell for
$(a)$ adiabatic, $(b)$ linear and $(c)$ constant sidewall boundary conditions.
The most unstable modes are schematically depicted above each graph with the
corresponding colour. The critical Rayleigh numbers for the onset of
convection, $Ra_{c}$, are marked with arrows.
In this section, we study steady-states in 2D RBC for $Ra\leq 10^{7}$. In what
follows, we refer to flow states as single or multiple solutions connected by
inherent symmetries of the system. For example, the single-roll state (SRS) in
2D can exist in two forms, either circulating clockwise or counterclockwise,
but the two are counted as a single flow state since they are connected by the
reflection symmetry of the system.
Steady-state solutions of the SRS state have been investigated in laterally
periodic flows with stress-free velocity boundary conditions on the horizontal
walls (Wen et al., 2015, 2020b) and with no-slip BCs (Waleffe et al., 2015;
Sondak et al., 2015; Wen et al., 2020a; Kooloth et al., 2021). Bifurcations
and different flow states have already been studied in laterally unbounded RBC
(Zienicke et al., 1998), in laterally bounded RBC for a cubic domain
(Puigjaner et al., 2008) and a 2D square domain (Venturi et al., 2010). Here
we focus on the onset of convection, the SRS and a vertically stacked double-
roll state (DRS) in two-dimensional RBC for three different sidewall BCs as
shown in figure 1.
### 3.1 Onset of Convection
In RBC, there is a critical Rayleigh number $Ra_{c}$ above which the system
bifurcates from the conduction state to coherent rolls. We calculate $Ra_{c}$
using a linear stability analysis described in more detail in Reiter et al.
(2021b). For adiabatic or linear (conductive) sidewall BCs, the conduction or
base state is characterized by a linear temperature profile in the vertical
direction with zero velocity field and independence from control parameters.
However, for a constant temperature sidewall distribution, a convective flow
is already present. In this case, we perform a steady-state search before
analyzing the local stability around this equilibrium point.
Figure 3 shows the linear growth rates of the four most unstable modes, which
resemble the first four Fourier modes as depicted in the same figure. All
three BCs initially bifurcate from the conduction state to a single roll
state. Adiabatic sidewalls lead to a lower critical Rayleigh number compared
to isothermal sidewalls, which is to be expected (Buell & Catton, 1983). The
onset for the adiabatic sidewall occurs at $Ra_{c}\approx 2.7\times 10^{3}$
which agrees, within our resolution limit, with Venturi et al. (2010), who
report a critical $Ra$ of about $2582$. The onset for the linear SW occurs at
$5.1\times 10^{3}$ and the onset for the constant SW occurs slightly later at
$5.6\times 10^{3}$. This indicates that the interaction of the unstable modes
with the convective field already present for the constant sidewall BC is weak
and its influence on the onset is small.
### 3.2 Single-roll (states $\mathcal{S}_{A}^{1}$, $\mathcal{S}_{L}^{1}$,
$\mathcal{S}_{C}^{1}$)
Figure 4: Single-roll state for $(a)$ adiabatic ($Ra=10^{6}$), $(b)$ linear
($Ra=9\times 10^{4}$) and $(c)$ constant ($Ra=10^{6}$) sidewall temperature
boundary conditions. Contours (streamlines) represent the temperature
(velocity) field.
The single roll state (SRS) is arguably the most important state in RBC. It is
the first mode to appear above the conduction state, as we have just seen, and
prevails even up to largest $Ra$ in the form of large-scale circulation (LSC)
on turbulent superstructures (Zhu et al., 2018; Reiter et al., 2021a). The SRS
is stable and time-independent for small $Ra$ but oscillatory, chaotic, or
even completely vanishing for larger $Ra$, as we will show in section 4.3.
Here we analyze its properties before collapse and show that the growth of
secondary corner rolls plays an important role in its destabilization and that
this process can be both suppressed and enhanced by different sidewall
boundary conditions.
Figure 4 shows the temperature and velocity fields of the SRS for different
sidewall BCs. For all three BCs we can identify a large primary roll
circulating counter-clockwise and two secondary corner rolls. The corner rolls
are most pronounced for the linear sidewall BC and the primary roll is nearly
elliptical. The dimensionless heat-flux is expressed in form of the Nusselt
number $Nu\equiv\sqrt{RaPr}F_{f}H/\Delta$ with the heat-flux $F_{f}$ entering
the fluid and the imposed temperature difference $\Delta$. $F_{f}$ can be
defined in different ways, especially in the presence of sidewall heat-fluxes.
Averaging the temperature equation in eq. (1) over time, one obtains
$\displaystyle\divergence\mathbf{F}=0,\quad\mathbf{F}\equiv{\bf
u}\theta-1/\sqrt{RaPr}{\bm{\nabla}}{\theta},$ (14)
from which it follows that the total heat flux through the boundaries
$S=\delta V$ must vanish, i.e. $\int_{S}(\mathbf{F}\cdot\mathbf{n})\,dS=0$. For isothermal
sidewall BCs, asymmetric flow states with net nonzero sidewall heat-fluxes are
possible; in this case the heat fluxes through the bottom and top plates would
deviate from each other. However, in the present study, we found that all
sidewall heat fluxes are approximately equal to zero when integrated
vertically and the temperature gradient at the bottom plate is approximately
equal to the temperature gradient at the top plate. Therefore, we define $Nu$
based on the lower (hot) plate at $z=0$:
$Nu\equiv-\frac{1}{A_{+}}\int_{S_{+}}\frac{\partial\theta}{\partial
z}d{S_{+}},$ (15)
with the bottom plate domain $S_{+}$ and its surface area $A_{+}$. The
dimensionless momentum transport is given by the Reynolds number
$Re\equiv\sqrt{Ra/Pr}\sqrt{\langle\mathbf{U}^{2}\rangle_{V}}L,$ (16)
based on total kinetic energy of the mean field velocity $\mathbf{U}$. Here,
$\langle\cdot\rangle_{V}$ denotes a volume average.
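A sketch of how eqs. (15) and (16) could be evaluated on a discrete field is given below; the one-sided stencil at the plate is an assumption made for illustration, not necessarily the scheme used in goldfish:

```python
import numpy as np

def nusselt_bottom(theta, z):
    """Nu = -<d(theta)/dz> averaged over the bottom plate (eq. 15), using a
    second-order one-sided difference at z = 0 on a uniform grid."""
    dz = z[1] - z[0]
    dtheta_dz = (-3 * theta[:, 0] + 4 * theta[:, 1] - theta[:, 2]) / (2 * dz)
    return -np.mean(dtheta_dz)

def reynolds(U, Ra, Pr, L=1.0):
    """Re = sqrt(Ra/Pr) * sqrt(<U^2>_V) * L (eq. 16) for a mean-field
    velocity magnitude array U in free-fall units."""
    return np.sqrt(Ra / Pr) * np.sqrt(np.mean(U**2)) * L

# Sanity check: the pure conduction profile theta = 1/2 - z gives Nu = 1.
z = np.linspace(0.0, 1.0, 65)
theta = np.tile(0.5 - z, (64, 1))     # theta[x, z], linear in z
Nu = nusselt_bottom(theta, z)          # evaluates to 1 for this profile
```

Normalizing $Nu$ so that the conduction state gives exactly one is what makes it a convenient measure of the convective heat-transport enhancement.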
Figure 5: Nusselt number $Nu$ for the single-roll states for $(a)$ adiabatic,
$(b)$ linear and $(c)$ constant sidewall temperature boundary conditions.
Figure 6: Reynolds number $Re$ for the single-roll states
$\mathcal{S}_{A}^{1}$, $\mathcal{S}_{L}^{1}$, $\mathcal{S}_{C}^{1}$ for $(a)$
adiabatic, $(b)$ linear and $(c)$ constant sidewall temperature boundary
conditions.
In the laminar regime, where the dissipation of velocity and temperature field
is determined by the contributions of the boundary layers, we expect the total
heat and momentum scaling $Nu\sim Ra^{1/4}$ and $Re\sim Ra^{1/2}$ (Grossmann &
Lohse, 2000), respectively. Figure 5 shows that the former scaling shows up
only for a very limited $Ra$ range and only for the adiabatic boundary
conditions. The SRS of the linear sidewall BCs is stable only up to $Ra\leq
10^{5}$, then the corner rolls become strong enough to lead to a collapse of
the SRS. The stability region where the steady-states converge is too small to
observe an unperturbed scaling. On the other hand, for the constant sidewall
boundary conditions, corner-roll growth is less dominant. In this case, the
reason why the $Nu$ scaling deviates from $1/4$ is that heat entering through
the bottom/top can immediately escape through the sidewalls in the form of a
"short-circuit", which dominates the lower $Ra$ regime and is the reason why
$Nu$ is relatively large for small $Ra$.
observe $Nu\sim Ra^{0.25}$ for $10^{4}\leq Ra\leq 3\times 10^{5}$, followed by
$Nu\sim Ra^{0.16}$ for $3\times 10^{5}\leq Ra\leq 10^{6}$. Similarly, the
growth of the corner rolls disturbs the convection wind, and $Nu$ deviates
from the ideal $1/4$ scaling. Looking at the $Re$ vs. $Ra$ scaling in figure
6, we find the theoretically predicted scaling of $1/2$ is better represented
in comparison and the different sidewall boundary conditions deviate less
among themselves. This suggests that momentum transport is less affected by
changing sidewall boundary conditions than heat transport.
#### 3.2.1 Growth of corner rolls
The SRS is stable up to a certain $Ra$ limit. Above this limit, it may
fluctuate, reverse orientation, or even disappear altogether. This process
occurs at $Ra\approx 10^{6}$ for the adiabatic and constant temperature
sidewall BCs and at $Ra\approx 10^{5}$ for the linear sidewall BC. While up to
this event the dynamic behaviour of the three different sidewall BCs is
qualitatively very similar, from there on it differs. The constant sidewall BC
case shows a time dependence, but remains in the SRS state without changing
its orientation. The adiabatic and linear sidewall BCs, on the other hand,
enter a more chaotic regime of regular and chaotic flow reversals (Xi & Xia,
2007; Sugiyama et al., 2010), some of which are discussed in section 3.3. Of
greatest importance here appears to be the presence and magnification of
secondary corner rolls (CRs).
Figure 7: $(a)$ Steady-state vorticity field, velocity streamlines and
corner-roll size $\delta_{CR}$, defined as the distance from the corner to the
closest stagnation point at the plate, for $Ra=7\times 10^{5}$ and adiabatic
sidewalls, and vorticity balance contributions according to eq. (17) in the
corner-roll domain, i.e., $(b)$ diffusion, $(c)$ buoyancy and $(d)$
convection. The same contour levels were used for $(b$-$d)$.
Figure 7 $(a)$ shows the vorticity field and stream-function contour of two-
dimensional RBC with adiabatic sidewalls at $Ra=7\times 10^{5}$. The existence
of two corner vortices is apparent. Here we define their size $\delta_{CR}$
based on the zero crossing, or stagnation point, of the vorticity
$\omega\equiv\partial_{x}u_{z}-\partial_{z}u_{x}$ at the top plate, cf.
Shishkina et al. (2014). To understand the processes involved in the formation
of the corner rolls, we write down the evolution equation for vorticity
$\partial_{t}\omega=\underbrace{-{\bf
u}\cdot{\bm{\nabla}}\omega}_{\text{convection}}+\underbrace{\sqrt{Pr/Ra}{\bm{\nabla}}^{2}\omega}_{\text{diffusion}}+\underbrace{\partial_{x}\theta}_{\text{buoyancy}}.$
(17)
It is evident that for steady-states ($\partial_{t}\omega=0$) there must be an
equilibrium between convection, diffusion and buoyancy forces. The three
corresponding fields are shown in figure 7 $(b-d)$ zoomed in on the corner
roll region. For this particular $Ra$, all three contributions appear to be
significant. We evaluate the size of the corner rolls (figure 8) and analyse
contributions of diffusion, buoyancy, and convection for all $Ra$ (figure 7).
For this purpose, we evaluate the absolute values of the volume averages for
each term in the corner roll region, e.g.,
$\langle|\partial_{x}\theta|\rangle_{V_{CR}}$ represents the strength of the
buoyancy term in the corner roll volume $V_{CR}$, as shown in figure 7 $(c)$.
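The three contributions in eq. (17) can be evaluated pointwise from discrete fields. The sketch below uses central differences on a periodic grid for brevity; the actual near-wall evaluation would need one-sided stencils, and the corner-roll averaging region is not included here:

```python
import numpy as np

def vorticity_balance(u, w, theta, dx, dz, Ra, Pr):
    """Return the (convection, diffusion, buoyancy) fields of eq. (17) for
    2D fields u(x,z), w(x,z), theta(x,z) on a uniform periodic grid."""
    ddx = lambda f: (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * dx)
    ddz = lambda f: (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * dz)
    omega = ddx(w) - ddz(u)                       # vorticity d_x u_z - d_z u_x
    convection = -(u * ddx(omega) + w * ddz(omega))
    lap = (np.roll(omega, -1, 0) - 2 * omega + np.roll(omega, 1, 0)) / dx**2 \
        + (np.roll(omega, -1, 1) - 2 * omega + np.roll(omega, 1, 1)) / dz**2
    diffusion = np.sqrt(Pr / Ra) * lap
    buoyancy = ddx(theta)
    return convection, diffusion, buoyancy

# Sanity check: a quiescent, horizontally uniform field balances trivially.
zero = np.zeros((8, 8))
conv, diff, buoy = vorticity_balance(zero, zero, np.ones((8, 8)), 0.1, 0.1,
                                     1e6, 1.0)
```

Averaging the absolute values of these fields over the corner-roll volume, as described above, then quantifies which term dominates the balance at a given $Ra$.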
The constant BC yields a notable exception because multiple corner rolls can
exist. This can be seen in figure 4 $(c)$. For small $Ra$, the corner rolls
are dominant in the lower right and upper left corners, where the LSC detaches
(ejects). For the other two BCs, these rolls are not present. Looking at eq.
(17), we realize that the presence of a horizontal temperature gradient can
lead to the formation of vortex structures. This condition is present for the
constant BCs, e.g., in the lower right corner, where the hot LSC detaches
while the temperature is kept constant at zero, resulting in a (strong)
negative temperature gradient. The two more “classical” corner rolls first
appear at larger $Ra$, but soon take over in size, as can be seen in figure 8.
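The corner-roll size $\delta_{CR}$, defined via the vorticity zero crossing at the plate, can be extracted from a discrete vorticity profile along the plate, for instance as follows (a sketch with an illustrative synthetic profile; the function and variable names are ours):

```python
import numpy as np

def corner_roll_size(x, omega_plate):
    """Distance from the corner at x = 0 to the first zero crossing
    (stagnation point) of the vorticity along the plate."""
    s = np.sign(omega_plate)
    idx = np.where(s[:-1] * s[1:] < 0)[0]   # sign changes between neighbours
    if len(idx) == 0:
        return None                          # no stagnation point found
    i = idx[0]
    w0, w1 = omega_plate[i], omega_plate[i + 1]
    # linear interpolation between the two bracketing grid points
    return x[i] + (x[i + 1] - x[i]) * w0 / (w0 - w1)

# synthetic profile changing sign at x = 0.2
x = np.linspace(0.0, 1.0, 100)
print(corner_roll_size(x, 0.2 - x))  # -> 0.2 (up to rounding)
```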
Figure 8: Growth of the corner roll size $\delta_{CR}$ for $(a)$ adiabatic,
$(b)$ linear and $(c)$ constant sidewall temperature boundary conditions. The
adiabatic BC shows two distinct regions, a buoyancy-dominated regime and a
regime where convective influx leads to a more rapid increase. For the constant
BC, the corner rolls appear first in the plume-ejecting corners (bottom right
and upper left in figure 4), represented by the open symbols in $(c)$, and only
for larger $Ra$ do they appear in the plume-impacting regions (closed symbols).
Figure 9: Strength of the vorticity balance contributions diffusion (black
circles), buoyancy (orange diamonds) and convection (purple pluses) in the
corner roll region, according to eq. (17), for $(a)$ adiabatic, $(b)$ linear
and $(c)$ constant sidewall temperature boundary conditions. The adiabatic BC
shows two distinct regimes, a buoyancy-dominated regime and a regime where
convective influx leads to a more rapid increase. For the constant BC, the
corner rolls appear first in the plume-ejecting corner (main figure $c$) and
only for larger $Ra$ do they appear in the plume-impacting region (inset $c$).
The adiabatic and linear sidewall BCs each yield only two corner rolls. These
are present from the onset of convection and grow until the collapse of the
SRS (figure 8). The main difference between the two is that for the adiabatic
sidewall, the corner rolls initially grow monotonically with respect to $Ra$,
whereas for the linear sidewall BCs, the corner rolls are already of
considerable size as soon as the SRS is present. Moreover, they also grow faster with
respect to $Ra$ ($\delta_{CR}\sim Ra^{0.3}$) and soon cover almost $40\%$ of
the width of the cell. Their large initial size combined with faster growth is
the reason for premature SRS instability in linear sidewall BCs. Figure 9
$(b)$ shows that vorticity formation for the entire $Ra$ range is mainly
governed by buoyancy and balanced by diffusion. The hot plumes carry
warm fluid to the upper plate, where it meets the colder sidewall, generating
strong lateral temperature gradients in the upper right corner and consequently
vorticity, according to eq. (17).
In the adiabatic case, on the other hand, the sidewall is warmer close to the
corner, which leads to less vorticity generation by lateral temperature
gradients and therefore smaller corner rolls. In the low $Ra$ regime, the
corner rolls of the adiabatic sidewall are also governed by buoyancy, with a
growth of the corner rolls of $\delta_{CR}\sim Ra^{0.21}$ (figure 8 $a$). This
can be understood by dimensional arguments. Assume convection can be neglected
in eq. (17), which is justified from the results in figure 9 $(a)$. Thus we
obtain $\sqrt{Pr/Ra}\,{\bm{\nabla}}^{2}\omega=\partial_{x}\theta$, or, in terms
of a characteristic temperature $\theta_{CR}$ and a characteristic vorticity
$\Omega_{CR}$, we have
$\sqrt{Pr/Ra}\,\frac{\Omega_{CR}}{\delta_{CR}^{2}}\sim\frac{\theta_{CR}}{\delta_{CR}}$,
and thus
$\delta_{CR}\sim\sqrt{\frac{Pr}{Ra}}\frac{\Omega_{CR}}{\theta_{CR}}.$ (18)
The evaluation (not shown here) of the characteristic vorticity in the corner
roll regions by means of their root mean square value unveiled $\Omega\sim
Ra^{0.7}$. Assuming further that the temperature $\theta_{CR}$ is
approximately constant over $Ra$, we obtain $\delta_{CR}\sim Ra^{0.20}$, which
agrees remarkably well with $\delta_{CR}\sim Ra^{0.21}$. Figure 8 $(a)$
discloses a transition at $Ra\approx 3\times 10^{5}$, above which the corner
roll growth accelerates, exhibiting a scaling of $\delta_{CR}\sim Ra^{0.49}$.
Figure 9 $(a)$ indicates that convective processes begin to affect vorticity
generation. Figure 7 $(d)$ reveals a region with strong convective vorticity
current with the same sign as the buoyancy forces, which enhances the
vorticity generation in this region (figure 7 $c$). We interpret this to mean
that above a certain $Ra$ the primary roll of the SRS begins to feed the corner
rolls until they become strong enough, eventually leading to the collapse of
the SRS itself. We note that the current analysis describes steady-states up to
$Ra\leq 10^{6}$. An opposite trend was observed for larger $Ra$
by Zhou & Chen (2018), who found a slow shrinkage of the corner rolls that
scales approximately with $\sim Ra^{-0.085}$. It would be interesting to
consolidate these results in future studies.
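The exponent bookkeeping behind the $Ra^{0.20}$ estimate of eq. (18) is elementary and can be checked directly (the value $0.7$ for the vorticity exponent is the fitted value quoted above; the constant characteristic temperature is the stated assumption):

```python
# Exponent bookkeeping for eq. (18): delta_CR ~ sqrt(Pr/Ra) * Omega_CR / theta_CR,
# with the measured Omega_CR ~ Ra^0.7 and theta_CR assumed Ra-independent.
beta_sqrt = -0.5    # Ra exponent of sqrt(Pr/Ra)
beta_omega = 0.7    # fitted Ra exponent of the characteristic vorticity
beta_theta = 0.0    # characteristic temperature ~ const.
beta_delta = beta_sqrt + beta_omega - beta_theta
print(round(beta_delta, 3))  # -> 0.2, close to the observed Ra^0.21
```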
### 3.3 Double-roll ($\mathcal{S}_{A}^{2}$, $\mathcal{S}_{L}^{2}$)
Figure 10: Double-roll state (DRS) for $(a)$ adiabatic and $(b)$ linear
sidewall BCs. Contours (streamlines) represent the temperature (velocity) field.
Having discussed the properties of the SRS state, we proceed to the double-
roll state (DRS) as shown in figure 10. It consists of two vertically stacked
hot and cold circulation cells rotating in opposite directions, with a nearly
discontinuous temperature jump in the mid plane. The DRS was not identified as an
equilibrium for the constant sidewall BCs, so we will discuss it exclusively
for the adiabatic and linear sidewall setup. The DRS can coexist with the SRS,
but is generally found at larger $Ra$. Here we have tracked it in the range
$10^{5}\leq Ra<7\times 10^{6}$ for adiabatic and $10^{5}\leq Ra<4\times
10^{6}$ for linear sidewall BCs. This range is consistent with Goldhirsch et
al. (1989) who described a roll-upon-roll state in 2D RBC for $Pr=0.71$ at
$Ra\approx 10^{5}$, but interestingly it was not found for $Pr=6.8$.
Figure 11: Nusselt number $Nu$ for double-roll states $\mathcal{S}_{A}^{2}$
and $\mathcal{S}_{L}^{2}$ for $(a)$ adiabatic and $(b)$ linear sidewall
temperature boundary conditions.
Figure 12: Maximum peak frequency $f_{\text{max}}$ and average frequency
$\overline{f}$ determined from $Nu(t)$ for double-roll states
$\mathcal{S}_{A}^{2}$ and $\mathcal{S}_{L}^{2}$ for $(a)$ adiabatic and $(b)$
linear sidewall temperature boundary conditions.
From figure 11 we see that $Nu$ scales close to $Nu\sim Ra^{1/4}$, which
corresponds to laminar scaling for RBC flows governed by boundary layer
dissipation. Compared to the single-roll state, it is less effective in
transporting heat from wall to wall, as evidenced by an overall smaller $Nu$.
This is actually to be anticipated, since one roll of the DRS can be
conceptually viewed as a half-height, half-temperature gradient RBC system,
implying a $16$ times smaller effective $Ra$. However, this factor most likely
overestimates the difference, since the mid plane velocity is much closer to a
free-slip flow than a no-slip flow and the aspect ratio is two rather than
one. In reality, a DRS has about the same $Nu$ as a SRS with a $6$ times
smaller $Ra$.
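The factor of $16$ follows from $Ra\propto\Delta T\,H^{3}$: halving both the height and the temperature drop across one DRS roll rescales $Ra$ by $(1/2)\times(1/2)^{3}$. A one-line check:

```python
def ra_factor(height_factor, dt_factor):
    """Rescaling of Ra ~ dT * H^3 under a change of height and temperature drop."""
    return dt_factor * height_factor ** 3

# one DRS roll: half height, half temperature drop
print(ra_factor(0.5, 0.5))  # -> 0.0625, i.e. a 16x smaller effective Ra
```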
The DRS is found to be time-independent (stable) only for the adiabatic
sidewall BCs for $Ra\leq 4\times 10^{5}$. For other $Ra$ it is either
periodically oscillating or chaotic. In figure 12 we show characteristic
frequencies of the DRS, obtained by initializing DNS simulations with the
steady-state solutions and evaluating the frequency spectra of $Nu(t)$. The
frequency is presented in free-fall time units. The DRS oscillates with a
frequency of about $0.1$ for $Ra\leq 10^{6}$ for both the adiabatic and linear
setups, i.e., about one cycle every $10$ time units. This cycle corresponds to
about half the circulation time of a cell, i.e., the characteristic velocity
of the circulation is about $0.09$–$0.11$ and its size is $\approx 2L$.
Thus, the DRS oscillation frequency seems to be initially tied to the
circulation time. When $Ra$ exceeds $10^{6}$, we see the emergence of a more
chaotic behavior. Despite increasing turbulence, the DRS state persists and
does not show a transition to a SRS state for $Ra<10^{7}$. In section 4.3 we
will see that for larger $Ra$ the DRS state is eventually replaced by a single
roll LSC again.
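The characteristic frequencies can be obtained from the discrete spectrum of a $Nu(t)$ series along these lines (a sketch; the sampling interval and the synthetic test signal are illustrative, not from the text):

```python
import numpy as np

def peak_frequency(nu_t, dt):
    """Frequency of the strongest spectral peak of a Nu(t) series
    (mean removed), in inverse free-fall time units."""
    spec = np.abs(np.fft.rfft(nu_t - np.mean(nu_t)))
    freqs = np.fft.rfftfreq(len(nu_t), d=dt)
    return freqs[np.argmax(spec)]

# synthetic series oscillating at f = 0.1, i.e. one cycle per 10 time units
t = np.arange(0.0, 1000.0, 0.25)
nu_series = 5.0 + 0.3 * np.sin(2.0 * np.pi * 0.1 * t)
print(peak_frequency(nu_series, dt=0.25))  # -> 0.1
```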
The DRS state is not merely an equilibrium solution; more fundamentally, there
is a regime in $Ra$ where the DRS is the preferred flow state, towards which
all initial states tested in this work evolve. Starting from random
perturbations, one usually first finds a SRS, which soon goes through a series
of flow reversals and restabilizations until it evolves to the DRS state. This
process is depicted in an SRS-DRS phase space picture in figure 13. The
horizontal axis represents the SRS, and the vertical axis represents the DRS.
This process is qualitatively the same for adiabatic and linear sidewall
boundary conditions. We do not address the flow reversal process, as it is
described in more detail in Xi & Xia (2007); Sugiyama et al. (2010); Castillo-
Castellanos et al. (2016); Zhao et al. (2019), but note that the intermediate
flow fields bear striking resemblance to the proper orthogonal decomposition
modes presented in Podvin & Sergent (2015, 2017). We want to stress that the
transition time is surprisingly long. It can take up to several thousand free-
fall time units for the flow to settle in the DRS state, so it can be missed
if the observation window is too short.
Figure 13: Phase space trajectories from a single-roll
($\mathcal{S}_{A}^{1}$/$\mathcal{S}_{L}^{1}$) to a double-roll state
($\mathcal{S}_{A}^{2}$/$\mathcal{S}_{L}^{2}$) for $(a)$ adiabatic sidewall BCs
at $Ra=2\times 10^{6}$ and $(b)$ linear sidewall BCs at $Ra=1.5\times 10^{5}$.
## 4 Direct numerical simulations
In addition to the steady-state analysis, we performed a series of DNS of RBC
in a 2D square box and a 3D cylinder, both with $\Gamma=1$ and $Pr=1$, covering
$Ra$ from the onset of convection to $4.64\times 10^{10}$ and $10^{9}$,
respectively. The highest $Ra$ in 2D was simulated on a $1024^{2}$ grid with
at least $15$ grid points in the thermal boundary layer and performed for
several thousand free-fall time units, ensuring adequate spatial resolution
and temporal convergence. The largest simulation for the cylindrical setup was
performed on a $N_{r}\times N_{\varphi}\times N_{z}=128\times 256\times 320$
grid, with about $10$ points inside the thermal and viscous boundary layers
and the averaging statistics were collected for at least $600$ free-fall time
units.
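A common a-priori resolution check uses the estimate $\lambda_{\theta}/H\approx 1/(2Nu)$ for the thermal boundary layer thickness. A rough sketch (the example $Nu$ value is illustrative and not taken from the text):

```python
def points_in_thermal_bl(n_z, nu):
    """Approximate number of uniformly spaced vertical grid points inside
    one thermal boundary layer, using lambda_theta / H ~ 1 / (2 Nu)."""
    return int(n_z / (2.0 * nu))

# e.g. a vertical resolution of 1024 at an illustrative Nu of 34
print(points_in_thermal_bl(1024, 34))  # -> 15
```

Grids clustered near the plates place correspondingly more points inside the boundary layer than this uniform-grid estimate.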
### 4.1 Vertical temperature profiles
Figure 14: Mean temperature profiles for cases with $(a,d)$ adiabatic, $(b,e)$
linear and $(c,f)$ constant sidewall boundary conditions for ($a$-$c$) the 2D
box and ($d$-$f$) the cylinder.
Figure 14 shows the horizontally averaged temperature profiles
$\langle\theta\rangle_{A}$ for all conducted simulations. We first remark on the
similarity between 2D and 3D. For example, both show the feature of a weakly
stabilizing positive temperature gradient in the mid plane for small $Ra$ and
adiabatic boundary conditions (figures 14 a,d). This phenomenon is often found
in the interior of the bulk (Tilgner et al., 1993; Brown & Ahlers, 2007; Wan
et al., 2019) and is caused by the thermal signature of the LSC. As the
thermal plumes of the LSC climb up along the sidewall, they penetrate deeper
into the bulk; hot (cold) plumes thus carry their signature into the top
(bottom) part of the cell, which can result in a slightly positive temperature
gradient in the center of the bulk.
Another important detail is the apparent non-monotonicity of the profiles in
the intermediate $Ra$ range, which is most pronounced for the linear sidewall
BCs (figure 14 $b,e$) and also occurs for the 2D adiabatic BCs. The temperature
profiles initially drop sharply, then level off at about a quarter of the
cell height before dropping sharply again in the cell center. This behaviour
was also observed in Stevens et al. (2014). These profiles are reminiscent of
the DRS state (see section 3.3) and indeed caused by transitions in the flow
structures, which we analyse in section 4.3 in more detail. Finally, all
simulations for larger $Ra$ show the classical RBC profile with steep
temperature gradients at the bottom and top plates and a well-mixed
homogeneous bulk.
### 4.2 Vertical sidewall heat flux profiles
Figure 15: Comparison of the lateral sidewall heat flux $Nu_{sw}$ for cases
with ($a,c$) linear and ($b,d$) constant sidewall boundary conditions in the
($a,b$) 2D box and ($c,d$) cylinder.
Next we analyse the horizontal heat flux through the vertical sidewall,
$Nu_{sw}$, which is defined in appendix A. It is shown
in figure 15 for the linear and constant BCs, while the sidewall heat flux of
the adiabatic BC is obviously zero. The linear and constant BCs show two
opposite trends. The constant setup has the largest temperature gradients for
small $Ra$ and almost vanishing gradients for large $Ra$. This can be
understood from the temperature profiles in figure 14 $(c,f)$. As $Ra$
increases, the bulk is more efficiently mixed and the temperature distribution
becomes nearly constant, hence the temperature in the cell becomes more
similar to the sidewall temperature imposed by the BCs. On the other hand, the
linear sidewall BC corresponds exactly to the temperature profile before the
onset of convection; from then on the contrast between the fluid and the
imposed sidewall temperature grows, which is reflected in the relatively strong
sidewall temperature gradients for large $Ra$. However, all profiles are
symmetric about the center and
consequently, although heat flows in and out locally, there is no net heat
flux through the vertical sidewalls. This is supported by the fact that in our
simulations the temperature gradients at the top and bottom plates were nearly
equal, linked by the heat flux balance
$Nu_{c}-Nu_{h}+\zeta\langle Nu_{sw}\rangle_{z}=0$ (19)
with $\zeta=\frac{1}{\Gamma}$ for the 2D box and $\zeta=\frac{4}{\Gamma}$ for
the cylindrical setup (see appendix A). Lastly, we detect at least two
transitions in $Nu_{sw}$ for the linear sidewall BCs (figure 15 $a,c$). These
are consistent with the transitions in the temperature profiles discussed in
the previous section and are elucidated in more detail in the following.
### 4.3 Mode analysis
It is generally difficult to compare the dynamics of flows in different,
possibly even turbulent, states without restricting the underlying state
space. Therefore, in this section we analyze the DNS results by projecting
each snapshot onto four distinct modes and evaluate time averages and standard
deviations.
Starting with the 2D simulations, a common choice for the modes is the first
four Fourier modes, see e.g. Petschel et al. (2011) and Wagner & Shishkina
(2013), i.e.
$\displaystyle u_{x}^{m,k}$ $\displaystyle=-\sin(\pi mx/L)\cos(\pi kz/H),$
$\displaystyle u_{z}^{m,k}$ $\displaystyle=\cos(\pi mx/L)\sin(\pi kz/H).$ (20)
For the cylinder, the choice of modes is less obvious. In this work, we follow
Shishkina (2021) and use a combination of Fourier modes in $z$ and $\varphi$
direction and Bessel functions of the first kind $J_{n}$ of order $n$ in $r$
for the radial velocity component $u_{r}$ and the vertical velocity component
$u_{z}$. The first two (non-axisymmetric) modes are
$\displaystyle u_{r}^{1,k}$ $\displaystyle=J_{0}(\alpha_{0}r/R)\cos(\pi
kz/H)e^{i\varphi},$ $\displaystyle u_{z}^{1,k}$
$\displaystyle=J_{1}(\alpha_{1}r/R)\sin(\pi kz/H)e^{i\varphi},$ (21)
and the axisymmetric modes are
$\displaystyle u_{r}^{2,k}$ $\displaystyle=J_{1}(\alpha_{1}r/R)\cos(\pi
kz/H),$ $\displaystyle u_{z}^{2,k}$
$\displaystyle=-J_{0}(\alpha_{0}r/R)\sin(\pi kz/H),$ (22)
where $\alpha_{n}$ is the first positive root of the Bessel function $J_{n}$
for Dirichlet boundary conditions on the sidewall ($u_{r}$) and the $k$-th
positive root of the derivative of the Bessel function $J_{n}^{\prime}$ for
Neumann boundary conditions $(u_{z})$. The non-axisymmetric modes are complex-
valued to account for different possible azimuthal orientations. Ultimately,
however, we are only interested in the energy content and not the orientation
of the modes, so we evaluate their magnitude. We note further that a vertical
slice through the cylindrical modes is very similar to the first four 2D
Fourier modes, albeit with a slightly different dependence in the radial
direction. For this reason, we use the same notation for the cylindrical modes
as for the Fourier modes in 2D. More precisely, we have
$F_{1}\equiv(u_{r}^{1,1},u_{z}^{1,1})$,
$F_{2}^{=}\equiv(u_{r}^{1,2},u_{z}^{1,2})$,
$F_{2}^{\parallel}\equiv(u_{r}^{2,1},u_{z}^{2,1})$ and
$F_{4}\equiv(u_{r}^{2,2},u_{z}^{2,2})$. Having defined the modes, we project
the velocity field ${\bf u}$ of several snapshots onto a mode ${\bf u}^{m}$
and evaluate the energy content $\mathcal{P}$ of each mode according to
$\displaystyle\mathcal{P}\equiv\frac{\int_{V}{\bf u}{\bf
u}^{m}dV}{\int_{V}{\bf u}^{m}{\bf u}^{m}dV},$ (23)
and analyse the time average and standard deviation of $\mathcal{P}$.
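For the 2D box, the projection of eq. (23) onto the Fourier modes of eq. (20) can be sketched as follows (a uniform grid and plain Riemann sums are assumed here; a clustered DNS grid would need proper quadrature weights):

```python
import numpy as np

def fourier_mode(m, k, X, Z, L=1.0, H=1.0):
    """Velocity mode (u_x, u_z) of eq. (20)."""
    ux = -np.sin(np.pi * m * X / L) * np.cos(np.pi * k * Z / H)
    uz = np.cos(np.pi * m * X / L) * np.sin(np.pi * k * Z / H)
    return ux, uz

def project(ux, uz, m, k, X, Z, dA):
    """Energy content P of eq. (23) for the mode (m, k)."""
    mx, mz = fourier_mode(m, k, X, Z)
    num = np.sum(ux * mx + uz * mz) * dA
    den = np.sum(mx * mx + mz * mz) * dA
    return num / den

# a pure (1, 1) single-roll field has P = 1 on the (1, 1) mode
x = np.linspace(0.0, 1.0, 129)
X, Z = np.meshgrid(x, x, indexing="ij")
ux, uz = fourier_mode(1, 1, X, Z)
print(project(ux, uz, 1, 1, X, Z, dA=(x[1] - x[0]) ** 2))  # -> 1.0
```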
Figure 16: Energy and standard deviation of the projection of flow field
snapshots onto the modes defined by eq. (20) for $(a)$ adiabatic, $(b)$ linear
and $(c)$ constant sidewall temperature boundary conditions for the 2D box.
Below: Streamlines, coloured by vertical velocity, of the modes
$\mathcal{F}_{1}$, $\mathcal{F}_{2}^{=}$, $\mathcal{F}_{2}^{\parallel}$ and
$\mathcal{F}_{4}$.
The energy of the individual Fourier modes for the 2D box is shown in figure
16. Above the onset of convection, only the first Fourier mode (single-roll)
contains a considerable amount of energy. Because of its similarity to the
SRS, this mode will be referred to as the SRS-mode. Following the stable SRS,
we find for adiabatic and linear sidewall BCs a flow regime that changes from
the SRS to a roll-upon-roll second Fourier mode ($\mathcal{F}_{2}^{=}$) state.
This state embodies the DRS, which we discussed in section 3.3. The
$\mathcal{F}_{2}^{=}$ regime, or DRS regime, is found in the range
$10^{6}<Ra\leq 10^{7}$ for an adiabatic sidewall and $10^{5}\leq Ra\leq 10^{7}$
for a linear sidewall BC. In contrast, the DRS
regime is absent for a constant sidewall BC. As a reminder, this state could
not be found as an equilibrium solution for the constant sidewall boundary
condition either, which is in line with its absence in DNS. The next regime
can be regarded as a weakly chaotic SRS regime, with the SRS mode again
dominating but being transient and a substantial amount of energy is contained
in the $F_{4}$ (4-roll) mode, indicative of dynamically active corner rolls.
Finally, above $Ra\approx 10^{9}$ there exists another surprisingly sharp
transition. This regime is different from the others as now all Fourier modes
contain a significant amount of energy and exhibit strong fluctuations. An
inspection of the flow fields revealed an abundance of small-scale plumes and
strong turbulent dynamics. Most remarkably, in this regime all three sidewall
BCs show a very similar mode signature, i.e., they become increasingly alike,
or in other words, RBC becomes insensitive to sidewall BCs for large $Ra$.
Figure 17: Energy and standard deviation of the projection of flow field
snapshots onto the modes defined by eqs. (21) and (22) for $(a)$ adiabatic,
$(b)$ linear and $(c)$ constant sidewall temperature boundary conditions for
the cylinder. Below: Streamlines, coloured by vertical velocity, of the modes
$\mathcal{F}_{1}$, $\mathcal{F}_{2}^{=}$, $\mathcal{F}_{2}^{\parallel}$ and
$\mathcal{F}_{4}$.
Moving on to the mode analysis for the cylindrical setup, shown in figure 17,
we see a very similar picture as for the 2D box with some noticeable
differences. First, for the constant BC setup we note that the onset of
convection is significantly later than in the 2D case, while the other two
setups show a closer similarity with the 2D case. The cylindrical setup might
be more sensitive to the sidewall BCs in general, since the ratio of
sidewall area to cell volume is larger than in the 2D box and therefore
the sidewall temperature likely has a larger impact on the interior.
Another difference between the cylindrical and 2D box setups is that the
adiabatic setup does not show a transition to a regime with a vanishing SRS;
rather, the SRS mode is the most dominant mode over all $Ra$. In contrast, the
linear sidewall BC possesses a striking similarity to the observations in 2D.
Above $Ra\approx 10^{5}$ it undergoes a transition from a SRS-dominated
regime to a $\mathcal{F}_{4}$-dominated regime. The $\mathcal{F}_{4}$-mode is
axisymmetric and has a double-donut, or double-toroidal, shape. Similar flow
states were found
in a bifurcation analysis by Puigjaner et al. (2008) in a cubic domain with
the same lateral boundary conditions. Here, its existence range extends over
$10^{5}\leq Ra\leq 10^{8}$. The double-donut state can be considered as the
counterpart of the DRS state in 2D RBC, although we see that it outlasts its
2D analog by about a decade in $Ra$. At the highest $Ra$ available, the SRS
again dominates for all BC configurations considered, although the amount of
energy and the strength of the fluctuations are somewhat different for the
different BCs. At this point, we can only conjecture from their trend and our
findings in 2D that their deviations will decrease for even larger $Ra$ in the
high-turbulence/high-$Ra$ regime.
We conclude that there exist at least five different flow regimes: conduction
state, stable SRS, DRS (or double-donut state in the cylindrical setup),
weakly chaotic SRS and highly turbulent state. We find that the constant
(isothermal) sidewall BC generally enhances the SRS dominance, while the linear
sidewall BC suppresses the SRS in the mid $Ra$ regime and induces the DRS or
double-donut state. Moreover, although we find strong differences in the flow
dynamics in the small to medium $Ra$ range, these differences eventually
disappear and the system becomes increasingly insensitive to the type of
sidewall BC at high $Ra$.
### 4.4 Heat transport
Lastly, the global heat transport is discussed. The results are shown in
figure 18. For the 2D setup, we include the results from the steady-state
analysis from the first part of this study. Here, we find a very good
agreement between $Nu$ of the DNS and steady-states for the SRS mode as well
as for the DRS state for adiabatic sidewalls. However, the DRS state for
linear sidewalls shows slightly larger $Nu$ in the DNS. This is because the
DRS state is an unstable equilibrium solution that can oscillate strongly,
which apparently enhances its heat transport.
We find that $Nu$ degrades strongly when switching from a SRS- to a DRS-
dominated regime at $Ra\approx 10^{5}$ (linear) and $Ra\approx 10^{6}$
(adiabatic) for the 2D domains (figure 18$a$). In contrast, this does not
occur for the cylindrical setup as it transitions from the SRS to the double-
toroidal state (figure 18$b$). In fact, this flow transition is hardly
observed in the evolution of heat transport.
In the high $Ra$ regime, the heat transport in the cylindrical setup is
found to be more efficient than in the 2D setup, with about $30\%$ larger
$Nu$. This agrees well with the observations of van der Poel et al. (2013).
Both setups show $Nu\sim Ra^{0.285}$ scaling at the largest studied $Ra$. We
also observe that $Nu$ becomes independent of the choice of sidewall BCs for
high $Ra$. This agrees with Stevens et al. (2014), at least when the sidewall
temperature is equal to the arithmetic mean of bottom and top plate
temperature. If this condition is violated, Stevens et al. (2014) have shown
that $Nu$ differences will persist even for high $Ra$. This indicates that the
effects of an imperfectly insulated sidewall tend to be small in experiments
when the mean temperature of the sidewall is well controlled.
Figure 18: Nusselt number $Nu$ for cases with different sidewall boundary
conditions in $(a)$ 2D simulations and $(b)$ 3D simulations. For comparison,
open symbols show heat transport in a periodic 2D domain with $\Gamma=2$ by
Johnston & Doering (2009) in $(a)$ and for a cylindrical setup with adiabatic
sidewalls, $\Gamma=1$ and $Pr=0.7$ by Emran & Schumacher (2012) in $(b)$.
Dashed lines in $(a)$ show the results from the steady-state analysis.
### 4.5 Prandtl number dependence
The previous analysis focused on fluids with $Pr=1$, but thermal convection is
relevant in nature in a wide variety of fluids and many experiments are
conducted in water ($Pr\approx 4$) or in liquid metals ($Pr\ll 1$) (Zwirner et
al., 2020). Therefore, we now explore the $Pr$ parameter space with $Pr=0.1,1$
and $10$ for $Ra$ up to $10^{9}$ in the 2D RBC setup.
The Nusselt number is shown in figure 19. We observe a collapse of all data
points for all studied boundary conditions at large $Ra$. The collapse for
large $Pr$ is achieved earlier, at $Ra\gtrapprox 10^{7}$, whereas the
differences between $Pr=1.0$ and $Pr=0.1$ are small; both indicate heat
transport invariance for $Ra\gtrapprox 10^{8}$. This suggests that the size of
the thermal boundary layer $\lambda_{\theta}$ plays a crucial role. For small
$Pr$ we expect larger thermal boundary layers, which extend further into the
bulk and thus have a stronger influence on the system. As $\lambda_{\theta}$
gets smaller, the coupling between the sidewall and bulk disappears, and so do
the differences in heat transport. Although our results show a small
$Pr$ dependence, the main message remains: experiments at very high $Ra$ are
not affected by different thermal sidewall BCs, regardless of whether they are
performed in a low $Pr$ or high $Pr$ medium.
Figure 19: Nusselt number $Nu$ for $(a)$ $Pr=0.1$, $(b)$ $Pr=1$ and $(c)$
$Pr=10$ in 2D RBC with different thermal sidewall BCs.
## 5 Conclusion
We have investigated the influence of three different lateral thermal boundary
conditions, i.e., adiabatic, linearly distributed in the vertical direction
and constant (isothermal) ones, on heat transport and flow states in two- and
three-dimensional Rayleigh-Bénard convection (RBC) using direct numerical
simulation and steady-state analysis. The steady-state analysis is based on an
adjoint-descent method (Farazmand, 2016). We found a superior chance of
convergence in the laminar and weakly chaotic regime compared to Newton’s
method, but did not achieve convergence at larger $Ra$. Further studies on the
proper boundary conditions, the choice of the energy norm and/or a combination
with Newton’s
method are needed to further explore the potential of the method in the study
of convective flows.
Investigation of the stability of the single-roll state (SRS) revealed that a
linear temperature distribution at the sidewall leads to a premature collapse
of the SRS compared to adiabatic BCs. In contrast, the stability of the SRS
was enhanced by the introduction of constant temperature sidewall BCs. We find
that in 2D and for linear and adiabatic sidewall BCs, the collapse of the SRS
is followed by a regime in which the preferred flow state is a double-roll
state (DRS), where one roll is located on top of the other. The DRS can be
found for adiabatic and linear BCs in the regime $10^{6}<Ra\leq 10^{7}$ and
$10^{5}\leq Ra\leq 10^{7}$, respectively, and is associated with suppressed
heat transport. The DRS can be stable, it can oscillate periodically with a
frequency of $\approx 0.1$ (in inverse free-fall time units), or it can be
chaotic for larger $Ra$. In 3D cylindrical simulations, a similar flow transition occurs.
Imposing linear sidewall BCs leads to the emergence of a double-toroidal
structure that prevails over a wide range of $Ra$, i.e., $10^{5}\leq Ra\leq
10^{8}$. Unlike in 2D, the double-toroidal structure does not lead to a heat
transport recession.
We confirmed that the collapse of the SRS in 2D RBC is strongly related to the
growth of the corner rolls. Examining the setup with adiabatic sidewalls, there
seem to be two regimes with distinct corner roll growth rates. For small $Ra$,
the vorticity balance is dominated purely by diffusion and buoyancy in the
form of lateral temperature gradients. In this regime, the size of the corner
roll $\delta_{CR}$ grows as $\delta_{CR}\sim Ra^{0.21}$, which is consistent
with dimensional analysis. For larger $Ra$, the convective flux starts to be
of significance and the growth of the corner roll accelerates to
$\delta_{CR}\sim Ra^{0.49}$ before the SRS finally collapses and slowly
transforms to the DRS state, undergoing several cycles of flow reversals and
restabilization.
Analysis of the global heat transport and the flow dynamics has shown that for
$Ra\leq 10^{8}$ there are significant differences between the various sidewall
BCs. However, for larger $Ra$ and for various $Pr$ these differences disappear
and the different sidewall BCs become similar both globally (in terms of their
integral quantities) and dynamically. In this context, Verzicco & Sreenivasan
(2008) and Johnston & Doering (2009) showed that regardless of whether a fixed
temperature or a fixed heat flux is imposed at the bottom/top plates, flows at
high $Ra$ show similar heat transport. Thus, together with our results, we can conclude that
the effects of different boundary conditions, at the sidewalls or at the
top/bottom plates, are limited for experiments with high $Ra$. However, there
are exceptions. For example, when the sidewall temperature differs from the
mean fluid temperature, larger $Nu$ differences can occur (Stevens et al.,
2014). Thus, in experiments at high Rayleigh numbers, it appears to be more
important to control the mean sidewall temperature than to ensure perfectly
insulating conditions. However, close to the onset of convection, the sidewall
thermal boundary conditions significantly influence the flow organization and
heat transport in the system.
## Acknowledgement
This work was supported by the Deutsche Forschungsgemeinschaft, grants
Sh405/10, Sh405/8, Sh405/7 (SPP 1881 Turbulent “Superstructures”). The authors
also acknowledge Leibniz Supercomputing Centre (LRZ) for providing computing
time.
## Declaration of Interests
The authors report no conflict of interest.
## Appendix A Heat flux
The temperature equation for an incompressible fluid in dimensional units is
$\displaystyle{\partial}{\theta}/{\partial}t+{\bm{\nabla}}\cdot{(\bf
u\theta)}$ $\displaystyle=\kappa{\bm{\nabla}}^{2}{\theta}.$ (24)
Averaging equation (24) over time yields the following relations for
the heat flux $\mathbf{F}$:
$\displaystyle{\bm{\nabla}}\cdot\mathbf{F}=0,\quad\mathbf{F}\equiv{\bf u}\theta-\kappa{\bm{\nabla}}{\theta}.$ (25)
Using the divergence theorem we obtain
$\displaystyle\int_{S}\mathbf{F}\cdot\mathbf{n}dS=0,$ (26)
which states that the net heat flux through the walls must be zero. Expressing
the heat fluxes by the Nusselt number and decomposing the contribution of the
surface integral into those for a lower plate heat flux $Nu_{h}$, for an upper
plate heat flux $Nu_{c}$ and for a side wall heat flux $Nu_{sw}$, we write
$\displaystyle Nu_{c}-Nu_{h}+\zeta\langle Nu_{sw}\rangle_{z}=0,$ (27)
where $\langle\cdot\rangle_{z}$ denotes a vertical mean and $\zeta$ a
geometric factor defining the ratio of the sidewall surface to the bottom/top
plate surface, which is $\zeta=1/\Gamma$ for the 2D box and $\zeta=4/\Gamma$
for the cylindrical setup. Note that the lateral heat flux $Nu_{sw}$ is
$z$-dependent, as shown in section 4.2. For the 2D box it is
$\displaystyle Nu_{sw}=\frac{H}{\Delta}\left[\frac{\partial\theta}{\partial
x}\Big|_{x=L}-\frac{\partial\theta}{\partial x}\Big|_{x=0}\right]$ (28)
and for the 3D cylinder setup it is
$\displaystyle
Nu_{sw}=\frac{H}{2\pi\Delta}\int_{0}^{2\pi}\frac{\partial\theta}{\partial
r}\Big|_{r=R}\,d\varphi.$ (29)
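As a minimal numerical illustration (our own construction, not from the simulations of this paper): for $\mathbf{u}=0$, any harmonic temperature field yields a divergence-free conductive flux, so the wall fluxes must cancel as in (26). The sketch below checks this and recasts the result in the form of (27); the sign conventions for $Nu_{h}$, $Nu_{c}$ and $Nu_{sw}$ are the ones that make the balance close and may differ from those in the main text by overall signs.

```python
import numpy as np

# Conduction test case: u = 0 and the harmonic field theta = sin(kx) sinh(kz)
# give F = -kappa * grad(theta) with div F = 0, so the net wall flux (26)
# must vanish. Geometry and Delta are illustrative.
L, H, kappa, Delta = 2.0, 1.0, 1.0, 1.0
kw = np.pi / L
x = np.linspace(0.0, L, 2001)
z = np.linspace(0.0, H, 2001)

def integ(f, s):                              # trapezoidal quadrature
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s))

dth_dz = lambda xx, zz: kw * np.sin(kw * xx) * np.cosh(kw * zz)
dth_dx = lambda xx, zz: kw * np.cos(kw * xx) * np.sinh(kw * zz)

# Outward conductive fluxes through the four walls (2D, per unit depth).
net = (integ(+kappa * dth_dz(x, 0.0), x)      # bottom, n = -e_z
       + integ(-kappa * dth_dz(x, H), x)      # top,    n = +e_z
       + integ(+kappa * dth_dx(0.0, z), z)    # left,   n = -e_x
       + integ(-kappa * dth_dx(L, z), z))     # right,  n = +e_x
print(f"net wall flux: {net:.2e}")            # ~0 up to quadrature error

# Recast as the Nusselt-number balance (27) with zeta = 1/Gamma = H/L.
Nu_h = -(H / Delta) * integ(dth_dz(x, 0.0), x) / L
Nu_c = -(H / Delta) * integ(dth_dz(x, H), x) / L
Nu_sw_mean = (H / Delta) * integ(dth_dx(0.0, z) - dth_dx(L, z), z) / H
zeta = H / L
print(f"balance residual: {Nu_c - Nu_h + zeta * Nu_sw_mean:.2e}")
```

Both printed quantities vanish to quadrature accuracy, which is the content of (26) and (27) for this conduction solution.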
## Appendix B Thermal dissipation rate
Multiplying equation (24) by $\theta$ yields
$\displaystyle\frac{1}{2}{\partial}_{t}\theta^{2}+\frac{1}{2}{\bm{\nabla}}\cdot{({\bf
u}\theta^{2})}$ $\displaystyle=\kappa\theta{\bm{\nabla}}^{2}{\theta}.$ (30)
Taking a time and volume average of (30), the time derivative and
the convective part (for impenetrable walls) vanish; using the relation
$({\bm{\nabla}}\theta)^{2}={\bm{\nabla}}\cdot(\theta{\bm{\nabla}}\theta)-\theta{\bm{\nabla}}^{2}\theta$
we obtain
$\displaystyle\kappa\int_{V}\overline{({\bm{\nabla}}\theta)^{2}}dV=\kappa\int_{V}{\bm{\nabla}}\cdot(\overline{\theta{\bm{\nabla}}\theta})dV,$
(31)
where an overbar denotes a time average and
$\varepsilon_{\theta}=\kappa({\bm{\nabla}}\theta)^{2}$ is known as the thermal
dissipation rate. Using the divergence theorem once more, we find the relation
between the total thermal dissipation rate and the wall heat fluxes
$\displaystyle\int_{V}\overline{\varepsilon_{\theta}}dV=\kappa\int_{S}(\overline{\theta{\bm{\nabla}}\theta})\cdot\mathbf{n}dS.$
(32)
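The step leading to this relation rests on the pointwise identity $({\bm{\nabla}}\theta)^{2}={\bm{\nabla}}\cdot(\theta{\bm{\nabla}}\theta)-\theta{\bm{\nabla}}^{2}\theta$, which can be confirmed symbolically; a quick 2D check with sympy (our own addition):

```python
import sympy as sp

x, z = sp.symbols('x z')
theta = sp.Function('theta')(x, z)

# Left-hand side: (grad theta)^2.
grad_sq = sp.diff(theta, x)**2 + sp.diff(theta, z)**2
# Right-hand side pieces: div(theta grad theta) and theta * Laplacian(theta).
div_theta_grad = (sp.diff(theta * sp.diff(theta, x), x)
                  + sp.diff(theta * sp.diff(theta, z), z))
laplacian = sp.diff(theta, x, 2) + sp.diff(theta, z, 2)

# The difference simplifies to zero for an arbitrary smooth theta(x, z).
print(sp.simplify(grad_sq - (div_theta_grad - theta * laplacian)))  # -> 0
```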
Writing equation (32) explicitly in 2D Cartesian coordinates, we get
$\displaystyle\langle\overline{\varepsilon_{\theta}}\rangle_{V}$
$\displaystyle=\frac{\kappa}{V}\left(L\left[\langle\overline{\theta\partial_{z}\theta}\rangle_{x}\right]_{z=0}^{z=H}+H\left[\langle\overline{\theta\partial_{x}\theta}\rangle_{z}\right]_{x=0}^{x=L}\right),$
(33)
with the horizontal and vertical average $\langle\cdot\rangle_{x}$ and
$\langle\cdot\rangle_{z}$, respectively. In RBC, the temperatures of the upper
and lower plates are spatially homogeneous, i.e. $\theta_{h}=\frac{\Delta}{2}$
and $\theta_{c}=-\frac{\Delta}{2}$, and assuming that the vertical wall fluxes
are equal (which is not necessarily the case for non-adiabatic sidewalls, but
has been shown to be true in all our simulations), i.e.,
$\partial_{z}\theta_{c}=\partial_{z}\theta_{h}$, then
$\displaystyle\langle\overline{\varepsilon_{\theta}}\rangle_{V}$
$\displaystyle=\frac{\kappa}{V}\left(-L\Delta\langle\partial_{z}\theta_{h}\rangle_{x}+H\left[\langle\overline{\theta\partial_{x}\theta}\rangle_{z}\right]_{x=0}^{x=L}\right),$
$\displaystyle\langle\overline{\varepsilon_{\theta}}\rangle_{V}$
$\displaystyle=\frac{\kappa\Delta^{2}}{H^{2}}Nu+\frac{\kappa}{L}\left[\langle\overline{\theta\partial_{x}\theta}\rangle_{z}\right]_{x=0}^{x=L}.$
(34)
This results in
$\langle\overline{\varepsilon_{\theta}}\rangle_{V}=\frac{\kappa\Delta^{2}}{H^{2}}Nu$
for adiabatic sidewalls or for zero temperature sidewalls, but adds an
additional term to the $\varepsilon_{\theta}-Nu$ relation otherwise. A
comparison of $Nu$ and $\varepsilon_{\theta}$ is shown in figure 20. The
apparent discontinuity of $\varepsilon_{\theta}$ for the linear sidewall
temperature reflects the reordering of the flow structures explained in the
main part of this study; surprisingly, $Nu$ changes rather smoothly in
this regime.
Figure 20: Comparison of $Nu$ (closed symbols) and thermal dissipation rate
$\varepsilon_{\theta}$ (open symbols) in the 2D box. The connection between
thermal dissipation and $Nu$ is given in equation (34).
## Appendix C Adjoint descent
### C.1 Derivation
Following Farazmand (2016), we define the right-hand side of the Navier-Stokes
equations as the vector $\mathbf{F_{0}}$, i.e.
$\mathbf{F_{0}}({\bf q})=\begin{pmatrix}-{\bf u}\cdot{\bm{\nabla}}{\bf
u}-{\bm{\nabla}}p+\nu{\bm{\nabla}}^{2}{\bf
u}+\mathbf{\mathbf{e}}_{z}\theta\\\\[3.0pt] -{\bf
u}\cdot{\bm{\nabla}}\theta+\kappa{\bm{\nabla}}^{2}\theta\\\\[3.0pt]
{\bm{\nabla}}\cdot{\bf u}\end{pmatrix}.$ (35)
The functional Gateaux derivative $\delta F({\bf q},{\bf
q}^{\prime})\coloneqq\lim\limits_{\varepsilon\to 0}\frac{F({\bf
q}+\varepsilon{\bf q}^{\prime})-F({\bf q})}{\varepsilon}$ of equation (35) is
$\delta F({\bf q},{\bf q}^{\prime})=\begin{pmatrix}-{\bf
u}^{\prime}\cdot{\bm{\nabla}}{\bf u}-{\bf u}\cdot{\bm{\nabla}}{\bf
u}^{\prime}-{\bm{\nabla}}p^{\prime}+\nu{\bm{\nabla}}^{2}{\bf
u}^{\prime}+\mathbf{\mathbf{e}}_{z}\theta^{\prime}\\\\[3.0pt] -{\bf
u}^{\prime}\cdot{\bm{\nabla}}\theta-{\bf
u}\cdot{\bm{\nabla}}\theta^{\prime}+\kappa{\bm{\nabla}}^{2}\theta^{\prime}\\\\[3.0pt]
{\bm{\nabla}}\cdot{\bf u}^{\prime}\end{pmatrix}.$ (36)
We want to find the adjoint operator $\delta F^{\dagger}$ of equation (36)
with respect to the inner-product
$\langle{\bf q},{\bf
q}^{\prime}\rangle_{\mathcal{A}}=\int_{\mathcal{D}}\left({\bf
q}\cdot\mathcal{A}{\bf q}^{\prime}\right)\text{d}\bf x.$ (37)
The adjoint $\delta F^{\dagger}$ of equation (36) with respect to the inner product
(37), with $\tilde{{\bf q}}\equiv\mathcal{A}{\bf q}$, is derived as follows
$\displaystyle\langle\delta F({\bf q},{\bf q}^{\prime}),\tilde{{\bf
q}}^{\prime\prime}\rangle_{\mathcal{A}}$
$\displaystyle=\int_{V}\begin{pmatrix}-{\bf u}^{\prime}\cdot{\bm{\nabla}}{\bf
u}-{\bf u}\cdot{\bm{\nabla}}{\bf
u}^{\prime}-{\bm{\nabla}}p^{\prime}+\nu{\bm{\nabla}}^{2}{\bf
u}^{\prime}+\mathbf{\mathbf{e}}_{z}\theta^{\prime}\\\\[3.0pt] -{\bf
u}^{\prime}\cdot{\bm{\nabla}}\theta-{\bf
u}\cdot{\bm{\nabla}}\theta^{\prime}+\kappa{\bm{\nabla}}^{2}\theta^{\prime}\\\\[3.0pt]
{\bm{\nabla}}\cdot{\bf u}^{\prime}\end{pmatrix}\begin{pmatrix}\tilde{{\bf
u}}^{\prime\prime}\\\\[3.0pt] \tilde{\theta}^{\prime\prime}\\\\[3.0pt]
\tilde{p}^{\prime\prime}\end{pmatrix}\text{d}\bf x$
$\displaystyle=\int_{V}\begin{pmatrix}\left({\bm{\nabla}}\tilde{{\bf
u}}^{\prime\prime}+{\bm{\nabla}}\tilde{{\bf
u}}^{\prime\prime\text{T}}\right){\bf
u}+\theta{\bm{\nabla}}\tilde{\theta}^{\prime\prime}-{\bm{\nabla}}\tilde{p}^{\prime\prime}+\nu{\bm{\nabla}}^{2}\tilde{{\bf
u}}^{\prime\prime}\\\\[3.0pt] {\bf
u}\cdot{\bm{\nabla}}\tilde{\theta}^{\prime\prime}+\kappa{\bm{\nabla}}^{2}\tilde{\theta}^{\prime\prime}+\mathbf{\mathbf{e}}_{z}\cdot\tilde{{\bf
u}}^{\prime\prime}\\\\[3.0pt] {\bm{\nabla}}\cdot\tilde{{\bf
u}}^{\prime\prime}\end{pmatrix}\begin{pmatrix}{\bf u}^{\prime}\\\\[3.0pt]
\theta^{\prime}\\\\[3.0pt] p^{\prime}\end{pmatrix}\text{d}\bf x$
$\displaystyle=\langle{\bf q}^{\prime},\delta F^{\dagger}({\bf q},\tilde{{\bf
q}}^{\prime\prime})\rangle_{\mathcal{A}},$ (38)
where the second line follows from integration by parts. Here we have
refrained from writing the boundary terms that follow from the integration by
parts step, since they can be eliminated by choosing the boundary conditions
on $\tilde{{\bf q}}^{\prime\prime}$ as discussed in section 2.3.
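Derivatives of this type are conveniently validated against finite differences. The toy sketch below (our own, not the solver used in this work) does so for a 1D periodic operator with the same quadratic nonlinearity as (35), $F(u)=-u\,\partial_{x}u+\nu\,\partial_{x}^{2}u$; since $F$ is quadratic, the finite-difference error shrinks linearly in $\varepsilon$.

```python
import numpy as np

# 1D periodic toy operator with the quadratic structure of (35):
# F(u) = -u u_x + nu u_xx, whose Gateaux derivative, mirroring (36), is
# dF(u, v) = -v u_x - u v_x + nu v_xx. Grid and nu are illustrative.
N, nu = 256, 0.05
xg = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
ik = 1j * np.fft.fftfreq(N, d=1.0 / N)           # spectral wavenumbers

def dx(f):
    return np.real(np.fft.ifft(ik * np.fft.fft(f)))

def F(u):
    return -u * dx(u) + nu * dx(dx(u))

def dF(u, v):
    return -v * dx(u) - u * dx(v) + nu * dx(dx(v))

u = np.sin(xg) + 0.3 * np.cos(3 * xg)
v = np.cos(2 * xg)

errs = []
for eps in (1e-3, 1e-4, 1e-5):
    fd = (F(u + eps * v) - F(u)) / eps           # finite-difference quotient
    errs.append(np.max(np.abs(fd - dF(u, v))))
    print(f"eps={eps:.0e}  max error={errs[-1]:.2e}")
```

For this quadratic $F$ the mismatch is exactly $\varepsilon\,|v\,\partial_{x}v|$, so the printed errors decrease by a factor of ten with each step in $\varepsilon$.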
### C.2 Choice of the norm
As mentioned in Farazmand (2016), the most obvious choice for the norm is the
$\text{L}^{2}$ norm, i.e. $\mathcal{A}=I$, where $I$ is the identity operator.
However, this norm is rather stiff and leads to restrictive small time steps.
As an alternative, Farazmand (2016) uses a norm related to the Laplacian,
which effectively smooths the $\tilde{{\bf q}}^{\prime\prime}$ field. Here we
use a similar norm based on the inverse Laplacian, i.e.
$\mathcal{A}=(I-\alpha{\bm{\nabla}}^{2})^{-1}$,
$\langle{\bf q},{\bf
q}^{\prime}\rangle_{{\bm{\nabla}}^{-2}}=\int_{V}\left({\bf
q}\cdot\mathcal{A}{\bf q}^{\prime}\right)\text{d}\bf x=\int_{V}\left({\bf
q}\cdot\tilde{{\bf q}}^{\prime}\right)\text{d}\bf x$ (39)
where $\alpha$ is a positive constant. Then, $\tilde{{\bf q}}^{\prime}$ is obtained
as the solution of the Helmholtz equation
$(I-\alpha{\bm{\nabla}}^{2})\tilde{{\bf q}}^{\prime}={\bf q}^{\prime},$ (40)
which makes the smoothing property of this norm apparent. In practice, we choose
$\alpha=1$. The choice of the operator for the energy norm is somewhat
arbitrary, but this particular choice leads to improved numerical stability
properties. Note that the operator $\mathcal{A}$ should be positive definite
and should commute with the divergence operator, i.e.
$\mathcal{A}({\bm{\nabla}}\cdot{\bf u})={\bm{\nabla}}\cdot\mathcal{A}{\bf u}$.
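The action of $\mathcal{A}=(I-\alpha{\bm{\nabla}}^{2})^{-1}$ is transparent in Fourier space, where each mode is scaled by $1/(1+\alpha k^{2})$: smooth, large-scale content passes nearly unchanged while grid-scale content is strongly damped. A one-dimensional periodic sketch (our own; grid and fields are illustrative):

```python
import numpy as np

# Apply A = (I - alpha * d^2/dx^2)^(-1) spectrally on a periodic grid:
# mode k of q is scaled by 1 / (1 + alpha * k^2), i.e. solving the
# Helmholtz problem (40) damps high wavenumbers of the descent direction.
N, alpha = 512, 1.0
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
k = np.fft.fftfreq(N, d=1.0 / N)               # integer wavenumbers

def helmholtz_inverse(q):
    return np.real(np.fft.ifft(np.fft.fft(q) / (1.0 + alpha * k**2)))

rng = np.random.default_rng(0)
q = np.sin(x) + 0.5 * rng.standard_normal(N)   # smooth mode + grid-scale noise
q_s = helmholtz_inverse(q)

# The k = 1 mode is merely halved (factor 1/(1+alpha)), while the broadband
# noise, spread over high wavenumbers, is suppressed by orders of magnitude.
rms_before = np.sqrt(np.mean((q - np.sin(x))**2))
rms_after = np.sqrt(np.mean((q_s - 0.5 * np.sin(x))**2))
print(f"noise rms: {rms_before:.3f} -> {rms_after:.3f}")
```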
## References
* Ahlers (2000) Ahlers, G. 2000 Effect of sidewall conductance on heat-transport measurements for turbulent Rayleigh–Bénard convection. Phys. Rev. E 63, 015303.
* Ahlers et al. (2009a) Ahlers, G., Funfschilling, D. & Bodenschatz, E. 2009a Transitions in heat transport by turbulent convection at Rayleigh numbers up to $10^{15}$. New J. Phys. 11, 123001.
* Ahlers et al. (2009b) Ahlers, G., Grossmann, S. & Lohse, D. 2009b Heat transfer and large scale dynamics in turbulent Rayleigh–Bénard convection. Rev. Mod. Phys. 81, 503–537.
* Ahlers et al. (2012) Ahlers, G., He, X., Funfschilling, D. & Bodenschatz, E. 2012 Heat transport by turbulent Rayleigh–Bénard convection for $Pr\sim 0.8$ and $3\times 10^{12}\lesssim~{}Ra\lesssim 10^{15}$: Aspect ratio $\Gamma=0.50$. New J. Phys. 14, 103012.
* Bodenschatz et al. (2000) Bodenschatz, E., Pesch, W. & Ahlers, G. 2000 Recent developments in Rayleigh–Bénard convection. Annu. Rev. Fluid Mech. 32, 709–778.
* Brown & Ahlers (2007) Brown, E. & Ahlers, G. 2007 Temperature gradients, and search for non-Boussinesq effects, in the interior of turbulent Rayleigh–Bénard convection. Eur. Phys. Lett. 80, 14001\.
* de Bruyn et al. (1996) de Bruyn, J. R., Bodenschatz, E., Morris, S. W., Trainoff, S. P., Hu, Y., Cannell, D. S. & Ahlers, G. 1996 Apparatus for the study of Rayleigh–Bénard convection in gases under pressure. Rev. Sci. Instrum. 67 (6), 2043–2067.
* Buell & Catton (1983) Buell, J. C. & Catton, I. 1983 The effect of wall conduction on the stability of a fluid in a right circular cylinder heated from below. J. Heat Transfer 105, 255–260.
* Busse (1967) Busse, F. H. 1967 The stability of finite amplitude cellular convection and its relation to an extremum principle. J. Fluid Mech. 30, 625–649.
* Busse (1978) Busse, F. H. 1978 Non-linear properties of thermal convection. Rep. Prog. Phys. 41, 1929–1967.
* Castillo-Castellanos et al. (2016) Castillo-Castellanos, A., Sergent, A. & Rossi, M. 2016 Reversal cycle in square Rayleigh–Bénard cells in turbulent regime. J. Fluid Mech. 808, 614–640.
* Chandrasekhar (1961) Chandrasekhar, S. 1961 Hydrodynamic and hydromagnetic stability. Clarendon.
* Chavanne et al. (1997) Chavanne, X., Chilla, F., Castaing, B., Hebral, B., Chabaud, B. & Chaussy, J. 1997 Observation of the ultimate regime in Rayleigh–Bénard convection. Phys. Rev. Lett. 79, 3648–3651.
* Chavanne et al. (2001) Chavanne, X., Chillà, F., Chabaud, B., Castaing, B. & Hébral, B. 2001 Turbulent Rayleigh–Bénard convection in gaseous and liquid He. Phys. Fluids 13, 1300–1320.
* Cross & Hohenberg (1993) Cross, M. C. & Hohenberg, P. C. 1993 Pattern formation outside of equilibrium. Rev. Mod. Phys. 65, 851–1112.
* Emran & Schumacher (2012) Emran, M. S. & Schumacher, J. 2012 Conditional statistics of thermal dissipation rate in turbulent Rayleigh–Bénard convection. Eur. Phys. J. E 108, 35–42.
* Farazmand (2016) Farazmand, M. 2016 An adjoint-based approach for finding invariant solutions of Navier–Stokes equations. J. Fluid Mech. 795, 278–312.
* Goldhirsch et al. (1989) Goldhirsch, I., Pelz, R. B. & Orszag, S. A. 1989 Numerical simulation of thermal convection in a two-dimensional finite box. J. Fluid Mech. 199, 1–28.
* Grossmann & Lohse (2000) Grossmann, S. & Lohse, D. 2000 Scaling in thermal convection: A unifying theory. J. Fluid Mech. 407, 27–56.
* Grossmann & Lohse (2001) Grossmann, S. & Lohse, D. 2001 Thermal convection for large Prandtl numbers. Phys. Rev. Lett. 86, 3316–3319.
* Grossmann & Lohse (2004) Grossmann, S. & Lohse, D. 2004 Fluctuations in turbulent Rayleigh–Bénard convection: The role of plumes. Phys. Fluids 16, 4462–4472.
* Grossmann & Lohse (2011) Grossmann, S. & Lohse, D. 2011 Multiple scaling in the ultimate regime of thermal convection. Phys. Fluids 23, 045108\.
* He et al. (2012) He, X., Funfschilling, D., Nobach, H., Bodenschatz, E. & Ahlers, G. 2012 Transition to the ultimate state of turbulent Rayleigh–Bénard convection. Phys. Rev. Lett. 108, 024502.
* Hébert et al. (2010) Hébert, F., Hufschmid, R., Scheel, J. & Ahlers, G. 2010 Onset of Rayleigh–Bénard convection in cylindrical containers. Phys. Rev. E 81, 046318.
* Hopf (1948) Hopf, E. 1948 A mathematical example displaying features of turbulence. Commun. Appl. Maths 1, 303–322.
* Hu et al. (1993) Hu, Y., Ecke, R. & Ahlers, G. 1993 Convection near threshold for Prandtl numbers near 1. Phys. Rev. E 48, 4399–4413.
* Johnston & Doering (2009) Johnston, H. & Doering, C. R. 2009 Comparison of Turbulent Thermal Convection between Conditions of Constant Temperature and Constant Flux. Phys. Rev. Lett. 102, 064501.
* Julien & Watson (2009) Julien, K. & Watson, M. 2009 Efficient multi-dimensional solution of PDEs using Chebyshev spectral methods. J. Comput. Phys. 228, 1480–1503.
* Kooij et al. (2018) Kooij, G. L., Botchev, M. A., Frederix, E. M.A., Geurts, B. J., Horn, S., Lohse, D., van der Poel, E. P., Shishkina, O., Stevens, R. J. A. M. & Verzicco, R. 2018 Comparison of computational codes for direct numerical simulations of turbulent Rayleigh–Bénard convection. Comp. Fluids 166, 1–8.
* Kooloth et al. (2021) Kooloth, P., Sondak, D. & Smith, L. M. 2021 Coherent solutions and transition to turbulence in two-dimensional Rayleigh–Bénard convection. Phys. Rev. Fluids 6, 013501\.
* Kraichnan (1962) Kraichnan, R. 1962 Turbulent thermal convection at arbitrary Prandtl number. Phys. Fluids 5, 1374–1389.
* Lohse & Xia (2010) Lohse, D. & Xia, K.-Q. 2010 Small-scale properties of turbulent Rayleigh–Bénard convection. Annu. Rev. Fluid Mech. 42, 335–364.
* Mortensen (2018) Mortensen, M. 2018 Shenfun: High performance spectral Galerkin computing platform. J. Open Source Softw. 3, 1071\.
* Niemela et al. (2000) Niemela, J. J., Skrbek, L., Sreenivasan, K. R. & Donnelly, R. J. 2000 Turbulent convection at very high Rayleigh numbers. Nature 404, 837–841.
* Oh (2019) Oh, S. 2019 An Efficient Spectral Method to Solve Multi-Dimensional Linear Partial Different Equations Using Chebyshev Polynomials. Mathematics 7, 90.
* Petschel et al. (2011) Petschel, K., Wilczek, M., Breuer, M., Friedrich, R. & Hansen, U. 2011 Statistical analysis of global wind dynamics in vigorous Rayleigh–Bénard convection. Phys. Rev. E 84, 026309.
* Podvin & Sergent (2015) Podvin, B. & Sergent, A. 2015 A large-scale investigation of wind reversal in a square Rayleigh–Bénard cell. J. Fluid Mech. 766, 172–201.
* Podvin & Sergent (2017) Podvin, B. & Sergent, A. 2017 Precursor for wind reversal in a square Rayleigh–Bénard cell. Phys. Rev. E 95, 013112.
* van der Poel et al. (2013) van der Poel, E. P., Stevens, R. J. A. M. & Lohse, D. 2013 Comparison between two- and three-dimensional Rayleigh–Bénard convection. J. Fluid Mech. 736, 177–194.
* Puigjaner et al. (2004) Puigjaner, D., Herrero, J., Giralt, F. & Simó, C. 2004 Stability analysis of the flow in a cubical cavity heated from below. Phys. Fluids 16, 3639–3655.
* Puigjaner et al. (2008) Puigjaner, D., Herrero, J., Simó, C. & Giralt, F. 2008 Bifurcation analysis of steady Rayleigh–Bénard convection in a cubical cavity with conducting sidewalls. J. Fluid Mech. 598, 393–427.
* Reiter (2021) Reiter, P. 2021 https://github.com/preiter93/rustpde.
* Reiter et al. (2021a) Reiter, P., Shishkina, O., Lohse, D. & Krug, D. 2021a Crossover of the relative heat transport contributions of plume ejecting and impacting zones in turbulent Rayleigh–Bénard convection (a). EPL 134, 34002.
* Reiter et al. (2021b) Reiter, P., Zhang, X., Stepanov, R. & Shishkina, O. 2021b Generation of zonal flows in convective systems by travelling thermal waves. J. Fluid Mech. 913, A13.
* Roche (2020) Roche, P. E. 2020 The ultimate state of convection: a unifying picture of very high Rayleigh numbers experiments. New J. Phys. 22, 073056.
* Roche et al. (2001) Roche, P.-E., Castaing, B., Chabaud, B., Hébral, B. & Sommeria, J. 2001 Side wall effects in Rayleigh–Bénard experiments. Eur. Phys. J. B 24, 405–408.
* Saad & Schultz (1986) Saad, Y. & Schultz, M. H. 1986 GMRES: A Generalized Minimal Residual Algorithm for Solving Nonsymmetric Linear Systems. SIAM J. Sci. Comput. 7, 856–869.
* Schlüter et al. (1965) Schlüter, A., Lortz, D. & Busse, F. 1965 On the stability of steady finite amplitude convection. J. Fluid Mech. 23, 129–144.
* Shen (1995) Shen, J. 1995 Efficient Spectral-Galerkin Method II. Direct Solvers of Second- and Fourth-Order Equations Using Chebyshev Polynomials. SIAM J. Sci. Comput. 16, 74–87.
* Shishkina (2021) Shishkina, O. 2021 Rayleigh–Bénard convection: The container shape matters. Phys. Rev. Fluids 6, 090502.
* Shishkina et al. (2014) Shishkina, O., Wagner, S. & Horn, S. 2014 Influence of the angle between the wind and the isothermal surfaces on the boundary layer structures in turbulent thermal convection. Phys. Rev. E 89, 033014.
* Sondak et al. (2015) Sondak, D., Smith, L. M. & Waleffe, F. 2015 Optimal heat transport solutions for Rayleigh–Bénard convection. J. Fluid Mech. 784, 565–595.
* Stevens et al. (2014) Stevens, R., Lohse, D. & Verzicco, R. 2014 Sidewall effects in Rayleigh–Bénard convection. J. Fluid Mech. 741, 1–27.
* Sugiyama et al. (2010) Sugiyama, K., Ni, R., Stevens, R. J. A. M., Chan, T. S., Zhou, S.-Q., Xi, H.-D., Sun, C., Grossmann, S., Xia, K.-Q. & Lohse, D. 2010 Flow reversals in thermally driven turbulence. Phys. Rev. Lett. 105, 034503.
* Tilgner et al. (1993) Tilgner, A., Belmonte, A. & Libchaber, A. 1993 Temperature and velocity profiles of turbulent convection in water. Phys. Rev. E 47, 2253–2257.
* Urban et al. (2014) Urban, P., Hanzelka, P., Musilova, V., Kralik, T., Mantia, M. L., Srnka, A. & Skrbek, L. 2014 Heat transfer in cryogenic helium gas by turbulent Rayleigh–Bénard convection in a cylindrical cell of aspect ratio 1. New J. Phys. 16, 053042\.
* Venturi et al. (2010) Venturi, D., Wan, X. & Karniadakis, G. 2010 Stochastic bifurcation analysis of Rayleigh–Bénard convection. J. Fluid Mech. 650, 391–413.
* Verzicco (2002) Verzicco, R. 2002 Sidewall finite-conductivity effects in confined turbulent thermal convection. J. Fluid Mech. 473, 201–210.
* Verzicco & Sreenivasan (2008) Verzicco, R. & Sreenivasan, K. R. 2008 A comparison of turbulent thermal convection between conditions of constant temperature and constant heat flux. J. Fluid Mech. 595, 203–219.
* Wagner & Shishkina (2013) Wagner, S. & Shishkina, O. 2013 Aspect ratio dependency of Rayleigh–Bénard convection in box-shaped containers. Phys. Fluids 25, 085110.
* Waleffe et al. (2015) Waleffe, F., Boonkasame, A. & Smith, L. M. 2015 Heat transport by coherent Rayleigh–Bénard convection. Phys. Fluids 27, 051702.
* Wan et al. (2019) Wan, Z., Wei, P., Verzicco, R., Lohse, D., Ahlers, G. & Stevens, R. 2019 Effect of sidewall on heat transfer and flow structure in Rayleigh–Bénard convection. J. Fluid Mech. 881, 218–243.
* Wen et al. (2015) Wen, B., Chini, G. P., Kerswell, R. R. & Doering, C. R. 2015 Time-stepping approach for solving upper-bound problems: Application to two-dimensional Rayleigh–Bénard convection. Phys. Rev. E 92, 043012.
* Wen et al. (2020a) Wen, B., Goluskin, D. & Doering, C. R. 2020a Steady Rayleigh–Bénard convection between no-slip boundaries, arXiv: 2008.08752.
* Wen et al. (2020b) Wen, B., Goluskin, D., LeDuc, M., Chini, G. P. & Doering, C. R. 2020b Steady Rayleigh–Bénard convection between stress-free boundaries. J. Fluid Mech. 905, R4.
* Xi & Xia (2007) Xi, H.-D. & Xia, K.-Q. 2007 Cessations and reversals of the large-scale circulation in turbulent thermal convection. Phys. Rev. E 75, 066307.
* Zhao et al. (2019) Zhao, J., Cai, W. & Jiang, Y. 2019 Study on corner vortex enlarging process of 2D square Rayleigh–Bénard cells filled with air in transient states. Int. J. Heat Mass Transfer 129, 599–609.
* Zhou & Chen (2018) Zhou, W.-F. & Chen, J. 2018 Letter: Similarity model for corner roll in turbulent Rayleigh–Bénard convection. Phys. Fluids 30, 111705.
* Zhu et al. (2018) Zhu, X., Mathai, V., Stevens, R., Verzicco, R. & Lohse, D. 2018 Transition to the ultimate regime in two-dimensional Rayleigh–Bénard convection. Phys. Rev. Lett. 120, 144502.
* Zienicke et al. (1998) Zienicke, E., Seehafer, N. & Feudel, F. 1998 Bifurcations in two-dimensional Rayleigh–Bénard convection. Phys. Rev. E 57, 428–435.
* Zwirner et al. (2020) Zwirner, L., Khalilov, R., Kolesnichenko, I., Mamykin, A., Mandrykin, S., Pavlinov, A., Shestakov, A., Teimurazov, A., Frick, P. & Shishkina, O. 2020 The influence of the cell inclination on the heat transport and large-scale circulation in liquid metal convection. J. Fluid Mech. 884, A18.
# Heegaard Floer homology and rational cuspidal curves
Maciej Borodzik Institute of Mathematics, University of Warsaw, ul. Banacha
2, 02-097 Warsaw, Poland<EMAIL_ADDRESS>and Charles Livingston
Department of Mathematics, Indiana University, Bloomington, IN 47405
<EMAIL_ADDRESS>
###### Abstract.
We apply the methods of Heegaard Floer homology to identify topological
properties of complex curves in $\mathbb{C}P^{2}$. As one application, we
resolve an open conjecture that constrains the Alexander polynomial of the
link of the singular point of the curve in the case that there is exactly one
singular point, having connected link, and the curve is of genus 0.
Generalizations apply in the case of multiple singular points.
###### Key words and phrases:
rational cuspidal curve, $d$–invariant, surgery, semigroup density
###### 2010 Mathematics Subject Classification:
primary: 14H50, secondary: 14B05, 57M25, 57R58
The first author was supported by Polish OPUS grant No. 2012/05/B/ST1/03195.
The second author was supported by National Science Foundation Grant 1007196.
## 1\. Introduction
We consider irreducible algebraic curves $C\subset\mathbb{C}P^{2}$. Such a
curve has a finite set of singular points, $\\{z_{i}\\}_{i=1}^{n}$; a
neighborhood of each intersects $C$ in a cone on a link $L_{i}\subset S^{3}$.
A fundamental question asks what possible configurations of links
$\\{L_{i}\\}$ arise in this way. In this generality the problem is fairly
intractable and research has focused on a restricted case, in which each
$L_{i}$ is connected, and thus a knot $K_{i}$, and $C$ is a rational curve,
meaning that there is a rational surjective map $\mathbb{C}P^{1}\to C$. Such a
curve is called rational cuspidal. Being rational cuspidal is equivalent to
$C$ being homeomorphic to $S^{2}$.
Our results apply in the case of multiple singular points, but the following
statement gives an indication of the nature of the results and their
consequences.
###### Theorem 1.1.
Suppose that $C$ is a rational cuspidal curve of degree $d$ with one singular
point, a cone on the knot $K$, and the Alexander polynomial of $K$ is expanded
at $t=1$ to be
$\Delta_{K}(t)=1+\frac{(d-1)(d-2)}{2}(t-1)+(t-1)^{2}\sum_{l}k_{l}t^{l}$. Then
for all $j$, $0\leq j\leq d-3$, $k_{d(d-j-3)}=(j+1)(j+2)/2$.
There are three facets to the work here:
1. (1)
We begin with a basic observation that a neighborhood $Y$ of $C$ is built from
the 4–ball by attaching a 2–handle along the knot $K=\\#K_{i}$ with framing
$d^{2}$, where $d$ is the degree of the curve. Thus, its boundary,
$S^{3}_{d^{2}}(K)$, bounds the rational homology ball
$\mathbb{C}P^{2}\setminus Y$. From this, it follows that the Heegaard Floer
correction term satisfies $d(S^{3}_{d^{2}}(K),{{\mathfrak{s}}}_{m})=0$ if
$d|m$, for properly enumerated Spin$^{c}$ structures ${{\mathfrak{s}}}_{m}$.
2. (2)
Because each $K_{i}$ is an algebraic knot (in particular an $L$–space knot),
the Heegaard Floer complex $\operatorname{\it CFK}^{\infty}(S^{3},K_{i})$ is
determined by the Alexander polynomial of $K_{i}$, and thus the complex
$\operatorname{\it CFK}^{\infty}(S^{3},K)$ and the $d$–invariants are also
determined by the Alexander polynomials of the $K_{i}$.
3. (3)
The constraints that arise on the Alexander polynomials, although initially
appearing quite intricate, can be reinterpreted in compact form using
semigroups of singular points. In this way, we can relate these constraints to
well-known conjectures.
### 1.1. The conjecture of Fernández de Bobadilla, Luengo, Melle-Hernandez
and Némethi
In [5] the following conjecture was proposed. It was also verified for all
known examples of rational cuspidal curves.
###### Conjecture 1.2 ([5]).
Suppose that the rational cuspidal curve $C$ of degree $d$ has critical points
$z_{1},\dots,z_{n}$. Let $K_{1},\dots,K_{n}$ be the corresponding links of
singular points and let $\Delta_{1},\dots,\Delta_{n}$ be their Alexander
polynomials. Let $\Delta=\Delta_{1}\cdot\ldots\cdot\Delta_{n}$, expanded as
$\Delta(t)=1+\frac{(d-1)(d-2)}{2}(t-1)+(t-1)^{2}\sum_{l=0}^{2g-2}k_{l}t^{l}.$
Then for any $j=0,\dots,d-3$, $k_{d(d-j-3)}\leq(j+1)(j+2)/2$, with equality
for $n=1$.
We remark that the case $n=1$ of the conjecture is Theorem 1.1. We will prove
this result in Section 4.4. Later we will also prove an alternative
generalization of Theorem 1.1 for the case $n>1$, stated as Theorem 5.4, which
is the main result of the present article. The advantage of this formulation
over the original conjecture lies in the fact that it gives precise values of
the coefficients $k_{d(d-j-3)}$. Theorem 6.5 provides an equivalent statement
of Theorem 5.4.
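Both the conjectured bound and its equality case can be probed directly for the classical unicuspidal curves of degree $d$ whose single singular point has link $T(d-1,d)$ (the cuspidal cubic for $d=3$). Using the semigroup description of Section 2.2, the coefficient $k_{j}$ counts the gaps of the semigroup $\langle d-1,d\rangle$ exceeding $j$; the short Python sketch below (all helper names are ours) verifies the equality of Theorem 1.1 for small $d$:

```python
def semigroup_gaps(p, q):
    """Gap sequence of the numerical semigroup <p, q>, with gcd(p, q) = 1."""
    frob = p * q - p - q                       # largest gap (Frobenius number)
    S = {a * p + b * q for a in range(q) for b in range(p)}
    return [m for m in range(frob + 1) if m not in S]

def k_coeff(gaps, j):
    """k_j = #{m > j : m a gap}, cf. the expansion in Conjecture 1.2."""
    return sum(1 for m in gaps if m > j)

# A degree-d curve with one cusp of type (d-1; d) has link T(d-1, d) and
# semigroup <d-1, d>; check k_{d(d-j-3)} = (j+1)(j+2)/2 for j = 0, ..., d-3.
for d in range(3, 8):
    gaps = semigroup_gaps(d - 1, d)
    for j in range(d - 2):
        assert k_coeff(gaps, d * (d - j - 3)) == (j + 1) * (j + 2) // 2
print("equality verified for d = 3, ..., 7")
```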
###### Acknowledgements.
The authors are grateful to Matt Hedden, Jen Hom and András Némethi for
fruitful discussions. The first author wants to thank Indiana University for
hospitality.
## 2\. Background: Algebraic Geometry and Rational Cuspidal Curves
In this section we will present some of the general theory of rational
cuspidal curves. Section 2.1 includes basic information about singular points
of plane curves. In Section 2.2 we discuss the semigroup of a singular point
and its connections to the Alexander polynomial of the link. We shall use
results from this section later in the article to simplify the equalities that
we obtain. In Section 2.3 we describe results from [5] to give some flavor of
the theory. In Section 2.4 we provide a rough sketch of some methods used to
study rational cuspidal curves. We refer to [13] for an excellent and fairly
up-to-date survey of results on rational cuspidal curves.
### 2.1. Singular points and algebraic curves
For a general introduction and references to this subsection, we refer to [3,
7], or to [12, Section 10] for a more topological approach. In this article we
will be considering algebraic curves embedded in $\mathbb{C}P^{2}$. Thus we
will use the word _curve_ to refer to a zero set of an irreducible homogeneous
polynomial $F$ of degree $d$. The _degree_ of the curve is the degree of the
corresponding polynomial.
Let $C$ be a curve. A point $z\in C$ is called _singular_ if the gradient of
$F$ vanishes at $z$. Singular points of irreducible curves in
$\mathbb{C}P^{2}$ are always isolated. Given a singular point $z$ and a
sufficiently small ball $B\subset\mathbb{C}P^{2}$ around $z$, we call
$K=C\cap\partial B$ the _link_ of the singular point. The singular point is
called _cuspidal_ or _unibranched_ if $K$ is a knot, that is a link with one
component, or equivalently, if there is an analytic map $\psi$ from a disk in
$\mathbb{C}$ onto $C\cap B$.
Unless specified otherwise, all singular points are assumed to be cuspidal.
Two unibranched singular points are called _topologically equivalent_ if the
links of these singular points are isotopic; see for instance [7, Definition
I.3.30] for more details. A unibranched singular point is topologically
equivalent to one for which the local parametrization $\psi$ is given in local
coordinates $(x,y)$ on $B$ by $t\mapsto(x(t),y(t))$, where $x(t)=t^{p}$,
$y(t)=t^{q_{1}}+\ldots+t^{q_{n}}$ for some positive integers
$p,q_{1},\ldots,q_{n}$ satisfying $p<q_{1}<q_{2}<\ldots<q_{n}$. Furthermore,
if we set $D_{i}=\gcd(p,q_{1},\ldots,q_{i})$, then $D_{i}$ does not divide
$q_{i+1}$ and $D_{n}=1$. The sequence $(p;q_{1},\ldots,q_{n})$ is called the
_characteristic sequence_ of the singular point and $p$ is called the
_multiplicity_. Sometimes $n$ is referred to as the _number of Puiseux pairs_
, a notion which comes from an alternative way of encoding the sequence
$(p;q_{1},\ldots,q_{n})$. We will say that a singular point is of type
$(p;q_{1},\ldots,q_{n})$ if it has a presentation of this sort in local
coordinates.
The link of a singular point with a characteristic sequence
$(p;q_{1},\ldots,q_{n})$ is an $(n-1)$–fold iterate of a torus knot
$T(p^{\prime},q^{\prime})$, where $p^{\prime}=p/D_{1}$ and
$q^{\prime}=q_{1}/D_{1}$; see for example [3, Sections 8.3 and 8.5] or [26,
Chapter 5.2]. In particular, if $n=1$, the link is a torus knot $T(p,q_{1})$.
In all cases, the genus of the link is equal to $\mu/2=\delta$, where $\mu$ is
the Milnor number and $\delta$ is the so-called $\delta$–invariant of the
singular point, see [7, page 205], or [12, Section 10]. The genus is also
equal to half the degree of the Alexander polynomial of the link of the
singular point. The Milnor number can be computed from the following formula,
see [12, Remark 10.10]:
$\mu=(p-1)(q_{1}-1)+\sum_{i=2}^{n}(D_{i-1}-1)(q_{i}-q_{i-1}).$
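This formula is easily mechanized; in the sketch below (ours) the gcd factor is taken as $D_{i-1}$, which is forced by the parity constraint $\mu=2\delta$:

```python
from math import gcd

def milnor_number(p, qs):
    """Milnor number of a cusp with characteristic sequence (p; q_1, ..., q_n).

    mu = (p-1)(q_1 - 1) + sum_{i=2}^{n} (D_{i-1} - 1)(q_i - q_{i-1}),
    where D_i = gcd(p, q_1, ..., q_i); the D_{i-1} factor makes mu even."""
    D, g = [], p
    for q in qs:
        g = gcd(g, q)
        D.append(g)
    mu = (p - 1) * (qs[0] - 1)
    for i in range(1, len(qs)):
        mu += (D[i - 1] - 1) * (qs[i] - qs[i - 1])
    return mu

print(milnor_number(3, [7]))     # 12: T(3,7) has genus mu/2 = 6
print(milnor_number(4, [6, 7]))  # 16: an iterated torus knot with delta = 8
```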
Suppose that $C$ is a degree $d$ curve with singular points
$z_{1},\ldots,z_{n}$ (and $L_{1},\ldots,L_{n}$ are their links). The genus
formula, due to Serre (see [12, Property 10.4]) states that the genus of $C$
is equal to
$g(C)=\frac{1}{2}(d-1)(d-2)-\sum_{i=1}^{n}\delta_{i}.$
If all the critical points are cuspidal, we have $\delta_{i}=g(L_{i})$, so the
above formula can be written as
(2.1) $g(C)=\frac{1}{2}(d-1)(d-2)-\sum_{i=1}^{n}g(L_{i}).$
In particular, $C$ is rational cuspidal (that is, it is a homeomorphic image
of a sphere) if and only if $\sum g(L_{i})=\frac{1}{2}(d-1)(d-2)$.
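This criterion can be checked mechanically for some well-known curves: the cuspidal cubic, the tricuspidal quartic with three ordinary cusps, and the curves $x^{d}=y^{d-1}z$ with a single cusp of type $(d-1;d)$. A short sketch (the helper functions are ours):

```python
def torus_knot_genus(p, q):
    """Genus of the torus knot T(p, q), the link of a (p; q) cusp."""
    return (p - 1) * (q - 1) // 2

def is_rational_cuspidal(d, cusp_links):
    """Criterion (2.1): sum of link genera equals (d-1)(d-2)/2."""
    total = sum(torus_knot_genus(p, q) for p, q in cusp_links)
    return total == (d - 1) * (d - 2) // 2

# Cuspidal cubic: one (2; 3) cusp, link T(2,3).
print(is_rational_cuspidal(3, [(2, 3)]))                            # True
# Tricuspidal quartic: three ordinary cusps of type (2; 3).
print(is_rational_cuspidal(4, [(2, 3)] * 3))                        # True
# x^d = y^(d-1) z: one cusp of type (d-1; d).
print(all(is_rational_cuspidal(d, [(d - 1, d)]) for d in range(3, 10)))  # True
```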
### 2.2. Semigroup of a singular point
The notion of the semigroup associated to a singular point is a central notion
in the subject, although in the present work we use only the language of
semigroups, not the algebraic aspects. We refer to [26, Chapter 4] or [7, page
214] for details and proofs. Suppose that $z$ is a cuspidal singular point of
a curve $C$ and $B$ is a sufficiently small ball around $z$. Let
$\psi(t)=(x(t),y(t))$ be a local parametrization of $C\cap B$ near $z$; see
Section 2.1. For any polynomial $G(x,y)$ we look at the order at $0$ of an
analytic map $t\mapsto G(x(t),y(t))\in\mathbb{C}$. Let $S$ be the set
integers, which can be realized as the order for some $G$. Then $S$ is clearly
a semigroup of $\mathbb{Z}_{\geq 0}$. We call it the _semigroup of the
singular point_. The semigroup can be computed from the characteristic
sequence, for example for a sequence $(p;q_{1})$, $S$ is generated by $p$ and
$q_{1}$. The _gap sequence_ , $G:=\mathbb{Z}_{\geq 0}\setminus S$, has
precisely $\mu/2$ elements and the largest one is $\mu-1$, where $\mu$ is the
Milnor number.
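For a singular point of type $(p;q_{1})$ the semigroup and its gap sequence are easy to enumerate; the sketch below (ours) does so for the $(3;7)$ cusp and checks the two stated properties of $G$:

```python
def gap_sequence(p, q):
    """Gaps of the semigroup <p, q> for a singular point of type (p; q)."""
    frob = p * q - p - q                 # largest gap, equal to mu - 1
    S = {a * p + b * q for a in range(q) for b in range(p)}
    return [m for m in range(frob + 1) if m not in S]

G = gap_sequence(3, 7)
mu = (3 - 1) * (7 - 1)                   # Milnor number of a (3; 7) cusp
print(G)                                 # [1, 2, 4, 5, 8, 11]
print(len(G) == mu // 2, max(G) == mu - 1)
```

The output matches Example 2.9 below: six gaps ($=\mu/2$) with largest gap $11=\mu-1$.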
We now assume that $K$ is the link of the singular point $z$. Explicit
computations of the Alexander polynomial of $K$ show that it is of the form
(2.2) $\Delta_{K}(t)=\sum_{i=0}^{2m}(-1)^{i}t^{n_{i}},$
where $n_{i}$ form an increasing sequence with $n_{0}=0$ and $n_{2m}=2g$,
twice the genus of $K$.
Expanding $t^{n_{2i}}-t^{n_{2i-1}}$ as
$(t-1)(t^{n_{2i}-1}+t^{n_{2i}-2}+\ldots+t^{n_{2i-1}})$ yields
(2.3) $\Delta_{K}(t)=1+(t-1)\sum_{j=1}^{k}t^{g_{j}},$
for some finite sequence $0<g_{1}<\ldots<g_{k}$. We have the following result
(see [26, Exercise 5.7.7]).
###### Lemma 2.4.
The sequence $g_{1},\ldots,g_{k}$ is the gap sequence of the semigroup of the
singular point. In particular $k=\\#G=\mu/2$, where $\mu$ is the Milnor
number, so $\\#G$ is the genus.
Writing $t^{g_{j}}$ as $(t-1)(t^{g_{j}-1}+t^{g_{j}-2}+\ldots+t+1)+1$ in (2.3)
yields the following formula
(2.5) $\Delta_{K}(t)=1+(t-1)g(K)+(t-1)^{2}\sum_{j=0}^{\mu-2}k_{j}t^{j},$
where $k_{j}=\\#\\{m>j\colon m\not\in S\\}$.
We shall use the following definition.
###### Definition 2.6.
For any finite increasing sequence of positive integers $G$, we define
(2.7) $I_{G}(m)=\#\{k\in G\cup\mathbb{Z}_{<0}\colon k\geq m\},$
where $\mathbb{Z}_{<0}$ is the set of the negative integers. We shall call
$I_{G}$ the _gap function_ , because in most applications $G$ will be a gap
sequence of some semigroup.
###### Remark 2.8.
We point out that for $j=0,\ldots,\mu-2$, we have $I_{G}(j+1)=k_{j}$, where
the $k_{j}$ are as in (2.5).
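Definition 2.6 translates directly into code. A minimal Python sketch (our naming); the sample gap sequence is that of the $(3;7)$ cusp treated in Example 2.9:

```python
def gap_function(G, m):
    """I_G(m) = number of k in G ∪ Z_{<0} with k >= m (Definition 2.6)."""
    negatives = max(0, -m)          # negative integers lying in [m, -1]
    return negatives + sum(1 for k in G if k >= m)

G37 = [1, 2, 4, 5, 8, 11]           # gap sequence of the (3; 7) cusp
```

In particular $I_{G}(j+1)$ recovers the coefficients $k_{j}$ of (2.5), as stated in Remark 2.8.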
###### Example 2.9.
Consider the knot $T(3,7)$. Its Alexander polynomial is
$\frac{(t^{21}-1)(t-1)}{(t^{3}-1)(t^{7}-1)}=1-t+t^{3}-t^{4}+t^{6}-t^{8}+t^{9}-t^{11}+t^{12}$
$=1+(t-1)(t+t^{2}+t^{4}+t^{5}+t^{8}+t^{11})$
$=1+6(t-1)+(t-1)^{2}\left(6+5t+4t^{2}+4t^{3}+3t^{4}+2t^{5}+2t^{6}+2t^{7}+t^{8}+t^{9}+t^{10}\right).$
The semigroup is $(0,3,6,7,9,10,12,13,14,\dots)$. The gap sequence is
$1,2,4,5,8,11$.
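The expressions above can be cross-checked by evaluating them at a sample point, say $t=2$ (illustrative sketch; exact integer arithmetic, so the division in the product form is exact):

```python
t = 2
exponents = [0, 1, 3, 4, 6, 8, 9, 11, 12]     # the n_i of the expanded polynomial
gaps = [1, 2, 4, 5, 8, 11]                    # gap sequence of T(3,7)
ks = [6, 5, 4, 4, 3, 2, 2, 2, 1, 1, 1]        # coefficients k_j from (2.5)

product_form = (t**21 - 1) * (t - 1) // ((t**3 - 1) * (t**7 - 1))
sum_form = sum((-1)**i * t**n for i, n in enumerate(exponents))
gap_form = 1 + (t - 1) * sum(t**g for g in gaps)
k_form = 1 + 6 * (t - 1) + (t - 1)**2 * sum(k * t**j for j, k in enumerate(ks))
```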
###### Remark 2.10.
The passage from (2.2) through (2.3) to (2.5) is just an algebraic
manipulation, and thus it applies to any knot whose Alexander polynomial has
form (2.2). In particular, according to [21, Theorem 1.2] it applies to any
$L$–space knot. In this setting we will also call the sequence
$g_{1},\dots,g_{k}$ the _gap sequence_ of the knot and denote it by $G_{K}$;
we will write $I_{K}(m)$ for the gap function relative to $G_{K}$. Even though
the complement $\mathbb{Z}_{\geq 0}\setminus G_{K}$ is not always a semigroup,
we still have $\#G_{K}=\frac{1}{2}\deg\Delta_{K}$. This property follows
immediately from the symmetry of the Alexander polynomial.
### 2.3. Rational cuspidal curves with one cusp
The classification of rational cuspidal curves is a challenging old problem,
with some conjectures (like the Coolidge–Nagata conjecture [4, 14]) remaining
open for many decades. The classification of curves with a unique critical
point is far from being accomplished; the special case when the unique
singular point has only one Puiseux term (its link is a torus knot) is
complete [5], but even in this basic case, the proof is quite difficult.
To give some indication of the situation, consider two families of rational
cuspidal curves. The first is given in projective coordinates on
$\mathbb{C}P^{2}$ by $x^{d}+y^{d-1}z=0$ for $d>1$; the second is
$(zy-x^{2})^{d/2}-xy^{d-1}=0$ for $d$ even and $d>1$. Both are of degree $d$.
Both families have a unique singular point, in the first case it is of type
$(d-1;d)$, in the second of type $(d/2;2d-1)$. In both cases, the Milnor
number is $(d-1)(d-2)$, so the curves are rational. An explicit
parametrization can be easily given as well.
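Using the standard formula $\mu=(p-1)(q-1)$ for a cusp of type $(p;q)$, the equality $\mu=(d-1)(d-2)$ for both families is a one-line machine check (sketch, our naming):

```python
def milnor(p, q):
    """Milnor number of a cusp of type (p; q)."""
    return (p - 1) * (q - 1)

for d in range(2, 50):
    assert milnor(d - 1, d) == (d - 1) * (d - 2)                 # type (d-1; d)
    if d % 2 == 0:
        assert milnor(d // 2, 2 * d - 1) == (d - 1) * (d - 2)    # type (d/2; 2d-1)
```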
There also exist more complicated examples. For instance, Orevkov [18]
constructed rational cuspidal curves of degree $\phi_{j}$ having a single
singular point of type $(\phi_{j-2};\phi_{j+2})$, where $j$ is odd and $j>5$.
Here the $\phi_{j}$ are the Fibonacci numbers, $\phi_{0}=0$, $\phi_{1}=1$,
$\phi_{j+2}=\phi_{j+1}+\phi_{j}$. As an example, there exists a rational
cuspidal curve of degree $13$ with a single singular point of type $(5;34)$.
Orevkov’s construction is inductive and by no means trivial. Another family
found by Orevkov are rational cuspidal curves of degree $\phi_{j-1}^{2}-1$
having a single singular point of type $(\phi_{j-2}^{2};\phi_{j}^{2})$, for
$j>5$, odd.
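A short numeric check (ours) that Orevkov's first family is consistent with the rationality constraint $\mu=(d-1)(d-2)$, where $\mu=(p-1)(q-1)$ for a $(p;q)$ cusp:

```python
def fib(n):
    """Fibonacci numbers with phi_0 = 0, phi_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for j in (7, 9, 11, 13, 15):                  # j odd, j > 5
    d, p, q = fib(j), fib(j - 2), fib(j + 2)
    assert (p - 1) * (q - 1) == (d - 1) * (d - 2)
```

For $j=7$ this is the degree-$13$ curve with a $(5;34)$ cusp mentioned above.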
The main result of [5] is that apart from these four families of rational
cuspidal curves, there are only two sporadic curves with a unique singular
point having one Puiseux pair, one of degree $8$, the other of degree $16$.
### 2.4. Constraints on rational cuspidal curves.
Here we review some constraints for rational cuspidal curves. We refer to [13]
for more details and references. The article [5] shows how these constraints
can be used in practice. The fundamental constraint is given by (2.1). Next,
Matsuoka and Sakai [11] proved that if $(p_{1};q_{11},\ldots,q_{1k_{1}})$,
$\ldots$, $(p_{n};q_{n1},\ldots,q_{nk_{n}})$ are the only singular points occurring on
a rational cuspidal curve of degree $d$ with $p_{1}\geq\ldots\geq p_{n}$, then
$p_{1}>d/3$. Later, Orevkov [18] improved this to
$\alpha(p_{1}+1)+1/\sqrt{5}>d$, where $\alpha=(3+\sqrt{5})/2\approx 2.61$, and
showed that this inequality is asymptotically optimal (it is related to the
curves described in Section 2.3). Both proofs use very deep algebro-geometric
tools. We reprove the result of [11] in Proposition 6.7 below.
Another obstruction comes from the semicontinuity of the spectrum, a concept
that arises from Hodge theory. Even a rough definition of the spectrum of a
singular point is beyond the scope of this article. We refer to [1, Chapter
14] for a definition of the spectrum and to [5] for illustrations of its use.
We point out that recently (see [2]) a tight relation has been drawn between
the spectrum of a singular point and the Tristram–Levine signatures of its
link. In general, semicontinuity of the spectrum is a very strong tool, but it
is also very difficult to apply.
Using tools from algebraic geometry, such as the Hodge Index Theorem, Tono in
[25] proved that any rational cuspidal curve can have at most eight singular
points. An old conjecture is that a rational cuspidal curve can have at most
$4$ singular points; see [22] for a precise statement.
In [6] a completely new approach was proposed, motivated by a conjecture on
Seiberg–Witten invariants of links of surface singularities made by Némethi
and Nicolaescu; see [16]. Specifically, Conjecture 1.2 in the present article
arises from these considerations. Another reference for the general conjecture
on Seiberg–Witten invariants is [15].
## 3\. Topology, algebraic topology, and Spin$^{c}$ structures
Let $C\subset\mathbb{C}P^{2}$ be a rational cuspidal curve. Let $d$ be its
degree and $z_{1},\ldots,z_{n}$ be its singular points. We let $Y$ be a closed
manifold regular neighborhood of $C$, let $M=\partial Y$, and
$W=\overline{\mathbb{C}P^{2}-Y}$.
### 3.1. Topological descriptions of $Y$ and $M$
The neighborhood $Y$ of $C$ can be built in three steps. First, disk
neighborhoods of the $z_{i}$ are selected. Then neighborhoods of $n-1$
embedded arcs on $C$ are adjoined, yielding a 4–ball. Finally, the remainder
of $C$ is a disk, so its neighborhood forms a 2–handle attached to the 4–ball.
Thus, $Y$ is a 4–ball with a 2–handle attached. The attaching curve is easily
seen to be $K=\#K_{i}$. Finally, since the self-intersection of $C$ is
$d^{2}$, the framing of the attaching map is $d^{2}$. In particular,
$M=S^{3}_{d^{2}}(K)$.
One quickly computes that $H_{2}(\mathbb{C}P^{2},C)=\mathbb{Z}_{d}$, and
$H_{4}(\mathbb{C}P^{2},C)=\mathbb{Z}$, with the remaining homology groups 0.
Using excision, we see that the groups $H_{i}(W,M)$ are the same. Via
Lefschetz duality and the universal coefficient theorem we find that
$H_{0}(W)=\mathbb{Z}$, $H_{1}(W)=\mathbb{Z}_{d}$ and all the other groups are
0. Finally, the long exact sequence of the pair $(W,M)$ yields
$0\to H_{2}(W,M)\to H_{1}(M)\to H_{1}(W)\to 0$
which in this case is
$0\to\mathbb{Z}_{d}\to\mathbb{Z}_{d^{2}}\to\mathbb{Z}_{d}\to 0.$
This is realized geometrically by letting the generator of $H_{2}(W,M)$ be
${\it H}\cap W$, where $\it H\subset\mathbb{C}P^{2}$ is a generic line. Its
boundary is algebraically $d$ copies of the meridian of the attaching curve
$K$ in the 2–handle decomposition of $Y$.
Taking duals we see that the map $H^{2}(W)\to H^{2}(M)$, which maps
$\mathbb{Z}_{d}\to\mathbb{Z}_{d^{2}}$, takes the canonical generator to $d$
times the dual to the meridian in $M=S^{3}_{d^{2}}(K)$.
### 3.2. Spin$^{c}$ structures
For any space $X$ there is a transitive action of $H^{2}(X)$ on $\operatorname{Spin}^{c}(X)$.
Thus, $W$ has $d$ Spin$^{c}$ structures and $M$ has $d^{2}$ such structures.
Since $\mathbb{C}P^{2}$ has a Spin$^{c}$ structure with first Chern class a dual to
the class of the line, its restriction to $W$ is a structure whose restriction
to $M$ has first Chern class equal to $d$ times the dual to the meridian.
For a cohomology class $z\in H^{2}(X)$ and a Spin$^{c}$ structure
${\mathfrak{s}}$, one has
$c_{1}(z\cdot{\mathfrak{s}})-c_{1}({\mathfrak{s}})=2z$. Thus for each
$k\in\mathbb{Z}$, there is a Spin$^{c}$ structure on $M$ which extends to $W$ having
first Chern class of the form $d+2kd$. Notice that for $d$ odd, all
$md\in\mathbb{Z}_{d^{2}}$ for $m\in\mathbb{Z}$ occur as first Chern classes of
Spin$^{c}$ structures that extend over $W$, but for $d$ even, only elements of the
form $md$ with $m$ odd occur. (Thus, for $d$ even, there are $d$ extending
structures, but only $d/2$ first Chern classes that occur.)
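The dichotomy between odd and even $d$ can be seen by listing the residues $d+2kd\bmod d^{2}$ directly (illustrative Python sketch; the function name is ours):

```python
def extending_chern_classes(d):
    """First Chern classes in Z_{d^2} of the d extending structures d + 2kd."""
    return sorted({(d + 2 * k * d) % d**2 for k in range(d)})

odd_case = extending_chern_classes(5)    # all multiples of 5 in Z_25
even_case = extending_chern_classes(6)   # only odd multiples of 6 in Z_36
```

For $d=5$ every multiple of $5$ occurs, while for $d=6$ only $6$, $18$, $30$ occur, matching the count of $d/2$ classes in the even case.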
According to [20, Section 3.4], the Spin$^{c}$ structures on $M$ have an enumeration
${\mathfrak{s}}_{m}$, for $m\in[-d^{2}/2,d^{2}/2]$, which can be defined via
the manifold $Y$. Specifically, ${\mathfrak{s}}_{m}$ is defined to be the
restriction to $M$ of the Spin$^{c}$ structure ${\mathfrak{t}}_{m}$ on $Y$ with
the property that $\left<c_{1}({\mathfrak{t}}_{m}),C\right>+d^{2}=2m$. We
point out that if $d$ is even, ${\mathfrak{s}}_{d^{2}/2}$ and
${\mathfrak{s}}_{-d^{2}/2}$ denote the same structure; compare Remark 4.5
below.
It now follows from our previous observations that the structures
${{\mathfrak{s}}}_{m}$ that extend to $W$ are those with $m=kd$ for some
integer $k$, $-d/2\leq k\leq d/2$ if $d$ is odd. If $d$ is even, then those
that extend have $m=kd/2$ for some odd $k$, $-d\leq k\leq d$. For future
reference, we summarize this with the following lemma.
###### Lemma 3.1.
If $W^{4}=\overline{\mathbb{C}P^{2}-Y}$ where $Y$ is a neighborhood of a
rational cuspidal curve $C$ of degree $d$ (as constructed above), then the
Spin$^{c}$ structure ${\mathfrak{s}}_{m}$ on $\partial W^{4}$ extends to $W^{4}$
if $m=kd$ for some integer $k$, $-d/2\leq k\leq d/2$ if $d$ is odd. If $d$ is
even, then those that extend have $m=kd/2$ for some odd $k$, $-d\leq k\leq d$.
Here ${\mathfrak{s}}_{m}$ is the Spin$^{c}$ structure on $\partial W$ which
extends to a structure ${\mathfrak{t}}_{m}$ on $Y$ satisfying
$\left<c_{1}({\mathfrak{t}}_{m}),C\right>+d^{2}=2m$.
## 4\. Heegaard Floer theory
Heegaard Floer theory [19] associates to a 3–manifold $M$ with Spin$^{c}$ structure
${\mathfrak{s}}$ a filtered, graded chain complex
$CF^{\infty}(M,{\mathfrak{s}})$ over the field $\mathbb{Z}_{2}$. A
fundamental invariant of the pair $(M,{\mathfrak{s}})$, the correction term
or $d$–invariant, $d(M,{\mathfrak{s}})\in\mathbb{Q}$, is determined by
$CF^{\infty}(M,{\mathfrak{s}})$. A rational homology sphere $M$ is called an
$L$–space if $\widehat{HF}(M,{\mathfrak{s}})$ has rank one for every Spin$^{c}$
structure ${\mathfrak{s}}$ [21].
A knot $K$ in $M$ provides a second filtration on
$CF^{\infty}(M,{{\mathfrak{s}}})$ [19]. In particular, for $K\subset S^{3}$
there is a bifiltered graded chain complex $\operatorname{\it
CFK}^{\infty}(K)$ over the field $\mathbb{Z}_{2}$. It is known that for
algebraic knots the complex is determined by the Alexander polynomial of $K$.
More generally, this holds for any knot upon which some surgery yields an
$L$–space; these knots are called $L$–space knots.
The Heegaard Floer invariants of surgery on $K$, in particular the
$d$–invariants of $S^{3}_{q}(K)$, are determined by this complex, and for
$q>2g(K)$ the computation of $d(S^{3}_{q}(K),{\mathfrak{s}})$
from $CFK^{\infty}(K)$ is particularly simple. In this section we will
illustrate the general theory, leaving the details to references such as [9,
10].
### 4.1. $\operatorname{\it CFK}^{\infty}(K)$ for $K$ an algebraic knot
Figure 1 is a schematic illustration of a finite complex over
$\mathbb{Z}_{2}$. Each dot represents a generator and the arrows indicate
boundary maps. Abstractly it is of the form
$0\to\mathbb{Z}_{2}^{4}\to\mathbb{Z}_{2}^{5}\to 0$ with homology
$\mathbb{Z}_{2}$. The complex is bifiltered with the horizontal and vertical
coordinates representing the filtration levels of the generators. We will
refer to the two filtration levels as the $(i,j)$–filtration levels. The
complex has an absolute grading which is not indicated in the diagram; the
generator at filtration level $(0,6)$ has grading 0 and the boundary map
lowers the grading by 1. Thus, there are five generators at grading level 0
and four at grading level one. We call the first set of generators type A and
the second type B.
We will refer to a complex such as this as a staircase complex of length $n$,
$\operatorname{St}(v)$, where $v$ is an $(n-1)$–tuple of positive integers
designating the length of the segments starting at the top left and moving to
the bottom right in alternating right and downward steps. Furthermore we
require that the top left vertex lies on the vertical axis and the bottom
right vertex lies on the horizontal axis. Thus, the illustration is of
$\operatorname{St}(1,2,1,2,2,1,2,1)$. The absolute grading of
$\operatorname{St}(v)$ is defined by setting the grading of the top left
generator to be equal to $0$ and the boundary map to lower the grading by $1$.
The vertices of $\operatorname{St}(K)$ will be denoted
$\operatorname{Vert}(\operatorname{St}(K))$. We shall write
$\operatorname{Vert}_{A}(\operatorname{St}(K))$ to denote the set of type A
vertices and write $\operatorname{Vert}_{B}(\operatorname{St}(K))$ for the set
of vertices of type B.
If $K$ is a knot admitting an $L$–space surgery, in particular an algebraic
knot (see [8]), then it has Alexander polynomial of the form
$\Delta_{K}(t)=\sum_{i=0}^{2m}(-1)^{i}t^{n_{i}}$. To such a knot we associate a
staircase complex, $\operatorname{St}(K)=\operatorname{St}(n_{i+1}-n_{i})$,
where $i$ runs from 0 to $2m-1$. As an example, the torus knot $T(3,7)$ has
Alexander polynomial $1-t+t^{3}-t^{4}+t^{6}-t^{8}+t^{9}-t^{11}+t^{12}$. The
corresponding staircase complex is $\operatorname{St}(1,2,1,2,2,1,2,1)$.
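The passage from the exponents $n_{i}$ to the staircase is algorithmic. The sketch below (our own encoding, not from the text) starts at $(0,g)$ and alternates right and downward steps of lengths $n_{i+1}-n_{i}$:

```python
def staircase(exponents):
    """Type A and type B vertices of St(K) from n_0 < ... < n_{2m} of (2.2)."""
    g = exponents[-1] // 2                    # genus: n_{2m} = 2g
    x, y = 0, g
    A, B = [(x, y)], []
    steps = [b - a for a, b in zip(exponents, exponents[1:])]
    for i, s in enumerate(steps):
        if i % 2 == 0:
            x += s                            # segment to the right
            B.append((x, y))
        else:
            y -= s                            # segment downward
            A.append((x, y))
    return A, B

A, B = staircase([0, 1, 3, 4, 6, 8, 9, 11, 12])    # exponents for T(3,7)
```

For $T(3,7)$ this produces the type A vertices $(0,6)$, $(1,4)$, $(2,2)$, $(4,1)$, $(6,0)$ used again in the proof of Lemma 4.2.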
Figure 1. The staircase complex $\operatorname{St}(K)$ for the torus knot $T(3,7)$.
Given any finitely generated bifiltered complex $S$, one can form a larger
complex $S\otimes\mathbb{Z}_{2}[U,U^{-1}]$, with differentials defined by
$\partial(x\otimes U^{i})=(\partial x)\otimes U^{i}$. It is graded by
$gr(x\otimes U^{k})=gr(x)-2k$. Similarly, if $x$ is at filtration level
$(i,j)$, then $x\otimes U^{k}$ is at filtration level $(i-k,j-k)$. If $K$
admits an $L$–space surgery, then
$\operatorname{St}(K)\otimes\mathbb{Z}_{2}[U,U^{-1}]$ is isomorphic to
$\operatorname{\it CFK}^{\infty}(K)$. Figure 2 illustrates a portion of
$\operatorname{St}(T(3,7))\otimes\mathbb{Z}_{2}[U,U^{-1}]$; that is, a portion
of the Heegaard Floer complex $\operatorname{\it CFK}^{\infty}(T(3,7))$.
Figure 2. A portion of $\operatorname{\it CFK}^{\infty}(T(3,7))$.
### 4.2. $d$–invariants from $\operatorname{\it CFK}^{\infty}(K)$.
We will not present the general definition of the $d$–invariant of a
3–manifold with Spin$^{c}$ structure; details can be found in [19]. However, in the
case that a 3–manifold is of the form $S^{3}_{q}(K)$ where $q\geq 2g(K)$,
there is a simple algorithm (originating from [20, Section 4];
we use the approach of [9, 10]) to determine this invariant from
$\operatorname{\it CFK}^{\infty}(K)$.
If $m$ satisfies $-q/2\leq m\leq q/2$, one can form the quotient complex
$\operatorname{\it CFK}^{\infty}(K)/\operatorname{\it
CFK}^{\infty}(K)\{i<0,j<m\}.$
We let $d_{m}$ denote the least grading in which this complex has a nontrivial
homology class, say $[z]$, where $[z]$ must satisfy the added constraint that
for all $i>0$, $[z]=U^{i}[z_{i}]$ for some homology class $[z_{i}]$ of grading
$d_{m}+2i$.
In [20, Theorem 4.4], we find the following result.
###### Theorem 4.1.
For the Spin$^{c}$ structure ${\mathfrak{s}}_{m}$,
$d(S^{3}_{q}(K),{{\mathfrak{s}}}_{m})=d_{m}+\frac{(-2m+q)^{2}-q}{4q}$.
### 4.3. From staircase complexes to the $d$–invariants
Let us now define a distance function for a staircase complex by the formula
$J_{K}(m)=\min_{(v_{1},v_{2})\in\operatorname{Vert}(\operatorname{St}(K))}\max(v_{1},v_{2}-m),$
where $v_{1},v_{2}$ are coordinates of the vertex $v$. Observe that the
minimum can always be taken with respect to the set of vertices of type A. The
function $J_{K}(m)$ represents the greatest $r$ such that the region $\\{i\leq
0,j\leq m\\}$ intersects $\operatorname{St}(K)\otimes U^{r}$ nontrivially. It
is immediately clear that $J_{K}(m)$ is a non-increasing function. It is also
immediate that for $m\geq g$ we have $J_{K}(m)=0$.
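$J_{K}$ can be computed directly from the type A vertices. For $T(3,7)$ a brief Python sketch (ours) confirms the two properties just noted:

```python
def J(vertices, m):
    """J_K(m) = min over type A staircase vertices of max(v1, v2 - m)."""
    return min(max(v1, v2 - m) for v1, v2 in vertices)

A37 = [(0, 6), (1, 4), (2, 2), (4, 1), (6, 0)]   # type A vertices of St(T(3,7))
values = [J(A37, m) for m in range(-6, 7)]       # g = 6
```

The resulting sequence is non-increasing and vanishes from $m=g=6$ on.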
Figure 3. The function $J(m)$ for the knot $T(3,7)$. When $(0,m)$ lies on the dashed vertical intervals, the function $J(m)$ is constant; when it is on solid vertical intervals the function $J(m)$ is decreasing. The dashed lines connecting vertices to points on the vertical axis indicate how the ends of dashed and solid intervals are constructed.
For the sake of the next lemma we define $n_{-1}=-\infty$.
###### Lemma 4.2.
Suppose $m\leq g$. We have $J_{K}(m+1)-J_{K}(m)=-1$ if $n_{2i-1}-g\leq
m<n_{2i}-g$ for some $i$, and $J_{K}(m+1)=J_{K}(m)$ otherwise.
###### Proof.
The proof is purely combinatorial. We order the type A vertices of
$\operatorname{St}(K)$ so that the first coordinate is increasing, and we
denote these vertices $v_{0},\ldots,v_{k}$. For example, for
$\operatorname{St}(T(3,7))$ as depicted on Figure 1, we have $v_{0}=(0,6)$,
$v_{1}=(1,4)$, $v_{2}=(2,2)$, $v_{3}=(4,1)$ and $v_{4}=(6,0)$. We denote by
$(v_{i1},v_{i2})$ the coordinates of the vertex $v_{i}$.
A verification of the two following facts is straightforward:
(4.3) $\begin{split}\max(v_{i1},v_{i2}-m)&=v_{i1}\textrm{ if and only if
$m\geq v_{i1}-v_{i2}$}\\\
\max(v_{i1},v_{i2}-m)&\geq\max(v_{i-1,1},v_{i-1,2}-m)\textrm{ if and only if
$m\leq v_{i1}-v_{i-1,2}$}.\end{split}$
By the definition of the staircase complex we also have
$v_{i1}-v_{i2}=n_{2i}-g$ and $v_{i1}-v_{i-1,2}=n_{2i-1}-g$. The second
equation of (4.3) yields
$J_{K}(m)=\max(v_{i1},v_{i2}-m)\text{ if and only if
}m\in[n_{2i-1}-g,n_{2i+1}-g].$
Then the first equation of (4.3) allows us to compute the difference
$J_{K}(m+1)-J_{K}(m)$. ∎
The relationship between $J_{K}$ and the $d$–invariant is given by the next
result.
###### Proposition 4.4.
Let $K$ be an algebraic knot, let $q>2g(K)$, and let $m\in[-q/2,q/2]$ be an
integer. Then
$d(S^{3}_{q}(K),\mathfrak{s}_{m})=\frac{(-2m+q)^{2}-q}{4q}-2J_{K}(m).$
###### Proof.
Denote by $S_{i}$ the subcomplex $\operatorname{St}(K)\otimes U^{i}$ in
$\operatorname{\it CFK}^{\infty}(K)$. The result depends on understanding the
homology of the image of $S_{i}$ in $\operatorname{\it
CFK}^{\infty}(K)/\operatorname{\it CFK}^{\infty}(K)\{i<0,j<m\}$. Because of
the added constraint (see the paragraph before Theorem 4.1), we only have to
look at the homology classes supported on images of the type A vertices.
Notice that if $i>J_{K}(m)$, then at least one of the type A vertices is in
$\operatorname{\it CFK}^{\infty}(K)\{i<0,j<m\}$. But all the type A vertices
are homologous in $S_{i}$, and since these generate $H_{0}(S_{i})$, the
homology of the image in the quotient is 0. On the other hand, if $i\leq
J_{K}(m)$, then none of the vertices of $S_{i}$ are in $\operatorname{\it
CFK}^{\infty}(K)\{i<0,j<m\}$ and thus the homology of $S_{i}$ survives in
the quotient.
It follows that the least grading of a nontrivial class in the quotient arises
from the $U^{J_{K}(m)}$ translate of one of the type A vertices of
$S_{0}=\operatorname{St}(K)$. Since $U$ lowers grading by 2, the grading is
$-2J_{K}(m)$. The result follows by applying the shift described in Theorem
4.1. ∎
###### Remark 4.5.
Notice that in the case that $q$ is even, the integer values $m=-q/2$ and
$m=q/2$ are both in the given interval. One easily checks that Proposition 4.4
yields the same value at these two endpoints.
We now relate the $J$ function to the semigroup of the singular point. Let
$I_{K}$ be the gap function as in Definition 2.6 and Remark 2.10.
###### Proposition 4.6.
If $K$ is the link of an algebraic singular point, then for $-g\leq m\leq g$
$J_{K}(m)=I_{K}(m+g)$.
###### Proof.
In Section 2.2 we described the gap sequence in terms of the exponents
$n_{i}$. It follows immediately that the growth properties of $I_{K}(m+g)$ are
identical to those of $J_{K}(m)$, as described in Lemma 4.2. Furthermore,
since the largest element in the gap sequence is $2g-1$, we have
$I_{K}(2g)=J_{K}(g)=0$. ∎
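Proposition 4.6 can be verified numerically for $T(3,7)$ (self-contained Python sketch; the vertex and gap data are those of Figure 1 and Example 2.9):

```python
A37 = [(0, 6), (1, 4), (2, 2), (4, 1), (6, 0)]   # type A vertices of St(T(3,7))
G37 = [1, 2, 4, 5, 8, 11]                        # gap sequence; g = 6
g = 6

def J(m):
    # J_K(m): minimum of max(v1, v2 - m) over type A vertices
    return min(max(v1, v2 - m) for v1, v2 in A37)

def I(m):
    # gap function of Definition 2.6
    return max(0, -m) + sum(1 for k in G37 if k >= m)

agree = all(J(m) == I(m + g) for m in range(-g, g + 1))
```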
### 4.4. Proof of Theorem 1.1
According to Lemma 3.1, the Spin$^{c}$ structures on $S^{3}_{d^{2}}(K)$ that extend
to the complement $W$ of a neighborhood of $C$ are precisely those
${{\mathfrak{s}}}_{m}$ where $m=kd$ for some $k$, where $-d/2\leq k\leq d/2$;
here $k\in\mathbb{Z}$ if $d$ odd, and $k\in\mathbb{Z}+\frac{1}{2}$ if $d$ is
even. Since $W$ is a rational homology sphere, by [19, Proposition 9.9] the
associated $d$–invariants are 0, so by Proposition 4.4, letting $q=d^{2}$ and
$m=kd$, we have
$2J_{K}(kd)=\frac{(-2kd+d^{2})^{2}-d^{2}}{4d^{2}}.$
By Proposition 4.6 we can replace this with
$8I_{G_{K}}(kd+g)=(d-2k-1)(d-2k+1).$
Now $g=d(\frac{d-3}{2})+1$, so by substituting $j=k+\frac{d-3}{2}$ we obtain
$8I_{K}(jd+1)=4(d-j-1)(d-j-2),$
and $j\in[-3/2,d-3/2]$ is an integer regardless of the parity of $d$; that is,
$j\in\{-1,0,\ldots,d-2\}$.
The proof is accomplished by recalling that $k_{jd}=I_{K}(jd+1)$, see Remark
2.8.
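For a unicuspidal curve the statement amounts to $I_{K}(jd+1)=\frac{(j-d+1)(j-d+2)}{2}$ (compare Theorem 5.4 below with $n=1$). The Python sketch below (ours) tests this for the family $x^{d}+y^{d-1}z=0$ of Section 2.3, whose cusp has type $(d-1;d)$:

```python
def cusp_gap_function(p, q, m):
    """I_K(m) for the link of a (p; q) cusp (Definition 2.6)."""
    mu = (p - 1) * (q - 1)
    S = {a * p + b * q for a in range(mu) for b in range(mu)}
    gaps = [n for n in range(mu) if n not in S]
    return max(0, -m) + sum(1 for k in gaps if k >= m)

# Check the criterion for degrees 3..8 and all j in {-1, ..., d-2}.
ok = all(
    cusp_gap_function(d - 1, d, j * d + 1) == (j - d + 1) * (j - d + 2) // 2
    for d in range(3, 9)
    for j in range(-1, d - 1)
)
```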
## 5\. Constraints on general rational cuspidal curves
### 5.1. Products of staircase complexes and the $d$–invariants
In the case that there is more than one cusp, the previous approach continues
to apply, except the knot $K$ is now a connected sum of algebraic knots.
For the connected sum $K=\#K_{i}$, the complex $\operatorname{\it
CFK}^{\infty}(K)$ is the tensor product of the $\operatorname{\it
CFK}^{\infty}(K_{i})$. To analyze this, we consider the tensor product of the
staircase complexes $\operatorname{St}(K_{i})$. Although this is not a
staircase complex, the homology is still $\mathbb{Z}_{2}$, supported at
grading level 0. For the tensor product we shall denote by
$\operatorname{Vert}(\operatorname{St}(K_{1})\otimes\ldots\otimes\operatorname{St}(K_{n}))$
the set of vertices of the corresponding complex. These are of the form
$v_{1}+\ldots+v_{n}$, where $v_{j}\in\operatorname{Vert}(K_{j})$,
$j=1,\ldots,n$.
Any element of the form $a_{1q_{1}}\otimes a_{2q_{2}}\otimes\cdots\otimes
a_{nq_{n}}$ represents a generator of the homology of the tensor product,
where the $a_{iq_{i}}$ are vertices of type A taken from each
$\operatorname{St}(K_{i})$. Furthermore, if the translated subcomplex
$\text{St}(K)\otimes U^{i}\subset\text{St}(K)\otimes\mathbb{Z}_{2}[U,U^{-1}]$
intersects $\operatorname{\it CFK}^{\infty}(K)\{i<0,j<m\}$ nontrivially,
then the intersection contains one of these generators. Thus, the previous
argument applies to prove the following.
###### Proposition 5.1.
Let $q>2g-1$, where $g=g(K)$ and $m\in[-q/2,q/2]$. Then we have
$d(S^{3}_{q}(K),\mathfrak{s}_{m})=-2J_{K}(m)+\frac{(-2m+q)^{2}-q}{4q},$
where $J_{K}(m)$ is the minimum of $\max(\alpha,\beta-m)$ over all elements of
form $a_{1q_{1}}\otimes a_{2q_{2}}\otimes\ldots\otimes a_{nq_{n}}$, where
$(\alpha,\beta)$ is the filtration level of the corresponding element.
Since the $d$–invariants vanish for all Spin$^{c}$ structures that extend to $W$, we
have:
###### Theorem 5.2.
If $C$ is a rational cuspidal curve of degree $d$ with singular points $K_{i}$
and $K=\#K_{i}$, then for all $k$ in the range $[-d/2,d/2]$, with
$k\in\mathbb{Z}$ for $d$ odd and $k\in\mathbb{Z}+\frac{1}{2}$ for $d$ even:
$J_{K}(kd)=\frac{(d-2k-1)(d-2k+1)}{8}.$
###### Proof.
From the vanishing of the $d$–invariants
$d(S^{3}_{d^{2}}(K),{\mathfrak{s}}_{m})$ (for $m=kd$) we obtain the condition
$J_{K}(m)=\frac{(-2m+d^{2})^{2}-d^{2}}{8d^{2}}.$
The result then follows by substituting $m=kd$ and performing algebraic
simplifications. ∎
### 5.2. Restatement in terms of $I_{K_{i}}(m)$.
We now wish to restate Theorem 5.2 in terms of the coefficients of the
Alexander polynomial, properly expanded. As before, for the gap sequence for
the knot $K_{i}$, denoted $G_{K_{i}}$, let
$I_{i}(s)=\#\{k\geq s\colon k\in G_{K_{i}}\cup\mathbb{Z}_{<0}\}.$
For two functions $I,I^{\prime}\colon\mathbb{Z}\to\mathbb{Z}$ bounded below we
define the following operation
(5.3) $I\diamond I^{\prime}(s)=\min_{m\in\mathbb{Z}}\left(I(m)+I^{\prime}(s-m)\right).$
As pointed out to us by Krzysztof Oleszkiewicz, in real analysis this
operation is sometimes called the _infimum convolution_.
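Since each gap function vanishes far to the right and grows linearly to the left, the minimum in (5.3) is attained on a bounded window, so the infimum convolution can be computed by a finite search (Python sketch; helper names are ours):

```python
def diamond(f1, f2, s, window=200):
    """Infimum convolution (5.3): min over m of f1(m) + f2(s - m),
    searched over a finite window (sufficient for gap functions)."""
    return min(f1(m) + f2(s - m) for m in range(-window, window + 1))

def gap_fn(gaps):
    # gap function of Definition 2.6 for a finite gap sequence
    return lambda m: max(0, -m) + sum(1 for k in gaps if k >= m)

I23 = gap_fn([1])        # gap function of the (2; 3) cusp
```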
The following is the main result of this article.
###### Theorem 5.4.
Let $C$ be a rational cuspidal curve of degree $d$. Let $I_{1},\dots,I_{n}$ be
the gap functions associated to each singular point on $C$. Then for any
$j\in\\{-1,0,\ldots,d-2\\}$ we have
$I_{1}\diamond I_{2}\diamond\ldots\diamond
I_{n}(jd+1)=\frac{(j-d+1)(j-d+2)}{2}.$
###### Remark 5.5.
* •
For $j=-1$, the left hand side is $d(d-1)/2=d-1+(d-1)(d-2)/2$. The meaning of
the equality is that $\sum\#G_{j}=(d-1)(d-2)/2$ which follows from (2.1) and
Lemma 2.4. Thus, the case $j=-1$ does not provide any new constraints.
Likewise, for $j=d-2$ both sides are equal to $0$.
* •
We refer to Section 6.2 for a reformulation of Theorem 5.4.
* •
We do not know if Theorem 5.4 settles Conjecture 1.2 for $n>1$. The passage
between the two formulations appears to be more complicated; see [17,
Proposition 7.1.3] and the example in Section 6.1.
Theorem 5.4 is an immediate consequence of the arguments in Section 4.4
together with the following proposition.
###### Proposition 5.6.
As in (5.3), let $I_{K}$ be given by $I_{1}\diamond\ldots\diamond I_{n}$, for
the gap functions $I_{1},\ldots,I_{n}$. Then $J_{K}(m)=I_{K}(m+g)$.
###### Proof.
The proof follows by induction over $n$. For $n=1$, the statement is
equivalent to Proposition 4.6. Suppose we have proved it for $n-1$. Let
$K^{\prime}=K_{1}\#\ldots\#K_{n-1}$ and let $J_{K^{\prime}}(m)$ be the
corresponding $J$ function. Let us consider a vertex
$v\in\operatorname{Vert}(\operatorname{St}(K_{1})\otimes\ldots\otimes\operatorname{St}(K_{n}))$.
We can write this as $v^{\prime}+v_{n}$, where
$v^{\prime}\in\operatorname{Vert}(\operatorname{St}(K_{1})\otimes\cdots\otimes\operatorname{St}(K_{n-1}))$
and $v_{n}\in\operatorname{Vert}(\operatorname{St}(K_{n}))$. We write the
coordinates of the vertices as $(v_{1},v_{2})$,
$(v^{\prime}_{1},v^{\prime}_{2})$ and $(v_{n1},v_{n2})$, respectively. We have
$v_{1}=v^{\prime}_{1}+v_{n1}$, $v_{2}=v^{\prime}_{2}+v_{n2}$. We shall need
the following lemma.
###### Lemma 5.7.
For any four integers $x,y,z,w$ we have
$\max(x+y,z+w)=\min_{k\in\mathbb{Z}}\left(\max(x,z-k)+\max(y,w+k)\right).$
###### Proof of Lemma 5.7.
The direction ‘$\leq$’ is trivial. The equality is attained at $k=z-x$. ∎
_Continuation of the proof of Proposition 5.6._
Applying Lemma 5.7 to $v_{1}^{\prime},v_{2}^{\prime},v_{n1},v_{n2}-m$ and
taking the minimum over all vertices $v$ we obtain
$J_{K}(m)=\min_{v\in\operatorname{Vert}(\operatorname{St}(K_{1})\otimes\ldots\otimes\operatorname{St}(K_{n}))}\max(v_{1},v_{2}-m)
=\min_{v^{\prime}\in\operatorname{Vert}^{\prime}}\min_{v_{n}\in\operatorname{Vert}_{n}}\min_{k\in\mathbb{Z}}\left(\max(v^{\prime}_{1},v_{2}^{\prime}-k)+\max(v_{n1},v_{n2}+k-m)\right),$
where we denote
$\operatorname{Vert}^{\prime}=\operatorname{Vert}(\operatorname{St}(K_{1})\otimes\cdots\otimes\operatorname{St}(K_{n-1}))$
and $\operatorname{Vert}_{n}=\operatorname{Vert}(\operatorname{St}(K_{n}))$.
The last expression is clearly
$\min_{k\in\mathbb{Z}}\left(J_{K^{\prime}}(k)+J_{K_{n}}(m-k)\right)$. By the induction
assumption this is equal to
$\min_{k\in\mathbb{Z}}\left(I_{K^{\prime}}(k+g^{\prime})+I_{K_{n}}(m-k+g_{n})\right)=I_{K}(m+g),$
where $g^{\prime}=g(K^{\prime})$ and $g_{n}=g(K_{n})$ are the genera, and we
use the fact that $g=g^{\prime}+g_{n}$. ∎
## 6\. Examples and applications
### 6.1. A certain curve of degree $6$
As described, for instance, in [5, Section 2.3, Table 1], there exists an
algebraic curve of degree $6$ with two singular points, the links of which are
$K=T(4,5)$ and $K^{\prime}=T(2,9)$. The values of $I_{K}(m)$ for
$m\in\{0,\ldots,11\}$ are $\{6,6,5,4,3,3,3,2,1,1,1,1\}$. The values of
$I_{K^{\prime}}(m)$ for $m\in\{0,\ldots,7\}$ are $\{4,4,3,3,2,2,1,1\}$. We
readily get
$I\diamond I^{\prime}(1)=10,\ I\diamond I^{\prime}(7)=6,\ I\diamond
I^{\prime}(13)=3,\ I\diamond I^{\prime}(19)=1,$
exactly as predicted by Theorem 5.4.
On the other hand, the computations in [5] confirm Conjecture 1.2 but we
sometimes have an inequality. For example $k_{6}=5$, whereas Conjecture 1.2
states $k_{6}\leq 6$. This shows that Theorem 5.4 is indeed more precise.
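The displayed values can be reproduced mechanically (Python sketch; all helper names are ours):

```python
def gap_fn(p, q):
    """Gap function of the semigroup generated by p and q."""
    mu = (p - 1) * (q - 1)
    S = {a * p + b * q for a in range(mu) for b in range(mu)}
    gaps = [n for n in range(mu) if n not in S]
    return lambda m: max(0, -m) + sum(1 for k in gaps if k >= m)

def diamond(f1, f2, s, window=100):
    """Infimum convolution (5.3), searched over a finite window."""
    return min(f1(m) + f2(s - m) for m in range(-window, window + 1))

I, Ip = gap_fn(4, 5), gap_fn(2, 9)                   # T(4,5) and T(2,9)
vals = [diamond(I, Ip, 6 * j + 1) for j in range(4)]  # j = 0, 1, 2, 3; d = 6
```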
### 6.2. Reformulations of Theorem 5.4
Theorem 5.4 was formulated in a way that fits best with its theoretical
underpinnings. In some applications, it is advantageous to reformulate the
result in terms of the function counting semigroup elements in the interval
$[0,k]$. To this end, we introduce some notation.
Recall that for a semigroup $S\subset\mathbb{Z}_{\geq 0}$, the gap sequence
$G$ of $S$ is $\mathbb{Z}_{\geq 0}\setminus S$. We put $g=\#G$ and for $m\geq 0$ we
define
(6.1) $R(m)=\#\{j\in S\colon j\in[0,m)\}.$
###### Lemma 6.2.
For $m\geq 0$, $R(m)$ is related to the gap function $I(m)$ (see (2.7)) by the
following relation:
(6.3) $R(m)=m-g+I(m).$
###### Proof.
Let us consider an auxiliary function $K(m)=\#\{j\in[0,m)\colon j\in G\}$. Then
$K(m)=g-I(m)$. Now $R(m)+K(m)=m$, which completes the proof. ∎
We extend $R(m)$ by (6.3) for all $m\in\mathbb{Z}$. We remark that $R(m)=m-g$
for $m>\sup G$ and $R(m)=0$ for $m<0$. In particular, $R$ is a non-negative,
non-decreasing function.
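Lemma 6.2 and the extension (6.3) are easy to test on, say, the semigroup $\langle 3,7\rangle$ (Python sketch, our naming):

```python
GAPS = [1, 2, 4, 5, 8, 11]          # gaps of <3, 7>; g = 6
g = len(GAPS)

def I(m):
    # gap function of Definition 2.6
    return max(0, -m) + sum(1 for k in GAPS if k >= m)

def R(m):
    """#(S ∩ [0, m)) for m >= 0, extended to all m by (6.3)."""
    return m - g + I(m)

samples = [R(m) for m in (-3, 0, 1, 4, 12, 20)]
```

As claimed, $R$ is non-negative and non-decreasing, with $R(m)=0$ for $m<0$ and $R(m)=m-g$ beyond the largest gap.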
We have the following result.
###### Lemma 6.4.
Let $I_{1},\dots,I_{n}$ be the gap functions corresponding to the semigroups
$S_{1},\ldots,S_{n}$. Let $g_{1},\dots,g_{n}$ be given by
$g_{j}=\#(\mathbb{Z}_{\geq 0}\setminus S_{j})$. Let $R_{1},\ldots,R_{n}$ be
as in (6.1). Then
$R_{1}\diamond R_{2}\diamond\ldots\diamond
R_{n}(m)=m-g+I_{1}\diamond\ldots\diamond I_{n}(m),$
where $g=g_{1}+\ldots+g_{n}$.
###### Proof.
To simplify the notation, we assume that $n=2$; the general case follows by
induction. We have
$\begin{aligned}R_{1}\diamond R_{2}(m)&=\min_{k\in\mathbb{Z}}\left(R_{1}(k)+R_{2}(m-k)\right)\\
&=\min_{k\in\mathbb{Z}}\left(k-g_{1}+I_{1}(k)+m-k-g_{2}+I_{2}(m-k)\right)\\
&=m-g_{1}-g_{2}+I_{1}\diamond I_{2}(m).\end{aligned}$
∎
Now we can reformulate Theorem 5.4:
###### Theorem 6.5.
For any rational cuspidal curve of degree $d$ with singular points
$z_{1},\dots,z_{n}$, and for $R_{1},\dots,R_{n}$ the functions as defined in
(6.1), one has that for any $j\in\{-1,\ldots,d-2\}$,
$R_{1}\diamond R_{2}\diamond\ldots\diamond R_{n}(jd+1)=\frac{(j+1)(j+2)}{2}.$
This formulation follows from Theorem 5.4 by an easy algebraic manipulation
together with the observation that by (2.1) and Lemma 2.4, the quantity $g$
from Lemma 6.4 is given by $\frac{(d-1)(d-2)}{2}$.
The formula bears strong resemblance to [5, Proposition 2], but in that
article only the ‘$\geq$’ part is proved and an equality in case $n=1$ is
conjectured.
###### Remark 6.6.
Observe that by definition
$R_{1}\diamond\ldots\diamond R_{n}(k)=\min_{\begin{subarray}{c}k_{1},\ldots,k_{n}\in\mathbb{Z}\\ k_{1}+\ldots+k_{n}=k\end{subarray}}\left(R_{1}(k_{1})+\ldots+R_{n}(k_{n})\right).$
Since for negative values $R_{j}(k)=0$ and $R_{j}$ is non-decreasing on
$[0,\infty)$, the minimum will always be achieved for
$k_{1},\ldots,k_{n}\geq-1$.
### 6.3. Applications
From Theorem 6.5 we can deduce many general estimates for rational cuspidal
curves. Throughout this subsection we shall be assuming that $C$ has degree
$d$, its singular points are $z_{1},\ldots,z_{n}$, the semigroups are
$S_{1},\ldots,S_{n}$, and the corresponding $R$–functions are
$R_{1},\ldots,R_{n}$. Moreover, we assume that the characteristic sequence of
the singular point $z_{i}$ is $(p_{i};q_{i1},\ldots,q_{ik_{i}})$. We order the
singular points so that $p_{1}\geq p_{2}\geq\ldots\geq p_{n}$.
We can immediately prove the result of Matsuoka–Sakai, [11], following the
ideas in [5, Section 3.5.1].
###### Proposition 6.7.
We have $p_{1}>d/3$.
###### Proof.
Suppose $3p_{1}\leq d$. It follows that for any $j$, $3p_{j}\leq d$. Let us
choose $k_{1},\ldots,k_{n}\geq-1$ such that $\sum k_{j}=d+1$. For any $j$, the
elements $0,p_{j},2p_{j},\ldots$ all belong to $S_{j}$. The function
$R_{j}(k_{j})$ counts elements in $S_{j}$ strictly smaller than $k_{j}$, hence
for any $\varepsilon>0$ we have
$R_{j}(k_{j})\geq
1+\genfrac{\lfloor}{\rfloor}{}{1}{k_{j}-\varepsilon}{p_{j}}.$
Using $3p_{j}\leq d$ we rewrite this as $R_{j}(k_{j})\geq
1+\genfrac{\lfloor}{\rfloor}{}{1}{3k_{j}-3\varepsilon}{d}$. Since
$\varepsilon>0$ is arbitrary, setting $\delta_{j}=1$ if $d|3k_{j}$, and $0$
otherwise, we write
$R_{j}(k_{j})\geq 1+\genfrac{\lfloor}{\rfloor}{}{1}{3k_{j}}{d}-\delta_{j}.$
We get
(6.8) $\sum_{j\colon
d|3k_{j}}R_{j}(k_{j})\geq\genfrac{\lfloor}{\rfloor}{}{1}{\sum 3k_{j}}{d}.$
Using the fact that
$\genfrac{\lfloor}{\rfloor}{}{1}{a}{d}+\genfrac{\lfloor}{\rfloor}{}{1}{b}{d}\geq\genfrac{\lfloor}{\rfloor}{}{1}{a+b}{d}-1$
for any $a,b\in\mathbb{Z}$, we estimate the other terms:
(6.9) $\sum_{j\colon d\nmid 3k_{j}}R_{j}(k_{j})\geq
1+\genfrac{\lfloor}{\rfloor}{}{1}{3\sum k_{j}}{d}.$
Since $\sum k_{j}=d+1$, there must be at least one $j$ for which $d$ does not
divide $3k_{j}$. Hence adding (6.8) to (6.9) we obtain
$R_{1}(k_{1})+\ldots+R_{n}(k_{n})\geq
1+\genfrac{\lfloor}{\rfloor}{}{1}{\sum_{j=1}^{n}3k_{j}}{d}=1+\genfrac{\lfloor}{\rfloor}{}{1}{3d+3}{d}=4.$
This contradicts Theorem 6.5 for $j=1$, and the contradiction concludes the
proof. ∎
We also have the following simple result.
###### Proposition 6.10.
Suppose that $p_{1}>\frac{d+n-1}{2}$. Then $q_{11}<d+n-1$.
###### Proof.
Suppose that $p_{1}>\frac{d+n-1}{2}$ and $q_{11}>d+n-1$. It follows that
$R_{1}(d+n)=2$. But then we choose $k_{1}=d+n$, $k_{2}=\ldots=k_{n}=-1$ and we
get $\sum_{j=1}^{n}R_{j}(k_{j})=2$, hence
$R_{1}\diamond R_{2}\diamond\ldots\diamond R_{n}(d+1)\leq 2$
contradicting Theorem 6.5. ∎
### 6.4. Some examples and statistics
We will now present some examples and statistics, where we compare our new
criterion with the semicontinuity of the spectrum as used in [5, Property
$(SS_{l})$] and the Orevkov criterion [18, Corollary 2.2]. It will turn out
that the semigroup distribution property is quite strong and closely related
to the semicontinuity of the spectrum, but they are not the same. There are
cases which pass one criterion and fail the other. Checking the semigroup
property is definitely a much faster task than comparing spectra; refer to [6,
Section 3.6] for more examples.
###### Example 6.11.
Among the 1,920,593 cuspidal singular points with Milnor number of the form
$(d-1)(d-2)$ for $d$ ranging between $8$ and $64$, there are only 481 that
pass the semigroup distribution criterion, that is Theorem 1.1. All of these
pass the Orevkov criterion $\overline{M}<3d-4$. Of those 481, we compute that
475 satisfy the semicontinuity of the spectrum condition and 6 of them fail
the condition; these are: $(8;28,45)$, $(12;18,49)$, $(16;56,76,85)$,
condition; these are: $(8;28,45)$, $(12;18,49)$, $(16;56,76,85)$,
$(24;36,78,91)$, $(24;84,112,125)$, $(36;54,114,133)$.
###### Remark 6.12.
The computations in Example 6.11 were made on a PC in a single afternoon.
Applying the spectrum criteria to all these cases would take much longer. The
computation for degrees between $12$ and $30$ is approximately $15$ times
faster for semigroups; the difference seems to grow with the
degree. The reason is that even though the spectrum can be given explicitly
from the characteristic sequence (see [24]), it is a set of fractional numbers
and the algorithm is complicated.
###### Example 6.13.
There are $28$ cuspidal singular points with Milnor number equal to
$110=(12-1)(12-2)$. We ask, which of these singular points can possibly occur
as a unique singular point on a degree $12$ rational curve? We apply the
semigroup distribution criterion. Only 8 singular points pass the criterion,
as seen in Table 1.
(3;56) | fails at $j=1$ | (6;9,44) | fails at $j=1$ | (8;12,14,41) | fails at $j=3$
---|---|---|---|---|---
(4;6,101) | fails at $j=1$ | (6;10,75) | fails at $j=1$ | (8;12,18,33) | fails at $j=4$
(4;10,93) | fails at $j=1$ | (6;14,59) | fails at $j=2$ | (8;12,22,25) | passes
(4;14,85) | fails at $j=1$ | (6;15,35) | fails at $j=2$ | (8;12,23) | passes
(4;18,77) | fails at $j=1$ | (6;16,51) | fails at $j=2$ | (8;14,33) | fails at $j=1$
(4;22,69) | fails at $j=1$ | (6;20,35) | fails at $j=4$ | (9;12,23) | passes
(4;26,61) | fails at $j=1$ | (6;21,26) | passes | (10;12,23) | passes
(4;30,53) | fails at $j=1$ | (6;22,27) | passes | (11;12) | passes
(4;34,45) | fails at $j=1$ | (6;23) | passes | |
(6;8,83) | fails at $j=1$ | (8;10,57) | fails at $j=2$ | |
Table 1. Semigroup property for cuspidal singular points with Milnor number
$110$. If a cuspidal singular point fails the semigroup criterion, we indicate
the first $j$ for which $I(12j+1)\neq\frac{(j-d+1)(j-d+2)}{2}$.
Among the curves in Table 1, all those that are obstructed by the semigroup
distribution property are also obstructed by the semicontinuity of the spectrum. The
spectrum also obstructs the case of $(8;12,23)$.
###### Example 6.14.
There are 2330 pairs $(a,b)$ of coprime integers such that $(a-1)(b-1)$ is of
the form $(d-1)(d-2)$ for $d=5,\ldots,200$. Again we ask if there exists a degree
$d$ rational cuspidal curve having a single singular point with characteristic
sequence $(a;b)$. Among these 2330 cases, precisely 302 satisfy the semigroup
distribution property. Out of these 302 cases, only one, namely $(2;13)$, does
not appear on the list from [5]; see Section 2.3 for the list. It is therefore
very likely that the semigroup distribution property alone is strong enough to
obtain the classification of [5].
###### Example 6.15.
In Table 2 we present all the cuspidal points with Milnor number
$(30-1)(30-2)$ that satisfy the semicontinuity of the spectrum. Out of these,
all but three ($(18;42,65)$, $(18;42,64,69)$ and $(18;42,63,68)$) satisfy
the semigroup property. All three fail the semigroup property for $j=1$. In
particular, for these three cases the semigroup property obstructs the cases
which pass the semicontinuity of the spectrum criterion.
(15; 55, 69) | (18;42,64,69) | (20; 30, 59) | (25; 30, 59)
---|---|---|---
(15; 57, 71) | (18;42,63,68) | (24; 30, 57, 62) | (27; 30, 59)
(15;59) | (20; 30,55,64) | (24;30,58,63) | (28; 30,59)
(18;42,65) | (20; 30,58,67) | (24; 30,59) | (29; 30)
Table 2. Cuspidal singular points with Milnor number $812$ satisfying the
semicontinuity of the spectrum criterion.
###### Example 6.16.
The configuration of five singular points $(2;3)$, $(2;3)$, $(2;5)$, $(5;7)$
and $(5;11)$ passes the semigroup, the spectrum and the Orevkov criterion for
a degree $10$ curve. In other words, none of the aforementioned criteria
obstructs the existence of such a curve. We point out that it is conjectured
(see [13, 22]) that a rational cuspidal curve can have at most $4$ singular
points. In other words, these three criteria alone are insufficient to prove
that conjecture.
## References
* [1] V.I. Arnold, A.N. Varchenko, S.M. Gussein–Zade, Singularities of differentiable mappings. II., “Nauka”, Moscow, 1984.
* [2] M. Borodzik, A. Némethi, Spectrum of plane curves via knot theory, J. London Math. Soc. 86 (2012), 87–110.
* [3] E. Brieskorn, H. Knörrer, Plane Algebraic Curves, Birkhäuser, Basel–Boston–Stuttgart, 1986.
* [4] J. Coolidge, _A treatise on plane algebraic curves_ , Oxford Univ. Press, Oxford, 1928.
* [5] J. Fernández de Bobadilla, I. Luengo, A. Melle-Hernández, A. Némethi, _Classification of rational unicuspidal projective curves whose singularities have one Puiseux pair_ , Proceedings of Sao Carlos Workshop 2004 Real and Complex Singularities, Series Trends in Mathematics, Birkhäuser 2007, 31–46.
* [6] J. Fernández de Bobadilla, I. Luengo, A. Melle-Hernández, A. Némethi, _On rational cuspidal projective plane curves_ , Proc. of London Math. Soc., 92 (2006), 99–138.
* [7] G. M. Greuel, C. Lossen, E. Shustin, _Introduction to singularities and deformations_ , Springer Monographs in Mathematics. Springer, Berlin, 2007.
* [8] M. Hedden, _On knot Floer homology and cabling. II_ , Int. Math. Res. Not. 2009, No. 12, 2248–2274.
* [9] S. Hancock, J. Hom, M. Newmann, _On the knot Floer filtration of the concordance group_ , preprint 2012, arxiv:1210.4193.
* [10] M. Hedden, C. Livingston, D. Ruberman, _Topologically slice knots with nontrivial Alexander polynomial_ , Adv. Math. 231 (2012), 913–939.
* [11] T. Matsuoka, F. Sakai, _The degree of rational cuspidal curves_ , Math. Ann. 285 (1989), 233–247.
* [12] J. Milnor, _Singular points of complex hypersurfaces_ , Annals of Mathematics Studies. 61, Princeton University Press and the University of Tokyo Press, Princeton, NJ, 1968.
* [13] T. K. Moe, _Rational cuspidal curves_ , Master Thesis, University of Oslo 2008, permanent link at University of Oslo: https://www.duo.uio.no/handle/123456789/10759.
* [14] M. Nagata, _On rational surfaces. I: Irreducible curves of arithmetic genus 0 or 1_ , Mem. Coll. Sci., Univ. Kyoto, Ser. A 32 (1960), 351–370.
* [15] A. Némethi, _Lattice cohomology of normal surface singularities_ , Publ. RIMS. Kyoto Univ., 44 (2008), 507–543.
* [16] A. Némethi, L. Nicolaescu, _Seiberg-Witten invariants and surface singularities: Splicings and cyclic covers_ , Selecta Math., New series, Vol. 11 (2005), 399–451.
* [17] A. Némethi, F. Róman, _The lattice cohomology of $S^{3}_{-d}(K)$_ , in: Zeta functions in algebra and geometry, 261–292, Contemp. Math., 566, Amer. Math. Soc., Providence, RI, 2012.
* [18] S. Orevkov, _On rational cuspidal curves. I. Sharp estimates for degree via multiplicity_ , Math. Ann. 324 (2002), 657–673.
* [19] P. Ozsváth, Z. Szabó, _Absolutely graded Floer homologies and intersection forms for four-manifolds with boundary_ , Adv. Math. 173 (2003), 179–261.
* [20] P. Ozsváth, Z. Szabó, _Holomorphic disks and knot invariants_ , Adv. Math. 186 (2004), 58–116.
* [21] P. Ozsváth, Z. Szabó, _On knot Floer homology and lens space surgeries_ , Topology 44 (2005), 1281–1300.
* [22] J. Piontkowski, _On the number of cusps of rational cuspidal plane curves_ , Exp. Math. 16, no. 2 (2007), 251–255.
* [23] J. Rasmussen, _Floer homology and knot complements_ , Harvard thesis, 2003, available at arXiv:math/0306378.
* [24] M. Saito, _Exponents and Newton polyhedra of isolated hypersurface singularities_ , Math. Ann. 281 (1988), 411–417.
* [25] K. Tono, _On the number of cusps of cuspidal plane curves_ , Math. Nachr. 278 (2005), 216–221.
* [26] C. T. C. Wall, Singular Points of Plane Curves London Mathematical Society Student Texts, 63. Cambridge University Press, Cambridge, 2004.
# Measuring the Change in European and US COVID-19 death rates
Zeina S. Khan
Frank Van Bussel
&
Fazle Hussain∗
Texas Tech University Department of Mechanical Engineering
2703 7th Street Box: 41021 Lubbock TX 79409
Phone: 832-863-8364
$*$<EMAIL_ADDRESS>
###### Abstract
By fitting a compartment ODE model for Covid-19 propagation to cumulative case
and death data for US states and European countries, we find that the case
mortality rate seems to have decreased by at least 80% in most of the US and
at least 90% in most of Europe. These are much larger and faster changes than
reported in empirical studies, such as the 18% decrease in mortality found for
the New York City hospital system from March to August 2020 [2]. Our reported
decreases surprisingly do not have strong correlations to other model
parameters (such as contact rate) or other standard state/national metrics
such as population density, GDP, and median age. Almost all the decreases
occurred between mid-April and mid-June, which unexpectedly corresponds to the
time when many state and national lockdowns were released resulting in surges
of new cases. Several plausible causes for this drop are examined, such as
improvements in treatment, face mask wearing, a new virus strain, and
potentially changing demographics of infected patients, but none are
overwhelmingly convincing given the currently available evidence.
## Introduction
A novel strain of coronavirus, SARS-CoV-2, causing Covid-19 disease, was
identified in December 2019 by Chinese Health authorities in the city of Wuhan
(Hubei), China [3, 4]. This disease has spread worldwide and many governments
instituted measures to contain its outbreak, including city and state
lockdowns and prohibiting travel from affected areas [5]. However, such
restrictions are difficult to sustain in the long term, with millions of
people being affected by poverty and unemployment [5]. As a result, many
nations have eased population restrictions as of May 2020 to lessen the
economic impact of the disease [5, 6, 7]. A global pandemic is ongoing, with
over 40 million worldwide cases and 1.1 million deaths as of October 18, 2020
as declared by the World Health Organization (WHO) [8]. Currently, several
European nations are considering reimposing lockdowns and mobility
restrictions to contain the surging virus cases [9].
Surprisingly, despite the large increases in Covid-19 cases in the United
States and Europe since many countries and States eased lockdowns [9, 10], the
number of deaths due to this virus has not mirrored the dramatically increased
case counts. Though this trend has been noted by political and health
commentators [11, 12, 13], there are few mentions of any change of death rates
in the epidemiological and modeling literature. Clinical observations of a
continually decreasing death rate have been made in a New York City hospital
system, with a rate that had dropped by 18.2 percent from March to August [2].
Corroborating this, time-series generated for a model-based study of Covid-19
in New York City imply that the infection-fatality risk dropped by
approximately $\frac{1}{3}$ from early April to late May 2020 for people 65
years of age or older, whereas it barely changed for people less than 45
(fluctuations in the rate for the 45–64 years old group make the net effect
difficult to ascertain) [14]. Similarly, a review of English hospital
surveillance data found that the survival of hospitalized Covid-19 patients in
intensive care and high-dependency units increased by approximately 11% per
week from late March up to the third week of June 2020 [15]. This study notes
that these improvements in survival were consistent across subgroups defined
by age, ethnicity, and major co-morbidities, among others [15]. While these
observations are consistent with our own, possible underlying causes were not
described in these reports.
By fitting a compartmental ODE disease model to state and national case and
death data we have been able to measure changes in the case mortality rate
across entire jurisdictions for all US states plus Washington DC and Puerto
Rico, and all European countries (except Russia) plus Turkey. Our finding is
that in most of these jurisdictions the death rate for diagnosed individuals
decreased dramatically ($\approx$ 80% in the US and 90% in Europe), and almost
all jurisdictions had a decrease of at least 30%. These decreases happened
largely in late April, May, and early June as many jurisdictions were easing
lockdowns which resulted in surging cases. Having checked several quantitative
regional factors that could influence these fatality rates, including basic
age demographics, population density, geographical location, and certain
economic indicators, we have surprisingly not found strong correlations to the
magnitude of the drop in death rate, or the initial or final death rates
individually. Several plausible causes for this dramatic drop are examined,
such as improvements in treatment, face mask wearing, a new virus strain, and
potentially changing demographics of infected patients, but none alone
convincingly explain the magnitude of change we have measured given the
currently available evidence.
## Calculating the Changes in Death Rate
To calculate the change in death rate we used a slightly modified version of
the compartment model first presented in [16]. This is a SIR-based ODE model
that includes extra compartments and transfer rates to deal with: detected
versus undetected infecteds, isolation on diagnosis, effects of social
distancing policies, and possible loss of immunity for recovered populations
(both detected and undetected). As well, it uses a power-law incidence rate to
adjust for the effect of heterogeneous population densities. While this model
could have many uses, our original purpose was to try to measure what
proportion of infecteds were eventually being detected (our finding: about
half in almost all jurisdictions).
Figure 1: Schematic of the compartments – $S$ susceptible, $I_{U}$ undetected
infected, $I_{D}$ detected infected, $D$ detected deceased, $Q$ sequestered,
$R_{U}$ undetected recovered, and $R_{D}$ detected recovered. Transfers between
compartments are indicated by arrows, where the detection rate $\delta$ and
detected death rate $\gamma$ are highlighted. Note that $R_{U}$ and $R_{D}$
are also transferred back to $S$ at a particular rate due to loss of immunity.
This is a small effect over the time scales we have considered here, therefore
arrows were omitted for the sake of clarity.
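As a concrete, much simplified illustration of this compartment structure, the sketch below integrates flows between the compartments of figure 1 with forward Euler. The specific transfer terms, the parameter names `beta`, `p`, `rho_u`, `rho_d`, and `eta`, and the choice to let only undetected infecteds transmit are our own illustrative assumptions; the actual model of [16] additionally handles sequestration dynamics and time-varying social distancing.

```python
def step(state, params, dt=0.1):
    """One forward-Euler step of a simplified compartment model in the
    spirit of figure 1.  Only delta (detection) and gamma (detected death
    rate) match the paper's notation; the other rates are illustrative."""
    S, I_U, I_D, D, Q, R_U, R_D = state
    N = S + I_U + I_D + Q + R_U + R_D          # living population
    beta  = params["beta"]    # contact rate
    p     = params["p"]       # power-law incidence exponent
    delta = params["delta"]   # detection rate (I_U -> I_D)
    gamma = params["gamma"]   # detected death rate (I_D -> D)
    rho_u = params["rho_u"]   # recovery rate, undetected
    rho_d = params["rho_d"]   # recovery rate, detected
    eta   = params["eta"]     # immunity-loss rate (R -> S)

    # power-law incidence; only undetected infecteds spread (detected isolate)
    new_inf = beta * S * (I_U / N) ** p
    dS   = -new_inf + eta * (R_U + R_D)
    dI_U = new_inf - (delta + rho_u) * I_U
    dI_D = delta * I_U - (gamma + rho_d) * I_D
    dD   = gamma * I_D
    dQ   = 0.0                # sequestration dynamics omitted in this sketch
    dR_U = rho_u * I_U - eta * R_U
    dR_D = rho_d * I_D - eta * R_D
    return [x + dt * dx for x, dx in
            zip(state, [dS, dI_U, dI_D, dD, dQ, dR_U, dR_D])]
```

Note that every outflow appears as an inflow elsewhere, so total population (including deaths) is conserved by construction.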
As one can see in figure 1, there is also a compartment for deaths of detected
infected individuals. This was not necessary for the model per se; deaths are
not part of the basic SIR model, which is static, and we did not include a
compartment for deaths of undetected infecteds either. However, deaths due to
COVID-19 are a readily available statistic, and possibly more trustworthy than
caseloads. So, on the assumption that the proportion of serious-enough-to-be-
diagnosed cases which are fatal is a relatively stable aspect of the disease
over the longer term, we added detected deaths to the model to aid in its use
for fitting to empirical data. Note: we were aware that the case mortality
rate in the early days of any outbreak is often large, owing to unfamiliarity
with the disease and the seriousness of the first few diagnosed cases; but
this rate tends to drop quickly, and the number of early cases as a proportion
of those eventually infected is small, so this transient effect did not affect
our fits.
Coding of the ODE solutions and fitting routines was done in Matlab [16],
with empirical data (cumulative cases and deaths, for the US by county, and
globally by nation) obtained from Johns Hopkins University Center for Systems
Science and Engineering [17]. Since the model is non-linear, fitting requires
an iterative search through the solution space, so there is no guarantee of
obtaining optimal solutions within any set running time, but we found that
with various adjustments to the fit parameters we were able to get good fits
for all US states within a couple of hours (the criterion we used for goodness
of fit was that the coefficient of determination $R^{2}\geq 0.95$). We were also
able to fit all European countries Covid-19 case and death data (except
Russia) using the same optimized code with similar results.
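The acceptance criterion is the usual coefficient of determination, which can be sketched in a few lines (a pure-Python version; the function name is ours):

```python
def r_squared(observed, modeled):
    """Coefficient of determination R^2; fits were accepted when
    R^2 >= 0.95 against the empirical series."""
    mean = sum(observed) / len(observed)
    ss_res = sum((y - f) ** 2 for y, f in zip(observed, modeled))  # residual sum of squares
    ss_tot = sum((y - mean) ** 2 for y in observed)                # total sum of squares
    return 1.0 - ss_res / ss_tot
```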
Figure 2: Model fits using one death rate to cumulative case and death data
for (a) Washington State and (b) Belgium. These two jurisdictions (which will
be used to illustrative examples throughout the paper) were chosen at random
from out of US and European datasets.
In the late summer of 2020 we started submitting predictions obtained from our
model to the COVID-19 ForecastHub on GitHub [cite TBA]. When checking the
results of 1-month-ahead death predictions, they seemed to be quite high,
given that contemporary measures of deaths for various jurisdictions [11] were
not showing spiking activity (though the recent new spikes in cases at the
time suggested that deaths should be on the rise too). The problem was that
the empirical (and therefore model) deaths were small in number compared with
confirmed cases, so discrepancies between the model deaths and data were not
readily apparent to visual inspection or the error criterion used by the fit
routine, as can be seen from figure 2.
Figure 3: One-death-rate model fit residuals for (a) Washington State and (b)
Belgium. Note distinct time-dependent bias in error for fits to death data.
However, a closer look at the death curves alone revealed that while the error
level was within the desired tolerance, the residuals were not (more-or-less)
randomly distributed across time, but showed a distinct bias, undershooting
during the first half of the fit period and overshooting at later times, so
the fit curve missed the contour actually described by the death data – see
figures 3 for residuals and 4 (c) and (d) for closeups of the fits. This
caused the slope of the model deaths at the end of the fit period to be
greater than that implied by the empirical data, so that model projections of
deaths into the fairly near future would overestimate the number
significantly. Since this had not been a problem earlier, the implication was
that the cumulative deaths were no longer shadowing the cumulative case count.
Figure 4: Comparison of one-death-rate to two-death-rate model fits for (left)
Washington State and (right) Belgium. (a and b) Cumulative confirmed cases,
which show practically no change between versions. (c and d) Cumulative
deaths.
The solution was to make a simple modification of the model (and code) to
incorporate a second death rate, with a changeover at a specific date. We then
have two death rates, $\gamma_{1}$ and $\gamma_{2}$, plus a time parameter
$t_{\gamma}$ specifying changeover day (from beginning of fit period). In the
implementation of the ODE’s, the rate changes linearly from $\gamma_{1}$ to
$\gamma_{2}$ across 4 weeks centered on $t_{\gamma}$. As well, in the fitter
the death data is weighted to give it more emphasis in relation to the
confirmed case data. All other aspects of the model and implementation were
kept as is; i.e. all other rates (except for the already time-varying
sequestration rate $q$) stay constant, and no change was made to the
methodology with respect to solving the ODE’s or non-linear minimization used
by the fitter [16]. The result is that the model confirmed case curves are
practically identical to before, while the model death curve now falls quite
precisely over the empirical data points – see figure 4.
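The changeover just described can be written as a small helper: hold $\gamma_{1}$, ramp linearly to $\gamma_{2}$ across 28 days centered on $t_{\gamma}$, then hold $\gamma_{2}$. The function name and signature are our own.

```python
def death_rate(t, gamma1, gamma2, t_gamma, ramp_days=28):
    """Two-rate death rate: gamma1 before the ramp, gamma2 after it,
    with a linear changeover over ramp_days centered on day t_gamma."""
    start = t_gamma - ramp_days / 2
    if t <= start:
        return gamma1
    if t >= start + ramp_days:
        return gamma2
    frac = (t - start) / ramp_days      # 0 at ramp start, 1 at ramp end
    return gamma1 + frac * (gamma2 - gamma1)
```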
Figure 5: Residuals for two-death-rate fits to (a) Washington state and (b)
Belgium. The modified fit routine’s differential weighting of confirmed case
and death data results in much smaller residuals for death fit relative to
confirmed case fit.
Checking single-death-rate fits done at the beginning of September against
empirical data going up to September 25, we find that deaths in both the US
and Europe would be overestimated by approximately 50% (a 98,000
overcount for the US, and a 118,000 overcount for Europe), while the two-rate
version misses by only 9 and 2% respectively.
## Results
To obtain our results, model fits were done on 52 US jurisdictions (all states,
plus Washington DC and Puerto Rico), and on 49 European zone countries (i.e.
all Europe proper except for Russia, plus Turkey). The fit period was Jan 22
to Sept 2 for Europe and Jan 22 to Sept 11 for the US. Each fit provided the
two death rates and a changeover time (in days from the start of the fit
period, Jan 22 2020); percent change from $\gamma_{1}$ to rate $\gamma_{2}$
was calculated as well ($100\times\frac{\gamma_{2}-\gamma_{1}}{\gamma_{1}}$).
See tables LABEL:Tallus and LABEL:Talleu in the Appendix for a full listing of
rates, changeover day, and percent change for each US state and European
country studied. Since the death rates apply only to detected infecteds, these
can be seen as roughly equivalent to a very smoothed version of the case
mortality rate, with weighting by current caseload – see figure 6. Note that
the empirical measures see a very high rate with a very large drop in the
early days of the outbreak when only a handful of seriously ill people have
been diagnosed, while any subsequent changes in the underlying rate are
difficult to ascertain from the still fairly large day-to-day fluctuations.
Figure 6: Daily deaths divided by daily new cases with a two week delay for:
(a) Washington state, and (b) Belgium.
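The empirical proxy plotted in figure 6 can be computed from the cumulative series as follows (a sketch; the lag default and the handling of zero-case days are our choices):

```python
def lagged_case_fatality(cum_cases, cum_deaths, lag=14):
    """Daily deaths divided by daily new cases lagged by two weeks,
    as in figure 6.  Inputs are cumulative series sampled daily;
    days whose lagged new-case count is zero yield None."""
    new_cases  = [b - a for a, b in zip(cum_cases, cum_cases[1:])]
    new_deaths = [b - a for a, b in zip(cum_deaths, cum_deaths[1:])]
    ratios = []
    for t in range(lag, len(new_deaths)):
        denom = new_cases[t - lag]
        ratios.append(new_deaths[t] / denom if denom > 0 else None)
    return ratios
```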
We start with the rates themselves (see table 1). The initial death rate for
detected infecteds is approximately 1% in Europe and 1.5% in the US, which is
consistent with, if on low side of, values being hypothesized/calculated in
March/April of this year [18]. These go to approximately 1.5 per 1000 and 3
per 1000 respectively, a 5- or 6-fold drop. The changeover time in the fit
period corresponds to the dates May 18 in Europe and May 15 in the US. As the
standard deviation of the changeover time implies, most of the drops occurred
within the period between mid-April and mid-June. Figure 7 shows how all the
rates and changeover times are distributed. It is somewhat surprising that the
second death rate $\gamma_{2}$ is much more narrowly distributed than the
first death rate $\gamma_{1}$, and we have no explanation for this phenomenon.
Metric | Europe | US
---|---|---
Median $\gamma_{1}$ | 0.01058 | 0.014801
Standard deviation $\gamma_{1}$ | 0.022353 | 0.0071521
Median Changeover $t_{\gamma}$ (in days) | 117.7146 | 114.6639
Standard deviation $t_{\gamma}$ (in days) | 33.1116 | 19.2085
Median $\gamma_{2}$ | 0.0014119 | 0.0028296
Standard deviation $\gamma_{2}$ | 0.0058485 | 0.0075316
Table 1: Statistics for the death rates $\gamma_{1}$ and $\gamma_{2}$, as well
as the date of the change $t_{\gamma}$. Figure 7: (a) Distribution of first
$\gamma_{1}$ and second $\gamma_{2}$ death rates, and (b) distribution of the
day of death rate change for all European countries (except Russia). (c)
Distribution of first $\gamma_{1}$ and second $\gamma_{2}$ death rates, and
(d) distribution of the day of death rate change for all US states.
Change in death rate. While it is to be expected that the case mortality rate
of a disease will drift downward over time as medical treatments improve [19,
20, 21], both the relatively tight timing and magnitude of the change in death
rates are noteworthy; we see a decrease of approximately 90% across Europe and
80% across the US within a 2-month period. Table 2 shows various statistics
related to this drop (the value of the skewness measure most likely reflects
the fact that -100% is a sharp cutoff on the low end of the range of possible
changes). Figure 8 gives maps of the US and Europe color-coded by the drop in
rates and the changeover day; in the former particularly we see that the
countries of western Europe for the most part saw large decreases, while
eastern Europe is more variable. We also observe that US outliers with large
positive changes in death rate are in the east. While there are clusters for
the day of death rate change in Europe and the US, no clear pattern is
apparent.
Metric | Europe | US | comment
---|---|---|---
# Jurisdictions | 49 | 52 |
Mean % change in death rate | -70.0085 | -67.366 |
Median % change in death rate | -91.0148 | -80.6421 |
Mode % change in death rate | -94.8642 | -83.2356 |
Outliers | 6 | 3 | with positive change
Greatest decrease | -100 | -97.3713 |
Least decrease | -38.2119 | -38.0591 |
Greatest increase | 330.9656 | 234.5559 | excluding countries with 0 reported deaths
Standard deviation | 16.6705 | 11.6069 | this row and below exclude outliers
Skewness | 1.5267 | 0.96071 | $>$ 0 – skews right
Kurtosis | 4.3511 | 4.5883 | $>$ 3 – thicker tailed distribution than Gaussian
Table 2: Statistics for the percent change in death rate.
Outliers. Not all jurisdictions saw decreases in death rates according to the
measurement derived from our model. In the US, three states, New Hampshire, New
Jersey, and Rhode Island had increases, of 119, 235, and 55% respectively. We
note that New Jersey and Rhode Island had relatively late dates for the
effective release from lockdown in comparison with other states, as measured
by the model (June 16 and July 11 respectively). Mathematically, since these
states did not open up at the same time as the others, their cases did not
start rising dramatically again in the early summer, so the denominator
defining the case mortality rate stayed relatively low.
In Europe the outliers break down into two different groups. In the first case
we have the Faroe Islands, Gibraltar, and Latvia, which had effectively no
deaths in the period before the measured changeover (Faroe Islands and
Gibraltar apparently had no deaths whatsoever during the entire fit period);
in this case the astronomical positive changes in rate are merely an artifact
of the extremely low initial rates given by the fitter. It should be noted
that Latvia’s neighbor Estonia had no recorded deaths in the period after
changeover, and so achieved a 100% drop; this suggests that the death
statistics in the Baltic states may themselves be an issue.
Figure 8: (a and b) Percent change of death rate for US and Europe. (c and d)
changeover time (in days since January 22, 2020).
The second group of European outliers, Belarus (331% increase), Kosovo (78%),
and Serbia (30%), is, like the US outliers, more perplexing. The latter two
were of course famously involved in a violent conflict in the 1990s; none of
the three is a member of the EU. Aside from that, we can note that these
countries had relatively late outbreaks (with first deaths recorded on March
22, March 29, and April 28 respectively), resulting in a later surge of cases
and deaths.
Correlations. One may ask if there is a relation between the measured changes
in death rates and various other metrics. However, with one rather trivial
exception (to be discussed below) we found no strong correlations of the drop
to either model-related quantities or a number of readily available
state/national statistics; though admittedly, our search through national
databases was not exhaustive. All correlations discussed below were calculated
using Matlab’s corrcoef function, which gives the Pearson correlation
coefficient.
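For reference, the quantity computed by Matlab's corrcoef can be reproduced in a few lines (a pure-Python sketch; the function name is ours):

```python
def pearson(x, y):
    """Pearson correlation coefficient, equivalent to the off-diagonal
    entry of Matlab's corrcoef(x, y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))  # unnormalized covariance
    sx  = sum((a - mx) ** 2 for a in x) ** 0.5            # unnormalized std of x
    sy  = sum((b - my) ** 2 for b in y) ** 0.5            # unnormalized std of y
    return cov / (sx * sy)
```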
The first place to look is within the model parameters themselves, and
quantities derived from either the raw data or projections based on the fits.
Across multiple fits we would expect some rates to move in tandem or
opposition to others, and indeed, for both Europe and the US we see that the
SIR-based contact rate has a strong negative correlation ($<-0.9$) with both
the recovery rate for undetected infecteds and the detection rate, as well as
a slightly weaker positive correlation ($>0.67$) with the severity of the
first social distancing intervention. In fact, one model parameter does
correlate strongly with the drop in the death rate: the second death rate
itself (0.72 for Europe, 0.97 for the US), which is hardly surprising. However, no
other rates or data derived quantities had an absolute correlation $>0.5$ for
either the US or Europe, and only a scattering had the absolute correlation
$>0.33$; these latter all had different signs for the European and US fits,
indicating that the relation was not particularly robust despite the
magnitude. Only three model related quantities other than the second death
rate had absolute correlations $\geq 0.1$ with same sign for the US and
Europe: loss of immunity rate (negative), initial condition (proportion of
population infected on day 1 of fit period, negative), and proportion of
unknown recovereds on last day of fit period (positive). In all these cases
the absolute correlation was $<$ 0.22, so rather weak.
Metric | Europe | US
---|---|---
Population | -0.03319 | -0.045653
Area | 0.057402 | -0.15576
Pop. density | -0.080492 | 0.015033
Latitude | -0.02293 | 0.062915
Longitude | 0.33451 | 0.16866
GDP [22, 23] | -0.16385 | -0.05716
GDP per capita [22, 23] | -0.33194 | -0.07874
Gini coefficient [24, 25] | -0.12044 | 0.10396
Median age [26, 27] | -0.25117 | 0.19846
Table 3: Pearson correlation coefficients for % changes in death rates and
state/national statistics. Population, area, population density, latitude, and
longitude data were obtained from Johns Hopkins University alongside Covid-19
data [17].
Since the correlations to standard state/national statistics may be of more
general interest, these are given in table 3. As with the model parameters,
most correlations here are quite weak and have different signs between Europe
and the US; only longitude has non-trivial (though not strong) correlations of
the same sign, which is apparent from figure 8(a) showing consistently larger
drops in percent death rate change in Western Europe than Eastern Europe. As
mentioned above, we did not check many other possible quantities (e.g.,
educational attainment, per capita health care expenditures, etc.) since each
requires finding and converting new data extraneous to our main project; in
particular, certain epidemiological data, such as COVID-19 testing rate (which
is itself time-varying), might yield interesting results with more intensive
comparison techniques. Note that correlations between the individual death
rates and changeover day with the other model parameters and state/national
statistics were also calculated, and likewise did not show any strong or
surprising correlations (data not shown).
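For concreteness, the Pearson coefficients reported in table 3 can be computed with a few lines of code. The sketch below uses fabricated values standing in for the per-state % death-rate changes and a per-state metric; it is an illustration of the calculation, not the paper's data.

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated stand-ins for % death-rate changes and a state metric.
pct_change = [-84.1, -81.8, -58.3, -73.6, -82.3]
metric = [12.0, 15.0, 30.0, 20.0, 14.0]
r = pearson(pct_change, metric)  # lies in [-1, 1]
```

Repeating this for each column of state/national statistics against the % death-rate changes yields a table of coefficients like table 3.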
Figure 9: Model fit to confirmed Covid-19 deaths, and the number of deaths
predicted for the counterfactual (CF) scenario of no change in death rates
for: (a) Europe, and (b) the United States.
Counterfactual Scenario. Our implementation allowed us to run counterfactual
simulations to test various suppositions by rerunning the ODE solver on the
model with changed parameter values. By suppressing the second death rate, we
are able to estimate what the deaths outcome would be if no change in rate had
occurred. Figure 9 shows plots of deaths data, model fits, and counterfactual
projections for Europe and the US. As one would expect, if the rate had not
changed the number of deaths by September 25 would have been much greater,
more than triple in the US (from $\approx$ 204,000 to 706,000) and more than
double in Europe (from $\approx$ 208,000 to 531,000). Since the effect on the
cumulative confirmed cases was minimal, we have not shown these plots.
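To illustrate the mechanics of the counterfactual rerun, the following sketch Euler-integrates a toy SIRD system with a piecewise death rate and then reruns it with the rate change suppressed. This is not the paper's full model (which includes detected/undetected compartments and social-distancing interventions), and every parameter value here is invented.

```python
# Toy SIRD counterfactual sketch. All parameters are invented for
# illustration; this is NOT the fitted model from the paper.

def run_sird(beta, rec, gamma1, gamma2, t_gamma, days, dt=0.05):
    """Euler-integrate an SIRD model whose death rate switches from
    gamma1 to gamma2 at day t_gamma; return cumulative deaths as a
    fraction of the population."""
    S, I, R, D = 0.999, 0.001, 0.0, 0.0
    for k in range(int(days / dt)):
        t = k * dt
        gamma = gamma1 if t < t_gamma else gamma2
        new_inf = beta * S * I
        dS = -new_inf
        dI = new_inf - (rec + gamma) * I
        dR = rec * I
        dD = gamma * I
        S += dS * dt
        I += dI * dt
        R += dR * dt
        D += dD * dt
    return D

# Fitted scenario: the death rate drops at day 30.
fitted = run_sird(beta=0.25, rec=0.1, gamma1=0.02, gamma2=0.003,
                  t_gamma=30, days=200)
# Counterfactual: suppress the change by holding gamma1 throughout.
counterfactual = run_sird(beta=0.25, rec=0.1, gamma1=0.02, gamma2=0.02,
                          t_gamma=30, days=200)
excess = counterfactual - fitted  # deaths attributable to the higher rate
```

Because the trajectories are identical before the changeover day, the difference between the two runs isolates the effect of the death-rate change, which is the quantity reported above for Europe and the US.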
## Discussion and Conclusions
There are several factors expected to affect fatality rates over the course of
a pandemic. Improvements in medical treatments are to be expected as knowledge
about the disease increases. For example, aggregated data suggests that
transfusing (high anti-RBD IgG titer) convalescent plasma early in the
hospital course of Covid-19 patients significantly reduces mortality by
approximately 6% in comparison with control patients [19]. Additional
independent studies have shown that administering tocilizumab (a recombinant
monoclonal antibody that can mitigate cytokine release syndrome) to patients
admitted to intensive care with Covid-19 was associated with a 23% [20] and a
12% [21] reduction in mortality, compared with patients receiving standard care.
Importantly, a clinical outcomes study reported that patients who presented in
hospital with sufficient vitamin D levels ($\geq$ 30 ng/ml) had reduced
mortality rates by 10% in comparison with Covid-19 patients with insufficient
($<$ 30 ng/ml) vitamin D [28], which suggests that low-toxicity supplementation
and increased sun exposure can affect a population’s outcome. The studies
above also suggest that improvements in Covid-19 treatments since the start of
the pandemic can reduce a population’s overall mortality by at least 20%,
which is a smaller factor than we have measured.
It has also been suggested that mask wearing can reduce the mortality rate of
Covid-19 via two different means. First, a face mask worn by an infected
person forms a barrier for transmission of respiratory droplets to susceptible
populations, thus reducing transmissibility of the disease [29, 30]. It may be
reasonable to expect that populations with widespread mask usage and clear
government guidelines may have a reduction in contact rate $\beta$ associated
with policy implementation if the policy had been implemented after occurrence
of exponential growth in cases [31]. We have not observed the need for such a
reduction to fit cumulative cases in any country or state well. In any case,
while reduced case counts (if we had seen them) would result in fewer deaths
overall, this says nothing about deaths per case. But it is also possible that
wearing a face mask protects the wearer by reducing the SARS-CoV-2 inoculum
that they are exposed to by infected people [32]. Exposure to a low viral load
may result in a less severe, possibly asymptomatic, infection with a lower
chance of fatality [33]. So it is still possible that the changed Covid-19
death rates we have observed result from face mask wearing; the YouGov
online survey reporting tool demonstrates that self-reported face mask wearing
in public spaces in some European countries (Italy, Spain, France, and
Germany) rapidly increased to 80% of the population or more between late March
and May 2020 [34]. A similar trend is observed in the United States, where
self-reported face mask wearing in public places rose to 69% at the end of May
2020 [34]. However, self-reported face mask wearing in the Nordic nations of
Finland, Denmark, Norway, and Sweden did not exceed 20% of the population over
the same time frame, and these nations also experienced very large drops in
death rate. This evidence strongly suggests that if wearing face masks is a
factor that affects the death rate change, it is not the only one.
It is also possible for a virus to acquire mutations that alter its
infectivity and lethality over time. Genomic analyses have demonstrated that
the spike protein of SARS-CoV-2 has undergone an amino acid change from
aspartic acid (referred to as D614) to glycine (referred to as G614) in the
carboxy-terminal region of the S1 domain [35, 36, 37]. The very rapid spread
of the G614 mutant throughout Europe and parts of the Americas, monitored by
Covid-19 genetic surveillance studies over time, suggests that it could be
more transmissible [35, 36, 37, 38]. One regional study conducted within a
Houston hospital system showed that the virus strains originally introduced
into the city in March 2020 were diverse, with both D614 and G614 types
represented; however, sequences taken during the much larger second wave that
occurred in June 2020 were nearly all of the G614 type [39]. They found that
patients with the G614 strains had higher nasopharyngeal viral loads upon
diagnosis; however, the authors did not find evidence connecting virus
genotype with altered virulence [39]. Interestingly, a data correlation study
found that the G614 mutation corresponds to higher case fatality rates in
several countries [40]. Given the available evidence, it seems likely that the
highly prevalent G614 mutation is not less deadly than previous strains, which
leaves open the distinct possibility that there is a newer, less deadly mutation
circulating.
Increasing testing can also significantly impact the case fatality rate of a
disease, since detecting increasing numbers of cases will increase the
denominator of the case fatality rate, and possibly lead to earlier detection
of a disease leading to earlier treatment thereby also reducing mortality [41,
42]. While we are not aware of any studies examining correlations between the
number of Covid-19 tests in time and case fatality rates, several studies
examining regional differences in fatality and testing have occurred. One
study comparing USA, Italy, UK, France, Spain, Sweden, and Germany found that
case fatality rates, normalized by the ratio of tests to total number of
positive cases, tended to cluster suggesting a correlation between mortality
and testing rate [41]. A multivariable statistical study of Covid-19 mortality
in 196 countries found that a 10 times decrease in per-capita testing was
correlated with a 26% increase in per-capita mortality, though this
correlation was not found to be statistically significant [42]. Another
statistical comparison of testing rates and mortality across French region
borders found that performing an additional 1000 tests would save one life
[43]. Data available from the Johns Hopkins University Coronavirus Resource
Center [44] shows that US tests increased by approximately 12 times (from 0.1
to 1.2 million) from April through November 2020, suggesting that increased
testing may have played some role in the large death rate decrease we have
observed in nearly all US states and European countries.
It is also possible that the age demographics of people more recently
afflicted with Covid-19 have affected the mortality rate – particularly if
more young people, who tend to be much less likely to have severe disease
[45], have become infected than elderly. Indeed, an analysis of Covid-19 cases that
occurred worldwide between February and July 2020 revealed that the proportion of
infected people 15-24 years old increased from 5% to 15%. Cases of Covid-19 in
the USA in people 18-22 years old increased by 55% from August 2 to September 5,
2020, and incidence was highest among people between 20 and 29 years old, with
more than 20% of the total cases, in contrast with March 2020, when Covid-19
incidence was highest in people aged 60 years and over [46]. In conjunction
with this trend, some clinical reports indicate that Covid-19 has become less
deadly across all age groups. It was reported that the mortality rate,
adjusted for changes in demographics, had dropped by 18% in a New York city
hospital system from March to August 2020 [2]. Similarly, English hospital
surveillance data found that the survival of Covid-19 patients in both
intensive care and high dependency units increased by approximately 11% per
week from late March through June 2020 across age, ethnicity, and major
co-morbidity subgroups [15]. Given these observations, it appears that changes in
age demographics of Covid-19 incidence do not fully explain our observed
change in mortality over time.
Lastly, we look at the possibility that the drop could be a statistical
artifact caused by changes in the way death data is recorded and collected. It
should be noted that we (along with [11]) first noticed the change of death
rate not as a drop in daily deaths versus total population, but as persistence
of the previous trend when surges in the number of cases versus total
population occurred after releases of lockdowns, where concomitant surges in
deaths were to be expected.
Data revision is common for many publicly maintained statistics, not only in
medical areas but also economics and demographics, since later figures often
improve or correct earlier ones, which may be based partly on estimates or
incomplete surveys. With respect to diseases or mortality, large upward
revisions often gain public attention, since the implication is of prior
negligence or coverup. During the current Covid-19 pandemic a couple of
instances do leap out: China’s April revision upward by 1290 deaths (which
increased their then case mortality by 50%) [47], or Argentina’s massive
correction at the beginning of October [48].
There are legitimate reasons for changes in procedure that result in lower
death counts and subsequent downward revisions. Many jurisdictions initially
logged all deaths of Covid-19 infected individuals as deaths by Covid,
presumably because in the early days of the pandemic the exact range of co-
morbidities had not been determined; when later information is available to
limit that range, non-Covid deaths of Covid-infected individuals can be placed
in the appropriate category. This is the case for the UK revision in August.
Previously, the UK had been counting all deaths of Covid-infected people
within 60 days as death by Covid-19, which was reduced to 28 days; applied
retroactively, this had the effect of reducing the UK Covid-19 death count by
5,377 ($\approx$ 13% at the time) [49]. Similarly, Washington State, which had
been counting all deaths of anyone who tested positive at any point as
Covid-19 deaths, officially adopted a more stringent protocol in mid-June,
only listing a death as Covid-related if it was a specific factor mentioned in
the death certificate [50]. Case and death reductions may also occur for other
reasons. In Belgium a downward revision, ostensibly to correct for double-
counting in nursing homes, made news because it seemed to be timed to avoid
the milestone of 10,000 Covid-19 deaths [51].
Downward revisions of past death statistics, if integrated properly into time-
series data, should not have an adverse effect on any attempt to determine
changes in case-mortality over time, whether by our model or other techniques.
Our primary data source, the JHU CSSE Covid team [17], seems to have made
every effort to revise past data to reflect current knowledge and practice. To
begin with, they cross-reference many sources of their own, including the
World Health Organization, the European Centre for Disease Prevention and
Control (ECDC), the US Center for Disease Control, many other national health
organizations (such as the National Health Commission of the People’s Republic
of China, Australia Government Department of Health, Italian Ministry of
Health, etc.), practically all US state Departments of Health, many
municipalities and US counties, news organizations such as the Los Angeles
Times and BNO News, and even a few other Covid-19 tracking projects
(presumably for confirmation) such as “the COVID Tracking Project” maintained
by The Atlantic (https://covidtracking.com/data) and WorldoMeters Covid page
(https://www.worldometers.info/coronavirus/).
Importantly, when possible the JHU CSSE Covid team back-distributes revisions
of past data (i.e. incorporate them on appropriate days in their currently
available time series). According to their records, there have been 22 data
modifications for European nations and 19 for US jurisdictions (which are
tallied by county). As well, several large-scale back distributions have been
done (twice for both New York City and Massachusetts; and once for the United
Kingdom, Michigan, New Jersey, North Carolina, and Harris County, Texas). In
general, such back distribution (whether an up or down revision) should make
death data before mid-May more trustworthy rather than less.
An issue arises if jurisdictions adopt new protocols without revising past
statistics, or do the revisions without back-distributing into the past data
sets. In the JHU CSSE time series we used, 36 US states and 21 European
countries had decreases in cumulative deaths on 121 separate occasions, mostly
by 1 or 2 cases. Since any decrease in cumulative deaths is a physical
impossibility, the ones we see here presumably indicate data revisions which
could not be back-distributed. For example, the time-series for Washington
State has occasional negative day-to-day changes in death counts starting from
mid June (when they changed their protocol) and lasting through July (when
they seem to have finished whatever revisions they needed to make). The total
number of deaths involved in these post hoc revisions is 2,463 for the
European nations and 666 for the US states; while not trivial, these values
could hardly account for the drops we have seen in the death rates detailed
above. To determine how many downward non-back-distributed revisions occurred
which did not result in negative day-to-day changes in cumulative deaths, or
which countries, states, or counties quietly adopted different protocols or
definitions without attempting to revise past totals, would require greater
access to jurisdictional health agency revision and policy data than we have.
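The consistency check described here reduces to scanning each cumulative-deaths series for day-to-day decreases. A minimal sketch (with an invented series, not the JHU data):

```python
def find_revisions(cumulative):
    """Return (day_index, size_of_drop) for every decrease in a
    cumulative series, which should be monotone non-decreasing."""
    drops = []
    for day in range(1, len(cumulative)):
        delta = cumulative[day] - cumulative[day - 1]
        if delta < 0:
            drops.append((day, -delta))
    return drops

series = [0, 3, 10, 24, 22, 35, 35, 33, 40]  # invented death counts
drops = find_revisions(series)               # [(4, 2), (7, 2)]
total_revised = sum(size for _, size in drops)  # 4
```

Summing the drop sizes over all jurisdictions gives totals like the 2,463 (European nations) and 666 (US states) reported above.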
In conclusion, we have found that the case mortality rate of Covid-19 has
dramatically decreased between mid-April and mid-June 2020 in many European
countries and US states. While there are many plausible factors, such as
improved medical technique, mask wearing, increased testing, viral mutation,
demographics, or changes in recording of cases, that may have caused this, at
this point we cannot conclusively say which, if any, are the cause, or if it
is a combination of these or other subtle factors. This surprising finding
warrants further attention.
## Data Availability Statement
Data for cumulative confirmed cases and deaths were obtained from the Johns
Hopkins University (JHU) Center for Systems Science and Engineering, posted on
the GitHub website [17].
## Conflicts of Interest
Conflicts of Interest: None.
## Acknowledgements
This study was supported by TTU President’s Distinguished Chair Funds.
## 1 Appendix
Table 4: Death rates $\gamma_{1}$ and $\gamma_{2}$, day of change $t_{\gamma}$ and corresponding date, with percent change for US states.

State | $\gamma_{1}$ | $t_{\gamma}$ | Date (2020) | $\gamma_{2}$ | % Change
---|---|---|---|---|---
Alaska | 0.018858 | 91.9803 | Apr 22 | 0.0029998 | -84.0922
Alabama | 0.01584 | 129.1083 | May 29 | 0.0028764 | -81.8412
Arkansas | 0.01501 | 107.0161 | May 07 | 0.006258 | -58.3066
Arizona | 0.019334 | 122.9221 | May 23 | 0.005112 | -73.5597
California | 0.014878 | 111.098 | May 11 | 0.0026375 | -82.2724
Colorado | 0.013955 | 115.4642 | May 15 | 0.0010777 | -92.2773
Connecticut | 0.0077278 | 113.8637 | May 14 | 0.0006074 | -92.14
District of Columbia | 0.01744 | 126.0829 | May 26 | 0.0014485 | -91.6943
Delaware | 0.01066 | 132.3503 | Jun 01 | 0.001541 | -85.5436
Florida | 0.015309 | 129.4741 | May 29 | 0.0042771 | -72.0615
Georgia | 0.011839 | 125.6616 | May 26 | 0.0028935 | -75.5597
Hawaii | 0.0092434 | 94.9948 | Apr 25 | 0.0024744 | -73.231
Iowa | 0.0041224 | 119.2705 | May 19 | 0.00037223 | -90.9704
Idaho | 0.014863 | 110.389 | May 10 | 0.0047981 | -67.7173
Illinois | 0.023902 | 164.9266 | Jul 04 | 0.0043251 | -81.9046
Indiana | 0.01642 | 113.6621 | May 14 | 0.0013726 | -91.6409
Kansas | 0.0066308 | 99.5549 | Apr 30 | 0.00028062 | -95.768
Kentucky | 0.028502 | 128.7954 | May 29 | 0.0057361 | -79.8748
Louisiana | 0.026275 | 137.2672 | Jun 06 | 0.0064757 | -75.3538
Massachusetts | 0.0053833 | 117.0038 | May 17 | 0.00089078 | -83.4529
Maryland | 0.0052426 | 124.0183 | May 24 | 0.00098801 | -81.154
Maine | 0.013773 | 120.8261 | May 21 | 0.0020725 | -84.9523
Michigan | 0.022984 | 108.211 | May 08 | 0.00097971 | -95.7374
Minnesota | 0.012611 | 120.2323 | May 20 | 0.0004156 | -96.7044
Missouri | 0.019139 | 115.7427 | May 16 | 0.0019625 | -89.7462
Mississippi | 0.017016 | 133.2993 | Jun 02 | 0.01054 | -38.0591
Montana | 0.010594 | 96.861 | Apr 27 | 0.0031836 | -69.9485
North Carolina | 0.01474 | 110.1662 | May 10 | 0.0023085 | -84.3386
North Dakota | 0.018339 | 131.7428 | Jun 01 | 0.0057475 | -68.6592
Nebraska | 0.012986 | 104.1514 | May 04 | 0.0046664 | -64.065
New Hampshire | 0.014292 | 96.4596 | Apr 26 | 0.031264 | 118.7539
New Jersey | 0.014136 | 76.3807 | Apr 06 | 0.047291 | 234.5559
New Mexico | 0.01693 | 121.5203 | May 22 | 0.0029829 | -82.3808
Nevada | 0.027147 | 110.4525 | May 10 | 0.0027828 | -89.7491
New York | 0.034929 | 112.4836 | May 12 | 0.0039349 | -88.7347
Ohio | 0.010123 | 127.3317 | May 27 | 0.001216 | -87.9875
Oklahoma | 0.024237 | 95.1332 | Apr 25 | 0.002249 | -90.7209
Oregon | 0.01994 | 105.7741 | May 06 | 0.0033566 | -83.1668
Pennsylvania | 0.0042774 | 112.0262 | May 12 | 0.00088788 | -79.2426
Puerto Rico | 0.015866 | 119.5141 | May 20 | 0.0041358 | -73.933
Rhode Island | 0.0074883 | 99.257 | Apr 29 | 0.011599 | 54.897
South Carolina | 0.01325 | 103.3883 | May 03 | 0.0038586 | -70.8792
South Dakota | 0.0064334 | 203.8722 | Aug 12 | 0.0015365 | -76.1174
Tennessee | 0.0061235 | 107.274 | May 07 | 0.0012167 | -80.1301
Texas | 0.011842 | 84.0796 | Apr 14 | 0.0041029 | -65.3542
Utah | 0.0037314 | 110.1407 | May 10 | 0.0011337 | -69.6166
Virginia | 0.017346 | 124.8077 | May 25 | 0.0066082 | -61.9042
Vermont | 0.023989 | 107.7641 | May 08 | 0.0006306 | -97.3713
Washington | 0.023503 | 123.4539 | May 23 | 0.0047135 | -79.9452
Wisconsin | 0.0044732 | 125.8417 | May 26 | 0.00037944 | -91.5175
West Virginia | 0.019036 | 109.8525 | May 10 | 0.0069387 | -63.5491
Wyoming | 0.0015296 | 132.9206 | Jun 02 | 0.00036233 | -76.3115
Table 5: Death rates $\gamma_{1}$ and $\gamma_{2}$, day of change $t_{\gamma}$ and corresponding date, with percent change for European countries. * for change indicates country had no deaths before time $t_{\gamma}$.

Country | $\gamma_{1}$ | $t_{\gamma}$ | Date (2020) | $\gamma_{2}$ | % Change
---|---|---|---|---|---
Albania | 0.046842 | 62.1243 | Mar 23 | 0.0080575 | -82.7986
Andorra | 0.020542 | 116.5576 | May 17 | 0.00055115 | -97.3169
Austria | 0.0098244 | 113.6568 | May 14 | 0.00063225 | -93.5645
Belarus | 0.0047253 | 161.9287 | Jul 01 | 0.020364 | 330.9656
Belgium | 0.027217 | 112.0309 | May 12 | 0.0014119 | -94.8126
Bosnia and Herzegovina | 0.01543 | 115.5272 | May 16 | 0.005373 | -65.1794
Bulgaria | 0.0052442 | 131.303 | May 31 | 0.0018924 | -63.9136
Channel Islands | 0.0078607 | 106.7912 | May 07 | 0.00012372 | -98.4261
Croatia | 0.0081174 | 124.9307 | May 25 | 0.0021014 | -74.112
Cyprus | 0.015286 | 85.6981 | Apr 16 | 0.0014654 | -90.4136
Czechia | 0.0084008 | 111.5784 | May 12 | 0.00078631 | -90.6401
Denmark | 0.023793 | 106.2476 | May 06 | 0.00098962 | -95.8406
Estonia | 0.0024383 | 112.7541 | May 13 | 4.3292e-14 | -100
Faroe Islands | 8.2776e-11 | 216.8633 | Aug 25 | 1.9168e-09 | *
Finland | 0.0022821 | 112.5707 | May 13 | 3.8419e-05 | -98.3165
France | 0.062387 | 112.4251 | May 12 | 0.0024589 | -96.0586
Germany | 0.010068 | 117.7146 | May 18 | 0.0004936 | -95.0975
Gibraltar | 4.1871e-14 | 216.2532 | Aug 24 | 0.00062895 | *
Greece | 0.0095689 | 128.07 | May 28 | 0.0018301 | -80.8742
Hungary | 0.053932 | 121.291 | May 21 | 0.0033608 | -93.7685
Iceland | 0.0034045 | 100.8264 | May 01 | 3.0382e-08 | -99.9991
Ireland | 0.01058 | 110.7097 | May 11 | 0.00041922 | -96.0377
Italy | 0.055973 | 118.6296 | May 19 | 0.0056654 | -89.8783
Kosovo | 0.0021242 | 178.0724 | Jul 17 | 0.0037917 | 78.4974
Latvia | 1.0121e-08 | 79.0803 | Apr 09 | 0.031484 | *
Liechtenstein | 0.0044271 | 94.2358 | Apr 24 | 1.9343e-08 | -99.9996
Lithuania | 0.014566 | 131.6754 | Jun 01 | 0.001285 | -91.1785
Luxembourg | 0.0089991 | 117.8828 | May 18 | 0.00082332 | -90.851
Malta | 0.0012141 | 130.8727 | May 31 | 8.8242e-05 | -92.7317
Isle of Man | 0.019654 | 134.6321 | Jun 04 | 0.00042544 | -97.8354
Moldova | 0.01743 | 168.4762 | Jul 07 | 0.0096963 | -44.3701
Monaco | 0.010934 | 103.4746 | May 03 | 4.7518e-14 | -100
Montenegro | 0.0041409 | 199.6697 | Aug 08 | 0.0025586 | -38.2119
Netherlands | 0.026892 | 125.8466 | May 26 | 0.0017659 | -93.4335
North Macedonia | 0.0029871 | 170.4689 | Jul 09 | 0.00082065 | -72.5268
Norway | 0.0014174 | 98.9619 | Apr 29 | 0.00013942 | -90.1642
Poland | 0.016209 | 123.6755 | May 24 | 0.0031553 | -80.5332
Portugal | 0.016931 | 137.2254 | Jun 06 | 0.003978 | -76.5041
Romania | 0.027858 | 123.7185 | May 24 | 0.010543 | -62.1555
Serbia | 0.011069 | 46.1611 | Mar 07 | 0.014431 | 30.3728
Slovakia | 0.0019954 | 113.4444 | May 13 | 0.00013117 | -93.4265
Slovenia | 0.0028753 | 97.7511 | Apr 28 | 0.00018597 | -93.5319
San Marino | 0.12548 | 77.8736 | Apr 08 | 0.008933 | -92.8811
Spain | 0.044385 | 102.3834 | May 02 | 0.0012682 | -97.1427
Sweden | 0.040327 | 123.7356 | May 24 | 0.0051592 | -87.2067
Switzerland | 0.013404 | 105.5814 | May 06 | 0.00035987 | -97.3152
Turkey | 0.0079824 | 124.1928 | May 24 | 0.0039202 | -50.8902
United Kingdom | 0.051358 | 131.4971 | May 31 | 0.0098487 | -80.8233
Ukraine | 0.017525 | 147.2525 | Jun 16 | 0.010609 | -39.4664
## References
* [2] Leora Horwitz, Simon A Jones, Robert J Cerfolio, Fritz Francois, Joseph Greco, Bret Rudy, and Christopher M Petrilli. Trends in Covid-19 risk-adjusted mortality rates. Journal of Hospital Medicine, October 2020.
* [3] World Health Organization. Novel coronavirus – China. Technical report, World Health Organization, 2020.
* [4] World Health Organization et al. Coronavirus disease 2019 (COVID-19): situation report, 22. Technical report, World Health Organization, 2020.
* [5] Emeline Han, Melisa Mei Jin Tan, Eva Turk, Devi Sridhar, Gabriel M Leung, Kenji Shibuya, Nima Asgari, Juhwan Oh, Alberto L García-Basteiro, Johanna Hanefeld, et al. Lessons learnt from easing COVID-19 restrictions: an analysis of countries and regions in Asia Pacific and Europe. The Lancet, 396(10261):1525–1534, November 2020.
* [6] Jason Horowitz. Hope and worry mingle as countries relax coronavirus lockdowns. The New York Times, May 2020.
* [7] Jeffrey Gettleman. As virus infections surge, countries end lockdowns. The New York Times, June 2020.
* [8] World Health Organization et al. COVID-19 weekly epidemiological update, data as received by WHO from national authorities, as of 18 October 2020, 10 am CEST. Technical report, World Health Organization, 2020.
* [9] Michael Crowley and Maggie Astor. European nations return to restrictions as virus surges. The New York Times, October 2020.
* [10] Sarah Mervosh and Lucy Tompkins. ‘It has hit us with a vengeance’: Virus surges again across the United States. The New York Times, October 2020.
* [11] Kevin Drum. If COVID-19 cases are going up, why is the death rate going down? Mother Jones, June 2020.
* [12] Lauren Justice. U.S. Coronavirus cases are rising sharply, but deaths are still down. New York Times, July 2020.
* [13] John Campbell. Coronavirus, death rates plummet. Podcast, August 2020.
* [14] Wan Yang, Sasikiran Kandula, Mary Huynh, Sharon K Greene, Gretchen Van Wye, Wenhui Li, Hiu Tai Chan, Emily McGibbon, Alice Yeung, Don Olson, et al. Estimating the infection-fatality risk of SARS-CoV-2 in New York City during the spring 2020 pandemic wave: a model-based analysis. The Lancet Infectious Diseases, October 2020.
* [15] John M Dennis, Andrew P McGovern, Sebastian J Vollmer, and Bilal A Mateen. Improving survival of critical care patients with Coronavirus disease 2019 in England. Critical Care Medicine, Online First, October 26, 2020, 2020.
* [16] ZS Khan, F Van Bussel, and F Hussain. A predictive model for Covid-19 spread–with application to eight US states and how to end the pandemic. Epidemiology & Infection, 148:1–40, October 2020.
* [17] Ensheng Dong, Hongru Du, and Lauren Gardner. An interactive web-based dashboard to track COVID-19 in real time. The Lancet infectious diseases, 20(5):533–534, 2020.
* [18] Max Roser, Hannah Ritchie, Esteban Ortiz-Ospina, and Joe Hasell. Mortality risk of COVID-19. Our World in Data, 2020. https://ourworldindata.org/mortality-risk-covid.
* [19] Eric Salazar, Paul A Christensen, Edward A Graviss, Duc T Nguyen, Brian Castillo, Jian Chen, Bevin V Lopez, Todd N Eagar, Xin Yi, Picheng Zhao, et al. Treatment of coronavirus disease 2019 patients with convalescent plasma reveals a signal of significantly decreased mortality. The American Journal of Pathology, 190(11):2290–2303, 2020.
* [20] Timothée Klopfenstein, Souheil Zayet, Anne Lohse, Jean-Charles Balblanc, Julio Badie, Pierre-Yves Royer, Lynda Toko, Chaouki Mezher, Marie Bossert, Ana-Maria Bozgan, et al. Tocilizumab therapy reduced intensive care unit admissions and/or mortality in COVID-19 patients. Médecine et Maladies Infectieuses, 50(5):397–400, August 2020.
* [21] Noa Biran, Andrew Ip, Jaeil Ahn, Ronaldo C Go, Shuqi Wang, Shivam Mathura, Brittany A Sinclaire, Urszula Bednarz, Michael Marafelias, Eric Hansen, et al. Tocilizumab among patients with covid-19 in the intensive care unit: a multicentre observational study. The Lancet Rheumatology, 2(10):e603–e612, 2020.
* [22] Bureau of Economic Analysis. Gross domestic product by state, 4th quarter and annual 2019, 2020. http://www.bea.gov.
* [23] International Monetary Fund. World Economic Outlook, April 2018 edition, GDP nominal per capita – international dollar, 2018. http://www.imf.org.
* [24] United States Census Bureau. American community survey data, 2020. http://www.census.gov.
* [25] World Bank. Gini index (World Bank estimate), 2020. http://www.data.worldbank.org.
* [26] StatsAmerica. Median age in 2018, 2020. http://www.statsamerica.org.
* [27] Central Intelligence Agency. The world factbook/median age, 2018. http://www.cia.gov.
* [28] Zhila Maghbooli, Mohammad Ali Sahraian, Mehdi Ebrahimi, Marzieh Pazoki, Samira Kafan, Hedieh Moradi Tabriz, Azar Hadadi, Mahnaz Montazeri, Mehrad Nasiri, Arash Shirvani, et al. Vitamin D sufficiency, a serum 25-hydroxyvitamin D at least 30 ng/ml reduced risk for adverse clinical outcomes in patients with COVID-19 infection. PloS one, 15(9):e0239799, 2020.
* [29] Cornelia Betsch, Lars Korn, Philipp Sprengholz, Lisa Felgendreff, Sarah Eitze, Philipp Schmid, and Robert Böhm. Social and behavioral consequences of mask policies during the COVID-19 pandemic. Proceedings of the National Academy of Sciences, 117(36):21851–21853, 2020.
* [30] Michael H Haischer, Rachel Beilfuss, Meggie Rose Hart, Lauren Opielinski, David Wrucke, Gretchen Zirgaitis, Toni D Uhrich, and Sandra K Hunter. Who is wearing a mask? Gender-, age-, and location-related differences during the COVID-19 pandemic. PloS one, 15(10):e0240785, 2020.
* [31] Rotich Kiplimo Titus, Lagat Robert Cheruiyot, and Choge Paul Kipkurgat. Mathematical modeling of Covid-19 disease dynamics and analysis of intervention strategies. Mathematical Modelling and Applications, 5(3):176, 2020.
* [32] Zuzana Střížová, Jiřina Bartŭňková, Daniel Smrž, et al. Can wearing face masks in public affect transmission route and viral load in COVID-19? Central European Journal of Public Health, 28(2):161–162, 2020.
* [33] Monica Gandhi, Chris Beyrer, and Eric Goosby. Masks do more than protect others during COVID-19: reducing the inoculum of SARS-CoV-2 to protect the wearer. Journal of General Internal Medicine, 35(10):3063–3066, October 2020.
* [34] Max Roser, Hannah Ritchie, Esteban Ortiz-Ospina, and Joe Hasell. YouGov COVID-19 behaviour changes tracker: Wearing a face mask when in public places. YouGov, 2020. https://yougov.co.uk/topics/health/articles-reports/2020/07/27/face-mask-use-surges-after-becoming-compulsory-sho/.
* [35] Kathy Leung, Yao Pei, Gabriel M Leung, Tommy TY Lam, and Joseph T Wu. Empirical transmission advantage of the D614G mutant strain of SARS-CoV-2. medRxiv, 2020. 2020.09.22.20199810.
# Analysis of voxel-based 3D object detection methods efficiency for real-time
embedded systems
Illia Oleksiienko and Alexandros Iosifidis Department of Electrical and
Computer Engineering, Aarhus University, Denmark
<EMAIL_ADDRESS>
###### Abstract
Real-time detection of objects in the 3D scene is one of the tasks an
autonomous agent needs to perform for understanding its surroundings. While
recent Deep Learning-based solutions achieve satisfactory performance, their
high computational cost renders their application in real-life settings in
which computations need to be performed on embedded platforms intractable. In
this paper, we analyze the efficiency of two popular voxel-based 3D object
detection methods providing a good compromise between high performance and
speed based on two aspects, their ability to detect objects located at large
distances from the agent and their ability to operate in real time on embedded
platforms equipped with high-performance GPUs. Our experiments show that these
methods mostly fail to detect distant small objects due to the sparsity of the
input point clouds at large distances. Moreover, models trained on near
objects achieve similar or better performance compared to those trained on all
objects in the scene. This means that the models learn object appearance
representations mostly from near objects. Our findings suggest that a
considerable part of the computations of existing methods is focused on
locations of the scene that do not contribute with successful detection. This
means that the methods can achieve a speed-up of $40$-$60\%$ by restricting
operation to near objects while not sacrificing much in performance.
###### Index Terms:
3D object detection, point cloud, Lidar, embedded platforms, depth zones
## I Introduction
3D object detection is an important task for Autonomous Systems and Robotics,
as it provides to the (robotic) agent information needed to perceive its
surroundings. Detection of objects needs to be performed in a highly-reliable
and real-time manner in order to allow the agent to naturally interact with
the environment and avoid collisions. It can be used as the first processing
step for path planning, navigation, and/or interaction with objects in the 3D
scene. Many methods have been proposed to approach the 3D object detection
task, which can be categorized based on the type of data they receive as
input: those using monocular images with depth estimation [1, 2] or 2D-to-3D
regression [3, 4], those using binocular images which can provide information
on the relative depth of the objects in the scene [5, 6, 7], those using point
clouds commonly generated by a Lidar sensor [8, 9, 10, 11], or hybrid methods
combining point clouds with images [12, 13, 14].
Lidar is the most expensive sensor used for 3D object detection, but point
cloud-based methods are those providing the best compromise between
performance and speed. Point clouds are obtained from firing a set of laser
beams and receiving their reflections to calculate exact 3D coordinates of the
contact points. The generated point cloud is unordered and sparse and,
therefore, it cannot be directly processed by regular Convolutional Neural
Networks (CNNs) which are the de-facto choice in 2D object detection methods
operating on (grid-structured) images. To address this issue, several
approaches were proposed to transform the point cloud into a grid-structured
format, that can be used as input to CNNs. Projection-based methods use plane
projections [15, 16] to create multi-view images of the scene, or spherical [17]
or cylindrical [18] projections to create a 2D map where each pixel
corresponds to a point in a scene. Voxel-based methods select a sub-scene to
process and split it into a 3D grid of voxels (volumetric elements) [8] to
apply 3D convolutions, or a 2D grid of pillars [9, 10, 11] to apply 2D CNNs.
While point-cloud based methods are able to achieve good performance in
general, class-wise limitations emerge from the increasing sparsity of the
point cloud with respect to the distance of the objects from the Lidar sensor,
making small objects practically undetectable when they are far away from the
Lidar sensor.
In this paper we provide an experimental analysis of the performance of voxel-
based methods in relation to the objects’ distance from the Lidar sensor. We
split the 3D scene used by these methods in two sub-scenes determined by using
different depth zones from the Lidar sensor, namely the near sub-scene
containing the points of objects close to the Lidar sensor (half of the scene
along the forward-axis) and the far sub-scene containing the points of objects
far away from the Lidar sensor (the rest of the scene). We experimentally show
that two of the most successful voxel-based methods, i.e. PointPillars [9] and
TANet [10], fail to detect small objects appearing in the far sub-scene and
that training the models on objects appearing in the near sub-scene leads to
performance that is similar or even better than the performance achieved by
considering all objects during training. This result indicates that the models
trained on all objects in the scene are likely to learn object representations
only based on the near objects and try to apply them to objects far away from
the Lidar sensor, which are described by a much smaller number of points. Our
experimental analysis leads to an important suggestion: in application
scenarios involving low-power processing units and requiring real-time
operation one should focus on the objects belonging to the near sub-scene, as
this leads to a considerable computational cost reduction and the detection
rate for small objects belonging to the rest of the scene is low. We observed
that, following this strategy, a speed-up of $40$-$60\%$ is achieved, leading to
real-time operation on embedded GPUs.
The remainder of the paper is organized as follows: Section II provides a
description of the related works. Section III describes the process we follow
to define different sub-scenes for 3D object detection based on the respective
depth zones and the protocol followed in our experimental analysis. Section IV
provides the results of the analysis. Section V concludes the paper and
formulates directions for future work.
## II Related work
In this Section we provide information regarding the real-time operation of
existing 3D object detection methods exploiting Lidar-based point clouds in
relation to processing on embedded platforms. Then, we briefly describe the
PointPillars [9] and the TANet [10] methods which are used in our experimental
analysis.
### II-A Real-time operation in DL-based 3D object detection
Deep Learning (DL) based methods gained a lot of attention for solving tasks
in Autonomous Systems and Robotics due to their high performance. While for
visual-based methods real-time operation is commonly defined at a $30$ FPS
inference speed, for Lidar-based methods like those targeting 3D object
detection the desired FPS is defined by the specifications of the adopted
Lidar sensor. Most available Lidar sensors operate at $10$-$20$ FPS and, thus,
Lidar-based methods target operation at $10$-$20$ FPS, as they cannot process
point clouds faster than these can be generated
[19, 20]. However, even though this choice seems reasonable for isolated
application of 3D object detection methods during experiments, Autonomous
Systems in real-life applications need to perform a variety of tasks using
embedded processing platforms with 3D object detection being a pre-processing
step to higher-level analysis tasks, like path planning, navigation and
interaction with objects in the scene. Therefore, aiming at the frame rate
determined by the Lidar sensor does not lead to satisfactory speed in
practice. The availability of high-power embedded platforms like the NVIDIA
Jetson AGX Xavier with a powerful GPU and shared CPU-GPU memory (making it a
suitable choice for running DL models) allows the adoption of DL-based methods
in real-life applications. However, even though such embedded platforms
contain powerful GPUs, their capabilities still lack compared to the high-end
(desktop) GPUs which are used to develop and test 3D object detection methods.
Therefore, efficient method design and usage are needed for the adoption of
the DL models in real-life applications involving 3D object detection.
Recently, the method in [21] proposed a DL model for 3D object detection that
operates at $10$ FPS on the NVIDIA Jetson AGX Xavier, which still is far from
the commonly considered real-time operation of $30$ FPS.
3D object detection and tracking methods are frequently based on ideas coming
from the much more mature 2D object detection problem. This is because
many 3D object detection methods use 2D CNNs as backbone networks and,
therefore, optimization strategies that target object detection in 2D can be
extended to the 3D case too. Speed-up approaches that have been proposed for
2D object detection include the use of knowledge distillation to train high-
performing compact backbone networks [22], layer pruning to reduce the number
of computations in a high-performing backbone network while not sacrificing
much in performance [23], and network quantization in which the backbone
network is changed by replacing 32/64-bit floating-point operations with
faster low-bit integer operations [24, 25]. In this paper we follow a
different approach, which focuses on the input data received by Lidar-based
methods. The speed-up approaches described above focusing on the efficiency of
the backbone networks can be combined with our approach to further increase
processing speed, as they are focusing on complementary aspects of the overall
system.
### II-B PointPillars and TANet methods
PointPillars [9] is one of the fastest Lidar-based 3D Object Detection
methods, and it is commonly used as part of other methods [11, 14, 10]. It
selects a part of the scene in a cuboid shape with boundaries
$([0,x],[-y,y],[z_{0},z_{1}])$, as illustrated in Figure 1. (The selection of
the sub-scene depends on the class of interest and the adopted dataset. KITTI
[26], the most widely used dataset for evaluating 3D object detection methods,
provides annotations only for the objects lying inside the field-of-view of a
camera placed close to the Lidar sensor; thus, only the frontal part of the
point cloud is processed. NuScenes [27], another widely used dataset, has 6
cameras alongside a Lidar, so every direction is covered by both a camera and
the Lidar, allowing all points generated by the Lidar to be used. In this
paper we follow the setup used in KITTI; extending our approach to the setup
of NuScenes is straightforward.) In order to
transform the (unstructured) point cloud into a grid structure it performs
quantization based on a 2D grid along the $x$ (forward-axis) and $y$ (left-
right-axis) dimensions to form the so-called pillars. A pillar is a voxel of
size $(v_{x},v_{y},v_{z})$ with its size on the vertical-axis being equal to
all the available space, i.e. $v_{z}=z_{1}-z_{0}$. The $x$ and $y$ axes are
usually quantized using same-sized bins, i.e. $v_{x}=v_{y}$. Points in pillars
are processed to create pillar-wise features that are stored in a pseudo-image
where each cell represents a pillar. This image is processed by a Fully
Convolutional Network (FCN) [28] with final classification and regression
branches.
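The pillar quantization described above can be sketched in a few lines of NumPy (a simplified illustration: the function name and the mean-point feature are ours, whereas the actual methods learn the per-pillar features with a PointNet-style encoder):

```python
import numpy as np

def pillarize(points, x_max, y_max, vx, vy):
    """Quantize a point cloud into a 2D pillar grid and scatter per-pillar
    features into a dense pseudo-image. The mean point per pillar stands in
    for the learned per-pillar feature of PointPillars/TANet."""
    H = int(round(x_max / vx))        # forward-axis cells
    W = int(round(2 * y_max / vy))    # left-right-axis cells

    # Keep only points inside the processed cuboid [0, x_max] x [-y_max, y_max].
    mask = (points[:, 0] >= 0) & (points[:, 0] < x_max) \
         & (points[:, 1] >= -y_max) & (points[:, 1] < y_max)
    pts = points[mask]

    # Pillar index of every point; the vertical axis is not quantized.
    # np.minimum guards against float edge cases at the scene boundary.
    ix = np.minimum((pts[:, 0] / vx).astype(np.int64), H - 1)
    iy = np.minimum(((pts[:, 1] + y_max) / vy).astype(np.int64), W - 1)

    # Unbuffered scatter-add, then average: each non-empty cell holds the
    # mean point of its pillar.
    pseudo_image = np.zeros((H, W, 3))
    counts = np.zeros((H, W))
    np.add.at(pseudo_image, (ix, iy), pts)
    np.add.at(counts, (ix, iy), 1)
    nonempty = counts > 0
    pseudo_image[nonempty] /= counts[nonempty][:, None]
    return pseudo_image, nonempty
```

With the Car scene limits given later in Section IV and a $0.16$ m pillar side (assuming the "16" model label denotes centimeters), this yields a $432\times 496$ pseudo-image.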
TANet [10] is a slower but more accurate method and is one of the most
accurate methods for objects of small size. TANet follows processing steps
similar to those of PointPillars, but it uses a Triple Attention mechanism to create
more robust and accurate features for each pillar by combining point-wise,
channel-wise and voxel-wise attentions. These pillar features are stored in a
pseudo-image in the same way as in PointPillars, but they are processed by a
more complex DL model performing Coarse-to-Fine Regression. This DL model
consists of a Coarse network with an architecture similar to the Fully-
Convolutional Network (FCN) in PointPillars, and a Refine network which uses
features from the Coarse network to make more accurate predictions.
The size of a pseudo-image created by both methods depends on the number of
pillars that can fit into the scene. For the sub-scene with limits $[0,x]$
along forward-axis and $[-y,y]$ along left-right-axis, the size of the pseudo-
image is given by:
$W=\frac{2y}{v_{y}}\quad\textrm{and}\quad H=\frac{x}{v_{x}}.$ (1)
Increasing the size of pillars, when processing the same scene, leads to a
smaller pseudo-image and faster inference. The same effect is obtained by
decreasing the size of the sub-scene for fixed-sized pillars. PointPillars and
TANet are using FCNs to process the pseudo-image and, therefore, the trained
model can be directly applied to pseudo-images of different sizes. However,
there is a compromise between fast inference and performance which needs to be
considered when selecting the size of the pillars and the size of the scene.
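For concreteness, Eq. (1) can be evaluated directly: halving the forward extent of the scene (as the near sub-scene defined in Section III does) halves $H$ and therefore the number of pillars the FCN has to process. The $0.16$ m pillar side below is an assumption based on the "16" model label used in Section IV:

```python
def pseudo_image_dims(x, y, vx, vy):
    """Eq. (1): pseudo-image width and height, in pillars."""
    return 2 * y / vy, x / vx

# Full Car scene (limits from Section IV) vs. its near half.
W_full, H_full = pseudo_image_dims(69.12, 39.68, 0.16, 0.16)
W_near, H_near = pseudo_image_dims(34.56, 39.68, 0.16, 0.16)
print(round(W_full), round(H_full))   # 496 432
print(round(W_near), round(H_near))   # 496 216 -- half the pillars to process
```

Since both PointPillars and TANet process the pseudo-image with fully convolutional networks, the same trained weights apply to either size; only the amount of computation changes.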
## III Methodology
As it was mentioned before, voxel-based methods process the part of the scene
inside the field-of-view of a camera placed close to the Lidar sensor, as
shown in Figure 1. We refer to the part of the scene processed by the voxel-
based methods as full-scene hereafter. Since the features of each pillar are
generated only from the points inside it, independently of the rest of the
pillars, and since the density of points inside pillars varies with their
distance from the Lidar sensor, it is natural to split the full-scene into
sub-scenes based on depth zones, i.e. to divide the full-scene with respect to
the forward-axis, as illustrated in Figure 1, where two depth zones with the same
dimensions are defined. Even though the near sub-scene and the far sub-scene
have the same size in the 3D scene, the number of points belonging to each of
them is very different due to the difference in distances between objects
inside these two sub-scenes and the Lidar sensor. For instance, the near sub-
scene on KITTI evaluation set used in TANet [10] for class Car contains
$17,026$ points on average, while the far sub-scene contains only $1,127$
points on average. This means that the point cloud corresponding to the far
sub-scene is sparser by a factor of $10$ compared to the point cloud of the
near sub-scene. Having such a small number of points in the far sub-scene
raises questions about the efficiency of using voxel-based methods with a
voxelization grid of a fixed size for all locations of the 3D scene, as well
as about the object class multimodality introduced by the different levels of
sparsity at different distances from the Lidar sensor. By comparing the
pseudo-image generated by voxel-based methods corresponding to the full-scene
and the pseudo-images corresponding to the near and far sub-scenes, it can be
seen that the two latter pseudo-images correspond to two (non-overlapping)
parts of the first pseudo-image, each having half of its size. As the point
cloud in the far sub-scene is much sparser, the corresponding pseudo-image
contains a large number of empty pillars. That is, the model needs to learn
different representations for objects belonging to the same class (despite the
fact that they may have very similar appearance and orientation) due to high
differences in point cloud sparsity.
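Defining the depth zones amounts to a simple split of the point cloud at the forward-axis midpoint (a sketch; the function and variable names are ours):

```python
import numpy as np

def split_depth_zones(points, x_max):
    """Split a point cloud into near/far sub-scenes of equal forward extent.

    points: (N, 3) array with the forward coordinate in column 0.
    Points outside [0, x_max) along the forward axis belong to neither zone.
    """
    x = points[:, 0]
    near = points[(x >= 0) & (x < x_max / 2)]
    far = points[(x >= x_max / 2) & (x < x_max)]
    return near, far
```

On the KITTI evaluation set for class Car, the point counts quoted above (about $17{,}026$ near versus $1{,}127$ far on average) mean that, although both zones occupy equal volumes, the far half of the pseudo-image is dominated by empty pillars.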
Figure 1: Example of a KITTI frame with a point cloud at the top and the
corresponding camera image at the bottom. The Lidar sensor is located at the
center of the point cloud. The part of the scene used in PointPillars and
TANet (called full-scene in this paper) is the cuboid with boundaries
$([0,x],[-y,y],[z_{0},z_{1}])$, which corresponds to the union of the two
areas included in the green and blue boxes. We divide this scene into two
equally-sized sub-scenes, namely the near sub-scene (with boundaries
$([0,x/2],[-y,y],[z_{0},z_{1}])$ \- green box) and the far sub-scene (with
boundaries $([x/2,x],[-y,y],[z_{0},z_{1}])$ \- blue box). Red boxes correspond
to ground-truth objects. Figure 2: Performance evaluation of PointPillars and
TANet models on the full-scene and the near sub-scene. Models with name
$\\{16,20,24,28\\}$ have a corresponding voxel size. Models with suffix
“-near” are trained on the near sub-scene. Each model is evaluated on the full-
scene and the near sub-scene, therefore having 2 points per line, where the
leftmost point corresponds to the near sub-scene evaluation and the rightmost
point corresponds to the full-scene evaluation. Red circles represent AP
values of TANet models and turquoise triangles represent AP values of
PointPillars models. Red/turquoise lines represent difference in AP of the
full-scene and near sub-scene evaluations.
To determine the effect of adopting different sizes of scenes and pseudo-
images in the performance and speed of voxel-based methods, we conduct an
extensive evaluation based on the following steps:
* •
We train models with pillar sizes $v_{x}=v_{y}=d$, with $d$ taking values in
the set $\\{16,20,24,28\\}$. For each pillar size, class combination and
method, we train one model on objects appearing in the full-scene and another
on objects appearing in the near sub-scene. We use Car and
Pedestrian+Cyclist class combinations as in the original PointPillars and
TANet.
* •
We evaluate each model on the full-scene and on the near sub-scene. Therefore,
each model is evaluated on scenes of size equal to those used during its
training process and on scenes with a different size compared to the scenes
used during its training process.
* •
When evaluating the models using objects belonging to the near sub-scene, we
measure their performance considering all ground-truth objects in the full
scene and considering only the ground-truth objects inside the near sub-scene.
These experiments let us compare the performance of the trained models when
applied to the full-scene and to the near sub-scene, and calculate the
performance drop between the two cases. This alone cannot fully characterize
the ability of the models to detect objects in the far sub-scene, due to the
uneven class and difficulty distributions of objects between the near and far
sub-scenes; thus, the evaluation on the near sub-scene considering only the
ground-truth objects inside it is used to determine the performance loss
caused by not detecting the objects inside the far sub-scene.
## IV Experiments
We analyze the performance of models obtained using two voxel-based methods,
i.e. PointPillars [9] and TANet [10]. We follow the configurations of these
two methods, i.e. we use a scene with limits $[0,69.12]$ for forward-axis,
$[-39.68,39.68]$ for left-right axis and $[-3,1]$ for vertical axis for class
Car; and a scene with limits $[0,47.36]$ for forward-axis, $[-19.84,19.84]$
for left-right axis and $[-2.5,0.5]$ for vertical axis for classes Pedestrian
and Cyclist. These limits were designed for the voxel size $16$ and are
slightly adjusted for the other voxel sizes, so that the resulting pseudo-
images have a width and a height dimensions that are multiples of 8, which is
required by the structure of the methods’ FCN modules. The near sub-scene for
class Car has limits $[0,34.56]$ for forward-axis, $[-39.68,39.68]$ for left-
right axis and $[-3,1]$ for vertical axis, while for the classes Pedestrian
and Cyclist it has limits $[0,23.68]$ for forward-axis, $[-19.84,19.84]$ for
left-right axis and $[-2.5,0.5]$ for vertical axis.
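The multiple-of-8 constraint on the pseudo-image dimensions can be checked directly. Below, all lengths are expressed in whole centimeters to avoid floating-point noise, and it is assumed (the paper does not state the unit explicitly) that the {16, 20, 24, 28} model labels denote the pillar side in centimeters:

```python
def grid_dims_cm(x_cm, y_extent_cm, d_cm):
    """Pseudo-image (H, W) in pillars, for scene lengths given in centimeters."""
    return x_cm / d_cm, y_extent_cm / d_cm

# Car scene designed for voxel size 16: 69.12 m forward, 79.36 m left-right.
for d in (16, 20, 24, 28):
    H, W = grid_dims_cm(6912, 7936, d)
    fits = H == int(H) and W == int(W) and int(H) % 8 == 0 and int(W) % 8 == 0
    print(d, H, W, "fits" if fits else "limits need slight adjustment")
# Only d=16 fits the original limits without adjustment, which is why the
# limits are slightly adjusted for the other voxel sizes.
```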
We train each model for 160 epochs with batch size 2 and evaluate on the
$3,769$ samples from the evaluation subset of KITTI [10]. The Average
Precision (AP) [29] metric is used to evaluate detection accuracy on the three
object difficulty levels defined in the dataset, namely easy, moderate and
hard. The object difficulty level depends on the size of its 2D projection on
the camera plane and its occlusion level [26]. Each model is evaluated on a
desktop GPU NVIDIA GeForce GTX 1080Ti and the embedded platforms NVIDIA Jetson
Tx2 and NVIDIA Jetson AGX Xavier. We use the MAXN power mode for both Tx2 and
Xavier for maximum performance. The results of evaluation are given in Figure
2.
Figure 3: Performance evaluation of PointPillars and TANet models on the full-scene and the near sub-scene, considering only ground-truth objects inside the selected sub-scene. Models with names $\\{16,20,24,28\\}$ have a corresponding voxel size and are trained on the full-scene. Models with suffix “-near” are trained on the near sub-scene. Each model is evaluated on both the full-scene and the near sub-scene. Red circles represent AP values of TANet models and turquoise triangles represent AP values of PointPillars models.

Figure 4: FPS evaluation of PointPillars and TANet models on the full-scene and the near sub-scene on a desktop GPU and on embedded systems. Models with names $\\{16,20,24,28\\}$ have a corresponding voxel size. Red circles represent FPS values of TANet models and turquoise triangles represent FPS values of PointPillars models.

TABLE I: Distribution of object classes in the evaluation subset of the KITTI dataset (top) and of object difficulty levels in the far sub-scene (bottom).

Car | Pedestrian | Cyclist
---|---|---
75% | 15% | 4%

Easy | Moderate | Hard
---|---|---
14% | 71% | 15%
As can be seen in Figure 2, the drop in performance between evaluating on the
full-scene and on the near sub-scene is nearly identical for each class across
the different difficulty levels, and is the lowest for objects of the easy
difficulty level and class Car. This can be explained by the distributions of
classes and difficulties given in Table I. Class Car is the most represented
class in the dataset, and the easy difficulty level is the least present in
the far sub-scene, meaning that too few objects are affected to make a
noticeable difference in performance.
Performance for class Pedestrian changes the least, which can be explained
either by a lack of objects of this class in the far sub-scene, or by the
inability of the models to detect these objects in the far sub-scene.
Considering the difference between the evaluations on the full-scene and the
near sub-scene for class Pedestrian in Figure 3, we can conclude that there
are enough Pedestrian objects in the far sub-scene to cause a larger
difference, but they remain undetected by the models.
Comparing the performance of models trained on the full-scene and on the near
sub-scene, it can be seen that their results are almost identical, but in some
cases models trained on the near sub-scene outperform the corresponding models
trained on the full-scene (e.g., 16-near for Pedestrian+Cyclist). However,
these models have never been trained on objects of the far sub-scene, meaning
that they apply features learned from near objects to far objects having a
different point cloud structure. The fact that the models trained on near
objects achieve better performance indicates that models trained on the
full-scene fail to learn separate features for far objects and instead apply
the features learned from near objects to all objects in the scene.
As shown in Figure 4, applying a model on the near sub-scene leads to a $25\%$
average increase in FPS on a desktop GPU, a $40\%$ increase on an NVIDIA
Jetson AGX Xavier, and a $60\%$ increase on an NVIDIA Jetson Tx2. The Tx2 is
the least powerful system among those tested; therefore, only the $28$
PointPillars model applied on the near sub-scene can run in real time on it
for a Lidar with a sampling rate of $10$ Hz. On the Xavier, the $16$
PointPillars model applied on the near sub-scene and the $20$-$28$
PointPillars models applied on both the full-scene and the near sub-scene can
also be considered real-time for a Lidar with a sampling frequency of $10$ Hz.
The near $28$ PointPillars model is close to real time for a $20$ Hz Lidar.
## V Conclusions and future work
In this paper, we analyzed the performance of two popular voxel-based 3D
object detection methods providing a good compromise between high performance
and speed based on their ability to detect objects appearing in locations of
the scene that are far away from the Lidar sensor, and their speed when
deployed on embedded platforms equipped with high-performance GPUs. Our
analysis shows that these methods mostly fail to detect distant small objects
due to the sparsity of the input point clouds at large distances. Moreover,
models trained on near objects achieve similar or better performance compared
to those trained on all objects in the scene. This means that the models learn
object appearance representations mostly from near objects. Our findings
suggest that a considerable part of the computations of existing methods is
focused on locations of the scene that do not contribute with successful
detection. This means that the methods can achieve a speed-up of $40$-$60\%$
by restricting operation to near objects while not sacrificing much in
performance. A possible remedy towards addressing these limitations of voxel-
based 3D object detection methods could be the application of complementary
models that can achieve high performance on lower-resolution pseudo-images
encoding the contents of the far sub-scene, in combination with the model
operating on the near sub-scene, as this approach can lead to an increase in
performance and in the total inference speed.
## Acknowledgement
This work has received funding from the European Union’s Horizon 2020 research
and innovation programme under grant agreement No 871449 (OpenDR). This
publication reflects the authors’ views only. The European Commission is not
responsible for any use that may be made of the information it contains.
TANet inference speed: During the evaluation of TANet
(https://github.com/happinesslz/TANet) we noticed that the ratio of the
inference speeds of TANet and PointPillars was lower than the stated 30 to 60
FPS would suggest. The TANet code has timers that count the inference time of
separate modules, the total inference time for the evaluation pass, and the
number of samples processed. The average inference time is computed by
dividing the total time by the number of processed samples. We implemented
additional timers that count the FPS of each inference and the average FPS
over the whole evaluation pass independently. The resulting FPS values were
quite different, and the reason is that TANet counts each processed frame
twice: once in voxelnet.py::predict_coarse and a second time in
voxelnet.py::predict_refine, increasing the value of _total_inference_count
by batch_size on every pass. Both functions are called to create the final
prediction, so the inference time is counted once per prediction, but the
frame counter is incremented twice, reporting a higher FPS than the true one.
## References
* [1] B. Xu and Z. Chen, “Multi-Level Fusion Based 3D Object Detection From Monocular Images,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2018.
* [2] Z. Qin, J. Wang, and Y. Lu, “MonoGRNet: A Geometric Reasoning Network for Monocular 3D Object Localization,” in _AAAI Conference on Artificial Intelligence_ , 2019.
* [3] I. Barabanau, A. Artemov, E. Burnaev, and V. Murashkin, “Monocular 3D Object Detection via Geometric Reasoning on Keypoints,” _arXiv:1905.05618_ , 2019.
* [4] A. Simonelli, S. R. Bulò, L. Porzi, M. Lopez-Antequera, and P. Kontschieder, “Disentangling Monocular 3D Object Detection,” in _IEEE/CVF International Conference on Computer Vision_ , 2019.
* [5] Y. Wang, W. Chao, D. Garg, B. Hariharan, M. E. Campbell, and K. Q. Weinberger, “Pseudo-LiDAR From Visual Depth Estimation: Bridging the Gap in 3D Object Detection for Autonomous Driving,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2019.
* [6] Y. You, Y. Wang, W.-L. Chao, D. Garg, G. Pleiss, B. Hariharan, M. Campbell, and K. Q. Weinberger, “Pseudo-LiDAR++: Accurate Depth for 3D Object Detection in Autonomous Driving,” in _International Conference on Learning Representations_ , 2020.
* [7] C. Li, J. Ku, and S. L. Waslander, “Confidence Guided Stereo 3D Object Detection with Split Depth Estimation,” _arXiv:2003.05505_ , 2020.
* [8] Y. Zhou and O. Tuzel, “VoxelNet: End-to-End Learning for Point Cloud Based 3D Object Detection,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2018.
* [9] A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom, “PointPillars: Fast Encoders for Object Detection from Point Clouds,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2019.
* [10] Z. Liu, X. Zhao, T. Huang, R. Hu, Y. Zhou, and X. Bai, “TANet: Robust 3D Object Detection from Point Clouds with Triple Attention,” in _AAAI Conference on Artificial Intelligence_ , 2020.
* [11] Q. Chen, L. Sun, Z. Wang, K. Jia, and A. L. Yuille, “Object as Hotspots: An Anchor-Free 3D Object Detection Approach via Firing of Hotspots,” in _European Conference on Computer Vision_ , 2020.
* [12] C. R. Qi, W. Liu, C. Wu, H. Su, and L. J. Guibas, “Frustum PointNets for 3D Object Detection From RGB-D Data,” in _2018 IEEE Conference on Computer Vision and Pattern Recognition_ , 2018.
* [13] Z. Wang and K. Jia, “Frustum ConvNet: Sliding Frustums to Aggregate Local Point-Wise Features for Amodal 3D Object Detection,” in _IEEE/RSJ International Conference on Intelligent Robots and Systems_ , 2019.
* [14] S. Vora, A. H. Lang, B. Helou, and O. Beijbom, “PointPainting: Sequential Fusion for 3D Object Detection,” in _IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020.
* [15] X. He, S. Bai, J. Chu, and X. Bai, “An Improved Multi-View Convolutional Neural Network for 3D Object Retrieval,” _IEEE Transactions on Image Processing_ , vol. 29, pp. 7917–7930, 2020.
* [16] H. Su, S. Maji, E. Kalogerakis, and E. G. Learned-Miller, “Multi-view Convolutional Neural Networks for 3D Shape Recognition,” in _IEEE International Conference on Computer Vision_ , 2015.
* [17] B. Wu, A. Wan, X. Yue, and K. Keutzer, “SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D LiDAR Point Cloud,” in _IEEE International Conference on Robotics and Automation_ , 2018.
* [18] B. Li, T. Zhang, and T. Xia, “Vehicle Detection from 3D Lidar Using Fully Convolutional Network,” in _Robotics: Science and Systems XII_ , 2016.
* [19] M. Verucchi, L. Bartoli, F. Bagni, F. Gatti, P. Burgio, and M. Bertogna, “Real-Time clustering and LiDAR-camera fusion on embedded platforms for self-driving cars,” in _IEEE International Conference on Robotic Computing_ , 2020.
* [20] H. Rashed, M. Ramzy, V. Vaquero, A. E. Sallab, G. Sistu, and S. Yogamani, “FuseMODNet: Real-Time Camera and LiDAR based Moving Object Detection for robust low-light Autonomous Driving,” in _International Conference on Computer Vision Workshops_ , 2019.
* [21] M. Sualeh and G. W. Kim, “Visual-LiDAR Based 3D Object Detection and Tracking for Embedded Systems,” _IEEE Access_ , vol. 8, pp. 156 285–156 298, 2020.
* [22] J. Yu, H. Xie, M. Li, G. Xie, Y. Yu, and C. W. Chen, “Mobile Centernet for Embedded Deep Learning Object Detection,” in _IEEE International Conference on Multimedia and Expo Workshops_ , 2020.
* [23] Z. Wang, J. Zhang, Z. Zhao, and F. Su, “Efficient Yolo: A Lightweight Model For Embedded Deep Learning Object Detection,” in _IEEE International Conference on Multimedia and Expo Workshops_ , 2020.
* [24] X. Yang, S. Chaudhuri, L. Naviner, and L. Likforman, “Quad-Approx CNNs for Embedded Object Detection Systems,” in _IEEE International Conference on Electronics, Circuits and Systems_ , 2020.
* [25] G. Plastiras, S. Siddiqui, C. Kyrkou, and T. Theocharides, “Efficient Embedded Deep Neural-Network-based Object Detection Via Joint Quantization and Tiling,” in _IEEE International Conference on Artificial Intelligence Circuits and Systems_ , 2020.
* [26] A. Geiger, P. Lenz, and R. Urtasun, “Are we ready for autonomous driving? The KITTI vision benchmark suite,” in _IEEE Conference on Computer Vision and Pattern Recognition_ , 2012.
* [27] H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom, “nuScenes: A Multimodal Dataset for Autonomous Driving,” in _IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020.
* [28] E. Shelhamer, J. Long, and T. Darrell, “Fully Convolutional Networks for Semantic Segmentation,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , vol. 39, no. 4, pp. 640–651, 2017.
* [29] M. Everingham, L. Gool, C. K. Williams, J. Winn, and A. Zisserman, “The Pascal Visual Object Classes (VOC) Challenge,” _International Journal of Computer Vision_ , vol. 88, no. 2, pp. 303––338, 2010.
|
# Benchmarking neutrino interaction models via a comparative analysis of
kinematic imbalance measurements from the T2K, MicroBooNE and MINERvA
experiments
W. Filali<EMAIL_ADDRESS>European Organization for Nuclear Research
(CERN), 1211 Geneva 23, Switzerland. L. Munteanu<EMAIL_ADDRESS>European Organization for Nuclear Research (CERN), 1211 Geneva 23,
Switzerland. S. Dolan<EMAIL_ADDRESS>European Organization for
Nuclear Research (CERN), 1211 Geneva 23, Switzerland.
###### Abstract
Recent neutrino-nucleus cross-section measurements of observables
characterising kinematic imbalance from the T2K, MicroBooNE and MINERvA
experiments are used to benchmark predictions from widely used neutrino
interaction event generators. Given the different neutrino energy spectra and
nuclear targets employed by the three experiments, comparisons of model
predictions to their measurements break degeneracies that would be present in
any single measurement. In particular, the comparison of T2K and MINERvA
measurements offers a probe of energy dependence, whilst a comparison of T2K
and MicroBooNE investigates scaling with nuclear target. In order to isolate
the impact of individual nuclear effects, model comparisons are made following systematic alterations to the nuclear ground state, final state interactions, and multi-nucleon interaction strength. The measurements are further compared
to the generators used as an input to DUNE/SBN and T2K/Hyper-K analyses.
Whilst no model is able to quantitatively describe all the measurements,
evidence is found for mis-modelling of A-scaling in multi-nucleon interactions, and tight control over how energy is distributed among hadrons following final state interactions is likely to be crucial to achieving improved agreement. Overall, this work provides a novel
characterisation of neutrino interactions whilst offering guidance for
refining existing generator predictions.
## I Introduction
The challenge of constraining systematic uncertainties in neutrino oscillation
experiments operating in the GeV regime of neutrino energy is paramount
Alvarez-Ruso _et al._ (2018); Katori and Martini (2018). These uncertainties,
if not properly accounted for, can significantly impact the ultimate precision
of current (T2K Abe _et al._ (2011) and NOvA Ayres _et al._ (2007)) and
future (DUNE Abi _et al._ (2020a, b) and Hyper-Kamiokande Abe _et al._
(2018a)) long baseline neutrino oscillation experiments, as well as
oscillation analyses from Fermilab’s short baseline (SBN Antonello _et al._
(2015)) program. One of the primary sources of these uncertainties is the potential mismodelling of nuclear-medium effects in neutrino-nucleus interactions within the neutrino interaction Monte Carlo (MC) event generators used as an input to neutrino oscillation analyses Abe _et al._ (2023); Acero _et al._ (2022). These nuclear effects include Fermi motion,
which pertains to the inherent movement of nucleons within the nucleus; final
state interactions (FSI), referring to the re-interaction of outgoing hadrons
from an interaction with the remnant nucleus; and multi-nucleon two-particle
two-hole (2p2h) interactions, in which neutrinos interact with a correlated
pair of nucleons, bound via the exchange of a virtual meson. A detailed review
of nuclear effects are available in Refs. Alvarez-Ruso _et al._ (2018);
Katori and Martini (2018); Jachowicz and Nikolakopoulos (2021).
A global campaign of neutrino-nucleus cross-section measurements is underway,
within which a key goal is to characterise and constrain these nuclear
effects. Prior to $\sim$2016 such measurements focused mostly on the detection
of final-state muons in charged-current muon neutrino interactions, but there
is now also a rising interest for detecting and analysing final-state hadrons,
made possible with the evolution of experimental capabilities and analysis
techniques.
In this context, variables characterising kinematic imbalances between the
outgoing lepton and nucleon in pion-less neutrino-nucleus interactions have
emerged as a powerful tool Lu _et al._ (2015); Furmanski and Sobczyk (2017);
Baudis _et al._ (2023); Cai _et al._ (2020). The power of “transverse
kinematic imbalance” (TKI) variables comes with the examination of the
transverse components of the final state particles relative to the incoming
neutrino direction. Imbalances in the outgoing particles’ transverse momentum
vectors are thereby able to characterise nuclear effects without relying on
the unknown incoming neutrino energy. Differential cross-section measurements
as a function of these variables hence provide an accurate and straightforward
probe of nuclear effects.
The T2K Abe _et al._ (2018b), MicroBooNE Abratenko _et al._ (2023) and
MINERvA Lu _et al._ (2018) experiments have recently produced cross-section
measurements as a function of TKI variables. Whilst each result has been
studied independently, and some joint analyses of the T2K and MINERvA results
have been made Dolan (2018); Chakrani _et al._ (2024); Li _et al._ (2024),
there has so far not been a joint study of all three measurements. The
different sensitivities of each of these experiments makes a joint study
particularly interesting. T2K and MicroBooNE operate with relatively narrow-
band neutrino fluxes with energies around $\sim$1 GeV, while MINERvA uses a
wider-band flux extending beyond 3 GeV. A comparison of the shape of the
fluxes from the three experiments can be found in Figure 1. Additionally, the
experiments differ in their target materials: whilst T2K and MINERvA use a
hydrocarbon target, MicroBooNE uses argon. In this way, a combined analysis of
the three measurements provides the potential to study the energy- and target-
dependence of nuclear effects. In particular, due to their similar neutrino
energies, a comparison of T2K and MicroBooNE measurements highlights features
which probe the dependence of the aforementioned nuclear effects on the target
material. Conversely, a comparison of the hydrocarbon-based measurements of
T2K and MINERvA offers insight into the energy dependence of nuclear effects.
In this article, we extend the older analysis of T2K and MINERvA measurements
in Ref. Dolan (2018) to additionally consider the MicroBooNE measurement and
to confront all three with state-of-the-art neutrino event generator
predictions, including those used as an input to current oscillation analyses
as well as to sensitivity studies for the next generation of experiments. We
systematically alter the generator predictions by varying the modelling of one
nuclear effect at a time in order to both explore each result’s sensitivity
and to investigate sources of model-measurement discrepancies. The variables,
measurements, the models and the analysis strategy are defined in section II;
the results of the subsequent comparisons are shown and discussed in section
III. The conclusions are given in section IV.
Figure 1: A comparison of the shape of the incoming neutrino flux predictions used for the T2K Abe _et al._ (2013, 2015); t2k , MINERvA Aliaga _et al._ (2016); min and MicroBooNE analyses considered within this work, alongside the flux prediction for the future DUNE experiment Acciarri _et al._ (2015); dun . Note that the depicted MicroBooNE flux is the same as that used by the MiniBooNE experiment Aguilar-Arevalo _et al._ (2009). The labels “CH” and “Ar” stand for “hydrocarbon” and “argon” and indicate the primary nuclear target used by the experiments depicted in each panel.
## II Analysis strategy
The general strategy for this analysis is to qualitatively and quantitatively
compare a variety of systematically altered models to measurements of TKI
variables from the T2K, MicroBooNE and MINERvA experiments. The TKI variables
considered are defined in subsection II.1, whilst the measurements are
detailed in subsection II.2. This includes a summary of the exact signal
definition for each measurement in addition to an overview of their
statistical power, correlations and the means by which models should be
compared to them. In order to draw quantitative conclusions from model-
measurement comparisons, a $\chi^{2}$ and $p$-value analysis is performed as
described in subsection II.3. The models used to compare to the measurements
are defined in subsection II.4 before the systematic variations are described
in subsection II.5.
In this analysis, we focus on measurements of charged-current neutrino
interactions with nucleons where no mesons are observed in the final state,
often referred to as the CC0$\pi$ topology. The dominant type of microscopic
process which contributes to these topologies is the quasi-elastic (QE)
process, in which the incoming muon neutrino interacts with a neutron inside
the nucleus. The final state of such interactions before considering FSI would
be composed of a muon and a single proton. However, the measurements
considered include QE interactions on bound nucleons inside atomic nuclei, and
the presence of FSI can stimulate the emission of additional nucleons.
Moreover, nuclear effects permit other processes to contribute to the CC0$\pi$
topology: multi-nucleon interactions (which produce two outgoing nucleons
before FSI) and resonant meson production channels (sometimes referred to as
“RES” in this work) in which the final state meson (most often a pion) has
been absorbed inside the nucleus by FSI.
### II.1 Transverse Kinematic Imbalance
In neutrino nucleus interactions, nuclear effects introduce a kinematic
imbalance between the initial neutrino four-momentum and the combined four-
momenta of the final-state lepton and hadronic system. This imbalance serves
as a sensitive probe for nuclear effects, including Fermi motion, FSI, and
2p2h interactions. A complete four-dimensional analysis of kinematic
imbalance, as is used to probe nuclear ground state effects in electron
scattering (e.g. in Dutta _et al._ (2003); Khachatryan _et al._ (2021)), is
challenging in neutrino experiments due to an unknown incoming neutrino
energy. However, the imbalance in the plane transverse to the incoming
neutrino still provides a wealth of powerful information and can be quantified
by a multitude of variables that have been proposed Lu _et al._ (2016);
Baudis _et al._ (2023); Cai _et al._ (2020); Furmanski and Sobczyk (2017)
and, in many cases, measured Abe _et al._ (2018b); Lu _et al._ (2018); Cai
_et al._ (2020); Coplowe _et al._ (2020); Abratenko _et al._ (2021, 2024).
The primary variables considered in this work are schematically represented in
Figure 2. They are defined by:
$\delta p_{\textrm{T}}=|\overrightarrow{p}_{T}^{l}+\overrightarrow{p}_{T}^{p}|,$ (1)
$\delta\alpha_{\textrm{T}}=\arccos\frac{-\overrightarrow{p}_{T}^{l}\cdot\delta\overrightarrow{p}_{T}}{{p}_{T}^{l}\,\delta p_{\textrm{T}}},$ (2)
where $\overrightarrow{p}_{T}^{l}$ and $\overrightarrow{p}_{T}^{p}$ are the
transverse momenta of the outgoing muon and highest momentum (or leading)
proton, respectively.
For pure charged-current quasi-elastic (QE) interactions devoid of nuclear
effects (e.g. interactions on a free nucleon target), $\delta p_{\textrm{T}}$
would be zero. In neutrino-nucleus interactions $\delta p_{\textrm{T}}$ is
non-zero and its shape is sensitive to different nuclear effects Lu _et al._
(2015). In the presence of Fermi motion but without FSI or 2p2h, $\delta
p_{\textrm{T}}$ is, to a good approximation, the projection of the initial
state struck nucleon momentum onto the plane transverse to the incoming
neutrino, typically peaking at around 200 MeV/$c$. FSI and 2p2h tend to cause
the emission of additional particles not included in Equation 1, thereby
giving $\delta p_{\textrm{T}}$ an extended tail well above the scale of Fermi
motion. Very broadly, in the absence of any constraints on outgoing particle
kinematics, the “bulk” of $\delta p_{\textrm{T}}$ is sensitive to Fermi motion
and the “tail” to FSI and 2p2h.
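A short toy simulation illustrates the bulk: sampling nucleon momenta isotropically from a filled Fermi sphere (an RFG-like assumption with an illustrative $k_F=230$ MeV/$c$, rather than the Spectral Function used later in this work) and projecting onto the transverse plane yields a $\delta p_{\textrm{T}}$-like distribution bounded by $k_F$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
k_F = 0.23                                   # illustrative Fermi momentum [GeV/c]
k = k_F * rng.random(n) ** (1.0 / 3.0)       # |p| sampled uniformly from a filled Fermi sphere
cos_t = rng.uniform(-1.0, 1.0, n)            # isotropic nucleon directions
pt = k * np.sqrt(1.0 - cos_t**2)             # transverse projection ~ delta p_T (pure QE, no FSI/2p2h)
# Analytically the mean is (3/4) k_F * (pi/4) ~ 0.135 GeV/c, with no events above k_F.
```

FSI and 2p2h would then populate the region above $k_F$, producing the tail discussed above.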
In the presence of kinematic thresholds on the outgoing nucleon, as are
included in all experimental signal definitions, the effect of FSI is more
complicated. FSI decelerates protons, which has two consequences on the
$\delta p_{\textrm{T}}$ distribution: first, it migrates a number of protons
below the detection threshold, which sculpts the visible phase space and
reduces the overall within-signal-phase-space cross section (including in the
bulk), and second, it increases the imbalance between the proton and the muon
transverse momenta and enhances the tail of the distribution. The final
distributions will be impacted by both effects simultaneously, generally
causing a relative increase in the size of the tail with respect to the bulk,
but a lower overall cross section.
The direction of $\delta p_{\textrm{T}}$ with respect to the transverse
projection of the outgoing lepton momentum vector is described by the angle
$\delta\alpha_{\textrm{T}}$. In the absence of FSI or 2p2h,
$\delta\alpha_{\textrm{T}}$ is approximately uniformly distributed due to the
isotropic nature of Fermi motion. In the additional presence of FSI, it
provides an interesting characterisation of the deceleration the outgoing
nucleon experiences with respect to the pre-FSI kinematics. The more the
outgoing nucleon is slowed down, the higher the proportion of the cross
section is expected to be at high $\delta\alpha_{\textrm{T}}$
($\delta\alpha_{\textrm{T}}>90^{\circ}$) Lu _et al._ (2015). 2p2h interactions also cause a shift of the $\delta\alpha_{\textrm{T}}$ distribution towards higher values, due to the highest momentum outgoing proton carrying only a fraction of the transferred momentum.
Figure 2: Schematic illustration of the definition of the $\delta
p_{\textrm{T}}$ and $\delta\alpha_{\textrm{T}}$ TKI variables for charged-
current muon-neutrino interactions. The total momentum of particle $i$ is
given by $\vec{p}_{i}$, while its transverse component with respect to the
neutrino direction is represented by $\vec{p}_{T}^{\textrm{ }i}$.
$\vec{p}_{h}$ stands for the momentum vector of the hadronic system, but in
the CC0$\pi$ topology considered in this work this is constructed using the
highest momentum outgoing proton. The black filled circle represents the
initial struck nucleon; the gray plane shows the plane transverse to the
incoming neutrino direction; the orange circles and dashed lines indicate
possible final state interactions experienced by the outgoing hadrons. This
figure is adapted from Ref. Abe _et al._ (2021) which was adapted from Ref.
Lu _et al._ (2015).
The MINERvA collaboration also produced a measurement of the reconstructed
nucleon momentum ($p_{N}$), as detailed in Furmanski and Sobczyk (2017). The
variable $p_{N}$ is an estimation of the magnitude of the total momentum
imbalance, which is a composite of the longitudinal and transverse momentum
imbalances, $\delta p_{L}$ and $\delta p_{\textrm{T}}$ respectively. It is
defined by:
$p_{N}=\sqrt{\delta p_{\textrm{T}}^{2}+\delta p_{L}^{2}},$ (3)
where, $\delta p_{L}$ is the longitudinal momentum imbalance, which is
expressed as:
$\delta p_{L}=\frac{1}{2}K-\frac{\delta p_{\textrm{T}}^{2}+M_{X}^{2}}{2K},$ (4a)
where
$K=M_{A}+p_{L}^{\mu}+p_{L}^{p}-E^{\mu}-E^{p},$ (4b)
and
$M_{X}=M_{A}-M_{n}+\epsilon_{n},$ (4c)
where $M_{A}$ is the target nucleus mass, $M_{n}$ is the neutron mass, and $\epsilon_{n}$ is the neutron mean excitation energy. The value used in this study is $\epsilon_{n}=27.1$ MeV, the same as in Ref. Lu _et al._ (2018).
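For concreteness, Eqs. (3)-(4) can be evaluated numerically; the carbon and neutron masses below are standard values assumed for this illustration (all quantities in GeV):

```python
import math

M_A   = 11.1749   # carbon-12 nuclear mass [GeV/c^2] (assumed value for illustration)
M_n   = 0.93957   # neutron mass [GeV/c^2]
eps_n = 0.0271    # neutron mean excitation energy [GeV], as quoted in the text

def p_N(dpt, pL_mu, pL_p, E_mu, E_p):
    """Reconstructed initial nucleon momentum from the TKI inputs."""
    M_X = M_A - M_n + eps_n                        # Eq. (4c)
    K = M_A + pL_mu + pL_p - E_mu - E_p            # Eq. (4b)
    dpl = 0.5 * K - (dpt**2 + M_X**2) / (2.0 * K)  # Eq. (4a)
    return math.sqrt(dpt**2 + dpl**2)              # Eq. (3)

# An event constructed so that E_mu + E_p - pL_mu - pL_p = M_n - eps_n:
# both imbalance components then vanish, as for a stationary free neutron.
print(p_N(0.0, 0.5, 0.5, 0.6, 1.31247))
```

When $\delta p_{\textrm{T}}$ dominates the imbalance, $p_{N}$ reduces to approximately $\delta p_{\textrm{T}}$, as the second call below illustrates.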
### II.2 Experimental measurements
The main focus of this comparative analysis is on the measurements of the
missing transverse momentum $\delta p_{\textrm{T}}$ which has been measured by
T2K, MINERvA and MicroBooNE. As explained in the previous section, this
observable is uniquely suited to probe and disentangle nuclear effects due to
its distinctive features (i.e. QE-dominated bulk and FSI and non-QE-dominated
tail). All experiments have also measured the transverse boosting angle
$\delta\alpha_{\textrm{T}}$ and the $\delta\phi_{\textrm{T}}$ angle Lu _et
al._ (2015). We report comparisons for the $\delta\alpha_{\textrm{T}}$ angle,
which is sensitive to FSI effects, but we choose to omit comparisons to
$\delta\phi_{\textrm{T}}$ as it is less overtly sensitive to nuclear effects
than $\delta p_{\textrm{T}}$ and $\delta\alpha_{\textrm{T}}$, and additionally
is more dependent on the momentum transfer to the nucleus which varies widely
between experiments Lu _et al._ (2016). The MicroBooNE measurement also
includes the first multi-differential measurement of $\delta p_{\textrm{T}}$
in different regions of $\delta\alpha_{\textrm{T}}$, which allows for a better
separation of 2p2h interactions from QE interactions which have undergone FSI
Abe _et al._ (2019). Finally, we also report comparisons to the $p_{N}$
observable measured by the MINERvA experiment.
The cross sections measured by the three experiments set signal definitions
constrained to specific kinematic regions, as summarised in Table 1. The exact
way in which these ranges apply are subtly different between the three
experiment’s signal definitions, as is the proton momentum that is used to
reconstruct the TKI variables:
* •
For T2K, any number of protons are allowed but the proton momentum used to
build the TKI variables is always that of highest momentum proton in the
neutrino interaction. If the highest momentum proton or muon’s momenta or
angles with respect to the incoming neutrino fall outside of the kinematic
ranges given in Table 1 then the interaction does not count as signal.
* •
For MicroBooNE, only one proton is allowed inside the kinematic ranges given in Table 1 (any number of protons are allowed outside of it) and it is the momentum of this proton that goes into the calculation of the TKI variables. Note that this is not necessarily the highest momentum proton in the interaction, as the latter may lie outside of the allowed kinematic phase space. Additionally, MicroBooNE allows interactions with final state charged pions with momentum lower than 70 MeV/$c$ to be classified as signal.
* •
For MINERvA, any number of protons within the phase space constraints given in
Table 1 are allowed and it is the highest momentum proton of these that is
used to construct the TKI variables. Like in the MicroBooNE case, the highest
momentum proton is allowed to fall outside of the phase space constraints.
Analysis | $p_{\mu}$ [GeV/$c$] | cos$\theta_{\mu}$ | $p_{p}$ [GeV/$c$] | cos$\theta_{p}$
---|---|---|---|---
T2K Abe _et al._ (2018b) | $>0.25$ | $>-0.6$ | $0.45-1.2$ | $>0.4$
MicroBooNE Abratenko _et al._ (2023) | $0.1-1.2$ | - | $0.3-1$ | -
MINERvA Lu _et al._ (2018) | $1.5-10$ | $>0.94$ | $0.45-1.2$ | $>0.34$
Table 1: The kinematic phase space limits used in the signal definitions for
T2K, MicroBooNE, and MINERvA measurements considered in this work.
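The numeric ranges in Table 1 can be encoded directly; the sketch below covers only the kinematic cuts (using the T2K row as an example, with boundary conventions assumed) and not the per-experiment subtleties of the signal definitions discussed above.

```python
# Kinematic phase-space check for the T2K row of Table 1 (momenta in GeV/c).
# Boundary inclusivity is an assumption of this illustration.
def in_t2k_phase_space(p_mu, cos_theta_mu, p_p, cos_theta_p):
    return (p_mu > 0.25 and cos_theta_mu > -0.6
            and 0.45 < p_p < 1.2 and cos_theta_p > 0.4)

print(in_t2k_phase_space(0.5, 0.9, 0.6, 0.8))   # True
print(in_t2k_phase_space(0.5, 0.9, 0.3, 0.8))   # False: proton below 0.45 GeV/c
```

Analogous functions for the MicroBooNE and MINERvA rows would differ only in the numeric ranges and which cuts are applied.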
The comparisons of event generator model predictions to the cross-section
measurements of T2K and MINERvA are relatively straightforward, since the two
experiments unfold the measurement into the truth space, applying
regularization techniques (it should be noted that T2K also provides an unregularised result, but the regularisation has previously been shown to have only a very small impact on quantitative conclusions Abe _et al._ (2018b); Dolan (2018)). In contrast, MicroBooNE reports its measurements in a
linearly transformed space with respect to the true quantity, where the cross
section is smooth, using the Wiener Singular Value Decomposition Technique
Tang _et al._ (2017). This procedure allows MicroBooNE to produce smooth
cross-section measurements with no regularisation bias at the cost of
providing a measurement as a function of a variable that is smeared by a
linear transformation from the true physics quantity of interest. The
MicroBooNE result is thus accompanied by an additional smearing matrix,
referred to as the $A_{c}$ matrix, encapsulating this transform. In this work,
any model prediction compared to MicroBooNE’s measurement has been transformed
using its corresponding $A_{c}$ matrix Abratenko _et al._ (2023).
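In practice the forward-folding amounts to a single matrix multiplication: the true-space model prediction is multiplied by the released $A_{c}$ matrix before being compared to the data. The matrix and bin contents below are toy numbers, not those of the actual data release.

```python
import numpy as np

# Toy A_c smearing matrix (rows: smeared bins, columns: true bins).
Ac = np.array([[0.8, 0.2, 0.0],
               [0.1, 0.8, 0.1],
               [0.0, 0.2, 0.8]])
pred_true = np.array([4.0, 10.0, 6.0])  # model cross section in true-space bins
pred_smeared = Ac @ pred_true           # this, not pred_true, is compared to the measurement
print(pred_smeared)
```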
### II.3 $\chi^{2}$ analysis
In order to quantify the agreement between the observed data and the
theoretical predictions, a chi-squared ($\chi^{2}$) test-statistic is
employed. The $\chi^{2}$ is defined as:
$\chi^{2}=\sum_{i,j}(D_{i}-D_{i}^{MC})(A^{-1}_{cov})_{ij}(D_{j}-D_{j}^{MC}),$
(5)
where $D_{i/j}$ and $D_{i/j}^{MC}$ represent the content of bin $i/j$ in a
measurement and a generator prediction histogram, respectively, and
$A^{-1}_{cov}$ is the inverse of the covariance matrix associated with a
measurement. For each of the experiments, covariance matrices are available
from their respective data releases, encapsulating uncertainties and
correlations in the cross-section extraction.
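Eq. (5) translates into a few lines of linear algebra; the 2-bin data, prediction and covariance below are invented purely to exercise the formula.

```python
import numpy as np

def chi2(data, mc, cov):
    """Chi-squared of binned data vs. a model prediction with covariance cov, as in Eq. (5)."""
    diff = np.asarray(data, float) - np.asarray(mc, float)
    return float(diff @ np.linalg.inv(cov) @ diff)

data = np.array([10.0, 5.0])
mc   = np.array([9.0, 6.0])
cov  = np.array([[1.0, 0.2],
                 [0.2, 0.5]])  # off-diagonal terms carry the bin-to-bin correlations
print(chi2(data, mc, cov))
```

A $p$-value can then be obtained from the $\chi^{2}$ survival function (e.g. `scipy.stats.chi2.sf`) with the number of bins as the degrees of freedom.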
In addition to the $\chi^{2}$ values, $p$-values are also calculated to
provide a more intuitive estimation of the statistical significance of the
observed discrepancies between the measurements and the model predictions.
Note that, like all contemporary quantitative analyses of neutrino cross-
section measurements, any conclusions drawn from the $\chi^{2}$ and the
calculation of $p$-values assume that uncertainties on the experimental
measurements are well described using a multi-variate Gaussian as defined by
the covariance matrix provided by each analysis. However, past and recent
analyses have suggested this approximation may not always be valid, especially
for results limited mostly by systematic uncertainties Chakrani _et al._
(2024); D’Agostini (1994); Radev and Dolan (2024). Since we have no way to
test the validity of the assumption from the experimental data releases, we
proceed assuming Gaussian uncertainties, but urge readers to treat quantitative conclusions cautiously (and urge experiments to detail the extent of deviations from the Gaussian case).
### II.4 Models
To generate all the simulations used in this work, we use a variety of
generators and configurations, which are described in this section. We also
use the NUISANCE framework Stowell _et al._ (2017) to process the simulated
events from each generator using a common format.
In order to generate interactions for comparisons to T2K and MINERvA
measurements, the NEUT Hayato and Pickering (2021) generator was used
(specifically NEUT version 5.6.2), with the official flux releases associated
to the measurements Abe _et al._ (2013, 2015); t2k ; Aliaga _et al._ (2016);
min , on a hydrocarbon target.
In CC0$\pi$ topologies in these energy ranges, the majority of the signal is
populated by QE interactions. We simulate QE interactions using the Benhar SF
Benhar _et al._ (1994) model, which is used as an input to T2K’s latest neutrino
oscillation measurements Abe _et al._ (2023). Within NEUT, this model is used
to describe the initial nuclear ground state as a function of nucleon momenta
and their removal energies in a factorized approach Hayato and Pickering
(2021). The distribution of intial state nucleon momenta and removal energies
has been derived from electron scattering data and the available phase space
is broadly divided into two parts: a mean-field (MF) part, in which single
nucleons are involved in the interaction, and a region corresponding to high-
momentum short-range correlated (SRC) nucleon pairs, accounting for $\sim$5%
of the total cross section and in which only one nucleon participates in the
interaction but another “spectator” nucleon is emitted alongside it. A more detailed description of the NEUT SF model and its associated uncertainties can be found in Furmanski (2015); Chakrani _et al._ (2024).
In addition to the Benhar SF model, we also provide comparisons with two Fermi
gas-based models which are also implemented in NEUT: the Local Fermi Gas (LFG)
model developed by the Valencia group Nieves _et al._ (2011), which includes
corrections for long-range nucleon correlations based on the random phase
approximation (RPA) prescription, and the global relativistic Fermi gas (RFG)
following the prescription of Smith and Moniz Smith and Moniz (1972). Note
that in the RFG case a nucleon axial mass $M_{A}^{QE}$ of 1.21 GeV is used
(the NEUT default) as opposed to 1.03 or 1.05 GeV for SF and LFG respectively.
A direct comparison between the MicroBooNE measurement and a NEUT SF prediction on argon is not possible, since NEUT currently does not have an implementation of an
argon spectral function. The NuWro event generator Juszczak _et al._ (2006),
on the other hand, has an argon spectral function but also contains a
significantly different modelling of 2p2h and meson absorption, which would
make direct comparisons with NEUT predictions difficult to interpret. To
address this for the case of MicroBooNE, we simulate QE interactions on argon
with the NuWro version 19.02.1 SF implementation and non-QE interactions (2p2h
and RES) with NEUT to create our SF baseline prediction. This model is
referred to as the “SF*” model throughout the remainder of this paper. The
consequence of this choice is that the QE events generated with the SF* model on an argon target are also put through a different type of intra-nuclear
cascade than those from NEUT generated on other targets. To assess the impact
of this inconsistency, we also compare MicroBooNE predictions obtained with
the LFG model in NEUT with an argon target and which undergo the NEUT FSI
cascade. This is discussed in subsubsection III.1.4.
For 2p2h interactions, the NEUT model is based on the model by the Valencia
group Nieves _et al._ (2011); Gran _et al._ (2013); Schwehr _et al._
(2016). Note that NEUT contains two implementations of the Valencia model, and
in this work we opt to use the lookup table approach employed in recent T2K
measurements Abe _et al._ (2023).
Resonant meson production is simulated in NEUT using the Rein-Sehgal model
Rein and Sehgal (1981), with modifications to the axial form factor from
Graczyk and Sobczyk Graczyk and Sobczyk (2008) as well as lepton mass
corrections Berger and Sehgal (2007). RES interactions can pass selection
criteria for mesonless samples in one of two ways: for the MicroBooNE
samples, charged pions with momenta below 70 MeV/$c$ are allowed by the
selection criteria; for all other samples, RES interactions enter CC0$\pi$
samples through meson absorption processes, a type of FSI. Since the dominant
type of mesons produced in such interactions are pions, we will subsequently
refer to this process as pion absorption (though it applies, in principle, to
heavier mesons).
FSI are simulated through a semi-classical intra-nuclear cascade (INC) in both
NEUT and NuWro. The philosophy of the simulation is similar for both
generators, but they differ in several details of the implementation (notably,
in the choice of data sets used to tune the probability of each intra-nuclear
process). A more detailed review of the differences between the modeling of
hadron FSI in both NEUT and NuWro can be found in Dytman _et al._ (2021).
Whilst the NEUT SF model represents the baseline model used by the T2K
experiment for its oscillation analysis, we also consider the AR23_20i_00_000
GEN ; DUN configuration from the GENIE Andreopoulos _et al._ (2010, 2015)
event generator version 3.04.00 Alvarez-Ruso _et al._ (2021), which is used
as the baseline input model for DUNE and SBN analyses. Like NEUT LFG, GENIE
also uses the Valencia LFG model to describe QE interactions. Unlike the NEUT
LFG model, the GENIE configuration uses a few different model parameters which
will impact the predictions. Firstly, the $Q$-value (or the separation energy)
for nuclei is chosen such that the distribution of removal energies from which
the MC sampling is done covers the majority of the removal energies in the
argon Spectral Function model detailed in Jiang _et al._ (2022). Second,
unlike the NEUT LFG equivalent, the GENIE AR23_20i_00_000 model also includes
high-momentum nucleons in addition to the baseline LFG prediction, whose role
is to approximate the presence of SRCs Alvarez-Ruso _et al._ (2021). The
simulation parameters were also modified according to the nucleon-level tune
described in Tena-Vidal _et al._ (2021). The 2p2h model used in this
configuration is the SuSAv2 model Ruiz Simo _et al._ (2017); Megias _et al._
(2016), following the implementation described in Dolan _et al._ (2020). The
RES model is similar to the model in NEUT, but with the aforementioned GENIE
tune applied.
An important difference between the NEUT and GENIE simulations used in this
work is the modeling of FSI. The AR23_20i_00_000 configuration employs the so-called “hA2018Intranuke” model, which is not a semi-classical cascade model: instead, it applies data-driven predictions in a single step to determine the “fates” that hadrons undergo once produced inside the nucleus.
The main model choices for each of the generator configurations, whose comparisons will be described in subsection III.3, are summarized in Table 2.
Configuration | 1p1h model | 2p2h model | FSI model
---|---|---|---
NEUT SF | Benhar SF | Valencia | NEUT cascade
NEUT LFG | Valencia LFG | Valencia | NEUT cascade
GENIE AR23_20i_00_000 | Valencia LFG + correlated tail | SuSAv2 | GENIE hA2018
Table 2: A summary of models used for each generator configuration considered
in this work.
### II.5 Systematic variations
For this analysis, the reference model against which we have compared the
measurements and have used as a baseline to apply variations to nuclear
effects was chosen to be the NEUT Benhar Spectral Function (SF) model Benhar
_et al._ (1994), described in subsection II.4.
As described in subsection II.1, the TKI distributions offer sensitivity to
the presence and strength of different nuclear effects. In particular, the
tail of $\delta p_{T}$ is sensitive to the presence of FSI (on the outgoing
proton as well as the pions in the resonant background) and 2p2h. The
$\delta\alpha_{T}$ distribution has unique sensitivity to FSI. We investigate
the impact of these nuclear effects by varying them independently and
assessing the evolution of the agreement between the data and the generator
predictions. We apply these systematic variations by either reweighting the
events in our simulations or regenerating events with altered parameters in
the simulations.
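For reference, the two TKI observables have standard definitions in terms of the muon and proton momentum components transverse to the neutrino direction (the definitions themselves are given in section II). The sketch below, in Python with illustrative 2D input vectors, is only a restatement of those standard formulas:

```python
import numpy as np

def tki_variables(p_mu_T, p_p_T):
    """Compute the transverse kinematic imbalance (TKI) variables
    delta p_T and delta alpha_T from the muon and proton momentum
    components transverse to the neutrino direction (2D vectors).
    Note: delta alpha_T is undefined when delta p_T = 0."""
    p_mu_T = np.asarray(p_mu_T, dtype=float)
    p_p_T = np.asarray(p_p_T, dtype=float)
    # delta p_T vector: the total missing transverse momentum
    dpt_vec = p_mu_T + p_p_T
    dpt = np.linalg.norm(dpt_vec)
    # delta alpha_T: angle between -p_mu_T and the imbalance vector
    cos_a = -np.dot(p_mu_T, dpt_vec) / (np.linalg.norm(p_mu_T) * dpt)
    dalpha = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    return dpt, dalpha
```

For a neutrino scattering off a free, stationary nucleon with no FSI, the transverse momenta balance exactly and $\delta p_{\textrm{T}}=0$; Fermi motion, FSI and 2p2h all generate a nonzero imbalance.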
##### Fermi motion
The bulk of $\delta p_{\textrm{T}}$ is directly sensitive to the transverse
component of the Fermi motion inside the nucleus. We compare the reference
model (NEUT SF or SF* for argon) with the predictions from the LFG and RFG
models from NEUT.
##### Total 2p2h cross section
The total cross section for 2p2h processes is not well known, with theoretical
models differing substantially in their predictions Nieves _et al._
(2011); Martini and Ericson (2013); Ruiz Simo _et al._ (2017); Megias _et
al._ (2016). In the context of this work, we first assess the impact of the
total 2p2h cross section by scaling up the strength of 2p2h interactions by
70% uniformly across all kinematics. This number was chosen based on the
difference in the total cross-section predicted by the Valencia Nieves _et
al._ (2011), SuSAv2 Ruiz Simo _et al._ (2017) and Martini et al. Martini and
Ericson (2013) 2p2h models. Of these models, the Martini et al. 2p2h model
shows the largest difference in integrated cross section with respect to the
Valencia model for neutrinos, and 70% was taken as a representative size of
the difference. This approach tests the impact of increasing the total cross
section of 2p2h interactions, but does not build in any shape-related freedom.
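As an illustration, such a flat scaling can be implemented as a per-event weight applied only to the 2p2h channel. The sketch below assumes a hypothetical event-record layout (a `channel` label and a `weight` field) rather than any actual NEUT or GENIE data structure:

```python
def scale_2p2h(events, scale=1.7):
    """Apply a flat weight to 2p2h events (a 70% increase by default),
    leaving all other interaction channels untouched. Each event is a
    dict with a 'channel' label and a 'weight' (hypothetical layout)."""
    for ev in events:
        if ev["channel"] == "2p2h":
            ev["weight"] *= scale
    return events
```

Because the weight depends only on the channel label, the variation changes the 2p2h normalization but, by construction, carries no shape freedom in the outgoing hadron kinematics.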
There is little available theoretical guidance on the plausible types of
variations we can expect on the final state nucleon kinematics, and generators
predict the latter based only on the available interaction phase space.
Varying the shape of, for example, the outgoing proton momentum spectrum may
also introduce sizable variations to the TKI distributions but, given the lack
of guidance from theory on what these variations should be, we leave such
studies for future work (although promising work in Refs. Sobczyk _et al._
(2020); Martinez-Consentino _et al._ (2024) may soon change this). For a
discussion of the most extreme variations predicted by generators on the
outgoing nucleon kinematics, see Ref. Bathe-Peters _et al._ (2022).
##### Nucleon FSI
To gauge the impact of nucleon FSI on the features of the distributions, we
vary the mean free path (MFP) of protons inside the nucleus by $\pm$30%. This
value was chosen based on Ref. Niewczas and
Sobczyk (2019), in which it is shown that an alteration of this size
encompasses the spread of nuclear transparency measurements from various
sources on a carbon target. An increase (decrease) of the MFP by 30%
corresponds to an increase (decrease) in the nuclear transparency, thereby
decreasing (increasing) the probability that a proton
undergoes FSI. We will refer to these alterations as “strong/more” and
“weak/less” FSI for the $-30\%$ and $+30\%$ variations respectively. These
alterations are applied by regenerating the simulations with altered values of
the MFP, and the same approach is applied to the NEUT models (SF and LFG), as
well as the QE component from NuWro in the SF* model.
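The relation between the MFP and the nuclear transparency can be illustrated with the idealized exponential-attenuation picture. This is a sketch only: the actual cascade models propagate hadrons through a position-dependent nuclear density rather than a constant MFP:

```python
import math

def transparency(path_length_fm, mfp_fm):
    """Probability that a proton traverses a path of the given length
    (in fm) without re-interacting, under an idealized constant mean
    free path (exponential attenuation)."""
    return math.exp(-path_length_fm / mfp_fm)
```

In this picture, shortening the MFP by 30% lowers the transparency for any fixed path length, i.e. more FSI, consistent with the “strong/more FSI” labeling above.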
##### Pion FSI
We take a similar approach for pion absorption events. The NEUT intra-nuclear
cascade model has been tuned to world pion-nucleus scattering data Pinzon
Guerra _et al._ (2019), and as a result the underlying parameters governing
the cascade have a data-driven constraint. Since RES events can only end up in
the mesonless samples used in this analysis via pion absorption processes, we
vary the probability of this fate within the cascade. The variation we apply
is $\pm$43%, on top of the tuned absorption probability used by NEUT (which
is itself 40% larger than that prescribed by Salcedo and Oset Salcedo _et
al._ (1988)), following the prescription in Ref. Pinzon Guerra _et al._
(2019). However, none of the T2K or MicroBooNE measurements exhibited any
sizable sensitivity to this alteration (due to the lower RES rate in their
energy regimes), so we will only present variations of this parameter for the
MINERvA measurements.
## III Results
A summary of the $p$-values obtained between each systematic variation and
each measurement is available in Table 3. This high-level analysis indicates
that all models are statistically rejected by the MINERvA measurements (i.e.
all $p$-values are $<0.05$, with one marginal exception), but not by the T2K
and MicroBooNE measurements. Consequently, our quantitative analysis focuses
on comparisons with the T2K and MicroBooNE measurements. Although the MINERvA
measurements quantitatively
exclude all models, it remains crucial to consider these results
qualitatively, especially in the context of insights gained from the two other
experiments.
Measurement | $N_{bins}$ | SF/SF* | LFG | RFG | More 2p2h | More FSI | Less FSI | More $\pi$ abs. | Less $\pi$ abs.
---|---|---|---|---|---|---|---|---|---
T2K $\delta\alpha_{\textrm{T}}$ | 8 | 0.01 | 0.00 | 0.00 | 0.00 | 0.00 | 0.02 | 0.06 | 0.02
T2K $\delta p_{\textrm{T}}$ | 8 | 0.08 | 0.69 | 0.00 | 0.00 | 0.02 | 0.07 | 0.00 | 0.18
MINERvA $\delta\alpha_{\textrm{T}}$ | 12 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.06 | 0.00 | 0.00
MINERvA $\delta p_{\textrm{T}}$ | 24 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
MINERvA $p_{N}$ | 24 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00
MicroBooNE $\delta\alpha_{\textrm{T}}$ | 7 | 0.02 | 0.45 | 0.62 | 0.07 | 0.18 | 0.00 | 0.02 | 0.01
MicroBooNE $\delta p_{\textrm{T}}$ | 13 | 0.12 | 0.42 | 0.00 | 0.33 | 0.23 | 0.02 | 0.13 | 0.10
MicroBooNE $\delta p_{\textrm{T}}$ low $\delta\alpha_{\textrm{T}}$ | 11 | 0.26 | 0.23 | 0.14 | 0.37 | 0.44 | 0.10 | 0.28 | 0.24
MicroBooNE $\delta p_{\textrm{T}}$ mid-low $\delta\alpha_{\textrm{T}}$ | 12 | 0.07 | 0.40 | 0.19 | 0.23 | 0.38 | 0.00 | 0.08 | 0.06
MicroBooNE $\delta p_{\textrm{T}}$ mid-high $\delta\alpha_{\textrm{T}}$ | 13 | 0.04 | 0.23 | 0.02 | 0.16 | 0.22 | 0.01 | 0.05 | 0.04
MicroBooNE $\delta p_{\textrm{T}}$ high $\delta\alpha_{\textrm{T}}$ | 13 | 0.03 | 0.13 | 0.08 | 0.12 | 0.09 | 0.01 | 0.04 | 0.03
Table 3: $p$-values obtained from $\chi^{2}$ under a Gaussian error
approximation between different models and measurements as well as to
systematic variations of the SF/SF* models. $N_{bins}$ gives the number of
bins for each measurement. $p$-values below 0.05, broadly indicating model
rejection, are marked in red.
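The $p$-values in Table 3 follow the usual $\chi^{2}$ construction under a Gaussian error approximation. A minimal sketch with hypothetical inputs is given below; the inverse of the measurement's bin-to-bin covariance must be supplied, and the survival function is written out explicitly for integer degrees of freedom so the example stays self-contained:

```python
import math

def chi2_sf(x, k):
    """Survival function P(chi^2_k > x) for integer degrees of
    freedom k, using the closed forms for even and odd k."""
    if x <= 0:
        return 1.0
    h = x / 2.0
    if k % 2 == 0:
        # Q = exp(-h) * sum_{j=0}^{k/2-1} h^j / j!
        term, total = 1.0, 1.0
        for j in range(1, k // 2):
            term *= h / j
            total += term
        return math.exp(-h) * total
    # Odd k: Q = erfc(sqrt(h)) + exp(-h) * sum_j h^(j-1/2)/Gamma(j+1/2)
    total = 0.0
    for j in range(1, (k - 1) // 2 + 1):
        total += h ** (j - 0.5) / math.gamma(j + 0.5)
    return math.erfc(math.sqrt(h)) + math.exp(-h) * total

def chi2_p_value(data, prediction, cov_inv):
    """Chi-square and p-value between a measured cross section and a
    model prediction, under a Gaussian error approximation with the
    full (inverted) bin-to-bin covariance matrix as nested lists."""
    diff = [d - p for d, p in zip(data, prediction)]
    n = len(diff)
    chi2 = sum(diff[i] * cov_inv[i][j] * diff[j]
               for i in range(n) for j in range(n))
    return chi2, chi2_sf(chi2, n)
```

As in Table 3, the number of degrees of freedom is taken to be the number of bins, and a $p$-value below 0.05 is read as broadly indicating model rejection.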
Our detailed analysis of model-measurement comparisons begins with T2K and
MicroBooNE in subsection III.1, exploring how the mismodeling of nuclear
effects changes with the nuclear target. We then compare T2K and MINERvA in
subsection III.2, allowing an exploration of the energy dependence of the
mismodeling. We finish with a comparison of all three measurements to the
models used for the T2K, SBN and DUNE oscillation analyses in subsection
III.3.
### III.1 T2K vs MicroBooNE: exploring nuclear target dependence
In this section, we focus on the comparison between T2K and MicroBooNE
measurements, concentrating on the comparison of nuclear ground state models,
nucleon FSI strength and 2p2h normalization. Although T2K and MicroBooNE
operate at comparable neutrino beam energies, a major difference between the
measurements from the two experiments lies in the nuclear target for neutrino
interactions: T2K uses a hydrocarbon (CH) target, whereas MicroBooNE uses an
argon (Ar) target. The comparison of these measurements therefore allows an
identification potential issues with the way in which neutrino event
generators predict how nuclear effects change as a function of atomic number.
#### III.1.1 Breakdown by interaction channel
Due to their similar neutrino beam energies, the $\delta p_{\textrm{T}}$
distributions for T2K and MicroBooNE have a broadly similar composition of the
different interaction channels, as illustrated in Figure 3. It is clear that
the flux-integrated cross section for both measurements is dominated by the
bulk, which is composed essentially of QE interactions. The contribution of
2p2h and of RES processes that have undergone pion absorption, relative to the
QE-dominated bulk, is comparable between the two experiments in the
simulation, but the MicroBooNE measurement requires more strength in the tail.
For both measurements, the nominal model (SF) yields a
$p$-value$>$0.05, implying that neither of the measurements is capable of
excluding this model. However, it is important to recall that there is not a
complete correspondence between the SF models used for these comparisons, with
NuWro’s FSI model being applied in the SF* case, as discussed in subsubsection
III.1.2. The impact of the altered FSI treatment is further discussed in
subsubsection III.1.4. The comparison to the MicroBooNE measurement suggests
that the combination of the NuWro Ar SF model, alongside the 2p2h and RES
contribution from NEUT, lacks some strength in describing the tail of the
distribution. The bulk of the distribution is also slightly shifted towards
lower values of $\delta p_{\textrm{T}}$, but this may be an artefact of
working in the $A_{c}$ smeared space as discussed in subsection II.2.
Figure 3: Differential cross section as a function of $\delta p_{\textrm{T}}$
as predicted by the NEUT SF model for T2K (left) and the SF* model for
MicroBooNE (right), compared to the respective measurements from each
experiment. The QE, 2p2h and resonant (RES) contributions are highlighted.
A finer analysis of the agreement between the models and the measurement can
be obtained by comparing the predictions to MicroBooNE’s multi-differential
measurement of $\delta p_{\textrm{T}}$ as a function of
$\delta\alpha_{\textrm{T}}$, which is shown in Figure 4. As described in
section II, generally large values of $\delta\alpha_{\textrm{T}}$ imply a more
important role of FSI. It is therefore unsurprising to see that, in the low
$\delta\alpha_{\textrm{T}}$ measurement where nuclear effects beyond Fermi
motion are expected to be small, the $\delta p_{\textrm{T}}$ distribution is
almost entirely dominated by QE interactions concentrated almost exclusively
in the bulk. The small amount of remaining 2p2h and RES contributions is also
shifted towards lower values of $\delta p_{\textrm{T}}$. In this region, the
agreement between the simulation and the measurement is very good (a $p$-value
of 0.26), suggesting a good description of the Fermi momentum by the Ar SF
model in NuWro. At $45^{\circ}<\delta\alpha_{\textrm{T}}<90^{\circ}$, the
$\delta p_{\textrm{T}}$ distribution exhibits a slightly more pronounced tail
which is still dominated by QE interactions, consistent with the increase in
the strength of proton FSI. The 2p2h and RES contributions remain small, but
are now shifted away from under the bulk of the distribution. The SF*
simulation of the bulk is also shifted towards slightly lower values of
$\delta p_{\textrm{T}}$ with respect to the measurement, and it is apparent
that the simulation lacks strength in the description of the tail of the
measurement. As a result, the quantitative agreement between measurement and
simulation is poorer (giving a $p$-value of 0.07). A similar (and more
pronounced) evolution can be seen at
$90^{\circ}<\delta\alpha_{\textrm{T}}<135^{\circ}$: the simulation clearly
underpredicts the tail of the measurement as well as the bulk, and the
quantitative agreement is poor (giving a $p$-value of 0.04). We also note the
increase in the 2p2h and RES contributions across the entire distribution.
Finally, in the FSI-rich region of
$135^{\circ}<\delta\alpha_{\textrm{T}}<180^{\circ}$, the disagreement between
simulation and measurement is accentuated in both the tail and the bulk,
despite remaining quantitatively similar; we also observe a slight increase in
the strength of the 2p2h and RES contributions, but neither is sufficient to
reach the measurement.
Overall, the relatively good agreement with MicroBooNE’s low
$\delta\alpha_{\textrm{T}}$ measurement, which gets progressively worse toward
larger $\delta\alpha_{\textrm{T}}$, may suggest a lack of strength in the
2p2h, RES or nucleon FSI contributions for neutrino interactions on an argon
target. Conversely, this does not appear to be the case for CH, as the
simulation is able to accurately predict the strength of the $\delta
p_{\textrm{T}}$ distribution for T2K. In the following sections, we vary each
of these effects individually and simultaneously for T2K and MicroBooNE in
order to identify possible areas of model improvement.
Figure panels: $0^{\circ}<\delta\alpha_{\textrm{T}}<45^{\circ}$, $45^{\circ}<\delta\alpha_{\textrm{T}}<90^{\circ}$, $90^{\circ}<\delta\alpha_{\textrm{T}}<135^{\circ}$, $135^{\circ}<\delta\alpha_{\textrm{T}}<180^{\circ}$.
Figure 4: Multi-differential cross-section as a function of $\delta
p_{\textrm{T}}$ and $\delta\alpha_{\textrm{T}}$ from MicroBooNE, compared with
predictions using the combined SF* model. The figures showcase the breakdown
by interaction mode into QE, 2p2h and resonant interactions (RES), spanning
four regions of $\delta\alpha_{\textrm{T}}$.
#### III.1.2 Nuclear ground state
Figure 5 presents the comparison of nuclear ground state models for each
experiment. For T2K, the SF and LFG models both align relatively well with the
measurement. Conversely, in the context of MicroBooNE, the LFG model from NEUT
aligns better with the measurement than the combined SF* model from NEUT and
NuWro. However, this is at least partially due to the differing FSI model for
QE interactions rather than the change of nuclear ground state, which is
discussed further in subsubsection III.1.4. In both cases, the RFG model is
clearly excluded by the measurement. It is notable that the RFG model predicts
the expected “Fermi cliff” feature (the sharp drop off in strength at the
Fermi momentum) for T2K but that this is washed out by the smearing applied to
compare to the MicroBooNE measurement.
It is interesting to note the difference in the bulk of $\delta
p_{\textrm{T}}$ for both MicroBooNE and T2K. For T2K, the bulk predicted by
the SF model broadly aligns with that predicted by the LFG model, whereas the
bulk predicted by the LFG model is significantly smaller than the one for the
SF* model for the MicroBooNE comparison. Whilst it is tempting to assign this
to different treatments of FSI (with NEUT FSI to SF and LFG, and NuWro FSI to
SF*), the two FSI models result in a very similar proportion ($\sim$75%) of QE
interactions passing MicroBooNE’s signal definition and it is actually the
NuWro model (affecting SF*) that migrates the larger portion of events outside
of the phase space constraints (see subsubsection III.1.4 and Table 4). This
implies an expectation for FSI not to lower the LFG prediction with respect to
SF* in the $\delta p_{\textrm{T}}$ bulk. This is also consistent with the
comparison of the models to the multi-differential MicroBooNE measurement
shown in Figure 6, which displays large differences between LFG and SF* in the
regions where $\delta\alpha_{\textrm{T}}<90^{\circ}$, where FSI is expected to
be less impactful. In fact, before the phase space constraints defined in
Table 1, the total cross section predicted by the SF(SF*) model is about
5%(10%) higher than that predicted by the LFG model for T2K(MicroBooNE). After
applying the low proton momentum constraints from Table 1, the ratio between
the SF* and LFG total cross section predictions stays relatively constant for
the MicroBooNE case, whereas for T2K it brings the SF prediction to about the
same level in the bulk as that predicted by LFG (as is observed in Figure 5).
This is primarily due to the more stringent T2K cuts on low momentum protons
with respect to those applied by MicroBooNE. Indeed, the impact of raising the
proton momentum threshold from 300 MeV/$c$ to 450 MeV/$c$ in T2K simulations
lowers the total SF cross section by $\sim$35%, and the total LFG cross
section by $\sim$25%. This indicates that the main driver for the suppression
of the SF cross section in the case of the T2K predictions is the removal of
protons whose momenta are between 300-450 MeV/$c$.
The larger reduction of the SF cross section with respect to LFG when applying
cuts that require large values of proton momentum may be expected from the
fact that the low energy transfer portion of the LFG cross section (broadly
corresponding to low proton momentum) is much smaller than in SF. This is
because the LFG model includes a cross-section suppression from long-range
nucleon correlations via the random phase approximation. At larger energy
transfers (corresponding to cuts requiring higher proton momentum, as in T2K’s
signal definition) the LFG and SF model cross sections are more similar. We
also note that applying a similar suppression to the SF* model prediction of
the MicroBooNE measurement, via an optical potential correction from Ref.
Ankowski _et al._ (2015), provides a cross section with a more similar
normalisation to the NEUT LFG model (both before and after applying the
MicroBooNE kinematic phase space constraints). This potentially suggests that
the larger prediction for the MicroBooNE $\delta p_{\textrm{T}}$ measurement
bulk from SF* is related to the use of a calculation based almost solely on
the plane wave impulse approximation (PWIA) and in particular without any
consideration of the nuclear potential that the outgoing nucleon experiences
or of long range nucleon correlations. The MicroBooNE measurement is more
sensitive to physics beyond PWIA than T2K or MINERvA, thanks to its lower
proton tracking threshold giving access to interactions with lower energy
transfers.
Figure 5: Differential cross section as a function of $\delta p_{\textrm{T}}$
using different nuclear model predictions compared with the measurements from
T2K (left) and MicroBooNE (right). The different nuclear models are the NEUT
SF, LFG and RFG models for T2K and the SF*, NEUT LFG and NEUT RFG models for
MicroBooNE.
Figure 6 additionally shows that, at low $\delta\alpha_{\textrm{T}}$ values,
the performance of the LFG and SF* models is roughly equivalent ($p$-values of
0.23 and 0.26 respectively), and better than that of the RFG model ($p$-value
of 0.14). With increasing $\delta\alpha_{\textrm{T}}$, it appears that the
best agreement overall is achieved by the LFG model, which applies the NEUT
intra-nuclear cascade simulation, unlike the SF* model which uses the NuWro
FSI model. At $135^{\circ}<\delta\alpha_{\textrm{T}}<180^{\circ}$, we can see
that both the LFG and RFG models show higher predictions in the tail of the
distribution and show better agreement with the measurement compared to the
SF* model. In the low $\delta\alpha_{\textrm{T}}$ region (top row of Figure
6), the visual disagreement between the bulks predicted by the SF* and LFG
models appears significant, whereas in the bottom row the models show a more
consistent description of the bulk. However, the difference should not be
interpreted only visually, due to the large correlations (particularly in the
tails of the distributions) and to the fact that the measurement is reported
in the smeared space.
It is additionally useful to examine the agreement between the different
models with MicroBooNE’s $\delta\alpha_{\textrm{T}}$ measurement, shown in
Figure 7. We can note that, although the normalization of the LFG and RFG
distributions is different, their shape as a function of
$\delta\alpha_{\textrm{T}}$ is similar, and both are quite different from that
predicted by the SF* model. This is not surprising, as the FSI model applied
to protons using the LFG and RFG models is identical (NEUT intra-nuclear
cascade), and different from the NuWro FSI model applied to the SF* model. We
also note the better quantitative agreement shown in Table 3 between the
purely NEUT-based simulations compared to the NuWro FSI model and the
MicroBooNE measurement. This is further discussed in subsubsection III.1.4.
Overall, it appears that LFG provides the best description of the T2K and
MicroBooNE measurements. Although both SF/SF* and LFG provide acceptable
$p$-values in the measurements of $\delta p_{\textrm{T}}$, LFG is better
suited to describing MicroBooNE’s measurement of $\delta\alpha_{\textrm{T}}$
and, in the high $\delta\alpha_{\textrm{T}}$ bins, its multi-differential
measurement of the two. However, we repeat that this preference at higher
$\delta\alpha_{\textrm{T}}$ may be driven by the differing FSI models. The RFG
model is excluded by both the T2K and MicroBooNE $\delta p_{\textrm{T}}$
measurements.
Figure panels: $0^{\circ}<\delta\alpha_{\textrm{T}}<45^{\circ}$, $45^{\circ}<\delta\alpha_{\textrm{T}}<90^{\circ}$, $90^{\circ}<\delta\alpha_{\textrm{T}}<135^{\circ}$, $135^{\circ}<\delta\alpha_{\textrm{T}}<180^{\circ}$.
Figure 6: Multi-differential cross-section as a function of $\delta
p_{\textrm{T}}$ and $\delta\alpha_{\textrm{T}}$ from MicroBooNE, compared with
predictions using the SF*, NEUT LFG and NEUT RFG models, for different regions
of $\delta\alpha_{\textrm{T}}$. Figure 7: $\delta\alpha_{\textrm{T}}$
measurement from MicroBooNE, compared with the cross section predictions from
the combined SF*, NEUT LFG and NEUT RFG simulations.
#### III.1.3 2p2h
As discussed in section II and subsubsection III.1.1, the effect of 2p2h
interactions shows up predominantly in the tails of the $\delta
p_{\textrm{T}}$ distribution, and generally more at high
$\delta\alpha_{\textrm{T}}$. In this section, we increase the total cross
section of 2p2h interactions by 70% uniformly across neutrino energy, which
brackets the variations predicted by available models in neutrino generators,
as discussed in subsection II.5.
Figure 8 shows the agreement between the nominal models and the modified
simulations with the measurements from T2K and MicroBooNE. The T2K measurement
clearly disfavors increasing the strength of 2p2h interactions, whereas the
MicroBooNE measurement shows the opposite preference. It is apparent that the
main effect of increasing the strength of 2p2h interactions is to increase the
strength of the tail of the $\delta p_{\textrm{T}}$ distribution, but both
samples are highly dominated by QE interactions, as shown in Figure 3, and
thus the effect is limited.
Figure 8: Differential cross section measurement as a function of $\delta
p_{\textrm{T}}$ from T2K (left) and MicroBooNE (right), compared with
predictions using the NEUT SF and the combined SF* model, respectively. The
measurements are compared with the same simulations where the total cross
section of 2p2h processes has been increased by 70% uniformly across neutrino
energies (labeled as “More 2p2h” in the legends).
Since the tail of the $\delta p_{\textrm{T}}$ distribution has contributions
from both 2p2h processes and QE interactions where the protons have
undergone FSI, it is useful to attempt to separate these effects with the
multi-differential MicroBooNE measurement. Figure 9 shows the evolution of the
agreement between the two simulations as a function of the measured value of
$\delta\alpha_{\textrm{T}}$. For all measurements, it is apparent that
increasing the strength of 2p2h interactions improves the agreement with the
measurement, although this trend is less pronounced at low
$\delta\alpha_{\textrm{T}}$, which is expected since there is less 2p2h in
general (see Figure 4). However, even the large increase of 2p2h considered
here is far from sufficient to describe the tails of the MicroBooNE
measurement at large $\delta\alpha_{\textrm{T}}$.
Figure panels: $0^{\circ}<\delta\alpha_{\textrm{T}}<45^{\circ}$, $45^{\circ}<\delta\alpha_{\textrm{T}}<90^{\circ}$, $90^{\circ}<\delta\alpha_{\textrm{T}}<135^{\circ}$, $135^{\circ}<\delta\alpha_{\textrm{T}}<180^{\circ}$.
Figure 9: Multi-differential cross-section as a function of $\delta
p_{\textrm{T}}$ and $\delta\alpha_{\textrm{T}}$ from MicroBooNE, compared with
predictions using the SF* model, and the same model in which the 2p2h
interaction cross section is increased by 70% uniformly across neutrino
energies (labelled as “More 2p2h” in the legends).
While it is likely that the 2p2h contribution is not solely responsible for
the disagreement between the MicroBooNE measurement and the simulations, the
comparisons do seem to suggest that there may be an issue in
the modeling of the scaling of the 2p2h cross section with atomic number when
considering the nominal simulation’s reasonable agreement with the T2K
measurement. In addition to having a much larger atomic number, argon is,
unlike carbon, a non-isoscalar nucleus. The NEUT implementation of the
Valencia 2p2h model uses precomputed tensor tables for a number of isoscalar
nuclei to calculate the total 2p2h cross section. In the absence of such a
table for a specific nucleus (which is the case for argon), NEUT uses the
table for the available nucleus with the closest atomic number and scales the
cross section to the atomic number of argon. Isoscalarity may have a direct
impact on the rate of 2p2h interactions (as it modifies the fraction of $np$
and $nn$ initial state nucleon pairs), and other models, such as GiBUU Buss
_et al._ (2012), predict that the scaling of the cross section depends on the
difference between the numbers of protons and neutrons inside the nucleus
Dolan _et al._ (2018).
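The nearest-table scaling described above can be sketched as follows; the table values and the purely linear scaling in mass number are illustrative assumptions, not a statement of the actual NEUT internals:

```python
# Hypothetical per-nucleus 2p2h cross-section tables, keyed by mass
# number A (the values are illustrative placeholders, not real
# cross sections; note there is no entry for argon, A = 40).
TENSOR_TABLES = {12: 1.00, 16: 1.35, 56: 4.4}

def scaled_2p2h_xsec(target_A):
    """Approximate the 2p2h cross section for a nucleus with no table
    of its own by taking the nearest available mass number and scaling
    linearly in A, mimicking the nearest-table scheme described above."""
    nearest_A = min(TENSOR_TABLES, key=lambda a: abs(a - target_A))
    return TENSOR_TABLES[nearest_A] * target_A / nearest_A
```

A purely linear scaling of this kind carries no information about isoscalarity, which is precisely the concern raised above for a non-isoscalar nucleus such as argon.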
#### III.1.4 FSI
In this section, we report studies on the effect of nucleon FSI variations. As
stated in subsection II.1, FSI affects the TKI distributions, and in
particular those of $\delta p_{\textrm{T}}$, in a complex way, modifying both
the tail and the bulk. Moreover, whilst some aspects of generator FSI modeling
can be benchmarked by sophisticated theory calculations Franco-Patino _et
al._ (2022, 2024); Nikolakopoulos _et al._ (2024, 2022), no microscopic model
can predict the kinematics and multiplicities of all outgoing hadrons. This
leaves generators’ FSI models, despite their limited predictive power and
theoretical grounding, as the only means of calculating the fully exclusive
CC0$\pi$ cross section.
We begin by assessing the impact of varying the nucleon MFP which, as
discussed in subsection II.5, is changed by $\pm$30% as motivated by nuclear
transparency measurements. Whilst this changes generator predictions, it does
not cover the full plausible variation from FSI on the TKI distributions of
interest. Consequently, we then study how different FSI models can alter
outgoing hadron kinematics differently (even if their transparency predictions
remain similar) and assess the potential impact on T2K and MicroBooNE
measurements.
##### Varying the nucleon FSI strength
We examine the impact of nucleon FSI by varying the intrinsic MFP inside the
intra-nuclear cascades applied in both NEUT and NuWro. We begin by assessing
the impact of altering FSI on the $\delta\alpha_{\textrm{T}}$ measurements
from T2K and MicroBooNE, shown in Figure 10 alongside measurements of $\delta
p_{\textrm{T}}$ and the multi-differential measurement in Figure 11 and Figure
12 respectively. Note that the MFP is changed inside the generators while
keeping the baseline models (NEUT SF for carbon and the SF* model for argon)
fixed. As a result, the baseline simulations are intrinsically different, with
SF* using NuWro for CCQE interactions. We explore this difference in more
detail later in the section.
Figure 10: Differential cross section measurements as a function of
$\delta\alpha_{\textrm{T}}$ from T2K (left) and MicroBooNE (right), compared
with predictions using the NEUT SF model for T2K, and the combined SF* model
for MicroBooNE. The effects of adjusting the nucleon mean free path by -(+)30%
are displayed and labeled as “More(Less) FSI”.
It can be immediately seen from Table 3, Figure 10, Figure 11 and Figure 12
that the MicroBooNE measurement displays a considerable preference for a
shorter MFP (more FSI), in contrast to the T2K measurement. Figure 10
additionally highlights that the alteration of the FSI strength changes the
total predicted cross section within the allowed phase space of the defined
signal, with a more pronounced change at lower values of
$\delta\alpha_{\textrm{T}}$.
Figure 11: Differential cross section measurement as a function of $\delta
p_{\textrm{T}}$ from T2K (left) and MicroBooNE (right), compared with
predictions using the NEUT SF model for T2K, and the combined SF* model for
MicroBooNE. The effects of adjusting the nucleon mean free path by -(+)30% are
displayed and labeled as “More(Less) FSI”.
Figure 11 demonstrates that modifying FSI primarily impacts the bulk of
$\delta p_{\textrm{T}}$ for both T2K and MicroBooNE. The cross section in the
tail stays broadly constant in an absolute sense but, as expected, the
relative contribution of the tail compared to the bulk increases with more
FSI. As discussed in subsection II.1, the relatively fixed cross section in
the tail from changing FSI strength comes from the combination of the two
effects of FSI shifting proton momentum to values of higher $\delta
p_{\textrm{T}}$ and FSI migrating protons under detection threshold (changing
the normalisation of the cross section within the experimental signal
definitions). In the MicroBooNE case, these two effects are further compounded
by the smearing between the bulk and tail.
Comparing FSI variations to MicroBooNE’s multi-differential measurement of
$\delta p_{\textrm{T}}$ as a function of $\delta\alpha_{\textrm{T}}$,
presented in Figure 12, is a particularly useful tool to isolate the impact of
nucleon FSI, as the separate evolution of FSI in $\delta\alpha_{\textrm{T}}$
and $\delta p_{\textrm{T}}$ allows some disambiguation from other nuclear
effects, as was initially proposed in Ref. Abe _et al._ (2019). In all bins
of $\delta\alpha_{\textrm{T}}$, the measurements seem to suggest that an
enhancement of FSI strength is preferred. At low and lower-intermediate values
of $\delta\alpha_{\textrm{T}}$ (regions with lower impact of FSI), the most
visible effect of the proton FSI alterations is on the bulk of the
distributions, from the aforementioned movement of events inside (outside) of
the signal definition with decreasing (increasing) FSI strength. In this
region, the evolution of the $p$-values with FSI strength disfavors a weakening
of the FSI strength. At higher intermediate and high values of
$\delta\alpha_{\textrm{T}}$, the impact of FSI becomes more visible on the
tails of the $\delta p_{\textrm{T}}$ distributions. As the value of
$\delta\alpha_{\textrm{T}}$ increases, we note that the cross section in the
tail rises in both the simulations and the measurements. However, this
increase is more drastic in the measurement than in the simulation: the rate
at which the relative tail contribution rises in the simulation is
insufficient to reproduce that seen in the measurement. The $p$-values in
Table 3 indicate a consistent preference for
increasing the strength of nucleon FSI on argon.
It is interesting to note that, while all $\delta\alpha_{\textrm{T}}$ bins
show the same overall preference for an enhancement of FSI, the balance
between the two effects previously discussed (reduction of the bulk due to
protons migrating out of the signal phase space, on one hand, and enhancement
of the tail, on
the other hand) is different between the different $\delta\alpha_{\textrm{T}}$
regions. The multi-differential MicroBooNE measurement of $\delta
p_{\textrm{T}}$ in bins of $\delta\alpha_{\textrm{T}}$ offers a particularly
powerful combination of kinematic imbalance variables which allows us to lift
the degeneracies between nuclear effects, and similar measurements in the
future from other experiments will prove to be invaluable to further
disentangle these effects.
Figure panels: $0^{\circ}<\delta\alpha_{\textrm{T}}<45^{\circ}$, $45^{\circ}<\delta\alpha_{\textrm{T}}<90^{\circ}$, $90^{\circ}<\delta\alpha_{\textrm{T}}<135^{\circ}$, $135^{\circ}<\delta\alpha_{\textrm{T}}<180^{\circ}$.
Figure 12: Multi-differential measurement as a function of $\delta
p_{\textrm{T}}$ and $\delta\alpha_{\textrm{T}}$ from MicroBooNE compared with
predictions using the combined SF* model. The effects of adjusting the nucleon
mean free path by -(+)30% are displayed and labeled as “More(Less) FSI”.
Impact of FSI model on predicted kinematics
As discussed in the previous sections, it is often not possible to draw a
direct comparison between the NEUT SF and SF* models due to the fact that the
latter uses QE events generated with the NuWro event generator on argon and
thus applies a different set of intra-nuclear cascade processes than those in
NEUT. Furthermore, Figure 7 highlights that the FSI predictions from NuWro are
disfavored (i.e. $p$-value$<$0.05) by the MicroBooNE
$\delta\alpha_{\textrm{T}}$ measurements and also alter the shape of the final
state nucleon kinematics in a different way than those in NEUT. In order to
lift the ambiguity caused by this inconsistency, we confront the MicroBooNE
multi-differential measurement with a prediction from NEUT using the LFG model
on argon in Figure 13, to which we apply the same 30% variation in MFP at
generation time.
Panels: $0^{\circ}<\delta\alpha_{\textrm{T}}<45^{\circ}$, $45^{\circ}<\delta\alpha_{\textrm{T}}<90^{\circ}$, $90^{\circ}<\delta\alpha_{\textrm{T}}<135^{\circ}$, and $135^{\circ}<\delta\alpha_{\textrm{T}}<180^{\circ}$.
Figure 13: Multi-differential measurement as a function of $\delta
p_{\textrm{T}}$ and $\delta\alpha_{\textrm{T}}$ from MicroBooNE compared with
predictions using the NEUT LFG model. The effects of adjusting the nucleon
mean free path by -(+)30% are displayed and labeled as “More(Less) FSI”.
Comparing Figure 12 and Figure 13, it is clear that NEUT’s FSI
model helps to recover some of the missing strength in the $\delta
p_{\textrm{T}}$ tail at large $\delta\alpha_{\textrm{T}}$, improving
quantitative agreement with the measurement, but that strength remains missing
at intermediate $\delta\alpha_{\textrm{T}}$. Similarly to the findings from
Figure 7, the alteration of the shape of the $\delta p_{\textrm{T}}$
distribution is the main driver for improved agreement. In contrast to the FSI
variations on the SF* model, which generally showed preference for stronger
FSI, variations of FSI on the LFG model prefer to leave FSI unchanged
(although variations in both directions are allowed).
A comparison of Figure 12 and Figure 13 additionally serves to highlight the
differences between the NEUT and NuWro FSI models – a variation of $\pm$30% of
the MFP in NEUT is not sufficient to cover the nominal prediction from NuWro,
despite the fact that the transparencies encoded in both NEUT and NuWro are
within 30% of one another Dytman _et al._ (2021). The difference between the
NuWro and NEUT simulations is therefore still related to FSI, but goes beyond
the impact of nuclear transparency. The modeling of FSI processes introduces
alterations to the predicted particle kinematics beyond what variations of the
MFP can cover. This is demonstrated in Figure 14, which shows the impact of
different FSI models on the outgoing proton kinematics from QE interactions
generated on an argon target using the MicroBooNE flux. Whilst in all cases
the effect of FSI is to shift the leading momentum proton distribution to
lower values, the size of the shift differs substantially between models.
Figure 14: Distribution of predicted MicroBooNE leading proton momentum for
simulations of QE events with (solid lines) and without (dashed lines) FSI
from different generators. The two scenarios are labelled as “post-FSI” and
“pre-FSI” respectively.
The impact of the FSI model on the proportion of QE interactions that fall
into the MicroBooNE kinematic constraints on the outgoing proton momentum
(300-1000 MeV/$c$, see Table 1) is quantified in Table 4. The number of
interactions which have migrated outside of the signal region as a fraction of
the events inside the signal region before FSI is given by the quantity
$\delta_{FSI}$:
$\delta_{FSI}=(N^{\text{signal}}_{\text{pre-FSI}}-N^{\text{signal}}_{\text{post-FSI}})/N^{\text{signal}}_{\text{pre-FSI}},$ (6)
where $N^{\text{signal}}_{\text{pre-FSI}}$ and $N^{\text{signal}}_{\text{post-FSI}}$ are the numbers of events contained within the proton momentum range of the MicroBooNE signal definition without and with FSI applied, respectively. We additionally define the quantities
$\rho^{\text{signal}}_{\text{post-FSI}}$ and $\rho^{\text{signal}}_{\text{pre-
FSI}}$ as the fraction of events inside the signal region for simulations with
and without FSI respectively as follows:
$\displaystyle\rho^{\text{signal}}_{\text{post-FSI}}$
$\displaystyle=N^{\text{signal}}_{\text{post-FSI}}/N^{\text{total}},$ (7)
$\displaystyle\rho^{\text{signal}}_{\text{pre-FSI}}$
$\displaystyle=N^{\text{signal}}_{\text{pre-FSI}}/N^{\text{total}},$ (8)
where $N^{\text{total}}$ is the total number of simulated CCQE events.
Model | $\rho^{\text{signal}}_{\text{pre-FSI}}$ | $\rho^{\text{signal}}_{\text{post-FSI}}$ | $\delta_{FSI}$
---|---|---|---
NEUT (LFG) | 81.3% | 74.3% | 8.5%
GENIE (AR23_20i_00_000) | 81.2% | 75.2% | 7.3%
NuWro (SF*) | 83.8% | 73.6% | 12.1%
Table 4: The fraction of CCQE events inside the signal region for simulations
with and without FSI using the MicroBooNE flux on an argon target, alongside
the number of events which migrated outside of the MicroBooNE signal region as
a fraction of the total number of pre-FSI events in the signal region. The
quantities are defined in detail in the text.
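Equation 6 and the fractions in Table 4 can be cross-checked with a few lines. The event counts below are illustrative, scaled from the percentages of the NuWro (SF*) row rather than taken from the actual simulation:

```python
def fsi_migration(n_pre_signal, n_post_signal, n_total):
    """Signal-region fractions before/after FSI and the migration
    fraction, following Eqs. 6-8."""
    rho_pre = n_pre_signal / n_total                            # Eq. 8
    rho_post = n_post_signal / n_total                          # Eq. 7
    delta_fsi = (n_pre_signal - n_post_signal) / n_pre_signal   # Eq. 6
    return rho_pre, rho_post, delta_fsi

# Illustrative counts scaled from the NuWro (SF*) row of Table 4
rho_pre, rho_post, delta = fsi_migration(83_800, 73_600, 100_000)
print(f"pre: {rho_pre:.1%}, post: {rho_post:.1%}, delta_FSI: {delta:.1%}")
```

This reproduces the roughly 12% migration of the SF* row; the 12.1% quoted in Table 4 follows from the unrounded event counts.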
From Table 4, we can see that the vast majority of QE events fall within the
signal region, but, as expected, the effect of FSI is to cause a decrease in
this fraction. Crucially, as showcased in Figure 14, this migration does not
happen in the same way for all generators, and it is apparent that the
migration in the case of the SF* model is the largest. This further supports
what was highlighted in Figure 13, i.e. that there are effects beyond those
covered by variations of the MFP which drive the discrepancy between the two
generators in the case of the MicroBooNE measurement.
### III.2 Impact of neutrino energy dependence: comparisons to MINERvA
measurements
As discussed in subsection III.1 and shown in Table 3, the $\chi^{2}$ values
obtained by comparing to the MINERvA measurements are all much higher than the
number of analysis bins, yielding $p$-values which exclude all considered
models. Nonetheless, it is still valuable to compare the simulations to the
MINERvA measurements in order to extract qualitative trends which might
indicate potential issues in the modeling of nuclear effects. The MINERvA
measurements considered in this work are all performed on a plastic
scintillator target, like those of T2K. However, as shown in Figure 1, the
MINERvA flux is at significantly higher neutrino energy than that of T2K, so a
comparison between the T2K and the MINERvA experiments’ measurements allows a
study of potential mismodeling of nuclear effects which vary with neutrino
energy.
We begin by considering the breakdown by interaction channel. Figure 15 shows
the contributions of the different interaction channels to the total cross
section as a function of $\delta p_{\textrm{T}}$. It is clear that the MINERvA
prediction has a significantly enhanced tail compared to the T2K prediction
shown in Figure 3. Unlike T2K, MINERvA sees a significant proportion of RES
events which have undergone pion absorption, as well as 2p2h interactions. The
RES contribution is also shifted significantly further under the bulk,
indicating that a lower proportion of the hadronic system’s momentum is
carried by the absorbed pions, or that the accompanying protons are less
likely to undergo FSI.
Figure 15: Differential cross section measurement as a function of $\delta
p_{\textrm{T}}$ from MINERvA, compared to predictions from the NEUT SF model
on a carbon target. The different channel contributions to the total
cross sections are highlighted in the legend.
NEUT predictions for different nuclear ground state models are compared to the
MINERvA $\delta p_{\textrm{T}}$ and $p_{N}$ measurements in Figure 16.
Qualitatively, the SF model has the best agreement with both measurements, in
particular due to the better normalization of the $\delta p_{\textrm{T}}$ bulk
and of the $p_{N}$ transition region between bulk and tail. For all models,
the tails of the $\delta p_{\textrm{T}}$ distributions look similar, as most
of the contribution in this region comes from non-QE events, whose modeling
does not change between predictions. The largest differences appear in the
bulk, where all models (RFG most significantly) overestimate the measurement,
albeit with different shapes.
Figure 16: Differential cross section measurement as a function of $\delta
p_{\textrm{T}}$ (left) and $p_{N}$ (right) from MINERvA, compared to different
ground state predictions using different nuclear models in NEUT: SF, LFG, and
RFG.
The CCQE predictions from NEUT’s LFG and RFG models contain a single outgoing
proton before FSI but, as noted in subsection II.4, the SF model produces two
proton states due to SRCs. The $\delta p_{\textrm{T}}$ measurements from T2K
and MicroBooNE discussed so far offer little sensitivity to distinguish SRCs
from the dominant mean field interactions, but it is informative to examine
the measurement of the inferred initial state nucleon momentum by the MINERvA
experiment in this context. Figure 17 shows the QE and non-QE contributions to
the NEUT SF simulation, where the QE contribution is further subdivided into a
mean-field part and an SRC part, according to the categorization used in the
NEUT implementation and described in subsection II.4 (for further details on
the way this categorization is done in NEUT, see Chakrani _et al._ (2024);
Furmanski (2015)). This demonstrates that the reasonable description of the
$p_{N}$ transition region between bulk and tail comes from the SRC nucleons in
the SF model. Although the overall SF prediction does not agree quantitatively
with the measurement, this comparison serves the purpose of highlighting the
potential need for an adequate model of SRCs inside the nuclear medium to
describe MINERvA’s measurement.
Figure 17: Differential cross section measurement as a function of $p_{N}$
from MINERvA, compared to the NEUT SF prediction. The latter is divided into
three components: a QE mean-field contribution (“MF QE”), a QE contribution
from short range correlated pairs (“SRC QE”) and all other non-QE components
(“Non QE”).
At MINERvA energies, the 2p2h cross section predicted by the Valencia model is
saturated, so 2p2h production is much more prevalent than at T2K or
MicroBooNE energies. Similarly, the RES contribution is also larger due to
MINERvA’s higher energies but, as seen in Figure 15, is shifted further
from the tail and under the bulk of the $\delta p_{\textrm{T}}$ distribution,
making the impact of varying 2p2h more apparent in the tail. The fact that the
non-QE distributions have different shapes in $\delta p_{\textrm{T}}$ for the
T2K and MINERvA measurements makes the analysis of the two useful to
disambiguate nuclear effects. However, it should be noted that the detailed
signatures of the non-QE contribution rely heavily on the modeling of poorly
understood hadron kinematics.
The impact of varying the strength of 2p2h interactions is shown in Figure 18.
The MINERvA $\delta p_{\textrm{T}}$ measurement disfavors a large increase in
the normalization of 2p2h interactions when other effects stay fixed. As can
be seen in Table 3, this is consistent with the trend seen in the T2K
measurement in Figure 8, and opposite to the MicroBooNE measurements’
preferences shown in Figure 8 and Figure 9.
Figure 18: Differential cross section measurement as a function of $\delta
p_{\textrm{T}}$ from MINERvA compared to the nominal NEUT SF prediction and an
increase of the 2p2h cross section by 70% uniformly across neutrino energies.
A similar observation can be made about the impact of nucleon FSI, depicted in
Figure 19. As in the case of the T2K measurement and opposite to the trend
displayed by the MicroBooNE measurements, both shown in subsubsection III.1.4,
the MINERvA measurement seems to disfavor an enhancement of the proton MFP.
Figure 19: Differential cross section measurement as a function of $\delta
p_{\textrm{T}}$ from MINERvA compared to the NEUT SF prediction and variations
of the nucleon mean free path. The effects of adjusting the nucleon mean free
path by -(+)30% are displayed and labeled as “More(Less) FSI”.
Finally, it is interesting to consider the impact of pion absorption processes
in the context of the MINERvA measurement. As previously showcased in Figure
15, MINERvA sees a significantly enhanced contribution from resonant
interactions in which the pion has been absorbed, which is small for the
lower-energy fluxes of T2K and MicroBooNE (as shown in Figure 3). We compare
the MINERvA measurement with the NEUT prediction in which we vary the cross
section of pion absorption processes as described in subsection II.5, and show
the results in Figure 20. The same variation for both the T2K and MicroBooNE
measurements shows only a small impact on the $\chi^{2}$ values, whereas in
the case of MINERvA, an increase in the pion absorption probability is
disfavored.
Figure 20: Differential cross section measurement as a function of $\delta
p_{\textrm{T}}$ from MINERvA compared to the NEUT SF prediction and variations
of the pion absorption probability. The effects of adjusting the pion
absorption probability as described in subsection II.5 are displayed and
labeled as “More(Less) $\pi$ abs”.
### III.3 Global generator comparisons
Throughout this work, we have varied one nuclear effect at a time while
keeping the baseline model (NEUT SF or SF* for argon) fixed. In this section,
we compare predictions where we change all elements of the underlying
simulations from the GENIE and NEUT generators. The NEUT event generator is
currently used by the T2K collaboration for oscillation analyses (e.g. Abe
_et al._ (2023)) and will likely be used by the future Hyper-K experiment,
whereas DUNE plans to conduct its sensitivity studies with the GENIE
generator’s AR23_20i_00_000 model DUN . The latter is also used by the SBN
experiments.
The predictions from the different generators are shown in Figure 21 for the
$\delta p_{\textrm{T}}$ measurements by T2K, MINERvA and MicroBooNE, and in
Figure 22 for the multi-differential $\delta p_{\textrm{T}}$ measurement from
MicroBooNE. The obtained $p$-values are summarized in Table 5. The NEUT LFG
model and the GENIE AR23_20i_00_000 prediction yield better $p$-values than
the SF/SF* models, giving reasonable agreement with all T2K and MicroBooNE
measurements (including when the measurement is split into slices of
$\delta\alpha_{\textrm{T}}$), but all models fail to describe the MINERvA
measurements.
In the case of T2K, the GENIE and the NEUT LFG models are roughly equivalent,
which is expected given their almost identical treatment of the nuclear model
and the low impact of 2p2h and pion-absorption processes. The agreement is
slightly worse for the NEUT SF model, primarily driven by the transition
region between the tail and the bulk. Since the modelling of 2p2h and pion-
absorption processes is similar between the different simulations, the
difference in shape in this region may be driven by SRCs. The latter are
included in the NEUT SF model and are approximated in the GENIE model, but
completely absent from NEUT LFG.
In the case of the MicroBooNE measurement, the GENIE and NEUT LFG models yield
the best overall agreement with the measurement, although the SF* model is not
excluded. Both the GENIE and the NEUT LFG model fall below the SF* prediction
in the bulk of the MicroBooNE measurement, which can be explained by the
higher cross section predicted by the SF* model, as was discussed in
subsubsection III.1.2. Although the SF* model seems to better agree visually
with the $\delta p_{\textrm{T}}$ bulk, it yields the lowest $p$-value
($p$-value=0.12) due to the shape of the $\delta p_{\textrm{T}}$ tail and the
transition between the bulk and the tail, to which the measurement appears
very sensitive. In the MicroBooNE multi-differential comparison, shown in Figure
22, we observe the same trends for $\delta\alpha_{\textrm{T}}<90^{\circ}$,
whereas for $\delta\alpha_{\textrm{T}}>90^{\circ}$ the impact of the FSI model
in the tail is more visible. This region highlights the shortcomings in the
modelling of FSI processes from all three generators (NEUT, GENIE and NuWro),
as the agreement between the measurement and the simulations worsens with
increasing $\delta\alpha_{\textrm{T}}$, albeit at different rates. An
important difference between these models concerns the total cross section of
2p2h processes, for which the GENIE simulation uses the SuSAv2 model as
opposed to the Valencia model used in NEUT LFG and SF. SuSAv2 predicts a total
cross section (before phase space constraints) which is $\sim$30% higher than
that predicted by the Valencia model at MicroBooNE energies. Although part of
this increase is visible (notably in the $\delta p_{\textrm{T}}$ tail at
$135^{\circ}<\delta\alpha_{\textrm{T}}<180^{\circ}$), it is insufficient to
cover the discrepancy between the simulation and the measurement. The
variation tested in subsubsection III.1.3 was indeed larger than the
difference between the SuSAv2 and Valencia models at MicroBooNE energies and
still proved insufficient.
Finally, the MINERvA measurement presented in Figure 21 excludes all models.
As previously discussed, the difference between the NEUT LFG and GENIE models
in terms of QE processes is very small. The largest differences between the
generator predictions stem from the modelling of 2p2h and pion absorption
processes. However, most models broadly reproduce the tail, and the model-
measurement agreement is mainly driven by the simulation of the QE-dominated
bulk and the transition region between the bulk and the tail, where
correlations between adjacent bins play a major role.
Measurement | $N_{bins}$ | SF/SF* | LFG | GENIE
---|---|---|---|---
T2K $\delta p_{\textrm{T}}$ | 8 | 0.08 | 0.69 | 0.47
MINERvA $\delta p_{\textrm{T}}$ | 24 | 0.00 | 0.00 | 0.00
MicroBooNE $\delta p_{\textrm{T}}$ | 13 | 0.12 | 0.42 | 0.73
MicroBooNE $\delta p_{\textrm{T}}$ low $\delta\alpha_{\textrm{T}}$ | 11 | 0.26 | 0.23 | 0.28
MicroBooNE $\delta p_{\textrm{T}}$ mid-low $\delta\alpha_{\textrm{T}}$ | 12 | 0.07 | 0.40 | 0.41
MicroBooNE $\delta p_{\textrm{T}}$ mid-high $\delta\alpha_{\textrm{T}}$ | 13 | 0.04 | 0.23 | 0.32
MicroBooNE $\delta p_{\textrm{T}}$ high $\delta\alpha_{\textrm{T}}$ | 13 | 0.03 | 0.13 | 0.18
Table 5: $p$-values obtained from $\chi^{2}$ under a Gaussian error
approximation between different models and measurements. GENIE here is
AR23_20i_00_000. $N_{bins}$ gives the number of bins for each measurement.
$p$-values below 0.05, broadly indicating model rejection, are marked in red.
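The $p$-values in Table 5 follow the standard Gaussian prescription: $\chi^{2}=d^{T}C^{-1}d$ for the data–model difference vector $d$ and the measurement covariance $C$, converted to a $p$-value using a $\chi^{2}$ distribution with $N_{bins}$ degrees of freedom. A minimal sketch for a two-bin toy case (the numbers are illustrative, not taken from any measurement):

```python
import math

def chi2_stat(diff, cov):
    # chi^2 = d^T C^{-1} d, with the 2x2 matrix inverse written out explicitly
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return sum(diff[i] * inv[i][j] * diff[j]
               for i in range(2) for j in range(2))

def chi2_pvalue(chi2, ndof):
    # Survival function of the chi^2 distribution, exact for even ndof:
    # P(X > x) = exp(-x/2) * sum_{k=0}^{ndof/2-1} (x/2)^k / k!
    half = chi2 / 2.0
    return math.exp(-half) * sum(half**k / math.factorial(k)
                                 for k in range(ndof // 2))

# Two-bin toy: correlated covariance, data-model difference d
d_vec = [0.5, -0.3]
cov = [[0.04, 0.01], [0.01, 0.09]]
x2 = chi2_stat(d_vec, cov)
p = chi2_pvalue(x2, ndof=2)  # p < 0.05 would mark the model as disfavored
```

Note that the off-diagonal covariance terms matter: as emphasized above for the MINERvA bulk-to-tail transition, correlations between adjacent bins can drive the $\chi^{2}$ even where the model visually tracks the data.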
Figure 21: Differential cross section measurement as a function of $\delta
p_{\textrm{T}}$ from T2K (top), MicroBooNE (middle) and MINERvA (bottom),
compared with the SF (SF* for MicroBooNE), NEUT LFG and the GENIE
AR23_20i_00_000 predictions.
Panels: $0^{\circ}<\delta\alpha_{\textrm{T}}<45^{\circ}$, $45^{\circ}<\delta\alpha_{\textrm{T}}<90^{\circ}$, $90^{\circ}<\delta\alpha_{\textrm{T}}<135^{\circ}$, and $135^{\circ}<\delta\alpha_{\textrm{T}}<180^{\circ}$.
Figure 22: Multi-differential cross-section measurement as a function of
$\delta p_{\textrm{T}}$ and $\delta\alpha_{\textrm{T}}$ from MicroBooNE,
compared with the SF*, NEUT LFG and the GENIE AR23_20i_00_000 predictions.
## IV Conclusions
A joint analysis of measurements of TKI variables by T2K, MicroBooNE and
MINERvA has been shown to provide a unique opportunity to highlight and
disambiguate issues in modelling neutrino-nucleus interactions in the few-GeV
regime. In particular, measurements of $\delta p_{\textrm{T}}$ have shown
sensitivity to variations of Fermi motion, 2p2h and FSI (both on nucleons and
to absorb pions), whilst $\delta\alpha_{\textrm{T}}$ has shown sensitivity
mostly to FSI, but with some sensitivity to 2p2h. Furthermore, the multi-
differential measurement of TKI variables by the MicroBooNE collaboration
allows further untangling of nuclear effects, such as distinguishing the
impact of altering 2p2h interactions from alterations to FSI properties.
Comparisons of MicroBooNE and T2K measurements are sensitive to how nuclear
effects scale with nuclear target, whilst comparisons of T2K and MINERvA are
sensitive to their energy dependence.
Quantitatively (assuming the validity of the Gaussian uncertainties in the
covariance matrices provided by experiments), no model or systematic variation
considered can describe all measurements and in particular the MINERvA
measurements of $\delta p_{\textrm{T}}$ and $p_{N}$ reject all of them.
Conversely, many of the models are allowed by MicroBooNE measurements,
although those with a weaker FSI strength are disfavoured.
In terms of nuclear models, the RFG model is clearly excluded both
qualitatively and quantitatively by all $\delta p_{\textrm{T}}$ measurements.
The LFG model and the SF (and SF*) model achieve much better agreement with
the experimental measurements. However, conclusions related to the performance
of a nuclear model are coupled to the modelling of hadron transport through
the nucleus and must be interpreted with caution.
LFG or SF simulations of T2K and MINERvA measurements, from the NEUT
generator, broadly find a good description of the proportion of events in the
bulk and tail of $\delta p_{\textrm{T}}$. However, in the MicroBooNE case, all
simulations lack significant strength in the tail. Alterations of FSI strength
on top of the NuWro-based SF* model are insufficient to cover the discrepancy.
Moreover, any attempt to raise the tail with FSI depletes the bulk to the
detriment of measurement-model agreement. Changing NuWro’s FSI model to NEUT’s
improves agreement much more than variations to NuWro’s FSI, despite the
models having similar overall nuclear transparency. This is because of
differences in the way the FSI alters the nucleon kinematics. Even with
NEUT’s FSI model, however, the simulations under-predict the tail,
particularly at intermediate $\delta\alpha_{\textrm{T}}$.
The other primary means to enhance the tail relative to the bulk is to vary
the poorly-known 2p2h contribution, and indeed stronger 2p2h is preferred by
MicroBooNE, contrary to the cases for T2K and MINERvA. Given that even large
variations of FSI cannot give simulations the strength needed to match
MicroBooNE’s observation of the $\delta p_{\textrm{T}}$ tail (and certainly
not without breaking agreement with the bulk), there appears to be reasonable
evidence for a mis-modelling of the 2p2h strength difference between carbon and argon.
We note again that the GiBUU event generator predicts a carbon/argon cross-
section ratio often more than twice that of the 2p2h models considered in this
work Buss _et al._ (2012); Dolan _et al._ (2018). Conversely, the good
agreement with the relative tail-bulk strength between T2K and MINERvA implies
a reasonable modelling of 2p2h energy dependence, but it should be noted that
this is degenerate with possible variations of pion absorption FSI (which
affects MINERvA much more than T2K).
When the generator configurations used by current and future experiments are
confronted with the experimental measurements, the MINERvA measurements are
found to exclude all of them. The T2K and MicroBooNE measurements are
broadly compatible with the LFG-based configurations, whereas the SF* model is
excluded at high values of $\delta\alpha_{\textrm{T}}$ by the MicroBooNE
measurements, indicating an insufficient FSI strength.
In conclusion, a comparative analysis between T2K, MINERvA and MicroBooNE
measurements reveals:
* •
evidence for stronger 2p2h contributions for neutrino-argon interactions;
* •
considerable sensitivity to FSI and particularly how it changes outgoing
nucleon kinematics;
* •
a clear preference for more sophisticated nuclear ground state models, like
LFG or SF rather than RFG.
The limited statistical power and granularity of existing measurements,
together with the models’ lack of predictive power for hadron kinematics in
non-QE processes, prevent unambiguous conclusions on how exactly each process
should change.
Future measurements offer an opportunity to further lift degeneracies between
nuclear effects. In particular, multi-differential measurements of TKI
variables from MINERvA and T2K (in particular those using the latter’s
upgraded near detector with tracking thresholds comparable to MicroBooNE)
offer opportunities to complement those of MicroBooNE. Higher statistics
measurements from SBND will allow increasingly differential measurements (for
example using calorimetric energy as an additional separator of non-QE
processes) whilst higher energy measurements from ICARUS will allow an
evaluation of the scaling of nuclear effects up to energies more relevant for
DUNE. Additional measurements of TKI in other topologies are also promising,
for example considering exclusively interactions with more than one proton or
with/without tagged neutrons to target specific QE or non-QE enhanced
topologies, allowing further disambiguation of nucleon FSI, pion absorption
and 2p2h.
###### Acknowledgements.
All authors would like to acknowledge support from the T2K, MicroBooNE and
MINERvA collaborations in the completion of this work. We offer particular
thanks to Afroditi Papadopoulou, Luke Pickering, Kevin McFarland and Ulrich
Mosel for feedback on preliminary versions of this work. We further thank
Afroditi for technical support using the MicroBooNE measurement’s data
release. We additionally thank the EP-NU group at CERN both for funding WF’s
summer internship and for numerous discussions. An important thanks is given
to Ciaran Hasnip for providing important technical insights. LM and SD also
thank Bongo Joe buvette, Geneva.
## References
* Alvarez-Ruso _et al._ (2018) L. Alvarez-Ruso _et al._ , Prog. Part. Nucl. Phys. 100, 1 (2018), arXiv:1706.03621 [hep-ph] .
* Katori and Martini (2018) T. Katori and M. Martini, J. Phys. G45, 013001 (2018), arXiv:1611.07770 [hep-ph] .
* Abe _et al._ (2011) K. Abe _et al._ (T2K), Nucl. Instrum. Meth. A 659, 106 (2011), arXiv:1106.1238 [physics.ins-det] .
* Ayres _et al._ (2007) D. S. Ayres _et al._ (NOvA), FERMILAB-DESIGN-2007-01 (2007), 10.2172/935497.
* Abi _et al._ (2020a) B. Abi _et al._ (DUNE), JINST 15, T08008 (2020a), arXiv:2002.02967 [physics.ins-det] .
* Abi _et al._ (2020b) B. Abi _et al._ (DUNE), (2020b), arXiv:2002.03005 [hep-ex] .
* Abe _et al._ (2018a) K. Abe _et al._ (Hyper-Kamiokande), (2018a), arXiv:1805.04163 [physics.ins-det] .
* Antonello _et al._ (2015) M. Antonello _et al._ (MicroBooNE, LAr1-ND, ICARUS-WA104), (2015), arXiv:1503.01520 [physics.ins-det] .
* Abe _et al._ (2023) K. Abe _et al._ (T2K), Eur. Phys. J. C 83, 782 (2023), arXiv:2303.03222 [hep-ex] .
* Acero _et al._ (2022) M. A. Acero _et al._ (NOvA), Phys. Rev. D 106, 032004 (2022), arXiv:2108.08219 [hep-ex] .
* Jachowicz and Nikolakopoulos (2021) N. Jachowicz and A. Nikolakopoulos, (2021), arXiv:2110.11321 [nucl-th] .
* Lu _et al._ (2015) X. G. Lu, D. Coplowe, R. Shah, G. Barr, D. Wark, and A. Weber, Phys. Rev. D 92, 051302 (2015), arXiv:1507.00967 [hep-ex] .
* Furmanski and Sobczyk (2017) A. P. Furmanski and J. T. Sobczyk, Phys. Rev. C 95, 065501 (2017), arXiv:1609.03530 [hep-ex] .
* Baudis _et al._ (2023) N. Baudis, S. Dolan, D. Sgalaberna, S. Bolognesi, L. Munteanu, and T. Dieminger, (2023), arXiv:2310.15633 [hep-ph] .
* Cai _et al._ (2020) T. Cai _et al._ (MINERvA), Phys. Rev. D 101, 092001 (2020), arXiv:1910.08658 [hep-ex] .
* Abe _et al._ (2018b) K. Abe _et al._ (T2K), Phys. Rev. D 98, 032003 (2018b), arXiv:1802.05078 [hep-ex] .
* Abratenko _et al._ (2023) P. Abratenko _et al._ ((MicroBooNE Collaboration)*, MicroBooNE), Phys. Rev. D 108, 053002 (2023), arXiv:2301.03700 [hep-ex] .
* Lu _et al._ (2018) X. G. Lu _et al._ (MINERvA), Phys. Rev. Lett. 121, 022504 (2018), arXiv:1805.05486 [hep-ex] .
* Dolan (2018) S. Dolan, (2018), arXiv:1810.06043 [hep-ex] .
* Chakrani _et al._ (2024) J. Chakrani _et al._ , Phys. Rev. D 109, 072006 (2024), arXiv:2308.01838 [hep-ex] .
* Li _et al._ (2024) W. Li _et al._ (GENIE), (2024), arXiv:2404.08510 [hep-ex] .
* Abe _et al._ (2013) K. Abe _et al._ (T2K), Phys. Rev. D87, 012001 (2013), arXiv:1211.0469 [hep-ex] .
* Abe _et al._ (2015) K. Abe _et al._ (T2K), Phys. Rev. D 91, 072010 (2015), arXiv:1502.01550 [hep-ex] .
* (24) http://t2k-experiment.org/wp-content/uploads/T2Kflux2016.tar, accessed: 07/12/2022.
* Aliaga _et al._ (2016) L. Aliaga _et al._ (MINERvA), Phys. Rev. D 94, 092005 (2016), [Addendum: Phys.Rev.D 95, 039903 (2017)], arXiv:1607.00704 [hep-ex] .
* (26) http://arxiv.org/src/1607.00704v2/anc/minerva_flux.root, accessed: 2024-07-05.
* Acciarri _et al._ (2015) R. Acciarri _et al._ (DUNE), (2015), arXiv:1512.06148 [physics.ins-det] .
* (28) https://home.fnal.gov/~ljf26/DUNEFluxes/OptimizedEngineeredNov2017_offaxis/, accessed: 2019-08-07.
* Aguilar-Arevalo _et al._ (2009) A. A. Aguilar-Arevalo _et al._ (MiniBooNE), Phys. Rev. D 79, 072002 (2009), arXiv:0806.1449 [hep-ex] .
* Dutta _et al._ (2003) D. Dutta _et al._ (JLab E91013), Phys. Rev. C68, 064603 (2003), arXiv:nucl-ex/0303011 [nucl-ex] .
* Khachatryan _et al._ (2021) M. Khachatryan _et al._ (CLAS, e4v), Nature 599, 565 (2021).
* Lu _et al._ (2016) X. G. Lu, L. Pickering, S. Dolan, G. Barr, D. Coplowe, Y. Uchida, D. Wark, M. O. Wascko, A. Weber, and T. Yuan, Phys. Rev. C 94, 015503 (2016), arXiv:1512.05748 [nucl-th] .
bornological Lie algebra chains
$C_{\bullet}^{\operatorname{\mathrm{born}}}(\Gamma_{c}(\mathcal{K}))$. Much of
the material in this section is treated in more detail in [40]. We
assume the reader is familiar with the basic notions of sheaves on topological
spaces.
###### Definition C.1.
[40, Chapter V.1] Let $M$ be a topological space.
* i)
A _precosheaf_ (of abelian groups) $\mathcal{P}$ on $M$ is a covariant functor
from the category of open sets of $M$, morphisms given by inclusions, into the
category of abelian groups.
Given an inclusion $U\subset V$ of open sets, we denote the associated mapping
$\mathcal{P}(U)\to\mathcal{P}(V)$ by $\iota_{U}^{V}$, called the _extension
map_ from $U$ to $V$ of the precosheaf $\mathcal{P}$.
* ii)
A _cosheaf_ is a precosheaf $\mathcal{P}$ with the property that for every
open cover $\mathcal{U}$ of an open set $U\subset M$, the sequence
$\displaystyle\bigoplus_{i,j}\mathcal{P}(U_{i}\cap
U_{j})\to\bigoplus_{i}\mathcal{P}(U_{i})\to\mathcal{P}(U)\to 0$
is exact, where the maps are given by
$\displaystyle(a_{ij})_{i,j}\mapsto\left(\sum_{j}\iota_{U_{i}\cap
U_{j}}^{U_{i}}(a_{ij}-a_{ji})\right)_{i},\quad(b_{i})_{i}\mapsto\sum_{i}\iota_{U_{i}}^{U}b_{i}.$
We call a cosheaf $\mathcal{P}$ _flabby_ if all extension maps $\iota_{U}^{V}$
are injective.
* iii)
A _morphism of (pre-)cosheaves_ is a natural transformation between the
functors defining the (pre-)cosheaves.
Most practical examples of cosheaves arise from some notion of compactly
supported objects, since compactly supported objects on an open set can
always be extended by zero to bigger open sets. The following proposition
formalizes this:
###### Proposition C.2.
[40, Proposition V.1.6] Let $\mathcal{S}$ be a sheaf on a topological space $M$,
and consider the precosheaf $\mathcal{S}_{c}$ which associates to an open
$U\subset M$ the set
$\displaystyle\mathcal{S}_{c}(U):=\\{s\in\mathcal{S}(M):\operatorname{supp}s\subset
U\\}$
and whose extension maps are given by extending by zero. If $\mathcal{S}$ is
soft, then $\mathcal{S}_{c}$ is a flabby cosheaf.
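As a standard illustration (not taken verbatim from [40]): for a smooth manifold $M$, the sheaf of smooth functions is soft, and Proposition C.2 yields the flabby cosheaf

```latex
\mathcal{S}_c(U) \;=\; \{\, f \in C^{\infty}(M) \;:\; \operatorname{supp} f \subset U \,\},
\qquad
\iota_U^V \colon \mathcal{S}_c(U) \hookrightarrow \mathcal{S}_c(V)
\quad \text{(extension by zero).}
```

Here the cosheaf surjectivity $\bigoplus_{i}\mathcal{S}_c(U_i)\to\mathcal{S}_c(U)$ is, for compactly supported $f$, the familiar partition-of-unity decomposition $f=\sum_i \rho_i f$ with $\operatorname{supp}\rho_i\subset U_i$.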
Completely dually to sheaf theory, one can define the _Čech complex_
$\check{C}_{\bullet}(\mathcal{U};\mathcal{P})$ of a (pre-)cosheaf
$\mathcal{P}$ associated to an open cover $\mathcal{U}$ of $M$, which is then
given as a chain complex
$\displaystyle\dots\to\bigoplus_{i,j}\mathcal{P}(U_{i}\cap
U_{j})\to\bigoplus_{i}\mathcal{P}(U_{i})\to 0,$
its differential being a skew-symmetric linear combination of the extension
maps $\iota_{U}^{V}$. Its construction is fully dual to the standard, sheaf-
theoretic Čech cochain complex, and we direct the reader to [40, Chapter VI.4]
for details. We denote its homology by
$\check{H}_{\bullet}(\mathcal{U};\mathcal{P})$. The defining properties of a
cosheaf $\mathcal{P}$ directly imply
$\displaystyle\check{H}_{0}(\mathcal{U};\mathcal{P})=\mathcal{P}(M)$
independently of the choice of $\mathcal{U}$. Refinements of open covers
induce on the associated Čech complexes the structure of an inverse system,
and as such we may define the Čech homology of a cosheaf $\mathcal{P}$ as the
inverse limit of the Čech homologies over the open covers of $M$:
$\displaystyle\check{H}_{\bullet}(M;\mathcal{P}):=\varprojlim\check{H}_{\bullet}(\mathcal{U};\mathcal{P}).$
Just like sheaf cohomology can be calculated in terms of resolutions, we shall
calculate Čech homology in terms of _coresolutions:_
###### Definition C.3.
[40, Chapter VI.7]
* i)
A precosheaf $\mathcal{P}$ on $M$ is called _locally zero_ if for every $x\in
M$ and every open neighbourhood $U$ of $x$ there is an open neighbourhood
$V\subset U$ of $x$ so that $\iota_{V}^{U}=0$.
* ii)
A sequence of precosheaves
$\displaystyle\mathcal{P}_{1}\stackrel{{\scriptstyle
f}}{{\to}}\mathcal{P}_{2}\stackrel{{\scriptstyle g}}{{\to}}\mathcal{P}_{3}$
is called _locally exact_ if the precosheaf
$\displaystyle U\mapsto\ker\left(g|_{\mathcal{P}_{2}(U)}\right)/\operatorname{Im}\left(f|_{\mathcal{P}_{1}(U)}\right)$
is locally zero.
* iii)
A _coresolution_ of a cosheaf $\mathcal{P}$ is a locally exact sequence of
cosheaves
$\displaystyle\dots\to\mathcal{P}_{2}\to\mathcal{P}_{1}\to\mathcal{P}_{0}\to\mathcal{P}\to
0.$
The coresolution is called _flabby_ if the
$\mathcal{P}_{0},\mathcal{P}_{1},\dots$ (but not necessarily $\mathcal{P}$)
are flabby.
To calculate Čech homology of cosheaves, the following result will be helpful:
###### Proposition C.4.
[40, Thms VI.7.2, VI.13.1] Let $\mathcal{P}$ be a cosheaf on $M$ with flabby
coresolution
$\displaystyle\dots\to\mathcal{P}_{2}\to\mathcal{P}_{1}\to\mathcal{P}_{0}\to\mathcal{P}\to
0.$
* i)
The Čech homology $\check{H}_{\bullet}(M;\mathcal{P})$ is equal to the
homology of the complex
$\displaystyle\dots\to\mathcal{P}_{2}(M)\to\mathcal{P}_{1}(M)\to\mathcal{P}_{0}(M)\to
0.$
* ii)
If $\mathcal{U}$ is an open cover of $M$ with the property that
$\displaystyle\dots\to\mathcal{P}_{2}(U)\to\mathcal{P}_{1}(U)\to\mathcal{P}_{0}(U)\to\mathcal{P}(U)\to
0$
is exact whenever $U$ is a finite intersection of elements of $\mathcal{U}$,
then
$\displaystyle\check{H}_{\bullet}(\mathcal{U};\mathcal{P})=\check{H}_{\bullet}(M;\mathcal{P}).$
###### Corollary C.5.
For every flabby cosheaf $\mathcal{P}$ on $M$, and every open cover
$\mathcal{U}$ of $M$, we have
$\displaystyle
\check{H}_{r}(M;\mathcal{P})=\check{H}_{r}(\mathcal{U};\mathcal{P})=\begin{cases}\mathcal{P}(M)&\text{
if }r=0,\\\ 0&\text{ else.}\end{cases}$
###### Proof.
Consider the flabby coresolution
$0\to\mathcal{P}\stackrel{{\scriptstyle\operatorname{id}}}{{\to}}\mathcal{P}\to
0$ and apply Proposition C.4. ∎
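A minimal numerical sketch of Corollary C.5 (an illustration, not from the source): on the discrete space $M=\\{0,1,2\\}$ every sheaf is soft, so by Proposition C.2 the sections supported in $U$ form a flabby cosheaf, and the Čech homology of the cover $U_{1}=\\{0,1\\}$, $U_{2}=\\{1,2\\}$ reduces to linear algebra.

```python
import numpy as np

# Toy check of Corollary C.5 on the discrete space M = {0, 1, 2}.
# Cosheaf: S_c(U) = {functions on M supported in U}, flabby by Prop. C.2.
# Cover: U1 = {0, 1}, U2 = {1, 2}, with U1 ∩ U2 = {1}.
# Čech complex: S_c(U1 ∩ U2) --d--> S_c(U1) ⊕ S_c(U2) --> 0,
# d(a) = (extension of a to U1, minus extension of a to U2).
#
# Coordinates: S_c(U1) ~ R^2 on points (0, 1); S_c(U2) ~ R^2 on (1, 2).
# Extension by zero of a function a on {1}: (0, a) in U1, (a, 0) in U2.
d = np.array([[0.0],
              [1.0],
              [-1.0],
              [0.0]])  # map R^1 -> R^4 = S_c(U1) ⊕ S_c(U2)

rank_d = np.linalg.matrix_rank(d)
dim_H1 = d.shape[1] - rank_d   # ker d  (no deeper intersections in this cover)
dim_H0 = d.shape[0] - rank_d   # coker d
print(dim_H1, dim_H0)          # expect 0 and 3 = dim S_c(M), as in C.5
```

The extension maps are injective (flabbiness), so $\check{H}_{1}$ vanishes, while $\check{H}_{0}$ recovers the three-dimensional space of all functions on $M$.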
One concept which one might hope for in the theory of cosheaves is a dual
version of the well-known concept of sheafification, in other words, a way to
universally assign to every precosheaf an appropriate cosheaf. For sheaves,
one speaks of a left-adjoint functor to the inclusion of presheaves into
sheaves, and sheafification exists for presheaves in most standard categories,
e.g. the category of sets or abelian groups. Since sheafification respects
stalks, locally, the original presheaf and its associated sheafification carry
the same information. Surprisingly, the dual concept of “cosheafification” is
a lot more involved, and even existence of this concept in most standard
categories is a difficult question, let alone constructing it explicitly, see
for example [50]. Instead, we will consider the concept of a _cosheaf on a
base_. While the dual notion of sheaves on a base is well-studied, we are not
aware of any mention in the literature of the cosheaf-theoretic version
thereof.
###### Definition C.6.
Let $\mathcal{B}$ be a topological base of $M$. In the following, view
$\mathcal{B}$ as a subcategory of the category of open sets of $M$.
* i)
A _precosheaf $S$ on $\mathcal{B}$_ is a covariant functor from $\mathcal{B}$
to the category of abelian groups. We denote the image of $U\in\mathcal{B}$ by
$S(U)$ and the arising extension maps for $U\subset V\in\mathcal{B}$ by
$\iota_{U}^{V}$.
* ii)
Choose for any $U\in\mathcal{B}$ an open cover $\\{U_{i}\\}_{i\in I}$ by
elements of $\mathcal{B}$, and for every $i,j\in I$ an open cover
$\\{V_{ij,k}\\}_{k\in K}$ of $U_{i}\cap U_{j}$ by elements of $\mathcal{B}$.
We call a precosheaf $S$ on $\mathcal{B}$ a _cosheaf on $\mathcal{B}$_ if, for
all such choices, the following sequence is exact:
$\displaystyle 0\leftarrow
S(U)\leftarrow\bigoplus_{i}S(U_{i})\leftarrow\bigoplus_{ijk}S(V_{ij,k}).$
* iii)
A morphism of (pre-)cosheaves on $\mathcal{B}$ is a natural transformation of
the functors defining the (pre-)cosheaves.
The sequence is the analogue of the cosheaf condition, but rather than working
with all open sets, we just work with elements of a topological base
$\mathcal{B}$. If $\mathcal{B}$ is chosen as the topology of $M$, then this
definition is equivalent to the definition of a cosheaf on $M$. This is
precisely the dual of the well-studied concept of sheaves on a base, by
viewing $\mathrm{Ab}$-valued cosheaves as $\text{Ab}^{\text{op}}$-valued
sheaves.
###### Theorem C.7.
Let $M$ be a topological space and $\mathcal{B}$ a topological base of $M$. An
$\mathrm{Ab}$-valued cosheaf on $\mathcal{B}$ extends uniquely, up to cosheaf
isomorphism, to a cosheaf on $M$. A morphism between two cosheaves on
$\mathcal{B}$ extends uniquely to a morphism between the induced cosheaves on
$M$.
###### Proof.
The following proof is due to [51]. The analogous statement for
$\mathcal{C}$-valued sheaves is true whenever $\mathcal{C}$ is a complete
category (see [52] or [53, Lemma 2.2.7]). Since $\mathrm{Ab}$ is a cocomplete
category, $\text{Ab}^{\text{op}}$ is a complete category, and the statement
follows. ∎
It is known that the setwise cokernels of cosheaf morphisms are again
cosheaves [40, Prop VI.1.2], the proof being a simple diagram chase. This
straightforwardly extends to cosheaves on a base:
###### Proposition C.8.
Let $\mathcal{B}$ be a topological base of $M$. Let further
$\phi:\mathcal{P}\to\mathcal{S}$ be a morphism of cosheaves on $\mathcal{B}$,
and define a precosheaf $\operatorname{coker}\phi$ by assigning to
$B\in\mathcal{B}$
$\displaystyle\operatorname{coker}\phi(B):=\mathcal{S}(B)/\phi(\mathcal{P}(B)),$
with extension maps induced by the cosheaf maps of $\mathcal{S}$. Then
$\operatorname{coker}\phi$ defines a cosheaf on $\mathcal{B}$.
## References
* [1] Bas Janssens and Cornelia Vizman. Universal central extension of the Lie algebra of Hamiltonian vector fields. Int. Math. Res. Not. IMRN, (16):4996–5047, 2016.
* [2] Karl-Hermann Neeb and Friedrich Wagemann. The second cohomology of current algebras of general Lie algebras. Canad. J. Math., 60(4):892–922, 2008.
* [3] Karl-Hermann Neeb and Christoph Wockel. Central extensions of groups of sections. Ann. Global Anal. Geom., 36(4):381–418, 2009.
* [4] Peter Maier. Central extensions of topological current algebras. In Geometry and analysis on finite- and infinite-dimensional Lie groups (Będlewo, 2000), volume 55 of Banach Center Publ., pages 61–76. Polish Acad. Sci. Inst. Math., Warsaw, 2002.
* [5] Bas Janssens and Christoph Wockel. Universal central extensions of gauge algebras and groups. J. Reine Angew. Math., 682:129–139, 2013.
* [6] Jean-Louis Loday and Daniel Quillen. Cyclic homology and the Lie algebra homology of matrices. Comment. Math. Helv., 59(4):569–591, 1984.
* [7] B. L. Tsygan. Homology of matrix Lie algebras over rings and the Hochschild homology. Uspekhi Mat. Nauk, 38(2(230)):217–218, 1983.
* [8] B. L. Feĭgin. On the cohomology of the Lie algebra of vector fields and of the current algebra. volume 7, pages 49–62. 1988. Selected translations.
* [9] R. Bott and G. Segal. The cohomology of the vector fields on a manifold. Topology, 16(4):285–298, 1977.
* [10] I. M. Gel’fand and D. B. Fuks. Cohomologies of the Lie algebra of tangent vector fields of a smooth manifold. Funkcional. Anal. i Priložen., 3(3):32–52, 1969.
* [11] Owen Gwilliam and Brian R. Williams. Higher Kac-Moody algebras and symmetries of holomorphic field theories. arXiv preprint arXiv:1810.06534, 2018.
* [12] François Trèves. Topological vector spaces, distributions and kernels. Academic Press, New York-London, 1967.
* [13] Helmut H. Schaefer. Topological vector spaces. Graduate Texts in Mathematics, Vol. 3. Springer-Verlag, New York-Berlin, 1971. Third printing corrected.
* [14] Reinhold Meise and Dietmar Vogt. Introduction to functional analysis, volume 2 of Oxford Graduate Texts in Mathematics. The Clarendon Press, Oxford University Press, New York, 1997. Translated from the German by M. S. Ramanujan and revised by the authors.
* [15] A. Grothendieck. Produits tensoriels topologiques et espaces nucléaires. In Séminaire Bourbaki, Vol. 2, pages Exp. No. 69, 193–200. Soc. Math. France, Paris, 1995.
* [16] Andreas Kriegl and Peter W. Michor. The convenient setting of global analysis, volume 53 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, 1997.
* [17] Ralf Meyer. Analytic cyclic cohomology. arXiv preprint math/9906205, 1999.
* [18] Helge Glöckner. Tensor products in the category of topological vector spaces are not associative. Comment. Math. Univ. Carolin., 45(4):607–614, 2004.
* [19] Jörg Eschmeier and Mihai Putinar. Spectral decompositions and analytic sheaves, volume 10 of London Mathematical Society Monographs. New Series. The Clarendon Press, Oxford University Press, New York, 1996. Oxford Science Publications.
* [20] Jean-Louis Loday. Cyclic homology, volume 301 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]. Springer-Verlag, Berlin, 1992. Appendix E by María O. Ronco.
* [21] Masoud Khalkhali. Basic noncommutative geometry. EMS Series of Lectures in Mathematics. European Mathematical Society (EMS), Zürich, second edition, 2013.
* [22] Alain Connes. Noncommutative differential geometry. Inst. Hautes Études Sci. Publ. Math., (62):257–360, 1985.
* [23] Jacek Brodzki and Zinaida A. Lykova. Excision in cyclic type homology of Fréchet algebras. Bull. London Math. Soc., 33(3):283–291, 2001.
* [24] Jean-Pierre Serre. Un théorème de dualité. Comment. Math. Helv., 29:9–26, 1955.
* [25] F. Gourdeau, Z. A. Lykova, and M. C. White. A Künneth formula in topological homology and its applications to the simplicial cohomology of $l^{1}({\mathbb{Z}}^{k}_{+})$. Studia Math., 166(1):29–54, 2005.
* [26] A. Grothendieck. Opérations algébriques sur les distributions à valeur vectorielle. Théorème de Künneth. Séminaire Schwartz, 1:1–6.
* [27] Markus J. Pflaum. On continuous Hochschild homology and cohomology groups. Lett. Math. Phys., 44(1):43–51, 1998.
* [28] Nicolae Teleman. Microlocalisation de l’homologie de Hochschild. C. R. Acad. Sci. Paris Sér. I Math., 326(11):1261–1264, 1998.
* [29] Charles A. Weibel. An introduction to homological algebra, volume 38 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1994.
* [30] V. P. Palamodov. On a Stein manifold the Dolbeault complex splits in positive dimensions. Mat. Sb. (N.S.), 88(130):287–315, 1972.
* [31] Ralf Meyer. Excision in Hochschild and cyclic homology without continuous linear sections. J. Homotopy Relat. Struct., 5(1):269–303, 2010.
* [32] Jürgen Voigt. Factorization in some Fréchet algebras of differentiable functions. Studia Math., 77(4):333–348, 1984.
* [33] Christian-Oliver Ewald. Hochschild- and cyclic-homology of LCNT-spaces. Comm. Math. Phys., 250(1):195–213, 2004.
* [34] Dong Geng Gong. Excision of equivariant cyclic cohomology of topological algebras. Michigan Math. J., 39(3):455–473, 1992.
* [35] Mariusz Wodzicki. Excision in cyclic homology and in rational algebraic $K$-theory. Ann. of Math. (2), 129(3):591–639, 1989.
* [36] J.-P. Brasselet and A. Legrand. Teleman localization of Hochschild homology in a singular setting. Russ. J. Math. Phys., 16(3):391–403, 2009.
* [37] Phil Hanlon. On the complete ${\rm GL}(n,{\bf C})$-decomposition of the stable cohomology of ${\rm gl}_{n}(A)$. Trans. Amer. Math. Soc., 308(1):209–225, 1988.
* [38] G. Cortiñas. Cyclic homology of $h$-unital (pro-) algebras, Lie algebra homology of matrices, and a paper of Hanlon’s. arXiv preprint math/0504148, 2005.
* [39] Guillermo Cortiñas. The obstruction to excision in $K$-theory and in cyclic homology. Invent. Math., 164(1):143–173, 2006.
* [40] Glen E. Bredon. Sheaf theory, volume 170 of Graduate Texts in Mathematics. Springer-Verlag, New York, second edition, 1997.
* [41] Antonio Cassa. Formule di Künneth per la coomologia a valori in un fascio. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (3), 27:905–931 (1974), 1973.
* [42] Ludger Kaup. Eine Künnethformel für Fréchetgarben. Math. Z., 97:158–168, 1967.
* [43] Ludger Kaup. Das topologische Tensorprodukt kohärenter analytischer Garben. Math. Z., 106:273–292, 1968.
* [44] Kevin Costello and Owen Gwilliam. Factorization algebras in quantum field theory. Vol. 1, volume 31 of New Mathematical Monographs. Cambridge University Press, Cambridge, 2017.
* [45] Pedro Boavida de Brito and Michael Weiss. Manifold calculus and homotopy sheaves. Homology Homotopy Appl., 15(2):361–383, 2013.
* [46] I. M. Gel’fand and O. Mathieu. On the cohomology of the Lie algebra of Hamiltonian vector fields. J. Funct. Anal., 108(2):347–360, 1992.
* [47] Daniel Quillen. Algebra cochains and cyclic cohomology. Inst. Hautes Études Sci. Publ. Math., (68):139–174 (1989), 1988.
* [48] Constantin Teleman. Some Hodge theory from Lie algebras. In Motives, polylogarithms and Hodge theory, Part II (Irvine, CA, 1998), volume 3 of Int. Press Lect. Ser., pages 731–744. Int. Press, Somerville, MA, 2002.
* [49] Liviu Nicolaescu (https://mathoverflow.net/users/20302/liviu nicolaescu). De Rham model for relative cohomology. MathOverflow. URL:https://mathoverflow.net/q/122900 (version: 2016-03-31).
* [50] Justin Michael Curry. Sheaves, cosheaves and applications. ProQuest LLC, Ann Arbor, MI, 2014. Thesis (Ph.D.)–University of Pennsylvania.
* [51] jgon (https://math.stackexchange.com/users/90543/jgon). Cosheaf on a base. Mathematics Stack Exchange. URL:https://math.stackexchange.com/q/3642492 (version: 2020-04-24).
* [52] Ravi Vakil. The rising sea: Foundations of algebraic geometry. preprint, 2017.
* [53] Qing Liu. Algebraic geometry and arithmetic curves, volume 6 of Oxford Graduate Texts in Mathematics. Oxford University Press, Oxford, 2002. Translated from the French by Reinie Erné, Oxford Science Publications.
differ from the highest weight $\widehat{\lambda}_{i}$ by a positive linear
combination of simple roots,
${\widehat{\lambda}}={\widehat{\lambda}}_{i}-{\widehat{\nu}}$,
${\widehat{\nu}}=\sum_{j=0}^{r}{\nu}_{j}{\widehat{\alpha}}_{j},\qquad{\nu}_{j}\in{\mathbb{Z}}_{+}$
we can write, with
${\tilde{\mathfrak{q}}}^{\widehat{\nu}}=\prod_{j=0}^{r}{\mathfrak{q}}_{j}^{\nu_{j}}$
${{\mathscr{X}}}_{i}({\mathscr{Y}}(x);{\tilde{\mathfrak{q}}})={\mathscr{Y}}_{i}\sum_{\widehat{\nu}}{\widehat{c}}_{\widehat{\nu}}^{i}{\tilde{\mathfrak{q}}}^{\widehat{\nu}}\,\prod_{k,j=0}^{r}{\mathscr{Y}}_{k}(x)^{-C_{kj}^{\widehat{\mathfrak{g}}}{\nu}_{j}}$
(6.25)
where we made the
${\tilde{\mathfrak{q}}}=({\mathfrak{q}}_{0},\ldots,{\mathfrak{q}}_{r})$
dependence explicit, and
${\widehat{c}}_{\widehat{\nu}}^{i}={\chi}_{{\widehat{R}}_{i},{\widehat{\lambda}}_{i}-{\widehat{\nu}}}$
(6.26)
(6.26)
Write ${\widehat{\nu}}=n{\delta}+{\nu}$, where $n\in{\mathbb{Z}}_{+}$, and
${\nu}\in{{\rm Q}}$ belongs to the root lattice of $\mathfrak{g}$. Notice that
the factor ${\tilde{\mathfrak{q}}}^{\widehat{\nu}}$ in (6.25) depends on $n$
only via the ${\mathfrak{q}}^{n}$ factor. For fixed $n$ the number of
${\nu}\in{{\rm Q}}$ such that ${\widehat{c}}_{n{\delta}+{\nu}}^{i}\neq 0$ is
finite.
The characters of $\widehat{\mathbf{G}}$ are well-studied [Kac:1984].
Physically they are the torus
${\mathscr{E}}={\mathbb{C}}^{\times}/{\mathfrak{q}}^{\mathbb{Z}}$ conformal
blocks of the WZW theories with the group $G$, and levels $k=a_{i}$,
$i=0,1,\ldots,r$ (see [Dolan:2007eh] for recent developments). The argument of
the characters can be viewed as the background $\mathbf{G}$ $(0,1)$-gauge
field $\bar{\bf A}$, which couples to the holomorphic current ${\bf
J}=g^{-1}{\partial}g$:
$Z_{k}({\tau};{\bar{\bf A}})=\int Dg\,{\exp}\,k\left(S_{\rm
WZW}(g)+\int_{\mathscr{E}}\left\langle{\bf J},{\bar{\bf
A}}\right\rangle\right)=\sum_{\widehat{\lambda}\text{ at level
$k$}}c_{\widehat{\lambda}}\cdot{\widehat{\chi}}_{\widehat{\lambda}}({\widehat{t}};{\mathfrak{q}})$
(6.27)
The background gauge field has only $r$ moduli. In practice, one chooses the
gauge ${\bar{\bf A}}=\frac{\pi}{{\rm Im}{\tau}}{\xi}$, where ${\xi}={\rm
const}\in{\mathfrak{h}}$.
Technically, it is more convenient to build the characters using the free
fermion theory, at least in the $A_{r}$, $D_{r}$ cases, and for the groups
$E_{6},E_{7},E_{8}$ at level $1$. We review this approach in the appendix.
The master equations (6.5)
${{\mathscr{X}}}_{i}({\mathscr{Y}}(x);{\tilde{\mathfrak{q}}})=T_{i}(x)$
describe a curve ${\mathcal{C}}_{u}\subset{\mathbf{C}_{\left\langle
x\right\rangle}}\times({\mathbb{C}}^{\times})^{r+1}$ which is a
$W({\widehat{\mathfrak{g}}})$-cover of the $x$-parametrized rational curve
$\Sigma_{u}$ in ${\mathbb{C}}^{r+1}={\rm
Spec}{\mathbb{C}}[{\widehat{\chi}}_{0},\ldots,{\widehat{\chi}}_{r}]$, cf.
(6.22):
${\widehat{\chi}}_{i}=\prod_{j}{\mathfrak{q}}_{j}^{-{\widehat{\lambda}}_{i}({\widehat{\lambda}}_{j}^{\vee})}\
T_{i}(x),\qquad i=0,\ldots,r$ (6.28)
Now, as we recall in the section C.5, the characters ${\widehat{\chi}}_{i}$,
$i=0,\ldots,r$ are the sections of the line (orbi)bundle ${\mathcal{O}}(1)$
over the coarse moduli space ${\mathrm{Bun}}_{\mathbf{G}}({\mathscr{E}})$ of
holomorphic principal semi-stable $\mathbf{G}$-bundles over the elliptic curve
$\mathscr{E}$. Therefore (6.28),(6.5) define for each $u$ a quasimap $U$ of
the compactified $x$-plane ${{\mathbb{C}\mathbb{P}}^{1}_{\left\langle
x\right\rangle}}$ to ${\mathrm{Bun}}_{\mathbf{G}}({\mathscr{E}})$, which is
actually an honest map near $x=\infty$, whose image approaches the fixed
$\mathbf{G}$-bundle ${\mathcal{P}}_{\tilde{\mathfrak{q}}}$. This bundle can be
described, e.g. by the transition function $g_{\infty}$, which is one of the
$\mathbf{T}$ lifts of
${\tilde{g}}_{\infty}=\prod_{i=1}^{r}{\mathfrak{q}}_{i}^{-{\lambda}^{\vee}_{i}}\in{\mathbf{T}}/Z$
(6.29)
By definition, the local holomorphic sections of
${\mathcal{P}}_{\tilde{\mathfrak{q}}}$ are the ${\mathbf{G}}$-valued functions
${\Psi}(z)$, defined in some domain in ${\mathbb{C}}^{\times}$ such that
${\Psi}({\mathfrak{q}}z)=g_{\infty}{\Psi}(z)$
The complex dimension of the space of quasimaps $U$ with fixed $U({\infty})$
is the number of coefficients in the polynomials
$(T_{i}(x))_{i\in\mathrm{Vert}_{\gamma}}$ excluding the highest coefficients,
that is (cf. Eq.(4.7)),
$\dim_{\mathbb{C}}{\mathfrak{M}}^{\mathrm{ext}}=\sum_{i\in\mathrm{Vert}_{\gamma}}\mathbf{v}_{i}=Nh\
.$
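As a sanity check of this count (an illustration, assuming $h=\sum_{i}a_{i}$ denotes the Coxeter number and $\mathbf{v}_{i}=Na_{i}$): for the $\widehat{A}_{r}$ quiver all marks are $a_{i}=1$, each $T_{i}(x)$ has degree $N$ with the top coefficient fixed by the couplings, leaving $N$ free coefficients per vertex, so

```latex
\dim_{\mathbb{C}}\mathfrak{M}^{\mathrm{ext}}
  \;=\; \sum_{i=0}^{r} \mathbf{v}_{i}
  \;=\; \sum_{i=0}^{r} N a_{i}
  \;=\; (r+1)\,N
  \;=\; N h ,
\qquad h = \sum_{i=0}^{r} a_{i} = r+1 .
```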
We say that $U$ is a quasimap, and not just a holomorphic map
${{\mathbb{C}\mathbb{P}}^{1}_{\left\langle
x\right\rangle}}\to{{\mathrm{Bun}}_{\mathbf{G}}({\mathscr{E}})}$ for two
reasons. Technically, a collection of ${\widehat{\chi}}_{i}$ in (6.28) defines
a point in $\mathbb{W}\mathbb{P}^{a_{0},a_{1},\ldots,a_{r}}$ only if the
polynomials $T_{i}(x)$ don’t have common weighted factors. If, however, for
some ${\mathfrak{m}}_{f}\in{\mathbf{C}_{\left\langle x\right\rangle}}$:
$T_{i}(x)={\tilde{T}}_{i}(x)(x-{\mathfrak{m}}_{f})^{a_{i}},{\rm for\ all\
}i=0,\ldots,r$ (6.30)
then the map ${\mathbb{C}\mathbb{P}}^{1}_{\left\langle
x\right\rangle}\to{\mathrm{Bun}}_{\mathbf{G}}({\mathscr{E}})$ is not well-
defined at $x={\mathfrak{m}}_{f}$. It is trivial to extend the map there by
removing the $(x-{\mathfrak{m}}_{f})^{a_{i}}$ factors. This operation lowers
$N\to N-1$. In a way, the point ${\mathfrak{m}}_{f}$ carries a unit of the
instanton charge. Such a configuration is called a freckled instanton
[Losev:1999tu]. Thus, the extended moduli space eq. 4.7 of vacua
${{\mathfrak{M}}^{\mathrm{ext}}}_{N}$ of the gauge theory with
${G_{\text{g}}}=\times_{i}SU(Na_{i})$, contains the locus
${{\mathfrak{M}}^{\mathrm{ext}}}_{N-1}\times{\mathbf{C}_{\left\langle
x\right\rangle}}$. Allowing for several freckles at the unordered points
${\mathfrak{m}}_{f1},{\mathfrak{m}}_{f2},\ldots,{\mathfrak{m}}_{fi}$ we arrive
at the hierarchy of embeddings of the moduli spaces of vacua of the gauge
theories with different gauge groups $G_{\text{g}}$:
$\displaystyle{{\mathfrak{M}}^{\mathrm{ext}}}_{N}=$
$\displaystyle\,\mathring{{\mathfrak{M}}^{\mathrm{ext}}}_{N}\cup\mathring{{\mathfrak{M}}^{\mathrm{ext}}}_{N-1}\times{\mathbf{C}_{\left\langle
x\right\rangle}}\cup\mathring{{\mathfrak{M}}^{\mathrm{ext}}}_{N-2}\times
Sym^{2}{\mathbf{C}_{\left\langle x\right\rangle}}$ (6.31)
$\displaystyle\qquad\qquad\cup\ldots\cup\mathring{{\mathfrak{M}}^{\mathrm{ext}}}_{N-i}\times
Sym^{i}{\mathbf{C}_{\left\langle x\right\rangle}}\cup\ldots\cup
Sym^{N}{\mathbf{C}_{\left\langle x\right\rangle}}$
where $\mathring{{\mathfrak{M}}^{\mathrm{ext}}}_{N}$ stands for the space of
degree $N$ rational maps $U:{{\mathbb{C}\mathbb{P}}^{1}_{\left\langle
x\right\rangle}}\to{{\mathrm{Bun}}_{\mathbf{G}}({\mathscr{E}})}$.
This hierarchy of gauge theories is more familiar in the context of class I
theories. Presently, the freckled instantons to
${\mathrm{Bun}}_{\mathbf{G}}({\mathscr{E}})$ correspond to the loci in
${\mathfrak{M}}$ where a Higgs branch of the gauge theory can open. Indeed, if
(6.30) holds, then we can solve the master equation (6.5) by writing
${\mathscr{Y}}_{j}(x)=(x-{\mathfrak{m}}_{f})^{a_{j}}\
{\tilde{\mathscr{Y}}}_{j}(x)$ (6.32)
with ${\tilde{\mathscr{Y}}}_{j}(x)$ solving the master equation (6.5) of the
$\times_{i\in\mathrm{Vert}_{\gamma}}\ SU\left(\left({N-1}\right)a_{i}\right)$
gauge theory. In the IIB string theory picture A.3, the full collection of
fractional branes, $a_{i}$ of the $i$’th type, recombines and detaches from
the fixed locus, moving away to the position ${\mathfrak{m}}_{f}$ on the
transverse ${\mathbb{R}}^{2}={\mathbf{C}_{\left\langle x\right\rangle}}$.
Now let us take $u\in\mathring{{\mathfrak{M}}^{\mathrm{ext}}}_{N}$. The
corresponding map $U$ defines a rational curve $\Sigma_{u}$ in
${\mathrm{Bun}}_{\mathbf{G}}({\mathscr{E}})$ of degree $N$.
###### Remark.
Actually, there is another compactification of
$\mathring{{\mathfrak{M}}^{\mathrm{ext}}}_{N}$, via genus zero Kontsevich
stable maps of bi-degree $(1,N)$ to
${\mathbb{C}\mathbb{P}}^{1}\times{\mathrm{Bun}}_{\mathbf{G}}({\mathscr{E}})$
(see [Givental:1997] where the space of quasimaps is called the toric map
spaces). It would be interesting to study its gauge theoretic meaning.
###### Remark.
The highest order coefficients $T_{i,0}({\tilde{\mathfrak{q}}})$ of the
polynomials $T_{i}(x)$ depend only on the gauge coupling constants, and
determine the limit $U(x)$, $x\to\infty$
$U({\infty})=[{\mathcal{P}}_{\tilde{\mathfrak{q}}}]\in{{\mathrm{Bun}}_{\mathbf{G}}({\mathscr{E}})}$
(6.33)
The next-to-leading terms $T_{i,1}({\tilde{\mathfrak{q}}},m)$ depend only on
the gauge couplings and the bi-fundamental masses. These define the first jet
${\mathscr{T}}_{[{\mathcal{P}}_{\tilde{\mathfrak{q}}}]}{\Sigma}_{u}$ of the
rational curve $\Sigma_{u}$ at $x=\infty$.
Summarizing, _the moduli space ${{\mathfrak{M}}}_{N}$ of vacua of the class II
theory with the gauge group_
${G_{\text{g}}}=\times_{i\in\mathrm{Vert}_{\gamma}}\ SU(Na_{i})$
_is the moduli space of degree $N$ finely framed at infinity quasimaps_
$U:{{\mathbb{C}\mathbb{P}}^{1}_{\left\langle
x\right\rangle}}\to{{\mathrm{Bun}}_{\mathbf{G}}({\mathscr{E}})}\approx{\mathbb{W}\mathbb{P}}^{a_{0},a_{1},\ldots,a_{r}}$
(6.34)
_where the fine framing is the condition that $U$ is an honest map near
$x=\infty$, and that the first jet (the value and the tangent vector) at
$x=\infty$ is fixed:_
$\left(U({\infty}),U^{\prime}({\infty})\right)\leftrightarrow\left({\tilde{\mathfrak{q}}},m\right)$
(6.35)
_We also have the identification of the extended moduli space
${{\mathfrak{M}}^{\mathrm{ext}}}$ with the space of framed quasimaps._
#### 6.2.3. The class II* theories
The theories with the affine quiver of the ${\widehat{A}}_{r}$ type can be
solved uniformly in both the class II and class II* cases. This is related to
the fact that the current algebra ${\widehat{u(r+1)}}$, the affine Kac-Moody
algebra based on $U(r+1)$, is a subalgebra of
$\widehat{\mathfrak{gl}}_{\infty}$, consisting of the $(r+1)$-periodic
infinite matrices.
Let $\gamma$ be the affine Dynkin graph of the ${\widehat{A}}_{r}$ algebra. We
have, $\mathrm{Vert}_{\gamma}=\mathrm{Edge}_{\gamma}=\\{0,1,\ldots,r\\}$.
Choose such an orientation of the graph $\gamma$ that for any
$e\in\mathrm{Edge}_{\gamma}$: $s(e)=e$, $t(e)=(e+1)$ mod $r+1$. Let $m_{e}$,
$e=0,\ldots,r$ be the corresponding bi-fundamental multiplet masses, and
${\mathfrak{m}}=\sum_{e=0}^{r}m_{e}$ (6.36)
We are in the class II* theory iff $\mathfrak{m}\neq 0$.
It is convenient to extend the definition of $m_{e}$ to the universal cover of
$\gamma$. Thus, we define
$\displaystyle{\mathfrak{m}}_{i}=m_{i\,\mathrm{mod}\,(r+1)},$ (6.37)
$\displaystyle
Y_{i}(x)={\mathscr{Y}}_{i\,\mathrm{mod}\,(r+1)}(x-{\mathfrak{m}}_{(i)})\
,\qquad i\in\mathbb{Z}$
The extended amplitudes $Y_{i}(x)$ obey
$Y_{i+r+1}(x)=Y_{i}\left(x-{\mathfrak{m}}\right)$ (6.38)
Define
$t_{j}(x)={\check{t}}_{j}\,\frac{Y_{j}(x)}{Y_{j-1}(x)}$ (6.39)
where
$\displaystyle{\check{t}}_{j+1}={\mathfrak{q}}_{j\,\mathrm{mod}\,(r+1)}\,{\check{t}}_{j}$
(6.40) $\displaystyle\prod_{j=0}^{r}{\check{t}}_{j}=1\,,\ $
$\displaystyle{\check{t}}_{j+r+1}={\mathfrak{q}}\,{\check{t}}_{j}$
Then
$t_{j+r+1}(x)={\mathfrak{q}}\,t_{j}(x-{\mathfrak{m}})$
where for the $\widehat{A}_{r}$-series,
${\mathfrak{q}}=\prod_{j=0}^{r}{\mathfrak{q}}_{j}$
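As a sanity check, the periodicity in (6.40) follows from iterating the recursion ${\check{t}}_{j+1}={\mathfrak{q}}_{j\,\mathrm{mod}\,(r+1)}\,{\check{t}}_{j}$; a short sympy sketch for $r=2$ (the overall normalization $\prod_{j}{\check{t}}_{j}=1$ drops out of this check):

```python
import sympy as sp

r = 2
q = sp.symbols('q0:3', positive=True)        # q_0, q_1, q_2
tc = {0: sp.Integer(1)}                      # normalization is irrelevant here
for j in range(9):
    tc[j + 1] = q[j % (r + 1)] * tc[j]       # the recursion (6.40)
Q = sp.prod(q)                               # q = q_0 q_1 q_2
# periodicity: t-check_{j+r+1} = q * t-check_j
assert all(sp.simplify(tc[j + r + 1] - Q * tc[j]) == 0 for j in range(7))
```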
Now, consider the following element of $\widehat{GL}_{\infty}$:
$g(x)={\mathscr{Y}}_{0}(x)^{K}\times\prod_{i\in\mathbb{Z}}t_{i}(x)^{E_{i,i}}$
(6.41)
with $t_{i}(x)$ from (6.39), and $E_{i,j}$ denoting the matrix with all
entries zero except $1$ at the $i$’th row and $j$’th column. A closer
inspection shows that (6.41) is the direct generalization of (6.20), with the
$r+1$-periodic matrix $g_{\infty}$, and
$({\mathscr{Y}}_{i}(x))_{i\in\mathrm{Vert}_{\gamma}}$ replaced by the infinite
array $(Y_{i}(x))_{i\in\mathbb{Z}}$. Indeed, the simple coroots of
$\widehat{GL}_{\infty}$ are the diagonal matrices, shifted in the central
direction
${\alpha}_{i}^{\vee}=K{\delta}_{i,0}+E_{i,i}-E_{i+1,i+1},\qquad
i\in\mathbb{Z}$ (6.42)
so that the analogue of (C.64) holds
$K=\sum_{i\in\mathbb{Z}}{\alpha}_{i}^{\vee}$
if we drop the telescopic sum $\sum_{i\in\mathbb{Z}}E_{i,i}-E_{i+1,i+1}\sim
0$.
We do not need to deal with all the coweights of $\widehat{GL}_{\infty}$, only
with the $r+1$-periodic ones, defined via:
$\prod_{j=0}^{r}{\mathfrak{q}}_{j}^{-{\tilde{\lambda}}_{j}^{\vee}}=\prod_{b\in\mathbb{Z}}\prod_{j=1}^{r+1}\left({\mathfrak{q}}^{b}{\check{t}}_{j}\right)^{E_{j+b(r+1),j+b(r+1)}}$
These coweights are the coweights of the ${\widehat{A}}_{r}$ Kac-Moody
algebra, embedded into $\mathfrak{gl}_{\infty}$ as the subalgebra of
$r+1$-periodic matrices
$\sum_{i,j\in\mathbb{Z}}a_{i,j}E_{i,j},\qquad a_{i+r+1,j+r+1}=a_{i,j}$
We shall describe the solution of this theory in detail in the next section.
### 6.3. Spectral curves
The cameral curve captures all the information about the limit shape, the
special coordinates, the vevs of the chiral operators, and the prepotential.
Its definition is universal.
However, the cameral curve is not very convenient to work with. In many cases
one can extract the same information from a ‘‘smaller’’ curve, the so-called
_spectral curve_. In fact, there are several notions of the spectral curve in
the literature.
Suppose ${\lambda}\in{\rm
Hom}(({\mathbb{C}}^{\times})^{\mathrm{Vert}_{\gamma}},{\mathbb{C}}^{\times})$
is a dominant weight, i.e. ${\lambda}({\alpha}_{i}^{\vee})\geq 0$ for all
$i\in\mathrm{Vert}_{\gamma}$. Let $R_{\lambda}$ be the irreducible highest
weight module of ${\mathbf{G}}_{\text{q}}$ with the highest weight $\lambda$,
and ${\pi}_{\lambda}:{{\mathbf{G}}_{\text{q}}}\longrightarrow{\rm
End}(R_{\lambda})$ the corresponding homomorphism. Then the spectral curve
$C^{R_{\lambda}}_{u}$ in $\mathbf{C}_{\left\langle
x\right\rangle}\times\mathbf{C}_{\left\langle t\right\rangle}$ is
${\det}_{R_{\lambda}}\left(1-t^{-1}{\zeta}(x)^{-1}{\pi}_{\lambda}(g(x))\right)=0$
(6.43)
where
1. (1)
for the class I theories we introduce the factor
$\zeta(x)=g_{\infty}(x)^{\lambda}\times{\rm\ a\ rational\ function\ of\ }x\,,$
having to do with the lift of the conjugacy class $[g(x)]$ from
${{\mathbf{G}}^{\text{ad}}}$ to ${\rm C}{\mathbf{G}}$. The rational function
is chosen so as to minimize the degree of the curve $C^{R_{\lambda}}_{u}$, as
we explain in the examples below.
2. (2)
for the class II, II* theories the factor $\zeta(x)$ is a constant.
Generally, the curve $C^{R_{\lambda}}_{u}$ defined by (6.43) is not
irreducible. The equation (6.43) factorizes into a product of components, one
component for each Weyl orbit in the set of weights $\Lambda_{R_{\lambda}}$
for the module $R_{\lambda}$. Each Weyl orbit intersects the dominant chamber
at exactly one point, and can therefore be labeled by a dominant weight $\mu$.
Hence
$C^{R_{\lambda}}_{u}=\bigcup_{\mu\in\Lambda_{R_{\lambda}}\cap\Lambda^{+}}{\mathrm{mult}(\lambda:\mu)}\cdot\left(C_{u}^{\mu}\right)$
(6.44)
where $\mathrm{mult}(\lambda:\mu)$ denotes the multiplicity of the weight
$\mu$ in the module $R_{\lambda}$. If $R_{\lambda}$ is a minuscule module
then, by definition, the curve $C^{R_{\lambda}}_{u}$ is irreducible.
###### Example.
Consider the $A_{1}$ theory and take $\lambda=3\lambda_{1}$, i.e. the
spin-$\frac{3}{2}$ representation. With
$T_{1}(x)=\operatorname{tr}_{R_{1}}g(x)=t(x)+t(x)^{-1}$ one finds
$\displaystyle C^{R_{\lambda_{1}}}:$ $\displaystyle\ 0=1-T_{1}(x)t+t^{2}$
(6.45) $\displaystyle C^{R_{3\lambda_{1}}}:$ $\displaystyle\
0=(1-T_{1}(x)t+t^{2})(1+3T_{1}(x)t-T_{1}(x)^{3}t+t^{2})$
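The factorization in (6.45) can be verified symbolically: in the spin-$\frac{3}{2}$ module the eigenvalues of $g(x)$ are $t(x)^{\pm 3}$, $t(x)^{\pm 1}$. A minimal sympy sketch, writing $y$ for $t(x)$ and $t$ for the spectral variable:

```python
import sympy as sp

t, y = sp.symbols('t y')
T1 = y + 1/y                          # T_1(x) = t(x) + t(x)^{-1}
# characteristic polynomial over the weights 3, 1, -1, -3 of the spin-3/2 module
full = sp.expand(sp.prod([1 - t * y**w for w in (3, 1, -1, -3)]))
claimed = sp.expand((1 - T1*t + t**2) * (1 + 3*T1*t - T1**3*t + t**2))
assert sp.simplify(full - claimed) == 0
```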
Let ${{}^{i}{\mathcal{W}}}_{\mu}\subset{{}^{i}{\mathcal{W}}}$ be the
stabilizer of $\mu$ in ${{}^{i}{\mathcal{W}}}$, a subgroup of
${{}^{i}{\mathcal{W}}}$. Consider the map:
$p_{\mu}:{\mathbf{C}_{\left\langle
x\right\rangle}}\times\left({\mathbb{C}}^{\times}\right)^{\mathrm{Vert}_{\gamma}}\longrightarrow{\mathbf{C}_{\left\langle
x\right\rangle}}\times{\mathbf{C}_{\left\langle t\right\rangle}}$
given by:
$\displaystyle
p_{\mu}:(x,({\mathscr{Y}}_{i})_{i\in\mathrm{Vert}_{\gamma}})\mapsto$
$\displaystyle\,(x,t(x)),$ (6.46) $\displaystyle\qquad
t(x)=g(x)^{\mu}/g_{\infty}(x)^{\mu}=\prod_{i\in\mathrm{Vert}_{\gamma}}{\mathscr{Y}}_{i}^{{\mu}({\alpha}_{i}^{\vee})}$
Under the map $p_{\mu}$ the curve ${\mathcal{C}}_{u}$ maps to
$C_{u}^{\mu}={\mathcal{C}}_{u}/{{{}^{i}{\mathcal{W}}}}_{\mu}\subset{\mathbf{C}_{\left\langle
x\right\rangle}}\times{\mathbf{C}_{\left\langle t\right\rangle}}$, the
irreducible $\mu$-component of the spectral curve. This curve comes with the
canonical differential, which is the restriction of the differential on
${\mathbf{C}_{\left\langle x\right\rangle}}\times{\mathbf{C}_{\left\langle
t\right\rangle}}^{\times}$:
$dS=x\frac{dt}{t}$ (6.47)
Actually, in the case of the class II, II* theories the commonly used notion
of the spectral curve differs from the one in (6.43).
Although we suspect the study of spectral curves associated with the
integrable highest weight representations of affine Kac-Moody algebras may be
quite interesting, in this paper we use, for the analysis of the class II and
II* theories, the conventional notion of the spectral curve employed in the
study of families of $\mathbf{G}$-bundles.
To define it, let us fix an irreducible representation $R$ of $\mathbf{G}$,
${\pi}_{R}:{\mathbf{G}}\to{\rm End}(R)$, and let us study the theory of a
complex chiral fermion valued in $R$, more precisely, a $(1,0)$ $bc$ system
in the representations $(R^{*},R)$:
${\mathscr{L}}_{bc}=\sum_{i=1}^{{\rm dim}R}\int b_{i}{\bar{\partial}}c^{i}$
(6.48)
coupled to a background ${\mathbf{G}}\times{\mathbb{C}}^{\times}$ gauge field
$\bar{\bf A}\oplus\bar{A}$, and compute its partition function on the torus
$\mathscr{E}$:
$Z({\bf t},t,q)={\operatorname{Tr}}_{{\mathcal{H}}_{R}}\left((-t)^{J_{0}}{\bf
t}^{{\bf J}_{0}}q^{L_{0}}\right)$ (6.49)
Mathematically, we consider the space
$H_{R}=R[z,z^{-1}]=H_{R}^{+}\oplus H_{R}^{-}$ (6.50)
of $R$-valued functions on the circle ${\mathbb{S}}^{1}$. In (6.50) we took
Laurent polynomials in $z\in{\mathbb{C}}^{\times}$, which correspond to
Fourier polynomials on the circle. We may take some completion of $H_{R}$ but
we shall not do this in the definition of the spectral determinant below.
Consider an element ${\widehat{g}}\in{\widehat{\mathbf{G}}}$ of the affine
Kac-Moody group, i.e. the central extension of
${\widetilde{L\mathbf{G}}}=L{\mathbf{G}}\ltimes{\mathbb{C}}^{\times}$, the
loop group $L{\mathbf{G}}$ extended by the $\mathbb{C}^{\times}$ acting by
loop rotations. We have the canonical homomorphism-projection
$f:{\widehat{\mathbf{G}}}\longrightarrow{\widetilde{L\mathbf{G}}}$ with the
fiber ${\mathbb{C}}^{\times}$, the center of the central extension:
$f:{\widehat{g}}\mapsto g(z)q^{z{\partial}_{z}}$ (6.51)
The projection is topologically non-trivial.
Now, ${\widetilde{L\mathbf{G}}}$ acts in $H_{R}$ via rotation and evaluation,
and so does $\widehat{\mathbf{G}}$ thanks to (6.51): for ${\Psi}\in H_{R}$:
$\left(f({\widehat{g}})\cdot{\Psi}\right)(z)={\pi}_{R}(g(z))\cdot{\Psi}(qz)$
(6.52)
We would like to define the spectral determinant of $f({\widehat{g}})$ in the
representation $H_{R}$. The eigenvalues of $f({\widehat{g}})$ are easy to
compute
${\rm Eigen}(f({\widehat{g}}))=\\{\,{\bf t}^{\mu}q^{n}\,|\
{\mu}\in{\Lambda}_{R},\,n\in\mathbb{Z}\,\\}$ (6.53)
where we transformed $g(z)$ to a constant ${\bf t}\in\mathbf{T}$ by means of a
$z$-dependent $\mathbf{G}$-gauge transformation:
$g(z)\mapsto h^{-1}(z)g(z)h(qz)={\bf t}$ (6.54)
The fibration $f:{\widehat{\mathbf{G}}}\to{\widetilde{L{\mathbf{G}}}}$,
restricted onto
${\mathbb{C}}^{\times}_{q}\times{\mathbf{T}}\subset{\widetilde{L{\mathbf{G}}}}$
becomes trivial,
$f^{-1}\left({\mathbb{C}}^{\times}_{q}\times{\mathbf{T}}\right)\approx{\mathbb{C}}^{\times}_{c}\times{\mathbb{C}}^{\times}_{q}\times{\mathbf{T}}$.
Let us denote by $c$ the coordinate on the first factor.
The eigenvalues (6.53) concentrate both near $0$ and $\infty$, so we define:
$\displaystyle{\det}_{H_{R}}\,\left(1-t^{-1}{\widehat{g}}\right):={\det}_{H_{R}^{+}}(1-t^{-1}{\widehat{g}}){\det}_{H_{R}^{-}}(1-t{\widehat{g}}^{-1})=$
(6.55) $\displaystyle\qquad\qquad
c^{{\kappa}_{R}}\prod_{{\mu}\in{\Lambda}_{R}}\prod_{n=0}^{\infty}\left(1-q^{n}t^{-1}{\bf
t}^{\mu}\right)\left(1-q^{n+1}t\,{\bf t}^{-{\mu}}\right)$
The expression (6.55) is $W({\widehat{\mathfrak{g}}})$-invariant. The shifts
by ${\rm Q}$ act as follows, cf. (C.104):
$({\bf t},c)\mapsto\left(q^{\beta}\cdot{\bf t},\,{\bf
t}^{\beta}q^{\frac{1}{2}\left\langle\beta,\beta\right\rangle}\cdot c\right)$
(6.56)
where we view $\beta\in{\rm Q}$ both as a vector in the root lattice and as a
vector in the coroot lattice, and $\left\langle,\right\rangle$ is the Killing
metric. The level ${\kappa}_{R}$ in (6.55) is defined as follows:
$\sum_{{\mu}\in{\Lambda}_{R}}{\mu}\left\langle\mu,\beta\right\rangle={\kappa}_{R}{\beta}$
(6.57)
for any vector $\beta\in{\rm Q}$. Geometrically the spectral curve
corresponding to $R$ is obtained as follows: consider the universal principal
$\mathbf{G}$-bundle ${\mathcal{U}}$ over
${\mathrm{Bun}}_{\mathbf{G}}({\mathscr{E}})\times\mathscr{E}$, and associate
the vector bundle $\mathscr{R}$ with the fiber $R$:
${\mathscr{R}}={\mathcal{U}}\times_{\mathbf{G}}R$
Now restrict it onto the rational curve
$\Sigma_{u}\subset{\mathrm{Bun}}_{\mathbf{G}}({\mathscr{E}})$. We get the
$R$-bundle over $\Sigma_{u}\times{\mathscr{E}}$.
For generic point $x\in{{\mathbb{C}\mathbb{P}}^{1}_{\left\langle
x\right\rangle}}$ over the corresponding point $U(x)\in\Sigma_{u}$ we get the
vector bundle ${\mathscr{R}}_{x}$ over $\mathscr{E}$, which is semi-stable,
and splits as a direct sum of line bundles
${\mathscr{R}}_{x}=\bigoplus_{{\mu}\in{\Lambda}_{R}}{\mathscr{L}}_{{\mu},x}$
(6.58)
where the summands are the degree zero line bundles on $\mathscr{E}$. Under
the identification of $Pic_{0}({\mathscr{E}})$ with $\mathscr{E}$, the line bundle
${\mathscr{L}}_{{\mu},x}$ corresponds to the point ${\bf t}(x)^{\mu}$
mod$\,{\mathfrak{q}}^{\mathbb{Z}}$ for some ${\bf
t}(x)\in{\mathbf{T}}/{\mathfrak{q}}^{{{\rm Q}}^{\vee}}$. The closure of the
union
$\bigcup_{x\in{{\mathbb{C}\mathbb{P}}^{1}_{\left\langle
x\right\rangle}}}\,\left\\{\ {\bf t}(x)^{\mu}\ |\ {\mu}\in{\Lambda}_{R}\
\right\\}\ \subset{{\mathbb{C}\mathbb{P}}^{1}_{\left\langle
x\right\rangle}}\times{\mathscr{E}}$ (6.59)
is the spectral curve
$C^{R}_{u}\subset{{\mathbb{C}\mathbb{P}}^{1}_{\left\langle
x\right\rangle}}\times{\mathscr{E}}$. It is given by the vanishing locus of
the regularized determinant (6.55):
$c(x)^{{\kappa}_{R}}\prod_{{\mu}\in{\Lambda}_{R}}{\theta}(t^{-1}{\bf
t}(x)^{\mu};q)=0$ (6.60)
The choice of the $x$-dependence of $c(x)$ seems immaterial at this point, as
long as $c(x)\in{\mathbb{C}}^{\times}$.
##### Degree of the spectral curve
The $x$-degree of the spectral curve of a class II theory in representation
$R$ is $N\kappa_{R}$, where $\kappa_{R}$ is given by (6.57). Equivalently,
$\kappa_{R}$ is the proportionality constant relating the trace form in the
representation $R$ to the canonical Killing form,
$\operatorname{tr}_{R}(\cdot,\cdot)=\kappa_{R}(\cdot,\cdot)_{2}$, where
$(\cdot,\cdot)_{2}$ is normalized so that the long roots have squared length
$2$. The standard computation leads to
$\kappa_{R}=\frac{\dim R}{\dim{\mathfrak{g}}}(\lambda_{R},\lambda_{R}+2\rho)_{2}$
(6.61)
where $\rho=\frac{1}{2}\sum_{\alpha>0}\alpha$ is the Weyl vector. For the
fundamental representation $R_{1}$ we find, in all cases,
$\displaystyle\kappa_{R_{1}}(A_{r})=1$ (6.62)
$\displaystyle\kappa_{R_{1}}(D_{r})=2$ $\displaystyle\kappa_{R_{1}}(E_{6})=6$
$\displaystyle\kappa_{R_{1}}(E_{7})=12$
$\displaystyle\kappa_{R_{1}}(E_{8})=60$
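The first two entries of (6.62) can be checked directly against the definition (6.57), summing $\mu\left\langle{\mu},{\beta}\right\rangle$ over the weights of the fundamental representation. A numerical sketch in the Euclidean realization where the roots $e_{i}-e_{j}$ have squared length $2$:

```python
import numpy as np

def kappa(weights, beta):
    # kappa_R defined by sum_mu mu <mu, beta> = kappa_R * beta, eq. (6.57)
    s = sum(mu * np.dot(mu, beta) for mu in weights)
    k = np.argmax(np.abs(beta))
    ratio = s[k] / beta[k]
    assert np.allclose(s, ratio * beta)      # the sum is proportional to beta
    return ratio

# A_3 fundamental: weights e_i - (e_1+...+e_4)/4 in R^4
E4 = np.eye(4)
wA = [E4[i] - E4.sum(axis=0) / 4 for i in range(4)]
print(kappa(wA, E4[0] - E4[1]))              # -> 1.0

# D_4 vector representation: weights +- e_i in R^4
wD = [s * E4[i] for i in range(4) for s in (1, -1)]
print(kappa(wD, E4[0] - E4[1]))              # -> 2.0
```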
### 6.4. Obscured curve
In the previous construction, in view of the identification
${\mathscr{L}}_{{\mu},x}\leftrightarrow{\bf t}(x)^{\mu}$ we can decompose, for
each weight
${\mu}=\sum_{i=1}^{r}{\mu}_{i}{\lambda}_{i}\in{\Lambda}_{R}$
${\mathscr{L}}_{{\mu},x}=\bigotimes_{i=1}^{r}{\mathbb{L}}_{i,x}^{\otimes{\mu}_{i}}$
for some "basic" line bundles ${\mathbb{L}}_{i,x}$ corresponding to the
fundamental weights. These basic line bundles are ordered, so they define a
point
$\\{\ {\mathbb{L}}_{1,x},\ldots,{\mathbb{L}}_{r,x}\ \\}\in
Pic_{0}({\mathscr{E}})^{r}\approx{\mathscr{E}}^{r}\ ,$
the Cartesian product of $r$ copies of the elliptic curve. Taking the whole
family and including the parametrization we obtain the _obscured curve_
${\mathscr{C}}_{u}$:
${\mathscr{C}}_{u}=\\{\ (x;{\mathbb{L}}_{1,x},\ldots,{\mathbb{L}}_{r,x})\ |\
x\in{{\mathbb{C}\mathbb{P}}^{1}_{\left\langle x\right\rangle}}\
\\}\in{{\mathbb{C}\mathbb{P}}^{1}_{\left\langle
x\right\rangle}}\times{\mathscr{E}}^{r}$ (6.63)
Let us present another simple construction of ${\mathscr{C}}_{u}$. Namely, let
us use the fact [Friedman:1997yq, Friedman:1997ih, Donagi:1997dp,
Friedman:1998si, Friedman:2000ze], that
${{\mathrm{Bun}}_{\mathbf{G}}({\mathscr{E}})}=({\mathscr{E}}\otimes{{\rm
Q}})/W({\mathfrak{g}})$ (6.64)
where the tensor product is understood in the category of abelian groups. At
the level of manifolds, (6.64) simply says that
${{\mathrm{Bun}}_{\mathbf{G}}({\mathscr{E}})}={\mathscr{E}}^{r}/W({\mathfrak{g}})$
(6.65)
for some natural action of the Weyl group $W({\mathfrak{g}})$ on the Cartesian
product of $r$ copies of $\mathscr{E}$. Let us denote by ${\pi}_{W}$ the
projection
${\pi}_{W}:{\mathscr{E}}^{r}\to{{\mathrm{Bun}}_{\mathbf{G}}({\mathscr{E}})}={\mathscr{E}}^{r}/W({\mathfrak{g}})$
(6.66)
The rational curve $\Sigma_{u}$ in
${\mathrm{Bun}}_{\mathbf{G}}({\mathscr{E}})$ lifts to a curve in
${\mathscr{E}}^{r}$, and the graph of the parametrized curve
$\Sigma_{u}\in{{\mathbb{C}\mathbb{P}}^{1}_{\left\langle
x\right\rangle}}\times{{\mathrm{Bun}}_{\mathbf{G}}({\mathscr{E}})}$ lifts to
the graph in ${{\mathbb{C}\mathbb{P}}^{1}_{\left\langle
x\right\rangle}}\times{\mathscr{E}}^{r}$ which is our friend _obscured curve_
${\mathscr{C}}_{u}$. It is the quotient of the cameral curve by the lattice
${\rm Q}^{\vee}$:
${\mathscr{C}}_{u}={\mathcal{C}}_{u}/{{\rm Q}}^{\vee}$ (6.67)
In the section 8.2 we shall present yet another construction of
${\mathbb{L}}_{i,x}$’s, using gauge theory.
There is the so-called determinant line bundle $L$ over the moduli space
${\mathrm{Bun}}_{\mathbf{G}}({\mathscr{E}})$, whose sections are the
fundamental characters $\widehat{\chi}_{i}$, $i=0,1,\ldots,r$. In E.
Looijenga’s identification
${{\mathrm{Bun}}_{\mathbf{G}}({\mathscr{E}})}\approx{\mathbb{W}\mathbb{P}}^{a_{0},a_{1},\ldots,a_{r}}$
this line bundle is just the ${\mathcal{O}}(1)$ orbibundle over the weighted
projective space.
We have then the line bundle ${\mathscr{L}}$ over ${\mathscr{E}}^{r}$:
${\mathscr{L}}={\pi}_{W}^{*}L$ (6.68)
Let us call this line bundle _the abelianized determinant line bundle_.
## Chapter 7 The Seiberg-Witten curves in some detail
In this section we shall discuss the geometry of curves describing the limit
shape configurations and the special geometry of the gauge theories under
consideration. When possible we identify the cameral or the spectral curves
with the analogous curves of some algebraic integrable systems, namely the
Hitchin systems on the genus zero (i.e. Gaudin model) or genus one (i.e. spin
elliptic Calogero-Moser system) curves with punctures. These identifications
are less universal than the identification with the spectral curves of the
spin chains based on the Yangian algebra built on $\mathfrak{g}$,
$\widehat{\mathfrak{g}}$, or $\widehat{GL}_{\infty}$, respectively. The latter
identification is the subject of a separate line of research, which touches
upon various advances in geometric representation theory, the symplectic
geometry of moduli spaces of instantons and monopoles, and the quantum
cohomology of quiver varieties, to name just a few. In this work we shall only
mention the relation to spin chains in a few examples.
Throughout this section we shall use the notation
$g_{\lambda}(x)={\zeta}(x)^{-1}{\pi}_{\lambda}(g(x))$ (7.1)
for the projectively modified operator in the representation
$(R_{\lambda},{\pi}_{\lambda})$ of ${\mathbf{G}}_{\text{q}}$, corresponding to
the group element $g(x)\in{\mathbf{G}}_{\text{q}}$.
### 7.1. Class I theories of $A$ type
This is the so-called linear quiver theory. The set of vertices
$\mathrm{Vert}_{\gamma}=\\{1,\ldots,r\\}$, the set of edges
$\mathrm{Edge}_{\gamma}=\\{1,\ldots,r-1\\}$, the maps $s,t$ for a particular
orientation are given by: $s(e)=e$, $t(e)=e+1$. The bi-fundamental masses are
a trivial cocycle:
$m_{e}={\mu}_{e+1}-{\mu}_{e}$ (7.2)
The corresponding conformal group is $C\mathbf{G}=GL(r+1,{\mathbb{C}})$; the
fundamental characters ${\chi}_{i}$ are the characters of the representations
${\Lambda}^{i}{\mathbb{C}}^{r+1}$. We shall now describe the spectral curve in
the representation $R_{\lambda_{1}}\approx{\mathbb{C}}^{r+1}$. The
corresponding group element $g_{\lambda_{1}}(x)$ in (6.14) is the diagonal
matrix
$g_{\lambda_{1}}(x)={\rm diag}(t_{1}(x),\ldots,t_{r+1}(x))$
with
$\displaystyle t_{1}(x)={\zeta}(x){\mathscr{Y}}_{1}(x),\quad
t_{r+1}(x)={\zeta}(x){\mathscr{P}}^{[r]}(x)\,{\mathscr{Y}}_{r}(x)^{-1}$ (7.3)
$\displaystyle\quad
t_{i}(x)={\zeta}(x){\mathscr{P}}^{[i-1]}(x)\,{\mathscr{Y}}_{i}(x){\mathscr{Y}}_{i-1}(x)^{-1},\qquad
i=2,\ldots,r$
with some normalization factor ${\zeta}(x)$, which we shall choose shortly, and the
explicit formula for the invariants ${\mathcal{X}}_{i}({\mathscr{Y}}(x))$ is
(we omit the $x$-dependence in the right hand side):
$\displaystyle{\mathcal{X}}_{i}({\mathscr{Y}}(x))=\prod_{j=1}^{i-1}{\mathscr{P}}_{j}^{j-i}\times$
(7.4) $\displaystyle\qquad
e_{i}\left({\mathscr{Y}}_{1},{\mathscr{Y}}_{2}{\mathscr{Y}}_{1}^{-1}{\mathscr{P}}^{[1]},\ldots,{\mathscr{Y}}_{i}{\mathscr{Y}}_{i-1}^{-1}{\mathscr{P}}^{[i-1]},\ldots,{\mathscr{Y}}_{r}^{-1}{\mathscr{P}}^{[r]}\right)$
where $e_{i}$ are the elementary symmetric polynomials in $r+1$ variables. Our
master equations (6.5) equate the right hand side of (7.4) with the degree
${\mathbf{v}}_{i}$ polynomial $T_{i}(x)$ in $x$, cf. (6.6).
It is convenient to organize the invariants (7.4) into a generating
polynomial, which is nothing but the characteristic polynomial of the group
element $g(x)$ in some representation of $C{\mathbf{G}}$. The most economical
is, of course, the defining fundamental representation ${\mathbb{C}}^{r+1}$
with the highest weight ${\lambda}_{1}$:
$\displaystyle{\det}\left(t\cdot 1_{r+1}-g_{\lambda_{1}}(x)\right)=$ (7.5)
$\displaystyle\qquad
t^{r+1}+\sum_{i=1}^{r}(-1)^{i}t^{r+1-i}{\zeta}(x)^{i}\prod_{j=1}^{i-1}{\mathscr{P}}_{j}^{i-j}(x)\,{\mathcal{X}}_{i}({\mathscr{Y}}(x))$
$\displaystyle\qquad\qquad\qquad+(-{\zeta}(x))^{r+1}\prod_{j=1}^{r}{\mathscr{P}}_{j}^{r+1-j}(x)$
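For $r=1$ the expansion (7.5) is a one-line check: with eigenvalues $t_{1}={\zeta}{\mathscr{Y}}_{1}$, $t_{2}={\zeta}{\mathscr{P}}{\mathscr{Y}}_{1}^{-1}$ from (7.3) and ${\mathcal{X}}_{1}=e_{1}({\mathscr{Y}}_{1},{\mathscr{P}}{\mathscr{Y}}_{1}^{-1})$, a sympy sketch:

```python
import sympy as sp

t, z, Y, P = sp.symbols('t zeta Y P')
# r = 1: eigenvalues t_1 = zeta*Y, t_2 = zeta*P/Y, cf. (7.3)
char = sp.expand((t - z * Y) * (t - z * P / Y))
X1 = Y + P / Y                                # X_1 = e_1(Y, P/Y)
claimed = t**2 - t * z * X1 + z**2 * P        # the r = 1 case of (7.5)
assert sp.simplify(char - claimed) == 0
```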
The group ${{}^{i}{\mathcal{W}}}$ is the symmetric group
${\mathcal{S}}_{r+1}$, which acts by permuting the eigenvalues of $g(x)$ in
(7.3). The cameral curve ${\mathcal{C}}_{u}$ is the $(r+1)!$-fold ramified
cover of the compactified $x$-plane ${\mathbb{C}\mathbb{P}}^{1}_{\left\langle
x\right\rangle}$. The points in the fiber are the _ordered_ sets of roots
$(t_{1}(x),\ldots,t_{r+1}(x))$ of the polynomial (7.5).
The curve ${\mathcal{C}}_{u}$ covers the _spectral curve_ $C_{u}$. The latter
is defined as the zero locus of the characteristic polynomial (7.5). The cover
${\mathcal{C}}_{u}\to C_{u}$ is $r!:1$: it sends the ordered $(r+1)$-tuple of
roots $(t_{1},\ldots,t_{r+1})$ to the first root $t_{1}$. The cover
$C_{u}\to\mathbf{C}_{\left\langle x\right\rangle}$ is $(r+1):1$.
Explicitly, the curve $C_{u}$ is given by:
$0={\mathcal{P}}(t,x)=\sum_{i=0}^{r+1}(-1)^{i}t^{r+1-i}{\zeta}(x)^{i}\,{\prod_{j=1}^{i-1}{\mathscr{P}}_{j}(x)^{i-j}}\
T_{i}(x)$ (7.6)
#### 7.1.1. Relation to Gaudin model
Figure 7.1. Degree profile example for $A_{4}$ theory and
$(\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3},\mathbf{v}_{4})=(4,7,8,5)$. For
convenience one can set boundary conditions
$\mathbf{v}_{0}=\mathbf{v}_{r+1}=\mathbf{w}_{0}=\mathbf{w}_{r+1}=0$.
It is easy to see, using Eqs. (3.8), (3.9) and Fig. 7.1, that
${\mathbf{w}}_{i_{*}}=w_{+}+w_{-}$, where
$\displaystyle w_{+}={\mathbf{v}}_{i_{*}}-{\mathbf{v}}_{i_{*}+1}\geq 0$ (7.7)
$\displaystyle w_{-}={\mathbf{v}}_{i_{*}}-{\mathbf{v}}_{i_{*}-1}\geq 0$
and it is useful to record
$\displaystyle\mathbf{v}_{1}$
$\displaystyle=\mathbf{w}_{1}+\dots+\mathbf{w}_{i_{*}-1}+w_{-}$ (7.8)
$\displaystyle\mathbf{v}_{r}$
$\displaystyle=\mathbf{w}_{r}+\dots+\mathbf{w}_{i_{*}+1}+w_{+}$
$\displaystyle\mathbf{v}_{i_{*}}$
$\displaystyle=\sum_{i=1}^{i_{*}-1}i\mathbf{w}_{i}+i_{*}w_{-}$
$\displaystyle\mathbf{v}_{i_{*}}$
$\displaystyle=\sum_{i=i_{*}+1}^{r}(r+1-i)\mathbf{w}_{i}+(r+1-i_{*})w_{+}$
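The relations (7.7), (7.8) can be checked on the data of Fig. 7.2 ($A_{4}$, $\mathbf{v}=(7,10,8,5)$, $\mathbf{w}=(4,5,1,2)$, $i_{*}=2$); a short sketch:

```python
# Data from Fig. 7.2: A_4 quiver, v = (7,10,8,5), w = (4,5,1,2), i_* = 2
r, istar = 4, 2
v = {1: 7, 2: 10, 3: 8, 4: 5}
w = {1: 4, 2: 5, 3: 1, 4: 2}

wp = v[istar] - v[istar + 1]                     # w_+, eq. (7.7)
wm = v[istar] - v[istar - 1]                     # w_-
assert (wp, wm) == (2, 3) and w[istar] == wp + wm

# the four relations (7.8)
assert v[1] == sum(w[i] for i in range(1, istar)) + wm
assert v[r] == sum(w[i] for i in range(istar + 1, r + 1)) + wp
assert v[istar] == sum(i * w[i] for i in range(1, istar)) + istar * wm
assert v[istar] == (sum((r + 1 - i) * w[i] for i in range(istar + 1, r + 1))
                    + (r + 1 - istar) * wp)
```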
Accordingly, we can factorize the polynomial ${\mathscr{P}}_{i_{*}}(x)$ as:
${\mathscr{P}}_{i_{*}}(x)={\mathfrak{q}}_{i_{*}}{\mathscr{P}}^{+}(x){\mathscr{P}}^{-}(x)$
(7.9)
where ${\mathscr{P}}^{\pm}$ are monic polynomials of degrees
${\rm deg}{\mathscr{P}}^{\pm}=w_{\pm}$
We can actually transform (7.6) into something nice, by adjusting
${\zeta}(x)$:
${\zeta}(x)^{-1}={\mathscr{P}}^{-}(x){\mathscr{P}}^{[i_{*}-1]}(x)$ (7.10)
Then $D(g(x))$ is given by:
$\displaystyle D(g(x))=\frac{P_{0}(x)}{P_{\infty}(x)}$ (7.11) $\displaystyle
P_{0}(x)={\mathscr{P}}^{+}(x)^{r+1-i_{*}}\prod_{j=i_{*}+1}^{r}{\mathscr{P}}_{j}(x)^{r+1-j}$
$\displaystyle
P_{\infty}(x)={\mathscr{P}}^{-}(x)^{i_{*}}\prod_{j=1}^{i_{*}-1}{\mathscr{P}}_{j}(x)^{j}$
Then ${\mathcal{P}}(t,x)$ can be written as:
${\mathcal{P}}(t,x)=\prod_{j=1}^{i_{*}-1}{\mathfrak{q}}_{j}^{j}\cdot\frac{P(t,x)}{P_{\infty}(x)}$
where $P(t,x)$ is a polynomial of degree $N={\mathbf{v}}_{i_{*}}$ in $x$ and
of degree $r+1$ in $t$, which is straightforward to calculate:
$\displaystyle(-1)^{i_{*}}\prod_{j=1}^{i_{*}}{\mathfrak{q}}_{j}^{j}\ P(t,x)=$
(7.12)
$\displaystyle\qquad(-{\mathfrak{q}}_{i_{*}})^{i_{*}}t^{r+1}P_{\infty}(x)\,+$
$\displaystyle\qquad\qquad+\sum_{i=1}^{i_{*}}t^{r+1-i}\,T_{i}(x){\mathfrak{q}}_{i_{*}}^{i_{*}-i}(-{\mathscr{P}}_{*}^{-}(x))^{i_{*}-i}\prod_{j=i}^{i_{*}-1}{\mathscr{P}}_{j}^{j-i}(x)$
$\displaystyle\qquad\qquad\qquad+\sum_{i=i_{*}+1}^{r}t^{r+1-i}\,T_{i}(x)(-{\mathscr{P}}_{*}^{+}(x))^{i-i_{*}}\prod_{j=i_{*}+1}^{i-1}{\mathscr{P}}_{j}^{i-j}(x)$
$\displaystyle\qquad\qquad\qquad\qquad\qquad+(-1)^{r+1-i_{*}}P_{0}(x)$
Now, recall that $T_{i,0}$ is fixed by the couplings ${\mathfrak{q}}$:
$T_{i,0}({\mathfrak{q}})=\prod_{j=1}^{i-1}{\mathfrak{q}}_{j}^{j-i}\,e_{i}(1,{\mathfrak{q}}_{1},{\mathfrak{q}}_{1}{\mathfrak{q}}_{2},\ldots,{\mathfrak{q}}_{1}{\mathfrak{q}}_{2}\ldots{\mathfrak{q}}_{i},\ldots,{\mathfrak{q}}_{1}\ldots{\mathfrak{q}}_{r})$
(7.13)
and the coefficient $T_{i,1}$ is fixed by the masses $m_{i,{\mathfrak{f}}}$
and $m_{e}$.
Therefore, the coefficient of $x^{N}$ in $P(t,x)$ can be computed explicitly:
$\displaystyle\sum_{i=0}^{r+1}(-1)^{i}t^{r+1-i}\,\prod_{j=1}^{i_{*}}{\mathfrak{q}}_{j}^{-i}\,e_{i}(1,{\mathfrak{q}}_{1},{\mathfrak{q}}_{1}{\mathfrak{q}}_{2},\ldots,{\mathfrak{q}}_{1}{\mathfrak{q}}_{2}\ldots{\mathfrak{q}}_{i},\ldots,{\mathfrak{q}}_{1}\ldots{\mathfrak{q}}_{r})=$
(7.14)
$\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad=\prod_{i=0}^{r}\left(t-{\check{t}}_{i}\right)$
where
${\check{t}}_{i}=\frac{\prod_{j=1}^{i}{\mathfrak{q}}_{j}}{\prod_{j=1}^{i_{*}}{\mathfrak{q}}_{j}},\qquad\qquad
i=0,\ldots,r$ (7.15)
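The identity (7.14) says that rescaling the roots $1,{\mathfrak{q}}_{1},{\mathfrak{q}}_{1}{\mathfrak{q}}_{2},\ldots$ of (7.13) by $\prod_{j=1}^{i_{*}}{\mathfrak{q}}_{j}$ produces exactly the ${\check{t}}_{i}$ of (7.15). A sympy sketch for $r=2$, $i_{*}=1$:

```python
import sympy as sp

t, q1, q2 = sp.symbols('t q1 q2', positive=True)
r, istar = 2, 1
z = [1, q1, q1 * q2]           # 1, q_1, q_1 q_2, cf. (7.13)
s = q1                         # prod_{j=1}^{i_*} q_j

# elementary symmetric polynomials: prod (T + z_k) = sum_i e_i T^{r+1-i}
T = sp.symbols('T')
e = sp.Poly(sp.prod([T + zk for zk in z]), T).all_coeffs()   # e[i] = e_i(z)
lhs = sum((-1)**i * t**(r + 1 - i) * s**(-i) * e[i] for i in range(r + 2))
rhs = sp.prod([t - zk / s for zk in z])   # prod_i (t - t-check_i), eq. (7.15)
assert sp.simplify(lhs - rhs) == 0
```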
We thus rewrite the curve $C_{u}$ in the $(x,t)$-space, defined by the
equation
$\displaystyle
0={\mathcal{R}}_{A_{r}}(t,x)=\frac{P(t,x)}{\prod_{i=0}^{r}\left(t-{\check{t}}_{i}\right)}=\prod_{l=1}^{N}\left(x-x_{l}(t)\right)=$
(7.16)
$\displaystyle\qquad\qquad\qquad\qquad=x^{N}+\frac{1}{\prod_{i=0}^{r}\left(t-{\check{t}}_{i}\right)}\sum_{j=1}^{N}p_{j}(t)x^{N-j}$
where
$N={\mathbf{v}}_{i_{*}}$ (7.17)
It is clear from Eq. (7.16) that as $t\to{\check{t}}_{i}$ one of the roots
$x_{l}(t)$ has a pole, while the other $N-1$ roots are finite. Near $t=0$ the
polynomial $P(t,x)$ approaches:
$P(0,x)=(-1)^{r+1-i_{*}}P_{0}(x)$ (7.18)
while near $t=\infty$
$P(t,x)t^{-r-1}\to(-{\mathfrak{q}}_{i_{*}})^{i_{*}}P_{\infty}(x)$ (7.19)
Let
$dS=x\frac{dt}{t}$ (7.20)
Then our discussion above implies that the differential $dS$ has the first
order poles on $C_{u}$: at one of the $N$ preimages of the points
${\check{t}}_{i}$, $i=0,1,\ldots,r$, and at all preimages of the points $t=0$
and $t=\infty$. The residues of $dS$ are linear combinations of the masses of
the hypermultiplets, in agreement with the observations in
[Seiberg:1994aj],[Donagi:1995cf].
Remarkably, we can identify $C_{u}$ with the spectral curve of the meromorphic
Higgs field ${\Phi}$:
${\Phi}={\Phi}(t)dt=\sum_{j=-1}^{r+1}\,{\Phi}_{j}\frac{dt}{t-{\check{t}}_{j}}$
(7.21)
where ${\check{t}}_{-1}=0$, ${\check{t}}_{r+1}={\infty}$, and ${\Phi}_{j}$ are
$N\times N$ matrices, which have rank one for $j=0,1,\ldots,r$, and have the
maximal rank for $j=-1,r+1$. Moreover, the eigenvalues of ${\Phi}_{j}$ are all
fixed in terms of the masses. The spectra of ${\Phi}_{j}$, $j=-1,\ldots,r+1$
have specified multiplicities:
1. (1)
the matrix ${\Phi}_{-1}$ has $w_{+}$ eigenvalues of multiplicity $r+1-i_{*}$,
and ${\mathbf{w}}_{r+1-j}$ eigenvalues of multiplicity $j$, for
$j=1,\ldots,r-i_{*}$;
the eigenvalues are fixed by the masses.
2. (2)
the matrices ${\Phi}_{j}$, $j=0,1,\ldots,r$ have one non-vanishing eigenvalue
each, and $N-1$ vanishing eigenvalues; we can write
$({\Phi}_{j})_{a}^{b}=u_{a}^{j}v_{j}^{b},\qquad a,b=1,\ldots,N$
for some vectors $u^{j}$, $v_{j}\in{\mathbb{C}}^{N}$, obeying
$\sum_{a=1}^{N}u^{j}_{a}v_{j}^{a}=M_{j}$ (7.22)
and considered up to an obvious ${\mathbb{C}}^{\times}$-action, for some
$M_{j}$ which is linear in the bi-fundamental and fundamental masses.
3. (3)
the matrix ${\Phi}_{r+1}$ has $w_{-}$ eigenvalues of multiplicity $i_{*}$, and
${\mathbf{w}}_{j}$ eigenvalues of multiplicity $j$, for $j=1,\ldots,i_{*}-1$.
Then:
$\left(\frac{dt}{t}\right)^{N}\,{\mathcal{R}}_{A_{r}}(t,x)={\det}\left(\,x\frac{dt}{t}-{\Phi}\right)$
(7.23)
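The rank-one condition in item (2) means each ${\Phi}_{j}$, $j=0,\ldots,r$, has spectrum $\{M_{j},0,\ldots,0\}$ with $M_{j}=\sum_{a}u^{j}_{a}v_{j}^{a}$. A minimal numerical illustration (the vectors here are random placeholders, not actual solution data):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5
u, v = rng.standard_normal(N), rng.standard_normal(N)
Phi = np.outer(u, v)                        # rank-one residue Phi_j = u v^T
M = u @ v                                   # M_j = sum_a u_a v^a, eq. (7.22)
ev = np.sort(np.abs(np.linalg.eigvals(Phi)))
assert np.allclose(ev[:-1], 0, atol=1e-10)  # N-1 vanishing eigenvalues
assert np.isclose(ev[-1], abs(M))           # one eigenvalue equal to M_j
```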
We can make an $SL(N)$ Higgs field out of $\Phi$ by shifting it by the scalar
meromorphic one-form $\frac{1}{N}{\operatorname{Tr}}_{N}{\Phi}$, which is
independent of the moduli $u$ of the curve $C_{u}$.
The moduli space of $(r+3)$-tuples of matrices ${\Phi}_{j}$, obeying
$\sum_{j=-1}^{r+1}{\Phi}_{j}=0$ (7.24)
with fixed eigenvalues of the above mentioned multiplicity, considered up to
the simultaneous $SL(N)$-similarity transformation, is the phase space
${{\mathfrak{P}}}^{H}_{0,r+3}$ of the genus zero version of $SL(N)$ Hitchin
system, the classical Gaudin model on $r+3$ sites. The general Gaudin model
has the residues ${\Phi}_{j}$ belonging to arbitrary conjugacy classes.
See [Kronheimer:1990a, Kronheimer:1990] for the geometry of complex coadjoint
orbits. The Hitchin system with singularities was studied in [Gorsky:1994dj,
Nekrasov:1995nq, Donagi:1995am, Gukov:2006jk, Witten:2007td, Gukov:2008sn]. In
[Gaiotto:2009we, Nanopoulos:2009uw, Nanopoulos:2010zb, Nanopoulos:2010ga,
Nanopoulos:2009xe] this Hitchin system with singularities was discussed from
the point of view of brane constructions such as [Witten:1997sc,
Gaiotto:2009we].
###### Remark.
The curve $C_{u}$ is much more economical than ${\mathcal{C}}_{u}$. However,
the price we pay is the complexity of the relation between the special
coordinates ${\mathfrak{a}}_{i{\mathbf{a}}}$,
${\mathfrak{a}}_{i{\mathbf{a}}}^{D}$ and the moduli $u$ of the curve $C_{u}$.
Roughly speaking, all special coordinates are linear combinations of the
periods of the differential
$x\frac{dt}{t}$
and the masses. The coordinates ${\mathfrak{a}}_{1{\mathbf{a}}}$ come from the
periods
$\oint x\,d{\log}g_{1}(x)\sim\oint xdt/t$
the coordinates ${\mathfrak{a}}_{2{\mathbf{a}}}$ come from the periods
$\oint x\,d{\log}(g_{1}(x)g_{2}(x))\sim\oint xdt/t+\oint xdt/t$
the coordinates ${\mathfrak{a}}_{i{\mathbf{a}}}$ come from the periods
$\oint x\,d{\log}(g_{1}(x)\ldots g_{i}(x))\sim\oint xdt/t+\ldots+\oint xdt/t$
etc.
###### Remark.
In the $A_{2}$ case our solution matches the one found in [Shadchin:2005cc].
###### Remark.
We can connect the cameral curve ${\mathcal{C}}_{u}$ to the spectral curve
$C_{u}$ via a tower of ramified covers:
${\mathcal{C}}_{u}\to C_{u}^{(r)}\to C_{u}^{(r-1)}\to\dots\to
C_{u}^{(1)}=C_{u}\to{{\mathbb{C}\mathbb{P}}^{1}_{\left\langle
x\right\rangle}}$ (7.25)
which we can call the _Gelfand-Zeitlin_ tower of curves. The curve
$C_{u}^{(i)}$ is the quotient of ${\mathcal{C}}_{u}$ by the subgroup
$W(A_{r-i})$ of the Weyl group $W(A_{r})$, which acts on the amplitudes
$({\mathscr{Y}}_{i+1},\ldots,{\mathscr{Y}}_{r})$ while preserving
$({\mathscr{Y}}_{1},\ldots,{\mathscr{Y}}_{i})$.
###### Remark.
We should warn the reader that our cameral curves need not be the cameral
curves of Hitchin systems [Donagi:1995alg]. We mapped the spectral curve of
the family of conjugacy classes $[g(x)]$ corresponding to the fundamental
representation $R_{1}$ to the spectral curve of the $GL(N)$-Gaudin system,
i.e. the genus zero Hitchin system, corresponding to the $N$-dimensional
representation. One could then build the cameral curve for the $GL(N)$-Gaudin
system. This curve has all the reasons to differ from our cameral curve
${\mathcal{C}}_{u}$.
However, the identification of ${\mathfrak{M}}$ with the moduli spaces of
curves describing the spectrum of the transfer matrix in the quasi classical
limit of the $Y(A_{r})$ spin chain is more natural, and carries over to the
level of cameral curves. This statement will be elaborated upon below and in
[NP2012b].
###### Remark.
In view of [Gaiotto:2009we] it is natural to identify the space of couplings
${\tilde{\mathfrak{q}}}=({\mathfrak{q}}_{1},\ldots,{\mathfrak{q}}_{r})$ with a
coordinate patch in the moduli space $\overline{\mathcal{M}}_{0,r+3}$ of
stable genus zero curves with $r+3$ punctures. In this fashion the linear
quiver theories (the class I type $A_{r}$ theories) can be analytically
continued to other weakly coupled regions (weak coupling corresponds to the
maximal degeneration of the stable curve). Most of these regions do not have a
satisfactory Lagrangian description. Nevertheless, it would be interesting to
try to generalize the limit shape equations even without knowing their
microscopic origin. What would the iWeyl group look like in this case?
#### 7.1.2. Quiver description
We have thus found that a particular subset of Gaudin-Hitchin models, with all
but two residues of the minimal type, gives the Seiberg-Witten integrable
systems of the class I $A_{r}$ type theories. As a check, let us verify that
the dimension of the moduli space ${{\mathfrak{P}}}^{H}_{0,r+3}$ of solutions
to the (traceless part of the) moment map equation (7.24), divided by the
$SL(N,{\mathbb{C}})$-action, is equal to:
$\displaystyle 2(r+1)(N-1)-2(N^{2}-1)+$ (7.26)
$\displaystyle\qquad+\left(N^{2}-\sum_{j=1}^{i_{*}-1}j^{2}{\mathbf{w}}_{j}-i_{*}^{2}w_{-}\right)+$
$\displaystyle\qquad+\left(N^{2}-\sum_{j=i_{*}+1}^{r}(r+1-j)^{2}{\mathbf{w}}_{j}-(r+1-i_{*})^{2}w_{+}\right)=$
$\displaystyle\qquad\qquad\qquad=2\sum_{i=1}^{r}({\mathbf{v}}_{i}-1)=2\,{\dim}{{\mathfrak{M}}}$
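The equality (7.26) can be verified on the data of Fig. 7.2 ($A_{4}$, $\mathbf{v}=(7,10,8,5)$, $\mathbf{w}=(4,5,1,2)$, $i_{*}=2$, $w_{-}=3$, $w_{+}=2$); a short numerical sketch:

```python
# data from Fig. 7.2: A_4 quiver, i_* = 2
r, istar, wm, wp = 4, 2, 3, 2
v = {1: 7, 2: 10, 3: 8, 4: 5}
w = {1: 4, 2: 5, 3: 1, 4: 2}
N = v[istar]                   # N = v_{i_*}

# left-hand side of (7.26), term by term
lhs = (2 * (r + 1) * (N - 1) - 2 * (N**2 - 1)
       + (N**2 - sum(j * j * w[j] for j in range(1, istar)) - istar**2 * wm)
       + (N**2 - sum((r + 1 - j)**2 * w[j] for j in range(istar + 1, r + 1))
          - (r + 1 - istar)**2 * wp))
rhs = 2 * sum(v[i] - 1 for i in range(1, r + 1))   # 2 sum (v_i - 1)
assert lhs == rhs == 52        # = 2 dim M
```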
Actually, the moduli space ${{\mathfrak{P}}}^{H}_{0,r+3}$ can be described as
a quiver variety. Its graph is an $(r+3)$-pointed star, with $r+1$ legs of
length $1$, and two long legs, of lengths $l_{-1}={\mathbf{v}}_{r}-1$ and
$l_{r+1}={\mathbf{v}}_{1}-1$, respectively. The dimensions of the vector
spaces assigned to vertices are: the $(r+3)$-valent vertex (the star) has
dimension $N$, the tails of the short legs all have dimension $1$, the
dimensions along the long legs start at $1$ at the tails, then grow with the
step $1$ for the first ${\mathbf{w}}_{1}$ (respectively, ${\mathbf{w}}_{r}$)
vertices, then grow with the step $2$ for the next ${\mathbf{w}}_{2}$
(respectively, ${\mathbf{w}}_{r-1}$) and so on. (See example in figure 7.2).
Figure 7.2. The example quiver variety for $A_{4}$ quiver at
$\mathbf{v}=(7,10,8,5)$ and $\mathbf{w}=(4,5,1,2)$ with $i_{*}=2$ and
$\mathbf{w}_{*}=\mathbf{w}_{-}+\mathbf{w}_{+}$ with $\mathbf{w}_{-}=3$ and
$\mathbf{w}_{+}=2$. The labels at the vertices denote the dimensions,
following the pattern explained above
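As a purely arithmetic sanity check of the count (7.26), one can evaluate both sides on the data of the figure 7.2 example (so $r=4$, and we take $N={\mathbf{v}}_{i_{*}}=10$ for the rank of the Gaudin group, as the residue degrees suggest). The following short sketch, with the quiver data hard-coded, confirms that both sides equal $52$:

```python
# Check the dimension count (7.26) on the A_4 example of figure 7.2:
# v = (7,10,8,5), w = (4,5,1,2), i_* = 2, w_- = 3, w_+ = 2, N = v_{i_*} = 10.
r = 4
v = [7, 10, 8, 5]
w = [4, 5, 1, 2]
i_star, w_minus, w_plus = 2, 3, 2
N = v[i_star - 1]  # = 10

lhs = 2 * (r + 1) * (N - 1) - 2 * (N**2 - 1)
# residue contribution from the nodes below i_*:
lhs += N**2 - sum(j**2 * w[j - 1] for j in range(1, i_star)) - i_star**2 * w_minus
# residue contribution from the nodes above i_*:
lhs += N**2 - sum((r + 1 - j)**2 * w[j - 1] for j in range(i_star + 1, r + 1)) \
       - (r + 1 - i_star)**2 * w_plus

rhs = 2 * sum(vi - 1 for vi in v)  # = 2 dim M

print(lhs, rhs)  # both equal 52
```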
The extended phase space ${\mathfrak{P}}^{\mathrm{ext}}$ for the class I
$A_{r}$ type theories is easy to describe. One just needs to relax the
${\mathbb{C}}^{\times}$ moment map constraints (7.22) as well as the analogous
${\mathbb{C}}^{\times}$ constraints for the ${\Phi}_{-1}$, ${\Phi}_{r+1}$
residues. In the quiver description we make the quiver gauge group the product
of the special unitary groups as opposed to the product of unitary groups.
#### 7.1.3. Reduction to the spin chain
The simplest example of the Class I theory of the $A$ type is, of course, the
$A_{1}$ theory. This is the celebrated $N_{f}=2N_{c}$ theory, with
${\mathbf{w}}_{1}=N_{f}$, ${\mathbf{v}}_{1}=N_{c}=N$, in our notation. Let
${\mathfrak{q}}={\mathfrak{q}}_{1}$ and let $T(x)=T_{1,0}^{-1}T_{1}(x)$ denote
the corresponding monic polynomial of degree $N$.
The reduced curve (7.12) assumes a very simple form:
${\mathfrak{q}}{\mathscr{P}}^{-}(x)t+{\mathscr{P}}^{+}(x)t^{-1}=(1+{\mathfrak{q}})T(x)$
(7.27)
It is not difficult to recognize in this formula the quasiclassical limit of
Baxter’s $T-Q$ equation [Baxter:1985] for the $XXX$ $sl_{2}$ spin chain. In
fact, it was already observed in [Gorsky:1996hs, Gorsky:1996qp] that the
Seiberg-Witten curve of the ${\mathcal{N}}=2$ supersymmetric QCD can be
interpreted using the integrable spin chain, albeit in a somewhat different
fashion. Note that a possible lift of $[g(x)]$ to
$C{\mathbf{G}}=GL(2,{\mathbb{C}})$ in this case is given by the diagonal
matrix:
$g(x)=\left(\begin{matrix}{\mathfrak{q}}t{\mathscr{P}}^{-}(x)&0\\\
0&t^{-1}{\mathscr{P}}^{+}(x)\end{matrix}\right)$ (7.28)
where $t$ solves (7.27). However, this choice of $g(x)$ is not continuous in
$x$: as we cross the cuts $I_{1,{\mathbf{a}}}$, the diagonal entries of $g(x)$
are exchanged. We can conjugate $g(x)\to h^{-1}(x)g(x)h(x)$ into
a form, e.g.
$\mathbf{g}(x)=\left(\begin{matrix}{\mathfrak{q}}T(x)&1\\\
{\mathfrak{q}}\left(T^{2}(x)-{\mathscr{P}}^{+}(x){\mathscr{P}}^{-}(x)\right)&T(x)\end{matrix}\right)$
(7.29)
whose entries are polynomials. This is a particular case of a general
statement [Steinberg:1965], lifting a family of conjugacy classes in
${\mathbf{G}}_{\text{q}}$ to ${\mathbf{G}}_{\text{q}}$ itself (slightly
adapted for the conformal extension $C\mathbf{G}$). The lift (7.29) does not
depend on the split ${\mathscr{P}}(x)$ into the product of
${\mathscr{P}}^{\pm}$ factors.
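One can check directly that the lift (7.29) has the right spectrum: its characteristic polynomial in an auxiliary variable $t'$ is $t'^{2}-(1+{\mathfrak{q}})T\,t'+{\mathfrak{q}}\,{\mathscr{P}}^{+}{\mathscr{P}}^{-}$, whose roots are precisely the diagonal entries ${\mathfrak{q}}t{\mathscr{P}}^{-}$ and $t^{-1}{\mathscr{P}}^{+}$ of (7.28) when $t$ solves (7.27). A numerical sketch, with random values standing in for ${\mathscr{P}}^{\pm}(x)$, ${\mathfrak{q}}$ and $t$ at a fixed $x$:

```python
import random

random.seed(0)
# Random stand-ins for q, P^+(x), P^-(x) at a fixed value of x.
q, Pp, Pm = [random.uniform(0.1, 1.0) for _ in range(3)]
t = random.uniform(0.5, 2.0)
# Define T(x) through the curve equation (7.27): q P^- t + P^+ / t = (1+q) T.
T = (q * Pm * t + Pp / t) / (1 + q)

# The lift (7.29).
g = [[q * T, 1.0],
     [q * (T**2 - Pp * Pm), T]]

trace = g[0][0] + g[1][1]
det = g[0][0] * g[1][1] - g[0][1] * g[1][0]

# Eigenvalues of the diagonal lift (7.28).
lam1, lam2 = q * t * Pm, Pp / t

assert abs(trace - (lam1 + lam2)) < 1e-12
assert abs(det - lam1 * lam2) < 1e-12
```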
There is yet another lift of $[g(x)]$ to $C\mathbf{G}$, which does depend on
the factorization, and makes closer contact with spin chains. We shall discuss
it in the section devoted to the study of the phase spaces of the integrable
systems corresponding to our gauge theories.
#### 7.1.4. Duality
In the mapping to the Gaudin-Hitchin system we employed a particular lift
$g(x)$ of the conjugacy class $[g(x)]$ in
$SL(r+1,{\mathbb{C}})/{\mathbb{Z}}_{r+1}$ to the conjugacy class in
$GL(r+1,{\mathbb{C}})$ by a judicious choice of the normalization factor
${\zeta}(x)$. More importantly, the spectral one-form describing the
eigenvalues of the Higgs field, is equal to $xdt/t$ where $x$ is the argument
of the amplitude function, and $t$ is the spectral variable describing the
eigenvalues of $g(x)$. For the group $GL(r+1,{\mathbb{C}})$ the eigenvalues of
$g(x)$ in some representation take values in ${\mathbf{C}_{\left\langle
t\right\rangle}}={\mathbb{C}}^{\times}$ which gets naturally compactified to
${\mathbb{C}\mathbb{P}}^{1}$ to allow the degenerations.
To summarize, the Lax operator of the Gaudin-Hitchin system, the Higgs field
${\Phi}(t)\,dt/t$, lives on the curve ${\mathbf{C}_{\left\langle
t\right\rangle}}$ of the eigenvalues of the ‘‘Lax operator’’ $g(x)$ of the
gauge theory. Vice versa, the ‘‘Lax operator’’ $g(x)$ of the gauge theory
lives on the curve ${\mathbf{C}_{\left\langle x\right\rangle}}$ of the
eigenvalues of the Higgs field of the Hitchin system.
We shall encounter some versions of this ‘‘eigenvalue – spectral parameter’’
duality in other sections of this work.
### 7.2. Class I theories of $D$ type
These are the $SU({\mathbf{v}}_{1})\times\ldots\times SU({\mathbf{v}}_{r})$
theories whose quiver contains a trivalent vertex which connects two one-
vertex legs to a leg of the length $r-3$. The corresponding group
${\mathbf{G}}_{\text{q}}$ is $Spin(2r,{\mathbb{C}})$, its conformal version
$C\mathbf{G}$ is the extension of $\mathbf{G}$ by ${\mathbb{C}}^{\times}$ or
${\mathbb{C}}^{\times}\times{\mathbb{C}}^{\times}$, depending on the parity of
$r$.
Passing from the $A$ type theories to the $D$ type theories we encounter a new
phenomenon. In addition to the exterior powers $\wedge^{i}V$ of the vector
representation $V={\mathbb{C}}^{2r}$ of $Spin(2r)$ the fundamental
representations of the group ${\mathbf{G}}_{\text{q}}$ come also from spin
representations $S_{\pm}$. We should use the cameral curve ${\mathcal{C}}_{u}$
to get the special coordinates and the prepotential; however, a lot of
information is contained in the spectral curve $C_{u}^{R}$ in some fundamental
representation $R$, which we shall take to be the vector $2r$ dimensional
representation $V=R_{\lambda_{1}}={\mathbb{C}}^{2r}$. In order to describe the
spectral curve we need to know the characters of the group element $g(x)$
(6.14) in the representations $\wedge^{i}V$, for $i=1,\ldots,2r$. When we deal
with $V$ and its exterior powers only, we do not see the full conformal
version of $\mathbf{G}$, only its one-dimensional extension (which we shall
denote simply by $C\mathbf{G}$), which consists of the matrices $g\in
GL(2r,{\mathbb{C}})$, such that $gg^{t}=D(g)\cdot{\bf 1}_{2r}$, with
$D(g)\in{\mathbb{C}}^{\times}$ a scalar.
The spectral curve $C_{u}=C^{V}_{u}$ in the vector representation can be
modified by the transformation similar to (7.10) to get the curve of minimal
degree in $x$. Let us label the vertices of the $D_{r}$ Dynkin diagram in such
a way that the trivalent vertex is $r-2$, the tails are $r-1$ and $r$, and the
end vertex of the ‘‘long leg’’ has the label $1$, see Appendix A. Then the
product of the matter polynomials ${\mathscr{P}}_{r-1}$ and
${\mathscr{P}}_{r}$ has degree
${\rm
deg}({\mathscr{P}}_{r-1}{\mathscr{P}}_{r})=2({\mathbf{v}}_{r-1}+{\mathbf{v}}_{r}-{\mathbf{v}}_{r-2})$
Now we shall factorize ${\mathscr{P}}_{r-1}{\mathscr{P}}_{r}$ into a product
of two factors of equal degrees
${\mathscr{P}}_{r-1}{\mathscr{P}}_{r}={\mathscr{P}}^{+}{\mathscr{P}}^{-},\qquad{\rm
deg}{\mathscr{P}}^{+}={\rm
deg}{\mathscr{P}}^{-}={\mathbf{v}}_{r-1}+{\mathbf{v}}_{r}-{\mathbf{v}}_{r-2}$
(7.30)
There are many possible factorizations. For example, if
${\mathbf{w}}_{r-1}\leq{\mathbf{w}}_{r}$, then we can take
${\mathscr{P}}_{r}(x)={\mathscr{P}}^{+}(x)S(x)$ and
${\mathscr{P}}^{-}(x)=S(x){\mathscr{P}}_{r-1}(x)$, where ${\mathscr{P}}^{+}(x)$
is any subfactor of ${\mathscr{P}}_{r}(x)$ of degree
${\mathbf{v}}_{r}+{\mathbf{v}}_{r-1}-{\mathbf{v}}_{r-2}\leq{\mathbf{w}}_{r}=2{\mathbf{v}}_{r}-{\mathbf{v}}_{r-2}$. We shall normalize
${\mathscr{P}}^{\pm}(x)$ so that the leading coefficient of both polynomials
equals
$\sqrt{{\mathfrak{q}}_{r-1}{\mathfrak{q}}_{r}}$
The existence of different decompositions (7.30) is a generalization of the
$S$-duality of the ${\mathcal{S}}$-class ${\mathcal{N}}=2$ theories of the
$A_{r}$ type studied in [Gaiotto:2009we].
The spectral curve $C_{u}$ corresponding to the $2r$-dimensional vector
representation of $CSpin(2r,{\mathbb{C}})$ is mapped to the curve
$P_{D_{r}}^{C}(t,x)=0$ in the $(t,x)$-space, where
$P_{D_{r}}^{C}(t,x)=t^{-r}P_{\infty}(x){\det}_{R_{1}}(t\cdot 1_{2r}-g(x))$
(7.31)
with some polynomial $P_{\infty}(x)$ to be determined below. The group element
$g(x)$ in the vector representation ${\mathbb{C}}^{2r}$ of
$C{\mathbf{G}}_{\text{q}}$ is given by
$g(x)=E^{-1}\operatorname{diag}(g_{1}(x),\dots,g_{2r}(x))E$ (7.32)
with $E$ being any matrix such that
$(EE^{t})_{ij}={\delta}_{i,2r+1-j}$
represents the symmetric bilinear form on ${\mathbb{C}}^{2r}$ and
$\displaystyle g_{1}(x)=\zeta(x)\mathscr{Y}_{1}(x)$ (7.33) $\displaystyle
g_{i}(x)=\zeta(x)\mathscr{P}^{[i-1]}(x)\frac{\mathscr{Y}_{i}(x)}{\mathscr{Y}_{i-1}(x)},\quad
i=2,\dots,r-2$ $\displaystyle
g_{r-1}(x)=\zeta(x)\mathscr{P}^{[r-2]}(x)\frac{\mathscr{Y}_{r-1}(x)\mathscr{Y}_{r}(x)}{\mathscr{Y}_{r-2}(x)}$
$\displaystyle
g_{r}(x)=\zeta(x)\mathscr{P}^{[r-2]}(x)\mathscr{P}_{r-1}(x)\frac{\mathscr{Y}_{r}(x)}{\mathscr{Y}_{r-1}(x)}$
$\displaystyle
g_{r+1}(x)=\zeta(x)\mathscr{P}^{[r-2]}(x)\mathscr{P}_{r}(x)\mathscr{Y}_{r-1}(x)/\mathscr{Y}_{r}(x)$
$\displaystyle
g_{r+2}(x)=\zeta(x)\mathscr{P}^{[r]}(x)\mathscr{Y}_{r-2}(x)/(\mathscr{Y}_{r-1}(x)\mathscr{Y}_{r}(x))$
$\displaystyle
g_{2r+1-i}(x)=\zeta(x)\frac{\mathscr{P}^{[r]}(x)\mathscr{P}^{[r-2]}(x)}{{\mathscr{P}}^{[i-1]}(x)}\frac{\mathscr{Y}_{i-1}(x)}{\mathscr{Y}_{i}(x)},\quad
i=2,\dots,r-2$ $\displaystyle
g_{2r}=\zeta(x)\mathscr{P}^{[r]}(x)\mathscr{P}^{[r-2]}(x)\frac{1}{\mathscr{Y}_{1}(x)}$
The factor $\zeta(x)$ which likely gives the minimal degree curve is
$\zeta(x)^{-1}=\mathscr{P}^{+}(x)\mathscr{P}^{[r-2]}(x)$ (7.34)
Thus, the scalar $D(g(x))$ is equal to:
$D(g(x))=\frac{{\mathscr{P}}^{-}(x)}{{\mathscr{P}}^{+}(x)}$ (7.35)
and the prefactor in (7.31) is:
$P_{\infty}(x)=\mathscr{P}^{+}(x)^{r}\prod_{j=1}^{r-2}\mathscr{P}_{j}(x)^{j}.$
(7.36)
After some manipulations we find
$\displaystyle(-1)^{r}P_{D_{r}}^{C}(t,x)=T_{r}^{2}{\mathscr{P}}_{r-1}+T_{r-1}^{2}{\mathscr{P}}_{r}-{\eta}T_{r-1}T_{r}+$
(7.37)
$\displaystyle\qquad\sum_{l=1}^{\left[\frac{r}{2}\right]}T_{r-2l}\left(\prod_{j=r+1-2l}^{r-2}{\mathscr{P}}_{j}^{j-r+2l}\right)\,{\xi}_{l}^{2}-$
$\displaystyle\qquad\qquad-\sum_{l=1}^{\left[\frac{r-1}{2}\right]}T_{r-2l-1}\left(\prod_{j=r-2l}^{r-2}{\mathscr{P}}_{j}^{j-r+2l+1}\right)\,{\xi}_{l}{\xi}_{l+1}\,,$
$\displaystyle{\xi}_{l}=({\mathscr{P}}^{+}t)^{l}-({\mathscr{P}}^{-}t^{-1})^{l},\
{\eta}={\mathscr{P}}^{+}t+{\mathscr{P}}^{-}t^{-1}$
This equation has degree
$N=2({\mathbf{v}}_{r}+{\mathbf{v}}_{r-1})-{\mathbf{v}}_{r-2}$ in the $x$
variable. Note
${\mathbf{v}}_{r-2}\leq N\leq 2{\mathbf{v}}_{r-2}$ (7.38)
As in the $A_{r}$ case, the curve $C_{u}$ has branches going off to infinity
in the $x$-direction, over $2r$ points ${\check{t}}_{i},{\check{t}}_{i}^{-1}$,
$i=1,\ldots,r$ in the $t$-line ${\mathbb{C}\mathbb{P}}^{1}_{t}$ which
correspond to the weights of $R_{1}$
${\check{t}}_{i}=\frac{1}{\sqrt{{\mathfrak{q}}_{r-1}{\mathfrak{q}}_{r}}}\frac{{\mathfrak{q}}^{[i-1]}}{{\mathfrak{q}}^{[r-2]}}$
(7.39)
In addition, there are special points $t=0,\infty$. Over these points the
curve $C_{u}$ has $N$ branches, where $x$ approaches one of the roots of the
polynomial $P_{0}(x)$
$P_{0}(x)={\mathscr{P}}^{-}(x)^{r}\prod_{j=1}^{r-2}{\mathscr{P}}_{j}(x)^{j}\
.$ (7.40)
and $P_{\infty}(x)$, cf. (7.36), respectively.
The curve $C_{u}$ is invariant under the involution
$t\mapsto\frac{{\mathscr{P}}^{-}(x)}{{\mathscr{P}}^{+}(x)}t^{-1}$ (7.41)
The fixed points of (7.41) are the points of intersection of the curve $C_{u}$
and the curve
${\mathscr{P}}^{+}(x)t-{\mathscr{P}}^{-}(x)t^{-1}=0$ (7.42)
The equations $P_{D_{r}}^{C}(t,x)=0$ (7.37) and (7.42) imply
$\displaystyle
T_{r}^{2}(x){\mathscr{P}}_{r-1}(x)+T_{r-1}^{2}(x){\mathscr{P}}_{r}(x)=$ (7.43)
$\displaystyle\qquad\qquad\qquad
T_{r-1}(x)T_{r}(x)({\mathscr{P}}^{+}(x)t+{\mathscr{P}}^{-}(x)t^{-1})$
and
$T_{r}^{2}(x){\mathscr{P}}_{r-1}(x)=T_{r-1}^{2}(x){\mathscr{P}}_{r}(x)$ (7.44)
Again, the curve $C_{u}$ is more economical than the full cameral curve
${\mathcal{C}}_{u}$. Again, the special coordinates
${\mathfrak{a}}_{i,{\mathbf{a}}}$ and the duals
${\mathfrak{a}}_{i,{\mathbf{a}}}^{D}$ are the linear combinations of the
periods of the differential $xdt/t$ and the masses.
Let us map the curve $C_{u}$ to the curve $\Sigma_{u}$ in the space $S$ which
is a ${\mathbb{Z}}_{2}$-quotient of the (blowup of the)
${\mathbf{C}_{\left\langle
x\right\rangle}}\times{\mathbb{C}\mathbb{P}}^{1}_{t}$ space, parametrized by
$(x,s)$ where
$s=\frac{{\mathscr{P}}^{+}(x)}{{\mathscr{P}}^{-}(x)}t^{2}$
The curve $\Sigma_{u}$ is described by the equations $s+s^{-1}=2c$ and:
$P_{D_{r}}^{\Sigma}(x,c)\equiv{\bf
A}(x,c)^{2}-2{\mathscr{P}}_{r}(x){\mathscr{P}}_{r-1}(x)(c+1){\bf
B}(x,c)^{2}=0$ (7.45)
where ${\bf A},{\bf B}$ are the polynomials in $x$ and $c$ of bi-degrees
$(N,\left[\frac{r}{2}\right])$ and
$({\mathbf{v}}_{r-1}+{\mathbf{v}}_{r},\left[\frac{r-1}{2}\right])$,
respectively:
$\displaystyle{\bf
A}(x,c)=T_{r}^{2}{\mathscr{P}}_{r-1}+T_{r-1}^{2}{\mathscr{P}}_{r}+$ (7.46)
$\displaystyle\qquad\qquad+2\sum_{l=1}^{\left[\frac{r}{2}\right]}\,{\bf
C}_{l}(c)\,T_{r-2l}{\mathscr{P}}_{r-1}^{l}{\mathscr{P}}_{r}^{l}\prod_{j=r+1-2l}^{r-2}{\mathscr{P}}_{j}^{j-r+2l},$
$\displaystyle{\bf
B}(x,c)=T_{r-1}T_{r}+2\sum_{l=1}^{\left[\frac{r-1}{2}\right]}\,{\bf
D}_{l}(c)\,T_{r-2l-1}{\mathscr{P}}_{r-1}^{l}{\mathscr{P}}_{r}^{l}\prod_{j=r-2l}^{r-2}{\mathscr{P}}_{j}^{j-r+2l+1},$
where the degree $l$ polynomials ${\bf C}_{l}(c)$, ${\bf D}_{l}(c)$ are
defined as follows:
$\displaystyle{\bf C}_{l}(c)=\frac{1}{2}(s^{l}+s^{-l})-1,\qquad s+s^{-1}=2c$
(7.47) $\displaystyle{\bf
D}_{l}(c)=\frac{(s^{l}-1)(s^{l+1}-1)}{2s^{l}(s+1)}=\sum_{j=0}^{l-1}(-1)^{j}{\bf
C}_{l-j}(c)$
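The identity in (7.47) expressing ${\bf D}_{l}$ through the alternating sum of the ${\bf C}_{l}$'s is elementary but easy to misstate; a short numerical check at a random $s$ (hence random $c=(s+s^{-1})/2$) confirms it for the first few $l$:

```python
import random

random.seed(1)

def C(l, s):
    # C_l(c) = (s^l + s^{-l})/2 - 1, with c = (s + 1/s)/2, as in (7.47).
    return (s**l + s**-l) / 2 - 1

def D(l, s):
    # D_l(c) = (s^l - 1)(s^{l+1} - 1) / (2 s^l (s + 1)), as in (7.47).
    return (s**l - 1) * (s**(l + 1) - 1) / (2 * s**l * (s + 1))

s = random.uniform(0.3, 3.0)
for l in range(1, 8):
    # D_l(c) = sum_{j=0}^{l-1} (-1)^j C_{l-j}(c)
    alt_sum = sum((-1)**j * C(l - j, s) for j in range(l))
    assert abs(D(l, s) - alt_sum) < 1e-9
```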
Over the points $c=1$ and $c=-1$ the equation for $\Sigma_{u}$ becomes
reducible: at $c=1$:
$P_{D_{r}}^{\Sigma}(x,1)=({\mathscr{P}}_{r-1}T_{r}^{2}-{\mathscr{P}}_{r}T_{r-1}^{2})^{2}$
(7.48)
and at $c=-1$:
$P_{D_{r}}^{\Sigma}(x,-1)={\bf A}(x,-1)^{2}$ (7.49)
It is easy to see that the curve $\Sigma_{u}$ has double points at $(x,s)$
where either $s=1$ and $x$ is any of the $N$ roots of (7.48), or $s=-1$ and
$x$ is any of the $N$ roots of (7.49). The locations of these roots are not
fixed by the masses of the matter fields.
Let us normalize the equation of $\Sigma_{u}$ by dividing $P_{D_{r}}^{\Sigma}$
by the coefficient at $x^{2N}$:
${\mathcal{R}}_{D_{r}}(x,c)=\frac{P_{D_{r}}^{\Sigma}(x,c)}{\prod_{i=1}^{r}(s-{\check{t}}_{i}^{2})(1-s^{-1}{\check{t}}_{i}^{-2})}$
(7.50)
times a constant such that $\mathcal{R}_{D_{r}}(x,c)$ is monic in $x$ and a
rational function of $c$.
We thus arrive at the following interpretation of the curve $\Sigma_{u}$. It
is the spectral curve
${\mathcal{R}}_{D_{r}}\left(x,\frac{s^{2}+1}{2s}\right)\left(\frac{ds}{s}\right)^{2N}={\rm
Det}_{2N}\left(x\frac{ds}{s}-{\Phi}(s)\right)$ (7.51)
of the genus zero Higgs field
${\Phi}(s)=\sum_{s_{j}\in J}{\Phi}_{j}\frac{ds}{s-s_{j}}$ (7.52)
where $J\subset{\mathbb{C}\mathbb{P}}^{1}_{s}$ is the set of $2r+2$
singularities:
$J=\\{0,\infty\,\\}\cup\\{\,{\check{t}}_{i}^{2},\
{\check{t}}_{i}^{-2}\,|\,i=1,\ldots,r\\}$
Let ${\sigma}:{\mathbb{C}\mathbb{P}}^{1}_{s}\to{\mathbb{C}\mathbb{P}}^{1}_{s}$
be the involution ${\sigma}(s)=s^{-1}$. The Higgs field must obey:
${\sigma}^{*}{\Phi}={\Omega}{\Phi}^{t}{\Omega}^{-1}$ (7.53)
where ${\Omega}$ is a constant anti-symmetric matrix (cf. [Kapustin:1998fa]),
which defines the symplectic structure on $V={\mathbb{C}}^{2N}$. If we expand:
$\displaystyle{\Phi}(s)={\Phi}_{0}\frac{ds}{s}+\sum_{i=1}^{r}{\Phi}_{i}^{+}\frac{ds}{s-{\check{t}}_{i}^{2}}+\sum_{i=1}^{r}{\Phi}_{i}^{-}\frac{ds}{s-{\check{t}}_{i}^{-2}}\,,$
(7.54)
$\displaystyle\qquad\qquad{\Phi}_{\infty}=-{\Phi}_{0}-\sum_{i=1}^{r}({\Phi}_{i}^{+}+{\Phi}_{i}^{-})$
Then (7.53) implies:
$\displaystyle{\Phi}_{\infty}={\Omega}{\Phi}_{0}^{t}{\Omega}^{-1},\qquad{\Phi}_{i}^{+}={\Omega}({\Phi}_{i}^{-})^{t}{\Omega}^{-1},\qquad
i=1,\ldots,r$ (7.55)
Also, the matrices ${\Phi}^{+}_{i},{\Phi}^{-}_{i}$, $i=1,\ldots,r$, must have
rank one, while the matrices ${\Phi}_{0,\infty,\pm 1}$ have rank $2N$. We can
interpret
$\displaystyle{\mu}={\Phi}_{0}+{\Phi}_{\infty}+\sum_{i=1}^{r}({\Phi}_{i}^{+}+{\Phi}_{i}^{-})\,,$
(7.56)
$\displaystyle\qquad\qquad\qquad\qquad{\mu}^{t}={\Omega}^{-1}{\mu}{\Omega}$
as the moment map for the $Sp(2N)$ group action on the product of some orbits
${\mathcal{O}}_{0}\times{\mathcal{O}}_{-1}\times{\mathcal{O}}_{1}\times\prod_{i=1}^{r}{\mathcal{O}}_{i}$
which generates the action ${\Phi}_{j}\mapsto g^{-1}{\Phi}_{j}g$ of $g\in
Sp(2N)$, such that:
$g{\Omega}g^{t}={\Omega}\ .$ (7.57)
It would be nice to develop further the theory of these orbifold Hitchin-
Gaudin systems. We shall encounter a genus one version of such a theory in the
Class II $D_{r}$ section below.
The differential whose periods determine the special coordinates is equal to
$dS=x\frac{ds}{s}$ (7.58)
#### 7.2.1. Freezing example
Here we will illustrate how the $D_{4}$ theory with
$v_{1}=v_{3}=v_{4}=v,v_{2}=2v$ and $w_{1}=w_{3}=w_{4}=0,w_{2}=v$ reduces to
$A_{3}$ with $v_{1}=v_{3}=v,v_{2}=2v$ and $w_{2}=2v$ when the node 4 freezes
under $\mathfrak{q}_{4}\to 0$. Keeping in mind the unfreezing to the affine
$\widehat{D}_{4}$, let the polynomial $Y_{0}$ of degree $v$ denote the
fundamental matter polynomial attached to the node ‘‘2’’.
The $D_{4}$ spectral curve for the node ‘‘1’’ from (7.37), in terms of the
variable $\eta$,
$\eta=t+\frac{\mathfrak{q}_{1}^{2}\mathfrak{q}_{2}^{2}\mathfrak{q}_{3}\mathfrak{q}_{4}}{t}$
(7.59)
where $\mathscr{Y}_{2}=Y_{0}t$ is
$\mathcal{R}_{D_{4}}(\eta,x)=\eta^{4}Y_{0}^{2}-\eta^{3}T_{1}Y_{0}+\eta^{2}\left(\mathfrak{q}_{1}T_{2}-4\mathfrak{q}_{1}^{2}\mathfrak{q}_{2}^{2}\mathfrak{q}_{3}\mathfrak{q}_{4}Y_{0}^{2}\right)+\\\
\eta\left(-\mathfrak{q}_{1}^{2}\mathfrak{q}_{2}T_{3}T_{4}+4\mathfrak{q}_{1}^{2}\mathfrak{q}_{2}^{2}\mathfrak{q}_{3}\mathfrak{q}_{4}T_{1}Y_{0}\right)-4\mathfrak{q}_{1}^{3}\mathfrak{q}_{2}^{2}\mathfrak{q}_{3}\mathfrak{q}_{4}T_{2}+\mathfrak{q}_{1}^{3}\mathfrak{q}_{2}^{2}\mathfrak{q}_{4}T_{3}^{2}+\mathfrak{q}_{1}^{3}\mathfrak{q}_{2}^{2}\mathfrak{q}_{3}T_{4}^{2}$
(7.60)
Notice that the curve is a polynomial of degree $4$ in $\eta$ with polynomial
coefficients in $x$ of degree $2v$. In the limit $x\to\infty$ we find that the
limiting values of $\eta$ are
$1+\mathfrak{q}_{1}^{2}\mathfrak{q}_{2}^{2}\mathfrak{q}_{3}\mathfrak{q}_{4},\quad\mathfrak{q}_{1}+\mathfrak{q}_{1}\mathfrak{q}_{2}^{2}\mathfrak{q}_{3}\mathfrak{q}_{4},\quad\mathfrak{q}_{1}\mathfrak{q}_{2}+\mathfrak{q}_{1}\mathfrak{q}_{2}\mathfrak{q}_{3}\mathfrak{q}_{4},\quad\mathfrak{q}_{1}\mathfrak{q}_{2}\mathfrak{q}_{3}+\mathfrak{q}_{1}\mathfrak{q}_{2}\mathfrak{q}_{4}$
(7.61)
Notice that the differential is
$\lambda=x\frac{dt}{t}=x\frac{d\eta}{(\eta^{2}-4\mathfrak{q}_{1}^{2}\mathfrak{q}_{2}^{2}\mathfrak{q}_{3}\mathfrak{q}_{4})^{\frac{1}{2}}}$
(7.62)
Also notice that at $\eta=\pm
2\mathfrak{q}_{1}\mathfrak{q}_{2}\mathfrak{q}_{3}^{\frac{1}{2}}\mathfrak{q}_{4}^{\frac{1}{2}}$
the curve factorizes as
$\mathcal{R}_{D_{4}}(\pm
2\mathfrak{q}_{1}\mathfrak{q}_{2}\mathfrak{q}_{3}^{\frac{1}{2}}\mathfrak{q}_{4}^{\frac{1}{2}},x)=\mathfrak{q}_{1}^{3}\mathfrak{q}_{2}^{2}(\mathfrak{q}_{3}^{\frac{1}{2}}T_{4}(x)\mp\mathfrak{q}_{4}^{\frac{1}{2}}T_{3}(x))^{2}$
(7.63)
It also factorizes at $\eta=\infty$:
$\mathcal{R}_{D_{4}}(\eta=\infty,x)=Y_{0}(x)^{2}$ (7.64)
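These factorization claims can be checked numerically on (7.60), treating $Y_{0},T_{1},\dots,T_{4}$ as random numbers (their values at a fixed $x$) and the $\mathfrak{q}_{i}$ as random positive couplings; note that in our check the perfect square at $\eta=\pm 2\mathfrak{q}_{1}\mathfrak{q}_{2}(\mathfrak{q}_{3}\mathfrak{q}_{4})^{1/2}$ involves the half-integer powers $\mathfrak{q}_{3}^{1/2},\mathfrak{q}_{4}^{1/2}$:

```python
import math
import random

random.seed(2)
q1, q2, q3, q4 = [random.uniform(0.1, 1.0) for _ in range(4)]
Y0, T1, T2, T3, T4 = [random.uniform(-2.0, 2.0) for _ in range(5)]
Q = q1**2 * q2**2 * q3 * q4

def R(eta):
    # The D_4 curve (7.60) at a fixed x, with T_i = T_i(x), Y0 = Y0(x).
    return (eta**4 * Y0**2 - eta**3 * T1 * Y0
            + eta**2 * (q1 * T2 - 4 * Q * Y0**2)
            + eta * (-q1**2 * q2 * T3 * T4 + 4 * Q * T1 * Y0)
            - 4 * q1 * Q * T2
            + q1**3 * q2**2 * q4 * T3**2
            + q1**3 * q2**2 * q3 * T4**2)

A = q1 * q2 * math.sqrt(q3 * q4)
# Perfect squares at eta = ±2A (half-integer powers of q3, q4):
sq_plus = q1**3 * q2**2 * (math.sqrt(q3) * T4 - math.sqrt(q4) * T3)**2
sq_minus = q1**3 * q2**2 * (math.sqrt(q3) * T4 + math.sqrt(q4) * T3)**2
assert abs(R(2 * A) - sq_plus) < 1e-10
assert abs(R(-2 * A) - sq_minus) < 1e-10
```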
We can interpret the multi-valued nature of $\lambda$ on the $\eta$-plane as
the deformation of the punctured sphere underlying the $A_{r}$-type theories
to the curve describing the $D_{r}$-type theories, by opening punctures into
cuts. Perhaps one can elevate this observation to the corresponding
deformation of the Liouville theory coupled to some conformal matter, along
the lines of [Knizhnik:1987xp, Gerasimov:1988gy].
We see that in the decoupling limit $\mathfrak{q}_{4}=0$ the above curve
reduces to
$\mathcal{R}_{A_{3}}(\eta,x)=\eta^{4}Y_{0}^{2}-\eta^{3}T_{1}Y_{0}+\eta^{2}\mathfrak{q}_{1}T_{2}-\eta\mathfrak{q}_{1}^{2}\mathfrak{q}_{2}T_{3}Y_{4}+\mathfrak{q}_{1}^{3}\mathfrak{q}_{2}^{2}\mathfrak{q}_{3}Y_{4}^{2}$
(7.65)
where $\mathscr{Y}_{4}$ freezes and turns into a factor of
degree $v$ contributing to the fundamental matter polynomial for the node
‘‘2’’; we denote this factor by $Y_{4}\equiv\mathscr{Y}_{4}=T_{4}$. The curve
(7.65) is precisely the $A_{3}$ curve for the node ‘‘1’’ (7.12) in terms of
the variable $\mathscr{Y}_{1}=Y_{0}\eta$. This curve corresponds to the
$GL(2)$ Hitchin system with four punctures, at
$1,\quad\mathfrak{q}_{1},\quad\mathfrak{q}_{1}\mathfrak{q}_{2},\quad\mathfrak{q}_{1}\mathfrak{q}_{2}\mathfrak{q}_{3}$
(7.66)
Moreover, from the discussion after (7.21) (we have
$\mathbf{w}_{0}=0,\mathbf{w}_{2}=2,\mathbf{w}_{3}=0$ and $i_{*}=2$ and
$\mathbf{w}_{+}=\mathbf{w}_{-}=1$) it is clear that the eigenvalues of the
Higgs field residues at $\eta=0$ and at $\eta=\infty$ are doubly degenerate,
which effectively means that the $SL(2,\mathbb{C})$ part of the Higgs field
does not have punctures at $\eta=0$ and $\eta=\infty$. We can continue the
freezing reduction: now we set $\mathfrak{q}_{3}=0$, declaring the function
$\mathscr{Y}_{3}$ to contribute to the fundamental matter at the node ‘‘2’’;
we denote $\mathscr{Y}_{3}=T_{3}=Y_{3}$. After factoring out $\eta$, the curve
(7.65) reduces to the $A_{2}$ curve
$\mathcal{R}_{A_{2}}(\eta,x)=\eta^{3}Y_{0}^{2}-\eta^{2}T_{1}Y_{0}+\eta\mathfrak{q}_{1}T_{2}-\mathfrak{q}_{1}^{2}\mathfrak{q}_{2}Y_{3}Y_{4}$
(7.67)
The corresponding Gaudin system has punctures at $\eta=0$ and $\eta=\infty$
and at
$1,\quad\mathfrak{q}_{1},\quad\mathfrak{q}_{1}\mathfrak{q}_{2}$ (7.68)
Finally, we can freeze the node ‘‘1’’ by sending $\mathfrak{q}_{1}$ to zero
and rescaling $\eta=\tilde{\eta}\mathfrak{q}_{1}$ so that the former punctures
$\mathfrak{q}_{1},\mathfrak{q}_{1}\mathfrak{q}_{2}$ on the
$\tilde{\eta}$-plane in terms of $\tilde{\eta}$ become
$1,\quad\mathfrak{q}_{2}$ (7.69)
while the puncture $\eta=1$ is sent away to $\tilde{\eta}=\infty$. We set
$Y_{1}\equiv\mathscr{Y}_{1}=T_{1}$ and find that (7.67) reduces to the
familiar $A_{1}$ curve with gauge polynomial $T_{2}$ of degree $2v$ and four
factors $(Y_{0},Y_{1},Y_{3},Y_{4})$ of degree $v$ which make up the
fundamental matter polynomial of degree $4v$
$\mathcal{R}_{A_{1}}(\tilde{\eta},x)=-\tilde{\eta}^{2}Y_{1}Y_{0}+\tilde{\eta}T_{2}-\mathfrak{q}_{2}Y_{3}Y_{4}$
(7.70)
The punctures of the corresponding Gaudin model in the $\tilde{\eta}$-plane
are at $(0,\mathfrak{q}_{2},1,\infty)$.
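The whole freezing cascade above consists of elementary polynomial identities and can be verified numerically: setting $\mathfrak{q}_{4}=0$ in (7.60) gives (7.65) with $Y_{4}=T_{4}$; setting $\mathfrak{q}_{3}=0$ in (7.65) gives $\eta$ times (7.67) with $Y_{3}=T_{3}$; and substituting $\eta=\mathfrak{q}_{1}\tilde{\eta}$ in (7.67) and dividing by $\mathfrak{q}_{1}^{2}$ gives (7.70) with $Y_{1}=T_{1}$, up to a correction term linear in $\mathfrak{q}_{1}$. A sketch with random numeric stand-ins:

```python
import random

random.seed(3)
q1, q2, q3 = [random.uniform(0.1, 1.0) for _ in range(3)]
Y0, T1, T2, T3, T4 = [random.uniform(-2.0, 2.0) for _ in range(5)]
Y1, Y3, Y4 = T1, T3, T4  # frozen Y's become matter factors
eta = random.uniform(0.5, 2.0)

def R_D4(eta, q4):
    # The D_4 curve (7.60) at a fixed x.
    Q = q1**2 * q2**2 * q3 * q4
    return (eta**4 * Y0**2 - eta**3 * T1 * Y0
            + eta**2 * (q1 * T2 - 4 * Q * Y0**2)
            + eta * (-q1**2 * q2 * T3 * T4 + 4 * Q * T1 * Y0)
            - 4 * q1 * Q * T2
            + q1**3 * q2**2 * q4 * T3**2 + q1**3 * q2**2 * q3 * T4**2)

def R_A3(eta, q3_):
    # The A_3 curve (7.65).
    return (eta**4 * Y0**2 - eta**3 * T1 * Y0 + eta**2 * q1 * T2
            - eta * q1**2 * q2 * T3 * Y4 + q1**3 * q2**2 * q3_ * Y4**2)

def R_A2(eta, q1_):
    # The A_2 curve (7.67).
    return (eta**3 * Y0**2 - eta**2 * T1 * Y0 + eta * q1_ * T2
            - q1_**2 * q2 * Y3 * Y4)

def R_A1(et):
    # The A_1 curve (7.70).
    return -et**2 * Y1 * Y0 + et * T2 - q2 * Y3 * Y4

# q4 -> 0: D_4 reduces to A_3.
assert abs(R_D4(eta, 0.0) - R_A3(eta, q3)) < 1e-12
# q3 -> 0: A_3 reduces to eta times A_2.
assert abs(R_A3(eta, 0.0) - eta * R_A2(eta, q1)) < 1e-12
# eta = q1 * eta_t: A_2 equals q1^2 * (A_1 + a correction vanishing as q1 -> 0).
et = eta
assert abs(R_A2(q1 * et, q1) - q1**2 * (q1 * et**3 * Y0**2 + R_A1(et))) < 1e-12
```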
### 7.3. Class I theories of $E$ type
We are using Bourbaki conventions to label the nodes on the Dynkin graph of
$E_{r}$ series, see figures in the Appendix A. One can construct the analogues
of the spectral curves $C_{u}$ or $\Sigma_{u}$ using the minuscule
representations in the $E_{6}$ and $E_{7}$ cases. For $E_{8}$ one can
construct the spectral curve using the adjoint representation $\bf 248$.
However, it seems more advantageous to use the degenerate version of the del
Pezzo/$E$-bundle correspondence, which we review below in the discussion of
Class II theories of $E$ type. For the standard conformal $E_{r}$ quivers,
which are obtained by freezing the node ‘‘0’’ in the affine $E_{r}$ quivers
with ranks ${\mathbf{v}}_{i}=Na_{i}$, where $a_{i}$ are the Dynkin marks, we find
spectral curves of $(t,x)$-degree equal to $(27,6N)$ for $E_{6}$, $(56,12N)$
for $E_{7}$ and $(240,60N)$ for $E_{8}$. These degrees can be understood from
the degeneration of $\widehat{E}_{r}$ spectral curves computed in section 6.3.
#### 7.3.1. The $E_{6}$ theory
The spectral curve in the fundamental representation $R_{6}=\mathbf{27}$
associated with the node ‘‘6’’, in which the group element of the conformal
extension of $E_{6}$ is $g(x)=(\mathscr{Y}_{6}(x),\dots)$, has the form
$\mathcal{R}_{E_{6}}(t,x)=0$ (7.71)
where the explicit expression is of the form111The explicit expression, which
we do not list here, is available upon request; it is computed by the
straightforward expansion of the exterior powers $\bigwedge^{\bullet}R_{6}$ in
the representation ring $\mathrm{Rep}(E_{6})$ over the fundamental
representations $R_{1},\dots,R_{6}$.
$\mathcal{R}_{E_{6}}(t,x)={\det}_{R_{6}}(t\cdot
1_{27}-g(x))=t^{27}-t^{26}T_{6}+t^{25}\mathscr{P}_{6}T_{5}-t^{24}\mathscr{P}_{5}\mathscr{P}_{6}^{2}T_{4}+t^{23}\left(-\mathscr{P}_{2}^{2}\mathscr{P}_{3}^{2}\mathscr{P}_{4}^{4}\mathscr{P}_{5}^{4}\mathscr{P}_{6}^{4}T_{1}^{2}+\mathscr{P}_{1}\mathscr{P}_{2}^{2}\mathscr{P}_{3}^{2}\mathscr{P}_{4}^{4}\mathscr{P}_{5}^{4}\mathscr{P}_{6}^{4}T_{3}+\mathscr{P}_{4}\mathscr{P}_{5}^{2}\mathscr{P}_{6}^{3}T_{2}T_{3}-\mathscr{P}_{2}\mathscr{P}_{3}\mathscr{P}_{4}^{2}\mathscr{P}_{5}^{2}\mathscr{P}_{6}^{3}T_{1}T_{5}+\mathscr{P}_{1}^{2}\mathscr{P}_{2}^{3}\mathscr{P}_{3}^{4}\mathscr{P}_{4}^{6}\mathscr{P}_{5}^{5}\mathscr{P}_{6}^{4}T_{6}\right)+\dots-\mathscr{P}_{1}^{18}\mathscr{P}_{2}^{27}\mathscr{P}_{3}^{36}\mathscr{P}_{4}^{54}\mathscr{P}_{5}^{45}\mathscr{P}_{6}^{36}$
(7.72)
where we have omitted the explicit expressions for the terms from $t^{24}$ to
$t^{1}$, and we omitted the dependence on $x$ in the notations for the
polynomial coefficients, so that $\mathscr{P}_{i}\equiv\mathscr{P}_{i}(x)$ and
$T_{i}\equiv T_{i}(x)$. The curve (7.72) has $x$-degree $27v_{6}$, and, of
course, is not the most economical. By rescaling $g(x)\to\zeta(x)g(x)$ with a
suitably chosen $\zeta(x)$ of degree $-v_{6}$, made of some powers of the
factors in the fundamental polynomials, we can reduce the degree of (7.72).
The most standard conformal $E_{6}$ quiver, which arises from the degenerate
limit $\mathfrak{q}_{0}\to 0$ in the node ‘‘0’’ of the affine
$\widehat{E}_{6}$ quiver, has matter polynomial
$\mathscr{P}_{2}=\mathfrak{q}_{2}Y_{0}$ of degree $N$ only at the node ‘‘2’’
to which the affine node ‘‘0’’ was attached, while the degrees of the gauge
polynomials are fixed by the Dynkin marks ${\mathbf{v}}_{i}=Na_{i}$, that is
$(\mathbf{v}_{1},\dots,\mathbf{v}_{6})=(N,2N,2N,3N,2N,N)$. For such conformal
$E_{6}$ quiver, the curve (7.72) has a canonical reduced form under the choice
$\zeta^{-1}(x)=Y_{0}(x)$, and the degree of the reduced curve is
$6N=2\mathbf{v}_{*}$, where $\mathbf{v}_{*}\equiv\mathbf{v}_{4}=3N$ denotes
the rank at the trivalent node ‘‘4’’. The reduced curve of this special
conformal $E_{6}$ quiver is $\mathcal{R}_{E_{6}}(t,x)$ with
$\mathscr{P}_{i}=\mathfrak{q}_{i},\ i\neq
2;\ \mathscr{P}_{2}=\mathfrak{q}_{2}Y_{0}$; we find
$\mathcal{R}_{E_{6}}(t,x)=t^{27}Y_{0}^{6}-t^{26}Y_{0}^{5}T_{6}+t^{25}\mathfrak{q}_{6}Y_{0}^{4}T_{5}-t^{24}\mathfrak{q}_{5}\mathfrak{q}_{6}^{2}Y_{0}^{3}T_{4}+t^{23}\left(-\mathfrak{q}_{2}^{2}\mathfrak{q}_{3}^{2}\mathfrak{q}_{4}^{4}\mathfrak{q}_{5}^{4}\mathfrak{q}_{6}^{4}Y_{0}^{4}T_{1}^{2}+\mathfrak{q}_{1}\mathfrak{q}_{2}^{2}\mathfrak{q}_{3}^{2}\mathfrak{q}_{4}^{4}\mathfrak{q}_{5}^{4}\mathfrak{q}_{6}^{4}Y_{0}^{4}T_{3}+\mathfrak{q}_{4}\mathfrak{q}_{5}^{2}\mathfrak{q}_{6}^{3}Y_{0}^{2}T_{2}T_{3}-\mathfrak{q}_{2}\mathfrak{q}_{3}\mathfrak{q}_{4}^{2}\mathfrak{q}_{5}^{2}\mathfrak{q}_{6}^{3}Y_{0}^{3}T_{1}T_{5}+\mathfrak{q}_{1}^{2}\mathfrak{q}_{2}^{3}\mathfrak{q}_{3}^{4}\mathfrak{q}_{4}^{6}\mathfrak{q}_{5}^{5}\mathfrak{q}_{6}^{4}Y_{0}^{5}T_{6}\right)+\dots-t^{2}\mathfrak{q}_{1}^{15}\mathfrak{q}_{2}^{23}\mathfrak{q}_{3}^{30}\mathfrak{q}_{4}^{46}\mathfrak{q}_{5}^{39}\mathfrak{q}_{6}^{32}Y_{0}^{4}T_{3}+t\mathfrak{q}_{1}^{16}\mathfrak{q}_{2}^{25}\mathfrak{q}_{3}^{33}\mathfrak{q}_{4}^{50}\mathfrak{q}_{5}^{42}\mathfrak{q}_{6}^{34}Y_{0}^{5}T_{1}-\mathfrak{q}_{1}^{18}\mathfrak{q}_{2}^{27}\mathfrak{q}_{3}^{36}\mathfrak{q}_{4}^{54}\mathfrak{q}_{5}^{45}\mathfrak{q}_{6}^{36}Y_{0}^{6}$
(7.73)
where again we displayed only the leading and trailing terms, skipping the
middle ones. Indeed, one sees that the curve (7.73) of the $E_{6}$ quiver with
the standard rank assignments $\mathbf{v}_{i}=Na_{i}$ has degree $6N$. In the
limit $x\to\infty$ the 27 roots of $\mathcal{R}_{E_{6}}(t,x)$ in (7.73) approach
the set of points in the $t$-plane labeled by the weights $\lambda$ in the
$\mathbf{27}$ representation of $E_{6}$ and given explicitly by
$\prod_{i=1}^{6}\mathfrak{q}_{i}^{(\lambda_{i},\lambda-\lambda_{i})}$, or
$\left\\{\ \prod_{i=1}^{6}\mathfrak{q}_{i}^{n_{i}}\ |\
\sum_{i=1}^{6}n_{i}\alpha_{i}\
=\lambda_{6}-\lambda,\quad\lambda\in\mathrm{weights}(R_{6})\right\\}$ (7.74)
where $n_{i}$ are the coefficients of the expansion in the basis of simple
roots of the difference between a given weight in $\mathbf{27}$ and the
highest weight. One can associate a Higgs field to the spectral curve (7.73),
with poles at the 27 punctures (7.74), subject to certain relations. In other
words, the curve (7.73) realizes a certain embedding of the standard conformal
$E_{6}$ quiver theory with gauge group ranks
$\mathbf{v}_{i}=(N,2N,2N,3N,2N,N)$ into some specialization of the $A_{26}$
theory with ranks $(6N,6N,\dots,6N)$, and this embedding can be lifted to the
Higgs field spectral curve representation of (7.73).
For non-standard assignments of $\mathbf{w}_{i}$ and $\mathbf{v}_{i}$ for the
conformal $E_{6}$ quiver we did not find a simple choice of $\zeta(x)$
reducing the curve (7.72) to the minimal degree. For small ranks
$\mathbf{v}_{i},\mathbf{w}_{i}$ we can find the reduced curve by a brute-force
minimization of the total degree of the reduced curve under
$g(x)\to\zeta(x)g(x)$. We have found different chambers in the space of
parameters $\mathbf{w}_{i},\mathbf{v}_{i}$, with a piecewise linear dependence
of the reduced degree on the $\mathbf{w}_{i}$'s or $\mathbf{v}_{i}$'s, but no
simple expression. For example, we find
$(w_{i})$ | $(v_{i})$ | reduced curve $x$-degree
---|---|---
$(0,4,0,0,0,0)$ | $(4,8,8,12,8,4)$ | 24
$(3,0,0,0,0,3)$ | $(6,6,9,12,9,6)$ | 33
$(6,0,0,0,0,0)$ | $(8,6,10,12,8,4)$ | 40
$(4,0,0,0,0,1)$ | $(6,5,8,10,7,4)$ | 31
$(6,0,0,0,0,3)$ | $(10,9,14,18,13,8)$ | 53
where the first three lines list different conformal $E_{6}$ quivers sharing
the same $\mathbf{v}_{*}=12$, and one can see that the curve of the minimal
degree $2\mathbf{v}_{*}$ is obtained in the standard assignment
$\mathbf{w}_{i}=0,i\neq 2$ associated to the degenerate limit of the affine
$E_{6}$.
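The rank assignments in the table are all conformal, i.e. they satisfy ${\mathbf{w}}_{i}=2{\mathbf{v}}_{i}-\sum_{j\sim i}{\mathbf{v}}_{j}$, with the sum over the Dynkin neighbors $j\sim i$ of the node $i$ in the Bourbaki labeling of $E_{6}$ (edges $1$–$3$, $3$–$4$, $4$–$5$, $5$–$6$, $2$–$4$). A quick script verifying this for all five rows:

```python
# Bourbaki E6 Dynkin diagram: node 2 attaches to the trivalent node 4.
edges = [(1, 3), (3, 4), (4, 5), (5, 6), (2, 4)]

rows = [  # (w, v) pairs from the table above
    ((0, 4, 0, 0, 0, 0), (4, 8, 8, 12, 8, 4)),
    ((3, 0, 0, 0, 0, 3), (6, 6, 9, 12, 9, 6)),
    ((6, 0, 0, 0, 0, 0), (8, 6, 10, 12, 8, 4)),
    ((4, 0, 0, 0, 0, 1), (6, 5, 8, 10, 7, 4)),
    ((6, 0, 0, 0, 0, 3), (10, 9, 14, 18, 13, 8)),
]

for w, v in rows:
    for i in range(1, 7):
        neighbors = [b for a, b in edges if a == i] + \
                    [a for a, b in edges if b == i]
        # Conformality: w_i = 2 v_i - sum of neighboring v_j.
        assert w[i - 1] == 2 * v[i - 1] - sum(v[j - 1] for j in neighbors)
```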
#### 7.3.2. $E_{7}$ theory
We write the spectral curve in, for example, the $\mathbf{56}$ representation
of $E_{7}$, similarly to the $E_{6}$ case. If $\mathbf{v}_{i}=Na_{i}$,
$i=1,\dots,7$, where $a_{i}$ are the Dynkin marks of $E_{7}$, then, as for the
$E_{6}$ quiver, we find that the reduced curve of the standard conformal
$E_{7}$ quiver obtained from the degenerate limit of the affine theory has
$x$-degree $12N=3\mathbf{v}_{*}$, where $\mathbf{v}_{*}=\mathbf{v}_{4}=4N$ is
the rank at the trivalent node. The standard
$E_{7}$ quiver spectral curve is hence realized as a specialization of the
spectral curve for the $A_{55}$ quiver with ranks $(12N,12N,\dots,12N)$, or a
Hitchin system with $56$ punctures on the $t$-plane associated with the
weights of $\mathbf{56}$.
#### 7.3.3. $E_{8}$ theory
For $E_{8}$ the smallest representation is the adjoint $\mathbf{248}$. The
reduced curve in the adjoint representation for the standard conformal $E_{8}$
quiver obtained from the degenerate limit of the affine theory has $x$-degree
$60N=10\mathbf{v}_{*}$, where $\mathbf{v}_{*}=6N$ is the rank at the trivalent
node. Hence the standard conformal $E_{8}$ quiver spectral curve is realized
as a specialization of the spectral curve for the $A_{247}$ quiver with ranks
$60(N,N,\dots,N)$, or a Hitchin system with $240$ punctures on the $t$-plane
associated with the nonzero weights of the adjoint $\mathbf{248}$.
### 7.4. Class II theories of $A$ type and class II* theories
Let us start with the simplest nontrivial examples, and then pass onto a
general case.
#### 7.4.1. Class II $\widehat{A}_{1}$ theory
For the class II theory we shift the arguments of ${\mathscr{Y}}_{i}(x)$ by
${\mu}_{i}$ to get rid of the bi-fundamental masses.
Let $g(x)\in\widehat{SL_{2}}$:
$g(x)={\mathfrak{q}}_{0}^{-{\widehat{\lambda}}_{0}^{\vee}}{\mathfrak{q}}_{1}^{-{\widehat{\lambda}}_{1}^{\vee}}{\mathscr{Y}}_{0}(x)^{{\widehat{\alpha}}_{0}^{\vee}}{\mathscr{Y}}_{1}(x)^{\widehat{\alpha}_{1}^{\vee}}$
(7.75)
We have: ${\mathfrak{q}}={\mathfrak{q}}_{0}{\mathfrak{q}}_{1}$,
$g(x)^{\alpha_{1}}=\frac{{\mathscr{Y}}_{1}^{2}}{{\mathfrak{q}}_{1}{\mathscr{Y}}_{0}^{2}},\quad
g(x)^{-\delta}={\mathfrak{q}},\quad
g(x)^{\widehat{\lambda}_{0}}={\mathscr{Y}}_{0}(x)$ (7.76)
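For convenience, the pairings behind (7.76) can be spelled out. With the standard $\widehat{sl_{2}}$ conventions $\widehat{\alpha}_{0}^{\vee}=K-\alpha_{1}^{\vee}$, $\alpha_{1}(K)=0$, $\alpha_{1}(\widehat{\lambda}_{j}^{\vee})=\delta_{1j}$, $\delta(\widehat{\alpha}_{j}^{\vee})=0$, $\delta(\widehat{\lambda}_{j}^{\vee})=1$ (which we assume here; cf. the appendix), one checks term by term:

```latex
g(x)^{\alpha_{1}}
 =\mathfrak{q}_{0}^{-\alpha_{1}(\widehat{\lambda}_{0}^{\vee})}\,
  \mathfrak{q}_{1}^{-\alpha_{1}(\widehat{\lambda}_{1}^{\vee})}\,
  \mathscr{Y}_{0}(x)^{\alpha_{1}(\widehat{\alpha}_{0}^{\vee})}\,
  \mathscr{Y}_{1}(x)^{\alpha_{1}(\widehat{\alpha}_{1}^{\vee})}
 =\mathfrak{q}_{1}^{-1}\,\frac{\mathscr{Y}_{1}(x)^{2}}{\mathscr{Y}_{0}(x)^{2}}\,,
\qquad
g(x)^{-\delta}
 =\mathfrak{q}_{0}^{\delta(\widehat{\lambda}_{0}^{\vee})}\,
  \mathfrak{q}_{1}^{\delta(\widehat{\lambda}_{1}^{\vee})}
 =\mathfrak{q}_{0}\mathfrak{q}_{1}=\mathfrak{q}
```

The last relation of (7.76) follows likewise, since $\widehat{\lambda}_{0}$ pairs to $1$ with $\widehat{\alpha}_{0}^{\vee}$ and to $0$ with the remaining exponents.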
The normalized $\widehat{sl_{2}}$ characters (6.22) of the fundamental
representations ${\widehat{R}}_{0},{\widehat{R}}_{1}$ are equal to
$\displaystyle{{\mathscr{X}}}_{0}({\mathscr{Y}}(x),{\bf\mathfrak{q}})=\,\frac{{\mathscr{Y}}_{0}(x)}{\phi({\mathfrak{q}})}\theta_{3}\left(\frac{{\mathscr{Y}}_{1}(x)^{2}}{{\mathfrak{q}}_{1}{\mathscr{Y}}_{0}(x)^{2}};{\mathfrak{q}}^{2}\right)$
(7.77)
$\displaystyle{{\mathscr{X}}}_{1}({\mathscr{Y}}(x),{\bf\mathfrak{q}})=\,\left(\frac{{\mathfrak{q}}_{1}}{\mathfrak{q}_{0}}\right)^{\frac{1}{4}}\frac{{\mathscr{Y}}_{0}(x)}{\phi({\mathfrak{q}})}\theta_{2}\left(\frac{{\mathscr{Y}}_{1}(x)^{2}}{{\mathfrak{q}}_{1}{\mathscr{Y}}_{0}(x)^{2}};{\mathfrak{q}}^{2}\right)$
(see the appendix for our conventions on elliptic functions). The characters
(7.77) are invariant under the Weyl transformations
$\displaystyle{\mathscr{Y}}_{0}\to{\mathfrak{q}}_{0}{\mathscr{Y}}_{0}^{-1}{\mathscr{Y}}_{1}^{2}$
(7.78)
$\displaystyle{\mathscr{Y}}_{1}\to{\mathfrak{q}}_{1}{\mathscr{Y}}_{1}^{-1}{\mathscr{Y}}_{0}^{2}$
and therefore we can equate them to the polynomials:
$\displaystyle{{\mathscr{X}}}_{0}({\mathscr{Y}}(x),{\bf\mathfrak{q}})=T_{0}(x)$
(7.79) $\displaystyle\qquad\qquad
T_{0,0}=\frac{\theta_{3}\left({\mathfrak{q}}_{1}^{-1};{\mathfrak{q}}^{2}\right)}{\phi({\mathfrak{q}})}$
$\displaystyle{{\mathscr{X}}}_{1}({\mathscr{Y}}(x),{\bf\mathfrak{q}})=T_{1}(x)$
$\displaystyle\qquad\qquad
T_{1,0}=\left(\frac{{\mathfrak{q}}_{1}}{\mathfrak{q}_{0}}\right)^{\frac{1}{4}}\frac{\theta_{2}\left({\mathfrak{q}}_{1}^{-1};{\mathfrak{q}}^{2}\right)}{\phi({\mathfrak{q}})}$
The values of characters (7.77) and ${\mathfrak{q}}_{0},{\mathfrak{q}}_{1}$
define ${\mathscr{Y}}_{0}$ and ${\mathscr{Y}}_{1}$ up to an affine Weyl
transformation. To recover ${\mathscr{Y}}_{0}$ and ${\mathscr{Y}}_{1}$ we
invert the relations (7.77):
$\displaystyle{\mathscr{Y}}_{1}(x)={\mathfrak{q}}_{1}^{\frac{1}{2}}{\mathscr{Y}}_{0}(x)t$
(7.80)
$\displaystyle{\mathscr{Y}}_{0}(x)=\frac{\phi({\mathfrak{q}})}{\theta_{3}(t^{2};{\mathfrak{q}}^{2})}T_{0}(x)$
and express
$\left(\frac{\mathfrak{q}_{0}}{{\mathfrak{q}}_{1}}\right)^{\frac{1}{4}}\frac{{\theta}_{3}(t^{2};{\mathfrak{q}}^{2})}{{\theta}_{2}(t^{2};{\mathfrak{q}}^{2})}=\frac{T_{0}(x)}{T_{1}(x)}$
(7.81)
Actually, the ratio
${\xi}=\left(\frac{\mathfrak{q}_{0}}{{\mathfrak{q}}_{1}}\right)^{\frac{1}{4}}\frac{{\theta}_{3}(t^{2};{\mathfrak{q}}^{2})}{{\theta}_{2}(t^{2};{\mathfrak{q}}^{2})}$
is a meromorphic function on $\mathscr{E}$ with two first order poles at
$t=\pm\mathrm{i}$ and two simple zeroes at $t=\pm\mathrm{i}\mathfrak{q}$.
Therefore
$\xi={\xi}_{\infty}\frac{X(t,\mathfrak{q})-X_{0}}{X(t,\mathfrak{q})-X_{1}},\quad
X_{0}:=X(\mathrm{i}\mathfrak{q},\mathfrak{q}),\quad
X_{1}:=X(\mathrm{i},\mathfrak{q})$ (7.82)
with
${\xi}_{\infty}=\left(\frac{\mathfrak{q}_{0}}{{\mathfrak{q}}_{1}}\right)^{\frac{1}{4}}\frac{\theta_{3}(1,\mathfrak{q}^{2})}{\theta_{2}(1,\mathfrak{q}^{2})}$
(7.83)
and the explicit $\mathfrak{q}$-series for $X(t,\mathfrak{q})$ is given in
(LABEL:eq:weierx1),(LABEL:eq:weierx2). Hence, the algebraic Seiberg-Witten
curve $C_{u}$ describing the $\widehat{A}_{1}$ theory is a two-fold cover of
the rational curve ${\Sigma}_{u}$
$({\xi}_{\infty}T_{1}(x)-T_{0}(x))X-({\xi}_{\infty}T_{1}(x)X_{0}-T_{0}(x)X_{1})=0$
(7.84)
defined by the Weierstraß cubic (LABEL:eq:wxy). There are $4N$ branch points
of the $2:1$ cover $C_{u}\to{\Sigma}_{u}$:
$\displaystyle{\xi}_{\infty}T_{1}(x_{\infty,{\mathbf{a}}})-T_{0}(x_{\infty,{\mathbf{a}}})=0$
(7.85)
$\displaystyle({\xi}_{\infty}T_{1}(x_{{\alpha},{\mathbf{a}}})-T_{0}(x_{{\alpha},{\mathbf{a}}}))e_{\alpha}-({\xi}_{\infty}T_{1}(x_{{\alpha},{\mathbf{a}}})X_{0}-T_{0}(x_{{\alpha},{\mathbf{a}}})X_{1})=0$
$\displaystyle{\alpha}=1,2,3,\qquad{\mathbf{a}}=1,\ldots,N$
which can be split into $2$ groups of $N$ pairs, corresponding to the cycles
$A_{i{\mathbf{a}}}$ with $i=0,1$, e.g. $A_{0,{\mathbf{a}}}$ is a small circle
around the cut which connects $x_{1,{\mathbf{a}}}$ to $x_{2,{\mathbf{a}}}$,
while $A_{1,{\mathbf{a}}}$ is a small circle around the cut which connects
$x_{3,{\mathbf{a}}}$ to $x_{\infty,{\mathbf{a}}}$. The special coordinates are
computed by the periods of
$dS_{-}=x\,d{\log}\,(t)=x\,\frac{dX}{Y}$
The curve $C_{u}$ is the spectral curve. The cameral curve ${\mathcal{C}}_{u}$
is a $\mathbb{Z}$-cover of the spectral curve $C_{u}$, given by the same
equations but now with $t\in{\mathbb{C}}^{\times}$ as opposed to
$t\in{\mathscr{E}}$. On the cameral curve ${\mathcal{C}}_{u}$ we have the
second differential
$dS_{+}=x\,d{\log}\,{\theta}_{3}(t^{2};{\mathfrak{q}})$
which would be a multi-valued differential on the spectral curve $C_{u}$,
whose periods are defined up to the periods of $dS_{-}$, similarly to the
polylogarithm motives [Cartier:1987].
#### 7.4.2. Class II* $\widehat{A}_{0}$ theory
This is a (noncommutative) $U(1)$ ${\mathcal{N}}=2^{*}$ theory. It was solved
in [Nekrasov:2003rj] by a similar method. There is only one amplitude
${\mathscr{Y}}(x)={\mathscr{Y}}_{0}(x)$, with the single interval $I$ as its
branch cut, and the single function
$t(x)\equiv
t_{0}(x)=\frac{{\mathscr{Y}}(x)}{{\mathscr{Y}}(x+{\mathfrak{m}})}$
with two branch cuts $I$ and $I-{\mathfrak{m}}$. Crossing the $I$ cut maps
$t(x)\mapsto{\mathfrak{q}}t(x-{\mathfrak{m}})$. Crossing the cut
$I-{\mathfrak{m}}$ has the opposite effect:
$t(x)\mapsto{\mathfrak{q}}^{-1}t(x+{\mathfrak{m}})$. The extended functions
are defined by
$t_{j}(x)={\mathfrak{q}}^{j}t(x-j{\mathfrak{m}})$
The analytically continued function $t(x)$ has cuts at
$I+{\mathfrak{m}}\mathbb{Z}$. The sheets of the Riemann surface of $t(x)$ are
labeled by $j\in\mathbb{Z}$, so that on the sheet $j$ the cuts are at
$I-j{\mathfrak{m}}$, and $I-(j+1){\mathfrak{m}}$. Upon crossing
$I+j{\mathfrak{m}}$ the $t_{j}(x)$ function transforms to $t_{j+1}(x)$
function. As $x\to\infty$ on this sheet the corresponding branch of $t(x)$
approaches ${\mathfrak{q}}^{j}$. These conditions uniquely fix the inverse
function to be the logarithmic derivative of $\theta_{1}$:
$x=a+{\mathfrak{m}}\,t\frac{d}{dt}{\rm log}\,{\theta}_{1}(t;{\mathfrak{q}})$
(7.86)
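The quasi-periodicity behind (7.86) can be checked numerically. Assuming the multiplicative convention ${\theta}_{1}(t;{\mathfrak{q}})\propto(t^{1/2}-t^{-1/2})\prod_{n\geq 1}(1-{\mathfrak{q}}^{n}t)(1-{\mathfrak{q}}^{n}/t)$ (an assumption on our part; signs depend on conventions), the logarithmic derivative $L(t)=t\frac{d}{dt}\log\theta_{1}(t;\mathfrak{q})$ satisfies $L({\mathfrak{q}}t)=L(t)-1$, so $x({\mathfrak{q}}t)=x(t)-{\mathfrak{m}}$, consistent with the monodromy $t(x)\mapsto{\mathfrak{q}}t(x-{\mathfrak{m}})$ across the cut $I$. A sketch:

```python
def L(t, q, terms=200):
    """t * d/dt log theta_1(t; q), assuming the multiplicative convention
    theta_1(t;q) ~ (t^{1/2} - t^{-1/2}) * prod_{n>=1} (1 - q^n t)(1 - q^n / t)."""
    # the prefactor contributes t d/dt log(t^{1/2} - t^{-1/2}) = (t + 1) / (2(t - 1))
    val = 0.5 * (t + 1) / (t - 1)
    for n in range(1, terms):
        qn = q**n
        val += -(qn * t) / (1 - qn * t) + (qn / t) / (1 - qn / t)
    return val

q, m, a = 0.3, 0.25, 1.0          # sample values, ours
x = lambda t: a + m * L(t, q)      # the inverse function (7.86)
t = 0.8 + 0.4j                     # generic point away from the zeros of theta_1
assert abs(L(q * t, q) - (L(t, q) - 1)) < 1e-10   # L(qt) = L(t) - 1
assert abs(x(q * t) - (x(t) - m)) < 1e-10         # x shifts by -m when t -> qt
```

The shift by $-1$ telescopes out of the product representation, which is why the truncated sum already reproduces it to machine precision.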
#### 7.4.3. Class II $A_{r}$ theories
In order to solve the general rank $r$ theory, it is convenient to form a
linear combination of fundamental characters of ${\widehat{A}}_{r}$.
Ultimately we would like to define a regularized version of the characteristic
polynomial of $g(x)$, where, as in the general case, after the shift of the
arguments of ${\mathscr{Y}}_{i}(x)\to{\mathscr{Y}}_{i}(x+{\mu}_{i})$:
$g(x)=\prod_{i=0}^{r}{\mathfrak{q}}_{i}^{-{\widehat{\lambda}}_{i}^{\vee}}{\mathscr{Y}}_{i}(x)^{{\widehat{\alpha}}_{i}^{\vee}}$
(7.87)
Using $t_{i}(x)=g(x)^{e_{i}}$ (see the appendix), we compute:
$t_{i}(x)={\check{t}}_{i}\,\frac{{Y}_{i}(x)}{{Y}_{i-1}(x)},\quad
i=1,\dots,r+1$ (7.88)
where we extended the amplitude functions ${\mathscr{Y}}_{j}(x)$, defined for
$j=0,\ldots,r$, to all $j\in\mathbb{Z}$ by the periodicity
${Y}_{j+(r+1)}(x)={Y}_{j}(x)$, with ${Y}_{j}(x)={\mathscr{Y}}_{j}(x)$ for
$j=0,\ldots,r$, and where
${\bf t}(x)=(t_{1}(x),t_{2}(x),\ldots,t_{r+1}(x))$
represents an element of the maximal torus of $SL(r+1,{\mathbb{C}})$, i.e.
$\prod_{i=1}^{r+1}t_{i}(x)=1.$
The $\check{t}_{i}$ are the asymptotic values at $x\to\infty$ of $t_{i}(x)$
and are given by
$\check{t}_{i}=(\mathfrak{q}_{i}\dots\mathfrak{q}_{r})^{-1}(\mathfrak{q}_{1}\mathfrak{q}_{2}^{2}\dots\mathfrak{q}_{r}^{r})^{\frac{1}{r+1}},\quad
i=1\dots r+1$ (7.89)
and
$g(x)^{-\delta}={\mathfrak{q}},\quad
g(x)^{\widehat{\lambda}_{0}}={\mathscr{Y}}_{0}(x).$ (7.90)
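The asymptotics (7.89) can be verified numerically: the $\check{t}_{i}$ multiply to $1$, as required for an element of the maximal torus of $SL(r+1,\mathbb{C})$, and obey $\check{t}_{i+1}=\mathfrak{q}_{i}\check{t}_{i}$ for $i=1,\dots,r$, the recursion used in the class II* discussion below. A minimal sketch (the function name and sample couplings are ours):

```python
import math

def t_asymptotic(qs):
    """Asymptotic eigenvalues (7.89); qs = [q_1, ..., q_r] for the A_r quiver."""
    r = len(qs)
    # common factor (q_1 q_2^2 ... q_r^r)^{1/(r+1)}
    common = math.prod(q**j for j, q in enumerate(qs, start=1)) ** (1.0 / (r + 1))
    # t_i = (q_i q_{i+1} ... q_r)^{-1} * common, i = 1, ..., r+1 (empty tail = 1)
    return [common / math.prod(qs[i:]) for i in range(r + 1)]

qs = [0.2, 0.5, 0.3]                          # r = 3, i.e. the SL(4) torus
ts = t_asymptotic(qs)
assert abs(math.prod(ts) - 1.0) < 1e-12       # element of the maximal torus of SL(r+1)
for i, q in enumerate(qs):
    assert abs(ts[i + 1] / ts[i] - q) < 1e-12  # recursion t_{i+1} = q_i t_i
```

The product constraint is exact: the tails $(q_{i}\dots q_{r})^{-1}$ contribute $\prod_{j}q_{j}^{-j}$, which cancels against the $r+1$ copies of the common factor.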
Now we shall explore the relation between the conjugacy classes in the
Kac-Moody group and the holomorphic bundles on the elliptic curve
$\mathscr{E}$. We will consider a family of bundles on $\mathscr{E}$
parametrized by the $\mathbf{C}_{\left\langle x\right\rangle}$-plane, as e.g.
in [Friedman:1997ih]. We start with individual bundles.
Let $V$ be a rank $r+1$ polystable vector bundle of degree zero over the
elliptic curve ${\mathscr{E}}=\mathbb{C}^{\times}/\mathfrak{q}^{\mathbb{Z}}$,
with trivial determinant,
${\rm det}V\approx{\mathcal{O}}_{\mathscr{E}}$
Such a bundle always splits as a direct sum of line bundles
$V=\bigoplus_{i=1}^{r+1}L_{i}\ .$
Each summand is a degree zero line bundle $L_{i}$ which can be represented as
$L_{i}={\mathcal{O}}(p_{0})^{-1}{\mathcal{O}}(t_{i})$ where ${\mathcal{O}}(p)$
is the degree one line bundle whose divisor is a single point
$p\in{\mathscr{E}}$ and
$p_{0}$ denotes the point $t=1$ corresponding to the identity in the abelian
group law on the elliptic curve $\mathscr{E}$. A meromorphic section $s_{i}$
of $L_{i}$ with a simple pole at $t=1$ and zero at $t=t_{i}$ can be written
explicitly using the theta-functions:
$s_{i}(t)=\frac{\theta(t/t_{i};\mathfrak{q})}{\theta(t;\mathfrak{q})}$ (7.91)
and is unique up to a multiplicative constant. To each degree zero vector
bundle $V$ with the divisor
$D_{V}=-(r+1)p_{0}+t_{1}+\dots+t_{r+1}$
of ${\rm det}V$ we associate a projectively unique section $s$ of its
determinant ${\det}V$ which has zeroes at $t_{1},\dots,t_{r+1}$ and a pole of
order at most $r+1$ at $t=1$:
$s(t;{\bf
t})=\prod_{i=1}^{r+1}\frac{\theta(t/t_{i};\mathfrak{q})}{\theta(t;\mathfrak{q})}$
(7.92)
where we explicitly indicate the $\bf t$ dependence of the section $s$. Now
set $t_{i}=t_{i}(x)$ given by (7.88). The meromorphic sections
$s(t;{\mathbf{t}}(x);\mathfrak{q})$ can be expanded in terms of the theta-
functions $\Theta_{j}(\mathscr{Y}_{0}(x);{\mathbf{t}};\mathfrak{q})$ and
characters of $\widehat{A}_{r}$ (see (LABEL:eq:Archar), (LABEL:eq:Ar-theta)) as
follows
$\mathscr{Y}_{0}(x)\prod_{i=1}^{r+1}\frac{\theta(t/t_{i}(x);\mathfrak{q})}{\theta(t,{\mathfrak{q}})}=\\\
\sum_{i=0}^{r}\mathfrak{q}^{-\frac{i}{2}}\mathfrak{q}^{\frac{i^{2}}{2(r+1)}}\Theta_{i}(\mathscr{Y}_{0}(x);{\mathbf{t}(x)};\mathfrak{q})\phi_{i}(t;\mathfrak{q})=\\\
=\phi(\mathfrak{q})^{r}\sum_{i=0}^{r}\chi_{i}(\mathscr{Y}_{0}(x);{\mathbf{t}(x)};\mathfrak{q})\phi_{i}(t;\mathfrak{q})$
(7.93)
where the functions $\phi_{i}(t;\mathfrak{q})$ are normalized meromorphic
elliptic functions defined in the appendix LABEL:subsubsec:phi. Hence we find
from (6.39) and (LABEL:eq:T-matrix) that the section $s(t,x)$ (7.92) obeys
${\mathscr{Y}}_{0}(x)s(t,x)=\phi(\mathfrak{q})^{r}\sum_{i=0}^{r}\chi_{i}(\mathscr{Y}_{0}(x);{\mathbf{t}(x)};\mathfrak{q})M_{i{\tilde{j}}}(\mathfrak{q})\tilde{\phi}_{{\tilde{j}}}(t;\mathfrak{q})$
(7.94)
where ${\tilde{\phi}}_{\tilde{j}}(t;{\mathfrak{q}})$ denotes the Weierstraß
monomials of Weierstraß elliptic functions $X(t,\mathfrak{q})$ and
$Y(t,\mathfrak{q})$; and $M_{i{\tilde{j}}}$ is a certain modular matrix as
defined in LABEL:subsubsec:phi. Recalling (6.5) that the characters
$(\chi_{i}(\mathscr{Y}_{0}(x);{\mathbf{t}(x)};\mathfrak{q}))$ evaluated on the
solutions $(\mathscr{Y}_{i}(x))$ are polynomials in $x$, from (6.22), (6.28) we
get
$\frac{{\mathscr{Y}}_{0}(x)s(t,x)}{\phi(\mathfrak{q})^{r}}=\sum_{i=0}^{r}\left(\prod_{j=0}^{r}\mathfrak{q}_{j}^{-\widehat{\lambda}_{i}(\widehat{\lambda}_{j}^{\vee})}\right)T_{i}(x)\sum_{{\tilde{j}}}M_{i\tilde{j}}(\mathfrak{q})\tilde{\phi}_{{\tilde{j}}}(t;\mathfrak{q})$
(7.95)
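As a consistency check of the building block (7.91), the factor of automorphy of the sections $s_{i}$ can be confirmed numerically. Assuming the multiplicative convention $\theta(t;\mathfrak{q})=\prod_{n\geq 0}(1-\mathfrak{q}^{n}t)(1-\mathfrak{q}^{n+1}/t)$ (conventions differ; this one has zeroes exactly at $t\in\mathfrak{q}^{\mathbb{Z}}$), one has $s_{i}(\mathfrak{q}t)=t_{i}\,s_{i}(t)$, as befits a section of the degree zero bundle $L_{i}$:

```python
def theta(t, q, terms=200):
    """Truncated multiplicative theta-function; assumed convention:
    theta(t;q) = prod_{n>=0} (1 - q^n t)(1 - q^{n+1}/t), zeroes at t in q^Z."""
    val = 1 + 0j
    for n in range(terms):
        val *= (1 - q**n * t) * (1 - q**(n + 1) / t)
    return val

def s(t, t_i, q):
    """The meromorphic section (7.91): simple pole at t = 1, zero at t = t_i."""
    return theta(t / t_i, q) / theta(t, q)

q, t_i, t = 0.3, 0.7 + 0.2j, 1.1 + 0.5j       # sample values, ours
# crossing the fundamental annulus: s_i(q t) = t_i * s_i(t)
assert abs(s(q * t, t_i, q) / s(t, t_i, q) - t_i) < 1e-10
```

The check rests on $\theta(\mathfrak{q}t;\mathfrak{q})=-t^{-1}\theta(t;\mathfrak{q})$ in this convention; the $-t^{-1}$ factors cancel between numerator and denominator, leaving $t_{i}$.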
The section $s(t,x)$ vanishes at the $r+1$ points $t_{1}(x),\dots,t_{r+1}(x)$
for each $x\in\mathbf{C}_{\left\langle x\right\rangle}$, and hence defines the
$(r+1)$-fold spectral cover of the $\mathbf{C}_{\left\langle x\right\rangle}$
plane by the equation
$R(t,x)=0$ (7.96)
where $R(t,x)$ is the right-hand side of (7.95). The curve (7.96) coincides
with the curve in [Witten:1997sc] constructed by lifting to $M$-theory the
IIA brane arrangement realizing the elliptic model with $\mathfrak{m}=0$.
#### 7.4.4. Class II* theory
Recall that in (6.37) we defined an infinite set of functions $Y_{i}(x)$,
$i\in\mathbb{Z}$. The analogue of the formula (1.2) is the matrix
$g(x)\in{\widehat{GL}_{\infty}}$ (cf. (6.39)):
$g(x)=Y_{0}(x)^{K}\times{\rm
diag}\,\left(\,t_{i}(x)\,\right)_{i\in{\mathbb{Z}}},\qquad
t_{i}(x)={\check{t}}_{i}\,\frac{Y_{i}(x)}{Y_{i-1}(x)}$ (7.97)
where ${\check{t}}_{i}$, $i\in\mathbb{Z}$ solve
${\check{t}}_{i+1}={\mathfrak{q}}_{i\,{\rm mod}\,(r+1)}{\check{t}}_{i},$
and are normalized as in (6.40)
$\prod_{j=1}^{r+1}{\check{t}}_{j}=1$
so that for $i=1,\ldots,r+1$ the ${\check{t}}_{i}$ coincide with those in
(6.40), and
${\check{t}}_{i+b(r+1)}={\check{t}}_{i}{\mathfrak{q}}^{b},\qquad{\check{t}}^{[i+b(r+1)]}={\check{t}}^{[i]}\left({\mathfrak{q}}^{r+1}\right)^{\frac{b(b-1)}{2}}$
(7.98)
The fundamental characters of $\widehat{GL}_{\infty}$ evaluated on $g(x)$,
${\chi}_{i}(g(x))$ are associated with representations ${\mathcal{R}}_{i}$ of
$\widehat{GL}_{\infty}$ with the highest weight taking value (cf.
(LABEL:eq:hwgli)):
$g(x)^{\tilde{\lambda}_{i}}=\ Y_{i}(x)\ {\check{t}}^{[i]}=\ Y_{0}(x)\
t(x)^{[i]}$ (7.99)
The characters are given by the infinite sums over all partitions
${\lambda}=({\lambda}_{1}\geq{\lambda}_{2}\geq\ldots\geq{\lambda}_{{\ell}({\lambda})}>0)$
and so are the normalized invariants
$\displaystyle{\mathscr{X}}_{i}(\\{Y_{j}(x)\\},\mathfrak{q})=$
$\displaystyle\,\frac{1}{{\check{t}}^{[i]}}{\chi}_{i}(g(x))=$ (7.100)
$\displaystyle\sum_{{\lambda}}\prod_{j=1}^{{\ell}({\lambda})}\left({\mathfrak{q}}_{i-j+1}^{[{\lambda}_{j}]}\frac{Y_{i+{\lambda}_{j}-j+1}(x)}{Y_{i+\lambda_{j}-j}(x)}\right)Y_{i-{\ell}({\lambda})}(x)$
$\displaystyle\qquad=Y_{i}(x)+{\mathfrak{q}}_{i}\frac{Y_{i+1}(x)Y_{i-1}(x)}{Y_{i}(x)}+\ldots$
where we use the notation of section 1.2.
The invariant ${\mathscr{X}}_{i}$ in (7.100) is a convergent series for
$|{\mathfrak{q}}_{i}|<1$, like a theta-series, provided $t_{i}(x)$ is uniformly
bounded. In fact, for the periodic chain of arguments, i.e. for
$Y_{i}(x)=Y_{i+r+1}(x)$, the $\mathfrak{gl}_{\infty}$ character (7.100) reduces
to the usual affine character of $\widehat{\mathfrak{g}\mathfrak{l}}_{r+1}$. The
convergence of ${\mathscr{X}}_{i}$ in the class II* case is more subtle. We
shall comment on this below. For the moment let us view the invariants as the
formal power series in $\mathfrak{q}$ with coefficients in Laurent polynomials
in $Y_{i}(x)$.
For the class II* theory the extended amplitudes $Y_{i}(x)$ are quasi-periodic
in $i$, cf. (6.38), so
${{\mathscr{X}}}_{i+r+1}\left(\\{Y_{j}(x)\\},\mathfrak{q}\right)={{\mathscr{X}}}_{i}\left(\\{Y_{j}(x-(r+1){\mathfrak{m}})\\},\mathfrak{q}\right)$
(7.101)
The cameral curve ${\mathcal{C}}_{u}$ for the class II* $A_{r}$ theory is
defined by the system of $r+1$ functional equations
${\mathscr{X}}_{i}(\,\\{\,Y_{j}(x)\\},\mathfrak{q})=T_{i}(x),\quad
i=0,\dots,r$ (7.102)
with
$\displaystyle
T_{i}(x)=T_{i,0}x^{N}+T_{i,1}x^{N-1}+\sum_{{\mathbf{a}}=2}^{N}u_{i,{\mathbf{a}}}x^{N-\mathbf{a}}\,,$
(7.103) $\displaystyle\qquad\qquad
T_{i,0}=\sum_{\lambda}\prod_{j=1}^{{\ell}({\lambda})}{\mathfrak{q}}_{i-j+1}^{[\lambda_{j}]}$
Let us now describe the class II* analogue of the spectral curve, and find its
realization in terms of a version of Hitchin’s system. Along the way we shall
obtain an alternative derivation of (7.95), with the added benefit of its
Hitchin form.
We form the generating function of ${\mathscr{X}}_{i}$’s and study its
automorphic properties. The idea is to regularize the infinite product
$\prod_{i\in\mathbb{Z}}(1-t_{i}(x)/t)/(1-{\check{t}}_{i}/t)$
while keeping the same set of zeroes and poles. Thus, we define
$R(t,x)=\frac{Y_{0}(x)}{D_{0}(t;{\bf\mathfrak{q}})}\prod_{k=1}^{\infty}(1-t_{k}(x)t^{-1})(1-t\,t_{1-k}(x)^{-1})$
(7.104)
where
$\displaystyle
D_{0}(t;{\bf\mathfrak{q}})=\prod_{k=1}^{\infty}(1-{\check{t}}_{k}t^{-1})(1-t\,{\check{t}}_{1-k}^{-1})$
(7.105)
$\displaystyle\qquad\qquad\qquad=\prod_{i=1}^{r+1}\frac{{\theta}(t/{\check{t}}_{i};{\mathfrak{q}})}{{\phi}({\mathfrak{q}})}$
First of all, given that at large $x$ the eigenvalues $t_{k}(x)$ approach
${\check{t}}_{k}$, which, in turn, behave as ${\mathfrak{q}}^{\frac{k}{r+1}}$,
we expect (7.104) to define a convergent product, at least for large enough
$x$.
Secondly, let us check that (7.104) is ${}^{i}{\mathcal{W}}$-invariant. Let
$i=0,\ldots,r$, ${\mathbf{a}}=1,\ldots,N$. While crossing the
$I_{i,{\mathbf{a}}}$ cut the ‘‘eigenvalue’’ $t_{i}(x)$ maps to $t_{i+1}(x)$,
which, in the case $i\geq 1$ or $i<0$, leaves (7.104) manifestly invariant. For
$i=0$ several factors in $R(t,x)$ transform, altogether conspiring to make it
invariant:
$\displaystyle
Y_{0}(x)\mapsto{\mathfrak{q}}_{0}Y_{-1}(x)Y_{1}(x)/Y_{0}(x)=\left(t_{1}(x)/t_{0}(x)\right)Y_{0}(x),$
(7.106) $\displaystyle(1-t_{1}(x)t^{-1})(1-t\,t_{0}(x)^{-1})\mapsto$
$\displaystyle\qquad(1-t_{0}(x)t^{-1})(1-t\,t_{1}(x)^{-1})=\frac{t_{0}(x)}{t_{1}(x)}(1-t_{1}(x)t^{-1})(1-t\,t_{0}(x)^{-1})$
Thirdly, let us introduce the analogues of the spectral determinants for all
fundamental representations ${\mathcal{R}}_{i}$:
$\displaystyle{\Delta}_{i}(t,x)=\frac{Y_{i}(x)}{D_{i}(t;{\bf\mathfrak{q}})}\prod_{k=i+1}^{\infty}(1-t_{k}(x)t^{-1})(1-t\,t_{2i+1-k}(x)^{-1})$
(7.107) $\displaystyle\qquad\qquad
D_{i}(t;{\bf\mathfrak{q}})=\prod_{k=i+1}^{\infty}(1-{\check{t}}_{k}t^{-1})(1-t\,{\check{t}}_{2i+1-k}^{-1})$
Using
$D_{i+1}(t;{\mathfrak{q}})=-t{\check{t}}_{i+1}^{-1}D_{i}(t;{\mathfrak{q}})$,
$Y_{i+1}(x)=t_{i+1}(x){\check{t}}_{i+1}^{-1}Y_{i}(x)$ we derive:
${\Delta}_{i}(t,x)=R(t,x)$ for all $i\in\mathbb{Z}$.
Then, the quasi-periodicity (7.101), (7.98) implies
$R({\mathfrak{q}}t,x+{\mathfrak{m}})={\Delta}_{r+1}(t,x)=R(t,x)$ (7.108)
Given the large $x$ asymptotics of $Y_{0}(x)$ and $t_{i}(x)$, we conclude:
$R(t,x)=x^{N}+\sum_{k=1}^{N}{\delta}_{k}(t)x^{N-k}$ (7.109)
where ${\delta}_{k}(t)$ are quasi-elliptic functions with first-order poles at
$t={\check{t}}_{i}$, $i=0,\ldots,r$, on the elliptic curve
${\mathscr{E}}={\mathbb{C}}^{\times}/{\mathfrak{q}}^{\mathbb{Z}}$. Indeed, the
poles come from the $D_{0}(t;{\mathfrak{q}})$ denominator, while the quasi-
ellipticity of $\delta_{k}(t)$ follows from (7.108):
${\delta}_{i}({\mathfrak{q}}t)-{\delta}_{i}(t)={\mathfrak{m}}^{i}+{\rm
polynomial\ in}\ {\mathfrak{m}}\ {\rm linear\ in}\
{\delta}_{k}({\mathfrak{q}}t),\qquad k<i$ (7.110)
Now use (LABEL:eq:fermch), (LABEL:eq:chari) to rewrite $R(t,x)$ as:
$\displaystyle R(t,x)\,=$
$\displaystyle\frac{\sum_{i\in\mathbb{Z}}(-t)^{i}{\check{t}}^{[i]}{{\mathscr{X}}}_{i}\left(\\{Y_{j}(x)\\},\mathfrak{q}\right)}{D_{0}(t;{\bf\mathfrak{q}})}$
(7.111)
$\displaystyle\qquad\qquad=\frac{1}{D_{0}(t;{\bf\mathfrak{q}})}\sum_{i\in\mathbb{Z}}(-t)^{i}{\check{t}}^{[i]}T_{i}(x)$
where we extended the definition of gauge polynomials $T_{i}(x)$ to
$i\in\mathbb{Z}$ by quasi-periodicity implied by (7.101):
$T_{i+r+1}(x)=T_{i}(x-{\mathfrak{m}})$ (7.112)
Armed with (7.112), (7.98) we reduce (7.111) to a finite sum: let
$r(t,x)=\sum_{i=0}^{r}(-t)^{i}{\check{t}}^{[i]}T_{i}(x)\,,$
then (cf. (LABEL:eq:phi_p))
$\displaystyle
R(t,x)=\frac{1}{D_{0}(t;{\mathfrak{q}})}\sum_{b\in{\mathbb{Z}}}r(t,x-b{\mathfrak{m}})\left((-t)^{b}{\mathfrak{q}}^{\frac{b(b-1)}{2}}\right)^{r+1}$
(7.113)
$\displaystyle\qquad\qquad=\frac{1}{D_{0}(t;{\mathfrak{q}})}\left({\theta}\left(-(-t)^{r+1};{\mathfrak{q}}^{r+1}\right)\ast_{{\mathfrak{m}}}r(t,x)\right)$
where the $\ast_{\hbar}$-product is defined by the usual Moyal formula:
$\left(f\ast_{\hbar}g\right)(t,x)=e^{\hbar\frac{\partial^{2}}{{\partial}{\xi}_{1}{\partial}{\eta}_{2}}-\hbar\frac{\partial^{2}}{{\partial}{\xi}_{2}{\partial}{\eta}_{1}}}\Big|_{{\xi}={\eta}=0}\,f(t+{\eta}_{1},x+{\xi}_{1})g(t+{\eta}_{2},x+{\xi}_{2})$
(7.114)
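To make the $\ast_{\hbar}$-product concrete, here is a stdlib-only sketch for polynomial symbols in $(t,x)$ (the dictionary representation and truncation order are our choices, not part of the text). The bidifferential in the exponent acts as $P(f,g)={\partial}_{t}f\,{\partial}_{x}g-{\partial}_{x}f\,{\partial}_{t}g$, so $f\ast_{\hbar}g=\sum_{n}\frac{{\hbar}^{n}}{n!}P^{n}(f,g)$:

```python
from math import comb, factorial
from fractions import Fraction

def d(p, var, k=1):
    """k-th partial derivative of a polynomial {(deg_t, deg_x): coeff};
    var = 0 differentiates in t, var = 1 in x."""
    for _ in range(k):
        q = {}
        for (i, j), c in p.items():
            e = (i, j)[var]
            if e:
                key = (i - 1, j) if var == 0 else (i, j - 1)
                q[key] = q.get(key, 0) + c * e
        p = q
    return p

def mul(p, q):
    r = {}
    for (i, j), c in p.items():
        for (k2, l), e in q.items():
            r[(i + k2, j + l)] = r.get((i + k2, j + l), 0) + c * e
    return r

def add(p, q, s=1):
    r = dict(p)
    for key, c in q.items():
        r[key] = r.get(key, 0) + s * c
    return {k: v for k, v in r.items() if v}

def star(f, g, hbar, order=6):
    """Moyal product (7.114), truncated at hbar^order:
    f*g = sum_n hbar^n/n! sum_k C(n,k)(-1)^{n-k} d_t^k d_x^{n-k} f . d_x^k d_t^{n-k} g"""
    total = {}
    for n in range(order + 1):
        pn = {}
        for k in range(n + 1):
            pn = add(pn, mul(d(d(f, 0, k), 1, n - k), d(d(g, 1, k), 0, n - k)),
                     (-1) ** (n - k) * comb(n, k))
        total = add(total, {key: Fraction(hbar) ** n / factorial(n) * c
                            for key, c in pn.items()})
    return total

f, g = {(2, 0): 1}, {(0, 2): 1}           # f = t^2, g = x^2, hbar = 1
assert star(f, g, 1) == {(2, 2): 1, (1, 1): 4, (0, 0): 2}   # t^2 x^2 + 4 t x + 2
# the antisymmetric part reproduces the Poisson bracket at first order:
assert add(star(f, g, 1), star(g, f, 1), -1) == {(1, 1): 8}  # 2 hbar {f, g} = 8 t x
```

For polynomial symbols the series terminates, so the truncation at `order` is exact once `order` exceeds the total degree.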
The appearance of the $\ast$-product is the first hint that the class II*
theory has something to do with noncommutative geometry. We shall indeed
soon see that a natural interpretation of the solution to the limit shape
equations of the class II* theory involves instantons on the noncommutative
four-manifold ${\mathbb{R}}^{2}\times{\mathbb{T}}^{2}$, where the
noncommutativity is ‘‘between’’ the $\mathbb{R}^{2}$ and the $\mathbb{T}^{2}$
components.
#### 7.4.5. Hitchin system on $T^{2}$
The above solution can be represented by the affine $GL(N)$ Hitchin system on
$\mathscr{E}$:
${\Phi}({\mathfrak{q}}t)={\Phi}(t)+N{\mathfrak{m}}\cdot{\bf 1}_{N}$ (7.115)
with $r+1$ rank $1$ punctures ${\check{t}}_{j}$:
$\displaystyle{\Phi}(t)\sim{\Phi}_{j}\frac{dt}{t-{\check{t}}_{j}},\qquad
j=1,\ldots,r+1$ (7.116)
$\displaystyle\qquad\qquad{\Phi}_{j}={\mathbf{u}}_{j}\otimes{\mathbf{v}}_{j}^{t},\qquad{\mathbf{u}}_{j},{\mathbf{v}}_{j}\in{\mathbb{C}}^{N}$
whose eigenvalues are fixed in terms of masses:
${\mathbf{v}}_{j}^{t}{\mathbf{u}}_{j}={\operatorname{tr}}{\Phi}_{j}=Nm_{j}\ .$
(7.117)
Actually, the vectors and covectors ${\mathbf{v}}_{j},{\mathbf{u}}_{j}$ are
defined up to the ${\mathbb{C}}^{\times}$-action
$({\mathbf{v}}_{j},{\mathbf{u}}_{j})\mapsto(z_{j}{\mathbf{v}}_{j},z_{j}^{-1}{\mathbf{u}}_{j}),\qquad
z_{j}\in{\mathbb{C}}^{\times}$ (7.118)
and (7.117) is the corresponding moment map equation, defining the coadjoint
orbit ${\mathcal{O}}_{j}$ of $SL(N,{\mathbb{C}})$. We can shift ${\Phi}(t)$ by
the meromorphic scalar matrix
${\bf\Phi}(t)={\Phi}(t)-\sum_{j=1}^{r+1}m_{j}{\xi}(t/{\check{t}}_{j})\frac{dt}{t}\,{\bf
1}_{N}\ ,$
which gives the following traceless meromorphic Higgs field (see
[Nekrasov:1995nq]):
${\bf\Phi}({t})=\left\|p_{a}{\delta}_{a}^{b}+\sum_{j=0}^{r}u_{j}^{b}v^{j}_{a}(1-{\delta}_{a}^{b})\frac{{\theta}_{1}({t}/{t}_{j}w_{b}/w_{a}){\theta}_{1}^{\prime}(1)}{{\theta}_{1}({t}/{t}_{j}){\theta}_{1}(w_{b}/w_{a})}\right\|_{a,b=1}^{N}$
(7.119)
which depends, in addition to the $SL(N,{\mathbb{C}})$-orbits
${\mathcal{O}}_{1},\ldots,{\mathcal{O}}_{r+1}$ on the choice
$(w_{1},\ldots,w_{N})$ of a holomorphic $SL(N,{\mathbb{C}})$ bundle on
$\mathscr{E}$, and the dual variables $(p_{1},\ldots,p_{N})$, subject to
$\sum_{a=1}^{N}p_{a}=0,\qquad\prod_{a=1}^{N}w_{a}=1$
There are additional constraints:
$\sum_{j=1}^{r+1}u_{j}^{a}v_{a}^{j}={\mathfrak{m}}$ (7.120)
which generate the action of the residual gauge transformations in the maximal
torus ${\bf T}=({\mathbb{C}}^{\times})^{N-1}$ of $SL(N,{\mathbb{C}})$. The
dimension of the corresponding phase space ${\mathfrak{P}}$, whose open subset
${{\mathfrak{P}}}^{\circ}$ is isomorphic to
${{\mathfrak{P}}}^{\circ}\approx\left(T^{*}\mathrm{Bun}_{SL(N,{\mathbb{C}})}({\mathscr{E}})\times\prod_{j=1}^{r+1}{\mathcal{O}}_{j}\right)//{\bf
T}$ (7.121)
is equal to
${\rm dim}{{\mathfrak{P}}}=2(N-1)+(r+1)(2(N-1))-2(N-1)=2(r+1)(N-1)=2{\bf r}$
(7.122)
which is twice the dimension of the moduli space ${\mathfrak{M}}$ of vacua of
the class II* $A_{r}$ theory with the gauge group
${G_{\text{g}}}=SU(N)^{r+1}$. The remaining $r+1$ mass parameters are encoded
in the symplectic moduli of the coadjoint orbits ${\mathcal{O}}_{j}$, as
expected.
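Unpacking the count (7.122): $T^{*}\mathrm{Bun}_{SL(N,{\mathbb{C}})}({\mathscr{E}})$ contributes $2(N-1)$; each rank one orbit, i.e. $\{{\mathbf{u}}_{j}\otimes{\mathbf{v}}_{j}^{t}\,|\,{\mathbf{v}}_{j}^{t}{\mathbf{u}}_{j}=Nm_{j}\}$ modulo the ${\mathbb{C}}^{\times}$-action (7.118), contributes $2N-1-1=2(N-1)$; and the symplectic reduction by ${\bf T}$ removes $2\,{\rm dim}\,{\bf T}=2(N-1)$:

```latex
{\rm dim}\,{\mathfrak{P}}
 =\underbrace{2(N-1)}_{T^{*}\mathrm{Bun}}
 +(r+1)\underbrace{\left(2N-1-1\right)}_{{\rm dim}\,{\mathcal{O}}_{j}}
 -\underbrace{2(N-1)}_{//\,{\bf T}}
 =2(r+1)(N-1)=2{\bf r}
```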
The relation to our solution is in the equality of two spectral determinants:
$R(t,x)={\rm
Det}_{N}\,\left[\left(x-\sum_{j}m_{j}{\xi}(t/t_{j})\right)\cdot{\bf
1}_{N}-{\bf\Phi}({t})\right]$ (7.123)
which is established by comparing the modular properties and the residues of
the left and the right hand sides.
Note the duality of the twisted periodicities of the gauge theory and
Hitchin’s system Lax operators:
$\displaystyle{\Phi}({\mathfrak{q}}t)=w^{-1}{\Phi}(t)w+{\mathfrak{m}}\cdot{\bf
1}_{N}\in{\mathfrak{sl}}(N,{\mathbb{C}})$ (7.124)
$\displaystyle{\mathfrak{q}}\cdot
g(x-{\mathfrak{m}})=S^{-1}g(x)S\in\widehat{GL}_{\infty}$
where $S$ is the shift operator $S=\sum_{i\in\mathbb{Z}}E_{i,i+r+1}$, and
$w={\rm diag}(w_{1},\ldots,w_{N})$. Eq. (7.123) can be suggestively
written as:
${\rm Det}_{N}(x-{\Phi}(t))\approx{\rm Det}_{H}(t-g(x))$ (7.125)
where $H$ is the single-particle Hilbert space of a free fermion $\psi$.
#### 7.4.6. Relation to many-body systems and spin chains
The parameters of the spectral curve (7.123) are holomorphic functions on
${{\mathfrak{P}}}^{\circ}$, which Poisson-commute, and define the integrable
system. One way of enumerating the Hamiltonians of the integrable system is to
mimic the construction of Hamiltonians (4.22) of the higher genus Hitchin
system. For example, the quadratic Casimir is a meromorphic $2$-differential
on $\mathscr{E}$ with the fixed second order poles at $t={\check{t}}_{j}$
${\operatorname{tr}}{\bf\Phi}(t)^{2}=\sum_{j=1}^{r+1}N^{2}m_{j}^{2}{\wp}(t/{\check{t}}_{j})\,dt^{2}+\sum_{j=1}^{r+1}U_{2,1,j}{\xi}(t/{\check{t}}_{j})\,dt^{2}+U_{2,0}\,dt^{2}$
The Hamiltonians $U_{2,0}$, $U_{2,1,j}$ are computed explicitly in
[Nekrasov:1995nq]. They describe the motion of $N$ particles on $\mathscr{E}$
with the coordinates $w_{1},\ldots,w_{N}$, which have additional
$GL(r+1,{\mathbb{C}})$-spin degrees of freedom. However, in view of our gauge
theory analysis, it seems more natural to view this system as the
$\widehat{GL}_{\infty}$-spin chain. We conjecture that the deformation
quantization of the properly compactified phase space ${\mathfrak{P}}$ will
contain the subalgebra ${\mathcal{A}}_{\mathfrak{m}}$ of the Yangian
$Y(\widehat{GL}_{\infty})$ algebra, which is a deformation of the Yangian of
the affine ${\widehat{A}}_{r}$.
The relation of many-body systems and spin chains based on finite dimensional
symmetry groups was discussed in the context of Hecke symplectic
correspondences in [Levin:2001nm, Olshanetsky:2008uu]. One can also interpret
the results of [Felder:1995iv] as the quantum version of this correspondence.
### 7.5. Class II theories of $D$ type
In this section $\mathfrak{g_{\text{q}}}=\widehat{D}_{r}$.
The fundamental weights of $\widehat{D}_{r}$ are
$\widehat{\lambda}_{0}=\lambda_{0}$ and
$\widehat{\lambda}_{i}=a_{i}^{\vee}\lambda_{0}+\lambda_{i}$, $i=1,\dots,r$,
where $\lambda_{i}$ are the fundamental weights of $D_{r}$, and the Dynkin
labels are $(a_{0},\dots,a_{r})=(1,1,2,\dots,2,1,1)$ (see Appendix C.3.2).
Correspondingly,
$\displaystyle t_{1}(x)=$
$\displaystyle\,\check{t}_{1}\mathscr{Y}_{1}(x)/\mathscr{Y}_{0}(x),$ (7.126)
$\displaystyle t_{2}(x)=$
$\displaystyle\,\check{t}_{2}\mathscr{Y}_{2}(x)/(\mathscr{Y}_{1}(x)\mathscr{Y}_{0}(x)),$
$\displaystyle t_{i}(x)=$
$\displaystyle\,\check{t}_{i}\mathscr{Y}_{i}(x)/\mathscr{Y}_{i-1}(x),\quad
i=3,\ldots,r-2$ $\displaystyle t_{r-1}(x)=$
$\displaystyle\,\check{t}_{r-1}\mathscr{Y}_{r-1}(x)\mathscr{Y}_{r}(x)/\mathscr{Y}_{r-2}(x),$
$\displaystyle t_{r}(x)=$
$\displaystyle\,\check{t}_{r}\mathscr{Y}_{r}(x)/\mathscr{Y}_{r-1}(x)$
with
$\displaystyle\check{t}_{i}$
$\displaystyle=\left(\mathfrak{q}_{i}\mathfrak{q}_{i+1}\dots\mathfrak{q}_{r-2}\right)^{-1}\left(\mathfrak{q}_{r-1}\mathfrak{q}_{r}\right)^{-\frac{1}{2}}$
(7.127) $\displaystyle\qquad i=1,\ldots,r-2$
$\displaystyle\check{t}_{r-1}=(\mathfrak{q}_{r-1}\mathfrak{q}_{r})^{-\frac{1}{2}}\,,\quad\check{t}_{r}=(\mathfrak{q}_{r-1}/\mathfrak{q}_{r})^{\frac{1}{2}}$
There are $4$ irreducible $\widehat{D}_{r}$ highest weight modules
$\widehat{R}_{0},\widehat{R}_{1},\widehat{R}_{r-1},\widehat{R}_{r}$ at level
$1$, and $r-3$ irreducible $\widehat{D}_{r}$ highest weight modules
$\widehat{R}_{2},\dots,\widehat{R}_{r-2}$ at level $2$. In this section, to
shorten the formulae, we use not the characters of $\widehat{R}_{i}$
themselves but the closely related affine Weyl invariant functions
${}_{2}\tilde{{\mathscr{X}}}^{\widehat{D}}_{j}$ at level $2$ and
${}_{1}\tilde{{\mathscr{X}}}^{\widehat{D}}_{j}$ at level $1$, expressed
explicitly in terms of theta-functions in the appendix (LABEL:eq:Dr-inv). The
functions ${}_{1}\tilde{{\mathscr{X}}}^{\widehat{D}}_{j}$ and
${}_{2}\tilde{{\mathscr{X}}}^{\widehat{D}}_{j}$ differ from the actual
characters by a simple power of the Euler function $\phi(\mathfrak{q})$ and a
$\mathfrak{q}$-dependent constant; moreover,
${}_{1}\tilde{{\mathscr{X}}}^{\widehat{D}}_{0},{}_{1}\tilde{{\mathscr{X}}}^{\widehat{D}}_{1}$
appear as linear combinations of the $\widehat{R}_{0}$ and $\widehat{R}_{1}$
characters, while
${}_{1}\tilde{{\mathscr{X}}}^{\widehat{D}}_{r},{}_{1}\tilde{{\mathscr{X}}}^{\widehat{D}}_{r-1}$
appear as linear combinations of the $\widehat{R}_{r-1}$ and $\widehat{R}_{r}$
characters (see appendix (LABEL:eq:Dr-inv)).
$\begin{cases}{}_{1}\tilde{{\mathscr{X}}}^{\widehat{D}}_{j}(\mathscr{Y}_{0}(x),{\mathbf{t}(x)};\mathfrak{q})=T_{j}(x)\\\
{}_{2}\tilde{{\mathscr{X}}}^{\widehat{D}}_{j}(\mathscr{Y}_{0}(x),{\mathbf{t}(x)};\mathfrak{q})=T_{j}(x)\\\
\end{cases}$ (7.128)
where the polynomials $T_{j}(x)$ are of degree $N$ for $j=0,1,r-1,r$ and of
degree $2N$ for $j=2,\dots,r-2$, so that
${}_{1}\tilde{{\mathscr{X}}}^{\widehat{D}}$ is of degree $1$ in
$\mathscr{Y}_{0}$ for $j=0,1,r-1,r$ and
${}_{2}\tilde{{\mathscr{X}}}^{\widehat{D}}$ is of degree $2$ in
$\mathscr{Y}_{0}$ for $j=2,\dots,r-2$. Also, in this section the highest
coefficient of the polynomial $T_{j}(x)$ is normalized differently than in
(6.22); one can find it as a theta-series by evaluating (LABEL:eq:Dr-inv) at
$\check{t}_{i}$.
Using the standard embedding $so(2r)\subset sl(2r)$ we construct the algebraic
equation of the spectral curve of the $\widehat{D}_{r}$ theory as the
specialization of the spectral curve for $\widehat{A}_{2r-1}$ theory. Indeed,
a vector bundle V associated to the vector representation of $SO(2r)$ splits
as the sum of $r$ pairs of line bundles
$L_{t_{i}}\oplus L_{t_{i}^{-1}}$
with the degree zero line bundle $L_{t}$ being
$L_{t}=\mathcal{O}(p_{0})^{-1}\mathcal{O}(t)$ (7.129)
and $p_{0}\in{\mathscr{E}}$ is our friend $t=1$. Then we proceed as in
(LABEL:eq:phi_p), (LABEL:eq:s-ThetaD), (7.92) by considering a meromorphic
section of the determinant bundle ${\rm det}V\approx{\mathcal{O}}_{\mathscr{E}}$
$s(t,x)=\prod_{i=1}^{r}\frac{\theta(t/t_{i}(x);\mathfrak{q})}{\theta(t;\mathfrak{q})}\frac{\theta(t/t_{i}(x)^{-1};\mathfrak{q})}{\theta(t;\mathfrak{q})}$
(7.130)
From LABEL:se:phiD we find
$\mathscr{Y}_{0}^{2}s(t,x)=\sum_{i=0}^{r}\Xi_{i}(\mathscr{Y}_{0};\mathbf{t}(x);\mathfrak{q})M_{ij}(\mathfrak{q})X^{j}(t;\mathfrak{q})$
(7.131)
where $X^{j}(t,\mathfrak{q})$ are powers of the Weierstraß monomials forming a
basis in the space $H^{0}_{\text{even}}({\mathscr{E}},\mathcal{O}(2rp_{0}))$
of meromorphic functions on the elliptic curve symmetric under the reflection
$t\to t^{-1}$ and with a pole of order no greater than $2r$ at $p_{0}$, and
$M_{ij}(\mathfrak{q})$ is a certain modular matrix.
The linear relations (LABEL:eq:D-theta-relations) allow us to express
$\Xi_{i}$ in terms of
$\displaystyle\tilde{\Xi}_{i}={}_{2}\tilde{{\mathscr{X}}}^{\widehat{D}}_{i}\quad
i=2,\dots,r-2$ (7.132)
$\displaystyle\tilde{\Xi}_{i}=({}_{1}\tilde{{\mathscr{X}}}^{\widehat{D}}_{i})^{2}\quad
i=0,1,r-1,r$
as
$\Xi_{i}=\sum_{{\tilde{i}}=0}^{r}{\tilde{\Xi}}_{\tilde{i}}{\tilde{M}}_{{\tilde{i}}i}(\mathfrak{q})$
(7.133)
where $\tilde{M}_{i\tilde{i}}(\mathfrak{q})$ is a certain modular
transformation matrix. Using the character equations (7.128) the spectral
curve (7.131) turns into
$\mathscr{Y}_{0}^{2}s(t,x)=\sum_{\tilde{i},j}\tilde{T}_{\tilde{i}}(x)\tilde{\tilde{M}}_{{\tilde{i}}j}(\mathfrak{q})X^{j}(t,\mathfrak{q})$
(7.134)
where
$\tilde{\tilde{M}}_{{\tilde{i}}j}(\mathfrak{q})=\tilde{M}_{\tilde{i}i}(\mathfrak{q})M_{ij}(\mathfrak{q})$
and
$\displaystyle\tilde{T}_{\tilde{i}}(x)=T_{\tilde{i}}(x)\quad\tilde{i}=2,\dots,r-2$
(7.135)
$\displaystyle\tilde{T}_{\tilde{i}}(x)=(T_{\tilde{i}}(x))^{2}\quad\tilde{i}=0,1,r-1,r$
The spectral curve of the $\widehat{D}_{r}$ theory is the algebraic equation
$R(t,x)=0$, where $R(t,x)$ is the right-hand side of (7.134) combined with the
Weierstraß cubic equation (LABEL:eq:wxy). The $\widehat{D}_{r}$ curve is a
specialization of the $\widehat{A}_{2r-1}$ curve in two ways. First, there are
no monomials odd in $Y$ in (7.134), and, second, the polynomial coefficients
$\tilde{T}_{\tilde{i}}(x)$ of degree $2N$ in $x$ satisfy a factorization
condition: they are perfect squares for $\tilde{i}=0,1,r-1,r$.
To interpret the curve in the Hitchin-Gaudin formalism we will rewrite it in a
slightly different form. First, notice the identity (indeed, both sides are
meromorphic elliptic functions with $2r$ zeroes at the points
$(\check{X}_{i},\check{Y}_{i})$ and $(\check{X}_{i},-\check{Y}_{i})$ and a
pole of order $2r$ at $t=1$, or $X=\infty$; such a function is unique up to a
normalization, which is fixed by the asymptotics at $t=1$):
$\prod_{i=1}^{r}\frac{\theta_{1}(t/\check{t}_{i};\mathfrak{q})}{\theta_{1}(t;\mathfrak{q})}\frac{\theta_{1}(t/\check{t}_{i}^{-1};\mathfrak{q})}{\theta_{1}(t;\mathfrak{q})}=\prod_{i=1}^{r}\theta_{1}(\check{t}_{i};\mathfrak{q})\theta_{1}(\check{t}_{i}^{-1};\mathfrak{q})(X-\check{X}_{i})$
(7.136)
We used here the notations (LABEL:eq:weierx2), (LABEL:eq:eal) for the
Weierstraß functions and
$\displaystyle\check{X}_{i}=X(\check{t}_{i};{\mathfrak{q}}),\quad\check{Y}_{i}^{2}=4\prod_{{\alpha}=1}^{3}(\check{X}_{i}-e_{\alpha})$
Then, dividing (7.130) by (7.136) we find333We use
$\theta_{1}(t,\mathfrak{q})$ in lieu of $\theta(t,\mathfrak{q})$ as the basic
function, so that, strictly speaking, the transformation matrix
$\tilde{\tilde{M}}_{\tilde{i}j}$ differs slightly from that in (7.130)(LABEL:eq:s-ThetaD)
$\mathscr{Y}_{0}^{2}(x)\prod_{i=1}^{r}\theta_{1}(\check{t}_{i};\mathfrak{q})\theta_{1}(\check{t}_{i}^{-1};\mathfrak{q})\times\\\
\prod_{i=1}^{r}\frac{{\theta}_{1}(t/t_{i}(x);\mathfrak{q})}{{\theta}_{1}(t/\check{t}_{i};\mathfrak{q})}\frac{{\theta}_{1}(t/t_{i}^{-1}(x);\mathfrak{q})}{{\theta}_{1}(t/\check{t}_{i}^{-1};\mathfrak{q})}=R(x,X(t,{\mathfrak{q}}))\\\
R(x,X):=\frac{\sum_{\tilde{i},j=0}^{r}\tilde{T}_{\tilde{i}}(x)\tilde{\tilde{M}}_{{\tilde{i}}j}(\mathfrak{q})X^{j}}{\prod_{i=1}^{r}(X-\check{X}_{i})}$
(7.137)
Now, at the order-two points on $\mathscr{E}$444e.g. the points
$(1,-1,-\mathfrak{q}^{-1/2},\mathfrak{q}^{1/2})$ in the $t$-parametrization,
where the respective theta functions
$\theta_{1}(t;\mathfrak{q}),\theta_{2}(t;\mathfrak{q}),\theta_{3}(t;\mathfrak{q}),\theta_{4}(t;\mathfrak{q})$
vanish, or, equivalently, at the four branch points in the $X$ plane:
$(\infty,e_{1},e_{2},e_{3})$, the value of the section $R(x,X)$ can be
expressed in terms of the weight-$1$ invariants
${}_{1}\tilde{{\mathscr{X}}}^{\widehat{D}}_{r-1},{}_{1}\tilde{{\mathscr{X}}}^{\widehat{D}}_{r},{}_{1}\tilde{{\mathscr{X}}}^{\widehat{D}}_{0},{}_{1}\tilde{{\mathscr{X}}}^{\widehat{D}}_{1}$
(compare with (LABEL:eq:Dr-inv) and (LABEL:eq:D1-theta)(LABEL:eq:D1-theta-
jacobi)), and it factorizes as
$\displaystyle R(x,X)|_{X\to\infty}=(T_{r-1}(x))^{2}$ (7.138) $\displaystyle
R(x,X)|_{X\to e_{1}}=c_{2}({\tilde{\mathfrak{q}}})\,(T_{r}(x))^{2}$
$\displaystyle R(x,X)|_{X\to
e_{2}}=c_{3}({\tilde{\mathfrak{q}}})\,(T_{0}(x))^{2}$ $\displaystyle
R(x,X)|_{X\to e_{3}}=c_{4}({\tilde{\mathfrak{q}}})\,(T_{1}(x))^{2}$
where
$c_{k}({\tilde{\mathfrak{q}}})=\prod_{i=1}^{r}\frac{\theta_{1}(\check{t}_{i};\mathfrak{q})\theta_{1}(\check{t}_{i}^{-1};\mathfrak{q})}{\theta_{k}(\check{t}_{i};\mathfrak{q})\theta_{k}(\check{t}_{i}^{-1};\mathfrak{q})},\quad
k=2,3,4$ (7.139)
The Seiberg-Witten differential is given by:
${\lambda}=x\frac{dX}{Y}$ (7.140)
It is defined on the two-fold cover $C_{u}$ of the curve $R(x,X)=0$, which is
a curve in the product
${\mathbb{C}\mathbb{P}}^{2}_{(X:Y:Z)}\times\mathbf{C}_{\left\langle
x\right\rangle}$, given by the equations:
$\displaystyle Y^{2}Z=4(X-e_{1}Z)(X-e_{2}Z)(X-e_{3}Z)$ (7.141) $\displaystyle
F(x,Z,X)=Z^{r}R(x,X/Z)=0$
The curve $C_{u}$ can be interpreted as the spectral curve of the $GL(2N)$
Hitchin-Gaudin system on the orbifold ${\mathscr{E}}/\mathbb{Z}_{2}$, such
that at the fixed points $X=\infty,e_{1},e_{2},e_{3}$ the $GL(2N)$ system
reduces to the $Sp(2N)$ system. For more details on the Hitchin system, the
Nahm transform and the brane construction of the spectral curve for the
${\widehat{D}}_{r}$ quiver see [Kapustin:1998fa, Kapustin:1998xn]. Our main
result is the rigorous derivation of the spectral curve and its periods from
gauge theory considerations.
#### 7.5.1. Deforming the $N_{f}=4$ SU(2) theory
The $\widehat{D}_{4}$ theory can be interpreted as the theory obtained by
gauging the flavor group of the $D_{4}$ theory, with
$({\mathbf{v}}_{1},{\mathbf{v}}_{2},{\mathbf{v}}_{3},{\mathbf{v}}_{4})=(N,2N,N,N)$
gauge nodes and
$({\mathbf{w}}_{1},{\mathbf{w}}_{2},{\mathbf{w}}_{3},{\mathbf{w}}_{4})=(0,N,0,0)$
matter multiplets. In the limit $\mathfrak{q}_{0}\to 0$ the elliptic curve
$\mathscr{E}$ degenerates to the cylinder $\mathbb{C}_{\left\langle
t\right\rangle}^{\times}$, while Seiberg-Witten curve (7.141) degenerates to
the Seiberg-Witten curve of the $D_{4}$ theory (7.50).
Let us consider the case $N=1$. Let us parametrize the polynomials
$T_{0},T_{1},T_{3},T_{4}$ as:
$T_{i}(x)=T_{i,0}({\tilde{\mathfrak{q}}})(x-m_{i}),\qquad i=0,1,3,4$ (7.142)
and
$T_{2}(x)=T_{2,0}({\tilde{\mathfrak{q}}})(x^{2}-m_{2}x+u)$
where the parameters $m_{i}$ and $u$ are related to the microscopic
couplings ${\mathfrak{q}}_{i}$ and the $U(1)^{4}\times SU(2)$ Coulomb moduli, with
$\displaystyle
T_{3,0}({\tilde{\mathfrak{q}}})=\prod_{i=1}^{4}\theta_{1}(\check{t}_{i}),\quad$
$\displaystyle
T_{4,0}({\tilde{\mathfrak{q}}})=\prod_{i=1}^{4}\theta_{2}(\check{t}_{i}),$
(7.143) $\displaystyle
T_{0,0}({\tilde{\mathfrak{q}}})=\prod_{i=1}^{4}\theta_{3}(\check{t}_{i}),\quad$
$\displaystyle
T_{1,0}({\tilde{\mathfrak{q}}})=\prod_{i=1}^{4}\theta_{4}(\check{t}_{i}),$
$\displaystyle\qquad\qquad\qquad\qquad
T_{2,0}({\tilde{\mathfrak{q}}})=\Xi_{2}(1,\mathbf{\check{t}},\mathfrak{q})$
where $\check{t}_{i}$ are defined in (7.127). Then the spectral curve of the
${\widehat{D}}_{4}$ theory (7.137)(LABEL:eq:consD) has the generic form:
$R(x,X)=T_{3}^{2}(x)+\sum_{i=1}^{4}\frac{b_{i}(x)}{X-\check{X}_{i}}$ (7.144)
where $b_{i}(x)$ are polynomials of degree $2$ that we want to relate to
the coupling constants and Coulomb parameters. First, notice that $R(x,X)$ in
(7.144), obtained from (7.137), has no poles at $X=\check{X}_{i}$ at the
leading order $x^{2}$ as $x\to\infty$. Therefore, the polynomials $b_{i}(x)$
are actually of degree 1, containing $8$ coefficients. There are 6 linear
equations on these coefficients, coming from the 3 factorization equations
(LABEL:eq:consD) viewed at orders $x^{1}$ and $x^{0}$ (the equations at order
$x^{2}$ are identically satisfied because of (7.143) and (7.139)):
$\displaystyle\sum_{i=1}^{4}\frac{b_{i}(x)}{e_{1}-\check{X}_{i}}=c_{2}T_{4}^{2}(x)-T_{3}^{2}(x)$
(7.145)
$\displaystyle\sum_{i=1}^{4}\frac{b_{i}(x)}{e_{2}-\check{X}_{i}}=c_{3}T_{0}^{2}(x)-T_{3}^{2}(x)$
$\displaystyle\sum_{i=1}^{4}\frac{b_{i}(x)}{e_{3}-\check{X}_{i}}=c_{4}T_{1}^{2}(x)-T_{3}^{2}(x)$
The above three equations determine four linear functions $b_{i}(x)$ up to a
single linear function, which depends on two parameters
${\tilde{m}}_{2},{\tilde{u}}$:
$\tilde{b}_{j}(x)=(-1)^{j}(-\tilde{m}_{2}x+\tilde{u})\,{\rm
Det}\left\|\begin{matrix}\frac{1}{e_{a}-\check{X}_{b}}\end{matrix}\right\|_{a=1,\ldots
3}^{b=1,\ldots 4,\,b\neq j}$ (7.146)
From (7.137) it is clear that $\tilde{m}_{2},\tilde{u}$ are proportional to
$m_{2},u$. To summarize, we can describe the spectral curve (7.144) of the
$\widehat{D}_{4}$ theory by: the coupling constants
$\mathfrak{q}_{i}$, $i=0,\dots,4$, which define the elliptic curve
$\mathscr{E}(\mathfrak{q})$ with modulus
$\mathfrak{q}=\mathfrak{q}_{0}\mathfrak{q}_{1}\mathfrak{q}_{2}^{2}\mathfrak{q}_{3}\mathfrak{q}_{4}$
and the positions of the 4 punctures $\check{X}_{i}$ in the
$\mathbb{C}_{\left\langle X\right\rangle}$ plane of the Weierstraß cubic via
(7.127); the 4 parameters $m_{i}$, $i=0,1,3,4$, entering the relations (7.145)
through (7.142); and the 2 parameters $\tilde{m}_{2},\tilde{u}$ in (7.146).
Now consider the limit $\mathfrak{q}_{0}\to 0$, which turns the
$\widehat{D}_{4}$ class II quiver theory into the $D_{4}$ class I quiver theory.
In this limit the Weierstraß cubic degenerates: $e_{1}=-2e_{3},\quad
e_{2}=e_{3}=1/12$,
$Y^{2}=4\left(X-e_{3}\right)^{2}\left(X+2e_{3}\right)$ (7.147)
with
$X=\frac{t}{(1-t)^{2}}+\frac{1}{12},\quad Y=\frac{t(1+t)}{(1-t)^{3}}$ (7.148)
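As a quick consistency check (illustrative, not part of the derivation), the degenerate cubic $Y^{2}=4(X-e_{3})^{2}(X+2e_{3})$ with $e_{3}=1/12$, the parametrization (7.148), and the identity $2+1/(X-e_{3})=t+t^{-1}$ of (7.149) can all be verified in exact rational arithmetic at sample values of $t$:

```python
from fractions import Fraction as F

e3 = F(1, 12)

def X_of(t):
    # X(t) from (7.148)
    return t / (1 - t)**2 + F(1, 12)

def Y_of(t):
    # Y(t) from (7.148)
    return t * (1 + t) / (1 - t)**3

for t in (F(2), F(3), F(-1, 2), F(5, 7)):
    X, Y = X_of(t), Y_of(t)
    # degenerate cubic: Y^2 = 4 (X - e3)^2 (X + 2 e3)
    assert Y**2 == 4 * (X - e3)**2 * (X + 2 * e3)
    # eta = 2 + 1/(X - e3) = t + 1/t, cf. (7.149)
    assert 2 + 1 / (X - e3) == t + 1 / t
```

Since the check is exact (no floating point), passing it at several generic rational points strongly supports the displayed identities.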
The Seiberg-Witten differential $x\frac{dX}{Y}$ becomes $x\frac{dt}{t}$. The
elliptic curve $\mathscr{E}$ degenerates to the rational curve which is the
double cover $t\mapsto X$ of the complex projective line
$\mathbb{C}\mathbb{P}^{1}_{X}$. To make contact with the Seiberg-Witten curve
of the $D_{4}$ quiver theory it is convenient to work in the coordinate
$\eta$, related to the coordinate $X$ by the rational transformation
$\eta=2+\frac{1}{X-e_{3}}=t+t^{-1}$ (7.149)
The function $\eta(X)$ is a degree-two meromorphic function on $\mathscr{E}$
with values at the four $\mathbb{Z}_{2}$-invariant points given by
$\eta(e_{2})=\eta(e_{3})=\infty,\quad\eta(e_{1})=-2,\quad\eta(\infty)=2$
(7.150)
Rewriting (7.137) in terms of $\eta$ we find the equation of the spectral curve
$\tilde{\mathcal{R}}^{D_{4}}(\eta,x)=0$ for
$\tilde{\mathcal{R}}^{D_{4}}(\eta,x)=\sum_{i=0}^{4}\eta^{i}p_{i}(x)$ (7.151)
where $p_{i}(x)$ are polynomials of degree $2$ in $x$. Moreover, the
factorization conditions (LABEL:eq:consD) translate into the statement that
$\tilde{\mathcal{R}}^{D_{4}}(\eta,x)$ is a perfect square, in the polynomial
ring of $x$, at $\eta=\infty$ and at $\eta=\pm 2$. Notice that these are
precisely the factorization conditions (7.63)(7.64) of the curve (7.60) for
the $D_{4}$ quiver. (The variables $t$ and $\eta$ in the equations (7.60),
(7.59) correspond to $t$ and $\eta$ of this section multiplied by the factor
$\mathfrak{q}_{1}\mathfrak{q}_{2}\mathfrak{q}_{3}^{\frac{1}{2}}\mathfrak{q}_{4}^{\frac{1}{2}}$.)
Given the above discussion and section 7.2.1, let us summarize the freezing
hierarchy $\widehat{D}_{4}\to D_{4}\to A_{3}\to A_{2}\to A_{1}$. For the
$\widehat{D}_{4}$ theory we start with the elliptic curve
$\mathscr{E}(\mathfrak{q})$ with the $\mathbb{Z}_{2}$ reflection symmetry $t\to
t^{-1}$ (or $Y\to-Y$) and $8$ $\mathbb{Z}_{2}$-symmetrically located punctures
in 4 pairs $(\check{t}_{i},\check{t}_{i}^{-1})$. As we freeze
$\mathfrak{q}_{0}\to 0$, the elliptic curve $\mathscr{E}(\mathfrak{q})$
degenerates to a $\mathbb{Z}_{2}$-symmetric cylinder
$\mathbb{C}^{\times}_{t}$ with the 4 old pairs
$(\check{t}_{i},\check{t}_{i}^{-1})$ of punctures. The cylinder
$\mathbb{C}^{\times}_{t}$ double covers its $\mathbb{Z}_{2}$-quotient
$\mathbb{C}\mathbb{P}^{1}_{\eta}$. This is the situation of the $D_{4}$
quiver theory (7.60). As we freeze $\mathfrak{q}_{4}\to 0$, the second sheet of
the double cover $\mathbb{C}^{\times}_{t}\to\mathbb{C}^{\times}_{\eta}$ is
removed to infinity and we are left with the $4$ punctures of the $A_{3}$ quiver at
$(\mathfrak{q}_{1}^{-1},1,\mathfrak{q}_{2},\mathfrak{q}_{2}\mathfrak{q}_{3})$.555Keeping
in mind the ultimate configuration of the $A_{1}$ quiver, dynamical at node ‘‘2’’,
we have rescaled the positions of the punctures by a factor of $\mathfrak{q}_{1}$.
Notice that, as discussed after (7.65), the $SL(2,\mathbb{C})$ residues of the
Higgs field vanish at the punctures at $0$ and $\infty$. As we freeze
$\mathfrak{q}_{3}\to 0$, the puncture at $\mathfrak{q}_{2}\mathfrak{q}_{3}$
(with a nontrivial $SL(2,\mathbb{C})$ residue of the Higgs field) merges with the
puncture at $0$ and we are in the situation of the $A_{2}$ quiver with
$SL(2,\mathbb{C})$ punctures at $(\mathfrak{q}_{1}^{-1},1,\mathfrak{q}_{2},0)$
and a $GL(1,\mathbb{C})$ puncture at $\infty$. Finally, as we freeze
$\mathfrak{q}_{1}\to 0$, the puncture at $\mathfrak{q}_{1}^{-1}$ (with a
nontrivial residue of the $SL(2,\mathbb{C})$ Higgs field) merges with the
puncture at $\infty$ and we are left with $\mathbb{C}\mathbb{P}^{1}$ with
$SL(2,\mathbb{C})$ punctures at $(\infty,1,\mathfrak{q}_{2},0)$ for the
$A_{1}$ quiver theory defined at the dynamical node ‘‘2’’. See figure 7.3.
Figure 7.3. The freezing $\widehat{D}_{4}\to D_{4}\to A_{3}\to A_{2}\to
A_{1}$. The live nodes are shown in red, the frozen nodes in blue. The nodes
are labeled as $i_{\mathbf{v}_{i}}$.
### 7.6. Class II theories of $E$ type
The main technical tool is the natural isomorphism between the moduli space of
$E_{k}$-bundles on the elliptic curve $\mathscr{E}$ and the moduli space of
del Pezzo surfaces $\mathcal{S}_{k}$, which are obtained by blowing up $k$
points in ${\mathbb{C}}{\mathbb{P}}^{2}$ and have the fixed elliptic curve
$\mathscr{E}$ as the anticanonical divisor. The spectral curve is found using
the ‘‘cylinder map’’ [Kanev]; see [Donagi:2008kj, Donagi:2008ca,
Curio:1998bva] for applications.
Another way of encoding the geometry of the moduli space of the $E_{k}$-bundles
is in the unfolding of the parabolic unimodular singularities [Arnold:1985]
${\widehat{T}}_{a,b,c}$ with
$\frac{1}{a}+\frac{1}{b}+\frac{1}{c}=1$
which are:
$\displaystyle{\tilde{E}}_{6}=P_{8}={\widehat{T}}_{3,3,3}\,:$
$\displaystyle\,x^{3}+y^{3}+z^{3}+mxyz,\qquad\qquad m^{3}+27\neq 0,$ (7.152)
$\displaystyle{\tilde{E}}_{7}=X_{9}={\widehat{T}}_{2,4,4}\,:$
$\displaystyle\,x^{4}+y^{4}+z^{2}+mxyz,\qquad\qquad m^{4}-64\neq 0,$
$\displaystyle{\tilde{E}}_{8}=J_{10}={\widehat{T}}_{2,3,6}\,:$
$\displaystyle\,x^{6}+y^{3}+z^{2}+mxyz,\qquad\qquad 4m^{6}-432\neq 0$
We shall not pursue this direction in this work.
###### Remark.
Another important question left for future work is the connection between our
description of the special geometry via the periods of $d{\mathbb{S}}$ and the
periods of non-compact Calabi-Yau threefolds of [Katz:1997eq].
#### 7.6.1. Del Pezzo and $E_{6}$ bundles
The del Pezzo surface
$\mathcal{S}_{6}\subset\mathbb{W}\mathbb{P}^{1,1,1,1}={\mathbb{C}\mathbb{P}}^{3}$
is the zero locus of a homogeneous degree-$3$ polynomial:
${\Gamma}_{3}(X_{0},X_{1},X_{2},X_{3})=\sum_{i=0}^{3}X_{0}^{3-i}\,{\mathcal{G}}_{i}(X_{1},X_{2},X_{3})$
(7.153)
where ${\mathcal{G}}_{i}$ is the degree $i$ homogeneous polynomial in
$X_{1},X_{2},X_{3}$. In particular,
${\mathcal{G}}_{3}(X_{1},X_{2},X_{3})=-X_{1}X_{3}^{2}+4X_{2}^{3}-g_{2}X_{1}^{2}X_{2}-g_{3}X_{1}^{3}$
(7.154)
defines the elliptic curve ${\mathscr{E}}$, which determines the gauge
coupling $\mathfrak{q}={\exp}\,2\pi\mathrm{i}\tau$, cf. (LABEL:eq:wxy):
${\tau}=\frac{\oint_{B}dX/Y}{\oint_{A}dX/Y}\,,\qquad
X=X_{2}/X_{1},\,Y=X_{3}/X_{1}$ (7.155)
The remaining coefficient functions ${\mathcal{G}}_{0,1,2}$ are parametrized
as follows:
$\displaystyle{\mathcal{G}}_{2}(X_{1},X_{2},X_{3})=p_{0}X_{1}^{2}+p_{1}X_{1}X_{2}+p_{6}X_{2}X_{3}$
(7.156)
$\displaystyle{\mathcal{G}}_{1}(X_{1},X_{2},X_{3})=p_{2}X_{1}+p_{3}X_{2}+p_{5}X_{3}$
(7.157) $\displaystyle{\mathcal{G}}_{0}(X_{1},X_{2},X_{3})=p_{4}$ (7.158)
The isomorphism classes of $\mathcal{S}_{6}$ surfaces containing the fixed
elliptic curve ${\mathscr{E}}$ are in one-to-one correspondence with the
points
$[p]=(p_{0}:p_{1}:p_{6}:p_{2}:p_{3}:p_{5}:p_{4})\in{\mathcal{M}}$ (7.159)
in the weighted projective space
${\mathcal{M}}={\mathbb{W}\mathbb{P}}^{1,1,1,2,2,2,3}$, which is also
isomorphic, by Looijenga's theorem [Loojienga:1976], to the moduli space
$\mathrm{Bun}_{E_{6}({\mathbb{C}})}^{ss}(\mathscr{E})$ of holomorphic semi-
stable principal $E_{6}$-bundles on $\mathscr{E}$. We label the projective
coordinates $p_{i}$ in such a way that the projective weight of $p_{i}$ equals
the Dynkin mark $a_{i}$ in the conventions of Appendix A. The correspondence between
the $E_{6}$-bundles on $\mathscr{E}$ and the del Pezzo surfaces
$\mathcal{S}_{6}$ is geometric: there are precisely $27$ degree $1$ rational
curves (‘‘the $(-1)$-lines’’) $C_{a}$ on $\mathcal{S}_{6}$, $a=1,\ldots,27$,
which are the divisors of the line bundles ${\mathscr{L}}_{a}$ on
$\mathcal{S}_{6}$. The direct sum
${\mathcal{U}}=\bigoplus_{a=1}^{27}{\mathscr{L}}_{a}\,,$ (7.160)
has no infinitesimal deformations, as a bundle on $\mathcal{S}_{6}$. The
mapping class group of $\mathcal{S}_{6}$ acts on the $(-1)$-lines by the
$E_{6}$ Weyl transformations. As a result, the bundle ${\mathcal{U}}$ is a
vector bundle associated to a canonical principal $E_{6}({\mathbb{C}})$-bundle
${\mathcal{P}}_{\mathcal{S}_{6}}$ over $\mathcal{S}_{6}$ with the help of a
${\bf 27}$ representation:
${\mathcal{U}}={\mathcal{P}}_{\mathcal{S}_{6}}\times_{E_{6}({\mathbb{C}})}{\bf
27}$ (7.161)
The restriction ${\mathcal{P}}_{\mathcal{S}_{6}}|_{\mathscr{E}}$ is the holomorphic
principal $E_{6}({\mathbb{C}})$-bundle over $\mathscr{E}$ which corresponds to the point
$[p]$ in Looijenga's theorem. Again, the associated rank-$27$ vector bundle
${\mathcal{U}}|_{\mathscr{E}}$ splits:
${\mathcal{U}}|_{\mathscr{E}}=\bigoplus_{a=1}^{27}{\mathscr{L}}_{a}$ (7.162)
The line subbundles ${\mathscr{L}}_{a}$ can be expressed as:
${\mathscr{L}}_{a}=\bigotimes_{i=1}^{6}\,{\mathbb{L}}_{i}^{w_{a,i}}$ (7.163)
where $w_{a,i}$, $i=1,\ldots,6$, $a=1,\ldots,27$ are the components of the
weight vector. The line bundles ${\mathbb{L}}_{i}$, $i=1,\ldots,6$ are defined
up to the action of the $E_{6}$ Weyl group. Let us now compute the
${\mathscr{L}}_{a}$'s. A rational curve of degree one in $\mathcal{S}_{6}$
is a rational curve of degree one in ${\mathbb{C}\mathbb{P}}^{3}$ which is
contained in $\mathcal{S}_{6}$. A parametrized rational curve of degree one in
${\mathbb{C}\mathbb{P}}^{3}$ is a collection of $4$ linear functions:
${\zeta}\mapsto{\mathbf{X}}({\zeta})$,
${\mathbf{X}}({\zeta})=\left(X_{0}+{\zeta}v_{0},X_{1}+{\zeta}v_{1},X_{2}+{\zeta}v_{2},X_{3}+{\zeta}v_{3}\right)$
(7.164)
The two quadruples
${\mathbf{X}}({\zeta})\qquad{\rm
and}\qquad(c{\zeta}+d)\,{\mathbf{X}}\left(\frac{a{\zeta}+b}{c{\zeta}+d}\right)$
for
$\left(\begin{matrix}a&b\\\ c&d\\\ \end{matrix}\right)\in{\rm
GL}_{2}({\mathbb{C}})$ (7.165)
define identical curves in ${\mathbb{C}\mathbb{P}}^{3}$. We can fix the
GL${}_{2}({\mathbb{C}})$ gauge by choosing the parameter $\zeta$ so that:
${\mathbf{X}}({\zeta})=\left({\zeta},1,X+{\zeta}v_{X},Y+{\zeta}v_{Y}\right)$
(7.166)
The requirement that the curve lands in
$\mathcal{S}_{6}\subset{\mathbb{C}\mathbb{P}}^{3}$ reads as
${\Gamma}_{3}\left({\zeta},1,X+{\zeta}v_{X},Y+{\zeta}v_{Y}\right)=\sum_{i=0}^{3}{\zeta}^{i}{\Xi}_{i}(X,Y;v_{X},v_{Y})\equiv
0$ (7.167)
which is a system of $4$ equations
${\Xi}_{i}(X,Y;v_{X},v_{Y})=0,\qquad i=0,\ldots,3$
on $4$ unknowns $X,Y,v_{X},v_{Y}$:
$\displaystyle\Xi_{0}$ $\displaystyle=-Y^{2}+4X^{3}-g_{2}X-g_{3}$
$\displaystyle\Xi_{1}$
$\displaystyle=-g_{2}v_{X}+p_{6}XY+p_{1}X+p_{0}+12X^{2}v_{X}-2Yv_{Y}$
$\displaystyle\Xi_{2}$
$\displaystyle=p_{6}Yv_{X}+p_{1}v_{X}+p_{6}Xv_{Y}+p_{3}X+p_{5}Y+p_{2}+12Xv_{X}^{2}-v_{Y}^{2}$
$\displaystyle\Xi_{3}$
$\displaystyle=p_{6}v_{X}v_{Y}+p_{3}v_{X}+p_{5}v_{Y}+p_{4}+4v_{X}^{3}$
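The coefficients $\Xi_{i}$ above can be checked mechanically: restrict $\Gamma_{3}$ to the line (7.166) and recover the coefficients of the resulting cubic in $\zeta$ by interpolation at four points. A short exact-arithmetic sketch (the sample values are arbitrary; this is an illustration, not part of the derivation):

```python
from fractions import Fraction as F

# Arbitrary rational test values for the parameters and the point on the line.
g2, g3 = F(1), F(2)
p0, p1, p2, p3, p4, p5, p6 = (F(k) for k in range(1, 8))
X, Y, vX, vY = F(2), F(3), F(1, 2), F(5)

def gamma3(X0, X1, X2, X3):
    """Gamma_3 = sum_i X0^(3-i) G_i, with the G_i of (7.154)-(7.158)."""
    G3_ = -X1 * X3**2 + 4 * X2**3 - g2 * X1**2 * X2 - g3 * X1**3
    G2_ = p0 * X1**2 + p1 * X1 * X2 + p6 * X2 * X3
    G1_ = p2 * X1 + p3 * X2 + p5 * X3
    G0_ = p4
    return X0**3 * G0_ + X0**2 * G1_ + X0 * G2_ + G3_

def f(z):  # restriction of Gamma_3 to the line (7.166)
    return gamma3(z, F(1), X + z * vX, Y + z * vY)

# Recover the cubic's coefficients by interpolation at zeta = 0, 1, -1, 2.
c0 = f(F(0))
c2 = (f(F(1)) + f(F(-1))) / 2 - c0
odd1 = (f(F(1)) - f(F(-1))) / 2              # c1 + c3
c3 = ((f(F(2)) - c0 - 4 * c2) / 2 - odd1) / 3
c1 = odd1 - c3

# The closed-form Xi_i from the text.
Xi0 = -Y**2 + 4 * X**3 - g2 * X - g3
Xi1 = -g2 * vX + p6 * X * Y + p1 * X + p0 + 12 * X**2 * vX - 2 * Y * vY
Xi2 = (p6 * Y * vX + p1 * vX + p6 * X * vY + p3 * X + p5 * Y + p2
       + 12 * X * vX**2 - vY**2)
Xi3 = p6 * vX * vY + p3 * vX + p5 * vY + p4 + 4 * vX**3

assert (c0, c1, c2, c3) == (Xi0, Xi1, Xi2, Xi3)
```

Since $\Gamma_{3}$ restricted to the line is exactly cubic in $\zeta$, the interpolation is exact and the comparison holds identically in the parameters.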
The equation $\Xi_{0}=0$ in the above system is the equation of the elliptic
curve $\mathscr{E}$. To find the equation of the spectral cover associated
with the vector bundle $\mathcal{U}|_{\mathscr{E}}$ in the $\mathbf{27}$
representation we can express $v_{Y}$ from the equation $\Xi_{1}=0$, then plug
it into the remaining equations $\Xi_{2}=0$ and $\Xi_{3}=0$, compute the
resultant of these two polynomials with respect to the variable $v_{X}$,
reduce modulo the equation $\Xi_{0}=0$ defining the elliptic curve
$\mathscr{E}$, arriving at:
$C^{E_{6}}(X,Y;g_{2},g_{3},p_{0},\dots,p_{6})=-4Y^{4}\mathrm{res}_{v_{X}}(\Xi_{2}|_{v_{Y}:\Xi_{1}=0},\Xi_{3}|_{v_{Y}:\Xi_{1}=0})\mod\Xi_{0}$
(7.168)
The resultant $C^{E_{6}}(X,Y;g_{2},g_{3},p_{0},\dots,p_{6})$ is a polynomial
in $X,Y$ with polynomial coefficients in $g_{2},g_{3},p_{0},\dots,p_{6}$ of
the form
$C^{E_{6}}(X,Y;g_{2},g_{3},p_{0},\dots,p_{6})=\\\
(p_{0}^{6}+\dots)+(6p_{0}^{5}p_{1}+\dots)X+\dots+(-256p_{3}^{3}+\dots)X^{12}\\\
+(12g_{3}p_{0}^{4}p_{5}+\dots)Y+(32g_{3}p_{0}^{4}p_{5}+\dots)XY+\dots+(-256p_{5}^{3}+\dots)X^{12}Y$
(7.169)
(A short `Mathematica` version of this formula is given in appendix
LABEL:sec:_E6-delPezzo-code.) Now let us imagine having a family
${\mathcal{U}}$ of $E_{6}$-bundles on $\mathscr{E}$.
In our solution the vacuum $u$ of the gauge theory is identified with the
degree $N$ quasimap:
$\displaystyle p:{\mathbb{C}\mathbb{P}}^{1}_{\left\langle
x\right\rangle}\to\mathrm{Bun}_{E_{6}(\mathbb{C})}(\mathscr{E})\simeq\mathbb{W}\mathbb{P}^{1,1,1,2,2,2,3}$
given by the polynomials $p_{i}(x)$ of degree $Na_{i}$
$p_{i}=p_{i}(x),\quad i=0,\dots,6$ (7.170)
Together with the equation of the Weierstraß cubic
$\Xi_{0}(X,Y,g_{2},g_{3})=0$, the equation
$C^{E_{6}}(X,Y;g_{2},g_{3},p_{0}(x),\dots,p_{6}(x))=0$ (7.171)
defines the Seiberg-Witten curve of the affine $E_{6}$ theory as an algebraic
|
# Distributed Transfer Learning with 4th Gen Intel® Xeon® Scalable Processors
Lakshmi Arunachalam, Fahim Mohammad, Vrushabh H. Sanghavi
{lakshmi.arunachalam, fahim.mohammad<EMAIL_ADDRESS>AI
Framework Engineer
Data Center and Artificial Intelligence
Intel Corporation
###### Abstract
In this paper, we explore how transfer learning, coupled with Intel® Xeon®
Scalable processors, specifically the 4th Gen Intel® Xeon® Scalable processor,
defies the conventional belief that training is primarily GPU-dependent. We
present a case study where we achieved near state-of-the-art accuracy for
image classification on a publicly available image-classification TensorFlow
dataset using Intel® Advanced Matrix Extensions (AMX) and distributed training
with Horovod.
_Keywords_ Artificial Intelligence, Deep Learning Optimization, End-to-End AI
applications, E2E performance optimization, Transfer Learning, Intel® Xeon®
## 1 Introduction
Imagine how kids learn to start coloring with crayons. It may take a few days
for them to learn how to hold the crayon, stay within the lines, and so on.
They may need lots of crayons and coloring books to get the hang of it. Then
they can easily apply their skills to learn color pencils, painting, or pencil
shading. They don’t have to start all over again from scratch, because they
already have a foundation of coloring with crayons. This is what transfer
learning is about. Instead of starting from scratch, which needs more time and
resources, we can use the skills already learnt and fine-tune them a little to
learn a similar task and be good at it.
In the world of machine learning and artificial intelligence, transfer
learning has emerged as a powerful technique. In this paper, we explore how
transfer learning, coupled with Intel® Xeon® Scalable CPUs, specifically the
4th Gen Intel® Xeon® Scalable processor, defies the conventional belief that
training is primarily GPU-dependent. We present a case study where we achieved
near state-of-the-art accuracy for image classification on a publicly
available image-classification TensorFlow dataset [1] using Intel® Advanced
Matrix Extensions (AMX) [2] and distributed training with Horovod.
## 2 Image Classification with Transfer Learning
The basic idea behind transfer learning is to use a pre-trained model, often
trained on a large and diverse dataset, as a starting point. This pre-trained
model has already learned useful features or representations from its original
task, which can be transferred and applied to the new task. The advantage of
transfer learning lies in its ability to significantly reduce the time and
resources needed for training while delivering impressive results. To
illustrate the power of transfer learning, let’s consider a case study of
identifying colorectal cancer tissue types through image classification [3].
We started with the pre-trained ResNet v1.5 [4][5] weights and fine-tuned the
last classification layer using a TensorFlow dataset of 5,000 images, 4,000 of
which were used for training. This approach allowed us to build on the
knowledge acquired during pre-training and achieve close to the
state-of-the-art accuracy of 94.5% [6] on this dataset. Data augmentation was
used as a preprocessing step, and an early-stopping criterion with a patience
of 10 epochs was employed to stop training once convergence was reached. The
pipeline demonstrated run-to-run variations
of 6-7 epochs, with an average of 45 epochs to achieve convergence. Figure 1
shows the transfer learning pipeline for the vision task.
Figure 1: Vision Transfer Learning Pipeline
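The "freeze the backbone, retrain only the classifier" idea behind this pipeline can be sketched with a self-contained toy example (purely illustrative, not the actual ResNet pipeline): the frozen backbone is simulated by a fixed random projection, and only a final logistic classification layer is trained.

```python
import math
import random

# Toy sketch of fine-tuning only the last layer; all sizes and rates are
# illustrative, and the "backbone" stands in for pre-trained ResNet features.
random.seed(0)
DIM, FEAT = 8, 4
backbone = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(FEAT)]

def features(x):
    """Frozen feature extractor: never updated during fine-tuning."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in backbone]

def sigmoid(z):
    # numerically stable logistic function
    return 1 / (1 + math.exp(-z)) if z >= 0 else math.exp(z) / (1 + math.exp(z))

# Synthetic binary labels that are linearly decodable from the frozen features.
data = []
for _ in range(200):
    x = [random.gauss(0, 1) for _ in range(DIM)]
    f = features(x)
    data.append((f, 1 if f[0] + f[1] > 0 else 0))

# Train only the classification head (logistic regression) with plain SGD.
w, b = [0.0] * FEAT, 0.0
for _ in range(100):
    for f, y in data:
        g = sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b) - y
        w = [wi - 0.1 * g * fi for wi, fi in zip(w, f)]
        b -= 0.1 * g

acc = sum(
    (sigmoid(sum(wi * fi for wi, fi in zip(w, f)) + b) > 0.5) == (y == 1)
    for f, y in data
) / len(data)
assert acc > 0.85
```

Because the backbone is never updated, only `FEAT + 1` parameters are trained, which is why fine-tuning converges in orders of magnitude less time than training from scratch.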
## 3 Leveraging Intel® Xeon® Scalable CPUs
Traditionally, training deep learning models has been GPU-intensive, but with
Intel® Xeon® Scalable CPUs we witnessed a paradigm shift. By utilizing Intel®
Advanced Matrix Extensions (AMX) with BF16 precision, we achieved a remarkable
accuracy of 94.5%, with the model converging in just 43 epochs. The entire
training process took less than 5.5 minutes on a single socket, showcasing
the efficiency and speed of Intel® Optimization for TensorFlow. Intel®
Optimization for TensorFlow is powered by the Intel® oneAPI Deep Neural
Network Library (oneDNN) [7] [8], which provides vectorized primitives for
convolution, normalization, activation, inner product, and other operations.
To achieve the above performance and accuracy, we used the following settings:
* •
Use Mixed Precision: Leverage the Intel® AMX BF16 format by enabling
auto mixed precision in TensorFlow. BF16 offers a wider dynamic range than
FP16 while delivering higher performance than FP32. In our case study we
achieved similar accuracy with BF16 as with FP32.
* •
Use numactl: Accessing memory from the local socket is faster than from a
remote socket in NUMA systems. To avoid potential performance issues due to
remote memory access, bind the memory to one socket using the numactl command.
For hyperthreading scenarios, use the command numactl -C 0-55,112-167 -m 0
python train.py to ensure memory is bound to one socket.
* •
Define run-time parameters: Inter-op parallelism runs independent operations
concurrently, distributing them across threads to use system resources
efficiently. Intra-op parallelism parallelizes the execution of a single
operation, breaking it into smaller sub-tasks that run on multiple cores. For
the case study, both the inter-op and the intra-op parallelism are set to 56
threads (the number of cores per socket). Additionally, use the specific KMP
settings below:
* –
KMP_SETTINGS = 1
* –
KMP_BLOCKTIME = 1
* –
OMP_NUM_THREADS = NUM_CORES (56)
* –
KMP_AFFINITY = granularity=fine,compact,1,0
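The settings above can be collected into a small helper (a minimal sketch; the environment variables are the KMP/OMP knobs listed above, while the TensorFlow calls are shown as comments because they require a TensorFlow installation):

```python
import os

# Sketch: build and apply the runtime environment described in the text.
def runtime_env(num_cores: int = 56) -> dict:
    return {
        "KMP_SETTINGS": "1",
        "KMP_BLOCKTIME": "1",
        "OMP_NUM_THREADS": str(num_cores),
        "KMP_AFFINITY": "granularity=fine,compact,1,0",
    }

os.environ.update(runtime_env(56))

# With TensorFlow available one would additionally set, e.g.:
#   tf.config.threading.set_inter_op_parallelism_threads(56)
#   tf.config.threading.set_intra_op_parallelism_threads(56)
#   tf.keras.mixed_precision.set_global_policy("mixed_bfloat16")
assert os.environ["OMP_NUM_THREADS"] == "56"
```

Note that the environment variables must be set before the deep learning framework initializes its thread pools, so a helper like this belongs at the very top of the training script.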
## 4 Empowering Multi-Socket Performance with Distributed Training
Our test system is a two-socket 4th Gen Intel® Xeon® Scalable platform, with
56 cores per socket. To maximize performance, we employed distributed training
with Horovod [9] and OpenMPI [10] as the backend. Horovod, an open-source
distributed training framework developed by Uber, supports popular deep
learning frameworks like TensorFlow, PyTorch, and MXNet. By leveraging MPI,
Horovod efficiently distributes training data and model parameters across
multiple devices, resulting in faster training times. With all 112 cores,
including hyperthreading, fully engaged, we achieved an impressive training
time of around 3 minutes, comparable to out-of-the-box training on an NVIDIA
A100 GPU. The total training time results are displayed in Figure 2.
In the specified distributed-training setup, weak scaling is used, maintaining
the same per-worker batch size throughout. Training is performed using Horovod
with two workers on a single Sapphire Rapids system, where each worker is
mapped to one socket of the system. The dataset is divided into halves, one
assigned to each worker for processing.
To reduce communication overhead, gradients are averaged every 5 epochs
instead of after each epoch. The training process uses the Horovod optimizer,
and a warmup period of 3 epochs is set. The initial learning rate is set to
0.001 per worker and is scaled by the number of workers to 0.002. To optimize
intra-op parallelism, the number of threads is set to 54, the number of cores
per socket minus 2. This configuration aims to achieve efficient and
effective training while leveraging the computational capabilities of the
Sapphire Rapids system.
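The weak-scaling bookkeeping described above can be sketched as follows (a hedged illustration: each worker processes its half of the 4,000 training images at the same per-worker batch size, and the learning rate is scaled linearly by the number of workers, which is the usual Horovod convention; the helper names are illustrative, not the Horovod API):

```python
# Illustrative helpers for per-worker data sharding and learning-rate scaling.
def shard(dataset_size: int, num_workers: int, rank: int) -> range:
    """Contiguous index range of the samples assigned to worker `rank`."""
    per_worker = dataset_size // num_workers
    start = rank * per_worker
    end = dataset_size if rank == num_workers - 1 else start + per_worker
    return range(start, end)

def scaled_lr(base_lr: float, num_workers: int) -> float:
    """Linear learning-rate scaling across workers."""
    return base_lr * num_workers

shards = [shard(4000, 2, r) for r in range(2)]
assert len(shards[0]) == len(shards[1]) == 2000
assert scaled_lr(0.001, 2) == 0.002
```

In an actual Horovod run, `rank` and `num_workers` would come from the framework (`hvd.rank()`, `hvd.size()`), and the scaled learning rate would be passed to the wrapped distributed optimizer.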
To maximize performance on Intel® Xeon® CPUs, we followed the settings
described above.
Figure 2: Competitive Perf Results for Vision Transfer Learning Workload.
## 5 Conclusion
Transfer learning has proven to be a game-changer in deep learning, enabling
us to build on existing knowledge and achieve outstanding results with minimal
time and resources. The successful application of transfer learning on Intel®
Xeon® Scalable CPUs, particularly Sapphire Rapids, challenges the GPU-centric
training mindset and offers a compelling alternative for high-performance
image classification tasks. As we continue to explore the possibilities of
leveraging Intel®’s advanced technologies, we look forward to even greater
strides in AI and machine learning.
## Configuration Details
* •
3rd Gen Intel® Xeon® scalable processor (ICX) Test by Intel® as of 10/21/2022.
1-node, 2x Intel® Xeon® Platinum 8380, 40 cores, HT On, Turbo On, Total Memory
1024 GB (16 slots/ 64 GB/ 3200 MHz [run @ 3200 MHz] ),
SE5C620.86B.01.01.0005.2202160810, 0xd000375, Ubuntu 22.04.1 LTS,
5.15.0-48-generic, n/a, Vision Transfer Learning Pipeline,Intel-tensorflow-
avx512 2.10.0, resnet50v1_5, n/a
* •
4th Gen Intel® Xeon scalable processor (SPR) Test by Intel® as of 10/21/2022.
1-node, 2x Intel® Xeon® Platinum 8480+ ,56 cores, HT On, Turbo On, Total
Memory 1024 GB (16 slots/ 64 GB/ 4800 MHz [run @ 4800 MHz] ),
EGSDREL1.SYS.8612.P03.2208120629 , 0x2b000041 , Ubuntu 22.04.1 LTS,
5.15.0-48-generic, n/a, Vision Transfer Learning Pipeline, Intel-tensorflow-
avx512 2.10.0, resnet50v1_5, n/a.
* •
NVIDIA-A100 Test by Intel® as of 10/26/2022. 1-node (DGX-A100), 2xAMD EPYC
7742 64-Core Processor, 64 cores, HT On, Turbo On,, Total 1024GB (16
slots/64GB/3200 MHz) [run @ 3200MHz] ), Nvidia A100 GPU, BIOS 1.1, 0x830104d
,Ubuntu 20.04.2 LTS , 5.4.0-81-generic, n/a, Vision Transfer Learning
Pipeline, Tensorflow 2.10, resnet50v1_5, n/a.
## References
* [1] TensorFlow Datasets: a collection of ready-to-use datasets. URL https://www.tensorflow.org/datasets.
* [2] Accelerate AI Workloads with Intel AMX. URL https://www.intel.com/content/www/us/en/products/docs/accelerator-engines/advanced-matrix-extensions/ai-solution-brief.html.
* [3] Jakob Nikolas Kather, Cleo-Aron Weis, Francesco Bianconi, Susanne M Melchers, Lothar R Schad, Timo Gaiser, Alexander Marx, and Frank Gerrit Zöllner. Multi-class texture analysis in colorectal cancer histology. _Scientific reports_ , 6:27988, 2016.
* [4] Intel ResNet 50v1.5 models. URL https://github.com/IntelAI/models/tree/master/benchmarks/image_recognition/tensorflow/resnet50v1_5.
* [5] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep Residual Learning for Image Recognition, 2015. URL https://arxiv.org/abs/1512.03385.
* [6] Sirithep Plumworasawat and Napa Sae-Bae. Colorectal tissue image classification across datasets. In _2023 20th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON)_ , pages 1–4, 2023. doi:10.1109/ECTI-CON58255.2023.10153365.
* [7] Intel oneAPI Deep Neural Network Library. URL https://www.intel.com/content/www/us/en/developer/tools/oneapi/onednn.html.
* [8] Intel oneAPI AI Analytics Toolkit. URL https://www.intel.com/content/www/us/en/developer/tools/oneapi/toolkits.html.
* [9] Alexander Sergeev and Mike Del Balso. Horovod: fast and easy distributed deep learning in TensorFlow, 2018.
* [10] Edgar Gabriel, Graham E. Fagg, George Bosilca, Thara Angskun, Jack J. Dongarra, Jeffrey M. Squyres, Vishal Sahay, Prabhanjan Kambadur, Brian Barrett, Andrew Lumsdaine, Ralph H. Castain, David J. Daniel, Richard L. Graham, and Timothy S. Woodall. Open MPI: Goals, concept, and design of a next generation MPI implementation. In _Proceedings, 11th European PVM/MPI Users’ Group Meeting_ , pages 97–104, Budapest, Hungary, September 2004.
|
11footnotetext: Université de Lorraine, IECL, UMR 7502, Campus Scientifique,
B.P. 70239, Vandœuvre-lès-Nancy Cedex, F-54506, France22footnotetext: CNRS,
IECL, UMR 7502, Vandœuvre-lès-Nancy, F-54506, France33footnotetext: Inria,
TOSCA team, Villers-lès-Nancy, F-54600, France.
E-mail<EMAIL_ADDRESS><EMAIL_ADDRESS>
# Uniform convergence to the $Q$-process
Nicolas Champagnat1,2,3, Denis Villemonais1,2,3
###### Abstract
The first aim of the present note is to quantify the speed of convergence of a
conditioned process toward its $Q$-process under suitable assumptions on the
quasi-stationary distribution of the process. Conversely, we prove that, if a
conditioned process converges uniformly to a conservative Markov process which
is itself ergodic, then it admits a unique quasi-stationary distribution and
converges toward it exponentially fast, uniformly in its initial distribution.
As an application, we provide a conditional ergodic theorem.
Keywords: quasi-stationary distribution; $Q$-process; uniform exponential
mixing property; conditional ergodic theorem
2010 Mathematics Subject Classification. 60J25; 37A25; 60B10.
## 1 Introduction
Let $(\Omega,({\cal F}_{t})_{t\geq 0},(X_{t})_{t\geq 0},(\mathbb{P}_{x})_{x\in
E\cup\\{\partial\\}})$ be a time homogeneous Markov process with state space
$E\cup\\{\partial\\}$, where $E$ is a measurable space. We assume that
$\partial\not\in E$ is an absorbing state for the process, which means that
$X_{s}=\partial$ implies $X_{t}=\partial$ for all $t\geq s$,
$\mathbb{P}_{x}$-almost surely for all $x\in E$. In particular,
$\tau_{\partial}:=\inf\\{t\geq 0,X_{t}=\partial\\}$
is a stopping time. We also assume that
$\mathbb{P}_{x}(\tau_{\partial}<\infty)=1$ and
$\mathbb{P}_{x}(t<\tau_{\partial})>0$ for all $t\geq 0$ and all $x\in E$.
A probability measure $\alpha$ on $E$ is called a quasi-stationary
distribution if
$\mathbb{P}_{\alpha}(X_{t}\in\cdot\mid t<\tau_{\partial})=\alpha,\quad\forall
t\geq 0.$
We refer the reader to [7, 9, 4] and references therein for extensive
developments and several references on the subject. It is well known that a
probability measure $\alpha$ is a quasi-stationary distribution if and only if
there exists a probability measure $\mu$ on $E$ such that
$\displaystyle\lim_{t\rightarrow+\infty}\mathbb{P}_{\mu}(X_{t}\in A\mid
t<\tau_{\partial})=\alpha(A)$ (1.1)
for all measurable subsets $A$ of $E$.
In [2], we provided a necessary and sufficient condition on $X$ for the
existence of a probability measure $\alpha$ on $E$ and constants $C,\gamma>0$
such that
$\left\|\mathbb{P}_{\mu}(X_{t}\in\cdot\mid
t<\tau_{\partial})-\alpha\right\|_{TV}\leq Ce^{-\gamma
t},\quad\forall\mu\in\mathcal{P}(E),\quad t\geq 0,$ (1.2)
where $\|\cdot\|_{TV}$ is the total variation norm and $\mathcal{P}(E)$ is the
set of probability measures on $E$. This immediately implies that $\alpha$ is
the unique quasi-stationary distribution of $X$ and that (1.1) holds for any
initial probability measure $\mu$.
The necessary and sufficient condition for (1.2) is given by the existence of
a probability measure $\nu$ on $E$ and of constants $t_{0},c_{1},c_{2}>0$ such
that
$\mathbb{P}_{x}(X_{t_{0}}\in\cdot\mid t_{0}<\tau_{\partial})\geq
c_{1}\nu,\quad\forall x\in E$
and
$\mathbb{P}_{\nu}(t<\tau_{\partial})\geq
c_{2}\mathbb{P}_{x}(t<\tau_{\partial}),\quad\forall t\geq 0,\ x\in E.$
The first condition implies that, in the case of an unbounded state space $E$ (such as
$\mathbb{N}$ or $\mathbb{R}_{+}$), the process $(X_{t},t\geq 0)$ comes down
from infinity, in the sense that there exists a compact set $K\subset E$ such
that $\inf_{x\in E}\mathbb{P}_{x}(X_{t_{0}}\in K\mid t_{0}<\tau_{\partial})>0$.
This property is standard for biological population processes such as Lotka-
Volterra birth and death or diffusion processes [1, 3]. However, this is not
the case for some classical models, such as linear birth and death processes
or Ornstein-Uhlenbeck processes.
Many properties can be deduced from (1.2). For instance, this implies the
existence of a constant $\lambda_{0}>0$ such that
$\displaystyle\mathbb{P}_{\alpha}(t<\tau_{\partial})=e^{-\lambda_{0}t}$
and of a function $\eta:E\rightarrow(0,\infty)$ such that $\alpha(\eta)=1$ and
$\displaystyle\lim_{t\rightarrow+\infty}\sup_{x\in
E}\left|e^{\lambda_{0}t}\mathbb{P}_{x}(t<\tau_{\partial})-\eta(x)\right|=0$
(1.3)
as proved in [2, Prop. 2.3]. It also implies the existence and the exponential
ergodicity of the associated $Q$-process, defined as the process $X$
conditioned to never be extinct [2, Thm. 3.1]. More precisely, if (1.2) holds,
then the family $(\mathbb{Q}_{x})_{x\in E}$ of probability measures on
$\Omega$ defined by
$\displaystyle\mathbb{Q}_{x}(\Gamma)=\lim_{t\rightarrow+\infty}\mathbb{P}_{x}(\Gamma\mid
t<\tau_{\partial}),\ \forall\Gamma\in{\cal F}_{s},\ \forall s\geq 0,$ (1.4)
is well defined and the process $(\Omega,({\cal F}_{t})_{t\geq
0},(X_{t})_{t\geq 0},(\mathbb{Q}_{x})_{x\in E})$ is an $E$-valued homogeneous
Markov process. In addition, this process admits the unique invariant
probability measure (sometimes referred to as the doubly limiting quasi-
stationary distribution [5])
$\displaystyle\beta(dx)=\eta(x)\alpha(dx)$
and there exist constants $C^{\prime},\gamma^{\prime}>0$ such that, for any
$x\in E$ and all $t\geq 0$,
$\displaystyle\left\|\mathbb{Q}_{x}(X_{t}\in\cdot)-\beta\right\|_{TV}\leq
C^{\prime}e^{-\gamma^{\prime}t}.$ (1.5)
The first aim of the present note is to refine some results of [2] in order to
get sharper bounds on the convergence in (1.3) and to prove that the
convergence (1.4) holds in total variation norm, with uniform bounds over the
initial distribution (see Theorem 2.1). Using these new results, we obtain in
Corollary 2.3 that the uniform exponential convergence (1.2) implies that, for
all bounded measurable functions $f:E\rightarrow\mathbb{R}$ and all $T>0$,
$\displaystyle\left|\mathbb{E}_{x}\left(\frac{1}{T}\int_{0}^{T}f(X_{t})\,dt\mid
T<\tau_{\partial}\right)-\int_{E}f\,d\beta\right|\leq\frac{a\|f\|_{\infty}}{T},$
(1.6)
for some positive constant $a$. This result improves the very recent result
obtained independently by He, Zhang and Zhu [6, Thm. 2.1] by providing a
convergence estimate of order $1/T$. The interested reader may consult [6] for
interesting domination properties between the quasi-stationary distribution
$\alpha$ and the probability measure $\beta$.
The second aim of this note is to prove that the existence of the $Q$-process
with uniform bounds in (1.4) and its uniform exponential ergodicity (1.5) form
in fact a necessary and sufficient condition for the uniform exponential
convergence (1.2) toward a unique quasi-stationary distribution.
## 2 Main results
In this first result, we improve (1.3) and provide a uniform exponential bound
for the convergence (1.4) of the conditioned process toward the $Q$-process.
###### Theorem 2.1.
Assume that (1.2) holds. Then there exists a positive constant $a_{1}$ such
that
$\displaystyle\left|e^{\lambda_{0}t}\mathbb{P}_{x}(t<\tau_{\partial})-\eta(x)\right|\leq
a_{1}\,e^{\lambda_{0}t}\mathbb{P}_{x}(t<\tau_{\partial})e^{-\gamma t},$ (2.1)
where $\lambda_{0}$ and $\eta$ are the constant and function appearing in
(1.3) and where $\gamma>0$ is the constant from (1.2).
Moreover, there exists a positive constant $a_{2}$ such that, for all $t\geq
0$, for all $\Gamma\in\mathcal{F}_{t}$ and all $T\geq t$,
$\displaystyle\left\|\mathbb{Q}_{x}(\Gamma)-\mathbb{P}_{x}(\Gamma\mid
T<\tau_{\partial})\right\|_{TV}\leq a_{2}\,e^{-\gamma(T-t)},$ (2.2)
where $(\mathbb{Q}_{x})_{x\in E}$ is the $Q$-process defined in (1.4).
We emphasize that (2.1) is an improvement of (1.3), since the convergence is
actually exponential and, in many interesting examples, $\inf_{x\in
E}\mathbb{P}_{x}(t<\tau_{\partial})=0$. This is for example the case for
elliptic diffusion processes absorbed at the boundaries of an interval, since
the probability of absorption converges to 1 when the initial condition
converges to the boundaries of the interval. Theorem 2.1 admits a first
corollary.
###### Corollary 2.2.
Assume that (1.2) holds. Then there exists a positive constant $a_{3}$ such
that, for all $T>0$, all probability measures $\mu_{T}$ on $[0,T]$ and all
bounded measurable functions $f:E\rightarrow\mathbb{R}$,
$\left|\mathbb{E}_{x}\left(\int_{0}^{T}f(X_{t})\mu_{T}(dt)\mid
T<\tau_{\partial}\right)-\int_{E}f\,d\beta\right|\\\ \leq
a_{3}\|f\|_{\infty}\int_{0}^{T}\left(e^{-\gamma^{\prime}t}+e^{-\gamma(T-t)}\right)\mu_{T}(dt).$
(2.3)
This follows from (2.2), the exponential ergodicity of the $Q$-process stated
in (1.5) and the inequality
$\left|\mathbb{E}_{x}\left(\int_{0}^{T}f(X_{t})\mu_{T}(dt)\mid
T<\tau_{\partial}\right)-\int_{E}f\,d\beta\right|\\\
\leq\int_{0}^{T}\left|\mathbb{E}_{x}(f(X_{t})\mid
T<\tau_{\partial})-\mathbb{E}^{\mathbb{Q}_{x}}(f(X_{t}))\right|\,\mu_{T}(dt)\\\
+\int_{0}^{T}\left|\mathbb{E}^{\mathbb{Q}_{x}}(f(X_{t}))-\int_{E}f\,d\beta\right|\,\mu_{T}(dt),$
where $\mathbb{E}^{\mathbb{Q}_{x}}$ is the expectation with respect to
$\mathbb{Q}_{x}$.
In particular, choosing $\mu_{T}$ as the uniform distribution on $[0,T]$, we
obtain a conditional ergodic theorem.
###### Corollary 2.3.
Assume that (1.2) holds. Then there exists a positive constant $a_{4}$ such
that, for all $T>0$ and all bounded measurable functions
$f:E\rightarrow\mathbb{R}$,
$\displaystyle\left|\mathbb{E}_{x}\left(\frac{1}{T}\int_{0}^{T}f(X_{t})\,dt\mid
T<\tau_{\partial}\right)-\int_{E}f\,d\beta\right|\leq\frac{a_{4}\,\|f\|_{\infty}}{T}.$
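A discrete-time analogue of this conditional ergodic average can be evaluated exactly with matrix powers, since $\mathbb{E}_{x}[f(X_{t})\mathbbm{1}_{T<\tau_{\partial}}]=\delta_{x}Q^{t}\,\mathrm{diag}(f)\,Q^{T-t}\mathbf{1}$ for a chain with sub-stochastic kernel $Q$ on $E$. The sketch below (illustrative only; the birth-death chain and all values are hypothetical) checks that the error decreases as $T$ grows, consistently with the $a_{4}\|f\|_{\infty}/T$ bound:

```python
import numpy as np

# Birth-death chain on {0, ..., N} absorbed at 0, restricted to E = {1, ..., N}.
N, p, q = 20, 0.4, 0.5
Q = np.zeros((N, N))
for i in range(N):
    if i > 0:
        Q[i, i - 1] = q
    if i < N - 1:
        Q[i, i + 1] = p
    Q[i, i] = 1.0 - p - q if i < N - 1 else 1.0 - q
f = np.arange(1.0, N + 1.0)                   # f(x) = x on E

# eta (right) and alpha (left) Perron eigenvectors; beta proportional to eta*alpha
wr, Vr = np.linalg.eig(Q)
eta = np.abs(np.real(Vr[:, np.argmax(np.real(wr))]))
wl, Vl = np.linalg.eig(Q.T)
alpha = np.abs(np.real(Vl[:, np.argmax(np.real(wl))]))
beta = eta * alpha
beta /= beta.sum()

def cond_time_average(x, T):
    """E_x[(1/(T+1)) sum_t f(X_t) | T < tau], computed exactly."""
    fwd = np.zeros(N)
    fwd[x] = 1.0                              # delta_x Q^t, built forward
    fwds = [fwd.copy()]
    for _ in range(T):
        fwd = fwd @ Q
        fwds.append(fwd.copy())
    bwd = np.ones(N)                          # Q^{T-t} 1, built backward
    total = 0.0
    for t in range(T, -1, -1):
        total += fwds[t] @ (f * bwd)          # E_x[f(X_t) 1_{T<tau}]
        bwd = Q @ bwd
    return total / (T + 1) / fwds[T].sum()    # divide by P_x(T < tau)

err200 = abs(cond_time_average(N - 1, 200) - f @ beta)
err400 = abs(cond_time_average(N - 1, 400) - f @ beta)
```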
Considering the problem of estimating $\beta$ from $N$ realizations of the
unconditioned process $X$, one wishes to take $T$ as small as possible in
order to obtain the most samples such that $T<\tau_{\partial}$ (of order
$N_{T}=Ne^{-\lambda_{0}T}$). It is therefore important to minimize the error
in (2.3) for a given $T$. It is easy to check that $\mu_{T}=\delta_{t_{0}}$
with $t_{0}=\gamma T/(\gamma+\gamma^{\prime})$ is optimal with an error of the
order of $\exp(-\gamma^{\prime}\gamma T/(\gamma+\gamma^{\prime}))$. Combining
this with the Monte Carlo error of order $1/\sqrt{N_{T}}$, we obtain a global
error of order
$\frac{e^{\lambda_{0}T/2}}{\sqrt{N}}+e^{-\gamma\gamma^{\prime}T/(\gamma+\gamma^{\prime})}.$
In particular, for a fixed $N$, the optimal choice for $T$ is
$T\approx\frac{\log
N}{\lambda_{0}+2\gamma\gamma^{\prime}/(\gamma+\gamma^{\prime})}$ and the error
is of the order of $N^{-\zeta}$ with
$\zeta=\frac{\gamma\gamma^{\prime}}{2\gamma\gamma^{\prime}+\lambda_{0}(\gamma+\gamma^{\prime})}$.
Conversely, for a fixed $T$, the best choice for $N$ is
$N\approx\exp((\lambda_{0}+2\gamma\gamma^{\prime}/(\gamma+\gamma^{\prime}))T)$
and the error is of the order of
$\exp(-\gamma\gamma^{\prime}T/(\gamma+\gamma^{\prime}))$.
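This error balance can be checked numerically; in the following sketch the rates $\lambda_{0},\gamma,\gamma^{\prime}$ are hypothetical placeholders (in practice they are model-dependent constants):

```python
import numpy as np

# Error balance for estimating beta from N sample paths (hypothetical rates).
lam0, gam, gamp = 0.2, 1.0, 0.5
geff = gam * gamp / (gam + gamp)              # gamma*gamma'/(gamma + gamma')

def total_error(T, n_samples):
    # Monte Carlo term + bias term from (2.3) with the optimal mu_T = delta_{t0}
    return np.exp(lam0 * T / 2) / np.sqrt(n_samples) + np.exp(-geff * T)

n_samples = 10**6
T_opt = np.log(n_samples) / (lam0 + 2 * geff)
zeta = gam * gamp / (2 * gam * gamp + lam0 * (gam + gamp))

# At T_opt the two error terms are balanced and both equal n_samples**(-zeta)
mc = np.exp(lam0 * T_opt / 2) / np.sqrt(n_samples)
bias = np.exp(-geff * T_opt)
```

Note that $\zeta$ can equivalently be written $\zeta=\tilde\gamma/(\lambda_{0}+2\tilde\gamma)$ with $\tilde\gamma=\gamma\gamma^{\prime}/(\gamma+\gamma^{\prime})$, which makes the balance at $T_{\rm opt}$ transparent.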
We conclude this section with a converse to Theorem 2.1. More precisely, we
give a converse to the fact that (1.2) implies both (1.5) and (2.2).
###### Theorem 2.4.
Assume that there exists a Markov process $(\mathbb{Q}_{x})_{x\in E}$ with
state space $E$ such that, for all $t>0$,
$\displaystyle\lim_{T\rightarrow+\infty}\sup_{x\in
E}\left\|\mathbb{Q}_{x}(X_{t}\in\cdot)-\mathbb{P}_{x}(X_{t}\in\cdot\mid
T<\tau_{\partial})\right\|_{TV}=0$ (2.4)
and such that
$\displaystyle\lim_{t\rightarrow+\infty}\sup_{x,y\in
E}\left\|\mathbb{Q}_{x}(X_{t}\in\cdot)-\mathbb{Q}_{y}(X_{t}\in\cdot)\right\|_{TV}=0.$
(2.5)
Then the process $(\mathbb{P}_{x})_{x\in E}$ admits a unique quasi-stationary
distribution $\alpha$ and there exist positive constants $\gamma,C$ such that
(1.2) holds.
It is well known that the strong ergodicity (2.5) of a Markov process implies
its exponential ergodicity [8, Thm. 16.0.2]. Similarly, we observe in our
situation that, if (2.4) and (2.5) hold, then the combination of the above
results implies that both convergences hold exponentially.
## 3 Proofs
### 3.1 Proof of Theorem 2.1
For all $x\in E$, we set
$\displaystyle\eta_{t}(x)=\frac{\mathbb{P}_{x}(t<\tau_{\partial})}{\mathbb{P}_{\alpha}(t<\tau_{\partial})}=e^{\lambda_{0}t}\mathbb{P}_{x}(t<\tau_{\partial}),$
and we recall from [2, Prop. 2.3] that $\eta_{t}(x)$ is uniformly bounded
with respect to $t\geq 0$ and $x\in E$. By the Markov property,
$\displaystyle\eta_{t+s}(x)$
$\displaystyle=e^{\lambda_{0}(t+s)}\mathbb{E}_{x}\left(\mathbbm{1}_{t<\tau_{\partial}}\mathbb{P}_{X_{t}}(s<\tau_{\partial})\right)$
$\displaystyle=\eta_{t}(x)\mathbb{E}_{x}\left(\eta_{s}(X_{t})\mid
t<\tau_{\partial}\right).$
By (1.2), there exists a constant $C^{\prime}$ independent of $s$ such that
$\displaystyle\left|\mathbb{E}_{x}\left(\eta_{s}(X_{t})\mid
t<\tau_{\partial}\right)-\int_{E}\eta_{s}d\alpha\right|\leq
C^{\prime}\,e^{-\gamma t}.$
Since $\int\eta_{s}d\alpha=1$, there exists a constant $a_{1}>0$ such that,
for all $x\in E$ and $s,t\geq 0$,
$\displaystyle\left|\frac{\eta_{t+s}(x)}{\eta_{t}(x)}-1\right|\leq
a_{1}\,e^{-\gamma t}.$
Hence, multiplying both sides by $\eta_{t}(x)$ and letting $s$ tend to
infinity, we deduce from (1.3) that, for all $x\in E$,
$\displaystyle\left|\eta(x)-\eta_{t}(x)\right|\leq a_{1}\,e^{-\gamma
t}\eta_{t}(x),\,\forall t\geq 0,$
which is exactly (2.1). We also deduce that
$\displaystyle\left(1-a_{1}e^{-\gamma
t}\right)\eta_{t}(x)\leq\eta(x)\leq\left(1+a_{1}e^{-\gamma
t}\right)\eta_{t}(x)$ (3.1)
and hence, for $t$ large enough,
$\displaystyle\frac{\eta(x)}{1+a_{1}e^{-\gamma
t}}\leq\eta_{t}(x)\leq\frac{\eta(x)}{1-a_{1}e^{-\gamma t}}.$ (3.2)
Let us now prove the second part of Theorem 2.1. For any $0\leq t\leq T$ and
$\Gamma\in\mathcal{F}_{t}$,
$\displaystyle\mathbb{P}_{x}\left(\Gamma\mid T<\tau_{\partial}\right)$
$\displaystyle=\frac{\mathbb{P}_{x}\left(\Gamma\cap\\{T<\tau_{\partial}\\}\right)}{\mathbb{P}_{x}(T<\tau_{\partial})}$
$\displaystyle=\frac{e^{\lambda_{0}T}\mathbb{P}_{x}\left(\Gamma\cap\\{T<\tau_{\partial}\\}\right)}{\eta(x)}\,\frac{\eta(x)}{e^{\lambda_{0}T}\mathbb{P}_{x}(T<\tau_{\partial})}.$
We deduce from (2.1) that
$\displaystyle\left|\frac{\eta(x)}{e^{\lambda_{0}T}\mathbb{P}_{x}(T<\tau_{\partial})}-1\right|\leq
a_{1}e^{-\gamma T},$
while, for all $T>\frac{\log a_{1}}{\gamma}$, (3.2) entails
$\displaystyle\left|\frac{e^{\lambda_{0}T}\mathbb{P}_{x}\left(\Gamma\cap\\{T<\tau_{\partial}\\}\right)}{\eta(x)}\right|\leq\frac{\eta_{T}(x)}{\eta(x)}\leq\frac{1}{1-a_{1}e^{-\gamma
T}}.$
Hence, for all $t\geq 0$ and all $T>\frac{\log a_{1}}{\gamma}$,
$\displaystyle\left|\mathbb{P}_{x}\left(\Gamma\mid
T<\tau_{\partial}\right)-\frac{e^{\lambda_{0}T}\mathbb{P}_{x}\left(\Gamma\cap\\{T<\tau_{\partial}\\}\right)}{\eta(x)}\right|\leq\frac{a_{1}e^{-\gamma
T}}{1-a_{1}e^{-\gamma T}}.$ (3.3)
Now, the Markov property implies that
$\displaystyle\mathbb{P}_{x}\left(\Gamma\cap\\{T<\tau_{\partial}\\}\right)=\mathbb{E}_{x}\left(\mathbbm{1}_{\Gamma}\mathbb{P}_{X_{t}}(T-t<\tau_{\partial})\right),$
and we deduce from (3.3) that, for all $T>t+\frac{\log a_{1}}{\gamma}$,
$\displaystyle\left|e^{\lambda_{0}(T-t)}\mathbb{P}_{X_{t}}(T-t<\tau_{\partial})-\eta(X_{t})\right|\leq\frac{a_{1}e^{-\gamma(T-t)}}{1-a_{1}e^{-\gamma(T-t)}}\eta(X_{t}).$
Thus we have
$\left|\frac{e^{\lambda_{0}T}\mathbb{P}_{x}\left(\Gamma\cap\\{T<\tau_{\partial}\\}\right)}{\eta(x)}-\frac{e^{\lambda_{0}t}\mathbb{E}_{x}\left(\mathbbm{1}_{\Gamma}\eta(X_{t})\right)}{\eta(x)}\right|\\\
\begin{aligned}
&\leq\frac{e^{\lambda_{0}t}}{\eta(x)}\mathbb{E}_{x}\left[\mathbbm{1}_{\Gamma}\left|e^{\lambda_{0}(T-t)}\mathbb{P}_{X_{t}}(T-t<\tau_{\partial})-\eta(X_{t})\right|\right]\\\
&\leq\frac{a_{1}e^{-\gamma(T-t)}}{1-a_{1}e^{-\gamma(T-t)}}\frac{e^{\lambda_{0}t}\mathbb{E}_{x}(\eta(X_{t}))}{\eta(x)}\\\
&=\frac{a_{1}e^{-\gamma(T-t)}}{1-a_{1}e^{-\gamma(T-t)}},\end{aligned}$
where we used the fact that
$\mathbb{E}_{x}\eta(X_{h})=e^{-\lambda_{0}h}\eta(x)$ for all $h>0$ (see [2,
Prop. 2.3]). This and (3.3) allow us to conclude that, for all $t\geq 0$ and
all $T>t+\frac{\log a_{1}}{\gamma}$,
$\displaystyle\left|\mathbb{P}_{x}\left(\Gamma\mid
T<\tau_{\partial}\right)-\frac{e^{\lambda_{0}t}\mathbb{E}_{x}\left(\mathbbm{1}_{\Gamma}\eta(X_{t})\right)}{\eta(x)}\right|\leq\frac{2a_{1}e^{-\gamma
T}}{1-a_{1}e^{-\gamma T}}.$
Since
$\mathbb{Q}_{x}(\Gamma)=e^{\lambda_{0}t}\mathbb{E}_{x}\left(\mathbbm{1}_{\Gamma}\,\eta(X_{t})\right)/\eta(x)$
(see [2, Thm. 3.1 (ii)]), we deduce that (2.2) holds true.
This concludes the proof of Theorem 2.1.
### 3.2 Proof of Theorem 2.4
We deduce from (2.4) and (2.5) that there exist $t_{1}>0$ and $T_{1}>0$ such
that, for all $T\geq T_{1}$,
$\displaystyle\sup_{x,y\in E}\left\|\mathbb{P}_{x}(X_{t_{1}}\in\cdot\mid
T<\tau_{\partial})-\mathbb{P}_{y}(X_{t_{1}}\in\cdot\mid
T<\tau_{\partial})\right\|_{TV}\leq 1/2.$
In particular, for all $s\geq 0$ and all $T\geq s+T_{1}$,
$\displaystyle\sup_{x,y\in
E}\left\|\delta_{x}R_{s,s+t_{1}}^{T}-\delta_{y}R_{s,s+t_{1}}^{T}\right\|_{TV}\leq
1/2,$ (3.4)
where, for all $0\leq s\leq t\leq T$, $R_{s,t}^{T}$ is the linear operator
defined by
$\displaystyle\delta_{x}R_{s,t}^{T}f$
$\displaystyle=\mathbb{E}_{x}(f(X_{t-s})\mid T-s<\tau_{\partial})$
$\displaystyle=\mathbb{E}(f(X_{t})\mid X_{s}=x,\ T<\tau_{\partial})$
$\displaystyle=\delta_{x}R_{0,t-s}^{T-s}f,$
where we used the Markov property. Now, for any $T>0$, the family
$(R_{s,t}^{T})_{0\leq s\leq t\leq T}$ is a Markov semi-group. This semi-group
property and the contraction (3.4) classically imply that, for all $T\geq
T_{1}$,
$\displaystyle\sup_{x,y\in
E}\left\|\delta_{x}R_{0,T}^{T}-\delta_{y}R_{0,T}^{T}\right\|_{TV}\leq\left(1/2\right)^{\lfloor
T-T_{1}\rfloor/t_{1}}.$
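The passage from the one-step contraction (3.4) to this exponential bound is the classical submultiplicativity of the Dobrushin contraction coefficient, which the following toy computation (a hypothetical $5\times 5$ kernel, purely for illustration) makes concrete:

```python
import numpy as np

# Dobrushin contraction: if sup_{x,y} ||delta_x P - delta_y P||_TV <= 1/2,
# then sup_{x,y} ||delta_x P^n - delta_y P^n||_TV <= (1/2)^n.
rng = np.random.default_rng(0)
R = rng.random((5, 5))
R /= R.sum(axis=1, keepdims=True)            # arbitrary stochastic matrix
nu = np.full(5, 0.2)                         # reference probability vector
P = 0.5 * R + 0.5 * nu                       # minorization: delta_x P >= nu / 2

def tv(u, v):
    return 0.5 * np.abs(u - v).sum()

M = np.eye(5)
bounds_ok = True
for n in range(1, 7):
    M = M @ P                                # row i of M is delta_i P^n
    worst = max(tv(M[i], M[j]) for i in range(5) for j in range(5))
    bounds_ok = bounds_ok and worst <= 0.5 ** n + 1e-12
```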
Then, proceeding as in [2, Section 5.1], we deduce that (1.2) holds true. This
concludes the proof of Theorem 2.4.
## References
* [1] P. Cattiaux, P. Collet, A. Lambert, S. Martínez, S. Méléard, and J. San Martín. Quasi-stationary distributions and diffusion models in population dynamics. Ann. Probab., 37(5):1926–1969, 2009.
* [2] N. Champagnat and D. Villemonais. Exponential convergence to quasi-stationary distribution and Q-process. Probability Theory and Related Fields, 164(1):243–283, 2016.
* [3] N. Champagnat and D. Villemonais. Lyapunov criteria for uniform convergence of conditional distributions of absorbed Markov processes. ArXiv e-prints, Apr. 2017.
* [4] P. Collet, S. Martínez, and J. Martín. Quasi-Stationary Distributions: Markov Chains, Diffusions and Dynamical Systems. Probability and Its Applications. Springer Berlin Heidelberg, 2012.
* [5] D. C. Flaspohler. Quasi-stationary distributions for absorbing continuous-time denumerable Markov chains. Ann. Inst. Statist. Math., 26:351–356, 1974.
* [6] G. He, H. Zhang, and Y. Zhu. On the quasi-ergodic distribution of absorbing Markov processes. ArXiv e-prints, Nov. 2016.
* [7] S. Méléard and D. Villemonais. Quasi-stationary distributions and population processes. Probab. Surv., 9:340–410, 2012.
* [8] S. P. Meyn and R. L. Tweedie. Markov chains and stochastic stability. Communications and Control Engineering Series. Springer-Verlag London, Ltd., London, 1993.
* [9] E. A. van Doorn and P. K. Pollett. Quasi-stationary distributions for discrete-state models. European J. Oper. Res., 230(1):1–14, 2013.
# Optimal photon polarization toward the observation of the nonlinear Breit-
Wheeler pair production
Yunquan Gao College of Physics and Optoelectronic Engineering, Ocean
University of China, Qingdao, Shandong, 266100, China Suo Tang
<EMAIL_ADDRESS>College of Physics and Optoelectronic Engineering, Ocean
University of China, Qingdao, Shandong, 266100, China
###### Abstract
We investigate the optimization of the photon polarization to increase the
yield of the Breit-Wheeler pair production in arbitrarily polarized plane wave
backgrounds. We show that the optimized photon polarization can improve the
positron yield by more than $20\%$ compared to the unpolarized case, in the
intensity regime of current laser-particle experiments. The seed photon’s
optimal polarization is resulting from the polarization coupling with the
polarization of the laser pulse. The compact expressions of the coupling
coefficients in both the perturbative and nonperturbative regime are given.
Because of the evident difference in the coupling coefficients for the linear
and circular polarization components, the seed photon’s optimal polarization
state in an elliptically polarized laser background, deviates considerably
from the orthogonal state of the laser polarization.
## I Introduction
The production of an electron-positron pair in the collision of two high-
energy photons, now referred to as the linear Breit-Wheeler (LBW) process, was
first proposed in the 1930s Breit and Wheeler (1934). The production yield depends not
only on the photons’ dynamical parameters, but also on the relative
polarization of the two photons Breit and Wheeler (1934); Baier and Grozin
(2002); Adam et al. (2021).
With the improvement of the laser intensity, the decay of a single high-energy
photon into a pair of electron and positron in the collision with an intense
laser pulse, which is often referred to as the nonlinear Breit-Wheeler (NBW)
pair production Reiss (1962); Di Piazza et al. (2012); Gonoskov et al. (2021);
Fedotov et al. (2022), has been measured in the multiphoton perturbative
regime via the landmark E144 experiment more than two decades ago Burke et al.
(1997); Bamber et al. (1999) and been broadly studied within different types of
laser fields Nikishov and Ritus (1964); Heinzl et al. (2010); Krajewska and
Kamiński (2012); Titov et al. (2012); Fedotov and Mironov (2013); Titov et al.
(2016); Jansen and Müller (2013); Jansen and Müller (2017); Titov et al.
(2018); Ilderton (2019, 2020); King (2020); Tang (2021); Tang and King (2021).
The dependence of the NBW process on the polarization state of the seed photon
has also been partially investigated in the current literature Ivanov et al.
(2005); Katkov (2012); Li et al. (2020); Chen et al. (2022); Wistisen (2020);
Titov and Kämpfer (2020); Seipt and King (2020); Tang (2022), in which the
laser backgrounds are commonly specified with the pure linear and/or circular
polarization, and the production yield could be considerably
improved/suppressed when the polarization of the seed photon is set to be
orthogonal/parallel to that of the background field Titov and Kämpfer (2020);
Seipt and King (2020); Tang (2022). However, in an arbitrarily polarized laser
background, how to assign the photon polarization to acquire the maximal
production yield has not been clearly investigated.
In the LBW process, the polarization dependence of the production results
from the polarization coupling between the two high-energy photons Breit and
Wheeler (1934); Baier and Grozin (2002); Adam et al. (2021). However, how the
polarization of the seed photon couples with that of the laser pulse (or
multiple laser photons) in the NBW process is still not clear. In this paper,
we concentrate on the properties of the polarization coupling between the seed
photon and the laser pulse and reveal the optimal polarization of the seed
photon for the maximal yield of the NBW process in arbitrarily polarized laser
backgrounds. We find that the linear and circular polarization components of
the seed photon couple to the corresponding components of the laser
polarization with quite different coefficients, and thus, in an
elliptically polarized laser pulse, the optimal polarization state of the seed
photon deviates considerably from the orthogonal state of the laser
polarization.
The study of the optimal photon polarization for the maximal production yield
is partly motivated by the upcoming high-energy laser-particle experiments,
_i.e._ , LUXE at DESY Abramowicz et al. (2021); Borysova (2021); Macleod
(2022); Jacobs (2021) and E320 at SLAC Meuren (2019); Naranjo et al. (2021);
Salgado et al. (2021); Meuren et al. (2020) in which beams of photons with the
energy $O(10~{}\textrm{GeV})$ are generated to collide with laser pulses with
the intermediate intensity $\xi\sim O(1)$, and one of their main goals is to
detect the NBW process in the transition regime from the perturbative to the
non-perturbative regime Macleod (2022); Jacobs (2021), where $\xi$ is the
classical nonlinearity parameter for laser intensity. In this planned
intensity regime, the production yield could be enhanced/suppressed
considerably by the photon polarization effect Wistisen (2020); Tang (2022).
The paper is organised as follows. The theoretical model and relevant
parameters are introduced in Sec. II. In Sec. III, we first explore the
perturbative intensity regime and discuss the photon polarization coupling in
the LBW process, and then, we go to the non-perturbative intensity regime to
discuss the polarization coupling between the seed photon and the laser pulse
in the NBW precess in Sec. IV. At the end, we conclude in Sec. V. In following
discussions, the natural units $\hbar=c=1$ is used, and the fine structure
constant is $\alpha=e^{2}\approx 1/137$.
## II Theoretical model
We consider the typical scenario in the modern-day laser-particle experiments
in which a beam of high-energy photons interacts with an intense laser pulse
in the geometry close to the head-on collision. The laser pulse is modelled as
a plane wave with scaled vector potential $a^{\mu}(\phi)=|e|A^{\mu}(\phi)$
depending only on the laser phase $\phi=k\cdot x$, where
$k^{\mu}=\omega(1,0,0,-1)$ is the laser wave vector, $\omega$ is the central
frequency of the laser pulse and $|e|$ is the charge of the positron. This
plane wave background is a good approximation for collisions between high-
energy particles and weakly focussed pulses Nikishov and Ritus (1964); Di
Piazza (2015, 2016, 2017, 2021). The collision is characterized by the energy
parameter $\eta=k\cdot\ell/m^{2}$ and laser intensity parameter $\xi$, where
$\ell^{\mu}$ is the photon momentum and $m$ is electron rest mass.
The total yield of the NBW pair production from a polarized seed photon is
given as Tang (2022):
$\displaystyle{P}=$
$\displaystyle\frac{\alpha}{(2\pi\eta)^{2}}\int\frac{\mathrm{d}s}{ts}\int\mathrm{d}^{2}\bm{r}\iint\mathrm{d}\phi_{1}\mathrm{d}\phi_{2}~{}e^{i\int_{\phi_{2}}^{\phi_{1}}\mathrm{d}\phi^{\prime}\frac{\ell\cdot\pi_{q}(\phi^{\prime})}{m^{2}\eta
t}}$
$\displaystyle\left\\{h_{s}\bm{\Delta}^{2}/2+1-ih_{s}\Gamma_{3}\bm{w}(\phi_{1})\times\bm{w}(\phi_{2})\right.$
$\displaystyle-\Gamma_{1}\left[w_{x}(\phi_{1})w_{x}(\phi_{2})-w_{y}(\phi_{1})w_{y}(\phi_{2})\right]$
$\displaystyle\left.-\Gamma_{2}\left[w_{x}(\phi_{1})w_{y}(\phi_{2})+w_{y}(\phi_{1})w_{x}(\phi_{2})\right]\right\\}\,,$
(1)
where
$\bm{\Delta}=i\left[\bm{a}^{{\scriptscriptstyle\perp}}(\phi_{1})-\bm{a}^{{\scriptscriptstyle\perp}}(\phi_{2})\right]/m$,
$h_{s}=(s^{2}+t^{2})/(2st)$,
$\bm{w}(\phi)=\bm{r}-\bm{a}^{{\scriptscriptstyle\perp}}(\phi)/m$, and
$\bm{w}(\phi_{1})\times\bm{w}(\phi_{2})=w_{x}(\phi_{1})w_{y}(\phi_{2})-w_{y}(\phi_{1})w_{x}(\phi_{2})$,
$s=k\cdot q/k\cdot\ell$ ($t=1-s$) is the fraction of the light front momentum
taken by the produced positron (electron), and
$\bm{r}=(\bm{q}^{{\scriptscriptstyle\perp}}-s\bm{\ell}^{{\scriptscriptstyle\perp}})/m$
denotes the transverse momenta of the positron, and $\pi_{q}(\phi)$ is the
positron’s instantaneous momentum in the laser pulse:
$\pi^{\mu}_{q}(\phi)=q^{\mu}-a^{\mu}(\phi)+\frac{q\cdot a(\phi)}{k\cdot
q}k^{\mu}-\frac{a^{2}(\phi)}{2k\cdot q}k^{\mu}\,.$
The polarization of the seed photon is comprehensively described with the
classical Stokes parameters $(\Gamma_{1},~{}\Gamma_{2},~{}\Gamma_{3})$
Berestetskii et al. (1982); Jackson (1999): $\Gamma_{1}$ ($\Gamma_{2}$) is the
degree of linear polarization indicating the preponderance of the polarization
in the $\varepsilon_{x}$ state ($\varepsilon_{45^{\circ}}$ state) over that in
the $\varepsilon_{y}$ state ($\varepsilon_{135^{\circ}}$ state), and
$\Gamma_{3}$ is the degree of circular polarization giving the preponderance
of the polarization in the $\varepsilon_{{\scriptscriptstyle+}}$ state over
that in the $\varepsilon_{{\scriptscriptstyle-}}$ state. The polarization
basis is given as
$\displaystyle\varepsilon^{\mu}_{x}$
$\displaystyle=\epsilon^{\mu}_{x}-\frac{\ell\cdot\epsilon_{x}}{k\cdot\ell}k^{\mu}\,,~{}~{}~{}\varepsilon^{\mu}_{y}=\epsilon^{\mu}_{y}-\frac{\ell\cdot\epsilon_{y}}{k\cdot\ell}k^{\mu}\,,$
$\displaystyle\varepsilon^{\mu}_{\psi}$
$\displaystyle=\epsilon^{\mu}_{\psi}-\frac{\ell\cdot\epsilon_{\psi}}{k\cdot\ell}k^{\mu}\,,~{}~{}~{}\varepsilon^{\mu}_{\pm}=\epsilon^{\mu}_{\pm}-\frac{\ell\cdot\epsilon_{\pm}}{k\cdot\ell}k^{\mu}\,,$
where $\epsilon^{\mu}_{x}=(0,1,0,0)$, $\epsilon^{\mu}_{y}=(0,0,1,0)$ and
$\epsilon_{\psi}=\epsilon_{x}\cos\psi+\epsilon_{y}\sin\psi$,
$\epsilon_{\pm}=(\epsilon_{x}\pm i\epsilon_{y})/\sqrt{2}$. For fully polarized
photon beams, the Stokes parameters satisfy
$\Gamma_{1}^{2}+\Gamma_{2}^{2}+\Gamma_{3}^{2}=1$ and for partially polarized
photon beams, $\Gamma_{1}^{2}+\Gamma_{2}^{2}+\Gamma_{3}^{2}<1$. The full
definition of the photon Stokes parameters
$(\Gamma_{1},~{}\Gamma_{2},~{}\Gamma_{3})$ can be found in Ref. Tang (2022).
Based on (1), the total yield of the NBW process can be phenomenologically
given as
$\displaystyle\textrm{P}=n_{0}+\Gamma_{1}n_{1}+\Gamma_{2}n_{2}+\Gamma_{3}n_{3}\,,$
(2)
where $n_{0}$ is the unpolarized contribution, independent of the photon
polarization $(\Gamma_{1,2,3}=0)$ Tang (2021); Tang and King (2021), and
$n_{1,2,3}$ denote the contributions coupling to the polarization of the seed
photon. As one can simply infer, to maximize the production yield,
$\displaystyle\textrm{P}_{m}=n_{0}+n_{p}\,,$ (3)
the photon polarization should be selected as
$\displaystyle(\Gamma_{1},\Gamma_{2},\Gamma_{3})=(n_{1},n_{2},n_{3})/(n_{1}^{2}+n_{2}^{2}+n_{3}^{2})^{1/2}\,,$
(4)
which prompts the existence of the optimal photon polarization for the
specified laser pulse and collision parameter $\eta$ to achieve the maximal
production yield, where $n_{p}=(n_{1}^{2}+n_{2}^{2}+n_{3}^{2})^{1/2}$ is the
maximal contribution from the photon polarization. However, if the
optimal polarization of the seed photon is reversed, _i.e._
$\Gamma_{1,2,3}\to-\Gamma_{1,2,3}$, the pair production would be largely
suppressed.
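As a quick numerical illustration of (2)-(4), with purely hypothetical coupling numbers $n_{0},n_{1},n_{2},n_{3}$ (in practice they come from evaluating (1) for a given pulse and energy parameter $\eta$):

```python
import numpy as np

# Hypothetical coupling numbers; n = (n_1, n_2, n_3) as in Eq. (2).
n0 = 1.0e-3
n = np.array([2.0e-4, 0.5e-4, -1.5e-4])

def nbw_yield(Gamma):
    return n0 + n @ Gamma                    # Eq. (2)

Gamma_opt = n / np.linalg.norm(n)            # Eq. (4): optimal Stokes vector
P_max = nbw_yield(Gamma_opt)                 # Eq. (3): n0 + n_p
P_min = nbw_yield(-Gamma_opt)                # reversed polarization: suppressed
```

The optimal $\Gamma$ is simply the unit vector along $(n_{1},n_{2},n_{3})$, and reversing it gives the minimal yield $n_{0}-n_{p}$.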
## III Linear Breit-Wheeler process
One may realize that the polarization contribution $\Gamma_{i}n_{i}$ in (2)
comes from the polarization coupling between the seed and laser photons, and
thus the optimal photon polarization (4) depends on the polarization of the
laser photons. To manifest this polarization coupling effect, we resort to the
perturbative approximation of (1), which is often referred to as the LBW
process, by expanding the integrand in (1), keeping only
$\mathcal{O}(\xi^{2})$ terms and integrating over $s$,
$\displaystyle{P}_{\ell}=$
$\displaystyle\frac{\pi\alpha^{2}\lambdabar^{2}_{e}}{2}\int_{\nu_{*}}^{+\infty}\mathrm{d}{\nu}~{}D(\nu)$
$\displaystyle\left\\{\Xi+\kappa_{c}\Gamma_{3}\varsigma_{3}(\nu)+\kappa_{l}[\Gamma_{1}\varsigma_{1}(\nu)+\Gamma_{2}\varsigma_{2}(\nu)]\right\\}\,,$
(5)
where $\nu_{*}=2/\eta$ is the frequency threshold of the laser photon required
to trigger the pair production,
$D(\nu)=\nu|\tilde{\bm{a}}(\nu)|^{2}/(4\pi^{2}\alpha\lambdabar^{2}_{e}m^{2})$
is the (areal) number density of the laser photon with the frequency
$\nu\omega$, $\lambdabar_{e}=1/m=386.16~{}\textrm{fm}$ is the electron’s
reduced Compton wavelength,
$\tilde{\bm{a}}(\nu)=\int\mathrm{d}\phi[a_{x}(\phi),~{}a_{y}(\phi)]\exp(i\nu\phi)$,
and
$\displaystyle\begin{aligned}
\varsigma_{1}(\nu)&=\frac{|\tilde{a}_{x}(\nu)|^{2}-|\tilde{a}_{y}(\nu)|^{2}}{|\tilde{\bm{a}}(\nu)|^{2}},&\\\
\varsigma_{2}(\nu)&=\frac{\tilde{a}^{*}_{x}(\nu)\tilde{a}_{y}(\nu)+\tilde{a}_{x}(\nu)\tilde{a}_{y}^{*}(\nu)}{|\tilde{\bm{a}}(\nu)|^{2}},&\\\
\varsigma_{3}(\nu)&=i\frac{\tilde{a}_{x}(\nu)\tilde{a}_{y}^{*}(\nu)-\tilde{a}^{*}_{x}(\nu)\tilde{a}_{y}(\nu)}{|\tilde{\bm{a}}(\nu)|^{2}},\end{aligned}$
(6)
are the classical Stokes parameters of the laser photon $\nu\omega$ Jackson
(1999), satisfying
$\varsigma^{2}_{1}(\nu)+\varsigma^{2}_{2}(\nu)+\varsigma^{2}_{3}(\nu)=1$.
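Equation (6) can be evaluated directly from the Fourier transform of the potential. The sketch below uses a hypothetical elliptically polarized pulse $\bm{a}(\phi)\propto g(\phi)(\cos\phi,\ \epsilon\sin\phi)$ with a smooth envelope $g$ (not a pulse from this paper); for such a pulse one expects $\varsigma_{1}\approx(1-\epsilon^{2})/(1+\epsilon^{2})$ and $\varsigma_{3}\approx 2\epsilon/(1+\epsilon^{2})$ at the central frequency $\nu=1$:

```python
import numpy as np

# Stokes parameters of the laser Fourier mode nu, Eq. (6), for a hypothetical
# elliptically polarized pulse with ellipticity eps and a cos^2 envelope.
eps = 0.6
phi = np.linspace(-40.0, 40.0, 20001)
dphi = phi[1] - phi[0]
g = np.cos(np.pi * phi / 80.0) ** 2          # smooth envelope, vanishes at +-40
ax = g * np.cos(phi)
ay = eps * g * np.sin(phi)

def ft(a, nu):
    # a~(nu) = int dphi a(phi) exp(i nu phi), simple Riemann sum
    return np.sum(a * np.exp(1j * nu * phi)) * dphi

tax, tay = ft(ax, 1.0), ft(ay, 1.0)
norm = abs(tax) ** 2 + abs(tay) ** 2
s1 = (abs(tax) ** 2 - abs(tay) ** 2) / norm
s2 = 2.0 * np.real(np.conj(tax) * tay) / norm
s3 = 2.0 * np.imag(np.conj(tax) * tay) / norm  # = i(ax ay* - ax* ay)/norm
```

By construction $\varsigma_{1}^{2}+\varsigma_{2}^{2}+\varsigma_{3}^{2}=1$ for any complex pair $(\tilde{a}_{x},\tilde{a}_{y})$, which the computation reproduces.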
As for the seed photon, $\varsigma_{1,2,3}(\nu)$ characterize the
polarization property of the laser photon: $\varsigma_{1}(\nu)$
[$\varsigma_{2}(\nu)$] describes the preponderance of the $\epsilon_{x}$
($\epsilon_{45^{\circ}}$)-linear polarization over the $\epsilon_{y}$
($\epsilon_{135^{\circ}}$)-linear polarization, and $\varsigma_{3}(\nu)$
denotes the preponderance of the $\epsilon_{+}$-circular polarization over the
$\epsilon_{-}$-circular polarization. The parameter
$\displaystyle\Xi=(1-\beta^{2})\left[(3-\beta^{4})\ln\left(\frac{1+\beta}{1-\beta}\right)-2\beta(2-\beta^{2})\right]$
(7)
is the contribution from unpolarized photons Ng and Tsai (1977); Greiner and
Reinhardt (2009), and
$\displaystyle\kappa_{c}=2(1-\beta^{2})\left[\ln\left(\frac{1+\beta}{1-\beta}\right)-3\beta\right]\,,$
(8)
$\displaystyle\kappa_{l}=-\frac{(1-\beta^{2})^{3}}{2}\left[\ln\left(\frac{1+\beta}{1-\beta}\right)+\frac{2\beta}{1-\beta^{2}}\right]$
(9)
are, respectively, the circular- and linear-polarization coupling
coefficients, which set the magnitude of the contribution from each kind
of polarization coupling between the seed and laser photons; here
$\beta=(1-\nu_{*}/\nu)^{1/2}$ is the normalized velocity of the
produced particles in the center-of-mass frame.
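The closed forms (7)-(9) are easy to evaluate numerically. The following Python sketch (the function names are ours) verifies the near-threshold behavior $\Xi\approx-\kappa_{c}\approx-\kappa_{l}$ discussed in the text.

```python
import math

def Xi(beta):
    """Unpolarized contribution (7) to the linear Breit-Wheeler rate."""
    L = math.log((1 + beta) / (1 - beta))
    return (1 - beta**2) * ((3 - beta**4) * L - 2 * beta * (2 - beta**2))

def kappa_c(beta):
    """Circular-polarization coupling coefficient (8)."""
    L = math.log((1 + beta) / (1 - beta))
    return 2 * (1 - beta**2) * (L - 3 * beta)

def kappa_l(beta):
    """Linear-polarization coupling coefficient (9)."""
    L = math.log((1 + beta) / (1 - beta))
    return -(1 - beta**2)**3 / 2 * (L + 2 * beta / (1 - beta**2))

# Near threshold (small beta) the three quantities nearly coincide up to sign,
# Xi ~ -kappa_c ~ -kappa_l ~ 2*beta, so polarization can nearly double
# or nearly cancel the yield.
b = 0.1
print(Xi(b), -kappa_c(b), -kappa_l(b))
```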
In (5), we can clearly see the contributions from the polarization coupling
between the seed and laser photons. To maximize the polarization contribution
in the LBW process, the polarization of the seed photon is optimized, based on
the polarization of the laser photon, as
$\displaystyle(\Gamma_{1},~{}\Gamma_{2},~{}\Gamma_{3})=\hat{\kappa}_{l}\frac{[\varsigma_{1}(\nu),~{}\varsigma_{2}(\nu),~{}\sigma_{l}\varsigma_{3}(\nu)]}{[\varsigma_{1}^{2}(\nu)+\varsigma_{2}^{2}(\nu)+\sigma_{l}^{2}\varsigma_{3}^{2}(\nu)]^{1/2}}\,,$
(10)
where $\sigma_{l}=\kappa_{c}/\kappa_{l}$, and $\hat{\kappa}_{l}$ is the sign
of $\kappa_{l}$. As (5) also shows, the two linear-polarization
components share the identical coupling coefficient $\kappa_{l}$, because of
the symmetry under rotating the linear polarization axis by $45^{\circ}$.
This identity forces the linear polarization components of the seed photon
to be orthogonal to those of the laser photon,
$(\Gamma_{1},~{}\Gamma_{2})\sim-[\varsigma_{1}(\nu),~{}\varsigma_{2}(\nu)]$
in (10), since $\hat{\kappa}_{l}=-1$, as obtained numerically in Fig. 1.
Figure 1: Comparison between the polarization coupling coefficients
$\kappa_{c,l}$ and the unpolarized contribution $\Xi$ as functions of the
parameter $\beta$ in the linear Breit-Wheeler process. $\beta$ is defined in
the text and can be understood as the normalized velocity of the produced
particles in the center-of-mass frame.
In Fig. 1, the unpolarized contribution $\Xi$ and the polarization coupling
coefficients $\kappa_{l,c}$ are presented as functions of the parameter
$\beta$. As shown, the polarization contributions are indeed appreciable
compared with the unpolarized contribution, especially in the low-energy
region $\beta<0.2$, where $\Xi\approx-\kappa_{c}\approx-\kappa_{l}$ and the
energy of the laser photon is close to the frequency threshold
$\nu\to\nu_{*}$. With the proper photon polarization, the production could be
doubled if $\bm{\Gamma}\cdot\bm{\varsigma}(\nu)\to-1$ or completely suppressed
if $\bm{\Gamma}\cdot\bm{\varsigma}(\nu)\to 1$. Similarly to the variation of the
unpolarized contribution $\Xi$ with $\beta\in(0,1)$ Greiner and Reinhardt
(2009), the amplitude of the coupling coefficients $\kappa_{c,l}$ increases
from zero at $\beta=0$ to a maximum at around $\beta\approx 0.45$ and then
falls off again to zero at $\beta=1$. In the region $\beta<0.4$, the two
kinds of polarization have the same coupling coefficient,
$\kappa_{c}\approx\kappa_{l}$. This means that, to acquire the maximal
polarization contribution, the seed photon should be fully polarized in the
state orthogonal to that of the laser photon, _i.e._
$(\Gamma_{1},~{}\Gamma_{2},~{}\Gamma_{3})=-[\varsigma_{1}(\nu),~{}\varsigma_{2}(\nu),~{}\sigma\varsigma_{3}(\nu)]$
with $\sigma_{l}\approx 1$ in (10). However, in the higher-energy region with
$\beta>0.4$, the difference between $\kappa_{c}$ and $\kappa_{l}$ becomes
considerable, which implies that the highest production yield is acquired from
the seed photon polarized in the state deviating from the orthogonal state of
the laser photon. Especially in the extremely high-energy region with
$\beta>0.95$ in which $\kappa_{l}$ is close to zero and $\kappa_{c}$ becomes
positive and dominates the polarization contribution, the highest yield
appears when the seed and laser photons have the pure circular polarization
parallel to each other.
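The optimization (10) is a simple normalization once the coefficients (8)-(9) are known. A minimal Python sketch (function names are ours); at low energy $\sigma_{l}\approx 1$ and the optimum reduces to the state orthogonal to the laser polarization.

```python
import math

def coeffs(beta):
    # Coupling coefficients (8)-(9) of the linear Breit-Wheeler process.
    L = math.log((1 + beta) / (1 - beta))
    kc = 2 * (1 - beta**2) * (L - 3 * beta)
    kl = -(1 - beta**2)**3 / 2 * (L + 2 * beta / (1 - beta**2))
    return kc, kl

def optimal_seed_polarization(laser_stokes, beta):
    """Seed-photon Stokes vector maximizing the pair yield, Eq. (10)."""
    s1, s2, s3 = laser_stokes
    kc, kl = coeffs(beta)
    sig = kc / kl                       # sigma_l in the text
    sign = math.copysign(1.0, kl)       # hat-kappa_l; numerically -1 here
    norm = math.sqrt(s1**2 + s2**2 + sig**2 * s3**2)
    return tuple(sign * g / norm for g in (s1, s2, sig * s3))

# Low-energy regime: kappa_c ~ kappa_l, so the optimum is close to the
# state orthogonal to the laser polarization, Gamma ~ -(s1, s2, s3).
print(optimal_seed_polarization((0.6, 0.0, 0.8), beta=0.2))
```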
We have seen that the polarization coupling between the two photons in the LBW
process can contribute considerably to the production yield: the
polarization contributions $n_{1,2,3}$ in (2) are proportional to the Stokes
parameters of the laser photon as $n_{1,2}\sim
D(\nu)\kappa_{l}\varsigma_{1,2}(\nu)$ and $n_{3}\sim
D(\nu)\kappa_{c}\varsigma_{3}(\nu)$, with the coupling coefficients
$\kappa_{l,c}$ depending only on the dynamic parameter $\beta$ in the
perturbative regime $\xi\ll 1$. In the upcoming laser-particle
experiments Abramowicz et al. (2021); Naranjo et al. (2021), however, the
laser intensity reaches the magnitude $\xi\sim\mathcal{O}(1)$, for which
the Breit-Wheeler pair production lies in the transition from the
perturbative to the non-perturbative regime: a high number of laser photons
is involved to satisfy the energy threshold in the center-of-mass frame,
and the NBW process dominates the pair production. The polarization
contributions then come from the polarization coupling with the
laser pulse, _i.e._ with multiple laser photons rather than a single laser
photon, and the coupling coefficients depend also on the laser intensity
and field ellipticity.
## IV Nonlinear Breit-Wheeler process
In this section, we consider the NBW process stimulated by a high-energy
photon in the collision with the laser pulse in the intermediate intensity
region $\xi\sim\mathcal{O}(1)$. This is the typical setup for the upcoming
laser-particle experiment in LUXE Abramowicz et al. (2021); Jacobs (2021). To
show the polarization effect clearly, we fix the energy parameter $\eta$ and
adjust the relative polarization of the seed photon and laser pulse.
The background laser field is expressed as
$\displaystyle
a^{\mu}(\theta,\phi)=m\xi~{}\textrm{Re}\left\\{\left[0,a_{x}(\theta),a_{y}(\theta),0\right]e^{-i\phi}\right\\}f(\phi),$
(11)
where $\textrm{Re}\left\\{\cdot\right\\}$ means the real part of the argument,
$a_{x}(\theta)=\cos\theta-i\delta\sin\theta$,
$a_{y}(\theta)=\sin\theta+i\delta\cos\theta$. The parameter $\delta\in[-1,1]$
characterizes both the rotation direction of the laser field
($\delta/|\delta|=1$ for left-hand rotation and $\delta/|\delta|=-1$ for
right-hand rotation) and the ellipticity $|\delta|$ of the laser pulse:
$|\delta|=0$ and $|\delta|=1$ correspond, respectively, to a linearly and a
circularly polarized laser background, and $0<|\delta|<1$ gives a laser pulse
with elliptical polarization. The semi-major axis of the elliptical laser
field is along ($\cos\theta,~{}\sin\theta$), with the deflection angle
$\theta\in[-\pi,~{}\pi]$ in the transverse plane.
$f(\phi)$ depicts the envelope of the laser pulse. The polarization of the
laser field could also be described with the classical Stokes parameters
$(\varsigma_{1},~{}\varsigma_{2},~{}\varsigma_{3})$ Jackson (1999) as
$\displaystyle\begin{aligned}
\varsigma_{1}&=\frac{|a_{x}|^{2}-|a_{y}|^{2}}{|a_{x}|^{2}+|a_{y}|^{2}}=\frac{1-\delta^{2}}{1+\delta^{2}}\cos{2\theta}\,,\\
\varsigma_{2}&=\frac{a^{*}_{x}a_{y}+a_{x}a^{*}_{y}}{|a_{x}|^{2}+|a_{y}|^{2}}=\frac{1-\delta^{2}}{1+\delta^{2}}\sin{2\theta}\,,\\
\varsigma_{3}&=i\frac{a_{x}a^{*}_{y}-a^{*}_{x}a_{y}}{|a_{x}|^{2}+|a_{y}|^{2}}=\frac{2\delta}{1+\delta^{2}}\,,\end{aligned}$
(12)
where $\varsigma^{2}_{1}+\varsigma^{2}_{2}+\varsigma^{2}_{3}=1$. The total
linear polarization degree of the laser pulse is given as
$\varsigma_{l}=(\varsigma^{2}_{1}+\varsigma^{2}_{2})^{1/2}=(1-\delta^{2})/(1+\delta^{2})$,
and the laser’s circular polarization degree is given by $\varsigma_{3}$. The
equivalence between the laser Stokes parameters (12) and those of the laser
photon (6) can be seen when we consider a relatively long laser pulse with the
slowly varying envelope $f^{\prime}(\phi)\approx 0$ and
$|\tilde{f}(\nu+1)|\ll|\tilde{f}(\nu-1)|$ at $\nu\geq 1$ Tang and King (2021).
The frequency components of the laser pulse can be written approximately as
$\tilde{a}^{\mu}(\nu)\approx
m\xi/2~{}\left[0,a_{x}(\theta),a_{y}(\theta),0\right]\tilde{f}(\nu-1)$ and
therefore $\varsigma_{i}\approx\varsigma_{i}(\nu)$ with $i=1,2,3$.
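The closed forms in (12) can be verified directly from the complex amplitudes $a_{x}(\theta)$, $a_{y}(\theta)$ in (11). A short Python check (variable names are ours):

```python
import numpy as np

def field_stokes(delta, theta):
    """Stokes parameters (12) from the complex field amplitudes in (11)."""
    ax = np.cos(theta) - 1j * delta * np.sin(theta)
    ay = np.sin(theta) + 1j * delta * np.cos(theta)
    norm = abs(ax)**2 + abs(ay)**2          # equals 1 + delta**2
    s1 = (abs(ax)**2 - abs(ay)**2) / norm
    s2 = (np.conj(ax) * ay + ax * np.conj(ay)).real / norm
    s3 = (1j * (ax * np.conj(ay) - np.conj(ax) * ay)).real / norm
    return s1, s2, s3

# Closed forms from (12): ((1-d^2)cos 2t, (1-d^2)sin 2t, 2d) / (1+d^2)
d, t = 0.5, np.pi / 9
s1, s2, s3 = field_stokes(d, t)
print(s1, s2, s3, s1**2 + s2**2 + s3**2)    # unit-norm Stokes vector
```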
### IV.1 Numerical results
To show the importance of polarization contributions and their dependence on
the corresponding laser Stokes parameters, we first present the numerical
results for the NBW process stimulated by a $16.5~{}\textrm{GeV}$ photon in
the head-on collision with the laser pulse in the intermediate intensity
region $\xi\sim\mathcal{O}(1)$. The pulse envelope is given as
$f(\phi)=\cos^{2}[\phi/(4\sigma)]$ in $\left|\phi\right|<2\pi\sigma$ and
$f(\phi)=0$ otherwise, where $\sigma=8$. The calculations have been done with
the laser central frequency $\omega=4.65~{}\textrm{eV}$, as an example, which
is the third harmonic of a conventional laser with the wavelength
$\lambda=0.8~{}\mu\textrm{m}$. For the detailed calculation of (1), one can
refer to the presentation in Ref. Tang (2021) and the analogous calculation in
Ref. King and Tang (2020) for the polarized nonlinear Compton scattering.
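For reference, the $\cos^{2}$ pulse envelope with $\sigma=8$ can be sketched as follows (a minimal implementation; the function name is ours):

```python
import numpy as np

def envelope(phi, sigma=8):
    """cos^2 envelope: f = cos^2[phi/(4 sigma)] inside |phi| < 2 pi sigma,
    zero outside, as specified in the text."""
    phi = np.asarray(phi, dtype=float)
    f = np.cos(phi / (4 * sigma))**2
    return np.where(np.abs(phi) < 2 * np.pi * sigma, f, 0.0)

print(envelope(0.0))      # peak of the pulse
print(envelope(100.0))    # outside the support, identically zero
```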
Figure 2: The energy spectra of the produced positron via the NBW process in
the head-on collision between a polarized photon and laser pulses with
different ellipticities: (a) $\delta=1$, circular polarization; (b) $\delta=0.5$,
elliptical polarization; and (c) $\delta=0$, linear polarization. The
contributions from an unpolarized photon $n_{0}$ and those $n_{i}$ coupling to
the photon polarization $\Gamma_{i}$ are compared. The energy of the seed
photon is $16.5~{}\textrm{GeV}$. The laser pulse has the intensity $\xi=1$,
central frequency $\omega=4.65~{}\textrm{eV}$ and the deflection angle
$\theta=0$.
In Fig. 2, we present the energy spectra of the produced positrons in the
laser backgrounds with the same intensity $\xi=1$ but different ellipticities
$\delta=1,~{}0.5,~{}0$ in Figs. 2 (a), (b) and (c), respectively. As shown, the
contributions coupling to the photon polarization are indeed
appreciable for the total positron yield. For the circularly polarized laser
background, $\delta=1$ in (a) with
$(\varsigma_{1},~{}\varsigma_{2},~{}\varsigma_{3})=(0,~{}0,~{}1)$, the
relative importance of the contribution $n_{3}$, coupling to the circular
polarization $\Gamma_{3}$ of the seed photon, is about $n_{3}/n_{0}\approx
22.3\%$ compared to the unpolarized contribution $n_{0}$. The contributions
$n_{1,2}$ coupling to the photon’s linear polarization are zero, because the
background field has no linear polarization Tang (2022). By increasing the
linear polarization of the background field
$(\varsigma_{1},~{}\varsigma_{2},~{}\varsigma_{3})=(0.6,~{}0,~{}0.8)$ in (b)
with the ellipticity $\delta=0.5$, the polarized contribution $n_{1}$ becomes
important with $n_{1}/n_{0}\approx 27.8\%$, while the importance of the
polarized contribution $n_{3}$ decreases to about $n_{3}/n_{0}\approx 14.5\%$.
For the laser pulse with the full linear polarization in (c) with $\delta=0$
and $(\varsigma_{1},~{}\varsigma_{2},~{}\varsigma_{3})=(1,~{}0,~{}0)$, the
polarized contribution $n_{3}$ becomes zero, and the relative importance of
the polarized contribution $n_{1}$ increases to about $n_{1}/n_{0}\approx
32.6\%$. With the decrease of the laser ellipticity, the harmonic structure
becomes clearer in the energy spectra; the harmonic edges appear around
$s_{n>5}=\{1\pm[1-(2+\xi^{2})/(n\eta)]^{1/2}\}/2$ when $\delta=0$ Tang
(2022).
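The harmonic-edge formula can be evaluated for the parameters of Fig. 2. The sketch below assumes the common convention $\eta=k\cdot\ell/m^{2}$, which for a head-on collision gives $\eta=2\omega E_{\gamma}/m^{2}\approx 0.59$, consistent with the four-photon threshold $4=\lceil 2/\eta\rceil$ quoted later in the text:

```python
import math

m = 0.511e6                       # electron mass [eV]
omega, E_gamma = 4.65, 16.5e9     # laser and seed-photon energies [eV]
eta = 2 * omega * E_gamma / m**2  # head-on: eta = k.l/m^2 = 2 omega E / m^2
xi = 1.0                          # intensity parameter of Fig. 2

# Edges s_n = {1 +/- [1 - (2 + xi^2)/(n eta)]^(1/2)}/2 are real only once
# n exceeds (2 + xi^2)/eta, i.e. n > 5 for these parameters.
n_min = math.ceil((2 + xi**2) / eta)
for n in range(n_min, n_min + 3):
    r = math.sqrt(1 - (2 + xi**2) / (n * eta))
    print(n, (1 - r) / 2, (1 + r) / 2)
```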
In Fig. 2, the contribution $n_{2}$ remains zero for all values of the
laser ellipticity $\delta$. This is because the laser has no polarization
preponderance along the direction of $\theta=\pi/4$, _i.e._ $\varsigma_{2}=0$.
To see the effect of the field deflection angle $\theta$, we plot the
variation of the polarization contributions $n_{i}$ with the change of
$\theta$ in Fig. 3 (a) for $\xi=1$ and $\delta=0.5$. As shown, the
polarization contributions $n_{1,2}$ vary as
$(n_{1},~{}n_{2})\propto-(\cos 2\theta,\sin 2\theta)$, while $n_{3}$ is unchanged
for different $\theta$; all follow the variation of the
corresponding laser Stokes parameters $\varsigma_{1,2,3}$ in (12). However, we
also note that the amplitude of the linearly polarized contribution
$(n^{2}_{1}+n^{2}_{2})^{1/2}$ is constant as $\theta$ varies, shown as
the green dotted lines in Fig. 3 (a). Therefore, the maximized polarization
contribution $n_{p}$ in (3) from the optimized polarization (4) is independent
of the field's deflection angle $\theta$, as shown in Fig. 3 (b), in which we
also find that the unpolarized contribution $n_{0}$ is unchanged for different
$\theta$. This is because of the azimuthal symmetry of the interaction
geometry. We can thus conclude that, for laser pulses with the fixed
ellipticity $\delta$ and intensity $\xi$, the field’s deflection angle
$\theta$ can only alter the relative value of the linear polarization
contributions $n_{1,~{}2}$ with the constant amplitude
$(n^{2}_{1}+n^{2}_{2})^{1/2}$, but does not change the circularly polarized
($n_{3}$) and unpolarized ($n_{0}$) contributions. To show the correlation
between the polarization contribution $n_{i}$ and the corresponding laser
Stokes parameter $\varsigma_{i}$, we fit the numerical results in Fig. 3 (a)
respectively as
$n_{1}\approx[n_{1}(\theta=0)/\varsigma_{1}(\theta=0)]\,\varsigma_{1}$,
$n_{2}\approx[n_{2}(\theta=\pi/4)/\varsigma_{2}(\theta=\pi/4)]\,\varsigma_{2}$, and
$n_{3}\approx[n_{3}(\theta=0)/\varsigma_{3}(\theta=0)]\,\varsigma_{3}$, and find
precise agreement between the numerical results and the fits.
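The $\theta$-independence of the amplitude $(n^{2}_{1}+n^{2}_{2})^{1/2}$ follows directly from the fitted proportionality $n_{1,2}=c\,\varsigma_{1,2}$ with $c$ independent of $\theta$. A small numerical illustration (the fit coefficient `c` below is hypothetical, for illustration only):

```python
import numpy as np

def linear_stokes(delta, theta):
    # Linear components of the laser Stokes vector, Eq. (12)
    sl = (1 - delta**2) / (1 + delta**2)
    return sl * np.cos(2 * theta), sl * np.sin(2 * theta)

# With n_{1,2} = c * varsigma_{1,2} (c independent of theta, as in the
# Fig. 3 fit), the amplitude hypot(n1, n2) = |c| * varsigma_l for every theta.
c = -0.7                 # hypothetical fit coefficient
amps = []
for theta in np.linspace(0, np.pi, 7):
    s1, s2 = linear_stokes(0.5, theta)
    amps.append(np.hypot(c * s1, c * s2))
print(amps)              # each approximately |c| * 0.6 = 0.42, for all theta
```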
Figure 3: Different contributions to the positron yield of the NBW process in
the elliptically polarized laser pulse with $\delta=0.5$ and the deflection
angle $\theta\in[0,\pi]$. (a) The variation of the polarization contributions
$n_{1,2,3}$ with the change of the field deflection angle. The full QED
results (‘circle’, ‘plus’ and ‘square’) are fitted with the corresponding laser
Stokes parameters as $c_{1,2,3}\varsigma_{1,2,3}$, where
$c_{1}=n_{1}(\theta=0)/\varsigma_{1}(\theta=0)$,
$c_{2}=n_{2}(\theta=\pi/4)/\varsigma_{2}(\theta=\pi/4)$, and
$c_{3}=n_{3}(\theta=0)/\varsigma_{3}(\theta=0)$. The green dotted lines denote
the amplitude of the linear polarization contribution, _i.e._
$\pm(n^{2}_{1}+n^{2}_{2})^{1/2}$. (b) The unpolarized contribution $n_{0}$ and
the maximized polarization contribution $n_{p}$ in (3) from the seed photon
with the optimal polarization in (4). The other parameters are the same as in
Fig. 2. Figure 4: Different contributions to the positron yield of the NBW
process in the laser pulse with different ellipticity $\delta\in[0,~{}1]$, but
the fixed laser power density $I=\xi^{2}(1+\delta^{2})/2=1$ and deflection
angle $\theta=\pi/9$. (a) The unpolarized contribution $n_{0}$ and the
maximized polarization contribution $n_{p}$ from the seed photon with the
optimal polarization in (4). The relative importance $n_{p}/n_{0}$ of the
maximal polarization contribution $n_{p}$ is also plotted and compared with
that of the polarization contribution
$n^{\prime}_{p}=-(\varsigma_{1}n_{1}+\varsigma_{2}n_{2}+\varsigma_{3}n_{3})$
from the photon state orthogonal to the laser polarization. (b) The variation
of the polarization contributions $n_{1,2,3}$ with the change of the laser
ellipticity. The full QED results (‘circle’, ‘plus’ and ‘square’) are fitted
with the corresponding laser Stokes parameters as
$c_{1,2,3}\varsigma_{1,2,3}$, where
$c_{1,2}=n_{1,2}(\delta=0)/\varsigma_{1,2}(\delta=0)$ and
$c_{3}=n_{3}(\delta=1)/\varsigma_{3}(\delta=1)$. The laser power density $I=1$
corresponds to the real power density $I\approx 3.84\times
10^{19}~{}\textrm{Wcm}^{-2}$. The other parameters are the same as in Fig. 2.
In Fig. 4, we show the variation of the different contributions to the
positron yield with the change of the laser ellipticity $\delta$ for the fixed
deflection angle $\theta=\pi/9$ and laser power density $I=1$, corresponding
to $3.84\times 10^{19}~{}\textrm{W}/\textrm{cm}^{2}$. As shown in Fig. 4 (a),
both the unpolarized contribution $n_{0}$ and the maximized polarization
contribution $n_{p}$ from the optimal polarization (4) [shown in Fig. 5 (b)]
decrease with the increase of the laser ellipticity $\delta$ from $0$ to $1$.
This is because of the decrease of the field intensity
$\xi=[2I/(1+\delta^{2})]^{1/2}$. Simultaneously, the relative importance,
$n_{p}/n_{0}$, of the maximized polarization contribution decreases from about
$31.6\%$ at $\delta=0$ for a linearly polarized laser pulse to about $22.3\%$
at $\delta=1$ for the laser pulse with pure circular polarization. For
comparison, we also plot the importance of the polarization contribution
$n^{\prime}_{p}=-(\varsigma_{1}n_{1}+\varsigma_{2}n_{2}+\varsigma_{3}n_{3})$
from the orthogonal state of the laser polarization, which is clearly smaller
than that from the optimal polarization state especially for the elliptically
polarized laser with $\delta\approx 0.5$. In Fig. 4 (b), we see that the
amplitude of the linear polarization contributions $n_{1,2}$ decreases with the
increase of $\delta$, while the amplitude of the contribution from the
circular polarization, $n_{3}$, increases. These variations again follow the
laser Stokes parameters in (12). The ratio of the
two linear polarization contributions is
$n_{1}/n_{2}\approx\varsigma_{1}/\varsigma_{2}=\cot 2\theta$. The numerical
results in Fig. 4 (b) are respectively fitted as
$n_{1,2}\approx[n_{1,2}(\delta=0)/\varsigma_{1,2}(\delta=0)]\,\varsigma_{1,2}$ and
$n_{3}\approx[n_{3}(\delta=1)/\varsigma_{3}(\delta=1)]\,\varsigma_{3}$, and again we
see agreement between the numerical results and the fits. The slight
difference around $\delta\approx 0.4$ implies the dependence of the
polarization coupling between the seed photon and laser pulse on the laser
ellipticity, as we will see later.
So far we have investigated the NBW process in laser pulses with
ellipticity $\delta\in[0,1]$ and deflection angle $\theta\in[0,\pi]$. For a
laser pulse with ellipticity $\delta\in[-1,0]$, the laser field
rotates in the direction opposite to that of the laser with ellipticity
$-\delta$ (see the expression for $\varsigma_{3}$); the calculations would be
consistent with the above results, except that the polarized contribution
$n_{3}$ would change sign while keeping the same amplitude. For a laser pulse
with deflection angle $\theta\in[-\pi,0]$, all the above results would also be
the same, except that the polarized contribution $n_{2}$ would change sign,
because of the odd parity of $\varsigma_{2}$. All the calculations have been
done for a relatively long laser pulse; for an ultrashort laser pulse, the
conclusions would differ.
### IV.2 Polarization coupling coefficients
To manifest the dependence of the polarization contributions on the laser
Stokes parameters, we consider an elliptically polarized monochromatic field
with $f(\phi)=1$ in (11). This is a good approximation for pulses with a
slowly varying envelope $f^{\prime}(\phi)\approx 0$. After integrating over the
transverse momenta in (1), we can acquire the polarization contributions as
$\displaystyle(n_{1},n_{2},n_{3})=\alpha
I(\kappa_{nl}~{}\varsigma_{1},\kappa_{nl}~{}\varsigma_{2},\kappa_{nc}~{}\varsigma_{3})$
(13)
where
$\displaystyle\kappa_{nl}=\frac{1}{\pi\eta}\hat{T}\sin\left(\frac{\vartheta\Lambda}{2\eta ts}\right)g(\vartheta,\varphi)\,,$
(14a)
$\displaystyle\kappa_{nc}=\frac{1}{\pi\eta}\hat{T}\cos\left(\frac{\vartheta\Lambda}{2\eta ts}\right)h_{s}\left(\textrm{sinc}^{2}\frac{\vartheta}{2}-\textrm{sinc}\,\vartheta\right)\vartheta$
(14b)
are the coupling coefficients between the polarization of the seed photon and
that of the laser pulse in the NBW process, and
$\displaystyle g(\vartheta,\varphi)=\cos\vartheta+\textrm{sinc}^{2}\frac{\vartheta}{2}-2\,\textrm{sinc}\,\vartheta+\frac{1}{\varsigma_{l}}\left(1+\textrm{sinc}^{2}\frac{\vartheta}{2}-2\,\textrm{sinc}\,\vartheta\right)\cos 2\varphi\,.$
$\Lambda$ is the Kibble mass, expressed as Brown and Kibble (1964)
$\Lambda=1+I-I\,\textrm{sinc}^{2}\frac{\vartheta}{2}-I\varsigma_{l}\cos 2\varphi\left(\textrm{sinc}^{2}\frac{\vartheta}{2}-\textrm{sinc}\,\vartheta\right)\,,$
which depends on the laser power density $I=\xi^{2}(1+\delta^{2})/2$ and its
linear polarization degree $\varsigma_{l}$. $\hat{T}$ is the integral operator
given as
$\hat{T}=\int^{1}_{0}\mathrm{d}s\int^{\infty}_{-\infty}\mathrm{d}\varphi\int^{\infty}_{0}\frac{\mathrm{d}\vartheta}{\vartheta}\,,$
with the average phase $\varphi=(\phi_{1}+\phi_{2})/2$ and the interference
phase $\vartheta=\phi_{1}-\phi_{2}$ Dinu et al. (2016); Seipt (2017); Ilderton
et al. (2019).
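The Kibble mass above is straightforward to evaluate. The sketch below (function and variable names are ours) checks the limits $\Lambda\to 1$ as $\vartheta\to 0$ and $\Lambda\to 1+I$ at large interference phase:

```python
import numpy as np

def kibble_mass(vartheta, varphi, I, sl):
    """Kibble mass Lambda of the monochromatic elliptical field, as in the
    text: depends on the power density I and linear-polarization degree sl."""
    # np.sinc(x) = sin(pi x)/(pi x), so divide the argument by pi
    sinc = np.sinc(vartheta / np.pi)
    sinc_half = np.sinc(vartheta / (2 * np.pi))
    return (1 + I - I * sinc_half**2
            - I * sl * np.cos(2 * varphi) * (sinc_half**2 - sinc))

# vartheta -> 0: all sinc terms -> 1 and Lambda -> 1 (free-photon limit)
print(kibble_mass(1e-8, 0.3, I=1.0, sl=0.6))
```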
As we can see, in the NBW process the polarization contribution $n_{i}$ is
again directly proportional to the corresponding laser Stokes parameter
$\varsigma_{i}$, as shown in Figs. 3 and 4, with the coupling coefficients in
(14) depending not only on the laser power but also on the field ellipticity.
The two linear polarization components share, again, the same coupling
coefficient because of the symmetry under rotating the linear polarization
axis, as discussed for Fig. 3. We factor the fine structure constant $\alpha$
out of the coupling coefficients because the NBW process is a single-vertex
process, and factor out $I$ because the contributions increase with the laser
power; in the perturbative regime, $n_{i}\propto\xi^{2}$, consistent with (5).
Figure 5: (a) The variation of the coupling coefficients
$\kappa_{nl},~{}\kappa_{nc}$ with the change of the field ellipticity. The
ratio $\sigma_{n}=\kappa_{nc}/\kappa_{nl}$ is also plotted with the right
$y$-axis. The coefficient $\kappa_{nl}$ calculated from $n_{1}$ is exactly the
same as that from $n_{2}$. (b) The Stokes parameters of the photon's optimal
polarization (4) for different $\delta$, compared with the orthogonal state
$-(\varsigma_{1},\varsigma_{2},\varsigma_{3})$ of the
laser polarization. The same parameters as in Fig. 4 are used.
In Fig. 5 (a), we present the dependence of the coupling coefficients
$\kappa_{nl}$ and $\kappa_{nc}$ on the field ellipticity for the lasers with
fixed power $I=1$ and relatively long duration. As shown, the values of
$\kappa_{nl}$ and $\kappa_{nc}$ vary only slightly with the field
ellipticity $\delta$, while there is a significant difference between
$\kappa_{nl}$ and $\kappa_{nc}$, with the ratio $\kappa_{nc}/\kappa_{nl}<1$,
which also changes for different $\delta$.
The dependence of $\kappa_{nl}$ and $\kappa_{nc}$ on the laser power density
is presented in Fig. 6 (a) for the fixed field ellipticity $\delta=0.5$ and
deflection angle $\theta=\pi/8$. As shown, in the low-power density region
$I<10^{-3}$, $\kappa_{nl}$ and $\kappa_{nc}$ are independent on the laser
power $I$ because the LBW process dominates the production, $\kappa_{nl}$ and
$\kappa_{nc}$ can be acquired alternatively from the perturbative result (5)
with $\kappa_{l}$ and $\kappa_{c}$ depending only on the parameter $\beta$.
The value of $\kappa_{nl}$ and $\kappa_{nc}$ are determined by the energy
parameter $\eta$ and the pulse envelope. In this region, the positron yield
increases as $n_{0},n_{p}\propto I$, as shown in Fig. 6 (c), because of the
single-photon effect with the high-frequency components from the finite-pulse effect
Tang and King (2021). In the intermediate laser power region,
$10^{-3}<I<10^{-1}$, the coupling coefficients increase as
$\kappa_{nl},\kappa_{nc}\propto I^{3}$ because of the multiphoton perturbative
effect, in which $4=\lceil 2/\eta\rceil$ laser photons are involved in the
production process and the positron yield increases as
$n_{0},n_{p}\propto I^{4}$ in Fig. 6 (c), where $\lceil x\rceil$ denotes the
smallest integer not less than $x$. With the further increase of the laser
power, $I\gtrsim 0.5$, this four-photon channel is forbidden and a higher
number of laser photons, $n=\lceil 2(1+I)/\eta\rceil$, is involved in the
production process; the fully non-perturbative effect thus becomes
dominant. The growth of the coupling coefficients $\kappa_{nl}$ and
$\kappa_{nc}$ becomes slower, as does the growth of the positron yield in
Fig. 6 (c). In Fig. 6 (a), we can also see an evident difference between
$\kappa_{nl}$ and $\kappa_{nc}$ over the broad laser power region, with the
ratio $\kappa_{nc}/\kappa_{nl}<1$ depending sensitively on the laser power.
This difference would result in the deviation of the optimal photon
polarization from the completely orthogonal state of the laser polarization.
Figure 6: (a) The variation of the coupling coefficients
$\kappa_{nl},~{}\kappa_{nc}$ with the increase of the laser power density. The
dependence of the ratio $\sigma_{n}=\kappa_{nc}/\kappa_{nl}$ on the laser
power is also presented with the right $y$-axis. (b) The Stokes parameters of
the photon’s optimal polarization with the change of the laser power.
$\Gamma_{1}=\Gamma_{2}$ as the field deflection angle is $\theta=\pi/8$. (c)
The yield from the unpolarized contribution $n_{0}$ and the maximal
polarization contribution $n_{p}$, and the relative importance of the
polarization effect $n_{p}/n_{0}$. In (a) and (b), the pink dotted lines are
the corresponding perturbative results acquired from (5), and the black dotted
lines show the varying trend of the curves. The field ellipticity is
$\delta=0.5$. The other parameters are the same as in Fig. 4.
### IV.3 Optimal photon polarization
From (14), the optimal polarization of the seed photon (4) can be written as
$\displaystyle(\Gamma_{1},~{}\Gamma_{2},~{}\Gamma_{3})=\hat{\kappa}_{nl}\frac{(\varsigma_{1},~{}\varsigma_{2},~{}\sigma_{n}\varsigma_{3})}{(\varsigma_{1}^{2}+\varsigma_{2}^{2}+\sigma^{2}_{n}~{}\varsigma_{3}^{2})^{1/2}}\,,$
(15)
based on the polarization of the laser pulse, where $\hat{\kappa}_{nl}=-1$ is
the sign of $\kappa_{nl}$ acquired numerically, and
$\sigma_{n}=\kappa_{nc}/\kappa_{nl}$ denotes the difference between the
coupling coefficients $\kappa_{nl}$ and $\kappa_{nc}$. If $\sigma_{n}\neq 1$,
the photon’s optimal polarization state would deviate from the orthogonal
state $-(\varsigma_{1},\varsigma_{2},\varsigma_{3})$ of the laser
polarization.
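Equation (15) is a simple normalization once $\sigma_{n}$ is known from the numerics. A minimal sketch (the $\sigma_{n}$ values below are illustrative, not the computed ones from Figs. 5 and 6):

```python
import math

def optimal_polarization(laser_stokes, sigma_n, sign_kappa_nl=-1.0):
    """Optimal seed-photon Stokes vector, Eq. (15); sigma_n = kappa_nc/kappa_nl
    must be supplied from the numerics (e.g. Fig. 5(a) or Fig. 6(a))."""
    s1, s2, s3 = laser_stokes
    norm = math.sqrt(s1**2 + s2**2 + sigma_n**2 * s3**2)
    return tuple(sign_kappa_nl * g / norm for g in (s1, s2, sigma_n * s3))

# sigma_n = 1 reproduces the fully orthogonal state -(s1, s2, s3);
# sigma_n < 1 tilts the optimum toward the linear components.
print(optimal_polarization((0.6, 0.0, 0.8), sigma_n=1.0))
print(optimal_polarization((0.6, 0.0, 0.8), sigma_n=0.5))
```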
As shown in Fig. 5 (a), $\sigma_{n}$ is much smaller than $1$ for different
$\delta$. Therefore, the optimal polarization state of the seed photon, for
the maximal yield, is much different from the orthogonal state
$-(\varsigma_{1},\varsigma_{2},\varsigma_{3})$ of the laser polarization as
one can see in Fig. 5 (b), except in the regions around $\delta\approx
0,~{}1$, where the laser is linearly and circularly polarized, respectively.
With the optimized photon polarization in Fig. 5 (b), the production yield
could be enhanced by more than $20\%$ compared to the unpolarized case, as
shown in Fig. 4 (a).
In Fig. 6 (b), the optimal polarization state of the seed photon is presented
in a broad laser power region for the specified ellipticity $\delta=0.5$.
Because the field deflection angle is $\theta=\pi/8$, the two linear
polarization components are equal, $\Gamma_{1}=\Gamma_{2}$. Again, because of
the evident difference between $\kappa_{nl}$ and $\kappa_{nc}$ in Fig. 6 (a),
the photon’s optimal polarization state deviates considerably from the
orthogonal state of the laser polarization as shown in Fig. 6 (b). Especially
in the non-perturbative regime $I>0.5$, the circular polarization degree
$|\Gamma_{3}|$ of the optimal polarization decreases rapidly with the increase
of $I$, because of the rapid decrease of the ratio $\kappa_{nc}/\kappa_{nl}$
for larger $I$ in Fig. 6 (a), which means that the contribution from the
circular polarization becomes less important. In the ultra-high intensity
regime $\xi\gg 10$ (not shown in Fig. 6), in which the locally constant field
approximation would work precisely Ilderton et al. (2019); Tang (2022), the
contribution from the circular polarization would be negligible, _i.e._
$\kappa_{nc}\to 0$ and $\Gamma_{3}\to 0$. This is because the formation length of
the NBW process becomes much shorter than the typical length of the field
variation Ritus (1985) and the laser pulse would work as a linearly polarized
field with the direction varying with the laser phase Tang (2022).
With the polarization-optimized seed photon, the positron yield could be
enhanced appreciably as shown in Fig. 6 (c). In the perturbative intensity
region $I<10^{-3}$, the positron yield could be enhanced by more than $55\%$ by
the polarization effect compared with the unpolarized case, and in the
multi-photon perturbative region $10^{-3}<I<10^{-1}$, the yield enhancement
from the optimized polarization state is about $34\%$. With the further
increase of the laser power, even though the relative importance of the
polarization contribution becomes smaller, the positron yield can still be
improved by more than $16\%$ at $I\lesssim 50$.
## V Conclusion
The optimization of the photon polarization state for the maximal positron
yield of the Breit-Wheeler pair production is investigated in arbitrarily
polarized plane-wave backgrounds over a broad intensity region. The
polarizations of both the photon and the laser pulse are comprehensively
described with the classical Stokes parameters.
The optimal polarization state of the seed photon results from the
polarization coupling with the laser pulse/photon in the production process.
For a laser pulse with pure linear or circular polarization, the seed
photon's optimal polarization is the state orthogonal to that of the laser
pulse. However, because of the evident difference between the coupling
coefficients for the linear and circular polarization components, the seed
photon's optimal polarization state in elliptically polarized laser
backgrounds deviates considerably from the orthogonal state of the laser
polarization, especially in the ultrahigh-intensity regime, in which the
linear-polarization coupling coefficient is much larger than that of the
circular polarization and the seed photon's optimal polarization thus tends
toward pure linear polarization.
With the polarization-optimized seed photon, the positron yield could be
considerably enhanced in a broad intensity region. For the laser intensity
region, $\xi\sim\mathcal{O}(1)$, of current laser-particle experiments, the
yield enhancement from the optimized photon polarization could be more than
$20\%$ compared to the unpolarized case.
## VI Acknowledgments
The author thanks A. Ilderton for helpful suggestions and comments on the
manuscript. The author acknowledges the support from the National Natural
Science Foundation of China, Grant No. 12104428. The work was carried out at
Marine Big Data Center of Institute for Advanced Ocean Study of Ocean
University of China.
# Croesus: Multi-Stage Processing and Transactions for Video-Analytics in
Edge-Cloud Systems
Samaa Gazzaz University of California, Santa Cruz Vishal Chakraborty
University of California, Irvine Faisal Nawab University of California,
Irvine
###### Abstract
Emerging edge applications require both a fast response latency and complex
processing. This is infeasible without expensive hardware that can process
complex operations—such as object detection—within a short time. Many works approach
this problem by addressing the complexity of the models—via model compression,
pruning and quantization—or compressing the input. In this paper, we propose a
different perspective when addressing the performance challenges. Croesus is a
multi-stage approach to edge-cloud systems that provides the ability to find
the balance between accuracy and performance. Croesus consists of two stages
(that can be generalized to multiple stages): an initial and a final stage.
The initial stage performs the computation in real-time using
approximate/best-effort computation at the edge. The final stage performs the
full computation at the cloud, and uses the results to correct any errors made
at the initial stage. In this paper, we demonstrate the implications of such
an approach on a video analytics use-case and show how multi-stage processing
yields a better balance between accuracy and performance. Moreover, we study
the safety of multi-stage transactions via two proposals: multi-stage
serializability (MS-SR) and multi-stage invariant confluence with Apologies
(MS-IA).
Keywords: multi-stage transaction, object detection, performance, accuracy
## 1 Introduction
Modern object detection models are based on complex Convolutional Neural
Networks (CNN) that require GPU clusters costing tens of thousands of dollars
to perform object detection in real-time [1, 2, 3, 4]. This is infeasible for
edge applications that require real-time processing but cannot afford to place
expensive hardware at the edge. Furthermore, many of these applications
require response in the scale of milliseconds (such as V/AR [5] and smart city
Vehicle-to-Everything [6]). This prohibits the use of faraway cloud resources.
There is a large body of research in the machine learning community that aims
at addressing the trade-off between accuracy and performance in deep learning
(DL) models by utilizing compression, pruning and quantization techniques [2,
3, 7, 4, 8, 9, 10, 11, 12, 13, 14]. These approaches trade accuracy for
performance: a compressed model is typically less accurate than the full
model, but dramatically faster. For example, in [2], the compressed model
improves latency from 23.1 ms to 2.9 ms while lowering accuracy from $74.1\%$
to $50.2\%$. Other papers in the field of image compression reduce the time
needed to process data [15, 16, 17], while other researchers specialize DL
models for certain use cases to improve performance [18, 19, 20, 21].
An important aspect that is overlooked in many video analytics solutions is
that they are not integrated with the system’s data processing and management.
Video analytics generates insights from videos that would typically be used in
a data management application. For example, detecting objects in V/AR might
feed into a mobile game, immersive social network, or other application. We
propose Croesus, a multi-stage edge-cloud video processing framework that aims
to manage the performance-accuracy trade-off in DL models. The framework
consists of an edge-cloud video analytics component and a transaction
processing component. Each component may exist in isolation of the other and
benefit other use cases; however, they are co-designed to achieve the goals of
data management for video analytics applications. This proposal separates
computation into two stages: an initial stage that depends on best-effort
computations at the edge (using a fast but less accurate DL model), and a
final stage at the cloud to correct any errors incurred in the initial stage
(using the accurate but slower DL model.) For example, for object detection in
applications such as V/AR, instead of depending solely on the full CNN model,
a more compact model is used at the edge to respond immediately to users. If
needed, some frames are sent to the full CNN model on the cloud to detect any
errors in the immediate responses sent by the initial stage. If an error is
detected, then a correction process is performed in the final stage. The
mechanism to correct errors is an application-specific task and our method
allows flexibility in how errors are corrected. The advantage of this model is
that users have the illusion of both a fast and accurate object detection. The
downside is the possibility of short-term errors. This pattern of the multi-
stage model is useful for applications that require fast response but where
the full model cannot be used within the desired time restrictions.
We formalize and analyze the transactions (a transaction is a group of
database read/write operations that represents a task or a program) in Croesus
using a formal _multi-stage transaction_ model. Our model divides transactions
into two sections: an initial and a final section (we also show how this
model can be extended to multiple sections). The initial section is
responsible for updating the system using the results of the initial object
detection stage, and the final section is responsible for
finalizing/correcting state using the results of the final (object detection)
stage. The multi-stage transaction model can be generalized to more than two
stages. However, our analysis showed that the general design adds overhead
without providing a significant benefit for edge-cloud
video analytics. The reason is that the asymmetry in edge-cloud systems is
two-fold: in the edge (low-capability, real-time requirement) and in the cloud
(high-capability, less stringent latency requirement).
The multi-stage transaction model leads to challenges when reasoning about the
correctness guarantees that should be provided to users. This is because the
multi-stage transaction model breaks a fundamental assumption in existing
transaction models, which is the assumption that a transaction is a single
program or block of code. Therefore, there are challenges in coming up with an
abstraction of initial and final sections and how they interact. Also, there
is a need to specify what makes an execution of initial and final sections
correct in the presence of concurrent transactions. We cannot reuse existing
correctness criteria—such as serializability [22]—as they would not apply to
the multi-stage transaction model.
For those reasons, we propose a multi-stage transaction processing protocol
and study the safety-performance trade-offs in multi-stage transactions. We
investigate two safety guarantees: _(1) Multi-stage Serializability (MS-SR)_ ,
which mimics the safety principles of serializability [22] by requiring that
each transaction would be isolated from all other transactions. _(2) Multi-
stage Invariant Confluence with Apologies (MS-IA)_ , which adapts invariant
confluence [23] and apologies [24] to the multi-stage transaction model and
enjoys better performance characteristics and flexibility compared to MS-SR.
The multi-stage transaction pattern of Croesus invites a natural method of
adapting invariant confluence and apologies. In particular, the final section
is—by design—intended to fix any errors caused by the initial stage. This can
be viewed as the final stage “correcting any invariant violations” and issuing
“apologies” for any erroneous work generated by the initial section.
In the rest of this paper, we present background in Section 2, followed by the
design of Croesus (Section 3) and multi-stage transactions (Section 4).
Experiments and related work are presented in Sections 5 and 6, respectively.
The paper concludes in Section 7.
Figure 1: Croesus’ execution pattern
## 2 Background
In this section, we present background on the multi-stage system model and
object detection.
### 2.1 System and Programming Model
Edge-Cloud Model. Our proposed system model consists of edge nodes and a cloud
node (see Figure 1). Each edge node maintains the state of a partition
(database transactions are performed on the partition copy at the edge.) For
ease of exposition, we focus on a single edge node and partition in this
paper. In edge applications, interactions between users tend to have spatial
locality and are therefore typically homed in the same edge node and
partition.
Application Model. The applications we consider are video-driven—meaning that
the input and triggers to operations on data are done via a video interface.
For example, a gesture or object detected on a V/AR headset triggers a
database transaction. This translates to the following processing steps for
each frame $f$: (1) the frame $f$ is processed using the small model on the
edge node, $M_{e}$, to generate labels (labels are the detected objects and/or
actions). We call these the edge labels, denoted by the set $L_{e}$. (2)
the edge labels $L_{e}$ are used to trigger transactions that take the labels
as input. These transactions are denoted by the set $T_{f}$. The initial
sections of each of these transactions in $T_{f}$ are processed to return an
immediate response to users and potentially write to the database on the edge
node. (3) concurrently, the frame $f$ is also processed in the original, more
accurate object detection model on the cloud, denoted by $M_{c}$. Once the
cloud model generates the labels, denoted by $L_{c}$, they are sent to the
edge node. (4) when the labels $L_{c}$ from the cloud are received, they are
used to trigger two types of events. The first is to trigger the final
sections of the transactions $T_{f}$ that started for frame $f$. The input to
these sections is the correct label(s) of the object(s) that triggered the
transaction. The second is to trigger new transactions that should have been
triggered by the frame but whose labels were missing in $L_{e}$. We focus on
the first pattern as the second pattern can be viewed as a subset of the
first.
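The four-step flow above can be sketched in Python as follows; the class and method names (`Model`, `Transaction`, `tx_bank`) are illustrative stand-ins for $M_{e}$, $M_{c}$, and the transactions bank, not the paper's actual implementation.

```python
import concurrent.futures as futures

# Minimal stand-ins for the edge/cloud models and transactions.
class Model:
    def __init__(self, labels):
        self.labels = labels
    def detect(self, frame):
        return list(self.labels)

class Transaction:
    def run_initial(self, edge_labels):
        # immediate, best-effort response (initial-stage commit)
        return ("initial", edge_labels)
    def run_final(self, edge_labels, cloud_labels):
        # correct the initial result if the accurate labels differ
        return "ok" if edge_labels == cloud_labels else "corrected"

def process_frame(frame, edge_model, cloud_model, tx_bank):
    edge_labels = edge_model.detect(frame)                  # step 1: L_e
    with futures.ThreadPoolExecutor(max_workers=1) as pool:
        cloud_job = pool.submit(cloud_model.detect, frame)  # step 3: M_c runs concurrently
        txs = [tx_bank[l] for l in edge_labels if l in tx_bank]
        responses = [t.run_initial(edge_labels) for t in txs]   # step 2
        cloud_labels = cloud_job.result()                   # L_c arrives
        fixes = [t.run_final(edge_labels, cloud_labels) for t in txs]  # step 4
    return responses, fixes
```

Note that the cloud inference is submitted before the initial sections run, so the edge response is never blocked on the slow model.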
Example Application. Consider a smart campus Augmented Reality (AR)
application with two basic functionalities: (1) Task 1: continuously, an
object detection CNN model detects buildings in the campus. If a building is
detected, the database is queried and information about the building—such as
available study rooms—is augmented onto the headset view. (2) Task 2: if the
user clicks on an auxiliary device, a study room is reserved in the currently
detected building.
Execution Pattern. The execution pattern of this application is the following
(shown in Figure 1): The headset captures images continuously and sends them
to the nearby edge node. The edge node performs the initial stage of
computation by running the captured frame, $f$ on the small (fast but
inaccurate) DL model, $M_{e}$ (step 1). The labels extracted from the model,
$L_{e}$, are used to trigger the initial section of transaction $T_{f}$ (step
2). For example, if the engineering building is detected, then the
transaction’s initial section reads information about the building. The
outcome of this transaction is sent back to the headset to be rendered and
augmented onto the display. During this time, the frame is forwarded to the
cloud node which runs the full (slow but accurate) CNN model, $M_{c}$ (step
3). The labels, $L_{c}$ extracted from the model are sent back to the edge
node. Once the edge node receives the correct labels, it performs the final
stage of the transactions in $T_{f}$ (step 4). The final stage takes as input
both the original detected labels in the initial stage as well as the new,
correct, labels.
Programming Interface. The programming model exposes an interface to write
both the initial and final sections of the transaction. In our application for
example, there are two transactions, one for each task. For task 1 (display
information about detected buildings), the initial section is triggered for
each frame with a label in the class “building” and it takes as input the
detected labels, $L_{e}$. For each detected label, the initial section reads
the information about that key from the database and returns it to the headset
to be rendered. The final section is triggered after the correct labels,
$L_{c}$, are sent from the cloud node. It checks if the labels are the same;
if they are, the transaction terminates (note that the decision to terminate
is specific to this example transaction, but other application might use the
final section to perform some final actions even if the labels were correctly
detected in the initial stage.) If they are not, then the transaction reads
the labels of the correct detected building and sends them to the headset to
render the correct information and an apology. (In a real application, the
corrected information would also influence the small model—via retraining and
heuristics such as smoothing—so that the error would not be incurred in the
following frames.)
For task 2 (reserve a study room), the initial section is triggered when the
auxiliary device is clicked by the user. The initial section takes as input
the most recent detected labels and their coordinates. If there are more than
one label, the initial section picks the label that is closest to the center
of the frame. Then, the initial section reserves a study room if one exists.
The final section—triggered after receiving the correct labels—checks if the
center-most label matches the building where the study room was reserved. If
so, the transaction terminates. Otherwise, the original reservation is removed
from the database and—if available—a new reservation with the right building
is made. The results are sent back to the AR headset to be rendered with an
apology.
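As a concrete sketch of Task 2, the initial and final sections of the reservation transaction might look as follows; `rooms_db` and both function names are hypothetical, and a real system would issue these reads and writes through the concurrency controller of Section 4.

```python
def initial_reserve(labels, rooms_db):
    """Best-effort reservation: pick the center-most label and book a room.

    labels: list of (building_name, distance_from_frame_center) pairs.
    rooms_db: mapping building_name -> number of free rooms.
    """
    building = min(labels, key=lambda l: l[1])[0]
    if rooms_db.get(building, 0) > 0:
        rooms_db[building] -= 1
        return building          # initial-stage commit
    return None

def final_reserve(reserved, correct_building, rooms_db):
    """Undo and redo the reservation if the edge label was wrong."""
    if reserved == correct_building:
        return reserved, None    # edge label was right; no apology needed
    if reserved is not None:
        rooms_db[reserved] += 1  # roll back the erroneous reservation
    if rooms_db.get(correct_building, 0) > 0:
        rooms_db[correct_building] -= 1
        return correct_building, "apology: rebooked in correct building"
    return None, "apology: no rooms available in correct building"
```

The final section embodies the "correct and apologize" pattern: it compensates for the initial section's write rather than preventing it.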
### 2.2 Accuracy-Performance Trade-off in Object Detection
Convolutional Neural Networks (CNNs). A CNN is designed and trained to detect
labels of objects in an input frame. Different CNN models have different
structures and variations, and we refer the interested reader to these surveys
[25, 26]. Our work applies to a wide-range of CNN models as we use them as a
black box.
Accuracy-Performance Trade-off. The complex processing of CNNs results in
higher inference time. It is estimated that running a state-of-the-art CNN
model in real-time requires a cluster of GPUs that costs tens of thousands of
dollars [1]. This means that running a CNN model on commodity hardware—such as
what is used in edge devices—would lead to prohibitively high latency. This
led to exploring the accuracy-performance trade-off in CNN models.
Specifically, there have been efforts to produce smaller CNN models that would
run faster on commodity hardware [27, 20, 28, 29, 30, 31]. The downside of
these solutions is that they are less accurate than full CNN models. In this
work, we aim to utilize both small and full CNN models by using small models
for fast inference and original models to correct any errors.
Derivative Models. The interest in the accuracy-performance trade-off in CNNs
led to efforts that enable deriving smaller—faster—models using existing
original CNN models. One approach is to use a smaller model that handles the
same scope of labels of the original model but with less accuracy [27].
Another approach is to create smaller—specialized—models that narrow the scope
of labels to enable faster inference while retaining accuracy for the select
labels [1]. In our work, we consider both variations. For smaller, less
accurate models, the Croesus pipeline helps correct errors due to inaccuracy
and for specialized models, the Croesus pipeline helps correct errors due to
the narrower scope of labels.
## 3 Croesus Design
In this section, we present the design of Croesus and an optimization that
controls the accuracy-performance trade-off.
### 3.1 Overview
System Model. The system model of Croesus (Section 2) consists of an edge node
and a cloud node. The edge node hosts a small CNN model denoted by $M_{e}$
that is used to perform initial processing. The edge node also hosts the main
copy of it’s partition’s data. The edge node processes both the initial
section and the final section. The initial section of a transaction is
triggered by the labels of the model on the edge, $M_{e}$, and the final
section is triggered by the labels of the model on the cloud, $M_{c}$. The
execution pattern of requests is shown in Figure 1 and described in Section
2.1.
Workflow. The workflow of requests in Croesus is the following: a frame $f$ is
sent from the client to the edge node. The edge node processes $f$ using the
edge model, $M_{e}$. The labels from $M_{e}$, $L_{e}$, are used to trigger
corresponding transactions, $T_{f}$ (the programmer defines what transactions
should be triggered for each class of labels.) The initial sections of
transactions in $T_{f}$ are processed on the edge node. At this time, the
response from the initial sections are sent to the client. This marks the
initial commit stage. In the meantime, the frame $f$ is sent to the cloud
node. Once the cloud node receives it, the cloud model, $M_{c}$, is used to
process $f$. The corresponding labels, $L_{c}$, are then sent to the edge
node. When the edge node receives the cloud labels $L_{c}$, the final sections
of transactions in $T_{f}$ are triggered. The responses and apologies from
these final sections are sent to the client. This marks the final commit
stage.
Bandwidth Thresholding. The pattern of edge-cloud stages introduces a
bandwidth overhead due to the need to send all frames from the edge to the
cloud. This can be problematic due to the high overhead on the edge device and
the monetary cost of communicating data to the cloud (e.g., some public cloud
providers charge for data transferred between the data center and the
Internet). To this end, we tackle the problem of limiting edge-to-cloud
communication. We use the confidence of the labels that are generated by the
edge model, $M_{e}$, to decide whether we need to send the frame to the cloud
or not. Specifically, a high confidence from the edge model indicates that the
detected labels are more reliable than detections with lower confidence, and
thus less likely to need correction. Later in this section, we develop a
bandwidth thresholding mechanism to investigate sending frames to the cloud
selectively using the edge model’s confidence.
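A minimal sketch of this filter, assuming each edge label carries a `confidence` field in $[0,1]$ and using an arbitrary example threshold of 0.8:

```python
def should_send_to_cloud(edge_labels, confidence_threshold=0.8):
    """Ship the frame to the cloud model only when the edge model is unsure.

    edge_labels: list of dicts with a 'confidence' field in [0, 1].
    The threshold value is a tunable knob, not a value from the paper.
    """
    if not edge_labels:
        return True  # nothing detected: let the full model check the frame
    # If any surviving label is below the threshold, verification is needed.
    return min(l["confidence"] for l in edge_labels) < confidence_threshold
```

Raising the threshold sends more frames to the cloud (more bandwidth, fewer uncorrected errors); lowering it does the opposite, which is exactly the trade-off the thresholding mechanism exposes.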
### 3.2 Initial-Final Section Interaction
A unique property of multi-stage processing is that there are two stages of
processing where the first stage is fast and less accurate and the second is
slow and accurate. This property leads to the need to understand how they
interact and what guarantees should be associated with each stage. In the rest
of this section, we provide such properties that are useful to programmers in
the multi-stage model. In the initial stage, the initial section of a
transaction, $s_{i}$, uses the input from the edge model, $M_{e}$, to generate
a response to the user. This response represents an _initial-stage commit_.
The initial-stage commit—when received by a client—represents the following:
(1) the response is a preliminary and/or best-effort result of the
transaction. (2) any errors in this initial processing will be corrected by
the logic specified by the programmer in the corresponding final section. This
second property is critical because it leads to having to enforce a guarantee
that if the initial section of a transaction returns a response to the client
(an initial-stage commit), then the underlying system must guarantee that the
corresponding final section would commit as well. This is trivial for a
transaction running by itself; however, when transactions are running
concurrently, this leads to complications. (In Section 4, we present the
concurrency control mechanisms for multi-stage transactions where we encounter
these complications.)
When the final section of the transaction starts, it needs to observe what the
input labels to the initial section were—to know whether the input was
erroneous—and what the initial section did—to know what to fix if an error was
detected. To avoid adding complexity to the system
model and description, we consider that these two tasks are performed by the
programmer using database reads and writes. Specifically, the initial section
communicates to the final section via writing its input and state to the
database.
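A minimal sketch of this convention, modeling the database as a key-value store; the key naming scheme is chosen for illustration only:

```python
def initial_section(tx_id, edge_labels, db):
    """Record the triggering input and intermediate state in the database."""
    db[f"{tx_id}:input"] = edge_labels          # what triggered us (L_e)
    db[f"{tx_id}:state"] = {"displayed": edge_labels}
    return edge_labels                          # preliminary response

def final_section(tx_id, cloud_labels, db):
    """Read what the initial section saw and did; correct it if needed."""
    seen = db[f"{tx_id}:input"]                 # labels the edge model produced
    if seen == cloud_labels:
        return None                             # nothing to fix
    db[f"{tx_id}:state"] = {"displayed": cloud_labels}
    return cloud_labels                         # corrected response
```

The only communication channel between the two sections is the database itself, matching the paper's choice to avoid a separate handoff mechanism.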
### 3.3 Algorithms
Now, we provide the detailed algorithms of Croesus. Parts of the algorithms
use a concurrency control component that we present and design in Section 4.
We will denote this concurrency control component as CC and a transaction
block would either be CC.initial{ } for an initial section and CC.final{ } for
a final section. Both transaction blocks get the detected labels as input, but
we omit it for brevity.
#### 3.3.1 Client Interface
The client captures frames, gets user input (from auxiliary devices), and
displays responses. For example, in a V/AR application, the client captures a
frame from the headset camera and sends it to the edge node. Likewise, if
there are any associated auxiliary or wearable devices, the client sends the
input/commands that correspond to these devices. This process of sending
frames and input is continuous—there is no blocking to get the response from
the edge node. When a response is received from the edge node, that response
is rendered and augmented in the user’s view.
#### 3.3.2 Edge Node Algorithms
The edge node is responsible for the initial stage of processing (using the
small model $M_{e}$), transaction processing, and storage. There are two main
components in the edge node: the input processing component and the
transaction processing component. The following is a description of the main
tasks that are handled by the edge node.
Initialization and Setup. Starting an edge node includes setting up a small
model, $M_{e}$, a data store $ds$, and a _transactions bank_. The small model
$M_{e}$ is the one that will be used to process incoming frames. The
transaction bank is a data structure that maintains the application
transactions and what triggers each transaction. For example, an application
may have a transaction $t_{bldng}$ that reads the information about a building
that is detected in a frame. The transaction $t_{bldng}$ takes as input the
label that is associated with a building. The transactions bank helps the edge
node know which transactions should be triggered in response to a label. For
example, if a label $l_{1}$ represents a label name “Engineering Building” and
label $l_{2}$ represents a label name “University Shuttle 42”, the transaction
$t_{bldng}$ should be triggered in response to $l_{1}$ but not $l_{2}$.
The way the transactions bank helps in making this decision is that it
maintains a table, where each row corresponds to a class of labels and the
transactions that would be triggered from that class of labels. For example, a
row in that table can have a class of labels called “Buildings” and it
contains all the labels that would correspond to a building. That row would
also have $t_{bldng}$ and any other transactions that should be triggered in
response to the “Building” class. A row in the transactions bank may also have
other associated triggers. For example, a transaction $t_{rsrv}$ that is used
to reserve a study room in a building would be triggered if both a building
label is detected in the frame _and_ the auxiliary device input is received.
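The transactions bank described above can be sketched as a small table keyed by label class; the class and transaction names mirror the running example, while the structure itself is an illustrative assumption.

```python
class TransactionsBank:
    """Maps label classes to the transactions they trigger."""
    def __init__(self):
        # class -> (labels in the class, transactions, required aux input)
        self.rows = {}

    def add_row(self, label_class, labels, txs, aux_input=None):
        self.rows[label_class] = (set(labels), txs, aux_input)

    def lookup(self, label, aux_input=None):
        """Return transactions triggered by this label (and auxiliary input)."""
        triggered = []
        for labels, txs, needed_input in self.rows.values():
            # rows with no required input fire on the label alone;
            # others fire only when the matching input is also present
            if label in labels and needed_input in (None, aux_input):
                triggered.extend(txs)
        return triggered
```

With the running example, a "Buildings" row triggers $t_{bldng}$ on the label alone, while a second row requires both the building label and a click to trigger $t_{rsrv}$.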
Input and Initial Stage Processing. The initial stage processing represents
the input processing using the small model, $M_{e}$, in response to a received
frame or user input. When a frame $f$ is received by the edge node, it is
supplied to the small model $M_{e}$. The model $M_{e}$ returns a set of labels
$L^{f}_{e}$. Each label, $L^{f}_{e}[i]$, consists of the name of the
label, $L^{f}_{e}[i].name$, the confidence of the label,
$L^{f}_{e}[i].confidence$, and the coordinates of the label,
$L^{f}_{e}[i].coordinates$. The input processing component removes any labels
from the set $L^{f}_{e}$ that have low confidence (the threshold for a low
confidence is a configuration parameter.) Finally, the input processing
component gets the information of all the transactions that correspond to the
detected labels, $L^{f}_{e}$, by reading from the transactions bank. The set
of triggered transactions, $t_{f}$, is sent to the transaction processing
component.
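The filtering-and-lookup step can be sketched as follows, assuming each raw label is a dict with `name` and `confidence` fields and the transactions bank is a simple mapping from label names to transactions; both assumptions are illustrative.

```python
def process_edge_labels(raw_labels, tx_bank, min_confidence=0.5):
    """Drop low-confidence labels, then gather the triggered transactions.

    min_confidence stands in for the configuration parameter mentioned
    in the text; 0.5 is an arbitrary example value.
    """
    kept = [l for l in raw_labels if l["confidence"] >= min_confidence]
    txs = []
    for l in kept:
        txs.extend(tx_bank.get(l["name"], []))  # t_f for this frame
    return kept, txs
```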
Similar to how frames trigger transactions, when a different input is received
by the input processing component—such as a click on the auxiliary device—the
input processing component generates the set of transactions $t_{e}$ that
corresponds to the input. An auxiliary input might lead to an action that is
independent of the captured frame. For example, a click on the menu button
may display the menu and general user information. In this case, the entry in
the transactions bank is only specified by the input type. Alternatively, the
input might be coupled with a specific label class to trigger a transaction.
For example, a click would display a captured building’s information using
$t_{rsrv}$. In such a case, $t_{rsrv}$ would only be triggered if both the
click and a building label are detected. To facilitate such actions, the input
processing component matches a received auxiliary input with the labels from
the most recently detected labels.
After transactions, $t_{f}$, are sent to the transaction processing component
(TPC), the frame $f$ is sent to the cloud node to be processed using the cloud
model, $M_{c}$. This concludes the tasks performed for input processing.
Initial Transaction Section. When the input processing component generates the
set $t_{f}$ for a frame $f$, these transactions are sent to the TPC. The TPC
then triggers the initial section of these transactions. The read and write
operations to the database are managed by the concurrency control component by
wrapping them in the CC.initial{ } block. (The implementation details of the
concurrency control component are presented in Section 4). The initial section
of a transaction $t$ would either commit or abort—based on the decision of the
concurrency controller. If the initial section aborts, then the abort decision
is sent to the client. Otherwise, the response from the initial section is
sent to the client, which represents the initial commit point for $t$. The TPC
records the decision for the initial section with the labels, $L^{f}_{e}$, and
waits until the corresponding labels are received from the cloud model.
Final Transaction Section. After processing the initial section, the TPC waits
for the correct labels, $L^{f}_{c}$, from the cloud node. Once received, the
following is performed for each label, $L^{f}_{e}[i]$ in $L^{f}_{e}$. The
label $L^{f}_{e}[i]$ is matched with a label in $L^{f}_{c}$. The matching is
performed by finding if the bounding box (represented by the x-y coordinates)
of a label in $L^{f}_{c}$ overlaps with the bounding box of $L^{f}_{e}[i]$.
The overlap does not need to be exact—if the labels overlap by more than X%,
where X is a configuration parameter, then the two labels are considered
overlapping. If more than one candidate in $L^{f}_{c}$ overlaps
with $L^{f}_{e}[i]$, then the one with the larger overlap is chosen. There are
the following cases of matching the label $L^{f}_{e}[i]$ to a label in
$L^{f}_{c}$: (1) If an overlapping label cannot be found in $L^{f}_{c}$, then
the label $L^{f}_{e}[i]$ is considered erroneous and the final section of the
corresponding transaction is called with an empty label. (2) If there is a
label in $L^{f}_{c}$ that overlaps with $L^{f}_{e}[i]$ and has the same name,
then the label $L^{f}_{e}[i]$ is considered correct and the final section of
the corresponding transaction is called with the same label. (3) If there is a
label in $L^{f}_{c}$ that overlaps with $L^{f}_{e}[i]$ but has a different
name, then the label $L^{f}_{e}[i]$ is considered erroneous and the final
section of the corresponding transaction is called with the overlapping label
from $L^{f}_{c}$.
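The three matching cases can be illustrated with a small sketch, assuming intersection-over-union as the overlap measure (the text only requires an overlap percentage, so IoU is an assumption here); `OVERLAP_THRESHOLD` stands in for the X% configuration parameter.

```python
# Illustrative matching of an edge label against cloud labels L_c by
# bounding-box overlap. All names are hypothetical stand-ins.

OVERLAP_THRESHOLD = 0.5  # the X% overlap configuration parameter

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def match_label(edge_label, cloud_labels):
    """Return ('empty' | 'correct' | 'corrected', matched cloud label)."""
    name, box = edge_label
    candidates = [(iou(box, cb), cn, cb) for cn, cb in cloud_labels]
    candidates = [c for c in candidates if c[0] > OVERLAP_THRESHOLD]
    if not candidates:                  # case (1): erroneous, no overlap
        return ("empty", None)
    _, cn, cb = max(candidates)         # largest overlap wins
    if cn == name:                      # case (2): edge label was correct
        return ("correct", (cn, cb))
    return ("corrected", (cn, cb))      # case (3): replaced by cloud label

edge = ("building", (0, 0, 10, 10))
cloud = [("building", (1, 1, 10, 10)), ("bus", (100, 100, 110, 110))]
case, matched = match_label(edge, cloud)
```

The returned case determines which label the final transaction section is called with.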
Once this matching process is complete, then the TPC checks if there are any
labels in $L^{f}_{c}$ that were not matched. For each one of these labels
$L^{f}_{c}[i]$, the TPC triggers an initial section and final section with the
label in $L^{f}_{c}[i]$.
#### 3.3.3 Cloud Node Algorithms
The cloud node has a single task of processing frames using the cloud model,
$M_{c}$. When a frame $f$ is received from an edge node, the labels,
$L^{f}_{c}$, are derived using $M_{c}$ and then sent back to the edge node.
### 3.4 Bandwidth Thresholding
A major problem faced by video-analytics applications in the edge-cloud
paradigm is the high edge-cloud bandwidth consumption due to the large size of
videos. Sending all frames from the edge to the cloud poses a performance
challenge due to the communication overhead as well as a monetary overhead due
to the cost of transferring data between the edge and the cloud (most public
cloud providers charge applications for data communication between the cloud
and the Internet). We extend our solution to reduce the reliance on cloud
nodes with the goal of overcoming the performance overhead and monetary costs
of edge-cloud communication.
The observation we utilize to reduce edge-cloud communication is that we can
use the confidence of edge computation to decide whether verifying with the
cloud node is necessary. (Confidence here represents the statistical
confidence generated by CNN models, which is a typical feature of such models.)
Specifically, if the confidence of the produced detections in the edge model,
$M_{e}$, is high, it is likely that the edge model produced correct labels.
Therefore, it would not be necessary to send the frame to the cloud. Likewise,
if the detections had extremely low confidence, then it is likely that these
are erroneous, false detections, and thus sending the frame to the cloud node
would be unnecessary as they can be discarded immediately. What is left are
detections that have confidence values that are not too high and not too low.
These detections are ones that likely indicate the presence of an object of
interest, but its label might be incorrect.
More formally, we represent with $\theta_{L}$ and $\theta_{U}$ the lower and
the upper confidence thresholds such that $0\leq\theta_{L}<\theta_{U}<1$.
Generally, an object with confidence lower than $\theta_{L}$ is discarded as
being likely a false-positive (this is called the _discard interval_). An
object with confidence higher than $\theta_{U}$ is assumed to be correct and
is not sent to the cloud node (this is called the _keep interval_). Objects
with a confidence between $\theta_{L}$ and $\theta_{U}$ are sent to the cloud
for validation (this is called the _validate interval_). However, there is a
challenge in adopting this model as it is not clear how to derive these
confidence thresholds to preserve the integrity of the underlying models.
Specifically, a _performance-accuracy trade-off_ controls this decision. A
large validate interval would lead to better accuracy, since more frames are
sent to the cloud for validation and correction. Conversely, a small validate
interval would lead to worse accuracy but better performance in terms of
average latency and edge-cloud bandwidth utilization. This is complicated
further because the size of the validate interval is not the only factor
controlling this trade-off. The validate interval size may lead to different
performance-accuracy trade-offs based on where it is located in the threshold
space from 0–100%.
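The routing rule implied by the three intervals can be written as a one-line sketch; the example threshold values are arbitrary placeholders for the $\theta_{L}$ and $\theta_{U}$ that the optimization below derives.

```python
# A minimal sketch of the bandwidth-thresholding decision, assuming
# theta_L and theta_U have already been derived. Interval names
# follow the text: discard / validate / keep.

def route_detection(confidence, theta_l=0.3, theta_u=0.8):
    """Classify a detection into the discard/validate/keep interval."""
    if confidence < theta_l:
        return "discard"     # likely false positive, drop immediately
    if confidence > theta_u:
        return "keep"        # trusted, not sent to the cloud
    return "validate"        # uncertain, send the frame to the cloud
```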
Optimization Formulation. The input to the optimization problem is a set of
video frames $V=\\{v_{1},\ldots,v_{n}\\}$, and an object query $O$ (e.g.,
bus), which needs to be detected in the frames. Let $n_{i}$ be the number of
instances of object $O$ detected in frame $v_{i}$ (by the NN in the edge-node)
with confidence $\beta_{i}=(\beta_{i}^{1},\ldots,\beta_{i}^{n_{i}})$ where
$\beta_{i}^{k}$ is the confidence corresponding to the $k^{\text{th}}$
instance of object $O$, for $1\leq k\leq n_{i}.$ We denote this as _edge-
confidence_.
Let $m=|\\{v_{i}\in V\mid\exists k\text{ s.t.
}\theta_{L}\leq\beta^{k}_{i}\leq\theta_{U}\\}|$ be the number of frames which
were sent to the cloud. We define the ratio
$\delta(\theta_{L},\theta_{U})=\frac{m}{n}$ (where $n$ is the number of frames
in $V$) and have the corresponding F-score
$f({\theta_{L},\theta_{U}})=\frac{2pr}{p+r}$ where $p$ is precision and $r$ is
recall. We want to find $(\theta_{L},\theta_{U})$ such that
$\delta(\theta_{L},\theta_{U})$ is minimized and the corresponding
$f({\theta_{L},\theta_{U}})\geq\mu.$ Let $\mathbb{S}=\\{x\in\mathbb{R}\mid
0\leq x<1\\}$. We have:
$\displaystyle T=\underset{(x,y)\in\mathbb{S}^{2},\mu}{\text{argthresh
}}f(x,y):=\\{(x,y)\in\mathbb{S}^{2}\mid f(x,y)\geq\mu\\}$ (1)
$\displaystyle(\theta_{L},\theta_{U})$ $\displaystyle=\underset{(x,y)\in
T}{\text{argmin }}\delta(x,y)$ $\displaystyle:=\\{(x^{*},y^{*})\in
T\mid\forall(x,y)\in T,\delta(x^{*},y^{*})\leq\delta(x,y)\\}.$ (2)
This formulation produces the thresholds $(\theta_{L},\theta_{U})$ given
$\mu$.
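One way to realize this formulation is an exhaustive grid search over threshold pairs: build the feasible set $T$ of pairs whose F-score meets $\mu$, then pick the pair minimizing $\delta$. This sketch is an assumption about the solving strategy (the text does not prescribe one), and the `fscore`/`delta` callables are toy stand-ins for measurements on a labeled validation set $V$.

```python
# Grid-search sketch of the optimization in Eqs. (1)-(2): among all
# threshold pairs whose F-score meets the target mu, choose the pair
# minimizing the fraction delta of frames sent to the cloud.

def solve_thresholds(fscore, delta, mu, step=0.05):
    """Return (theta_L, theta_U) minimizing delta s.t. fscore >= mu."""
    grid = [i * step for i in range(int(1 / step))]
    feasible = [(delta(lo, hi), lo, hi)
                for lo in grid for hi in grid
                if lo < hi and fscore(lo, hi) >= mu]   # the set T, Eq. (1)
    if not feasible:
        return None
    _, lo, hi = min(feasible)                          # argmin of Eq. (2)
    return lo, hi

# Toy stand-ins: a wider validate interval costs more bandwidth (delta)
# but yields a higher F-score.
fscore = lambda lo, hi: 0.6 + 0.4 * (hi - lo)
delta = lambda lo, hi: hi - lo
lo, hi = solve_thresholds(fscore, delta, mu=0.75)
```

A real deployment would replace the stand-in functions with precision/recall measured against cloud-model labels.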
### 3.5 Generalizing Multi-Stage Processing
In this section, we have focused on models with two stages. This is because
the application domain we consider has a two-tier structure that invites the
use of two sections, one that represents the edge and another that represents
the cloud. However, the multi-stage processing model can be utilized for other
use cases where the hierarchy has more than two levels. Our designs and
treatments can be extended to these cases as we describe in the rest of this
section.
Model. In a general multi-stage model, there are $m$ stages,
$s_{0},\ldots,s_{m-1}$. The first stage, $s_{0}$, represents the initial stage
of processing and the last stage, $s_{m-1}$, represents the final stage of
processing. All other stages are intermediate stages. The data storage is
maintained by the node handling stage $s_{0}$. Each stage contains a
video/image detection model—where typically the model at stage $s_{i}$
(denoted $m_{i}$) has better detection than model $m_{j}$, where $j<i$. A
transaction consists of $m$ sections, each one ($t_{i}$) corresponding to a
stage ($s_{i}$).
Processing. When a frame $f$ is received, it is first sent to the initial
stage, $s_{0}$. The initial stage processes $f$ using $m_{0}$ and takes the
outcome of the model to process the first section of the transaction $t_{0}$.
Then, the frame is processed at the next stage $s_{1}$—using $m_{1}$—and the
outcome is used to trigger transaction $t_{1}$. This continues until the final
stage. If bandwidth thresholding is performed at any stage, then the sequence
from initial to final stages might be broken. For example, if at stage
$s_{i}$, the bandwidth thresholding algorithm (as presented earlier in the
section) decides that the frame does not need to be forwarded to the next
stage, then the sequence stops and the remaining transaction sections are
performed.
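The generalized stage-by-stage loop can be sketched as follows; the callables for models, transaction sections, and the thresholding decision are hypothetical placeholders.

```python
# A sketch of the generalized m-stage processing loop: each stage runs
# its model, triggers the corresponding transaction section t_i, and
# bandwidth thresholding may break the sequence at any stage s_i.

def process_frame(frame, models, sections, should_forward):
    """Run stages s_0..s_{m-1}; stop early when thresholding says so."""
    executed = []
    for i, (model, section) in enumerate(zip(models, sections)):
        labels = model(frame)
        section(labels)          # trigger transaction section t_i
        executed.append(i)
        # Bandwidth thresholding may decide not to forward the frame.
        if i < len(models) - 1 and not should_forward(labels):
            break
    return executed
```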
## 4 Multi-Stage Transactions
### 4.1 Multi-Stage Transaction Model
We consider a new multi-stage transaction model where every transaction
comprises two distinct sections: the initial section and the final section.
Each section, $s$—in a transaction $t$—consists of read ($r_{t}^{s}(x)$) and
write ($w_{t}^{s}(x)$) operations in addition to control operations to begin
($b_{t}^{s}$) and commit ($c_{t}^{s}$) each section. For example, consider a
multi-stage transaction $t$. The execution of the transaction would look like
the following: $b_{t}^{i}\ r_{t}^{i}(x)\ w_{t}^{i}(y)\ c_{t}^{i}\ b_{t}^{f}\
w_{t}^{f}(z)\ c_{t}^{f}$ where $i$ stands for the initial section and $f$
stands for the final section.
If the initial section of a transaction commits (called initial commit), then
the final section must begin and commit (called final commit) as well. When we
say that a transaction $t$ in our model has committed, we mean that both
sections of $t$ have committed. Furthermore, the final section of a
transaction cannot begin before the initial section. The case for conflicts of
transactions also demands special consideration. In our model, we say that two
transactions are conflicting if there is at least one conflicting operation
in either of the sections. The seemingly simple abstraction of splitting every
transaction into two sections complicates the basic notions of the general
transaction model. In the following, we take a look at safety and describe two
notions of consistency in our model.
### 4.2 Safety
In the absence of concurrent activity, safety is straightforward; the initial
section is followed by the final section and both are processed as the
programmer expects. When concurrency—which is important for performance—is
introduced, it challenges the programmer’s notion of the sequentiality of
running transactions and multi-stage sections (other conflicting transactions
may run within and between a transaction’s sections.)
For example, consider an application where there are two transactions, $t_{1}$
and $t_{2}$, each of which increments the value of a data object $x$ by one.
Suppose that, for each transaction, the initial section consists of reading the
value of $x$; the value is increased, and the new value is written in the
final section. Therefore, if the two transactions executed concurrently and
both $t_{1}$ and $t_{2}$ read the same value of $x$, then the final value of
$x$ would only increase by one. This is an anomaly because there were two
transactions that incremented the value of $x$ and the value of $x$ should
have increased by two.
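The anomaly can be demonstrated directly; this small sketch interleaves two such transactions exactly as described above.

```python
# Lost-update anomaly: both transactions read x in their initial
# section and write the incremented value in their final section,
# so one increment is lost.

db = {"x": 0}

def initial_section(ctx):
    ctx["read"] = db["x"]          # read in the initial section

def final_section(ctx):
    db["x"] = ctx["read"] + 1      # write in the final section

t1, t2 = {}, {}
initial_section(t1)
initial_section(t2)                # t2 reads the same value as t1
final_section(t1)
final_section(t2)                  # t2 overwrites t1's increment
# db["x"] ends up 1, not the expected 2.
```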
Safety in our model differs from traditional notions in two ways: it concerns
actions between sections and not only within a transaction, and it is not
about merging conflicting copies but about a wrong trigger or wrong input.
Evidently, multi-stage consistency adds to the complexity
involved in traditional consistency guarantees such as serializability in two
ways: (1) multi-stage transactions consist of two separate stages. This means
that in addition to the concern of concurrent transactions interleaving
operations within each section, there is a need to consider whether sections
of transactions running _between_ the sections of other transactions should be
permitted. (2) in multi-stage transactions, inconsistency is not only due to
concurrent activity, but also due to erroneous transactions that have an
incorrect trigger or input (_e.g._ , an erroneously detected building in the
edge stage of processing leads to triggering the wrong transaction and/or
supplying it with the wrong input.)
Due to these differences, we revisit transactional consistency in light of
multi-stage transactions. We present and discuss two variants of multi-stage
transaction consistency. In both variants, we assume that traditional
concurrency control mechanisms are used to ensure that each section is
serializable relative to other transactions’ sections. (This means that each
section is atomic and isolated from other sections and that there is a total
order on sections.) This leaves the novel challenge to safety that is
introduced in our work, which is how these sections can be reordered relative
to each other.
### 4.3 Multi-Stage Serializability (MS-SR)
In MS-SR, we mimic the safety principles of serializability, which
is—informally—a guarantee that all transactions execute with the illusion of
some serial order of all transactions [22]. When trying to project this to
multi-stage transactions, this translates to the requirement that all
transactions are processed serially, where the final section of a transaction
appears immediately after the initial section. This guarantee can be reduced
to serializability by considering that the initial and final sections are part
of the same serializable transaction. The main difference is that when the
initial section commits, it is a guarantee that the final section would
eventually commit—it cannot abort due to unresolved conflicts. As we will see
in the rest of this section, this requirement complicates the processing of
the initial section.
In order to specify MS-SR formally, we introduce some notations and state our
assumptions. We denote with $<_{h}$, the ordering relation on execution
history of transaction sections. This relation represents the ordering
relative to the commitment rather than the beginning of the section. For
example, $s_{a}<_{h}s_{b},$ denotes that the left-hand side is ordered before
the right-hand side, i.e., section $s_{a}$ is ordered before section $s_{b}$.
Consider two conflicting transactions $t_{k}$ and $t_{j}$ (i.e., they have at
least one conflicting operation in either section), where $s_{k}^{i}$ has
initially committed before $s_{j}^{i}$ initially committed. MS-SR guarantees
the following: (1) the final section of the first transaction, $s_{k}^{f}$ ,
must commit after $s_{k}^{i}$. This is the guarantee of multi-stage
transactions to commit the initial section before the final section of the
transaction. (2) $s_{k}^{f}$ must commit before $s_{j}^{f}$. This is due to
the MS-SR guarantee that the two sections of the transaction must be ordered
next to each other relative to other conflicting transactions. (3) $s_{k}^{f}$
must be ordered before $s_{j}^{i}$ only if there is a conflict between
$s_{k}^{f}$ and $s_{j}^{i}$. This is also due to the need to serialize the
sections of two conflicting transactions. The condition of the conflict
between $s_{k}^{f}$ and $s_{j}^{i}$ is to capture that if the two sections do
not conflict, then they can be reordered in the serializable history. These
conditions are represented by the following formulation, where (a) captures
both conditions (1) and (2), and (b) captures condition (3):
MS-SR: (a) $\exists t^{s}\
\left(s_{k}^{i}<_{h}s_{j}^{i}\implies(s_{k}^{i}<_{h}t^{s}<_{h}s_{j}^{f}\wedge
t^{s}=s_{k}^{f})\right)$; (b) if there is a conflict in
$s_{k}^{f},s_{j}^{i}$, then $s_{k}^{f}<_{h}s_{j}^{i}$.
We elaborate on Example 4.2 to demonstrate the need for MS-SR(a) and MS-SR(b).
As an example of MS-SR, consider the two transactions:
$t_{k}:b^{i}_{t_{k}}r^{i}_{t_{k}}(x)c^{i}_{t_{k}}b^{f}_{t_{k}}w^{f}_{t_{k}}(x)c^{f}_{t_{k}}$
and
$t_{j}:b^{i}_{t_{j}}r^{i}_{t_{j}}(x)c^{i}_{t_{j}}b^{f}_{t_{j}}w^{f}_{t_{j}}(x)c^{f}_{t_{j}}$.
Further assume that $s^{i}_{k}<_{h}s^{i}_{j}.$ Condition MS-SR(a) above
guarantees that $s^{f}_{k}$ is committed after $s^{i}_{k}$ and before
$s^{f}_{j}$, i.e., we have $s^{i}_{k}<_{h}s^{f}_{k}<_{h}s^{f}_{j}.$ With MS-
SR(a) alone, the following
$s^{i}_{k}<_{h}s^{i}_{j}<_{h}s^{f}_{k}<_{h}s^{f}_{j}$ is permitted. However,
because $s^{f}_{k}$ conflicts with $s^{i}_{j}$, then the two sections must be
ordered according to MS-SR(b) and the following ordering relations must be
met: $s^{i}_{k}<_{h}s^{f}_{k}<_{h}s^{i}_{j}<_{h}s^{f}_{j}$. This ordering
avoids the anomaly of both transactions reading the same value of $x$, but one
overwriting the value written by the other.
Now, we present a protocol that guarantees MS-SR.
Two Stage 2PL (TSPL): The Two Stage 2PL is the two phase locking protocol [32]
modified for our multi-stage transactional model (See Algorithm 1.) Let
$t_{k}$ be a multi-stage transaction comprising of $t^{i}_{k}$ and
$t^{f}_{k}$. First, the initial section starts executing, locking each
accessed data item before reading or writing it. After the initial section
finishes processing, the initial commitment cannot be performed immediately.
This is because we need to guarantee that the final section can execute and
commit as well, due to the requirement of multi-stage transactions. Therefore,
the locks of all items that are accessed (or potentially accessed) by the
final section must be acquired first. Then, the transaction enters the initial
commit phase. Once all the needed input is available for the final section
(_e.g._ , the corrected labels from the cloud model), the final section
executes, and the transaction enters the final commit phase. Finally, all the
locks are released.
items $\leftarrow$ get_rwsets($t^{i}_{k}$)
if _acquirelocks(items)_ then
    execute($t^{i}_{k}$)
    items $\leftarrow$ get_rwsets($t^{f}_{k}$)
    if _acquirelocks(items)_ then
        Initial Commit
        execute($t^{f}_{k}$)
        Final Commit
    else
        abort
    end if
else
    abort
end if
releaselocks(get_rwsets($t^{i}_{k}$))
releaselocks(get_rwsets($t^{f}_{k}$))
Algorithm 1 Two Stage 2PL
###### Theorem 1.
The TSPL protocol satisfies MS-SR.
###### Proof.
Consider a pair of conflicting transactions $t_{p}$ and $t_{q}$, where
$t^{i}_{p}<_{h}t^{i}_{q}$. Following Algorithm 1, each section is serialized
relative to each other section because locks are held before execution. Now,
we show that the three conditions of MS-SR of ordering sections relative to
each other are met. The first guarantee is ordering the initial section before
the final section. The algorithm executes the initial section before the final
section, which guarantees their ordering. The second guarantee is that $t_{p}^{f}$
is ordered before $t_{q}^{f}$. There is at least one data object $o$ that both
$t_{p}$ and $t_{q}$ access. Because the final section is only executed after
all locks are held for the transaction (including the lock for $o$),
$t_{p}^{f}$ would be processed before $t_{q}^{f}$. The third guarantee is that
if $t_{p}^{f}$ conflicts with $t_{q}^{i}$, then $t_{p}^{f}<_{h}t_{q}^{i}$.
Assume that the conflict is on data object $o$. Assume to the contrary that
$t_{q}^{i}<_{h}t_{p}^{f}$. If that’s the case, this means that $t_{q}^{i}$
acquired the lock on $o$ before $t_{p}^{f}$ and before the point of initial
commitment (because initial commitment only happens after acquiring all locks
including the locks for the final section). Because the locks (including the
one on $o$) are not released until $t_{q}$ finishes, this means that before
the lock on $o$ is released, $t_{q}$ has initially committed. However, $t_{p}$
initially commits only after acquiring the lock on $o$, which means that
$t^{i}_{q}<_{h}t^{i}_{p}$, which is a contradiction to our starting assumption
that $t^{i}_{p}<_{h}t^{i}_{q}$. ∎
Discussion. Although MS-SR is an easy-to-use consistency guarantee, it leads
to complications and undesirable performance characteristics. The main
complication is due to the need to guarantee that committing the initial
section would lead to committing the final section. With the stringent
requirement that the two sections are serialized so that they appear to be
back-to-back in the serialization order, this leads to having to ensure that
the locks for the final section can be acquired. The design consequence as we
see in the TSPL algorithm is that the initial section cannot commit before
acquiring the locks of the final section. This leads to one of two
consequences: (1) the system can infer what data will be accessed (or
potentially accessed) in the final section so that the locks can be acquired
and the initial commit happens before having to wait for the cloud model to
finish processing, or (2) the transaction would not be able to initially
commit until the cloud model returns the correct labels so that it is known
what data items are going to be accessed. The first option may require complex
analysis or input from the programmer and the second option is prohibitive as
it means that the initial section has to wait for a potentially long time,
which invalidates the goals of multi-stage transactions. Another complication
is that the locks for the initial section must be held until the final section
finishes processing which would lead to higher contention.
### 4.4 Multi-Stage Invariant Confluence with Apologies (MS-IA)
Now, we propose a multi-stage safety criterion that is inspired by invariant
confluence [23] and apologies [24]. The initial-final pattern of multi-stage
transactions invites the utilization of these concepts as we discuss next.
Guesses and Apologies. The concept of guesses and apologies [24] was
introduced to describe a pattern of programming that balances local action
versus global action (for example, a local action on a replica versus global
action on the state of all replicas in the context of eventual consistency).
In this pattern, a _guess_ is performed with local information and, then,
guesses are reconciled with the global state which would lead to detecting
inconsistencies in the local guesses. Such errors lead to _apologies_ via
undoing actions, administrator intervention, and/or notifications to affected
users.
This pattern of guesses and apologies fits our multi-stage edge-cloud
transaction model. The initial section represents the guess and the final
section represents the apology. To illustrate, consider an example of a multi-
player AR game with three players: $A$ with 50 tokens, $B$ with 10 tokens, and
$C$ and $D$ with no tokens. The application has a token transfer function
transfer(from, to, amount). The initial section performs the transfer, and the
final section reconciles any mistakes. Now, assume that the initial section of
a transfer $t_{1}$ from $A$ to $B$ for 50 tokens took place. Then, the initial
section of a transfer $t_{2}$ from $B$ to $C$ for 10 tokens took place
followed by another transfer $t_{3}$ from $B$ to $C$ for 50 tokens. Due to
concurrency, assume that the final section of both $t_{2}$ and $t_{3}$ were
performed and that their trigger and inputs were correct. In this case, the
final section terminates for both transactions. Then, the final section of
$t_{1}$ starts. However, the correct input to $t_{1}$ turns out to be $D$
instead of $B$ (for example, because the edge CNN model detected player $B$
when it is actually player $D$ as detected by the cloud CNN model.) An apology
procedure in the final section could retract the effects of $t_{1}$ and any
other transactions that depended on it, which are $t_{2}$ and $t_{3}$.
Using guesses and apologies allows us to process the initial sections of
transactions fast while providing a mechanism to overcome the mistakes of the
edge best-effort computation. However, it may lead to a cascade of
retractions. To overcome this, we propose combining the concept of apologies
with invariant confluence as we show next.
Invariant Confluence. In invariant confluence, preserving the application-
level invariants is what constitutes a safe execution. In its original form,
invariant confluence is intended to reason about transactions mutating the
state of different copies of data [23]. Our edge-cloud model is different,
involving mutating the state of one (edge) copy. However, an inconsistency
might be introduced by the initial section of a transaction with erroneous
trigger/input. Our insight is that we can utilize the final (apology) section
to act as the _merge_ function that attempts to reconcile application-level
invariants instead of all potential inconsistencies. In a way, we are flipping
the model of invariant confluence systems from a pattern of _check-then-apply_
(check if the operation can merge, and decide whether coordination is
necessary before doing the operation), to a pattern of _apply-then-check_ (do
the operation then check whether you can merge, and if you cannot merge, then
perform an apology procedure and retract the initial section’s effects.)
MS-IA programming pattern. This pattern, when combined with apologies, can
lead to reducing the negative consequences of erroneous triggers/inputs.
Consider the multi-player AR game application introduced above (when
discussing apologies). Assume that the initial sections of $t_{1}$, $t_{2}$,
and $t_{3}$, were processed as well as the final sections of $t_{2}$ and
$t_{3}$. At this stage, $A$, $B$, and $D$ have no tokens and $C$ has 60
tokens. When the error is discovered, it triggers the final section of
$t_{1}$. A programmer, equipped with the notions of invariant confluence and
apologies, writes the final section to attempt to perform two tasks: (1)
retract the minimum amount of erroneous actions and their effects using
apologies, and (2) retain as much state as possible using invariant-preserving
merge functions. The specifics of this pattern depend on the application
invariants. For example, the final section of the transfer tasks could have
the invariant that no player should have less than 0 tokens. The final section
of $t_{1}$ would retract the 50 tokens that were initially sent to $B$ and
send them to the rightful recipient, player $D$. This means that $B$ could
not have sent a combined 60 tokens to $C$. The merge function can then decide
to retain the 10 tokens sent from $B$ to $C$, since they are not affected by
the error. But, it retracts the 50 tokens. This retraction is accompanied by
an apology that depends on the application (_e.g._ , a message is sent to both
$B$ and $C$, with a free game item.)
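The apology/merge logic for this token example can be sketched concretely. This is an illustration of the pattern, not the paper's implementation; the balances and the apology message are hypothetical.

```python
# Illustrative apology/merge for the final section of t1: the correct
# recipient turns out to be D, so the erroneous 50-token credit to B
# (and its downstream effect via t3) is retracted, the unaffected
# 10-token transfer t2 is retained, and the invariant "no balance
# below zero" is preserved.

balances = {"A": 0, "B": 0, "C": 60, "D": 0}  # state after t1..t3 finals

def apology_t1(balances):
    # Retract the 50 erroneous tokens: they flowed B -> C via t3.
    balances["C"] -= 50                 # undo the dependent transfer t3
    balances["D"] += 50                 # deliver to the rightful recipient
    # t2 (10 tokens B -> C) is retained: it is unaffected by the error.
    assert all(v >= 0 for v in balances.values()), "invariant violated"
    return "apology sent to B and C"    # e.g., a message and a free item

msg = apology_t1(balances)
```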
In terms of the concurrency control guarantee that is needed for MS-IA, the
initial section of a transaction must be ordered before its corresponding
final section (in addition to our earlier assumption that each section is
serialized relative to other transactions’ sections). Formally, for an initial
section, $s_{k}^{i}$, the following is true:
MS-IA: $\exists t^{s}\ \left(s_{k}^{i}<_{h}t^{s}\wedge t^{s}=s_{k}^{f}\right)$
items $\leftarrow$ get_rwsets($t^{i}_{k}$)
if _acquirelocks(items)_ then
    execute($t^{i}_{k}$)
end if
Initial Commit
releaselocks(get_rwsets($t^{i}_{k}$))
items $\leftarrow$ get_rwsets($t^{f}_{k}$)
if _acquirelocks(items)_ then
    execute($t^{f}_{k}$)
else
    abort
end if
Final Commit
releaselocks(get_rwsets($t^{f}_{k}$))
Algorithm 2 MS-IA Algorithm
Concurrency control. The concurrency control algorithm starts by acquiring all
the locks for the initial section, then processing the initial section. When
the processing of the initial section is done, the locks are released. Then,
when the final section is ready to start, the corresponding locks are acquired
before processing the final section. Finally, the locks for the final section
are released. Note here that unlike the algorithm for MS-SR, we did not hold
the locks for the initial section until the end of the final section and we
reach the point of initial-commit immediately after processing the initial
section without having to wait to lock or coordinate the final section. The
reason for this is that the logic for invariant checking and apologies is
embedded in the final section and that we do not need to ensure that the
initial and final sections of one transaction are serialized next to each
other.
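The key difference in lock scope between the two protocols can be sketched with simple per-object locks. This is a minimal illustration of the MS-IA discipline, assuming a lock table and sorted acquisition order (an assumption, to avoid deadlock); it is not the system's concurrency control implementation.

```python
# MS-IA lock discipline from Algorithm 2: locks are held only for the
# duration of each section, unlike TSPL where initial-section locks
# persist until the final commit.
import threading

locks = {"x": threading.Lock(), "y": threading.Lock()}

def run_section(rwset, body):
    """Acquire the section's read/write set, execute, then release."""
    for obj in sorted(rwset):       # fixed acquisition order avoids deadlock
        locks[obj].acquire()
    try:
        return body()
    finally:
        for obj in rwset:
            locks[obj].release()

# The initial section commits and releases its locks before the final
# section (which may wait on the cloud labels) even begins.
initial_result = run_section({"x"}, lambda: "initial commit")
final_result = run_section({"y"}, lambda: "final commit")
```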
Discussion. MS-IA achieves better performance characteristics than MS-SR, but
presents a more complex programming abstraction because it places the burden
of coordination (invariant checking, reconciliation, and apologies) on the
programmer. In MS-IA, transactions are written as guesses (in the initial
section) and apologies (in the final section). Furthermore, apologies are
merge functions that aim to reconcile the inconsistencies caused by incorrect
triggers or inputs. Given our apply-then-check pattern, it is possible that
some operations cannot be merged. In such cases the final section would undo
the effects of the initial section—and any transactions dependent on it. We
envision that this pattern of multi-stage guesses and apologies can
incorporate advances in merge operators that would allow minimizing the need
for undoing transactions. For example, programmers may use merge-able
operations in the initial sections and delay other operations to the final
section. This can benefit from—and help empower—the literature of conflict-
free and compositional data types. These can be adapted to the initial-final
pattern by making merge-able parts in the initial section and enabling other
types of operations in the final section.
In validation-based (optimistic) protocols, which operate in the context of a
single transaction, the outcome of the transaction is not returned to the
client and is not exposed to other transactions before validation. Applying
validation-based protocols as they are in the edge-cloud setting would be
prohibitive because a transaction would not commit until the validation step,
which would happen after cloud processing, is ready. The MS-IA pattern, on the
other hand, divides the transaction logic into two sections, each acting as an
independent transaction, where the first one commits before the second section
starts. This allows returning responses to clients and exposing the outcome to
other transactions (even before the final section and without having to wait
for the processing at the cloud).
### 4.5 Multi-Partition Operations
The transaction processing protocols presented in this section focus on
transactions that are local to a partition. For distributed transactions
(spanning multiple partitions), the presented algorithms need to be extended
in two ways. First, the data objects accessed by a transaction (whether in
the initial or final section) can reside in multiple partitions; locking a
data object in a remote partition is performed by sending the lock request to
the remote edge node responsible for that partition. Second, after the
transaction finishes, the partitions engage in a two-phase commit protocol to
ensure that the distributed commit is performed atomically. This atomic
commitment step is performed in the following cases: (1) for MS-SR, at the
end of the final section; (2) for MS-IA, at the end of both the initial and
final sections. This step is not needed at the end of the initial section in
MS-SR because the locks are not released until the end of the corresponding
final section.
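A minimal sketch of the two-phase commit step described above, assuming a coordinator that drives per-partition prepare/commit calls; the class and method names are illustrative, not from the Croesus implementation.

```python
# Two-phase commit across partitions: phase 1 stages writes and collects
# votes; phase 2 commits everywhere only if all partitions voted yes.

class Partition:
    def __init__(self):
        self.staged, self.data = None, {}
    def prepare(self, writes):         # phase 1: stage writes and vote
        self.staged = writes
        return True                    # vote "yes" (no local conflict here)
    def commit(self):                  # phase 2: make staged writes durable
        self.data.update(self.staged)
        self.staged = None
    def abort(self):
        self.staged = None

def two_phase_commit(writes_by_partition):
    parts = list(writes_by_partition)
    # Phase 1: every involved partition must vote yes.
    if all(p.prepare(w) for p, w in writes_by_partition.items()):
        for p in parts:
            p.commit()                 # Phase 2: atomic across partitions
        return True
    for p in parts:
        p.abort()                      # any "no" vote aborts everywhere
    return False

p1, p2 = Partition(), Partition()
ok = two_phase_commit({p1: {"x": 1}, p2: {"y": 2}})
print(ok, p1.data, p2.data)  # True {'x': 1} {'y': 2}
```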
Figure 2: Croesus vs. state-of-the-art baselines: latency and F-score of
running Croesus over four videos. Some values are too small to be visible in
the figure.
## 5 Evaluation
In this section, we show how Croesus manages the trade-off between performance
and accuracy of two models with different characteristics: (1) YOLOv3 [27, 33]
as the cloud model, which is reported to achieve 45 FPS on high-end hardware
and achieves high accuracy. (2) Tiny YOLOv3 [33, 27]—which is a compact
version of YOLOv3—for the edge model. Tiny YOLOv3 is faster but less accurate
than YOLOv3 [33].
We compare Croesus with two baselines:

* State-of-the-art edge baseline: this baseline represents performance-centric video analytics applications, where a compact model (Tiny YOLOv3) is deployed on the edge machine for lower latency.
* State-of-the-art cloud baseline: this baseline represents accuracy-centric video analytics applications, where a computationally expensive model (YOLOv3) is deployed on a resourceful cloud machine for better accuracy.
Figure 3: Croesus latency vs. accuracy for different pairs of thresholds

Figure 4: Latency in different setups for the optimal case that was dynamically configured by Croesus

Table 1: Comparison between state-of-the-art edge and cloud and optimal-threshold Croesus (latency values in parentheses are initial-transaction latencies)

| Video | Accuracy: Croesus | Accuracy: Edge | Accuracy: Cloud | Latency (ms): Croesus | Latency (ms): Edge | Latency (ms): Cloud |
|---|---|---|---|---|---|---|
| v1 | 0.81x | 0.5x | 1 | 427.02 (226.16) | 210.74 | 1452.5 |
| v2 | 0.8x | 0.45x | 1 | 434.81 (224.41) | 207.97 | 1427.69 |
| v3 | 0.83x | 0.86x | 1 | 225.63 (218.17) | 211.19 | 1455.66 |
| v4 | 0.85x | 0.41x | 1 | 863.96 (235.02) | 214.65 | 1638.89 |
### 5.1 Experimental setup
Our evaluations are performed on Amazon's AWS EC2 services. Edge machines are
implemented on either t3a.xlarge instances (for the default setups) or
t3a.small instances (for experiments with limited resources). t3a.small
machines have 2 virtual CPUs and 2 GiB of memory, and t3a.xlarge machines
have 4 virtual CPUs and 16 GiB of memory. Machine locations are either in
California or Virginia. The default setup is an edge machine in California
and a cloud machine in Virginia. We implement a prototype of Croesus in
Python. In addition to model detection, the edge node maintains a data store
and processes transactions according to the MS-IA algorithm. Transactions are
constructed by randomly selecting keys to read or write in the database in
response to detected labels.
We evaluate accuracy and performance as follows. Accuracy is measured as the
F-score. Performance is measured in two ways: (1) latency, which we define as
the time required to commit transactions in the system, and (2) edge-cloud
bandwidth utilization (BU), which we define as the ratio of frames sent to
the cloud relative to all processed frames. This metric is proportional to
the number of corrections that need to be made in the final transaction. We
consider the YOLOv3 output to be the ground truth and use it to compare
Croesus' results and calculate the F-score. When the overlap between the
truth boundaries and the predicted boundaries is more than 10%, we consider
the prediction correct. The calculation of the F-score does not depend on the
percentage of frames that are sent to the cloud, but rather on the accuracy
of the detection from the perspective of the client (i.e., the accuracy of
the detection _and_ apologies, if any). Sending more frames to the cloud
does, however, correlate with higher accuracy, as more errors are corrected
by the more accurate cloud model.
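The F-score computation can be sketched as below, assuming "overlap" is measured as intersection-over-union between boxes; the exact overlap definition used by Croesus may differ.

```python
# Sketch: F-score over detections where a prediction counts as a true
# positive if its overlap (here, IoU) with an unmatched truth box exceeds
# the 10% threshold from the text. Box format (x1, y1, x2, y2) is assumed.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def f_score(truth, predicted, thresh=0.10):
    matched, tp = set(), 0
    for p in predicted:
        for i, t in enumerate(truth):
            if i not in matched and iou(p, t) > thresh:
                matched.add(i)
                tp += 1
                break
    fp, fn = len(predicted) - tp, len(truth) - tp
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

truth = [(0, 0, 10, 10)]
pred = [(1, 1, 11, 11)]
print(f_score(truth, pred))  # 1.0 (the single prediction matches)
```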
Experiments run on a subset of five types of videos: street traffic
(vehicles), street traffic (pedestrians), mall surveillance (all three
querying for 'person'), an airport runway querying for 'airplane', and a home
video of a pet in the park querying for 'dog'. Each detection acquired for
each frame triggers a transaction that has 6 operations; half of these mutate
the state of the database by inserting data items, and the other half read
previously added items. This mimics the write-heavy workload of YCSB
(Workload A) [34]. Unless we mention otherwise, we use MS-IA as the
consistency guarantee.
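A transaction following this workload description might be generated as in the sketch below; the key format and key-space size are assumptions.

```python
# Sketch of the workload described above: each detection triggers a
# transaction of 6 operations, half inserts and half reads of previously
# inserted items (mimicking write-heavy YCSB Workload A).

import random

def make_transaction(label, inserted_keys, n_ops=6):
    ops = []
    for _ in range(n_ops // 2):
        key = f"{label}:{random.randrange(10_000)}"
        ops.append(("write", key, label))   # mutate the database state
        inserted_keys.append(key)
    for _ in range(n_ops - n_ops // 2):
        # read an item that was previously added by some transaction
        ops.append(("read", random.choice(inserted_keys)))
    return ops

keys = ["seed:0"]   # ensure reads have something to pick from
txn = make_transaction("person", keys)
print(len(txn))     # 6
```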
### 5.2 Experimental results
#### 5.2.1 Performance vs. accuracy trade-off
Figure 2 shows the trade-off between the latency and accuracy as BU varies on
four videos: park video (v1), street traffic (v2), airport runway (v3) and
mall surveillance (v4). For each video, we compare different BU configurations
with the state-of-the-art edge and cloud solutions. In the figure, the stacked
bars represent the latency breakdown for each experiment. Edge latency and
cloud latency represent the average time needed to send a frame to the edge
and to the cloud, respectively. The edge detection latency and cloud detection
latency are defined as the average time it takes the tiny YOLOv3 and YOLOv3
models, respectively, to produce the detected objects list in a frame. The
initial-transaction and final-transaction latencies are very small and hard
to show in the figure; they represent the time it takes to commit a
transaction after detection is done. The F-score metric is shown as a marked
line.
As shown in Figure 2, Croesus processes transaction updates in the initial
phase (measured by edge latency and edge detection latency) up to $6.9\times$
faster than the case with full BU, while maintaining high accuracy (an
F-score up to $94\%$ in the case of "airport runway") by utilizing the cloud
corrections and the final transaction. The client observes two latencies: the
first is the real-time initial processing at the edge, which corresponds to
edge latency, edge detection latency, and initial-transaction latency; the
second is the final processing after corrections, if any, from the cloud,
which corresponds to all the latency types shown in the figure. As BU
increases, the number of frames sent to the cloud, and consequently the
average cloud-related latencies, increase. When BU is 100%, the total cloud
latency of Croesus becomes even higher than the state-of-the-art cloud
baseline because it incurs all the overheads of the state-of-the-art cloud in
addition to the overhead of Croesus' methods.
The trend of increasing Croesus cloud latency as BU increases is observed in
videos 1, 2, and 4. However, a unique trend appears for video 3 (querying for
‘airplane’ on the airport runway video). In this video, the state-of-the-art
edge produces high accuracy due to the nature of the video (an object that is
detected by the edge model with high confidence). This underscores the need
for dynamic optimization over the detection thresholds for different
applications in order to address workload differences. Croesus' dynamic
optimization ensures the best balance of the trade-off between accuracy and
latency depending on the needs of each application.
Figure 3 demonstrates the effect of choosing different thresholds on the
latency in Croesus. We demonstrate the results using the street traffic video
querying for vehicles. It shows the total Croesus cloud latency and the BU
percentage as the threshold pairs for detections are varied. For example, a
threshold pair (0.5, 0.6) means that only detections whose confidence values
in the edge model fall within these two values are sent to the cloud for
verification. Detections with lower confidence values are discarded, and ones
with higher confidence values are assumed correct by the edge node and are
not verified (however, erroneous detections are still accounted for in the
F-score).
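The routing rule described above can be sketched as follows; the detection tuples and the boundary handling at exactly the threshold values are assumptions.

```python
# Threshold-pair routing: detections with edge-model confidence inside
# (lower, upper) go to the cloud for verification; below the lower threshold
# they are discarded, above the upper threshold they are accepted at the edge.

def route(detections, lower, upper):
    accept, verify = [], []
    for label, conf in detections:
        if conf < lower:
            continue                       # discard low-confidence detections
        if conf <= upper:
            verify.append((label, conf))   # uncertain: verify in the cloud
        else:
            accept.append((label, conf))   # confident: accept at the edge
    return accept, verify

dets = [("car", 0.3), ("car", 0.55), ("person", 0.9)]
accept, verify = route(dets, 0.5, 0.6)
print(accept)  # [('person', 0.9)]
print(verify)  # [('car', 0.55)]
```

With this rule, the bandwidth utilization is simply the fraction of frames whose detections land in the verify band.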
When the thresholds are set to (0.5, 0.5), the resulting BU is $0\%$ since no
frames will be sent to the cloud for validation. The resulting accuracy is
comparable to the edge-only baseline at $58\%$. For a threshold pair of
(0.5, 0.6), the latency increases due to more results being validated in the
cloud; the resulting BU is $38.5\%$ while the F-score increases by $25\%$.
When the BU reaches $97.2\%$, the accuracy reaches $99.8\%$. For thresholds
(0.6, 0.7), the BU is only $4\%$ lower than the BU of the thresholds
(0.5, 0.6); however, the F-score decreases by more than $21.24\%$. This shows
that although two pairs may have similar BU values, their corresponding
F-scores can be significantly different. It indicates the importance of
dynamically optimizing for an optimal pair of thresholds that balances the
trade-off between latency and accuracy while prioritizing thresholds that
yield higher accuracy.
Table 2: The effect of the cloud model size.

| Croesus cloud model | Optimal threshold | Croesus F-score | Bandwidth Utilization | Detection latency (sec) |
|---|---|---|---|---|
| YOLOv3-320 | (0.2, 0.3) | 0.84 | 0.61 | 0.70 |
| YOLOv3-416 | (0.4, 0.5) | 0.86 | 0.44 | 1.12 |
| YOLOv3-608 | (0.4, 0.6) | 0.83 | 0.58 | 2.34 |
Another observation from Figure 3 is that the rate at which the bandwidth
utilization increases is faster than the rate of F-score increase over
different threshold pairs. This is an indicator that increasing dependence on
the cloud does not necessarily improve accuracy dramatically.
Figure 5: Croesus bandwidth utilization vs. accuracy based on the threshold
pair choice. a) traffic video querying "person" ($\mu=0.90$) and b) mall
surveillance querying "person" ($\mu=0.80$). For all pairs of lower threshold
$(\theta_{L})$ and upper threshold $(\theta_{U})$. Dynamically chosen pair:
yellow star using brute force, red star using gradient step.
The effect of changing the cloud model size in Croesus is demonstrated in
Table 2. In this experiment, we set $\mu=0.8$ and compare the performance of
Croesus while using three different cloud model sizes: YOLOv3-320, YOLOv3-416,
YOLOv3-608, where the number at the end of each model’s name represents the
width and height used in the neural network model. Therefore, a larger number
indicates a larger model. As the cloud model size gets larger, the detection
latency gets larger as well. This is the main impact of utilizing different
model sizes. The different models have different accuracy characteristics as
well. However, using them in the Croesus framework does not demonstrate such
differences in the resulting F-score and BU. This is because the optimal
thresholds are set based on the used cloud model to achieve the desired
minimum accuracy, $\mu$.
#### 5.2.2 Optimal threshold performance on different setups
Figure 4 shows the accuracy and performance results of Croesus for different
videos when using the optimal threshold. These experiments run across four
different setups: (a) Small edge, different locations: edge machines are of
type t3a.small while cloud machines are of type t3a.xlarge; edge machines are
located in California and cloud machines in Virginia. (b) Small edge, same
location: edge machines are of type t3a.small while cloud machines are of
type t3a.xlarge; edge and cloud machines are physically located in the same
location. (c) Regular edge, different locations: edge and cloud machines are
both of type t3a.xlarge; edge machines are located in California and cloud
machines in Virginia. (d) Regular edge, same location: edge and cloud
machines are physically located in the same location and are both of type
t3a.xlarge.
This figure demonstrates the improvement in latency that the optimal
thresholds provide compared with the performance shown in Figure 2 (For a
clearer presentation, we show the comparison numbers in Table 1, where the
number inside the parentheses in Croesus is the latency of the initial
transaction.). Also, it shows the effect of resource allocation and
geographical location on performance, and the importance of dynamic threshold
optimization to address the differences in applications.
When applying the optimal thresholds, we see an improvement in the final
latency over the state-of-the-art cloud implementation of up to $85\%$ (and
as low as $47\%$ in the case of v4). In addition, the latency of committing
the initial transaction is always comparable to the state-of-the-art edge
solution. Even though the final transaction in Croesus can take up to $75\%$
longer than the edge-only implementation, the accuracy improvement is
significant and can justify the slight delay after the initial transaction.
In addition, the F-score of optimal Croesus is 2.1x higher than the F-score
of edge-only in video v4. In the case of video v3, the accuracy is comparable
to the state-of-the-art accuracy because the optimal thresholds represent
near-$0\%$ bandwidth utilization. This is possible in applications where
objects are expected to be easier to detect in each frame. The figure also
shows that Croesus' performance improves as the geographical distance between
the edge and the cloud decreases (when placed in the same location), and when
edge resources are maximized.
Figure 6: (a) Comparing lock contention of MS-SR and MS-IA measured as the
average latency of holding locks. (b) Abort rate of MS-SR transactions. (c)
Hybrid system techniques.
#### 5.2.3 Dynamic preprocessing optimization
Figure 5 shows the bandwidth utilization and accuracy as we vary the
optimization thresholds (the lower threshold $\theta_{L}$ and the upper
threshold $\theta_{U}$). The heatmaps illustrate the gradual shift in the
balance between bandwidth utilization and accuracy.

Bandwidth utilization/accuracy trade-off. Figure 5(a1) for BU and Figure
5(a2) for F-score show the trend where increasing the lower threshold and the
gap between the two optimization thresholds results in a higher throughput.
For example, when the optimization pair is (0.2, 0.4), the F-score is $98\%$
since this pair of thresholds results in a high BU at $92\%$. However, when
the optimization pair is (0.3, 0.4), the bandwidth utilization drops to
$59\%$ while the F-score remains relatively high at $92\%$. We are thus able
to conserve edge-cloud communication by more than 35.9% while maintaining
relatively high accuracy.
Figures 5(b1) for BU and 5(b2) for F-score show the same trends as the
previous set of heatmaps. However, we notice a sudden jump in the bandwidth
utilization and F-score results. This is due to the quality of the second
video, where objects are smaller and less clear than in the first video. In
this case, utilizing edge-cloud communication increases the quality of
detections dramatically compared to edge-only detections. For example, for
the optimization pair (0.4, 0.5), 81% of frames are sent to the cloud and the
F-score is 92%. However, when the optimization pair is (0.4, 0.4), no frames
are sent to the cloud and the F-score decreases to 45%.
Dynamically finding the optimal solution. We implemented two approaches to
acquire the optimized pair of thresholds. The first is a brute-force method
that evaluates the whole space of threshold pairs; with it, we obtain the
optimal pair for balancing the trade-off (shown as a yellow star). The second
approach uses a gradient step with our optimization formulation and is 2.2x
faster (shown as a red star). In both cases, bandwidth utilization is below
$78\%$ and accuracy is at least $49\%$ higher than the edge model's.
#### 5.2.4 Comparing MS-SR and MS-IA
In the next set of experiments, we measure the performance differences between
the two proposed consistency levels: MS-SR and MS-IA. (In this set of
experiments we use video v4 with the query “person”.) The main difference
between the two consistency levels is that the locks in the initial section of
MS-SR are held until the end of the whole transaction, whereas in MS-IA, the
locks are released after the initial section. This increases lock contention
in MS-SR. Figure 6(a) shows the difference in contention by measuring the
average time locks are held in MS-SR and MS-IA (denoted average latency in
the figure). While the average latency of MS-IA is on the order of
milliseconds, the average latency of initial sections in MS-SR is on the
order of hundreds of milliseconds. This is because the locks are not released
until the final section is performed, which means that the locks are held
while the frame is being processed by the cloud model, which takes a
significant amount of time.
The contention difference leads to a high likelihood of aborts in MS-SR.
Figure 6(b) shows the abort rate of transactions in MS-SR while emulating a
high-contention scenario of hot spots with different sizes. The x-axis (key
range) is the key range of the hot spot that the transactions are trying to
access. In this model, transactions are executed in batches of 50
transactions per batch, where each transaction has 5 update operations. The
figure shows that the abort rate can be significant when the hot spot has a
size of less than 10K keys. This demonstrates the benefit of using MS-IA to
overcome the hot-spot contention problems that arise with MS-SR. The figure
does not show the abort rate of MS-IA transactions, as the rate is 0% in all
cases. This is because our implementation uses a single-threaded sequencer to
order transactions in batches so that conflicting transactions do not
overlap, which is possible as the transactions do not have to hold locks for
prolonged durations.
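A single-threaded sequencer of this kind can be sketched as follows; representing each transaction by its key set is a simplification of the actual implementation.

```python
# Sequencer sketch: within a batch, a transaction that conflicts with one
# already scheduled is deferred to a later batch, so conflicting transactions
# never overlap and none need to abort.

def sequence(transactions):
    """transactions: list of key sets. Returns batches of non-conflicting txns."""
    batches, pending = [], list(transactions)
    while pending:
        batch, batch_keys, deferred = [], set(), []
        for txn in pending:
            if txn & batch_keys:
                deferred.append(txn)   # conflicts: push to the next batch
            else:
                batch.append(txn)
                batch_keys |= txn
        batches.append(batch)
        pending = deferred
    return batches

txns = [{"a", "b"}, {"b", "c"}, {"d"}]
print(len(sequence(txns)))  # 2 (the {"b","c"} txn is deferred to batch 2)
```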
#### 5.2.5 Hybrid edge-cloud techniques
Hybrid edge-cloud techniques have been proposed to process object detection
models [35, 36, 1, 37]. These techniques generally work by performing some
pre-processing steps at the edge node before sending the frame to be detected
at the cloud. We compare with two such techniques that were utilized in
various forms in prior work [36, 1]: (1) _compression_ in which the frame is
compressed before sending it to reduce the communication bandwidth and
latency, and (2) _difference communication_ in which only the difference
between the current frame and a reference frame is sent to the cloud. These
techniques, if implemented in isolation, would achieve a small improvement
over the performance of the state-of-the-art cloud baseline that we compared
with, as they would still require sending all frames for detection in the
cloud. We show this in the evaluations on the park video v1 with the larger
cloud model (YOLOv3-608) in Figure 6(c), under cloud+compression and
cloud+compression+difference. These evaluations apply the hybrid techniques,
which improve the latency as less data needs to be sent. However, the
improvement is small because the latency is dominated by the detection
latency at the cloud.
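The difference-communication idea can be sketched over flat pixel arrays as below; real systems [36, 1] operate on encoded video frames, so this is only a toy illustration.

```python
# Difference communication sketch: send only the pixels that changed relative
# to a reference frame; the receiver reconstructs the full frame.

def encode_diff(reference, frame):
    # Send only (index, new_value) pairs for changed pixels.
    return [(i, v) for i, (r, v) in enumerate(zip(reference, frame)) if r != v]

def decode_diff(reference, diff):
    frame = list(reference)
    for i, v in diff:
        frame[i] = v
    return frame

ref = [0, 0, 0, 0]
frame = [0, 9, 0, 7]
diff = encode_diff(ref, frame)
print(diff)                    # [(1, 9), (3, 7)]
print(decode_diff(ref, diff))  # [0, 9, 0, 7]
```

The savings come entirely from the smaller payload; the cloud-side detection cost is unchanged, which is why the end-to-end improvement is small.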
An alternative view of these techniques is as methods to augment the
edge-cloud model of Croesus. Figure 6(c) also shows how augmenting Croesus
with compression can improve the final commit latency (under
Croesus+compression and Croesus+compression+difference). The improvement is
small because the model detection latency in the cloud is the dominant
latency (as shown in the previous evaluations).
## 6 Related Work
The requirement of real-time processing has been tackled by real-time
databases (RTDB) [38], which aim to process data in predictably short time.
Our method differs by managing the trade-off between performance and
accuracy, providing the illusion of processing that is both fast and
accurate. Hybrid edge-cloud models (and similar caching-based models) have
recently been used [35, 36, 1, 37] to take advantage of cloud computing to
process data on neural networks while also leveraging resources at the edge.
Our work extends these efforts by providing a multi-stage transactional model
that enables programmers to reason about this hybrid edge-cloud model. In
particular, these hybrid edge-cloud models can be augmented with the
edge-cloud model of Croesus to improve the edge-to-cloud latency. However,
when hybrid edge-cloud models are used in isolation, they incur the high
costs of edge-to-cloud communication for all frames, since they require
performing the detection in the cloud.
The multi-stage transaction model differs from existing abstractions in that
each transaction is split into two asymmetrical sections. This makes
traditional consistency models [22] unsuitable for multi-stage transactions.
The pattern of initial-final sections resembles work on eventual consistency
[39] and Transaction Chains [40] but differs in one main way: the
inconsistencies in the multi-stage model are external to the database; they
are caused by erroneous inputs or triggers. In eventual consistency and
Transaction Chains, inconsistency is caused by concurrent operations across
different copies. This leads to similarities and differences, which led us to
adapt prior relevant literature. Multi-stage transactions also resemble work
on long-lived transactions (LLTs), such as Sagas [41]. Multi-stage
transactions can be viewed as a special case of LLTs—with a transaction and a
follow-up correction/compensation transaction—which enables simpler and more
efficient solutions.
We view Croesus as a data-layer solution that builds on top of asymmetric
environments which, like edge-cloud, may include the lambda architecture
[42], with both batch processing (slower but more accurate) and
speed/real-time processing (faster but less accurate). The contributions of
Croesus can be applied to the lambda environment [43] by using multi-stage
transactions (where the initial section is processed after real-time
processing and the final section is processed after batch processing), thus
providing Croesus' benefits to lambda programmers.
## 7 Conclusion
We presented Croesus, a multi-stage processing system for video analytics,
and a multi-stage transaction model that optimizes the trade-off between
performance and accuracy. We presented two variants of transactional
consistency for multi-stage transactions—multi-stage serializability and
multi-stage invariant confluence with apologies. Our evaluation demonstrates
that multi-stage processing is capable of managing the accuracy-performance
trade-off and that this model provides both immediate real-time responses and
high accuracy.
Although we have presented the concept of multi-stage processing and
transactions in the context of edge-cloud video analytics and processing [44,
45, 46, 47, 48], these concepts are relevant to many problems that share the
pattern of needing immediate response and complex processing. Our future work
explores these applications. One area of future work is to apply this pattern
of multi-stage processing to blockchain systems with off-chain components [49,
50, 51]. In such a case, the first stage is performed in the off-chain
component while the final stage is performed after validation from the
blockchain. Another area we plan to explore is to integrate the multi-stage
processing structure with global-scale edge placement and reconfiguration [52,
53]. This will allow utilizing multi-stage processing more efficiently by
controlling where the stages are performed and what edge/cloud datacenters to
utilize.
## 8 Acknowledgement
This research is supported in part by the NSF under grant CNS-1815212.
## References
* [1] D. Kang, J. Emmons, F. Abuzaid, P. Bailis, and M. Zaharia, “Noscope: optimizing neural network queries over video at scale,” _arXiv preprint arXiv:1703.02529_ , 2017.
* [2] B. Wu, X. Dai, P. Zhang, Y. Wang, F. Sun, Y. Wu, Y. Tian, P. Vajda, Y. Jia, and K. Keutzer, “Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 10 734–10 742.
* [3] Y. He, J. Lin, Z. Liu, H. Wang, L.-J. Li, and S. Han, “Amc: Automl for model compression and acceleration on mobile devices,” in _Proceedings of the European Conference on Computer Vision (ECCV)_ , 2018, pp. 784–800.
* [4] H. Cai, C. Gan, T. Wang, Z. Zhang, and S. Han, “Once-for-all: Train one network and specialize it for efficient deployment,” _arXiv preprint arXiv:1908.09791_ , 2019.
* [5] P. Lincoln _et al._ , “From motion to photons in 80 microseconds: Towards minimal latency for virtual and augmented reality,” _IEEE transactions on visualization and computer graphics_ , vol. 22, no. 4, pp. 1367–1376, 2016.
* [6] S. Chen _et al._ , “Vehicle-to-everything (v2x) services supported by lte-based systems and 5g,” _IEEE Communications Standards Magazine_ , vol. 1, no. 2, pp. 70–76, 2017.
* [7] S. Han, H. Mao, and W. J. Dally, “Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding. arxiv 2015,” _arXiv preprint arXiv:1510.00149_ , 2019.
* [8] H. Kim, M. U. K. Khan, and C.-M. Kyung, “Efficient neural network compression,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2019, pp. 12 569–12 577.
* [9] J.-H. Luo, J. Wu, and W. Lin, “Thinet: A filter level pruning method for deep neural network compression,” in _Proceedings of the IEEE international conference on computer vision_ , 2017, pp. 5058–5066.
* [10] K. Ullrich, E. Meeds, and M. Welling, “Soft weight-sharing for neural network compression,” _arXiv preprint arXiv:1702.04008_ , 2017.
* [11] C. Chen, F. Tung, N. Vedula, and G. Mori, “Constraint-aware deep neural network compression,” in _Proceedings of the European Conference on Computer Vision (ECCV)_ , 2018, pp. 400–415.
* [12] Y. Xu, Y. Wang, A. Zhou, W. Lin, and H. Xiong, “Deep neural network compression with single and multiple level quantization,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 32, no. 1, 2018.
* [13] Y. Choi, M. El-Khamy, and J. Lee, “Universal deep neural network compression,” _IEEE Journal of Selected Topics in Signal Processing_ , vol. 14, no. 4, pp. 715–726, 2020.
* [14] A. Dubey, M. Chatterjee, and N. Ahuja, “Coreset-based neural network compression,” in _Proceedings of the European Conference on Computer Vision (ECCV)_ , 2018, pp. 454–470.
* [15] T. Chen, H. Liu, Q. Shen, T. Yue, X. Cao, and Z. Ma, “Deepcoder: A deep neural network based video compression,” in _2017 IEEE Visual Communications and Image Processing (VCIP)_ , 2017, pp. 1–4.
* [16] Z. Liu, T. Liu, W. Wen, L. Jiang, J. Xu, Y. Wang, and G. Quan, “Deepn-jpeg: A deep neural network favorable jpeg-based image compression framework,” in _Proceedings of the 55th Annual Design Automation Conference_ , ser. DAC ’18. New York, NY, USA: Association for Computing Machinery, 2018. [Online]. Available: https://doi.org/10.1145/3195970.3196022
* [17] Y. Li, D. Liu, H. Li, L. Li, Z. Li, and F. Wu, “Learning a convolutional neural network for image compact-resolution,” _IEEE Transactions on Image Processing_ , vol. 28, no. 3, pp. 1092–1107, 2019.
* [18] K. D. Julian, M. J. Kochenderfer, and M. P. Owen, “Deep neural network compression for aircraft collision avoidance systems,” _Journal of Guidance, Control, and Dynamics_ , vol. 42, no. 3, pp. 598–608, 2019.
* [19] D. Racki, D. Tomazevic, and D. Skocaj, “A compact convolutional neural network for textured surface anomaly detection,” in _2018 IEEE Winter Conference on Applications of Computer Vision (WACV)_ , 2018, pp. 1331–1339.
* [20] V. J. Lawhern, A. J. Solon, N. R. Waytowich, S. M. Gordon, C. P. Hung, and B. J. Lance, “Eegnet: a compact convolutional neural network for eeg-based brain–computer interfaces,” _Journal of neural engineering_ , vol. 15, no. 5, p. 056013, 2018.
* [21] Y. Guo, Z. Yang, K. Liu, Y. Zhang, and W. Feng, “A compact and optimized neural network approach for battery state-of-charge estimation of energy storage system,” _Energy_ , vol. 219, p. 119529, 2021.
* [22] P. A. Bernstein, V. Hadzilacos, and N. Goodman, _Concurrency control and recovery in database systems_. Addison-wesley Reading, 1987, vol. 370.
* [23] P. Bailis, A. Fekete, M. J. Franklin, A. Ghodsi, J. M. Hellerstein, and I. Stoica, “Coordination avoidance in database systems,” _Proceedings of the VLDB Endowment_ , vol. 8, no. 3, pp. 185–196, 2014.
* [24] P. Helland and D. Campbell, “Building on quicksand,” in _CIDR 2009, Fourth Biennial Conference on Innovative Data Systems Research, Asilomar, CA, USA, January 4-7, 2009, Online Proceedings_. www.cidrdb.org, 2009. [Online]. Available: http://www-db.cs.wisc.edu/cidr/cidr2009/Paper_133.pdf
* [25] A. Khan, A. Sohail, U. Zahoora, and A. S. Qureshi, “A survey of the recent architectures of deep convolutional neural networks,” _Artificial Intelligence Review_ , vol. 53, no. 8, pp. 5455–5516, 2020.
* [26] V. A. Sindagi and V. M. Patel, “A survey of recent advances in cnn-based single image crowd counting and density estimation,” _Pattern Recognition Letters_ , vol. 107, pp. 3–16, 2018.
* [27] J. Redmon and A. Farhadi, “YOLO9000: better, faster, stronger,” _CoRR_ , vol. abs/1612.08242, 2016. [Online]. Available: http://arxiv.org/abs/1612.08242
* [28] S. Wen, H. Wei, Z. Yan, Z. Guo, Y. Yang, T. Huang, and Y. Chen, “Memristor-based design of sparse compact convolutional neural network,” _IEEE Transactions on Network Science and Engineering_ , vol. 7, no. 3, pp. 1431–1440, 2019.
* [29] K. Zhang, J. Chen, T. Zhang, and Z. Zhou, “A compact convolutional neural network augmented with multiscale feature extraction of acquired monitoring data for mechanical intelligent fault diagnosis,” _Journal of Manufacturing Systems_ , vol. 55, pp. 273–284, 2020.
* [30] D. Racki, D. Tomazevic, and D. Skocaj, “A compact convolutional neural network for textured surface anomaly detection,” in _2018 IEEE Winter Conference on Applications of Computer Vision (WACV)_. IEEE, 2018, pp. 1331–1339.
* [31] Z. Xu and R. C. Cheung, “Accurate and compact convolutional neural networks with trained binarization,” _arXiv preprint arXiv:1909.11366_ , 2019.
* [32] P. A. Bernstein, V. Hadzilacos, and N. Goodman, _Concurrency Control and Recovery in Database Systems_. Addison-Wesley, 1987.
* [33] J. Redmon and A. Farhadi, “Yolov3: An incremental improvement,” _arXiv_ , 2018.
* [34] B. F. Cooper, A. Silberstein, E. Tam, R. Ramakrishnan, and R. Sears, “Benchmarking cloud serving systems with ycsb,” in _Proceedings of the 1st ACM Symposium on Cloud Computing_ , ser. SoCC ’10. New York, NY, USA: Association for Computing Machinery, 2010, p. 143–154. [Online]. Available: https://doi.org/10.1145/1807128.1807152
* [35] T. Y.-H. Chen, L. Ravindranath, S. Deng, P. Bahl, and H. Balakrishnan, “Glimpse: Continuous, real-time object recognition on mobile devices,” in _Proceedings of the 13th ACM Conference on Embedded Networked Sensor Systems_ , ser. SenSys ’15. New York, NY, USA: Association for Computing Machinery, 2015, p. 155–168. [Online]. Available: https://doi.org/10.1145/2809695.2809711
* [36] P. M. Grulich and F. Nawab, “Collaborative edge and cloud neural networks for real-time video processing,” _Proceedings of the VLDB Endowment_ , vol. 11, no. 12, pp. 2046–2049, 2018.
* [37] Y. Kang, J. Hauswald, C. Gao, A. Rovinski, T. Mudge, J. Mars, and L. Tang, “Neurosurgeon: Collaborative intelligence between the cloud and mobile edge,” _ACM SIGARCH Computer Architecture News_ , vol. 45, no. 1, pp. 615–629, 2017.
* [38] S. H. Son, R. David, and B. Thuraisingham, “Improving timeliness in real-time secure database systems,” _ACM SIGMOD Record_ , vol. 25, no. 1, pp. 29–33, 1996.
* [39] P. Bailis and A. Ghodsi, “Eventual consistency today: Limitations, extensions, and beyond,” _Queue_ , vol. 11, no. 3, pp. 20–32, 2013.
* [40] Y. Zhang _et al._ , “Transaction chains: achieving serializability with low latency in geo-distributed storage systems,” in _SOSP_ , 2013.
* [41] H. Garcia-Molina and K. Salem, “Sagas,” _SIGMOD Rec._ , vol. 16, no. 3, p. 249–259, Dec. 1987. [Online]. Available: https://doi.org/10.1145/38714.38742
* [42] M. Kiran, P. Murphy, I. Monga, J. Dugan, and S. S. Baveja, “Lambda architecture for cost-effective batch and speed big data processing,” in _2015 IEEE International Conference on Big Data (Big Data)_ , 2015, pp. 2785–2792.
* [43] J. Warren and N. Marz, _Big Data: Principles and best practices of scalable realtime data systems_. Simon and Schuster, 2015.
* [44] F. Nawab, “Wedgechain: A trusted edge-cloud store with asynchronous (lazy) trust,” in _2021 IEEE 37th International Conference on Data Engineering (ICDE)_. IEEE, 2021, pp. 408–419.
* [45] F. Nawab, D. Agrawal, and A. El Abbadi, “Dpaxos: Managing data closer to users for low-latency and mobile applications,” in _Proceedings of the 2018 International Conference on Management of Data_ , 2018, pp. 1221–1236.
* [46] ——, “Nomadic datacenters at the network edge: Data management challenges for the cloud with mobile infrastructure.” in _EDBT_ , 2018, pp. 497–500.
* [47] S. Gazzaz and F. Nawab, “Collaborative edge-cloud and edge-edge video analytics,” in _Proceedings of the ACM Symposium on Cloud Computing_ , 2019, pp. 484–484.
* [48] N. Mittal and F. Nawab, “Coolsm: Distributed and cooperative indexing across edge and cloud machines,” in _2021 IEEE 37th International Conference on Data Engineering (ICDE)_. IEEE, 2021, pp. 420–431.
* [49] D. Abadi, O. Arden, F. Nawab, and M. Shadmon, “Anylog: a grand unification of the internet of things,” in _Conference on Innovative Data Systems Research (CIDR ‘20)_ , 2020.
* [50] M. Alaslani, F. Nawab, and B. Shihada, “Blockchain in iot systems: End-to-end delay evaluation,” _IEEE Internet of Things Journal_ , vol. 6, no. 5, pp. 8332–8344, 2019.
* [51] F. Nawab and M. Sadoghi, “Blockplane: A global-scale byzantizing middleware,” in _2019 IEEE 35th International Conference on Data Engineering (ICDE)_. IEEE, 2019, pp. 124–135.
* [52] V. Zakhary, F. Nawab, D. Agrawal, and A. El Abbadi, “Global-scale placement of transactional data stores.” in _EDBT_ , 2018, pp. 385–396.
* [53] ——, “Db-risk: The game of global database placement,” in _Proceedings of the 2016 International Conference on Management of Data_ , 2016, pp. 2185–2188.
|
2014, Vol. 1, No. 2, pp. 245–264. DOI: 10.15346/hc.v1i2.12. © 2014, Ponciano & Brasileiro. CC-BY-3.0
# Finding Volunteers’ Engagement Profiles in
Human Computation for Citizen Science Projects
Lesandro Ponciano Universidade Federal de Campina Grande Francisco
Brasileiro Universidade Federal de Campina Grande
###### Abstract
Human computation is a computing approach that draws upon human cognitive
abilities to solve computational tasks for which there are so far no
satisfactory fully automated solutions even when using the most advanced
computing technologies available. Human computation for citizen science
projects consists in designing systems that allow large crowds of volunteers
to contribute to scientific research by executing human computation tasks.
Examples of successful projects are Galaxy Zoo and FoldIt. A key feature of
this kind of project is its capacity to engage volunteers. An important
requirement for the proposal and evaluation of new engagement strategies is
having a clear understanding of the typical engagement of the volunteers;
however, even though several projects of this kind have already been
completed, little is known about this issue. In this paper, we investigate the
engagement pattern of the volunteers in their interactions in human
computation for citizen science projects, how they differ among themselves in
terms of engagement, and how those volunteer engagement features should be
taken into account for establishing the engagement encouragement strategies
that should be brought into play in a given project. To this end, we define
four quantitative engagement metrics to measure different aspects of volunteer
engagement, and use data mining algorithms to identify the different volunteer
profiles in terms of the engagement metrics. Our study is based on data
collected from two projects: Galaxy Zoo and The Milky Way Project. The results
show that the volunteers in such projects can be grouped into five distinct
engagement profiles that we label as follows: hardworking, spasmodic,
persistent, lasting, and moderate. The analysis of these profiles provides a
deeper understanding of the nature of volunteers’ engagement in human
computation for citizen science projects.
keywords: citizen science, human computation, engagement, participation,
retention
## 1 Introduction
Human computation is a computing approach based on harnessing human cognitive
abilities to solve computational tasks for which there are so far no
satisfactory fully automated solutions even when using the most advanced
computing technologies currently available Quinn and Bederson (2011). Examples
of such tasks may be found in the areas of natural language processing, image
understanding, and creativity. Such tasks often arise in scientific
applications related to disciplines such as biology, linguistics, and
astronomy Wiggins and Crowston (2012); Lintott and Reed (2013). As a result,
it has become common among scientists to start projects to recruit ordinary
people for executing human computation tasks, which we call human computation
for citizen science projects. Citizen science can be broadly defined as a
partnership between scientists and ordinary people willing to contribute to an
authentic scientific research effort Cohn (2008); Dickinson et al. (2012);
Lintott and Reed (2013). A large range of activities can be carried out by
ordinary people in citizen science Goodchild (2007); Cohn (2008); Wiggins and
Crowston (2012). Those activities may require only some simple abilities, such
as data collecting and reporting, or more complex cognitive abilities such as
data aggregation and classification. In human computation for citizen science
projects, participants contribute by executing tasks that require cognitive
abilities. Examples of projects with such feature are Galaxy Zoo Lintott et
al. (2008) and FoldIt Cooper et al. (2010).
The contribution behaviour of people taking part in this type of project can
be examined in the light of two different research approaches centered on the
notions of voluntarism Clary et al. (1998); Wilson (2000) and human engagement
O’Brien and Toms (2008); Simpson (2009); Lehmann et al. (2012). Voluntarism
literature usually distinguishes between two different types of contribution
behaviour: helping activity behaviour and volunteerism behaviour Clary et al.
(1998); Wilson (2000). Helping activity behaviour designates a form of
_sporadic_ participation in which the individual is faced with an unexpected
request to help someone to do something. Volunteerism behaviour, on the other
hand, refers to a kind of _planned_ behaviour. Volunteers are usually
actively seeking out opportunities to help others. They typically commit
themselves to an ongoing relationship at considerable personal cost in terms
of dedicated time or cognitive effort. Drawing this distinction between
helping activity and voluntarism seems to us to be important also in the
context of human computation for citizen science projects. A recent
characterization of the behaviour of volunteers in such projects brings to
light the existence of two main groups of participants: transient and regular
Ponciano et al. (2014b). Transient participants exhibit a helping behaviour,
whereas the behaviour of regular participants fits into the definition of
volunteerism. Not surprisingly, volunteers typically constitute a minority
among the participants, and execute the largest part of tasks in the project.
Thus, a key feature for the success of a human computation for citizen science
project is the capacity to foster such kind of sustained contribution
behaviour.
Fostering sustained contribution behaviour is an issue that has been widely
addressed in human engagement studies. Current literature on human engagement
focuses on the human behaviour when individuals are self-investing personal
resources such as time, physical energy, and cognitive power Bakker and
Demerouti (2008); O’Brien and Toms (2008); Simpson (2009); Lehmann et al.
(2012); McCay-Peet et al. (2012). Studies in this area usually focus on both
qualitative and quantitative dimensions of engagement by (i) analysing the
psychological factors behind engagement/disengagement such as motivation,
satisfaction, and frustration; and (ii) measuring the level of engagement
quantitatively in terms of the degree of contribution and the duration of the
contribution.
Several studies have been devoted to the understanding of psychological
factors of volunteer engagement in human computation for citizen science
projects Raddick et al. (2010); Rotman et al. (2012); Jennett et al. (2014);
Nov et al. (2014), while few studies have focused on quantitative estimation
of the level of engagement of the volunteers Ponciano et al. (2014b). The lack
of studies with this perspective is an important constraint because a
fundamental requirement for proposing and evaluating new engagement strategies
is having a clear understanding of how volunteers typically behave in such
situations. This study aims at filling this gap by providing a quantitative
analysis of the nature of engagement of volunteers by using log data related
to their execution of tasks. Three research questions are addressed in this
study: $1)$ how engaged the volunteers are during their interaction with the
project; $2)$ what similarities and differences they exhibit among themselves
in terms of engagement; and $3)$ how the engagement characteristics of the
volunteers can be exploited for establishing the engagement strategies to be
implemented in a given project.
In order to answer these questions, we go through existing human engagement
studies and, based on the concepts and theories put forward, we propose the
following four metrics to measure the level of engagement of each volunteer:
activity ratio, relative activity duration, daily devoted time, and variation
in periodicity. Activity ratio is a measure of the return rate of the
volunteer to the project during the period that he/she stays contributing to
it. Daily devoted time is a measure of the length of the daily engagement.
Relative activity duration, in turn, is a measure of the duration of the
volunteer’s long-term engagement. Finally, variation in periodicity informs us
about the deviation in the periodicity with which the volunteer executes tasks
in the project. By using hierarchical and k-means algorithms, we cluster the
volunteers according to the values of their engagement metrics in order to
find out the different engagement profiles that arise from their natural
behaviour within the project.
We analyse volunteer engagement profiles according to the data collected from
two popular projects hosted at the Zooniverse platform: Galaxy Zoo and The
Milky Way Project. These projects ran for almost 2 years between 2010 and 2012
and involved more than one billion executed tasks and thousands of
participants, which turns them into valuable sources for the analysis of a
wide range of engagement aspects of the volunteers. In both projects, we found
five different clusters of volunteers based on visual inspection and statistical
measures. Each cluster stands for a distinct engagement profile brought forth by
the behaviour shown by the volunteers during their participation in the
projects. The distinct engagement profiles brought to light in this way are
labelled as: hardworking, spasmodic, persistent, lasting, and moderate.
Hardworking engagement is characterised by larger activity ratio, low
variation in periodicity and shorter relative activity duration. Volunteers
who exhibit this type of engagement profile typically work hard and regularly
when arriving at the project, but may leave the project quickly. Spasmodic
engagement is distinguished by a relatively high activity ratio and moderate
variation in periodicity. Volunteers who exhibit this engagement profile
provide an intense contribution over a short period of time, with irregular
periodicity within this period. Persistent engagement, in turn, is
characterised by a larger activity duration and low activity ratio. Volunteers
who exhibit a persistent engagement profile remain in the project for a long
period of time but contribute only a few days within this time period. Lasting
engagement, in turn, is characterised by an engagement pattern similar to
persistent engagement, with the difference that volunteers exhibit here a much
shorter activity duration. Finally, moderate volunteers have intermediate
scores in all categories of engagement metrics.
Regarding the distribution of the volunteers per profile, the highest
percentage of volunteers ($30\%$ in The Milky Way Project and $31\%$ in Galaxy
Zoo) exhibits a moderate engagement profile, while few volunteers ($13\%$ in
The Milky Way Project and $16\%$ in Galaxy Zoo) show persistent engagement.
Given the total amount of human effort time required to execute all the tasks
in the project, the aggregate time devoted by volunteers who exhibit a
persistent engagement profile accounts for $40\%$ of total time in The Milky
Way Project and $46\%$ in Galaxy Zoo; this is the volunteer profile that
stands for the largest contribution.
The method we propose to measure the engagement of volunteers and set up
engagement profiles has been shown to be satisfactory in bringing to light the
main similarities and differences among the volunteers. The fact that the
results thus obtained are consistent throughout different projects strengthens
the thesis that engagement profiles can arise in various other projects.
Several other discussions can be drawn from our analysis. For example, the
engagement profiles enable the development of new recruitment strategies to
attract volunteers with a desired engagement profile as well as the design of
personalised engagement strategies that focus on improving specific
engagement metrics. Finally, our results call for further theoretical and
qualitative studies that investigate the motivation of volunteers in the light
of the distinct engagement profiles they may exhibit. The combination of a
quantitative analysis of volunteer engagement and the psychological factors
established in qualitative studies will advance our comprehension about the
engagement patterns of volunteers in human computation and citizen science.
In this study we put forward three main contributions. First, we propose four
metrics to measure the level of engagement of volunteers with regard to both
the duration of the period of engagement with the project and the degree of
engagement during this period. Furthermore, we provide a deeper quantitative
assessment of volunteer engagement profiles derived from two popular human
computation for citizen science projects. To the best of our knowledge, this
is the first study assessing natural engagement profiles in volunteer task
execution behaviour in this type of project. Finally, this study allows us to
go beyond previous studies by covering a larger number of volunteers and
bringing forth engagement aspects which have so far not been identified in
studies focusing on qualitative methodologies.
The rest of this work is organised as follows. We provide first a background
of human engagement studies and discuss relevant previous work. Next we
describe our method to measure the volunteer engagement and identify
engagement profiles. Finally, we present an analysis of volunteer engagement
in Galaxy Zoo and The Milky Way Project.
## 2 Background and Related Work
This study builds on a broad set of studies covering volunteer engagement,
human computation and citizen science projects. In this section, we first
provide a background to the subject of human engagement. Thereafter, we
discuss the related work.
### 2.1 What is engagement and how to approach it
The subject of human engagement has been studied within a variety of
disciplines, such as education Meece et al. (1988), management science Simpson
(2009) and computer science O’Brien and Toms (2008). Some studies make an
attempt to conceptualize the term engagement in an interdisciplinary
perspective González-Romá et al. (2006); Bakker and Demerouti (2008); O’Brien
and Toms (2008); Simpson (2009); Lehmann et al. (2012); McCay-Peet et al.
(2012). A consensus that emerges from these studies is that engagement means
to participate in any enterprise by self-investing personal resources, such as
time, physical energy, and cognitive power.
O’Brien and Toms (2008) provide a conceptual framework to study human
engagement with technology. This framework establishes that the entire process
of engagement is comprised of four stages: point of engagement, period of
sustained engagement, disengagement and reengagement. The point of engagement
is the time at which the human performs the first action in the system. The
period of sustained engagement is the continuous period of time in which
he/she keeps on performing actions in the system. Disengagement occurs when
the period of sustained engagement ends. Finally, reengagement denotes new
engagement cycles composed of the first three stages. Studies of this
process involve at least four dimensions: type of engagement, psychological
factors of engagement, duration of engagement, and degree of engagement.
The type of engagement is defined by the kind of personal resources and skills
that humans invest in performing an activity. Examples of types of engagement
are social engagement Porges (2003) and cognitive engagement Corno and
Mandinach (1983). Social engagement refers to actions that require humans to
interact with others. It is widely studied in areas such as online social
networks and communities Preece (2000); Millen and Patterson (2002). Cognitive
engagement refers to actions that require mainly human cognitive effort. It
has been widely addressed in educational psychology and work engagement Meece
et al. (1988); Simpson (2009).
The psychological factors of engagement are related to the motives leading to
a point of engagement, disengagement and reengagement, such as motivation,
satisfaction, perceived control, and frustration. Studies have proposed and/or
instantiated various theories in order to construct a framework of theories
that explain the psychological factors behind human engagement González-Romá
et al. (2006); O’Brien and Toms (2008). These theories include the self-
determination theory Deci and Ryan (2000) and the self-efficacy theory Bandura
(1977). The self-determination theory establishes that human motivation can be
broadly divided into intrinsic motivations, associated with inner personal
reward, and extrinsic motivations, associated with earning an external reward
or avoiding a punishment. The self-efficacy theory, in turn, advances the idea
that perceived human efficacy determines if an individual will initiate an
activity, how much effort will be expended, and how long the activity will be
sustained.
The duration of engagement measures the duration of the period of sustained
engagement, sometimes called retention. It expresses how long a human keeps
interacting with the system. It is short-term engagement when it occurs during a relatively
short period of time (e.g. minutes or hours), and long-term engagement when it
lasts a long period of time (e.g. months or years). In short-term engagement,
the point of engagement is the point in time at which the individual performs
the first action within the system, the period of engagement is the time span
under which he/she keeps interacting with the system in a continuous working
session, and the point of disengagement is the point in time at which the
working session ends. In long-term engagement, the point of engagement is the
point in time at which the individual performs the first action within the
system, the period of engagement refers to the number of days under which
she/he keeps on interacting with the system, and the point of disengagement
refers to the day when he/she leaves the system. Thus, long-term engagement
may consist of several short-term engagement cycles.
Finally, the degree of engagement is a quantitative measure of the degree of
participation during the period of sustained engagement. It can also be viewed
as a measure of the amount of resources invested by humans in participating in
the system. Measuring the degree of engagement has proven a challenging task.
Some studies use surveys to collect information about how humans perceive
their level of engagement and hence estimate their degree of engagement (e.g.,
O’Brien and Toms (2010); McCay-Peet et al. (2012)). Other studies use
behavioural data stored in logs of the system to measure the degree of
engagement (e.g. Lehmann et al. (2012)).
### 2.2 Related work
The dimensions of engagement presented in the last section are helpful for
framing previous studies on engagement. There is an extensive body of work
dealing with engagement in technology-mediated social participation systems
Kraut et al. (2010) such as wiki-based systems Butler et al. (2002); Bryant et
al. (2005); Butler et al. (2008); Schroer and Hertel (2009); Preece and
Shneiderman (2009); Niederer and Van Dijck (2010); Liu and Ram (2011); Welser
et al. (2011); Zhu et al. (2012), open source software projects Hertel et al.
(2003); Niederer and Van Dijck (2010), and human computation for citizen
science projects Raddick et al. (2010); Rotman et al. (2012); López et al.
(2012); Mao et al. (2013); Jennett et al. (2014).
Wiki-based systems such as Wikipedia provide means that allow participants to
engage in a broad range of activities, such as the insertion of a sentence in
an article, modification of an existing reference, reverting an article to a
former version etc Butler et al. (2008); Liu and Ram (2011); Welser et al.
(2011). Participants assume different roles in the system when some of them
focus on performing a single type of activity, and others focus on performing
other types of activities Butler et al. (2008); Niederer and Van Dijck (2010);
Liu and Ram (2011). Such roles characterise different types of engagement in
the system. The motivation of the participants and their perception of their
own roles usually change as they become more active in the system Bryant et
al. (2005); Burke and Kraut (2008); Schroer and Hertel (2009); Preece and
Shneiderman (2009). Since such systems provide a collaborative environment,
the behaviour of some of the participants may also affect the behaviour of
others Butler et al. (2002); Zhu et al. (2012).
Studies on open source software (OSS) projects, in turn, have focused on
understanding the psychological factors that lead participants to engage in
OSS projects, and the kind of rewards they expect Hertel et al. (2003);
Roberts et al. (2006). For example, Hertel et al. (2003) show that
psychological factors appeared to be similar to those behind voluntary action
within social movements such as the civil rights, labour, and peace movements.
Studies on Apache projects suggest that there are also interrelationships
between motivation and degree of engagement Roberts et al. (2006). Extrinsic
motivation, such as monetary and status within the system, leads to above
average contribution levels, while intrinsic motivations do not significantly
impact average contribution levels.
Differently from Wiki-based systems, in which there is a diversity of types of
engagement, the role played by volunteers in human computation for citizen
science projects is mainly the execution of well defined human computation
tasks, although some projects allow volunteers to carry out social engagement
activities, for instance interacting in forums Fortson et al. (2012); Luczak-
Roesch et al. (2014). In such projects, as in the case of studies in wiki-
based systems and OSS projects, the psychological factor is the dimension of
engagement that has received most attention Raddick et al. (2010); Rotman et
al. (2012); Jennett et al. (2014); Nov et al. (2014).
Raddick et al. (2010) analyse the motivations of volunteers in the Galaxy Zoo
project. It is shown that, among $12$ categories of motivations mentioned by
the volunteers, the most mentioned category is interest in astronomy, which is
the theme of the project. Rotman et al. (2012) and Rotman et al. (2014) show
that the motivation of volunteers changes dynamically throughout the period of
their contribution to the projects. Jennett et al. (2014) analyse factors that
led volunteers to dabble and/or drop-out in the Old Weather project. The
analysis shows that such volunteers are less motivated, though they
care about the project and the quality of the work they perform. Thus,
projects should be designed to encourage both dabbling and commitment. Nov et
al. (2014) analyse motivation factors that affect the quality and the
quantity of contributions to citizen science projects.
In general, these studies clarify several aspects of why volunteers engage in
human computation for citizen science projects. However, little progress has
been made in terms of understanding how to measure volunteer engagement and to
uncover natural patterns in which the engagement occurs. This fact constitutes
an important shortcoming because a key feature of this kind of project is its
capacity to engage volunteers. A clear understanding of how volunteers
typically engage with such kinds of projects is fundamental for proposing and
evaluating new strategies to encourage engagement.
## 3 Finding Engagement Profiles
In this section, we first present the metrics proposed to measure the degree
of engagement and the duration of engagement of volunteers. Then, we present a
strategy to cluster volunteers based on the values of these metrics. This
clustering allows the identification of profiles of
volunteers exhibiting similar engagement patterns.
### 3.1 Measuring engagement
We characterise volunteers according to how they score in different engagement
metrics. Engagement metrics are measures of volunteer interaction and
involvement with the project. The engagement metrics proposed in this section
are based on the conceptual framework proposed by O’Brien and Toms (2008). By
using this framework, we analyse the engagement over time of volunteers taking
into account their points of engagement, periods of sustained engagement,
disengagements and reengagements.
Figure 1 shows the structure of the time line of a volunteer during
participation in a project. This figure shows five concepts used in the
calculations of our metrics: the time the volunteer could potentially remain
linked to the project, the days the volunteer remains linked to the project, the
active days, the time devoted on an active day, and the number of days elapsed
between two active days. Our metrics are designed to measure the engagement of
participants that exhibit an ongoing contribution and have contributed on at
least two different days. By doing so, we focus on participants that are more
likely to fit into the voluntarism definition Clary et al. (1998); Wilson
(2000).
Figure 1: Structure of the time line of a volunteer in a project, highlighting
the active days and working sessions on the active days.
The time a volunteer $i$ can potentially remain linked to the project is the
number of days elapsed between the day in which the volunteer joined the
project and the day in which the project is concluded. It is denoted by
$w_{i}$ days. An active day of a volunteer $i$ is a day on which this
volunteer is active in the project. We consider that a volunteer is active on
a particular day if he/she executes at least one task during that day. We
define $A_{i}$ as the sequence of dates in which the volunteer $i$ is active.
The time devoted on a specific active day is the sum of the time duration of
the contribution sessions of the volunteer on that active day. Contribution
sessions are continuous short periods of time during which the volunteer keeps
executing tasks. We define $D_{i}$ as the multiset of the amount of time the
volunteer $i$ devotes to the project on each active day. The time elapsed
between two active days is the number of days it took the volunteer to
return to the project since the latest active day. We define $B_{i}$ as the
multiset of the number of days elapsed between every two sequential active
days. Considering $w_{i}$, $A_{i}$, $D_{i}$ and $B_{i}$, we can derive metrics
to measure the degree and the duration of engagement of each volunteer.
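The derivation of $A_{i}$, $D_{i}$ and $B_{i}$ from a raw task-execution log can be sketched as follows. This is a minimal illustration in Python, not the authors' implementation; the timestamps and the 30-minute session-gap threshold are hypothetical, since the paper does not state the cut-off used to delimit contribution sessions.

```python
from datetime import datetime, date, timedelta

# Hypothetical task-execution log for one volunteer (one timestamp per task).
log = [datetime(2011, 3, 1, 9, 0), datetime(2011, 3, 1, 9, 10),
       datetime(2011, 3, 1, 14, 0), datetime(2011, 3, 4, 20, 0),
       datetime(2011, 3, 4, 20, 25)]
project_end = date(2011, 3, 10)  # assumed project conclusion date

# A_i: sorted dates on which the volunteer executed at least one task.
A = sorted({t.date() for t in log})

# w_i: days between the volunteer joining the project and its conclusion.
w = (project_end - A[0]).days + 1

# D_i: time devoted per active day, summing contribution sessions.
# Gaps above SESSION_GAP start a new session (assumed threshold).
SESSION_GAP = timedelta(minutes=30)
D = {}
for day in A:
    stamps = [t for t in log if t.date() == day]
    D[day] = sum((cur - prev for prev, cur in zip(stamps, stamps[1:])
                  if cur - prev <= SESSION_GAP), timedelta(0))

# B_i: days elapsed between every two sequential active days.
B = [(later - earlier).days for earlier, later in zip(A, A[1:])]
```

With this toy log the volunteer has two active days (devoting 10 and 25 minutes respectively), $w_{i}=10$ days, and a single elapsed-time entry of 3 days in $B_{i}$.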
We define two metrics of degree of engagement: activity ratio and daily
devoted time. Activity ratio ($a_{i}$) is the proportion of days on which the
volunteer was active in relation to the total of days he/she remained linked
to the project. It can be computed as
$a_{i}=\frac{|A_{i}|}{(Max(A_{i})-Min(A_{i}))+1}$, $a\in(0,1]$. The closer to
1, the more assiduous the volunteer is during the time he/she remained linked
to the project. Daily devoted time ($d_{i}$) is the average number of hours the
volunteer spends executing tasks on each day he/she is active. It can be
computed as $d_{i}=avg(D_{i})$, $d\in(0,24]$. The higher the average, the
longer the time the volunteer devotes to the project executing tasks on the
days he/she is active. Note that, because the human computation projects
usually consist of different time-consuming tasks, the time devoted by the
volunteers executing tasks is a better measure of their degree of engagement
than the number of tasks they execute Geiger and Halfaker (2013); Ponciano et
al. (2014b).
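Given $A_{i}$ and $D_{i}$, the two degree-of-engagement metrics reduce to a few lines. The sketch below uses made-up values and is only illustrative:

```python
from datetime import date

# Illustrative inputs: active days A_i and hours devoted on each day (D_i).
A = [date(2011, 3, 1), date(2011, 3, 2), date(2011, 3, 4)]
D = [1.5, 0.5, 1.0]

# Activity ratio a_i = |A_i| / ((Max(A_i) - Min(A_i)) + 1), in (0, 1].
a = len(A) / ((max(A) - min(A)).days + 1)

# Daily devoted time d_i = avg(D_i), in (0, 24] hours.
d = sum(D) / len(D)

print(a, d)  # → 0.75 1.0
```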
We also define two metrics to assess the duration of engagement: relative
activity duration and variation in periodicity. Relative activity duration
($r_{i}$) is the ratio of days during which a volunteer $i$ remains linked to
the project in relation to the total number of days elapsed since the
volunteer joined the project until the project is over ($w_{i}$). It is
defined as $r_{i}=\frac{(Max(A_{i})-Min(A_{i}))+1}{w_{i}}$, $r\in(0,1]$. When
$r_{i}=1$, the volunteer remains linked to the project from the time she/he
joined until the project is completed. The closer to $1$, the more persistent
is the participation of the volunteer in the project. Variation in periodicity
($v_{i}$) is the standard deviation of the times elapsed between each pair of
sequential active days. It is computed as $v_{i}=sd(B_{i})$. When $v_{i}=0$,
the volunteer exhibits a constant elapsed time between each pair of sequential
active days; this indicates that he/she comes back to the project with perfect
periodicity. On the contrary, the larger $v_{i}$, the larger the deviation in
the periodicity with which the volunteer comes back to the project to perform
more tasks.
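The two duration-of-engagement metrics can be sketched analogously. The values below are illustrative; since the paper does not specify whether the sample or the population standard deviation is used for $v_{i}$, the population form is assumed here:

```python
import statistics
from datetime import date

A = [date(2011, 3, 1), date(2011, 3, 2), date(2011, 3, 5), date(2011, 3, 7)]
w = 20  # illustrative: days the volunteer could potentially remain linked

# Relative activity duration r_i = ((Max(A_i) - Min(A_i)) + 1) / w_i.
r = ((max(A) - min(A)).days + 1) / w

# Variation in periodicity v_i = sd(B_i); the population standard
# deviation is an assumption, as the paper does not specify the form.
B = [(later - earlier).days for earlier, later in zip(A, A[1:])]
v = statistics.pstdev(B)
```

Here the volunteer spans 7 of the 20 possible days ($r_{i}=0.35$), with elapsed times $B_{i}=[1,3,2]$ between active days.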
The above engagement metrics fit well into our objective of analysing the
degree of engagement and the duration of engagement of the volunteers.
Activity ratio allows us to analyse the return rate of each volunteer to the
project during the period that he/she stays contributing. Daily devoted time
gives us a view of the length of the daily engagement, which is related to the
duration of the short-term engagement. Relative activity duration allows us to
analyse the duration of long-term engagement weighted by the duration of the
period in which the volunteer can potentially remain linked to the project.
Finally, variation in periodicity informs us about the periodicity of return
during the long-term engagement.
### 3.2 Clustering volunteers according to engagement metrics
We use clustering algorithms to find out groups of volunteers who exhibit
similar values for the engagement metrics. The input to clustering algorithms
is a matrix $|I|\times 4$ in which each row stands for a volunteer $i\in I$
and each column is an engagement metric, i.e. $a$, $d$, $r$, and $v$. As the
results of clustering depend on the relative values of the parameters being
clustered, a normalisation of the parameters prior to clustering would be
desirable Jain (2008). We use range normalisation to scale the values of the
engagement metrics in the interval $[0,1]$. The scaling formula is
$x_{i}=\frac{x_{i}-x_{min}}{x_{max}-x_{min}}$, where $x$ denotes the
engagement metric and $i$ the volunteer.
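This range (min-max) normalisation can be sketched as follows (the function name is ours; it would be applied column-wise, one call per engagement metric):

```python
def range_normalise(values):
    """Scale metric values to [0, 1] via x_i = (x_i - x_min) / (x_max - x_min)."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # degenerate case: all volunteers identical
        return [0.0 for _ in values]
    return [(x - lo) / (hi - lo) for x in values]

# Example: activity ratios of four volunteers
print(range_normalise([0.1, 0.4, 0.7, 1.0]))
```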
To identify the suitable number of clusters, we first run a hierarchical
clustering algorithm and observe its dendrogram, which yields a suitable
interval to test the number of clusters. Next we run k-means, varying the
number of clusters ($k$) in the suggested interval and using as initial
centroids the centres identified in the hierarchical clustering, which usually
reduces the impact of noise and requires less iteration time Lu et al. (2008).
We select thereafter a suitable $k$ and evaluate the quality of the clustering
by computing the within-group sum of squares Anderberg (1973) and Average
Silhouette width Rousseeuw (1987).
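The paper does not specify its tooling; assuming Python with SciPy and scikit-learn, the two-stage procedure (hierarchical clustering to seed k-means) could be sketched as follows, with random data standing in for the normalised $|I|\times 4$ metric matrix:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.random((200, 4))  # stand-in for the |I| x 4 matrix of metrics (a, d, r, v)

k = 5  # number of clusters suggested by inspecting the dendrogram
# Stage 1: hierarchical (Ward) clustering to obtain initial centres
labels_h = fcluster(linkage(X, method="ward"), t=k, criterion="maxclust")
init_centres = np.array([X[labels_h == c].mean(axis=0) for c in range(1, k + 1)])

# Stage 2: k-means seeded with the hierarchical centres
km = KMeans(n_clusters=k, init=init_centres, n_init=1).fit(X)
print(km.cluster_centers_.shape)  # (5, 4)
```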
Within-group sum of squares measures the differences between the volunteers
and the centre of the group to which they belong. The lower the within-group
sum of squares, the better the clustering. It indicates that volunteers
clustered in the same group exhibit similar values for the engagement metrics
and that the centre of the group represents the group adequately. Average
Silhouette width, in turn, measures how well separated and cohesive the groups
are. This statistic ranges from $-1$, indicating a very poor clustering, to
$1$, indicating an excellent clustering. Struyf et al. (1997) propose the
following subjective interpretation of the silhouette statistic: between
$0.71$ and $1.00$, a strong structure has been found; between $0.51$ and
$0.70$, a reasonable structure has been found; between $0.26$ and $0.50$, the
structure is weak and could be artificial, and hence it is recommended that
additional methods of analysis are tried out; less than or equal to $0.25$, no
substantial structure has been found. In this study, a silhouette statistic
larger than or equal to $0.51$ indicates a reasonable partition of the
different patterns of engagement exhibited by the volunteers.
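Both quality measures are readily computed with scikit-learn; the sketch below uses synthetic, well-separated data (an illustration under our own assumptions, not the study's data):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(1)
# Three well-separated synthetic groups in the 4-D metric space
X = np.vstack([rng.normal(loc=c, scale=0.05, size=(50, 4))
               for c in (0.1, 0.5, 0.9)])

km = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)
wss = km.inertia_                      # within-group sum of squares
sil = silhouette_score(X, km.labels_)  # average silhouette width, in [-1, 1]
print(f"WSS = {wss:.2f}, silhouette = {sil:.2f}")
```

On data this cleanly separated the silhouette comfortably exceeds the $0.51$ threshold for a reasonable structure.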
## 4 Engagement Profiles in Galaxy Zoo and The Milky Way Project
In this section we use the proposed method to analyse the engagement of
volunteers in two projects: Galaxy Zoo and The Milky Way Project. We first
introduce these projects and detail the data set collected from them. Then, we
present the results on the quality of clustering in these data sets and the
discovered engagement profiles. Finally, we discuss the results and their
implications.
### 4.1 Datasets
The data used in this study was collected from two human computation for
citizen science projects: Galaxy Zoo Hubble and The Milky Way Project. Both
projects were developed and deployed in the Zooniverse (zooniverse.org)
citizen science platform.
The original Galaxy Zoo Lintott et al. (2008) was launched in July 2007, but
has since been redesigned and relaunched several times. In this project,
participants were asked to answer a series of simple questions about the
morphology of galaxies. Each classifying volunteer on Galaxy Zoo is presented
with a galaxy image captured by either the Sloan Digital Sky Survey (SDSS) or
the Hubble Space Telescope. A decision tree of questions is presented with the
answer to each question being represented by a fairly simple icon. The task is
straightforward and no specialist knowledge is required. In this paper, we
used data from the third iteration of Galaxy Zoo: Galaxy Zoo Hubble. It was
launched in April 2010 and ran until September 2012. It consisted of
$9,667,586$ tasks executed by $86,413$ participants. In The Milky Way Project
Simpson et al. (2012), participants are asked to draw ellipses onto the image
to mark the locations of bubbles. A short online tutorial shows how to use the
tool, and examples of prominent bubbles are given. As a secondary task, users
can also mark rectangular areas of interest, which can be labelled as small
bubbles, green knots, dark nebulae, star clusters, galaxies, fuzzy red objects
or “other”. Users can add as many annotations as they wish before submitting
the image, at which point they are given another image for annotation. We used
data from The Milky Way Project, which was launched in December 2010 and ran
until September 2012. It consisted of $643,468$ tasks executed by $23,889$
participants.
Each entry in the data set refers to one task execution. Each task execution
is described by project_id, task_id, user_id, datetime. The project_id field
is the name of the project. The task_id field is a unique task identifier in
the project. The user_id field is a unique volunteer identifier in the
project. Finally, the datetime field indicates the date and time when the task
was executed. To form volunteers’ working sessions, we use the threshold-based
methodology Geiger and Halfaker (2013); Mehrzadi and Feitelson (2012);
Ponciano et al. (2014b). Following this methodology, we compute the interval
of time elapsed between every two sequential task executions for each
volunteer. Given these intervals, we use the method proposed by Mehrzadi and
Feitelson (2012) to identify for each volunteer a threshold that distinguishes
short intervals from long intervals. Hence, whenever the interval between the
execution of two tasks is not larger than the threshold, the two tasks are
assumed to have been executed in the same working session; otherwise, the
tasks are assumed to have been executed in two different and consecutive
working sessions. For more details about this methodology, see Mehrzadi and
Feitelson (2012).
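The session-splitting step can be sketched as below. Deriving each volunteer's threshold (the Mehrzadi and Feitelson method) is not reproduced here; the function name and the second-based timestamps are our own illustrative choices:

```python
def split_into_sessions(timestamps, threshold):
    """Group one volunteer's sorted task-execution timestamps (in seconds)
    into working sessions: a gap larger than `threshold` starts a new session."""
    sessions, current = [], [timestamps[0]]
    for prev, cur in zip(timestamps, timestamps[1:]):
        if cur - prev > threshold:       # long interval: close the session
            sessions.append(current)
            current = []
        current.append(cur)
    sessions.append(current)
    return sessions

# With a 1-hour threshold, a 2-hour gap separates two sessions:
print(split_into_sessions([0, 60, 120, 7200, 7260], threshold=3600))
```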
In both projects, participants are considered volunteers only if they have
been engaged in at least two days of activity. Only volunteers who arrived
before the last quarter of the total duration time of the project were
considered in the analyses, i.e. the first 502 days of The Milky Way Project
and the first 630 days of the Galaxy Zoo project. As Table 1 shows, the final
dataset consists of $23,547$ volunteers for the Galaxy Zoo and $6,093$
volunteers for The Milky Way Project, whereas $2,485$ volunteers contributed to
both projects. As shown by the descriptive statistics in this table, in both
projects the volunteers differ among themselves significantly in terms of all
the engagement metrics, all of which are significantly non-normal (Kolmogorov-
Smirnov normality tests showing p-value $<0.05$). The variations in the
engagement metrics of the volunteers do not point to any form of anomalous
behaviour, and the observed variability can thus be considered natural
throughout.
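The eligibility filter described above can be sketched as follows (function name and data layout are ours; the 840-day total duration for Galaxy Zoo is an inference from its 630-day cutoff, since 630 = 0.75 x 840):

```python
def eligible_volunteers(activity, project_duration_days):
    """Keep volunteers with at least two active days who arrived before
    the last quarter of the project's total duration."""
    cutoff = 0.75 * project_duration_days
    return {v for v, days in activity.items()
            if len(set(days)) >= 2 and min(days) < cutoff}

activity = {"u1": [10, 11, 50],   # eligible: 3 active days, early arrival
            "u2": [5],            # excluded: only one active day
            "u3": [640, 650]}     # excluded: arrived in the last quarter
print(eligible_volunteers(activity, 840))  # {'u1'}
```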
Table 1: Descriptive statistics of engagement metrics of volunteers in the studied datasets
 | The Milky Way Project | Galaxy Zoo
---|---|---
#Volunteers | 6,093 | 23,547
Activity ratio | $mean=0.40$, $sd=0.40$ | $mean=0.33$, $sd=0.38$
Daily devoted time | $mean=0.44$, $sd=0.54$ | $mean=0.32$, $sd=0.40$
Relative activity duration | $mean=0.20$, $sd=0.30$ | $mean=0.23$, $sd=0.29$
Variation in periodicity | $mean=18.27$, $sd=43.31$ | $mean=25.23$, $sd=49.16$
### 4.2 Clustering
The quality of the clustering as the number of clusters varies
between $2$ and $10$ is shown in Figure 2 for The Milky Way Project and in
Figure 3 for Galaxy Zoo. These figures show that $5$ is the number of groups
that best optimises the trade-off between the number of groups and the within-
group sum of squares (Fig. 2(a) and 3(a)). This number of groups also yields an
average silhouette statistic of $0.53$ in The Milky Way Project (Fig. 2(b))
and $0.51$ in the Galaxy Zoo project (Fig. 3(b)). These values indicate that a
reasonable clustering structure has been found for both projects.
(a) Within-groups sum of squares
(b) Average Silhouette statistic
Figure 2: Analysis of k-means clustering in The Milky Way Project. Within-
groups sum of squares and average Silhouette statistic as the number of groups
(k) is varied.
(a) Within-groups sum of squares
(b) Average Silhouette statistic
Figure 3: Analysis of k-means clustering in the Galaxy Zoo project. Within-
groups sum of squares and average Silhouette statistic as the number of groups
(k) is varied.
### 4.3 Profiles
In order to understand the different groups uncovered by the clustering
algorithm, we analyse: (i) the centroids that represent the groups; (ii) the
correlation between each pair of volunteer engagement metrics for each group;
and (iii) how the groups differ in terms of the number of volunteers and
aggregate contribution. In this analysis, we assigned labels to the groups
in order to put into perspective their main engagement characteristics. Thus,
the groups represent different engagement profiles labelled as follows:
hardworking engagement; spasmodic engagement; persistent engagement; lasting
engagement; and moderate engagement. The general characteristics of these
profiles are shown in Figure 4, Table 2 and Table 3.
Figure 4 shows the centroids that represent each profile and how they differ
in terms of engagement metrics. In each image, the horizontal axis stands for
the engagement profiles, each bar representing one engagement metric, and the
vertical axis indicates how the profiles score in the particular engagement
metrics. Table 2, in turn, shows how the profiles differ in terms of
correlation between their engagement metrics. Finally, Table 3 shows how the
profiles differ in terms of the number of volunteers and how their aggregate
contributions differ in terms of total working time devoted to the project. In
the following paragraphs, we elaborate on these results by analysing each
engagement profile in turn.
(a) The Milky Way Project
(b) Galaxy Zoo
Figure 4: Score of each engagement profile in each engagement metric.
Engagement profiles are represented by the centroids of groups of volunteers
identified by the k-means algorithm in (a) The Milky Way Project and (b)
Galaxy Zoo project.
Table 2: Spearman $\rho$ correlation between each pair of
engagement metrics of volunteers within each engagement profile
The Milky Way Project
---
Pair | Hardworking | Spasmodic | Persistent | Lasting | Moderate
$N=1,535$ | $N=1,060$ | $N=817$ | $N=844$ | $N=1,837$
$\rho(a,r)$ | -0.24* | -0.38* | -0.14* | -0.26* | -0.74*
$\rho(a,v)$ | -0.99* | -0.22* | 0.06 | 0.39* | -0.13*
$\rho(a,d)$ | -0.07* | -0.05 | 0.43* | 0.37* | 0.14*
$\rho(r,v)$ | 0.24* | 0.59* | -0.13* | -0.04 | 0.44*
$\rho(r,d)$ | 0.14* | 0.23* | -0.09* | 0.02 | 0.01
$\rho(v,d)$ | 0.07* | 0.29* | 0.19* | 0.31* | 0.21*
Galaxy Zoo
Pair | Hardworking | Spasmodic | Persistent | Lasting | Moderate
$N=4,572$ | $N=3,611$ | $N=3,783$ | $N=4,250$ | $N=7,331$
$\rho(a,r)$ | -0.30* | -0.45* | 0.15* | -0.23* | -0.76*
$\rho(a,v)$ | -0.99* | -0.31* | -0.26 | 0.27* | -0.12*
$\rho(a,d)$ | -0.10* | 0.03 | 0.33* | 0.30* | 0.19*
$\rho(r,v)$ | 0.30* | 0.66* | -0.12* | 0.00 | 0.43*
$\rho(r,d)$ | 0.07* | 0.17* | 0.08* | 0.02 | -0.05*
$\rho(v,d)$ | 0.10* | 0.26* | -0.01 | 0.16* | 0.16*
* •
Note 1: *Spearman’s $\rho$ coefficient of correlation is significant (p-value
$<0.05$).
* •
Note 2: Moderate and strong correlations are highlighted in boldface.
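The per-profile correlations in Table 2 are plain Spearman coefficients and can be computed with `scipy.stats.spearmanr`; the sketch below uses synthetic data mimicking a strong negative $\rho(a,v)$, as observed for the hardworking profile (the data are illustrative, not the study's):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
# Synthetic volunteers of one profile: activity ratio a and a noisy,
# negatively related variation in periodicity v
a = rng.random(500)
v = -a + rng.normal(scale=0.1, size=500)

rho, p = spearmanr(a, v)
print(f"rho(a, v) = {rho:.2f}, p = {p:.1e}")  # strongly negative and significant
```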
Table 3: Importance of the profiles in terms of the number of volunteers and
their devoted time
Profiles | The Milky Way Project | Galaxy Zoo
---|---|---
#Volunteers | Devoted time | #Volunteers | Devoted time
Hardworking | 1,535 (25.19%) | 2,030.26 (13.86%) | 4,572 (19.42%) | 4,857.49 (9.44%)
Spasmodic | 1,060 (17.40%) | 1,912.05 (13.05%) | 3,611 (15.34%) | 6,061.40 (11.78%)
Persistent | 817 (13.41%) | 5,846.58 (39.91%) | 3,783 (16.07%) | 23,757.64 (46.16%)
Lasting | 844 (13.85%) | 2,273.10 (15.52%) | 4,250 (18.05%) | 8,168.95 (15.87%)
Moderate | 1,837 (30.15%) | 2,588.28 (17.67%) | 7,331 (31.13%) | 8,621.64 (16.75%)
sum | 6,093 (100%) | 14,650.27 (100%) | 23,547 (100%) | 51,467.12 (100%)
* •
Note: The highest number of volunteers and the longest devoted time for each
project are highlighted in boldface.
Hardworking engagement. Volunteers who exhibit a hardworking engagement
profile have a larger activity ratio and a shorter relative activity duration
than the other profiles (Fig 4). Such metrics indicate that volunteers in
this profile work hard when they come into the project, but may leave the
project soon. This engagement profile also exhibits low variation in
periodicity. This means that volunteers who exhibit this engagement profile
return to the project to perform more tasks in nearly equal intervals of time,
which makes the time of return of these volunteers fairly predictable. Another
intrinsic feature of this group of volunteers is a very strong negative
correlation between activity ratio and variation in periodicity
($\rho(a,v)=-0.99$, in both projects). This correlation indicates that the
more days the volunteers return to the project to perform tasks, the less
variable are the time intervals between their active days.
Spasmodic engagement. This engagement profile is distinguished by a relatively
high activity ratio and a low relative activity duration (Fig 4). This group of
volunteers exhibits a positive correlation between relative activity duration
and variation in periodicity. This correlation is moderate ($\rho(r,v)=0.59$)
in the Milky Way Project and strong ($\rho(r,v)=0.66$) in the Galaxy Zoo
project (Table 2). These correlations indicate that the longer the period of
time the volunteers remain linked to the project, the more erratic is the
periodicity of their return to the project within this period. All these
characteristics indicate that contributions of volunteers exhibiting this
profile typically take place during a short period of time and with irregular
periodicity within this period.
Persistent engagement. Persistent engagement is characterised by the largest
relative activity duration, the highest variation in periodicity, and a low
activity ratio (Fig 4). Thus, volunteers with a persistent engagement profile
remain linked to the project for a long interval of time but are active only a
few days within this interval. Considering these engagement metrics,
persistent engagement may be seen as the opposite of hardworking engagement.
In both projects, a small percentage of all the volunteers fall in this
engagement profile: $13.41\%$ in The Milky Way Project and $16.07\%$ in the
Galaxy Zoo project. Together, these volunteers account for the largest
percentage of the total working time devoted to each project, $39.91\%$ in The
Milky Way Project and $46.16\%$ in the Galaxy Zoo project (Table 3). It is the
most important profile in terms of devoted working time.
Lasting engagement. This is the engagement profile of volunteers exhibiting
comparatively high relative activity duration and variation in periodicity
(Fig 4). These volunteers show an activity ratio similar to that
exhibited by the volunteers who stay longer in the project (persistent
engagement) but remain in the project during a shorter period of time.
Finally, this is the only engagement profile showing very weak or weak
correlation between all pairs of metrics in both projects (Table 2).
Moderate engagement. As shown in Figure 4, this engagement profile has no
particularly distinguishable engagement metrics. Compared to the other
profiles, moderate volunteers exhibit intermediate values in all engagement
metrics. One important characteristic of moderate engagement is a strong
negative correlation between activity ratio and relative activity duration.
This correlation is $\rho(a,r)=-0.74$ in The Milky Way Project and
$\rho(a,r)=-0.76$ in Galaxy Zoo (Table 2). These values indicate that the
degree of volunteer engagement in this profile falls with increased engagement
duration. Hence, the more days the volunteers return to the project to perform
tasks, the shorter is the total period of time that they remain linked to the
project. This engagement profile is exhibited by most volunteers in both
studied projects: nearly $30\%$ of the volunteers in The Milky Way Project and
$31\%$ in Galaxy Zoo fall into this engagement profile (Table 3).
### 4.4 Discussion
Our results show that volunteers in the studied projects share several
similarities and differences in terms of engagement. The identified profiles
of engagement put into perspective such similarities and differences.
Furthermore, they help us to better understand how the different engagement
patterns result in different levels of aggregated contribution to the
projects. Several practical and research implications can be drawn from this
analysis. We focus on four of them, which are: profile-oriented volunteers’
recruitment, personalised engagement strategies, psychological factors behind
the engagement profiles, and external validity of the results.
Profile-oriented volunteers’ recruitment. It is natural that scientists
running citizen science projects that require human computation want to devote
more effort to recruiting volunteers who exhibit a desired engagement
profile. This is especially important when they want to optimise the
trade-off between the costs of recruiting volunteers and the benefit of having
all tasks of the project performed as soon as possible Ponciano et al.
(2014a). Studies have been devoted to understanding how different disclosure
campaigns (e.g. traditional media and online media Robson et al. (2013))
differ in terms of the type of volunteers they attract. In a similar
direction, it is also important to know how different disclosure campaigns
differ in terms of the engagement profile of the volunteers they attract. For
example, could a disclosure campaign based on sending e-mails to people
interested in the theme of the project (e.g., astronomy, biology) attract more
persistent volunteers than advertising campaigns in traditional media? Another
important aspect that can be taken into account in optimising volunteer
recruitment is human homophily McPherson et al. (2001), which is the principle
that humans tend to be similar to their friends in several aspects. Perhaps,
taking homophily into account, one could motivate volunteers with a desired
engagement profile to recruit volunteers among their relatives, friends, and
colleagues with a similar profile? Hence, new and more effective recruitment
procedures might be brought forth with an increased knowledge on volunteer
engagement profiles.
Personalised engagement strategies. Besides recruiting more suitable
volunteers, it is also important to keep existing volunteers engaged. The
impact of management practices on volunteer engagement is a widely discussed
issue in volunteerism literature Clary et al. (1992); Cravens (2000). Such
practices are implemented by volunteer supervisors in a way that takes into
account the specific behaviour of each volunteer, aiming thereby at enriching
the volunteer experience and satisfying organizational needs. By showing that
volunteers in human computation for citizen science projects behave very
differently from each other, this study encourages the development of a
component to manage the engagement of volunteers in such projects. This
component would incorporate personalised engagement strategies Fischer (2001);
López et al. (2012); Mao et al. (2013) derived from the volunteer engagement
profiles uncovered in the present work. The component could also both monitor
the contribution behaviour of each volunteer and, when necessary,
automatically trigger a suitable engagement strategy. Prospective volunteers
with different behaviour profiles should be approached with different
engagement strategies, which could focus on e.g. encouraging a reduction or an
improvement of their engagement.
Strategies can focus on encouraging a reduction of volunteer engagement when,
for example, some volunteers start to commit too much of their time to the
project, which could have a negative impact on the rest of their
social life, in the worst case leading to a state of burnout González-Romá et
al. (2006); Simpson (2009). Fortunately, this is not the typical situation in
the two projects we have studied; even volunteers with a hardworking
engagement profile devote typically less than $21$ minutes per day to the
project, which is not alarming. It is important that this kind of behaviour can
be monitored, and, if necessary, strategies are put in place to deal with the
potential harm that this can bring to volunteers. When volunteers exhibit a
suitable engagement profile, it is very important to recognize their
contributions in order to keep them engaged Wilson (2000); Rotman et al.
(2012). Strategies can also focus on encouraging the improvement of volunteer
engagement when volunteers exhibit a level of engagement below project
average. This occurred frequently in the projects we have studied. Each
volunteer engagement profile shows a lower level of engagement than the
moderate engagement profile in at least one engagement metric.
There is a large body of work on strategies for encouraging contribution to
online projects. Many of those strategies are discussed by Kraut et al.
(2012). Example of strategies are (i) sending a message to the volunteers
asking them for more contribution; or (ii) providing volunteers online in the
project with specific and highly challenging goals, e.g. executing a number
$n$ of tasks before logoff. One non-trivial question that must be answered
before putting a strategy to work is which engagement metrics one wishes to
improve. Discovering the engagement profiles of the volunteers enables finding
out in which engagement metric each profile falls short, and to decide which
strategy to develop focusing on each volunteer profile. The correlations
between the engagement metrics in each engagement profile tell us how other
engagement metrics are affected when strategies are put into practice to
improve one specific engagement metric. They also allow one to assess, for
example, the additional gains that could be obtained from the multiplicative
effects resulting from relationships between various metrics.
Psychological factors behind the engagement profiles. As we discussed earlier,
some studies have sought to understand the motivation of volunteers to
participate in human computation for citizen science projects Raddick et al.
(2010); Rotman et al. (2012); Jennett et al. (2014). Our results open a new
perspective for such studies. Given that we have shown that volunteers exhibit
different engagement profiles, new studies on the motivation factors can be
conducted considering the engagement peculiarities of each profile. One major
question to be answered in such studies is which motivations may lay behind
each engagement profile. This calls for a more theoretical perspective, for
example: (i) considering self-determination theory Deci and Ryan (2000), are
persistent volunteers more extrinsically motivated than the volunteers who
exhibit other engagement profiles? Or, (ii) considering self-efficacy theory
Bandura (1977), why do hardworking volunteers expend much effort in the short
term but fail to sustain their engagement in the long term? Besides
complementing our understanding of volunteer engagement, such studies may
provide information about volunteer motivation and experience in the projects.
In the profiles’ analysis, we observe an opposition between degree of
engagement and duration of engagement. Such opposition is clear in two main
points: $1)$ very strong negative correlation between activity ratio and
activity duration in the moderate engagement profile; $2)$ the opposition
between the characteristics of hardworking engagement and persistent
engagement. The negative correlation between activity ratio and activity
duration in the moderate engagement profile indicates that participating in
the project with a high frequency rate and remaining a long time in the
project are contradictory characteristics. It can also be observed in the
opposition between hardworking volunteers and persistent volunteers.
Hardworking volunteers show a higher degree of engagement, but with a shorter
duration. Persistent volunteers, on the contrary, show a lower degree of
engagement but during a longer time period. It is important to understand the
factors behind this opposition and to ask if there are situations in which the
volunteers would present both a high degree and a long duration of engagement.
External validity. Here we discuss the generality of our study
considering two main aspects: (i) whether the methodology we have proposed to
measure the engagement of volunteers and identify their engagement profiles
can be applied in other projects; and (ii) whether the results obtained in the
case study with data collected from Galaxy Zoo and The Milky Way Project can
be generalised to other human computation for citizen science projects.
The methodology we have proposed is based on theoretical frameworks that
support the study of voluntarism Clary et al. (1998); Wilson (2000) and human
engagement Bandura (1977); O’Brien and Toms (2008). We draw on such frameworks
to derive metrics for measuring the engagement of volunteers and to uncover
engagement profiles from grouping them. In the case study conducted with data
collected from Galaxy Zoo and The Milky Way Project, this methodology proved
satisfactory in uncovering groups of volunteers that bring to light the
main similarities and differences among them. Thus, studies seeking such
quantitative analysis of the engagement can take advantage of this
methodology.
Regarding the generality of the engagement profiles, there are two aspects
that reinforce the idea that these types of profiles are more generic and thus
can arise also in other types of projects. First, the same set of profiles
has arisen in projects significantly different in terms of the tasks and the
number of volunteers involved. Tasks in Galaxy Zoo are less time consuming
than tasks in The Milky Way Project Ponciano et al. (2014b). Galaxy Zoo has
almost four times more volunteers than The Milky Way Project (Table 1),
considering as volunteers those participants who have been active in at least
two different days. As most of our results and conclusions are equivalent in
both projects, the differences in the design of the tasks and in the number of
volunteers have been shown not to affect the engagement profiles. Second, some
profiles describe behaviours that are common in Web systems. For example, the
observed fact that a small group of volunteers (persistent engagement) are
responsible for the largest amount of contribution to the project has been
shown to be valid also elsewhere Hargittai and Walejko (2008); van Mierlo
(2014).
## 5 Conclusions and Future Work
In this study we answer three research questions: $1)$ how we can measure the
level of engagement of volunteers during their interaction with a citizen
science project that uses human computation; $2)$ which different patterns of
volunteer engagement behaviour can be identified and specified as typical
volunteer profiles; and $3)$ how the identified volunteer engagement profiles
can be exploited for designing strategies for increasing the engagement of
volunteers in a project. We go through existing human engagement studies and,
based on the concepts and theories put forward, we propose quantitative
engagement metrics to measure different aspects of volunteer engagement, and
use data mining algorithms to identify the different volunteer profiles in
terms of the engagement metrics. We use this method to analyse the engagement
of volunteers in two projects: Galaxy Zoo and The Milky Way Project.
Our results show that volunteers in the studied projects share several
similarities and differences in terms of engagement. We identify five distinct
engagement profiles that put into perspective such similarities and
differences. They are labelled as follows: hardworking, spasmodic, persistent,
lasting, and moderate. These profiles differ among themselves according to a
set of metrics that we have defined for measuring the degree and duration of
volunteer engagement. Regarding the distribution of the volunteers along the
profiles, the highest percentage of volunteers falls into the moderate
engagement profile, while only a few volunteers exhibit a persistent
engagement profile. On the other hand, persistent volunteers account for the
highest percentage of the total human effort dedicated to execute all the
tasks in the project. Several discussions are drawn from our analysis, such as
profile-oriented volunteers’ recruitment, personalised engagement strategies,
and psychological factors behind the engagement profiles.
Our analysis of volunteer engagement, based on log data, yielded a powerful
framework for identifying the relevant patterns of volunteer engagement in
human computation for citizen science projects. However, the current framework
still presents some shortcomings that will be addressed in future work. We
have focused on cognitive engagement of volunteers executing human computation
tasks, but it is known that volunteers also contribute by creating additional
content such as posts in project forums, which can be regarded as a form of
social engagement. Assessing the behaviour of volunteers with regard to this
type of engagement is also important. Finally, future work may be dedicated to
analysing volunteer engagement in the context of other citizen science
projects that use human computation. This analysis may give an answer to the
question whether the set of engagement profiles we have identified on the
basis of the two described projects is generic enough to be applied to the use
of human computation for citizen science projects in general. Thus, we hope
this study motivates further research on volunteer engagement in this type of
projects.
## 6 Acknowledgements
We are indebted to Arfon Smith and Robert Simpson for providing us the dataset
used in this study. We are also grateful to Herman Martins, Jussara Almeida,
Nazareno Andrade, Jose Luis Vivas Frontana and the anonymous reviewers for
their suggestions to improve several aspects of the manuscript. The authors
would like to acknowledge the financial support received from CNPq/Brazil,
CAPES/Brazil, and the European Union Seventh Framework Programme through the
SOCIENTIZE project (contract RI-312902).
## References
* Anderberg (1973) Anderberg, M. (1973). Cluster analysis for applications. Academic Press, Waltham, Massachusetts,United States.
* Bakker and Demerouti (2008) Bakker, A. B and Demerouti, E. (2008). Towards a model of work engagement. Career development international 13, 3 (2008), 209–223. DOI:http://dx.doi.org/10.1108/13620430810870476
* Bandura (1977) Bandura, A. (1977). Self-efficacy: toward a unifying theory of behavioral change. Psychological review 84, 2 (1977), 191. DOI:http://dx.doi.org/10.1037/0033-295X.84.2.191
* Bryant et al. (2005) Bryant, S. L, Forte, A, and Bruckman, A. (2005). Becoming Wikipedian: Transformation of Participation in a Collaborative Online Encyclopedia. In Proceedings of the 2005 International ACM SIGGROUP Conference on Supporting Group Work. ACM, New York, NY, USA, 1–10. DOI:http://dx.doi.org/10.1145/1099203.1099205
* Burke and Kraut (2008) Burke, M and Kraut, R. (2008). Mopping Up: Modeling Wikipedia Promotion Decisions. In Proceedings of the 2008 ACM Conference on Computer Supported Cooperative Work. ACM, New York, NY, USA, 27–36. DOI:http://dx.doi.org/10.1145/1460563.1460571
  * Butler et al. (2008) Butler, B, Joyce, E, and Pike, J. (2008). Don’t Look Now, but We’ve Created a Bureaucracy: The Nature and Roles of Policies and Rules in Wikipedia. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 1101–1110. DOI:http://dx.doi.org/10.1145/1357054.1357227
* Butler et al. (2002) Butler, B, Sproull, L, Kiesler, S, and Kraut, R. (2002). Community effort in online groups: Who does the work and why? In Leadership at a distance: Research in technologically supported work. Taylor & Francis Group, UK, 171–194.
* Clary et al. (1992) Clary, E. G, Snyder, M, and Ridge, R. (1992). Volunteers’ motivations: A functional strategy for the recruitment, placement, and retention of volunteers. Nonprofit Management and Leadership 2, 4 (1992), 333–350. DOI:http://dx.doi.org/10.1002/nml.4130020403
* Clary et al. (1998) Clary, E. G, Snyder, M, Ridge, R. D, Copeland, J, Stukas, A. A, Haugen, J, and Miene, P. (1998). Understanding and assessing the motivations of volunteers: a functional approach. Journal of personality and social psychology 74, 6 (1998), 1516\. DOI:http://dx.doi.org/10.1037/0022-3514.74.6.1516
* Cohn (2008) Cohn, J. P. (2008). Citizen Science: Can Volunteers Do Real Research? BioScience 58, 3 (2008), 192–197. DOI:http://dx.doi.org/10.1641/B580303
* Cooper et al. (2010) Cooper, S, Khatib, F, Treuille, A, Barbero, J, Lee, J, Beenen, M, Leaver-Fay, A, Baker, D, Popović, Z, and others, . (2010). Predicting protein structures with a multiplayer online game. Nature 466, 7307 (2010), 756–760. DOI:http://dx.doi.org/10.1038/nature09304
* Corno and Mandinach (1983) Corno, L and Mandinach, E. B. (1983). The role of cognitive engagement in classroom learning and motivation. Educational psychologist 18, 2 (1983), 88–108. DOI:http://dx.doi.org/10.1080/00461528309529266
* Cravens (2000) Cravens, J. (2000). Virtual volunteering: Online volunteers providing assistance to human service agencies. Journal of Technology in Human Services 17, 2-3 (2000), 119–136. DOI:http://dx.doi.org/10.1300/J017v17n02_02
* Deci and Ryan (2000) Deci, E. L and Ryan, R. M. (2000). The "What" and "Why" of Goal Pursuits: Human Needs and the Self-Determination of Behavior. Psychological Inquiry 11, 4 (2000), 227–268. DOI:http://dx.doi.org/10.1207/S15327965PLI1104_01
* Dickinson et al. (2012) Dickinson, J. L, Shirk, J, Bonter, D, Bonney, R, Crain, R. L, Martin, J, Phillips, T, and Purcell, K. (2012). The current state of citizen science as a tool for ecological research and public engagement. Frontiers in Ecology and the Environment 10, 6 (2012), 291–297. DOI:http://dx.doi.org/10.1890/110236
* Fischer (2001) Fischer, G. (2001). User Modeling in Human-Computer Interaction. User Modeling and User-Adapted Interaction 11, 1-2 (2001), 65–86. DOI:http://dx.doi.org/10.1023/A:1011145532042
* Fortson et al. (2012) Fortson, L, Masters, K, Nichol, R, Borne, K, Edmondson, E, Lintott, C, Raddick, J, Schawinski, K, and Wallin, J. (2012). Galaxy Zoo: Morphological Classification and Citizen Science. In Advances in Machine Learning and Data Mining for Astronomy. CRC Press, Boca Raton, Florida, United States, 213–236.
* Geiger and Halfaker (2013) Geiger, R. S and Halfaker, A. (2013). Using edit sessions to measure participation in Wikipedia. In Proceedings of the 2013 conference on Computer supported cooperative work. ACM, New York, NY, USA, 861–870. DOI:http://dx.doi.org/10.1145/2441776.2441873
* González-Romá et al. (2006) González-Romá, V, Schaufeli, W. B, Bakker, A. B, and Lloret, S. (2006). Burnout and work engagement: Independent factors or opposite poles? Journal of Vocational Behavior 68, 1 (2006), 165–174. DOI:http://dx.doi.org/10.1016/j.jvb.2005.01.003
* Goodchild (2007) Goodchild, M. F. (2007). Citizens as sensors: the world of volunteered geography. GeoJournal 69, 4 (2007), 211–221. DOI:http://dx.doi.org/10.1007/s10708-007-9111-y
* Hargittai and Walejko (2008) Hargittai, E and Walejko, G. (2008). The participation divide: Content creation and sharing in the digital age. Information, Communication & Society 11, 2 (2008), 239–256. DOI:http://dx.doi.org/10.1080/13691180801946150
* Hertel et al. (2003) Hertel, G, Niedner, S, and Herrmann, S. (2003). Motivation of software developers in Open Source projects: an Internet-based survey of contributors to the Linux kernel. Research Policy 32, 7 (2003), 1159 – 1177. DOI:http://dx.doi.org/10.1016/S0048-7333(03)00047-7
* Jain (2008) Jain, R. (2008). The art of computer systems performance analysis. John Wiley & Sons, Hoboken, New Jersey, US.
* Jennett et al. (2014) Jennett, C, Blandford, A, Brohan, P, and Cox, A. (2014). Designing for Dabblers and Deterring Drop-Outs in Citizen Science. In Proceedings of the ACM 2014 Conference on Human Factors in Computing System. ACM, New York, NY, USA, 2985–2994. DOI:http://dx.doi.org/10.1145/2556288.2557262
* Kraut et al. (2010) Kraut, R, Maher, M, Olson, J, Malone, T, Pirolli, P, and Thomas, J. (2010). Scientific Foundations: A Case for Technology- Mediated Social- Participation Theory. Computer 43, 11 (Nov 2010), 22–28. DOI:http://dx.doi.org/10.1109/MC.2010.324
* Kraut et al. (2012) Kraut, R. E, Resnick, P, Kiesler, S, Burke, M, Chen, Y, Kittur, N, Konstan, J, Ren, Y, and Riedl, J. (2012). Building successful online communities: Evidence-based social design. Mit Press, Cambridge, Massachusetts, US.
* Lehmann et al. (2012) Lehmann, J, Lalmas, M, Yom-Tov, E, and Dupret, G. (2012). Models of User Engagement. In Proceedings of the 20th International Conference on User Modeling, Adaptation, and Personalization. Springer-Verlag, Berlin, Heidelberg, 164–175. DOI:http://dx.doi.org/10.1007/978-3-642-31454-4_14
* Lintott and Reed (2013) Lintott, C and Reed, J. (2013). Human Computation in Citizen Science. In Handbook of Human Computation. Springer, New York, United States, 153–162. DOI:http://dx.doi.org/10.1007/978-1-4614-8806-4_14
* Lintott et al. (2008) Lintott, C. J, Schawinski, K, Slosar, A, Land, K, Bamford, S, Thomas, D, Raddick, M. J, Nichol, R. C, Szalay, A, Andreescu, D, Murray, P, and Vandenberg, J. (2008). Galaxy Zoo: morphologies derived from visual inspection of galaxies from the Sloan Digital Sky Survey. Monthly Notices of the Royal Astronomical Society 389, 3 (2008), 1179–1189. DOI:http://dx.doi.org/10.1111/j.1365-2966.2008.13689.x
* Liu and Ram (2011) Liu, J and Ram, S. (2011). Who Does What: Collaboration Patterns in the Wikipedia and Their Impact on Article Quality. ACM Trans. Manage. Inf. Syst. 2, 2 (2011), 11:1–11:23. DOI:http://dx.doi.org/10.1145/1985347.1985352
* López et al. (2012) López, C, Farzan, R, and Brusilovsky, P. (2012). Personalized incremental users’ engagement: driving contributions one step forward. In Proceedings of the 17th ACM international conference on Supporting group work. ACM, New York, NY, USA, 189–198. DOI:http://dx.doi.org/10.1145/2389176.2389206
* Lu et al. (2008) Lu, J, Tang, J, Tang, Z, and Yang, J. (2008). Hierarchical initialization approach for K-Means clustering. Pattern Recognition Letters 29, 6 (2008), 787 – 795. DOI:http://dx.doi.org/10.1016/j.patrec.2007.12.009
* Luczak-Roesch et al. (2014) Luczak-Roesch, M, Tinati, R, Simperl, E, Van Kleek, M, Shadbolt, N, and Simpson, R. (2014). Why Won’t Aliens Talk to Us? Content and Community Dynamics in Online Citizen Science. In Eighth International AAAI Conference on Weblogs and Social Media. AAAI, Palo Alto, CA, US, 315–324. http://www.aaai.org/ocs/index.php/ICWSM/ICWSM14/paper/view/8092
* Mao et al. (2013) Mao, A, Kamar, E, and Horvitz, E. (2013). Why Stop Now? Predicting Worker Engagement in Online Crowdsourcing. In Proceedings of the First AAAI Conference on Human Computation and Crowdsourcing. AAAI, Palo Alto, CA, USA, 103–111.
* McCay-Peet et al. (2012) McCay-Peet, L, Lalmas, M, and Navalpakkam, V. (2012). On Saliency, Affect and Focused Attention. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 541–550. DOI:http://dx.doi.org/10.1145/2207676.2207751
* McPherson et al. (2001) McPherson, M, Smith-Lovin, L, and Cook, J. M. (2001). Birds of a Feather: Homophily in Social Networks. Annual Review of Sociology 27, 1 (2001), 415–444. DOI:http://dx.doi.org/10.1146/annurev.soc.27.1.415
* Meece et al. (1988) Meece, J. L, Blumenfeld, P. C, and Hoyle, R. H. (1988). Students’ goal orientations and cognitive engagement in classroom activities. Journal of educational psychology 80, 4 (1988), 514. DOI:http://dx.doi.org/10.1037/0022-0663.80.4.514
* Mehrzadi and Feitelson (2012) Mehrzadi, D and Feitelson, D. G. (2012). On extracting session data from activity logs. In Proceedings of the 5th Annual International Systems and Storage Conference. ACM, New York, NY, USA, 3:1–3:7. DOI:http://dx.doi.org/10.1145/2367589.2367592
* Millen and Patterson (2002) Millen, D. R and Patterson, J. F. (2002). Stimulating social engagement in a community network. In Proceedings of the 2002 ACM conference on Computer supported cooperative work. ACM, New York, NY, USA, 306–313. DOI:http://dx.doi.org/10.1145/587078.587121
* Niederer and Van Dijck (2010) Niederer, S and Van Dijck, J. (2010). Wisdom of the crowd or technicity of content? Wikipedia as a sociotechnical system. New Media & Society 12, 8 (2010), 1368–1387. DOI:http://dx.doi.org/10.1177/1461444810365297
* Nov et al. (2014) Nov, O, Arazy, O, and Anderson, D. (2014). Scientists@ Home: what drives the quantity and quality of online citizen science participation? PloS one 9, 4 (2014), e90375. DOI:http://dx.doi.org/10.1371/journal.pone.0090375
* O’Brien and Toms (2008) O’Brien, H. L and Toms, E. G. (2008). What is user engagement? A conceptual framework for defining user engagement with technology. Journal of the American Society for Information Science and Technology 59, 6 (2008), 938–955. DOI:http://dx.doi.org/10.1002/asi.20801
* O’Brien and Toms (2010) O’Brien, H. L and Toms, E. G. (2010). The development and evaluation of a survey to measure user engagement. Journal of the American Society for Information Science and Technology 61, 1 (2010), 50–69. DOI:http://dx.doi.org/10.1002/asi.21229
* Ponciano et al. (2014b) Ponciano, L, Brasileiro, F, Simpson, R, and Smith, A. (2014)b. Volunteers’ Engagement in Human Computation for Astronomy Projects. Computing in Science and Engineering 1, 1 (2014). DOI:http://dx.doi.org/10.1109/MCSE.2014.4
* Ponciano et al. (2014a) Ponciano, L, Brasileiro, F. V, Andrade, N, and Sampaio, L. M. R. (2014)a. Considering human aspects on strategies for designing and managing distributed human computation. J. Internet Services and Applications 5, 1 (2014). DOI:http://dx.doi.org/10.1186/s13174-014-0010-4
* Porges (2003) Porges, S. W. (2003). Social Engagement and Attachment. Annals of the New York Academy of Sciences 1008, 1 (2003), 31–47. DOI:http://dx.doi.org/10.1196/annals.1301.004
* Preece (2000) Preece, J. (2000). Online Communities: Designing Usability and Supporting Socialbilty (1st ed.). John Wiley & Sons, Inc., New York, NY, USA.
* Preece and Shneiderman (2009) Preece, J and Shneiderman, B. (2009). The reader-to-leader framework: Motivating technology-mediated social participation. Transactions on Human-Computer Interaction 1, 1 (2009), 13–32. http://aisel.aisnet.org/thci/vol1/iss1/5
* Quinn and Bederson (2011) Quinn, A. J and Bederson, B. B. (2011). Human computation: a survey and taxonomy of a growing field. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, New York, NY, USA, 1403 – 1412. DOI:http://dx.doi.org/10.1145/1978942.1979148
* Raddick et al. (2010) Raddick, J, Bracey, G, Gay, P. L, Lintott, C. J, Murray, P, Schawinski, K, Szalay, A. S, and Vandenberg, J. (2010). Galaxy zoo: Exploring the motivations of citizen science volunteers. Astronomy Education Review 9, 1 (2010), 010103. DOI:http://dx.doi.org/10.3847/AER2009036
* Roberts et al. (2006) Roberts, J. A, Hann, I.-H, and Slaughter, S. A. (2006). Understanding the Motivations, Participation, and Performance of Open Source Software Developers: A Longitudinal Study of the Apache Projects. Manage. Sci. 52, 7 (July 2006), 984–999. DOI:http://dx.doi.org/10.1287/mnsc.1060.0554
* Robson et al. (2013) Robson, C, Hearst, M, Kau, C, and Pierce, J. (2013). Comparing the Use of Social Networking and Traditional Media Channels for Promoting Citizen Science. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work. ACM, New York, NY, USA, 1463–1468. DOI:http://dx.doi.org/10.1145/2441776.2441941
* Rotman et al. (2014) Rotman, D, Hammock, J, Preece, J, Hansen, D, Boston, C, Bowser, A, and He, Y. (2014). Motivations Affecting Initial and Long-Term Participation in Citizen Science Projects in Three Countries. In iConference. iSchools, Illinois, US, 110–124. DOI:http://dx.doi.org/10.9776/14054
* Rotman et al. (2012) Rotman, D, Preece, J, Hammock, J, Procita, K, Hansen, D, Parr, C, Lewis, D, and Jacobs, D. (2012). Dynamic changes in motivation in collaborative citizen-science projects. In Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work. ACM, New York, NY, USA, 217–226. DOI:http://dx.doi.org/10.1145/2145204.2145238
* Rousseeuw (1987) Rousseeuw, P. J. (1987). Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Journal of computational and applied mathematics 20 (1987), 53–65. DOI:http://dx.doi.org/10.1016/0377-0427(87)90125-7
* Schroer and Hertel (2009) Schroer, J and Hertel, G. (2009). Voluntary Engagement in an Open Web-Based Encyclopedia: Wikipedians and Why They Do It. Media Psychology 12, 1 (2009), 96–120. DOI:http://dx.doi.org/10.1080/15213260802669466
* Simpson (2009) Simpson, M. R. (2009). Engagement at work: A review of the literature. International Journal of Nursing Studies 46, 7 (2009), 1012–1024. DOI:http://dx.doi.org/10.1016/j.ijnurstu.2008.05.003
* Simpson et al. (2012) Simpson, R, Povich, M, Kendrew, S, Lintott, C, Bressert, E, Arvidsson, K, Cyganowski, C, Maddison, S, Schawinski, K, Sherman, R, and others, . (2012). The milky way project first data release: a bubblier galactic disc. Monthly Notices of the Royal Astronomical Society 424, 4 (2012), 2442–2460. DOI:http://dx.doi.org/10.1111/j.1365-2966.2012.20770.x
* Struyf et al. (1997) Struyf, A, Hubert, M, and Rousseeuw, P. (1997). Clustering in an Object-Oriented Environment. Journal of Statistical Software 1, 4 (10 2 1997), 1–30. http://www.jstatsoft.org/v01/i04
* van Mierlo (2014) van Mierlo, T. (2014). The 1% Rule in Four Digital Health Social Networks: An Observational Study. J Med Internet Res 16, 2 (04 Feb 2014), e33. DOI:http://dx.doi.org/10.2196/jmir.2966
* Welser et al. (2011) Welser, H. T, Cosley, D, Kossinets, G, Lin, A, Dokshin, F, Gay, G, and Smith, M. (2011). Finding Social Roles in Wikipedia. In Proceedings of the 2011 iConference. ACM, New York, NY, USA, 122–129. DOI:http://dx.doi.org/10.1145/1940761.1940778
* Wiggins and Crowston (2012) Wiggins, A and Crowston, K. (2012). Goals and Tasks: Two Typologies of Citizen Science Projects. In Proceedings of the 45th Hawaii International Conference on System Sciences. IEEE Computer Society, Los Alamitos, CA, USA, 3426–3435. DOI:http://dx.doi.org/10.1109/HICSS.2012.295
* Wilson (2000) Wilson, J. (2000). Volunteering. Annual review of sociology 26, 1 (2000), 215–240. DOI:http://dx.doi.org/10.1146/annurev.soc.26.1.215
* Zhu et al. (2012) Zhu, H, Kraut, R, and Kittur, A. (2012). Effectiveness of Shared Leadership in Online Communities. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work. ACM, New York, NY, USA, 407–416. DOI:http://dx.doi.org/10.1145/2145204.2145269
|
f\vcentcolon=\mathcal{F}^{-1}[\check{\chi}(2^{-N}\cdot)]\ast f.$
By considering their Fourier transforms, one observes
$(\Delta G_{N})\ast f=-f+[\Delta(G_{N}-G)]\ast f$ (79)
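A quick way to see (79), assuming $G$ is the Green's function of the Laplacian so that $(\Delta G)\ast f=-f$ (an assumption consistent with how $G$ and $G_{N}$ are used below): writing $G_{N}=G+(G_{N}-G)$ and applying $\Delta$ termwise,
$(\Delta G_{N})\ast f=(\Delta G)\ast f+[\Delta(G_{N}-G)]\ast f=-f+[\Delta(G_{N}-G)]\ast f.$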
We recall operations of kernels on modelled distributions from [34, Section
5].
###### Definition C.7.
Let $\mathscr{Z}=(\Pi,\Gamma)$ be a model realizing $K$ in the sense of [34,
Definition 5.9].
1. (a)
We set
$\mathcal{J}(x)\tau\vcentcolon=\mathcal{J}^{\mathscr{Z}}(x)\tau\vcentcolon=\sum_{\lvert
k\rvert<\lvert\tau\rvert_{+}+2}\frac{X^{k}}{k!}\big{[}D^{k}K\ast\Pi_{x}\tau(x)\big{]},\quad
x\in\mathbb{R}^{d},$
for $\tau\in\mathfrak{B}(\mathscr{T})$ and extend it linearly for
$\tau\in\mathscr{T}$.
2. (b)
Let $\gamma\in(0,\infty)\setminus\mathbb{N}$ and
$f\in\mathcal{D}^{\gamma}(\mathscr{T},\mathscr{Z})$. We set
$\mathcal{N}f(x)\vcentcolon=\mathcal{N}^{\mathscr{Z}}_{\gamma}f(x)\vcentcolon=\sum_{\lvert
k\rvert<\gamma+2}\frac{X^{k}}{k!}D^{k}K\ast(\mathcal{R}^{\mathscr{Z}}f-\Pi_{x}f(x))(x)$
(80)
and
$\mathcal{K}f(x)\vcentcolon=\mathcal{K}^{\mathscr{Z}}_{\gamma}f(x)\vcentcolon=(\mathscr{I}+\mathcal{J}^{\mathscr{Z}}(x))f(x)+\mathcal{N}^{\mathscr{Z}}_{\gamma}f(x).$
(81)
By [34, Theorem 5.12], $\mathcal{K}$ maps
$\mathcal{D}^{\gamma}(\mathscr{T},\mathscr{Z})$ to
$\mathcal{D}^{\gamma+2}(\mathscr{T},\mathscr{Z})$ and one has
$\mathcal{R}\mathcal{K}f=K\ast\mathcal{R}f$. More precisely, one has
$\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathcal{K}f\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma+2;\mathfrak{K}}\lesssim_{\mathscr{T},\gamma}(1+\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma+2;B(\mathfrak{K},1)})^{2}\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert
f\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(\mathfrak{K},1)}$
(82)
uniformly over $\mathscr{Z}\in\mathscr{M}(\mathscr{T},K)$,
$f\in\mathcal{D}^{\gamma}(\mathscr{T},\mathscr{Z})$ and compact sets
$\mathfrak{K}\subseteq\mathbb{R}^{d}$. See [36, Theorem 5.1].
3. (c)
For a smooth function $F$ on $\mathbb{R}^{d}$ and $\beta\in(0,\infty)$, we set
$R_{\beta}F(x)\vcentcolon=\sum_{\lvert
k\rvert<\beta}\frac{X^{k}}{k!}D^{k}F(x),\quad x\in\mathbb{R}^{d}.$
Then [34, Lemma 2.12] implies
$R_{\beta}F\in\mathcal{D}^{\beta}(\mathscr{T},\mathscr{Z})$.
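As a minimal illustration: in $d=1$ with $\beta\in(1,2)$, the sum runs over $k\in\{0,1\}$, so
$R_{\beta}F(x)=F(x)\,\bm{1}+F'(x)\,X,$
the first-order Taylor lift of $F$ into the polynomial sector.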
###### Definition C.8.
Suppose that the model $\mathscr{Z}$ realizes $K$. For
$\gamma\in(0,\infty)\setminus\mathbb{N}$,
$f\in\mathcal{D}^{\gamma}(\mathscr{T},\mathscr{Z})$ and $N\in\mathbb{N}_{0}$,
we set
$\mathcal{G}_{N}f(x)\vcentcolon=\mathcal{G}_{N,\gamma}^{\mathscr{Z}}f(x)\vcentcolon=\mathcal{K}^{\mathscr{Z}}_{\gamma}f(x)+R_{\gamma+2}[H_{N}\ast\mathcal{R}f](x).$
Note that one has
$\mathcal{R}^{\mathscr{Z}}\mathcal{G}^{\mathscr{Z}}_{N}f=G_{N}\ast\mathcal{R}^{\mathscr{Z}}f$.
For the meaning of the parameter $N$, see Remark C.16 below.
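The reconstruction identity can be seen as follows, assuming the decomposition $G_{N}=K+H_{N}$ (consistent with $W_{N}^{\mathrm{can}}=K\ast X^{\mathrm{can}}+H_{N}\ast X^{\mathrm{can}}$ in the proof of Lemma C.17): since $\mathcal{R}\mathcal{K}f=K\ast\mathcal{R}f$ and $\mathcal{R}R_{\gamma+2}[g]=g$ for smooth $g$,
$\mathcal{R}^{\mathscr{Z}}\mathcal{G}^{\mathscr{Z}}_{N}f=K\ast\mathcal{R}^{\mathscr{Z}}f+H_{N}\ast\mathcal{R}^{\mathscr{Z}}f=G_{N}\ast\mathcal{R}^{\mathscr{Z}}f.$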
### C.3 Definition of modelled distributions
###### Definition C.9.
We define $\mathcal{T},\mathcal{T}_{-}\subseteq\mathscr{T}$ and
$\mathfrak{B}(\mathcal{T}),\mathfrak{B}(\mathcal{T}_{-})\subseteq\mathfrak{B}(\mathscr{T})$
as follows.
1. (a)
For $\tau_{1},\tau_{2}\in\mathscr{T}$ we write
“$\nabla\mathscr{I}(\tau_{1})\cdot\nabla\mathscr{I}(\tau_{2})$” instead of
“$\sum_{j=1}^{d}\mathscr{I}_{j}(\tau_{1})\mathscr{I}_{j}(\tau_{2})$”.
2. (b)
We denote by $\mathcal{T}$ the smallest subset of $\mathscr{T}$ with the
following properties:
* •
$\Xi\in\mathcal{T}$ and
* •
if $\tau_{1},\tau_{2}\in\mathcal{T}$, then
$\nabla\mathscr{I}(\tau_{1})\cdot\nabla\mathscr{I}(\tau_{2})\in\mathcal{T}$.
Furthermore, we associate $c(\tau)\in\mathbb{N}$ to each $\tau\in\mathcal{T}$
by setting $c(\Xi)\vcentcolon=1$ and by inductively setting for
$\tau_{1},\tau_{2}\in\mathcal{T}$
$c(\nabla\mathscr{I}(\tau_{1})\cdot\nabla\mathscr{I}(\tau_{2}))\vcentcolon=\begin{cases}2c(\tau_{1})c(\tau_{2})&\text{if
}\tau_{1}\neq\tau_{2},\\\ c(\tau_{1})c(\tau_{2})&\text{if
}\tau_{1}=\tau_{2}.\end{cases}$
3. (c)
One defines $\mathfrak{B}(\mathcal{T})\subseteq\mathfrak{B}(\mathscr{T})$ as
the minimal subset with the following properties:
* •
$\Xi\in\mathfrak{B}(\mathcal{T})$ and
* •
if $\tau_{1},\tau_{2}\in\mathfrak{B}(\mathcal{T})$ and $i\in\\{1,\ldots,d\\}$,
then
$\mathscr{I}_{i}(\tau_{1})\mathscr{I}_{i}(\tau_{2})\in\mathfrak{B}(\mathcal{T})$.
4. (d)
We set
$\mathcal{T}_{-}\vcentcolon=\\{\tau\in\mathcal{T}\nonscript\>|\nonscript\>\mathopen{}\lvert\tau\rvert_{+}<0\\}$
and
$\mathfrak{B}(\mathcal{T}_{-})\vcentcolon=\\{\tau\in\mathfrak{B}(\mathcal{T})\nonscript\>|\nonscript\>\mathopen{}\lvert\tau\rvert_{+}<0\\}$.
###### Remark C.10.
After Lemma 1.12, we have discussed the algorithm to obtain the tree expansion
($\tau_{1},\tau_{2},\tau_{3},\ldots$) for $W$. The constant $c(\tau)$
represents the coefficient of the tree $\tau$ in that expansion.
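The recursion defining $c(\tau)$ is easy to mechanize. A minimal sketch (the tuple encoding of trees is an illustrative assumption, not notation from the text); the factor of $2$ accounts for the two orderings of an unordered pair of distinct subtrees:

```python
XI = "Xi"  # the noise symbol; ("pair", t1, t2) encodes grad I(t1) . grad I(t2)

def c(tau):
    # Definition C.9(b): c(Xi) = 1, and for tau = pair(t1, t2),
    # c(tau) = 2 c(t1) c(t2) if t1 != t2, else c(t1) c(t2).
    if tau == XI:
        return 1
    _, t1, t2 = tau
    return (1 if t1 == t2 else 2) * c(t1) * c(t2)

# First trees of the expansion: Xi, pair(Xi, Xi), pair(pair(Xi, Xi), Xi), ...
tau2 = ("pair", XI, XI)
tau3 = ("pair", tau2, XI)
print(c(XI), c(tau2), c(tau3))  # 1 1 2
```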
###### Definition C.11.
Given a model $\mathscr{Z}$ realizing $K$, we associate
$\tau^{\mathcal{K}}=\tau^{\mathcal{K},\mathscr{Z}}\in\mathcal{D}^{\gamma_{\tau}}(\mathscr{T},\mathscr{Z})$
to each $\tau\in\mathcal{T}_{-}$ by setting $\Xi^{\mathcal{K}}\vcentcolon=\Xi$
and by inductively setting
$\gamma_{\tau}\vcentcolon=\min\\{\gamma_{\tau_{1}}+1+\lvert\tau_{2}\rvert_{+},\gamma_{\tau_{2}}+1+\lvert\tau_{1}\rvert_{+}\\},\quad\tau^{\mathcal{K}}\vcentcolon=\sum_{i=1}^{d}\mathscr{D}_{i}[\mathcal{K}\tau_{1}^{\mathcal{K}}]\star\mathscr{D}_{i}[\mathcal{K}\tau_{2}^{\mathcal{K}}]$
for $\tau=\nabla\mathscr{I}(\tau_{1})\cdot\nabla\mathscr{I}(\tau_{2})$. The
exponent $\gamma_{\Xi}$ is chosen so that $\gamma_{\tau}>2$ for every
$\tau\in\mathcal{T}_{-}$.
###### Remark C.12.
Thanks to Proposition C.2 and [34, Theorem 5.12], indeed one has
$\tau^{\mathcal{K},\mathscr{Z}}\in\mathcal{D}^{\gamma_{\tau}}$. Furthermore,
for $\tau=\nabla\mathscr{I}(\tau_{1})\cdot\nabla\mathscr{I}(\tau_{2})$ and a
compact set $\mathfrak{K}$, one has
$\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\tau^{\mathcal{K},\mathscr{Z}}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma_{\tau};\mathfrak{K}}\lesssim_{\mathscr{T}}(1+\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma_{\tau_{1}}+\gamma_{\tau_{2}}+2;B(\mathfrak{K},1)})^{6}\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\tau_{1}^{\mathcal{K},\mathscr{Z}}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma_{\tau_{1}};B(\mathfrak{K},1)}\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\tau_{2}^{\mathcal{K},\mathscr{Z}}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma_{\tau_{2}};B(\mathfrak{K},1)}.$
Therefore, there exist constants $\gamma,C\in(0,\infty)$ and integers
$k,l\in\mathbb{N}$, which depend only on $\mathscr{T}$, such that
$\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\tau^{\mathcal{K},\mathscr{Z}}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;\mathfrak{K}}\leq
C(1+\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(\mathfrak{K},l)})^{k}$
(83)
uniformly over $\tau\in\mathcal{T}_{-}$,
$\mathscr{Z}\in\mathscr{M}(\mathscr{T},K)$ and compact sets
$\mathfrak{K}\subseteq\mathbb{R}^{d}$.
###### Definition C.13.
Let $F\in C_{c}^{\infty}(\mathbb{R})$ be such that $F(x)=-e^{2x}$ if $\lvert
x\rvert\leq 2$. Given $N\in\mathbb{N}$ and a model $\mathscr{Z}$ realizing
$K$, we set
$\bm{X}\vcentcolon=\bm{X}^{\mathscr{Z}}\vcentcolon=\sum_{\tau\in\mathcal{T}_{-}}c(\tau)\tau^{\mathcal{K},\mathscr{Z}},$
$\bm{W}_{N}\vcentcolon=\bm{W}^{\mathscr{Z}}_{N}\vcentcolon=\mathfrak{p}_{<2}\mathcal{G}_{N,2}^{\mathscr{Z}}\bm{X}^{\mathscr{Z}}$
and
$\bm{Y}_{N}\vcentcolon=\bm{Y}_{N}^{\mathscr{Z}}\vcentcolon=\mathfrak{p}_{<\delta}\Big{[}F^{\star}(\bm{W}_{N}^{\mathscr{Z}})\star\Big{\\{}\sum_{\begin{subarray}{c}\tau_{1},\tau_{2}\in\mathcal{T}_{-},\\\
\lvert\tau_{1}\rvert_{+}+\lvert\tau_{2}\rvert_{+}>-2\end{subarray}}\sum_{i=1}^{d}c(\tau_{1})c(\tau_{2})\mathscr{D}_{i}[\mathcal{K}^{\mathscr{Z}}\tau^{\mathcal{K},\mathscr{Z}}_{1}]\star\mathscr{D}_{i}[\mathcal{K}^{\mathscr{Z}}\tau^{\mathcal{K},\mathscr{Z}}_{2}]\\\
+2\sum_{i=1}^{d}\mathscr{D}_{i}[\mathcal{K}^{\mathscr{Z}}\bm{X}^{\mathscr{Z}}]\star
R_{2}[\partial_{i}\\{H_{N}\ast(\mathcal{R}^{\mathscr{Z}}\bm{X}^{\mathscr{Z}})\\}]\Big{\\}}\Big{]}.$
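A function $F$ as required indeed exists; a minimal numerical sketch (the particular bump construction and the outer cutoff radius $3$ are illustrative assumptions, not taken from the text):

```python
import numpy as np

def _h(t):
    # Smooth function: exp(-1/t) for t > 0, and 0 for t <= 0.
    t = np.asarray(t, dtype=float)
    safe = np.where(t > 0, t, 1.0)
    return np.where(t > 0, np.exp(-1.0 / safe), 0.0)

def bump(x, a=2.0, b=3.0):
    # Smooth cutoff equal to 1 on [-a, a] and 0 outside (-b, b);
    # the denominator never vanishes since t and 1 - t cannot both be <= 0.
    t = (b - np.abs(x)) / (b - a)
    return _h(t) / (_h(t) + _h(1.0 - t))

def F(x):
    # Compactly supported smooth function with F(x) = -exp(2x) for |x| <= 2.
    return -np.exp(2.0 * x) * bump(x)
```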
###### Proposition C.14.
Suppose that a model $\mathscr{Z}$ realizes $K$. Let $N\in\mathbb{N}$. Then,
one has
$\bm{W}_{N}^{\mathscr{Z}}\in\mathcal{D}_{0}^{2}(\mathscr{T},\mathscr{Z})$ and
$\bm{Y}_{N}^{\mathscr{Z}}\in\mathcal{D}^{\delta}_{-1+\delta}(\mathscr{T},\mathscr{Z})$.
More precisely, there exist constants $\gamma,C\in(0,\infty)$ and integers
$k,l\in\mathbb{N}$ such that the following estimates hold uniformly over
$N\in\mathbb{N}$,
$\mathscr{Z},\overline{\mathscr{Z}}\in\mathscr{M}(\mathscr{T},K)$ and convex
compact sets $\mathfrak{K}\subseteq\mathbb{R}^{d}$:
$\displaystyle\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\bm{X}^{\mathscr{Z}}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{2;\mathfrak{K}}$
$\displaystyle\leq
C(1+\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(\mathfrak{K},l)})^{k},$
$\displaystyle\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\bm{W}^{\mathscr{Z}}_{N}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{2;\mathfrak{K}}$
$\displaystyle\leq
C\\{(1+\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(\mathfrak{K},l)})^{k}+\lVert
H_{N}\ast(\mathcal{R}^{\mathscr{Z}}\bm{X}^{\mathscr{Z}})\rVert_{C^{2}(\mathfrak{K})}\\},
$\displaystyle\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\bm{Y}_{N}^{\mathscr{Z}}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\delta;\mathfrak{K}}$
$\displaystyle\leq
C(1+\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(\mathfrak{K},l)}+\lVert
H_{N}\ast(\mathcal{R}^{\mathscr{Z}}\bm{X}^{\mathscr{Z}})\rVert_{C^{2}(\mathfrak{K})})^{k},
and furthermore
$\displaystyle\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\bm{X}^{\mathscr{Z}};\bm{X}^{\overline{\mathscr{Z}}}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{2;\mathfrak{K}}$
$\displaystyle\leq
C(1+\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(\mathfrak{K},l)}\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\overline{\mathscr{Z}}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(\mathfrak{K},l)})^{k}\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z};\overline{\mathscr{Z}}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(\mathfrak{K},l)},$
$\displaystyle\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\bm{W}^{\mathscr{Z}}_{N};\bm{W}^{\overline{\mathscr{Z}}}_{N}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{2;\mathfrak{K}}$
$\displaystyle\leq
C(1+\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(\mathfrak{K},l)}+\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\overline{\mathscr{Z}}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(\mathfrak{K},l)})^{k}\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z};\overline{\mathscr{Z}}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(\mathfrak{K},l)}$
$\displaystyle\qquad+\lVert
H_{N}\ast(\mathcal{R}^{\mathscr{Z}}\bm{X}^{\mathscr{Z}}-\mathcal{R}^{\overline{\mathscr{Z}}}\bm{X}^{\overline{\mathscr{Z}}})\rVert_{C^{2}(\mathfrak{K})},
$\displaystyle\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\bm{Y}_{N}^{\mathscr{Z}};\bm{Y}_{N}^{\overline{\mathscr{Z}}}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\delta;\mathfrak{K}}$
$\displaystyle\leq
C\Big{(}1+\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(\mathfrak{K},l)}+\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\overline{\mathscr{Z}}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(\mathfrak{K},l)}$
$\displaystyle\qquad+\lVert
H_{N}\ast(\mathcal{R}^{\mathscr{Z}}\bm{X}^{\mathscr{Z}})\rVert_{C^{2}(\mathfrak{K})}+\lVert
H_{N}\ast(\mathcal{R}^{\overline{\mathscr{Z}}}\bm{X}^{\overline{\mathscr{Z}}})\rVert_{C^{2}(\mathfrak{K})}\Big{)}^{k}
$\displaystyle\qquad\qquad\times(\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z};\overline{\mathscr{Z}}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(\mathfrak{K},l)}+\lVert
H_{N}\ast(\mathcal{R}^{\mathscr{Z}}\bm{X}^{\mathscr{Z}}-\mathcal{R}^{\overline{\mathscr{Z}}}\bm{X}^{\overline{\mathscr{Z}}})\rVert_{C^{2}(\mathfrak{K})}).
###### Proof.
The estimate for $\bm{X}^{\mathscr{Z}}$ follows from (83). As for the estimate
of $\bm{W}_{N}^{\mathscr{Z}}$, the Schauder estimate (82) gives the estimate
for $\mathcal{K}\bm{X}^{\mathscr{Z}}$. The estimate for
$R_{2}[H_{N}\ast(\mathcal{R}^{\mathscr{Z}}\bm{X}^{\mathscr{Z}})]$ follows from
the estimate
$\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert
R_{2}[H_{N}\ast(\mathcal{R}^{\mathscr{Z}}\bm{X}^{\mathscr{Z}})]\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{2;\mathfrak{K}}\leq\sum_{m:\lvert
m\rvert\leq
2}\lVert\partial^{m}[H_{N}\ast(\mathcal{R}^{\mathscr{Z}}\bm{X}^{\mathscr{Z}})]\rVert_{L^{\infty}(\mathfrak{K})},$
where the convexity of $\mathfrak{K}$ is used. The estimate for
$\bm{Y}_{N}^{\mathscr{Z}}$ follows from Proposition C.2, the estimate (78) and
the Schauder estimate (82).
For the estimates of the differences, we note that analogues of the estimates in Proposition C.2, the estimate (78) and the Schauder estimate (82) hold for differences as well; see [34, Proposition 4.10], [37, Proposition 3.11] and [34, Theorem 5.12], respectively. Using them, the last three inequalities follow similarly. ∎
###### Definition C.15.
Given a model $\mathscr{Z}$ realizing $K$ and $N\in\mathbb{N}$, we set
$X\vcentcolon=X^{\mathscr{Z}}\vcentcolon=\mathcal{R}^{\mathscr{Z}}\bm{X}^{\mathscr{Z}},\quad
W_{N}\vcentcolon=W^{\mathscr{Z}}_{N}\vcentcolon=\mathcal{R}^{\mathscr{Z}}\bm{W}^{\mathscr{Z}}_{N}$
and
$Y_{N}\vcentcolon=Y^{\mathscr{Z}}_{N}\vcentcolon=\mathcal{R}^{\mathscr{Z}}\bm{Y}^{\mathscr{Z}}_{N}+F(W_{N}^{\mathscr{Z}})\Big{\\{}\lvert\nabla[H_{N}\ast(X^{\mathscr{Z}})]\rvert^{2}+[\Delta(G_{N}-G)]\ast
X^{\mathscr{Z}}\Big{\\}}.$
###### Remark C.16.
The parameter $N$ will be used to ensure $W_{N}$ is bounded on a given bounded
domain. Therefore, $N$ will be random and will depend on the domain. The idea
of introducing such a parameter is also used in [57]. As noted in Definition
C.8, one has $W_{N}=G_{N}\ast X$.
###### Lemma C.17.
Let $\varepsilon\in(0,1)$. To simplify notation, we write
$X^{\mathrm{can}}\vcentcolon=X^{\mathscr{Z}^{\mathrm{can},\varepsilon}}$ here
for instance. Then, one has the following identity:
$\lvert\nabla W_{N}^{\mathrm{can}}\rvert^{2}+\Delta
W_{N}^{\mathrm{can}}=-\xi_{\varepsilon}\\\
+\sum_{\begin{subarray}{c}\tau_{1},\tau_{2}\in\mathcal{T}_{-},\\\
\lvert\tau_{1}\rvert_{+}+\lvert\tau_{2}\rvert_{+}>-2\end{subarray}}c(\tau_{1})c(\tau_{2})\nabla(K\ast\mathcal{R}^{\mathrm{can}}\tau_{1}^{\mathcal{K},\mathrm{can}})\cdot\nabla(K\ast\mathcal{R}^{\mathrm{can}}\tau_{2}^{\mathcal{K},\mathrm{can}})\\\
+2\nabla[K\ast
X^{\mathrm{can}}]\cdot\nabla[H_{N}\ast(X^{\mathrm{can}})]+\lvert\nabla[H_{N}\ast(X^{\mathrm{can}})]\rvert^{2}+[\Delta(G_{N}-G)]\ast
X^{\mathrm{can}}.$
###### Proof.
One has $W_{N}^{\mathrm{can}}=K\ast X^{\mathrm{can}}+H_{N}\ast
X^{\mathrm{can}}$ and
$\lvert\nabla
W_{N}^{\mathrm{can}}\rvert^{2}=\sum_{\tau_{1},\tau_{2}\in\mathcal{T}_{-}}c(\tau_{1})c(\tau_{2})\nabla[K\ast\mathcal{R}^{\mathrm{can}}\tau_{1}^{\mathcal{K},\mathrm{can}}]\cdot\nabla[K\ast\mathcal{R}^{\mathrm{can}}\tau_{2}^{\mathcal{K},\mathrm{can}}]\\\
+2\nabla[K\ast X^{\mathrm{can}}]\cdot\nabla[H_{N}\ast
X^{\mathrm{can}}]+\lvert\nabla[H_{N}\ast X^{\mathrm{can}}]\rvert^{2}.$
Furthermore,
$\Delta
W_{N}^{\mathrm{can}}=-\sum_{\tau\in\mathcal{T}_{-}}c(\tau)\mathcal{R}^{\mathrm{can}}\tau^{\mathcal{K},\mathrm{can}}+[\Delta(G_{N}-G)]\ast
X^{\mathrm{can}}.$
Now it remains to observe
$\sum_{\tau_{1},\tau_{2}\in\mathcal{T}_{-}}c(\tau_{1})c(\tau_{2})\nabla[K\ast\mathcal{R}^{\mathrm{can}}\tau_{1}^{\mathcal{K},\mathrm{can}}]\cdot\nabla[K\ast\mathcal{R}^{\mathrm{can}}\tau_{2}^{\mathcal{K},\mathrm{can}}]-\sum_{\tau\in\mathcal{T}_{-}}c(\tau)\mathcal{R}^{\mathrm{can}}\tau^{\mathcal{K},\mathrm{can}}\\\
=-\xi_{\varepsilon}+\sum_{\begin{subarray}{c}\tau_{1},\tau_{2}\in\mathcal{T}_{-},\\\
\lvert\tau_{1}\rvert_{+}+\lvert\tau_{2}\rvert_{+}>-2\end{subarray}}c(\tau_{1})c(\tau_{2})\nabla(K\ast\mathcal{R}^{\mathrm{can}}\tau_{1}^{\mathcal{K},\mathrm{can}})\cdot\nabla(K\ast\mathcal{R}^{\mathrm{can}}\tau_{2}^{\mathcal{K},\mathrm{can}}).\qed$
### C.4 BPHZ renormalization for $\bm{X}$
The goal of this section is to show
$X^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}=X^{\mathscr{Z}^{\mathrm{can},\varepsilon}}-c_{\varepsilon}$
(Proposition C.25). To this end, our first goal is to obtain the basis
expansion for modelled distributions
$\tau^{\mathcal{K},\mathscr{Z}}\in\mathcal{T}_{-}$, which will be given in
Lemma C.20.
###### Lemma C.18.
For every $\tau_{1},\tau_{2}\in\mathcal{T}_{-}$ with
$\lvert\tau_{1}\rvert_{+},\lvert\tau_{2}\rvert_{+}<-1$ and
$i,j\in\\{1,\ldots,d\\}$, one has
$\Delta^{\circ}_{+}[\mathscr{I}_{i}(\tau_{1})]=\mathscr{I}_{i}(\tau_{1})\otimes\bm{1}_{+},\quad\Delta^{\circ}_{+}[\mathscr{I}_{i}(\tau_{1})\mathscr{I}_{j}(\tau_{2})]=[\mathscr{I}_{i}(\tau_{1})\mathscr{I}_{j}(\tau_{2})]\otimes\bm{1}_{+}.$
In particular, the constant map
$x\mapsto\mathscr{I}_{i}(\tau_{1})\mathscr{I}_{j}(\tau_{2})$ belongs to
$\mathcal{D}^{\infty}_{\lvert\tau_{1}\rvert_{+}+\lvert\tau_{2}\rvert_{+}+2}(\mathscr{T},\mathscr{Z})$
for any model $\mathscr{Z}=(\Pi,\Gamma)$ and
$\mathcal{R}[\mathscr{I}_{i}(\tau_{1})\mathscr{I}_{j}(\tau_{2})]=\Pi_{x}[\mathscr{I}_{i}(\tau_{1})\mathscr{I}_{j}(\tau_{2})],$
where the right-hand side is independent of $x$.
###### Proof.
In view of the recursive formula [13, Proposition 4.17], one can prove the
claim by induction on $\lvert\cdot\rvert_{+}$. Indeed, suppose we wish to
prove $\Delta^{\circ}_{+}\tau=\tau\otimes\bm{1}_{+}$, where
$\tau=\mathscr{I}_{i}(\tau_{1})\mathscr{I}_{j}(\tau_{2})$ and
$\Delta^{\circ}_{+}\tau_{k}=\tau_{k}\otimes\bm{1}_{+}$ for $k=1,2$. By Lemma B.29,
$\Delta^{\circ}_{+}\tau=\Delta^{\circ}_{+}[\mathscr{I}_{i}(\tau_{1})]\Delta^{\circ}_{+}[\mathscr{I}_{j}(\tau_{2})]$.
Therefore, it suffices to show
$\Delta^{\circ}_{+}[\mathscr{I}_{i}(\tau_{1})]=[\mathscr{I}_{i}(\tau_{1})]\otimes\bm{1}_{+}$.
By [13, Proposition 4.17], one has
$\Delta^{\circ}_{+}\mathscr{I}_{i}(\tau_{1})=(\mathscr{I}_{i}\otimes\operatorname{Id})\Delta\tau_{1}+\sum_{k:\lvert\tau_{1}\rvert_{+}+1-\lvert
k\rvert>0}\frac{X^{k}}{k!}\otimes\hat{\mathscr{I}}_{e_{i}+k}(\tau_{1}).$
It remains to observe that
$(\mathscr{I}_{i}\otimes\operatorname{Id})\Delta\tau_{1}=[\mathscr{I}_{i}(\tau_{1})]\otimes\bm{1}_{+}$
by the induction hypothesis, and that the set over which $k$ ranges is
empty because $\lvert\tau_{1}\rvert_{+}<-1$. ∎
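For concreteness, the emptiness of the index set can be checked directly: under the standing assumption $\lvert\tau_{1}\rvert_{+}<-1$, every multi-index $k$ satisfies
$\lvert\tau_{1}\rvert_{+}+1-\lvert k\rvert<-1+1-\lvert k\rvert=-\lvert k\rvert\leq 0,$
so no term contributes to the sum.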
###### Definition C.19.
We use the notation of Section B.1. Let $\tau\in\mathfrak{B}(\mathcal{T})$
and let $e$ be an edge of $\tau$ with $\mathfrak{t}(e)=\mathscr{I}$. By
removing the edge $e$, we obtain a decorated forest with two connected
components. We denote by
$\operatorname{Remove}(\tau;e)$
the component containing the root of $\tau$, with decoration inherited from
$\tau$. For instance,
[The example here consists of tree diagrams, drawn with low-level pgf commands that do not survive in this text-only rendering: $\operatorname{Remove}$ applied to a decorated tree with one highlighted $\mathscr{I}$-edge, yielding the component of the root after that edge and the subtree above it are deleted.]
} }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}}\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}}{{}}{{}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{1,1,1}\pgfsys@color@gray@fill{1}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{1,1,1}\pgfsys@color@gray@fill{1}\pgfsys@invoke{
}{}\pgfsys@moveto{-7.83713pt}{42.67914pt}\pgfsys@curveto{-7.83713pt}{43.85072pt}{-8.78688pt}{44.80048pt}{-9.95847pt}{44.80048pt}\pgfsys@curveto{-11.13005pt}{44.80048pt}{-12.0798pt}{43.85072pt}{-12.0798pt}{42.67914pt}\pgfsys@curveto{-12.0798pt}{41.50755pt}{-11.13005pt}{40.5578pt}{-9.95847pt}{40.5578pt}\pgfsys@curveto{-8.78688pt}{40.5578pt}{-7.83713pt}{41.50755pt}{-7.83713pt}{42.67914pt}\pgfsys@closepath\pgfsys@moveto{-9.95847pt}{42.67914pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{-9.95847pt}{42.67914pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} { {{}}{}{{}}
{{{{{}}{}{}{}{}{{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}\pgfsys@moveto{-13.4444pt}{31.05933pt}\pgfsys@lineto{-10.74045pt}{40.07257pt}\pgfsys@stroke\pgfsys@invoke{
} }\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
}\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}}\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}}{{}}{{}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{0,0,0}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{0,0,0}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{}\pgfsys@moveto{16.34772pt}{14.22638pt}\pgfsys@curveto{16.34772pt}{15.39796pt}{15.39796pt}{16.34772pt}{14.22638pt}{16.34772pt}\pgfsys@curveto{13.0548pt}{16.34772pt}{12.10504pt}{15.39796pt}{12.10504pt}{14.22638pt}\pgfsys@curveto{12.10504pt}{13.0548pt}{13.0548pt}{12.10504pt}{14.22638pt}{12.10504pt}\pgfsys@curveto{15.39796pt}{12.10504pt}{16.34772pt}{13.0548pt}{16.34772pt}{14.22638pt}\pgfsys@closepath\pgfsys@moveto{14.22638pt}{14.22638pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{14.22638pt}{14.22638pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} { {{{{}}}}{}{}{{}}{}{{}}
{{{{{}}{}{}{}{}{{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}\pgfsys@moveto{1.92427pt}{1.92427pt}\pgfsys@lineto{12.30211pt}{12.30211pt}\pgfsys@stroke\pgfsys@invoke{
}\hbox{\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}}{{}}{{}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{1,1,1}\pgfsys@color@gray@fill{1}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{1,1,1}\pgfsys@color@gray@fill{1}\pgfsys@invoke{
}{}\pgfsys@moveto{6.38925pt}{28.45276pt}\pgfsys@curveto{6.38925pt}{29.62434pt}{5.4395pt}{30.5741pt}{4.26791pt}{30.5741pt}\pgfsys@curveto{3.09633pt}{30.5741pt}{2.14658pt}{29.62434pt}{2.14658pt}{28.45276pt}\pgfsys@curveto{2.14658pt}{27.28117pt}{3.09633pt}{26.33142pt}{4.26791pt}{26.33142pt}\pgfsys@curveto{5.4395pt}{26.33142pt}{6.38925pt}{27.28117pt}{6.38925pt}{28.45276pt}\pgfsys@closepath\pgfsys@moveto{4.26791pt}{28.45276pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{4.26791pt}{28.45276pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} { {{}}{}{{}}
{{{{{}}{}{}{}{}{{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}\pgfsys@moveto{12.66582pt}{16.45578pt}\pgfsys@lineto{5.82848pt}{26.22336pt}\pgfsys@stroke\pgfsys@invoke{
} }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}}\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}}{{}}{{}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{0,0,0}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{0,0,0}\pgfsys@color@gray@fill{0}\pgfsys@invoke{
}{}\pgfsys@moveto{26.30618pt}{28.45276pt}\pgfsys@curveto{26.30618pt}{29.62434pt}{25.35643pt}{30.5741pt}{24.18484pt}{30.5741pt}\pgfsys@curveto{23.01326pt}{30.5741pt}{22.0635pt}{29.62434pt}{22.0635pt}{28.45276pt}\pgfsys@curveto{22.0635pt}{27.28117pt}{23.01326pt}{26.33142pt}{24.18484pt}{26.33142pt}\pgfsys@curveto{25.35643pt}{26.33142pt}{26.30618pt}{27.28117pt}{26.30618pt}{28.45276pt}\pgfsys@closepath\pgfsys@moveto{24.18484pt}{28.45276pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{24.18484pt}{28.45276pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} { {{{{}}}}{}{}{{}}{}{{}}
{{{{{}}{}{}{}{}{{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}\pgfsys@moveto{15.78694pt}{16.45578pt}\pgfsys@lineto{22.62428pt}{26.22336pt}\pgfsys@stroke\pgfsys@invoke{
}\hbox{\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}}{{}}{{}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{1,1,1}\pgfsys@color@gray@fill{1}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{1,1,1}\pgfsys@color@gray@fill{1}\pgfsys@invoke{
}{}\pgfsys@moveto{22.03827pt}{42.67914pt}\pgfsys@curveto{22.03827pt}{43.85072pt}{21.08852pt}{44.80048pt}{19.91693pt}{44.80048pt}\pgfsys@curveto{18.74535pt}{44.80048pt}{17.7956pt}{43.85072pt}{17.7956pt}{42.67914pt}\pgfsys@curveto{17.7956pt}{41.50755pt}{18.74535pt}{40.5578pt}{19.91693pt}{40.5578pt}\pgfsys@curveto{21.08852pt}{40.5578pt}{22.03827pt}{41.50755pt}{22.03827pt}{42.67914pt}\pgfsys@closepath\pgfsys@moveto{19.91693pt}{42.67914pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{19.91693pt}{42.67914pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} { {{}}{}{{}}
{{{{{}}{}{}{}{}{{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}\pgfsys@moveto{23.40286pt}{31.05933pt}\pgfsys@lineto{20.69891pt}{40.07257pt}\pgfsys@stroke\pgfsys@invoke{
} }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}}\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}}{{}}{{}}
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{1,1,1}\pgfsys@color@gray@fill{1}\pgfsys@invoke{
}{{}{{{}}}{{}}{}{}{}{}{}{}{}{}{}{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@gray@stroke{0}\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{1,1,1}\pgfsys@color@gray@fill{1}\pgfsys@invoke{
}{}\pgfsys@moveto{30.5741pt}{42.67914pt}\pgfsys@curveto{30.5741pt}{43.85072pt}{29.62434pt}{44.80048pt}{28.45276pt}{44.80048pt}\pgfsys@curveto{27.28117pt}{44.80048pt}{26.33142pt}{43.85072pt}{26.33142pt}{42.67914pt}\pgfsys@curveto{26.33142pt}{41.50755pt}{27.28117pt}{40.5578pt}{28.45276pt}{40.5578pt}\pgfsys@curveto{29.62434pt}{40.5578pt}{30.5741pt}{41.50755pt}{30.5741pt}{42.67914pt}\pgfsys@closepath\pgfsys@moveto{28.45276pt}{42.67914pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{28.45276pt}{42.67914pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} { {{}}{}{{}}
{{{{{}}{}{}{}{}{{}}}}}{}{{{{{}}{}{}{}{}{{}}}}}{{}}{}{}{}{}\pgfsys@moveto{24.96683pt}{31.05933pt}\pgfsys@lineto{27.67078pt}{40.07257pt}\pgfsys@stroke\pgfsys@invoke{
} }\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope{{
{}{}{}{}{}}{{{}}{{}}}}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope\hss}}\lxSVG@closescope\endpgfpicture}},$
where represents the noise (6). We set
$\displaystyle\operatorname{Remove}(\mathfrak{B}(\mathcal{T}))$
$\displaystyle\vcentcolon=\\{\operatorname{Remove}(\tau;e)\nonscript\>|\nonscript\>\mathopen{}\tau\in\mathfrak{B}(\mathcal{T}),e\in
E_{\tau}\text{ with }\mathfrak{t}(e)=\mathscr{I}\\},$
$\displaystyle\operatorname{Remove}^{\mathfrak{n}}(\mathfrak{B}(\mathcal{T}))$
$\displaystyle\vcentcolon=\\{(T,0)^{\mathfrak{n},0}_{\mathfrak{e}}\nonscript\>|\nonscript\>\mathopen{}(T,0)^{0,0}_{\mathfrak{e}}\in\operatorname{Remove}(\mathfrak{B}(\mathcal{T}))\\}.$
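Only one structural feature of $\operatorname{Remove}$ is used below, namely the parity of the number of $\mathscr{I}$-type edges: Lemma C.22 shows that every tree in $\operatorname{Remove}(\mathfrak{B}(\mathcal{T}))$ carries an odd number $n(T)$ of such edges. The following toy bookkeeping (an illustration only; the actual graph surgery $\operatorname{Remove}(\tau;e)$ is defined earlier in the text, and the assumption that it deletes exactly one $\mathscr{I}$-type edge from a tree with evenly many of them is ours) makes this parity flip explicit.

```python
# Parity bookkeeping behind Remove (illustration only; the actual graph
# surgery Remove(tau; e) is defined earlier in the text).  Lemma C.22 uses
# that every tree in Remove(B(T)) has an odd number n(T) of I-type edges.
# Toy assumption: Remove deletes exactly one I-type edge.

def n_of(edge_types):
    """Count the edges of type 'I' (the quantity n(T) of Lemma C.22)."""
    return sum(1 for t in edge_types if t == "I")

# Hypothetical tree from B(T): edge types only, two I-edges and two noises.
tau = ["I", "I", "Xi", "Xi"]

removed = list(tau)
removed.remove("I")        # delete one I-type edge

print(n_of(tau))           # 2
print(n_of(removed))       # 1, odd as required
```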
###### Lemma C.20.
Suppose $\mathscr{Z}=(\Pi,\Gamma)$ is a model realizing $K$. Then, the
following claims hold for $\tau\in\mathcal{T}_{-}$.
1. (a)
If $\tau=\Xi$ or
$\tau=\nabla\mathscr{I}(\tau_{1})\cdot\nabla\mathscr{I}(\tau_{2})$ with
$\lvert\tau_{1}\rvert_{+},\lvert\tau_{2}\rvert_{+}<-1$, then
$\tau^{\mathcal{K},\mathscr{Z}}=\tau$.
2. (b)
If $\tau=\nabla\mathscr{I}(\tau_{1})\cdot\nabla\mathscr{I}(\tau_{2})$ with
$\lvert\tau_{1}\rvert_{+}>-1$ and $\lvert\tau_{2}\rvert_{+}<-1$, then one has
the expansion
$\tau^{\mathcal{K},\mathscr{Z}}(x)=\tau+\sum_{\sigma\in\mathfrak{V}(\tau)}a_{\tau,\sigma}^{\mathscr{Z}}(x)\sigma,$
(84)
with the following properties:
* •
$\mathfrak{V}(\tau)$ is a finite subset of
$\operatorname{Remove}^{\mathfrak{n}}(\mathfrak{B}(\mathcal{T}))$ that is
independent of $\mathscr{Z}$,
* •
one has
$\displaystyle a_{\tau,\sigma}^{\mathscr{Z}}(x)$ $\displaystyle=$
$\displaystyle\sum_{\begin{subarray}{c}j\in\\{1,\ldots,d\\},n\in\mathbb{N}_{0},\rho\in\mathcal{T}_{-},\\\
l_{1},\ldots,l_{n}\in\mathbb{N}_{0}^{d},\sigma_{1},\ldots,\sigma_{n}\in\operatorname{Remove}^{\mathfrak{n}}(\mathfrak{B}(\mathcal{T}))\\\
\lvert\sigma_{k}\rvert_{+}+2-l_{k}>0,-1<\lvert\rho\rvert_{+}<\lvert\tau\rvert_{+}\end{subarray},}c_{\tau,\sigma,\rho}^{l_{1},\ldots,l_{n},\sigma_{1},\ldots,\sigma_{n}}(\mathcal{P})[\partial_{j}K\ast\Pi_{x}\rho^{\mathcal{K},\mathscr{Z}}(x)]\prod_{k=1}^{n}[\partial^{l_{k}}K\ast\Pi_{x}\sigma_{k}](x)$
$\displaystyle+$
$\displaystyle\sum_{\begin{subarray}{c}n\in\mathbb{N}_{0},\rho\in\mathcal{T}_{-},\\\
l,l_{1},\ldots,l_{n}\in\mathbb{N}_{0}^{d},\sigma_{1},\ldots,\sigma_{n}\in\operatorname{Remove}^{\mathfrak{n}}(\mathfrak{B}(\mathcal{T}))\\\
\lvert\sigma_{k}\rvert_{+}+2-l_{k}>0,-1<\lvert\rho\rvert_{+}<\lvert\tau\rvert_{+}\end{subarray},}c_{\tau,\sigma,\rho,l}^{l_{1},\ldots,l_{n},\sigma_{1},\ldots,\sigma_{n}}(\mathcal{R})[\partial^{l}K\ast(\mathcal{R}^{\mathscr{Z}}\rho^{\mathcal{K},\mathscr{Z}}-\Pi_{x}\rho^{\mathcal{K},\mathscr{Z}}(x))](x)$
$\displaystyle\hskip
227.62204pt\times\prod_{k=1}^{n}[\partial^{l_{k}}K\ast\Pi_{x}\sigma_{k}](x),$
where the sum is actually finite and the constants
$c_{\tau,\sigma,\rho}^{l_{1},\ldots,l_{n},\sigma_{1},\ldots,\sigma_{n}}(\mathcal{P})\quad\text{and}\quad
c_{\tau,\sigma,\rho,l}^{l_{1},\ldots,l_{n},\sigma_{1},\ldots,\sigma_{n}}(\mathcal{R})$
are independent of $\mathscr{Z}$.
###### Proof.
To see the claim (a), if $\lvert\tau\rvert_{+}<-1$, thanks to Lemma C.18, the
identity (81) becomes
$\mathcal{K}\tau=\mathscr{I}\tau+(K\ast\mathcal{R}\tau)(x)\bm{1}$
and hence $\mathscr{D}_{i}\mathcal{K}\tau=\mathscr{I}_{i}\tau$. The claim (b)
looks complicated but can be proved by a straightforward induction. Suppose that one has
$\tau=\nabla\mathscr{I}(\tau_{1})\cdot\nabla\mathscr{I}(\tau_{2})$ such that
$\tau_{1}$ has the expansions of the form (84) and
$\tau_{2}^{\mathcal{K}}=\tau_{2}$. Furthermore, one has
$-1<\lvert\tau_{1}\rvert_{+}<0$ since $\lvert\tau\rvert_{+}<0$. Therefore, one
has
$\tau_{1}^{\mathcal{K}}=\tau_{1}+\sum_{\sigma\in\operatorname{Remove}^{\mathfrak{n}}(\mathfrak{B}(\mathcal{T}))}a_{\sigma}\sigma,\quad\tau_{2}^{\mathcal{K}}=\tau_{2}$
(85)
where $a_{\sigma}$ has the desired property. By the definition (81) of
$\mathcal{K}$, one has
$\mathscr{D}_{i}\mathcal{K}\tau^{\mathcal{K}}_{1}(x)=\mathscr{I}_{i}\tau_{1}+\sum_{\sigma\in\operatorname{Remove}^{\mathfrak{n}}(\mathfrak{B}(\mathcal{T}))}a_{\sigma}(x)\mathscr{I}_{i}(\sigma)+[\partial_{i}K\ast\Pi_{x}\tau_{1}](x)\bm{1}\\\
+\sum_{\begin{subarray}{c}\sigma\in\operatorname{Remove}^{\mathfrak{n}}(\mathfrak{B}(\mathcal{T})),l\in\mathbb{N}_{0}^{d}\\\
\lvert\sigma\rvert+1-|l|>0\end{subarray}}a_{\sigma}(x)[\partial^{\bm{e}_{i}+l}K\ast\Pi_{x}\sigma](x)\frac{X^{l}}{l!}+\sum_{\lvert
l\rvert<\gamma_{\tau_{1}}+1}[\partial^{\bm{e}_{i}+l}K\ast(\mathcal{R}\tau_{1}^{\mathcal{K}}-\Pi_{x}\tau_{1}^{\mathcal{K}})](x)\frac{X^{l}}{l!},$
where $\gamma_{\tau_{1}}$ is chosen so that
$\tau_{1}^{\mathcal{K}}\in\mathcal{D}^{\gamma_{\tau_{1}}}(\mathscr{T},\mathscr{Z})$,
see Remark C.12. Since
$\mathscr{D}_{i}\mathcal{K}\tau_{2}^{\mathcal{K}}=\mathscr{I}_{i}\tau_{2}$ as
shown in the part (a), one has
$\mathscr{I}_{i}(\sigma)\mathscr{I}_{i}(\tau_{2}),X^{l}\mathscr{I}_{i}(\tau_{2})\in\operatorname{Remove}^{\mathfrak{n}}(\mathfrak{B}(\mathcal{T})).$
Since $\lvert\tau_{1}\rvert_{+}<\lvert\tau\rvert_{+}$, we complete the
induction. ∎
We recall an explicit formula for the BPHZ realization.
###### Definition C.21 ([13, Theorem 6.18]).
Let $\hat{\mathscr{T}}_{-}$ be the free algebra generated by $\mathscr{T}$
under the forest product. (In fact, recalling $H^{R}_{1}$ from Definition
B.22, we have $\hat{\mathscr{T}}_{-}=H^{R}_{1}$.) We define the algebra
homomorphism $g_{\varepsilon}^{-}:\hat{\mathscr{T}}_{-}\to\mathbb{R}$
characterized by
$g_{\varepsilon}^{-}(\mathfrak{i}_{\circ}\tau)\vcentcolon=\mathbb{E}[\mathbf{\Pi}^{\mathrm{can},\varepsilon}\tau(0)],$
where $\mathfrak{i}_{\circ}:\mathscr{T}\to\hat{\mathscr{T}}_{-}$ is the
natural injection. Then, we have
$\mathbf{\Pi}^{\mathrm{BPHZ},\varepsilon}=(g_{\varepsilon}^{-}\hat{\mathscr{A}}_{-}\otimes\mathbf{\Pi}^{\mathrm{can},\varepsilon})\Delta^{\circ}_{-}.$
(86)
In view of the identity (86) and Lemma C.20, we need to understand
$(g^{-}_{\varepsilon}\hat{\mathscr{A}}_{-}\otimes\mathbf{\Pi}^{\mathrm{can},\varepsilon})\Delta_{-}^{\circ}\tau$
for $\tau\in\mathcal{T}_{-}$ and
$\tau\in\operatorname{Remove}^{\mathfrak{n}}(\mathfrak{B}(\mathcal{T}))$. As
one can easily guess from the definition of $g^{-}_{\varepsilon}$, it is
necessary to estimate
$\mathbb{E}[\mathbf{\Pi}^{\mathrm{can},\varepsilon}\tau(0)]$ for such $\tau$.
The following simple lemma is a consequence of the symmetry of the noise
$\xi$.
###### Lemma C.22.
For $\tau\in\operatorname{Remove}(\mathfrak{B}(\mathcal{T}))$, one has
$\mathbb{E}[\mathbf{\Pi}^{\mathrm{can},\varepsilon}\tau(0)]=0$.
###### Proof.
Let
$\tau=(T,0)^{0,0}_{\mathfrak{e}}\in\operatorname{Remove}(\mathfrak{B}(\mathcal{T}))$.
Let $\mathbf{\Pi}^{\text{minus}}$ be the canonical realization for
$\xi_{\varepsilon}(-\cdot)$. Since
$\xi\overset{\operatorname{d}}{=}\xi(-\cdot)$, one has
$\mathbf{\Pi}^{\text{minus}}\sigma\overset{\operatorname{d}}{=}\mathbf{\Pi}^{\mathrm{can},\varepsilon}\sigma$
for every $\sigma\in\mathscr{T}$. If we set
$n(T)\vcentcolon=\\#\\{e\in
E_{T}\nonscript\>|\nonscript\>\mathopen{}\mathfrak{t}(e)=\mathscr{I}\\},$
by using the identity
$\partial_{i}K\ast[f(-\cdot)]=-[\partial_{i}K\ast f](-\cdot),$
where the fact $K=K(-\cdot)$ is used, one has
$\mathbf{\Pi}^{\text{minus}}\tau=(-1)^{n(T)}\mathbf{\Pi}^{\mathrm{can},\varepsilon}\tau$.
However, since $\tau\in\operatorname{Remove}(\mathfrak{B}(\mathcal{T}))$,
$n(T)$ is odd. Therefore, one has
$\mathbf{\Pi}^{\text{minus}}\tau\overset{\operatorname{d}}{=}\mathbf{\Pi}^{\mathrm{can},\varepsilon}\tau\quad\text{and}\quad\mathbf{\Pi}^{\text{minus}}\tau=-\mathbf{\Pi}^{\mathrm{can},\varepsilon}\tau,$
which implies $\mathbb{E}[\mathbf{\Pi}^{\mathrm{can},\varepsilon}\tau(0)]=0$.
∎
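The mechanism of this proof, namely $\xi\overset{\operatorname{d}}{=}\xi(-\cdot)$ combined with the sign $(-1)^{n(T)}$ for odd $n(T)$, is the elementary fact that an odd functional of a symmetric random variable has vanishing mean. A toy numerical check (illustration only; the discrete symmetric law below is a hypothetical stand-in for the noise):

```python
# Toy check (illustration only) of the parity argument in Lemma C.22:
# if xi is symmetric in law (xi =d -xi) and f is odd, then E[f(xi)] = 0.

law = {-2: 0.25, -1: 0.25, 1: 0.25, 2: 0.25}   # hypothetical symmetric law

def expect(f):
    """Exact expectation of f under the discrete law."""
    return sum(p * f(x) for x, p in law.items())

odd = lambda x: x**3 - 4 * x    # odd functional: f(-x) = -f(x)
even = lambda x: x * x          # even functional, for contrast

print(expect(odd))    # 0.0
print(expect(even))   # 2.5
```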
###### Lemma C.23.
For
$\tau=(F,\hat{F})^{\mathfrak{n},\mathfrak{o}}_{\mathfrak{e}}\in\mathfrak{B}(\mathcal{T})\cup\operatorname{Remove}^{\mathfrak{n}}(\mathfrak{B}(\mathcal{T}))$
and $x\in\mathbb{R}^{d}$, one has
$\Delta^{\circ}_{-}\tau=\tau\otimes\bm{1}+\bm{1}_{-}\otimes\tau+\ker(g^{-}_{\varepsilon}\hat{\mathscr{A}}_{-}\otimes\Pi^{\mathrm{can},\varepsilon}_{x})\cap\ker(g^{-}_{\varepsilon}\hat{\mathscr{A}}_{-}\otimes\mathbf{\Pi}^{\mathrm{can},\varepsilon}).$
###### Proof.
Recall from Definition B.2-(a) that edges are oriented. We call an edge
$e=(a,b)$ a leaf if $b$ is not followed by any edge. We call a node $a$ of $F$
true if there exists an edge $e=(a,b)$ such that
$\mathfrak{t}(e)=\mathscr{I}$. We denote by $N^{\text{true}}$ the set of all
true nodes of $F$. For a subforest $G$ of $F$, we set
$N_{G}^{j}\vcentcolon=\\{a\in N_{G}\cap
N^{\text{true}}\nonscript\>|\nonscript\>\mathopen{}\text{ there exist exactly
$j$ outgoing edges in $G$ at $a$}\\}.$
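These counts are purely combinatorial and can be sketched concretely. In the following illustration a forest is encoded as a dict `{(parent, child): edge_type}`; the encoding and the example tree are hypothetical choices of ours, not the paper's formalism.

```python
# Concrete sketch (illustration only) of N^true and N_G^j for a toy forest.

def true_nodes(F_edges):
    """Nodes of F with at least one outgoing edge of type 'I' (N^true)."""
    return {a for (a, _), t in F_edges.items() if t == "I"}

def N_j(F_edges, G_edges, j):
    """Nodes of the subforest G lying in N^true with exactly j outgoing
    edges in G (the set N_G^j)."""
    nodes_G = {v for e in G_edges for v in e}
    outdeg = {}
    for (a, _) in G_edges:
        outdeg[a] = outdeg.get(a, 0) + 1
    return {a for a in nodes_G
            if a in true_nodes(F_edges) and outdeg.get(a, 0) == j}

# Example: root r with I-edges to a and b; a carries a noise (Xi) edge.
F = {("r", "a"): "I", ("r", "b"): "I", ("a", "l"): "Xi"}
G = [("r", "a")]                 # subforest with a single I-edge

print(sorted(N_j(F, G, 1)))      # ['r']
print(sorted(N_j(F, G, 0)))      # []
```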
Recalling the coproduct formula (73), one has
$\Delta^{\circ}_{-}\tau=\tau\otimes\mathscr{R}_{\lvert\tau\rvert_{+}}\bm{1}+\bm{1}_{-}\otimes\tau\\\
+\sum_{G\subseteq
F,G\neq\varnothing}\sum_{\mathfrak{n}_{G}\neq\mathfrak{n},\varepsilon^{F}_{G}}\frac{1}{\varepsilon^{F}_{G}!}\binom{\mathfrak{n}}{\mathfrak{n}_{G}}(G,0)^{\mathfrak{n}_{G}+\pi\varepsilon^{F}_{G},0}_{\mathfrak{e}}\otimes\mathscr{K}(F,\mathbbm{1}_{G})^{\mathfrak{n}-\mathfrak{n}_{G},\pi(\varepsilon^{F}_{A}-\mathfrak{e}\mathbbm{1}_{G})}_{\mathfrak{e}\mathbbm{1}_{E_{F}\setminus
E_{G}}+\varepsilon^{F}_{G}},$
where $\mathscr{R}_{\alpha}$ is defined in Definition 3.8. However, note that
$\mathbf{\Pi}^{\mathrm{can},\varepsilon}\mathscr{R}_{\alpha}\bm{1}=\mathbf{\Pi}^{\mathrm{can},\varepsilon}\bm{1}$.
We fix $G\neq\varnothing$, $\mathfrak{n}_{G}\neq\mathfrak{n}$ and
$\varepsilon^{F}_{G}$ and set
$\tau_{1}\vcentcolon=(G,0)^{\mathfrak{n}_{G}+\pi\varepsilon^{F}_{G},0}_{\mathfrak{e}},\quad\tau_{2}\vcentcolon=\mathscr{K}(F,\mathbbm{1}_{G})^{\mathfrak{n}-\mathfrak{n}_{G},\pi(\varepsilon^{F}_{A}-\mathfrak{e}\mathbbm{1}_{G})}_{\mathfrak{e}\mathbbm{1}_{E_{F}\setminus
E_{G}}+\varepsilon^{F}_{G}}.$
We will prove
$(g^{-}_{\varepsilon}\hat{\mathscr{A}}_{-}\otimes\Pi^{\mathrm{can},\varepsilon}_{x})(\tau_{1}\otimes\tau_{2})=0$
by a case analysis, which will complete the proof. In each case below, we
tacitly exclude the cases already treated.
1. (a)
Suppose that $G\neq F$ and that a connected component $T$ of $G$ satisfies
$N_{T}^{0}=\varnothing$ and $N_{T}^{1}=N_{F}^{1}\cap N_{G}$. Then, the forest
$\tau_{2}$ contains a leaf $(a,\rho_{T})$ of edge type $\mathscr{I}$ and hence
$\Pi^{\mathrm{can},\varepsilon}_{x}\tau_{2}=\mathbf{\Pi}^{\mathrm{can},\varepsilon}\tau_{2}=0$.
2. (b)
Suppose $G$ contains a leaf of edge type $\mathscr{I}$. Then, in view of the
recursive formula (76), this is also the case for each forest appearing in
$\hat{\mathscr{A}}_{-}\tau_{1}$ and hence
$g^{-}_{\varepsilon}\hat{\mathscr{A}}_{-}\tau_{1}=0$.
3. (c)
Suppose $N_{G}^{0}\neq\varnothing$. Once case (b) is excluded, a
connected component of $\tau_{1}$ is of the form
$\bullet^{\mathfrak{n}_{1},0}$ and hence $\tau_{1}=0$ (as an element of
$\mathscr{T}_{-}$).
4. (d)
Suppose $\tau_{1}$ contains a connected component
$\tau_{3}=(T,0)^{\mathfrak{n},0}_{\mathfrak{e}}$ such that $\\#N^{1}_{T}\geq
2$. Let $a\in N^{1}_{T}$.
* •
If $a$ is the root of $T$, then $\tau_{3}=\mathscr{I}_{i}(\tau_{4})$ and hence
$\tau_{1}=0$ (as an element of $\mathscr{T}_{-}$).
* •
If $a$ is not the root of $T$, one can merge two consecutive edges $(a_{1},a)$
and $(a,a_{2})$ into a single edge $(a_{1},a_{2})$ to obtain a new tree
$\tau_{5}\in\mathfrak{T}_{\circ}$ with
$\lvert\tau_{3}\rvert_{-}=\lvert\tau_{5}\rvert_{-}+1$. Since
$\lvert\sigma\rvert_{-}\geq-2+\delta$ for every
$\sigma\in\mathfrak{T}_{\circ}$, if $\\#(N_{T}^{1}\setminus\\{\rho_{T}\\})\geq
2$, then $\lvert\tau_{3}\rvert_{-}>0$ and hence $\tau_{1}=0$ (as an element of
$\mathscr{T}_{-}$).
5. (e)
Suppose that $\tau_{1}$ contains a connected component
$\tau_{6}=(T_{6},0)^{\mathfrak{n}_{6},0}_{\mathfrak{e}}$ such that
$N_{T_{6}}^{0}=N_{T_{6}}^{1}=\varnothing$. Then, $T_{1}=T_{6}=F$ and
$\tau_{1}\in\mathfrak{B}(\mathcal{T})$. However, this implies
$\mathfrak{n}=\mathfrak{n}_{G}=0$, which is excluded.
6. (f)
Therefore, it remains to consider the case where every connected component
$\tau_{7}=(T_{7},0)^{\mathfrak{n}_{7},0}_{\mathfrak{e}}$ of $\tau_{1}$
satisfies $\\#N_{T_{7}}^{1}=1$ and $N^{0}_{T_{7}}=\varnothing$ and all leaves
of $\tau_{7}$ are of type $\Xi$, namely
$\tau_{7}\in\operatorname{Remove}^{\mathfrak{n}}(\mathfrak{B}(\mathcal{T}))$.
If $\mathfrak{n}_{7}\neq 0$ on $N_{T_{7}}$, then $\lvert\tau_{7}\rvert_{-}>0$.
Thus, we suppose $\mathfrak{n}_{7}=0$. We will show
$g^{-}_{\varepsilon}\hat{\mathscr{A}}_{-}\tau_{7}=0$, which implies
$g^{-}_{\varepsilon}\hat{\mathscr{A}}_{-}\tau_{1}=0$ since the character
$g^{-}_{\varepsilon}\hat{\mathscr{A}}_{-}$ is multiplicative. To apply the
recursive formula (76), consider the expansion
$\hat{\Delta}_{-}\tau_{7}-\tau_{7}\otimes\bm{1}_{-}=\bm{1}\otimes\tau_{7}+\sum_{\tau_{8}}c_{\tau_{8}}\tau_{8}\otimes\tau_{9}.$
Then, one has
$g^{-}_{\varepsilon}\hat{\mathscr{A}}_{-}\tau_{7}=-\mathbb{E}[\mathbf{\Pi}^{\mathrm{can},\varepsilon}\tau_{7}(0)]-\sum_{\tau_{8}}c_{\tau_{8}}\times\big{(}g^{-}_{\varepsilon}\hat{\mathscr{A}}_{-}\tau_{8}\big{)}\times\mathbb{E}[\mathbf{\Pi}^{\mathrm{can},\varepsilon}\tau_{9}(0)].$
By the same reasoning as before, one can suppose that every component
$\tau_{10}=(T_{10},0)^{0,0}_{\mathfrak{e}}$ of $\tau_{8}$ belongs to
$\operatorname{Remove}(\mathfrak{B}(\mathcal{T}))$. However, since $T_{10}$
has a strictly smaller number of edges than $T_{7}$ does, one can assume
$g^{-}_{\varepsilon}\hat{\mathscr{A}}_{-}\tau_{8}=0$ by induction. Therefore,
it remains to show
$\mathbb{E}[\mathbf{\Pi}^{\mathrm{can},\varepsilon}\tau_{7}(0)]=0$. But this
was shown in Lemma C.22. ∎
###### Corollary C.24.
If $\tau\in\operatorname{Remove}(\mathfrak{B}(\mathcal{T}))$, then
$g^{-}_{\varepsilon}\hat{\mathscr{A}}_{-}\tau=0$. If $\tau\in\mathcal{T}_{-}$,
then
$g^{-}_{\varepsilon}\hat{\mathscr{A}}_{-}\tau=-\mathbb{E}[\mathbf{\Pi}^{\mathrm{can},\varepsilon}\tau(0)].$
###### Proof.
The claim for $\tau\in\operatorname{Remove}(\mathfrak{B}(\mathcal{T}))$ is
proved in the proof of Lemma C.23, see case (f). If
$\tau\in\mathcal{T}_{-}$, by Lemma C.23 one has
$\mathbf{\Pi}^{\mathrm{BPHZ},\varepsilon}\tau=\mathbf{\Pi}^{\mathrm{can},\varepsilon}\tau+g^{-}_{\varepsilon}\hat{\mathscr{A}}_{-}\tau.$
However, since $\lvert\tau\rvert_{-}<0$, one has
$\mathbb{E}[\mathbf{\Pi}^{\mathrm{BPHZ},\varepsilon}\tau(0)]=0$ by definition,
which completes the proof. ∎
###### Proposition C.25.
For $\tau\in\mathcal{T}_{-}$, one has
$\displaystyle\Pi_{x}^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}\tau^{\mathcal{K},\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}(x)$
$\displaystyle=\Pi_{x}^{\mathscr{Z}^{\mathrm{can},\varepsilon}}\tau^{\mathcal{K},\mathscr{Z}^{\mathrm{can},\varepsilon}}(x)-\mathbb{E}[\mathbf{\Pi}^{\mathrm{can},\varepsilon}\tau(0)],\quad
x\in\mathbb{R}^{d},$ (87)
$\displaystyle\mathcal{R}^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}\tau^{\mathcal{K},\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}$
$\displaystyle=\mathcal{R}^{\mathscr{Z}^{\mathrm{can},\varepsilon}}\tau^{\mathcal{K},\mathscr{Z}^{\mathrm{can},\varepsilon}}-\mathbb{E}[\mathbf{\Pi}^{\mathrm{can},\varepsilon}\tau(0)].$
In particular,
$X^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}=X^{\mathscr{Z}^{\mathrm{can},\varepsilon}}-c_{\varepsilon},$
where
$c_{\varepsilon}\vcentcolon=\sum_{\tau\in\mathcal{T}_{-}}c(\tau)\mathbb{E}[\mathbf{\Pi}^{\mathrm{can},\varepsilon}\tau(0)].$
(88)
###### Proof.
To simplify notation, we write
$\mathcal{R}^{\mathrm{BPHZ}}\vcentcolon=\mathcal{R}^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}$
here, for instance. Since
$\mathcal{R}^{\\#}\tau^{\mathcal{K},\\#}(x)=[\Pi^{\\#}_{x}\tau^{\mathcal{K},\\#}(x)](x),\quad\\#\in\\{\mathrm{can},\mathrm{BPHZ}\\},$
it suffices to prove (87). By Lemma C.20, one has the expansion
$\tau^{\mathcal{K},\mathrm{BPHZ}}(x)=\tau+\sum_{\sigma}a_{\tau,\sigma}^{\mathrm{BPHZ}}(x)\sigma.$
In the expression of $a_{\tau,\sigma}^{\mathrm{BPHZ}}$ given in Lemma C.20,
every $\rho$ in the sum satisfies $\lvert\rho\rvert_{+}<\lvert\tau\rvert_{+}$.
Therefore, one can assume
$a_{\sigma}^{\mathrm{BPHZ}}=a_{\sigma}^{\mathrm{can}}$ by induction. By Lemma
C.23 and Corollary C.24,
$\Delta^{\circ}_{-}\tau^{\mathcal{K},\mathrm{BPHZ}}(x)=\tau\otimes\bm{1}+\bm{1}_{-}\otimes\tau+\sum_{\sigma}a_{\sigma}^{\mathrm{can}}(x)\bm{1}_{-}\otimes\sigma+\ker(g^{-}_{\varepsilon}\hat{\mathscr{A}}_{-}\otimes\Pi^{\mathrm{can}}_{x}).$
Furthermore, by [13, Theorem 6.16], one has
$\Pi_{x}^{\mathrm{BPHZ}}=(g^{-}_{\varepsilon}\hat{\mathscr{A}}_{-}\otimes\Pi^{\mathrm{can}}_{x})\Delta^{\circ}_{-}.$
Therefore,
$\displaystyle\Pi^{\mathrm{BPHZ}}_{x}\tau^{\mathcal{K},\mathrm{BPHZ}}(x)$
$\displaystyle=g^{-}_{\varepsilon}\hat{\mathscr{A}}_{-}\tau+\Pi^{\mathrm{can}}_{x}\tau+\sum_{\sigma}a_{\sigma}^{\mathrm{can}}(x)\Pi^{\mathrm{can}}_{x}\sigma$
$\displaystyle=-\mathbb{E}[\mathbf{\Pi}^{\mathrm{can},\varepsilon}\tau(0)]+\Pi^{\mathrm{can}}_{x}\tau^{\mathcal{K},\mathrm{can}}(x),$
where we applied Corollary C.24 to get the last equality. ∎
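The content of Proposition C.25 is that BPHZ renormalization acts on $X$ by subtracting the constant $c_{\varepsilon}$, i.e. by centering the expectations of the canonical model (so that $\mathbb{E}[\mathbf{\Pi}^{\mathrm{BPHZ},\varepsilon}\tau(0)]=0$, as in Corollary C.24). A toy analogue of this centering (illustration only; the discrete symmetric "noise" is a hypothetical stand-in):

```python
# Toy analogue (illustration only) of Proposition C.25: BPHZ renormalization
# replaces X^can by X^can - c_eps, where c_eps collects the expectations
# E[Pi^can tau(0)]; i.e. it centers the canonical objects.

law = {-1.0: 0.5, 1.0: 0.5}        # hypothetical symmetric "noise"

def expect(f):
    return sum(p * f(x) for x, p in law.items())

can = lambda x: x * x              # a "canonical" object with nonzero mean
c_eps = expect(can)                # the renormalization constant
bphz = lambda x: can(x) - c_eps    # the renormalized (BPHZ) object

print(c_eps)          # 1.0
print(expect(bphz))   # 0.0, the centered object has vanishing mean
```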
### C.5 BPHZ renormalization for $\bm{Y}_{N}$
The goal of this section is to compare
$\bm{Y}_{N}^{\mathscr{Z}^{\mathrm{can},\varepsilon}}$ and
$\bm{Y}_{N}^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}$, as we did for $\bm{X}$
in the previous section. Again, we need to obtain the basis expansion for
$\bm{Y}_{N}$.
###### Lemma C.26.
Let $\tau_{1},\tau_{2}\in\mathcal{T}_{-}$, $i\in\\{1,\ldots,d\\}$ and
$N\in\mathbb{N}$. Let $\mathscr{Z}$ be a model realizing $K$. Assume
$\lvert\tau_{1}\rvert_{+}+\lvert\tau_{2}\rvert_{+}>-2$. Then, for
$x\in\mathbb{R}^{d}$, one has
$\mathfrak{p}_{<\delta}\big{\\{}F(\bm{W}_{N}^{\mathscr{Z}})(x)\star\mathscr{D}_{i}[\mathcal{K}\tau_{1}^{\mathcal{K},\mathscr{Z}}](x)\star\mathscr{D}_{i}[\mathcal{K}\tau_{2}^{\mathcal{K},\mathscr{Z}}](x)\big{\\}}\\\
=\mathfrak{p}_{<\delta}\Big{\\{}\sum_{k\in\mathbb{N}_{0}}\frac{D^{k}F(W_{N}^{\mathscr{Z}}(x))}{k!}\Big{(}\sum_{\tau\in\mathcal{T}_{-}}\mathscr{I}\tau\Big{)}^{\star
k}\star\mathscr{D}_{i}[\mathcal{K}\tau_{1}^{\mathcal{K},\mathscr{Z}}](x)\star\mathscr{D}_{i}[\mathcal{K}\tau_{2}^{\mathcal{K},\mathscr{Z}}](x)\Big{\\}}$
and
$\mathfrak{p}_{<\delta}\big{\\{}F(\bm{W}_{N}^{\mathscr{Z}})(x)\star\mathscr{D}_{i}[\mathcal{K}^{\mathscr{Z}}\bm{X}^{\mathscr{Z}}](x)\star
R_{2}[\partial_{i}\\{H_{N}\ast(\mathcal{R}^{\mathscr{Z}}\bm{X}^{\mathscr{Z}})\\}](x)\big{\\}}\\\
=\mathfrak{p}_{<\delta}\Big{\\{}\sum_{k\in\mathbb{N}_{0}}\frac{D^{k}F(W_{N}^{\mathscr{Z}}(x))}{k!}\partial_{i}[H_{N}\ast(X^{\mathscr{Z}})](x)\Big{(}\sum_{\tau\in\mathcal{T}_{-}}\mathscr{I}\tau\Big{)}^{\star
k}\star\mathscr{D}_{i}[\mathcal{K}^{\mathscr{Z}}\bm{X}^{\mathscr{Z}}](x)\Big{\\}}.$
###### Proof.
By Lemma C.20, one has
$\bm{W}_{N}^{\mathscr{Z}}(x)=\sum_{\tau\in\mathcal{T}_{-}}\mathscr{I}\tau+W_{N}^{\mathscr{Z}}(x)\bm{1}+\bm{W}_{N}^{\mathscr{Z},+}(x),$
where $\bm{W}_{N}^{\mathscr{Z},+}(x)\in\oplus_{\alpha\geq
1}\mathscr{T}_{\alpha}$. Recalling Definition C.3, one has
$F(\bm{W}_{N}^{\mathscr{Z}})(x)=\sum_{k\in\mathbb{N}_{0}}\frac{D^{k}F(W_{N}^{\mathscr{Z}}(x))}{k!}\Big{(}\sum_{\tau\in\mathcal{T}_{-}}\mathscr{I}\tau+\bm{W}_{N}^{\mathscr{Z},+}(x)\Big{)}^{\star
k}.$
Since Lemma C.20 implies that
$\mathscr{D}_{i}[\mathcal{K}\tau_{1}^{\mathcal{K},\mathscr{Z}}](x)\star\mathscr{D}_{i}[\mathcal{K}\tau_{2}^{\mathcal{K},\mathscr{Z}}](x)$
is $\oplus_{\alpha\geq-1+\delta}\mathscr{T}_{\alpha}$-valued, one can ignore
the contribution from $\bm{W}_{N}^{\mathscr{Z},+}(x)$ when the projection
$\mathfrak{p}_{<\delta}$ is applied. This observation proves the claimed
identities. ∎
###### Lemma C.27.
Let $N\in\mathbb{N}$. Then, one has
$\mathcal{R}^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}\bm{Y}_{N}^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}=F(W_{N}^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}})\\\
\times\Big{\\{}\sum_{\begin{subarray}{c}\tau_{1},\tau_{2}\in\mathcal{T}_{-},\lvert\tau_{1}\rvert_{+}+\lvert\tau_{2}\rvert_{+}>-2\end{subarray}}c(\tau_{1})c(\tau_{2})\nabla(K\ast\mathcal{R}^{\mathscr{Z}^{\mathrm{can},\varepsilon}}\tau_{1}^{\mathcal{K},\mathscr{Z}^{\mathrm{can},\varepsilon}})\cdot\nabla(K\ast\mathcal{R}^{\mathscr{Z}^{\mathrm{can},\varepsilon}}\tau_{2}^{\mathcal{K},\mathscr{Z}^{\mathrm{can},\varepsilon}})\\\
+2\nabla[K\ast
X^{\mathscr{Z}^{\mathrm{can},\varepsilon}}]\cdot\nabla[H_{N}\ast
X^{\mathscr{Z}^{\mathrm{can},\varepsilon}}]\Big{\\}}.$
###### Proof.
To simplify notation, we write
$\Pi_{x}^{\mathrm{BPHZ},\varepsilon}\vcentcolon=\Pi_{x}^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}$,
and similarly for related objects. One has
$\mathcal{R}^{\mathrm{BPHZ},\varepsilon}\bm{Y}_{N}^{\mathrm{BPHZ},\varepsilon}(x)=[\Pi_{x}^{\mathrm{BPHZ},\varepsilon}\bm{Y}_{N}^{\mathrm{BPHZ},\varepsilon}(x)](x).$
In view of Lemma C.20, Proposition C.25 and Lemma C.26, it suffices to show
$\displaystyle\Pi^{\mathrm{BPHZ},\varepsilon}_{x}[\mathscr{I}(\tau_{1})\cdots\mathscr{I}(\tau_{n})\mathscr{I}_{i}(\tau_{n+1})]$
$\displaystyle=\Pi^{\mathrm{can},\varepsilon}_{x}[\mathscr{I}(\tau_{1})\cdots\mathscr{I}(\tau_{n})\mathscr{I}_{i}(\tau_{n+1})],$
$\displaystyle\Pi^{\mathrm{BPHZ},\varepsilon}_{x}[\mathscr{I}(\tau_{1})\cdots\mathscr{I}(\tau_{n})\mathscr{I}_{i}(\tau_{n+1})\mathscr{I}_{i}(\tau_{n+2})]$
$\displaystyle=\Pi^{\mathrm{can},\varepsilon}_{x}[\mathscr{I}(\tau_{1})\cdots\mathscr{I}(\tau_{n})\mathscr{I}_{i}(\tau_{n+1})\mathscr{I}_{i}(\tau_{n+2})],$
(89)
for $\tau_{1},\ldots,\tau_{n},\tau_{n+1}\in\mathcal{T}_{-}$ and
$\tau_{n+2}\in\operatorname{Remove}(\mathfrak{B}(\mathcal{T}))$. We only prove
the second identity of (89). We set
$\bm{\tau}\vcentcolon=(F,0)^{0,0}_{\mathfrak{e}}\vcentcolon=\mathscr{I}(\tau_{1})\cdots\mathscr{I}(\tau_{n})\mathscr{I}_{i}(\tau_{n+1})\mathscr{I}_{i}(\tau_{n+2}),\quad(F_{j},0)^{0,0}_{\mathfrak{e}}\vcentcolon=\tau_{j}.$
The proof of (89) follows the argument in the proof of Lemma C.23. We claim
$\Delta^{\circ}_{-}\bm{\tau}=\bm{1}_{-}\otimes\bm{\tau}+\sum_{J\subseteq\\{1,\ldots,n\\}}\big{[}\mathscr{I}_{i}(\tau_{n+1})\prod_{j\in
J}\mathscr{I}(\tau_{j})\big{]}\otimes\big{[}\mathscr{I}_{i}(\tau_{n+2})\prod_{j\notin
J}\mathscr{I}(\tau_{j})\big{]}\\\
+\sum_{J\subseteq\\{1,\ldots,n\\}}\big{[}\mathscr{I}_{i}(\tau_{n+1})\mathscr{I}_{i}(\tau_{n+2})\prod_{j\in
J}\mathscr{I}(\tau_{j})\big{]}\otimes\prod_{j\notin J}\mathscr{I}(\tau_{j}).$
(90)
Indeed, let $\sigma\otimes\sigma^{\prime}$ be a basis element appearing in the
coproduct formula (73) for $\Delta^{\circ}_{-}\bm{\tau}$. If we set
$(G,0)^{\mathfrak{n},0}_{\mathfrak{e}}\vcentcolon=\sigma$ and
$\sigma_{k}\vcentcolon=(G\cap F_{k},0)^{\mathfrak{n},0}_{\mathfrak{e}}$, then, by
repeating the argument in the proof of Lemma C.23, the forest $\sigma_{k}$ is
either $\varnothing$, $\tau_{k}$ or $\operatorname{Remove}(\rho_{k};e_{k})$
for some $\rho_{k}$ and $e_{k}$.
* •
If $\sigma_{k}=\varnothing$, then $\sigma=0$ in $\mathscr{T}_{-}$ unless
$(\rho_{\bm{\tau}},\rho_{\tau_{k}})\notin E_{\sigma}$.
* •
If $\sigma_{k}=\tau_{k}$, then $\sigma^{\prime}$ has a leaf of type
$\mathscr{I}$ unless $(\rho_{\bm{\tau}},\rho_{\tau_{k}})\in E_{\sigma}$.
* •
If $\sigma_{k}=\operatorname{Remove}(\rho_{k};e_{k})$, then
$\lvert\sigma\rvert_{+}>0$ and hence $\sigma=0$ in $\mathscr{T}_{-}$.
Therefore, the claimed identity (90) is established. It remains to show
$g_{\varepsilon}^{-}\hat{\mathscr{A}}_{-}\big{[}\mathscr{I}_{i}(\tau_{n+1})\prod_{j\in
J}\mathscr{I}(\tau_{j})\big{]}=0,\quad
g_{\varepsilon}^{-}\hat{\mathscr{A}}_{-}\big{[}\mathscr{I}_{i}(\tau_{n+1})\mathscr{I}_{i}(\tau_{n+2})\prod_{j\in
J}\mathscr{I}(\tau_{j})\big{]}=0.$ (91)
Without loss of generality, we may suppose $J=\\{1,\ldots,n\\}$. We argue by
induction on $n$ and consider only the first identity of (91); the base case
$n=0$ is covered by Lemma C.22. Similarly to
(90), one can show
$\hat{\Delta}_{-}\bm{\tau}=\bm{1}_{-}\otimes\bm{\tau}+\sum_{J\subseteq\\{1,\ldots,n\\}}\big{[}\mathscr{I}_{i}(\tau_{n+1})\prod_{j\in
J}\mathscr{I}(\tau_{j})\big{]}\otimes\prod_{j\notin J}\mathscr{I}(\tau_{j}).$
In view of the recursive formula (76) and the induction hypothesis, it
remains to show
$\mathbb{E}[\mathbf{\Pi}^{\mathrm{can},\varepsilon}\bm{\tau}(0)]=0.$
However, this can be proved as in Lemma C.22, since $\bm{\tau}$ has an odd
number of edges $e$ such that $\mathfrak{t}(e)=\mathscr{I}$ and
$\lvert\mathfrak{e}(e)\rvert=1$. ∎
###### Proposition C.28.
Let $U$ be a bounded domain.
Suppose that $M$ and $\varepsilon$ are random variables (depending on $U$)
with values in $\mathbb{N}_{0}$ and $(0,\infty)$, respectively, such that
$\lvert W_{M}^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}\rvert\leq 2$ on $U$
and $\lVert
W_{M}^{\operatorname{AH},\varepsilon}-W_{M}^{\operatorname{AH}}\rVert_{L^{\infty}(U)}\leq
1 almost surely.
Then,
$\lvert\nabla W_{N}^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}\rvert^{2}+\Delta
W_{N}^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}+e^{-2W_{N}^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}}Y_{N}^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}=-\xi_{\varepsilon}+c_{\varepsilon}\quad\text{on
}U,$ (92)
where the constant $c_{\varepsilon}$ is defined in (88).
###### Proof.
By Proposition C.25, one has
$W_{N}^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}=W_{N}^{\mathscr{Z}^{\mathrm{can},\varepsilon}}$.
Therefore, by Lemma C.17 and Lemma C.27, the left-hand side of (92) is equal
to
$-\xi_{\varepsilon}-[\Delta(G_{N}-G)]\ast
c_{\varepsilon}=-\xi_{\varepsilon}+c_{\varepsilon}.\qed$
### C.6 Stochastic estimates and Besov regularity
Proposition C.14 gives pathwise estimates for the modelled distributions
$\bm{X}$, $\bm{W}_{N}$ and $\bm{Y}_{N}$. Here we give stochastic estimates for
$X$ and $Y_{N}$ in suitable Besov spaces. To this end, we will need a wavelet
characterization of weighted Besov spaces.
###### Theorem C.29 ([54], [71, Theorem 1.61]).
For any $k\in\mathbb{N}$, there exist
$\psi_{\mathfrak{f}},\psi_{\mathfrak{m}}\in C_{c}^{k}(\mathbb{R})$ with the
following properties.
* •
For $n\in\mathbb{N}_{0}$, if we denote by $V_{n}$ the subspace of
$L^{2}(\mathbb{R})$ spanned by
$\\{\psi_{\mathfrak{f}}(2^{n}\cdot-m)\nonscript\>|\nonscript\>\mathopen{}m\in\mathbb{Z}\\},$
then the inclusions $V_{0}\subseteq V_{1}\subseteq\cdots\subseteq
V_{n}\subseteq V_{n+1}\subseteq\cdots$ hold and $L^{2}(\mathbb{R})$ is the
closure of $\cup_{n\in\mathbb{N}_{0}}V_{n}$.
* •
The set
$\\{\psi_{\mathfrak{f}}(\cdot-m)\nonscript\>|\nonscript\>\mathopen{}m\in\mathbb{Z}\\}\cup\\{\psi_{\mathfrak{m}}(\cdot-m)\nonscript\>|\nonscript\>\mathopen{}m\in\mathbb{Z}\\}$
forms an orthonormal basis of $V_{1}$. Therefore, the set
$\\{\psi_{\mathfrak{f}}(\cdot-m)\nonscript\>|\nonscript\>\mathopen{}m\in\mathbb{Z}\\}\cup\\{2^{\frac{n}{2}}\psi_{\mathfrak{m}}(2^{n}\cdot-m)\nonscript\>|\nonscript\>\mathopen{}n\in\mathbb{N}_{0},m\in\mathbb{Z}\\}$
forms an orthonormal basis of $L^{2}(\mathbb{R})$.
* •
One has $\int_{\mathbb{R}}x^{l}\psi_{\mathfrak{m}}(x)\,\mathrm{d}x=0$ for
every $l\in\\{1,2,\ldots,k\\}$.
One can build an orthonormal basis of $L^{2}(\mathbb{R}^{d})$ as follows.
###### Proposition C.30 ([71, Proposition 1.53]).
Let $k\in\mathbb{N}$ and let $\psi_{\mathfrak{f}},\psi_{\mathfrak{m}}\in
C_{c}^{k}(\mathbb{R})$ be as in Theorem C.29. For $n\in\mathbb{N}_{0}$, we
define the sets of $d$-tuples by
$\mathfrak{G}^{n}\vcentcolon=\begin{cases}\\{(\mathfrak{f},\ldots,\mathfrak{f})\\}&\text{if
}n=0,\\\
\\{(G_{1},\ldots,G_{d})\in\\{\mathfrak{f},\mathfrak{m}\\}^{d}\nonscript\>|\nonscript\>\mathopen{}\exists
j\text{ s.t. }G_{j}=\mathfrak{m}\\}&\text{if }n\geq 1.\end{cases}$
For $n\in\mathbb{N}_{0}$, $G\in\mathfrak{G}^{n}$, $m\in\mathbb{Z}^{d}$ and
$x\in\mathbb{R}^{d}$, we set
$\Psi_{m}^{n,G}(x)\vcentcolon=2^{\frac{d\max\\{n-1,0\\}}{2}}\prod_{j=1}^{d}\psi_{G_{j}}(2^{\max\\{n-1,0\\}}x_{j}-m_{j}).$
(93)
The set
$\\{\Psi_{m}^{n,G}\nonscript\>|\nonscript\>\mathopen{}n\in\mathbb{N}_{0},G\in\mathfrak{G}^{n},m\in\mathbb{Z}^{d}\\}$
forms an orthonormal basis of $L^{2}(\mathbb{R}^{d})$.
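To make the tensor-product construction concrete, here is a small numerical sketch in $d=2$ using the Haar pair as a stand-in for the smooth compactly supported pair $\psi_{\mathfrak{f}},\psi_{\mathfrak{m}}$ of Theorem C.29 (Haar is only piecewise constant, so this is purely illustrative). It builds the functions $\Psi^{n,G}_{m}$ of (93) for the first few levels and verifies their orthonormality by quadrature, which is exact here because every function involved is constant on the grid cells.

```python
import itertools
import numpy as np

# Haar "father" (scaling) and "mother" wavelets on [0, 1).
# Illustrative stand-ins for psi_f, psi_m of Theorem C.29.
def psi_f(x):
    return np.where((0 <= x) & (x < 1), 1.0, 0.0)

def psi_m(x):
    return (np.where((0 <= x) & (x < 0.5), 1.0, 0.0)
            - np.where((0.5 <= x) & (x < 1), 1.0, 0.0))

PSI = {"f": psi_f, "m": psi_m}
d = 2

def Psi(n, G, m, x):
    """Tensor wavelet of (93):
    2^{d*max(n-1,0)/2} * prod_j psi_{G_j}(2^{max(n-1,0)} x_j - m_j)."""
    s = max(n - 1, 0)
    out = 2.0 ** (d * s / 2) * np.ones(x.shape[:-1])
    for j in range(d):
        out = out * PSI[G[j]](2.0 ** s * x[..., j] - m[j])
    return out

def G_set(n):
    # Level 0: all components of father type; levels >= 1: at least one mother.
    if n == 0:
        return [("f",) * d]
    return [G for G in itertools.product("fm", repeat=d) if "m" in G]

# Levels n = 0, 1, 2, with translations m keeping the support in [0, 1)^d.
basis = [(n, G, m)
         for n in range(3)
         for G in G_set(n)
         for m in itertools.product(range(2 ** max(n - 1, 0)), repeat=d)]

# Midpoint quadrature on a 256^2 grid: exact, since each Psi is constant
# on cells of side 1/256 (all breakpoints are multiples of 1/4).
N = 256
g = (np.arange(N) + 0.5) / N
x = np.stack(np.meshgrid(g, g, indexing="ij"), axis=-1)
vals = [Psi(n, G, m, x) for (n, G, m) in basis]
gram = np.array([[np.sum(a * b) / N ** d for b in vals] for a in vals])
print(np.allclose(gram, np.eye(len(basis))))  # → True: orthonormal family
```

In Proposition C.30, $n$ ranges over all of $\mathbb{N}_{0}$ and $m$ over all of $\mathbb{Z}^{d}$; the sketch truncates to the finitely many basis functions supported in the unit square.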
Expanding along the basis
$\\{\Psi_{m}^{n,G}\nonscript\>|\nonscript\>\mathopen{}n\in\mathbb{N}_{0},G\in\mathfrak{G}^{n},m\in\mathbb{Z}^{d}\\}$,
one can give a wavelet characterization of weighted Besov spaces.
###### Proposition C.31 ([71, Theorem 6.15]).
Let $p,q\in[1,\infty]$, $r\in\mathbb{R}$ and $\sigma\in(0,\infty)$. Suppose
$k>\max\Big{\\{}r,\frac{2d}{p}+\frac{d}{2}-r\Big{\\}}$
and let
$\\{\Psi_{m}^{n,G}\nonscript\>|\nonscript\>\mathopen{}n\in\mathbb{N}_{0},G\in\mathfrak{G}^{n},m\in\mathbb{Z}^{d}\\}$
be as in Proposition C.30. Then, there exists a constant $C\in(0,\infty)$ such
that for every $f\in B_{p,q}^{r,\sigma}(\mathbb{R}^{d})$ one has
$C^{-1}\lVert f\rVert_{B_{p,q}^{r,\sigma}(\mathbb{R}^{d})}\\\
\leq\Big{\lVert}\Big{(}2^{n(r-d/p)}\Big{(}\sum_{G\in\mathfrak{G}^{n},m\in\mathbb{Z}^{d}}w_{\sigma}(2^{-n}m)^{p}\lvert
2^{nd/2}\langle
f,\Psi^{n,G}_{m}\rangle\rvert^{p}\Big{)}^{1/p}\Big{)}_{n\in\mathbb{N}_{0}}\Big{\rVert}_{l^{q}(\mathbb{N}_{0})}\\\
\leq C\lVert f\rVert_{B_{p,q}^{r,\sigma}(\mathbb{R}^{d})}.$
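Proposition C.31 turns Besov regularity into a decay rate for the rescaled coefficients $2^{nd/2}\langle f,\Psi^{n,G}_{m}\rangle$. A quick one-dimensional illustration (again with the non-smooth Haar mother wavelet as a stand-in, and a hypothetical smooth test function): the single vanishing moment forces the $L^{2}$-normalized coefficients of a smooth $f$ to shrink by a factor of roughly $2^{-3/2}\approx 0.354$ per level.

```python
import numpy as np

# Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere.
# A non-smooth stand-in for the psi_m of Theorem C.29.
def psi(x):
    return (np.where((0 <= x) & (x < 0.5), 1.0, -1.0)
            * np.where((0 <= x) & (x < 1), 1.0, 0.0))

f = lambda x: np.exp(-x ** 2)  # a smooth test function (assumption)

def coeff(n, m, K=4096):
    """L^2-normalized coefficient <f, 2^{n/2} psi(2^n . - m)>, midpoint rule."""
    h = 2.0 ** -n
    x = m * h + (np.arange(K) + 0.5) * h / K
    return 2.0 ** (n / 2) * np.sum(f(x) * psi(2.0 ** n * x - m)) * h / K

# Coefficients on the dyadic intervals [1/2, 1/2 + 2^{-n}): the vanishing
# moment of psi against smooth f gives decay ~ 2^{-3n/2}, i.e. successive
# ratios close to 2^{-3/2} ~ 0.354.
coeffs = [abs(coeff(n, 2 ** (n - 1))) for n in range(2, 9)]
ratios = [b / a for a, b in zip(coeffs, coeffs[1:])]
print([round(float(r), 3) for r in ratios])
```

The faster this decay in $n$, the larger the exponent $r$ for which the wavelet norm in Proposition C.31 remains finite; Haar wavelets can of course only resolve limited smoothness, which is why the theorem requires $C_{c}^{k}$ wavelets with $k$ large.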
We fix $k\in\mathbb{N}$ such that $k>\frac{5d}{2}+2$, and we consider the
orthonormal basis $\\{\Psi^{n,G}_{m}\\}$ given by (93). We set
$\Psi\vcentcolon=\Psi^{0,(\mathfrak{f},\ldots,\mathfrak{f})}_{0}$.
###### Definition C.32.
Let
$\mathscr{Z}=(\Pi,\Gamma),\overline{\mathscr{Z}}=(\overline{\Pi},\overline{\Gamma})\in\mathscr{M}(\mathscr{T},K)$.
Given a compact set $\mathfrak{K}\subseteq\mathbb{R}^{d}$, we set
$\displaystyle\llbracket\mathscr{Z}\rrbracket_{\mathfrak{K}}$
$\displaystyle\vcentcolon=\sup_{\tau=(T,0)^{\mathfrak{n},0}_{\mathfrak{e}}\in\mathfrak{B}(\mathscr{T})\cap\mathscr{T}_{<0}}\sup_{n\in\mathbb{N}}\sup_{x\in\mathfrak{K}\cap
2^{-n}\mathbb{Z}^{d}}2^{n\lvert\tau\rvert_{+}}\lvert\langle\Pi_{x}\tau,2^{nd}\Psi(2^{n}(\cdot-x))\rangle_{\mathbb{R}^{d}}\rvert,$
$\displaystyle\llbracket\mathscr{Z};\overline{\mathscr{Z}}\rrbracket_{\mathfrak{K}}$
$\displaystyle\vcentcolon=\sup_{\tau=(T,0)^{\mathfrak{n},0}_{\mathfrak{e}}\in\mathfrak{B}(\mathscr{T})\cap\mathscr{T}_{<0}}\sup_{n\in\mathbb{N}}\sup_{x\in\mathfrak{K}\cap
2^{-n}\mathbb{Z}^{d}}2^{n\lvert\tau\rvert_{+}}\lvert\langle\Pi_{x}\tau-\overline{\Pi}_{x}\tau,2^{nd}\Psi(2^{n}(\cdot-x))\rangle_{\mathbb{R}^{d}}\rvert.$
###### Lemma C.33.
For each $\gamma\in\mathbb{R}$, there exist a constant $C\in(0,\infty)$ and an
integer $k\in\mathbb{N}$ such that the following estimates hold uniformly over
$\mathscr{Z},\overline{\mathscr{Z}}\in\mathscr{M}(\mathscr{T},K)$ and compact
sets $\mathfrak{K}\subseteq\mathbb{R}^{d}$:
$\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;\mathfrak{K}}\leq
C(1+\llbracket\mathscr{Z}\rrbracket_{\mathfrak{K}})^{k},\quad\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z};\overline{\mathscr{Z}}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;\mathfrak{K}}\leq
C(1+\llbracket\mathscr{Z}\rrbracket_{\mathfrak{K}})^{k}(\llbracket\mathscr{Z};\overline{\mathscr{Z}}\rrbracket_{\mathfrak{K}}+\llbracket\mathscr{Z};\overline{\mathscr{Z}}\rrbracket_{\mathfrak{K}}^{k}).$
###### Proof.
Using the recursive formula [13, Proposition 4.17], one can prove the claim as
in [48, Lemma 2.3]. ∎
###### Lemma C.34.
Let $L\in[1,\infty)$ and set $Q_{L}\vcentcolon=[-L,L]^{d}$. Let $p\in
2\mathbb{N}$. Under Assumption 3.10, if $p\delta^{\prime}>d+1$, one has
$\mathbb{E}[\llbracket\mathscr{Z}^{\mathrm{BPHZ}}\rrbracket_{Q_{L}}^{p}]\leq
C_{p}^{\mathrm{BPHZ}}L^{d},\quad\mathbb{E}[\llbracket\mathscr{Z}^{\mathrm{BPHZ}};\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}\rrbracket_{Q_{L}}^{p}]\leq\bm{\varepsilon}^{\mathrm{BPHZ}}_{p}(\varepsilon)L^{d}.$
###### Proof.
The proof is essentially a repetition of [48, Lemma 4.11]. Set
$\mathfrak{B}_{0}(\mathscr{T})\vcentcolon=\\{\tau=(T,0)^{\mathfrak{n},0}_{\mathfrak{e}}\in\mathfrak{B}(\mathscr{T})\nonscript\>|\nonscript\>\mathopen{}\lvert\tau\rvert_{+}<0\\}.$
If we write
$\Psi^{\lambda}_{x}\vcentcolon=\lambda^{-d}\Psi(\lambda^{-1}(\cdot-x))$, one
has
$\displaystyle\mathbb{E}[\llbracket\mathscr{Z}^{\mathrm{BPHZ}}\rrbracket_{Q_{L}}^{p}]$
$\displaystyle=\mathbb{E}[\sup_{\tau\in\mathfrak{B}_{0}(\mathscr{T})}\sup_{n\in\mathbb{N}}\sup_{x\in
Q_{L}\cap
2^{-n}\mathbb{Z}^{d}}2^{n\lvert\tau\rvert_{+}p}\lvert\langle\Pi_{x}\tau,\Psi_{x}^{2^{-n}}\rangle_{\mathbb{R}^{d}}\rvert^{p}]$
$\displaystyle\lesssim\sum_{\tau\in\mathfrak{B}_{0}(\mathscr{T})}\sum_{n\in\mathbb{N}}2^{nd}L^{d}2^{n\lvert\tau\rvert_{+}p}\mathbb{E}[\lvert\langle\Pi_{0}\tau,\Psi_{0}^{2^{-n}}\rangle_{\mathbb{R}^{d}}\rvert^{p}],$
where the stationarity of the noise $\xi$ and the estimate $\\#(Q_{L}\cap
2^{-n}\mathbb{Z}^{d})\lesssim 2^{nd}L^{d}$ are used. By Assumption 3.10,
$\mathbb{E}[\lvert\langle\Pi_{0}\tau,\Psi_{0}^{2^{-n}}\rangle_{\mathbb{R}^{d}}\rvert^{p}]\lesssim_{\psi_{\mathfrak{f}}}C_{p}^{\mathrm{BPHZ}}2^{-np(\lvert\tau\rvert_{+}+\delta^{\prime})}.$
Therefore,
$\mathbb{E}[\llbracket\mathscr{Z}^{\mathrm{BPHZ}}\rrbracket_{Q_{L}}^{p}]\lesssim
C^{\mathrm{BPHZ}}_{p}L^{d}\lvert\mathfrak{B}_{0}(\mathscr{T})\rvert(2^{p\delta^{\prime}-d}-1)^{-1}.$
The estimate for the second claimed inequality is similar. ∎
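For completeness, the geometric series behind the final bound can be displayed explicitly (a standard computation; the constant $C_{p}^{\mathrm{BPHZ}}$ and the factor $L^{d}$ are carried along unchanged): for each fixed $\tau\in\mathfrak{B}_{0}(\mathscr{T})$,

```latex
\sum_{n\in\mathbb{N}} 2^{nd}\,2^{n\lvert\tau\rvert_{+}p}\,
  2^{-np(\lvert\tau\rvert_{+}+\delta^{\prime})}
  = \sum_{n\in\mathbb{N}} 2^{-n(p\delta^{\prime}-d)}
  = \frac{1}{2^{p\delta^{\prime}-d}-1},
```

which is finite because $p\delta^{\prime}>d+1>d$; summing over the finitely many $\tau\in\mathfrak{B}_{0}(\mathscr{T})$ produces the factor $\lvert\mathfrak{B}_{0}(\mathscr{T})\rvert(2^{p\delta^{\prime}-d}-1)^{-1}$ in the displayed bound.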
###### Lemma C.35.
Let $\mathfrak{K}\subseteq\mathbb{R}^{d}$ be a compact set and
$\sigma\in(0,\infty)$. Then, there exists a constant $C\in(0,\infty)$ such
that for all $N\in\mathbb{N}$
$\lVert H_{N}\ast X\rVert_{C^{2}(\mathfrak{K})}\leq C2^{3N}\lVert
X\rVert_{\mathcal{C}^{-2,\sigma}(\mathbb{R}^{d})}.$
###### Proof.
Let $\phi\in C_{c}^{\infty}(\mathbb{R}^{d})$ be such that $\phi\equiv 1$ on
$\mathfrak{K}$. By Lemma A.4, one has
$\lVert H_{N}\ast X\rVert_{C^{2}(\mathfrak{K})}\lesssim\lVert\phi(H_{N}\ast
X)\rVert_{\mathcal{C}^{2}(\mathbb{R}^{d})}\lesssim_{\sigma}\lVert H_{N}\ast
X\rVert_{\mathcal{C}^{2,\sigma}(\mathbb{R}^{d})}.$
It remains to apply Corollary A.10. ∎
Recall from Definition B.6 that we have, for instance,
$\lvert\Xi\rvert_{+}=-2+\delta+\kappa$ for some
$\kappa\in(0,\delta^{\prime})$.
###### Proposition C.36.
Under Assumption 3.10, there exists a deterministic integer
$k=k(\delta_{-})\in\mathbb{N}$ such that for all $\sigma\in(0,\infty)$, $p\in
2\mathbb{N}$ with $p>(d+1)/\min\\{\delta^{\prime}-\kappa,\sigma\\}$ and
$N\in\mathbb{N}$ we have the following:
$\displaystyle\mathbb{E}[\lVert
X^{\mathscr{Z}^{\mathrm{BPHZ}}}\rVert_{B_{p,p}^{-2+\delta+\kappa/2,\sigma}(\mathbb{R}^{d})}^{p}]$
$\displaystyle\lesssim_{\delta,\delta^{\prime},\kappa,\sigma,p}C_{kp}^{\mathrm{BPHZ}},$
$\displaystyle\mathbb{E}[\lVert
Y^{\mathscr{Z}^{\mathrm{BPHZ}}}_{N}\rVert_{B_{p,p}^{-1+\delta+\kappa/2,\sigma}(\mathbb{R}^{d})}^{p}]$
$\displaystyle\lesssim_{\delta,\delta^{\prime},\kappa,\sigma,p}C_{kp}^{\mathrm{BPHZ}}2^{kpN}$
and
$\displaystyle\mathbb{E}[\lVert
X^{\mathscr{Z}^{\mathrm{BPHZ}}}-X^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}\rVert_{B_{p,p}^{-2+\delta+\kappa/2,\sigma}(\mathbb{R}^{d})}^{p}]$
$\displaystyle\lesssim_{\delta,\delta^{\prime},\kappa,\sigma,p}C_{kp}^{\mathrm{BPHZ}}[\bm{\varepsilon}^{\mathrm{BPHZ}}_{kp}(\varepsilon)+\bm{\varepsilon}^{\mathrm{BPHZ}}_{p}(\varepsilon)],$
$\displaystyle\mathbb{E}[\lVert
Y^{\mathscr{Z}^{\mathrm{BPHZ}}}_{N}-Y^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}_{N}\rVert_{B_{p,p}^{-1+\delta+\kappa/2,\sigma}(\mathbb{R}^{d})}^{p}]$
$\displaystyle\lesssim_{\delta,\delta^{\prime},\kappa,\sigma,p}C_{kp}^{\mathrm{BPHZ}}2^{kpN}[\bm{\varepsilon}^{\mathrm{BPHZ}}_{kp}(\varepsilon)+\bm{\varepsilon}^{\mathrm{BPHZ}}_{p}(\varepsilon)].$
###### Proof.
Set $\mathscr{Z}\vcentcolon=\mathscr{Z}^{\mathrm{BPHZ}}$; throughout the proof
we drop the superscript $\mathrm{BPHZ}$. The parameters $k,l,\gamma$ depend
only on $\mathscr{T}$ and may vary from line to line; we suppress the
dependence on $\mathscr{T},\delta,\delta_{-},p,\sigma$. Recall the
notation $\Psi^{n,G}_{m}$ from (93).
Suppose we are given a modelled distribution
$f\in\mathcal{D}^{\gamma}_{\alpha}(\mathscr{T},\mathscr{Z})$ with
$\alpha<0<\gamma$. We decompose
$\langle\mathcal{R}f,2^{nd/2}\Psi^{n,G}_{m}\rangle_{\mathbb{R}^{d}}\\\
=\langle\mathcal{R}f-\Pi_{2^{-n}m}f(2^{-n}m),2^{nd/2}\Psi^{n,G}_{m}\rangle_{\mathbb{R}^{d}}+\langle\Pi_{2^{-n}m}f(2^{-n}m),2^{nd/2}\Psi^{n,G}_{m}\rangle_{\mathbb{R}^{d}}.$
Using (77), the first term is bounded by a constant times
$2^{-n\gamma}\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert
f\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(2^{-n}m,l)}\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(2^{-n}m,l)}.$
To estimate the second term, consider the basis expansion
$f(x)=\sum_{\sigma}a_{\sigma}(x)\sigma.$
One has $\lvert
a_{\sigma}(2^{-n}m)\rvert\leq\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert
f\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(2^{-n}m,l)}$ and
$\lvert\langle\Pi_{2^{-n}m}\sigma,2^{nd/2}\Psi^{n,G}_{m}\rangle_{\mathbb{R}^{d}}\rvert\lesssim
2^{-n\alpha}\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(2^{-n}m,l)}.$
Therefore,
$\lvert\langle\mathcal{R}f,2^{nd/2}\Psi^{n,G}_{m}\rangle_{\mathbb{R}^{d}}\rvert\lesssim
2^{-n\alpha}\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert
f\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(2^{-n}m,l)}\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(2^{-n}m,l)}.$
(94)
Applying the estimate (94) to $\bm{X}$ and $\bm{Y}_{N}$, by Proposition C.31,
we get
$\lVert X\rVert_{B_{p,p}^{-2+\delta+\kappa/2,\sigma}(\mathbb{R}^{d})}^{p}\\\
\lesssim\sum_{n\in\mathbb{N}_{0}}2^{-n(d+\kappa/2)}\sum_{G\in\mathfrak{G}^{n},m\in\mathbb{Z}^{d}}w_{\sigma}(2^{-n}m)^{p}\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\bm{X}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{2;B(2^{-n}m,l)}^{p}\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{2;B(2^{-n}m,l)}^{p},
$\lVert
Y_{N}\rVert_{B_{p,p}^{-1+\delta+\kappa/2,\sigma}(\mathbb{R}^{d})}^{p}\\\
\lesssim\sum_{n\in\mathbb{N}_{0}}2^{-n(d+\kappa/2)}\sum_{G\in\mathfrak{G}^{n},m\in\mathbb{Z}^{d}}w_{\sigma}(2^{-n}m)^{p}\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\bm{Y}_{N}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\delta;B(2^{-n}m,l)}^{p}\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\delta;B(2^{-n}m,l)}^{p}.
To estimate $\lVert
X\rVert_{B_{p,p}^{-2+\delta+\kappa/2,\sigma}(\mathbb{R}^{d})}$, we use
Proposition C.14 and stationarity to obtain
$\mathbb{E}[\lVert
X\rVert_{B_{p,p}^{-2+\delta+\kappa/2,\sigma}(\mathbb{R}^{d})}^{p}]\lesssim\sum_{n\in\mathbb{N}_{0}}2^{-n(d+(\delta-\delta_{-}))}\sum_{G\in\mathfrak{G}^{n},m\in\mathbb{Z}^{d}}w_{\sigma}(2^{-n}m)^{p}\mathbb{E}[(1+\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(0,l)})^{kp}].$
Since
$\sum_{m\in\mathbb{Z}^{d}}w_{\sigma}(2^{-n}m)^{p}\lesssim\int_{\mathbb{R}^{d}}(1+\lvert
2^{-n}x\rvert^{2})^{-\frac{p\sigma}{2}}dx=2^{nd}\lVert
w_{\sigma}\rVert_{L^{p}(\mathbb{R}^{d})}^{p},$
and since, by Lemma C.33 and Lemma C.34,
$\mathbb{E}[(1+\lvert\kern-1.07639pt\lvert\kern-1.07639pt\lvert\mathscr{Z}\rvert\kern-1.07639pt\rvert\kern-1.07639pt\rvert_{\gamma;B(0,l)})^{kp}]\lesssim
C_{k^{\prime}p}^{\mathrm{BPHZ}}$
for some $k^{\prime}\in\mathbb{N}$, we conclude
$\mathbb{E}[\lVert
X\rVert_{B_{p,p}^{-2+\delta+\kappa/2,\sigma}(\mathbb{R}^{d})}^{p}]\lesssim
C_{kp}^{\mathrm{BPHZ}}.$
The estimate for $Y_{N}$ follows in the same way, additionally using Lemma
C.35. The estimates for the differences are proved analogously, using [34, (3.4)]. ∎
###### Corollary C.37.
Under Assumption 3.10, let $\sigma\in(0,\infty)$, $p\in[1,\infty)$ and
$N\in\mathbb{N}$. Then, as $\varepsilon\downarrow 0$,
$(X^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}})_{\varepsilon\in(0,1)}$
converges in $L^{p}(\mathbb{P})$ to $X^{\mathscr{Z}^{\mathrm{BPHZ}}}$ in
$\mathcal{C}^{-2+\delta,\sigma}(\mathbb{R}^{d})$, and
$(Y^{\mathscr{Z}^{\mathrm{BPHZ},\varepsilon}}_{N})_{\varepsilon\in(0,1)}$
converges in $L^{p}(\mathbb{P})$ to $Y_{N}^{\mathscr{Z}^{\mathrm{BPHZ}}}$ in
$\mathcal{C}^{-1+\delta,\sigma}(\mathbb{R}^{d})$. Furthermore, there exists a
deterministic $k=k(\delta)\in\mathbb{N}$, independent of $\sigma$ and $N$,
such that
$\sup_{N\in\mathbb{N}}2^{-kN}\lVert
Y_{N}^{\mathscr{Z}^{\mathrm{BPHZ}}}\rVert_{\mathcal{C}^{-1+\delta,\sigma}(\mathbb{R}^{d})}\in
L^{p}(\mathbb{P}).$ (95)
###### Proof.
The convergence claims follow from Proposition C.36 together with Besov
embeddings. To show (95), let $q\in 2\mathbb{N}$ be such that
$d/q<\kappa/2$ and $q>p$. By Proposition C.36 and the Besov embedding, for
some $k^{\prime}\in\mathbb{N}$,
$\mathbb{E}[\lVert
Y_{N}^{\mathscr{Z}^{\mathrm{BPHZ}}}\rVert_{\mathcal{C}^{-1+\delta,\sigma}(\mathbb{R}^{d})}^{q}]\lesssim_{q,\delta,\sigma}\mathbb{E}[\lVert
Y_{N}^{\mathscr{Z}^{\mathrm{BPHZ}}}\rVert_{B^{-1+\delta+d/q,\sigma}_{q,q}(\mathbb{R}^{d})}^{q}]\lesssim_{q,\delta,\kappa,\sigma}2^{qk^{\prime}N}.$
Therefore, if $k>k^{\prime}$,
$\sum_{N\in\mathbb{N}}2^{-kqN}\mathbb{E}[\lVert
Y_{N}^{\mathscr{Z}^{\mathrm{BPHZ}}}\rVert_{\mathcal{C}^{-1+\delta,\sigma}(\mathbb{R}^{d})}^{q}]<\infty.\qed$
## References
* [1] M. A. Akcoglu and U. Krengel “Ergodic theorems for superadditive processes” In _Journal für die reine und angewandte Mathematik_ 1981.323 De Gruyter, 1981, pp. 53–67
* [2] Romain Allez and Khalil Chouk “The continuous Anderson hamiltonian in dimension two” In _arXiv pre-print server_ , 2015 arXiv: https://arxiv.org/abs/1511.02718
* [3] P. W. Anderson “Absence of Diffusion in Certain Random Lattices” In _Phys. Rev._ 109 American Physical Society, 1958, pp. 1492–1505 DOI: 10.1103/PhysRev.109.1492
* [4] Hajer Bahouri, Jean-Yves Chemin and Raphaël Danchin “Fourier Analysis and Nonlinear Partial Differential Equations” Berlin, Heidelberg: Springer Berlin Heidelberg, 2011 DOI: 10.1007/978-3-642-16830-7_3
* [5] I. Bailleul, N. V. Dang and A. Mouzard “Analysis of the Anderson operator” In _arXiv pre-print server_ , 2022 arXiv: https://arxiv.org/abs/2201.04705
* [6] I. Bailleul and M. Hoshino “A tourist’s guide to regularity structures” In _arXiv pre-print server_ , 2020 arXiv: https://arxiv.org/abs/2006.03524
* [7] Ismael Bailleul and Masato Hoshino “Paracontrolled calculus and regularity structures I” In _J. Math. Soc. Japan_ 73.2, 2021, pp. 553–595 DOI: 10.2969/jmsj/81878187
* [8] Ismael Bailleul and Masato Hoshino “Paracontrolled calculus and regularity structures II” In _J. Éc. polytech. Math._ 8, 2021, pp. 1275–1328 DOI: 10.5802/jep.172
* [9] Ismaël Bailleul and Frédéric Bernicot “High order paracontrolled calculus” In _Forum Math. Sigma_ 7, 2019, Paper No. e44, 94 pp. DOI: 10.1017/fms.2019.44
* [10] Lorenzo Bertini and Giambattista Giacomin “Stochastic Burgers and KPZ equations from particle systems” In _Comm. Math. Phys._ 183.3, 1997, pp. 571–607 DOI: 10.1007/s002200050044
* [11] Pulin Kumar Bhattacharyya “Distributions” Generalized functions with applications in Sobolev spaces, De Gruyter Textbook Walter de Gruyter & Co., Berlin, 2012, pp. xxxviii+833 DOI: 10.1515/9783110269291
* [12] E Brezin and G Parisi “Exponential tail of the electronic density of levels in a random potential” In _Journal of Physics C: Solid State Physics_ 13.12 IOP Publishing, 1980, pp. L307–L310 DOI: 10.1088/0022-3719/13/12/005
* [13] Y. Bruned, M. Hairer and L. Zambotti “Algebraic renormalisation of regularity structures” In _Inventiones mathematicae_ 215.3, 2019, pp. 1039–1156 DOI: 10.1007/s00222-018-0841-x
* [14] Yvain Bruned, Ajay Chandra, Ilya Chevyrev and Martin Hairer “Renormalising SPDEs in regularity structures” In _Journal of the European Mathematical Society_ 23.3, 2020, pp. 869–947 DOI: 10.4171/jems/1025
* [15] S. Cambronero and H. P. McKean “The ground state eigenvalue of Hill’s equation with white noise potential” In _Communications on Pure and Applied Mathematics_ 52.10, 1999, pp. 1277–1294 DOI: 10.1002/(SICI)1097-0312(199910)52:10<1277::AID-CPA5>3.0.CO;2-L
* [16] Santiago Cambronero, Brian Rider and José Ramírez “On the shape of the ground state eigenvalue density of a random Hill’s equation” In _Communications on Pure and Applied Mathematics_ 59.7, 2006, pp. 935–976 DOI: 10.1002/cpa.20104
* [17] J.. Cardy “Electron localisation in disordered systems and classical solutions in Ginzburg-Landau field theory” In _J. Phys. C_ 11.8, 1978, pp. L321–L327 DOI: 10.1088/0022-3719/11/8/006
* [18] R. Carmona and J. Lacroix “Spectral Theory of Random Schrödinger Operators”, Probability and Its Applications Birkhäuser Boston, 1990 URL: https://books.google.co.jp/books?id=cYDkBwAAQBAJ
* [19] Ajay Chandra and Martin Hairer “An analytic BPHZ theorem for regularity structures”, 2018 arXiv: https://arxiv.org/abs/1612.08138
* [20] Khalil Chouk and Willem Zuijlen “Asymptotics of the eigenvalues of the Anderson Hamiltonian with white noise potential in two dimensions” In _Ann. Probab._ 49.4, 2021, pp. 1917–1964 DOI: 10.1214/20-aop1497
* [21] E. B. Davies “Spectral Theory and Differential Operators”, Cambridge Studies in Advanced Mathematics Cambridge University Press, 1995 DOI: 10.1017/CBO9780511623721
* [22] Eleonora Di Nezza, Giampiero Palatucci and Enrico Valdinoci “Hitchhiker’s guide to the fractional Sobolev spaces” In _Bull. Sci. Math._ 136.5, 2012, pp. 521–573 DOI: 10.1016/j.bulsci.2011.12.004
* [23] Shin-Ichi Doi, Akira Iwatsuka and Takuya Mine “The uniqueness of the integrated density of states for the Schrödinger operators with magnetic fields” In _Mathematische Zeitschrift_ 237.2, 2001, pp. 335–371 DOI: 10.1007/pl00004872
* [24] Laure Dumaz and Cyril Labbé “Anderson localization for the $1$-d Schrödinger operator with white noise potential”, 2022 arXiv:2212.04862 [math.PR]
* [25] Laure Dumaz and Cyril Labbé “Localization crossover for the continuous Anderson Hamiltonian in $1$-d”, 2021 arXiv:2102.09316 [math.PR]
* [26] Laure Dumaz and Cyril Labbé “Localization of the continuous Anderson Hamiltonian in 1-D” In _Probab. Theory Related Fields_ 176.1, 2020, pp. 353–419 DOI: 10.1007/s00440-019-00920-6
* [27] Laure Dumaz and Cyril Labbé “The delocalized phase of the Anderson Hamiltonian in 1-D” In _The Annals of Probability_ 51.3, 2023, pp. 805–839 DOI: 10.1214/22-AOP1591
* [28] Lawrence Craig Evans and Ronald F Gariepy “Measure Theory and Fine Properties of Functions, Revised Edition” Oakville: CRC Press LLC, 2015
* [29] Masatoshi Fukushima and Shintaro Nakao “On spectra of the Schrödinger operator with a white Gaussian noise potential” In _Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete_ 37.3, 1977, pp. 267–274 DOI: 10.1007/BF00537493
* [30] Máté Gerencsér and Martin Hairer “Boundary renormalisation of SPDEs”, 2021 arXiv:2110.03656 [math.PR]
* [31] Promit Ghosal and Jaeyun Yi “Fractal geometry of the PAM in 2D and 3D with white noise potential”, 2023 arXiv:2303.16063 [math.PR]
* [32] M. Gubinelli, B. Ugurcan and I. Zachhuber “Semilinear evolution equations for the Anderson Hamiltonian in two and three dimensions” In _Stoch. Partial Differ. Equ. Anal. Comput._ 8.1, 2020, pp. 82–149 DOI: 10.1007/s40072-019-00143-9
* [33] Massimiliano Gubinelli, Peter Imkeller and Nicolas Perkowski “Paracontrolled distributions and singular PDEs” In _Forum Math. Pi_ 3 Cambridge University Press, 2015 DOI: 10.1017/fmp.2015.2
* [34] Martin Hairer “A theory of regularity structures” In _Invent. Math._ 198.2, 2014, pp. 269–504 DOI: 10.1007/s00222-014-0505-4
* [35] Martin Hairer and Cyril Labbé “A simple construction of the continuum parabolic Anderson model on $\mathbf{R}^{2}$” In _Electron. Commun. Probab._ 20, 2015, 11 pp. DOI: 10.1214/ECP.v20-4038
* [36] Martin Hairer and Cyril Labbé “The reconstruction theorem in Besov spaces” In _Journal of Functional Analysis_ 273.8, 2017, pp. 2578–2618 DOI: 10.1016/j.jfa.2017.07.002
* [37] Martin Hairer and Étienne Pardoux “A Wong-Zakai theorem for stochastic PDEs” In _J. Math. Soc. Japan_ 67.4, 2015, pp. 1551–1604 DOI: 10.2969/jmsj/06741551
* [38] Yueh-Sheng Hsu and Cyril Labbé “Asymptotic of the smallest eigenvalues of the continuous Anderson Hamiltonian in $d\leq 3$” In _Stochastics and Partial Differential Equations: Analysis and Computations_ , 2022 DOI: 10.1007/s40072-022-00252-y
* [39] Tuomas Hytönen, Jan van Neerven, Mark Veraar and Lutz Weis “Analysis in Banach spaces. Vol. I. Martingales and Littlewood-Paley theory” 63, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics Springer, Cham, 2016, pp. xvi+614
* [40] Aukosh Jagannath and Nicolas Perkowski “A simple construction of the dynamical $\Phi^{4}_{3}$ model”, 2021 arXiv:2108.13335 [math.PR]
* [41] Tosio Kato “Perturbation Theory for Linear Operators” Berlin, Heidelberg: Springer Berlin Heidelberg, 1995 DOI: 10.1007/978-3-642-66282-9_10
* [42] W Kirsch and F Martinelli “On the density of states of Schrodinger operators with a random potential” In _Journal of Physics A: Mathematical and General_ 15.7 IOP Publishing, 1982, pp. 2139–2156 DOI: 10.1088/0305-4470/15/7/025
* [43] W. Kirsch and B. Metzger “The integrated density of states for random Schrödinger operators” In _Spectral theory and mathematical physics: A festschrift in honor of Barry Simon’s 60th birthday_ , 2007, pp. 649–696
* [44] W. König “The Parabolic Anderson Model: Random Walk in Random Potential”, Pathways in Mathematics Springer International Publishing, 2016
* [45] Wolfgang König, Nicolas Perkowski and Willem Zuijlen “Longtime asymptotics of the two-dimensional parabolic Anderson model with white-noise potential” In _Annales de l’Institut Henri Poincaré, Probabilités et Statistiques_ 58.3, 2022, pp. 1351–1384 DOI: 10.1214/21-AIHP1215
* [46] Wolfgang König, Nicolas Perkowski and Willem Zuijlen “Longtime asymptotics of the two-dimensional parabolic Anderson model with white-noise potential” In _Ann. Inst. Henri Poincaré Probab. Stat._ 58.3, 2022, pp. 1351–1384 DOI: 10.1214/21-aihp1215
* [47] Kazuhiro Kuwae and Takashi Shioya “Convergence of spectral structures: a functional analytic theory and its applications to spectral geometry” In _Communications in Analysis and Geometry_ 11.4, 2003, pp. 599–673 DOI: 10.4310/cag.2003.v11.n4.a1
* [48] Cyril Labbé “The continuous Anderson hamiltonian in $d\leq 3$” In _J. Funct. Anal._ 277.9, 2019, pp. 3187–3235 DOI: 10.1016/j.jfa.2019.05.027
* [49] Terry J. Lyons “Differential equations driven by rough signals.” In _Revista Matemática Iberoamericana_ 14.2, 1998, pp. 215–310 URL: http://eudml.org/doc/39555
* [50] Jürgen Marschall “The trace of Sobolev-Slobodeckij spaces on Lipschitz domains” In _manuscripta mathematica_ 58.1-2, 1987, pp. 47–65 DOI: 10.1007/bf01169082
* [51] Jörg Martin “Refinements of the Solution Theory for Singular SPDEs”, 2018
* [52] Toyomu Matsuda “Integrated density of states of the Anderson Hamiltonian with two-dimensional white noise” In _Stochastic Process. Appl._ 153, 2022, pp. 91–127 DOI: 10.1016/j.spa.2022.07.007
* [53] H. P. McKean “A limit law for the ground state of Hill’s equation” In _Journal of Statistical Physics_ 74.5, 1994, pp. 1227–1232 DOI: 10.1007/BF02188225
* [54] Yves Meyer “Wavelets and Operators” 1, Cambridge Studies in Advanced Mathematics Cambridge University Press, 1993 DOI: 10.1017/CBO9780511623820
* [55] Norman G. Meyers and James Serrin “$H=W$” In _Proc. Nat. Acad. Sci. U.S.A._ 51, 1964, pp. 1055–1056 DOI: 10.1073/pnas.51.6.1055
* [56] Jean-Christophe Mourrat and Hendrik Weber “Global well-posedness of the dynamic $\Phi^{4}$ model in the plane” In _The Annals of Probability_ 45.4, 2017, pp. 2398–2476 URL: https://doi.org/10.1214/16-AOP1116
* [57] Antoine Mouzard “Weyl law for the Anderson Hamiltonian on a two-dimensional manifold” In _Ann. Inst. Henri Poincaré Probab. Stat._ 58.3, 2022, pp. 1385–1425 DOI: 10.1214/21-aihp1216
* [58] Tosinobu Muramatu “On Besov spaces of functions defined in general regions” In _Publ. Res. Inst. Math. Sci._ 6, 1970/71, pp. 515–543 DOI: 10.2977/prims/1195193919
* [59] David J. Prömel and Mathias Trabs “Rough differential equations driven by signals in Besov spaces” In _J. Differential Equations_ 260.6, 2016, pp. 5202–5249
* [60] Michael Reed and Barry Simon “Methods of modern mathematical physics. I. Functional analysis” Academic Press, New York-London, 1972, pp. xvii+325
* [61] Tommaso C. Rosati “Synchronization for KPZ” In _Stoch. Dyn._ 22.4, 2022, pp. Paper No. 225001046 DOI: 10.1142/S0219493722500101
* [62] Yu.. Safarov and N.. Filonov “Asymptotic estimates of the difference between the Dirichlet and Neumann counting functions” In _Functional Analysis and Its Applications_ 44.4 Functional AnalysisIts Applications, 2010, pp. 286–294 DOI: 10.1007/s10688-010-0039-5
* [63] Y. Sawano “Theory of Besov Spaces”, Developments in Mathematics Springer Singapore, 2018 URL: https://books.google.de/books?id=lw92DwAAQBAJ
* [64] Winfried Sickel and Tino Ullrich “Tensor products of Sobolev-Besov spaces and applications to approximation from the hyperbolic cross” In _J. Approx. Theory_ 161.2, 2009, pp. 748–786 DOI: 10.1016/j.jat.2009.01.001
* [65] Barry Simon “Lifschitz tails for the Anderson model” In _J. Stat. Phys._ 38.1, 1985, pp. 65–76 DOI: 10.1007/BF01017848
* [66] Barry Simon “Semiclassical analysis of low lying eigenvalues. I. Nondegenerate minima: asymptotic expansions” In _Ann. Inst. H. Poincaré Sect. A (N.S.)_ 38.3, 1983, pp. 295–308
* [67] Elias M. Stein “Singular Integrals and Differentiability Properties of Functions (PMS-30)” Princeton University Press, 1970 URL: http://www.jstor.org/stable/j.ctt1bpmb07
* [68] Hans Triebel “Function spaces in Lipschitz domains and on Lipschitz manifolds. Characteristic functions as pointwise multipliers.” In _Revista Matemática Complutense_ 15.2, 2002, pp. 475–524 URL: http://eudml.org/doc/44356
* [69] Hans Triebel “Interpolation theory, function spaces, differential operators” 18, North-Holland Mathematical Library North-Holland Publishing Co., Amsterdam-New York, 1978, pp. 528
* [70] Hans Triebel “Theory of Function Spaces” Basel: Springer Basel, 1983 DOI: 10.1007/978-3-0346-0416-1_1
* [71] Hans Triebel “Theory of Function Spaces III” Basel: Birkhäuser Basel, 2006 DOI: 10.1007/3-7643-7582-5_1
* [72] B.E. Ugurcan “Anderson Hamiltonian and associated Nonlinear Stochastic Wave and Schrödinger equations in the full space” https://arxiv.org/abs/2208.09352arxiv.org/abs/2208.09352, 2022
* [73] Immanuel Zachhuber “Finite speed of propagation for the 2- and 3-dimensional multiplicative stochastic wave equation”, 2021 arXiv:2110.08086 [math.AP]
|
# Synthetic space bricks from lunar and martian regolith via sintering
Nitin Gupta Department of Mechanical Engineering, Indian Institute of
Science, Bangalore Vineet Dawara Department of Mechanical Engineering, Indian
Institute of Science, Bangalore Aloke Kumar Department of Mechanical
Engineering, Indian Institute of Science, Bangalore Koushik Viswanathan
<EMAIL_ADDRESS>Department of Mechanical Engineering, Indian Institute of
Science, Bangalore
###### Abstract
The prospect of establishing extra-terrestrial habitats using in situ resource
utilization (ISRU) constitutes a long-term goal of multiple space agencies
around the world. In this work, we investigate sintering as a potential route
for making building blocks—termed synthetic space bricks—using _in situ_
regolith material. By systematically investigating sintering parameters using
a numerical lattice model, coupled with experimental observations and post
sintering characterization, we propose a process protocol for two lunar—lunar
highland simulant (LHS) and lunar mare dust simulant (LMS)—and one martian
(martian global simulant, MGS) simulants. The resulting bricks demonstrate
compressive strengths of up to 45 MPa under uniaxial loading, depending on the
simulant used. These strengths are much greater than those typically mandated
for structural applications under reduced gravity. We infer microscale
sintering mechanisms at the individual particle level indirectly, by measuring
temporal evolution exponents of sample dimensions during sintering. For all
three simulants, volume diffusion appears to be the primary mechanism for
particle coalescence. Our results clearly make a strong case for the use of
sintering as a potentially scalable method for consolidating regolith into
brick-like structures for load-bearing applications in extra-terrestrial
settings.
#### Keywords:
Sintering; Space habitation; Extra-terrestrial regolith; Synthetic space
bricks
## 1 Introduction
The prospect of establishing extra-terrestrial habitats continues to remain a
long-term goal of multiple space agencies around the world [1]. This field has
traditionally encompassed a wide variety of disciplines ranging from
biological effects on humans [2] to geology and civil engineering [3]. The
possibility of developing extra-terrestrial structures that can sustain both
extreme environments and be built with minimal raw materials has drawn
significant recent interest [4, 5]. The obvious option of exploiting intrinsic
features on the martian or lunar surface, for instance lava tubes [6, 7], is
fraught with fundamental difficulties such as potential structural collapse
and the occurrence of surface fissures. The alternative option of constructing
settlements with material sourced from earth is expected to incur significant
costs, rendering it practically unviable.
A recently emerging research paradigm— _in situ_ resource utilization or
ISRU—attempts to address this problem by developing technologies for
exploiting resources, primarily regolith and solar energy, available on the
martian or lunar surface [8, 9, 10]. To help seed research in this direction,
several artificial regolith simulants have been developed to mimic martian or
lunar soil corresponding to various known locations on mars and the moon,
respectively, based on corresponding spectroscopy and/or particle morphology
data [11, 12, 13, 14, 15, 16, 17, 18, 19]. These regolith simulants can be
consolidated to form load bearing structures for habitat applications using a
range of strategies, from exploiting biological routes [20, 21, 22, 23] to
employing concrete-mimicking processes [24, 25, 26] and polymer binders with
sintering [27, 28, 29].
Among these consolidation techniques, sintering-based routes have so far
yielded the most promising results when final part strength is the primary
requirement. Various sintering protocols that have hitherto been proposed
include microwave [30, 31, 32], spark plasma [33], cold-sintering [34] and
solar radiation [35, 36, 37]. The effects of various process parameters on
resulting consolidate strength have also been systematically investigated at
the macroscale, including the role of sintering temperature [38, 39], porosity
[40, 41], initial mineral compositions [42, 43] and the presence of glass-like
phases [44]. A summary of these sintering/consolidation techniques and the
resulting average compressive strength $(\sigma_{comp,avg})$ is presented in
table 1.
Consolidation technique | Temperature (∘C) | Simulants used | $\sigma_{comp,avg}$ (MPa) | Ref.
---|---|---|---|---
Sintering | 1000-1100 | HUST-1, CAS-1 | 68 | [28]
Sintering & melting | 1000-1300 | JSC-2A | 31 | [29]
Hybrid microwave sintering | 1075-1125 | FJS-1 | 45 | [30]
Microwave sintering | 1200-1500 | MLS-1, JSC-1 | - | [31]
Microwave sintering | 1120 | KLS-1 | 37 | [32]
Spark plasma sintering | 1050 | FJS-1 | 220 | [33]
Cold sintering | 250 | MGS-1 | 45 | [34]
Sintering | 1100-1200 | MGS-1, LMS-1 | 22-25 | [38]
Digital light processing & sintering | 1100-1150 | CLRS-2 | 56 - 312 | [39]
Sintering | 1200 | JSC-1A, JSC-1AF, and JSC-1AC | 103 - 232 | [40]
Sintering | 1120 | JSC-1A | 84.6 - 218.8 | [41]
Sintering | 1070-1125 | JSC-1, DNA, MLS-1 | 98 - 152 | [45, 42]
Additive manufacturing & sintering | 1200 | LHS-1 | 20 | [43]
Laser assisted sintering | 1400 | HIT-LRS | 68 | [44]
Solar sintering and 3D printing | - | JSC-2A | 2.49 | [46]
Extrusion & sintering | 1050 | JSC-1A | 20 | [47]
Additive manufacturing & sintering | 1000 | EAC-1 | 5.4 | [48]
Brazing of SiC | 1400 | LRS, MRS | 21 - 27 | [44]
Laser assisted sintering | 1000-1100 | Quartz sand | $\sigma_{tensile}$ = 9.28 | [49]
Electric current assisted sintering (ECAS) | 700 | JSC-1A | 50 | [50]
Table 1: List of various sintering strategies used to consolidate regolith
simulants, with reported average mechanical strength under uniaxial
compression.
On the microscale, mechanisms governing particle consolidation during the
sintering process are expected to be independent of the energy source utilized
and largely determined by the thermal fields thereby produced [51]. In this
context, the specialized sintering techniques introduced allow for varying
degrees of thermal control. Yet they cannot compare with furnace-based
sintering, perhaps one of the oldest methods for producing ceramics. Here
spatially uniform temperatures are _a priori_ the norm, given the lack of
directionality of a focused energy source. Consequently, parts produced via
furnace sintering are expected to have uniform, isotropic properties; this
process is also inherently scalable, as has been amply demonstrated on earth.
An additional advantage of furnace sintering is that it enables more
systematic study of the role of microscopic mechanisms operative at much
smaller length scales without intervening spatial inhomogeneity effects.
The primary objectives of the present work are threefold—the first is to
develop a scalable experimental protocol for making consolidated regolith-
based bricks on both the lunar and martian surfaces by using a polymer-based
binder and furnace sintering. To this end, we work with two lunar
simulants—lunar highland simulant (LHS) and lunar mare dust simulant (LMS)—and
one martian simulant (martian global simulant, MGS). The second is to evaluate
the microscopic mechanisms based on the kinetics of the sintering process by
taking recourse to classical results in the ceramics literature. The third and
final task is to correlate the established processing protocol and operative
microscopic mechanisms with final part strength. Our manuscript is organized
as follows. We first discuss the strategy of brick manufacturing in Sec. 2.1,
and corresponding post-consolidation characterization (Sec. 2.2). A simple
numerical model is presented to determine heating parameters needed for
ensuring spatial homogeneity (Sec. 2.3). The primary results are described in
Sec.3, beginning with numerical estimation of minimum soaking time $t_{s}$
required (Sec. 3.1), followed by evaluation of mechanical properties (Sec.
3.2) and sintering mechanisms (Sec. 3.4). We present a discussion of our
results and provide concluding results in Sec. 4.
## 2 Materials and Methods
### 2.1 Protocol for single brick production via sintering
We use three types of soil simulants for our experiments—MGS (Martian global
simulant), LHS (Lunar highland simulant), and LMS (Lunar mare dust simulant),
procured from Exolith lab, Florida, USA [14, 15]; details of these simulants
are provided in Fig. S2 and Table 1 of supplementary material. The
experimental procedure for producing sintered parts is summarized in Fig. 1. A
PVA solution was prepared by mixing 5g of PVA (polyvinyl alcohol) powder
(molecular weight 115,000, from Loba Chemie Pvt. Ltd.) with 100 ml of DI water
and stirring at 90∘C for 1 hr, followed by stirring at room temperature for
10-12 hr. This solution (15 ml) was then thoroughly mixed with 100 g of simulant
and the mixture die cast into a cubical block of $\sim
18\times 18\times 18$ mm3 using a hydraulic press with 280-300 MPa compaction
pressure. The resulting compacted sample weighed around 14 grams; it was then
heated in a muffle furnace (Delta Power systems) for sintering.
Stages of the sintering cycle are shown in the form of a temperature vs. time
curve, see Fig. 1 (bottom left). In the first step, as-cast samples were
heated for 1 hour at 600∘C with a slow temperature ramp-up and ramp-down. This
stage removes any volatile matter from the bricks, along with the PVA binder
used to make the compacted part. The furnace was then brought back to room
temperature at the time marked B. The dimensions of the sample were measured
at this stage and are henceforth referred to as the initial dimensions,
denoted $L_{i}$. Correspondingly, the sample weight at the end of this stage
typically reduced by $\sim$13-14%. At time B, the sample is referred to as a
green part.
The green part was then subjected to the next heating stage (C to D) up to a peak
temperature of $T_{a}=1150^{\circ}$C with heating rate $c=5^{\circ}$C/min for
time $t_{f}$ (point C to E). Following this, the samples were soaked for
different times ($t_{s}$) ranging from 10 minutes to 480 minutes (point E to
F), and then cooled (point F to D) at a rate of 4∘C/min to obtain final brown
parts, which we term ‘synthetic space bricks’. The final dimensions $L_{f}$
were measured post recovery to room temperature; the sample weight was
typically found to reduce by $\sim$4% at this stage, compared to the green
part.
Figure 1: Schematic representation of the sintering process. A 5% PVA solution
(5 g PVA in 100 ml of DI water) is mixed with extra-terrestrial soil simulant
in the ratio of 0.15:1 w/w. The mixture is poured into a die and compacted using a
hydraulic press, followed by sintering in the furnace. The sintering cycle
represents the preparation of the green part (heating to 600∘C for 1 hr)
followed by the brown part (heating to $T_{a}$ for $t_{s}$ minutes).
### 2.2 Post-sintering characterization
Internal porosity of sintered bricks was evaluated using mercury intrusion
porosimetry (Pore Master), henceforth referred to as MIP. This technique uses
high pressure (35000 psi) to drive mercury into the pore spaces in order to
determine the pore size distribution ranging from sub-micrometer pores to a
few hundred micrometers. Compressive strength measurements were performed
using quasi-static displacement-controlled uniaxial compression on a universal
testing machine (Instron-5697) with a 30 kN capacity load cell and loading
rate of 0.5 mm/min. Post-sintering microstructural examination was carried out
using FE-SEM (field emission scanning electron microscopy) (Carl Zeiss,
Germany) with a BSD detector. When compared with other detectors, BSD
detectors offer better efficiency for in-lens detection with higher surface
sensitivity and, consequently, enhanced spatial resolution for resolving pore
and grain-level information. An SE detector was used to image particle fusion
post-sintering.
### 2.3 Numerical estimation of sintering time $t_{s}$
In order to establish the efficacy of the sintering process and, in
particular, to estimate the optimal sintering time necessary for the complete
part to reach the desired sintering temperature, we employed a numerical model
based on a disordered lattice network description [52, 53, 54, 55]. The basic
problem consists of estimating the interior temperature of a porous material
(here the consolidated green part) as a function of temperature ramp and hold
at the boundaries (here the furnace conditions). The ramp rate was fixed at
$5^{\circ}$C/min and the peak temperature at $T_{a}=1150^{\circ}$C, see schematic
in Fig. 1. The lattice network model was then used to estimate the duration of
soaking so that the interior of the brick attained uniform temperature to
ensure homogeneous sintering. This step is necessary because the interior
temperature of the bricks cannot be experimentally monitored during any stage
of the process.
The configuration used for the simulations is presented schematically in Fig.
2 and consists of a regular triangular lattice network with unit spacing $a$.
The governing heat conduction equation for temperature ($T$) evolution with
time ($t$) in an isotropic homogeneous solid with thermal diffusivity $\alpha$
is
$\displaystyle\frac{\partial T}{\partial t}=\alpha\nabla^{2}T$ (1)
which, when discretized on this lattice takes the form [56]
$\displaystyle\frac{\partial T_{i}}{\partial
t}=\frac{2\alpha}{3a^{2}}\sum_{j}^{6}(T_{j}(t)-T_{i}(t))$ (2)
Thus, we can imagine our solid as a regular network of bonds with diffusivity
$\kappa_{ij}=2\alpha/(3a^{2})$ (for bond length $a$) and a temperature
difference of $(T_{j}-T_{i})$ applied across each, as described in Fig. 2. To
determine the dynamic evolution of temperature, we used forward finite
difference time discretization in Eq. 2, yielding an explicit numerical scheme
$\displaystyle T_{i}(t+\Delta t)=T_{i}(t)+\Delta
t\sum_{j}^{6}\kappa_{ij}(T_{j}(t)-T_{i}(t))$ (3)
All material information is described by bond diffusivity $\kappa_{ij}$
between nodes $i,j$. In order to simulate the internal porosity in the green
part, we assume that pores are randomly distributed throughout the specimen,
which otherwise has uniform diffusivity. Experimentally consistent porosity
values $p$ were first obtained from MIP measurements (see Sec. 2.2),
correspondingly nodes were removed in the lattice to generate an equivalent
porosity in the network (see inset to Fig. 2). In this way, different
realizations of a porous lattice of a given gross porosity $p$ were generated.
Figure 2: Schematic of lattice network model comprised of triangular lattice
nodes and random pore network with porosity $p$. Voronoi polygons were first
generated in the triangular lattice with porosity $p$, shown as a zoomed image
in the inset. Solid is shown as a regular network of thermal resistors with
conductivity $\tilde{\sigma}$ = 2/3 and a temperature difference of
($\theta_{j}$-$\theta_{i}$) applied across the bond joining lattice nodes
$i,j$; $\tau$, $\Theta$ and $\tilde{c}$ represent non-dimensional time,
temperature and heating rate, respectively. See text for description.
At $t=0$, the temperature at all the nodes was kept equal to room temperature
$T_{0}=30^{\circ}$C. For $t>0$, we assume the nodes on the outer four
boundaries of the specimen (orange color, labeled ABCD in schematic) to be at
the furnace temperature at all times, which was increased at a constant rate
($c$) from room temperature $T_{0}$ to $T_{a}=1150^{\circ}$C, and thereafter
maintained constant. Equation 3 was then solved on the porous lattice to
determine the time needed for the interior to reach the furnace temperature
$T_{a}$. This provided the minimum time $t_{s}$ necessary for the sintering
process to occur homogeneously. Data is presented in the form of non-
dimensionalized temperature $\Theta=(T-T_{0})/(T_{a}-T_{0})$.
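The ramp-and-hold scheme above can be sketched in a few dozen lines. The following is an illustrative, simplified version of the explicit update of Eq. 3: it uses a square lattice with four neighbours (the paper uses a triangular lattice with six) and uniformly random pore nodes (the paper uses Voronoi-based pores), while keeping the paper's bond value $\kappa=2/3$; lattice size, porosity, and ramp duration here are illustrative choices, not the study's actual parameters.

```python
import numpy as np
from collections import deque

def connected_to_boundary(solid):
    """Flood fill: solid nodes reachable from the heated boundary."""
    n = solid.shape[0]
    seen = np.zeros_like(solid, dtype=bool)
    q = deque((i, j) for i in range(n) for j in (0, n - 1))  # left/right edges
    q.extend((j, i) for i in range(n) for j in (0, n - 1))   # top/bottom edges
    while q:
        i, j = q.popleft()
        if not (0 <= i < n and 0 <= j < n) or seen[i, j] or not solid[i, j]:
            continue
        seen[i, j] = True
        q.extend([(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)])
    return seen

def soak_steps(n=40, p=0.25, ramp_steps=2000, max_steps=200_000,
               dtau=0.1, seed=0):
    """Explicit update of Eq. (3): boundary ramps to Theta=1, then holds.

    Returns the number of post-ramp steps until every conducting node
    reaches Theta >= 0.999, i.e. the minimum soaking time in step units.
    """
    rng = np.random.default_rng(seed)
    solid = rng.random((n, n)) >= p        # True = material, False = pore
    solid[0, :] = solid[-1, :] = True      # keep boundary nodes intact
    solid[:, 0] = solid[:, -1] = True
    mask = connected_to_boundary(solid)    # isolated grains never heat up
    T = np.zeros((n, n))
    kappa = 2.0 / 3.0                      # bond value from Eq. (2)

    for step in range(max_steps):
        Tb = min(1.0, (step + 1) / ramp_steps)   # linear ramp, then hold
        T[0, :] = T[-1, :] = Tb
        T[:, 0] = T[:, -1] = Tb
        lap = np.zeros_like(T)
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            nbr_ok = np.roll(solid, shift, axis) & solid  # pores insulate
            lap += np.where(nbr_ok, np.roll(T, shift, axis) - T, 0.0)
        T += dtau * kappa * lap
        if step >= ramp_steps and T[mask].min() >= 0.999:
            return step - ramp_steps
    return None
```

Monitoring only the cluster connected to the boundary mirrors the physical situation: material fully enclosed by pores receives no conductive heat in this idealization.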
## 3 Results
We now describe the results of sintering experiments, beginning with optimal
soaking time estimation using the lattice network model described in Sec. 2.3.
We then present results of compression testing experiments and use length
measurements during sintering to infer microscale mechanisms operative during
sintering.
### 3.1 Porosity and soaking time $t_{s}$
Results of the numerical simulations introduced in Sec. 2.3 are first used to
estimate the minimum soaking time $t_{s}$ necessary for optimal sintering.
This is defined as the time taken for the entire network to reach the furnace
temperature. However, the porosity $p$ of the green part is an input for the
model; we obtain this from MIP measurements of test samples as described in
Sec. 2.2. For bricks sintered at $1150^{\circ}$C, mercury was infused at 35
kpsi pressure in MIP. The porosity was estimated to be $24.2\%$ and $20.3\%$
for soaking durations of 1 hour and 6 hours, respectively. As an upper bound,
we set $p=25\%$ for the initial network; in the results that follow,
temperature $T$ and time $t$ are normalized as
$\displaystyle\Theta=(T-T_{0})/(T_{a}-T_{0})\hskip 28.45274pt\text{and}\hskip
28.45274pt\tau=\alpha t/a^{2}$ (4)
Since the scheme is explicit, we use a small timestep $\Delta\tau=0.1$ to
ensure a stable solution. The lattice size was taken to be 200$\times$200,
corresponding to the 18$\times$18mm2 dimensions of the green brick; thermal
diffusivity of LHS samples was approximated to be 2.65$\times 10^{-8}$ m2/s
(diffusivities for most simulants are $\sim 10^{-8}$ m2/s), see [57,
58].
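For concreteness, the non-dimensional step $\Delta\tau=0.1$ can be translated back into physical units using the values just quoted (a short arithmetic sketch; the lattice spacing follows from discretizing the 18 mm sample over 200 nodes):

```python
# Physical time per non-dimensional step, from tau = alpha * t / a^2
alpha = 2.65e-8        # LHS thermal diffusivity, m^2/s (from the text)
a = 18e-3 / 200        # lattice spacing: 18 mm sample over 200 nodes, m
dtau = 0.1             # non-dimensional timestep used in the scheme

dt = dtau * a**2 / alpha
print(f"one update step = {dt*1e3:.1f} ms")  # ~30.6 ms
```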
The results of the numerical simulation are summarized in Fig. 3. Panel (a)
shows the temperature field in the porous network at time $t_{f}$ when the
furnace first reaches the peak or soaking temperature $\Theta=1$. A gradient
is clearly observed in the field, with $\Theta$ varying by nearly 0.05 from
the sample periphery to its interior. As the furnace temperature is held
constant at $T_{a}$ (corresponding to $\Theta=1$), the interior temperature
continues to rise; the time taken for it to reach $T_{a}$ everywhere inside is
set to be the minimum soaking time $t_{s}$. The corresponding thermal field is
shown in panel (b) of Fig. 3.
Figure 3: Non-dimensional temperature field in the porous lattice network
($p=25\%$) when the boundary first reaches the soaking temperature (panel a),
and when the interior reaches $\Theta=1$ (panel b). Panel c shows temperature
distribution ($\Theta$) along the horizontal line at the center for various
values of $\tau$ (marked). Thermal diffusivity of LHS is 2.65$\times$10-8
m2/s, 200$\times$200 lattice size, heating ramp rate 5∘C/min.
The corresponding temperature evolution along the horizontal midline of the
sample is shown in Fig. 3(c). The fluctuations in the curves arise from the
heterogeneity of the bricks, primarily pores. When heating begins from room
temperature, $\Theta$ = 0 (point C in the sintering cycle of Fig. 1), the
temperature at all points along the midline is zero. During the
temperature ramp, the boundary temperatures are slowly raised to $\Theta$ = 1
(point E in the sintering cycle, Fig. 1). Even after the ramp is complete, with
the boundaries maintained at $T_{a}$, the interior temperature continues to
rise until it becomes uniform inside the sample (point $E$ to $\tilde{F}$).
Based on these simulations, we determine that for the present geometry (cubic,
18 mm side length), a green part requires a total time $t_{f}+t_{s}\sim 285$
min to reach a uniform peak temperature $T_{a}$. Given that the furnace takes
$t_{f}=224$ min to reach $T_{a}$ from room temperature, the minimum soaking
time is expected to be at least $t_{s}\sim 60$ min. This corresponds to
the temperature profile curve $\tilde{F}$ in Fig. 3(c). While this value holds
specifically for LHS, all simulants used in the present study have similar
thermal diffusivity and porosity, so we take 60 minutes as the minimum
soaking time for furnace sintering of LHS, LMS and MGS samples.
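The timing budget above follows directly from the stated ramp parameters; a quick arithmetic check (using only numbers quoted in the text) recovers the $t_{s}\sim 60$ min estimate:

```python
# Timing budget implied by the ramp parameters quoted in the text
T0, Ta, rate = 30.0, 1150.0, 5.0   # room temp (C), peak temp (C), ramp (C/min)
t_f = (Ta - T0) / rate             # furnace ramp time, minutes
t_total = 285.0                    # simulated time to uniform T_a, minutes
t_s = t_total - t_f                # minimum soaking time, minutes
print(t_f, t_s)                    # 224.0 61.0
```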
### 3.2 Compressive strength of synthetic space bricks
The compressive strength $\sigma_{c}$ of sintered samples was measured using
unconfined uniaxial compression, as described in Sec. 2.2 and the results
summarized in Fig. 4. For sintering, peak temperature was maintained at
$T_{a}=1150^{\circ}$C since that is the maximum temperature at which both LMS
and MGS can undergo solid-state sintering, without melting of their
constituent components. Data is compiled from individual stress-strain
curves such as the ones shown in Fig. S1 of supplementary material. The
results are summarized for LHS, LMS and MGS in Fig. 4 in the form of blue,
orange and green bars, respectively. The values are averages obtained
over 4 samples, with error bars representing the standard deviation.
The horizontal axis represents the soaking time $t_{s}$, and the red arrow
marks the cut-off at the minimum soaking time estimated from the previous
section, corresponding to curve $\tilde{F}$ in Fig. 3(c).
For $t_{s}=10$ min, MGS-based bricks exhibited a mean compressive strength of
23.9 MPa, while LMS and LHS-based bricks showed compressive strengths of 11.0
MPa and 5.1 MPa, respectively. Beyond the estimated minimum $t_{s}$ of 60 min,
significant strength increase was observed for all three cases, with MGS, LMS,
and LHS reaching 40.8 MPa, 33.9 MPa, and 11.4 MPa, respectively. This is a
clear sign of enhanced sintering due to homogeneous internal temperature.
Figure 4: Compressive strength of Synthetic space bricks from LHS (blue), LMS
(orange) and MGS (green) simulants as a function of sintering or soaking time.
$T_{a}=1150^{\circ}$C.
However, given that this condition provides only a lower bound for the soaking
time $t_{s}$, we evaluated the strength at times up to 480 min, as
shown in Fig. 4. LMS showed the largest compressive strength (exceeding 40
MPa) at $t_{s}=360$ min, with a reduction in strength at higher $t_{s}$.
Likewise, the strength of MGS bricks appeared to saturate after $t_{s}=240$
min, reaching $\sim 35$ MPa. LHS bricks showed the lowest strength ($<20$ MPa)
throughout the tested $t_{s}$ range. The reason for the apparent reduction in
strength for $t_{s}>360$ min for all three cases is not clear, and could be
dependent on specific microscopic processes operative in each of the regolith
materials. Potential causes include grain growth and microcrack formation due
to thermal stress leading to enhanced susceptibility to fracture.
Figure 5: Fracture patterns accompanying failure of synthetic space bricks
under unconfined compression. Arrows point to cracks growing in the loading
direction. $T_{a}=1150^{\circ}$C, $t_{S}=360$ min.
In each compression test, the compressive strength was evaluated using the
maximum force measured during loading, _cf._ Fig. S1 of supplementary. At this
point, the bricks undergo compressive failure, leading to a significant
reduction in the force-displacement curve. This failure is mediated by a
number of cracks that are commonly aligned with the loading direction, see Fig.
5. The three panels in this figure show LHS (left), LMS (middle) and MGS
(right) samples, respectively, at the point of failure. All of these samples
were sintered for $t_{s}=360$ min. In each case, the proliferation of multiple
cracks is clear (arrows), all nominally aligned with the loading
direction. Prior simulations of these fracture patterns have shown that the
most likely mechanism for this is pore and microcrack coalescence leading to
macroscopic cracks. These then grow in a direction parallel to the compression
axis due to the lack of any lateral confining pressure [55].
### 3.3 Post-sintering microstructure
Post-process SEM images of bricks show clear signs of sintering on the
microscale, see Fig. 6. Panel (a) shows a typical brick and the location at
which SEM images are taken; the yellow arrow represents compaction direction.
A small section was mechanically removed from the sample surface and gold-
coated for imaging. Data for LHS bricks is presented; corresponding images for
LMS and MGS appear qualitatively similar. A BSD detector was used to obtain
these images.
Panels (b) and (c) show the surface after $t_{S}=60$ min (minimum $t_{S}$
estimated) and $t_{S}=360$ min, respectively. Temporal progress of the
sintering process is clear from these images—initially disparate regolith
particles first appear loosely bound after 60 min (panel (b)). They
demonstrate significantly enhanced cohesion after 360 min, with a few pores
evident between particles (panel (c)). A higher resolution image (inset, blue
box) shows that the individuality of particles is clearly no longer
discernible at this stage.
Figure 6: SEM micrographs showing the top layer of LHS-based bricks using BSD
detector. (a) Schematic of SEM micrograph location. Panels (b), (c) show
micrographs for $t_{S}$ = 60 min and 360 min, respectively, clearly
demonstrating enhanced sintering with $t_{S}$, and consistent with compressive
strength measurements. $T_{a}=1150^{\circ}$C.
These micrographs show how crucial it is to regulate the sintering parameters
to create a coherent structure, which ultimately affects the strength and
utility of the bricks. The micrographs for LMS and MGS also show a similar
trend, even though the overall strength is quite different in all three cases
(_cf._ Fig. 4). This largely similar microstructure is also perhaps
responsible for the similar crack patterns observed during failure of all
three regolith-sintered bricks (_cf._ Fig. 5).
### 3.4 Inferring microscale sintering mechanisms
We now turn our attention to the microscopic mechanisms underlying particle
sintering during the soaking stage. Extensive prior studies by the ceramics
community dating back to the 1950s have identified four primary candidate
mechanisms for the joining of two individual particles—viscous flow, atomic
evaporation and condensation, surface diffusion, and lattice (volume) diffusion. The
underlying assumptions here are that only 2-particle mechanisms are dominant
and that particles themselves are approximately spherical.
Identifying which of these mechanisms is operative in our bricks is
challenging due to the large polydispersity in particle size and shape as
well as the complex mineral compositions involved. A simple macroscale method
for inferring the dominant mechanism(s) involves measuring sample dimensions
during the soaking process as a function of $t_{S}$ [59, 60, 61, 62, 63, 64,
51, 33]. Reduction in dimension is approximately correlated with centre-to-
centre distance $\delta$ and neck radius $x$ between particles on the
microscale, see Fig. 7(a). In general, $x^{n}\sim t$, where the exponent $n$
is governed by which mechanism is operative. For viscous flow, evaporation
condensation, volume diffusion and surface diffusion, the value of $n$ is 2,
3, 5 and 7, respectively.
Specifically,
$\Big{(}\dfrac{x}{r}\Big{)}^{n}=At\implies\log\Big{(}\dfrac{x}{r}\Big{)}=\log{(\chi)}=\dfrac{1}{n}\log(A)+m\log(t)$
(5)
where $A$ is a material-dependent constant. For the bricks, $x/r$ is obtained
approximately by measuring the percentage shrinkage in linear dimensions from
the green to the brown part [63].
$\text{shrinkage}=\dfrac{L_{i}-L_{f}}{L_{i}}=\dfrac{\Delta
L}{L_{i}}=\dfrac{\delta}{r}\approx\dfrac{x^{2}}{4r^{2}}=\dfrac{\chi^{2}}{4}$
(6)
where $2\delta$ is change in centre-to-centre distance between two spherical
particles under coalescence, and $r$ is the particle size.
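The shrinkage-to-exponent procedure of Eqs. 5-6 amounts to a straight-line fit in log-log coordinates; a minimal sketch follows. The shrinkage values below are hypothetical, constructed to follow $\chi\sim t^{1/5}$ (volume diffusion), and are not the measured data.

```python
import numpy as np

def sintering_exponent(t_min, shrinkage):
    """Fit n in (x/r)^n ~ t from linear shrinkage, per Eqs. (5)-(6).

    chi = x/r = 2*sqrt(dL/L_i); the slope of log(chi) vs log(t) is 1/n.
    """
    chi = 2.0 * np.sqrt(np.asarray(shrinkage, float))
    slope, _ = np.polyfit(np.log(np.asarray(t_min, float)), np.log(chi), 1)
    return 1.0 / slope

# Hypothetical shrinkage data following dL/L = chi^2/4 ~ t^(2/5):
t = [60, 120, 240, 360, 480]            # soaking times, minutes
shr = [0.25 * (ti / 480) ** (2 / 5) for ti in t]
print(round(sintering_exponent(t, shr), 2))   # 5.0
```

Applied to the measured shrinkage of real samples, the same fit yields the exponents reported below.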
In order to finally evaluate the exponent $n$, dimensions of the cubic green
and brown parts were measured using a vernier caliper at various locations and
averaged to get a mean linear dimension; the value of $\Delta L/L_{i}$ then
provided an estimate of $\chi$ from Eq. 6. The corresponding log($\chi$) vs.
log($t_{s}$) plot is shown in Fig. 7(b). The relationship appears to be nearly
linear, with slope equal to the inverse exponent $m=1/n$, see Eq. 5. We have
only used samples for which $t_{s}>60$ min in accordance with the estimate
obtained in Sec. 3.1. The error bars represent standard deviations over 4
successive measurements.
Figure 7: Mechanism of neck growth and particle coalescence in sintered
bricks. (a) Schematic representing two-particle model for calculating the
growth rate. Panel (b) presents experimental observations of the variation of
$\chi$ vs. $t_{S}$ on a log-log scale; note that
$\chi=\dfrac{x}{r}=2\sqrt{\dfrac{\Delta L}{L_{i}}}$. Panels (c) and (d) show SEM
micrographs (SE detector) of MGS and LHS, respectively with red arrows
indicating particle coalescence. Data in (b) is averaged over 4 samples,
$T_{a}=1150^{\circ}$C.
The approximated slopes from the curves in Fig. 7(b) correspond to $n=$ 4.4,
4.2, and 4.9 for LHS, LMS, and MGS, respectively, sintered at 1150∘C. These
values strongly suggest that volume diffusion ($n=5$) is the predominant
mechanism for sintering on the microscale. Corresponding SEM images (SE
detector) of MGS and LHS are shown in Fig. 7(c) and (d), respectively. The
coalescence of individual regolith particles appears quite clear (at arrows).
The extent to which other mechanisms (e.g., viscous flow) are operative
remains uncertain based on these investigations and certainly warrants further
study.
## 4 Discussion and Summary
Based on our investigations, it is clear that a significant difference in
compressive strength (unconfined) is to be expected between various regolith
simulants—LMS and MGS are comparable ($\sim 40$ MPa) while that of LHS is
nearly $60\%$ lower. It is entirely possible that a higher sintering
temperature may significantly alter this picture, since fundamental chemical
changes are expected given the soil composition. In fact, exploiting the
presence of increased glass basalt content in LHS appears to be a direct route
for enhancing sintering strength that we are presently pursuing.
As fundamental building blocks for habitat applications, the minimum strength
needed for sustaining their self-weight is around 3 MPa (lunar) and 6 MPa
(martian), based on the lower gravity on the surface of the Moon or Mars,
respectively. The synthetic space bricks reported in our work have
significantly larger strength, making them more than suitable for these
applications, even if only partially sintered (_cf._ Fig. 4). As far as
deployability is concerned, the sintering technique is ideally suited to _in
situ_ resource utilization (ISRU) in extraterrestrial habitats since the
process can be scaled and completely automated. We believe that this large
strength and process scalability are fundamental advantages of the sintering
process, over other routes that have been proposed in the literature, see also
Table 1.
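A back-of-the-envelope self-weight check illustrates how far the measured strengths sit above the quoted minima. This is a minimal sketch: the density and wall height below are assumed values for illustration, not measurements from this work:

```python
# Self-weight compressive stress at the base of a wall: sigma = rho * g * h.
# rho and h are assumed illustrative values, not measurements from this work.
rho = 2000.0  # kg/m^3, assumed bulk density of sintered regolith
h = 10.0      # m, assumed wall height

for body, g in [("Moon", 1.62), ("Mars", 3.71)]:  # surface gravities, m/s^2
    sigma_mpa = rho * g * h / 1e6
    print(f"{body}: base stress ~ {sigma_mpa:.4f} MPa")

# Even the stricter quoted minimum (6 MPa, martian) leaves a large margin
# relative to the ~40 MPa strengths reported for LMS and MGS.
print(f"margin: {40.0 / 6.0:.1f}x")
```

Under these assumptions the self-weight stress is tens of kPa, far below both the quoted design minima and the measured brick strengths.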
For both the lunar and martian bricks, ultimate failure (under unconfined
compression) occurs via the propagation of multiple axis-aligned cracks. A
typical stress-strain curve for these bricks (see Fig. S1, supplementary
material) shows that they are essentially brittle with little plastic flow on
the macroscale. The growth of cracks occurs due to local stress
concentration—at either large pores or pre-existing microcracks—that can lead
to catastrophic failure at a size-dependent critical load. This behaviour was
also observed to be somewhat anisotropic, being different in the compaction
direction (initial green part production) vis-à-vis the transverse directions.
In general, pores, inclusions, and grain boundaries are examples of
microstructural flaws that may inherently operate as stress concentrators, and
provide locations for crack initiation and propagation.
Based on our results, it is to be expected that the peak temperature and
soaking time are the two primary contributors to final part strength. While
the increase in strength with $t_{S}$ is to be expected—longer soaking time
leads to better particle coalescence and hence, enhanced sintering—the
reduction in strength for $t_{S}>360$ min across all three materials is
noteworthy. This is quite likely due to the occurrence of large internal
pores, either driven by vacancy diffusion in the bulk or by thermal stresses.
However, little evidence of this was found in the compression-test failure
mechanisms, and this warrants further investigation. As mentioned in the
text, the occurrence of high glass basalt content (melting $\sim
1170^{\circ}$C) in LHS is likely to increase the strength in samples sintered
at higher temperature; we are presently actively investigating this
possibility.
In the context of sintering mechanisms, our results, based on classical
analyses commonplace in the ceramics community, indicate that volume diffusion
is the primary driver of particle coalescence. This is clear based on the
macroscopic measurements presented in Fig. 7, yet direct evidence of these
mechanisms is fundamentally challenging to obtain. The possibility of directly
observing individual particle coalescence at the grain level is complicated by
both the length-scales and high temperatures involved, not to mention the
complete lack of axisymmetry in the particles themselves. We believe that
coarse-grained granular dynamics simulations could be profitably employed to
address this question in more detail.
In summary, our work proposes a procedural protocol for fabricating synthetic
space bricks via sintering in extra-terrestrial settings using _in situ_
resource utilization. We have demonstrated the potential for using both lunar
and martian regolith simulants, with significant compressive strengths of
final sintered bricks. Process parameters were estimated using a numerical
model to evaluate time needed for uniform temperature in the interior of
porous green parts produced using a polymer binder. Based on these results,
compressive strength measurements were performed as a function of the
sintering time at a fixed temperature. Our results show that strengths of
$>40$ MPa are achievable with both lunar (LMS) and martian (MGS) simulants.
Strength resulting from particle coalescence was confirmed using both
compression testing and SEM imaging of consolidated samples. Based on linear
shrinkage, we inferred the primary microscopic sintering mechanism to be
volume diffusion in the bulk of individual particles in all three simulants.
Based on our results, the potential for using sintering to consolidate
regolith into bricks for structural applications has been clearly
demonstrated.
## Acknowledgments
The authors acknowledge Mr Bhupendra Chand and Prof Tejas Murthy, Civil
Engineering, IISc for extending the MIP facilities for conducting porosity
analysis.
# Gravitational radiation with $\Lambda>0$
Béatrice Bonga<EMAIL_ADDRESS>Institute for Mathematics, Astrophysics
and Particle Physics, Radboud University, 6525 AJ Nijmegen, The Netherlands
Claudio Bunster<EMAIL_ADDRESS>Centro de Estudios Científicos (CECs), Av.
Arturo Prat 514, Valdivia, Chile
Universidad San Sebastián, Chile Alfredo Pérez<EMAIL_ADDRESS>Centro
de Estudios Científicos (CECs), Av. Arturo Prat 514, Valdivia, Chile
Facultad de Ingeniería, Arquitectura y Diseño, Universidad San Sebastián, sede
Valdivia, General Lagos 1163, Valdivia 5110693, Chile
(August 28, 2024)
###### Abstract
We study gravitational radiation for a positive value of the cosmological
constant $\Lambda$. We rely on two battle-tested procedures: (i) We start from
the same null coordinate system used by Bondi and Sachs for $\Lambda=0$, but,
introduce boundary conditions adapted to allow radiation when $\Lambda>0$.
(ii) We determine the asymptotic symmetries by studying, à la Regge-
Teitelboim, the surface integrals generated in the action by these boundary
conditions. A crucial difference with the $\Lambda=0$ case is that the wave
field does not vanish at large distances, but is of the same order as de
Sitter space. This novel property causes no difficulty; on the contrary, it
makes quantities finite at every step, without any regularization. The
asymptotic symmetry algebra consists only of time translations and space
rotations. Thus, it is not only finite-dimensional, but smaller than de Sitter
algebra. We exhibit formulas for the energy and angular momentum and their
fluxes. In the limit of $\Lambda$ tending to zero, these formulas go over
continuously into those of Bondi, but the symmetry jumps to that of Bondi,
Metzner and Sachs. The expressions are applied to exact solutions, with and
without radiation present, and also to the linearized theory. All quantities
are finite at every step; no regularization is needed.
## I Introduction
We study gravitational radiation for a positive value of the cosmological
constant $\Lambda$ as generated by compact sources such as stars and black
holes. We are guided by, and closely follow the steps of, Sachs’ classical
analysis of the concepts introduced by Bondi for $\Lambda=0$ (Bondi:1960jsa;
Sachs:1962wk; Sachs:1962zza). That analysis gave unambiguous expressions
for energy and energy flux, and also established the existence of an infinite-
dimensional symmetry algebra, now called the Bondi-Metzner-Sachs (BMS)
algebra.
Bondi and Sachs relied only on Einstein’s equations with boundary conditions
to study a large class of spacetimes, now known as “asymptotically flat”
solutions. They did not, and needed not to, invoke the action principle. They
required though, that the mass should decrease when radiation is emitted. This
demand was essential to arrive at unambiguous expressions for the mass and its
flux. We have not been able to implement the mass diminution requirement for
$\Lambda>0$, although we do recover it in the linearized case. In order to
arrive at the formulas for the mass and its flux, we have appealed instead to
the action principle, including in it appropriate surface integrals à la
Regge-Teitelboim.
The symmetry algebra consists only of time translations and space rotations,
even when gravitational radiation is present. It is not only finite-
dimensional but smaller than the de Sitter algebra. This result is crucially
linked to the $\mathbb{R}\times\mathbb{S}^{2}$ topology of the future
boundary, and as shown in (abk1), the symmetry group for asymptotically de
Sitter spacetimes depends crucially on the topology. In the limit
$\Lambda\rightarrow 0$, the mass and flux formulas coincide with those of
Bondi and Sachs. In contradistinction, the symmetry algebra has an enormous
jump: It becomes the BMS symmetry.
For waves of small amplitude, we recover the energy flux of the linear theory
on a de Sitter background. Interestingly, even waves with small amplitudes
reach infinity without decay. This is not the case for asymptotically flat
spacetimes, in which linear waves decay at least as $1/r$ at large distances.
For special solutions such as Kerr-de Sitter and Robinson-Trautman with
$\Lambda>0$, we recover the accepted expressions for the mass and angular
momentum. All of our conclusions follow from General Relativity, by bringing
into it boundary conditions which are the natural extension for $\Lambda>0$ of
those employed by Sachs for $\Lambda=0$.
Interestingly, all equations in this paper hold also for $\Lambda<0$. In that
case, the boundary conditions do not correspond to the typical reflecting ones
(see e.g. Ashtekar:1984zz; Henneaux:1985tv; Ashtekar:1999jx), as is clear
from the non-conformal flatness of the boundary metric. We find this
an appealing issue for exploration, but we will not address it here.
The key findings of this paper are summarized in Table 1.
Table 1: This table summarizes the key results of this paper and compares those with the case $\Lambda=0$.
 | $\Lambda=0$ | $\Lambda>0$
---|---|---
Asymptotic region | Future null infinity | Future space-like infinity
Topology asymptotic region | $\mathbb{R}\times\mathbb{S}^{2}$ | $\mathbb{R}\times\mathbb{S}^{2}$ (= $\mathbb{S}^{3}$ with two points removed)
| | to describe radiation emitted by bounded sources
Conformal completion | Yes | Yes
of infinity possible? | |
Coordinates | $ds^{2}=e^{2\beta}\frac{V}{r}du^{2}-2e^{2\beta}dudr$ | $ds^{2}=e^{2\beta}\frac{V}{r}du^{2}-2e^{2\beta}dudr$
| $\qquad+r^{2}g_{AB}\left(dx^{A}-U^{A}du\right)\left(dx^{B}-U^{B}du\right)$, | $\qquad+r^{2}g_{AB}\left(dx^{A}-U^{A}du\right)\left(dx^{B}-U^{B}du\right)$,
| $\det g_{AB}=\sin^{2}\theta$ | $\det g_{AB}=\sin^{2}\theta$
| (Bondi gauge with the Sachs condition) | (Bondi gauge with the Sachs condition)
Fall-off | $\beta=O\left(r^{-2}\right)$, | $\beta=O\left(r^{-2}\right)$,
| $U^{A}=-\frac{1}{2r^{2}}D_{B}C^{AB}$ | $U^{A}=U_{\left(0\right)}^{A}-\frac{1}{2r^{2}}D_{B}C^{AB}$
| $\qquad-\frac{2}{3r^{3}}\left(N^{A}-\frac{1}{2}C^{AB}D^{C}C_{BC}\right)+\dots$, | $\qquad-\frac{2}{3r^{3}}\left(N^{A}-\frac{1}{2}C^{AB}D^{C}C_{BC}\right)+\dots$,
| $V=-r+2M+\dots$, | $V=\frac{\Lambda r^{3}}{3}-D_{A}U_{\left(0\right)}^{A}r^{2}$$-\left(1+\frac{\Lambda}{16}C_{AB}C^{AB}\right)r+2M+\dots,$
| $g_{AB}=\gamma_{AB}+\frac{C_{AB}}{r}+\frac{\gamma_{AB}C_{CD}C^{CD}}{4r^{2}}+\frac{E_{AB}}{r^{3}}+\dots$ | $g_{AB}=\gamma_{AB}+\frac{C_{AB}}{r}+\frac{\gamma_{AB}C_{CD}C^{CD}}{4r^{2}}+\frac{E_{AB}}{r^{3}}+\dots$,
| | $D_{A}U_{\left(0\right)B}+D_{B}U_{\left(0\right)A}-\gamma_{AB}D_{C}U_{\left(0\right)}^{C}=\frac{\Lambda}{3}C_{AB}$
Radiation field vanishes | Yes | No
at infinity | ($U^{A}_{(0)}=0$) | ($U^{A}_{(0)}\neq 0$)
Imprint on the | Symmetric traceless tensor $C_{AB}$ (“Bondi News”) | Symmetric traceless tensor $C_{AB}$ (“Bondi News”)
metric of the most | arbitrary functions of the retarded time and | arbitrary functions of the retarded time and
general wave | the angles (generic graviton) | the angles (generic graviton)
Symmetry | Infinite-dimensional (“BMS”) Lie algebra: | Four-dimensional Lie algebra:
| $so\left(3,1\right)+\text{``supertranslations''}$ | $so\left(3\right)\oplus\mathbb{R}$
Energy (“Bondi mass”) | $E=\frac{1}{4\pi G}\oint d^{2}S\,M$ | $E=\frac{1}{4\pi G}\oint d^{2}S\,M$
Angular momentum | $\vec{J}=\frac{1}{8\pi G}\oint d^{2}S\,\hat{r}\epsilon^{AB}D_{A}N_{B}$ | $\vec{J}=\frac{1}{8\pi G}\oint d^{2}S\,\hat{r}\epsilon^{AB}D_{A}N_{B}$
Angular momentum | Yes | No
ambiguity | (angular momentum not invariant | (there are no supertranslations)
| under supertranslations) |
Energy flux | $\frac{dE}{du}=-\frac{1}{32\pi G}\oint d^{2}S\,N_{AB}N^{AB},$ | $\frac{dE}{du}=-\frac{1}{32\pi G}\oint d^{2}S\,\left[N_{AB}^{\left(\Lambda\right)}N^{\left(\Lambda\right)AB}+\frac{2\Lambda}{3}C^{AB}C_{AB}\right.$
| with $N_{AB}:=\dot{C}_{AB}$ | $\qquad-\frac{\Lambda}{6}C^{AB}D^{2}C_{AB}+\frac{7\Lambda^{2}}{144}\left(C^{AB}C_{AB}\right)^{2}-\frac{\Lambda^{2}}{3}C^{AB}E_{AB}$
| | $\qquad\left.+\left(4M+D_{A}D_{B}C^{AB}\right)\left(D_{C}U_{\left(0\right)}^{C}\right)\right],$
| | with $N_{AB}^{\left(\Lambda\right)}:=\dot{C}_{AB}+\mathcal{L}_{U_{\left(0\right)}}C_{AB}-\frac{1}{2}\left(D_{C}U_{\left(0\right)}^{C}\right)C_{AB}$
| | $\qquad\qquad\qquad\qquad-\frac{\Lambda}{6}\gamma_{AB}C_{CD}C^{CD}$
Inputs to arrive at a | Equations of motion (asymptotic form of the | Equations of motion (asymptotic form of the
formula for the mass | solution should include the generic graviton) | solution should include the generic graviton)
and its variation | |
(energy flux) | Mass should reduce to known expressions when | Mass should reduce to known expressions when
| there is no radiation | there is no radiation
| Energy flux should be negative or zero | Action principle should be well-defined
## II Bondi revisited for $\Lambda>0$
Although the geometry is very different for $\Lambda>0$ and $\Lambda=0$, it
turns out that in the natural extension of the coordinate system used by Bondi
and Sachs the formulas for energy flux, energy, and the like turn out to be
remarkably simple, and furthermore reduce for $\Lambda=0$ to theirs. For this
reason, we will go right away into the analysis in that particular coordinate
system. In Appendix A, we show how these coordinates can be obtained from a
more geometric approach à la Penrose.
### II.1 Asymptotic behavior of the metric
In the coordinate system $\left(u,r,\theta,\phi\right)$ originally introduced
by Bondi (Bondi:1960jsa) and generalized later to the non-axisymmetric case
by Sachs (Sachs:1962wk; Sachs:1962zza), the line element reads
$\displaystyle ds^{2}$
$\displaystyle=e^{2\beta}\frac{V}{r}\;du^{2}-2e^{2\beta}\;dudr$
$\displaystyle+r^{2}g_{AB}\left(dx^{A}-U^{A}\;du\right)\left(dx^{B}-U^{B}\;du\right)\;$
(1)
with $-\infty<u<\infty$ and $0<r<\infty$. The $x^{A}$ are coordinates on the
two-sphere, which we choose here to be the standard spherical ones:
$x^{A}=\left(\theta,\phi\right)$ with $0\leq\theta\leq\pi$ and
$0\leq\phi<2\pi$ (strictly speaking, one of course needs two charts to cover
the 2-sphere). The coordinate $u$ is null because when $du=0$ and $dx^{A}=0$,
one has $ds^{2}=0$. Radiation is “observed” as $r\rightarrow\infty$. In this
limit, one approaches the future boundary — often denoted by $\mathcal{I}$.
These coordinates nicely encode that the topology of $\mathcal{I}$ is
$\mathbb{R}\times\mathbb{S}^{2}$, which is the relevant setting for studying
gravitational radiation emitted by compact sources.
The functions $\beta$, $V$, $g_{AB}$ and $U^{A}$ depend on $x^{A}$, $u$, and
$r$. The procedure is to expand the metric components in powers of $r^{-1}$,
demand reasonable boundary conditions and impose Einstein’s equations order by
order in $r$. The latter step does not restrict the dependence on $u$ and
$x^{A}$, but leads to relationships between different coefficients in the
expansion. We will omit the details of this calculation and state the result
to the order needed for the determination of possible asymptotic “charges,”
and their fluxes.
One finds
$\displaystyle\beta$ $\displaystyle=-\frac{1}{32r^{2}}C^{AB}C_{AB}$
$\displaystyle\;\;+\frac{1}{128r^{4}}\left(\left(C^{AB}C_{AB}\right)^{2}-12C^{AB}E_{AB}\right)+\ldots,$
(2a) $\displaystyle V$ $\displaystyle=\frac{\Lambda
r^{3}}{3}-D_{A}U_{\left(0\right)}^{A}\;r^{2}-\left(1+\frac{\Lambda}{16}C^{AB}C_{AB}\right)r$
$\displaystyle\qquad+2M+\ldots$ (2b) $\displaystyle U^{A}$
$\displaystyle=U_{(0)}^{A}-\frac{1}{2r^{2}}D_{B}C^{AB}$
$\displaystyle\qquad-\frac{2}{3r^{3}}\left(N^{A}-\frac{1}{2}C^{AB}D^{C}C_{BC}\right)+\ldots,$
(2c) $\displaystyle g_{AB}$
$\displaystyle=\gamma_{AB}+\frac{C_{AB}}{r}+\frac{C^{CD}C_{CD}\gamma_{AB}}{4r^{2}}+\frac{E_{AB}}{r^{3}}+\ldots,$
(2d) $\displaystyle\det g_{AB}$ $\displaystyle=\sin^{2}\theta.$ (2e)
These expressions depend on $\Lambda$ explicitly in Eq. (2b) and through
$U_{(0)}^{A}\left(\Lambda\right)$, which vanishes for $\Lambda=0$, but depends
implicitly on it according to Eq. (4) below. When $\Lambda=0$, they reduce to
those of Sachs. Here $D_{A}$ is the covariant derivative with respect to the
metric of the unit two-sphere $\gamma_{AB}$. The indices $A,B$ are lowered and
raised with the metric $\gamma_{AB}$. The symmetric tensors $C_{AB}$ and
$E_{AB}$ are traceless: $\gamma^{AB}C_{AB}=\gamma^{AB}E_{AB}=0$.
Besides Eqs. (2), there are two further restrictions on the coefficients which
are of decisive importance in the analysis. They are the following: (i) The
zeroth order term in $g_{AB}$ is demanded to be the standard line element on
the unit two-sphere:
$\gamma_{AB}dx^{A}dx^{B}=d\theta^{2}+\sin^{2}\theta d\phi^{2}.$ (3)
This additional demand, imposed by Bondi, does not follow from
Einstein’s equations and is not a mere restriction on the coordinate
system; it turns out to be of enormous consequence: It will guarantee later on
that no divergent quantities appear in the analysis of a problem that has no
physical singularities. In contradistinction, Eq. (2e) can be imposed to all
orders by a change of coordinates $r=f\left(r^{\prime},\theta,\phi\right)$.
(ii) Besides the relations between the coefficients in Eqs. (2), Einstein’s
equations imply
$2D_{(A}U_{B)}^{(0)}-\gamma_{AB}D_{C}U_{(0)}^{C}=\frac{\Lambda}{3}\;C_{AB}\;.$
(4)
Equation (4) exhibits the key difference in the imprint of the gravitational
wave on the metric for $\Lambda=0$ versus $\Lambda\neq 0$. In fact, as we will
see, the tensor $C_{AB}$ describes the field of the wave, and we see from (4)
that when $\Lambda=0$, the waves do not affect the metric to the lowest order.
However, when $\Lambda\neq 0$ the wave affects the metric even to the lowest
order through the shift vector $U_{\left(0\right)}^{A}$.
Note that the particular solution to Eq. (4) exclusively exhibits modes with
$\ell\geq 2$ that are inherited from the tensor $C_{AB}$. The information of
the gravitational wave is exclusively contained within these modes. On the
other hand, the solution of the homogeneous equation, specifically the
conformal Killing equation on the 2-sphere, has only $\ell=1$ modes, which are
independent of the wave degrees of freedom. These latter modes represent the
freedom in selecting the frame at infinity and can be set to zero without loss
of generality.
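The mode count for the homogeneous solution can be made explicit with a short spherical-harmonic argument (a sketch using unit-sphere conventions, $R_{AB}=\gamma_{AB}$; the scalar-potential decomposition below is a standard device, not taken verbatim from the text):

```latex
% Decompose the homogeneous solution into scalar potentials on S^2:
%   U_{(0)A} = D_A \Phi + \epsilon_{AB} D^B \Psi .
% Acting on the conformal Killing equation
%   2 D_{(A} U^{(0)}_{B)} - \gamma_{AB} D_C U_{(0)}^C = 0
% with D^A D^B and simplifying with R_{AB} = \gamma_{AB} gives
\[
  D^{A}D^{B}\!\left(D_{A}D_{B}-\tfrac{1}{2}\gamma_{AB}D^{2}\right)\Phi
  = \tfrac{1}{2}\,D^{2}\!\left(D^{2}+2\right)\Phi = 0 ,
\]
% and the analogous equation for \Psi. Expanding in spherical harmonics,
% with D^2 Y_{\ell m} = -\ell(\ell+1) Y_{\ell m}, this requires
\[
  \ell(\ell+1)\bigl[\ell(\ell+1)-2\bigr]=0
  \quad\Longrightarrow\quad \ell=0 \ \ \text{or}\ \ \ell=1 ,
\]
% and the constant \ell = 0 piece drops out of U_{(0)A}, leaving only the
% six \ell = 1 conformal Killing modes quoted in the text.
```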
_Remark._ The fact that no regularization is needed at any step in the present
work and that, in particular, all the charges are finite follows from allowing
a generic $U^{A}_{(0)}\neq 0$. Had we imposed $U^{A}_{(0)}=0$, we would have
been forced to let $\gamma_{AB}$ be a generic metric, but divergences would
appear.
### II.2 Asymptotic symmetries for $\Lambda=0$ and $\Lambda>0$ compared and
contrasted
#### II.2.1 Mass for $\Lambda=0$
When the cosmological constant vanishes, Bondi proposed that the integral over
a two-sphere of the coefficient $M\left(u,\theta,\phi\right)$ appearing in Eq.
(2b)
$E=\frac{1}{4\pi G}\oint d^{2}S\;M,$ (5)
is the total energy of the system (with $d^{2}S=\sin\theta\,d\theta d\phi$).
To validate this guess, he observed first that for the static Schwarzschild
solution, $M$ was indeed the Schwarzschild mass. Then he moved on to
investigate dynamical cases with gravitational waves, when the integral of $M$
over a large sphere was expected to diminish as a function of $u$ due to an
energy flux emitted by a source within the sphere and going out to infinity
(the coordinate $u$ is a retarded coordinate because the sign of the $dudr$
term in the line element is negative). This crucial test was satisfied because
one can verify, from Einstein’s equations, that
$\frac{dE}{du}=-\frac{1}{32\pi G}\oint
d^{2}S\,N_{AB}N^{AB}<0\qquad(\Lambda=0).$ (6)
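A quick numerical illustration (ours, not part of the original argument) of why the right-hand side of Eq. (6) is non-positive: for any symmetric traceless tensor on the unit two-sphere, the contraction $N_{AB}N^{AB}$ is pointwise non-negative, so the integrand in Eq. (6) never changes sign.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.1, np.pi - 0.1, size=1000)  # sample points away from the poles

# A symmetric traceless tensor on the unit two-sphere gamma = diag(1, sin^2 theta)
# has two free components; tracelessness fixes N_pp = -sin^2(theta) * N_tt.
N_tt = rng.normal(size=theta.shape)   # N_{theta theta}, arbitrary
N_tp = rng.normal(size=theta.shape)   # N_{theta phi},   arbitrary
N_pp = -np.sin(theta) ** 2 * N_tt     # fixed by gamma^{AB} N_{AB} = 0

# Contract with two inverse metrics: N_AB N^AB = gamma^{AC} gamma^{BD} N_AB N_CD
g_tt_inv = 1.0
g_pp_inv = 1.0 / np.sin(theta) ** 2
NN = (g_tt_inv ** 2 * N_tt ** 2
      + 2 * g_tt_inv * g_pp_inv * N_tp ** 2
      + g_pp_inv ** 2 * N_pp ** 2)

# Pointwise non-negative, so -oint d^2S N_AB N^AB < 0 whenever the news is non-zero
assert np.all(NN >= 0)
print(NN.min())
```

Indeed, for a traceless tensor one finds $N_{AB}N^{AB}=2N_{\theta\theta}^{2}+2N_{\theta\phi}^{2}/\sin^{2}\theta$, so the energy in Eq. (6) strictly decreases whenever the news is non-zero.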
The mass expression in Eq. (5) has later also been derived using other methods
such as the Landau-Lifschitz approach based on a pseudo-tensor (see e.g.
Thorne:1980ru ) and covariant phase space methods (see e.g. Barnich:2011mi ;
Flanagan:2015pxa ).
#### II.2.2 Angular momentum for $\Lambda=0$
If one were to attempt guessing an expression for the angular momentum, one
would naturally focus on the shift $N_{A}$ because it carries the imprint of
being “stationary” (versus static). One would need a two-form to integrate
over the sphere constructed out of this shift. The simplest candidate is its
exterior derivative. So, one would write
$\displaystyle\vec{J}$ $\displaystyle=\frac{1}{8\pi G}\oint
d^{2}S\;\hat{r}\epsilon^{AB}D_{A}N_{B}.$ (7)
The first test would be to check if this formula gives the right value for the
angular momentum of the Kerr-de Sitter solution (which can be brought to
satisfy the boundary conditions in Eq. (2), see Sec. V.3). If one does so, one
finds that indeed the test is passed. One does not expect the angular momentum
flux to have a definite sign, so that test is not available, but a complete
analysis of the asymptotically defined symmetries confirms its validity. The
vector $N^{A}$ is referred to as “angular momentum aspect”.333Beware,
conventions differ on the exact definition of the angular-momentum aspect:
some authors shift $N^{A}$ by terms proportional to $C_{AB}$ and its
derivatives, and/or multiply it by a numerical factor.
#### II.2.3 Symmetry for $\Lambda=0$
In order to prove that Eq. (5) and (7) are the energy and the angular
momentum, one needs to show that they generate time translations and spatial
rotations at infinity when acting on phase space. That proof, and much more,
was given by Sachs who, in a brilliant analysis, did two things: (i) He
discovered, extending previous work of Bondi, Metzner and Van der Burg, that
the asymptotically defined symmetry is enormously larger than the expected
Poincaré group, and that the commutators of its Killing vectors form an
infinite-dimensional Lie algebra now called the Bondi-Metzner-Sachs algebra
(Sachs:1962wk, ). (ii) He _postulated_ a commutation rule for the two
independent components of the news $C_{AB}$ and showed that, with just that,
$M\left(\theta,\phi\right)$ and the Lorentz generators $J_{\mu\nu}$ that he
also constructed, generate the symmetry algebra (Sachs:1962zza, ). In
particular, the zero mode (5) generates time translations. By guessing the
commutation rule, Sachs did not need to use the action principle, but just the
equations of motion. Later developments have made it possible to recover the
canonical generators of the Bondi-Metzner-Sachs algebra from the action
principle Barnich:2011mi ; Henneaux:2018cst ; Bunster:2018yjr .
#### II.2.4 Symmetry for $\Lambda>0$
For $\Lambda=0$, besides the energy and angular momentum, one has boosts
$\vec{K}$ and infinitely many supertranslation generators
$M\left(\theta,\phi\right)$, with spherical modes $\ell\geq 1$. The situation
is dramatically different for $\Lambda\neq 0$, in which case only $E$ and
$\vec{J}$ are present. The complete asymptotic symmetry algebra consists just
of time translations and spatial rotations, and _the expressions for the
generators are the same as for $\Lambda=0$_. This is why we have highlighted
them explicitly above.
## III Regge-Teitelboim analysis of the symmetries for $\Lambda\neq 0$
### III.1 Preservation of the asymptotic behavior of the metric
Since $\Lambda$ does not appear explicitly in the asymptotic form (1) of the
metric, the form of the asymptotic Killing vectors for $\Lambda\neq 0$ is the
same as the one given by Sachs for $\Lambda=0$ (his equations III5-7 in
(Sachs:1962zza, )), that is
$\displaystyle\xi^{u}$ $\displaystyle=T\left(u,x^{A}\right),$ (8a)
$\displaystyle\xi^{r}$
$\displaystyle=-\frac{r}{2}\left(D_{A}X^{A}+D_{A}I^{A}-U^{A}D_{A}T\right),$
(8b) $\displaystyle\xi^{A}$
$\displaystyle=X^{A}\left(u,x^{A}\right)+I^{A}\left(u,r,x^{A}\right)$ (8c)
$\displaystyle\text{with}\;\;I^{A}=-\left(D_{B}T\right)\int_{r}^{\infty}dr^{\prime}\left(\frac{e^{2\beta}}{r^{2}}g^{AB}\right).$
(8d)
The preservation of Eq. (4) under the action of the asymptotic Killing vectors
implies that $X^{A}$ must obey the following differential equation
$2D_{(A}X_{B)}-\gamma_{AB}D_{C}X^{C}=2U^{(0)}_{(A}D_{B)}T-\gamma_{AB}U^{C}_{(0)}D_{C}T.$
(9)
The preservation of the fall-off of the metric also requires that the
parameters $T$ and $X^{A}$ obey the following first order differential
equations in time
$\displaystyle\dot{T}$
$\displaystyle=\frac{1}{2}D_{A}X^{A}-\frac{3}{2}U_{(0)}^{A}D_{A}T\,,$ (10)
$\displaystyle\dot{X}^{A}$
$\displaystyle=\dot{T}U^{A}_{(0)}-U^{A}_{(0)}U^{B}_{(0)}D_{B}T-\frac{\Lambda}{3}D^{A}T\,.$
(11)
In particular, Eq. (10) is obtained from the preservation of the decay of the
$g_{ur}$ component, and Eq. (11) from the $g_{uA}$ component. Eqs. (9)-(11)
constrain the algebra to three rotations and the time translation as we will
see in the next subsection.
### III.2 Symmetry algebra
The symmetry algebra is determined from $T$ and $X^{A}$ satisfying Eqs.
(9)-(11). Eq. (9) constrains $T$ tremendously: from a generic function of
$u,\theta,\phi$ to a function of $u$ only. Eq. (10) then further requires that
$T$ is time-independent, so that $T$ can only be a constant. Using this, we
find that there are only three independent solutions for $X^{A}$ describing
exactly the three rotations on the sphere. We will now show in detail how this
comes about.
To analyze Eq. (9), it is useful to introduce $Y^{A}$
$\displaystyle Y^{A}$ $\displaystyle=X^{A}-U_{\left(0\right)}^{A}T\;,$ (12)
which explicitly separates a “frame rotation” at infinity. In terms of
$Y^{A}$, Eq. (9) becomes
$2D_{(A}Y_{B)}-\gamma_{AB}D_{C}Y^{C}=-\frac{\Lambda}{3}TC_{AB}.$ (13)
This equation has the same form as the one obeyed by the zero order shift (Eq.
(4)), except for a negative sign — which is just a matter of convention in the
definition of $Y^{A}$ — and the appearance of the factor $T$ on the right-hand
side. Decomposing $Y^{A}$ into vector spherical harmonics, we see that the
left-hand side of Eq. (13) contains no $\ell=1$ modes as these are in the
kernel of the conformal Killing operator. Therefore, the right-hand side
cannot contain any $\ell=1$ modes. Decomposing $T$ and $C_{AB}$ into spin-
weighted spherical harmonics
$\displaystyle T$ $\displaystyle=\sum_{\ell,m}T_{\ell m}(u)\;{}_{0}Y_{\ell m}$
(14) $\displaystyle C_{AB}$ $\displaystyle=\sum_{\ell,m}C^{E}_{\ell
m}(u)\left({}_{-2}Y_{\ell m}m_{A}m_{B}+{}_{2}Y_{\ell
m}\bar{m}_{A}\bar{m}_{B}\right)$ $\displaystyle\qquad-iC^{B}_{\ell
m}(u)\left({}_{-2}Y_{\ell m}m_{A}m_{B}-{}_{2}Y_{\ell
m}\bar{m}_{A}\bar{m}_{B}\right)$ (15)
where $m_{A},\bar{m}_{A}$ are complex null vectors on the two-sphere
satisfying $m^{A}\bar{m}_{A}=1$, we find that if we project onto
$\bar{m}^{A}\bar{m}^{B}$, their product can be written as
$\displaystyle T\bar{m}^{A}\bar{m}^{B}C_{AB}=\sum_{\ell,m}\mathcal{C}_{\ell
m}\;{}_{-2}Y_{\ell m}\;.$ (16)
So we need to determine what the constraints on $T_{\ell m}$ are such that
$\mathcal{C}_{\ell m}$ does not contain any $\ell=1$ modes. We find that
$\displaystyle\mathcal{C}_{\ell m}$
$\displaystyle=\sum_{\ell^{\prime},m^{\prime},\ell^{\prime\prime},m^{\prime\prime}}T_{\ell^{\prime}m^{\prime}}\left(C^{E}_{\ell^{\prime\prime}m^{\prime\prime}}-iC^{B}_{\ell^{\prime\prime}m^{\prime\prime}}\right)\times$
$\displaystyle\qquad\qquad\qquad\int
d^{2}S\;{}_{0}Y_{l^{\prime}m^{\prime}}\;{}_{-2}Y_{\ell^{\prime\prime}m^{\prime\prime}}{}_{-2}\bar{Y}_{\ell
m}$ (17)
$\displaystyle=\sum_{\ell^{\prime},m^{\prime},\ell^{\prime\prime},m^{\prime\prime}}\sqrt{\frac{(2\ell+1)(2\ell^{\prime}+1)(2\ell^{\prime\prime}+1)}{4\pi}}\times$
$\displaystyle\qquad\qquad\qquad(-1)^{m}T_{\ell^{\prime}m^{\prime}}\left(C^{E}_{\ell^{\prime\prime}m^{\prime\prime}}-iC^{B}_{\ell^{\prime\prime}m^{\prime\prime}}\right)\times$
$\displaystyle\qquad\qquad\qquad\begin{pmatrix}\ell&\ell^{\prime}&\ell^{\prime\prime}\\\
m&m^{\prime}&m^{\prime\prime}\end{pmatrix}\begin{pmatrix}\ell&\ell^{\prime}&\ell^{\prime\prime}\\\
-2&0&2\end{pmatrix}.$ (18)
where in going from Eq. (17) to (18), we used that ${}_{s}\bar{Y}_{\ell
m}=(-1)^{m+s}{}_{-s}Y_{\ell m}$ and that the integral over three spin-weighted
spherical harmonics is given by the product of two 3$j$-symbols (NIST:DLMF, ,
Eq. (34.3.22)). Spin-weighted spherical harmonics ${}_{s}Y_{\ell m}$ are not
defined for $|s|>\ell$ so $C^{E}_{\ell m}/C^{B}_{\ell m}$ does not have any
modes with $\ell=0$ or $1$. Hence, $\mathcal{C}_{\ell m}$ contains no $\ell=1$
modes only if $T_{\ell m}$ is non-zero solely for $\ell=0$, because
$C^{E}_{\ell m}/C^{B}_{\ell m}$ is generically non-zero for $\ell\geq 2$ and
the 3$j$-symbols are non-zero when
$|\ell^{\prime}-\ell^{\prime\prime}|\leq\ell\leq\ell^{\prime}+\ell^{\prime\prime}$.
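The selection rules invoked here can be spot-checked with sympy's exact 3$j$-symbol routine. This is an illustrative verification of the triangle and $|m|\leq\ell$ rules for the second symbol in Eq. (18), not part of the derivation:

```python
from sympy.physics.wigner import wigner_3j

# Spin part of Eq. (18): (ell  ell'  ell''; -2  0  2).
# Triangle rule: nonzero only for |ell' - ell''| <= ell <= ell' + ell''.
assert wigner_3j(4, 1, 2, -2, 0, 2) == 0   # ell = 4 exceeds ell' + ell'' = 3

# The -2 entry in the ell column kills every ell < 2 mode (|m| <= ell),
# consistent with spin-weight-2 quantities carrying only ell >= 2 modes.
assert wigner_3j(1, 1, 2, -2, 0, 2) == 0

# A constant T (ell' = 0) couples the ell'' modes of C_AB only to ell = ell'':
assert wigner_3j(2, 0, 2, -2, 0, 2) != 0   # diagonal coupling survives
assert wigner_3j(3, 0, 2, -2, 0, 2) == 0   # forbidden by the triangle rule
print("3j selection rules verified")
```

The last two assertions illustrate why restricting $T$ to $\ell=0$ removes all dangerous couplings: a constant $T$ simply reproduces the $\ell\geq 2$ mode content of $C_{AB}$.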
So far, we have seen that $T(u,\theta,\phi)=T_{0}(u)$ and $X^{A}$ satisfies
the conformal Killing equation. Substituting this into Eq. (10), we obtain
$\dot{T}_{0}=\frac{1}{2}D_{A}X^{A}.$ (19)
The only consistent solution is if both sides of the equation vanish
independently. Hence, we find that $T_{0}$ is $u$-independent and $X^{A}$ is
also divergence-free. Finally, from Eq. (11), we obtain that $X^{A}$ is time-
independent. Therefore, we find that $T$ and $X^{A}$ are
$T=T_{0}\qquad\text{and}\qquad
X^{A}=\epsilon^{AB}D_{B}\left(\vec{\Omega}\cdot\hat{r}\right),$ (20)
for constant $T_{0}$ and $\vec{\Omega}$. Substituting this back into the form
of the asymptotic Killing vector fields, we obtain
$\displaystyle\xi^{u}$ $\displaystyle=T_{0},$ (21a) $\displaystyle\xi^{r}$
$\displaystyle=0,$ (21b) $\displaystyle\xi^{A}$
$\displaystyle=\epsilon^{AB}D_{B}\left(\vec{\Omega}\cdot\hat{r}\right).$ (21c)
One immediately recognizes this as the $\mathbb{R}\oplus so\left(3\right)$
algebra. This result generalizes the findings in (abk1, ), where it was shown
that the asymptotic symmetry group is exactly the 4-dimensional group of time
translations and rotations when $\mathcal{I}$ has
$\mathbb{R}\times\mathbb{S}^{2}$ topology _and_ the induced metric at
$\mathcal{I}$ is conformally flat. The requirement of conformal flatness,
which severely restricted the allowed gravitational radiation by essentially
cutting the degrees of freedom of the gravitational field in half, can be
lifted.
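As a consistency check (ours), one can verify directly that the vector fields in Eq. (21c) are genuine Killing vectors of the unit two-sphere. Taking $\vec{\Omega}=\hat{z}$, so that $\vec{\Omega}\cdot\hat{r}=\cos\theta$, a short sympy computation shows that $X^{A}=\epsilon^{AB}D_{B}\cos\theta$ is exactly $\partial_{\phi}$ and satisfies $D_{(A}X_{B)}=0$:

```python
import sympy as sp

th, ph = sp.symbols('theta phi', positive=True)
x = (th, ph)
g = sp.diag(1, sp.sin(th) ** 2)        # unit two-sphere metric gamma_AB
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} of gamma_AB
def Gamma(a, b, c):
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                      - sp.diff(g[b, c], x[d])) for d in range(2))

# Levi-Civita tensor eps^{AB}; sqrt(det gamma) = sin(theta) for 0 < theta < pi
eps_up = sp.Matrix([[0, 1 / sp.sin(th)], [-1 / sp.sin(th), 0]])

# X^A = eps^{AB} D_B (Omega . rhat) with Omega = z-hat, i.e. Omega . rhat = cos(theta)
f = sp.cos(th)
Xup = [sp.simplify(sum(eps_up[a, b] * sp.diff(f, x[b]) for b in range(2)))
       for a in range(2)]
assert Xup == [0, 1]                   # exactly the rotation generator d/dphi

# Lower the index and verify the Killing equation D_(A X_B) = 0
Xdn = [sum(g[a, b] * Xup[b] for b in range(2)) for a in range(2)]
def covD(a, b):                        # D_A X_B
    return sp.diff(Xdn[b], x[a]) - sum(Gamma(c, a, b) * Xdn[c] for c in range(2))

for a in range(2):
    for b in range(2):
        assert sp.simplify(covD(a, b) + covD(b, a)) == 0
print("X^A = eps^{AB} D_B cos(theta) is a Killing vector of the unit sphere")
```

The same computation with $\vec{\Omega}$ along $\hat{x}$ or $\hat{y}$ yields the other two rotation generators.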
## IV Revisiting Regge-Teitelboim for $\Lambda>0$ and radiation at future
space-like infinity
### IV.1 Charges
The variation of the charge is obtained using the covariant approach of
Barnich and Brandt Barnich:2001jy , which — as they proved — is equivalent to
the standard Regge-Teitelboim analysis (Regge:1974zd, ).444Note that it is
also equivalent to the Wald-Zoupas method Lee:1990nz ; Wald:1999wa for an
appropriate choice of boundary terms (see e.g. Compere:2018aar ), as well as
to the one of Abbott, Deser and Tekin Abbott:1981ff ; Deser:2002rt ;
Deser:2002jk . In particular, if $h_{\mu\nu}:=\delta g_{\mu\nu}$ corresponds
to the functional variation of the spacetime metric, then the general
expression for the variation of the charge is given by
$\displaystyle\delta_{\xi}Q$ $\displaystyle=\frac{1}{16\pi
G}\oint_{\mathcal{I}}\left(d^{2}x\right)_{\mu\nu}\left[\xi^{\nu}\nabla^{\mu}h-\xi^{\nu}\nabla_{\sigma}h^{\mu\sigma}+\xi_{\sigma}\nabla^{\nu}h^{\mu\sigma}\right.$
$\displaystyle\left.\qquad+\frac{1}{2}h\nabla^{\nu}\xi^{\mu}+\frac{1}{2}h^{\nu\sigma}\left(\nabla^{\mu}\xi_{\sigma}-\nabla_{\sigma}\xi^{\mu}\right)-\left(\mu\leftrightarrow\nu\right)\right],$
(22)
where $\xi^{\mu}$ is the asymptotic Killing vector, $h:=g^{\mu\nu}h_{\mu\nu}$
and the volume form is
$\left(d^{2}x\right)_{\mu\nu}:=\frac{1}{4}\epsilon_{\mu\nu\lambda\sigma}dx^{\lambda}\wedge
dx^{\sigma}.$ (23)
Applying this to our set-up, we find
$\displaystyle\delta_{\xi}Q$ $\displaystyle=\frac{1}{16\pi
G}\oint_{\mathcal{I}}d^{2}S\,\left[T\delta\left(4M\right)+\frac{T}{2}N_{AB}^{\left(\Lambda\right)}\delta
C^{AB}\right.$
$\displaystyle\left.+X^{A}\delta\left(2N_{A}+\frac{1}{16}D_{A}\left(C_{BD}C^{BD}\right)\right)\right.$
$\displaystyle\left.-TU^{A}_{(0)}\delta\left(2N_{A}+\frac{1}{16}D_{A}\left(C_{BD}C^{BD}\right)\right)\right]\;,$
(24)
where the tensor $N_{AB}^{\left(\Lambda\right)}$ is defined by
$\displaystyle N_{AB}^{\left(\Lambda\right)}$
$\displaystyle:=\dot{C}_{AB}+\mathcal{L}_{U_{\left(0\right)}}C_{AB}-\frac{1}{2}\left(D_{C}U_{\left(0\right)}^{C}\right)C_{AB}$
$\displaystyle-\frac{\Lambda}{6}\gamma_{AB}C_{CD}C^{CD}.$ (25)
$N_{AB}^{(\Lambda)}$ generalizes the Bondi News tensor when the cosmological
constant is non-zero. This expression has a structure similar to the one
obtained in (Barnich:2011mi, ) for the asymptotically flat case, with $M$
playing the role of the “Bondi mass aspect” and $N^{A}$ that of the “angular
momentum aspect”. However, there are some differences that come from the
presence of a non-zero cosmological constant. Apart from the correction coming
from $\Lambda$ in the “News tensor” $N_{AB}^{(\Lambda)}$ in Eq. (25), there is
an additional non-integrable term proportional to $U^{A}_{(0)}$ that vanishes
in the limit when $\Lambda\rightarrow 0$.
It is worth emphasizing that the variation of the charge is finite in the
limit when $r\rightarrow\infty$, _without the need of any ad-hoc
regularization procedure_. The only potentially divergent terms were those
proportional to $r$, which after some appropriate integration by parts on the
sphere acquire the form
$\displaystyle\delta_{\xi}Q_{\text{div}}$ $\displaystyle=-\frac{r}{32\pi
G}\oint d^{2}S\Big{(}2D_{(A}X_{B)}-\gamma_{AB}D_{C}X^{C}$
$\displaystyle-2U_{(A}^{(0)}D_{B)}T+\gamma_{AB}U^{C}_{(0)}D_{C}T$
$\displaystyle-T\left[2D_{(A}U^{(0)}_{B)}-\gamma_{AB}D_{C}U^{C}_{(0)}-\frac{\Lambda}{3}C_{AB}\right]\Big{)}\delta
C^{AB},$ (26)
which, by virtue of Eqs. (4) and (9), vanishes identically.
Thus, if one assumes that $\delta T_{0}=\delta\vec{\Omega}=0$, then the
integrable part (in the functional sense) of the variation of the charge
takes the form
$\displaystyle Q^{\text{int}}[T_{0},\vec{\Omega}]$
$\displaystyle=T_{0}\;E+\vec{\Omega}\cdot\vec{J},$ (27)
where the energy $E$ and angular momentum $\vec{J}$ are
$\displaystyle E$ $\displaystyle=\frac{1}{4\pi G}\oint d^{2}S\;M,$ (28)
$\displaystyle\vec{J}$ $\displaystyle=\frac{1}{8\pi G}\oint
d^{2}S\;\hat{r}\epsilon^{AB}D_{A}N_{B}.$ (29)
Note that the term proportional to $T$ in the last line of Eq. (24) does not
contribute to the mass, because it does not contain any $\ell=0$ modes (this
can again be seen from an analysis of the $3j$-symbols and noting that
$U_{A}^{(0)}$ is only non-zero for $\ell\geq 2$). As we will show in Sec. V.3,
these expressions give the expected results for the mass and angular momentum
for the Kerr-de Sitter geometry, and allow one to extend the notion of energy
and angular momentum to the case when gravitational waves are present.
### IV.2 Fluxes
The fluxes of energy and angular momentum can be directly obtained by taking
the time derivative of Eqs. (28) and (29) in conjunction with Einstein’s
equation. In particular, Einstein’s equations yield the evolution of $M$ and
$N^{A}$, respectively. The resulting expressions are rather long but
manageable:
$\displaystyle\dot{M}$
$\displaystyle=\frac{1}{4}D_{A}D_{B}N_{(\Lambda)}^{AB}-\frac{1}{8}N_{AB}^{(\Lambda)}N_{(\Lambda)}^{AB}+\frac{\Lambda}{96}C^{AB}D^{2}C_{AB}$
$\displaystyle-\frac{\Lambda}{12}C^{AB}C_{AB}-\frac{\Lambda}{6}D_{A}N^{A}-\frac{\Lambda}{96}\left(D_{C}C_{AB}\right)\left(D^{C}C^{AB}\right)$
$\displaystyle+\frac{\Lambda^{2}}{24}C^{AB}E_{AB}-\frac{7\Lambda^{2}}{1152}\left(C^{AB}C_{AB}\right)^{2}$
$\displaystyle-
U_{(0)}^{A}D_{A}M-\frac{3}{2}MD_{A}U_{(0)}^{A}-\frac{1}{8}C^{AB}D_{A}D_{B}D_{C}U_{(0)}^{C}$
(30)
and
$\displaystyle\dot{N}^{A}$
$\displaystyle=D^{A}M+\frac{1}{4}D^{A}D^{B}D^{C}C_{BC}-\frac{1}{4}D_{B}D^{2}C^{AB}$
$\displaystyle+\frac{5}{16}C^{AB}D^{C}N_{BC}^{(\Lambda)}-\frac{3}{16}C_{BC}D^{B}N_{(\Lambda)}^{AC}-\frac{\Lambda}{2}D_{B}E^{AB}$
$\displaystyle-\frac{1}{2}N_{(\Lambda)}^{AB}D^{C}C_{BC}+\frac{1}{16}N_{(\Lambda)}^{BC}D^{A}C_{BC}+D_{B}C^{AB}$
$\displaystyle+\frac{5\Lambda}{32}C_{BD}C^{CD}D_{C}C^{AB}+\frac{7\Lambda}{48}C^{AB}C^{CD}D_{B}C_{CD}$
$\displaystyle-
U_{(0)}^{B}D_{B}N^{A}+N^{B}D_{B}U_{(0)}^{A}-2N^{A}D_{C}U_{(0)}^{C}$
$\displaystyle-\frac{1}{64}U_{(0)}^{A}\left(C_{BD}C^{BD}\right)-\frac{1}{64}\left(D^{2}U_{(0)}^{A}\right)C_{BD}C^{BD}$
$\displaystyle+\frac{1}{32}D^{A}\left(D_{C}U_{(0)}^{C}\right)\left(C_{BD}C^{BD}\right)\;.$
(31)
The energy flux is given by
$\displaystyle\frac{dE}{du}$ $\displaystyle=-\frac{1}{32\pi G}\oint
d^{2}S\left\\{N_{AB}^{(\Lambda)}N^{(\Lambda)AB}+\frac{2\Lambda}{3}C^{AB}C_{AB}\right.$
$\displaystyle-\frac{\Lambda}{6}C^{AB}D^{2}C_{AB}+\frac{7\Lambda^{2}}{144}\left(C^{AB}C_{AB}\right)^{2}-\frac{\Lambda^{2}}{3}C^{AB}E_{AB}$
$\displaystyle\left.+\left(4M+D_{A}D_{B}C^{AB}\right)\left(D_{C}U_{\left(0\right)}^{C}\right)\right\\}.$
(32)
The first term on the right-hand side has the same form as the one that
contributes to the loss of energy in the asymptotically flat case. However,
there are now also corrections coming from the presence of the cosmological
constant which are up to fourth order in the fields. These higher order terms
are characteristic of the full nonlinear theory and cannot be seen in the
linearized approximation. In Sec. V.4.1, we will show that when the higher
order terms are neglected, the total amount of energy radiated in a certain
interval of time precisely coincides with the one reported in (abk2, ;
Chrusciel:2020rlz, ; Kolanowski:2020wfg, ). An important difference with the
asymptotically flat case is that the flux of energy is not manifestly
negative. This was also observed for the case of homogeneous gravitational
perturbations on a de Sitter background in (abk2, ). Moreover, this can also
occur for Maxwell fields on a de Sitter background (abk2, ) and thus seems to
be a rather generic feature of spacetimes with $\Lambda>0$. This is likely due to
the fact that there is no global time-like Killing vector field in de Sitter
spacetime. However, as was pointed out in (abk3, ), and as we will show in
Sec. V.4, in the case of quadrupolar radiation in the linearized theory, the
flux of energy _is_ manifestly negative.
Analogously, the flux of angular momentum takes the form
$\frac{d\vec{J}}{du}=\frac{1}{8\pi G}\oint
d^{2}S\;\hat{r}\;\epsilon^{AB}D_{A}\dot{N}_{B},$ (33)
where $\dot{N}_{B}$ is given by Eq. (31). Due to the cosmological constant,
there is no angular momentum ambiguity, because the abelian supertranslations
present for $\Lambda=0$ are absent here.
The flux of energy and angular momentum in Eqs. (32)-(33) can alternatively be
obtained from the non-integrable part of the variation of the charge in Eq.
(24) following the prescription in (Barnich:2011mi, ) (see also
(Bunster:2018yjr, ; Bunster:2019mup, )). We have verified this explicitly for
the energy flux.
## V Application to special cases
In this section, we show explicitly that the fall-off conditions in Eq. (2)
accommodate a wide range of physically interesting solutions to Einstein’s
equation. First, we discuss the de Sitter spacetime itself before moving on to
two black hole solutions in the presence of a positive cosmological constant:
the non-rotating Schwarzschild-de Sitter spacetime and the rotating Kerr-de
Sitter spacetime. Next, we discuss linearized solutions to Einstein’s
equations with $\Lambda>0$ representing gravitational radiation emitted by a
compact source. Finally, we describe a simple model of gravitational radiation
with a single degree of freedom known as the Robinson-Trautman spacetime.
### V.1 de Sitter spacetime
The full de Sitter spacetime is not an example of the class of spacetimes we
have defined. This is not problematic, as the goal of this paper is to
describe radiation in the presence of $\Lambda$ in which case not the complete
de Sitter spacetime, but the Poincaré patch of de Sitter spacetime with an
additional point at $\mathcal{I}$ removed is relevant. The removal of this
additional point is natural as it represents the intersection of the future
boundary with the source generating radiation (see also (abk3, , Sec. II)). As
a result, the future boundary has topology $\mathbb{R}\times\mathbb{S}^{2}$
and is naturally coordinatized by $(u,r,\theta,\phi)$:
$ds^{2}=-\left(1-\frac{\Lambda
r^{2}}{3}\right)du^{2}-2dudr+r^{2}\left(d\theta^{2}+\sin^{2}\theta
d\phi^{2}\right)\;.$ (34)
The time translation vector field $\frac{\partial}{\partial u}$ and the three
rotational Killing vector fields are not only asymptotic symmetries, but
symmetries of the entire spacetime. Translations and inverted translations,
which are symmetries of the full de Sitter spacetime, do not leave $i^{0}$ and
$i^{+}$ invariant and are therefore not permissible (for a more extensive
discussion, see (abk1, )).
### V.2 Schwarzschild-de Sitter spacetime
The simplest prototype for describing non-dynamical isolated gravitating
systems in the presence of a cosmological constant is undoubtedly the
Schwarzschild-de Sitter spacetime. This spacetime describes a non-rotating
black hole with $\Lambda>0$. We consider the metric in Eddington-Finkelstein
coordinates $(u,r,\theta,\phi)$:
$\displaystyle ds^{2}$
$\displaystyle=-\left(1-\frac{2m}{r}-\frac{r^{2}}{l^{2}}\right)du^{2}-2dudr$
$\displaystyle\;\;+r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right)\;,$
(35)
where $\Lambda=3l^{-2}$. The coordinate ranges are
$u\in(-\infty,\infty),r\in(0,\infty),\theta\in\left[0,\pi\right)$ and
$\phi\in\left[0,2\pi\right)$. While these coordinates do not provide a global
chart of the spacetime, they suffice to cover the asymptotic region near
$\mathcal{I}$. In terms of the asymptotic expansions of the metric in Eq. (2),
this metric has $M(u,\theta,\phi)=m$ and all other coefficients zero. In
particular, there is no gravitational radiation so that $C_{AB}$ and
$U_{(0)}^{A}$ are both zero.
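As a sanity check (not in the original text), one can verify with a few lines of sympy that the metric in Eq. (35) solves the vacuum Einstein equations with cosmological constant, $R_{\mu\nu}=\Lambda g_{\mu\nu}$ with $\Lambda=3l^{-2}$:

```python
import sympy as sp

u, r, th, ph, m, l = sp.symbols('u r theta phi m l', positive=True)
x = (u, r, th, ph)
Lam = 3 / l ** 2

# Schwarzschild-de Sitter metric in outgoing Eddington-Finkelstein form, Eq. (35)
f = 1 - 2 * m / r - r ** 2 / l ** 2
g = sp.Matrix([[-f, -1, 0, 0],
               [-1, 0, 0, 0],
               [0, 0, r ** 2, 0],
               [0, 0, 0, r ** 2 * sp.sin(th) ** 2]])
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
def Gamma(a, b, c):
    return sp.Rational(1, 2) * sum(
        ginv[a, d] * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                      - sp.diff(g[b, c], x[d])) for d in range(4))

G = [[[sp.simplify(Gamma(a, b, c)) for c in range(4)] for b in range(4)]
     for a in range(4)]

# Ricci tensor R_bc = d_a Gamma^a_bc - d_c Gamma^a_ba + Gamma Gamma terms
def Ricci(b, c):
    return sp.simplify(
        sum(sp.diff(G[a][b][c], x[a]) - sp.diff(G[a][b][a], x[c])
            + sum(G[a][d][a] * G[d][b][c] - G[a][d][c] * G[d][b][a]
                  for d in range(4)) for a in range(4)))

# Vacuum Einstein equations with cosmological constant: R_mn = Lambda g_mn
for b in range(4):
    for c in range(4):
        assert sp.simplify(Ricci(b, c) - Lam * g[b, c]) == 0
print("Eq. (35) solves R_mn = Lambda g_mn")
```

Setting $m=0$ reproduces the de Sitter metric of Eq. (34), so the same check also covers Sec. V.1.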
This metric has four Killing vector fields: one Killing vector field generates
time translations and the other three describe the spherical symmetry of the
spacetime. A well-known property of (global) Killing vector fields is that
every Killing field of the physical spacetime admits an extension to the
boundary and is tangential to it. This is also the case for the above Killing
vector fields, which coincide exactly with the asymptotic symmetry vector
fields. All charges and fluxes vanish except for the mass, which is
$E_{{\rm Sch-dS}}=\frac{m}{G}\;.$ (36)
### V.3 Kerr-de Sitter spacetime
Kerr-de Sitter spacetimes are stationary, vacuum solutions to Einstein’s
equations describing rotating black holes in the presence of a positive
cosmological constant. Let us consider the Kerr de-Sitter metric in standard
Boyer-Lindquist coordinates $(T,R,\Theta,\Phi)$ (Carter:1968ks, )
$\displaystyle ds^{2}$
$\displaystyle=-2a\sin^{2}\Theta\left(\frac{2mR}{a^{2}\cos^{2}\Theta+R^{2}}+\frac{a^{2}+R^{2}}{l^{2}}\right)dTd\Phi$
$\displaystyle-\left(1-\frac{2mR}{a^{2}\cos^{2}\Theta+R^{2}}-\frac{a^{2}\sin^{2}\Theta+R^{2}}{l^{2}}\right)dT{}^{2}$
$\displaystyle+\sin^{2}\Theta\left(\frac{2a^{2}mR\sin^{2}\Theta}{a^{2}\cos^{2}\Theta+R^{2}}+\left(a^{2}+R^{2}\right)\left(1+\frac{a^{2}}{l^{2}}\right)\right)d\Phi^{2}$
$\displaystyle+\left(a^{2}\cos^{2}\Theta+R^{2}\right)\left(\frac{dR^{2}}{R^{2}-\left(a^{2}+R^{2}\right)\frac{R^{2}}{l^{2}}-2mR+a^{2}}\right.$
$\displaystyle\left.+\frac{d\Theta^{2}}{1+\frac{a^{2}\cos^{2}\Theta}{l^{2}}}\right),$
(37)
where the parameter $a$ is related to the amount of rotation of the black
hole. In the limit $a\to 0$, one recovers the Schwarzschild-de Sitter
metric in static coordinates. Note that these Boyer-Lindquist coordinates are
‘twisted’ at $\mathcal{I}$: for instance, surfaces of constant $T,R$ describe
deformed spheres (consequently, the range of $\Theta,\Phi$ is not the standard
range for coordinates on the sphere). Inspired by the coordinate
transformation used in (Henneaux:1985tv, ) to undo this twisting, we perform
the following asymptotic change of coordinates
$\displaystyle T$
$\displaystyle=u+l\,\text{arctanh}\left(\frac{r}{l}\right)-\frac{ml^{4}\left(1-\frac{a^{2}\sin^{2}\theta}{2l^{2}}\right)}{2\left(1+\frac{a^{2}\sin^{2}\theta}{l^{2}}\right)^{5/2}}\frac{1}{r^{4}}+\ldots$
$\displaystyle R$
$\displaystyle=r\sqrt{1+\frac{a^{2}\sin^{2}\theta}{l^{2}}}-\frac{\left(1+\frac{a^{2}}{l^{2}}\right)a^{2}\sin^{2}\theta}{2\left(1+\frac{a^{2}\sin^{2}\theta}{l^{2}}\right)^{3/2}}\frac{1}{r}$
$\displaystyle-\frac{ma^{2}\sin^{2}\theta}{2\left(1+\frac{a^{2}\sin^{2}\theta}{l^{2}}\right)^{2}}\frac{1}{r^{2}}+\ldots$
$\displaystyle\Theta$
$\displaystyle=\arccos\left(\frac{\cos\theta}{\sqrt{1+\frac{a^{2}\sin^{2}\theta}{l^{2}}}}\right)-\frac{a^{2}\sin\left(2\theta\right)\sqrt{1+\frac{a^{2}}{l^{2}}}}{4\left(1+\frac{a^{2}\sin^{2}\theta}{l^{2}}\right)^{2}}\frac{1}{r^{2}}$
$\displaystyle+\frac{3a^{4}\sin\left(2\theta\right)\left(1-2\sin^{2}\theta\left(1+\frac{a^{2}}{2l^{2}}\right)\right)\sqrt{1+\frac{a^{2}}{l^{2}}}}{16\left(1+\frac{a^{2}\sin^{2}\theta}{l^{2}}\right)^{4}}\frac{1}{r^{4}}\ldots$
$\displaystyle\Phi$
$\displaystyle=\frac{1}{1+\frac{a^{2}}{l^{2}}}\left(\phi+\frac{a\left(u+l\text{arctanh}\left(\frac{r}{l}\right)\right)}{l^{2}}\right)$
$\displaystyle+\frac{ma^{3}\sin^{2}\theta}{4\left(1+\frac{a^{2}}{l^{2}}\right)\left(1+\frac{a^{2}\sin^{2}\theta}{l^{2}}\right)^{5/2}}\frac{1}{r^{4}}+\ldots.$
The leading terms of the Kerr-de Sitter metric near $\mathcal{I}$ fit within
our asymptotic conditions. 555Note that the solution is not in the Bondi gauge
everywhere, but its asymptotic form to the orders needed is. Indeed
$g_{rr}=O\left(r^{-6}\right)$ and $g_{rA}=O\left(r^{-4}\right)$. The induced
metric on $u={\rm constant}$ cuts of $\mathcal{I}$ is that of the unit
two-sphere, with $\theta,\phi$ having their standard range, i.e.,
$\theta\in\left[0,\pi\right),\phi\in\left[0,2\pi\right)$. Moreover,
$U_{\left(0\right)}^{A}$ and $C_{AB}$ are both equal to zero, which is
consistent with the fact that there is no gravitational radiation in this
spacetime. The mass and angular momentum aspect are given by:
$\displaystyle M$
$\displaystyle=m\frac{\left(1-\frac{a^{2}\sin^{2}\theta}{2l^{2}}\right)}{\left(1+\frac{a^{2}\sin^{2}\theta}{l^{2}}\right)^{5/2}}\;,$
(38) $\displaystyle N^{\theta}$ $\displaystyle=0\;,$ (39) $\displaystyle
N^{\phi}$
$\displaystyle=-\frac{3am}{\left(1+\frac{a^{2}\sin^{2}\theta}{l^{2}}\right)^{5/2}}.$
(40)
We also find that $E_{AB}$ is
$\displaystyle E_{\theta\theta}$
$\displaystyle=-\frac{ma^{2}\sin^{2}\theta}{\left(1+\frac{a^{2}\sin^{2}\theta}{l^{2}}\right)^{5/2}}\;,$
(41) $\displaystyle E_{\phi\phi}$
$\displaystyle=\frac{ma^{2}\sin^{4}\theta}{\left(1+\frac{a^{2}\sin^{2}\theta}{l^{2}}\right)^{5/2}}\;,$
(42) $\displaystyle E_{\theta\phi}$ $\displaystyle=0\;,$ (43)
so that $E_{AB}$ is traceless with respect to the unit two-sphere metric, as
it should.
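The tracelessness just quoted is a one-line symbolic check (ours): contracting Eqs. (41)-(43) with the inverse unit-sphere metric $\gamma^{AB}={\rm diag}(1,1/\sin^{2}\theta)$ gives zero identically.

```python
import sympy as sp

th, m, a, l = sp.symbols('theta m a l', positive=True)
pref = (1 + a ** 2 * sp.sin(th) ** 2 / l ** 2) ** sp.Rational(5, 2)

E_tt = -m * a ** 2 * sp.sin(th) ** 2 / pref   # E_{theta theta}, Eq. (41)
E_pp = m * a ** 2 * sp.sin(th) ** 4 / pref    # E_{phi phi},     Eq. (42)

# Trace with the inverse unit-sphere metric gamma^AB = diag(1, 1/sin^2 theta);
# E_{theta phi} = 0 by Eq. (43) and does not contribute.
trace = sp.simplify(E_tt + E_pp / sp.sin(th) ** 2)
assert trace == 0
```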
The mass and the angular momentum can be directly computed from the
expressions for the charges in Eqs. (28) and (29) (which also define the
normalization of the Killing vectors here). They are given by
$E_{{\rm{Kerr-dS}}}=\frac{m}{G}\left(1+\frac{a^{2}}{l^{2}}\right)^{-2},\qquad
J_{z}=-a\;E_{{\rm{Kerr-dS}}}.$ (44)
These results coincide with the charges obtained using Hamiltonian methods by
Marolf and Kelly in (Kelly:2012zc, ).666These final expressions also agree
with the gravitational charges defined in terms of the electric part of the
Weyl tensor in (abk1, ) despite the fact that the mass and angular momentum
there refer to a differently normed Killing vector field. This is due to the
fact that in (abk1, ), the $\Theta,\Phi$ coordinates were assumed to have the
standard range on the two-sphere, which is not the case. If this is corrected,
the results here and in (abk1, ) differ exactly by the expected scaling with
the Killing vector field. Moreover, these expressions also precisely coincide
with the ones obtained for Kerr-_anti_ -de Sitter spacetimes after replacing
$l\rightarrow il$ (Henneaux:1985tv, ). Since this spacetime is stationary, the
fluxes are trivially zero, which we verified by direct computation.
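These charge values can be reproduced numerically (an illustrative check of ours, with $G=1$): inserting the mass and angular momentum aspects of Eqs. (38)-(40) into the charge integrals (28)-(29), and reducing the latter to $\theta$-integrals, gives exactly Eq. (44).

```python
import numpy as np

# Sample parameters (G = 1); the aspects below are Eqs. (38) and (40)
m, a, l = 1.0, 0.3, 1.2
k = a ** 2 / l ** 2

theta = np.linspace(0.0, np.pi, 200_001)
s2 = np.sin(theta) ** 2

def integrate(f):
    """Trapezoidal rule on the theta grid."""
    return np.sum((f[1:] + f[:-1]) / 2 * np.diff(theta))

M = m * (1 - k * s2 / 2) / (1 + k * s2) ** 2.5    # mass aspect, Eq. (38)
N_phi = -3 * a * m / (1 + k * s2) ** 2.5          # angular momentum aspect, Eq. (40)

# E = (1/4 pi) oint d^2S M  ->  (1/2) int_0^pi M sin(theta) dtheta
E = integrate(M * np.sin(theta) / 2)

# J_z = (1/8 pi) oint d^2S cos(theta) eps^{AB} D_A N_B; with N_theta = 0, an
# integration by parts reduces this (our reduction) to
# (1/4) int_0^pi sin^3(theta) N^phi dtheta
J = integrate(np.sin(theta) ** 3 * N_phi / 4)

assert abs(E - m / (1 + k) ** 2) < 1e-8           # E of Eq. (44)
assert abs(J + a * E) < 1e-8                      # J_z = -a E of Eq. (44)
print(E, J)
```

Taking $a\to 0$ collapses both integrals to the Schwarzschild-de Sitter values $E=m$ and $J_{z}=0$ of Eq. (36).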
### V.4 Linearized solutions in de Sitter spacetime
#### V.4.1 Linearized charges and fluxes
The expressions for the charges and fluxes simplify drastically in the
linearized context. Here, we will briefly comment on the linearized setting
and explicitly connect the resulting flux of energy radiated across
$\mathcal{I}$ to existing results in the literature.
Let us consider the linearized gravitational field in retarded null
coordinates $\left(u,r,x^{A}\right)$ around the de Sitter background metric
$d\bar{s}^{2}=-\left(1-\frac{\Lambda
r^{2}}{3}\right)du^{2}-2dudr+r^{2}\left(d\theta^{2}+\sin^{2}\theta\,d\phi^{2}\right).$
(45)
The spacetime metric is then written as
$g_{\mu\nu}=\bar{g}_{\mu\nu}+h_{\mu\nu}\;,$ (46)
where $h_{\mu\nu}$ is kept only up to first order. Quantities that have
dimensions of length are compared with an external fixed length scale. The
linearization expression (46) is valid everywhere, not just asymptotically.
The fall-off of the metric in the linearized theory can be directly obtained
from our asymptotic conditions in Eq. (2) by neglecting the terms that are
quadratic in the fields. The asymptotic form of the metric then reads
$\displaystyle\beta$ $\displaystyle=\mathcal{O}\left(\frac{1}{r^{5}}\right),$
(47a) $\displaystyle V$ $\displaystyle=\frac{\Lambda
r^{3}}{3}+r+\left(-D_{A}U_{\left(0\right)}^{A}\;r^{2}+2M+\ldots\right),$ (47b)
$\displaystyle U^{A}$
$\displaystyle=\left(U_{(0)}^{A}-\frac{D_{B}C^{AB}}{2r^{2}}-\frac{2N^{A}}{3r^{3}}+\ldots\right),$
(47c) $\displaystyle g_{AB}$
$\displaystyle=\gamma_{AB}+\left(\frac{C_{AB}}{r}+\frac{E_{AB}}{r^{3}}+\ldots\right),$
(47d)
where $U_{(0)}^{A}$ obeys Eq. (4). Note that $U_{A}^{(0)}$, which is of order
zero in the asymptotic expansion, in which the reference length is $r$,
becomes of first order in the linearized theory, in which the reference length
is a fixed distance. The two expansions therefore do not coincide at large
distances.
The time derivatives of $M$ and $N^{A}$ reduce to
$\displaystyle\dot{M}$
$\displaystyle=\frac{1}{2}D_{A}D_{B}N_{(\Lambda)}^{AB}-\frac{\Lambda}{6}D_{A}N^{A},$
(48) $\displaystyle\dot{N}^{A}$
$\displaystyle=D^{A}M+\frac{1}{4}D^{A}D^{B}D^{C}C_{BC}-\frac{1}{4}D_{B}D^{2}C^{AB}$
$\displaystyle\;\;-\frac{\Lambda}{2}D_{B}E^{AB}+D_{B}C^{AB},$ (49)
while the linearized version of the News tensor now takes the form
$N_{AB}^{(\Lambda)}=\dot{C}_{AB}.$ (50)
In the linearized limit, the symmetry algebra of the de Sitter background
metric can be naturally used (see Sec. V.1). The radiation rates in the linear
theory can be obtained from the corresponding expressions in the non-linear
theory by dropping the cubic and the quartic terms in the fields. In the case
of the energy, we have
$\displaystyle\frac{dE}{du}$ $\displaystyle=-\frac{1}{32\pi G}\oint
d^{2}S\left\\{N_{AB}^{(\Lambda)}N^{(\Lambda)AB}+\frac{2\Lambda}{3}C^{AB}C_{AB}\right.$
$\displaystyle-\frac{\Lambda}{6}C^{AB}D^{2}C_{AB}-\frac{\Lambda^{2}}{3}C^{AB}E_{AB}$
$\displaystyle\left.+\left(4M+D_{A}D_{B}C^{AB}\right)\left(D_{C}U_{\left(0\right)}^{C}\right)\right\\}.$
(51)
Remarkably, if one calculates the total flux of energy radiated in a finite
interval of time $\Delta u$ (assuming appropriate fall-off near the edges of
$\Delta u$), one obtains perfect agreement with the results obtained
independently by Chruściel, Hoque, and Smolka in (Chrusciel:2020rlz, ), and by
Kolanowski and Lewandowski in (Kolanowski:2020wfg, ). It also coincides with
the expression found by Ashtekar, Bonga and Kesavan in a different set of
coordinates (abk2, ). This can be seen as follows: If we re-express $C_{AB}$
in the terms with explicit $\Lambda$ dependence by using Eq. (4), we obtain
$\displaystyle\frac{dE}{du}$ $\displaystyle=-\frac{1}{32\pi G}\oint
d^{2}S\left(\dot{C}_{AB}\dot{C}^{AB}-2\Lambda
D^{B}U_{\left(0\right)}^{A}E_{AB}\right.$
$\displaystyle+4D^{B}U_{\left(0\right)}^{A}C_{AB}-D^{B}U_{\left(0\right)}^{A}D^{2}C_{AB}$
$\displaystyle\left.+\left(4M+D_{A}D_{B}C^{AB}\right)\left(D_{C}U_{\left(0\right)}^{C}\right)\right).$
After an integration by parts on the sphere, one obtains
$\displaystyle\frac{dE}{du}$ $\displaystyle=-\frac{1}{32\pi G}\oint
d^{2}S\left(\dot{C}_{AB}\dot{C}^{AB}+U_{\left(0\right)}^{A}\left[2\Lambda
D^{B}E_{AB}\right.\right.$
$\displaystyle\left.\left.-4D^{B}C_{AB}+D^{B}D^{2}C_{AB}-D_{A}\left(4M+D_{B}D_{C}C^{BC}\right)\right]\right).$
Using the linearized equation of motion for $N^{A}$ in (49), we can write this
compactly as
$\frac{dE}{du}=\frac{1}{8\pi G}\oint
d^{2}S\left(-\frac{1}{4}\dot{C}_{AB}\dot{C}^{AB}+U_{\left(0\right)}^{A}\dot{N}_{A}\right)\,.$
Thus, the total amount of energy radiated in the interval of time $\Delta u$
is given by
$\left.E\right|_{\Delta u}=\frac{1}{8\pi G}\int_{\Delta u}du\oint
d^{2}S\left(-\frac{1}{4}\dot{C}_{AB}\dot{C}^{AB}+U_{\left(0\right)}^{A}\dot{N}_{A}\right).$
Assuming that there is no flux of radiation outside this interval of time, we
can integrate by parts in the null time and discard the corresponding boundary
terms, so that we can write:
$\left.E\right|_{\Delta u}=-\frac{1}{8\pi G}\int_{\Delta u}du\oint
d^{2}S\left(\frac{1}{4}\dot{C}_{AB}\dot{C}^{AB}+\dot{U}_{\left(0\right)}^{A}N_{A}\right).$
Now, by virtue of our asymptotic conditions,
$h_{uA}=-\frac{1}{r^{2}}U_{A}.$
Therefore, if we consider an asymptotic expansion of the form
$h_{uA}=h_{uA}^{\left(2\right)}r^{2}+h_{uA}^{\left(1\right)}r+h_{uA}^{\left(0\right)}+\frac{h_{uA}^{\left(-1\right)}}{r}+\dots,$
we obtain
$h_{uA}^{\left(2\right)}=-U_{\left(0\right)A}\qquad,\qquad
h_{uA}^{\left(-1\right)}=\frac{2}{3}N_{A}.$
Thus, in terms of these variables, the total amount of energy radiated in the
interval of time $\Delta u$ acquires the following form
$\left.E\right|_{\Delta u}=-\frac{1}{32\pi G}\int_{\Delta u}du\oint
d^{2}S\left(\dot{C}_{AB}\dot{C}^{AB}-6\dot{h}_{uA}^{\left(2\right)}h_{u}^{\left(-1\right)A}\right).$
This expression precisely coincides with the ones found in (Chrusciel:2020rlz,
) and (Kolanowski:2020wfg, ).
#### V.4.2 Explicit solutions for quadrupolar modes
Using the set-up in the previous subsection, we will now study explicit
solutions to the linearized Einstein’s equation. The strategy is to solve for
the homogeneous solution of Einstein’s equation, which corresponds in a
partial wave expansion to the gravitational field away from the source
generating the gravitational waves (which is assumed to be bounded). In
particular, the homogeneous solutions corresponding to a fixed $\ell$ in the
spherical harmonic expansion should necessarily be generated by a source with
multipole moment $\ell$. Even though the homogeneous solution is only valid
outside the source, in principle we could match this solution with the “inner”
solution using matched asymptotic expansions (Burke:1969zz, ). The matching
with an interior solution is beyond the scope of this paper, which focuses on
the solution far away from the source.
Since the background spacetime is spherically symmetric, it is convenient to
use techniques similar to those Regge and Wheeler used when solving for linearized
perturbations off Schwarzschild (although we will not implement the Regge-
Wheeler gauge) (Regge:1957td, ). In particular, we will use separation of
variables and for the angular part of the perturbations, we introduce scalar,
vector and tensor spherical harmonics. Following the notation in (Martel-
Poisson, ), we note that the scalar harmonics are the usual spherical-harmonic
functions $Y_{\ell m}(x^{A})$ satisfying the eigenvalue equation $D^{2}Y_{\ell
m}=-\ell\left(\ell+1\right)Y_{\ell m}$. There are two types of vector
harmonics: even-parity $Y_{A}^{\ell m}$ (also known as electric) and odd-
parity $X_{A}^{\ell m}$ (also known as magnetic), which are related to the
scalar harmonics through the covariant derivative operator compatible with
$\gamma_{AB}$
$\displaystyle Y_{A}^{\ell m}$ $\displaystyle:=D_{A}Y_{\ell m}$ (52)
$\displaystyle X_{A}^{\ell m}$
$\displaystyle:=-\epsilon_{A}^{\;\;B}D_{B}Y_{\ell m}\;.$ (53)
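These properties are easy to check symbolically. The following sympy sketch (ours, not part of the paper) verifies the eigenvalue equation $D^{2}Y_{\ell m}=-\ell\left(\ell+1\right)Y_{\ell m}$ for the $\ell=2$ modes and the even/odd orthogonality $\int d^{2}S\;\bar{Y}_{\ell m}^{A}X_{A}^{\ell m}=0$ for a sample mode:

```python
# Consistency check (a sketch, not from the paper): on the unit sphere with
# metric gamma_AB = diag(1, sin^2 theta), the scalar harmonics satisfy
# D^2 Y_{lm} = -l(l+1) Y_{lm}, and the even/odd vector harmonics
# Y_A = D_A Y, X_A = -eps_A^B D_B Y are orthogonal when integrated over S^2.
import sympy as sp
from sympy import Ynm, sin, conjugate, simplify, integrate, pi

th, ph = sp.symbols('theta phi', real=True)
s = sin(th)

def lap(f):
    # Laplace-Beltrami operator of the unit round sphere
    return sp.diff(s*sp.diff(f, th), th)/s + sp.diff(f, ph, 2)/s**2

for l, m in [(2, 0), (2, 1), (2, 2)]:
    Y = Ynm(l, m, th, ph).expand(func=True)
    assert simplify(lap(Y) + l*(l+1)*Y) == 0

# even/odd vector harmonics for the sample mode (l, m) = (2, 1);
# eps_theta^phi = 1/sin(th), eps_phi^theta = -sin(th)
Y = Ynm(2, 1, th, ph).expand(func=True)
Y_th, Y_ph = sp.diff(Y, th), sp.diff(Y, ph)        # Y_A = D_A Y
X_th, X_ph = -sp.diff(Y, ph)/s, s*sp.diff(Y, th)   # X_A = -eps_A^B D_B Y

# orthogonality: integral over S^2 of conj(Y)^A X_A vanishes
integrand = simplify((conjugate(Y_th)*X_th + conjugate(Y_ph)*X_ph/s**2)*s)
assert integrate(integrand, (th, 0, pi), (ph, 0, 2*pi)) == 0
```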
The even- and odd-parity harmonics are orthogonal in the sense that $\int
d^{2}S\;\bar{Y}_{\ell m}^{A}X_{A}^{\ell^{\prime}m^{\prime}}=0$. The tensor
harmonics also come in the same two types:
$\displaystyle Y_{AB}^{\ell m}$ $\displaystyle:=D_{(A}Y_{B)}^{\ell
m}-\frac{1}{2}\gamma_{AB}D_{C}Y_{\ell m}^{C}$ (54) $\displaystyle X_{AB}^{\ell
m}$ $\displaystyle:=-\epsilon_{(A}^{\;\;\;\;C}D_{B)}Y_{C}^{\ell m}\;.$ (55)
These operators are traceless, i.e., $\gamma^{AB}Y_{AB}^{\ell
m}=0=\gamma^{AB}X_{AB}^{\ell m}$ and orthogonal in the same sense as the
vector harmonics are. The separation of variables takes the form
$\displaystyle h_{uu}$ $\displaystyle=\sum_{\ell,m}f_{\ell
m}\left(u,r\right)Y_{\ell m},$ (56a) $\displaystyle h_{ur}$
$\displaystyle=\sum_{\ell,m}h_{\ell m}\left(u,r\right)Y_{\ell m},$ (56b)
$\displaystyle h_{uA}$ $\displaystyle=\sum_{\ell,m}F_{1}^{\ell
m}\left(u,r\right)Y_{A}^{\ell m}+G_{1}^{\ell m}\left(u,r\right)X_{A}^{\ell
m},$ (56c) $\displaystyle h_{AB}$ $\displaystyle=\sum_{\ell,m}F_{2}^{\ell
m}\left(u,r\right)Y_{AB}^{\ell m}+G_{2}^{\ell m}\left(u,r\right)X_{AB}^{\ell
m},$ (56d)
where the sum here is restricted to $\ell\geq 2$ and $m$ ranges from $-\ell$
to $\ell$. We neglect the $\ell=0$ and $\ell=1$ multipoles, which are non-
radiative and require special treatment. We have also set
$h_{rr}=h_{rA}=\gamma^{AB}h_{AB}=0$ to ensure that the linearized metric
satisfies the required fall-off in Eq. (47). This gauge choice can always be
made (in fact, there is some residual gauge freedom left that we will use in
the analysis below).
The even- and odd-parity modes remain decoupled in the linearized Einstein’s
equation, as do all the $\ell,m$ modes in the spherical decomposition. We
will restrict ourselves in this section to the $\ell=2$ modes; the structure
of the solution is very similar for higher $\ell$ modes and we will briefly
comment on the form of the general solution at the end. Solving for the
simpler, odd-parity sector first, we find the following retarded solution:
$\displaystyle G_{1}^{\ell=2}\left(u,r\right)$
$\displaystyle=\sum_{m=-2}^{2}\left[\frac{1}{2}\dot{C}_{2}^{m}\frac{r^{2}}{l^{4}}-\left(l^{-2}\dot{C}_{1}^{m}-\dddot{C}_{1}^{m}\right)+\frac{2}{r}\ddot{C}_{1}^{m}\right.$
$\displaystyle\;\;\left.+\frac{3}{2r^{2}}\dot{C}_{1}^{m}\right],$ (57)
$\displaystyle G_{2}^{\ell=2}\left(u,r\right)$
$\displaystyle=\sum_{m=-2}^{2}\left[\left(C_{2}^{m}+C_{1}^{m}-\ddot{C}_{1}^{m}l^{2}\right)\frac{r^{2}}{l^{4}}\right.$
$\displaystyle\left.+r\left(l^{-2}\dot{C}_{1}^{m}-\dddot{C}_{1}^{m}\right)+\frac{1}{r}\dot{C}_{1}^{m}\right].$
(58)
where $C_{1}^{m},C_{2}^{m}$ have dimensions of length squared. The term proportional
to $r^{2}$ in $G_{2}^{\ell=2,m}$ spoils the fall-off behavior of the angular
part of the metric in Eq. (2). This is, however, easily remedied by realizing
that the solution is not completely gauge fixed and with the residual gauge
freedom this part of the metric can be gauged away. With an appropriate gauge
choice, we set
$C_{2}^{m}\equiv\ddot{C}_{1}^{m}l^{2}-C_{1}^{m}.$ (59)
(By the Stewart-Walker lemma, the linearized Weyl tensor is gauge invariant,
and a straightforward computation shows that it is independent of
$C_{2}^{m}$. The $C_{2}^{m}$ solution is therefore pure gauge and contains no
physical degrees of freedom, consistent with the gauge choice made in Eq. (59).)
This gauge choice is further preserved by the residual gauge freedom generated
by $\chi^{A}=\epsilon^{AB}D_{B}\left(\vec{\Omega}\cdot\hat{r}\right)$. With
this gauge choice, and introducing $B_{m}=\dot{C}_{1}^{m}$, we finally obtain
the following odd-parity solutions for the quadrupolar modes with $\ell=2$
$\displaystyle h_{uu}^{\text{odd}}$ $\displaystyle=0$ (60a) $\displaystyle
h_{ur}^{\text{odd}}$ $\displaystyle=0$ (60b) $\displaystyle
h_{uA}^{\text{odd}}$
$\displaystyle=\sum_{m=-2}^{2}\left[\frac{1}{2}\left(\ddot{B}_{m}-l^{-2}B_{m}\right)\frac{r^{2}}{l^{2}}+\left(\ddot{B}_{m}-l^{-2}B_{m}\right)\right.$
$\displaystyle\left.+\frac{2}{r}\dot{B}_{m}+\frac{3}{2r^{2}}B_{m}\right]X_{A}^{2m}$
(60c) $\displaystyle h_{AB}^{\text{odd}}$
$\displaystyle=\sum_{m=-2}^{2}\left[r\left(\ddot{B}_{m}-l^{-2}B_{m}\right)-\frac{1}{r}B_{m}\right]X_{AB}^{2m}$
(60d)
where $B_{m}=B_{m}\left(u\right)$ is an arbitrary function of the retarded
time $u$ with dimensions of length. Note that the leading order of the angular
metric is independent of the wave, but that $C_{AB}\neq 0$ and
$U_{(0)}^{A}\neq 0$ (with the constraint relating these metric coefficients in
Eq. (4) satisfied).
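As a small consistency check (our own sketch, not part of the paper), one can verify with sympy that the $l\to\infty$ limit of the odd-parity coefficient functions in Eqs. (60c)-(60d) reproduces the $X_{A}^{2m}$ and $X_{AB}^{2m}$ parts of the flat limit in Eqs. (66c)-(66d):

```python
# Sketch (not from the paper): the l -> infinity limit of the odd-parity
# quadrupole solution, Eqs. (60c)-(60d), should reproduce the X_A^{2m} and
# X_AB^{2m} parts of the flat-spacetime limit, Eqs. (66c)-(66d).
# We write lam = 1/l^2, so the flat limit is simply lam -> 0.
import sympy as sp

u, r = sp.symbols('u r', positive=True)
lam = sp.symbols('lambda', nonnegative=True)  # lam = 1/l^2
B = sp.Function('B')(u)
B1, B2 = B.diff(u), B.diff(u, 2)

# coefficient of X_A^{2m} in h_uA^odd, Eq. (60c), with 1/l^2 -> lam
huA = (sp.Rational(1, 2)*(B2 - lam*B)*lam*r**2 + (B2 - lam*B)
       + 2*B1/r + 3*B/(2*r**2))
# coefficient of X_AB^{2m} in h_AB^odd, Eq. (60d)
hAB = r*(B2 - lam*B) - B/r

# flat limits, cf. Eqs. (66c)-(66d)
assert huA.subs(lam, 0) == B2 + 2*B1/r + 3*B/(2*r**2)
assert hAB.subs(lam, 0) == sp.expand((B2/r - B/r**3)*r**2)
```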
The solution for general $\ell$ is more complicated, but has these general
features:
$\displaystyle G_{1}^{\ell}(u,r)$
$\displaystyle=\sum_{m=-2}^{2}\left[\frac{1}{2}\dot{C}_{2}^{\ell
m}\frac{r^{2}}{l^{4}}+\sum_{i=0}^{\ell}a_{i}^{(\ell)}(r)\;\overset{(i)}{C_{i}^{\ell
m}}\right]$ (61) $\displaystyle G_{2}^{\ell}(u,r)$
$\displaystyle=\sum_{m=-2}^{2}\left[C_{2}^{\ell
m}\frac{r^{2}}{l^{4}}+\sum_{i=0}^{\ell}\left\\{\begin{array}[]{ll}b_{i}^{(\ell)}(r)\;\overset{(i)}{C_{i}^{\ell
m}}&\text{if }\ell\text{ even}\\\
b_{i}^{(\ell)}(r)\;\overset{(i+1)}{C_{i}^{\ell m}}&\text{if }\ell\text{
odd}\end{array}\right.\right]$ (64)
with the $C$-coefficients depending on $u$ only; the label $(i)$ on top of
these coefficients indicates the $i$-th derivative with respect to $u$, and
$a_{i}^{(\ell)},b_{i}^{(\ell)}$ are polynomials in $r$ (and its inverse
powers) with the highest power being $r^{2}$. Note that the term proportional
to $C_{2}$ is in fact independent of $\ell$ and can always be gauged away. As
a result, even though generically $G_{2}$ contains terms proportional to
$r^{2}$ which could spoil the desired fall-off, these terms can always be set
to zero by a clever gauge choice for $C_{2}$ — similar to the case with
$\ell=2$. Hence, linearized solutions with odd-parity satisfy the desired
fall-off conditions for any $\ell\geq 2$.
The analysis for the even-parity sector mimics that of the odd-parity sector,
but is more involved as more terms are non-zero. Nonetheless, also in this
case one can gauge fix the solution to obtain a linearized solution that
satisfies the fall-off conditions prescribed in Eq. (47). Specifically, the
retarded $\ell=2$ even-parity solution for $h_{\mu\nu}$ takes the form
$\displaystyle h_{uu}^{\text{even}}$
$\displaystyle=\sum_{m=-2}^{2}\left[3\left(\ddot{A}_{m}-4l^{-2}A_{m}\right)\frac{r}{l^{2}}\right.$
$\displaystyle\left.+6\left(\ddot{A}_{m}-l^{-2}A_{m}\right)\frac{1}{r}+\frac{6}{r^{2}}\dot{A}_{m}+\frac{3}{r^{3}}A_{m}\right]Y_{2m},$
(65a) $\displaystyle h_{ur}^{\text{even}}$ $\displaystyle=0,$ (65b)
$\displaystyle h_{uA}^{\text{even}}$
$\displaystyle=\sum_{m=-2}^{2}\left[\left(2l^{-2}A_{m}-\frac{1}{2}\ddot{A}_{m}\right)\frac{r^{2}}{l^{2}}+\left(4l^{-2}A_{m}-\ddot{A}_{m}\right)\right.$
$\displaystyle\left.+\frac{2}{r}\dot{A}_{m}+\frac{3}{2r^{2}}A_{m}\right]Y_{A}^{2m},$
(65c) $\displaystyle h_{AB}^{\text{even}}$
$\displaystyle=\sum_{m=-2}^{2}\left[r\left(\ddot{A}_{m}-4l^{-2}A_{m}\right)+\frac{1}{r}A_{m}\right]Y_{AB}^{2m}$
(65d)
where $A_{m}=A_{m}(u)$ is an arbitrary function of the retarded time $u$ with
dimensions of length. Also, similar to the odd-parity sector, we have set
$h_{rr}=h_{rA}=\gamma^{AB}h_{AB}=0$. Note that there is backreaction on the
“background” metric as $r\to\infty$ through the leading term of $h_{uA}$, that
is, $U_{(0)}^{A}\neq 0$. The backreaction onto the leading order part is
unique to $\Lambda\neq 0$. In the limit $l\to\infty$, this backreaction
vanishes. This is immediately clear from the limit of the even- and odd-parity
solutions:
$\displaystyle\lim_{l\to\infty}h_{uu}$
$\displaystyle=\sum_{m=-2}^{2}\left[\frac{6\ddot{A}_{m}}{r}+\frac{6\dot{A}_{m}}{r^{2}}+\frac{3A_{m}}{r^{3}}\right]Y_{2m},$
(66a) $\displaystyle\lim_{l\to\infty}h_{ur}$ $\displaystyle=0,$ (66b)
$\displaystyle\lim_{l\to\infty}h_{uA}$
$\displaystyle=\sum_{m=-2}^{2}\left(\left[-\ddot{A}_{m}+\frac{2\dot{A}_{m}}{r}+\frac{3A_{m}}{2r^{2}}\right]Y_{A}^{2m}\right.$
$\displaystyle\left.+\left[\ddot{B}_{m}+\frac{2\dot{B}_{m}}{r}+\frac{3B_{m}}{2r^{2}}\right]X_{A}^{2m}\right),$
(66c) $\displaystyle\lim_{l\to\infty}h_{AB}$
$\displaystyle=\sum_{m=-2}^{2}\left(\left[\frac{\ddot{A}_{m}}{r}+\frac{A_{m}}{r^{3}}\right]r^{2}Y_{AB}^{2m}\right.$
$\displaystyle\left.+\left[\frac{\ddot{B}_{m}}{r}-\frac{B_{m}}{r^{3}}\right]r^{2}X_{AB}^{2m}\right)\;,$
(66d)
where $A_{m}$ and $B_{m}$ reduce to the standard quadrupole moments on flat
spacetime.
Connecting these results with the Bondi-Sachs expansions, we find that the
linear part of the metric coefficients is given by
$\displaystyle M$
$\displaystyle=\sum_{m=-2}^{2}3\left(\ddot{A}_{m}-l^{-2}A_{m}\right)Y^{2m}$
(67a) $\displaystyle U_{A}^{(0)}$
$\displaystyle=\frac{1}{l^{2}}\sum_{m=-2}^{2}\left(2l^{-2}A_{m}-\frac{1}{2}\ddot{A}_{m}\right)Y_{A}^{2m}$
$\displaystyle+\frac{1}{2}\left(\ddot{B}_{m}-l^{-2}B_{m}\right)X_{A}^{2m}$
(67b) $\displaystyle N_{A}$
$\displaystyle=\sum_{m=-2}^{2}-3\dot{A}_{m}\;Y_{A}^{2m}+3\dot{B}_{m}X_{A}^{2m}$
(67c) $\displaystyle C_{AB}$
$\displaystyle=\sum_{m=-2}^{2}\left(\ddot{A}_{m}-4l^{-2}A_{m}\right)Y_{AB}^{2m}+\left(\ddot{B}_{m}-l^{-2}B_{m}\right)X_{AB}^{2m}$
(67d) $\displaystyle E_{AB}$
$\displaystyle=\sum_{m=-2}^{2}A_{m}\;Y_{AB}^{2m}-B_{m}\;X_{AB}^{2m}$ (67e)
and all other coefficients vanish or are determined by lower order terms.
The radiation rate at the linearized level in Eq. (51) reduces after some
further simplifications to
$\displaystyle\frac{dE}{du}$ $\displaystyle=-\frac{3}{8\pi
G}\sum_{m=-2}^{2}\left[\left(\dddot{A}_{m}-4l^{-2}\dot{A}_{m}\right)^{2}\right.$
$\displaystyle\left.-3l^{-2}\ddot{A}_{m}\left(\ddot{A}_{m}-4l^{-2}A_{m}\right)+\left(\dddot{B}_{m}-l^{-2}\dot{B}_{m}\right)^{2}\right.$
$\displaystyle\left.+3l^{-2}\ddot{B}_{m}\left(\ddot{B}_{m}-l^{-2}B_{m}\right)\right]\;.$
(68)
If we consider the total energy radiated during some large time interval
$\Delta u$, assuming that the system does not radiate in the far past and far
future so that the boundary terms in time can be dropped, then
$\displaystyle E_{\Delta u}$ $\displaystyle=-\frac{3}{8\pi
G}\int_{-\infty}^{\infty}du\sum_{m=-2}^{2}\left[\left(\dddot{A}_{m}-\frac{\dot{A}_{m}}{l^{2}}\right)\right.$
(69)
$\displaystyle\left.\left(\dddot{A}_{m}-\frac{4\dot{A}_{m}}{l^{2}}\right)+\left(\dddot{B}_{m}-\frac{\dot{B}_{m}}{l^{2}}\right)\left(\dddot{B}_{m}-\frac{4\dot{B}_{m}}{l^{2}}\right)\right].$
In particular, after integration by parts, we have
$\displaystyle E_{\Delta u}$ $\displaystyle=-\frac{3}{8\pi
G}\int_{-\infty}^{\infty}du\sum_{m=-2}^{2}\left[\left(\dddot{A}_{m}\right)^{2}+\frac{5\ddot{A}_{m}^{2}}{l^{2}}+\frac{4\dot{A}_{m}^{2}}{l^{4}}\right.$
$\displaystyle\left.+\left(\dddot{B}_{m}\right)^{2}+\frac{5\ddot{B}_{m}^{2}}{l^{2}}+\frac{4\dot{B}_{m}^{2}}{l^{4}}\right].$
(70)
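As a consistency check (our own sketch, not from the paper), one can verify with sympy that the $A$-sector integrands of Eqs. (68), (69), and (70) differ only by total $u$-derivatives, so the integrated energies agree once boundary terms are dropped; the $B$-sector works analogously:

```python
# Sketch (not from the paper): the A-sector integrands of Eqs. (68), (69)
# and (70) differ only by exact u-derivatives, which integrate to boundary
# terms that vanish for a system that does not radiate at early/late times.
import sympy as sp

u, l = sp.symbols('u l', positive=True)
A = sp.Function('A')(u)
A1, A2, A3 = A.diff(u), A.diff(u, 2), A.diff(u, 3)

I68 = (A3 - 4*A1/l**2)**2 - 3*A2*(A2 - 4*A/l**2)/l**2   # from Eq. (68)
I69 = (A3 - A1/l**2)*(A3 - 4*A1/l**2)                   # from Eq. (69)
I70 = A3**2 + 5*A2**2/l**2 + 4*A1**2/l**4               # from Eq. (70)

# each difference is a total u-derivative
assert sp.expand(I68 - I69 - sp.diff(-3*A2*A1/l**2 + 12*A*A1/l**4, u)) == 0
assert sp.expand(I69 - I70 - sp.diff(-5*A2*A1/l**2, u)) == 0
```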
This flux is manifestly negative. Therefore, the total energy always decreases
for a source characterized by a quadrupole. The flat spacetime limit yields
the expected result for a quadrupolar source $A,B$
$\lim_{l\to\infty}E_{\Delta u}=-\frac{3}{8\pi
G}\int_{-\infty}^{\infty}du\sum_{m=-2}^{2}\left[\left(\dddot{A}_{m}\right)^{2}+\left(\dddot{B}_{m}\right)^{2}\right].$
(71)
Note that for $\Lambda<0$, i.e., $l\to il$, the energy flux is non-zero, so
that the boundary is not reflective (as is typically imposed). In fact, the
energy flux can have an arbitrary sign depending on the values of $A$, its
time derivatives, and $l$.
### V.5 Robinson-Trautman spacetime
The Robinson-Trautman spacetime is an exact solution of Einstein equations
that describes the backreaction of a non-linear gravitational wave on a
Schwarzschild spacetime. The Robinson-Trautman solution is dynamical: it
models gravitational radiation expanding from a radiating object. Since
ultimately, we are interested in describing gravitational radiation emitted by
compact sources in the presence of a cosmological constant, this example is of
particular interest for our analysis. The original Robinson-Trautman solution
contained no cosmological constant (Robinson:1962zz, ), but it was soon
realized that the solution easily accommodates a non-zero cosmological
constant. This class of spacetimes is the most general radiative vacuum
solution admitting a geodesic, shear-free, and twist-free null congruence of
diverging rays. It has been shown that, starting with arbitrary smooth initial
data at some retarded time $u=u_{0}$, the cosmological Robinson-Trautman
solutions converge exponentially fast to a Schwarzschild-de Sitter solution at
large retarded times ($u\to\infty$). Thus, these solutions also belong to the
class of solutions discussed in this paper. In this section, we will show this
explicitly by providing the form of this solution in Bondi-Sachs like
coordinates.
The line element of the Robinson-Trautman solution with a positive
cosmological constant is given by
$\displaystyle ds^{2}=$
$\displaystyle-2H\left(u,r,\theta,\phi\right)du^{2}-2dudr$
$\displaystyle+\frac{r^{2}}{P^{2}\left(u,\theta,\phi\right)}\left(d\theta^{2}+\sin^{2}\theta
d\phi^{2}\right),$ (72)
with
$2H\left(u,r,\theta,\phi\right)=-\frac{r^{2}}{\ell^{2}}-\frac{2r\dot{P}}{P}+\frac{1}{2}\mathcal{R}_{h}-\frac{2m\left(u\right)}{r}.$
Here $P\left(u,\theta,\phi\right)$ is an arbitrary function of the retarded
time and the angles, and contains the information of the gravitational wave.
According to Einstein’s equations, the following equation governs the time
evolution of $m\left(u\right)$:
$\dot{m}=3m\frac{\dot{P}}{P}-\frac{1}{8}\Delta_{h}\mathcal{R}_{h}.$ (73)
The Laplacian $\Delta_{h}$ is defined with respect to the metric
$h_{AB}=P^{-2}\left(u,\theta,\phi\right)\left(d\theta^{2}+\sin^{2}\theta
d\phi^{2}\right)$. In the particular case when $P=1$ (no radiation), the
Schwarzschild de Sitter solution is recovered.
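For orientation, we note explicitly that for $P=1$ one has $\dot{P}=0$ and $\mathcal{R}_{h}=2$ (the scalar curvature of the unit two-sphere), so that
$2H=1-\frac{2m\left(u\right)}{r}-\frac{r^{2}}{\ell^{2}},$
which is precisely the Schwarzschild-de Sitter lapse, while Eq. (73) reduces to $\dot{m}=0$.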
The Robinson-Trautman solution, as written in Eq. (72), does not fit
immediately within the asymptotic conditions in Eqs. (2). The reason is the
presence of the function $P^{-2}$ appearing in front of the metric of the
2-sphere. To accommodate the solution, one must perform an appropriate change
of coordinates, whose implementation is in general technically very hard.
However, a simplified analysis can be achieved by considering the Robinson-
Trautman metric with axial symmetry. In addition, for clarity, in this section
we will only consider a linearized version of the solution. The non-linear
analysis is discussed in the Appendix.
Assuming for simplicity axial symmetry, the linearized Robinson-Trautman
solution expanded around a Schwarzschild-de Sitter background is obtained by
expressing the function $P=P\left(u,\theta\right)$ as follows:
$P=1+\epsilon p\left(u,\theta\right).$
Here $\epsilon$ is a small parameter that controls the linearized expansion.
In this approximation, the leading order of Eq. (73) becomes
$\displaystyle\dot{m}=$
$\displaystyle\frac{1}{4}\epsilon\left[12m\dot{p}+p^{(4)}-\cot^{2}\theta
p^{\prime\prime}+2\cot\theta p^{\prime\prime\prime}\right.$
$\displaystyle\left.+\cot\theta\left(\csc^{2}\theta+2\right)p^{\prime}\right]+O\left(\epsilon^{2}\right).$
(74)
Here the prime denotes derivatives with respect to $\theta$. In order to
accommodate the solution within our asymptotic conditions, one can implement
the following change of coordinates to linear order in $\epsilon$
$\displaystyle u$ $\displaystyle\rightarrow u-\epsilon
f\left(\mathit{u},\theta\right)+O\left(r^{-3}\right),$ $\displaystyle r$
$\displaystyle\rightarrow r\left(1+\epsilon
p\left(\mathit{u},\theta\right)\right)-\frac{1}{2}\epsilon
D^{2}f\left(\mathit{u},\theta\right)+O\left(r^{-2}\right),$
$\displaystyle\theta$
$\displaystyle\rightarrow\theta+\epsilon\frac{f^{\prime}\left(\mathit{u},\theta\right)}{r}+O\left(r^{-3}\right),$
$\displaystyle\phi$ $\displaystyle\rightarrow\phi,$
where
$p=\dot{f}.$ (75)
Thus, one finds
$\displaystyle M=m\left[1-\epsilon\left(3\dot{f}+f\dot{m}\right)\right],$
(76a) $\displaystyle
C_{\theta\theta}=-\csc^{2}\theta\;C_{\phi\phi},\;\;C_{\phi\phi}=\epsilon\left(f^{\prime\prime}-\cot\theta
f^{\prime}\right),\;\;C_{\theta\phi}=0,$ (76b) $\displaystyle
U_{\left(0\right)}^{\theta}=\frac{\epsilon}{\ell^{2}}f^{\prime},\;\;\;\;U_{\left(0\right)}^{\phi}=0,$
(76c) $\displaystyle N^{\theta}=-3\epsilon mf^{\prime},\;\;\;\;N^{\phi}=0,$
(76d) $\displaystyle E_{AB}=0,$ (76e)
In addition, at linear order the News tensor in Eq. (25) takes the simple form
$N_{AB}^{(\Lambda)}=\dot{C}_{AB}$.
To obtain the flux of energy, one can substitute (76) into Eq. (32),
retaining only the terms up to order $\epsilon^{2}$. After some integrations
by parts one finally obtains
$\frac{dE}{du}=-\frac{\epsilon^{2}}{16\pi G}\oint
d^{2}S\left[\left(\dot{f}^{\prime\prime}-\cot\theta\dot{f}^{\prime}\right)^{2}+\frac{3m}{\ell^{2}}\partial_{u}\left(f^{\prime
2}\right)\right].$
## VI Comparison with alternative approaches
Given the observational evidence for an accelerated expansion of our Universe
and the recent gravitational wave observations, the challenge of understanding
gravitational waves in the presence of a positive cosmological constant
$\Lambda$ has received considerable attention in recent years. At the
linearized level, most of the previous results in the literature agree with
each other. As we will see below, our results are also in agreement with them.
However, the situation is drastically different in the full non-linear theory.
Different methods and/or different boundary conditions are employed — some of
which even require regularization; the results in general do not agree. We
describe below some of these approaches without any pretense of being
exhaustive.
### VI.1 Linearized gravity
An important starting point is a thorough understanding of weak gravitational
waves on a de Sitter background. There are two key issues predominantly
studied within this context: (1) a mathematically sound and physically
sensible notion of energy and its flux, and (2) finding explicit solutions for
gravitational waves generated by a compact source and their link to time-
changing quadrupole moments, thereby generalizing the well-known flat
result. (The gravitational memory effect in de Sitter spacetimes has been
investigated in (Bieri:2015jwa, ); however, that work focused on the
cosmological horizon and is therefore more difficult to relate to the results
in this paper, which apply exclusively near $\mathcal{I}$.)
There are various notions of energy and its flux in the literature that mostly
distinguish themselves by the method used to derive it (as a consequence,
these notions typically are equivalent up to boundary terms), “where” in
spacetime the energy (flux) is evaluated (mostly on $\mathcal{I}$ or across
the cosmological horizon) and by the class of linearized solutions for which
the energy is defined. For instance, in (Kolanowski:2020wfg, ), the energy
flux across $\mathcal{I}$ is derived using the same symplectic methods as in
the earlier work in (abk2, ), but for a slightly larger class of linearized
solutions. In (Kolanowski:2021hwo, ), the authors use the Wald-Zoupas
prescription to define energy (and angular momentum). In all these cases, the
resulting energy flux is finite and gauge invariant.
On the other hand, the energy flux obtained in (Chrusciel:2020rlz, ) by direct
use of Noether currents is not finite (note also earlier work
(Chrusciel:2016oux, )). In that paper, they remedy this issue by isolating the
terms which would lead to infinite energy and introducing a “renormalised
canonical energy”. Their argument for the plausibility of this procedure is
based on the observation that the diverging terms have dynamics of their own,
which evolves independently from the remaining part of the canonical energy.
Yet another approach relies on the Isaacson effective stress tensor and
therefore applies only to sources for which the short-wavelength approximation
holds; it matches the results in (abk3, ) if one identifies the transverse-
traceless gauge with a certain projection operation. (For linearized
solutions on Minkowski spacetime, this identification is a well-defined and
consistent procedure for the leading order components of the gravitational
field, as shown explicitly in (ab, ), in which the first notion is referred to
as ‘TT’ gauge and the second as ‘tt’ gauge. However, it is not clear that the
two notions are also equivalent for the leading order fields in de Sitter
spacetime.)
In (Hoque:2018byx, ), the authors employ the symplectic current density of the
covariant phase space to show that the integrand in the energy flux expression
on the cosmological horizon is the same as that on $\mathcal{I}$. This result
is interesting, as it suggests that at the linearized level the energy flux
propagates along null rays in de Sitter spacetime, despite the fact that
gravitational waves themselves have tail terms due to back-scattering off the
background curvature.
The second key issue investigated is the link between time variation of some
compact source generating gravitational waves and the resulting gravitational
waves themselves. This was investigated in (abk3, ) by solving the linearized
Einstein’s equation on de Sitter background sourced by a (first order) stress-
energy tensor. To study the limit to $\mathcal{I}$, the authors introduce a
late time approximation in addition to the commonly used post-Newtonian
approximation. This allowed them to express the leading terms of the
gravitational waves in terms of the quadrupole moments of sources. Moreover,
the energy carried away by these gravitational waves was studied using
Hamiltonian methods on the covariant phase-space of the linearized solutions
introduced in their earlier paper (abk2, ). This showed that despite the fact
that in principle the energy for linearized perturbations on de Sitter
spacetime can be negative (note that this does not conflict with the
finiteness discussed in the previous paragraph), the energy of gravitational
waves emitted by compact objects is always positive. This is also consistent
with our results in Sec. V.4. The quadrupolar solutions in (abk3, ) were also
reinterpreted in (He:2018ikd, ), by writing the solutions in Bondi-Sachs type
coordinates different from the ones introduced in this paper. The authors
showed that the quadrupolar solutions can be accommodated by a non-zero shear
for the leading order part of $g_{AB}$. This differs from, but does not
contradict, the results in this paper, which show that the radiative
solution contributes to the sub-leading part of $g_{AB}$ _and_ to
$U^{A}_{(0)}$, while the leading order part of $g_{AB}$ is equal to
$\gamma_{AB}$. This is a gauge choice. Other papers relating the source
dynamics modeled by some compact stress-energy tensor to the gravitational
wave and the energy have relied on the short wave approximation (Date:2016uzr,
; Hoque:2017xop, ; Hoque:2018dcg, ). These results are consistent with the
results in (abk2, ).
### VI.2 Full non-linear theory
Early investigations of the asymptotic structure of asymptotically de Sitter
spacetimes in full non-linear general relativity such as (Strominger:2001pn, ;
Anninos:2010zf, ; Anninos:2011jp, ) imposed too stringent boundary conditions
by demanding conformal flatness of the induced three-dimensional metric on
$\mathcal{I}$. As a result, these early investigations concluded that the
asymptotic symmetry group is the full de Sitter group. However, as was shown
in (abk1, ), imposing conformal flatness ruled out many physically relevant
spacetimes, as it enforces a vanishing flux of radiation across $\mathcal{I}$
(and consequently all charges are strictly conserved, see also
(aneesh_conserved_2019, )).
The observation that demanding the asymptotic symmetry group to be the de
Sitter one ruled out gravitational waves sparked new interest in this
challenging problem. It led the authors in (He:2015wfa, ) to consider Bondi-
Sachs type coordinates for asymptotically de Sitter spacetimes, which are not
conformally flat at $\mathcal{I}$. A nice property of their coordinates is
that the Weyl tensor has peeling behavior near $\mathcal{I}$ (Xie:2017uqa, ).
While these authors also rely on the Bondi framework, their fall-off
conditions on the metric coefficients are different from those considered
here. In particular, the authors did not fix $g_{AB}$ to be equal to the unit
two-sphere but instead allowed for a non-zero shear at leading order. Their
shear contains all the information about gravitational radiation. However,
based on our analysis, with their fall-off conditions the variation of the
charge is infinite, which makes those fall-off conditions less attractive.
Moreover, the analysis in those papers was restricted to axi-symmetric
spacetimes and limited to the study of Einstein’s equations; these papers did
not study gravitational charges and fluxes.
Subsequently, various authors used the Newman-Penrose formalism to define and
study asymptotically de Sitter spacetimes (Saw:2016isu, ; Saw:2017hsf, ;
Mao:2019ahc, ). The two earlier papers by Saw used a special choice of null
foliation, thereby excluding the Robinson-Trautman spacetime with a positive
cosmological constant as part of their allowed class of spacetimes. The class
of null foliations was generalized in (Mao:2019ahc, ) by Mao to accommodate
for the Robinson-Trautman spacetime. A nice feature of the fall-off conditions
on the spin coefficients and Weyl scalars in those papers is that they have a
well-defined flat limit. However, Mao finds that the asymptotic symmetry
algebra consists of all diffeomorphisms on the two-sphere and translations in
the $u$-direction. In the limit $l\to\infty$, the asymptotic symmetry algebra
becomes the algebra of all diffeomorphisms on the two-sphere and
supertranslations, known as the extended BMS algebra (Campiglia:2020qvc, ),
instead of the BMS algebra. Gravitational charges and fluxes were not studied
in these papers.
Another set of recent papers on this topic uses similar techniques to those
used here (Compere:algebra, ; Compere:group, ). These authors find that the
asymptotic symmetries form a Lie algebroid instead of a Lie algebra, as they
used different fall-off conditions on the metric. Their asymptotic symmetries
consist of infinite-dimensional “non-abelian supertranslations” and
superrotations, and, as in (Mao:2019ahc, ), reduce in the limit $l\to\infty$
to the extended BMS algebra. The resulting definitions for the Bondi mass and
angular momentum aspects are discontinuous in the flat limit. Moreover, the
authors also note that Kerr-de Sitter spacetime is ultimately not included in
the class of spacetimes considered in their work. These boundary conditions
were used in (erfani_bondi_2022, ) to define a Bondi news-like tensor using a
Newman-Penrose tetrad.
Inspired by the dictionary between the Bondi and Fefferman-Graham gauges
(Poole:2018koa, ), these authors used earlier results in Fefferman-Graham
gauge to define a new class of asymptotically de Sitter spacetimes. In their
follow-up work (poole_charges_2022, ), in order to obtain finite charges and
fluxes, they introduce a holographic renormalization procedure; by contrast,
all charges and fluxes in the present paper are naturally finite and do not
require any _ad hoc_ regularization. The latter work also states more clearly
that their interest is in spacetimes with compact spatial slices, as opposed
to this work.
Other work has focused on studying the possible isometries of asymptotically
de Sitter spacetimes. One of the key results is that the asymptotic symmetry
algebra they find is at most four-dimensional (Kaminski:2022tum, ), which
agrees in spirit with our work.
Research in a different direction has focused on how to identify
gravitational radiation in the presence of $\Lambda$ using geometric tools
only, without referring to a specific coordinate system
(radiation-criterion, ). The proposed criterion is based on the value of the
super-Poynting vector at $\mathcal{I}$: if it vanishes, there is no
gravitational radiation across $\mathcal{I}$, while if it is non-zero, there
is. This criterion is straightforward to check, as the super-Poynting vector
is constructed from the commutator of the leading-order electric part
$\mathcal{E}_{ab}$ and magnetic part $\mathcal{B}_{ab}$ of the Weyl tensor.
When the cosmological constant vanishes, this criterion is equivalent to the
standard ‘identification’ method of gravitational radiation at null infinity
through the means of the (non-)vanishing of the Bondi news tensor (radiation-
criterion-flat, ). When $\mathcal{B}_{ab}$ vanishes on $\mathcal{I}$, the
super-Poynting vector also vanishes and this criterion implies that there is
no radiation. However, the vanishing of $\mathcal{B}_{ab}$ is not a necessary
condition. In particular, certain Kerr-de Sitter generalized spacetimes have
non-vanishing $\mathcal{B}_{ab}$ on $\mathcal{I}$ yet their super-Poynting
vector vanishes. Here we find that there is gravitational radiation whenever
$C_{AB}$ (and hence $U_{(0)}^{A}$) is non-zero. When these are zero,
$\mathcal{B}_{ab}$ vanishes. Therefore, our criterion to establish the
presence of gravitational radiation seems to be stricter. In other words,
based on the super-Poynting vector criterion a spacetime may be labeled as
non-radiating, while based on $C_{AB}$ it would be considered radiating. It is
therefore not too surprising that, in follow-up work, the authors found that
the asymptotic symmetry algebra for the spacetimes they considered is
infinite-dimensional (Fernandez-Alvarez:2021yog, ; Fernandez-Alvarez:2021zmp, ;
Senovilla:2022pym, ).
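For reference (the explicit formula is not reproduced in this discussion, and normalization conventions vary in the literature), the asymptotic super-Poynting vector entering this criterion can be written on $\mathcal{I}$, with $\epsilon_{abc}$ the volume form of the induced metric, as

$\mathcal{P}^{a}\mathrel{\mathop{\widehat{=}}}\epsilon^{abc}\,\mathcal{E}_{b}{}^{d}\,\mathcal{B}_{dc}\;.$

Since $\epsilon^{abc}$ is antisymmetric in $b$ and $c$, this expression picks out the dual of the commutator $[\mathcal{E},\mathcal{B}]_{bc}=\mathcal{E}_{b}{}^{d}\mathcal{B}_{dc}-\mathcal{B}_{b}{}^{d}\mathcal{E}_{dc}$, which makes explicit why $\mathcal{P}^{a}$ vanishes whenever $\mathcal{E}_{ab}$ and $\mathcal{B}_{ab}$ commute, and in particular whenever $\mathcal{B}_{ab}$ vanishes.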
###### Acknowledgements.
BB would like to thank Abhay Ashtekar for many thought-provoking conversations
over the years, as well as CECs for its hospitality during her visits,
separated in time by force majeure, when this work was initiated and
completed. CB would like to thank Neil Turok for bringing the authors of this
paper together at the Path Integral for Gravity workshop held at the Perimeter
Institute in November 2017, when the discussions that led to this article
started. AP wishes to thank Jorge Noreña and Ricardo Troncoso for helpful
discussions. The research of AP is partially supported by Fondecyt grants No
1211226, 1220910 and 1230853.
## Appendix A Link with geometric approach
The goal of this appendix is to show how the metric in Eq. (1) with the fall-
off conditions specified in Eq. (2) can be derived from a geometric starting
point.
We start with a geometric definition of asymptotically (Schwarzschild-)de
Sitter spacetimes (abk1, ), which is analogous to the geometric definition à
la Penrose and others for asymptotically flat spacetimes. Given this
definition, we construct Bondi-Sachs-like coordinates for the conformally
completed spacetime. This process naturally indicates the minimal fall-off
conditions for the metric coefficients. With these coordinates for the
conformally completed spacetime in hand, we perform a conformal transformation
to obtain coordinates for the physical spacetime. The result is consistent
with the fall-off conditions in Eq. (2).
### A.1 Bondi-Sachs coordinates
In (abk1, ), the following definition for asymptotically de Sitter spacetimes
was introduced:
_Definition 1:_ A spacetime $(M,g_{ab})$ is weakly asymptotically de Sitter
if there exists a manifold $\tilde{M}$ with boundary $\mathcal{I}$, equipped
with a metric $\tilde{g}_{ab}$, and a diffeomorphism from $M$ onto
$\tilde{M}\,\setminus\,\mathcal{I}$ (with which we identify
$M$ and $\tilde{M}\,\setminus\,\mathcal{I}$) such that:
1. 1.
there exists a smooth function $\Omega$ on $\tilde{M}$ such that
$\tilde{g}_{ab}=\Omega^{2}g_{ab}$ on $M$; $\Omega=0$ on $\mathcal{I}$; and
$n_{a}:=\nabla_{a}\Omega$ is nowhere vanishing on $\mathcal{I}$;
2. 2.
$g_{ab}$ satisfies Einstein’s equations with a positive cosmological constant
$\Lambda$, i.e., $R_{ab}-\frac{1}{2}Rg_{ab}+\Lambda g_{ab}=8\pi G\;T_{ab}$
where $\Omega^{-1}T_{ab}$ has a smooth limit to $\mathcal{I}$.
These weakly asymptotically de Sitter spacetimes contain three different sub-
classes which distinguish themselves by the topology of $\mathcal{I}$. We
restrict ourselves to the sub-class for which $\mathcal{I}$ has topology
$\mathbb{S}^{2}\times\mathbb{R}\simeq\mathbb{S}^{3}\setminus\\{i^{0},i^{+}\\}$,
which in (abk1, ) were coined asymptotically Schwarzschild-de Sitter
spacetimes.
From this definition, we will construct Bondi-Sachs type coordinates near
$\mathcal{I}$ for the conformally completed spacetime $\tilde{g}_{ab}$. We
start by introducing coordinates _on_ $\mathcal{I}$ for the induced metric.
Future null infinity $\mathcal{I}$ is a space-like surface with topology
$\mathbb{R}\times\mathbb{S}^{2}$, for which we choose any foliation of the
$\mathbb{R}$ direction and pick $\Omega$ such that when restricted to the base
$\mathbb{S}^{2}$, we have the unit round metric. This choice can _always_ be
made and is simply a convenient choice of the conformal factor. With this
choice, the divergence of $\tilde{\nabla}_{a}\Omega$ is non-zero. This is
different from the typical choice made in the asymptotically flat context,
where one often chooses $\Omega$ such that one is in a conformal
divergence-free frame, because for those spacetimes this choice simplifies
intermediate results significantly. In this context, however, the more
convenient choice is to pick $\Omega$ such that the metric on
$\mathbb{S}^{2}$ is the unit two-sphere metric. Given that this is merely a
convenient choice of the conformal factor (a conformal gauge choice), it does
not reduce the class of asymptotically Schwarzschild-de Sitter spacetimes. In
particular, when we label the coordinates on the two-sphere by
$x^{A}=(\theta,\varphi)$ and the third coordinate by $u$, we find that:
$\displaystyle\tilde{g}_{\mu\nu}dx^{\mu}dx^{\nu}\mathrel{\mathop{\widehat{=}}}$
$\displaystyle\;UWdu^{2}$ (77)
$\displaystyle+\gamma_{AB}\left(dx^{A}-U^{A}du\right)\left(dx^{B}-U^{B}du\right),$
where $U,W,U^{A}$ are arbitrary functions of $(u,\theta,\varphi)$ only
constrained by the requirement that the pullback of this metric to
$\mathcal{I}$ is Riemannian. The hat on the equal sign indicates an equality
at $\mathcal{I}$. We choose the coordinate $u$ such that it is monotonically
increasing from $i^{+}$ to $i^{0}$. Away from $\mathcal{I}$, we have $\Omega$
as an additional coordinate. Consider null hypersurfaces transverse to
$\mathcal{I}$ that intersect $\mathcal{I}$ in the cross-sections $S_{u}$.
There are many such null hypersurfaces in all directions, but the coordinate
$u$ along $\mathcal{I}$ singles out a “special” direction. Using this
direction to select the transverse null hypersurfaces, one finds that in a
sufficiently small neighborhood of null infinity these hypersurfaces do not
intersect each other and thus generate a null foliation. We extend the
coordinate $u$ into the spacetime by demanding that it is constant along
these null hypersurfaces. Operationally, this means that we introduce a
past-directed null co-vector field $l_{\mu}$, which on $\mathcal{I}$ is given by
$l_{\mu}\hat{=}-\tilde{\nabla}_{\mu}u$ and extend it off of $\mathcal{I}$ by
demanding that it is geodesic ($l^{\mu}\tilde{\nabla}_{\mu}l^{\nu}=0$). One
can “normalize” $l_{\mu}$ so that $n^{\mu}l_{\mu}\hat{=}-1$, but we will keep
this arbitrary for now. We use this null vector field to extend the
coordinates $(u,\theta,\varphi)$ away from $\mathcal{I}$ by demanding that
$(\theta,\varphi)$ are parallel transported along $l^{\mu}$:
$l^{\mu}\tilde{\nabla}_{\mu}\theta=0$ and
$l^{\mu}\tilde{\nabla}_{\mu}\varphi=0$. This implies immediately that
$\tilde{g}^{uu}=0=\tilde{g}^{uA}$, which in turn implies that
$\tilde{g}_{\Omega\Omega}=0=\tilde{g}_{\Omega A}$. As a result, we can write
$\displaystyle\tilde{g}_{\mu\nu}dx^{\mu}dx^{\nu}$
$\displaystyle=e^{2\beta}W\;du^{2}+2e^{2\beta}\;dud\Omega$
$\displaystyle+q_{AB}\left(dx^{A}-U^{A}\;du\right)\left(dx^{B}-U^{B}\;du\right)\;,$
(78)
where we can expand all the metric coefficients in powers of $\Omega$ (since
$\tilde{g}_{\mu\nu}$ is smooth on $\mathcal{I}$ the leading order parts of the
metric start with arbitrary functions of $(u,\theta,\varphi)$):
$\displaystyle\beta$
$\displaystyle=\beta^{(0)}(u,\theta,\varphi)+\beta^{(1)}(u,\theta,\varphi)\Omega+\beta^{(2)}(u,\theta,\varphi)\Omega^{2}$
$\displaystyle+\beta^{(3)}(u,\theta,\varphi)\Omega^{3}+\ldots$ (79a)
$\displaystyle W$
$\displaystyle=W^{(0)}(u,\theta,\varphi)+W^{(1)}(u,\theta,\varphi)\Omega+W^{(2)}(u,\theta,\varphi)\Omega^{2}$
$\displaystyle+W^{(3)}(u,\theta,\varphi)\Omega^{3}+\ldots$ (79b)
$\displaystyle U^{A}$
$\displaystyle=U_{(0)}^{A}(u,\theta,\varphi)+U_{(1)}^{A}(u,\theta,\varphi)\Omega+U_{(2)}^{A}(u,\theta,\varphi)\Omega^{2}$
$\displaystyle+U_{(3)}^{A}(u,\theta,\varphi)\Omega^{3}+\ldots$ (79c)
$\displaystyle q_{AB}$
$\displaystyle=\gamma_{AB}+C_{AB}\Omega+d_{AB}\Omega^{2}+E_{AB}\Omega^{3}+\ldots$
(79d)
We will now set $l^{\mu}n_{\mu}\hat{=}-1$, which requires that
$\beta^{(0)}=0$. From the definition of asymptotically de Sitter spacetimes,
combined with Einstein’s equation, we know that
$\tilde{g}^{\mu\nu}\nabla_{\mu}\Omega\nabla_{\nu}\Omega\hat{=}-\frac{1}{\ell^{2}}$
(see e.g. (abk1, , Eq. (2.2))). As a result, we also know that
$W^{(0)}=\frac{1}{\ell^{2}}\;.$ (80)
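This value can be obtained by a short check. Since $\Omega$ is a coordinate, $\tilde{\nabla}_{\mu}\Omega=\delta^{\Omega}_{\mu}$, and inverting the metric in Eq. (78) gives $\tilde{g}^{\Omega\Omega}=-We^{-2\beta}$, so that

$\tilde{g}^{\mu\nu}\tilde{\nabla}_{\mu}\Omega\,\tilde{\nabla}_{\nu}\Omega=\tilde{g}^{\Omega\Omega}=-We^{-2\beta}\mathrel{\mathop{\widehat{=}}}-W^{(0)}\;,$

where $\beta^{(0)}=0$ was used in the last step; equating this to $-1/\ell^{2}$ reproduces Eq. (80).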
Furthermore, imposing the Sachs condition, i.e., the determinant of $q_{AB}$
is the determinant of the unit two-sphere, we also find that
$\displaystyle\gamma^{AB}C_{AB}=0$ (81a) $\displaystyle
d_{AB}=\frac{1}{4}C^{CD}C_{CD}\gamma_{AB}$ (81b)
$\displaystyle\gamma^{AB}E_{AB}=0\;.$ (81c)
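The trace conditions above can be read off from the determinant expansion. Writing $q_{AB}=\gamma_{AC}\left(\delta^{C}{}_{B}+\Omega\,C^{C}{}_{B}+\Omega^{2}\,d^{C}{}_{B}+\ldots\right)$ and using the exact $2\times 2$ identity $\det(\mathbb{1}+A)=1+\operatorname{tr}A+\det A$, the Sachs condition $\det q_{AB}=\det\gamma_{AB}$ gives, order by order in $\Omega$,

$\mathcal{O}(\Omega):\;\gamma^{AB}C_{AB}=0\;,\qquad\mathcal{O}(\Omega^{2}):\;\gamma^{AB}d_{AB}=\tfrac{1}{2}C^{AB}C_{AB}\;,$

where $\det(C^{A}{}_{B})=-\tfrac{1}{2}C^{AB}C_{AB}$ for traceless $C_{AB}$ was used; the second relation is precisely the trace of Eq. (81b).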
These conformal Bondi-Sachs coordinates for the unphysical spacetime
constructed above can be used to obtain asymptotic coordinates for the
physical metric. In the conformal Bondi-Sachs coordinates, the physical metric
is
$\displaystyle g_{\mu\nu}dx^{\mu}dx^{\nu}$
$\displaystyle=\Omega^{-2}\tilde{g}_{\mu\nu}dx^{\mu}dx^{\nu}$
$\displaystyle=\Omega^{-2}e^{2\beta}W\;du^{2}+2\Omega^{-2}e^{2\beta}\;dud\Omega$
$\displaystyle+\Omega^{-2}q_{AB}\left(dx^{A}-U^{A}\;du\right)\left(dx^{B}-U^{B}\;du\right)\;.$
(82)
Note that surfaces of constant $u$ are outgoing null surfaces for both the
unphysical and the physical spacetime, since conformal transformations do not
change the properties of null geodesics. To put this metric in a more familiar
form, we define a radial coordinate $r$ in the physical spacetime so that null
infinity is approached as the radial coordinate goes to infinity along the
null surfaces of constant $u$. The natural candidate for this radial
coordinate is $r=\Omega^{-1}$, in which case we exactly recover the metric
with fall-off conditions as in Eq. (2).
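Concretely, substituting $\Omega=1/r$ (so $d\Omega=-r^{-2}dr$) into Eq. (82) gives

$g_{\mu\nu}dx^{\mu}dx^{\nu}=e^{2\beta}W\,r^{2}\,du^{2}-2e^{2\beta}\,du\,dr+r^{2}q_{AB}\left(dx^{A}-U^{A}\,du\right)\left(dx^{B}-U^{B}\,du\right)\;,$

and inserting the expansions (79)-(80) shows that the leading large-$r$ part is $(r^{2}/\ell^{2})\,du^{2}-2\,du\,dr+r^{2}\gamma_{AB}\,dx^{A}dx^{B}$, which matches de Sitter space written in outgoing coordinates, $-(1-r^{2}/\ell^{2})\,du^{2}-2\,du\,dr+r^{2}\gamma_{AB}\,dx^{A}dx^{B}$, at large $r$.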
_Remark._ While in the definition of asymptotically de Sitter spacetimes and
at the level of Einstein’s equation it is relatively easy to include a non-
zero stress-energy tensor, to derive charges and fluxes one would also need to
specify a Lagrangian for the matter fields. In this paper, however, we have
restricted ourselves to a stress-energy tensor with compact support so that
effectively we are considering vacuum solutions only.
### A.2 The electric and magnetic part of the Weyl tensor
In (abk1, ), _strongly asymptotically de Sitter_ spacetimes were also defined.
These spacetimes satisfy the conditions in the definition of weakly
asymptotically de Sitter spacetimes but, in addition, have a conformally flat
metric at $\mathcal{I}$. It is clear from Eq. (77) that the class of
spacetimes considered in this paper does not belong to the strongly
asymptotically de Sitter class. This can also be seen by studying the Weyl
tensor. In particular, conformal flatness of the metric at $\mathcal{I}$ is equivalent to
the vanishing of the next-to-leading order magnetic part of the Weyl tensor at
$\mathcal{I}$. Given the above expression for the metric, we can compute the
Weyl tensor explicitly. We find that – as expected – the leading order part of
the Weyl tensor vanishes on $\mathcal{I}$, that is,
$C_{abcd}\mathrel{\mathop{\widehat{=}}}0$ (abk1, ). The next-to-leading order
part is non-zero and we will decompose it into its electric and magnetic part:
$\displaystyle\mathcal{E}_{ab}$
$\displaystyle:=l^{2}\Omega^{-1}C_{acbd}n^{c}n^{d}$ (83)
$\displaystyle\mathcal{B}_{ab}$
$\displaystyle:=l^{2}\Omega^{-1}{}^{*}C_{acbd}n^{c}n^{d}=\frac{l^{2}}{2}\epsilon_{ac}^{\;\;\;mn}\;C_{mnbd}n^{c}n^{d}\;.$
(84)
where both $\mathcal{E}_{ab}$ and $\mathcal{B}_{ab}$ are symmetric, traceless
and orthogonal to $n^{a}$, which here is equal to
$n^{a}\partial_{a}=\partial/\partial u$. The resulting expressions are rather
long, so here we only show the part linear in the metric coefficients:
$\displaystyle\left.\mathcal{E}_{ab}dx^{a}dx^{b}\right|_{{\rm lin}}$
$\displaystyle\mathrel{\mathop{\widehat{=}}}-\frac{1}{l^{2}}V^{(3)}\;\left(du+l^{2}d\Omega\right)^{2}$
$\displaystyle+\left(\frac{3}{2l^{2}}U_{A}^{(3)}+\frac{l^{2}}{2}\partial_{u}\left[D^{2}+1\right]U_{A}^{(0)}\right)\left(du\right.$
$\displaystyle+\left.\ell^{2}d\Omega\right)dx^{A}-\frac{l^{2}}{2}\left(\frac{3}{l^{4}}q_{AB}^{(3)}-\frac{1}{l^{2}}V^{(3)}\gamma_{AB}\right.$
$\displaystyle+\left[2l^{2}\partial_{u}^{2}+D^{2}-4\right]\left[D_{(A}U_{B)}^{(0)}-\frac{1}{2}\gamma_{AB}D_{C}U_{(0)}^{C}\right]$
$\displaystyle\left.-\left[D_{(A}D_{B)}-\frac{1}{2}\gamma_{AB}D^{2}\right]D_{C}U_{(0)}^{C}\right)dx^{A}dx^{B}$
(85)
and
$\displaystyle\left.\mathcal{B}_{ab}dx^{a}dx^{b}\right|_{{\rm lin}}$
$\displaystyle\mathrel{\mathop{\widehat{=}}}\frac{1}{2}\left[D^{2}+2\right]\epsilon^{AB}D_{A}U_{B}^{(0)}\left(du+l^{2}d\Omega\right)^{2}$
$\displaystyle\frac{l^{2}}{2}\partial_{u}\left[D^{2}+1\right]\epsilon_{A}^{\;\;B}U_{B}^{(0)}(du+l^{2}d\Omega)dx^{A}$
$\displaystyle-\frac{l^{2}}{2}\left(\frac{1}{2}\left[D^{2}+2\right](\epsilon^{EF}D_{E}U_{F}^{(0)})\;\gamma_{AB}\right.$
$\displaystyle-2l^{2}\partial_{u}^{2}\left[D_{(A}(\epsilon_{B)}^{\;\;\;\;C}U_{C}^{(0)})\right.$
$\displaystyle\left.-\frac{1}{2}\gamma_{AB}D_{C}(\epsilon^{CE}U_{E}^{(0)})\right]$
$\displaystyle\left.-\left[D_{(A}D_{B)}-\frac{1}{2}\gamma_{AB}D^{2}\right]\epsilon^{EF}(D_{E}U_{F}^{(0)})\right)\times$
$\displaystyle\times dx^{A}dx^{B}\;.$ (86)
It is evident that $\mathcal{B}_{ab}$ is non-zero; consequently, the class
of spacetimes considered in this paper is not strongly asymptotically de
Sitter. This is as desired, because the strong condition removes half of the
permissible data and allows no flux of energy across $\mathcal{I}$.
Using the explicit linearized solutions in Sec. V.4, we find that for
quadrupolar gravitational waves, the electric and magnetic part of the Weyl
tensor are
$\displaystyle\mathcal{E}_{ab}dx^{a}dx^{b}$
$\displaystyle\mathrel{\mathop{\widehat{=}}}\sum_{m=-2}^{2}\left\\{6l^{-2}\left(l^{-2}A_{m}-\ddot{A}_{m}\right)Y_{2m}\;du^{2}\right.$
$\displaystyle+\left(\partial_{u}\left(l^{-2}A_{m}-\ddot{A}_{m}\right)Y_{A}^{2m}\right.$
$\displaystyle\left.-\partial_{u}\left(4l^{-2}B_{m}-\ddot{B}_{m}\right)X_{A}^{2m}\right)dudx^{A}$
$\displaystyle+\left(\left[-\frac{3}{2}\left(l^{-2}A_{m}-\ddot{A}_{m}\right)\right.\right.$
$\displaystyle\left.+\frac{1}{2}\partial_{u}^{2}\left(l^{-2}A_{m}-\ddot{A}_{m}\right)\right]Y_{AB}^{2m}$
$\displaystyle-3\left(l^{-2}A_{m}-\ddot{A}_{m}\right)Y_{2m}\;S_{AB}$
$\displaystyle\left.\left.-\frac{1}{2}\partial_{u}^{2}\left(4B_{m}-l^{2}\ddot{B}_{m}\right)X_{AB}^{2m}\right)dx^{A}dx^{B}\right\\}$
(87)
and
$\displaystyle\mathcal{B}_{ab}dx^{a}dx^{b}$
$\displaystyle\mathrel{\mathop{\widehat{=}}}\sum_{m=-2}^{2}\left\\{6l^{-2}\left(l^{-2}B_{m}-\ddot{B}_{m}\right)Y_{2m}\;du^{2}\right.$
$\displaystyle-\left(\partial_{u}\left(4l^{-2}A_{m}-\ddot{A}_{m}\right)Y_{A}^{2m}\right.$
$\displaystyle\left.-\partial_{u}\left(l^{-2}B_{m}-\ddot{B}_{m}\right)X_{A}^{2m}\right)dudx^{A}$
$\displaystyle+\left(\left[-\frac{3}{2}\left(l^{-2}B_{m}-\ddot{B}_{m}\right)\right.\right.$
$\displaystyle\left.+\frac{1}{2}\partial_{u}^{2}\left(l^{-2}B_{m}-\ddot{B}_{m}\right)\right]Y_{AB}^{2m}$
$\displaystyle\left.-3\left(l^{-2}B_{m}-\ddot{B}_{m}\right)Y_{2m}\;S_{AB}\right.$
$\displaystyle\left.\left.+\frac{1}{2}\partial_{u}^{2}\left(4A_{m}-l^{2}\ddot{A}_{m}\right)X_{AB}^{2m}\right)dx^{A}dx^{B}\right\\}$
(88)
We have explicitly verified that these expressions are symmetric, transverse
$\mathcal{E}_{ab}n^{b}\mathrel{\mathop{\widehat{=}}}0\mathrel{\mathop{\widehat{=}}}\mathcal{B}_{ab}n^{b}$
and traceless
$q^{ab}\mathcal{E}_{ab}\mathrel{\mathop{\widehat{=}}}0\mathrel{\mathop{\widehat{=}}}q^{ab}\mathcal{B}_{ab}$.
In taking the limit $l\to\infty$, one needs to be careful to rescale
$\mathcal{E}_{ab}$ and $\mathcal{B}_{ab}$; otherwise, due to the overall
factor of $l^{2}$ in the definition in Eqs. (83)-(84), this limit trivially
diverges. The flat limit is
$\displaystyle\lim_{l\to\infty}l^{-2}\mathcal{E}_{ab}dx^{a}dx^{b}$
$\displaystyle\mathrel{\mathop{\widehat{=}}}\sum_{m=-2}^{2}-2\partial_{u}^{4}B_{m}X_{AB}^{2m}dx^{A}dx^{B}$
(89) $\displaystyle\lim_{l\to\infty}l^{-2}\mathcal{B}_{ab}dx^{a}dx^{b}$
$\displaystyle\mathrel{\mathop{\widehat{=}}}\sum_{m=-2}^{2}2\partial_{u}^{4}A_{m}X_{AB}^{2m}dx^{A}dx^{B}\;.$
(90)
Note that the parity-even solution, which is also called an electric solution,
contributes to the magnetic part of the Weyl tensor and vice versa. There is
no contradiction here, as the names electric and magnetic refer to very
different notions.
In this linearized limit one can also show explicitly that the $\ell=1$ modes
in $U_{(0)}^{A}$ contribute neither to $\mathcal{E}_{ab}$ nor to
$\mathcal{B}_{ab}$, which further supports the interpretation of those modes
as non-radiative. (To show this, first decompose $U_{A}^{(0)}$ into an
‘electric’ and a ‘magnetic’ part,
$U_{A}^{(0)}=D_{A}f+\epsilon_{A}^{\;\;B}D_{B}g$, and use that if $f$ and $g$
are $\ell=1$ modes, then $D_{A}D_{B}f=-\gamma_{AB}f$, and similarly for $g$.)
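The identity $D_{A}D_{B}f=-\gamma_{AB}f$ for $\ell=1$ modes invoked above can be verified symbolically. The following sketch (not from the paper) checks it for the representative $\ell=1$, $m=0$ mode $f=\cos\theta$ on the unit round two-sphere:

```python
# Verify that D_A D_B cos(theta) = -gamma_AB cos(theta) on the unit
# two-sphere, i.e. the covariant Hessian identity for an ell=1 mode.
import sympy as sp

th, ph = sp.symbols('theta phi')
f = sp.cos(th)                               # ell=1, m=0 mode (up to normalization)
coords = [th, ph]
n = 2

# Unit-sphere metric gamma_AB, its inverse, and Christoffel symbols
gamma = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
ginv = gamma.inv()
Gamma = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(gamma[d, b], coords[c])
                                       + sp.diff(gamma[d, c], coords[b])
                                       - sp.diff(gamma[b, c], coords[d]))/2
                           for d in range(n)))
           for c in range(n)] for b in range(n)] for a in range(n)]

# Covariant Hessian: D_A D_B f = d_A d_B f - Gamma^C_{AB} d_C f
hess = sp.zeros(n, n)
for a in range(n):
    for b in range(n):
        hess[a, b] = sp.diff(f, coords[a], coords[b]) \
            - sum(Gamma[c][a][b]*sp.diff(f, coords[c]) for c in range(n))

for a in range(n):
    for b in range(n):
        assert sp.simplify(hess[a, b] + gamma[a, b]*f) == 0
print("D_A D_B cos(theta) = -gamma_AB cos(theta): verified")
```

The same computation with $f=\sin\theta\,e^{\pm i\varphi}$ (the other $\ell=1$ modes) goes through unchanged.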
## Appendix B Change of coordinates for the non-linear Robinson-Trautman
solution
In this appendix we show that the non-linear Robinson-Trautman solution can be
accommodated within the asymptotic conditions in Eq. (2). Starting from the
solution in Eq. (72) one can perform the following asymptotic change of
coordinates
$\displaystyle u\rightarrow$ $\displaystyle
f\left(u,\theta\right)+\frac{\ell^{2}\csc\theta\left(\sin\theta
h^{\prime}-\sin h\right)}{P}\frac{1}{r}+\frac{U_{2}}{r^{2}}+\dots,$
$\displaystyle r\rightarrow$ $\displaystyle\sin\theta\csc
hP\,r+R_{0}+\frac{R_{1}}{r}+\dots,$ $\displaystyle\theta\rightarrow$
$\displaystyle
h\left(u,\theta\right)-\frac{Pf^{\prime}}{r}-\frac{1}{2}\ell^{2}\left(h^{\prime\prime}+\cot\theta
h^{\prime}\right.$ $\displaystyle\left.-\frac{1}{2}\csc^{2}\theta\sin
2h\right)\frac{1}{r^{2}}+\dots,$ $\displaystyle\phi\rightarrow$
$\displaystyle\phi,$
with
$\displaystyle U_{2}$ $\displaystyle=\frac{\ell^{4}\left(\csc\theta\sin
h-h^{\prime}\right)}{4f^{\prime}P^{3}}\left[P\left(-2h^{\prime\prime}-2\cot\theta
h^{\prime}+\csc^{2}\theta\sin(2h)\right)\right.$
$\displaystyle\left.-2\left(\csc\theta\sin
h-h^{\prime}\right)\left(f^{\prime}\left(\partial_{f}P\right)+\left(\partial_{h}P\right)\left(h^{\prime}+\csc\theta\sin
h\right)\right)\right],$ $\displaystyle R_{0}$
$\displaystyle=\frac{1}{2}\ell\sin\theta\csc
h\left(\frac{\ell\left(-h^{\prime\prime}-\cot\theta
h^{\prime}+\frac{1}{2}\csc^{2}\theta\sin
2h\right)}{f^{\prime}}-\frac{f^{\prime}P\left(\partial_{h}P\right)}{\ell}+\frac{2\ell\left(\partial_{f}P\right)\left(h^{\prime}-\csc\theta\sin
h\right)}{P}\right),$ $\displaystyle R_{1}$
$\displaystyle=\frac{\ell^{4}\csc^{3}\theta\sin^{3}h}{16f^{\prime
2}P^{5}}\left\\{-2P^{2}\left(\partial_{h}P\right)\left(h^{\prime}\sin\theta\csc
h-1\right)^{2}\left(h^{\prime}\sin\theta\csc
h+1\right)\left(8f^{\prime}\sin\theta\csc
h\left(\partial_{f}P\right)\right.\right.$
$\displaystyle\left.+3\left(\partial_{h}P\right)\left(h^{\prime}\sin\theta\csc
h+1\right)\right)+4P^{3}\left(\sin^{2}\theta h^{\prime
2}\csc^{2}h-1\right)\left[4\sin\theta f^{\prime}\csc
h\left(\partial_{f}\partial_{h}P\right)\left(h^{\prime}\sin\theta\csc
h-1\right)\right.$ $\displaystyle\left.+2\partial_{h}^{2}P\left(h^{\prime
2}\sin^{2}\theta\csc^{2}h-1\right)+\left(\partial_{h}P\right)\left(\sin\theta\csc^{2}h\left(\sin\theta
h^{\prime\prime}+\cos\theta h^{\prime}\right)-\cot h\right)\right]$
$\displaystyle-8\ell^{2}P\left(\partial_{f}^{2}P\right)\left(h^{\prime}\sin\theta\csc
h-1\right)^{3}\left(h^{\prime}\sin\theta\csc
h+1\right)+8\ell^{2}\left(\partial_{f}P\right)^{2}\left(h^{\prime}\sin\theta\csc
h-1\right)^{3}\left(h^{\prime}\sin\theta\csc h+1\right)$
$\displaystyle+\csc^{2}hP^{4}\left[2h^{\prime\prime
2}\sin^{4}\theta\csc^{2}h+4h^{\prime\prime}\sin^{2}\theta\left(h^{\prime}\sin\theta\csc^{2}h\left(\cos\theta-2h^{\prime}\sin\theta\cot
h\right)+\cot h\right)\right.$
$\displaystyle+h^{\prime}\left(h^{\prime}\sin^{2}\theta\csc^{2}h\left(8h^{\prime}\sin\theta\left(h^{\prime}\sin\theta\csc^{2}h-\cos\theta\cot
h\right)+4\cos(2h)+\cos(2\theta)-11\right)+2\sin(2\theta)\cot h\right)$
$\displaystyle\left.\left.-3\cos(2h)+5\right]\right\\}$
and $P=P\left(f\left(u,\theta\right),h\left(u,\theta\right)\right)$. The
functions $f$ and $h$ are required to satisfy the following conditions
$h^{\prime 2}+\frac{P^{2}f^{\prime 2}}{\ell^{2}}=\csc^{2}\theta\sin^{2}h,$
(91)
$P\left(\dot{f}h^{\prime}-f^{\prime}\dot{h}\right)=\csc^{2}\theta\sin^{2}h.$
(92)
Note that in the linear approximation, Eq. (91) indicates that $h=\theta$,
while Eq. (92) implies Eq. (75), i.e., $\dot{f}=-\omega$. In the flat limit
($\ell\rightarrow\infty$), Eq. (91) implies that $h=\theta$, while according
to Eq. (92) one has $\dot{f}=P^{-1}$. This condition is the same as that
obtained in (vonderGonna:1997sh, ).
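As a quick illustrative check (not from the paper), the flat-limit statements can be verified symbolically with $h=\theta$ and $f=f(u)$, treating $P$ as a function of $u$ only for simplicity:

```python
# Illustrative check of the flat-limit statements for Eqs. (91)-(92):
# with h = theta, the ell -> infinity limit of Eq. (91) holds identically,
# and Eq. (92) reduces to fdot = 1/P. Assumes P and f depend on u only.
import sympy as sp

u, th = sp.symbols('u theta')
P = sp.Function('P')(u)
f = sp.Function('f')(u)
h = th

# Eq. (91) with ell -> infinity: h'^2 = csc^2(theta) sin^2(h)
assert sp.simplify(sp.diff(h, th)**2 - (sp.sin(h)/sp.sin(th))**2) == 0

# Eq. (92): P (fdot h' - f' hdot) = csc^2(theta) sin^2(h);
# here f' = d f / d theta = 0 and hdot = d h / d u = 0, so fdot = rhs/(P h')
fdot = sp.simplify((sp.sin(h)/sp.sin(th))**2 / (P*sp.diff(h, th)))
assert sp.simplify(fdot - 1/P) == 0
print("flat limit: h = theta and fdot = 1/P verified")
```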
## References
* (1) H. Bondi, “Gravitational Waves in General Relativity,” Nature, vol. 186, no. 4724, pp. 535–535, 1960.
* (2) R. K. Sachs, “Gravitational waves in general relativity. 8. Waves in asymptotically flat space-times,” Proc. Roy. Soc. Lond. A, vol. 270, pp. 103–126, 1962.
* (3) R. Sachs, “Asymptotic symmetries in gravitational theory,” Phys. Rev., vol. 128, pp. 2851–2864, 1962.
* (4) A. Ashtekar, B. Bonga, and A. Kesavan, “Asymptotics with a positive cosmological constant: I. Basic framework,” Class. Quant. Grav., vol. 32, no. 2, p. 025004, 2015.
* (5) A. Ashtekar and A. Magnon, “Asymptotically anti-de Sitter space-times,” Class. Quant. Grav., vol. 1, pp. L39–L44, 1984.
* (6) M. Henneaux and C. Teitelboim, “Asymptotically anti-De Sitter Spaces,” Commun. Math. Phys., vol. 98, pp. 391–424, 1985.
* (7) A. Ashtekar and S. Das, “Asymptotically Anti-de Sitter space-times: Conserved quantities,” Class. Quant. Grav., vol. 17, pp. L17–L30, 2000.
* (8) K. S. Thorne, “Multipole Expansions of Gravitational Radiation,” Rev. Mod. Phys., vol. 52, pp. 299–339, 1980.
* (9) G. Barnich and C. Troessaert, “BMS charge algebra,” JHEP, vol. 12, p. 105, 2011.
* (10) E. E. Flanagan and D. A. Nichols, “Conserved charges of the extended Bondi-Metzner-Sachs algebra,” Phys. Rev. D, vol. 95, no. 4, p. 044002, 2017.
* (11) M. Henneaux and C. Troessaert, “BMS Group at Spatial Infinity: the Hamiltonian (ADM) approach,” JHEP, vol. 03, p. 147, 2018.
* (12) C. Bunster, A. Gomberoff, and A. Pérez, Regge-Teitelboim analysis of the symmetries of electromagnetic and gravitational fields on asymptotically null spacelike surfaces. 5 2018.
* (13) “NIST Digital Library of Mathematical Functions.” http://dlmf.nist.gov/, Release 1.1.1 of 2021-03-15. F. W. J. Olver, A. B. Olde Daalhuis, D. W. Lozier, B. I. Schneider, R. F. Boisvert, C. W. Clark, B. R. Miller, B. V. Saunders, H. S. Cohl, and M. A. McClain, eds.
* (14) G. Barnich and F. Brandt, “Covariant theory of asymptotic symmetries, conservation laws and central charges,” Nucl. Phys. B, vol. 633, pp. 3–82, 2002.
* (15) T. Regge and C. Teitelboim, “Role of Surface Integrals in the Hamiltonian Formulation of General Relativity,” Annals Phys., vol. 88, p. 286, 1974.
* (16) J. Lee and R. M. Wald, “Local symmetries and constraints,” J. Math. Phys., vol. 31, pp. 725–743, 1990.
* (17) R. M. Wald and A. Zoupas, “A General definition of ’conserved quantities’ in general relativity and other theories of gravity,” Phys. Rev. D, vol. 61, p. 084027, 2000.
* (18) G. Compère and A. Fiorucci, “Advanced Lectures on General Relativity,” 1 2018.
* (19) L. Abbott and S. Deser, “Stability of Gravity with a Cosmological Constant,” Nucl. Phys. B, vol. 195, pp. 76–96, 1982.
* (20) S. Deser and B. Tekin, “Gravitational energy in quadratic curvature gravities,” Phys. Rev. Lett., vol. 89, p. 101101, 2002.
* (21) S. Deser and B. Tekin, “Energy in generic higher curvature gravity theories,” Phys. Rev. D, vol. 67, p. 084009, 2003.
* (22) A. Ashtekar, B. Bonga, and A. Kesavan, “Asymptotics with a positive cosmological constant. II. Linear fields on de Sitter spacetime,” Phys. Rev. D, vol. 92, no. 4, p. 044011, 2015.
* (23) P. Chruściel, S. J. Hoque, and T. Smołka, “Energy of weak gravitational waves in spacetimes with a positive cosmological constant,” 3 2020.
* (24) M. Kolanowski and J. Lewandowski, “Energy of gravitational radiation in the de Sitter universe at the scri and at a horizon,” 8 2020.
* (25) A. Ashtekar, B. Bonga, and A. Kesavan, “Asymptotics with a positive cosmological constant: III. The quadrupole formula,” Phys. Rev. D, vol. 92, no. 10, p. 104032, 2015.
* (26) C. Bunster, A. Gomberoff, and A. Pérez, “Bondi-Metzner-Sachs invariance and electric-magnetic duality,” Phys. Rev. D, vol. 101, no. 4, p. 044003, 2020.
* (27) B. Carter, “Hamilton-Jacobi and Schrodinger separable solutions of Einstein’s equations,” Commun. Math. Phys., vol. 10, no. 4, pp. 280–310, 1968.
* (28) W. R. Kelly and D. Marolf, “Phase Spaces for asymptotically de Sitter Cosmologies,” Class. Quant. Grav., vol. 29, p. 205013, 2012.
* (29) W. L. Burke, The coupling of gravitational radiation to nonrelativistic sources. PhD thesis, Caltech, 1969.
* (30) T. Regge and J. A. Wheeler, “Stability of a Schwarzschild singularity,” Phys. Rev., vol. 108, pp. 1063–1069, 1957.
* (31) K. Martel and E. Poisson, “Gravitational perturbations of the Schwarzschild spacetime: A Practical covariant and gauge-invariant formalism,” Phys. Rev. D, vol. 71, p. 104003, 2005.
* (32) I. Robinson and A. Trautman, “Some spherical gravitational waves in general relativity,” Proc. Roy. Soc. Lond. A, vol. 265, pp. 463–473, 1962.
* (33) L. Bieri, D. Garfinkle, and S.-T. Yau, “Gravitational wave memory in de Sitter spacetime,” Phys. Rev. D, vol. 94, no. 6, p. 064040, 2016.
* (34) M. Kolanowski and J. Lewandowski, “Hamiltonian charges in the asymptotically de Sitter spacetimes,” JHEP, vol. 05, p. 063, 2021.
* (35) P. T. Chruściel and L. Ifsits, “The cosmological constant and the energy of gravitational radiation,” Phys. Rev. D, vol. 93, no. 12, p. 124075, 2016.
* (36) A. Ashtekar and B. Bonga, “On the ambiguity in the notion of transverse traceless modes of gravitational waves,” Gen. Rel. Grav., vol. 49, no. 9, p. 122, 2017.
* (37) S. J. Hoque and A. Virmani, “On Propagation of Energy Flux in de Sitter Spacetime,” Gen. Rel. Grav., vol. 50, no. 4, p. 40, 2018.
* (38) X. He, J. Jing, and Z. Cao, “Relationship between Bondi-Sachs quantities and source of gravitational radiation in asymptotically de Sitter spacetime,” Int. J. Mod. Phys. D, vol. 27, no. 04, p. 1850046, 2017.
* (39) G. Date and S. J. Hoque, “Cosmological Horizon and the Quadrupole Formula in de Sitter Background,” Phys. Rev. D, vol. 96, no. 4, p. 044026, 2017.
* (40) S. J. Hoque and A. Aggarwal, “Quadrupolar power radiation by a binary system in de Sitter Background,” Int. J. Mod. Phys. D, vol. 28, no. 01, p. 1950025, 2018.
* (41) S. J. Hoque, Physics of gravitational waves in presence of positive cosmological constant. PhD thesis, HBNI, Mumbai, 2017.
* (42) A. Strominger, “The dS / CFT correspondence,” JHEP, vol. 10, p. 034, 2001.
* (43) D. Anninos, G. S. Ng, and A. Strominger, “Asymptotic Symmetries and Charges in De Sitter Space,” Class. Quant. Grav., vol. 28, p. 175019, 2011.
* (44) D. Anninos, G. S. Ng, and A. Strominger, “Future Boundary Conditions in De Sitter Space,” JHEP, vol. 02, p. 032, 2012.
* (45) P. B. Aneesh, S. J. Hoque, and A. Virmani, “Conserved charges in asymptotically de Sitter spacetimes,” Classical and Quantum Gravity, vol. 36, p. 205008, Oct. 2019. arXiv:1902.07415 [gr-qc, physics:hep-th].
* (46) X. He and Z. Cao, “New Bondi-type outgoing boundary condition for the Einstein equations with cosmological constant,” Int. J. Mod. Phys. D, vol. 24, no. 10, p. 1550081, 2015.
* (47) F. Xie and X. Zhang, “Peeling Property of Bondi-Sachs metrics for nonzero Cosmological Constant,” Sci. China A, vol. 59, p. 1753, 2016.
* (48) V.-L. Saw, “Mass-loss of an isolated gravitating system due to energy carried away by gravitational waves with a cosmological constant,” Phys. Rev. D, vol. 94, no. 10, p. 104004, 2016.
# First-order approximation of strong vector equilibria with application to
nondifferentiable constrained optimization
Amos Uderzo Dept. of Mathematics and Applications, University of Milano -
Bicocca, Milano, Italy<EMAIL_ADDRESS>
###### Abstract.
Vector equilibrium problems are a natural generalization of the Ky Fan
inequality to the context of partially ordered spaces, in which scalar bifunctions
are replaced with vector bifunctions. In the present paper, the local geometry
of the strong solution set to these problems is investigated through its
inner/outer conical approximations. Formulae for approximating the contingent
cone to the set of strong vector equilibria are established, which are
expressed via Bouligand derivatives of the bifunctions. These results are
subsequently employed for deriving both necessary and sufficient optimality
conditions for problems whose feasible region is the strong solution set to a
vector equilibrium problem, so that they can be cast as mathematical programs
with equilibrium constraints.
###### Key words and phrases:
Strong vector equilibrium, contingent cone, nondifferentiable optimization,
generalized differentiation, subdifferential, mathematical programming with
equilibrium constraint
###### 2010 Mathematics Subject Classification:
49J53, 49J52, 90C33
## 1\. Introduction
Given a mapping (vector-valued bifunction)
$f:\mathbb{R}^{n}\times\mathbb{R}^{n}\longrightarrow\mathbb{R}^{m}$, with
$\mathbb{R}^{m}$ being partially ordered by a (nontrivial) closed, convex and
pointed cone $C\subset\mathbb{R}^{m}$, and a nonempty, closed set
$K\subseteq\mathbb{R}^{n}$, the strong vector equilibrium problem is the
following problem
$({\rm VEP})$ $\hbox{ find $x\in K$ such that }f(x,z)\in C,\quad\forall z\in K.$
The set of all solutions (if any) to problem $({\rm VEP})$ will be denoted
throughout the paper by ${\mathcal{S}}{\mathcal{E}}$, namely
(1.1) ${\mathcal{S}}{\mathcal{E}}=\bigcap_{z\in K}f(\cdot,z)^{-1}(C)\cap K,$
and referred to as the set of strong vector equilibria. Clearly, strong vector
equilibrium problems are a natural generalization of the well-known Ky Fan
inequality to the more general context of partially ordered vector spaces.
Similarly to their scalar counterpart, they provide a convenient format for
treating in a unifying framework several different classes of problems, ranging
from multicriteria optimization problems, vector Nash equilibrium problems, to
vector variational inequalities and complementarity problems (see, for
instance, [1, 2, 3, 5, 9, 10, 16]).
As for many problems formalized by traditional or generalized equations, for
several purposes the mere knowledge of a single solution to $({\rm VEP})$ is
not enough. Very often, once a strong vector equilibrium
$\bar{x}\in{\mathcal{S}}{\mathcal{E}}$ has been found (or shown to exist), one
would need/aspire to glean insights into the behaviour of the set
${\mathcal{S}}{\mathcal{E}}$ around $\bar{x}$. The fact that $\bar{x}$ may be
an isolated element of ${\mathcal{S}}{\mathcal{E}}$ or lie in the boundary or,
instead, be an interior element of this set, might change dramatically the
outcome of a further analysis, where the local geometry of
${\mathcal{S}}{\mathcal{E}}$ around $\bar{x}$ does matter. On the other hand,
finding all the solutions of $({\rm VEP})$ around $\bar{x}$ could be a task
that one can hardly accomplish in many concrete cases. What is reasonably
achievable sometimes is only a local approximation of
${\mathcal{S}}{\mathcal{E}}$ near $\bar{x}$, yet suitable in specific
circumstances. To mention one of them, with connection with the subject of the
present paper, consider the successful approach to optimality conditions for
constrained problems, where at a certain step an approximated representation
of the feasible region already does the trick.
It is well known that in nonsmooth analysis tangent cones, working as a
surrogate of derivative for sets, are the main tools for formalizing first-
order (and beyond, if needed) approximations of sets. So the main aim of the
present paper is to provide elements for a conical approximation of strong
vector equilibria. It should be remarked that a difficulty in undertaking such
a task comes from the fact that the set ${\mathcal{S}}{\mathcal{E}}$ is not
explicitly defined. Besides, if addressing this question through the
reformulation of ${\mathcal{S}}{\mathcal{E}}$ as in $(\ref{eq:interEquiref})$,
classical results on the tangent cone representation of such sets as
$f(\cdot,z)^{-1}(C)\cap K$, now at disposal in nonsmooth analysis as a modern
development of the Lyusternik theorem (see [13, 15, 20]), seem not to be readily
exploitable because of the intersection over $K$ appearing in
$(\ref{eq:interEquiref})$.
In this context, the findings presented in what follows are focused on
representing the contingent cone to ${\mathcal{S}}{\mathcal{E}}$ at a given
strong vector equilibrium $\bar{x}$, which is one of the most employed conical
approximations in the literature devoted to variational analysis and
optimization. The representation of such a cone will be performed by means of
first-order approximations of the problem data, namely generalized derivatives
of the bifunction $f$ and tangent cones of the set $K$ defining $({\rm VEP})$.
In other words, following a principle deep-rooted in many contexts of
nonlinear analysis, approximations of the solution set to a given problem are
obtained by means of exact solutions to approximated problems.
The paper is structured as follows. Section 2 aims at recalling preliminary
notions of nonsmooth analysis, which play a role in formulating and
establishing the achievements of the paper. Section 3 contains the main
results concerning the first-order approximation of the contingent cone to
${\mathcal{S}}{\mathcal{E}}$. In Section 4, these results are applied to
derive both necessary and sufficient optimality conditions for
nondifferentiable optimization problems, whose constraint systems are
formalized as a strong vector equilibrium problem.
Below, the basic notations employed in the paper are listed. The acronyms
l.s.c., u.s.c. and p.h. stand for lower semicontinuous, upper semicontinuous
and positively homogeneous, respectively. $\mathbb{R}^{d}$ denotes the finite-
dimensional Euclidean space, with dimension $d\in\mathbb{N}$. The closed ball
centered at an element $x\in\mathbb{R}^{d}$, with radius $r\geq 0$, is denoted
by ${\rm B}\left(x;r\right)$. In particular, ${\mathbb{B}}={\rm
B}\left(\mathbf{0};1\right)$ stands for the unit ball, whereas ${\mathbb{S}}$
stands for the unit sphere, $\mathbf{0}$ denoting the null vector of an
Euclidean space. Given a subset $S\subseteq\mathbb{R}^{d}$, the distance of a
point $x$ from a set $S$ is denoted by ${\rm dist}\left(x;S\right)$, with the
convention that ${\rm dist}\left(x;\varnothing\right)=+\infty$. The symbols
${\rm int}\,S$, ${\rm cl}\,S$ and ${\rm cone}\,S$ denote the interior, the
closure and the conical hull of $S$, respectively. Given two subsets $A$
and $B$ of the same space, the excess of $A$ over $B$ is indicated by ${\rm
exc}(A;B)=\sup_{a\in A}{\rm dist}\left(a;B\right)$. By
$\mathscr{P}\hskip-2.84544pt\mathscr{H}(\mathbb{R}^{n},\mathbb{R}^{m})$ the
space of all continuous p.h. mappings acting between $\mathbb{R}^{n}$ and
$\mathbb{R}^{m}$ is denoted, equipped with the norm
$\|h\|_{\mathscr{P}\hskip-2.84544pt\mathscr{H}}=\sup_{u\in{\mathbb{S}}}\|h(u)\|$,
$h\in\mathscr{P}\hskip-2.84544pt\mathscr{H}(\mathbb{R}^{n},\mathbb{R}^{m})$,
while $\mathscr{L}(\mathbb{R}^{n},\mathbb{R}^{m})$ denotes its subspace of all
linear operators. The inner product of a Euclidean space will be denoted by
$\langle\cdot,\cdot\rangle$. Whenever $C$ is a cone in $\mathbb{R}^{n}$, by
${C}^{{}^{\ominus}}=\\{v\in\mathbb{R}^{n}:\ \langle v,c\rangle\leq
0,\quad\forall c\in C\\}$ the negative dual (a.k.a. polar) cone to $C$ is
denoted. Given a function
$\varphi:\mathbb{X}\longrightarrow\mathbb{R}\cup\\{\pm\infty\\}$, the symbol
$\partial\varphi(x)$ denotes the subdifferential of $\varphi$ at $x$ in the
sense of convex analysis (a.k.a. Fenchel subdifferential). The normal cone to
a set $S\subseteq\mathbb{R}^{q}$ at $x\in S$ in the sense of convex analysis
is denoted by ${\rm N}(x;S)=\\{v\in\mathbb{R}^{n}:\ \langle v,s-x\rangle,\
\forall s\in S\\}$.
## 2\. Preliminaries
### 2.1. Approximation of sets
Given a nonempty set $K\subseteq\mathbb{R}^{n}$ and $\bar{x}\in K$, in the
sequel the following different notions of tangent cone will be mainly
employed:
* (i)
the contingent (a.k.a. Bouligand tangent) cone to $K$ at $\bar{x}$, which is
defined by
${\rm T}(\bar{x};K)=\\{v\in\mathbb{R}^{n}:\ \exists(v_{n})_{n},\ v_{n}\to v,\
\exists(t_{n})_{n},\ t_{n}\downarrow 0:\ \bar{x}+t_{n}v_{n}\in K,\ \forall
n\in\mathbb{N}\\};$
* (ii)
the cone of radial (a.k.a. weak feasible) directions to $K$ at $\bar{x}$,
which is defined by
${\rm T}_{\rm r}(\bar{x};K)=\\{v\in\mathbb{R}^{n}:\ \forall\epsilon>0\ \exists
t_{\epsilon}\in(0,\epsilon):\ \bar{x}+t_{\epsilon}v\in K\\}.$
Clearly, for every $K\subseteq\mathbb{R}^{n}$ and $\bar{x}\in K$, it is ${\rm
T}_{\rm r}(\bar{x};K)\subseteq{\rm T}(\bar{x};K)$. Moreover ${\rm
T}(\bar{x};K)$ is always closed. If, in particular, $K$ is convex, then the
following representations hold
(2.1) ${\rm T}_{\rm r}(\bar{x};K)={\rm cone}\,(K-\bar{x})\quad\hbox{ and
}\quad{\rm T}(\bar{x};K)={\rm cl}\,({\rm cone}\,(K-\bar{x}))={\rm cl}\,{\rm
T}_{\rm r}(\bar{x};K)$
(see [20, Proposition 11.1.2(d)]). Thus, in such an event, both ${\rm T}_{\rm
r}(\bar{x};K)$ and ${\rm T}(\bar{x};K)$ are convex. It is well known that an
equivalent (variational) reformulation of the notion of contingent cone is
provided by the equality
(2.2) ${\rm T}(\bar{x};K)=\left\\{v\in\mathbb{R}^{n}:\ \liminf_{t\downarrow
0}{{\rm dist}\left(\bar{x}+tv;K\right)\over t}=0\right\\}.$
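The characterization in $(\ref{eq:charTangcone})$ lends itself to a direct numerical test whenever the distance from $K$ is computable. The Python sketch below is a hypothetical illustration, with $K=\mathbb{R}^{2}_{+}$ (whose projection is componentwise clipping) and the liminf approximated by a minimum over a grid of $t\downarrow 0$; the grid and tolerance are ad hoc choices:

```python
import numpy as np

def dist_to_orthant(x):
    # Euclidean distance from x to the nonnegative orthant R^2_+:
    # the metric projection clips negative components to zero.
    return np.linalg.norm(x - np.maximum(x, 0.0))

def in_contingent_cone(xbar, v, dist_fn, ts=None):
    # Numerical test of (2.2): v belongs to T(xbar; K) iff
    # liminf_{t -> 0+} dist(xbar + t v; K) / t = 0.
    # The liminf is approximated by the minimum ratio over a grid t -> 0+.
    if ts is None:
        ts = np.logspace(-1, -8, 30)
    ratios = [dist_fn(xbar + t * v) / t for t in ts]
    return min(ratios) < 1e-6

xbar = np.zeros(2)
print(in_contingent_cone(xbar, np.array([1.0, 0.0]), dist_to_orthant))   # True
print(in_contingent_cone(xbar, np.array([1.0, -1.0]), dist_to_orthant))  # False
```

For this choice of $K$ the test reproduces ${\rm T}(\mathbf{0};\mathbb{R}^{2}_{+})=\mathbb{R}^{2}_{+}$, accepting the direction $(1,0)$ and rejecting $(1,-1)$.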
###### Remark 2.1.
Whenever a convex set $K\subseteq\mathbb{R}^{n}$ is, in particular,
polyhedral, one has ${\rm T}_{\rm r}(\bar{x};K)={\rm T}(\bar{x};K)$. To see
this, it suffices to exploit the formulae in $(\ref{eq:convexWTangcone})$ and
to observe that, in the present circumstance, ${\rm T}_{\rm r}(\bar{x};K)$
happens to be closed. The latter follows from the fact that, if $S$ is a
closed affine half-space in $\mathbb{R}^{n}$, then ${\rm T}_{\rm
r}(\bar{x};S)={\rm cone}\,(S-\bar{x})=S-\bar{x}$ is a closed set and from the
fact that, if $K_{1}$ and $K_{2}$ are convex sets with $\bar{x}\in\ K_{1}\cap
K_{2}$, then it holds ${\rm T}_{\rm r}(\bar{x};K_{1}\cap K_{2})={\rm T}_{\rm
r}(\bar{x};K_{1})\cap{\rm T}_{\rm r}(\bar{x};K_{2})$.
Along with the above cones, in the context of optimization problems some
further notions of first-order conical approximation will be needed:
* (iii)
the cone of radial inner (a.k.a. feasible) directions to $K$ at $\bar{x}$,
which is defined by
${\rm T}_{\rm f}(\bar{x};K)=\\{v\in\mathbb{R}^{n}:\ \exists\epsilon>0:\
\forall t\in(0,\epsilon),\ \bar{x}+tv\in K\\};$
* (iv)
the cone of inner directions (a.k.a. interior displacements) to $K$ at
$\bar{x}$, which is defined by
${\rm I}(\bar{x};K)=\\{v\in\mathbb{R}^{n}:\ \exists\epsilon>0:\ \forall
u\in{\rm B}\left(v;\epsilon\right),\ \forall t\in(0,\epsilon),\ \bar{x}+tu\in
K\\}.$
For a systematic discussion about properties of the above tangent cones and
their relationships, the reader is referred for instance to [4, Chapter 4],
[7, Chapter I.1], [8], [18, Chapter 2], and [20, Chapter 11].
### 2.2. Approximation of scalar functions
Given a function
$\varphi:\mathbb{R}^{n}\longrightarrow\mathbb{R}\cup\\{\pm\infty\\}$, let
$\bar{x}\in\varphi^{-1}(\mathbb{R})$. The set
$\widehat{\partial}^{+}\varphi(\bar{x})=\left\\{v\in\mathbb{R}^{n}:\
\limsup_{x\to\bar{x}}{\varphi(x)-\varphi(\bar{x})-\langle
v,x-\bar{x}\rangle\over\|x-\bar{x}\|}\leq 0\right\\}$
is called (Fréchet) upper subdifferential of $\varphi$ at $\bar{x}$. Any
element $v\in\widehat{\partial}^{+}\varphi(\bar{x})$ can be characterized by
the existence of a function $\psi:\mathbb{R}^{n}\longrightarrow\mathbb{R}$
such that $\varphi(\bar{x})=\psi(\bar{x})$, $\varphi(x)\leq\psi(x)$, for every
$x\in\mathbb{R}^{n}$, $\psi$ is (Fréchet) differentiable at $\bar{x}$ and
$v=\nabla\psi(\bar{x})$. If $\varphi:\mathbb{R}^{n}\longrightarrow\mathbb{R}$
is concave, then $\widehat{\partial}^{+}\varphi(\bar{x})$ coincides with the
superdifferential (a.k.a. upper subdifferential) in the sense of convex
analysis, i.e. $-\partial(-\varphi)(\bar{x})$.
Whenever $\varphi$ is an u.s.c. function, the upper subdifferential admits
another characterization in terms of Dini-Hadamard directional derivative, in
fact being equivalent to the Dini-Hadamard upper subdifferential (in finite-
dimensional spaces, the Fréchet bornology is equivalent to the Hadamard
bornology). More precisely, it holds
(2.3) $\widehat{\partial}^{+}\varphi(\bar{x})=\\{v\in\mathbb{R}^{n}:\ \langle
v,w\rangle\geq{\rm D}^{+}_{H}\varphi(\bar{x};w),\quad\forall
w\in\mathbb{R}^{n}\\},$
where
${\rm D}^{+}_{H}\varphi(\bar{x};w)=\limsup_{u\to w\atop t\downarrow
0}{\varphi(\bar{x}+tu)-\varphi(\bar{x})\over t}$
denotes the Dini-Hadamard upper directional derivative of $\varphi$ at
$\bar{x}$, in the direction $w\in\mathbb{R}^{n}$ (see [15, Chapter 1.3], [19,
Chapter 8.B]). Let us recall that, whenever $\varphi$ is locally Lipschitz
around $\bar{x}$, its Dini-Hadamard upper directional derivative at $\bar{x}$ takes
the following simpler form
${\rm D}^{+}_{D}\varphi(\bar{x};w)=\limsup_{t\downarrow
0}{\varphi(\bar{x}+tw)-\varphi(\bar{x})\over t},$
which is known as Dini upper directional derivative. The lower versions of
these generalized derivatives are
${\rm D}^{-}_{H}\varphi(\bar{x};w)=\liminf_{u\to w\atop t\downarrow
0}{\varphi(\bar{x}+tu)-\varphi(\bar{x})\over t},$
called the Dini-Hadamard lower directional (a.k.a. contingent) derivative of
$\varphi$ at $\bar{x}$, in the direction $w$, and
${\rm D}^{-}_{D}\varphi(\bar{x};w)=\liminf_{t\downarrow
0}{\varphi(\bar{x}+tw)-\varphi(\bar{x})\over t},$
called the Dini lower directional derivative of $\varphi$ at $\bar{x}$, in the
direction $w$.
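As a minimal numerical illustration of the characterization in (2.3), consider the concave function $\varphi(x)=-|x|$ on $\mathbb{R}$: since $\varphi$ is Lipschitz, its Dini-Hadamard upper derivative reduces to the Dini form, and (2.3) yields $\widehat{\partial}^{+}\varphi(0)=[-1,1]$, in agreement with $-\partial(-\varphi)(0)$. The Python sketch below (grids and tolerances are ad hoc choices) verifies this by sampling directions:

```python
import numpy as np

def phi(x):
    # Concave Lipschitz test function: phi(x) = -|x|.
    return -abs(x)

def dini_upper(phi, xbar, w, ts=np.logspace(-1, -8, 40)):
    # Dini upper directional derivative (phi is Lipschitz, so it agrees with
    # the Dini-Hadamard one): limsup_{t -> 0+} (phi(xbar + t w) - phi(xbar)) / t,
    # approximated by a maximum over a grid of t.
    return max((phi(xbar + t * w) - phi(xbar)) / t for t in ts)

def in_upper_subdiff(v, xbar, ws=np.linspace(-2, 2, 81)):
    # Characterization (2.3): v is in the upper subdifferential at xbar iff
    # <v, w> >= D^+_H phi(xbar; w) for all directions w (sampled on a grid).
    return all(v * w >= dini_upper(phi, xbar, w) - 1e-9 for w in ws)

print(in_upper_subdiff(0.5, 0.0))    # |v| <= 1: belongs
print(in_upper_subdiff(1.5, 0.0))    # |v| > 1: does not
```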
The set
$\widehat{\partial}\varphi(\bar{x})=\left\\{v\in\mathbb{R}^{n}:\
\liminf_{x\to\bar{x}}{\varphi(x)-\varphi(\bar{x})-\langle
v,x-\bar{x}\rangle\over\|x-\bar{x}\|}\geq 0\right\\}$
is called (Fréchet) regular subdifferential of $\varphi$ at $\bar{x}$.
Whenever $\varphi$ is l.s.c. around $\bar{x}$, it admits the following
representation in terms of Dini-Hadamard lower directional generalized
derivative
(2.4) $\widehat{\partial}\varphi(\bar{x})=\\{v\in\mathbb{R}^{n}:\ \langle
v,w\rangle\leq{\rm D}^{-}_{H}\varphi(\bar{x};w),\quad\forall
w\in\mathbb{R}^{n}\\}.$
Whenever $\varphi$ is Fréchet differentiable at $\bar{x}$, one has
$\widehat{\partial}^{+}\varphi(\bar{x})=\widehat{\partial}\varphi(\bar{x})=\\{\nabla\varphi(\bar{x})\\}$,
where $\nabla\varphi(\bar{x})$ denotes the gradient of $\varphi$ at $\bar{x}$.
Comprehensive discussions from various viewpoints as well as detailed material
about these generalized derivatives can be found in many textbooks devoted to
nonsmooth analysis, among which [7, Chapter I.1], [15, Chapter 1], [18,
Chapter 2], [19, Chapter 8], [20].
### 2.3. Approximation of mappings and bifunctions
A mapping $g:\mathbb{R}^{n}\longrightarrow\mathbb{R}^{m}$ is said to be
$B$-differentiable at $\bar{x}\in\mathbb{R}^{n}$ if there exists a mapping
${\rm
D}_{B}g(\bar{x})\in\mathscr{P}\hskip-2.84544pt\mathscr{H}(\mathbb{R}^{n},\mathbb{R}^{m})$
such that
$\lim_{x\to\bar{x}}{\|g(x)-g(\bar{x})-{\rm
D}_{B}g(\bar{x})(x-\bar{x})\|\over\|x-\bar{x}\|}=0.$
As a consequence of the continuity of ${\rm D}_{B}g(\bar{x})$, it is readily
seen that if $g$ is $B$-differentiable at $\bar{x}$, it is also continuous at
the same point. Notice that, when, in particular, ${\rm
D}_{B}g(\bar{x})\in\mathscr{L}(\mathbb{R}^{n},\mathbb{R}^{m})$, $g$ turns out
to be (Fréchet) differentiable at $\bar{x}$. In such an event, its derivative,
represented by its Jacobian matrix, will be indicated by $\nabla g(\bar{x})$.
Given a nonempty set $K\subseteq\mathbb{R}^{n}$, a bifunction
$f:\mathbb{R}^{n}\times\mathbb{R}^{n}\longrightarrow\mathbb{R}^{m}$ is said to
be $B$-differentiable at $\bar{x}\in K$, uniformly on $K$, if there exists a
family $\\{{\rm
D}_{B}f(\bar{x},z)\in\mathscr{P}\hskip-2.84544pt\mathscr{H}(\mathbb{R}^{n},\mathbb{R}^{m}):\
z\in K\\}$ such that for every $\epsilon>0$ $\exists\delta_{\epsilon}>0$ such
that
$\sup_{z\in K}{\|f(x,z)-f(\bar{x},z)-{\rm
D}_{B}f(\bar{x},z)(x-\bar{x})\|\over\|x-\bar{x}\|}<\epsilon,\quad\forall
x\in{\rm B}\left(\bar{x};\delta_{\epsilon}\right).$
It should be clear that the above notion of generalized differentiation for
bifunctions is a kind of partial differentiation, in that it considers variations of
a mapping with respect to changes of one variable only.
###### Example 2.2.
(i) Separable mappings: let us consider mappings
$f:\mathbb{R}^{n}\times\mathbb{R}^{n}\longrightarrow\mathbb{R}^{m}$, which can
be expressed in the form
$f(x,z)=f_{1}(x)+f_{2}(z),$
for proper $f_{1},\,f_{2}:\mathbb{R}^{n}\longrightarrow\mathbb{R}^{m}$.
Whenever $f_{1}$ is $B$-differentiable at $\bar{x}$, with $B$-derivative ${\rm
D}_{B}f_{1}(\bar{x})$, the bifunction $f$ is $B$-differentiable at $\bar{x}$
uniformly on $K$, with $\\{{\rm D}_{B}f(\bar{x},z):\ z\in K\\}=\\{{\rm
D}_{B}f_{1}(\bar{x})\\}$.
(ii) Factorable mappings: whenever a mapping
$f:\mathbb{R}^{n}\times\mathbb{R}^{n}\longrightarrow\mathbb{R}^{m}$ can be
factorized as
$f(x,z)=\alpha(z)g(x),$
where $g:\mathbb{R}^{n}\longrightarrow\mathbb{R}^{m}$ is $B$-differentiable at
$\bar{x}$, with $B$-derivative ${\rm D}_{B}g(\bar{x})$, and
$\alpha:\mathbb{R}^{n}\longrightarrow\mathbb{R}$ is bounded on $K$, the
bifunction $f$ is $B$-differentiable at $\bar{x}$ uniformly on $K$, with
$\\{{\rm D}_{B}f(\bar{x},z):\ z\in K\\}=\\{\alpha(z){\rm
D}_{B}g(\bar{x}):\ z\in K\\}$.
(iii) Composition with differentiable mappings: if
$f:\mathbb{R}^{n}\times\mathbb{R}^{n}\longrightarrow\mathbb{R}^{p}$ is
$B$-differentiable at $\bar{x}$ uniformly on $K$ and
$g:\mathbb{R}^{p}\longrightarrow\mathbb{R}^{m}$ is Fréchet differentiable at
each point $f(\bar{x},z)$, with $z\in K$, then their composition $g\circ f$
turns out to be $B$-differentiable at $\bar{x}$ uniformly on $K$, with
$\\{{\rm D}_{B}(g\circ f)(\bar{x},z):\ z\in K\\}=\\{\nabla g(f(\bar{x},z)){\rm
D}_{B}f(\bar{x},z):\ z\in K\\}$.
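The uniform $B$-differentiability of Example 2.2(ii) can also be observed numerically on a hypothetical factorable instance. In the sketch below, the choices of $g$, $\alpha$ and the grid sampling of $K=\mathbb{R}^{2}_{+}$ are illustrative only; the uniform remainder ratio is seen to vanish as $\|x-\bar{x}\|\downarrow 0$:

```python
import numpy as np

# Hypothetical factorable bifunction f(x, z) = alpha(z) g(x): g is
# B-differentiable at the origin with a positively homogeneous (nonlinear)
# derivative, alpha is bounded on K = R^2_+ (sampled on a finite grid).
def g(x):
    return np.array([x[0] ** 2, abs(x[0]) + x[1]])

def DBg0(u):
    # B-derivative of g at 0: u -> (0, |u1| + u2); p.h., continuous, not linear.
    return np.array([0.0, abs(u[0]) + u[1]])

def alpha(z):
    return 1.0 / (1.0 + np.dot(z, z))          # bounded by 1 on K

def f(x, z):
    return alpha(z) * g(x)

def uniform_ratio(r, zs):
    # sup_z ||f(x,z) - f(0,z) - alpha(z) DBg0(x)|| / ||x|| at ||x|| = r
    x = r * np.array([1.0, 1.0]) / np.sqrt(2.0)
    rem = max(np.linalg.norm(f(x, z) - f(np.zeros(2), z) - alpha(z) * DBg0(x))
              for z in zs)
    return rem / np.linalg.norm(x)

zs = [np.array([a, b]) for a in np.linspace(0, 5, 15) for b in np.linspace(0, 5, 15)]
for r in (1e-1, 1e-2, 1e-3):
    print(f"||x|| = {r:.0e}:  uniform remainder ratio = {uniform_ratio(r, zs):.2e}")
```

Here the remainder equals $\alpha(z)\,(x_{1}^{2},0)$, so the ratio decays linearly in $\|x\|$, uniformly over the sampled $z$.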
A stronger notion of uniform $B$-differentiability will be needed for one of
the main results, which is based on strict $B$-differentiability. Given a
nonempty set $K\subseteq\mathbb{R}^{n}$, a bifunction
$f:\mathbb{R}^{n}\times\mathbb{R}^{n}\longrightarrow\mathbb{R}^{m}$ is said to
be strictly $B$-differentiable at $\bar{x}\in K$, uniformly on $K$, if there
exists a family $\\{{\rm
D}_{B}f(\bar{x},z)\in\mathscr{P}\hskip-2.84544pt\mathscr{H}(\mathbb{R}^{n},\mathbb{R}^{m}):\
z\in K\\}$ such that for every $\epsilon>0$ $\exists\delta_{\epsilon}>0$ such
that
$\sup_{z\in K}{\|f(x_{1},z)-f(x_{2},z)-{\rm
D}_{B}f(\bar{x},z)(x_{1}-x_{2})\|\over\|x_{1}-x_{2}\|}<\epsilon,\quad\forall
x_{1},\,x_{2}\in{\rm B}\left(\bar{x};\delta_{\epsilon}\right),\ x_{1}\neq
x_{2}.$
### 2.4. Distance from strong vector equilibria
The function $\nu:\mathbb{R}^{n}\longrightarrow[0,+\infty)$, defined by
(2.5) $\nu(x)=\sup_{z\in K}{\rm dist}\left(f(x,z);C\right),$
can be exploited as a natural measure of the distance of a given point
$x\in\mathbb{R}^{n}$ from being a solution to $({\rm VEP})$. Clearly it is
${\mathcal{S}}{\mathcal{E}}=\nu^{-1}(0)\cap K$, while positive values of $\nu$
quantify the violation of the strong equilibrium condition in $({\rm VEP})$.
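On a toy instance of $({\rm VEP})$ the residual $(\ref{eq:defnumf})$ can be evaluated directly. The Python sketch below uses hypothetical data: $K=[0,1]^{2}$ sampled on a grid, $C=\mathbb{R}^{2}_{+}$ and $f(x,z)=x-z$, chosen so that ${\mathcal{S}}{\mathcal{E}}$ reduces to the single point $(1,1)$; accordingly $\nu$ vanishes at the equilibrium and is positive elsewhere:

```python
import numpy as np

def dist_to_C(y):
    # Euclidean distance to C = R^2_+ via the projection (clip negatives to 0)
    return np.linalg.norm(y - np.maximum(y, 0.0))

grid = np.linspace(0.0, 1.0, 21)
K = [np.array([a, b]) for a in grid for b in grid]   # grid sample of [0,1]^2

def nu(x):
    # Residual (2.5): nu(x) = sup_{z in K} dist(f(x,z); C) with f(x,z) = x - z
    return max(dist_to_C(x - z) for z in K)

print(nu(np.array([1.0, 1.0])))   # at the equilibrium: 0.0
print(nu(np.array([0.5, 1.0])))   # positive: measures the violation
```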
A local error bound (in terms of $\nu$) is said to be valid near
$\bar{x}\in{\mathcal{S}}{\mathcal{E}}$ for problem $({\rm VEP})$ if there
exist positive $\kappa$ and $\delta$ such that
(2.6) ${\rm
dist}\left(x;{\mathcal{S}}{\mathcal{E}}\right)\leq\kappa\nu(x),\quad\forall
x\in{\rm B}\left(\bar{x};\delta\right)\cap K.$
Notice that, whereas for computing ${\rm
dist}\left(x;{\mathcal{S}}{\mathcal{E}}\right)$ one needs to know all the
solutions to $({\rm VEP})$ near $\bar{x}$, the value of $\nu(x)$ can be
computed directly by means of the problem data. A study of sufficient conditions
for the error bound in $(\ref{in:erboSE})$ to hold has been recently
undertaken in [21]. In particular, the following global error bound, valid
under a uniform $B$-differentiability assumption on $f$, is known to hold.
###### Proposition 2.3 ([21]).
With reference to a problem $({\rm VEP})$, suppose that:
* (i)
each function $x\mapsto f(x,z)$ is $C$-u.s.c. on $K$, for every $z\in K$;
* (ii)
the set-valued mapping $x\leadsto f(x,K)$ takes $C$-bounded values on $K$;
* (iii)
$K$ is convex;
* (iv)
$f$ is $B$-differentiable uniformly on $K$ at each point of
$K\backslash{\mathcal{S}}{\mathcal{E}}$;
* (v)
there exists $\sigma>0$ with the property that for every $x_{0}\in
K\backslash{\mathcal{S}}{\mathcal{E}}$ there is $u_{0}\in{\mathbb{S}}\cap{\rm
cone}\,(K-x_{0})$ such that
${\rm D}_{B}f(x_{0},z)(u_{0})+\sigma{\mathbb{B}}\subseteq C,\quad\forall z\in
K.$
Then, ${\mathcal{S}}{\mathcal{E}}$ is nonempty, closed and the following
estimate holds true
${\rm
dist}\left(x;{\mathcal{S}}{\mathcal{E}}\right)\leq{\nu(x)\over\sigma},\quad\forall
x\in K.$
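The estimate of Proposition 2.3 can be checked on toy data. The sketch below again uses the hypothetical instance $f(x,z)=x-z$, $K=[0,1]^{2}$, $C=\mathbb{R}^{2}_{+}$: here ${\rm D}_{B}f(x_{0},z)$ is the identity for every $z$, so hypothesis (v) holds with $u_{0}=(1,1)/\sqrt{2}$ and $\sigma=1/\sqrt{2}$ (since $u_{0}+\sigma{\mathbb{B}}\subseteq\mathbb{R}^{2}_{+}$), and the inequality ${\rm dist}\left(x;{\mathcal{S}}{\mathcal{E}}\right)\leq\nu(x)/\sigma$ is verified over a grid:

```python
import numpy as np

def dist_to_C(y):
    # distance to C = R^2_+ via componentwise clipping
    return np.linalg.norm(y - np.maximum(y, 0.0))

grid = np.linspace(0.0, 1.0, 21)
K = [np.array([a, b]) for a in grid for b in grid]
sigma = 1.0 / np.sqrt(2.0)              # admissible sigma for hypothesis (v)
xstar = np.array([1.0, 1.0])            # the unique strong equilibrium

# Largest violation of dist(x; SE) <= nu(x)/sigma over the sampled K,
# where nu(x) = sup_{z in K} dist(x - z; C)
worst = max(np.linalg.norm(x - xstar) - max(dist_to_C(x - z) for z in K) / sigma
            for x in K)
print(worst <= 1e-12)                   # the global estimate holds on the grid
```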
## 3\. Tangential approximation of ${\mathcal{S}}{\mathcal{E}}$
###### Theorem 3.1 (Inner approximation).
With reference to a problem $({\rm VEP})$, let
$\bar{x}\in{\mathcal{S}}{\mathcal{E}}$. Suppose that:
* (i)
$f$ is $B$-differentiable at $\bar{x}$, uniformly on $K$, with $\\{{\rm
D}_{B}f(\bar{x},z):\ z\in K\\}$;
* (ii)
a local error bound such as $(\ref{in:erboSE})$ is valid near $\bar{x}$.
Then, it holds
(3.1) $\bigcap_{z\in K}{\rm D}_{B}f(\bar{x},z)^{-1}(C)\cap{\rm T}_{\rm
r}(\bar{x};K)\subseteq{\rm T}(\bar{x};{\mathcal{S}}{\mathcal{E}}).$
###### Proof.
Let us start with observing that, since it is ${\rm
D}_{B}f(\bar{x},z)\in\mathscr{P}\hskip-2.84544pt\mathscr{H}(\mathbb{R}^{n},\mathbb{R}^{m})$
for every $z\in K$, and $C$ is a cone, each set ${\rm
D}_{B}f(\bar{x},z)^{-1}(C)$ turns out to be a cone containing $\mathbf{0}$, as
well as ${\rm T}_{\rm r}(\bar{x};K)$ does by definition. Thus, if taking
$v=\mathbf{0}\in\bigcap_{z\in K}{\rm D}_{B}f(\bar{x},z)^{-1}(C)\cap{\rm
T}_{\rm r}(\bar{x};K)$, the inclusion $v\in{\rm
T}(\bar{x};{\mathcal{S}}{\mathcal{E}})$ obviously holds as the latter cone is
closed. So, take an arbitrary $v\in\left(\bigcap_{z\in K}{\rm
D}_{B}f(\bar{x},z)^{-1}(C)\cap{\rm T}_{\rm
r}(\bar{x};K)\right)\backslash\\{\mathbf{0}\\}$. Since both the sets in the
inclusion in $(\ref{in:inapproxTangEqui})$ are cones, one can assume without
any loss of generality that $\|v\|=1$. In the light of the characterization
via $(\ref{eq:charTangcone})$, $v$ is proven to belong to ${\rm
T}(\bar{x};{\mathcal{S}}{\mathcal{E}})$ if one shows that
(3.2) $\liminf_{t\downarrow 0}{{\rm
dist}\left(\bar{x}+tv;{\mathcal{S}}{\mathcal{E}}\right)\over t}=0.$
Showing the equality in $(\ref{eq:thesisreform})$ amounts to showing that for
every $\tau>0$ and $\epsilon>0$ there exists $t_{0}\in(0,\tau)$ such that
(3.3) ${{\rm dist}\left(\bar{x}+t_{0}v;{\mathcal{S}}{\mathcal{E}}\right)\over
t_{0}}\leq\epsilon.$
So, let us fix ad libitum $\tau$ and $\epsilon$. Hypothesis (ii) ensures the
existence of $\delta,\ \kappa>0$ as in $(\ref{in:erboSE})$. By virtue of
hypothesis (i), corresponding to $\epsilon/\kappa$, there exists
$\delta_{\epsilon}>0$ such that
$f(x,z)\in f(\bar{x},z)+{\rm
D}_{B}f(\bar{x},z)(x-\bar{x})+\kappa^{-1}\epsilon\|x-\bar{x}\|{\mathbb{B}},\quad\forall
x\in{\rm B}\left(\bar{x};\delta_{\epsilon}\right),\ \forall z\in K,$
and hence, in particular,
$f(\bar{x}+tv,z)\in f(\bar{x},z)+t{\rm
D}_{B}f(\bar{x},z)(v)+\kappa^{-1}\epsilon t{\mathbb{B}},\quad\forall
t\in(0,\delta_{\epsilon}),\ \forall z\in K.$
By taking into account that $\bar{x}\in{\mathcal{S}}{\mathcal{E}}$ and
$v\in{\rm D}_{B}f(\bar{x},z)^{-1}(C)$ for every $z\in K$, the above inclusion
implies
$f(\bar{x}+tv,z)\in C+tC+\kappa^{-1}\epsilon t{\mathbb{B}}\subseteq
C+\kappa^{-1}\epsilon t{\mathbb{B}},\quad\forall t\in(0,\delta_{\epsilon}),\
\forall z\in K.$
In terms of the residual function $\nu$ introduced in $(\ref{eq:defnumf})$,
this means
(3.4) $\displaystyle\nu(\bar{x}+tv)=\sup_{z\in K}{\rm
dist}\left(f(\bar{x}+tv,z);C\right)$ $\displaystyle\leq$ $\displaystyle{\rm
exc}(C+\kappa^{-1}\epsilon t{\mathbb{B}};C)={\rm exc}(\kappa^{-1}\epsilon
t{\mathbb{B}};C)$ $\displaystyle\leq$ $\displaystyle\kappa^{-1}\epsilon
t,\quad\forall t\in(0,\delta_{\epsilon}),$
where the second equality holds because $C$ is a convex cone. On the other
hand, according to hypothesis (ii) there exists
$\delta_{0}\in(0,\min\\{\tau,\delta,\delta_{\epsilon}\\})$ such that
(3.5) ${\rm
dist}\left(x;{\mathcal{S}}{\mathcal{E}}\right)\leq\kappa\nu(x),\quad\forall
x\in{\rm B}\left(\bar{x};\delta_{0}\right)\cap K.$
Since it is $v\in{\rm T}_{\rm r}(\bar{x};K)$, for some
$t_{*}\in(0,\delta_{0})$ it happens
$\bar{x}+t_{*}v\in K\cap{\rm B}\left(\bar{x};\delta_{0}\right),$
and therefore, by inequality $(\ref{in:erboEquidelta})$, one obtains
(3.6) $\displaystyle{\rm
dist}\left(\bar{x}+t_{*}v;{\mathcal{S}}{\mathcal{E}}\right)\leq\kappa\nu(\bar{x}+t_{*}v).$
By combining inequalities $(\ref{in:resepst})$ and $(\ref{in:distnuv})$, as it
is $t_{*}<\delta_{0}<\delta_{\epsilon}$, one obtains
${\rm
dist}\left(\bar{x}+t_{*}v;{\mathcal{S}}{\mathcal{E}}\right)\leq\kappa\cdot\kappa^{-1}\epsilon
t_{*}=\epsilon t_{*}.$
The last inequality shows that $(\ref{eq:thesisreform2})$ is true for
$t_{0}=t_{*}\in(0,\tau)$, thereby completing the proof. ∎
The inclusion in $(\ref{in:inapproxTangEqui})$ states that, under proper
assumptions, any solution of the (approximated) problem
(3.7) $\hbox{ find $v\in{\rm T}_{\rm r}(\bar{x};K)$ such that }{\rm
D}_{B}f(\bar{x},z)(v)\in C,\quad\forall z\in K,$
provides a vector which is tangent to ${\mathcal{S}}{\mathcal{E}}$ at
$\bar{x}$ in the sense of Bouligand. Notice that problem $(\ref{in:HomVEP})$
is almost in the form $({\rm VEP})$ (it would be exactly in the form $({\rm
VEP})$ if ${\rm T}_{\rm r}(\bar{x};K)=K$). Roughly speaking, all of this means
that if the problem data of $({\rm VEP})$ are properly approximated ($K$ by
its radial direction cone, $f$ by its generalized derivatives in the sense of
Bouligand, respectively) near a reference solution $\bar{x}$, then the
solutions of the resulting approximated problem $(\ref{in:HomVEP})$ work as a
first-order approximation of the solution set to the original problem $({\rm
VEP})$. Problem $(\ref{in:HomVEP})$ is typically expected to be easier than
$({\rm VEP})$ by virtue of the structural properties of its data. Basically,
$(\ref{in:HomVEP})$ can be regarded as a cone constrained p.h. vector
inequality system, so its solution set is a cone. Furthermore, if $K$ is
convex and ${\rm
D}_{B}f(\bar{x},z):\mathbb{R}^{n}\longrightarrow\mathbb{R}^{m}$ is $C$-concave
for every $z\in K$, the latter meaning that
${\rm D}_{B}f(\bar{x},z)(v_{1})+{\rm
D}_{B}f(\bar{x},z)(v_{2})\leq_{{}_{C}}{\rm
D}_{B}f(\bar{x},z)(v_{1}+v_{2}),\quad\forall v_{1},\,v_{2}\in\mathbb{R}^{n},$
where $\leq_{{}_{C}}$ denotes the partial ordering on $\mathbb{R}^{m}$ induced
in the standard way by the cone $C$, then the solution set to problem
$(\ref{in:HomVEP})$ is a convex cone.
As a further comment on Theorem 3.1, it must be remarked that the inclusion in
$(\ref{in:inapproxTangEqui})$ provides only a one-sided approximation of ${\rm
T}(\bar{x};{\mathcal{S}}{\mathcal{E}})$, which may happen to be rather rough.
This fact is illustrated by the next example.
###### Example 3.2 (Inclusion $(\ref{in:inapproxTangEqui})$ may be strict).
Consider the problem $({\rm VEP})$ defined by the following data:
$K=C=\mathbb{R}^{2}_{+}=\\{x=(x_{1},x_{2})\in\mathbb{R}^{2}:\ x_{1}\geq 0,\
x_{2}\geq 0\\}$ and a vector-valued bifunction
$f:\mathbb{R}^{2}\times\mathbb{R}^{2}\longrightarrow\mathbb{R}^{2}$ given by
$f(x_{1},x_{2},z_{1},z_{2})=\left(\begin{array}[]{c}{1\over
2}(-m_{z}^{-}x_{1}+x_{2}+1)^{2}\\\ \\\ {1\over
2}(m_{z}^{+}x_{1}-x_{2}+1)^{2}\end{array}\right),$
where
$m_{z}^{-}=1-{1\over\|z\|^{2}+1}\qquad\hbox{ and }\qquad
m_{z}^{+}=1+{1\over\|z\|^{2}+1},\quad\ z\in\mathbb{R}^{2}.$
Since $f(x,z)\in\mathbb{R}^{2}_{+}$ for every
$(x,z)\in\mathbb{R}^{2}\times\mathbb{R}^{2}$, it is clear that
${\mathcal{S}}{\mathcal{E}}=K=\mathbb{R}^{2}_{+}$. Fix
$\bar{x}=\mathbf{0}\in{\mathcal{S}}{\mathcal{E}}$, so one has
${\rm T}_{\rm r}(\mathbf{0};K)={\rm
T}(\mathbf{0};{\mathcal{S}}{\mathcal{E}})=\mathbb{R}^{2}_{+}.$
In view of the next calculations, it is convenient to observe that
$f(x,z)=(g\circ h)(x,z),$
where the mappings $g:\mathbb{R}^{2}\longrightarrow\mathbb{R}^{2}$ and
$h:\mathbb{R}^{2}\times\mathbb{R}^{2}\longrightarrow\mathbb{R}^{2}$ are given
respectively by
$g(y)=\left(\begin{array}[]{c}y_{1}^{2}/2\\\
y_{2}^{2}/2\end{array}\right)\qquad\hbox{ and }\qquad
h(x,z)=\left(\begin{array}[]{c}-m_{z}^{-}x_{1}+x_{2}+1\\\
m_{z}^{+}x_{1}-x_{2}+1\end{array}\right).$
To check that the bifunction $h$ is $B$-differentiable at $\mathbf{0}$
uniformly on $\mathbb{R}^{2}_{+}$, with
$\left\\{{\rm D}_{B}h(\mathbf{0},z)=\nabla
h(\mathbf{0},z)=\left(\begin{array}[]{rr}-m_{z}^{-}&1\\\
m_{z}^{+}&-1\end{array}\right),\ z\in\mathbb{R}^{2}_{+}\right\\}$
it suffices to observe that
$\displaystyle\|h(x,z)-h(\mathbf{0},z)-{\rm D}_{B}h(\mathbf{0},z)(x)\|$
$\displaystyle=$
$\displaystyle\left\|\left(\begin{array}[]{c}-m_{z}^{-}x_{1}+x_{2}+1\\\
m_{z}^{+}x_{1}-x_{2}+1\end{array}\right)-\left(\begin{array}[]{c}1\\\
1\end{array}\right)-\left(\begin{array}[]{rr}-m_{z}^{-}&1\\\
m_{z}^{+}&-1\end{array}\right)\left(\begin{array}[]{c}x_{1}\\\
x_{2}\end{array}\right)\right\|$ $\displaystyle=$ $\displaystyle
0,\quad\forall z\in\mathbb{R}^{2}_{+}.$
Thus, since $g$ is Fréchet differentiable at each point of $\mathbb{R}^{2}$
and
$\nabla g(y)=\left(\begin{array}[]{rr}y_{1}&0\\\ 0&y_{2}\end{array}\right),$
according to what was remarked in Example 2.2(iii), the mapping $f=g\circ h$ turns
out to be $B$-differentiable at $\mathbf{0}$ uniformly on
$\mathbb{R}^{2}_{+}$, with
${\rm D}_{B}f(\mathbf{0},z)=\nabla g(h(\mathbf{0},z))\circ{\rm
D}_{B}h(\mathbf{0},z)=\left(\begin{array}[]{rr}1&0\\\
0&1\end{array}\right)\left(\begin{array}[]{rr}-m_{z}^{-}&1\\\
m_{z}^{+}&-1\end{array}\right)=\left(\begin{array}[]{rr}-m_{z}^{-}&1\\\
m_{z}^{+}&-1\end{array}\right),\ z\in\mathbb{R}^{2}_{+}.$
Notice that a local error bound as in $(\ref{in:erboSE})$ is evidently valid
near $\mathbf{0}$ because it is ${\mathcal{S}}{\mathcal{E}}=K$. Thus, all the
hypotheses of Theorem 3.1 are satisfied.
Now, one readily sees that
${\rm
D}_{B}f(\mathbf{0},z)(v)=\left(\begin{array}[]{c}-m_{z}^{-}v_{1}+v_{2}\\\
m_{z}^{+}v_{1}-v_{2}\end{array}\right)\in\mathbb{R}^{2}_{+}\qquad\hbox{ iff
}\qquad\left\\{\begin{array}[]{c}-m_{z}^{-}v_{1}+v_{2}\geq 0\\\ \\\
m_{z}^{+}v_{1}-v_{2}\geq 0.\end{array}\right.$
One thereby finds
${\rm D}_{B}f(\mathbf{0},z)^{-1}(\mathbb{R}^{2}_{+})=\\{v\in\mathbb{R}^{2}:\
m_{z}^{-}v_{1}\leq v_{2}\leq m_{z}^{+}v_{1}\\},\quad\forall
z\in\mathbb{R}^{2}_{+}.$
Since one has
$\lim_{\|z\|\to\infty}m_{z}^{-}=1^{-}=1=1^{+}=\lim_{\|z\|\to\infty}m_{z}^{+},$
it results in
$\bigcap_{z\in\mathbb{R}^{2}_{+}}{\rm
D}_{B}f(\mathbf{0},z)^{-1}(\mathbb{R}^{2}_{+})\cap{\rm T}_{\rm
r}(\mathbf{0};\mathbb{R}^{2}_{+})=\\{v\in\mathbb{R}^{2}_{+}:\
v_{2}=v_{1}\\}\subsetneqq\mathbb{R}^{2}_{+}={\rm
T}(\mathbf{0};{\mathcal{S}}{\mathcal{E}}).$
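The gap computed above can also be observed by brute force. The Python sketch below (with an ad hoc sampling of $z$ along the diagonal of $\mathbb{R}^{2}_{+}$) confirms that the diagonal direction solves the approximated problem $(\ref{in:HomVEP})$, whereas a direction such as $(1,0.9)$, although tangent to ${\mathcal{S}}{\mathcal{E}}$ at $\mathbf{0}$, does not:

```python
import numpy as np

# For the data of Example 3.2, v solves the approximated problem (3.7) iff
# m_z^- v1 <= v2 <= m_z^+ v1 for all z in R^2_+, and both slopes tend to 1
# as ||z|| grows, pinching the solution cone onto the diagonal.
def m_minus(z):
    return 1.0 - 1.0 / (np.dot(z, z) + 1.0)

def m_plus(z):
    return 1.0 + 1.0 / (np.dot(z, z) + 1.0)

def solves_linearized(v, zs):
    return all(m_minus(z) * v[0] <= v[1] <= m_plus(z) * v[0] for z in zs)

zs = [np.array([t, t]) for t in np.linspace(0.0, 50.0, 501)]
print(solves_linearized(np.array([1.0, 1.0]), zs))   # diagonal direction: True
print(solves_linearized(np.array([1.0, 0.9]), zs))   # tangent to SE, but False
```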
The above example motivates the interest in outer approximations of
${\mathcal{S}}{\mathcal{E}}$. Below, a result in this direction is presented.
###### Theorem 3.3 (Outer approximation).
With reference to a problem $({\rm VEP})$, let
$\bar{x}\in{\mathcal{S}}{\mathcal{E}}$. Suppose that:
* (i)
$f$ is strictly $B$-differentiable at $\bar{x}$, uniformly on $K$, with
$\\{{\rm D}_{B}f(\bar{x},z):\ z\in K\\}$;
* (ii)
the family of mappings $\\{{\rm D}_{B}f(\bar{x},z):\ z\in K\\}$ is
equicontinuous at each point of $\mathbb{R}^{n}$.
Then, it holds
(3.9) ${\rm T}(\bar{x};{\mathcal{S}}{\mathcal{E}})\subseteq\bigcap_{z\in
K}{\rm D}_{B}f(\bar{x},z)^{-1}({\rm T}(f(\bar{x},z);C))\cap{\rm
T}(\bar{x};K).$
###### Proof.
Since it is ${\rm
D}_{B}f(\bar{x},z)\in\mathscr{P}\hskip-2.84544pt\mathscr{H}(\mathbb{R}^{n},\mathbb{R}^{m})$
for every $z\in K$, one has
${\rm D}_{B}f(\bar{x},z)(\mathbf{0})=\mathbf{0}\in{\rm
T}(f(\bar{x},z);C),\quad\forall z\in K.$
Therefore, it clearly holds
$\mathbf{0}\in\bigcap_{z\in K}{\rm D}_{B}f(\bar{x},z)^{-1}({\rm
T}(f(\bar{x},z);C))\cap{\rm T}(\bar{x};K).$
So take an arbitrary $v\in{\rm
T}(\bar{x};{\mathcal{S}}{\mathcal{E}})\backslash\\{\mathbf{0}\\}$. As all the
sets involved in inclusion $(\ref{in:outapproxTangEqui})$ are cones, without
loss of generality it is possible to assume that $\|v\|=1$. According to the
definition of contingent cone, there exist $(v_{n})_{n}$, with
$v_{n}\longrightarrow v$ and $(t_{n})_{n}$, with $t_{n}\downarrow 0$, such
that $\bar{x}+t_{n}v_{n}\in{\mathcal{S}}{\mathcal{E}}\subseteq K$. Notice that
this inclusion in particular implies that $v\in{\rm T}(\bar{x};K)$. What
remains to be shown is that
(3.10) $v\in\bigcap_{z\in K}{\rm D}_{B}f(\bar{x},z)^{-1}({\rm
T}(f(\bar{x},z);C)).$
Fix an arbitrary $\epsilon>0$. By virtue of hypothesis (i), there exists
$\delta_{\epsilon}>0$ such that
$f(x_{1},z)-f(x_{2},z)-{\rm
D}_{B}f(\bar{x},z)(x_{1}-x_{2})\in\epsilon\|x_{1}-x_{2}\|{\mathbb{B}},\quad\forall
z\in K,\ \forall x_{1},\,x_{2}\in{\rm
B}\left(\bar{x};\delta_{\epsilon}\right)$
and hence
(3.11) ${\rm D}_{B}f(\bar{x},z)(x_{1}-x_{2})\in
f(x_{1},z)-f(x_{2},z)+\epsilon\|x_{1}-x_{2}\|{\mathbb{B}},\quad\forall z\in
K,\ \forall x_{1},\,x_{2}\in{\rm B}\left(\bar{x};\delta_{\epsilon}\right).$
Since $\bar{x}+t_{n}v_{n}\longrightarrow\bar{x}$ as $n\to\infty$ (being
convergent, the sequence $(v_{n})_{n}$ must be bounded), for some
$n_{\epsilon}\in\mathbb{N}$ it is true that $\bar{x}+t_{n}v_{n}\in{\rm
B}\left(\bar{x};\delta_{\epsilon}\right)$ for every $n\geq n_{\epsilon}$.
Thus, by taking $x_{1}=\bar{x}+t_{n}v_{n}$ and $x_{2}=\bar{x}$ in
$(\ref{in:fstrBdifx1x2})$, one finds
$t_{n}{\rm D}_{B}f(\bar{x},z)(v_{n})\in
f(\bar{x}+t_{n}v_{n},z)-f(\bar{x},z)+\epsilon
t_{n}\|v_{n}\|{\mathbb{B}},\quad\forall z\in K,\ \forall n\geq n_{\epsilon},$
whence it follows
${\rm D}_{B}f(\bar{x},z)(v_{n})\in{f(\bar{x}+t_{n}v_{n},z)-f(\bar{x},z)\over
t_{n}}+\epsilon\|v_{n}\|{\mathbb{B}},\quad\forall z\in K,\ \forall n\geq
n_{\epsilon}.$
By taking into account that $v_{n}\longrightarrow v$ as $n\to\infty$ and
$\|v\|=1$, one has that $\|v_{n}\|\leq 2$ for all $n\geq n_{\epsilon}$, up to
a proper increase in the value of $n_{\epsilon}$, if needed. Thus, from the
last inclusion one obtains
(3.12) ${\rm
D}_{B}f(\bar{x},z)(v_{n})\in{f(\bar{x}+t_{n}v_{n},z)-f(\bar{x},z)\over
t_{n}}+2\epsilon{\mathbb{B}},\quad\forall z\in K,\ \forall n\geq
n_{\epsilon}.$
By hypothesis (ii) the family $\\{{\rm D}_{B}f(\bar{x},z):\ z\in K\\}$ is
equicontinuous at $v$. This means that there exists $n_{*}\in\mathbb{N}$
(independent of $z$), with $n_{*}\geq n_{\epsilon}$, such that
$\|{\rm D}_{B}f(\bar{x},z)(v_{n})-{\rm
D}_{B}f(\bar{x},z)(v)\|\leq\epsilon,\quad\forall z\in K,\ \forall n\geq
n_{*},$
or, equivalently,
${\rm D}_{B}f(\bar{x},z)(v)\in{\rm
D}_{B}f(\bar{x},z)(v_{n})+\epsilon{\mathbb{B}},\quad\forall z\in K,\ \forall
n\geq n_{*}.$
By recalling $(\ref{in:Bdervnincrrepeps})$, from the last inclusion one gets
${\rm D}_{B}f(\bar{x},z)(v)\in{f(\bar{x}+t_{n}v_{n},z)-f(\bar{x},z)\over
t_{n}}+3\epsilon{\mathbb{B}},\quad\forall z\in K,\ \forall n\geq n_{*}.$
Since it is $\bar{x}+t_{n}v_{n}\in{\mathcal{S}}{\mathcal{E}}$ for every
$n\in\mathbb{N}$, this implies
${\rm D}_{B}f(\bar{x},z)(v)\in{C-f(\bar{x},z)\over
t_{n}}+3\epsilon{\mathbb{B}}\subseteq{\rm
cone}\,(C-f(\bar{x},z))+3\epsilon{\mathbb{B}},\quad\forall z\in K,\ \forall
n\geq n_{*}.$
Since $C$ is convex, it is ${\rm T}(f(\bar{x},z);C)={\rm cl}\,{\rm
cone}\,(C-f(\bar{x},z))$, so it results in
${\rm D}_{B}f(\bar{x},z)(v)\in{\rm
T}(f(\bar{x},z);C)+3\epsilon{\mathbb{B}},\quad\forall z\in K.$
The arbitrariness of $\epsilon$ and the fact that ${\rm T}(f(\bar{x},z);C)$ is
closed allow one to assert that
${\rm D}_{B}f(\bar{x},z)(v)\in{\rm T}(f(\bar{x},z);C),\quad\forall z\in K,$
which proves the validity of $(\ref{in:thesreformoutapprox})$. Thus the proof
is complete. ∎
###### Remark 3.4.
(i) In the case in which ${\rm int}\,C\neq\varnothing$, it is useful to remark
that the formula in $(\ref{in:outapproxTangEqui})$ can be equivalently
rewritten as
${\rm
T}(\bar{x};{\mathcal{S}}{\mathcal{E}})\subseteq\\{\mathbf{0}\\}\cup\left(\bigcap_{z\in
K\cap f^{-1}(\bar{x},\cdot)({\rm bd}\,C)}{\rm D}_{B}f(\bar{x},z)^{-1}({\rm
T}(f(\bar{x},z);C))\cap{\rm T}(\bar{x};K)\right),$
with the convention that an intersection over an empty index set is the empty
set. Indeed, whenever it happens $f(\bar{x},z)\in{\rm int}\,C$, one has ${\rm
T}(f(\bar{x},z);C)=\mathbb{R}^{m}$, with the consequence that ${\rm
D}_{B}f(\bar{x},z)^{-1}({\rm T}(f(\bar{x},z);C))=\mathbb{R}^{n}$.
(ii) It is worth noticing that for all those $z_{0}\in K$ such that
$f(\bar{x},z_{0})=\mathbf{0}$ (if any), the formula in
$(\ref{in:outapproxTangEqui})$ entails
${\rm T}(\bar{x};{\mathcal{S}}{\mathcal{E}})\subseteq{\rm
D}_{B}f(\bar{x},z_{0})^{-1}(C)\cap{\rm T}(\bar{x};K),$
as it is ${\rm T}(f(\bar{x},z_{0});C)={\rm T}(\mathbf{0};C)=C$.
The next example shows that the outer approximation of ${\rm
T}(\bar{x};{\mathcal{S}}{\mathcal{E}})$ provided by Theorem 3.3 may also
happen to be rather rough.
###### Example 3.5 (Inclusion $(\ref{in:outapproxTangEqui})$ may be strict).
Consider the (actually scalar) problem $({\rm VEP})$ defined by the following
data: $K=\mathbb{R}$, $C=[0,+\infty)$,
$f:\mathbb{R}\times\mathbb{R}\longrightarrow\mathbb{R}$ given by
$f(x,z)={x^{2}z\over z^{2}+1}.$
It is clear that ${\mathcal{S}}{\mathcal{E}}=\\{0\\}$. So, fix $\bar{x}=0$. In
order to check that $f$ is strictly $B$-differentiable at $0$ uniformly on
$\mathbb{R}$, with $\\{{\rm D}_{B}f(0,z)\equiv 0,\ z\in\mathbb{R}\\}$,
according to the definition it suffices to observe that, for an arbitrarily
fixed $\epsilon>0$, one has
$\displaystyle\sup_{z\in\mathbb{R}}{|f(x_{1},z)-f(x_{2},z)|\over|x_{1}-x_{2}|}$
$\displaystyle=$
$\displaystyle\sup_{z\in\mathbb{R}}{\displaystyle{\left|{x_{1}^{2}z\over
z^{2}+1}-{x_{2}^{2}z\over
z^{2}+1}\right|}\over|x_{1}-x_{2}|}=\sup_{z\in\mathbb{R}}{|z|\over
z^{2}+1}\cdot|x_{1}+x_{2}|\leq|x_{1}|+|x_{2}|$ $\displaystyle\leq$
$\displaystyle\epsilon,\quad\forall x_{1},\,x_{2}\in{\rm
B}\left(0;\epsilon/2\right),\ x_{1}\neq x_{2}.$
As the family $\\{{\rm D}_{B}f(0,z)\equiv 0,\ z\in\mathbb{R}\\}$ is actually
independent of $z\in\mathbb{R}$, also hypothesis (ii) of Theorem 3.3 is
satisfied.
Since $f(0,z)=0$ for every $z\in\mathbb{R}$, it is ${\rm
T}(f(0,z);[0,+\infty))=[0,+\infty)$, and one finds
${\rm D}_{B}f(0,z)^{-1}\left({\rm
T}(f(0,z);[0,+\infty))\right)=\mathbb{R},\quad\forall z\in\mathbb{R}.$
Consequently, in the current case, one obtains
${\rm
T}(0;{\mathcal{S}}{\mathcal{E}})=\\{0\\}\subsetneqq\mathbb{R}\cap\mathbb{R}=\bigcap_{z\in\mathbb{R}}{\rm
D}_{B}f(0,z)^{-1}({\rm T}(f(0,z);[0,+\infty)))\cap{\rm T}(0;\mathbb{R}).$
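The uniform estimate used in this example can also be checked numerically. The following sketch (an illustration, not part of the argument) samples difference quotients of $f(x,z)=x^{2}z/(z^{2}+1)$ and verifies the bound $|f(x_{1},z)-f(x_{2},z)|\leq\frac{1}{2}|x_{1}+x_{2}|\,|x_{1}-x_{2}|$, which follows from $\sup_{z}|z|/(z^{2}+1)=1/2$ (a slightly sharper constant than the bound $|x_{1}|+|x_{2}|$ displayed above):

```python
import numpy as np

# Numerical sanity check for this example (illustration only, not part of
# the proof): f(x, z) = x^2 z / (z^2 + 1) satisfies the uniform estimate
# |f(x1, z) - f(x2, z)| <= (1/2) |x1 + x2| |x1 - x2|,
# because sup_z |z| / (z^2 + 1) = 1/2, attained at z = +/-1.

def f(x, z):
    return x**2 * z / (z**2 + 1)

zs = np.linspace(-10.0, 10.0, 20001)  # grid containing z = +/-1

# The supremum of |z|/(z^2+1) over the grid is (numerically) 1/2.
assert abs(np.max(np.abs(zs) / (zs**2 + 1)) - 0.5) < 1e-6

rng = np.random.default_rng(1)
for _ in range(500):
    x1, x2 = rng.uniform(-1.0, 1.0, size=2)
    lhs = np.max(np.abs(f(x1, zs) - f(x2, zs)))
    assert lhs <= 0.5 * abs(x1 + x2) * abs(x1 - x2) + 1e-12
```

Since the bound vanishes as $x_{1},x_{2}\to 0$, this is exactly the strict $B$-differentiability at $0$, uniform in $z$, with zero derivative.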
Relying on both the preceding approximations, the next result singles out a
sufficient condition, upon which one can establish an exact representation of
${\rm T}(\bar{x};{\mathcal{S}}{\mathcal{E}})$.
###### Corollary 3.6.
With reference to a problem $({\rm VEP})$, let
$\bar{x}\in{\mathcal{S}}{\mathcal{E}}$. Suppose that:
* (i)
$K$ is polyhedral;
* (ii)
$f(\bar{x},z)=\mathbf{0},\quad\forall z\in K$;
* (iii)
$f$ is strictly $B$-differentiable at $\bar{x}$, uniformly on $K$, with
$\\{{\rm D}_{B}f(\bar{x},z):\ z\in K\\}$;
* (iv)
the family of mappings $\\{{\rm D}_{B}f(\bar{x},z):\ z\in K\\}$ is
equicontinuous at each point of $\mathbb{R}^{n}$;
* (v)
a local error bound such as in $(\ref{in:erboSE})$ is valid near $\bar{x}$.
Then, it holds
${\rm T}(\bar{x};{\mathcal{S}}{\mathcal{E}})=\bigcap_{z\in K}{\rm
D}_{B}f(\bar{x},z)^{-1}(C)\cap{\rm T}(\bar{x};K).$
###### Proof.
The above assumptions enable one to apply both Theorem 3.1 and Theorem 3.3.
From the former, in the light of Remark 2.1 and hypothesis (i), one
obtains
(3.13) $\bigcap_{z\in K}{\rm D}_{B}f(\bar{x},z)^{-1}(C)\cap{\rm
T}(\bar{x};K)\subseteq{\rm T}(\bar{x};{\mathcal{S}}{\mathcal{E}}).$
From the latter, in the light of hypothesis (ii) and Remark 3.4(ii), one
obtains
(3.14) ${\rm T}(\bar{x};{\mathcal{S}}{\mathcal{E}})\subseteq\bigcap_{z\in
K}{\rm D}_{B}f(\bar{x},z)^{-1}(C)\cap{\rm T}(\bar{x};K).$
By combining inclusions $(\ref{in:innerTang})$ and $(\ref{in:outTang})$ one
gets the equality in the thesis. ∎
## 4\. Applications to constrained optimization
This section deals with first-order optimality conditions for optimization
problems, whose feasible region is formalized as a set of strong vector
equilibria. As such, these problems can be cast in mathematical programming
with equilibrium constraints, a well-recognized topic and active area of
research (see, among others, [11, 12, 14, 17, 22]). Thus, the optimization
problems here considered take the following form
$({\rm MPVEC})\qquad\min\vartheta(x)\quad\hbox{ subject to }\quad
x\in{\mathcal{S}}{\mathcal{E}},$
where $\vartheta:\mathbb{R}^{n}\longrightarrow\mathbb{R}$ is the objective
function formalizing the criterion used for comparing variables, while
${\mathcal{S}}{\mathcal{E}}$ is the feasible region of the problem, denoting
as in the previous sections the solution set of an inner problem $({\rm
VEP})$. Throughout this section, $\vartheta$ will be assumed to be continuous
around $\bar{x}$, but possibly nondifferentiable, as is the bifunction
$f$ defining $({\rm VEP})$.
In constrained nondifferentiable optimization, first-order optimality
conditions are typically obtained by locally approximating the objective
function and the feasible region of a given problem. In this vein, the fact
stated in the next lemma is widely known to hold and has been used as a
starting point for various, more elaborate, optimality conditions. For a
direct proof see, for instance, [20, Chapter 7.1]. From a deeper viewpoint, it
can be recovered as a special case of an axiomatic scheme of analysis
developed in [6, 8] (see [6, Theorem 2.1]).
###### Lemma 4.1.
Let $\bar{x}\in{\mathcal{S}}{\mathcal{E}}$ be a local optimal solution to
problem $({\rm MPVEC})$. Then, it holds
(4.1) ${\rm D}^{+}_{D}\vartheta(\bar{x};w)\geq 0,\quad\forall w\in{\rm T}_{\rm
r}(\bar{x};{\mathcal{S}}{\mathcal{E}})$
and
(4.2) ${\rm D}^{+}_{H}\vartheta(\bar{x};w)\geq 0,\quad\forall w\in{\rm
T}(\bar{x};{\mathcal{S}}{\mathcal{E}}).$
###### Remark 4.2.
Since from their very definition one sees that
${\rm D}^{+}_{D}\vartheta(\bar{x};w)\leq{\rm
D}^{+}_{H}\vartheta(\bar{x};w),\quad\forall w\in\mathbb{R}^{n},$
whereas it is ${\rm T}_{\rm
r}(\bar{x};{\mathcal{S}}{\mathcal{E}})\subseteq{\rm
T}(\bar{x};{\mathcal{S}}{\mathcal{E}})$, none of the conditions
$(\ref{in:nocDTr})$ and $(\ref{in:nocDHT})$ can imply in general the other,
unless $\vartheta$ is locally Lipschitz near $\bar{x}$ or it is ${\rm T}_{\rm
r}(\bar{x};{\mathcal{S}}{\mathcal{E}})={\rm
T}(\bar{x};{\mathcal{S}}{\mathcal{E}})$. Thus, the author does not agree with
what is asserted in [20, p. 132]. For the purposes of the present analysis,
only the condition in $(\ref{in:nocDHT})$ will actually be exploited.
###### Theorem 4.3 (Necessary optimality condition).
Let $\bar{x}\in{\mathcal{S}}{\mathcal{E}}$ be a local optimal solution to
problem $({\rm MPVEC})$. Suppose that:
* (i)
$f$ is $B$-differentiable at $\bar{x}$, uniformly on $K$, with $\\{{\rm
D}_{B}f(\bar{x},z):\ z\in K\\}$;
* (ii)
a local error bound such as in $(\ref{in:erboSE})$ is valid near $\bar{x}$.
Then, it holds
(4.3) $-\widehat{\partial}^{+}\vartheta(\bar{x})\subseteq{\left(\bigcap_{z\in
K}{\rm D}_{B}f(\bar{x},z)^{-1}(C)\cap{\rm T}_{\rm
r}(\bar{x};K)\right)}^{{}^{\ominus}}.$
###### Proof.
Under the above assumptions, by Theorem 3.1 the inclusion in
$(\ref{in:inapproxTangEqui})$ holds true. Consequently, since
$\bar{x}\in{\mathcal{S}}{\mathcal{E}}$ is a local optimal solution to $({\rm
MPVEC})$, according to condition $(\ref{in:nocDHT})$ it must be
${\rm D}^{+}_{H}\vartheta(\bar{x};w)\geq 0,\quad\forall w\in\bigcap_{z\in
K}{\rm D}_{B}f(\bar{x},z)^{-1}(C)\cap{\rm T}_{\rm r}(\bar{x};K).$
If $\widehat{\partial}^{+}\vartheta(\bar{x})=\varnothing$ the thesis becomes
trivial. Otherwise, by taking into account the representation in
$(\ref{eq:UpsubdDHder})$, which is valid because the function $\vartheta$ is
in particular u.s.c. around $\bar{x}$, for an arbitrary
$v\in\widehat{\partial}^{+}\vartheta(\bar{x})$ one finds
$\langle v,w\rangle\geq 0,\quad\forall w\in\bigcap_{z\in K}{\rm
D}_{B}f(\bar{x},z)^{-1}(C)\cap{\rm T}_{\rm r}(\bar{x};K),$
which amounts to saying that
$-v\in{\left(\bigcap_{z\in K}{\rm D}_{B}f(\bar{x},z)^{-1}(C)\cap{\rm T}_{\rm
r}(\bar{x};K)\right)}^{{}^{\ominus}}.$
The arbitrariness of $v\in\widehat{\partial}^{+}\vartheta(\bar{x})$ completes
the proof. ∎
###### Remark 4.4.
To assess the role of the optimality condition formulated in Theorem 4.3,
notice that it does not carry useful information whenever
$\widehat{\partial}^{+}\vartheta(\bar{x})=\varnothing$. This happens, for example,
if $\vartheta$ is a convex continuous function, which is nondifferentiable at
$\bar{x}$. Nevertheless, the upper subdifferential is nonempty for large
classes of functions, including the class of semiconcave ones (see [14]). In
all such cases, condition $(\ref{in:NOCMPVEC})$ provides a necessary
optimality condition, which may be more efficient than those expressed in
terms of more traditional lower subdifferentials. This is because it requires
that all elements of $-\widehat{\partial}^{+}\vartheta(\bar{x})$ belong to the
set on the right-hand side of $(\ref{in:NOCMPVEC})$, in contrast to a mere
nonempty-intersection requirement, which is typical of the lower
subdifferential case.
###### Corollary 4.5.
Under the same assumptions of Theorem 4.3, if the following additional
hypotheses are satisfied:
* (i)
$K$ is polyhedral;
* (ii)
${\rm
D}_{B}f(\bar{x},z)\in\mathscr{P}\hskip-2.84544pt\mathscr{H}(\mathbb{R}^{n},\mathbb{R}^{m})$
is $C$-concave for every $z\in K$;
* (iii)
the qualification condition holds
(4.4) $\bigcap_{z\in K}{\rm D}_{B}f(\bar{x},z)^{-1}(C)\cap{\rm int}\,{\rm
T}(\bar{x};K)\neq\varnothing,$
then the inclusion in $(\ref{in:NOCMPVEC})$ takes the simpler form
$-\widehat{\partial}^{+}\vartheta(\bar{x})\subseteq{\left(\bigcap_{z\in K}{\rm
D}_{B}f(\bar{x},z)^{-1}(C)\right)}^{{}^{\ominus}}+{\rm N}(\bar{x};K).$
###### Proof.
It is well known that if $S_{1}$ and $S_{2}$ are closed convex cones, then
${(S_{1}\cap S_{2})}^{{}^{\ominus}}={\rm
cl}\,({S_{1}}^{{}^{\ominus}}+{S_{2}}^{{}^{\ominus}})$ (see [20, Lemma 2.4.1]).
On the other hand, if $S_{1}-S_{2}=\mathbb{R}^{n}$, then
${S_{1}}^{{}^{\ominus}}+{S_{2}}^{{}^{\ominus}}$ is closed (see [20,
Proposition 2.4.3]). If the qualification condition $S_{1}\cap{\rm
int}\,S_{2}\neq\varnothing$ happens to be satisfied, then
$S_{1}-S_{2}=\mathbb{R}^{n}$ (see [20, Lemma 2.4.4]). Thus, since
$\bigcap_{z\in K}{\rm D}_{B}f(\bar{x},z)^{-1}(C)$ and ${\rm T}(\bar{x};K)$ are
closed convex cones, by virtue of $(\ref{in:qcnocMPVEC})$ and the assumption
(i), one obtains
${\left(\bigcap_{z\in K}{\rm D}_{B}f(\bar{x},z)^{-1}(C)\cap{\rm T}_{\rm
r}(\bar{x};K)\right)}^{{}^{\ominus}}={\left(\bigcap_{z\in K}{\rm
D}_{B}f(\bar{x},z)^{-1}(C)\right)}^{{}^{\ominus}}+{{\rm
T}(\bar{x};K)}^{{}^{\ominus}}.$
Then, in order to achieve the inclusion in the thesis it suffices to recall
that ${{\rm T}(\bar{x};K)}^{{}^{\ominus}}={\rm N}(\bar{x};K)$ (see [20, Lemma
11.2.2]). ∎
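The polar-cone calculus invoked in this proof can be illustrated on a small concrete instance (chosen here purely for illustration; these are not the cones of the corollary): with $S_{1}=\{v:\ v_{2}\geq v_{1}\}$ and $S_{2}=\mathbb{R}^{2}_{+}$ one has $S_{1}^{\ominus}=\{\lambda(1,-1):\ \lambda\geq 0\}$ and $S_{2}^{\ominus}=\mathbb{R}^{2}_{-}$, and the identity $(S_{1}\cap S_{2})^{\ominus}={\rm cl}\,(S_{1}^{\ominus}+S_{2}^{\ominus})$ can be verified pointwise:

```python
import numpy as np

# Numerical illustration (toy instance, not the cones of the corollary) of
# the polar-cone identity (S1 ∩ S2)^⊖ = cl(S1^⊖ + S2^⊖) for
# S1 = {v : v2 >= v1} and S2 = R^2_+. The cone S1 ∩ S2 is generated by
# the vectors (0,1) and (1,1).

gen = np.array([[0.0, 1.0],
                [1.0, 1.0]])  # generators of S1 ∩ S2

def in_polar_direct(w):
    """w in (S1∩S2)^⊖ iff <w, v> <= 0 for each generator v."""
    return bool(np.all(gen @ w <= 0))

def in_sum_of_polars(w):
    """S1^⊖ = {λ(1,-1): λ>=0}, S2^⊖ = R^2_-; their sum works out to
    {(x, y): y <= 0, x + y <= 0}, which is already closed."""
    return bool(w[1] <= 0 and w[0] + w[1] <= 0)

rng = np.random.default_rng(3)
for _ in range(1000):
    w = rng.uniform(-2.0, 2.0, size=2)
    assert in_polar_direct(w) == in_sum_of_polars(w)
```

Here the sum of polars is closed, so no closure operation is needed; this is precisely the situation the qualification condition guarantees in the proof above.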
Now, let us consider sufficient optimality conditions, a topic usually
investigated in a subsequent step of analysis.
The next lemma provides a sufficient optimality condition for $({\rm MPVEC})$
in the case in which the objective function is locally Lipschitz. For its proof see [7,
Lemma 1.3, Chapter V]. Notice that, for the statement of Lemma 4.6, the
hypothesis that the feasible region of the problem admits a first-order
uniform conical approximation in the sense of Demyanov-Rubinov is not needed
(see [7, Remark 1.6, Chapter V]).
###### Lemma 4.6.
With reference to $({\rm MPVEC})$, suppose that $\vartheta$ is locally
Lipschitz around $\bar{x}\in{\mathcal{S}}{\mathcal{E}}$. If it holds
(4.5) ${\rm D}^{-}_{D}\vartheta(\bar{x};w)>0,\quad\forall w\in{\rm
T}(\bar{x};{\mathcal{S}}{\mathcal{E}})\backslash\\{\mathbf{0}\\},$
then $\bar{x}$ is a strict local solution to $({\rm MPVEC})$.
On the basis of the above lemma, one is in a position to establish the next
result.
###### Theorem 4.7 (Sufficient optimality condition).
With reference to $({\rm MPVEC})$, assume that $\vartheta$ is locally
Lipschitz around $\bar{x}\in{\mathcal{S}}{\mathcal{E}}$. Suppose that:
* (i)
$f$ is strictly $B$-differentiable at $\bar{x}$, uniformly on $K$, with
$\\{{\rm D}_{B}f(\bar{x},z):\ z\in K\\}$;
* (ii)
the family of mappings $\\{{\rm D}_{B}f(\bar{x},z):\ z\in K\\}$ is
equicontinuous at each point of $\mathbb{R}^{n}$.
If the condition
(4.6) $\mathbf{0}\in\widehat{\partial}\vartheta(\bar{x})+{\rm
int}\,\left[{\left(\bigcap_{z\in K}{\rm D}_{B}f(\bar{x},z)^{-1}({\rm
T}(f(\bar{x},z);C))\cap{\rm T}(\bar{x};K)\right)}^{{}^{\ominus}}\right],$
is satisfied, then $\bar{x}$ is a strict local solution to $({\rm MPVEC})$.
###### Proof.
Observe first that if for a given cone $S\subseteq\mathbb{R}^{n}$ it is
$v\in{\rm int}\,({S}^{{}^{\ominus}})$, then it must be
$\langle v,s\rangle<0,\quad\forall s\in S\backslash\\{\mathbf{0}\\}.$
Indeed, there exists $\delta>0$ such that
$v+\delta{\mathbb{B}}\subseteq{S}^{{}^{\ominus}}$, and therefore it holds
$\langle v+\delta u,s\rangle\leq 0,\quad\forall u\in{\mathbb{B}},\ \forall
s\in S.$
Thus, for any $s\in S\backslash\\{\mathbf{0}\\}$, the last inequality implies
$\sup_{u\in{\mathbb{B}}}\langle v+\delta u,s\rangle=\langle
v,s\rangle+\delta\sup_{u\in{\mathbb{B}}}\langle u,s\rangle=\langle
v,s\rangle+\delta\|s\|\leq 0,$
whence one gets
$\langle v,s\rangle\leq-\delta\|s\|<0.$
Consequently, the condition $(\ref{in:socDlo})$ implies that there exists
$v\in\widehat{\partial}\vartheta(\bar{x})$ such that
$\langle v,w\rangle>0,\quad\forall w\in\left[\bigcap_{z\in K}{\rm
D}_{B}f(\bar{x},z)^{-1}({\rm T}(f(\bar{x},z);C))\cap{\rm
T}(\bar{x};K)\right]\backslash\\{\mathbf{0}\\}.$
By recalling the representation of $\widehat{\partial}\vartheta(\bar{x})$ in
$(\ref{eq:FsubdDHder})$, from the last inequality one obtains
${\rm D}^{-}_{D}\vartheta(\bar{x};w)={\rm
D}^{-}_{H}\vartheta(\bar{x};w)>0,\quad\forall w\in\left[\bigcap_{z\in K}{\rm
D}_{B}f(\bar{x},z)^{-1}({\rm T}(f(\bar{x},z);C))\cap{\rm
T}(\bar{x};K)\right]\backslash\\{\mathbf{0}\\}.$
Since under the above assumptions Theorem 3.3 can be applied, by virtue
of the inclusion in $(\ref{in:outapproxTangEqui})$ one can state that
condition $(\ref{in:socDlolem})$ turns out to be satisfied. Thus, the thesis
of the theorem follows from Lemma 4.6. ∎
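The observation opening this proof, namely that $v\in{\rm int}\,(S^{\ominus})$ with $v+\delta{\mathbb{B}}\subseteq S^{\ominus}$ forces $\langle v,s\rangle\leq-\delta\|s\|<0$ on $S\backslash\{\mathbf{0}\}$, can be illustrated numerically on a toy instance (here $S=\mathbb{R}^{2}_{+}$, not the cone appearing in the theorem):

```python
import numpy as np

# Toy illustration of the opening observation of the proof: if v lies in
# the interior of S^⊖ with v + delta*B contained in S^⊖, then
# <v, s> <= -delta * ||s|| < 0 for every nonzero s in S.
# Here S = R^2_+, so S^⊖ = R^2_- (componentwise nonpositive vectors).

v = np.array([-1.0, -1.0])  # interior point of R^2_-
delta = 0.5                 # v + delta*B is still contained in R^2_-

rng = np.random.default_rng(2)
for _ in range(1000):
    s = rng.uniform(0.0, 1.0, size=2)  # sample s in S = R^2_+
    if np.linalg.norm(s) == 0.0:
        continue  # the estimate concerns nonzero s only
    assert v @ s <= -delta * np.linalg.norm(s)
```

The margin $-\delta\|s\|$ is exactly the quantity produced by taking the supremum over $u\in{\mathbb{B}}$ in the proof.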
###### Remark 4.8.
(i) As it is possible to see by elementary examples (see [15, Chapter 1]),
$\widehat{\partial}\vartheta(\bar{x})$ may happen to be empty even though
$\vartheta$ is locally Lipschitz around $\bar{x}$. In these circumstances, the
condition in $(\ref{in:socDlo})$ can never be satisfied. On the other hand,
whenever the p.h. function ${\rm
D}^{-}_{H}\vartheta(\bar{x};\cdot):\mathbb{R}^{n}\longrightarrow\mathbb{R}$ is
sublinear (and hence continuous), then
$\widehat{\partial}\vartheta(\bar{x})=\partial{\rm
D}^{-}_{H}\vartheta(\bar{x};\cdot)(\mathbf{0})\neq\varnothing$. This happens
e.g. (but not only) when $\vartheta:\mathbb{R}^{n}\longrightarrow\mathbb{R}$
is convex, in which case one has
$\widehat{\partial}\vartheta(\bar{x})=\partial\vartheta(\bar{x})$.
(ii) The local Lipschitz continuity of $\vartheta$ near $\bar{x}$ might lead
one to believe that the Clarke subdifferential may come into play in the current
context. Recall that the latter is defined by
${\partial}_{C}\vartheta(\bar{x})=\left\\{v\in\mathbb{R}^{n}:\ \langle
v,w\rangle\leq\limsup_{x\to\bar{x}\atop t\downarrow
0}{\vartheta(x+tw)-\vartheta(x)\over t},\quad\forall
w\in\mathbb{R}^{n}\right\\}.$
Since, if $\vartheta$ is locally Lipschitz around $\bar{x}$, then it is
$\widehat{\partial}\vartheta(\bar{x})\subseteq{\partial}_{C}\vartheta(\bar{x})$
(see, for instance, [15, Chapter 1]), it follows that the condition
(4.7) $\mathbf{0}\in{\partial}_{C}\vartheta(\bar{x})+{\rm
int}\,\left[{\left(\bigcap_{z\in K}{\rm D}_{B}f(\bar{x},z)^{-1}({\rm
T}(f(\bar{x},z);C))\cap{\rm T}(\bar{x};K)\right)}^{{}^{\ominus}}\right]$
does not imply in general the condition in $(\ref{in:socDlo})$.
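A standard textbook example (not taken from the present paper's data) makes the gap concrete: for $\vartheta(x)=-|x|$ one has $\widehat{\partial}\vartheta(0)=\varnothing$ while ${\partial}_{C}\vartheta(0)=[-1,1]$, so a condition such as (4.7) may hold while (4.6) cannot. The sketch below checks numerically that no candidate $v$ can belong to $\widehat{\partial}\vartheta(0)$, since the difference quotients stay bounded away from zero from below:

```python
import numpy as np

# For theta(x) = -|x|, the Frechet (lower) subdifferential at 0 is empty:
# for every candidate v, the quotient (theta(x) - theta(0) - v*x)/|x|
# equals -1 - v*sign(x), whose minimum over x near 0 is -1 - |v| <= -1,
# so the liminf defining membership of v can never be >= 0.

def frechet_lower_quotient(v, xs):
    """Difference quotients (theta(x) - theta(0) - v*x)/|x| for theta = -|.|"""
    return (-np.abs(xs) - v * xs) / np.abs(xs)

xs = np.array([1e-3, -1e-3])  # test points on both sides of 0
for v in np.linspace(-2.0, 2.0, 81):
    assert np.min(frechet_lower_quotient(v, xs)) <= -1.0 + 1e-9
```

By contrast, the Clarke construction convexifies the two one-sided slopes $\pm 1$, which is what makes ${\partial}_{C}\vartheta(0)$ nonempty.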
## References
* [1] Q. H. Ansari, I.V. Konnov, J.C. Yao, Characterizations of solutions for vector equilibrium problems, J. Optim. Theory Appl. 113 (2002), no. 3, 435–447.
* [2] Q. H. Ansari, E. Köbis, J.-C. Yao, Vector variational inequalities and vector optimization. Theory and applications. Vector Optimization, Springer, Cham, 2018.
* [3] Q. H. Ansari, W. Oettli, D. Schläger, A generalization of vectorial equilibria, Math. Methods Oper. Res. 46 (1997), no. 2, 147–152.
* [4] J.-P. Aubin, H. Frankowska, Set-valued analysis, Birkhäuser Boston, Boston, MA, 2009.
* [5] M. Bianchi, N. Hadjisavvas, and S. Schaible, Vector equilibrium problems with generalized monotone bifunctions, J. Optim. Theory Appl. 92 (1997), no. 3, 527–542.
* [6] M. Castellani, M. Pappalardo, First-order cone approximations and necessary optimality conditions, Optimization 35 (1995), no. 2, 113–126.
* [7] V.F. Demyanov, A.M. Rubinov, Constructive nonsmooth analysis, Peter Lang, Frankfurt am Main, 1995.
* [8] K.-H. Elster, J. Thierfelder, Abstract cone approximations and generalized differentiability in nonsmooth optimization, Optimization 19 (1988), no. 3, 315–341.
* [9] X.H. Gong, Strong vector equilibrium problems, J. Global Optim. 36 (2006), no. 3, 339–349.
* [10] X.H. Gong, K. Kimura, J.-C. Yao, Sensitivity analysis of strong vector equilibrium problems, J. Nonlinear Convex Anal. 9 (2008), no. 1, 83–94.
* [11] Z.-Q. Luo, J.-S. Pang, D. Ralph, Mathematical programs with equilibrium constraints, Cambridge University Press, Cambridge, 1996.
* [12] Z.-Q. Luo, J.-S. Pang, D. Ralph, S.-Q. Wu, Exact penalization and stationarity conditions of mathematical programs with equilibrium constraints, Math. Programming 75 (1996), no. 1, Ser. A, 19–76.
* [13] B.S. Mordukhovich, Variational analysis and generalized differentiation. I. Basic theory, Springer-Verlag, Berlin, 2006.
* [14] B.S. Mordukhovich, Variational analysis and generalized differentiation. II. Applications, Springer-Verlag, Berlin, 2006.
* [15] B.S. Mordukhovich, Variational analysis and applications, Springer, Cham, 2018.
* [16] W. Oettli, A remark on vector-valued equilibria and generalized monotonicity, Acta Math. Vietnam. 22 (1997), no. 1, 213–221.
* [17] J.V. Outrata, M. Kočvara, J. Zowe, Nonsmooth approach to optimization problems with equilibrium constraints. Theory, applications and numerical results, Nonconvex Optimization and its Applications, 28. Kluwer Academic Publishers, Dordrecht, 1998.
* [18] J.P. Penot, Calculus without derivatives, Springer, New York, 2013.
* [19] R.T. Rockafellar and R.J.-B. Wets, Variational Analysis, Springer-Verlag, Berlin, 1998.
* [20] W. Schirotzek, Nonsmooth analysis, Springer, Berlin, 2007.
* [21] A. Uderzo, Some enhanced existence results for strong vector equilibrium problems, to appear in Pure and Applied Functional Analysis.
* [22] J.J. Ye, Necessary and sufficient optimality conditions for mathematical programs with equilibrium constraints, J. Math. Anal. Appl. 307 (2005), no. 1, 350–369.