37 pages, 10 figures, 3 tables, 2 appendices; supplement: https://doi.org/10.5194/gmd-14-1949-2021-supplement ; Diversity plays a key role in the adaptive capacity of marine ecosystems to environmental changes. However, modelling the adaptive dynamics of phytoplankton traits remains challenging due to the competitive exclusion of sub-optimal phenotypes and the complexity of evolutionary processes leading to optimal phenotypes. Trait diffusion (TD) is a recently developed approach to sustain diversity in plankton models by introducing mutations, thereby allowing the adaptive evolution of functional traits to occur at ecological timescales. In this study, we present a model called Simulating Plankton Evolution with Adaptive Dynamics (SPEAD) that resolves the eco-evolutionary processes of a multi-trait plankton community. The SPEAD model can be used to evaluate plankton adaptation to environmental changes at different timescales or to address ecological questions affected by adaptive evolution. Phytoplankton phenotypes in SPEAD are characterized by two traits, the nitrogen half-saturation constant and the optimal temperature, which can mutate at each generation through the TD mechanism. SPEAD does not resolve the different phenotypes as discrete entities; instead, it computes six aggregate properties: the total phytoplankton biomass, the mean value and variance of each trait, and the inter-trait covariance of a single population in a continuous trait space. SPEAD thus resolves the dynamics of the population's continuous trait distribution by solving for its statistical moments, with the trait variances representing the diversity of ecotypes. The ecological model is coupled to a vertically resolved (1D) physical environment, so the adaptive dynamics of the simulated phytoplankton population are driven by seasonal variations in vertical mixing, nutrient concentration, water temperature, and solar irradiance.
The simulated bulk properties are validated against observations from the Bermuda Atlantic Time-series Study (BATS) in the Sargasso Sea. We find that moderate mutation rates sustain trait diversity at decadal timescales and soften the almost total inter-trait correlation induced by the environment alone, without reducing the annual primary production or promoting permanently maladapted phenotypes, as occurs with high mutation rates. To evaluate the performance of the continuous trait approximation, we also compare the solutions of SPEAD with those of a classical discrete-entity approach, with both approaches including TD as a mechanism to sustain trait variance. We find only minor discrepancies between the continuous model SPEAD and the discrete model, with the computational cost of SPEAD being lower by two orders of magnitude. SPEAD should therefore be an ideal eco-evolutionary plankton model to couple to a general circulation model (GCM) of the global ocean ; This work was funded by national research grant CTM2017-87227-P (SPEAD) from the Spanish government. We acknowledge support for the publication fee by the CSIC Open Access Publication Support Initiative through its Unit of Information Resources for Research (URICI). The Institute of Marine Sciences (ICM – CSIC) is supported by a "Severo Ochoa" Centre of Excellence grant (CEX2019-000928-S) from the Spanish government ; Peer reviewed
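The moment-based approach at the heart of SPEAD can be illustrated with a toy calculation: for any explicit biomass distribution over the two traits, the six aggregate properties are its zeroth, first, and second statistical moments. The grid ranges and the bivariate distribution below are invented for illustration; SPEAD itself evolves the moments directly rather than storing a trait grid.

```python
import numpy as np

# Hypothetical trait axes: nitrogen half-saturation constant K_n and
# optimal temperature T_opt (units and ranges are invented).
kn = np.linspace(0.1, 2.0, 50)       # mmol N m^-3
topt = np.linspace(10.0, 30.0, 50)   # deg C
KN, TOPT = np.meshgrid(kn, topt, indexing="ij")

# Invented bivariate biomass distribution over the continuous trait space.
b = np.exp(-((KN - 1.0) ** 2) / 0.1 - ((TOPT - 22.0) ** 2) / 8.0)

# The six aggregate properties SPEAD tracks:
total = b.sum()                                          # total biomass
mean_kn = (b * KN).sum() / total                         # mean trait 1
mean_topt = (b * TOPT).sum() / total                     # mean trait 2
var_kn = (b * (KN - mean_kn) ** 2).sum() / total         # trait-1 variance
var_topt = (b * (TOPT - mean_topt) ** 2).sum() / total   # trait-2 variance
cov = (b * (KN - mean_kn) * (TOPT - mean_topt)).sum() / total  # covariance
```

The trait variances quantify the ecotype diversity that mutation (trait diffusion) replenishes against competitive exclusion; a nonzero covariance would indicate environmentally induced inter-trait correlation.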
Measurements of the centrality and rapidity dependence of inclusive jet production in √s_NN = 5.02 TeV proton-lead (p+Pb) collisions and the jet cross-section in √s = 2.76 TeV proton-proton collisions are presented. These quantities are measured in datasets corresponding to integrated luminosities of 27.8 nb⁻¹ and 4.0 pb⁻¹, respectively, recorded with the ATLAS detector at the Large Hadron Collider in 2013. The p+Pb collision centrality was characterised using the total transverse energy measured in the pseudorapidity interval -4.9 < η < -3.2 in the direction of the lead beam. Results are presented for the double-differential per-collision yields as a function of jet rapidity and transverse momentum (pT) for minimum-bias and centrality-selected p+Pb collisions, and are compared to the jet rate from the geometric expectation. The total jet yield in minimum-bias events is slightly enhanced above the expectation in a pT-dependent manner but is consistent with the expectation within uncertainties. The ratios of jet spectra from different centrality selections show a strong modification of jet production at all pT at forward rapidities and for large pT at mid-rapidity, which manifests as a suppression of the jet yield in central events and an enhancement in peripheral events. These effects imply that the factorisation between hard and soft processes is violated at an unexpected level in proton-nucleus collisions. Furthermore, the modifications at forward rapidities are found to be a function of the total jet energy only, implying that the violations may have a simple dependence on the hard parton-parton kinematics ; We thank CERN for the very successful operation of the LHC, as well as the support staff from our institutions without whom ATLAS could not be operated efficiently.
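The centrality comparison described above can be sketched as a per-collision yield ratio: each centrality class's jet count is normalised per sampled event and per expected number of binary nucleon-nucleon collisions ⟨N_coll⟩, so a ratio of unity corresponds to the geometric expectation. All numbers below are invented for illustration; this is not the ATLAS analysis code.

```python
# Invented inputs for one (rapidity, pT) bin.
n_events = {"central": 1.0e6, "peripheral": 1.0e6}  # sampled events per class
n_jets   = {"central": 4.0e3, "peripheral": 1.2e3}  # jets counted in the bin
n_coll   = {"central": 12.0,  "peripheral": 3.0}    # hypothetical <N_coll>

def per_collision_yield(cls):
    # Jets per event, scaled by the geometric expectation for the number
    # of nucleon-nucleon collisions in that centrality class.
    return n_jets[cls] / n_events[cls] / n_coll[cls]

# Central-to-peripheral ratio: < 1 signals suppression in central events.
r_cp = per_collision_yield("central") / per_collision_yield("peripheral")
```

In these invented numbers r_cp comes out below one, mimicking the central-event suppression the measurement reports at forward rapidity and large pT.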
We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC, Australia; BMWFW and FWF, Austria; ANAS, Azerbaijan; SSTC, Belarus; CNPq and FAPESP, Brazil; NSERC, NRC and CFI, Canada; CERN; CONICYT, Chile; CAS, MOST and NSFC, China; COLCIENCIAS, Colombia; MSMT CR, MPO CR and VSC CR, Czech Republic; DNRF, DNSRC and Lundbeck Foundation, Denmark; EPLANET, ERC and NSRF, European Union; IN2P3-CNRS, CEA-DSM/IRFU, France; GNSF, Georgia; BMBF, DFG, HGF, MPG and AvH Foundation, Germany; GSRT and NSRF, Greece; ISF, MINERVA, GIF, I-CORE and Benoziyo Center, Israel; INFN, Italy; MEXT and JSPS, Japan; CNRST, Morocco; FOM and NWO, Netherlands; BRF and RCN, Norway; MNiSW and NCN, Poland; GRICES and FCT, Portugal; MNE/IFA, Romania; MES of Russia and ROSATOM, Russian Federation; JINR; MSTD, Serbia; MSSR, Slovakia; ARRS and MIZŠ, Slovenia; DST/NRF, South Africa; MINECO, Spain; SRC and Wallenberg Foundation, Sweden; SER, SNSF and Cantons of Bern and Geneva, Switzerland; NSC, Taiwan; TAEK, Turkey; STFC, the Royal Society and Leverhulme Trust, United Kingdom; DOE and NSF, United States of America. The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN and the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA) and in the Tier-2 facilities worldwide
WOS: 000480466200001 ; A measurement of jet substructure observables is presented using data collected in 2016 by the ATLAS experiment at the LHC with proton-proton collisions at √s = 13 TeV. Large-radius jets groomed with the trimming and soft-drop algorithms are studied. Dedicated event selections are used to study jets produced by light quarks or gluons, and hadronically decaying top quarks and W bosons. The observables measured are sensitive to substructure and are therefore typically used for tagging large-radius jets from boosted massive particles. These include the energy correlation functions and the N-subjettiness variables. The number of subjets and the Les Houches angularity are also considered. The distributions of the substructure variables, corrected for detector effects, are compared to the predictions of various Monte Carlo event generators. They are also compared between the large-radius jets originating from light quarks or gluons, and those from hadronically decaying top quarks and W bosons.
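N-subjettiness, one of the observables named above, can be illustrated with a toy calculation (not the ATLAS implementation, which clusters the full jet and minimises over candidate axes): τ_N sums each constituent's pT weighted by its angular distance to the nearest of N subjet axes, normalised by R0 times the total pT. The constituents and hand-picked axes below are invented.

```python
import math

R0 = 1.0  # large-radius jet parameter

def delta_r(a, b):
    # (pT, eta, phi) tuples; dphi wrapped into (-pi, pi].
    deta = a[1] - b[1]
    dphi = math.atan2(math.sin(a[2] - b[2]), math.cos(a[2] - b[2]))
    return math.hypot(deta, dphi)

def tau_n(constituents, axes):
    # tau_N = (1/d0) * sum_i pT_i * min_k DeltaR(i, axis_k), d0 = R0 * sum_i pT_i
    d0 = R0 * sum(p[0] for p in constituents)
    return sum(p[0] * min(delta_r(p, ax) for ax in axes) for p in constituents) / d0

# Invented two-prong jet: constituents clustered around two directions.
jet = [(40.0, 0.10, 0.00), (35.0, 0.12, 0.05),
       (30.0, -0.30, 0.40), (25.0, -0.28, 0.38)]
tau1 = tau_n(jet, [(0.0, 0.0, 0.2)])                         # one-axis hypothesis
tau2 = tau_n(jet, [(0.0, 0.11, 0.02), (0.0, -0.29, 0.39)])   # two-axis hypothesis
```

A small τ2/τ1 ratio, as in this configuration, is what makes the variable useful for tagging boosted two-prong decays such as W bosons.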
We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC, Australia; BMWFW and FWF, Austria; ANAS, Azerbaijan; SSTC, Belarus; CNPq and FAPESP, Brazil; NSERC, NRC and CFI, Canada; CERN; CONICYT, Chile; CAS, MOST and NSFC, China; COLCIENCIAS, Colombia; MSMT CR, MPO CR and
VSC CR, Czech Republic; DNRF and DNSRC, Denmark; IN2P3-CNRS, CEA-DRF/IRFU, France; SRNSFG, Georgia; BMBF, HGF, and MPG, Germany; GSRT, Greece; RGC, Hong Kong SAR, China; ISF and Benoziyo Center, Israel; INFN, Italy; MEXT and JSPS, Japan; CNRST, Morocco; NWO, Netherlands; RCN, Norway; MNiSW and NCN, Poland; FCT, Portugal; MNE/IFA, Romania; MES of Russia and NRC KI, Russian Federation; JINR; MESTD, Serbia; MSSR, Slovakia; ARRS and MIZS, Slovenia; DST/NRF, South Africa; MINECO, Spain; SRC and Wallenberg Foundation, Sweden; SERI, SNSF and Cantons of Bern and Geneva, Switzerland; MOST, Taiwan; TAEK, Turkey; STFC, United Kingdom; DOE and NSF, United States of America. In addition, individual groups and members have received support from BCKDF, CANARIE, CRC and Compute Canada, Canada; COST, ERC, ERDF, Horizon 2020, and Marie Skłodowska-Curie Actions, European Union; Investissements d'Avenir Labex and Idex, ANR, France; DFG and AvH Foundation, Germany; Herakleitos, Thales and Aristeia programmes co-financed by EU-ESF and the Greek NSRF, Greece; BSF-NSF and GIF, Israel; CERCA Programme Generalitat de Catalunya, Spain; The Royal Society and Leverhulme Trust, United Kingdom. The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN, the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (U.K.) and BNL (U.S.A.), the Tier-2 facilities worldwide and large non-WLCG resource providers. Major contributors of computing resources are listed in ref. [92].
WOS: 000460674700008 ; The efficiency of the photon identification criteria in the ATLAS detector is measured using 36.1 fb⁻¹ to 36.7 fb⁻¹ of pp collision data at √s = 13 TeV collected in 2015 and 2016. The efficiencies are measured separately for converted and unconverted isolated photons, in four different pseudorapidity regions, for transverse momenta between 10 GeV and 1.5 TeV. The results from the combination of three data-driven techniques are compared with the predictions from simulation after correcting the variables describing the shape of electromagnetic showers in simulation for the average differences observed relative to data. Data-to-simulation efficiency ratios are determined to account for the small residual efficiency differences. These factors are measured with uncertainties between 0.5% and 5% depending on the photon transverse momentum and pseudorapidity. The impact of the isolation criteria on the photon identification efficiency, and that of additional soft pp interactions, are also discussed. The probability of reconstructing an electron as a photon candidate is measured in data and compared with the predictions from simulation. The efficiency of the reconstruction of photon conversions is measured using a sample of photon candidates from Z → μμγ events, exploiting the properties of the ratio of the energies deposited in the first and second longitudinal layers of the ATLAS electromagnetic calorimeter.
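The data-to-simulation ratios described above reduce, in outline, to a per-bin calculation: measure the efficiency in data and in simulation, take the ratio as the correction factor, and propagate relative uncertainties in quadrature. The counts below are invented, and the simple binomial (Wald) error is a stand-in for the combination of data-driven techniques the measurement actually uses.

```python
import math

def efficiency(n_pass, n_total):
    # Binomial efficiency with a Wald-approximation uncertainty.
    eff = n_pass / n_total
    err = math.sqrt(eff * (1.0 - eff) / n_total)
    return eff, err

# Invented counts for one (pT, eta) bin.
eff_data, err_data = efficiency(9200, 10000)  # hypothetical data sample
eff_mc, err_mc = efficiency(9400, 10000)      # hypothetical simulation sample

# Data-to-simulation scale factor, applied to correct simulated photons.
sf = eff_data / eff_mc
sf_err = sf * math.hypot(err_data / eff_data, err_mc / eff_mc)
```

With these numbers the relative uncertainty on the scale factor is a few per mille, comparable to the 0.5%–5% range quoted for the real measurement.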
We thank CERN for the very successful operation of the LHC, as well as the support staff from our institutions without whom ATLAS could not be operated efficiently.
We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC, Australia; BMWFW and FWF, Austria; ANAS, Azerbaijan; SSTC, Belarus; CNPq and FAPESP, Brazil; NSERC, NRC and CFI, Canada; CERN; CONICYT, Chile; CAS, MOST and NSFC, China; COLCIENCIAS, Colombia; MSMT CR, MPO CR and VSC CR, Czech Republic; DNRF and DNSRC, Denmark; IN2P3-CNRS, CEA-DRF/IRFU, France; SRNSFG, Georgia; BMBF, HGF, and MPG, Germany; GSRT, Greece; RGC, Hong Kong SAR, China; ISF and Benoziyo Center, Israel; INFN, Italy; MEXT and JSPS, Japan; CNRST, Morocco; NWO, The Netherlands; RCN, Norway; MNiSW and NCN, Poland; FCT, Portugal; MNE/IFA, Romania; MES of Russia and NRC KI, Russian Federation; JINR; MESTD, Serbia; MSSR, Slovakia; ARRS and MIZS, Slovenia; DST/NRF, South Africa; MINECO, Spain; SRC and Wallenberg Foundation, Sweden; SERI, SNSF and Cantons of Bern and Geneva, Switzerland; MOST, Taiwan; TAEK, Turkey; STFC, UK; DOE and NSF, USA. In addition, individual groups and members have received support from BCKDF, CANARIE, CRC and Compute Canada, Canada; COST, ERC, ERDF, Horizon 2020, and Marie Skłodowska-Curie Actions, European Union; Investissements d'Avenir Labex and Idex, ANR, France; DFG and AvH Foundation, Germany; Herakleitos, Thales and Aristeia programmes co-financed by EU-ESF and the Greek NSRF, Greece; BSF-NSF and GIF, Israel; CERCA Programme Generalitat de Catalunya, Spain; The Royal Society and Leverhulme Trust, UK. The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN, the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (The Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA), the Tier-2 facilities worldwide and large non-WLCG resource providers. Major contributors of computing resources are listed in Ref. [43].
WOS: 000459292400001 ; PubMed: 30880822 ; The performance of the missing transverse momentum (E_T^miss) reconstruction with the ATLAS detector is evaluated using data collected in proton-proton collisions at the LHC at a centre-of-mass energy of 13 TeV in 2015. To reconstruct E_T^miss, fully calibrated electrons, muons, photons, hadronically decaying tau-leptons, and jets reconstructed from calorimeter energy deposits and charged-particle tracks are used. These are combined with the soft hadronic activity measured by reconstructed charged-particle tracks not associated with the hard objects. Possible double counting of contributions from reconstructed charged-particle tracks from the inner detector, energy deposits in the calorimeter, and reconstructed muons from the muon spectrometer is avoided by applying a signal ambiguity resolution procedure which rejects already used signals when combining the various E_T^miss contributions. The individual terms as well as the overall reconstructed E_T^miss are evaluated with various performance metrics for scale (linearity), resolution, and sensitivity to the data-taking conditions. The method developed to determine the systematic uncertainties of the E_T^miss scale and resolution is discussed. Results are shown based on the full 2015 data sample corresponding to an integrated luminosity of 3.2 fb⁻¹.
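At its core, the combination described above is a transverse vector sum: E_T^miss is minus the vectorial sum of the calibrated hard objects plus the track-based soft term. The sketch below uses invented (pT, φ) values and omits calibration, overlap removal, and the ambiguity-resolution procedure.

```python
import math

def px_py(pt, phi):
    # Transverse momentum components from (pT, phi).
    return pt * math.cos(phi), pt * math.sin(phi)

# Invented hard objects (electrons/muons/photons/taus/jets) and soft tracks.
hard_objects = [(85.0, 0.3), (60.0, 2.8), (40.0, -1.2)]  # (pT [GeV], phi)
soft_tracks = [(2.0, 1.0), (1.5, -2.0), (3.0, 0.5)]      # unassociated tracks

sum_px = sum(px_py(pt, phi)[0] for pt, phi in hard_objects + soft_tracks)
sum_py = sum(px_py(pt, phi)[1] for pt, phi in hard_objects + soft_tracks)

met = math.hypot(-sum_px, -sum_py)       # magnitude of E_T^miss
met_phi = math.atan2(-sum_py, -sum_px)   # azimuth of E_T^miss
```

The "scale (linearity)" metric in the text then compares this reconstructed magnitude against a reference (e.g. the known boson pT in Z events), and the resolution is the spread of the component residuals.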
We thank CERN for the very successful operation of the LHC, as well as the support staff from our institutions without whom ATLAS could not be operated efficiently.
We acknowledge the support of ANPCyT, Argentina; YerPhI, Armenia; ARC, Australia; BMWFW and FWF, Austria; ANAS, Azerbaijan; SSTC, Belarus; CNPq and FAPESP, Brazil; NSERC, NRC and CFI, Canada; CERN; CONICYT, Chile; CAS, MOST and NSFC, China; COLCIENCIAS, Colombia; MSMT CR, MPO CR and VSC CR, Czech Republic; DNRF and DNSRC, Denmark; IN2P3-CNRS, CEA-DRF/IRFU, France; SRNSFG, Georgia; BMBF, HGF, and MPG, Germany; GSRT, Greece; RGC, Hong Kong SAR, China; ISF, I-CORE and Benoziyo Center, Israel; INFN, Italy; MEXT and JSPS, Japan; CNRST, Morocco; NWO, Netherlands; RCN, Norway; MNiSW and NCN, Poland; FCT, Portugal; MNE/IFA, Romania; MES of Russia and NRC KI, Russian Federation; JINR; MESTD, Serbia; MSSR, Slovakia; ARRS and MIZS, Slovenia; DST/NRF, South Africa; MINECO, Spain; SRC and Wallenberg Foundation, Sweden; SERI, SNSF and Cantons of Bern and Geneva, Switzerland; MOST, Taiwan; TAEK, Turkey; STFC, United Kingdom; DOE and NSF, United States of America. In addition, individual groups and members have received support from BCKDF, the Canada Council, CANARIE, CRC, Compute Canada, FQRNT, and the Ontario Innovation Trust, Canada; EPLANET, ERC, ERDF, FP7, Horizon 2020 and Marie Skłodowska-Curie Actions, European Union; Investissements d'Avenir Labex and Idex, ANR, Region Auvergne and Fondation Partager le Savoir, France; DFG and AvH Foundation, Germany; Herakleitos, Thales and Aristeia programmes co-financed by EU-ESF and the Greek NSRF; BSF, GIF and Minerva, Israel; BRF, Norway; CERCA Programme Generalitat de Catalunya, Generalitat Valenciana, Spain; the Royal Society and Leverhulme Trust, United Kingdom.
The crucial computing support from all WLCG partners is acknowledged gratefully, in particular from CERN, the ATLAS Tier-1 facilities at TRIUMF (Canada), NDGF (Denmark, Norway, Sweden), CC-IN2P3 (France), KIT/GridKA (Germany), INFN-CNAF (Italy), NL-T1 (Netherlands), PIC (Spain), ASGC (Taiwan), RAL (UK) and BNL (USA), the Tier-2 facilities worldwide and large non-WLCG resource providers. Major contributors of computing resources are listed in Ref. [46].
WOS:000449259100001 ; A measurement of the groomed jet mass in PbPb and pp collisions at a nucleon-nucleon center-of-mass energy of 5.02 TeV with the CMS detector at the LHC is presented. Jet grooming is a recursive procedure which sequentially removes soft constituents of a jet until a pair of hard subjets is found. The resulting groomed jets can be used to study modifications to the parton shower evolution in the presence of the hot and dense medium created in heavy ion collisions. Predictions of groomed jet properties from the pythia and herwig++ event generators agree with the measurements in pp collisions. When comparing the results from the most central PbPb collisions to pp data, a hint of an increase of jets with large jet mass is observed, which could originate from additional medium-induced radiation at a large angle from the jet axis. However, no modification of the groomed mass of the core of the jet is observed for all PbPb centrality classes. The PbPb results are also compared to predictions from the jewel and q-pythia event generators, which predict a large modification of the groomed mass not observed in the data. 
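The recursive grooming described above (soft drop) can be sketched by walking the jet's angular-ordered declustering history and discarding the softer branch at each splitting until one passes the soft-drop condition z > z_cut (θ/R0)^β. The splitting history below is invented, and the Cambridge/Aachen reclustering that would produce it is not implemented here.

```python
# Common soft-drop settings; beta = 0 reduces to the modified mass-drop tagger.
Z_CUT, BETA, R0 = 0.1, 0.0, 0.4

def soft_drop(splittings):
    """Walk (z, theta) splittings from wide to narrow angle; return the first
    one passing the soft-drop condition, or None if the jet is groomed away."""
    for z, theta in splittings:
        if z > Z_CUT * (theta / R0) ** BETA:
            return (z, theta)  # groomed momentum fraction z_g and angle R_g
    return None

# Invented history: two soft wide-angle emissions, then a hard splitting.
history = [(0.02, 0.35), (0.05, 0.2), (0.3, 0.1)]
groomed = soft_drop(history)  # the soft emissions are dropped
```

The groomed jet mass studied in the measurement is then computed from only the constituents surviving this walk, which is why soft wide-angle (including medium-induced) radiation shifts it.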
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses.
Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMWFW and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF (Cyprus); SENESCYT (Ecuador); MoER, ERC IUT, and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); NKFIA (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); LAS (Lithuania); MOE and UM (Malaysia); BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI (Mexico); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS and RFBR (Russia); MESTD (Serbia); SEIDI, CPAN, PCTI and FEDER (Spain); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR, and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU and SFFR (Ukraine); STFC (United Kingdom); DOE and NSF (U.S.A.). ; Individuals have received support from the Marie-Curie programme and the European Research Council and Horizon 2020 Grant, contract No. 675440 (European Union); the Leventis Foundation; the A.P. Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation à la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium) under the "Excellence of Science - EOS" - be.h project n. 
30820817; the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Lendület ("Momentum") Programme and the János Bolyai Research Scholarship of the Hungarian Academy of Sciences, the New National Excellence Program ÚNKP, the NKFIA research grants 123842, 123959, 124845, 124850 and 125105 (Hungary); the Council of Scientific and Industrial Research, India; the HOMING PLUS programme of the Foundation for Polish Science, co-financed from the European Union Regional Development Fund, the Mobility Plus programme of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2014/13/B/ST2/02543, 2014/15/B/ST2/03998, and 2015/19/B/ST2/02861, Sonata-bis 2012/07/E/ST2/01406; the National Priorities Research Program by Qatar National Research Fund; the Programa Estatal de Fomento de la Investigación Científica y Técnica de Excelencia María de Maeztu, grant MDM-2015-0509, and the Programa Severo Ochoa del Principado de Asturias; the Thalis and Aristeia programmes co-financed by EU-ESF and the Greek NSRF; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); the Welch Foundation, contract C-1845; and the Weston Havens Foundation (U.S.A.).
The theme of the conference is "E-learning and STEM Education". "Skills in Science, Technology, Engineering and Mathematics (STEM) are becoming an increasingly important part of basic literacy in today's knowledge economy. To keep Europe growing, we will need one million additional researchers by 2020." (http://www.eun.org/focus-areas/stem) The monograph "E-learning and STEM Education" includes articles based on the best papers prepared and presented by authors from nine European countries and more than twenty universities during the 11th Annual International Scientific Conference "Theoretical and Practical Aspects of Distance Learning", subtitled "E-learning and STEM Education", which was held on 14-15 October 2019 and organized by the Faculty of Ethnology and Sciences of Education in Cieszyn, University of Silesia in Katowice, Poland. Experts on STEM and robotics in education from 10 countries, in particular Austria, Bulgaria, the Czech Republic, Morocco, the Netherlands, Poland, Slovakia, Ukraine, Russia and Turkey, reflect on how STEM education is currently viewed and implemented in their countries, drawing on legislation and funding focus and using local data to predict how the future will unfold for STEM education. The speakers from the University of Innsbruck (Austria), University of Twente (the Netherlands), the Comenius University in Bratislava (Slovakia), Plovdiv University "Paisii Hilendarski" (Bulgaria), Borys Grinchenko Kyiv University (Ukraine), Gdańsk Technical University (Poland), Herzen State Pedagogical University of Russia, St. Petersburg (Russia), Jagiellonian University (Poland), Warsaw University (Poland), Silesian University in Opava (Czech Republic), Jesuit University of Philosophy and Education "Ignatianum", Cracow (Poland), University of Silesia in Katowice (Poland), University of Defence in Brno (Czech Republic), K. 
Ushynskyi South Ukrainian National Pedagogical University (Ukraine), Maria Curie-Skłodowska University in Lublin (Poland), Lublin University of Technology (Poland), Mykhailo Drahomanov National Pedagogical University, Kyiv (Ukraine), Kazimierz Wielki University in Bydgoszcz (Poland), Taras Shevchenko National University "Chernihiv Collegium" (Ukraine), Dniprovsk State Technical University (Ukraine), University of Ostrava (Czech Republic), Pedagogical University of Krakow (Poland), University of Social Sciences and Humanities in Warsaw (Poland), Makarenko Sumy State Pedagogical University (Ukraine), Poznań University of Medical Sciences (Poland), Ternopil University (Ukraine), Kherson State University (Ukraine), Warsaw University of Technology (Poland), Izmail State University of Humanities (Ukraine), Adam Mickiewicz University in Poznań (Poland), and other educational institutions delivered lectures providing insights into interesting studies, presented their recent research results and discussed their further scientific work. The authors include experts, well-known scholars, young researchers, highly trained academic lecturers with long experience in the field of e-learning, PhD students, distance course developers, authors of multimedia teaching materials, designers of websites and educational sites. I am convinced that this monograph will be an interesting and valuable publication, describing the theoretical, methodological and practical issues in the field of e-learning in STEM education, offering proposals of solutions to certain important problems and showing the road to further work in this field, allowing the exchange of experiences of scholars from various universities in many European countries and other countries of the world. This book includes a sequence of responses to numerous questions that have not been answered yet. 
The papers of the authors included in the monograph are an attempt at providing such answers. The aspects and problems discussed in the materials include the following:
1. E-learning and STEM Education: STEM education trends; robots and coding in education; immersive learning environments; blockchain; Internet of Things; 3D printing.
2. E-environment and Cyberspace: e-environment of the university; SMART universities; SMART technology in education; e-learning in a sustainable society.
3. E-learning in the Development of Key and Soft Competences: effective development of teachers' skills in the area of ICT and e-learning; key competences in the knowledge society; use of e-learning in improving the level of students' digital competences; distance learning and lifelong learning; self-learning based on Internet technology.
4. E-learning and Intercultural Competences Development in Different Countries: legal, social, human, scientific and technical aspects of distance learning and e-learning in different countries; psychological and ethical aspects of distance learning and e-learning in different countries; collaborative learning in e-learning.
5. E-learning Methodology – Implementation and Evaluation: European and national standards of e-learning quality evaluation; evaluation of synchronous and asynchronous teaching and learning, methodology and good examples; MOOCs – methodology of design, conducting, implementation and evaluation; contemporary trends in world education – globalization, internationalization, mobility.
6. ICT Tools – Effective Use in Education: selected Web 2.0 and Web 3.0 technologies; LMS, CMS, VSCR, SSA, CSA; cloud computing environments; social media; multimedia resources; video-tutorial design.
7. Alternative Methods, Forms and Techniques in Distance Learning: simulations, models in distance learning, networking, distance learning systems, m-learning.
8. Theoretical and Methodological Aspects of Distance Learning: successful examples of e-learning; distance learning in humanities and science; quality of teaching, training programs and assessment; e-learning for the disabled.
Publishing this monograph is a good example of expanding and strengthening international cooperation.
In 1928, Utah Construction Company completed its first project outside of the United States with the 110 mile railroad for Southern Pacific of Mexico. Over the next 30 years, UCC continued to work on projects in Mexico including dams, roads, mining, and canals. The collection contains several booklets and correspondence along with approximately 500 photographs. ; 8.5 x 11 in. paper ; SANCHEZ MEJORADA Y CREEL: CARLOS SANCHEZ MEJORADA, LUIS J. CREEL LUJAN, CARLOS SANCHEZ MEJORADA (JR.) February 2, 1960. Mr. O. L. Dykstra, Treasurer, Utah Construction & Mining Company, 100 Bush Street, San Francisco 4, Calif. Dear Mr. Dykstra: In connection with the opinion you requested during your last trip to Mexico regarding the advantages and disadvantages of operating in this country through a foreign corporation, I beg to give you the following information: 1. According to law, there are certain activities that must necessarily be carried out by a Mexican corporation; among such are the chemical, the fertilizer, the insecticide, the fishery and the soft drink industries. Furthermore, also according to law, no foreign individual or corporation can hold land within the so-called prohibited zone, which is an area 100 kilometers wide on the international borders and 50 kilometers wide on the sea coasts. 2. If there is no legal impediment to operating in Mexico through a foreign corporation, it will be necessary to fulfill the following requirements in order to have said foreign corporation qualified in this country. a). Submit evidence that the foreign corporation has been duly incorporated in accordance with the laws of its place of incorporation; this evidence consists in: i). A copy, duly certified by the Secretary of State of the State where the corporation was incorporated or by the competent authority, of the deed of incorporation and of the other pertinent corporate documents. 
This certified copy must be duly authorized by the competent Mexican diplomatic or consular representatives. ii). A certification from the competent Mexican diplomatic or consular authorities that the foreign corporation has been duly incorporated and is qualified according to the laws of the country of incorporation to act. b). An official translation, prepared by order of a Mexican Court, of the documents referred to in a) i). c). A Mexican Court ruling that the foreign corporation's charter and bylaws are not contrary to the provisions of public policy of the laws of this country. d). Legalization by the Ministry of Foreign Relations of the documents referred to in paragraph a). e). A permit from the Ministry of Foreign Relations, in which it will be provided that the stockholders of the foreign corporation will consider themselves as Mexican citizens in all matters pertaining to the operations of the corporation in Mexico and that they waive the right of ever requesting diplomatic protection in connection with the activities carried forth in this country. f). A permit from the Ministry of Industry and Commerce, who will examine all the documents referred to in the preceding paragraphs and will, at his discretion, authorize qualification of the foreign corporation to operate in this country. g). Registration of the official Spanish translation of the charter and bylaws of the foreign corporation, as well as of all other documents such as the permits from the Ministries of Foreign Relations and of Industry and Commerce, at the Public Registry of Commerce. h). The appointment of a permanent representative of the foreign corporation in Mexico. i). The establishment of permanent offices in Mexico. j). Registration of the corporation in question with the appropriate industrial chamber or chamber of commerce and with the competent tax authorities. k). 
The opening of books of accounts, duly legalized by the Tax Authorities. l). The publication in the Federal Official Daily of the yearly balance sheet of the foreign corporation. 3. The foreign corporation operating in Mexico will naturally be subject to compliance with all legal requirements pertaining to Mexican taxation, such as the filing of returns. Though the tax laws provide for specific rules in determining the taxable income of foreign corporations operating in this country, there is the possibility that, due to the internal organization of the corporation, it would be hard to comply with the requirements of the tax laws, primarily the Income Tax legislation, regarding certain deductions. To overcome this difficulty, there is the possibility of entering into agreements with the Mexican Tax Authorities which would establish the basis for computing the taxable income. In my experience I know of very few foreign corporations which operate directly in Mexico, and those who started off on said basis have decided to establish Mexican corporations as subsidiaries of the parent. The major drawback is the tax question, and I would recommend that the opinion of the Mexico City office of your independent auditors, Price Waterhouse & Company, be obtained. Please let me know if you will require any additional information and oblige, Yours very truly, Luis J. Creel (Luis J. Creel Lujan) c.c. Mr. L. B. Shelley LJCL:gaa
The issue of the regulation of artificial intelligence (AI) is one of the significant challenges currently faced by the EU. Most researchers focus on the substantive scope of AI regulation, including state law, ethical norms and soft law. Beyond the substantive and legal scope of the regulation, it is worthwhile considering the manner of such regulation. Since AI is algorithmic code, it seems correct to regulate (restrict) AI not so much with traditional law established in natural (human) language as with law implemented into algorithms. Such algorithms may operate as a tool supporting traditional legislation (RegTech), but it is possible to go further and create regulation algorithms which implement the law as the effective law. However, this requires a new approach to law and legislation – the law as algorithmic code. ; University of Silesia, Poland ; Dariusz Szostek is Associate Professor in the Department of Civil Law and Civil Procedure at the Faculty of Law and Administration, Opole University, Poland; Head of the Centre for Legal Problems of Technical Issues and New Technologies; European Parliament AI Observatory science expert (2020–2024); member of the European Union Intellectual Property Office; and Chairman of the Scientific Council of the Virtual Department of Law and Ethics. ; dariusz.szostek@szostek-bar.pl 
Background: International agencies such as the World Health Organisation have highlighted the potential of digital information and communications technologies to strengthen health systems, which are underpinned by the 'building blocks' of information, human resources, finances, commodities, leadership and governance, and service delivery. In high-income countries, evidence of the positive impacts of 'eHealth' innovations on the cost-effectiveness of healthcare is growing, and many governments are now providing incentives for their adoption. In contrast, the use of eHealth in developing countries has remained low, and efforts to introduce these new approaches have experienced high failure rates. There is even scepticism regarding the feasibility of eHealth in low-resource settings, where it may be hindered by high costs, indeterminate returns on investment, technical problems and socio-organisational barriers. More research is needed to document both the value of eHealth for strengthening resource-limited health systems and the challenges involved in its implementation and adoption, so that insights from such research may be used to inform future initiatives. While many studies of eHealth for patient care in low- and middle-income countries (LMIC) are taking place, evidence of its role in improving administrative processes such as financial management is lacking, despite the importance of 'good governance' (transparency and accountability) for ensuring strong and resilient health systems. The overall objective of this PhD was to elucidate the enablers, inhibitors and outcomes characterising the implementation and adoption of a modular eHealth system in a group of healthcare facilities in rural Malawi. The system included both clinical and billing modules. 
The specific objectives were (i) to understand the socio-technical, organisational and change management factors facilitating or hindering the implementation and adoption of the eHealth system, (ii) to assess the quality of data captured by the eHealth system compared with conventional paper-based records, and (iii) to understand how information within the eHealth system was used for service delivery, reporting and financial management. A further aim was to contribute to the corpus of mixed-methods case studies exploring eHealth system implementation processes and outcomes (including data quality) in LMIC. As described in the following chapters, the research also gave rise to unanticipated and serendipitous findings, which led to new lines of enquiry and influenced the theoretical perspectives from which the analysis drew. Methods: A mixed-methods case study design was used for the research, taking a 'soft-positivist' approach to analysis, which encompasses both inductive and deductive forms of enquiry. Two case studies were undertaken in rural Malawi: one at a 300-bed fee-for-service hospital, and the other at nine primary care health centres that surround the hospital. At the outset of the research, the 'logic model' underpinning the eHealth system implementation programme was mapped, based on formative scoping to articulate the goals and intentions of those commissioning and supplying the eHealth system, along with literature-informed theory. This provided a framework against which to evaluate the processes and outcomes of eHealth system implementation at the ten facilities. For the hospital case study (Case Study 1), a retrospective single-case embedded design was employed, with the outpatient and inpatient departments as the two units of analysis. 
Qualitative data included document review and in-depth key informant interviews, while quantitative data was obtained from the web-based District Health Information System (DHIS2), patient files and the hospital's finance records. For the study of primary health centres (Case Study 2), a single-case embedded design was also used, with the rollout project as the case and the three units of analysis being three Early Adopter facilities, four Late Majority facilities and two Laggard facilities. This case study used a prospective design, with data being collected 7 and 24 months after implementation of the eHealth system, due to a mismatch between the independent eHealth implementation project and the PhD research. Data sources included documentation screened against the criteria listed in the Performance of Routine Information System Management (PRISM) tools, information extracted from the eHealth system, health indicators drawn from DHIS2 and qualitative data from focus group discussions. In both case studies, framework analysis was used for qualitative data, while quantitative data was analysed by calculating data completeness, accuracy and agreement. Descriptive statistics and the Mann-Whitney U-test were used for analysing finance data in Case Study 1. Content analysis was also used to gain insights from Case Study 2. Results Based on the initial logic model, staff-, service delivery- and management-level outcomes were moderated through the organisational change management and socio-technical factors described below. Key organisational and process factors influencing system implementation Change management processes: Organisational strategies aimed at facilitating the introduction of the eHealth system included training clinical and clerical staff in the computer skills required to use it (see below) and adapting work processes to accommodate and optimise adoption.
At the three health facilities where the billing module was implemented, adapting work processes included introducing new procedures for providing electronic receipts to clients and service providers. At Madalo Hospital this also involved the creation of a new category of administrative staff with responsibility for managing the appropriate capture, entry and exchange of data using the system. However, such data clerks were only introduced within the inpatient department, whilst already over-burdened clinical staff in the outpatient department were expected to integrate the eHealth system into their existing work routines. Outpatient departments at the health centres resorted to task-shifting patient data entry roles from clinicians to lower-educated allied staff such as janitors and security guards. Infrastructure and security issues: Organisational enablers included infrastructural and policy interventions aimed at securing equipment and patient data, such as the installation of locks and burglar-proof bars, enhanced engagement of security guards and frequent backup of data. An organisational intervention undertaken at the health centres was the introduction of backup batteries and solar power, aimed at providing a continuous electricity supply. However, problems with battery depletion, frequent connectivity interruptions between the client computers and the server, and electricity fluctuations and outages affected both the efficiency of the batteries and the practical utility of the eHealth system. Highly efficient nano-computing units were later introduced to reduce electricity demands and improve the consistency of available power for the purposes of using the system. Socio-technical issues arising during the implementation process Technical/software problems: Twenty-four problems were identified with the eHealth system, encompassing its design flaws, security protocols, and hardware and database limitations.
For instance, patient data had to be entered across multiple windows that each needed to be minimised, passwords expired although no one at the facilities had the rights to issue new ones, there were frequent disconnections between the client computers and the server, and the lists of drugs and indicators for reporting in its database were limited. Although health centre staff used the system for backup storage and retrieval of data, only Early Adopters reported use of the eHealth system's search function. Socio-technical issues: The technical problems outlined above resulted in a heavy reliance on paper records by the health centres, although centres varied in their attitude towards and persistence with eHealth system implementation, with Early Adopter sites overcoming most challenges. At the hospital, the eHealth system was subjected to such inappropriate use by staff that even establishing rules and an IT centre to regulate usage proved ineffective, leading to a system crash in 2012 due to viruses and other malware. Such inappropriate use included staff depleting hospital server space by storing personal files (videos, music, pictures, games), being on Facebook instead of attending to patients, sharing login credentials and not always logging off their accounts after use, and removing cables from the computers. Leadership: At the hospital, there was strong management support for the eHealth system. In contrast, there were strong opinions from staff at Late Majority and Laggard facilities about the ineffective engagement of health facility "in-charges". Further, many system champions were senior staff, and thus busier and more mobile, often leaving junior staff at the health centres, who were not formally trained, to operate the eHealth system. Training: Limitations in the scope and number of staff formally trained were perceived to be a barrier to eHealth system adoption at the health centres, particularly the lack of training in basic troubleshooting and maintenance.
Peer training, moreover, was not reinforced by follow-up formal training. At the hospital, developing an appropriately skilled cadre of system users was hindered by high staff turnover and departmental rotations, which required frequent rounds of basic training. Staff at the hospital and health centres were nevertheless happy about the computer knowledge they had gained as a result of the implementation programme, although most expressed a lack of confidence in using the eHealth system. Technical support: For reasons including those already outlined, staff requested support for a range of hardware and software problems, not all of which could be addressed in a timely way, owing to a lack of sufficient IT personnel. Lack of in-country technical support for the software was also a considerable barrier to progress, particularly for the IT team based at the hospital, requiring requests for changes to be passed to the parent company. In one attempt to address this, the rights to a partial version of the software were passed to a local foundation for onward management; however, the software developers were unwilling to release the source code so that further enhancements and customisation could be made. Efforts to recruit more hospital IT workers and to reorganise responsibilities were frustrated by high staff turnover among the IT team. As a result, the IT team's response to calls from health centres for technical support was said to be slow and ineffective (except at Late Majority facilities), and there was no transfer of basic troubleshooting and minor repair skills from the IT team to the health facility staff. Perceived outcomes: Despite the challenges described above, some tracer outcomes of the eHealth system were detectable from the qualitative and numerical results, relating to data quality, service delivery, reporting and decision-making, and financial management.
Perceived and measured outcomes of eHealth system implementation Documentation and associated workload: In both case studies, implementation of the eHealth system exposed the dysfunction of the paper-based system, particularly the loss of documents. At the health centres (Case Study 2), only Early Adopters reported reduced administrative and patient care workload following eHealth implementation, while the other adopter groups reported increased workload due to dual use of paper and electronic systems, as well as staff shortages and high patient loads. Data quality: Both case studies reported poor data quality in the eHealth system, mainly due to the dual use of the paper-based and electronic systems, and staff defaulting to using the paper-based system only. This was aggravated by infrastructure and leadership problems at the health centres. Across the health centres, completeness of outpatient registration data in the eHealth system was 82.4%, as compared to DHIS2 (100.0% for Early Adopters, 73.9% for Late Majority), equivalent to an average monthly omission of 1,271 clients. When compared to DHIS2 data at Madalo Hospital, outpatient registration data in the eHealth system was 76.0% complete, under-reporting by an average of 577 clients per month. Compared with the hospital's paper-based records, inpatient registration and diagnosis data in the eHealth system, as entered by ward clerks, was 93.6% complete and 68.9% accurate. Service delivery (efficiency and patient experience): At Madalo Hospital, the eHealth system was reported to have made retrieval of patients' paper files faster, as the implementation project had also led to changes in the hospital's filing system. This new filing system also facilitated retrieval of data for patients with lost paper records, and allowed linking of patients' outpatient and inpatient records.
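As a simple illustration of the completeness metric reported above (eHealth records as a percentage of a reference count), the following sketch uses invented facility names and counts, not the study's data:

```python
# Hypothetical illustration of the completeness metric used in the case
# studies: the share of clients registered in the eHealth system relative
# to the reference counts in DHIS2. All counts below are invented.

def completeness(ehealth_count: int, reference_count: int) -> float:
    """Completeness (%) of eHealth records against a reference source."""
    return 100.0 * ehealth_count / reference_count

# Invented monthly outpatient registration counts for three facilities.
dhis2 = {"facility_a": 1200, "facility_b": 950, "facility_c": 1100}
ehealth = {"facility_a": 1200, "facility_b": 700, "facility_c": 780}

total_reference = sum(dhis2.values())
total_ehealth = sum(ehealth.values())

overall = completeness(total_ehealth, total_reference)
monthly_omission = total_reference - total_ehealth

print(f"overall completeness: {overall:.1f}%")
print(f"clients omitted per month: {monthly_omission}")
```

The same ratio, computed per facility group, gives the Early Adopter versus Late Majority comparison reported above.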
Reported service delivery improvements at the health centres included an enhanced ability to trace patients, maintain treatment continuity, identify the correct patient, ensure patient confidentiality, keep health workers alert and available, follow clinical protocols, identify the need to change the prescription for (or refer) a recurrent patient, and, reportedly, show the patient that the provider was paying attention. Perceived improvements in patient experience included avoiding the need for patient details to be re-entered at subsequent visits, better management of queues, and patients feeling more understood by the service provider and having more confidence in the services. Perceived negative patient experiences were associated with staff members' slow typing skills and unfamiliarity with the eHealth system, dual entry of patient information into both the electronic and paper systems, extra steps added to the patient journey through the care process, and disrupted patient-provider interaction. Efficiency of reporting: After its implementation at the hospital site, the eHealth system had become routinely used to generate data for measuring quality of care, and partly for national reporting purposes (HMIS). Customised reports for the hospital were created and used for decisions such as allocation of wards, advocacy and funding applications. In contrast, all the primary healthcare facilities were still using paper registers to compile HMIS reports, a few in combination with the eHealth system, because of a lack of knowledge of the reporting module, poor design of the system's reports, and disruptions in electricity and network connections to the server.
Management of finances: Financial management was reported to have improved at Madalo Hospital due to better-quality data capture and tracking of service charges, separation of billing and receiving roles through the recruitment of ward clerks, enhanced oversight by management, and fraud prevention through greater transparency and accountability. Although median monthly revenue was significantly higher after eHealth system implementation (P=0.024), micro- and macro-contextual factors confounded this effect, and the descriptive and qualitative data revealed that genuine improvement only came about after the recruitment of ward clerks towards the end of the study period. At the health centres, the eHealth system reportedly helped staff in the accounts department with billing, gave the facility in-charges better financial oversight, and gave clients greater trust in the form of printed receipts. Conclusion Taken together, the results of these two case studies illustrate the potential of eHealth to strengthen LMIC health systems through developing human resource capacity (skills, staff roles), facilitating service delivery, and improving financial management and governance. However, realising such improvements is dependent upon understanding the socio-technical interactions mediating the integration of new systems into organisational processes and work practices, and implementing appropriate change management interventions. The results of this study suggest that, for effective implementation and adoption of eHealth systems, healthcare leaders should (1) recruit data entry clerks to relieve clinical staff, improve workflow and avoid data fraud, (2) facilitate appropriate data use among system users and an information culture at the facilities, and (3) strengthen knowledge and skills transfer from eHealth system developers to local implementers and system champions, to optimise responsiveness and ensure sustainability.
Further interdisciplinary research is needed to obtain additional insights into factors affecting the quality of eHealth data and its use in the management of LMIC health systems, including the role of social, professional and technological influences on financial good-governance.
(This post continues part 1, which just looked at the data. Part 3 on theory is here) When the Fed raises interest rates, how does inflation respond? Are there "long and variable lags" to inflation and output? There is a standard story: The Fed raises interest rates; inflation is sticky so real interest rates (interest rate - inflation) rise; higher real interest rates lower output and employment; the softer economy pushes inflation down. Each of these is a lagged effect. But despite 40 years of effort, theory struggles to substantiate that story (next post), it's hard to see in the data (last post), and the empirical work is ephemeral -- this post. The vector autoregression and related local projection are today the standard empirical tools to address how monetary policy affects the economy, and have been since Chris Sims' great work in the 1970s. (See Larry Christiano's review.) I am losing faith in the method and results. We need to find new ways to learn about the effects of monetary policy. This post expands on some thoughts on this topic in "Expectations and the Neutrality of Interest Rates," several of my papers from the 1990s* and excellent recent reviews from Valerie Ramey and Emi Nakamura and Jón Steinsson, who eloquently summarize the hard identification and computation troubles of contemporary empirical work.

Maybe popular wisdom is right, and economics just has to catch up. Perhaps we will. But a popular belief that does not have solid scientific theory and empirical backing, despite a 40-year effort for models and data that will provide the desired answer, must be a bit less trustworthy than one that does have such foundations. Practical people should consider that the Fed may be less powerful than traditionally thought, and that its interest rate policy has different effects than commonly thought.
Whether and under what conditions high interest rates lower inflation, and whether they do so with long and variable but nonetheless predictable and exploitable lags, is much less certain than you think. Here is a replication of one of the most famous monetary VARs, Christiano, Eichenbaum and Evans (1999), from Valerie Ramey's 2016 review:

[Fig. 1. Christiano et al. (1999) identification. 1965m1–1995m6 full specification: solid black lines; 1983m1–2007m12 full specification: short-dashed blue lines; 1983m1–2007m12, omitting money and reserves: long-dashed red lines. Light gray bands are 90% confidence bands. Months on x axis. Source: Ramey 2016.]

The black lines plot the original specification. The top left panel plots the path of the Federal Funds rate after the Fed unexpectedly raises the interest rate. The funds rate goes up, but only for 6 months or so. Industrial production goes down and unemployment goes up, peaking at month 20. The figure plots the level of the CPI, so inflation is the slope of the lower right-hand panel. You see inflation goes the "wrong" way, up, for about 6 months, and then gently declines. Interest rates indeed seem to affect the economy with long lags. This was the broad outline of consensus empirical estimates for many years. It is common to many other studies, and it is consistent with the beliefs of policy makers and analysts. It's pretty much what Friedman (1968) told us to expect. Getting contemporary models to produce something like this is much harder, but that's the next blog post.

What's a VAR?

I try to keep this blog accessible to nonspecialists, so I'll step back momentarily to explain how we produce graphs like these. Economists who know what a VAR is should skip to the next section heading. How do we measure the effect of monetary policy on other variables?
Milton Friedman and Anna Schwartz kicked it off in the Monetary History by pointing to the historical correlation of money growth with inflation and output. They knew, as we do, that correlation is not causation, so they pointed to the fact that money growth preceded inflation and output growth. But as James Tobin pointed out, the cock's crow comes before, but does not cause, the sun to rise. So too people may go get out some money ahead of time when they see more future business activity on the horizon. Even correlation with a lead is not causation. What to do? Clive Granger's causality and Chris Sims' VAR, especially "Macroeconomics and Reality," gave today's answer. (And there is a reason that everybody mentioned so far has a Nobel prize.) First, we find a monetary policy "shock," a movement in the interest rate (these days; money, then) that is plausibly not a response to economic events and especially to expected future economic events. We think of the Fed setting interest rates by a response to economic data plus deviations from that response, such as interest rate = (#) output + (#) inflation + (#) other variables + disturbance. We want to isolate the "disturbance," movements in the interest rate not taken in response to economic events. (I use "shock" to mean an unpredictable variable, and "disturbance" to mean deviation from an equation like the above, but one that can persist for a while. A monetary policy "shock" is an unexpected movement in the disturbance.) The "rule" part here can be but need not be the Taylor rule, and can include other variables than output and inflation. It is what the Fed usually does given other variables, and therefore (hopefully) controls for reverse causality from expected future economic events to interest rates. Now, in any individual episode, output and inflation following a shock will be influenced by subsequent shocks to the economy, monetary and other. But those average out.
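The two steps just described -- estimate the rule, keep the residual as the shock, then average what follows the shocks -- can be sketched on simulated data. This is only a toy illustration with invented parameters, not the code behind any published estimate:

```python
# Toy illustration of the VAR/local-projection logic: (1) estimate the
# Fed's "rule" and keep the residual as the monetary policy shock,
# (2) regress future outcomes on today's shock (a local projection) so
# that other disturbances average out. Everything below is simulated.
import random

random.seed(0)
T = 10_000

u = [random.gauss(0, 1) for _ in range(T)]       # true policy shocks
pi = [0.0] * T                                   # inflation
for t in range(1, T):
    lagged_shock = u[t - 3] if t >= 3 else 0.0   # policy bites with a 3-period lag
    pi[t] = 0.8 * pi[t - 1] - 0.3 * lagged_shock + random.gauss(0, 0.2)
i_rate = [1.5 * pi[t] + u[t] for t in range(T)]  # Taylor-type rule plus shock

def ols_slope(y, x):
    """Slope of a one-regressor OLS of y on x (data demeaned internally)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# Step 1: rule regression i_t = b * pi_t + shock_t; the residual is the shock.
b = ols_slope(i_rate, pi)
shock = [i_rate[t] - b * pi[t] for t in range(T)]

# Step 2: local projections of pi_{t+h} on shock_t trace out the response.
irf = [ols_slope(pi[h:], shock[: T - h]) for h in range(8)]
print("rule coefficient:", round(b, 2))
print("inflation response:", [round(r, 2) for r in irf])
```

By construction the toy economy responds to the shock only after three periods, and the local projections recover a response that is flat at first and then turns negative, which is how the impulse-response plots above are read.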
So, the average value of inflation, output, employment, etc. following a monetary policy shock is a measure of how the shock affects the economy all on its own. That is what has been plotted above. VARs were one of the first big advances in the modern empirical quest to find "exogenous" variation and (somewhat) credibly find causal relationships. Mostly the huge literature varies on how one finds the "shocks." Traditional VARs use regressions of the above equations and the residual is the shock, with a big question just how many and which contemporaneous variables one adds in the regression. Romer and Romer pioneered the "narrative approach," reading the Fed minutes to isolate shocks. Some technical details at the bottom and much more discussion below. The key is finding shocks. One can just regress output and inflation on the shocks to produce the response function, which is a "local projection" not a "VAR," but I'll use "VAR" for both techniques for lack of a better encompassing word.

Losing faith

Shocks, what shocks?

What's a "shock" anyway? The concept is that the Fed considers its forecast of inflation, output and other variables it is trying to control, gauges the usual and appropriate response, and then adds 25 or 50 basis points, at random, just for the heck of it. The question VARs try to answer is the same: What happens to the economy if the Fed raises interest rates unexpectedly, for no particular reason at all? But the Fed never does this. Ask them. Read the minutes. The Fed does not roll dice. They always raise or lower interest rates for a reason, and that reason is always a response to something going on in the economy, most of the time how it affects forecasts of inflation and employment.
There are no shocks as defined.

I speculated here that we might get around this problem: if we knew the Fed was responding to something that had no correlation with future output, then even though that is an endogenous response, it is a valid movement for estimating the effect of interest rates on output. My example was, what if the Fed "responds" to the weather? Well, though endogenous, it's still valid for estimating the effect on output. The Fed does respond to lots of things, including foreign exchange, financial stability issues, equity, terrorist attacks, and so forth. But I can't think of any of these in which the Fed is not thinking of the event for its effect on output and inflation, which is why I never took the idea far. Maybe you can. Shock isolation also depends on complete controls for the Fed's information. If the Fed uses any information about future output and inflation that is not captured in our regression, then information about future output and inflation remains in the "shock" series. The famous "price puzzle" is a good example. For the first few decades of VARs, interest rate shocks seemed to lead to higher inflation. It took a long specification search to get rid of this undesired result. The story was that the Fed saw inflation coming in ways not completely controlled for by the regression. The Fed raised interest rates to try to forestall the inflation, but was a bit hesitant about it, so did not cure the inflation that was coming. We see higher interest rates followed by higher inflation, though the true causal effect of interest rates goes the other way. This problem was "cured" by adding commodity prices to the interest rate rule, on the idea that fast-moving commodity prices would capture the information the Fed was using to forecast inflation. (Interestingly, these days we seem to see core inflation as the best forecaster, and throw out commodity prices!)
With those and some careful orthogonalization choices, the "price puzzle" was tamped down to the one-year or so delay you see above. (Neo-Fisherians might object that maybe the price puzzle was trying to tell us something all these years!) Nakamura and Steinsson write of this problem: "What is being assumed is that controlling for a few lags of a few variables captures all endogenous variation in policy... This seems highly unlikely to be true in practice. The Fed bases its policy decisions on a huge amount of data. Different considerations (in some cases highly idiosyncratic) affect policy at different times. These include stress in the banking system, sharp changes in commodity prices, a recent stock market crash, a financial crisis in emerging markets, terrorist attacks, temporary investment tax credits, and the Y2K computer glitch. The list goes on and on. Each of these considerations may only affect policy in a meaningful way on a small number of dates, and the number of such influences is so large that it is not feasible to include them all in a regression. But leaving any one of them out will result in a monetary policy "shock" that the researcher views as exogenous but is in fact endogenous." Nakamura and Steinsson offer 9/11 as another example, summarizing my "high frequency identification" paper with Monika Piazzesi: the Fed lowered interest rates after the terrorist attack, likely reacting to its consequences for output and inflation. But VARs register the event as an exogenous shock.

Romer and Romer suggested that we use Fed Greenbook forecasts of inflation and output as controls, as those should represent the Fed's complete information set. They provide narrative evidence that Fed members trust Greenbook forecasts more than you might suspect. This issue is a general Achilles heel of empirical macro and finance: does your procedure assume agents see no more information than you have included in the model or estimate? If yes, you have a problem.
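The omitted-information problem behind the price puzzle is easy to reproduce in a toy simulation (all numbers below are invented for illustration): the Fed responds to a signal that forecasts inflation, the econometrician's rule regression omits that signal, and a spurious positive inflation "response" appears even though interest rates have no causal effect on inflation in this toy economy:

```python
# Toy price-puzzle illustration: the Fed's rule responds to a signal s_t
# (think commodity prices) that forecasts inflation one period ahead.
# A rule regression that omits s_t leaves it inside the estimated "shock,"
# so inflation appears to RISE after a contractionary shock, even though
# rates here have no causal effect on inflation at all.
import random

random.seed(1)
T = 10_000
s = [random.gauss(0, 1) for _ in range(T)]   # inflation signal the Fed watches
u = [random.gauss(0, 1) for _ in range(T)]   # true policy shock
pi = [0.0] + [s[t - 1] + random.gauss(0, 0.5) for t in range(1, T)]
i_rate = [1.5 * pi[t] + 0.8 * s[t] + u[t] for t in range(T)]  # rule uses s

def ols_slope(y, x):
    """Slope of a one-regressor OLS of y on x (data demeaned internally)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# Naive rule regression: i on pi only; the omitted signal stays in the shock.
b_pi = ols_slope(i_rate, pi)
shock_naive = [i_rate[t] - b_pi * pi[t] for t in range(T)]
puzzle = ols_slope(pi[1:], shock_naive[:-1])   # pi_{t+1} on shock_t: positive!

# Controlling for the signal (s and pi are independent by construction here,
# so its effect can be stripped out separately) removes the puzzle.
b_s = ols_slope(i_rate, s)
shock_clean = [i_rate[t] - b_pi * pi[t] - b_s * s[t] for t in range(T)]
no_puzzle = ols_slope(pi[1:], shock_clean[:-1])  # close to zero

print("puzzle slope:", round(puzzle, 2), " after control:", round(no_puzzle, 2))
```

This mirrors the commodity-price fix: once the signal is in the rule regression, the "shock" no longer predicts inflation, and the spurious wrong-signed response disappears.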
Similarly, "Granger causality" answers the cock's crow-sunrise problem by saying that if unexpected x leads unexpected y, then x causes y. But it's only real causality if the "expected" includes all information, as the price puzzle counterexample shows. Just what properties do we need of a shock in order to measure the response to the question, "what if the Fed raised rates for no reason?" This strikes me as a bit of an unsolved question -- or rather, one that everyone thinks is so obvious that we don't really look at it. My suggestion that the shock only need be orthogonal to the variable whose response we're estimating is informal, and I don't know of formal literature that's picked it up. Must "shocks" be unexpected, i.e. not forecastable from anything in the previous time information set? Must they surprise people? I don't think so -- it is neither necessary nor sufficient for a shock to be unforecastable for it to identify the inflation and output responses. Not responding to expected values of the variable whose response you want to measure should be enough. If bond markets found out about a random funds rate rise one day ahead, it would then be an "expected" shock, but clearly just as good for macro. Romer and Romer have been criticized on the grounds that their shocks are predictable, but this may not matter. The above Nakamura and Steinsson quote says leaving out any information leads to a shock that is not strictly exogenous. But strict exogeneity may not be necessary for estimating, say, the effect of interest rates on inflation. It is enough to rule out reverse causality and third effects.
Either I'm missing a well-known econometric literature, as is everyone else writing the VARs I've read who don't cite it, or there is a good theory paper to be written.

Romer and Romer, thinking deeply about how to read "shocks" from the Fed minutes, define shocks thus to circumvent the "there are no shocks" problem: "we look for times when monetary policymakers felt the economy was roughly at potential (or normal) output, but decided that the prevailing rate of inflation was too high. Policymakers then chose to cut money growth and raise interest rates, realizing that there would be (or at least could be) substantial negative consequences for aggregate output and unemployment. These criteria are designed to pick out times when policymakers essentially changed their tastes about the acceptable level of inflation. They weren't just responding to anticipated movements in the real economy and inflation." [My emphasis.] You can see the issue. This is not an "exogenous" movement in the funds rate. It is a response to inflation, and to expected inflation, with a clear eye on expected output as well. It really is a nonlinear rule: ignore inflation for a while until it gets really bad, then finally get serious about it. Or, as they say, it is a change in rule, an increase in the sensitivity of the short-run interest rate response to inflation, taken in response to inflation seeming to get out of control in a longer-run sense. Does this identify the response to an "exogenous" interest rate increase? Not really. But maybe it doesn't matter. Are we even asking an interesting question? The whole question, what would happen if the Fed raised interest rates for no reason, is arguably beside the point. At a minimum, we should be clearer about what question we are asking, and whether the policies we analyze are implementations of that question. The question presumes a stable "rule" (e.g.
\(i_t = \rho i_{t-1} + \phi_\pi \pi_t + \phi_x x_t + u_t\)) and asks what happens in response to a deviation \( +u_t \) from the rule. Is that an interesting question? The standard story for 1980-1982 is exactly not such an event. Inflation was not conquered by a big "shock," a big deviation from 1970s practice, while keeping that practice intact. Inflation was conquered (so the story goes) by a change in the rule, by a big increase in \(\phi_\pi\). That change raised interest rates, but arguably without any deviation from the new rule \(u_t\) at all. Thinking in terms of the Phillips curve \( \pi_t = E_t \pi_{t+1} + \kappa x_t\), it was not a big negative \(x_t\) that brought down inflation, but the credibility of the new rule that brought down \(E_t \pi_{t+1}\). If the art of reducing inflation is to convince people that a new regime has arrived, then the response to any monetary policy "shock" orthogonal to a stable "rule" completely misses that policy. Romer and Romer are almost talking about a rule-change event. For 2022, they might be looking at the Fed's abandonment of flexible average inflation targeting and its return to a Taylor rule. However, they don't recognize the importance of the distinction, treating changes in rule as equivalent to a residual. Changing the rule changes expectations in quite different ways from a residual of a stable rule. Changes with a bigger commitment should have bigger effects, and one should standardize somehow by the size and permanence of the rule change, not necessarily the size of the interest rate rise. And, having asked "what if the Fed changes rule to be more serious about inflation," we really cannot use the analysis to estimate what happens if the Fed shocks interest rates and does not change the rule. It takes some mighty invariance result from an economic theory that a change in rule has the same effect as a shock to a given rule. There is no right and wrong, really.
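The distinction between a negative \(x_t\) and a fall in expected inflation can be made explicit by iterating the Phillips curve forward, a standard step sketched here:

\[
\pi_t = E_t \pi_{t+1} + \kappa x_t
\quad\Longrightarrow\quad
\pi_t = \kappa \sum_{j=0}^{T-1} E_t x_{t+j} + E_t \pi_{t+T}.
\]

Inflation today is the sum of expected future output gaps plus a terminal expectation. A shock to a stable rule must work through the first term, a string of negative gaps; a credible regime change can instead lower the terminal expectation \(E_t \pi_{t+T}\), bringing inflation down with no recession at all.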
We just need to be more careful about what question the empirical procedure asks, if we want to ask that question, and if our policy analysis actually asks the same question.

Estimating rules: Clarida, Galí and Gertler

Clarida, Galí, and Gertler (2000) is a justly famous paper, and in this context for doing something totally different to evaluate monetary policy. They estimate rules, fancy versions of \(i_t = \rho i_{t-1} +\phi_\pi \pi_t + \phi_x x_t + u_t\), and they estimate how the \(\phi\) parameters change over time. They attribute the end of 1970s inflation to a change in the rule, a rise in \(\phi_\pi\) from the 1970s to the 1980s. In their model, a higher \( \phi_\pi\) results in less volatile inflation. They do not estimate any response functions. The rest of us were watching the wrong thing all along. Responses to shocks weren't the interesting quantity. Changes in the rule were the interesting quantity. Yes, I criticized the paper, but for issues that are irrelevant here. (In the new Keynesian model, the parameter that reduces inflation isn't the one they estimate.) The important point here is that they are doing something completely different, and offer us a roadmap for how else we might evaluate monetary policy if not by impulse-response functions to monetary policy shocks.

Fiscal theory

The interesting question for fiscal theory is, "What is the effect of an interest rate rise not accompanied by a change in fiscal policy?" What can the Fed do by itself? By contrast, standard models (both new and old Keynesian) include concurrent fiscal policy changes when interest rates rise. Governments tighten in present value terms, at least to pay higher interest costs on the debt and the windfall to bondholders that flows from unexpected disinflation. Experience and estimates surely include fiscal changes along with monetary tightening. Both fiscal and monetary authorities react to inflation with policy actions and reforms.
Growth-oriented microeconomic reforms with fiscal consequences often follow as well -- rampant inflation may have had something to do with Carter-era trucking, airline, and telecommunications reform. Yet no current estimate tries to look for a monetary shock orthogonal to fiscal policy change. The estimates we have are at best the effects of monetary policy together with whatever induced or coincident fiscal and microeconomic policy tends to happen at the same time as central banks get serious about fighting inflation. Identifying the component of a monetary policy shock orthogonal to fiscal policy, and measuring its effects, is a first-order question for the fiscal theory of monetary policy. That's why I wrote this blog post. I set out to do it, and then started to confront how VARs are already falling apart in our hands. Just what "no change in fiscal policy" means is an important question that varies by application. (Lots more in "fiscal roots" here, fiscal theory of monetary policy here and in FTPL.) For simple calculations, I just ask what happens if interest rates change with no change in primary surplus. One might also define "no change" as no change in tax rates, automatic stabilizers, or even habitual discretionary stimulus and bailout, no disturbance \(u_t\) in a fiscal rule \(s_t = a + \theta_\pi \pi_t + \theta_x x_t + ... + u_t\). There is no right and wrong here either, there is just making sure you ask an interesting question.

Long and variable lags, and persistent interest rate movements

The first plot shows a mighty long lag between the monetary policy shock and its effect on inflation and output. That does not mean that the economy has long and variable lags. This plot is actually not representative, because in the black lines the interest rate itself quickly reverts to zero. It is common to find a more protracted interest rate response to the shock, as shown in the red and blue lines. 
That mirrors common sense: When the Fed starts tightening, it sets off a year or so of stair-step further increases, and then a plateau, before a similar stair-step reversion. That raises the question: does the long-delayed response of output and inflation represent a delayed response to the initial monetary policy shock, or does it represent a nearly instantaneous response to the higher subsequent interest rates that the shock sets off? Another way of putting the question: is the response of inflation and output invariant to changes in the response of the funds rate itself? Do persistent and transitory funds rate changes have the same responses? If you think of the inflation and output responses as economic responses to the initial shock only, then it does not matter if interest rates revert immediately to zero, or go on a 10-year binge following the initial shock. That seems like a pretty strong assumption. If you think that a more persistent interest rate response would lead to a larger or more persistent output and inflation response, then you think some of what we see in the VARs is a quick structural response to the later higher interest rates, when they come. Back in 1988, I posed this question in "What do the VARs mean?" and showed you can read it either way. The persistent output and inflation response can represent either long economic lags to the initial shock, or much less laggy responses to interest rates when they come. I showed how to deconvolute the response function into the structural effect of interest rates on inflation and output and the persistence of the interest rate rise itself. The inflation and output responses might be the same with shorter funds rate responses, or they might be much different. Obviously (though often forgotten), whether the inflation and output responses are invariant to changes in the funds rate response needs a model. 
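The deconvolution point can be illustrated numerically. A minimal sketch (Python with numpy; the response paths below are made up for illustration, not estimates): if the inflation response is the convolution of a short structural kernel with the funds rate path the shock sets off, the kernel can be recovered by a lower-triangular solve.

```python
import numpy as np

# Hypothetical impulse responses to a monetary policy shock, months 0..47.
# These paths are made up for illustration; they are not estimates.
H = 48
t = np.arange(H)
r = np.exp(-t / 12.0)                       # funds rate response: decays over a year or so
p = -0.3 * (t / 24.0) * np.exp(-t / 24.0)   # inflation response: delayed hump shape

# Read p as the convolution of a structural kernel theta with the rate path r:
# p[i] = sum_j theta[j] * r[i-j].  Recover theta with a lower-triangular solve.
R = np.array([[r[i - j] if i >= j else 0.0 for j in range(H)] for i in range(H)])
theta = np.linalg.solve(R, p)

# Check: convolving theta back with the rate path reproduces the inflation response.
p_check = np.array([sum(theta[j] * r[i - j] for j in range(i + 1)) for i in range(H)])
```

Read this way, a short structural kernel theta can generate an inflation response that peaks years after the shock, as long as interest rates themselves stay high for a while -- the two interpretations are observationally equivalent from one response function alone.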
If in the economic model only unexpected interest rate movements affect output and inflation, though with lags, then the responses are, as conventionally read, structural responses, invariant to the interest rate path. There is no such economic model. Lucas (1972) says only unexpected money affects output, but with no lags, and expected money affects inflation. New Keynesian models have very different responses to permanent vs. transitory interest rate shocks. Interestingly, Romer and Romer do not see it this way, and regard their responses as structural long and variable lags, invariant to the interest rate response. They opine that, given their reading of a positive shock in 2022, a long and variable lag to inflation reduction is baked in, no matter what the Fed does next. They argue that the Fed should stop raising interest rates. (In fairness, it doesn't look like they thought about the issue much, so this is an implicit rather than explicit assumption.) The alternative view is that the effects of a shock on inflation are really effects of the subsequent rate rises on inflation, that the impulse-response function of inflation is not invariant to the funds rate response, so stopping the standard tightening cycle would undo the inflation response. Argue either way, but at least recognize the important assumption behind the conclusions. Was the success of inflation reduction in the early 1980s just a long-delayed response to the first few shocks? Or was the early 1980s the result of persistent large real interest rates following the initial shock? (Or, something else entirely, a coordinated fiscal-monetary reform... But I'm staying away from that and just discussing conventional narratives, not necessarily the right answer.) 
If the latter, which is the conventional narrative, then you think it does matter if the funds rate shock is followed by more funds rate rises (or positive deviations from a rule), and that the output and inflation response functions do not directly measure long lags from the initial shock. Deconvoluting the structural funds-rate-to-inflation response from the persistent funds rate response, you would estimate much shorter structural lags. Nakamura and Steinsson are of this view: "While the Volcker episode is consistent with a large amount of monetary nonneutrality, it seems less consistent with the commonly held view that monetary policy affects output with 'long and variable lags.' To the contrary, what makes the Volcker episode potentially compelling is that output fell and rose largely in sync with the actions [interest rates, not shocks] of the Fed." And that's a good thing too. We've done a lot of dynamic economics since Friedman's 1968 address. There is really nothing in dynamic economic theory that produces a structural long-delayed response to shocks, without the continued pressure of high interest rates. (A correspondent objects to "largely in sync," pointing out several clear months-long lags between policy actions and results in 1980. The quotation is here for the methodological point, not the historical one.) However, if the output and inflation responses are not invariant to the interest rate response, then the VAR directly measures an incredibly narrow experiment: What happens in response to a surprise interest rate rise, followed by the plotted path of interest rates? And that plotted path is usually pretty temporary, as in the above graph. What would happen if the Fed raised rates and kept them up, a la 1980? The VAR is silent on that question. You need to calibrate some model to the responses we have to infer that answer. VARs and shock responses are often misread as generic theory-free estimates of "the effects of monetary policy." They are not. 
At best, they tell you the effect of one specific experiment: a random increase in the funds rate, on top of a stable rule, followed by the usual subsequent path of the funds rate. Any other implication requires a model, explicit or implicit. More specifically, without that clearly false invariance assumption, VARs cannot directly answer a host of important questions. Two on my mind: 1) What happens if the Fed raises interest rates permanently? Does inflation eventually rise? Does it rise in the short run? These are the "Fisherian" and "neo-Fisherian" questions, and the answer "yes" pops unexpectedly out of the standard new-Keynesian model. 2) Is the short-run negative response of inflation to interest rates stronger for more persistent rate rises? The long-term debt fiscal theory mechanism for a short-term inflation decline is tied to the persistence of the shock and the maturity structure of the debt. The responses to short-lived interest rate movements (top left panel) are silent on these questions. "Directly" is an important qualifier. It is not impossible to answer these questions, but you have to work harder to identify persistent interest rate shocks. For example, Martín Uribe identifies permanent vs. transitory interest rate shocks, and finds a positive response of inflation to permanent interest rate rises. How? You can't just pick out the interest rate rises that turned out to be permanent. You have to find shocks or components of the shock that are ex ante predictably going to be permanent, based on other forecasting variables and the correlation of the shock with other shocks. For example, a short-term rate shock that also moves long-term rates might be more permanent than one that does not. (That requires the expectations hypothesis, which doesn't work, and long-term interest rates move too much anyway in response to transitory funds rate shocks. So, this is not directly a suggestion, just an example of the kind of thing one must do. 
Uribe's model is more complex than I can summarize in a blog post.) Given how small and ephemeral the shocks are already, subdividing them into those that are expected to have permanent vs. transitory effects on the federal funds rate is obviously a challenge. But it's not impossible.

Monetary policy shocks account for small fractions of inflation, output, and funds rate variation

Friedman thought that most recessions and inflations were due to monetary mistakes. The VARs pretty uniformly deny that result. The effects of monetary policy shocks on output and inflation add up to less than 10 percent of the variation of output and inflation. In part the shocks are small, and in part the responses to the shocks are small. Most recessions come from other shocks, not monetary mistakes. Worse, both in data and in models, most inflation variation comes from inflation shocks, most output variation comes from output shocks, etc. The cross-effects of one variable on another are small. And "inflation shock" (or "marginal cost shock"), "output shock," and so forth are just labels for our ignorance -- error terms in regressions, unforecasted movements -- not independently measured quantities. (This is an old point, made for example in my 1994 paper with the great title "Shocks." Technically, the variance of output is the sum of the squares of the impulse-response functions -- the plots -- times the variance of the shocks. Thus small shocks and small responses mean not much variance explained.) This is a deep point. The exquisite attention paid to the effects of monetary policy in new-Keynesian models, while interesting to the Fed, is then largely beside the point if your question is what causes recessions. Comprehensive models work hard to match all of the responses, not just responses to monetary policy shocks. But it's not clear that the nominal rigidities that are important for the effects of monetary policy are deeply important to other (supply) shocks, and vice versa. This is not a criticism. 
Economics always works better if we can use small models that focus on one thing -- growth, recessions, distorting effects of taxes, effects of monetary policy -- without having to have a model of everything in which all effects interact. But, be clear, we no longer have a model of everything. "Explaining recessions" and "understanding the effects of monetary policy" are somewhat separate questions. Monetary policy shocks also account for small fractions of the movement in the federal funds rate itself. Most of the funds rate movement is in the rule, the reaction to the economy. Like much empirical economics, the quest for causal identification leads us to look at tiny causes with tiny effects, which do little to explain much variation in the variable of interest (inflation). Well, cause is cause, and the needle is the sharpest item in the haystack. But one worries about the robustness of such tiny effects, and about the extent to which they summarize historical experience. To be concrete, here is a typical shock regression, 1960:1-2023:6 monthly data, standard errors in parentheses:

ff(t) = a + b ff(t-1) + c [ff(t-1) - ff(t-2)] + d CPI(t) + e unemployment(t) + monetary policy shock,

where "CPI" is the percent change in the CPI (CPIAUCSL) from a year earlier. The estimates:

  ff(t-1): 0.97 (0.009)
  ff(t-1) - ff(t-2): 0.39 (0.07)
  CPI: 0.032 (0.013)
  Unemployment: -0.017 (0.009)
  R² = 0.985

The funds rate is persistent -- the lag term (0.97) is large. Recent changes matter too: once the Fed starts a tightening cycle, it's likely to keep raising rates. And the Fed responds to CPI and unemployment. The plot shows the actual federal funds rate (blue), the model or predicted federal funds rate (red), the shock, which is the difference between the two (orange), and the Romer and Romer dates (vertical lines). You can't see the difference between actual and predicted funds rate, which is the point. They are very similar, and the shocks are small. 
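That regression is plain OLS, and the shock series is just its residual. A sketch on synthetic data (Python with numpy; the simulated series and coefficients below are illustrative stand-ins, not the actual FRED data or re-estimates):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 762  # roughly the number of months in 1960:1-2023:6

# Illustrative stand-ins for the actual series (not FRED data).
cpi = 3.0 + rng.normal(0.0, 2.0, T)     # year-over-year CPI inflation, percent
unemp = 6.0 + rng.normal(0.0, 1.5, T)   # unemployment rate, percent

# Simulate the funds rate from the rule in the text, with small true shocks.
a, b, c, d, e = 0.1, 0.97, 0.39, 0.032, -0.017
ff = np.zeros(T)
true_shock = rng.normal(0.0, 0.2, T)
for t in range(2, T):
    ff[t] = (a + b * ff[t - 1] + c * (ff[t - 1] - ff[t - 2])
             + d * cpi[t] + e * unemp[t] + true_shock[t])

# Estimate the rule by OLS; the residual is the monetary policy "shock."
X = np.column_stack([np.ones(T - 2), ff[1:-1], ff[1:-1] - ff[:-2], cpi[2:], unemp[2:]])
y = ff[2:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
shock_hat = y - X @ coef
```

In this simulation the recovered coefficient on ff(t-1) comes out close to the simulated 0.97, and shock_hat is small relative to the funds rate itself -- which is the point of the plot described above.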
They are closer horizontally than vertically, so the vertical difference plotted as the shock is still visible. The shocks are much smaller than the funds rate, and smaller than the rise and fall in the funds rate in a typical tightening or loosening cycle. The shocks are bunched, with by far the biggest ones in the early 1980s. The shocks have been tiny since the 1980s. (Romer and Romer don't find any shocks!) Now, our estimates of the effect of monetary policy look at the average values of inflation, output, and employment in the 4-5 years after a shock. Really, you say, looking at the graph? That's going to be dominated by the experience of the early 1980s. And with so many positive and negative shocks close together, the average value 4 years later is going to be driven by subtle timing of when the positive or negative shocks line up with later events. Put another way, here is a plot of inflation 30 months after a shock regressed on the shock. Shock on the x axis, subsequent inflation on the y axis. The slope of the line is our estimate of the effect of the shock on inflation 30 months out (source, with details). Hmm. One more graph (I'm having fun here): This is a plot of inflation for the 4 years after each shock, times that shock. The right-hand side is the same graph with an expanded y scale. The average of these histories is our impulse-response function. (The big lines are the episodes that multiply the big shocks of the early 1980s. They mostly converge because, whether multiplied by positive or negative shocks, inflation went down in the 1980s.) Impulse-response functions are just quantitative summaries of the lessons of history. You may be underwhelmed by how clear a story history is sending. Again, welcome to causal economics -- tiny average responses to tiny but identified movements are what we estimate, not broad lessons of history. 
We do not estimate "what is the effect of the sustained high real interest rates of the early 1980s," for example, or "what accounts for the sharp decline of inflation in the early 1980s?" Perhaps we should, though confronting the endogeneity of the interest rate responses some other way. That's my main point today.

Estimates disappear after 1982

Ramey's first variation in the first plot is to use data from 1983 to 2007. Her second variation is to also omit the monetary variables. Christiano Eichenbaum and Evans were still thinking in terms of money supply control, but our Fed does not control the money supply. The evidence that higher interest rates lower inflation disappears after 1983, with or without money. This too is a common finding. It might be because there simply aren't any monetary policy shocks. Still, we're driving a car with a yellowed AAA road map dated 1982 on it. Monetary policy shocks still seem to affect output and employment, just not inflation. That poses a deeper problem. If there just weren't any monetary policy shocks, we would get big standard errors on everything. That only the inflation response disappears points to the vanishing Phillips curve, which will be the weak point in the theory to come. It is the Phillips curve by which lower output and employment push down inflation. But without the Phillips curve, the whole standard story for how interest rates affect inflation goes away.

Computing long-run responses

The lags in the above plot are already pretty long, with interesting economics still going on at 48 months. As we get interested in long-run neutrality, identification via long-run sign restrictions (monetary policy should not permanently affect output), and the effect of persistent interest rate shocks, we are interested in even longer-run responses. The "long run risks" literature in asset pricing is similarly crucially interested in long-run properties. Intuitively, we should know this will be troublesome. 
There aren't all that many nonoverlapping 4-year periods after interest rate shocks in which to measure effects, let alone 10-year periods. VARs estimate long-run responses with a parametric structure. Organize the data (output, inflation, interest rate, etc.) into a vector \(x_t = [y_t \; \pi_t \; i_t \; ...]'\); then the VAR can be written \(x_{t+1} = Ax_t + u_t\). We start from zero, move \(x_1 = u_1\) in an interesting way, and then the response function just simulates forward, with \(x_j = A^j x_1\). But here an oft-forgotten lesson of 1980s econometrics pops up: It is dangerous to estimate long-run dynamics by fitting a short-run model and then finding its long-run implications. Raising matrices to the 48th power \(A^{48}\) can do weird things, and the 120th power (10 years) weirder things. OLS and maximum likelihood prize one-step-ahead \(R^2\), and will happily accept small one-step-ahead misspecifications that add up to big misspecification 10 years out. (I learned this lesson in "The random walk in GNP.") Long-run implications are driven by the maximum eigenvalue of the \(A\) transition matrix and its associated eigenvector: \(A^j = Q \Lambda^j Q^{-1}\). This is a benefit and a danger. Specify and estimate the dynamics of the combination of variables associated with the largest eigenvalue right, and lots of details can be wrong. But standard estimates aren't trying hard to get these right. The "local projection" alternative directly estimates long-run responses: run regressions of inflation in 10 years on the shock today. You can see the tradeoff: there aren't many non-overlapping 10-year intervals, so this will be imprecisely estimated. The VAR makes a strong parametric assumption about long-run dynamics. When it's right, you get better estimates. When it's wrong, you get misspecification. My experience running lots of VARs is that monthly VARs raised to large powers often give unreliable responses. Run at least a one-year VAR before you start looking at long-run responses. 
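The danger of raising estimated matrices to large powers is easy to see numerically. A minimal sketch (numpy; the transition matrices are made up): two matrices that differ by 0.02 one step ahead imply very different responses at 48 months, because the largest eigenvalue governs the long run.

```python
import numpy as np

# Two made-up transition matrices that are nearly identical one step ahead.
A1 = np.array([[0.96, 0.10],
               [0.00, 0.90]])
A2 = np.array([[0.98, 0.10],
               [0.00, 0.90]])

# One step ahead the difference is tiny (0.02 in a single entry)...
gap_1step = np.max(np.abs(A1 - A2))

# ...but compounded to 48 months the implied responses diverge badly.
P1 = np.linalg.matrix_power(A1, 48)
P2 = np.linalg.matrix_power(A2, 48)
# The matrices are upper triangular, so P[0, 0] is the (0, 0) entry to the
# 48th power: 0.96**48 is about 0.14, while 0.98**48 is about 0.38.

# The long run is governed by the largest eigenvalue and its eigenvector,
# via A^j = Q Lambda^j Q^{-1}.
lam1 = np.max(np.abs(np.linalg.eigvals(A1)))
lam2 = np.max(np.abs(np.linalg.eigvals(A2)))
```

A one-step difference well inside any standard error more than doubles the implied 48-month response, which is the sense in which OLS's one-step-ahead fit is a poor guardian of long-run dynamics.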
Cointegrating vectors are the most reliable variables to include. They are typically the state variables that most reliably carry long-run responses. But pay attention to getting them right. Imposing integrating and cointegrating structure by just looking at units is a good idea. The regression of long-run returns on dividend yields is a good example. The dividend yield is a cointegrating vector, and is the slow-moving state variable. A one-period VAR \[\left[ \begin{array}{c} r_{t+1} \\ dp_{t+1} \end{array} \right] = \left[ \begin{array}{cc} 0 & b_r \\ 0 & \rho \end{array}\right] \left[ \begin{array}{c} r_{t} \\ dp_{t} \end{array}\right]+ \varepsilon_{t+1}\] implies a long-horizon regression \(r_{t+j} = b_r \rho^{j-1} dp_{t} +\) error. Direct regressions ("local projections") \(r_{t+j} = b_{r,j} dp_t + \) error give about the same answers, though with much larger standard errors (the downward bias in \(\rho\) estimates is a bit of an issue). The constraint \(b_{r,j} = b_r \rho^{j-1}\) isn't bad. But it can easily go wrong. If you don't impose that dividends and prices are cointegrated, or use a cointegrating vector other than (1, -1), if you allow a small sample to estimate \(\rho>1\), or if you don't put in dividend yields at all and just use a lot of short-run forecasters, it can all go badly. Forecasting bond returns was for me a good counterexample. A VAR forecasting one-year bond returns from today's yields gives very different results from taking a monthly VAR, even with several lags, and using \(A^{12}\) to infer the one-year return forecast. Small pricing errors or microstructure effects dominate the monthly data, and produce junk when raised to the twelfth power. (Climate regressions are having fun with the same issue. Small estimated effects of temperature on growth, raised to the 100th power, can produce nicely calamitous results. But use basic theory to think about units.) 
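The agreement between the one-period VAR and direct long-horizon regressions can be checked by simulation. A sketch with illustrative round parameter values (numpy; b_r and rho are not estimates; with this timing convention the VAR-implied horizon-j coefficient is b_r * rho**(j-1)):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200_000            # long sample, so point estimates are tight
b_r, rho = 0.10, 0.94  # round illustrative values, not estimates

# Simulate the one-period VAR:
# r_{t+1} = b_r * dp_t + e_r,  dp_{t+1} = rho * dp_t + e_dp
r = np.zeros(T)
dp = np.zeros(T)
e = rng.normal(0.0, 1.0, (T, 2))
for t in range(1, T):
    r[t] = b_r * dp[t - 1] + e[t, 0]
    dp[t] = rho * dp[t - 1] + e[t, 1]

# Compare direct ("local projection") regressions of r_{t+j} on dp_t
# with what the one-period VAR implies at each horizon.
results = {}
for j in (1, 5, 10):
    b_direct = np.cov(r[j:], dp[:-j])[0, 1] / np.var(dp[:-j])
    b_implied = b_r * rho ** (j - 1)  # VAR-implied horizon-j coefficient
    results[j] = (b_direct, b_implied)
```

In this simulation the two line up closely; the warnings in the text are about what happens when the imposed parametric structure (cointegration, rho below one) is wrong, not when it is right.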
Nakamura and Steinsson (appendix) show how sensitive some standard estimates of impulse-response functions are to these questions.

Weak evidence

For the current policy question, I hope you get a sense of how weak the evidence is for the "standard view" that higher interest rates reliably lower inflation, though with a long and variable lag, and that the Fed has a good deal of control over inflation. Yes, many estimates look the same, but there is a pretty strong prior going into that. Most people don't publish papers that don't conform to something like the standard view. Look how long it took from Sims (1980) to Christiano Eichenbaum and Evans (1999) to produce a response function that does conform to the standard view, what Friedman told us to expect in 1968. That took a lot of playing with different orthogonalization, variable inclusion, and other specification assumptions. This is not criticism: when you have a strong prior, it makes sense to see if the data can be squeezed into the prior. Once authors like Ramey and Nakamura and Steinsson started to look with a critical eye, it became clearer just how weak the evidence is. Standard errors are also wide, but the variability in results due to changes in sample and specification is much larger than formal standard errors. That's why I don't stress that statistical aspect. You play with 100 models, try one variable after another to tamp down the price puzzle, and then compute standard errors as if the 100th model were written in stone. This post is already too long, but showing how results change with different specifications would have been a good addition. For example, here are a few more Ramey plots of inflation responses, replicating various previous estimates. Take your pick.

What should we do instead?

Well, how else should we measure the effects of monetary policy? One natural approach turns to the analysis of historical episodes and changes in regime, with specific models in mind. 
Romer and Romer pass on thoughts on this approach: "...some macroeconomic behavior may be fundamentally episodic in nature. Financial crises, recessions, disinflations, are all events that seem to play out in an identifiable pattern. There may be long periods where things are basically fine, that are then interrupted by short periods when they are not. If this is true, the best way to understand them may be to focus on episodes—not a cross-section proxy or a tiny sub-period. In addition, it is valuable to know when the episodes were and what happened during them. And, the identification and understanding of episodes may require using sources other than conventional data." A lot of my and others' fiscal theory writing has taken a similar view. The long quiet zero bound is a test of theories: old-Keynesian models predict a deflation spiral, new-Keynesian models predict sunspot volatility, and fiscal theory is consistent with stable quiet inflation. The emergence of inflation in 2021, and its easing despite interest rates below inflation, likewise favors fiscal over standard theories. The fiscal implications of abandoning the gold standard in 1933, plus Roosevelt's "emergency" budget, make sense of that episode. The new-Keynesian reaction parameter \(\phi_\pi\) in \(i_t = \phi_\pi \pi_t\), which leads to unstable dynamics for \(\phi_\pi>1\), is not identified by time-series data. So use "other sources," like plain statements on the Fed website about how they react to inflation. I already cited Clarida, Galí, and Gertler, for measuring the rule, not the response to the shock, and explaining the implications of that rule for their model. Nakamura and Steinsson likewise summarize Mussa's (1986) classic study of what happens when countries switch from fixed to floating exchange rates: "The switch from a fixed to a flexible exchange rate is a purely monetary action. 
In a world where monetary policy has no real effects, such a policy change would not affect real variables like the real exchange rate. Figure 3 demonstrates dramatically that the world we live in is not such a world." Also, analysis of particular historical episodes is enlightening. But each episode has other things going on and so invites alternative explanations. 90 years later, we're still fighting about what caused the Great Depression. 1980 is the poster child for monetary disinflation, yet as Nakamura and Steinsson write, "Many economists find the narrative account above and the accompanying evidence about output to be compelling evidence of large monetary nonneutrality. However, there are other possible explanations for these movements in output. There were oil shocks both in September 1979 and in February 1981.... Credit controls were instituted between March and July of 1980. Anticipation effects associated with the phased-in tax cuts of the Reagan administration may also have played a role in the 1981–1982 recession ...." Studying changes in regime, such as fixed to floating or the zero bound era, helps somewhat relative to studying a particular episode, in that regimes offer some averaging over other shocks. But the attraction of VARs will remain. None of these approaches produces what VARs seemed to produce, a theory-free quantitative estimate of the effects of monetary policy. Many tell you that prices are sticky, but not how prices are sticky. Are they old-Keynesian backward-looking sticky or new-Keynesian rational-expectations sticky? What is the dynamic response of relative inflation to a change in a pegged exchange rate? What is the dynamic response of real relative prices to productivity shocks? Observations such as Mussa's graph can help to calibrate models, but do not answer those questions directly. My observations about the zero bound or the recent inflation similarly seem (to me) decisive about one class of model vs. 
another, at least subject to Occam's razor about epicycles, but likewise do not provide a theory-free impulse-response function. Nakamura and Steinsson write at length about other approaches, model-based moment matching and use of micro data in particular. This post is going on too long; read their paper. Of course, as we have seen, VARs only seem to offer a model-free quantitative measurement of "the effects of monetary policy," but it's hard to give up on the appearance of such an answer. VARs and impulse responses also remain very useful ways of summarizing the correlations and cross-correlations of data, even without a cause-and-effect interpretation. In the end, many ideas are successful in economics when they tell researchers what to do, when they offer a relatively clear recipe for writing papers. "Look at episodes and think hard" is not such a recipe. "Run a VAR" is. So, as you think about how we can evaluate monetary policy, think about a better recipe as well as a good answer. (Stay tuned. This post is likely to be updated a few times!)

VAR technical appendix

Technically, running VARs is very easy, at least until you start trying to smooth out responses with Bayesian and other techniques. Line up the data in a vector, i.e. \(x_t = [i_t \; \pi_t\; y_t]'\). Then run a regression of each variable on lags of the others, \[x_t = Ax_{t-1} + u_t.\] If you want more than one lag of the right-hand variables, just make a bigger \(x\) vector, \(x_t = [i_t\; \pi_t \; y_t \; i_{t-1}\; \pi_{t-1} \;y_{t-1}]'.\) The residuals of such regressions \(u_t\) will be correlated, so you have to decide whether, say, the correlation between interest rate and inflation shocks means the Fed responds within the period to inflation, or inflation responds within the period to interest rates, or some combination of the two. That's the "identification" assumption issue. 
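That recipe -- stack the data, regress on lags, pick an identification, simulate forward -- fits in a few lines. A minimal sketch on synthetic data (numpy only; a Cholesky factor of the residual covariance stands in for the identification choice, which is an assumption, not the only option):

```python
import numpy as np

rng = np.random.default_rng(2)
T, n = 2000, 3  # e.g. x_t = [i_t, pi_t, y_t]'

# Synthetic data from a known VAR(1), x_t = A x_{t-1} + u_t.
A_true = np.array([[0.9, 0.1, 0.0],
                   [0.1, 0.8, 0.1],
                   [0.0, 0.1, 0.7]])
x = np.zeros((T, n))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + rng.normal(0.0, 1.0, n)

# 1. Regress each variable on lags of all the others: x_t = A x_{t-1} + u_t.
B, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
A_hat = B.T
u = x[1:] - x[:-1] @ B

# 2. Identification: pick C with u_t = C eps_t, cov(eps_t) = I.  A Cholesky
#    factor of cov(u) -- an ordering assumption -- is the usual default.
C = np.linalg.cholesky(np.cov(u.T))

# 3. Impulse response: shock the first orthogonalized shock, simulate forward.
H = 48
irf = np.zeros((H, n))
irf[0] = C[:, 0]  # x_1 = C eps_1 with eps_1 = [1, 0, 0]'
for j in range(1, H):
    irf[j] = A_hat @ irf[j - 1]

# 4. Forecast error variance decomposition: cov(x) = sum_j theta_j theta_j',
#    with theta_j = A^j C.  Share of var of variable 0 due to the first shock:
theta = [np.linalg.matrix_power(A_hat, j) @ C for j in range(300)]
total_var = sum(th[0] @ th[0] for th in theta)   # variance of variable 0
share = sum(th[0, 0] ** 2 for th in theta) / total_var
```

The same simulate-forward loop produces the plotted impulse-response functions, and the share in step 4 is the forecast-error-variance number behind the "small fractions" statements earlier in the post.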
You can write it as a matrix \(C\) so that \(u_t = C \varepsilon_t\) with \(E(\varepsilon_t \varepsilon_t')=I\), or you can include some contemporaneous values on the right-hand sides. Now, with \(x_t = Ax_{t-1} + C\varepsilon_t\), you start with \(x_0=0\), choose one series to shock, e.g. \(\varepsilon_{i,1}=1\), leaving the others alone, and just simulate forward. The resulting path of the other variables is the above plot, the "impulse-response function." Alternatively, you can run a regression \(x_t = \sum_{j=0}^\infty \theta_j \varepsilon_{t-j}\), and the \(\theta_j\) are (different, in sample) estimates of the same thing. That's "local projection." Since the right-hand variables are all orthogonal, you can run single or multiple regressions. (See here for equations.) Either way, you have found the moving average representation, \(x_t = \theta(L)\varepsilon_t\), in the first case with \(\theta(L)=(I-AL)^{-1}C\), in the second case directly. Since the shocks are all orthogonal, the variance of the series is the sum of its squared loadings on all of the shocks, \(cov(x_t) = \sum_{j=0}^\infty \theta_j \theta_j'\). This "forecast error variance decomposition" is behind my statement that only small amounts of inflation variance are due to monetary policy shocks rather than to shocks to other variables, and that most inflation variance comes from inflation shocks.

Update:

Luis Garicano has a great tweet thread explaining the ideas with a medical analogy. Kamil Kovar has a nice follow-up blog post, with emphasis on Europe. He makes a good point that I should have thought of: A monetary policy "shock" is a deviation from a "rule." So, the Fed's and ECB's failure to respond to inflation as they "usually" do in 2021-2022 counts exactly the same as a 3-5% deliberate lowering of the interest rate. Lowering interest rates for no reason, and leaving interest rates alone when the regression rule says raise rates, are the same in this methodology. 
That "loosening" of policy was quickly followed by inflation easing, so an updated VAR should exhibit a strong "price puzzle" -- a negative shock followed by less, not more, inflation. Of course, historians and practical people might object to the assumption that failing to act as usual has exactly the same effects as acting.

* Some papers:

Comment on Romer and Romer, "What ends recessions?" Some "what's a shock?"

Comment on Romer and Romer, "A new measure of monetary policy." The greenbook forecasts, and beginning thoughts that strict exogeneity is not necessary.

"Shocks": monetary shocks explain small fractions of output variance.

Comments on Hamilton: more thoughts on what a shock is.

"What do the VARs mean?", cited above: is the response to the shock, or to persistent interest rates?

"The Fed and Interest Rates," with Monika Piazzesi: daily data and interest rates to identify shocks.

"Decomposing the yield curve," with Monika Piazzesi: starts with a great example of how small changes in specification lead to big differences in long-run forecasts.

Time series:

"A critique of the application of unit root tests": pretesting for unit roots and cointegration is a bad idea.

"How big is the random walk in GNP?": lessons in not using short-run dynamics to infer long-run properties.

"Permanent and transitory components of GNP and stock prices": a favorite; cointegration really helps on long-run properties.

"Time series for macroeconomics and finance": notes that never quite became a book. Explains VARs and responses.