In many cases, the content of the linked blogs and blog posts is not subject to editorial review.
Availability notice
Permanent availability is not guaranteed and lies entirely in the hands of the blog operators. Please make your own copy if you wish to cite a blog post.
As famine looms in northern Gaza, the United Nations agency for Palestinian refugees is hurtling toward collapse.

"What I can say today is that we can run our operation until the end of May, whereas a month ago I had just the visibility for the next week or two weeks," Philippe Lazzarini, head of the U.N. Palestinian refugees agency (UNRWA), told reporters in Geneva last week, just days after Israel denied him entry into Gaza. "But that shows also how bad the financial situation of the organization is."

The money crunch stems in part from a fateful U.S. decision. When Israel accused a dozen of UNRWA's 13,000 Gaza-based employees of facilitating the Oct. 7 attacks, American officials immediately paused funding for the organization pending an investigation. Many other top donors followed suit, leaving UNRWA scrambling to stay afloat.

It's since become clear that Israel's accusations relied on less-than-definitive evidence. This revelation led most funders to turn the spigot back on. But the U.S., with its unusually deep pockets, is now banned from changing course. Less than two weeks ago, Congress passed a law blocking all funding for UNRWA until March 2025.

The timing of this decision is nothing short of disastrous, according to Christopher Gunness, a former spokesperson for UNRWA. "Mass starvation has already set in, but without UNRWA it's impossible to even slow that down," Gunness said.

Despite Israel's claims to the contrary, there is no way to replace UNRWA's role in Gaza, especially amid the largest Palestinian humanitarian crisis since Israel's war of independence, according to experts on humanitarian aid and UNRWA's history. Analysts also fear that potential interruptions in the agency's operations across the Middle East — including in war-torn Syria and crisis-riven Lebanon — could further undermine regional stability.

A State Department spokesperson told RS that getting aid to Palestinians in Gaza is a "team effort."
"[W]hile we will continue to provide funding to organizations like the World Food Programme [WFP], we will be looking to other donors to continue to provide critical funding to UNRWA as long as our funding remains paused," the spokesperson said in a statement.

But groups like WFP simply don't have the capacity to fill the gap made by defunding UNRWA, according to a humanitarian working to get aid into Gaza who requested anonymity to prevent Israeli retribution.

"The work they do on a day-to-day basis, no one else does it, and you couldn't stand up an organization to do it," the humanitarian worker told RS. "There's literally no other place for [Gazans] to go."

A love-hate relationship

Decades removed from its founding, it can be easy to forget where UNRWA came from. In a practical sense, it sprang from the need to get aid to 700,000 Palestinian refugees when it became clear that Israel would not let them return home after the 1948 war. But ideologically, UNRWA's story begins in the Tennessee Valley.

In the 1930s, Congress launched a New Deal project known as the Tennessee Valley Authority (TVA). The TVA was a development initiative; it enlisted some of those hardest hit by the Great Depression and put them to work building dams, boosting crop yields, and bringing electricity to rural communities. It was, by most accounts, a rousing success.

After the humanitarian disaster of the 1948 war, President Harry Truman hoped TVA chief Gordon Clapp could bring that success to the Middle East. With the support of the fledgling U.N., which had yet to establish an agency for refugees, Clapp visited the region in 1949 and became convinced that the Jordan Valley and other fertile areas in the Levant were ripe for TVA-style development. The U.N. General Assembly agreed, and the United Nations Relief and Works Agency was born.

It didn't take long for "works" to disappear from the mission.
Development projects sputtered, missing deadlines due to infighting among host countries and the refugees' general unwillingness to be relocated once more. "Most refugees refused to work," said Jalal al-Husseini, an expert on UNRWA's history and an associate researcher at the Institut français du Proche-Orient (Ifpo). "They wanted to go back home." Donor states also realized that large-scale public works are a good bit more expensive than more mundane relief projects.

UNRWA's other activities — from schools to healthcare facilities and aid distribution — were far more successful. The organization provided much-needed help to the governments of Syria, Jordan, and Lebanon, each of which had little capacity to manage the refugee influx on their own.

Besides a brief period in the early 1950s, Israel had little to do with UNRWA until 1967, when its forces routed Egypt, Jordan, and Syria in the provocatively named Six Day War. The conquest created a problem: As an occupying power, Israel was suddenly in charge of the welfare of millions of Palestinians. Tel Aviv quickly struck a deal with UNRWA to keep its operations going in the West Bank and Gaza Strip.

Since the vast majority of UNRWA's local staff is Palestinian, the agency was "never really seen by Israel as a neutral and independent and impartial U.N. organization," according to Lex Takkenberg, a 30-year veteran of UNRWA who left the agency in 2019. "It started off with an explicit request by Israel for UNRWA to continue operating," Takkenberg said. "Since that time, there has sort of been a hatred-love relationship."

Israel-Palestine watchers will recognize the pattern. Since the 1960s, Israel has periodically bemoaned the contents of UNRWA textbooks or accused staff of ties to Palestinian political groups (or terrorist organizations, in Tel Aviv's telling), drawing scrutiny from Western donors. UNRWA responds by excising objectionable content from courses and firing employees with apparent conflicts of interest.
Over the years, these back-and-forths forced the agency to develop a comprehensive "neutrality framework" to keep politics out of its work. "Almost without exception, Israel never provided evidence" that employees had ties to groups like Hamas, Takkenberg recalled. But UNRWA would still usually fire them to protect the organization as a whole. "Then the Israelis would be back to business as usual," he said. "It never reached the point that [Israel] asked UNRWA to stop operations."

In substance, the Oct. 7 allegations were the latest entry in this story. Israeli officials made bold allegations that UNRWA employees facilitated the attacks but have yet to provide evidence, even to U.N. investigators. But the reaction from donors was different. While the International Court of Justice has twice demanded a surge of aid into Gaza to avert disaster, most Western countries suspended support for the strip's leading relief group. Many have restarted their funding, but the U.S., United Kingdom, and Australia are still holding out.

"Prohibiting the Biden administration from contributing to UNRWA creates a large gap in the Agency's annual operating budget," said William Deere, the head of UNRWA's Washington office. The shortfall "will make it harder for UNRWA to assist starving Gazans and potentially further weaken regional stability," Deere argued.

UNRWA in crisis

UNRWA is, of course, no stranger to crises. When Saddam Hussein's Iraq invaded Kuwait in 1990, Israel imposed a blanket curfew on the West Bank and Gaza, leaving many Palestinians with limited access to food.
Quick mobilization from UNRWA prevented a bad situation from getting worse, according to Takkenberg.

"I organized massive food distributions during short periods that Israel lifted the curfew so that people could collect food from distribution points," he remembered.

In the tumultuous period since, UNRWA has managed to stay afloat and provide aid across the Levant despite wars and a blockade in Gaza; a brutal conflict in Syria; and a protracted economic crisis in Lebanon.

When President Donald Trump cut off funding in 2018, it came as a shock. "We found out that the Americans were not going to be giving us their money when the check did not arrive in the post," Gunness, the former spokesperson, recalled. This diplomatic equivalent of an Irish goodbye lit a fire under UNRWA staff, who put fundraising efforts into overdrive and filled the gap with pledges from wealthy Gulf countries. Even Israeli Prime Minister Benjamin Netanyahu reportedly backed the effort to avert "disaster" in Gaza.

But all of these crises pale in comparison to the trial that the organization faces today. Gulf donors have so far failed to fill the gap left by the U.S. decision to cut off funding. At least 154 UNRWA employees have been killed since Oct. 7, and many of its facilities have been destroyed in the bombing. These direct attacks have been paired with an unprecedented Israeli PR effort to discredit the organization, all with the substantive backing of a Democratic U.S. president.

Fringe Israeli activists have long argued that UNRWA is illegitimate in some fundamental sense, perpetuating a fanciful dream that Palestinians will eventually return home. Its existence, they argue, encourages false hope and prevents an end to the conflict. As Israel's political scene has lurched to the right, this view has become more popular. Now, multiple members of Netanyahu's cabinet are publicly opposed to UNRWA's very existence.

Israel is now actively working to undermine UNRWA.
In January, Finance Minister Bezalel Smotrich blocked a large shipment of U.S. aid in order to stop it from reaching UNRWA. The U.N. claims that Israeli officials are holding up visas for aid workers affiliated with the agency.

"UNRWA are part of the problem, and we will now stop working with them," an Israeli spokesperson said last week. "We are actively phasing out the use of UNRWA because they perpetuate the conflict rather than try and alleviate the conflict."

Israeli opposition can only do so much to block the agency's work in the short term, according to Takkenberg, who noted that other groups are likely importing humanitarian aid in their own name and simply handing it off to UNRWA upon arrival. But that workaround has its limits as Israel allows only a trickle of aid to enter Gaza each day. There are currently as many as 30,000 trucks sitting in Egypt waiting to cross the border, according to a Jordanian official who spoke with NPR.

"There are trucks that have been at the border for three months," the humanitarian worker told RS. "There's all sorts of crazy restrictions that make no sense, even from a security standpoint," they said, adding that they've had medical equipment and food confiscated during inspections.

This has left UNRWA, and Gaza as a whole, on the verge of collapse. Israel and its Western backers will likely regret their role in bringing the crisis to this point, argued Gunness. "Any donor governments, especially those who are friends of Israel, who think that it's somehow in Israel's security interests to have millions of angry, hungry, radicalized, mourning, grief-stricken people living in appalling refugee camps and other circumstances on the doorstep of Israel, I wonder what planet they are living in," he said.
Over the first weekend of August 2023, Saudi Arabia convened an international summit on the war in Ukraine. Held in Jeddah and attended by representatives from 40 countries, the meeting, as was widely expected, did not produce any breakthroughs. Still, it provided a golden opportunity for Saudi Crown Prince Mohammed bin Salman Al Saud (MBS) to position himself as a leader of what might be called a second "Non-Aligned Movement." This movement's growing influence owes much to Russia's invasion of Ukraine. For while Moscow's assault had the unintended effect of revitalizing—and expanding—NATO, it also created an opening for many countries to leverage a multipolar international system in ways that have limited Washington's global power, not to mention its regional clout in the Middle East.
But these "balancing" efforts come with a high cost, as rising grain prices have threatened the stability of many of the very states that have thus far refused to condemn Russia's invasion, much less support Ukraine. For these states, the status quo is increasingly precarious, hence the wider logic of inviting China and the United States to sit a few short whispers away from their Saudi hosts at a meeting to which Russia was not invited.
China's Multi-Faceted Foreign Policy
From the outset, China has tried to advance a position of "neutrality" while echoing Russian President Vladimir Putin's justification for his unprovoked invasion of Ukraine. Beyond rhetoric, Beijing has provided economic support via its purchase of price-discounted Russian oil and, some experts argue, limited military assistance as well (a claim Beijing denies).
China's efforts to maintain these two tracks reflect structural tensions at the heart of China's global engagement. On the one hand, China is closely tied to a global economic order that is dominated by Western states and multilateral institutions. On the other hand, President Xi Jinping is seeking to counter U.S. military, economic, and even cultural power, and has done so in multiple ways, thus inviting conflict with the United States even as Beijing needs to cooperate with Washington and its Western allies. Russia's invasion of Ukraine, which reportedly took Chinese leaders by surprise, created opportunities for China to flex its "counter-hegemonic" muscles, but also opened the door to economic and strategic threats that were likely to intensify absent a diplomatic solution to the Ukraine conflict.
The tension between China's ideological and global economic interests extends well beyond the China-West arena. Many middle-sized regional powers such as India, Brazil, and South Africa share Beijing's desire to counter U.S. global dominance. But their policies are also rooted in the principle of state sovereignty and the rejection of the use of force to solve international conflicts. Beijing has long advocated these very norms and has given them pride of place in the charter of the Shanghai Cooperation Organization, not to mention China's 12-point Ukraine peace proposal.
Thus Beijing's support for Russia has raised legitimate questions from many leaders about the rising humanitarian and economic costs of the war and about Beijing's readiness to take credible steps to show that its Ukraine plan is not a mere diplomatic feint. The most important of these steps would be for Beijing to reduce its diplomatic support for Moscow.
Beijing's Jeddah Dance
From the outset of the Jeddah meeting China's delegate avoided suggesting that Beijing would endorse any particular proposal other than its own. Indeed, Special Representative Li Hui seemed to emphasize the limited goals of the meeting—and the conflicts animating its leading participants—when he declared, "We have many disagreements and we have heard different positions, but it is important that our principles be shared."
Putting a more positive spin on the proceedings, a spokesperson for the Chinese government noted that "China is willing to work with the international community to continue to play a constructive role in promoting a political solution to the crisis in Ukraine."
But what kind of solution? The outcome that China has outlined in its own 12-point proposal calls for respecting "the independence and territorial integrity of all countries" but also for a negotiated "political settlement" that could fudge or violate these principles. Thus, China is not ready to accept Ukraine's 10-point peace plan, which would require Russia's total withdrawal from all Ukrainian lands, including Crimea.
That Moscow has totally rejected this idea is not surprising; any hint by Putin that he might accept Ukraine's terms could undermine his rule. Moreover, as one expert has noted, "Ukraine's best-case scenario for the end of this war is also China's worst-case scenario," because Beijing wants Putin to remain in power while sustaining Russia's occupation until it is Ukraine that makes the key compromises.
It is inconceivable that China's envoy came to the Jeddah meeting believing that these various circles could be squared. Still, with the Ukraine conflict settling into what could be a prolonged war of attrition, and with Russia's suspension of its grain deal and its attacks on shipping in the Black Sea, China had to demonstrate concern for those states suffering from Moscow's policy of global blackmail.
Ukrainian President Volodymyr Zelenskyy exploited China's unease when he noted that, "On issues such as food security, the fate of millions of people in Africa, Asia and other parts of the world directly depends on how fast the world moves to implement the peace formula." He was, of course, talking about Ukraine's own proposal, which China certainly did not back. Still, China's active presence in Jeddah presumably showed that it was ready to assume its responsibilities as a major global power that, in the words of the Chinese Foreign Ministry, had helped "to consolidate international consensus" on Ukraine.
Whatever the veracity of this claim, it is worth noting that on the second day of the Jeddah conference, Russian Deputy Foreign Minister Sergei Ryabkov insisted that the meeting was "a reflection of the West's attempt to continue futile, doomed efforts," but added that China could nevertheless "convey common sense to the Western patrons of Kyiv."
China did its best to avoid taking on the role of Moscow's messenger. Still, days after the Jeddah summit, Chinese Foreign Minister Wang Yi, in a phone call with his Russian counterpart Sergey Lavrov, reassured him that Beijing was committed to being an "objective and rational voice." That the week before the summit China had joined Russia in a naval exercise off the coast of Alaska that prompted the deployment of four U.S. Navy destroyers underscored the risky juggling act that is at the heart of Beijing's foreign policy.
A Win for MBS and Zelenskyy
The key participants in the Jeddah conference made good use of the multiple balls that Beijing has thus far kept in the air. Ukrainian officials declared that the meeting "completely destroys the narrative of Russia" that Ukraine was only backed by "countries of the collective West." Such hyperbole was as necessary as it was predictable. Indeed, while in the lead-up to the meeting Ukrainian officials insisted that "our goal in Saudi Arabia is to develop a unified vision" ahead of a future global peace summit, the fact that no such vision emerged in Jeddah was almost irrelevant. What counted most was that the summit was held and that it ended, as the Ukrainian ambassador to Saudi Arabia pitched it, with "constructive" talks and "a broad vision." Jeddah was thus a win for Zelenskyy.
The same, of course, can be said for MBS. He may have not fully agreed when the Ukrainian ambassador thanked Saudi Arabia "for being so committed and hospitable to Ukraine in moving forward our peace formula plan." But the meeting signaled that the crown prince is on his way to rehabilitating his international reputation.
More broadly, as one leading Saudi journalist noted, the conference underscored Saudi Arabia's growing clout as a "neutral" mediator in a diverse group of states that constitute a kind of second Non-Aligned Movement whose members are leveraging the US-Russia-China triangle of conflict to advance their interests while maintaining good relations with all three countries.
For Riyadh, a key element in this juggling act is its unhappiness with being replaced by Russia as China's chief supplier of crude oil. This represents a real economic and political cost for MBS, who, to the frustration of the Biden White House, has sustained the oil production cuts he initiated in spring 2023.
In short, the Jeddah meeting gave Riyadh a practically risk-free opportunity to direct multiple signals in multiple directions. Thus, while the conference ended without any final declaration, Saudi officials held that the meeting contributed to "building common grounds that pave the way for peace." As for China, it has signaled its readiness to attend a follow-up meeting.
The Biden Administration (and China) Navigate Choppy Waters
While U.S. national security adviser Jake Sullivan went to the Jeddah conference, the administration's best bet was to let other participants, most importantly Ukraine's president, make their case and test China's intentions. That, in the words of one unnamed U.S. official, the administration was "glad" that China attended and participated in the meeting "in a constructive way" highlights the challenges that the administration faces as it navigates choppy diplomatic waters.
Those waters got a little rougher as the BRICS countries — Brazil, Russia, India, China, and South Africa — held a summit on August 22 during which Egypt, Iran, Ethiopia, the U.A.E., and Saudi Arabia were invited to join. While the decision shows that Russia and China are advancing their efforts to create an alternative to the International Monetary Fund and other Western-dominated multilateral financial powerhouses, the expanded BRICS club includes more than a few members who have concerns about the agenda of Moscow and China on a host of issues, including the Ukraine war.
That Russian Deputy Foreign Minister Ryabkov stated that Moscow looked forward to "an exchange of views" with the BRICS countries that attended the Jeddah meeting could suggest some unease in the Kremlin. For however determined they are to foster a multipolar global system, major regional players such as Brazil and South Africa have no interest in letting Moscow or China become the new arbiters of a counter-hegemonic agenda. Speaking to the point, South African President Cyril Ramaphosa has warned, "We have resisted pressure to align ourselves with any one of the global powers or with influential blocs of nations."
Vladimir Putin might ignore such sentiments, but Beijing's leaders cannot. After all, precisely because China is a real economic and military power in ways that Russia will never be, it needs to find a path to engaging across the global spectrum. This balancing act has become harder with a struggling domestic economy, not to mention the efforts of the Biden White House to foster greater security cooperation in Asia — amply demonstrated by the recent U.S.-South Korea-Japan Camp David summit.
Biden wisely insisted that the meeting was not "anti-China." Still, it is far from clear that this statement shows that the White House has forged a policy that fully takes into account the tensions that are at the heart of China's foreign relations — and that brought Beijing's emissary to Jeddah in the first place.
A version of this piece was published by Arab Center Washington DC. The views expressed in this publication are the author's own and do not necessarily reflect the position of Arab Center Washington DC, its staff, or its Board of Directors.
This post is from a set of comments I gave at the NBER Asset Pricing conference in early November at Stanford. Conference agenda here. My full slides here. There was video, but sadly I took too long to write this post and the NBER took down the conference video.

I was asked to comment on "Downward Nominal Rigidities and Bond Premia" by François Gourio and Phuong Ngo. It's a very nice clean paper, so all I could think to do as discussant is praise it, then move on to bigger issues. These are really comments about whole literatures, not about one paper. One can admire the play but complain about the game.

The paper implements a version of Bob Lucas' 1973 "International evidence" observation. Prices are less sticky in high-inflation countries. The Phillips curve is more vertical. Output is less affected by inflation. The Calvo fairy visits every night in Argentina. To Lucas, high inflation comes with variable inflation, so people understand that price changes are mostly aggregate, not relative, prices, and ignore them. Gourio and Ngo use a new-Keynesian model with downwardly sticky prices and wages to express the idea. When inflation is low, we're more often in the more-sticky regime.

They use this idea in a model of bond risk premia. Times of low inflation lead to more correlation of inflation and output, and so a different correlation of nominal bond returns with the discount factor, and a different term premium.

I made two points, first about bond premiums and second about new-Keynesian models. Only the latter for this post.

This paper, like hundreds before it, adds a few ingredients on top of a standard textbook new-Keynesian model. But that textbook model has deep structural problems. There are known ways to fix the problems. Yet we continually build on the standard model, rather than incorporate known ways or find new ways to fix its underlying problems.
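For reference, the standard textbook new-Keynesian model at issue is the familiar three-equation system (standard notation; sign and scale conventions for \(\sigma\) vary across textbooks):

\[
\begin{aligned}
x_t &= E_t x_{t+1} - \sigma \left( i_t - E_t \pi_{t+1} \right) && \text{(IS curve)} \\
\pi_t &= \beta E_t \pi_{t+1} + \kappa x_t && \text{(Phillips curve)} \\
i_t &= \phi \pi_t + \varepsilon_t && \text{(policy rule)}
\end{aligned}
\]

where \(x_t\) is the output gap, \(\pi_t\) inflation, and \(i_t\) the nominal interest rate. Roughly speaking, the downward rigidities in Gourio and Ngo make the slope \(\kappa\) state-dependent: the Phillips curve is flatter in the low-inflation, more-sticky regime.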
Problem 1: The sign is "wrong" or at least unconventional

The basic sign is wrong -- or at least counter to the standard belief of all policy makers. In the model, higher interest rates cause inflation to jump down immediately, and then rise over time. Everyone at the Fed uniformly believes that higher interest rates cause inflation to go nowhere immediately, and then gently decline over time, with "long and variable lags." Larry Ball pointed this out 30 years ago.

The behavior comes straight from the forward-looking Phillips curve. Lower output goes with lower inflation, relative to future inflation -- i.e., inflation rising over time.

To be clear, maybe the model is right and the beliefs are wrong. It's amazing that so much modeling and empirical work has gone into massaging theory and data to conform to Milton Friedman's 1968 proclamation of how monetary policy works. The "long and variable lags" in particular are a trouble for modern economics. If you know prices are going up tomorrow, you raise prices today. But that's for another day. This model does not behave the way most people think the economy behaves, so if you're going to use it, at least that needs a major asterisk.

Well, we know how to fix this. You can see that sneaking lagged inflation into the Phillips curve is going to be a big part of that. Christiano, Eichenbaum, and Evans, 20 years ago, produced a widely cited model that "fixes" this problem. It has a lot of ingredients. Most of all, it assumes that wages and prices are indexed. Firms and workers that don't get tapped by the Calvo fairy to change their price or wage nonetheless raise by observed inflation. This gives a Phillips curve with lagged inflation. Moreover, in preferences, investment, and this Phillips curve, CEE modify the model to put growth rates in place of levels. (More review in a three-part series on new-Keynesian models here.)
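A minimal sketch (mine, not from the paper) of where the unconventional sign comes from: in the purely forward-looking three-equation model, hit by an i.i.d. monetary policy shock, expectations of tomorrow's variables are zero, so the three equations can be solved by hand for the impact responses.

```python
# Impact responses in the purely forward-looking three-equation NK model
# when the policy shock eps is i.i.d. (so E[x'] = E[pi'] = 0):
#   x  = -sigma * i          (IS curve)
#   pi = kappa * x           (Phillips curve)
#   i  = phi * pi + eps      (policy rule)
# Substituting the first two into the third gives
#   i = eps / (1 + sigma * kappa * phi).
# Parameter values below are illustrative, not estimates.

def impact_response(eps, sigma=1.0, kappa=0.3, phi=1.5):
    """Period-0 responses of the rate, output gap, and inflation."""
    i = eps / (1 + sigma * kappa * phi)
    x = -sigma * i
    pi = kappa * x
    return i, x, pi

i, x, pi = impact_response(eps=1.0)
# A positive rate shock lowers output and makes inflation jump DOWN on
# impact; since the shock is i.i.d., everything reverts to zero next
# period, so inflation then *rises* back over time -- the "wrong" sign.
```

The sign flips to the conventional delayed, hump-shaped decline only once something like the CEE indexation term ties inflation to its own lag.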
The result: If the funds rate goes down (right panel) unexpectedly, inflation goes down just a bit but then turns around and goes up a year later. (Several other authors get to the same place by abandoning rational expectations. But that has its own problems, and it's going to be hard to incorporate asset pricing that way. Much more in Expectations and the Neutrality of Interest Rates.)

Great. But notice that neither Gourio and Ngo nor pretty much anyone else builds on this model. We cite it, but don't use it. Instead, 20 more years of NK theorizing studies different extensions of the basic model that don't solve the central conundrum.

Problem 2: Fed-induced explosions

The standard new-Keynesian model says that if the Fed holds interest rates constant, inflation is stable -- it will go away on its own -- but indeterminate. There are multiple equilibria. The standard new-Keynesian model thus assumes that the Fed deliberately destabilizes the economy. If inflation comes out higher than the Fed wishes, the Fed will lead the economy to hyperinflation or hyperdeflation. Under that threat, people jump to the inflation that the Fed wishes to see.

But the Fed does no such thing. Central bankers resolutely state that their job is to stabilize the economy, to bring inflation back from wherever it might go. Despite thousands of papers with new-Keynesian equations written at central banks, if anyone were ever to honestly describe those equations in the introduction -- "we assume that the central bank is committed to respond to inflation by hyperinflation or deflation in order to select from multiple equilibria" -- they would be laughed out of a job.

This has been clear, I think, since 2000 or so. I figured it out by reading Bob King's "Language and Limits." My "Determinacy and Identification" in the JPE 2011 was all about this. We've also known at least one way to fix it, as shown: fiscal theory. OK, I'm a broken record on this topic.
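The determinacy problem is easiest to see in the frictionless limit of the model (my compression of the standard argument). Combine the Fisher equation \(i_t = r + E_t \pi_{t+1}\) with the policy rule \(i_t = r + \phi \pi_t\), writing inflation in deviations from target:

\[
E_t \pi_{t+1} = \phi \, \pi_t \quad \Longrightarrow \quad E_t \pi_{t+j} = \phi^{\,j} \pi_t .
\]

With \(\phi > 1\), any \(\pi_t \neq 0\) implies explosive expected inflation, so the unique non-explosive equilibrium is \(\pi_t = 0\): determinacy is purchased by the threat of explosion. With \(\phi \le 1\), including an interest rate peg, expected inflation is stable for any starting value, which is exactly the "stable but indeterminate" case.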
Instead, we go on with the same model and its underlying, widely counterfactual assumption about policy.

Problem 3: The fit is terrible

A model consists of a set of equations, with the thing you want to determine (say, inflation) on the left, the economic causes described by the model on the right, plus "shocks," which are things your model can't capture. In the explanation part, there are parameters (\(\sigma,\ \beta,\ \kappa,\ \phi\)) that control how much the things on the right affect the things on the left.

The fit of new-Keynesian models is usually terrible. In accounting for economic variables (\(x_t\), \(\pi_t\), \(i_t\) here), the error terms (\(\varepsilon\)) are much larger than the model's economic mechanisms (the \(x\), \(\pi\) on the right-hand side). Forecasting -- predicting \(\pi\), \(x\) ahead of time -- is worse. For example, where did inflation come from and why did it go away? Expected inflation hasn't moved much, and the economy just plugged along. Most of the rise and fall of inflation came from inflation shocks.

Relatedly, the fit of the models is about the same amount of terrible for different values of the parameters. That means the parameters are "poorly identified," if identified at all. That means that the mechanisms of the model -- say, how much higher interest rates lower output, and then how much lower output affects inflation -- are weak, and poorly understood.

In part this isn't often noticed because we got out of the habit of evaluating models by fit in the 1980s. Most models are evaluated, as I showed above for CEE, by matching select "identified" impulse response functions. But as those response functions also explain small variances of output and inflation, it's possible to match response functions well, yet still fit the data badly, i.e., fit the data only by adding big shocks to every equation. I don't know of good fixes here. Old-fashioned ISLM models had similar problems (see Sims 1980).
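A toy illustration (mine, with made-up numbers) of why fit can be "about the same amount of terrible" across parameter values: if the equation shock dwarfs the economic mechanism, even shutting the mechanism off entirely barely changes the in-sample fit.

```python
# Toy example: model the inflation surprise as y = kappa * x + eps,
# where the Phillips-curve shock eps has much more variance than the
# mechanism kappa * x. All numbers are illustrative, not estimates.
import numpy as np

rng = np.random.default_rng(0)
T = 10_000
kappa_true = 0.1                    # hypothetical "true" Phillips slope
x = rng.standard_normal(T)          # output gap (variance 1)
eps = rng.standard_normal(T)        # equation shock (variance 1)
y = kappa_true * x + eps            # what the equation must explain

def mse(kappa):
    """In-sample fit of the Phillips equation for a candidate slope."""
    return float(np.mean((y - kappa * x) ** 2))

# Killing the mechanism (kappa = 0) barely hurts the fit, because the
# loss difference is only kappa_true**2 * var(x), i.e. about 1 percent:
ratio = mse(0.0) / mse(kappa_true)
```

With numbers like these, the likelihood is nearly flat in \(\kappa\), which is what "poorly identified" means in practice.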
But it is a fact that we just ignore and go on. The Phillips curve is a central problem, which has only gotten worse lately. Unemployment was high and declining throughout the 2010s, with stable inflation. Inflation came with high unemployment in 2021. And inflation fell with no high real interest rates, no unemployment, and strong growth in 2022-2023. But what will replace it?

So where are we?

Macro is surprisingly un-cumulative. We start with a textbook model. People find some shortcomings and suggest a fix. But rather than incorporate that fix, the next paper adds a different fix to the same textbook model. One would think we would follow the path on the right. We don't. We follow the path on the left.

This is common in economics. The real business cycle literature followed much the same path. After the King, Plosser, Rebelo stochastic growth model became the standard, people spent a decade with one extension after another, each well motivated to fix a stylized fact. But by and large the next paper didn't build on the last one, but instead offered a new variation on the KPR model.

Posteriors follow priors according to Bayes' rule, of course. So, another way of putting the observation: people seem to put a pretty high prior on the original model, but don't trust the variations at all.

I sin too. In Fiscal Theory of the Price Level I married fiscal theory with the new-Keynesian IS and Phillips curve, exactly as above, despite problems #1 and #3. Well, it makes a lot of sense to change one ingredient at a time to see how a new theory works. I'm unhappy with the result, but I haven't been able to move on to a new and better textbook model, which is what has occasioned several of these related posts.

We need a digestion. Which of the new ingredients are reliable, robust, and belong as part of the new "textbook" model? That's not easy. Reliable and robust is very hard to find, and to persuade people.
There are so many to choose from -- CEE's smorgasbord, capital, financial frictions, heterogeneous agents, different expectation-formation stories, different pricing frictions, and so on. What's the minimal easy set of these to use? Part of the trouble lies in how publishing works. It's nearly impossible to publish a paper that removes old ingredients, that digests the model down to a new textbook version. The rewards go to publishing papers that add new ingredients. Even if, as with CEE, everyone cites them but doesn't use them. I've asked many economists why they build on a model with so many known problems, and why they don't include known fixes. (Not just fiscal theory!) The answer is usually: yes, I know about all these problems, but nobody will bother me about them since every other paper makes the same assumptions, and I need to get papers published. I went on a bit of a tear here because I referee lots of great papers like this one. Every part of the paper is great, except it builds on a model with big flaws we've known about for 30 years. It feels unfair to complain about the underlying model, since the journal has published and will publish a hundred other papers that make the same assumptions. But at what point can we, collectively, scream "Stop!"? The new-Keynesian model has been the standard model for an astonishing 30 years. None of ISLM, monetarism, rational expectations, or real business cycles lasted that long. It's even more amazing that it is so unchanged in all this time. It is definitely time for a better textbook version of the model! Maybe this is a plea for Woodford, Gali, or one of the other NK textbook authors, who have much better command of all the variations than I have, to bless us with a new textbook model. Or perhaps it's time for something totally new. That's not fiscal theory per se. Fiscal theory is an ingredient, not a model. You can marry it to new-Keynesian models, as I, Leeper, Sims, and others have done. But you can also marry it to old ISLM or anything else you want. 
Given the above, maybe there isn't an existing modification but a new start. I don't know what that is. (My comments also include some similar thoughts about term premiums and how to think about them, but this post is long enough.) Update: Twitter correspondents Stéphane Surprenant and Tom Holden point me to The Transmission of Monetary Policy Shocks by Silvia Miranda-Agrippino and Giovanni Ricco in the AEJ Macro, and Inflation, output and markup dynamics with purely forward-looking wage and price setters by Louis Phaneuf, Eric Sims, and Jean Gardy Victor in the European Economic Review. The former is a VAR with high-frequency measurement of the monetary policy shock. And... (Figure source: Miranda-Agrippino and Ricco.) The price level as well as the inflation rate can jump down immediately when the interest rate rises! (I think the graph plots the level of the CPI, not its growth rate.) That's even stronger than the baseline model, in which the price level, being sticky, does not move, but the inflation rate jumps on the interest rate rise. The latter is a nice theoretical paper. It adds a lot of the CEE assumptions. I overstated the claim that others have not used these ingredients: they are used in these "medium scale" models, just not in "textbook" models. However, it gets rid of indexed prices and wages in favor of purely forward-looking Phillips curves. It does add intermediate goods, however. This makes price changes work through the network of suppliers, adding interesting dynamics, which has always struck me as a very important ingredient. And... (Figure source: Phaneuf, Sims, and Victor.) The main estimate is the dark line. Here you see a model with the conventional response: inflation does not move on impact, and increases some time after the interest rate rise. So, we can switch places! Estimates can replicate the standard model, with an instant inflation response. Models can replicate the conventional estimates, with a slow inflation response. This one is much prettier than CEE's.
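As a footnote on what "the baseline model" delivers on impact, the textbook response can be computed in a few lines. This is a minimal sketch of my own, not code from either paper: it solves the standard three-equation model under a Taylor rule \(i_t = \phi \pi_t + v_t\) with an AR(1) monetary shock by the minimum-state-variable method; the parameter values are conventional textbook choices, not estimates.

```python
# Minimum-state-variable solution of the textbook new-Keynesian model
#   pi_t = beta*E[pi_{t+1}] + kappa*x_t
#   x_t  = E[x_{t+1}] - sigma*(i_t - E[pi_{t+1}])
#   i_t  = phi*pi_t + v_t,   v_t = rho*v_{t-1} + eps_t
# Guessing pi_t = a*v_t, x_t = b*v_t and matching coefficients gives
#   a = -sigma / ((1-beta*rho)*(1-rho)/kappa + sigma*(phi-rho))
#   b = a*(1-beta*rho)/kappa
# Parameter values below are conventional textbook choices (assumptions).

beta, kappa, sigma, phi, rho = 0.99, 0.1, 1.0, 1.5, 0.5

a = -sigma / ((1 - beta * rho) * (1 - rho) / kappa + sigma * (phi - rho))
b = a * (1 - beta * rho) / kappa

# Responses to a unit monetary policy shock at t = 0 (v_t = rho**t):
horizon = 8
pi = [a * rho ** t for t in range(horizon)]   # inflation response
x = [b * rho ** t for t in range(horizon)]    # output-gap response

print("impact response of inflation:", round(pi[0], 3))
print("impact response of output:  ", round(x[0], 3))
# Both jump down immediately and then decay geometrically back to zero.
```

With \(\phi > \rho\) the impact coefficients are negative, so inflation and output fall the instant the rate shock hits -- the "jump" response that the post contrasts with estimated, delayed responses.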
I just finished a new draft of "Expectations and the neutrality of interest rates," which includes some ruminations on inflation that may be of interest to blog readers. A central point of the paper is to ask whether and how higher interest rates lower inflation, without a change in fiscal policy. That's intellectually interesting, answering what the Fed can do on its own. It's also a relevant policy question. If the Fed raises rates, that raises interest costs on the debt. What if Congress refuses to tighten to pay those higher interest costs? Well, to avoid a transversality condition violation (debt that grows forever) we get more inflation, to devalue outstanding debt. That's a hard nut to avoid. But my point today is some intuition questions that come along the way. An implicit point: The math of today's macro is actually pretty easy. Telling the story behind the math, interpreting the math, making it useful for policy, is much harder. 1. The Phillips curve. The Phillips curve is central to how the Fed and most policy analysts think about inflation. In words, inflation is related to expected future inflation and to some measure of economic tightness, factor \(x\). In equations, \[ \pi_t = E_t \pi_{t+1} + \kappa x_t.\] Here \(x_t\) represents the output gap (how much output is above or below potential output), measures of labor market tightness like unemployment (with a negative sign), or labor costs. (Fed Governor Chris Waller has a great speech on the Phillips curve, with a nice, short, clear explanation. There are lots of academic explanations of course, but this is how a sharp sitting member of the FOMC thinks, which is what we want to understand. BTW, Waller gave an even better speech on climate and the Fed. Go Chris!) So how does the Fed change inflation? In most analysis, the Fed raises interest rates; higher interest rates cool down the economy, lowering factor \(x\); that pushes inflation down. But does the equation really say that? 
This intuition thinks of the Phillips curve as a causal relation, from right to left: lower \(x\) causes lower inflation. That's not so obvious. In one story, the Phillips curve represents how firms set prices, given their expectation of others' prices and costs. But in another story, aggregate demand raises prices, and that causes firms to hire more (Chris Waller emphasized these stories). This reading may help to digest an otherwise puzzling question: Why are the Fed and its watchers so obsessed with labor markets? This inflation certainly didn't start in labor markets, so why put so much weight on causing a bit of labor market slack? Well, if you read the Phillips curve from right to left, that looks like the one lever you have. Still, since inflation clearly came from left to right, we should put more emphasis on curing it that way. 2. Adjustment to equilibrium vs. equilibrium dynamics. But does the story work? Lower \(x_t\) lowers inflation \(\pi_t\) relative to expected future inflation \(E_t \pi_{t+1}\). Thus, it describes inflation that is rising over time. This does not seem at all what the intuition wants. So how do we get to the intuition that lower \(x_t\) leads to inflation that goes down over time? (This is on p. 16 of the paper, by the way.) An obvious answer is adaptive expectations: \(E_t \pi_{t+1} = \pi_{t-1}\). Then lower \(x_t\) does mean inflation today lower than it was in the past. But the Fed and most commenters really don't want to go there. Expectations may not be "rational," and in most commentary they are either "anchored" by faith in the Fed, or driven by some third force. But they aren't mechanically last year's inflation. If they were, we would need nominal interest rates much higher than current inflation to get real interest rates above zero. Perhaps the intuition comes from remembering these adaptive-expectations dynamics, without realizing that the new view that expectations are forward-looking, even if not rational, undermines those dynamics. 
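The timing claim is easy to check numerically. Solving \(\pi_t = E_t \pi_{t+1} + \kappa x_t\) forward under perfect foresight, with \(x\) eventually zero, gives \(\pi_t = \kappa \sum_{j\ge 0} x_{t+j}\). A minimal sketch of my own, with a made-up path for \(x\):

```python
# Solve the forward-looking Phillips curve pi_t = pi_{t+1} + kappa*x_t
# forward under perfect foresight: pi_t = kappa * sum_{j>=0} x_{t+j}.
# The output-gap path below is an illustrative assumption: a temporary
# spell of negative x (a "cooled" economy), then back to zero.

kappa = 0.5
x = [-1.0, -1.0, -1.0, 0.0, 0.0, 0.0]

pi = [kappa * sum(x[t:]) for t in range(len(x))]
print(pi)  # [-1.5, -1.0, -0.5, 0.0, 0.0, 0.0]

# Inflation is lowest on impact and then *rises* back toward zero:
# lower x lowers current inflation relative to future inflation,
# producing rising, not falling, inflation.
```

So the equation, taken at face value, delivers exactly the rising path described above, not the gently declining one the standard intuition wants.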
Another answer may be confusion between adjustment to equilibrium and movement of equilibrium inflation over time. Lower \(x_t\) means lower inflation \(\pi_t\) than would otherwise be the case. But that reduction is an adjustment to equilibrium. It's not how the inflation we observe -- by definition, equilibrium inflation -- evolves over time. This is, I think, a common confusion. It's not always wrong. In some cases, adjustment to equilibrium does describe how an equilibrium quantity changes, and in a more complex model that adjustment plays out as a movement over time. For example, a preference or technology shock might give a sudden increase in capital; add adjustment costs, and capital increases slowly over time. A fiscal shock or money supply shock gives a sudden increase in the price level; add sticky prices, and you get a slow increase in the price level over time. But we already have sticky prices. This is supposed to be the model, the dynamic model, not a simplified model. Here, inflation lower than it otherwise would be is not the same thing as inflation that goes down slowly over time. It's just a misreading of equations. Another possibility is that the verbal intuition refers to the future, \[ E_t \pi_{t+1} = E_t \pi_{t+2} + \kappa E_t x_{t+1} .\] Now, perhaps, raising interest rates today lowers future factor \(x\), which then lowers future inflation \(E_t\pi_{t+1}\) relative to today's inflation \(\pi_t\). That's still a stretch, however. First, the standard new-Keynesian model does not have such a delay. \[x_t = E_t x_{t+1} - \sigma(i_t - E_t \pi_{t+1})\] says that higher interest rates immediately lower output, and lower output relative to future output; higher interest rates raise output growth. This equation is more amenable to adding frictions -- habits, capital accumulation, and so forth -- but the benchmark model not only does not have long and variable lags, it doesn't have any lags at all. 
Second, maybe we lower inflation \(\pi_{t+1}\) relative to its value \(\pi_t\), in equilibrium, but we still have inflation growing from \(t+1\) to \( t+2\). We do not have inflation gently declining over time, which is what the intuition wants. We are left -- and this is some of the point of my paper -- with a quandary. Where is a model in which higher interest rates lead to inflation that goes down over time? (And, reiterating the point of the paper, without implicitly assuming that fiscal policy comes to the rescue.) 3. Fisherian intuition. A famous economist, who thinks largely in the ISLM tradition, once asked me to explain in simple terms just how higher interest rates might raise inflation. Strip away all price stickiness to make it simple; still, the Fed raises interest rates and... now what? Sure, point to the equation \( i_t = r + E_t\pi_{t+1} \), but what's the story? How would you explain this to an undergraduate or MBA class? I fumbled a bit, and it took me a good week or so to come up with the answer. From p. 15 of the paper: First, consider the full consumer first-order condition \[x_t = E_t x_{t+1} - \sigma(i_t -E_t \pi_{t+1})\] with no pricing frictions. Raise the nominal interest rate \(i_t\). Before prices change, a higher nominal interest rate is a higher real rate, and induces people to demand less today \(x_t\) and more next period \(x_{t+1}\). That change in demand pushes down the price level today \(p_t\) and hence current inflation \(\pi_t = p_t - p_{t-1}\), and it pushes up the expected price level next period \(p_{t+1}\) and thus expected future inflation \(\pi_{t+1}=p_{t+1}-p_t\). So, standard intuition is correct, and refers to a force that can lower current inflation. Fisherian intuition is correct too, and refers to a natural force that can raise expected future inflation. But which is it, lower \(p_t\) or higher \(p_{t+1}\)? This consumer first-order condition, capturing an intertemporal substitution effect, cannot tell us. 
Unexpected inflation and the overall price level are determined by a wealth effect. If we pair the higher interest rate with no change in surpluses, and thus no wealth effect, then the initial price level \(p_t\) does not change [there is no devaluation of outstanding debt] and the entire effect of higher interest rates is a rise in \(p_{t+1}\). A concurrent rise in expected surpluses leads to a lower price level \(p_t\) and less current inflation \(\pi_t\). Thus, in this context standard intuition also implicitly assumes that fiscal policy acts in concert with monetary policy. In both these stories, notice how much intuition depends on describing how equilibrium forms. It's not rigorous. Walrasian equilibrium is just that, and does not come with a price adjustment process. It's a fixed point, the prices that clear markets, period. But believing and understanding how a model works needs some sort of equilibrium-formation story. 4. Adaptive vs. rational expectations. The distinction between rational, or at least forward-looking, and adaptive, or backward-looking, expectations is central to how the economy behaves. That's a central point of the paper. It would seem easy to test, but I realize it's not. Writing in May 2022, I thought about adaptive (backward-looking) and rational (forward-looking) expectations, and noted among other points that under adaptive expectations we need nominal interest rates above current inflation -- i.e. much higher than they were -- to produce positive real interest rates, while that isn't necessarily true with forward-looking expectations. You might be tempted to test for rational expectations, or look at surveys to pronounce them "rational" vs. "behavioral," a constant temptation. I realize now it's not so easy (p. 44): Expectations may seem adaptive. Expectations must always be, in equilibrium, functions of variables that people observe, and likely weighted to past inflation. 
The point of "rational expectations" is that those forecasting rules are likely to change as soon as a policy maker changes policy rules, as Lucas famously pointed out in his "Critique." Adaptive expectations may even be model-consistent [expectations of the model equal expectations in the model] until you change the model. That observation is important in the current policy debate. The proposition that interest rates must be higher than current inflation in order to lower inflation assumes that expected inflation equals current inflation -- the simple one-period lagged adaptive expectations that I have specified here. Through 2021-2022, market and survey expectations were much lower than current (year-on-year) inflation. Perhaps that means that markets and surveys have rational expectations: Output is temporarily higher than the somewhat reduced post-pandemic potential, so inflation is higher than expected future inflation (\(\pi_t = E_t \pi_{t+1} + \kappa x_t\)). But that observation could also mean that inflation expectations are a long, slow-moving average of lagged inflation, just as Friedman speculated in 1968 (\(\pi^e_t = \sum_{j=1}^\infty \alpha_j \pi_{t-j}\)). In either case, expected inflation is much lower than current inflation, and interest rates only need to be higher than that low expectation to reduce inflation. Tests are hard, and you can't just look at in-sample expectations to proclaim them rational or not. Rational expectations change when policy deviates from a rule, or when the policy rule changes. That's their key feature. We should perhaps talk about rational vs. exogenous expectations. 5. A few final Phillips curve potshots. It is still a bit weird that so much commentary is so focused on the labor market to judge pressure on inflation. This inflation did not come from the labor market! 
Some of this labor market focus makes sense in the new-Keynesian interpretation of the Phillips curve: Firms set prices based on the expected future prices of their competitors and on marginal costs, which are largely labor costs. That echoes the 1960s "cost push" view of inflation (as opposed to its nemesis, "demand pull" inflation). But it raises the question: well, why are labor costs going up? The link from interest rates to wages is about as direct as the link from interest rates to prices. This inflation did not come from labor costs; maybe we should fix the actual problem? Put another way, the Phillips curve is not a model. It is part of a model, and lots of equations have inflation in them. Maybe our focus should be elsewhere. Back to Chris Waller, whose speech seems to me to capture well sophisticated thinking at the Fed. Waller points out how unreliable the Phillips curve is: "What do economic data tell us about this relationship? We all know that if you simply plot inflation against the unemployment rate over the past 50 years, you get a blob. There does not appear to be any statistically significant correlation between the two series." In more recent years, as unemployment went up and down but inflation didn't go far, the Phillips curve seemed "flat": "the Phillips curve was very flat for the 20-plus years before the pandemic." You can see this in the decline of unemployment through 2020, as marked, with no change in inflation. Then, unemployment surged in 2020, again with no deflation. 2009 was the last time there was any slope at all to the Phillips curve. But is it "flat" -- a stable, exploitable, flat relationship -- or is it just a stretched-out "blob," two series with no stable relationship, one of which just got stable? In any case, as unemployment went back down to 3.5 percent in 2022, inflation surged. You can forgive the Fed a bit: We had 3.5% unemployment with no inflation in 2020; why should we worry about 3.5% unemployment in 2022? 
I think the answer is: because inflation is driven by a whole lot more than unemployment -- stop focusing on labor markets! A flat curve, if it is a curve, is depressing news: "Based on the flatness of the Phillips curve in recent decades, some commentators argued that unemployment would have to rise dramatically to bring inflation back down to 2 percent." At best, we retrace the curve back to 2021 unemployment. But (I'll keep harping on this), note the focus on the error-free Phillips curve as if it were the entire economic model. Waller views the new Phillips curve as a "curve" that has become steeper, and cites confirming evidence that prices are changing more often and thus becoming more flexible: "... considering the data for 2021... the Phillips curve suddenly looked relatively steep... since January 2022, the Phillips curve is essentially vertical: The unemployment rate has hovered around 3.6 percent, and inflation has varied from 7 percent (in June) to 5.3 percent (in December)." Waller concludes: "A steep Phillips curve means inflation can be brought down quickly with relatively little pain in terms of higher unemployment. Recent data are consistent with this story." Isn't that nice -- from horizontal to vertical all on its own, and in the latest data points inflation going straight down. Still, perhaps the right answer is that this is still a cloud of coincidence and not the central, causal, structural relationship with which to think about how interest rates affect inflation. If only I had a better model of inflation dynamics...
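To put rough numbers on the point in section 4 -- how high rates must go depends entirely on the expectations assumption -- here is a toy calculation of my own; the inflation path and the averaging rule are made up for illustration:

```python
# Real interest rate implied by a 5% policy rate under two expectation
# rules, given a stylized 2021-22-style inflation history.
# All numbers are illustrative assumptions, not data.

inflation = [2, 2, 2, 2, 2, 5, 8, 8]   # annual inflation history, percent
i = 5.0                                 # nominal policy rate, percent

# Rule 1: one-period adaptive expectations: pi_e = last year's inflation.
pi_e_adaptive = inflation[-1]

# Rule 2: a long slow-moving average of past inflation (Friedman-style),
# here simply an equal-weighted average of the whole history.
pi_e_average = sum(inflation) / len(inflation)

print("real rate, adaptive expectations:    ", i - pi_e_adaptive)  # -3.0
print("real rate, slow-average expectations:", i - pi_e_average)   # 1.125
```

Under one-period adaptive expectations the 5% rate is still sharply negative in real terms, so rates "must" rise above 8%; under a long slow-moving average, the same 5% rate is already above expected inflation.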
I've been thinking a bit lately about theories of the business cycle (a lot of time for reflection in these days of COVID-19); at least, about the way some of these theories have evolved over my lifetime, from the perspective of my own training in the field. From my (admittedly narrow) perspective as a researcher and advisor at a central bank, the journey beginning c. 1960 seems like it's taken the following steps: (1) the Phillips Curve and some Natural Rate Hypothesis; (2) Real Business Cycle (RBC) theory; (3) New Keynesian theory. It seems like we might be ready to take the next step. I'll offer some thoughts on this at the end, for whatever they're worth. There's no easy way to summarize the state of macroeconomic thinking, of course. But it seems clear that, at any given time, some voices and ways of thinking are more dominant than others. By the time the 1960s rolled around, there seemed to be a consensus that monetary and fiscal policy should be used to stabilize the business cycle. The main issue, in this regard, revolved around which set of instruments was better suited for the job. (See, for example, this classic debate between Milton Friedman and Walter Heller.) Central to macroeconomic thinking at the time was a concept called the Phillips Curve (PC). There is a subtle, but important, distinction to make here between the PC as a statistical correlation and the PC as a theory of that statistical relationship. In 1958, Phillips noticed an interesting pattern in the data: nominal wage growth seemed negatively correlated with the unemployment rate in the U.K. over the period 1913-48 (see diagram to the right). How to interpret this correlation? One theory is that when the unemployment rate is high, workers are easy to find and their bargaining position is weak, leading to small nominal wage gains. Conversely, when unemployment is low, available workers are scarce and their bargaining position is strong, leading to large nominal wage gains. 
Then, in 1960, Paul Samuelson and Robert Solow wrote their classic piece "Analytical Aspects of Anti-Inflation Policy." Then, as now, the authors lamented the lack of consensus on a theory of price inflation. Various cost-push and demand-pull hypotheses were reviewed, problems of identification were noted, and calls for micro-data to help settle the issue were made. They also mentioned Phillips' article and noted how the same diagram for the U.S. looked like a shotgun blast (little correlation, except for some sub-samples). Then they translated the Phillips curve using price inflation instead of wage inflation. No data was sacrificed in this exercise; their "theory" was summarized with the diagram to the left. I put "theory" in quotes in the passage above because the theory (explanation) was never clear to me. In particular, while I could see how an increase in the rate of unemployment might depress the level of wages, I could not grasp how it could influence the rate of growth of wages for any prolonged period of time. This logical inconsistency was solved by the Phelps-Friedman natural rate hypothesis; see Farmer (2013) for a summary and critique. The TL;DR version of this hypothesis is that the PC is negatively sloped only in the short run, but vertical in the long run. So, while monetary policy (increasing the rate of inflation) could lower the unemployment rate below its natural rate, it could only do so temporarily. Eventually, the unemployment rate would move back to its natural rate at the higher rate of inflation. This hypothesis seemed to provide a compelling interpretation of the stagflation (high inflation and high unemployment) experienced in the 1970s. It also seemed to explain the success of Volcker's disinflation policy in the 1980s. 
Nevertheless, uneasiness about the state of the theory remained, and a new (well, nothing is ever completely new) way of theorizing was on the horizon. By the time I got to grad school in the late 1980s, "real business cycle theory" was in vogue; see Charles Plosser's summary here and Bob King's lecture notes here. There was a lot going on with this program. A central thesis of RBC theory is that the phenomena of economic growth and business cycles are inextricably linked. This is, of course, an old idea in economics, going back at least to Dennis Robertson (see this review by Charles Goodhart) and explored extensively by a number of Austrian economists, like Joseph Schumpeter. The idea that "the business cycle" is to some extent a byproduct of the process of economic development is an attractive hypothesis. Economic growth is driven by technological innovation and diffusion, and perhaps regulatory policies. There is no a priori reason to expect these "real" processes to evolve in a "smooth" manner. In fact, these changes appear to arrive randomly and with little or no mean-reverting properties. It would truly be a marvel if the business cycle did not exist. The notion of "no mean-reverting properties" is important. It basically means that technology/policy shocks are largely permanent (or at least, highly persistent). If macroeconomic variables like GDP inherit this property, then a "cycle"--the tendency for a variable to return to some long-run trend--does not even exist (and if you think you see it, it's only a figment of your imagination). For this reason, early proponents of RBC theory preferred the label "fluctuations" over "cycle." This view was supported by the fact that econometricians had a hard time rejecting the hypothesis that real GDP followed a random walk (with drift). 
For example, here is Canadian GDP plotted against two realizations of a random walk with drift [figure omitted]. This perspective fermented at a time when the cost of computing power was falling dramatically. This permitted economists to study models that were too complicated to analyze with conventional "pencil and paper" methods. Inspiration was provided by Lucas (1980), who wrote: "Our task, as I see it…is to write a FORTRAN program that will accept specific economic policy rules as 'input' and will generate as 'output' statistics describing the operating characteristics of time series we care about, which are predicted to result from these policies." And so that's what people did. But what sort of statistics were model economies supposed to reproduce? Once again, it was Lucas (1976) who provided the needed guidance. The empirical business cycle regularities emphasized by Lucas were "co-movements" between different aggregate time series. Employment, for example, is "pro-cyclical" (tends to move in the same direction as GDP) around "trend." These types of regularities can be captured by statistics like correlations. But these correlations (and standard deviations) only make sense for stationary time series, and the data is mostly non-stationary. So, what to do? Transforming the data through first-differencing (i.e., looking at growth rates instead of levels) is one way to render (much of) the data stationary. Another approach was made popular by Prescott (1986), who advocated the method most people employ: draw a smooth line through the data, label it "trend," and then examine the behavior of "deviations from trend" [figure omitted]. It's important to note that Prescott viewed the trend line as a "statistical trend," not an "economic trend." To him, there was no deterministic trend, since the data was being generated by a random walk (so the actual trend is stochastic). Nevertheless, drawing a smooth trend line was a useful way to render the data stationary. 
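Prescott's recipe can be sketched in a few lines. The code below is my own illustration of the standard Hodrick-Prescott filter (fit the trend by penalizing its squared second differences; lambda = 1600 is the conventional value for quarterly data), applied to a simulated random walk with drift:

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Split a series into a smooth 'trend' and 'deviations from trend'
    by minimizing sum (y - tau)^2 + lam * sum (second diffs of tau)^2."""
    n = len(y)
    # (n-2) x n second-difference matrix; each row looks like [1, -2, 1]
    D = np.diff(np.eye(n), 2, axis=0)
    trend = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
    return trend, y - trend

# Apply to a simulated random walk with drift, as in the discussion above
# (drift and volatility are arbitrary illustrative choices).
rng = np.random.default_rng(0)
log_gdp = np.cumsum(0.005 + 0.01 * rng.standard_normal(200))
trend, cycle = hp_filter(log_gdp)
print("std. dev. of 'deviations from trend':", round(cycle.std(), 4))
```

Note that the procedure happily extracts a smooth "trend" and a "cycle" even though the input is a pure random walk, which is exactly Prescott's point: the trend here is a statistical device, not an economic object.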
The idea was to apply the same de-trending procedure to actual data and simulated data, and then compare statistical properties across model and data. The point of mentioning this is that no one involved in this program was conditioned to interpret the economy as "overheating" or in "depression." Growing economies exhibited fluctuations--sometimes big and persistent fluctuations. The question was how much of these observed fluctuations could be attributed purely to the process of economic development (technological change), without reference to monetary or financial factors. I think it's fair to say that the answer turned out to be "not much, at least, not at business cycle frequencies." The important action seemed to occur at lower frequencies. Lucas (1988) once again provided the lead when he remarked, "Once one starts to think about growth, it is hard to think about anything else." And so the narrow RBC approach turned its attention to low-frequency dynamics; e.g., see my interview with Lee Ohanian here. Of course, many economists never bought into the idea that monetary and financial factors were unimportant for understanding business cycles. Allen and Gale, for example, schooled us on financial fragility; see here. But this branch of the literature never really made much headway in mainstream macro, at least not before 2008. Financial crises were something that happened in history, or in other parts of the world. Instead, macroeconomics looked back to its roots in the 1960s and embedded a version of the PC into an RBC model to produce what is now known as the New Keynesian framework. Short-run money non-neutrality was achieved by assuming that nominal price-setting behavior is subject to frictions, rendering nominal prices "sticky." In this environment, shocks to the economy are not absorbed efficiently, at least not in the absence of an appropriate monetary policy. 
And so, drawing inspiration from John Taylor and Michael Woodford, the framework added an interest rate policy rule now known as the Taylor rule. Today, the basic NK model consists of three core elements:
[1] An IS curve: relates aggregate demand to the real interest rate and shocks.
[2] A Phillips curve: relates the rate of inflation (around trend) to the output gap.
[3] A Taylor rule: describes how interest rate policy reacts to output and inflation gaps.
I have to be honest with you. I never took a liking to the NK model. I'm more of an Old Keynesian, similar to Roger Farmer (we share the same supervisor, so perhaps this is no accident). In any case, the NK framework became (and continues to be) a core thought-organizing principle for central bank economists around the world. It has become a sort of lingua franca in academic macro circles. And if you don't know how to speak its language, you're going to have a hard time communicating with the orthodoxy. Of the three basic elements of the NK model, I think the NK Phillips curve (which embeds the natural rate hypothesis) has resulted in the most mischief, at least from the perspective of advising the conduct of monetary policy. The concept is firmly embedded in the minds of many macroeconomists and policymakers. Consider, for example, Greg Mankiw's recent piece "Yes, There is a Trade-Off Between Inflation and Unemployment": Today, most economists believe there is a trade-off between inflation and unemployment in the sense that actions taken by a central bank push these variables in opposite directions. As a corollary, they also believe there must be a minimum level of unemployment that the economy can sustain without inflation rising too high. But for various reasons, that level fluctuates and is difficult to determine. The Fed's job is to balance the competing risks of rising unemployment and rising inflation. Striking just the right balance is never easy. 
The first step, however, is to recognize that the Phillips curve is always out there lurking. The Phillips curve is always lurking. The message for a central banker is: "sure, inflation and unemployment may be low for now, but if we keep monetary policy where it is and permit the unemployment rate to fall further, we will risk higher inflation in the future." I'm not sure if economists who write in this manner are aware that they're making it sound like workers are somehow responsible for inflation. Central banker to workers: "I'm sorry, but we need to keep some of you unemployed...it's the inflation, you see." There is evidence that this line of thinking influenced the FOMC in 2015 in its decision to "lift off" and return the policy rate to some historically normal level; see my post here explaining the pros and cons in the lift-off debate. By the start of 2014, there was considerable pressure on the Fed to begin "normalizing" its policy rate. By mid-2014, the expectation of "lift off" likely contributed to significant USD appreciation and the economic weakness that followed. If I recall correctly, Vice Chair Stan Fischer started off the year by announcing that four rate hikes for 2015 were in order (as it turned out, the Fed only raised rates once--in December). To some observers, this all seemed very strange. After all, the unemployment rate was still above its estimated "natural" rate (5%) and inflation continued to undershoot its 2% target. What was going on? What was going on was the Phillips curve. Here is Chair Yellen at the March 17-18, 2015 FOMC meeting (transcript available here): If we adopt alternative B, one criterion for an initial tightening is that we need to be reasonably confident that inflation will move back to 2 percent over the medium term. For the remainder of this year, my guess is that it will be hard to point to data demonstrating that inflation is actually moving up toward our objective. 
Measured on a 12-month basis, both core and headline inflation will very likely be running below 1½ percent all year. That means that if we decide to start tightening later this year, a development that I think is likely, we will have to justify our inflation forecasts using indirect evidence, historical experience, and economic theory. The argument from history and economic theory seems straightforward. Experience here and abroad teaches us that, as resource utilization tightens, eventually inflation will begin to rise. To me, this seems like a simple matter of demand and supply. So the more labor and product markets tighten, the more confident I'll become in the inflation outlook. Because of the lags in monetary policy, the current high degree of monetary accommodation, and the speed at which the unemployment rate is coming down, it would, to my mind, be imprudent to wait until inflation is much closer to 2 percent to begin to normalize policy. I consider this a strong argument for an initial tightening with inflation still at low levels, and it's one that I plan to make. But I also recognize and am concerned that, at least in recent years, the empirical relationship between slack and inflation has been quite weak.

Now, I don't want to make too much of this particular episode. Personally, I don't think it had a major impact on the recovery dynamic. But I do think it had an impact; in particular, the pace of improvement in labor market conditions temporarily slowed. It was an unforced error (as I think other members of the Committee sensed as well). I think the lift-off episode has contributed to a general re-thinking of the Phillips curve and the natural rate hypothesis. The notion of an economy operating at "excess capacity" has always seemed a bit strange to me, let alone the idea of excess capacity as a cause of inflation (as opposed to a force operating on the price level). Perhaps it is time to re-visit Milton Friedman's "plucking model."
Instead of drawing a smooth line through the center of a time series, Friedman drew a line that defined a ceiling (a capacity constraint). Shocks to the economy manifest themselves as "downward plucks" (as if plucking on an elastic band). The plucking model is consistent with the observed cyclical asymmetry in unemployment rate fluctuations. And labor market search models are a natural way to model that asymmetry. In case you're interested, I develop a super-simple (and dare I say, elegant) search model here to demonstrate (and test) the idea (Evidence and Theory on the Cyclical Asymmetry in Unemployment Rate Fluctuations, CJE 1997). See also my blog post here as well as some recent work by Ferraro (RED, 2018) and Dupraz, Nakamura and Steinsson (2019). I like where this is going! One attractive feature of search models, in my view, is that they model relationship formation. Relationships provide a very different mechanism for coordinating economic activity relative to the canonical economic view of anonymous spot exchange in centralized markets. In a relationship, spot prices do not matter as much as the dynamic path of these prices (and other important aspects) over the course of a relationship (see my critique of the sticky price hypothesis here). The observation that retailers, in the early days of C-19, voluntarily rationed goods instead of raising prices makes little sense under anonymous spot exchange, but makes perfect sense for a merchant concerned with maintaining a good relationship with his or her customers. And merchant-supplier relationships can handle shortages without price signals (we're out of toilet paper--please send more!). In financial markets too, the amount of time that is spent forming and maintaining credit relationships is hugely underappreciated in economic modeling. Search theory turns out to be useful for interpreting the way money and bond markets work too.
These markets are not like the centralized markets we see modeled in textbooks--they operate as decentralized over-the-counter (OTC) markets, where relationships are key. One reason why economies sometimes take so long to recover after a shock is that the shock has destroyed an existing set of relationships, and it takes time to rebuild relationship capital.

Notions of "overheating" in this context probably do not apply to labor market variables, although there is still the possibility of an overaccumulation of certain types of physical capital in a boom (what the Austrians label "malinvestment"). Any "overheating" is likely to manifest itself primarily in asset prices. And sudden crashes in asset prices (whether driven by fundamentals or not) can have significant consequences for real economic activity if asset valuations are used to support lines of credit. Finally, we need a good theory of inflation. The NKPC theory of inflation is not, in my view, a completely satisfactory theory in this regard. To begin, it simply assumes that the central bank can target a long-run rate of inflation (implicitly, with the support of a Ricardian fiscal policy, though this is rarely, if ever, mentioned). At best, it is a theory of how inflation can temporarily depart from its long-run target and how interest rate policy can be used to influence transition dynamics. But the really interesting questions, in my view, have to do with monetary and fiscal policy coordination and what this entails for the ability of an "independent" central bank even to determine the long-run rate of inflation (Sargent and Wallace, 1981). I know what I've described only scratches the surface of this amazingly deep and broad field. Most of you have no doubt lived through your own process of discovery and contemplation in the world of macroeconomic theorizing. Feel free to share your thoughts below.
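Postscript: the plucking asymmetry mentioned above is easy to generate in a toy simulation (every number here is an assumption for illustration, not an estimate). Output gets plucked down off a capacity ceiling by occasional shocks and recovers gradually, so the implied gap series spikes up and decays down -- positively skewed, like the unemployment rate:

```python
import random
import statistics

# Toy "plucking" process (all parameters assumed): a gap below the capacity
# ceiling that jumps up when a shock "plucks" the economy downward, then
# decays geometrically back toward the ceiling.
random.seed(1)
T = 10_000
gap, gaps = 0.0, []
for _ in range(T):
    if random.random() < 0.05:          # occasional downward pluck
        gap += random.uniform(1.0, 3.0)
    gap *= 0.9                          # gradual recovery toward the ceiling
    gaps.append(gap)

# The gap is bounded below by zero and right-skewed: sharp rises, slow falls.
mean = statistics.fmean(gaps)
sd = statistics.pstdev(gaps)
skewness = sum((g - mean) ** 3 for g in gaps) / (T * sd ** 3)
```

A symmetric-shocks model would produce a roughly symmetric gap; here the skewness is strictly positive, which is the cyclical asymmetry that the search-model papers cited above formalize.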
I've been reading a lot of macro lately. In part, I'm just catching up from a few years of book writing. In part, I want to understand inflation dynamics, the quest set forth in "expectations and the neutrality of interest rates," and an obvious next step in the fiscal theory program. Perhaps blog readers will find interesting some summaries of recent papers, when there is a great idea that can be summarized without a huge amount of math. So, I start a series on cool papers I'm reading. Today: "Tail risk in production networks" by Ian Dew-Becker, a beautiful paper. A "production network" approach recognizes that each firm buys from others, and models this interconnection. It's a hot topic for lots of reasons, below. I'm interested because prices cascading through production networks might induce a better model of inflation dynamics. (This post uses MathJax equations. If you're seeing garbage like [\alpha = \beta] then come back to the source here.) To Ian's paper: Each firm uses other firms' outputs as inputs. Now, hit the economy with a vector of productivity shocks. Some firms get more productive, some get less productive. The more productive ones will expand and lower prices, but that changes everyone's input prices too. Where does it all settle down? This is the fun question of network economics. Ian's central idea: the problem simplifies a lot for large shocks. Usually, when problems are complicated, we look at first- or second-order approximations, i.e. for small shocks, obtaining linear or quadratic ("simple") approximations. On the x-axis, take a vector of productivity shocks for each firm, and scale it up or down; the x-axis represents this overall scale. The y-axis is GDP. The right-hand graph is Ian's point: for large shocks, log GDP becomes linear in log productivity -- really simple. Why? Because for large enough shocks, all the networky stuff disappears. Each firm's output moves up or down depending only on one critical input.
To see this, we have to dig deeper into complements vs. substitutes. Suppose the price of an input goes up 10%. The firm tries to use less of this input. If the best it can do is to cut its use by 5%, then the firm ends up paying about 5% more overall for this input: the "expenditure share" of this input rises. That is the case of "complements." But if the firm can cut its use of the input by 15%, then it pays about 5% less overall for the input, even though the price went up. That is the case of "substitutes." This is the key concept for the whole question: when an input's price goes up, does its share of overall expenditure go up (complements) or down (substitutes)? Suppose inputs are complements. Again, this vector of technology shocks hits the economy. As the size of the shock gets bigger, the expenditure of each firm, and thus the price it charges for its output, becomes more and more dominated by the one input whose price grows the most. In that sense, all the networkiness simplifies enormously: each firm is only "connected" to one other firm. Turn the shock around. Each firm that was getting a productivity boost now gets a productivity reduction. Each price that was going up now goes down. Again, in the large-shock limit, our firm's price becomes dominated by the price of its most expensive input. But it's a different input. So, naturally, the economy's response to this technology shock is linear, but with a different slope in one direction vs. the other. Suppose instead that inputs are substitutes. Now, as prices change, the firm expands its use of the cheapest input more and more, and its costs and price become dominated by that input instead. Again, the network collapses to one link. Ian: "negative productivity shocks propagate downstream through parts of the production process that are complementary (\(\sigma_i < 1\)), while positive productivity shocks propagate through parts that are substitutable (\(\sigma_i > 1\)).
...every sector's behavior ends up driven by a single one of its inputs....there is a tail network, which depends on \(\theta\) and in which each sector has just a single upstream link."

Equations: Each firm's production function is (somewhat simplifying Ian's (1)) \[Y_i = Z_i L_i^{1-\alpha} \left( \sum_j A_{ij}^{1/\sigma} X_{ij}^{(\sigma-1)/\sigma} \right)^{\alpha \sigma/(\sigma-1)}.\] Here \(Y_i\) is output, \(Z_i\) is productivity, \(L_i\) is labor input, \(X_{ij}\) is how much of good \(j\) firm \(i\) uses as an input, and \(A_{ij}\) captures how important each input is in production. Inputs are substitutes if \(\sigma>1\) and complements if \(\sigma<1\). Firms are competitive, so price equals marginal cost, and each firm's price is \[ p_i = -z_i + \frac{\alpha}{1-\sigma}\log\left(\sum_j A_{ij}e^{(1-\sigma)p_j}\right).\; \; \; (1)\] Small letters are logs of big letters. Each price depends on the prices of all the inputs, plus the firm's own productivity. Log GDP, plotted in the figure above, is \[gdp = -\beta'p\] where \(p\) is the vector of prices and \(\beta\) is a vector of how important each good is to the consumer. In the case \(\sigma=1\), (1) reduces to a linear formula. We can easily solve for prices and then gdp as a function of the technology shocks: \[p_i = - z_i + \alpha \sum_j A_{ij} p_j\] and hence \[p=-(I-\alpha A)^{-1}z,\] where the letters represent vectors and matrices across \(i\) and \(j\). This expression shows some of the point of networks: the pattern of prices and output reflects the whole network of production, not just individual firm productivity. But with \(\sigma \neq 1\), (1) is nonlinear, with no known closed-form solution. Hence approximations. You can see Ian's central point directly from (1). Take the \(\sigma<1\) case, complements. Parameterize the size of the technology shocks by a fixed vector \(\theta = [\theta_1, \ \theta_2, \ ...\theta_i,...]\) times a scalar \(t>0\), so that \(z_i=\theta_i \times t\).
Then let \(t\) grow, keeping the pattern of shocks \(\theta\) the same. Now, as the \(\{p_i\}\) get larger in absolute value, the term with the greatest \(p_j\) has the greatest value of \( e^{(1-\sigma)p_j} \). So, for large technology shocks \(z\), only that largest term matters, the log and e cancel, and \[p_i \approx -z_i + \alpha \max_{j} p_j.\] This is linear, so we can also write prices as a pattern \(\phi\) times the scale \(t\): in the large-\(t\) limit, \(p_i = \phi_i t\), and \[\phi_i = -\theta_i + \alpha \max_{j} \phi_j.\;\;\; (2)\] With substitutes, \(\sigma>1\), the firm's costs, and so its price, will be driven by the smallest (most negative) upstream price, in the same way: \[\phi_i \approx -\theta_i + \alpha \min_{j} \phi_j.\] To express gdp scaling with \(t\), write \(gdp=\lambda t\), or, when you want to emphasize the dependence on the vector of technology shocks, \(\lambda(\theta)\). Then we find gdp by \(\lambda =-\beta'\phi\). In this big-price limit, the \(A_{ij}\) contribute a constant term, which also washes out. Thus the actual "network" coefficients stop mattering at all so long as they are not zero -- the max and min are taken over all non-zero inputs. Ian: ...the limits for prices, do not depend on the exact values of any \(\sigma_i\) or \(A_{i,j}.\) All that matters is whether the elasticities are above or below 1 and whether the production weights are greater than zero.
In the example in Figure 2, changing the exact values of the production parameters (away from \(\sigma_i = 1\) or \(A_{i,j} = 0\)) changes...the levels of the asymptotes, and it can change the curvature of GDP with respect to productivity, but the slopes of the asymptotes are unaffected....when thinking about the supply-chain risks associated with large shocks, what is important is not how large a given supplier is on average, but rather how many sectors it supplies...

For a full solution, look at the (more interesting) case of complements, and suppose every firm uses a little bit of every other firm's output, so that all the \(A_{ij}>0\). The largest input price in (2) is then the same for each firm \(i\), and you can quickly see that the biggest price will belong to the firm with the smallest technology shock. Now we can solve the model for prices and GDP as a function of technology shocks: \[\phi_i \approx -\theta_i - \frac{\alpha}{1-\alpha} \theta_{\min},\] \[\lambda \approx \beta'\theta + \frac{\alpha}{1-\alpha}\theta_{\min}.\] We have solved the large-shock approximation for prices and GDP as a function of technology shocks. (This is Ian's example 1.) The graph is concave when inputs are complements, and convex when they are substitutes. Let's do complements. We get the graph to the left of the kink by changing the sign of \(\theta\). If the identity of \(\theta_{\min}\) did not change, \(\lambda(-\theta)=-\lambda(\theta)\) and the graph would be linear; it would go down on the left of the kink by the same amount it goes up on the right of the kink. But now a different \(j\) has the largest price and the worst technology shock. Since this must be a worse technology shock than the one driving the previous case, GDP is lower and the graph is concave.
\[-\lambda(-\theta) = \beta'\theta + \frac{\alpha}{1-\alpha}\theta_{\max} \ge\beta'\theta + \frac{\alpha}{1-\alpha}\theta_{\min} = \lambda(\theta).\] Therefore \(\lambda(-\theta)\le-\lambda(\theta)\): the left side falls by more than the right side rises. Does all of this matter? Well, surely more for questions where there might be a big shock, such as the big shocks we saw in a pandemic, or big shocks we might see in a war. One of the big questions that network theory asks is: how much does GDP change if there is a technology shock in a particular industry? The \(\sigma=1\) case, in which expenditure shares are constant, gives a standard and fairly reassuring result: the effect on GDP of a shock in industry \(i\) is given by the ratio of \(i\)'s output to total GDP ("Hulten's theorem"). Industries that are small relative to GDP don't affect GDP that much if they get into trouble. You can intuit that constant expenditure shares are important for this result. If an industry has a negative technology shock, raises its prices, and others can't reduce use of its inputs, then its share of expenditure will rise, and it will all of a sudden be important to GDP. Continuing our example, if one firm has a negative technology shock, then it is the minimum technology, and \[d\,gdp/dz_i = \beta_i + \frac{\alpha}{1-\alpha}.\] For small firms (industries) the latter term is likely to be the most important. All the \(A\) and \(\sigma\) have disappeared, and basically the whole economy is driven by this one unlucky industry and labor. Ian: ...what determines tail risk is not whether there is granularity on average, but whether there can ever be granularity -- whether a single sector can become pivotal if shocks are large enough.

For example, take electricity and restaurants. In normal times, those sectors are of similar size, which in a linear approximation would imply that they have similar effects on GDP.
But one lesson of Covid was that shutting down restaurants is not catastrophic for GDP [consumer spending on food services and accommodations fell by 40 percent, or $403 billion, between 2019Q4 and 2020Q2; spending at movie theaters fell by 99 percent], whereas one might expect that a significant reduction in available electricity would have strongly negative effects -- and that those effects would be convex in the size of the decline in available power. Electricity is systemically important not because it is important in good times, but because it would be important in bad times. Ben Moll turned out to be right and Germany was able to substitute away from Russian gas a lot more than people had thought, but even that proves the rule: if it is hard to substitute away from even a small input, then large shocks to that input imply larger expenditure shares and larger impacts on the economy than its small output in normal times would suggest. There is an enormous amount more in the paper and voluminous appendices, but this is enough for a blog review.

****

Now, a few limitations, or really thoughts on where we go next. (No more in this paper, please, Ian!) Ian does a nice illustrative computation of the sensitivity to large shocks. Ian assumes \(\sigma>1\), so the main ingredients are how many downstream firms use your products and, to a lesser extent, their labor shares. No surprise: trucks and energy have big tail impacts. But so do lawyers and insurance. Can we really not do without lawyers? Here I hope the next step looks hard at substitutes vs. complements.

That raises a bunch of issues. Substitutes vs. complements surely depends on the time horizon and the size of shocks. It might be easy to use a little less water or electricity initially, but then really hard to reduce by more than, say, 80%. It's usually easier to substitute in the long run than in the short run. The analysis in this literature is "static," meaning it describes the economy when everything has settled down.
The responses -- you charge more, I use less, I charge more, you use less of my output, etc. -- all happen instantly; or, equivalently, the model studies a long run where this has all settled down. But then we talk about responses to shocks, as in the pandemic. Surely there is a dynamic response here, not just including capital accumulation (which Ian studies). Indeed, my hope was to see prices spreading out through a production network over time, but this structure would have all price adjustments happen instantly. Mixing production networks with sticky prices is an obvious idea, which some of the papers below are working on. In the theory and data handling, you see a big discontinuity. If a firm uses any inputs at all from another firm, if \(A_{ij}>0\), that input can take over and drive everything. If it uses no inputs at all, then there is no network link and the upstream firm can't have any effect. There is a big discontinuity at \(A_{ij}=0.\) We would prefer a theory that does not jump from zero to everything when the firm buys one stick of chewing gum. Ian had to drop small but nonzero elements of the input-output matrix to produce sensible results. Perhaps we should regard very small inputs as always substitutes? How important is the network stuff anyway? We tend to use industry categorizations because we have an industry input-output table. But how much of US industry input-output is simply vertical: loggers sell trees to mills, who sell wood to lumberyards, who sell lumber to Home Depot, who sells it to contractors who put up your house? Energy and tools feed each stage, but making those doesn't use a whole lot of wood. I haven't looked at an input-output matrix recently, but just how "vertical" is it?

****

The literature on networks in macro is vast. One approach is to pick a recent paper like Ian's and work back through the references. I started to summarize, but gave up in the deluge. Have fun.
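Before moving on: the large-shock asymptote above is easy to check numerically. A sketch (the three-firm network, \(\alpha\), \(\sigma\), \(\beta\), and the shock pattern \(\theta\) are all made-up for illustration): iterate equation (1) to a fixed point -- the map is a contraction with modulus \(\alpha\) -- and compare the large-\(t\) slope of log GDP with \(\lambda = \beta'\theta + \frac{\alpha}{1-\alpha}\theta_{\min}\):

```python
import numpy as np

# Numerical check of the large-shock limit (all parameter values assumed).
# Complements (sigma < 1) and all A_ij > 0, so the slope of gdp = -beta'p in t
# should approach lambda = beta'theta + alpha/(1-alpha) * min(theta).
alpha, sigma = 0.5, 0.5
A = np.array([[0.2, 0.4, 0.4],
              [0.3, 0.2, 0.5],
              [0.5, 0.3, 0.2]])        # all entries > 0; rows sum to 1
beta = np.array([0.4, 0.3, 0.3])       # consumer weights; sum to 1
theta = np.array([0.3, -0.4, 0.1])     # shock pattern; z = theta * t

def gdp(t, iters=300):
    """Solve eq. (1) for prices by fixed-point iteration, return log GDP."""
    z = theta * t
    p = -z.copy()
    for _ in range(iters):             # contraction: converges geometrically
        p = -z + alpha / (1 - sigma) * np.log(A @ np.exp((1 - sigma) * p))
    return -beta @ p

# Slope of log GDP between two large values of t vs. the predicted asymptote.
slope = (gdp(400.0) - gdp(200.0)) / 200.0
lam = beta @ theta + alpha / (1 - alpha) * theta.min()
```

The constant terms contributed by the \(A_{ij}\) cancel in the slope, which is why only the sign of \(\sigma - 1\) and the zero pattern of \(A\) matter in the limit.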
One way to think of a branch of economics is not just "what tools does it use?" but "what questions is it asking?" Long and Plosser's "Real Business Cycles," a classic, went after the idea that the central defining feature of business cycles (since Burns and Mitchell) is comovement: states and industries all go up and down together to a remarkable degree. That pointed to "aggregate demand" as a key driving force. One would think that "technology shocks," whatever they are, would be local or industry-specific. Long and Plosser showed that an input-output structure leads idiosyncratic shocks to produce business-cycle comovement in output. Brilliant. Macro went another way, emphasizing time series -- the idea that recessions are defined, say, by two quarters of aggregate GDP decline, or by the greater decline of investment and durable goods than consumption -- and the aggregate models of Kydland and Prescott, and the stochastic growth model as pioneered by King, Plosser and Rebelo, driven by a single economy-wide technology shock. Part of this shift is simply technical: Long and Plosser used analytical tools, and were thereby stuck in a model without capital, and they did not inaugurate matching to data. Kydland and Prescott brought numerical model solution and calibration to macro, which is what macro has done ever since. Maybe it's time to add capital, solve numerically, and calibrate Long and Plosser (with up-to-date frictions and consumer heterogeneity too, maybe). Xavier Gabaix (2011) had a different Big Question in mind: Why are business cycles so large? Individual firms and industries have large shocks, but \(\sigma/\sqrt{N}\) ought to dampen those at the aggregate level. Again, this was a classic argument for aggregate "demand" as opposed to "supply." Gabaix notices that the US has a fat-tailed firm-size distribution with a few large firms, and those firms have large shocks.
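Gabaix's dampening arithmetic is easy to see in a small simulation (the size distributions and the volatility number are made-up illustrations): with equal-sized firms, independent shocks wash out at rate \(1/\sqrt{N}\); with a Zipf-like size distribution, they don't.

```python
import numpy as np

# Toy version of Gabaix's granularity point (all numbers are assumptions).
# Aggregate volatility from independent firm shocks: with N equal-sized firms
# it shrinks like 1/sqrt(N); with a fat-tailed (Zipf-like) size distribution
# the largest firms never wash out of the aggregate.
rng = np.random.default_rng(0)
N, draws, firm_vol = 5_000, 800, 0.10

equal = np.full(N, 1.0 / N)              # equal sales shares
zipf = 1.0 / np.arange(1, N + 1)         # Zipf-like sales shares
zipf /= zipf.sum()

shocks = rng.normal(0.0, firm_vol, size=(draws, N))  # idiosyncratic shocks
vol_equal = (shocks @ equal).std()       # close to firm_vol / sqrt(N)
vol_zipf = (shocks @ zipf).std()         # an order of magnitude larger here
```

With the Zipf weights, the few largest firms carry enough weight that their idiosyncratic shocks survive aggregation, which is Gabaix's answer to the \(\sigma/\sqrt{N}\) puzzle.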
He amplifies his argument via the Hulten mechanism, a bit of networkiness, since the impact of a firm on the economy is sales/GDP, not value added/GDP. The enormous literature since then has gone after a variety of questions. Dew-Becker's paper is about the effect of big shocks, and obviously not that useful for small shocks. Remember which question you're after.

My quest for a new Phillips curve in production networks is better represented by Elisa Rubbo's "Networks, Phillips curves and Monetary Policy" and Jennifer La'o and Alireza Tahbaz-Salehi's "Optimal Monetary Policy in Production Networks." If I can boil those down for the blog, you'll hear about it eventually. The "what's the question" question is doubly important for this branch of macro that explicitly models heterogeneous agents and heterogeneous firms. Why are we doing this? One can always represent the aggregates with a social welfare function and an aggregate production function. You might be interested in how aggregates affect individuals, but that doesn't change your model of aggregates. Or, you might be interested in seeing what the aggregate production or utility function looks like -- is it consistent with what we know about individual firms and people? Does the size of the aggregate production function shock make sense? But still, you end up with just a better (hopefully) aggregate production and utility function. Or, you might want models that break the aggregation theorems in a significant way: models for which distributions matter for aggregate dynamics, theoretically and (harder) empirically. But don't forget you need a reason to build disaggregated models. Expression (1) is not easy to get to. I started reading Ian's paper in my usual way: to learn a literature, start with the latest paper and work backward. Alas, this literature has evolved to the point that authors plop down results that "everybody knows," and it will take you a day or so of head-scratching to reproduce them.
I complained to Ian, and he said he had the same problem when he was getting into the literature! Yes, journals now demand such overstuffed papers that it's hard to do, but it would be awfully nice for everyone to start including ground-up algebra for major results in one of the endless internet appendices. I eventually found Jonathan Dingel's notes on Dixit-Stiglitz tricks, which were helpful. Update: Chase Abram's University of Chicago Math Camp notes here are also a fantastic resource. See Appendix B, starting on p. 94, for production network math. The rest of the notes are also really good. The first part goes a little deeper into more abstract material than is really necessary for the second part and applied work, but it is a wonderful and concise review of that material as well.
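In that spirit of ground-up reproduction, the \(\sigma=1\) benchmark from earlier can be computed in a few lines (the three-firm network and all parameter values are made-up for illustration):

```python
import numpy as np

# sigma = 1 benchmark from the text: p = -(I - alpha*A)^{-1} z, gdp = -beta'p.
# The 3-firm network A, alpha, beta, and the shocks z are made-up illustrations.
alpha = 0.5
A = np.array([[0.2, 0.4, 0.4],
              [0.3, 0.2, 0.5],
              [0.5, 0.3, 0.2]])      # expenditure weights; rows sum to 1
beta = np.array([0.4, 0.3, 0.3])     # consumer weights; sum to 1
z = np.array([0.10, -0.20, 0.05])    # log productivity shocks

p = -np.linalg.solve(np.eye(3) - alpha * A, z)   # log prices
gdp = -beta @ p                                  # log GDP

# Hulten-style influence vector: d gdp / dz = beta' (I - alpha*A)^{-1},
# so each firm matters through the whole network, not just through beta.
influence = np.linalg.solve((np.eye(3) - alpha * A).T, beta)
```

Each entry of `influence` exceeds the corresponding entry of `beta`: with all \(A_{ij}>0\), the network amplifies every firm's shock relative to its consumption weight alone.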
With the adoption of the 2030 Agenda in 2015, seventeen goals for sustainable development, the so-called Sustainable Development Goals (SDGs), were established. Since the SDGs apply to all levels of government, they also provide an important orienting framework for municipalities. Sustainable development is thus gaining importance at the municipal level as well, as the engagement visible in many municipalities shows. The SDGs give municipalities a strategic frame of reference and make it easier to set concrete goals and measures; sustainability can thus be put into practice more effectively on the ground. Effective climate protection and sustainability management at the municipal level is essential. Beyond the necessary contribution to national and international sustainable development, municipalities can draw clear benefits from a sustainability strategy: buildings, for example, can be constructed and used more energy-efficiently, and the transport system can be made more efficient and environmentally friendly while reducing CO2 emissions and traffic density in urban areas. What contributes to climate protection can therefore also make municipalities more attractive. Furthermore, municipalities thereby protect themselves against weather extremes and can adapt to climate change. Differences remain, however. While some municipalities have been working on sustainable urban development for several decades and have accumulated considerable experience and insight, other cities started comparatively late. Moreover, municipalities sometimes take very different paths toward their stated sustainability goals, owing, for example, to local conditions or to differing aims and priorities.
The overarching questions are why so much must be done for climate protection and sustainability at the municipal level in particular, and since when these efforts have taken concrete form. The aim of this paper is to examine two large European cities with respect to their sustainability record to date. The focus lies on mobility as well as on selected measures in sustainable urban planning; further aspects are drawn on where needed. A comparison of the two cities then follows, one that must take each city's particular circumstances into account. Even though not all parameters can be considered and a direct comparison may not be productive in every area, insights can still be gained, for example about each city's progress. The municipalities examined are Copenhagen and Munich. The two cities differ in their starting positions, particular features, and geographic conditions, which suggests that divergent findings will emerge. This makes the comparison more interesting and informative than, say, one at the national level. Both are international cities within the European Union, and both are large cities that shape their respective regions. Despite the differing conditions, similar areas are examined in each case to ensure comparability: besides mobility, energy supply and adaptation to extreme weather and climate change are considered. Before the municipalities are examined, the foundations required for this paper are addressed first.
Central elements are examined here: for example, how sustainability is defined, what role a sustainable city plays, what constitutes a sustainable city, and how urban space came to be central to international climate protection and sustainability efforts in the first place. Since these are core aspects that must be understood on the way to sustainable urban development, this part is comparatively extensive.

Foundations

This chapter covers the relevant foundations. Besides basic concepts and factors relating to sustainable mobility and urban planning, this includes a brief overview of how, historically, the topic of sustainability became relevant at the municipal level. In addition, the term sustainability must first be defined, which is done next.

The Term Sustainability

The term sustainability has existed for more than three centuries and was originally used in forestry, a field in which resource-conserving management is extremely relevant. Characteristic of the understanding of sustainability at that time was the rule not to fell more wood within a year than can regrow in the same period (cf. Weinsziehr/Verhoog/Bruckner 2014, p. 3). Forestry thus operated sustainably when resource consumption, i.e. logging, did not exceed the amount of regrowing wood. Today's understanding of sustainability remains closely tied to this original idea, as the following definition shows: "Sustainability or sustainable development means satisfying the needs of the present in such a way that the opportunities of future generations are not restricted" (cf. Bundesministerium für wirtschaftliche Zusammenarbeit und Entwicklung 2023c, n.p.). Today, however, the term sustainability refers to all sectors of the economy and includes a further aspect, the so-called "triple bottom line" (TBL), which names three dimensions of sustainable development (cf. Weinsziehr/Verhoog/Bruckner 2014, pp. 3f.): economic efficiency, social justice, and ecological viability must be given equal weight, and as far as possible all political decisions should take sustainability as their basis (cf. Bundesministerium für wirtschaftliche Zusammenarbeit und Entwicklung 2023c, n.p.). The term sustainability is at times used inflationarily today (cf. Aden 2012, p. 15). In the remainder of this paper, the notion of sustainable development matters most; it can be defined as follows: "Policy and human behavior should be oriented toward the long-term preservation of the foundations of life" (ibid., p. 15).

A Sustainable City: A Working Definition

An adequate working definition of sustainability in the sense of sustainable urban development needs to be formulated. The aim of this paper is above all to examine mobility in Munich and Copenhagen. In what follows, sustainability therefore refers primarily to resource-conserving, low-emission transport planning. Beyond transport planning, however, further elements are of interest.
A city that is sustainable as a whole can be defined by

"[…] a well-developed local public transport network, regular waste disposal, and architectural innovations that allow the urban population to maintain a sustainable lifestyle" (Bildung für nachhaltige Entwicklung 2023, n.p.).

A sustainable lifestyle, in turn, means that people, through their own way of life and their consumption of resources, leave subsequent generations the same opportunities (cf. Aachener Stiftung Kathy Beys 2015, n.p.). At the same time, a sustainable city is an appealing city for its residents, one that ensures a clean environment, an intact transport system, affordable energy, and good social cohesion (cf. Dütz 2017, p. 15). Sustainable urban development can thus encompass a large number of different topics (cf. Firmhofer 2018, p. 10). Divided into two main areas, a city must change with regard to both its urban infrastructure and its urban life. Urban infrastructure includes, for example, transport as well as energy and water supply; urban life comprises, among other things, housing, work, social, and cultural elements (cf. ibid., p. 10). The term urban development itself denotes

"the steering of the overall development of cities and municipalities and requires an integrated, forward-looking approach that is implemented through urban planning […]" (Koch/Krellenberg 2021, p. 19).

The following fields of action are particularly relevant for sustainable urban development: decarbonization, the promotion of mobility that is as environmentally friendly as possible, the goal of a structurally and spatially compact as well as socially mixed city, adaptation to climate change, and the fight against poverty (cf. ibid., p. 22). This delimitation serves as the focus of this thesis.
In addition to urban transport infrastructure, attention is paid to further selected aspects, for example energy supply and structural measures. These aspects are examined with regard to whether, and to what extent, they enable the urban population to achieve a sustainable lifestyle. Urban life is thus equally relevant.

The Development of Sustainable Climate Targets

To understand how a sustainability concept could develop at the municipal level, a historical overview is given that summarizes the development of sustainable climate protection targets from the global down to the municipal level, naming the most relevant milestones. In 1997 the Kyoto Protocol was adopted; it entered into force eight years later. Through this agreement, most industrialized countries, including the EU member states of the time, committed themselves to reducing emissions of certain greenhouse gases by at least five percent within four years compared with 1990 levels (cf. Eppler 2023, n.p.). In 2000 the United Nations (UN) agreed on the Millennium Development Goals (MDGs) (cf. Koch/Krellenberg 2021, p. 6). Through this declaration, the heads of state and government committed themselves not only to fighting poverty, hunger, and disease but also to combating environmental destruction. To make progress measurable, targets were formulated for the year 2015 (cf. Weltgesundheitsorganisation 2018, n.p.). The focus lay on the supranational level; sustainable urban development was not a focal point, though it was indirectly affected by some of the target formulations (cf. Koch/Krellenberg 2021, p. 6). In 2009 the World Climate Conference took place in Copenhagen.
The goal of limiting global warming to less than two degrees Celsius above pre-industrial levels was declared as a statement of intent, but binding regulations were lacking (cf. Schellnhuber et al. 2010, p. 5). The two-degree threshold stems from science, which assumes that this value must not be exceeded if drastic consequences are to be avoided (cf. Buhofer 2018, p. 83). With the Paris Climate Agreement, the two-degree target was formally established (cf. Edenhofer/Jakob 2017, p. 39). This time it is a legally binding agreement under international law, which replaced the Kyoto Protocol and increasingly involves the municipal level in achieving its key objectives (cf. Landeszentrale für politische Bildung Baden-Württemberg 2023, n.p.). Further goals of the Paris Agreement, which entered into force at the end of 2016, are the reduction of emissions and adaptation to climate change (cf. Watjer 2023, n.p.). National climate protection strategies have generally emerged as a consequence of the Paris Agreement (cf. ibid., n.p.). In 2015 the United Nations launched the 2030 Agenda, which sets out clear goals for sustainable development (cf. Koch/Krellenberg 2021, p. 7).

The 2030 Agenda and the Sustainable Development Goals

"Transforming our world" (Koch/Krellenberg 2021, p. 6): this phrase illustrates the ambitious goals set by the 2030 Agenda through its goals for sustainable development, the Sustainable Development Goals (SDGs). The 2030 Agenda applies to all member states of the United Nations. At its core lies the goal of sustainable global development at all relevant levels, to be achieved through the 17 goals (cf. Bundesministerium für wirtschaftliche Zusammenarbeit und Entwicklung 2023a, n.p.). These goals complement one another, carry equal weight, and each contain between eight and twelve sub-targets (cf. Koch/Krellenberg 2021, p. 9). Although the 2030 Agenda was adopted by all UN member states, it is not legally binding, which also applies to the SDGs (cf. ibid., p. 12). Compared with the MDGs, the SDG targets are formulated more comprehensively, and with SDG 11 the regional and local level comes into view for the first time. This goal explicitly addresses the development of cities and municipalities, aiming to make them not only sustainable but also safer, more inclusive, and more resilient (cf. Koch/Krellenberg 2021, p. 7f.). The most important sub-targets are presented below. In addition to securing affordable housing, the transport system is to be expanded in a sustainable, safe, accessible, and affordable way (cf. Vereinte Nationen 2023b, p. 24). Settlement plans are to be geared toward sustainable development (cf. ibid., p. 24), the aim being to organize urbanization more sustainably and inclusively by 2030. Also by 2030, the number of deaths and people affected by climate-related disasters is to be reduced significantly (cf. Koch/Krellenberg 2021, p. 10). Harmful environmental impacts emanating from cities are to be reduced, air quality improved, and green areas created as public spaces that are inclusive, that is, accessible to all (cf. Vereinte Nationen 2023b, p. 24). Other SDGs can only be realized through urban measures and are therefore closely tied to urban development. One example is SDG 7, which focuses on sustainable and renewable energy and cannot be considered in isolation from the future energy supply of cities (cf. Koch/Krellenberg 2021, p. 11). Through the establishment of these goals, sustainability has become a central aspect of urban planning and development.
Since the 2030 Agenda at the latest, cities have thus also formally faced major challenges and transformation processes. The intensification of environmental disasters and extreme weather events shows that cities are moreover compelled to implement climate adaptation measures and sustainable urban development quickly.

Climate Protection in the European Union, in Germany, and in Denmark

What have these international agreements achieved? Since Munich and Copenhagen are the focus of this thesis, the resolutions affecting these cities with regard to the stated goals of sustainable urban development must be considered at further levels. Despite the similarity of the climate protection programs adopted in the EU, in Denmark, and in Germany, they are summarized separately here. In 2007 the EU accounted for one sixth of global CO2 emissions and one fifth of the greenhouse gas emissions of the industrialized countries (cf. Dröge 2007, p. 2), which underscores the need for action. The climate protection program of the current version of the German Climate Protection Act has not changed the ambitious targets: Germany is still to be greenhouse gas neutral by 2045 and to have reduced its greenhouse gas emissions by 65 percent as early as 2030 (cf. Presse- und Informationsamt der Bundesregierung 2023, n.p.). Denmark has presented a climate strategy and set itself the goal of taking on a pioneering role. By 2030 Denmark intends to cut its greenhouse gas emissions by 70 percent; climate neutrality is to be achieved by 2050 (cf. Außenministerium Dänemark 2020, p. 27). Denmark likewise wants to help drive global efforts forward, cooperating with other countries and with non-state actors to this end (cf. Außenministerium Dänemark 2020, p. 6). At the EU level the targets are similar, as the EU's "Green Deal" shows.
According to it, net greenhouse gas emissions are to be reduced by 55 percent by 2030, and greenhouse gas neutrality is to be reached by 2050 (cf. Europäische Kommission 2023, n.p.). The aim of this European "Green Deal" is to become the first climate-neutral continent and thereby fulfill the commitments arising from the Paris Climate Agreement (cf. Europarat 2023, n.p.). Consequently, the targets of Germany and Denmark are in part more ambitious, both in their timelines and in the scale of reductions, than those adopted at the EU level.

Sustainable Urban Development

The question arises as to why urban space in particular has become a central variable for sustainability goals. Current reports, data, and forecasts can help answer this question.

The Relevance of Sustainable Urban Development

The most recent SDG progress report was published in May 2023. In it, the United Nations concludes that more than half of the world's population currently lives in urban areas, a share that could rise to around 70 percent by 2050 (cf. Vereinte Nationen 2023a, p. 34). Compared with 2020, the urban population in Central Europe, and thus also in Germany and Denmark, is projected to grow by eight percent by 2050 (cf. Bundesministerium für wirtschaftliche Zusammenarbeit und Entwicklung 2023b, p. 4). Compared with other continents, this is a small increase: the urban population in North Africa, for instance, is expected to grow by 79 percent over the same reference period (cf. ibid., p. 4). Two aspects must not be overlooked, however. First, meeting this growth is a global challenge whose effects will be directly or indirectly felt by many.
Second, even with a population increase of eight percent, densely populated Central European cities face a multitude of tasks, which will also be noticeable in cities such as Munich and Copenhagen. As early as 2017, for example, three out of four people in Germany lived in cities (cf. Dütz 2017, p. 14). Accordingly, European cities are also central to the implementation of climate protection targets (cf. ibid., p. 13). Cities already consume the bulk of the world's energy and resources, almost 80 percent, for example through large volumes of waste, heating, and the pollutant emissions of their many vehicles (cf. Bundesministerium für wirtschaftliche Zusammenarbeit und Entwicklung 2023d, n.p.). At the same time, cities are responsible for up to 76 percent of global CO2 emissions (cf. Climate Service Center Germany 2015, p. 1). Cities are thus among the main drivers of climate change, as the following words make clear:

"The climate emergency is also an emergency of the city" (Chatterton 2019, p. 275).

Given the projected population growth, the relevance of cities for realizing climate protection targets will continue to increase, not least because cities already account for the majority of CO2 emissions and energy consumption. Cities play a central role in achieving a sustainable future. At the same time, it is precisely cities that are particularly endangered by climate change (cf. Climate Service Center Germany 2015, p. 1f.). For reasons of self-protection as well, cities are therefore compelled to develop strategies and measures for climate adaptation. Only then can urban space do justice to the climate emergency.

Developing a Sustainable and Climate-Neutral City

"How can the development of cities be steered so that they can make the necessary contribution to global sustainable development?" (Koch/Krellenberg 2021, p. 2).

This central question guides this chapter. Specifically, it asks what urban development must look like in order to sufficiently fulfill the necessary sustainability goals and meet the requirements of a sustainable city. The current SDG progress report takes stock at the halfway point since the SDGs entered into force. With regard to progress in urban sustainability, this mid-term review of the 2030 Agenda makes for sobering reading overall: in 2022, only half of the urban population had adequate access to public transport, and air pollution and the lack of open spaces remain persistent problems in cities (cf. Vereinte Nationen 2023a, p. 34). At the same time, the report notes that high-income countries have done much to combat air pollution, though still not enough, and that air pollution is not a purely urban problem (cf. ibid., p. 35). Nevertheless, it is car traffic in cities above all that must change. Paul Chatterton alludes here to a new mobility paradigm and calls for a car-free city, since only this can truly do justice to the climate and reduce social inequality (cf. Chatterton 2019, p. 278). It must also be taken into account that people in large cities often use several modes of transport to reach their destination (cf. Kallenbach 2021, p. 33). Even where climate-friendly mobility is available, it is therefore not used exclusively. This raises the question of how this can be changed.
A large number of proposed solutions already exist, including turning away from the notion of the car-oriented city, using existing infrastructure more efficiently, improving local public transport (ÖPNV) so that it becomes a genuine alternative, and introducing a distance-based charge for road use (cf. Edenhofer/Jakob 2017, p. 101f.). A large share of the energy demand in cities results from the combustion of fossil fuels, from transport, and from the heating and cooling of buildings (cf. Climate Service Center Germany 2015, p. 2). If the reduction targets are to be met, a transformation of the transport sector is unavoidable (cf. Jakob 2023, p. 1). At the same time, climate change also confronts urban transport systems with enormous challenges: especially in urban areas, many of the infrastructure networks that keep the urban system functioning are interlinked with the transport system (cf. Climate Service Center Germany 2015, p. 6). Several options exist for improving urban transport networks while making them future-proof and sustainable: focusing on pedestrians and non-motorized traffic as well as on public transport can offer benefits such as reduced emissions and economic prosperity (cf. ibid., p. 6). Improving public transport and other low-emission infrastructure can moreover contribute to energy savings, time savings, and better air quality (cf. ibid., p. 6). The future design of urban transport infrastructure therefore plays a central role in several respects. Beyond transport, other sectors are decisive, among them the waste system and the treatment of buildings (cf. ibid., p. 2). The energy sector is enormously important, as it holds the greatest potential for reducing emissions.
As energy demand rises, for example for electricity or fuels, greenhouse gas emissions will rise in parallel. Cities in particular are compelled to lower energy demand, make energy generation and consumption more efficient, switch to renewable energy sources, and at the same time guarantee a secure supply (cf. Climate Service Center Germany 2015, p. 6). In what follows, the cities of Copenhagen and Munich are examined with regard to their efforts. The focus remains on transport and mobility; in addition, adaptation to extreme weather and, for the field of energy supply, municipal heat planning are examined by way of example.

Copenhagen

Copenhagen is the seat of the Danish royal house (cf. Heidenreich 2019, n.p.). The city lies on the island of Zealand (cf. Britannica 2023, n.p.) and is situated on the Öresund strait, which connects the Baltic Sea and the North Sea (cf. Heidenreich 2019, n.p.). The city was founded in the early tenth century and has been Denmark's capital since 1445 (cf. Britannica 2023, n.p.). Copenhagen's population has grown by almost 100,000 over the past ten years; with currently around 653,000 inhabitants, it is Denmark's largest city (cf. Dyvik 2023, n.p.). It covers an area of roughly 88 square kilometers, making it comparatively small, and lies 24 meters above sea level (cf. Kallenbach 2021, p. 34).

Basic Information and Particularities

Copenhagen has an eventful history. Over the centuries the city was destroyed several times by great fires, was heavily fought over, and was occupied by German soldiers in the Second World War (cf. Findeisen/Husum 2008, p. 146ff.). During that occupation, however, the city remained largely undamaged, which is still visible in the cityscape today; one example is Rosenborg Castle (cf. Heidenreich 2019, n.p.).
In 1996 the city was named Capital of Culture (cf. Findeisen/Husum 2008, p. 149). Copenhagen's climate is mild and temperate, with an average annual temperature of 8.9 degrees Celsius (cf. climate-data.org 2023, n.p.). Overall, Copenhagen receives a great deal of rain; even in the drier months, precipitation is considerable (cf. ibid., n.p.). Owing to its coastal location, storm surges can cause floods with grave consequences, a danger, and a resulting need to act, of which Copenhagen's city administration is well aware (cf. Stadtverwaltung Kopenhagen 2023, n.p.).

Transport and Mobility

Any examination of Copenhagen's transport planning must address its cycling infrastructure, though car traffic and public transport must not be neglected either. The aim of this examination is to shed light on the motives and concrete approaches of transport planning and infrastructure in Copenhagen, taking stock of the current situation. Copenhagen regularly ranks first among the world's best cycling cities, and yet in 2021 almost a third of all trips were made by car (cf. Kallenbach 2021, p. 5). In the 1950s and 1960s, transport planning was geared toward the car, which led to a marked decline in cycling in the following decades: while in 1949 an average of more than 62,000 cyclists per day were counted on Nørrebrogade, a central arterial street in Copenhagen, by 1978 the figure was only about 8,000 (cf. ibid., p. 5f.). In the 1970s there were extensive cycling protests and demands for more cycle paths. Although the environmental aspect was already relevant at the time, it was not decisive; rather, the protests focused on road safety for cyclists (cf. ibid., p. 30f.).
In 2019, a clear majority of all cyclists in Copenhagen stated that they cycled because of the time saved compared with other modes of transport (46 percent) and for practical reasons (55 percent); a much smaller share of 16 percent cited environmental protection as a motive (cf. ibid., p. 31). A further factor was the oil crisis of the 1970s, which underscored the need for alternative modes of transport and subsequently caused the number of cyclists in Copenhagen to rise sharply (cf. Kallenbach 2021, p. 35). Beyond these factors, it is above all the non-discursive, that is, pre-existing, conditions that were essential for Copenhagen's path to becoming a cycling metropolis and for implementing the corresponding measures. First, geographically favorable conditions make Copenhagen well suited to cycling, as its small size and flat terrain show (cf. Kallenbach 2021, p. 34). Related to this is Copenhagen's history, in which cycling plays a significant part (cf. ibid., p. 36). The safety aspect of cycling is highly relevant: Copenhagen accordingly relied early on cycle paths separated from car traffic, which went hand in hand with an increase in cycling (cf. Søholt 2014, p. 1f.). A further factor is the unbroken political orientation regarding mobility in Copenhagen, sustained by Social Democrats and left-wing parties since the 1970s. This continuity also shaped investment in cycling and cycling infrastructure (cf. Kallenbach 2021, p. 36f.). Out of such factors, a culture of cycling was able to develop in Copenhagen. Beyond the environmental benefits, this has made Copenhagen more attractive for people, but also for businesses (cf. Søholt 2014, p. 1). Negative effects can occur as well.
For example, congestion increasingly occurs on the heavily used cycle paths. The city responds by widening cycle lanes and correspondingly narrowing car lanes (cf. Søholt 2014, p. 2). Closing streets to car traffic is also under consideration, the aim being to create more space for cyclists and public transport (cf. ibid., p. 2). Copenhagen further seeks to promote environmentally friendly fuels and to increase the share of electric cars, including among the city's taxis (cf. Stadt Kopenhagen 2020, p. 41). The city's climate protection plan names mobility as one of four central pillars (cf. Stadt Kopenhagen 2020, p. 13). The 2020 report states that CO2 emissions still need to be reduced significantly: despite the efforts and some successes, carbon emissions from passenger cars rose by ten percent between 2012 and 2018 (cf. ibid., p. 39f.). In parallel with population growth, the number of car owners has risen. Nevertheless, per-capita emissions from road traffic fell by 16 percent between 2010 and 2018 (cf. ibid., p. 41). In autumn 2019 Copenhagen opened the "Cityring" and is expanding it step by step. The associated expansion of the metro is intended to connect the various districts to local public transport and to guarantee efficient public transit (cf. Stadt Kopenhagen 2019, p. 26). In the long term, the expansion is to be pushed further so that public transport offers an attractive alternative to the car even during rush hour (cf. ibid., p. 26). Copenhagen shows what mobility in a sustainable city of the future can look like. At the same time, its favorable conditions, such as the city's flat terrain and small size, must be borne in mind.
For this reason, larger and hillier cities must instead develop public transport as the alternative and improve it with similar determination. Nevertheless, some of the factors that favor green mobility in Copenhagen could in principle be implemented in any city; one example is the political continuity in promoting cycling. Environmental and climate protection need not be the decisive motivation for starting a transport transition. Despite all the positive aspects and Copenhagen's pioneering role as a cycling city, a substantial share of trips was still made by car in 2021. The sustained promotion of the bicycle as an alternative was nevertheless able to contain the enormous growth of car traffic. It is obvious that narrowing lanes or closing streets to car traffic can spark contentious debates. The city of Copenhagen, however, pursues a clear plan to further establish the bicycle and public transport as the modes of choice. By the middle of the past decade, 45 percent of Copenhagen's residents were already cycling to school or work (cf. Diehn 2015, n.p.). Even so, current findings show that the efforts are far from sufficient.

Further Measures and Challenges

The aim of this chapter is to examine further measures in Copenhagen. Given the scope of this thesis, these are examples presented compactly, drawn from the fields of adaptation to extreme weather and municipal heat planning. The same approach and the same categories are applied to the city of Munich. The aim is to determine what efforts each municipality is undertaking to advance sustainability goals. Owing to its local conditions, Copenhagen must cope with extreme weather events that are intensifying as a result of climate change.
Between 2010 and 2015 alone, the Danish capital experienced six heavy rainfall events that flooded streets and basements and caused enormous financial damage (cf. Kruse 2016, p. 669). Accordingly, adapting the city to such heavy rainfall events is a key component of its climate adaptation plan. To reduce the risk of flooding and meet this challenge, the city is working on realizing five central aspects. These include measures that can help reduce flood risk, for example a qualitative and quantitative increase in urban green space (cf. ibid., p. 669f.). A concrete example is the Copenhagen district of Skt. Kjelds Kvarter, which is gradually being transformed into a climate-adapted urban space of the future. Nature is to spread further in this quarter while rainwater regulation is improved (cf. Technik- und Umweltverwaltung Kopenhagen 2023, n.p.). Specifically, the green spaces serve as infiltration basins, allowing water to be channeled to the harbor basin independently of the sewer system; the street layout was also adapted for this purpose (cf. Kruse 2016, p. 270). Beyond reducing the risk posed by extreme weather events, such projects make the city more sustainable: transport infrastructure is modified in favor of green spaces, and nature is given more room within the city. The city's exposure to extreme weather events is to be reduced through further measures, for example the provision of pumps and the equipping of basements to guard against flooding. At the same time, the climate adaptation plan makes clear that the development of green growth is desired and is being pursued in parallel with climate adaptation (cf. Stadtverwaltung Kopenhagen 2011, p. 5).
Green and open spaces are thus to be improved and expanded, and wherever construction takes place this is to be taken into account accordingly (cf. ibid., p. 12). Beyond protecting against extreme weather events, these green measures are intended to lower the city's energy consumption, improve air quality, and combat noise pollution; creating open spaces, for example, can moderate temperatures and ensure air circulation (cf. ibid., p. 12). In its current climate protection plan, Copenhagen names three further areas alongside mobility: energy consumption, energy production, and city administration initiatives (cf. Stadt Kopenhagen 2020, p. 13). In its own words, Copenhagen intends, in addition to concentrating on public transport, to focus on energy output, carbon-neutral district heating, and measures to reduce carbon emissions (cf. Stadt Kopenhagen 2019, p. 25). In 2014 Copenhagen was named Green Capital by an independent international panel of experts. A series of criteria must be met for this title: besides public transport, air quality, the share and quality of urban green space, and the handling of climate change are taken into account (cf. Diehn 2015, n.p.). Accordingly, further efforts were undertaken early on. Copenhagen's extensively developed district heating network deserves particular mention here. It supplies the majority of buildings and thus contributes substantially to saving CO2 emissions in Copenhagen (cf. Burckhardt/Tappe/Rehrmann 2022, n.p.). At the same time, it benefits residents: prices are state-controlled, and the district heating infrastructure enables a simple and, for consumers, inexpensive switch to renewable energy, which is why 80 percent of Copenhagen's district heating is already generated from renewables (cf. ibid. 2022, n.p.). The city's district heating network comprises, among other facilities, several waste incineration plants and combined heat and power plants operated by various utilities (cf. Harrestrup/Svendsen 2014, p. 296). This ensures that waste heat can be used as a heating source. One of these waste incineration plants lies close to Copenhagen's center and bears the name Amager Bakke. Its roof simultaneously serves the population as a ski slope and thus stands as a symbol of the city's innovation and corresponding sustainability ambitions (cf. Kafsack 2023, n.p.). To reach its targets in the energy sector, Copenhagen relies on a range of further measures. Alongside district heating, these include the use of renewable energy technologies and the corresponding promotion of heat pumps, geothermal energy, solar collectors, and wind turbines; the city also endorses biomass as a transitional technology (cf. Stadt Kopenhagen 2019, p. 54). Copenhagen is often described as a green city. Many of the city's measures were taken long ago, and its plans are far advanced, highly detailed, and well thought out. To protect itself better against future extreme weather events, the city is working on various solutions, for example expanding and relieving the sewer system. That open green spaces arise in the process is not merely a useful side effect but a deliberate goal. In the field of energy supply, district heating deserves particular mention. It was expanded very early in Copenhagen and accordingly supplies almost all buildings, making it arguably the most important area of the energy supply and at the same time the main focus of Copenhagen's climate plan. Nevertheless, there is room for improvement and optimization here as well, and points of criticism are justified; the use of biomass, for example, is questionable.
Einen weiteren Rückschlag musste Kopenhagen kürzlich einstecken: Die Stadt gab bekannt, dass sie das Ziel der Klimaneutralität bis 2025 deutlich verfehlen wird (vgl. Wolff 2022, o.S.).

München

München wurde im Jahr 1158 erstmals urkundlich erwähnt und liegt an der Isar, die das Stadtgebiet auf einer Länge von 13,7 Kilometern durchfließt (vgl. Stadt München 2023, o.S.). Die Stadt ist bereits seit Beginn des 16. Jahrhunderts die bayerische Landeshauptstadt (vgl. Stahleder 2023, o.S.). Heute hat München mehr als 1,5 Millionen Einwohner:innen und kann damit einen deutlichen Bevölkerungsanstieg verbuchen (vgl. Statistisches Amt München 2023, o.S.). Verglichen mit dem Jahr 2004 stieg die Anzahl der Einwohner:innen um 300.000 Menschen (vgl. Münchner Stadtmuseum 2004, S. 155). München liegt etwa 519 Meter über dem Meeresspiegel und hat eine Fläche von mehr als 310 Quadratkilometern, wodurch die Stadt flächenmäßig zu den größten Städten Deutschlands gehört (vgl. Stadt München 2023, o.S.).

Grundlegende Informationen und Besonderheiten

Anlass der Gründung Münchens war ein Konflikt zwischen Herzog Heinrich dem Löwen und Bischof Otto I. von Freising (vgl. Scholz 2004, S. 20). Die Bevölkerung wuchs rasch, was bereits zur Mitte des 13. Jahrhunderts eine deutliche Vergrößerung der Stadt nötig machte (vgl. Scholz 2004, S. 22). Die Isar wurde in München bereits vor mehreren Jahrhunderten als Transportweg für Waren genutzt und prägte daher die Entwicklung der Stadt maßgeblich (vgl. Scholz 2004, S. 31f.). Im Jahr 1795 begann eine neue Entwicklung: Die bisher genutzten Festungsanlagen wurden aufgegeben, und die dynamische, unbegrenzte Weiterentwicklung der Stadt konnte beginnen (vgl. Lehmbruch 2004, S. 38). Im Laufe der Jahrhunderte kam es zu mehreren Eingemeindungen (vgl. Münchner Stadtmuseum 2004, S. 155). 
Während des Zweiten Weltkriegs wurden 90 Prozent der historischen Altstadt Münchens zerstört, und die Stadt verlor bis zum Ende des Krieges mehr als die Hälfte ihrer Einwohner:innen (vgl. Stahleder 2023, o.S.). Münchens Grünanlagen nehmen etwa 13,4 Prozent der gesamten Stadtfläche ein. Den größten zusammenhängenden Teil bildet dabei der Englische Garten mit einer Größe von 374,13 Hektar (vgl. Stadt München 2023, o.S.). Die Jahresmitteltemperatur in München liegt bei 8,7 Grad Celsius, der Niederschlag beträgt circa 834 Millimeter im Jahr (vgl. Deutscher Wetterdienst 2023, o.S.). In jüngster Zeit hatte München mit einigen Extremwetterereignissen zu kämpfen, unter anderem mit Starkregen (vgl. Handel 2023, o.S.) und Rekord-Hitzewellen (vgl. Harter 2023, o.S.).

Verkehr und Mobilität

München arbeitet seit vielen Jahren an seiner Verkehrsstrategie. Der ursprüngliche Verkehrsentwicklungsplan wurde bereits im Jahr 2006 veröffentlicht. Im Sommer 2021 wurde ein neuer Entwurf bezüglich der zukünftigen Mobilitätsplanung beschlossen. Der Stadtrat setzte sich dabei ambitionierte Ziele: Der Verkehr im Stadtgebiet sollte demnach innerhalb von vier Jahren zu mindestens 80 Prozent durch abgasfreie Fahrzeuge beziehungsweise den ÖPNV oder den Fuß- und Radverkehr abgewickelt werden. Weiterhin soll der Verkehr in München bis 2035 vollständig klimaneutral sein (vgl. Landeshauptstadt München 2023c, o.S.). Der neue Mobilitätsplan der Stadt soll den zukünftigen Herausforderungen gerecht werden. Dazu zählen unter anderem die steigende Bevölkerungszahl und der somit zunehmende Mobilitätsbedarf sowie der Umwelt- und Gesundheitsschutz (vgl. Landeshauptstadt München 2023b, o.S.). Der motorisierte Individualverkehr nimmt in der bayerischen Landeshauptstadt nach wie vor einen hohen Stellenwert ein und wurde 2017 von rund 24 Prozent der Münchner:innen in Anspruch genommen. 
Die Anzahl der täglich bewältigten Personenkilometer nahm ebenfalls zu, was durch den Anstieg der Bevölkerung und die Zunahme der täglichen Strecken erklärt wird (vgl. Landeshauptstadt München 2022, S. 107f.). Der ÖPNV wurde im Jahr 2017 von 24 Prozent der Münchner:innen genutzt, was verglichen mit dem Jahr 2008 einen leichten Anstieg darstellt. Das Fahrrad wird mit 18 Prozent hingegen von weniger Münchner:innen genutzt als noch 2008 (vgl. ebd. 2022, S. 107f.). Die Stadt kommt in ihrem Nachhaltigkeitsbericht zu dem Ergebnis, dass die Entwicklung in Richtung ÖPNV und Radverkehr geht. Durch das starke Wachstum der Stadt und des Umlands kommt es jedoch zu einem Anstieg des Verkehrs insgesamt, was die eigentlich positive Entwicklung aufhebt (vgl. ebd. 2022, S. 107f.). Die Stadt München beschäftigt sich seit einiger Zeit mit sogenannten Radschnellverbindungen. "Radschnellverbindungen sind hochwertige Verbindungen im Radverkehrsnetz (von Kommunen oder Stadt-Umland-Regionen), die wichtige Zielbereiche (zum Beispiel Stadtteilzentren, Wohn- und Arbeitsplatzschwerpunkte, (Hoch-)Schulen) mit hohen Potenzialen über größere Entfernungen verknüpfen und durchgängig ein sicheres und attraktives Befahren mit hohen Reisegeschwindigkeiten […] ermöglichen" (Landeshauptstadt München 2022, S. 109). Solche Strecken haben somit das Potential, einen nicht zu unterschätzenden Beitrag zur grünen Mobilität zu leisten. Radschnellwege können nicht nur in der Freizeit, sondern auch von Berufspendler:innen genutzt werden und sind daher eine Alternative zum Auto. Die Landeshauptstadt München hat bereits mehrere Machbarkeitsstudien in Auftrag gegeben. Die Ergebnisse belegen, dass viele dieser Strecken, beispielsweise die Strecke zwischen der Innenstadt Münchens und Starnberg, technisch machbar und wirtschaftlich gewinnbringend sind (vgl. Landeshauptstadt München/Landratsamt München/Landratsamt Starnberg 2020, S. 29). Die lange Planung der Radschnellwege befindet sich seit diesem Jahr in einer neuen Phase. 
Im Juni 2023 wurde mit dem Bau der ersten von insgesamt fünf Strecken begonnen, die die Stadt München mit Unterschleißheim und Garching verbinden soll (vgl. Heudorfer 2023, o.S.). Gleichzeitig müssen die enorm hohen Kosten für den Bau solcher Strecken berücksichtigt werden. Dies ist der Grund, weshalb beispielsweise die Strecke zwischen München und Starnberg nicht realisiert wird (vgl. ebd. 2023, o.S.). München plant die Reduzierung des Autoverkehrs in seiner Altstadt. So soll mehr Platz für Fußgänger:innen, Radfahrende und den ÖPNV geschaffen werden. Die Stadt nennt eine Reihe von Maßnahmen, die das Ziel einer autofreien Altstadt realisieren sollen. Dazu zählen unter anderem das Errichten und die Erweiterung von Fußgängerzonen, die Neuregelung des Parkens, was auch das Erhöhen der Parkgebühren beinhaltet, die Verbesserung des Liefer- und Ladeverkehrs sowie das Erbauen eines breiten Radrings in der Altstadt (vgl. Landeshauptstadt München 2023a, o.S.). Ein Pilotprojekt diesbezüglich befindet sich in der zentral gelegenen Kolumbusstraße. Die Straße wurde für Fahrzeuge gesperrt und mit Rollrasen, Sitzmöglichkeiten und Hochbeeten ausgestattet (vgl. Stäbler 2023, o.S.). Das Projekt hat jedoch nicht nur Befürworter:innen: Der Verlust von knapp 40 Parkplätzen sowie der Lärm durch spielende Kinder werden kritisiert (vgl. ebd. 2023, o.S.). Der ÖPNV hat in München einen hohen Stellenwert. Bereits im Jahr 2010 lag München im Vergleich unter den besten deutschen Städten. Berücksichtigt wurden damals unter anderem die Fahrtdauer, die Informationslage und die Preise (vgl. Wagner 2010, o.S.). Eine ADAC-Studie zeigt, dass München im Jahr 2021 die teuerste Einzelfahrkarte unter 21 deutschen Großstädten mit mehr als 300.000 Einwohner:innen hatte. Die Münchner Monats- sowie die Wochenkarte hingegen waren mit Abstand am günstigsten; die Monatskarte kostete im Vergleich zu Hamburg knapp die Hälfte (vgl. ADAC 2021, o.S.). 
Dieser Aspekt muss hinsichtlich der Einführung des Deutschlandtickets und der damit verbundenen Preisentwicklung des ÖPNV neu bewertet werden, bleibt aber als Befund aus der Zeit vor dem Deutschlandticket aussagekräftig. Langfristig plant München eine Bahnstrecke, die Stadt und Umland miteinander verbindet und das bereits vorhandene Schienennetz ergänzt. Dieses Projekt ist zuletzt aus finanziellen Gründen gescheitert, soll jedoch durch spezielle Buslinien kompensiert und nach Möglichkeit neu geprüft werden (vgl. Landeshauptstadt München 2023d, o.S.). Um die Kapazität des ÖPNV in München und Umland zu erhöhen, werden im Rahmen eines Programms verschiedene Maßnahmen umgesetzt. Dazu zählen unter anderem die Anbindung an den Flughafen und der Ausbau der Schieneninfrastruktur im Nordosten Münchens (vgl. Landeshauptstadt München 2023d, o.S.). Auch das U-Bahn- und Tramnetz soll durch die Münchner Verkehrsgesellschaft (MVG) ausgebaut werden. Vorgesehen ist die Verlängerung beziehungsweise der Neubau verschiedener Strecken (vgl. ebd. 2023d, o.S.). Gleichzeitig wird auf die Problematik verwiesen, dass die Kapazitätsgrenze des Schienenverkehrs in München und der Region bereits erreicht ist (vgl. ebd., o.S.). Die bayerische Landeshauptstadt setzt sich selbst ambitionierte Ziele, was Verkehr und Mobilität betrifft. Bereits vor vielen Jahren wurde mit entsprechenden Planungen begonnen. Auf der Webseite der Landeshauptstadt finden sich viele Informationen und Vorhaben bezüglich der Verkehrsplanung. Der Ausbau des Radverkehrs, vor allem die Planung von Radschnellverbindungen, ist vielversprechend. Die Machbarkeitsstudien belegen das große Potential. Da jedoch erst vor einigen Monaten mit dem Bau der ersten Strecke begonnen wurde, muss München hier in relativ kurzer Zeit viel erreichen. Gleichzeitig kann somit nicht abschließend festgestellt werden, wie groß das Potential der Radschnellverbindungen in der Praxis ist. 
Der Zuwachs der Stadt München und des Umlands stellt die Landeshauptstadt vor Herausforderungen in doppelter Hinsicht: Obwohl der Anteil der Radfahrenden und der ÖPNV-Fahrenden deutlich zugenommen hat, steigt der Verkehr insgesamt. Gleichzeitig stellt die Stadt fest, dass der ÖPNV an der Kapazitätsgrenze ist. Dennoch müssen die positiven Aspekte betrachtet werden. Hierzu zählt unter anderem das Potential des Münchner ÖPNV und der verschiedenen Projekte. Auch wenn es von der Planung bis zur Umsetzung viele Jahre dauert, ist München sicherlich vielen Städten, vor allem im deutschen Städtevergleich, voraus, da die Planungen früh begonnen haben.

Weitere Maßnahmen und Herausforderungen

Hier werden nun weitere Maßnahmen untersucht. Dabei wird, wie bei Kopenhagen, in exemplarischer Weise auf den Bereich der Extremwetter- beziehungsweise Klimawandelanpassung und den Bereich der kommunalen Wärmeplanung eingegangen. Gleichzeitig werden Herausforderungen, Chancen und Schwierigkeiten beleuchtet, die sich daraus ergeben. Die bayerische Landeshauptstadt hat im Jahr 2019 den Klimanotstand ausgerufen. Damit verbunden ist das Ziel der Klimaneutralität bis 2035 (vgl. Landeshauptstadt München 2023e, o.S.). Das Klima in der Stadt München weist aufgrund der dichten Bebauung spezifische Besonderheiten auf. Dazu zählt der sogenannte "Wärmeinseleffekt", der zu einem Temperaturunterschied im Vergleich zum Münchner Umland führt (vgl. Landeshauptstadt München u. a. 2016, S. 8). Im Stadtgebiet ist es deshalb im Durchschnitt zwei bis drei Grad wärmer, wobei der Temperaturunterschied in der Nacht deutlich höher ausfällt: Nachts ist es im Stadtgebiet Münchens bis zu zehn Grad wärmer als im Umland, und dieser Effekt wird durch den Klimawandel und den damit verbundenen Anstieg der Durchschnittstemperatur noch deutlich zunehmen (vgl. ebd. 2016, S. 8). Dementsprechend sieht das Klimaanpassungskonzept verschiedene Maßnahmen vor. 
Dazu zählen zum Beispiel der Ausbau der Dachbegrünung und von Photovoltaikanlagen auf Gebäuden, die Verbesserung des Wärmeschutzes in der Gebäudeplanung und Förderprogramme für Klimaanpassungsmaßnahmen auf privaten Grundstücken (vgl. ebd. 2016, S. 40). In München gründeten sich einige Bewegungen, die sich für mehr Nachhaltigkeit einsetzen. Die Münchner Initiative Nachhaltigkeit (MIN) ist ein Beispiel dafür und setzt sich mehrheitlich aus zivilgesellschaftlichen Organisationen zusammen. Die Ziele der MIN orientieren sich an den SDGs (vgl. Münchner Initiative Nachhaltigkeit 2023, o.S.). Der Münchner Nachhaltigkeitsbericht liefert interessante Aufschlüsse: Der Anteil der erneuerbaren Energien im Gebiet der Stadtwerke München lag 2019 bei insgesamt 6,4 Prozent. Den größten Anteil hat dabei die Wasserkraft, gefolgt von der Solarenergie (vgl. Landeshauptstadt München 2022, S. 85). Langfristig soll in den eigenen Stadtwerken Ökostrom in ausreichender Menge erzeugt werden, um damit die Stadt München selbst versorgen zu können. Daraus ergibt sich für den Leiter der Stadtwerke die politische Aufgabe, die Energiewende voranzubringen (vgl. Hutter 2019, o.S.). Gerade die lokale Erzeugung von Ökostrom kann sich in einer dicht bebauten Stadt als schwierig herausstellen. Hier stellt sich die Frage, wie viel Potential München und das direkte Umland haben, etwa für den Auf- und Ausbau umliegender Windräder oder Biomassekraftwerke (vgl. ebd., o.S.). München setzt auf Tiefengeothermie und kann einen Anstieg der Erzeugung sowie des Anteils der Tiefengeothermie am Fernwärmeverbrauch verbuchen. Jedoch lag der Anteil der Geothermie am Fernwärmeverbrauch im Jahr 2019 lediglich bei 3,8 Prozent (vgl. Landeshauptstadt München 2022, S. 86f.). Aktuell wird in München das größte Geothermiekraftwerk Europas erbaut. 
Somit ist davon auszugehen, dass der Anteil der Geothermie an der Fernwärmeversorgung in München weiter zunimmt und diese in der Konsequenz Schritt für Schritt nachhaltiger und regenerativer wird (vgl. Schneider 2022, o.S.). In München befinden sich momentan sechs Geothermieanlagen. Durch die Erweiterungen soll das Fernwärmenetz den Wärmebedarf Münchens bis 2040 klimaneutral abdecken (vgl. Stadtwerke München 2023a, o.S.). Die Methode der Kraft-Wärme-Kopplung (KWK), also die gleichzeitige Gewinnung von mechanischer Energie und nutzbarer Wärme (vgl. Umweltbundesamt 2022, o.S.), wird von den Stadtwerken München genutzt und dient als eine Art Zwischenlösung, die intensiver genutzt wird, bis der Ausbau der Geothermieanlagen abgeschlossen ist (vgl. Stadtwerke München 2023b, o.S.). Die bei der Stromerzeugung nach der KWK-Methode anfallende Abwärme wird in das Fernwärmenetz der Stadt München eingespeist. Die so erzeugte Fernwärme kann dementsprechend schon heute in beträchtlichem Maß umweltschonend bereitgestellt werden; sie ersetzt laut den Stadtwerken München bereits etwa 400 Millionen Liter Heizöl und spart pro Jahr eine Million Tonnen CO2 ein (vgl. ebd. 2023b, o.S.). Die Stromerzeugung selbst erfolgt mit Brennstoffen. Neben erneuerbaren Energien können dabei auch fossile Energieträger zum Einsatz kommen. Die Stadtwerke München setzen sich selbst das langfristige Ziel, fossile Brennstoffe abzulösen (vgl. ebd. 2023b, o.S.). Das Heizkraftwerk Süd der Stadtwerke München arbeitet beispielsweise nach der KWK-Methode; die Stromerzeugung erfolgt dort mit Erdgas (vgl. Stadtwerke München 2022, o.S.). Somit wird ein fossiler Brennstoff verwendet. Im deutschen Städtevergleich gilt München oft als Vorreiter, was Nachhaltigkeitsbemühungen betrifft. München hat 2019 den Klimanotstand ausgerufen und sich das Ziel gesetzt, bis 2035 klimaneutral zu werden. Das Ausrufen des Klimanotstands hat eher symbolischen Charakter. 
Dennoch wird die Dringlichkeit der Sache damit auch auf kommunaler Ebene betont. Bezüglich der Anpassung an Extremwetterereignisse finden sich viele Informationen der Stadt München. Dabei werden auch viele Maßnahmen genannt, die nach und nach umgesetzt werden sollen. Die Stadt ist sich der Relevanz des Themas bewusst. Durch das veränderte Stadtklima wird deutlich, wie wichtig die Anpassung an Extremwetterereignisse ist, um das Leben in der Stadt auch zukünftig zu sichern. Im Fall von München sind die Maßnahmen gegen Hitze besonders relevant. Hier hat München bereits Pilotprojekte und verschiedene Fördermaßnahmen in die Wege geleitet. Im Bereich der Energieversorgung muss vor allem die Tiefengeothermie benannt werden. München setzt verstärkt darauf und erkennt das große Potential. Gleichzeitig müssen die hohen Kosten und der damit verbundene Aufwand berücksichtigt werden. Aktuell kommen auch KWK-Werke zum Einsatz. Dies ermöglicht die umweltschonende Bereitstellung von Fernwärme. Der Einsatz mehrerer Geothermieanlagen kann dieses Potential jedoch beträchtlich steigern. Erdgas wird zur Erzeugung von Strom in München auch aktuell eingesetzt. Langfristig wollen die Stadtwerke jedoch ohne den Einsatz fossiler Brennstoffe arbeiten. Die Fernwärme Münchens ist weit ausgebaut und bietet hohes Potential. Dennoch zeigen erst die nächsten Jahre, wie nachhaltig und flächendeckend das Fernwärmenetz konkret ausgebaut werden kann.

Erkenntnisse

Die Einwohnerzahl Kopenhagens ist in den letzten Jahren gestiegen. Auch zukünftig muss die Stadt mit einem Bevölkerungswachstum rechnen. In München ist ebenso von einem Bevölkerungsanstieg auszugehen, was auch in den letzten Jahren der Fall war. In absoluten Zahlen ist der Bevölkerungsanstieg in München deutlich höher, was sich durch die größere Fläche der bayerischen Landeshauptstadt zumindest teilweise relativieren lässt. Im direkten Vergleich ist München mehr als dreimal so groß wie Kopenhagen. Kopenhagen gilt als eine der besten Fahrradstädte weltweit. 
Dies führt neben den positiven Aspekten auch zu vollen Fahrradwegen. Die Stadt reagiert mit der Verbreiterung von Fahrradwegen und der Sperrung beziehungsweise Verkleinerung von Autofahrbahnen und ganzen Straßen. München geht diesbezüglich nicht so konsequent vor, hat jedoch ein vergleichbares Pilotprojekt gestartet, das eine zentrale Straße zeitweise für den Autoverkehr gesperrt hat. Das Fahrrad als Verkehrsmittel konnte sich in Kopenhagen bereits früh etablieren. Ein zentraler Faktor, der für das Fahrrad in Kopenhagen spricht, ist unter anderem die Zeitersparnis. Eine Reihe nicht-diskursiver Faktoren trägt zur bedeutsamen Stellung des Fahrrads in der dänischen Hauptstadt bei: Neben der flachen Lage und der geringen Größe zählen dazu auch der politische Wille und die Bereitschaft, das Fahrrad als Verkehrsmittel kontinuierlich zu fördern. In München wurde die Relevanz des Fahrrads ebenfalls erkannt. München kann im Vergleich jedoch auf keine derart ausgeprägte Fahrradkultur zurückblicken. Dennoch zeigt sich, dass das Fahrrad in München nicht unterschätzt wird. Die aktuellen Planungen und erste bauliche Maßnahmen der Radschnellverbindungen belegen, dass die Stadt den Radverkehr als Alternative zum Auto etablieren möchte. Dabei sollen, wie es in Kopenhagen bereits der Fall ist, nicht nur Freizeitradler:innen, sondern auch Berufspendler:innen angesprochen werden. Das Münchner Umland soll zu weiten Teilen in den Bau der Radschnellverbindungen integriert werden. Theoretisch könnte München auf diese Weise trotz der deutlich größeren Distanzen eine optimale Infrastruktur für das Fahrrad als grüne Alternative etablieren. Der Autoverkehr spielt in Kopenhagen nach wie vor eine Rolle. Trotz einiger Maßnahmen müssen die CO2-Emissionen weiter reduziert werden. Die Emissionen im Pkw-Bereich sind bis vor fünf Jahren noch angestiegen. Auch in München ist der Autoverkehr relevant und wurde im Jahr 2017 von fast einem Viertel der Münchner:innen genutzt. 
Von der Stadt München werden verschiedene Maßnahmen benannt, die zu einer autofreien Altstadt führen sollen. Dabei soll ähnlich wie in Kopenhagen vorgegangen werden, unter anderem mit der Erweiterung von Fußgängerzonen. Kopenhagen scheint diesbezüglich jedoch weiter fortgeschritten zu sein: Bei der Verkleinerung von Fahrbahnen für den Autoverkehr handelt es sich dort um dauerhafte Maßnahmen, während sich dies in München bislang auf Pilotprojekte und Vorhaben beschränkt. Beide Städte haben ein gut ausgebautes ÖPNV-Netz. In München ist sich die Stadt der Tatsache bewusst, dass die aktuelle ÖPNV-Infrastruktur an ihrer Kapazitätsgrenze angekommen ist. Aus diesem Grund plant München den Ausbau und setzt bereits einige Großprojekte, unter anderem die Erweiterung der Schieneninfrastruktur, in verschiedenen Stadtteilen um. Vor allem das Tramnetz hat sicherlich das Potential, für Münchner:innen eine dauerhafte Alternative zum Auto zu sein. Da das Hauptproblem augenscheinlich die Kapazitätsgrenze des bestehenden Schienennetzes ist, kommt es auf den zügigen und konsequenten Ausbau in den nächsten Jahren an. Kopenhagen hat im Vergleich bereits im Jahr 2019 eine Stadtlinie eröffnet, die immer weiter ausgebaut wird. Kopenhagen will die Attraktivität des ÖPNV auch während der Rushhour gewährleisten. Dies lässt darauf schließen, dass einer der Hauptfaktoren auch hier die aktuelle Auslastung der vorhandenen öffentlichen Verkehrsmittel ist. In diesem Bereich haben beide Städte somit ähnliche Herausforderungen zu bewältigen. Beide Städte sind aktiv und scheinen den ÖPNV als dauerhafte Alternative fördern zu wollen. Kopenhagen liegt direkt am Meer und 24 Meter über dem Meeresspiegel. Ähnlich wie München sieht sich Kopenhagen mit Extremwetterereignissen konfrontiert. In Kopenhagen regnet es sehr häufig, und durch die Lage am Meer und die geringe Höhe über dem Meeresspiegel sind Sturmfluten und Überschwemmungen keine Seltenheit. 
München hat ebenso mit Starkregen zu kämpfen, wobei auch Hitzewellen hier nicht zu unterschätzen sind. Beide Städte stellen verschiedene Maßnahmen vor, die negative Folgen vermeiden sollen. In der Umsetzung hat Kopenhagen bereits Erstaunliches erreicht, um sich vor Starkregen zu schützen. Beide Städte nehmen die durch den Klimawandel entstehenden Extremwetterereignisse und deren mögliche Folgen ernst und arbeiten an spezifischen Lösungen. Die Energieversorgung ist in beiden Städten ein zentraler Aspekt. Beide Städte nehmen hier in gewisser Weise Vorreiterrollen ein. Sowohl Kopenhagen als auch München fördern den Einsatz erneuerbarer Technologien in verschiedener Hinsicht. Das Fernwärmenetz in Kopenhagen ist bereits sehr gut ausgebaut. Gleichzeitig wird die Fernwärme Kopenhagens bereits zu 80 Prozent aus erneuerbaren Energien gewonnen. Die dänische Hauptstadt hat hier einige Vorzeigeprojekte, unter anderem die Müllverbrennungsanlage Amager Bakke. Die Stadt München setzt vermehrt auf Tiefengeothermie und treibt den Ausbau voran. Dies soll die Fernwärme nach und nach nachhaltiger machen. Bis 2040 soll das Fernwärmenetz in München somit klimaneutral arbeiten können. Die KWK-Methode wird in München eingesetzt und spart nennenswerte Mengen an CO2 ein; fossile Brennstoffe kommen hier aber nach wie vor zum Einsatz. Dennoch hat auch München ein ausgefeiltes Konzept, hat bereits früh nach alternativen Wegen gesucht und ist vor allem im deutschen Vergleich weit vorangeschritten. Daher sind die Fortschritte Münchens in der Wärmeversorgung beachtlich. Im direkten Vergleich kann Kopenhagen jedoch mit noch mehr Innovation und aktuell größeren Fortschritten punkten.

Fazit

Es wurde untersucht, wie eine nachhaltige Stadt gestaltet werden kann. Eine aktuelle Bestandsaufnahme zeigt, dass die Entwicklungen in Städten sehr unterschiedlich verlaufen. Die Abkehr von der Vorstellung einer autogerechten Stadt scheint sinnvoll. 
Bereits vorhandene ÖPNV-Strukturen und weitere Alternativen zum motorisierten Individualverkehr müssen effizienter genutzt oder geschaffen werden. Der Energiesektor ist besonders relevant, da hier die größten Möglichkeiten hinsichtlich einer Reduzierung von Emissionen bestehen. Städte sollten daher Maßnahmen etablieren, um den Energiebedarf zu senken und auf regenerative Energien umsteigen zu können. In dieser Arbeit wurde bezogen auf den Bereich der Energie die kommunale Wärmeplanung berücksichtigt. Untersucht wurden die Bereiche des Verkehrs und der Mobilität, der Extremwetteranpassung und der kommunalen Wärmeplanung. München und Kopenhagen haben in den untersuchten Bereichen bereits eine Vielzahl an Maßnahmen und Vorhaben vorgestellt und initiiert. Dabei zeigt sich, dass die spezifischen Gegebenheiten in Städten stets berücksichtigt werden müssen. Diese unterschiedlichen Gegebenheiten führen dazu, dass ein Städtevergleich nicht in jedem Aspekt einer nachhaltigen Stadtentwicklung zielführend ist. München zeigt jedoch am Beispiel der geplanten Radschnellverbindungen, dass es auch Lösungen für suboptimale Gegebenheiten gibt, in diesem Fall für größere Distanzen beim Radverkehr. Beide Städte sind fortgeschritten, was den Bereich der nachhaltigen Mobilität betrifft. Hier stellt vor allem der erwartete Bevölkerungsanstieg eine Herausforderung dar, da dieser zur weiteren Be- beziehungsweise Überlastung der bestehenden Verkehrsinfrastruktur und zur Zunahme des Verkehrs generell führen wird. Dementsprechend finden sich in beiden Städten Projekte, die teils bereits in der Umsetzung sind und mit Blick auf die Zukunft der nachhaltigen Mobilität vielversprechend erscheinen. Hier bleiben jedoch die konkreten Fortschritte der nächsten Jahre abzuwarten, was eine erneute Untersuchung zu einem späteren Zeitpunkt interessant macht. Der Vision beziehungsweise Utopie einer autofreien Stadt scheint Kopenhagen einen Schritt näher zu sein. 
München zeigt jedoch, dass zumindest eine autofreie Altstadt in naher Zukunft nicht undenkbar ist. Die Anpassung an die Folgen des Klimawandels ist für beide Städte relevant. Kopenhagen hat hier bereits eine Reihe innovativer Projekte umgesetzt. München stellt viele Maßnahmen vor, die im Detail jedoch noch weiter vorangetrieben werden müssen. Bezogen auf die kommunale Wärmeplanung gehen beide Städte verschiedene Wege und verfolgen eigene Visionen. Einen Beitrag zur Energiewende wollen beide Städte und die dort ansässigen Stadtwerke leisten. Die Fernwärme ist sowohl in Kopenhagen als auch in München der zentrale Faktor. Kopenhagen ist bezogen auf den Anteil erneuerbarer Energien und den Ausbau des Fernwärmenetzes weiter fortgeschritten als München. Ebenso bestehen in Kopenhagen innovative Ideen zur nachhaltigen Erzeugung von Fernwärme und zur Einbettung verschiedener Anlagen in die Stadt und das Umland. München setzt auf die Nutzung von Geothermie, was zu einer sehr guten CO2-Bilanz beitragen kann. In den untersuchten Bereichen weisen beide Städte Fortschritte auf. Kopenhagen hat zeitlich betrachtet deutlich früher mit einer nachhaltigen Stadtentwicklung begonnen. Dementsprechend sind einige Pläne ausgereifter, und es finden sich in den untersuchten Bereichen mehr konkrete Umsetzungen. München könnte hier jedoch in den nächsten Jahren ähnlich weit voranschreiten, was unter anderem am Maßnahmenkatalog deutlich wird. Auch aus diesem Grund wäre die Betrachtung zu einem späteren Zeitpunkt interessant und würde weitere Aufschlüsse liefern. Durch die Untersuchung der Verkehrsinfrastruktur und der kommunalen Wärmeplanung beider Städte wurden Schlüsselaspekte einer nachhaltigen Stadtentwicklung berücksichtigt. Dennoch muss betont werden, dass bei weitem nicht alle Aspekte einer nachhaltigen Stadt berücksichtigt und untersucht werden konnten. Dies würde den Rahmen dieser Arbeit sprengen. 
Eine Untersuchung in weiteren Bereichen würde daher eine sinnvolle Ergänzung darstellen.

Literaturverzeichnis

Aachener Stiftung Kathy Beys (2015): Nachhaltiger Lebensstil (Aachener Stiftung Kathy Beys vom 16.12.2015) < https://www.nachhaltigkeit.info/artikel/nachhaltiger_lebensstil_1978.htm > (11.11.2023).

ADAC (2021): ÖPNV Tickets 2021: ADAC Studie zeigt gewaltige Preisunterschiede (ADAC vom 04.11.2021) < https://www.adac.de/reise-freizeit/ratgeber/tests/oepnv-preisvergleich/ > (19.11.2023).

Aden, Hartmut (2012): Umweltpolitik, VS Verlag für Sozialwissenschaften: Wiesbaden.

Außenministerium Dänemark (2020): A Green and Sustainable World - The Danish Government's long-term strategy for global climate action, Ministry of Foreign Affairs of Denmark: Kopenhagen < https://www.regeringen.dk/media/10084/a_green_and_sustainable_world.pdf > (09.12.2023).

Bildung für nachhaltige Entwicklung (2023): Städte und BNE (Bildung für nachhaltige Entwicklung 2023) < https://www.bne-portal.de/bne/de/einstieg/themen/nachhaltige-stadtentwicklung/nachhaltige-stadtentwicklung_node.html > (17.11.2023).

Britannica (2023): Copenhagen (Britannica vom 13.12.2023) < https://www.britannica.com/place/Copenhagen > (19.11.2023).

Buhofer, Stephan (2018): Der Klimawandel und die internationale Politik in Zahlen, Oekom: München, 2. Auflage.

Bundesministerium für wirtschaftliche Zusammenarbeit und Entwicklung (2023a): Die globalen Ziele für nachhaltige Entwicklung (Bundesministerium für wirtschaftliche Zusammenarbeit und Entwicklung 2023) < https://www.bmz.de/de/agenda-2030 > (14.11.2023).

Bundesministerium für wirtschaftliche Zusammenarbeit und Entwicklung (2023b): Nachhaltige Stadtentwicklung (Positionspapier). 
Bundesministerium für wirtschaftliche Zusammenarbeit und Entwicklung: Bonn < https://www.bmz.de/resource/blob/163312/positionspapier-nachhaltige-stadtentwicklung.pdf > (10.11.2023).

Bundesministerium für wirtschaftliche Zusammenarbeit und Entwicklung (2023c): Nachhaltigkeit (nachhaltige Entwicklung) (Bundesministerium für wirtschaftliche Zusammenarbeit und Entwicklung 2023) < https://www.bmz.de/de/service/lexikon/nachhaltigkeit-nachhaltige-entwicklung-14700 > (09.11.2023).

Bundesministerium für wirtschaftliche Zusammenarbeit und Entwicklung (2023d): Stadt und Klima (Bundesministerium für wirtschaftliche Zusammenarbeit und Entwicklung 2023) < https://www.bmz.de/de/themen/klimawandel-und-entwicklung/stadt-und-klima > (12.11.2023).

Burckhardt, Ines/Tappe, Susanne/Rehrmann, Marc-Oliver (2022): Kopenhagens Klima-Trumpf heißt Fernwärme für alle (Norddeutscher Rundfunk vom 15.05.2022) < https://www.ndr.de/nachrichten/info/Kopenhagens-Klima-Trumpf-heisst-Fernwaerme-fuer-alle,kopenhagen394.html > (17.11.2023).

Chatterton, Paul (2019): Die klimaneutrale Stadt. In: Blätter für deutsche und internationale Politik (Hrsg.): Unsere letzte Chance. Der Reader zur Klimakrise, Blätter Verlagsgesellschaft mbH: Berlin, S. 275–281.

Climate Service Center Germany (2015): Climate Focus Paper: Cities and Climate Change, Climate Service Center Germany: Hamburg und Frankfurt am Main < https://climate-adapt.eea.europa.eu/en/metadata/publications/climate-focus-paper-cities-and-climate-change/gerics_kfw_2015_climatefocuscities.pdf > (19.11.2023).

Climate-data.org (2023): Copenhagen Climate (Climate-Data.org 2023) < https://en.climate-data.org/europe/denmark/capital-region-of-denmark/copenhagen-23/ > (18.11.2023).

Deutscher Wetterdienst (2023): München (Flugh.) (Deutscher Wetterdienst 2023) < https://www.dwd.de/DE/wetter/wetterundklima_vorort/bayern/muenchen/_node.html > (15.11.2023).

Diehn, Sonya (2015): Wie wird man Umwelthauptstadt? 
(Deutsche Welle vom 18.06.2015) < https://www.dw.com/de/wie-wird-man-europas-umwelthauptstadt/a-18525603 > (07.11.2023).

Dröge, Susanne (2007): Die EU-Klimastrategie, Stiftung Wissenschaft und Politik: Berlin (= SWP-Aktuell) < https://www.swp-berlin.org/publications/products/aktuell/2007A12_dge_ks.pdf > (14.11.2023).

Dütz, Armand (2017): European Energy Award - der Weg zum kommunalen Klimaschutz, Fraunhofer Verlag: Stuttgart, 2. Auflage.

Dyvik, Einar H. (2023): Population of Copenhagen from 2013 to 2023 (Statista vom 10.08.2023) < https://www.statista.com/statistics/1303909/population-copenhagen/ > (13.11.2023).

Edenhofer, Ottmar/Jakob, Michael (2017): Klimapolitik, C.H. Beck: München.

Eppler, A. (2023): Kyoto-Protokoll (Bundeszentrale für politische Bildung 2023) < https://www.bpb.de/kurz-knapp/lexika/das-europalexikon/177102/kyoto-protokoll/ > (11.11.2023).

Europäische Kommission (2023): Der europäische Grüne Deal (Europäische Kommission 2023) < https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/european-green-deal_de > (12.11.2023).

Europarat (2023): 5 facts about the EU's goal of climate neutrality (European Council 2023) < https://www.consilium.europa.eu/en/5-facts-eu-climate-neutrality/ > (11.11.2023).

Findeisen, Jörg-Peter/Husum, Poul (2008): Kleine Geschichte Kopenhagens, Friedrich Pustet: Regensburg.

Firmhofer, Angela (2018): Pioniere des Wandels und städtische Kulturen der Nachhaltigkeit: Beispiele für zivilgesellschaftliche Transformation in München, Barcelona und Kopenhagen, Oekom: München.

Handel, Stephan (2023): Starkregen in München (Süddeutsche Zeitung vom 27.08.2023) < https://www.sueddeutsche.de/muenchen/muenchen-regen-wetterdienst-1.6168415?reduced=true > (11.11.2023).

Harrestrup, M./Svendsen, S. (2014): Heat planning for fossil-fuel-free district heating areas with extensive end-use heat savings: A case study of the Copenhagen district heating area in Denmark. In: Energy Policy (68). Elsevier, S. 
294–305 < https://doi.org/10.1016/j.enpol.2014.01.031 > (25.11.2023). Harter, Bettina (2023): Längste Hitzewelle: Neuer Wetterrekord in München (wetter.com vom 24.08.2023) < https://www.wetter.com/news/laengste-hitzewelle-neuer-wetterrekord-in-muenchen_aid_64e71ffdbf87bd2bb702f1f5.html > (11.11.2023).Heidenreich, Bärbel (2019): Kopenhagen (Planet Wissen (WDR) vom 23.07.2019). < https://www.planet-wissen.de/kultur/metropolen/kopenhagen/index.html > (14.11.2023). Heudorfer, Katharina (2023): Radschnellwege ins Münchner Umland: Der Anfang ist gemacht! (BR.de vom 28.06.2023) < https://www.br.de/nachrichten/bayern/radschnellwege-ins-muenchner-umland-der-anfang-ist-gemacht,TiS8vql > (18.11.2023). Hutter, Rolf (2019): Wie grün ist Münchens Öko-Strom wirklich? (Deutschlandfunk Kultur vom 25.06.2019) < https://www.deutschlandfunkkultur.de/energiewende-und-stromerzeugung-wie-gruen-ist-muenchens-100.html > (24.11.2023).Kafsack, Hendrik (2023): Geglückte Wärmewende: Dänen frieren nicht (Frankfurter Allgemeine Zeitung vom 02.12.2023) < https://www.faz.net/aktuell/wirtschaft/geglueckte-waermewende-in-daenemark-was-wir-daraus-lernen-koennen-19353650.html > (07.12.2023). Kallenbach, Theresa (2021): Von Kopenhagen lernen: Erfolgreiche Narrative für eine nachhaltige urbane Mobilität (IASS Discussion Paper), IASS Potsdam: Potsdam < https://publications.rifs-potsdam.de/rest/items/item_6000767_5/component/file_6000799/content > (01.12.2023). Koch, Florian/Krellenberg, Kerstin (2021): Nachhaltige Stadtentwicklung. Die Umsetzung der Sustainable Development Goals auf kommunaler Ebene, Springer VS:Wiesbaden.Kruse, Elke (2016): Kopenhagen: Vorreiter beim Thema "Überflutungsvorsorge" In: Korrespondenz Wasserwirtschaft 9 (11), S. 669-673 < https://www.hcu-hamburg.de/fileadmin/documents/REAP/files/KLIQ_Kopenhagen_KW_Korrespondenz_Wasserwirtschaft.pdf > (02.12.2023). 
Landeshauptstadt München (2023a): Autoreduzierte Altstadt (muenchen.de 2023) < https://stadt.muenchen.de/infos/autoreduzierte-altstadt.html > (12.11.2023).Landeshauptstadt München u. a. (2016): Konzept zur Anpassung an die Folgen des Klimawandels in der Landeshauptstadt München, Landeshauptstadt München, bifa Umweltinstitut, Institut für ökologische Wirtschaftsordnung, Deutscher Wetterdienst: München, Augsburg, Berlin < https://stadt.muenchen.de/dam/jcr:8eb68d50-5c21-4948-92ae-62fa129ab566/bericht_klwa_10_2016.pdf > (18.11.2023). Landeshauptstadt München (2023b): Mobilitätsplan: Verkehrsstrategie für München (muenchen.de 2023) < https://stadt.muenchen.de/infos/mobilitaetsplan.html > (14.11.2023).Landeshauptstadt München (2023c): Mobilitätsstrategie 2035 (München unterwegs online 2023) < https://muenchenunterwegs.de/2035 > (15.11.2023).Landeshauptstadt München (2022): Nachhaltigkeitsbericht München 2022. Indikatoren, Herausforderungen und Handlungspotentiale, Landeshauptstadt München: München < https://stadt.muenchen.de/dam/jcr:4f1b242a-0bff-49b0-b5fc-cba78afca3c9/Nachhaltigkeitsbericht_2022_Web.pdf > (17.11.2023). Landeshauptstadt München (2023d): Planungen für den Öffentlichen Personennahverkehr (ÖPNV) (muenchen.de 2023) < https://stadt.muenchen.de/infos/oeffentlicher-nahverkehr.html > (14.11.2023).Landeshauptstadt München (2023e): Über das Referat für Klima- und Umweltschutz (muenchen.de 2023) < https://stadt.muenchen.de/infos/portrait-referat-klima-umweltschutz.html > (17.11.2023).Landeshauptstadt München/Landratsamt München/Landratsamt Starnberg (2020): Vertiefte Machbarkeitsstudie radiale Radschnellverbindung. Münchner Innenstadt - Starnberg, Planersocietät: München; Starnberg; Dortmund < https://muenchenunterwegs.de/index.php/content/717/download/kurzfassung-rsv-munchen-starnberg.pdf > (30.11.2023). 
Landeszentrale für politische Bildung Baden-Württemberg (2023): Pariser Klimaabkommen (Landeszentrale für politische Bildung Baden-Württemberg vom September 2023) < https://www.lpb-bw.de/pariser-klimaabkommen# > (20.10.2023).Lehmbruch, Hans (2004): Ein neues München. In: Münchner Stadtmuseum (Hrsg.): München - wie geplant: Die Entwicklung der Stadt von 1158 bis 2008, Münchner Stadtmuseum: München, S. 37-66.Münchner Initiative Nachhaltigkeit (2023): Münchner Initiative Nachhaltigkeit (m-i-n.net 2023) < https://www.m-i-n.net > (16.11.2023).Münchner Stadtmuseum (Hrsg.): (2004). München - wie geplant: Die Entwicklung der Stadt von 1158 bis 2008, Münchner Stadtmuseum: München.Presse- und Informationsamt der Bundesregierung (2023): Ein Plan fürs Klima (Presse- und Informationsamt der Bundesregierung vom 21.06.2023) < https://www.bundesregierung.de/breg-de/themen/tipps-fuer-verbraucher/klimaschutzgesetz-2197410 > (14.11.2023). Schellnhuber, Hans Joachim u. a. (2010): Klimapolitik nach Kopenhagen, Wissenschaftlicher Beirat der Bundesregierung Globale Umweltveränderungen (Politikbericht): Berlin. < https://www.wbgu.de/fileadmin/user_upload/wbgu/publikationen/politikpapiere/pp6_2010/wbgu_pp2010.pdf > (10.11.2023).Schneider, Matthias (2022): Die Energie der Zukunft: Wird München bei Geothermie zur Blaupause für Deutschland? Merkur online vom 06.05.2022) < https://www.merkur.de/wirtschaft/geothermie-energie-zukunft-muenchen-deutschland-kraftwerk-anlage-projekt-sendling-zr-91526752.html > ( 21.11.2023).Scholz, Freimut (2004): Vom Marktflecken zur Residenzstadt. In: Münchner Stadtmuseum (Hrsg.): München - wie geplant: Die Entwicklung der Stadt von 1158 bis 2008, Münchner Stadtmuseum: München, S. 17–36.Søholt, Helle/Breitinger, Matthias (2014): Radfahrer machen eine Stadt erst richtig lebendig (Zeit online Interview vom 30.06.2014) < https://www.zeit.de/mobilitaet/2014-05/stadtplanung-fahrrad# > (27.11.2023). 
Stäbler, Patrik (2023): Verkehrsberuhigte Kolumbusstraße: "Befürworter und Gegner brüllen sich nur noch an" (Süddeutsche Zeitung online vom 08.08.2023) < https://www.sueddeutsche.de/muenchen/muenchen-kolumbusstrasse-verkehrsberuhigung-parkplaetze-auto-1.6108299 > (Zugriff am: 19.11.2023).Stadt Kopenhagen (2019): Copenhagen's Municipal Plan 2019, Stadt Kopenhagen: Kopenhagen < https://kp19.kk.dk/copenhagen-municipal-plan-2019 > (17.11.2023). Stadt Kopenhagen (2020): CPH 2025 Climate Plan, Stadt Kopenhagen: Kopenhagen < https://kk.sites.itera.dk/apps/kk_pub2/index.asp?mode=detalje&id=2062 > (16.11.2023). Stadt München (2023): München in Zahlen: Daten und Statistiken der Stadt München (muenchen.de 2023) < https://www.muenchen.de/sehenswuerdigkeiten/muenchen-zahlen-interessante-fakten-ueber-die-stadt > (19.11.2023). Stadtverwaltung Kopenhagen (2023): Climate Adaption, Stadtverwaltung Kopenhagen: Kopenhagen < https://international.kk.dk/about-copenhagen/liveable-green-city/climate-adaptation > (10.11.2023).Stadtverwaltung Kopenhagen (2011): Copenhagen Climate Adaption Plan. Stadtverwaltung Kopenhagen: Kopenhagen < https://international.kk.dk/sites/default/files/2021-09/Copenhagen%20Climate%20Adaptation%20Plan%20-%202011%20-%20short%20version.pdf > (26.11.2023). Stadtwerke München (2022): Der neue Energiestandort Süd (Stadtwerke München online vom 21.12.2022) < https://www.swm.de/magazin/energie/energiestandort-sued > (20.11.2023). 
Stadtwerke München (2023a): Geothermieanlage Michaelibad (Stadtwerke München online 2023) < https://www.swm.de/lp/geothermie-michaelibad > (20.11.2023).Stadtwerke München (2023b): Kraft-Wärme-Kopplung: Effektiv und ressourcenschonend (Stadtwerke München 2023) < https://www.swm.de/energiewende/kraft-waerme-kopplung > (21.11.2023).Stahleder, Helmuth (2023): München - ein chronologischer Überblick von Helmuth Stahleder (muenchen.de 2023) < https://stadt.muenchen.de/infos/stadtgeschichte.html > (13.11.2023).Statistisches Amt München (2023): Statistische Daten zur Münchner Bevölkerung (muenchen.de 2023) < https://stadt.muenchen.de/infos/statistik-bevoelkerung.html > (13.11.2023).Technik- und Umweltverwaltung Kopenhagen (2023): Skt. Kjelds Plads (Klimakvarter Teknik- og Miljøforvaltningen online 2023) < https://klimakvarter.dk/projekt/skt-kjelds-plads/ > (15.11.2023).Umweltbundesamt (2023): Kraft-Wärme-Kopplung (KWK) (Umweltbundesamt online vom 06.12.2023) < https://www.umweltbundesamt.de/daten/energie/kraft-waerme-kopplung-kwk > (10.12.2023).Vereinte Nationen (2023a): The Sustainable Development Goals Report, Vereinte Nationen: New York < https://unstats.un.org/sdgs/report/2023/The-Sustainable-Development-Goals-Report-2023.pdf > (19.11.2023). Vereinte Nationen (2023b): Transforming our world: The 2030 Agenda for Sustainable Development, Vereinte Nationen: New York. < https://sustainabledevelopment.un.org/content/documents/21252030%20Agenda%20for%20Sustainable%20Development%20web.pdf > (07.12.2023). Wagner, Katharina (2010): München hat den besten öffentlichen Nahverkehr (Frankfurter Allgemeine Zeitung online vom 18.02.2010) < https://www.faz.net/aktuell/gesellschaft/adac-test-muenchen-hat-den-besten-oeffentlichen-nahverkehr-1941703.html > (17.11.2023).Watjer, A. (2023): Pariser Klimaabkommen (Bundeszentrale für politische Bildung 2023) < https://www.bpb.de/kurz-knapp/lexika/das-europalexikon/309438/pariser-klimaabkommen/ > (14.11.2023). 
Weinsziehr, Theresa/Verhoog, Mart/Bruckner, Thomas (2014): Forschung für die und in der energieeffizienten Stadt. In: Wilde, Manfred (Hrsg.): Dier nachhaltige Stadt, Oldenbourg Wissenschaftsverlag: München, S. 3-14.Weltgesundheitsorganisation (2018): Millennium Development Goals (MDGs) (Weltgesundheitsorganisation online vom 19.02.2018) < https://www.who.int/news-room/fact-sheets/detail/millennium-development-goals-(mdgs) > (13.11.2023). Wolff, Reinhard (2022): Kopenhagen verfehlt Klimaziel (taz online vom 28.08.2022) < https://taz.de/Probleme-mit-dem-Restmuell/!5874726/ > (18.11.2023).
(This post continues part 1, which just looked at the data. Part 3, on theory, is here.) When the Fed raises interest rates, how does inflation respond? Are there "long and variable lags" to inflation and output?

There is a standard story: The Fed raises interest rates; inflation is sticky, so real interest rates (interest rate minus inflation) rise; higher real interest rates lower output and employment; the softer economy pushes inflation down. Each of these is a lagged effect. But despite 40 years of effort, theory struggles to substantiate that story (next post), it's hard to see in the data (last post), and the empirical work is ephemeral -- this post.

The vector autoregression and the related local projection are today the standard empirical tools to address how monetary policy affects the economy, and have been since Chris Sims' great work in the 1970s. (See Larry Christiano's review.) I am losing faith in the method and results. We need to find new ways to learn about the effects of monetary policy. This post expands on some thoughts on this topic in "Expectations and the Neutrality of Interest Rates," several of my papers from the 1990s,* and excellent recent reviews from Valerie Ramey and from Emi Nakamura and Jón Steinsson, who eloquently summarize the hard identification and computational troubles of contemporary empirical work.

Maybe popular wisdom is right, and economics just has to catch up. Perhaps we will. But a popular belief that does not have solid scientific theory and empirical backing, despite a 40-year effort to find models and data that will provide the desired answer, must be a bit less trustworthy than one that does have such foundations. Practical people should consider that the Fed may be less powerful than traditionally thought, and that its interest rate policy has different effects than commonly thought.
Whether and under what conditions high interest rates lower inflation, and whether they do so with long and variable but nonetheless predictable and exploitable lags, is much less certain than you think.

Here is a replication of one of the most famous monetary VARs, Christiano, Eichenbaum and Evans (1999), from Valerie Ramey's 2016 review:

Fig. 1. Christiano et al. (1999) identification. 1965m1–1995m6 full specification: solid black lines; 1983m1–2007m12 full specification: short-dashed blue (dark gray in the print version) lines; 1983m1–2007m12, omitting money and reserves: long-dashed red (gray in the print version) lines. Light gray bands are 90% confidence bands. Months on the x axis. Source: Ramey 2016.

The black lines plot the original specification. The top left panel plots the path of the federal funds rate after the Fed unexpectedly raises the interest rate. The funds rate goes up, but only for 6 months or so. Industrial production goes down and unemployment goes up, peaking at month 20. The figure plots the level of the CPI, so inflation is the slope of the lower right-hand panel. You see inflation goes the "wrong" way, up, for about 6 months, and then gently declines. Interest rates indeed seem to affect the economy with long lags.

This was the broad outline of consensus empirical estimates for many years. It is common to many other studies, and it is consistent with the beliefs of policy makers and analysts. It's pretty much what Friedman (1968) told us to expect. Getting contemporary models to produce something like this is much harder, but that's the next blog post.

What's a VAR?

I try to keep this blog accessible to nonspecialists, so I'll step back momentarily to explain how we produce graphs like these. Economists who know what a VAR is should skip to the next section heading. How do we measure the effect of monetary policy on other variables?
Milton Friedman and Anna Schwartz kicked it off in the Monetary History by pointing to the historical correlation of money growth with inflation and output. They knew, as we do, that correlation is not causation, so they pointed to the fact that money growth preceded inflation and output growth. But as James Tobin pointed out, the cock's crow comes before, but does not cause, the sun to rise. So too, people may go get out some money ahead of time when they see more future business activity on the horizon. Even correlation with a lead is not causation. What to do? Clive Granger's causality and Chris Sims' VAR, especially "Macroeconomics and Reality," gave today's answer. (And there is a reason that everybody mentioned so far has a Nobel prize.)

First, we find a monetary policy "shock," a movement in the interest rate (these days; money, back then) that is plausibly not a response to economic events, and especially not to expected future economic events. We think of the Fed setting interest rates by a response to economic data plus deviations from that response, such as

interest rate = (#) output + (#) inflation + (#) other variables + disturbance.

We want to isolate the "disturbance," movements in the interest rate not taken in response to economic events. (I use "shock" to mean an unpredictable variable, and "disturbance" to mean a deviation from an equation like the above, but one that can persist for a while. A monetary policy "shock" is an unexpected movement in the disturbance.) The "rule" part here can be, but need not be, the Taylor rule, and can include variables other than output and inflation. It is what the Fed usually does given other variables, and therefore (hopefully) controls for reverse causality from expected future economic events to interest rates. Now, in any individual episode, output and inflation following a shock will be influenced by subsequent shocks to the economy, monetary and otherwise. But those average out.
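To make the "rule plus disturbance" idea concrete, here is a minimal sketch of shock extraction as a rule residual, on entirely made-up data. The series, coefficients, and rule below are invented for illustration; real studies argue endlessly about which controls belong in the regression.

```python
# Sketch: identify a monetary policy "shock" as the residual from a policy rule.
# All data and coefficients are simulated -- an illustration, not an estimate.
import numpy as np

rng = np.random.default_rng(0)
T = 400
output_gap = rng.normal(0, 1, T)      # stand-in for output data
inflation = rng.normal(2, 1, T)       # stand-in for inflation data
true_shock = rng.normal(0, 0.25, T)   # the disturbance we hope to recover

# A hypothetical rule: rate = 1.0 + 0.5*output + 1.5*inflation + disturbance
funds_rate = 1.0 + 0.5 * output_gap + 1.5 * inflation + true_shock

# Regress the funds rate on the variables the Fed (supposedly) responds to;
# the residual is the identified "monetary policy shock."
X = np.column_stack([np.ones(T), output_gap, inflation])
beta, *_ = np.linalg.lstsq(X, funds_rate, rcond=None)
shock_hat = funds_rate - X @ beta

print(np.corrcoef(shock_hat, true_shock)[0, 1])  # near 1 here, by construction
```

The recovery works here only because the simulated Fed responds to nothing outside the regression; the whole identification debate below is about whether any real-world set of controls spans the Fed's information set.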
So the average value of inflation, output, employment, etc. following a monetary policy shock is a measure of how the shock affects the economy all on its own. That is what is plotted above.

VARs were one of the first big advances in the modern empirical quest to find "exogenous" variation and (somewhat) credibly establish causal relationships. Mostly, the huge literature varies in how one finds the "shocks." Traditional VARs run regressions of equations like the above and take the residual as the shock, with a big question being just how many and which contemporaneous variables one adds to the regression. Romer and Romer pioneered the "narrative approach," reading the Fed minutes to isolate shocks. Some technical details are at the bottom, and much more discussion below. The key is finding shocks. One can just regress output and inflation on the shocks to produce the response function, which is a "local projection," not a "VAR," but I'll use "VAR" for both techniques for lack of a better encompassing word.

Losing faith

Shocks, what shocks?

What's a "shock" anyway? The concept is that the Fed considers its forecast of inflation, output, and the other variables it is trying to control, gauges the usual and appropriate response, and then adds 25 or 50 basis points, at random, just for the heck of it. The question VARs try to answer is the same: What happens to the economy if the Fed raises interest rates unexpectedly, for no particular reason at all? But the Fed never does this. Ask them. Read the minutes. The Fed does not roll dice. They always raise or lower interest rates for a reason, that reason is always a response to something going on in the economy, and most of the time to how it affects forecasts of inflation and employment.
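The "regress outcomes on the shocks" step can itself be sketched in a few lines. Below is a toy Jordà-style local projection on simulated data: for each horizon h, regress the outcome h periods ahead on today's shock, and the slope traces out the impulse response. The data-generating process and its response shape are invented for illustration.

```python
# Toy local projection: the slope of y_{t+h} on shock_t estimates the impulse
# response at horizon h. The DGP below is invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
T, H = 500, 12
shock = rng.normal(0, 1, T)

# Invented true response: output dips after a shock, then recovers
true_irf = np.array([0.0, -0.2, -0.5, -0.7, -0.6, -0.4,
                     -0.2, -0.1, 0.0, 0.0, 0.0, 0.0])

y = rng.normal(0, 0.5, T + H)          # start with measurement noise
for t in range(T):
    y[t:t + H] += true_irf * shock[t]  # each shock feeds into the next H periods

# One regression per horizon; the slope coefficient is the estimated IRF
irf_hat = [np.polyfit(shock, y[h:h + T], 1)[0] for h in range(H)]
print(np.round(irf_hat, 2))
```

With a valid shock series, the estimated slopes recover the dip-and-recovery shape of `true_irf`; with a contaminated shock series, they recover something else entirely, which is the worry of the rest of this post.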
There are no shocks as defined.

I speculated here that we might get around this problem: if we knew the Fed was responding to something that had no correlation with future output, then even though that is an endogenous response, it is a valid movement for estimating the effect of interest rates on output. My example was: what if the Fed "responds" to the weather? Though endogenous, that is still valid for estimating the effect on output. The Fed does respond to lots of things, including foreign exchange, financial stability issues, equity markets, terrorist attacks, and so forth. But I can't think of any of these in which the Fed is not thinking of the event's effect on output and inflation, which is why I never took the idea far. Maybe you can.

Shock isolation also depends on complete controls for the Fed's information. If the Fed uses any information about future output and inflation that is not captured in our regression, then information about future output and inflation remains in the "shock" series. The famous "price puzzle" is a good example. For the first few decades of VARs, interest rate shocks seemed to lead to higher inflation. It took a long specification search to get rid of this undesired result. The story was that the Fed saw inflation coming in ways not completely controlled for by the regression. The Fed raised interest rates to try to forestall the inflation, but was a bit hesitant about it, so did not cure the inflation that was coming. We see higher interest rates followed by higher inflation, though the true causal effect of interest rates goes the other way. This problem was "cured" by adding commodity prices to the interest rate rule, on the idea that fast-moving commodity prices would capture the information the Fed was using to forecast inflation. (Interestingly, these days we seem to see core inflation as the best forecaster, and throw out commodity prices!)
With those and some careful orthogonalization choices, the "price puzzle" was tamped down to the one-year or so delay you see above. (Neo-Fisherians might object that maybe the price puzzle was trying to tell us something all these years!) Nakamura and Steinsson write of this problem:

"What is being assumed is that controlling for a few lags of a few variables captures all endogenous variation in policy... This seems highly unlikely to be true in practice. The Fed bases its policy decisions on a huge amount of data. Different considerations (in some cases highly idiosyncratic) affect policy at different times. These include stress in the banking system, sharp changes in commodity prices, a recent stock market crash, a financial crisis in emerging markets, terrorist attacks, temporary investment tax credits, and the Y2K computer glitch. The list goes on and on. Each of these considerations may only affect policy in a meaningful way on a small number of dates, and the number of such influences is so large that it is not feasible to include them all in a regression. But leaving any one of them out will result in a monetary policy "shock" that the researcher views as exogenous but is in fact endogenous."

Nakamura and Steinsson offer 9/11 as another example, summarizing my "high frequency identification" paper with Monika Piazzesi: the Fed lowered interest rates after the terrorist attack, likely reacting to its consequences for output and inflation, but VARs register the event as an exogenous shock. Romer and Romer suggested that we use the Fed's Greenbook forecasts of inflation and output as controls, as those should represent the Fed's complete information set. They provide narrative evidence that Fed members trust Greenbook forecasts more than you might suspect. This issue is a general Achilles' heel of empirical macro and finance: Does your procedure assume agents see no more information than you have included in the model or estimate? If yes, you have a problem.
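The omitted-information mechanism behind the price puzzle is easy to reproduce in a toy simulation. Here the Fed watches a signal of next period's inflation that the econometrician omits from the rule regression; by construction, the interest rate has no causal effect on anything, yet the measured "shock" is followed by higher inflation. All numbers are invented.

```python
# Toy "price puzzle": the Fed reacts to a signal of future inflation that the
# econometrician omits. The measured "shock" then spuriously predicts higher
# inflation, even though rates here have, by construction, no causal effect.
import numpy as np

rng = np.random.default_rng(2)
T = 2000
signal = rng.normal(0, 1, T)                  # Fed's private forecast information
infl = np.empty(T + 1)
infl[0] = 0.0
infl[1:] = 0.7 * signal + rng.normal(0, 1, T)  # next-period inflation loads on signal
true_shock = rng.normal(0, 1, T)
rate = 1.5 * infl[:T] + 1.0 * signal + true_shock  # rule responds to the signal too

# Econometrician controls only for current inflation, not the signal:
X = np.column_stack([np.ones(T), infl[:T]])
b, *_ = np.linalg.lstsq(X, rate, rcond=None)
shock_hat = rate - X @ b  # contaminated: still contains the signal

# "Response" of next-period inflation to the measured shock:
slope = np.polyfit(shock_hat, infl[1:], 1)[0]
print(slope)  # positive: higher measured "shock" is followed by higher inflation
```

Adding `signal` to the controls would drive the slope to zero, which is exactly the role commodity prices (or Greenbook forecasts) play in the literature's fix.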
Similarly, "Granger causality" answers the cock's-crow-and-sunrise problem by saying that if unexpected x leads unexpected y, then x causes y. But it's only real causality if the "expected" includes all information, as the price puzzle counterexample shows.

Just what properties do we need of a shock in order to measure the response to the question, "What if the Fed raised rates for no reason?" This strikes me as a bit of an unsolved question -- or rather, one that everyone thinks is so obvious that we don't really look at it. My suggestion that the shock need only be orthogonal to the variable whose response we're estimating is informal, and I don't know of formal literature that has picked it up. Must "shocks" be unexpected, i.e. not forecastable from anything in the previous period's information set? Must they surprise people? I don't think so -- it is neither necessary nor sufficient for a shock to be unforecastable for it to identify the inflation and output responses. Not responding to expected values of the variable whose response you want to measure should be enough. If bond markets found out about a random funds rate rise one day ahead, it would then be an "expected" shock, but clearly just as good for macro. Romer and Romer have been criticized on the grounds that their shocks are predictable, but this may not matter. The above Nakamura and Steinsson quote says that leaving out any information leads to a shock that is not strictly exogenous. But strict exogeneity may not be necessary for estimating, say, the effect of interest rates on inflation. It is enough to rule out reverse causality and third effects.
Either I'm missing a well-known econometric literature, as is everyone else writing the VARs I've read who don't cite it, or there is a good theory paper to be written.

Romer and Romer, thinking deeply about how to read "shocks" from the Fed minutes, define shocks thus to circumvent the "there are no shocks" problem:

"we look for times when monetary policymakers felt the economy was roughly at potential (or normal) output, but decided that the prevailing rate of inflation was too high. Policymakers then chose to cut money growth and raise interest rates, realizing that there would be (or at least could be) substantial negative consequences for aggregate output and unemployment. These criteria are designed to pick out times when policymakers essentially changed their tastes about the acceptable level of inflation. They weren't just responding to anticipated movements in the real economy and inflation." [My emphasis.]

You can see the issue. This is not an "exogenous" movement in the funds rate. It is a response to inflation, and to expected inflation, with a clear eye on expected output as well. It really is a nonlinear rule: ignore inflation for a while, until it gets really bad, then finally get serious about it. Or, as they say, it is a change in rule, an increase in the sensitivity of the short-run interest rate response to inflation, taken in response to inflation seeming to get out of control in a longer-run sense. Does this identify the response to an "exogenous" interest rate increase? Not really. But maybe it doesn't matter.

Are we even asking an interesting question?

The whole question, what would happen if the Fed raised interest rates for no reason, is arguably beside the point. At a minimum, we should be clearer about what question we are asking, and whether the policies we analyze are implementations of that question. The question presumes a stable "rule" (e.g.
\(i_t = \rho i_{t-1} + \phi_\pi \pi_t + \phi_x x_t + u_t\)) and asks what happens in response to a deviation \( +u_t \) from the rule. Is that an interesting question? The standard story for 1980-1982 is exactly not such an event. Inflation was not conquered by a big "shock," a big deviation from 1970s practice while keeping that practice intact. Inflation was conquered (so the story goes) by a change in the rule, by a big increase in \(\phi_\pi\). That change raised interest rates, but arguably without any deviation \(u_t\) from the new rule at all. Thinking in terms of the Phillips curve \( \pi_t = E_t \pi_{t+1} + \kappa x_t \), it was not a big negative \(x_t\) that brought down inflation, but the credibility of the new rule that brought down \(E_t \pi_{t+1}\). If the art of reducing inflation is to convince people that a new regime has arrived, then the response to any monetary policy "shock" orthogonal to a stable "rule" completely misses that policy.

Romer and Romer are almost talking about a rule-change event. For 2022, they might be looking at the Fed's abandonment of flexible average inflation targeting and its return to a Taylor rule. However, they don't recognize the importance of the distinction, treating changes in rule as equivalent to residuals. Changing the rule changes expectations in quite different ways from a residual of a stable rule. Changes with a bigger commitment should have bigger effects, and one should standardize somehow by the size and permanence of the rule change, not necessarily by the size of the interest rate rise. And, having asked "what if the Fed changes its rule to be more serious about inflation," we really cannot use the analysis to estimate what happens if the Fed shocks interest rates and does not change the rule. It takes some mighty invariance result from an economic theory for a change in rule to have the same effect as a shock to a given rule. There is no right and wrong, really.
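The shock-versus-rule-change distinction can be made concrete in the simplest frictionless (Fisherian) sketch: flexible prices, no interest smoothing, no output term in the rule. This is an illustration of the logic under those stripped-down assumptions, not the full new Keynesian model:

```latex
% Fisher equation and rule (frictionless sketch; no smoothing, no output term):
%   i_t = r + E_t \pi_{t+1}, \qquad i_t = r + \phi_\pi \pi_t + u_t,
% with AR(1) disturbance u_t = \rho_u u_{t-1} + \varepsilon_t and \phi_\pi > 1 > \rho_u.
\begin{align}
E_t \pi_{t+1} &= \phi_\pi \pi_t + u_t \\
\pi_t &= -\frac{u_t}{\phi_\pi - \rho_u}
  \quad \text{(unique bounded solution)} \\
\operatorname{var}(\pi_t) &= \frac{\sigma_\varepsilon^2}{(\phi_\pi - \rho_u)^2\,(1 - \rho_u^2)}
\end{align}
```

In this sketch a disturbance \(u_t\) within the rule moves inflation temporarily, while a change of rule, a permanent rise in \(\phi_\pi\), shrinks the response to every shock and the unconditional variance of inflation with no \(u_t\) at all: two entirely different experiments, of which the VAR measures only the first.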
We just need to be more careful about what question the empirical procedure asks, whether we want to ask that question, and whether our policy analysis actually asks the same question.

Estimating rules: Clarida, Galí, and Gertler

Clarida, Galí, and Gertler (2000) is a justly famous paper, and notable in this context for doing something totally different to evaluate monetary policy. They estimate rules, fancy versions of \(i_t = \rho i_{t-1} + \phi_\pi \pi_t + \phi_x x_t + u_t\), and they estimate how the \(\phi\) parameters change over time. They attribute the end of 1970s inflation to a change in the rule, a rise in \(\phi_\pi\) from the 1970s to the 1980s. In their model, a higher \(\phi_\pi\) results in less volatile inflation. They do not estimate any response functions. The rest of us were watching the wrong thing all along. Responses to shocks weren't the interesting quantity; changes in the rule were. Yes, I criticized the paper, but for issues that are irrelevant here. (In the new Keynesian model, the parameter that reduces inflation isn't the one they estimate.) The important point here is that they are doing something completely different, and offer us a roadmap for how else we might evaluate monetary policy, if not by impulse-response functions to monetary policy shocks.

Fiscal theory

The interesting question for fiscal theory is: What is the effect of an interest rate rise not accompanied by a change in fiscal policy? What can the Fed do by itself? By contrast, standard models (both new and old Keynesian) include concurrent fiscal policy changes when interest rates rise. Governments tighten in present-value terms, at least to pay the higher interest costs on the debt and the windfall to bondholders that flows from unexpected disinflation. Experience and estimates surely include fiscal changes along with monetary tightening. Both fiscal and monetary authorities react to inflation with policy actions and reforms.
Growth-oriented microeconomic reforms with fiscal consequences often follow as well -- rampant inflation may have had something to do with Carter-era trucking, airline, and telecommunications deregulation. Yet no current estimate tries to look for a monetary shock orthogonal to fiscal policy changes. The estimates we have are at best the effects of monetary policy together with whatever induced or coincident fiscal and microeconomic policy tends to happen at the same time as central banks get serious about fighting inflation. Identifying the component of a monetary policy shock orthogonal to fiscal policy, and measuring its effects, is a first-order question for the fiscal theory of monetary policy. That's why I wrote this blog post: I set out to do it, and then started to confront how VARs are already falling apart in our hands.

Just what "no change in fiscal policy" means is an important question that varies by application. (Lots more in "fiscal roots" here, fiscal theory of monetary policy here and in FTPL.) For simple calculations, I just ask what happens if interest rates change with no change in primary surpluses. One might also define "no change" as no change in tax rates, automatic stabilizers, or even habitual discretionary stimulus and bailouts: no disturbance \(u_t\) in a fiscal rule \(s_t = a + \theta_\pi \pi_t + \theta_x x_t + ... + u_t\). There is no right and wrong here either; there is just making sure you ask an interesting question.

Long and variable lags, and persistent interest rate movements

The first plot shows a mighty long lag between the monetary policy shock and its effect on inflation and output. That does not mean that the economy has long and variable lags. This plot is actually not representative, because in the black lines the interest rate itself quickly reverts to zero. It is common to find a more protracted interest rate response to the shock, as shown in the red and blue lines.
That mirrors common sense: when the Fed starts tightening, it sets off a year or so of stair-step further increases, then a plateau, before a similar stair-step reversion. That raises the question: does the long-delayed response of output and inflation represent a delayed response to the initial monetary policy shock, or does it represent a nearly instantaneous response to the higher subsequent interest rates that the shock sets off? Another way of putting the question: is the response of inflation and output invariant to changes in the response of the funds rate itself? Do persistent and transitory funds rate changes have the same responses?

If you think of the inflation and output responses as economic responses to the initial shock only, then it does not matter whether interest rates revert immediately to zero or go on a 10-year binge following the initial shock. That seems like a pretty strong assumption. If you think that a more persistent interest rate response would lead to a larger or more persistent output and inflation response, then you think some of what we see in the VARs is a quick structural response to the later, higher interest rates, when they come.

Back in 1998, I posed this question in "What do the VARs mean?" and showed you can read it either way. The persistent output and inflation responses can represent either long economic lags to the initial shock, or much less laggy responses to interest rates when they come. I showed how to deconvolute the response function into the structural effect of interest rates on inflation and output, given how persistently interest rates rise. The inflation and output responses might be the same with shorter funds rate responses, or they might be much different. Obviously (though often forgotten), whether the inflation and output responses are invariant to changes in the funds rate response needs a model.
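The deconvolution point can be illustrated with a toy calculation. Suppose (purely for illustration, with invented numbers) that the observed inflation response is the convolution of a short structural interest-rate-to-inflation response with the funds rate's own persistent response path. Then, given the two observed impulse responses, the structural response can be backed out by solving a lower-triangular system:

```python
# Toy deconvolution: a short structural response convolved with a persistent
# funds rate path produces a long, drawn-out observed response. Given the two
# observed IRFs, solve the triangular system to recover the structural one.
# All numbers are invented for illustration.
import numpy as np

H = 8
rate_irf = np.array([1.0, 0.8, 0.6, 0.4, 0.2, 0.1, 0.0, 0.0])   # persistent rate path
struct = np.array([0.0, -0.3, -0.2, -0.1, 0.0, 0.0, 0.0, 0.0])  # short structural lags
infl_irf = np.convolve(struct, rate_irf)[:H]                    # what the VAR reports

# infl_irf[h] = sum_j struct[j] * rate_irf[h-j]: a lower-triangular system in struct
R = np.zeros((H, H))
for h in range(H):
    R[h, : h + 1] = rate_irf[h::-1]   # R[h, j] = rate_irf[h - j]
struct_hat = np.linalg.solve(R, infl_irf)

print(np.round(struct_hat, 3))  # recovers `struct` exactly
```

The observed `infl_irf` drags on well past the structural lags, so the same long-looking VAR response is consistent with either interpretation; only a model says which deconvolution is the right one.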
If, in the economic model, only unexpected interest rate movements affect output and inflation, though with lags, then the responses are, as conventionally read, structural responses, invariant to the interest rate path. There is no such economic model. Lucas (1972) says only unexpected money affects output, but with no lags, and expected money affects inflation. New Keynesian models have very different responses to permanent vs. transitory interest rate shocks. Interestingly, Romer and Romer do not see it this way, and regard their responses as structural long and variable lags, invariant to the interest rate response. They opine that given their reading of a positive shock in 2022, a long and variable lag to inflation reduction is baked in, no matter what the Fed does next. They argue that the Fed should stop raising interest rates. (In fairness, it doesn't look like they thought about the issue much, so this is an implicit rather than explicit assumption.) The alternative view is that the effects of a shock on inflation are really effects of the subsequent rate rises on inflation, that the inflation impulse response function is not invariant to the funds rate response, so stopping the standard tightening cycle would undo the inflation response. Argue either way, but at least recognize the important assumption behind the conclusions. Was the success of inflation reduction in the early 1980s just a long-delayed response to the first few shocks? Or was the early 1980s the result of persistent large real interest rates following the initial shock? (Or, something else entirely, a coordinated fiscal-monetary reform... But I'm staying away from that and just discussing conventional narratives, not necessarily the right answer.)
If the latter, which is the conventional narrative, then you think it does matter if the funds rate shock is followed by more funds rate rises (or positive deviations from a rule), that the output and inflation response functions do not directly measure long lags from the initial shock. Deconvoluting the structural funds-rate-to-inflation response and the persistent funds rate response, you would estimate much shorter structural lags. Nakamura and Steinsson are of this view: "While the Volcker episode is consistent with a large amount of monetary nonneutrality, it seems less consistent with the commonly held view that monetary policy affects output with 'long and variable lags.' To the contrary, what makes the Volcker episode potentially compelling is that output fell and rose largely in sync with the actions [interest rates, not shocks] of the Fed." And that's a good thing too. We've done a lot of dynamic economics since Friedman's 1968 address. There is really nothing in dynamic economic theory that produces a structural long-delayed response to shocks, without the continued pressure of high interest rates. (A correspondent objects to "largely in sync," pointing out several clear months-long lags between policy actions and results in 1980. The quote is here for the methodological point, not the historical one.) However, if the output and inflation responses are not invariant to the interest rate response, then the VAR directly measures an incredibly narrow experiment: What happens in response to a surprise interest rate rise, followed by the plotted path of interest rates? And that plotted path is usually pretty temporary, as in the above graph. What would happen if the Fed raised rates and kept them up, a la 1980? The VAR is silent on that question. You need to calibrate some model to the responses we have to infer that answer. VARs and shock responses are often misread as generic theory-free estimates of "the effects of monetary policy." They are not.
At best, they tell you the effect of one specific experiment: A random increase in the funds rate, on top of a stable rule, followed by the usual subsequent path of the funds rate. Any other implication requires a model, explicit or implicit. More specifically, without that clearly false invariance assumption, VARs cannot directly answer a host of important questions. Two on my mind: 1) What happens if the Fed raises interest rates permanently? Does inflation eventually rise? Does it rise in the short run? These are the "Fisherian" and "neo-Fisherian" questions, and the answer "yes" pops unexpectedly out of the standard new-Keynesian model. 2) Is the short-run negative response of inflation to interest rates stronger for more persistent rate rises? The long-term debt fiscal theory mechanism for a short-term inflation decline is tied to the persistence of the shock and the maturity structure of the debt. The responses to short-lived interest rate movements (top left panel) are silent on these questions. "Directly" is an important qualifier. It is not impossible to answer these questions, but you have to work harder to identify persistent interest rate shocks. For example, Martín Uribe identifies permanent vs. transitory interest rate shocks, and finds a positive response of inflation to permanent interest rate rises. How? You can't just pick out the interest rate rises that turned out to be permanent. You have to find shocks or components of the shock that are ex-ante predictably going to be permanent, based on other forecasting variables and the correlation of the shock with other shocks. For example, a short-term rate shock that also moves long-term rates might be more permanent than one that does not. (That requires the expectations hypothesis, which doesn't work, and long-term interest rates move too much anyway in response to transitory funds rate shocks. So, this is not directly a suggestion, just an example of the kind of thing one must do.
Uribe's model is more complex than I can summarize in a blog.) Given how small and ephemeral the shocks are already, subdividing them into those that are expected to have permanent vs. transitory effects on the federal funds rate is obviously a challenge. But it's not impossible.

Monetary policy shocks account for small fractions of inflation, output and funds rate variation

Friedman thought that most recessions and inflations were due to monetary mistakes. The VARs pretty uniformly deny that result. The effects of monetary policy shocks on output and inflation add up to less than 10 percent of the variation of output and inflation. In part the shocks are small, and in part the responses to the shocks are small. Most recessions come from other shocks, not monetary mistakes. Worse, both in data and in models, most inflation variation comes from inflation shocks, most output variation comes from output shocks, etc. The cross-effects of one variable on another are small. And "inflation shock" (or "marginal cost shock"), "output shock" and so forth are just labels for our ignorance -- error terms in regressions, unforecasted movements -- not independently measured quantities. (This is an old point, made for example in my 1994 paper with the great title "Shocks." Technically, the variance of output is the sum of the squares of the impulse-response functions -- the plots -- times the variance of the shocks. Thus small shocks and small responses mean not much variance explained.)

This is a deep point. The exquisite attention paid to the effects of monetary policy in new-Keynesian models, while interesting to the Fed, is then largely beside the point if your question is what causes recessions. Comprehensive models work hard to match all of the responses, not just to monetary policy shocks. But it's not clear that the nominal rigidities that are important for the effects of monetary policy are deeply important to other (supply) shocks, and vice versa. This is not a criticism.
Economics always works better if we can use small models that focus on one thing -- growth, recessions, the distorting effect of taxes, the effect of monetary policy -- without having to have a model of everything in which all effects interact. But, be clear, we no longer have a model of everything. "Explaining recessions" and "understanding the effects of monetary policy" are somewhat separate questions.

Monetary policy shocks also account for small fractions of the movement in the federal funds rate itself. Most of the funds rate movement is in the rule, the reaction-to-the-economy term. Like much empirical economics, the quest for causal identification leads us to look at tiny causes with tiny effects, which do little to explain much variation in the variable of interest (inflation). Well, cause is cause, and the needle is the sharpest item in the haystack. But one worries about the robustness of such tiny effects, and to what extent they summarize historical experience. To be concrete, here is a typical shock regression, 1960:1-2023:6 monthly data, standard errors in parentheses:

ff(t) = a + b ff(t-1) + c [ff(t-1) - ff(t-2)] + d CPI(t) + e unemployment(t) + monetary policy shock,

where "CPI" is the percent change in the CPI (CPIAUCSL) from a year earlier.

              ff(t-1)   ff(t-1)-ff(t-2)   CPI       Unemp
coefficient    0.97      0.39             0.032     -0.017
(s.e.)        (0.009)   (0.07)           (0.013)    (0.009)
R² = 0.985

The funds rate is persistent -- the lag term (0.97) is large. Recent changes matter too: Once the Fed starts a tightening cycle, it's likely to keep raising rates. And the Fed responds to CPI and unemployment. The plot shows the actual federal funds rate (blue), the model or predicted federal funds rate (red), the shock which is the difference between the two (orange), and the Romer and Romer dates (vertical lines). You can't see the difference between actual and predicted funds rate, which is the point. They are very similar and the shocks are small.
They are closer horizontally than vertically, so the vertical difference plotted as the shock is still visible. The shocks are much smaller than the funds rate, and smaller than the rise and fall in the funds rate in a typical tightening or loosening cycle. The shocks are bunched, with by far the biggest ones in the early 1980s. The shocks have been tiny since the 1980s. (Romer and Romer don't find any shocks!) Now, our estimates of the effect of monetary policy look at the average values of inflation, output, and employment in the 4-5 years after a shock. Really, you say, looking at the graph? That's going to be dominated by the experience of the early 1980s. And with so many positive and negative shocks close together, the average value 4 years later is going to be driven by the subtle timing of when the positive or negative shocks line up with later events. Put another way, here is a plot of inflation 30 months after a shock regressed on the shock. Shock on the x axis, subsequent inflation on the y axis. The slope of the line is our estimate of the effect of the shock on inflation 30 months out (source, with details). Hmm. One more graph (I'm having fun here):

This is a plot of inflation for the 4 years after each shock, times that shock. The right-hand side is the same graph with an expanded y scale. The average of these histories is our impulse response function. (The big lines are the episodes that multiply the big shocks of the early 1980s. They mostly converge because, whether multiplied by positive or negative shocks, inflation went down in the 1980s.) Impulse response functions are just quantitative summaries of the lessons of history. You may be underwhelmed by how clear a story history is sending. Again, welcome to causal economics -- tiny average responses to tiny but identified movements are what we estimate, not broad lessons of history.
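The mechanics of the shock regression above are easy to reproduce. Here is a sketch run on simulated data, with the true coefficients set to the estimates in the table; the CPI and unemployment series are random stand-ins, so this only illustrates how the shock series falls out as the residual:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 760  # roughly 1960:1-2023:6 monthly observations

# synthetic series loosely mimicking the regression in the text (illustrative only)
cpi = rng.normal(3.5, 2.0, T)      # 12-month CPI inflation, percent
unemp = rng.normal(6.0, 1.5, T)    # unemployment rate, percent
shocks = rng.normal(0, 0.25, T)    # small "monetary policy shocks"
ff = np.zeros(T)
for t in range(2, T):
    ff[t] = (0.97 * ff[t - 1] + 0.39 * (ff[t - 1] - ff[t - 2])
             + 0.032 * cpi[t] - 0.017 * unemp[t] + shocks[t])

# OLS: regress ff(t) on a constant, ff(t-1), the lagged change, CPI, unemployment
X = np.column_stack([np.ones(T - 2), ff[1:-1], ff[1:-1] - ff[:-2], cpi[2:], unemp[2:]])
y = ff[2:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# the estimated monetary policy shock series is the residual
resid = y - X @ beta
```

As in the actual data, the residual "shocks" are far smaller than the funds rate movements themselves; most of the funds rate is the rule.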
We do not estimate "what is the effect of the sustained high real interest rates of the early 1980s," for example, or "what accounts for the sharp decline of inflation in the early 1980s?" Perhaps we should, though confronting the endogeneity of the interest rate responses some other way. That's my main point today.

Estimates disappear after 1982

Ramey's first variation in the first plot is to use data from 1983 to 2007. Her second variation is to also omit the monetary variables. Christiano, Eichenbaum and Evans were still thinking in terms of money supply control, but our Fed does not control the money supply. The evidence that higher interest rates lower inflation disappears after 1983, with or without money. This too is a common finding. It might be because there simply aren't any monetary policy shocks. Still, we're driving a car with a yellowed AAA road map dated 1982. Monetary policy shocks still seem to affect output and employment, just not inflation. That poses a deeper problem. If there just weren't any monetary policy shocks, we would just get big standard errors on everything. That only the inflation response disappears points to the vanishing Phillips curve, which will be the weak point in the theory to come. It is through the Phillips curve that lower output and employment push down inflation. But without the Phillips curve, the whole standard story for interest rates to affect inflation goes away.

Computing long-run responses

The long lags of the above plot are already pretty long horizons, with interesting economics still going on at 48 months. As we get interested in long-run neutrality, identification via long-run sign restrictions (monetary policy should not permanently affect output), and the effect of persistent interest rate shocks, we are interested in even longer-run responses. The "long run risks" literature in asset pricing is similarly crucially interested in long-run properties. Intuitively, we should know this will be troublesome.
There aren't all that many nonoverlapping 4-year periods after interest rate shocks with which to measure effects, let alone 10-year periods.

VARs estimate long-run responses with a parametric structure. Organize the data (output, inflation, interest rate, etc.) into a vector \(x_t = [y_t \; \pi_t \; i_t \; ...]'\); then the VAR can be written \(x_{t+1} = Ax_t + u_t\). We start from zero, move \(x_1 = u_1\) in an interesting way, and then the response function just simulates forward, with \(x_j = A^j x_1\). But here an oft-forgotten lesson of 1980s econometrics pops up: It is dangerous to estimate long-run dynamics by fitting a short-run model and then finding its long-run implications. Raising matrices to the 48th power \(A^{48}\) can do weird things, and the 120th power (10 years) weirder things. OLS and maximum likelihood prize one-step-ahead \(R^2\), and will happily accept small one-step-ahead misspecifications that add up to big misspecification 10 years out. (I learned this lesson in "How big is the random walk in GNP?") Long-run implications are driven by the maximum eigenvalue of the \(A\) transition matrix and its associated eigenvector: \(A^j = Q \Lambda^j Q^{-1}\). This is a benefit and a danger. Specify and estimate the dynamics of the combination of variables with the largest eigenvalue right, and lots of details can be wrong. But standard estimates aren't trying hard to get these right. The "local projection" alternative directly estimates long-run responses: Run regressions of inflation in 10 years on the shock today. You can see the tradeoff: there aren't many non-overlapping 10-year intervals, so this will be imprecisely estimated. The VAR makes a strong parametric assumption about long-run dynamics. When it's right, you get better estimates. When it's wrong, you get misspecification. My experience running lots of VARs is that monthly VARs raised to large powers often give unreliable responses. Run at least a one-year VAR before you start looking at long-run responses.
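The eigenvalue point can be made concrete with a made-up 2x2 monthly transition matrix. Raising it to the 48th power is, to numerical precision, a rank-one matrix built from the largest eigenvalue and its eigenvector alone, so everything about the four-year response hinges on estimating that one combination of variables well:

```python
import numpy as np

# hypothetical monthly transition matrix for [inflation, output]
A = np.array([[0.98, 0.05],
              [0.02, 0.70]])

# eigendecomposition A = Q diag(lam) Q^{-1}, sorted by eigenvalue magnitude
lam, Q = np.linalg.eig(A)
order = np.argsort(-np.abs(lam))
lam, Q = lam[order], Q[:, order]
Qinv = np.linalg.inv(Q)

# exact 48-month power vs. the rank-one term from the largest eigenvalue only
A48 = np.linalg.matrix_power(A, 48)
approx = (lam[0] ** 48 * np.outer(Q[:, 0], Qinv[0, :])).real
```

The second eigenvalue's contribution dies off as \(0.70^{48}\), which is numerically negligible; the rank-one approximation is essentially exact at four years. Misspecify the dominant eigenvalue slightly and \(A^{48}\) moves a lot.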
Cointegrating vectors are the most reliable variables to include. They are typically the state variables that most reliably carry long-run responses. But pay attention to getting them right. Imposing integrating and cointegrating structure by just looking at units is a good idea. The regression of long-run returns on dividend yields is a good example. The dividend yield is a cointegrating vector, and is the slow-moving state variable. A one-period VAR \[\left[ \begin{array}{c} r_{t+1} \\ dp_{t+1} \end{array} \right] = \left[ \begin{array}{cc} 0 & b_r \\ 0 & \rho \end{array}\right] \left[ \begin{array}{c} r_{t} \\ dp_{t} \end{array}\right]+ \varepsilon_{t+1}\] implies a long-horizon regression \(r_{t+j} = b_r \rho^j dp_{t} +\) error. Direct regressions ("local projections") \(r_{t+j} = b_{r,j} dp_t + \) error give about the same answers, though with much larger standard errors; the downward bias in \(\rho\) estimates is a bit of an issue. The constraint \(b_{r,j} = b_r \rho^j\) isn't bad. But it can easily go wrong. If you don't impose that dividends and price are cointegrated, or use a cointegrating vector other than (1, -1), if you allow a small sample to estimate \(\rho>1\), or if you don't put in dividend yields at all and just a lot of short-run forecasters, it can all go badly. Forecasting bond returns was for me a good counterexample. A VAR forecasting one-year bond returns from today's yields gives very different results from taking a monthly VAR, even with several lags, and using \(A^{12}\) to infer the one-year return forecast. Small pricing errors or microstructure noise dominate the monthly data, and produce junk when raised to the twelfth power. (Climate regressions are having fun with the same issue. Small estimated effects of temperature on growth, raised to the 100th power, can produce nicely calamitous results. But use basic theory to think about units.)
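The cross-horizon constraint the VAR imposes is easy to tabulate. With illustrative values for the one-period coefficients (the numbers are hypothetical, roughly the ballpark of annual return-forecasting estimates, not taken from any dataset), iterating the VAR forward pins down every long-horizon coefficient:

```python
import numpy as np

b_r, rho = 0.10, 0.94   # hypothetical one-year return coefficient and dp persistence

# implied coefficient of r_{t+j} on dp_t from iterating the one-period VAR forward
# (the text's b_r * rho^j constraint, up to indexing convention)
j = np.arange(1, 11)
implied = b_r * rho ** (j - 1)

# implied coefficient in a cumulative j-year return regression
cumulative = np.cumsum(implied)
```

The single-horizon coefficients decay geometrically while the cumulative coefficient grows toward \(b_r/(1-\rho)\); a direct local projection estimates each `implied[j]` freely, with correspondingly larger standard errors.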
Nakamura and Steinsson (appendix) show how sensitive some standard estimates of impulse response functions are to these questions.

Weak evidence

For the current policy question, I hope you get a sense of how weak the evidence is for the "standard view" that higher interest rates reliably lower inflation, though with a long and variable lag, and that the Fed has a good deal of control over inflation. Yes, many estimates look the same, but there is a pretty strong prior going into that. Most people don't publish papers that don't conform to something like the standard view. Look how long it took from Sims (1980) to Christiano, Eichenbaum and Evans (1999) to produce a response function that does conform to the standard view, what Friedman told us to expect in 1968. That took a lot of playing with different orthogonalization, variable inclusion, and other specification assumptions. This is not criticism: when you have a strong prior, it makes sense to see if the data can be squeezed into the prior. Once authors like Ramey and Nakamura and Steinsson started to look with a critical eye, it became clearer just how weak the evidence is. Standard errors are also wide, but the variability in results due to changes in sample and specification is much larger than formal standard errors. That's why I don't stress that statistical aspect. You play with 100 models, try one variable after another to tamp down the price puzzle, and then compute standard errors as if the 100th model were written in stone. This post is already too long, but showing how results change with different specifications would have been a good addition. For example, here are a few more Ramey plots of inflation responses, replicating various previous estimates. Take your pick.

What should we do instead?

Well, how else should we measure the effects of monetary policy? One natural approach turns to the analysis of historical episodes and changes in regime, with specific models in mind.
Romer and Romer pass on thoughts on this approach: "...some macroeconomic behavior may be fundamentally episodic in nature. Financial crises, recessions, disinflations, are all events that seem to play out in an identifiable pattern. There may be long periods where things are basically fine, that are then interrupted by short periods when they are not. If this is true, the best way to understand them may be to focus on episodes—not a cross-section proxy or a tiny sub-period. In addition, it is valuable to know when the episodes were and what happened during them. And, the identification and understanding of episodes may require using sources other than conventional data."

A lot of my and others' fiscal theory writing has taken a similar view. The long quiet zero bound is a test of theories: old-Keynesian models predict a deflation spiral, new-Keynesian models predict sunspot volatility, fiscal theory is consistent with stable quiet inflation. The emergence of inflation in 2021 and its easing despite interest rates below inflation likewise validates fiscal vs. standard theories. The fiscal implications of abandoning the gold standard in 1933 plus Roosevelt's "emergency" budget make sense of that episode. The new-Keynesian reaction parameter \(\phi_\pi\) in \(i_t = \phi_\pi \pi_t\), which leads to unstable dynamics for \(\phi_\pi>1\), is not identified by time series data. So use "other sources," like plain statements on the Fed website about how they react to inflation. I already cited Clarida, Galí and Gertler, for measuring the rule not the response to the shock, and explaining the implications of that rule for their model. Nakamura and Steinsson likewise summarize Mussa's (1986) classic study of what happens when countries switch from fixed to floating exchange rates: "The switch from a fixed to a flexible exchange rate is a purely monetary action.
In a world where monetary policy has no real effects, such a policy change would not affect real variables like the real exchange rate. Figure 3 demonstrates dramatically that the world we live in is not such a world." Also, analysis of particular historical episodes is enlightening. But each episode has other things going on and so invites alternative explanations. 90 years later, we're still fighting about what caused the Great Depression. 1980 is the poster child for monetary disinflation, yet as Nakamura and Steinsson write, "Many economists find the narrative account above and the accompanying evidence about output to be compelling evidence of large monetary nonneutrality. However, there are other possible explanations for these movements in output. There were oil shocks both in September 1979 and in February 1981.... Credit controls were instituted between March and July of 1980. Anticipation effects associated with the phased-in tax cuts of the Reagan administration may also have played a role in the 1981–1982 recession ...." Studying changes in regime, such as fixed to floating exchange rates or the zero bound era, helps somewhat relative to studying a particular episode, in that regimes average over some of the other shocks. But the attraction of VARs will remain. None of these approaches produces what VARs seemed to produce, a theory-free quantitative estimate of the effects of monetary policy. Many tell you that prices are sticky, but not how prices are sticky. Are they old-Keynesian backward-looking sticky or new-Keynesian rational-expectations sticky? What is the dynamic response of relative inflation to a change in a pegged exchange rate? What is the dynamic response of real relative prices to productivity shocks? Observations such as Mussa's graph can help to calibrate models, but do not answer those questions directly. My observations about the zero bound or the recent inflation similarly seem (to me) decisive about one class of model vs.
another, at least subject to Occam's razor about epicycles, but likewise do not provide a theory-free impulse response function. Nakamura and Steinsson write at length about other approaches, model-based moment matching and the use of micro data in particular. This post is going on too long; read their paper. Of course, as we have seen, VARs only seem to offer a model-free quantitative measurement of "the effects of monetary policy," but it's hard to give up on the appearance of such an answer. VARs and impulse responses also remain very useful ways of summarizing the correlations and cross-correlations of data, even without a cause-and-effect interpretation. In the end, many ideas are successful in economics when they tell researchers what to do, when they offer a relatively clear recipe for writing papers. "Look at episodes and think hard" is not such a recipe. "Run a VAR" is. So, as you think about how we can evaluate monetary policy, think about a better recipe as well as a good answer. (Stay tuned. This post is likely to be updated a few times!)

VAR technical appendix

Technically, running VARs is very easy, at least until you start trying to smooth out responses with Bayesian and other techniques. Line up the data in a vector, i.e. \(x_t = [i_t \; \pi_t\; y_t]'\). Then run a regression of each variable on lags of the others, \[x_t = Ax_{t-1} + u_t.\] If you want more than one lag of the right-hand variables, just make a bigger \(x\) vector, \(x_t = [i_t\; \pi_t \; y_t \; i_{t-1}\; \pi_{t-1} \;y_{t-1}]'.\) The residuals of such regressions \(u_t\) will be correlated, so you have to decide whether, say, the correlation between interest rate and inflation shocks means the Fed responds within the period to inflation, or inflation responds within the period to interest rates, or some combination of the two. That's the "identification" assumption issue.
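The whole recipe, stack the data, run OLS, orthogonalize the residuals, simulate forward, fits in a few lines. This sketch uses data simulated from a known VAR so there is a truth to recover, and resolves the identification issue just described with a Cholesky ordering (one standard choice, not the only one); all variable names and numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 2000, 3   # monthly sample of [interest rate, inflation, output] (synthetic)

# simulate from a known stable VAR(1) with correlated shocks
A_true = np.array([[0.90, 0.10, 0.00],
                   [0.05, 0.80, 0.10],
                   [-0.10, 0.00, 0.70]])
C_true = np.linalg.cholesky(np.array([[1.0, 0.3, 0.2],
                                      [0.3, 1.0, 0.1],
                                      [0.2, 0.1, 1.0]]))
x = np.zeros((T, n))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + C_true @ rng.normal(size=n)

# step 1: OLS of each variable on lagged x, i.e. x_t = A x_{t-1} + u_t
B, *_ = np.linalg.lstsq(x[:-1], x[1:], rcond=None)
A_hat = B.T
u = x[1:] - x[:-1] @ A_hat.T

# step 2: orthogonalize residuals, u_t = C eps_t, via Cholesky of their covariance
C_hat = np.linalg.cholesky(u.T @ u / len(u))

# step 3: impulse response to a unit orthogonalized shock to the first variable
H = 48
irf = np.zeros((H, n))
xj = C_hat[:, 0]
for h in range(H):
    irf[h] = xj
    xj = A_hat @ xj

# step 4: forecast error variance decomposition from theta_j = A^j C
theta = np.zeros((H, n, n))
P = np.eye(n)
for h in range(H):
    theta[h] = P @ C_hat        # theta[h][i, k]: response of var i to shock k
    P = A_hat @ P
fevd = (theta ** 2).sum(axis=0)           # variance of var i due to shock k
share = fevd / fevd.sum(axis=1, keepdims=True)
```

The `share` matrix is the forecast error variance decomposition mentioned earlier: each row shows what fraction of a variable's variance comes from each orthogonalized shock.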
You can write it as a matrix \(C\) so that \(u_t = C \varepsilon_t\) and cov\((\varepsilon_t \varepsilon_t')=I\), or you can include some contemporaneous values into the right-hand sides. Now, with \(x_t = Ax_{t-1} + C\varepsilon_t\), you start with \(x_0=0\), choose one series to shock, e.g. \(\varepsilon_{i,1}=1\), leaving the others alone, and just simulate forward. The resulting path of the other variables is the above plot, the "impulse response function." Alternatively, you can run a regression \(x_t = \sum_{j=0}^\infty \theta_j \varepsilon_{t-j}\), and the \(\theta_j\) are (different, in sample) estimates of the same thing. That's "local projection." Since the right-hand variables are all orthogonal, you can run single or multiple regressions. (See here for equations.) Either way, you have found the moving average representation, \(x_t = \theta(L)\varepsilon_t\), in the first case with \(\theta(L)=(I-AL)^{-1}C\), in the second case directly. Since the right-hand variables are all orthogonal, the variance of the series is the sum of its loadings on all of the shocks, \(cov(x_t) = \sum_{j=0}^\infty \theta_j \theta_j'\). This "forecast error variance decomposition" is behind my statement that small amounts of inflation variance are due to monetary policy shocks rather than shocks to other variables, and mostly inflation shocks.

Update: Luis Garicano has a great tweet thread explaining the ideas with a medical analogy. Kamil Kovar has a nice follow-up blog post, with emphasis on Europe. He makes a good point that I should have thought of: A monetary policy "shock" is a deviation from a "rule." So, the Fed's and ECB's failure to respond to inflation as they "usually" do in 2021-2022 counts exactly the same as a 3-5% deliberate lowering of the interest rate. Lowering interest rates for no reason, and leaving interest rates alone when the regression rule says raise rates, are the same in this methodology.
That "loosening" of policy was quickly followed by inflation easing, so an updated VAR should exhibit a strong "price puzzle" -- a negative shock is followed by less, not more, inflation. Of course historians and practical people might object that failure to act as usual has exactly the same effects as acting.

* Some papers:

Comment on Romer and Romer, "What ends recessions?" Some "what's a shock?"
Comment on Romer and Romer, "A new measure of monetary policy." The greenbook forecasts, and beginning thoughts that strict exogeneity is not necessary.
"Shocks." Monetary shocks explain small fractions of output variance.
Comments on Hamilton. More thoughts on what a shock is.
"What do the VARs mean?" Cited above: is the response to the shock, or to persistent interest rates?
"The Fed and Interest Rates," with Monika Piazzesi. Daily data and interest rates to identify shocks.
"Decomposing the yield curve," with Monika Piazzesi. Starts with a great example of how small changes in specification lead to big differences in long-run forecasts.

Time series:

"A critique of the application of unit root tests." Pretesting for unit roots and cointegration is a bad idea.
"How big is the random walk in GNP?" Lessons in not using short-run dynamics to infer long-run properties.
"Permanent and transitory components of GNP and stock prices." A favorite; cointegration really helps on long-run properties.
Time series for macroeconomics and finance. Notes that never quite became a book. Explains VARs and responses.
This post takes up from two previous posts (part 1; part 2), asking just what we (we economists) really know about how interest rates affect inflation. Today: what does contemporary economic theory say? As you may recall, the standard story says that the Fed raises interest rates; inflation (and expected inflation) don't immediately jump up, so real interest rates rise; with some lag, higher real interest rates push down employment and output (IS); with some more lag, the softer economy leads to lower prices and wages (Phillips curve). So higher interest rates lower future inflation, albeit with "long and variable lags." Higher interest rates -> (lag) lower output, employment -> (lag) lower inflation. In part 1, we saw that it's not easy to see that story in the data. In part 2, we saw that half a century of formal empirical work also leaves that conclusion on very shaky ground. As they say at the University of Chicago, "Well, so much for the real world, how does it work in theory?" That is an important question. We never really believe things we don't have a theory for, and for good reason. So, today, let's look at what modern theory has to say about this question. And these are not unrelated questions: theory has been trying to replicate this story for decades. The answer: Modern (anything post-1972) theory really does not support this idea. The standard new-Keynesian model does not produce anything like the standard story. Models that modify that simple model to achieve something like the result of the standard story do so with a long list of complex ingredients. The new ingredients are not just sufficient; they are (apparently) necessary to produce the desired dynamic pattern. Even these models do not implement the verbal logic above. If the pattern that high interest rates lower inflation over a few years is true, it is by a completely different mechanism than the story tells.
I conclude that we don't have a simple economic model that produces the standard belief. ("Simple" and "economic" are important qualifiers.)

The simple new-Keynesian model

The central problem comes from the Phillips curve. The modern Phillips curve asserts that price-setters are forward-looking. If they know inflation will be high next year, they raise prices now. So, inflation today = expected inflation next year + (coefficient) x output gap: \[\pi_t = E_t\pi_{t+1} + \kappa x_t\] (If you know enough to complain about \(\beta\approx0.99\) in front of \(E_t\pi_{t+1}\), you know enough that it doesn't matter for the issues here.)

Now, if the Fed raises interest rates, and if (if) that lowers output or raises unemployment, inflation today goes down. The trouble is, that's not what we're looking for. Inflation goes down today (\(\pi_t\)) relative to expected inflation next year (\(E_t\pi_{t+1}\)). So a higher interest rate and lower output correlate with inflation that is rising over time. Here is a concrete example: The plot is the response of the standard three-equation new-Keynesian model to an \(\varepsilon_1\) shock at time 1: \[\begin{align} x_t &= E_t x_{t+1} - \sigma(i_t - E_t\pi_{t+1}) \\ \pi_t & = \beta E_t \pi_{t+1} + \kappa x_t \\ i_t &= \phi \pi_t + u_t \\ u_t &= \eta u_{t-1} + \varepsilon_t. \end{align}\] Here \(x\) is output, \(i\) is the interest rate, \(\pi\) is inflation, \(\eta=0.6\), \(\sigma=1\), \(\kappa=0.25\), \(\beta=0.95\), \(\phi=1.2\). In this plot, higher interest rates are said to lower inflation. But they lower inflation immediately, on the day of the interest rate shock. Then, as explained above, inflation rises over time. In the standard view, and the empirical estimates from the last post, a higher interest rate has no immediate effect, and then future inflation is lower.
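You can verify that pattern by solving the three-equation model in closed form. With the AR(1) policy disturbance, guess-and-verify (the method of undetermined coefficients) gives inflation and output proportional to \(u_t\): substituting \(\pi_t = a u_t\), \(x_t = b u_t\) into the equations yields \(a = -\kappa\sigma / [(1-\beta\eta)(1-\eta) + \kappa\sigma(\phi-\eta)]\) and \(b = a(1-\beta\eta)/\kappa\). A sketch with the parameters listed above:

```python
import numpy as np

# parameters from the example in the text
eta, sigma, kappa, beta, phi = 0.6, 1.0, 0.25, 0.95, 1.2

# undetermined coefficients: pi_t = a u_t, x_t = b u_t with u_t = eta u_{t-1} + eps_t
a = -kappa * sigma / ((1 - beta * eta) * (1 - eta) + kappa * sigma * (phi - eta))
b = a * (1 - beta * eta) / kappa

# responses to a unit eps shock: u decays geometrically afterward
T = 20
u = eta ** np.arange(T)       # path of the policy disturbance
pi = a * u                    # inflation response
x = b * u                     # output response
i = (phi * a + 1) * u         # interest rate response, i_t = phi*pi_t + u_t
```

With these parameters \(a \approx -0.78\): inflation and output jump down on impact while the interest rate rises slightly, and then all three revert geometrically, so inflation rises over time after the shock, exactly the pattern described in the text.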
See plots in the last post, or this one from Romer and Romer's 2023 summary: Inflation jumping down and then rising in the future is quite different from inflation that does nothing immediately, might even rise for a few months, and then starts gently going down. You might even wonder about the downward jump in inflation. The Phillips curve makes it clear why current inflation is lower than expected future inflation, but why doesn't current inflation stay the same, or even rise, and expected future inflation rise more? That's the "equilibrium selection" issue. All those paths are possible, and you need extra rules to pick a particular one. Fiscal theory points out that the downward jump needs a fiscal tightening, so it represents a joint monetary-fiscal policy. But we don't argue about that today. Take the standard new-Keynesian model exactly as is, with passive fiscal policy and standard equilibrium selection rules. It predicts that inflation jumps down immediately and then rises over time. It does not predict that inflation slowly declines over time. This is not a new issue. Larry Ball (1994) first pointed out that the standard new-Keynesian Phillips curve says that output is high when inflation is high relative to expected future inflation, that is, when inflation is declining. Standard beliefs go the other way: output is high when inflation is rising. The IS curve is a key part of the overall prediction, and output faces a similar problem. I just assumed above that output falls when interest rates rise. In the model it does; output follows a path with the same shape as inflation in my little plot. Output also jumps down and then rises over time. Here too, the (much stronger) empirical evidence says that an interest rate rise does not change output immediately, and output then falls rather than rises over time. The intuition has even clearer economics behind it: Higher real interest rates induce people to consume less today and more tomorrow.
Higher real interest rates should go with higher, not lower, future consumption growth. Again, the model only apparently reverses the sign by having output jump down before rising.

Key issues

How can we be here, 40 years later, with the benchmark textbook model so utterly failing to replicate standard beliefs about monetary policy? One answer, I believe, is confusing adjustment to equilibrium with equilibrium dynamics. The model generates inflation lower than yesterday (time 0 to time 1) and lower than it otherwise would be (time 1 without shock vs. time 1 with shock). Now, all economic models are a bit stylized. It's easy to say that when we add various frictions, "lower than yesterday" or "lower than it would have been" is a good parable for "goes down over time." If in a simple supply and demand graph we say that an increase in demand raises prices instantly, we naturally understand that as a parable for a drawn-out period of price increases once we add appropriate frictions. But dynamic macroeconomics doesn't work that way. We have already added what was supposed to be the central friction, sticky prices. Dynamic economics is supposed to describe the time-path of variables already, with no extra parables. If adjustment to equilibrium takes time, then model that. The IS and Phillips curves are forward-looking, like stock prices. It would make little sense to say "news comes out that the company will never make money, so the stock price should decline gradually over a few years." It should jump down now. Inflation and output behave that way in the standard model. A second confusion, I think, is between sticky prices and sticky inflation. The new-Keynesian model posits, and a huge empirical literature examines, sticky prices. But that is not the same thing as sticky inflation. Prices can be arbitrarily sticky and inflation, the first derivative of prices, can still jump. In the Calvo model, imagine that only a tiny fraction of firms can change prices at each instant.
But when they do, they will change prices a lot, and the overall price level will start increasing right away. In the continuous-time version of the model, prices are continuous (sticky), but inflation jumps at the moment of the shock. The standard story wants sticky inflation. Many authors explain the new-Keynesian model with sentences like "the Fed raises interest rates. Prices are sticky, so inflation can't go up right away and real interest rates are higher." This is wrong. Inflation can rise right away. In the standard new-Keynesian model it does so with \(\eta=1\), for any amount of price stickiness. Inflation rises immediately with a persistent monetary policy shock. Just get it out of your heads. The standard model does not produce the standard story. The obvious response is, let's add ingredients to the standard model and see if we can modify the response function to look something like the common beliefs and VAR estimates. Let's go.

Adaptive expectations

We can reproduce standard beliefs about monetary policy with thoroughly adaptive expectations, in the 1970s ISLM form. I think this is a large part of what most policy makers and commenters have in mind. Modify the above model to leave out the dynamic part of the intertemporal substitution equation, to just say in a rather ad hoc way that higher real interest rates lower output, and specify that the expected inflation that drives the real rate and that drives pricing decisions is mechanically equal to previous inflation, \(E_t \pi_{t+1} = \pi_{t-1}\). We get \[ \begin{align} x_t &= -\sigma (i_t - \pi_{t-1}) \\ \pi_t & = \pi_{t-1} + \kappa x_t .\end{align}\] We can solve this system analytically to \[\pi_t = (1+\sigma\kappa)\pi_{t-1} - \sigma\kappa i_t.\] Here's what happens if the Fed permanently raises the interest rate. Higher interest rates send future inflation down. (\(\kappa=0.25,\ \sigma=1.\)) Inflation eventually spirals away, but central banks don't leave interest rates alone forever.
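A few lines of Python (using the same \(\kappa=0.25\), \(\sigma=1\) as above, with an illustrative one-point rate rise) trace out that unstable spiral under a permanently pegged higher rate:

```python
# Adaptive-expectations dynamics from the text:
#   pi_t = (1 + sigma*kappa) * pi_{t-1} - sigma*kappa * i_t.
# Peg the interest rate one point higher forever and iterate.
sigma, kappa = 1.0, 0.25
i_peg = 1.0
pi = [0.0]                                   # inflation starts at zero
for t in range(20):
    pi.append((1 + sigma * kappa) * pi[-1] - sigma * kappa * i_peg)

# Inflation falls, and each step falls faster: the autoregressive root
# 1 + sigma*kappa exceeds one, so the model is unstable under a peg.
print([round(v, 3) for v in pi[:5]])
```

Future inflation is lower, as the standard story wants, but only because the dynamics are explosive: inflation keeps accelerating downward until the Fed moves again.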
If we add a Taylor rule response \(i_t = \phi \pi_t + u_t\), so the central bank reacts to the emerging spiral, we get this response to a permanent monetary policy disturbance \(u_t\): The higher interest rate sets off a deflation spiral. But the Fed quickly follows inflation down to stabilize the situation. This is, I think, the conventional story of the 1980s. In terms of ingredients, an apparently minor change of index from \(E_t \pi_{t+1}\) to \(\pi_{t-1}\) is in fact a big change. It means directly that higher output comes with increasing inflation, not decreasing inflation, solving Ball's puzzle. The change basically reverses the sign of output in the Phillips curve. Again, it's not really all in the Phillips curve. This model with rational expectations in the IS equation and adaptive expectations in the Phillips curve produces junk. To get the result you need adaptive expectations everywhere. The adaptive expectations model gets the desired result by changing the basic sign and stability properties of the model. Under rational expectations the model is stable; inflation goes away all on its own under an interest rate peg. With adaptive expectations, the model is unstable. Inflation or deflation spirals away under an interest rate peg or at the zero bound. The Fed's job is like balancing a broom upside down. If you move the bottom (interest rates) one way, the broom zooms off the other way. With rational expectations, the model is stable, like a pendulum. This is not a small wrinkle designed to modify dynamics. This is major surgery. It is also a robust property: small changes in parameters do not change the dominant eigenvalue of a model from over one to less than one. A more refined way to capture how Fed officials and pundits think and talk might be called "temporarily fixed expectations." Policy people do talk about the modern Phillips curve; they say inflation depends on inflation expectations and employment. Expectations are not mechanically adaptive.
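Before turning to that refinement, the stabilizing effect of the Taylor rule in the adaptive-expectations model can be checked in a few lines (a sketch; \(\phi=1.2\) borrowed from the earlier model, other values as in the text):

```python
# Adaptive expectations plus a Taylor rule i_t = phi*pi_t + u_t.
# Substituting into pi_t = (1+sigma*kappa)*pi_{t-1} - sigma*kappa*i_t
# and solving for pi_t:
#   pi_t = ((1 + sk)*pi_{t-1} - sk*u_t) / (1 + sk*phi),  sk = sigma*kappa.
# With phi > 1 the autoregressive root (1+sk)/(1+sk*phi) is below one,
# so the Fed's rule converts the explosive spiral into a stable decline.
sigma, kappa, phi = 1.0, 0.25, 1.2
sk = sigma * kappa
u = 1.0                                      # permanent policy disturbance
pi = [0.0]
for t in range(200):
    pi.append(((1 + sk) * pi[-1] - sk * u) / (1 + sk * phi))
i = [phi * p + u for p in pi]                # the rule follows inflation down

print("root:", (1 + sk) / (1 + sk * phi))    # below one: stable
print("inflation settles near:", round(pi[-1], 2))
```

Inflation falls and then levels off, and the interest rate follows it down, as in the "conventional story of the 1980s" described above.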
Expectations are a third force, sometimes "anchored," and amenable to manipulation by speeches and dot plots. Crucially, in this analysis, expected inflation does not move when the Fed changes interest rates. Expectations are then very slowly adaptive, if inflation is persistent, or if there is a more general loss of faith in "anchoring." In the above new-Keynesian model graph, the minute the Fed raises the interest rate, expected inflation jumps up to follow the graph's plot of the model's forecast of inflation. As a simple way to capture these beliefs, suppose expectations are fixed or "anchored" at \(\pi^e\). Then my simple model is \[\begin{align}x_t & = -\sigma(i_t - \pi^e) \\ \pi_t & = \pi^e + \kappa x_t\end{align}\] so \[\pi_t = \pi^e - \sigma \kappa (i_t - \pi^e).\] Inflation is expected inflation, and lowered by higher interest rates (the last minus sign). But those rates need only be higher than the fixed expectations; they do not need to be higher than past rates as they do in the adaptive expectations model. That's why the Fed thinks 3% interest rates with 5% inflation is still "contractionary" -- expected inflation remains at 2%, not the 5% of recent adaptive experience. Also, by fixing expectations, I remove the instability of the adaptive expectations model... so long as those expectations stay anchored. The Fed recognizes that eventually higher inflation moves the expectations, and with a belief that is adaptive, they fear that an inflation spiral can still break out. Even this view does not give us any lags, however. The Fed and commenters clearly believe that higher real interest rates today lower output next year, not immediately; and they believe that lower output and employment today drive inflation down in the future, not immediately. They believe something like \[\begin{align}x_{t+1} &= - \sigma(i_t - \pi^e) \\ \pi_{t+1} &= \pi^e + \kappa x_t.\end{align}\] But now we're at the kind of non-economic ad-hockery that the whole 1970s revolution abandoned.
And for a reason: Ad hoc models are unstable; regimes are always changing. Moreover, let me remind you of our quest: Is there a simple economic model of monetary policy that generates something like the standard view? At this level of ad-hockery you might as well just write down the coefficients of Romer and Romer's response function and call that the model of how interest rates affect inflation. Academic economics gave up on mechanical expectations and ad-hoc models in the 1970s. You can't publish a paper with this sort of model. So when I say a "modern" model, I mean rational expectations, or at least the consistency condition that the expectations in the model are not fundamentally different from forecasts of the model. (Models with explicit learning or other expectation-formation frictions count too.) It's easy to puff about how people aren't rational, and looking out the window, lots of people do dumb things. But if we take that view, then the whole project of monetary policy rests on the proposition that people are fundamentally unable to learn patterns in the economy, and that a benevolent Federal Reserve can trick the poor little souls into a better outcome. And somehow the Fed is the lone super-rational actor who can avoid all those pesky behavioral biases. We are looking for the minimum necessary ingredients to describe the basic signs and function of monetary policy. A bit of irrational or complex expectation formation as icing on the cake, a possible sufficient ingredient to produce quantitatively realistic dynamics, isn't awful. But it would be sad if irrational expectations or other behavior is a necessary ingredient to get the most basic sign and story of monetary policy right.
If persistent irrationality is a central necessary ingredient for the basic sign and operation of monetary policy -- if higher interest rates will raise inflation the minute people smarten up; if there is no simple supply and demand, MV=PY sensible economics underlying the basic operation of monetary policy; if it's all a conjuring trick -- that should really weaken our faith in the whole monetary policy project. Facts help, and we don't have to get religious about it. During the long zero bound, the same commentators and central bankers kept warning about a deflation spiral, clearly predicted by this model. It never happened. Interest rates below inflation from 2021 to 2023 should have led to an upward inflation spiral. It never happened -- inflation eased all on its own with interest rates below inflation. Getting the desired response to interest rates by making the model unstable isn't tenable whether or not you like the ingredient. Inflation also surged in the 1970s faster than adaptive expectations came close to predicting, and fell faster in the 1980s. The ends of many inflations come with credible changes in regime. There is a lot of work now desperately trying to fix new-Keynesian models by making them more old-Keynesian, putting lagged inflation in the Phillips curve, current income in the IS equation, and so forth. Complex learning and expectation formation stories replace the simplistic adaptive expectations here. As far as I can tell, to the extent they work they largely do so in the same way, by reversing the basic stability of the model.

Modifying the new-Keynesian model

The alternative is to add ingredients to the basic new-Keynesian model, maintaining its insistence on real "micro-founded" economics and forward-looking behavior, and describing explicit dynamics as the evolution of equilibrium quantities. Christiano Eichenbaum and Evans (2005) is one of the most famous examples.
Recall that these same authors created the first and most influential VAR that gave the "right" answer to the effects of monetary policy shocks. This paper modifies the standard new-Keynesian model with a specific eye to matching impulse-response functions. They want to match all impulse-responses, with a special focus on output. When I started asking my young macro colleagues for a standard model which produces the desired response shape, they still cite CEE first, though it's 20 years later. That's quite an accomplishment. I'll look at it in detail, as the general picture is the same as many other models that achieve the desired result. Here's their bottom line response to a monetary policy shock: (Figure from the 2018 Christiano Eichenbaum and Trabandt Journal of Economic Perspectives summary paper.) The solid line is the VAR point estimate and the gray shading is the 95% confidence band. The solid blue line is the main model. The dashed line is the model with only price stickiness, to emphasize the importance of wage stickiness. The shock happens at time 0. Notice the funds rate line that jumps down at that date. That the other lines do not move at time 0 is a result. I graphed the response to a time 1 shock above. That's the answer; now what's the question? What ingredients did they add above the textbook model to reverse the basic sign and jump problem and to produce these pretty pictures? Here is a partial list:

Habit formation. The utility function is \(\log(c_t - bc_{t-1})\).

A capital stock with adjustment costs in investment. Adjustment costs are proportional to investment growth, \([1-S(i_t/i_{t-1})]i_t\), rather than the usual formulation in which adjustment costs are proportional to the investment to capital ratio, \(S(i_t/k_t)i_t\).

Variable capital utilization. Capital services \(k_t\) are related to the capital stock \(\bar{k}_t\) by \(k_t = u_t \bar{k}_t\).
The utilization rate \(u_t\) is set by households facing an upward-sloping cost \(a(u_t)\bar{k}_t\).

Calvo pricing with indexation: Firms randomly get to reset prices, but firms that aren't allowed to reset prices do automatically raise prices at the rate of inflation.

Prices are also fixed for a quarter. Technically, firms must post prices before they see the period's shocks.

Sticky wages, also with indexation. Households are monopoly suppliers of labor, and set wages Calvo-style like firms. (Later papers put all households into a union which does the wage setting.) Wages are also indexed; households that don't get to reoptimize their wage still raise wages following inflation.

Firms must borrow working capital to finance their wage bill a quarter in advance, and thus pay interest on the wage bill.

Money in the utility function, and money supply control. Monetary policy is a change in the money growth rate, not a pure interest rate target.

Whew! But which of these ingredients are necessary, and which are just sufficient? Knowing the authors, I strongly suspect that they are all necessary to get the suite of results. They don't add ingredients for show. But they want to match all of the impulse response functions, not just the inflation response. Perhaps a simpler set of ingredients could generate the inflation response while missing some of the others. Let's understand what each of these ingredients is doing, which will help us to see whether they are necessary and essential to getting the desired result. I see a common theme in habit formation, adjustment costs that scale by investment growth, and indexation. These ingredients each add a derivative; they take a standard relationship between levels of economic variables and change it to one in growth rates. Each of consumption, investment, and inflation is a "jump variable" in standard economics, like stock prices. Consumption (roughly) jumps to the present value of future income.
The level of investment is proportional to the stock price in the standard q theory, and jumps when there is new information. Iterating forward the new-Keynesian Phillips curve \(\pi_t = \beta E_t \pi_{t+1} + \kappa x_t\), inflation jumps to the discounted sum of future output gaps, \(\pi_t = \kappa E_t \sum_{j=0}^\infty \beta^j x_{t+j}.\) To produce responses in which output, consumption and investment as well as inflation rise slowly after a shock, we don't want levels of consumption, investment, and inflation to jump this way. Instead we want growth rates to do so. With standard utility, the consumer's linearized first order condition equates expected consumption growth to the interest rate, \( E_t (c_{t+1}/c_t) = \delta + r_t \). Habit, with \(b=1\), gives \( E_t [(c_{t+1}-c_t)/(c_t-c_{t-1})] = \delta + r_t \). (I left out the strategic terms.) Mixing logs and levels a bit, you can see we put a growth rate in place of a level. (The paper has \(b=0.65\).) An investment adjustment cost function with \(S(i_t/i_{t-1})\) rather than the standard \(S(i_t/k_t)\) puts a derivative in place of a level. Normally we tell a story that if you want a house painted, doubling the number of painters doesn't get the job done twice as fast because they get in each other's way. But you can double the number of painters overnight if you want to do so. Here the cost is on the increase in the number of painters each day. Indexation results in a Phillips curve with a lagged inflation term, and that gives "sticky inflation." The Phillips curve of the model (32) and (33) is \[\pi_t = \frac{1}{1+\beta}\pi_{t-1} + \frac{\beta}{1+\beta}E_{t-1}\pi_{t+1} + (\text{constants}) E_{t-1}s_t\] where \(s_t\) are marginal costs (more later). The \(E_{t-1}\) come from the assumption that prices can't react to time \(t\) information.
Iterate that forward to get (33): \[\pi_t - \pi_{t-1} = (\text{constants}) E_{t-1}\sum_{j=0}^\infty \beta^j s_{t+j}.\] We have successfully put the change in inflation in place of the level of inflation. The Phillips curve is anchored by real marginal costs, and they are not proportional to output in this model as they are in the textbook model above. That's important too. Instead, \[s_t = (\text{constants}) (r^k_t)^\alpha \left(\frac{W_t}{P_t}R_t\right)^{1-\alpha}\] where \(r^k\) is the return to capital, \(W/P\) is the real wage, and \(R\) is the nominal interest rate. The latter term crops up from the assumption that firms must borrow the wage bill one period in advance. This is an interesting ingredient. There is a lot of talk that higher interest rates raise costs for firms, and that firms reduce output as a result. That might get us around some of the IS curve problems. But that's not how it works here. Here's how I think it works. Higher interest rates raise marginal costs, and thus push up current inflation relative to expected future inflation. The equilibrium-selection rules and the rule against instant price changes (coming up next) tie down current inflation, so the higher interest rates have to push down expected future inflation. CEE disagree (p. 28). Writing of an interest rate decline, so all the signs are opposite of my stories, they say: "... the interest rate appears in firms' marginal cost. Since the interest rate drops after an expansionary monetary policy shock, the model embeds a force that pushes marginal costs down for a period of time. Indeed, in the estimated benchmark model the effect is strong enough to induce a transient fall in inflation." But pushing marginal costs down lowers current inflation relative to future inflation -- they're looking at the same Phillips curve just above. It looks to me like they're confusing current with expected future inflation. Intuition is hard.
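A numerical check confirms the forward iteration: under perfect foresight (dropping the \(E_{t-1}\) timing), the indexed Phillips curve implies that the change in inflation equals the discounted sum of marginal costs. Here \(\beta\), the constant \(c\), and the cost path are illustrative numbers, not estimates.

```python
# Check that the indexed Phillips curve
#   pi_t = pi_{t-1}/(1+beta) + (beta/(1+beta))*pi_{t+1} + c*s_t
# iterates forward to
#   pi_t - pi_{t-1} = (1+beta)*c * sum_j beta^j * s_{t+j}:
# indexation makes the CHANGE in inflation, not its level, the
# discounted sum of marginal costs.
beta, c = 0.95, 0.1
T = 200
s = [0.9**t for t in range(T)]               # a decaying marginal-cost path

# change in inflation from the forward solution (sums truncate at T)
d = [(1 + beta) * c * sum(beta**j * s[t + j] for j in range(T - t))
     for t in range(T)]

# rebuild the inflation level and verify the original curve holds
pi = [0.0]                                   # pi[0] is pi_{-1}; pi[k+1] is pi_k
for dt in d:
    pi.append(pi[-1] + dt)
ok = all(
    abs(pi[t + 1] - (pi[t] / (1 + beta)
                     + beta / (1 + beta) * pi[t + 2]
                     + c * s[t])) < 1e-8
    for t in range(T - 1)
)
print("indexed Phillips curve satisfied:", ok)
```

The inflation level itself never jumps relative to its own lag by more than the discounted cost sum, which is exactly the "sticky inflation" the indexation ingredient is there to deliver.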
There are plenty of Fisherian forces in this model that want lower interest rates to lower inflation. More deeply, we see here a foundational trouble of the Phillips curve. It was originally a statistical relation between wage inflation and unemployment. It became a (weaker) statistical relation between price inflation and unemployment or the output gap. The new-Keynesian theory wants naturally to describe a relation between marginal costs and price changes, and it takes contortions to make output equal to marginal costs. Phillips curves fit the data terribly. So authors estimating Phillips curves (an early favorite is by Tim Cogley and Argia Sbordone) go back and separate marginal cost from output or employment. As CET write later, they "build features into the model which ensure that firms' marginal costs are nearly acyclical." That helps the fit, but it divorces the Phillips curve shifter variable from the business cycle! Standard doctrine says that for the Fed to lower inflation it must soften the economy and risk unemployment. Doves say don't do it, live with inflation to avoid that cost. Well, if the Phillips curve shifter is "acyclical" you have to throw all that out the window. This shift also points to the central conundrum of the Phillips curve. Here it describes the adjustment of prices to wages or "costs" more generally. It fundamentally describes a relative price, not a price level. OK, but the phenomenon we want to explain is the common component, how all prices and wages tie together, or equivalently the decline in the value of the currency, stripped of relative price movements. The central puzzle of macroeconomics is why the common component, a rise or fall of all prices and wages together, has anything to do with output, and for us how it is controlled by the Fed. Christiano Eichenbaum and Evans write (p. 3) that "it is crucial to allow for variable capital utilization." I'll try to explain why in my own words.
Without capital adjustment costs, any change in the real return leads to a big investment jump. \(r=f'(k)\) must jump and that takes a lot of extra \(k\). We add adjustment costs to tamp down the investment response. But now when there is any shock, capital can't adjust enough and there is a big rate of return response. So we need something that acts like a big jump in the capital stock to tamp down \(r=f'(k)\) variability, but not a big investment jump. Variable capital utilization acts like the big investment jump without us seeing a big investment jump. And all this is going to be important for inflation too. Remember the Phillips curve; if output jumps then inflation jumps too. Sticky wages are crucial, and indeed CEE report that they can dispense with sticky prices. One reason is that otherwise profits are countercyclical. In a boom, prices go up faster than wages, so profits go up. With sticky prices and flexible wages you get the opposite sign. It's interesting that the "textbook" model has not moved this way. Again, we don't rewrite textbooks often enough. Fixing prices and wages during the period of the shock by assuming price setters can't see the shock for a quarter has a direct effect: It stops any price or wage jumps during the quarter of the shock, as in my first graph. That's almost cheating. Note the VAR also has absolutely zero instantaneous inflation response. This too is by assumption. They "orthogonalize" the variables so that all the contemporaneous correlation between monetary policy shocks and inflation or output is considered part of the Fed's "rule" and none of it reflects within-quarter reaction of prices or quantities to the Fed's actions. Step back and admire. Given the project "find elaborations of the standard new-Keynesian model to match VAR impulse response functions," could you have come up with any of this? But back to our task. That's a lot of apparently necessary ingredients.
And reading here, or CEE's verbal intuition, the logic of this model is nothing like the standard simple intuition, which includes none of the necessary ingredients. Do we really need all of this to produce the basic pattern of monetary policy? As far as we know, we do. And hence, that pattern may not be as robust as it seems. For all of these ingredients are pretty, ... imaginative. Really, we are a long way from the Lucas/Prescott vision that macroeconomic models should be based on well tried and measured microeconomic ingredients that are believably invariant to changes in the policy regime. CEE argue hard for the plausibility of these microeconomic specifications (see especially the later CET Journal of Economic Perspectives article), but they have to try so hard precisely because the standard literature doesn't have any of these ingredients. The "level" rather than "growth rate" foundations of consumption, investment, and pricing decisions pervade microeconomics. Microeconomists worry about labor monopsony, not labor monopoly; firms set wages, households don't. (Christiano Eichenbaum and Trabandt (2016) get wage stickiness from a more realistic search and matching model. Curiously, the one big labor union fiction is still the most common, though few private sector workers are unionized.) Firms don't borrow the wage bill a quarter ahead of time. Very few prices and wages are indexed in the US. Like habits, perhaps these ingredients are simple stand-ins for something else, but at some point we need to know what that something else is. That is especially true if one wants to do optimal policy or welfare analysis. Just how much economics must we reinvent to match this one response function? How far are we really from the ad-hoc ISLM equations that Sims (1980) destroyed? Sadly, subsequent literature doesn't help much (more below).
Subsequent literature has mostly added ingredients, including heterogeneous agents (big these days), borrowing constraints, additional financial frictions (especially after 2008), zero bound constraints, QE, learning and complex expectations dynamics. (See CET 2018 JEP for a good verbal survey.) The rewards in our profession go to those who add a new ingredient. It's very hard to publish papers that strip a model down to its basics. Editors don't count that as "new research," but just "exposition" below the prestige of their journals. Though boiling a model down to essentials is maybe more important in the end than adding more bells and whistles. This is about where we are. Despite the pretty response functions, I still score that we don't have a reliable, simple, economic model that produces the standard view of monetary policy.

Mankiw and Reis, sticky expectations

Mankiw and Reis (2002) expressed the challenge clearly over 20 years ago. In reference to the "standard" New-Keynesian Phillips curve \(\pi_t = \beta E_t \pi_{t+1} + \kappa x_t\) they write a beautiful and succinct paragraph: Ball [1994a] shows that the model yields the surprising result that announced, credible disinflations cause booms rather than recessions. Fuhrer and Moore [1995] argue that it cannot explain why inflation is so persistent. Mankiw [2001] notes that it has trouble explaining why shocks to monetary policy have a delayed and gradual effect on inflation. These problems appear to arise from the same source: although the price level is sticky in this model, the inflation rate can change quickly. By contrast, empirical analyses of the inflation process (e.g., Gordon [1997]) typically give a large role to "inflation inertia." At the cost of repetition, I emphasize the last sentence because it is so overlooked. Sticky prices are not sticky inflation.
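To make that distinction concrete, here is a minimal Calvo-style illustration of my own (not from any of the papers cited): a fraction \(\lambda\) of firms resets each period to a target price that jumps at the shock date.

```python
# Minimal Calvo-style illustration (an assumption-laden sketch, not a
# model from the cited papers): each period a fraction lam of firms
# resets to a new target price p_star, which jumps from 0 to 1 at t=0:
#   p_t = (1 - lam)*p_{t-1} + lam*p_star.
# The price LEVEL adjusts slowly -- that is sticky prices.  But
# INFLATION, the first difference, changes discontinuously at the shock.
lam = 0.1                                    # very sticky: 10% reset per period
p_star = 1.0
p = [0.0]                                    # p_{-1} = 0, the old target
for t in range(20):
    p.append((1 - lam) * p[-1] + lam * p_star)
pi = [p[k + 1] - p[k] for k in range(20)]

print(f"price level after one period: {p[1]:.2f} (still far from 1)")
print(f"inflation jumps from 0 to {pi[0]:.2f} at the shock date")
```

In the full new-Keynesian model the reset price also embeds expected future inflation, which is why, as noted above, inflation can jump substantially for any amount of price stickiness.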
Ball already said this in 1994: Taylor (1979, 1980) and Blanchard (1983, 1986) show that staggering produces inertia in the price level: prices adjust slowly to a fall in the money supply. ...Disinflation, however, is a change in the growth rate of money, not a one-time shock to the level. In informal discussions, analysts often assume that the inertia result carries over from levels to growth rates -- that inflation adjusts slowly to a fall in money growth. As I see it, Mankiw and Reis generalize the Lucas (1972) Phillips curve. For Lucas, roughly, output is related to unexpected inflation: \[\pi_t = E_{t-1}\pi_t + \kappa x_t.\] Firms don't see everyone else's prices in the period. Thus, when a firm sees an unexpected rise in prices, it doesn't know if it is a higher relative price or a higher general price level; the firm expands output based on how much it thinks the event might be a relative price increase. I love this model for many reasons, but one, which seems to have fallen by the wayside, is that it explicitly founds the Phillips curve in firms' confusion about relative prices vs. the price level, and thus faces up to the problem of why a rise in the price level should have any real effects. Mankiw and Reis basically suppose that firms find out the general price level with lags, so output depends on inflation relative to a distributed lag of its expectations. It's clearest for the price level (p. 1300): \[p_t = \lambda\sum_{j=0}^\infty (1-\lambda)^j E_{t-j}(p_t + \alpha x_t).\] The inflation expression is \[\pi_t = \frac{\alpha \lambda}{1-\lambda}x_t + \lambda \sum_{j=0}^\infty (1-\lambda)^j E_{t-1-j}(\pi_t + \alpha \Delta x_t).\] (Some of the complication is that you want it to be \(\pi_t = \sum_{j=0}^\infty E_{t-1-j}\pi_t + \kappa x_t\), but output doesn't enter that way.) This seems totally natural and sensible to me. What is a "period" anyway? It makes sense that firms learn heterogeneously whether a price increase is relative or price level.
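Here is a stylized perfect-foresight sketch of the sticky-information mechanism, closing the price-level equation above with \(m_t = p_t + x_t\) in the spirit of Mankiw and Reis (2002). The parameter values and the surprise money-level freeze are my own illustrative assumptions, not theirs.

```python
# Stylized sticky-information experiment:
#   p_t = lam * sum_j (1-lam)^j * E_{t-j}(p_t + alpha*x_t),
# closed with m_t = p_t + x_t.  Money had been growing at rate g; at
# t = 0 its level is unexpectedly frozen at zero.  Firms that have
# updated since the shock (fraction F_t = 1-(1-lam)^(t+1)) know the new
# path; the rest still expect the old path p_t = g*t.  Solving:
#   p_t = (F_t*alpha*m + (1-F_t)*g*t) / (1 - F_t*(1-alpha)).
# lam, alpha, g are illustrative choices, not Mankiw-Reis estimates.
lam, alpha, g = 0.25, 0.1, 0.01
T = 12
m = 0.0                                      # frozen money level
p = [-g]                                     # p_{-1}, on the old path
for t in range(T):
    F = 1 - (1 - lam) ** (t + 1)
    p.append((F * alpha * m + (1 - F) * g * t) / (1 - F * (1 - alpha)))
pi = [p[t + 1] - p[t] for t in range(T)]     # inflation
x = [m - p[t + 1] for t in range(T)]         # output gap

# Inflation does not move at the shock date, then declines only
# gradually, while output falls: sticky information, unlike sticky
# prices, delivers "sticky inflation" and a costly surprise disinflation.
print(f"inflation at the shock date: {pi[0]:.4f} (old trend: {g})")
print("inflation declining gradually:", all(pi[t+1] < pi[t] for t in range(5)))
print("recession:", all(v < 0 for v in x[1:6]))
```

The output cost of the unannounced disinflation, and the gradual rather than instant fall in inflation, are exactly the features the standard new-Keynesian Phillips curve cannot produce.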
And it obviously solves the central persistence problem with the Lucas (1972) model, that it only produces a one-period output movement. Well, what's a period anyway? (Mankiw and Reis don't sell it this way, and actually don't cite Lucas at all. Curious.) It's not immediately obvious that this curve solves the Ball puzzle and the declining inflation puzzle, and indeed one must put it in a full model to do so. Mankiw and Reis (2002) mix it with \(m_t + v = p_t + x_t\) and make some stylized analysis, but don't show how to put the idea in models such as I started with or make a plot. Their less well-known follow-on paper Sticky Information in General Equilibrium (2007) is much better for this purpose, because they do show you how to put the idea in an explicit new-Keynesian model, like the one I started with. They also add a Taylor rule, and an interest rate rather than money supply instrument, along with wage stickiness and a few other ingredients. They show how to solve the model, overcoming the problem that there are many lagged expectations as state variables. But here is the response to the monetary policy shock: Response to a Monetary Policy Shock, Mankiw and Reis (2007). Sadly, they don't report how interest rates respond to the shock. I presume interest rates went down temporarily. Look: the inflation and output gap plots are about the same. Except for the slight delay going up, these are exactly the responses of the standard NK model. When output is high, inflation is high and declining. The whole point was to produce a model in which a high output level would correspond to rising inflation. Relative to the first graph, the main improvement is just a slight hump shape in both inflation and output responses. Describing the same model in "Pervasive Stickiness" (2006), Mankiw and Reis describe the desideratum well: The Acceleration Phenomenon. ...Inflation tends to rise when the economy is booming and falls when economic activity is depressed.
This is the central insight of the empirical literature on the Phillips curve. One simple way to illustrate this fact is to correlate the change in inflation, \(\pi_{t+2}-\pi_{t-2}\), with [the level of] output, \(y_t\), detrended with the HP filter. In U.S. quarterly data from 1954-Q3 to 2005-Q3, the correlation is 0.47. That is, the change in inflation is procyclical.

Now look again at the graph. As far as I can see, it's not there. Is this version of sticky inflation a bust, for this purpose? I still think it's a neat idea, worth more exploration. But I thought so 20 years ago too. Mankiw and Reis have a lot of citations, but nobody followed them up. Why not? I suspect it's part of a general pattern: lots of great micro sticky-price papers are not used because they don't produce an easy aggregate Phillips curve. If you want cites, make sure people can plug it into Dynare. Mankiw and Reis' curve is pretty simple, but you still have to keep all past expectations around as a state variable. There may be alternative ways of doing that with modern computational technology, such as putting it in a Markov environment or cutting off the lags, say by having everyone learn the price level after 5 years. HANK models have even bigger state spaces!

Some more models

What about within the Fed? Chung, Kiley, and Laforte (2010), "Documentation of the Estimated, Dynamic, Optimization-based (EDO) Model of the U.S. Economy: 2010 Version," is one such model. (Thanks to Ben Moll, in a lecture slide titled "Effects of interest rate hike in U.S. Fed's own New Keynesian model.") They describe it as follows:

This paper provides documentation for a large-scale estimated DSGE model of the U.S. economy – the Federal Reserve Board's Estimated, Dynamic, Optimization-based (FRB/EDO) model project. The model can be used to address a wide range of practical policy questions on a routine basis.

Here are the central plots for our purpose: the response of interest rates and inflation to a monetary policy shock.
No long and variable lags here. Just as in the simple model, inflation jumps down on the day of the shock and then reverts. As with Mankiw and Reis, there is a tiny hump shape, but that's it. This is nothing like the Romer and Romer plot.

Smets and Wouters (2007), "Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach," is about as famous as Christiano, Eichenbaum, and Evans as a standard new-Keynesian model that supposedly matches the data well. It "contains many shocks and frictions. It features sticky nominal price and wage settings that allow for backward inflation indexation, habit formation in consumption, and investment adjustment costs that create hump-shaped responses... and variable capital utilization and fixed costs in production."

Here is their central graph of the response to a monetary policy shock.

Again, there is a little hump shape, but the overall picture is just like the one we started with. Inflation mostly jumps down immediately and then recovers; the interest rate shock leads to future inflation that is higher, not lower, than current inflation. There are no lags from higher interest rates to future inflation declines. The major difference, I think, is that Smets and Wouters do not impose the restriction that inflation cannot jump immediately, in either their theory or their empirical work, while Christiano, Eichenbaum, and Evans impose that restriction in both places. This is important. In a new-Keynesian model, some combination of state variables must jump on the day of the shock, as the model is only saddle-path stable. If inflation can't move right away, something else must. Therefore, I think, CEE also preclude inflation jumping the next period. Comparing otherwise similar ingredients, it looks like this is the key ingredient for producing Romer-Romer-like responses consistent with the belief in sticky inflation. But perhaps the original model and Smets-Wouters are right!
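For reference, here is the impact-jump logic written out for the simplest three-equation model, the one I started this post with. This is a sketch with conventional illustrative parameter values, not any paper's estimates. With a policy rule \(i_t = \phi\pi_t + v_t\), an AR(1) shock \(v_t = \rho v_{t-1} + \varepsilon_t\), and the guess \(\pi_t = \psi_\pi v_t\), \(x_t = \psi_x v_t\), matching coefficients gives \[\psi_\pi = \frac{-\sigma\kappa}{(1-\beta\rho)(1-\rho)+\sigma\kappa(\phi-\rho)}, \qquad \psi_x = \frac{1-\beta\rho}{\kappa}\,\psi_\pi,\] so the whole impulse response is a geometric decay:

```python
import numpy as np

# Textbook three-equation NK model with a Taylor rule and AR(1) policy shock:
#   x_t  = E_t x_{t+1} - sigma * (i_t - E_t pi_{t+1})      (IS curve)
#   pi_t = beta * E_t pi_{t+1} + kappa * x_t               (Phillips curve)
#   i_t  = phi * pi_t + v_t,  v_t = rho * v_{t-1} + eps_t  (policy rule)
beta, sigma, kappa, phi, rho = 0.99, 1.0, 0.1, 1.5, 0.7   # illustrative values

psi_pi = -sigma * kappa / ((1 - beta * rho) * (1 - rho) + sigma * kappa * (phi - rho))
psi_x = (1 - beta * rho) / kappa * psi_pi

v = rho ** np.arange(20)                 # unit policy shock at t = 0, AR(1) decay
pi, x = psi_pi * v, psi_x * v            # inflation and output responses
i = phi * pi + v                         # nominal rate: rises on impact

# Inflation and output fall the most at t = 0, then decay geometrically
# back to zero: no hump, no delayed trough.
```

The point is visible without a plot: the largest inflation decline is contemporaneous with the rate rise, and inflation thereafter rises back toward zero. Whatever one makes of it, that is the opposite of long and variable lags.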
I do not know what happens if you remove the CEE orthogonalization restriction and allow inflation to jump on the day of the shock in the data. That would rescue the new-Keynesian model, but it would destroy the belief in sticky inflation and long and variable lags.

Closing thoughts

I'll reiterate the main point. As far as I can tell, there is no simple economic model that produces the standard belief. Now, maybe the belief is right and models just have to catch up. It is interesting that so little effort is going into doing this. As above, the vast outpouring of new-Keynesian modeling has been to add ever more ingredients. In part, again, that's the natural pressure of journal publication. But I think it's also an honest feeling that, after Christiano, Eichenbaum, and Evans, this is a solved problem and adding other ingredients is all there is to do. So part of the point of this post (and "Expectations and the Neutrality of Interest Rates") is to argue that this is not a solved problem, and that removing ingredients to find the simplest economic model that can produce standard beliefs is a really important task. Then, does that model incorporate anything at all of the standard intuition, or is it based on some different mechanism altogether? These are first-order important and unresolved questions!

But for my lay readers, here, as far as I know, is where we are. If you, like the Fed, hold to the standard belief that higher interest rates lower future output and inflation with long and variable lags, know that there is no simple economic theory behind that belief, and the standard story is certainly not how economic models of the last four decades work.

Update:

I repeat a response to a comment below, because it is so important. I probably wasn't clear enough that the "problem" of high output with inflation falling rather than rising is a problem of models vs. traditional beliefs, rather than of models vs. facts.
The point of the sequence of posts, really, is that the traditional beliefs are likely wrong. Inflation does not fall, following interest rate increases, with dependable, long, and perhaps variable lags. That belief is strong, but neither facts, empirical evidence, nor theory supports it. ("Variable" is a great way to scrounge data to make it fit priors.) Indeed, many successful disinflations, such as the ends of hyperinflations, feature a sigh of relief and an output surge on the real side.