In many cases, the content of the linked blogs and blog posts is not subject to editorial control.
Availability warning
Permanent availability is not guaranteed and lies entirely in the hands of the blog operators. Please make your own copy if you wish to cite a blog post.
Industrial policy is a pillar of President Biden's economic agenda (aka "Bidenomics"), and the White House recently cited a spike in U.S. manufacturing construction spending to show that billions (perhaps trillions) of dollars in new federal subsidies for "clean energy," semiconductors, and other preferred industries—enacted in the infrastructure law, CHIPS and Science Act, and Inflation Reduction Act—are "working." As I noted in a recent column, however, there are many reasons to remain skeptical of these subsidies' overall economic benefits, even assuming they've actually encouraged new private investments in supposedly "strategic" goods. A new report from the Wall Street Journal provides a real-world cautionary tale in just this regard:

New York state paid to build a quarter-mile-long facility with 1.2 million square feet of industrial space, which it now owns and leases to Tesla for $1 a year. It bought $240 million worth of solar-panel manufacturing equipment. Musk had said that by 2020 the Buffalo plant each week would churn out enough solar-panel shingles to cover 1,000 roofs. The Tesla solar-energy unit behind the plan, however, is averaging just 21 installations a week, according to energy analysts at Wood Mackenzie who reviewed utility data. The building houses some factory workers, but also hundreds of lower-paid desk-bound data analysts working on other Tesla business. The suppliers that Cuomo predicted would flock to a modern manufacturing hub never showed up. The only new nearby business is a Tim Hortons coffee shop. Most of the solar-panel manufacturing equipment bought by the state has been sold at a discount or scrapped. A state comptroller's audit found just 54 cents of economic benefit for every subsidy dollar spent on the factory, which rose on the site of an old steel mill. External auditors have written down nearly all of New York's investment.
I highly encourage you to read the whole thing, which provides an almost textbook case for why we should continue to question U.S. industrial policy initiatives, whether part of "Bidenomics" or any other political agenda. All too often, the subsidies' seen benefits are swamped by their unseen costs—especially after you consider alternative policies that could have achieved the same objectives with fewer taxpayer dollars and lower economic or geopolitical risks. As I wrote last month about that U.S. construction spending surge and the "strategic" projects it's supporting, "[m]aybe our grand, subsidy‐driven 'industrial transition' will be worth it in the end (outside of favored industries, the sector is today struggling mightily), but declarations of victory at this stage are absurd." The Tesla factory in New York once again shows why.
If you have not been paying attention, our government has decided that all-electric vehicles are the solution to the climate problem. At least as long as they are made in the US with union labor and benefits. California has committed to banning the sale of anything else. In today's post, a few tidbits from my daily WSJ reading on the subject. From Holman Jenkins on electric cars:

If the goal were to reduce emissions, the world would impose a carbon tax. Then what kind of EVs would we get? Not Teslas but hybrids like Toyota's Prius. "A wheelbarrow full of rare earths and lithium can power either one [battery-powered car] or over 90 hybrids, but, uh, that fact seems to be lost on policymakers," a California dealer recently emailed me.

[Note: that wheelbarrow of rare earths comes from multiple truckloads of actual rocks. Also see the original for links.]

...The same battery minerals in one Tesla can theoretically supply 37 times as much emissions reduction when distributed over a fleet of Priuses. This is a shock only to those who weren't paying attention. It certainly isn't lost on government. Chris Atkinson, the Ohio State University sustainable transportation guru whose slogan I've cited before—"the best use of a battery is in a hybrid"—was a key official in the Obama Energy Department. Our policies don't exist to incentivize carbon reduction; they exist to lure affluent Americans to make space in their garages for oversized, luxurious EVs so Tesla can report a profit and so other automakers can rack up smaller losses on the "compliance" vehicles they create in obedience to government mandates.

Actually, I vote GMC's 9,000 lb, 1,000 hp, 0-60-in-3-seconds, $110,000 electric Hummer the prize for most conspicuous misuse of Chinese lithium and its associated carbon emissions. Tesla's new "Cybertruck" comes close. I can't wait to see those driving around Palo Alto. Mining the required minerals produces emissions. Keeping the battery charged produces emissions.
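The quoted 90-hybrids and 37x figures are claims about a fixed stock of battery minerals. The arithmetic behind them can be sketched with back-of-the-envelope numbers; the battery sizes and per-vehicle emissions savings below are illustrative assumptions of mine, not Jenkins's or the dealer's figures:

```python
# Back-of-the-envelope sketch of the "one EV vs. a fleet of hybrids"
# arithmetic. All numbers are illustrative round figures, not sourced data.

EV_BATTERY_KWH = 75.0        # assumed battery in one long-range EV
HYBRID_BATTERY_KWH = 1.0     # assumed battery in one typical hybrid

# Hybrids buildable from the minerals in a single EV battery
hybrids_per_ev = EV_BATTERY_KWH / HYBRID_BATTERY_KWH

# Assumed lifetime CO2 saved per vehicle relative to a conventional
# gasoline car (hypothetical; depends on grid mix, mileage, etc.)
EV_CO2_SAVED_T = 30.0
HYBRID_CO2_SAVED_T = 15.0

fleet_saving = hybrids_per_ev * HYBRID_CO2_SAVED_T
ratio = fleet_saving / EV_CO2_SAVED_T

print(f"One EV battery ~ {hybrids_per_ev:.0f} hybrid batteries")
print(f"Emissions-reduction ratio, hybrid fleet vs. one EV: {ratio:.1f}x")
```

With these stand-in numbers the fleet of hybrids delivers roughly 37 times the emissions reduction of the single EV — the shape of the claim, not a verification of it.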
Jenkins is a pretty good economist. There is supply and demand:

Only if a great deal of gasoline-based driving is displaced would there be net reduction in CO2. But who says any gasoline-based driving is being displaced? When government ladles out tax breaks for EVs, when wealthy consumers splurge on a car that burns electrons instead of gasoline, they simply leave more gasoline available for someone else to consume at a lower price.

Stop just a minute and digest this one, if you have not already. If you use less gas, someone else uses more. EV subsidies just shift who uses the gas. The same supply just goes somewhere else. One has to subsidize electrics so much that the price of gas goes down, permanently, so that it's not worth bringing out of the ground. And the price and demand are global. Lower prices encourage Indians and Africans to finally get cheap gas-powered cars. This may be a secret to you, the public. It's not to economists. Well, some economists. Alas, my beloved profession is as open to virtue signaling as everyone else, so I don't see a loud "stop subsidizing battery-only EVs and banning everything else" from economists.

The problem here is the problem with any plan to subsidize our way to emissions reduction:

Humans are perfectly capable of consuming both renewable and dirty energy in ever-growing quantities if the price is right. The emissions data prove as much. ...By incorporating carbon taxes into its tax systems, global society might at least slow the rate of CO2 emissions while simultaneously improving the efficiency of its tax codes. It still seemed unlikely, but it wasn't clear why. After all, politicians enact plenty of taxes. Governments have been advised for decades to adopt consumption taxes as a way to fund their welfare states without destroying the possibility of growth.
Cramming a lot into one delicious column, Jenkins wonders at human nature:

How to explain, along the way, the coevolution of the climate empty gesture with climate rhetoric that increasingly shouts the unfounded claim that climate change threatens human survival? I explain it this way: When it became clear nobody was going to do anything about climate change, it became safe to engage in hysterical rhetoric about climate change.

...As David Burge put it (thanks to an anonymous colleague for this delicious tidbit):

"To help poor children, I am going to launch flaming accordions into the Grand Canyon."
"That's stupid."
"WHY DO YOU HATE POOR CHILDREN?"

Climate change is real. Climate change matters. Addressing it is expensive. Other environmental problems clamor for resources too. Europe has stopped growing, and the US is headed the same way. We don't have trillions to waste. California as always leads the way on the beau geste:

...in California, ...drayage trucks, which carry containerized cargo to and from ports and rail centers, face a looming deadline. The state will require any new drayage trucks added to fleets starting next year to run on electric batteries or hydrogen fuel cells. California also plans to phase out sales of new gasoline-powered passenger cars, pickup trucks and SUVs by 2035 and require all new medium- and heavy-duty truck sales be zero-emissions by 2036. ...Trucks represent 6% of the vehicles on California's roads, but a quarter of the state's on-road greenhouse-gas emissions. ...California plans to spend $1.7 billion for medium- and heavy-duty infrastructure for zero-emission vehicles by 2026.

$1.7 billion, for state-provided "infrastructure," on top of the costs to industries... for a benefit of...? The central problem: How are they going to recharge those trucks?

[Truck operators] position trucks near highways, rail or ports, not available power. As fleets add trucks they may need to draw an additional 6 to 8 megawatts of power or more.
"That's about 1,000 homes," said Steve Powell, chief executive of utility Southern California Edison. "We may need a new substation or something like that and a line to be built." It has not been built, and the truck deadline is now. So what do operators do? A mobile charging system in California runs on natural gas. PHOTO: PROLOGIS MOBILITY, from Wall Street JournalSouthern California Edison has come across some fleets powering chargers using diesel generators...so that new EV trucks don't sit unused. Another solution: more batteries. [Pacific Drayage Services President] Gillis is installing a system of chargers paired with battery storage. It can discharge power to trucks even during times of grid stress. The battery storage itself can recharge at a time of day when electricity prices are the cheapest. There is an important point here on just how many batteries are needed for the "transition." Don't just count the batteries in the trucks. Count the batteries in the charging stations too. And the utility. Even California knows that it does no good to electrify and then power the grid with coal and natural gas. The plan is for solar and wind electricity, but that needs utility scale battery backup. A week or more of power. The sources of my last post only added up the batteries needed for the cars. That's too low by many multiples. He is also hedging—Gillis tripled his usual order of new diesel trucks from 30 to 100, which will arrive by year-end, just beating the deadline before California phases them out.I get the idea. Build it and they will come. Put the trucks in place now, so what if at huge cost, and so what if we burn coal to power them. Then when solar and wind and utility scale storage arrive, the users will be there. But trucks don't last that long. By 20 years when all that infrastructure finally has its permits, today's electric trucks will be long gone. 
Covering Kerry's trip to China, a reminder that climate is all about how China and India develop, not which car San Franciscans use to drive up to Tahoe:

The Climate Action Tracker says that between 2015 and 2022 China's greenhouse gas emissions increased nearly 12%, while U.S. emissions declined some 5%. China's methane emissions rose about 3% from 2015 to 2021, the latest year with good data, while the U.S. cut them by 5%. ...China's "coal production reached record levels in 2022 for the second year running," and "coal is set to remain the backbone" of China's energy system.

No kidding:

Between 2020 and 2022, China added some 113 gigawatts of new coal-fired power plants, according to S&P Global Commodity Insights. The entire world managed to retire some 187 gigawatts of coal plants between 2017 and 2022.

As of January China had some 306 coal-fired power stations proposed, permitted or under construction, according to Global Energy Monitor, a nonprofit that tracks worldwide coal-fired power projects of 30 megawatts or more. When finished those plants would generate some 366 gigawatts, or about 68% of the world's total coal capacity under development.

As of April China also had 180 new coal mines or mine expansions proposed, permitted or under construction, the nonprofit reported.

In a lovely article Aatish Taseer reminds us there are 1.3 billion people in India (as well as 1.4 billion in China). It's hot, just like it is in Texas. When they reach middle income, they will want air conditioning, just like in Texas. This doesn't make the virtue-signaling tour because there is no easy answer. If China and India don't think they can grow based on solar, wind, and nuclear, just what can we do about it? Send more diplomats? It does not help that the US is now deciding to "disengage" and fight some sort of battle for economic supremacy via industrial-policy trade restrictions and tariffs. Even Taiwan on a silver platter isn't going to get China to change.
Even if the US shuts down, de-growths, and goes back to subsistence farming, China will spew CO2. I guess the argument is to go first to establish a moral example. But if that moral example is obviously self-defeating, pointless, and just money down ratholes to entrenched interests, I doubt it will shame China to much action. A carbon tax, and a Manhattan project to drive down the cost of nuclear, would make a whole lot more sense. (Half the Manhattan project is technical; the other half is to rewrite the regulatory rule book on a wartime schedule.) Think what you could do with the trillion or so dollars going to various subsidies and mandates.

Update

Read "Old Eagle Eye"'s excellent July 19 comment below. Boiling it all down to a nutshell, our policy path now is going to produce energy with a lot more materials -- rocks, steel, concrete, batteries, aluminum, carbon fibre -- and more energy to produce those materials, relative to fossil fuels or nuclear. Producing those materials also produces more carbon now, with a hoped-for savings later. That the 1970s environmental movement ends up with a huge increase in making stuff from rocks, rather than a service-oriented economy with small-impact power, first natural gas and then nuclear, and a light touch upon the earth, is a bit of a paradox. Also, in addition to spending our trillion dollars and industrial-policy wonks on making nuclear cheap and abundant, if a warming climate really is an economic and environmental problem, and given the current policy path is both ineffective and hugely expensive, why should we not even speak of or research geoengineering? It's not ideal, but nothing is ideal.
Yet another example of fallacious argumentation in the energy industry:

The UK's net zero ambitions are expected to be dealt a blow as offshore wind farm developers halt new projects because of a shortfall in government funding.

On Thursday night, no major new offshore wind projects were expected to be included in the latest annual round of renewable contracts to be announced on Friday, The Telegraph understands.

There were five major projects, with a total capacity of 5GW, that were eligible to bid for contracts to boost the country's current 14GW of offshore wind. But industry sources said it was "likely" that none had chosen to bid into the process, because the Government failed to offer sufficient prices for their energy.

The outcome will deal a blow to the Government's ambitions to install 50GW of offshore capacity by 2030, and remove fossil fuels from the electricity grid by 2035.

The fallacy is the idea that because wind has become more expensive, subsidies must therefore rise so that we gain the same amount of future wind power. Which is indeed a fallacy - if wind is now more expensive then we desire to have less wind in our energy mix. As relative prices change, so do the things we want to buy, of course. And this is what is wrong with the general plan people are trying to work to. They've decided there should be this much production of this sort of energy, so much of that, and so on. But what the desired mix is changes as prices do. And this then feeds through, up a level, to the basic idea of net zero by one or another date. How much climate-change mitigation we do depends upon the price of doing climate mitigation. Our whole idea here is to maximise human utility over time. As the Stern Review went on (at 1,200-page length), this does not mean "no climate change". It means the right amount of climate change.
Something which is determined by how expensive it is to mitigate, how expensive to adapt, and the expense - all of these are the real expense, not mere money but the cost to human lifestyles - of just putting up with it. As the various prices change, therefore, the right amounts of mitigation, adaptation and just putting up with it change too. Which is the one thing that a plan for net zero by a certain date doesn't take account of. Which is why it is a bad plan, of course - because it's violating the very aim of doing anything at all, that maximisation of human utility. Because wind prices have changed we therefore desire a different amount of wind power. Further, because mitigation costs have changed we therefore desire a different amount of mitigation. Wind costs are up - OK, the correct answer is to have more climate change then.
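The relative-price point can be put in one toy computation: at a given carbon price, each mitigation option is worth using only while its marginal cost sits below that price, so a rise in wind's cost should shrink wind's share rather than trigger bigger subsidies. The option names and $/tonne costs below are hypothetical round numbers of mine:

```python
# Toy marginal-abatement-cost illustration of the relative-price argument.
# Option names and $/tonne costs are hypothetical round numbers.

CARBON_PRICE = 60.0  # assumed social value of abating one tonne of CO2

def worthwhile(options, carbon_price):
    """Options whose marginal abatement cost is below the carbon price."""
    return [name for name, cost in options.items() if cost <= carbon_price]

before = {"efficiency": 20.0, "onshore wind": 50.0, "solar": 55.0, "ccs": 90.0}
after = dict(before, **{"onshore wind": 75.0})  # wind gets more expensive

print(worthwhile(before, CARBON_PRICE))  # wind makes the cut
print(worthwhile(after, CARBON_PRICE))   # now it doesn't
```

At the same carbon price, the dearer wind simply drops out of the cost-minimising mix - which is the "less wind, and at the margin a bit more climate change" conclusion, with no fallacy involved.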
By the standards of mainstream media coverage of technical economics, Peter Coy's coverage of HANK (Heterogeneous Agent New Keynesian) models in the New York Times was actually pretty good.

1) Representative agents and distributions

Yes, it starts with the usual misunderstanding about "representative agents," that models assume we are all the same. Some of this is the standard journalist's response to all economic models: we have simplified the assumptions, we need more general assumptions. They don't understand that the genius of economic theory lies precisely in finding simplified but tractable assumptions that tell the main story. Progress never comes from throwing in more ingredients and stirring the pot to see what comes out. (I mean you, third-year graduate students looking for a thesis topic.) But in this case many economists are also confused on this issue. I've been to quite a few HANK seminars in which prominent academics waste 10 minutes or so dumping on the "assumption that everyone is identical." There is a beautiful old theorem, called the "social welfare function" theorem. (I learned this in graduate school in fall 1979, from Hal Varian's excellent textbook.) People can have almost arbitrarily different preferences (utility functions), incomes and shocks, and companies can have almost arbitrarily different characteristics (production functions), yet the aggregate economy behaves as if there is a single representative consumer and representative firm. The equilibrium path of aggregate consumption, output, investment, employment, and the prices and interest rates of that equilibrium are the same as those of an economy where everyone and every firm is the same, with a "representative agent" utility function and "representative firm" production function. Moreover, the representative agent utility function and representative firm production function need not look anything like those of any particular individual person and firm.
If I have power utility and you have quadratic utility, the economy behaves as if there is a single consumer with something in between. Defining the job of macroeconomics as understanding the movement over time of aggregates -- how do GDP, consumption, investment, employment, the price level, interest rates, stock prices etc. move over time, and how do policies affect those movements -- macroeconomics can ignore microeconomics. (We'll get back to that definition in a moment.)

Now, uniting macro and micro is important. Macro estimation being what it is, it would be awfully nice to use micro evidence. The program kicked off by Kydland and Prescott to "calibrate" macro models from micro evidence would be very useful. Kydland and Prescott may have had a bit of grass-is-greener optimism about just how much precise evidence microeconomists have on firms and people, but it's a good idea. Adding up micro evidence to macro is hard, however. Here "aggregation theory," often confused with the "social welfare function" theorem, comes up, more as a nightmare from graduate school. The conditions under which the representative agent's preferences look like those of individual people are much more restrictive.

Like all good theorems, this one rests on assumptions, and the assumptions are false. The crucial assumption is complete markets, and in particular complete risk sharing: there is an insurance market in which you can be compensated for every risk, in particular losing your job. A generalized form still works, however. There is still a representative agent, but it cares about distributions. The representative agent utility function depends on aggregate consumption and aggregate labor supply, but now also on statistics about the distribution of consumption across people. In asset pricing, the Constantinides-Duffie model is a great example: the cross-sectional variance of consumption becomes a crucial state variable for the value of the stock market, not just aggregate consumption.
All economic theorems are false, of course, in that the assumptions are not literally true. The question is, how false?

Conventional macroeconomics comes down to a description of how aggregates evolve over time, based on past aggregates:

[aggregate income, consumption, employment, inflation... next year] = function of [aggregate income, consumption, employment, inflation, policy variables... this year] + unforecastable shocks.

That's it. That's what macroeconomics is. Theory, estimation and calibration to figure out the function. [Update: I added policy variables, e.g. interest rates, to the function. And, the point of macro is to figure out how policies affect the economy, and furthermore, with an objective in hand, to derive optimal policies. Thanks to François Velde for pointing out the omissions in comments.]

If HANK is useful to macroeconomics, then, it must be that adding distributional statistics helps to describe aggregate dynamics. Reality must be:

[aggregate income, consumption, employment, inflation... next year] = function of [aggregate income, consumption, employment, inflation, distribution of consumption, employment, etc., policy variables... this year] + unforecastable shocks.

So here is a central question I have for HANK modelers: Is that true? Do statistics on the distribution across people of economic variables really help us to forecast or understand aggregate dynamics? So far, my impression is, not much. The social welfare function theorem can be wrong in its assumptions, yet still a pretty good approximation. And "heterogeneity" has been around macro for a long time, but never has seemed to matter much in the end. (The investment literature of the early 1990s is a great example.) But I would be happy to be proved wrong. This post is as much a suggestion for HANK modelers as a critique.

Another possibility: Maybe HANK is about aggregation after all.
Can we actually use micro evidence, and add it up constructively, to learn what the representative agent - social welfare function is? Even before HANK, there were good examples. For example, the literature on labor supply: Macro models want people to work more in response to temporarily higher wages. Most individual people work 8 hours a day or zero, so micro evidence finds a small response. But a small number of people move from non-work to work as wages rise. So the representative agent can have a much larger elasticity than individual people. And you have to understand labor market structure, and the distribution of who is available to work, to add up from micro to macro evidence. Here, I would like to know the basic functional form -- how much does the SWF care about today vs. tomorrow, risk, work vs. leisure, as well as any distributional effect?

2) Income effects

Coy also goes on with the usual New York Times schtick about how dumb and irrational all the little hoi polloi are. (Of course we of the elite and the federal government handing out nudges would never be behavioral.) But you don't need HANK to assume that the representative investor is dumb either. He goes on to describe pretty well where the current literature is. Behind this is, however, one of the major features of HANK models so far. One of their most important uses has been to put current income in the IS equation. (Economists, talk amongst yourselves for a bit while I explain this to regular people. So far, the central description of demand in new Keynesian models is based on "intertemporal substitution": when the real interest rate is higher, you consume a bit less today and save a bit more, so that you can consume a lot more tomorrow. That is the crucial mechanism by which higher real interest rates (say, induced by the Fed) lower demand today. Old Keynesian models didn't have people in them at all, but hypothesized that consumption simply follows income.
That adds a more powerful mechanism, the "multiplier": an initial income drop lowers consumption, which lowers income, and around we go.)

HANK models often add some "hand to mouth" consumers. Some people think about today vs. the future, but others just eat what income they make today. You can get this out of "rational, liquidity constrained" people, but that's typically not enough. To get significant effects, you need people who just behave that way. So, there is this little bit of behaviorism in many HANK models. But it's a little spice in the otherwise Lucas soup. In equations, the standard model says

consumption today = expected consumption tomorrow - (number) x real interest rate

After an immense amount of algebra and computer time, HANK models allow you to write

consumption today = (number) x income today + (number) x expected consumption tomorrow - (number) x real interest rate

New Keynesian models were invented on the hope that they would turn out to be holy water sprinkled on old-Keynesian thinking, for example justifying big spending multipliers and strong monetary policy. They turned out to be nothing of the sort once you read the equations. A movement is underway to modify (torture?) new-Keynesian models to look like old-Keynesian models, to bring macro back to roughly the 1976 edition of Dornbusch and Fischer's textbook. Complex expectation-formation theories and this aspect of HANK can be digested that way. So here is my second question for HANK modelers: Is this it? When we boil it all down to the linearized equations of the model you take to the data, to explain aggregates and monetary and fiscal policy, is there a big bottom line beyond an excuse to revive bits of the Keynesian consumption function? That too is an honest question, and perhaps a suggestion -- show us the textbook back-of-the-envelope bottom-line model. (It would be awfully nice if distributions mattered here too, theoretically, empirically, and quantitatively.)
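For readers who want slightly more than "(number)": a minimal "two-agent" sketch of where the income term comes from. The notation and the hand-to-mouth share λ are mine, a generic sketch rather than any particular paper's model:

```latex
% Minimal two-agent sketch (notation mine). A fraction \lambda of
% consumers are hand-to-mouth and consume their income y_t; the rest
% follow the standard linearized Euler equation with elasticity \sigma.
\[
  c_t^{\mathrm{htm}} = y_t,
  \qquad
  c_t^{\mathrm{saver}} = \mathbb{E}_t\, c_{t+1}^{\mathrm{saver}} - \sigma r_t .
\]
% Averaging with weights \lambda and 1-\lambda gives aggregate
% consumption the hybrid form quoted in the text, with income entering:
\[
  c_t = \lambda\, y_t
      + (1-\lambda)\left( \mathbb{E}_t\, c_{t+1}^{\mathrm{saver}} - \sigma r_t \right).
\]
```

The "(number) x income today" coefficient is then just the hand-to-mouth share, which is why how many such consumers a HANK model needs is the operative question.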
3) Micro implications of macro

Maybe you disagreed a few paragraphs ago with my definition of macroeconomics, as only concerned with the movement of aggregates over time. Talking with some of my HANK colleagues, a different purpose is at work -- figuring out the effects of macroeconomics on different people. Recessions fall harder on those who lose jobs, and on certain income and other groups; harder on some industries and areas than others. Here HANK dovetails with concerns over income diversity and "equity." That's a perfectly good reason to study it, but let's then be clear. If that's the case, HANK really doesn't change our understanding of how policies and events move aggregates around; it is really just about understanding how those aggregates affect different people differently. That may change calculations of optimal monetary policy. If the objective function cares negatively about income diversity, then adding HANK may produce a model that makes no difference at all for the effect of monetary policy on aggregates, but gives a greater weight to employment vs. inflation. ("May!" Inflation also falls harder on people experiencing low incomes, so concerns for equity could go the other way too. Thanks to a correspondent for pointing that out.) Many models have observationally equivalent predictions for aggregates but different welfare implications, and the same model can have different welfare implications if you put in different preferences for distributions across people. But surely HANK has more to offer than a long-winded excuse for dovishness towards tolerating inflation in place of unemployment. Also, in the big picture this seems like a classic answer in search of a question. If you care about the less fortunate, you start with the big issues: crime, awful schools, family breakdown, opportunity.
The additional benefit for the less fortunate from the level of the overnight federal funds rate might be fun to isolate in a model, but we are really staring at a caterpillar on a leaf of a tree and missing the forest of economic misfortune.

4) Last thoughts

I hesitate to write, as I am a consumer not a producer of HANK research, and thus will probably get things wrong or show my limited knowledge of the literature. Please fill the comments with corrections, amplifications, pointers to good papers, etc. There is a tendency in economics to pursue a new technical possibility without really knowing where it's going or why. That's not unhealthy; figure out what you can do first, and what to do later. The why always does come later. This was true of rational expectations, real business cycles, new-Keynesian models and more. Now that HANK is pretty well developed and is coming out in public, with admiring New York Times articles, it is worth assessing the why, the bottom line, what it does. I'm also hesitant to write, and especially too critically. I vividly recall being in grad school, when some speaker (I mercifully forget who) went on a tirade about all these young whippersnappers using too much math and not enough intuition and just being in love with building models. I vowed that if I ever thought that, I would retire. What do we say to the angel of old age? Not today. Bring it on, and let's all figure out what it means.

Update: Alessandro Dovis comments below, reminding me of their recent QJE paper "Imperfect Risk Sharing and the Business Cycle." This paper evaluates directly the question, how much does heterogeneity matter for aggregate dynamics? The headline answer is "not much, though maybe more at the zero bound":

deviations from perfect risk sharing implied by this class of models account for only 7% of output volatility on average but can have sizable output effects when nominal interest rates reach their lower bound.

Now, 7% might actually be a lot.
A little secret of contemporary macro models is that none of them explain a lot of output volatility. In my above characterization, aggregates next year = function of aggregates today + shocks, the shocks are big and account for most variation in aggregates. Most inflation comes from inflation shocks, not movements in other variables like employment, especially as fed through a model. This isn't necessarily a failing of models. New Keynesian models are designed to understand how monetary policy affects output, not to explain why output varies. Milton Friedman thought that most business cycles were due to monetary policy mistakes, so understanding the former is the same as the latter, but he seems to have been wrong about that, at least since 1982. Or maybe not.

The paper's computation takes the heterogeneity in the data, and asks how much that affects the new-Keynesian model's predictions for output, employment, etc. I have in mind a slightly different question: Even without much theory, how much can data on heterogeneity actually improve forecasts of output, employment, etc.? Do distributional variables improve VAR forecasts? Let me know if you have an answer to that one.

The paper has a crystal-clear summary of the representative agent theorem, and its important extension. They show how distributional variables enter into a representative agent representation as simple "wedges." Using a representative agent does not mean you assume all people are identical! There is also a great literature review on the general understanding that distributional variables don't matter much for aggregates, starting with Krusell and Smith. A parallel literature in finance quantitatively examined the beautiful Constantinides-Duffie mechanism, finding that uninsured idiosyncratic risk isn't large enough or variable enough to account for asset pricing puzzles. So far -- but that's all from the 1990s, and a lot of the point of HANK is to reverse that impression.
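The VAR question above can at least be posed concretely. Below is a sketch of the exercise on simulated data, where a "dispersion" series is built by construction to help forecast the aggregate; on real data one would swap in measured aggregates and distributional statistics and see whether the out-of-sample gain survives. Everything here (series, coefficients, sample split) is my own illustrative setup, not an empirical result:

```python
# Sketch: does adding a distributional series improve a one-lag VAR
# forecast of an aggregate? Data are simulated, with the "dispersion"
# series built to matter, so this only illustrates the test one would
# run on real data.
import numpy as np

rng = np.random.default_rng(0)
T = 500

# Simulate: "output" y depends on its own lag and on lagged "dispersion" d.
d = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    d[t] = 0.8 * d[t - 1] + rng.normal(scale=0.5)
    y[t] = 0.5 * y[t - 1] - 0.4 * d[t - 1] + rng.normal(scale=0.5)

def one_lag_forecast_rmse(series, target, split=400):
    """Fit target_t = a + b'series_{t-1} by OLS on the first `split`
    periods; report out-of-sample RMSE on the remainder."""
    X = np.column_stack([np.ones(T - 1)] + [s[:-1] for s in series])
    Y = target[1:]
    k = split - 1                      # training rows
    beta, *_ = np.linalg.lstsq(X[:k], Y[:k], rcond=None)
    resid = Y[k:] - X[k:] @ beta
    return np.sqrt(np.mean(resid ** 2))

rmse_without = one_lag_forecast_rmse([y], y)        # aggregates only
rmse_with = one_lag_forecast_rmse([y, d], y)        # plus dispersion

print(f"RMSE, aggregate only:          {rmse_without:.3f}")
print(f"RMSE, plus dispersion series:  {rmse_with:.3f}")
```

Here the extra series lowers out-of-sample RMSE by construction; the empirical question is whether measured distributional variables do the same for actual output and employment.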
Update

See Matthew Rognlie's superb answer below. I ask a lot of questions but seldom get such clear and detailed answers! Thanks for the short course on the HANK model big picture!

Update 2

Ben Moll writes: Hi John, thanks a lot for the very thoughtful post. Lots of great food for thought. In case you hadn't seen it, Tom Sargent posted a new paper a few days ago that has a really great discussion of the main takeaways from HANK. See in particular sections 5 and 7. For example, see the point that HANK "challenges the neoclassical synthesis and a widely-believed prescription for separating macro policy design from policies to redistribute income and wealth." But plenty of other great points there too. Finally, yes, Matt Rognlie's response is really fantastic.

Sargent's paper is here. It's fantastic. I'm going to save a review for a separate blog post.
At the height of the "unipolar moment" of the 1990s and 2000s, a popular school of thought argued that the U.S. could use its unchecked power to do good on the world stage. When peace proved elusive after the fall of the Soviet Union, the world saw exceptionally deadly warfare and attempted ethnic cleansing erupt in Karabakh, the Congo, Rwanda, and Yugoslavia. If there was one power that could do anything about this, it was the United States.

The primary ideologue of this push for humanitarian interventionism would become Samantha Power. A journalist and author of the influential book A Problem from Hell: America in the Age of Genocide, Power argued that the U.S. and other liberal nation-states should use military force to halt crimes against humanity in other countries. Nation-states have a responsibility to protect their citizens, and if they are found not to be doing this, other states may then step in to do so should the situation continue to deteriorate. This principle was called the "Responsibility to Protect," or "R2P," and it became so influential that it was endorsed by the United Nations at the 2005 World Summit.

Two decades later, the concept has been shown to be both hypocritical and dangerous. As Israel bombs Gaza to dust, world elites have shown no interest in living up to their apparent responsibility to protect civilians. Contrast this with the response to Russia's war in Ukraine, in which commentators called for direct intervention to save innocent Ukrainians. And actual examples of R2P have led to disaster, as civilians in Libya can attest. Fortunately, there's a simple answer to this hellish problem: abandoning R2P once and for all.

Who's protecting who?

Power's line of argument masked the darker history of so-called humanitarian interventions. Both the Romans and the Spanish, more than 1,000 years apart, argued that their bloody conquests in Carthage and Central America would help the world by ending the practice of human sacrifice.
In the 19th century, the colonial states of western Europe began to see their maritime expansion abroad not simply in terms of wealth and power acquisition but as an inevitable stage of human moral development. France and Britain in particular dove into Africa with the justification of putting an end to tribal wars and extirpating the slave trade. With economic and technological shifts, the same empires that had once expanded with shackles in hand now came, so they said, as liberators. Yet the result of this "liberation" was little more than violence and famine.

Unfortunately, the lessons of this sordid history didn't stick. Barack Obama would end up recruiting Power for multiple roles in his administration. While Obama ran explicitly against the disastrous foreign policy of George W. Bush in 2008, subsequent events related to the "Arab Spring" would see him continue if not expand those same policies, launching an overt regime change war in Libya (and a covert one in Syria) while intensifying U.S. operations in Yemen.

It was in the Libya war that Power would see her dream fulfilled, when R2P was explicitly invoked to protect Libyan civilians in rebel-held areas from retaliation by the government of Muammar Gaddafi. This operation resulted in widespread state failure, a collapse of central authority, a massive refugee crisis, a resurgence of extremism in that country, and, most gallingly of all, the return of slave markets along the North African coast. It turned out that there are worse things out there than oppressive governments.

R2P's reputation took a hit and receded from the discourse, at least until Russia's invasion of Ukraine gave it life once more. Some members of the press called for no-fly zones over the country, and politicians in NATO states debated whether such ideas were feasible. Ultimately, they decided they were not, given Russia's conventional capabilities, particularly in the field of anti-aircraft weaponry.
There was also the inconvenient fact of Moscow's nuclear deterrent. Any attempt to create a safe area for Ukrainians using NATO military forces risked a level of escalation deemed unpalatable by most in the alliance. This clearly showed that R2P was only ever really meant to be used by great powers against smaller countries that could not defend themselves. It never had any utility against more robust foes. Interestingly, Moscow had picked up on the trick and justified its invasion of Ukraine using explicitly R2P-flavored rhetoric about protecting lives in the Donbass.

The Gaza challenge

The current Gaza war is, by all the standards usually cited by humanitarian interventionists, a perfect case for invoking R2P. Rhetoric from some actors inside the Israeli state suggests that some form of population displacement is at least being considered. The incredible population density of the Gaza Strip means civilians bear the brunt of the violence. Lack of access to medicine and food exacerbates all of these problems.

Samantha Power is the current head of USAID in the Biden administration. One cannot help but wonder what she thinks of the war currently unfolding in Gaza while she coordinates food shipments through Egypt's Sinai that are often held up at border checkpoints. For one so adamant that the U.S. must intervene whenever war disproportionately hits a civilian populace, she has been strangely silent.

Meanwhile, Yemen's Houthis explicitly cite their own version of R2P to justify their attacks on shipping in the Red Sea. This shows the true future of R2P, if it is to have any: one of selective application by great powers that muddy the waters around their wars in an attempt to gain media approval from within their alliance networks.
As the world's balance of power inevitably moves into its more normal state of multipolarity, governments further undermine claims of ethical conduct in foreign policy, as states now have a constant incentive to claim that their foes hold a monopoly on human rights abuses. Inevitably, this will lead to all such claims being met with greater and greater skepticism by the global public, regardless of their factuality.

Far from the centers of great power rivalries, the utility of R2P seems to be declining, as crises such as Myanmar and Sudan are assiduously ignored by the countries expected to intervene, likely out of fear of being sucked into another Syria or Iraq and giving one's foes more opportunities to indirectly strike at stretched-thin tripwire forces.

It is a trite observation that great powers hold different moral standards for their allies and their enemies, but R2P is a foreign policy doctrine based entirely on moralist claims. Not being able to live up to its own principles when a humanitarian crisis is unleashed by an allied country such as Israel is the last nail in the coffin for a thoroughly failed idea. We can only hope that a silver lining of this difficult moment in history is that we never have to hear about the Responsibility to Protect again.
Late last year, the Federal Reserve drew attention for its central bank digital currency (CBDC) pilot projects. Going beyond traditional research, the Federal Reserve contracted with the private sector to build potential CBDCs for the United States. As American Banker's John Adams reported at the time, "Even as debate in the U.S. rages over the utility of a digital dollar, work continues on the nuts and bolts of a potential American CBDC."

With concerns about the risks of CBDCs high and the potential for such CBDC development to go awry, Representative Alex X. Mooney (R-WV) introduced a bill to rein in the Federal Reserve's discretion. The Digital Dollar Pilot Prevention Act, or DDP Prevention Act, would establish that the Federal Reserve needs explicit congressional authority to set up a CBDC pilot, whether on its own or through a private sector contractor. The bill covers both the Federal Reserve Board, which would ultimately lead the charge on issuing a CBDC, and the regional Federal Reserve banks that have thus far been conducting CBDC pilots. With that in mind, Representative Mooney's bill would help stop a CBDC pilot from being quietly transitioned to the masses.

The Federal Reserve recently tempered some of its public statements on CBDCs, but one doesn't need to look far to see where other statements have given reason for concern. For example, former vice-chair Lael Brainard testified before Congress in 2022 that developing a CBDC may take five years. While issuing this warning, Brainard attempted to make the case that the United States should create a CBDC now because it might be needed in the future: It's really the future states of the financial system that we should be thinking about as we think about the costs and benefits [of a CBDC]. … What is really important is that it takes a long time if, for instance, Congress decides that it is very important for the Federal Reserve to issue a [CBDC].
It could take five years to put in place the requisite security features [and] design features.
While this type of argument is less than compelling, it could help explain why the Federal Reserve has been developing potential CBDCs through pilot programs. If the Federal Reserve sees its authority to issue a CBDC as questionable, then developing one unofficially might be the next best thing in its eyes. Unofficial development through pilots, for instance, could mean that a CBDC would be ready to go if Congress gives the green light. However, it would also mean that a CBDC would be ready to launch alongside a flurry of other programs once a crisis occurs. While research is important, there's a fine line between lab experiments and operating a venture capital incubator for tech startups and financial institutions to get their CBDC models off the ground. Likewise, there's a fine line between conducting internal research and conducting pilots in which the broader public is used for experiments. With all these factors in mind, it is easy to see why Representative Mooney is concerned about the Federal Reserve's actions on CBDCs.
Diane Coyle is suggesting that there needs to be a good, hard and long look at Britain's productivity problem. Which would be an excellent idea, of course:

But what explains the UK's specifically dismal productivity problem? Some culprits will be depressingly familiar. A new report from The Productivity Institute (TPI) documents the consequences of the decade of declining spending per capita on education at all levels above primary school, the way expenditure on research and development as a share of GDP has fallen far behind other G7 economies and the confusing mishmash of small business support schemes. There is no shortage of diagnostic evidence about the wide range of productivity-limiting challenges. But two overarching weaknesses stand out: long-term under-investment and policy churn. Investment in the UK has been lower, as a share of GDP, than in other G7 countries for decades.

This then leads to this suggestion:

This political economy context is why this week's report, which captures the views of many of the UK researchers investigating productivity, calls for a new independent and statutory body to monitor, evaluate and report on policies for productivity and growth. This institution would parallel the Office for Budget Responsibility, with a remit covering supply-side policies. It would co-ordinate across areas of policy and levels of government, with a focus on spatial economic growth, and would involve relevant stakeholders in its assessments. And it would need to be protected from policy churn itself with a statutory footing.

No, we're not in favour of yet another bureaucracy. But even if we were, we'd insist that people get to grips with what is being measured when we talk of productivity. For, by going green, we are deliberately, definitely and with malice aforethought reducing productivity. This is true by definition; it is not something arguable.
We've mentioned this before around here, but here's one we prepared earlier, elsewhere:

We are, by dealing with those externalities, devoting economic energy — and other economic resources, but think just of the human effort here — to solving things which are not included in markets, in prices, in GDP.

It's that last factor that should cause the dawning realisation. It's entirely true that solar power creates more jobs than nuclear per GWh of 'leccie produced. The GWh is worth the same from either source, though, and solar requires more human labour - that's the same statement as "creates more jobs" - so therefore solar power lowers measured productivity. Sure, sure, we can say that not melting Greenland is important - human utility maximising, even - and that's almost certainly true as well. That's not in our economic measures, though (externalities, see?), which means that, yes, preventing Lowestoft sinking beneath the waves is actually recorded as a decline in UK productivity.

It could be entirely true that we're better off by addressing climate change. But it would still also be true - and again by definition - that by addressing climate change and other green externalities we are reducing productivity.

Now, given that the person recommending a Productivity Institute - who, as far as we're aware, actually runs one - isn't mentioning this point, we'd assume they're not thinking of it. Which means we're really most unsure of the merits of the thinking being done. And we'd certainly not recommend entombing that thinking in the bureaucracy. It really is true that going green reduces labour productivity. By definition. A conversation about productivity that doesn't even mention this isn't one worth having.
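The jobs-per-GWh argument above is simple division, which a few lines make explicit. The price and labour figures below are invented purely for illustration; only the logic — same market value per GWh, more labour required, therefore lower measured labour productivity — comes from the text.

```python
# Toy arithmetic for the productivity claim. All numbers are hypothetical.
price_per_gwh = 100_000.0        # a GWh is worth the same however generated

jobs_per_gwh_nuclear = 0.25      # assumed labour requirement, nuclear
jobs_per_gwh_solar = 1.00        # assumed: solar "creates more jobs" per GWh

# Labour productivity = value of output per worker.
productivity_nuclear = price_per_gwh / jobs_per_gwh_nuclear
productivity_solar = price_per_gwh / jobs_per_gwh_solar

print(productivity_nuclear)  # 400000.0
print(productivity_solar)    # 100000.0 -- more jobs per GWh means lower productivity
```

Whatever the actual figures, the direction of the result is forced by the arithmetic: identical output value divided by a larger labour input is a smaller quotient.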
There is a little contretemps over the management of Wickes (a matter to do with tradesmen, we believe) shouting that of course everyone should be celebrating 2SLGBTQ+ or something, and others insisting that, well, if they're going to start ramming that idea down our throats then we'll not shop there - leading to that fall in the share price.

Good.

Not the idea of 2SLGBTQ+ or not, the celebration of it or not, but the idea and practice of boycotts - and their opposite, the deliberately thought-through extra purchase. As happened with Chick-fil-A over a related issue some years back, when those supporting their stance deliberately and offensively went to eat mor' chikkin in the parking lots in celebration.

How we spend our money is how we bend producers to our will. Capitalists desire our money. In fact, any producer of anything does. In order to get our money the producer has to pleasure us sufficiently to get us to hand over that cash. Free markets mean that anyone can set up - that is what the free means, freedom of entry - to make any attempt to pleasure us enough to hand over our money.

Excellent, our job as consumers is to dispose of our money in the manner that most pleasures us - maximises our utility, in the technical jargon. If this includes strong celebrations of 2SLGBTQ+ then we should - note should - spend our money with those who display such support. If it doesn't then we should - again, note should - not spend our money in such places. The actual issue here we're not commenting upon; that's not for us to have an official view upon in the slightest. A step back and of course consenting adults get to do as consenting adults wish - that's the core of the liberal argument. But that includes, obviously enough, deploying one's own cash in whatever manner one wishes - again the core of the liberal argument. Which is that joy of the capitalist and free market system.
We've got the incentive for producers to do as we wish them to, and we've got the feedback mechanism to force that behaviour. Their greed for our cash plus our ability to direct our cash gives us the one and only system where we all get to vote our views each and every day. What an excellent system, eh?

We can even go further. Those aiming at the mass market have to be careful of the utils of the mass market. Anheuser-Busch trying the idea with Bud Light - the working man's beer, heavily associated with F-150s and the like - might not have been the wisest move. Trying it with Michelob wouldn't have worried anyone, because who would worry about that beer? Similarly, Wickes might not have made the wisest choice given its customer base. But that free entry does allow capitalist greed to aim at niches too. Someone is making a fortune out of all those flags. There are people profiting from heavier foundation creams and clothes with tucks. The system allows both the mass market offerings that have to average out those utils and also the niches where very specific desires can be profited from.

It is only free market capitalism that gives us both kinds of music, country and western. Our role in the system is to force the producers to pleasure us by judicious spending of our own money. Therefore we should do just that: spend on those who pleasure us. What it is that people are using as their measure is the deeply unimportant thing - liberals, recall: your life, your decisions about it. That everyone gets to act upon their desires is the very joy of this capitalist free marketry.
This time, presumed conservative supermajorities in the state Legislature need to act like it and not spend just because they can.
With projections of state spending coming in a bit more flush than previously predicted – about $88 million more this current fiscal year and $197 million for the next – plans are afoot for what to do with it. One suggestion has been to restore all current state spending on early childhood education, which at this point is budgeted at $24 million less.
A few years ago, legislators set up a fund to entice matching dollars from local governments to dole out to families with child care expenses. Although the law creating the program to draw from the fund cast a wide eligibility net, subsequent rules issued by the fund's overseer, the Department of Education, essentially steer most of the money, and encourage all of it, to go to the lowest-income households with preschool-and-younger children for use at better-quality licensed child care facilities.
This Louisiana Early Childhood Education Fund received a boost when sports betting became legal in Louisiana. A quarter of the revenues from on-site and mobile wagering (the latter limited to the 55 parishes that permit it) are deposited into this fund, with a maximum of $20 million disbursable to parishes that match dollar-for-dollar.
Around this same time, massively debt-fueled federal government pandemic dollars were flung to states, a part of which Louisiana used to expand child care provision. Leave-no-big-spending-government-behind Democrat former Gov. John Bel Edwards then cajoled Republican majority legislators, when the tap stopped during this fiscal year, to toss in state dollars to supplant that. These dollars were in addition to Fund bucks and didn't require a local match.
The $24 million would keep things in stasis, but if delivered again without strings or cuts elsewhere to compensate, it would echo the Summer EBT cash food benefit fiasco, where legislators recently decided to spend additional state taxpayer dollars to chase extra pandemic spending unwisely made permanent, like addicts needing a fix, despite little demonstrated need or utility of the larded-up benefit. The case for early childhood spending is somewhat better, but with budgetary rough waters ahead for FY 2026, because of a temporary sales tax hike from the Edwards era finally rolling off and other revenue stream adjustments to the state's general fund on the horizon, continuing the new commitment isn't as wise as cutting or redirecting spending from elsewhere.
Fortunately, the statute establishing the fund has the answer. It allocates a tenth of revenues to the 55 parishes with no strings attached, and over half gets dumped into the general fund. That tenth should be added to the quarter, in proportion to each parish's contribution, to be drawn down in the same matching way. Throwing in the local match gets that extra amount close to the $24 million. It would require a change in statute, but plenty of legislative instruments exist in this legislative session to provide a vehicle to do so.
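The allocation arithmetic described above can be laid out explicitly. The revenue figure below is a made-up assumption chosen only for illustration; the shares come from the text (a quarter to the fund, capped at $20 million; a tenth to the parishes; dollar-for-dollar local matching).

```python
# Hypothetical sketch of the fund arithmetic; the revenue total is invented.
sports_betting_revenue = 120_000_000.0  # assumed annual revenue, illustrative only

# Current law: a quarter to the fund, disbursable up to a $20M cap.
fund_share = min(0.25 * sports_betting_revenue, 20_000_000.0)

# The tenth currently sent to parishes with no strings attached.
parish_tenth = 0.10 * sports_betting_revenue

# The column's proposal: route that tenth through the same dollar-for-dollar
# match, so each redirected state dollar draws an equal local dollar.
extra_with_match = parish_tenth * 2

print(fund_share)        # 20000000.0 (capped)
print(extra_with_match)  # 24000000.0 under this assumed revenue
```

The point of the sketch is only the mechanism: because every redirected dollar is matched, the extra amount is twice the tenth, regardless of what the true revenue figure turns out to be.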
Local governments, essentially footing this entire increase, may squawk, but they are the direct beneficiaries of the program, as it supplements their workforces while injecting money into their local economies. It also makes much more fiscal sense to have an entirely dedicated revenue stream for this purpose than to plug the hole with general fund dollars and hope enough show up for FY 2026, when initial estimates peg revenue at hundreds of millions of dollars less.
Of course, to do something like this that prioritizes spending will take much more governing fortitude than grabbing at a bonus and kicking the can down the road. Truly conservative legislators would act preemptively with a strategy like this rather than sign the check and think wishfully.
The men gathered in a graveyard in the dead of night. They wore body armor and boots and carried semi-automatic weapons. Their target lay a mile away: the official residence of the president of The Gambia, Yahya Jammeh, a U.S.-trained military officer who seized power in 1994. Those in the cemetery planned to oust him, but within hours, they were either dead or on the run.
One of those killed, the ringleader and former head of Gambia's Presidential Guard, Lamin Sanneh, had previously earned a master's degree at the Pentagon's National Defense University in Washington, D.C.
Some of the plotters were eventually convicted in the United States "for their roles in planning and executing an unsuccessful coup attempt to overthrow the government of The Gambia on December 30, 2014." Four pled guilty on counts related to the Neutrality Act — a federal law that prohibits Americans from waging war against friendly nations. A fifth was sentenced in March 2017 for buying and exporting weapons used in the failed coup, which pitted two generations of U.S.-trained mutineers against each other.
The State Department doesn't know about any of this — or doesn't want to. A simple Google search reveals this information, but when Responsible Statecraft asked if Yahya Jammeh or Lamin Sanneh had received U.S. training, a State Department spokesperson responded: "We do not have the ability to provide records for these historical cases at this time." When asked about other trainees in other nations that have experienced military uprisings, the response was the same.
Responsible Statecraft has found that at least 15 U.S.-supported officers have been involved in 12 coups in West Africa and the greater Sahel during the war on terror. The list includes military personnel from Burkina Faso (2014, 2015, and twice in 2022); Chad (2021); Gambia (2014); Guinea (2021); Mali (2012, 2020, 2021); Mauritania (2008); and Niger (2023). At least five leaders of the most recent coup in Niger received U.S. training, according to a U.S. official. They, in turn, appointed five U.S.-trained members of the Nigerien security forces to serve as governors, according to the State Department.
The total number of U.S.-trained mutineers across Africa since 9/11 may be far higher than is known, but the State Department, which tracks data on U.S. trainees, is either unwilling or unable to provide it. Responsible Statecraft identified more than 20 other African military personnel involved in coups who may have received U.S. training or assistance, but when asked, the State Department said it lacks the "ability" to provide information that it possesses.
"If we are training individuals who are executing undemocratic coups, we need to be asking more questions about how and why that happens," said Elizabeth Shackelford, a senior fellow at the Chicago Council on Global Affairs and lead author of the newly-released report, "Less is More: A New Strategy for U.S. Security Assistance to Africa." "If we aren't even trying to get to the bottom of that, we are part of the problem. This shouldn't just be on our radar — it should be something we intentionally track."
Shackelford and her colleagues say that the U.S. penchant for pouring money into abusive African militaries, instead of making long-term investments in bolstering democratic institutions, good governance, and the rule of law, has undermined wider American aims.
In addition to training military mutineers in Africa, other U.S. security assistance efforts during the war on terror have also foundered and failed. Ukrainian troops trained by the U.S. and its allies stumbled during a long-awaited counteroffensive against Russian forces, raising questions about the utility of the instruction.
In 2021, an Afghan army created, trained, and armed by the United States over 20 years dissolved in the face of a Taliban offensive. In 2015, a $500 million Pentagon effort to train and equip Syrian rebels, slated to produce 15,000 troops, yielded just a few dozen before being scrapped. A year earlier, an Iraqi army built, trained, and funded — to the tune of at least $25 billion — by the U.S. was routed by the rag-tag forces of the Islamic State.
"U.S. policy in Africa has for too long prioritized short-term security to the detriment of long-term stability by prioritizing the provision of military and security assistance," Shackelford writes in the new Chicago Council report. "Partnerships and military assistance with illiberal, undemocratic countries have delivered little, if any, sustainable security improvements, and in many cases have prompted further instability and violence by building the capacity of abusive security forces."
The most cataclysmic failure of the concluded 2024 First Extraordinary Session of the Louisiana Legislature was that genuinely closed primary elections for all Louisiana federal offices and statewide plenary executive and judicial contests (much less for statewide single executive, legislative, and appellate and district court offices) won't come anytime soon.
Recently, Republicans in particular had stumped for closed primaries, and even made legislative overtures to close primaries for all federal contests, as presidential preference primaries have been for decades. This means only registrants of that party could vote to determine a general election nominee, as opposed to the current blanket primary, which actually isn't a primary but instead a general election contest without party nominees, where all candidates, regardless of label, run together. If no candidate receives a majority, a runoff election occurs.
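The blanket-primary mechanics described above reduce to a small decision rule: everyone runs together, a strict majority wins outright, and otherwise the top two advance to a runoff. The candidates and vote counts below are invented for illustration.

```python
# Minimal sketch of blanket-primary mechanics; names and tallies are made up.
def blanket_primary(votes):
    """votes: dict of candidate -> vote count.
    Returns (winner, runoff_pair): a majority winner takes the seat
    outright; otherwise the top two vote-getters go to a runoff."""
    total = sum(votes.values())
    ranked = sorted(votes, key=votes.get, reverse=True)
    if votes[ranked[0]] * 2 > total:        # strict majority of all votes cast
        return ranked[0], None
    return None, (ranked[0], ranked[1])     # runoff between top two

print(blanket_primary({"A": 60, "B": 25, "C": 15}))  # ('A', None)
print(blanket_primary({"A": 45, "B": 40, "C": 15}))  # (None, ('A', 'B'))
```

Note what the rule does not consult: party labels. That absence is exactly the column's complaint, since no party ever formally nominates anyone.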
In this session, Republican Gov. Jeff Landry backed even more ambitious changes that would have created unadulterated closed primaries for all majoritarian branch contests at the federal and state level. Major parties at their own discretion could opt in to let unaffiliated voters participate in these contests. The plurality winner in a primary would advance. Local offices were excluded, which is not inconsistent in that the current blanket primary system simulates nonpartisan local elections commonly held across the country.
By far this was the most important issue in the session, because the blanket primary system distorts the policy-making process to the state's detriment. If parties can control the most important decision they make, providing a candidate of their label for the general election, this prompts the election of officials more likely to pursue a common agenda rather than individual ones. And a major reason why Louisiana has fallen so far behind other states on issues of economic prosperity and quality of life, as evidenced by the outmigration of over 100,000 residents during the governorship of Democrat Gov. John Bel Edwards, is that the blanket primary system doesn't provide the glue to hold together a majority party that helps it make the significant, bolder holistic policy changes necessary to address numerous shortcomings.
Of course, the bigger the change, the greater the political power needed. Landry and his legislative allies needed to let nothing interfere with this quest. And then they botched it, big time.
It all started going awry as dilution after dilution of the original legislation occurred. When the dust settled days later, all that was left was "closed" primaries for congressional offices and for the Public Service Commission, Board of Elementary and Secondary Education, and Supreme Court, with runoffs if needed. And these aren't really even "closed," because the version that passed took away parties' ability to exclude unaffiliated voters.
Note that if one goal of Landry and the GOP supermajorities in each legislative chamber was to enhance the party's ability to carry a conservative agenda that could fix what ails the state through primaries that tended to weed out Republicans-in-Name-Only, they failed almost entirely. And even in the narrow areas of education and utility regulation, or for congressional contests, milquetoast conservatives still can win those GOP primaries with support from moderate leftists who can change their registrations to no party (which can be done in seconds online if you have a driver's license, or in days by mail if you don't) and back again every four years to vote in the Democrats' presidential primary.
Landry and the GOP flubbed their chance for meaningful and comprehensive closed primaries because they mistakenly gave away a congressional seat to Democrats, and nearly did the same with a state Supreme Court seat, through voluntary congressional reapportionment spurred by an unresolved court case that the state has a decent chance of winning, even though the state is contesting the very same kind of case concerning legislative reapportionment rather than capitulating. These attempted giveaways sensitized Senate Republicans in particular, all of whom gained their seats under the blanket primary system and many of whom blanched at handing Democrats a gift. They saw no real reason to turn their backs on the system that got them there by going out on a limb for a Republican governor willing to give away the farm.
Having triggered this reaction within their own party by surrendering seats, Landry and GOP leaders needed, to get the votes to pass something close to the original bill, to tell the beneficiary Democrats that unless they went along with it, congressional reapportionment was off the table and the state would take its chances with the courts, just as it was doing with legislative reapportionment. If they tried this, they did it too ineffectively.
Chances for change like this don't come often, yet Landry blew this opportunity for fundamental reform necessary for moving the state forward by letting reapportionment distractions get in the way.
A new open data platform will accelerate robust and comprehensive research in the agricultural sector.

This post was written by Jenna Fahle (CEGA), Radhika Goyal (UCSD), Vinny Armentano (UCSD), and Craig McIntosh (UCSD).

Introduction to the ATAI Data Portal

Since 2009, the Agricultural Technology Adoption Initiative (ATAI), co-managed by the Center for Effective Global Action (CEGA) and the Abdul Latif Jameel Poverty Action Lab (J-PAL), has generated robust evidence of the impacts of agricultural technologies, such as stress-tolerant rice or mobile-phone-based agricultural extension, on small-scale farmer welfare. Today, ATAI launched a new open data platform to bring together the best evidence from ATAI-funded research in a single portal, making it easily accessible to researchers and policymakers alike. The initiative aims to foster collaboration and evidence-informed decision-making in the agricultural sector, ultimately contributing to the advancement of the most effective agricultural practices and improving farmer welfare.

Why make data open?

Access to high-quality data has long been recognized as a significant obstacle in social science research. To address this issue, data repositories like the J-PAL Dataverse have emerged, making it easier for researchers, policymakers, and others to access and utilize data from completed research studies. In recent times, the effectiveness of these data repositories has been bolstered by data-sharing policies put into place by funders, journals, and research organizations. UC Berkeley's Initiative for Transparency in the Social Sciences (BITSS) — incubated at CEGA — champions these and other open data approaches as a standard practice that promotes transparency and reproducibility of evidence, strengthening the scientific ecosystem and bolstering the credibility of research findings.

The ATAI Data Portal goes beyond the principles of open data by incorporating data harmonization. Data harmonization involves the collection of data from various sources or, in the case of ATAI, a research portfolio, in a manner that ensures users have a comprehensive and comparable view of the information.

Harmonized data holds tremendous value for researchers aiming to extract insights from multiple studies. In the past, researchers had to collect datasets from various sources, investing valuable time in cleaning and integrating the data. Often, the unavailability of raw data hindered such comparisons, and the resulting publicly available data lacked sufficient information for meaningful analyses. However, researchers now have a powerful tool at their disposal. With the ATAI Data Portal, they can access harmonized data, enabling them to conduct meta-analyses and explore the external validity and generalizability of research results more efficiently and effectively. This transformative platform opens up new avenues for robust and comprehensive research in the agricultural sector.

The ATAI Data Portal also improves the richness and quality of datasets from ATAI-funded projects in several ways. For instance, a number of ATAI-funded studies contain georeferencing, or latitude and longitude coordinates for agricultural fields, households, or study administrative boundaries. When geographic coordinates are available, the ATAI Data Portal overlays the project dataset with environmental variables — such as temperature, precipitation, night lights, and forest cover — to expand the richness and utility of the data. (Many predictive models rely on this kind of information as ground-truth data.) To maintain the anonymity of the surveyed population, the data linkage employs industry-standard geo-masking techniques. By implementing these measures, the ATAI Data Portal ensures that the privacy and confidentiality of the participants are preserved while providing valuable insights into the relationships between agricultural practices and environmental factors.

During the data harmonization process, meticulous data cleaning is carried out to ensure data integrity. This includes harmonizing units, eliminating negative values, and removing duplicate records. These measures contribute to the overall reliability and consistency of the data made available through the ATAI Data Portal, fostering more robust and trustworthy research outcomes.

Thus, the ATAI Data Portal offers a novel approach in that it features high-quality, harmonized data integrated with environmental variables in an open and accessible format.

"This portal is a first step in an effort to allow datasets from randomized controlled trials to be put to a broader set of uses. By harmonizing core agricultural variables to the fullest extent possible as well as providing broad access to raw data, the portal will allow the research community to aggregate across studies and geographies in a way not possible in any single study." — Craig McIntosh, ATAI Co-Chair and Professor of Economics at UCSD

ATAI-data.org launched with seventeen datasets based in Bangladesh, Ghana, Ethiopia, India, Kenya, Mozambique, Uganda, and Zambia. The portal will continue to grow as more research teams complete and submit their datasets to ATAI.

What comes next?

The ATAI Data Portal is a public good that will increase in volume and value over time as more open datasets from ATAI become available and more researchers make use of it. The ATAI Data Portal is open-source and freely available. ATAI has seized an opportunity to institutionalize harmonized, open data and further standardize data collection for agricultural randomized evaluations — making every research step count.
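The cleaning steps described above, harmonizing units, eliminating negative values, and removing duplicate records, can be sketched roughly as follows. This is an illustrative sketch only: the field names ("area", "yield_kg", "household_id", "season") and the unit conversion are my own assumptions, not the portal's actual schema or pipeline.

```python
# Rough sketch of the harmonization steps described above. All field names
# are hypothetical; the portal's real pipeline is not documented in this post.
def harmonize(records, acres_per_hectare=2.471):
    seen, cleaned = set(), []
    for rec in records:
        # Harmonize units: convert plot areas reported in acres to hectares.
        if rec.get("area_unit") == "acre":
            rec = {**rec, "area": rec["area"] / acres_per_hectare,
                   "area_unit": "hectare"}
        # Eliminate negative values, e.g. impossible yields.
        if rec.get("yield_kg", 0) < 0:
            continue
        # Remove duplicate records by a stable key.
        key = (rec.get("household_id"), rec.get("season"))
        if key in seen:
            continue
        seen.add(key)
        cleaned.append(rec)
    return cleaned
```

A real pipeline would of course document each transformation and preserve the raw data alongside the harmonized version, as the portal does by offering both.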
We hope that this model is an encouraging approach and tool for researchers working to evaluate the effectiveness of agricultural development programs.

For more information and for portal documentation, please visit atai-data.org.

"Making every research step count: Introducing the ATAI Data Portal" was originally published in CEGA on Medium, where people are continuing the conversation by highlighting and responding to this story.
Today, I'll add an entry to my occasional reviews of interesting academic papers. The paper: "Price Level and Inflation Dynamics in Heterogeneous Agent Economies," by Greg Kaplan, Georgios Nikolakoudis and Gianluca Violante.

One of the many reasons I am excited about this paper is that it unites the fiscal theory of the price level with heterogeneous agent economics. And it shows how heterogeneity matters. There has been a lot of work on "heterogeneous agent new-Keynesian" models (HANK). This paper inaugurates heterogeneous agent fiscal theory models. Let's call them HAFT.

The paper has a beautifully stripped-down model. Prices are flexible, and the price level is set by fiscal theory. People face uninsurable income shocks, however, and a borrowing limit. So they save an extra amount in order to self-insure against bad times. Government bonds are the only asset in the model, so this extra saving pushes down the interest rate, the discount rate, and the government's cost of servicing debt. The model has a time-zero shock and then no aggregate uncertainty. This is exactly the right place to start. In the end, of course, we want fiscal theory, heterogeneous agents, and sticky prices to add inflation dynamics. And on top of that, whatever DSGE smorgasbord is important to the issues at hand: production side, international trade, multiple real assets, financial frictions, and more. But the genius of a great paper is to start with the minimal model.

Part II: effects of fiscal shocks

I am most excited by Part II, the effects of fiscal shocks. This goes straight to important policy questions.

Note: This figure plots impulse responses to a targeted and untargeted helicopter drop, aggregated at the quarterly frequency. The helicopter drop is a one-time issuance of 16% of total government nominal debt outstanding at t = 0. Only households in the bottom 60% of the wealth distribution receive the issuance in the targeted experiment (dashed red line).
The orange line plots dynamics in the representative agent (RA) model. The dashed black line plots the initial steady state. Source: Kaplan et al., Figure 7.

At time 0, the government drops $5 trillion of extra debt on people, with no plans to pay it back. The interest rate does not change. What happens? In the representative agent economy, the price level jumps, just enough to inflate away outstanding debt by $5 trillion. (In this simulation, inflation subsequent to the price-level jump is just set by the central bank, via an interest rate target. So the rising price level line of the representative agent (orange) benchmark is not that interesting. It's not a conventional impulse response showing the change after the shock; it's the actual path after the shock. The difference between the colored heterogeneous agent lines and the orange representative agent line is the important part.)

Punchline: In the heterogeneous agent economies, the price level jumps a good deal more. And if transfers are targeted to the bottom of the wealth distribution, the price level jumps more still. It matters who gets the money.

This is the first step on an important policy question: why was the 2020-2021 stimulus so much more inflationary than, say, 2008? I have a lot of stories ("fiscal histories," FTPL), one of which is a vague sense that printing money and sending people checks has more effect than borrowing in treasury markets and spending the results. This graph makes that sense precise. Sending people checks, especially people who are on the edge, does generate more inflation. In the end, whether government debt is inflationary or not comes down to whether people treat the asset as a good savings vehicle, and hang on to it, or try to spend it, thereby driving up prices. Sending checks to people likely to spend them gives more inflation.

As you can see, the model also introduces some dynamics, where in this simple setup (flexible prices) the RA model just gives a price-level jump.
To understand those dynamics, and more intuition of the model, look at the response of real debt and the real interest rate. The greater inflation means that the same increase in nominal debt is a lesser increase in real debt. Now, the crucial feature of the model steps in: due to self-insurance, there is essentially a liquidity value of debt. If you have less debt, the marginal value of debt is higher; people bid down the real interest rate in an attempt to get more debt. But the higher real rate means the real value of debt rises, and as the debt rises, the real interest rate falls.

To understand why this is the equilibrium, it's worth looking at the debt accumulation equation, \[ \frac{db}{dt} = r_t (b_t; g_t) b_t - s_t. \] Here \(b_t\) is the real value of nominal debt, \(r_t=i_t-\pi_t\) is the real interest rate, and \(s_t\) is the real primary surplus. Higher real rates (debt service costs) raise debt. Higher primary surpluses pay down debt. Crucially -- the whole point of the paper -- the interest rate depends on how much debt is outstanding and on the distribution of wealth \(g_t\). (\(g_t\) is a whole distribution.) More debt means a higher interest rate. More debt does a better job of satisfying self-insurance motives. Then the marginal value of debt is lower, so people don't try to save as much, and the interest rate rises. It works a lot like money demand.

Now, if the transfer were proportional to current wealth, nothing would change, and the price level would jump just like the RA (orange) line. But it isn't; in both cases more-constrained people get more money. The liquidity constraints are less binding, so they're willing to save more. For given aggregate debt, the real interest rate will rise. So the orange line with no change in real debt is no longer a steady state. We must initially have \(db/dt>0\). Once debt rises and the distribution of wealth mixes, we go back to the old steady state; so real debt rises less initially, so that it can continue to rise.
And to do that, we need a larger price level jump. Whew. (I hope I got that right. Intuition is hard!)

In a previous post on heterogeneous agent models, I asked whether HA matters for aggregates, or whether it is just about distributional consequences of unchanged aggregate dynamics. Here is a great example in which HA matters for aggregates, both for the size and for the dynamics of the effects.

Here's a second cool simulation. What if, rather than a lump-sum helicopter drop with no change in surpluses, the government just starts running permanent primary deficits?

Note: Impulse response to a permanent expansion in primary deficits. The dotted orange line shows the effects of a reduction in surplus in the representative agent (RA) model. The blue line labelled "Lump Sum" illustrates the dynamics following an expansion of lump sum transfers. The dashed red line labelled "Tax Rate" plots dynamics following a tax cut. The dashed black line plots the initial steady state. Source: Kaplan et al., Figure 8.
In the RA model, a decline in surpluses is exactly the same thing as a rise in debt. You get the initial price jump, and then the same inflation following the interest rate target. Not so in the HA models! Perpetual deficits are different from a jump in debt with no change in deficit.

Again, real debt and the real rate help to understand the intuition. The real amount of debt is permanently lower. That means people are more starved for buffer-stock assets, and bid down the real interest rate. The nominal rate is fixed, by assumption in this simulation, so a lower real rate means more inflation.

For policy, this is an important result. With flexible prices, RA fiscal theory gives only a one-time price-level jump in response to unexpected fiscal shocks. It does not give steady inflation in response to steady deficits. Here we do have steady inflation in response to steady deficits! It also shows an instance of the general "discount rates matter" theorem. Granted, here the central bank could lower inflation by just lowering the nominal rate target, but we know that's not so easy when we add realisms to the model.

To see just why this is the equilibrium, and why surpluses are different from debt, again go back to the debt accumulation equation, \[ \frac{db}{dt} = r_t (b_t, g_t) b_t - s_t. \] In the RA model, the price level jumps so that \(b_t\) jumps down, and then with smaller \(s_t\), \(r b_t - s_t\) is unchanged with a constant \(r\). But in the HA model, the lower value of \(b\) means more liquidity value of debt, and people try to save, bidding down the interest rate. We need to work down the debt demand curve, driving down the real interest costs \(r\) until they partially pay for some of the deficits. There is a sense in which "financial repression" (artificially low interest rates) via perpetual inflation helps to pay for perpetual deficits. Wow!

Part I: r < g

The first theory part of the paper is also interesting.
(Though these are really two papers stapled together, since as I see it the theory in the first part is not at all necessary for the simulations.) Here, Kaplan, Nikolakoudis and Violante take on the r<g question clearly. No, r<g does not doom fiscal theory! I was so enthused by this that I wrote up a little note, "fiscal theory with negative interest rates," here. Detailed algebra of my points below is in that note. (An essay on r<g and also an r<g chapter in FTPL explain the related issue: why it's a mistake to use averages from our real economy to calibrate perfect foresight models. Yes, we can observe \(E(r)<E(g)\) yet present values converge.) I'll give the basic idea here.

To keep it simple, think about what happens with a negative real interest rate \(r<0\) and a constant surplus \(s\), in an economy with no growth and perfect foresight. You might think we're in trouble: \[b_t = \frac{B_t}{P_t} = \int e^{-r\tau} s\, d\tau = \frac{s}{r}.\] A negative interest rate makes present values blow up, no? Well, what about a permanently negative surplus \(s<0\) financed by a permanently negative interest cost \(r<0\)? That sounds fine in flow terms, but it's really weird as a present value, no?

Yes, it is weird. Debt accumulates at \[\frac{db_t}{dt} = r_t b_t - s_t.\] If \(r>0\), \(s>0\), then the real value of debt is generically explosive for any initial debt but \(b_0=s/r\). Because of the transversality condition ruling out real explosions, the initial price level jumps so that \(b_0=B_0/P_0=s/r\). But if \(r<0\), \(s<0\), then debt is stable. For any \(b_0\), debt converges, and the transversality condition is satisfied. We lose fiscal price level determination. No, you can't take a present value of a negative cashflow stream with a negative discount rate and get a sensible present value.

But \(r\) is not constant. The more debt, the higher the interest rate.
So \[\frac{db_t}{dt} = r(b_t) b_t - s_t.\] Linearizing around the steady state \(b^*=s/r\), \[\frac{db_t}{dt} = \left[r(b^*) + \frac{dr(b^*)}{db}b^*\right](b_t - b^*).\] So even if \(r<0\), if more debt raises the interest rate enough, that is, if \(dr(b)/db\) is large enough, dynamics are locally, and it turns out globally, unstable even with \(r<0\). Fiscal theory still works!

You can work out an easy example with bonds in utility, \(\int e^{-\rho t}[u(c_t) + \theta v(b_t)]dt\), simplifying further to log utility, \(u(c) + \theta \log(b)\). In this case \(r = \rho - \theta v'(b) = \rho - \theta/b\) (see the note for the derivation), so debt evolves as \[\frac{db}{dt} = \left[\rho - \frac{\theta}{b_t}\right]b_t - s = \rho b_t - \theta - s.\] Now the \(r<0\) part still gives stable dynamics and multiple equilibria. But if \(\theta>-s\), then dynamics are again explosive for all but \(b=s/r\), and fiscal theory works anyway.

This is a powerful result. We usually think that in perfect foresight models \(r>g\) (here, \(r>0\)), and consequently that positive vs. negative primary surpluses, \(s>0\) vs. \(s<0\), is an important dividing line. I don't know how many fiscal theory critiques I have heard that say a) it doesn't work because r<g, so present values explode, and b) it doesn't work because primary surpluses are always slightly negative. This is all wrong. What the analysis, as in this example, shows is that fiscal theory can work fine, and doesn't even notice a transition from \(r>0\) to \(r<0\), from \(s>0\) to \(s<0\). Financing a steady small negative primary surplus with a steady small negative interest rate, or \(r<g\), is seamless. The crucial dividing line in this example is \(s=-\theta\): past that boundary, there is no equilibrium any more. You can finance only so much primary deficit by financial repression, i.e., squeezing down the amount of debt so its liquidity value is high, pushing down the interest costs of debt.
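To see the knife-edge determination at work, here is a minimal numerical sketch of the bonds-in-utility example, \(db/dt = \rho b - \theta - s\). The parameter values are my own illustrative choices, not the paper's calibration; they satisfy \(s<0\) with \(\theta>-s\), so the steady-state real rate is negative and yet only one initial value of real debt avoids explosion.

```python
# Euler-integrate db/dt = rho*b - theta - s, the bonds-in-utility example
# with r(b) = rho - theta/b. Illustrative parameters: s < 0 but theta > -s.
rho, theta, s = 0.04, 0.06, -0.05
b_star = (theta + s) / rho          # steady state: rho*b - theta - s = 0

def simulate(b0, T=200.0, dt=0.1):
    """Path of real debt; returns terminal value, or 0.0 if debt hits zero."""
    b = b0
    for _ in range(int(T / dt)):
        b += (rho * b - theta - s) * dt
        if b <= 0:
            return 0.0
    return b

r_star = rho - theta / b_star       # steady-state real rate, negative here

# Only b0 = b_star is an equilibrium: start above it and debt explodes,
# start below it and debt collapses to zero. This instability is what pins
# down the initial price level, even though r_star < 0.
print(simulate(b_star))             # stays (approximately) at b_star
print(simulate(b_star * 1.1))       # grows explosively
print(simulate(b_star * 0.9))       # collapses to zero

# Present value check: discount the liquidity-inclusive flow s + theta at
# rho (not s at the negative rate r_star) to recover steady-state debt.
print((s + theta) / rho)            # equals b_star
```

The last two lines preview the point made below about discounting: the flow \(s+\theta\) discounted at \(\rho\) gives a sensible, finite value, where discounting \(s\) alone at the negative rate would diverge.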
The paper staples these two exercises together, and calibrates the above simulations to \(s<0\) and \(r<g\). But I bet they would look almost exactly the same with \(s>0\) and \(r>g\). \(r<g\) is not essential to the fiscal simulations.*

The paper analyzes self-insurance against idiosyncratic shocks as the cause of a liquidity value of debt. That's interesting, and allows the authors to calibrate the liquidity value against microeconomic observations on just how much people suffer such shocks and want to insure against them. The Part I simulations are just that, heterogeneous agents in action. But this theoretical point is much broader, and applies to any economic force that pushes up the real interest rate as the volume of debt rises. Bonds in utility, here and in the paper's appendix, work. They are a common stand-in for the usefulness of government bonds in financial transactions. And in that case, it's easier to extend the analysis to a capital stock, real estate, foreign borrowing and lending, gold bars, crypto, and other means of self-insuring against shocks. Standard "crowding out" stories by which higher debt raises interest rates work. (Blanchard's r<g work has a lot of such stories.) The "segmented markets" stories underlying faith in QE give a rising \(r(b)\). So the general principle is robust to many different kinds of models.

My note explores one issue the paper does not, and it's an important one in asset pricing. OK, I see how dynamics are locally unstable, but how do you take a present value when r<0? If we write the steady state \[b_t = \int_{\tau=0}^\infty e^{-r \tau}s\, d\tau = \int_{\tau=0}^T e^{-r \tau}s\, d\tau + e^{-rT}b_{t+T}= (1-e^{-rT})\frac{s}{r} + e^{-rT}b,\] then with \(r<0\) and \(s<0\), the integral and the final term of the present value formula each explode to infinity. It seems you really can't discount with a negative rate.
The answer is: don't integrate forward \[\frac{db_t}{dt}=r b_t - s \] to the nonsense \[ b_t = \int e^{-r \tau} s\, d\tau.\] Instead, integrate forward \[\frac{db_t}{dt} = \rho b_t - \theta - s\] to \[b_t = \int e^{-\rho \tau} (s + \theta)\,d\tau = \int e^{-\rho \tau} \frac{u'(c_{t+\tau})}{u'(c_t)}(s + \theta)\,d\tau.\] In the last equation I put consumption (\(c_t=1\) in the model) back in for clarity. Discount the flow value of liquidity benefits at the consumer's intertemporal marginal rate of substitution. Do not use liquidity to produce an altered discount rate.

This is another deep, and frequently violated, point. Our discount factor tricks do not always work in infinite-horizon models. \(1=E(R_{t+1}^{-1}R_{t+1})\) works just as well as \(1 = E\left[\beta u'(c_{t+1})/u'(c_t)\, R_{t+1}\right]\) in a finite-horizon model, but you can't always use \(m_{t+1}=R_{t+1}^{-1}\) in infinite-period models. The integrals blow up, as in the example. This is a good thesis topic for a theoretically minded researcher. It's something about Hilbert spaces. Though I wrote the discount factor book, I don't know how to extend discount factor tricks to infinite periods. As far as I can tell, nobody else does either. It's not in Duffie's book. In the meantime, if you use discount factor tricks like affine models -- anything but the proper SDF -- to discount an infinite cashflow, and you find "puzzles" and "bubbles," you're on thin ice. There are lots of papers making this mistake.

A minor criticism: the paper doesn't show the nuts and bolts of how to calculate a HAFT model, even in the simplest example. Note by contrast how trivial it is to calculate a bonds-in-utility model that gets most of the same results. Give us a recipe book for calculating textbook examples, please!

Obviously this is a first step. As FTPL quickly adds sticky prices to get reasonable inflation dynamics, so should HAFT. For FTPL (or FTMP, fiscal theory of monetary policy; i.e.
adding interest rate targets), adding sticky prices made the story much more realistic: we get a year or two of steady inflation eating away at bond values, rather than a price level jump. I can't wait to see HAFT with sticky prices. For all the other requests for generalization: you just found your thesis topic. Send typos, especially in equations.

Updates

*Greg wrote, and pointed out that this isn't exactly right: "In the standard r>g, s>0 case, an increased desire to hold real assets (such as more income risk) leads to a lower real rate and higher real debt - the standard 'secular stagnation' story. With r<g, s<0, an increased desire to hold real assets leads to higher real rates and higher debt." To understand this comment, you have to look at the supply and demand graph in the paper, or in my note. The "supply" of debt in the steady state, \(b = s/r\), plotted with \(r\) as a function of \(b\), flips from a declining curve to a rising curve when \(s\) and \(r\) change sign. The "demand" curve \(r(b)\) is upward sloping. So when demand shifts out, \(b\) rises, but \(r\) falls when \(r>0\) and rises when \(r<0\). With positive interest rates, you produce a greater amount of real debt, for the same surplus, with a lower real interest rate. With negative interest rates and a negative surplus, you produce more debt with a less negative real rate. Hmm. The \(r<g\) region is still a little weird. There is also the possibility of multiple equilibria, like the new-Keynesian zero-bound equilibria; see the paper and note.

Erzo Luttmer has a related HAFT paper, "Permanent Primary Deficits, Idiosyncratic Long-Run Risk, and Growth." It's calibrated in much more detail, and is also more detailed on the r<g and long-run deficit questions. It includes fiscal theory (p. 14) but does not seem centrally focused on inflation. I haven't read it yet, but it's important if you're getting into these issues.

I still regard r<g as a technical nuisance.
In most of the cases here, it does not relieve the government of the need to repay debts, it does not lead to a Magic Money Tree, and it does not undermine fiscal price level determination. I am still not a fan of OLG models, which delicately need the economy truly to go on for infinite growth. I'm not totally persuaded that HA is first-order important for getting aggregate inflation dynamics right. The Phillips curve still seems like the biggest rotten timber in the ship to me. But these issues are technical and complex, and I could be wrong. Attention is limited, so you have to place your bets in this business; but fortunately you can still read after other people work it out!

Noah Kwicklis at UCLA has a very interesting related paper, "Transfer Payments, Sacrifice Ratios, and Inflation in a Fiscal Theory HANK":

"I numerically solve a calibrated Heterogeneous Agent New-Keynesian (HANK) model that features nominal rigidities, incomplete markets, hand-to-mouth households, nominal long-term government debt, and active fiscal policy with a passive monetary policy rule to analyze the implications of the fiscal theory of the price level (FTPL) in a setting with wealth and income inequality. In model simulations, the total cumulative inflation generated by a fiscal helicopter drop is largely determined by the size of the initial stimulus and is relatively insensitive to the initial distribution of the payments. In contrast, the total real GDP and employment response depends much more strongly on the balance sheets of the transfer recipients, such that payments to and from households with few assets and high marginal propensities to consume (MPCs) move aggregate output much more strongly than payments to or from households with low MPCs...."
A new revelation from Bossier City's contracted city engineer, relating to a state grant and a water deal between the city and the Port of Caddo-Bossier, may explain why a City Council majority passed the deal despite numerous red flags that at least one of its members seems to have missed, as part of a campaign pursued by the Port, contractor Manchac Consulting Group, and Republican Mayor Tommy Chandler's administration.
It's frustrating to me, and perhaps repetitive if not boring to readers, to have to go through this again, because the facts outlined below have been discussed time and time and time and time and time again, and well in advance of the Apr. 4 meeting that approved the deal. But it all deserves another look as a consequence of statements made at the most recent Council meeting, on Apr. 18, which shed light on the possibility that several parties with a vested interest in seeing the deal go through pursued a strategy of ambiguity that sought to steer skeptical councilors away from a true and full understanding of the deal's implications, and seemingly succeeded.
By now, details of the water deal are familiar: the Port would issue bonds to pay for a water distribution and treatment facility that Bossier City would run. In exchange for the right to run the facility and keep all revenues beyond costs once that amount had been reached, the city would commit itself to a long-term liability equal to the total cost of the bond issue. While the deal would last 40 years, the obligation to rebate (legally necessary because city ordinances don't let the city charge customers in the same water/sewerage class different rates) would begin only if and when the city drew a single drop of water from the plant, which could be years into the deal.
The financial information attached to the deal made its acceptance on the city's part at best a risky gamble. The Port's executive director Eric England quoted figures suggesting the entire cost of the deal (two separate bond issues, one of 20 and the other of 30 years' duration) would come in at around $62 million. Further, given the city's recent history of an 11 percent margin on utility sales and the revenues those generated, reaching the point where the city could keep all revenues would require the Port's tenants to boost sales by 92 percent or, at typical city daily production, 11 million gallons of water a day from day one of the deal (and more the longer the deal takes to begin), a dubious scenario. (Alternatively, the new business could be as little as 5.5 MGD, the level at which the city could meet its obligation but not keep any revenues; the deal otherwise has the city keep half of revenues beyond costs, up to the amount of bond payments compiled every quarter, while rebating the other half, but past that coupon it keeps all.)
In other words, unless the city sold to the Port a volume of water so high for so long as to be very unlikely to occur, it would have to hit up utility reserve funds, if not ratepayers, to fulfill its obligation. Yet as has become evident since the Council backed the deal, with the full support of the Chandler Administration, at least one councilor, and quite likely others, didn't see it that way. The clearest articulation of that has come from Republican Chris Smith, the only supporting councilor who has appeared willing to address the deal and has spoken in detail about it and his reasons for support.
In essence, Smith has said he doesn't see the deal as a cumulative payoff to the Port. In his conceptualization of it, once the deal starts the city gathers revenues and issues rebates, but once the Port's bonds are extinguished, the city can leave the deal at any time without full reimbursement. Thus, the perception of risk to ratepayers would diminish substantially.
The problem with this view – call it the noncumulative view – is that it runs very much counter not only to the actual wording of the cooperative endeavor agreement, but also to England's public rhetoric. And, as it turns out, it was a view the Chandler Administration didn't discourage councilors from adopting, possibly for a reason revealed at the last City Council meeting, the one after deal approval.
Understanding this requires review of the operative CEA sections, worded as follows:
SECTION 1.04: Term. The provisions of this Agreement shall remain binding upon the respective parties hereto and their successors in office and shall be in effect for a period of Forty (40) years commencing on the ___ day of ____ , 2023, and ending on the __ day of _____ , 2063.
SECTION 3.02: Receipt of Sewerage. Within the limits and subject to the provisions of Article III, Sewer Service of the Code of Ordinances of the City of Bossier City and this Agreement, Bossier City shall accept for treatment such wastewater as may be generated by the Commission and its users. Bossier City will treat all sewerage collected through the sewerage improvements financed with the proceeds of the Bonds discussed in Section 1.02 above until such time as the Bonds described in Section 1.02 have been fully repaid, except as otherwise provided in this Agreement.
SECTION 3.05: Invoicing…. Bossier City shall remit to the Commission no later than the fifteenth (15th) day after the closing of each quarter an amount equal to the lesser of: (i) an amount equal to fifty percent (50%) of all collections of revenue by or through the delivery of water and sewer services to the Commission and its users, less documented treatment expenses of Bossier City; and (ii) an amount equal to the pro rata amount of the next ensuing scheduled payment of principal and/or interest (if any interest due) on the Bonds described in Section 1.02 above. In no event shall Bossier City be responsible for remitting such payments described hereinabove, until such a time as water is being delivered and consumed by the Commission and its users.
SECTION 3.05.1: Prepayment. In no event shall the total payments by Bossier City to the Commission exceed the principal and interest payments made or to be made by the Commission on the Bonds described in Section 1.02, unless it is agreed by and between Bossier City and the Commission that early payment or payments should be made on the Bonds. Notwithstanding the foregoing, the total payments by Bossier City to the Commission shall not be less than the principal and interest payments made or to be made by the Commission on the Bonds described in Section 1.02.
SECTION 3.06: Termination of Service. Bossier City may discontinue water service under this Agreement to the Commission or any of its users for failure to pay monthly bills rendered by Bossier City in accordance with Bossier City Ordinances.
Here, four words in 3.05.1 make all the difference: "notwithstanding the foregoing" and "prepayment." The second sentence of 3.05.1 makes clear that the cumulative view – that once the deal starts the city owes the Port the entire estimated $62 million one way or the other, by, it would appear, April 2063 – is the one in effect. This also reflects England's rhetoric: through appearances before the Council – its regular meetings and the first of two workshops – he several times proclaimed publicly that the Port would have to be "made whole" in any deal.
Yet by the second workshop England dropped that phrasing, perhaps because the cumulative liability to the city had been pointed out in this space and elsewhere, and instead began saying that questions about this part of the CEA would have to be referred to the Port's counsel – who, conveniently enough, never made a public appearance to take such questions. Still, not once in public did England waver from the assertion that any deal would have to leave the Port "made whole" – unmistakably meaning that whatever the Port spent on bonds the city would have to cover, and that this was the Port's view of the CEA.
In an appearance on a Bossier Watch narrowcast, Smith tried to discount that meaning as being reflected in that section, saying it applied only to the possibility of prepayment. This is mistaken in triplicate, beginning with the three words "notwithstanding the foregoing," which negate the interpretation that the second sentence applies only to 3.05.1; it in fact applies to all of Section 3, which addresses payments.
It's also mistaken in understanding why 3.05.1 is there: it means the Port can't force the city to pay more than the principal plus interest due in a period unless the city volunteers. And this leads to the fourth important word, which entirely invalidates the noncumulative thesis: "prepayment."
"Prepayment" as a concept can't exist unless a total payment amount is already set. If the noncumulative view held – that the city paid as it went without any further liability – a prepayment clause would be irrelevant, for there would be no final amount against which to calculate what constitutes prepayment, the excess over the scheduled payment. The very fact that the CEA has such a clause – much as a mortgage may allow a borrower to prepay in order to save on interest expenses down the road on a set principal amount – presupposes a fixed liability the payer can't avoid, validating the cumulative view.
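The mortgage analogy can be made concrete with a toy amortization. All loan figures here are hypothetical; the point is only that prepayment is meaningful (and saves interest) solely against a fixed outstanding liability:

```python
# Toy amortization of a fixed-rate loan with hypothetical numbers, showing
# that "prepayment" only makes sense against a set total liability: it is
# the excess paid over the scheduled payment on a fixed balance.

def total_interest(principal: float, annual_rate: float,
                   scheduled_payment: float, extra: float = 0.0) -> float:
    """Total interest paid until the fixed principal is retired."""
    balance, interest_paid = principal, 0.0
    while balance > 0:
        interest = balance * annual_rate
        interest_paid += interest
        # Final payment is capped so the balance lands exactly at zero.
        payment = min(scheduled_payment + extra, balance + interest)
        balance = balance + interest - payment
    return interest_paid

# With a fixed $62M liability (the cumulative view), prepaying saves interest:
base = total_interest(62_000_000, 0.04, 4_500_000)
prepaid = total_interest(62_000_000, 0.04, 4_500_000, extra=1_000_000)
assert prepaid < base  # prepayment "saves" only because the principal is fixed
```

Under a pay-as-you-go arrangement with no fixed principal, there would be nothing for `extra` to be measured against, which is the argument's point.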
The question now becomes how, and most importantly why, Smith and perhaps others were led astray from understanding this. Here, England contributed by alleging that the city could "walk away" from the deal after extinguishment.
However, deal termination is discussed in only one place in the CEA, Section 3.06: if the Port's tenants stop paying, the city stops acting as their utility (and even here, it's ambiguous whether the city must still pay off the full $62 million). Nowhere else does the issue of early termination arise.
What England appeared to be referring to as "walking away" is Section 3.02, which states that after extinguishment the city can quit treating sewage – and that may have led to Smith's and perhaps others' misunderstanding that the deal was noncumulative. But that section doesn't address the overall liability in any way; it says only that the obligation to treat ends, not the obligation to pay. The second sentence of 3.05.1 – referenced in 3.02 by "except as otherwise provided in this Agreement" – confirms that.
That mistaken inference was reiterated by City Attorney Charles Jacobs, who (after England began refusing to answer questions about financial obligations) parroted the notion that extinguishment meant the deal was off. Yet nothing in the document allows the city unilaterally to terminate the agreement – which includes paying the Port an amount equivalent to the "principal and interest payments made or to be made by the Commission on the Bonds" – except upon Port tenant non-payment.
But why would Jacobs do that or, more broadly, not be more diligent in clarifying the CEA's actual intent regarding the city's obligations? It would have been easy to ask, for example, that the second sentence of 3.05.1 be removed, or that language be added specifying clearly that the city could terminate the deal after extinguishment without owing anything more – something councilors could have directed be done had any realized what was going on and been of a mind to do it. Yet Jacobs raised no alarms and made no suggestions that would pivot the document from the cumulative to the noncumulative view.
The motives of some who backed the deal were quite clear. The ordinance's sponsor, Republican Councilor David Montgomery, since 2008 has made over $600,000 doing business with the Port, on whose governing Commission sit some of his political allies. The city's utilities contractor Manchac Consulting worked both sides of the street: months prior to deal approval it had been contracted by the Port to draw up plans for the facility and its ancillaries, and it can be expected to play a major role, and rake in millions of dollars in commissions, in the coming buildout.
The Manchac employee seconded as city engineer, Ben Rauschenbach, spoke sympathetically of the deal throughout the process. As it turns out, that may have been not only because approval promised Manchac future business, but also because Rauschenbach and Manchac – whether or not with the initial complicity of Chandler and other administration officials – apparently committed the city to the deal months before Council approval in order to secure a grant from the state.
At the Apr. 18 meeting, in conjunction with an ordinance to appropriate funds, Rauschenbach gave details pertaining to the city having won a long-desired state grant to work on utilities infrastructure in the city's south. It basically means the city can perform the work at one-third cost, ponying up $2.5 million to receive $5 million.
It's not the first time Rauschenbach talked about the project and grant, having brought them up several times over the preceding few months. In the first round the city applied but, he said, fell a few points short under the rubric that scored requests. What he thought made the difference this time, even as he noted the project originated for other reasons, was the Port deal, which he declared "pushed it across the goal line": "the ability to supply water to the Port was important to getting the five million."
The Water Sector Program grant of which he spoke comes from a $300 million pot from the American Rescue Plan, the large debt-fueled package supposedly responding to the Wuhan coronavirus pandemic. At its monthly meetings, the Water Sector Commission comprised of state legislators makes a decision, which it then passes on to the Legislature's Joint Legislative Committee on the Budget for final approval (memberships overlap).
His comments seemed to suggest recent approval, because the Council had given the green light to the water deal only two weeks earlier. But, in fact, the application had been submitted back on Aug. 26, 2022, approved by the Commission on Dec. 12, and days later by the JLCB. Indeed, the project scored sixth-highest of the dozens approved. This was even acknowledged in Ordinance 180 of 2022, passed by the Council on Dec. 20, in that "Manchac Consulting Group has assisted the City in the successful recommendation of the water sector programs $5,000,000 grant award."
How could a deal approved only in April 2023 be part of a grant application approved in December 2022? Only if the city had prematurely inserted the deal into the application, with the Port's blessing but as yet without the Council's. Keep in mind that at its Oct. 17 meeting the Port had approved its Resolution 19 of 2022 authorizing pursuit of the bonds to build the facility – even mentioning that the money would go to "purchasing the necessary right-of-way for constructing and installing a water main, valve, hydrants, and other appurtenances from the southern edge of Bossier City Water System to the Port's campus which will be used to service residential customers along the route, in addition to future industrial tenants of the Port" – six months before city approval ever came.
In other words, the city had to approve the deal in order to make good its promise to the state that the project the state had funded would include it. And if that meant the Chandler Administration joining with Manchac and the Port to downplay the city's hard $62 million commitment as a strategy to convince skeptical councilors, so be it.
So, who was in on it? Obviously Rauschenbach and Manchac, whose behind-the-scenes maneuvering should become a serious talking point when the firm's contract expires next year. That means at some point Chandler and the rest of his administration had to have known and given assent. And just as obviously Montgomery, who receives invitations to Port Commission meetings and who, when the measure first came up – perhaps unaware of what he was letting slip – delivered to the Council an impassioned plea not to delay a matter that "we've been working on for quite some years."
Clearly not in on it were Montgomery's fellow graybeards, no-party Jeff Darby and Democrat Bubba Williams, who voted against it. Smith's dogged pursuit of workshops to explore the topic suggests he wasn't either, nor was another councilor who, like Smith, has a short but clear history of fiscal probity: Republican Brian Hammons, who – seemingly unaware that the grant had gone through despite his voting on Ordinance 180 – actually asked Rauschenbach at the last meeting whether the Port deal had contributed to the grant win.
In his Bossier Watch interview, Smith voiced optimism that the decision had been made in an open and deliberative fashion, contrary to the city's reputation for elected officials making backroom deals prior to the election that brought Smith onto the Council. On this deal, that optimism seems entirely misplaced. That Smith and perhaps others apparently unwittingly supported a fiscally suspect done deal isn't so much reflective of the rock band The Who's lyric "Meet the new boss / Same as the old boss" as of an inability to achieve what the preceding line counsels: "We don't get fooled again."
[Past annual reviews: 2020, 2019 & '18, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 2005, and 2004.]

Off the blog:

The biggest development for me was joining utilitarianism.net as lead editor. I then completed their chapters on population ethics and theories of well-being, and wrote a new chapter outlining some basic arguments for utilitarianism. More to come soon!

For more traditional academic publications:
* Parfit's Ethics appeared in print with Cambridge University Press. (Summary here.)
* 'Pandemic Ethics and Status Quo Risk' (summarized here) was accepted by Public Health Ethics.
* 'Negative Utility Monsters' was published in Utilitas.

I'm also pretty excited about various works-in-progress that are currently under review, especially my new paradox of deontology...

Blog posts:

Normative Ethics
* The Cost of Constraints -- sets out the core of my "new paradox of deontology". Further developed in Preferring to Act Wrongly, Why Constraints are Agent Neutral, and Discounting Illicit Benefits.
* The Most Important Thing in the World -- is plausibly the trajectory of the long-term future.
* The Paralysis of Deontology
* Three Dogmas of Utilitarianism -- (i) Confusing value with what's valuable; (ii) Neglecting fittingness; and (iii) Treating all interests as innocent.
* Agency as a Force for Good -- and the appeal of consequentialism.
* Learning from Lucifer -- If Satan would be a consequentialist, should the good guys be likewise (just, you know, with better goals)? Or is there a deeper asymmetry between right and wrong?
* Tendentious Terminology in Ethics -- against common uses of "mere means" and "separateness of persons" talk.
* Is Effective Altruism Inherently Utilitarian? I suggest not. There's a weaker normative principle in the vicinity, potentially shareable by any other sensible view, which should be difficult to deny. In a later post, I call this Beneficentrism: the view that promoting the general welfare is deeply important.
* Consequentialism's Central Concept may be importance rather than rightness.
* What's at Stake in the Objective/Subjective Wrongness Debate? Seems terminological. Appeal to "what a morally conscientious agent would be concerned about" doesn't help, because (my Moral Stunting Objection shows) a morally conscientious agent wouldn't be concerned about right or wrong per se.

Welfare and Population Ethics
* Is Conscientious Sadism still bad?
* Is Objective List Theory "Spooky"?
* Parsimony in Theories of Welfare -- is it really a relevant consideration at all?
* The Limits of Defective Character Solutions -- and why they don't help with the non-identity problem.
* Stable Actualism and Asymmetries of Regret -- actualist partiality is defensible once you subtract the possibility of elusive permissions.

Pandemic Ethics
* Lessons from the Pandemic: blocking innovation is bad.
* The Risk of Excessive Conservatism. See also Pandemic Paralysis and JCVI endorses Status Quo Bias.
* Epistemic Calibration Bias and Blame Aversion -- we're often too scared of being wrong, and not sufficiently attuned to the risks of failing to be right (e.g. by instead remaining non-committal) when it matters.
* There's No Such Thing as "Following the Science" -- normative principles are needed to bridge the is/ought gap. Better slogan: Follow Decision Theory!
* Appeasing Anti-Vaxxers -- and why it's wrong.
* The Ethics of Off-Label Vaccinations for Kids
* Imagining an Alternative Pandemic Response -- with vaccine challenge trials, targeted immunity via variolation, and immunity passports to spare many (e.g. healthy young people) from lockdowns.
* The Indefensibility of Post-Vaccine Lockdowns

Applied Ethics
* Companies, Cities, and Carbon -- blaming large corporations for proportionately large carbon emissions makes no more sense than blaming large cities.
* Five Fallacies of Collective Harm -- critiquing the five main reasons why people falsely believe that collective difference-making doesn't require individual difference-making.
* The Absurdity of "Undue Inducement" -- argues that there's no in-principle basis for objecting to monetary incentives to (e.g.) research participants. If concerned that an offer might be exploitative, the solution is to pay more, not less.
* Against Anti-Beneficent Paternalism -- as a general rule, we shouldn't prevent people from doing good (even if we aren't entirely certain of the quality of their understanding or consent).
* Puzzling Conditional Obligations -- if positively good to comply with, then you ought to have unconditional reason to get yourself into position to meet the putative obligation.

Metaethics
* The Parochialism of Metaethical Naturalism -- the basic moral facts should not differ depending on our location in modal space (i.e. which world is actual). But synthetic metaethical naturalism, with its 2-D semantic asymmetry, violates this principle.
* Ruling out Helium-Maximizing -- without giving up robust realism.
* Why Belief is No Game -- pragmatists (like Maguire & Woods) are wrong about what people are rationally criticizable for, and hence wrong about what reasons there are.

Other
* Philosophical Pluralism and Modest Dogmatism -- on why we should welcome philosophical dissensus.
* Querying vs Dismissive Objections -- are you aiming to create a dialectical opening (to which you'd like to hear a response), or simply shutting things down? When is the latter appropriate?
* Commonsense Epiphenomenalism -- could the view be less weird than everyone tends to assume?
* Helen interviewed on Idealism -- including why Idealism might warrant up to 30% credence.
* New Blogs of Note -- three recommendations.
* Zach Barnett's guest post on 'Meeting Taurek's Challenge'.
* Philosophy Spotlight posts from Eden Lin, Jess Flanigan, and Hrishikesh Joshi.

I'm still waiting for other blogs to join in!

Happy New Year!