In Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do about It, Erica Thompson explores how mathematical models are used in contexts that affect our everyday lives – from finance to climate change to health policy – and what can happen when they are malformed or misinterpreted. Rather than abandoning these models, …
Recently, CMFA published an article and a working paper detailing the Federal Reserve's departure from rules-based governance following the financial crisis of the late 2000s. According to academics and Fed officials, the era of rules-based governance facilitated the Great Moderation – a stable economic period characterized by less volatile macro indicators such as inflation, the output gap, and unemployment. In academic parlance, macroeconomists refer to this situation as determinacy. Despite conflicting evidence, the prevailing view is that the Fed facilitated the Great Moderation by establishing a determinate economic environment through rules-based governance focused on keeping inflation low. Previous CMFA papers raised the question of whether the Fed's departure from this "successful" era of monetary policy may instead have led to indeterminacy. This article provides evidence that indeterminacy did occur during this period.

Determinacy is a feature of an economic system whereby outcomes such as inflation and output can be precisely determined from a given set of initial conditions and policy rules. Under determinacy, the economy (as represented by a mathematical model) has a unique equilibrium outcome. In simple terms, the economy has only one possible resting state, and it is stable, with no large spirals or variability. Conversely, indeterminacy occurs when multiple equilibria could result from the same initial conditions and policy rules. This creates uncertainty in predicting the future state of the economy, as different equilibria may lead to significantly divergent economic outcomes. Simply put, the economy could end up in any of several possible states, some of which may be highly volatile, depending on how individuals form their expectations and make decisions.

Academics generally believe that a strong Fed response to inflation (a more than one-to-one increase in the target federal funds rate in response to inflation changes) can ensure a determinate system. This is known as the Taylor Principle. A greater than one-to-one response to inflation is deeply entrenched in the economic literature; most empirical macroeconomic studies simply assume determinacy and fix the Fed's response to inflation at a number higher than one, or use estimation techniques that exclude the possibility of indeterminacy entirely. This determinacy bias has serious implications for policy analysis because economic models (such as those used by the Fed) exhibit significantly different dynamics in an indeterminate system. Additionally, even approaches that account for indeterminacy, including seminal papers, fail to take consumers' inflation expectations seriously. Expectations matter greatly for equilibrium selection, so they should be included in the datasets used by empirical methods.

I utilize a simple macro model – connecting the output gap, inflation, and the federal funds rate – to test the determinacy of the U.S. economy during the period when the Fed abandoned rules-based governance (2009 through 2022). I use actual U.S. time series data for the three variables listed above as well as a measure of consumers' inflation expectations – one-year-ahead inflation expectations from the Michigan Survey of Consumers.[1] I fit the macro model to the data using a Bayesian estimation procedure under both determinacy and indeterminacy to see which fits the data better.
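As a rough illustration of the Taylor Principle (this is the standard textbook condition for the basic three-equation New Keynesian model, not the estimation procedure used in the article), determinacy under an interest-rate rule requires κ(φπ − 1) + (1 − β)φx > 0, which for typical calibrations essentially requires an inflation response φπ above one. A minimal Python sketch, with all parameter values assumed for illustration:

```python
# Hedged sketch: the generalized Taylor Principle for the textbook
# three-equation New Keynesian model. This is the standard condition
# from the literature, not the specific model estimated in the article.

def is_determinate(phi_pi, phi_x=0.0, kappa=0.3, beta=0.99):
    """True if the interest-rate rule delivers a determinate equilibrium.

    Condition: kappa * (phi_pi - 1) + (1 - beta) * phi_x > 0.
    kappa (Phillips-curve slope) and beta (discount factor) are
    illustrative calibration values, not estimates from the paper.
    """
    return kappa * (phi_pi - 1.0) + (1.0 - beta) * phi_x > 0.0

# The two inflation-response estimates discussed below:
print(is_determinate(1.13))  # True:  unique, stable equilibrium
print(is_determinate(0.57))  # False: multiple equilibria possible
```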
I find that the model under indeterminacy significantly outperforms its determinate counterpart in fitting the data set. That is, the model under indeterminacy has a much higher "goodness-of-fit" than the model under determinacy. Goodness-of-fit values from Bayesian analysis are unlike the usual R² reported from regressions. Bayesian model comparison is conducted through marginal likelihoods, which are then converted to an odds ratio (similar to betting odds) called the Bayes factor. The estimated odds of determinacy to indeterminacy are 1 to 1.5 × 10¹⁵ – making determinacy an extremely unlikely event. To understand exactly how unlikely, compare these odds to another extremely unlikely event – being struck by lightning. The odds of being struck by lightning are much higher in comparison: 1 to 1.5 × 10⁴. In other words, the odds of being struck by lightning are far better than the odds that the U.S. economy was determinate from 2009 through 2022. Consequently, the probability that the U.S. economy was indeterminate following the financial crisis is nearly 100%. The (indeterminate) model with a 0.57 estimated inflation response coefficient fits the data better than the (determinate) model with a 1.13 coefficient estimate. The results confirm that the Fed did not target inflation in line with the Taylor Principle.

These findings raise an important question: how responsible is the Fed for keeping the economy determinate, with a unique and stable outcome? If it is responsible, as several academics and Fed officials have claimed, then it must answer why it did not conduct policy in a way that ensured the economy's determinacy. If it is not responsible for keeping the economy determinate (as several recent studies are now finding), then the Fed's reputation for stabilizing the economy is undeserved, and the public should question why an unelected governmental agency exerts such a high degree of influence over the political economy discourse if it is ineffective at maintaining prices or keeping the economy stable. A forthcoming paper will further examine the history of the Fed's effectiveness in achieving determinacy.

The author thanks Jerome Famularo for providing research assistance during the preparation of this essay. For more information on the model, empirical methodology, and posterior distribution please click here.
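As a technical aside on the Bayes factor arithmetic above: once each model's (log) marginal likelihood is in hand, the conversion to odds and posterior probabilities is straightforward. A minimal sketch with hypothetical numbers chosen only to reproduce the order of magnitude quoted above:

```python
import math

# Hypothetical log marginal likelihoods for the two specifications;
# the real values come from the paper's Bayesian estimation.
log_ml_indet = -250.0                       # assumed for illustration
log_ml_det = log_ml_indet - math.log(1.5e15)

# Bayes factor of indeterminacy over determinacy, and the implied
# posterior model probability under equal prior model probabilities.
bayes_factor = math.exp(log_ml_indet - log_ml_det)
prob_indet = bayes_factor / (1.0 + bayes_factor)

print(f"odds of determinacy to indeterminacy: 1 to {bayes_factor:.2g}")
print(f"posterior probability of indeterminacy: {prob_indet}")  # ~1.0
```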
[1] Respondents are asked the question: 'By what percent do you expect prices to go up, on the average, during the next 12 months?' The average of all responses is used as the measure for inflation expectations.
I want to say a few things about Chicago Booth's recent survey questions posed to a set of economists; see here. The survey asked how strongly one believes in the following two statements:
Question A: Countries that borrow in their own currency should not worry about government deficits because they can always create money to finance their debt.
Question B: Countries that borrow in their own currency can finance as much real government spending as they want by creating money.
Not surprisingly, most economists surveyed disagreed with both statements. Fine. But, not fine, actually. Because the survey prefaced the two questions with
Modern Monetary Theory
as if the two statements constitute some core belief of MMT.
Was any MMT proponent included in the survey? Don't be ridiculous, of course not (there were a couple from MIT though--perhaps they thought this was close enough). How would a typical MMT proponent have answered these two questions? I am sure that most would have answered in the exact same way as other economists. If this is the case, then why does Chicago Booth preface the survey with MMT? There are many possibilities, none of which are attractive for Chicago Booth.
Let's consider Question B first. Or, better yet, let's not. This question is so ridiculous it hardly merits a response. Nobody believes that governments face no resource constraints.
O.K., so let's consider Question A, where some legitimate confusion may be present. Before I start though, I want to make clear that I don't purport to know the entire MMT academic literature very well. But I have done some reading and I have corresponded with some very smart, very thoughtful MMT proponents. I don't agree with many of their views, but I think I see how some of what they say is both valid and contrary to conventional thinking. At the very least, it seems worth exploring. What I am about to say is my own interpretation -- I am not speaking on behalf of MMTers.
Alright, so on to the question of whether deficits "matter." The more precise MMT statement reads more like this: "A country that issues debt denominated in its own currency, operating in a flexible exchange rate regime, need not worry about defaulting in technical terms on its outstanding debt." That is, the U.S. government can always print money to pay for its maturing debt. That's because U.S. Treasury securities represent claims for U.S. dollars, and the government can (if it wants) print all the dollars it needs.
Nobody disagrees with this statement. MMTers like to make it explicit because, first, much of the general public does not understand this basic fact, and second, this misunderstanding is sometimes (perhaps often) used to promote particular ideological views on the "proper" role of government.
Mainstream economists, like myself, like to point out what matters is not technical default but economic "default." An unexpected inflation whittles away the purchasing power of those caught holding old money as new money is printed to pay for whatever. I think it's clear that MMTers understand this too. This can be seen in their constant reference to an "inflation constraint" as defining the economic limits to government spending. I tried to formalize this idea in my previous blog post; see here: Sustainable Deficits.
But it's more complicated than this -- and in interesting ways, I think. Consider a large corporation, like General Motors. GM issues both debt and equity. The debt GM issues is denominated in dollars, so it can go bankrupt. But GM also issues a form of "money" -- that is, it can use newly created equity to pay its employees or to make acquisitions.
Issuing more equity does not expose GM to greater default risk. Indeed, it may very well reduce it if the equity is used to buy back GM debt. If GM is thinking about financing an acquisition through new equity issuance, the discussion is not going to be about whether GM can afford to print the new shares. Of course it can print all the shares it wants. The question is whether the acquisition is accretive or dilutive. If the former, then issuing new money will make the value of GM money go up. If the latter, then the new share issue will be inflationary (the purchasing power of GM shares will go down). In other words, "deficits don't matter" in the sense that the outstanding GM liabilities do not matter per se -- what matters is something more fundamental. Equity "over-issue" may not be desirable, but the phenomenon is symptomatic, not causal.
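A stylized numerical example (my own illustration, with made-up numbers, not from the original post) of the accretive/dilutive distinction:

```python
# Hypothetical figures, for illustration only.
firm_value = 100.0   # value of GM's existing assets ($ billions)
shares_out = 10.0    # shares outstanding (billions) -> $10 per share

# GM prints 2 billion new shares to buy a target company.
new_shares = 2.0

def price_after(target_value):
    """Post-acquisition share price if the target is worth target_value."""
    return (firm_value + target_value) / (shares_out + new_shares)

print(price_after(25.0))  # 10.42 > 10: accretive, GM "money" gains value
print(price_after(15.0))  # 9.58 < 10: dilutive, GM "money" is inflated

# The point: printing shares is never the binding constraint; what the
# new shares buy determines whether the issuance is inflationary.
```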
The U.S. government and Federal Reserve in effect issue equity. The government need not default on its debt. This is because U.S. Treasury debt is convertible into money (equity), and the Fed can make that conversion if it so chooses. The question for the government, as with GM, is whether any new spending program is accretive or dilutive. If the economy is operating at less than full capacity, then this is like GM being presented with a positive NPV investment opportunity. The government can issue new money that, if used wisely, need not be inflationary.
There are limits to how far this can go, of course. And there was the all important qualifier "if used wisely." But this is exactly where the debate should be: how should our institutions be designed to promote the "best" allocation of resources?
I often hear that MMTers don't have a good theory of inflation. As if there is a good theory of inflation out there already. But I see in MMT a theory of inflation that overlaps (not entirely) with my own views expressed, say, here: The Failure to Inflate Japan. The MMT view seems to take a broader view over the set of instruments that monetary policy may employ to control inflation. We can have a debate about the merits of their views, but there's no reason to dismiss them outright or to pretend they don't have a theory of inflation.
Another complaint I hear: the MMTers don't want to produce a model. You know, it's true, there are not many mathematical models out there. So what?
First, the lingua franca of policy making is English -- math is a trade language. Economic ideas can be understood when expressed in the vernacular. It has also been helpful to me and others to attempt to "formalize" our thoughts in that trade language. But it seems to me that some of my colleagues can only understand an argument if it's posed in their trade language. This is a rather sad state of affairs, if true.
Second, MMT, like any school of thought, is evolving over time and comes from a different tradition. Instead of demanding a model (now!), why not reach out and try to help formalize some of their ideas? You never know -- you may actually learn something in the process.
My first post described a few anecdotes about what a warm person Bob Lucas was, and such a great colleague. Here I describe a little bit of his intellectual influence, in a form that is, I hope, accessible to average people.

The "rational expectations" revolution that brought down Keynesianism in the 1970s was really much larger than that. It was really the "general equilibrium" revolution. Macroeconomics until 1970 was sharply different from regular microeconomics. Economics is all about "models," complete toy economies that we construct via equations and in computer programs. You can't keep track of everything in even the most beautiful prose. Microeconomic models, and "general equilibrium" as that term was used at the time, wrote down how people behave — how they decide what to buy, how hard to work, whether to save, etc. Then they similarly described how companies behave and how government behaves. Set this in motion and see where it all settles down; what prices and quantities result.

But for macroeconomic issues, this approach was sterile. I took a lot of general equilibrium classes as a PhD student — Berkeley, home of Gerard Debreu, was strong in the field. But the field was devoted to proving the existence of equilibrium with more and more general assumptions, and never got around to calculating that equilibrium and what it might say about recessions and government policies. Macroeconomics, exemplified by the ISLM tradition, inhabited a different planet. One wrote down equations for quantities rather than people, for example that "consumption" depended on "income," and investment on interest rates. Most importantly, macroeconomics treated each year as a completely separate economy. Today's consumption depended on today's income, having nothing to do with whether people expected the future to look better or worse. Economists recognized this weakness, and a vast and now thankfully forgotten literature tried fruitlessly to find "micro foundations" for Keynesian economics. But building foundations under an existing castle doesn't work. The foundations want a different castle.

Bob's "islands" paper is famous, yes, for a complete model of how unexpected money might move output in the short run and not just raise inflation. But you can do that with half a page of simple math, and Bob's paper is hard to read. Its deeper contribution, and the reason for that difficulty, is that Bob wrote out a complete "general equilibrium" model. People, companies, and government each follow described rules of behavior. Those rules are derived as the optimal thing for people and companies to do given their environment. And they are forward-looking. People think about how to make their whole lives as pleasant as possible, companies maximize the present value of profits. Prices adjust so supply = demand. Bob said, by example, that we should do macroeconomics by writing down general equilibrium models.

General equilibrium had also been held back by the presumption that it only studies perfect economies. Macroeconomics is really about studying how things go wrong, how "frictions" in the economy, such as the "sticky" wages underlying Keynesian thinking, can produce undesirable and unnecessary recessions. But here too, Bob requires us to write down the frictions explicitly. In his model, people don't see the aggregate price level right away, and do the best they can with local information. That is the real influence of the paper and Bob's real influence in the profession.
(Current macroeconomic modeling reflects the fact that the Fed sets interest rates, and does not control the money supply.)

You can see this influence in Tom Sargent's textbooks. The first textbook has an extensive treatment of Keynesian economics. It's about the most comprehensible treatment there is — but it is no insult to Tom to say that in that book you can see how Keynesian economics really doesn't hang together. Tom describes how, the minute he learned from Bob how to do general equilibrium, everything changed instantly. Rational expectations was, like any other advance, a group effort. But what made Bob the leader was that he showed the rest how to do general equilibrium.

This is the heart of my characterization that Bob is the most important macroeconomist of the 20th century. Yes, Keynes and Friedman had more policy impact, and Friedman's advocacy of free markets in microeconomic affairs is the most consequential piece of 20th century economics. But within macroeconomics, there is before Lucas and after Lucas. Everyone today does economics the Lucas way. Even the most new-Keynesian article follows the Lucas rules of how to do economics.

Once you see models founded on complete descriptions of people, businesses, government, and frictions, you can see the gaping holes in standard ISLM models. This is some of his stinging critique, such as "After Keynesian Macroeconomics." Sure, if people's income goes up they are likely to consume more, as the Keynesians posited. But interest rates, wages, and expectations of the future also affect consumption, which Keynesians leave out. "Cross-equation restrictions" and "budget constraints" are missing. Now, the substantive prediction that monetary policy can only move the real economy via unexpected money supply growth did not bear out, and both subsequent real business cycle and new-Keynesian models brought persistent responses. But the how-we-do-macroeconomics part is the enduring contribution.

The paper also had enduring practical lessons. Lucas, together with Friedman and Phelps, brought down the Phillips curve. This curve, relating inflation to unemployment, had been (and sadly, remains) at the center of macroeconomics. It is a statistical correlation, but like many correlations, people got enthused with it and started reading it as a stable relationship, and indeed a causal one. Raise inflation and you can have less unemployment. Raise unemployment in order to lower inflation. The Fed still thinks about it in that causal way. But Lucas, Friedman, and Phelps brought a basic theory to it, and thereby realized it is just a correlation, which will vanish if you push on it. Rich guys wear Rolexes. That doesn't mean that giving everyone a Rolex will have a huge "multiplier" effect and make us all rich.

This is the essence of the "Lucas critique," which is a second big contribution that lay readers can easily comprehend. If you push on correlations they will vanish. Macroeconomics was dedicated to the idea that policy makers can fool people. Monetary policy might try to boost output in a recession with a surprise bit of money growth. That will work once or twice. But like the boy who cried wolf, people will catch on, come to expect higher money growth in recessions, and the trick won't work anymore. Bob showed here that all the "behavioral" relations of Keynesian models will fall apart if you exploit them for policy, or push on them, though they may well hold as robust correlations in the data. The "consumption function" is the next great example.
Keynesians noticed that when income rises people consume more, so they write a consumption function relating consumption to income. But, following Friedman's great work on consumption, we know that correlation isn't always true in the data. The relation between consumption and income is different across countries (about one for one) than it is over time (less than one for one). And we understand that with Friedman's theory: people, trying to do their best over their whole lives, don't follow mechanical rules. If they know income will fall in the future, they consume a lot less today, no matter what today's current income. Lucas showed that people who behave this sensible way will follow a Keynesian consumption function, given the properties of income over the business cycle. You will see a Keynesian consumption function. Econometric estimates and tests will verify a Keynesian consumption function. Yet if you use the model to change policies, the consumption function will evaporate.

This paper is devastating. Large-scale Keynesian models had already been constructed, and used for forecasting and policy simulation. It's natural. The model says, given a set of policies (money supply, interest rates, taxes, spending) and other shocks, here is where the economy goes. Well, then, try different policies and find ones that lead to better outcomes. Bob shows the models are totally useless for that effort. If the policy changes, the model will change. Bob also showed that this was happening in real time. Supposedly stable parameters drifted around. (This one is also very simple mathematically. You can see the point instantly. Bob always uses the minimum math necessary. If other papers are harder, that's by necessity, not bravado.)

This devastation is sad in a way. Economics moved to analyzing policies in much simpler, more theoretically grounded, but less realistic models. Washington policy analysis sort of gave up. The big models lumber on, the Fed's FRB/US for example, but nobody takes the policy predictions that seriously. And they don't even forecast very well. For example, in the 2008 stimulus, the CEA was reduced to assuming a back-of-the-envelope 1.5 multiplier, this 40 years after the first large-scale policy models were constructed. Bob always praised the effort of the last generation of Keynesians to write explicit quantitative models, to fit them to data, and to make numerical predictions of various policies. He hoped to improve that effort. It didn't work out that way, but not by intention.

This affair explains a lot of why economists flocked to the general equilibrium camp. Behavioral relationships, like what fraction of an extra dollar of income you consume, are not stable over time or as policy changes. But one hopes that preferences — how impatient you are, how much more you are willing to save to get a better rate of return — and technology — how much a firm can produce with given capital and labor — do not change when policy changes. So, write models for policy evaluation at the level of preferences and technology, with people and companies at the base, not from behavioral relationships that are just correlations.
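Returning to the consumption function: here is a toy simulation (my own construction under simple permanent-income assumptions, not Lucas's original setup) of how an apparently stable "Keynesian" consumption function evaporates when the income process changes:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimated_mpc(rho, n=5000, r=0.05):
    """Simulate AR(1) income y' = rho*y + e for permanent-income
    consumers. With this income process, optimal consumption is
    c = r / (1 + r - rho) * y (up to constants), so regressing c on y
    recovers a "marginal propensity to consume" out of income."""
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = rho * y[t - 1] + rng.normal()
    c = r / (1 + r - rho) * y
    return np.polyfit(y, c, 1)[0]  # slope of the fitted consumption function

# With persistent income, the econometrician finds a stable-looking MPC...
print(estimated_mpc(rho=0.95))  # ~0.50
# ...but if policy makes income transitory, the "parameter" collapses.
print(estimated_mpc(rho=0.20))  # ~0.06
```

The regression "works" in both cases; the estimated coefficient simply is not a structural parameter, which is exactly the Lucas critique.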
Another deep change: once you start thinking about macroeconomics as intertemporal economics — the economics that results from people who make decisions about how to consume over time, businesses that make decisions about how to produce this year and next — and once you see that their expectations of what will happen next year, and what policies will be in place next year, are crucial, you have to think of policy in terms of rules and regimes, not isolated decisions. The Fed often asks economists for advice: "should we raise the funds rate?" Post-Lucas macroeconomists answer that this isn't a well-posed question. It's like asking "should we cry wolf?" The right question is: should we follow a rule, a regime, should we create an institution, that regularly and reliably raises interest rates in a situation like the current one? Decisions do not live in isolation. They create expectations and reputations. Needless to say, this fundamental reality has not soaked into policy institutions. And that answer (which I have tried at Fed advisory meetings) leads to glazed eyes. John Taylor's rule has been making progress for 30 years trying to bridge that conceptual gap, with some success.

This was, and remains, extraordinarily contentious. Fifty years later, Alan Blinder's book, supposedly about policy, is really one long snark about how terrible Lucas and his followers are, and how we should go back to the Keynesian models of the 1960s.

Some of that contention comes back to basic philosophy. The program applies standard microeconomics: derive people's behavior as the best thing they can do given their circumstances. If people pick the best combination of apples and bananas when they shop, then also describe consumption today vs. tomorrow as the best they can do given interest rates. But a lot of economics doesn't like this "rational actor" assumption. It's not written in stone, but it has been extraordinarily successful. And it imposes a lot of discipline. There are a thousand arbitrary ways to be irrational. Somehow, though, a large set of economists are happy to write down that people pick fruit baskets optimally, but don't apply the same rationality to decisions over time, or to how they think about the future.

But "rational expectations" is really just a humility condition. It says: don't write models in which the predictions of the model are different from the expectations in the model. If you do, and if your model is right, people will read the model and catch on, and the model won't work anymore. Don't assume that you, the economist (or Fed chair), are so much less behavioral than the people in your model. Don't base policy on an attempt to fool the little peasants over and over again. It does not say that people are big super-rational calculating machines. It just says that they eventually catch on.

Some of the contentiousness is also understandable by career concerns. Many people had said "we should do macro seriously, like general equilibrium." But it isn't easy to do. Bob had to teach himself, and get the rest of us to learn, a range of new mathematical and modeling tools to be able to write down interesting general equilibrium models. A 1970 Keynesian could live just knowing how to solve simple systems of linear equations and run regressions. To follow Bob and the rational expectations crowd, you had to learn linear time-series statistics, dynamic programming, and general equilibrium math. Bob once described how tough the year was that it took him to learn functional analysis and dynamic programming.
The models themselves consisted of a mathematically hard set of constructions. The older generation either needed to completely retool, fade away, or fight the revolution.

Some good summary words: Bob's economics uses "rational expectations," or at least forward-looking and model-consistent expectations. Economics becomes "intertemporal," not "static" (one year at a time). Economics is "stochastic" as well as "dynamic": we can treat uncertainty over time, not just economies in which everyone knows the future perfectly. It applies "general equilibrium" to macroeconomics. And I've just gotten to the beginning of the 1970s.

When I got to Chicago in the 1980s, there was a feeling of "well, you just missed the party." But it wasn't true. The 1980s as well were a golden age. The early rational expectations work was done, and the following real business cycles were the rage in macro. But Bob's dynamic programming, general equilibrium tool kit was on a rampage all over dynamic economics. The money workshop was one creative use of dynamic programming and intertemporal tools after another, ranging from taxes to Thai villages (Townsend). I'll mention two.

Bob's consumption model is at the foundation of modern asset pricing. Bob parachuted in, made the seminal contribution, and then left finance for other pursuits. The issue at the time was how to generalize the capital asset pricing model. Economists understood that some stocks pay higher returns than others, and that they must do so to compensate for risk. They understood that the risk is, in general terms, that the stock falls in some sense of bad times. But how to measure "bad times?" The CAPM uses the market; other models use somewhat nebulous other portfolios. Bob showed us, at least in the purest theory, that stocks must pay higher average returns if they fall when consumption falls. (Breeden also constructed a consumption model in parallel, but without this "endowment economy" aspect of Bob's.) This is the purest, most general theory, and all the others are (useful) specializations. My asset pricing book follows.

The genius here was to turn it all around. Finance had sensibly built up from portfolio theory, like supply and demand: given returns, what stocks do you buy, and how much do you save vs. consume? Then, markets have to clear: find the stock prices, and thus returns, given which people will buy exactly the amount that's for sale and consume what is produced. That's hard. (Technically, finding the vector of prices that clears markets is hard. Yes, N equations in N unknowns, but they're nonlinear and N is big.) Bob instead imagined that consumption is fixed at each moment in time, like a desert island on which so many coconuts fall each day and you can't store them or plant them. Then, you can just read prices from people's preferences. This gives the same answer as if the consumption you assume is fixed had derived from a complex production economy. You don't have to solve for prices that equate supply and demand. Brilliantly, though prices cause consumption for individual people, consumption causes prices in aggregate (see the equation sketch below). This is part of Bob's contribution to the hard business of actually computing quantitative models in the stochastic dynamic general equilibrium tradition.

Bob, with Nancy Stokey, also took the new tools to the theory of taxation. (Bob Barro was also a founder of this effort in the late 1970s.)
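To restate the Lucas-tree pricing logic above in equations (a standard textbook statement; the power-utility form is my assumption, chosen for concreteness):

```latex
% Lucas (1978) endowment-economy asset pricing, with power utility
% u'(c) = c^{-\gamma} assumed for concreteness. The consumer's
% first-order condition prices a claim to next period's payoff:
p_t \;=\; E_t\!\left[\,\beta \left(\frac{c_{t+1}}{c_t}\right)^{-\gamma}
          \left(p_{t+1} + d_{t+1}\right)\right].
% In equilibrium the coconuts must be eaten, c_t = d_t. Substituting
% the endowment for consumption turns the hard fixed-point problem
% "find prices that clear markets" into a direct formula: prices are
% read off preferences and the dividend process.
```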
You can see the opportunity: we had just learned how to handle dynamic (over time, expectations of tomorrow matter to what you do today), stochastic (there is uncertainty about what will happen tomorrow) economics (people make explicit optimizing decisions) for macro. How about taking that same approach to taxes? The field of dynamic public finance was born. Bob and Nancy, like Barro, show that it's a good idea for governments to borrow and then repay, so as to spread the pain of taxes evenly over time. But not always. When a big crisis comes, it is useful to execute a "state-contingent default." The big tension of Lucas-Stokey (and now, all) dynamic public finance: you don't want any capital taxes, for the incentive effects. If you tax capital, people invest less, and you just get less capital. But once people have invested, a capital tax grabs revenue for the government with no economic distortion. Well, that is, if you can persuade them you'll never do it again. (Do you see expectations, reputations, rules, regimes, wolves in how we think of policy?) Lucas and Stokey say: do it only very rarely, to balance the disincentive of a bad reputation against the need to raise revenue in once-a-century calamities.

Bob went on, of course, to be one of the founders of modern growth theory. I always felt he deserved a second Nobel for this work. He's absolutely right: once you look at growth, it's hard to think about anything else. The average Indian lives on $2,000 per year. The average American, $60,000. That was $15,000 in 1950. Nothing else comes close. I only work on money and inflation because that's where I think I have answers. For us mortals, good research proceeds where you think you have an answer, not necessarily from working on Big Questions.

Bob brilliantly put together basic facts and theory to arrive at the current breakthrough: once you get out of the way, growth does not come from more capital, or even more efficiency. It comes from more and better ideas. I remember being awed by his first work for cutting through the morass and assembling the facts that only look salient in retrospect. A key one: interest rates in poor countries are not much higher than they are in rich countries. Poor countries have lots of workers, but little capital. Why isn't the return on scarce capital enormous, with interest rates in the hundreds of percent, to attract more capital to poor countries? Well, you sort of know the answer: capital is not productive in those countries. Productivity is low, meaning those countries don't make use of better ideas on how to organize production.

Ideas, too, are produced by economic activity, but, as Paul Romer crystallized, they are fundamentally different from other goods. If I produce an idea, you can use it without hurting my use of it. Yes, you might drive down the monopoly profits I gain from my intellectual property. But if you use my pizza recipe, that's not like using my car. I can still make pizza, whereas if you use my car I can't go anywhere. Thus, the usual free-market presumption that we will produce enough ideas is false. (Don't jump too quickly to advocate government subsidies for ideas. You have to find the right ideas, and governments aren't necessarily good at subsidizing that search.) And the presumption that intellectual property should be preserved forever is also false. Once produced, it is socially optimal for everyone to use it. I won't go on.
It's enough to say that Bob was as central to the creation of idea-based growth theory, which dominates today, as he was to general equilibrium macro, which also dominates today.

Bob is an underrated empiricist. Bob's work on the size distribution of firms (great tweet summary by Luis Garicano) similarly starts from basic facts of the size distribution of firms and the lack of relationship between size and growth rates. It's interesting how we can go on for years with detailed econometric estimates of models that don't get basic facts right. I loved Bob's paper on money demand for the Carnegie-Rochester conference series. An immense literature had tried to estimate money demand functions with dynamics, and was pretty confusing. It made a basic mistake: by looking at first differences rather than levels, it isolated the noise and drowned out the signal. Bob made a few plots, basically rediscovered cointegration all on his own, and made sense of it all. And don't forget the classic international comparison of inflation-output relations. Countries with volatile inflation have less Phillips curve tradeoff, just as his islands model featuring confusion between relative prices and the price level predicts.

One last note to young scholars. There is a tendency today to value people by the number of papers they produce, and how quickly they rise through the ranks. Read Bob's CV. He wrote about one paper a year, starting quite late in life. But, as Aesop said, they were lions. In his Nobel prize speech, Bob also passed on that he and his Nobel-winning generation at Chicago always felt they were in some backwater, while the high-prestige stuff was going on at Harvard and MIT. You never know when it might be a golden age. And the AER rejected his islands paper (as well as Akerlof's lemons). If you know it's good, revise and try again.

I will miss his brilliant papers as much as his generous personality.

Update: See Ivan Werning's excellent "Lucas Miracles" for an appreciation by a real theorist.
A few weeks ago, American Compass released Rebuilding American Capitalism, A Handbook for Conservative Policymakers. This Forbes column (American Compass Points To Myths Not Facts) provided a very brief critique of the handbook's "Financialization" chapter, and Oren Cass, American Compass's Executive Director, released a response titled Yes, Financialization Is Real. This Cato at Liberty post is the fourth in a series that expands on the original criticisms outlined in the Forbes column. (The first three in the series are available here, here, and here.) This post demonstrates that the evidence does not support American Compass's claims regarding investment. It also further documents American Compass's failure to clearly specify terms and dates, as well as its selective use of examples that appear to support its positions. To recap, the American Compass handbook states the following: "American finance has metastasized, claiming a disproportionate share of the nation's top business talent and the economy's profits, even as actual investment has declined." [Emphasis added.]
As with profits, the "Financialization" chapter does not specify a single preferred measure of investment or any time frame for analysis. It simply complains that "In recent decades…actual investment has declined." [Emphasis added.] The original critique stated, "The claim that investment has declined is also easily verified as false," and then used National Income and Product Account (NIPA) data to show "investment in fixed assets has been steadily increasing since 1970, a trend that holds even if the data is adjusted for inflation." Cass takes issue with the original critique's use of absolute investment dollars rather than investment as a share of GDP. Cass's response states: "Of course, investment rises in absolute dollars as the American population grows and economy expands. Who would claim otherwise? The question is what has happened relative to GDP."
Yet, American Compass uses the term actual investment in the introduction to the "Financialization" chapter and purposely uses aggregate data in levels when doing so suits its purpose. More importantly, American Compass never settles on any definition of investment. Here's a list of direct quotes describing investment from the "Financialization" chapter:

- "Unfortunately, in the United States, productive business investment has been in long-term decline and the financial industry now specializes in trading assets around in circles." [Emphasis added, no dates given.]
- "Economy-wide, business investment has fallen significantly as a share of GDP." [Emphasis added, no dates given.]
- "They instead become savers themselves by acquiring financial assets, effectively deferring the earthy and material work of productive capital investment to others." [Emphasis added, no dates given.]
- "Statistically, this transition began in the 1980s, as the share of corporate investment in tangible assets declined and the acquisition of financial assets climbed." [Emphasis added.]

Despite this ambiguity, Cass's response insists that readers should know exactly what investment data American Compass's handbook is referring to because "the Rebuilding American Capitalism handbook is a synthesis of our analysis and recommendations and provides copious references to further reading alongside each proposal." So, here's a list of direct quotes from two other American Compass reports, none of which provides a clear answer:

- "Actual-investment, by which I mean the allocation of capital toward the development of new productive capacity—the building of structures, the installment of machines, the creation of intellectual property—has been weakening in America for decades now." [Found in "The Rise of Wall Street and the Fall of American Investment" – emphasis added.]
- "As non-investors have overrun the banks and markets and taken control of corporations, actual-investment has slowed. The nation's capital base is smaller by literally trillions of dollars as a result, representing untold enterprises never built, innovations never pursued, and workers never given opportunity." [Found in "The Rise of Wall Street and the Fall of American Investment" – emphasis added.]
- "Net non-residential fixed investment as a share of GDP has fallen by almost half, from 4.1% in the 1970s and 80s to 2.5% in the 2010s." [Found in "The Rise of Wall Street and the Fall of American Investment" – emphasis added.]
- "The classic categories of investment, structures and equipment, account for 87% of the nation's capital base and the rate of investment there has been declining in both gross and net terms." [Found in "The Rise of Wall Street and the Fall of American Investment" – emphasis added.]
- "Net investment as a share of value-add averaged 4.3% during 1998–2000 and then 0.5% during 2002–04. During 2000–17, the average was 2.2%, leading to a $1.0 trillion shortfall over the period, relative to the 1970–99 rate." [Found in "The Rise of Wall Street and the Fall of American Investment" – emphasis added.]
- "As we have seen, the cumulative gross investment shortfall during 2009–17 as compared to 1970–99 amounted to $3.4 trillion." [Found in "The Rise of Wall Street and the Fall of American Investment" – emphasis added.]
- "Nationwide, net investment as a share of GDP has fallen sharply, and the shortfall since the Great Recession totals roughly $3 trillion (equivalent to the excess outflow from public companies)." [Found in "Confronting Coin-Flip Capitalism" – emphasis added.]
- "This creates a vicious cycle in which business leaders pursuing promising opportunities become harder to find, further encouraging the financial sector to develop strategies for deriving profits disconnected from actual investment." [Found in "Confronting Coin-Flip Capitalism" – emphasis added.]
- "From 2009 to 2017, the nation needed $22.9 trillion in gross investment to match the average growth rate of the capital stock during 1970–99 (3.8% of GDP annually). Instead, investment totaled only $19.6 trillion." [Found in "Confronting Coin-Flip Capitalism" – emphasis added.]
- "Even the market fundamentalists—indeed, especially the market fundamentalists—recognize that higher investment levels would be beneficial." [Found in "Confronting Coin-Flip Capitalism" – emphasis added.]

Setting aside American Compass's failure to explain whether any of these versions of investment is its single preferred measure for studying "financialization," it is true that there are many ways to describe investment. Indeed, many different time periods, inflation adjustments, aggregation issues, and sub-components of investment influence how an aggregate investment series behaves. Moreover, if investment (however defined) declines, or grows more slowly than some metric, that fact alone is not evidence that investment is less than optimal. While many American Compass reports imply investment is suboptimal, American Compass has not provided evidence that investment is less than what it should be.

Take, for instance, American Compass's claim that "Net non-residential fixed investment as a share of GDP has fallen by almost half, from 4.1 percent in the 1970s and 80s to 2.5 percent in the 2010s." If American Compass believes that 4.1 percent was the optimal share in the 1970s and '80s, and that a 1.6 percentage point lower share in the 2010s "threatens our future prosperity" and requires America to rebuild capitalism, then the least it can do is state a clear hypothesis and make an empirical case. Critics cannot simply argue that a lower number is less than optimal. (For what it's worth, American Compass's "The Corporate Erosion of Capitalism" also fails to provide such evidence – it is merely an accounting exercise without any economic analysis of the optimal levels of real investment individual firms need to sustain their own operations.)

It turns out, though, that the long-term trend in most of these investment measures is not decreasing. The only way to show that "investment" has declined is to selectively define the measure and the period for analysis. Regardless, there is no inherent economic reason that investment, whether in absolute amounts or relative to GDP, whether net or gross, real or nominal, must constantly increase. A developed economy with evolving working patterns, for instance, would not need to constantly invest more in new corporate structures. Similarly, the rate of growth of investment does not have to constantly match or exceed GDP (or profit) growth in any historical period. The mere fact that some metric of investment grows more slowly than some other economic measure – even for an extended period – does not indicate that the economy will be harmed, much less that "financialization" caused the "slow" growth.

Our analysis now turns to the actual time series of real investment and real GDP, respectively (see Figure 1 and Figure 2).
Contrary to what Cass claims in his response, the right question is not always "what has happened [to investment] relative to GDP." (Even if it were the correct question, simply dividing investment by GDP would not adequately account for confounding factors such as population growth, the cost of investment, productivity, feedback loops, etc.) Bluntly, it is not at all clear that using a relative measure is the "right" way to look at investment.
Figure 1: Real Gross Private Investment in the U.S., Annual from 1929 to 2022
Figure 2: Real Gross Domestic Product in the U.S., Annual from 1929 to 2022

For starters, nobody invests in amounts relative to GDP, and even American Compass often refers to levels of investment. Regardless, there are some basic mathematical issues suggesting that, at the very least, researchers must be very careful drawing inferences from relative investment measures. As Figure 1 and Figure 2 show, real (gross private domestic) investment and real GDP both display a sharp upward trend. However, the two series exhibit an enormous difference in size and volatility – the standard deviation of investment growth is five times greater than that of GDP growth. A 20 percent year-to-year decline in investment is normal, but such a drop would be highly unusual for GDP. Moreover, any decline in the ratio can easily mask the causal relationship between investment and economic growth. Put differently, GDP is stable even though one of its components – investment – fluctuates rapidly. This stability arises because investment accounts for only 13 percent of GDP on average, and fluctuations in investment can be offset by counter-cyclical fluctuations in other components of GDP, such as consumption or fiscal spending. Consequently, measuring investment relative to GDP can give the appearance that something dreadful has happened even when such deviations are the result of a perfectly normal economy, even one with optimal decision making.

Investment's high volatility is a well-known macroeconomic fact. Benchmark macro models dating back to the start of macroeconomic modeling itself have highlighted investment's significant volatility in comparison to the rest of the economy. The success of models since then is measured (at least in part) by whether their simulated time series can match the observed volatility of macro indicators such as investment.

Leaving the appropriateness of using levels aside, we now examine nominal investment relative to nominal GDP (i.e., the investment-to-GDP ratio), as well as several of its component measures. (See Figure 3.) These nominal metrics are available from 1929, but in fairness to American Compass, we only present the data from 1950 onwards. The series exhibits high volatility between 1929 and 1950, and starting the graph in 1929 would bias the data toward a steeper increasing trend for investment and its component measures. (Incidentally, replicating Figure 3 with real investment and GDP figures also shows the ratio exhibiting an increasing trend.[1])

As Figure 3 shows, the investment-to-GDP ratio exhibits variation around a very mildly increasing trend for all of modern U.S. economic history. As for component measures, non-residential investment has grown significantly as a share of the economy, offsetting the decline in the share of residential investment. Finally, net private domestic investment has declined over time. From this set of investment measures, focusing only on net private domestic investment to argue "actual investment" has declined amounts to selectively using one sub-component of investment while ignoring the others.
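To make the volatility comparison above concrete, here is a hedged sketch of how one might check it. The FRED series codes (GPDIC1 for real gross private domestic investment, GDPC1 for real GDP) and the pandas_datareader approach are my assumptions for illustration, not the post's actual methodology:

```python
# Sketch: relative volatility of investment growth vs. GDP growth,
# using quarterly FRED data. Series codes are assumptions:
# GPDIC1 = real gross private domestic investment, GDPC1 = real GDP.
import pandas_datareader.data as web

data = web.DataReader(["GPDIC1", "GDPC1"], "fred", "1950-01-01", "2022-12-31")
growth = data.pct_change().dropna()

ratio = growth["GPDIC1"].std() / growth["GDPC1"].std()
print(f"std(investment growth) / std(GDP growth) = {ratio:.1f}")
# The post reports a ratio of roughly five; the exact value depends
# on the sample period and on using annual vs. quarterly data.
```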
Figure 3: Investment Metrics as Share of NGDP in the U.S., Annual from 1950 to 2022

Of course, there are still many other ways to describe investment. Assume, for example, that net nonresidential (or business) investment is the "right" measure to analyze, as Cass's response suggests. Figure 4 presents real net business investment and its subcomponents from 1967 to 2021.[2] It shows that net business investment exhibits a sharp increasing trend. (Figure 3 showed that nonresidential investment as a share of GDP exhibits an increasing trend.) While the trends are not as steep for the subcomponents, Figure 4 shows that net investment in business equipment and intellectual property also displays increasing trends. However, net business investment in structures exhibits a decreasing trend. (Interestingly, in The Rise of Wall Street and the Fall of American Investment, American Compass groups structures and equipment together to demonstrate that investment in "Structures & Equipment" is declining.)
Figure 4: Real Net Nonresidential Investment and its Components in the U.S., Annual from 1967 to 2022

Obviously, it would make little sense to argue that "actual investment" is declining by focusing only on the decline in the net structures subcomponent, or on any other component of investment over a shorter period. Put bluntly, American Compass is incorrect to use declines in any of these subcomponents to argue that there is some kind of broad decline in investment. Still, the trends in these subcomponents in Figure 4 are even more problematic for American Compass. Not only do the trends contradict the claim that investment is in general decline, but American Compass's story also requires an explanation for (1) why the highly developed U.S. economy needs constantly increasing investment in structures, and (2) how "financialization" is responsible for a decline in structures investment alongside a simultaneous increase in equipment and IP investment.

It is also worth noting that while there is nothing inherently wrong with using net investment figures, as American Compass sometimes does, neither the Bureau of Economic Analysis's depreciation estimates nor accounting depreciation perfectly coincides with economic depreciation. In other words, even when someone fully depreciates a piece of equipment for tax or accounting purposes, it does not mean that the equipment is no longer useful and must be immediately replaced.

More generally, from a macroeconomic perspective, there is no reason to distinguish between different sub-components of investment. The primary macro indicator of economic health is real GDP growth relative to trend (or similar metrics such as the output gap and unemployment). Investment is a means of facilitating capital accumulation, and it is not immediately clear why one component of it is necessarily better than any other. This is why seminal empirical economic papers focus on investment as an aggregate. Even highly cited papers that explicitly model net investment do not include it in their results, focusing instead on aggregate investment.

Overall, aggregate investment in the United States is not in decline. Yet American Compass relies on a wide array of investment descriptions, over various time periods, to argue that American investment is in general decline and below optimal levels. American Compass's error is only compounded by its imprecise definition of financialization. This combination of errors leaves American Compass with little more than a set of stories that appear to provide evidence that financial markets threaten American capitalism. In the next post, we will conclude this series by discussing American Compass's flawed characterization of Americans' income.
[1] It may seem unintuitive that shares would be different between nominal and real variables (prices on the numerator and denominator should cancel out), but the relative price of investment has changed significantly in comparison to overall GDP. Specifically, the deflator for investment has been significantly higher than for GDP, equalizing only since the 2010s.
[2] The BEA provides values for sub‐components of investment that only go back to 1967.
Yes, it's already time to be worried — very worried. As the wars in Ukraine and Gaza have shown, the earliest drone equivalents of "killer robots" have made it onto the battlefield and proved to be devastating weapons. But at least they remain largely under human control. Imagine, for a moment, a world of war in which those aerial drones (or their ground and sea equivalents) controlled us, rather than vice-versa. Then we would be on a destructively different planet in a fashion that might seem almost unimaginable today. Sadly, though, it's anything but unimaginable, given the work on artificial intelligence (AI) and robot weaponry that the major powers have already begun. Now, let me take you into that arcane world and try to envision what the future of warfare might mean for the rest of us.

By combining AI with advanced robotics, the U.S. military and those of other advanced powers are already hard at work creating an array of self-guided "autonomous" weapons systems — combat drones that can employ lethal force independently of any human officers meant to command them. Called "killer robots" by critics, such devices include a variety of uncrewed or "unmanned" planes, tanks, ships, and submarines capable of autonomous operation. The U.S. Air Force, for example, is developing its "collaborative combat aircraft," an unmanned aerial vehicle (UAV) intended to join piloted aircraft on high-risk missions. The Army is similarly testing a variety of autonomous unmanned ground vehicles (UGVs), while the Navy is experimenting with both unmanned surface vessels (USVs) and unmanned undersea vessels (UUVs, or drone submarines). China, Russia, Australia, and Israel are also working on such weaponry for the battlefields of the future.

The imminent appearance of those killing machines has generated concern and controversy globally, with some countries already seeking a total ban on them and others, including the U.S., planning to authorize their use only under human-supervised conditions. In Geneva, a group of states has even sought to prohibit the deployment and use of fully autonomous weapons, citing a 1980 U.N. treaty, the Convention on Certain Conventional Weapons, that aims to curb or outlaw non-nuclear munitions believed to be especially harmful to civilians. Meanwhile, in New York, the U.N. General Assembly held its first discussion of autonomous weapons last October and is planning a full-scale review of the topic this coming fall.

For the most part, debate over the battlefield use of such devices hinges on whether they will be empowered to take human lives without human oversight. Many religious and civil society organizations argue that such systems will be unable to distinguish between combatants and civilians on the battlefield and so should be banned in order to protect noncombatants from death or injury, as is required by international humanitarian law. American officials, on the other hand, contend that such weaponry can be designed to operate perfectly well within legal constraints.

However, neither side in this debate has addressed the most potentially unnerving aspect of using them in battle: the likelihood that, sooner or later, they'll be able to communicate with each other without human intervention and, being "intelligent," will be able to come up with their own unscripted tactics for defeating an enemy — or something else entirely.
Such computer-driven groupthink, labeled "emergent behavior" by computer scientists, opens up a host of dangers not yet being considered by officials in Geneva, Washington, or at the U.N.

For the time being, most of the autonomous weaponry being developed by the American military will be unmanned (or, as they sometimes say, "uninhabited") versions of existing combat platforms and will be designed to operate in conjunction with their crewed counterparts. While they might also have some capacity to communicate with each other, they'll be part of a "networked" combat team whose mission will be dictated and overseen by human commanders. The Collaborative Combat Aircraft, for instance, is expected to serve as a "loyal wingman" for the manned F-35 stealth fighter, while conducting high-risk missions in contested airspace. The Army and Navy have largely followed a similar trajectory in their approach to the development of autonomous weaponry.

The Appeal of Robot "Swarms"

However, some American strategists have championed an alternative approach to the use of autonomous weapons on future battlefields in which they would serve not as junior colleagues in human-led teams but as coequal members of self-directed robot swarms. Such formations would consist of scores or even hundreds of AI-enabled UAVs, USVs, or UGVs — all able to communicate with one another, share data on changing battlefield conditions, and collectively alter their combat tactics as the group-mind deems necessary.

"Emerging robotic technologies will allow tomorrow's forces to fight as a swarm, with greater mass, coordination, intelligence and speed than today's networked forces," predicted Paul Scharre, an early enthusiast of the concept, in a 2014 report for the Center for a New American Security (CNAS). "Networked, cooperative autonomous systems," he wrote then, "will be capable of true swarming — cooperative behavior among distributed elements that gives rise to a coherent, intelligent whole."

As Scharre made clear in his prophetic report, any full realization of the swarm concept would require the development of advanced algorithms that would enable autonomous combat systems to communicate with each other and "vote" on preferred modes of attack. This, he noted, would involve creating software capable of mimicking ants, bees, wolves, and other creatures that exhibit "swarm" behavior in nature. As Scharre put it, "Just like wolves in a pack present their enemy with an ever-shifting blur of threats from all directions, uninhabited vehicles that can coordinate maneuver and attack could be significantly more effective than uncoordinated systems operating en masse."

In 2014, however, the technology needed to make such machine behavior possible was still in its infancy. To address that critical deficiency, the Department of Defense proceeded to fund research in the AI and robotics field, even as it also acquired such technology from private firms like Google and Microsoft. A key figure in that drive was Robert Work, a former colleague of Paul Scharre's at CNAS and an early enthusiast of swarm warfare. Work served from 2014 to 2017 as deputy secretary of defense, a position that enabled him to steer ever-increasing sums of money to the development of high-tech weaponry, especially unmanned and autonomous systems.

From Mosaic to Replicator

Much of this effort was delegated to the Defense Advanced Research Projects Agency (DARPA), the Pentagon's in-house high-tech research organization.
As part of a drive to develop AI for such collaborative swarm operations, DARPA initiated its "Mosaic" program, a series of projects intended to perfect the algorithms and other technologies needed to coordinate the activities of manned and unmanned combat systems in future high-intensity combat with Russia and/or China.

"Applying the great flexibility of the mosaic concept to warfare," explained Dan Patt, deputy director of DARPA's Strategic Technology Office, "lower-cost, less complex systems may be linked together in a vast number of ways to create desired, interwoven effects tailored to any scenario. The individual parts of a mosaic are attritable [dispensable], but together are invaluable for how they contribute to the whole."

This concept of warfare apparently undergirds the new "Replicator" strategy announced by Deputy Secretary of Defense Kathleen Hicks just last summer. "Replicator is meant to help us overcome [China's] biggest advantage, which is mass. More ships. More missiles. More people," she told arms industry officials last August. By deploying thousands of autonomous UAVs, USVs, UUVs, and UGVs, she suggested, the U.S. military would be able to outwit, outmaneuver, and overpower China's military, the People's Liberation Army (PLA). "To stay ahead, we're going to create a new state of the art… We'll counter the PLA's mass with mass of our own, but ours will be harder to plan for, harder to hit, harder to beat."

To obtain both the hardware and software needed to implement such an ambitious program, the Department of Defense is now seeking proposals from traditional defense contractors like Boeing and Raytheon as well as AI startups like Anduril and Shield AI. While large-scale devices like the Air Force's Collaborative Combat Aircraft and the Navy's Orca Extra-Large UUV may be included in this drive, the emphasis is on the rapid production of smaller, less complex systems like AeroVironment's Switchblade attack drone, now used by Ukrainian troops to take out Russian tanks and armored vehicles behind enemy lines.

At the same time, the Pentagon is already calling on tech startups to develop the necessary software to facilitate communication and coordination among such disparate robotic units and their associated manned platforms. To that end, the Air Force asked Congress for $50 million in its fiscal year 2024 budget to underwrite what it ominously enough calls Project VENOM, or "Viper Experimentation and Next-generation Operations Model." Under VENOM, the Air Force will convert existing fighter aircraft into AI-governed UAVs and use them to test advanced autonomous software in multi-drone operations. The Army and Navy are testing similar systems.

When Swarms Choose Their Own Path

In other words, it's only a matter of time before the U.S. military (and presumably China's, Russia's, and perhaps those of a few other powers) will be able to deploy swarms of autonomous weapons systems equipped with algorithms that allow them to communicate with each other and jointly choose novel, unpredictable combat maneuvers while in motion. Any participating robotic member of such swarms would be given a mission objective ("seek out and destroy all enemy radars and anti-aircraft missile batteries located within these [specified] geographical coordinates") but not be given precise instructions on how to do so. That would allow them to select their own battle tactics in consultation with one another.
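To see how coherent group behavior can arise from simple local rules with no central script, consider a toy flocking model of the kind long studied in the swarm literature (Craig Reynolds's "boids" rules). The sketch below is purely illustrative: it assumes nothing about any military system's actual software, and every class name and parameter is invented for the example.

```python
import random

class Agent:
    """One swarm member; it knows only its own position and velocity."""
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(agents, cohesion=0.01, separation=0.05, alignment=0.05):
    """Update every agent using three purely local rules; no rule
    (and no agent) has any notion of the group's overall shape."""
    for a in agents:
        others = [b for b in agents if b is not a]
        n = len(others)
        # Cohesion: drift toward the average position of the others.
        a.vx += (sum(b.x for b in others) / n - a.x) * cohesion
        a.vy += (sum(b.y for b in others) / n - a.y) * cohesion
        # Separation: veer away from any neighbor that gets too close.
        for b in others:
            if abs(b.x - a.x) + abs(b.y - a.y) < 5:
                a.vx -= (b.x - a.x) * separation
                a.vy -= (b.y - a.y) * separation
        # Alignment: nudge heading toward the others' average heading.
        a.vx += (sum(b.vx for b in others) / n - a.vx) * alignment
        a.vy += (sum(b.vy for b in others) / n - a.vy) * alignment
    for a in agents:
        a.x += a.vx
        a.y += a.vy

swarm = [Agent() for _ in range(20)]
for _ in range(200):
    step(swarm)
# After a few hundred steps the agents tend to move as one coherent
# flock, even though no line of code anywhere says "form a flock."
```

Nothing in any agent's code mentions a flock; the formation is a property of the interactions alone. That, in miniature, is the phenomenon at issue in what follows.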
If the limited test data we have is anything to go by, such self-directed coordination could mean employing highly unconventional tactics never conceived for (and impossible to replicate by) human pilots and commanders.

The propensity for such interconnected AI systems to engage in novel, unplanned outcomes is what computer experts call "emergent behavior." As ScienceDirect, an online database of scientific literature, explains it, "An emergent behavior can be described as a process whereby larger patterns arise through interactions among smaller or simpler entities that themselves do not exhibit such properties." In military terms, this means that a swarm of autonomous weapons might jointly elect to adopt combat tactics none of the individual devices were programmed to perform — possibly achieving astounding results on the battlefield, but also conceivably engaging in escalatory acts unintended and unforeseen by their human commanders, including the destruction of critical civilian infrastructure or communications facilities used for nuclear as well as conventional operations.

At this point, of course, it's almost impossible to predict what an alien group-mind might choose to do if armed with multiple weapons and cut off from human oversight. Supposedly, such systems would be outfitted with failsafe mechanisms requiring that they return to base if communications with their human supervisors were lost, whether due to enemy jamming or for any other reason. Who knows, however, how such thinking machines would function in demanding real-world conditions or whether, in fact, the group-mind would prove capable of overriding such directives and striking out on its own.

What then? Might they choose to keep fighting beyond their preprogrammed limits, provoking unintended escalation — even, conceivably, of a nuclear kind? Or would they choose to stop their attacks on enemy forces and instead interfere with the operations of friendly ones, perhaps firing on and devastating them (as Skynet does in the classic science-fiction Terminator movie series)? Or might they engage in behaviors that, for better or infinitely worse, are entirely beyond our imagination?

Top U.S. military and diplomatic officials insist that AI can indeed be used without incurring such future risks and that this country will only employ devices that incorporate thoroughly adequate safeguards against any future dangerous misbehavior. That is, in fact, the essential point made in the "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy" issued by the State Department in February 2023. Many prominent security and technology officials are, however, all too aware of the potential risks of emergent behavior in future robotic weaponry and continue to issue warnings against the rapid utilization of AI in warfare.

Of particular note is the final report that the National Security Commission on Artificial Intelligence issued in February 2021. Co-chaired by Robert Work (back at CNAS after his stint at the Pentagon) and Eric Schmidt, former CEO of Google, the commission recommended the rapid utilization of AI by the U.S. military to ensure victory in any future conflict with China and/or Russia. However, it also voiced concern about the potential dangers of robot-saturated battlefields.

"The unchecked global use of such systems potentially risks unintended conflict escalation and crisis instability," the report noted.
This could occur for a number of reasons, including "because of challenging and untested complexities of interaction between AI-enabled and autonomous weapon systems [that is, emergent behaviors] on the battlefield." Given that danger, it concluded, "countries must take actions which focus on reducing risks associated with AI-enabled and autonomous weapon systems."

When the leading advocates of autonomous weaponry tell us to be concerned about the unintended dangers posed by their use in battle, the rest of us should be worried indeed. Even if we lack the mathematical skills to understand emergent behavior in AI, it should be obvious that humanity could face a significant risk to its existence, should killing machines acquire the ability to think on their own. Perhaps they would surprise everyone and decide to take on the role of international peacekeepers, but given that they're being designed to fight and kill, it's far more probable that they would simply choose to carry out those instructions in an independent and extreme fashion.

If so, there could be no one around to put an R.I.P. on humanity's gravestone.

This article was republished with permission from Tom Dispatch.