This post takes up from two previous posts (part 1; part 2), asking just what do we (we economists) really know about how interest rates affect inflation. Today: what does contemporary economic theory say? As you may recall, the standard story says that the Fed raises interest rates; inflation (and expected inflation) don't immediately jump up, so real interest rates rise; with some lag, higher real interest rates push down employment and output (IS); with some more lag, the softer economy leads to lower prices and wages (Phillips curve). So higher interest rates lower future inflation, albeit with "long and variable lags." Higher interest rates -> (lag) lower output, employment -> (lag) lower inflation. In part 1, we saw that it's not easy to see that story in the data. In part 2, we saw that half a century of formal empirical work also leaves that conclusion on very shaky ground. As they say at the University of Chicago, "Well, so much for the real world, how does it work in theory?" That is an important question. We never really believe things we don't have a theory for, and for good reason. So, today, let's look at what modern theory has to say about this question. Theory and evidence are not unrelated questions: theory has been trying to replicate this story for decades. The answer: modern (anything post-1972) theory really does not support this idea. The standard new-Keynesian model does not produce anything like the standard story. Models that modify that simple model to achieve something like the result of the standard story do so with a long list of complex ingredients. The new ingredients are not just sufficient; they are (apparently) necessary to produce the desired dynamic pattern. Even these models do not implement the verbal logic above. If the pattern that high interest rates lower inflation over a few years is true, it is by a completely different mechanism than the story tells.
I conclude that we don't have a simple economic model that produces the standard belief. ("Simple" and "economic" are important qualifiers.)

The simple new-Keynesian model

The central problem comes from the Phillips curve. The modern Phillips curve asserts that price-setters are forward-looking. If they know inflation will be high next year, they raise prices now. So: inflation today = expected inflation next year + (coefficient) x output gap. \[\pi_t = E_t\pi_{t+1} + \kappa x_t\] (If you know enough to complain about \(\beta\approx0.99\) in front of \(E_t\pi_{t+1}\), you know enough that it doesn't matter for the issues here.) Now, if the Fed raises interest rates, and if (if) that lowers output or raises unemployment, inflation today goes down. The trouble is, that's not what we're looking for. Inflation goes down today (\(\pi_t\)) relative to expected inflation next year (\(E_t\pi_{t+1}\)). So a higher interest rate and lower output correlate with inflation that is rising over time. Here is a concrete example. The plot is the response of the standard three-equation new-Keynesian model to an \(\varepsilon_1\) shock at time 1: \[\begin{align} x_t &= E_t x_{t+1} - \sigma(i_t - E_t\pi_{t+1}) \\ \pi_t & = \beta E_t \pi_{t+1} + \kappa x_t \\ i_t &= \phi \pi_t + u_t \\ u_t &= \eta u_{t-1} + \varepsilon_t. \end{align}\] Here \(x\) is output, \(i\) is the interest rate, \(\pi\) is inflation, \(\eta=0.6\), \(\sigma=1\), \(\kappa=0.25\), \(\beta=0.95\), \(\phi=1.2\). In this plot, higher interest rates are said to lower inflation. But they lower inflation immediately, on the day of the interest rate shock. Then, as explained above, inflation rises over time. In the standard view, and in the empirical estimates from the last post, a higher interest rate has no immediate effect, and then future inflation is lower.
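The "jump down, then rise" shape can be verified in a few lines. The unique bounded response of the three-equation system to the AR(1) policy shock has a closed form by guess-and-verify (\(\pi_t = a u_t\), \(x_t = b u_t\)). A minimal sketch in Python, using the parameter values above; the coefficient formulas are my own derivation, offered as an illustration rather than anything from the post:

```python
# Guess-and-verify solution of the three-equation new-Keynesian model
# for the AR(1) policy shock u_t = eta**(t-1) (unit shock at t = 1).
# Substituting pi_t = a*u_t, x_t = b*u_t into the IS and Phillips curves
# gives the bounded-solution coefficients below (my derivation).
sigma, kappa, beta, phi, eta = 1.0, 0.25, 0.95, 1.2, 0.6

a = -sigma / ((1 - beta * eta) * (1 - eta) / kappa + sigma * (phi - eta))
b = a * (1 - beta * eta) / kappa

# Paths for t = 1, 2, ... after the shock:
pi = [a * eta ** (t - 1) for t in range(1, 11)]
x = [b * eta ** (t - 1) for t in range(1, 11)]

# Inflation and output both jump DOWN on impact, then RISE back toward zero:
assert pi[0] < 0 and x[0] < 0
assert all(later > earlier for earlier, later in zip(pi, pi[1:]))
```

With these numbers inflation falls by about 0.78 on the day of the shock and then decays geometrically back toward zero at rate \(\eta\): lower inflation immediately, rising inflation thereafter, the opposite of the slow decline in the standard story.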
See plots in the last post, or the one in Romer and Romer's 2023 summary. Inflation jumping down and then rising in the future is quite different from inflation that does nothing immediately, might even rise for a few months, and then starts gently going down. You might even wonder about the downward jump in inflation. The Phillips curve makes it clear why current inflation is lower than expected future inflation, but why doesn't current inflation stay the same, or even rise, and expected future inflation rise more? That's the "equilibrium selection" issue. All those paths are possible, and you need extra rules to pick a particular one. Fiscal theory points out that the downward jump needs a fiscal tightening, so it represents a joint monetary-fiscal policy. But we don't argue about that today. Take the standard new-Keynesian model exactly as is, with passive fiscal policy and standard equilibrium selection rules. It predicts that inflation jumps down immediately and then rises over time. It does not predict that inflation slowly declines over time. This is not a new issue. Larry Ball (1994) first pointed out that the standard new-Keynesian Phillips curve says that output is high when inflation is high relative to expected future inflation, that is, when inflation is declining. Standard beliefs go the other way: output is high when inflation is rising. The IS curve is a key part of the overall prediction, and output faces a similar problem. I just assumed above that output falls when interest rates rise. In the model it does; output follows a path with the same shape as inflation in my little plot. Output also jumps down and then rises over time. Here too, the (much stronger) empirical evidence says that an interest rate rise does not change output immediately, and output then falls rather than rises over time. The intuition has even clearer economics behind it: higher real interest rates induce people to consume less today and more tomorrow.
Higher real interest rates should go with higher, not lower, future consumption growth. Again, the model only apparently reverses the sign by having output jump down before rising.

Key issues

How can we be here, 40 years later, with the benchmark textbook model so utterly failing to replicate standard beliefs about monetary policy? One answer, I believe, is confusing adjustment to equilibrium with equilibrium dynamics. The model generates inflation lower than yesterday (time 0 to time 1) and lower than it otherwise would be (time 1 without shock vs time 1 with shock). Now, all economic models are a bit stylized. It's easy to say that when we add various frictions, "lower than yesterday" or "lower than it would have been" is a good parable for "goes down over time." If in a simple supply and demand graph we say that an increase in demand raises prices instantly, we naturally understand that as a parable for a drawn-out period of price increases once we add appropriate frictions. But dynamic macroeconomics doesn't work that way. We have already added what was supposed to be the central friction, sticky prices. Dynamic economics is supposed to describe the time-path of variables already, with no extra parables. If adjustment to equilibrium takes time, then model that. The IS and Phillips curves are forward-looking, like stock prices. It would make little sense to say "news comes out that the company will never make money, so the stock price should decline gradually over a few years." It should jump down now. Inflation and output behave that way in the standard model. A second confusion, I think, is between sticky prices and sticky inflation. The new-Keynesian model posits, and a huge empirical literature examines, sticky prices. But that is not the same thing as sticky inflation. Prices can be arbitrarily sticky and inflation, the first derivative of prices, can still jump. In the Calvo model, imagine that only a tiny fraction of firms can change prices at each instant.
But when they do, they will change prices a lot, and the overall price level will start increasing right away. In the continuous-time version of the model, prices are continuous (sticky), but inflation jumps at the moment of the shock. The standard story wants sticky inflation. Many authors explain the new-Keynesian model with sentences like "the Fed raises interest rates. Prices are sticky, so inflation can't go up right away and real interest rates are higher." This is wrong. Inflation can rise right away. In the standard new-Keynesian model it does so with \(\eta=1\), for any amount of price stickiness. Inflation rises immediately with a persistent monetary policy shock. Just get it out of your heads. The standard model does not produce the standard story. The obvious response is: let's add ingredients to the standard model and see if we can modify the response function to look something like the common beliefs and VAR estimates. Let's go.

Adaptive expectations

We can reproduce standard beliefs about monetary policy with thoroughly adaptive expectations, in the 1970s ISLM form. I think this is a large part of what most policy makers and commenters have in mind. Modify the above model to leave out the dynamic part of the intertemporal substitution equation, so that it just says, in a rather ad hoc way, that higher real interest rates lower output, and specify that the expected inflation that drives the real rate and that drives pricing decisions is mechanically equal to previous inflation, \(E_t \pi_{t+1} = \pi_{t-1}\). We get \[ \begin{align} x_t &= -\sigma (i_t - \pi_{t-1}) \\ \pi_t & = \pi_{t-1} + \kappa x_t .\end{align}\] We can solve this system analytically: \[\pi_t = (1+\sigma\kappa)\pi_{t-1} - \sigma\kappa i_t.\] Here's what happens if the Fed permanently raises the interest rate: higher interest rates send future inflation down (\(\kappa=0.25,\ \sigma=1\)). Inflation eventually spirals away, but central banks don't leave interest rates alone forever.
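The spiral is easy to verify by iterating the solved difference equation. A minimal sketch with the parameters above; the permanent one-point rate rise and zero starting inflation are illustrative assumptions:

```python
# Adaptive-expectations dynamics: pi_t = (1 + sigma*kappa)*pi_{t-1} - sigma*kappa*i_t,
# with the interest rate pegged permanently one point higher.
sigma, kappa = 1.0, 0.25
i = 1.0        # permanent rate rise (illustrative)
pi = 0.0       # starting inflation (illustrative)
path = []
for t in range(12):
    pi = (1 + sigma * kappa) * pi - sigma * kappa * i
    path.append(pi)

# Inflation falls, and each fall is bigger than the last: a spiral, not a
# gentle convergence. Closed form here: pi_t = 1 - 1.25**t.
assert all(later < earlier for earlier, later in zip(path, path[1:]))
assert path[-1] < -10
```

The dominant root \(1+\sigma\kappa = 1.25\) exceeds one, so deviations compound rather than die out; that is the instability under a peg described below.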
If we add a Taylor rule response \(i_t = \phi \pi_t + u_t\), so the central bank reacts to the emerging spiral, we get this response to a permanent monetary policy disturbance \(u_t\): the higher interest rate sets off a deflation spiral, but the Fed quickly follows inflation down to stabilize the situation. This is, I think, the conventional story of the 1980s. In terms of ingredients, an apparently minor change of index from \(E_t \pi_{t+1}\) to \(\pi_{t-1}\) is in fact a big change. It means directly that higher output comes with increasing inflation, not decreasing inflation, solving Ball's puzzle. The change basically flips the sign of output in the Phillips curve. Again, it's not really all in the Phillips curve. This model with rational expectations in the IS equation and adaptive expectations in the Phillips curve produces junk. To get the result you need adaptive expectations everywhere. The adaptive expectations model gets the desired result by changing the basic sign and stability properties of the model. Under rational expectations the model is stable; inflation goes away all on its own under an interest rate peg. With adaptive expectations, the model is unstable. Inflation or deflation spirals away under an interest rate peg or at the zero bound. The Fed's job is like balancing a broom upside down. If you move the bottom (interest rates) one way, the broom zooms off the other way. With rational expectations, the model is stable, like a pendulum. This is not a small wrinkle designed to modify dynamics. This is major surgery. It is also a robust property: small changes in parameters do not change the dominant eigenvalue of a model from over one to less than one. A more refined way to capture how Fed officials and pundits think and talk might be called "temporarily fixed expectations." Policy people do talk about the modern Phillips curve; they say inflation depends on inflation expectations and employment. Expectations are not mechanically adaptive.
Expectations are a third force, sometimes "anchored," and amenable to manipulation by speeches and dot plots. Crucially, in this analysis, expected inflation does not move when the Fed changes interest rates. Expectations are then very slowly adaptive, if inflation is persistent, or if there is a more general loss of faith in "anchoring." In the above new-Keynesian model graph, the minute the Fed raises the interest rate, expected inflation jumps up to follow the graph's plot of the model's forecast of inflation. As a simple way to capture these beliefs, suppose expectations are fixed or "anchored" at \(\pi^e\). Then my simple model is \[\begin{align}x_t & = -\sigma(i_t - \pi^e) \\ \pi_t & = \pi^e + \kappa x_t\end{align}\] so \[\pi_t = \pi^e - \sigma \kappa (i_t - \pi^e).\] Inflation is expected inflation, lowered by higher interest rates (the last minus sign). But those rates need only be higher than the fixed expectations; they do not need to be higher than past rates, as they do in the adaptive expectations model. That's why the Fed thinks 3% interest rates with 5% inflation is still "contractionary"--expected inflation remains at 2%, not the 5% of recent adaptive experience. Also, by fixing expectations, I remove the instability of the adaptive expectations model... so long as those expectations stay anchored. The Fed recognizes that eventually higher inflation moves the expectations, and, with expectations that then turn adaptive, it fears that an inflation spiral can still break out. Even this view does not give us any lags, however. The Fed and commenters clearly believe that higher real interest rates today lower output next year, not immediately; and they believe that lower output and employment today drive inflation down in the future, not immediately. They believe something like \[\begin{align}x_{t+1} &= - \sigma(i_t - \pi^e) \\ \pi_{t+1} &= \pi^e + \kappa x_t.\end{align}\] But now we're at the kind of non-economic ad-hockery that the whole 1970s revolution abandoned.
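The "3% rates with 5% inflation is still contractionary" arithmetic above can be made concrete. A minimal sketch of the anchored-expectations equation, with the 2% anchor and the 3%/5% pair from the example, and the parameter values used throughout the post:

```python
# Anchored-expectations model: pi_t = pi_e - sigma*kappa*(i_t - pi_e).
sigma, kappa = 1.0, 0.25
pi_e = 2.0       # anchored expected inflation (percent)
i = 3.0          # policy rate (percent)
pi_recent = 5.0  # recent actual inflation (percent)

real_rate_vs_anchor = i - pi_e       # +1: contractionary against the anchor
real_rate_vs_recent = i - pi_recent  # -2: "expansionary" by the adaptive metric
pi_model = pi_e - sigma * kappa * (i - pi_e)   # model-implied inflation

# A rate above the anchor lowers inflation below the anchor, even though the
# rate is well below recent inflation:
assert real_rate_vs_anchor > 0 > real_rate_vs_recent
assert pi_model < pi_e
```

The same rate is contractionary or expansionary depending entirely on which expectation, the anchor or recent inflation, enters the real rate; that is the whole difference between this view and the adaptive one.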
And for a reason: ad hoc models are unstable; regimes are always changing. Moreover, let me remind you of our quest: is there a simple economic model of monetary policy that generates something like the standard view? At this level of ad-hockery you might as well just write down the coefficients of Romer and Romer's response function and call that the model of how interest rates affect inflation. Academic economics gave up on mechanical expectations and ad hoc models in the 1970s. You can't publish a paper with this sort of model. So when I say a "modern" model, I mean rational expectations, or at least the consistency condition that the expectations in the model are not fundamentally different from forecasts of the model. (Models with explicit learning or other expectation-formation frictions count too.) It's easy to puff about how people aren't rational, and looking out the window lots of people do dumb things. But if we take that view, then the whole project of monetary policy rests on the proposition that people are fundamentally unable to learn patterns in the economy, and that a benevolent Federal Reserve can trick the poor little souls into a better outcome. And somehow the Fed is the lone super-rational actor who can avoid all those pesky behavioral biases. We are looking for the minimum necessary ingredients to describe the basic signs and function of monetary policy. A bit of irrational or complex expectation formation as icing on the cake, a possible sufficient ingredient to produce quantitatively realistic dynamics, isn't awful. But it would be sad if irrational expectations or other behavioral frictions are a necessary ingredient to get the most basic sign and story of monetary policy right.
If persistent irrationality is a central necessary ingredient for the basic sign and operation of monetary policy -- if higher interest rates will raise inflation the minute people smarten up; if there is no simple supply-and-demand, MV=PY, sensible economics underlying the basic operation of monetary policy; if it's all a conjuring trick -- that should really weaken our faith in the whole monetary policy project. Facts help, and we don't have to get religious about it. During the long zero bound, the same commentators and central bankers kept warning about a deflation spiral, clearly predicted by this model. It never happened. Interest rates below inflation from 2021 to 2023 should have led to an upward inflation spiral. That never happened either -- inflation eased all on its own with interest rates below inflation. Getting the desired response to interest rates by making the model unstable isn't tenable, whether or not you like the ingredient. Inflation also surged in the 1970s faster than adaptive expectations came close to predicting, and fell faster in the 1980s. The ends of many inflations come with credible changes in regime. There is a lot of work now desperately trying to fix new-Keynesian models by making them more old-Keynesian: putting lagged inflation in the Phillips curve, current income in the IS equation, and so forth. Complex learning and expectation-formation stories replace the simplistic adaptive expectations here. As far as I can tell, to the extent they work, they largely do so in the same way, by reversing the basic stability of the model.

Modifying the new-Keynesian model

The alternative is to add ingredients to the basic new-Keynesian model, maintaining its insistence on real "micro-founded" economics and forward-looking behavior, and describing explicit dynamics as the evolution of equilibrium quantities. Christiano, Eichenbaum and Evans (2005) is one of the most famous examples.
Recall that these same authors created the first and most influential VAR that gave the "right" answer to the effects of monetary policy shocks. This paper modifies the standard new-Keynesian model with a specific eye to matching impulse-response functions. They want to match all the impulse responses, with a special focus on output. When I started asking my young macro colleagues for a standard model that produces the desired response shape, they still cite CEE first, though it's 20 years later. That's quite an accomplishment. I'll look at it in detail, as the general picture is the same as in many other models that achieve the desired result. Here's their bottom-line response to a monetary policy shock: (Figure from the 2018 Christiano, Eichenbaum and Trabandt Journal of Economic Perspectives summary paper.) The solid line is the VAR point estimate and the gray shading is the 95% confidence band. The solid blue line is the main model. The dashed line is the model with only price stickiness, to emphasize the importance of wage stickiness. The shock happens at time 0. Notice the funds rate line that jumps down at that date. That the other lines do not move at time 0 is a result. (I graphed the response to a time 1 shock above.) That's the answer; now what's the question? What ingredients did they add, beyond the textbook model, to reverse the basic sign and jump problem and to produce these pretty pictures? Here is a partial list:

- Habit formation. The utility function is \(\log(c_t - bc_{t-1})\).
- A capital stock with adjustment costs in investment. Adjustment costs are proportional to investment growth, \([1-S(i_t/i_{t-1})]i_t\), rather than the usual formulation in which adjustment costs are proportional to the investment-to-capital ratio, \(S(i_t/k_t)i_t\).
- Variable capital utilization. Capital services \(k_t\) are related to the capital stock \(\bar{k}_t\) by \(k_t = u_t \bar{k}_t\). The utilization rate \(u_t\) is set by households facing an upward-sloping cost \(a(u_t)\bar{k}_t\).
- Calvo pricing with indexation: firms randomly get to reset prices, but firms that aren't allowed to reset prices automatically raise prices at the rate of inflation.
- Prices are also fixed for a quarter. Technically, firms must post prices before they see the period's shocks.
- Sticky wages, also with indexation. Households are monopoly suppliers of labor, and set wages Calvo-style like firms. (Later papers put all households into a union which does the wage setting.) Wages are also indexed; households that don't get to reoptimize their wage still raise wages following inflation.
- Firms must borrow working capital to finance their wage bill a quarter in advance, and thus pay interest on the wage bill.
- Money in the utility function, and money supply control. Monetary policy is a change in the money growth rate, not a pure interest rate target.

Whew! But which of these ingredients are necessary, and which are just sufficient? Knowing the authors, I strongly suspect that they are all necessary to get the full suite of results. They don't add ingredients for show. But they want to match all of the impulse-response functions, not just the inflation response. Perhaps a simpler set of ingredients could generate the inflation response while missing some of the others. Let's understand what each of these ingredients is doing, which will help us to see whether they are necessary and essential to getting the desired result. I see a common theme in habit formation, adjustment costs that scale with investment growth, and indexation. These ingredients each add a derivative; they take a standard relationship between levels of economic variables and change it to one in growth rates. Each of consumption, investment, and inflation is a "jump variable" in standard economics, like a stock price. Consumption (roughly) jumps to the present value of future income.
The level of investment is proportional to the stock price in the standard q theory, and jumps when there is new information. Iterating forward the new-Keynesian Phillips curve \(\pi_t = \beta E_t \pi_{t+1} + \kappa x_t\), inflation jumps to the discounted sum of future output gaps, \(\pi_t = \kappa E_t \sum_{j=0}^\infty \beta^j x_{t+j}.\) To produce responses in which output, consumption and investment as well as inflation rise slowly after a shock, we don't want levels of consumption, investment, and inflation to jump this way. Instead we want growth rates to do so. With standard utility, the consumer's linearized first-order condition equates expected consumption growth to the interest rate, \( E_t (c_{t+1}/c_t) = \delta + r_t. \) Habit, with \(b=1\), gives \( E_t [(c_{t+1}-c_t)/(c_t-c_{t-1})] = \delta + r_t. \) (I left out the extra forward-looking terms.) Mixing logs and levels a bit, you can see we put a growth rate in place of a level. (The paper has \(b=0.65\).) An investment adjustment cost function with \(S(i_t/i_{t-1})\) rather than the standard \(S(i_t/k_t)\) likewise puts a derivative in place of a level. Normally we tell a story that if you want a house painted, doubling the number of painters doesn't get the job done twice as fast, because they get in each other's way. But you can double the number of painters overnight if you want to do so. Here the cost is on the increase in the number of painters each day. Indexation results in a Phillips curve with a lagged inflation term, and that gives "sticky inflation." The Phillips curve of the model (equations 32 and 33 of the paper) is \[\pi_t = \frac{1}{1+\beta}\pi_{t-1} + \frac{\beta}{1+\beta}E_{t-1}\pi_{t+1} + (\text{constants})\, E_{t-1}s_t\] where \(s_t\) is marginal cost (more later). The \(E_{t-1}\) come from the assumption that prices can't react to time \(t\) information.
Iterate that forward to get their equation (33): \[\pi_t - \pi_{t-1} = (\text{constants})\, E_{t-1}\sum_{j=0}^\infty \beta^j s_{t+j}.\] We have successfully put the change in inflation in place of the level of inflation. The Phillips curve is anchored by real marginal costs, and they are not proportional to output in this model as they are in the textbook model above. That's important too. Instead, \[s_t = (\text{constants})\, (r^k_t)^\alpha \left(\frac{W_t}{P_t}R_t\right)^{1-\alpha}\] where \(r^k\) is the return to capital, \(W/P\) is the real wage, and \(R\) is the nominal interest rate. The latter term crops up from the assumption that firms must borrow the wage bill one period in advance. This is an interesting ingredient. There is a lot of talk that higher interest rates raise costs for firms, and that firms reduce output as a result. That might get us around some of the IS curve problems. But that's not how it works here. Here's how I think it works: higher interest rates raise marginal costs, and thus push up current inflation relative to expected future inflation. The equilibrium-selection rules and the rule against instant price changes (coming up next) tie down current inflation, so the higher interest rates have to push down expected future inflation. CEE disagree (p. 28). Writing of an interest rate decline, so all the signs are opposite of my story: "...the interest rate appears in firms' marginal cost. Since the interest rate drops after an expansionary monetary policy shock, the model embeds a force that pushes marginal costs down for a period of time. Indeed, in the estimated benchmark model the effect is strong enough to induce a transient fall in inflation." But pushing marginal costs down lowers current inflation relative to future inflation -- they're looking at the same Phillips curve just above. It looks to me like they're confusing current with expected future inflation. Intuition is hard.
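The forward-iteration step, for the simpler non-indexed Phillips curve \(\pi_t = \beta E_t\pi_{t+1} + \kappa x_t\) from earlier in the post, can be checked numerically under perfect foresight. A minimal sketch; the AR(1) output-gap path is an illustrative assumption:

```python
# Check that the forward-looking Phillips curve pi_t = beta*pi_{t+1} + kappa*x_t
# (perfect foresight) implies the discounted sum pi_t = kappa * sum_j beta**j * x_{t+j}.
beta, kappa, rho = 0.95, 0.25, 0.8
T = 400
x = [rho ** t for t in range(T)]   # assumed AR(1) output-gap path, x_0 = 1

# Solve the recursion backward from an (approximately) zero terminal value:
pi = [0.0] * (T + 1)
for t in range(T - 1, -1, -1):
    pi[t] = beta * pi[t + 1] + kappa * x[t]

# Compare with the discounted-sum formula at t = 0:
pi0_sum = kappa * sum(beta ** j * x[j] for j in range(T))
assert abs(pi[0] - pi0_sum) < 1e-9
# For this AR(1) path the sum has a closed form, kappa/(1 - beta*rho):
assert abs(pi0_sum - kappa / (1 - beta * rho)) < 1e-6
```

The point of the check: the level of inflation inherits any jump in the expected path of output gaps, just as, in the indexed version above, the change in inflation inherits a jump in expected marginal costs.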
There are plenty of Fisherian forces in this model that want lower interest rates to lower inflation. More deeply, we see here a foundational trouble of the Phillips curve. It was originally a statistical relation between wage inflation and unemployment. It became a (weaker) statistical relation between price inflation and unemployment or the output gap. The new-Keynesian theory naturally wants to describe a relation between marginal costs and price changes, and it takes contortions to make output equal to marginal cost. Phillips curves fit the data terribly. So authors estimating Phillips curves (an early favorite is by Tim Cogley and Argia Sbordone) go back and separate marginal cost from output or employment. As CET write later, they "build features into the model which ensure that firms' marginal costs are nearly acyclical." That helps the fit, but it divorces the Phillips curve shifter variable from the business cycle! Standard doctrine says that for the Fed to lower inflation it must soften the economy and risk unemployment. Doves say don't do it; live with inflation to avoid that cost. Well, if the Phillips curve shifter is "acyclical," you have to throw all that out the window. This shift also points to the central conundrum of the Phillips curve. Here it describes the adjustment of prices to wages, or "costs" more generally. It fundamentally describes a relative price, not a price level. OK, but the phenomenon we want to explain is the common component, how all prices and wages tie together, or equivalently the decline in the value of the currency, stripped of relative price movements. The central puzzle of macroeconomics is why the common component, a rise or fall of all prices and wages together, has anything to do with output, and, for us, how it is controlled by the Fed. Christiano, Eichenbaum and Evans write (p. 3) that "it is crucial to allow for variable capital utilization." I'll try to explain why in my own words.
Without capital adjustment costs, any change in the real return leads to a big investment jump: \(r=f'(k)\) must jump, and that takes a lot of extra \(k\). We add adjustment costs to tamp down the investment response. But now when there is any shock, capital can't adjust enough and there is a big rate-of-return response. So we need something that acts like a big jump in the capital stock to tamp down \(r=f'(k)\) variability, but without a big investment jump. Variable capital utilization acts like the big investment jump without us seeing a big investment jump. And all this is going to be important for inflation too. Remember the Phillips curve; if output jumps then inflation jumps too. Sticky wages are crucial, and indeed CEE report that they can dispense with sticky prices. One reason is that otherwise profits are countercyclical: with sticky wages, in a boom prices go up faster than wages, so profits go up; with sticky prices and flexible wages you get the opposite, counterfactual, sign. It's interesting that the "textbook" model has not moved this way. Again, we don't rewrite textbooks often enough. Fixing prices and wages during the period of the shock by assuming price setters can't see the shock for a quarter has a direct effect: it stops any price or wage jumps during the quarter of the shock, as in my first graph. That's almost cheating. Note the VAR also has absolutely zero instantaneous inflation response. This too is by assumption. They "orthogonalize" the variables so that all the contemporaneous correlation between monetary policy shocks and inflation or output is considered part of the Fed's "rule," and none of it reflects within-quarter reaction of prices or quantities to the Fed's actions. Step back and admire. Given the project "find elaborations of the standard new-Keynesian model to match VAR impulse response functions," could you have come up with any of this? But back to our task. That's a lot of apparently necessary ingredients.
And reading here, or CEE's verbal intuition, the logic of this model is nothing like the standard simple intuition, which includes none of the necessary ingredients. Do we really need all of this to produce the basic pattern of monetary policy? As far as we know, we do. And hence, that pattern may not be as robust as it seems. For all of these ingredients are pretty, ... imaginative. Really, we are a long way from the Lucas/Prescott vision that macroeconomic models should be based on well-tried and measured microeconomic ingredients that are believably invariant to changes in the policy regime. CEE argue hard for the plausibility of these microeconomic specifications (see especially the later CET Journal of Economic Perspectives article), but they have to try so hard precisely because the standard literature doesn't have any of these ingredients. The "level" rather than "growth rate" foundations of consumption, investment, and pricing decisions pervade microeconomics. Microeconomists worry about labor monopsony, not labor monopoly; firms set wages, households don't. (Christiano, Eichenbaum and Trabandt (2016) get wage stickiness from a more realistic search-and-matching model. Curiously, the one-big-labor-union fiction is still the most common, though few private-sector workers are unionized.) Firms don't borrow the wage bill a quarter ahead of time. Very few prices and wages are indexed in the US. Like habits, perhaps these ingredients are simple stand-ins for something else, but at some point we need to know what that something else is. That is especially true if one wants to do optimal policy or welfare analysis. Just how much economics must we reinvent to match this one response function? How far are we really from the ad hoc ISLM equations that Sims (1980) destroyed? Sadly, subsequent literature doesn't help much (more below).
Subsequent literature has mostly added ingredients, including heterogeneous agents (big these days), borrowing constraints, additional financial frictions (especially after 2008), zero bound constraints, QE, learning, and complex expectations dynamics. (See CET 2018 JEP for a good verbal survey.) The rewards in our profession go to those who add a new ingredient. It's very hard to publish papers that strip a model down to its basics. Editors don't count that as "new research," but just "exposition," below the prestige of their journals. Yet boiling a model down to essentials is maybe more important in the end than adding more bells and whistles. This is about where we are. Despite the pretty response functions, I still score that we don't have a reliable, simple, economic model that produces the standard view of monetary policy.

Mankiw and Reis, sticky expectations

Mankiw and Reis (2002) expressed the challenge clearly over 20 years ago. In reference to the "standard" New-Keynesian Phillips curve \(\pi_t = \beta E_t \pi_{t+1} + \kappa x_t\) they write a beautiful and succinct paragraph:

"Ball [1994a] shows that the model yields the surprising result that announced, credible disinflations cause booms rather than recessions. Fuhrer and Moore [1995] argue that it cannot explain why inflation is so persistent. Mankiw [2001] notes that it has trouble explaining why shocks to monetary policy have a delayed and gradual effect on inflation. These problems appear to arise from the same source: although the price level is sticky in this model, the inflation rate can change quickly. By contrast, empirical analyses of the inflation process (e.g., Gordon [1997]) typically give a large role to 'inflation inertia.'"

At the cost of repetition, I emphasize the last sentence because it is so overlooked. Sticky prices are not sticky inflation.
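The Ball result quoted above is easy to verify directly from the curve \(\pi_t = \beta E_t \pi_{t+1} + \kappa x_t\): under a perfectly credible, announced disinflation (perfect foresight, so expected inflation equals realized inflation), any falling inflation path requires a positive output gap. A minimal sketch, with illustrative numbers:

```python
# Illustrative parameters; any beta < 1, kappa > 0 gives the same sign.
beta, kappa = 0.99, 0.1

# Announced, credible disinflation: inflation glides from 5% to 0 over 12 quarters.
pi = [0.05 - 0.05 * t / 12 for t in range(13)] + [0.0] * 8

# Invert the Phillips curve pi_t = beta*pi_{t+1} + kappa*x_t for the output gap.
x = [(pi[t] - beta * pi[t + 1]) / kappa for t in range(len(pi) - 1)]
# While inflation is falling, x_t > 0: the disinflation is a boom, not a recession.
```

Since \(\pi_{t+1} \le \pi_t\) along the path, \(\pi_t - \beta\pi_{t+1} \ge (1-\beta)\pi_t > 0\) while inflation is positive, so the gap is positive every period of the disinflation.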
Ball already said this in 1994:

"Taylor (1979, 1980) and Blanchard (1983, 1986) show that staggering produces inertia in the price level: prices adjust slowly to a fall in the money supply. ...Disinflation, however, is a change in the growth rate of money, not a one-time shock to the level. In informal discussions, analysts often assume that the inertia result carries over from levels to growth rates -- that inflation adjusts slowly to a fall in money growth."

As I see it, Mankiw and Reis generalize the Lucas (1972) Phillips curve. For Lucas, roughly, output is related to unexpected inflation: \[\pi_t = E_{t-1}\pi_t + \kappa x_t.\] Firms don't see everyone else's prices in the period. Thus, when a firm sees an unexpected rise in prices, it doesn't know if it is a higher relative price or a higher general price level; the firm expands output based on how much it thinks the event might be a relative price increase. I love this model for many reasons, but one, which seems to have fallen by the wayside, is that it explicitly founds the Phillips curve in firms' confusion about relative prices vs. the price level, and thus faces up to the problem of why a rise in the price level should have any real effects. Mankiw and Reis basically suppose that firms find out the general price level with lags, so output depends on inflation relative to a distributed lag of its expectations. It's clearest for the price level (p. 1300): \[p_t = \lambda\sum_{j=0}^\infty (1-\lambda)^j E_{t-j}(p_t + \alpha x_t).\] The inflation expression is \[\pi_t = \frac{\alpha \lambda}{1-\lambda}x_t + \lambda \sum_{j=0}^\infty (1-\lambda)^j E_{t-1-j}(\pi_t + \alpha \Delta x_t).\] (Some of the complication is that you want it to be \(\pi_t = \lambda\sum_{j=0}^\infty (1-\lambda)^j E_{t-1-j}\pi_t + \kappa x_t\), but output doesn't enter that way.) This seems totally natural and sensible to me. What is a "period" anyway? It makes sense that firms learn heterogeneously whether a price increase is a relative or a price-level increase.
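The sticky-information curve's answer to the Ball and inertia puzzles shows up in a back-of-the-envelope experiment. Take a one-time, unanticipated, permanent rise in the money level \(m\), close the model with the quantity-theory demand \(m = p_t + x_t\) that Mankiw and Reis use, and note that \(t\) periods after the shock only the fraction \(w_t = 1-(1-\lambda)^{t+1}\) of firms have information sets that include it; the rest still plan on the old steady state. The price-level equation then solves by hand to \(p_t = w_t \alpha m /[1-(1-\alpha)w_t]\). A sketch with illustrative \(\lambda\) and \(\alpha\) (my derivation and numbers, not Mankiw and Reis's):

```python
import numpy as np

lam, alpha, m, T = 0.25, 0.1, 1.0, 40  # illustrative values

# Fraction of firms whose information sets include the date-0 money shock
w = 1 - (1 - lam) ** (np.arange(T) + 1)

# Price level: p_t = w_t*(p_t + alpha*x_t) with x_t = m - p_t, solved for p_t
p = w * alpha * m / (1 - (1 - alpha) * w)
x = m - p                                 # output gap
pi = np.diff(np.concatenate(([0.0], p)))  # inflation
```

The price level creeps up toward \(m\), output is positive throughout, and inflation peaks with a delay rather than on impact: sticky information delivers sticky inflation, not just sticky prices.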
And it obviously solves the central persistence problem with the Lucas (1972) model, that it only produces a one-period output movement. Well, what's a period anyway? (Mankiw and Reis don't sell it this way, and actually don't cite Lucas at all. Curious.) It's not immediately obvious that this curve solves the Ball puzzle and the declining inflation puzzle, and indeed one must put it in a full model to do so. Mankiw and Reis (2002) mix it with \(m_t + v = p_t + x_t\) and make some stylized analysis, but don't show how to put the idea in models such as I started with or make a plot. Their less well known follow-on paper "Sticky Information in General Equilibrium" (2007) is much better for this purpose, because they do show you how to put the idea in an explicit new-Keynesian model, like the one I started with. They also add a Taylor rule, an interest rate rather than a money supply instrument, wage stickiness, and a few other ingredients. They show how to solve the model, overcoming the problem that there are many lagged expectations as state variables. But here is the response to the monetary policy shock:

[Figure: Response to a Monetary Policy Shock, Mankiw and Reis (2007).]

Sadly, they don't report how interest rates respond to the shock. I presume interest rates went down temporarily. Look: the inflation and output gap plots are about the same. Except for the slight delay going up, these are exactly the responses of the standard NK model. When output is high, inflation is high and declining. The whole point was to produce a model in which a high output level would correspond to rising inflation. Relative to the first graph, the main improvement is just a slight hump shape in both inflation and output responses. Describing the same model in "Pervasive Stickiness" (2006), Mankiw and Reis describe the desideratum well:

The Acceleration Phenomenon. ...inflation tends to rise when the economy is booming and falls when economic activity is depressed.
This is the central insight of the empirical literature on the Phillips curve. One simple way to illustrate this fact is to correlate the change in inflation, \(\pi_{t+2}-\pi_{t-2}\), with [the level of] output, \(y_t\), detrended with the HP filter. In U.S. quarterly data from 1954-Q3 to 2005-Q3, the correlation is 0.47. That is, the change in inflation is procyclical.

Now look again at the graph. As far as I can see, it's not there. Is this version of sticky inflation a bust, for this purpose? I still think it's a neat idea worth more exploration. But I thought so 20 years ago too. Mankiw and Reis have a lot of citations, but nobody followed them. Why not? I suspect it's part of a general pattern: lots of great micro sticky-price papers are not used because they don't produce an easy aggregate Phillips curve. If you want cites, make sure people can plug it in to Dynare. Mankiw and Reis' curve is pretty simple, but you still have to keep all past expectations around as a state variable. There may be alternative ways of doing that with modern computational technology, putting it in a Markov environment or cutting off the lags (say, everyone learns the price level after 5 years). HANK models have even bigger state spaces!

Some more models

What about within the Fed? Chung, Kiley, and Laforte (2010), "Documentation of the Estimated, Dynamic, Optimization-based (EDO) Model of the U.S. Economy: 2010 Version," is one such model. (Thanks to Ben Moll, in a lecture slide titled "Effects of interest rate hike in U.S. Fed's own New Keynesian model.") They describe it as:

"This paper provides documentation for a large-scale estimated DSGE model of the U.S. economy – the Federal Reserve Board's Estimated, Dynamic, Optimization-based (FRB/EDO) model project. The model can be used to address a wide range of practical policy questions on a routine basis."

Here are the central plots for our purpose: [Figure: The response of interest rates and inflation to a monetary policy shock.]
No long and variable lags here. Just as in the simple model, inflation jumps down on the day of the shock and then reverts. As with Mankiw and Reis, there is a tiny hump shape, but that's it. This is nothing like the Romer and Romer plot. Smets and Wouters (2007), "Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach," is about as famous as Christiano Eichenbaum and Evans as a standard new-Keynesian model that supposedly matches data well. It "contains many shocks and frictions. It features sticky nominal price and wage settings that allow for backward inflation indexation, habit formation in consumption, and investment adjustment costs that create hump-shaped responses... and variable capital utilization and fixed costs in production."

Here is their central graph of the response to a monetary policy shock.

Again, there is a little hump shape, but the overall picture is just like the one we started with. Inflation mostly jumps down immediately and then recovers; the interest rate shock leads to future inflation that is higher, not lower, than current inflation. There are no lags from higher interest rates to future inflation declines. The major difference, I think, is that Smets and Wouters do not impose the restriction that inflation cannot jump immediately, on either their theory or their empirical work, while Christiano, Eichenbaum and Evans impose that restriction in both places. This is important. In a new-Keynesian model some combination of state variables must jump on the day of the shock, as the model is only saddle-path stable. If inflation can't move right away, that means something else does. Therefore, I think, CEE also preclude inflation jumping the next period. Comparing otherwise similar ingredients, it looks like this is the key ingredient for producing Romer-Romer-like responses consistent with the belief in sticky inflation. But perhaps the original model and Smets-Wouters are right!
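For reference, the impact jump in the original model takes only a few lines to reproduce. Here is a minimal sketch of the textbook three-equation model with a Taylor rule and an AR(1) policy shock (illustrative parameters, and the usual minimum-state-variable guess, not any estimated model):

```python
import numpy as np

# Textbook three-equation NK model:
#   x_t  = E_t x_{t+1} - sigma*(i_t - E_t pi_{t+1})
#   pi_t = beta*E_t pi_{t+1} + kappa*x_t
#   i_t  = phi*pi_t + u_t,  u_t = rho*u_{t-1} + eps_t
# Guess pi_t = a*u_t, x_t = b*u_t; substituting and matching coefficients gives:
beta, kappa, sigma, phi, rho = 0.99, 0.1, 1.0, 1.5, 0.7  # illustrative values
a = -sigma / ((1 - beta * rho) * (1 - rho) / kappa + sigma * (phi - rho))

# Inflation response to a unit monetary policy shock at t=0
pi = a * rho ** np.arange(20)
```

Inflation jumps down the most on the day of the shock and then mean-reverts monotonically: no hump, no long and variable lags. The elaborate ingredients above are all attempts to get away from this shape.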
I do not know what happens if you remove the CEE orthogonalization restriction and allow inflation to jump on the day of the shock in the data. That would rescue the new-Keynesian model, but it would destroy the belief in sticky inflation and long and variable lags.

Closing thoughts

I'll reiterate the main point. As far as I can tell, there is no simple economic model that produces the standard belief. Now, maybe the belief is right and models just have to catch up. It is interesting that there is so little effort going on to do this. As above, the vast outpouring of new-Keynesian modeling has been to add even more ingredients. In part, again, that's the natural pressure of journal publication. But I think it's also an honest feeling that after Christiano Eichenbaum and Evans, this is a solved problem and adding other ingredients is all there is to do. So part of the point of this post (and "Expectations and the neutrality of interest rates") is to argue that this is not a solved problem, and that removing ingredients to find the simplest economic model that can produce standard beliefs is a really important task. Then, does the model incorporate anything at all of the standard intuition, or is it based on some different mechanism altogether? These are first-order important and unresolved questions! But for my lay readers, here is, as far as I know, where we are. If you, like the Fed, hold to standard beliefs that higher interest rates lower future output and inflation with long and variable lags, know that there is no simple economic theory behind that belief, and certainly the standard story is not how economic models of the last four decades work.

Update:

I repeat a response to a comment below, because it is so important. I probably wasn't clear enough that the "problem" of high output with inflation falling rather than rising is a problem of models vs. traditional beliefs, rather than of models vs. facts.
The point of the sequence of posts, really, is that the traditional beliefs are likely wrong. Inflation does not fall, following interest rate increases, with dependable, long, and perhaps variable lags. That belief is strong, but neither facts, empirical evidence, nor theory supports it. ("Variable" is a great way to scrounge data to make it fit priors.) Indeed, many successful disinflations, like the ends of hyperinflations, feature a sigh of relief and an output surge on the real side.
A Network Analysis of Museums on Instagram -- Towards a Hybrid Minimax Recommender for Free-Roaming Museum Visits -- Developing a Multi-Channel Customer Relationship Management Strategy for Hotel Operation -- The Importance Attached to Culinary Blogs in the Promotion of Food Products with Health and Nutrition Claims-a Perspective of Polish Processors and Distributors -- Quality Service in the Hospitality Industry: Achieving Effective Service Processes and Designs -- Hospitality, Tourism, and Events Industry Competency Model: Human Resource Management Implications -- Temporary Museum Exhibitions as Tools for Cultural Innovation -- The Relationship between Service Quality and Customer Satisfaction on Luxurious Hotels so to Produce Error-Free Service -- The Use of Customer Relations Management's Digital Technologies from Greek Hotels -- The Sharing Economy and how it Affects the Conditions of Consumption and Competition in the Tourism Industry - The case of Airbnb in Greece -- Tourism and Destination Branding: the Case of Greek Islands -- Greek Culinary Tourism is Lost in Translation -- The Crucial Role of the Museums in Allying Alternative Forms of Diplomacy -- Digital Inbound Marketing: Particularities of Business-to-Business and Business-to-Customer Strategies -- Open-access Resource for the Management and Promotion of Greek Museums with Folk Exhibits -- Small Scale Event Product Analysis Based on Conjoint Analysis Results: The Case of Yedi Kule Conquest Monuments Run -- Art or Society with "εmotions": Manifestations of εmotions from Ancient Greece to Today -- Analysis of the Relationship between Transformational Leadership & Learning Organization in Health Care Services -- Ways to Find Employment and Preferences in Relation to a New Job for University Graduates of the Pilsen Region in the Czech Republic -- Entrepreneurial Intention Determinants Among Female Students. 
The Influence of Role Models, Parent Occupation and Perceived Behavioral Control on Forming the Desire to Become a Business Owner -- Satisfaction of Internal Customers in a Public University Hospital -- Facebook Content Strategies: A Case Study of a Subsidiary Company in Greece -- Examining Gaps in Business and Logistics Skills and their Performance Implications in the Agrifood Supply Chain in Greece -- Understanding Aviation Consumers' Experience at the Time of Their Journeys: the Use of the Denied Boarding Regulations (DBRs) at Chios Airport -- ICT-based Participatory Approaches for the Exploitation and Re-use of Intangible Cultural Heritage -- Business Students in Mexico Developing Work Skills with Work Integrated Learning -- Investigating the Adoption of Integrated Marketing Communication Strategies at Institutional Cultural Festivals in Greece. The Case Study of Aeschylia in Elefsina -- Using the Theory of Planned Behavior versus the Big Five Personality Trait Model in Predicting Entrepreneurial Intention. 
A Comparison Study of the Two Models -- Consumers' Attitudes Towards Advertisement in YouTube -- An Innovative Platform for Creating Audience-Specific Gamified Cultural Tourism Guides where Art, Tradition and Culture, Technology and Business Converge -- The Historic and Monumental Enhancement as a Tool for the Sustainable Development of Cultural Assets of Greece -- Internet Services and Social Networks in the Greek Academia -- The Effect of Asymmetrical Image Projections on Online Destination Branding -- The Aegina Pistachio as a Cultural Resource for the Development and Promotion of the Island -- The Cultural, Nutritional and the Socio-Economic Value of Messinian Olive Oil from Greece -- Enhancing Practice in Continuity with Innovative Thinking: The Primary Task of Higher Education -- Engaging New Museum Audience through Art Workshops: The Case of Adult Art at Macedonian Museum of Contemporary Art -- The Role of Social Relations in Fostering Entrepreneurship and Creative Use of SME Resources -- Traditional and Modern Forms of Synergies between Cultural and Tourism Sectors: Challenges and Opportunities for Greek Folk Festivities Promoting Traditional Products -- Health Management Issues from Related Quality of Life between Greek and Refugees Postpartum Women in Greece -- The Use of ICT in the Promotion of Arts and Cultural Heritage Education in Museums -- The Role of Destination Brand Image and Revisit Intention on Tourist Spending -- Creativity and Sustainable Development: A Proposal to Transform a Small Greek Island into a Creative Town -- Understanding the Challenges of the Healthcare System in Promoting Best Use of Digital Marketing Tools for a Purposeful Living -- Combining Traditional and Digital Tools in Developing an International Guerilla Marketing Strategy: The Case of a SME Greek Company -- The End of Marketing as You Know It. Or Not! 
-- Categorization of Ambient Media Based on Identification of Common Signs of Selected Installations -- The Dynamic Synergy in Communication and Tourism Development is Calling the City of Katerini, Greece -- Collaborative Problem Solving as a Critical Transversal Skill for the Transition from the School Environment to the Workplace -- Seeking for a Better Fitting: Understanding the Tour Operators for Cruises on Sao Miguel Island -- Impact of the Economic Crisis on Health Indicators in Greece and the Need for Primary Health Care Services -- Voters' Behavior: what Drives Political Consumption in Cyprus? An Analysis of the Behavior of Young Voters -- An Examination of Consumers' Attitudes Towards Genetically Modified-Food: The Greek Case -- App Store Optimization Factors for Effective Mobile App Ranking -- Websites and Facebook Profile Communication of Dentists in Oradea, Romania -- Institutional and Organizational Efforts to Establish Primary Health Care in Greece -- Actors in a Museum: Towards an Innovative Theatre Stage -- Online Imagined Communities in the Airline Industry -- Advances and Distinctions in the Use of Biometric Methods Versus Traditional Methods for Studying the Customer Experience -- The Effect of Airport Branding to Air Traffic and Passenger Movement: An Overview -- Do Social Media Affect Museums' Brand Equity? 
An Exploratory Qualitative Study -- The Sociological Approach to Greek Tourism in the Period of Economic Crisis -- Exploring the Factors Affecting Consumer Acceptance of Proximity M-Payment Services -- OneAppy: An Interactive Platform Providing Novel Marketing Channels and Promoting Product and Services to the Tourism Industry -- European Works Councils in the Hotel Industry -- Investigating the Social Media potential to attract customers in the banking services industry using Fuzzy Delphi and Analytic Hierarchy Process -- Greek Consumers' Awareness and Perceptions for Green Hotels and Green Furniture -- Exploring the Power of Certification Systems in the Development of Hiking Tourism in Greece: The Case of Menalon Trail from the Demand-Side -- Tourism, Design and Disability -- Analysing Projected Destination Images Versus Visitor-Generated Visual Content in Brasov, Transylvania -- The Dawn of GDPR: Implications for the Digital Business Landscape -- Strategic Synergies and Co-Marketing – A case study of the Airline Sector -- Cyprus's Wine Market: Influencing Factors of Consumer Behaviour as Part of Destination Marketing -- The Evolution of Demographic Changes in Greek Society and Proposed Administrative Management Model -- Reasons for Using Daily Deal Sites -- Does Sharing Economy Shape Consumers' Attitudes? -- Spatial Principles as an active agent. Making Greece a Global Market for Sensitized Tourism -- Understanding Consumers' Attitudes and Behavioural Intentions Towards Corporate Social Responsibility in the Airline Industry: The Role of Perceived Value and Expectations -- Effects of the Blockchain Technology on Human Resources and Marketing. 
An Exploratory Study -- Indian Cultural Institutions on Instagram -- Island Tourism: Keep it Smart and Sustainable- an Exploratory Approach -- Senior Tourism in Europe: when There are "no boundaries" -- Follow the Trail of Culture through Time: The Use of Archeological Sites as Performance Space -- Tourism as a Process of Promoting and Developing Traditional Settlements -- Brand as a Strategic Asset - A Proposal for the Forthcoming Cultural Institution of Pafos -- The Concept of Analysis and Reconstruction of the Work Time System in a Tourism Enterprise -- A Proposed Online Platform for Ranking Place Brands Identity Characteristics of Official Tourism Websites -- Cyber Security: From Regulations and Policies to Practice -- The Migrant Every Country Wants to Have: Motion, Emotion, Imagination at the Joe Petrosino Museum in Padula, Italy -- The Dynamics of Small and Peripheral Airports in Tourism and Regional Development: the Case of GREECE -- Cultural Tourism, Destination Branding and Spatial Experience Design: Contemporary Hospitality Design in Cyprus -- Innovative Mobile Tourism Services: The case of 'Greek at the Hotel' -- Modeling the International Tourism Demand as a Complex Network: the Case of the Global Inbound Tourism Market -- Netourwork Framework - A New Era in Strategic Innovative Networking System for Tourism Enterprises -- Social Media Celebrities and Para-social Relationships: The Chinese Context -- Environment and Tourism: An Interactive and Creative Relation: the Cases of Vorres Museum and a Cultural Network of Cities -- Trust and Role in the Sharing Economy -- Online Personal Selling, Need for Touch and e-loyalty -- The Passage from the Materialistic Approach to the Integrated One in Cultural Heritage and Cultural Policies: Challenges and Perspectives -- Carrying Capacity Assessment for Tourism Development- A Strategic Approach -- YouTube Videos: a Destination Marketing Outlook -- Strategic Innovations in Tourism Enterprises through Blockchain 
Technology -- Understanding Tourist Diaspora: An Interdisciplinary Mixed Mode Design Documenting Greek Tourist Diaspora -- Adventur
Iran's massive April 13 attack was meant to settle scores for the blow that it suffered when, 12 days before, Israel assassinated two Islamic Revolutionary Guard Corps (IRGC) generals and five other officers, three of whom directed operations in Yemen, Iraq, and Lebanon. With some of its highest-ranking officers eliminated in one fell swoop on what was legally Iranian sovereign territory, the IRGC was determined to show on the home and regional fronts that it is still a potent and lethal fighting machine. The purpose of Iran's assault was not only to make Israel pay, but also to blot out the shame for what IRGC leaders experienced as a day that will live in infamy.

The task facing Iran was how to retaliate without inviting a wider war with Israel and especially the United States. The regime's effort to address this dilemma was not hidden from public view. Indeed, in the days leading up to its retaliation, the semi-official press was full of warnings not to fall into "Netanyahu's trap." But if Tehran's perception was that Israel's April 1 assault was designed to produce an overreaction by Iran, its retaliation has nevertheless created new realities and new risks. Iranian leaders insist that the attack has forged a "new equation" for deterrence and thus are hoping—or praying—that, in the words of a statement issued by Iran's U.N. delegation, "the matter can be deemed concluded." But whether Tehran can sustain this new reality and also avoid a wider conflagration will depend on at least two other key players: the United States and Israel.

The regional arena: Sustaining resistance before April 1

The highest-ranking Iranian officer killed on April 1 was Brigadier General Mohammad Reza Zahedi, an IRGC Al-Quds Force commander who played a major role in coordinating Iran's "resistance" strategy with armed groups in Lebanon, Syria, Yemen, the West Bank and Gaza.
To appreciate the symbolic and strategic significance of this loss for Iran, it should be noted that in 1998 Zahedi was appointed commander of the IRGC Quds Lebanon Force by Major General Qassem Soleimani, the architect of Iran's regional security, who was killed in January 2020 by a U.S. drone strike outside Baghdad airport. In the three years following his killing, the regime appointed a series of commanders in a determined bid to sustain its resistance strategy. Zahedi's role in this effort was pivotal, especially after he was named the Al-Quds Force Syria and Lebanon commander, succeeding Seyyed Razi Mousavi, who was assassinated on December 25, 2023, by an Israeli air strike. In short, Zahedi was the latest in a line of IRGC commanders whose deaths the regime failed to prevent. While he and the other IRGC officers who died next to him were celebrated as "martyrs," Israel's superior technological and military capacities made their sacrifice possible. In a long shadow war that has come into the full light of direct Israeli-Iranian conflict, Iranian leaders had good cause to worry that Israel has had the deadly upper hand. This concern was in fact amplified when, in the aftermath of Hamas's October 7 "Al-Aksa Storm" atrocities and Israel's ensuing military onslaught, Iranian leaders within or close to the ruling circles of the regime warned that, by failing to respond forcefully through its regional partners, Iran had greatly weakened its capacity to deter Israel. Nevertheless, the question was how to shore up what might be called a "no war, no peace" resistance strategy that still depended on maintaining this increasingly precarious formula. Seeking to sustain this balancing act, Esmail Qani, the chief of Iran's Al-Quds Forces, together with IRGC Commander in Chief Major General Hossein Salami, met with militia commanders from Yemen, Iraq, Lebanon and Syria in Tehran in February.
"At the end," one source reported, "all the participants agreed that Israel wanted to expand the war and falling in that trap should be avoided as it will justify the presence of more U.S. troops in the region." Not long after, Iran persuaded its allies in Iraq to cease attacks on U.S. forces. As for the Lebanon front, Iran pressured Hezbollah to modulate the escalating tit-for-tat strikes between it and Israel. In this way, Tehran tried to sustain the basic, if increasingly precarious, rules of the deterrence game with Israel.

Iran retaliates: Creating a new (and old) resistance equilibrium?

Apart from the huge strategic loss that it inflicted by directly hitting IRGC leaders on what was in effect Iranian sovereign territory, Israel's April 1 attack violated those very rules. Henceforth, the nature of the "trap" facing Iran was more direct and menacing, because it seemed that Israel was now determined to compel Iran to respond in ways that would open the door to a wider conflict, not only with Jerusalem but potentially with Washington as well. Thus, an adviser to President Ebrahim Raisi issued a statement warning "the U.S. leadership not to get dragged into Netanyahu's trap." But at the same time, commentaries in the semi-official press cautioned against a "suicidal reaction" and urged Iran's leaders to "behave wisely," given "the potential pitfalls of unrestrained aggression against Iran." These op-eds seemed to imply that Iran's retaliation would, or should, create a new balance of deterrence with Israel that protects Iran without inviting a large military conflict.

After April 13: New risks and dangers for Iran and Israel

It is still too early to tell whether Iran has succeeded in reestablishing a new sweet (or bitter) spot that will enable it to hurt its enemies while avoiding all-out war.
It could be that by targeting its 300 or so missiles and drones at Israeli military targets (an assault that, with the exception of one young Bedouin girl who was wounded, failed to produce a single civilian or military casualty), Iran has actually achieved its purpose. Paradoxically, Israel's success, with the support of the U.S. and other countries, in destroying the vast majority of these missiles may have helped to redefine a new, if perhaps even more dangerous, rulebook for Israel-Iran deterrence. Statements by IRGC leaders show that they are hoping this is true and that, with the operation complete, the matter has ended.

At the same time, however, Iran's retaliation has created new dynamics that should worry its leaders. The role that the U.S., together with France and Britain (and quite possibly several Arab states including Jordan), played in knocking out Iranian drones and missiles has created a new reality. Henceforth, Israel's deterrent capacity is inextricably linked to a military partnership with the U.S. and, in effect, some of its Western allies. This fact has grabbed the attention of Iran's leaders, who, prior to and after their assault, openly warned Washington to "stay out of it." Now that the U.S., and the Biden White House, is very much in it, Iran will have to carefully contemplate how to retool resistance.

Moreover, Tehran's attack has refocused global and regional attention away from Gaza and onto Iran. In the coming days and weeks, it is possible that the U.N. Security Council will convene not to push for a Gaza ceasefire, but rather to press for a new resolution focusing on the consequences of Iran's April 13 attack. This, of course, will be good news for the government of Prime Minister Benyamin Netanyahu, but will make Iran's leaders nervous. And yet the new reality created by Iran's attack poses hard choices for Israel. That its deterrent capacity is now tied to the U.S. could constrain Israel's leaders.
Indeed, Biden telegraphed this message in a discussion with Netanyahu in which he reportedly affirmed U.S. support while not only warning Israel not to retaliate in ways that would escalate the conflict but also asserting that Washington would not be a partner to any Israeli military retaliation on Iran. Nor will the administration allow the Israel-Iran conflict to deflect it from pushing for a Gaza ceasefire.

Biden distrusts and dislikes Netanyahu today as much as, or more than, ever, a sentiment that was probably reinforced by Israel's April 1 attack on the Iranian consulate in Damascus. For even if it was not designed to pull the U.S. into battle with Iran, the attack has created what Marxists might call an "objective" trap that the White House is struggling to avoid. Moving forward, the administration might redouble its quest for a diplomatic solution to the Gaza war (and Palestinian-Israeli conflict) even as it warily watches an Iranian regime that hopes that it has a new recipe for deterrence that will allow Tehran to have its resistance kabob and eat it too.
In many cases, the contents of the linked blogs and blog posts are not subject to editorial control.

Availability warning

Permanent availability is not guaranteed and lies entirely in the hands of the blog operators. Please make your own copy if you wish to cite a blog post.
On March 8, a Manhattan federal court found Juan Orlando Hernández, president of Honduras from 2014 to 2022, guilty of conspiracy to import large amounts of cocaine into the United States over nearly two decades. Mainstream U.S. media generally framed the ex-president's trial and conviction as a triumph of justice, a service rendered by the impartial U.S. justice system to the people of Honduras.

The great majority of such accounts, however, ignored and obscured context crucial for understanding Hernández's rise and rule; in particular, how Washington contributed to both. Though the mainstream narrative around the ex-president rightly connects his tenure in office with massive emigration from Honduras, it has elided the degree to which U.S. influence enabled Hernández's career and thus partially drove the migration that arose in response. For roughly two centuries, Honduras, the original "banana republic," has suffered a deeply unequal relationship with the far more powerful United States. One of the poorest countries in the Western Hemisphere, Honduras and its people have endured frequent American military interventions, U.S.-backed coups, and a corrupt, rapacious local oligarchy closely tied to U.S. corporate interests.

Despite Hernández's ultimate conviction on U.S. soil, he served Washington for many years as a loyal client. The single most important event in the ex-president's political career was a 2009 coup, which overthrew center-left president Manuel Zelaya (whose wife, Xiomara Castro, won election in 2021 and currently occupies the presidency). Zelaya raised the minimum wage, subsidized small farmers, and authorized the morning-after pill, infuriating the country's business elite and, in the last case, ultra-conservative religious leaders. Moreover, to Washington's consternation, he made overtures toward Hugo Chávez's socialist Venezuela and sought to convert a crucial U.S.
airbase entirely to civilian use. Joint action by Honduras' military and judiciary — in a manner the U.S. ambassador called "clearly illegal" and "totally illegitimate" at the time — forced Zelaya to pay for these sins in late June 2009. While the White House's reaction to the coup initially appeared confused, Washington soon recovered its footing. Even as huge protests raged, the Obama administration played a key role in ultimately compelling Honduras' people and the region's governments to acquiesce to the regime change as a fait accompli. Despite widespread repression by the post-coup de facto government, accounts of fraud, and the condemnation of many countries and international organizations (including the normally deferential Organization of American States), U.S.-endorsed elections in November 2009 received Washington's imprimatur. In her memoirs (the passage was excised from the book's paperback edition with no explanation), then-Secretary of State Hillary Clinton explained that the U.S. sought to "render the question of Zelaya moot and give the Honduran people a chance to choose their own future."

It was in this context that Hernández catapulted into power. After Porfirio Lobo won the 2009 presidential race, Hernández became President of the National Congress as a member of Lobo's National Party — an institution historically closely linked to U.S. agribusiness. Lobo was Hernández's mentor and groomed his protégé to succeed him. But while Hernández enjoyed success, the coup's consequences constituted disaster for ordinary Hondurans.

Political violence and repression became routine. The murder rate, driven largely by cartel-related gang violence, soared — it was the world's highest for three years running. As the economic situation also deteriorated, and Lobo and his son allied with major narcotics syndicates, a huge surge of emigration swelled out of Honduras, with desperate citizens flooding northward. The total number of Hondurans apprehended at the U.S.
border exploded — from less than 25,000 in 2009 to nearly 100,000 in 2014 — reaching 250,000 by 2020. In Washington's eyes, however, such concerns took a back seat to longstanding strategic needs: above all, Honduras' openness to foreign investment and its role as a base for American military power. And, as head of the National Congress, Hernández was seen as particularly amenable to U.S. desires. "The State Department loved Hernández," according to Dana Frank, an expert on Honduras at UC Santa Cruz. As Lobo's heir apparent, "he was young and could stay in power for a long time." Frank cites a 2010 cable from the U.S. embassy in Tegucigalpa asserting that "He has consistently supported U.S. interests."

The depth of American support for Hernández became clear after his 2013 election to the presidency. Despite credible reports of fraud, his National Party's control over the counting process, and a wave of threats and sometimes lethal violence against opposition candidates and activists during the campaign, the State Department commended the election as "transparent, free, and fair." In 2015, a major corruption scandal centered on the misappropriation of funds from Honduras' Social Security Institute exploded, prompting unprecedented popular demonstrations against Hernández and calls for his resignation. "There was a real sense that Hernández could fall," according to Alexander Main, a Latin America expert at the Washington-based Center for Economic and Policy Research. Fortunately for Hernández, however, the U.S. swooped in, helping to defuse the unrest by prodding the OAS to organize a local anti-corruption body known as MACCIH. In that same year, according to Frank, Washington gave an "official green light" to a "completely criminal" power grab by Hernández whereby his hand-picked Supreme Court ruled that he was eligible to run for a second term in clear violation of Honduras' constitution.
Washington's complacent reaction — "It is up to the Honduran people to determine their political future" — stood in remarkable contrast to 2009, when Zelaya's mere suggestion that the constitution might be amended to permit a second term served as the pretext for the coup that the U.S. subsequently legitimized. In Hernández's 2017 reelection bid, the fraud was so blatant and widespread that even the generally conservative OAS declared the incumbent's victory an example of "extreme statistical improbability" and called for new elections. The State Department, however, stood by Hernández, prodding Mexico and other OAS members to recognize the results, even as security forces suppressed massive and prolonged protests with live ammunition.

Indeed, U.S. training and funding also proved crucial in the creation of the brutal special operations units Hernández's government used to terrorize opposition and environmental activists. Particularly significant in the military sphere was the role of U.S. Southern Command (SOUTHCOM), the American combatant command responsible for Latin America. Hernández was a particular favorite of John Kelly, SOUTHCOM's head during Obama's second term (and later White House chief of staff for Donald Trump), who, as Dana Frank noted, once referred to the convicted drug trafficker as a "great guy" and "good friend."

Considering the U.S. relationship with Hernández, it is perhaps unsurprising that U.S. officials seemingly turned a blind eye to his deep involvement in narcotics trafficking. As both Hernández's recent trial — during which a witness claimed Hernández had privately vowed to "stuff drugs up the noses of the gringos" — and that of his brother in 2019 showed, the drug trade's reach into the Honduran government was unmistakable, with numerous high-ranking security officials repeatedly implicated. CEPR's Main argues that it was "highly unlikely American officials were unaware" of Hernández's criminality.
Indeed, as a document from his brother's trial revealed, the DEA began investigating the ex-president as early as 2013. As noted in Hernández's trial, just weeks after his inauguration in 2014, the agency reportedly obtained video evidence indicating his involvement with major drug traffickers. Even after his brother's 2019 conviction, when it became apparent that millions of dollars in drug money had helped underwrite Hernández's political career, President Donald Trump publicly praised him for "working with the United States very closely" and for his help in "stopping drugs at a level that has never happened."

Given all this, the U.S. media's failure to probe the influence of American policy on Hernández's career begins to look less like an anomalous oversight and more like a manifestation of structural dynamics that tend to reinforce the notion of American innocence. We can see the same logic at work in the frenzied media accounts detailing "caravans" of Central American migrants headed to the U.S. While mainstream news outlets rightly note the relationship between Hernández's presidency and increased migration from Honduras, they nevertheless fail to connect the two to the impact of U.S. policymaking. Without Washington's complicity and assistance, Hernández might have spent 2014 to 2022 in prison rather than in the presidency. Unfortunately, it was the Honduran people who paid the price.
After Hamas attacked Israel on October 7, the U.S. far right and supporters of Israel pointed to Iran as the main behind-the-scenes culprit, hoping that their message would spur a military attack on Iran. It is well known that the Islamic Republic has supported Hamas for decades, but Hamas is not a puppet of Iran. During the civil war in Syria, Hamas supported the armed opposition, angering both Iran's leadership and Syria's president, Bashar al-Assad. And in the current war, Hamas appears to be angry that Iran and its allies have not provided it with direct assistance or intervened on its behalf.

Tehran's leadership, as well as that of the Lebanese Hezbollah, was as surprised as anyone when the attacks took place, with Hassan Nasrallah, Hezbollah's leader, asserting in a speech that his group was not given advance notice about Hamas' plans. Iran's Supreme Leader, Ayatollah Ali Khamenei, likewise flatly denied that Iran had participated in planning or executing the attacks, or had advance notice. U.S. and Israeli officials also stated that there is no evidence that Iran participated directly in planning the attacks.

Khamenei has also stated that Iran will not enter the war on behalf of Hamas. In his recent meeting with Ismail Haniyeh, the chairman of Hamas' political bureau, Khamenei reportedly criticized Hamas for attacking Israel, calling it a strategic mistake that resulted in the redeployment of a large U.S. force to the Middle East and threw Washington's full support behind Israel.

The Israel-Palestinian conflict began 30 years before the Islamic Revolution in Iran and will continue indefinitely even if the Islamic Republic disappears tomorrow, so long as Palestinians are denied their own independent state. At the same time, the fact is that Iran's internal political dynamics are complex, and various political factions are not unified about Iran's policy toward the Middle East in general, and the Palestinians, Israel, and the current war in particular.
There are deep fissures within Iran when it comes to debating foreign policy, particularly Middle East policy.

To begin with, all Iranian political factions agree on: (1) forcing the U.S. military to leave the Middle East; (2) raising the costs of the "maximum pressure" policy that began with the Trump administration and continued under the Biden administration; (3) the importance of having a strong deterrent against possible military attacks by the U.S. and/or Israel; and (4) supporting the rights of the Palestinian people. But there is no agreement on how to put such policies into effect.

The hardliners believe that to punish the U.S. for its "maximum pressure" policy and force its military to leave the Middle East, the best approach is to forge alliances with China, Russia, and other nations that oppose U.S. interventions around the world, and to create problems for the U.S. in the region. Moderates and pragmatists, on the other hand, advocate close relations with Iran's neighbors and the Arab nations of the Persian Gulf, as well as with Europe, to reduce tensions. The hardliners believe that the most effective deterrent is arming the country and its proxies with advanced weapons, whereas moderates, while supporting arming the nation, also believe that regaining the confidence of the Iranian people by opening up political space, holding free elections, and undertaking deep and irreversible reforms would be the most effective deterrent. As former Foreign Minister Mohammad Javad Zarif put it recently, "what has preserved Iran is not its weapons, but its people."

While the hardliners equate supporting the Palestinian people with arming them, moderates and pragmatists believe that Iran should limit its assistance to diplomatic support and humanitarian aid.
What these factions do not agree on are: (1) a foreign policy based on ideology, favored by the hardliners, as opposed to one generally preferred by moderates, reformists, and pragmatic conservatives that gives highest priority to Iran's true national, rather than ideological, interests; (2) rapprochement with the United States, which is rejected by the hardliners but supported by all other factions; and (3) how to punish Israel for its campaign of assassinations and sabotage in Iran and its support for Iran's small separatist groups. The hardliners view arming Iran's proxies as the "best" option because it forces Israel to spend its resources on its own borders, whereas all other factions believe that diplomacy is the best possible approach.

In the current war between Hamas and Israel, all factions have condemned Israel's attacks on civilians in Gaza, with the moderates having also condemned the October 7 attacks on Israeli civilians. But the similarities end there.

At the beginning of the war, some of the hardliners declared that Iran should join the fighting. But this was hollow posturing whose purpose was to outmaneuver competitors within their own faction. Foreign Minister Hossein Amir-Abdollahian, who has always been close to the Islamic Revolutionary Guard Corps, even warned that Iran's proxies have "their fingers on the trigger" and may enter the war. While there have indeed been skirmishes between Hezbollah and Israeli forces, they appear to be carefully calibrated, and are viewed in Tehran as a way of lessening the pressure on Hamas, not a prelude to a full-blown war, unless, of course, the carnage in Gaza escalates to much worse levels.

Thus, Amir-Abdollahian's declaration should be viewed as an attempt to elevate himself within the hardline camp, since he has been an utterly ineffective foreign minister whom even Khamenei does not seem to trust completely.
And while President Ebrahim Raisi has adopted a hard line regarding the war in Gaza, his stance is best seen as an attempt to distract attention from his administration's failure to improve the economy and reduce inflation. Similarly, the IRGC's Quds Force commander, Brigadier General Esmail Qaani, declared, "We will do anything required in this historic battle." This, however, is only bluster, as Qaani is trying to use the war to elevate himself to the level of his predecessor, Major General Qasem Soleimani [promoted posthumously to lieutenant general], who played a key role in organizing Iran's proxies in the region and was assassinated by the United States in January 2020. As noted above, Khamenei, Qaani's boss, has already ruled out Iran entering the war.

Qaani and the IRGC are simply trying to use the war to regain full control of Iran's Middle East policy and suppress voices of dissent protesting their hardline posturing. But even within the IRGC, there are voices of reason that oppose Iran's entry into a war with the U.S. and Israel. Brigadier General Amir Ali Hajizadeh, commander of the IRGC's aerospace force, which oversees its missile program, recently said that after the Trump administration assassinated General Soleimani, Iran did not attack all the U.S. military bases in the Middle East because "ten, fifteen thousand civilians would have been killed, and the country's development would have been set back by 20 years."

At the same time, moderates and pragmatists have called for restraint, fearing a wider war in the Middle East that could engulf Iran. As Zarif put it a few days ago, "Supporting the Palestinian people does not imply that we should fight for them. The best defense of the Palestinian people is [creating the conditions] to prevent Israel from calling them [Iran's] proxy. The [Iranian] people are tired of paying the price [for arming the Palestinians]." Former President Mohammad Khatami has also spoken out in favor of restraint.
"The era of occupying other people's lands has ended," he said recently, stressing that Tehran should rely more on diplomatic initiatives based on Iran's national interests and that its leaders should avoid taking positions based on factional politics.

It thus appears that the most important political factions in Iran reject war with the U.S. or Israel and favor a policy of restraint in the current war, however much this may disappoint Iran hawks in the United States. But so long as Palestinians are denied their aspirations for an independent state, Iran's hardliners and other non-state actors, including radical Islamists like Hamas, will seek to take political advantage of their plight. The most effective way to neutralize Iran's hawks — and thus reduce a chronic contributor to regional instability and tension — is for the U.S., the West, the Arab world, and Israel itself to finally grasp the nettle and work seriously to help Palestinians realize their goal as expeditiously as possible.
Crypto startups and venture capitalists are not the only ones pivoting to artificial intelligence (AI). Recently, SEC Chair Gary Gensler delivered remarks to the National Press Club outlining his concerns about AI's role in the future of finance. In those high-level remarks, Gensler shared his anxiety that AI could threaten macro-level financial stability, positing that "AI may heighten financial fragility as it could promote herding with individual actors making similar decisions because they are getting the same signal from a base model or data aggregator." This fear largely rests on a pair of debatable assumptions: one, that the market for AI models will be highly concentrated, and two, that this will cause financial groupthink. There are important reasons to doubt both premises. Before the SEC, or any regulator, puts forward an AI policy agenda, the assumptions on which it rests must be closely scrutinized and validated.

Assumption 1: Foundation Model Market Concentration

Chair Gensler's assessment assumes that the market for AI foundation models will be highly concentrated. Foundation models, like OpenAI's GPT-4 or Meta's Llama 2, are pre-trained on reams of data to establish predictive capabilities and can serve as bases for "downstream" applications that further refine the models to better perform specific tasks. Because upstream foundation models are data-intensive and have the potential to leverage downstream data for their own benefit, Gensler is concerned that one or a few model providers will be able to corner the market. It's understandable that one might assume this, but there are plenty of reasons to doubt the assumption. The best arguments for the market concentration assumption are that natural barriers to entry, economies of scale, and network effects will produce a small number of clear market leaders in foundation models.
For instance, pre-training can require a lot of data, computing power, and money, potentially advantaging a small number of well-resourced players. In addition, network effects (i.e., platforms with more users are more valuable to those users) could further entrench incumbents, either because big-tech leaders already have access to more training data from their user networks, or because the model providers attracting the most users will come to access more data to further improve their models, or some combination of both.

But the assumption that the market for foundation models inevitably will be concentrated is readily vulnerable to counterarguments. For one, the recent AI surge has punctured theories about the perpetual dearth of tech platform competition. With the launch of ChatGPT, OpenAI — a company with fewer than 400 full-time employees earlier this year — became a household name and provoked typically best-in-class firms to scramble in response. And while it's true that OpenAI has made strategic partnerships with Microsoft, OpenAI's rise undermined the conventional wisdom that the same five technology incumbents would enjoy unalloyed dominance everywhere forever. The emergence of additional players, like Anthropic, Inflection, and Stability AI, to name just a few, provides further reason to question the idea of a competition-free future for AI models.

In addition, the availability of high-quality foundation models with open-source (or other relatively permissive) licenses runs counter to the assumed future of monopoly control. Open-source licenses typically grant others the right to use, copy, and modify software for their own purposes (commercial or otherwise) free of charge. The AI tool builder Hugging Face currently lists tens of thousands of open-source models.
And other major players are providing their own models with open-source licenses (e.g., Stability AI's new language model) or relatively permissive "source available" licenses (e.g., Meta's latest Llama 2). Open-source model availability could have a material impact on competitive dynamics. A reportedly leaked document from Google put it starkly:

"[T]he uncomfortable truth is, we aren't positioned to win this arms race and neither is OpenAI. While we've been squabbling, a third faction has been quietly eating our lunch. I'm talking, of course, about open source."
Lastly, Gensler's vision of a concentrated foundation model market itself rests in large part on the assumption that model providers will continuously improve their models with the data provided to them by downstream third-party applications. But this too should not be taken as a given. Such arrangements are a possible feature of a model provider's terms but not an unavoidable one. For example, OpenAI's current data usage policies for those accessing its models through an application programming interface (API), as opposed to OpenAI's own applications (like ChatGPT), limit (as of March 2023) OpenAI's use of downstream data to improve its models:

"By default, OpenAI will not use API data to train OpenAI models or improve OpenAI's service offering. Data submitted by the user for fine-tuning will only be used to fine-tune the customer's model."
Indeed, providers of base models may not always benefit from downstream data, as fine-tuning a model for better performance in one domain could risk undermining performance in others (a dramatic form of this phenomenon is known as "catastrophic forgetting"). Again, this is not to say that foundation model market concentration is impossible. The point is simply that there also are plenty of reasons the concentrated market Gensler envisions may not come to pass. Indeed, a source Gensler cited put it well: "It is too early to tell if the supply of base AI models will be highly competitive or concentrated by only a few big players." Any SEC regulatory intervention premised on the idea of a non-competitive foundation model market would similarly be too early.

Assumption 2: Foundation Model Market Concentration Will Cause Risky Capital Market Participant Groupthink

The second assumption underpinning Gensler's financial fragility fear is that a limited number of model providers will lead to dangerous uniformity in the behavior of market participants using those models. As Gensler put it, "This could encourage monocultures." Even if one accepts for argument's sake a future of foundation model market concentration, there are reasons to doubt the added assumption that this will encourage monocultures or herd behavior among financial market participants. While foundation models can be used as generic tools out of the box, they also can be further customized to users' unique needs and expertise. Fine-tuning — further training a model on a smaller subset of domain-specific data to improve performance in that area — can allow users to tailor base models to firm-specific knowledge and maintain a degree of differentiation from their competitors. This complicates the groupthink assumption. Indeed, Morgan Stanley has leveraged OpenAI's GPT-4 to synthesize the wealth manager's own institutional knowledge.
Taking a step back, is it more likely that financial firms with coveted caches of proprietary data and know-how will forfeit their competitive advantages, or that they will look to capitalize on them with new tools? Beyond training and fine-tuning models around firm-specific data, firms also can maintain their edge simply by prompting models in a manner consistent with their unique approaches. In addition, firms almost certainly will continue to interpret results based on their specific strategies, cultures, and philosophies. Lastly, because there are profits to be made from identifying mispriced assets, firms would be incentivized to spot others' inefficient herding behavior and diverge from the "monoculture"; they may even devise ways to leverage models for this purpose. At the very least, as with model market concentration, more time and research are needed before the impact of the latest generation of AI on financial market participant herding behavior can be assessed with enough confidence to provide a sound basis for regulatory intervention.

Conclusion

Emerging technologies can, of course, be disruptive. But before regulators assume novel technologies present novel risks, they should test and validate their assumptions. Otherwise, one can reasonably doubt regulators when they proclaim themselves "technology neutral." As SEC Commissioner Hester Peirce noted last week regarding the SEC's proposed rules tackling a separate AI-related concern — conflict-of-interest risks from broker-dealers' and investment advisers' use of "predictive data analytics" — singling out a specific technology for "uniquely onerous review" is tantamount to "regulatory hazing." Another word of caution is warranted: even where regulators do perceive bona fide evidence of enhanced risks, they should be wary of counterproductive interventions. To name just one example, heightened regulatory barriers to entry could worsen the very concentration in the market for AI models that Gensler fears.
This research was supported by the European Commission's Horizon 2020 Framework Programme (ACTRIS2 (grant agreement no. 654109)), the University of Helsinki, the Finnish Meteorological Institute, the Department of Science and Innovation of South Africa, the Academy of Finland Centre of Excellence programme (project no. 272041), the Academy of Finland project Greenhouse gas, aerosol and albedo variations in the changing Arctic (project no. 269095), the Novel Assessment of Black Carbon in the Eurasian Arctic: From Historical Concentrations and Sources to Future Climate Impacts (NABCEA, project no. 296302), the Korea Meteorological Administration Research and Development Program "Development of Monitoring and Analysis Techniques for Atmospheric Composition in Korea" (grant no. KMA2018-00522), the National Research Foundation of Korea (grant no. 2017R1D1A1B06032548), the Korea Meteorological Administration Research and Development Program (grant no. KMI2018-01111), the Taiwan Environmental Protection Administration, the China Meteorological Administration, the National Scientific Foundation of China (41675129, 41875147), the National Key R&D Program of the Ministry of Science and Technology of the People's Republic of China (grant no. 2016YFC0203305 and 2018YFC0213204), the Chinese Academy of Meteorological Sciences (2020KJ001), the Innovation Team for Haze-fog Observation and Forecasts of MOST and CMA, CNRS-INSU, the French Ministry for Research under the ACTRIS-FR national research infrastructure, the French Ministry of the Environment, MeteoSwiss (GAW-CH aerosol monitoring programme), the Swiss State Secretariat for Education, Research and Innovation (SERI), the Ministry of Education, Youth and Sports of CR within National Sustainability Program I (NPU I, grant no. LO1415), ERDF "ACTRIS-CZ RI" (grant no.
CZ.02.1.01/0.0/0.0/16_013/0001315), CRISOL (CGL2017-85344-R MINECO/AEI/FEDER, UE), TIGAS-CM (Madrid Regional Government Y2018/EMT-5177), AIRTECCM (Madrid Regional Government P2018/EMT4329), REDMAAS2020 (RED2018-102594-T CIENCIA), Red de Excelencia ACTRIS-ESPAÑA (CGL2017-90884-REDT), the Spanish Ministry of Economy, Industry and Competitiveness, FEDER funds (project HOUSE, grant no. CGL2016-78594-R), the Generalitat de Catalunya (AGAUR 2017 SGR41 and the DGQA), the National Institute for Aerospace Technology, the Ministerio Español de Economía, Industria y Competitividad (MINECO), the Spanish Ministry of Economy and Competitiveness (projects no. CGL2016-81092-R, CGL2017-90884-REDT, RTI2018-097864-BI00 and PGC2018-098770-B-I00), the Andalusia Regional Government (project no. P18-RT-3820), the PANhellenic infrastructure for Atmospheric Composition and climate change (MIS 5021516), Research and Innovation Infrastructure, Competitiveness, Entrepreneurship and Innovation (grant no. NSRF 2014-2020), the Italian Ministry of Research and Education, the Norwegian Environment Agency, Swedish FORMAS, the Swedish Research Council (VR), the Magnus Bergvall foundation, the Marta och Erik Holmberg foundation, and the Swedish EPA.

Aerosol particles are a complex component of the atmospheric system which influence climate directly by interacting with solar radiation, and indirectly by contributing to cloud formation. The variety of their sources, as well as the multiple transformations they may undergo during their transport (including wet and dry deposition), result in significant spatial and temporal variability of their properties. Documenting this variability is essential to provide a proper representation of aerosols and cloud condensation nuclei (CCN) in climate models.
Using measurements conducted in 2016 or 2017 at 62 ground-based stations around the world, this study provides the most up-to-date picture of the spatial distribution of particle number concentration (Ntot) and particle number size distribution (PNSD, from 39 sites). A sensitivity study was first performed to assess the impact of data availability on the annual and seasonal statistics of Ntot, as well as on the analysis of its diel cycle. Coverage thresholds of 50% and 60% were set at the seasonal and annual scales, respectively, for the study of the corresponding statistics, and a slightly higher coverage (75%) was required to document the diel cycle. Although some observations are common to a majority of sites, the variety of environments characterizing these stations made it possible to highlight contrasting findings, which, among other factors, appear to be significantly related to the level of anthropogenic influence. The concentrations measured at polar sites are the lowest (~10² cm⁻³) and show a clear seasonality, which is also visible in the shape of the PNSD, while diel cycles are generally less evident, notably because of the absence of a regular day-night cycle in some seasons. In contrast, the concentrations characteristic of urban environments are the highest (~10³-10⁴ cm⁻³) and do not show pronounced seasonal variations, whereas diel cycles tend to be very regular over the year at these stations. The remaining sites, including mountain and non-urban continental and coastal stations, do not exhibit as obvious a common behaviour as polar and urban sites and display, on average, intermediate Ntot (~10²-10³ cm⁻³). Particle concentrations measured at mountain sites, however, are generally lower than at nearby lowland sites, and tend to exhibit somewhat more pronounced seasonal variations, a likely result of the strong influence of the atmospheric boundary layer (ABL) in connection with the topography of the sites. 
ABL dynamics also likely contribute to the diel cycle of Ntot observed at these stations. Based on available PNSD measurements, CCN-sized particles (considered here as either >50 nm or >100 nm) can represent from a few percent to almost all of Ntot, corresponding to seasonal medians on the order of ~10 to 1000 cm⁻³, with seasonal patterns and a hierarchy of the site types broadly similar to those observed for Ntot. Overall, this work illustrates the importance of in situ measurements, in particular for the study of aerosol physical properties, and thus strongly supports the development of a broad global network of near-surface observatories to increase and homogenize the spatial coverage of the measurements, as well as to guarantee data availability and quality. The results of this study also provide valuable, freely available and easy-to-use support for model comparison and validation, with the ultimate goal of contributing to improving the representation of aerosol-cloud interactions in models and, therefore, the evaluation of the impact of aerosol particles on climate. 
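The data-availability screening described above can be sketched as follows. This is a minimal illustration: the 50%/60%/75% thresholds come from the abstract, while all function and variable names are hypothetical and not taken from the study's actual processing code.

```python
# Coverage thresholds from the abstract; names below are illustrative only.
COVERAGE_THRESHOLDS = {
    "seasonal": 0.50,  # >= 50 % valid data required for seasonal statistics
    "annual":   0.60,  # >= 60 % required for annual statistics
    "diel":     0.75,  # >= 75 % required for diel-cycle analysis
}

def coverage(series):
    """Fraction of non-missing values in a record (None marks a data gap)."""
    if not series:
        return 0.0
    valid = sum(1 for v in series if v is not None)
    return valid / len(series)

def usable_for(series, scale):
    """True if the record meets the coverage threshold for the given scale."""
    return coverage(series) >= COVERAGE_THRESHOLDS[scale]

# Example: a mock Ntot record with 70 % coverage qualifies for seasonal and
# annual statistics but not for the diel-cycle analysis.
record = [1200.0] * 70 + [None] * 30
print(usable_for(record, "seasonal"),
      usable_for(record, "annual"),
      usable_for(record, "diel"))   # True True False
```

In practice such a filter would be applied per station, per season or year, before any statistics are computed, so that poorly sampled periods do not bias the medians.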
This article arose from research financed under Internal Call 001-2017 for the Financing of Research and Development Projects of the Institución Tecnológica Colegio Mayor de Bolívar and is part of the macro-project "Social Restoration as a Tool for Action without Harm and Peacebuilding after the Armed Conflict in Colombia". Objective: To describe the incidence of the effects of the armed conflict and the peace accords on the social restoration of victims and displaced persons in seven (7) municipalities of the Colombian Caribbean Coast, promoting the implementation of actions for their integral reparation as part of the dynamics of the post-agreements. Material and Method: The study followed a qualitative approach, based on exploratory-descriptive research, which made it possible to recognize the social, demographic, cultural and economic characteristics of the victims of the armed conflict in the municipalities of Tuchín (Córdoba), Chalán (Sucre), Carmen de Bolívar (Bolívar), Santa Lucía (Atlántico), Ariguaní (Magdalena), Bosconia (Cesar) and Villanueva (Guajira). Consistent with the investigative approach, and following Passos (2015), the sample was "a part of the population sufficient to obtain the required information" (pp. 24-25); a probabilistic sample was therefore used, stratified at random and by convenience, since for each element the probability of forming part of the sample could be determined. Accordingly, a proportional stratified random sample of 1% of the total inhabitants of the municipalities of the seven selected departments was drawn; from an approximate universe of 227,887 inhabitants this yielded a population of 2,278 and a convenience sample of 340 inhabitants of the Colombian Caribbean affected by the violence. Conclusions: Any reference to the conflict inevitably leads to an undeniable reality that Colombia has been living through for more than five decades. This article reports the results of an investigative phase on the impact that the peace accords have had on the aforementioned municipalities. It shows that the population victimized by the armed conflict expressed their rejection of the Peace Agreement, since they feel they were not taken into account in the construction of such a transcendental agreement, not only for the region but also for the country and for Latin America. On the other hand, there are those who do approve of the agreement but consider that it could have been better had it taken their positions on the process into account; likewise, a minority expressed total and absolute agreement, arguing that it is the best way to end this scourge. In addition, 80% of those surveyed stated that since the signing of the agreement there has been no positive development regarding security on the Caribbean Coast, while 10% said they have seen slight progress in security matters. Beyond the fact that the vast majority of the population under investigation reported no positive aspects regarding security in the regions, special emphasis was placed on the increase in insecurity: although the agreement implied the end of the war, illegal groups have been identified appropriating some territories, along with dissidents from the peace process. 
Thus, the majority of respondents consider that the signing of the agreement has not changed the harsh reality of security on the Caribbean coast, marked by the violence of social conflicts that permeates the territories; even more, they express a feeling of bewilderment and apathy at what they consider to be the continuing lack of protection from the State. In summary, the municipalities under investigation have been among those most affected by the armed conflict in the Caribbean region, since for many years they were the epicentre of combat between the FARC and the paramilitaries, compounded by governmental neglect at both the national and territorial levels; despite the signing of the agreement, they continue to feel abandoned by the State.
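The sampling arithmetic described above (a 1% proportional stratified frame drawn from an approximate universe of 227,887 inhabitants) can be sketched as follows. The per-municipality populations below are hypothetical placeholders chosen only to sum to the stated universe, since the article does not report the breakdown.

```python
def proportional_allocation(strata, total_sample):
    """Largest-remainder proportional split of total_sample across strata."""
    pop = sum(strata.values())
    raw = {k: total_sample * v / pop for k, v in strata.items()}
    alloc = {k: int(x) for k, x in raw.items()}
    # hand out the remaining units to the largest fractional parts
    leftover = total_sample - sum(alloc.values())
    for k in sorted(raw, key=lambda s: raw[s] - alloc[s], reverse=True)[:leftover]:
        alloc[k] += 1
    return alloc

universe = 227_887             # approximate inhabitants (figure from the text)
frame = int(universe * 0.01)   # 1 % stratified frame -> 2278, as reported

# Hypothetical municipality populations, summing to the stated universe:
strata = {
    "Tuchín": 38_000, "Chalán": 4_500, "Carmen de Bolívar": 76_000,
    "Santa Lucía": 12_000, "Ariguaní": 31_000, "Bosconia": 38_387,
    "Villanueva": 28_000,
}
sample_sizes = proportional_allocation(strata, frame)
print(frame, sum(sample_sizes.values()))   # 2278 2278
```

The largest-remainder step simply guarantees that the per-stratum sizes add up exactly to the frame after rounding; the 340-person convenience sample would then be recruited within that frame.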
The 2008 financial crisis opened the doors of green capitalism as a financially sound approach to saving the planet from the worst effects of the climate emergency. The emphasis on the role of finance in promoting "green growth" has permeated mainstream political, academic and business approaches to climate change adaptation and mitigation, assuming multiple forms - from the carbon markets of the Kyoto Protocol and the Paris Agreement, to the Environmental, Social and Governance taxonomy for "green" investments, to the proliferation of sustainable labels in several economic sectors. The present article offers a critical appraisal of one of the most prominent arguments that upholds the idea that it is possible and desirable to achieve sustainability and economic growth through finance: green bonds. Green bonds are debt instruments whose proceeds are earmarked to fund projects with supposedly environmental benefits. After some years in the background, they now occupy a central position in the green recovery narrative and political framework all over the world. Most of the academic literature tends to naturalize green bonds as an eminently technical solution to reconcile economic growth and environmental sustainability. Filling an epistemological gap, the present article leverages a world-ecology approach to embed the financial phenomenon of green bonds within the broader picture of the capitalist political economy and the expansion of its ecological frontier. In light of the ongoing experiences that the authors have been following in the Brazilian legal, financial and political context, the article unpacks and makes sense of green bonds as a tool in the hands of climate finance that reproduces global patterns of North-South uneven development and the shifting of ecological costs. 
To test the potential of the "interpretative framework" offered by a world-ecology approach, we mobilize it in the concrete case of green bond issuances directed to fund the forestry sector in Brazil. Aware that the current phenomenon represents only a blip in comparison to the larger temporal (the longue durée) and spatial (the world system) scales usually deployed by world-ecology, we nonetheless discuss how the ideological, technical and power dynamics behind the issuance of green bonds unleash capital accumulation, produce a financialized and subordinated construction of nature, and entail an institutional arrangement. The article is organized around three main sections. After the introduction, section 1 describes green bonds as one of the most fashionable financial topics of the moment, and one that promotes a shift in discourses towards the need to actively build a "green economy". Although from a legal standpoint green bonds embody no significant difference from regular bonds, our focus is to describe the promises around them, the current (private) governance structure, and the trends in the issuance of these debt instruments in both the Global North and South, with a specific focus on the case of Brazil. In section 2, we look at green bond issuances on the ground, taking as an example the green debt underpinning the Brazilian forestry sector. The analysis reveals how these issuances, made predominantly by large multinational companies actively present on the global market, feed off great efforts deployed by both the public and the private sector in constructing an image of the sector as a key player in the emergent "bioeconomy" and in the strengthening of Brazil's goals under the Paris Agreement. 
However, we describe how green bond proceeds that are officially committed to the implementation of "sustainable management of forests" are associated with the expansion of the ecological frontier in the Brazilian territory, stretching the boundaries of the area dedicated to tree plantations and amplifying social and environmental tensions. The backstage of these issuances shows how capital accumulation through green bonds is associated with the co-production of nature for the purpose of accumulation, generating concerns that are often diluted or transformed into procedural requirements. Debt generated by the subscription of green bonds, we argue, is not only financial, but also social and ecological. In section 3, we put forward that for private accumulation to be successful, green bonds in the forestry sector demand an institutional arrangement that combines state support and private governance of debt in its financial, social and ecological dimensions. Rather than being the result of an idealized and spontaneous market, a set of institutional transformations has to be considered in order to comprehend the feasibility of green bonds in the Brazilian forestry sector. We thus describe the historical connection between forestry and the state, the endless public incentives to put nature to work, the functional adaptations of Brazilian environmental legislation, and the regulation concerning the demarcation, access and use of land. In this context, we argue that green bonds add yet another institutional layer to the process of creating and validating specific forms of nature, through a governance structure that dilutes the tensions between the promise of environmental benefits and its concrete negative social and environmental impacts. We conclude the article by reassembling these findings as part of the capitalist world-ecology "dialectical unity" of capital accumulation, co-production of nature and power. 
We suggest that the world-ecology approach allows us to grasp green bonds as a complex form that has so far been ignored in the relevant literature. Like any other phenomenon of financialization, a green bond should not be understood in isolation from its material basis, since it is from that basis – and its social and environmental conditions and contradictions – that it appropriates value. As the example of the Brazilian forestry sector illuminates, the "greenness" of the financial debt inscribed in green bonds may come into existence at the expense of the social and environmental debt that underlies the forestry sector's productive model. Hence, although the explicit inclusion of "environmental concerns" in financial considerations and project implementation has been praised as a step towards recognizing that finance has a material impact on the planet and that these externalities should be accounted for, the article warns of the typical green-arithmetic move put forward by green bonds. Green bonds inevitably co-produce nature and social relations, but in a very unequal way that emphasizes capital accumulation and does not necessarily protect the environment (even when standards are introduced). Much to the contrary, green bonds may come into being at the expense of other ways of living ecologically, of repairing the injustices of the past and of creating a regenerative future; in other words, by creating debt. ; This article draws on a world-ecology approach to unveil and make sense of green bonds as a tool in the hands of climate finance that reproduces global patterns of North-South uneven development and capitalist accumulation. Through the study of green bond issuances directed at financing the forestry sector in Brazil, it examines how the ideological, technical and power dynamics behind those issuances unleash capital accumulation and produce a financialized and subordinated construction of nature. Green bonds are currently one of the most prominent instruments through which the green economy attempts to reconcile "environmental sustainability" with growth within the parameters of finance. As debt instruments whose proceeds are earmarked to finance projects with supposed environmental benefits, green bonds already occupy a central position in the narrative and political framework of the green economy. The article reviews green bonds as one of the salient financial topics of the moment, and describes the promises and issuance trends of these bonds in countries of both the North and the South, and specifically in Brazil. Looking at concrete issuances in the Brazilian forestry sector, the article shows how capital accumulation through green bonds is associated with co-producing nature in socially and environmentally problematic ways, generating debt that is not only financial but essentially social and ecological. Combining micro and macro approaches to the topic, the article explains how green bonds in the forestry sector demand an institutional arrangement combining state support and private governance of debt in its financial, social and ecological dimensions, so that accumulation can take place on such terms. It is suggested, without any claim to universality, that the case of green bonds in the Brazilian forestry sector is a novel financial and ecological phenomenon, one that illuminates the puzzle of capitalism in the web of life and the analysis of the new frontiers of planned misery.
This article focuses on the historical productive restructuring of the oases in the semi-arid region of Cuyo, in central-western Argentina, particularly in the provinces of Mendoza and San Juan. From the 1990s onward, as in various latitudes of the arid South American diagonal, the famous slogan of "making the desert bloom" has found in the agro-export boom its new raison d'être. Several areas of oasis agriculture, traditionally structured around a surface-water distribution network, have undergone an expansion of their agricultural frontiers through intensive exploitation of their aquifers. Through groundwater access and the systematic application of modern irrigation technologies, domestic and foreign investors converted land branded as "dry", "marginal" or "empty" into sources of profit. As a result, several oases increasingly serve export markets and, ultimately, a global diet. In these dry environments, commodity flows, as either fresh or processed goods, depend, of course, on a significant water supply. In the case of world wine capitals like Mendoza and San Juan, such rural dynamics go hand in hand with the commodification of the countryside for the tourist and real estate sectors. By exercising effective control over land and (mostly underground) water, corporate actors contribute to the growing commodification and enclosure of spaces of the arid piedmonts. In this work we propose an analytical conceptualization of the oases as built environments, historically constructed through intensive and systematic water management. But we also understand them, as produced natures (Smith, 1984; Moore, 2015), through an epistemic and ontological approach that challenges the usual society/nature dualism. 
Although the most recent agro-export restructuring of the oases was carried out within the framework of the current neoliberal and corporate agro-food regime, we show that such transformations are the culmination of a long insertion in the development of capitalism as a world-ecology (Moore, 2015; Walker and Moore, 2019). From this hybrid, cross-border and relational perspective, we trace how the logic of endless capital accumulation and the production of nature have been central to the region's history. Cuyo's rivers originate in the snow-capped mountain range of the Cordillera de los Andes and flow toward the lower plains, providing diverted water for five main oases developed on the piedmont. In this arid land, with 100-350 mm of annual precipitation, no rain-fed agriculture is possible; water control is thus essential for the subsistence of the created and domesticated oasis ecosystems. The irrigation of the piedmont oases dates back to remote times, but an important change in land use came in the colonial period, when the oasis economy was gradually modified to develop cattle-fattening activities dependent on irrigated alfalfa, complemented by wine production and subsistence crops. Another major revolution came in the late nineteenth century with the arrival of the railroad and a massive wave of Mediterranean migration. At the time, the provincial government and elite became active in a hydraulic mission, financing the expansion of a run-of-the-river irrigation system. This hydraulic mission went hand in hand with a process of land and water commodification that resulted in the dispossession of native and peasant groups from their traditional land and water rights. Concomitantly, this massive hydraulic infrastructure has been at the core of a historical production model of winegrowing and winemaking, supplemented by other fruit and vegetable crops. 
This model, which had been boosted by the growth in the domestic consumption of low-quality table wine, entered into absolute collapse in 1980. Since the late 1980s, larger and better-capitalized firms began a process that would become known as the reconversión of the Argentine wine industry. As the 1990s progressed, Mendoza's oases started arousing great interest from transnational investors. At the same time, a restructuring process started to reveal its spatial consequences: while some areas were abandoned, others expanded. In particular, the expansion of the agricultural frontier was made possible by intensive aquifer exploitation in a context of loosely regulated groundwater management. Led by intensive, mainly large-scale and export-oriented projects, this conquest of the piedmont involved not only the high-quality wine-making sector but also the production of fruit, tree nuts, vegetables and olive oil. Former "marginal lands" were now in the sights of firms, which saw the peripheral areas of the oases as potentially highly profitable. With access to groundwater, corporate actors became disconnected and independent from the complex run-of-the-river irrigation system, irrigating their fields at their leisure. In particular, drip irrigation was used not only to overcome physical constraints in conquering the new space of production but also to optimize farming performance, guaranteeing quality and quantity. Many business groups, seeking to diversify their activities or finding stability in the face of financial-market turbulence, have chosen to combine export wine production with other sources of profit, such as tourism and luxury real estate complexes. Our article is structured in three moments. Firstly, we propose an ontology of the concept of oasis through the thesis of the production of nature. 
We give an account of different forms of internalisation of nature: through production in general, through production for exchange, and finally through the circulation and accumulation of capital. Secondly, we develop a periodization of capitalism in its relation to the production, commercialization and consumption of food, drawing on the discussions of so-called agro-food regimes. From this approach, we reconstruct the fragmented spatial trajectory of the main oases of Cuyo from their pre-Hispanic origin to their articulation in three successive regimes: the diaspora agro-export, the mercantile-industrial and the neoliberal corporate regimes. Thirdly, building on the concept of commodity frontiers, we focus on three products for export markets: wine, olive oil and pre-fried potatoes. Through these short examples, and in resonance with the world-ecology approach, we show that the oases as such represent a renewed attempt to expand, but also to maintain, active commodity frontiers. We argue that the strategies employed by investors to gain access to land and water (or to maintain their initial business plans) may encounter obstacles, such as policy shifts, legal constraints or a lack of economic openness. In other words, the commodity frontier is not always a worry-free process. Finally, this work aims to show that the relational processes embodied by such commodities and frontier-making not only transcend the Cartesian society/nature binary, but also complicate the comfortable compartmentalisation between local and global processes. ; In recent decades, the oases of Mendoza and San Juan (Argentina) were dramatically transformed in the heat of agro-export restructurings that took place within the framework of the current neoliberal corporate agro-food regime. These processes are but the culmination of a long trajectory of insertion into the development of capitalism as a world-ecology. From this hybrid, cross-border and relational approach, we thus set out to historically reconstruct the fragmentary spatial trajectory of the main Cuyo oases, from their pre-Hispanic origin to their articulation in three successive agro-food regimes. Secondly, we describe the recent expansion of the agricultural frontier through the uncontrolled use of groundwater reserves under the most recent regime. Drawing on the world-ecology approach, we bring out the relational processes that underlie the production and commercialization of three emblematic commodities in these oases: wine, olive oil and pre-fried potatoes. Our methodology was aimed at describing the historical trajectories of the oases through a selection of documentary sources and regional literature. Among the main findings, the third agro-food regime allowed the agro-industrial oases a new expansion of commodity frontiers, sustained by the systematic exploitation of groundwater, until then an essentially complementary source to surface water. In this neoliberal framework, precision agriculture allows the key stages of the production process to be controlled according to standardized international demands for quality and quantity. In conclusion, we understand that access to sources of Cheap Nature (whether water, soil or human labour) not only enabled the production of commodities desirable in demanding world markets, but also reconfigured the water management model itself, from a more state-led and socially conditioned model to a more privatized, remotely controlled and autonomous one. Nevertheless, these processes of commodity-frontier expansion are conditioned by local elements of pre-existing socio-ecological dynamics, and this frontier therefore often encounters insurmountable limits.
In the broadest sense, cosmopolitanism can be described as belief and action in accordance with the view that all human beings belong to a single world political community. However, such a simplified definition overlooks the multidimensionality of the concept. The term cosmopolitanism has been present in public discourse since ancient times and has carried different connotations throughout history, which contributes to its ambiguity. The preconditions for the development of cosmopolitanism in its present sense arose in the mid-20th century, after the world wars and the onset of new globalisation processes. Within the social sciences, significant interest in a more specific definition and conceptualisation of cosmopolitanism emerged in the second half of the 20th century. However, numerous theoretical discussions since then have not yet produced such a definition of the concept. An additional problem lies in the fact that these theoretical discussions, which defined multiple aspects and types of cosmopolitanism, are not accompanied by a corresponding body of empirical research. By reviewing previous theoretical and empirical research on the topic, this paper aims to offer a clearer conceptualisation and operationalisation of cosmopolitanism, with a focus on constructing a valid instrument for its measurement. While it is difficult to offer a clear and unambiguous theoretical definition of cosmopolitanism, most researchers have moved in the direction of more clearly defining certain aspects of the concept. A few such conceptualisations have met with wider acceptance, some more complex than others. Vertovec and Cohen (2002) established the most sophisticated conceptualisation by defining cosmopolitanism as a sociocultural condition, a philosophy or worldview, a political project, an attitude or disposition, and a practice or competence.
With the aim of capturing multiple aspects of the concept, Boucher, Aubert and de Latour (2019) defined four types of cosmopolitanism: moral, institutional, civil and cultural. Delanty (2009) offered a different approach by distinguishing moral, political and cultural cosmopolitanism. The various definitions agree, however, on including the political and the cultural aspects of cosmopolitanism. Therefore, this paper is based on the robust typology offered by Hannerz (2006), which distinguishes between the cultural and political faces of cosmopolitanism. The author sees the political face of the concept as the one that tries to solve macro-level problems of human, economic, legal, environmental and other processes that transcend nation-state borders. Hannerz (2006) defines the cultural face as an identity characteristic of individuals who enjoy new cultures, people, tastes, sounds and the like. The cultural dimension of cosmopolitanism, which arises from the awareness and practices of individuals, is the focus of this research. It should be noted that numerous researchers define particular types of cosmopolitanism (banal, patriotic, thin, ordinary). However, it is questionable to what extent defining such types contributes to a clearer understanding of the concept, especially when they rest solely on theoretical considerations. Before establishing the framework for the empirical research, it was important to clarify the source of contemporary cosmopolitanism. Most researchers link cosmopolitanism to globalisation processes. Beck and Sznaider (2010) explain globalisation as processes that take place "out there" in the world and define cosmopolitanism as "globalization from within", a process that is closely related to globalisation but takes place within society.
Such a connection becomes questionable when the terms glocalisation, which encompasses micro as well as macro processes, and segmented globalisation, which refers to the different dynamics by which globalisation unfolds in different places around the world, are introduced into the discourse. These concepts also imply that all individuals exposed to globalisation trends will express cosmopolitan views, which is not the case. On the other hand, Roudometof (2005) emphasises the link between cosmopolitanism and transnationalism, a view elaborated in this paper. The author defines transnationalism as a social condition that arises in the stage of internal globalisation and is not shaped by the emotions and attitudes of individuals. Its most important feature, however, is that it can stimulate individuals to develop either an open attitude toward differences, that is, cosmopolitanism, or a defensive one. The lack of a clear theoretical definition of cosmopolitanism has contributed to the disproportion between theoretical consideration and empirical research of the concept. Some researchers have used secondary data to examine attitudes about cosmopolitanism in a given population. These studies have yielded important insights, but they have not contributed to the creation of a valid and reliable instrument for measuring cosmopolitanism. A further problem with such research is that it relies on data collected to examine other concepts. One such example is the study by Olofsson and Öhman (2007), in which the authors interpreted views contrary to nationalism as cosmopolitan views. The remaining empirical research on cosmopolitanism has focused on constructing an instrument for its measurement. While most such studies were designed to explore a particular aspect of the concept, Saran and Kalliny (2012) offered an instrument to measure general cosmopolitan attitudes, values, and practices within a particular population.
The authors first conducted interviews, the results of which were used to construct the survey questions. After collecting survey data, the authors defined a valid and reliable, one-dimensional 14-item scale of cosmopolitanism through exploratory and confirmatory factor analysis. The findings of other authors pointed to the connection between cosmopolitanism and related constructs such as transnational experiences, political orientation, and sociodemographic characteristics. The empirical part of this research aimed to define a valid and reliable instrument for measuring the presence of cosmopolitanism in a given population. For this purpose, the scale offered by Saran and Kalliny (2012) was used in a slightly modified form to further test its construct validity, reliability and applicability. The instrument was tested on the student population of the University of Zadar via an online survey in October 2020. In addition to the cosmopolitanism scale, the questionnaire contained questions about the number of countries the respondents had visited, the number of foreign languages they spoke, their political orientation, their support for universal human rights and certain sociodemographic characteristics, for the purpose of testing the convergent validity of the instrument. To determine the metric properties of the cosmopolitanism scale, bivariate (correlation analysis) and multivariate statistical procedures (exploratory factor analysis, confirmatory factor analysis and multiple regression analysis) were conducted in the statistical programming language R. First, exploratory factor analysis was performed on the cosmopolitanism scale with parallel analysis as the factor retention method, which extracted one 13-item factor with a high level of reliability (α=0.93). To examine the construct validity of the scale, confirmatory factor analysis was then performed, resulting in an acceptable goodness-of-fit.
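The internal-consistency statistic reported above (Cronbach's α = 0.93) follows the standard formula α = k/(k−1) · (1 − Σ σ²ᵢ / σ²ₜ), where k is the number of items, σ²ᵢ the variance of each item and σ²ₜ the variance of the summed scale. As a minimal illustrative sketch (in Python rather than the R used by the authors, and with hypothetical Likert-type responses, not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the sum score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: rows = respondents, columns = scale items
responses = np.array([
    [5, 4, 5, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
print(round(cronbach_alpha(responses), 2))  # → 0.95
```

Higher inter-item correlation drives the sum-score variance up relative to the item variances, pushing α toward 1; values above 0.9, as in the 13-item factor, indicate very high internal consistency.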
In order to define a scale with even better psychometric properties, confirmatory factor analysis was conducted on a reduced 6-item scale that Saran and Kalliny (2012) had found to show stronger construct validity. Consistent with their results, the 6-item scale showed even better goodness-of-fit (χ²=25, df=9, SRMR=0.05, RMSEA=0.09, CFI=0.95, TLI=0.91), and its factor scores were used in further analyses. Correlation analysis was used to measure the relationship between the cosmopolitanism scale and transnational experiences and political views. Respondents who expressed stronger cosmopolitan views spoke more foreign languages, had visited a greater number of foreign countries, expressed stronger support for left-wing political options, and showed a stronger need to protect universal human rights, with the latter correlation proving to be the highest. Lastly, multiple regression was conducted, with the cosmopolitanism scale factor scores as the dependent variable and sociodemographic variables as predictors. The regression model showed, in line with previous research, that women express stronger cosmopolitan attitudes than men. The results of the statistical analysis indicate a high level of reliability and validity of the reduced 6-item cosmopolitanism scale. On that basis, it can be stated that the scale serves as a valid instrument for measuring cosmopolitan attitudes within a population. The scope of the study is limited because it was conducted on a relatively homogeneous sample of the University of Zadar student population. In future research, the 6-item scale should be tested on a more heterogeneous sample, which could indicate the applicability of the instrument to a wider population. Furthermore, future research should pay more attention to examining indicators and constructs related to cosmopolitanism, on the basis of which clearer types of cosmopolitans could potentially be defined.