Early General Equilibrium Economics: Walras, Pareto, and Cassel
In: A Companion to the History of Economic Thought, S. 278-293
In: A Concise History of Economic Thought, S. 227-234
In: Rationality, Institutions and Economic Methodology; Economics as Social Theory
Blog: ThinkMarkets
by Andreas Hoffmann. David Glasner has posted his paper "Hayek and equilibrium concepts" on SSRN. An earlier version of this fascinating paper was presented at the History of Economics Society in Toronto in 2017 and at the NYU Colloquium. A teaser (taken from the abstract): "The now dominant Lucas rational-expectations approach misconceives intertemporal equilibrium and …" Continue reading Glasner: "Hayek, Hicks, Radner and Three Equilibrium Concepts"
Proposes an alternative to theories focusing on cumulative individual behavior to explain the generation of economic organizations, based on the Walrasian general equilibrium conception of the economy that stresses the interaction & connectedness of markets in & across economies. It is argued that market participants are embedded in organized environments that steer & constrain both firm-level & individual economic processes. Calculability, generalized through price systems, helps rationalize an economy's organization, causing firms to coordinate or not coordinate their activities in a business group, depending on price incentives. These system actions are mutually maintained & require participants to play by the rules of the organizational game. Economic organization theories that assume individual aggregation are summarized, along with the benefits of a neo-Walrasian conception, as opposed to bottom-up theories, for addressing the effects of economic organizations on the formation of a complex capitalist economy. It is maintained that both economic & noneconomic factors are important in understanding structural differences among capitalist economies. 90 References. J. Lindroth
Blog: Penn LDI
The Law of One Price One of the core tenets of economic analysis of competitive markets is that, in competitive equilibrium, a particular good or service produced by multiple competing firms should sell for the same price. This "Law of One Price" follows from the assumed property in a competitive market with informed buyers and […]
In: Конкурентоспособность региональной экономики: опыт, проблемы, перспективы ; Материалы международной научно-практической конференции, S. 57-59
Article about the role of equilibrium in regional economic development and regional governance. The author proposes a system of index ratings of disparities in the socio-economic development of the regions of the Russian Federation.
Blog: The Grumpy Economist
My first post described a few anecdotes about what a warm person Bob Lucas was, and such a great colleague. Here I describe a little bit of his intellectual influence, in a form that I hope is accessible to average people.

The "rational expectations" revolution that brought down Keynesianism in the 1970s was really much larger than that. It was really the "general equilibrium" revolution. Macroeconomics until 1970 was sharply different from regular microeconomics. Economics is all about "models," complete toy economies that we construct via equations and in computer programs. You can't keep track of everything in even the most beautiful prose. Microeconomic models, and "general equilibrium" as that term was used at the time, wrote down how people behave — how they decide what to buy, how hard to work, whether to save, etc. Then they similarly described how companies behave and how government behaves. Set this in motion and see where it all settles down; what prices and quantities result.

But for macroeconomic issues, this approach was sterile. I took a lot of general equilibrium classes as a PhD student — Berkeley, home of Gerard Debreu, was strong in the field. But the field was devoted to proving the existence of equilibrium under more and more general assumptions, and never got around to calculating that equilibrium and what it might say about recessions and government policies. Macroeconomics, exemplified by the ISLM tradition, inhabited a different planet. One wrote down equations for quantities rather than people, for example that "consumption" depended on "income," and investment on interest rates. Most importantly, macroeconomics treated each year as a completely separate economy. Today's consumption depended on today's income, having nothing to do with whether people expected the future to look better or worse. Economists recognized this weakness, and a vast and now thankfully forgotten literature tried fruitlessly to find "micro foundations" for Keynesian economics.
But building foundations under an existing castle doesn't work. The foundations want a different castle. Bob's "islands" paper is famous, yes, for a complete model of how unexpected money might move output in the short run and not just raise inflation. But you can do that with half a page of simple math, and Bob's paper is hard to read. Its deeper contribution, and the reason for that difficulty, is that Bob wrote out a complete "general equilibrium" model. People, companies, and government each follow described rules of behavior. Those rules are derived as the optimal thing for people and companies to do given their environment. And they are forward-looking. People think about how to make their whole lives as pleasant as possible, companies how to maximize the present value of profits. Prices adjust so supply = demand. Bob said, by example, that we should do macroeconomics by writing down general equilibrium models.

General equilibrium had also been abandoned on the presumption that it only studies perfect economies. Macroeconomics is really about studying how things go wrong, how "frictions" in the economy, such as the "sticky" wages underlying Keynesian thinking, can produce undesirable and unnecessary recessions. But here too, Bob requires us to write down the frictions explicitly. In his model, people don't see the aggregate price level right away, and do the best they can with local information. That is the real influence of the paper and Bob's real influence in the profession. (Current macroeconomic modeling reflects the fact that the Fed sets interest rates, and does not control the money supply.) You can see this influence in Tom Sargent's textbooks. The first textbook has an extensive treatment of Keynesian economics. It's about the most comprehensible treatment there is — but it is no insult to Tom to say that in that book you can see how Keynesian economics really doesn't hang together.
Tom describes how, the minute he learned from Bob how to do general equilibrium, everything changed instantly. Rational expectations was, like any other advance, a group effort. But what made Bob the leader was that he showed the rest how to do general equilibrium. This is the heart of my characterization that Bob is the most important macroeconomist of the 20th century. Yes, Keynes and Friedman had more policy impact, and Friedman's advocacy of free markets in microeconomic affairs is the most consequential piece of 20th century economics. But within macroeconomics, there is before Lucas and after Lucas. Everyone today does economics the Lucas way. Even the most new-Keynesian article follows the Lucas rules of how to do economics.

Once you see models founded on complete descriptions of people, businesses, government, and frictions, you can see the gaping holes in standard ISLM models. This is some of his stinging critique, such as "After Keynesian Macroeconomics." Sure, if people's income goes up they are likely to consume more, as the Keynesians posited. But interest rates, wages, and expectations of the future also affect consumption, which Keynesians leave out. "Cross-equation restrictions" and "budget constraints" are missing. Now, the substantive prediction that monetary policy can only move the real economy via unexpected money supply growth did not bear out, and both subsequent real business cycle and new-Keynesian models brought persistent responses. But the how-we-do-macroeconomics part is the enduring contribution.

The paper also had enduring practical lessons. Lucas, together with Friedman and Phelps, brought down the Phillips curve. This curve, relating inflation to unemployment, had been (and sadly, remains) at the center of macroeconomics. It is a statistical correlation, but like many correlations, people got enthused with it and started reading it as a stable relationship, and indeed a causal one. Raise inflation and you can have less unemployment.
Raise unemployment in order to lower inflation. The Fed still thinks about it in that causal way. But Lucas, Friedman, and Phelps bring a basic theory to it, and thereby realize it is just a correlation, which will vanish if you push on it. Rich guys wear Rolexes. That doesn't mean that giving everyone a Rolex will have a huge "multiplier" effect and make us all rich. This is the essence of the "Lucas critique," which is a second big contribution that lay readers can easily comprehend. If you push on correlations they will vanish. Macroeconomics was dedicated to the idea that policymakers can fool people. Monetary policy might try to boost output in a recession with a surprise bit of money growth. That will work once or twice. But like the boy who cried wolf, people will catch on, come to expect higher money growth in recessions, and the trick won't work anymore. Bob showed here that all the "behavioral" relations of Keynesian models will fall apart if you exploit them for policy, or push on them, though they may well hold as robust correlations in the data.

The "consumption function" is the next great example. Keynesians noticed that when income rises people consume more, so they write a consumption function relating consumption to income. But, following Friedman's great work on consumption, we know that correlation isn't always true in the data. The relation between consumption and income is different across countries (about one for one) than it is over time (less than one for one). And we understand that with Friedman's theory: people, trying to do their best over their whole lives, don't follow mechanical rules. If they know income will fall in the future, they consume a lot less today, no matter what today's income. Lucas showed that people who behave in this sensible way will follow a Keynesian consumption function, given the properties of income over the business cycle. You will see a Keynesian consumption function.
Econometric estimates and tests will verify a Keynesian consumption function. Yet if you use the model to change policies, the consumption function will evaporate. This paper is devastating. Large-scale Keynesian models had already been constructed, and used for forecasting and policy simulation. It's natural. The model says, given a set of policies (money supply, interest rates, taxes, spending) and other shocks, here is where the economy goes. Well, then, try different policies and find ones that lead to better outcomes. Bob shows the models are totally useless for that effort. If the policy changes, the model will change. Bob also showed that this was happening in real time. Supposedly stable parameters drifted around. (This one is also very simple mathematically. You can see the point instantly. Bob always uses the minimum math necessary. If other papers are harder, that's by necessity, not bravado.)

This devastation is sad in a way. Economics moved to analyzing policies in much simpler, more theoretically grounded, but less realistic models. Washington policy analysis sort of gave up. The big models lumber on, the Fed's FRB/US for example, but nobody takes the policy predictions that seriously. And they don't even forecast very well. For example, in the 2008 stimulus, the CEA was reduced to assuming a back-of-the-envelope 1.5 multiplier, this 40 years after the first large-scale policy models were constructed. Bob always praised the effort of the last generation of Keynesians to write explicit quantitative models, to fit them to data, and to make numerical predictions of various policies. He hoped to improve that effort. It didn't work out that way, but not by intention. This affair explains a lot of why economists flocked to the general equilibrium camp. Behavioral relationships, like what fraction of an extra dollar of income you consume, are not stable over time or as policy changes.
But one hopes that preferences — how impatient you are, how much more you are willing to save to get a better rate of return — and technology — how much a firm can produce with given capital and labor — do not change when policy changes. So, write models for policy evaluation at the level of preferences and technology, with people and companies at the base, not from behavioral relationships that are just correlations.

Another deep change: once you start thinking about macroeconomics as intertemporal economics — the economics that results from people who make decisions about how to consume over time, and businesses that make decisions about how to produce this year and next — and once you see that their expectations of what will happen next year, and what policies will be in place next year, are crucial, you have to think of policy in terms of rules and regimes, not isolated decisions. The Fed often asks economists for advice: "should we raise the funds rate?" Post-Lucas macroeconomists answer that this isn't a well-posed question. It's like asking "should we cry wolf?" The right question is: should we start to follow a rule, a regime, should we create an institution, that regularly and reliably raises interest rates in a situation like the current one? Decisions do not live in isolation. They create expectations and reputations. Needless to say, this fundamental reality has not soaked into policy institutions. And that answer (which I have tried at Fed advisory meetings) leads to glazed eyes. John Taylor's rule has been making progress for 30 years at bridging that conceptual gap, with some success.

This was, and remains, extraordinarily contentious. 50 years later, Alan Blinder's book, supposedly about policy, is really one long snark about how terrible Lucas and his followers are, and how we should go back to the Keynesian models of the 1960s. Some of that contention comes back to basic philosophy.
The program applies standard microeconomics: derive people's behaviors as the best thing they can do given their circumstances. If people pick the best combination of apples and bananas when they shop, then also describe consumption today vs. tomorrow as the best they can do given interest rates. But a lot of economics doesn't like this "rational actor" assumption. It's not written in stone, but it has been extraordinarily successful. And it imposes a lot of discipline. There are a thousand arbitrary ways to be irrational. Somehow, though, a large set of economists are happy to write down that people pick fruit baskets optimally, but don't apply the same rationality to decisions over time, or in how they think about the future.

But "rational expectations" is really just a humility condition. It says: don't write models in which the predictions of the model are different from the expectations in the model. If you do, and if your model is right, people will read the model and catch on, and the model won't work anymore. Don't assume that you, the economist (or Fed chair), are so much less behavioral than the people in your model. Don't base policy on an attempt to fool the little peasants over and over again. It does not say that people are big super-rational calculating machines. It just says that they eventually catch on.

Some of the contentiousness is also understandable by career concerns. Many people had said "we should do macro seriously, like general equilibrium." But it isn't easy to do. Bob had to teach himself, and get the rest of us to learn, a range of new mathematical and modeling tools to be able to write down interesting general equilibrium models. A 1970 Keynesian could live just knowing how to solve simple systems of linear equations, and run regressions. To follow Bob and the rational expectations crowd, you had to learn linear time-series statistics, dynamic programming, and general equilibrium math.
Bob once described how tough the year was that it took him to learn functional analysis and dynamic programming. The models themselves consisted of a mathematically hard set of constructions. The older generation either needed to completely retool, fade away, or fight the revolution. Some good summary words: Bob's economics uses "rational expectations," or at least forward-looking and model-consistent expectations. Economics becomes "intertemporal," not "static" (one year at a time). Economics is "stochastic" as well as "dynamic": we can treat uncertainty over time, not just economies in which everyone knows the future perfectly. It applies "general equilibrium" to macroeconomics.

And I've just gotten to the beginning of the 1970s. When I got to Chicago in the 1980s, there was a feeling of "well, you just missed the party." But it wasn't true. The 1980s as well were a golden age. The early rational expectations work was done, and the following real business cycles were the rage in macro. But Bob's dynamic programming, general equilibrium tool kit was on a rampage all over dynamic economics. The money workshop was one creative use of dynamic programming and intertemporal tools after another, ranging from taxes to Thai villages (Townsend). I'll mention two.

Bob's consumption model is at the foundation of modern asset pricing. Bob parachuted in, made the seminal contribution, and then left finance for other pursuits. The issue at the time was how to generalize the capital asset pricing model. Economists understood that some stocks pay higher returns than others, and that they must do so to compensate for risk. They understood that the risk is, in general terms, that the stock falls in some sense of bad times. But how do you measure "bad times"? The CAPM uses the market; other models use somewhat nebulous other portfolios. Bob showed us that, at least in the purest theory, stocks must pay higher average returns if they fall when consumption falls.
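In symbols, this logic is the standard consumption-based pricing condition (a textbook sketch, not a quotation from Lucas's paper): with discount factor $\beta$ and marginal utility $u'(c)$, an asset's price satisfies

```latex
p_t = E_t\!\left[\beta \,\frac{u'(c_{t+1})}{u'(c_t)}\,\bigl(p_{t+1}+d_{t+1}\bigr)\right],
\qquad
E_t[R_{t+1}] - R^f_t = -R^f_t \,\mathrm{cov}_t\!\left(m_{t+1},\, R_{t+1}\right),
\quad m_{t+1} \equiv \beta\,\frac{u'(c_{t+1})}{u'(c_t)}.
```

An asset whose return falls when consumption falls — when marginal utility, and hence $m$, is high — has a negative covariance with $m$, and so must offer a higher expected return.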
(Breeden also constructed a consumption model in parallel, but without this "endowment economy" aspect of Bob's.) This is the purest, most general theory, and all the others are (useful) specializations. My asset pricing book follows. The genius here was to turn it all around. Finance had sensibly built up from portfolio theory, like supply and demand: given returns, what stocks do you buy, and how much do you save vs. consume? Then, markets have to clear: find the stock prices, and thus returns, at which people will buy exactly the amount that's for sale and consume what is produced. That's hard. (Technically, finding the vector of prices that clears markets is hard. Yes, N equations in N unknowns, but they're nonlinear and N is big.) Bob instead imagined that consumption is fixed at each moment in time, like a desert island on which so many coconuts fall each day and you can't store them or plant them. Then, you can just read prices off people's preferences. This gives the same answer as if the consumption you assume to be fixed had been derived from a complex production economy. You don't have to solve for prices that equate supply and demand. Brilliantly, though prices cause consumption for individual people, consumption causes prices in the aggregate. This is part of Bob's contribution to the hard business of actually computing quantitative models in the stochastic dynamic general equilibrium tradition.

Bob, with Nancy Stokey, also took the new tools to the theory of taxation. (Bob Barro also was a founder of this effort in the late 1980s.) You can see the opportunity: we just learned how to handle dynamic (over time; expectations of tomorrow matter to what you do today), stochastic (there is uncertainty about what will happen tomorrow) economics (people make explicit optimizing decisions) for macro. How about taking that same approach to taxes? The field of dynamic public finance is born.
Bob and Nancy, like Barro, show that it's a good idea for governments to borrow and then repay, so as to spread the pain of taxes evenly over time. But not always. When a big crisis comes, it is useful to execute a "state-contingent default." The big tension of Lucas-Stokey (and now, all) dynamic public finance: you don't want any capital taxes, for the incentive effects. If you tax capital, people invest less, and you just get less capital. But once people have invested, a capital tax grabs revenue for the government with no economic distortion. Well, that is, if you can persuade them you'll never do it again. (Do you see expectations, reputations, rules, regimes, wolves in how we think of policy?) Lucas and Stokey say: do it only very rarely, to balance the disincentive of a bad reputation with the need to raise revenue in once-a-century calamities.

Bob went on, of course, to be one of the founders of modern growth theory. I always felt he deserved a second Nobel for this work. He's absolutely right. Once you look at growth, it's hard to think about anything else. The average Indian lives on $2,000 per year. The average American, $60,000. That was $15,000 in 1950. Nothing else comes close. I only work on money and inflation because that's where I think I have answers. For us mortals, good research proceeds where you think you have an answer, not necessarily from working on Big Questions. Bob brilliantly put together basic facts and theory to arrive at the current breakthrough: growth does not come from more capital, or even more efficiency. It comes from more and better ideas. I remember being awed by his first work for cutting through the morass and assembling the facts that only look salient in retrospect. A key one: interest rates in poor countries are not much higher than they are in rich countries. Poor countries have lots of workers, but little capital.
Why isn't the return on scarce capital enormous, with interest rates in the hundreds of percent, to attract more capital to poor countries? Well, you sort of know the answer: capital is not productive in those countries. Productivity is low, meaning those countries don't make use of better ideas on how to organize production. Ideas, too, are produced by the economy, but, as Paul Romer crystallized, they are fundamentally different from other goods. If I produce an idea, you can use it without hurting my use of it. Yes, you might drive down the monopoly profits I gain from my intellectual property. But if you use my pizza recipe, that's not like using my car. I can still make pizza, whereas if you use my car I can't go anywhere. Thus, the usual free-market presumption that we will produce enough ideas is false. (Don't jump too quickly to advocate government subsidies for ideas. You have to find the right ideas, and governments aren't necessarily good at subsidizing that search.) And the presumption that intellectual property should be preserved forever is also false. Once an idea is produced, it is socially optimal for everyone to use it. I won't go on. It's enough to say that Bob was as central to the creation of idea-based growth theory, which dominates today, as he was to general equilibrium macro, which also dominates today.

Bob is an underrated empiricist. Bob's work on the size distribution of firms (great tweet summary by Luis Garicano) similarly starts from basic facts of the size distribution of firms and the lack of relationship between size and growth rates. It's interesting how we can go on for years with detailed econometric estimates of models that don't get basic facts right. I loved Bob's paper on money demand for the Carnegie-Rochester conference series. An immense literature had tried to estimate money demand functions with dynamics, and was pretty confusing.
It made a basic mistake, by looking at first differences rather than levels, thereby isolating the noise and drowning out the signal. Bob made a few plots, basically rediscovered cointegration all on his own, and made sense of it all. And don't forget the classic international comparison of inflation-output relations. Countries with volatile inflation have less of a Phillips curve tradeoff, just as his islands model featuring confusion between relative prices and the price level predicts.

One last note to young scholars. There is a tendency today to value people by the number of papers they produce, and how quickly they rise through the ranks. Read Bob's CV. He wrote about one paper a year, starting quite late in life. But, as Aesop said, they were lions. In his Nobel prize speech, Bob also passed on that he and his Nobel-winning generation at Chicago always felt they were in some backwater, while the high-prestige stuff was going on at Harvard and MIT. You never know when it might be a golden age. And the AER rejected his islands paper (as well as Akerlof's lemons). If you know it's good, revise and try again. I will miss his brilliant papers as much as his generous personality.

Update: See Ivan Werning's excellent "Lucas Miracles" for an appreciation by a real theorist.
In: Восемнадцатые Апрельские экономические чтения: Материалы международной научно-практической конференции, S. 94-97
Article on the causes, dynamics of development, and institutional forms of economic cycles. The institutional patterns associated with the formation, development, and completion of economic cycles are studied from the standpoint of an exogenous approach. The author defends the thesis that the formation of new institutions of economic behavior is both a cause of economic crisis and the primary sign of transition to a new economic cycle. The article is an attempt to demonstrate the relationship between economic and institutional cycles.
In: Infrastruktur: Theorie und Politik, S. 87-108
In: Проблемы развития Омского Прииртышья в переходный период, S. 225-239
Article on the use of system analysis methodology for modeling socio-economic processes. The author justifies a methodological approach based on dynamic analysis of exogenous processes in economic and social management practices. Particular attention is paid to the use of index indicators of the adaptive response of socio-economic systems to environmental changes.
Blog: Greg Mankiw's Blog
I was recently chatting with someone who teaches introductory macroeconomics (not using my favorite textbook). He does not teach the students about money creation under fractional reserve banking, which he considers an unnecessary technicality, but he does teach them the following two statements about inflation.
1. If the Fed lowers the interest rate on reserves, that policy stimulates economic activity in the short run and, via the Phillips curve, increases inflation.
2. In the long run, the quantity theory of money explains inflation.
I agree with both of these statements, and I consider them critical for students to understand. But consider: How does one explain the transition from the short run to the long run?
The only way I know to answer this question is that a lower interest rate on reserves increases bank lending and expands the money supply by increasing the money multiplier. But if students don't know how banks create money under fractional reserve banking, they are not equipped to understand this logic.

The bottom line: The traditional pedagogy about how banks influence the money supply remains important if students are to understand the economics of inflation.

Update: This post generated more than the usual amount of confusion and misdirection on Twitter. So let me explain my logic more slowly:
1. It is useful to teach the quantity theory of money (M and P move in parallel) as a long-run equilibrium condition, regardless of which direction causality runs.
2. It is useful for students to know that cutting the interest rate on reserves is expansionary for aggregate demand and, over time, inflationary. That is, it raises P.
3. To complete the story, you need to explain how cutting the interest rate on reserves raises M.
4. To be sure, lower interest rates increase the quantity of money demanded. But you also must explain the quantity of money supplied.
5. The money supply M equals m*B, where m is the money multiplier and B is the monetary base (currency plus reserves).
6. Cutting the interest on reserves (unlike open-market operations) does not change B. So if it changes the money supply M, it must work through the money multiplier m.
7. One cannot understand the money multiplier m without understanding fractional reserve banking. (Under 100-percent-reserve banking, m is fixed at 1.)
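The multiplier arithmetic (M = m*B) can be sketched numerically. A minimal sketch using the standard textbook formula m = (1 + c)/(r + c), where c is the currency-deposit ratio and r the reserve-deposit ratio; the numbers below are hypothetical illustrations, not estimates:

```python
# Textbook money-multiplier arithmetic; all ratios below are hypothetical.
def money_multiplier(reserve_ratio: float, currency_ratio: float) -> float:
    """Standard multiplier m = (1 + c) / (r + c), with currency-deposit
    ratio c and reserve-deposit ratio r."""
    return (1 + currency_ratio) / (reserve_ratio + currency_ratio)

def money_supply(base: float, reserve_ratio: float, currency_ratio: float) -> float:
    """M = m * B, where B is the monetary base (currency plus reserves)."""
    return money_multiplier(reserve_ratio, currency_ratio) * base

B = 1000.0  # monetary base, hypothetical units

# If a lower interest rate on reserves leads banks to hold fewer reserves
# per deposit, m rises and M rises even though B is unchanged:
print(money_supply(B, reserve_ratio=0.20, currency_ratio=0.10))  # ≈ 3666.67
print(money_supply(B, reserve_ratio=0.10, currency_ratio=0.10))  # 5500.0

# Limiting case of 100-percent-reserve banking (r = 1, with c = 0 for
# simplicity): m = 1, so M equals the base.
print(money_supply(B, reserve_ratio=1.0, currency_ratio=0.0))  # 1000.0
```

The last line illustrates the point in statement 7 above: with 100-percent reserves there is no multiple deposit creation, so changes in the interest on reserves could not move M through m.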
In: Die Natur der Gesellschaft: Verhandlungen des 33. Kongresses der Deutschen Gesellschaft für Soziologie in Kassel 2006. Teilbd. 1 u. 2, S. 2113-2125
"In recent years, game theory has increasingly brought behavioral approaches to the foreground that deal with deviations from the rational-choice model of economics. The standard economic model cannot explain behavior in many situations. Results from dictator and ultimatum games suggest that, alongside egoistic motives, altruism, fairness, and reciprocity play a large role in human behavior (cf. Diekmann 2004; Fehr/Gächter 2000; Ockenfels 1999). In many cases actors do not orient themselves toward the Nash equilibrium; dictators, proposers, and responders deviate, sometimes substantially, from rational strategies (e.g., Forsythe et al. 1994). The extent of the deviation varies across experimental studies, and these fluctuations have not yet been satisfactorily explained. The central question is thus: what causes the heterogeneity of behavior in these simple games? Some studies suggest that the degree of market integration in a society is an explanatory variable (Henrich et al. 2004). Contributions would then vary with the degree of individual embeddedness in networks. Is network embeddedness, then, an explanatory factor for differing contributions in dictator games? In addition to an experiment on the sequential dictator game (cf. Diekmann 2004), network data are collected via questionnaires. Surveying individual networks across different spheres of life (friends, family, acquaintances, etc.) makes it possible to depict more precisely the relationships between network embeddedness and strategic behavior in game situations." (Author's abstract)
In: Die Natur der Gesellschaft: Verhandlungen des 33. Kongresses der Deutschen Gesellschaft für Soziologie in Kassel 2006. Teilbd. 1 u. 2, S. 2126-2138
"Corruption is widespread among the civil servants of many countries and, despite negative sanctions, appears to persist so stubbornly that one might suspect an evolutionarily stable strategy behind it, in the sense of J. Maynard Smith's mathematical game theory. To examine this conjecture, a model of an evolutionary game is developed in which four different social 'species' interact: corrupt and honest state employees on the one hand, and corrupt and honest citizens on the other. The payoffs of this game presumably depend on the following factors: (a) the type of corruption, which aims either at releasing legitimate services blocked by the state or at claiming illegitimate services; (b) the players' choice of strategies, which leads to mutually correct behavior or to one-sided or mutual corruption; (c) the expected sanction costs of a failed one-sided corruption attempt; (d) the size of the bribe and the expected return associated with a corrupt state service. The theory of evolutionary games assumes that the statistical expected value of a strategy's payoff determines the growth of the corresponding group: by its assumptions, a species with a higher total payoff has higher 'fitness' and thus better chances of survival and reproduction. Under a sociological reinterpretation, this fitness concept can be used to compute the temporal dynamics and possible equilibrium states of the share of corrupt and non-corrupt actors. On the one hand, this answers the question of whether corruption is evolutionarily stable. On the other hand, this information can also be used to attempt to validate the model against observational data: to this end, the presentation analyzes the Corruption Perception Index (CPI) of Transparency International for a larger number of countries and compares it with the model's theoretical predictions." (author's abstract)
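The replicator dynamics underlying such an evolutionary game can be sketched in a few lines. The payoff matrix below is purely illustrative, not the paper's calibration; with two evolutionarily stable strategies, whether corruption takes over depends on the initial share of corrupt actors:

```python
import numpy as np

# Illustrative payoff matrix (NOT the paper's calibration): entry [i, j]
# is strategy i's payoff when meeting strategy j.
# Rows/cols: 0 = corrupt, 1 = correct.
A = np.array([[3.0, 0.0],
              [1.0, 2.0]])

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator equation dx_i/dt = x_i (f_i - f_bar)."""
    f = A @ x        # expected payoff of each strategy
    f_bar = x @ f    # population-average payoff
    return x + dt * x * (f - f_bar)

# Below the mixed equilibrium (here 50% corrupt), honesty takes over ...
x = np.array([0.2, 0.8])
for _ in range(20000):
    x = replicator_step(x, A)
print(x)  # converges toward the all-correct state from this start

# ... above it, corruption becomes self-sustaining.
y = np.array([0.6, 0.4])
for _ in range(20000):
    y = replicator_step(y, A)
print(y)  # converges toward the all-corrupt state
```

The bistability is the point of the abstract's question: once the corrupt share passes a threshold, corruption is an evolutionarily stable strategy and persists despite sanctions.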
Blog: Theory Talks
Loet Leydesdorff on the Triple Helix: How Synergies in University-Industry-Government Relations can Shape Innovation Systems
This is the sixth and last in a series of Talks dedicated to the technopolitics of International Relations, linked to the forthcoming double volume 'The Global Politics of Science and Technology' edited by Maximilian Mayer, Mariana Carpes, and Ruth Knoblich
The relationship between technological innovation
processes and the nation state remains a challenge for the discipline of
International Relations. Non-linear and multi-directional
characteristics of knowledge production, and the diffusive nature of knowledge
itself, limit the general ability of governments to influence and steer
innovation processes. Loet Leydesdorff advances the framework of the "Triple
Helix" that disaggregates national innovation systems into
evolving university-industry-government eco-systems. In this Talk, amongst others, he shows that these eco-systems can be
expected to generate niches with synergy at all scales, and emphasizes that, though
politics are always involved, synergies develop unintentionally.
Print version of this Talk (pdf)
What is the most relevant aspect
of the dynamics of innovation for the discipline of International Relations?
The
main challenge is to endogenize the notions of technological progress and
technological development into theorizing about political economies and nation states.
The endogenization of technological innovation and technological development
was first placed on the research agenda of economics by evolutionary economists
like Nelson and Winter in the late 1970s and early 1980s. In this context, the
question was how to endogenize the dynamics of knowledge, organized knowledge,
science and technology into economic theorizing. However, one can equally well
formulate the problem of how to reflect on the global (sub)dynamics of
organized knowledge production in political theory and International Relations.
From
a longer-term perspective, one can consider that the nation states – the
national or political economies in Europe – were shaped in the 19th
century, somewhat later for Germany (after 1871), but for most countries it was
during the first half of the 19th century. This was after the French
and American Revolutions and in relation to industrialization. These nation
states were able to develop an institutional framework for organizing the
market as a wealth-generating mechanism, while the institutional framework
permitted them to retain wealth, to regulate market forces, and also to steer
them to a certain extent. However, the market is not only a local dynamics; it
is also a global phenomenon.
Nowadays,
another global dynamics is involved: science and technology add a dynamics different
from that of the market. The market is an equilibrium-seeking mechanism at each
moment of time. The evolutionary dynamics of science and technology nowadays
adds a non-equilibrium-seeking dynamics over time on top of that, and this puts
the nation state in a very different position. Combining an equilibrium-seeking
dynamics at each moment of time with a non-equilibrium seeking one over time
results in a complex adaptive dynamics, or an eco-dynamics, or however you want
to call it – these are different words for approximately the same thing.
For
the nation state, the question arises of how it relates to the global market
dynamics on the one side, and the global dynamics of knowledge and innovation on
the other. Thus, the nation state has to combine two tasks. I illustrated this
model of three subdynamics with a figure in my 2006 book entitled The Knowledge-Based Economy: Modeled,
measured, simulated (see image). The figure shows that first-order interactions
generate a knowledge-based economy as a next-order or global regime on top of
the localized trajectories of nation states and innovative firms. These complex
dynamics have first to be specified and then to be analyzed empirically.
For
example, the knowledge-based dynamics change the relation between government
and the economy; and they consequently change the position of the state in
relation to wealth-retaining mechanisms. How can the nation state be organized
in such a way as to retain wealth from knowledge locally, while knowledge (like
capital) tends to travel beyond boundaries? One can envisage the complex system
dynamics as a kind of cloud – a cloud
that touches the ground at certain places, as Harald Bathelt, for example,
formulated.
How
can national governments shape conditions for the cloud to touch and to remain
on the ground? The Triple Helix of University-Industry-Government Relations can
be considered as an eco-system of bi- and tri-lateral relations. The three
institutions and their interrelations can be expected to form a system carrying
the three functions of (i) novelty production, (ii) wealth generation, and
(iii) normative control. One tends to think of university-industry-government
relations first as neo-corporatist arrangements between these institutional
partners. However, I am interested in the ecosystem shaped through the tri- and
bilateral relationships.
This
ecosystem can be shaped at different levels. It can be a regional ecosystem or
a national ecosystem, for instance. One can ask whether there is a surplus of synergy
between the three (sub-)dynamics of university-industry-government relations
and where that synergy can generate wealth, knowledge, and control; in which
places, and along trajectories for which periods of time – that is, the same
synergy as meant by "a cloud touching the ground".
For
example, when studying Piedmont as a region in Northern Italy, it is
questionable whether the synergy in university-industry-government relations is
optimal at this regional level or should better be examined from a larger perspective
that includes Lombardy. On the one
hand, the administrative borders of nations and regions result from the
construction of political economies in the 19th century; but on the
other hand, the niches of synergy that can be expected in a knowledge-based
economy are bordered also; for example, in terms of metropolitan regions (e.g.,
Milan–Turin–Genoa).
Since
political dynamics are always involved, this has implications for International
Relations as a field of study. But the dynamic analysis is different from
comparative statics (that is, measurement at different moments of time). The
knowledge dynamics can travel and be "footloose" to use the words of Raymond
Vernon, although it leaves footprints behind. Grasping "wealth from knowledge"
(locally or regionally) requires taking a systems perspective. However, the
system is not "given"; the system remains under reconstruction and can thus be
articulated only as a theoretically informed hypothesis.
In
the social sciences, one can use the concept of a hypothesized system
heuristically. For example, when
analyzing the knowledge-based economy in Germany, one can ask whether more synergy
can be explained when looking at the level of the whole country (e.g., in terms
of the East-West or North-South divide) or at the level of Germany's Federal
States? What is the surplus at the national level, or at the European level? How can one
provide political decision-making with the required variety to operate as a
control mechanism on the complex dynamics of these eco-systems?
A
complex system can be expected to generate niches with synergy at all scales,
but as unintended consequences. To what extent and for which time span can
these effects be anticipated and then perhaps be facilitated? At this point,
Luhmann's theory comes in because he has this notion of different codifications
of communication, which then, at a next-order level, begin to self-organize
when symbolically generalized.
Codes
are constructed bottom-up, but what is constructed bottom-up may thereafter
begin to control top-down. Thus, one should articulate reflexively the
selection mechanisms that are constructed from the bottom-up variation by
specifying the why as an hypothesis.
What are the selection mechanisms? Observable relations (such as
university-industry relations) are not neutral, but mean different things for
the economy and for the state; and this meaning of the observable relations can
be evaluated in terms of the codes of communication.
Against
Niklas Luhmann's model, I would argue that codes of communication can be
translated into one another since interhuman communications are not
operationally closed, as in the biological model of autopoiesis. One also needs a social-scientific perspective on the
fluidities ("overflows") and translations among functions, as emphasized, for
example, by French scholars such as Michel Callon and Bruno Latour. In
evolutionary economics, one distinguishes between market and non-market
selection environments, but not among selection environments that are
differently codified. Here, Luhmann's theory offers us a heuristic: The complex
system of communications tends to differentiate in terms of the symbolic
generalizations of codes of communication because this differentiation is
functional in allowing the system to process more complexity and thus to be
more innovative. The more orthogonal the codes, the more options for
translations among them. The synergy indicator measures these options as
redundancy. The selection environments, however, have to be specified historically
because these redundancies—other possibilities—are not given but rather constructed
over long periods of time.
How did you arrive at what you currently work on?
I
became interested in the relations between science, technology, and society as
an undergraduate (in biochemistry) which coincided with the time of the student
movement of the late 1960s. We began to study Jürgen Habermas in the framework
of the "critical university," and I decided to continue with a second degree in
philosophy. After the discussions between Luhmann and Habermas (1971), I
recognized the advantages of Luhmann's more empirically oriented systems approach
and I pursued my Ph.D. in the sociology of organization and labour.
In
the meantime, we got the opportunity to organize an interfaculty department for
Science and Technology Dynamics at the University of Amsterdam after a
competition for a large government grant. In the context of this department, I
became interested in methodology: how can one compare across case studies and
make inferences? Actually, my 1995 book The
Challenge of Scientometrics
had a kind of Triple-Helix model on the cover: How do cognitions, texts, and
authors exhibit different dynamics that influence one another?
For
example, when an author publishes a paper in a scholarly journal, this may add
to his reputation as an author, but the knowledge claimed in the text enters a
process of validation which can be much more global and anonymous. These
processes are mediated since they are based on communication. Thus, one can add
to the context of discovery (of authors) and the context of justification (of
knowledge contents) a context of mediation (in texts). The status of a journal,
for example, matters for the communication of the knowledge content in the article.
The contexts operate as selection environments upon one another.
In
evolutionary economics, one is used to distinguishing between market and
non-market selection environments, but not among more selection environments
that are differently codified. At this point, Luhmann's theory offers a new
perspective: The complex system of communications tends to differentiate in
terms of the symbolic generalization of codes of communication because this differentiation
among the codes of communication allows the system to process more complexity
and to be more innovative in terms of possible translations. The different
selection environments for communications, however, are not given but constructed
historically over long periods of time. The modern (standardized) format of the
citation, for example, was constructed at the end of the 19th
century, but it took until the 1950s before the idea of a citation index was
formulated (by Eugene Garfield). The use of citations in evaluative
bibliometrics is even more recent.
In
evolutionary economics, one distinguishes furthermore between (technological)
trajectories and regimes. Trajectories can result from "mutual shaping" between
two selection environments, for example, markets and technologies. Nations and
firms follow trajectories in a landscape. Regimes are global and require the
specification of three (or more) selection environments. When three (or more)
dynamics interact, symmetry can be broken and one can expect feed-forward and
feedback loops. Such a system can begin to flourish auto-catalytically when the
configuration is optimal.
From
such considerations, that is, a confluence of the neo-institutional program of
Henry Etzkowitz and my neo-evolutionary view, our Triple Helix model emerged in
1994: how do institutions and functions interrelate and change one another or,
in other words, provide options for innovation? Under what conditions
can university-industry-government relations lead to wealth generation and
organized knowledge production? The starting point was a workshop about Evolutionary Economics and Chaos Theory: New
directions for technology studies held in Amsterdam in 1993. Henry
suggested thereafter that we could collaborate further on university-industry
relations. I answered that I needed at least three (sub)dynamics from the
perspective of my research program, and then we agreed about "A Triple Helix of
University-Industry-Government Relations". Years later, however, we took our
two lines of research apart again, and in 2002 I began developing a
Triple-Helix indicator of synergy in a series of studies of national systems of
innovation.
What would you give as
advice to students who would like to get into the field of innovation and
global politics?
In
general, I would advise them to be both a specialist and broader than that.
Innovation involves crossing established borders. Learn at least two languages.
If your background is political science, then take a minor in science &
technology studies or in economics. One needs both the specialist profile and
the potential to reach out to other audiences by being aware of the need to
make translations between different frameworks. Learn to be reflexive about the
status of what one can say in one or the other framework.
For
example, I learned to avoid the formulation of grandiose statements such as
"modern economies are knowledge-based economies," and to say instead: "modern
economies can increasingly be considered as knowledge-based economies." The
latter formulation provides room for asking "to what extent," and thus one can
ask for further information, indicators, and results of the measurement.
In
the sociology of science, specialisms and paradigms are sometimes considered as
belief systems. It seems to me that by considering scholarly discourses as
systems of rationalized expectations one can make the distinction between
normative and cognitive learning. Normative learning (that is, in belief
systems) is slower than cognitive learning (in terms of theorized expectations)
because the cognitive mode provides us with more room for experimentation: One
can afford to make mistakes, since one's communication and knowledge claims
remain under discussion, and not one's status as a communicator. The cognitive
mode has advantages; it can be considered as the surplus that is further
developed during higher education. Normative learning is slower; it dominates
in the political sphere.
What
does the "Triple Helix" reveal about the fragmentation of "national innovation
systems"?
In
2003, colleagues from the Department of Economics and Management Studies at the
Erasmus University in Rotterdam offered me firm data from the Netherlands containing
these three dimensions: the economic, the geographical, and the technological dimensions
in data of more than a million Dutch firms. I presented the results at the
Schumpeter Society in Turin in 2004, and asked whether someone in the audience
had similar data for other countries. I expected Swedish or Israeli colleagues
to have this type of statistics, but someone from Germany stepped in, Michael
Fritsch, and so we did the analysis for Germany. These studies were first
published in Research Policy.
Thereafter, we did studies on Hungary, Norway, Sweden, and recently also China
and Russia.
Several
conclusions arise from these studies. Using entropy statistics, the data can be
decomposed along the three different dimensions. One can decompose national
systems geographically into regions, but one can also decompose them in terms
of the technologies involved (e.g., high-tech versus medium-tech). We were
mainly relying on national data. And of course, there are limitations to the
data collections. Actually, we now have international data, but this is
commercial data and therefore more difficult to use reliably than governmental
statistics.
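The synergy indicator used in these studies is, in information-theoretical terms, the mutual information among three dimensions (geography, technology, organization); it can become negative, and the negative bits are read as redundancy, that is, synergy. A minimal sketch, with an invented micro-dataset of firm records for illustration:

```python
import math
from collections import Counter

def H(counts):
    """Shannon entropy in bits of a frequency distribution."""
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts)

def synergy(firms):
    """Mutual information in three dimensions:
    T(g,t,o) = H(g)+H(t)+H(o) - H(gt) - H(go) - H(to) + H(gto).
    Negative values signal redundancy, read here as synergy."""
    g   = Counter(f[0] for f in firms)
    t   = Counter(f[1] for f in firms)
    o   = Counter(f[2] for f in firms)
    gt  = Counter((f[0], f[1]) for f in firms)
    go  = Counter((f[0], f[2]) for f in firms)
    to  = Counter((f[1], f[2]) for f in firms)
    gto = Counter(firms)
    return (H(g.values()) + H(t.values()) + H(o.values())
            - H(gt.values()) - H(go.values()) - H(to.values())
            + H(gto.values()))

# Invented micro-example: (region, tech level, size class) per firm.
# Here size class is fully determined by the region-technology pair,
# an extreme case of redundancy: T = -1 bit.
firms = [("east", "high", "small"), ("east", "medium", "large"),
         ("west", "high", "large"), ("west", "medium", "small")]
print(synergy(firms))  # -1.0
```

If the three dimensions were statistically independent, T would be zero; real firm data fall between these extremes, and the decomposition of T across regions is what distinguishes, for example, national from regional synergy.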
For
the Netherlands, we obtained the picture that would more or less be expected:
Amsterdam, Rotterdam, and Eindhoven are the most knowledge-intensive and
knowledge-based regions. This is not surprising, although there was one surprise:
We know that in terms of knowledge bases, Amsterdam is connected to Utrecht and
then the geography goes a bit to the east in the direction of Wageningen. What
we did not know was that the niche also spreads to the north in the direction
of Zwolle. The highways to Amsterdam Airport (Schiphol) are probably the most
important.
In
the case of Germany, when we first analyzed the data at the level of the
"Laender" (Federal States), we could see the East-West divide still prevailing,
but when we repeated the analysis at the lower level of the "Regierungsbezirke"
we no longer found the East-West divide as dominant (using 2004 data). So, the
environment of Dresden for example was more synergetic in Triple-Helix terms
than that of Saarbruecken. And this was nice to see considering my idea that
the knowledge-based economy increasingly prevails since the fall of the Berlin
Wall and the demise of the Soviet Union. The discussion about two different
models for organizing the political economy—communism or liberal democracy—had
become obsolete after 1990.
After
studying Germany, I worked with Balázs Lengyel on Hungarian data. Originally,
we could not find any regularity in the Hungarian data, but then the idea arose
to analyze the Hungarian data as three different innovation systems: one around
Budapest, which is a metropolitan innovation system; one in the west of the
country, which has been incorporated into Western Europe; and one in the east
of the country, which has remained the old innovation system that is state-led
and dependent on subsidies. For the western part, one could say that Hungary
has been "europeanized" by Austria and Germany; it has become part of a
European system.
When
Hungary came into the position to create a national
innovation system, free from Russia and the Comecon, it was too late, as
Europeanization had already stepped in and national boundaries were no longer
as dominant. Accordingly, and this was a very nice result, assessing this
synergy indicator on Hungary as a nation, we did not find additional synergy at
the national (that is, above-regional) level. Whereas we clearly found synergy at
the national level for the Netherlands, and in Germany at the level of the
Federal States, we could not find synergy at a national level for
Hungary. Hungary probably came too late to develop a nationally
controlled system of innovations.
A
similar phenomenon appeared when we studied Norway: my Norwegian colleague
(Øivind Strand) did most of our analysis there. To our surprise, the
knowledge-based economy was not generated where the universities are located (Oslo
and Trondheim), but on the West Coast, where the off-shore, marine and maritime
industries are most dominant. FDI (foreign direct investment) in the marine and
maritime industries leads to knowledge-based synergy in the regions on the West
Shore of Norway. Norway is still a national system, but the Norwegian
universities like Trondheim or Oslo are not so much involved in entrepreneurial
networks. These are traditional universities, which tend to keep their hands
off the economy.
Actually,
when we had discussions about these two cases, Norway and Hungary, which both
show that internationalization had become a major factor, either in the form of
Europeanization in the Hungarian case, or in the form of foreign-driven
investments (off-shore industry and oil companies) in the Norwegian case, I
became uncertain and asked myself whether we believed too much in our
indicators. Therefore, I proposed to Øivind that we study Sweden, given the availability
of well-organized data of this national system.
We
expected to find synergy concentrated in the three regional systems of
Stockholm, Gothenburg, and Malmö/Lund. Indeed, 48.5 percent of the Swedish
synergy is created in these three regions. This is more than one would expect
on the basis of the literature. Some colleagues were upset, because they had
already started trying to work on new developments of the Triple Helix, for
example, in Linköping. But the Swedish economy is organized and centralized in
this geographical dimension. Perhaps that is why one talks so much about
"regionalization" in policy documents. Sweden is very much a national
innovation system, with additional synergy between the regions.
Can governments alter
historical trajectories of national, regional or local innovation systems?
Let
me mention the empirical results for China in order to illustrate the
implications of empirical conclusions for policy options. We had no Chinese
data set, but we obtained access to the database Orbis of the Bureau van Dijk
(an international company, which is Wall Street oriented, assembling data about
companies) that contains industry indicators such as names, addresses,
NACE-codes, types of technology, the sizes of each enterprise, etc. However,
this data can be very incomplete. Using this incomplete data for China, we said
that we were just going to show how one could do the analysis if one had full data. We guess that the
National Bureau of Statistics of China has complete data.
I did the analysis with Ping Zhou, Professor at Zhejiang University.
We
analyzed China first at the provincial level, and as expected, the East Coast
emerged as much more knowledge intense than the rest of the country. After
that, we also looked at the next-lower level of the 339 prefectures of China.
From this analysis, four of them popped up as far more synergetic than the
others. These four municipalities were: Beijing, Shanghai, Tianjin, and
Chongqing.
These
four municipalities became clearly visible as an order of magnitude more
synergetic than other regions. Their special characteristic is that, unlike the
others, these four municipalities are administered by the central
government. Actually, this came out of my data and I did not understand it at first; but
my Chinese colleague said that this result was very nice and specified this
relationship.
The
Chinese case thus illustrates that government control can make a difference. It
shows – and that is not surprising, as China runs on a different model – that
the government is able to organize the four municipalities in such a way as to
increase synergy. Of course, I do not know what is happening on the ground. We
know that the Chinese system is more complex than these three dimensions
suggest. I guess the government agencies may wish to consider the option of
extending the success of this development model, to Guangdong for example or to
other parts of China. Isn't it worrisome that all the other and less controlled
districts have not been as successful in generating synergy?
Referring
more generally to innovation policies, I would advise as a heuristics that
political discourse is able to signal a problem, but policy questions do not
enable us to analyze the issues. Regional development, for example, is an issue
in Sweden because the system is very centralized, more than in Norway, for
example. But there is nothing in our data that supports the claim that the
Swedish government is successful in decentralizing the knowledge-based economy
beyond the three metropolitan regions. We may be able to reach conclusions like
these serving as policy advice. One develops policies on the basis of intuitive
assumptions which a researcher is sometimes able to test.
As
noted, one can expect a complex system continuously to produce unintended
consequences, and thus it needs monitoring. The dynamics of the system are
different from the sum of the sub-dynamics because of the interaction effects
and feedback loops. Metaphors such as a Triple Helix, Mode-2, or the Risk
Society can be stimulating for the discourse, but these metaphors tend to develop
their own dynamics of proliferating discourses.
The
Triple Helix, for example, can first be considered as a call for collaboration
in networks of institutions. However, in an ecosystem of bi-lateral and
tri-lateral relations, one has a trade-off between local integration
(collaboration) and global differentiation (competition). The markets and the
sciences develop at the global level, above the level of specific relations. A
principal agent such as government may be locked into a suboptimum. Institutional
reform that frees the other two dynamics (markets and sciences) requires
translation of political legitimation into other codes of communication. Translations
among codes of communication provide the innovation engine.
Is there a connection
between infrastructures and the success of innovation processes?
One
of the conclusions, which holds across all advanced economies, is that
knowledge intensive services (KIS) are
not synergetic locally because they can be disconnected – uncoupled – from the
location. For example, if one offers a knowledge-intensive service in Munich
and receives a phone call from Hamburg, the next step is to take a plane to
Hamburg, or to catch a train inside Germany perhaps. Thus, it does not matter
whether one is located in Munich or Hamburg as knowledge-intensive services
uncouple from the local economy. The main point is proximity to an airport or
train station.
This
is also the case for high-tech knowledge-based manufacturing. But it is
different for medium-tech manufacturing, because in this case the dynamics are
more embedded in the other parts of the economy. If one looks at Russia, the
knowledge-intensive services operate differently from the Western European
model, where the phenomenon of uncoupling takes place. In Russia, KIS
contribute to coupling, as knowledge-intensive services are related to state
apparatuses.
In
the Russian case, the knowledge-based economy is heavily concentrated in Moscow
and St. Petersburg. So, if one aims, as the Russian government proclaims, to create
not "wealth from knowledge" but "knowledge from wealth" (that is, from oil revenues),
it might be wise to uncouple the knowledge-intensive services from the state
apparatuses. Of course, this is not easy to do in the Russian model because
traditionally, the center (Moscow) has never done this. Uncoupling
knowledge-intensive services, however, might give them a degree of freedom to
move around, from Tomsk to Minsk or vice
versa, steered by economic forces more than they currently are (via institutions
in Moscow).
Final question. What does path-dependency mean in the context of
innovation dynamics?
In
The Challenge of Scientometrics. The development, measurement, and
self-organization of scientific communications (1995), I used Shannon-type information theory to study
scientometric problems, as this methodology combines both static and dynamic
analyses. Connected to this theory I developed a measurement method for
path-dependency and critical transitions.
In
the case of a radio transmission, for example, you have a sender and a
receiver, and in between you may have an auxiliary station. For instance, the
sender is in New York and the receiver is in Bonn and the auxiliary station is
in Iceland. The signal emerges in New York and travels to Bonn, but it may be possible
to improve the reception by assuming the signal is from Iceland instead of
listening to New York. When Iceland provides a better signal, it is possible to
forget the history of the signal before it arrived in Iceland. It no longer
matters whether Iceland obtained the signal originally from New York or Boston.
One takes the signal from Iceland and the pre-history of the signal does not
matter anymore for a receiver.
Such
a configuration provides a path-dependency (on Iceland) in
information-theoretical terms, measurable in terms of bits of information. In a
certain sense you get negative bits of information, since the shortest path in
the normal triangle would be from New York to Bonn, and in this case the
shortest path is from New York via Iceland to Bonn. I called this at the time a
critical transition. In a scientific text for instance, a new terminology can
come up and if it overwrites the old terminology to the extent that one does
not have to listen to the old terminology anymore, one has a critical
transition that frees one from the path-dependencies at a previous moment of
time.
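The relay example can be written down directly in Shannon's terms: when the chain New York → Iceland → Bonn is Markovian, the conditional mutual information I(New York; Bonn | Iceland) vanishes, so the pre-history carries no additional bits for the receiver. A minimal sketch with illustrative noise probabilities (not taken from the book):

```python
import math

def cond_mutual_info(p):
    """I(X;Z|Y) in bits, computed from a joint distribution p[(x,y,z)]."""
    def marg(keys):
        m = {}
        for (x, y, z), v in p.items():
            k = tuple({"x": x, "y": y, "z": z}[d] for d in keys)
            m[k] = m.get(k, 0.0) + v
        return m
    pxy, pyz, py = marg("xy"), marg("yz"), marg("y")
    return sum(v * math.log2(py[(y,)] * v / (pxy[(x, y)] * pyz[(y, z)]))
               for (x, y, z), v in p.items() if v > 0)

def flip(bit, eps):
    """Distribution of a noisy copy of a bit (flip probability eps)."""
    return {bit: 1 - eps, 1 - bit: eps}

# Markov relay: New York (X) -> Iceland (Y) -> Bonn (Z), 10% noise per hop.
p_markov = {}
for x in (0, 1):
    for y, py_ in flip(x, 0.1).items():
        for z, pz_ in flip(y, 0.1).items():
            p_markov[(x, y, z)] = 0.5 * py_ * pz_

print(cond_mutual_info(p_markov))  # ~0: given Iceland, New York adds nothing
```

If Bonn instead listened to New York directly (a non-Markov configuration), I(X;Z|Y) would be strictly positive: the origin would still matter beyond what Iceland relays. The vanishing conditional information is exactly the "critical transition" that frees the receiver from the signal's path-dependency.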
Thus,
my example is about radical and knowledge-based changes. As long as one has to
listen to the past, one does not make a critical transition. The knowledge-based
approach is always about creative destruction and about moving ahead,
incorporating possible new options in the future. The hypothesized future
states become more important than the past. The challenge, in my opinion, is to
make the notion of options operational and to bring these ideas into
measurement. The Triple-Helix indicator measures the number of possible options
as additional redundancy. This measurement has the additional advantage that one
becomes sensitive to uncertainty in the prediction.
Loet Leydesdorff is Professor Emeritus at the Amsterdam School of Communications Research (ASCoR) of the University of Amsterdam. He is Honorary Professor of the Science and Technology Policy Research Unit (SPRU) of the University of Sussex, Visiting Professor at the School of Management, Birkbeck, University of London, Visiting Professor of the Institute of Scientific and Technical Information of China (ISTIC) in Beijing, and Guest Professor at Zhejiang University in Hangzhou. He has published extensively in systems theory, social network analysis, scientometrics, and the sociology of innovation (see at http://www.leydesdorff.net/list.htm). With Henry Etzkowitz, he initiated a series of workshops, conferences, and special issues about the Triple Helix of University-Industry-Government Relations. He received the Derek de Solla Price Award for Scientometrics and Informetrics in 2003 and held "The City of Lausanne" Honor Chair at the School of Economics, Université de Lausanne, in 2005. In 2007, he was Vice-President of the 8th International Conference on Computing Anticipatory Systems (CASYS'07, Liège). In 2014, he was listed as a highly-cited author by Thomson Reuters.
Literature and Related
links:
Science & Technology Dynamics,
University of Amsterdam / Amsterdam School of Communications
Research (ASCoR)
Leydesdorff,
L. (2006). The Knowledge-Based Economy: Modeled, Measured, Simulated. Universal Publishers, Boca Raton, FL.
Leydesdorff, L. (2001). A Sociological Theory of Communication: The Self-Organization of the
Knowledge-Based Society. Universal Publishers, Boca Raton, FL.
Leydesdorff, L. (1995). The Challenge of Scientometrics: The Development, Measurement, and Self-Organization of Scientific Communications. Leiden: DSWO Press, Leiden University.
http://www.leydesdorff.net/
School of Global Studies, University of Gothenburg