Struggle for a Constitutional Regime: Armenian-Young Turk Relations in the Era of Abdulhamid II, 1895-1909

Although scholars have devoted substantial attention to Armenian revolutionary movements, relations between the various Armenian organizations and the Young Turks and their leaders, whether individual or institutional, have received tangential consideration at best. This is largely due to the paucity of original sources for the second half of the reign of Sultan Abdulhamid II: most Ottoman archives, as well as those of the Armenian Revolutionary Federation (ARF, also referred to as Tashnagtsutyun) and the Social Democratic Hnchagyan Party, were very hard, if not impossible, to access. These difficulties were partially resolved when the Ottoman Archives were opened to the academic public in 2005, while the ARF archives became accessible by special permission only. The archives of the Armenian Patriarchates of Constantinople and Jerusalem, as well as those of the Social Democratic Hnchagyan Party, remain closed to the general public. Two seminal studies by a leading Turkish historian, Şükrü Hanioğlu, provided the first careful assessments of the Young Turks, albeit from the Turkish perspective, relying heavily on Ottoman archival materials. While both studies have added to our knowledge of what transpired at the time, they skirted much of the critical Armenian contribution that helped the Young Turks attain power. It is the overall purpose of this dissertation to address precisely these significant relationships between Armenian leaders, political parties, and revolutionary movements and their Ottoman counterparts, to clarify what actually occurred on the ground and to capture the reasons why these relations developed.
This study aims to shed light on some of the reasons that motivated Armenians and Ottomans alike to collaborate and, to the degree that it can be ascertained, to identify the causes of their failures. The dissertation opens with an analysis of the long-promised reform efforts, ostensibly to benefit the Armenian inhabitants of the Ottoman Empire after the Treaty of Berlin (1878), and the skillful manipulations by Sultan Abdulhamid II to dilute them. An effort is made to assess carefully the palace's reliance on the Kurdish Hamidiye regiments, which were used to sabotage reforms, and to understand better Constantinople's political intrigues. Because it was during this process that the Armenian Millet [nation], hitherto known as the Millet-i Sadika (Loyal Millet), was transformed into a Millet-i Asiya (Rebel Millet), the introduction examines how revolutionary agitations led to profound socio-political schisms. Much of this rich history is described and analyzed in the six chapters that follow. Chapter One discusses the history from the "May Reform Project of 1895" to Abdulhamid II's deposition in 1909, placing the agitation for reforms within the overall tensions that affected the Ottoman Empire. As the Armenian Millet faced its conundrum at a time when Turks themselves wished to establish a constitutional monarchy to unite the empire and its peoples, the roles played by Armenian subjects were especially important. In turn, the prospect of instituting a constitutional regime attracted Armenian revolutionary movements to Young Turk leaders, who promised that their own revolution against the sultan would open a new era in Ottoman history. The Young Turks assured their European critics that their objective was to restore the 1876 Constitution, which was promulgated at the time Abdulhamid II acceded to the throne but was suspended as a result of the Russo-Turkish War of 1877-78.
Several maintained that they were following in the footsteps of the reformist Midhat Pasha and his disciples, better known as the Yeni Osmanlılar (New Ottomans, also known as Young Ottomans). Despite the Young Turks' initial enthusiasm for establishing cordial relations with their Armenian revolutionary counterparts, strong elements of suspicion and mistrust were present. These important differences are discussed and analyzed in Chapter Two. By December 1907, Armenian and Young Turk forces managed to overcome political hurdles to form a united front whose declared goal was to topple Abdulhamid II. How the two sides tried to overcome their mutual reservations and doubts, which colored future ties, is assessed in Chapter Three. Inasmuch as one of the chief disputes between Armenian revolutionary and Young Turk leaders was the condition of the Armenian-inhabited provinces, Chapter Four examines the situation in the eastern provinces and how the Armenian revolutionary movement was able (or at least tried very hard) to revolutionize the Kurdish and Turkish populations there, in accordance with the directives of the first anti-Hamidian conference held in Paris in 1902. Along the same lines, Chapter Five tackles the issue of Armenian-Macedonian relations, which were blessed, and to a certain degree manipulated, by the fledgling state of Bulgaria for its own political and national gains. Indeed, one could speak of a Bulgarian connection in this cooperation, one seldom addressed in scholarly sources. Finally, Chapter Six offers a reevaluation of the reasons behind the April 1909 Adana Massacres and illustrates why the ARF continued to cooperate with the Young Turks despite the slaughters, if for no other reason than to give the fledgling constitutional-revolutionary movement an opportunity to succeed. The dissertation closes with an assessment of Armenian-Young Turk relations.
Despite these outrageous developments, the CUP (Committee of Union and Progress, Ittihad ve Terakki Cemiyyeti) leadership covered up the failings of bewildered officials, torn between the desire to investigate wrongdoing and the need for Armenian political assistance against the Sublime Porte. At both the official and the popular levels, Armenian ties with Ottoman leaders disintegrated, planting the seeds of fresh animosities. By 1913, the gap separating the two nations had widened, and while few anticipated future catastrophes, there were revealing signs of simmering tensions. Regrettably, hardly anyone exercised the foresight to prevent new tragedies.
The labour market for older workers attracts growing attention for its role in coping with the problems of demographic change. How can older workers be kept in employment longer than before? How can their re-employment chances be raised? Although the integration of the older generation into the labour market has improved in recent years, long-term unemployment among this group remains high in Germany, pointing to ongoing deficits. In contrast to many other studies, those presented here also analyze the demand side of the labour market. They all rely on combined information on employers and employees, evaluated with micro-econometric methods; this makes it possible to study establishment behavior and individual employment histories in their interplay. Chapter A examines how establishments influence the retirement behavior of their older workers. A special LIAB data set for the years 1996 to 2004 is used to estimate hazard rate models identifying the determinants of the end of individual working life. It turns out that institutions such as employment protection, works councils and collective bargaining do not delay the transition out of employment.
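The hazard-rate idea behind Chapter A can be illustrated with a minimal sketch. The data here are synthetic (the LIAB microdata are confidential), and the function is a bare-bones discrete-time life-table estimator, not the model actually estimated in the study:

```python
# A minimal sketch of the hazard-rate idea: at each tenure period t,
# the exit hazard is the share of workers still at risk (employed) at t
# who actually leave during t. Censored spells count as at risk but not
# as exits. Data are synthetic, not drawn from the LIAB.

def exit_hazard(spells):
    """spells: list of (last_period, censored) tuples.
    Returns {period: hazard} for every period with workers at risk."""
    horizon = max(t for t, _ in spells)
    hazard = {}
    for t in range(1, horizon + 1):
        at_risk = sum(1 for last, _ in spells if last >= t)
        exits = sum(1 for last, cens in spells if last == t and not cens)
        if at_risk:
            hazard[t] = exits / at_risk
    return hazard

# Six synthetic spells: (period of exit or censoring, censored?)
spells = [(1, False), (2, False), (2, False), (3, True), (3, False), (3, False)]
h = exit_hazard(spells)   # {1: 1/6, 2: 2/5, 3: 2/3}: hazard rises with tenure
```

A full duration analysis would replace this with a parametric or proportional-hazards model including establishment covariates; the sketch only shows what the estimated quantity is.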
Instead, they undermine the position of older workers and favour an early exit from working life. The hypothesis that employers' decisions on retaining older employees serve as a means of adjusting to fluctuations in demand was confirmed; the dependence of workers' exit probability on their establishment's sales expectations speaks clearly here. On the other hand, circumstances were found under which older workers tend to remain employed: a high share of young employees lowers the propensity of their older coworkers to exit the labour market. Fears that technical change would harm the labour-market participation of older people found no support. On the contrary, the results suggest that modern technical equipment in an establishment has positive repercussions on the employment prospects of older workers as well, and no negative short-term effects of the introduction of new technology could be detected. These results confirm the expectation that the transition into the post-employment phase is heavily influenced by the employing establishment and thus by the demand side of the labour market. Chapter B adds a specific analysis of employment protection and its threshold of application. There are strong theoretical reasons to believe that older workers in particular are hindered from re-entering employment by the employment protection law in force and its threshold. According to common opinion, the obligation to take account of social factors, one of which is age, in choosing the persons to be made redundant ("Sozialauswahl") stands in the way of hiring older applicants. This hypothesis could not be confirmed by a regression discontinuity (RDD) analysis of the LIAB data. Because the method uses changes in legislation as quasi-natural experiments, it is immune to errors in the specification of control variables or parametric form.
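The discontinuity logic of Chapter B can be sketched in a few lines. The threshold value (10 employees), bandwidth and hiring rates below are invented for illustration; the actual analysis exploits legislative changes to the German employment-protection threshold in the LIAB data:

```python
# A schematic regression-discontinuity comparison: establishments just
# below vs. just above a size threshold at which employment protection
# applies. Threshold, bandwidth and hiring rates are synthetic.

def rdd_gap(data, threshold, bandwidth):
    """data: list of (establishment_size, outcome) pairs.
    Returns mean outcome just above minus just below the threshold."""
    below = [y for x, y in data if threshold - bandwidth <= x < threshold]
    above = [y for x, y in data if threshold <= x < threshold + bandwidth]
    return sum(above) / len(above) - sum(below) / len(below)

# Synthetic hiring rates of older applicants around the threshold
data = [(7, 0.11), (8, 0.10), (9, 0.12), (10, 0.05), (11, 0.04), (12, 0.06)]
gap = rdd_gap(data, threshold=10, bandwidth=3)   # negative: a drop at the threshold
```

In the study itself the identifying variation comes from the legal change, not from a single cross-section; the sketch only shows the comparison being made at the cutoff.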
Nor was support found for the hypothesis that establishments hire a larger share of older workers once they have engaged in further training. The effects of company training on the establishment's age structure in subsequent years, estimated in Chapter C, were insignificant at best. Once again, a combination of data from the IAB Establishment Panel and the employment statistics formed the empirical base of the study. The possible endogeneity of the company's training decision was controlled for by a matching algorithm. A comparison of twin establishments did not support the assumed causal effect of company training on a higher share of older workers. This may be due to an age-selective choice of younger participants for training, crowding out those not included in their employer's training decision.
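The matching step of Chapter C can likewise be sketched. The propensity scores and outcomes are synthetic, and this single-score nearest-neighbour pairing is only a schematic stand-in for the matching procedure used in the study:

```python
# A schematic nearest-neighbour match on a single (assumed) propensity
# score: each training establishment is paired with the closest
# non-training establishment, and the effect is the mean outcome gap
# across pairs. Scores and outcomes are synthetic.

def matched_effect(treated, controls):
    """treated, controls: lists of (propensity_score, outcome).
    Returns the mean treated-minus-matched-control outcome."""
    gaps = []
    for score, outcome in treated:
        _, matched_outcome = min(controls, key=lambda c: abs(c[0] - score))
        gaps.append(outcome - matched_outcome)
    return sum(gaps) / len(gaps)

# Synthetic shares of older workers after the training decision
treated = [(0.30, 0.22), (0.60, 0.25)]
controls = [(0.28, 0.20), (0.55, 0.24), (0.90, 0.35)]
effect = matched_effect(treated, controls)   # small positive gap in this toy data
```

An applied version would estimate the score from establishment characteristics and enforce common support; the toy numbers merely show how "twin establishments" are compared.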
Socio-economic scenarios constitute an important tool for exploring the long-term consequences of anthropogenic climate change and available response options. They have been applied for different purposes and to a different degree in various areas of climate change analysis, typically in combination with projections of future climate change. Integrated assessment modeling (IAM) has used them to develop greenhouse gas (GHG) emissions scenarios for the 21st century and to investigate strategies for mitigating GHG emissions on a global scale. Analyses of climate change impacts, adaptation and vulnerabilities (IAV) depend heavily on assumptions about underlying socio-economic developments, but have employed socio-economic scenarios to a lesser degree, due mainly to the multitude of contexts and scales of such analyses. A more consistent use of socio-economic scenarios that would allow an integrated perspective on mitigation, adaptation and residual climate impacts remains a major challenge. We assert that the identification of a set of global narratives and socio-economic pathways offering scalability to different regional contexts, a reasonable coverage of key socio-economic dimensions and relevant futures, and a sophisticated approach to separating climate policy from counter-factual "no policy" scenarios would be an important step towards meeting this challenge. Such "Shared Socio-economic Pathways" (SSPs) should be specified in an iterative manner and with close collaboration between IAM and IAV researchers to assure coverage of key dimensions, sufficient scalability and widespread adoption. They can be used not only as inputs to analyses, but also to collect the results of different climate change analyses in a matrix defined by two dimensions: climate exposure as characterized by a radiative forcing or temperature level and socio-economic development as classified by the SSPs.
For some applications, SSPs may have to be augmented by "Shared Climate Policy Assumptions" (SPAs) capturing global components of climate policies that some studies may require as inputs. Finally, sufficient coverage of the relevant socio-economic dimensions for the analysis of mitigation, adaptation and residual climate impacts may be assessed by locating the SSPs along the dimensions of challenges to mitigation and to adaptation. We conclude that the development of SSPs, and integrated socio-economic scenarios more broadly, is a useful focal point for collaborative efforts between IAM and IAV researchers. This is likely to be a long-term and iterative enterprise comprising a collection of different activities: periodically taking stock of the evolving scenario work in both research communities, linking up individual efforts, and pursuing collaborative scenario work through appropriate platforms that still need to be established. In the short run, an important goal is to produce tangible outcomes that would allow the 5th Assessment Report of the IPCC to take a more integrated perspective on mitigation, adaptation and residual climate impacts.
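The two-dimensional results matrix this abstract describes, with climate exposure on one axis and socio-economic pathway on the other, can be sketched as a simple data structure. The forcing values and pathway labels here are placeholders chosen for illustration, not taken from the text:

```python
# Sketch of the results matrix: one axis is climate exposure (here
# illustrative radiative-forcing levels in W/m2), the other the
# socio-economic pathway. Values and labels are placeholders.

forcing_levels = [2.6, 4.5, 6.0, 8.5]
pathways = ["SSP1", "SSP2", "SSP3", "SSP4", "SSP5"]

# Each cell collects the studies carried out for that combination
matrix = {(f, p): [] for f in forcing_levels for p in pathways}
matrix[(4.5, "SSP2")].append("hypothetical crop-yield impact study")
```

The point of the structure is that mitigation, adaptation and impact studies done by different groups can be filed by the same two coordinates and so compared.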
Preface

Half a century since its inception as a community of western European democracies, limited in scope and competences, the European Union (EU) is taken to denote a composite polity that combines unity and multiplicity while having the capacity to produce publicly binding decisions and allocate values in European society. It is thus possible to capture the endemic systemic complexity of the regional process through the lens of new theoretical perspectives with a view to developing a series of novel understandings of EU governance in the early twenty-first century. So far, the EU polity refers to a system of institutionalized shared rule among multiple state and non-state actors, characterized by the dispersal of political authority among various levels and the transcendence of hierarchical forms of power distribution. Thus different notions of democracy, legitimacy and representation produce novel accounts of post-national politics. Accordingly, a new democratic concept for the EU project should entail a balanced mix of social and political forces that share in the emerging sovereignty of the larger unit. Within the latter, public authority should not reside within a single decision-making centre, but rather should be diffused among different governance levels and forms of social, political and cultural contention that can combine territorial and substantive public issues.
At the same time, recent changes in the workings of the EU polity have not affected its nature as an essentially state-centric project, preserving a balance between state sovereignty and a relatively moderate yet discernible deepening of integration by means of producing a system of political co-determination; in other words, a new form of synarchy between states and demoi – an ensemble sui generis of highly interdependent systems – is created, its structural and functional interaction resulting in a multilogical system of entwined sovereignties. Yet the EU polity still remains a treaty-constituted body politic and not the unilateral act of a single and undifferentiated demos. Moreover, it does not derive its political authority from its citizens directly and has not – as yet – resulted in a complete fusion among different levels of public authority. Also, its constituent parts, in the form of historically constituted nation-states, are free to dissociate themselves from the larger unit. Finally, its emerging yet nebulous and even controversial constitutional identity rests heavily on the domestic orders of states, although the EU already projects a profound intertwining of democracies regarding the joint exercise of fundamental powers. Arguably, all of the above is crucial to understanding the changing conventions regarding state sovereignty, which may now be interpreted as the right to be involved in the joint exercise of competences with other states. Linked to the question of sovereignty is that of democracy, which currently points to a negative side-effect of European integration: the growing dissonance between the requisites of democratic rule and the actual conditions on which the political management of EU affairs is largely based. The crucial distinction here concerns an institutional and a socio-psychological perspective.
Whereas the former focuses on power-sharing and on institutional reform as a solution to the actual or perceived problems of democracy in the EU, the latter is concerned with questions of European identity and the formation of a composite European demos that is nonetheless distinct as a collectivity. As the current debate raises fundamental questions about the future form of the EU as a polity of highly interrelated states and demoi – a synarchy of entwined sovereignties – recent reforms, including the Lisbon Treaty, whose ratification is still pending, have failed to enhance the democratic properties of the general system, leaving the EU to resemble a system of democracies more than a democratic system in its own right. In a period when transnational pressures are challenging both intrastate and interstate relations, it may no longer be enough to confine democracy within state boundaries to deal effectively with the implications of new forms of polity. This raises new questions, such as how to hold transnational decision-makers accountable to citizens who belong to different national political systems. Such questions reflect substantive concerns that have grown as the regional process has evolved from an interstate diplomatic forum to a fully-fledged polity. This development, otherwise known as a "normative turn" in EU studies, has led to scholarly interest in the idea that the EU might one day transform itself into a democratic political system. While there is some measure of agreement that the EU is not democratic, there is no consensus on how it might become so. Indeed, there are two different understandings of what the EU's democratic deficit comprises. The first focuses on institutional properties, arguing that the problem of democracy in the EU is tied to the flawed interinstitutional interactions that characterize the functioning of a non-state polity like the EU.
In this context, proposals for further reform speak of the EU's "institutional imbalance" and of the need to enhance the public accountability as well as the representative nature of EU policy-makers and decision-takers. The second focuses on sociopsychological factors and makes the case for a new sense of European "demos-hood". It argues that the EU's present democratic pathology occurs because of the absence of a European demos. As a consequence, this second perspective is more interested in collective civic identity and the extent to which there is "a feeling of community" amongst Europeans. Acknowledging that the absence of a European demos – assuming that a legal or economic demos already exists – is a barrier to a democratic Europe, proposals for further reform tend to suggest paths to transnational demos-formation based on a common European civicness3. These notions of plural citizenship give rise to the idea of a "Republic of Europeans", to which we now turn.
In: Population and development review, Volume 32, Issue 4, p. 793-798
ISSN: 1728-4457
In a study of the economics of climate change commissioned by the British government, released on 30 October, the former World Bank chief economist Sir Nicholas Stern presents a vigorously argued case for early curtailment of greenhouse gas emissions and proposes mitigation strategies that appear to offer highly favorable benefit-cost ratios. An excerpt from the Executive Summary of the Stern Review, concerned with the nature and magnitude of the deleterious economic consequences of anticipated climate change, is printed below. The principal scientific reviews of knowledge of climate change, its consequences, and mitigation strategies are the (roughly) quinquennial reports of the Intergovernmental Panel on Climate Change (IPCC)—the work of hundreds of lead authors, subjected in turn to elaborate peer review and line-by-line scrutiny by interested governments. They represent a broad, though not total, expert consensus. The third IPCC assessment was issued in 2001; the fourth, already in draft, will be released next year. The Stern Review draws heavily on this scientific underpinning, but goes further than the IPCC exercise in computing economic values for the projected changes and costing out remedial policy responses. More forthright in style and emphatic in its conclusions, it reads as a resounding call to international action. The Review explores the implications of atmospheric concentrations of carbon dioxide and other greenhouse gases being capped at 550 ppm (parts per million), double the preindustrial level, an objective it argues is feasible. That concentration would be reached by 2050 at current emission rates, or by 2035 if emissions rise as expected. The resulting warming, it believes, would be 2-5°C, roughly in accord with the IPCC's third-assessment estimates (see the Documents section of PDR 27, no. 1 for the IPCC projections).
The positive feedbacks identified in some recent studies, generated by processes such as release of methane from permafrost, could lead to still higher temperatures. The forecast effects described are by now familiar, though no less grim for being so: species extinctions, expanding disease zones, reductions in surface water availability, coastal flooding, ocean acidification, and so on. The Review translates these effects into economic losses, adjusting for risk, using Monte Carlo simulation applied to an integrated assessment model (the so-called PAGE 2002 model). The exercise, requiring many heroic—and often contestable—assumptions, produces the most quoted figures in the report: that climate change "will reduce welfare by an amount equivalent to a reduction in consumption per head of between 5 and 20%"—now and into the future. The absolute magnitude of those projected economic losses is made arbitrarily large by their permanence. Typical benefit-cost calculations applied to appraisal of development projects convert such long-term trajectories into a present value using a discount rate comparable to a market interest rate or some (lower) assumed rate of time preference. The Stern Review, however, argues that any discounting is ethically inappropriate for this global issue: "if a future generation will be present, we suppose that it has the same claim on our ethical attention as the current one" (p. 31). The only exception is an allowance for the possibility that future generations are not present—through human extinction—which is held to justify a minuscule discount rate of 0.1 percent per annum (p. 161). The percentage economic losses from climate change appear less daunting if set against the recent pace of expansion in the world economy. Real per capita income growth since 1990 has averaged about 1.5 percent per year worldwide, and about 3 percent in developing countries.
In such a regime, a 5 percent one-time drop to a lower expansion path is no more than a two- or three-year delay in attaining a given income level. For China and India, whose economies are doubling in size each decade, even a 20 percent reduction in income would be a mere hiccough on the path to affluence—hardly enough to motivate major shifts in lifestyle ambitions. The dire repercussions on global environments of a greenhouse warming at the upper end of the forecast range are poorly captured by those percentages. Demography has a marginal place in the Review. The underlying IPCC emission scenarios incorporate expected population growth, using the UN medium projections. Many of the climate-change effects incur costs that are similarly magnified by population growth. One-sixth of the world's population is "threatened" by water scarcities; 1 in 20 people may be displaced by a rising sea level; mortality may increase from vector-borne diseases and from malnutrition linked to income losses. The later part of the Review is concerned with mitigation and adaptation strategies. It lays out an ambitious set of policies for transition to a low-carbon economy that could stabilize greenhouse gas concentrations over the next several decades. By 2050, emissions would have to be 25 percent below today's and emissions per unit of GDP 75 percent below. In perhaps the most problematic part of the exercise, the Review asserts that such cuts could be achieved at a cost of only around 1 percent of annual global GDP—implying that investment in mitigation should be strongly favored on straightforward economic grounds. (This figure, like others in the Review, is acknowledged to lie within a substantial envelope of uncertainty—here a range of −1.0 percent to +3.5 percent of global GDP (p. 212), or, drawing on a wider range of models, −4 percent to +15 percent (p. 241).)
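The back-of-envelope arithmetic behind two of the figures above can be sketched in a few lines: the perpetuity formula shows why a permanent annual loss becomes arbitrarily large under a near-zero discount rate, and compound growth shows why a one-time income drop translates into only a few years' delay. This is an illustrative sketch, not the Review's PAGE 2002 model; the function names and the 5-unit loss are invented for the example.

```python
import math

def perpetuity_pv(annual_loss: float, discount_rate: float) -> float:
    """Present value of a constant annual loss continuing forever: L / r."""
    return annual_loss / discount_rate

def delay_years(drop: float, growth: float) -> float:
    """Years of compound growth needed to offset a one-time proportional
    income drop: solve (1 - drop) * (1 + growth)**t = 1 for t."""
    return -math.log(1.0 - drop) / math.log(1.0 + growth)

# A permanent loss of 5 units/year is worth ~50x more at Stern's 0.1%
# discount rate than at a market-like 5% rate: permanence plus near-zero
# discounting drives the headline magnitudes.
print(perpetuity_pv(5.0, 0.001) / perpetuity_pv(5.0, 0.05))

# A 5% one-time drop at ~3% annual growth: under two years' delay.
print(round(delay_years(0.05, 0.03), 2))   # 1.74
# At the ~1.5% world average: roughly three and a half years.
print(round(delay_years(0.05, 0.015), 2))  # 3.45
# A 20% drop in an economy doubling each decade (~7.2%/yr): ~3.2 years.
print(round(delay_years(0.20, 0.072), 2))
```

The second calculation makes concrete the text's point that, against sustained growth, even a 20 percent permanent income reduction amounts to only a few years' delay in reaching a given income level.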
In the decades before the investment pays off, adverse consequences of the warming trends already underway must be dealt with by adaptation, such as through better disaster preparedness, lessening the vulnerability of infrastructure, and risk-pooling measures. The excerpt is from pp. iii–iv and vi–xi. The full Stern Review (579 pages), the executive summary, and the commissioned background papers are available online at «http://www.hm-treasury.gov.uk/independent_reviews/stern_review_economics_climate_change/sternreview_index.cfm». A hard copy of the Review will be issued by Cambridge University Press.
My dissertation examines photographers' engagement with the metropolis of Berlin between 1871 and 1914. Using the photographs, I investigate what can be said about the perception of the city in their time. The period's frame of perception, characterized on the one hand by Wilhelminism and on the other by the dawn of modernity in the world of media, conditioned both the photographers and the viewers of their photographs. The mutual dependence of these elements is demonstrated through their photographic statements, and theses are advanced about what these reveal concerning aesthetic questions, the perception of the metropolis, and modernity. The specific possibilities of the medium are taken into account. These observations rest on the basic assumption that photographers, on the one hand, structure their own relationship to the world through their photographs (whether following inherited canons of representation or their own inventions) and, on the other, create worlds for which they themselves do not yet have concepts. In this way they reach into the future, creating pictorial worlds whose content opens up only to later generations. Taken together, the images examined yield a cross-section of the visual, and by extension the cultural, political, and social cosmos of imperial Berlin. The dissertation undertakes the first comprehensive survey of Berlin city and architectural photography and aims to show the focal points of the city image as captured by the medium. It points both to phenomena typical of metropolitan photography in general (for example, the idealizing photograph produced for tourism) and to aspects of metropolitan reality that, though treated in literature, were only sparsely documented photographically: poverty, the urban periphery, and nightlife.
The dissertation is organized thematically, based on an examination of largely unpublished city and architectural photographs (from the elegant collotype portfolio to the postcard, the press photograph, and the private snapshot) held in the eight most important Berlin photographic collections. The methodological introduction explains the study's fundamentally art-historical, yet, given its subject, multidisciplinary approach. Four chapters follow, addressing: 1. the dissemination of the image of Berlin through modern reproduction processes, particularly collotype printing and the illustrated press (Images of Metropolitan Life); 2. the photographic treatment of the massive demolition and construction activity in Berlin (The City as Palimpsest); 3. amateur and artistic reflections on the urban periphery and on nature within the urban space (City, Nature, Cityscape); and finally 4. the photodocumentary confrontation with housing misery in Berlin's tenement blocks (Stony Berlin – The Other Side of Society). Insight is gained through formal, factual, and thematic comparisons of the Berlin material with examples from painting (the nature chapter), from other cities (The Other Side of Society), across different reproduction processes (Metropolitan Life), and across photographers' differing documentary and artistic approaches (Palimpsest). The result is not only a historical account of the condition of the city in the Wilhelmine era, but also the discovery, in the organization and content of the images, of a manifestation of the loss of space and the growing acceleration of modern urban life. ; In this study, photography is interpreted as a parameter within a cultural context and a reflector of the dramatic changes in society stirred by urbanization and industrialization in the modern age.
Berlin city photographs, whether an image of a monumental landmark or of the urban periphery, equally reveal changes in the city, in its formal depiction, and in modes of visual perception. Viewed as a whole body, the evidence of these changes ranges from the motion captured in images to the loss of spatial depth employed in abstracting their subject. The doctoral dissertation approaches city photography based upon certain cultural and political premises. During the period following the German Empire's founding in 1871, the nation was subject to massive cultural, political, and social changes. Drastic alterations to the city's shape began to appear during the first half of the 19th century and reached a peak at its end. Berlin's face and structure changed from those of a provincial capital of the Prussian kingdom into those of a heavily industrialized metropolis. Photography was invented in 1839 and developed rapidly within thirty years: it moved from being a means of, and subject for, mostly technical experimentation and entrepreneurship to being a serious means of image-making, rivaling painting and other media. Not only for painters and etchers, but also for the press and a new group of amateur image-makers, photography became an important tool and means of expression. It provided a forum for diverse ways of presenting the city, from the postcard to the snapshot to the pictorialist platinum print. This phenomenon challenged concepts of authorship and representation. From these general premises, which typify the transformation of European capitals into modern, industrialized places, the dissertation focuses on Berlin's situation. After an introduction to the methodological grounds for the study, Berlin city photographs are discussed in four thematic chapters that each evidence the working premises in different ways.
Images of the Modern Metropolis discusses the city as a subject of commercial photographic imagery; the forms and strategies of distribution of the city image; and the lively, mostly technically and scientifically oriented photography community in Berlin at the end of the 19th century. The City as Palimpsest examines the destruction and new construction of Berlin's architectural and urban fabric as it was reflected in the photograph. It focuses on the artistic and sentimental treatment of the historic city by pictorialists and preservationists. In this chapter, the city presents itself not as a new metropolis, but as a "palimpsest", a ground for several layers of past "cities" that are destroyed and rebuilt over and over again. Photography in this context is stretched between serving as an objective tool of documentation and as a means of idealization. An important subject of Berlin's photographic imagery is the periphery and the parks of the city. City, Nature, Cityscape reflects on the presentation of nature in an urban context and the ambiguity of its representation as artistic "landscape" or mere "nature". Berlin's periphery turns out to be the playground for the most experimental approaches to city photography. Photography and painting both prove to be witnesses to, and voices of, a new artistic development that is best described as the growing importance of the banal and the random. A second subject discussed in this chapter is the increasing popularity of snapshot photography created during weekend excursions to Berlin's outskirts. Industrialization brought new social issues to the city's agenda. More modest than a work such as Jacob Riis' book "How the Other Half Lives," which documented conditions in New York's tenements, a research project initiated by a local health insurance company in Berlin sought to bring to light the interconnection between illness and living conditions in that city's tenements.
Stony Berlin – How the Other Half Lives compares the insurance company's study with Jacob Riis' effort and with similar work in Vienna by Hermann Drawe. The approach in the Berlin case can best be described as comparatively fact-oriented and decidedly "objective", whereas Riis and Drawe both used the medium from a rather editorial stance. For each of these four chapters, a small selection of photographs was made from the totality of eight major Berlin photo collections. The photographs are understood as rooted in a complex cultural webbing and as subjects in a dialogue with the observer. The dissertation's methodology is cultural-historical, recognizing the importance of a comprehensive understanding of the medium instead of limiting its history to single individuals and their oeuvres.
• After Washington made substantial progress on tobacco control in the mid-1990s, the tobacco industry stifled tobacco control activities in the state through a mixture of campaign contributions and legal challenges.
• Political campaign contributions remained steadily high throughout the 1990s. Philip Morris, RJ Reynolds, the Tobacco Institute, Lorillard, Brown & Williamson, and the Smokeless Tobacco Council contributed $362,298 to campaigns over the 1996 through 2000 election cycles: $114,123 in the 1996 election cycle, $109,975 in 1998, and $138,200 in 2000.
• From 1996-2000, 92% of these campaign contributions by the tobacco industry went to Republican candidates, party contributions, and soft money.
• The largest lifetime recipients of campaign contributions were Clyde Ballard ($16,830, R-East Wenatchee), William Grant ($7,400, D-Franklin), Dan McDonald ($7,246, R-King), and Pat Scott ($5,490, D-Snohomish). Ballard and Grant are both powerful in the House leadership; Ballard is the Co-Speaker of the House of Representatives, and Grant is the House Democratic Caucus Chair. McDonald was the Majority Leader of the Senate from 1996-1999 and, prior to that, chaired the powerful Ways and Means Committee.
• The tobacco industry has also spent heavily on lobbying: from 1996-2000, the industry spent $1,864,086 to lobby members of the legislature and the state administrative offices. This includes lobbyists' salaries and perks given to legislators such as holiday gifts, entertainment and meals.
• Washington and its Attorney General, Christine Gregoire, played an active role in the state tobacco trials and the subsequent 46-state settlement (known as the Master Settlement Agreement) in 1998. The legislature reserved $100 million of the settlement money for a new Department of Health tobacco control program.
• Department of Health officials and health advocates had requested $26 million to begin the tobacco control program, but, because of pressure from Republican members, the legislature allocated only $15 million for the first year, about half of what the federal Centers for Disease Control and Prevention recommends for Washington State.
• Attempts to pass local smoking regulations with provisions stricter than the 1985 Clean Indoor Air Act have been hampered by an ambiguous legal question: whether the Clean Indoor Air Act preempts local legislation. Puyallup, the only city to attempt a smokefree restaurant ordinance, rescinded it after being challenged with an industry-funded lawsuit, without seeking a court ruling on the issue of preemption. The Attorney General's office has not issued a formal decision on this question, which has contributed to the reluctance of local governments. This circumstance has allowed the tobacco industry to stop local clean indoor air regulations.
• Spokane has adopted a voluntary approach to controlling secondhand smoke in restaurants with its Big Air Program. Many restaurants in the City and County of Spokane have voluntarily gone smokefree since the program was initiated in 1996. Although city officials initiated their program independently, it is very similar to the tobacco industry's voluntary smoking regulations promoted through its "accommodation" and "red light/green light" programs.
• The industry has also orchestrated legal pressure against the federally funded ASSIST tobacco control project, making allegations of "illegal lobbying" and filing a complaint with the Public Disclosure Commission. While the ASSIST project has ended, the tobacco industry successfully used this experience to discourage health departments and advocates from using the policy process to promote tobacco control.
• Several counties, including King, Snohomish, Pierce and Spokane, passed outdoor and color advertising restrictions between 1996 and 1999. These measures abolished all outdoor billboards and restricted in-store advertising to small, black-and-white posters. The industry supported challenges that overturned these restrictions on the grounds that they were preempted by federal law and violated the First Amendment. The industry prevailed in the federal Ninth Circuit Court of Appeals, and the defendants agreed not to appeal in exchange for the industry not demanding that its legal fees be paid. This outcome has stopped similar legislation in Washington. Similar laws were upheld elsewhere in the United States, and as of early 2001 the issue was under review at the US Supreme Court. If the Supreme Court upholds such laws, Washington authorities may seek to reinstate them.
• Health advocates, who could play a decisive role in anti-tobacco campaigns, have been hindered by the lack of a continuously functioning statewide coalition in Washington. The former group, the Tobacco Free Washington Coalition, was funded in large part by a grant from ASSIST; without ASSIST funding, the statewide coalition could not procure the resources to continue operations. Many advocates limited their efforts to smaller, local coalitions, which dilutes their strength as a statewide lobbying force. A new group, the Washington Alliance for Tobacco Control and Children's Health (WATCH), was created in 1998 to replace Tobacco Free Washington. It is a broad-based coalition funded by its member organizations. It lobbied in 1999 to ensure that money from the tobacco settlement went to fund health services and that the Department of Health's tobacco programs received adequate funding.
• WATCH, together with the Washington Restaurant Association, sponsored Senate Bill 5993, which passed the Senate in March 2001. The bill would ensure more smokefree restaurants, but it has several flaws, including exceptions for restaurants where minors are prohibited and the creation of a task force to study ventilation systems. The tobacco industry often advocates ventilation systems to dilute smokefree restaurant legislation and to perpetuate controversy, despite the fact that no ventilation system can completely remove secondhand smoke from an enclosed environment.
• In general, the tobacco industry has succeeded in stalling tobacco control efforts in Washington State. Although advocates, local public health officials, the Department of Health, the Attorney General, and some influential members of the legislature all favor broad-based, fully funded tobacco control education programs and increased Clean Indoor Air legislation, these advocates have not mobilized the resources necessary to overcome the legal and political impediments the industry has created.
Lithuania's public policy makers are united in their desire to foster high rates of sustained economic growth through democratic, market-oriented reform. But as the events of late 1998 and early 1999 demonstrate, economic growth can be elusive, and can hinge on factors well beyond domestic policy control. In a market economy, the private sector can tap ready sources of growth to expand operations and increase profitability. In so doing, firms add value to the domestic economy. The Government can complement private initiative by creating an enabling environment conducive to high-quality economic growth. Whether or not public policies do, in fact, foster growth depends on the sources of and impediments to growth prevailing in the economy and the wider constellation of constraints and opportunities posed by the global economic setting. How can public policy be crafted to support high-quality economic growth? This paper reviews the sources of and prospects for long-term economic growth in Lithuania, drawing on a number of different approaches and perspectives. Lithuania's 1990-1994 economic collapse was primarily due to the breakdown of traditional trade relations, rapid adjustment of raw material prices, and high rates of inflation. From 1994 to 1998, the economic recovery was led by export growth, a widening savings-investment gap, resource shifts into higher-productivity services activity and, from 1997 onwards, buoyant consumer demand. Enterprises rated the acquisition of market-oriented skills as the single most important determinant of growth during the recovery period. Growth has been stimulated by cost-reduction (or an improvement in total factor productivity) far more than by factor augmentation. Quality-adjusted labor supply has made little contribution to aggregate growth. On average, the contribution of capital investment to growth during 1995-1997 has been modest.
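The decomposition implicit in these statements is standard growth accounting: output growth is split into share-weighted factor contributions, and whatever is left over is the total factor productivity (TFP) residual. A minimal sketch follows; the Cobb-Douglas constant-returns form, the 0.35 capital share, and all the numbers are invented for illustration and are not the paper's estimates.

```python
def tfp_growth(output_g: float, capital_g: float, labor_g: float,
               capital_share: float = 0.35) -> float:
    """Solow residual: TFP growth = output growth minus the share-weighted
    contributions of capital and labor (Cobb-Douglas, constant returns)."""
    labor_share = 1.0 - capital_share
    return output_g - capital_share * capital_g - labor_share * labor_g

# Illustrative numbers: 5% output growth, 2% capital growth,
# zero quality-adjusted labor growth.
g_tfp = tfp_growth(0.05, 0.02, 0.00)
print(round(g_tfp, 3))  # 0.043
```

With quality-adjusted labor contributing nothing and capital only a little, almost all measured growth lands in the residual, which is the qualitative pattern the paper reports for 1995-1997.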
The larger firms rely more heavily on capital investment as a source of growth, but rates of total factor productivity improvement in these large firms are disappointing indeed. The regression results indicate that Lithuania's enterprise sector is split between two very different types of enterprises. At one extreme are the big companies that haven't yet been restructured but are relatively successful at attracting investment and staking claims on government resources. These firms are generating negative value added, and are witnessing a decline in factor productivity. At the other extreme are the small, nimble, private enterprises. These are generating most of the growth in value-added, and they are doing it by raising factor productivity. They tend to be in the services and construction sectors, areas that were ripe for new start-ups in the early 1990s. These results also suggest that in the first stage of transition, private investment is modest and expensive (investors demand a high premium for risk). Economic growth is stimulated more by factors related to human capital (the acquisition of the knowledge and skills needed to work in market conditions and the ability to find new markets and enter into new relations), which results in an increase in total factor productivity. After the macroeconomic situation stabilizes and transition reforms deepen, capital inflows accelerate. But if these investments are inefficient, they don't lead to sustained economic growth, but to the accumulation of macroeconomic imbalances. As a result, modest rates of return on capital will make the large companies less attractive to investors, and the growth of these enterprises slows. A sustainable rate of growth can be maintained if total factor productivity (TFP) continues to improve.
This requires a wide range of learning, technology adoption, market development and managerial improvement, as a prerequisite to efficient private investment and appropriate enterprise restructuring. Public policy in support of economic growth has been fixated on stimulating private investment. This is clearly a mistake, especially since so much of the investment undertaken by the larger enterprises isn't contributing either to growth or to TFP improvement. Government has been reticent to correct macroeconomic imbalances, for fear of either curbing private investment or choking private consumption growth. The 1994 to 1998 experience demonstrates the importance of macroeconomic stability and an outward orientation to sustained growth. Efforts are needed to restore macroeconomic balances to a sustainable level before financial stability is threatened. Public policy should correct the bias against labor, and in favor of capital investment, in the tax system. Excessively generous treatment of capital investment and of wealthy asset holders does little to stimulate efficient investment and, in fact, discourages investment in human-capital and TFP-enhancing innovations. Reducing the tax wedge against labor and eliminating policies that provide capital subsidies to poorly performing enterprises (both in heavy industry and agriculture) will help to correct Government's pro-investment bias. Stimulating savings, rather than promoting (wasteful, state-directed) private investment, should be the main macroeconomic initiative for an economy that needs to enhance TFP growth and reduce its external imbalance. Rather than use its scarce resources to tip the balance in favor of private capital investment, the Government should invest in public goods and services that will augment TFP growth. Efficient and effective provision of social services, economic infrastructure and an impartial commercial judiciary are needed to complement private initiative. To compete globally,
Lithuania must continue to improve its gateway status, in terms of physical infrastructure, but more so in terms of the foreign-language skills, information superhighway, travel opportunities and cultural awareness needed to do business worldwide. Public policy should build on the steady progress registered by Lithuania's small enterprises. Deregulation, together with a policy of zero tolerance with respect to corruption and organized crime, will significantly enhance the enabling environment for small and medium-sized enterprise activity. Reducing the tax wedge on labor will encourage SMEs to expand employment and to avoid the gray economy, both of which will generate positive effects beyond those enjoyed by the SME itself. ; The aim of the article is to help identify the sources of long-term, stable economic growth. It briefly surveys the development of theories of economic growth. Drawing on survey data and an econometric approach, the process of economic growth in Lithuania is analyzed at the level of enterprises and of the national accounts. The determinants of long-term economic growth are examined, and proposals are presented for shaping an economic policy that promotes growth.
This paper explores the social, economic and ideological context within which the World Summit for Social Development took place. As its subtitle implies, the discussion focuses particularly on some of the assumptions about global trends that were prevalent in 1995 and evaluates their adequacy in the light of the actual course of events during the following five years. The Summit was conceived during a period when neoliberal orthodoxy was at its height. Many countries were, willingly or unwillingly, implementing structural adjustment policies, devised by the Bretton Woods institutions (BWIs), that systematically weakened earlier national development strategies. The public was insistently reminded that, in the much-quoted words of Margaret Thatcher, there was no alternative to the free-market revolution. The collapse of the Soviet Union lent force to that assumption. The neoliberal case against earlier models of economic development and social welfare was further strengthened by growing reference to the process of 'globalization', which was seen not as the (reversible) outcome of particular national policies, but as an inevitable fact of contemporary economic life. This justified a broad attack on the welfare state, and indeed on many less comprehensive forms of public social provision that were assumed to be unviable in a highly competitive international marketplace. Notions of equity and social justice became unfashionable. A second feature of the period leading up to the Social Summit was the standardization of economic policy prescriptions, applied across a large part of the developing world. The 'fundamental' to which these prescriptions were tethered was not growth, but stabilization; and in most cases they worsened both poverty and inequality.
But international financial institutions used the spectacular economic performance of a number of Asian countries (the so-called Asian miracle which, by 1995, was well into its second decade) to argue that neoliberal insistence on free markets was entirely justified. A third element in the climate of ideas surrounding the Summit was therefore a serious ongoing debate about the extent to which successful Asian economies did, in fact, epitomize free-market principles. Sceptics noted that the state in these countries played a major role in protecting and promoting national industries. Fourth, the mid-1990s were a time when renewed economic growth outside Asia seemed just around the corner. Economic statistics were looking healthier than they had in years. And the rapid surge of private financial flows toward some developing countries further supported a sense of optimism. How has this environment changed over the last five years? At the ideological level, neoliberalism is today far more contested than it was at the time of the Summit. Erosion of the theoretical and empirical underpinnings of the model has been accompanied by growing political opposition. The collapse of important Asian economies in 1997 dealt a serious blow to those who believed uncritically in rapid financial liberalization. And slow growth continued to afflict the majority of developing countries throughout the remainder of the decade. Indeed, in a sample of 95 developing economies monitored in the 1990s, 32 experienced declining per capita income in 1999, compared to only 14 in 1996. Five years after the Social Summit, poverty and inequality continue to grow, and prospects that this trend will be reversed remain quite poor. Continuing emphasis on austerity does not create the conditions for broad-based growth and social development. Unemployment and underemployment are increasing in many parts of the world. 
Moreover, the quality of work and working conditions is declining, under the impact of 'flexible' labour policies and the expansion of the informal sector. Private financial flows have not proved a panacea, either. They are concentrated in relatively few developing countries and, when significant, imply serious risk of greater economic volatility. Given this generally sombre picture, it is not surprising that poverty alleviation now occupies a central place on the development agenda. The World Bank has begun to justify support for social sectors on developmental grounds, and the IMF has been forced to recognize the importance of addressing social concerns. But there is still a tendency to deal with these issues in technocratic ways. Thus a single-minded focus on poverty reduction, without broader commitment to improving the quality of life throughout society, obscures issues of income distribution and social equity. In addition, targeting resources only toward the poorest or toward certain groups encourages the creation of a dual structure of social services: one aimed at the poor, funded by the state, and one aimed at the rest of the population, which can afford to pay for private services. This is not only poor politics, but also poor economics. In fact, neoliberal economics has traditionally denounced targeting in the economic field, citing problems of imperfect information, fundamental distortion of incentives, moral hazard, high administrative costs and corruption. Market orthodoxy is based on an ideal of non-intervention. Thus it is paradoxical that while universalism is championed in economics, it is roundly rejected in social policy. In another significant reversal of previous policies, the Bretton Woods institutions now concede a much more important role for the state. But this concession comes after years during which there has been a devastating weakening of state capacity.
There is therefore a danger that states in parts of the developing world will now be expected to do too much, often in association with a trend to place ever more stringent conditions on delivery of loans or grants. The debt relief programme for heavily indebted poor countries, known as the HIPC initiative, clearly reflects this problem. Countries still struggling under the weight of inappropriate structural adjustment policies are required to assume anti-poverty commitments for which both resources and institutional capacity may be insufficient. Finally, recent years have witnessed attempts to rethink global governance and to create a more stable international economic environment. In most cases, however, this seems a patchwork affair. Suggestions for remedial action are tacked onto a system that is not being fundamentally analysed or challenged. If real progress is to be made, questions of distributive justice and social policy must become an integral part of the debate on development, not an add-on to be considered when existing economic policies prove inadequate.
Macroeconomic expectation data are of great interest to many agents because they serve as central inputs in a variety of applications. Politicians, capital market participants, and academics, to name but a few, incorporate these forecast data into their decision processes. Consequently, a sound understanding of the quality of macroeconomic forecast data, its determinants, and potential ways to improve macroeconomic predictions is desirable. This thesis consists of three essays on the quality of analysts' forecasts. The first essay deals with macroeconomic forecast quality at the consensus level, while the second investigates individual analysts' predictions and their quality determinants. The third essay introduces a bottom-up approach that derives macroeconomic forecasts from analysts' predictions at the microeconomic level. It is generally assumed that macroeconomic consensus forecasts provide a reasonable approximation of market participants' expectations regarding upcoming macroeconomic releases. Research areas in which these expectation data serve as a central input to isolate the unanticipated news component of a given announcement include studies analyzing the price impact of macroeconomic news in bond markets (e.g., Balduzzi et al., 2001; Gilbert et al., 2010), stock markets (e.g., Boyd et al., 2005; Cenesizoglu, 2011), and foreign exchange markets (e.g., Andersen et al., 2003; Evans and Lyons, 2008). Furthermore, these forecast data are used to study market co-movement (e.g., Albuquerque and Vega, 2009), market volatility (e.g., Beber and Brandt, 2008; Brenner et al., 2009), changes in market liquidity (e.g., Brandt and Kavajecz, 2004; Pasquariello and Vega, 2007, 2009), and bond and equity risk premiums (e.g., Savor and Wilson, 2012; Dicke and Hess, 2012). All of these applications rest on the assumption that macroeconomic consensus forecasts represent market participants' expectations properly.
So far, available studies on forecast rationality at the consensus level largely test for general quality properties and commonly find no evidence of systematic or persistent inefficiencies. In contrast to these previous studies, Campbell and Sharpe (2009) test for a specific behavioral inefficiency, the anchoring bias, first documented by Tversky and Kahneman (1974) in psychological experiments. Transferred to the context of macroeconomic forecasts, anchoring means that analysts put too much weight on the previous months' data and therefore underweight relevant information released in the meantime. Such behavior implies an inefficient incorporation of the available information into their forecasts; consequently, a correction, i.e., the efficient use of the entire available information set, should yield forecast improvements. Our analysis reveals a counter-intuitive result: we find strong statistical significance for anchoring in most macroeconomic forecast series, but applying an estimation and adjustment procedure free of look-ahead bias leads to no systematic forecast improvements. Our results therefore question the economic significance of the anchoring bias. To explain this disconnect between statistical and economic significance, we decompose the anchoring bias test statistic and find that the test is biased itself. The test assumes a univariate information environment and thus neglects the possibility that analysts may provide superior forecasts by using a more comprehensive information set than the univariate time series alone. Our empirical as well as our simulation results strongly support this explanation for a broad range of macroeconomic series. Our analysis contributes to several strands of literature.
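To make the anchoring mechanism concrete, the following is a minimal simulation sketch, not the dissertation's actual code: the AR(1) series, the anchoring weight, and the choice of the last released value as the anchor are all illustrative assumptions. It runs a Campbell-and-Sharpe-style regression of the forecast surprise on the gap between the (anchored) forecast and its anchor; under anchoring, the slope is positive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical AR(1) stand-in for a monthly macroeconomic series.
T, phi, lam = 5000, 0.7, 0.4      # lam: weight analysts put on the anchor
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal()

rational = phi * x[:-1]                            # efficient one-step forecast
anchor = x[:-1]                                    # anchor: last released value
forecast = lam * anchor + (1 - lam) * rational     # anchored consensus forecast

surprise = x[1:] - forecast                        # actual minus forecast
gap = forecast - anchor                            # Campbell-Sharpe regressor

# OLS of surprise on gap; anchoring implies slope ~ lam / (1 - lam) > 0.
X = np.column_stack([np.ones_like(gap), gap])
(alpha, gamma), *_ = np.linalg.lstsq(X, surprise, rcond=None)
print(f"anchoring coefficient: {gamma:.2f}")       # theory here: 0.4/0.6 ~ 0.67
```

Note how the essay's critique applies directly to this toy setup: the test only detects a deviation from the univariate efficient forecast, so an analyst using extra information outside the series could look "anchored" without being improvable.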
First, our results directly add to the scarce literature analyzing the efficiency of macroeconomic survey forecasts by showing that informational advantages of analysts, i.e., the incorporation of related macroeconomic data, enable them to outperform mechanically generated time series forecasts. Our results also provide motivation for other research areas, such as studies analyzing equity analysts' outputs, to control for a larger information set, for instance by including earnings information of related companies or information about overall business conditions. Second, our findings strongly support the assumption that macroeconomic survey forecasts are a reasonable proxy for the anticipated information component in macroeconomic releases and consequently justify their use in the research areas mentioned above. Furthermore, our results highlight the danger of testing in a time series context for cognitive biases that were previously only documented in controlled experiments. Especially when experiments are conducted in a highly regulated informational setting, i.e., when the information given to test participants must be strictly controlled, as in anchoring bias experiments, it is questionable whether a direct transfer to a time series setting is possible at all. Future studies analyzing cognitive biases in time series frameworks have to consider carefully whether informational constraints drive the results and lead to false conclusions. While the first essay provides strong evidence for the quality of macroeconomic forecasts at the consensus level, the second essay deals with individual macroeconomic forecasts and analyzes why certain analysts provide better forecasts than others. In particular, we focus on how the idiosyncratic predictability of a given macroeconomic indicator conditions the relation between analyst characteristics and macroeconomic forecast accuracy.
Obviously, there might be quality differences at the individual analyst level, i.e., some macroeconomic analysts are more precise than others. Exploiting these quality differences is a desirable task: academics would obtain better proxy measures for market participants' expectations, and for investors an information advantage should translate into higher profits. We argue that if an indicator's idiosyncratic predictability is low, i.e., the series is largely unpredictable, for instance due to informational constraints or a very volatile underlying process, then analysts' forecast performance is random rather than systematic, because skills cannot take effect. In contrast, if a macroeconomic indicator has high idiosyncratic predictability, analysts with certain characteristics benefit from their abilities and skills and generate more precise forecasts than less skilled analysts. Accordingly, for unpredictable indicators the relation between analyst characteristics and forecast accuracy should be less pronounced than for predictable ones. Consequently, we hypothesize that the idiosyncratic predictability of a macroeconomic indicator has to be taken into account whenever the relation between analyst characteristics and forecast accuracy is analyzed. So far, the evidence concerning differences in the individual forecast quality of macroeconomic analysts is contradictory. While some studies provide evidence for different forecast quality among individual macroeconomic analysts (e.g., Zarnowitz, 1984; McNees, 1987; Zarnowitz and Braun, 1993; Kolb and Stekler, 1996; Brown et al., 2008), other articles come to the opposite conclusion (e.g., Stekler, 1987; Ashiya, 2006).
Despite this disagreement, the relation between differences in macroeconomic forecast accuracy and analyst characteristics has not been analyzed so far, although the extensive strand of literature on the association between equity analyst characteristics and earnings per share forecast accuracy (e.g., Clement, 1999; Clement and Tse, 2005; Brown and Mohammad, 2010) provides a sound framework for such an analysis. Most importantly, we find that model performance depends heavily on the idiosyncratic predictability of macroeconomic indicators. With decreasing idiosyncratic predictability, the relevance of analyst characteristics for forecast accuracy diminishes for some characteristics and disappears for others. In terms of economic significance, we find substantial differences between macroeconomic indicators with high and low idiosyncratic predictability. Consequently, our results show that the idiosyncratic predictability of a given forecast target has to be taken into account when the association between analyst characteristics and forecast accuracy is analyzed. Our findings have implications for several research areas. Most importantly, we directly add to the literature analyzing individual macroeconomic analysts' forecast performance by providing evidence that an indicator's idiosyncratic predictability must be considered when relating analyst characteristics to forecast accuracy. Differentiation among analysts is very limited if the figure to be forecast is virtually unpredictable, because analysts then do not benefit from their abilities and experience. Systematic forecast accuracy differences arise only if the forecast target is predictable at all, giving more skilled analysts the opportunity to differentiate themselves from less skilled ones. Since there are differences in the predictability of company earnings, our framework is transferable.
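The hypothesis can be illustrated with a toy simulation under purely illustrative assumptions: the target's variance is split into a predictable signal and unpredictable noise, and "skill" is the fraction of the signal an analyst observes. The spread in forecast accuracy across analysts then widens with idiosyncratic predictability.

```python
import numpy as np

rng = np.random.default_rng(1)

def accuracy_spread(signal_share, n_analysts=50, T=2000):
    """RMSE gap between the best and worst analyst when `signal_share`
    of the target's variance is predictable (hypothetical setup)."""
    signal = rng.normal(size=T) * np.sqrt(signal_share)
    noise = rng.normal(size=T) * np.sqrt(1.0 - signal_share)
    actual = signal + noise
    skills = np.linspace(0.0, 1.0, n_analysts)   # fraction of signal observed
    rmses = [np.sqrt(np.mean((actual - s * signal
                              - rng.normal(size=T) * 0.05) ** 2))
             for s in skills]
    return max(rmses) - min(rmses)

spread_high = accuracy_spread(0.8)    # highly predictable indicator
spread_low = accuracy_spread(0.05)    # nearly unpredictable indicator
print(f"spread (predictable): {spread_high:.2f}, "
      f"(unpredictable): {spread_low:.2f}")
```

When almost nothing is predictable, even the most skilled analyst's error is dominated by the common noise, so rankings of analysts become nearly indistinguishable, which is the essay's point.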
Analogous to our findings for macroeconomic analysts, we expect that idiosyncratic predictability plays an equally important role in analyzing the association between equity analysts' characteristics and their earnings per share forecast performance, i.e., for company earnings with higher idiosyncratic predictability we expect greater heterogeneity in forecast accuracy that can be explained by analyst characteristics. The first two essays provide evidence that macroeconomic predictions are in general of high quality, as they rationally incorporate information from various sources. Besides the previously analyzed macroeconomic forecasts, agents such as politicians and employers also rely heavily on other information, for example coincident and leading macroeconomic indicators. Determining the current state of the economy and obtaining sound projections of future overall macroeconomic developments plays an important role in their decision processes. Coincident and leading macroeconomic indicators incorporate a large set of macroeconomic variables as well as stock and bond market measures, e.g., returns and interest rate spreads. However, there is no evidence on how expectations at the microeconomic level relate to expectations at the macroeconomic level. Consequently, aggregated microeconomic expectation data, i.e., individual company expectations, have not been included in coincident and leading macroeconomic indicators so far. To overcome this shortcoming, we introduce a bottom-up approach that aggregates individual company expectations to derive macroeconomic content. Since the development of the entire economy is closely related to the development of its individual parts, among them individual companies, aggregated company information must contain macroeconomic information. Because no database of managements' expectations exists, we use equity analysts' outputs as a proxy measure.
Equity analysts' information sets comprise public macroeconomic-, industry-, and company-specific content as well as non-public company-specific information (Grossman and Stiglitz, 1980) and are therefore arguably the best available proxy for managements' expectations. Regarding the choice of analyst output, we use recommendation changes instead of earnings per share (EPS) changes, because recommendations comprise more information: besides the one-year earnings estimate, they also reflect a series of future earnings expectations as well as interest rate and risk premium expectations. We show that aggregated recommendation changes, as a proxy for changing company outlooks, have predictive power for overall economic developments. Specifically, aggregated recommendation changes, which approximate changing expectations about individual companies' economic prospects, predict future macroeconomic developments over a horizon of about one year. Our results remain robust when controlling for other well-established macroeconomic predictors, indicating that our measure contains additional independent information. It therefore seems promising to include our new predictor in the set of macroeconomic predictors in future applications. Additionally, we find that EPS changes have no predictive power, lending support to our assumption that more forward-looking information, as included in recommendation changes, is required to forecast future macroeconomic developments. Furthermore, our findings provide the missing link between previous studies showing that aggregated analyst outputs have predictive power for overall stock market developments (Howe et al., 2009) and those showing that the stock market leads the real economy (Stock and Watson, 1998).
Our results support the notion that changes in expectations about future company performance rationally determine asset values in advance of changes in overall economic activity, which explains why stock markets lead the real economy. Overall, the three essays in this thesis advance different strands of literature. We show that macroeconomic consensus forecasts are a reliable proxy for market participants' expectations. Furthermore, our results provide strong evidence that it is dangerous to transfer psychological experiments into time series frameworks without appropriately controlling the informational environment. Additionally, we show that the idiosyncratic predictability of a given forecast target, i.e., whether the forecasting task is feasible at all, has to be taken into account whenever the association between analyst characteristics and forecast accuracy is analyzed. Macroeconomic analysts only benefit from their superior skills relative to their competitors if the macroeconomic series is idiosyncratically predictable; for unpredictable series, forecast accuracy is random rather than systematic, because superior skills do not systematically translate into better forecasts. Finally, we show that aggregating forecasts at the microeconomic level, i.e., company expectations, is a promising approach to extracting macroeconomic information. Overall, we conclude that macroeconomic analysts are very efficient information processors and play an important role as intermediaries in financial markets.
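The bottom-up logic of the third essay can be sketched in a stylized simulation. All parameters are hypothetical: firm outlook changes are assumed to load weakly on the macro change realized over the following year, a recommendation change is recorded simply as the sign of the outlook change, and firm-specific noise averages out in the cross-section.

```python
import numpy as np

rng = np.random.default_rng(2)

T, N, lead = 400, 100, 12                       # months, firms, ~one-year lead
macro = np.cumsum(rng.normal(size=T + lead))    # latent macroeconomic trend

# Macro change realized over the next `lead` months, aligned to each month t.
future_change = macro[lead:] - macro[:-lead]

# Firm-level outlook changes load weakly on the future macro change; the
# recorded recommendation change is the sign of the outlook change.
rec_changes = np.sign(0.3 * future_change[None, :]
                      + rng.normal(size=(N, T)))

agg = rec_changes.mean(axis=0)                  # aggregated recommendation changes

# Firm noise averages out across N firms, so today's aggregate correlates
# with the macro development realized over the coming year.
corr = np.corrcoef(agg, future_change)[0, 1]
print(f"lead correlation: {corr:.2f}")
```

In this stylized setting the aggregate is predictive only because individual outlook changes embed forward-looking information, which mirrors the essay's finding that recommendation changes, not backward-looking EPS revisions, carry the macroeconomic content.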
Conventional wisdom would have it that presidential elections are not decided on foreign policy, and that Black Americans, like the majority of the American population, vote primarily on domestic issues. Both statements are partially true. However, based on a recent survey conducted by the Carnegie Endowment's American Statecraft Program, while African Americans do not vote primarily on foreign policy, they are paying attention to how candidates describe the U.S.'s role in the world. In other words, foreign policy may not be the deciding factor in how the masses of Black Americans vote in 2024, but it stands to have an impact on voter enthusiasm and turnout. The Carnegie Endowment survey, composed of a representative sample of Black Americans, revealed that the economy and jobs registered as the most important issue, with 29% of respondents selecting that option. This finding tracks with recent polling that found that the majority of Americans view the economy as the most important issue facing the country. Considering that increased inflation and the overall cost of living have placed hardships on many American families, this shouldn't come as a surprise. Our survey contained three options related to foreign policy: national security, immigration (which some argue cuts across international and domestic politics), and U.S. foreign policy/the role of the U.S. in the world. None of these choices cracked double digits. National security received 5% of the vote, followed by immigration at 3%, rounded out by the U.S.'s role in the world at 2%. Despite these findings, evidence shows that Black Americans see the importance of global developments, even when facing domestic issues. Respondents were asked if a president's foreign policy agenda matters to them when they vote. Interestingly, four in ten (39%) respondents said that it is "very important" and a plurality (44%) reported that it is "somewhat important."
Across party lines, 43% of both Black Democrats (who comprised 70% of the sample) and Black Republicans reported that a president's foreign policy agenda mattered a great deal when they vote. Educational attainment proved salient here, with 59% of African Americans with a postgraduate degree reporting that a president's foreign policy platform was "very important" when voting, compared to 39% of all Black respondents who felt the same way. The same held true for respondents' familiarity with foreign policy issues: forty-five percent of Black Americans with postgraduate education reported being "very familiar" with foreign policy matters, compared to 25% of all Black respondents. It should be noted that, regardless of political affiliation, two of the top three most important issues to African American voters, the economy and healthcare, have an international component. For instance, jobs and economic growth are affected by trade agreements and foreign supply lines, and the response to the COVID-19 pandemic (which disproportionately impacted communities of color) required significant international cooperation. Even for foreign policy topics that did not rank high for respondents, such as immigration and national security, there is evidence that these issues will weigh heavily on the minds of some Black voters. Take, for example, the crisis at the southern border.
While few African Americans see migrants as a critical threat (only 29% of African Americans see immigrants and refugees as a critical threat, compared to 44% of White Americans), some African American communities may view the accommodation of migrants in historically underserved neighborhoods as reason to be concerned with Biden's handling of illegal immigration and border security. For instance, as the city of Chicago plans to turn a community center in the predominantly Black northwest neighborhood of Galewood into a migrant housing facility, CBS Chicago reported that many Black residents feel their already scarce community resources are being allocated elsewhere. One resident stated, "the thing that we're most concerned about is our children, our Black children, the football, the soccer, and all the things that they do… and now they're going to take this beautiful part and give it to migrants." On the topic of national security and U.S. military intervention, only two in ten (20%) Black Americans are supportive of possibly sending troops to assist Ukraine or to help Taiwan defend itself against a hypothetical Chinese invasion. This, of course, is partly due to the community's perennial concern with the material and human costs associated with major war. Moreover, research conducted by the Chicago Council on Global Affairs reveals that 46% of Black Americans feel the U.S. should urge Ukraine to settle for peace as soon as possible so that the costs aren't so great for American households, compared to 38% of the American public who feel the same way. According to the numbers, Black Americans are supportive of assistance to Ukraine, but many are wary that an incipient forever war would require imbalanced human and material costs from the community, or even draw attention and resources away from domestic challenges.
Perhaps it will prove beneficial if messaging around support for Ukraine for "as long as it takes" is coupled with a peace plan or pathways toward de-escalation. On collective action issues such as climate change, a majority of African Americans (54%) believe the U.S. should take the lead in combating the issue, at an even higher rate than White Americans (42%). With concerns regarding Black voter enthusiasm and turnout in the 2024 presidential election, it might be beneficial for candidates addressing Black voters to tie together how mutual cooperation on shared global challenges will impact the economic wellbeing of the Black community. One potential solution to the disconnect between Black voters and the foreign policy establishment lies in connecting with Black American civil society. In the same manner in which Vice President Kamala Harris tapped into the Black sorority network (whose organizations serve as some of the Black community's oldest advocacy groups) to galvanize the Black vote, it could prove beneficial for political parties to connect with historically Black fraternities and sororities and explain how foreign policy translates into domestic wellbeing for the Black American community. Much more in terms of foreign policy messaging will be required, of course, but it is a first step. What is clear is that messaging that addresses Black Americans' domestic concerns without addressing America's role in the world will leave many Black voters feeling they are getting more of the same.
Reapportionment shook up Caddo and Bossier Parish representation in the Louisiana Legislature, setting up for some new faces, refugees from other local offices, and intense clashes on this fall's ballot.
The area's overall declining population rearranged things considerably. Which boundaries are coming and going will create some interesting dynamics.
In the House of Representatives, District 1, won last time by Republican Danny McCormick for his first term, retreated from Bossier Parish to become a completely Caddo-based entity. He will again face Republican Randall Liles in a race that could be closer than last time. Although McCormick eked out a general election win without a runoff, he ran six percentage points better in Bossier than in Caddo. Still, the foothold he has established over the past four years and votes reflecting the district's ideological tilt should be enough to reelect him.
The Bossier precincts largely transfer into District 10, the Webster Parish-based district held by unopposed Republican Wayne McMahen. The district he currently represents has Bossier's southernmost precinct as a footnote, but starting in 2024 his Bossier portion swings north and grows substantially.
The other district within Bossier that largely was an afterthought, the Caddo-based District 2 currently held by the leader of the Democrats' House caucus, Sam Jenkins, expanded somewhat around old Bossier City. Jenkins will try for the Senate District 39 seat of term-limited Democrat Greg Tarver, and in his place Democrat Caddo School Board member Terence Vinson will make his second try for the seat in eight years. He is joined by Democrat Caddo Parish Commissioner Steven Jackson.
Vinson's familiarity with the district should help, but his main asset is the quite erratic and controversial tenure Jackson has had in office. In Bossier, Jackson committed his most recent and biggest folly, being convicted of impersonating a police officer, which should only add to the fodder that ends Jackson's political career.
Joining Jenkins in pursuit of a Senate seat is District 4 Democrat Cedric Glover. With that House territory open, school board member Democrat Jasmine Green, term-limited Democrat Commissioner Lyndon Johnson, and former District 3 candidate Democrat Joy Walters will compete for it. Walters in 2019, when she lived in that district, lost narrowly to Democrat Tammy Phelps, who didn't draw an opponent this year. This contest could go in any direction, but likely to the candidate who most convincingly runs to the far left ideologically while conveying an ability to bring home the bacon.
District 5 shifted eastward, still containing southeastern Caddo but with much of it now outside the parish, the plurality of that in southern Bossier and the remainder scattered well south into Red River Parish. Former Bossier Parish School Board member Republican Dennis Bamburg played his cards right to become one of only three rookies who will enter the House next year without drawing any opposition.
While that district entered Bossier, District 6 exited it and will pick up a new representative, as GOP state Rep. Thomas Pressly will gun for Senate District 38, which moved over to grab east Shreveport. Michael Melerine, a current Board of Elementary and Secondary Education member, also will take part in the musical chairs by running for this spot. He is a big favorite against no-party public defender Evan McMichael and Democrat theater manager and activist Robert Darrow.
In recent years, the traditional afterthought House district in Caddo has been District 7, as most of it is in rural parishes to the south. GOP incumbent Larry Bagley is heavily favored to retain that seat for a final term. In Bossier, where the boundaries shoved a bit north, District 8 Republican Raymond Crews faces no opposition for another term.
As previously noted, district boundary changes and Bossier political establishment annoyance will create a rugged reelection battle for Republican Dodie Horton to secure a final term. She squares off against businessman Chris Turner, the establishment's pick. Her solid conservative legislative record might be enough to hold him off.
The reconfigured Senate District 38, open as incumbent Barry Milligan declined to run for a second term and the district that drew in Pressly, retained former seat holder John Milkovich, a former Democrat now Republican. Voters grew sour after one term on his taste for big government in particular, and Milligan handily defeated him last time. Pressly is favored over him, and also over Republican Chase Jennings, a banker and former political operative.
Spanning both parishes with a little presence in Bossier, the District 39 race that attracted Glover and Jenkins also brought back Democrat former state Rep. Barbara Norton, who tried to contest Tarver last time but ran afoul of the residency requirement. With all three having tasted success running in parts of the district, this also will be a contest whose winner most convincingly runs to the left and shows an ability to pile up the pork. Making things even more interesting, perennial GOP candidate Jim Slagle is back, his impact on who will make the runoff, or even whether it will be against him, uncertain. However, Glover's track record, including as Shreveport mayor, perhaps gives him a slight edge.
Perhaps receiving the award for most far-flung district touching multiple parishes, the plurality of new District 31's residents lie in Bossier and Caddo, washing over the southern reaches of each. As in the case of House District 9 but for somewhat different reasons, this turned out as a battle of the Bossier political establishment, if not of the rump grouping of white northwest Louisiana Democrats whose power steadily has faded, against conservative insurgents. As previously noted, establishment forces back retired basketball coach Republican Mike McConathy while insurgents are behind one of the Legislature's most prominent disruptors of traditional liberal populism, term-limited Republican state Rep. Alan Seabaugh in a race sure to be close. Again, a long legislative conservative record may allow Seabaugh to move into the upper chamber.
Finally, as previously noted, another contest in which the establishment has a vested interest occurs in District 36, which has migrated south and mostly out of Webster Parish. Republican incumbent Robert Mills, who last time beat an establishment incumbent who spent the most money in history on a legislative race, takes on another establishment favorite in the form of Republican Bossier Parish School Board member Adam Bass. Mills has run into difficulty among area conservatives by not backing an effort to hold the line on spending hundreds of millions more dollars on capital outlay this past year, and this disgruntlement presented an opportunity for the establishment to jump on him. Mills will have to hope an otherwise solidly conservative voting record carries him to victory.
Earlier this week an unusual event took place: a U.S. shipyard delivered an oceangoing merchant ship. Named the Janet Marie, the vessel will transport goods between Hawaii and the U.S. mainland. Anyone tempted to toast the new ship as a symbol of U.S. shipbuilding prowess, however, should keep the champagne bottle corked. More an embarrassment than a cause for celebration, the containership serves as a rich symbol of the heavily protected U.S. shipbuilding industry's myriad shortcomings. The most glaring problem with the Janet Marie is its price. Although the exact figure has not been published, its sister ship George III, a vessel delivered last year with the exact same specifications, was revealed to cost "$225 million-plus." It's a safe bet the Janet Marie did not cost less. In comparison, two similarly sized containerships were ordered from a South Korean shipyard in 2021 for $41 million each. The vast difference ($184 million) is no anomaly, comporting with previous assessments from maritime industry observers that U.S.-built cargo ships cost about five times as much as those constructed abroad. The length of time required to build the Janet Marie provides another unfavorable contrast: approximately 50 months elapsed from the cutting of steel to the ship's delivery. In contrast, the Ever Alot, one of the largest containerships in the world with a cargo capacity nearly 10 times that of the Janet Marie (and a price tag $80 million lower), required just 19 months to build. Originally slated for delivery in the third quarter of 2020, the Janet Marie is a full three years late. Unfortunately, construction times significantly longer than those of foreign shipyards have been a feature of the U.S. shipbuilding industry for decades.
Faced with such high prices and lengthy construction times, the market for U.S.-built merchant ships is limited to those who must buy them to comply with the 1920 Jones Act, which requires vessels transporting goods within the United States to be constructed in U.S. shipyards. Understandably, these vessel operators delay purchasing new ships for as long as possible. Instead of being replaced as they approach the 20-year mark, as is commonly done abroad, Jones Act-compliant ships are often not scrapped until age 40 or older. As a result, few ships are built. In fact, with the Janet Marie now delivered, there are currently no merchant ships under construction in the United States (there are, however, three containerships on the order book slated for delivery in 2026 and 2027 at the astounding price of $333 million each). Since 2000, U.S. shipyards combined have averaged fewer than three deliveries of oceangoing merchant ships per year.
In comparison, a single shipyard in South Korea is slated to deliver 47 ships this year alone. But it's not just the large shipbuilders in Asia that have left protected American shipyards in the dust. Even European shipyards are churning out more vessels than those in the United States. Dutch shipyards, for example, delivered nearly 118,000 gross tons of merchant vessels in 2021 (the latest available figures) compared to less than 33,000 gross tons for U.S. shipyards. At 147,000 gross tons, Norway delivered almost five times the U.S. figure. That U.S. shipyards produce so little while charging so much should come as a surprise to no one. A captive Jones Act market means reduced incentive for U.S. shipyards to achieve the specialization and scale required to compete in the international market. It also means less competitive pressure—why be world-class when a lesser standard will suffice? Another prominent factor is the large government contracts reserved for U.S. shipyards, which many shipbuilders have prioritized at the expense of commercial shipbuilding. Indeed, the CEO of Overseas Shipholding Group, which operates Jones Act-compliant tankers, recently described the construction of commercial vessels to comply with the Jones Act as "a minor sideline interest" for most U.S. shipyards. His assessment appears borne out by the numbers: a U.S. Maritime Administration study found that government contracts accounted for nearly 80 percent of U.S. shipyard revenue in 2019. Despite the U.S. shipbuilding industry's extraordinary costs and trivial output, some may believe that protectionism is nonetheless warranted to avoid U.S. reliance on foreigners for its shipbuilding needs. But the Jones Act doesn't even accomplish that. The Janet Marie's sister ship delivered last year, for example, is chock full of foreign components.
A sampling of the ship's suppliers includes the China State Shipbuilding Corporation (supplier of the ship's provision cranes), Alfa Laval (Qingdao) Ltd. (fired exhaust gas boiler), Zhenjiang Tongzhou Propeller Co. (fixed pitch propeller), and Jiangsu Xiangsheng Heavy Industries Co. (anchors and anchor chains). That's par for the course. Tankers built by the Philly Shipyard from 2004-06 each required approximately 500 containers of material from South Korea, as well as roughly 25 bulk shipments for larger items such as the main engine. While there is nothing wrong with relying on imported parts (imagine how costly these ships would be if all the components had to be U.S.-made as well!), the notion that U.S. shipyards can produce ships without foreign inputs is wishful thinking. The Jones Act's requirement that vessels used in domestic trade be constructed in U.S. shipyards cannot survive even the most cursory cost-benefit analysis. Its harms include higher costs for those who must rely on these ships, added stress on infrastructure and congestion as transport shifts to less expensive modes, strained relations with U.S. trading partners and allies, and a smaller, older U.S. fleet than would otherwise be the case. The other side of the ledger, meanwhile, consists of a commercial shipbuilding industry that punches well below its weight. Indeed, when the United States first required ships flying the U.S. flag to be domestically built, U.S. merchant shipbuilders were among the world's best and most competitive. Now they are a global afterthought, and there is good reason to think the industry would be better off without such heavy-handed protectionism. Rather than a triumph, the Janet Marie offers a cautionary tale of protectionism's heavy toll. The Jones Act's U.S.-built requirement—if not the law entirely—should be discarded immediately.
Over the last year, news media have run numerous stories of offices, shopping malls, and other commercial properties going into foreclosure or being sold at substantial discounts. Given local governments' reliance on property tax revenues, a collapse in commercial property values might appear to have disastrous consequences for city and county finances. But circumstances differ widely from one region to another (and even between local governments within a region), so the impact of the commercial real estate decline will vary greatly. When evaluating the revenue impact of lower commercial real estate valuations, we must consider, first, what proportion of revenue comes from property taxes on commercial real estate; next, how closely assessed values tracked market values before the collapse; and finally, whether and when properties will be reassessed to conform more closely with reduced market prices. San Francisco—a city at the epicenter of the commercial property collapse—provides an example of how to evaluate these three factors. (For more about why San Francisco finds itself at the epicenter, please see my recent post on the city's policy failures.)

Commercial Property Tax Dependency

Local governments are heavily reliant on property taxes generally, but many entities diversify their revenue sources with income taxes, sales and excise taxes, and other levies, as well as non-tax revenues. San Francisco anticipates $6.4 billion in revenue in fiscal year 2023–24. Of this total, $4.4 billion is expected to come from tax revenues. The combined city/county government levies a variety of taxes aside from the property tax, including a sales tax, hotel room tax, utility user tax, parking tax, real property transfer tax, sugar-sweetened beverage tax, and a unique tax on executive pay. Property taxes are expected to contribute $2.5 billion of the $4.4 billion of anticipated tax revenue.
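These budget figures imply two ways of framing San Francisco's property tax dependency. A minimal sketch, using only the dollar amounts above (rounding is mine):

```python
# San Francisco FY 2023-24 anticipated revenue, per the figures above.
total_revenue = 6.4e9   # all revenue sources
tax_revenue = 4.4e9     # tax revenue only
property_tax = 2.5e9    # property tax portion of tax revenue

# Property tax dependency, framed two ways.
print(f"Share of tax revenue:   {property_tax / tax_revenue:.0%}")    # ~57%
print(f"Share of total revenue: {property_tax / total_revenue:.0%}")  # ~39%
```

So property taxes make up a majority of tax collections but well under half of all revenue, which is why diversification matters in the analysis that follows.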
Since the current real estate valuation slump affects only certain categories of properties, it is also essential to understand how assessed value breaks down by category. According to the San Francisco Assessor's latest annual report, three major commercial property categories (office, retail, and hotel) accounted for 27% of total assessments in 2021. This proportion slightly understates the share of property tax revenue derived from commercial property, because only residential property is eligible for a homeowners' exemption. In California, this exemption is only $7,000 per owner-occupied property and thus not as significant a factor as in Texas. Overall, San Francisco's commercial property valuation decline places at risk about $700 million of annual revenue, or about 11% of total general fund collections. It is easy to see how this proportion might vary across cities and counties. Suburban communities that are primarily residential are likely to have very little exposure to commercial valuations, while cities hosting large malls and office clusters should be at greater risk.

Assessed Versus Market Values

Due to Proposition 13, the relationship between a property's assessed and market values is complex. The 1978 measure limited assessment increases to 2% annually so long as a property does not change hands and is not subject to major construction. For properties that have not been reassessed since Proposition 13's implementation, market values have risen about tenfold on average, but assessed values have increased by a factor of only about 2.4.
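The roughly 2.4x figure follows directly from compounding the 2% annual cap since 1978. A minimal illustration (the 45-year span, 1978 to 2023, is my assumption):

```python
# Proposition 13 caps assessment growth at 2% per year absent a sale
# or major construction. Compounding that cap from 1978 to 2023
# (an assumed 45-year span) yields the ~2.4x growth factor cited above.
CAP = 0.02
years = 2023 - 1978
capped_growth = (1 + CAP) ** years
print(f"Maximum assessed-value growth over {years} years: {capped_growth:.2f}x")
```

Against a roughly tenfold rise in market values over the same period, that cap leaves long-held properties assessed at a fraction of what they would fetch on the market.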
While it is unlikely that many high-value commercial properties have avoided reassessment through the entire life of Proposition 13, significant gaps between assessed and market values have emerged over shorter periods: between 2012 and 2022 alone, California property prices more than doubled. (Note that I am using a residential price index for these increases; changes in commercial property valuations are likely to differ.) To reasonably estimate the potential impact of underassessment, it would be necessary to review a sample of local properties. San Francisco's City Controller is performing such an analysis, but the results have yet to be published. While no other state has Proposition 13, assessed and market valuations can diverge outside of California as well. For example, a Georgia property assessor reviewed a sample of ten commercial properties and found that, on average, they were assessed at 40% below market value (his findings were published in a paywalled edition of Fair & Equitable, the magazine of the International Association of Assessing Officers).

Reassessment Timing

Just as assessments may not reflect market values on the way up, they may also lag declining resale values. But this effect is less likely to persist, given the incentive property owners have to minimize their property tax liabilities. In California, property owners can ask their assessor for a reduction and, if not satisfied, can appeal the assessor's decision to a county board. San Francisco's Assessment Appeals Board has an active docket of appeals at the moment, with some filers requesting assessment reductions of more than 50%. In one extreme case, the owner of the Westin St. Francis Hotel in San Francisco's Union Square is seeking a 90% reduction in its assessed valuation.
Owners of commercial real estate may hesitate to seek downward reassessments while marketing their properties, since potential buyers might use the lower assessment as a basis for negotiating the sales price. And in California, at least, retroactive reassessments are not possible. So, in some cases, a commercial property may remain assessed above market value, at least during the current tax year.

Conclusion

Although San Francisco may be considered ground zero for the commercial property collapse, the budgetary impact has been limited thus far. The city's FY 2022–23 revenues are running just 1% below prior-year levels, and the city is forecasting small increases for the next five fiscal years. That said, these are nominal amounts, so San Francisco's projected revenues are expected to grow at or below the rate of inflation, significantly underperforming recent growth rates. The city is receiving some protection from pre-pandemic underassessment and from the diversification of its revenue sources. However, San Francisco's unique challenges may affect its sales tax and hotel room tax collections as well as its property tax revenues. For other jurisdictions, results can be expected to vary. Blanket nationwide assessments may well prove a poor substitute for an in-depth look at each city's and county's unique characteristics.