Executive Summary

Introduction: CTA works primarily through intermediary organisations and partners (non-governmental organisations, farmers' organisations, regional organisations) to promote agriculture and rural development and to deliver its various information products and capacity-building services. By partnering with these organisations, CTA seeks to increase the number of ACP organisations capable of generating and managing information and developing their own information and communication management strategies. In the period 2003–2005, CTA undertook a series of needs assessment studies in 21 countries in the ACP Pacific and Caribbean. As a continuation of this process, CTA has now commissioned assessments of the agricultural information needs of six countries emerging from prolonged conflict situations in ACP Africa, including Mozambique, which forms the focus of this report.

Objectives of the Study: The objectives of this study are to develop a strategy for CTA's approach to post-conflict countries, to improve the effectiveness of CTA's support for post-conflict countries, and to compile baseline data on the status of ICM and ICTs in agriculture and rural development in Mozambique.

Methodology: The country profile was produced through a desk study, which relied heavily on information available on the internet; additional information was obtained from various institutions in Mozambique and internationally, and from key informants. Through the desk study we identified a list of nine key institutions. This list was discussed with CTA and informants in Mozambique, and face-to-face interviews were requested with each institution. Seven of the institutions agreed, whilst one indicated that it would be closing its operations within a year and was therefore removed from the list.
Expected Results: This study will provide: 1) an inventory of the status of agricultural information services, institutions and other actors, and their needs as they relate to physical infrastructure, information availability and access, and human capacity development; 2) an assessment of the current and/or planned interventions of the government and bi- or multilateral agencies in the field of information for agriculture and rural development; 3) an overview of the needs of potential partners for CTA activities and services in terms of building capacity for information and communication management; 4) a short-list of potential partners/beneficiaries for CTA activities and services; and 5) baseline data to facilitate subsequent monitoring activities. The study will also help CTA to develop a framework for action, fashion a strategy aimed at institutions in countries emerging from conflict situations, and provide input into its 2006–2010 strategic plan.

Findings: Following the signing of a peace agreement in 1992 to end 16 years of conflict, Mozambique has achieved impressive economic growth and lowered its prevalence of poverty. Sustained by strong foreign investment, real GDP in Mozambique has grown at rates in excess of 7 percent for the last four consecutive years, and per capita income in US dollars increased by nearly 50 percent between 2001 and 2004. This growth, however, implies an important transformation in the composition of Mozambique's GDP, although services remain the dominant sector. The share of industry in total GDP increased from about 16 percent in 1996 to 27 percent in 2004, whereas the share of agriculture decreased from about 30 percent to 23 percent over the same period. The agricultural sector, however, still supports 80 percent of the economically active population, and agriculture still provides major export earnings from commodities such as prawns and fish, cotton, sugar, timber and cashew nuts.
The forestry sector also plays an important role in the country, contributing 4 percent of gross domestic product and supplying about 80 percent of the energy used. There is no unified policy or strategy in Mozambique with regard to the management of agricultural information, and broad dissemination networks are not well developed. Institutions that fall outside of the state or donor worlds often find it difficult to obtain information, and information exchange between institutions tends to be informal rather than structured. Agricultural research generally is restricted by the insufficient number of scientists who can formulate and carry out studies relevant to Mozambican needs, and budgets for information management tend to be a low priority. In-house capacity for maintaining and troubleshooting computer networks is a constraint, and reliance is placed upon private companies specialised in IT. Retention of IT-skilled staff at the centre, in the face of competition from the private sector, was cited as a problem by all of the state agencies; building the capacities of in-house staff was therefore felt to be important. Existing websites maintained by some of the key institutions vary in their effectiveness as publishing outlets and often tend to be reflections of the institution, its structure and its work programme, rather than being designed specifically to disseminate the information, reports and studies that are produced, or to act as advocacy tools. Use of radio as a means of disseminating information in vernacular languages is still rather limited; the main problems in this respect are the costs of translation and payments for the transmissions. Training in how to pass information on to low-literacy groups was also requested by several agencies, including training in more effective writing skills and in how to compile radio programmes and audio-visual materials.
There is a general lack of metadata; documentation on who is doing what and on the types of information available is generally poor. This has a double negative effect: on the one hand, potential data and information users have difficulty finding or getting access to relevant information; on the other hand, information suppliers do not know what they hold, which prevents better organisation of information for dissemination.

Conclusions: Most of the institutions we interviewed have fairly well-developed links to relevant sources of information; data on the functioning of markets, prices and production levels in the agriculture and fisheries sectors has vastly improved in recent years. Some of these agencies need information on regional and international markets and production levels. Technical data is still harder to source, particularly in Portuguese. There are information needs regarding developments in thinking on food security, forms and means of supporting organisations at community level, participatory approaches to resource management, low-cost technologies for increasing production and conserving produce, gender, HIV/AIDS prevention and mitigation strategies, and general rural development issues. Respondents requested capacity building in information management to increase the effectiveness of their organisations. Government services and NGO staff indicated the importance of training in the analysis of socio-economic data. Training in the use of the internet to obtain information, and in the targeting of information by library and documentation services, was also a broad need. The design and development of websites was requested by many respondents, but there is felt to be a particular need for support in how to conceptualise these as sources of information rather than just 'publicity'.
Training in effective communication to low-literacy target audiences, in the development of extension materials, and in the use of radio and audio-visual materials is also important.

Recommendations: We recommend that CTA support the development of a national ICM strategy for agricultural information that takes full advantage of the opportunities offered by the new GovNet infrastructure. The ICM strategy should ensure that information is easily available to all stakeholders in rural development, should prevent a gap from evolving between organisations connected to GovNet and those that are not, and should provide for communication channels from the rural poor to research organisations and policy makers, to ensure that research and policies are guided by the needs of poor rural households. We recommend that CTA attempt to increase the amount of information disseminated in Portuguese, particularly in regard to food security, forms and means of supporting organisations at community level, participatory approaches to resource management, low-cost technologies for increasing production and conserving produce, gender, HIV/AIDS prevention and mitigation strategies, and general rural development issues. We recommend that CTA support short-term research activities targeted at Mozambican-specific issues in relation to agricultural production and the conservation of produce. Links should be set up to inform IIAM and DNER about the information needs of poor farmers, women and PLWHA; these links can be established through member organisations such as ORAM and UNAC, through Farmer Field Schools, and through NGOs to which DNER has outsourced extension activities. We recommend that CTA investigate ways of supporting the exchange of experience between organisations involved in training through associations, and support the development of training packages for associations.
These should be provided to increase the effectiveness and efficiency of extension efforts in the field. In the long term, the required technical information can be provided to associations through newly developed training packages.
Abstract: The US subprime-crisis became a headline in the global media starting in February 2007, after the US housing market had already shown the first signs of a slowdown in late 2006. Previously, the US housing market had enjoyed a favorable environment, especially from 2002 to 2005, characterized by low interest rates, rising house values, and increasing home financing possibilities through subprime mortgages. Over the course of 2007, however, US mortgage brokers, international investment banks, and central banks around the world disclosed more and more events that shaped today's perception of the subprime-crisis. Moreover, the subprime-crisis is far from over: an end to the crisis is not yet in sight. One rather unique characteristic of this crisis is that its actual basis is the delinquencies and defaults of subprime single-family home mortgages in the US, a segment commonly not regarded as being of great relevance for the international capital markets. However, taking into account the originate-and-distribute business model of US mortgage brokers in connection with the securitization of these mortgages into various types of securities that are traded on a global basis, it is not surprising that banks and investment funds around the world had invested in these securities. Before the crisis started, only a few banks or funds considered the liquidity of these securities when investing significant amounts of money in them, because they focused on maximizing their returns. But when larger problems in the US subprime mortgage market became evident, liquidity became the major concern for investors, and investor preferences shifted significantly to safer assets such as government bonds. This caused severe problems in the money market, which ultimately brought the crisis across the Atlantic to Europe.
Moreover, funding problems emerged and caused the first bank run in Europe in decades, when depositors in Britain queued outside Northern Rock branches for hours to withdraw their deposits amid fears that the bank might have to file for bankruptcy. Another British bank had been in the spotlight earlier that year, as HSBC was the first European bank to announce a billion-dollar write-off linked to its exposure to subprime mortgages. Taking these events into consideration, the British banking market can be characterized as the only banking market in Europe where the subprime-crisis caused banks to substantially write down subprime-related assets and where severe funding problems even led to a run on a bank that had to be bailed out by the central bank and the government. Consequently, the British banking market can be considered the European banking market most affected by the subprime-crisis and is therefore worth analyzing in detail. The objective of this thesis is to discuss the reasons for the emergence of the subprime-crisis and to examine empirically whether the subprime-crisis had an impact on the British banking sector. The empirical analysis consists of two approaches: an event study measures the short-term impact of certain news, while a second approach analyzes the performance of the British banking sector over the full year 2007, focusing on the long-term impact of the subprime-crisis. In addition, the paper provides an overview of the development of the subprime-crisis in 2007, based on a detailed description of the underlying fundamental market characteristics. In order to measure empirically the impact of the subprime-crisis on British banks, an event study will be conducted.
Event studies are a widely used empirical methodology in economics and finance for examining the impact of certain events: they are considered the standard method for measuring security price reactions. An event study measures whether specific events have a significant impact on certain stock prices by calculating abnormal stock returns around predefined events. In this regard, an abnormal return is the difference between the actual return in the market and the expected return according to a return-generating model. A common assumption is that positive events lead to positive abnormal returns, whereas negative events cause negative abnormal returns. Consequently, important news relating to the subprime-crisis will be categorized as positive or negative, and its impact on stock returns will be determined. The event study, as well as the timeline of the subprime-crisis, covers events from January 1, 2007 to December 31, 2007. The analysis of the year-round performance of the British banking sector in 2007 is conducted in addition to the event study and follows a different methodology. In contrast to the analysis of individual events, this approach compares the performance of British banks to that of an alternative non-bank portfolio. Key to this analysis is that both portfolios have the same risk and return characteristics at the beginning of 2007, determined through a backtest of the portfolios' performance in 2006. Course of the Investigation: In the second chapter, important fundamentals of the subprime-crisis will be examined. These fundamentals explain how an environment that laid the foundation for today's crisis was able to develop over the last decades. Chapter 2.1 presents an overview of the development and structure of the US subprime mortgage market, before specific characteristics of subprime mortgages are outlined in 2.2.
The unique business model of mortgage brokers is depicted subsequently. The last segments of Chapter 2 specify the complex financial instruments that enabled the subprime-crisis to spread around the world and explain why the securitization process leads to high-risk securities. Chapter 3 describes the development of the subprime-crisis in 2007. After an overview of the situation of the US housing market up to 2007 in 3.1, a timeline of last year's subprime-crisis is outlined in 3.2, and the impact on the international capital markets is discussed in 3.3. Chapter 3.4 focuses on the consequences for British banks and the actions of the British financial regulatory authorities. An empirical analysis of the subprime-crisis is conducted in Chapter 4. A general overview of event studies and their historical development is presented in 4.1. After the typical framework of an event study is derived in 4.2, the relevant British banks, along with their market index, as well as relevant news for the event study, are determined in Chapter 4.3. The actual event study analyzing the impact of the subprime-crisis on British banks is presented in Chapter 4.4. Additionally, a comparison of the performance of a bank portfolio with an alternative non-bank portfolio is given in 4.5. Finally, Chapter 5 contains a summary of the theoretical concepts and the empirical results and gives an outlook on the potential development of the subprime-crisis, the capital markets, and specifically the British banking market.
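The abnormal-return calculation described in the methodology above can be sketched in code. The following is a minimal illustration with synthetic returns, assuming the market model (an OLS fit of stock returns on market returns) as the return-generating model; the thesis's exact specification, estimation window, and test statistics may differ:

```python
import numpy as np

def market_model_car(stock_ret, market_ret, est_end, ev_start, ev_end):
    """Cumulative abnormal return (CAR) under the market model.

    The model R_stock = alpha + beta * R_market + eps is fitted by OLS
    over the estimation window [0, est_end); abnormal returns in the
    event window [ev_start, ev_end] are actual minus expected returns.
    """
    # OLS fit of the return-generating (market) model
    beta, alpha = np.polyfit(market_ret[:est_end], stock_ret[:est_end], 1)
    expected = alpha + beta * market_ret[ev_start:ev_end + 1]
    abnormal = stock_ret[ev_start:ev_end + 1] - expected
    return abnormal.sum()

# Synthetic daily returns: 100 estimation days plus a 5-day event window
rng = np.random.default_rng(0)
market = rng.normal(0.0, 0.01, 105)
stock = 0.0002 + 1.2 * market + rng.normal(0.0, 0.005, 105)
stock[100:] -= 0.02  # a hypothetical "negative event" hits the stock

car = market_model_car(stock, market, est_end=100, ev_start=100, ev_end=104)
print(round(car, 4))  # a markedly negative CAR around the event
```

A significance test (e.g. a t-test of the abnormal returns against zero) would follow in a full event study, as would aggregation across banks and events.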
Ideas for further research are also presented.

Table of Contents:

List of Figures
List of Tables
List of Abbreviations
1. Introduction
   1.1 Motivation and Objective
   1.2 Course of the Investigation
2. Fundamentals of the Subprime-Crisis
   2.1 The US Housing and Subprime Mortgage Market
   2.2 Characteristics of Subprime Mortgages
   2.3 Business Model of US Mortgage Brokers
   2.4 Financial Instruments Underlying the Subprime-Crisis
   2.5 Consequences of the Fragmented Securitization Process
3. The Development of the Subprime-Crisis
   3.1 Situation of the US Housing Market up to 2007
   3.2 Timeline of the Subprime-Crisis in 2007
   3.3 Spillover Effects from the Mortgage Market to the Global Capital Markets
   3.4 Consequences for the British Banking Market
4. Empirical Analysis About the Subprime-Crisis
   4.1 History and Overview of Event Studies
   4.2 Framework of an Event Study
   4.3 Selection of Relevant Data
       4.3.1 British Banks and Market Index
       4.3.2 News about Private Financial Institutions and Central Banks
   4.4 Event Study About the Subprime-Crisis
       4.4.1 Event Study Methodology
       4.4.2 Formulation and Testing of Hypotheses
       4.4.3 Interpretation of Results
   4.5 Year-round Performance of the British Banking Sector in 2007
5. Summary and Conclusion
References
Appendixes

Text Sample: Chapter 3.2, Timeline of the Subprime-Crisis in 2007: In February 2007, the first signs appeared that subprime mortgage brokers were in trouble, as ResMae Mortgage filed for bankruptcy and NovaStar Financial reported a loss that was not expected by analysts. It was also the beginning of European banks having to announce losses caused by the subprime-crisis: HSBC reported losses of $10.5bn in its US mortgage finance subsidiary and, consequently, fired the head of that division. Problems of US mortgage brokers became more and more evident in March 2007, as People's Choice was the next mortgage broker that had to declare bankruptcy.
Moreover, the brokers Fremont General and New Century Financial stopped making new subprime mortgages. Two weeks later, rumors appeared that New Century Financial might have to file for bankruptcy as well, and these rumors came true at the beginning of April when the company filed for Chapter 11. In May, the next European bank announced an involvement in the subprime-crisis when UBS had to close its US hedge fund operation Dillon Read Capital Management. In June 2007, the rating agencies entered the crisis: Moody's downgraded 131 subprime MBSs and announced a review of the ratings of an additional 260 securities. Moreover, two Bear Stearns hedge funds that had invested heavily in subprime-backed securities lost a significant part of their value, and Bear Stearns had to bail out the hedge funds, providing them with $3.2bn to cover their subprime exposure. As a result, the bank fired its head of asset management, who was responsible for the hedge fund business. July 2007 is considered the first month in which the subprime-crisis had a significant impact on the stock market: after closing above 14,000 points for the first time in history, the Dow Jones lost about seven percent by the end of September. UBS brought the crisis back to Europe once more when it suddenly fired its chief executive officer (CEO), Peter Wuffli, citing problems relating to the subprime-crisis as the cause of this decision. Rating agencies also played a major role in July, when Standard & Poor's (S&P) and Moody's downgraded the ratings of subprime MBSs with values of $7.3bn and $5.0bn, respectively. On July 7, S&P announced a review of the ratings of numerous CDOs with investments in subprime structured products; Moody's was said to be reviewing 184 mortgage-backed CDO tranches. Mortgage brokers were in the spotlight again when American Home Mortgage had difficulties refinancing its loans.
Countrywide, another major mortgage broker, announced a drop in earnings as more and more of its subprime loans defaulted. Fed Chairman Ben Bernanke also mentioned rising defaults in the subprime market and estimated that total losses caused by the subprime-crisis could add up to $100bn. Suddenly, on July 30, the German bank IKB Deutsche Industriebank (IKB) had to announce that one of its ABCP conduits, which had invested in subprime structured products, had trouble refinancing itself. As a consequence, IKB's main shareholder, the state-owned KfW, had to bail out IKB and guaranteed liquidity lines for the conduit Rhineland Funding. One day later, on August 1, 2007, the whole picture of IKB was presented to the public: total losses due to the Rhineland Funding conduit were €3.5bn, and a rescue fund by KfW and other German private banks was installed. The mortgage broker American Home Mortgage finally declared bankruptcy and extended the terms on ABCP that had been issued by one of its funding conduits. Liquidity problems in the markets for structured products became obvious when BNP Paribas stopped the redemption of three of its funds, with a total value of €2bn, because it was not able to calculate a fair price for the funds due to the illiquid subprime MBS market. This announcement triggered concerns about the market prices of structured credit products in general, and interbank lending rates such as LIBOR increased strongly as banks sought liquidity. ABCPs were also priced with higher premiums. The closure of the BNP Paribas funds can be regarded as one of the key events of the subprime-crisis because it caused central banks to intervene heavily in the money markets. One day later, the European Central Bank (ECB) injected €95bn of short-term liquidity into the European money market, and, subsequently, the Fed as well as the Bank of Japan provided liquidity to their respective money markets.
These central banks continued to provide hundreds of billions of dollars of short-term liquidity to the global money markets in the following weeks. The Fed intervened again by reducing the discount rate in order to provide liquidity to the markets. Goldman Sachs was the next company that had to inject money into a hedge fund in mid-August: the investment bank injected $3bn into one of its hedge funds that had suffered losses in subprime structured products. Citigroup closed seven SIVs with a value of $49bn and took the SIVs' subprime debt onto its balance sheet, as the SIVs were not able to receive funding due to the illiquidity in the money markets. Morgan Stanley announced a write-off of $9.4bn due to investments in the subprime market and sold a 9.9 percent stake to a Chinese investment company in order to strengthen its equity base later that month. Countrywide also suffered from the illiquid markets and had to draw down $11.5bn from the company's credit lines before receiving a $2bn cash injection from Bank of America. Similar to IKB, SachsenLB, another German bank, reported refinancing problems in one of its conduits that had invested in subprime mortgage products and was consequently sold to LBBW after receiving a €17.3bn credit line. In the British banking market, Barclays received a £1.6bn short-term loan from the Bank of England. At the beginning of September 2007, it became evident that the subprime-crisis was a truly global crisis, when Bank of China revealed losses of $9bn attributable to subprime investments. The major event of the subprime-crisis in Britain started on September 13, when the BBC announced that Northern Rock had received an emergency loan from the Bank of England in order to solve its refinancing problems. As a consequence, a bank run started that could only be stopped when the British government guaranteed all savings.
A more detailed analysis of Northern Rock is presented in Chapter 3.4. A number of investment banks announced their quarterly results in September: Goldman Sachs reported net earnings of $2.8bn, mainly due to short positions in structured subprime mortgage products, whereas Deutsche Bank announced losses of €1.7bn. HSBC had losses of $880m in the third quarter and announced the closure of its US subprime mortgage unit. International banks continued to announce quarterly results in October. UBS revealed an unexpectedly high loss, wrote down $3.4bn in its fixed income division, and fired its chief financial officer and its investment banking head. Moreover, Citigroup had to write off $5.9bn in addition to its earlier write-offs. Merrill Lynch's write-offs amounted to $7.9bn and caused total losses of $2.3bn; as a result, CEO Stan O'Neal resigned from his position. The Japanese bank Nomura also announced a substantial loss and closed its US MBS department. The US government initiated the Hope Now initiative, set up to help homeowners avoid defaulting on their mortgages. The US Treasury Department also had major US banks install the Master Liquidity Enhancement Conduit, which was supposed to buy illiquid structured products to re-establish liquidity in the market. S&P downgraded another $23bn worth of structured products backed by mortgage loans, and, unlike in the August downgrades, S&P also downgraded securities that had previously carried an AAA rating. On October 31, the Fed announced the expected reduction of the federal funds target rate by another 25 basis points, to 4.5 percent. Investment banks continued to report their subprime exposure in November 2007. Citigroup started by admitting an additional write-down requirement of between $8bn and $11bn, after already having had to write off $5.9bn in October. As a consequence of these substantial losses, CEO Charles Prince resigned.
Morgan Stanley reported a $3.7bn loss on its subprime mortgage investments, whereas Wachovia announced a total loss of $1.7bn. Bank of America wrote off $3bn due to investments in the subprime market, and the GSE Freddie Mac reported a loss of $2bn. Besides US banks, UK banks were again affected by the subprime-crisis: Barclays and HSBC had to write down $2.7bn and $3.4bn, respectively. At the end of November, Citigroup announced an increase in its equity base and sold additional shares to an investment fund based in Abu Dhabi in order to raise $7.5bn. Moreover, Freddie Mac increased its equity by issuing $6bn worth of new shares. In line with Freddie Mac's capital increase, Fannie Mae also issued new shares, worth $7bn, at the beginning of December 2007. On December 3, Moody's announced a review of additional subprime debt. The British banks Royal Bank of Scotland and Lloyds TSB reported subprime write-offs with values of £1.25bn and £200m, respectively. On December 6, the Bank of England lowered its interest rate by 25 basis points, while the ECB left interest rates unchanged following its regular meeting on the same day. In the US, the Fed lowered the discount rate by 25 basis points one week later, although some directors were in favor of a 50 basis point cut. UBS announced that it had to write down another $10bn due to its subprime mortgage market investments; in addition, the company received an $11.5bn capital infusion from investors from Singapore and the Middle East. The last banks to report substantial losses in 2007 were Washington Mutual, which reported fourth-quarter losses of $1.6bn, and Morgan Stanley, which wrote off an additional $9.4bn and also sold new equity to a foreign investor. In order to provide European banks with sufficient liquidity at the end of the year, the ECB provided banks with $500bn at the end of December.
This timeline of the development of the subprime-crisis in 2007 shows the huge impact that the problems in the US subprime mortgage market have had on the international financial markets and global financial institutions. The next chapter will highlight how the crisis in the subprime mortgage market was able to spill over to other asset classes on a global basis. In order to understand the consequences of the subprime-crisis, and especially the need for central bank interventions in the money markets, it is necessary to understand the emergence of the liquidity crisis that appeared in the second half of 2007. Many economists, such as Buiter, define August 9, 2007 as the day on which the subprime-crisis evidently became the trigger for the global capital markets crisis: the closure of the BNP Paribas funds, due to the bank's inability to value ABS, had a spill-over effect on many asset classes and forced the central banks to intervene massively in the money markets. In economic theory, these spill-over effects are called contagion, defined as the spread of a crisis from one specific market into different countries or asset classes. One major consequence was the widening of credit spreads in the global money markets, caused by the liquidity shortage in the interbank market. Banks across the globe grew more and more uncertain about other banks' involvement in subprime MBSs and CDOs and about the financial health of their counterparties in money market transactions, and became reluctant to lend money, even on a short-term basis. As a result, a liquidity crisis occurred that forced the central banks to provide enormous amounts of liquidity to the interbank markets. One characteristic of the liquidity crisis was a so-called flight to quality, which means that banks and fund managers sell riskier assets, such as subprime MBSs and CDOs, and invest in safe assets such as government bonds.
A flight to quality is generally regarded as being based on uncertainty in the markets or uncertainty about counterparties, rather than on the risk of the specific assets themselves. This also seems to hold true for the subprime-crisis. Due to the large supply in these risky asset classes, the markets for MBSs, CDOs, and ABCPs became very illiquid, because many sellers faced few buyers. As a result, credit spreads in these asset classes increased significantly. As a reaction to the liquidity crisis in the interbank market, the central banks intervened several times and provided liquidity to the market.
In a recent interview with Jordan's government-backed broadcaster, America's top military officer lavished praise on the country's armed forces.
"We have common interests and common values," said Gen. Mark Milley, the chairman of the Joint Chiefs of Staff. "The Jordanian Armed Forces are very professional. They're very capable. They're well led."
Milley's view represents the most common American line on the Jordanian military, which has long enjoyed a close relationship with the Pentagon. There's just one problem: It's dead wrong, according to Sean Yom, a political science professor at Temple University.
Where Washington sees a small-but-mighty army, Yom sees a "glorified garrison force," as he wrote in a chapter of the recent edited volume, "Security Assistance in the Middle East." The Jordanian military, he writes, is "more accustomed to policing society to maintain authoritarian order at home than undertaking sophisticated operations."
As Yom notes, the regime that the Jordanian military defends has become increasingly autocratic in recent years. King Abdullah recently approved a cybercrime law that would allow the government to jail its citizens for promulgating "fake news" or "undermining national unity" — terms that the law largely leaves undefined. The crackdown on expression comes just three years after the government crushed the country's teachers' union, which had previously acted as a primary vehicle for political opposition in Jordan.
So what does the U.S. have to show for its decades of lavish support for Jordan's military? And what can that tell us about how Washington should approach security aid? RS spoke with Yom to find out. The conversation has been edited for length and clarity.
RS: The conventional story of U.S. security assistance is that, even though some of the countries that we help are authoritarian in nature, our aid tends to lead to greater respect for democracy, and if it doesn't do that, it at least will strengthen partner militaries. But in your chapter, you describe a different story in Jordan. Can you walk me through that a little bit?
Yom: U.S. security assistance is typically justified through the doctrine of "building partner capacity." There has been a lot of ink spilled on the importance of modernizing the Jordanian Armed Forces and ensuring that it is a capable, coherent and interoperable armed force that can seamlessly work with the U.S. military or conduct operations on its own in the service of defending Jordan, or bolstering regional stability, for instance, by undertaking counterterrorist operations or contributing to peacekeeping missions.
The problem is that there is very little historical evidence that the Jordanian military is actually a capable fighting force, and I think a few key pieces of evidence underlie this. Number one, Jordan really hasn't fought a major armed conflict in a half century. It's undertaken peacekeeping abroad under the banner of the UN, and it occasionally conducts one-off missions such as its airstrikes against the Islamic State in Syria back in 2014. But there is very little evidence on the battlefield that the Jordanian military is what the U.S. would call a capable and competent partner military. The other piece of evidence is that much of Jordan's defense structure has partly been offshored to the United States. The border surveillance system between Jordan and Syria was built by Raytheon Company through U.S. military and economic grants, and much of Jordanian airspace is monitored as closely by the United States as it is by the Jordanians themselves. The significant U.S. military buildup in Jordan is part and parcel of the United States' interest in defending the sovereignty of Jordan and ensuring that foreign aggressors — whether they are terrorists or militant organizations or even foreign states — do not penetrate very far into the Hashemite Kingdom.
We don't see a military that is being built to be capable and modernized and independent and combat ready. Instead, the overriding justification — internally at least, seldom mentioned publicly — is that U.S. security assistance in Jordan is designed not to build partner capacity but to ensure political access to the Hashemite monarchy and to lubricate U.S.-Jordanian relations to make sure that this bilateral alliance is smooth and allows both sides to achieve their mutual interests. In Jordan's case, [its interests are] to remain stable, to receive aid and arms from the United States, and to preserve its sovereignty, and in Washington's case, it's to make sure that there is a pro-Western oasis of moderation in the heart of the Near East.
RS: A question that's underlying a bunch of this is whether the monarchy and the system as it exists in Jordan could even continue to exist without American support. To put it bluntly, does U.S. aid underwrite autocracy in Jordan?
Yom: I think it does, but with a few caveats. The first is that, in comparative perspective, Jordan is not unique in being a middle-income country whose autocratic regime needs foreign aid to survive. The other caveat is that I don't necessarily think that U.S. support and aid is the only reason why the current system of government in Jordan is able to endure. It has its own survival mechanisms, whether it is rallying support from certain constituencies in society, such as some tribal communities, or leaning heavily on other partners in the region.
But I will say this: U.S. support may not be the only reason, but it is a major reason why the Hashemite monarchy and its regime has been able to sustain its current political strategy of maintaining power, which is not to democratize or alleviate repression but rather to maintain an authoritarian status quo. And I think U.S. support is also a major reason why the Jordanian leadership has very little incentive to grant meaningful political reforms such as curtailing corruption and granting more democratic freedoms, which clearly a majority of Jordanians desire. And we know this from public surveys. Jordanians are very explicit about what they are unhappy with in the current political system, but because the U.S. often refuses to pressure the Jordanian government to grant or concede more of these reforms, they also feel that the U.S. is complicit in preserving the authoritarian status quo.
Geopolitically, Jordan serves an important function in U.S. grand strategy as a critical part of its war-making infrastructure in the Middle East, as well as, diplomatically, a pro-Western oasis or island of stability in the heart of a "shatterbelt" of the Middle East. Because of these factors, Washington has very little problem providing such profuse amounts of military assistance to the Jordanian Armed Forces. Above all else, of course, Jordan abuts Israel. Jordan's role in the Palestinian-Israeli conflict and its primary purpose as a peace partner of Israel validates in the eyes of many American policymakers why they should continue supporting the modernization and the arming of the Jordanian Armed Forces under the guise, of course, of building partner capacity, but knowing full well that Jordan is not going to be fighting a war anytime soon.
RS: At some level, you've painted a picture of a big win for U.S. interests here. There's a sense in which America gets a huge plot of land in the middle of a region that it deems vital, and the only downside is that that support doesn't really square with our stated values. But in your article, you had a different conclusion. Can you tell me more about that?
Yom: By helping to maintain [Jordan's] political infrastructure, the United States is complicit in the continued economic and social stagnation of Jordan. For every dinar that the Jordanian leadership spends on security or military items — money that many Jordanians feel it does not have to spend — there is less money to spend on, say, social programs or economic development.
If you look at the Jordanian economy, it is astounding how deep a crisis it has fallen into. We're looking at, right now, 22 to 23 percent unemployment overall, which is probably a vast understatement of the real figure. We're looking at nearly 50 percent youth unemployment. We're looking at poverty of between 25 and 30 percent, depending upon which estimate we take as reliable. And this is all in a country that also spends approximately a third of each annual budget on military and security spending. So essentially, what you're looking at when you think about the Jordanian economy today is a wartime economy. The Jordanian government positions itself and maintains an army as if it were about to wage a war it doesn't have to wage, and that has a destructive effect on the economy and often justifies draconian security measures to regulate and police society. The United States, I would argue, is complicit in that arrangement.
Washington has had very similar experiences in the past with other countries where regimes have some kind of deep economic or political crisis, and yet they believe that having a well-armed coercive apparatus is going to immunize them from any sort of domestic unrest or popular overthrow. Now, that may be the case in Jordan, because the future is hard to tell. But that certainly wasn't the case in, say, Iran under the Shah. It wasn't the case in South Vietnam. It wasn't the case in some of our Central American client states in the 1970s and the 1980s.
One of the things I wish U.S. policymakers would reconsider is whether or not the current arrangement is fundamentally in the interest of the Jordanian people. If we define stability as a country having not just a legitimate political system, but a sustainable economy and a relatively satisfied population, then Jordan is failing on some of these key fronts.
History shows us that [this] kind of strategy seldom works, and it's one of the dark consequences that I fear the most in Jordan, since obviously instability in Jordan doesn't help anyone. But the current vision of stability that has encaged itself in the minds of American lawmakers is not one that I think is going to be fruitful over the long term.
Prior to the 2022-2023 legislative session, five states (California, Virginia, Utah, Colorado, and Connecticut) had passed consumer data privacy laws, but now the patchwork of state laws has more than doubled. Congress has continued to debate a potential federal standard, with the American Data Privacy and Protection Act in the 117th Congress being the first such proposal to be voted out of a committee; however, without momentum around a federal standard, and with continuing and new concerns about data privacy from consumers, many states are undertaking their own policy actions around data privacy. The patchwork nature of these individual state laws can amplify compliance costs for businesses operating across different states and create confusion among American consumers, whose digital footprints often cross state borders. The potential financial impact of complying with 50 distinct state laws could surpass $1 trillion over a decade, with a minimum of $200 billion being borne by small businesses. As this patchwork grows, what does data privacy look like as the 2022-2023 legislative session comes to a close? What happened with data privacy in 2022-2023? As of 2023, the majority of states have considered data privacy legislation, likely in response to consumer concerns about this issue — 32 state legislatures have kicked off the debate and presented bills. Ten states have already signed comprehensive privacy bills into law. Six states—Florida, Indiana, Iowa, Montana, Tennessee, and Texas—enacted data privacy legislation this year. Oregon is the latest state to pass a comprehensive law, which is now awaiting the governor's signature. Additionally, there are five more bills under consideration as of July 2023. Most of these bills share similarities with the existing data privacy laws in California, Virginia, and Utah.
States with data privacy acts enacted in 2023 that have followed the California model Of the states that enacted data privacy laws this year, Indiana and Montana appear to most closely resemble California's model, which relies heavily on administrative rules. Montana, for example, even goes beyond California by creating a right for consumers to revoke their consent to data processing. None of the states that have enacted laws this year have created a private right of action as seen in a limited capacity in the current California law. States that have followed the Virginia or Utah model Notably, a growing number of states have passed or considered a data privacy framework that more closely resembles the laws initially passed in Utah and Virginia. This includes Iowa, Tennessee, and Texas, as well as a bill still under consideration in North Carolina. Such models provide baseline protections but typically impose fewer obligations, cover fewer categories of data, limit enforcement to the attorney general, and are more likely to provide safe harbors. Still, each proposal remains unique. For example, Tennessee became the first state to create a compliance safe harbor for companies complying with National Institute of Standards and Technology (NIST) standards. Other states have considered similar carve-outs for existing standards. Such an approach may lessen some problems with the patchwork by providing a single set of best practices that could remain compliant from state to state. Notable privacy bill trends to watch In addition to the growing patchwork of state privacy laws, this latest legislative term has also provided additional information about the debates around data privacy legislation. Notably, private rights of action continue to raise concerns and may make proposals less likely to succeed. Additionally, a new trend of health privacy-focused bills is emerging at the state level. 
Currently, four states that still have active bills—Maine, Massachusetts, New Jersey, and Rhode Island—contemplate creating a private right of action. However, to date, all bills from Hawaii to Mississippi to New York that included provisions on the private right of action have failed. New York's failed "It's Your Data Act" provided that consumers "need not suffer monetary or property loss as a result of such violation in order to bring an action for a violation." The Washington Privacy Act was passed only after eliminating the private right of action, which was later reinstated in a very limited form by allowing a private right of action only for injunctive relief without monetary damages. The inclusion of a private right of action for statutory violations, which lets individuals sue companies without needing to prove that actual harm was inflicted upon them, has grave consequences. Such a private right of action for statutory damages raises significant concerns about how litigation could be used to prevent innovation. While a private right of action would not pose any significant issues if the burden of proof were tied to demonstrating harm, the problem arises when there is no requirement to prove harm. Such a provision could prompt a surge in class action lawsuits, thereby impeding innovation, especially among small companies that may become more risk-averse for fear of being sued. The United States, with its distinct litigation system and features such as the absence of a "loser pays" rule, is more susceptible to abuse of the private right of action for statutory violations. Illinois's Biometric Information Privacy Act provides such a right in the context of certain collection of data and has seen everything from photo-tagging features to trucking companies become the subject of lawsuits. Most of the resulting funds have gone to attorneys, with limited amounts to the class members alleged to have been "violated" by the action. 
In the photo tagging case, Facebook was directed to pay $650 million without the necessity of demonstrating any harm. In the trucking case, truck drivers secured a $228 million judgment because, as employees, they were required to scan fingerprints to confirm their identity, again without the need to show actual harm. An emerging trend to watch is the ongoing debate surrounding bills aimed at regulating consumer health data, primarily focusing on reproductive health data. Washington is the first state to pass such a law, which is set to take effect in 2024. In a post-Roe context, it is likely that similar legislation — particularly in blue states — will emerge, regulating actors that are not governed by HIPAA. Given the broad scope of what is classified as health data, debates on its definition, collection, and usage are likely to be heated. Such laws also raise unique compliance questions for a variety of popular apps that are not regulated as medical devices but provide consumers with empowering ways to track information from blood sugar to mental health. What do state data privacy laws mean for consumers, innovators, and the federal privacy policy debate? States are acting on data privacy in part because of continued interest in the issue from constituents. In 2022, more than 80% of voters polled supported the idea of a federal data privacy law. Given that data privacy remains a concern, and due to the lack of progress on a federal bill, it is unsurprising that much of the debate over data privacy has shifted to the local or state level, where legislatures are able to move more quickly. But is this good for consumers and innovators? Is there a case for data privacy legislation anyway? While many polled consumers are in favor of data privacy legislation, there remains a great amount of difference in their actual privacy preferences. 
In fact, the overwhelming support for data privacy becomes far more complicated when you consider questions like how much an individual would be willing to pay for social media or other products as opposed to using an ad-supported version. Similarly, research has shown a "privacy paradox," in which revealed preferences for privacy tend to be weaker than stated preferences. If policymakers are to consider legislation around data privacy, they should focus on real and widely agreed-upon harms, not merely expressed preferences. This approach prevents a shift toward a more European "privacy fundamentalism," which is more likely to result in conflicts with other rights, such as speech, and to create a static approach that could deter innovation, including innovation that may improve privacy. Understanding the problems of a patchwork approach The continuing, emerging patchwork of data privacy laws at the state level is likely to lead to both increased costs and confusion. This is true not only for the businesses that handle data but also for consumers. A state-by-state approach makes it uncertain for both innovators and consumers what may or may not be done with their data. For consumers, this can create confusion about why certain products or features may not be available in their state or what rights they have when it comes to obtaining or correcting their data online. Particularly for small businesses, a state-by-state approach is likely to significantly raise costs as new compliance concerns arise in each state. In some cases, this may result in applying the most restrictive standard across the board, but in other cases, it may require the development of state-specific features to comply. In either case, both consumers and innovators lose out. Consumers may find themselves losing features because of standards imposed by legislatures in other states, and innovators may find themselves focusing on compliance rather than the improvements that best serve their customers. 
Far from being a workable second-best solution, such a patchwork makes conflicts almost inevitable: proposals will eventually contradict one another, making it impossible to comply with all such state laws. The most obvious example would be if one state chooses an opt-out model while another chooses an opt-in model, but many other conflicts could arise around issues such as data minimization or retention. Given the potential and likelihood for conflicts and the burden on out-of-state businesses, a state-by-state approach should also give rise to dormant commerce clause concerns. The interstate (and international) nature of data means that a federal standard should be considered constitutionally necessary in this case.
Conclusion The 2022-2023 session saw a doubling of the number of states with consumer data privacy laws. While policymakers may feel they are responding to constituent concerns, the patchwork approach remains problematic for both innovators and consumers.
With English universally considered a lingua franca, and with the increasing amount of English being spoken in education, business, medicine, technology, government policy, and public affairs around the world, it is natural that an expansion of variations of the English language follows. This phenomenon turns the world into a global village and makes communication more accessible, especially when intelligibility is taken into account. Intelligibility, the degree to which a listener can understand a speaker's speech, is essential for effective communication and should be the objective of second language (L2) learning and teaching. One of the elements of intelligibility is pronunciation. The way teachers pronounce words impacts students' comprehension. Furthermore, studies suggest that pronunciation is the make-or-break component of effective conversations. The book "Intelligibility, Oral Communication, and the Teaching of Pronunciation" by John Levis (2018) provides concepts, reminders, and new approaches to second language teaching and emphasizes the importance of pronunciation instruction. It also shows how the crucial findings of relevant research in the field inform teachers about what they should and should not practice in an intelligibility-based classroom. It addresses the practical aspects of teaching and the factors necessary for effective communication when learning a second language. In terms of organization, the book starts with an introduction followed by four sections: a framework for teaching spoken language; word-based errors and intelligibility; discourse-based errors and intelligibility; and teaching and research approaches to intelligibility. These sections are subdivided into ten chapters. Section I consists of two chapters that conceptualize the relationship between pronunciation and intelligibility and the relationship between intelligibility and communication. 
This part also asserts that the ultimate goal of pronunciation teaching should be to make the speaker understandable rather than to produce a native-like accent. Chapter 1, "Intelligibility, Comprehensibility, and Spoken Language", defines the key terms used throughout the book. Intelligibility refers to a listener's ability to understand a speaker, comprehensibility measures how easy an utterance is to understand (Chan, 2021), and spoken language is the utterance itself. Chapter 2, "Priorities: What Teachers and Researchers Say", discusses the priorities in teaching and learning pronunciation using an intelligibility-based approach: what characteristics should be emphasized, what should be taught, and what should not be taught. The author presents three reasons why an intelligibility-oriented approach is recommended: English is a universal lingua franca; not all English speakers have a native-like accent; and effective communication is the ultimate aim. This chapter also discusses relevant studies and existing recommendations in teaching pronunciation that may serve as a basis for future research. Section II, "Word-Based Errors and Intelligibility", consists of three chapters that explore intelligibility and the word-based pronunciation features of English. Word-based pronunciation features influence intelligibility the most. We sometimes have difficulty identifying, processing, and understanding the words said by other speakers. This is especially likely with nonnative speakers of English, for whom uttering unfamiliar English words differs from uttering words in their native language. Each chapter of this section presents various word-based pronunciation features. Chapter 3, "Segmentals and Intelligibility", discusses phonemes and allophones, which are frequently uttered incorrectly by second language speakers and thus interfere with intelligibility on the part of listeners. 
Chapter 4, "Consonant Clusters and Intelligibility", focuses on the complexity of English consonant clusters and grammatical morphemes, such as past tense inflections (e.g., -ed, -t, -d), and their effect on intelligibility. Second language learners often mispronounce consonant clusters because of their unfamiliarity. The book argues that consonant cluster mispronunciations are more damaging to a speaker's intelligibility than individual consonant mispronunciations. The author illustrates how consonant clusters are adjusted to fit speakers' native language systems, through either vowel epenthesis (as when some Spanish speakers say sC words with an initial vowel: eschool for school, espeak for speak) or deletion of sounds (as when some Vietnamese speakers say cas instead of clasps). Changing the expected syllable structure is likely to affect intelligibility, so it is vital to attend to specific consonant cluster pronunciations. The section concludes with Chapter 5, "Word Stress and Intelligibility", which is about the role of word stress in intelligibility and the teaching of pronunciation. The author shows that stress-related mispronunciations are more damaging to intelligibility for listeners than segmental mispronunciations. Although segmental and lexical-stress errors might not negatively affect word recognition, some second language learners may perceive the mispronounced stress of words differently, as with the noun OBject and the verb obJECT. Section III, "Discourse-Based Errors and Intelligibility", consists of two chapters that address the effects of discourse-based errors on intelligibility. Pronunciation may not affect word recognition negatively, yet it can still shape the overall message, or the message as perceived by a listener. Specifically, this section examines pronunciation in terms of rhythm and intonation. 
Chapter 6, "Rhythm and Intelligibility", describes the concept of rhythm (including speech rate, fluency, and connected speech) and its connection to intelligibility. The author mentions that he had not originally wanted to write this chapter but was persuaded that elements related to rhythm, such as connected speech, may matter less for production yet are vital for perception. Rhythm assists listeners in receiving, processing, and organizing each linguistic unit produced by the speaker. Chapter 7, "Intonation and Intelligibility: The Roles of Prominence and Tune", examines the importance of prominence and tune for pragmatics and social interaction. Prominence is giving more emphasis to some parts of a major grammatical unit than to others. The tune, on the other hand, is "the pitch movement from the first syllable to the phrase's end" (p. 171). The book demonstrates how incorrect prominence and intonation patterns interfere with intelligibility or even cause unintended insults. The placement of prominence and tune in phrases depends on how information is arranged in speech. Since there are no definite rules regarding the placement of prominence and tune on the grammatical units of a text, speakers and listeners must consider contextualization and familiarity to improve intelligibility. The author concludes this chapter by stressing the need to include an intonation-based way of teaching pronunciation, in addition to word-based and sentence-based practices. The last section of the book, Section IV, "Teaching and Research Approaches to Intelligibility", consists of three chapters that draw together the key concepts of the first three sections and address their relationship with teaching through an intelligibility-centered approach. In Chapter 8, "Teaching for Intelligibility: Guidelines for Setting Priorities", the author provides a list of principles for intelligibility-based teaching. 
These principles provide guidelines for selecting teaching and learning content that can be adapted to various contexts. Chapter 9, "The Intelligibility-Based Classroom", explores the role of the intelligibility-based approach in language teaching. It supports the adoption of the approach and provides realistic recommendations for pronunciation teaching. The author also critiques the nativeness approach in second language teaching, which emphasizes drills and traditional pronunciation exercises to develop native-like pronunciation. The goal of second language learning cannot be, and does not have to be, to speak like a native; all that is necessary is intelligibility. Finally, Chapter 10, "What Should and Should Not Be Taught: An Intelligibility-Based Approach", presents "what-to-dos" associated with planning and implementing pronunciation teaching. Teachers must not be rigidly selective but rather adaptive in their approaches to teaching pronunciation, basing those approaches on research and experience. Additionally, the author offers "more important" (e.g., initial consonant clusters) and "less important" (e.g., medial consonant clusters) suggestions for intelligibility-centered pronunciation teaching. This chapter discusses practical methods that second language educators can use to teach language in general and pronunciation in particular. John Levis' (2018) book "Intelligibility, Oral Communication, and the Teaching of Pronunciation" is a timely, user-friendly, and practical book that offers an overview of knowledge and approaches in second language teaching to foster intelligibility. It showcases the existing practice of pronunciation teaching, reinforced by relevant studies, and promotes the adoption of an intelligibility approach that focuses on understanding and comprehension rather than the native-like fluency pursued by traditional practices. The book also outlines guidelines for planning and implementing the intelligibility approach. 
The author distills research findings and their implications into helpful strategies teachers can use in teaching pronunciation. For example, the third section shows how intonation and prominence are "heavily implicated in the loss of intelligibility" (p. 2), which might be an eye-opener for educators and spark interest in future research. Moreover, the book is easy to understand because of its accessible language and well-described technical terms. The book could be more practical if its guidelines for the intelligibility approach included a chapter discussing pronunciation assessment and providing a list of specific classroom tasks teachers can facilitate in an intelligibility-centered classroom. However, the author clarifies that the book would not cover this area and instead refers readers to a source that addresses the topic. Another addition that would make the book more comprehensive is a chapter dedicated to how individual differences affect second language pronunciation learning. Factors such as age, sex, motivation, beliefs, personality, culture, learning style, and autonomy contribute to learners' individuality (Griffiths & Soruc, 2021). This book, with its comprehensive treatment of pronunciation and intelligibility in second language learning, is recommended for teachers, students, and researchers interested in developing the communication skills of English as a second language learners. Educational practitioners and linguists will be informed of the language trends, which can serve as a reference for future studies. While native-like speaking is generally considered the standard, readers will recognize that intelligibility is more important, particularly for second language learners.
The study was undertaken by researchers at UNSW Sydney, with the assistance and support of the City of Sydney Council. The aim of this research was to develop a survey tool for ongoing assessment of social interactions and social cohesion at a large-scale urban renewal site that could be used to:
➢ Measure the nature of social cohesion and social interaction and identify opportunities and barriers residents face in contributing to social cohesion and community development.
➢ Understand the wellbeing of residents and workers, including their satisfaction with and attachment to the area, their local area preferences and desires, and their plans for the future.
The results of the survey were presented to staff across the City of Sydney Council. It is expected that the survey findings will be used to inform Council's investments and activities across a range of areas, including community development, civic engagement, communications, placemaking, land use planning, open space and public domain planning, and local business development. The implications for practice presented here are preliminary, and it is expected that City staff will further analyse and apply the survey findings to inform their work going forward. The City intends for the survey to be undertaken on a recurring basis over the coming years, to monitor changes to the social fabric over time as the urban renewal area develops.
Implications for community development: Green Square is an area with a large proportion of new residents (72% of survey respondents have lived in the area for 5 years or less), but the majority (70%) plan to remain resident in the area for a number of years.
People feel more strongly connected to community at the larger scales of Sydney and Australia than at the local level of the suburb and street, but there is a desire to build more local connections, with the majority (68%) of residents wanting to have more interaction with others who live and work in the area. Private renters and younger people in particular desire more local social connection. Importantly, connection to community at the building scale is higher than at the suburb or street level, and the building scale was the only scale at which sense of community increased between 2017 and 2020. This suggests that community development at the building level is promising, but also that there is room to further develop community connections at the local suburb level. Interventions to encourage social interaction will be needed that engage residents who expressed a desire for greater involvement in social interactions but are constrained by a lack of time and/or knowledge of the opportunities available to them, and a lack of confidence when dealing with strangers.
Implications for civic engagement: Around a third (32%) of residents felt they understood their rights around planning and urban development in the local area, slightly higher than in 2017 (27%). A smaller percentage (17%) felt they had made a civic contribution by working with others to improve the area. One in five felt that their thoughts about local issues could be heard by people who make a difference (22%) and that there was strong local leadership in the area (18%), demonstrating a slight improvement from 2017 (when the figures were 20% and 15% respectively). There is potential for improved engagement amongst residents in the area, as demonstrated by their willingness to be engaged in political discussions, with more residents having participated in other research (25%) and signed petitions (35%).
There was also a substantial increase in the proportion of people who had joined a protest or demonstration, from 8% in 2017 to 17% in 2020. The survey also revealed that relationships between language spoken at home and civic engagement are complex. People who speak a language other than English at home are less likely to have communicated with a local politician or participated in the running of a strata or community title scheme. However, participation in research and council planning processes was equal or higher amongst people who speak a language other than English at home. There were also differences between people who speak a Chinese language at home and those who speak another language at home, with participation in online discussions, attendance at community events and sending letters to the media being higher amongst Chinese-speaking residents than those speaking another language at home. In comparison, participation in a protest or demonstration was higher for those speaking English and another language at home compared to Chinese-speaking residents. These observed differences are based on small sample sizes and should therefore be treated with caution. However, they suggest that different strategies may be needed to encourage civic engagement among people who speak a language other than English at home, and that different strategies may be more effective for different language groups.
Implications for communications: Aside from time constraints, difficulty in finding information about social activities (26%) was the second most common limitation residents gave to socialising with others in the area. Barriers to participating in community activities were more pronounced among people speaking languages other than English at home.
However, there are some interesting differences when comparing people speaking a Chinese language at home with people speaking another language at home: people speaking a Chinese language are more likely to say that they are not confident with strangers, are not interested in getting involved, and have difficulty accessing facilities or venues, but are less likely to say that they do not feel welcome than people speaking another language at home. Residents would like to receive information about social activities through social media (63%), e-mails (56%), noticeboards in public places and in their buildings (52%), and websites (36%). The City can provide such information through City-specific social media and through partnering with other social media platforms known to be actively used in the area, as well as by collaborating with building managers. These approaches were effective in promoting the survey to residents. However, preferences for information differ greatly by age and language spoken at home. People aged over 50 were much less likely to want to receive information via social media (36%), whereas e-mailed community newsletters were a more popular option amongst people over 50 (56%). People speaking a Chinese language at home are more likely to want to receive information via social media, noticeboards in public places or their building, websites, the local community centre or library, and local newspapers and businesses, and are less likely to want to receive this information via word of mouth than both people speaking English and those speaking other languages at home. These results indicate that a variety of communication methods will be needed to reach all groups, with social media, e-mailed community newsletters and websites being important sources of information.
Implications for placemaking: The majority of residents (90%) agreed that the area is a good place to live.
This proportion has changed little since the 2014 and 2017 surveys and did not change before and after the introduction of the Covid-19 restrictions, suggesting a consistently high level of satisfaction with the area. However, people felt more strongly connected to Australia, Sydney and the inner city and surrounds than to their local area, street or building. Respondents to the 2020 survey were less connected to communities at all scales than in 2017, with the exception of the building scale. As there is a relationship between length of residence and community attachment, this likely reflects the high proportion of residents who have lived in the area for less than six years, but it nevertheless suggests that there is potential for further community development at the local scale.
Implications for land use planning: The things people most commonly said they disliked about the area related to the danger of overdevelopment, the impacts of construction on the area, and its overall density. Many people were also concerned about heavy traffic (48%) and parking (31%). However, while improvements to traffic management and public transport were the most important improvements residents wanted to see in 2017 (mentioned by 49% and 50% of residents respectively), in 2020 they remained important (mentioned by 43% of residents each) but were no longer the most commonly mentioned improvement. This likely reflects the gradual maturing of Green Square as a neighbourhood, where most hard infrastructure is now in place.
More than half (58%) of residents travel to work or study using public transport, and almost half (47%) of people said they moved to the area because of the proximity to public transport, demonstrating the important role that public transport plays in the attractiveness of the area. Notably, the improvements residents wanted to see in the area differed between age groups, with younger people more likely to desire a greater variety of cafes, restaurants and bars, evening activities, and public transport that connects to more parts of the city, while older people were more likely to desire landscaping in streets and parks, a greater variety of retail shops, and improved traffic management.
Implications for open space and public domain planning: Parks and public spaces are significant locations for social interaction in Green Square and are heavily used by residents. After cafes and restaurants, local (79%) and regional (66%) parks were the most commonly used local facilities. This could influence local land use planning and infrastructure development in Green Square and in future urban renewal areas, as it indicates that parks are important in facilitating local social interaction. However, there remains an important role for more formal community facilities, especially for particular groups, as demonstrated by the higher proportion of unemployed people making use of community centres (19%) compared to the population as a whole (10%).
Implications for local business: The most common places where people socialise with others in Green Square are cafes, restaurants and/or pubs (52%), and incidental interaction also commonly occurs in these places (52%). Cafes and restaurants are also the most commonly used services and facilities (94% of residents). Such businesses are therefore playing an important social role in the area, and two-thirds of residents (65%) said that they would like to see a wider variety of cafes, restaurants and bars in the area in the future.
This suggests that the ideal of mixed-use development encouraging greater social interaction is supported by the findings in this case, which has implications for development application planners making decisions about new businesses in the area.
ABSTRAK: This thesis discusses the Role of the Regional General Election Commission (KPUD) in Building Community Political Participation in Regional Head Elections (A Study of the 2018 Regional Head Election in Kubu Raya Regency). The method used in this research is a sociological-normative approach. The thesis concludes that the patterns and forms of work of the KPUD of Kubu Raya Regency to build community political participation in the 2018 Regional Head Election in Kubu Raya Regency were as follows. Conducting outreach (sosialisasi) to the community of Kubu Raya Regency, aimed at increasing participation so that people would be willing to cast their votes on polling day. Disseminating information through campaign media, namely billboards, posters, pamphlets, pins, banners, and stickers on cars, motorcycles and houses, carried out by the KPUD in the run-up to the 2018 Regional Head Election. The Democracy Volunteers programme (Relawan Demokrasi, "Relasi"), a social movement intended to increase voter participation and the quality of voters in exercising their right to vote. And mobile outreach, the final effort, in which the KPUD of Kubu Raya Regency promoted the election through a roving vehicle. The low level of community political participation in the Regional Head Election represents a failure of the KPUD in carrying out its role of building community political participation, and was influenced by several factors. Occupational (economic) factors: livelihoods in Kubu Raya Regency vary, but most people work as farmers and therefore spend much of their time away from home, in the fields, trading, gardening and so on. Yet in an election, the people's festival of democracy, the community should be able to express its political aspirations by casting its votes and exercising its right to vote. Community awareness: another factor contributing to low participation is the absence of higher-education facilities such as universities in the area, which forces young people who wish to continue their education to move away. The long distance back to their home villages affects their awareness of their political rights and votes, and many choose not to vote (golput); this in turn lowers awareness and becomes a reason for non-participation by the community and its youth in the regional head election. Political socialisation: ahead of every regional or legislative election, the Polling Committee (PPS, Panitia Pemungutan Suara) provides information and outreach to encourage the community to take part, including by distributing voter notification letters. The dominant factors behind the low political participation in the regional election correlate directly with the role of the KPUD of Kubu Raya Regency. Geography: Kubu Raya Regency has 9 widely scattered sub-districts, some of which are very difficult and dangerous to reach; routes can require crossing plantations and travelling on unpaved, rocky, potholed and winding roads flanked by ravines, and the regency contains the longest stretch of waterways in West Kalimantan Province. Geography thus hampers the KPUD of Kubu Raya Regency, because the routes between outreach locations and polling stations are hard for the community to cover, given uneven road access in the waterway areas and the many damaged roads in Kubu Raya Regency. Community mindset: some people in Kubu Raya Regency still make their living from farming and smallholding. Welfare is not evenly distributed, so in some areas economic and education levels remain low and people are indifferent, even dismissive, towards the 2018 Regional Head Election in Kubu Raya Regency. Permanent Voters List (DPT) problems: the KPUD of Kubu Raya Regency tried to address DPT problems by informing the public that anyone not yet registered on the DPT should immediately register with the Polling Committee (PPS) at the village or kelurahan office. Nevertheless, DPT problems kept arising, with many voters still unregistered, or registered but not entered on the DPT. Obstacles during outreach included limited funds; low community enthusiasm, with only certain parties attending, such as community leaders and RT and RW heads; a lack of interest from groups such as women, particularly mothers, and first-time voters, who sometimes refused outreach; prospective voters not being at home when invitations were delivered; indifference shown by the community during outreach sessions, with people assuming that voting or not voting made no difference to their lives; and the difficulty of reaching elderly people through outreach, as they often forget what they have been taught about the election. Keywords: Role, Regional General Election Commission, Building, Political Participation.
ABSTRACT: This thesis discusses the Role of the Regional Election Commission in Building Community Political Participation in the Election of Regional Heads (Study of the Election of Regional Heads of Kubu Raya in 2018). The method used in this study is the Sociological Normative approach.
The conclusion of this thesis is that the patterns and forms of performance of the Election Commission of Kubu Raya Regency in building community political participation in the 2018 Election of Regional Heads in Kubu Raya were as follows: socialization outreach to the community of Kubu Raya Regency, aimed at increasing community participation and willingness to vote at the polls; dissemination of information through campaign media, namely billboards, posters, pamphlets, pins, banners, and stickers on cars, motorbikes and houses, carried out by the Election Commission of Kubu Raya Regency ahead of the 2018 Election of Regional Heads in Kubu Raya Regency; the Democratic Volunteer Program ("Relasi"), a social movement intended to increase the participation and quality of voters in exercising their right to vote; and mobile car socialization, the final effort, in which the Election Commission of Kubu Raya Regency socialized the election through the promotion of a roving vehicle. The lack of public participation in the election of Regional Heads is a form of failure of the Election Commission in carrying out its role of building community political participation, influenced by several factors. Employment (economic) factors: livelihoods in Kubu Raya District vary, but most people work as farmers and therefore spend much of their time away from home, in the fields, trading, gardening and so on. Yet in an election, the people's festival of democracy, the public should be able to convey their political aspirations by casting their votes and exercising their voting rights.
Community awareness factor: another factor that contributes to the lack of community participation is the absence of educational facilities such as universities in the area, so that young people who want to continue their education must leave. Because the distance back to their villages affects their awareness of their political rights and voices, young people often prefer not to use their voting rights (abstention); this lowers awareness and becomes a reason for the community and its youth not to take part in the regional elections that take place. Political socialization: ahead of every regional or legislative election, the PPS committee (Voting Committee) provides information and outreach to the community to encourage participation, including by distributing voter notification letters. The dominant factors behind the lack of community political participation in regional elections have a direct correlation with the role of the Election Commission of Kubu Raya Regency, namely geographical location: Kubu Raya Regency has 9 scattered sub-districts, some of which are very difficult to reach and very steep.
To reach some locations one must pass through plantations, along unpaved rocky roads, potholed roads, and winding roads flanked by ravines, in the regency with the longest waterways in West Kalimantan Province; geographical location thus hampers the Election Commission of Kubu Raya Regency, because the roads between socialization locations and polling stations (TPS) are hard for the community to reach, given uneven road access in the waterway areas and the many heavily damaged roads in Kubu Raya Regency. Community mindset: some of the people of Kubu Raya Regency still live from farming and smallholding. Welfare is not spread evenly, so in some regions economic and education levels remain low, and people ignore or are indifferent to the 2018 Election of Regional Heads in Kubu Raya. Permanent Voters List (DPT) problems: the Election Commission of Kubu Raya Regency has tried to overcome the DPT problem by telling the public that people not yet registered on the DPT must immediately register with the Voting Committee (PPS) at the village or kelurahan office. However, the problem of the Permanent Voters List (DPT) always arises, with many voters still unregistered, or registered but not entered on the DPT. Constraints during socialization included limited funds; a lack of enthusiasm among the people, with only certain parties attending, such as community leaders, RTs and RWs; a lack of concern from groups such as women, especially mothers, and first-time voters, who sometimes refused socialization; prospective voters not being at home when invitations were delivered; and indifference shown by the community during socialization, with people thinking that voting or not is the same because it will not affect their lives.
Elderly members of the community are difficult to reach through socialization, partly because they often forget even after being given information about the election. Keywords: Role, Regional Election Commission, Building, Political Participation.
The aim of the paper is to analyze the basins that Lebanon shares with its riparian neighbours, considering cooperation and conflict, geopolitical aspects of the Arab region, and governance. This is complemented with the Integrated Water Resources Management (IWRM) approach. Traditionally, the issue of shared water resources in the Arab region has been highly politicized and has been a critical feature of high-level negotiations between governments. At the same time, it raises concerns about justice and security among the general public around the world in relation to the human right to water. Attention has largely been focused on long-standing disputes arising from Arab dependence on surface water resources originating from (or controlled by) non-Arab countries. Water is one of the most precious resources in Lebanon, as it is around the world, especially considering current and future climate change scenarios. Nor can the effects of the humanitarian crisis, with 1.5 million Syrian refugees in Lebanon putting pressure on water services and resources, be dismissed. However, the water crisis affecting Lebanon predates the arrival of the Syrian refugees and is shaped by the country's geopolitical situation. Available water includes rivers and springs, storage dams and groundwater. Lebanon's water resources are under stress due to several factors: unsustainable water management practices, increasing water demand from all sectors, water pollution, and ineffective water governance. Lebanon shares the following basins with riparian countries: the Jordan River, the Orontes River basin (also known as the Al Asi River) and the Nahr Al Kabir basin. As regards groundwater, the Anti-Lebanon Mountain range is located at the Lebanese-Syrian border. Originating from the Anti-Lebanon and Mount Hermon mountain ranges, the Jordan River covers a distance of 223 km from north to south and discharges into the Dead Sea.
The river has five riparians: Israel, Jordan, Lebanon, Palestine and Syria. The Jordan River headwaters (Hasbani, Banias and Dan) are fed by groundwater and seasonal surface runoff. Water use in the Jordan River basin is unevenly developed. Palestine and Syria have no access to the Jordan River; hence their use of water resources from the river itself is nil. However, Syria has built several dams in the Yarmouk River sub-basin. Overall, the Jordan River basin has an estimated total irrigated area of 100,000-150,000 ha, of which around 30% is located in Israel, Jordan and Syria, 5% in Palestine and 2% in Lebanon. As for the main agreements, in 1953 and 1987 Jordan and Syria agreed on the use of the Yarmouk River, including the construction of the Wahdah Dam and 25 dams in Syria; the agreement also establishes a joint commission for the implementation of the provisions on the Wahdah Dam. In 1994, Israel and Jordan signed the Treaty of Peace, Annex II of which concerns water allocation and storage of the Jordan and Yarmouk Rivers, and calls for efforts to prevent water pollution as well as the establishment of a Joint Water Committee. In 1995, Israel and Palestine (the PLO) accepted Article 40 of the Oslo II political agreement, which states that Israel recognizes Palestinian water rights in the West Bank only and establishes the Joint Water Committee to manage West Bank waters and develop new supplies. Palestinians are denied access to the Jordan River under this agreement. Geopolitically, the question of water sharing in the Jordan River basin is inextricably linked to the ongoing conflicts between Israel and Syria, Israel and Lebanon, and Israel and Palestine; while a wide range of issues is at stake, control over water in the basin has added to existing regional tensions. The Orontes River basin, also known as the Al Asi River, is the only perennial river in Western Asia that flows north, from Lebanon to Syria and Turkey, and drains west into the Mediterranean Sea.
The river is mainly used for irrigation purposes, with several agricultural projects planned in the three riparian countries. There is no basin-wide agreement between the three riparians, but there are several bilateral agreements in place on issues such as water allocation (the 1994 agreement between Lebanon and Syria on the distribution of the water of the Al Asi River) and the joint construction of infrastructure (Syria and Turkey). Orontes basin politics are heavily influenced by the state of Turkish-Syrian relations in general, and by discussions over the sharing of the Euphrates River in particular. Syria and Turkey have not resolved the question of the disputed coastal province of Hatay (Iskenderun), through which the Orontes exits to the Mediterranean Sea. In 1994, Lebanon and Syria reached an agreement on the distribution of the Orontes River water originating in Lebanese territory, which specifies the water allocation between the two countries. In 2009, Syria and Turkey signed a Memorandum of Understanding concerning the construction of the joint Orontes River Friendship Dam. The Nahr Al Kabir basin rises from numerous springs in Syria and in the Lebanon Mountain range. It runs a westerly course, forming a natural border between northern Lebanon and Syria. The river is severely polluted by the widespread discharge of untreated sewage and uncontrolled solid waste disposal. The two countries cooperate on the basis of a 2002 water-sharing agreement, with several joint technical sub-committees tackling various issues related to the watershed. In 2002, Lebanon and Syria reached an agreement to share the water of the Nahr Al Kabir and to build a joint dam on its main stem. As regards groundwater, the Anti-Lebanon Mountain range is located at the Lebanese-Syrian border, between the Bekaa Plain in the west and the Damascus Plain in the east.
The Anti-Lebanon is an important source of water, both locally and in the wider regional context, as it forms the source of a number of rivers in the Mashrek. Several large springs emanate from these aquifers and contribute to the Awaj, Barada, Litani, Orontes and (Upper) Jordan Rivers. There are no water agreements in place for any part of the Anti-Lebanon Mountain range, nor for the three shared spring catchments. The two riparians coordinate shared water resources management issues through the Syrian-Lebanese Joint Committee for Shared Water, which also implements the agreements in place over the Nahr Al Kabir and the Orontes River. In this regard, enhancing cooperation between Lebanon and its riparian neighbours is crucial to managing shared water resources in this water-scarce region. More cooperative action and constructive dialogue are needed to sustain these shared resources, considering water governance, the principles of hydrodiplomacy and the IWRM approach. The questions guiding this study are which watersheds Lebanon shares with its neighbouring countries, what their characteristics are, and whether there are international agreements regulating their use and joint development. For data collection, the research draws on primary and secondary sources (academic research, the specialized press, statistical series and international surveys, among others). For data analysis, the research uses documentary analysis and qualitative data analysis. The article opens with an introduction to water management in the Arab countries. Next, theoretical bases for the study of transboundary basins are proposed. As a contribution to the theoretical framework, the principles on shared watercourses in international law are developed. Then, the transboundary basins of Lebanon, the country of Phoenician origin, are presented. Finally, the conclusions of the study are given.
The aim of this article is to analyze the transboundary basins between the Republic of Lebanon and the riparian countries, considering aspects of cooperation and conflict, the geopolitics of the Arab region, and governance. This is complemented with the proposed paradigm of Integrated Water Resources Management (IWRM). Traditionally, the issue of shared water resources in the Arab region has been highly politicized, and high-level negotiations between governments have been critical. At the same time, this raises concerns about water justice and security. Attention has been centred on long-standing disputes arising from Arab dependence on surface water resources originating in (or controlled by) non-Arab countries. Water is one of the most precious resources in Lebanon, as it is worldwide, especially considering current and future climate change scenarios. The humanitarian crisis of 1.5 million Syrian refugees on Lebanese territory puts further pressure on water services and resources, although the water crisis predates the arrival of these refugees and is marked by the country's own geopolitical situation. The available water flows through rivers, lakes, reservoirs and groundwater. The country's water resources are under stress due to several factors: unsustainable management practices, increasing demand from all sectors, pollution, and ineffective (almost absent) water governance. Lebanon shares the following basins with riparian countries: the Jordan River, the Orontes River and the Nahr Al Kabir River. As regards groundwater, the Anti-Lebanon mountain range lies on the border with Syria. Promoting cooperation is crucial for the management of shared water resources in this region.
En este sentido, una acción más cooperativa y un diálogo constructivo son necesarios para gestionar estos recursos compartidos, considerando la gobernanza del agua, los principios de la hidrodiplomacia y la GIRH. Las preguntas que guían este estudio son conocer qué cuencas hídricas comparte el Líbano, cuáles son las características de las mismas, y si existen acuerdos internacionales que regulen su uso y aprovechamiento conjunto. En cuanto a la metodología utilizada, la siguiente investigación se basa en información de fuentes primarias y secundarias (investigaciones académicas, prensa especializada, estadísticas, entre otras). El análisis de datos es cualitativo y documental. Este artículo comienza con una introducción a la administración del agua en los países árabes. A continuación, se proponen bases teóricas para el estudio de cuencas transfronterizas. Como aporte al marco teórico, se desarrollan los principios sobre cursos de agua compartidos en el Derecho Internacional. A continuación, se presentan las cuencas transfronterizas del país de origen fenicio. Finalmente, se proponen las conclusiones del estudio.
The present dissertation deals with selected aspects of corporate governance and personnel management and provides an in-depth analysis of capital markets' perception of these issues and their effects on shareholder wealth. The subjects of the investigation are the role and effects of gender diversity on corporate boards and female leadership, CEO overconfidence, and corporate layoff decisions. Chapter 2 offers a comprehensive overview of existing research on the effects of increased female representation on corporate boards, and of stronger participation of women in leadership, on firm performance and thus shareholder wealth. The chapter reviews empirical evidence from 44 studies published between 1996 and 2014. The guiding question of the review is whether previous research provides empirical evidence for economic benefits of increased female representation in top management positions. No uniform picture emerges from almost 20 years of research on the relationship between gender diversity on corporate boards and in top management teams (TMTs) and firm performance. There is no clear trend towards a general economic advantage of increased female leadership; the findings are ambiguous. While 15 studies find empirical evidence for a positive relationship, five studies report a negative relationship. Several studies report mixed evidence (13 studies), and a substantial number cannot establish any link between gender diversity and financial performance (14 studies). The studies apply a wide variety of regression models, alongside event study methodology and interaction analysis. The findings suggest that the relationship between female representation in top management positions and financial firm performance is more complex than originally assumed. The answer to my research question is thus: it depends. Certain boundary conditions and moderating factors appear to influence the relationship. 
First, performance effects vary between business sectors. Female representation in top management is associated with better performance if the firm operates in a complex business environment; positive effects are observed in particular in technology and telecommunications. Second, the firm's strategic orientation is a decisive factor. Firms with a strategic focus on innovation benefit from increased gender diversity in TMTs with regard to performance, and firms with a strong growth orientation benefit with respect to productivity. Third, women's education is relevant: performance effects are positive and stronger for female CEOs with a university degree. Fourth, performance effects depend on the quality of a firm's corporate governance. Gender diversity on the board has a positive impact on the performance of firms that otherwise have weak governance and shareholder rights, as intensified monitoring can enhance firm value. Fifth, a critical mass of women is needed to realize the potential benefits of increased gender diversity. There is evidence for a curvilinear rather than a simple linear relationship between gender diversity and firm performance. Although there appears to be no generally applicable rule for the "right" level of gender diversity in upper echelons, critical mass theory gives an indication: the reported evidence on a U-shaped link implies that a critical mass of about 30 percent women on the board is needed to realize the potential benefits of a gender-diverse board. This finding lends support to statutory gender quotas for supervisory boards at levels between 30 and 40 percent. Against this background, chapter 3 analyzes the acceptance level of the quota in firms in German-speaking Europe. 
It further examines compliance with corporate governance codes' recommendations and with industry objectives for the promotion of female leadership. Areas under investigation also include capital markets' perception of corporate gender diversity initiatives, the major drivers behind such programs and the general perspective on the subject of diversity. For this purpose, an anonymous survey among investor relations professionals in Germany, Switzerland and Austria is conducted, yielding almost 100 analyzable data sets. The findings suggest that staff diversity remains a niche topic for capital markets. Primarily specialized investors and rating agencies with a focus on sustainability, CSR or ESG make inquiries relating to workforce diversity. Accordingly, corporate initiatives for increased gender diversity in executive positions are believed to have no impact on external company valuation by capital market participants. The vast majority of companies do not consider diversity issues under economic aspects but predominantly under aspects of fairness and equality. The most influential external stakeholders driving diversity initiatives are government authorities and regulators, women's and interest associations, and the media. The general acceptance of the quota among investor relations professionals is rather low. Half of the companies have not implemented specific promotion programs for women in leadership, and almost two thirds of all surveyed companies have not set any planning targets. Chapter 4 shows the potential adverse effects of failures in corporate governance using the example of CEO overconfidence. Within the scope of a case study, it traces the development of (male) overconfidence on the part of CEO Hans-Martin Rueter, with fatal consequences for the firm CONERGY AG, eventually leading to its insolvency. A comprehensive content analysis of press reports, official company documents and analyst reports yields several indicators of optimism and overconfidence. 
The content analysis of press reports clearly shows that Rueter is portrayed as optimistic and confident. Furthermore, he is described as charismatic, eloquent and persuasive, while credible and trustworthy at the same time. Media praise both indicates and fosters overconfidence. Moreover, heightened acquisitiveness in conjunction with large amounts of paid goodwill can be observed. The premiums paid are at least partly attributable to valuation errors and hubris on the part of the bidder: Rueter was presumably overly optimistic about potential synergies and overestimated increases in value. In addition, several factors promoted optimism and overconfidence. The state-funded boom of the German and European solar sector in the first decade of the new millennium led to very successful years for CONERGY. It is most likely that Rueter claimed full credit for the organizational successes, and this credit was also attributed to him externally, for instance by research analysts. Such attribution encourages CEO overconfidence and inter-organizational prestige. A very important source of overconfidence, however, is weak board vigilance. The supervisory board has the decisive duty to monitor and control management's actions; it should be aware of the potentially serious risks of extreme managerial overconfidence, and it must exercise control. The supervisory board, with Rueter's uncle as chairman and his brother as a board member, did not effectively constrain the CEO's excessive expansion. Four major effects of this expansion combined to cause CONERGY's existential crisis in 2007 and 2008. First, personnel and infrastructure costs rose rapidly due to the newly founded subsidiaries and poorly targeted acquisitions. Second, the growing complexity at the organizational level as well as at the technology and product level became hardly manageable. 
Third, increasing cash requirements and poor working capital management caused precarious shortfalls in liquidity, nearly resulting in insolvency. Finally, CONERGY failed repeatedly in procurement. CONERGY did not recover from the crisis and filed for insolvency in 2013. Chapter 5 provides an analysis of the wealth effects of layoff decisions by banks. Large-scale layoffs are personnel measures executed proactively or reactively for various reasons. The effect on stock prices, and thus on shareholders' equity, is examined by applying event study methodology to a sample of 210 layoff announcements issued by banks in Western Europe and the United States between 2004 and 2014. The results refute the thesis of a stakeholder conflict in which only shareholders benefit from staff cuts at the expense of employees. Capital markets on the whole respond to layoff announcements with significant negative abnormal returns in event windows of up to eleven days around the announcement date, supporting the declining investment opportunities hypothesis. From the capital markets' perspective, announcements of planned redundancies convey negative information about a bank's current status and its future prospects, including poor investment or growth opportunities and uncertain future cash flows. Banks belong to the financial services industry: their employees are their key source of earnings and their main links to customers. Capital markets appear to recognize and price the risk associated with the loss of human capital; the detriments of mass layoffs hence weigh more heavily than the potential benefits from cost savings. Only dismissals of employees from the investment banking division are viewed positively by capital markets, most likely owing to the associated reduction of risks and the substantial cost savings due to the high salaries in this division. 
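The abnormal-return calculation behind these results can be sketched as follows. This is a minimal market-model implementation; the estimation-window length and other details are assumptions for illustration rather than the chapter's exact design:

```python
import numpy as np

def car(stock, market, event_idx, est_len=120, window=5):
    """Cumulative abnormal return (CAR) around an announcement.

    Abnormal returns follow the market model AR_t = R_t - (alpha + beta*R_m,t),
    with alpha and beta estimated by OLS over `est_len` trading days ending
    just before the event window. `window=5` gives an 11-day event window
    [-5, +5] around the announcement, matching the longest window reported.
    """
    est = slice(event_idx - est_len - window, event_idx - window)
    beta, alpha = np.polyfit(market[est], stock[est], 1)  # slope, intercept
    ev = slice(event_idx - window, event_idx + window + 1)
    abnormal = stock[ev] - (alpha + beta * market[ev])
    return abnormal.sum()
```

Averaging such CARs over the sample of announcements and testing the mean against zero yields significance statements of the kind quoted above.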
Furthermore, the negative share price reaction is less pronounced if the planned layoffs are perceived as a proactive measure aimed at reducing costs or increasing efficiency, but more pronounced if they are perceived as a reaction to adverse market conditions or poor past financial performance. In summary, the results suggest that layoff announcements by banks generally reduce shareholder value; in the short term, the owners of the firm do not benefit from collective dismissals at the expense of employees. Overall, corporate governance and strategic personnel management can substantially affect firm value, for better or worse, as the evidence across the four chapters of this dissertation shows. The dissertation shows under which boundary conditions increased gender diversity on corporate boards and in top management teams can, but does not necessarily, have positive effects on firm value, and outlines associated potential for improving the quality and effectiveness of corporate governance. In contrast, it discusses the risks of weak board vigilance, thereby emphasizing the relevance of corporate governance: failures in monitoring and control by the supervisory board can severely affect firm value. Finally, the dissertation focuses on the personnel measure of layoffs and provides evidence for negative effects on firm value and thus shareholder wealth.
My dissertation consists of three chapters that analyze the behavior of decision makers and their interactions in situations where agents engage in costly effort investment in order to win a prize such as property rights, natural resources, market shares, etc. This framework captures competition between economic agents when property rights are not clearly defined, are imperfectly enforced, or are absent entirely, whether by design or by nature. As property rights are absent in a variety of economic environments, the dissertation admits applications to political economy, litigation, sports, and so on. The first chapter is joint work with my supervisor Luis C. Corchón, published in the Journal of Economic Behavior and Organization. Our paper concentrates on the political economy implications of contests and analyzes a conflict over a resource between two contestants who differ in the effectiveness with which they convert investment into forces. For the cases of complete and asymmetric information, we consider a pre-existing distribution of the resource and analyze the effects of that distribution on sustaining peace. We find that under complete information there is always some distribution that achieves peace; if the contestants are similar, the set of such distributions is larger. Thus, peace may be achieved rather simply. Under asymmetric information we show that even when the informational asymmetry is small, there may be no distribution of the resource that sustains peace in equilibrium. Thus, when there is asymmetric information or misperception of the strength of the other contestant, the cheap-talk game of declaring war can lead to war even if the contestants have the chance of obtaining part of the contested resource without aggression. The second chapter focuses on the possibility of ties in contests in general. 
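The first chapter's complete-information result (that some division of the resource always sustains peace) can be illustrated with a standard ratio-form Tullock contest. The functional form, closed-form efforts and payoffs below are my own illustrative assumptions, since the abstract does not specify the paper's contest technology:

```python
def war_payoffs(a, V=1.0):
    """Equilibrium payoffs of an illustrative asymmetric Tullock contest.

    Player 1 wins with probability a*x1 / (a*x1 + x2), where `a` measures her
    relative effectiveness at turning investment into force. Solving the two
    first-order conditions gives equal efforts x1 = x2 = a*V / (1 + a)**2 and
    the equilibrium (war) payoffs returned below.
    """
    u1 = a**2 * V / (1 + a) ** 2
    u2 = V / (1 + a) ** 2
    return u1, u2


def peace_shares(a, V=1.0):
    """Interval of player 1's resource share s for which the peaceful division
    (s*V, (1-s)*V) weakly dominates going to war for both players."""
    u1, u2 = war_payoffs(a, V)
    return u1 / V, 1.0 - u2 / V
```

For every a > 0 the interval is nonempty, since a**2 + 1 < (1 + a)**2, and it is widest at a = 1: in this stylized setting peace is always attainable under complete information, and the set of peaceful divisions is largest when the contestants are similar, mirroring the chapter's finding.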
The possibility of a tie may arise naturally in a contest, such as an impasse in a military conflict or the imperfect credibility of the prize-granting authority in lobbying. It may also be imposed by design, as in sporting events such as football or chess, and in promotional contests, where denying promotion to all is a granted right of the employer. My paper introduces a new functional form allowing the possibility of a draw in a contest as a function of the expenditures of the contestants, and analyzes the game induced by assigning a non-negative prize to the outcome of a tie. I also build a dataset from four major European football leagues, including information on the market values of the teams and the result of each match over ten seasons. I use these data as a first assessment of the empirical performance of the contest success functions for ties in the literature. Under my functional form, the probability of a tie reaches a maximum whenever the contestants exert equal amounts of effort, regardless of the magnitude of these efforts; it increases when the player exerting less effort raises his effort, and decreases otherwise. In the unique equilibrium, players spend more in the contest with ties than in the contest without ties, even if the prize obtained in case of a tie is equivalent to losing the contest. This implies that players compete harder even though their expected prize is lower than when there is no possibility of a tie. The equilibrium efforts also do not depend on the prize allocated in case of a tie. Thus, a contest designer who wishes to elicit the largest effort should admit the possibility of a draw and assign a zero prize to it. Moreover, if one player is resource-constrained, an increase in the tie prize decreases total expenditure by reducing the unconstrained player's incentives for effort. 
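A hypothetical functional form with these qualitative properties can make them concrete. The specification below is my own illustration, not the chapter's actual contest success function:

```python
def outcome_probs(x1, x2, theta=0.3):
    """Illustrative contest success function with ties.

    The tie probability theta * 4*x1*x2 / (x1 + x2)**2 is maximal (= theta)
    whenever x1 == x2, regardless of the common effort level; it rises when
    the trailing player raises his effort and falls when the leading player
    does. The residual probability is split between the players in Tullock
    ratio form. `theta` in [0, 1] is an assumed design parameter.
    """
    if x1 == 0 and x2 == 0:
        return 1 / 3, 1 / 3, 1 / 3  # convention for zero effort (assumption)
    p_tie = theta * 4 * x1 * x2 / (x1 + x2) ** 2
    p1 = (1 - p_tie) * x1 / (x1 + x2)
    p2 = (1 - p_tie) * x2 / (x1 + x2)
    return p1, p2, p_tie
```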
The empirical application shows that my contest success function yields promising results in determining the likelihood of the various possible outcomes of the contest. The third chapter is also a political economy application of contests and considers a two-player dynamic conflict in which, after an initial stage of arming decisions, players decide whether to stop or continue fighting at each stage of the conflict. The war does not stop until both sides decide to stop. Arms are destructive to rival forces, and the relative amount of forces determines both destructive power and bargaining power over the resource at each stage. In the subgame that starts after the initial arming decisions, war does not start at all if players discount the future heavily or if one side has a very large or a very small advantage in forces. Given that war starts, the smaller the relative advantage in forces, the longer the conflict lasts. The unique subgame perfect equilibrium of the full game is one in which armed peace prevails. ; Doctorate in Economics ; Chair: Carmen Bebiá Baeza; Member: Ángel Hernando; Secretary: Antonio Cabrales
Doctoral thesis in Informatics Engineering, presented to the Departamento de Engenharia Informática da Faculdade de Ciências e Tecnologia da Universidade de Coimbra ; Currently, our society has at its disposal a vast number of services that support the globalized economy in which we live, as well as our way of life. Services such as power distribution, water, gas, transport networks, telecommunications and the Internet, among others, are now an integral part of the lives of citizens and businesses. These services play such a big role in our lives that their importance, and our degree of dependence on them, is only appreciated when they are unavailable. The services on which our way of life so heavily depends are provided by Critical Infrastructures, referred to as "critical" because a failure or breakdown in the quality of the service they provide can have an enormous impact on the society and economy of a country. Beyond natural phenomena and the risks inherent to infrastructure operation, the risks faced by these infrastructures have been continuously increasing as they attract growing interest from hacker and terrorist groups, primarily due to the strong visibility and consequences that may result from even a small successful attack. Among the problems inherent to the operation of Critical Infrastructures, the existence of dependencies and interdependencies among infrastructures stands out. For example, a telecommunications service is inherently dependent on the electricity supply, and banking services are dependent on both telecommunications and energy supply services. 
However, is the power supply service not itself dependent on telecommunications services and information systems? These examples make it apparent that, in addition to the (inter)dependence that may exist, it is also necessary to examine the cascading effects that may arise after the failure of a Critical Infrastructure. Critical Infrastructure security has been the subject of discussion by numerous governments, supported by academia through research efforts in areas such as power distribution and telecommunications. Within the European Union in particular, there is determination to promote projects in these areas, especially projects that foster the exchange of information among infrastructures in the form of warnings. These warnings keep a Critical Infrastructure informed and aware of an increased risk of loss or reduction in the quality of the services it receives. This exchange allows the infrastructure to implement its contingency and recovery plans in a timely manner, minimizing service breaks and consequently the unwanted effect of a cascading failure. The motivation for the work presented in this thesis arose from the identification of the main open issues relating to the exchange and management of risk warnings among Critical Infrastructures. Many existing approaches to security in Critical Infrastructures focus on obtaining risk levels through the use of models of the infrastructure. Although these models provide a solid foundation for risk monitoring, they lack mechanisms for the exchange, management and quality assessment of risk information. This work addresses the problems of trust, reputation and risk alert management within Critical Infrastructures. Accordingly, it proposes mechanisms to manage and measure, at each instant, the degree of confidence assigned to each risk alert received or computed internally. 
Allowing improvement of their accuracy and consequently improving the resilience of Critical Infrastructures when faced with inaccurate or inconsistent risk alerts. This thesis addresses the problem of interdependent Critical Infrastructure security and identifies the main problems related to risk information sharing. In particular, how to allow information sharing in a secure manner, the management of that sharing and how to assess the reliability of such information. This thesis proposes the application of Policy Based Management mechanisms for the management of the risk alert information shared among Critical Infrastructures. In order to improve the information sharing management and the further interpretation of the risk alerts, it is proposed to evaluate Trust and Reputation in order to assess the shared information and also to consider the behaviour of the entities involved. The proposals presented in this thesis are discussed and applied in the context of the European Project MICIE ({Tool for systemic risk analysis and secure mediation of data exchanged across linked CI information infrastructures). In particular with regard to the proposed solution for the management of shared risk alerts, which uses the Policy Based Management paradigm. By incorporating the proposed Trust and Reputation indicators it allows to improve the Critical Infrastructure protection considering the use of untrustworthy or inconsistent information. It is also proposed the adaptation of the presented concepts to the CI Security Model, a model for real time risk analysis evaluation, in which the identified shortcomings are addressed with the integration of the Trust and Reputation approach proposed in this thesis. The results of the proposals evaluation are discussed based on simulation scenarios as well as through real data of a Critical Infrastructure. 
The results indicate that the proposed mechanisms meet their objectives: they increase the confidence that a Critical Infrastructure can place in the information it receives about the services on which it depends, improve the management of that information, and increase the reliability of the results obtained from the risk models applied to the infrastructure. ; FCT - (SFRH BD/35772/2007)
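The alert-confidence mechanism summarised above can be sketched in miniature. The class below is an illustrative assumption, not the thesis's actual algorithm: peer reputations start at a neutral prior, are nudged toward the observed accuracy of past alerts, and incoming risk levels are fused into a confidence-weighted estimate.

```python
class AlertTrustManager:
    """Toy trust/reputation layer for risk alerts exchanged among
    interdependent infrastructures (illustrative sketch only)."""

    def __init__(self, learning_rate=0.2):
        self.reputation = {}              # peer id -> trust score in [0, 1]
        self.learning_rate = learning_rate

    def _rep(self, peer):
        return self.reputation.setdefault(peer, 0.5)  # neutral prior

    def record_outcome(self, peer, alert_level, observed_level):
        """Move a peer's reputation toward the accuracy of its last alert."""
        accuracy = 1.0 - min(abs(alert_level - observed_level), 1.0)
        r = self._rep(peer)
        self.reputation[peer] = r + self.learning_rate * (accuracy - r)

    def fused_risk(self, alerts):
        """Confidence-weighted fusion of {peer: risk level} alerts."""
        total = sum(self._rep(p) for p in alerts)
        if total == 0:
            return None                   # no trusted source available
        return sum(self._rep(p) * level for p, level in alerts.items()) / total
```

An accurate peer's alerts thus gradually dominate the fused risk estimate, which is the resilience property the thesis attributes to its Trust and Reputation indicators.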
Tropische Entwaldung ist eines der dringendsten Umweltprobleme unserer Zeit. Sie ist einer der wichtigsten Treiber des Klimawandels und führt zu hohen Verlusten von Biodiversität und Ökosystemdienstleistungen. Bolivien ist eines der Länder mit den höchsten Entwaldungsraten weltweit. Im Rahmen der weltweiten Bemühungen zur Lösung dieses Problems unter dem REDD-Mechanismus ist es wichtig, konkrete und länderspezifische Handlungsoptionen für eine effektive und effiziente Entwaldungsreduktion zu identifizieren. Eine wichtige Voraussetzung dafür ist ein tiefgehendes Verständnis der komplexen Prozesse, die zu Entwaldung führen. Räumliche Modelle können hierfür wertvolle Informationen liefern, indem sie mögliche Einflussfaktoren in der Vergangenheit auswerten und Szenarien über künftige Entwicklungen generieren. In dieser Arbeit wird die logistische Regression als Schlüsselinstrument für eine systematische Identifikation von Handlungsoptionen angewendet, um die Ausbreitung der wichtigsten waldersetzenden Landnutzungsaktivitäten einzudämmen. Die gesamte Arbeit untersucht das bolivianische Tiefland als Modellregion. In einer Fallstudie wird zunächst die Expansion der mechanisierten Landwirtschaft im Department Santa Cruz untersucht. Der großflächige Soja-Anbau macht diese Region zu einem der Brennpunkte der Entwaldung in Südamerika. Ein logistisches Regressionsmodell über fünf Beobachtungszeitpunkte (1976, 1986, 1992, 2001 und 2005) identifiziert die wichtigsten Einflussfaktoren für die Ausbreitung der mechanisierten Landwirtschaft und analysiert ihre Wirkung über die Zeit. Es zeigt sich, dass die übergeordnete Entwaldungsdynamik über die Zeit stabil blieb, wobei es jedoch eine Tendenz zum Vordringen in die amazonischen, feuchteren Wälder im Norden von Santa Cruz gibt; eine analoge Entwicklung ist auch aus Brasilien bekannt.
Die Modellierungsergebnisse werden genau validiert; dafür werden projizierte mit tatsächlichen Entwaldungsmustern verglichen und versteckte Korrelationen zwischen unabhängigen Variablen aufgedeckt. Die Fallstudie zeigt, dass die logistische Regression ein geeignetes Werkzeug für die weitergehenden Studien ist, unter der Voraussetzung, dass sie von sorgfältigen Evaluierungen und Plausibilitätschecks begleitet wird. In einer Folgeanalyse werden die drei wichtigsten direkten Ursachen für Entwaldung im gesamten bolivianischen Tiefland identifiziert: Mechanisierte Landwirtschaft war für 54% der Entwaldung zwischen 1992 und 2004 verantwortlich, gefolgt von Rinderzucht mit 27% und kleinbäuerlicher Landwirtschaft mit 19%. Mithilfe eines multinomialen Logitmodells werden die Einflussfaktoren dieser drei Landnutzungsformen analysiert. Die Resultate zeigen, dass die Expansion der mechanisierten Landwirtschaft hauptsächlich mit einem guten Zugang zu den Exportmärkten, fruchtbaren Böden und moderaten Niederschlagsbedingungen im Zusammenhang steht. Die Ausbreitung der kleinbäuerlichen Landwirtschaft ist mit einem eher feuchten Klima assoziiert, außerdem mit fruchtbaren Böden und einem guten Zugang zu lokalen Märkten. Die Umwandlung von Wald in Weideland zeigt nur geringe Korrelationen mit Umweltfaktoren und lässt sich am besten mit dem Zugang zu lokalen Märkten erklären. Landnutzungsrestriktionen, etwa Schutzgebiete, scheinen die Expansion von mechanisierter Landwirtschaft zu verhindern, zeigen aber wenig Wirkung in Bezug auf kleinbäuerliche Landwirtschaft und Viehzucht. Eine Analyse von zukünftigen Entwaldungstendenzen zeigt die wahrscheinliche künftige Ausbreitung jeder der drei Landnutzungsformen und identifiziert insbesondere zwei mögliche neue Expansionsgebiete der mechanisierten Landwirtschaft bei Puerto Suarez und San Buenaventura. 
Die quantitativen Modellierungsergebnisse werden ergänzt durch eine qualitative Analyse historischer Prozesse, die die Landnutzungsmuster in verschiedenen Teilen des bolivianischen Tieflands geformt haben. Während die quantitative Analyse die neueren räumlichen Entwaldungsdynamiken gut erklären kann, scheinen die Zeitpunkte von Entwaldungsereignissen vor allem durch historische Faktoren und politische Interventionen bestimmt zu werden. In einer dritten Analyse wird – wieder am Beispiel Boliviens – ein systematischer Ansatz zur Identifikation von Handlungsoptionen entwickelt, wobei die Modellierungsergebnisse ein wichtiges Element bilden. Die Ableitung von Handlungsoptionen basiert auf dem räumlichen und ökonomischen Potenzial landwirtschaftlicher Expansion, auf den erwarteten Kosten einer Entwaldungsreduktion sowie auf den aktuellen rechtlichen und politischen Rahmenbedingungen in Bolivien. Alle Analysen beziehen sich auf die drei direkten Ursachen von Entwaldung; für diese Landnutzungsformen werden spezifische Handlungsoptionen diskutiert. Die Eindämmung der Viehwirtschaft zeigt sich trotz des höheren Entwaldungsanteils der mechanisierten Landwirtschaft als Priorität, da die Umwandlung in Weideland für nahezu alle zugänglichen Wälder eine Bedrohung darstellt und da eine Reduktion zu relativ geringen Kosten möglich sein sollte. Eine schärfere gesetzliche Kontrolle sowie die Stärkung von zuständigen Institutionen auf nationaler und lokaler Ebene sind von höchster Bedeutung für die Reduktion aller drei Entwaldungstypen. Spezifische Maßnahmen sollten eine effizientere Produktion auf bereits genutzten Flächen gegenüber dem Vordringen in bewaldete Gebiete attraktiver machen. In diesem Zusammenhang könnten höhere Gebühren für legale Entwaldung die Ausbreitung von mechanisierter Landwirtschaft und Viehwirtschaft eindämmen. Auch eine Rückführung der Diesel-Subventionen dürfte die Expansion der mechanisierten Landwirtschaft bremsen. 
Solche Maßnahmen sollten durch die Förderung einer höheren räumlichen Produktionseffizienz ergänzt werden, etwa durch verbesserten Zugang zu Dünger oder technische Beratung und Unterstützung für höhere Bestockungsdichten. Die Ausbreitung der kleinbäuerlichen Landwirtschaft scheint aufgrund der hohen Zahl von Akteuren schwerer kontrollierbar zu sein; wichtig wäre es aber, das Eindringen in Schutzgebiete zu verhindern und effizientere und nachhaltigere Anbauformen sowie auch Arbeitsplätze außerhalb der Landwirtschaft zu fördern. Die Entwaldungsmodellierung zeigt sich als wichtiges analytisches Werkzeug zum Verständnis der zugrunde liegenden Prozesse; sie kann wichtige Informationen zur Ableitung von Handlungsoptionen liefern. Zukünftige Forschung könnte die Möglichkeiten von komplexeren Szenarien durch die Integration dynamischer Elemente ausloten; entsprechende Möglichkeiten sind in bestehenden Modellierungsprogrammen angelegt. Im Ausblick dieser Arbeit wird außerdem die Technik des Kartierens von Opportunitätskosten des Waldschutzes vorgestellt: Sie ermöglicht Szenarien auf der Basis von nicht-räumlichen Faktoren, etwa von Preisen landwirtschaftlicher Produkte. Für die praktische Anwendung von Modellen scheint es allerdings wichtig zu sein, eine hohe Transparenz zu wahren, um regelmäßige Plausibilitätschecks zu ermöglichen. Es besteht weiterer Forschungsbedarf zur Identifikation geeigneter Handlungsoptionen für eine effektive und effiziente Entwaldungsreduktion. In der Diskussion um REDD scheint die Bekämpfung der Entwaldung durch industrielle Landwirtschaft und große Rinderfarmen nur eine untergeordnete Rolle zu spielen. Dies könnte im Vorherrschen traditioneller Naturschutzkonzepte begründet sein sowie in einem ungerechtfertigten Fokus auf Kleinbauern. 
Auch der Schwerpunkt auf marktbasierten Lösungsansätzen scheint fragwürdig; nach den Ergebnissen dieser Arbeit könnte die direkte Unterstützung der Regierungen von tropischen Ländern bei der Umsetzung der erfolgsversprechendsten Maßnahmen zielführender sein. Des Weiteren scheint es wichtig, bei existierenden entwaldungsrelevanten globalen Märkten anzusetzen, etwa beim Handel mit Agrarrohstoffen wie Soja, Rindfleisch, Palmöl oder Tropenholz aus Kahlschlägen. ; Tropical deforestation represents one of the most urgent environmental problems of our time; it contributes heavily to climate change, causes immense losses of biodiversity and endangers important environmental services. Bolivia is among the countries with the highest deforestation rates in the world. In light of the current international efforts to reduce deforestation within the framework of REDD, effective and efficient country-specific policy options need to be identified to make progress on the ground. A prerequisite for the prioritization of such policy options is a detailed understanding of the complex processes driving deforestation. Spatial models can contribute valuable information to this end. They can provide quantitative evaluations of hypothesized drivers of deforestation in the past and also generate scenarios that represent probable developments in the future. This study applies spatially explicit regression models as a key instrument for the systematic identification of specific policy options suitable for mitigating the expansion of the main forest-depleting land uses. The entire study is based on Bolivia as a model country. The expansion of mechanized agriculture in the department of Santa Cruz is analyzed as a first case study. Soybean production has converted this area into one of the hotspots of deforestation in the entire Amazon. 
A logistic regression model covering five time steps (1976, 1986, 1992, 2001 and 2005) identifies the main determinants of the expansion of mechanized agriculture and explores the development of their effects over time. It shows that – while deforestation dynamics have been generally stable over time – there is a tendency of increased penetration into the more humid Amazonian forests in northern Santa Cruz, a development that is also known from Brazil. The model's results are thoroughly validated, including a comparison between projected and observed deforestation patterns and the investigation of hidden correlations between independent variables. The case study shows that logistic regression is a suitable tool for the purposes of the entire study, provided that careful evaluations and plausibility checks of the model outputs are conducted. In a subsequent analysis covering the entire Bolivian lowlands, three main proximate causes of deforestation are identified: mechanized agriculture was responsible for 54% of deforestation between 1992 and 2004, followed by cattle ranching with 27 %, and small-scale agriculture with 19%. A multinomial logit model is applied to analyze the determinants of each of these proximate causes of deforestation. The results suggest that the expansion of mechanized agriculture occurs mainly in response to good access to export markets, fertile soil and intermediate rainfall conditions. Increases in small-scale agriculture are mainly associated with a humid climate, fertile soil and proximity to local markets. Forest conversion into pastures for cattle ranching occurs mostly irrespective of environmental determinants and can mainly be explained by access to local markets. Land use restrictions, such as protected areas, seem to prevent the expansion of mechanized agriculture but have little impact on the expansion of small-scale agriculture and cattle ranching. 
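As a toy illustration of the modelling approach described above, the sketch below fits a minimal logistic regression by gradient ascent on synthetic data. The single predictor (a hypothetical "distance to market") and all coefficients are illustrative assumptions, not the study's variables or results.

```python
import math
import random

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Fit P(y=1|x) = sigmoid(b0 + b1*x) by batch gradient ascent
    on the log-likelihood."""
    b0, b1 = 0.0, 0.0
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (y - p)          # gradient w.r.t. intercept
            g1 += (y - p) * x      # gradient w.r.t. slope
        b0 += lr * g0 / len(xs)
        b1 += lr * g1 / len(xs)
    return b0, b1

random.seed(0)
# Synthetic forest cells: conversion is more likely close to the market
# (true model sigmoid(2 - 0.5*distance), chosen arbitrarily).
xs = [random.uniform(0, 10) for _ in range(400)]
ys = [1 if random.random() < 1 / (1 + math.exp(-(2 - 0.5 * x))) else 0
      for x in xs]
b0, b1 = fit_logistic(xs, ys)
```

The fitted slope `b1` comes out negative, recovering the assumed decline of conversion probability with distance; the multinomial logit used for the three land-use categories generalises this binary case to several outcome classes.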
An analysis of future deforestation trends reveals possible hotspots of future expansion for each proximate cause and specifically highlights the possible opening of new frontiers of deforestation due to mechanized agriculture in the areas of Puerto Suarez and San Buenaventura. The quantitative insights of the model are substantiated with a qualitative analysis of historical processes that have shaped land use patterns in different zones of the Bolivian lowlands to date. Whereas the quantitative analysis effectively elucidates the spatial patterns of recent agricultural expansion, the interpretation of long-term historic drivers reveals that the timing and quantity of forest conversion are often triggered by political interventions and historical legacies. In a third analysis, a systematic approach is developed in order to prioritize policy options for effective and efficient deforestation reduction, making use of the model outputs, among other things. Again, Bolivia is taken as a model country. The derivation of policy options is based on analyses of the spatial and economic potential of agricultural expansion, the expected costs of deforestation reduction, and the current legal and political framework in Bolivia. All analyses focus on the three proximate causes of deforestation; and specific policy options are discussed for these types of land use. It is concluded that, although mechanized agriculture caused more than half of all past deforestation in lowland Bolivia, cattle ranching activities should be targeted as a priority since their expansion threatens forests in many different locations and improvements could be achieved at relatively low costs. Enforcing legislation while strengthening institutions on both national and local levels is of utmost importance for the reduction of the expansion of all three land use categories. 
Specific measures should aim at giving an advantage to more efficient production on existing farms over the expansion into forested areas. In this context, a higher legal fee for deforestation has the potential to mitigate forest conversion due to mechanized agriculture and cattle ranching, while a removal of subsidies for agro-diesel may specifically reduce the expansion of mechanized agriculture. Such measures could be complemented by support for higher production efficiency, such as better access to fertilizer and techniques allowing increased cattle stocking densities. The expansion of small-scale agriculture seems to be difficult to control, due to the large number of agents; measures should focus on mitigating the encroachment into areas with land use restrictions, fostering more sustainable and space-efficient agricultural practices, as well as off-farm employment. Models of deforestation are found to be important analytical tools for a better understanding of the processes leading to deforestation; they can render important information for the development of policy options to combat deforestation. Further investigations may explore the possibilities of building more complex scenarios by adding dynamic elements that are contained in some existing land use modeling frameworks. In the outlook of this study, the mapping of opportunity costs of forest conservation is briefly introduced as a promising way of generating scenarios based on non-spatial factors such as prices of agricultural goods. It is, however, concluded that for practical applications it seems reasonable to keep the transparency of models as high as possible in order to allow for constant plausibility checks of the model outputs. The study concludes that more research is needed to identify and evaluate suitable policy options to reduce deforestation on the ground.
In the discussion on REDD, little attention seems to be given to the development of mitigation strategies for large forest clearings driven by corporate agents and large cattle farms. This may be due to a certain prevalence of traditional approaches to biodiversity conservation within selected conservation areas and an unjustified focus on smallholders. The strong focus on market-based solutions may also be questionable; according to this study, it would be more appropriate to directly support the governments of tropical countries in implementing the most promising measures. It may also be important to target existing markets that drive deforestation, i.e., global markets for beef, soybean, palm oil and tropical timber stemming from clear-cuts.
ZUSAMMENFASSUNG: DEUTSCHES REICH IM IMPERIALEN KONTEXT DER WESTLICHEN KONSULARGERICHTSBARKEIT IN JAPAN UND KOREA. Die westliche Extraterritorialität in Ostasien ist heute ein Symbol asymmetrischer Machtverhältnisse und der Einschränkung der Souveränität Chinas, Japans und Koreas. Die damalige Begründung der ausländischen Mächte, sie benötigten ein rationales und humaneres Rechtssystem, welches in asiatischen Ländern noch nicht existiere, wird inzwischen häufig als reine Legitimationsstrategie für eine imperialistische Machtausübung verworfen. Wissenschaftler zeichneten in den letzten Jahren ein differenzierteres Bild einer komplexeren Rechtsordnung, in der unterschiedliche Interessen vertreten waren. Die internationale Forschung hat sich überwiegend mit der Extraterritorialität der angelsächsischen Länder, vor allem Großbritanniens, beschäftigt, die das größte System westlicher Konsulatsgerichte unterhielten. Die Konsulatsgerichte des deutschen Kaiserreichs in Japan und Korea waren bisher noch nicht beachtet worden. Anhand diplomatischer Quellen und Presseberichte rekonstruiert dieser Artikel die Praktiken der deutschen Konsulatsgerichte. Die Studie untersucht drei chronologisch zusammenhängende Themen: den Aufbau eines deutschen Systems der Konsulatsgerichte durch internationale bilaterale Verträge und deutsche Gesetze, die Funktionsweise der deutschen Konsulatsgerichte in Japan und Korea und die japanischen Verhandlungen, die die deutschen extraterritorialen Privilegien im eigenen Land und in Korea beendeten. Das deutsche Kaiserreich war in der extraterritorialen Gerichtsbarkeit in Japan und Korea eine der wichtigen Mächte, welche auch zeitweise die Gerichtsbarkeit für andere Staaten wie die Schweiz, Schweden-Norwegen oder Dänemark ausübte. Es bewahrte seine Rechte und Privilegien in Kooperation mit und manchmal in Konkurrenz zu den anderen Westmächten.
Auch wenn Großbritannien und die Vereinigten Staaten bei der Anzahl der geschätzten und dokumentierten Gerichtsfälle weit vorne lagen, kann man von einer geschätzten Gesamtzahl von über 2.000 deutschen Konsulargerichtsfällen ausgehen. In Korea waren dies hingegen weniger als 30 im Zeitraum der deutschen Extraterritorialität. Anhand von Statistiken, diplomatischen Quellen und Presseberichten scheint es, als ob dieses Rechtswesen trotz nationaler und sprachlicher Barrieren einigermaßen effektiv funktionierte. Ein japanischer Arbeiter konnte bei Misshandlung durch seinen deutschen Arbeitgeber genauso eine Kompensation erhalten wie ein japanisches Kindermädchen, das eine vertraglich vereinbarte Schiffsreise aus Europa zurück in sein Heimatland bezahlt bekam. Trotzdem könnte es Ungerechtigkeiten gegeben haben, wenn beispielsweise das Wort eines "christlichen Ehrenmannes", der vereidigt werden konnte, mehr zählte als dasjenige eines "Heiden", insbesondere wenn es sich um eine Frau handelte. Allerdings gab es auch vor japanischen Gerichten Grenzen für Ausländer, wenn beispielsweise die Klage eines Deutschen mit der Begründung abgewiesen wurde, die verklagte Regierungsbehörde weigere sich, mit dem Gericht zu kommunizieren. Unterschiedliche Rechtsnormen und Gesetze wurden in Yokohama für vergleichbare Vergehen angewandt. Zwei Männer, die beim Diebstahl der Zeitung Japan Gazette erwischt wurden, erhielten von unterschiedlichen Gerichten jeweils Gefängnisstrafen von 7 Tagen (Deutsches Konsulargericht) und 1 Jahr (lokales japanisches Gericht) nach den Strafgesetzen ihrer jeweiligen Heimatstaaten. Man würde vermuten, dass ein deutsches Konsulatsgericht deutsches Recht angewandt hätte, jedoch scheint es eher, als ob das Gericht einem ungeschriebenen Rechtsbrauch gefolgt wäre, welchen man als "Vertragshafengesetzesbrauch" bezeichnen könnte, und welcher weder deutsch noch japanisch geprägt war.
Das deutsche Konsularrecht von 1879 erlaubte in Handels- und Kommerzangelegenheiten, das übliche lokale Recht anzuwenden, was insofern ironisch ist, als die Westmächte forderten, dieses zu ändern. Deutsche Konsulatsrichter und auch diejenigen anderer Staaten beachteten die juristischen Implikationen ihrer eigenen Handlungen. Bei einem Präzedenzfall zum Markenschutz des Flaschenetiketts der Flensburger Brauerei fragte der zuständige Richter nach den Gesetzen und Vorschriften der jeweiligen Länder der Prozessbeteiligten und Japans, um mit einem Kompromissvorschlag den Fall zu beenden. Manchmal entschied sich ein Gericht für die Vertagung einer Entscheidung aus Respekt vor einem anhängigen Verfahren im gleichen Streitfall vor dem Gericht einer anderen Nation. Auch wenn die vorliegende Studie sich vor allem mit der Institution des deutschen Konsulatsgerichts in Yokohama beschäftigt, so wird, wenn man die Aktionen "deutscher Reichsbürger" als Kläger und Zeugen in anderen Gerichten verfolgt, ein transkulturelles Netzwerk von Sozial- und Wirtschaftsbeziehungen sichtbar, welches nationale und institutionelle Grenzen überschritt. Im Gegensatz zu dem weit verbreiteten Bild, welches auch durch den Begriff der "Ungleichen Verträge" propagiert wurde, handelte es sich bei der Mehrheit der Gerichtsfälle, sowohl in Zivil- als auch in Strafsachen, um Streitfälle innerhalb der westlichen Fremdenkolonie, und man könnte durchaus argumentieren, dass die Konsulatsgerichte die Ausbreitung des westlichen Imperialismus durch eine Art Selbstregulierung der Ausländer gebremst haben. Die westliche Extraterritorialität verschwand, als Japan die ausländischen Mächte von der Effektivität seiner Rechtsreformen überzeugte, welche sich an westlichen Gesetzen und Prozeduren ausrichteten.
Auch schon vorher hatte japanischer Patriotismus in der öffentlichen Meinung und durch politische Handlungen dazu geführt, die Auswirkungen der Extraterritorialität auf die Konsulargerichtsbarkeit im engeren Sinne zu beschränken. Die Anwendung japanischer Verwaltungsvorschriften wie Quarantäneregeln, Jagd-, Zoll- oder Pressevorschriften war ein kontroverses öffentliches Thema im Verlauf der Vertragsrevisionsverhandlungen. Auch wenn das Deutsche Reich an einigen dieser Zwischenfälle beteiligt war, gehörte es zu den ersten westlichen Mächten, die zu einer Aufgabe ihrer rechtlichen Privilegien in Japan bereit waren. Eine Serie von diplomatischen Konferenzen und bilateralen Konsultationen brachte schließlich alle westlichen Staaten dazu, einem Ende der Konsulatsgerichtsbarkeit zum Juli 1899 zuzustimmen. Die Extraterritorialität war in Korea noch breiter definiert als in Japan, sodass sowohl Landbesitz von Ausländern vor koreanischem Zugriff geschützt war als auch kaum Reisebeschränkungen im Inland existierten. Wegen der verschwindend geringen Anzahl Deutscher in Korea war diese Gruppe eine unbedeutende Größe im Vergleich zu den ins Land strömenden Japanern. Allerdings gehörte ein Deutscher zu den größten westlichen Grundbesitzern Koreas, und deutsche Diplomaten befürchteten wirtschaftliche Nachteile durch die Änderung im Rechtsstatus seiner Ländereien. Als Japan einseitig die internationalen Verträge Koreas bei der Annexion des Landes im August 1910 kündigte, bezweifelte der deutsche Konsul die Rechtmäßigkeit dieser Handlung und bestand ohne Erfolg darauf, dass die Verträge weiterhin bestünden. In einem langwierigen Verhandlungsprozess unter deutscher Koordination wurde schließlich ein Abkommen mit Japan unterzeichnet, welches die Konsulargerichtsbarkeit im April 1913 auch formal beendete.
Nachdem die westlichen Mächte ihre entsprechenden Rechte in Japan aufgegeben hatten, konnten sie nun nicht mehr argumentieren, dass sie den japanischen Gesetzen nicht vertrauten, wenn diese auf Korea angewandt würden, und ihnen blieb nichts anderes übrig, als über den Mangel an kompetenten Richtern und modernen Gefängnissen in Korea zu klagen. Es ist unstrittig, dass die Existenz der westlichen Extraterritorialität den rechtlichen Modernisierungsprozess in Japan beschleunigte. Die westlichen Mächte hatten ja genau diese Veränderung zur Vorbedingung einer Revision der "Ungleichen Verträge" gemacht, und die Beteiligung der ausländischen Mächte an japanischen Gesetzgebungsverfahren war einer der Streitpunkte in den späten 1880er Jahren. Insofern hatte Japan einen Anreiz, sein Rechtssystem formal anzupassen. Die entsprechende Transfergeschichte des kontinentaleuropäischen Rechts nach Japan ist auch schon in vielen Dimensionen untersucht und beschrieben worden. Das Beispiel sowohl der westlichen Gerichte und ihrer Funktionsweise als auch der Anwendung westlicher Gesetze in Yokohama oder in Hyogo-Osaka scheint hingegen keinen Modellcharakter für die weitere Entwicklung des Rechts in Japan gehabt zu haben. Weder beeinflusste es den Kodifikationsprozess spezifischer Gesetzeswerke noch die lokale Rechtsprechung. So wichtig das Beispiel in der Ferne war, so wenig zählte die gewonnene Erfahrung durch die praktische Interaktion vor Ort. Die Konsulargerichtsbarkeit hat im japanischen Recht so geringe Spuren hinterlassen, dass sie heute völlig in Vergessenheit geraten ist. ; SUMMARY: Western extraterritoriality in East Asia has long been considered a symbol of asymmetrical power relations and criticized as an infringement of the sovereignty of China, Japan and Korea. By contrast, imperial powers justified their need to maintain "the rule of law" in an uncivilized East Asian region lacking rational and humane ways of justice.
Recent scholarship paints a more balanced and nuanced picture of a system that was more complex, with multiple stakeholders. Most international research, however, has focused on the interaction of the major Anglo-Saxon states, especially Great Britain, with China and Japan. Little attention has so far been paid to Imperial Germany and its system of consular jurisdiction in Japan and Korea. This article is the first study of its kind and therefore relies heavily on unpublished primary sources from diplomatic archives and on late nineteenth-century press reports. Its aim is to recreate the German consular court experience and contextualize it within the broader framework of Western extraterritoriality and of German legal history. It narrates three interrelated chronological stories: how international bilateral treaties and German laws formed the backbone of the system, how the German consular courts worked in practice, and finally how Japan terminated the German and Western consular court system in her own country and in Korea. Imperial Germany was one of the major players in operating extraterritorial jurisdiction in Japan and Korea. It guarded its rights and privileges with caution, sometimes in cooperation and sometimes in competition with the other European powers. Although Germany lagged behind the UK and the US in the total number of judgments, especially due to fewer criminal cases, it can be estimated that the German consular courts in Japan conducted about 2,000 trials, whereas their counterpart in Korea decided fewer than thirty cases over the years of extraterritoriality. As seen through the statistics and extant records of decisions, it appears to have been a reasonably well-functioning system of justice administration across national and language barriers. A Japanese coolie or a local maid could successfully sue their German employers for damages or could enforce contracts.
Nevertheless, elements of an "unfair system" may still have existed in terms of the willingness to admit oral evidence when the counterpart was not "a Christian gentleman". Conversely a Japanese Court rejected a case by a German plaintiff merely on the formal grounds that the Japanese government refused to communicate with its court. Different laws applied to similar crimes when committed in Yokohama. Two individuals who had cooperated in stealing newspapers were sentenced by different national courts to jail sentences ranging between 7 days (German) and one year (Japanese) according to the criminal codes of their respective countries that were then in force. Theoretically expected to apply German law, in many of the trade and commercial affairs the German consular court followed what one could call "treaty port customary law", which was neither strictly German nor Japanese. In fact the German law of consular jurisdiction of 1879 explicitly permitted such a use of local customary law in commercial affairs. One does see consular court judges, Germans and others, considering the wider community implications of their actions and asking questions about the laws and regulations of countries of the parties and Japan and finally settling the case by proposing a compromise. Sometimes a court would simply defer a decision altogether in respect to law suits in other national courts within the same litigation complex of suits and countersuits. Although the scope of this study was mostly confined to the German consular court as an institution of justice, tracing some of the cases involving German speakers in other courts as plaintiffs and witnesses shows an intricate web of transcultural social and economic relations across national and institutional boundaries. 
Contrary to the popular image evoked by the term "unequal treaties", the majority of law suits, civil as well as criminal, in both the German and other consular courts stayed within the parameters of the Western community, and this study argues that they may have contained the further spread of Western imperialism through legal self-regulation. Extraterritoriality receded once Japan had convinced foreign powers of the reliability of her new justice system modeled after Western laws and procedures. Even before then, nationalist fervor, expressed through public opinion and administrative action, helped confine the "Midas touch of extraterritoriality", preventing it from stretching beyond the legal defense of individuals in the consular courts of their own nations. The application of Japanese administrative laws such as quarantine, firearm, customs and press regulations became contested ground in the process leading up to the revision of the unequal treaties. Although Imperial Germany was involved in some of these controversial incidents, together with the United States she was one of the first Western powers willing to give up her extraterritorial privileges in Japan, to the chagrin of the British diplomats. In a series of diplomatic conferences and consultations all Western powers agreed to a settlement that ended the consular court system in Japan by July 1899. In contrast to Japan, the initial unequal treaties with Korea had extended the scope of extraterritoriality to land acquired by foreigners and gave foreigners broad travel permissions in the country at large. Due to the small number of German residents in Korea these treaty stipulations were not a core issue, except that a German subject was one of the largest foreign landowners benefitting from extraterritorial stipulations.
When Japan unilaterally cancelled Korea's international treaties upon the annexation of Korea in August 1910, the German Consul to Korea questioned the legality of the Japanese action and insisted on the continuation of Western extraterritorial rights. In a process of multilateral negotiations Japan then addressed the legal and commercial concerns of the Western diplomats, and by April 1913 signed an agreement mutually ending Western consular jurisdiction in Korea. After all Western nations had already agreed that Japanese laws were in principle on a par with their own, it was difficult on this ground to maintain consular court privileges in Korea and to oppose the extension of Japanese laws to Korea.
Abstract: Introduction: The fundamental motive of this thesis is to identify the main catalysts of the 2007/2008 financial crisis and the rationale of their unique interaction. The analysis is laid out along three primary avenues: the Structured Finance Instruments involved, the parties concerned, and the channels that accelerated the spread and ultimately increased the severity of the crunch. Chapter One gives an overview of the Structured Finance Instruments prevalent in the financial sector; the main instruments that contributed to the triggering and propagation of the financial turmoil are demonstrated and explained. Chapter Two examines the precursors of the crisis, which began in the subprime sector of the United States, and further illustrates the vital triggers that burst the subprime bubble. Chapter Three examines the varying contributions of these instruments to the rapid propagation and displays the connecting link between the Structured Finance Instruments and the very source of the turmoil. Chapter Four illustrates in depth the involvement of the three vital players (the Rating Agencies, the Banks and the Regulatory/Supervisory Institutions) and their effect on the propagation of the crisis. Chapter Five analyses the two main regulatory catalysts that contributed to the crunch through pro-cyclicality, examining the role and consequences of mark-to-market Fair Value Accounting and Minimum Capital Adequacy Requirements (the Basel II Accord).
Chapter Six presents the post-crisis status quo, with recommendations and remedies for relevant counter-cyclical mechanisms.

Table of Contents:
Abbreviations
List of Figures
Glossary
Introduction and Channelling of the Research
1. Characteristics of Credit Risk Transfer Instruments
1.1 Securitization
1.1.1 Mortgage Backed Securities (MBS)
1.1.2 Asset Backed Commercial Papers (ABCP)
1.1.3 Cash Flow Collateralised Debt Obligations (CDOs)
1.2 Credit Derivatives and Hybrid Products
1.2.1 Single Name CDSs
1.2.2 Synthetic CDOs
1.3 Re-Securitization
1.3.1 ABS CDOs
1.3.2 CDO²
2. The Precursors and Triggers of the Crisis
2.1 Soft Macroeconomic Environment in the United States and the Vulnerability of Banks
2.1.1 The Soft Macroeconomic Environment
2.1.2 The Vulnerability of Banks
2.2 The Augmentation of Subprime Mortgages
2.3 Increased Significance of the 'Originate to Distribute' Model
2.4 Surging Default Rates in the Subprime Mortgage Sector
3. The Channelling of Structured Finance into the Financial Crisis
3.1 Step I: Repricing of Risk and Credit Market Spillovers (Feb – July 2007)
3.2 Step II: The Liquidity Squeeze (Aug 2007)
3.3 Step III: The Rapid Deleveraging Process (Sep 2007 – Dec 2007)
3.4 Step IV: Dysfunctional Credit Markets and Further Deleveraging (Jan – May 2008)
4. The Vital Players
4.1 The Role of Supervision by Regulatory Institutions
4.1.1 Greenspan and His Failed Motives
4.1.2 The Dispersion of Financial Regulation among Multiple Institutions
4.1.3 The Gloomy Banking System
4.2 The Role of Banks
4.2.1 Unforeseen Consequences of Basel II
4.2.2 The Short-Term Horizons of Managers' Incentive Schemes
4.2.3 Failures Associated with Structured Finance
4.3 The Role of the Rating Agencies
4.3.1 Conflict of Interest between Rating Agencies and Issuers
4.3.2 Absence in Cross-Checking the Origin of the Loans
4.3.3 The Dependency on AAA Credit Ratings
4.3.4 Misleading Risk Interpretations
5. Pro-Cyclicality: The Exacerbation of the Crunch
5.1 Issues of Capital Adequacy Requirements and Accounting Disclosure
5.1.1 The Basel II Accord and the Curtailment in Lending Activity
5.1.1.1 The Concept of Basel II
5.1.1.2 The Drawbacks Associated with Basel II
5.1.1.3 Pro-cyclicality and Basel II
5.1.2 Fair Value Accounting and Plummeting Asset Values
5.1.2.1 The Concept of Fair Value Accounting
5.1.2.2 Loopholes, Advances in Accounting Standards and their Consequences
5.1.2.3 Pro-cyclicality and Fair Value Accounting
5.1.2.4 Discussion
5.1.3 Summary
5.2 Case Study: Northern Rock
5.2.1 The Role of Securitization
5.2.2 The Downfall
5.2.3 Pro-cyclicality and Leverage
5.2.4 Summary
6. A Post-Crisis Outlook and Policy Implications
6.1 Pro-active Monetary Policy
6.2 Fair Value Accounting: Current Value Measurement Method
6.3 An Alternative Approach to the Market-Based Basel II Models
6.3.1 Less Reliance on Risk-Sensitive Market-Based Models
6.3.2 The Imposition of a Liquidity Regulation
Conclusion
Appendix
Bibliography

Text Sample: Chapter 4, The Vital Players: In this chapter, the author portrays the financial crisis of 2007-2008 by means of an analogy to a house caught in a devastating blaze. The parts and materials of this building are compared to various factors in the economy in order to better illustrate their roles in the event. Imagine a house of hay, built with haphazard and disjointed carelessness on muddy ground, standing in the midst of a field surrounded by forest. This dwelling of straw is symbolic of structured finance instruments, and the muddy ground on which the house is built is symbolic of the lack of supervision. The house is surrounded by and tangled with prodigious amounts of wiring, most of which may be compared with the housing and real estate sector.
The subprime mortgages and the large losses incurred in the US real estate sector were the origin of the crisis, and the housing bust was the short circuit that sparked off the blaze. The very material used to construct the house, hay, being so easily ignited by a mere spark, enabled the crackling glow to burst into a blaze in rapid succession. Now, a house of hay would normally be overwhelmed in flames and reduced to ashes within a matter of minutes. However, imagine that at the top of this straw structure were reserves of crude oil. The heat from the flames below, having melted the container in which the liquid was held, set free a stream of oil which not only fuelled the fire but also aggravated it and kept the straw burning longer than predicted. Now imagine that the design of the straw building was such that any liquid poured from the top, instead of being channelled into a single pipe, was conducted through a multitude of channels that spread the liquid over all surfaces before reaching the foundations. The oil that kept this particular fire blazing longer is the sudden rush for liquidity and the rapid deleveraging. In addition, regulatory measures such as mark-to-market fair value accounting and the minimum capital adequacy requirements of the Basel II Accord (the oil), intensified by pro-cyclicality (the multitude of channels), facilitated the speed and propensity of the blaze. As a result the flames spread beyond the house, through the field, and eventually caused a forest fire. Thus the 'Subprime Crisis of 2007', owing to its global repercussions, was renamed 'the Financial Crisis of 2007-2008'. Chapter 4 gives a detailed description of the characteristics that fuelled the heat of the blaze, illustrated via the three main players involved: the rating agencies, the regulatory and supervisory institutions, and the banks.
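The "oil" in this analogy — mark-to-market losses feeding further deleveraging — can be made concrete with a toy calculation (not from the thesis; all figures are hypothetical). A bank that targets a fixed leverage ratio must sell assets after a write-down, and if those sales depress prices, each round of selling triggers the next:

```python
def fire_sale_spiral(assets, equity, target_lev, shock, impact, max_rounds=20):
    """Iterate rounds of mark-to-market losses and leverage-targeted sales.

    shock: initial fractional fall in asset values (hits equity one-for-one).
    impact: hypothetical mark-to-market loss per unit of assets sold.
    Returns the balance-sheet path [(assets, equity), ...] round by round.
    """
    loss = assets * shock              # initial write-down
    assets -= loss
    equity -= loss
    path = [(assets, equity)]
    for _ in range(max_rounds):
        excess = assets - target_lev * equity
        if equity <= 0 or excess <= 1e-9:
            break                      # wiped out, or back at target leverage
        assets -= excess               # sell assets to restore the target
        loss = impact * excess         # sales depress prices, hitting the book
        assets -= loss
        equity -= loss
        path.append((assets, equity))
    return path

# Hypothetical bank: 100 of assets, 10 of equity, leverage target of 10.
path = fire_sale_spiral(assets=100.0, equity=10.0, target_lev=10.0,
                        shock=0.02, impact=0.05)
```

With these hypothetical numbers, a 2 percent initial write-down ends with the balance sheet shrunk by roughly a third (from 100 to about 64): a small spark, amplified through the pro-cyclical channel the analogy describes.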
Chapter 5 continues the analysis with the pro-cyclicality inherent in the financial system, which was largely overlooked by regulators. Using the pro-cyclicality associated with fair value accounting and the minimum capital requirements of Basel II, the speed and propensity of the spread is demonstrated. Finally, Chapter 6 concludes with policy implications that need to be adhered to in order to avoid a recurrence of pro-cyclicality and unintended consequences within the financial system. The Role of Supervision by Regulatory Institutions: The financial regulators played a large part in the propagation channel. Alan Greenspan's administration believed that markets could police themselves better without government intervention, and thus hardly intervened during the subprime bubble. The global dispersion of regulatory institutions and the overlapping of their tasks disrupted the supervisory process and allowed complications to grow. Greenspan and his failed motives: 'The financial crisis was caused by lack of supervision of the financial sector, rather than bankers breaking the rules.' (Adam Smith Institute). The US Federal Reserve (Fed) not only conducts monetary policy but also plays a significant role in regulating the banking infrastructure of the United States. The highly influential Fed has greater capacity than any other regulator to implement policies irrespective of the opinions of the President and Congress. Alan Greenspan, head of the Federal Reserve during the time the subprime loans were being originated, believed that a well-functioning market with appropriate incentives could police itself more effectively than government bureaucrats. He believed that the mortgage lending industry qualified in this category, and that lenders ultimately had to answer to self-interested global investors, who had no intention of originating bad loans.
Furthermore, the Basel II capital reserve requirements were implemented during this era, when rules were heavily reliant on market forces. The amount of capital that banks were required to hold was determined by the market value of their holdings, allowing market forces to determine what was appropriate. On three occasions, concerns were raised about distortions in the US mortgage industry. Firstly, a Fed Governor proposed that predatory lending needed screening. Secondly, the House and the Senate of the United States passed the Home Ownership and Equity Protection Act of 1994, which gave the Fed the authority to curb unfair or deceptive lending practices. Thirdly, the US Congress pushed the Fed to use the federal trade act to set down rules on deceptive lending practices. All three proposals were rejected by Mr. Greenspan, and by the time he finally realised that the industry was spiraling out of control, the damage was irreparable. According to Mr. Greenspan, the principal task of the regulators during the last decade was to supervise the existing principles and to ensure that the internal risk systems within the institutions were functional. Responsibility was then handed to senior management to judge the prudence of overall risk levels. This was where the fundamental disputes arose. The Financial Services Authority in the UK and the Commodity Futures Trading Commission in the United States competed with one another to grant more liberal regimes, with the motive of securing a comparative advantage for their economies. This in turn created greater profit-maximizing opportunities for financial institutions. Mr. Greenspan has been accused of adopting a profit-maximization approach among financial institutions and encouraging risk management structures that served self-interest. As Mr. Greenspan admitted, the delegation of internal risk management was a mistake.
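The market-value-based capital rule described above can be sketched in a few lines. This is a simplified illustration, not the actual Basel II formulas: the positions, risk weights and figures are hypothetical, though the 8 percent minimum ratio of capital to risk-weighted assets is the headline Basel requirement.

```python
MIN_RATIO = 0.08   # Basel headline minimum: capital / risk-weighted assets

def capital_ratio(capital, positions):
    """Capital adequacy ratio = capital / risk-weighted assets.

    positions: list of (market_value, risk_weight) pairs; the weights
    below are hypothetical, loosely in the spirit of the standardised
    approach.
    """
    rwa = sum(value * weight for value, weight in positions)
    return capital / rwa

book = [(500.0, 0.0),   # government bonds: zero risk weight
        (300.0, 0.5),   # residential mortgages
        (200.0, 1.0)]   # corporate loans
before = capital_ratio(30.0, book)          # 30 / 350 ≈ 0.086: compliant

# A 5% mark-to-market fall in the mortgage book hits capital one-for-one
# while shrinking risk-weighted assets far more slowly.
loss = 300.0 * 0.05
marked = [(500.0, 0.0), (300.0 - loss, 0.5), (200.0, 1.0)]
after = capital_ratio(30.0 - loss, marked)  # ≈ 0.044: below the minimum
```

Because the same price fall shrinks the numerator (capital) much faster than the denominator (risk-weighted assets), a downturn mechanically pushes banks toward the minimum and forces them to curtail lending or sell assets: the pro-cyclicality examined in Chapter 5.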
Granting senior management the authority to manage risk internally did, however, spur greater innovative thinking than had been possible under the previously restrictive regime. Nevertheless, Mr. Greenspan acknowledged that the bank lenders misused this freedom and were not prudent in managing their risks.
Aviation is a pastime enjoyed by many amateur pilots. Of the 21 000 aircraft registered in the UK [1], 96% are engaged in general aviation activities (non-commercial flying). The UK Civil Aviation Authority (UK CAA) classifies microlights, gliders and autogyros as recreational sports aircraft. Of the 21 000 UK-registered general aviation aircraft, only 306 are autogyros, compared to over 4300 microlights and almost 2600 gliders. Despite this small fleet, the autogyro has been seen to exhibit a fatal accident rate up to 100 times higher than those of the microlight or glider. In response to the identification of this high accident rate, the CAA commissioned a programme of research intended to understand its cause. This research, undertaken in the UK by the University of Glasgow, consisted of analytical, wind tunnel and flight test activities. These studies concluded that the autogyro displayed "conventional lateral and directional dynamic stability characteristics", and that both the static and dynamic stability (in particular, a lightly damped phugoid mode) were highly sensitive to the vertical position of the centre of gravity (c.g.) relative to the propeller thrust line. The lack of provision within the autogyro community for collecting meaningful data relating to the airworthiness requirements was also highlighted. Outside of the work performed as part of the research programme that generated this report, there is still considered to be "little indication that rigorous scientific or engineering investigation of airworthiness has occurred". There therefore remains significant scope for further research into what makes autogyros so unsafe to fly and how to improve their airworthiness.
Prior research into the autogyro and its aerodynamic characteristics can be broadly divided into two phases: the first from its inception in 1923 to the beginning of World War II, and the second from 1996 to the present day, when a resurgence of interest in the autogyro began. Much of the early work concentrates on characterising the autogyro's aerodynamics, relying heavily upon wind tunnel testing, flight testing and analytical investigation to establish an understanding of the fundamental flight dynamics of the aircraft. With the first flight of the first functional helicopter, the outbreak of World War II and the death of the inventor of the autogyro, research interest in this aircraft type diminished critically; only three papers on the subject of autogyros were published between 1939 and 1996. The Air Accident Investigation Branch (AAIB) review of the airworthiness of the grounded Air Command autogyro, conducted after five fatal accidents between 1989 and 1991, recommended the commissioning of a programme of research into both the airworthiness and the aerodynamic characteristics of light autogyros. As a result of this recommendation, the autogyro experienced a resurgence in research interest, culminating in the publication of a CAA Paper which presented four recommendations intended to improve the airworthiness of the autogyro: 1. The vertical location of the centre of gravity (c.g.) should lie within a ±2 inch envelope of the propeller thrust line. 2. Horizontal tailplanes are largely ineffective in improving the long-term pitch dynamic stability (phugoid mode). 3. Extreme manoeuvring can lead to excessive rotor teeter angles during certain phases of flight, potentially resulting in the rotor blades striking the propeller or empennage. 4. The chordwise centre of gravity of the rotor blades should always lie at or ahead of the 25% chord position to prevent rotor blade instability.
One of the primary aims of this Thesis was to assess the validity and applicability of these recommendations; to do so, a simulation model of an autogyro was created. The model was based on the G-UNIV research autogyro owned by Glasgow University and validated against flight test data to ensure the required level of fidelity was achieved. Upon re-assessing the recommendations, different conclusions were drawn in some cases. The first recommendation, while accepted as a sensible design aim, was found to be overly restrictive. BCAR Section T, the airworthiness specification for autogyros, specifies requirements on the period and the time to half amplitude of any longitudinal oscillations present in the aircraft. It was found that a design compliant with the requirements of BCAR Section T when the vertical centre of gravity lay outside the ±2 inch range could become non-compliant when the centre of gravity was moved within that range. Recommendation 2 suggests that the removal of the tailplane has little impact on the longitudinal trim control positions and the characteristics of the phugoid mode. The results from the simulation model disagreed with this conclusion: removing the tailplane changed the characteristics of the phugoid mode and the trimmed control positions significantly. The third recommendation highlighted the potential for a rotor blade to strike the propeller or the empennage under extreme manoeuvring. The simulation environment provided a safe setting in which to test this recommendation; it was found that under the loading of an extreme manoeuvre it was possible for the main rotor to strike the tail, supporting the conclusion drawn in CAA Report 2009/02.
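The two BCAR Section T stability metrics mentioned above — period and time to half amplitude — fall directly out of the eigenvalue of the oscillatory mode. A minimal sketch (the eigenvalue here is a hypothetical lightly damped phugoid root, not data from the G-UNIV model, and the actual BCAR Section T limits would have to be looked up):

```python
import math

def oscillation_metrics(eigenvalue):
    """Period and time to half amplitude of a stable oscillatory mode.

    eigenvalue: complex root sigma + i*omega (rad/s) of the linearised
    longitudinal dynamics; amplitude decays as exp(sigma * t), so this
    requires sigma < 0 and omega != 0.
    """
    sigma, omega = eigenvalue.real, eigenvalue.imag
    period = 2.0 * math.pi / abs(omega)   # seconds per cycle
    t_half = math.log(2.0) / -sigma       # solve exp(sigma * t) = 1/2
    return period, t_half

# Hypothetical lightly damped phugoid root: -0.02 + 0.4j rad/s
period, t_half = oscillation_metrics(complex(-0.02, 0.4))
# period ≈ 15.7 s, time to half amplitude ≈ 34.7 s
```

Moving the vertical c.g. shifts sigma and omega, so a compliance check reduces to comparing these two numbers against the tabulated BCAR Section T limits across the aircraft's speed range.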
It was not possible to assess the fourth and final recommendation, relating to the chordwise position of the blade centre of gravity, due to the limitations of the simulation model developed. Another focus of the recent work surrounding the autogyro has been on quantifying and assessing the handling qualities of such vehicles. This presents many challenges, including the fact that no autogyro-specific handling qualities specification currently exists. One of the main themes of this Thesis was to progress towards the creation of such a specification, either through the development of a new methodology or through the adaptation of existing specifications such as ADS-33E-PRF. The first steps in this field have been taken by Glasgow University using ADS-33, the primary specification used in the assessment of military rotorcraft. The autogyro was previously assessed in a real-world flight trial using existing Mission Task Elements (MTEs) taken from ADS-33: the Slalom and the Acceleration-Deceleration. The results from this trial were then used to derive proposed Level 1, 2 and 3 boundaries for quickness and pilot attack. This Thesis replicated the trial using real-time piloted simulation, and the method described in the Glasgow University work was also used to derive a set of predicted handling qualities Levels for both quickness and pilot attack. It was found that the predicted Level boundaries generated from the simulation trial did not agree well with those predicted in the original flight trial. There were several reasons for this: in the original flight trial the pilot used a non-standard technique to fly the Slalom manoeuvre, using sideslip to complete the test course; additionally, both the original flight trial and the simulated flight trial used data from several different course geometries.
As a result, the ordering of the Level boundaries predicted by the simulated flight trial was reversed: test points flown on the more aggressive course geometries received lower handling qualities ratings despite using the most aggressive control inputs, whilst those on easier courses, flown with lower-magnitude, less aggressive inputs and thus lower quickness, received better handling qualities ratings. Recommendations were made to address these issues in future iterations of this work. This Thesis also sought to establish whether the MTEs specified in ADS-33 highlight the deficiencies of the autogyro in the same manner as they do those of the helicopter, as well as to identify the fundamental differences between the autogyro and the helicopter. For the most part, the chosen MTEs did highlight the deficiencies in the same manner for both aircraft, with the exception of the Acceleration-Deceleration. When assessing the helicopter, the Acceleration-Deceleration is intended to highlight any undesirable roll-due-to-pitch cross-couplings present in the aircraft. As the autogyro uses throttle setting, not pitch attitude, to accelerate, the Acceleration-Deceleration cannot be used to assess the impact of this cross-coupling in the autogyro. The Acceleration-Deceleration did, however, reveal the presence of a roll-due-to-throttle coupling in the autogyro which had not previously been reported. Alongside the repetition of this original flight trial, new MTEs were analysed. The Heave Hop and the Roll Step are MTEs originally intended to assess tiltrotor-type aircraft; however, they have shown some utility in assessing the autogyro. The Heave Hop in particular highlighted another of the fundamental differences between the autogyro and the helicopter: it is intended to test the ability of the aircraft to sustain a positive load factor before transitioning to a negative load factor.
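The quickness parameter behind these Level boundaries is, in ADS-33 terms, the peak angular rate achieved during a discrete attitude change divided by the change in attitude. A small sketch of the calculation for a monotonic attitude capture (the manoeuvre data here are synthetic, standing in for flight or simulator time histories):

```python
import math

def attitude_quickness(rates, attitudes):
    """ADS-33-style attitude quickness for one discrete attitude change:
    peak angular rate (deg/s) over the net attitude change (deg), giving
    units of 1/s. Assumes a monotonic attitude capture.
    """
    peak_rate = max(abs(r) for r in rates)
    net_change = abs(attitudes[-1] - attitudes[0])
    return peak_rate / net_change

# Synthetic 30-degree roll attitude capture over 5 s, sampled at 10 Hz,
# with a smooth half-cosine attitude profile (hypothetical data).
dt, T = 0.1, 5.0
t = [i * dt for i in range(int(T / dt) + 1)]
phi = [15.0 * (1.0 - math.cos(math.pi * ti / T)) for ti in t]   # deg
# roll rate by central differencing of the attitude time history
p = [(phi[i + 1] - phi[i - 1]) / (2.0 * dt) for i in range(1, len(phi) - 1)]
Q = attitude_quickness(p, phi)   # ≈ 0.31 1/s for this profile
```

Plotting such quickness values against attitude change for many manoeuvre captures, with Level 1/2 and 2/3 boundary lines overlaid, is how predicted handling qualities Levels of the kind discussed above are derived.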
Much of the effect of this changing load factor is mitigated in the helicopter by the presence of a rotor speed governor, which maintains the rotor speed at a constant value irrespective of the rotor loading. The autogyro rotor, however, is unpowered, meaning that changes in the rotor loading also change the rotor speed. This results in potentially problematic changes in the autogyro's handling qualities, reducing or increasing the control power or quickness available to the pilot. The completion of the work described herein has raised many possible avenues for further work. Although it has been shown that ADS-33-style predicted handling qualities represent a good baseline for the development of autogyro-specific handling qualities, there remains scope for redefinition of the Level 1/2 and Level 2/3 boundaries for the predicted handling qualities parameters, such as quickness and control power, as well as for the redefinition of the MTEs to make them autogyro-specific. In order to draw firm conclusions, these manoeuvres must also be reassessed using different autogyro types and configurations; often the work presented herein is only the second time such an assessment has taken place. The easily reconfigurable autogyro model developed as part of this Thesis presents an ideal tool with which to achieve this goal. In summary, through the development of existing work and the introduction of new ideas, progress has been made towards an autogyro-specific handling qualities specification. Whilst there is still a long way to go in thoroughly domesticating the autogyro, this Thesis represents a step in the right direction.