Electric vehicles are an effective tool for reducing vehicle-borne emissions from road transportation. Faced with major pollution issues, China is committed to vigorously promoting electric vehicles. China has made active efforts in subsidies, policies, charging facilities, business models, etc., so the annual growth rate of electric vehicle sales has accelerated. State subsidies have greatly promoted the adoption of electric vehicles, but the government is gradually reducing them. As government subsidies decline or even fall to zero, "separation of vehicle and battery" is considered a promising mode for the development of private EVs. Under this mode, the battery does not form a whole with the chassis; the two can be physically separated, a depleted battery is swapped for a fully charged one instead of being charged by the user, and battery leases replace battery purchases. However, a series of issues needs further study, such as whether this mode benefits consumers, whether it gives vehicle companies a competitive advantage, and what difficulties exist. This paper first analyzes whether it is necessary to implement "separation of vehicle and battery" for private electric vehicles (SEPARATION) in China. On that basis, it summarizes the attempts of two companies to implement SEPARATION and extracts the key factors involved. These key factors are then analyzed, and a customer delivered value model of SEPARATION is established. Finally, the article discusses the predicament of SEPARATION and makes recommendations for its implementation in China. The innovations of this paper include: (1) analyzing the issue of SEPARATION from the perspective of customer delivered value; (2) proposing a customer delivered value model of SEPARATION for the first time; (3) proposing a two-level battery replacement network in the SEPARATION mode.
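The customer-delivered-value comparison at the heart of the SEPARATION argument can be sketched numerically. The sketch below follows the standard textbook definition (delivered value = total customer value minus total customer cost); the function name and all figures are invented for illustration and are not taken from the paper.

```python
# Hedged sketch of a customer delivered value comparison for SEPARATION.
# Delivered value = total customer value - total customer cost; the
# decomposition and every number below are illustrative assumptions.

def delivered_value(product_value, service_value, monetary_cost, time_cost):
    """Delivered value in arbitrary units."""
    return (product_value + service_value) - (monetary_cost + time_cost)

# Battery purchase: higher upfront monetary cost, charging time borne by the user.
buy = delivered_value(product_value=100, service_value=10,
                      monetary_cost=80, time_cost=20)

# Battery lease with swapping: lower upfront cost, a quick swap replaces charging.
lease = delivered_value(product_value=100, service_value=25,
                        monetary_cost=70, time_cost=5)
```

Under these assumed figures the lease-and-swap mode delivers more value, which is the kind of comparison the model in the paper is meant to formalize.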
The concepts of soil quality and soil health are widely used as soils receive more attention in the worldwide policy arena. So far, however, the distinction between the two concepts is unclear, and operational procedures for measurement are still being developed. A proposal is made to focus soil health on actual soil conditions, as determined by a limited set of indicators that reflect favourable rooting conditions. In addition, soil quality can express inherent soil conditions in a given soil type (genoform), reflecting the effects of past and present soil management (expressed by various phenoforms). Soils contribute to ecosystem services that, in turn, contribute to the UN Sustainable Development Goals (SDGs) and, more recently, to the EU Green Deal. Relevant soil ecosystem services are biomass production (SDG 2 – zero hunger), providing clean water (SDG 6), climate mitigation by carbon capture and reduction of greenhouse gas emissions (SDG 13 – climate action), and biodiversity preservation (SDG 15 – life on land). The use of simulation models for the soil–water–atmosphere–plant system is proposed as a quantitative and reproducible procedure to derive single values for soil health and soil quality for current and future climate conditions. Crop production parameters from the international yield gap programme are used in combination with soil-specific parameters expressing the effects of phenoforms. These procedures focus on one ecosystem service, namely biomass production. Other ecosystem services are determined by soil-specific management and are to be based on experience obtained in similar soils elsewhere or on new research. A case study, covering three Italian soil series, illustrates the application of the proposed concepts, showing that the soil types (soil series) responded significantly differently to the effects of management and also in their reaction to climate change.
The vision of the OSPAR Commission and the Convention for the Protection of the Marine Environment of the North-East Atlantic is a clean, healthy and biologically diverse ocean used sustainably. Within the Hazardous Substances Strategy, the Commission has prioritized substances for which measures, actions and background documents were adopted in order to reach the stated vision: cessation of losses, discharges and emissions of contaminants; concentrations of anthropogenic substances close to zero; and naturally occurring substances at background levels in the North-East Atlantic. Due to ongoing regulatory work within the European Commission as well as the development of international treaties, the background documents of the prioritized substances have not been continually updated. This report analyzed and evaluated whether, and which of, the OSPAR background documents needed updating, in addition to giving suggestions on potential further actions OSPAR could take in order to protect the marine environment against hazardous substances. This was done by reviewing the current legislative status of the prioritized substances under REACH, the Stockholm Convention, the EU Water Framework Directive, and the biocidal and plant protection products regulations. Where possible, monitoring data were incorporated in the evaluation. This report presented generalized groupings of suggested OSPAR actions for each of the background documents: substances which require no additional OSPAR actions; substances which are not considered to fulfil the criteria for hazardous substances and should be removed from the LCPA; substances for which additional actions may be needed; and substances which require continuous monitoring and potentially evaluation of the need for additional measures. This report suggests that OSPAR in the future take a more active role, acting as a safety net for the EU regulations, with the aim of protecting the marine environment in the North-East Atlantic. The OSPAR Contracting Parties should decide how to follow this advice and how the future work within the Hazardous Substances Strategy should be undertaken.
In: Blanco Reaño, H. J., Nijs, W., Ruf, J. & Faaij, A. (2018), 'Potential for hydrogen and Power-to-Liquid in a low-carbon EU energy system using cost optimization', Applied Energy, vol. 232, pp. 617–639. https://doi.org/10.1016/j.apenergy.2018.09.216. ISSN 0306-2619.
Hydrogen represents a versatile energy carrier with net-zero end-use emissions. Power-to-Liquid (PtL) combines hydrogen with CO2 to produce liquid fuels, mostly to satisfy transport demand. This study assesses the role of these pathways across scenarios that achieve 80–95% CO2 reduction by 2050 (vs. 1990) using the JRC-EU-TIMES model. The gaps in the literature covered by this study include a broader spatial coverage (EU28+) and hydrogen use in all sectors (beyond transport). The large uncertainty in the possible evolution of the energy system has been tackled with an extensive sensitivity analysis: 15 parameters were varied to produce more than 50 scenarios. Results indicate that the most influential parameters are the CO2 target, the availability of CO2 underground storage and the biomass potential. Hydrogen demand increases from 7 mtpa today to 20–120 mtpa (2.4–14.4 EJ/yr), mainly used for PtL (up to 70 mtpa), transport (up to 40 mtpa) and industry (25 mtpa). Only when CO2 storage was not possible, due to a political ban or social acceptance issues, was electrolysis the main hydrogen production route (90% share) and CO2 use for PtL attractive. Otherwise, hydrogen was produced through gas reforming with CO2 capture, and the preferred CO2 sink was underground storage. Hydrogen and PtL contribute to energy security and independence, reducing energy-related import costs from 420 bln€/yr today to 350 or 50 bln€/yr for 95% CO2 reduction with and without CO2 storage, respectively. Development of electrolyzers, fuel cells and fuel synthesis should continue to ensure these technologies are ready when needed. Results from this study should be complemented with studies at higher spatial and temporal resolution. Scenarios with global trading of hydrogen and potential imports to the EU were not included.
Modern isolated power grids are constantly evolving to adopt smart grid concepts that permit higher renewable energy penetration and energy management optimization, in view of an EU policy of sustainable RES-based energy production with reduced pollutant emissions. Nevertheless, many island power systems, such as the islands of Southern Europe, still depend on oil-fired diesel engines, while renewable energy production is limited for financial, technical and environmental reasons. In this study, the power system of a typical non-interconnected South European island consisting of diesel generators and a PV farm is modelled and simulated. The scope of this paper is to examine the ability of a Battery Energy Storage System (BESS) to achieve load peak shaving combined with maximization of PV power penetration into the grid, leading to pre-planned zero curtailment. For this purpose, a novel peak shaving algorithm is developed and implemented in an Energy Management System (EMS) for optimal scheduling of the diesel engines. Thereafter, dynamic simulations of the island's power system are carried out employing a predictive control strategy at different time scales, ranging from a supervisory BESS controller based on load forecasting to real-time battery power regulation. The predictive BESS controller is based on forecasts of future consumption values, which in turn result from an Artificial Neural Network (ANN) and an optimization procedure taking into account PV power generation and a peak shaving threshold. Thus, a new diesel engine schedule is obtained, capable of replacing the maximum peak power demand with renewable power while achieving load-curve smoothing and reduced diesel generator ramp-ups. The simulations are executed on the APROS (Advanced Process Simulator) dynamic simulation platform, using built-in components for the BESS modelling, an external model for load forecasting and a user-developed EMS structure.
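The combination of a peak-shaving threshold with battery state-of-charge limits that the abstract describes can be illustrated with a minimal sketch. This is not the paper's EMS or its ANN-based forecaster; the rule, function names and figures below are assumptions chosen only to show the threshold logic.

```python
# Minimal peak-shaving sketch (illustrative only): cap the net load that the
# diesel generators must serve at a threshold by discharging the battery
# above the threshold and recharging it below, within energy limits.
# Inverter power limits and losses are ignored for brevity.

def peak_shave(load_kw, pv_kw, threshold_kw, soc_kwh, capacity_kwh, dt_h=1.0):
    """Return (battery_power_kw, new_soc_kwh); positive power = discharge."""
    net = load_kw - pv_kw                      # demand remaining after PV
    if net > threshold_kw:                     # peak: discharge to shave it
        p = min(net - threshold_kw, soc_kwh / dt_h)
    else:                                      # valley: recharge up to the threshold
        p = -min(threshold_kw - net, (capacity_kwh - soc_kwh) / dt_h)
    return p, soc_kwh - p * dt_h

soc = 50.0                                     # initial stored energy, kWh
schedule = []                                  # power the diesel units must cover
for load, pv in [(900, 100), (1200, 300), (700, 400), (500, 0)]:
    p, soc = peak_shave(load, pv, threshold_kw=800, soc_kwh=soc, capacity_kwh=100)
    schedule.append(round(load - pv - p))
```

Run over the four illustrative hours, the diesel schedule stays at or near the 800 kW threshold except where the battery runs empty, which is exactly the situation a forecast-driven supervisory controller is meant to avoid.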
In the UK, the sustainability agenda for housebuilding is now over ten years old, dating from the 2006 launch of the Code for Sustainable Homes as the benchmark for the reduction of carbon emissions in all new housing. The government's 2015 decision, however, to dismantle the Code for Sustainable Homes and remove the 2016 Zero Carbon targets has meant that, with little warning, the sustainability industry has had to start fending for itself. Without the incentive of tariffs to focus the minds of developers, the sustainability industry is now looking vulnerable, and the realities of having to be financially viable are coming home to roost. But our statutory requirement to reach an 80% carbon reduction by 2050 has not changed, and neither have the reasons for achieving it. During that time, the sustainability agenda was a key driver for innovation within the housebuilding industry. However, rather than focusing on the many benefits that innovation can bring, in this paper the authors look at the barriers to the adoption of innovation and ask whether these barriers have been fully understood by those who accuse the housing industry of complacency for its failure to reinvent itself. The main method used to investigate these barriers was a series of industry interviews, carried out across all the sectors defined as being part of the decision-making process, in order to better understand how their motivations might differ and, if so, whether this disconnect could be preventing the progress that all individually profess to want but none appear able to deliver. The findings suggest that a more informed approach to promoting or considering any innovative product within the housebuilding industry could avoid many of the barriers currently being confronted head on.
One of the top priorities of European countries is to reduce energy consumption and greenhouse gas (GHG) emissions in the built environment. To reach this goal, urban renewal processes are seen as a core strategy towards a sustainable built fabric, given that a large proportion of the cities of tomorrow is already standing. Indeed, there are still considerable potential energy savings to be made in European countries in general, and in Spain in particular, a country in which most residential buildings were constructed before 2001 under thermal regulations very lax compared to the current ones. These buildings therefore require large amounts of energy to ensure minimum indoor comfort. Considering the global context of the country and the current economic crisis, which is particularly affecting the construction sector, the future lies in the renewal of the built environment, responsible for 40% of the primary energy consumption in the country. The originality of this research is to apply the cost-optimal methodology at the territorial scale, using statistical and population census data and taking into account the 12 climatic zones of the Spanish territory. The cost-optimal methodology was initially proposed by the European Union to study different building retrofitting scenarios. It consists of a multi-criteria assessment that allows various levels of intervention to be compared under different macro-economic scenarios, in terms of the cost-effectiveness of the strategies and of energy and environmental savings. This thesis presents the application of the cost-optimal methodology at the territorial scale in order to estimate the energy saving potential of residential buildings constructed before 2001. The assessment methodology is implemented in an Excel-based strategic decision-making tool, aiming to help select the best strategy for achieving the European requirements according to the nearly Zero Energy Building (nZEB) standard.
Moreover, this research proposes a new systematic approach ...
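The cost-optimal comparison the thesis applies can be illustrated in miniature: for each retrofit scenario, a global cost is formed from the upfront investment plus discounted annual energy costs over a calculation period, and the cost-optimal level is the scenario with the lowest global cost. The scenario names, discount rate and all figures below are invented for illustration, not taken from the thesis.

```python
# Hedged sketch of a cost-optimal comparison between retrofit scenarios.
# Global cost = investment + sum of discounted annual energy costs over
# the calculation period; the scenario with the lowest global cost wins.

def global_cost(investment, annual_energy_cost, years=30, discount_rate=0.03):
    """Global cost in EUR/m2 over the calculation period."""
    discount_sum = sum(1 / (1 + discount_rate) ** t for t in range(1, years + 1))
    return investment + annual_energy_cost * discount_sum

scenarios = {                      # (investment EUR/m2, annual energy cost EUR/m2/yr)
    "no intervention": (0.0, 12.0),
    "envelope only":   (90.0, 6.0),
    "envelope + HVAC": (160.0, 3.5),
}
costs = {name: round(global_cost(inv, ann), 1) for name, (inv, ann) in scenarios.items()}
optimal = min(costs, key=costs.get)
```

With these made-up figures the intermediate retrofit is cost-optimal: deeper intervention saves more energy but its extra investment is not recovered within the period, which is precisely the trade-off the methodology is designed to expose per climatic zone.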
In June 2016, a climate and energy strategy for Oslo was enacted by the city council. The overall climate targets adopted for Oslo 2030–2050 were to reduce greenhouse gas emissions by 50% by 2030 and to zero by 2050. To achieve these goals, several measures in the various municipal sectors have to be implemented. For the transport sector, the most important measure is to ensure a transition from individual car transport to public transport, cycling, walking and green goods transportation. To have a fossil-free society in 2050, it is crucial to reverse trends by changing people's behaviour. If car traffic inside the city centre is to be banned, the consequences need to be evaluated; concern for the disabled, goods transportation, public transit and residents must be safeguarded. Through comparison with other cities that are partially car-free, we describe some consequences that must be taken into account when implementing the green policy. Several methods are used: a literature review and face-to-face interviews. A study tour to Germany and France included city walks and interviews with urban and transport planners and politicians from the cities of Nuremberg, Freiburg and Strasbourg. These cities were chosen because of their long experience with the implementation of pedestrianized city centres. The provision of a pedestrianized city centre in Oslo may at first glance seem like a political stunt by an inexperienced visionary political party. Drastic measures must be implemented, and these require thoughtful planning and participation. A car-free city centre cannot be completed at the expense of the population, and a good planning phase will be more important than the goal of implementing the measures within a certain time limit. Achieving good accessibility for all affected may potentially lead to a vibrant cityscape, suitable for many.
The process of deforestation in the Brazilian Amazon region during the last six decades has built disorganized landscapes in agronomic, ecological, economic and social terms. Pastures were established after slash-and-burn in a spatially systematic diffusion, buffering the road network with a land tenure objective and with no consideration for topography or other natural resources, except the hydrographic network for watering cattle. This spatial pattern of colonization has generated a large waste of space and natural resources. For the past ten years, zero-deforestation policies have been building a new legal context for land use and natural resources management, opening possibilities for landscape optimization for ecosystem services and ecological intensification. To repair environmental liabilities, connect remaining forest and preserve landscapes, protect hydrological resources, and manage land tenure and agricultural diversification, local actors need a tool that takes into account the spatial distribution of the environmental, economic, agronomic and social organization in each jurisdiction. In this communication the authors present a GIS tool at the municipality level, able to support local actors' decisions and questions in building new eco-efficient landscapes, in order to mitigate emissions, optimize natural resource productivity and farm profitability, and plan local policies such as locating logistics or agro-industrial plants. The main options regarding land-use change, mitigation practices, pasture management practices and the use of fire are informed by a set of remote sensing indicators, crossed with land tenure layers and forest connection metrics. The participative use of this tool should help local multi-stakeholder platforms to monitor, evaluate and plan innovation, mitigation, diversification and synergies at the landscape scale, and improve the inclusion of smallholders in local governance.
An external certification of this system at the jurisdictional level will be provided, demonstrating that local institutions can assume environmental responsibilities and attracting new investors in a green economy perspective.
Doctoral thesis in Refining, Petrochemical and Chemical Engineering. Desulfurization is one of the most important processes in the refining industry. Due to growing concern about the risks to human health and the environment associated with emissions of sulfur compounds, legislation has become more stringent, requiring a drastic reduction in the sulfur content of fuel to levels close to zero (< 10 ppm S). However, conventional desulfurization processes are inefficient and have high operating costs. This scenario stimulates the improvement of existing processes and the development of new, more efficient technologies. Aiming to overcome these shortcomings, this work investigates an alternative desulfurization process using ionic liquids for the removal of mercaptans from jet fuel streams. The screening and selection of the most suitable ionic liquid were performed based on experimental and COSMO-RS-predicted liquid-liquid equilibrium data. A model feed of 1-hexanethiol and n-dodecane was selected to represent a jet fuel stream. High selectivities were determined, as a result of the low mutual solubility between the ionic liquid and the hydrocarbon matrix, proving the potential of the ionic liquid, which prevents the loss of fuel to the solvent. The distribution ratios of mercaptans towards the ionic liquids were not as favorable, making traditional liquid-liquid extraction processes unsuitable for the removal of aliphatic S-compounds due to the high volume of extractant required. This work explores alternative methods and proposes the use of ionic liquids in a membrane-assisted separation process. In the proposed process, the ionic liquid is used as extracting solvent for the sulfur species in a hollow fiber membrane contactor, without co-extracting the other jet fuel compounds. In a second contactor, the ionic liquid is regenerated by sweep gas stripping, which allows its reuse in a closed loop between the two membrane contactors.
This integrated extraction/regeneration process ...
Analytical chemistry can be split into two main types, qualitative and quantitative. Most modern analytical chemistry is quantitative. Public sensitivity to health issues is heightened by the mountain of government regulations that use science to, for instance, provide public health information to prevent disease caused by harmful exposure to toxic substances. The concept of the minimum amount of an analyte or compound that can be detected or analysed appears in many of these regulations (for example, to rule out the presence of traces of toxic substances in foodstuffs), generally as part of method validation aimed at reliably evaluating the validity of the measurements. The lowest quantity of a substance that can be distinguished from the absence of that substance (a blank value) is called the detection limit, or limit of detection (LOD). Traditionally, in the context of simple measurements where the instrumental signal depends only on the amount of analyte, a multiple of the blank value is taken to calculate the LOD (traditionally, the blank value plus three times the standard deviation of the measurement). However, the increasing complexity of the data that analytical instruments can provide for incoming samples leads to situations in which the LOD cannot be calculated as reliably as before. Measurements, instruments and mathematical models can be classified according to the type of data they use. Tensorial theory provides a unified language that is useful for describing chemical measurements, analytical instruments and calibration methods. Instruments that generate two-dimensional arrays of data are second-order instruments. A typical example is a spectrofluorometer, which provides a set of emission spectra obtained at different excitation wavelengths. The calibration methods used with each type of data have different features and complexity.
In this thesis, the most commonly used calibration methods are reviewed, from zero-order (or univariate) to second-order (or multilinear) calibration models. ...
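The classical zero-order LOD rule mentioned above (blank value plus three times the standard deviation of the blank measurements) is easy to state in code. The blank readings and calibration slope below are made-up numbers for illustration only.

```python
# Illustrative calculation of the classical zero-order (univariate) LOD:
# mean blank signal plus three times the sample standard deviation of the
# blank replicates, converted to concentration via the calibration slope.
from statistics import mean, stdev

blanks = [0.011, 0.013, 0.010, 0.012, 0.011, 0.014]  # blank signals (a.u.), assumed
slope = 0.52                                          # calibration slope (a.u. per ppm), assumed

lod_signal = mean(blanks) + 3 * stdev(blanks)         # signal-domain decision level
lod_conc = (lod_signal - mean(blanks)) / slope        # concentration-domain LOD (ppm)
```

This is exactly the "simple measurement" setting described above; for second-order data such as excitation-emission matrices, no such single-formula rule applies, which motivates the multivariate treatment reviewed in the thesis.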
By Morgan Bazilian and Thijs Van de Graaf

The International Energy Agency (IEA) is marking its 50-year anniversary. From its origins as a relatively unknown analytical body primarily focused on oil security for a select group of OECD members, it has evolved significantly. Today, the IEA also confronts pressing global issues such as climate change and mineral security. Its expanding collaboration with countries like India underscores its increasing global influence. However, the IEA's transformation is not devoid of challenges or criticism. It faces intensified scrutiny within the increasingly politicized arena of energy policy, a massive global energy crisis that has refocused attention on energy security, and the urgent imperative of achieving net-zero emissions.

Redefining Energy Security

Energy security has long been a central goal of both domestic and foreign policy. In the US, it harkens back to the Eisenhower and Carter doctrines committing US forces to minding the Strait of Hormuz and the Arabian Sea. In the UK, the tales of Churchill and Fisher moving the Royal Navy from domestic coal to imported oil have achieved nearly legendary status. But doing so traded huge gains in efficiency and vessel range for lasting dependence on strategic resources sourced from abroad. Today there are new technologies, new priorities for society, a new focus for capital, and thus new ways to think about energy security. Geopolitical shocks and the race to net-zero emissions have brought into focus supply chain security for key energy technologies, from solar panels to heat pumps, electrolyzers and batteries, as well as the demand side of the energy security equation. Yet the IEA's definition of energy security is still largely focused on supply – that is, the supply of fuels like oil. The ongoing security crisis in the Red Sea, the ongoing Ukraine War, attacks on Gulf refineries, and other conflicts underscore the persistent importance of fuel supply security.
Oil Market Dynamics and the OPEC Rivalry

The IEA was established in response to the global oil crisis of 1973, as a forum for oil importers to discuss the big issues of oil security. It also established a mechanism for countries to agree to keep a minimum amount of oil on hand in the case of another shock. Until recently, that emergency tool had only been activated three times: during the first Gulf War, after Hurricane Katrina, and in 2011 during the Libyan Crisis. In 2022, in the wake of Russia's invasion of Ukraine, the IEA decided in two steps to release no less than 182.7 million barrels, the largest collective drawdown of emergency stocks ever. As the move was not preceded by an immediate supply shock, it prompted criticism that the Biden administration was using it for political gain – to combat inflation and rising energy prices ahead of the 2022 midterm elections. Perhaps not surprisingly, the move also drew criticism from OPEC. At the time of its founding, the IEA's relations with OPEC were antagonistic, but a thaw in relations had set in since the end of the Cold War. A producer-consumer dialogue ensued and was formalized in the form of the International Energy Forum, based in Riyadh. Today, the relationship has soured again. OPEC countries are upset by the oil price cap that the West has imposed on Russia, fearing the mechanism could one day be directed against them. The oil producer club is also upset by the IEA's projection that demand for fossil fuels will peak before the end of the decade, describing such a narrative as "extremely risky," "impractical" and "ideologically driven."

Navigating Political Crosscurrents

Over the last decade especially, the IEA has come under increasing pressure.
The first wave came from left-leaning organizations and climate activists, over the IEA's under-estimation of renewable energy and its alleged bias against those forms of energy. Such criticism even led some founding IEA member countries to strike out on their own and set up the International Renewable Energy Agency (IRENA) in 2009. That agency also celebrated its anniversary this year, turning 15. It already boasts an impressive 169 members, far more than the IEA's 31 members. Still, criticism of the IEA did not dissipate. In April 2019, a group of business leaders, scientists and campaigners criticized the IEA's World Energy Outlook for not considering the tougher temperature goal of the Paris Agreement. The IEA also faced a campaign to make its data publicly available, rather than locked behind paywalls. Most recently, criticism, largely from the political right, has held that the organization has become too climate-focused and that this is clouding its analytical rigor. Neither critique is terribly helpful or illuminating. Rather, any international organization must be understood to be political to some degree. Further, the IEA has a relatively modest budget for an international organization. That budget comes from countries – namely the small group of wealthy OECD nations. Those countries have shifting politics, and that comes across in funding requests: strings-attached funding. The United States is the largest funder of the Agency; its politics, therefore, have a larger impact than others'. That said, the EU and Japan also have considerable weight. It is no small feat to juggle this dynamic, fundamentally political backdrop for an organization that aspires to do rigorous and balanced analysis.

Forging Ahead: Dr. Birol's Vision

The Agency's current leader, Dr. Fatih Birol, is a former Chief Economist of OPEC and then of the IEA itself. He has made some rather significant changes to the scope of the Agency and its focus.
Perhaps the most important is his focus on expanding the breadth of membership and scope to countries outside the OECD that are critical to global energy markets: think China and India, but others too, such as South Africa. He has also prioritized areas of late that are not on the top-tier political list of most members, such as clean cooking, an area that causes millions of deaths each year and is eminently addressable from a technological perspective. Until recently, the IEA was not well known outside of energy circles. Yet the worst energy crisis in 50 years, combined with the pressing need to address climate change, has changed that. As the IEA has moved into the limelight, it has also faced more scrutiny. Amidst this politically charged landscape, the IEA must stick to its core strengths - being a source of high-quality energy data and analysis - while continuing to evolve its mandate and outreach for the challenges of the 21st century.
Context: Empirical evidence shows that vehicle ownership has increased exponentially over the last fifteen years in the Metropolitan Area of Guadalajara, amounting to more than two million vehicles. Moreover, a high percentage of these vehicles are very old and lack air-pollution control systems such as catalytic converters. The goal of this paper is to reflect on motorized transport in the second-largest city in Mexico. Method: A descriptive analytical method was used to review data and information from governmental and non-governmental sources. Air contaminants are related to the increase in car ownership, county multi-modal transport distribution, registered vehicles, and the number of vehicles that have been verified and certified by the government air quality program. From a systems-theory point of view, current scenarios are analyzed and inferences are made. As a result, there are balancing and unbalancing forces that affect the homeostasis of this metropolitan area. Results: Air-quality regulations have failed to resonate among citizens, who neglect to tune up their vehicles and verify their emissions. Therefore, government officials and society itself must reconsider their efforts and renew air quality policies based on low-emission, safe, reliable, and sustainable transportation to avoid air quality contingencies. In addition, the data clearly suggest that land and urban planning has not followed a systemic approach. Conclusions: The Guadalajara metropolitan area clearly shows the lack of a systemic methodology in planning its urban and land development. Hence, a perverse synergy of hostile conditions has emerged within its territory, and internal combustion vehicles are the most important destabilizing force in the homeostasis of the city, increasing entropy.
The presence of this entropy is forcing the inhabitants of Guadalajara to migrate to a multi-modal sustainable transportation system and to start a systematic car substitution program that ensures zero or very-low vehicle emissions.
Climate change is on the verge of becoming a severe threat to mankind. It is widely acknowledged that emissions of greenhouse gases need to be reduced drastically to prevent major damage to the environment and society. As the power sector is one of the biggest sources of greenhouse-gas emissions, a step-change transformation of energy systems towards zero-carbon power generation will be indispensable. Renewable energy technologies (RETs) will play a pivotal role in this endeavor. As regulators across the globe take action to foster the swift development and deployment of RETs, it will be crucial to identify and employ smart energy policies that drive the effective and efficient diffusion of RETs. Using onshore and offshore wind power as research cases, this dissertation strives to identify existing barriers to RET deployment and to derive recommendations on which policy measures can be employed to overcome them. Throughout all three studies, the analyses adopt the perspective of the main addressees of policy instruments: developer and investor companies. The first study investigates the key barriers to onshore wind power development and reveals the preference values that developers place on various policy settings.
The second study scrutinizes current barriers and deployment dynamics in developing offshore wind power. The third study analyzes the profitability prospects of various offshore wind power locations in Europe by modeling location-dependent costs, wind resources, and remuneration policy schemes. The main contributions of this dissertation lie in providing a rich set of new empirical data for both onshore and offshore wind power; in expanding scientific knowledge about developing the nascent offshore wind power technology; and in offering a framework that captures the determinants developer companies use to assess policy regimes. The results indicate that regulators can address many barriers to developing wind power by means that go beyond monetary support. Developers highly value policy measures that mitigate risks, both concrete and perceived. These include reliable, clear, and stable support schemes, permitting procedures, and grid-access regulations. Furthermore, the studies indicate that the success of technology support depends not only on the choice of the primary deployment policy instrument but also on its specific implementation design and on suitable secondary regulatory aspects. Together, these can have a significant impact on how developers perceive the attractiveness of a given policy regime as a whole.
Due to the combined inertia of the climatic and socio-economic systems, policymakers cannot avoid making early decisions on climate mitigation "in a sea of uncertainties", even though the very legitimacy of economic analysis to address equity issues (who pays for climate mitigation, and when) faces widespread skepticism. This thesis aims to demonstrate that public economics remains a powerful tool to (i) assess the consistency of the various discourses on climate mitigation, (ii) provide robust insights into climate decisions, and thus (iii) help bring some rationality to the debate. We use a set of compact integrated climate-policy optimization models to articulate and numerically assess the prominent issues at stake. Our analysis yields three main results. First, we demonstrate that the trade-off between present and future efforts cannot be reduced to the controversy over the value of the discount rate. Under uncertainty, in fact, the margins of freedom we bequeath to future generations, notably the technical and institutional systems we transmit to them, prove more important for short-term decisions than the value of the discount rate. Second, we model various quota-allocation rules proposed in the literature to extend Annex B to developing economies, and show that their distributive outcome depends critically on ex ante assumptions about future economic and emissions growth. A careful design of the institutions surrounding the tradable-permits market, one that smooths out sensitivity to economic and emissions paths, is thus a necessary condition for enhancing the system's negotiability and robustness.
Last, this thesis illustrates the complementarity between ethics and economics: though economists have no particular legitimacy to say what is fair or just, their toolbox is powerful enough to show how intuitively appealing ideas, such as a zero discount rate meant to treat present and future generations alike, or an equal per-capita allocation of emissions quotas, may sometimes lead to very questionable outcomes.