The title of the Master's thesis is "Project: Establishing a Children's Dairy Products Plant in Riga". The aim of the thesis: from 5 January 2010 to 31 December 2010, to build the company SIA "Mazulis" and, starting on 2 January 2011, to launch the production of milk and fermented milk products for children with a capacity of 10 tonnes of output per shift, without exceeding the funds of 1,250,000 Ls allocated for the project in 2010, in order to supply Latvian infants and pre-school children with high-quality baby food. For various reasons not all women are able to breastfeed, and if a child cannot receive mother's milk, the correct choice of dairy products is essential for the child's development. SIA "Mazulis" will carry out the full milk-processing cycle, from accepting milk as a raw material to delivering the finished children's dairy products to retail chains in Latvia. The products will be organically pure, as only raw materials from organic farms will be used. The composition of the dairy products will be adapted as closely as possible to that of mother's milk, so as to provide infants with all its essential components. The recipes will be developed in cooperation with scientists and paediatricians, and the products will also be suitable for infants with allergies. The planned product range is: mother's milk substitute; fermented mother's milk substitute; milk for expectant mothers; the dairy product "Mazulis 1"; the dairy product "Mazulis 2" with and without additives; and liquid milk formulas with buckwheat and oat flakes. The total planned output is 315 tonnes per month in 2011 and 435 tonnes per month in 2012. The mission of SIA "Mazulis" is to provide consumers with high-quality products by drawing on tradition, qualified specialists, and the latest technology. In its development the company plans to pursue three strategic directions: ensuring the highest technical quality of production, ensuring the high quality of its products, and creating recognisable trademarks.
SIA "Mazulis" will be founded in January 2010 with share capital of 1,250,000 Ls. For the project the company plans to purchase land on the outskirts of Riga, close to the city's utilities, and to begin construction in mid-March 2010. The act of commissioning the building is to be signed in December 2010, when delivery and installation of the new fixed assets will also begin. To carry out the construction project in 2010, the following participants will be required: a manager, a lawyer, a manager's assistant, a technical director, a financial director, a marketing director, an HR director, a quality director, and a construction supervisor. The company plans to start dairy production in January 2011, employing 21 people that year and hiring three more in 2012. Prices will be set using the average-cost-plus-target-profit method. To inform potential customers about its products, the company will need advertising: television advertising will be used only in the initial phase of production, and the products will be advertised in the magazines "Veselība", "Mans mazais", "36,6" and "?????". The company will have its own website, www.mazulis.lv, and will also place advertisements on www.berni.lv, www.mansmazais.lv and www.maminuklubs.lv. Total advertising expenditure in 2011–2012 will amount to 23,140 Ls. The company will seek to place its output in as many retail chains as possible; in the longer term it plans to deliver products directly to families whose parents have mobility difficulties and to sell the products in pharmacies. In March 2011 the company plans to take out a five-year loan of 85,000 Ls from A/S "DnB NORD banka"; the loan will be repaid monthly from April onward on the annuity principle, at an interest rate of 15% per year.
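The monthly payment under the annuity principle described above follows from the standard annuity formula. The loan terms (85,000 Ls, 60 months, 15% per year) come from the text; the function name is illustrative:

```python
def annuity_payment(principal, annual_rate, months):
    """Fixed monthly payment for a loan repaid on the annuity principle."""
    i = annual_rate / 12  # monthly interest rate
    return principal * i / (1 - (1 + i) ** -months)

# 85,000 Ls over 5 years at 15% p.a. -> roughly 2,022 Ls per month
payment = annuity_payment(85_000, 0.15, 5 * 12)
```

Each fixed payment first covers the month's interest on the remaining balance, with the rest reducing the principal, so the interest share shrinks over the 60 months.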
The SWOT analysis identified the company's strengths and weaknesses. ; The title of the Master's thesis is "Project: Creation of Dairy Products for Children Manufacture in Riga". The aim is to establish and build the business organisation Ltd "Mazulis" from 05.01.2010 to 31.12.2010 and, from 02.01.2011, to begin producing dairy and cultured milk foods for children with a capacity of 10 tons per shift, without exceeding the allocated funds of 1,250,000 Ls for the realisation of the project. Many women cannot breastfeed their children, and this became one of the reasons for creating this project: if a newborn cannot be fed mother's milk, it is essential to choose dairy products correctly. Ltd "Mazulis" will carry out the complete milk-processing cycle, from receiving the raw milk to delivering the finished products to distribution networks in Latvia. The dairy products made by Ltd "Mazulis" will be organically pure, because only raw materials from organic farms will be used. The composition of the dairy products will be adapted to that of mother's milk in order to provide children with all its necessary components. The formulas and compositions will be worked out in cooperation with academics and paediatricians, and the products may also be used by children with allergies. Ltd "Mazulis" will offer the following range of products: mother's milk substitute, sour mother's milk substitute, milk for future mothers, the dairy product "Baby 1", the dairy product "Baby 2" without additives and with pumpkin and apple-pear additives, and liquid dairy products with buckwheat and oats. The total planned output is 315 tons per month in 2011 and 435 tons per month in 2012. The mission of Ltd "Mazulis" is to provide consumers with high-quality products using tradition, highly skilled specialists, and the newest technologies. The business organisation will be established in January 2010.
For the realisation of the project it is planned to buy land in a suburb of Riga near the town utilities and to build a new facility starting in mid-March 2010. The capital stock will be 1,250,000 Ls. In December 2010 the act of handing over the building will be signed. To carry out the construction project in 2010, the following participants will be engaged: a head manager, a lawyer, an assistant to the head manager, a technical director, a financial director, a marketing director, an HR director, a quality head, and a building supervisor. Production of dairy foods will start in January 2011, when 21 people will be employed; from the beginning of 2012 the staff will grow to 24 people. So that potential clients can receive complete information about the production, the business organisation will need advertising. TV spots will be used only in the initial phase of production. Ltd "Mazulis" will advertise its production in the magazines "Veselība", "Mans mazais", "36,6" and "?????". Advertisements will also be placed on the websites www.berni.lv, www.mansmazais.lv and www.maminuklubs.lv. Total spending on advertising will be 23,140 Ls. To evaluate the advantages and disadvantages of Ltd "Mazulis", the author carried out a SWOT analysis. The strengths: the business organisation will provide consumers with high-quality dairy products, fully professional staff, use of the newest technology, employment of 21 people, high competitiveness, and a good price-quality ratio. These strengths make it possible to gain recognition from consumers, to employ more professionals, to expand the range of products, and to attract more consumers. The business organisation also has some weaknesses: a lack of outlets, no exports, and the fact that the legislation of the Republic of Latvia and the directives of the European Union in this field are not yet developed. This may lead to more competitors on the Latvian market, a reduction of sales on the local market, and changes in demand. It is planned that in 2010 the building will be erected and the business organisation established.
The electronic version does not contain the appendices. ; Annotation The aim of the doctoral thesis is to analyse the role of real estate as an asset in the transmission mechanism of Latvia's monetary policy and, using econometric models, to assess the impact of financial frictions, real estate prices, and the country risk premium on Latvia's macroeconomic indicators. Chapter 1 of the thesis examines the theoretical aspects of the interaction between real estate and economic growth; it analyses the role of housing as an asset in the monetary transmission mechanism and compares the monetary policy responses of different countries to housing price changes. Chapter 2 analyses the price dynamics of standard-type apartments in relation to lending developments and household solvency. Using a logit model, it estimates the factors that determined individuals' willingness to take out a mortgage during the period of rapid mortgage lending growth, and assesses the solvent demand for housing purchases. Chapter 3 estimates a vector autoregression (VAR) model and an unrestricted vector error correction (VEC) model for Latvia, derives the long-run relationship between apartment prices and loans, measures the deviations of apartment prices from that relationship, and determines the speed of price adjustment. To examine the determinants of housing stock growth, a housing investment model with fundamentals is estimated for Latvia; the fundamentals used in the model are a demographic structure indicator, the housing price-to-rent ratio, household consumption, the labour force participation rate, and the cost of capital. Chapter 4 develops a dynamic stochastic general equilibrium (DSGE) model for Latvia with real estate prices, a country risk premium, and financial frictions, estimated with a Bayesian approach.
By simulating the impulse responses of the endogenous variables to exogenous shocks – foreign interest rate, country risk premium, credit, real estate price, technology, and terms-of-trade shocks – the thesis analyses the role of borrowing constraints and the country risk premium in the asset channel of Latvia's monetary transmission. The logit model shows that individuals' willingness to take out a loan depends on whether the person is employed, on the living space (m²) per family member, and on income; the potential effect of living space on the decision to take a mortgage holds only if the person also has sufficient income. The VAR model indicates that credit growth was the most important driver of the rise in housing prices, while the VEC model confirms a close long-run relationship between apartment prices and loans. The model with fundamentals substantiates the significant positive effect of the price-to-rent ratio on housing stock growth and confirms the effect of accession to the European Union on the rise in housing prices. Within the DSGE model, it is shown that increases in foreign interest rates and the country risk premium reduce household consumption, real estate demand, credit, real estate prices, and non-financial investment. It is also shown that firms' debt, consumption, and real estate demand fall more sharply in response to rising foreign interest rates and real estate prices when firms' loan-to-value ratio is higher. The thesis substantiates the relevance of the validated models for the Bank of Latvia in shaping monetary policy and studying the monetary transmission mechanism; for the government in assessing the impact of monetary, country risk premium, real estate price, terms-of-trade, and productivity shocks on macroeconomic indicators; for commercial banks in developing lending policy; and for the Financial and Capital Market Commission in defining criteria for banks' lending conditions.
Proposals are put forward for improving the DSGE model so that it can be applied to forecasting and to studying the effects of fiscal and monetary policy. The main text of the doctoral thesis comprises 170 pages and is illustrated with 48 figures, 16 tables, and 7 appendices. ; Abstract The aim of the doctoral thesis is to study the role of real estate as an asset in the transmission mechanism of Latvia's monetary policy and, by means of econometric models, to estimate the impact of financial frictions, real estate prices, and the country risk premium on Latvia's macroeconomic indicators. The first chapter covers the theoretical basis of the interaction between real estate and economic growth; the role of housing as an asset in monetary transmission is studied, and the monetary policy responses of various central banks to housing price developments are analysed. The second chapter deals with developments in Latvia's market for standard-type apartments alongside lending and the capacity to pay. Using a logit model, the key factors that affected decisions to take mortgage loans during the period of rapid mortgage lending growth are identified, and the solvent demand for housing purchases is estimated. In the third chapter, a vector autoregression (VAR) model and an unrestricted vector error correction (VEC) model for Latvia are estimated, the long-term relationship between housing prices and loans is obtained, and the deviations of housing prices from that relationship and the speed of their adjustment are determined. To identify the determinants of housing stock growth in Latvia, a housing investment model with fundamentals is estimated. The fundamental factors included in the model are demographic structure, the housing price-to-rent ratio, household consumption, the labour force participation rate, and the cost of capital.
In the fourth chapter, a dynamic stochastic general equilibrium (DSGE) model for Latvia with real estate prices, a country risk premium, and financial frictions is developed and estimated with a Bayesian approach. Within the framework of the estimated model, the impulse responses of the endogenous variables to exogenous shocks – foreign monetary, country risk premium, credit, real estate price, productivity, and terms-of-trade shocks – are simulated, and the role of financial frictions and the country risk premium in the asset channel of Latvia's monetary transmission is evaluated. Within the empirical analysis performed in the doctoral thesis, the results of the logit model imply that the decision to take a mortgage loan depends on whether a person is employed, and on the living area and income per household member. Furthermore, living area per household member affects the decision to take a loan only provided that the person has sufficient income. The VAR model results provide evidence that housing price growth was largely driven by credit growth, whereas the VEC model confirms the long-run relationship between housing prices and loans. The housing investment model with fundamentals leads to the conclusion that the price-to-rent ratio determined the increase in the housing stock, whereas housing prices were significantly driven by accession to the European Union. Within the framework of the DSGE model, the effect of rising foreign interest rates and country risk premium in decreasing household consumption, credit, real estate demand and prices, and non-financial investment is demonstrated. The model also confirms a pronounced decrease in firms' debt, consumption, and demand for real estate in response to rising foreign interest rates and housing prices under a higher loan-to-value ratio for firms.
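The logit link behind these mortgage-decision results can be sketched as follows. Only the functional form is the standard logit; the coefficient values, variable scaling, and helper name are invented for the illustration:

```python
import math

def logit_probability(coeffs, intercept, features):
    """P(take a mortgage) = 1 / (1 + exp(-(b0 + b.x))), the standard logit link."""
    score = intercept + sum(b * x for b, x in zip(coeffs, features))
    return 1 / (1 + math.exp(-score))

# Hypothetical coefficients for: employed (0/1), living area per household
# member (m2), income per member (hundreds of Ls). Values are illustrative only.
b = [1.2, -0.04, 0.5]
p_employed = logit_probability(b, -1.0, [1, 20, 4])
p_unemployed = logit_probability(b, -1.0, [0, 20, 4])
```

In an actual estimation the coefficients would be fitted by maximum likelihood on survey data; the sketch only shows how employment status shifts the predicted probability while the other characteristics are held fixed.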
The models tested in the doctoral thesis are important tools for the Bank of Latvia in monetary policy formation and in the analysis of the monetary transmission mechanism; they are also valuable for the government in estimating the effect of monetary, country risk premium, real estate price, terms-of-trade, and productivity shocks on macroeconomic variables; for commercial banks in developing credit policy; and for the Financial and Capital Market Commission in defining criteria for loans issued by the commercial banks. Finally, suggestions are made for further development of the DSGE model so that it can be applied to forecasting and to the study of fiscal and monetary policy effects.
The aim of this work was to develop a method for calculating the spatial distribution of air pollutants. Atmospheric dispersion models simulate the distribution of gases or particles using mathematically formulated mechanisms; measurements of air pollutants are used only to validate the calculated distribution or to develop the model. Yet using many well-distributed measurements as model input is the only way to take the actually existing distribution of the component of interest into account. With a network of passive samplers such a distribution can be surveyed inexpensively and simultaneously. Within the UNOPS project "Preliminary Assessment of Ambient Air Quality in Cyprus", NO2 passive samplers were exposed at 270 points in six measurement campaigns. This produced a good database for developing a method that takes the actually measured distribution of air pollutants into account. In addition, a realistic model should include the most important influences, such as population density or the distribution of emissions. At present three approaches meet these requirements: regression models, interpolation, and artificial neural networks. Artificial neural networks are currently the only way to build a model that satisfies all of the criteria above. The network type most widely used today in air quality modelling is the multilayer perceptron, which was also employed in this work. In training mode a supervised learning algorithm was used, in which the user trains the network with inputs and known outputs, i.e. air pollution measurements. The task of the neural network is to learn the mostly non-linear relationships between input and output.
For this purpose an analysis grid with a cell size of 1×1 km was laid over the study area of Cyprus, and each grid cell was assigned the corresponding input variables, such as the UTM coordinates. For the grid cells containing passive samplers, the relationships between input and output could be determined and then transferred to all other cells. During development, all available variables were tested in univariate and multivariate models. With the UTM coordinates as input, a neural interpolation was obtained first; the result was a simplified interpolation map with NO2 concentrations of 30 to 40 μg/m³ in the cities and lower concentrations in rural areas. Notable here are unrealistic straight stripes of NO2 concentrations around 20 μg/m³ connecting the cities of Nicosia, Limassol, and Larnaka. Next, a digital elevation model was fitted to the analysis grid and the network was trained with the resulting altitude values; here too, physically impossible values occurred, such as high NO2 concentrations in river valleys. A very important influence on the distribution of air pollutants is the atmospheric dispersion conditions. Since all meteorological parameters vary constantly, it is impossible to calculate a mean wind field. Wind statistics would be a possible solution but cannot be used directly as input, because in the case of wind direction they form a distribution over several values. To take the dispersion conditions into account nonetheless, exhaust plumes of the most important emission sources in Cyprus were calculated. Methodologically, the Gaussian model P&K 3782 and statistical procedures such as regression analysis were used. The results were distributed over the analysis grid with a new computational approach, in the course of which the concept of "distributed emissions" was developed.
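The plume calculations rest on the classical Gaussian plume equation that models of this family implement. The formula below is the textbook ground-reflecting form, not the specific P&K 3782 implementation, and the source parameters and dispersion coefficients are illustrative values:

```python
import math

def plume_concentration(q, u, y, z, h, sigma_y, sigma_z):
    """Ground-reflecting Gaussian plume: concentration (g/m^3) at crosswind
    offset y and height z, for a point source.
    q: emission rate (g/s), u: wind speed (m/s), h: effective stack height (m),
    sigma_y, sigma_z: dispersion parameters (m) at the downwind distance of interest.
    """
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # mirror term: ground reflection
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Illustrative case: 100 g/s source, 5 m/s wind, receptor on the plume axis
# at ground level, sigma values typical of roughly 1 km downwind.
c = plume_concentration(q=100, u=5, y=0, z=0, h=50, sigma_y=80, sigma_z=40)
```

In practice sigma_y and sigma_z grow with downwind distance according to the stability class, so evaluating this expression over a grid of receptors, weighted by the annual wind statistics, yields the long-term plume footprint of a source.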
With this input for the neural network, a very accurate NO2 concentration map could already be calculated, on which the locations of the sources and their emission strengths are well reproduced. After the univariate models described above, multivariate calculations were carried out in order to include the remaining influence parameters. All models with UTM coordinates and altitude values as input again produced unrealistic distribution patterns. The best result was achieved with a neural network trained on the "distributed emissions" and the population density: this configuration preserved the positive aspects of the univariate approaches and additionally made further sources, such as villages, clearly visible. The result of these calculations is a realistic, finely structured pollution map of Cyprus. ; The objective of this work was to develop a method to calculate realistic air-pollutant distribution maps. Most common state-of-the-art dispersion models are limited in their use of air-pollutant measurement data: such data are used only to validate the results, a fact that does not diminish the importance of this model type, which is still the first choice for many questions. Yet using many well-distributed measurements as model input is the only way to provide a direct link to the actual air pollution distribution, so the first requirement for the model was the ability to incorporate such data in its algorithm. Diffusive samplers are ideally suited to assessing the spatial distribution of air pollutants, since they are the only economical way to carry out simultaneous measurements at many places. During the UNOPS project "Preliminary Assessment of Ambient Air Quality in Cyprus", NO2 diffusive samplers were exposed at 270 sites in six campaigns over one year.
A second requirement for the model was the ability to take into account the most important influencing parameters, such as the emissions or the population density; most available methods are limited in this respect. Three methods were found to fulfil these requirements in principle: regression models, interpolation algorithms, and artificial neural networks. Regression analysis was excluded first, since it is too rigid, too limited for reproducing non-linear correlations, and based on too many theoretical statistical assumptions that are hard to satisfy. Interpolation is a well-developed standard method that is also recommended by the European Union, so the possibilities of this approach were investigated. Simple interpolation failed to reproduce a realistic distribution of air pollutants, even though exact interpolation algorithms preserve the measured values: the resulting maps depend strongly on the locations of the measurement sites, and sources in between are neglected. In air pollution modelling and comparable fields of neural network application, it is widely accepted that multilayer perceptrons with backpropagation deliver the best results. In training mode, supervised learning was applied, in which the user provides the input data together with a set of known outputs; the task of the network is to learn the correlation between inputs and outputs. For this purpose, an analysis grid of 1×1 km was laid over the research area of Cyprus, and the most important influencing variables were intersected with this grid. For all grid cells with diffusive samplers, the correlations of the annual NO2 concentration with the input variables were calculated. First, a neural interpolation of the diffusive sampler measurements was established using the UTM coordinates as input. The result was a simplified interpolation map with NO2 concentrations of around 30 to 40 μg/m³ in the cities and lower concentrations in rural areas.
Remarkable here are unrealistic straight stripes of medium NO2 concentrations of about 20 μg/m³ connecting the cities of Nicosia, Limassol, and Larnaka, where the highest concentrations were measured. A very important influence on the distribution of air pollutants is the dispersion conditions. The only realisable way to include meteorological data was to consider case studies for the major emission sources in Cyprus, so the dispersion plumes for the most important sources (the largest cities, the power plants, one cement factory, and the highways) were calculated using annual wind statistics. The methods applied for this purpose were the Gaussian dispersion model P&K 3782 and statistical evaluations including regression analysis. Finally, the plumes were transferred into so-called distributed emissions. Trained with this input data, the network reproduced the actual location and strength of all the sources used as input. After the univariate models, multivariate calculations were carried out to depict the complex correlations that influence the distribution of air pollutants. All models with UTM coordinates again created unrealistically large and regular plumes coming from the south-eastern corner of the map, which led to their exclusion; the same is true for the altitude. For both of these maps, adding the population density had no positive effect. Finally, the best result was established with a model that uses the enhanced emissions inventory and the population density as inputs. This configuration preserved the positive aspects of the univariate model with emissions and added further sources such as villages, which were omitted in the emissions inventory. The result was a realistic, finely structured map.
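The neural interpolation step described above (coordinates in, NO2 concentration out) can be sketched with a tiny multilayer perceptron trained by backpropagation. The network size, learning rate, and sample points below are illustrative only, not the study's configuration:

```python
import math
import random

def train_mlp(samples, hidden=4, lr=0.05, epochs=2000, seed=1):
    """Train a 2-input, 1-output MLP with one tanh hidden layer by backpropagation.
    samples: list of ((x, y), target) pairs, already scaled to roughly [-1, 1]."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0

    def forward(inp):
        h = [math.tanh(ws[0] * inp[0] + ws[1] * inp[1] + b)
             for ws, b in zip(w1, b1)]
        return h, sum(w * v for w, v in zip(w2, h)) + b2

    for _ in range(epochs):
        for inp, target in samples:
            h, out = forward(inp)
            err = out - target  # gradient of squared loss w.r.t. the output
            for j in range(hidden):
                delta = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
                w1[j][0] -= lr * delta * inp[0]
                w1[j][1] -= lr * delta * inp[1]
                b1[j] -= lr * delta
                w2[j] -= lr * err * h[j]
            b2 -= lr * err
    return lambda inp: forward(inp)[1]

# Illustrative "passive sampler" points: scaled coordinates -> scaled NO2 value.
points = [((-0.8, -0.5), 0.9), ((0.7, 0.6), 0.8), ((0.0, 0.9), 0.2),
          ((-0.3, 0.4), 0.3), ((0.5, -0.7), 0.1)]
predict = train_mlp(points)
```

Once trained on the sampler cells, the returned `predict` function is evaluated for every remaining grid cell, which is what turns the point measurements into a full map.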
In: The Economic History Review, vol. 39, no. 2, pp. 295-340
ISSN: 1468-0289
Books reviewed in this article:
GREAT BRITAIN
Henry Cleere and David Crossley (Eds.). The Iron Industry of the Weald. (Leicester: Leicester University Press, 1985. Pp. xvi + 395. 74 figs. £47.50.)
C. G. A. Clay. Economic Expansion and Social Change: England 1500-1700. (Cambridge, 1984. Vol. I, pp. xiv + 268, 6 tables, 4 figs, 6 maps; Vol. II, pp. xii + 324, 15 tables, 4 figs, 5 maps. £20 each vol.; paperbacks £6.95 each vol.)
Bernard Rudden. The New River: A Legal History. (Oxford: Clarendon Press, 1985. Pp. xiv + 335. 2 plates. 18 figs. 9 appendices. £20.00.)
Norma Landau. The Justices of the Peace, 1679-1760. (Berkeley and Los Angeles, California, and London: University of California Press, 1984. Pp. xv + 422. £35.40.)
Robin D. Gwynn. Huguenot Heritage: The History and Contribution of the Huguenots in Britain. (London: Routledge and Kegan Paul, 1985. Pp. xii + 220. 8 plates. 4 tables. 10 maps. £15.95.)
David Ormrod. English Grain Exports and the Structure of Agrarian Capitalism, 1700-1760. (Hull: Hull University Press, 1985. Pp. xii + 145. £8.95.)
N. F. R. Crafts. British Economic Growth during the Industrial Revolution. (Oxford: Clarendon Press, 1985. Pp. 193. £19.50.)
Maxine Berg. The Age of Manufactures: Industry, Innovation, and Work in Britain.
M. E. Rose (Ed.). The Poor and the City: The English Poor Law in its Urban Context, 1834-1914. (Leicester: Leicester University Press, 1985. Pp. xi + 175. 1 fig. 5 tables. £22.00.)
Neville Kirk. The Growth of Working Class Reformism in Mid-Victorian England. (Beckenham: Croom Helm, 1985. Pp. xiii + 369. £22.50.)
Derek J. Oddy and Derek S. Miller (Eds.). Diet and Health in Modern Britain. (London: Croom Helm, 1985. Pp. vi + 326. £22.00.)
Roger Middleton. Towards the Managed Economy: Keynes, the Treasury and the Fiscal Policy Debate of the 1930s. (London: Methuen, 1985. Pp. xii + 244. £25.00.)
GENERAL
Peter Clark (Ed.). The European Crisis of the 1590s. (London: George Allen & Unwin, 1985. Pp. xiv + 324. £25.00.)
L. A. Clarkson. Proto-Industrialization: The First Phase of Industrialization? (London: Macmillan, Studies in Economic and Social History, 1985. Pp. 71. £3.95.)
Tommy Bengtsson, Gunnar Fridlizius and Rolf Ohlsson (Eds.). Pre-Industrial Population Change: The Mortality Decline and Short-Term Population Movements. (Stockholm: Almquist and Wiksell, 1984. Pp. 419. S.Kr. 280.)
Richard Smith (Ed.). Land, Kinship, and Life-cycle. (Cambridge: Cambridge University Press, 1984. Pp. xiii + 547. £40.00.)
David Levine (Ed.). Proletarianization and Family History. (London: Academic Press, 1984. Pp. xii + 315. £38.50.)
Robert R. Locke. The End of Practical Man: Entrepreneurship and Higher Education in Germany, France, and Great Britain, 1880-1940. (Greenwich, Connecticut and London: JAI Press, 1984. Pp. xii + 363. 31 tables. £46.50.)
Harold James. The Reichsbank and Public Finance in Germany, 1924-1933: A Study of the Politics of Economics during the Great Depression. (Frankfurt am Main: Fritz Knapp Verlag, 1985. Pp. 426. n.p.)
Joseph Harrison. The Spanish Economy in the Twentieth Century. (London: Croom Helm, 1985. Pp. 207. 27 tables. £19.95.)
Ivan T. Berend and Gyorgy Ranki. The Hungarian Economy in the Twentieth Century. (London: Croom Helm, 1985. Pp. 316. £25.)
Hugh Brogan. Longman History of the United States of America. (London: Longman, 1985. Pp. viii + 470. £19.95.)
C. Cochran. Challenge of American Values: Society, Business, and Religion. (New York: Oxford University Press, 1985. Pp. 147. $13.95.)
Hasia R. Diner. Erin's Daughters in America: Irish Immigrant Women in the Nineteenth Century. (Baltimore and London: The Johns Hopkins University Press, 1983. Pp. xvi + 192. $20.00; paperback $9.95.)
Humbert S. Nelli. From Immigrants to Ethnics: The Italian Americans. (Oxford: Oxford University Press, 1983. Pp. viii + 225. £19.50.)
Martin V. Melosi. Coping with Abundance: Energy and Environment in Industrial America. (New York: Alfred A. Knopf, 1985. Pp. xii + 355. $10.95.)
Carole Haber. Beyond Sixty-Five: The Dilemma of Old Age in America's Past. (Cambridge: Cambridge University Press, 1985. Pp. ix + 181. Paperback £6.95.)
Ruth Schwartz Cowan. More Work for Mother: The Ironies of Household Technology from the Open Hearth to the Microwave. (New York and London: Harper & Row, 1983. Pp. xiv + 257. 46 plates. £19.75; paperback £8.75.)
Christopher L. Tomlins. The State and the Unions: Labor Relations, Law, and the Organized Labor Movement in America, 1880-1960. (Cambridge: Cambridge University Press, 1985. Pp. xvi + 348. £30; paperback £10.95.)
Gary A. Puckrein. Little England: Plantation Society and Anglo-Barbadian Politics, 1627-1700. (New York and London: New York University Press, 1984. Pp. xxiv + 235. $51.50.)
Kenneth F. Kiple. The Caribbean Slave: A Biological History. (Cambridge: Cambridge University Press, 1984. Pp. xiii + 274. £27.50.)
Linda Alexander Rodriguez. The Search for Public Policy: Regional Politics and Government Finances in Ecuador, 1830-1940. (Berkeley and London: University of California Press, 1985. Pp. xv + 281. $32.50.)
K. N. Chaudhuri. Trade and Civilisation in the Indian Ocean: An Economic History from the Rise of Islam to 1750. (Cambridge: Cambridge University Press, 1985. Pp. xiv + 269. 18 maps. 23 plates. £25; paperback £8.95.)
Christopher John Baker. An Indian Rural Economy, 1880-1955: The Tamilnad Countryside. (Oxford: The Clarendon Press, 1984. Pp. xvii + 616. 9 maps. 30 tables. 16 graphs. £28.00.)
David K. Wyatt. Thailand: A Short History. (New Haven and London: Yale University Press, 1984. Pp. xviii + 351. £27.50.)
Joseph Needham (Ed.). Science and Civilisation in China, Volume VI, Biology and Biological Technology; Part II: Agriculture, by Francesca Bray. (Cambridge: Cambridge University Press, 1984. Pp. xxvii + 724. £50.)
William D. Wray. Mitsubishi and the N.Y.K., 1870-1914: Business Strategy in the Japanese Shipping Industry. (Cambridge, Mass.: Harvard University Press (Harvard East Asian Monographs No. 108), 1984. Pp. xx + 672. £21.95.)
Raymond W. Goldsmith. The Financial Development of Japan, 1868-1977. (New Haven and London: Yale University Press, 1983. Pp. xv + 231. £27.00.)
Raymond W. Goldsmith. The Financial Development of India, Japan, and the United States: A Trilateral Institutional, Statistical, and Analytic Comparison. (New Haven and London: Yale University Press, 1983. Pp. xiv + 120. £10.95.)
Tsunehiko Yui and Keiichiro Nakagawa (Eds.). Business History of Shipping: The International Conference on Business History 11: Proceedings of the Fuji Conference. (Tokyo: University of Tokyo Press, 1985. Pp. xxix + 330. £60.00.)
David Mackay. In the Wake of Cook: Exploration, Science & Empire, 1780-1801. (London: Croom Helm, 1985. Pp. vi + 216. £18.95.)
D. C. M. Platt and Guido Di Tella (Eds.). Argentina, Australia, and Canada: Studies in Comparative Development, 1870-1965. (London: Macmillan, 1985. Pp. xi + 237. £25.00.)
Charles Lipson. Standing Guard: Protecting Foreign Capital in the Nineteenth and Twentieth Centuries. (Berkeley, Los Angeles and London: University of California Press, 1985. Pp. xvii + 332. 13 tables, 3 figs. £33.25; paperback £11.25.)
Bill Albert and Adrian Graves (Eds.). Crisis and Change in the International Sugar Economy, 1860-1914. (Norwich and Edinburgh: ISC Press, 1984. Pp. xii + 381. £12.00.)
Michael D. Bordo and Anna J. Schwartz (Eds.). A Retrospective on the Classical Gold Standard, 1821-1931. (London: University of Chicago Press, 1984. Pp. xi + 681. £59.80.)
E. J. Hobsbawm. Worlds of Labour: Further Studies in the History of Labour. (London: Weidenfeld and Nicolson, 1984. Pp. x + 369. £15.95; paperback £7.95.)
One hundred per cent of the natural units of analysis will continue to be negatively affected, with a concomitant decrease in nature's contributions to people, given current trends (business as usual), though the magnitude and exact mechanism of the individual drivers will vary by driver and unit of analysis (established but incomplete) {5.4}. For example, tropical moist and dry forest and coastal mangroves will continue to exhibit a decline due to land use change regardless of the scenarios considered, but different local factors (agriculturalization and urbanization, respectively) will be involved (well established) {5.4.1, 5.4.11}. Additionally, some drivers will affect units of analysis differently. Empirical evidence indicates differential effects of climate change: boreal forest is extending northward {5.4.2}, while tundra is diminishing in land area (established but incomplete) {5.4.3}. Thus, some drivers, and their relative roles, will need to be further refined at a local scale and with respect to their proximate factors.
2. Multiple drivers will act in synergy and further produce biodiversity loss and impacts on nature's contributions to people in most of the units of analysis for the Americas (established but incomplete) {5.4}. Climate change, combined with other drivers, is predicted to account for an increasingly large proportion of biodiversity loss in the future, in both terrestrial and aquatic ecosystems {5.3}. Forest fragmentation, climate change and industrial development increase the risk of loss of biodiversity and nature's contributions to people, e.g. in the dry forest unit of analysis {5.4.1.2}. Predictions on invasive species and climate change indicate an increase in habitable areas and their potential impacts on different units of analysis {5.3}.
3. Changes in temperature, precipitation regime and extreme climate events are predicted to impact all units of analysis in the Americas (well established) {5.4}. Climate change may affect tropical dry forests by changing the frequency of wildfires; changes in forest structure and functional composition in the Amazon tropical moist forest; extreme drought events changing nature's contributions to people in the Amazon region; and insect outbreaks and changes in albedo are predicted to significantly impact the temperate, boreal and tundra units of analysis, affecting society, indigenous communities and well-being {5.4}.
4. Thresholds, or tipping points (conditions resulting in rapid and potentially irreversible changes), may already have been exceeded for some ecosystems and are likely for others (established but incomplete). For instance, it is considered more likely than not that such a threshold has already been passed in the cryosphere with respect to summer sea ice (established but incomplete) {5.4.12}. Model simulations indicate changes in forest structure and species distribution in the Amazon forest in response to global warming and changes in precipitation patterns (forest die-back) (established but incomplete) {5.4.1}. Likewise, a 4°C increase in global temperatures is predicted to cause widespread die-off of boreal forest due to greater susceptibility to disease {5.4.2}, and global temperature increases may already have started persistent thawing of the permafrost {5.4.3}. Under 4°C warming, widespread coral reef mortality is expected, with significant impacts on coral reef ecosystems {5.4.11}. Sea surface temperature increase will cause a reduction of the seagrass climatic niche: populations subject to sea surface temperature thresholds higher than the temperature ranges required by the species could become extinct by 2100, with a concomitant loss of ecosystem services.
5. Changes in nature and nature's contributions to people in most units of analysis are increasingly driven by causal interactions between distant places (i.e.
telecouplings) (well established) {5.6.3}; scenarios and models that incorporate telecouplings will therefore better inform future policy decisions. Nature and nature's contributions to people in telecoupled systems can be affected negatively or positively by distant causal interactions. Provision of food and medicine from wild organisms in the temperate and tropical grasslands, savannas and forests of South America is being dramatically reduced by land-use changes driven by the demand for agricultural commodities (e.g. soybeans), mainly from Europe and China. Conservation of insectivorous migratory bats in Mexico benefits pest control in agroecosystems of North America, resulting in increased yields and reduced pesticide costs. Trade policies and international agreements will thus have an increasingly strong effect on environmental outcomes in telecoupled systems.
6. Policy interventions have resulted in significant land use changes at the local and regional scales and will continue to do so through 2050. These policies have affected nature's contributions to people both positively and negatively, and provide an opportunity to manage trade-offs among nature's contributions to people (well established) {5.4}. Land use changes are now mainly driven by high crop demand, large hydropower plans and rapid urban growth, and result in a continued loss of grasslands {5.4.4, 5.4.5}. However, strategies for establishing conservation units helped reduce deforestation in the Brazilian Amazon over the period 2004-2011 (well established) {5.4.1}. Similarly, wetland protection policies and regulation have helped reduce the conversion of wetlands in North America {5.4.7}. Policies based on command-and-control measures may be limited in providing an effective reduction in ecosystem loss and should be complemented by policies acknowledging multiple values {5.6.3}.
7.
Policy interventions at vastly differing scales (from national to local) lead to successful outcomes in mitigating impacts on biodiversity (established but incomplete) {5.4}. For instance, long-established governmental protections of wetlands in North America have significantly slowed, and may have stopped, wetland loss measured by acreage {5.4.7}. In South America, where mangrove loss continues at a rate of one to two per cent, different stakeholders, such as local communities and/or governments, have been successful in protecting mangroves on the basis of empowerment and shared interests in their preservation {5.4.11}.
8. Pressures on nature are projected to increase by 2050, negatively affecting biodiversity, as indicated by a potential reduction of the mean species abundance index. However, the magnitude of the pressures by 2050 is expected to be smaller under transition pathways to sustainability than under the business as usual scenario (established but incomplete) {5.5}. The Global Biodiversity model projects that, under the business as usual scenario, mean species abundance in the Americas had already decreased by approximately 30 per cent by 2010 compared with its values prior to European settlement of the New World, with historical losses primarily attributed to land transformation to agricultural uses. Using the Global Biodiversity model, there is an additional projected loss of 9.6 per cent by 2050, attributed primarily to some additional land use changes and especially to climate change, which will steadily increase relative to the other drivers considered in the model. However, under the transition pathways to sustainability of the global technologies, decentralised solutions and consumption change pathways, the projected losses are 6 per cent, 5 per cent and 5 per cent, respectively, achieving a relative improvement of approximately 30 to 50 per cent compared with the business as usual scenario.
Under these pathways, climate change mitigation, the expansion of protected areas and the recovery of abandoned lands would significantly contribute to reducing biodiversity loss.
9. Participative scenarios have proven to be a successful tool for envisioning potential futures and pathways, for embracing and integrating multiple, and sometimes conflicting, values, and for promoting bottom-up decision making in the face of future uncertainties (well established) {5.3}. The use of participative approaches to develop scenarios has increased in recent years in the Americas. The inclusion of different stakeholders and their knowledges in the process of constructing potential futures has promoted a better understanding of the complexity of the social-ecological systems in which they are embedded. This has enhanced co-learning processes among all actors involved, even those normally under-represented in decision-making activities. As a result, several participative scenario exercises have motivated community-based solutions and local governance initiatives, all pointing towards the development of adaptive management strategies {5.3}.
10. Pathways that consider changes in societal options will lead to less pressure on nature (established but incomplete) {5.6.3}. An example is the indirect impact that shifts in urban dietary preferences have on agricultural production and expansion, and on food options, which are expected to continue growing into the future. There is therefore a strong connection not only between urbanization and economic growth, but also between affluence (and urban preferences) and the global displacement of land use, particularly from high-income to low-income countries.
11. Available local studies informing regional futures of nature and nature's benefits to people do not yet allow scalability (well established) {5.3}. The challenge in expanding the findings from local studies resides in the fact that a number of comparable local studies are still not available. Information is scattered throughout the region across different units, methods and scales, which prevents a local-to-regional generalization. The list of nature indicators used in studies at local scales is large and heterogeneous (well established). Even for the same indicator (e.g. biodiversity), different metrics are used (e.g. species-area curve, mean species abundance) {5.5}. In other cases, multiple indicators are used to describe different aspects of biodiversity and ecosystem services. In this latter case, synergies and trade-offs are explicitly mentioned, with a clear pattern in which increasing the provision of some indicators results in the detriment of others {5.3}. For example, agricultural expansion leading to loss of biodiversity illustrates a common trend from local studies that is expected to continue into the future.
12. There is a significant research gap in the development of models and scenarios that integrate drivers, nature, nature's contributions to people and good quality of life (well established) {5.3}. Models and scenarios can be powerful tools to integrate and synthesize the complex dynamics of coupled human and natural systems, and to project their plausible behaviors into the future. Most existing models and scenarios focus on the link between drivers and their impacts on nature. Few cases exist in which models or scenarios integrate the relationships between changes in nature and changes in nature's contributions to people and good quality of life {5.3}. Inter- and trans-disciplinary modeling efforts will be required to address this research gap {5.3}.
Fil: Klatt, Brian. Michigan State University; United States.
Fil: Ometto, Jean Pierre. National Institute for Space Research; Brazil.
Fil: García Marquez, Jaime. Universität zu Berlin; Germany.
Fil: Baptiste, María Piedad. Instituto Alexander von Humboldt; Colombia.
Fil: Instituto Alexander von Humboldt. Independent Consultant; Canada.
Fil: Acebey, Sandra Verónica. Not specified.
Fil: Guezala, María Claudia. Inter-American Institute for Global Change Research; Peru.
Fil: Mastrangelo, Matias Enrique. Consejo Nacional de Investigaciones Científicas y Técnicas; Argentina.
Fil: Pengue, Walter Alberto. Universidad Nacional de General Sarmiento; Argentina.
Fil: Blanco, Mariela Verónica. Consejo Nacional de Investigaciones Científicas y Técnicas. Oficina de Coordinación Administrativa Saavedra 15. Centro de Estudios e Investigaciones Laborales; Argentina.
Fil: Gadda, Tatiana. Universidade Tecnológica Federal do Paraná; Brazil.
Fil: Ramírez, Wilson. Instituto Alexander von Humboldt; Colombia.
Fil: Agard, John. University of the West Indies; Trinidad and Tobago.
Fil: Valle, Mireia. Universidad Laica Eloy Alfaro de Manabí; Ecuador.
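The relative-improvement figures quoted in finding 8 above follow directly from the projected mean-species-abundance losses. A minimal arithmetic sketch (pathway names are shortened labels, not the report's exact wording):

```python
# Projected additional mean-species-abundance (MSA) loss by 2050, per cent,
# as given in finding 8: business as usual vs. three sustainability pathways.
bau_loss = 9.6
pathway_losses = {
    "global technologies": 6.0,
    "decentralised solutions": 5.0,
    "consumption change": 5.0,
}

# Relative improvement of each pathway over business as usual, in per cent.
improvement = {name: 100.0 * (bau_loss - loss) / bau_loss
               for name, loss in pathway_losses.items()}
```

The computed improvements fall between roughly 37 and 48 per cent, consistent with the "approximately 30 to 50 per cent" stated in the finding.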
My PhD thesis focuses on the role urban planning can play in the management of diversity in the city. In European countries, migration policies have failed to create a common and shared space for dealing with the numerous questions raised by the so-called "migration crisis", creating informal, legal, passive or active boundaries for foreigners. Yet even where government is absent, we can observe informal practices put in place by migrants to answer their immediate needs, practices that can be used as a magnifying glass for more deeply rooted socio-political needs. I argue that these dynamics can be observed in particular spaces such as the city's multiethnic markets, where informal practices can be read as clues suggesting new policies. Urban planning has a crucial role to play in these policies: fostering new forms of 'living the city' and a renewed form of urban citizenship can represent a pivotal point in committing urban planning to the quest for spatial and social justice (Sandercock, 1996). Notions of formal citizenship have changed within the context of contemporary massive urban migration (Penninx et al., 2004). The search for new models of integration has been the priority of states concerned both with the problem of a second generation without legal rights and with new incoming groups. Better integration of migrant groups and guarantees of security have been the leitmotiv of European political discourse in this search. However, the gap between policies defined at the national level and the policies implemented on the ground has widened, because local experiences do not resonate with the abstract frame of national policy directives. This gap leads formal and informal practices of citizenship to grow further apart. Usually these conflicts show themselves in sensitive urban spaces such as squares, ethnic streets or suburbs.
However, before their final outbreak, these conflicts are informal and can be observed in shared, everyday urban spaces such as streets, squares and markets. I suggest that these practices are insurgent (Holston, 1996), meaning that they are a direct response to a gap in the policies managing the multicultural question in the market and, consequently, in the city. These practices expose the weakness or failure of policies whenever national or local governments have decided to rationalize, restrict or underestimate the relevant role of space in integration policies. Indeed, the neglected part of these policies is the power of diversion (Olivier de Sardan, 2008) that those subjected to certain policies exert as a strategic option. This means that people can sometimes act in open contrast with policies that seemingly stimulate integration while in fact increasing cultural, economic or political segregation. These dynamics reveal several problems entangled in national policies, which can be listed as follows:
– a misunderstanding of the importance of cultural differences, and of the fact that they are not fixed in time or space but adjustable, flexible and dynamic;
– a misinterpretation of the practices immigrants bring with them, which may be seen as illegal or may contrast with the customary and formal understanding of a space;
– unawareness or underestimation of asymmetrical local power relations and their consequences for everyday life and for the relation with the policies themselves.
This includes overcoming the concept of a single public interest in favour of a wider view of several public interests, sometimes struggling against one another;
– the fear of more forward-looking policies, which require an overturning of the present fixed idea of national identity;
– underestimation of the potential of multicultural planning for improving public, multicultural and accessible space in the city.
I conclude the dissertation with a deep rethinking of the notion of citizenship, which has to take into account the needs behind everyday informal practices as a core objective of more encompassing integration policies. I push forward the notion of metrozenship coined by Oren Yiftachel (Yiftachel, 2014), using it as a tool for a new social impact assessment aimed at foreseeing the possible consequences of a policy before its implementation. The very last part of the dissertation is dedicated to a theoretical analysis of the market space as a heterotopia (Foucault, 1974). I argue that this comparison can open and widen the possibility for urban planning to think about such spaces and to use them as microcosms in which to experiment with new forms of living with diversity.
While Western legal philosophy has been extensively concerned with the justificatory foundations of intellectual property rights, it has rarely examined those of their conceptual antagonist, the public domain. The scant interest shown in the scholarly literature is plausibly explained by the widespread conception that identifies the public domain with the opposite and contrary concept of "property", ultimately falling into the equation that regards as a resource worth valuing, attending to and protecting only what is appropriable, and treats what is not as waste devoid of intrinsic interest. Today this view has been replaced by another, which sees the public domain no longer as res nullius but as res publici iuris, that is, collective property, common to all. To recognize that a common proprietary interest attaches to the public domain is to say that every member of the community has an interest in claiming its common ownership, that is, in opposing attempts at its private appropriation. In trademark law this interest stands against the misappropriation of distinctive signs that must be regarded as common heritage, removed from any exclusive right and freely available to the community. The introductory chapter addresses the definitional problems of the public domain with reference to the main industrial property rights, and then focuses in particular on its relationship with trademark law. It reviews the main international initiatives undertaken to study the interactions between trademark law and the public domain, and concludes by identifying the interests tied to safeguarding a rich and accessible public domain, and the threats to that interest found in the tendency toward the expansion and accumulation of protections.
The potentially unlimited duration of the right and the revocability of a sign's public-domain status distinguish trademark law from the other industrial property rights: its structure is not, in itself, favourable or predisposed to safeguarding the public domain. This structural limit is, however, tempered by the legislature through limits on the acquisition of the right and on its scope once acquired. Chapters II and V deal with the mechanisms that trademark law provides to guarantee areas of public domain on which market operators can freely draw without thereby interfering with the exclusive rights of trademark owners. Chapter II, in particular, deals with the mechanisms safeguarding the public domain constituted by signs excluded from registration. The individual grounds for refusal of registration are examined, highlighting their weak excluding capacity, also in light of the tendency to extend the subject matter of trademark protection, illustrated through a survey of the main non-conventional trademarks granted protection over the years. Among these safeguarding mechanisms, central space is devoted to the principle of the imperative of availability of distinctive signs. This doctrine was developed by German case law under the name "Freihaltebedürfnis" (literally, "the need to keep free") and adopted by Anglo-Saxon scholarship as the "right to keep free"; at least in its original formulation, it holds that the registration of a trademark should be subject to a prior assessment of whether the sign for which protection is sought ought to remain in the public domain, that is, freely appropriable by the community. Today the role of this principle within Community trademark law is more than ever in doubt.
Chapter III traces the path of Community case law on the question of the recognition and relevance of the imperative of availability in registration proceedings. The analysis will show how the Court has reached different conclusions depending on the particular ground for refusal under interpretation, with results regarded as unreasonable by a large part of the scholarship, and with far from trivial difficulties and uncertainties of application for registration offices. Nevertheless, an interpretative line common to most of the decisions analysed will be shown to emerge, one tending to recognise an effective role for the imperative of availability in registration proceedings, albeit only as instrumental to the assessment of a sign's distinctiveness. Chapter IV shows how the recent reform project for the Community Directive and Regulation failed to seize the opportunity to codify this principle, remaining unresponsive to the original proposal formulated in the study by the Max Planck Institut of Munich to insert an express recognition of its operation into registration proceedings. Having concluded the first part of the work, devoted to the interactions between the principle of availability of distinctive signs and trademark registration, chapter V enters the less-travelled terrain of the effects that the same principle has on infringement proceedings. 
After analysing the various mechanisms provided by the legislator to safeguard the public domain constituted by the free uses of a registered sign, an account will be given of their weak excluding capacity and of the consequent threats that the absolute protection provided for cases of double-identity infringement, and the enhanced protection of marks with a reputation, pose to the safeguarding of the public domain, especially with regard to the numerous cases in which another's trademark is used for "atypical" purposes, that is, purposes not clearly distinctive of the right holder's business or of its goods and services. With reference to these, the interpreter has the arduous task of determining, case by case, whether greater protection is deserved by the trademark owner, in his interest in excluding third parties from using his sign, or by those third parties, in their antagonist interest in using the sign for descriptive, expressive, decorative and other purposes. Case law has offered no unambiguous interpretation of this balancing exercise, struggling to draw the dividing line between lawful and unlawful uses of another's trademark. Many of these cases have thus been resolved by the courts by resorting, in order to affirm their lawfulness, to a principle of "necessity of use" that brings into infringement proceedings the same interests of free availability present at the registration stage. Even within infringement proceedings, however, this interest remains essentially a mysterious object for the Court of Justice, which remains anchored to the contradiction that sees it confined to operating as a general interpretative principle of the legislation, devoid, however, of any concrete and effective implication in registration and infringement proceedings. 
In conclusion, the need to resolve this paradox is argued, and the Max Planck proposal is identified as an inexplicably missed opportunity to do so. ; Among the 45 Recommendations adopted under the WIPO Development Agenda, two indicate the preservation of the public domain as a key task for firms, individuals and Member States. This study explores the notion of "public domain" in relation to trademark law, with particular reference to the challenging issue of how to safeguard it, avoiding the misappropriation of signs which should remain usable by the public. Some studies have shown that the legal instruments provided by trademark law to keep signs and certain forms of use free risk not being appropriate counterbalances to prevent the misappropriation of the public domain. A general exclusion from registration does not exist for many signs which are part of a communal heritage, and even if a refusal of registration may be grounded on lack of distinctiveness, this requirement may still be overcome by showing that the sign has acquired a "secondary meaning". Furthermore, a look into the registers reveals that trademark rights are often used as a vehicle to extend prior patent, design or copyright protection, raising serious public domain concerns. Finally, the space of the public domain is endangered by the expanded protection of new types of marks and by enhanced anti-dilution protection, which gives the registered trademark owner more general control over his sign, making it unavailable for socially and culturally valuable uses such as news reporting, criticism, review and parody. German case law was the first to address the safeguarding of this room for free signs and uses, suggesting that trademark registration should be subject to a prior assessment of whether a sign ought to remain publicly available (Freihaltebedürfnis). 
This interest arises from the observation of the negative impact that granting rights in certain types of trademark may have on market competition, and it has led most countries to refrain from recognising trademark rights in descriptive and generic signs and functional shapes. Otherwise, by appropriating such signs, right holders could acquire strategic competitive advantages over competitors, whose marketing and communication strategies, deprived of the opportunity to use those signs, would be much less effective than the trademark holder's. This advantage has nothing to do with the essential function of the trade mark as a guarantee of an indication of origin, and is therefore not justified in the light of the objectives underlying trade mark law. The ECJ, asked in preliminary rulings whether this "need to keep free" should play any role in European trademark law, has answered contradictorily. Nevertheless, courts still rely on public policy concerns in order to preclude or limit trademark protection, such as the "color depletion" and "functionality" doctrines used by U.S. courts when assessing protection for color or shape marks. This work suggests that the public interest should still play a role as a key factor in the assessment of distinctive character, relevant both in registration and infringement proceedings, and shares the view that wording should be added to the Trademark Directive and Regulation providing that the assessment of distinctive character should take the "right to keep free" into account. This proposal is particularly timely in the light of the work in progress on reforming European trademark legislation, which appears to have ignored the problem of striking the proper balance between trademark rights and the public domain.
In: Rohaan, E.J. 2009, 'Testing teacher knowledge for technology teaching in primary schools', Doctor of Philosophy, Eindhoven School of Education, Eindhoven. https://doi.org/10.6100/IR653226
Today's pupils grow up in a world full of technology. Education's duty is to offer them the opportunity to develop the ability to use, manage, assess, and understand technology in order to 'survive' in today's technological society, and to provide them with a comprehensive and realistic concept of technology. Besides, stimulating pupils' natural curiosity for science and technology by offering science and technology education at school is expected to increase the number of students in the field of science and technology. In the Netherlands, as in other European countries, only a relatively small number of students choose to start a study and career in this field, which is worrying with regard to the pursued development towards a more knowledge-based and technology-intensive economy (expressed in the Lisbon Strategy of the European Council in the year 2000). Although new standards for science and technology education in primary schools have been developed and governmental programmes have been started in the Netherlands since the beginning of this century, science and technology education does not yet have a strong and established position in the curriculum of most primary schools and teacher training colleges. Teachers report being confused about the content and learning activities that belong to science and technology education. Moreover, insufficient expertise of teachers is often mentioned as a constraint on offering science and technology education more regularly. Clearly, primary school teachers need to be trained to improve their knowledge of science and technology teaching. Therefore, teacher training colleges for primary education need to know which knowledge domains to train in order to educate pre-service and in-service teachers effectively. This thesis is specifically focused on technology education, as part of the learning domain 'science and technology education', in the upper grades of primary schools. 
The general research aim of the presented studies is to investigate three domains of technology-specific teacher knowledge: (1) subject matter knowledge (smk), (2) pedagogical content knowledge (pck), and (3) attitude and self-efficacy (the affective domain). Moreover, the impact of these teacher knowledge domains on pupils' concept of and attitude towards technology is examined. Based on scientific literature, it is assumed that primary school teachers need to have sufficient smk and pck of technology, as well as a positive attitude towards technology and high self-efficacy in teaching technology, in order to stimulate their pupils' development of a realistic and comprehensive concept of technology and a positive attitude towards technology. Because teachers' pck is generally considered to be a central and vital domain of teacher knowledge, a major part of this thesis concerns the measurement and conceptualisation of teachers' pck of technology education. The research questions addressed in this thesis are as follows. 1. What teacher knowledge of technology do primary school teachers have and how are the different domains of teacher knowledge interrelated? 2. To what extent is teacher knowledge of technology related to pupils' concept of and attitude towards technology? 3. How to construct and validate a multiple choice test to measure primary school teachers' pck of technology education? 4. What latent factor structure underlies primary school teachers' pck of technology education? In chapter 2, entitled "Reviewing the relations between teachers' knowledge and pupils' attitude in the field of primary technology education", research questions 1 and 2 are addressed theoretically. Scientific literature on teacher knowledge for science and technology education is thoroughly reviewed with the help of a diagram that specifies the hypothetical relations between the three teacher knowledge domains (smk, pck, and attitude and self-efficacy). 
Teachers' smk is reported to be an influential knowledge aspect for technology teaching. It is generally assumed that smk is a prerequisite for the development of pck and it is said that smk is positively related to teachers' self-efficacy beliefs in teaching technology. Previous research empirically showed that pck of technology education is associated with enhanced learning of pupils in technology, with increased motivation, and with increased interest in technology. Concerning the relation between teachers' attitude and pupils' attitude, it is often intuitively expected that teachers' attitude affects pupils' attitude towards technology, though empirical evidence on this relation is not found. Furthermore, pupils' concept of technology is reported to be strongly related to their attitude towards technology. In chapter 3, entitled "Measuring teachers' pedagogical content knowledge in primary technology education", research question 3 is addressed. The construction and small-scale validation of a multiple choice test to measure primary school teachers' pck of technology education, the Teaching of Technology Test (ttt), is reported. The 'rational' method of test construction proved to be effective and the content validity of the test is approved. However, regarding the production of items, it turned out to be very hard to formulate best answers and plausible distracters. The experts who wrote the items particularly struggled with creating best answer alternatives that needed to reflect a proper blend of smk and pedagogical knowledge. In chapter 4, entitled "Conceptualising pedagogical content knowledge by analysing the latent factor structure of a multiple choice test", research questions 3 and 4 are addressed. In this chapter, a large scale validation of the ttt and an analysis of the latent factor structure of pck is reported. 
As was expected from literature on pck in science education, the test scores correlate significantly with the test scores on smk and self-efficacy. It is therefore concluded that the ttt is also valid in terms of convergent validity. Concerning the reliability of the test, internal consistency (Cronbach's alpha) is found to be low, but this can be theoretically explained by the heterogeneous nature of pck. Calculation of test-retest reliability shows that the ttt is satisfactorily consistent over time. Although it can be concluded that an important step forward has been made regarding the measurement of pck with a multiple choice test, the ttt is not a ready-to-use instrument yet. With regard to the latent factor structure of teachers' pck of technology, a factor structure of three independent knowledge components is confirmed by means of confirmatory factor analysis. The first factor, labelled knowledge of pupils' general concept and misconceptions related to technology, can be indicated as 'knowing how to adjust activities to pupils' experiences of the technological world around them and their (mis)conceptions of technological topics'. The second factor, knowledge of the nature and purpose of technology education, can be described as 'knowing the core characteristics of technology education, i.e., hands-on experiences and authentic problem solving'. The third factor, knowledge of pedagogical approaches and teaching strategies for technology education, can be characterised as 'the art of asking questions that encourage pupils to think critically about the technological problem encountered'. However, the factor structure turned out to be obscured by many other elements, which supports the idea that pck is a highly heterogeneous construct, consisting of multiple intrinsic elements that are difficult to unravel. 
In chapter 5, entitled "Analysing teacher knowledge of technology education and its effects on pupils' concept and attitude", research questions 1 and 2 are addressed empirically. From test scores on a content knowledge test and the ttt, it is concluded that primary school teachers have basic levels of smk and pck of technology. Besides, both teachers' self-efficacy in teaching technology and their attitude towards technology are found to be moderately positive. Based on path analysis, it is concluded that smk is an influential factor for pck as well as for self-efficacy beliefs in teaching technology. Subsequently, self-efficacy beliefs are shown to have a strong influence on teachers' attitude towards technology. Furthermore, it is found that all effects of teacher knowledge on pupils' concept and attitude are non-significant, though teachers' pck proved to be the most important of the three teacher knowledge domains in affecting pupils' concept. Through correlating test scores of individual pupils, it is shown that pupils' concept of technology is weakly, but significantly and positively, related to their attitude towards technology. Based on the findings in this thesis, it is recommended to focus on acquiring and developing smk as well as pck in primary school teacher training. Pre- and in-service teachers should be stimulated to acquire relevant smk and, above all, should be trained to use their smk to foster pupils' learning in technology, i.e., develop their pck. Besides, it would be helpful if the learning domain of science and technology education were defined in more detail. With clearly formulated key learning concepts, learning activities, and standards, teachers could focus more specifically on developing their smk and pck. However, it could be questioned whether it is realistic to presume that all primary school teachers have profound smk and pck of all subjects they teach. 
Educating more specialist teachers, instead of only generalists, could also be a way to increase smk and pck. Either way, it is highly recommended to ground regular teacher training and special professionalisation programmes in scientific research. But most importantly, science and technology education should first be given higher priority in primary teacher training and primary school curricula.
Access to finance is a key component of poverty reduction, as it enables individuals to make economic decisions that can improve their welfare. The equality of access among different groups in society is also crucial for correctly allocating the positive benefits of improved financial services. In Turkey, bank account, debit card, and credit card ownership, which can serve as the main indicators of access to finance, are at a remarkably high level. However, adjusting the coverage rate of these indicators by controlling for age, education, gender, and income reveals that gender is the main source of inequality in Turkey at the individual level. Despite the progress made in addressing the gender disparity in access to finance between 2011 and 2014, females in Turkey continue to be financially less included. Moreover, Turkey's low level of savings and high rate of informal borrowing compared with its peers diminish individuals' resilience to future shocks. Promisingly, Turkey has been able to improve its rate of savings significantly over the past few years, although it continues to be among the countries that save at a lower level.
Gabrielle Hecht on Nuclear Ontologies, De-provincializing the Cold War, and Postcolonial Technopolitics
This is the fourth in a series of Talks dedicated to the technopolitics of International Relations, linked to the forthcoming double volume 'The Global Politics of Science and Technology' edited by Maximilian Mayer, Mariana Carpes, and Ruth Knoblich
Nuclear power has formed a centerpiece of Cold-War IR theorizing. Yet besides the ways in which its destructive capacity invalidates or alters the way we should understand questions of war and peace, there are different powers at play in the roles the nuclear assumes in global politics. Through careful investigations of alternative sites and spaces of nuclear politics, Gabrielle Hecht has uncovered some of the unexpected ways in which what one can call the 'nuclear condition' affects politics across the globe. In this Talk, Hecht, among other things, explores what it means to 'be nuclear'; explains how we need to deprovincialize the Cold War to fully grasp its significance in global politics; and challenges us to explore technopolitics outside of the comfortable context of OECD-countries.
What is according to your view the most important challenge facing global politics and what is/should be the central debate in the discipline of International Relations (IR)?
I think one of the most important challenges in global politics is the question of planetary boundaries. In the 1970s the Club of Rome published the report 'The Limits to Growth', which addressed the finite quality of the planet's resources. It exposed the problems that the ideology (and practice) of endless economic growth posed for these limits. The question of climate change today really is all about planetary boundaries. We have already exceeded the CO2 level that is safe for the planet to sustain human life: We have just passed 400 parts per million; the desirable level is rated at 350 parts per million; the pre-industrial level of CO2 was 270 parts per million. So we have already produced more CO2 than is sustainable. And that is just one indicator. There are all kinds of other planetary boundaries at play—energy supply being the most salient one in terms of climate change. How can we even produce enough energy to maintain the lifestyles of the industrialized north? What about the requirements of the so-called 'rest'?
Obviously this is a huge issue and there are many parts to it. One part of this—the piece that I have studied the most—is nuclear power. Many people are enthusiastic about nuclear power as a solution to climate change. Some prominent environmentalists have been converted, because they believe nuclear power offers a way to produce a large amount of energy with a very small amount of matter, and because they see it as carbon free. (That's pretty clearly not the case, by the way, though nuclear power certainly produces less carbon than fossil fuels.) But are the human health and environmental costs worth the savings in carbon? Do the resources poured into nuclear power—some are predicting a thousand new reactors in the next few decades—take away resources from other forms of energy production, forms that could potentially address the emissions problems more rapidly and with lower costs for the environment and for human health? Moreover, nuclear power in any one location ends up becoming a global issue. So in that sense nuclear power in China, in India or in Japan is inherently a global problem. And the industry everywhere certainly needs global regulation—at the moment, there is none. The International Atomic Energy Agency is not a regulator. These are serious questions for international relations, and should be fodder for analysis.
One can obviously put this into perspective by comparing the death toll from nuclear power with that related to coal—would one then actually have to be against the use of coal? The numbers of coal-related deaths are astonishing. But the first, most obvious point to make is that being against coal doesn't require being in favor of nuclear power! It's also extremely important to realize that death and morbidity figures for nuclear power are highly contested. Take the figures concerning Chernobyl. The IAEA and WHO put Chernobyl deaths at 4,000. A study published by the National Cancer Institute in the United States puts the deaths at something like 43,000. A meta-analysis of 5,000 Slavic language scientific studies estimates the total number of Chernobyl deaths (some of which are yet to come) at 900,000. These discrepancies have a lot to do with controversies over the biological effects of low-level radiation, and also with the technopolitics of measurement and counting. Comparing the two energy technologies is much more complicated than merely counting coal deaths vs. nuclear power deaths.
How did you arrive where you currently are in your thinking about these issues?
Actually, the real question is how I came to study politics. I got my bachelor's degree in physics from MIT in the 1980s. The two biggest political issues on campus at that time were Ronald Reagan's Strategic Defense Initiative and Apartheid in South Africa (specifically, a move to divest American corporate interests in South Africa, the very corporations that were funding MIT research and for which MIT students would work when they graduated). I got interested in both, and along the way I came to realize that I was much more interested in the politics of science and technology than I was in actually doing physics. So I took some courses in the field of science and technology studies (STS), and decided to attend graduate school in the history and sociology of science and technology.
I had also always had a morbid fascination with nuclear weapons. I'd read a lot of post-apocalyptic science fiction when I was a teenager. All of these things came together for me in graduate school. I first hoped to study the history of Soviet nuclear weapons but quickly realized that would be impossible for all kinds of reasons. I ended up studying French nuclear power after I realized that nobody had researched it in the ways that interested me. I had lived in France in the 1970s, when the nuclear power program was undergoing rapid expansion. So it was a good fit. After I was done with that project, I became interested in rethinking the so-called nuclear age from a colonial and post-colonial perspective.
What would a student need to become a specialist in global studies or understand the world in a global way?
Travel, learn languages. Remain attentive to—and critical of—the political work done by claims to 'global' purview. Learn history—you won't understand international relations in any depth at all if you remain rooted in the present.
Then, for those who want to start exploring the global politics of science and technology, two books come immediately to mind. Timothy Mitchell's (Theory Talk #59) Carbon Democracy, on the global technopolitics of fossil fuels. And Paul Edwards's A Vast Machine, on the relationship between data and models in the production of knowledge about climate change. Both are must-reads.
The world is permeated with technological artifacts and systems—in what ways is this relevant for approaches to global politics? Where is the conceptual place for technologies within IR?
First, I should make clear that I am not an IR specialist.
That said, I think it does not make sense to think about international relations (lower case) without thinking about the technologies, systems, and infrastructures that make any kind of global movement possible. The flows of people, of products, of culture, political exchanges—these are all mediated through and practiced in the technological systems that permeate our globe. So are the interruptions and absences in such 'flows'. I draw attention to the specific political practices that are enacted through technological systems with the notion of technopolitics. I initially used this concept in my work on nuclear power in France to capture the ways in which hybrid forms of power are enacted in technological artifacts, systems and practices. There I used the term in a rather narrow sense to talk about the strategic practices of designing technologies to enact political goals. My paramount example was that of the French atomic weapons program. In the early 1950s, France's political leaders insisted that France would never build atomic weapons. But engineers and other leaders in the nascent nuclear program were designing reactors in a way that optimized the production of weapons-grade plutonium rather than electricity. When politicians finally signed on, the technology was ready to go. This example problematizes the very notion of a 'political decision'. Instead of a single, discursive decision, we see a complex process whereby political choices are inscribed into technologies, which subsequently favor certain political outcomes over others.
In this example, both engineers and politicians consciously engaged in technopolitics. By contrast, Timothy Mitchell has used the hyphenated term 'techno-politics' to emphasize the unpredictable and unintended effects of technological assemblages. Over the last fifteen years, I have also developed a broader notion of the term, particularly in its adjectival form, 'technopolitical'. I find this to be a useful shorthand for describing both how politics can be strategically enacted through technological systems, and also how technological systems can be re-appropriated for political ends in ways that were unintended by their designers. The point, really, is to highlight the myriad politics of materiality.
Do the particular characteristics of nuclear technologies and related research programs make it impossible to apply the lenses of 'high politics'?
I think a high-politics approach to understanding nuclear weapons decision-making is extremely impoverished. It's not that there aren't high politics, of course there are. But they cannot offer a sufficient or straightforward explanation for how or why any one particular country develops a nuclear program. A focus on high politics implies a focus on decision makers and moments. But that's really misleading. In pretty much every case, the apparent 'moment' of decision is in fact a long process involving a tremendous amount of technopolitical, cultural, and institutional work, rife with conflicts and contingencies of all kinds. I think a more productive approach is to try to understand nuclear capacity-building.
Itty Abraham has done some fantastic work on India's nuclear program, which helps us think about other cases as well. For example, he analyzes the symbolic importance of the nuclear test, noting that IR uses 'the test' as a kind of 'aha!' moment, the moment in which one knows that a country has nuclear weapons. Instead, Abraham sees the test as a process for the cultural production of meaning: a process in which certain meanings get fixed, but by no means the most important moment for understanding the actual technology and politics behind the production of nuclear weapons.
Your book Entangled Geographies (2011) explores a plethora of places, people, and technical networks that sustained the US and Soviet empires. Here, as in Being Nuclear (2012), you insist on investigating the Cold War as transnational history. What difference does this move make?
In Entangled Geographies, my colleagues and I build on the work of Odd Arne Westad, whose book The Global Cold War was an argument for understanding the non-superpower, non-European dimensions of the Cold War. We give that a technopolitical spin, which offers a de-provincializing of the Cold War that's complementary to Westad's. By focusing on places like Saudi Arabia, or Zimbabwe, or Brazil, or South Africa, we show how even the central struggles of the Cold War were intimately bound up in 'northern' relationships to colonial and post-colonial worlds, and in the imaginaries that characterized those relationships.
In Being Nuclear I focus on uranium from Africa—more specifically South Africa, Namibia, Gabon, Madagascar, and Niger. Uranium from Africa has long been a major source of fuel for nuclear power and atomic weapons, including the bomb dropped on Hiroshima, but it has been almost completely absent from accounts of the nuclear age, whether scholarly or popular. This changed in 2002, when the US and British governments claimed that Iraqi leader Saddam Hussein 'sought significant quantities of uranium from Africa' (later specified as the infamous 'yellowcake from Niger'). Africa suddenly became notorious as a source of uranium. But that did not admit Niger, or any of Africa's other uranium-producing countries, to the select society of nuclear states. Nor did it mean that uranium itself counted as a nuclear thing. My book explores what it means for something—a state, an object, an industry, a workplace—to be 'nuclear'. I show that such questions lie at the heart of today's global order and the relationships between 'developing nations' and 'nuclear powers'.
Being Nuclear argues that 'nuclearity' is not a straightforward scientific classification but a contested technopolitical one. In the first part of the book, I follow uranium's path out of Africa and analyze the invention of the global uranium market. In the second part, I enter African nuclear worlds, focusing on miners and the occupational hazard of radiation exposure. In both parts, I show that nuclearity requires instruments and data, technological systems and infrastructures, national agencies and international organizations, experts and conferences, and journals and media exposure. When (and where) nuclearity is densely distributed among these elements, it can offer a means of claiming expertise, compensation, or citizenship. It can serve as a framework for making sense of history, experience, and memory. When (and where) network elements are absent, weak, or poorly connected, nuclearity falters, fades, or disappears altogether, failing to provide a resource for people claiming remediation or treatment. Nuclearity in one register doesn't easily transpose to another: geopolitical nuclearity doesn't automatically translate into occupational nuclearity. Yet these domains remain connected. African uranium miners depend on the transnational movement of nuclear things, but that movement also depends on African miners. Ultimately, I conclude, nuclear security must be considered in tandem with other forms of human security—food and health and environmental and political security. By placing Africa in the nuclear world, and the nuclear world in Africa, the book seeks to remake our understanding of the nuclear age.
I should note that it's not only uranium production that connects colonial and postcolonial spaces with nuclear things. (Also: African countries weren't the only places where uranium was produced. Much of the rest of the world's uranium came from the Navajo nation in the United States, Aboriginal territories in Australia, First Nation territories in Canada, colonized spaces in the Soviet Empire, etc.) French nuclear weapons were tested in the Algerian desert and French Polynesia; the United States tested its weapons on Bikini Atoll; Britain tested its weapons in Maralinga, in Aboriginal Australia; the Soviet Union tested its weapons on the plains of Kazakhstan. And so on.
So, understanding the history of the Cold War—even its most iconic technology, nuclear weapons—as a form of transnational history really calls attention to spaces that have previously been considered marginal, even perhaps not fully nuclear. Ultimately, it should provoke us to problematize 'the Cold War' as a frame for global or transnational history (and social science).
Looking at those colonized and semi-colonized spaces of mining, testing and monitoring infrastructures doesn't necessarily give us an answer to the question of why the Cold War ended, but it does enable us to ask different and possibly more interesting questions. It can lead us, for example, to place the Cold War within the framework of imperialism (rather than the other way around). A longer historical view questions whether the Cold War really represents historical rupture. What political work is done by such claims to rupture? How does that work differ in different places? What are its material consequences?
Why are science and technology hardly ever studied in the postcolonial world from a STS perspective?
I think there are a number of reasons why STS has paid relatively little attention to the postcolonial world. One is that in STS—like many disciplines—the prestige of the subject matter maps onto the prestige of the researcher. So STS researchers who study cutting-edge science or large-scale technological systems seem somehow to be getting at 'harder' topics, ones that focus on active creation. Engineering and other acts of creation appear more prestigious than acts of maintenance, or acts of dismantling. Even studying small-scale creation seems to confer more prestige than studying mundane practices. This brings us back to the theme of rupture vs. continuity: studying or proclaiming rupture seems somehow sexier—and certainly more radical—than studying continuity.
Another, more trivial answer is just that most STS researchers so far have come from Europe and North America, and they tend not to be trained in area studies.
Does the constant ontological insecurity of nuclear things mean that the 'nuclear' is purely a matter of social and political construction?
No, definitely not. But I think to explain what I mean by all this we should take a few steps back and start with what I like to call nuclear exceptionalism. This is a technopolitical claim—emerging immediately after the end of World War II—that there was something radically unique about nuclear things. From 1945 onward, both cold warriors and their activist opponents cultivated this nuclear exceptionalism. Atomic weapons were portrayed as fundamentally different from any other human creation. The bomb was the ultimate geopolitical trump card, and it was imagined as replacing empire in one fell swoop. You see nuclear scientists and engineers gaining prestige, power, and funding far beyond their colleagues in conventional research. In the meantime, anti-nuclear groups make their own claims to exceptionalism by talking about the unprecedented dangers posed by nuclear things. Everywhere you see nuclearity and morality intertwined. Nuclear things either represent salvation or moral depravity… or the apocalyptic end of mankind. But regardless of where you stood politically, this notion of nuclear exceptionalism rested on the sense that the difference between nuclear and non-nuclear things was transparent—ultimately a clear-cut, physical matter of radioactivity.
The nuclear thus emerges not just as a category, but also as a universal and universalizing ontology, one that seems to apply in the same way all over the globe. And frankly, historians, political scientists, anthropologists, and sociologists have reproduced that nuclear exceptionalism. (I did it myself in my first book, The Radiance of France.)
All of which has made it hard to see that what I call nuclearity—the process by which something comes to count as a "nuclear" thing—has a history, a politics, and a geography. Things that count as nuclear in one time and place might not count as nuclear at another. Rendering something as nuclear and exceptional is a form of technopolitical claims-making. It follows that insisting that certain things are not especially nuclear, or that they are banal, is also a form of technopolitical claims-making.
You can see this in the response of the nuclear industry to activist opponents. In the late 1960s and over the course of the 1970s, the nuclear industry began to represent nuclear power not as a life-saving technology for the human race, but as simply another way to boil water. Radiation was just another industrial risk. Such representations seek to banalize nuclear things.
Nuclearity could thus get made, unmade and remade. My favorite example comes from a 1995 US government report on nuclear proliferation. The appendix has a table that summarizes the nuclear activities of 172 nations. Neither Gabon, Niger, nor Namibia is listed as having any nuclear activities, despite the fact that those nations together, during that very year, produced something like 25% of the world's uranium. So when does uranium count as a nuclear thing? When does it lose its nuclearity? And what does Africa have to do with it?
The argument is not that radioactivity has nothing to do with nuclearity, or that nuclearity has nothing to do with the technologies and physical processes we typically associate with the word. Rather, I argue that nuclearity is one thing, and radioactivity and fission are another; sometimes they are coterminous, but not always and not necessarily. Understanding where (and why) they don't map onto each other is politically revealing.
Which kind of interdisciplinary exchanges do we need between your discipline and IR to deepen our understanding of global technopolitics?
Science and technology studies (STS) is really good at exploring practice, and especially at calling attention to the differences between principles and practice—for example, between regulation on the one hand, and the actual practices that regulations are meant to control (without ever entirely succeeding). STS can bring to IR an understanding of how the intimate details of practice matter politically—of how everyday technopolitical and techno-scientific exchanges can be more important loci for politics than treaties, diplomacy, and other forms of what you called high politics.
I can also answer this question wearing my historian's hat. The IR scholarship on nuclear weapons that I'm familiar with (and again, I'm not an expert!) seems to be quite focused on producing models—on using history to produce predictive models that will in turn serve to shape international policy on nuclear weapons regulation. But if history tells us one thing, it is that models are basically useless for understanding how countries develop nuclear weapons. Instead, history and STS both teach us about which questions to ask (in this instance, about nuclear development). Identifying the important questions—rather than prescribing the applicable model—leaves open the list of possible answers. It also leaves open solutions and policies, letting us be more attentive to the specificities and uniqueness of individual cases.
Final question. Let's take the example of Iran's nuclear program. What alternative questions about the issue would the lens of nuclear exceptionalism bring us?
Nuclear technology has played an important role in shaping modern Iranian national identity. This began in the 1970s under the Shah, who – with the support of the US – developed a grandiose plan to build a fleet of nuclear reactors. It took a different turn after the 1979 Iranian revolution. For a while, the new regime sidelined the nuclear program as an unwelcome manifestation of western corruption. But after a few years leaders reappropriated nuclear development and sought to invest it with Iranian-ness. The dynamics of nuclear exceptionalism have operated in Iran much the same way they did in France and in South Africa. Nuclear exceptionalism has served to give material form to national identity. And materialized national identity is most emphatically not something that you can negotiate away in the P5+1 talks.
Gabrielle Hecht is Professor of History at the University of Michigan, where she also directs the Program in Science, Technology, and Society and serves as associate director of the African Studies Center. She recently published Being Nuclear: Africans and the Global Uranium Trade (MIT Press and Wits University Press, 2012), which has received awards from the American Historical Association and the American Sociological Association, as well as the 2013 Susanne M. Glasscock Humanities Book Prize and Honorable Mention for the African Studies Association's 2013 Herskovits Award. She is also the author of The Radiance of France: Nuclear Power and National Identity after World War II (MIT Press 1998 & 2009) and editor of Entangled Geographies: Empire and Technopolitics in the Global Cold War (MIT Press, 2011). Hecht is embarking on a new book project on technology and power in Africa, as well as new research on transnational toxic trash. She has held visiting positions at universities in Australia, France, Norway, South Africa, and Sweden.
Related links
Hecht's faculty profile at the University of Michigan
Read Hecht's Introduction to Entangled Geographies (MIT Press 2011) here (pdf)
Read Hecht's The Power of Nuclear Things (Technology & Culture 2010) here (pdf)
Read Hecht's Nuclear Ontologies (Constellations 2006) here (pdf)
Read Hecht's Rupture-Talk in the Nuclear Age (Social Studies of Science 2002) here (pdf)
School of Global Studies, University of Gothenburg
Over the past couple of decades, global financing for development has changed dramatically. The biggest shifts have been the rapid increase of net private financing flows to developing countries, in particular to middle-income countries (MICs); the sustained growth of official development assistance (ODA) from Development Assistance Committee (DAC) members, even excluding debt relief; the emergence of MICs as growth poles and sources of ODA with different approaches to aid delivery than those of DAC donors; and the expanded role of private aid. In addition, past trends of proliferation, fragmentation and earmarking of aid have continued. This paper reviews broad trends in global financing for development, with a focus on ODA and the growing importance of new development partners such as the so-called BRICS. In this context, it discusses the implications of this changing landscape for aid effectiveness and the role of ODA going forward.
This paper explores the economic impacts of two related tracks of China's expected transformation—economic slowdown and rebalancing away from investment toward consumption—and estimates the spillovers for the rest of the world, with a special focus on Sub-Saharan African countries. The paper finds that an average annual slowdown of gross domestic product in China of 1 percent over 2016–30 is expected to result in a decline of gross domestic product in Sub-Saharan Africa by 1.1 percent and globally by 0.6 percent relative to the past trends scenario by 2030. However, if China's transformation also entails substantial rebalancing, the negative income effects of the economic slowdown could be offset by the positive changes brought along by rebalancing through higher overall imports by China and positive terms of trade effects for its trading partners. If global supply responds positively to the shifts in relative prices and the new sources of consumer demand from China, a substantial rebalancing in China could have an overall favorable impact on the global economy. Economic growth could turn positive and higher on average, by 6 percent in Sub-Saharan Africa and 5.5 percent globally, as compared with the past trends scenario. Finally, rebalancing reduces the prevalence of poverty in Sub-Saharan Africa compared with the isolated negative effects of China's slowdown, which slightly increase the incidence of poverty. Overall, China's slowdown and rebalancing combined are estimated to increase gross domestic product in Sub-Saharan Africa by 4.7 percent by 2030 and reduce poverty, but the extent of this varies by country.