Pricing of cap insurance contracts on political mortgage rates is considered. A simple stochastic process for mortgage rates is proposed, based on renewal processes that model the lengths of periods with downward and upward trends. Prices are calculated by simulating conditional future sample paths, and conditional future quantiles can be obtained to assess the risk of a contract. The method is illustrated by applying it to observed quarterly mortgage rates of the Swiss Union of Raiffeisenbanks for the years 1970 to 2001.
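A rough illustration of the simulation-based pricing idea is sketched below in Python. The alternating up/down phases are drawn here from a geometric distribution as a simple stand-in for the renewal processes of the paper, and all rates, step sizes and contract parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_path(r0, n_quarters, p_switch=0.15, step=0.125, trend=+1):
    """Simulate one quarterly mortgage-rate path with alternating upward and
    downward phases; phase lengths are geometric here as a simple stand-in
    for the renewal-process specification of the paper (illustrative only)."""
    r, rates = r0, []
    for _ in range(n_quarters):
        if rng.random() < p_switch:          # current phase ends, trend reverses
            trend = -trend
        r = max(0.0, r + trend * step * rng.random())
        rates.append(r)
    return np.array(rates)

def cap_price(r0, strike, n_quarters=20, n_paths=20_000, discount=0.99):
    """Monte Carlo price of a cap paying (r_t - strike)^+ each quarter,
    conditional on the current rate r0."""
    payoffs = np.zeros(n_paths)
    disc = discount ** np.arange(1, n_quarters + 1)
    for i in range(n_paths):
        path = simulate_path(r0, n_quarters)
        payoffs[i] = np.sum(disc * np.maximum(path - strike, 0.0))
    # mean payoff = price; quantiles of the payoff distribution assess risk
    return payoffs.mean(), np.quantile(payoffs, [0.5, 0.95])

price, quantiles = cap_price(r0=4.0, strike=4.5)
print(price, quantiles)
```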
This paper uses a panel probit model with simultaneous equations to explain the joint determination of de facto and de jure exchange rate regimes in developing countries since 1980. We also derive an ordered-choice panel probit model to explain the causes of discrepancies between the two regime choices. Both models are estimated using simulation-based maximum likelihood methods. The results of the simultaneous equations model suggest that the two regime choices are dependent on each other and exhibit considerable state dependence. The ordered probit model provides evidence that regime discrepancies reflect an error-correction mechanism and that the discrepancies are persistent over time.
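The abstract does not spell out the estimator, so the sketch below only illustrates the general simulation-based maximum likelihood idea on a much simpler random-effects panel probit; the simultaneous-equations and ordered-choice structure of the paper is not reproduced, and all data are simulated toy data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

def sml_loglik(params, y, X, draws_std):
    """Simulated log-likelihood of a random-effects panel probit:
    y_it = 1{X_it'beta + a_i + e_it > 0}, a_i ~ N(0, sigma^2).
    The individual effect is integrated out by averaging over fixed
    standard-normal draws (common random numbers)."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta                                   # (n, T)
    a = sigma * draws_std[:, None, None]            # (R, 1, 1)
    probs = norm.cdf((2 * y - 1) * (xb[None, :, :] + a))
    lik_i = probs.prod(axis=2).mean(axis=0)         # average over draws
    return -np.log(np.maximum(lik_i, 1e-300)).sum()

# toy data: n individuals, T periods, k regressors
n, T, k = 200, 6, 2
X = rng.standard_normal((n, T, k))
a = 0.8 * rng.standard_normal((n, 1))
y = (X @ np.array([1.0, -0.5]) + a + rng.standard_normal((n, T)) > 0).astype(float)

draws_std = rng.standard_normal(100)                # fixed simulation draws
res = minimize(sml_loglik, np.zeros(k + 1), args=(y, X, draws_std), method="BFGS")
print(res.x)                                        # beta estimates and log(sigma)
```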
The purpose of this study is to investigate whether the wage setting in certain sectors of the Swedish economy influences the wage setting in other sectors. The theoretical background is the Scandinavian model of inflation, which states that the wage setting in the sectors exposed to international competition should influence the wage setting in the sheltered sectors of the economy. The Johansen maximum likelihood cointegration approach is applied to quarterly data on Swedish sector wages for the period 1980:1–2002:2. Different vector error correction (VEC) models are created, based on assumptions about which sectors are, and which are not, exposed to international competition. The wage adaptability between sectors is then tested by imposing restrictions on the estimated VEC models. Finally, Granger causality tests are performed in the different restricted/unrestricted VEC models to test for sector wage leadership. The empirical results indicate large wage adaptability between manufacturing, construction, wholesale and retail trade, the central government sector and the municipalities and county councils sector. This is in line with the assumptions of the Scandinavian model. Furthermore, the empirical results indicate low wage adaptability between the financial sector and manufacturing, and between the financial sector and the two public sectors. The Granger causality tests provide strong evidence of the existence of intersectoral wage causality, but no evidence of a wage-leading role in line with the assumptions of the Scandinavian model for any of the sectors.
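For readers who want to reproduce the general workflow, a minimal sketch with statsmodels is given below; the file name, sector labels, lag order and cointegration rank are placeholders, not the study's actual choices.

```python
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical quarterly log-wage series per sector; the file name and
# column labels are placeholders, not the study's actual data.
wages = pd.read_csv("sector_wages.csv", index_col=0)[
    ["manufacturing", "construction", "retail", "government", "municipal"]
]

# Johansen trace test to choose the cointegration rank
jres = coint_johansen(wages, det_order=0, k_ar_diff=4)
print(jres.lr1, jres.cvt)          # trace statistics vs. critical values

# Estimate the VEC model with the chosen rank
vecm = VECM(wages, k_ar_diff=4, coint_rank=2, deterministic="ci").fit()
print(vecm.beta)                   # cointegrating (long-run) relations

# Pairwise Granger causality as a simple check for wage leadership
grangercausalitytests(wages[["construction", "manufacturing"]].diff().dropna(), maxlag=4)
```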
The paper develops an empirical no-arbitrage Gaussian affine term structure model to explain the dynamics of the German term structure of interest rates from 1979 through 1998. In contrast to most affine term structure models, two of the risk factors that drive the dynamics are linked to observable macroeconomic factors: output and inflation. The results obtained by a Kalman-filter-based maximum likelihood procedure indicate that the dynamics of the German term structure of interest rates can be sufficiently explained by expected variations in those macroeconomic factors plus an additional unobservable factor. Furthermore, we are able to extract a monetary policy reaction function within this no-arbitrage model of the term structure that closely resembles empirical reaction functions based on the dynamics of the short rate only.
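The Kalman-filter-based likelihood evaluation at the core of such an estimation can be sketched generically as below; the no-arbitrage cross-equation restrictions that tie the measurement equation to the factor dynamics in the paper are not imposed here, so this is only the filtering/likelihood building block.

```python
import numpy as np

def kalman_loglik(y, Z, H, T, Q, a0, P0):
    """Gaussian log-likelihood of the state-space model
    y_t = Z a_t + eps_t,  a_{t+1} = T a_t + eta_t,
    eps ~ N(0, H), eta ~ N(0, Q), via the prediction-error decomposition.
    In the term-structure application Z, T and the intercepts would be
    functions of a small parameter vector restricted by no-arbitrage."""
    n, p = y.shape
    a, P, ll = a0.copy(), P0.copy(), 0.0
    for t in range(n):
        v = y[t] - Z @ a                      # one-step prediction error
        F = Z @ P @ Z.T + H                   # prediction-error covariance
        Finv = np.linalg.inv(F)
        ll += -0.5 * (p * np.log(2 * np.pi)
                      + np.linalg.slogdet(F)[1] + v @ Finv @ v)
        K = T @ P @ Z.T @ Finv                # Kalman gain
        a = T @ a + K @ v                     # next predicted state
        P = T @ P @ (T - K @ Z).T + Q
    return ll
```

In practice the entries of Z, T, H and Q are parameterized and the resulting log-likelihood is maximized numerically, for example with scipy.optimize.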
This paper investigates the macroeconomic effects of job creation schemes and vocational training on the matching processes in West Germany. The empirical analysis is based on regional data for local employment office districts for the period from 1999 to 2003. The empirical model relies on a dynamic version of a matching function augmented by active labour market programmes (ALMPs). In order to obtain consistent estimates in a dynamic panel data model, a first-differences GMM estimator and a transformed maximum likelihood estimator are applied. Furthermore, the paper addresses the endogeneity of the policy measures. The results indicate that vocational training does not significantly affect the matching process and that job creation schemes have a negative effect.
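The first-differences idea can be illustrated with the simpler Anderson-Hsiao instrumental-variables estimator, which removes the fixed effect by differencing and instruments the lagged difference with a twice-lagged level; the paper's GMM and transformed maximum likelihood estimators generalize this, and the data below are simulated.

```python
import numpy as np

def anderson_hsiao(y):
    """First-difference the dynamic panel y_it = rho*y_i,t-1 + a_i + e_it and
    instrument the differenced lag with the level y_i,t-2 (Anderson-Hsiao IV),
    a simpler relative of the first-differences GMM estimator.
    y is an (N, T) array of the outcome (e.g. matches per district)."""
    dy  = y[:, 2:] - y[:, 1:-1]     # Δy_it for t = 3..T
    dyl = y[:, 1:-1] - y[:, :-2]    # Δy_i,t-1
    z   = y[:, :-2]                 # instrument: y_i,t-2 in levels
    return np.sum(z * dy) / np.sum(z * dyl)   # IV estimate of rho

# toy dynamic panel with fixed effects
rng = np.random.default_rng(2)
N, T, rho = 500, 8, 0.6
a = rng.standard_normal((N, 1))
y = np.zeros((N, T))
for t in range(1, T):
    y[:, t] = rho * y[:, t - 1] + a[:, 0] + rng.standard_normal(N)
print(anderson_hsiao(y))            # close to 0.6 in large samples
```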
Spanish regulations on dam safety recommend the estimation of a return period of 1000 years for the Design Flood and of 10,000 years for the Safety Check Flood. The criteria used for the calculations are based on flood frequency analysis (FFA) from gauge record data or on the Probable Maximum Flood (PMF), the hypothetical maximum of rainfall turned into runoff. The short gauge records afford little support for the FFA or for the hypothetical PMF, which, together with the absence of an associated probability, limits the utility of these indices for risk-based dam safety decisions. Palaeoflood techniques are a means of directly assessing the probability of extreme floods and testing the validity of PMF-based models. In this paper, a palaeoflood record was applied to a case study in the Guadalentín river, upstream of the Valdeinfierno reservoir (372 km2). The Valdeinfierno dam spillway has a capacity of 550 m3s-1 (design flood). In addition, available historical flood data were collected. Palaeoflood data and the gauge record were combined for the FFA, using existing methods for fitting distribution functions, such as maximum likelihood, and the resulting return periods were compared with the PMF results. The palaeoflood analysis provides a discharge of 2350 m3s-1 for the design flood (1000-yr return period) and 3450 m3s-1 for the safety check flood (10,000-yr return period). Some recommendations for the calculation of the safety check flood and the design flood are provided which can improve existing legislation on dam safety. ; This work was funded by the CICYT project REN2001-1633/RIES "Incorporación de datos de paleocrecidas e inundaciones históricas al cálculo de la avenida de diseño de presas" (PALEOCAP).
This study examines the relative merits of grievance-based explanations of civil conflict that stress 'Malthusian crises' and 'creed-related' civilizational clashes against competing greed- and governance-related explanations. The results from maximum-likelihood analyses of approximately 139 countries over the entire post-Cold War period find little support for Malthusian propositions. Abundant mineral wealth makes countries highly unstable, whereas scarcity of renewable resources is largely unrelated to civil conflict. There is some evidence suggesting that Malthusian crises are likely to be man-made. Ethnicity is related to conflict when a society is moderately homogeneous, whereas highly plural societies are safer. Large populations of Christians and Muslims within one society make countries remarkably safer, contrary to popular beliefs that ancient hatreds and Islamic militarism drive conflict. Trade to GDP (openness) is also strongly associated with peace.
Study I: Real Wage Determination in the Swedish Engineering Industry. This study uses the monopoly union model to examine the determination of real wages and, in particular, the effects of active labour market programmes (ALMPs) on real wages in the engineering industry. Quarterly data for the period 1970:1 to 1996:4 are used in a cointegration framework, utilising Johansen's maximum likelihood procedure. On the basis of the Johansen (trace) test results, vector error correction (VEC) models are created in order to model the determination of real wages in the engineering industry. The estimation results support the presence of a long-run wage-raising effect of rises in labour productivity, in the tax wedge, in the alternative real consumer wage and in real UI benefits. The estimation results also support the presence of a long-run wage-raising effect of positive changes in the participation rates in ALMPs, relief jobs and labour market training. This could be interpreted as meaning that the possibility of being a participant in an ALMP increases the utility for workers of not being employed in the industry, which in turn could increase real wages in the industry in the long run. Finally, the estimation results show evidence of a long-run wage-reducing effect of positive changes in the unemployment rate. Study II: Intersectoral Wage Linkages in Sweden. The purpose of this study is to investigate whether the wage-setting in certain sectors of the Swedish economy affects the wage-setting in other sectors. The theoretical background is the Scandinavian model of inflation, which states that the wage-setting in the sectors exposed to international competition affects the wage-setting in the sheltered sectors of the economy. The Johansen maximum likelihood cointegration approach is applied to quarterly data on Swedish sector wages for the period 1980:1–2002:2. Different vector error correction (VEC) models are created, based on assumptions as to which sectors are exposed to international competition and which are not. The adaptability of wages between sectors is then tested by imposing restrictions on the estimated VEC models. Finally, Granger causality tests are performed in the different restricted/unrestricted VEC models to test for sector wage leadership. The empirical results indicate considerable adaptability in wages between manufacturing, construction, the wholesale and retail trade, the central government sector and the municipalities and county councils sector. This is consistent with the assumptions of the Scandinavian model. Further, the empirical results indicate a low level of adaptability in wages between the financial sector and manufacturing, and between the financial sector and the two public sectors. The Granger causality tests provide strong evidence for the presence of intersectoral wage causality, but no evidence of a wage-leading role in line with the assumptions of the Scandinavian model for any of the sectors. Study III: Wage and Price Determination in the Private Sector in Sweden. The purpose of this study is to analyse wage and price determination in the private sector in Sweden during the period 1980–2003. The theoretical background is a variant of the "imperfect competition model of inflation", which assumes imperfect competition in the labour and product markets. According to the model, wages and prices are determined as the result of a "battle of mark-ups" between trade unions and firms.
The Johansen maximum likelihood cointegration approach is applied to quarterly Swedish data on consumer prices, import prices, private-sector nominal wages, private-sector labour productivity and the total unemployment rate for the period 1980:1–2003:3. The chosen cointegration rank of the estimated vector error correction (VEC) model is two. Thus, two cointegration relations are assumed: one for private-sector nominal wage determination and one for consumer price determination. The estimation results indicate that an increase in consumer prices of one per cent lifts private-sector nominal wages by 0.8 per cent. Furthermore, an increase in private-sector nominal wages of one per cent increases consumer prices by one per cent. An increase of one percentage point in the total unemployment rate reduces private-sector nominal wages by about 4.5 per cent. The long-run effects of private-sector labour productivity and import prices on consumer prices are about –1.2 and 0.3 per cent, respectively. The Rehnberg agreement of 1991–92 and the monetary policy shift in 1993 affected the determination of private-sector nominal wages, private-sector labour productivity, import prices and the total unemployment rate. The "offensive" devaluation of the Swedish krona by 16 per cent in 1982:4, together with the move to a floating Swedish krona and the substantial depreciation that accompanied it, affected the determination of import prices.
Research on labour/management negotiation has traditionally focused on the 'instrumental' elements of negotiating, including the likelihood of bargaining success based on a series of structural factors. The 'expressive' dimension of negotiating, i.e., the interactional strategies employed by negotiators, has received less emphasis. This study is an attempt to address this imbalance. It investigates and describes labour/management negotiation using Goffman's Dramaturgy Model. The primary objectives of this study are to determine whether labour/management negotiations are dramaturgical in nature and to describe the dramaturgy employed by negotiators. In-depth interviews, primarily consisting of open-ended questions, were conducted with ten labour/management negotiators. Chief negotiators, five from labour and five from management, were selected to ensure an interview group with maximum interactional experience and expertise. The interview group was diverse, included both male and female negotiators, and was drawn from multiple sectors, including the health care, education, government, not-for-profit and private sectors. The interviews were transcribed and then coded for instances of the dramaturgical principles delineated by Goffman. Additionally, data gathered through the open-ended questions yielded elements of the dramaturgy of labour/management negotiations beyond those expressly indicated as principles. (Abstract shortened by UMI.)
Molecular markers based on mitochondrial (mt) DNA control region sequences were used to test the hypothesis that juvenile loggerhead sea turtles (Caretta caretta) in pelagic habitats of the eastern Atlantic are derived from nesting populations in the western Atlantic. We compared mtDNA haplotypes from 131 pelagic juvenile turtles (79 from the Azores and 52 from Madeira) to mtDNA haplotypes observed in major nesting colonies of the Atlantic Ocean and Mediterranean Sea. A subset of 121 pelagic samples (92%) contained haplotypes that match mtDNA sequences observed in nesting colonies. Maximum likelihood analyses (UCON, SHADRACQ) estimate that 100% of these pelagic juveniles are from the nesting populations in the southeastern United States and the adjacent Yucatan Peninsula, Mexico. Estimated contributions from nesting populations in south Florida (0.71, 0.72), northern Florida to North Carolina (0.19, 0.17), and Quintana Roo, Mexico (0.11, 0.10) are consistent with the relative sizes of these nesting aggregates. No contribution was detected from nesting colonies in the Mediterranean (Greece) or South Atlantic (Brazil), although sample sizes are insufficient to exclude these locations with finality. The link between west Atlantic nesting colonies and east Atlantic feeding grounds provides a more complete scientific basis for assessing the impact of subadult mortality in oceanic fisheries. Demographic models for loggerhead turtles in the western Atlantic can now be improved by incorporating growth and mortality data from juvenile turtles in pelagic habitats. These data demonstrate that the appropriate scale for loggerhead turtle conservation efforts is vastly larger than the current scale of management plans based on political boundaries.
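A minimal version of such a maximum-likelihood mixed-stock analysis is sketched below; it maximizes a multinomial likelihood over colony contribution proportions and is only a simplified stand-in for the UCON and SHADRACQ programs cited in the abstract, with invented haplotype frequencies.

```python
import numpy as np
from scipy.optimize import minimize

def mixed_stock_mle(mix_counts, colony_freqs):
    """Maximum-likelihood mixed-stock analysis: estimate the proportion of a
    mixed (pelagic) sample contributed by each nesting colony, given the
    haplotype frequencies observed in the colonies.
    mix_counts:   (H,)  haplotype counts in the mixed sample
    colony_freqs: (C, H) haplotype frequencies per colony (rows sum to 1)"""
    C = colony_freqs.shape[0]

    def neg_loglik(theta):
        p = np.exp(theta - theta.max())
        p /= p.sum()                           # softmax -> valid proportions
        mix_freq = p @ colony_freqs            # expected haplotype frequencies
        return -np.sum(mix_counts * np.log(np.maximum(mix_freq, 1e-12)))

    res = minimize(neg_loglik, np.zeros(C), method="Nelder-Mead")
    p = np.exp(res.x - res.x.max())
    return p / p.sum()

# toy example: three colonies, four haplotypes (illustrative numbers only)
colony_freqs = np.array([[0.7, 0.2, 0.1, 0.0],
                         [0.1, 0.6, 0.2, 0.1],
                         [0.0, 0.1, 0.3, 0.6]])
mix_counts = np.array([50, 40, 20, 11])
print(mixed_stock_mle(mix_counts, colony_freqs))
```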
10 pages, 2 figures, 1 table. Paper presented at the VIII Reunión Nacional de Geomorfología, Toledo, 22-25 September 2004. ; [EN]: Spanish regulations on dam safety recommend the estimation of a return period of 1000 years for the Design Flood and of 10,000 years for the Safety Check Flood. The criteria used for the calculations are based on flood frequency analysis (FFA) from gauge record data or on the Probable Maximum Flood (PMF), the hypothetical maximum of rainfall turned into runoff. The short gauge records afford little support for the FFA or for the hypothetical PMF, which, together with the absence of an associated probability, limits the utility of these indices for risk-based dam safety decisions. Palaeoflood techniques are a means of directly assessing the probability of extreme floods and testing the validity of PMF-based models. In this paper, a palaeoflood record was applied to a case study in the Guadalentín river, upstream of the Valdeinfierno reservoir (372 km2), with a spillway capacity of 550 m3s-1 (design flood). In addition, available historical flood data were collected. Palaeoflood data and the gauge record were combined for the FFA, using existing methods for fitting distribution functions, such as maximum likelihood. The palaeoflood analysis provides a discharge of 2350 m3s-1 for the design flood (1000-yr return period) and 3450 m3s-1 for the safety check flood (10,000-yr return period). The discharge estimated by the PMF is 5786 m3s-1, showing an overestimation by this empirical method. Some recommendations for the calculation of the safety check flood and the design flood are provided which can improve existing legislation on dam safety. ; [ES]: The Technical Regulation on Dam and Reservoir Safety (Reglamento Técnico sobre Seguridad de Presas y Embalses) recommends, depending on the category of the dam considered, the use of project floods with a return period of 1,000 years for the Design Flood and of up to 10,000 years for the Safety Check Flood. Estimating them from the scarce gauge or meteorological data of the systematic (ordinary) record lacks the scientific rigour that projects of this magnitude require. This paper presents the results of a palaeoflood analysis in the Guadalentín river upstream of the Valdeinfierno reservoir (372 km2), with a spillway capacity of 550 m3s-1, and its application to the determination of the quantiles required for the design and safety check floods. The deposits analysed are sands and silts accumulated during floods in backwater areas (slackwater flood sediments). In the flood frequency analysis, the discharges estimated for these deposits were combined with the gauge data, using the maximum likelihood method to fit a Gumbel distribution. This analysis yields a discharge of 2350 m3s-1 for the design flood (1,000-year return period) and 3450 m3s-1 for the safety check flood (10,000-year return period). The discharges associated with floods of high return periods were compared with the result of the probable maximum flood (PMF) calculation, which is 5786 m3s-1. ; This work was funded by the CICYT project REN2001-1633/RIES "Incorporación de datos de paleocrecidas e inundaciones históricas al cálculo de la avenida de diseño de presas" (PALEOCAP).
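The maximum-likelihood fitting step can be illustrated with a Gumbel distribution as below. Pooling palaeoflood estimates with the gauge record as if both were exact annual maxima is a simplification (this type of study normally handles the palaeoflood information through a censored-data likelihood), and the discharges are invented.

```python
import numpy as np
from scipy import stats

# Hypothetical annual peak discharges (m3/s): a short gauge record plus a few
# palaeoflood estimates treated here as exact values for illustration.
gauge       = np.array([35, 60, 48, 120, 75, 52, 90, 210, 66, 44], dtype=float)
palaeoflood = np.array([950, 1400, 1800], dtype=float)
peaks = np.concatenate([gauge, palaeoflood])

loc, scale = stats.gumbel_r.fit(peaks)          # maximum-likelihood Gumbel fit

for T in (1000, 10_000):                        # design and safety-check floods
    q = stats.gumbel_r.ppf(1 - 1 / T, loc, scale)
    print(f"{T:>6}-yr flood: {q:8.0f} m3/s")
```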
Prior to the commercial availability of Very High Resolution (VHR) satellite imagery, the applicability of Earth Observation data in the urban planning sector was very limited. The spatial resolution of the imagery supplied by platforms like Landsat TM and SPOT HRV was too coarse to be of real practical use to urban planners and their applications. Satellite images of urban or sub-urban areas are characterized by large radiometric variations due to the small size and the diversity of the objects. This in turn causes radiometric contamination between neighbouring pixels, which renders object recognition nearly impossible. Satellite images with a higher resolution can alleviate this problem. The dawn of the VHR era was thus eagerly anticipated by urban remote sensing researchers. In the framework of a DWTC/OSTC Telsat 4 pilot project we proposed a methodology that employs IKONOS-2 imagery to develop an inventory of built-up and un-built areas in Belgium's Flemish region. Such an inventory can be of use to regional planning agencies that are responsible for the implementation of the government's planning policies. In Flanders, AROHM (Administration of Spatial Planning, Housing, Monuments and Landscapes) records, monitors, and evaluates the built-up areas. To do this, it needs extensive data input from the communities, which requires a lot of time and effort. A reliable and swift technique, based on earth observation data and applicable to each residential area in Flanders, would be of great value. Not only would it allow more frequent assessments, it would also make it possible to double-check incoming data from the communities. The aforementioned project consisted of three parts: the visual interpretation of two study areas (Hasselt and Ghent), the automatic classification of these areas using both Maximum Likelihood and Neural Network classifiers, and the development of GIS procedures to transform the classified images into thematic maps such as, for instance, a map of building densities.
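The Maximum Likelihood classifier mentioned above is the classical per-class Gaussian classifier used in remote sensing; a small sketch, with invented training spectra rather than IKONOS data, is:

```python
import numpy as np
from scipy.stats import multivariate_normal

def train_ml_classifier(samples):
    """Fit one multivariate Gaussian per land-cover class from training
    pixels; `samples` maps class name -> (n_i, bands) array of spectra."""
    return {c: (x.mean(axis=0), np.cov(x, rowvar=False)) for c, x in samples.items()}

def classify(pixels, model):
    """Assign each pixel to the class with the highest Gaussian likelihood."""
    classes = list(model)
    scores = np.column_stack([
        multivariate_normal(mean=m, cov=S, allow_singular=True).logpdf(pixels)
        for m, S in (model[c] for c in classes)
    ])
    return np.array(classes)[scores.argmax(axis=1)]

# toy 4-band spectra for two classes (illustrative only)
rng = np.random.default_rng(3)
samples = {
    "built-up": rng.normal([120, 110, 100, 90], 10, size=(200, 4)),
    "un-built": rng.normal([60, 80, 70, 140], 10, size=(200, 4)),
}
model = train_ml_classifier(samples)
pixels = rng.normal([118, 112, 98, 92], 12, size=(5, 4))
print(classify(pixels, model))
```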
This article questions current empirical practice in the study of growth. It argues that much of the modern empirical growth literature is based on assumptions about regressors, residuals, and parameters that are implausible from the perspective of both economic theory and the historical experiences of the countries under study. Many of these problems, it argues, are forms of violations of an exchangeability assumption that implicitly underlies standard growth exercises. The article shows that these implausible assumptions can be relaxed by allowing for uncertainty in model specification. Model uncertainty consists of two types: theory uncertainty, which relates to which growth determinants should be included in a model; and heterogeneity uncertainty, which relates to which observations in a data set constitute draws from the same statistical model. The article proposes ways to account for both theory and heterogeneity uncertainty. Finally, using an explicit decision-theoretic framework, the authors describe how one can engage in policy-relevant empirical analysis.
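One standard way to operationalize the "theory uncertainty" part of this programme is model averaging over the set of candidate regressors. The sketch below uses a BIC approximation as the model weight; it illustrates the general approach rather than the authors' exact procedure, and the regressor names are hypothetical.

```python
import numpy as np
from itertools import combinations

def bic_model_average(y, X, names):
    """Average OLS estimates over all subsets of candidate growth regressors,
    weighting each model by exp(-BIC/2) -- a simple BIC approximation to
    Bayesian model averaging over 'theory uncertainty'."""
    n, k = X.shape
    bics, betas = [], []
    for r in range(k + 1):
        for subset in combinations(range(k), r):
            Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            b, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = np.sum((y - Xs @ b) ** 2)
            bics.append(n * np.log(rss / n) + Xs.shape[1] * np.log(n))
            beta_full = np.zeros(k)
            beta_full[list(subset)] = b[1:]        # intercept excluded
            betas.append(beta_full)
    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min()))
    w /= w.sum()
    return dict(zip(names, (w[:, None] * np.array(betas)).sum(axis=0)))

# toy data with hypothetical regressor names
rng = np.random.default_rng(5)
X = rng.standard_normal((80, 3))
y = 1.5 * X[:, 0] + rng.standard_normal(80)
print(bic_model_average(y, X, ["investment", "schooling", "openness"]))
```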
A) Motivation and Problem Statement. Today the computer is one of the most important tools in the design and development of vehicles. Initially it was used by design engineers for the computer-aided design (CAD) of virtual vehicle models. It has since become indispensable in many other areas of vehicle development: computer-based numerical simulation has substantially shortened the automotive development cycle. Results obtained on virtual prototypes can first be optimized and subsequently validated on the real prototype. Investigations of driving dynamics, crash behavior, interior acoustics, and the flow around the outer skin are only a few of the application fields in which computer-based technologies help to reduce development costs. Based on the styling specifications, a design model is created; the generated CAD data describe the vehicle components in terms of parametric surfaces and additional material information. Before the data can be used in numerical simulations, the surface descriptions must first be discretized. For structural mechanics simulation the CAD model is therefore converted into a finite element model, which then consists largely of three- and four-sided shell elements. The finite element model is prepared in a preprocessing step and supplemented with additional data before it is passed to the simulation program as input. After the simulation, which is usually very time-consuming and can take several days for full-vehicle models, a large volume of result data is produced; because of its size and complexity it can only be evaluated with mature visualization tools. The insights gained from the simulation analysis flow back to the design engineer. This closes the cycle, which iteratively improves the virtual prototype until all target values are met. As early as the beginning of the eighties, structural analyses were carried out by numerical simulation using simple beam models. Model complexity has grown hand in hand with increasing hardware performance and the continued development of the simulation software: since then the model size has more than doubled every three years. The need to provide development engineers with appropriate visualization tools has so far been met by the vendors of the simulation software with their own solutions. At first, simple display applications emerged that drew the vehicle as a wireframe model without conveying any spatial impression; these applications were developed further, but today they generally no longer correspond to the state of the art in software development and, above all, in visualization. Progress through scientific research in computer graphics and the continued development of hardware, in particular of the graphics subsystems, lets the gap between what is possible in visualization and what commercial data analysis products offer grow ever wider. Moreover, applied computer science is always looking for areas of application in which newly developed methods can be evaluated and improved. Close cooperation between engineers and scientists therefore appears highly promising and indeed indispensable.
The present thesis was written in close cooperation with the simulation department of the BMW Group. Its goal is to analyze the existing vehicle development process in the context of structural mechanics simulation and to accelerate it by adapting new computer graphics methods from other fields and by developing new visualization techniques and interaction mechanisms. The concepts employed are to be evaluated by means of a prototype application. So far, little research has been done on visualization for structural mechanics simulation based on four-sided shell elements. For the analysis of crash simulation results, large time-dependent data sets must be processed efficiently. The mesh structure of the finite element model should be preserved without giving up high interaction rates. Fast data preparation plays just as important a role as navigation through the virtual three-dimensional vehicle model with the input devices available at a standard workstation. Advances in the simulation codes have made it possible to impose boundary conditions on substructures, so that computations can now be carried out on partial models. In car body analysis, virtual vehicle models have since been assembled from independently meshed parts instead of homogeneously meshed full models. The mesh connection between adjacent parts is no longer established by the laborious matching and sharing of common boundary nodes; instead, the part meshes overlap in flange regions, where they are tied to each other by newly developed connection elements. Among other things, this has the advantage that individual parts can be exchanged for variants without having to re-mesh their surroundings. Important decisions must be made in the early phase of a vehicle project on the basis of insights gained from numerical simulation. This presupposes that the design data available up to that point can be converted into simulation models that can actually be computed. Because individual parts are meshed independently and the various vehicle components are at different stages of design maturity, assembling them frequently produces contacts and penetrations between the part meshes of the discretized finite element model. These must first be detected and removed, since they would otherwise falsify the simulation results. In addition, the part meshes must be joined to each other by connection elements. Since the design data in the early phase do not yet contain complete connection data, the simulation engineer has to enrich the data set with the corresponding information. In view of the increasing number of variant computations and a largely automated standard evaluation of the simulation results, the preprocessing of input data for the simulation is becoming ever more important relative to the post-processing. Until now, such additions to the simulation models had to be made directly in the input files using a text editor, and mesh errors or missing connections between parts could only be discovered by trial-running the model: the simulation process was started and aborted again after some time.
By analyzing the intermediate results computed up to that point, such deficiencies of the input model become visible. Consistency checks carried out in advance make these time-consuming trial runs superfluous. Further potential for process optimization lies in the integration and harmonization of different tools. Tightly coupling the simulation process to the pre- and post-processing of the data within one and the same application, which can handle both input and output data, lays the foundation for fast iterations in the optimization process and contributes to the desired reduction of the multitude of applications an engineer has to operate. In times of advancing globalization and mergers, but also because the creation of partial models is increasingly outsourced to specialized service providers, the need for communication between development teams cooperating across geographical boundaries is growing. Cost-intensive meetings, for which all participants have to gather in one place, can only be reduced by improving the existing remote communication infrastructure. Since a shared view of the model data as the basis for discussion cannot be dispensed with, a network-based coupling of remote workstations suggests itself, so that smaller meetings can be replaced by telephone calls with a simultaneous cooperative visualization session at the desktop. A further aspect concerns ensuring the reliability of simulation results. To be able to state which modification of the vehicle structure has positively influenced the simulated behavior, all factors that act on the simulation must first be analyzed. This is done in stability analyses, in which the scatter of the simulation results for identical input data is measured and its causes are investigated. In the regions where maximum scatter originates, the model is to be modified by design measures so that identical input data lead to nearly identical simulation results. The goal of this thesis is to significantly advance the pre- and post-processing of structural mechanics simulation in the vehicle development process, with respect to both new functionality and performance, by incorporating new graphics technologies and by developing new visualization algorithms. B) Contributions of this Thesis. Since this work was carried out in close cooperation with the car body simulation department of the BMW Group, and the research results were to be used directly by the engineers on everyday problems, the preconditions for a high level of user acceptance first had to be created. These include, above all, short data loading times, an intuitive user interface, and convenient functionality that makes it possible to solve the tasks faster than with other applications. The analysis of the time-dependent simulation data to be processed and of the library provided for reading them led to a suitable internal data structure that permits efficient data preparation and thus results in shorter start-up times than many commercially available visualization tools. Furthermore, the evaluation of several scene graph libraries, taking into account the computing platform in the user environment, resulted in the decision in favor of Cosmo3D / OpenGL Optimizer.
Thanks to the data structures and functionality of this library, combined with an optimized scene graph design, the resource requirements stay within what is available on a standard workstation. To accelerate image synthesis, known methods for optimizing polygonal models for further processing in the OpenGL pipeline are compared with adapted algorithms that, taking the underlying data into account, build quadrilateral strips of maximum length. In addition, the use of different levels of detail in the CAE environment is investigated, and a solution for significantly increasing the frame rate during navigation through the finite element model is presented. The development and evaluation of texture-based visualization techniques demonstrates their advantages by means of several examples from the simulation environment. From this follows the need to equip standard workstations with appropriate graphics hardware in the future in order to benefit from the possibilities of modern visualization algorithms. Within the scope of this thesis, the means were created to visualize and selectively repair mesh errors that arise after the interactive assembly of model components from different data sources. Algorithms based on hierarchical data structures are employed for this purpose. Combined with methods for displaying and interactively modifying connection elements, as well as for detecting faulty spotweld data, this lays the groundwork for efficiently preparing finite element models for crash simulation and for greatly simplifying model creation for variant computations. Interaction and navigation mechanisms tailored specifically to the needs of the simulation engineers, together with freely movable clip objects, ease the handling of the model data. Through the development of three-dimensional selection objects and an efficient computation of section forces, force flux visualization by means of dynamic force flux tubes is now also available as an interactive analysis tool in post-processing. Computing the required quantities in batch mode and caching them in a dedicated file format enables the standard evaluation of predefined force flux paths immediately after the simulation. This further contributes to accelerating and automating the post-processing. With the development of an image and movie generator, results of this work contributed to an integration tool for workflow control and data management in car body simulation. Two solutions for scenarios of a cooperative session with several computers are presented. The results show how time-consuming meetings between engineers can be replaced by telephone calls with a simultaneous cooperative visualization session. The presented method for the stability analysis of simulation processes helps to uncover the origins of instabilities and to increase the reliability of the simulation results of improved models. This work has produced a prototype visualization platform for the pre- and post-processing of structural mechanics data named crashViewer.
The prototype's object-oriented software design allows further data formats to be integrated and new algorithms to be implemented and evaluated in productive use in car body simulation, but also in other CAE areas. C) Structure of the Thesis. The following gives an overview of the structure of this thesis and of how the chapters relate to one another. Chapter 2 motivates the development of a pre- and post-processing application integrated with the simulation. The chapter begins by introducing the broad field of digital vehicle development; the crash simulation environment is then examined in more detail, providing a foundation for a better understanding of the subsequent chapters. Chapter 3 gives an overview of the fundamentals of interactive computer graphics and visualization. In addition, the data underlying crash simulation are described: first the structures of which a full-vehicle model is generally composed are explained, and then the data formats of the simulation inputs and results are discussed in order to illustrate the wide range of data to be processed. Chapter 4 presents an efficient scene graph design for time-dependent finite element models with invariant topology. The necessary fundamentals of the graphics libraries used are conveyed and possible extensions are discussed. Chapter 5 explains the architecture of the prototype created in the course of this work and gives an overview of the interaction mechanisms developed for efficient data analysis. Chapter 6 discusses several techniques for accelerating the rendering. While the generation of quadrilateral strips and the simplification deal with optimizing the modeled geometry for efficient processing in the graphics hardware, the last part of this chapter shows how the use of textures makes additional geometry processing unnecessary. Chapter 7 introduces special functionality for the preprocessing of input models. Besides distance visualization, which is used in several areas, the interactive modification of spotweld data and the transfer of parameters between incompatible meshes are explained. Chapter 8 illustrates concepts for the post-processing of simulation results. Techniques for visualizing scalar and vector quantities are presented. In addition, interactive force flux visualization and the display of instabilities in simulation results are considered. The CORBA-based extension for the collaborative work of spatially separated users, as well as the batch-based image and movie generation for structural mechanics data, conclude the presentation of the methods newly developed in this work. Chapter 9 puts the results of this work into context. Examples from the productive development process illustrate the progress achieved, and user acceptance is critically examined. Finally, an overview of follow-up work building on the present results is given. ; This thesis presents visualization techniques and interaction concepts that have been developed for the pre- and post-processing of structural mechanics simulations. This work was done in cooperation with the crash simulation department of the BMW Group.
The automotive industry's main goal is to secure its share of a global market that is becoming ever more competitive and dynamic. On the one hand, product quality has to be increased with respect to fuel consumption, weight, and passive safety. On the other hand, the development process needs to be streamlined in order to reduce development costs and time to market. These objectives are to be achieved by making extensive use of virtual prototyping. The vehicle development process has changed completely during the past two decades. In the early eighties, finite element analysis found its way into the simulation of structural mechanics in the German automotive industry. Initially developed for military purposes, the method was taken up in 1983 by a group of engineers from several automotive companies who started feasibility studies based on finite element models containing fewer than 5,000 beam elements. Seven years later, productive results of finite element analysis in structural mechanics started to influence car body development. In the second half of the nineties, important project decisions were made during the early phase of the development process based on the deeper insight into crash behavior provided by crash-worthiness simulation results. Today, numerical simulation is an indispensable part of the computer-aided product development chain in the automotive industry. After the design of a new car model is completed, a digital model is constructed by means of computer-aided design (CAD). In a meshing step, the created parametric surfaces defined by spline curves are discretized. The result describes the geometry of the car body by finite elements, for crash simulation mainly three- or four-sided shells but also beams and volumetric elements. The finite element model is completed, for example, with material properties, contact information, and boundary conditions. This is done in a preprocessing step. Once the simulation input data are available, the whole model is handed over to the simulation process, which takes about one or two days using massively parallel processing. The deformation is recorded in a large result file. Commonly, the first 120 milliseconds of a crash are simulated and every two-thousandth time step is stored. A time step of less than one microsecond leads to a result file containing more than sixty snapshots. Typically, physical data such as acceleration, velocity, displacement, and forces per node, or thickness, stress, strain, and energies per element are recorded. During post-processing the information is analyzed in order to evaluate crash-worthiness. The feedback provided to the construction department closes the loop when all goals set for the car body structure have been attained. The virtual car body development in the pre- and post-processing of crash simulation divides into two main phases: (1) In the concept phase, new ideas are discussed and evaluated in order to reach a priority objective such as minimizing car body weight. Only a few people per car project are involved in this first stage, for a period of 12 to 18 months. (2) After the basic decisions have been made regarding new concepts, a second phase follows which comprises all other fields and refines those concepts with respect to further objectives from the areas of stability, dynamics or NVH. This phase requires many engineers from different departments over a period of more than 30 months to optimize the product. The later undesirable properties are revealed, the higher the cost of the changes that then become necessary.
Until the late nineties, transitions between adjacent car body parts were modeled as two finite element meshes sharing common nodes at the border. Each time a car body part was replaced by an optimized variant in order to enhance the crash behavior, either the whole car model had to be re-meshed or the border nodes had to be adapted manually. Preparing such variant models was therefore a very time-consuming task. After simulation codes like PAM-CRASH became able to simulate car body connections such as spotwelds, finite element models became more like their counterparts made of steel. Now a flange is constructed along which adjacent car body parts can be connected without sharing common nodes. This allows for an independent meshing of car body parts and therefore for more variant computations. However, independent meshing leads to mesh penetration or even perforation in flange areas when inhomogeneous meshes are assembled. Thus, a new preprocessing task arose: these mesh errors have to be removed in order to avoid error-prone simulation results. On the post-processing side, large time-dependent data sets require dedicated visualization methods that help the engineers interpret the huge amount of data at interactive frame rates. Finite element meshes therefore need to be prepared for optimized processing in the graphics pipeline. Typical types of data that have to be visualized are scalar values, vectors, tensors, or any combination thereof. In 1997, when most of the commercial visualization tools provided by solver companies as add-ons to the simulation software could no longer provide state-of-the-art graphics, a demand arose for new visualization techniques available for productive use. Besides, the growing throughput caused by an increasing number of car body variants, together with the acceleration of computation hardware allowing more optimization cycles, made it necessary to automate as many processing steps as possible. The growing outsourcing of development tasks to suppliers and the number of mergers in the automotive industry, which entail cooperation between corresponding departments at different sites, require a client-server solution for cooperative work. Finally, the simulation community is concerned about the validity of its results compared to real crash tests. It is important to detect and minimize the sources of scattering in the results originating from the simulation model or process. This is also a precondition for stochastic simulation, which is used in order to optimize the crash behavior by varying input parameters. The goal of this thesis is to provide solutions to some of the issues just stated. First of all, the applicability of different scene graph APIs is evaluated for large time-dependent data sets. APIs such as Performer or Cosmo3D / OpenGL Optimizer have been developed to take advantage of multiprocessing. These APIs can perform model optimization during scene graph creation and benefit from multiprocessing, using frustum culling and occlusion culling while traversing the scene graph, to increase frame and interaction rates. Because of the large time-dependent databases and the limited memory of the workstations, an efficient scene graph design is very important in order to handle the complex data interdependencies and to achieve high rendering speed. While five years ago the models consisted of about 250,000 finite elements with nearly the same number of nodes, today the size of the models has almost quadrupled.
Since the element topology does not change in crash simulation, the connectivity of the finite elements needs to be stored just once. An index set representing the topology is shared across the sub-graphs of all time steps. A Gouraud-shaded surface requires the edge detection to be performed on the state in which the geometry is deformed most, which is in general the last one. In order to minimize memory consumption, the index set is used for both coordinates and normals. Therefore, coordinates at vertices with multiple normals need to be added once per normal. Other scene graph nodes, e.g. the one specifying the appearance, can also be shared. Handling triangular elements as degenerate quadrilaterals allows a mesh of three- and four-sided shell elements to be represented in one scene graph node. The prototype application named crashViewer, which was implemented to evaluate the methods developed in this thesis, uses Cosmo3D / OpenGL Optimizer for historical reasons: this bundle had been presented as the predecessor of the Fahrenheit project, which was announced in 1997 and aborted two years later. Nevertheless, the proposed scene graph design makes it possible to visualize 60 time steps of a model containing half a million elements and the same number of nodes with a memory consumption of 360 MB (flat-shaded) or 970 MB (Gouraud-shaded), given that a crease angle of 20 degrees leads to 40% more normals than vertices. One basic requirement for an interactive visualization application is that the frame rate does not fall below an acceptable minimum threshold. What "acceptable" means depends on the data and on experience with other tools. Generally speaking, visualization data should be rendered as fast as possible. For this reason, one aim of this work is to point out methods for accelerating the rendering of finite element meshes. Two approaches are examined: (1) concatenation of adjacent elements to reduce data redundancy during geometry processing, and (2) mesh simplification to remove information that does not significantly influence the shape of a car body part. Although OpenGL Optimizer provides a tri-stripper (opTriStripper) that is able to convert any polygonal mesh into strips of triangles, it is not applicable in this field because the original mesh structure should still be visible in wireframe mode. Hence, a quadrilateral stripper was developed which analyzes the mesh structure and generates many parallel bands of maximum length. Admittedly, quad-strips are not as versatile as tri-strips because each turn costs two extra vertices. However, compared to opTriStripper's reduction to 63.5%, the proposed bandification algorithm reduces the number of referenced vertices to 54% of an unstripped representation, averaged over 3,274 car body parts. Depending on the availability of vertex arrays, the bandification leads to a rendering speed-up factor of about 4.5 without and 1.7 with vertex arrays, in comparison to the unstripped geometry. In order to achieve even higher frame rates during camera interaction, a two-stage level-of-detail concept is developed. In addition to a fine level displaying the original mesh resolution, which is essential for the visualization in pre- and post-processing of structural mechanics simulation, a second level with coarse triangles is used as an intermediate model for camera movement. Each time the user modifies the view, a previously simplified mesh is rendered until the camera parameters are no longer changed. Then the finer level of detail is displayed.
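The quoted memory figures can be checked with a back-of-envelope calculation (assuming 4-byte floats, three components per coordinate or normal, and the 40% extra normals at a 20-degree crease angle stated above; index sets and other scene graph overhead are ignored, so this is only a rough consistency check):

```python
STATES, NODES, FLOAT = 60, 500_000, 4          # numbers quoted in the text

# flat shading: one coordinate set per time step, no per-vertex normals
coords_flat = STATES * NODES * 3 * FLOAT
print(coords_flat / 1e6, "MB flat-shaded")     # -> 360.0 MB, as reported

# Gouraud shading: ~40% more normals than vertices, coordinates duplicated
# once per extra normal, plus one normal per stored vertex
verts_gouraud = int(NODES * 1.4)
coords_plus_normals = STATES * verts_gouraud * 3 * FLOAT * 2
print(coords_plus_normals / 1e6, "MB Gouraud-shaded")
# -> 1008 MB, the same ballpark as the ~970 MB reported
```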
A simplification algorithm was implemented which uses the one-sided Hausdorff distance as an error measure; it is compared to the Successive Relaxation Algorithm provided by OpenGL Optimizer as opSRASimplify. Apart from the interface of opSRASimplify, which turns out to be unsuitable for obtaining an optimally decimated mesh with respect to a predetermined error tolerance, the resulting mesh quality is also not as high as with the new HECSimplify simplifier. Breaking the error criterion can be avoided by defining an appropriate cost function for opSRASimplify, but this causes fewer triangles to be removed compared to HECSimplify. If the decimation target is instead specified so as to achieve the same level of reduction, then the resulting triangle mesh contains gaps and the model appears distorted. Different car body models are reduced to 9-18% of the original number of triangles by applying HECSimplify. This leads to a rendering speed-up factor of between 3.4 and 6.7. There is a trade-off between rendering speed and memory consumption. HECSimplify only applies half-edge collapses to the polygonal mesh. These operations just modify the topology, not the vertex coordinates. Therefore, the original and the reduced mesh are able to refer to the same set of coordinates. On the one hand, coordinate set sharing requires less memory; on the other hand, the speed-up factor of a model that could be reduced to 9% of the original triangles is far below 11. Both modules, the quad-stripper and the simplifier, are embedded into the Cosmo3D / OpenGL Optimizer framework by providing corresponding action objects and new scene graph nodes. Furthermore, the scene graph API was extended by several other new objects to overcome restrictions with line picking or to provide new functionality. For example, the csClipGroup node enables the user to control a freely movable clip-plane which affects only the underlying sub-graph. This provides better insight into heavily deformed car body structures. Another form of clipping developed within this thesis uses one-dimensional RGBA texture maps in order to hide geometry that should not be displayed under certain conditions. This visualization method can be used, for example, to mask out those model regions that do not correspond to a given critical value range. In combination with distance values to adjacent car body parts, the rendering can be restricted to potential flange regions of the model. For that purpose, the parameter first has to be transformed into a texture coordinate with respect to the specified parameter range. Then, a color scale is defined as a texture map. Using the GL_DECAL environment, the color coding can be limited to regions of certain value ranges. Using the same mechanism in the GL_MODULATE environment with the alpha test enabled allows the geometry to be clipped where the corresponding values map into texture regions whose alpha component is set to be fully transparent. This technique facilitates the accentuation of critical structures, because the user is able to interactively modify the texture map or, if an index texture is used to control the visualization, the texture lookup table. As stated above, the most important change in finite element modeling for crash simulation was the introduction of independently meshed car body parts.
Since the assembly of such inhomogeneous meshes may include perforations or penetrations, for example caused by a shifted discretization along curved flanges, there was a growing demand for an interactive method to detect and remove this kind of mesh error. Perforating regions can be detected by applying collision detection to the finite element model. Efficient collision detection as proposed by Gottschalk et al. requires a hierarchical sub-structuring of the car body model. Consequently, in this study, each car body part is subdivided by a bounding volume tree (BVT). Different bounding volume types are tested: spheres, axis-aligned bounding boxes (AABB), and oriented bounding boxes (OBB). For perforation detection, where computation-intensive element-element intersection tests are necessary at the lowest BVT level, OBBs turn out to be most efficient because they fit very tightly around the element structure. Hence, downward traversals in the BVT can often be terminated early, because one of the 15 candidate separating axes, which defines a plane disjoining both volumes, can be found. In order to detect penetrating nodes, the minimum node-element distances need to be computed. The BVT traversal algorithm is adapted appropriately: children of a BVT node are visited only if their distance falls below a specified maximum distance of interest. Penetrating node-element pairs are collected in a list for subsequent visualization. Using an interface to the original PAM-CRASH algorithm for penetration removal, it is possible to provide the desired interactive mechanism, which even allows the engineers to restrict mesh modification to selected car body parts. This is a big advantage over starting the simulation, letting initial forces move penetrating nodes away from penetrated elements, and writing the computed mesh modification back into the input data deck. First, the interactive method gives direct feedback and the engineers do not have to switch tools. Second, during the optimization of the car body structure by replacing single car body parts with variants, this procedure enables the engineers to keep everything but the variant part fixed. Thus, only the nodes of the incoming part are aligned to their neighborhood, and the confined modification makes it easier to compare the results of two simulation runs. What is more, the bounding volume hierarchy benefits many other tasks in the pre- and post-processing of crash-worthiness simulation. It can be used to detect and follow flanges automatically or to spot flange regions that have been only inadequately connected. A basic task is the verification of connecting elements. For example, a spotweld must not exceed a maximum distance to the parts it is supposed to connect. Otherwise, a pending spotweld may suspend the simulation run, delaying the development process and raising costs. In this study, several criteria for the validation of connecting elements are elaborated. Erroneous spotweld elements are emphasized by different colors and/or geometry. The engineers are guided successively to each problematic connection. Without this feature it would not be possible to find these model errors in such a short time. Furthermore, a method is developed that enables the computation engineers to effectively add missing connection information, in the form of spotwelds, to the input model. Thus, it even becomes practicable to start crash-worthiness simulations before detailed connection data are available from the CAD data.
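The hierarchical culling that makes these distance and collision queries fast can be sketched as follows; the sketch builds a simple axis-aligned bounding-box tree and performs only the broad phase (candidate collection within a tolerance), leaving out the OBB variant and the exact node-to-element distance computation described above.

```python
import numpy as np

class AABBNode:
    """Axis-aligned bounding-box tree over finite elements.  Broad phase
    only: it returns candidate elements whose boxes lie within a tolerance
    of a query node; exact node-element tests would follow on these."""
    def __init__(self, element_ids, boxes, leaf_size=8):
        # boxes: (n_elements, 2, 3) array of per-element [min, max] corners
        self.lo = boxes[element_ids, 0].min(axis=0)
        self.hi = boxes[element_ids, 1].max(axis=0)
        if len(element_ids) <= leaf_size:
            self.elements, self.children = element_ids, None
        else:
            axis = np.argmax(self.hi - self.lo)          # split longest axis
            centers = boxes[element_ids].mean(axis=1)[:, axis]
            order = element_ids[np.argsort(centers)]
            half = len(order) // 2
            self.elements = None
            self.children = (AABBNode(order[:half], boxes, leaf_size),
                             AABBNode(order[half:], boxes, leaf_size))

    def box_distance(self, p):
        d = np.maximum(self.lo - p, 0) + np.maximum(p - self.hi, 0)
        return np.linalg.norm(d)

    def query(self, p, tol, hits):
        if self.box_distance(p) > tol:      # prune the whole subtree early
            return
        if self.children is None:
            hits.extend(self.elements.tolist())
        else:
            for child in self.children:
                child.query(p, tol, hits)

def candidate_pairs(nodes, boxes, tol=1.0):
    """For every node coordinate, list the elements whose bounding boxes lie
    within `tol` -- the candidates for an exact penetration test."""
    tree = AABBNode(np.arange(len(boxes)), boxes)
    result = {}
    for i, p in enumerate(nodes):
        hits = []
        tree.query(p, tol, hits)
        result[i] = hits
    return result
```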
One aim in virtual vehicle development is to combine the results of several areas of numerical simulation. In order to map the real development chain closely onto the virtual one, the influence of preliminary manufacturing steps such as forming on the material properties has to be considered. For example, a deep-drawn blank sheet has a lower thickness in areas of high curvature than in other areas. As long as the material of a car body in crash simulation is assumed to be constant, the aforementioned manufacturing influences cannot be properly represented. A hardware-based method is developed in this thesis for the efficient mapping of any type of data between incompatible meshes that are geometrically congruent. It utilizes the transformation and interpolation capabilities of the graphics subsystem. Element identifiers of one mesh are color-coded. For each element of the other mesh the view matrix is set up appropriately and the visible part of the first mesh is rendered. The colors that represent the element IDs can be read back into main memory. After the correlation between elements has been established in graphics hardware, the values are finally transferred in software. The post-processing of crash-worthiness simulation results necessitates the handling of large data sets. Since a binary result file may contain 2 GB of data while an engineer's workstation is often limited to 1 GB of main memory, these boundary conditions need to be considered when designing the data structures and algorithms of visualization software for this application area. On the other hand, interactivity and high rendering performance are a precondition for acceptance by the user. The required tool should provide interaction mechanisms that assist the user in exploring and navigating through the data. Above all, it should help to interpret the data by making the invisible visible. Besides an effective scene graph design, the internal data structures of the prototype application developed in this study have been implemented with memory consumption in mind. Parameter transfer in post-processing is done state by state, very quickly, via pointer-based data structures. The extensive use of texture mapping enhances the rendering performance. Visualization techniques are proposed that use textures for the direct mapping of scalar values onto the car body geometry, for the animated display of vector data, and for the visual discretization of the finite element mesh in the form of a wireframe texture map. All these approaches spare the transformation stage of the graphics pipeline additional processing of vertex-based data. For example, the traditional display method for shaded geometry with visible element borders is two-pass rendering, which halves the frame rate. Applying a black-bordered luminance texture, which is white inside, onto each geometric primitive balances the load between the geometry and texture units. Likewise, encoding a vector's direction by applying an animated texture onto a line reduces the geometry load and leaves the underlying structure largely visible, in contrast to conventional vector visualization with arrows. Force flux visualization, first presented by Kuschfeldt et al., gives an overview of which components of the car body model absorb or transfer forces. It is necessary to detect and to understand the force progression within the car body structure. For example, the longitudinal structures within the front part of a car body play an important role in increasing the ability of the body to absorb forces in a frontal crash.
Force flux visualization, first presented by Kuschfeldt et al., gives an overview of which components of the car body model absorb or transfer forces. It is necessary to detect and to understand the force progression within the car body structure; for example, the longitudinal structures within the front part of a car body play an important role in increasing the ability of the body to absorb forces in a frontal crash. Force flux visualization enables the engineers to design car components with an optimal crash behavior, and this technique was made available for interactive daily use in crash simulation analysis. Providing a dedicated interaction mechanism, the prototype application allows the user to interactively define a trace-line along which the force flux can be visualized. For each section plane, positioned at small intervals perpendicular to the trace-line, the simulated node forces are accumulated. The resampling is accelerated by utilizing the bounding volume hierarchy. Each section force sum is then represented by the color and radius of one ring of a tube around the trace-line. The dynamic trace-line definition aligns the force tube to the deforming structure of the analyzed car body part, for example a longitudinal mounting. The specified trace-lines can be stored in order to precompute force tubes offline; this can be done by another prototype application in batch processing after the simulation has finished. During a visualization session the precomputed values can be directly converted into time-dependent force tube representations. This decoupling of time-consuming computation from interactive visualization further accelerates the analysis of crash-worthiness simulations. Starting multiple simulation runs with the same input data deck will produce different results. The scatter in the results has to be minimized in order to be able to evaluate the influence of structural modifications. This work presents a method to detect and visualize instabilities of the simulation. The texture-based visualization described above points out sources of instability and helps the engineers to determine whether a branching is caused by the model structure or originates from the solver. The pros and cons of different measurement functions are discussed. Furthermore, a CORBA-based synchronization of multiple viewers displaying different data sets is presented. It makes it possible to analyze the simulation results of one run in direct comparison with those of other runs and is very useful for viewing the differences in crash behavior of multiple car body models on one workstation. Moreover, this functionality can be used in combination with a telephone call to replace a meeting between a computation engineer and an external supplier. The visualization is done locally on each client. Provided that data and software are available at each participating client, the only data that have to be transferred during a cooperative session are the events triggered at the steering master-client and propagated to one or more slave-clients. A master token decides which participant is allowed to send generated events to the other instances; this mechanism avoids conflicting camera control when multiple users try to modify their view at the same time. Another approach describes how an image-based client-server model can be used in this context. After a frame has been rendered on the server, it is encoded to reduce the amount of data. The encoded image stream is transferred to any client that is able to decode and display it. This places fewer requirements on the client, but the connection needs to provide sufficient bandwidth. This scenario can also be used for remote visualization: the prototype crashViewer can be connected to a Java applet running inside a web browser.
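To make the cooperative-session mechanism described above more concrete, the following Python fragment is a minimal sketch of master-token event propagation between viewers. It is purely illustrative: the thesis uses CORBA for this synchronization, which is not reproduced here, and the class and method names (Viewer, Session, broadcast) are assumptions.

    # Minimal sketch of master-token event synchronization between viewers:
    # only the token holder may broadcast steering events, which the other
    # clients replay on their locally loaded data.

    class Viewer:
        def __init__(self, name: str):
            self.name = name
            self.camera = {"azimuth": 0.0, "elevation": 0.0, "zoom": 1.0}

        def apply_event(self, event: dict) -> None:
            # Replay a steering event (here: a camera change) on the local scene.
            self.camera.update(event)

    class Session:
        def __init__(self, viewers):
            self.viewers = viewers
            self.token_holder = viewers[0]      # initial master-client

        def request_token(self, viewer) -> None:
            self.token_holder = viewer          # simplistic hand-over policy

        def broadcast(self, sender, event: dict) -> bool:
            if sender is not self.token_holder:
                return False                    # avoids conflicting camera control
            for v in self.viewers:
                if v is not sender:
                    v.apply_event(event)
            return True

    # Usage: the master rotates the camera and the slave viewers follow.
    master, s1, s2 = Viewer("master"), Viewer("slave1"), Viewer("slave2")
    session = Session([master, s1, s2])
    session.broadcast(master, {"azimuth": 30.0})
    assert s1.camera["azimuth"] == 30.0 and s2.camera["azimuth"] == 30.0

Only such lightweight events travel over the network in the cooperative setup; model data and rendering remain local to each participant, as described above.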
Finally, a method for the standardized analysis of crash-worthiness simulations is presented: a batch-processing prototype application has been developed to generate digital movies using a predetermined camera path. The contributions of this thesis aim at further accelerating the virtual vehicle development process, for example by introducing new interaction mechanisms, making extensive use of hierarchical data structures, employing hardware-accelerated visualization techniques, and providing solutions for process automation.
In: Gratchev, D. A. 2004, 'Problematika termina abstraktnyj avtor i charakternye certy abstraktnych avtorov v russkich bol'sich narrativach 20-30-ch godov XX veka' [The problematics of the term 'abstract author' and the characteristic features of abstract authors in large Russian narratives of the 1920s and 1930s], Doctor of Philosophy.
This study is based on the twin exigencies of introducing the concept of the abstract author into the scheme of narrative construction, and of distinguishing sharply between this object of analysis and the concepts of the concrete author and the narrator. In accordance with the definition advanced by Wolf Schmid, whose vision of the issues of narrative instances appears most judicious, the abstract author ('der abstrakte Autor') can be defined as 'the principle that, in a work, determines the articulatory layer, the semantic layer, and the layer of the objectivity deployed, as well as the aesthetic organisation and the hierarchy of these layers in the total structure in one specific way' ('dasjenige Prinzip, das in einem Werk die sprachlautliche Schicht, die Bedeutungsschicht und die Schicht der dargestellten Gegenständlichkeiten sowie die ästhetische Organisation und Hierarchie dieser Schichten in der Gesamtstruktur so und nicht anders beschaffen sein lässt', Schmid 1973, S. 24). More succinctly and generally, the abstract author (henceforth: AA) is the principle according to which the meaning of a literary work is constructed. If we wish to switch from phenomenological to structuralist terminology, we can define the AA as the construction principle of the paradigmatic elements of the work. The AA is thus fundamentally distinguished both from the concrete author and from the narrator in whose name (voice) the story is told – the latter itself being a creation of the AA. This means that the AA is not directly represented in the text, since it is a virtual construction that has to be reconstructed. Naturally, this considerably complicates a study devoted to the reconstruction of various types of AAs in concrete literary texts (in our case, great Russian prose forms of the 1920s and 1930s), since such a study cannot presuppose the very principle of reconstruction according to which the AA is to be reconstructed. Taking into account that a) no single reconstruction principle could ever be explained exhaustively, and b) various interpretations of facts and motives are possible even within a single analytical doctrine, the final result of this kind of reconstruction will unavoidably contain fairly controversial or debatable aspects. In principle, a certain objectivity could be achieved by blending a large number of different analytical strategies in order to arrive at a reconstruction of the AA, but since this kind of undertaking cannot be carried out within the framework of a dissertation project, one is obliged to accept a priori a certain sketchiness in the results obtained. As regards methodology, we considered it best to base our undertaking on a structuralist approach. This does not mean, however, that we regard structuralism as a methodological panacea. Our choice was ultimately determined by the fact that the conclusions reached on the basis of structuralist analysis are highly illustrative, in the sense that the mechanism of deduction can be represented in the form of logically unambiguous causality. At points where the conclusions of the structuralist approach appeared to us to be incomplete representations of a work's meaning, we resorted to other approaches. In order to analyse texts from the period in question (the 1920s and 1930s) we applied an analysis model first presented by B. A. Uspensky and subsequently elaborated by W. Schmid, albeit with a few specifications, which will be discussed shortly.
The Uspensky-Schmid model is based on the division and analysis of the narrative into five levels: spatial, temporal, phraseological, psychological and ideological. It is a rather economical and practical scheme which nevertheless provides a thorough analysis. Our refinement concerns only the last, ideological, level. We acknowledge J. Lintvelt's view (Lintvelt 1981), which does not see this as a separate layer, basing his argument on its intertwining with the other levels. However, W. Schmid insisted on its retention, indicating that it could also manifest itself independently of the other levels, namely as direct, explicit evaluation. In that case the ideological level is a facultative phenomenon, functional in the narrative scheme of the text only when there are explicit ideological utterances. We propose using an older definition of ideology set down by A. J. Greimas and J. Courtés, who, in Sémiotique. Dictionnaire raisonné de la théorie du langage (1979), define ideology as the syntagmatic aspect of the taxonomic concept of axiology. The acceptance of this definition brings with it a number of important consequences: 1) Since the narrative text, a product of subjective consciousness, inevitably consists of axiologically meaningful relationships, the axiological level is an immanent element of every narrative text, independent of explicitly manifested ideological rhetoric. 2a) Regardless of its intertwining with the syntagmatics of the other narrative levels, there remains the possibility of a paradigmatic reconstruction of the ideology, based on the axiomatics of these narrative levels. 2b) If the analysis of the other four levels is correctly executed, the ideological component will inevitably be the most concise, since it merely summarises all the conclusions drawn from the analyses of the other levels and brings them into the required equilibrium. To formulate it slightly differently, analysis means the exposure of the axiologically meaningful relationships that have been imposed on the text by the abstract author. Additionally, the mutual hierarchy of the narrative instances must be determined further, a specification one must consistently take into account as one reconstructs the AA. The point at issue here is the more detailed determination of the hierarchical disposition of the instances of abstract author and abstract reader.
The essence of the matter is that W. Booth (Booth 1961) recognised the usefulness of designing a system of narrative hierarchy within communicative interaction. He defined a receptive side for each of the positions: in his scheme, the concrete author (Flesh-and-Blood Author) was correlated with the concrete reader (Flesh-and-Blood Re-Creator), the narrator (or, in his terminology, the Teller of This Story) was correlated with the fictive reader (Credulous Listener), and, finally, the abstract author (Implied Author) was correlated with the Postulated Reader, or the 'abstract reader', as Wolf Schmid would later call him. In Schmid's view, the abstract reader is the 'ideal recipient of the author', a definition with which we entirely agree. In our opinion, however, this does not apply to the phylogenetic constituent of this concept as Schmid tends to present it. In his view, the picture of the abstract reader seems to be determined in advance by the corresponding structural configurations of the work; in other words, it is a more or less passive communicative duplication of the AA. Closer examination indicates, however, that this reasoning implicitly postulates the objectivity of the semantic configurations in the text; in other words, it presumes that the full (all-embracing) meaning of the work is not only given a priori but is also materially present in the text components themselves. In reality, however, the full meaning (and here we concur with W. Iser) is realised by the reader, who fills in, as it were, the gaps in meaning intentionally or unintentionally embedded in the work by the author. In theory there are an infinitely large number of such gaps and, correspondingly, every time a reader fills in a different number or group of gaps one can speak of a different structure of the total meaning of the work. Only God is capable of filling in all the gaps, which makes Him the most ideal recipient to figure in all models of narrative instances according to the communicative model. Nonetheless, we must also take into consideration the perhaps less obvious fact that the text whose meaning is to be reconstructed in the analysis is not a protoplasmic entity but the product of certain conventions or analytic procedures. Both the conventions and the analytic procedures applied to the text belong to the competencies of the abstract reader. We shall explain this in more detail below. When dealing with, for example, Pushkin's novel Yevgeny Onegin, it is clear that the text itself provides no answer to the question of whether the work has been completed. Our decision to regard it as finished or unfinished affects its significance and, correspondingly, the picture of the AA. In our opinion, it is the instance of the abstract reader that is responsible for the decision concerning the boundaries of the text; in other words, the decision to limit interpretative activity to eight chapters, or ten, or to state, as a matter of principle, that the work has only one boundary – a beginning. In each of these cases the complete meaning of the work will have a different structure. The same argument can be applied to varying editions of one and the same work. Consider the case of collected stories: how can one correctly determine this text's boundaries? Should we reconstruct each story's AA, so that something like a portrait gallery is created, or is it more sensible to regard a collection of stories as a single text and to reconstruct an integral AA on the basis of all the stories? Or, as in the case of the novel Moscow by Andrei Bely analysed in this dissertation: is it valid to regard the three sections of this work as separate texts – after all, they were published as separate books at different times, and the stylistic variations are evident? Is it valid to speak about an AA as a self-contained concept in view of the fact that the novel actually remained unfinished due to Bely's death? Should Moscow be supplemented by the novel Petersburg by the same author because it is apparently symmetrical to Moscow? In that case, should we not also expand the notion of the analysed text to include the novel The Silver Dove, the first part of an unwritten trilogy of which the novel Petersburg is the second part, and so on? Such questions are allied to the concept of text itself and can arise in infinite shapes and quantities; it is clear that the AA's structure depends on how we respond to them. In our opinion the answers to such questions fall within the competency of the abstract reader, who is, par excellence, sensitive to the literary work's nuances of meaning. For this reason we suggest representing the relationship between the abstract author and the abstract reader as an opposition between the principle to be reconstructed and the reconstructing principle, which presupposes a corresponding hierarchical disposition between the two. Furthermore, more precision is needed with respect to the analytical methods used in this study.
Taking into account the fact that, ideally, the perfectly accurate researcher should coincide with the abstract reader as outlined above, it is useful to call to mind the following aspects: a) the infinitely great competency of the abstract reader with regard to all intertextual connections of a given work and to all meaningful connections, in all their variations, between the work and the extra-literary world – from socio-political realia to the psychic circumstances of the concrete author; b) the infinite analytical flexibility of the abstract reader, who uses the greatest possible number of analytical methods in his interpretative strategy, aiming at the most complete reconstruction of the AA. Since, within the framework of a rather restricted study, it is not possible to present a more or less complete description of the AA of even a single work by means of even a single analytical approach, it is advisable to limit the analysis to a single feature, albeit one that is shared by the majority of the chosen texts. This means we have mainly confined our efforts to the particular construction which governed the generation of the selected individual texts and which we could provisionally call the abstract meta-author. We believe that the principle of negative anthropology, which – at least as regards Russian literature – was new in the first third of the twentieth century and which consists in the denial or explicit 'denigration' of all manifestations of the specifically human, constitutes this kind of integral concept. We must emphasise that this concept, at least in its basic features, is not a twentieth-century invention; it does, however, form a sharp contrast with the humanism-imbued literature of the nineteenth century. For the analysis of this attitude, large-scale works of prose (novels, short stories) were chosen as the most representative for the 1920s and 1930s. The basis of the selection was the pursuit of maximum diversity with regard to ideology (in the narrowest sense of the word), genre and stylistics, and pragmatics. Taking their fundamental principles into account, the texts were chosen from the following literary movements or paradigms: (post)symbolism (Bely, Moscow), skaz (Klyčkov, Čertuxinskij balakir'), (post)modernism (Nabokov, The Gift), and socialist realism (Gaidar, The Tale of the Military Secret). During the course of the analysis it became clear that the following two fundamental constructive principles nourishing the concept of negative anthropology could be identified in the above-mentioned texts.
1. Space destabilisation. In view of the fact that it is only through history that man realises himself as an intrinsic integrity, he is most easily marginalised in the most unequivocal, i.e., most effective, way in a universe in which history in the usual sense of the term is seriously problematised by spatio-temporal ambivalence. The spatio-temporal continuum evaporates in this set-up, which may manifest itself in various ways but essentially involves the same mechanism.
In some texts, normally seen as belonging to the modernistic paradigms (in our case, Moscow, Čertuxinskij balakir', The Gift), a destabilisation of the normal world view has occurred and this is more or less evident to the reader: the attributes of a certain point in space can easily belong to a different point, just as the attributes of a certain object can turn out to be the attributes of a different object. One spatial area can be projected upon a different spatial area, and, in such cases, the boundaries between the areas become so transparent that distinction between them is no longer possible. All objects and points in this kind of space enter, as it were, into relationships of mutual equivalence, or if we regard it in semiotic terms, all objects enter into relationships of crosswise reference without having an unambiguously phraseable singular denotation. Another way to destabilise space, however paradoxical it may sound, is by structuring space by means of mythopoetic patterns. We believe that mythopoetic structures occur in every narrative text, which seems largely self-evident. In view of the fact that in narrative texts we deal with subjectivity pur sang, it is perfectly logical that this subjectivity will lend varying axiological colour to the different segments of space. In conjunction with our cognitive schemes, i.e., the structure of our brains, this colour is generated according to the principle of binary opposition. In this way each narrative space has an axiological marking on the basis of duality (high-low, here-there, city-country, etc.), for which in historical terms the priority lies with the myth as the first (spontaneous) project in human history to be given structure. The issue is merely one concerning the extent to which this mythopoetic – or as one may prefer, quasi-mythopoetic – scheme becomes manifest, and even the rather confined analysis we have performed demonstrates that this is largely the situation in Russian prose of the 1920s and 1930s. It is understandable that in both cases space destabilisation results in the elimination of the human subject. In the former case, when space is characterised by a high degree of relativity, man adopts in a metonymic way space's capacity to undergo all kinds of metamorphoses whose degree of radicalness can vary: from the possibility of metempsychosis, as in the case of the reincarnated protagonists in Nabokov's The Gift, to the division of protagonists in a synchronous system of look-alikes, of which each one has a role in the distinction of meaning and which only begin to acquire consolidated meaning when they have been conceptually united (cf. the Korobkin brothers, Mandro – Dromarden, Lizaša – Leonora, and Kierko – Titelev in Moscow; Ul'jana – Maria in Čertuxinskij balakir'; the whole herd of doubles in The Gift). In the latter case, when space has been structured according to the myth, for a number of reasons man is also eliminated: a) man, as we know him, necessarily realises himself in history (only God realises himself in infinity), whereas myth knows no linearity and therefore no history.
Thus only a certain notion of man, or a model of subjectivity, can possibly realise itself in myth, but not man as such; b) the concept of realisation itself (of man or of other objects that occupy mythical space) is weakened here by virtue of the fact that determinism rules in myth – a genuine paradigmatic formation – which substantially weakens the independence and the responsibility of the protagonists somehow engaged in realising themselves; c) a point which directly combines a) and b): myth, which does not recognise the singularity of the here and now, does not accommodate the aspect of subjectivity, which is a constitutive element of man (and without this relationship there can be no subjectivity). 2. Theatricalisation of narrative space. An important factor in the construction of the great narrative forms of this period is the theatricalisation of space: in some cases the accent is placed on an analogous segmentation of space (Gaidar), when, for example, the entire adventurous part of the narrative is linked to one concrete topos while the lyrical part, as a whole, is connected to another area, etc.; in other words, the narrative space, just like theatre space, is divided into semiologically clearly delineated segments. In other cases the theatricalisation can be achieved by assigning purely dramatic characteristics to the protagonists. In this latter case there is a strikingly varied list of procedures that can be applied to create a 'theatrical text' in which the majority of the protagonists, or even all of them, are assigned a role. The most prominent technique is the construction of a character on the basis of a marionette or automaton (Bely), with the corresponding imitation of its expressiveness and speech, which become isomorphous with the discrete, emphatically affected expressiveness of a doll: the character disintegrates into dissociated sememes and is held together only by the context. In such cases, to emphasise the artificiality, a complicated, deviating syntax and an extremely extensive vocabulary of occasionalisms are employed. A less conspicuous strategy (found in Gaidar and also in many socialist-realist authors of the 1920s and 1930s) consists of a return to the constructive configurations of pre-realistic theatre, in which the only possible actors are masks or, to use a more recent term, types, whose dynamics are determined entirely by the fabula and not by any intrinsic structure or stratification of character. Each of them has an ontologically determined role (of course, this concerns only the ontology of that specific space), and the mode of existence here is such that there are no a priori opportunities to switch roles; this space simply does not enable this kind of transformation. A natural effect of this type of situation is the extremely normative behaviour of the characters in both their actions and their verbal expressions. Another method of desecrating narrative space is rooted in the symbolist paradigm (or, to be more precise, in the paradigm of early Russian symbolism), which is typified by the representation of this world as a close-knit semiotic universe whose characters refer to a supratextual substance that governs this world. A consequence of this worldview is the acceptance of fairly strict notions of determinism and of its unavoidable companion, destiny.
The concept of destiny assumes a certain marked role for each of the characters; after all, a complete behavioural paradigm (as regards destiny) can be created for a (marked) role, whereas this is impossible for the vital realisation of a person in his existence: in this latter case only the syntagmatic logical coherence can be determined, and that coherence is by definition incomplete. The narrative in The Gift (as in several other of Nabokov's novels) is constructed in this way, i.e., in the functional-behavioural terms of destiny. This is also the case in Čertuxinskij balakir' by Klyčkov, in which the centre of the narrative is occupied by a kind of minus-type: a character who not only lacks psychology (in so far as this kind of reduction is possible for a human being), but also lacks every manifestation of his own will that could testify to even an illusory independence from the functional universe. We must bear in mind that a person's dramatic accessories (e.g., those of a person on stage) are essentially emancipated from existence and as a result attest revealingly to the nature of the processes that eliminate the human from the prose of the era. The observation of AA structures in Russian prose of the 1920s and 1930s thus offers the opportunity to bring to light a set of implicit features of these texts, features which manifest themselves in the first third of the twentieth century and which consequently enabled a radical qualitative change in the entire structure of Russian prose. If executed with sufficient accuracy, the reconstruction of AAs in works from different periods can offer new insight into the history of literature. In more precise terms, it can open up a new history of literature: a history engaged with the diachronic modification of the models according to which creative texts are generated at certain points in time.