The Center for Reflected Text Analytics (CRETA) develops interdisciplinary mixed methods for text analytics in the research fields of the digital humanities. This volume is a collection of text analyses from specialty fields including literary studies, linguistics, the social sciences, and philosophy. It thus offers an overview of the methodology of the reflected algorithmic analysis of literary and non-literary texts.
This article investigates the integration of machine learning into the political claim annotation workflow with the goal of partially automating the annotation and analysis of large text corpora. It introduces the MARDY annotation environment and presents results from an experiment comparing the annotation quality of annotators with and without machine-learning-based annotation support. The design and setting aim to measure and evaluate: a) annotation speed; b) annotation quality; and c) applicability to the use case of discourse network generation. While the results indicate only slight increases in annotation speed, the authors find a moderate boost in annotation quality. Additionally, with the help of manual annotation of the actors and filtering out of false positives, the machine-learning-based annotation suggestions allow the authors to fully recover the core network of the discourse as extracted from the articles annotated during the experiment. This is due to the redundancy naturally present in the annotated texts. Thus, assuming a research focus not on the complete network but on the network core, AI-based annotation can provide reliable information about discourse networks with much less human intervention than the traditional manual approach.
This study was conceived in the context of a project course on CSR instruments and modelled on a real case. The German Federal Ministry of Labour and Social Affairs (BMAS) had tendered a report that would provide an overview of the existing classification tools used to evaluate CSR performance. The report is part of the National Strategy on Corporate Social Responsibility, the CSR action plan of the German government, for which the BMAS is the lead ministry responsible for implementation. The Action Plan for CSR was adopted by the federal cabinet on October 6th, 2010. According to the Action Plan, CSR measures comprise a company's voluntary contributions to integrating social responsibility into its business plan, and the inclusion of a company's key stakeholders is an integral part of CSR. The goal was not only to provide an overview of the principles and international agreements but also to classify existing instruments from different stakeholders' perspectives. For this, students analysed 58 CSR instruments, for which they developed a diverse set of evaluation criteria. Moreover, they tested their assumptions about key stakeholders' interests in a small workshop, interviewing representatives of trade unions, chambers of commerce, consumer groups, as well as NGOs. The result is a very impressive piece of work, as it presents new insights into a largely under-researched field. The report was presented to the working group on evaluation of the federal CSR forum in Berlin on June 6th, 2012, where it received a lot of praise from the participants. I hope it will be used in the professional world.