pt. 1. Historical and societal aspects of healthcare professions -- pt. 2. Learning and teaching healthcare professionalism -- pt. 3. Transformation of healthcare professions -- pt. 4. Professional leadership and team decision-making in healthcare -- pt. 5. Ethical challenges to healthcare professionalism.
Intro -- Preface -- Part I: Contexts of Professional Practice at the End of Life -- Part II: Professional Standards at the End of Life -- Part III: Treatment Preferences and Professional Companionship at the End of Life -- Part IV: Joint Discussion Paper -- Acknowledgements -- Contents -- Chapter 1: Introduction -- 1.1 Health-Care Professions and End-of-Life Care -- 1.2 Professional Ethics at the End of Life -- References -- Part I: Contexts of Professional Practice at the End of Life -- Chapter 2: Palliative Care Between Certainty and Uncertainty. Which Philosophy of Death at the End of Life? -- 2.1 Introduction -- 2.2 Why Vladimir Jankélévitch? -- 2.3 Death Is a Paradox -- 2.4 Between Knowledge and Awareness, the Relationship to Death: First, Second and Third Person -- References -- Chapter 3: An Ethical Discussion of (Un-)Certainty at the End of Life: Exemplification by Means of Continuous Deep Sedation and Advance Directives -- 3.1 Introduction -- 3.2 (Un)Certainty at the End of Life: The Examples of Continuous Deep Sedation and Advance Directives -- 3.2.1 Continuous Deep Sedation -- 3.2.1.1 United Kingdom -- 3.2.1.2 France -- 3.2.1.3 Germany -- 3.2.2 Advance Directives -- 3.2.2.1 United Kingdom -- 3.2.2.2 France -- 3.2.2.3 Germany -- 3.3 Results of the International Comparison: A Cluster of (Un)Certainties -- 3.3.1 Continuous Deep Sedation: Epistemic, Diagnostic and Prognostic (Un)Certainty -- 3.3.2 Advance Directives: Epistemic, Linguistic, Relational and Existential (Un)Certainty -- 3.4 Discussion in the Light of Professional Medical Ethics -- 3.4.1 Dealing with (Un)Certainties in the Realm of Continuous Deep Sedation -- 3.4.2 Dealing with (Un)Certainties in the Realm of Advance Directives -- 3.4.3 General Perspectives on Dealing with (Un)Certainty -- 3.5 Conclusion -- Appendixes -- References.
BACKGROUND: Patient advocacy organizations (PAOs) have an increasing influence on health policy and biomedical research; questions therefore arise about the specific character of their responsibility: Can PAOs bear moral responsibility and, if so, to whom are they responsible, for what, and on which normative basis? Although the concept of responsibility in healthcare is widely discussed, PAOs in particular have rarely been systematically analyzed as morally responsible agents. The aim of the current paper is to analyze the character of PAOs' responsibility in order to provide guidance to PAOs themselves and to other stakeholders in healthcare.
METHODS: Responsibility is presented as a concept with four reference points: (1) the subject, (2) the object, (3) the addressee, and (4) the underlying normative standard. This four-point relationship is applied to PAOs, and the dimensions of collectivity and prospectivity are analyzed for each reference point.
RESULTS: Understood as collectives, PAOs are, in principle, capable of intentionality and able to act, and thus fulfill one prerequisite for the attribution of moral responsibility. Given their common mission to represent those affected, PAOs can be seen as responsible for patients' representation and advocacy, primarily towards a certain group but secondarily in a broader social context. Various legal and political statements, as well as the bioethical principles of justice, beneficence, and empowerment, can serve as a normative basis for attributing responsibility to PAOs.
CONCLUSIONS: Understanding responsibility as a four-point relation incorporating collective and forward-looking dimensions helps one better understand PAOs' roles and responsibilities. The analysis thus provides a basis for the debate about PAOs' contribution and cooperation in the healthcare sector.
In recent decades, managerial instruments have gained importance in medical decision-making, and the logic of managerialism is juxtaposed with the logic of medical professionalism. Recent changes in hospital employment structures raise the question of contradictory logics not only at the organizational but also at the individual level. We therefore investigate the rise of locum doctors, a relatively new phenomenon in Germany. Our qualitative interview study with 21 locum tenens physicians, permanently employed physicians, and chief physicians shows that locum physicians re-contextualize professional standards in hospitals. According to their self-perception, patient care remains at the center of their medical practice regardless of economic, bureaucratic, and hierarchical requirements as well as hospital-specific routines. We argue that the interrelationship between professionalism and managerialism exists not only within organizations but also at the individual level of locum doctors.
As society increasingly integrates artificial intelligence (AI) into its fabric, AI ethics education in primary schools becomes necessary. Drawing parallels between the integration of foundational subjects such as languages and mathematics and the pressing need for AI literacy, we argue for mandatory, age-appropriate AI education focusing on technical proficiency and ethical implications. Analogous to how sex and drug education prepare youth for real-world challenges and decisions, AI education is crucial for equipping students to navigate an AI-driven future responsibly. Our study delineates the ethical pillars, such as data privacy and unbiased algorithms, that are essential for students to grasp, and presents a framework for integrating AI literacy into elementary schools. What is needed is a comprehensive, dynamic, and evidence-based approach to AI education that prepares students for an AI-driven future.
Artificial Intelligence (AI) systems are increasingly being developed, and various applications are already used in medical practice. This development promises improvements in prediction, diagnostics, and treatment decisions. As one example, in the field of psychiatry, AI systems can already successfully detect markers of mental disorders such as depression. By using data from social media (e.g. Instagram or Twitter), users who are at risk of mental disorders can be identified. This potential of AI-based depression detectors (AIDD) opens up opportunities, such as quick and inexpensive diagnoses, but also raises ethical challenges, especially regarding users' autonomy. The focus of the presentation is on the autonomy-related ethical implications of AI systems that use social media data to identify users at high risk of suffering from depression. First, technical examples and potential usage scenarios of AIDD are introduced. Second, it is demonstrated that the traditional concept of patient autonomy according to Beauchamp and Childress does not fully account for the ethical implications associated with AIDD. Third, an extended concept of "Health-Related Digital Autonomy" (HRDA) is presented, and its conceptual aspects and normative criteria are discussed. As a result, HRDA covers the elusive area between social media users and patients.