Utilising Diversity To Achieve Water Equity
In: Rural Society, Vol. 18, Issue 3, pp. 244-254
ISSN: 2204-0536
18 results
In: Rural society: the journal of research into rural social issues in Australia, Vol. 18, Issue 3, pp. 244-254
ISSN: 1037-1656
In: The International Journal of Environmental, Cultural, Economic, and Social Sustainability: Annual Review, Vol. 3, Issue 3, pp. 9-18
In: Risk analysis: an international journal, Vol. 44, Issue 6, pp. 1325-1338
ISSN: 1539-6924
Abstract: Risk analysis of new and emerging technologies requires innovative approaches that are agile, exploratory, and can accommodate broad stakeholder engagement and perspectives. Existing theories of risk governance and responsible innovation suggest that operationalizing guiding principles for engagement such as inclusion and reflection may provide a useful approach to the risk analysis of these technologies. Yet, methodologies to systematically assess how we might operationalize such guiding principles in risk analysis are limited in existing risk research. We contribute to filling this gap by demonstrating a practical methodology for examining and documenting how research and development (R&D) professionals operationalize inclusion and reflection in risk analysis and what value this provides to risk analysis in the R&D context. We use the Australian nanotechnology R&D sector as our case study, interviewing 28 experts to examine how R&D professionals have operationalized inclusion and reflection into their risk analysis practices, generating three findings. First, we describe how our research design enables the successful translation of theory into a methodology that supports an empirical assessment of the integration of these guiding principles into risk analysis practice. Second, we argue that successfully and systematically integrating inclusion and reflection in risk analysis fosters a wider understanding and identification of risk through the activation of multi-actor and multi-institutional stakeholder engagement processes. Third, we outline how this research depicts the outward-facing and introspective nature of risk analysis.
In: Social epistemology: a journal of knowledge, culture and policy, Vol. 28, Issue 3-4, pp. 239-257
ISSN: 1464-5297
In: Rural society, Vol. 18, Issue 2, pp. 103-116
In: Rural society: the journal of research into rural social issues in Australia, Vol. 18, Issue 2, pp. 103-116
ISSN: 1037-1656
In: (2023) 46(1) University of New South Wales Law Journal (Forthcoming)
SSRN
Stakeholder analysis and engagement are a central tenet for understanding and solving sustainability challenges, and are applied widely in environmental and natural resource management (ENRM). The practice in ENRM follows translation of stakeholder theory from its origins in business management to the sustainability sector. In this analytical essay we explore key concepts in ENRM research and practice to examine complexities that have accompanied this translation to ENRM. In particular, we consider the centrality of stakeholders' landscape perspectives in defining their stake in ENRM issues, and through this lens examine the limitations that are inherent in the classic 'hub-and-spoke' model of stakeholder analysis that is the theoretical underpinning for ENRM stakeholder analysis and engagement practice. We argue that unlike the traditional business context, where both power and perspective are centred on the business entity that then defines other stakeholders in reference to itself, in ENRM stakeholder relations are centred on an ENRM issue, typically a landscape or the implications of policy change on a landscape. As a consequence, decision-making power is decentred onto one of several stakeholders, often a government or other high-power entity, implicitly conferring privilege to those powerful stakeholders' landscape perspectives over those held by low-power stakeholders. We conclude with priorities for foregrounding power and explicating landscape perspectives to identify privilege in ENRM. We direct these insights especially to those ENRM actors who hold the dual roles of adjudicator and privileged stakeholder, so that they do not inadvertently perpetuate power imbalances by aligning their decision-making power with their own landscape perspectives.
BASE
In: AI and ethics
ISSN: 2730-5961
Abstract: The term 'ethical risk' often appears in discussions about the responsible development and deployment of artificial intelligence (AI). However, ethical risk remains inconsistently defined in this context, obscuring what distinguishes it from other forms of risk, such as social, reputational or legal risk. In this paper we present a definition of ethical risk for AI as being any risk associated with an AI that may cause stakeholders to fail one or more of their ethical responsibilities towards other stakeholders. To support our definition, we describe how stakeholders have role responsibilities that follow from their relationship with the AI, and that these responsibilities are towards other stakeholders associated with the AI. We discuss how stakeholders may differ in their ability to make decisions about an AI, their exposure to risk, and whether they or others may benefit from these risks. Stakeholders without the ability to make decisions about the risks associated with an AI and how it is used are dependent on other stakeholders with this ability. This relationship places those who depend on decision-making stakeholders at ethical risk of being dominated by them. The decision-making stakeholder is ethically responsible for the risks their decisions about the AI impose on those affected by them. We illustrate our account of ethical risk for AI with two examples: AI-designed attachments for surgical robots that are optimised for treating specific patients, and self-driving 'robotaxis' that carry passengers on public roads.
In: AI and ethics, Vol. 3, Issue 4, pp. 1117-1133
ISSN: 2730-5961
Abstract: An emerging use of machine learning (ML) is creating products optimised using computational design for individual users and produced using 3D printing. One potential application is bespoke surgical tools optimised for specific patients. While optimised tool designs benefit patients and surgeons, there is the risk that computational design may also create unexpected designs that are unsuitable for use, with potentially harmful consequences. We interviewed potential stakeholders to identify both established and unique technical risks associated with the use of computational design for surgical tool design and applied ethical risk analysis (eRA) to identify how stakeholders might be exposed to ethical risk within this process. The main findings of this research are twofold. First, distinguishing between unique and established risks for new medical technologies helps identify where existing methods of risk mitigation may be applicable to a surgical innovation, and where new means of mitigating risks may be needed. Second, the value of distinguishing between technical and ethical risks in such a system is that it identifies the key responsibilities for managing these risks and allows for any potential interdependencies between stakeholders in managing these risks to be made explicit. The approach demonstrated in this paper may be applied to understanding the implications of new AI and ML applications in healthcare and other high-consequence domains.
In: Humanities and Social Sciences Communications, Vol. 9, Issue 1
ISSN: 2662-9992
Abstract: Incorporating perspectives of multiple stakeholders concerning the appropriate balance of risks and benefits of new and potentially disruptive technologies is thought to be a way of enhancing the societal relevance and positive impacts of those technologies. A risk governance approach can be instrumental in achieving balance among diverse stakeholders, as it enables decision-making processes informed by multiple dimensions of risk. This paper applies a risk governance approach to retrospectively examine the development of nanotechnology research and development (R&D) in Australia to identify how risk governance is reflected in the practices of a range of stakeholders. We identify ten risk-related challenges specific to nanotechnology R&D based on a review of the international literature, which provided the foundation for documenting how those working in the Australian nanotechnology sector responded to these global risk-related challenges. This case study research draws on a range of sources including literature review, semi-structured interviews, and a combination of qualitative and quantitative approaches for data analysis to identify key themes and generate visualisations of the interconnections that exist between risk governance practices. The ability to visualise these interconnections from the qualitative data is a key contribution of this research. Our findings show how the qualitative insights and professional experiences of nanotechnologists provide evidence of how risk governance approaches have been operationalised in the Australian nanotechnology R&D sector. The findings generate three important insights. First, the risk research undertaken by Australian nanotechnologists is interdisciplinary and involves multiple stakeholders from various disciplines and sectors. Unlike traditional risk governance approaches, our findings document efforts to assess not only physical risks, but also social and ethical risks. Second, nanotechnology risk governance is a non-linear process, and practices undertaken to address specific challenges occurred concurrently with, and contributed to addressing, other challenges. Third, our findings indicate that applying a risk governance approach enables greater intersection and collaboration, potentially bridging any disconnect between scientists, policymakers, and the public to realise transdisciplinary outcomes. This research highlights opportunities for developing systematic methodologies to enable more robust risk governance of other new and emerging technologies.
In: AI and ethics, Vol. 1, Issue 3, pp. 273-281
ISSN: 2730-5961
Abstract: Computational design systems (such as those using evolutionary algorithms) can create designs for a variety of physical products. Introducing these systems into the design process risks creating a 'responsibility gap' for flaws in the products they are used to create, as human designers may no longer believe that they are wholly responsible for them. We respond to this problem by distinguishing between causal responsibility and capacity responsibility (the ability to be morally responsible for actions) for creating product designs to argue that while the computational design systems and human designers are both causally responsible for creating product designs, the human designers who use these systems and the developers who create them have capacity responsibility for such designs. We show that there is no responsibility gap for products designed using computational design systems by comparing different accounts of moral responsibility for robots and AI (instrumentalism, machine ethics, and hybrid responsibility). We argue that all three of these accounts of moral responsibility for AI systems support the conclusion that the product designers who use computational design systems and the developers of these systems are morally responsible for any flaws or faults in the products designed by these systems. We conclude by showing how the responsibilities of accountability and blameworthiness should be attributed between the product designers and the developers of the computational design systems.
In: Land use policy: the international journal covering all aspects of land use, Vol. 52, pp. 266-276
ISSN: 0264-8377
In: Land use policy: the international journal covering all aspects of land use, Vol. 42, pp. 392-399
ISSN: 0264-8377