Trusting organizations: The institutionalization of trust in interorganizational relationships
In: Organization: the interdisciplinary journal of organization, theory and society, Volume 19, Issue 6, p. 743-763
ISSN: 1461-7323
Recent research on interorganizational trust has stressed the need to develop a deeper understanding of the multi-level nature of this construct. This article focuses on trust on different analytical levels in an interorganizational context, and on the hitherto underspecified connections between these. Based on an institutionalization approach, it revisits the classic question: (how) can organizations trust each other? To do so, we consider organizations as objects of trust and reappraise the transferral from interpersonal to interorganizational trust in 'facework' (Giddens, 1990). We also examine the conflicts and struggles of trust and power that can arise from this process between boundary spanners and their organizational constituents. Next, we consider organizations as subjects of trust in interorganizational relationships. We detail the institutionalization of trust and its reproduction on an organizational level, and how it can be transmitted to new generations of organizational actors, creating path-dependent histories of trust which are truly interorganizational. Taking up the theme of trust and power, we analyse ways in which the institutionalization of trust can entail that of power, too, and examine the implications of this from a critical point of view. We conclude that in interorganizational trust, both the subject and object of trust move across analytical levels, and further, that this movement demonstrates the significance of the organization as a distinct entity that can be both trusted and trusting.
In: Asian studies review, p. 1-20
ISSN: 1467-8403
In: International sociology: the journal of the International Sociological Association, Volume 32, Issue 1, p. 3-20
ISSN: 1461-7242
This article suggests conceptualising trust as a generalised symbolic medium of communication. It is argued that in business relationships trust appears intertwined with other media, such as power or money. Furthermore, it is shown that typical combinations of trust, power and money are dominant in different business systems (liberal vs coordinated market economies). The overarching aim of this article is to demonstrate that trust is a fundamental concept of social theory and that the theory of generalised symbolic media of communication provides a useful conceptual perspective to integrate trust into social theory.
In: Brundage, M., Avin, S., Wang, J., Belfield, H., Krueger, G., Hadfield, G., Khlaaf, H., Yang, J., Toner, H., Fong, R., Maharaj, T., Koh, P. W., Hooker, S., Leung, J., Trask, A., Bluemke, E., Lebensold, J., O'Keefe, C., Koren, M., Ryffel, T., Rubinovitz, J. B., Besiroglu, T., Carugati, F., Clark, J., Eckersley, P., Haas, S. D., Johnson, M., Laurie, B., Ingerman, A., Krawczuk, I., Askell, A., Cammarota, R., Lohn, A., Krueger, D., Stix, C., Henderson, P., Graham, L., Prunkl, C., Martin, B., Seger, E., Zilberman, N., Ó hÉigeartaigh, S., Kroeger, F., Sastry, G., Kagan, R., Weller, A., Tse, B., Barnes, E., Dafoe, A., Scharre, P., Herbert-Voss, A., Rasser, M., Sodhani, S., Flynn, C., Gilbert, T. K., Dyer, L., Khan, S., Bengio, Y. & Anderljung, M. 2020, 'Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims', arXiv.org, e-Print Archive, Mathematics.
With the recent wave of progress in artificial intelligence (AI) has come a growing awareness of the large-scale impacts of AI systems, and a recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development. For AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, they will need to make verifiable claims to which they can be held accountable. Those outside a given organization also need effective means of scrutinizing such claims. This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.