Taking superintelligence seriously
In: Futures, Volume 72, p. 32-35
In: Futures: the journal of policy, planning and futures studies, Volume 72, p. 32
ISSN: 0016-3287
In: Journal of experimental political science: JEPS, Volume 9, Issue 1, p. 104-117
ISSN: 2052-2649
Abstract: Online misinformation has become a constant; only the way actors create and distribute that information is changing. Advances in artificial intelligence (AI) such as GPT-2 mean that actors can now synthetically generate text in ways that mimic the style and substance of human-created news stories. We carried out three original experiments to study whether these AI-generated texts are credible and can influence opinions on foreign policy. The first evaluated human perceptions of AI-generated text relative to an original story. The second investigated the interaction between partisanship and AI-generated news. The third examined the distributions of perceived credibility across different AI model sizes. We find that individuals are largely incapable of distinguishing between AI- and human-generated text; partisanship affects the perceived credibility of the story; and exposure to the text does little to change individuals' policy views. The findings have important implications for understanding the role of AI in online misinformation campaigns.
SSRN
Working paper
In: Yale Law & Economics Research Paper
SSRN
In: Brundage, M., Avin, S., Wang, J., Belfield, H., Krueger, G., Hadfield, G., Khlaaf, H., Yang, J., Toner, H., Fong, R., Maharaj, T., Koh, P. W., Hooker, S., Leung, J., Trask, A., Bluemke, E., Lebensold, J., O'Keefe, C., Koren, M., Ryffel, T., Rubinovitz, J. B., Besiroglu, T., Carugati, F., Clark, J., Eckersley, P., Haas, S. D., Johnson, M., Laurie, B., Ingerman, A., Krawczuk, I., Askell, A., Cammarota, R., Lohn, A., Krueger, D., Stix, C., Henderson, P., Graham, L., Prunkl, C., Martin, B., Seger, E., Zilberman, N., hÉigeartaigh, S. Ó., Kroeger, F., Sastry, G., Kagan, R., Weller, A., Tse, B., Barnes, E., Dafoe, A., Scharre, P., Herbert-Voss, A., Rasser, M., Sodhani, S., Flynn, C., Gilbert, T. K., Dyer, L., Khan, S., Bengio, Y. & Anderljung, M. (2020), 'Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims', arXiv.org, e-Print Archive, Mathematics.
With the recent wave of progress in artificial intelligence (AI) has come a growing awareness of the large-scale impacts of AI systems, and recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development. In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, they will need to make verifiable claims to which they can be held accountable. Those outside of a given organization also need effective means of scrutinizing such claims. This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. We analyze ten mechanisms for this purpose (spanning institutions, software, and hardware) and make recommendations aimed at implementing, exploring, or improving those mechanisms.
BASE