AI and catastrophic risk
In: Journal of democracy, Volume 34, Issue 4, p. 111-121
ISSN: 1086-3214
World Affairs Online
In: IEEE technology and society magazine: publication of the IEEE Society on Social Implications of Technology, Volume 39, Issue 1, p. 16-25
ISSN: 0278-0097
In: Adaptive computation and machine learning
Applied math and machine learning basics. Linear algebra -- Probability and information theory -- Numerical computation -- Machine learning basics -- Deep networks: modern practices. Deep feedforward networks -- Regularization for deep learning -- Optimization for training deep models -- Convolutional networks -- Sequence modeling: recurrent and recursive nets -- Practical methodology -- Applications -- Deep learning research. Linear factor models -- Autoencoders -- Representation learning -- Structured probabilistic models for deep learning -- Monte Carlo methods -- Confronting the partition function -- Approximate inference -- Deep generative models
SSRN
Computational approaches in drug discovery and development hold great promise, and artificial intelligence methods are now in widespread use, but experimental validation of these new approaches is frequently inadequate. We are initiating the Critical Assessment of Computational Hit-finding Experiments (CACHE), a public benchmarking project that aims to accelerate the development of small-molecule hit-finding algorithms through competitive assessment. Participants will identify compounds using a wide range of computational methods for dozens of protein targets, selected both for different types of prediction scenarios and for their potential biological or pharmaceutical relevance. Community-generated predictions will be tested centrally and rigorously in one or more experimental hubs, and all data, including the chemical structures of experimentally tested compounds, will be made publicly available without restriction. The ability of a range of computational approaches to find novel compounds will be evaluated, compared, and published. The overarching goal of CACHE is to accelerate the development of computational chemistry methods by providing rapid and unbiased feedback to those developing methods, with the ancillary and valuable benefit of identifying new compound-protein binding pairs for biologically interesting targets. The initiative builds on the power of crowdsourcing and expands the open-science paradigm for drug discovery.
ACKNOWLEDGEMENTS: The Structural Genomics Consortium is a registered charity (no. 1097737) that receives funds from Bayer AG, Boehringer Ingelheim, Bristol Myers Squibb, Genentech, Genome Canada through the Ontario Genomics Institute [OGI-196], Janssen, Merck KGaA (aka EMD in Canada and the US), Pfizer, Takeda, and the Innovative Medicines Initiative 2 Joint Undertaking (JU) under grant agreement No 875510. The JU receives support from the European Union's Horizon 2020 research and innovation programme and EFPIA and Ontario Institute for Cancer Research, Royal Institution for ...
BASE
In: Brundage, M., Avin, S., Wang, J., Belfield, H., Krueger, G., Hadfield, G., Khlaaf, H., Yang, J., Toner, H., Fong, R., Maharaj, T., Koh, P. W., Hooker, S., Leung, J., Trask, A., Bluemke, E., Lebensold, J., O'Keefe, C., Koren, M., Ryffel, T., Rubinovitz, J. B., Besiroglu, T., Carugati, F., Clark, J., Eckersley, P., Haas, S. D., Johnson, M., Laurie, B., Ingerman, A., Krawczuk, I., Askell, A., Cammarota, R., Lohn, A., Krueger, D., Stix, C., Henderson, P., Graham, L., Prunkl, C., Martin, B., Seger, E., Zilberman, N., hÉigeartaigh, S. Ó., Kroeger, F., Sastry, G., Kagan, R., Weller, A., Tse, B., Barnes, E., Dafoe, A., Scharre, P., Herbert-Voss, A., Rasser, M., Sodhani, S., Flynn, C., Gilbert, T. K., Dyer, L., Khan, S., Bengio, Y. & Anderljung, M. 2020, 'Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims', arXiv.org, e-Print Archive, Mathematics.
With the recent wave of progress in artificial intelligence (AI) has come a growing awareness of the large-scale impacts of AI systems, and recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development. In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, they will need to make verifiable claims to which they can be held accountable. Those outside of a given organization also need effective means of scrutinizing such claims. This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
BASE