This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The near-term impact of AI on biological misuse
3
Citations
4
Authors
2024
Year
Abstract
We rank uplift, if it is assessed to occur at all, on a scale from no uplift to significant uplift, adapted from the 2024 assessment of the near-term impact of AI conducted by the UK National Cyber Security Centre (NCSC, 2024). We use probabilistic terms such as "likely" as specified by the Professional Head of Intelligence Assessment (PHIA) Probability Yardstick (PHIA, 2019). The hypotheses generated for this report using this framework were based on expert opinion, drawing on internal workshops and external consultation.

Hypotheses. This report presents nine hypotheses, which are summarised below.

Current uplift. We first examined how a given threat actor's access to closed and open-source models, and their ability to fine-tune models, could affect uplift.

1) We assess that both open-source and closed models likely provide some uplift today to all threat actor categories, though closed models could currently provide greater uplift. We assess that the greater uplift provided by closed models is driven by:

Closed models being more powerful: the performance of open-source models currently lags behind closed-model capabilities by approximately one year.

Closed models having insufficient guardrails: the safety features of many closed models can be circumvented or are not triggered by dual-use scientific queries.

2) We expect that fine-tuning any model is likely to provide modest additional uplift compared to the use of a non-fine-tuned base model. Overall, we estimate the magnitude of the expected uplift from fine-tuning to be driven by pre-trained model performance, dataset availability, and the value and strength of model safeguards.
Uplift from near-term trends. We then examined the potential impact of five AI capabilities and development trends on biological misuse within the next two years, assessing that:

3) More powerful large language models could provide greater uplift than today's models to all threat actor categories due to improved performance and information synthesis.

4) Autonomous scientific capabilities could moderately uplift both highly and moderately capable group actors, who possess the expertise and resources needed to establish these systems in laboratory settings.

5) Troubleshooting capabilities could uplift actors with less biological expertise by providing the know-how needed to complete practical laboratory tasks, as well as help a range of more capable threat actors overcome issues with sophisticated scientific experimentation.

6) Integration of LLMs with AI-enabled biological tools (BTs) will increase the accessibility of specialised capabilities that are otherwise limited to actors with substantial and wide-ranging
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,336 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,476 citations