Article

Assessing the future plausibility of catastrophically dangerous AI

A. Turchin,
2021

In AI safety research, the median timing of AGI creation is often taken as a reference point; various polls predict it will occur in the second half of the 21st century. For maximum safety, however, we should determine the earliest plausible arrival time of dangerous AI and define a minimum acceptable level of AI risk. Such dangerous AI could be narrow AI that facilitates research into potentially dangerous technologies such as biotech, an AGI capable of acting completely independently in the real world, or an AI capable of starting unlimited self-improvement. In this article, I present arguments that place the earliest timing of dangerous AI within the coming 10–20 years, drawing on several partly independent sources of information: 1. Polls, which show around a 10 percent probability that artificial general intelligence will be created in the next 10–15 years. 2. The fact that artificial neural network (ANN) performance and other characteristics, such as the number of "neurons", are doubling every year; extrapolating this trend suggests that roughly human-level performance will be reached in less than a decade. 3. The acceleration of hardware performance available for AI research, which outpaces Moore's law thanks to advances in specialized AI hardware, better integration of such hardware into larger computers, cloud computing, and larger budgets. 4. Hyperbolic growth extrapolations of big-history models. © 2018 Elsevier Ltd
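The doubling-time extrapolation in point 2 of the abstract can be sketched numerically: under steady annual doubling, the time to reach a target capability level grows only logarithmically with the remaining gap. The starting and target values below are hypothetical placeholders for illustration, not figures from the paper.

```python
import math

def years_to_target(current, target, doubling_time_years=1.0):
    """Years until `current` reaches `target` under steady doubling.

    Solves target = current * 2 ** (t / doubling_time_years) for t.
    """
    return doubling_time_years * math.log2(target / current)

# A metric at 1/500 of an assumed human-level benchmark, doubling annually
# (both numbers are illustrative assumptions):
print(round(years_to_target(1.0, 500.0), 1))  # → 9.0 years
```

Even a large 500x gap closes in under a decade at an annual doubling rate, which is the arithmetic behind the abstract's "less than a decade" claim.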


Versions

  • 1. Version of Record, 2021-04-27

Metadata

About the authors
  • A. Turchin
    Science for Life Extension Foundation, Moscow Prospekt Mira 124-15, Moscow, 129164, Russian Federation
Journal
  • Futures
Volume
  • 107
Pages
  • 45-58
Keywords
  • artificial intelligence; artificial neural network; catastrophic event; future prospect; performance assessment; probability; research and development; risk assessment; safety; technological development
Publisher
  • Elsevier Ltd
Document type
  • journal article
Creative Commons license type
  • CC
Legal status of the document
  • Open license
Source
  • scopus