Response to the March 2023 'Pause Giant AI Experiments: An Open Letter' by Yoshua Bengio, signed by Stuart Russell, Elon Musk, Steve Wozniak, Yuval Noah Harari …

J Samuel - 2023 - papers.ssrn.com
Prominent 'experts' recently released an open statement for a 'pause' to developing AI, citing
ambiguous risks and yet-to-be-proven 'dangers'. I am surprised that exceptionally intelligent …

[HTML][HTML] Hinton & Me: Don't Pause Giant AI Experiments, Ban Them

R Hanna - Unpublished MS. Available online at URL = <https … - againstprofphil.org
In an open letter published online in late March 2023, directed not only to the digital
technology and AI community in particular but also to the world more generally, card …

[HTML][HTML] From fear to action: AI governance and opportunities for all

K Baum, J Bryson, F Dignum, V Dignum… - Frontiers in Computer …, 2023 - frontiersin.org
OpenAI's GPT-4 (OpenAI, 2023) reignited the public discussions regarding Artificial
Intelligence (AI) and its risks. In a recent open letter (Future of Life Institute, 2023) technology …

[PDF][PDF] Here's Why AI May Be Extremely Dangerous—Whether It's Conscious or Not

T Hunt - Scientific American, 2023 - dottts.com
“The idea that this stuff could actually get smarter than people…. I thought it was way off….
Obviously, I no longer think that,” Geoffrey Hinton, one of Google's top artificial intelligence …

[PDF][PDF] Oppenheimer, Kaczynski, Shelley, Hinton, & Me: Don't Pause Giant AI Experiments, Ban Them

R Hanna - Unpublished MS. Available online at URL = <https … - academia.edu
What we are creating now [with the atomic bomb] is a monster whose influence is going to
change history, provided there is any history left, yet it would be impossible not to see it …

Why They're Worried: Examining Experts' Motivations for Signing the 'Pause Letter'

I Struckman, S Kupiec - arXiv preprint arXiv:2306.00891, 2023 - arxiv.org
This paper presents perspectives on the state of AI, as held by a sample of experts. These
experts were early signatories of the recent open letter from Future of Life, which calls for a …

Don't pause giant AI for the wrong reasons

M Ienca - Nature Machine Intelligence, 2023 - nature.com
An open letter [1] to the Future of Life Institute has called on all AI labs to “immediately pause
for at least 6 months the training of AI systems more powerful than GPT-4”. The letter has …

[PDF][PDF] The case for taking AI seriously as a threat to humanity

K Piper - In: Vox (Dec. 2018). URL: https://www.vox.com …, 2018 - cs.fsu.edu
That might have people asking: Wait, what? But these grand worries are rooted in research.
Along with Hawking and Musk, prominent figures at Oxford and UC Berkeley and many of …

Assessing artificial intelligence for humanity: Will AI be our biggest ever advance? Or the biggest threat? [opinion]

A Nowak, P Lukowicz… - IEEE Technology and …, 2018 - ieeexplore.ieee.org
Recent rapid advancements in artificial intelligence (AI) are arguably the most important
dimension of humanity's progress to date. As members of the human race, that is, homo …

AI winter

S Umbrello - 2021 - philpapers.org
Coined in 1984 at the American Association for Artificial Intelligence (now the Association for
the Advancement of Artificial Intelligence, or AAAI), the various boom and bust periods of AI …