Key figures in artificial intelligence want training of powerful AI systems to be suspended amid fears of a threat to humanity.
They have signed an open letter warning of potential risks, and say the race to develop AI systems is out of control.
Twitter chief Elon Musk is among those who want training of AIs above a certain capacity to be halted for at least six months.
Apple co-founder Steve Wozniak and some researchers at DeepMind also signed.
OpenAI, the company behind ChatGPT, recently released GPT-4 – a state-of-the-art technology, which has impressed observers with its ability to do tasks such as answering questions about objects in images.
The letter, from the Future of Life Institute and signed by the luminaries, calls for development to be halted temporarily at that level, warning of the risks that future, more advanced systems might pose.
"AI systems with human-competitive intelligence can pose profound risks to society and humanity," it says.
The Future of Life Institute is a not-for-profit organisation which says its mission is to "steer transformative technologies away from extreme, large-scale risks and towards benefiting life".
Mr Musk, owner of Twitter and chief executive of car company Tesla, is listed as an external adviser to the organisation.
Advanced AIs need to be developed with care, the letter says, but instead, "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no-one – not even their creators – can understand, predict, or reliably control".
The letter warns that AIs could flood information channels with misinformation, and replace jobs with automation.
The letter follows a recent report from investment bank Goldman Sachs which said that while AI was likely to increase productivity, millions of jobs could become automated.
However, other experts told the BBC the effect of AI on the labour market was very hard to predict.
More speculatively, the letter asks: "Should we develop non-human minds that might eventually outnumber, outsmart, obsolete [sic] and replace us?"
Stuart Russell, computer-science professor at the University of California, Berkeley, and a signatory to the letter, told BBC News: "AI systems pose significant risks to democracy through weaponised disinformation, to employment through displacement of human skills and to education through plagiarism and demotivation."
And in the future, advanced AIs may pose a "more general threat to human control over our civilization".
"In the long run, taking sensible precautions is a small price to pay to mitigate these risks," Prof Russell added.
But Princeton computer-science professor Arvind Narayanan accused the letter of focusing on "speculative, futuristic risk, ignoring the version of the problem that is already harming people".
In a recent blog post quoted in the letter, OpenAI warned of the risks if an artificial general intelligence (AGI) were developed recklessly: "A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that, too.
"Co-ordination among AGI efforts to slow down at critical junctures will likely be important," the firm wrote.
OpenAI has not publicly commented on the letter. The BBC has asked the firm whether it backs the call.
Mr Musk was a co-founder of OpenAI – though he resigned from the board of the organisation some years ago and has tweeted critically about its current direction.
Autonomous driving functions made by his car company Tesla, like most similar systems, use AI technology.
The letter asks AI labs "to immediately pause for at least six months the training of AI systems more powerful than GPT-4".
If such a delay cannot be enacted quickly, governments should step in and institute a moratorium, it says.
"New and capable regulatory authorities dedicated to AI" would also be needed.
Recently, a number of proposals for the regulation of technology have been put forward in the US, UK and EU. However, the UK has ruled out a dedicated regulator for AI.
© 2023 BBC. The BBC is not responsible for the content of external sites. Read about our approach to external linking.
Elon Musk among experts urging a halt to AI training – BBC.com
