
Dispute over threat of extinction posed by artificial intelligence looms over surging industry

Video: How AI could transform education (5:08). Pierre Albouy/Reuters
By Max Zahn
July 21, 2023, 10:09 AM

While experts disagree about whether AI poses an existential threat, Dan Hendrycks, a researcher and the director of the Center for AI Safety, or CAIS, is among those who believe the technology could destroy humanity in a variety of ways.

A bad actor could gain possession of a future version of generative AI, ask it for instructions on how to make a biological weapon and set off devastation, he told ABC News. Or, he added, the efficiency delivered by AI could force widespread business adoption, leaving the global economy in its thrall. The technology could also worsen the spread of misinformation and disinformation.

The end of humanity, Hendrycks said, is hardly a remote possibility. "If I see international coordination doesn't happen, or much of it, it'll be more likely than not that we go extinct," he added.

Experts who lend credence to the threat told ABC News that the massive potential risks require urgent attention and stiff oversight, while skeptics warned that grave forecasts fuel a misunderstanding about the capabilities of AI and distract from current harms caused by the technology.

In recent months, dire warnings about the massive threat posed by AI have ascended from the corridors of computer science departments to the halls of Congress.

An open letter released in May by CAIS warned that AI poses a "risk of extinction" akin to pandemics or nuclear war, featuring signatures from hundreds of researchers and industry leaders like OpenAI CEO Sam Altman and Demis Hassabis, the CEO of Google DeepMind, the tech giant's AI division.

Altman, whose company developed the viral AI sensation ChatGPT, told a Senate subcommittee in May: "If this technology goes wrong, it can go quite wrong." In a March interview with ABC News, Altman said, "I think people should be happy that we are a little bit scared of this."

OpenAI and Google did not immediately respond to ABC News' request for comment.


Other AI luminaries, however, have balked. Yann LeCun, chief AI scientist at Meta, told the MIT Technology Review that fear of an AI takeover is "preposterously ridiculous." Sarah Myers West, managing director of the nonprofit AI Now Institute, told ABC News: "A lot of this is more rhetoric than grounded analysis."

The divide over the existential threat posed by AI looms over recent advances in the technology as it sweeps across institutions from manufacturing to mass entertainment, prompting disagreement about the pace of development and the focus of possible regulation.

"We're looking to experts to tell us," said Jeffrey Sonnenfeld, a professor of management at Yale University who convenes gatherings of top CEOs. "But experts are split on this."

As AI develops, however, an imperative for onlookers is clear, he said: "We can't sit on the sidelines."

Concern about the risks posed by AI has drawn greater attention lately in response to major breakthroughs like ChatGPT, which reached 100 million users within two months of its launch in November.

Microsoft launched a version of its Bing search engine in March that offers responses delivered by GPT-4, the latest OpenAI model underlying ChatGPT. Rival search company Google in February announced an AI model called Bard.

"AI in previous instantiations was a largely invisible system," Myers West said. "It wasn't something we interacted with in a tangible way. This has had a really visceral effect on the broader public in that it's contributing to this wave of both excitement and a tremendous amount of anxiety."


Doomsday forecasts, however, lack granular specifics and overstate the potential for self-awareness to form within generative AI like ChatGPT, which scans text from across the internet and strings words together based on statistical probability, Myers West said.

OpenAI CEO Sam Altman speaks with ABC News, March 15, 2023. (ABC News)

"At present, essentially the way these systems work is akin to applied statistics, so they don't have any capacity for deeper understanding, don't have any capacity for empathy and certainly not sentience," Myers West added.

But the risk posed by AI stems from its potential to exceed human intelligence rather than mimic it, Stuart Russell, an AI researcher at the University of California, Berkeley who co-authored a study on societal-scale dangers of the technology, told ABC News.

"If you make systems that are more intelligent than humans, they will have more power over the world than we do, just as we have more power over the world than other species on earth," he said.

Acknowledging a lack of specifics in some prominent messages about the extreme risks, such as the open letter released by CAIS in May, Russell said: "Once you get into specifics, you end up with arguments about which is the most plausible." Hendrycks, of CAIS, added: "Since AI touches on many aspects of society, we end up finding there are many, many risk sources."

Key remedies, such as robust government oversight and liability for AI developers, can help deter a range of catastrophic scenarios, Hendrycks said. "We don't need to know exactly what's going to happen to make interventions to reduce risk," he said.


To be sure, while acknowledging the risks of AI, experts heralded its potential benefits. Proponents of AI say the technology could increase productivity, automate unpleasant or mundane tasks and afford the opportunity to focus on creative and innovative endeavors. AI has been touted as an aid for endeavors ranging from the fight against climate change to the diagnosis of cancer.

Senate Majority Leader Chuck Schumer, D-N.Y., released a framework last month outlining four pillars he hopes will guide future bipartisan legislation governing AI: security, accountability, protecting our foundations and explainability.

The framework is not legislative text, and it's not clear how long it will take for Congress to begin putting together legislative proposals. There has not yet been any comprehensive legislation introduced in Congress to deal with regulating AI, though a bicameral group of lawmakers introduced a proposal last month that would create a blue-ribbon commission to study AI's impact.

"We have no choice but to acknowledge that AI's changes are coming, and in many cases are already here. We ignore them at our own peril," Schumer said last month in prepared remarks at the Center for Strategic and International Studies.

President Joe Biden, meanwhile, appeared last month at a roundtable event focused on AI in California, describing artificial intelligence as something that has "enormous promise and its risks."

An effort to ward off hypothetical long-term dangers could distract from present-day damage caused by AI, said Isabelle Jones, campaign outreach manager for Stop Killer Robots, which aims to establish an international agreement prohibiting the use of autonomous weapons.

"I think that to purely focus on the future is to the detriment of the existing harms that are coming about or that there's an immediate risk of," Jones told ABC News.

Policymakers can address current and future dangers at the same time, Russell said, just as they do in combating climate change. "I think the narrative that you can either do one or the other but can't do both is actually poisonous."

Regardless of whether they believe or question forecasts of extreme risk, experts who spoke with ABC News called on the government to regulate the technology.

"My biggest thing is regulation and international coordination," Hendrycks said. Jones cited international accords on issues like nuclear proliferation as a model for reining in autonomous weapons.

Still, Myers West said, differences could again arise on the issue of specifics. "The devil is in the details," she said.

ABC News' Alison Pecorin contributed reporting.
