
After New Zealand mosque attacks, Facebook changes its livestream policy

By Soo Youn
May 15, 2019, 9:44 PM

Facebook officials, who have admitted their systems failed to prevent the broadcast of the New Zealand mosque massacre on their platform, have announced a new policy for livestreaming.

"We will now apply a ‘one strike’ policy to [Facebook] Live, in connection with a broader range of offenses," Facebook's vice president of integrity, Guy Rosen, wrote in a post on the company's site late Tuesday. "From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time – for example 30 days – starting on their first offense."

Previously, the company took down posts that violated community standards. If a user continued to post content that violated the standards, Facebook temporarily blocked the user's account, removing the ability to broadcast live. More extreme posters – of terror propaganda or violations of children – would be banned altogether, Rosen wrote.


But now, violators are penalized starting with their first offense.

"For instance, someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time," Rosen said, adding that the company will also work to ban those users from placing ads in the coming weeks.

The move was praised by cybersecurity experts who often criticize social media platforms for inaction on hate speech.

"It’s a positive step toward curbing abuse of live streaming, and Facebook has been taking real steps on curbing hate content over the last few months," Chad Loder, CEO and founder of cybersecurity firm Habitu8, told ABC News.


The video of the Christchurch mass shooting, in which 51 people at two mosques were killed, was viewed at least 200 times live, Facebook said shortly after the attacks.

"This particular video did not trigger our automatic detection systems," Rosen wrote in the days following the attacks. The video was then viewed about 4,000 times before being taken down. The video and images of the attack were disseminated across all major social media platforms, including Twitter and YouTube.

In the 24 hours after the attacks, Facebook removed at least 1.2 million videos of the massacre as they were uploaded, but before they were viewed, according to Rosen. "Approximately 300,000 additional copies were removed after they were posted," Rosen wrote.

Part of the difficulty in detecting violent content is that videos are edited, making them harder to spot. The company said it would devote $7.5 million to partner with the University of Maryland, Cornell University and the University of California, Berkeley, to research better ways to "detect manipulated media across images, video and audio" and distinguish unwitting posters from those deliberately trying to manipulate content.

"This work will be critical for our broader efforts against manipulated media, including deepfakes (videos intentionally manipulated to depict events that never occurred). We hope it will also help us to more effectively fight organized bad actors who try to outwit our systems as we saw happen after the Christchurch attack," Rosen said.


Facebook officials announced the change as New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron met Wednesday on the sidelines of the G-7 gathering in Paris. The two leaders signed the "Christchurch Call," a pledge urging the world's tech giants to take action to stop extremism on their platforms.

The U.S. declined to sign the international accord.

"While the United States is not currently in a position to join the endorsement, we continue to support the overall goals reflected in the Call," Trump Administration officials said in a statement. "The best tool to defeat terrorist speech is productive speech, and thus we emphasize the importance of promoting credible, alternative narratives as the primary means by which we can defeat terrorist messaging."
