Facebook is taking steps to prevent atrocities like the Christchurch massacre from being broadcast live on its platform.
When the Christchurch shooter opened fire on Muslim worshippers in March, he streamed the attack live on Facebook. Facebook said that in the first 24 hours it removed 1.5 million copies of the video, 1.2 million of which were blocked at upload. Other platforms such as YouTube similarly struggled to stem the tide of videos.
Long after the attack, however, copies of the video could still be found on Facebook and Instagram, and officials in New Zealand and beyond were sharply critical of Facebook’s response.
VP of Integrity Guy Rosen published a blog post on Tuesday outlining new rules for users who commit violations on Facebook Live, saying Facebook will now operate on a “one strike” basis.
“From now on, anyone who violates our most serious policies will be restricted from using Live for set periods of time – for example 30 days – starting on their first offense,” Rosen wrote. As an example of a serious violation, he cited sharing a link to a terrorist group’s statement with no context.
That means Facebook is stopping short of more radical proposals. The Australian government, for instance, has suggested imposing a time delay on live videos, an old TV trick that allows potentially offensive material to be censored before it reaches viewers.
Facebook has previously said this would be impractical. In a blog post last month, the social network said a time delay would be difficult because there are millions of live streams a day, and it would “further slow down” the reporting and review of harmful videos, delaying reports from reaching first responders.
The problem isn’t isolated to bad actors deliberately posting abusive content. “One of the challenges we faced in the days after the Christchurch attack was a proliferation of many different variants of the video of the attack. People — not always intentionally — shared edited versions of the video, which made it hard for our systems to detect,” said Rosen.
To combat this, Rosen announced that the company is investing $7.5 million in research with the University of Maryland, Cornell, and Berkeley, with a view to developing new and better techniques for automatically detecting manipulated footage.
“This work will be critical for our broader efforts against manipulated media, including deepfakes (videos intentionally manipulated to depict events that never occurred). We hope it will also help us to more effectively fight organized bad actors who try to outwit our systems as we saw happen after the Christchurch attack,” he said.