Documents Show How Facebook Moderates Terrorism on Livestreams

Credit to Author: Joseph Cox | Date: Fri, 15 Mar 2019 16:21:19 +0000

This piece is part of an ongoing Motherboard series on Facebook’s content moderation strategies. You can read the rest of the coverage here.

On Friday, at least 49 people were killed in terror attacks against mosques in Christchurch, New Zealand. One apparent shooter broadcast the attack on Facebook Live, the social network’s streaming service. The footage was graphic, and Facebook deleted the attacker’s Facebook and Instagram accounts, although archives of the video have spread across other online services.

The episode highlights the fraught difficulty of moderating live content, where an innocuous-seeming video can turn violent with little or no warning. Motherboard has obtained internal Facebook documents showing how the social media giant has developed tools to make this process somewhat easier for its tens of thousands of content moderators. Motherboard has also spoken to senior Facebook employees, as well as sources with direct knowledge of the company’s moderation strategy, who described how Live was, and sometimes still is, a difficult type of content to keep tabs on.

“I couldn’t imagine being the reviewer who had to witness that livestream in New Zealand,” a source with direct knowledge of Facebook’s content moderation strategies told Motherboard. Motherboard granted some sources in this story anonymity to discuss internal Facebook mechanisms and procedures.

As with any content on Facebook, whether posts, photos, or pre-recorded videos, users can report Live broadcasts that they believe contain violence, hate speech, harassment, or other behaviour that violates the terms of service. Content moderators then review the report and decide what to do with the Live stream.

Read more on Motherboard.

This article originally appeared on VICE US.
