An independent oversight board at Meta, the parent company of Facebook and Instagram, has found that the automated content moderation tools used since October 7th have been overly zealous in removing posts. This conclusion emerged from the board's review of two specific instances where human moderators had to reinstate content erroneously removed by automated systems.

The first case involved a Facebook video from October 7th, allegedly showing a Hamas terrorist abducting a woman. The second was an Instagram video depicting the aftermath of an attack near the Al-Shifa hospital in Gaza. These incidents were the first cases handled under Meta's new expedited review process, designed for swift resolution of critical issues.

Meta's automated tools had been adjusted to flag and remove content related to Israel and Gaza more aggressively, lowering the confidence threshold at which a post was deemed to violate the company's policies on violence, hate speech, and harassment. While this approach reduced the risk of missing harmful content, it also increased the likelihood of mistakenly removing posts that did not violate Meta's policies.

As of the board's report, Meta had not restored the confidence thresholds for content flagging to their pre-October 7th levels, leaving the risk of erroneous removals elevated.
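To illustrate the trade-off the board describes, the hypothetical sketch below shows how lowering a classifier's confidence threshold flags more borderline posts, catching more genuine violations but also sweeping in content that does not break the rules. The scores, threshold values, and post labels are illustrative assumptions, not Meta's actual systems or settings.

```python
# Illustrative sketch only: a toy moderation filter showing how a lower
# confidence threshold removes more content, including false positives.
# All scores and thresholds here are made-up assumptions, not Meta's values.

def flag_for_removal(posts, threshold):
    """Return posts whose violation score meets or exceeds the threshold."""
    return [p for p in posts if p["violation_score"] >= threshold]

posts = [
    {"id": 1, "violation_score": 0.95, "actually_violating": True},
    {"id": 2, "violation_score": 0.70, "actually_violating": True},
    {"id": 3, "violation_score": 0.65, "actually_violating": False},  # e.g. news footage of an attack
    {"id": 4, "violation_score": 0.40, "actually_violating": False},
]

for threshold in (0.9, 0.6):  # "normal" threshold vs. a temporarily lowered one
    flagged = flag_for_removal(posts, threshold)
    false_positives = [p for p in flagged if not p["actually_violating"]]
    print(f"threshold={threshold}: flagged {len(flagged)} posts, "
          f"{len(false_positives)} of them mistakenly")
```

With the lower threshold, the toy filter catches a second genuine violation but also removes the non-violating news footage, which is the kind of over-removal the oversight board flagged.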

The oversight board has repeatedly urged Meta to refine its algorithms to prevent the accidental deletion of posts that serve educational or counter-extremism purposes. This issue is not new to Meta; similar problems arose when the company banned Holocaust denial in 2020.

Meta faces challenges in balancing freedom of expression with the prevention of violence and hate speech. The oversight board emphasized the importance of preserving testimonies that could provide valuable insights into significant events and potential violations of international human rights.

Meta is not the only platform under scrutiny for how it handles contentious issues like the Israel-Hamas conflict. TikTok, for instance, has been criticized for the prevalence of pro-Palestinian content, and the European Union has launched an investigation into X (formerly Twitter) over its handling of content since October 7th.

Instagram recently introduced a "Fact-Checked Control" feature, allowing users to manage how fact-checked content appears in their feeds. This update has sparked backlash and speculation, especially among pro-Palestinian groups who claim it censors their content. The feature's rollout seems to be an effort by Meta to give users more control over their feed content, amid ongoing struggles with misinformation and conflicting demands.

Meta plans to extend its fact-checking program to Threads, another of its platforms, in the coming year. The company relies on independent fact-checkers to evaluate and rate content across its platforms, working with over 90 organizations in more than 60 languages. This system, in place since 2016 on Facebook and 2019 on Instagram, aims to remove policy-violating content and reduce the spread of false information.
