As the Israel-Hamas war intensifies, digital platforms are increasingly being used to disseminate critical information, both within the impacted regions and to audiences in the wider world. As a result, militant groups are looking to use social platforms to shape that messaging and sow dissent and confusion, which each platform now has to mitigate as best it can.
And with the European Union’s new regulations on misinformation now in effect, the major platforms are already coming under scrutiny, with the EU issuing notices to Meta, X, and TikTok to remind them of their new, more stringent obligations.
As a result, EU officials have already announced an investigation into X, while Meta has today provided a full overview of its efforts, in line with the latest EU requests.
In response to the EU’s request for more information regarding its crisis process, Meta says that it has:
- Established a special operations center, staffed with experts who are fluent in Hebrew and Arabic, in order to monitor and respond to the evolving situation in real time
- Implemented limits on recommendations of potentially violative content
- Expanded its “Violence and Incitement” policy in order to remove content that clearly identifies hostages “even if it’s being done to condemn or raise awareness of their situation”
- Restricted the use of hashtags that have been associated with content that violates its Community Guidelines
- Restricted the use of Live for users who have previously violated certain policies. Meta notes that it’s also prioritizing moderation of live-streams from the impacted region, with particular emphasis on Hamas’ threats to broadcast footage of hostages
- Added warning labels on content that’s been rated “false” by third-party fact-checkers, while also applying labels to state-controlled media publishers.
These measures will give EU officials a clearer picture of how Meta’s looking to combat false and misleading reports in its apps, which they’ll then assess against the new Digital Services Act (DSA) criteria to monitor Meta’s progress.
The DSA’s strictest obligations apply to online platforms with more than 45 million European users, and include specific provisions for crisis situations, along with requirements for these “very large online platforms” to protect their users from mis- and disinformation within their apps.
As per the DSA documentation (paraphrased for clarity):
“Where a crisis occurs, the Commission may adopt a decision requiring one or more providers of very large online platforms, or of very large online search engines, to assess whether the functioning and use of their services significantly contribute to a serious threat, and identify and apply specific, effective and proportionate measures, to prevent, eliminate or limit any such contribution to the serious threat.”
In other words, social platforms with over 45 million EU users need to take proportionate measures to mitigate the spread of misinformation during a crisis, as assessed by EU officials.
To facilitate this, the rules also require very large online platforms to report to the Commission at regular intervals, outlining the specific measures being taken in response to the crisis in question.
The EU has now submitted those requests to Meta, X, and TikTok, with X seemingly falling short of the Commission’s expectations, given that it’s now also launched an inquiry into X’s processes.
The penalties for failing to meet these obligations could be fines of up to 6% of a company’s annual global revenue, not just its EU earnings.
Meta is likely less at risk in this respect, as its mitigation programs are well-established, and have been evolving for some time.
Indeed, Meta notes that it has “the largest third-party fact checking network of any platform”, helping to power its efforts to actively limit the spread of potentially harmful content.
X, after recently culling 80% of its global staff, could be in a tougher spot, with its new approach, which leans more heavily on crowd-sourced fact-checking via Community Notes, seemingly failing to catch all incidents of misinformation around the attacks. Various third-party analyses have shown that misinformation and fake reports are spreading via X posts, and it could be difficult for X to catch all of them with its now limited resources.
To be clear, X has also responded to the EU’s request for more information, outlining the actions that it’s taking in response. But it’ll now be up to EU officials to assess whether it’s doing enough to meet its requirements under the DSA.
Which, of course, is the same situation that Meta is in, though again, Meta’s systems are well-established, and are more likely to meet the new requirements.
It’ll be interesting to see how EU analysts view these responses, and what that then means for each platform moving forward.
Can X actually meet these obligations? Will TikTok be able to adhere to tougher enforcement requirements, given its reliance on algorithmic amplification?
It’s a key test, as we move into the next stage of EU officials largely dictating broader social platform policy.