It’s interesting to note the varying approaches that each social app is taking to political content, as we head into what’s expected to be a particularly tumultuous election period.
Not only do you have the U.S. election, which is coming up on November 5th, but also heading to the polls this year are India (May), the European Union (June), the U.K., South Africa, and a heap more.
And with all of this happening, among nations that see very high social media usage, each platform is looking to either reduce or expand the presence of political debate within their apps. Which could have a significant impact on voting outcomes, depending on how things actually play out.
First off, you have Meta, which is now taking definitive steps to reduce the presence of political content in its apps.
Last week, Meta announced that it would eliminate its Facebook News initiative, and end all agreements with local news publishers, as part of its latest efforts to dilute news content, and subsequent debate, on Facebook and Instagram.
That effort has actually been ongoing for some time. Back in 2021, in the wake of the Capitol riots, Meta CEO Mark Zuckerberg noted that one of the most common pieces of feedback the company had been receiving from its users was that they didn't want to keep seeing divisive political content in their feeds.
After years of angst-ridden posts sparking tension between friends and family, Meta concluded that the negative associations weren't worth the engagement benefits. That set the Meta team on a mission to reduce political content however it could, and has since seen the company lean further into AI-recommended content, mostly via short-form video clips, which have become a far bigger element in user feeds.
Indeed, as of Q4 last year, AI-based content recommendations accounted for 40% of the content that people see on Instagram, with that figure coming in slightly lower on Facebook. The result? Over the past year, Meta’s seen a 7% increase in time spent on Facebook, and a 6% increase in the same on IG.
By contrast, news content now makes up less than 3% of what people around the world see in their Facebook feed, and that share is still shrinking.
With this in mind, and given the reputational damage that news and political debate have caused Meta in the past, the company is now looking to step away from such content entirely, in favor of a more lighthearted, entertainment-based approach.
Which makes even more sense when you also consider how much news and political debate has cost Meta over recent years.
Cumulatively, as a result of the Cambridge Analytica debacle, in which political operatives reportedly harvested Facebook user data to fuel voter influence operations, Meta has had to pay almost $6 billion in direct costs, via a penalty from the FTC, a settlement over the "data breach", and fines from the U.S. Securities and Exchange Commission.
But the reputational damage may have been even worse. In 2022, Meta said that Apple’s new opt-in data tracking prompts would cost its ad business over $10 billion in that year alone, which can also be linked back to an erosion of trust in the company due to the Cambridge Analytica incident.
On balance, then, it makes sense for Meta to step away from news and politics where it can. But what will that ultimately mean for voters?
According to Pew Research, 30% of Americans get at least some of their news from Facebook, so the platform's deliberate move away from news content has to have an impact.
The end result is that Meta will be able to hold up its hands and claim it had nothing to do with the outcome of any election, whatever that may be, which could help it avoid similar negative headlines. But is that good for democracy, and will it lead to a less-informed public?
On the flipside, the platform formerly known as Twitter is leaning further into political debate, with X owner Elon Musk using the app as his personal bullhorn to sound the alarm on whatever political issue he’s concerned with day-to-day.
Musk regularly posts about the influx of immigrants in the U.S., the war in Ukraine, drug policies, corporate governance concerns, and the perceived decline of various U.S. cities. He also regularly points the finger at various politicians, from the President on down. As the most-followed person in the app, who has also (reportedly) tilted the algorithm in favor of showing his posts to more people, he alone has significant influence over the general discussion among X users.
What’s more, X’s more “free speech” aligned approach to content moderation, which puts increased reliance on its crowd-sourced “Community Notes” to police misinformation, has left it open to expanded manipulation, which could further skew political discussion in the app.
But that’s pretty much how Elon wants it, with his stated view being that people should be able to see all opinions, no matter how incorrect or ill-informed they are, so that they can decide for themselves what they believe and what they don’t.
Which seems to ignore the past harms caused by that approach. Nevertheless, Musk believes that no one should be censored, and that all opinions should be examined on their merits.
Which then increases political debate within the app. And with Meta looking to reduce political and news content, X may actually be winning out, in becoming the social app of choice to engage in political debate.
Is that a good thing?
I mean, theoretically, as noted, X's Community Notes system should enable "the people" to decide what they believe, and what should be left, or "noted", in the app. But Community Notes are only displayed on posts once contributors of opposing political viewpoints agree that a note is necessary. For many of the most divisive political debates, that's never going to happen, so for a lot of claims, X is facilitating the spread of misinformation.
Indeed, research has shown that the expansion of Community Notes has done little to reduce engagement with misinformation in the app.
And then there are the claims that coordinated groups are already active within the Community Notes system, working to amplify and/or quash notes that go against their own agendas, with some of those groups potentially operating on behalf of foreign governments. Taken together, it does seem like X is offering little protection from voter manipulation heading into the election period.
Which could skew political debate, and subsequent voter outcomes, with Musk himself looking to sway voters towards Republican candidates.
X's direct influence in this respect is far smaller than Facebook's (Pew Research data shows that 12% of U.S. adults regularly get news content in the app). But X/Twitter has always had an outsized influence on related debate, because of its popularity among the most passionate newshounds and reporters, who source much of their information from the app, then disseminate it to other platforms.
That’s why Donald Trump was able to use Twitter to such great effect, and likely why Elon was so attracted to it.
The outcome, then, is that more voters are going to be influenced by misinformation via the app, with some of the most divisive, angst-inducing claims already stemming from X posts.
Will that sway the outcome of elections? Likely yes, and with Meta offering no counter, that does seem like a significant concern.
In the end, however, Meta seems more concerned about its business interests than the role it plays, or doesn’t, in politics. Which, again, makes sense when you weigh up the cost-benefit of such for the company. But the concern is that X-sourced, unfounded conspiracies are going to infect the minds of enough voters to sway the outcome of each poll, which could cause significantly more harm in the long run.
Of course, it’s not Meta’s responsibility to play arbiter in such, and it’s also worth noting that TikTok is in a difficult position as well, given its alleged ties to the Chinese Government, and how that might influence what users see in that app.
But it is a potentially concerning situation, heading into the various polls, with X’s more “free for all” approach looking far more like the situation in the lead-up to the 2016 election, as opposed to the lessons learned as a result.
The worst part is that seemingly nothing can be done about this, and that all of this analysis and attribution will be conducted in retrospect.
And for many, many people, that could be too late.