Hey, remember how last week TikTok added new AI-generated content labels, and new rules against posting AI-generated content without them?
This would be one of the reasons why:
TikTok’s got AI generated content problems

And it’s going to get a lot worse… pic.twitter.com/IihehN8pp8

— Matt Navarra (@MattNavarra) September 25, 2023
As you can see in this example, posted by social media expert Matt Navarra, TikTok’s currently seeing an influx of AI-created spam, with a simulated character presenting a horrendous tool that claims to “remove the clothes of any picture you want”.
Yeah, it’s all bad, and there’s a heap of profiles promoting the same thing in the app.
As many have warned, the rise of generative AI will facilitate all new types of spam attacks by making it much easier to create profiles and videos like this at scale. These clips won’t necessarily dupe many real users, but they could slip through each app’s detection systems and gain significant reach before they’re eventually removed.
Which you would assume they will be, given that TikTok now has very clear rules on this front.
“We welcome the creativity that new artificial intelligence (AI) and other digital technologies may unlock. However, AI can make it more difficult to distinguish between fact and fiction, carrying both societal and individual risks. Synthetic or manipulated media that shows realistic scenes must be clearly disclosed. This can be done through the use of a sticker or caption, such as ‘synthetic’, ‘fake’, ‘not real’, or ‘altered’.”
Though, technically, I’m not 100% sure that this type of video would be covered by this policy, because while it does depict a real person, it’s not a “realistic scene” as such. The policy is more geared toward hoax content, like the AI-generated images of the Pope in a puffer jacket, or the fake explosion at the Pentagon.
But does a fake person promoting a trash app count in this context?
If it’s not yet covered, I assume that TikTok will expand its rules to address such content, because this is the type of material that could become very problematic, especially as more of these poorly scripted, robotic versions of people continue to crop up. And as the technology evolves, it’s going to get even harder to distinguish between real and fake people.
I mean, it’s pretty easy with the above clip, but some of the other examples of AI look pretty good.
X owner Elon Musk has been one of the loudest voices warning about this, repeatedly highlighting the coming influx of spam that will utilize these new technologies.
As Musk notes, this is part of his own push to implement verification as a means to filter out bot content. And while not everyone agrees that paid verification is the solution, the above examples from TikTok are exactly the type of thing that Musk is warning about, and trying to address.
As such, you can expect every platform to introduce new AI content rules shortly, with Instagram also developing its own AI content labels, and YouTube working on its own tools to deal with the expected “AI tsunami”.
Because it is going to get worse.
Expect to see more bots in your social streams soon.