The latest developments in generative AI have opened up a range of new possibilities and potential use cases. But are we sure that there’s real value in them within social media apps?
Sure, there are some helpful, practical use cases, like image editing for ad backgrounds, and creating optimized ad copy for varying purposes.
But for regular users, does generative AI really enhance the social app experience?
For years, people have complained about spam messages polluting their DMs, spam comments linking through to junk websites, and artificial engagement prompted by, say, anniversary and birthday updates. These types of posts feel disingenuous, non-engaging, and don’t really add any value to the “social” experience.
Yet now, with generative AI, social apps are trying to make such content even more prominent, with almost every platform now experimenting with different forms of automated content generation, which can then be used to create updates that humans post to their profiles, cosplaying actual engagement.
Is that a good thing?
For context, here's an overview of the current state of generative AI within the major social apps:
- Facebook has an AI post creation tool, generative AI text and image creation for ads, AI chatbots based on celebrities, an AI assistant in Messenger, generative AI stickers, an AI chatbot assistant in Ray Ban Stories, and generative AI profile pictures.
- Instagram has generative AI image filters and background generation tools, and AI creation and editing options, while it’s also experimenting with a conversational AI chatbot for DMs.
- LinkedIn has an AI post composer, an AI assistant for InMails, AI article summaries, generative AI tools that can write your profile for you, and job ad suggestions, among other tools within its Recruiter and ad options.
- Snapchat has its “My AI” conversational chatbot, as well as its “Dreams” image generation tool, along with AI-generated Snap captions (for paying subscribers), and “AI Mode” for creating generative AI Snaps.
- TikTok has AI profile images, AI effects tools, and AI song generation, while it’s also experimenting with conversational search, powered by AI, text-to-video generation, as well as an integrated chatbot experience.
- Pinterest is primarily using generative AI, at this stage, to power its back-end search and ad tools.
- X owner Elon Musk says that X’s “Grok” AI chatbot will soon be able to create in-app updates for you, while the company is also exploring visual generation via the tool.
As you can see, most of these tools are designed to simulate human updates, and create unreal images and depictions. And there are already a heap of these options available, which, intentional or not, effectively reduce, and even eliminate, human input in the process.
Why would people want that? Why would people want to post robot responses, and attempt to pass them off as their own thoughts and opinions?
And even if creators might find value there, what about the consumers of such updates?
Spammers and scammers will love it, no doubt, and engagement farmers will be keen to “optimize” their updates through these tools. But are those the types of posts that actually enhance social media interaction?
Of course, that’s seemingly an afterthought, because now, you can create a profile image of yourself as an 18th century warrior. Isn’t that cool?
As a novelty, sure, that’s kind of interesting. But how many generative AI images can you create to depict yourself in different scenes before it starts to weigh on you that you’re not actually doing any of these things?
Social media, by definition, is “social”, which involves humans interacting with other humans, sharing their own experiences, and the things that are filtering through their real human brains, in order to then feel more connected to the world around them. That’s been the universal value of the medium, building on books and movies in facilitating more understanding and connectedness, so we all feel less alone, and more engaged with the world around us.
How do bot updates help with that?
And of course, this is all, inevitably, going to get a lot worse yet.
Indeed, last week, LinkedIn noted that it’s re-building its foundations around AI, in order to power “the next ten years of product development and innovation.” Which means more AI integration, and more bot-generated content. And as these tools continue to iterate on the latest trends, in order to maintain relevance, they’ll also be training on more and more AI-generated updates that are flowing through their circuits.
Which means that AI tools will increasingly be powered by AI responses, diluting more and more human input out of the process with every refresh.
The “social” aspect is becoming more automated, more stale, and less human with every such integration.
Of course, the counter is that people can already use AI tools outside of social apps anyway, so whether they’re integrated or not, they’re going to be utilized for the same purpose. Which is partly true, but still, adding them in-stream, making it easier for people to just tap a button to generate a response, seems like a step in the wrong direction either way.
That’s not to say that Gen AI tools are not useful. As noted, there are practical use cases for optimized, simplified tools that can complement human creation.
But bleaching humanity out of the source code is simply not a pathway to value.
And whether we realize it or not, the Gen AI shift is going to take far more significant turns yet.