Social Media and AI
A world of deception.
Every day we hear about a new AI tool promising to enhance our work and make our lives easier, from optimizing back-end operations to generating the content we scroll past daily. But as the use of AI in social media ramps up, are we actually using it the right way?
Every week seems to bring a new tool promising to boost productivity, save time or automate content creation, all under the banner of progress. Yet in the rush to embrace these efficiencies, we risk losing something vital: authenticity!
Fran Watson, our Organic Social Media Expert, explains why in B2B, trust is everything. Decisions are high-stakes, investments are long-term and relationships are built to last. Organic social media plays a critical role: it’s where credibility is earned, thought leadership is demonstrated and brand authority is shaped over time. But trust isn’t built overnight. It’s a slow burn, fueled by real insight, consistent presence and genuine human connection. Now, with AI disrupting how we create and consume content, we have to ask: is it enhancing the trust we’ve worked so hard to build, or quietly eroding it?
Convincing, but not quite human
AI is rapidly becoming a go-to tool for social media content creation. From writing post copy and drafting comments to generating images and even producing short-form videos, AI is streamlining the entire creative process. What used to take hours of planning and production can now be done in minutes, often with impressive results to the untrained eye.
Take Mia Zelu, for example, a viral AI influencer who’s been fooling millions online. Most recently spotted at Wimbledon, she now boasts over 169,000 Instagram followers. With hundreds of comments on each post and Insta-style captions that could’ve come straight from a lifestyle influencer’s keyboard, she blends in almost too well. Her visuals are just as convincing: creative, polished and lifelike, though often overly airbrushed (which, let’s be honest, is nothing new on Instagram!). But was she really fooling anyone? Her bio clearly states she’s an AI influencer. Still, scroll through the comments and you’ll find plenty of followers engaging with her as if she’s a real person.
Despite her success, not everything goes to plan when AI is involved, especially when it comes to imagery. As we’ve seen time and time again, AI still struggles with spatial awareness and realism, though you might not notice at first while scrolling through her profile. Take the example to the right, where Mia appears noticeably larger than her fellow Wimbledon spectator. It’s a small detail, but it breaks the illusion and highlights a bigger issue: when these visual glitches slip through, they undermine credibility and remind audiences that what they’re seeing isn’t real.
And it’s not just visual content that’s raising questions. Text-based AI interactions are also blurring the lines between real and artificial, especially on platforms like X. Take Grok, X’s AI chatbot, which was designed to engage with users in real time, replying to tweets, offering quick takes and injecting Elon Musk-style sarcasm into the platform’s conversations. But while the intent was to create a more interactive, intelligent experience, Grok has had its fair share of missteps.
From replying to satire as if it were fact, to amplifying conspiracy theories, to responding with outdated or inaccurate information, the bot has repeatedly blurred the line between helpful and harmful. These moments highlight a core risk of AI-driven interaction at scale: when machines are given a voice without full context, the results can be tone-deaf, misleading or even damaging, especially on a platform where speed and virality often trump nuance.
Faking authority: The credibility problem with AI thought leadership
But it’s not just platforms like X experimenting with AI-driven interaction, or AI influencers like Mia Zelu capturing attention. The same tools are now in the hands of anyone looking to build authority online. With ChatGPT and similar platforms, it’s easier than ever to craft sharp takes, polished posts and thought leadership that looks the part, even if there’s little substance behind it. But as AI-generated content floods our feeds, audiences are starting to question what and who is real. When everyone looks like a thought leader, how do we know who to trust? If AI becomes a shortcut to credibility, rather than a tool to support it, we risk undermining the very trust that B2B relationships are built on.
The rise of B2B influencers is quickly becoming one of the most powerful trends in the industry. More than ever, professionals are building personal brands alongside their corporate roles, from employee advocates sharing real-world experiences, to industry experts turning into full-fledged creators, to founders stepping up as storytellers for their companies. These voices are trusted not because they’re polished, but because they’re real! They speak from lived experience, offer genuine insight and show up consistently, and that’s exactly what builds credibility in B2B. The risk with AI-generated content is that it can look like expert advice on the surface, but it often lacks the nuance, depth and accountability that come from actually being part of the industry. And in a space where relationships are long-term and decisions carry weight, that distinction really matters.
The real risk and opportunity of AI in B2B content
So what does all this mean for B2B? As AI becomes more embedded in content creation, it brings real challenges, especially in an industry where trust and expertise are everything.
First, there’s the dilution of expertise. When anyone can publish polished thought leadership at the click of a button, how do the real experts stand out? Then there’s audience fatigue and skepticism. Overly polished, AI-generated posts can start to feel sterile, lacking the human insight or edge that makes content truly resonate. And perhaps most importantly, there’s the question of brand trust. If businesses lean too heavily on AI without transparency, or use it in ways that mislead, they risk damaging their long-term credibility. In a space built on relationships, is it a risk worth taking?
AI has huge potential in B2B social, but it should never come at the cost of authenticity. Unlike B2C, where content is often driven by speed, trends and entertainment, B2B is rooted in substance: real expertise, thoughtful nuance and long-term relationships. That’s why AI shouldn’t be used to mimic human voices or manufacture authority; it should be used to support and scale the real ones. Let it help refine messaging or personalize at scale, but always keep the human at the core. And crucially, make sure AI doesn’t drown out the authentic human voice. In a space where trust matters, authenticity isn’t just a nice-to-have, it’s what makes your content resonate and stand out.