
If you've ever run a large-scale influencer campaign, you know the drill. You find 200 creators who fit the brief, your team starts vetting them one by one, and somewhere around creator number 40, someone discovers that a seemingly perfect fitness influencer once promoted a crypto scam in 2021. Back to the drawing board.
Now multiply that by every market you operate in. German creators, US creators, Italian creators, each with their own life, their own opinions, their own language. Your team doesn't speak seven languages.
This is the problem we kept running into at Tatam. And it's why we built Tatam ShieldAI.
The gap nobody talks about
Brand safety in influencer marketing has been stuck between two bad options for years.
Option one: you vet everyone manually. Your team Googles every creator, scrolls through years of tweets, news articles, and eventually builds a spreadsheet of "safe" and "not safe." It's thorough. It also takes forever, and it completely breaks when scaling across multiple markets.
Option two: you use automated keyword filters. These scan bios and captions for words like "scam" or "controversy" and flag anything that matches. Fast? Sure. But also incredibly limited.
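To make the limitation concrete, here is a minimal sketch of what such a naive keyword filter amounts to (the keyword list and captions are hypothetical, not from any real tool):

```python
KEYWORDS = {"scam", "controversy"}

def keyword_flag(text: str) -> bool:
    # Naive filter: flags any caption containing a risky word,
    # with no sense of context or intent.
    words = {w.strip(".,!?").lower() for w in text.split()}
    return not KEYWORDS.isdisjoint(words)

# A creator warning their followers about scams gets flagged...
print(keyword_flag("How to spot a crypto scam before it costs you"))   # True
# ...while a genuinely risky post with no trigger words sails through.
print(keyword_flag("Huge returns, guaranteed. DM me to join my group"))  # False
```

The filter has no idea which post is actually the problem, which is exactly why keyword matching alone breaks down.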
Neither option works when you're managing campaigns for brands across dozens of markets. We needed something different: intelligent, and built to scale.
What ShieldAI actually does
At its core, ShieldAI is a research AI agent. Think of it as giving your brand safety analysis superpowers.
When you feed it a creator's social media link, it doesn't just check their bio. It runs real-time web searches across news articles, forums, social discussions, and archives. It even identifies the creator's real name and repeats the search using it. Then it uses LLMs to actually read what it finds. Not keyword matching. Reading. Understanding context, sentiment, associations.
The output isn't a red or green light. It's a summary: here's what we found, here's why it might matter, here's the severity, plus a recommendation of "Safe" or "Investigate Further." Our team takes it from there.
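As a rough sketch of that research-and-summarize shape (all names and the severity scale here are illustrative assumptions, not ShieldAI's actual internals):

```python
from dataclasses import dataclass

SEVERITIES = {"low": 1, "medium": 2, "high": 3}

@dataclass
class Finding:
    source: str    # e.g. a news article, forum thread, or archive hit
    summary: str   # what the LLM understood, not a keyword match
    severity: str  # "low" | "medium" | "high"

@dataclass
class SafetyReport:
    creator: str
    findings: list

    def recommendation(self) -> str:
        # The agent never approves or rejects; it only recommends
        # whether a human should take a closer look.
        worst = max((SEVERITIES[f.severity] for f in self.findings), default=0)
        return "Investigate Further" if worst >= SEVERITIES["medium"] else "Safe"

report = SafetyReport(
    creator="@example_creator",
    findings=[Finding("local news article", "promoted a crypto scheme in 2021", "medium")],
)
print(report.recommendation())  # -> Investigate Further
```

The point of the structure is that every recommendation carries its evidence with it, so the human reviewer sees the "why," not just the verdict.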
A quick example from a recent run: we checked a creator whose content looked perfectly clean on the surface. ShieldAI's web search pulled up connections to far-right movements buried in local-language news articles that no English-speaking team would have caught in a standard review.
How it works in practice
A few things that made this actually useful for our day-to-day:
Built for Google Sheets and Airtable. Our campaigns and team live in Google Sheets and Airtable, so we built ShieldAI to be embedded right there, where we list creators for approval in the media plan.
Multi-language support. This was non-negotiable for us. We operate across markets where controversies don't make English-language news. ShieldAI searches and analyzes in the local language.
Custom keyword sets. Different brands have different sensitivities and very different definitions of "risky." Our team can customize what the agent looks for on each campaign.
Blacklist integration. If you already maintain internal lists of creators you won't work with, ShieldAI can check against those automatically.
Scale. We can run hundreds of profiles through the system in the time it used to take to manually vet a handful.
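The blacklist check in particular is simple to picture. A minimal sketch, assuming handles can arrive in mixed forms like "@Name" or "name" (function names are hypothetical):

```python
def normalize_handle(handle: str) -> str:
    # Compare handles case-insensitively, with or without a leading "@".
    return handle.strip().lstrip("@").lower()

def check_blacklist(creators, blacklist):
    # Returns a mapping of each candidate creator to whether they
    # appear on the internal do-not-work-with list.
    blocked = {normalize_handle(h) for h in blacklist}
    return {c: normalize_handle(c) in blocked for c in creators}

print(check_blacklist(["@FitGuru", "cleancreator"], ["fitguru"]))
# -> {'@FitGuru': True, 'cleancreator': False}
```

Normalizing before comparing matters, because the same creator rarely appears with identical spelling across a media plan and an internal list.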
Why we still don't let the AI decide
Here's the thing we feel strongly about: ShieldAI doesn't approve or reject anyone. It researches. It summarizes. It flags.
The actual final check and decision is still made by a person on our team.
We call this "AI in the loop" rather than "human in the loop," because the AI isn't running the show. It's a tool in a human-led process. Our specialists still handpick every influencer. They still review every piece of content before it goes live. ShieldAI just makes sure they're not flying blind when they do.
Our clients get visibility too. We periodically share safety summaries so they can make their own calls based on their own internal risk appetite.
Want to see ShieldAI in action? Get in touch, we're happy to walk you through a demo.


