TikTok accused of creating racist AI ads for indie publisher Finji without permission

Reviewed by Nidhi Govil

Indie game publisher Finji alleges TikTok used generative AI to modify its game advertisements without consent, creating racist and sexualized depictions of characters. Despite having AI features disabled, Finji discovered the problematic AI-generated ads only after fans reported them. TikTok's response has been inconsistent, initially denying the issue before acknowledging it but offering no clear resolution.

Indie Game Publisher Finji Confronts TikTok Over Unauthorized AI Ads

Finji, the indie game publisher behind beloved titles like Tunic and Night in the Woods, has accused TikTok of using generative AI to alter advertisements for its games without its permission or knowledge. The company only discovered the problematic AI-generated ads after followers of its official TikTok account sent screenshots showing disturbing modifications to game characters [1]. CEO and co-founder Rebekah Saltsman publicly addressed the issue on Bluesky, asking followers to send screencaps of any Finji ads that looked "distinctly UN-Finji-like" [2].

Source: PC Gamer

The most egregious example involved unauthorized advertisements for Usual June, Finji's upcoming game featuring a Black woman protagonist. According to evidence shared with IGN, one AI-modified image depicted the character June with "a bikini bottom, impossibly large hips and thighs, and boots that rise up over her knees," creating what Finji describes as a racist and sexualized caricature that bears no resemblance to the character's actual in-game appearance [3]. These unauthorized advertisements appeared as slideshows rather than the video format Finji had created, and were displayed on TikTok as if posted directly from Finji's official account.

Source: Engadget

TikTok's Inconsistent Response Raises Platform Control Questions

When Saltsman escalated the issue to TikTok customer support, the platform's response was both inconsistent and inadequate. Initially, TikTok stated it could find "no evidence" that "AI-generated assets or slideshow formats are being used," despite Finji providing clear screenshots of the modified content [1]. In subsequent exchanges, TikTok appeared to acknowledge the evidence, stating it was "no longer disputing whether this occurred" and promising to escalate the issue internally [1].

Finji had explicitly disabled TikTok's AI features, including the Smart Creative and Automate Creative options. Smart Creative uses generative AI to create multiple versions of user-created ads, mixing and matching different elements to test which combinations perform best with audiences; Automate Creative uses AI to automatically optimize assets like images, music, and audio effects [3]. Saltsman provided evidence to IGN showing both features were turned off, which a TikTok agent confirmed for the ads in question [1].

Catalog Ads Format Reveals Deeper Platform Control Issues

After multiple frustrated exchanges, TikTok eventually admitted the ad "raises significant issues, including the unauthorized use of AI, the sexualization and misrepresentation of your characters, and the resulting commercial and reputational harm to your studio" [1]. The platform explained that Finji's campaign used a "catalog ads format" designed to "demonstrate the performance benefits of combining carousel and video assets in Sales campaigns." TikTok described this as "an initiative aimed at helping advertisers like you achieve better results with less effort," but notably did not address the harmful content directly [2]. Finji apparently opted into this ad format without knowing it had done so [1].

The incident highlights critical questions about platform control over AI-generated content and advertiser consent. Finji reports being unable to view or edit the AI-generated versions of its own ads, having become aware of them only through comments and Discord reports from users [3]. Based on user comments, Saltsman suspects at least one other inappropriate ad, featuring the character Frankie, is circulating, but she cannot confirm this without seeing the modifications herself.

Brand Integrity and Reputational Damage for Small Studios

Saltsman was told the issue could not be escalated any higher, and the exchange remains unresolved. In her statement to IGN, she expressed shock at "TikTok's complete lack of appropriate response to the mess they made." She expected both an apology and clear reassurance about how similar issues would be prevented, but was "obviously not holding my breath for any of the above" [1]. Saltsman elaborated on the severity: "It's one thing to have an algorithm that's racist and sexist, and another thing to use AI to churn content of your paying business partners, and another thing to do it against their consent, and then to also NOT respond to any of those mistakes in a coherent way?" [2]

The situation raises urgent concerns for other advertisers about brand integrity when platforms deploy AI tools without explicit consent. For small indie game publishers like Finji, such misrepresentation carries significant commercial and reputational risks. The case demonstrates how automated systems can produce harmful content that contradicts a company's values and damages its relationship with its audience. As Saltsman pointedly asked: "Does TikTok want me to be grateful for the mistreatment of my company and our game?" [2]

TikTok declined to comment when approached by IGN [1].

TheOutpost.ai

© 2026 Triveous Technologies Private Limited