Canva's AI tool caught replacing Palestine with Ukraine, sparking bias concerns

Canva's Magic Layers feature was found automatically changing the word 'Palestine' to 'Ukraine' in user designs without instruction. The graphic design platform has apologized and resolved the issue, but the incident raises questions about AI bias in design tools and the training data used to build them.

Canva AI Tool Replaces Palestine in User Designs

Canva has issued an apology after its Magic Layers feature was discovered automatically replacing the word Palestine with Ukraine in user designs [1]. The issue came to light when X user @ros_ie9 shared an image showing the AI tool changing text from "cats for Palestine" to "cats for Ukraine" without any user instruction [2]. The Magic Layers feature, introduced last month as part of Canva's AI overhaul, is designed to convert flat images into fully editable, multi-layered designs inside the graphic design platform's editor.

Source: The Verge

How the AI-Powered Design Tools Exhibited Bias

What makes this incident particularly troubling is that the AI tool was not supposed to make visible alterations to content at all. Multiple users reported being able to replicate the bug before Canva fixed it, and the issue appeared specifically limited to the word Palestine [1]. Related terms like Gaza were unaffected, raising questions about the training data and instructions behind the tool [2]. That the feature exhibited bias in this targeted way suggests potential problems in how the model was developed and what guardrails were implemented during its creation.

Canva Apologizes and Launches Internal Audit

Canva spokesperson Louisa Green confirmed the company has resolved the issue. "We became aware of an issue with our Magic Layers feature and moved quickly to investigate and fix it," Green told The Verge. "We take reports like this very seriously, and we're putting additional checks in place to help prevent this in future. We're sorry for any distress this may have caused" [1]. The company has launched an audit into how the issue arose and is reviewing its internal testing processes to detect and prevent unexpected outputs in the future [2].

Pattern of AI Bias Raises Broader Concerns

This incident is not isolated in the AI industry. Meta faced similar criticism when its generative AI tools in WhatsApp produced images of a boy with a gun when asked to create an image of a Palestinian [2]. In 2023, activists discovered that OpenAI's ChatGPT refused to answer affirmatively when asked if Palestinians should be free, while having no issue answering the same question for other populations [2]. These patterns suggest systemic challenges in how AI models are trained and tested across the industry.

Implications for Canva's Competition with Adobe

The timing of this blunder is particularly awkward for Canva, which is increasingly trying to compete against Adobe's suite of AI-powered design tools [1]. Magic Layers is a major component of Canva's recent AI overhaul, which the company claims "marks the beginning of the next era of creation" [1]. As AI tools become central to design platforms, users and industry watchers will be scrutinizing how companies address bias and implement safeguards to ensure their tools don't inadvertently censor or alter content in politically sensitive ways. The question of why the feature changed text on its own, and specifically replaced Palestine with Ukraine, remains unanswered, leaving concerns about transparency in AI development.
