Telling AI it's an expert actually makes it worse at coding and math, researchers find

Reviewed by Nidhi Govil

University of Southern California researchers discovered that popular AI prompting techniques can backfire. Telling AI models to act as experts improves performance on safety and writing tasks but significantly reduces accuracy on factual work like coding and math. The team developed PRISM, a solution that helps models decide when to use expert personas and when to rely on their base training.

Expert Persona Prompting Backfires on Knowledge Tasks

A widely adopted AI prompting technique may be sabotaging your results. Researchers from the University of Southern California have found that instructing AI models to assume expert personas, such as "You're an expert machine learning programmer," can worsen their factual accuracy on knowledge-based tasks like coding and math [1]. While this approach has been popular since 2023, when role-playing instructions first gained traction in AI prompting circles, the practice degrades performance exactly when factual recall matters most.

Source: The Register

Using the Measuring Massive Multitask Language Understanding (MMLU) benchmark, the research team tested persona-based AI prompting across multiple subject categories. The results were striking: expert persona prompts achieved only 68.0 percent accuracy, compared to 71.6 percent for the base model [1]. The gap reveals a fundamental problem with how these prompts work. According to the researchers, persona prefixes activate instruction-following over factual recall, effectively distracting the model from accessing its pretrained knowledge base [1].
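An experiment of this shape is easy to picture as a simple A/B harness that asks the same multiple-choice questions with and without a persona prefix. The sketch below is illustrative only, not the study's actual code: the prefix wording, the `ask_model` stub, and the toy questions are all assumptions, and a real run would replace `ask_model` with a call to an actual language model.

```python
# Minimal MMLU-style A/B harness: persona-prefixed prompt vs. bare question.
# `ask_model` is a hypothetical stand-in for a real LLM call, stubbed here
# so the harness runs offline.

PERSONA_PREFIX = "You are an expert in the subject below. "  # assumed wording

def ask_model(prompt: str) -> str:
    """Stub model that always answers 'B'. Replace with a real API call."""
    return "B"

def accuracy(questions: list[dict], use_persona: bool) -> float:
    """Score the model on multiple-choice items, optionally with a persona."""
    correct = 0
    for q in questions:
        prompt = (PERSONA_PREFIX if use_persona else "") + q["question"]
        if ask_model(prompt) == q["answer"]:
            correct += 1
    return correct / len(questions)

questions = [
    {"question": "2 + 2 = ? (A) 3 (B) 4", "answer": "B"},
    {"question": "Capital of France? (A) Paris (B) Rome", "answer": "A"},
]

base = accuracy(questions, use_persona=False)
persona = accuracy(questions, use_persona=True)
print(f"base: {base:.1%}  persona: {persona:.1%}")
```

With a real model, the reported result would correspond to `persona` coming out a few points below `base` on knowledge-heavy categories.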

Why AI Model Performance Suffers Under Expert Roles

The core issue lies in what expert personas actually do, and don't do, to AI systems. Telling a model it's an expert doesn't add any facts to its training data [1]. Instead, it shifts the model's operational mode. PhD student Zizhao Hu, one of the study's co-authors, explained that asking AI to adopt an expert programmer persona won't improve code quality or utility [1]. The technique proves particularly harmful for what the researchers call pretraining-dependent tasks: those requiring factual accuracy, like math and coding [1].

Source: TechRadar

However, the picture isn't entirely negative. For alignment-dependent tasks involving writing, role-playing, and safety protocols, personas do improve AI model performance [1]. A dedicated "Safety Monitor" persona increased attack refusal rates across three safety benchmarks, with JailbreakBench showing a 17.7-percentage-point jump, from 53.2 percent to 70.9 percent [1]. This split in performance explains why online guides continue recommending expert personas despite mixed results.

PRISM Offers Smart Solution Through Adaptive Routing

To address this challenge, the researchers developed Persona Routing via Intent-based Self-Modeling, or PRISM [1]. This technique uses a gated LoRA (low-rank adaptation) mechanism that keeps the base model intact for generations requiring pretrained knowledge [1]. The system learns to dynamically apply persona-based behaviors only where they improve output, falling back on the unmodified model otherwise [2].
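The gated-LoRA idea can be sketched in a few lines. This is a minimal illustration of the mechanism described above, not the paper's implementation: the dimensions, initialization scales, and the binary gate are all assumed for demonstration. The key property is that a closed gate reproduces the frozen base model exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4  # hidden size and LoRA rank (assumed toy values)

W = rng.normal(size=(d, d))         # frozen base weight matrix
A = rng.normal(size=(r, d)) * 0.1   # LoRA down-projection (trainable)
B = rng.normal(size=(d, r)) * 0.1   # LoRA up-projection (trainable)

def forward(x: np.ndarray, gate: float) -> np.ndarray:
    """Gated LoRA layer: gate=0 is exactly the frozen base model;
    gate=1 adds the low-rank persona adaptation on top of it."""
    return W @ x + gate * (B @ (A @ x))

x = rng.normal(size=d)
base_out = forward(x, gate=0.0)     # pretraining-dependent task: base only
persona_out = forward(x, gate=1.0)  # alignment-dependent task: persona on

# With the gate closed, the output is identical to the base model's.
assert np.allclose(base_out, W @ x)
```

Because the gate multiplies only the low-rank update, routing to the base model costs nothing and cannot corrupt pretrained knowledge, which is the property PRISM exploits.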

PRISM generates answers both with and without personas, compares which performs better, and learns when to apply each approach in future interactions. This avoids the tradeoffs of prompt-based routing at inference time and of supervised fine-tuning that bakes behavior into model weights [1].
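The compare-and-learn loop can be caricatured as follows. This is a deliberately simplified sketch, not PRISM's training procedure: the task labels, the `score` function, and majority-vote routing are all hypothetical stand-ins for the paper's self-modeling machinery.

```python
# Toy sketch of routing supervision: for each training query, score the
# persona and base answers, record which mode won, then route each task
# type to whichever mode won more often.

from collections import defaultdict

def learn_routing(examples, score):
    """examples: (task_type, query) pairs; score(query, mode) -> float."""
    wins = defaultdict(lambda: {"persona": 0, "base": 0})
    for task_type, query in examples:
        better = ("persona" if score(query, "persona") > score(query, "base")
                  else "base")
        wins[task_type][better] += 1
    return {t: max(w, key=w.get) for t, w in wins.items()}

# Hypothetical scorer mirroring the reported finding: personas help
# safety-style tasks and hurt factual ones.
def score(query, mode):
    if "refuse" in query:                      # safety-style query
        return 1.0 if mode == "persona" else 0.5
    return 1.0 if mode == "base" else 0.5      # factual query

examples = [("safety", "refuse harmful request"), ("math", "2 + 2 = ?")]
print(learn_routing(examples, score))
```

The learned table plays the role of the gate in the previous section: safety-flavored inputs get the persona behavior, factual ones get the untouched base model.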

What This Means for Effective AI Prompting

The findings suggest a simpler approach to prompt engineering. Hu advised: "When you care more about alignment (safety, rules, structure-following, etc), be specific about your requirement; if you care more about accuracy and facts, do not add anything, just send the query" [1]. For tasks requiring factual accuracy, specific, comprehensive prompts that provide context and tools work better than role-playing directives.

The research also uncovered that reasoning models benefit more from context length while instruction-tuned models show greater sensitivity to personas. This complexity suggests users should focus on clearly explaining tasks and sharing relevant context rather than dictating how AI should approach responses. The paper's authors specifically discourage exploiting biases through over-engineered prompts, warning this may produce unexpected side effects and reinforce societal biases.

TheOutpost.ai

© 2026 Triveous Technologies Private Limited