Stop telling AI it's an expert programmer, you're making it worse at its job: new research shows the best results need specific prompts
Date:
Tue, 24 Mar 2026 16:30:00 +0000
Description:
Stop telling AI how it should generate answers; focus more on asking what you want of it, and give it the necessary context.
FULL STORY
New research claims that asking AI to 'act as an expert' doesn't actually improve result reliability, despite being a widely used prompt enhancer.
More specifically, it might help with alignment-style tasks such as writing, tone, and structure guidance, but it likely hurts knowledge tasks like maths and coding. Per the data, these so-called expert personas underperform base models on benchmarks, likely because they trigger the AI to shift into instruction-following mode rather than fact recall. "We specifically discourage crafting (system) prompt for maximum performance by exploiting biases, as this may have unexpected side effects, reinforce societal biases and poison training data obtained with such prompts," reads the paper, written by researchers affiliated with the University of Southern California (USC).
Separate research along the same lines found that while persona prompting can help shape tone and style, it does nothing to add factual capability to a model.
Instead, prompt length and accuracy matter. A comprehensively designed prompt will ultimately give AI as much context as it needs to act autonomously and generate higher-quality output.
The paper introduces a new solution, PRISM (Persona Routing via Intent-based Self-Modeling), whereby the AI generates answers both with and without a persona and compares which answer is best. The AI then learns when to apply personas in the future, falling back on the base model's functionality when personas hurt output quality.
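The routing idea can be pictured in miniature. The sketch below is illustrative only, not the paper's actual method: call_model and score_answer are hypothetical stand-ins for an LLM call and a quality judge, and the "learning" is reduced to logging which mode won.

```python
# Illustrative sketch of persona routing: generate an answer with and
# without a persona, keep whichever scores higher, and record the
# outcome so future requests could skip the losing mode.
# call_model and score_answer are hypothetical stand-ins.

def call_model(task, persona=None):
    # Stand-in for an LLM API call; a persona, if given, would be
    # supplied as a system prompt.
    prefix = f"[{persona}] " if persona else ""
    return prefix + f"answer to: {task}"

def score_answer(answer):
    # Stand-in judge: penalises longer answers, loosely mimicking the
    # finding that persona framing can add noise on knowledge tasks.
    return -len(answer)

def route(task, persona, history):
    """Answer with and without the persona, return the better answer,
    and log whether the persona helped."""
    base = call_model(task)
    styled = call_model(task, persona)
    use_persona = score_answer(styled) > score_answer(base)
    history.append((task, use_persona))
    return styled if use_persona else base

history = []
print(route("what is 2+2?", "expert mathematician", history))
```

With this toy judge the base answer always wins (the persona prefix only adds length), which mirrors the article's claim about knowledge tasks; a real system would use a learned quality signal instead.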
Adding to the complexity of prompt engineering, the researchers also uncovered differences between model types, noting that reasoning models benefit more from context length, while instruction-tuned models can be more sensitive to personas.
In short, it seems that model developers are already doing the work needed to ensure generative AI gives us the best output, and that we should simply give chatbots tasks and share relevant context without dictating how they should go about creating a response.
Link to news story:
https://www.techradar.com/pro/stop-telling-ai-its-an-expert-programmer-youre-making-it-worse-at-its-job-new-research-shows-the-best-results-need-specific-prompts