Spend any time around AI advice online and you will quickly see a popular tip:
“Just tell the AI to act like a university professor.”
The idea sounds sensible. Surely asking ChatGPT or Gemini to behave like an expert must make the answer better, more accurate, and more trustworthy. Except… that is not how Large Language Models actually work.
Recent academic research and practical testing suggest something more subtle — and more important — is happening instead.
What the Research Really Shows
Studies examining how Large Language Models respond to prompts like “act as an expert” or “respond like a professor” found that, in most cases, the model does not gain new knowledge or reasoning ability.
What changes is the style of delivery.
The language becomes more formal.
The tone becomes more confident.
The structure sounds more authoritative.
But the underlying information is often unchanged — and in some cases, the mistakes simply sound more convincing.
In short: the AI does not become smarter. It becomes more persuasive.
Why This Happens
LLMs such as ChatGPT and Gemini do not “think” or “understand” in a human sense. They predict what words are likely to come next based on patterns learned from vast amounts of human-written text.
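To see that prediction step directly, here is a minimal sketch using the open-source Hugging Face transformers library and the small public GPT-2 model (chosen purely because it is free to download; ChatGPT and Gemini are far larger, but the core loop is the same). The model assigns a probability to every token in its vocabulary as the next word, and generating text is just repeating that step:

```python
# pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 stands in here for any causal language model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# The model never "answers" a question: it scores every token in its
# vocabulary as a candidate continuation of the text so far.
inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Turn the scores at the final position into next-token probabilities.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r} -> {prob:.3f}")
```

Nothing in that loop looks up facts or checks truth; it only ranks likely continuations.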
When you say “act like a professor”, the model does not access a hidden database of expert knowledge. Instead, it shifts toward the writing style it has learned to associate with academic text.
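In practice, “act like a professor” is nothing more than extra text the model is conditioned on. A minimal sketch with the OpenAI Python client makes this visible (the model name and question are placeholders, not recommendations):

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "Why is the sky blue?"  # any question will do

def ask(persona: str) -> str:
    """Send the same question, changing only the persona instruction."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content

# Both calls hit the same weights with the same knowledge. The persona
# only changes the text the model is conditioned on, which shifts the
# style and tone of the continuation, not the facts available to it.
plain = ask("You are a helpful assistant.")
professor = ask("Act like a university professor.")
```

Run both and the professor version will typically read more polished and more assured; nothing about its factual grounding has changed.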
If the underlying information is incomplete or wrong, that confident delivery can make the error harder to spot, especially for newcomers.
The Illusion of Authority
This is where problems arise. Humans are naturally influenced by tone and authority. A calm, confident explanation sounds correct, even when it is not.
Researchers have also noted that LLMs are trained to give an answer even when they are uncertain. They are optimised to be helpful, not to say “I do not know.”
That means if the AI lacks information, it may fill the gaps with something that sounds plausible. This is what is often called hallucination — not deliberate deception, but confident guesswork.
Why Shortcuts Do Not Work
Many people use AI tools hoping for shortcuts:
- Write articles instantly
- Replace research
- Skip thinking
- Avoid decision-making
That approach almost always leads to low-value content. The AI simply recombines ideas that already exist elsewhere, adding nothing new.
But when a human provides structure, context, intent, and boundaries, the AI becomes incredibly useful. It refines language. It improves clarity. It helps explore alternatives.
You only get out what you put in.
AI Works Best as an Assistant
This is the key point: AI is not a replacement for human judgement. It is an assistant.
Used correctly, it can:
- Improve wording and flow
- Summarise notes
- Suggest structure
- Help clarify ideas
Used incorrectly, it can:
- Reinforce mistakes
- Create convincing nonsense
- Multiply low-quality content
Even the tools themselves warn users that they can make mistakes. That is not a weakness — it is honesty. The responsibility remains with the human to guide, check, and decide.
Confidence Is Not Accuracy
Asking an AI to sound like an expert does not make it one. Authority in writing comes from knowledge, context, and care, not from tone alone.
This is why good use of AI always involves:
- Clear instructions
- Proper context
- Human review
- Willingness to challenge the output
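For illustration only, here is one rough way to build clear instructions, proper context, and room for challenge into a single prompt. The wording and section labels are my own invention, not an official template; the point is the structure, including explicit permission to say “I am not sure”, which pushes back against the helpfulness bias described earlier:

```python
# A hypothetical prompt template: the structure is the point, not the wording.
draft = "...your notes or draft text here..."  # supplied by the human

prompt = f"""
Role: You are helping me edit a short blog post for beginners.

Context (my draft):
{draft}

Instructions:
1. Tighten the wording without changing my claims.
2. Flag anything you believe is factually doubtful instead of silently "fixing" it.

Boundaries:
- Do not add new facts, statistics, or citations.
- If you are unsure about something, say "I am not sure" rather than guessing.
"""
```

Human review stays outside the template by design: whatever comes back, you still check it, challenge it, and decide.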
There are no magic prompts. No shortcuts. Just better collaboration between human and machine.
Final Thoughts
AI tools like ChatGPT and Gemini are powerful, helpful, and here to stay. They should not be feared — but they should not be misunderstood either.
Telling an AI to “act like a professor” may make it sound smarter, but intelligence comes from how the tool is used, not how it is addressed.
The future belongs to people who use AI thoughtfully — not as a replacement for thinking, but as a partner in doing it better.