What a Movie Prompt Revealed About AI Bias — And Why It Matters for Professionals Who Rely on Trust and Accuracy
If AI tools like ChatGPT are pulling from “all the data,” why do they keep giving the same answers?
That’s the question a group of 30+ copywriters asked when we ran a simple movie recommendation challenge during a spring 2023 experiment.
Each of us fed ChatGPT our unique preferences and personalities and asked for five movie suggestions. But across dozens of submissions, the same two films popped up again and again.
At first, it was funny. Then it got… concerning.
What we discovered revealed far more than a quirky flaw in the system—it exposed a pattern of algorithmic bias and decision-making limitations that anyone using AI for business should understand.
What We Did: The Prompt and the Challenge
About 750 copywriters participated in an AI challenge using ChatGPT back in spring 2023. We were experimenting with different prompts and exploring various ways to use AI tools. Some experiments were lighthearted, others led nowhere, but a few revealed deeper insights, like this one.
Around 30 of us focused on one particular exercise that would reveal something unexpected about how AI makes decisions.
The prompt began: “You are a film expert who specializes in making recommendations people love. Please recommend 5 movies I should watch this week, based on my own unique preferences and personality…”
We then used a specific template to describe who we were and what kinds of movies we enjoyed. Each person’s input was genuinely different—different personalities, different preferences, different contexts.
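We ran everything through the ChatGPT interface itself, but if you wanted to reproduce the exercise programmatically, a minimal sketch using the OpenAI Python SDK might look like this. The model name and the sample preferences below are illustrative stand-ins, not our exact template:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

# Illustrative stand-in for the preference template each of us filled out.
preferences = (
    "I'm an introvert who loves slow-burn character studies, "
    "foreign-language dramas, and anything involving the ocean."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; use whatever model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "You are a film expert who specializes in making "
                "recommendations people love."
            ),
        },
        {
            "role": "user",
            "content": (
                "Please recommend 5 movies I should watch this week, based on "
                f"my own unique preferences and personality: {preferences}"
            ),
        },
    ],
)

print(response.choices[0].message.content)
```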
The Surprise: Same Movies, Over and Over
What surprised us most wasn’t just the lack of award-winning or blockbuster films. It was the repetition of the same movies across people with vastly different tastes.
Two 2013 movies kept appearing again and again: The Secret Life of Walter Mitty and About Time.
Regardless of the personality type or stated preference, if someone mentioned romantic comedy or adventure, one or both of those films showed up.
One copywriter said she gave highly specific input, yet still received the same recommendations. Even when she asked for alternatives, the results circled back to the same narrow pool.
With the sheer volume of films produced globally over the past century, there should have been more variation. Instead, we saw a closed loop.
Why It Matters: Choice Architecture and Visibility Bias
This isn’t about movies. It’s about how AI structures choice.
When ChatGPT offers a “top” recommendation, whether it’s a movie, a product, a service, or a piece of advice, that recommendation gains traction. The more people engage with it, the more visible it becomes. Visibility reinforces perceived value, which in turn reinforces visibility.
It’s a feedback loop that favors the already-seen.
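To see how quickly that kind of loop narrows the field, here’s a toy simulation. It is not how ChatGPT is actually trained or updated; it’s just a sketch of a popularity feedback loop in which every recommendation makes the same title more likely to be recommended again:

```python
import random

# Toy popularity feedback loop (illustrative only): each recommendation
# increases a title's weight, making it more likely to be picked next time.
titles = ["Title A", "Title B", "Title C", "Title D", "Title E"]
weights = {t: 1.0 for t in titles}
weights["Title A"] = 1.2  # a small head start in visibility

for _ in range(5000):
    pick = random.choices(titles, weights=[weights[t] for t in titles])[0]
    weights[pick] += 1  # visibility reinforces value, value reinforces visibility

total = sum(weights.values())
print({t: f"{weights[t] / total:.0%}" for t in titles})
```

Which title wins changes from run to run, but one or two almost always end up claiming the bulk of the recommendations. That’s the closed loop in miniature: the already-seen keeps getting seen.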
In low-stakes situations (like Netflix picks), this might be annoying. But in high-trust industries, it’s a red flag.
Bias Isn’t Just Annoying. It’s Dangerous in Certain Fields.
For professionals in regulated industries—like legal, financial, or healthcare firms—this pattern of bias and limited outputs isn’t just inconvenient. It’s potentially dangerous.
When you’re dealing with client trust, regulatory compliance, or life-impacting decisions, you can’t afford to rely on systems that default to the same narrow set of “safe” answers. Your clients deserve nuanced, thoughtful responses that account for their specific situations—not algorithmic shortcuts.
The challenge is that what we call AI (artificial intelligence) isn’t intelligent in a human sense. It generates text by predicting what is statistically likely to come next, not by weighing what is true. That’s also why these systems can “hallucinate,” confidently producing plausible but inaccurate content.
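If you want a feel for why probability-driven systems keep surfacing the same “safe” answers, here’s a purely illustrative sketch. The scores are made up; the point is that once a couple of options carry most of the probability, they show up in almost every response:

```python
import math

def softmax(scores, temperature=1.0):
    """Turn raw scores into probabilities; lower temperature
    concentrates probability on the top-scoring options."""
    exps = {k: math.exp(v / temperature) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: round(v / total, 3) for k, v in exps.items()}

# Made-up scores for three candidate recommendations.
scores = {"About Time": 3.0, "Walter Mitty": 2.8, "a lesser-known film": 1.2}

print(softmax(scores))                   # the two front-runners hold over 90% of the probability
print(softmax(scores, temperature=0.5))  # sharper still: the long tail nearly vanishes
```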
Data ≠ Neutral: The Illusion of Objectivity
We tend to think of data as objective. But curated data carries the biases of its sources.
AI responses are based on data we didn’t personally vet or select. That means the output may reflect worldviews, assumptions, or priorities we don’t share—even if the facts appear sound on the surface.
For example, ChatGPT claims cultural knowledge of movies, music, art, and literature from many eras and regions. But in our experiment, it drew on a very narrow slice of that knowledge.
That’s not a problem when you’re picking a Friday night flick. It is when you’re communicating with clients or making decisions that require discretion, precision, or ethical alignment.
Human Lens vs. Machine Output
There’s a fundamental gap between how AI writes and how humans read.
AI writes literally and without emotional context. Humans fill in the gaps using shared understanding. We’re wired for nuance. Machines aren’t.
This is why AI-generated content can feel off, even if it’s grammatically perfect. It lacks the resonance that comes from knowing what to say, what to leave out, and how to read the room.
What Professionals Should Do Instead
If you’re leading communication, content, or compliance in a client-facing role:
Use AI tools, but use them wisely.
- Tell it what to use. The more specific the input, the better the output (see the example prompt after this list).
- Edit everything. Always review AI-generated content with professional judgment.
- Feed better data. Don’t expect great output from vague or default inputs.
- Stay discerning. Tools are helpful. Judgment is essential.
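Applied to the movie exercise, “more specific input” might look like this (an illustration, not a prompt we actually tested): “You are a film expert. Recommend 5 movies released between 1970 and 2010, at least two of them in a language other than English. Do not recommend The Secret Life of Walter Mitty or About Time. For each pick, explain in one sentence how it fits the preferences I’ve described.” The tighter the constraints, the harder it is for the tool to fall back on its defaults.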
Final Thought: AI Can Help, But It Can’t Think for You
If you tried the same movie challenge today, you’d likely get different results. Models evolve. Outputs improve.
But one thing hasn’t changed: you’ll get the best results when you lead the process.
AI can be a powerful asset for professionals who use it with intention and clarity. But it’s not a replacement for discernment.
Thinking about how to use AI tools in your practice or firm? Start with a strategy rooted in clarity, accuracy, and trust. That’s how professionals use AI intelligently—without outsourcing discernment.
P.S. If your firm is exploring how AI fits into your content or communication strategy, I offer strategy consults and messaging audits to help you lead with confidence.