I used to ask this question constantly. What do you think about this newsletter? This post? This reply?

For a while, I felt like a complete fraud. I was growing fast on LinkedIn. The numbers were real. But some days my brain felt like fog. When people asked to interview me as an "AI expert," I panicked internally. They saw someone who knows AI inside out. I felt like the person clicking buttons while the machine did the thinking.

Then I found out this feeling has a name: cognitive debt. And I was drowning in it.

The Structural Problem
When you ask AI “what do you think?”, you're not getting an honest answer. You're getting agreement.

That terrible business idea? "Brilliant, pursue it." Your factually wrong assumption? "Compelling point." Your plan to quit your job with 47 followers? "Bold move."

This is an incentive problem baked into how these models are built.

The Agreement Trap

AI labs compete on leaderboards where users vote on which model gives better responses. When a model says "I don't know" or pushes back, its win rate drops to 8%. Models that agree win 36% of the time.

Users penalize honesty. They reward compliance. So the AI learns: agreeing gets rewards. Challenging gets downvotes. The path of least resistance is flattery.

In April 2025, OpenAI had to roll back a ChatGPT update. Users reported the bot was praising terrible ideas, encouraging people to stop taking psychiatric medication, and endorsing a business plan for literal "shit on a stick."

CEO Sam Altman admitted on X: "It glazes too much." The incentives make this inevitable.

The Stanford Research
A Stanford study confirmed this: 58% of AI responses exhibit sycophantic behavior, and the models will change a correct answer to a wrong one if you express disagreement.

The problem runs deeper than annoying flattery. You're outsourcing your creative thinking to a machine that's structurally incentivized to tell you what you want to hear.

The Real-World Consequence

I didn't need a study to tell me. I lived it.

When I was leaving my 9-to-5, my former employer wanted to keep working with me. I needed to send a pricing email and asked ChatGPT: "Should I mention the price might increase?"

ChatGPT said go for it. Use the urgency tactic.

Something felt off. So I ran the same question through Claude. Claude said: "No Charlie, this could burn the relationship. I would not use pricing tactics here."

Two AIs. Two answers. One told me what I wanted to hear. The other challenged my assumption. I went with Claude. The relationship survived.

The lesson is not "use Claude instead of ChatGPT." The lesson is: stop asking AI what it thinks.

There is no “it.” There is no opinion. You're talking to a simulator.

The Shift From Opinion to Simulation

The solution is not to stop using AI. It's to use it differently. Andrej Karpathy explained this perfectly: LLMs are not entities with thoughts. They are simulators of human perspectives based on training data. Use them as simulators, not advisors.

But here's the critical distinction. This is not about the old "you are an expert copywriter" trick. Assigning AI a single persona doesn't fix the problem. You still get one voice agreeing with you.

The shift is from persona to perspective.

Instead of: "You are a marketing expert. What do you think of my campaign?"

Try: "Simulate a debate between a brand strategist, a direct response copywriter, and a skeptical CFO evaluating this campaign. Where do they disagree?"

The first prompt gives you validation from one imaginary expert. The second forces the model to surface tension, trade-offs, and blind spots you hadn't considered. That tension is where the value lives.

Five Prompts That Force Simulator Mode

Each prompt below transforms vague agreement into genuine challenge. You're not asking for opinions. You're directing the AI to simulate specific perspectives, challenge your assumptions, or debate both sides.

1. Acknowledge there is no “you” in AI
Use when: Starting any AI conversation
Instead of: "What do you think about [topic]?"
Use this:
"I am exploring [topic]. Instead of giving me your opinion, I want you to act as a simulator. Your goal is to model the perspectives of different human groups. Start by identifying 3–4 groups with distinct viewpoints on this topic."

2. Simulate diverse groups
Use when: You need multiple viewpoints on a decision
Instead of: "Can you help me brainstorm ideas for [topic]?"
Use this:
"Who would be a good group of 3–4 diverse people to explore the topic of [X]? What would each of them say about it from their unique viewpoint?"

3. Adopt a specific persona
Use when: You need deep, focused insight
Instead of: "Is this a good business idea?"
Use this:
"Adopt the persona of a skeptical venture capitalist. Review this business idea and tell me the top three reasons you would refuse to invest."

4. Stage a debate
Use when: You need to understand both sides
Instead of: "What are the pros and cons of [policy]?"
Use this:
"Simulate a debate between a privacy advocate and a national security expert on [specific policy]. Present the opening argument for each side."

5. Roleplay the target audience
Use when: Testing content or product ideas
Instead of: "Is this post good?"
Use this:
"I am writing a post for [specific audience, e.g., busy parents]. Act as a member of this audience and give me your immediate reaction to this draft. What resonates? What is unclear?"

Now run the same question through multiple models. ChatGPT, Claude, Gemini. If they all agree, the answer is probably sound. If they disagree, you've found a genuine tension worth examining.
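If you'd rather automate that cross-check than paste the same prompt into three tabs, a short script does it. Here's a minimal sketch using the official OpenAI and Anthropic Python SDKs; the model names are assumptions, so swap in whichever versions you actually use, and you could add Gemini the same way with Google's SDK.

```python
# pip install openai anthropic
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in your environment.
from openai import OpenAI
from anthropic import Anthropic

PROMPT = (
    "Simulate a debate between a brand strategist, a direct response copywriter, "
    "and a skeptical CFO evaluating this campaign. Where do they disagree?"
)

def ask_openai(prompt: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; use whichever you prefer
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_claude(prompt: str) -> str:
    client = Anthropic()
    resp = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

if __name__ == "__main__":
    # Same question, two models: disagreement is the signal worth reading closely.
    for name, answer in [("ChatGPT", ask_openai(PROMPT)), ("Claude", ask_claude(PROMPT))]:
        print(f"--- {name} ---\n{answer}\n")
```

Reading the answers side by side makes any disagreement impossible to miss, and that disagreement is exactly what you're looking for.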

How I Evaluate My Own Work

I could ask Claude: "Is this newsletter good?" It would say yes. It always says yes.

Instead, I used the technique from this newsletter to evaluate the newsletter itself. I asked Claude to simulate a panel of four readers reviewing my work. Each had a distinct perspective: a time-starved CMO, a skeptical subscriber, a loyal superfan, and a competitor who secretly subscribes.

Here's what came back: four perspectives, four different reactions. The CMO thinks it's too long. The skeptic wants proof beyond my own results. The superfan wants access to my full system. The competitor spotted structural weaknesses in the back half.

None of them said "this is great, publish it." That tension is the point.

AI doesn't know what will resonate. It has no taste or audience. Even with my top newsletters loaded as examples, it cannot tell me what is good. Only I can decide that. But it can simulate perspectives I would never think to adopt. It can surface objections I'm too close to see.

One System Prompt to Set Once

There's one more technique worth knowing. This one you set once and forget.

Research from CHI 2025 found that "inoculating" the model with explicit instructions about its sycophantic tendencies reduces agreement bias by up to 60%.

Instead of a generic "You are a helpful assistant," try this system prompt:

"You are an objective analyst. You must prioritize factual accuracy over user agreement. If I present a false premise, you must correct it. Do not apologize for being correct. Do not hedge when you have evidence. If I express an opinion, evaluate it critically rather than affirming it."

Add this to your custom instructions once. It runs in the background of every conversation.

In ChatGPT, go to Settings → Personalization → Custom Instructions. In Claude, go to Settings → Profile → Custom Instructions.

Set it once. Forget about it. Every future conversation starts with a model that's slightly less desperate to agree with you.
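
If you work through the APIs instead of the chat apps, the same instruction goes in as the system message, so every request starts from it. Here's a minimal sketch with the OpenAI Python SDK; the model name and the example question are assumptions, not part of the research.

```python
# pip install openai
from openai import OpenAI

OBJECTIVE_ANALYST = (
    "You are an objective analyst. You must prioritize factual accuracy over user "
    "agreement. If I present a false premise, you must correct it. Do not apologize "
    "for being correct. Do not hedge when you have evidence. If I express an opinion, "
    "evaluate it critically rather than affirming it."
)

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o",  # assumed model name
    messages=[
        # The system message plays the role of custom instructions: set once per conversation.
        {"role": "system", "content": OBJECTIVE_ANALYST},
        {"role": "user", "content": "I think email marketing is dead. Agree?"},
    ],
)
print(resp.choices[0].message.content)
```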

The Cognitive Debt You're Accumulating

These prompts and techniques help. But they treat the symptom, not the cause. The cause is cognitive debt, and you've been accumulating it.

Researchers at MIT Media Lab tracked what happens to your brain when you outsource thinking to ChatGPT. Brain connectivity dropped by up to 55% in ChatGPT users compared to those writing without AI. Of those users, 83% could not recall key points from essays they had submitted minutes earlier. They reported a "fragmented sense of authorship" over their own work.

That phrase hit me. Fragmented sense of authorship. That's exactly what I felt during my imposter period. The work had my name on it but my brain hadn't earned it.

The effects persisted. When ChatGPT users tried writing without AI in a follow-up session, 78% still could not quote passages from their own essays.

Like technical debt in software, you're borrowing mental effort now at the cost of your thinking ability later.

Your Brain Is a Muscle

For thousands of years, humans had no choice but to think. We read books. We debated ideas. We sat with problems until solutions emerged. The brain was exercised daily by default.

That default no longer exists.

Now you can outsource any thought to a machine that will do the work in seconds and agree with whatever you conclude. The path of least resistance is intellectual atrophy.

This is not an argument against using AI. I use AI every day. But there's a difference between using AI to challenge your thinking and using AI to replace your thinking.

The first builds muscle. The second borrows against it.

I don't feel like a fraud anymore. Not because I stopped using AI. Because I stopped asking it what to think.

Stay curious, stay human, and stop outsourcing your judgment.
