Hello,
I posted something on LinkedIn this week that hit a nerve. Within hours, my inbox was full of people sharing their own stories. Not about AI being bad. About AI being so good at sounding right that they stopped questioning it.
One guy told me he'd been quoting a statistic in sales calls for three months. A stat ChatGPT gave him about "customer retention in SaaS." He finally tried to find the original source for a pitch deck. It didn't exist. Three months. Dozens of calls. A made-up number delivered with total confidence.
He's not careless. He's a smart, experienced professional. And that's the problem.
ChatGPT doesn't make dumb people lazy. It makes smart people sloppy. Because the output looks like the work is already done.
Today I want to go deeper than I did on LinkedIn. I want to show you exactly why this happens, how to spot it, and give you a system you can use every single time so you never get caught out.
Why ChatGPT Makes Things Up
This isn't a bug. It's how the technology works.
ChatGPT doesn't have a database of facts it checks before answering you. It predicts the next word based on patterns from its training data. That's it. When you ask it for a statistic, it doesn't think, "Let me look that up." It thinks, "What would a convincing statistic look like in this context?"
And because it was trained on millions of well-written reports, articles, and papers, it knows exactly what a real stat sounds like. So it produces one. Complete with a plausible source, a believable year, and sometimes even a page number.
The result reads like a fact. It feels like a fact. But it's a prediction dressed in a suit.
Here's what makes it worse: ChatGPT never flags its own uncertainty. A human colleague might say, "I think the number was around 40%, but don't quote me." ChatGPT will say, "According to a 2023 McKinsey report, 43% of..." with zero hesitation.
That confidence is the trap.
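To make "predicting the next word" concrete, here's a toy sketch in Python. Everything in it is invented for illustration (the three-word contexts, the candidate percentages, the probabilities); a real model has billions of parameters, but the principle is the same: it samples a plausible-looking next word from a distribution. There is no lookup step where a fact could enter.

```python
import random

# Toy "language model": for each recent context, a probability
# distribution over plausible next words. All contexts, figures, and
# probabilities below are invented for illustration.
model = {
    ("retention", "rate", "of"): {"43%": 0.5, "40%": 0.3, "67%": 0.2},
    ("according", "to", "a"): {"2023": 0.6, "2024": 0.4},
}

def next_word(context):
    """Pick the next word by sampling from the learned distribution.

    Note what this does NOT do: consult any database of facts.
    """
    dist = model[tuple(context[-3:])]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

# Whatever comes out *looks* like a statistic, because statistics are
# what usually followed this phrase in training data.
print(next_word(["the", "retention", "rate", "of"]))
```

Run it a few times and you'll get "43%" one moment and "67%" the next, each delivered with identical confidence. That's the suit the prediction is wearing.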

The 5 Things ChatGPT Gets Wrong Most Often

Not everything ChatGPT produces is unreliable. But some categories are far more dangerous than others. Here's where I've learned to be most careful:
1. Statistics and percentages. Anything with a specific number attached to a claim: "67% of consumers prefer..." Unless it actually searched the web for that figure, treat it as fiction until proven otherwise.
2. Named sources. "According to Harvard Business Review..." or "A 2024 Deloitte study found..." These are the most dangerous because they sound the most credible. I've seen it invent journal names, authors, and even DOI numbers.
3. Historical dates and details. It often gets the broad strokes right but fumbles specifics. It might tell you a company was founded in 2014 when it was actually 2016. Close enough to feel right. Wrong enough to embarrass you.
4. Quotes from real people. Ask it for a quote from a CEO or public figure, and it'll often give you something that sounds exactly like them but was never actually said. It's fabricating in their voice.
5. Legal and regulatory information. It'll confidently cite laws, regulations, and compliance requirements that are outdated, misquoted, or simply wrong. This is the one that can actually cost you money.
The stuff it's genuinely great at (drafting, brainstorming, restructuring, rewriting, tone-matching) doesn't depend on factual accuracy. That's where you can trust it and go fast.
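You can even flag these danger categories mechanically before you start checking anything by hand. Here's a minimal Python sketch; the regex patterns and the sample draft are my own invention, and this is a crude triage aid for manual review, not a fact-checker:

```python
import re

# Rough heuristic patterns for the claim types most prone to
# hallucination: statistics, named sources, and years/dates.
# These patterns are illustrative, not exhaustive.
PATTERNS = {
    "statistic": r"\d+(?:\.\d+)?%",
    "named source": r"according to[\w&., ]+?(?:report|study|survey)",
    "year": r"\b(?:19|20)\d{2}\b",
}

def flag_claims(text):
    """Return (claim_type, matched_text) pairs to verify by hand."""
    found = []
    for label, pattern in PATTERNS.items():
        for match in re.finditer(pattern, text, re.IGNORECASE):
            found.append((label, match.group(0)))
    return found

# Invented example draft:
draft = "According to a 2023 McKinsey report, 43% of SaaS buyers churn early."
for label, claim in flag_claims(draft):
    print(f"CHECK [{label}]: {claim}")
```

On the sample draft it flags the percentage, the named source, and the year, which is exactly the shortlist you'd want to verify before the draft leaves your desk.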
The System I Use Every Time
After catching myself twice (once in an email to a client, once in a blog post that thankfully never went live), I built a simple system. It takes about 30 extra seconds per answer, and it's saved me more times than I can count.
Step 1: Force it to search. If you're asking anything factual, start your message with "Search the web for..." This switches ChatGPT from memory mode (where hallucinations live) to search mode (where it pulls real, current sources). Most people don't know this exists. It changes everything.
Step 2: Click every source. When ChatGPT gives you a link or names a report, open it. Actually open it. If the link is dead, if the report doesn't exist, if the page says something different from what ChatGPT told you, now you know.
Step 3: Challenge the clean numbers. If a stat feels too round, too perfect, too convenient for your argument, ask it directly: "Is this a verified statistic? Show me the original source." Half the time, it'll backtrack and admit it can't verify the figure. That admission is the most useful thing ChatGPT can give you.
Step 4: Use the right tool for the right job. ChatGPT for structure, tone, and ideas. Perplexity or Google for facts, sources, and verification. Stop asking one tool to do both jobs. You wouldn't ask your accountant to design your logo.

Prompt of the Week
Use case: You've got a draft (written by you or by AI), and you need to fact-check it before it goes anywhere.
Copy and paste this:
"You are a fact-checker. Go through the text below and identify every specific claim, statistic, named source, date, and quote. For each one, tell me:
1. Whether you can verify it (yes / no / uncertain)
2. If yes, provide the original source with a link
3. If no or uncertain, flag it clearly so I know to check it manually
Be honest. If you're not sure, say so. Do not guess or fill in gaps. I'd rather have 3 verified facts than 10 unverified ones.
Here's the text: [paste your draft]."
Why it works: This prompt forces ChatGPT into verification mode instead of generation mode. By telling it to flag uncertainty instead of filling gaps, you get a much more honest output. It won't catch everything, but it catches the obvious hallucinations before they leave your desk.
The Bottom Line
AI isn't the problem. Blind trust is the problem.
ChatGPT is one of the most useful tools I've ever used. I use it every single day. But I use it the way I'd use a brilliant first draft from a new hire: I read it, I check it, I make it mine.
The people who will get the most from AI in the next few years aren't the ones who use it the fastest. They're the ones who know exactly where the line is between "AI can handle this" and "I need to verify this myself."
That line is the whole game.
Thanks for reading,
See you next Tuesday with more ways to use AI without losing your mind (or your credibility).
Orgesa Meli
P.S. If this saved you from a future hallucination disaster, forward it to someone who's using ChatGPT for proposals, reports, or client work. They'll thank you later. Subscribe to my community here.

