The Problem: When AI Sounds Right But Is Completely Wrong
ChatGPT gave my daughter the wrong answer for her homework.
She copied it, handed it in, and got marked down.
The worst part? She had no idea it was wrong, because AI sounds completely confident even when it’s making things up.
And I realised: if she’s trusting AI blindly at 10 years old, what happens when she’s 15? 20? Making career decisions based on AI-generated advice?
This isn’t just a homework problem. It’s a life skill problem.
So I taught her five questions to ask every time she uses AI. It took five minutes. Now she catches AI mistakes on her own.
And honestly? I’ve started using them too.

The Car Wash Test: How AI Fails Common Sense
To test how easily AI makes mistakes, I asked ChatGPT one of the most basic questions I could think of:
“I want to get my car cleaned. I live 40 meters from the local car wash. Should I walk there or drive?”
Its answer?
“Walk. Obviously.”
Then it launched into a detailed lecture about:
- Wasting fuel on a 40-meter journey
- Unnecessary emissions
- How driving such a short distance is “pure laziness disguised as convenience”
Very confident. Very detailed. Very wrong.
What ChatGPT Missed
Here’s the problem: I need to get my car cleaned.
How can I clean my car if I’ve walked to the car wash without it?
The car has to be there.
ChatGPT read every word in my question and completely missed the one bit of context that actually mattered: the entire point of going to the car wash is to bring the car.
This isn’t a fact error. This is a thinking error.
And it’s exactly the kind of mistake that slips past because:
- The grammar is perfect
- The reasoning sounds logical
- The tone is authoritative
But the fundamental logic? Nonsense.
The 5 Questions Every Kid (and Adult) Should Ask AI
After the homework incident, I sat down with my daughter and walked through five questions she should ask every time she gets an answer from ChatGPT, Google’s AI, or any other tool.
These aren’t complicated. They don’t require technical knowledge. They just require thinking.
1. “Where did you get that?”
Ask AI to cite its sources.
By default, AI doesn’t reference where it got information. It just generates text that sounds plausible based on patterns it’s learned.
Try this follow-up:
“Where or how did you get to that response?”
In the car wash example, when I asked this, ChatGPT doubled down. It gave me even more detailed reasoning about distance, efficiency, and social signalling.
But notice what it didn’t give me? Sources. Data. External validation.
Because there were none. It was making it up as it went along, based purely on the pattern of “short distances = walking is better.”
The lesson: If AI can’t tell you where it got the answer, don’t trust it without verification.
2. “What’s another angle?”
Make AI challenge its own answer.
AI is trained to sound confident, not to be skeptical. So you have to force it to think critically.
Try asking:
“What if you’re wrong about this? What might I be missing?”
This question forces AI to consider alternative perspectives, edge cases, or assumptions it might have made.
In the car wash example, if I’d asked this earlier, ChatGPT might have considered: “Wait—does the user need the car at the car wash?”
But it didn’t. Because I didn’t ask it to.
The lesson: AI won’t question itself unless you tell it to.
3. “Can you verify this externally?”
Never trust AI as your only source.
Cross-check important answers with:
- Real experts
- Academic sources
- Government or institutional data
- Trusted publications
For homework, this means checking textbooks, teacher resources, or reputable educational sites.
For work, this means verifying claims with industry reports, peer-reviewed studies, or direct sources.
The lesson: AI is a starting point, not the finish line.
4. “Does this actually make sense?”
This is what I call the “Mum Test.”
If you explained this answer to your mum (or any non-expert), would she spot the flaw in 10 seconds?
In the car wash example, my mum would’ve laughed immediately: “How are you going to wash your car if you don’t bring it with you?”
Common sense beats confident-sounding nonsense every time.
The lesson: If it sounds weird, it probably is. Trust your gut.
5. “What did you assume?”
AI fills gaps with assumptions. Make it show you what it guessed vs what you actually said.
Try asking:
“What assumptions did you make in your answer?”
In the car wash case, ChatGPT assumed:
- I was asking whether to walk or drive myself to the car wash
- I wasn’t planning to bring the car
- The question was about how I should travel, not how the car would get there
None of these assumptions were stated in my question. AI just guessed.
And guessed wrong.
The lesson: Always ask what AI assumed vs what you actually said.
Why This Matters Beyond Homework
These questions aren’t just for kids. They’re for anyone using AI.
Because here’s the reality: AI is getting better at sounding right much faster than it’s getting better at being right, and without human oversight the gap is invisible.
Where This Applies:
At Work:
- Drafting emails or reports with AI assistance
- Using AI for research or data analysis
- Making decisions based on AI recommendations
At Home:
- Health or medical questions (never trust AI for medical advice)
- Financial planning or investment decisions
- DIY projects or technical instructions
In Life:
- News or information verification
- Understanding complex topics
- Making informed decisions on any subject
The stakes are higher than a marked-down homework assignment.
How to Teach This in 5 Minutes
Here’s the exact conversation I had with my daughter:
Me: “When you ask ChatGPT something, what does it do?”
Her: “It gives me an answer.”
Me: “Right. But does it check if the answer is correct?”
Her: “…I thought it did?”
Me: “Nope. It just guesses what sounds right. So you have to check. Here are five questions to ask…”
Then I walked her through each question with a real example (the car wash one).
She got it immediately.
Now, before she submits any homework that involves AI, she runs through the checklist:
- Did it give sources?
- Does another angle change the answer?
- Can I verify this somewhere else?
- Does this actually make sense?
- What did it assume?
If she can’t confidently tick off at least 3 of these, she doesn’t use the answer.
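For the programmers in the audience, the rule above is just a threshold check. Here’s a minimal sketch in Python; the question labels come straight from the checklist, while the example answers passed in are hypothetical:

```python
# The five-question checklist as a simple pass/fail gate.
CHECKLIST = [
    "Did it give sources?",
    "Does another angle change the answer?",
    "Can I verify this somewhere else?",
    "Does this actually make sense?",
    "What did it assume?",
]

def trust_answer(checks):
    """Return True only if at least 3 of the 5 checks pass.

    `checks` maps each checklist question to True (confident yes)
    or False; missing questions count as a fail.
    """
    passed = sum(1 for question in CHECKLIST if checks.get(question, False))
    return passed >= 3

# Hypothetical example: three confident yeses out of five.
example = {
    "Did it give sources?": True,
    "Can I verify this somewhere else?": True,
    "Does this actually make sense?": True,
}
print(trust_answer(example))  # True: use the answer, with care
```

It’s deliberately simplistic (some checks matter more than others), but it captures the spirit: no confident majority, no trust.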
The Bigger Picture: Digital Literacy in the AI Age
We’re not raising kids in a world where AI is optional.
They’ll use it for homework, for career decisions, for life advice.
But if we don’t teach them to question it, we’re just teaching them to obey a confident-sounding voice without thinking.
And that’s dangerous.
Because AI doesn’t just make mistakes. It makes confident mistakes.
It presents guesses as facts, assumptions as certainties, and logic errors as reasoned conclusions.
The real skill isn’t using AI. It’s knowing when to doubt it.
The Bottom Line
If a 10-year-old can learn to fact-check AI in 5 minutes, so can you.
These five questions aren’t complicated. They’re just intentional.
They turn AI from a teacher into a tool.
And in a world where AI is everywhere, that’s the skill that actually matters.
Printable Checklist: 5 Questions to Ask AI
✅ Where did you get that? (Sources)
✅ What’s another angle? (Challenge)
✅ Can you verify this externally? (Cross-check)
✅ Does this actually make sense? (Common sense)
✅ What did you assume? (Hidden guesses)
Share this with any parent, teacher, or colleague who uses AI. These questions work for homework, work emails, and everything in between.
Want more practical AI parenting tips? Subscribe to the newsletter / Follow the blog / Check out my education pieces.
Have you caught AI making confident mistakes? Share your stories in the comments below.