Why children trust AI more than adults do
Every confidently wrong AI answer is a small free lesson, if you slow down long enough to take it.
A seven-year-old asks the iPad which dinosaurs lived in the Jurassic. The AI gives a confident list, some names invented, and the child writes it into a school project.
Children have always had to figure out who to believe; AI just happens to sound certain about everything. The next two pieces in this series build on this one: what to do when the AI invents things outright, and why the children who push back on AI answers are building something the quiet ones aren't.
What is critical thinking?
Critical thinking, at its most useful, is the small voice that asks "are you sure?" before accepting what someone says. It's the habit of pausing long enough to wonder how they know, or what would prove them wrong. The skill isn't blanket doubt, since a child who questions everything ends up just as stuck as one who believes everything. It's knowing when to slow down.
What the research shows
Melissa Koenig, now at the University of Minnesota, and Paul Harris at Harvard have been working on this for over twenty years. Younger children, the three- and four-year-olds, lean hard on confidence: whoever sounds sure tends to win. By the time they're eight or ten, children begin to check whether sure-sounding speakers have actually been right before, though the shift happens gradually rather than all at once. Harris's book Trusting What You're Told (Harvard University Press) is the readable account of that body of work.
More recently, Harvard's Child-Centered AI Lab gave over 50 children a chatbot called Curio for two weeks and watched what they did with it. Children did notice when answers felt wrong, asked follow-up questions, and pushed back when something seemed off. But confident wrong answers still bought a stretch of trust before they caught on.
Why old web-literacy lessons don't quite fit
Schools have spent fifteen years teaching children to evaluate websites by asking whether the page looks professional, who hosts it, and when it was last updated. None of that helps when the answer is a sentence the AI generated for you in real time.
UNESCO's AI Competency Framework for Students, published in 2024, puts critical judgement of AI outputs at the top of its list, and the UK's Department for Education guidance leans the same way: teach children to use AI well, rather than restrict access altogether.
What works at the kitchen table
Ask "how do you know?" about your child's claims as well as the AI's, because the reflex is contagious and children who get asked it eventually start asking it of others without being prompted.
Test the AI on something your child already knows well, like their head teacher's name, last weekend's match score, or the address of their school. When the AI gets it wrong, don't move on too quickly: the lesson lands harder when the example is something the child could have answered themselves. None of this looks like a lesson, which echoes what we said before about broader STEM and robotics. Often learning doesn't look like learning, especially outside the school curriculum; it's easy to underestimate what's happening when a child is "just" coding or building robots.
For anything that actually matters, like a school project or a fact they're going to share with a friend, the rule is two sources. It's what librarians have been teaching for decades, and AI hasn't changed the rule so much as made it easier to forget.
Trust calibration isn't something you teach in a single sitting; it builds slowly. The twelve-year-olds who are good at it aren't the ones whose parents had one careful conversation about AI. They're the ones who watched a parent ask "how do you know?" for years, about everything. It fits the broader discussion: the things that shape a child's brain are mostly the small stuff that happens at home, not the lessons they sit through.