Your AI Interface Is Training Users to Stop Thinking
Most AI interfaces are designed to deliver output, not to help users evaluate it. If your product presents AI-generated content without prompting users to engage critically, the interface itself is the problem.
There's a particular kind of AI output that should concern product teams more than any other, and it isn't the kind that obviously fails. It's the output that looks right, reads confidently, and gets approved without a second thought, because nothing about the interface suggested a second thought was needed.
Recent benchmarks suggest that AI now matches professional-quality output roughly 70% of the time on well-specified tasks. That figure is genuinely impressive, and it's the one most AI coverage focuses on. The number that receives far less attention is the remaining 30%, which doesn't distribute itself neatly across obvious failures; it hides in output that is plausible, well-formatted, and quietly wrong in ways that only someone with real domain expertise would catch.
The ability to look at AI output and say "this is wrong, and here's why" is what some researchers have started calling the "rejection skill," and it's legitimately valuable for anyone working alongside AI systems. It breaks down into three parts: recognition (catching that something is off), articulation (explaining the constraint that was violated), and encoding (documenting it somewhere that prevents the same mistake from recurring). For individuals, this is a learnable discipline, and an important one.
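The encoding step is the one teams most often skip, so it's worth making concrete. Here is a minimal sketch in TypeScript of what a structured rejection record might look like; every name and field below is hypothetical, an illustration of the idea rather than any particular tool's schema.

```typescript
// Hypothetical sketch: a structured record of a rejected AI output,
// capturing all three parts of the rejection skill.
interface RejectionRecord {
  outputId: string;           // which AI output was rejected (recognition)
  violatedConstraint: string; // the domain rule it broke (articulation)
  explanation: string;        // why the output violated that constraint
  preventionRule?: string;    // a checklist item, prompt change, or test (encoding)
  rejectedBy: string;
  rejectedAt: Date;
}

// Example: documenting a plausible-but-wrong output so the same
// mistake doesn't recur silently.
const record: RejectionRecord = {
  outputId: "draft-1042",
  violatedConstraint: "Tax rates must match the customer's jurisdiction",
  explanation: "The summary applied the US federal rate to an EU invoice.",
  preventionRule: "Add a jurisdiction check to the review checklist",
  rejectedBy: "reviewer@example.com",
  rejectedAt: new Date(),
};
```

The optional preventionRule field is the point: a rejection only compounds in value once it changes a checklist, a prompt, or a test.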
But if you're building a product where users interact with AI output, you've turned their ability to reject that output into a design decision, not just a personal virtue. The interface either helps users engage critically with what the AI produces, or it trains them not to. There isn't really a neutral option here, because the absence of design for critical engagement is itself a choice, and it defaults to passivity.
The conversational chat pattern has become the default interaction model for nearly every AI-powered feature shipped in the last two years, and arguably the laziest one. It is particularly poorly suited for this. A chat interface that streams confident-looking text at conversational speed gives users no signal about what to scrutinize; there's no visual hierarchy of reliability, no moment of friction that says "this is where your judgment actually matters." The interaction pattern rewards scrolling and accepting, because that's the only behavior the interface was designed to support.
The result is that the interface itself is training users to stop thinking, even while the underlying model is producing output that will, at a statistically meaningful rate, require exactly that.
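By contrast, an interface can carry a reliability signal in the output itself. The sketch below, with assumed names and placeholder thresholds, annotates each span of AI output with a confidence score and maps it to a UI treatment, so low-confidence content gets visible friction instead of the same confident styling as everything else.

```typescript
// Hypothetical sketch: per-span reliability drives the visual hierarchy,
// instead of streaming everything with identical confident styling.
type Treatment = "plain" | "highlight" | "require-review";

interface OutputSpan {
  text: string;
  confidence: number; // 0..1, as reported or estimated for this span
}

// Map confidence to a UI treatment: low-confidence spans get friction.
function treatmentFor(span: OutputSpan): Treatment {
  if (span.confidence < 0.5) return "require-review"; // block accept until inspected
  if (span.confidence < 0.8) return "highlight";      // visually flag for scrutiny
  return "plain";
}

const spans: OutputSpan[] = [
  { text: "Revenue grew 12% year over year.", confidence: 0.92 },
  { text: "The contract auto-renews on March 1.", confidence: 0.41 },
];

for (const span of spans) {
  console.log(treatmentFor(span), "→", span.text);
}
```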
You cannot design for critical engagement after the interface is built, any more than you can design for accessibility as an afterthought and expect it to work well. The questions of how users will evaluate AI output, when the interface should prompt them to exercise judgment versus let them defer to the system, and what confidence thresholds should trigger human review all belong in the requirements phase, specified before the first screen is designed.
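Specified at the requirements stage, the threshold question can be captured in something as small as the following sketch. The numbers and names are placeholders to be set per domain, not recommendations.

```typescript
// Hypothetical sketch: a requirements-level review policy, decided
// before the first screen is designed.
interface ReviewPolicy {
  autoAcceptAbove: number;  // confidence above which output ships untouched
  humanReviewBelow: number; // confidence below which a reviewer must sign off
}

type Decision = "auto-accept" | "flag-for-scrutiny" | "human-review";

function decide(confidence: number, policy: ReviewPolicy): Decision {
  if (confidence >= policy.autoAcceptAbove) return "auto-accept";
  if (confidence < policy.humanReviewBelow) return "human-review";
  return "flag-for-scrutiny"; // the middle band: show it, but prompt judgment
}

// Example policy for a high-stakes domain: a wide middle band keeps
// the user's judgment in the loop.
const policy: ReviewPolicy = { autoAcceptAbove: 0.95, humanReviewBelow: 0.6 };
console.log(decide(0.7, policy)); // "flag-for-scrutiny"
```

The deliberate middle band is the design decision that matters most: it's the range where the interface prompts judgment rather than silently accepting or silently escalating.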
The organizations that get this right will not just ship better AI products. They'll build products that make their users more capable over time, instead of gradually more passive. And the ones that don't will discover, probably through an incident they'd rather not have, that the 30% they weren't designing for was the 30% that mattered most.
If you're planning an AI-powered product and you haven't yet specified how the interface supports human oversight, that's the conversation worth having now, not after the first release.
Matt leads Planorama Design, a product acceleration firm for enterprise software teams. With nearly 30 years of engineering experience, he helps CTOs and VPs of Engineering structure requirements, validate AI feasibility, and ship better software faster.