How I Use AI to Break My Own Thinking Patterns
One of the most useful ways I've ended up using AI has nothing to do with automation, search, or data analysis.
It's using it to challenge my own thinking.
For a long time, the only reliable ways out of the echo chamber in my head were other people or more content. Both helped, but both came with friction. Colleagues are busy and can't always stop to help you untangle the same half-formed idea for the third time. Content takes time to consume and rarely speaks to the specific question you're wrestling with in the moment.
AI changed that dynamic for me.
With enough context about my day-to-day work, I can use it as a thinking partner. I can throw out rough ideas, play out scenarios, and pressure-test my own logic quickly. I don't have to clean up my thoughts first. I can be blunt, uncertain, or incomplete. The system doesn't get annoyed or tired. It just responds, as long as I'm clear about the role I want it to play.
That was the unlock.
Instead of waiting for feedback or hunting for the right article, I get immediate pushback that helps me move forward. Sometimes it agrees with me. More often, it reflects my assumptions back in a way that makes gaps obvious. Over time, it starts to surface patterns I tend to overlook and questions I don't naturally ask myself.
What surprised me is how different this feels from using AI as a search engine or a basic assistant. When I use it that way, the interaction stays shallow. When I treat it as a thinking surface, the value compounds.
I've found myself using it to:
- Test how clearly I understand a topic
- Stress-test an argument before sharing it
- Surface blind spots I keep circling around
- Explore objections I'm avoiding
In those moments, it isn't doing the thinking for me. It's making my thinking visible.
That distinction matters, because there is an obvious failure mode here.
If I let the system jump too quickly to answers, it's easy to become a lazier thinker. The work shifts from forming good questions and wrestling with uncertainty to accepting whatever comes back first. Over time, that can dull judgment rather than sharpen it.
The dynamic isn't new. Calculators changed how people relate to math. They didn't eliminate mathematical ability, but they did change what people bothered to internalize. Unless you deliberately practiced mental math, fluency faded. Convenience quietly reshaped skill.
I think the same risk exists more broadly with thinking. AI lowers the cost of answers. If I'm not careful about where I keep friction, it can reduce the effort I put into framing problems, testing assumptions, and sitting with ambiguity.
This is where a comment from Marc Andreessen on Lenny's podcast (which I highly recommend) clicked for me. He said:
"It's been known for centuries that the ideal way to teach a kid at the unit of n=1, by far the ideal way to do it, is with one-on-one tutoring. There's actually statistical evidence for this. There's one method of education that routinely raises student outcomes by two standard deviations — it can take a kid from the 50th percentile to the 99th percentile — and that's one-on-one tutoring."
It works because feedback is immediate, adaptive, and specific to how a person thinks. Historically, that model didn't scale.
Used the right way, AI starts to look like a version of that model for adults and professionals. Not because it hands you answers, but because it reacts to your reasoning in real time. It challenges you where you're weak, follows you where you're strong, and forces you to articulate things you might otherwise gloss over.
The learning comes from the struggle, not the response.
That framing helped me make sense of the tension. AI can make you cognitively lazy if you let it do the work for you. But it can also recreate the conditions under which people actually get smarter, if you stay engaged and keep ownership of the thinking.
Over time, this has changed how I relate to the tool. I don't reach for it to save time. I reach for it to slow myself down in the right places. To notice when I'm hand-waving. To confront conclusions I've grown too comfortable with.
Used this way, AI doesn't feel like a crutch. It feels more like a mirror.
Not something that replaces judgment, but something that reflects it back clearly enough to examine.
That shift quietly removed a lot of the guilt I used to associate with using AI at all. The value wasn't in outsourcing effort; it was in sharpening intent.
I don't think this is the right way to use AI for everyone or for every task. But for work that depends on reasoning, framing, and decision-making, it's been the most consistently useful pattern I've found so far.