Is AI making you dumb?

There's a quiet trade-off happening every time you reach for an AI assistant: convenience for cognition. The question isn't whether AI helps you get things done faster — it does. The question is what you're losing in the process.

The tools we use shape how we think. When we outsource thinking to machines, we risk outsourcing our ability to think altogether.

I've been using AI coding assistants, writing tools, and research aids for a while now. They're undeniably powerful. They save time, reduce friction, and help me explore ideas faster than I could on my own. But lately, I've started to notice something unsettling: the moments when I reach for AI first, before even attempting to think through the problem myself.

It's subtle at first. You start using AI to draft a function, generate a code snippet, or summarize an article. It feels productive — you're moving faster, shipping more. But then you realize you've forgotten how to do the thing without the assistant. The neural pathways that used to fire when you solved a problem from scratch start to atrophy. You become dependent not just on the tool, but on the pattern of using the tool.

This isn't unique to AI. Calculators didn't make us dumber, but they did change how we relate to arithmetic. GPS didn't ruin our sense of direction entirely, but it did make us less likely to develop one in the first place. Every tool that automates cognition trades short-term efficiency for long-term capability.

The difference with AI is scale and speed. These tools can automate not just repetitive tasks, but creative and analytical ones too. They can write, code, reason, and synthesize information at a pace that makes it tempting to let them do all the heavy lifting. And when that happens, we risk becoming curators instead of creators, validators instead of thinkers.

So what do we do? The answer isn't to reject AI entirely — that would be foolish. These tools are too powerful and too useful to ignore. But we need to be intentional about how we use them. Here's what I'm trying:

1. Start without AI. Before I reach for an assistant, I try to think through the problem myself first. Even if I know the AI could solve it faster, I force myself to struggle with it for a few minutes. That struggle is where the learning happens.

2. Use AI as a second opinion, not a first draft. Instead of asking AI to write something from scratch, I write a rough version first, then ask it to critique or improve what I've already started. This keeps me in the driver's seat.

3. Regularly test yourself without the tool. I try to build things, write things, and solve problems without AI on a regular basis. It's like going to the gym — if you only ever lift with assistance, you never build real strength.

4. Be skeptical of convenience. Just because AI can do something doesn't mean I should let it. Sometimes the harder path is the one that makes you better.

AI isn't making us dumb by default, but it could if we're not careful. The challenge is to use these tools as amplifiers of our abilities, not replacements for them. To stay curious, stay skeptical, and stay engaged with the hard work of thinking.

Because at the end of the day, the most valuable skill isn't knowing how to use AI — it's knowing when not to.