The lazy approach to AI debugging:

  1. Copy error message
  2. Paste into ChatGPT
  3. Copy suggestion
  4. Paste into code
  5. Wonder why it didn't work

This works sometimes. For simple errors with standard fixes, it's fine. But for real debugging—the kind where you're actually stuck—you need a better approach.

Why Copy-Paste Fails

Error messages are symptoms, not diagnoses.

When you only share the error, the AI is guessing. It doesn't know:

  • What you're trying to do
  • What you've already tried
  • Your codebase structure
  • The full context around the error

The AI gives a generic answer to a generic question. Sometimes it matches your problem. Often it doesn't.

The Context-First Approach

Better debugging starts with context:

What were you trying to do? Not "it broke"—what behavior were you implementing?

What did you expect to happen? The intended outcome.

What actually happened? The actual outcome. Include the full error, not just the message.

What have you tried? AI needs to know what's been ruled out.

What's the relevant code? Not the whole file—the relevant section.

This takes more effort than copy-paste. It also produces answers that actually solve your problem.

The Debugging Conversation

Treat debugging like a conversation, not a query:

Start broad: Describe the problem, share context, ask for potential causes.

Narrow down: Based on the AI's analysis, investigate specific hypotheses. Report back.

Iterate: Share what you found. Ask follow-up questions. Refine the diagnosis.

Confirm: When you think you've found the issue, verify with the AI before implementing a fix.

This mirrors how you'd debug with a colleague. AI is a collaborator, not an oracle.

Example: A Real Debugging Session

Bad approach:

"TypeError: Cannot read property 'map' of undefined"

Generic answer about null checks. Might not apply.
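
The generic patch usually looks something like this: a null check that silences the error without explaining why users was undefined in the first place. (A sketch; the component and prop names are illustrative.)

  import React from "react";

  // The generic suggestion: guard with optional chaining.
  // The crash goes away, but the underlying cause is untouched.
  function UserList({ users }) {
    return (
      <ul>
        {users?.map((user) => (
          <li key={user.id}>{user.name}</li>
        ))}
      </ul>
    );
  }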

Better approach:

"I'm fetching user data from an API and rendering a list. On initial load, I get 'TypeError: Cannot read property map of undefined'. The fetch is in useEffect, and I'm rendering users.map() directly. I've verified the API returns data correctly. The error happens before the fetch completes."

Now the AI knows: it's a timing issue. The component renders before the data arrives. The answer will be specific: initialize with an empty array, add a loading state, or render conditionally.
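
Here is roughly what that specific fix looks like. A minimal sketch, assuming a /api/users endpoint and user objects with id and name fields; the names are illustrative:

  import { useEffect, useState } from "react";

  function UserList() {
    // Initialize with an empty array so the first render
    // has something to map over.
    const [users, setUsers] = useState([]);
    const [loading, setLoading] = useState(true);

    useEffect(() => {
      // The fetch resolves after the first render;
      // until then, users stays [].
      fetch("/api/users")
        .then((res) => res.json())
        .then((data) => {
          setUsers(data);
          setLoading(false);
        });
    }, []);

    // Conditional render: don't touch users until the data is in.
    if (loading) return <p>Loading...</p>;

    return (
      <ul>
        {users.map((user) => (
          <li key={user.id}>{user.name}</li>
        ))}
      </ul>
    );
  }

The empty-array default fixes the crash; the loading state makes the intermediate render intentional rather than accidental.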

Using AI as a Rubber Duck

Sometimes the act of explaining helps more than the answer.

When you structure your problem for the AI:

  • You organize your thinking
  • You identify gaps in your understanding
  • You often spot the issue yourself

I've solved bugs mid-prompt more times than I can count. The AI was just the excuse to think clearly.

When to Escalate

AI debugging has limits:

Environment-specific issues. The AI can't see your machine, your Docker setup, your network configuration.

Obscure library bugs. AI knowledge has cutoffs. Very recent bugs won't be in training data.

Complex interactions. When multiple systems interact in unexpected ways, you need actual investigation.

Recognize when you've hit these limits. Switch to traditional debugging: logs, breakpoints, isolation testing.
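
For the fetch example earlier, isolation testing might look like this: run the suspect logic on its own, outside the component, and log exactly what comes back. (A sketch; the endpoint is the same illustrative one as before.)

  // Isolate the data-fetching logic and observe it directly.
  async function fetchUsers() {
    const res = await fetch("/api/users");
    console.log("status:", res.status);            // is the request succeeding?
    const data = await res.json();
    console.log("is array:", Array.isArray(data)); // is the shape what you assumed?
    console.log("payload:", data);
    return data;
  }

  fetchUsers().catch((err) => console.error("fetch failed:", err));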

The Debugging Prompt Template

When you're stuck, structure your prompt:

I'm working on: [feature/task]

Expected behavior: [what should happen]

Actual behavior: [what happens instead]

Error message: [full error if applicable]

Relevant code:
[paste code]

What I've tried:
- [attempt 1]
- [attempt 2]

My hypothesis: [what you think might be wrong]

This template forces you to think before asking. And it gives the AI everything it needs to actually help.

Tools for Better Context

Make sharing context easier:

Claude: Large context windows. Paste multiple files.

Editor extensions: Select code and send to AI with file context included.

Screenshots: For visual bugs, show what's happening.

The easier it is to share context, the more context you'll share, and the better the answers get.