Prompts That Actually Work in Claude Code (And the Mistake 90% of Devs Make)

Programming · 5 min read


A few months ago, I was about to uninstall Claude Code.

I’d been using it for weeks and the results were… mediocre. Code that didn’t fit the rest of the project. Changes that broke things that already worked. Generic responses I could’ve found on Stack Overflow.

Then I watched a colleague use it and immediately spotted the problem: I was prompting it like ChatGPT. Like a fancy search engine.

He was using it like a senior developer brought onto his team.

That distinction changes everything.

Why Claude Code Is Different (And Why Your Prompting Approach Matters More)

Look, there’s a fundamental difference between AI coding tools:

GitHub Copilot completes your lines. It lives in autocomplete.

Cursor helps you from the IDE with file context.

Claude Code reasons autonomously about entire codebases. It’s agentic. It doesn’t just guess the next line — it understands the architecture and makes decisions.

That means the model can do things other tools can’t. But it also means if you treat it like a glorified autocomplete, you’re leaving 80% of its potential on the table.

Google engineers have reported that Claude Code solved in one hour what had taken a year to build. When you hear something like that, the question isn’t whether the model is good. The question is: how were they using it?

The Most Common Mistake: Describing Implementation Instead of Outcome

Here’s the pattern that destroys most Claude Code sessions:

Bad prompt (an illustrative example):

```
Create a function called filterOrders in utils.js that loops over the orders
array with a for, checks each status with an if, and pushes the pending ones
into a new array.
```

Good prompt:

```
The orders dashboard should only show pending orders, newest first. Keep the
current response shape so the existing frontend keeps working.
```

See the difference?

The first prompt tells it how to do it. The second tells it what should happen when it’s done.

When you describe the outcome, Claude Code can choose the best implementation based on your project’s real context. When you describe the implementation, you’re limiting it to your own solution — which might not be optimal.

4 Strategies That Change Your Results

1. Always Include Specific File Paths

Claude Code can read your entire codebase, but if you don’t tell it where to act, it wastes time searching for context.

Before (illustrative):

```
Fix the bug in the date validation.
```

After:

```
Fix the date validation bug in src/utils/validators.ts. parseDate() accepts
invalid dates like 31/02/2026. The tests live in src/utils/validators.test.ts.
```

Specific paths eliminate ambiguity. A senior developer wouldn’t ask “which file?” if you already told them.

2. Give Constraint Context, Not Just the Goal

I learned this the hard way. Claude Code sometimes proposes technically correct solutions that don’t fit your stack or previous architectural decisions.

A prompt with constraints might look like this (the details are illustrative):

```
Add caching to the product search endpoint.

Constraints:
- We already run Redis for sessions; reuse that connection
- No new dependencies
- Follow the service pattern used in src/services/
```

When you give constraints, the result is almost always production-ready on the first try.

3. Use /compact Before Long Tasks

This command is massively underrated. When you’ve accumulated a lot of context in a session, the model starts losing track of earlier decisions.

/compact compresses the context while keeping what matters. Use it:

  • Before switching tasks within the same session
  • When responses start getting less precise
  • Before asking it to refactor something with multiple dependencies
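If the next task has a clear focus, `/compact` also accepts free-text instructions telling it what to prioritize in the summary (the wording here is just an example):

```
/compact Keep the decisions we made about the auth refactor; drop the exploratory file reads.
```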

4. The CLAUDE.md File Nobody Configures

If you do only one thing from this article, make it this: create a CLAUDE.md file in your project root.

This file is the contract between you and Claude Code. It explains the project rules before it starts working.

A minimal version might look something like this (stack and rules here are placeholders; use your own):

```markdown
# CLAUDE.md

## Stack
- Next.js 14 (App Router), TypeScript in strict mode
- Tailwind for styling, Prisma + PostgreSQL

## Conventions
- Named exports only, no default exports
- Tests live next to the file they cover (Vitest)

## Never
- Don't touch anything under src/legacy/
- Don't add dependencies without asking first
```

Claude Code reads this file automatically. It’s like the onboarding document you’d give a new developer on your team.
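You don’t have to start from a blank file, either. Claude Code’s `/init` command scans the project and drafts a CLAUDE.md for you:

```
/init
```

Treat the generated file as a first draft. The sections worth writing by hand are the conventions and the “never do this” rules, because those are exactly the things the model can’t infer from the code alone.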

Real Limitations (And How to Work With Them)

Let me tell you what nobody says in Twitter threads: Claude Code isn’t perfect.

Where it struggles most is complex event-driven architectures. If you have multiple services communicating through message queues with cascading side effects, the model can lose track of implicit dependencies.

The solution isn’t to ask less. It’s to ask differently:

  1. Explicit checkpoints: “Before making the change, explain which other files would be affected by this modification”
  2. Smaller tasks: Instead of “refactor the notification system”, break it down: “first move the sending logic to a service, then we’ll separate the channels”
  3. Understanding confirmation: “Summarize how this module currently works before modifying it”

In 2026, the Advantage Isn’t Having the Tool

Every developer I know is already using some AI coding tool. The competitive advantage isn’t having it. It’s knowing how to talk to it.

The figures Anthropic itself cites point to a 20-55% improvement in task-completion speed and up to a 3-5x acceleration in prototyping. Those numbers aren’t magic. They’re the result of well-executed prompting.

The difference between a dev who uses it well and one who doesn’t isn’t in the model. It’s whether they treat Claude Code like an autocomplete or like a collaborator with context.

Start Today With This

Two concrete steps for tomorrow:

  1. Create your CLAUDE.md: Spend 20 minutes documenting your stack, conventions, and what Claude Code should never do in your project. It’s an investment that pays off in the first long session.
  2. Change how you frame your next 5 prompts: Before sending them, ask yourself: am I describing the implementation or the expected outcome? Rewrite them if needed.

We keep building.

Brian Mena


Software engineer building profitable digital products: SaaS, directories and AI agents. All from scratch, all in production.

LinkedIn