Perspective · 3 min read

Why we won't hand you our prompts.

It's a fair question — clients want to see the magic. The honest answer is that the prompt isn't where the magic lives, and handing it over without the rest of the system is a recipe for disappointment.

A few months into any engagement, the question comes up: can we see the prompts you're using? It's a reasonable request. The prompt feels like the seat of the intelligence — where instructions meet capability — and clients want a copy. We almost always decline, and then we explain why.

The system, not the sentence.

A working AI feature is rarely held together by a clever prompt. It's held together by everything around the prompt. The retrieval that puts the right context in front of the model. The evaluation harness that catches regressions when a prompt or model changes. The output validation that re-runs the call when the result doesn't parse. The fallback logic for when the model is slow, wrong, or unavailable. The logging that tells us, a week later, whether the thing is still working. The prompt is one ingredient. It's a small one.
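
As a sketch of just one of those surrounding pieces — the validation, retry, and fallback layer — consider the snippet below. Everything in it is hypothetical: `call_model` stands in for a real LLM API call, the simulated failure and the retry budget are illustrative, and no real provider is implied.

```python
import json

def call_model(prompt: str, attempt: int) -> str:
    # Hypothetical stand-in for a real model call. In practice this would hit
    # an LLM API, which can return malformed output or fail outright.
    if attempt == 0:
        return "not json"  # simulate a bad first response
    return json.dumps({"answer": "42"})

def run_with_validation(prompt: str, max_retries: int = 2) -> dict:
    """Re-run the call when the result doesn't parse; fall back if it never does."""
    for attempt in range(max_retries + 1):
        raw = call_model(prompt, attempt)
        try:
            return json.loads(raw)  # output validation: result must parse
        except json.JSONDecodeError:
            continue  # regenerate rather than pass garbage downstream
    # Fallback: a well-formed "we couldn't do it" beats a crash or a bad answer.
    return {"answer": None, "fallback": True}
```

None of this lives in the prompt, and all of it is load-bearing: remove the validation loop and the same prompt, with the same model, produces a feature that intermittently breaks.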

We've watched what happens when prompts leave that system. A client takes a prompt that worked, hands it to a different team, runs it through a different model, against different inputs, without the evaluation suite — and concludes the prompt doesn't work. They're right. The prompt doesn't work, in that context. It worked in the original system because of the system, not the words. The words were tuned to fit the rest. Move them, and the fit goes.

The same thing happens internally. Engineers see a sentence that worked over there, paste it into a new feature, and watch it underperform. The lesson is rediscovered weekly: prompts don't compose, systems do.

Ask for the eval suite instead.

Our honest recommendation, when clients ask, is to ask for something else. Ask for the evaluation suite — the inputs, the expected outputs, the scoring methodology. With that in hand, you can change models, change vendors, change the prompts entirely, and verify that you're still getting the result you wanted. The eval suite is portable. The prompt isn't.
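
A minimal sketch of what such a suite can look like. The cases, the `summarize` stand-in, and the containment scoring rule are all illustrative assumptions, not material from any real engagement:

```python
# A tiny evaluation suite: inputs, expected outputs, and a scoring rule.
EVAL_CASES = [
    {"input": "The meeting moved to 3pm.", "expected": "3pm"},
    {"input": "Invoice total is $120.",    "expected": "$120"},
]

def summarize(text: str) -> str:
    # Placeholder for whatever model + prompt + vendor is under test;
    # swap in a different implementation and the suite still applies.
    return text.split()[-1].rstrip(".")

def score(system, cases) -> float:
    """Fraction of cases where the system's output contains the expected string."""
    hits = sum(1 for case in cases if case["expected"] in system(case["input"]))
    return hits / len(cases)
```

The point is the shape, not the contents: `score` knows nothing about which prompt or provider produced the output, so you can change either and immediately see whether the result you cared about survived.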

We've shipped engagements where the prompt changed five times during build and another three times after launch, while the eval suite stayed the anchor. That's the asset worth keeping. The prompt is a means to an end. The eval is the end, written down.

A clarification.

We'll share the prompts when clients insist. They aren't secret. We'd just rather spend the conversation talking about the part that travels — the part that will still be useful after the next model release, the next provider switch, the next round of edits. That conversation is more valuable than handing over a snippet that decays the moment it leaves the system it was built for.

The work isn't in the prompt. It's in the system. Ask for the system.
