Wed Sep 10 - Written by: Brendan McNulty

Week 37: Testing whether better prompts actually matter

(and why I’ve been overthinking my AI conversations)

The Experiment

I’ve been obsessing over prompt engineering for months. Reading guides, testing different frameworks, trying to craft the “perfect” prompt. But here’s the thing—I started wondering if I was overthinking it. Are better prompts actually delivering better results, or am I just making my AI conversations unnecessarily complicated?

So I decided to put it to the test. I’d run the same tasks with three different prompt approaches and see if the extra effort actually mattered.

The Approaches

  • Simple Prompt: Just ask the question directly
  • Structured Prompt: Use a framework like “Act as a [role], provide [format], consider [context]”
  • Optimized Prompt: My current approach with detailed instructions, examples, and constraints (all three are sketched below)
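
To make those three buckets concrete, here’s roughly what they look like applied to one of my test tasks. This is an illustrative Python sketch; the exact wording I used varied task by task, and the copywriter role, format, and constraints here are placeholders, not my canonical prompts.

```python
# The same task written three ways. Wording is illustrative, not the
# exact prompts from my tests.
TASK = "Write a product description for a wireless charging pad."

PROMPTS = {
    # Approach 1: just ask the question directly
    "simple": TASK,

    # Approach 2: role / format / context framework
    "structured": (
        f"Act as an e-commerce copywriter. {TASK} "
        "Provide a short paragraph plus three bullet points. "
        "Consider shoppers comparing budget options."
    ),

    # Approach 3: detailed instructions, an example, and constraints
    "optimized": (
        f"Act as an e-commerce copywriter. {TASK}\n"
        "Instructions: lead with the main benefit, mention charging speed "
        "and device compatibility, end with a call to action.\n"
        "Example of the tone I want: 'Drop your phone on the pad and walk "
        "away charged.'\n"
        "Constraints: under 120 words, no jargon, write in second person."
    ),
}
```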

The Process

I designed three real-world tasks that would test different aspects of AI interaction:

  1. Content Creation Challenge: “Write a product description for a wireless charging pad.” Tests: quality, creativity, completeness.
  2. Analysis Task: “Review this customer feedback and identify key themes.” Tests: accuracy, insight depth, organization.
  3. Problem-Solving: “Help me troubleshoot why my website loads slowly.” Tests: practical usefulness, step-by-step guidance.

For each task, I’d use all three prompt approaches and rate the results on quality, speed, and that gut-level “is this actually better?” feeling.
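
For anyone who wants to reproduce the loop, here’s a minimal sketch that runs each variant from the PROMPTS dict above and logs the output and response time to a CSV. It assumes the OpenAI Python SDK (pip install openai) with an OPENAI_API_KEY in the environment; the model name is a placeholder for whichever one you use, and quality plus the gut-level rating stay manual.

```python
# Minimal test loop: run every prompt variant, record output + latency.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment;
# swap in whatever chat client and model you actually use.
import csv
import time

from openai import OpenAI

client = OpenAI()

with open("prompt_test_results.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["style", "seconds", "output"])
    for style, prompt in PROMPTS.items():  # PROMPTS from the sketch above
        start = time.perf_counter()
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        output = response.choices[0].message.content
        writer.writerow([style, round(time.perf_counter() - start, 2), output])
    # Quality and the "is this actually better?" rating are manual: read
    # the CSV and score each output yourself before comparing styles.
```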

The Outcome

Here’s what genuinely surprised me: the differences were smaller than I expected.

Simple prompts often worked just fine. For straightforward tasks, asking directly got me 80% of the way there. The AI understood what I wanted and delivered solid results without all the extra instruction.

Structured prompts had their moments. When I needed specific formatting or a particular perspective, the framework approach helped. But it wasn’t always necessary—sometimes it just added complexity without adding value.

Optimized prompts weren’t always worth the effort. Sure, they sometimes delivered more polished results, but the time investment didn’t always pay off. I was spending more time crafting prompts than I was saving in output quality.

But here’s the kicker: the biggest difference wasn’t in the prompts themselves—it was in my expectations. When I used a “simple” prompt, I was more forgiving of imperfections. When I used an “optimized” prompt, I expected perfection and got frustrated when it wasn’t delivered.

Key Takeaway

Better prompts can deliver better results, but the improvement is often marginal compared to the effort required. The real value isn’t in perfect prompt engineering—it’s in knowing when to optimize and when to keep it simple.

I don’t think it’s as much about the prompt structure as it is about clarity of intent. A clear, direct question often works better than a complex, perfectly structured prompt that obscures what you actually want.

Pro Tips for Prompt Testing:

  1. Start Simple: Try the most direct approach first. You might be surprised how well it works.
  2. Test Your Real Workflows: Don’t just do generic tests—use the actual tasks you do every day.
  3. Measure Time vs. Quality: Consider whether the extra prompt engineering time is worth the marginal quality improvement.
  4. Know When to Optimize: Save the complex prompts for tasks where the extra structure actually adds value.

What’s Next?

I’m planning to simplify my prompt approach. Focus on clarity over complexity, and only add structure when it genuinely improves the output. Sometimes the best prompt is just asking exactly what you want.

The real question isn’t whether better prompts matter (they can); it’s whether the effort to create them is always worth it. For most tasks, the answer is probably no.

Want to Try It Yourself?

  • Start Simple: Ask your question directly before adding complexity
  • Test Side-by-Side: Compare simple vs. structured prompts on the same task (see the sketch after this list)
  • Focus on Clarity: Make sure your prompt clearly communicates what you want
  • Measure Results: Track whether the extra effort actually improves your outcomes
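
And if a spreadsheet feels like overkill, a bare-bones side-by-side is enough to start. Same assumed SDK setup as the sketches above; the question and the structured wording are just examples.

```python
# Quick side-by-side: the same question, asked plainly and with structure.
from openai import OpenAI

client = OpenAI()
question = "Help me troubleshoot why my website loads slowly."

for label, prompt in {
    "simple": question,
    "structured": (
        f"Act as a web performance engineer. {question} "
        "Provide a numbered checklist. Consider a small WordPress site."
    ),
}.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} ---\n{reply.choices[0].message.content}\n")
```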