Using AI to Write an Award-Winning Business Nomination

Paul Allington 27 March 2026 7 min read

We won Customer Service of the Year at the Uttlesford Business Awards. I should probably mention that upfront, because otherwise the rest of this post sounds like I'm writing about a failed experiment, and for once it's not.

The nomination was written with significant help from Claude. The customer data it analysed was real. The communication logs it parsed were real. And the number of times I had to correct its assumptions about our business was also, regrettably, very real.

The Brief

The Uttlesford Business Awards has a Customer Service of the Year category. You submit a written nomination explaining why your business deserves it, backed by evidence. At The Code Zone, we'd always felt our customer service was strong, but "we feel like we're good at this" doesn't win awards. You need data, examples, and a compelling narrative.

I had the data. Customer reviews on reviews.io. A complete export of our customer messages. Notes from our internal system. What I didn't have was the time or inclination to read through all of it, identify the strongest examples, calculate response metrics, and weave it into a persuasive nomination document.

So I asked Claude to do it.

Feeding In the Evidence

I exported everything. The reviews.io data gave us star ratings, written reviews, and timestamps. The messages.json dump gave us every customer communication - enquiries, complaints, follow-ups, the lot. The notes.json file had internal records of customer interactions that hadn't gone through the main messaging system.

I loaded it all into Claude and asked for an analysis: what themes emerge from our customer feedback? What's our average response time? What are the strongest examples of customer service we could cite in a nomination?
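The post doesn't show the actual export schema, but the response-time part of that analysis boils down to pairing each customer enquiry with the next reply in the same thread. A minimal sketch, assuming a hypothetical `messages.json` layout with `thread_id`, `direction`, and ISO `sent_at` fields (the field names are my invention, not the real export):

```python
from datetime import datetime

# Hypothetical schema: "direction" is "inbound" (from a customer)
# or "outbound" (from us); "sent_at" is an ISO-8601 timestamp.
messages = [
    {"thread_id": 1, "direction": "inbound",  "sent_at": "2025-06-02T09:14:00+00:00"},
    {"thread_id": 1, "direction": "outbound", "sent_at": "2025-06-02T09:41:00+00:00"},
    {"thread_id": 2, "direction": "inbound",  "sent_at": "2025-06-07T19:05:00+00:00"},
    {"thread_id": 2, "direction": "outbound", "sent_at": "2025-06-07T20:20:00+00:00"},
]

def response_intervals(messages):
    """Pair each enquiry with the next reply in its thread;
    return the gaps in minutes."""
    by_thread = {}
    # ISO timestamps with a common offset sort chronologically as strings.
    for m in sorted(messages, key=lambda m: m["sent_at"]):
        by_thread.setdefault(m["thread_id"], []).append(m)
    gaps = []
    for thread in by_thread.values():
        pending = None  # earliest unanswered enquiry in this thread
        for m in thread:
            when = datetime.fromisoformat(m["sent_at"])
            if m["direction"] == "inbound":
                pending = pending or when
            elif pending is not None:
                gaps.append((when - pending).total_seconds() / 60)
                pending = None
    return gaps

print(response_intervals(messages))  # → [27.0, 75.0]
```

In practice Claude did this pairing itself from the raw dump; the sketch is just to show why the calculation is tedious by hand but trivial once the data is structured.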

The initial analysis was thorough. Claude identified recurring themes in positive reviews, calculated response time averages from the message timestamps, and pulled out specific customer quotes that were genuinely compelling. So far, so good.

The Wrong Customer

Then Claude wrote the first draft of the nomination, and it was immediately obvious that something was off. The entire framing was centred around children. "The Code Zone provides exceptional service to its young learners...", "Children across Uttlesford benefit from...", that sort of thing.

Here's the thing though: our customers are not strictly the children. There are two elements to The Code Zone. The kids are our students - they attend the classes, they learn to code, they're the ones having the experience. But our customers, the people who book, pay, communicate with us, and write reviews, are the parents. And the award was for customer service, which means it's about our interactions with the people we have a service relationship with.

I had to explain this distinction to Claude: it isn't strictly true that our customers are the children, because there are two elements here. The children are the students, but customer service is about our relationship with the parents. Claude adjusted, but it's a good example of something AI can't know unless you tell it: the domain-specific nuance of who your actual customer is.

A human who'd worked at The Code Zone would know this instinctively. Claude, working from the data alone, made a reasonable but wrong assumption. And if I hadn't caught it, the nomination would have been subtly off-target - technically accurate but missing the point of what makes our customer service good.

"Remove the Summary. That's Very AI."

The second draft was better in terms of focus, but it had a different problem. Claude had included a neat summary section at the end that wrapped everything up with a bow. It was the kind of conclusion you'd expect from a well-structured document: "In summary, The Code Zone's commitment to exceptional customer service is evidenced by..." followed by bullet points restating everything that had already been said.

I told Claude to remove it. "Remove the summary, that's very AI."

And it is. That wrap-up-with-a-summary-paragraph pattern is one of the most recognisable tells of AI-generated content. Real business writing, the kind that wins awards, doesn't summarise itself at the end. It makes its case and trusts the reader to have followed along. The summary adds nothing except a signal that says "this was generated, not written."

This is a recurring theme in my experience with AI writing: the first draft is always too structured. Too many headings, too many bullet points, too many summaries. Humans don't write like that. We meander. We emphasise through repetition and phrasing, not through bold headers. Getting AI output to feel human often means removing the scaffolding that makes it feel organised.

Mining the Data for Numbers

What Claude was excellent at was the quantitative work. I asked for average response times from the message data, and it calculated them properly - median response time across all customer enquiries, broken down by time of day and day of week. The numbers were genuinely impressive, and they were numbers I wouldn't have calculated manually because the dataset was too large to do by hand in any reasonable timeframe.

It also did something I hadn't thought to ask for: it identified response time patterns. Our fastest responses were during weekday business hours, obviously, but our weekend and evening response times were also strong - better than you'd expect for a small business. That became a key part of the nomination. We don't just provide good customer service during office hours. We provide it when parents actually need it, which is often evenings and weekends when they're booking classes for their kids.
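That weekday/weekend and time-of-day breakdown is straightforward to express in code. A minimal sketch with made-up sample data (the real figures came from the messages export, not from anything shown here):

```python
from datetime import datetime
from statistics import median

# Hypothetical (enquiry_time, response_minutes) pairs for illustration.
samples = [
    ("2025-06-02T09:14:00", 12.0),  # Monday, office hours
    ("2025-06-03T14:30:00", 9.0),   # Tuesday, office hours
    ("2025-06-04T20:10:00", 45.0),  # Wednesday evening
    ("2025-06-07T11:00:00", 30.0),  # Saturday
    ("2025-06-08T19:45:00", 50.0),  # Sunday evening
]

def median_by_bucket(samples):
    """Group response times by weekday/weekend and office/out-of-hours,
    then take the median of each bucket."""
    buckets = {}
    for iso, minutes in samples:
        t = datetime.fromisoformat(iso)
        day = "weekend" if t.weekday() >= 5 else "weekday"
        slot = "office" if 9 <= t.hour < 17 else "out-of-hours"
        buckets.setdefault((day, slot), []).append(minutes)
    return {k: median(v) for k, v in buckets.items()}

print(median_by_bucket(samples))
```

The median, rather than the mean, keeps one slow outlier from distorting the picture, which matters when you're citing the numbers as evidence.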

Claude pulled specific quotes from reviews that supported the narrative. Not just five-star reviews - it found reviews where customers specifically mentioned the quality of communication, the speed of responses, and the personal touch. Those specific, detailed quotes are worth ten times more than generic "great service" reviews in a nomination document.

The Final Version

The nomination that actually went in was probably the fourth or fifth draft. Each iteration involved me reading through, correcting assumptions, removing AI-sounding language, and redirecting the emphasis. Claude did the heavy lifting of data analysis and initial structuring. I did the heavy lifting of making it sound like it was written by a real person about a real business.

The split of work was roughly: Claude provided 80% of the content and structure. I provided 100% of the voice and judgement about what to emphasise. Neither could have done it alone - I genuinely wouldn't have gone through all that customer data manually, and Claude genuinely couldn't write in a way that doesn't sound like AI without being corrected repeatedly.

What Winning Actually Felt Like

When we won, I felt two things simultaneously. First, genuine pride. The Code Zone's customer service is good because the team works hard at it, and the award recognised that. Second, a slight unease about how much of the nomination process was AI-assisted.

I've thought about this, and I've landed on being comfortable with it. The customer service itself is entirely human. The reviews are real. The response times are real. The personal interactions that customers cited are real. Claude helped me compile and present that evidence effectively, but it didn't create the evidence. It's not fundamentally different from hiring a copywriter to help with a nomination - the writing isn't the achievement, the service is.

But I do think it's worth being transparent about. If AI tools can help a small business compile an award-winning nomination from genuine customer data in an afternoon, that's useful information for other small business owners who might be sitting on great evidence but don't have the time or writing resources to present it effectively.

The Lesson About AI and Domain Knowledge

The biggest takeaway from this experience isn't about award nominations. It's about the interaction between AI capability and domain knowledge. Claude could process the data, identify patterns, calculate metrics, and draft persuasive text. But it couldn't know that our customers are parents not children. It couldn't know that a summary paragraph would flag the writing as AI-generated. It couldn't know which aspects of our service the judges would find most compelling.

AI without domain knowledge produces impressive-looking output that misses the point. Domain knowledge without AI's analytical capability produces gut feelings that can't be backed by data. The combination, with a human steering and an AI processing, produces something genuinely better than either could manage alone.

Just be prepared to say "that's very AI" more often than you'd expect. And don't feel bad about it. That's the job.

Want to talk?

If you're on a similar AI journey or want to discuss what I've learned, get in touch.

paul@thecodeguy.co.uk