
Using AI to Build Your Own Sales Intelligence Pipeline

Paul Allington · 2 September 2025 · 8 min read

I'll be honest with you. When people talk about "sales intelligence", I picture enterprise dashboards with Salesforce integrations and people in suits pointing at quarterly graphs. That's not what this is. This is me, at my desk, writing a C# console app to download meeting transcripts and then asking Claude to tell me why people don't buy our stuff.

It's less glamorous than it sounds. It's also one of the most useful things I've done with AI, and the valuable part had nothing to do with writing code.

The Problem With Scattered Data

We use Fathom for recording sales calls. It's brilliant at what it does - joins your meetings, records them, generates summaries and transcripts. But here's the thing: all that intelligence sits inside Fathom. Each call is its own little island. You can read individual summaries, but you can't easily ask questions across all your calls. Questions like: why do people actually buy? Why don't they? What objections come up repeatedly? What features get requested that we haven't built yet?

These are the questions that should be driving product and sales strategy. And the answers were buried in hundreds of individual meeting transcripts that nobody had time to read through systematically.

Step One: Getting the Data Out

Fathom has an API. I say this casually, but discovering it had one and figuring out how to use it was its own little adventure. I built a C# console app - FathomTranscriptDownloader - to pull all our meeting transcripts out.

The pagination was the first headache. Fathom's API returns results in pages, which is standard, but pulling the full history meant following those pages all the way back to the start. Not just the recent calls - everything. Months of sales conversations, discovery calls, demos, follow-ups. I needed the lot.

The app itself was straightforward. HTTP client, deserialise the JSON, handle the pagination tokens, write everything to files. Nothing clever. But the result was something I didn't have before: every sales conversation we'd ever recorded, in a format I could feed to Claude.
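For anyone wanting to do the same, here's the rough shape of it. The endpoint, auth header, and JSON field names below are placeholders - check Fathom's API documentation for the real ones - but the cursor-following loop is the part that matters:

```csharp
// Minimal sketch of the transcript downloader. Endpoint, auth header, and
// field names are illustrative, not Fathom's actual API shapes.
using System.Text.Json;

var apiKey = Environment.GetEnvironmentVariable("FATHOM_API_KEY")
             ?? throw new InvalidOperationException("Set FATHOM_API_KEY first.");

using var http = new HttpClient { BaseAddress = new Uri("https://api.fathom.example/") };
http.DefaultRequestHeaders.Add("X-Api-Key", apiKey); // hypothetical auth header

Directory.CreateDirectory("transcripts");

string? cursor = null;
var saved = 0;

do
{
    // Hypothetical endpoint: a paged list of meetings with transcripts included.
    var url = "v1/meetings?include_transcript=true"
              + (cursor is null ? "" : $"&cursor={Uri.EscapeDataString(cursor)}");

    using var response = await http.GetAsync(url);
    response.EnsureSuccessStatusCode();

    using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
    var root = doc.RootElement;

    foreach (var meeting in root.GetProperty("items").EnumerateArray())
    {
        var id = meeting.GetProperty("id").GetString();
        var transcript = meeting.GetProperty("transcript").GetString() ?? "";
        await File.WriteAllTextAsync(Path.Combine("transcripts", $"{id}.txt"), transcript);
        saved++;
    }

    // Keep following the cursor until the API stops handing one back.
    cursor = root.TryGetProperty("next_cursor", out var next) && next.ValueKind == JsonValueKind.String
        ? next.GetString()
        : null;
} while (cursor is not null);

Console.WriteLine($"Saved {saved} transcripts.");
```

Whatever the real field names turn out to be, the loop-until-no-cursor pattern is the whole trick: keep requesting pages until the API stops giving you a way to ask for the next one.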

The First Pass: Too Vague

My first attempt at analysis was predictably rubbish. I dumped a batch of transcripts into Claude and asked something like "analyse these sales calls and tell me what patterns you see."

Claude did exactly what I asked. It gave me a beautifully structured summary of common themes, key topics discussed, and general observations. It was accurate. It was also completely useless. "Customers frequently ask about pricing" - yes, thank you, they're on a sales call. "Integration capabilities are a common concern" - right, that's true of literally every B2B software product.

The problem wasn't Claude. The problem was my question. I was asking for patterns when I should have been asking for answers to specific business questions.

Getting Specific

Round two was better. Instead of "what patterns do you see", I asked specific questions: Why do we win deals? Why do we lose them? What's the usual pushback? At what point in the conversation do prospects go cold?
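The exact wording matters less than the shape. Something along these lines (illustrative, not my prompt verbatim):

```
Here are transcripts from our recent sales calls.

Answer each of the following with specific evidence - quote the actual
phrases prospects used, don't just summarise themes:

1. Why do we win deals?
2. Why do we lose them?
3. What's the usual pushback, and how do prospects phrase it?
4. At what point in the conversation do prospects go cold?
```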

The difference was immediate. Claude started pulling out specific objections, actual phrases prospects used, moments in conversations where enthusiasm turned to hesitation. It could identify that pricing wasn't the issue in most lost deals - it was implementation timeline concerns. It could see that the prospects who converted fastest were the ones who'd already tried a competitor and been frustrated by specific limitations.

That's not a generic insight. That's something I can act on in a sales deck tomorrow morning.

The Iteration That Made It Useful

Here's the thing though. The first specific analysis was based on whatever batch of transcripts I'd loaded. As we recorded more calls, I needed to re-run the analysis with the new data included. So I'd add the latest transcripts and ask Claude to update its findings.

But I also started pushing back on the format of the answers. The initial responses were qualitative: "many prospects raised concerns about X", "several calls mentioned Y". That's helpful for understanding themes, but it's not helpful for prioritisation. Is "many" 80% or 30%? Because those lead to very different business decisions.

So I asked Claude to quantify. Give me proportions. What percentage of lost deals cited implementation concerns? What fraction of successful conversions mentioned a specific competitor by name? Of the product development recommendations, which ones appear most frequently and what's the split?

This is where the analysis got genuinely valuable. Not "customers want better reporting" but "42% of feature requests relate to reporting, with dashboard customisation being the most specific ask, appearing in 15 of 36 calls where feature requests were made." That's data I can take to a product planning session. That's data that changes how I prioritise a roadmap.

When the Context Ran Out

Inevitably, the conversation hit the context window wall. We had a lot of transcripts, the analysis was getting detailed, and Claude's context filled up. The conversation just stopped being useful - responses got vague, it started repeating itself, and I could tell it was losing the thread of the earlier analysis.

I had to start a new conversation and re-establish the context. This is annoying but manageable. The key was being explicit about what had already been established: "I've been analysing sales call transcripts. Here are the key findings so far. Here are the new transcripts. Update the analysis with these included and pay particular attention to whether the proportions change."

It's not seamless. You lose some nuance in the transition. But the alternative is not doing the analysis at all, so you work with the tools as they are.
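One thing that softens the problem is batching the transcripts by size before they go anywhere near the chat window, so each new conversation starts with a load you know will fit. A minimal sketch, assuming the transcripts are plain text files in a folder and using a crude four-characters-per-token estimate:

```csharp
// Rough batching helper: groups transcript files so each batch stays under
// a token budget. Four chars per token is a rule of thumb, not a tokeniser -
// leave plenty of headroom for the analysis itself.
const int MaxTokensPerBatch = 100_000;
const int ApproxCharsPerToken = 4;

var budget = MaxTokensPerBatch * ApproxCharsPerToken;
var batch = new List<string>();
var batchChars = 0;
var batchNumber = 1;

foreach (var file in Directory.EnumerateFiles("transcripts", "*.txt"))
{
    var text = File.ReadAllText(file);

    // Flush the current batch before it blows the budget.
    if (batchChars + text.Length > budget && batch.Count > 0)
    {
        File.WriteAllText($"batch-{batchNumber++}.txt", string.Join("\n\n---\n\n", batch));
        batch.Clear();
        batchChars = 0;
    }

    batch.Add($"## {Path.GetFileName(file)}\n{text}");
    batchChars += text.Length;
}

if (batch.Count > 0)
    File.WriteAllText($"batch-{batchNumber}.txt", string.Join("\n\n---\n\n", batch));
```

Each new conversation then gets one batch plus the summary of findings so far - the same hand-off pattern as above.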

What I Actually Learned About Our Sales

I'm not going to share the specific findings because they're commercially sensitive. But I can share the shape of what came out:

Our win reasons were different from what we assumed. We thought we were winning on features. We were actually winning on speed of setup and the quality of our onboarding conversations. The product was table stakes - what differentiated us was the experience of becoming a customer.

Our loss reasons were more fixable than expected. Most of what we were losing to wasn't fundamental product gaps. It was specific, addressable concerns about implementation that we could mitigate with better documentation and clearer timelines upfront.

The pushback patterns were remarkably consistent. Three objections accounted for over 70% of resistance. Once you know that, you can prepare for them. You can address them proactively in your pitch. You can restructure your demo flow to neutralise them before they come up.

None of these insights are revolutionary. A good sales manager reviewing calls manually would eventually reach the same conclusions. But "eventually" is doing a lot of heavy lifting in that sentence. I got quantified, evidence-based answers in an afternoon. The manual approach would have taken weeks, and realistically, nobody was ever going to do it.

AI as Business Intelligence, Not Just Code

This project shifted how I think about AI in a business context. Most of my AI journey so far has been about writing code faster. This wasn't about code at all. The console app took an hour to build. The real value was in the conversation with Claude about the data.

I was using AI as a business analyst. Not in the abstract, hand-wavy "AI will transform your business" sense. In the practical sense of: I have data, I have questions, and I need someone to go through hundreds of documents and give me structured, quantified answers.

The iterative refinement is the part nobody talks about. You don't just dump data into AI and get insights. You dump data in, get vague insights, ask better questions, get better insights, push for quantification, get useful insights, add more data, re-run, and gradually build up a picture that's actually actionable. The quality of what you get out is directly proportional to how well you steer the conversation.

If you're sitting on a pile of customer conversations, support tickets, or sales data and you haven't tried this approach yet, I'd strongly recommend it. Write a script to extract the data, feed it to Claude, and start asking the questions you've always wanted answered but never had time to investigate. The answers might surprise you. They certainly surprised me.

Want to talk?

If you're on a similar AI journey or want to discuss what I've learned, get in touch.
