Forty thousand dollars on a contractor over six months. Zero dashboards delivered. Not because the contractor was bad — because the problem was wrong for the approach. Custom client reports pulled from Google Ads, Meta Ads, Klaviyo, and Shopify data, formatted differently for every client, with insights that actually mean something. That doesn't yield to a traditional development process where you scope, build, test, and iterate over weeks.
Then on a Monday night, five fully custom client reports. Intercontinental, Beauty Chef, JMC Academy, Roomia, Husqvarna. Each from completely different data sources. Each in a different format for a different business. Built with Claude Code and MCP connections: Model Context Protocol, essentially live data feeds from Google Ads, Meta Ads, and Klaviyo.
But this is not a victory lap.
The contractor was brought in to solve our reporting problem. We needed automated dashboards — something better than Looker Studio with notes pasted in. Something the team could send to clients that didn't require three hours of manual formatting every month.
Six months and $40,000 later, we had workflows in progress, some automation scaffolding, and no finished dashboards. The account manager workflow got built but never fully deployed. The QSP system got started and then rebuilt. Things were paused while contracts were renegotiated. Meanwhile, Alex — who'd been pushing for this since day one — put it plainly in the Monday meeting: "I've spent over $40,000 on G14 and we have got not one dashboard."
That's not a criticism of the contractor. It's a recognition that the problem — diverse data, diverse clients, custom everything — doesn't fit the traditional build-test-deploy cycle. By the time you've scoped one client's requirements, the next client needs something entirely different.
The build happened partly by accident. I'd been on a call with Aleksey earlier that day, showing him the Google Ads MCP connected through Claude. "Build me a Google Ads dashboard for JMC Academy," I said, and we watched it pull 24 months of data and construct the report in real time. "This will be a game changer for the eCommerce squad sessions," he said. "Every client will have a dashboard."
That energy carried into the evening. One report became two, then three. The Google Ads MCP pulled the campaign data. The Meta Ads MCP — which we'd only just got working through a third-party tool — pulled the paid social numbers. Klaviyo's native MCP connected for the customer lifecycle data. Each report was built from scratch for the specific client, pulling live data, generating insights, formatting in branded HTML.
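For the curious, the wiring is lighter than it sounds. Claude Code registers MCP servers with its `claude mcp add` command; here's a minimal sketch, with placeholder package names in angle brackets, since the exact servers we used (and the third-party tool behind the Meta Ads connection) aren't the point.

```sh
# A sketch, not our exact setup: the <...> package names are placeholders,
# and our Meta Ads connection actually ran through a third-party tool.
# Each command registers an MCP server that Claude Code can query live.
claude mcp add google-ads -- npx -y <google-ads-mcp-server>
claude mcp add meta-ads -- npx -y <meta-ads-mcp-server>
claude mcp add klaviyo -e KLAVIYO_API_KEY=<your-key> -- npx -y <klaviyo-mcp-server>
```

Once the servers are registered, a prompt like "build me a Google Ads dashboard for JMC Academy" can pull live campaign data instead of working from a static export.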
Five reports. Shared in Slack at 2am. With my own caveat attached: "Still work to do on the insights."
This is the core of the piece. And it's the part that doesn't make for a clean narrative.
The reports exist. Five of them. Each one better than anything we've sent a client from Looker Studio. But by Tuesday morning, the gap between what I'd built and what the organisation could do with it was painfully obvious.
Aleksey messaged asking for clarity on what the next step was. Not because he wasn't engaged — because the path from "Fitz built five reports at 2am" to "the team delivers these to clients" had about fifteen unsolved problems in it. Where do they get hosted? Who deploys them? What's the folder structure? How do we stop people overwriting each other's work? Is it GitHub? Cloudflare Pages? A Webprofits subdomain?
Alex, meanwhile, immediately wanted full automation within a month. Every client. Every channel. Every report. He'd gone from telling me I was moving too fast on Monday morning to wanting the entire company automated by month's end. Both impulses made sense. Neither was a plan.
Mason, Jay, and Alex spent an entire meeting on Tuesday trying to solve the infrastructure: GitHub repos, deployment pipelines, CNAME records, brand consistency across reports. The technology had worked. The organisation wasn't shaped to receive it.
"I can build this" does not equal "we can sustain this."
That's the sentence I keep coming back to. One person building five reports overnight proves the technology works. It doesn't prove the organisation can deliver five reports every month, to every client, at a consistent standard, with a process that doesn't depend on one person staying up until 2am.
The plan shifted, and this is where the companion piece picks up the story from the vulnerability angle. Instead of me building everything, we moved to giving the team the infrastructure and starting points to build their own reports.
The clearest signal it could work: Alex building his own Waterdrop report by Tuesday night. Connecting the Klaviyo MCP himself. Not because I trained him — because the starting point was there and he could build something real.
This is not a story about AI replacing contractors. The contractor wasn't the wrong choice at the time — we didn't have the tools six months ago that we have now. The MCP ecosystem, Claude Code's ability to work with live data, the maturity of the agent paradigm — none of that existed when we started the engagement.
It's a story about individual capability versus organisational readiness. And the measure of success isn't what one person can build at 2am. It's what the organisation sustains at 2pm on a Wednesday — when there's a client meeting in an hour, three reports due, and the person who built the prototype is in another meeting entirely.
We're not there yet. But Alex building Waterdrop unprompted on Tuesday night is the first sign that the gap is starting to close. Not because the technology got better. Because the infrastructure for the team to use it got built.
This article covers the capability proof. The vulnerability side — what I got wrong about the rollout, why I was arguing with my co-founder, and what changed in 48 hours — is in the companion piece.
Read: I Was Wrong About How to Roll Out AI Across a Team →