The situation
A seed-stage B2B SaaS team had raised on early traction and was hiring ahead of Series A. Marketing had been running for fourteen months and was producing results, but the team could not reliably explain why any given month was good or bad. The founders were making the majority of commercial decisions by instinct, and the marketing function was effectively an execution layer without a decision layer.
The brief from the CEO was short: “I do not want to add more activity. I want the team to know what it is actually trying to learn.”
Diagnosis (weeks 1–2)
The first two weeks were a structured diagnosis, not a discovery sprint. The goal was to separate the urgent noise from the decisions that actually mattered.
What we found:
- Seven active acquisition channels. Three were absorbing most of the calendar time. Only one had clear evidence of compounding return.
- A landing page that described the product in internal language. The real customer language — visible in support tickets, sales transcripts, and onboarding calls — was not making it into the site.
- A “weekly growth sync” that reviewed tactics but not hypotheses. The team could list what had happened but not what had been learned.
- Monthly reporting that pulled numbers from five tools into a spreadsheet. Nobody used the spreadsheet to make a decision.
The diagnosis ended with a three-page memo: the real bottleneck was not channel performance. It was decision quality.
The rebuild (weeks 3–12)
Three layers were rebuilt in parallel.
1. A single GTM priority stack
One page. Three priorities for the quarter. Each one tied to a specific commercial outcome the team could actually measure. Everything that did not contribute to one of the three priorities was paused, parked, or killed. Three channels were retired in the first month.
2. An experiment cadence with explicit hypotheses
Every experiment now had a written hypothesis, a pre-agreed decision rule, and a single owner. The weekly sync was replaced with a fortnightly review that opened with the experiments closed since the last review and what each one had taught the team. “What did we learn?” became the first question in the room.
3. A monthly growth review
A 45-minute monthly session with the CEO and head of product. No slides. One page: priorities, experiments closed, learnings, decisions, revisions. The review produced decisions, not updates. By month three, the CEO was attending the review, not running it.
What changed
By day 90:
- Qualified pipeline had grown 2.4× versus the previous 90 days, driven mostly by sharper targeting and a positioning update that reflected the actual job customers were hiring the product for.
- Outbound reply rates rose by 42% after a narrative refresh built from existing customer language — no new research project required.
- Active channels dropped from seven to three. Per-channel learning accelerated; team meetings got shorter.
- The CEO estimated a 60% reduction in time spent on marketing decisions. More importantly, the decisions that did reach the CEO were the ones worth the CEO's time.
The engagement closed with a handoff to the in-house team: a written operating rhythm, a decision log, and a short hiring brief for the first full-time marketing leader.
Where this fit
This was a classic fractional CMO window: real demand, growing team, rising cost of unclear priorities, and a founder who knew the commercial spine needed to be built before the next hire, not after. The value was not in doing more — it was in building the decision layer the team had never had.
This case study is anonymised. Engagement details, metrics, and the CEO quote are preserved with the client’s permission; the company name and identifying specifics are withheld.