PROVE Framework
Five phases. One system.
Every engagement.
Most businesses know something isn't working. They just can't tell you exactly what — or what to fix first. PROVE is the system we use across every engagement to find the real problems, fix the right ones, and make the improvements stick.
Five phases. One question each.
Profile — Understand before you change anything.
Before we touch a single process, page, or data source, we map what's actually happening — not what should be happening. Profiling closes the gap between assumption and reality.
What this looks like in practice
- Quantitative: Pull the data. Where are the drop-offs? What's the actual conversion rate, task completion time, error frequency, or cost per outcome?
- Qualitative: Talk to people. What do users, staff, or customers actually experience? Where do they get stuck?
- Journey mapping: Trace the full path from first touchpoint to final outcome. Most problems hide in the transitions between steps.
- Environment scan: What tools are in play? What's connected and what isn't? Where is data being entered twice?
Across the three services
AI & Automation: Process audit — map every manual step, time each one, identify bottlenecks and tasks ripe for AI.
Web & CRO: Analytics deep-dive, heatmaps, session recordings. Funnel analysis from landing to conversion.
Analytics: Data source inventory — what's tracked, what's missing, what's misconfigured. Reporting gaps.
Key questions
- What does the client think the problem is?
- What does the data say the problem actually is?
- Where is the biggest gap between those two answers?
- What's the baseline we're measuring improvement against?
Rank — Fix the things that matter most, first.
Profiling always surfaces more problems than you can solve at once. Rank is where you decide what to tackle — and in what order — based on evidence, not gut feel.
The ranking criteria
- Impact (weight: high): How much will this move the number that matters?
- Confidence (weight: medium): How sure are we this will work? Based on data, not opinion.
- Effort (weight: medium): How much time, money, or complexity is involved?
- Learning value (weight: low–medium): Even if this doesn't win, will it teach us something we can use?
Worked example
Three candidate fixes, ranked:
- Do first: A manual CRM data entry process taking 4 hours/week (high impact) with proven n8n templates available (high confidence) and a 1-week build (low effort).
- Do later: A full website redesign with medium expected impact, an unproven design direction, and a 3-month timeline.
- Deprioritise: A dashboard colour scheme refresh with low measurable impact.
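The ranking logic above can be sketched as a simple weighted score. This is an illustration, not a prescribed tool: the numeric weights and 1–5 scores are assumptions chosen to mirror the high/medium/low–medium weightings described.

```python
from dataclasses import dataclass

# Illustrative weights mirroring the criteria above (assumed values).
WEIGHTS = {"impact": 3.0, "confidence": 2.0, "effort": 2.0, "learning": 1.5}

@dataclass
class Opportunity:
    name: str
    impact: int      # 1-5: how much this moves the number that matters
    confidence: int  # 1-5: how sure we are it will work
    effort: int      # 1-5: higher = more time, money, or complexity
    learning: int    # 1-5: what we learn even if it fails

    def score(self) -> float:
        # Effort counts against the score, so invert it (6 - effort).
        return (WEIGHTS["impact"] * self.impact
                + WEIGHTS["confidence"] * self.confidence
                + WEIGHTS["effort"] * (6 - self.effort)
                + WEIGHTS["learning"] * self.learning)

candidates = [
    Opportunity("CRM entry automation", impact=5, confidence=5, effort=1, learning=3),
    Opportunity("Full website redesign", impact=3, confidence=2, effort=5, learning=3),
    Opportunity("Dashboard colour refresh", impact=1, confidence=3, effort=2, learning=1),
]

for opp in sorted(candidates, key=Opportunity.score, reverse=True):
    print(f"{opp.name}: {opp.score():.1f}")
```

With these scores the CRM automation ranks first, matching the worked example: high impact and confidence with low effort beats a big, unproven project every time.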
Across the three services
AI & Automation: Which process wastes the most time? Which is simplest to automate or enhance with AI? Start with the one that proves ROI fastest.
Web & CRO: Which funnel step has the highest drop-off? Fix the biggest leak first.
Analytics: Which missing data point causes the worst decisions? Which dashboard would save the most meeting time?
Key questions
- If we could only fix one thing, what would move the needle most?
- What do we have strong enough evidence to act on right now?
- What needs more data before we commit?
- What's the fastest path to a visible win?
Outline — No guessing. Write testable hypotheses.
Every change starts with a written hypothesis. If you can't articulate why you're making a change and what you expect to happen, you're not optimising — you're gambling.
The hypothesis format
Because we observed [specific evidence from Profile],
we believe [specific change]
will cause [specific outcome],
measured by [specific metric],
within [specific timeframe].
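For teams that keep hypotheses in a shared log, the format above maps cleanly onto a small structured record. A minimal sketch (the class and field names are ours, not part of the framework):

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    evidence: str   # specific evidence from Profile
    change: str     # specific change
    outcome: str    # specific expected outcome
    metric: str     # specific metric
    timeframe: str  # specific timeframe

    def statement(self) -> str:
        # Render the five fields into the standard hypothesis sentence.
        return (f"Because we observed {self.evidence}, "
                f"we believe {self.change} "
                f"will cause {self.outcome}, "
                f"measured by {self.metric}, "
                f"within {self.timeframe}.")

h = Hypothesis(
    evidence="68% of mobile users abandon the pricing page without scrolling",
    change="moving the pricing table above the fold",
    outcome="a 25% increase in pricing page engagement",
    metric="scroll depth and CTA click rate",
    timeframe="4 weeks",
)
print(h.statement())
```

Forcing every field to be filled in is the point: an empty `evidence` or `metric` field makes the gamble visible before any build work starts.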
Example hypotheses
Because the sales team spends 4 hours/week manually entering CRM data from email enquiries, we believe an automated email-to-CRM workflow will reduce data entry time by 80%, measured by weekly hours logged, within 2 weeks of deployment.
Because 68% of mobile users abandon the pricing page without scrolling past the hero, we believe moving the pricing table above the fold will increase pricing page engagement by 25%, measured by scroll depth and CTA click rate, within 4 weeks.
Because the marketing team can't attribute leads to specific campaigns, we believe implementing UTM-standardised tracking and a weekly source dashboard will reduce unattributed leads from 40% to under 10%, within 3 weeks of setup.
Because the support team answers 120+ repetitive enquiries per week, we believe an AI-assisted reply system will handle 60% of tier-1 tickets without human input, measured by resolution rate, within 6 weeks.
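The UTM-standardised tracking in the third hypothesis usually comes down to one enforced link builder. A sketch using only the standard library; the allowed mediums and naming conventions here are illustrative assumptions, not a fixed standard:

```python
from urllib.parse import urlencode, urlparse

# Agreed utm_medium values (assumed convention) so campaign links
# can never be tagged with an unattributable medium.
ALLOWED_MEDIUMS = {"email", "social", "cpc", "referral"}

def utm_url(base: str, source: str, medium: str, campaign: str) -> str:
    """Build a campaign URL with standardised UTM parameters."""
    if medium not in ALLOWED_MEDIUMS:
        raise ValueError(f"unknown utm_medium: {medium!r}")
    params = {
        "utm_source": source.lower(),
        "utm_medium": medium,
        "utm_campaign": campaign.lower().replace(" ", "-"),
    }
    # Append with '&' if the base URL already has a query string.
    sep = "&" if urlparse(base).query else "?"
    return base + sep + urlencode(params)

print(utm_url("https://example.com/pricing", "Mailchimp", "email", "Spring Launch"))
# https://example.com/pricing?utm_source=mailchimp&utm_medium=email&utm_campaign=spring-launch
```

Every link the marketing team sends goes through this one function, so "unattributed" stops being a category the dashboard has to show.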
Key questions
- What specific evidence supports this change?
- What exactly will we change, and what will we leave alone?
- How will we know if it worked?
- What would make us stop or reverse this change?
Validate — Implement, measure, and find out if you were right.
This is execution with measurement built in from the start. Not "launch and hope." It's "launch, watch the numbers, and be ready to adapt."
What this looks like in practice
- Build the minimum version that tests the hypothesis. Don't over-engineer — the goal is learning, not perfection.
- Instrument before you launch. If the tracking isn't in place before the change goes live, you can't measure it.
- Set review points. Not just an end date — checkpoints along the way.
- Compare against the baseline from Profile. This is why Profile exists.
- Call it early if the data is clear. Don't wait for a calendar date if the result is already decisive.
Validation methods by service
AI & Automation: Before/after time tracking. Error rate comparison. AI accuracy and adoption metrics. Monitor for edge cases in the first 2 weeks.
Web & CRO: A/B testing where traffic allows. Before/after conversion rates. Heatmap and session recording comparison.
Analytics: Data completeness checks. Dashboard usage tracking. Decision audit — did the new data change any real decisions?
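Where traffic allows A/B testing, "is it meaningful or marginal?" has a standard answer: a two-proportion z-test on the baseline versus the variant. A stdlib-only sketch with illustrative numbers:

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test: is variant B's conversion rate
    different from variant A's? Returns (z, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Baseline from Profile (3.0% conversion) vs the changed page (4.2%).
z, p = two_proportion_z(conv_a=120, n_a=4000, conv_b=168, n_b=4000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05: the lift is unlikely to be noise
```

This is also why "call it early if the data is clear" works: once the p-value is decisive, waiting for the calendar date adds nothing but cost.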
Key questions
- Did the change produce the outcome we predicted?
- By how much? Is it meaningful or marginal?
- Were there unintended side effects — positive or negative?
- Is this ready to scale, or does it need refinement first?
Embed — Make it stick. Document what you learned.
This is the phase most agencies skip — and it's where improvements start building on each other. A single improvement is valuable. A system that continuously improves is transformational.
Three layers of embedding
The test log
Every hypothesis — validated or not — gets recorded:
- Hypothesis: the full statement from Outline
- Outcome: Validated / Invalidated / Inconclusive
- Numbers: baseline → result (with dates)
- Confidence: how reliable is this result?
- Learning: what do we know now that we didn't before?
- Next step: what does this suggest we do next?
Key questions
- Is the change documented well enough that someone new could maintain it?
- What did we learn that applies beyond this specific change?
- What new questions or opportunities did this cycle reveal?
- What's the next highest-priority item to Profile?
A typical first engagement.
PROVE scales to the project, but the sequence stays the same. Here's how a typical automation-led engagement unfolds over four weeks.
Profile & Rank: Free automation audit — process mapping, time analysis, bottleneck identification. Delivered as a one-page summary, with a ranked list of opportunities by end of week.
Outline: A written hypothesis covering what we'll automate, the expected time savings, and how we'll measure it. Client signs off before any build work begins.
Validate: Build, deploy, measure. Two-week monitoring window with weekly check-ins against the baseline from Profile.
Embed: Documentation, handover, training. Review results. Identify the next opportunity — which often opens the door to analytics, web, or AI.
Same discipline. Different inputs.
PROVE works across every service we offer. The framework stays the same — only the data, tools, and tactics change.
AI & Automation
Profile your processes for automation and AI fit. Rank by time saved and risk. Outline the hypothesis. Validate with before/after metrics. Embed with SOPs, runbooks, and monitoring.
Learn more →
Web & App Build
Profile your funnel. Rank the leaks. Outline conversion hypotheses. Validate with A/B tests. Embed the winning variants permanently.
Learn more →
Analytics
Profile your data gaps. Rank by decision impact. Outline the dashboard spec. Validate with usage tracking. Embed with training and cadence.
Learn more →
See the thinking in action.
How UK SMBs Lose 21 Hours a Week
The average UK small business loses more than a full day every week to manual tasks. Here's where the time goes.
Read article →
Your Website Isn't Broken — It Was Never Built to Convert
Your site gets traffic but not sales. The problem isn't your product — it's that the site was never designed to convert.
Read article →
Ready to start with a free Profile session?
Book a free 30-minute call. We'll Profile your biggest pain point, show you where the time goes, and give you an honest recommendation — no strings attached.
Book Your Free Profile Session →