PROVE Framework

Five phases. One system.
Every engagement.

Most businesses know something isn't working. They just can't tell you exactly what — or what to fix first. PROVE is the system we use across every engagement to find the real problems, fix the right ones, and make the improvements stick.



Phase 1 of 5

Profile — Understand before you change anything.

Before we touch a single process, page, or data source, we map what's actually happening — not what should be happening. Profiling closes the gap between assumption and reality.

What this looks like in practice

  • Quantitative: Pull the data. Where are the drop-offs? What's the actual conversion rate, task completion time, error frequency, or cost per outcome?
  • Qualitative: Talk to people. What do users, staff, or customers actually experience? Where do they get stuck?
  • Journey mapping: Trace the full path from first touchpoint to final outcome. Most problems hide in the transitions between steps.
  • Environment scan: What tools are in play? What's connected and what isn't? Where is data being entered twice?

Key questions

  1. What does the client think the problem is?
  2. What does the data say the problem actually is?
  3. Where is the biggest gap between those two answers?
  4. What's the baseline we're measuring improvement against?

Phase 2 of 5

Rank — Fix the things that matter most, first.

Profiling always surfaces more problems than you can solve at once. Rank is where you decide what to tackle — and in what order — based on evidence, not gut feel.

The ranking criteria

  • Impact (weight: High): How much will this move the number that matters?
  • Confidence (weight: Medium): How sure are we this will work? Based on data, not opinion.
  • Effort (weight: Medium): How much time, money, or complexity is involved?
  • Learning Value (weight: Low–Medium): Even if this doesn't win, will it teach us something we can use?
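One way to keep the ranking honest is to turn the four criteria into a repeatable score. The sketch below is an assumption layered on top of the framework: the numeric weights, the 1–5 scales, and the example scores are all illustrative, not part of PROVE itself.

```python
# Illustrative weighted score for the Rank phase. The weights mirror the
# criteria above (Impact weighted highest, Learning Value lowest); the exact
# numbers and 1-5 scales are assumptions for the sake of the example.
WEIGHTS = {"impact": 3, "confidence": 2, "effort": 2, "learning_value": 1}

def rank_score(impact, confidence, effort, learning_value):
    """Score an opportunity on 1-5 scales; effort counts against the total."""
    return (WEIGHTS["impact"] * impact
            + WEIGHTS["confidence"] * confidence
            - WEIGHTS["effort"] * effort
            + WEIGHTS["learning_value"] * learning_value)

# The three opportunities from the worked example, scored on those scales.
opportunities = {
    "CRM data-entry automation": rank_score(5, 5, 2, 3),
    "Full website redesign": rank_score(3, 2, 4, 3),
    "Dashboard colour refresh": rank_score(1, 3, 2, 1),
}
ranked = sorted(opportunities, key=opportunities.get, reverse=True)
```

With these example inputs, the ordering matches the worked example: the automation scores highest, the colour refresh lowest.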

Worked example

  • Do Now: A manual CRM data entry process taking 4 hours/week (high impact) with proven n8n templates available (high confidence) and a 1-week build (low effort).
  • Do Next: A full website redesign with medium expected impact, an unproven design direction, and a 3-month timeline.
  • Revisit Later: A dashboard colour scheme refresh with low measurable impact.

Key questions

  1. If we could only fix one thing, what would move the needle most?
  2. What do we have strong enough evidence to act on right now?
  3. What needs more data before we commit?
  4. What's the fastest path to a visible win?

Phase 3 of 5

Outline — No guessing. Write testable hypotheses.

Every change starts with a written hypothesis. If you can't articulate why you're making a change and what you expect to happen, you're not optimising — you're gambling.

The hypothesis format

Because we observed [specific evidence from Profile],

we believe [specific change]

will cause [specific outcome],

measured by [specific metric],

within [specific timeframe].
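If it helps to keep hypotheses uniform across an engagement, the format can be captured as a small record. The class and field names below are illustrative, not a prescribed tool.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One testable change, written in the Outline format above."""
    evidence: str    # "Because we observed ..."
    change: str      # "we believe ..."
    outcome: str     # "will cause ..."
    metric: str      # "measured by ..."
    timeframe: str   # "within ..."

    def statement(self):
        """Render the hypothesis as the full written statement."""
        return (f"Because we observed {self.evidence}, "
                f"we believe {self.change} will cause {self.outcome}, "
                f"measured by {self.metric}, within {self.timeframe}.")

# The automation example from this section, expressed as a record.
h = Hypothesis(
    evidence="the sales team spends 4 hours/week re-keying email enquiries",
    change="an automated email-to-CRM workflow",
    outcome="an 80% reduction in data entry time",
    metric="weekly hours logged",
    timeframe="2 weeks of deployment",
)
```

A hypothesis that can't fill all five fields isn't ready for Validate.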

Example hypotheses

Automation

Because the sales team spends 4 hours/week manually entering CRM data from email enquiries, we believe an automated email-to-CRM workflow will reduce data entry time by 80%, measured by weekly hours logged, within 2 weeks of deployment.

Web & CRO

Because 68% of mobile users abandon the pricing page without scrolling past the hero, we believe moving the pricing table above the fold will increase pricing page engagement by 25%, measured by scroll depth and CTA click rate, within 4 weeks.

Analytics

Because the marketing team can't attribute leads to specific campaigns, we believe implementing UTM-standardised tracking and a weekly source dashboard will reduce unattributed leads from 40% to under 10%, measured by the share of leads with a known source, within 3 weeks of setup.

AI Adoption

Because the support team answers 120+ repetitive enquiries per week, we believe an AI-assisted reply system will handle 60% of tier-1 tickets without human input, measured by resolution rate, within 6 weeks.
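The Analytics example hinges on every campaign link carrying the same UTM parameters in the same form. A minimal sketch of a standardised link builder; the parameter names are the standard UTM fields, while the normalisation rules and URL are illustrative assumptions:

```python
from urllib.parse import urlencode

def utm_link(base_url, source, medium, campaign):
    """Append the three core UTM parameters in a fixed, lowercase form
    so every campaign link is attributable the same way."""
    params = {
        "utm_source": source.strip().lower(),
        "utm_medium": medium.strip().lower(),
        "utm_campaign": campaign.strip().lower().replace(" ", "-"),
    }
    return f"{base_url}?{urlencode(params)}"

link = utm_link("https://example.com/pricing", "Newsletter", "email", "Spring Launch")
# → https://example.com/pricing?utm_source=newsletter&utm_medium=email&utm_campaign=spring-launch
```

The point is the normalisation: "Newsletter" and "newsletter " must land in the same dashboard row, or leads stay unattributed.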

Key questions

  1. What specific evidence supports this change?
  2. What exactly will we change, and what will we leave alone?
  3. How will we know if it worked?
  4. What would make us stop or reverse this change?

Phase 4 of 5

Validate — Implement, measure, and find out if you were right.

This is execution with measurement built in from the start. Not "launch and hope." It's "launch, watch the numbers, and be ready to adapt."

What this looks like in practice

  • Build the minimum version that tests the hypothesis. Don't over-engineer — the goal is learning, not perfection.
  • Instrument before you launch. If the tracking isn't in place before the change goes live, you can't measure it.
  • Set review points. Not just an end date — checkpoints along the way.
  • Compare against the baseline from Profile. This is why Profile exists.
  • Call it early if the data is clear. Don't wait for a calendar date if the result is already decisive.
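Comparing against the Profile baseline is simple arithmetic. A minimal sketch using the automation example's weekly-hours metric; the numbers and the 80% target are taken from the example hypothesis, the rest is illustrative:

```python
def relative_change(baseline, result):
    """Percent change from the Profile baseline
    (positive means the metric went up)."""
    return (result - baseline) / baseline * 100

# Automation example: weekly data-entry hours, where lower is better.
baseline_hours, week2_hours = 4.0, 0.7
saving = -relative_change(baseline_hours, week2_hours)  # 82.5% reduction
hit_target = saving >= 80  # the hypothesis predicted an 80% reduction
```

This is why instrumenting before launch matters: without a trustworthy baseline number, `saving` is unmeasurable.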

Key questions

  1. Did the change produce the outcome we predicted?
  2. By how much? Is it meaningful or marginal?
  3. Were there unintended side effects — positive or negative?
  4. Is this ready to scale, or does it need refinement first?

Phase 5 of 5

Embed — Make it stick. Document what you learned.

This is the phase most agencies skip — and it's where improvements start building on each other. A single improvement is valuable. A system that continuously improves is transformational.

Three layers of embedding

  1. Operationalise the win: If it worked, make it permanent. Documentation, handover, and training — not just "it's live, good luck." SOPs for automations, updated style guides for web changes, data dictionaries for analytics, AI tool runbooks with escalation paths.
  2. Document the learning: Whether validated or not, record what happened. Every engagement builds a knowledge base that makes the next one sharper.
  3. Feed back into Profile: Every Embed phase generates new inputs for the next Profile phase. PROVE isn't linear — it's a cycle. The best results come from the second, third, and fourth rotations.

The test log

Every hypothesis — validated or not — gets recorded.

  • Hypothesis: The full statement from Outline
  • Result: Validated / Invalidated / Inconclusive
  • Metric Change: Baseline → Result (with dates)
  • Confidence: How reliable is this result?
  • Key Insight: What do we know now that we didn't before?
  • Next Action: What does this suggest we do next?
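The log needs nothing heavier than one row per hypothesis. A sketch using the six fields above; storing it as CSV, and the contents of the example entry, are illustrative assumptions:

```python
import csv
import io

FIELDS = ["hypothesis", "result", "metric_change", "confidence",
          "key_insight", "next_action"]

def log_test(writer, **row):
    """Append one PROVE cycle to the test log; every field is required,
    so inconclusive or invalidated results get recorded too."""
    missing = [f for f in FIELDS if f not in row]
    if missing:
        raise ValueError(f"incomplete log entry, missing: {missing}")
    writer.writerow(row)

buf = io.StringIO()  # stands in for the real log file
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
log_test(writer,
         hypothesis="Email-to-CRM workflow cuts data entry time by 80%",
         result="Validated",
         metric_change="4.0 h/week -> 0.7 h/week",
         confidence="High",
         key_insight="Enquiry emails are consistent enough to parse reliably",
         next_action="Extend the same workflow to phone enquiry notes")
```

Rejecting incomplete rows is the design choice that matters: a log with gaps quietly becomes a log of wins only.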

Key questions

  1. Is the change documented well enough that someone new could maintain it?
  2. What did we learn that applies beyond this specific change?
  3. What new questions or opportunities did this cycle reveal?
  4. What's the next highest-priority item to Profile?

In Practice

A typical first engagement.

PROVE scales to the project, but the sequence stays the same. Here's how a typical automation-led engagement unfolds over four weeks.

Week 1: Profile + Rank

Free automation audit — process mapping, time analysis, bottleneck identification. Delivered as a one-page summary. Ranked list of opportunities by end of week.

Week 2 Outline

Written hypothesis: what we'll automate, expected time savings, how we'll measure. Client signs off before any build work begins.

Weeks 2–3: Validate

Build, deploy, measure. Two-week monitoring window with weekly check-ins against the baseline from Profile.

Week 4: Embed

Documentation, handover, training. Review results. Identify next opportunity — which often opens the door to analytics, web, or AI.



Ready to start with a free Profile session?

Book a free 30-minute call. We'll Profile your biggest pain point, show you where the time goes, and give you an honest recommendation — no strings attached.

Book Your Free Profile Session →