ILLUSTRATIVE EXAMPLE — Northwind Advisory is a fictional organisation. Real Deliverables are personalised to your organisation.
Based on Northwind Advisory's AI Disruption Score of 72/100 (Significant), this roadmap prioritises 10 initiatives across three horizons. 2 foundation enablers must be established first to unlock the remaining programme.
What this is
Northwind Advisory's phased programme — typically a 12–18 month view across three horizons, sequencing the Strategy Canvas initiatives with dependencies mapped out.
How we built it
From your survey or document upload, structured against the canvas initiatives. The sequencing reflects what we know about Northwind Advisory from your input — the more you provided, the more grounded the timing and ownership recommendations.
Limits to keep in mind
Timelines are indicative. Real sequencing depends on internal commitments, resource availability, and risk appetite — things we can only partially see. Treat horizons as rough containers, not deadlines.
How to use it
This is meant to be the spine of an internal planning workshop, not a finished document. Print it, walk your leadership team through it, and adjust based on what only you know.
Strategic Context
Key Findings Summary
This roadmap addresses 8 identified threats and 8 opportunities through a portfolio of 10 prioritised initiatives spanning three horizons.
Horizon 1 (0-12m): 5 initiatives
Horizon 2 (12-24m): 4 initiatives
Horizon 3 (24-36m): 1 initiative
The portfolio balances quick wins that build momentum with longer-term strategic investments. Enabler initiatives — those that unlock multiple downstream capabilities — are flagged and should be prioritised within each horizon.
Initiative Portfolio Overview
Initiative portfolio — horizon, category, value, feasibility, effort, and type for each prioritised initiative
Initiative | H | Category | Val | Feas | Effort | Type
I1* AI Strategy Ownership & Governance Reset | H1 | Enabler | 8 | 9 | Quick Win | Defensive
I2 Invoicing & Quote-to-Cash Agent | H1 | Agent: Reduce | 8 | 8 | Focused | Defensive
I3 Brief Interpretation & Intake Agent | H1 | Agent: Amplify | 9 | 7 | Focused | Offensive
I4* Workforce AI Readiness Sprint | H1 | Enabler | 7 | 9 | Quick Win | Defensive
I5 Quality Control Co-Pilot for Deliverable Review | H2 | Agent: Amplify | 8 | 6 | Focused | Defensive
I6 Productised AI-Augmented Strategy Service Line | H2 | Grow Revenue with AI | 8 | 6 | Significant | Offensive
I7 Replace Salesforce with Purpose-Built Advisory Operations Platform | H2 | Agent: Amplify | 6 | 5 | Significant | Defensive
I8 Methodology-as-Agent: Codify the Advisory Process | H2 | Agent: Expand | 9 | 5 | Significant | Offensive
I9 AI-Native Competitive Pricing & Packaging Reset | H1 | – | – | – | – | Defensive
I10 Comprehensive Workforce AI Transformation | H3 | – | – | – | – | Offensive
* Enabler initiative. Val = value (0-10), Feas = feasibility (0-10).
Horizon 1 - Foundation (0-12 Months)
Foundation enablers: these capabilities must be in place before other initiatives can succeed.
I1. AI Strategy Ownership & Governance Reset
H1 · Defensive · Enabler
Cross-functional AI Council formation, decision rights framework, AI charter sign-off by ExCo, and a shared portfolio backlog in existing tooling.
Rationale
The workshop confirmed the existential threat: every strategic technology decision flows through a stretched CTO function, and competitors are compounding advantage while Northwind Advisory waits. According to the OpenAI Australia Opportunity Report 2025, Professional Services has hit 79% AI adoption — the cost of centralised AI decision-making is now non-linear.
What It Takes
Cross-functional AI Council formation, decision rights framework, AI charter sign-off by ExCo, and a shared portfolio backlog in existing tooling.
Addresses: T3 AI-native firms undercutting pricing by 40-60%; T8 25-30% of advisory work hours becoming automatable; O2 AI-powered workflow automation freeing 15-20 hours weekly
Quantified benefit requires further analysis — Enabler for I2, I3, I5, I7, I8 — removes the central bottleneck on AI decisions and creates the decision rights needed for any agent deployment to land.
I4. Workforce AI Readiness Sprint
H1 · Defensive · Enabler
90-day enterprise programme to bring all 2,000+ FTE to baseline AI fluency on ChatGPT Enterprise, Microsoft Copilot, and the firm's emerging agent stack. Addresses the workshop concern of 'losing talent to automation fears' by reframing AI as capacity-creation, not replacement. Targets 1,500-2,000 hrs/week of recovered capacity firm-wide per the workshop's own estimate.
Rationale
Workshop flagged 'teams are already stretched thin' and 'cannot dedicate time for upskilling'. A bounded 90-day sprint cohort with protected time is more realistic than open-ended training and directly counters the talent-flight risk.
What It Takes
2 hours per person per week for 12 weeks, structured prompting playbooks, peer-led demos, and a 'show me what you automated' Friday cadence. Existing ChatGPT Enterprise/Copilot licences sufficient.
Addresses: T8 25-30% of advisory work hours becoming automatable; T7 Government CRC funding accelerating competitor AI adoption; O5 AI training services creating new A$150K revenue stream
Quantified benefit requires further analysis — Enabler — without baseline fluency, agent initiatives I2, I3, I5, I7, I8 will under-adopt. Benefit is realised through those initiatives, not standalone.
I2. Invoicing & Quote-to-Cash Agent
H1 · Defensive
Deploy an agent that ingests project data from Mural and Microsoft 365/Word, drafts proposals and finance system invoices, and eliminates the cross-system data entry flagged as costing 1,500-2,000 hours weekly across the firm. At a blended A$200/hr loaded cost, this recovers approximately A$15M-A$20M of capacity annually against a A$1M-A$3M build — payback inside 4-5 months.
Rationale
Workshop identified cross-system data entry across Salesforce → finance systems → engagement reporting as the single largest internal friction (1,500-2,000 hrs/week firm-wide). This is the highest-value, lowest-complexity agent target — ship it first to fund the rest of the portfolio.
What It Takes
Finance system API access, structured engagement templates, an orchestration layer (enterprise iPaaS with LLM nodes), and a 12-16 week build. ChatGPT Enterprise/Copilot already in use provide the language layer.
Addresses: T4 Client expectations shifting to AI-speed delivery; T5 Talent retention crisis from automation fears; O3 Reposition as AI-augmented advisory strategists commanding premium
Estimated: A$10M-A$20M annual cost saving from eliminating cross-system data entry across engagement setup, proposal drafting, and finance invoicing (Low confidence — workshop-confirmed 30-50 FTE saving at blended firm cost of A$180-280K/yr fully loaded, adjusted for 50-70% adoption in Year 1).
Value: 8/10 · Feasibility: 8/10 · Focused · Days 31-60 · Owner: Operations Lead (or equivalent) · Jordan
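The arithmetic behind I2's headline can be checked in a few lines. A back-of-envelope sketch, not the assessor's model: the hours and hourly rate are the workshop figures, while 52 working weeks and the 60% adoption midpoint are assumptions introduced here.

```python
# Back-of-envelope check of I2's stated capacity recovery (illustrative only).
# 1,500-2,000 hrs/week and A$200/hr are the workshop figures; 52 weeks/year
# and the 60% adoption midpoint are assumptions for this sketch.

HOURS_PER_WEEK = (1_500, 2_000)   # cross-system data entry, firm-wide
BLENDED_RATE = 200                # A$/hr fully loaded (stated)
WEEKS_PER_YEAR = 52               # assumption

low, high = (h * BLENDED_RATE * WEEKS_PER_YEAR for h in HOURS_PER_WEEK)
print(f"Annual capacity recovered: A${low / 1e6:.1f}M-A${high / 1e6:.1f}M")
# roughly A$15.6M-A$20.8M, consistent with the stated A$15M-A$20M band

# Payback on the stated A$1M-A$3M build, at the 50-70% adoption midpoint:
build_high = 3e6
monthly_benefit = (low + high) / 2 * 0.6 / 12
print(f"Worst-case payback: {build_high / monthly_benefit:.1f} months")
```

At full adoption the payback is shorter still; the document's 4-5 month figure presumably allows for ramp-up.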
I3. Brief Interpretation & Intake Agent
H1 · Offensive
Deploy an agent that handles inbound client requests — runs a structured brief-discovery dialogue, extracts requirements, flags ambiguity, and produces a consultant-ready brief. Directly addresses the workshop pain points 'brief interpretation' and 'getting detailed briefs from the customer'. Estimated 20-30% reduction in deliverable rework on a A$500M-A$1B revenue base translates to A$30M-A$90M in recovered margin.
Rationale
Engagement scoping quality is the upstream cause of 'rework' and 'delays in delivery' issues raised in the workshop. Fixing the front door has compounding effects through the firm.
What It Takes
Codify the firm's proprietary briefing methodology (workshop-confirmed asset) into agent prompts and structured forms. Sits in front of email/Salesforce intake with handoff to humans for nuance.
Addresses: T4 Client expectations shifting to AI-speed delivery; T6 Enterprise platforms becoming overpriced for AI-achievable outcomes; O3 Reposition as AI-augmented advisory strategists commanding premium; O4 Build proprietary AI delivery framework leveraging unique methodology
Estimated: A$15M-A$30M capacity unlock equivalent to 50-80 FTE of senior account/strategy time currently spent on brief discovery, redirected to higher-value advisory work (Low confidence — 50-80 FTE at senior loaded rate (A$250-400K) with 50-70% Year 1 adoption).
Value: 9/10 · Feasibility: 7/10 · Focused · Days 31-60 · Owner: Engagement Director (or equivalent) · Jordan
I9. AI-Native Competitive Pricing & Packaging Reset
H1 · Defensive
Defensive repricing exercise: audit every service line against AI-native advisory pricing, redesign packages to separate methodology-anchored premium tiers from AI-leveraged speed tiers. Workshop flagged this as a live threat ('AI-native competitors building pricing advantages while organisation waits'). Even a 5-10% margin defence on A$500M-A$1B base = A$25M-A$100M annual.
Rationale
The pricing gap is forming now and is hardest to close once clients have anchored to lower numbers from competitors. This is a 60-90 day exercise, not a multi-year programme — and it directly counters the workshop's stated existential concern about AI-native undercutting.
What It Takes
Competitive pricing scan (3-5 AI-native peers), package redesign workshop, sales collateral refresh, and a pilot with 5-10 prospects.
Addresses: T3 AI-native firms undercutting pricing by 40-60%; T6 Enterprise platforms becoming overpriced for AI-achievable outcomes; O6 Replace Salesforce with purpose-built AI coordination platform
Precedents
A Big-4 advisory firm (Australia) reconfigured a major operational diagnostic engagement using AI-equipped consultants: agents handled data extraction, framework population, and first-draft synthesis while the senior team focused on judgement and client direction. Engagement parameters reported publicly indicate roughly an order-of-magnitude reduction in elapsed time and a substantial reduction in consulting headcount, with client-reported quality assessed as equivalent to the prior delivery model (2024-2025).
A top-tier Australian law firm deployed a legal research and document drafting agent firm-wide, integrated with the matter management system and supervised by senior practitioners rather than juniors — addressing the same junior-pathway erosion question advisory firms face. Public commentary describes meaningful reductions in time-to-first-draft on standardised matter types, with associate time redirected to client strategy and complex negotiation (2024-2026).
A Big-4 advisory practice (Australia) built an internal "quote-to-cash" agent stack covering proposal drafting, engagement setup, milestone invoicing, and reconciliation across CRM, finance, and engagement systems — the same friction Northwind Advisory described in the workshop. Public commentary cites reclaimed senior consultant capacity equivalent to 15-20% of billable hours previously lost to administrative work, redeployed into pursuit and delivery (2024-2026).
Based on publicly available reporting. Results may reflect different organisational contexts.
Horizon 2 - Acceleration (12-24 Months)
I5. Quality Control Co-Pilot for Deliverable Review
H2 · Defensive
Workshop confirmed the gap: 'Quality of AI-generated output is not at a high enough quality for our standards' and 'quality control is very manual'. Deploy a multimodal review agent that pre-checks deliverables against firm standards, engagement requirements, and the firm's quality rubric before human QC. Protects the firm's premium positioning against AI-native undercutters.
Rationale
The premium quality position is what defends pricing against AI-native firms. Automating the mechanical part of QC (spec compliance, framework consistency, brief alignment) lets senior consultants spend their judgment on substance — the irreplaceable part.
What It Takes
Codify QC rubric into structured checks, multimodal LLM for deliverable review, integration with Microsoft 365 and Mural.
Addresses: T4 Client expectations shifting to AI-speed delivery; T6 Enterprise platforms becoming overpriced for AI-achievable outcomes; O4 Build proprietary AI delivery framework leveraging unique methodology
Estimated: A$14M-A$28M capacity amplification on senior deliverable review, redirected to client-facing advisory direction and reducing the quality gap on AI-generated output (Low confidence — 50-80 FTE at senior consultant loaded cost (A$220-380K) with 40-60% Year 1 adoption given novelty of QC tooling).
Value: 8/10 · Feasibility: 6/10 · Focused · 12-18 months · Owner: Engagement Director (or equivalent) · Jordan
I6. Productised AI-Augmented Strategy Service Line
H2 · Offensive
Launch a fixed-fee, AI-accelerated advisory engagement leveraging the firm's proprietary methodology (workshop-confirmed asset) plus generative tools to deliver in 6-8 weeks what currently takes 4-6 months. Expands addressable market downmarket without cannibalising premium work. Targeting 30-50 engagements/year at A$500K-A$1.5M each = A$15M-A$75M new revenue against existing A$500M-A$1B base.
Rationale
Workshop named the threat directly: AI-native firms undercutting on price. The defensive move is not to match their price — it's to productise a tier that uses the same AI leverage but wraps it in the firm's methodology moat. This is the 'AI-Augmented Incumbent' play.
What It Takes
Service design, pricing model, market launch collateral, intake automation (built on I3), and 2-3 pilot clients. Methodology IP must be codified into reusable agent prompts.
Addresses: T3 AI-native firms undercutting pricing by 40-60%; T6 Enterprise platforms becoming overpriced for AI-achievable outcomes; O6 Replace Salesforce with purpose-built AI coordination platform; O4 Build proprietary AI delivery framework leveraging unique methodology
Estimated: A$15M-A$50M incremental revenue from a new fixed-fee AI-accelerated advisory engagement (Low confidence — estimated 15-30 engagements sold in Year 1 at A$500K-A$1.5M fixed fee, against revenue band of A$500M-A$1B).
I7. Replace Salesforce with Purpose-Built Advisory Operations Platform
H2 · Defensive
Workshop named Salesforce as a friction system causing 'cross-platform integration breakdowns — CRM to finance and engagement systems causing missed deadlines' and 'fragmented data'. Replace with a purpose-built advisory operations platform integrating intake, engagement tracking, and finance systems. Build economics: ~A$8M-A$12M vs current Salesforce stack + integration tax. Three-year net saving estimated at A$10M-A$18M with materially better fit.
Rationale
Salesforce is the workshop-named villain in the integration story. Per the Three-Layer Model, it's an opinionated middle-layer platform forcing the firm into its template. AI-accelerated build economics now favour ownership for a 2,000+ FTE firm with specific workflows.
What It Takes
Requirements documented from agent initiatives I2 and I3, an 8-12 person build team using AI coding tools, a 9-12 month build, and a parallel run before cutover.
Addresses: T4 Client expectations shifting to AI-speed delivery; O3 Reposition as AI-augmented advisory strategists commanding premium
Estimated: A$8M-A$18M annual saving from removing Salesforce-related friction plus avoided licence cost (Low confidence — estimated 20-40 FTE recovered across PMs and account leads from integration friction, plus licence reallocation).
Value: 6/10 · Feasibility: 5/10 · Significant · 15-24 months · Owner: Operations Lead (or equivalent) · Jordan
I8. Methodology-as-Agent: Codify the Advisory Process
H2 · Offensive
Convert the firm's proprietary methodology and advisory process (workshop-named as the strategic moat) into a structured agent stack — a sequence of prompts, templates, and checks that any team member or client-facing tool can invoke. This is the Agent: Expand play: it lets Northwind Advisory deliver methodology-grade work at 2-3x current capacity without proportional hiring, enabling pursuit of work currently declined for capacity reasons.
Rationale
The workshop identified methodology and advisory process as the proprietary data moat. Per the Three Types of Business framework, AI-native entrants have process speed but no methodology depth — codifying the firm's IP into agents is the durable defensive-and-offensive position.
What It Takes
Senior partner time to externalise tacit knowledge (the hardest part), prompt engineering, version control on methodology assets, and a private model layer to keep IP from leaking into public LLMs.
Addresses: T3 AI-native firms undercutting pricing by 40-60%; T6 Enterprise platforms becoming overpriced for AI-achievable outcomes; O6 Replace Salesforce with purpose-built AI coordination platform; O7 AI-enhanced brief interpretation reducing revision cycles
Estimated: A$25M-A$60M capacity expansion equivalent to 80-120 FTE of senior methodology-led work, enabling more engagements without proportionate headcount (Low confidence — 80-120 FTE at blended senior loaded cost (A$250-400K) with 40-60% adoption ramp by end of Year 1, capped near 5% of revenue).
Value: 9/10 · Feasibility: 5/10 · Significant · 12-24 months · Owner: Engagement Director (or equivalent) · Jordan
Precedents
A Big-4 consulting practice (Australia) productised an AI-augmented advisory service line using internal generative tools layered over an existing consulting methodology, sold as a fixed-fee outcome rather than time-and-materials. Public commentary suggests the new service line is generating standalone revenue and re-anchoring client conversations away from hourly billing (2024-2026).
Based on publicly available reporting. Results may reflect different organisational contexts.
Horizon 3 - Transformation (24-36 Months)
I10. Comprehensive Workforce AI Transformation
H3 · Offensive
Full role redesign and career pathway restructuring for the AI-augmented firm: define what a consultant/strategist/account lead role looks like when 30-40% of historical task volume is agent-handled. Addresses the workforce displacement risk honestly — 25-30% of work hours may be automated per ABS analysis — by redesigning roles before attrition forces the issue. Protects the entry-level pipeline that develops future advisory leaders.
Rationale
Per the Workforce Transformation framework, the entry-level pathway compression risk is real: junior consultants learn craft by doing routine work that agents will absorb. If Northwind Advisory doesn't redesign development pathways now, today's productivity gain becomes tomorrow's leadership crisis.
What It Takes
Role definition workshops, revised performance frameworks, mentorship structures that compensate for lost learning-by-doing, and leadership capability assessment against the 5 Irreplaceables (judgment, influence, sense-making, nerve, execution discipline).
Addresses: T8 25-30% of advisory work hours becoming automatable; T7 Government CRC funding accelerating competitor AI adoption; O5 AI training services creating new A$150K revenue stream; O7 AI-enhanced brief interpretation reducing revision cycles
Quantified benefit requires further analysis — Enabler at H3 — locks in the AI-augmented operating model from earlier initiatives. Benefit is retention, role clarity, and capacity scaling rather than direct dollars in this horizon.
Jordan has assessed each initiative for investment required, benefit allocation, and quantified benefit where estimable. This is a benefit case for strategic prioritisation, not a business case for budgeting.
Benefit case summary — investment, confidence, risk, dollar benefit, and uncertainty flag per initiative
Initiative | Investment | C | R | $ | ! | Quantified Benefit
I1. AI Strategy Ownership & Governance Reset | A$200K-A$500K | ●●●● | ●●●● | ●●●● | ●●●● | Further Analysis Req'd (2 gaps)
I2. Invoicing & Quote-to-Cash Agent | A$1M-A$3M | ●●●● | ●●●● | ●●●● | ●●●● | A$10M-A$20M (2 gaps)
I3. Brief Interpretation & Intake Agent | A$1.5M-A$3.5M | ●●●● | ●●●● | ●●●● | ●●●● | A$15M-A$30M (2 gaps)
I4. Workforce AI Readiness Sprint | A$1.2M-A$3M | ●●●● | ●●●● | ●●●● | ●●●● | Further Analysis Req'd (2 gaps)
I5. Quality Control Co-Pilot for Deliverable Review | A$1.5M-A$3M | ●●●● | ●●●● | ●●●● | ●●●● | A$14M-A$28M (2 gaps)
I6. Productised AI-Augmented Strategy Service Line | A$2M-A$5M | ●●●● | ●●●● | ●●●● | ●●●● | A$15M-A$50M (2 gaps)
I7. Replace Salesforce with Purpose-Built Advisory Operations Platform | A$1.5M-A$3M | ●●●● | ●●●● | ●●●● | ●●●● | A$8M-A$18M (2 gaps)
I8. Methodology-as-Agent: Codify the Advisory Process | – | – | – | – | – | A$25M-A$60M
I9. AI-Native Competitive Pricing & Packaging Reset | – | – | – | – | – | –
I10. Comprehensive Workforce AI Transformation | – | – | – | – | – | Further Analysis Req'd
C = Customer | R = Revenue | $ = Cost | ! = Risk · ●●●● Strong | ●●●○ Significant | ●●○○ Moderate | ●○○○ Minor
7 of 10 initiatives have quantified benefits. 3 require further analysis. All figures are benefit case estimates for prioritisation, not business case projections.
Agent Deployment Portfolio
The following agent deployments are recommended across three value frames. Each agent is an autonomous digital worker that completes tasks end-to-end: not a tool that people use, but a worker that operates alongside people.
Agent summary — recommended agent deployments by value frame, function, and FTE impact
Agent | Frame | Function | FTE | Build | Horizon
2. Invoicing & Quote-to-Cash Agent | Reduce | Finance & Project Administration | 0.5 | 3-5 weeks | H1
3. Brief Interpretation & Intake Agent | Amplify | Client Services & Account Management | 0.75 | 4-6 weeks | H1
5. Quality Control Co-Pilot for Deliverable Review | Amplify | Deliverable QA | 0.75 | 8-12 weeks | H2
8. Methodology-as-Agent: Codify the Advisory Process | Expand | Brand & Design Strategy | 1.5 | 4-6 months | H2
All estimates are inferred from industry benchmarks unless validated through the Strategy Workshop.
2. Invoicing & Quote-to-Cash Agent
Reduce
Reads engagement briefs and advisory deliverables, drafts proposals using firm methodology templates, generates finance system invoices on milestone completion, and reconciles line items across Word, advisory tools, and finance system. Human review before send.
Prerequisites: Finance system API token, Standardised proposal templates, Project taxonomy
3. Brief Interpretation & Intake Agent
Amplify
Conducts structured brief intake via web form + conversational follow-up, applies the firm's briefing methodology to probe missing dimensions (audience, tone, success metrics), produces a consultant-ready brief with a confidence score and flagged ambiguities.
Prerequisites: Documented briefing methodology, Intake form on website, Copilot/ChatGPT Enterprise API access
5. Quality Control Co-Pilot for Deliverable Review
Amplify
Reviews advisory deliverables against framework standards, brief specifications, and a QC rubric. Flags inconsistencies, missing assets, and brief deviations. Human QC focuses on craft and judgment.
Prerequisites: Codified QC rubric, Mural and Microsoft 365 API access, Multimodal LLM access
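To make "codify the QC rubric into structured checks" concrete, one possible shape is a list of named, mechanical pre-checks run before human review. This is an illustrative sketch only — every check name, field, and threshold here is hypothetical, not the firm's actual rubric:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class RubricCheck:
    """One mechanical pre-check the co-pilot runs before human QC."""
    name: str
    passes: Callable[[dict], bool]  # deliverable metadata -> pass/fail

# Hypothetical checks; a real rubric would be codified with the QC leads.
CHECKS = [
    RubricCheck("brief_alignment", lambda d: d["brief_id"] is not None),
    RubricCheck("framework_sections_present",
                lambda d: {"context", "findings", "recommendations"} <= set(d["sections"])),
    RubricCheck("citation_count", lambda d: d["citations"] >= 3),
]

def pre_check(deliverable: dict) -> list[str]:
    """Return the names of failed checks; humans then review only substance."""
    return [c.name for c in CHECKS if not c.passes(deliverable)]

sample = {"brief_id": "B-1042", "sections": ["context", "findings"], "citations": 5}
print(pre_check(sample))  # flags the missing recommendations section
```

The design point is the split the rationale describes: mechanical spec compliance is automated and deterministic, while craft and judgement stay with senior consultants.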
8. Methodology-as-Agent: Codify the Advisory Process
Expand
An orchestrated stack of agents that walk a project through the firm's methodology — discovery, positioning, identity exploration, application — with human advisory direction at decision points. Encodes the firm's IP rather than relying on generic LLM output.
Prerequisites: Documented methodology, Private LLM workspace, IP protection guardrails
Dependencies & Sequencing
Initiative | Depends On
Invoicing & Quote-to-Cash Agent | AI Strategy Ownership & Governance Reset
Brief Interpretation & Intake Agent | AI Strategy Ownership & Governance Reset
Workforce AI Readiness Sprint | AI Strategy Ownership & Governance Reset
Quality Control Co-Pilot for Deliverable Review | Brief Interpretation & Intake Agent, Workforce AI Readiness Sprint
Productised AI-Augmented Strategy Service Line | Brief Interpretation & Intake Agent, Quality Control Co-Pilot for Deliverable Review, Methodology-as-Agent: Codify the Advisory Process
Replace Salesforce with Purpose-Built Advisory Operations Platform | Invoicing & Quote-to-Cash Agent, Brief Interpretation & Intake Agent, Quality Control Co-Pilot for Deliverable Review
Methodology-as-Agent: Codify the Advisory Process | Brief Interpretation & Intake Agent, Workforce AI Readiness Sprint
AI-Native Competitive Pricing & Packaging Reset | AI Strategy Ownership & Governance Reset
Comprehensive Workforce AI Transformation | Workforce AI Readiness Sprint
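Because each initiative names its prerequisites, the dependency table forms a directed acyclic graph, and a valid execution order can be verified mechanically. A minimal sketch — initiative IDs (I1-I10) stand in for the full names above, and `graphlib` is in the Python standard library from 3.9:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# The dependency table, keyed by initiative ID (I2 depends on I1, etc.).
deps = {
    "I2": {"I1"},                # Invoicing & Quote-to-Cash Agent
    "I3": {"I1"},                # Brief Interpretation & Intake Agent
    "I4": {"I1"},                # Workforce AI Readiness Sprint
    "I5": {"I3", "I4"},          # Quality Control Co-Pilot
    "I6": {"I3", "I5", "I8"},    # Productised Service Line
    "I7": {"I2", "I3", "I5"},    # Salesforce replacement
    "I8": {"I3", "I4"},          # Methodology-as-Agent
    "I9": {"I1"},                # Pricing & Packaging Reset
    "I10": {"I4"},               # Workforce AI Transformation
}

# static_order() raises CycleError if the table ever acquires a cycle.
order = list(TopologicalSorter(deps).static_order())
assert order[0] == "I1"                       # I1 unlocks everything else
assert order.index("I8") < order.index("I6")  # codify before productising
print(order)
```

Keeping the roadmap's dependencies in a structure like this makes re-sequencing at each quarterly review a mechanical check rather than a judgement call.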
Sequencing Recommendation
The following sequencing reflects the recommended order of execution based on dependencies, enabler status, and competitive urgency.
Land I1 (AI Strategy Ownership & Governance Reset) inside the first 30 days, before any other initiative starts. Every H1 move (I2, I3, I4, I9) names it as an enabler, and shipping any agent ahead of governance recreates the single-CTO bottleneck the firm is trying to dissolve.
Run I3 (Brief Interpretation & Intake Agent) and I4 (Workforce AI Readiness Sprint) in parallel from Days 30-90. I3 fixes the upstream cause of QC rework, while I4 builds the fluency needed for both I3 and I5 (Quality Control Co-Pilot for Deliverable Review) to be supervised rather than rubber-stamped.
Sequence I5 only after I3 and I4 are both live: running QC against AI output the team cannot yet judge produces approval theatre, not quality control.
Ship I2 (Invoicing & Quote-to-Cash Agent) in Days 31-90 alongside I3. It is technically independent but shares the orchestration layer, and the working capital release funds the rest of the H2 portfolio without a separate budget request.
Land I9 (AI-Native Competitive Pricing & Packaging Reset) inside Days 30-180. It is the only H1 move that defends the existing revenue base directly, and the longer it is delayed the more the pricing anchor drifts.
Defer I7 (Replace Salesforce with Purpose-Built Advisory Operations Platform) to the H2 gate. It depends on the integration patterns proven by I2, I3, and I5, and at an A$8M-A$12M build with an 18-36 month payback it is also the most expensive way to be wrong.
Treat I7 as a watch-item and accelerate only if Salesforce integration pain materially worsens during H1.
Sequence I8 (Methodology-as-Agent: Codify the Advisory Process) immediately after I3 and I4 stabilise. It is the keystone for I6 (Productised AI-Augmented Strategy Service Line); launching I6 before the methodology is codified means selling a fixed-fee engagement the firm cannot deliver consistently.
Hold I10 (Comprehensive Workforce AI Transformation) until H3. It depends on I4 having built baseline fluency, and on the H1 and H2 agents being live long enough to show which roles actually changed in practice; running role redesign against assumptions rather than evidence is how transformations produce attrition rather than amplification.
Strategic Trade-offs
Strategy requires choosing what NOT to do. The following trade-offs reflect our recommended prioritisation.
PRIORITISE Methodology-as-Agent: Codify the Advisory Process OVER Replace Salesforce with Purpose-Built Advisory Operations Platform because the methodology is the firm's actual moat against AI-native competitors, while Salesforce is an A$3-5M/year irritation.
The cost: continued Salesforce integration friction for another 18 months.
The payoff: a defensible IP-as-agent stack that lets the firm pursue 2-3x current project load without proportional hiring.
PRIORITISE Brief Interpretation & Intake Agent OVER Quality Control Co-Pilot for Deliverable Review because brief quality is the upstream cause of the QC problem — fix the front door first and a meaningful share of QC issues disappear.
The cost: 6-9 months more manual QC.
The payoff: QC build operates on cleaner inputs and delivers higher leverage when it ships.
PRIORITISE AI-Native Competitive Pricing & Packaging Reset OVER Productised AI-Augmented Strategy Service Line because repricing existing work defends the A$500M-A$1B revenue base in 90-180 days, whereas a new service line takes 12+ months to generate material revenue.
The cost: deferred new-revenue upside.
The payoff: margin protection on the core book while the productised offer is properly designed rather than rushed.
PRIORITISE Workforce AI Readiness Sprint OVER Comprehensive Workforce AI Transformation in year one because the team is, by their own statement, stretched thin: a bounded 90-day sprint is achievable; a full transformation programme right now is not.
The cost: deeper role redesign waits until H3.
The payoff: the team builds AI fluency without breaking current client delivery, creating the capacity that makes the H3 transformation feasible.
First 100 Days
One hundred days is long enough to build real momentum and short enough to maintain urgency. This section maps the Horizon 1 initiatives into three execution phases.
Days 1-30: Foundation
Quick wins and enablers that build organisational readiness and demonstrate early value.
1. AI Strategy Ownership & Governance Reset
Cross-functional AI Council formation, decision rights framework, AI charter sign-off by ExCo, and a shared portfolio backlog in existing tooling.
Quick Win · Enabler
2. Workforce AI Readiness Sprint
90-day enterprise programme to bring all 2,000+ FTE to baseline AI fluency on ChatGPT Enterprise, Microsoft Copilot, and the firm's emerging agent stack. Addresses the workshop concern of 'losing talent to automation fears' by reframing AI as capacity-creation, not replacement. Targets 1,500-2,000 hrs/week of recovered capacity firm-wide per the workshop's own estimate.
Quick Win · Enabler
Days 31-60: Momentum
Focused initiatives that deliver visible results and build confidence for larger-scale transformation.
1. Invoicing & Quote-to-Cash Agent
Deploy an agent that ingests project data from Mural and Microsoft 365/Word, drafts proposals and finance system invoices, and eliminates the cross-system data entry flagged as costing 1,500-2,000 hours weekly across the firm. At a blended A$200/hr loaded cost, this recovers approximately A$15M-A$20M of capacity annually against a A$1-3M build — payback inside 4-5 months.
Focused
2. Brief Interpretation & Intake Agent
Deploy an agent that handles inbound client requests — runs a structured brief-discovery dialogue, extracts requirements, flags ambiguity, and produces a consultant-ready brief. Directly addresses the workshop pain points 'brief interpretation' and 'getting detailed briefs from the customer'. Estimated 20-30% reduction in deliverable rework on a A$500M-A$1B revenue base translates to A$30M-A$90M in recovered margin.
3. AI-Native Competitive Pricing & Packaging Reset
Defensive repricing exercise: audit every service line against AI-native advisory pricing, redesign packages to separate methodology-anchored premium tiers from AI-leveraged speed tiers. Workshop flagged this as a live threat ('AI-native competitors building pricing advantages while organisation waits'). Even a 5-10% margin defence on A$500M-A$1B base = A$25M-A$100M annual.
Focused
Days 61-100: Scale
Review outcomes, consolidate gains, and prepare for Horizon 2 initiatives.
1. Quality Control Co-Pilot for Deliverable Review (Horizon 2 prep)
Workshop confirmed the gap: 'Quality of AI-generated output is not at a high enough quality for our standards' and 'quality control is very manual'. Deploy a multimodal review agent that pre-checks deliverables against firm standards, engagement requirements, and the firm's quality rubric before human QC. Protects the firm's premium positioning against AI-native undercutters.
Focused
2. Productised AI-Augmented Strategy Service Line (Horizon 2 prep)
Launch a fixed-fee, AI-accelerated advisory engagement leveraging the firm's proprietary methodology (workshop-confirmed asset) plus generative tools to deliver in 6-8 weeks what currently takes 4-6 months. Expands addressable market downmarket without cannibalising premium work. Targeting 30-50 engagements/year at A$500K-A$1.5M each = A$15M-A$75M new revenue against existing A$500M-A$1B base.
3. Replace Salesforce with Purpose-Built Advisory Operations Platform (Horizon 2 prep)
Workshop named Salesforce as a friction system causing 'cross-platform integration breakdowns — CRM to finance and engagement systems causing missed deadlines' and 'fragmented data'. Replace with a purpose-built advisory operations platform integrating intake, engagement tracking, and finance systems. Build economics: ~A$8M-A$12M vs current Salesforce stack + integration tax. Three-year net saving estimated at A$10M-A$18M with materially better fit.
Effort: Significant
Recommended Governance
Effective AI transformation requires dedicated governance to maintain momentum, resolve blockers, and adapt the roadmap as conditions change. The following structure is recommended for Northwind Advisory.
AI Steering Committee
Establish a cross-functional AI Steering Committee with authority to allocate resources, resolve inter-departmental conflicts, and adjust priorities. This group should include senior representation from each business area affected by the roadmap initiatives.
Review Cadence
Monthly — Initiative owners report progress, flag blockers, and update risk registers. Quick wins are reviewed for impact and learnings.
Quarterly — Steering Committee reviews the overall portfolio, reassesses priorities based on market developments, and approves the next quarter's investment focus.
Bi-annually — Board-level review of AI transformation progress, strategic alignment check, and horizon planning update.
Success Indicators
Track progress against initiative milestones rather than financial metrics alone. Key indicators include: number of initiatives launched vs planned, adoption rates for new AI capabilities, speed of decision-making in AI-augmented processes, and capability uplift across the maturity dimensions assessed in the AI Disruption Analysis.
Adaptive Planning
This roadmap is a living document. AI technology and competitive landscapes evolve rapidly. Build in formal review points at each quarterly checkpoint to reassess whether initiatives remain relevant, whether new threats or opportunities have emerged, and whether the sequencing needs adjustment.
Financial Assumptions and Methodology
All financial estimates in this document are benefit case estimates for strategic prioritisation, not business case projections. They are designed to support investment decision-making, not to replace detailed financial modelling.
Portfolio confidence
10 low confidence; 3 require further analysis
All estimates in AUD. Three-year horizon unless otherwise stated. No inflation adjustment applied.
I1: AI Strategy Ownership & Governance Reset
Key assumptions
Assumes H1 timing aligns with broader leadership bandwidth
How we calculated this
Directional estimate based on comparable governance stand-ups in A$500M-A$1B advisory firms — facilitation, template pack, and three steering sessions over 90 days. Benefit is structural (decision velocity) rather than directly quantifiable.
Confidence range
Optimistic
A$50K
Realistic
A$200K
Conservative
A$500K
Data gaps
NARROWS
Current time CEO spends on technology decisions not quantified
Would allow direct cost-of-bottleneck calculation for benefit side
NARROWS
Existence of any current decision-rights documentation unknown
Would clarify whether this is a build or a refresh
I2: Invoicing & Quote-to-Cash Agent (A$1M – A$3M)
Key assumptions
Assumes existing finance system has API access and integration capability
Assumes engagement data in Microsoft 365 and Mural is structured enough for extraction (50%+ of engagements)
Assumes 50-70% adoption in Year 1, scaling to 80%+ by Year 2
Assumes the 30-50 FTE figure self-reported in workshop reflects true loaded cost
Assumes invoicing volumes remain at current levels
How we calculated this
Workshop-stated 30-50 FTE saving scaled by fully-loaded consultant cost (A$180K-A$280K) and the adoption ramp. Investment based on comparable enterprise finance + tooling agent integrations at 2,000+ FTE firms.
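As a hedged illustration of the mechanics just described — not the model behind the published range — the three-year benefit can be sketched as FTE saving multiplied by loaded cost and each year's adoption rate. The inputs below come from the stated assumptions; the result brackets, rather than reproduces, the published A$10M-A$20M confidence range, which applies further conservatism we cannot see.

```python
# Illustrative sketch of the I2 benefit arithmetic, using the stated inputs:
# workshop FTE saving x fully-loaded consultant cost x per-year adoption,
# summed over the three-year horizon.

def three_year_benefit(fte_saved, loaded_cost_aud, adoption_by_year):
    """Benefit for each year = FTE saved x loaded cost x that year's adoption."""
    return sum(fte_saved * loaded_cost_aud * rate for rate in adoption_by_year)

# Conservative end: 30 FTE at A$180K, 50% Year-1 adoption scaling to 80%.
low = three_year_benefit(30, 180_000, [0.5, 0.8, 0.8])
# Optimistic end: 50 FTE at A$280K, 70% Year-1 adoption scaling to 80%.
high = three_year_benefit(50, 280_000, [0.7, 0.8, 0.8])

print(f"A${low / 1e6:.1f}M to A${high / 1e6:.1f}M")  # directional bracket only
```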
Confidence range
Optimistic
A$20M
Realistic
A$15M
Conservative
A$10M
Data gaps
NARROWS
Actual loaded cost of the 30-50 FTE pool not confirmed
Would tighten benefit range from ±50% to ±20%
NARROWS
Volume of invoices/proposals per month not provided
Would validate the 0.5 FTE figure independently
I3: Brief Interpretation & Intake Agent
Key assumptions
Assumes senior team will redirect freed time to billable advisory work, not absorb it
Assumes inbound brief volume is sufficient to justify automated intake (>5/week)
Assumes clients will engage with a structured AI-led discovery dialogue
Assumes 50-70% Year 1 adoption across new enquiries
Assumes brief quality lift translates to measurable rework reduction
How we calculated this
Workshop-stated 50-80 FTE amplification scaled by senior loaded cost. Investment benchmarked against comparable intake agent deployments in enterprise advisory firms.
Confidence range
Optimistic
A$30M
Realistic
A$22M
Conservative
A$15M
Data gaps
NARROWS
Current inbound brief volume and conversion rate
Would validate whether the 50-80 FTE amplification is conservative or optimistic
NARROWS
Average rework cost per ambiguous brief
Would add a second benefit stream beyond capacity
I4: Workforce AI Readiness Sprint (A$1.2M – A$3M)
Key assumptions
Assumes full team participation across consultants, strategists and account leads
Assumes existing ChatGPT Enterprise/Copilot licences cover the team or are added at modest cost
Assumes a 30-day window can be protected from client delivery pressure
Assumes the agent stack is sufficiently mature by H1 to train against
How we calculated this
Directional estimate based on comparable AI fluency programmes in 2,000+ FTE professional services firms (per-head training cost of A$500-1,200 plus facilitation). Benefit is captured in downstream agent initiatives.
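The per-head arithmetic above can be sketched directly. The facilitation amounts below are assumed placeholders chosen only to show how a band like the published A$1.2M-A$3M range is reached; they are not confirmed inputs.

```python
# Illustrative sketch of the I4 cost arithmetic: headcount x per-head training
# cost, plus a facilitation line. The facilitation amounts are assumptions.

def sprint_cost(headcount, per_head_aud, facilitation_aud):
    return headcount * per_head_aud + facilitation_aud

# 2,000 FTE at the benchmarked A$500-1,200 per-head band.
low = sprint_cost(2_000, 500, 200_000)     # assumed lean facilitation
high = sprint_cost(2_000, 1_200, 600_000)  # assumed fuller facilitation

print(f"A${low / 1e6:.1f}M to A${high / 1e6:.1f}M")
```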
Confidence range
Optimistic
A$1.2M
Realistic
A$2M
Conservative
A$3M
Data gaps
NARROWS
Current AI tool licence footprint not detailed
Would clarify whether licences are an additional line item
NARROWS
Baseline AI fluency assessment not conducted
Would right-size sprint depth and duration
I5: Quality Control Co-Pilot for Deliverable Review (A$1.5M – A$3M)
Key assumptions
Assumes senior reviewers will trust and use a co-pilot's flags rather than override
Assumes deliverable QC criteria can be codified into agent prompts/checks
Assumes 40-60% Year 1 adoption — lower than transactional agents due to craft sensitivity
Assumes the firm's deliverable QC standards are documented or can be elicited
Assumes integration with Microsoft 365 and Mural review flows is feasible
How we calculated this
Workshop-stated 50-80 FTE amplification scaled by senior consultant loaded cost with conservative adoption ramp reflecting craft-tooling novelty. Investment based on directional estimate for custom agent build pending vendor scan.
Confidence range
Optimistic
A$28M
Realistic
A$20M
Conservative
A$14M
Data gaps
CHANGES
Current rework rate on AI-generated deliverable output
Would quantify the rework saving as a second benefit stream
NARROWS
Whether QC criteria are documented or tacit
Would change build effort materially
I6: Productised AI-Augmented Strategy Service Line (A$2M – A$5M)
Key assumptions
Assumes the firm's proprietary methodology genuinely differentiates vs commoditised AI advisory offers
Assumes 15-30 engagements sold in Year 1 — modest ramp
Assumes existing pursuit list and referrals can absorb the new offer without separate marketing build
Assumes generative output meets quality threshold once I5 is live
Assumes fixed-fee margins hold at 40-55% after AI tool costs
How we calculated this
Directional revenue estimate based on engagement volume × fixed fee, capped at 5% of upper revenue band. Investment scaled from comparable productised service launches at enterprise advisory firms.
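The volume-times-fee calculation with the 5% revenue cap can be sketched as follows, using only the inputs stated in this document (30-50 engagements at A$500K-A$1.5M, capped at 5% of the A$1B upper revenue band).

```python
# Illustrative sketch of the I6 revenue estimate: engagement volume x fixed
# fee, capped at 5% of the upper revenue band (A$1B), as stated above.

def productised_revenue(engagements, fee_aud, cap_aud):
    return min(engagements * fee_aud, cap_aud)

CAP_AUD = int(0.05 * 1_000_000_000)  # 5% of the A$1B upper band = A$50M

low = productised_revenue(30, 500_000, CAP_AUD)     # conservative volume/fee
high = productised_revenue(50, 1_500_000, CAP_AUD)  # raw A$75M, capped

print(f"A${low / 1e6:.0f}M to A${high / 1e6:.0f}M")  # A$15M to A$50M
```

Note how the cap is what turns the raw A$75M upper case into the A$50M optimistic figure in the confidence range.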
Confidence range
Optimistic
A$50M
Realistic
A$30M
Conservative
A$15M
Data gaps
CHANGES
Current advisory service pricing and gross margin
Would validate margin assumption and cannibalisation risk
NARROWS
Pursuit list / pipeline coverage for advisory work
Would tighten Year 1 volume range
I7: Replace Salesforce with Purpose-Built Advisory Operations Platform (A$1.5M – A$3M)
Key assumptions
Assumes a viable purpose-built advisory operations platform exists at comparable or lower licence cost
Assumes data migration from Salesforce is feasible within 6-9 months
Assumes 20-40 FTE friction is real (workshop-confirmed friction, but FTE not quantified)
Assumes team adoption of the replacement reaches 70%+ within 6 months
Assumes no critical client-facing workflow is locked into Salesforce
How we calculated this
Directional estimate — friction-FTE recovered scaled by blended firm cost, plus licence delta. Investment based on comparable platform migrations at 2,000+ FTE firms.
Confidence range
Optimistic
A$18M
Realistic
A$12M
Conservative
A$8M
Data gaps
NARROWS
Current Salesforce licence spend and modules in use
Would clarify migration scope and licence delta
CHANGES
Quantified time lost to integration breakdowns
Would convert friction into a defensible FTE saving
I8: Methodology-as-Agent: Codify the Advisory Process (A$3M – A$8M)
Key assumptions
Assumes the methodology is sufficiently documented or elicitable for codification
Assumes 40-60% Year 1 adoption — slower than transactional agents
Assumes senior partners will trust agent-led methodology steps for routine cases
Assumes capacity unlock is reinvested in growth, not absorbed
Assumes IP protection of the codified methodology is addressed
How we calculated this
Workshop-stated 80-120 FTE expansion scaled by senior loaded cost with conservative adoption. Investment based on directional estimate for multi-stage agent orchestration in an enterprise advisory firm.
Confidence range
Optimistic
A$60M
Realistic
A$40M
Conservative
A$25M
Data gaps
CHANGES
Current state of methodology documentation
Tacit-only methodology would materially increase build cost
NARROWS
Project mix amenable to methodology automation
Would size addressable share of delivery
NARROWS
IP / confidentiality posture for codified methodology
May add legal/architecture cost
I9: Defensive Repricing Exercise
Key assumptions
Assumes AI-native pricing pressure is materialising in the firm's segments (workshop-flagged)
Assumes clients will accept tiered packaging with a methodology-anchored premium
Assumes the firm has competitive pricing data or can acquire it
Assumes account leads can re-pitch existing clients at new tiers within 6 months
Assumes 5-10% of revenue is the realistic defended/captured band
How we calculated this
Revenue percentage applied to known revenue band, with conservative cap. Investment is advisor-days plus internal effort — scaled from comparable repricing exercises in large advisory firms.
Confidence range
Optimistic
A$50M
Realistic
A$30M
Conservative
A$15M
Data gaps
CHANGES
Current pricing benchmarks vs AI-native competitors
Would convert defensive estimate into specific tier-level uplift
NARROWS
Existing client contract renewal cadence
Would clarify how fast new pricing flows through
I10: Comprehensive Workforce AI Transformation (A$1M – A$3M)
Key assumptions
Assumes H1/H2 initiatives have landed sufficiently to redesign roles around
Assumes the firm wants to retain headcount and reshape rather than reduce
Assumes external HR/org-design support is engaged for ~60-100 days
Assumes career pathway changes are accepted by the team
Assumes competitive talent market still rewards AI-fluent advisory roles
How we calculated this
Order-of-magnitude estimate for an H3 role-redesign programme at a 2,000+ FTE firm. Benefits are indirect — captured in retention and the sustained realisation of I2/I3/I5/I8.
Confidence range
Optimistic
A$1M
Realistic
A$2M
Conservative
A$3M
Data gaps
BLOCKS
Outcomes from H1/H2 initiatives — adoption, role impact
Would convert this from indicative scoping to a sized programme
NARROWS
Current attrition rate and cost-per-hire
Would size the retention benefit
NARROWS
Existing role definitions and career framework
Would clarify build-vs-refresh scope
Key Unknowns
The following information gaps affect the precision of estimates in this roadmap. Resolving them would improve the quality of financial projections and prioritisation decisions.
BLOCKS
Outcomes from H1/H2 initiatives — adoption, role impact
Would convert this from indicative scoping to a sized programme. Affects: I10
CHANGES
Current rework rate on AI-generated deliverable output
Would quantify the rework saving as a second benefit stream. Affects: I5
CHANGES
Current advisory service pricing and gross margin
Would validate margin assumption and cannibalisation risk. Affects: I6
CHANGES
Quantified time lost to integration breakdowns
Would convert friction into a defensible FTE saving. Affects: I7
CHANGES
Current state of methodology documentation
Tacit-only methodology would materially increase build cost. Affects: I8
CHANGES
Current pricing benchmarks vs AI-native competitors
Would convert defensive estimate into specific tier-level uplift. Affects: I9
NARROWS
Current time CEO spends on technology decisions not quantified
Would allow direct cost-of-bottleneck calculation for benefit side. Affects: I1
NARROWS
Existence of any current decision-rights documentation unknown
Would clarify whether this is a build or a refresh. Affects: I1
NARROWS
Actual loaded cost of the 30-50 FTE pool not confirmed
Would tighten benefit range from ±50% to ±20%. Affects: I2
NARROWS
Volume of invoices/proposals per month not provided
Would validate the 0.5 FTE figure independently. Affects: I2
NARROWS
Current inbound brief volume and conversion rate
Would validate whether the 50-80 FTE amplification is conservative or optimistic. Affects: I3
NARROWS
Average rework cost per ambiguous brief
Would add a second benefit stream beyond capacity. Affects: I3
NARROWS
Current AI tool licence footprint not detailed
Would clarify whether licences are an additional line item. Affects: I4
NARROWS
Baseline AI fluency assessment not conducted
Would right-size sprint depth and duration. Affects: I4
NARROWS
Whether QC criteria are documented or tacit
Would change build effort materially. Affects: I5
NARROWS
Pursuit list / pipeline coverage for advisory work
Would tighten Year 1 volume range. Affects: I6
NARROWS
Current Salesforce licence spend and modules in use
Would clarify migration scope and licence delta. Affects: I7
NARROWS
Project mix amenable to methodology automation
Would size addressable share of delivery. Affects: I8
NARROWS
IP / confidentiality posture for codified methodology
May add legal/architecture cost. Affects: I8
NARROWS
Existing client contract renewal cadence
Would clarify how fast new pricing flows through. Affects: I9
NARROWS
Current attrition rate and cost-per-hire
Would size the retention benefit. Affects: I10
NARROWS
Existing role definitions and career framework
Would clarify build-vs-refresh scope. Affects: I10
Strategic unknowns
Northwind Advisory's internal AI experimentation (any pilots run inside individual engagement teams, agent prototypes built outside the licensed stack, or methodology-as-agent work-in-progress) is not visible from public sources. The firm's actual AI-related operating costs, current engagement margin profile by service type, and the share of revenue tied to commoditising work versus defensible advisory retainers cannot be confirmed without internal data. Where this assessment uses ranges (for example, A$50M–A$150M revenue at risk), those ranges are anchored on industry-level benchmarks and the firm's public revenue band — not on Northwind Advisory's own financial disclosures. The Strategy Canvas and Roadmap should be read as a directional thesis rather than a costed plan; the Guided Strategy Session is where firm-specific numbers replace the ranges.