How to budget tutoring post-NTP: Cost models that actually show impact
A practical guide to tutoring budgets post-NTP, with ROI models, pricing comparisons, and scenarios for primaries and MATs.
When the National Tutoring Programme ended, many school leaders were left with the same question: how do we keep high-impact tutoring going without guesswork? The answer is not simply to “find the cheapest option.” It is to build a tutoring budget that shows cost, reach, dosage, safeguarding, and pupil impact in the same frame. That means comparing fixed-fee AI offerings, per-hour human tutors, and hybrid models using school finance logic rather than marketing claims. It also means understanding how intervention funding, MAT planning, and subject priorities change the economics of tutoring from one school to the next.
This guide is designed for leaders who are now budgeting in a post-NTP landscape, where scrutiny on value for money is higher and every pound has to justify itself. If you are also reviewing wider digital spend, our guide on auditing your school website with website traffic tools shows how leaders can bring a more data-driven mindset to decisions across the school. For tutoring specifically, the key is to move beyond a simple hourly rate and compare total cost of ownership, reach, and impact per pupil. That is where fixed-price AI systems can look dramatically different from traditional tuition, especially at scale.
One useful way to think about the problem is to borrow from other sectors that manage complex service pricing. For example, the logic behind hybrid cost calculators is similar: you do not judge a service only by the advertised monthly fee, but by usage, scalability, overhead, resilience, and hidden costs. Tutoring has the same challenge. Schools pay not just for teaching minutes, but for onboarding, scheduling, safeguarding checks, reporting, curriculum fit, cancellation risk, and the administrative load that comes with intervention delivery.
Pro tip: The cheapest tutoring quote is often the most expensive model once you factor in unused capacity, coordination time, and weak progress monitoring. Budget for outcomes, not just sessions.
1. Why post-NTP tutoring budgets need a different model
From grant-funded scale to school-funded discipline
The National Tutoring Programme gave many schools a first major taste of subsidised tuition at scale. That changed expectations, but it also masked some of the real costs involved in delivering sustainable tutoring. When external funding reduced or ended, leaders had to decide whether to keep running the same volume, narrow the offer, or redesign provision around priority cohorts. That is why a tutoring budget post-NTP must be built like a recurring operational plan, not a one-year project bid.
In practical terms, this means identifying which pupils receive tuition, how many weeks they attend, what subject or skill the intervention targets, and what evidence will show improvement. A school might previously have funded 1:1 or small-group sessions because they were available, but now it needs a tighter view of dosage and return. For background on how tutoring can be sourced after the programme, see online tutoring websites for UK schools, which highlights the current market for school leaders.
Why the price per hour is not the full story
Human tutors are usually priced by the hour, but the hourly rate hides important variables. A tutor charging £26 per hour may be good value if attendance is high, planning is minimal, and reporting is strong. The same tutor can become poor value if sessions are cancelled, pupils are inconsistently matched, or the school team spends hours coordinating timetables. That is why leaders should calculate effective cost per successful session, not only listed price per hour.
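To make that comparison concrete, here is a minimal sketch of an effective-cost calculation in Python. The figures in the example are illustrative assumptions, not quoted provider rates:

```python
def effective_cost_per_session(hourly_rate: float,
                               hours_booked: float,
                               cancellation_rate: float,
                               coordination_hours: float,
                               staff_hourly_cost: float) -> float:
    """Cost per successfully delivered session (treating 1 hour = 1 session).

    Adds internal coordination time to the tutor invoice, then divides by
    the sessions that actually take place after cancellations.
    """
    delivered_sessions = hours_booked * (1 - cancellation_rate)
    total_cost = (hourly_rate * hours_booked
                  + coordination_hours * staff_hourly_cost)
    return total_cost / delivered_sessions

# A £26/hour tutor, 100 booked hours, 15% cancellations, and 20 hours of
# internal coordination costed at £18/hour of staff time:
effective = effective_cost_per_session(26, 100, 0.15, 20, 18)
# The listed £26 rate becomes roughly £34.82 per delivered session.
```

Even modest cancellation and coordination assumptions move the listed rate by a third, which is exactly why the headline hourly price is a poor comparison basis.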
Schools also need to compare tutoring with other intervention spending. If you want a broader lens on expenditure control, corporate finance tricks applied to personal budgeting provides a useful analogy for sequencing large purchases and timing spend. In a school setting, the equivalent is deciding when tutoring should be concentrated, paused, or replaced by class-based catch-up.
What impact evidence should look like
Impact is not just exam scores. For primary, it may include accelerated phonics decoding, fluency growth, and better confidence in maths. For secondary, it may mean stronger retrieval, improved mock outcomes, or fewer misconceptions in a key unit. In both cases, tutoring should sit within a clear intervention framework that tracks baseline, dosage, attendance, and post-intervention movement. For leaders seeking to improve how they identify what works, what makes a good mentor is a useful companion read because quality relationships are often the hidden lever behind tutoring impact.
2. The three core pricing models: fixed-fee AI, per-hour human, and hybrid
Fixed-fee AI tutoring: predictable budget, scalable reach
Fixed-fee AI tutoring is attractive because it turns an uncertain variable cost into a predictable annual line item. Instead of paying per session, schools pay for access, often with unlimited or high-volume usage, plus built-in progress reporting and curriculum alignment. Third Space Learning’s AI maths tutor, Skye, is one example of this model, offering unlimited one-to-one maths tutoring for schools at a fixed annual price starting from £3,500. That kind of pricing is especially compelling for schools that need to serve many pupils without increasing administrative complexity.
The strongest use case for AI tutoring is usually high-frequency practice in one subject, especially maths, where repeated explanation, retrieval, and error correction can be standardised. AI can also reduce staffing bottlenecks because it is available at scale and does not require matching each pupil with a specific tutor. However, it is not a universal solution: schools still need curriculum oversight, lesson integration, and staff confidence in how the system is used. If you are evaluating AI adoption more broadly, skilling and change management for AI adoption is a useful lens for thinking about staff readiness.
Per-hour human tutoring: flexible, familiar, and often more expensive
Per-hour human tutoring remains the default model for many schools because it feels intuitive and versatile. Leaders can choose subject expertise, adjust session frequency, and in many cases offer more personalised support for exam classes or pupils with complex needs. The challenge is that the apparent flexibility can lead to volatile spend, especially when timetables shift or attendance is uneven. Typical school-facing prices in the market can range from around £20 to £40+ per hour depending on subject, level, and provider structure.
Human tutors are often the right choice for GCSE, A level, or specialised intervention where explanation, rapport, and subject nuance matter. Providers such as MyTutor, Tutor House, Spires, and Tutorful illustrate different points on the market spectrum, from school partnerships to broader marketplace options. For schools comparing specialist providers, the article on the best online tutoring websites for UK schools is especially relevant because it maps the current market by subject, safeguarding, and price.
Hybrid models: best of both worlds if designed carefully
Hybrid tutoring blends AI-led practice with human oversight or targeted human intervention. In a well-designed hybrid model, AI handles routine rehearsal, gap detection, and independent practice, while human tutors focus on motivation, intervention for stuck pupils, or high-stakes topics. This can be the most efficient option where schools want to stretch limited budgets without sacrificing personal support. Hybrid also makes sense when leaders want one model for broad coverage and another for intensive work with pupils who have the highest need.
The danger is complexity creep. If the hybrid offer is not clearly defined, schools can end up paying for two systems that do not talk to each other, doubling administration without increasing impact. That is why leaders should also think like operations managers. The article on fragmented office systems is a surprisingly relevant reminder that disconnected tools create hidden costs in time, reporting, and decision-making. Tutoring systems work the same way.
3. A practical tutoring budget framework schools can actually use
Step 1: calculate the true unit cost
Start by defining the cost unit that matters most to your school. For an AI offer, that may be cost per school per year, cost per pupil served, and cost per completed practice cycle. For a human tutor, it may be cost per hour, cost per session delivered, and cost per pupil making expected progress. For a hybrid, you should model the combined cost of the AI platform plus the human layer, because the blended value only appears when both parts are measured together.
Include the “invisible” costs in each model: staff time, onboarding, safeguarding, reporting, tech setup, and any missed sessions due to cancellation or scheduling. If you are looking to improve accuracy in school finance planning, using OCR to automate receipt capture is a helpful analogy for reducing manual admin and improving cost visibility. The same principle applies to tutoring: the better you capture real usage, the better your decision-making.
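The same logic can be sketched as a small total-cost-of-ownership helper. All figures below are hypothetical; the point is that staff time and setup belong in the unit cost, whichever model you are pricing:

```python
def total_cost_of_ownership(direct_fee: float,
                            staff_hours: float,
                            staff_hourly_cost: float,
                            setup_costs: float = 0.0) -> float:
    """Direct provider fee plus internal staff time plus one-off setup."""
    return direct_fee + staff_hours * staff_hourly_cost + setup_costs

def cost_per_pupil(tco: float, pupils_served: int) -> float:
    return tco / pupils_served

# Hypothetical comparison: a fixed-fee platform vs per-hour tuition.
ai_tco = total_cost_of_ownership(3500, staff_hours=30,
                                 staff_hourly_cost=18, setup_costs=200)
human_tco = total_cost_of_ownership(2600, staff_hours=60,
                                    staff_hourly_cost=18)
ai_per_pupil = cost_per_pupil(ai_tco, 40)        # £4,240 / 40 = £106 per pupil
human_per_pupil = cost_per_pupil(human_tco, 10)  # £3,680 / 10 = £368 per pupil
```

Note how the cheaper invoice (£2,600 vs £3,500) produces the more expensive per-pupil figure once internal hours and reach are counted.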
Step 2: estimate effective dosage and attendance
Dosage matters because a tutoring budget only works if pupils actually receive enough learning time to change outcomes. A low-cost provider with poor attendance may be worse value than a more expensive model with high engagement and fewer drop-outs. In small primaries, even a few missed sessions can distort the impact picture, while in MATs poor attendance can make a whole central programme look weaker than it is. Build your budget assumptions around realistic attendance, not ideal scheduling.
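One way to build realistic attendance into the plan is to work backwards from the minimum effective dose. A sketch, assuming a simple proportional attendance model:

```python
import math

def sessions_to_timetable(minimum_effective_dose: int,
                          expected_attendance_rate: float) -> int:
    """Sessions to schedule so the expected attended count meets the dose."""
    return math.ceil(minimum_effective_dose / expected_attendance_rate)

# If pupils need 12 attended sessions for the intervention to bite:
at_80_percent = sessions_to_timetable(12, 0.80)  # 15 sessions to book
at_70_percent = sessions_to_timetable(12, 0.70)  # 18 sessions to book
```

The budget difference between 15 and 18 booked sessions per pupil is the real cost of optimistic attendance assumptions.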
This is also where safeguarding and logistics matter. Providers with strong checks, clear communication, and school-facing reporting reduce the risk of session drift. If a school is comparing providers, it should include the safeguarding layer in the cost-benefit equation, not treat it as a separate issue. For a useful reminder of how trust and verification shape service quality, see how to find reliable, cheap phone repair shops and avoid scams. The lesson is the same: low price without verification can be expensive later.
Step 3: map spend to likely impact
A strong tutoring budget connects each pound spent to a plausible outcome. For example, Year 6 catch-up maths may justify a more intensive model than low-stakes general enrichment. A Year 10 GCSE intervention might justify per-hour human tutoring if the group needs exam technique and confidence building. Meanwhile, a large primary MAT may justify a fixed-fee AI model for a universal maths catch-up layer, with human tutors reserved for top-need pupils.
To make that mapping visible, use a simple impact rubric: baseline gap, intervention dose, expected gain, and evidence quality. Leaders who like structured comparison may also find the approach in tooling breakdowns for data roles useful, because it shows how to align the tool to the task rather than choosing based on popularity alone. Tutoring procurement should be no different.
4. Sample cost models for small primaries and large MATs
Small primary school scenario: a tight budget, high accountability
Consider a small primary with 180 pupils and a limited intervention budget of £6,000. The school wants to support Year 4 and Year 6 maths catch-up after assessment data shows a persistent gap. A human tuition model at £26 per hour could buy about 230 hours before overheads, but once admin, cancellations, and reporting are included, the effective delivered hours could fall well below that figure. A fixed-fee AI model starting from £3,500 leaves £2,500 for staff training, timetable adjustments, and targeted booster sessions for the lowest-attaining pupils.
In this scenario, a hybrid may be strongest: use AI for the broad, repeatable maths support and save human time for pupils who need motivational support or rapid clarification. This gives the school wider coverage without turning the budget into a staffing puzzle. If the Year 6 cohort is small, leaders may still decide that a human tutor is better for exam-like reasoning or confidence work, but the AI layer can reduce the number of paid hours required. For schools wanting a practical review of how different platforms compare, revisit UK school tutoring websites.
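The small-primary arithmetic above can be laid out explicitly. The hourly rate and platform fee are the illustrative figures already used in this section:

```python
BUDGET = 6_000
HOURLY_RATE = 26      # illustrative human-tutor rate
AI_PLATFORM_FEE = 3_500

# Option A: all-human tuition (before admin, cancellations, and reporting).
human_only_hours = BUDGET // HOURLY_RATE   # 230 hours

# Option B: hybrid, a fixed-fee AI layer plus targeted human boosters.
remainder = BUDGET - AI_PLATFORM_FEE       # £2,500 left over
booster_hours = remainder // HOURLY_RATE   # 96 targeted human hours
```

The hybrid trade is visible at a glance: the school swaps roughly 134 generic human hours for school-wide AI coverage plus 96 hours aimed at the pupils who need a person.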
Large MAT scenario: central procurement and economies of scale
Now consider a MAT with 12 schools and a centrally managed intervention fund of £90,000. On a per-hour human tuition model at £30 per hour, the trust might fund around 3,000 hours before administration, but the operational burden is high: different timetables, different subject needs, different local leaders, and variable reporting. A fixed-fee platform or centralised hybrid can be more efficient because the trust negotiates once, supports implementation centrally, and standardises progress tracking across schools.
Large MATs often benefit from central planning because they can identify patterns across schools: multiple Year 5 cohorts with the same maths gaps, or overlapping secondary needs in GCSE English and science. This is where hybrid design becomes powerful. The trust can use a fixed-fee AI layer for universal catch-up, then layer human specialists for top-end need or subject-specific bursts. For central leaders who are thinking about wider change management, AI adoption skilling and change management is relevant because scale only works when leaders, teachers, and tutors understand the model.
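A trust-level version of the same sketch. The per-school platform fee here is an assumption carried over from the earlier example; in practice a MAT would negotiate a multi-school rate:

```python
CENTRAL_FUND = 90_000
SCHOOLS = 12
HOURLY_RATE = 30
ASSUMED_AI_FEE_PER_SCHOOL = 3_500  # assumption; trusts typically negotiate down

# All-human model: hours funded before administration is counted.
human_only_hours = CENTRAL_FUND // HOURLY_RATE        # 3,000 hours

# Hybrid model: a universal AI layer, remainder funds human specialists.
ai_layer_cost = ASSUMED_AI_FEE_PER_SCHOOL * SCHOOLS   # £42,000
specialist_hours = (CENTRAL_FUND - ai_layer_cost) // HOURLY_RATE  # 1,600 hours
specialist_hours_per_school = specialist_hours // SCHOOLS         # ~133 each
```

On these assumed numbers the hybrid keeps more than half the human hours while adding a universal layer across all twelve schools, which is the pattern trusts should test against their own negotiated rates.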
Comparative budget table
| Model | Typical cost structure | Best for | Budget predictability | Impact visibility | Main risk |
|---|---|---|---|---|---|
| Fixed-fee AI tutoring | Annual school or trust fee | High-volume maths intervention | High | High if reporting is strong | Limited fit for nuanced needs |
| Per-hour human tutoring | Hourly or session-based | Exam support and bespoke needs | Low to medium | Medium to high | Cost creep and cancellations |
| Hybrid blend | Platform fee plus human hours | Mixed cohorts and tiered intervention | Medium to high | High if integrated well | Complexity and duplication |
| Small-group human tuition | Hourly rate spread across pupils | Cost-conscious targeted support | Medium | Medium | Lower individualisation |
| Trust-wide central procurement | Negotiated multi-school contract | MAT planning and consistency | High | High if standardised | Implementation drift between schools |
5. How to calculate ROI without overselling the result
Use a simple return-on-investment formula
ROI in education should be used carefully, because schools are not businesses chasing profit. But a structured ROI framework still helps leaders compare value. A practical version is: ROI = estimated value of learning gain ÷ total tutoring cost. The “value” may be measured as progress toward age-related expectations, reduced need for future intervention, improved exam outcomes, or greater access to the curriculum. The important point is to make the assumptions explicit so leaders can challenge them.
For example, if a school spends £3,500 on a fixed-fee AI maths platform and it supports 40 pupils, the cost per pupil is £87.50 before internal staffing. If those pupils each gain just one meaningful gap closure in a high-priority topic, the return may be strong. The same school might spend £3,000 on 100 human sessions and only reach 10 pupils, making the per-pupil cost much higher even if individual sessions are excellent. This is why comparisons must combine price, reach, and likely gain.
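The worked example above, as code. The "value of gain" stays deliberately abstract; only the cost-and-reach arithmetic is fixed:

```python
def cost_per_pupil_reached(total_cost: float, pupils_reached: int) -> float:
    return total_cost / pupils_reached

def simple_roi(estimated_value_of_gain: float, total_cost: float) -> float:
    """ROI = estimated value of learning gain / total tutoring cost."""
    return estimated_value_of_gain / total_cost

# Fixed-fee AI platform serving 40 pupils vs 100 human sessions reaching 10.
ai_per_pupil = cost_per_pupil_reached(3500, 40)     # £87.50 before staffing
human_per_pupil = cost_per_pupil_reached(3000, 10)  # £300.00 per pupil
```

The per-pupil gap (£87.50 vs £300) does not prove one model is better; it tells you how much larger the per-pupil gain from the human sessions would need to be to justify the difference.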
Track the right metrics, not just the easiest ones
Leaders often track attendance because it is easy, but attendance alone does not show impact. A better dashboard includes baseline score, session completion, tutor feedback, teacher follow-up, and post-intervention assessment. That approach lets the school distinguish between a weak intervention and a strong intervention delivered to the wrong cohort. If your school already uses data dashboards, the principles in building a business confidence dashboard can inspire a cleaner way to visualise tutoring outcomes.
MATs should also report at trust level and school level. Trust leaders need aggregate cost per successful pupil, but headteachers need cohort-level nuance. Without both, the budget can look better or worse than it really is. Transparency builds trust and helps justify continued investment to governors or trustees.
Avoid the false precision trap
ROI frameworks can create a false sense of certainty if leaders assign exact values to uncertain benefits. The solution is to model ranges: conservative, expected, and optimistic. A cautious forecast may assume 70% attendance and modest gain; an optimistic one may assume 90% attendance and strong uplift. This lets leaders see which model still works when conditions are not ideal. For schools managing a broader portfolio of suppliers and services, the logic is similar to evaluating AI vendor claims, explainability, and TCO questions in healthcare: do not accept outcomes at face value without testing assumptions.
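Range modelling can be sketched directly. The attendance and gain figures are the hedged assumptions from the paragraph above, applied to the £3,500 / 40-pupil example; "avg_gain" is a placeholder scale a school would define for itself:

```python
SCENARIOS = {
    "conservative": {"attendance": 0.70, "avg_gain": 0.5},
    "expected":     {"attendance": 0.80, "avg_gain": 0.8},
    "optimistic":   {"attendance": 0.90, "avg_gain": 1.0},
}

def cost_per_unit_of_gain(total_cost: float, pupils: int,
                          attendance: float, avg_gain: float) -> float:
    """Cost per unit of realised learning gain across the whole cohort."""
    return total_cost / (pupils * attendance * avg_gain)

results = {
    name: round(cost_per_unit_of_gain(3500, 40,
                                      s["attendance"], s["avg_gain"]), 2)
    for name, s in SCENARIOS.items()
}
# conservative: 250.0, expected: 136.72, optimistic: 97.22
```

A model that is only defensible in the optimistic row is not a model; insisting the conservative row still clears your value-for-money bar is what keeps the forecast honest.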
6. What to ask providers before you sign
Questions about pricing and usage
Ask whether pricing is fixed, capped, usage-based, or tiered. Confirm what happens if you need more pupils mid-year, and whether the school pays for setup, training, or reporting. For human tutoring, ask how cancellation, no-show, or replacement tutor time is billed. For AI platforms, ask how access is licensed and whether there are limits on school-wide use.
You should also ask how the provider supports procurement across multiple schools. For MATs, a useful contract may include shared reporting, central support, and flexible rollout rather than per-school renegotiation. The right pricing model should reduce admin, not add to it. If the contract language feels opaque, treat that as a warning sign.
Questions about quality and safeguarding
Quality assurance matters as much as price. Ask how tutors are vetted, how progress is reported, what safeguarding checks are in place, and who the school contacts if a concern arises. The best platforms combine rigorous vetting, enhanced DBS checks, and clear progress reporting, which is exactly what school leaders should expect. If a provider cannot explain these basics clearly, the budget risk is not worth it.
For a wider lesson on choosing reliable services, the article on DIY vs professional repair is surprisingly apt: schools should not take on hidden risk just to save a small amount of money. When pupil safety and learning time are on the line, verification is part of the cost structure.
Questions about implementation and reporting
Ask how the provider will fit into your timetable, MIS, and intervention workflow. Will reports be weekly or termly? Can teachers see progress by pupil, class, or cohort? Will the system flag non-engagement early enough to intervene? These are not administrative extras; they are the difference between an intervention that looks good in theory and one that actually changes classroom performance.
Leaders should also ask for case studies relevant to their phase and size. A small rural primary should not rely on a model proven only in large urban secondary settings. Likewise, a MAT should not base procurement solely on an isolated single-school success story. Context matters.
7. Building a decision matrix for governors and trustees
Use five decision criteria
A strong decision matrix should include at least five criteria: cost predictability, per-pupil reach, safeguarding confidence, expected impact, and implementation effort. Each provider or model can be scored 1-5 against these criteria, then weighted according to school priorities. For a small primary, implementation simplicity may matter more than maximum flexibility. For a MAT, standardisation and reporting may matter more than bespoke design.
This is useful because it keeps debates focused on evidence rather than preference. A leader may personally prefer one-to-one human tutoring, but if the trust needs to support 800 pupils with a finite budget, the matrix may point elsewhere. If your leadership team is interested in practical operations, building a tracker that actually gets used shows how good systems depend on adoption, not just good design.
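A weighted version of that matrix is easy to sketch. The weights and scores below are placeholders a leadership team would set for itself, not recommended values:

```python
WEIGHTS = {
    "cost_predictability":   0.25,
    "per_pupil_reach":       0.25,
    "safeguarding":          0.20,
    "expected_impact":       0.20,
    "implementation_effort": 0.10,  # score 5 = least effort required
}

def weighted_score(scores: dict) -> float:
    """Weighted 1-5 score across the five decision criteria."""
    assert set(scores) == set(WEIGHTS), "score every criterion"
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

# Placeholder scoring of a fixed-fee AI option for a small primary:
example = weighted_score({
    "cost_predictability": 5, "per_pupil_reach": 5,
    "safeguarding": 4, "expected_impact": 4, "implementation_effort": 4,
})  # 4.5 out of 5
```

Writing the weights down is the point: a MAT might move weight from implementation effort to standardised reporting, and the same provider scores would rank differently.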
Apply the matrix to real cohorts
Do not use generic school averages. Build separate cases for Year 6 pupil premium catch-up, secondary GCSE revision, SEND accessibility support, and top-need intervention. The right tutoring budget for one cohort may be wrong for another. This is especially important where intervention funding is ring-fenced or where subject gaps are concentrated in one phase.
For example, a MAT may decide that AI tutoring is the default for universal maths catch-up, while human tutoring is reserved for key exam classes and specialist intervention. That structure can be more defensible to trustees than a single all-purpose tuition spend. It also creates a clearer line of sight between budget and outcomes.
Test the model with one term first
If your school is uncertain, pilot the model for one term with a defined cohort and a clear baseline. Measure cost, attendance, and progress, then compare against your expected model. A controlled pilot is often the best way to avoid locking into an expensive mistake. The data you collect will be more useful than any vendor brochure, especially if you plan to scale across schools or phases.
8. Common mistakes schools make when budgeting tutoring
Overbuying capacity
Schools sometimes buy more tutoring than pupils can practically absorb. If a cohort is already overloaded, attendance drops and value collapses. It is better to budget for a smaller, reliable intervention that pupils can sustain than to assume an idealised timetable. Capacity should be matched to realistic pupil availability and school routines.
Ignoring coordination costs
Human tutoring often looks cheaper until staff time is counted. Arranging schedules, chasing attendance, sharing data, and managing safeguarding concerns all take time. A school may pay a lower hourly rate but spend far more internally. This is where fixed-fee systems can win because they simplify deployment and reduce hidden administration.
Failing to separate intervention and enrichment
Not every tutoring activity should be funded as catch-up. Some spend is enrichment, some is targeted intervention, and some is exam acceleration. Mixing these categories makes ROI analysis unreliable. If your finance committee needs clarity, keep the categories separate and report outcomes accordingly.
9. A recommended post-NTP budgeting approach for schools
For small primaries
Start with a narrow subject focus, usually maths or early reading, and choose the model that gives the most consistent dosage for the lowest total cost. A fixed-fee AI option may free enough budget for one-to-one human sessions where they matter most. Keep the reporting simple and termly. Most importantly, protect staff capacity so the intervention does not become an administrative burden.
For large MATs
Use central procurement, a standard reporting template, and a shared set of impact metrics. Negotiate multi-school pricing and decide where AI should be the default versus where human specialists are essential. The more schools you have, the more valuable a hybrid system becomes — if it is governed centrally. The aim is not to force every school into the same model, but to create a common framework for decision-making.
For governors and trustees
Ask for three views of the same budget: cost per hour, cost per pupil, and cost per expected gain. That will stop any one metric from dominating the decision. You can also require a pilot stage for any new provider. Better to learn with 30 pupils than to discover a budget overrun after a full-year rollout.
10. Final recommendations: how to make tutoring spend defensible
Choose the model that matches the need
Use AI when you need scale, predictability, and repeatable practice. Use human tutors when you need nuance, motivation, or subject depth. Use hybrid when the cohort is mixed and you want to stretch the budget without flattening the experience. The wrong model is usually the one that does not match the cohort.
Measure impact in a way finance teams can trust
Finance leaders need evidence that is both simple and honest. That means showing spend, reach, attendance, and outcomes together. It also means acknowledging uncertainty rather than claiming every pound has a fixed return. A good tutoring budget is one that can survive scrutiny from governors, trustees, and senior leaders because the assumptions are visible and realistic.
Build a system, not a one-off purchase
Post-NTP, tutoring is not a temporary project. It is part of the school’s wider intervention strategy, and it should be planned like one. That means comparing school tutoring platforms, thinking about fragmented systems, and using a decision framework that supports long-term consistency. Schools that budget this way will be better placed to show impact, protect staff time, and make every intervention pound count.
FAQ: Tutoring budget post-NTP
1. Is AI tutoring always cheaper than human tutoring?
Not always, but it is usually more predictable. A fixed-fee AI offer can be cheaper at scale, especially when many pupils need similar support. Human tutoring may still be better value for very small cohorts or highly specialised exam support. The real test is cost per successful outcome, not just headline price.
2. How should schools compare tutoring providers?
Compare total cost, safeguarding, reporting quality, subject fit, and expected impact. Do not compare only hourly rates or only platform fees. Ask how the provider handles attendance, data sharing, and progress monitoring. A good provider should make the budget easier to manage, not more complicated.
3. What is the best model for a small primary school?
Small primaries often benefit from a fixed-fee or hybrid model because it gives predictable spend and broad coverage. Human tutoring can still be useful for the highest-need pupils, but it is usually best reserved for targeted use. The key is to avoid spending heavily on administration and under-delivering on dosage.
4. What is the best model for a large MAT?
Large MATs often get the best value from central procurement and a hybrid approach. That allows the trust to standardise reporting, negotiate better rates, and deploy AI for scale while reserving human support for high-need or high-stakes cohorts. A trust-wide framework also makes it easier to compare impact between schools.
5. How do we show ROI to trustees without overstating the case?
Use a simple model with conservative, expected, and optimistic scenarios. Show cost per pupil, attendance, and progress against baseline. Be honest about uncertainty and avoid claiming exact financial returns from educational outcomes. Trustees usually want clarity and discipline more than perfect precision.
Related Reading
- 7 Best Online Tutoring Websites For UK Schools: 2026 - Compare current tutoring options, pricing, and safeguarding standards in the UK market.
- Skilling & Change Management for AI Adoption: Practical Programs That Move the Needle - Learn how to prepare staff for new AI tools without creating rollout fatigue.
- Evaluating AI-driven EHR features, vendor claims, explainability and TCO questions you must ask - A useful framework for interrogating vendor promises and total cost of ownership.
- The Hidden Costs of Fragmented Office Systems - See why disconnected tools quietly inflate admin time and weaken reporting.
- How to Build a Business Confidence Dashboard for UK SMEs with Public Survey Data - Borrow dashboard thinking for clearer tutoring impact reporting.
James Carter
Senior SEO Editor