
How to Estimate the ROI of Replacing a SaaS Tool (With Real Numbers)

TL;DR: Most ROI calculations for custom software fail in one of two directions — wildly optimistic (ignoring maintenance, overcounting savings) or too conservative (dismissing opportunity costs entirely). This post walks through the model we built for our ATS replacement: 150,648 PLN in identified annual costs, an 86,000 PLN build budget, and 85% of the savings case resting on opportunity costs we cannot fully prove. On direct costs alone, the project does not pay back. Here is how to run the same calculation for your tool — and what to do with the uncertainty when you do.

The ROI framework: two buckets

Every SaaS replacement ROI calculation has the same basic structure. Two things change when you replace a tool: your costs go down (you recover the time and money you were losing to the gap), and your ceiling goes up (you can now do things you could not do before). Those are not the same type of number, and conflating them is where most ROI models go wrong.

Bucket 1: Direct costs. Time your team actually burned doing manual work the software could not handle. Subscription fees you stop paying. Integration costs that disappear. These numbers exist on time sheets and invoices. They are recoverable — in the sense that you can verify them after the fact, not just estimate them before.

Bucket 2: Opportunity costs. The downstream consequences of the software’s limitations. The hire that did not happen because the pipeline data was wrong. The candidate you re-sourced from scratch who was already in your system from eighteen months ago. The senior engineer you lost in month four because the feedback loop during their evaluation was fragmented. These costs are real — the losses happened — but the attribution is a judgment call, not a calculation.

The reason this distinction matters: your ROI depends entirely on whether and how much you believe Bucket 2. Bucket 1 alone rarely justifies a custom build. The project economics work only if you are willing to make explicit, defensible assumptions about what Bucket 2 is actually worth — and then hold those assumptions up to scrutiny before you commit.
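One way to keep the two buckets from blending is to make the split structural in whatever model you build. A minimal sketch — the class and field names are ours for illustration, not from the audit:

```python
from dataclasses import dataclass

@dataclass
class ClusterCost:
    """One problem cluster, with the two buckets kept separate."""
    name: str
    direct_pln_yr: float             # Bucket 1: verifiable on time sheets and invoices
    opportunity_pln_yr: float = 0.0  # Bucket 2: estimate, requires an attribution assumption
    attribution_note: str = ""       # the assumption, written down in one sentence

    @property
    def total_pln_yr(self) -> float:
        return self.direct_pln_yr + self.opportunity_pln_yr
```

Direct figures stay auditable on their own; every opportunity figure travels with the one-sentence assumption that produced it.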

The numbers that follow are ours, from the ATS audit described in Part 5. The methodology that produced them is in Part 4.

Our numbers

The 24 problems across 7 clusters were costed individually before being aggregated. Each pain point with a time-per-occurrence got an hourly rate applied to it. Each cluster with a downstream consequence got an opportunity cost estimate with an explicit attribution assumption. The full picture:

| # | Cluster | Problems | Direct cost (PLN/yr) | Opportunity cost (PLN/yr) | Total (PLN/yr) |
|---|---------|----------|----------------------|---------------------------|----------------|
| 1 | Reports calculate with errors | 4 | 1,728 | 99,792 | 101,520 |
| 2 | Metrics not customisable | 3 | 2,304 | — | 2,304 |
| 3 | Funnels not elastic | 3 | 780 | 4,320 | 5,100 |
| 4 | No competency matrices + sourcing inefficiency | 4 | 6,036 | 4,056 | 10,092 |
| 5 | No interview transcription + manual feedback | 3 | 4,680 | 19,956 | 24,636 |
| 6 | Calendar/scheduling gaps | 3 | 3,888 | — | 3,888 |
| 7 | GDPR compliance gaps | 3 | 3,108 | — | 3,108 |
|   | Total | 24 | 22,524 | 128,124 | 150,648 |

Cluster 1 — Attribution assumption: 50% of failed hires trace to decisions made on inaccurate pipeline data. One additional failed hire/yr × ~199,584 PLN total cost × 50% = 99,792 PLN/yr. Sensitivity: at 0% attribution, project ROI is −74%; at 25%, roughly break-even; at 50%, +75%. This single cluster accounts for 67% of total estimated cost and 78% of all opportunity cost.

Cluster 2 — Confidence: high (direct only). 160 min/month of workaround time at a loaded hourly rate. No opportunity cost included: the limitation prevents process optimisation, but no defensible downstream financial consequence could be isolated.

Cluster 3 — Attribution assumption: ~1 extra day of rework per recruitment process due to data corruption from non-editable pipelines. Conservative: "1 day" = 2 hours of actual reconciliation work × 2.5 processes/month. Confidence: medium. The time estimate is based on workshop data. Includes one 5/5-severity blocking problem.

Cluster 4 — Attribution assumption: a 20% candidate reuse rate is achievable with competency-indexed tooling plus automated relationship reminders, saving re-sourcing cost (219 PLN/mo) and partial LinkedIn licence waste (119 PLN/mo). Confidence: medium. The reuse rate is an estimate that requires both tooling and behavioural change. Direct costs (503 PLN/mo for manual LinkedIn entry, CV scanning, matrix work) are high-confidence.

Cluster 5 — Attribution assumption: 10% of failed hires are attributable to poor documentation/evaluation quality. More conservative than Cluster 1 (10% vs 50%) because documentation is one of several factors in hiring outcomes. Confidence: medium. The direction is sound (degraded signal → worse decisions); the magnitude is an estimate. Direct costs (390 PLN/mo for post-interview writeups) are high-confidence.

Cluster 6 — Confidence: high (direct only). Pure time savings: 240 min/month scheduling, 30 min/month task coordination. No attribution assumptions. Three problems rated 5/5 severity. A "quick win" cluster.

Cluster 7 — Confidence: high (direct only). 120 min/month on workarounds plus 96 min/month on database cleaning. GDPR fine risk (up to 4% of annual revenue) is excluded because it requires legal input for a probability assessment.

Build budget: 86,000 PLN (scoped engagement, fixed price).

The math:

  • Direct savings only: 22,524 PLN/yr recovered. Against 86,000 PLN, that is a -74% ROI in year one. Full payback takes roughly four years.
  • Direct + opportunity savings: 150,648 PLN/yr recovered. Against 86,000 PLN, that is +75% ROI in year one.
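Both scenarios reduce to the same one-line formula. A sketch using the post's figures, with year-one ROI defined as net return over build cost:

```python
DIRECT_SAVINGS = 22_524        # PLN/yr, Bucket 1 only
OPPORTUNITY_SAVINGS = 128_124  # PLN/yr, Bucket 2 estimate
BUILD_BUDGET = 86_000          # PLN, fixed-price scoped engagement

def year_one_roi(annual_savings: float, build_cost: float) -> float:
    """Net return in year one, as a fraction of the build cost."""
    return (annual_savings - build_cost) / build_cost

print(round(year_one_roi(DIRECT_SAVINGS, BUILD_BUDGET) * 100))  # -> -74
print(round(year_one_roi(DIRECT_SAVINGS + OPPORTUNITY_SAVINGS, BUILD_BUDGET) * 100))  # -> 75
print(round(BUILD_BUDGET / DIRECT_SAVINGS, 1))  # direct-only payback -> 3.8 (years)
```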

The difference between those two outcomes is 128,124 PLN — the full opportunity cost estimate. And 99,792 PLN of that (78%) sits in a single cluster: Cluster 1 (reports calculate with errors).

The ROI of this project hinges almost entirely on one number in one cluster, derived from one attribution assumption: that 50% of Appunite’s failed hires trace back to decisions made on inaccurate reporting data. At 0% attribution, the full ROI falls from +75% to -74%. At 25% attribution, the project is roughly break-even in year one. At 50%, it produces a strong positive return.

That is the structure of the decision. Everything below is about how to handle that structure honestly — and how to replicate it for your own tools.

The honest uncertainty

Three things have to be true for this project to show positive ROI. Not optimistically — literally. If any one of them fails to hold, the math does not work.

One: opportunity costs are at least 50% real.

Cluster 1’s opportunity cost estimate rests on the assumption that one additional failed hire per year is attributable to bad pipeline data. At Appunite’s hiring volume and cost-per-hire, one failed hire costs roughly 99,792 PLN annually — recruiting costs, onboarding, the delay to value of the role. The attribution is defensible. Inaccurate pipeline data does affect headcount decisions. Headcount decisions do affect hiring outcomes. The 23-day versus 31-day time-to-hire gap was not hypothetical — it had been in use for months before the audit surfaced it. Whether the full causal chain holds at exactly 50% cannot be proven. It can only be argued. That is the position this model is in.

Cluster 5’s opportunity cost estimate (19,956 PLN/yr) carries a similar structure: interview signal that degrades between conversation and written summary produces worse offer acceptance rates. The direction is right. The magnitude is an estimate.

If you are running this model for your own tool, the question to ask is: what does your dominant cluster’s attribution assume, what evidence supports it, and what happens to your ROI if that assumption is cut in half?

Two: custom software can deliver what alternatives cannot.

The solvability filter in Part 4 asks a specific question before any build decision: would a different SaaS solve this? For Cluster 1, the answer was no — the reporting problem is not a bug in Recruitee, it is a design boundary. Recruitee calculates time-to-hire from job posting. Appunite’s process starts the clock at first candidate interaction. No configuration option changes the underlying logic. For Cluster 4 (no competency matrices, no longitudinal candidate records), no ATS we evaluated offered structured competency tracking and candidate re-surfacing as native features. The ROI model assumes these gaps cannot be addressed by switching vendors. If a competitor launched a feature that addressed Cluster 1 while we were building, the model would need to be rerun.

Three: there are additional benefits beyond the numbers.

This is the part that did not fit neatly into a table. Building the replacement ATS generates three benefits outside the cost model: (1) it produces a first-party case study in the exact thing Appunite sells — custom software that outperforms generic SaaS on a specific business process; (2) it creates data ownership the company did not have under Recruitee, enabling reporting across hiring cycles that was previously impossible; (3) it is a learning investment in the team’s ability to define, scope, and build a process-native system from scratch.

None of those belong in an ROI table. All of them are real. The economics of custom software more broadly have changed — AI has made the build cost significantly lower than it was five years ago, which means the threshold at which these ancillary benefits tip the balance is lower too. The honest version of the model acknowledges these benefits exist and keeps them separate from the quantified case. They are real and they are not the reason the math works.

How to run this calculation yourself

The mechanics are not complicated. What makes them hard is the discipline to keep direct costs and opportunity costs separate and to name attribution assumptions explicitly rather than burying them in a total.

Step 1: Build the cluster cost table.

For each cluster from your discovery process, calculate two figures independently.

Direct costs: Convert every pain point’s workaround time into an annual cost. Use fully-loaded hourly rate — salary plus employer costs — not headline salary. Keep time estimates conservative: use what you measured in the workshop, not what feels right. Optimism here contaminates every number downstream.
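The annualisation itself is mechanical. A sketch of the conversion: the 72 PLN/h loaded rate below is our back-calculation from the published cluster figures, not a number stated in the audit, so substitute your own fully-loaded rate:

```python
def direct_cost_pln_yr(minutes_per_month: float, loaded_rate_pln_h: float) -> float:
    """Workaround minutes per month -> hours -> PLN per year at the loaded rate."""
    return minutes_per_month * loaded_rate_pln_h * 12 / 60

# Cluster 2's 160 min/month of workaround time:
print(direct_cost_pln_yr(160, 72))  # -> 2304.0
```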

Opportunity costs: These require explicit assumptions —write them down. For each cluster with a downstream consequence, capture three things in one sentence: what business outcome degrades, what that outcome is worth financially, and what percentage of that consequence you are willing to attribute to the software limitation.

For Cluster 1: downstream consequence = failed hires; value = one additional failed hire at approximately 199,584 PLN total cost; attribution = 50% = 99,792 PLN/yr. That sentence is the assumption. Everything else in the model depends on it.

Do not blend assumptions across clusters. Each one stands alone and should be challengeable independently.
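To keep each assumption independently challengeable, it can help to record the three parts as one explicit value. A sketch — the record type and field names are ours, not from the audit:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OpportunityAssumption:
    """One cluster's attribution assumption, in three explicit parts."""
    degraded_outcome: str        # what business outcome degrades
    outcome_value_pln_yr: float  # what that outcome is worth per year
    attribution: float           # share attributed to the software limitation (0..1)

    def cost_pln_yr(self) -> float:
        return self.outcome_value_pln_yr * self.attribution

cluster_1 = OpportunityAssumption(
    degraded_outcome="failed hires driven by inaccurate pipeline data",
    outcome_value_pln_yr=199_584,  # one additional failed hire per year
    attribution=0.50,
)
print(cluster_1.cost_pln_yr())  # -> 99792.0
```

Challenging a cluster then means changing exactly one field and rerunning the totals.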

Step 2: Calculate the three ROI scenarios.

Run the model three times:

  • Conservative case (0% opportunity cost): Direct savings only. If this is positive, you do not need to argue about attribution. Most builds will not pass this test.
  • Mid case (50% of opportunity cost estimates): The scenario where attribution assumptions are half right. This is usually the most defensible position in an internal business case.
  • Full case (100% of opportunity cost estimates): Useful only as an upper bound. Do not present this as the central estimate to a stakeholder.

Step 3: Sensitivity test the dominant cluster.

Find the cluster that accounts for the majority of your opportunity cost estimate. Vary its attribution assumption from 0% to 100% in 25% increments and map what happens to total ROI. If ROI is negative at 25% attribution, the case is weak — the project is a bet that attribution is high, not a case that it probably is. If ROI is strongly positive at 25%, the case is robust and can withstand a skeptical challenge.
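A minimal version of the sweep, written as a haircut on the total opportunity estimate. Note one mapping assumption on our part: the post quotes Cluster 1's attribution percentage, where the 50% base assumption corresponds to believing 100% of the estimate, so the post's "25% attribution, roughly break-even" is the 50% row below:

```python
def opportunity_haircut_sweep(direct: float, opportunity: float, build_cost: float) -> dict:
    """Year-one ROI as a function of how much of the opportunity cost estimate you believe."""
    return {
        share: (direct + opportunity * share - build_cost) / build_cost
        for share in (0.0, 0.25, 0.5, 0.75, 1.0)
    }

for share, roi in opportunity_haircut_sweep(22_524, 128_124, 86_000).items():
    print(f"believe {share:.0%} of opportunity cost -> ROI {roi:+.0%}")
```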

For Appunite, Cluster 1 was so dominant (67% of total) that the sensitivity analysis reduced to: ROI as a function of the failed-hire attribution assumption. That made the case transparent and fragile in a specific way the team understood before committing.

Step 4: Assign a confidence level to each cluster.

Not a formula — a judgment. High confidence: the pain is direct, the time is measurable, the workaround is documented and consistent. Medium confidence: the opportunity cost is directionally sound but the attribution percentage can be argued. Low confidence: the downstream consequence is plausible but the causal chain has more links than can be verified.

The table becomes more honest, and more useful, when confidence levels are visible alongside the numbers.

These four steps describe the logic. The actual worksheets — including the direct cost table, opportunity cost estimation form, honest split table, decision matrix, and scoping guardrails — are in the assessment template. The template also includes the full decision tree (process problem vs software problem, switching vs building, when not to build) that this post does not cover.

The budget cap insight

There is one number in every SaaS replacement decision that gets underused: the cost of doing nothing.

Specifically: two years of your current subscription cost, multiplied by 1.2. That 20% is a margin for the inevitable — migration, edge cases, first-cycle maintenance. If your SaaS costs 43,000 PLN/yr, the ceiling is (43,000 × 2) × 1.2 = 103,200 PLN. Any build scoped under that number is worth considering on cost grounds alone. If the build comes in at the ceiling, break-even is two years — assuming the custom software solves 100% of the identified problems and your opportunity cost estimates are accurate. Both assumptions are optimistic, which is why the ceiling exists and why you should aim to come in below it, not at it.
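The ceiling formula as a one-liner, using the post's example figure:

```python
def build_ceiling_pln(annual_saas_cost: float, years: int = 2, margin: float = 0.2) -> float:
    """Cost-of-doing-nothing ceiling: N years of subscription plus a margin
    for migration, edge cases, and first-cycle maintenance."""
    return annual_saas_cost * years * (1 + margin)

print(build_ceiling_pln(43_000))  # ceiling for a 43,000 PLN/yr subscription
```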

The moment to run this math is not when you have already decided to build. It is at the SaaS renewal negotiation. Before you sign for another year, calculate: (a) what the tool is costing you in direct friction, (b) what two years of subscription costs, and (c) what a version 1 build scoped tightly around your actual required features would cost. Those three numbers belong in the same conversation.

The discovery methodology from Part 4 takes roughly a week to run. You want those numbers before the subscription auto-renews, not after. The leverage is highest at the renewal moment — and it disappears once you have signed.

What this doesn’t tell you yet

The ROI model answers a specific question: is the pain expensive enough to justify exploring a build?

It does not answer: can a build actually be scoped and delivered at a cost that makes the math work?

Those are different questions, and conflating them is a common error. A pain model that produces 150,648 PLN/yr in estimated costs does not automatically support any given build scope. It supports further investigation. The scoping step introduces new information: what the minimum viable system actually requires, what it realistically costs to build and maintain, where the scope is well-defined and where it is speculative.

That step happened for Appunite, and what it found changed parts of the picture. Not the direction of the decision — but specific assumptions behind it. Both the cost evidence and the scoping result had to be in the same room before a real decision was possible. That is where the numbers from this post met the numbers from the build estimate for the first time. The next post covers what that conversation produced.

Sources

  • Part 2 — Hold My Beer (The Manifesto): https://www.appunite.com/blog/manifesto-building-our-own-ats
  • Part 4 — How to Discover What’s Actually Broken in Your SaaS Tool: https://www.appunite.com/blog/how-to-discover-whats-actually-broken-in-your-saas-tool
  • Part 5 — 24 Problems, 7 Clusters — What We Found Wrong with Our ATS: https://www.appunite.com/blog/what-we-found-wrong-with-our-ats
  • The Paradox of Cheaper Code — Why AI is Making Custom Software More Valuable: https://www.appunite.com/blog/why-ai-is-making-custom-software-development-more-valuable
