TL;DR: Most ROI calculations for custom software fail in one of two directions — wildly optimistic (ignoring maintenance, overcounting savings) or too conservative (dismissing opportunity costs entirely). This post walks through the model we built for our ATS replacement: 150,648 PLN in identified annual costs, 86,000 PLN build budget, and 85% of the savings case resting on opportunity costs we cannot fully prove. On direct costs alone, the project does not pay back. Here is how to run the same calculation for your tool — and what to do with the uncertainty when you do.
Every SaaS replacement ROI calculation has the same basic structure. Two things change when you replace a tool: your costs go down (you recover the time and money you were losing to the gap), and your ceiling goes up (you can now do things you could not do before). Those are not the same type of number, and conflating them is where most ROI models go wrong.
Bucket 1: Direct costs. Time your team actually burned doing manual work the software could not handle. Subscription fees you stop paying. Integration costs that disappear. These numbers exist on time sheets and invoices. They are recoverable — in the sense that you can verify them after the fact, not just estimate them before.
Bucket 2: Opportunity costs. The downstream consequences of the software’s limitations. The hire that did not happen because the pipeline data was wrong. The candidate you re-sourced from scratch who was already in your system from eighteen months ago. The senior engineer you lost in month four because the feedback loop during their evaluation was fragmented. These costs are real — the losses happened — but the attribution is a judgment call, not a calculation.
The reason this distinction matters: your ROI depends entirely on whether and how much you believe Bucket 2. Bucket 1 alone rarely justifies a custom build. The project economics work only if you are willing to make explicit, defensible assumptions about what Bucket 2 is actually worth — and then hold those assumptions up to scrutiny before you commit.
The numbers that follow are ours, from the ATS audit described in Part 5. The methodology that produced them is in Part 4.
The 24 problems across 7 clusters were costed individually before being aggregated. Each pain point with a time-per-occurrence got an hourly rate applied to it. Each cluster with a downstream consequence got an opportunity cost estimate with an explicit attribution assumption. The full picture:
Build budget: 86,000 PLN (scoped engagement, fixed price).
The math:
- Direct costs: 22,524 PLN/yr. Opportunity costs: 128,124 PLN/yr. Total identified annual cost: 150,648 PLN/yr.
- ROI with the full model, year one: (150,648 - 86,000) / 86,000 = +75%.
- ROI on direct costs alone: (22,524 - 86,000) / 86,000 = -74%.
The difference between those two outcomes is 128,124 PLN — the full opportunity cost estimate. And 99,792 PLN of that (78%) sits in a single cluster: Cluster 1 (reports calculate with errors).
The ROI of this project hinges almost entirely on one number in one cluster, derived from one attribution assumption: that 50% of Appunite’s failed hires trace back to decisions made on inaccurate reporting data. At 0% attribution, the full ROI falls from +75% to -74%. At 25% attribution, the project is roughly break-even in year one. At 50%, it produces a strong positive return.
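Those endpoints can be checked from the stated figures. A minimal sketch in Python, assuming the whole opportunity cost bucket scales linearly with the attribution percentage (the stated 50% yields the full 128,124 PLN):

```python
# Reproducing the attribution sensitivity from the stated figures (PLN).
BUILD = 86_000               # fixed-price build budget
DIRECT = 150_648 - 128_124   # 22,524 PLN/yr of direct costs
OPPORTUNITY_AT_50 = 128_124  # PLN/yr at the stated 50% attribution

def year_one_roi(attribution: float) -> float:
    recovered = DIRECT + OPPORTUNITY_AT_50 * (attribution / 0.50)
    return (recovered - BUILD) / BUILD

for att in (0.0, 0.25, 0.50):
    print(f"attribution {att:.0%}: ROI {year_one_roi(att):+.0%}")
# attribution 0%: ROI -74%
# attribution 25%: ROI +1%
# attribution 50%: ROI +75%
```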
That is the structure of the decision. Everything below is about how to handle that structure honestly — and how to replicate it for your own tools.
Three things have to be true for this project to show positive ROI. Not optimistically — literally. If any one of them fails to hold, the math does not work.
One: opportunity costs are at least 50% real.
Cluster 1’s opportunity cost estimate rests on the assumption that one additional failed hire per year traces back, in part, to bad pipeline data. At Appunite’s hiring volume and cost-per-hire, one failed hire costs roughly 199,584 PLN — recruiting costs, onboarding, the delay to value of the role; at 50% attribution, that is 99,792 PLN annually. The attribution is defensible. Inaccurate pipeline data does affect headcount decisions. Headcount decisions do affect hiring outcomes. The 23-day versus 31-day time-to-hire gap was not hypothetical — the inaccurate figure had been in use for months before the audit surfaced it. Whether the full causal chain holds at exactly 50% cannot be proven. It can only be argued. That is the position this model is in.
Cluster 5’s opportunity cost estimate (19,956 PLN/yr) carries a similar structure: interview signal that degrades between conversation and written summary produces worse offer acceptance rates. The direction is right. The magnitude is an estimate.
If you are running this model for your own tool, the question to ask is: what does your dominant cluster’s attribution assume, what evidence supports it, and what happens to your ROI if that assumption is cut in half?
Two: custom software can deliver what alternatives cannot.
The solvability filter in Part 4 asks a specific question before any build decision: would a different SaaS solve this? For Cluster 1, the answer was no — the reporting problem is not a bug in Recruitee, it is a design boundary. Recruitee calculates time-to-hire from job posting. Appunite’s process starts the clock at first candidate interaction. No configuration option changes the underlying logic. For Cluster 4 (no competency matrices, no longitudinal candidate records), no ATS we evaluated offered structured competency tracking and candidate re-surfacing as native features. The ROI model assumes these gaps cannot be addressed by switching vendors. If a competitor launched a feature that addressed Cluster 1 while we were building, the model would need to be rerun.
Three: there are additional benefits beyond the numbers.
This is the part that did not fit neatly into a table. Building the replacement ATS generates three benefits outside the cost model: (1) it produces a first-party case study in the exact thing Appunite sells — custom software that outperforms generic SaaS on a specific business process; (2) it creates data ownership the company did not have under Recruitee, enabling reporting across hiring cycles that was previously impossible; (3) it is a learning investment in the team’s ability to define, scope, and build a process-native system from scratch.
None of those belong in an ROI table. All of them are real. The economics of custom software more broadly have changed — AI has made the build cost significantly lower than it was five years ago, which means the threshold at which these ancillary benefits tip the balance is lower too. The honest version of the model acknowledges these benefits exist and keeps them separate from the quantified case. They are real and they are not the reason the math works.
The mechanics are not complicated. What makes them hard is the discipline to keep direct costs and opportunity costs separate and to name attribution assumptions explicitly rather than burying them in a total.
Step 1: Build the cluster cost table.
For each cluster from your discovery process, calculate two figures independently.
Direct costs: Convert every pain point’s workaround time into an annual cost. Use a fully-loaded hourly rate — salary plus employer costs — not headline salary. Keep time estimates conservative: use what you measured in the workshop, not what feels right. Optimism here contaminates every number downstream.
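As a worked example of the conversion, here is a short sketch; the pain points, frequencies, and hourly rate below are illustrative placeholders, not Appunite’s measured figures:

```python
# Sketch: converting measured workaround time into an annual direct cost.
# All figures are illustrative; substitute your own workshop measurements.

FULLY_LOADED_RATE_PLN = 150  # per hour: salary plus employer costs

pain_points = [
    # (name, hours per occurrence, occurrences per year)
    ("manual report reconciliation", 1.5, 52),
    ("duplicate candidate data entry", 0.25, 400),
]

annual_direct_cost = sum(
    hours * per_year * FULLY_LOADED_RATE_PLN
    for _, hours, per_year in pain_points
)
print(f"Annual direct cost: {annual_direct_cost:,.0f} PLN")  # 26,700 PLN
```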
Opportunity costs: These require explicit assumptions — write them down. For each cluster with a downstream consequence, capture three things in one sentence: what business outcome degrades, what that outcome is worth financially, and what percentage of that consequence you are willing to attribute to the software limitation.
For Cluster 1: downstream consequence = failed hires; value = one additional failed hire at approximately 199,584 PLN total cost; attribution = 50% = 99,792 PLN/yr. That sentence is the assumption. Everything else in the model depends on it.
Do not blend assumptions across clusters. Each one stands alone and should be challengeable independently.
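One way to enforce that separation is to keep each assumption as a standalone record. A sketch using Cluster 1’s stated figures; the field layout itself is an assumption for illustration, not part of the original model:

```python
from dataclasses import dataclass

@dataclass
class OpportunityAssumption:
    cluster: str
    degraded_outcome: str    # what business outcome degrades
    annual_value_pln: float  # what that outcome is worth per year
    attribution: float       # share attributed to the software limitation

    def annual_cost_pln(self) -> float:
        return self.annual_value_pln * self.attribution

cluster_1 = OpportunityAssumption(
    cluster="Cluster 1: reports calculate with errors",
    degraded_outcome="one additional failed hire per year",
    annual_value_pln=199_584,
    attribution=0.50,
)
print(f"{cluster_1.annual_cost_pln():,.0f} PLN/yr")  # 99,792 PLN/yr
```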
Step 2: Calculate the three ROI scenarios.
Run the model three times, as sketched below:
- Conservative: direct costs only, with every opportunity cost set to zero.
- Moderate: direct costs plus opportunity costs at half their stated attribution.
- Full: direct costs plus opportunity costs at the attributions you wrote down in Step 1.
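A sketch of the three runs over a per-cluster table; the two-row table and the split of direct costs across rows are illustrative, with totals matching the Appunite figures above:

```python
# Sketch: the same model run under three attribution scenarios.
# Each cluster is (direct PLN/yr, opportunity PLN/yr at stated attribution);
# the direct-cost split below is invented for the sketch.

clusters = [
    (12_000, 99_792),   # e.g. reporting errors
    (10_524, 28_332),   # everything else, lumped together here
]
BUILD = 86_000

for name, multiplier in [("conservative", 0.0), ("moderate", 0.5), ("full", 1.0)]:
    recovered = sum(direct + opp * multiplier for direct, opp in clusters)
    print(f"{name}: ROI {(recovered - BUILD) / BUILD:+.0%}")
# conservative: ROI -74%   moderate: ROI +1%   full: ROI +75%
```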
Step 3: Sensitivity test the dominant cluster.
Find the cluster that accounts for the majority of your opportunity cost estimate. Vary its attribution assumption from 0% to 100% in 25% increments and map what happens to total ROI. If ROI is negative at 25% attribution, the case is weak — the project is a bet that attribution is high, not a case that it probably is. If ROI is strongly positive at 25%, the case is robust and can withstand a skeptical challenge.
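A sketch of that sweep with the Appunite figures; it varies only the dominant cluster and holds the other clusters at their stated values, which is why its 0% endpoint sits above the direct-costs-only figure from earlier:

```python
# Sketch: year-one ROI as a function of the dominant cluster's attribution.
BUILD = 86_000
DIRECT = 22_524           # PLN/yr, direct costs across all clusters
OTHER_CLUSTERS = 28_332   # PLN/yr, opportunity costs outside Cluster 1
DOMINANT_VALUE = 199_584  # PLN/yr, full cost of one additional failed hire

for attribution in (0.00, 0.25, 0.50, 0.75, 1.00):
    recovered = DIRECT + OTHER_CLUSTERS + DOMINANT_VALUE * attribution
    print(f"attribution {attribution:.0%}: ROI {(recovered - BUILD) / BUILD:+.0%}")
# 0% -> -41%, 25% -> +17%, 50% -> +75%, 75% -> +133%, 100% -> +191%
```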
For Appunite, Cluster 1 was so dominant (66% of the total identified cost) that the sensitivity analysis reduced to: ROI as a function of the failed-hire attribution assumption. That made the case transparent and fragile in a specific way the team understood before committing.
Step 4: Assign a confidence level to each cluster.
Not a formula — a judgment. High confidence: the pain is direct, the time is measurable, the workaround is documented and consistent. Medium confidence: the opportunity cost is directionally sound but the attribution percentage can be argued. Low confidence: the downstream consequence is plausible but the causal chain has more links than can be verified.
The table becomes more honest, and more useful, when confidence levels are visible alongside the numbers.
These four steps describe the logic. The actual worksheets — including the direct cost table, opportunity cost estimation form, honest split table, decision matrix, and scoping guardrails — are in the assessment template. The template also includes the full decision tree (process problem vs software problem, switching vs building, when not to build) that this post does not cover.
There is one number in every SaaS replacement decision that gets underused: the cost of doing nothing.
Specifically: two years of your current subscription cost, multiplied by 1.2. That 20% is a margin for the inevitable — migration, edge cases, first-cycle maintenance. If your SaaS costs 43,000 PLN/yr, the ceiling is (43,000 × 2) × 1.2 = 103,200 PLN. Any build scoped under that number is worth considering on cost grounds alone. If the build comes in at the ceiling, break-even is two years — assuming the custom software solves 100% of the identified problems and your opportunity cost estimates are accurate. Both assumptions are optimistic, which is why the ceiling exists and why you should aim to come in below it, not at it.
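The ceiling reduces to one line of arithmetic; a minimal sketch with the example figure from above:

```python
def do_nothing_ceiling_pln(annual_subscription_pln: float) -> float:
    """Two years of subscription cost plus a 20% margin for migration,
    edge cases, and first-cycle maintenance."""
    return annual_subscription_pln * 2 * 1.2

print(f"{do_nothing_ceiling_pln(43_000):,.0f} PLN")  # 103,200 PLN
```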
The moment to run this math is not when you have already decided to build. It is at the SaaS renewal negotiation. Before you sign for another year, calculate: (a) what the tool is costing you in direct friction, (b) what two years of subscription costs, and (c) what a version 1 build scoped tightly around your actual required features would cost. Those three numbers belong in the same conversation.
The discovery methodology from Part 4 takes roughly a week to run. You want those numbers before the subscription auto-renews, not after. The leverage is highest at the renewal moment — and it disappears once you have signed.
The ROI model answers a specific question: is the pain expensive enough to justify exploring a build?
It does not answer: can a build actually be scoped and delivered at a cost that makes the math work?
Those are different questions, and conflating them is a common error. A pain model that produces 150,648 PLN/yr in estimated costs does not automatically support any given build scope. It supports further investigation. The scoping step introduces new information: what the minimum viable system actually requires, what it realistically costs to build and maintain, where the scope is well-defined and where it is speculative.
That step happened for Appunite, and what it found changed parts of the picture. Not the direction of the decision — but specific assumptions behind it. Both the cost evidence and the scoping result had to be in the same room before a real decision was possible. That is where the numbers from this post met the numbers from the build estimate for the first time. The next post covers what that conversation produced.