Software rationalisation that stalls before it begins.

Enterprise software estates are rarely undocumented. The data quality problem is subtler: the data was collected for a different purpose, and it shows the moment someone asks for a forecast.

The request comes from new leadership and sounds straightforward: forecast the cost reduction. The SAM team opens the ITSM system. Contracts are there, documented, structured, maintained by people who did the work properly. And then the questions arrive that the system cannot answer.

The forecast nobody can deliver

A large construction enterprise has its entire software estate in ServiceNow. Every contract entered. Total values recorded. Renewals tracked. On paper, everything a rationalisation exercise needs.

Except each contract carries one field for total value. Not per product. Not per seat. Not broken down by renewal cadence or business unit. One number, which might cover one application or twenty. ServiceNow captured what was agreed. It was not designed to capture what that software costs at the granular level financial analysis requires.

When new leadership asks for a 20% cost-reduction forecast, the honest answer is: the data does not support one. Not because the team has not worked hard. Because the data was built for a different job.

Seven records, seven spellings

Under the structural problem sits a dirtier one. Enterprise software estates accumulate over years, entered by different people across different departments with different conventions. The same vendor appears seven times in seven records. Full legal name. Common abbreviation. Old name from before an acquisition. What one person thought it was called.

You cannot aggregate. You cannot answer the most basic question: how much are we spending with this vendor, in total, across the business?

The data is technically all there. It is practically unusable.
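A minimal sketch of why normalisation unlocks aggregation. The vendor names, spend figures, and alias table below are illustrative inventions, not real data or Samplify's implementation: each observed spelling resolves to one canonical name, and only then does a total-spend-per-vendor question have an answer.

```python
from collections import defaultdict

# Hypothetical records: one vendor entered under several different
# spellings by different departments. All names and figures invented.
records = [
    {"vendor": "Microsoft Corporation", "annual_spend": 120_000},
    {"vendor": "Microsoft Corp.",       "annual_spend": 45_000},
    {"vendor": "MSFT",                  "annual_spend": 18_000},
    {"vendor": "Microsoft Ltd",         "annual_spend": 30_000},
]

# A minimal alias table mapping each observed variant to a canonical
# vendor name. Real normalisation would combine curated aliases with
# fuzzy matching; this sketch keeps it to an explicit lookup.
ALIASES = {
    "microsoft corporation": "Microsoft",
    "microsoft corp.": "Microsoft",
    "msft": "Microsoft",
    "microsoft ltd": "Microsoft",
}

def canonical(name: str) -> str:
    """Resolve a raw vendor string to its canonical record name."""
    return ALIASES.get(name.strip().lower(), name.strip())

def total_spend_by_vendor(records):
    """Aggregate spend only after every record is normalised."""
    totals = defaultdict(int)
    for r in records:
        totals[canonical(r["vendor"])] += r["annual_spend"]
    return dict(totals)

print(total_spend_by_vendor(records))
# Four raw records collapse to a single vendor with one aggregate figure.
```

Without the `canonical` step, the same query returns four separate vendors and no usable total, which is exactly the failure described above.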

This is the data quality crisis that governance conversations tend to skip. The focus goes to tooling, policy, process. The normalisation problem sitting underneath all of it rarely gets named directly.

The tool was built for something else

ITSM platforms are built for renewal management. They track what was agreed, when it expires, and who owns the contract. That is genuinely useful. It is not the same as financial analysis, and it is not the same as feature intelligence.

Rationalisation needs to answer three questions. What does this software do? Do we already own something that covers the same use case? What does it actually cost, at the product and seat level? ITSM data answers none of these reliably. Without those answers, rationalisation stays a conversation. It never becomes a plan with a number attached.

From: Director of Software Asset Management
To: sam@samplify.ai
Answered in 2m 17s

We have nine separate Adobe records across our estate entered by different departments. Can you confirm our total committed spend and whether we are correctly licensed?

Sam @ Samplify · RENEW

Your nine Adobe records resolve to a single vendor. Total committed spend is £1.31M annually across Creative Cloud (340 seats), Acrobat Standard (180 seats), and Adobe Sign (60 seats). Creative Cloud shows 218 active users over the last 90 days against 340 licensed seats. At your EA renewal in September (EA #6214), consolidating to 240 seats saves an estimated £91,000.

Sources: Adobe EA #6214 · Creative Cloud usage log (90-day) · Adobe catalogue record

Normalisation before analysis

Samplify does not start from an assumption of clean data. It starts from what most enterprises actually have: inconsistent vendor names, single-field contract totals, manually entered records spanning years and departments. Before any analysis runs, the platform normalises. Vendor name variants resolve to a canonical record. Contract totals break into product-level estimates where granular data is absent. The gaps become visible so the team knows what is confirmed and what is estimated.
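One way to picture the confirmed-versus-estimated split: break a single-field contract total into per-product lines, marking each line by its basis. This is an illustrative sketch of the idea, with an invented function and invented figures, not Samplify's actual logic.

```python
def split_contract(total: float, products: dict) -> list:
    """Break a single-field contract total into per-product lines.

    `products` maps product name to a documented annual cost, or None
    where no price exists. Documented costs are marked "confirmed";
    the remainder is spread evenly across the rest and marked
    "estimated", so the gap stays visible instead of hidden.
    """
    known = {p: c for p, c in products.items() if c is not None}
    unknown = [p for p, c in products.items() if c is None]
    remainder = total - sum(known.values())
    share = remainder / len(unknown) if unknown else 0
    return (
        [{"product": p, "cost": c, "basis": "confirmed"} for p, c in known.items()]
        + [{"product": p, "cost": share, "basis": "estimated"} for p in unknown]
    )

# Invented example: a £100k contract where only one product has a
# documented price. The other two get labelled estimates.
for line in split_contract(100_000, {"Suite A": 60_000, "Suite B": None, "Suite C": None}):
    print(line)
```

The labels are the point: a forecast built on these lines can say which part of the number is contractual fact and which part is an even-split estimate.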

That distinction matters when presenting a forecast to leadership. A number with a clear source trail and explicit confidence levels is defensible. A number without one is not.

The feature intelligence layer maps what each tool actually does, at capability level, against what the estate already owns. That is where rationalisation opportunities surface. Not just "you have too many contracts" but "three of your workflow automation tools overlap with a Microsoft 365 capability you are already paying for."
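Capability-level overlap detection can be sketched as set intersection over a tool-to-capabilities map. The tool names and capability labels below are invented for illustration; the mechanism, not the data, is what matters.

```python
# Hypothetical estate: each tool mapped to the capabilities it provides.
estate = {
    "Microsoft 365": {"document editing", "e-signature", "workflow automation"},
    "Workflow Tool A": {"workflow automation"},
    "Workflow Tool B": {"workflow automation", "form builder"},
}

def overlaps(estate: dict, anchor: str) -> dict:
    """Find tools whose capabilities the anchor tool already covers.

    Returns {tool: overlapping capabilities} for every other tool that
    shares at least one capability with the anchor.
    """
    base = estate[anchor]
    return {
        tool: caps & base
        for tool, caps in estate.items()
        if tool != anchor and caps & base
    }

print(overlaps(estate, "Microsoft 365"))
# Both workflow tools overlap a capability the estate already pays for.
```

This is what turns "too many contracts" into a named consolidation candidate: the overlap output points at specific tools and the specific capability they duplicate.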

You can read more about how this works on the how it works page, or see how clients have run this process on the portfolio rationalisation page.

What a forecast actually needs

A cost-reduction forecast is a number with a source trail. It names the vendors, the overlap, the consolidation opportunity, and the basis for the estimate. Producing it requires clean financial data at the vendor and product level, feature intelligence to identify the overlap, and a platform that handles the messy data problem before the analysis starts, not after.

Clients using Samplify are preventing between $2 million and $3 million in software spend every month, across $120 million in annual software estate. The starting point is rarely clean data. The starting point is whatever data exists, normalised into something that can actually support a decision.

The 30-day proof

Run Samplify on your stack, your questions, your inbound flow.

Start your 30-day proof