Reviving Legacy Software - A Practical Modernisation Guide
Why modernise legacy systems?
Every software project eventually faces the same reality: what once worked well can turn into a burden. Our team faced this first-hand when we were tasked with modernising a SaaS-based transport management system originally built in 2010. Designed to provide shipment tracking, delivery scheduling and customer communication for courier, warehouse, freight and delivery providers, it had been developed with ASP.NET MVC in C# and Microsoft SQL Server, accessed through the EntitySpaces ORM.
Although the platform had served multiple clients, it showed its age: a monolithic codebase with some methods spanning over 800 lines, no normalisation in the database, caching scattered throughout and, to top it off, virtually no documentation. The platform suffered from performance bottlenecks and security risks, and the original developers were long gone. Our team – four developers, one UX designer and one QA engineer – had to understand the old system, decide what to keep, and then modernise it completely.
The project taught us not only about code, but also about the difficult architectural decisions behind it: whether to refactor, rewrite or replace.

Which parts of your system keep the business alive?
The first step was to get familiar with the system. We set it up in a local development environment, navigated the application interface and located the endpoints for every flow. All features, from login screens to shipment tracking, were documented with their inputs, outputs and dependencies. In addition, we mapped system metadata and configuration settings, along with integrations from external vendors.
This meant distinguishing the features that mattered for business continuity from those that could be dropped or delayed. Shipment tracking, for instance, was mandatory and directly tied to daily customer satisfaction, while other features were barely used. Although the system ultimately required a complete rewrite, the work was carried out in phases to ensure continuity.

Technical Debt vs Architectural Debt
One of the primary lessons from this project was to differentiate between technical debt and architectural debt.
Technical debt describes defective coding practices such as massive methods, tight coupling and hard-coded values. It makes upkeep a nuisance, but it is usually manageable through incremental refinement.
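To make this concrete, here is a hypothetical before/after sketch of paying down technical debt (the pricing function and rates are invented for illustration, and shown in Python rather than the project's C# so the idea stays language-neutral): hard-coded "magic" values are extracted into named configuration, and a buried calculation is split into a small single-purpose function.

```python
# Hypothetical sketch of paying down technical debt.

# Before: magic numbers buried inside one long method.
def quote_before(weight_kg: float) -> float:
    base = weight_kg * 4.5          # magic per-kg rate
    return base + 12.0              # magic fuel surcharge

# After: named configuration values and small, single-purpose functions.
PER_KG_RATE = 4.5                   # extracted, no longer hard-coded inline
FUEL_SURCHARGE = 12.0

def base_rate(weight_kg: float) -> float:
    return weight_kg * PER_KG_RATE

def quote_after(weight_kg: float) -> float:
    return base_rate(weight_kg) + FUEL_SURCHARGE

# The refactoring must preserve behaviour exactly.
assert quote_before(10) == quote_after(10)
```

The key discipline is that each small step is behaviour-preserving, which is what keeps this class of debt manageable without a rewrite.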
Architectural debt, on the other hand, is structural. Because our codebase lacked modularity and relied on an outdated ORM, even small changes could ripple through the entire system.
This distinction was vital in deciding what to preserve and what to discard. Many parts of the code were merely unclean and were earmarked for refactoring; others stood on foundations so shaky that rebuilding them was the only way forward.
What still works and what breaks every time you deploy?
Some modules had been inactive for years, while others received frequent updates. We used static analysis tools to measure code complexity and examined ‘code churn’ data for areas with frequent errors, to identify which parts were stable and which were fragile.
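Churn can be approximated directly from version-control history. A minimal sketch, assuming the text produced by `git log --name-only --pretty=format:` has already been captured (one file path per line, blank lines between commits), that counts how many commits touched each file:

```python
from collections import Counter

def churn(log_output: str) -> list[tuple[str, int]]:
    """Count how many commits touched each file, most-churned first.

    `log_output` is assumed to be the output of
    `git log --name-only --pretty=format:`.
    """
    files = [line.strip() for line in log_output.splitlines() if line.strip()]
    return Counter(files).most_common()

# Hypothetical sample history: Scheduling.cs appears in three commits.
sample = "Scheduling.cs\nTracking.cs\n\nScheduling.cs\n\nScheduling.cs\nInvoice.cs\n"
print(churn(sample)[0])  # the most frequently changed file and its commit count
```

Cross-referencing the highest-churn files against bug reports is a cheap way to locate fragile areas before committing to a rewrite order.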
All modules were ultimately slated for rewriting, though the work was carried out in phases. Delivery scheduling, for example, was plagued by bugs and regressions with each deployment, which made it a high priority during the early rewrite phases.
The wider industry exhibits a balance between stability and fragility. COBOL systems in banking are robust enough to manage billions of dollars in daily transactions, but the collapse of New Jersey’s unemployment system during the pandemic exposed the fragility of neglected legacy code.

Is your legacy system a ticking security time bomb?
Security and compliance were the most troubling issues we discovered. Audits uncovered unencrypted sensitive data, endpoints without authorisation checks and an absence of foreign keys in the database. Performance problems stemmed from both the outdated architecture and inadequate adherence to modern standards.
But this modernisation was not just about improving speed; it also meant achieving compliance with regulations and standards such as GDPR and PCI DSS. A real-world cautionary tale is the Equifax breach, where inadequate patch management exposed the personal data of almost half of the US population.
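As one hedged illustration of closing the unencrypted-data gap (the function names are hypothetical, not the platform's actual code, and sketched in Python using only the standard library), stored credentials can be replaced with salted PBKDF2 hashes rather than plaintext:

```python
import hashlib
import hmac
import os

ITERATIONS = 600_000  # a commonly recommended PBKDF2-SHA256 work factor

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted PBKDF2-HMAC-SHA256 digest suitable for storage."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```

The same principle (never store or transmit the sensitive value itself when a derived or encrypted form will do) applied across the audit findings, not just to passwords.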
How do you truly understand an old system before changing it?
Our analysis of the codebase involved both automated and manual methods:
- SonarQube, NDepend and Visual Studio metrics were used to analyse code complexity.
- During database reverse engineering with SQL Server Management Studio, we could observe tables and their relationships in detail.
- Where automated tools fell short, manual auditing, business process mapping and pair walkthroughs filled the gaps.
- Production-level pain points, such as slow queries or bottlenecks, were identified through log and runtime monitoring.
These tools provided us with the necessary information to make informed decisions instead of relying on assumptions.
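The log-monitoring step above can be sketched as a simple scan for slow queries. This assumes a hypothetical log format (`duration=1240ms query="..."`), not the platform's actual logging:

```python
import re

# Matches hypothetical log lines such as:
#   duration=1240ms query="SELECT * FROM Shipments"
LOG_LINE = re.compile(r'duration=(\d+)ms\s+query="([^"]+)"')

def slow_queries(log_text: str, threshold_ms: int = 500) -> list[tuple[str, int]]:
    """Return (query, duration_ms) pairs above the threshold, slowest first."""
    hits = [(query, int(ms)) for ms, query in LOG_LINE.findall(log_text)]
    return sorted((h for h in hits if h[1] > threshold_ms), key=lambda h: -h[1])

sample = (
    'duration=1240ms query="SELECT * FROM Shipments"\n'
    'duration=35ms query="SELECT Id FROM Users"\n'
    'duration=860ms query="SELECT * FROM Deliveries"\n'
)
assert slow_queries(sample)[0][0] == "SELECT * FROM Shipments"
```

Even a crude filter like this turns raw production logs into a ranked worklist, which is how assumptions about "slow areas" get replaced with evidence.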

When to refactor, rewrite, or replace?
A significant aspect of our project involved selecting the appropriate strategy. Our standard practice was:
- Refactor if the code is disorganised but the architecture is sound.
- Rewrite in situations where business logic is well-established, but fundamental issues exist.
- Replace if the domain model of the system does not suit the business.
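The three rules above can be written down as a small decision helper. This is only a sketch of our heuristic, not a formal tool, and the parameter names are invented for illustration:

```python
def strategy(sound_architecture: bool, domain_model_fits: bool) -> str:
    """Encode the refactor/rewrite/replace heuristic described above."""
    if not domain_model_fits:
        return "replace"    # the system no longer matches the business
    if sound_architecture:
        return "refactor"   # messy code on solid foundations
    return "rewrite"        # established logic, broken foundations

assert strategy(sound_architecture=True, domain_model_fits=True) == "refactor"
assert strategy(sound_architecture=False, domain_model_fits=True) == "rewrite"
assert strategy(sound_architecture=False, domain_model_fits=False) == "replace"
```

Making the heuristic explicit, even informally, kept the team's module-by-module decisions consistent instead of ad hoc.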
The majority of the system was rewritten, but we kept the essential business functions intact: their logic was carried into the new architecture so that features such as shipment tracking and scheduling were never at risk of breaking. Modules of lesser importance were either redesigned or replaced with off-the-shelf solutions.
What’s Next?
Up next, the focus shifted to adopting a modern technology stack designed to propel the business forward: cloud-driven, modular and API-first. The next article in this series will explore our decision-making process in selecting and evaluating this new technology stack.