Software in production commonly outlives the components it’s built with. Enterprise systems persist because replacing libraries and frameworks is costly, yet every component follows its own lifecycle: libraries and runtimes reach end of life while the systems that depend on them remain active.
When a component’s version reaches end of life, vulnerability discovery does not stop with it. Security issues will continue to appear, and responsibility for addressing them falls to the enterprise.
End-of-life issues spiral
Running end-of-life code is an unavoidable outcome in long-lived systems. Stable versions are kept in place because they’ve passed certification and work with existing integrations – regardless of the security implications. Newer versions can introduce breaking changes and need re-validation. In this context, continued use of older components is rational and treated as an acceptable risk.
However, the overall risk profile of the organisation rises when vulnerabilities that don’t or can’t receive upstream fixes mount up. The scale of risk and the speed at which the overall security posture can change are defined by dependency structures. A relatively small set of direct dependencies expands, with each component introducing its own dependencies, which in turn bring others. Exponential is a term that’s often misused, but in this context, it’s close to the reality.
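To make the expansion concrete, here is a minimal sketch that walks a hypothetical dependency graph and counts everything reachable from a root package. The graph and all package names are invented for illustration; real tooling would read a lockfile or SBOM instead.

```python
from collections import deque

# Hypothetical dependency graph: each package maps to its direct dependencies.
GRAPH = {
    "app": ["web", "db", "log"],
    "web": ["http", "tls"],
    "db": ["net", "tls"],
    "log": [],
    "http": ["net"],
    "tls": ["crypto"],
    "net": [],
    "crypto": [],
}

def transitive_deps(root: str, graph: dict) -> set:
    """Return every package reachable from root (excluding root itself)."""
    seen, queue = set(), deque(graph.get(root, []))
    while queue:
        pkg = queue.popleft()
        if pkg not in seen:
            seen.add(pkg)
            queue.extend(graph.get(pkg, []))
    return seen

print(len(GRAPH["app"]))                   # 3 direct dependencies
print(len(transitive_deps("app", GRAPH)))  # 7 once transitive deps are counted
```

Even in this toy example, three direct dependencies more than double once transitive dependencies are counted; real-world graphs routinely run to hundreds of nodes.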
Dependency graphs
Vulnerabilities can appear at any level of the dependency graph – in a top-level library, a secondary dependency, or a deeply-nested component not visible in routine development. There’s no single authoritative source for vulnerability information; public disclosures are distributed across multiple databases, communication channels, and mailing lists.
Managing risk exposure therefore requires teams to monitor the full dependency graph: identifying which components are present, tracking new disclosures, and determining whether a newly-discovered issue affects the version(s) in use.
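The last step – deciding whether a disclosure affects the version in use – might be sketched as below. The advisory records, package names, and the simple “fixed-in version” comparison are all illustrative assumptions, not a real feed format; production tooling would consume a structured advisory database and handle richer version-range semantics.

```python
# Minimal sketch of matching installed versions against advisories.
# Advisory IDs, package names, and versions below are invented.

def parse(v: str) -> tuple:
    """Turn a dotted version string into a comparable tuple, e.g. '1.9.5' -> (1, 9, 5)."""
    return tuple(int(part) for part in v.split("."))

ADVISORIES = [
    {"id": "ADV-0001", "package": "libexample", "fixed_in": "2.4.1"},
    {"id": "ADV-0002", "package": "libexample", "fixed_in": "1.9.8"},
    {"id": "ADV-0003", "package": "otherlib",  "fixed_in": "3.0.0"},
]

def affecting(package: str, version: str) -> list:
    """Advisories whose fix landed in a later version than the one in use."""
    return [a["id"] for a in ADVISORIES
            if a["package"] == package and parse(version) < parse(a["fixed_in"])]

print(affecting("libexample", "1.9.5"))  # ['ADV-0001', 'ADV-0002']
print(affecting("libexample", "2.5.0"))  # []
```

The awkward case for end-of-life components is the first one: the fix exists, but only in a version the organisation cannot adopt.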
Buyers and vendors of end-of-life software supply chain monitoring and management platforms will meet at TechEx North America in May this year, where such challenges and potential approaches will be examined in commercial and technical contexts.
Fixing the problems
Fixing vulnerabilities in this context is more involved than applying a patch. The component needs to be rebuilt, as does every component that has the end-of-life element as a dependency, and all resulting artefacts need to be tested for consistency and effectiveness.
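Working out what “every component that depends on it” actually means is itself a graph problem. A hedged sketch, using an invented dependency graph, inverts the edges and collects everything that depends on the patched component directly or transitively:

```python
# Sketch: given a dependency graph, find everything that must be rebuilt
# when one end-of-life component is patched. Graph and names are illustrative.

GRAPH = {
    "app": ["web", "report"],
    "web": ["eol-lib"],
    "report": ["pdf"],
    "pdf": ["eol-lib"],
    "eol-lib": [],
}

def rebuild_set(patched: str, graph: dict) -> set:
    """The patched component plus every component that depends on it,
    directly or transitively: all of these need rebuilding and re-testing."""
    # Invert the graph so each dependency maps to its dependents.
    dependents = {}
    for pkg, deps in graph.items():
        for dep in deps:
            dependents.setdefault(dep, set()).add(pkg)
    result, stack = {patched}, [patched]
    while stack:
        for parent in dependents.get(stack.pop(), ()):
            if parent not in result:
                result.add(parent)
                stack.append(parent)
    return result

print(sorted(rebuild_set("eol-lib", GRAPH)))
# ['app', 'eol-lib', 'pdf', 'report', 'web']
```

In this toy graph, patching one library forces a rebuild of the entire application – a common outcome when the end-of-life component sits near the bottom of the stack.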
Publicly-released patches usually target newer versions of a component, so backporting them to an older codebase takes extra effort and can introduce regressions, leaving significant workloads mounting up on development teams.
Changes to the build environment over a system’s lifetime can also affect outcomes. The source code may be fixed, but the environment used to build it may produce anomalous results. From operating system updates to compiler or linker upgrades (themselves considered necessary for security and performance), the build pipeline may introduce issues of its own. Differences in packaging or metadata can affect important integrations and expected behaviour. Every rebuild therefore requires validation at a level normally associated with a new version release.
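One basic check that helps here is verifying that two builds of the same source produce byte-identical artefacts – the starting point of reproducible-builds practice. A minimal sketch, with illustrative paths:

```python
# Sketch: compare two build artefacts by content hash to detect
# environment-induced drift between rebuilds. Paths are illustrative.
import hashlib
from pathlib import Path

def digest(path: Path) -> str:
    """SHA-256 of the file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def builds_match(artefact_a: Path, artefact_b: Path) -> bool:
    """True only if both artefacts are byte-for-byte identical."""
    return digest(artefact_a) == digest(artefact_b)
```

A mismatch doesn’t prove the new artefact is broken – timestamps and embedded metadata can differ harmlessly – but it flags exactly the environment drift described above for closer inspection.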
Understanding how to make any change safely requires familiarity with legacy decisions and build methods – familiarity that can be lost in the space of a few months. As teams change and personnel rotate into the organisation, there’s no guarantee that knowledge will be passed on to the next generation of developers. Without structured documentation and repeatable procedures (the former an overhead that time-starved teams often avoid), the effort required to produce fixes increases.
Systematic monitoring of the dependency graph, standardised build and validation workflows, and reliable mechanisms for delivering updates form the rule of three in security and development teams’ playbooks.
External services can provide extended lifecycle support by maintaining monitoring, standardising patch deployment, and ensuring build pipelines’ results are properly validated and tested. Quality and consistency of execution are the goals. These are attainable, but require specialist providers with deep knowledge of both software development and cybersecurity, a combination that rarely exists in the wild.
Discussion and evaluation of the options will naturally include cost comparisons, but also an assessment of whether an organisation can maintain the required level of discipline and knowledge over the lifespan of the system, and be properly resourced to do so. If we take as read that end-of-life components will accumulate vulnerabilities, the question becomes which methods of managing that risk come with the lowest overhead and the highest level of security.
(Image source: “File:Electronic component inductors.jpg” by me is licensed under CC BY-SA 3.0. To view a copy of this license, visit http://creativecommons.org/licenses/by-sa/3.0)
Want to dive deeper into the tools and frameworks shaping modern development? Check out the AI & Big Data Expo, taking place in Amsterdam, California, and London. Explore cutting-edge sessions on machine learning, data pipelines, and next-gen AI applications. The event is part of TechEx and co-located with other leading technology events. Click here for more information.
DeveloperTech News is powered by TechForge Media. Explore other upcoming enterprise technology events and webinars here.


