Focused Resources

Most testing problems aren’t new. Regression suites that no one trusts. Frameworks that break every sprint. Test organizations that can’t keep pace with the product. SDT has seen all of it — and solved all of it. The resources here document what works: the methodology, the organizational structure, and the real-world results from clients who made the shift.

Your Test Automation Isn’t Broken. Your Approach to It Is.

Why scripted automation keeps failing — and what the numbers look like when you fix the underlying problem

There’s a specific kind of frustration that QA teams know well. You invest months building out an automation suite. Engineers write hundreds of scripts. Coverage looks good on paper. Then the product ships a new feature, the UI gets updated, or a workflow changes — and suddenly a third of your tests are failing. Not because the product is broken. Because the tests are.

So engineers spend the next sprint fixing tests instead of building new coverage. The automation library grows larger and more expensive to maintain. Trust in the suite erodes. And at some point, someone in a release meeting makes the call that nobody wanted to make: we’re going to need a manual sign-off anyway, because we don’t really know if the automated tests are telling us anything useful.

This cycle is so common it’s almost expected. Most teams assume it’s just the nature of automation — that test maintenance is an unavoidable cost of having a modern testing practice. It isn’t. The maintenance problem is a symptom of a structural choice made early in how the automation was built.


The Scripting Trap

Traditional test automation is built on scripts. An engineer writes code that instructs the automation tool: find this element, click this button, verify this value. It works until something changes. Move a button, rename a field, restructure a workflow — and every script that references those elements needs to be manually updated.

The deeper problem is that scripted automation conflates two things that should be kept separate: knowing what to test and knowing how to make a machine execute the test. Those are genuinely different skills. A QA engineer with deep knowledge of how a financial application should behave doesn’t need to know how to write automation code to define valuable test cases. And an automation engineer doesn’t need to understand the business logic of a healthcare system to implement the technical layer that executes tests against it.

When you force both responsibilities onto the same person — or the same artifact — you get a fragile, expensive, hard-to-scale testing practice. Domain experts who should be focused on coverage end up learning scripting languages. Automation engineers who should be building reusable infrastructure end up buried in business logic. And the test library grows larger than anyone can maintain.


What Happens When You Separate the Two

The alternative isn’t theoretical. It’s been running in production environments across industries for decades, and the numbers from real engagements make the maintenance difference concrete.

At an upstream oil and gas software company — where applications averaged five million lines of code each, running across UNIX and Windows in mixed-language environments — testers needed deep domain expertise that took years to develop. Those testers couldn’t realistically also be automation engineers. With a keyword-driven, roles-based approach, one automation engineer could provide the technical infrastructure to keep four or five domain testers running efficiently. The domain experts wrote test cases in plain business language. The automation engineer implemented the underlying execution layer once, and maintained it independently.

The maintenance speed gains were measurable and specific. When a GUI object moved within the application, updating the relevant tests took four minutes. When a new GUI element was added, two minutes. When an entire business workflow changed — the most expensive type of update under traditional scripting — the average update time was under fifteen minutes. Under any scripted automation approach, those same changes would typically cascade through dozens of individual scripts, each requiring manual intervention.

At Catalina Marketing, a targeted marketing services company, the before-and-after comparison was even more direct. The same scope of automation work that previously required three engineers over four months was completed by two engineers in two months after adopting a keyword-driven approach — without replacing the existing tool infrastructure. As their Senior Test Manager put it, the new methodology let the team design tests before any code was even available, and run them the moment engineering delivered a build.


The Maintenance Math Nobody Talks About

The reason scripted automation looks attractive at first is that it’s fast to get started. An engineer can write scripts against a working application quickly. The problem only becomes visible later, when the application starts changing — which is always.

With a keyword-driven framework, the architecture is intentionally layered. Keywords are modular, reusable test components that map to business actions rather than to specific UI elements. When the application changes, you update the underlying keyword implementation — once — and every test case that uses that keyword is automatically updated. You’re not chasing changes through hundreds of individual scripts.
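As a sketch of that layering (the keyword names, locators, and dispatch mechanism below are illustrative, not any particular framework's API): test cases are plain data rows that reference business-level keywords, and only the keyword implementations know about UI locators, so a locator change is a one-line fix:

```python
# Automation layer: each keyword maps a business action to the
# technical steps that execute it. Locators live ONLY here.
LOCATORS = {"login": "signin-button", "amount": "amount-field"}

def kw_log_in(user, password):
    # password unused in this sketch; a real keyword would submit it
    return f"click {LOCATORS['login']} as {user}"

def kw_enter_amount(value):
    return f"type {value} into {LOCATORS['amount']}"

KEYWORDS = {"Log In": kw_log_in, "Enter Amount": kw_enter_amount}

# Business layer: test cases are readable data rows a domain
# tester can write without any scripting knowledge.
test_case = [
    ("Log In", ["alice", "secret"]),
    ("Enter Amount", ["42.00"]),
    ("Log In", ["bob", "hunter2"]),  # same keyword, reused
]

def run(steps):
    """Dispatch each (keyword, args) row to its implementation."""
    return [KEYWORDS[name](*args) for name, args in steps]

log = run(test_case)

# When the UI renames the login button, one edit to LOCATORS
# fixes every test case that uses the "Log In" keyword.
LOCATORS["login"] = "login-button-v2"
log2 = run(test_case)
```

The design choice doing the work here is that the test case never mentions a locator. Every script-level change is absorbed by the keyword layer, which is exactly why the per-change update times in the engagements above stay in minutes rather than sprints.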

This is also why the approach scales in ways that scripted automation doesn’t. Keywords built for one test case are reused across others. Coverage that a team builds this sprint is available to build on next sprint. The library compounds in value rather than compounding in maintenance debt.

Lockheed Martin’s Global Transportation Network program had tried and failed to automate its testing multiple times. The GTN system had non-standard GUI objects, timing issues between the application and the test tools, and constantly changing web browser behavior. What finally worked combined significantly fewer test scripts than traditional methods would require with test cases recorded in a plain spreadsheet format using a keyword vocabulary. Testers didn’t need to know the scripting language. When the Government scored the program’s performance at 100% — a perfect award fee — the improvement in the testing process was explicitly cited.
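Spreadsheet-format test cases are simple to picture: each row is a keyword followed by its arguments, exported to CSV for the execution layer to read. The vocabulary, column layout, and values below are invented for illustration, not GTN's actual format:

```python
import csv
import io

# A test case as a domain tester might record it in a spreadsheet.
# Column 1 is the keyword; remaining columns are its arguments.
SHEET = """\
Open Screen,Shipment Tracking
Enter Value,Tracking ID,GTN-00017
Press Button,Search
Verify Field,Status,In Transit
"""

def load_steps(text):
    """Parse spreadsheet rows into (keyword, args) pairs."""
    return [(row[0], row[1:]) for row in csv.reader(io.StringIO(text)) if row]

steps = load_steps(SHEET)
# The execution layer would dispatch each keyword to its
# implementation; here we only show the parsed structure.
for keyword, args in steps:
    print(keyword, args)
```

Because the artifact the tester owns is a spreadsheet, not a script, review and maintenance stay in the hands of the people who understand the business workflow.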


The Staffing Question

One of the less obvious benefits of separating test design from automation is what it does to your hiring and staffing model.

If every tester needs to be both a domain expert and an automation engineer, you’re hiring for a rare combination that’s expensive to find and expensive to keep. If you can split those responsibilities cleanly — domain testers who understand the product deeply, and automation engineers who build and maintain the framework — you can staff more efficiently, and the loss of any individual person is less catastrophic.

The knowledge also transfers more cleanly. When a company that built a framework from scratch loses its automation lead, what goes with them is irreplaceable institutional knowledge baked into thousands of custom scripts. When the framework is built on documented, modular keywords, new team members can be productive within weeks because the logic is readable, not buried in code.


The Question Worth Asking Your Team

If your automation suite is breaking every sprint, the natural instinct is to ask how to make the scripts more stable. A better question is whether scripts are the right unit of automation at all.

The teams that get lasting value from test automation tend to have one thing in common: they built the framework to be owned by the testing organization, not dependent on whoever wrote the original code. That requires a structural decision made early — separating the business layer from the automation layer, and building in a way that non-engineers can maintain and extend.

That’s not a tool decision. It’s an architecture decision. And it’s the one that determines whether your automation gets more valuable over time, or just more expensive.


SDT has built keyword-driven testing frameworks across hundreds of engagements — from embedded systems to healthcare to government programs. If your automation library is costing more to maintain than it saves, a process assessment is where we’d start.