The methodology behind SDT didn’t come from a whiteboard. It came from the places that invented modern software.
Most companies in the software testing space describe their experience the same way: a number of years, a list of clients, a claim that their methodology is proven. It all sounds the same after a while. The number gets bigger each year. The client logos rotate. The word “proven” appears on every page.
What rarely gets explained is where a methodology actually comes from. Not the branding around it — the actual ideas, the decisions about how testing should work, the underlying architecture of the approach. Those things don’t emerge from a product roadmap. They emerge from specific experiences, specific problems, and specific environments where the cost of getting things wrong is high enough to produce real learning.
SDT’s founder Ed Kit spent 14 years in two of those environments before he started the company. Understanding where the methodology came from is a reasonable way to evaluate whether it’s worth trusting.
Bell Labs, 1978–1980
When Ed Kit joined AT&T Bell Labs in 1978, he was entering arguably the most consequential research and engineering environment of the 20th century. The transistor was invented there. So was information theory, the laser, the C programming language, the UNIX operating system, and the charge-coupled device. Seven Nobel Prizes were awarded for work done at Bell Labs. In 1972, Dennis Ritchie had written C as a replacement for B, and used it to rewrite UNIX — foundational work that underpins almost every system software environment in existence today.
Kit arrived six years later, and among his responsibilities was testing C compilers, the tools that translated the language Ritchie had built into executable code that machines could run. The standards expected at Bell Labs were not theoretical. The systems being built ran the AT&T telephone network. Failure had direct, observable consequences at scale.
There is perhaps no better environment to develop intuitions about what software quality actually requires — not at a conceptual level, but at the level of practice, rigor, and institutional discipline.
Tandem Computers, 1980–1992
Kit’s next twelve years were at Tandem Computers, which had a single organizing principle that shaped everything it built: systems could not fail. Tandem’s NonStop architecture was designed for ATM networks, banks, stock exchanges, and telephone switching centers — environments where downtime was not an acceptable outcome and data loss was not recoverable. Inc. magazine ranked Tandem the fastest-growing public company in America during its peak years.
At Tandem, Kit managed groups responsible for system testing, performance, software release planning and management, release tools, and software release distribution. He was also selected for the team that defined and established Tandem's software engineering organization, building institutional testing discipline from the ground up at a company that could not afford to get it wrong.
He also held ownership for software testing across Tandem’s data communications, information management, and distributed systems management products. Twelve years of testing software for systems where zero data loss and maximum uptime were not aspirational goals but product requirements.
The Standard That Still Governs the Industry
In 1991, one year before founding SDT, Kit served on the IEEE Software and Systems Engineering Standards Committee, working on IEEE Standard 829, the IEEE Standard for Software Test Documentation, first published in 1983. That standard became the dominant international reference for how the industry documents software testing.
This is not a line on a resume. Writing a standard means defining, for the entire industry, what the vocabulary of software testing should be, what documentation practices should look like, and what quality means in operational terms. It requires synthesizing accumulated knowledge from across the field and making binding decisions about how that knowledge should be formalized.
Kit founded Software Development Technologies in 1992, the year after that committee work was complete.
From Practice to Methodology
Between 1992 and 1999, SDT ran assessments and consulting engagements that consistently revealed the same gaps across organizations of every size and type. Testing processes were inconsistent and poorly defined. Test design and automation were conflated in ways that made both fragile. There was no shared vocabulary for how testing work should be structured or measured. The same problems appeared at a regional bank and at a Fortune 50 manufacturer.
By 1999, Kit had begun formalizing SDT’s keyword-driven testing methodology — the approach that became the company’s core intellectual property. It wasn’t built speculatively. It was built in response to documented, repeated failures observed across hundreds of real engagements, grounded in a framework developed by someone who had spent over two decades at the sharpest edge of what software quality actually requires.
The methodology was refined through use. WellPoint Health Networks in 2003. Southwest Airlines and PepsiCo in 2003 and 2004. FedEx in 2005. Siemens in 2006. Each engagement produced feedback that went back into the IP. The framework was later embodied in a US patent, US 9,489,277 B2, and has been actively in use ever since.
Why Any of This Matters to You
When you evaluate testing partners, you are making a decision about what kind of knowledge you want working on your system. There are faster options, cheaper options, and more recent-vintage options. Some of them will have well-designed websites and plausible-sounding methodology descriptions.
What they won’t have is a methodology built on the actual experience of testing C compilers at Bell Labs, running software quality for fault-tolerant financial systems at Tandem, co-authoring the international standard that defines how the industry documents testing work — and then spending thirty years refining those instincts through real engagements at real companies that had real consequences for getting it wrong.
SDT is not a large generalist firm. It was built by one person with a very specific and very deep body of experience, and everything the company does reflects the architecture of thinking that experience produced. The keyword framework doesn’t exist because it was a good idea on paper. It exists because decades of observing how testing breaks taught one engineer exactly why the standard approach fails — and what it takes to fix it structurally.
Experience isn’t the biggest number in the room. It’s the reason the methodology works.
SDT has been building keyword-driven testing frameworks and transforming test organizations since 1992. If you want to understand how the approach would apply to your environment, a process assessment is where we start.