Managing Test Library Growth

Do you think at some level of maturity, your automated functional test library should stop growing at the rate of the product-under-test?

That’s my intuition. Maybe I’m wrong. But it seems pragmatic to keep the test library at a plateau, so as not to slow feedback and, therefore, product development. AND, maintaining a non-bloated test library takes effort. The natural state of things, IMO, is for testers to add net-new tests to cover net-new features without doing the hard work of pruning older, lower-priority tests.

Thoughts?


What is that? First time I’ve heard about it.

Every scripted, test-case-based system I’ve seen has failed, either within a few months or within a few years. In some companies, managers tried again and failed again.
The most useful ones were one-liner checklists/reminders.

I generally do my testing in sessions, without ‘tests’ written ahead of time. I report what I tested, how I tested it, and the state of the product directly to interested people, or dump it in a ticket if no one cares.


@ipstefan , I accidentally left out the word “automated”. Ha! Sorry about that. In this case, I am talking about Selenium regression checks that run nightly.


In the case of automation, I don’t see the size of the framework depending on the main product’s size or growth.
I’d rather think of it in terms of efficiency gains when checking for regression issues, and the risk of failures.


First, it depends on what you mean by “library”. “Library” may refer to something like a programming library, a set of helpers and utilities that make testing easier (or possible). But it may also be used as a synonym for a suite, a set of tests that you run.

It’s reasonable to expect that a set of helpers will grow rapidly at the beginning of the project, but slow down later. At the beginning you don’t have any helpers, so you need to create them, and basically every test comes in with new helpers. But if helpers are correctly identified and designed, they are re-used by later tests. At one point you should be able to introduce new tests without touching helpers at all. In my experience, this is what actually happens.
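That helper-reuse pattern can be sketched in plain Python (the order-building helper and its field names are invented for illustration; the same idea applies to Selenium page objects):

```python
# Sketch of the helper-reuse pattern described above (all names hypothetical).
# Early tests force the creation of a helper; later tests reuse it untouched.

def build_order(customer="alice", items=None, currency="USD"):
    """Test-data helper: returns a complete order dict with sensible defaults,
    so individual tests only spell out the fields they actually care about."""
    items = items if items is not None else ["widget"]
    return {
        "customer": customer,
        "items": items,
        "currency": currency,
        "item_count": len(items),
    }

# An early test that motivated writing the helper:
def test_default_order_has_one_item():
    assert build_order()["item_count"] == 1

# A later test, added without touching the helper at all:
def test_multi_item_order():
    assert build_order(items=["widget", "gadget"])["item_count"] == 2
```

Once helpers like this stabilise, new tests become mostly declarative: they state the interesting inputs and the expected outcome, and the helper absorbs the boilerplate.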

It’s also reasonable to expect that a suite of tests will grow along with the product. As new features are added, new tests are added. Of course, the test suite follows the same principles as the product itself: you should routinely look for opportunities to consolidate, generalize, and refactor. Most changes to the product are not purely additive, i.e. some code is added, but other code is removed. The same should happen in the test suite: sometimes new tests are added, but sometimes it’s enough to add a step to an existing test, or modify a step in an existing test. In my experience, a test suite tends to grow uncontrollably, and extra care is needed to keep it in check. It’s easy to add a new test, and a single test may execute fast enough, but this adds up, and nobody wants to wait 10 hours for the test suite to finish (or, if there is no automation, nobody wants to repeatedly perform a suite of a few hundred tests).
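The consolidation idea above can be sketched as collapsing near-duplicate test functions into one table-driven test, so new coverage adds a row rather than another test (the discount function under test is purely illustrative):

```python
# Hypothetical example: three copy-pasted tests (test_bronze, test_silver,
# test_gold) collapsed into a single data-driven test.

def discount(tier):
    """Illustrative function under test: percent discount per customer tier."""
    return {"bronze": 0, "silver": 5, "gold": 10}.get(tier, 0)

CASES = [
    ("bronze", 0),
    ("silver", 5),
    ("gold", 10),
    ("unknown", 0),  # extending coverage is one new row, not one new test
]

def test_discount_table():
    for tier, expected in CASES:
        assert discount(tier) == expected, f"tier={tier}"
```

In a pytest codebase the same shape is usually expressed with `@pytest.mark.parametrize`; either way, the suite grows by data rows instead of by whole test functions.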


…exactly! That was a point I tried to make in my original post.

When I say Test Library, I am referring to the full collection of regression checks available.

I’m inclined to cap the timespan for performing those regression checks, and then let people work within that constraint. It seems easier, to me, to determine how long a feedback loop we are willing to wait for. For example, maybe it’s 2 hours. Two hours might make sense as a heuristic for how much time we need to evaluate a final build of a release candidate before it goes to prod.

This 2-hour cap seems helpful to me. Now testers can work within it, filling it with the right tests. They may also argue they need more capacity, at which point we might decide to spend more money on Selenium grid concurrency, or something like that.
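The back-of-the-envelope arithmetic behind trading suite size against grid concurrency looks something like this (the numbers are made up):

```python
import math

def required_workers(total_serial_minutes, budget_minutes):
    """Minimum number of parallel workers needed to fit a suite whose
    serial runtime is total_serial_minutes inside the feedback budget.
    Ignores per-worker overhead and uneven test lengths, so treat the
    result as a lower bound."""
    return math.ceil(total_serial_minutes / budget_minutes)

# e.g. a suite that takes 10 hours serially, under a 2-hour cap:
print(required_workers(600, 120))  # → 5
```

This makes the trade-off explicit: when testers want to add tests beyond the budget, someone either pays for more workers, or lower-priority tests get pruned.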

Again, it seems to me that not having a feedback-loop cap would allow test-suite bloat.

Your library should grow, if you do it the right way with functional programming. Library calls should have a single responsibility; see the Uncle Bob series on YouTube. It’s easy to maintain.

Hmmm, I somehow always see two questions here: “library” refers to the organisation of my test code itself, while “suite” is what I use when talking about test cases, and there is a fine line to cut between them when dealing with sprawl problems.

The entire question smacks of distraction and bike-shedding getting in the way of context. Testing (manual and automated) has no ending point, but it has a few delineations that help us think about multiple goals. @ejacobson, your secondary aim here seems to be to not slow down development by having many tests that break, and thus false-alarm, when intentional product changes happen. Your primary aim, as I understand it, is simply to reduce bloat? Another common problem is that feature branches mean test automation needs to branch in sync: the longer a feature stays on a branch, the longer your test cases need to stay on a branch too, for example. All of this gets worse the more code you have. That’s a big driver behind keeping product design clean, so that features lie along component lines where possible, but it rarely does in reality. All we can do is be tactical, all of the time.

And my tactic is to coalesce old tests, or just delete them. They consume more resources than you have available; they just do. You are better off deleting old tests and using the time saved to build a brand-new testing discipline: for example, a security test suite, an inter-op environment, or even a performance-testing environment. Good luck; it’s a hard call to make.
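One mechanical way to surface coalesce/delete candidates, assuming your runner records when each test last failed and how long it takes (field names and thresholds here are invented):

```python
from datetime import date, timedelta

def prune_candidates(tests, today, quiet_days=180, slow_seconds=60):
    """Return names of slow tests that haven't failed within quiet_days,
    slowest first, as candidates for coalescing or deletion. Purely a
    triage aid; a human still makes the final call."""
    cutoff = today - timedelta(days=quiet_days)
    quiet = [
        t for t in tests
        if t["last_failed"] is None or t["last_failed"] < cutoff
    ]
    quiet.sort(key=lambda t: t["runtime_seconds"], reverse=True)
    return [t["name"] for t in quiet if t["runtime_seconds"] >= slow_seconds]

tests = [
    {"name": "checkout_happy_path", "last_failed": date(2024, 1, 3), "runtime_seconds": 95},
    {"name": "legacy_export",       "last_failed": None,             "runtime_seconds": 240},
    {"name": "login",               "last_failed": date(2024, 6, 1), "runtime_seconds": 30},
]
print(prune_candidates(tests, today=date(2024, 6, 30)))  # → ['legacy_export']
```

A report like this makes the pruning conversation concrete: each candidate test's cost (runtime) and recent value (failures caught) are on the table before anyone hits delete.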

@conrad.braam, I appreciate the insights. Deleting old tests makes sense. That matches my intuition. But since this thread started, I have also changed my mind a little. Now I am at peace with a gradual expansion of automated tests over time. We should do both: delete lesser tests to keep the suite maintainable, and expand the suite to keep up with the added complexity of the product-under-test (just not as rapidly as the product grows).


It took me probably 4-5 years to come to peace with deleting tests. Some lessons just have to be learned, because everyone will have different constraints and contexts.