Automating too early

Background:
Doing a kind of product re-write: similar product, but a different platform and a different kind of customer. Every part has been re-badged and repackaged, with the exception of billing and CRM. Since it’s new, features are getting added “back” one at a time, some are being dropped, and by the nature of the target one or two new features or bits of “functionality” will be present. Things that are not changing are security and networking, and obviously we are relying on a lot of code re-use. Initially nothing worked, so the testers all went about learning the tech stack changes while the devs tried to get it to at least bootstrap.

What tends to happen:
As a tester, the chance to test a new product with basically the same “implicit” requirement set is heaven. But we needed to build tooling to make deployment/environment setup and test automation possible, and we still needed to carry on testing the legacy product. New tools are not easy, but we had enough time, and we relied on a lot of manual testing (aka exploratory testing) and a “dogfooding” process to uncover bugs in the new product.

My specific worry:

  • We started writing automation scripts really early, starting with any components of the system that were similar enough to the old system to make them easy. Automation for brand new components got added as we went along, often in step with the way the component itself matured.
  • Test systems and test toolstacks often mirror the things they test. We have a “layer” in our stack to prevent this, and that works, but the “product-knowledge” layer suffers from naming and structural or composition pain.

I suppose this is a warning kind of question, but I am keen to know what kinds of gotchas people hit when automating while the product is still immature. So, a few things I am seeing:

  • Shared test modules that cover the components have names that no longer match the names of the components they test, because the devs renamed the new or re-written components yet again.
  • Product components that function differently under the hood from what the user sees, and thus have test case names that don’t match the user-facing names either. The names of tests don’t match up with the wording in “user stories”, or as I have lately taken to calling them, “customer journeys”. That makes reading test reports mentally hard.
  • Huge test code refactoring going on to deal not only with name changes, but also with “emerging architecture”, because test tooling generally maps to the architecture. And as testers, we are only now starting to find common patterns and paths.

Basically I’m suspecting that a bit of the old “bottom-up” programming technique has been applied to a new testing system for an emerging product, because, in fact, a more “top-down” testing style was simply unachievable in the beginning. It’s always easy to start testing an existing product, and although testing early has been very helpful in the SDLC (software development life cycle) in general, early automation has meant we changed a lot of how we automate. Early automated testing has also given us better testability, and some large SDLC process changes even came out of early testing. Mainly it’s showing me that our test code just looks very different. Not saying that’s a bad thing, but it’s very, very different looking.

2 Likes

Uh, that is a challenging situation to be in - I’ve never faced it myself, as I’ve mostly worked for enterprise clients who are still stuck in the Waterfall dark ages, where Agile is pretty much just a fancy term.

I suppose it would change the way one thinks about testing considerably. The BDD approach comes to mind here. Also, the “Ubiquitous Language” from DDD seems like it would be beneficial - to make sure that naming is consistent throughout the entire process and that everyone is using the same agreed-upon terms for the domain/business-logic lingo.

1 Like

This shows the importance of two things:

1- Like you said, early automation means the product should be more testable, as testability is being taken into consideration early on. And this is very important! It also means that when more things are changing more frequently, this early automation coverage should keep them safer from breaking.

2- You mention the problems arising from changes in code affecting automation. But automation should change with the code; it doesn’t matter if the product is new or old, changes will happen. It’s only that with a higher frequency of change it becomes obvious that a separation between who writes code and who writes test automation is glaringly inefficient. Test automation should go hand in hand with the changes in development, optimally done by the same person who is changing the related product code, or who is introducing it if it’s a new feature. Otherwise automation will always be lagging behind, losing the valuable feedback it would otherwise produce.

Test-first development methodologies seek to get rid of the issues you describe, among other things.

1 Like

OOF, been there!

So I had this project where I wrote API tests before the API was built (based on a designed API).
I had the most “fun” when developers decided not to always follow that design due to changes, without communicating this to me.

Bit of extra info: I was using Postman to do API automation since devs already used Postman.

So what kind of changes happened?

  • JSON structure in the response
  • names of values that changed
  • error messages that changed often

These things I struggled with, since I already had a whole suite up & running.

What did I do?

I made something new in Postman; I haven’t seen it anywhere online yet, but I call it “Hierarchic Testing in Postman”. Postman gives you the luxury of writing test scripts at 3 different levels (collection, folder & request).
I made it so that test scripts are written ONLY at the collection level. So if a response, value or message changed, I only had to change it in that script at the collection level and not in all of the separate requests.

As for the expected outcome: since the expected results never changed, I wrote them in a pre-request script and set them as variables. This way I could check the expected result before running the assertions.
It also really saved me a LOT of work in the long run, which I didn’t expect beforehand.
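
For anyone more at home in Python than in Postman’s JavaScript sandbox, here is a rough analogue of the same layering idea sketched with pytest and requests (all names are hypothetical; this is an illustration of the pattern, not the actual collection): the shared assertions live once at the “collection” level, and each “request”-level test only supplies its own call.

```python
import requests

BASE_URL = "http://localhost:8000"  # hypothetical stand-in service

# "Collection level": expected results defined once, up front,
# much like variables set in a Postman pre-request script.
EXPECTED_STATUS = 200
EXPECTED_KEYS = {"id", "name", "created_at"}

def check_response(response):
    """The single place to update when the JSON structure,
    value names, or error messages change."""
    assert response.status_code == EXPECTED_STATUS
    body = response.json()
    assert EXPECTED_KEYS <= body.keys()
    return body

# "Request level": individual tests stay thin and survive schema churn.
def test_get_user():
    body = check_response(requests.get(f"{BASE_URL}/users/1"))
    assert body["id"] == 1
```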

Would I do it again?

Hell yea, but only from scratch.
That reminds me to put it on my to-do list so I can make an example collection.

2 Likes

@juliovalls welcome back to the discussions, thanks.

  1. Yes. That validates our experience: early testing is super important. It has helped the devs design for component robustness and to design in process improvements early on too.

  2. Test code lives in the same repo as product code; I’m talking E2E and component-level testing here, of course. The devs have to write unit tests, and unit tests run at build time. It’s the feature tests that I notice have to change as the product changes, but because our devs code in CSS, Java and C/C++, testing started to lag. They don’t all have the skills to write test code, so we have tried to make the process stricter and push back on the slide.

We opted for all test code in one repo. We used to have things split over 3 repos, but we bit the bullet, and it seems to be paying off. Having everything in one repo makes testing and CI simpler, but also sometimes makes it trickier; it is a trade-off with understanding. Process-wise…

  • it allows us to use the same Jira ticket for all test work related to a feature
  • E2E test code can be submitted in the same pull request, which prevents breaking the tests

It is not yet at the maturity of test-first, but I’m helping the devs to write tests and take ownership. It does speed up delivery overall to tie test code to product code at the work-item level. A crazy thing also starts to happen when 2 people contribute to a pull request, one a dev and one a tester: they need to agree on when to hit merge. But so far that has not happened often enough to cause fights.

1 Like

That sounds frightening Mizra :slight_smile: . DDD! Naming is important at all levels really. I am posting this question mainly because of the pain of naming and composition I was hitting yesterday. How do people cope with names that change? Even for developers and support it’s a nightmare: if a customer raises a bug that says the XYZ is crashing, we have to translate that to the name we use for that component internally for consistency. But that consistency goes out of the window when the name we gave to XYZ is the “world”-accepted name for the component while internally it’s something else; for example, the word “mouse” becomes, under the hood, the “pointer” and some “buttons”. We often adopt modern Windows naming conventions, but internally we stick to the names Microsoft used to use for some things. So names are important. In some cases, being a platform-neutral product, we even make customer-language choices that are platform neutral so as not to confuse users who happen to be on a platform that uses a different name.
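
One lightweight way of pinning that translation down (a hypothetical sketch in Python, not our actual setup) would be a single glossary module that test code and support tooling both import, so a rename happens in exactly one place:

```python
# glossary.py -- hypothetical single source of truth for component names.
# Internal names stay stable; customer-facing names can vary per platform.

INTERNAL_TO_CUSTOMER = {
    "pointer": "mouse",    # internal term vs. the world-accepted name
    "ondemand": "on-demand",
}

def customer_name(internal: str) -> str:
    """Translate an internal component name to the customer-facing one."""
    return INTERNAL_TO_CUSTOMER.get(internal, internal)

def internal_name(customer: str) -> str:
    """Reverse lookup, e.g. for triaging a customer bug report."""
    reverse = {v: k for k, v in INTERNAL_TO_CUSTOMER.items()}
    return reverse.get(customer, customer)
```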

So when a component changes name, or moves, the test code needs to be renamed in lock step, as Julio pointed out. Sometimes the name changes are actually instigated by QA, not by the devs, which is all good practice. AND it is easier to do early in a project rather than late. So it has definitely been worth automating early; it has just been harder.

1 Like

I’m going to be expecting you to share a blog post about this “Hierarchical” technique soon. I’m supposed to learn how to use Postman at some point, purely for performance testing, but possibly also for integrating 3rd-party endpoint bits into our framework for testing.

3 Likes

Personally I’m not sure you can write automated tests too early. They should be developed side by side with the application to enable and accelerate development, leading to reproducible results and faster delivery of a product with known quality.
If the testing is not helping to achieve that, then something has gone wrong. This approach achieves the aspects others have discussed, such as testability, and avoids testing lagging behind development.

There are those who would disagree, though: “Leading Quality” discusses King, who only add test automation after a game has started to gain traction and shown market value.

However, it sounds to me like your concern is more that tests are not being written in a maintainable way. Naming synchronisation isn’t strictly required, but losing it may negatively impact maintainability.

It sounds like the tests are being written separately from the application. “Accelerate” shows tests are most successful when maintained by the developers working on the application, which avoids this kind of split. There are of course different levels of testing, but I generally aim to keep them all in sync with application development.

2 Likes

Yep, definitely seeing this happening Matthew. And since we push back on the developer tribe to add and update test code as much and as often as they can, sometimes this becomes hellish when a developer cannot find the code for things because the names have changed or the “structural sync” is wrong. At that point the test “tribe” normally has to jump in to improve maintainability, mainly because we know the territory and how to cleanly make such changes. We are talking about 3 or 4 components in my case, so test-maintenance work that crosses components is best done by the QA tribe and not by a component team alone. This is a downside of working in a larger org, where the devs no longer write “all” of the test tools. I am still keen to stop us creating a church/club where only testers are allowed to fundamentally update the testing toolstack, though.

On “combined” product+test pull requests I’m still wary when we are talking about 2 or 3 people. At the moment I try to start out by writing a test in the same pull request as the code change being done by the dev. The side-by-side coding of code change + test change in one go prevents the merge-timing pain that would break tests. But if one person goes on holiday, or hits an unrelated bug on a commit they make, it can block the whole PR for a day or two more than necessary. Small pull requests are not always possible, so we end up with fresh branches spinning off, and the delays merely compound.

It’s the times when a test I want to write requires me to do a big refactor that I chicken out of adding the test code to the same pull request as the feature code. - woosie

/gripe: When devs make a change, they often try to run the entire 2-hour regression suite, just in case their change breaks a feature or breaks the tests. This is an intensely annoying waste of resource and time. When some unrelated feature tests fail, developers are rarely going to properly triage whether the cause was in fact an environment issue or something unrelated to their change. This puts pressure on the DevOps work to make the environments faster and more stable, which is still a good thing.

But QA do need to persevere and test early, and often test even before the code is ready. Just creating a skeleton for a system test case becomes a way of concretely codifying our perceptions of how a feature will interact with the user. I’m a fan of having devs drop things like non-working webpages into the code on a branch, early, so we can pre-automate the page-objects for them, for example. Then I can hand over the skeleton test so that they can complete it and put all the changes into one pull request. Mixed results: some devs run with it, some devs don’t. We started to push this kind of culture to get this “Accelerate” you talk about happening. So far it is not costing us more, but it’s not yet at the point where features ship sooner. I am pretty sure it will be after a month or so of refining the tooling, the knowledge and the skills.
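
To make the skeleton idea concrete, it might look something like this (a hypothetical sketch, not our real code): the page object and test codify the expected user interaction, with the unfinished parts failing loudly until the dev fills them in as part of the same pull request.

```python
# A page object pre-automated against a page that doesn't work yet.
# All names here are hypothetical placeholders.
class BillingPage:
    URL = "/billing"  # placeholder route

    def __init__(self, driver):
        self.driver = driver  # e.g. a Selenium WebDriver

    def open(self):
        self.driver.get(self.URL)

    def submit_invoice(self, amount):
        # Locators get filled in once the page stabilises.
        raise NotImplementedError("wire up once the form exists")

def test_invoice_submission(driver):
    # Codifies our perception of the feature before it is ready;
    # it fails loudly until the dev completes the page object.
    page = BillingPage(driver)
    page.open()
    page.submit_invoice(42)
```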

1 Like

Some thoughts, questions:

  • On test automation following/mapping the architecture of the system: can you elaborate on that? Are we referring to the SUT being written in Java/C++ and so the test automation using the same language? Or something much more than that, like the SUT code doing things a certain way and the test automation following suit?

  • Are the hurdles you are encountering more on the test framework side, on the automated test (cases) side, or both? i.e. automated tests simply call methods in the test framework with appropriate data and do the proper assertions in some given order.

For automating early, an approach I’ve thought of and would prefer to follow is to define/write the automated test case first, using pseudo-code where necessary for things missing from the current test framework, then work backwards from there to implement the needed functionality on the test framework side. As part of this, also write the test case as abstractly, modularly, and at as high a level as possible, such as selectColor("red") instead of something like selectRedColor(). With a modular enough test case and test framework, modifications can be trivial for a technical enough person.

In terms of UI tests, the major changes would be UI element locators and, in some instances, some UI action logic. But those changes are made under the hood in the test framework method implementations, such that the test case itself doesn’t need much modification aside from usage-workflow changes (i.e. introducing a step C in between steps A and B for some actions a user must do for the test scenario). UI locator changes stay on the config file or framework side, since the test case simply references locators by variable names defined in the config file or framework.

Automating this way, with test cases first and test framework changes second, might allow you to think about how to define the framework appropriately as you are writing the tests, rather than have your test cases constrained to conform to what the test framework provides by doing the framework first (and then having to rework it if it doesn’t fit the test cases). But for web/mobile UI testing this approach has a caveat: it requires skill with XPath and CSS selectors, particularly XPath, because despite its complexity and some performance hit, you often can’t define nicely abstracted test logic without resorting to them, due to the need for features like matching by (partial or whole) text, absolute/relative indexing, and ancestor/parent/child/sibling traversal.
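
A small sketch of that shape (hypothetical names throughout; the driver is assumed to be something Selenium-like): the test case is pure workflow, and the framework resolves locators from one config, so UI changes land in the config or the framework methods rather than in the tests.

```python
# locators.py -- the one place where UI locator changes land.
LOCATORS = {
    "color_option": '//li[@class="swatch"][normalize-space()="{value}"]',
    "save_button": "button.save",
}

# framework.py -- abstract actions the test cases call.
class App:
    def __init__(self, driver):
        self.driver = driver  # e.g. a Selenium WebDriver

    def select_color(self, name):
        # selectColor("red") rather than selectRedColor(): the test
        # stays valid when new colors appear.
        xpath = LOCATORS["color_option"].format(value=name)
        self.driver.find_element("xpath", xpath).click()

    def save(self):
        self.driver.find_element("css selector", LOCATORS["save_button"]).click()

# test_theme.py -- the test case reads as workflow steps only.
def test_user_can_pick_a_color(app):
    app.select_color("red")  # step A
    app.save()               # step B
```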

2 Likes

This is pretty common David; it worried me only because it creates churn in the test code whenever a component’s name changes, or the way we access it changes. Why does test code mimic the product architecture? Well, glad you asked.

Basically we write tests in source files. If we are smart, the common source files and the test files all go into a structure that helps us find the helper modules, and named folders help us find the test scripts easily. I like to keep tests separate from the “shared” bits of code, which I call helpers, and from other shared code for “fixtures” like teardown, setup, and mocks. Most of the time you will have top-level folders that are named after the component they focus on.

I quickly took a folder listing to show this:

product\ondemand
product\portal
product\server
product\tools
product\viewer
product\ondemand\tests
product\ondemand\wrappers
product\ondemand\wrappers\mui
product\portal\tests
product\portal\wrappers
product\portal\wrappers\pageobject
product\server\tests
product\server\wrappers
product\server\wrappers\mui
product\viewer\tests
product\viewer\wrappers
product\viewer\wrappers\mui
product\viewer\wrappers\pageobject

Here you can see that I have

  • a tools folder, that’s self-explanatory - always build your own tools, and build from source
  • something called on-demand, that’s an entire product feature
  • something called the portal, that’s the website component tests
  • server - that’s a bunch of tests that cover that component
  • viewer - another bunch of client-side component testing

Notice how we have one feature that gets grouped up as a feature, not as a component. Normally it would have been put under the server folder as a mini-suite; if I look at our older products, that is where we used to put it. It’s a bit late to move it now. I’ve also hidden a folder full of mock tests. That’s what I mean by tests following the product architecture.

By using names that are consistent, we make it easy to add tests in a place where they will be found. We also reduce the risk that someone reading a test report will not be able to work out which component is red, because the testers gave it a name that was inconsistent with both the name the developers use and the name the customers use. It’s sometimes simpler to name test suites based on the tech stack they cover, but I have found that you then miss a chance to convey information.
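
As a toy example of what that buys you (hypothetical names, assuming a portal fixture exists): a test named after the customer journey reads straight off the report, with no translation needed.

```python
# product/portal/tests/test_signup_journey.py -- hypothetical example.
# Named after the customer journey, not the tech stack underneath,
# so a red row in a test report reads like the user story it covers.
def test_new_customer_can_sign_up_and_reach_the_dashboard(portal):
    portal.open_signup_page()
    portal.register("jane@example.com", password="s3cret!")
    assert portal.dashboard_is_visible()
```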

I’m pretty sure a lot of both will happen. When we started automating, one of the components was not even working at all, so we had a placeholder folder for its tests, and dropped a template test into that folder. Even though we were blocked, we were trying to mimic the architecture. We also write our own framework, in Python, and the hurdle is always keeping the portable code portable (it runs on many Linux distros, a few Windows flavours, and a few macOS versions). Our tests all touch the operating system a lot, which made Python a great choice for handling keyboard, mouse and files. Any code that touches the OS gets put into the framework itself, to keep it from polluting the actual test code. That has helped as an approach to reduce code churn whenever a product component changes its place, or gets too big and needs splitting up. Layers are a good defence for porting.
You can see I have some folders called “pageobject”, and some called “mui”; those are both kinds of pageobject, so keeping these near the components they are relevant to is super helpful, and part of my dilemma. A dilemma that would not have existed if I had started with a complete, already-written product, instead of a product that evolved and got added to as time progressed.
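
To illustrate that layering (a hypothetical, heavily simplified sketch, not our real framework): the framework owns every OS touch, so the test code above it stays portable across platforms.

```python
# framework/host.py -- the only layer allowed to touch the OS.
import os
import shutil
import tempfile

def make_workspace():
    """Create a scratch directory the same way on Linux, Windows, macOS."""
    return tempfile.mkdtemp(prefix="sut_")

def drop_file(workspace, name, content):
    """Write a fixture file without the test knowing about paths or modes."""
    path = os.path.join(workspace, name)
    with open(path, "wb") as fh:
        fh.write(content)
    return path

def cleanup(workspace):
    shutil.rmtree(workspace, ignore_errors=True)

# product/viewer/tests/test_open_file.py -- the test itself makes
# no OS-specific calls, only framework calls.
def test_viewer_sees_dropped_file():
    ws = make_workspace()
    try:
        path = drop_file(ws, "sample.txt", b"hello")
        assert path.endswith("sample.txt")  # stand-in for driving the viewer
    finally:
        cleanup(ws)
```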

As for assertions, I’m a fan of not testing result-codes explicitly everywhere, but rather having method calls raise an exception. This does make writing negative test cases trickier. But most testers write a very small proportion of negative test cases anyway, simply due to time pressures.
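
A minimal sketch of that style (hypothetical names): the framework call raises instead of returning a result code, so happy-path tests need no explicit checks, while the occasional negative test pins the expected failure explicitly.

```python
import pytest

class UploadError(Exception):
    """Raised by the framework instead of returning a result code."""

def upload_file(path):
    # Hypothetical framework method: raising on failure means ordinary
    # tests don't need assert-after-every-call boilerplate.
    if not path.endswith(".pdf"):
        raise UploadError(f"unsupported file type: {path}")

def test_upload_succeeds():
    upload_file("report.pdf")  # any failure surfaces as an exception

def test_upload_rejects_unknown_type():
    # The negative case has to be explicit about the expected failure.
    with pytest.raises(UploadError, match="unsupported file type"):
        upload_file("report.exe")
```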

1 Like

Ok, so from your response to my questions it sounds like this is mostly a test/data/file organization issue. Pardon the late follow-up; I don’t check this forum often.

Seems like you have technical debt. If the framework and tests are designed well, I think you should be able to reorganize to a better structure if you wanted to; it’s just a matter of finding the time & resources to address the technical debt. Improving things would be a matter of refactoring the file organization and ensuring nothing breaks in test suite execution and framework operation while refactoring.

Good luck; hopefully you get chances to address the technical debt.

In life (and engineering?), there’s a matter of good enough, so just get the test structure to a point where it’s good enough. Striving for perfection will be an endless pursuit.

2 Likes