Background:
Doing a kind of product re-write: similar product, but different platform and a different kind of customer. Every part has been re-badged and repackaged, with the exception of billing and CRM. Since it’s new, features are getting added “back” one at a time, some are dropped, and by the nature of the new target, one or two new features will be present. Security and networking are not changing, and obviously we are relying on a lot of code re-use. Initially nothing worked, so the testers all went about learning the tech-stack changes while the devs tried to get it to at least bootstrap.
What tends to happen:
As a tester, the chance to test a new product with basically the same “implicit” requirement set is heaven. But we needed to build tooling to make deployment/environments and test automation possible, and we still needed to carry on testing the legacy product. New tools are not easy, but we had enough time, and we relied on a lot of manual testing (a.k.a. exploratory testing) and a “dogfooding” process to uncover bugs in the new product.
My specific worry:
- We started writing automation scripts really early, starting with any components of the system that were similar enough to the old system to make them easy. Automation for brand-new components got added as we went along, often in step with the way the component itself matured.
- Test systems and test toolstacks often mirror the things they test. We have a “layer” in our stack to prevent this, and that works, but the “product-knowledge” layer suffers from naming, structural, and composition pain. (A rough sketch of what I mean by that layer follows this list.)
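To make that concrete, here’s a minimal sketch (Python, with hypothetical names and a fake client, not our real code) of the kind of layer I mean: the tests speak in stable, customer-facing terms, and only one module knows what the devs currently call the component.

```python
# Minimal sketch of a "product-knowledge" layer (hypothetical names,
# not our real code). Tests talk to a stable facade; only this module
# knows the product's current internal names.

class FakeClient:
    """Stand-in for a real HTTP client, just so the sketch runs."""
    def post(self, path, json):
        return {"status": "confirmed", "path": path, "echo": json}


class CheckoutJourney:
    """Stable facade in customer-journey vocabulary."""

    # Devs have renamed this component twice already; the rename is
    # absorbed here instead of in every test script.
    _ENDPOINT = "/api/v2/order-pipeline"  # was /api/v1/basket

    def __init__(self, client):
        self._client = client

    def place_order(self, sku: str, quantity: int) -> dict:
        """Wraps whatever the component currently calls this operation."""
        return self._client.post(self._ENDPOINT, json={"sku": sku, "qty": quantity})


def test_customer_can_place_order():
    # The test reads in "customer journey" terms, not component terms.
    journey = CheckoutJourney(FakeClient())
    response = journey.place_order(sku="ABC-123", quantity=1)
    assert response["status"] == "confirmed"


if __name__ == "__main__":
    test_customer_can_place_order()
    print("ok")
```

The point being that a rename like “basket” to “order-pipeline” gets absorbed in one place instead of rippling through every script.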
I suppose this is a “warning signs” kind of question, but I’m keen to know what kinds of gotchas people hit when automating while the product is still immature. A few things I am seeing:
- Shared test modules that cover the components have names that no longer match the components they test, because the devs renamed the new or re-written components yet again.
- Product components that work differently under the hood from what the user sees, and thus have test case names that don’t match the user-facing names either. As a result, the names of tests don’t line up with the wording in “user stories”, or as I lately like to call them, “customer journeys”. That makes reading test reports mentally hard. (One mitigation we’re trying is sketched after this list.)
- Huge test-code refactoring is going on, to deal not only with name changes but also with “emerging architecture”, because test tooling generally maps to the architecture, and as testers we are only now starting to find the common patterns and paths.
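For the naming mismatch specifically, one thing we’re experimenting with (the names below are made up, and whether this generalises is an assumption on my part) is keying every test to a stable customer-journey ID and keeping a single map from journey IDs to whatever the devs currently call each component, so reports can be read in either vocabulary:

```python
# Hedged sketch of one mitigation (hypothetical names): key each test
# to a stable "customer journey" ID, and keep one mapping from journey
# IDs to the components' *current* names. Renames touch only this map.

JOURNEY_TO_COMPONENT = {
    "CJ-017 Customer places first order": "order-pipeline",    # was "basket"
    "CJ-042 Customer updates payment card": "billing-bridge",  # unchanged
}


def report_line(journey_id: str, passed: bool) -> str:
    """Render a report row in customer-journey wording, with the
    current component name attached for the devs."""
    component = JOURNEY_TO_COMPONENT.get(journey_id, "<unmapped>")
    status = "PASS" if passed else "FAIL"
    return f"{status}  {journey_id}  [component: {component}]"


if __name__ == "__main__":
    print(report_line("CJ-017 Customer places first order", True))
```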
Basically, I suspect that a bit of the old “bottom-up” programming technique has been applied to a new testing system for an emerging product, because a more “top-down” testing style was simply unachievable in the beginning. It’s always easier to start testing an existing product. Testing early has been very helpful in the SDLC (software development life cycle) in general: early automation changed a lot of how we automate, gave us better testability, and even drove large SDLC process changes. Mainly, though, it’s showing me that our test code just looks very different. I’m not saying that’s a bad thing, but it is very, very different looking.