Mature application with NO testware at all

I started my new job today as a QA Test Analyst. Got the laptop, the logins, found the toilets and the restaurant, had my picture taken by HR, etc.

My lovely new boss asked what my priorities are. Pretty obvious really. They haven't automated, so have a good look round the software, have a good look through the test suite, run a full test to see what coverage is like, fill in gaps, modernise, etc.

Then came the bombshell. There is no test suite. My predecessor simply used to be asked to test each feature as it was developed, and re-tested bugfixes, but crucially, WROTE NOTHING DOWN! No one questioned this.

My initial reaction is to go through the software, list out all the test conditions I can find in the major chunks of functionality, prioritise them and then write test cases and scripts to cover the major ones, then slowly refine everything as time allows, so we have something sensible. Then, as new requirements come in, we can analyse them properly as required.

How would you proceed?

5 Likes

I have started two testing jobs where there were no test cases. The place I have always started is understanding which features are most important: those important to revenue, not losing customers, etc.
Once I have that, I start from the top of the list and work my way down, focusing on just the most important parts.

From there I fill in the gaps based on what I've learned about the customer.

I know it feels super daunting, but you will get where you need to be.

6 Likes

The things I'd do:

  • write some super high level, top-of-the-test-pyramid, end-to-end/functional test automation, just as a sanity check, and get that wired into CI/CD (a minimal sketch follows this list) - this is mainly to provide a minimal safety net, as well as to get the team used to requiring automated tests to pass before merging
  • start working on the process and culture bits to ensure that all new code has tests - most of this will likely be unit and integration, plus the occasional change to the end-to-end tests. This might also involve shifting the culture to write tests when they update features that touch old code. Depending on the team, they may not have a history of unit and integration tests, so that's a huge lift.
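To make the first bullet concrete, here is a minimal sketch of what such a top-of-the-pyramid sanity check could look like, assuming a web app reachable over HTTP; the base URL and endpoints are placeholders I've made up, not anything from the post above.

```python
# smoke_test.py - a handful of coarse checks, run on every merge/deploy.
# APP_BASE_URL and the endpoints below are placeholders; point them at
# whatever the application under test actually exposes.
import os

import requests

BASE_URL = os.environ.get("APP_BASE_URL", "http://localhost:8080")


def test_app_responds():
    # The landing page responds at all - the most basic safety net.
    response = requests.get(BASE_URL, timeout=10)
    assert response.status_code == 200


def test_login_page_renders():
    # A key user-facing page loads and contains something we expect to see.
    response = requests.get(f"{BASE_URL}/login", timeout=10)
    assert response.status_code == 200
    assert "Log in" in response.text


def test_health_endpoint_reports_ok():
    # If the app exposes a health/status endpoint, assert on it here.
    response = requests.get(f"{BASE_URL}/health", timeout=10)
    assert response.status_code == 200
```

Running `pytest smoke_test.py` as a required step in the pipeline is usually a one-line job definition, and it quietly establishes the "tests must pass before merging" habit without a big upfront investment.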
5 Likes

Ask questions.

What function does our product have? What customer needs does it meet? How does it impact our customers if it fails? What are the risks of failure? What liability does the company have if the product fails? How much could we be sued for? What are the worst headlines that there could be in the event of failure? (Look up "British Post Office" and "Horizon scandal" if anyone wonders why you ought to be asking these questions.)

Start with the highest level and then drill down to individual user stories for each feature in the product.

What's that? There are no user stories? Well, what a surprise. Looks like you're going to be busy when it comes to defining tests.

4 Likes

Find who the product owners and any currently in-progress feature owners are. Understand the business value and goals. You are pretty much on track, good luck. Regression checks will probably sort themselves out; it's the work in progress right now where the most bugs get injected, as you know.

I really hope our responses have given you confidence that you are the right person in the right place, @jon_thompson. You have been around us long enough to know. Probably the best community to ask this kind of question of. The best of your tester communication skills come into play now.

4 Likes

My first step is to determine the highest product risks, which must be tested.

The next step would be to assign tests to people. For example, developers can write unit tests.
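As an illustration of that division of labour (not something from the post above - calculate_vat is a made-up function), a developer-level unit test can be as small as this:

```python
# A unit test in its simplest form: developers cover one function in isolation.
# calculate_vat is a hypothetical example; the shape of the test is the point.
def calculate_vat(amount: float, rate: float = 0.2) -> float:
    return round(amount * rate, 2)


def test_standard_rate():
    assert calculate_vat(100.0) == 20.0


def test_zero_amount():
    assert calculate_vat(0.0) == 0.0
```

Checks like these sit naturally with the developers, which frees the tester's time for the risks that unit tests cannot reach.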

Depending on the environment I would aim for lightweight testware. Sometimes it is required to write down full test cases; on the other hand, I have worked in healthcare companies where short descriptions of executed tests and their results were enough.

In more detail, I would use a mix of test cases, checklists, and exploratory testing.

  • Test cases are used for complicated situations, for example when it takes a lot of small steps to set up a proper test.
  • Checklists can be used for features that can be tested easily.
  • Exploratory testing can be used to find unknown unknowns. Test charters can be used for focusing and reporting.
2 Likes

I suspect you need to be careful that you are not initially taking them a step backwards.

I've seen this scenario many times.

Lots of test cases created. Majority were fairly useless, rarely found new things and carried a lot of waste.

Test suites. Documented in painful detail, often given to new starters in the false belief that they could learn both about testing and the product from them. There are so many better options for learning, so again these suites were generally waste.

In many cases the best thing was to get rid of those test cases and suites and move to a much more discovery-focused testing approach with exploratory test sessions; in almost every case I have seen, this was a massive step forward. It usually does come with testing session notes, risk analysis and general useful info if someone new is coming on board, but some teams just adapt to documenting purely what the team needs at that point.

Check if the team had any problems with what your predecessor was doing, what they did well and what opportunities there are for improvement.

A test cases and suites approach may be what you know well, but it could be a step back, so check whether they had it before and consciously improved on that approach to a better way; perhaps they didn't, and a reset to the very basic testing model may be what they need.

Leveraging automation is a separate thing. There are almost guaranteed to be opportunities to leverage automation, so that needs an "automation opportunity review" - do not do this in isolation; do it with the developers if at all possible.

Getting a basic smoke test running will often seem like an obvious first step and an easy win, and it does look impressive to managers initially, but it can be time consuming; if your predecessor was doing really good testing and you are spending that time on automation instead, to the team it may seem like a drop in testing coverage.

If there is no automation at all, then starting with unit tests may make sense, but then you need to consider whether you have the influence over developers to push this good practice, or whether you have the skills to cover it yourself. Often testers will jump to cover this at a slightly inefficient layer because that is their skillset; for me, I am not even sure some UI automation is better than no automation if there is no unit coverage.

Start with talking with the team and learning the product.

The last time I did this it sort of went like this.

Got everything set up so I could test - remotely, this can take longer than you think.

Talked with every team member - challenges, a get-to-know-you session; ideas about where they could help me and where I could help them came out of this.

As soon as I could test, I was testing. There was a transition backlog; that became the priority.

Once that was cleared, and I was adding value every day rather than being a bottleneck, then and only then could I look at improvement opportunities with the team.

As a team we made a lot of improvements including the introduction of developer automation.

Note that not once did I look for test case or test suite documentation and not once did I write a test case down.

7 Likes

Thanks for the input, everyone. I think it's going to be OK. Luckily, it's an internal app and it turns out that the users only use a fairly small core set of functions, so I'm going for a lightweight approach with a small, manageable manual test suite backed by charter-led, time-boxed exploratory testing. Automation will come later.

The devs are grown-ups and I trust them. The random factor is that they come up with enhancements, so there's always a major reactive element to testing. Luckily, I'm also expected to demo new ideas to representative users, and the standups are as democratic as they're supposed to be, so I get some say over any proposed changes.

Should be OK (famous last words!).

5 Likes

Hi @jon_thompson, if this is an internal app with a small number of functions, maybe you'd consider using a no-code tool like testRigor to quickly cover the functionality. The advantage of testRigor is that it is very easy to adapt to new functionality since all its scripts are in plain English.

1 Like

I work on a mature application (i.e. ancient) and I've only written a handful of test cases in the past few years. Once upon a time I spent more time writing and reviewing test cases than testing, so we've moved further and further away from this. In fact, we are going to get rid of our test case management software soon.

Here's what I prefer to do:

  • Keep a checklist of things that need to be checked during release/smoke testing, etc.
  • Provide notes on stories as we test them, saying what was done, ideally with more detail in a task.
  • For anything more complicated I'm also writing a document - perhaps in Google Docs, Sheets or using my exploratory notes tool - and as well as including this in my comment on the story, it gets stuck on Google Drive for safe keeping.
  • Include any complex techniques that I needed in the wiki for future reference.

This approach assumes that there's documentation somewhere on how to perform the tests. For us this is the user docs (if it's not clear to the tester how to do an action, how would a user know?), wikis for more technical techniques and, if confused, "Ask Richard".

I actually attended an interesting workshop by someone who took over a product with the opposite problem, and there is likely a lot of transferable knowledge in it. They took over a system with thousands and thousands of test cases. Too many to be of use. Their approach was to:

  1. Break down the software into a tree for features (Excel was used IIRC).
  2. On each node, ask what you know about test coverage (manual and automated, including unit tests), how often it is changed and how flaky it is.
  3. Colour code it, or use some other formatting, to make it visible what is bad and what is BAD.
  4. See if you can break it down further, pushing for more detail.

By the end you have an idea of where you might want to focus your testing efforts. I wouldn't go creating loads of test cases myself; however, if you were to do so, then consider keeping them as light and simple as possible.
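For anyone who prefers code to a spreadsheet, here is a tiny sketch of the same workshop idea. The feature names and scores are invented purely for illustration, and the risk heuristic is my own assumption, not something from the workshop.

```python
# feature_risk_map.py - break the product into a feature tree, annotate each
# node with what you know, and surface the areas that most need attention.
# All names and numbers here are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class Node:
    name: str
    coverage: int = 0   # rough 0-5 rating of manual + automated coverage
    churn: int = 0      # rough 0-5 rating of how often the area changes
    flakiness: int = 0  # rough 0-5 rating of how often it misbehaves
    children: list = field(default_factory=list)

    def risk(self) -> int:
        # Assumed heuristic: risk rises with change and flakiness, falls with coverage.
        return self.churn + self.flakiness - self.coverage


def walk(node: Node, depth: int = 0) -> None:
    # "!!" roughly corresponds to BAD and "!" to bad in the colour-coding step above.
    marker = "!!" if node.risk() >= 4 else ("! " if node.risk() >= 2 else "  ")
    print(f"{marker} {'  ' * depth}{node.name} (risk {node.risk()})")
    for child in node.children:
        walk(child, depth + 1)


tree = Node("Product", children=[
    Node("Login", coverage=4, churn=1, flakiness=0),
    Node("Billing", coverage=1, churn=4, flakiness=3, children=[
        Node("Discount rules", coverage=0, churn=3, flakiness=2),
    ]),
    Node("Reporting", coverage=2, churn=2, flakiness=1),
])

walk(tree)
```

Whether it lives in Excel or in a script, the output is the same: a visible map of where coverage is thin relative to how much an area changes.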

3 Likes

I'm another one working on a "mature" (aka originally written in the early 00s) application. I was also the first dedicated test person the organization employed (this was before they got purchased by a mega-multinational company - but I'm still the only test person in that division).

I spent a lot of time learning the software, what it did, and why it did certain things the way it did. There were always reasons, and they were always understandable if not necessarily sufficiently forward-thinking (I guess in 2002 the idea that they'd be hosting nearly 20,000 clients on the system was all but unthinkable). I took a lot of notes.

Eventually my notes turned into a rather large set of regression documents on the team wiki - these aren't test cases so much as they are guidelines of what to watch out for. Things like: if Module A is changed, then Module Q needs to be retested because it depends on most of the same data as Module A uses.

I've also managed to build a small but functional automation test suite which breaks about 99% of the normal test automation rules (you try keeping each test granular when the core unit of the application is "one payroll cycle from start to submit"). It works well enough to save me the better part of an hour's manual regression each deployment.

The key thing here is that the first step is to learn the ins and outs of the application. This can take more than a year if the application is big and complex enough.

Then you can figure out how best to work with it and start building whatever test documentation and/or automated testing you need around the most critical areas.

4 Likes

There should be a special "Dragon Slayer" badge for every tester who works in such a product team.

3 Likes

I'd say the answer lies in the basics of 'software testing'.
For example:
"testing is an empirical, technical investigation of the product, done on behalf of stakeholders, that provides quality-related information of the kind that they seek" - Cem Kaner
"testing is the process of evaluating a product by learning about it, through exploration and experimentation which includes questioning, study, modeling, observation, inference" - James Bach
"In my view, a test case is a question that you ask of the program. The point of running the test is to gain information." - Cem Kaner

My company is working on a tool called gravity that could help with the high-level tests. It first records what users do on the application ("à la GA", PII-free) and then allows you to draw up test cases based on user behavior.