Are formal test plans and lack of shifting left red flags?

I posted this on LinkedIn, and thought it might be an interesting discussion for folks here too. Feel free to reply wherever you prefer. Excuse the tags. Algorithm and all that.

I’m still seeing a lot of content around a formalised #TestPlan and getting #SoftwareTesting specialists involved earlier. This sparks two questions:

  1. Does emphasis on formal test planning imply a lack of #ContinuousDelivery? My line of thinking is that I’ve only really used formal test plans for planned releases which are somehow “special”. As in, they don’t happen often enough to be second nature - they’re somehow a big deal, perhaps because the industry is regulated. And if there is a lack of CD, but no constraint which requires this, is that a bad thing? Why (not)?

  2. How would you feel about entering an organisation where quality and testing specialists aren’t already part of the full SDLC? I used to see supporting this transition as part of the challenge, but experience and a shift in priorities has made me somewhat more wary / wise to that way of thinking, depending on how you look at it. To me, it says a lot about the attitude towards quality and testers, fundamentally limiting the positive impact a #QualityEngineer can have, which has negative personal consequences. And I guess that’s part of the point. Testers in these environments “just” test, in the very basic, somewhat disconnected sense. They’re not empowered to engineer anything, let alone quality across the organisation.

My ideal would be to have Quality Engineers involved in anything that relates to what customers / users have access to. This isn’t limited to the software product, and isn’t black box. Everyone would be empowered to test and contribute actively towards quality. Continuous delivery would be a given, supported by appropriate #TestAutomation / #AutomatedChecks , frequent #RiskAssessments and a robust #TestStrategy . There would be mutual respect between disciplines / specialisms, and a healthy curiosity to learn about, and from, other areas. This isn’t just a nice idea. I’ve seen it, I’ve been part of it, and I’m helping to grow it.

What are your thoughts on these topics? Are formal test plans in non-regulated environments a red flag? Do you stay away from projects where testers aren’t already involved early?

5 Likes

We do test plans in the same manner, for bigger releases. For vulnerability and patch releases we don’t…unless there is a concerning impact where we feel it would be beneficial. We’ve definitely moved away from using test plans as a “sign off” of the testing.
We use Test Plans as a tool to be confident we understand the risks of the release, that the wider engineering/product teams understand those risks and that we agree that our approach to testing is balanced to mitigating those risks as efficiently as possible.
However, I don’t think it necessarily means it is due to a lack of continuous delivery. I mean we’ll still have daily alpha builds so we can check features/fixes as they’re being developed. But there has to be a special eye on what we cut and deploy. So the balance depends on what your customer sees (or wants to see) of your process. In our industry, alas our customers don’t want to see our “working”, they want to see a complete working deployment. So the continuous delivery is more an internal process.

I did :joy:. When I joined my current company 7 years ago, I was brought in as the Test Lead. The team was the test team. They were involved in conversations only if they involved testing, and the process supported that (wait until the ticket gets into the Jira test column). I was comfortable with that, and so was the small team…until I wasn’t. Too much information was coming to us late and we were being looked at as the problem. So I rebranded the team to QA (in hindsight I wish it had been QE) and spent my efforts coaching the business on what our skills can add beyond “just testing”. Brace yourself, that doesn’t work in a month, or even a year…that work is continuous. You can have victories and setbacks in the journey. But that is the challenge we, as quality leaders, need to embrace and never give up on.

3 Likes

I generally do not have any red flags on things owned by others, apart from something being completely untestable.

Test plans are a good thing: they get you thinking about your testing and help others understand what you’re going to do and the value you add. It’s when they start introducing waste or serve no value that they become a problem, and a red flag if it’s under my authority.

It usually adds value to get involved early, but I’m also happy to jump in late and help out. It might seem like lesser value than it would have been had I been involved early, but it’s still solid value, and maybe at times even more, as products without early involvement tend to have more issues to discover and investigate later. It’s increased risk, but not a red flag in my view.

5 Likes

I agree with Andrew in that a test plan does not have to be a red flag. It might go without saying, but it is a step onwards from no test plan at all; i.e. “just check everything works”.
A test plan can in fact be part of continuous quality, if used right. For example, I am currently trying to get our devs to write the test plan before coding, as part of defining the story/ticket itself, and to liaise with me on that.
Of course, depending on the team or the feature, very clearly defined acceptance criteria might be enough. But again, they are useless if not defined precisely before the dev starts coding.
I think there is a lot of leeway in this area. The important part is not whether you use test plans or something else, but that you discuss and define your goals and the path to reaching them as far left as possible. But I don’t think everything can shift left; there still needs to be testing at the later stages.

However, I am only starting to experiment with our processes in this respect. And I think you have more experience in working in large teams and pushing for this sort of change. I will certainly look out for these issues carefully. Very interested to hear what others have to say. Maybe these are orange flags that should shift attention to ask questions about the processes.

3 Likes

It depends.
I will stay away if I’m not familiar with the place and have no emotional bond with them.
But I also like the challenge of changing that. For that, though, I need some support and incentives. I would not fight for it at a foreign place, against all odds.

Also, money and safety have become increasingly important topics, and therefore I have to balance such details.

3 Likes

Do teams create development, or project plans? And if we’re trying to be ‘continuous’, should any test plans just be a part of those, or whatever may already exist?

2 Likes

How formal?

I think you’ve said it yourself, where you say that formal test plans are for special occasions. Test plans need to be as formal as required and no more.

I think that there’s a lot of premature formalisation, and a lot of over-formalisation, which is probably a misunderstanding of the exploratory nature of testing (what we need to know will change based on what we find) and a major symptom of fear. A fear of the unknown, and a fear of our own ability to test software - or a fear that someone else won’t be able to test software, which can be seen in formal, written test cases with explicit steps.

Test plans, I think, are incredibly valuable to think about what factors are involved in testing. Resources, constraints, factors of the software, ways to think about quality, techniques we might apply. Also to think about our available resources, time being an ever-looming constant, and how we apply them to meet our ends.

The difficulty, of course, is that we begin to look at a function and realise that it’s far more complex or less complex than we thought. It may be simple, but integrated into some other important thing that creates a new dimension of control states or something. So it’s going to be a balance depending on the situation. It may be more valuable to dive into the software, or it may be important to think about the factors involved. I often prefer to explore software as part of that plan generation, during recon, or note down plan-related factors as part of design meetings or kick-offs.

There’s also the question of who needs access to that test plan. A communicated document is always going to be more formal, because it has to spell out the implicature and tacit information that scribbled notes can leave out, not to mention that the test plan constantly evolves in the mind of a tester.

To your questions:
I think that if a formal, written test plan is necessary to get changes safely and sustainably into production then that’s a limitation of reality that CD has to deal with. Hopefully that’s not the case, as any formalisation is costly, but I’d not want to sacrifice good software to the gods of CD, or any other process or system.

Formalisation can be a useful or vital part of the SDLC, but I think often it’s a disrespect towards the abilities of people who do not need granular instructions and rules to do good work, and comes from the same place of fear as micromanagement. If I’m entering an organisation with no specialists in testing then, as that specialist (or generalist) I assume they suddenly want one. This can be worrying, because not understanding testing leads to a lot of the problems testers face. I have to agree that it tends towards making a negative environment for testers, particularly in large organisations with a lot of status quo where testers have enormous pressure not to make difficult changes that change people’s workload and process.

I’ve been using quality plans with a bias towards testing activities to cover quality risks. So a whole-team plan that has a few testing sections, including developer and customer testing details in addition to tester-specific things.

Similarly with test reports: I prefer a test paragraph in a team sprint review document, for example.

Now and again a separate test plan, usually on request from someone wanting visibility on testing separately; a light one-to-two pager. This is the one I think the post references, which gets flagged as a concern if it gets rather large and nobody actually uses it.

1 Like

Thanks for sharing! Sounds like you’re getting good benefits from your test planning process. I wonder… Why have these “bigger releases”? How do you decide what qualifies for a test plan and what doesn’t?

1 Like

That’s a really interesting perspective - doing test plans per story / ticket, as opposed to per release. I agree that it’s not the plan itself, so much as the discussion and discoveries. I guess requiring a formalised test plan can be a way of trying to make sure those things are happening.

To clarify, I’m definitely not against planning testing activities, and I don’t think testing activities later in the process should be dropped / aren’t valuable, I just question the need for something very formal (and usually bloated), and worry about the cultural reasons why testers might not be involved earlier as well as later.

And not that you necessarily are, but I’m projecting when I say: don’t doubt yourself. I see you being very engaged and thoughtful, and so I’m confident that even if you’re only starting to experiment with things, you probably have a better handle on it than a lot of others out there. I’ve seen and done a lot of things, but that also makes me biased!

1 Like

I’ve seen project plans, but not formal development plans like the test plans I’ve seen. It all feels very waterfall to me, and there are so many downsides to that.

1 Like

Thanks, I think you make good points about over-formalisation and fear.

I think the factors involved in testing that you mentioned are really important. A couple of things make me wonder:

  • Why not consider those things earlier
  • What is “earlier” - when is the test planning done
  • What does the process of test planning look like for you; does it being formalised prevent you from having better conversations / involving more people?

Outside of regulated / audited environments, I do often wonder who the test plan is actually for, and whether or not they read it / get what they want from it. From reading the other responses, it makes sense to me for a formalised test plan to be more of a tool that triggers a set of good practices, rather than a goal to have a document.

3 Likes

True, I used the word “bigger”, which is partially wrong. To be more precise, for “major/minor” releases we’ll have a test plan in Confluence. But it’s not a big document; it’s a chance for QA just to put their thoughts down, plan how they approach mitigating the quality risks, and share it. As with all processes like this, it continues if it’s still a valuable thing to do.
However, the word “bigger” is also partially right, because some of our major/minor releases are big, too big. That makes the test plan easier to produce as there’s no need for risk assessment or tactics - especially if it’s risky enough to utter the words “Full Regression Test Run” :grimacing: - but the testing is obviously harder and longer. We’re getting better at tracking our release frequency, but more work to be done as always :grin:

1 Like
  • Why not consider those things earlier

Depends on the things and their granularity, but basically because we don’t have access to enough information to make good decisions or come to useful conclusions. We can think about how much time we have, but if something big changes, we lose a client, we gain a client, the market shifts, half the staff get ill, then no amount of thinking about the time we had early on will change the new reality. And so it is for all things. It’s hard to know how much effort something will take without investigating it first. Development effort is absolutely no help either, as something that’s easy to make can be hard to test properly, like a search bar or an important new input field. It may be revealed that a client finds a particular thing very important to them.

  • What is “earlier” - when is the test planning done

I’d venture to suggest that test planning is never done, it just continuously evolves as we get new information. As the project and product twists and turns, changes, and of course reveals itself to us as we investigate it, our understanding of logistics, resources, strategy, quality, techniques and environment is going to change, and so what we actually choose to do is going to change. The alternative is to stick more to a plan we started with even when that plan starts to not make sense, and if you want my argument against the waste and tendency to follow instructions over reality that premature and over-formalisation generates, there it is.

As for when initial, formal, written test planning is done, that feels like it’s going to be very contextual, and based on why someone feels that high-formality documentation is actually useful in the real world. Given a concrete example I suppose it would be easier to talk about, but I can’t think of one right now. I guess planning a plan is a thing. What is the plan for? Is it to give someone else something they want, or a tool you’re going to use to help your testing? Why are we making one, and therefore what should it contain?

Anyone who’s done a session and seen 7 new sessions drop out of it knows that exploration breeds exploration, so what is so resistant to exploratory revelation that we can talk about it without booting the product? Or perhaps more importantly that we should? As always, it depends.

  • What does the process of test planning look like for you; does it being formalised prevent you from having better conversations / involving more people?

That is a very good set of questions.

For me test planning is just consideration. It happens in the mind of a tester, up until that information needs to be recorded to be shared or remembered. When it’s recorded, the whole plan doesn’t go in the document. It’s just a representation of the thoughts someone has about a situation. It probably has some generalised thoughts about test coverage, risk ideas, plans to mitigate those risks, testability issues, very broad schedules and the like - whatever feels like a good idea. I’d also say that not everything in a test plan will survive contact with air, and that’s okay. We need space to have a worry and then watch it evaporate if it needs to.

A test plan could remain all in my head. It could be scribbled on a white board for the team. It might be a lengthy digital document.

does it being formalised prevent you from having better conversations / involving more people?

I think that formalisation can lead to fetishization. Why do we need to do this thing? Because of The Document! Can’t we just… No! The Document! It’s a convenient and easy replacement for the hardest thing testers have to do and one of the things companies often hate to do - deal with reality. We will build any tool and invest any amount into not dealing with the fact that stuff is hard to learn and it requires people thinking about it. It depends a lot on what The Document contains, how it is written, who wrote it, and the general attitude of people towards it, including management.

It also can be made to look and feel a lot more intelligent and thoughtful than it actually is. Formalisation often comes packaged with such an official and comfy feeling to it because it’s a communication - a performative one, at least in part. If I say “I play with the product and learn things” that’s the same as “product exploration with the intent to reveal product issues and information pertinent to the test effort”, but the second one is the one I’d pick when I want people to think I’m clever and capable. It doesn’t offer more to reality, just to the way people perceive what I’m doing. And this mechanism works to make nonsense look clever and capable, too, especially when it’s not questioned, and questioning other people’s public documents is effortful and risky in a way a conversation is not.

If a formalised document is flexible, living or disposable, then great. I think that’s much more in line with how the exploratory nature of all testing and learning works. It does mean that the more formalised it is the more maintenance is required and the more things in it can become wrong or outdated. The more granular the formality the higher the maintenance cost, it’s just explicit, detailed test cases writ large. If we know the document’s maintenance is being outpaced by reality then it loses its status as a useful oracle.

From reading the other responses, it makes sense to me for a formalised test plan to be more of a tool that triggers a set of good practices, rather than a goal to have a document.

This feels very true to me, too. As formal as it must be to achieve useful things for its cost. No best practice, only good practice in context.

Edit: I’m suffering brain fog from my condition, so if this makes no sense or wanders you have my apologies. I’d blame the lengthiness on this too, but I always do that.

2 Likes

Thanks for clarifying. Do you also do in-sprint testing / test per ticket, or does it tend to be per release?

1 Like

Thanks for your detailed and considered response! Whenever I see your input, also on other posts, I always find it interesting.

In particular from this response, the points you make about how you frame / word things, and how questioning a public document is more effortful and risky, really stand out to me. I can see how testing specialists could use a test plan to their benefit in that regard (providing the right person(s) read it), and I totally get how the dynamics of feedback are different. Lots to think about!

3 Likes

Yes, we do in-sprint testing per ticket first, and part of the plan will be how deep we need to go in re-testing those new features.

2 Likes

Good questions! I think I’m in agreement with you, tbh.

  1. I’ve not worked somewhere where written test plans were useful past ticking a box for a client to help justify the cost of testing. I’ll plan testing by myself or with devs but this is mostly conversational. User stories and possibly notes around release plans are done but written test plans would mostly just be unnecessary overhead with how my team works. Plus no one would read it. Plus plans tend to go out the window at the first step.

  2. This is a tough one, and I’ve already completely restarted my answer for this, ha. I’d like to say it would be a red flag for me; at this point in my career I’ve put up with a lot of QA being undervalued, and I want to make the most of being involved in all aspects of the SDLC. But I suppose it depends on the culture of the company, and whether they actually see the value of it and want to make steps towards it. Whether it would actually fit with where I am in my career at the moment is another thing, though. I think if QA was just left at the end, I’d have to try to care less about things in general within the company, which isn’t what I’d want to be doing but would be necessary to protect myself.

But it is a tough one, as there are a lot of factors that can come into play. So it depends on the culture of the company, the product, the people, the pay, and where I’m at, at that point in time.

4 Likes