Where do you see exploratory testing within a fully automated pipeline?

As the title suggests, we’re now seeing CI/CD pipelines become increasingly automated from development to production. From a testing perspective, the two prongs of automated and exploratory testing are both important. However, exploratory testing is human-driven and so, obviously, isn’t as fast as automated testing. It doesn’t seem to fit into a continuous pipeline.

I’ve read material on DevOps and the continuous testing aspects that I think are sensible. Again, DevOps talks about the importance of exploratory testing. But, within fully automated pipelines, where can it fit?

Does it fit only at the production stage with feature flags/canary testing etc.? Or is there another strategy to keep exploratory testing in place?

Maybe the tester (who may have been involved with session-based test management in a traditional agile landscape) isn’t so involved with exploratory testing anymore. Maybe customer data, combined with the sorts of monitoring present in DevOps, now represents exploratory testing?

Would be interested in your thoughts.

2 Likes

I found this question … It is already a bit old but I am also interested in it.

The way I see it, you have basically two options:

  1. You do exploratory testing regularly, but it is not part of the pipeline. This has the advantage that your pipelines can still be fully automated. However, you might miss some bugs that you would have caught if exploratory testing were a fixed part of your pipeline.

  2. Pipelines are only half automated, meaning the pipeline stops at a certain point and halts until someone manually triggers it once the exploratory testing is done.
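Option 2 can be pictured as a pipeline with an explicit manual gate. Here is a minimal, hypothetical sketch in Python (the `Pipeline` class and stage names are invented for illustration, not taken from any particular CI tool):

```python
# Sketch of a half-automated pipeline: automated stages run straight
# through, then the pipeline halts at a manual gate until a human
# signs off that exploratory testing is done.

class Pipeline:
    def __init__(self, stages):
        self.stages = stages       # list of (name, is_manual_gate)
        self.completed = []        # stages already done
        self.waiting_at = None     # manual gate we are parked at, if any

    def run(self):
        """Run stages in order; stop at the first unapproved manual gate."""
        for name, is_manual_gate in self.stages:
            if name in self.completed:
                continue
            if is_manual_gate:
                self.waiting_at = name
                return f"halted: waiting for sign-off at '{name}'"
            self.completed.append(name)
        return "deployed"

    def approve(self, gate_name):
        """A human confirms exploratory testing is done for this gate."""
        if self.waiting_at == gate_name:
            self.completed.append(gate_name)
            self.waiting_at = None


pipeline = Pipeline([
    ("build", False),
    ("automated-tests", False),
    ("exploratory-sign-off", True),   # the manual gate
    ("deploy-to-prod", False),
])

print(pipeline.run())                      # halts at the exploratory gate
pipeline.approve("exploratory-sign-off")   # human signs off
print(pipeline.run())                      # continues to deployment
```

The trade-off the post describes shows up directly: the pipeline is no longer "hands-off", since nothing past the gate moves until `approve` is called.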

What are your thoughts on this? Would be interesting to hear some opinions :wink:

4 Likes

Yeah, the trendsetters are moving exploratory testing to be asynchronous to the pipeline. For example, if you have a feature that is toggled or has A/B testing, it can be tested in production at any point without impacting customers. No need to block those deployment pipelines or test common scenarios that will be automatically tested. You might still decide as an organisation that the feature won’t be launched or piloted until it is MVP and has been exploratory tested.
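The toggled-feature idea above can be sketched simply: a flag decides who sees the new code path, so a tester can explore it in production while customers stay on the old one. A minimal hypothetical example (the flag names, tester list, and `is_enabled` helper are all made up for illustration, not any real feature-flag library):

```python
# Feature-flag sketch: the new code path is deployed and live in
# production, but only visible to internal testers until exploratory
# testing is done and the flag is opened up to everyone.

TESTERS = {"alice@example.com", "bob@example.com"}   # internal test accounts

FLAGS = {
    "new-checkout": {"enabled_for_testers": True, "enabled_for_all": False},
}

def is_enabled(flag_name, user_email):
    flag = FLAGS.get(flag_name, {})
    if flag.get("enabled_for_all"):
        return True
    return flag.get("enabled_for_testers", False) and user_email in TESTERS

def checkout(user_email):
    if is_enabled("new-checkout", user_email):
        return "new checkout flow"    # explored in production by testers
    return "old checkout flow"        # what customers still see

print(checkout("alice@example.com"))     # new checkout flow
print(checkout("customer@example.com"))  # old checkout flow
```

Because the new path is already deployed, exploratory testing no longer blocks the pipeline; flipping `enabled_for_all` is the launch decision the post mentions.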

5 Likes

I really want this to be the same person doing the CI/CD automation and doing session-based explorations as a side activity. They supplement and inform each other. Automation tells the QA team how flaky the product (or the test environments) is, and helps the coders code faster; while session-based testing really informs the org about when we are ready to do a major release.

I used to think that separate teams should do these tasks, but lately I’m taking cues from switched-on folk like @crunchy about using both as separate kinds of evidence.

2 Likes

I found that once we shifted to CI/CD I was able to become very consultative to my developers (realistically they only needed me a couple of hours per day to ask questions before development), which gave me the time to do exploratory testing frequently. I would not link this to any specific release or feature, but just to general day-to-day life: look at the user feedback and logs, find a different starting point each day, and see where it would take me. If there was ever a need for exploratory/manual testing linked to a specific feature, we would, as a product engineering team, jump on a call, each on a different device/browser, and do it together. It was basically seven hours of manual/exploratory testing in an hour’s call, with so much more perspective from DEV, QA, PM & UX.

4 Likes

Hi. I have a question about this.
Assuming that releases of software are fixing bugs or adding new features etc. - what is the point of fully automated deployments if the features haven’t been checked/tested? You could deploy something into production and have it not be enabled or active until it’s been tested, so why not test it beforehand?

1 Like

Well, if you are “deploying” a feature, that is itself a pipeline. You can deploy into a “demo” or private environment (1), then into a “develop” environment (2), then into an “integration” environment (3), then into “staging” (4), and finally into the live environment (5). So it’s untrue to say that automated checks of new functionality won’t have happened before it goes into production. If your definition of done (DoD; see Definition Of Done: The What And Why And How To Grow One) is that a feature must have automation, that automation gets added while on branch in environments (1) and (2). When the coverage exists, that code can merge and deploy into (3). In fact, although it’s quite expensive, and there are ways to mitigate the cost (money and maintenance complexity), some teams use five environments or stages to get to production.
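One way to picture that promotion chain is that a build only moves to the next environment once the current stage’s gate (e.g. the DoD automation coverage) is satisfied. A hypothetical sketch, with environment names from the post above but gate checks invented for illustration:

```python
# Sketch of promotion through environments: a build only advances to
# the next stage once the gate for that stage passes. Gate logic here
# is invented; real pipelines would run actual checks.

ENVIRONMENTS = ["demo", "develop", "integration", "staging", "live"]

def promote(build, gates):
    """Return the list of environments this build reaches, in order."""
    reached = []
    for env in ENVIRONMENTS:
        gate = gates.get(env)               # callable: build -> bool, or None
        if gate is not None and not gate(build):
            break                           # gate failed; promotion stops here
        reached.append(env)
    return reached

build = {"has_automation": True, "checks_green": True}
gates = {
    # per the DoD: code cannot merge into integration without automation
    "integration": lambda b: b["has_automation"],
    # and cannot go live unless all checks are green
    "live": lambda b: b["checks_green"],
}

print(promote(build, gates))   # reaches all five environments
```

A build without automation coverage would stall after “develop”, which matches the point that coverage is added on branch in the early environments before the code merges forward.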

Of course, on reflection, I didn’t consider automated tests written to test new features. I was thinking purely from an exploratory perspective.

1 Like

I guessed you were focused purely on the value of exploring, Paul.

But I pin a lot of value on automation, probably more than is healthy. That means I automate early and use the tooling to help uncover other kinds of “defect” that exploring might not find. I mean, anyone in the company can do an exploratory poke, not just the testers; we don’t have superpowers. I find exploratory testing frustrating because I’m still “learning about” (internalizing) the requirements, and raising defect tickets about how “unlearnable” and “high friction” new features are because nobody has workflowed them yet. Automation bypasses “unintuitive” user interactions, so it’s evil in that sense. I guess I am arguing for more exploring, but also for more early (beta-mode) automation suite work. I guess that’s why our test reports are never green.

Yeah, and I’m probably thinking too much about my own circumstances as well, my own domains.

1 Like