How to handle testing the new changes if we are doing manual testing per branch?

I am trying to improve the testing process for my team, and I was thinking about doing testing per branch: each feature has a specific branch, and we can start manual testing once the pull request is created. This approach would help us start manual testing and reporting bugs early. At the same time, the pull request may receive new changes based on code review comments, so more commits would be added while the tester is still working, and we may end up with the following issues:

  • we would need to re-test the same things for some commits, which would consume more time
  • it could become a tedious process that makes the tester lose focus.

So my questions are:
  • when exactly should we start the manual testing: after creating the pull request, or after the code review, for example?
  • should we focus on smoke tests per branch and skip the thorough tests on the feature branch?
  • is this a recommended approach when we are talking about manual testing, or is it more related to automated testing?

Take a step back and ask why you are testing. What's your testing mission?

Who is your testing of value to? Who wants your good testing feedback?

Often it's the developers, so just ask them whether it's of value to them.

In cases where I have done branch testing it served a few purposes.

I could catch things before they went into the main shared build. Five developers were all creating branches, often multiple branches a day, and a quick chat with the developer meant rapid turnaround on fixes; sometimes a fix could be made to a branch in a couple of minutes.

When I did find something, it was highly likely due to a single specific change, as only one developer had worked on it and changes tended to map to specific stories. Root cause analysis was faster, and there was no ping-pong between multiple developers' changes.

All of our branches were deployed, so I could simply change the URL (url/655 or url/656, plus main) and tab between them, comparing each branch to main in real time; this also helped narrow things down quickly.

The testing tended to focus on the actual changes made, usually around a story, so it was normally one or two short test sessions, usually completed the same day, with no bottlenecks.

Developers would often ask me to have an early look, knowing a feature was not finished yet. For example, I might be asked to look at specific mobile devices or versions to give the developer insight before they went down a wrong path or chose a route that turned out to be incompatible. This empowered the developers to be more experimental: small experiments I could provide rapid feedback on, which they valued.

If there were multiple branches available to test, the team could prioritise, and I'd look first at one they wanted to push out earlier than other branches that were already available.

Importantly, this was complemented by developer-owned regression coverage, including UI-layer tests that had to pass on every branch; new tests were usually added by the developer with every change. System monitoring was also in place to catch regressions. This was highly efficient in my view, as it allowed me to focus on new and changed things.

Before or after code review? Most times I tested before, as these were highly competent developers making controlled changes, but it was still a discussion with the devs, as it carries rework risk.

So in my context there were lots of advantages to branch-based exploratory test sessions for the whole team. I hope this gives you a few ideas to discuss about whether they might also be an advantage for your team, but fundamentally, get whole-team buy-in; in some contexts it could be seen as an extra layer.


Thank you, this is absolutely helpful; I want to start following this approach for the reasons you mentioned. However, I'm still thinking about how to manage the time, how to avoid re-testing the same cases if we keep adding more commits to the same pull request, and how to manage communication between me and the developers; I'm working with a remote team and some of us are in totally different timezones.

Yeah, basically a team of devs will have to test their branch a bit before merging it. Like @andrewkelly2555 says, step back often as a tester. If they want you to help them, help them; if not, just wait for it to get into a release or dev branch. You cannot manually test branches as part of a routine process; it's not cost-effective. Before you manually test a branch, be sure that it is "deployable" first. Your CI/CD or build system needs to tell you whether the deliverables can be installed, deployed, or provisioned if it's a services/web app. Manually testing a thing that did not install 100% right is a waste of time. So you do need some automation giving you a green light up front.
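That "green light" idea can be sketched as a small pipeline gate that only declares a branch ready for manual testing once build, deploy, and smoke steps all pass. This is only an illustration: the `build`/`deploy`/`smoke` functions and the branch name below are hypothetical stand-ins for your real CI steps.

```shell
# Hypothetical branch gate: the build/deploy/smoke functions are
# placeholders; swap in your real pipeline commands.
build()  { echo "building $1"; }        # e.g. compile/package the branch
deploy() { echo "deploying $1"; }       # e.g. install to a branch environment
smoke()  { echo "smoke-checking $1"; }  # e.g. hit a health endpoint

branch="feature/655"   # hypothetical branch name
if build "$branch" && deploy "$branch" && smoke "$branch"; then
  gate_result="GREEN"
  echo "GREEN: $branch is deployable, manual testing can start"
else
  gate_result="RED"
  echo "RED: $branch failed the gate, do not spend manual effort on it"
fi
```

The point is simply that a human only picks up the branch after the script prints GREEN; everything before that is automated.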

All that checking in a branch does is "shift left". It's good to do if possible, but the only time you really need manual QA coverage is when branches have been integrated into a delivery stream or branch, typically called the "development" branch.

In fact this opens up a huge conversation about branching strategy. But yeah, you don't want to test branches, if you can help it, until they are ready to merge (in which case you test only the feature and ignore regressions) or have already merged, because it's possible to find regressions in a feature branch that will be resolved when it merges with another feature branch, one that might be delivering the other end of a bridge API, for example. So if I do test a branch, I only test the changes in the branch, for smoke or obvious things. On-branch testing is exploratory. To avoid missing changes that come from other branches when you test a branch, always ask.

As for being remote or in different time zones, this needs to fit into your strategy. If the tester is not participating in the daily stand-up meetings, and not attending bug grooming and sprint planning, then the tester is not "on the team", and it's practically pointless to try to shift left and do branch-based development and test.


@conrad.connected thank you so much! That was the conclusion I'd reached before posting this thread; I just wanted to hear about others' experience.


Excellent, Hiba. I did get the impression you already had an idea of the problem, since you asked such a clear question to start with. When a manager tries to give you loads of work as a tester, it's often a good idea to push back and give yourself more wiggle room to concentrate on the tasks that really deliver value at the right point.
But mostly, welcome to the MOT forums. I hope this becomes a place where you feel welcome.

Oh! this is so sweet and it means a lot to me! Thanks again


Branching and pull requests are project smells, which probably indicate that your team will delay integration and create the problems you described.
I would suggest first taking a look at trunk-based development, so the team can shorten its feedback loops (i.e., test more often).


thank you, sounds interesting

From my point of view, manual testing per branch won't be an efficient solution. You basically need to build a quality gate that runs after the developer creates a pull request. This quality gate should be automated and should report results to the development team as soon as possible after each run. Manual testing will increase your costs when you shift testing left. Take a look at automation tools that you can adapt to your process and tech stack. You can also implement not just automated tests but static code analysis too. With automated tests you can cover both smoke tests and regression tests.


Hi Hiba,

This is an excellent initiative! I'd suggest an approach that a lot of our customers use: create an executable specification of what needs to be fulfilled in the branch. Here is a description with some examples: How does Specification-Driven Development work? - Intelligent Test Automation Tool [2022] - testRigor Software Testing

  • when exactly should we start the manual testing: after creating the pull request, or after the code review, for example?

If you are building executable specifications, then it can be beneficial to do this after pull request creation, so a passed code review plus a tested change could be the indicator that the feature is ready to merge.

  • should we focus on smoke tests per branch and skip the thorough tests on the feature branch?

Test selection is an art and can be influenced by many factors, such as how often the feature is expected to be used, how important it is, and how complex it is, to name a few. I believe once you start doing it, you'll converge relatively quickly on the right approach based on feedback from your manager, engineers, etc.

  • is this a recommended approach when we are talking about manual testing, or is it more related to automated testing?

As I mentioned above, it is probably beneficial to combine both, which should be easily possible with a specification-driven development tool like testRigor. Here is a video of how it could work: What is Specification-Driven Development and how does it compare to BDD - YouTube

Hope it helps, cheers!


grateful for this detailed answer! thank you


Thanks for sharing this point of view. I totally agree with you, and we made the decision not to follow this approach for manual testing.


there isn’t such thing as manual / automatic testing, as Michael Bolton and James Bach indicate. We only can do automatic checking with tools to see if nothing is broken. There isn’t such manual or automatic development too, Testing need skills, logical thinking. Use exploratory testing instead.

Just use your common sense, intuition, and component/unit test results when taking new branches and builds; you know what's best to do.

we can start doing the manual testing once the pull request is created

more commits would be added later while the tester is still working

These phrases seem to say “we are waiting”, which is a Lean waste.

I would suggest investigating and investing in continuous integration, so you won't have these long periods where work is unchecked and hidden from others. In particular, trunk-based development and pair programming can be interesting ways to shorten the feedback loop, mitigating the need for branches and pull requests.


@joaofarias can you please explain what you recommend when you mention continuous integration? In other words, after adding a new change, what is the suggested pipeline for testing with continuous integration?

A big thing to look out for if you are testing on branches is to get stakeholders involved in product demos at the right points too. Normally, branch-based automated testing using a CI/CD pipeline runs very smoothly. But whenever a major release looms, changes come in more quickly, and changes that were on sub-branches get forced in as teams try to meet feature deadlines.

  • This causes build breaks, at a time when you least want them to impact you.
  • If a team does some UI work on a branch and merges it without a demo to stakeholders, that can cause a bottleneck if a C-level stakeholder says the UI is wrong and wants the work redone, delaying that team and any dependent tasks.
  • Any problem that testing uncovers that forces a team to "back out" their changes can also defeat your efforts to test on branches.

Always be ready to switch focus between branches as a tester if any of these happen, and communicate your focus shifts to the teams. You will have to use a lot of intuition and often step back to take a wider look at product health. A CI/CD (Jenkins or similar) build-toolchain job that runs a smoke test on every single branch becomes a priority. This might seem like a lot of work to set up, but just running one simple "product deploy" and "connect a user" smoke test on every branch will improve efficiency enormously. The CI/CD needs to support build-plus-test on any branch as an ad hoc thing; we used to call this "testing as a service" at one place I worked. Point the tests at any branch, and they will do all of the work to give a minimum-viable-product pass or fail. You have to start simple.

At the most detailed level, this requires using your CI tool to check out main/master (or trunk, or whatever your teams call it; if different teams in the company use different work patterns, or have objections to calling trunk "master", the tooling needs to cater for that) and then check out the developer branch on top of it to ensure you have no merge issues. It's a good way for testers to show interest and get involved in version control as part of their quality work. The CI should then flag a merge conflict or a post-merge build failure early, and that's a cheap win. In most situations it must build the merged branch, not the raw branch; if a branch is not ready to merge, it's not ready to deliver to main and is possibly not "done". Just this one step, getting a build properly clean, is valuable; then add on as you go: run static analysis checks on the artefacts, add a small deploy test, and keep going as you find time.
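That pre-merge check can be sketched with plain git commands. Everything below runs in a throwaway repository created on the fly; the file name, branch name, and commit contents are invented purely for illustration.

```shell
# Sketch of the CI pre-merge check: merge main into the feature branch
# first, and flag a conflict before wasting a build. The repo contents
# and branch names below are hypothetical.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "ci@example.com"
git config user.name  "ci"
git checkout -qb main

echo "v1" > app.txt
git add app.txt && git commit -qm "initial"

git checkout -qb feature/655
echo "feature v1" > app.txt              # branch edits line 1
git commit -qam "feature work"

git checkout -q main
echo "v2" > app.txt                      # main also edits line 1
git commit -qam "main moved on"

# The CI step: build the *merged* branch, not the raw branch
git checkout -q feature/655
if git merge --no-commit --no-ff main >/dev/null 2>&1; then
  merge_result="clean"
  echo "merge clean: proceed to build and deploy the merged branch"
else
  merge_result="conflict"
  echo "merge conflict: flag it early and skip the build"
  git merge --abort
fi
```

Because both branches changed the same line, the merge conflicts and the check reports it without ever starting a build; that is the cheap early win described above.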

In volume testing, it is often essential for many teams to work on different branches at the same time.

This creates the following problems when it comes to volume testing:

  1. Some test cases are affected by changes that occur in a branch.
  2. With multiple branches, the tester needs to maintain multiple copies of a test case.
  3. Testing an individual branch requires the tester to refer only to the updated list of test cases for that branch.
  4. Some branches will be merged sooner rather than later, which means at some point you will be merging these updated test cases.

The following steps solve these problems one at a time:

- Maintaining multiple copies and changes of test cases:
The team should have all of their trunk test cases created in Test Collab. There, create a new suite named 'Branches' where you'll store the different branches your team is working on.
The new 'Branches' suite must not have any parent suite. Now create child suites under it, one by one, each representing a different branch.
The team can also create more branches, with the corresponding test-case copies and suites for each branch.

Once done, the team can proceed to change their test cases independently in the different branches.
A change or addition to a test case will remain only under that branch, completely independent of the baseline and the other branches.

- Testing individual branches:
Once you have set up the structure and copies of test cases in the different branches, you'll eventually need to execute them.
You'll need to:
a) test trunk/master,
b) test the individual branches.

- Merging test-case changes after the branch is merged:
The branch will be merged into trunk/master once testing is complete.
This requires updating trunk/master to reflect the test-case changes made in the branch. For now, there is no way to automate this merge; it must be done manually.
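For teams without a tool like Test Collab, the same suite layout and manual merge-back can be illustrated with plain folders. This is only a sketch; the suite names, case file, and steps below are invented for illustration.

```shell
# Hypothetical folder version of the suite structure above:
# trunk cases plus an independent copy per branch, merged back by diff.
work=$(mktemp -d)
mkdir -p "$work/trunk" "$work/branches/655"

printf 'Step 1: log in\n' > "$work/trunk/login.txt"

# Copy the trunk case into the branch suite; edits stay branch-local
cp "$work/trunk/login.txt" "$work/branches/655/login.txt"
printf 'Step 2: check the new 2FA prompt\n' >> "$work/branches/655/login.txt"

# After the code branch merges, a diff shows exactly which test-case
# changes still need to be folded back into the trunk suite by hand
pending=$(diff "$work/trunk/login.txt" "$work/branches/655/login.txt" || true)
echo "$pending"
```

The diff output is the manual merge worklist: each added or changed line in the branch copy is a change the tester folds back into the trunk suite after the code branch merges.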