Hi folks
we use Git in-house and are trying to make our processes a bit more formal and robust, particularly as the number of testers involved in automation on any one project is now frequently more than one at a time… mainly using low-code tools.
I don’t see formal names being given to the branching strategies that testers use, but does anyone have an opinion on the merits of, for example, Git Flow vs GitHub Flow?
We typically have automation packs that must reflect current production, plus a ‘development branch’ (i.e. a branch that reflects the current state of the test environment) and potentially even a UAT environment. Since such projects are typically long-running and very rarely pushed to Production, the release cadence doesn’t seem to fit the principles of GitHub Flow - although it looks far simpler than the alternatives. Git Flow appears to be set up for more complex dev projects with many resources.
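For anyone less familiar with the two models, the practical difference is roughly the set of long-lived branches each one expects. A throwaway sketch (the branch names are just the conventional ones, nothing specific to our projects):

```shell
set -e
# Throwaway repo purely to show the branch sets each model implies.
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# GitHub Flow: one long-lived branch plus short-lived topic branches merged via PR.
git branch feature/quick-fix

# Git Flow layers long-lived develop, release and hotfix branches on top of that.
git branch develop
git branch release/1.4
git branch hotfix/1.3.1

git branch --list
```

With long-running test environments, those extra long-lived branches (develop, release/…) are exactly the part of Git Flow that maps onto “one branch per environment”.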
Many thanks
Dan
I don’t have any particular preference for a specific branching strategy, but my rule of thumb has always been to follow the same strategy as is used for the product that is being tested.
- It saves a lot of headache trying to match your automation code with the SUT (Software Under Test)
- It makes it easier to set up your CI/CD pipelines
How about adding your automation to the same repo as your production code?
We do it that way. We then have all automation changes sitting next to the related product changes.
Hi Dan!
From my experience, I would suggest the same as @pmichielsen and @sebastian_solidwork - add your automation tests to the same repo as the main code of the product being tested, and use the same flow. I don’t think you need a different branching approach from the one your product repo has.
If you somehow cannot (people denying it for whatever reason), I suggest using the same branch names in your automation repo as in your product repo.
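A minimal sketch of what that mirroring could look like in CI. The branch names and the `PRODUCT_BRANCH` variable are assumptions, and a local throwaway repo stands in for the real automation repo you would clone:

```shell
set -e
# Stand-in for the automation repo; in real CI you'd clone it, and
# PRODUCT_BRANCH would come from the product build's environment.
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.name=ci -c user.email=ci@example.com \
    commit -q --allow-empty -m "initial commit"
git branch develop
git branch feature/search

PRODUCT_BRANCH="feature/search"   # assumed CI variable, not a real Katalon/CI name
# Check out the automation branch matching the product branch;
# fall back to develop if a matching branch doesn't exist yet.
git checkout -q "$PRODUCT_BRANCH" 2>/dev/null || git checkout -q develop
git rev-parse --abbrev-ref HEAD
```

The fallback matters: automation branches usually lag behind product branches, so the pipeline shouldn’t fail just because a matching branch hasn’t been created yet.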
Having all code in one repo (one project, actually, I assume we mean) has security implications, makes third-party integrations rather more fun than they need to be, and creates repo-bloat arguments once people realise that sync times are eating into CI/CD network bandwidth.
Which is why more and more teams who release regularly move to Git Flow. It’s heavyweight, but by the same token more amenable to automating builds and provisioning. IMHO, though, if you have a lot of manual testing to do it becomes a real pain unless everyone knows how the flow works and has an auto-deploy script that helps them out. The automated scripts basically use a database or lookup to know which branches belong to which “stream” or feature, and for a high-speed, high-capacity/scale automation build-and-test environment this works very well. Developers and testers need to be able to create new branches/features, and must remember to close them down afterwards. Because testers as well as developers have to have a healthy understanding of how to merge, I’m not a fan.
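To illustrate the lookup idea, here is a sketch of a branch-to-stream mapping. The flat file, branch names and stream names are all invented for the example; a real setup might use a database instead:

```shell
# Flat-file lookup mapping branches to the "stream" their builds deploy to.
# Everything in this file is made up for illustration.
map=$(mktemp)
cat > "$map" <<'EOF'
feature/login   checkout-stream
feature/search  search-stream
release/2.1     hardening-stream
EOF

branch="feature/login"            # in CI this would come from the build environment
stream=$(awk -v b="$branch" '$1 == b { print $2 }' "$map")
echo "deploying ${branch} to ${stream:-default-stream}"
```

The `:-default-stream` fallback is the same idea as above: an unmapped branch gets a sane default rather than breaking the pipeline.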
Thank you everyone for the insight. By necessity (we use Katalon Studio), each test project will be saved in its own Git repo - that is what the software expects. It doesn’t lend itself to saving in the same repo as the AUT. I gather one advantage of sharing a repo would be that any change to the AUT prompts a matching update to any affected test code? That assumes a very quick response from testers as well, I guess?
Perhaps we still should follow the advice of @pmichielsen and adopt the same strategy as our devs.
As @conrad.braam mentions, testers would need a strong grasp of the process to work with Git Flow. I think we’d need to work up to that - otherwise it sounds like a recipe for daily merge conflicts!