When to check in your changes?

“Only check in working code” is the mantra. But…

I belong to the school of thought that test code should live with the product code, and that’s true where the “definition of done” includes writing the unit and component-test cases. That often does not scale for system tests, or when automated tests are written asynchronously to the product (read: late in the day) and therefore cannot branch alongside the product very easily in the same repo. So there is a big “but…”, because very often I need to run a system test in more than one environment, and at that point the easiest way to copy the test to another environment safely is to check in all my changes on a branch, check that branch out in the desired environment, and let git do the hard work for me. At which point it also becomes a question of, “at what point do I merge my branch?”
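For anyone picturing that, it’s roughly the following (the branch and remote names here are just made up for illustration):

```
# On my dev machine: park everything on a work-in-progress branch
git checkout -b wip/system-test-changes
git add -A
git commit -m "WIP: system test changes"
git push -u origin wip/system-test-changes

# In the other test environment: pull down the same branch
git fetch origin
git checkout wip/system-test-changes
```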

To make things more interesting, because the full test run takes ages (40 minutes), I’m often going to check my work in just so I can offload the test run to another machine and keep my own machine free for debugging, because you cannot always safely make code changes or debug while a system test is running locally. To save time I often wait until about 70% of the tests pass locally, on the assumption that the 30% still to run will probably be fine.

It is risky, because at the moment I’m also making changes to the test tool internals and refactoring as I go, but roughly how often do you push to a branch per day, and why?


How often is relative, especially with the work I’m doing. I’m writing automated tests while attending meetings for new projects and doing manual test planning for current ones. So there can be a lot going on in one day - especially as a solo tester.

Part of my process is including the devs in checking and reviewing my automation test PRs. I’m curious - is this part of your process as well? Or do you have the power to merge whenever you think the code is “merge-able”?

I’m working on optimizing my automated tests. My goal is to keep them under 15 minutes. Out of curiosity, how much do you analyse your test suite for test value? I know I have a bad habit of duplicating actions throughout the test suite. I’m trying to keep in mind that if I’ve tested one pathway in the UI, I don’t need to test it again in the same way in a different test that only contains a slight variation.

Totally feel seen, Judy.

Small company, just me in QA, so I’m always just merging my own changes and trying to stay honest. In a bigger team I would have humans helping me, but with such a small engineering team I really do have to include at least one developer, and QA ownership/championing and the coding skills are still a barrier. We do have a hardware QA guy - in fact we have one and a half people on hardware QA. Software QA was creatively outsourced until now, and the last guy who drove it all left a while back. I’ve only been in this job for 3 months, so everything is new, but at some point it won’t be, and I’m not keen to let bad habits become the norm. This week I managed to do some refactoring and badly break the test tooling for a day, which I don’t want to do too often in future. I do need a branch strategy! :duck:

And at some point this brand new test system will get added into the release process. Right now it’s highly experimental, and if I mess up, it’s fine. But by next year all the test runs might be up on a dashboard. I really want to get our quality up to that level, and to lead by example somehow. Yes, I think I’ll be coding up the dashboard myself too.

15 minutes is great; I’m setting myself a 1-hour limit. I’m using TeamCity with just one test runner, but I can add more runners, so the 1-hour limit is probably not going to be a bottleneck. I’m still in explore mode: I have 5 different sub-products to install, and all those MSI files take a while to copy and a while to run. I also have one hardware target to interface with, and between a fair number of tests I do a cold boot, which takes about 60 seconds too. So there is a lot I can trim down and a lot I can even split up. For now it’s just one huge suite, and I add an average of 4-5 tests per week.

Although I do sometimes get taken off the job to do some semi-manual QA of something, which turns into product learning, and usually I come away inspired to add another test case related to the manual check or trials I was just asked to run.

@conrad.braam,

Great question and honestly, I do not think there can be one definitive answer here.

The mantra of only checking in working code exists for stability, but the reality is that most of us work through messy, iterative cycles. For me, the frequency of check-ins depends on context:

  • Feature or product code → I check in once I know my code compiles, builds, and passes local tests. I don’t want to break the mainline.

  • Test code/system tests → another story. These evolve alongside the product and sometimes lag behind a bit, so I treat my branch as a sync point and safety net. I’ll push work in progress so I can either:

      • run tests on another machine/CI,
      • not lose work, or
      • share partial progress with a teammate.

So I could be pushing multiple times in a day with about seventy percent confidence that these tests will actually pass. For me, it is a trade-off between productivity and noise.

The main idea is:

  • keep your branch isolated until you are confident,
  • use pull requests as the gate for quality,
  • and never be ashamed to use Git as your offload carrying-bag. It is made for just that.

Hence the short answer: push often, merge thoughtfully.
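In day-to-day terms that can be as simple as the sketch below (assuming a GitHub remote and the gh CLI - substitute your own hosting and branch names):

```
# Push the work-in-progress branch as often as you like
git push origin wip/system-tests

# Once you are confident, open a pull request as the quality gate
gh pr create --base main --fill
```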
