Masterclass: Strategies to make your automated checks reliable and robust

Our most recent Masterclass of 2019, “Strategies to make your automated checks reliable and robust”, with Peter Bartlett is a timely one, as I’ve seen a lot of discussion about this on the various Slack channels over the past few weeks.

As always, a recording of the masterclass will be available to MoT Pro members in the masterclass section.

If we didn’t get to your questions tonight or you’d like to continue the conversation, why not ask them here?


Who are the MoT Pro members?

How Pete got started with speaking: https://www.speakezee.org/

One of the frameworks mentioned: https://nunit.org/, which allows parallelisation

Shared in the chat: https://github.com/sn3akypete

MoT Pro members are people who have paid a subscription to the MoT website. You can read more about the subscription benefits here 🙂

A selection of the unanswered questions:

  1. What kind of versioning strategies do you use? Based on what?
  2. Do you have a strategy to test in different timezones?
  3. How do you decide how many threads are enough?
  4. How do you handle mandatory cookies?
  5. We run automated tests on PRs, so I keep them in the same repo. Is that a good idea? I saw something about this in your guidelines.
  6. Two questions related to the separate repo topic:
    6.1 What is the benefit of keeping your automated tests in their own workspace, rather than alongside the code they are testing?
    6.2 Since you have a separate repository, do you have to share it with devs so they can reproduce test failures locally? How did you manage that challenge?

It was great to see so many questions coming through in the webinar; thanks to all who asked a question. Here are my thoughts on the unanswered questions listed above. If you have more questions, or follow-up questions, I’m happy to keep the conversation going!

1. What kind of versioning strategies do you use? Based on what?
I use Git for version control, and it’s rare that we actually need to go back a version. It does happen, but only occasionally. The preferred approach is to roll forward: make a fix to get the dodgy commit working again and release that. That said, I was working with a SaaS product where all users automatically got the latest version; it wasn’t something we had to support multiple versions of, like you do with web browsers for example, so the problem isn’t as relevant for us.

I suspect this question comes out of not having the automation tests in the same code base as your dev code, and needing to be able to use the appropriate test version for older dev code versions. Something related we do is make sure the build numbers in our CI pipeline for our automated tests match the build version of the dev code they are testing. This makes it easy to trace back and identify which version of the automation tests was used for a given version of the dev code.
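To make that traceability concrete, here’s a minimal sketch assuming NUnit, and assuming your CI pipeline exposes the dev build number through an environment variable (APP_BUILD_NUMBER is a hypothetical name, not something from the webinar):

```csharp
using System;
using NUnit.Framework;

[SetUpFixture]
public class RunInfo
{
    [OneTimeSetUp]
    public void LogBuildUnderTest()
    {
        // CI sets this to the build number of the dev code being tested.
        var devBuild = Environment.GetEnvironmentVariable("APP_BUILD_NUMBER")
                       ?? "unknown";

        // Writing it into the test output means every run can be traced
        // back to the dev code version it exercised.
        TestContext.Progress.WriteLine($"Automation run targeting dev build {devBuild}");
    }
}
```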

2. Do you have a strategy to test in different timezones?
Not something I’ve had a need to do. I’d suggest stubbing out the timezone, or perhaps creating a test endpoint to set it. Assuming you are testing what the system does with different timezones, not the detection of timezones based on IP address or similar (which would likely be handled by an existing, tested library), then it doesn’t matter how you set the timezone, so take a shortcut.
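As one way to do that stubbing, here’s a minimal sketch in C#; the IClock and FixedClock names are hypothetical, purely for illustration:

```csharp
using System;

// Production code asks this interface for the time and timezone, rather
// than reading the machine's settings directly.
public interface IClock
{
    DateTimeOffset Now { get; }
    TimeZoneInfo TimeZone { get; }
}

// Real implementation: system clock and local timezone.
public class SystemClock : IClock
{
    public DateTimeOffset Now => DateTimeOffset.Now;
    public TimeZoneInfo TimeZone => TimeZoneInfo.Local;
}

// Test stub: pins the clock and timezone to whatever the test needs.
public class FixedClock : IClock
{
    public FixedClock(DateTimeOffset now, string timeZoneId)
    {
        Now = now;
        TimeZone = TimeZoneInfo.FindSystemTimeZoneById(timeZoneId);
    }

    public DateTimeOffset Now { get; }
    public TimeZoneInfo TimeZone { get; }
}
```

A test can then construct, say, new FixedClock(DateTimeOffset.UtcNow, "New Zealand Standard Time") and inject it wherever the production code expects an IClock (note that FindSystemTimeZoneById takes Windows timezone IDs on Windows and IANA IDs elsewhere).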

3. How do you decide how many threads are enough?
Use as many threads as your machine can handle without adversely impacting the reliability of the tests. E.g. too many threads might mean each test runs slower, which can cause flakiness. Perhaps another way to ask this is: how quick do your tests need to be? Then add enough threads to meet that time constraint (alongside making sure you only test what is needed, that the tests are efficient, etc.).
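Since NUnit came up earlier as a framework that supports parallelisation, here’s a minimal sketch of capping the worker count there; the number itself is just a starting point to tune:

```csharp
using NUnit.Framework;

// Allow test fixtures to run in parallel with one another.
[assembly: Parallelizable(ParallelScope.Fixtures)]

// Cap the number of worker threads. Raise it until run time stops
// improving or flakiness appears, then back off.
[assembly: LevelOfParallelism(4)]
```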

4. How do you handle mandatory cookies?
More context is needed on this question. I’m assuming the test you are running somehow relies on setting or getting a cookie from your web session and interacting with it, but I’m not sure exactly what problem you’re hitting. There are cookie-handling libraries and methods available in most languages; use these to create and manipulate the cookies you need, and use the dev tools in Chrome or Firefox to inspect the cookies and identify the information you need in the test.
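For example, if the tests drive a browser through Selenium WebDriver in C#, setting a mandatory cookie up front might look like this (the cookie name and value are hypothetical):

```csharp
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class CookieExample
{
    static void Main()
    {
        IWebDriver driver = new ChromeDriver();
        try
        {
            // Cookies can only be set for the domain you are on,
            // so navigate to the site first.
            driver.Navigate().GoToUrl("https://example.com");

            // Set the cookie the application insists on, e.g. a consent flag.
            driver.Manage().Cookies.AddCookie(new Cookie("cookie_consent", "accepted"));

            // Reload so the application sees the cookie on the next request.
            driver.Navigate().Refresh();

            // When debugging, list what's actually set and compare it with
            // what the browser dev tools show.
            foreach (var cookie in driver.Manage().Cookies.AllCookies)
            {
                System.Console.WriteLine($"{cookie.Name}={cookie.Value}");
            }
        }
        finally
        {
            driver.Quit();
        }
    }
}
```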

5. We run automated tests on PRs, so I keep them in the same repo. Is that a good idea? I saw something about this in your guidelines.
There are 2 factors to consider here:
A - The earlier you get feedback on the quality of your code, the better (faster resolution, less costly). So having the tests run on the PR is better than waiting until the code is in your staging or production environment, but not as good as running them before you submit the PR. If the PR is the earliest reliable place to run tests, great.
B - Tests living in the same repo. Doing this can limit the scope of what you are testing, as your tests have closer knowledge of, and access to, how things work in the code, which may bypass parts of the system that normal interaction would exercise. So you need to accept that you may be reducing the test scope; it’s worth it if what you gain from co-location exceeds this cost. You also need to think about the extra bulk you may be carrying around: moving packages and installing on different machines is now heavier for both the codebase and the regression test base, and any breaking commit now has a wider range of impact, touching both the dev code and the regression tests.
Are you able to have a separate regression test pipeline trigger on the creation of a PR? Why does it have to be part of the same repo?

6. Two questions related to the separate repo topic:
6.1 What is the benefit of keeping your automated tests in their own workspace, rather than alongside the code they are testing?
See the answer to the previous question. It’s not wrong to have the tests in the same repo, but there are trade-offs you may have to consider, or account for in your solution.

6.2 Since you have a separate repository, do you have to share it with devs so they can reproduce test failures locally? How did you manage that challenge?
Yes, the devs need access to the regression test code base so they can run the tests locally. If you want devs to be involved in running, creating and maintaining the regression tests, then of course you want to give them access; it’s not a bad thing. Maybe there’s a challenge in having one more repo to work with, but with microservices more and more popular, devs are already doing this. Plus, they should appreciate working with a small, focused code base that doesn’t risk being bloated or tangled into a larger one.
You should also make it as easy as possible for devs to set up and run the tests. Include walkthrough guides, build a setup wizard, or provide a .bat file or something else that makes getting started easy; this will help ease any concerns as well.

Hope that all makes sense, cheers!


I was kind of distracted by my son whilst watching yesterday, but I thought I heard mention of software that kind of forced a consistent use of coding ‘grammar’ when writing unit tests. Did I imagine this, or was there something? Cheers.

@chris_dabnor you heard correctly, there are tools which can help enforce coding styles such as the order of package imports, naming conventions, whitespace, etc. You configure which rules you want it to use, and whether you want rule violations to appear as warnings or errors (i.e. non-blocking or blocking). An example tool that works with C# is StyleCop: https://marketplace.visualstudio.com/items?itemName=ChrisDahlberg.StyleCop
I’m sure there are similar tools for other languages.
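To give a feel for what such tools check, here’s one example of the kind of rule StyleCop enforces (SA1208, which requires System using directives to come before others):

```csharp
// Flagged by StyleCop: a non-System using directive comes before System.
// using NUnit.Framework;
// using System;

// Accepted ordering:
using System;
using NUnit.Framework;

[TestFixture]
public class OrderingExample
{
    [Test]
    public void Passes() => Console.WriteLine("import ordering satisfied");
}
```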

That’s brilliant, thank you. It’s something we’re working towards more of here.
