How to properly think about an automation strategy?

Hello everyone.
I am a manual QA and in my current company we are beginning to work on creating e2e automated tests using Playwright with TypeScript.
Which environment should I target? It makes sense to me to develop the automated test suite using the data of the “test”/“stage” environment, since the goal is to reduce the time spent on manual regression/smoke testing. I understand that ideally the tests should be written so they can run against any environment, but I have doubts this can be done with our application.

I would appreciate it if you could share your thoughts / experiences.
Thank you and have a great day!


My experience may not be typical because I’m the only tester in the division, so my time for automation is somewhat… limited.

That said, I’d suggest starting with the test environment, because that’s where a breaking change will show up first (most of the time).

What I’m doing now that I have a mostly stable suite of automated tests is running them on the QA environment and the Staging environment. I had them on QA only to start with, and only set up the Staging environment once I had the tests stable and as comprehensive as they’re likely to get (for a variety of reasons our software has a lot of modules that aren’t amenable to automated regression).

Once I have them stable on both environments (which is taking some tweaking and some adjustment), I’ll set them up on the production environment using dedicated test organizations (we support a multi-tenant web application so we have a set of test organizations). The production tests will be happening off-hours so they don’t interfere with customer traffic.

The way I do this is to put environment information, such as base URLs, into a test settings file. I can override these values in the CI/CD tool we use, but you can also use flags in your automation to manage environment-specific information.
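As a sketch of what such a settings file could look like in Playwright specifically (the environment names, URLs, and the `TEST_ENV` variable below are made-up placeholders, not the poster's actual setup), the base URL can be selected at run time so the CI/CD tool only has to override one variable:

```typescript
// playwright.config.ts (sketch): choose the target environment at run time.
// TEST_ENV and the URLs are assumptions for illustration only.
import { defineConfig } from '@playwright/test';

const baseUrls: Record<string, string> = {
  qa: 'https://qa.example.com',
  staging: 'https://staging.example.com',
};

export default defineConfig({
  use: {
    // CI can run e.g. `TEST_ENV=staging npx playwright test` to switch targets.
    baseURL: baseUrls[process.env.TEST_ENV ?? 'qa'],
  },
});
```

Tests that navigate with relative paths (`page.goto('/login')`) then work unchanged against whichever environment the variable points at.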

I don’t know the specifics of using Playwright, but in my view going with CI/CD to automate your test runs and starting with the environment that’s closest to where the changes are happening is the best way to catch potential regressions early.


Viola, since you say you are a manual tester starting to use an automation toolstack, I’ll frame things from that angle.

  1. We were just talking with colleagues about the differences between staging and production when it comes to CAPTCHAs. These are a common journey blocker for automated tests, and we disable them in staging for that reason. This, and loads of other “common-knowledge” aspects of automation, become long-term friction for your automation efforts. Environment security differences can block running the same test in all environments, so plan coverage for those gaps ahead of time.
  2. When you use an off-the-shelf toolstack, another friction area is functional block or code reuse in an E2E suite: some blocks or pages get hit very often and become bottlenecks whenever they break. As your number of tests grows, they will accumulate a lot of dependencies on a few key shared steps. Try to prevent this as far as possible, so that one page in your app undergoing a legitimate change does not fail every single test in your suite. Manual testers never hit this, but automation is very susceptible to it, so try to spread that risk.
  3. There are many ways to do this that involve using the back-end to get you into specific states and leapfrog potential bottlenecks. This is going to require coding skills…or at least being very good friends with a coder.
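A minimal sketch of that back-end idea, assuming a hypothetical seeding endpoint (the URL path, payload shape, and helper name are all inventions for illustration): build the request that puts the application into a known state, then fire it before the UI test starts instead of clicking through the setup flow.

```typescript
// Sketch: seed application state through the back-end instead of the UI.
// The /api/test/seed endpoint is hypothetical.

// Pure helper: construct the request we would send to the seeding endpoint.
function buildSeedRequest(baseUrl: string, state: Record<string, unknown>) {
  return {
    url: new URL('/api/test/seed', baseUrl).toString(),
    method: 'POST' as const,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(state),
  };
}

// In a Playwright test this could be used roughly like so (sketch):
//   const req = buildSeedRequest(process.env.BASE_URL!, { cart: ['sku-123'] });
//   await request.fetch(req.url, { method: req.method, headers: req.headers, data: req.body });
//   ...then the UI test starts from a cart that already has an item in it.
```

The win is exactly the bottleneck point above: if the login or checkout page changes, tests that seeded their state via the API don't all fail with it.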

You will be creating “software”: software architecture and designing code for maintainability are skills you will need to learn. I have worked with some smart manual testers in the past, and they all had to work hard to think in patterns and keep their manual test cases, rig setup, and environments organized and tidy. The same tidiness applies to code, even more so. It’s a journey; be prepared to start over from scratch. In fact, assume that you will be learning a lot, so try to make it cheap to wipe everything and start over.


I have never used Playwright, but I have written e2e tests in Selenium (various languages) and Cypress, and the advantage you have here is that you are working with a programming language (TypeScript in this case), so you have a lot of power to provide configuration that allows tests to target multiple environments. The data you are verifying, and base URLs, can simply be plugged into the test code in various ways rather than being hard-coded in the tests.
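For example, a small lookup table keyed by environment name keeps per-environment data out of the tests themselves (the environment names, URLs, and user names below are hypothetical; in practice the name would typically come from an environment variable):

```typescript
// Sketch: per-environment settings in one place instead of hard-coded in tests.
type EnvName = 'qa' | 'staging' | 'production';

interface EnvConfig {
  baseUrl: string;
  adminUser: string; // example of environment-specific test data
}

const ENVIRONMENTS: Record<EnvName, EnvConfig> = {
  qa:         { baseUrl: 'https://qa.example.com',      adminUser: 'qa-admin' },
  staging:    { baseUrl: 'https://staging.example.com', adminUser: 'stg-admin' },
  production: { baseUrl: 'https://www.example.com',     adminUser: 'prod-admin' },
};

// Fail loudly on an unknown name rather than silently testing the wrong target.
function envConfig(name: string): EnvConfig {
  const cfg = ENVIRONMENTS[name as EnvName];
  if (!cfg) throw new Error(`Unknown environment "${name}"`);
  return cfg;
}
```

Tests then call `envConfig(...)` once and never mention a concrete URL or account again.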

That said, I’d start with one environment, and not worry too much initially about such configuration. As Conrad said, it’s a journey.

Who will be writing the tests? I spent my career largely as a developer, but also spent a lot of time developing tools for testers and helping automation testers. A challenge I consistently faced was getting management to understand that writing automated tests this way (using a programming language), while very powerful, is essentially programming. It requires many of the same skills software developers must have or acquire.


Welcome to the MOT community, Jim. I hope you will also share more of your tool experiences in time to come.

Jim raises an important point: automation is often a great way of covering multiple environments. But if the goal of your test framework is to gain broad coverage, it might make sense to decide on one environment and stick to it. Like a dog to a bone.

And yes, I hope you enjoy your automation journey. Being able to get the robot to do your work for you through code is such an awesome feeling.


Hello @violettkachuk
I would say you should target your product first; that’s the most important thing when deciding on an automation strategy. A strategy that does not focus on the product under test will not give you any long-term value. Think about and analyze how your automated suites can help you target your product, and how they can support your testing activities to reveal information that threatens the value the product is supposed to provide to its users. Once you have understood and defined this objective clearly, you can move on to how to achieve it. That is where questions such as which environment to choose and what test data to use for the suite come in. Answers to these will be fairly easy once you and your team are clear about the objective of your test automation.

A few things to remind ourselves of on this journey:

  • Do not aim to automate everything; what you write today will need to go through maintenance tomorrow, and it shouldn’t cause you a headache then. Do not aim to automate every manual test case; in fact, do not automate every test case.

  • If an automated test of a feature passes in the test environment but the feature creates issues or inconvenience for a user in production, then your test is of no value. So do not aim only for “greens” in the test environment; that is not what we want.

  • Never jump in and start writing tests immediately. Take time, think, and analyze scenarios before deciding to automate something.

  • Remember that your automated tests will check only the things you wrote and asked them to check; they will not do anything more. Users in production might be doing a lot more, or something different. So please remember that automated checks are only there to support your testing activities, never to replace your human testing and thinking.

  • The data being used, created, or altered by the automated checks should be close to production data in nature. For example, if a user in production uploads an Excel file with 100 rows of data, then you can’t test the same functionality with 10 rows and share the information you discovered afterwards. Talk to the entire team, stakeholders, and customers if possible to learn what happens in production, and try to mimic that during testing (whether human or automated).

  • Most of the time in your iterative build releases you will not have to touch or check everything present in the build (say, old features), so design your execution strategy based on risk, i.e. the risk of something breaking.

  • So start small, with one environment (where all the intended features of a release are available) that is close to production in all respects, targeting some real value out of it. Later, think of ways to make the suite suitable for running in different environments if that is really needed (thinking about what really makes these environments different will help you decide: is it data, configuration, or code differences?).
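One way to sketch the risk-based execution idea in code (the area names and spec file names below are made up for illustration): map the areas a release actually touched to the suites worth running, and always keep a small smoke set in the mix.

```typescript
// Sketch: pick suites to run based on which areas of the app changed,
// so a release that only touches billing doesn't trigger every suite.
// All area and file names here are hypothetical.
const SUITES_BY_AREA: Record<string, string[]> = {
  billing: ['billing.spec.ts', 'checkout.spec.ts'],
  login:   ['auth.spec.ts'],
  reports: ['reports.spec.ts'],
};

function suitesToRun(changedAreas: string[]): string[] {
  const suites = new Set<string>(['smoke.spec.ts']); // smoke always runs
  for (const area of changedAreas) {
    for (const s of SUITES_BY_AREA[area] ?? []) suites.add(s);
  }
  return [...suites].sort();
}
```

The resulting list could then be passed to the test runner, so unchanged features only get the cheap smoke pass rather than the full regression.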

Wishing you success in your automated test journey :slight_smile:

Warm Regards


Very helpful, thank you very much!