Could you share an example of a time when you had to decide whether to automate a test or handle it manually, and what factors guided your decision?

In the travel domain, payment scenarios had to be tested in the production environment, which was especially sensitive since real transactions were involved. To test payment flows, we had to raise a request with the business and finance teams to approve test cards. These cards had a fixed transaction limit, so we had to plan each booking very carefully. We would typically:

Choose hotels with flexible cancellation policies

Book short stays to keep the transaction amount within the approved limit

Ensure the booking was eligible for full cancellation and refund, so we could reverse the transaction after the test

This required coordination across QA, business, and finance, and added a layer of complexity to test planning. Since we couldn’t run these tests frequently or freely, we executed them manually, with careful tracking of amounts, hotel conditions, and refund eligibility.
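
To keep ourselves honest, a checklist like this can even be captured as a small pre-flight script before each booking. A minimal sketch in Python, with a made-up card limit and hypothetical field names:

```python
# Hedged sketch: encode the manual booking checklist as a pre-flight check.
# Field names and the card limit are hypothetical, not our actual system.
from dataclasses import dataclass

CARD_LIMIT = 150.00  # approved test-card transaction limit (assumed)

@dataclass
class BookingCandidate:
    hotel: str
    nights: int
    total: float
    free_cancellation: bool
    fully_refundable: bool

def safe_to_book(c: BookingCandidate) -> bool:
    """Mirrors the checklist: flexible cancellation, within limit, refundable."""
    return (
        c.free_cancellation
        and c.total <= CARD_LIMIT
        and c.fully_refundable
        and c.nights <= 2  # keep stays short to stay under the limit
    )

candidates = [
    BookingCandidate("Hotel A", 1, 95.00, True, True),
    BookingCandidate("Hotel B", 3, 310.00, True, True),
]
print([c.hotel for c in candidates if safe_to_book(c)])  # -> ['Hotel A']
```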

Another case was the mandatory transfers functionality, which depended on specific combinations of destination and hotel. Setting up the required test data involved coordination across multiple teams and systems: we had to raise a request, and once the data was used, it had to be reset before running the next test.

Because of this high dependency on data setup, and the time-consuming nature of resetting it, we decided to test this functionality manually, even though it was business-critical. Automating it would have required significant effort to manage data preparation and cleanup, which made it inefficient at the time.

Another example was during seasonal promotions, like summer or winter A/B deals, where the business needed fast feedback on UI changes or pricing logic. Since these were often one-time or short-term features, we chose to test them manually to save automation time and ensure quick delivery.

Can you share your experience?

3 Likes

Two years ago, I automated nothing. Now, I automate most things, unless it's ridiculous. Take a wild guess…

2 Likes

Do you think things have changed and that the situations I mentioned will get automated?

Working with embedded devices interacting with different software was massively challenging. We could have spent months enabling our automation frameworks to support it… or just tried it ourselves in a couple of hours. On the flip side, I was working on a tool to migrate data between two applications. Testing all the different nuances was really time-consuming, but through integration-layer tests I was able to put these tests together super quickly.

Another example, similar to your seasonal promotion one, was when we had to introduce a feature that was only going to be present for one release.

Generally I feel that:

  • It can be more expensive to automate something than to manually test it ONCE.
    • Your technology and domain will really factor into this.
  • I do not want to be manually re-running lots of regression tests.
  • I don’t need to re-run lots of tests.
    • But there’s plenty that would be nice to have re-running.

Automation is usually the most efficient way to handle regression testing, but not all forms of testing.

One thing that I'd call out is that automation doesn't need to be all or nothing. If you have a time-consuming test that needs re-running, then yes, a fully automated test is ideal; but can you automate parts of that test? Accelerate the manual testing.

Not expecting an answer… but it's worth asking yourself: why is it so involved to automate these tests? Can you optimise the collaboration and the data gathering? Or the resetting of the system?
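
For example, even just scripting the data setup and reset around the manual session can pay off. A minimal sketch, assuming a hypothetical internal test-data API (all endpoints and names are made up):

```python
# Hypothetical sketch: automate only the setup/reset around a manual test.
# The URL, endpoints, and payloads are illustrative, not a real API.
import requests

BASE_URL = "https://test-env.example.com/api"  # assumed internal test API

def seed_mandatory_transfer_data(destination: str, hotel_id: str) -> str:
    """Create the destination/hotel combination the manual test needs."""
    resp = requests.post(
        f"{BASE_URL}/test-data/mandatory-transfers",
        json={"destination": destination, "hotelId": hotel_id},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["dataSetId"]

def reset_test_data(data_set_id: str) -> None:
    """Return the system to a clean state once the manual test is done."""
    requests.delete(f"{BASE_URL}/test-data/{data_set_id}", timeout=30).raise_for_status()

if __name__ == "__main__":
    data_set_id = seed_mandatory_transfer_data("MLE", "HOTEL-123")
    print(f"Data set {data_set_id} is ready; run the manual checks now.")
    input("Press Enter when finished to reset the data... ")
    reset_test_data(data_set_id)
```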

3 Likes

@maithic,

In general, it is the release and implementation schedule that controls the whole process. To give an example from my own experience: we normally automate only the finished parts of the app, because user feedback often results in changes to the app's functionality.

One example from a previous project I worked on: the team decided to delay automation until the feature was well defined. Initially, we did the testing manually to make sure the UI and logic updates were consistent; then, once the requirements were stable, we went on to automate the scenarios that had already been confirmed.

My advice is always to start automation with the sanity test cases, then the smoke cases. Communication with the Business Analysts (BAs) is the critical step, as they are the ones who know whether a feature is still being developed or has been completed. After that, and with the Team Coordinator's confirmation, I move on with the automation.

If the whole project is completed and the application is stable, that is when I will do end-to-end automation to cover all areas.

2 Likes

Sometimes I go through some simple discussions that have sort of become internalised, so the thinking is more natural these days.

Does the activity lean towards human or machine strengths?

For example: big data, high volumes, x-to-n variations, repeating the same steps over and over, speed control (such as race conditions or time-scheduled runs), and test data and environment setup all tend to lean towards machine strengths, so an automation tool could help out. Testing to verify can also fall into this, but usually only when one of the elements above is also in play.
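
As a tiny illustration of the x-to-n variations point: once a check is scripted, each extra variation costs one line. A sketch, where `price_for` is a hypothetical stand-in for real pricing logic:

```python
# Sketch only: `price_for` and the expected values are hypothetical.
import pytest

def price_for(nights: int, rate: float) -> float:
    return round(nights * rate, 2)  # stand-in for the real pricing logic

@pytest.mark.parametrize("nights,rate,expected", [
    (1, 99.90, 99.90),
    (2, 99.90, 199.80),
    (7, 120.00, 840.00),  # each extra variation costs one more line
])
def test_price_variations(nights, rate, expected):
    assert price_for(nights, rate) == expected
```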

When more learning is required, then discovery, exploration, experiments, and investigations still lean towards hands-on approaches, though there has been a strong drift towards highly technical, tool-loving hands-on approaches. Empathy is also a very human thing, so experimenting with UX risk tends to benefit from a hands-on approach again. Testing-to-learn models tend to benefit from more hands-on activities.

The second discussion is linked to those verify-or-learn goals: automate the things you already know very well (the well-known risks covered above); for everything else, where there are uncertainties or things you may not even be aware of yet (the unknowns), take a hands-on approach. In time, though, things will naturally drift from known to unknown, so the model rebalances.

Written out, it might not seem as clear as my own thought process, but it tends to go along those lines.

Taking it a bit further, I start weighing in cost, effort, and value. Mobile Flutter apps are a real example: based on the earlier discussions, in theory we would have opted for some UI regression tests, but when that was considered low value and costly compared to a hands-on approach, the automation side was streamlined significantly, down to the basics.

AI, I suspect, will impact that value-and-cost discussion significantly. Where before I may have opted for slim or even no automation after weighing up cost, effort, and value, as those come down with AI use we will be revisiting those decisions.

What I'm not sure of yet is whether the decisions to use AI will follow similar discussions to the above: the capability has increased, so more might shift towards machine strengths, but potentially the number of unknowns also grows significantly with it, resulting in the need for more human-strength involvement as well.

2 Likes

Sometimes people (including myself) can get hung up on a binary decision of whether to automate a test case or not. I track regression packs to see what percentage of test cases we automated and which we "chose not to". So in Zephyr, every test case has an automation status of "Automated", "Not for Automation" or "To be automated".

However, I'll be honest, that is a little unhealthy, because automation ain't always that binary. In your first example you've got a number of processes going across different user types, data prep, multiple scenarios, resetting the data; could they be broken down into separate automated operations to assist the testing? I.e. could the data set-up and the resetting be automated to make the manual testing easier? Could you have a specific environment set-up that you can bring up in the right state to start testing?

Obviously I don't fully understand the complexity of what you're testing, but the principle is: when you see one big automation problem, turn that one big problem into lots of little problems and solve those. You may not automate the whole testing process, but you might make the testing more efficient.
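
For instance, one of those little problems might be "bring the environment up in the right state". A rough sketch of what that could look like, assuming a docker-compose stack and a seed script (both hypothetical):

```python
# Hypothetical sketch: bring up an environment already in a known state.
# Assumes a docker-compose.yml and a seed.sql exist in the project.
import subprocess

def up_with_seed() -> None:
    # Start the stack (the compose file is an assumption).
    subprocess.run(["docker", "compose", "up", "-d"], check=True)
    # Load a known-good data snapshot so manual testing starts from
    # the right state instead of a long manual data-prep session.
    subprocess.run(
        ["docker", "compose", "exec", "-T", "db",
         "psql", "-U", "app", "-d", "app", "-f", "/seed/seed.sql"],
        check=True,
    )

if __name__ == "__main__":
    up_with_seed()
    print("Environment up and seeded; ready for manual testing.")
```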

4 Likes

@maithic


When I was testing an e-commerce site, we had to decide whether to automate the checkout flow, which involved multiple third-party payment gateways. Initially, automation seemed like the obvious choice to speed things up, but we quickly realized it wasn’t that straightforward.

The checkout process was constantly evolving—there were frequent UI changes, external redirects to different payment providers, and the flow itself kept getting tweaked based on user feedback. Every time something changed, we’d have to update the automation scripts, and they’d break more often than they’d actually help us.

So we took a step back and made a practical decision: automate only the parts that were stable and repetitive. Things like logging in, searching for products, and adding items to the cart—these didn’t change much, and automating them made sense for quick regression runs.
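
For those stable parts, a simple browser check is usually enough; here is a sketch of the idea (Playwright syntax, with a made-up URL and selectors, not our real ones):

```python
# Sketch of a stable-path smoke check; the URL and selectors are hypothetical.
from playwright.sync_api import sync_playwright

def test_search_and_add_to_cart():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://shop.example.com")
        page.fill("#search", "coffee mug")
        page.press("#search", "Enter")
        page.click("text=Add to cart")  # first matching product
        assert page.inner_text("#cart-count") == "1"
        browser.close()
```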

But for the checkout flow itself, we kept it manual. This way, testers could actually observe how users might behave in different scenarios, catch visual issues that automation would miss, and handle all those edge cases that pop up with real payment gateways—like timeouts, failed transactions, or unexpected error messages.

The deciding factors were pretty straightforward: how stable was the feature, how much time would we spend maintaining the scripts versus just testing manually, and where did we actually need human judgment? Some things just need a person to look at them and say, “Wait, that doesn’t feel right.”

In the end, this approach worked well. Automation handled the repetitive stuff efficiently, freeing up time for testers to focus on the complex, unpredictable areas where their experience and intuition really made a difference.

1 Like

It depends on the current flow and the automation coverage that you already have.
Is it easy to automate things in the current flow?
Do you mainly have the E2E functionality covered?
The factors that guide me are: would automating this test serve as future regression coverage? Is it a main flow, or an edge case that I would not need in the future?
I saw third-party tests mentioned here that are complex to automate.
For those, I decide to automate up to the call to the third party, to verify that all the data that should be sent to the third party arrives at that point.
Another option is to create a mock for the third party, or a predefined object, to test the rest of the flow.
Sometimes, though, the third-party flow is the main client flow or the most bug-prone flow, and we cannot give up on it.
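
A sketch of the "automate up to the third-party call" idea, assuming the code under test uses the `requests` library; the gateway URL and payload fields are invented:

```python
# Sketch: stub the third-party gateway and assert the outgoing payload.
# Assumes the code under test calls the gateway via `requests`.
import json

import requests
import responses  # pip install responses

GATEWAY_URL = "https://pay.example.com/charge"  # hypothetical

def start_checkout(amount: float) -> None:
    # Stand-in for the real code under test.
    requests.post(GATEWAY_URL, json={"amount": amount, "currency": "USD"}, timeout=30)

@responses.activate
def test_data_sent_to_gateway():
    responses.add(responses.POST, GATEWAY_URL, json={"status": "ok"}, status=200)
    start_checkout(49.99)
    sent = json.loads(responses.calls[0].request.body)
    assert sent == {"amount": 49.99, "currency": "USD"}
```
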
Regarding the test cards: are those real payments, or are there mock cards?
If it's real, you can define in the automation environment a value that is the bare minimum for a payment, such as 0.0001, and run it only if predefined conditions are met (and do one payment flow per test run).
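
The "run only if predefined conditions are met" part could be a simple guard, something like this pytest sketch (the environment flag and the `charge_test_card`/`refund` helpers are hypothetical):

```python
# Sketch: a real-payment check that only runs when explicitly enabled.
# `charge_test_card` and `refund` are hypothetical helpers.
import os

import pytest

REAL_PAYMENTS_ENABLED = os.getenv("ALLOW_REAL_PAYMENTS") == "1"

@pytest.mark.skipif(not REAL_PAYMENTS_ENABLED,
                    reason="real payments run only when explicitly enabled")
def test_minimal_real_payment():
    amount = 0.0001  # the bare-minimum value, as suggested above
    receipt = charge_test_card(amount)  # hypothetical helper
    assert receipt.status == "approved"
    refund(receipt.id)  # reverse the charge immediately
```
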
Hope it helps!

1 Like

@maithic "Whether to automate a test" → this is already answered above, and I agree too.

For the specific payment use case you mentioned: in one of my projects, for an OTT platform, I had to create user accounts and provide billing details to complete the process. For that, dummy test cards were used; the code was tweaked so that it recognised the fake card, and the end-to-end use case was automated. (All of this testing was done in the QA environment.)

With the real payment system, automated test cases were created for negative scenarios, so real cards were never charged but the negative flows were still tested: for example, a wrong PIN, a transaction not authorised by the user, a third-party payment gateway failure, etc.
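
Those negative scenarios map naturally onto a parametrised test. A rough sketch, where `attempt_payment` and the error codes are hypothetical:

```python
# Sketch: negative payment flows that never charge a real card.
# `attempt_payment` and the error codes are hypothetical.
import pytest

@pytest.mark.parametrize("card,pin,expected_error", [
    ("4111111111111111", "0000", "WRONG_PIN"),
    ("4111111111111111", None,  "NOT_AUTHORISED"),    # user never confirms
    ("4000000000000002", "1234", "GATEWAY_FAILURE"),  # gateway declines
])
def test_negative_payment_flows(card, pin, expected_error):
    result = attempt_payment(card=card, pin=pin, amount=10.00)
    assert result.error_code == expected_error
```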

1 Like

/sideways

Not sure why you needed to worry about testing different price limits in your travel booking system, manually or automatically, unless your platform imposes some kind of budgeting feature as an MVP. I would assume that kind of case can be covered with mocks and unit-tested at the correct interfaces, where it matters and where the business logic in the system interacts. Also, why are you testing in the live environment? It sounds like you need a product re-architecture, sorry, but if I worked there, I would have a two-year plan to change the way testing hooks are presented. I assume you already have access to the log/journaling back-end as part of the automation investigation?
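
For instance, if the limit check lives behind its own interface, a plain unit test with a mocked gateway covers it without a single live booking. A sketch, where `BookingService` and its collaborators are invented for illustration:

```python
# Sketch: unit-test the limit logic at the interface, gateway mocked.
# `BookingService` and its collaborators are hypothetical.
from unittest.mock import Mock

import pytest

class BookingService:
    def __init__(self, gateway, limit: float):
        self.gateway, self.limit = gateway, limit

    def book(self, amount: float):
        if amount > self.limit:
            raise ValueError("over approved card limit")
        return self.gateway.charge(amount)

def test_rejects_over_limit_without_touching_gateway():
    gateway = Mock()
    service = BookingService(gateway, limit=100.0)
    with pytest.raises(ValueError):
        service.book(150.0)
    gateway.charge.assert_not_called()  # no real transaction ever attempted
```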

You have a lot of variables, and you have listed most of them. In some ways the job of an analyst is to find all the variables, graph their interactions, and narrow down which ones you can automate; but setting up a non-production or "staging" environment is the thing I would look at. Such environments are also useful for security testing, so it may be possible to find synergies to motivate getting a non-production environment set up. Sorry my response is a bit off topic, but I hope you can take this as friendly advice/opinion, not as fact.

1 Like

In my experience, the decision to automate or test manually always comes down to context and intent.

One example: we had a flaky regression area around user permissions and role-based access. Initially, we automated it — but the maintenance cost was huge because the business logic kept evolving every sprint. Eventually, we switched to a hybrid model: partial automation for core flows, and exploratory manual validation for edge cases where business context mattered most.
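
Core role-based flows like that tend to reduce to a table of role/endpoint/expected-outcome checks. A hedged sketch (the roles, endpoints, and the `api_client` fixture are all hypothetical):

```python
# Sketch: core role-based access flows kept in automation; edge cases
# stayed manual. Roles, endpoints, and status codes are hypothetical.
import pytest

CORE_ACCESS = [
    ("admin",  "/admin/users", 200),
    ("editor", "/admin/users", 403),
    ("viewer", "/reports",     200),
]

@pytest.mark.parametrize("role,endpoint,expected", CORE_ACCESS)
def test_core_role_access(api_client, role, endpoint, expected):
    response = api_client.get(endpoint, as_role=role)  # hypothetical fixture
    assert response.status_code == expected
```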

What guided our decision was this:

  • Change frequency — if the logic changes often, automation debt grows faster than its value.

  • Feedback loop speed — if you need insights today, manual tests or AI-assisted exploratory checks might be faster.

  • AI-assisted prioritization — lately, I’ve used AI to analyze flaky test trends and identify which ones are worth re-automating. That’s been a game-changer for balancing effort and impact.

In short, automation is not the goal — sustainable feedback is.
Would love to hear if anyone else is experimenting with AI or data-driven signals to make similar calls.

2 Likes

If functionality has been validated at the correct test level (unit test, component test, etc.), then you don't need a complex test automation (test check) script.

Often, payment providers offer test users to validate payment flows; otherwise, you can create a simulator which EXACTLY behaves like the real payment platform ("integration in the small").
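
A minimal sketch of such a simulator, using only the Python standard library; the `/charge` endpoint and the response shape are assumptions, and a real simulator would have to mirror the provider's documented contract exactly:

```python
# Minimal payment-platform simulator sketch (stdlib only).
# The endpoint and response shape are assumptions; a real simulator
# must mirror the provider's documented contract exactly.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakeGateway(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        if self.path == "/charge":
            # Decline over a threshold, approve otherwise: the kind of
            # behaviour rule the real platform's docs would dictate.
            status = "declined" if body.get("amount", 0) > 100 else "approved"
            payload = json.dumps({"status": status}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(payload)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8088), FakeGateway).serve_forever()
```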

I can see your scenario is complex. I also have a few tests which involve nine systems in total to validate a few scenarios. They required a lot of setup and preparation steps.