Low code or code test automation on projects built on IT platform products?

Test automation of business systems built on top of a product. Choices of tools, and why? By that I mean things with a web UI, a rich API ecosystem, perhaps some older tech extensions specific to the product, loads of configurability, and uniqueness in the data. Things like Salesforce, Dynamics 365, SAP. Or open source equivalents like Odoo.

These systems are particularly popular targets for low code test automation tools. I am not particularly keen on those tools. Sometimes because I believe tests should be designed to fail, and a lot of low code is designed for capture, with insufficient support for the times when things fail. Sometimes because I believe a new language, even a visual one, creates a distance in feedback that ends up costing us more than it saves.

If you work on one of these projects where a lot of functionality is the same out of the box, I am curious about your take. Is something essentially different, so that the open source options of Selenium (and its ecosystem tools), Playwright, and programming-language-specific API test frameworks, or open source orchestration frameworks such as Robot Framework, just don’t make sense to you? Or the opposite: the low code platforms just don’t make sense to you?

4 Likes

I feel like I should respond myself with more experiences, now that I have read through the archived threads on similar topics.

We currently have projects with:

  • ts+playwright / python+pytest+playwright+selenium+requests+pytest-bdd / java+selenium+restassured / cypress – none of these are low code but quite simple code with complex bits of infrastructure usually built by more senior folks
  • TONS of Robot Framework. When having to explain that we use many tools is a problem, Robot Framework is the “business-oriented language”, “open source” orchestration go-to. Just call everything that, and people who have little interest in details will vanish. But honestly, while it can read like English, you don’t write English with specific tab rules, and it’s not low code.
  • TONS of easy API things, mostly SoapUI and Postman collections, and we are trying to get away from them all since they diff so poorly
  • Low code things we use: Ranorex, UIPath (the latter we partner with)
  • Low code things we have tried out: Tosca, Katalon, Mabl
  • Low code things I am trying to try out but getting stuck in a particular sales process instead of getting info: LeapWork
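On the diffing point: a plain-code API test is ordinary text in version control, so a change shows up as a readable line-by-line diff rather than a reshuffled collection export. A minimal sketch in Python to illustrate, where the endpoint, payload shape, and expected values are all hypothetical:

```python
# A plain-text API test diffs line by line in git, unlike an exported
# Postman/SoapUI collection (one large JSON blob). Endpoint and payload
# shape below are made up for illustration.

def problems_in_customer(payload: dict) -> list:
    """Return human-readable problems with a customer payload; empty means pass."""
    problems = []
    if "id" not in payload:
        problems.append("missing id")
    if payload.get("status") != "active":
        problems.append("unexpected status: %r" % payload.get("status"))
    return problems

def test_get_customer():
    import requests  # imported here so the pure check above carries no dependency
    resp = requests.get("https://example.test/api/customers/42")
    assert resp.status_code == 200
    assert problems_in_customer(resp.json()) == []
```

The payload check is split out as a pure function so the interesting assertions stay small, reviewable, and reusable outside the HTTP call.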

What I am observing in the space of platform products is that hundreds of low code tools are seeking to partner with the platform product vendor, as that would be a powerful recommendation. I am trying to figure out what the day-to-day life of people operating these tools looks like, and what I would want to make it look like, particularly for career testers and acceptance testers.

The space of options is overwhelming. There are UI recorders for Selenium and Playwright too. The businesses behind these tools can’t succeed unless some of their users find them worthwhile, and I would particularly like to hear from those users. I can no longer become a person who does not code. I need to ask testers who are different from me what they find helpful. Hopefully without me or anyone else strongly advocating my predisposition.

I would really appreciate hearing people’s perspectives.

1 Like

I’m not sure if I get right what you are asking for. I’m still struggling to get it.

I’ll share what came to my mind, myself being someone able to code:
Low-code tools are used by people who somehow don’t have the budget to become a part-time developer and get used to all the details. I see many people being in that situation.
Once you have the budget, and are willing to, you become one and will go for real code.

This connects to the platforms you mentioned being popular. Many people without the budget for personal development have to test them.

I don’t judge the people. It’s a question about their management.

1 Like

I worked and wrote on this topic some years back. :wink: like this one:

I worked on exactly these enterprise systems with Leapwork as the test tool and helped pre-sell it, especially in the Pharma domain. I’m no longer in that team, but I know where they are and who to reach out to. Last time I looked, Leapwork did have good “exception handling”, as I would call it.

Microsoft promotes Leapwork especially for D365 FO & CE, as the previous RSAT tool failed to scale.

Do ping me for details :slight_smile:

2 Likes

Operating a low-code tool also requires budget. Usually two kinds of budget, in my experience: money out of pocket for a commercial tool license AND time to learn and use the tool. The introductory courses even for low-code can easily be a full work week, and doing anything beyond a record-playback demo takes more learning.

In this scenario, I am their management. I am also one of the users.

I am curious about what this thing you call “exception handling” is. Debugging in case of failing tests? Some sort of auto-healing? Something entirely different?

Since I spent much of my evening today thinking about this, I ended up sketching features I am seeing.

Will most definitely read your article, thanks for sharing that one.

I wonder about the shape of the test pyramid when considering enterprise solutions like MS Dynamics. Is unit testing common and how about the services that are baked into the solution? Testers typically aren’t aware of the 90% of services that lie beneath the surface like an iceberg, I would guess! Although as you suggest in your blog post, Microsoft don’t really support a dedicated role of tester anyway…
For companies with deep pockets, Leapwork seems attractive but I’m left wondering why Microsoft don’t make it easier to test a SaaS product with any other tool? Locators that seemingly change on a monthly basis, and RSAT looks like a relic from the 90’s macro days! In my opinion, the test automation pyramid for a manager would look like an hourglass, with lots more UI/UAT going on (in the ideal world)

Business testers most definitely are not aware. Testers not taking part in configuring and integrating aren’t generally aware.

We essentially have at least three layers in which testing happens. None of the organizations typically shares what they have on testing with the others.

Sure, anything needs budget. I meant that real coding takes way longer to learn than low-code.
It takes months, if not years, to become even a “part-time” developer. It takes at least hundreds of hours of learning to code well enough for that.

I see that as outweighing the things you mentioned.
Being able to code is a long-term project of years, while low-code may be achieved within weeks or months.
And even there I believe the license costs are the smallest number on the bill.

Does this make my points from before clearer?

1 Like

Here’s the thing: I have taught multiple Salesforce business analysts to write Playwright scripts, and it seems to take half a day of pairing. The time investment for learning a feature-rich low code system is not significantly less.
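For a sense of scale, the kind of script those analysts end up writing reads close to the flow description itself. A sketch assuming a Playwright `page` object; the field and button names are invented:

```python
# Sketch of a flow a business analyst might write after a few hours of
# pairing; it expects a Playwright page object passed in. The button and
# field names are hypothetical.

def create_contact(page, first_name, last_name):
    page.get_by_role("button", name="New Contact").click()
    page.get_by_label("First Name").fill(first_name)
    page.get_by_label("Last Name").fill(last_name)
    page.get_by_role("button", name="Save").click()
```

The role- and label-based locators also tend to survive the platform’s monthly DOM churn better than generated-id selectors would.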

Programming is like writing. Getting started is easy, and it takes a lifetime to get good at it.

Low code relies on a visual language for programming, and while we tend to believe kids will figure out Blockly as a language more easily, there’s plenty of research showing that figuring out certain areas of programming comes as easily or better with text-based approaches.

1 Like

Hi Maaret,

What I was thinking about was the way Leapwork solves this. Most “blocks” have a runtime failure mode. If during runtime the “find web element” block doesn’t find the expected element, then you can direct the execution somewhere specific. Similar to a try/catch clause in programming. I hope that elaborates :slight_smile:
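In plain-code terms, that runtime failure mode maps roughly onto a try/except around the element lookup, with the “not found” case routed to its own continuation. A sketch where `ElementNotFound` and the finder callable are hypothetical stand-ins:

```python
# Rough plain-code equivalent of a low-code block with a runtime failure
# edge: route "element not found" to a specific continuation instead of
# aborting the whole run. All names here are hypothetical stand-ins.

class ElementNotFound(Exception):
    """Raised when a lookup cannot locate the expected element."""

def run_block(find_web_element, on_found, on_not_found):
    try:
        element = find_web_element()
    except ElementNotFound:
        return on_not_found()      # the block's "failure" output edge
    return on_found(element)       # the block's "success" output edge
```

In a visual tool the same routing is drawn as a second outgoing connector on the block; in code it is a second branch.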

Thank you for the visual, I’m sure most tools will have variations within each one. I understand where you are coming from with some systems under test only having a UI, even for the “developers” (for instance people configuring a SaaS tool). Hence the pyramid model from software development projects doesn’t really hold. Or only holds in the sense that the vendor (e.g. MS) tests the lower parts, but an end-user organization can only test through the UI. I wrote a bit about it here a while back🤷‍♂️: Assumptions of the Test Pyramid | Complexity is a Matter of Perspective

Grady Booch once said: Low-Code and No-Code Development is nothing more than moving up another level of abstraction.

Having a computer science background I can see how principles of things like encapsulation, clean code and even exceptions also apply to low-code automation :smiley:

thank you for reminding me of this topic

2 Likes

I’m not really convinced by the low-code approach, because it often locks you into a specific tool. Even if it’s easy to learn, you still need to assign a dedicated team member to manage it. Most open-source solutions use syntax that my developers can easily write and maintain.

Some of our clients have asked us to provide a service to run their existing Playwright automation scripts. This would involve storing the scripts, supporting scheduling, and displaying the results. To be honest, I’m not sure if this is a genuine need or just a convenience request from a lazy QA!

What’s your take?

Figuring out why tests fail is a lot of work, and getting that as a service can sound tempting. There are other things busy people could do than analyze failures daily. Then again, close to development, analyzing failures daily is the thing, so all of this is a trouble of distance (lack of granularity).

Late to the game. Reading through the posts, I kept wondering

  • Who would be authoring, maintaining, revising, as well as reporting on these tests? Who would review test results?
  • What skills must the users of the test tool have, and what skills do they have to acquire?
  • Are there only functional tests included here, or would you also need to test for other quality attributes?
  • Do you need visibility into the pass/fail history or other attributes such as execution times?
  • Do you need a single tool, or would you consider a separate tool for end-to-end testing vs. API level testing?

Having worked at a low-code vendor, I feel that these systems make it easy to author tests, and this may lead to the industry obsession with too many end-to-end tests. Some tool vendors make it easy for developers to run these end-to-end tests locally as part of the development flow, which can shorten feedback loops.
I myself like playwright/typescript. I have written table-based API tests using just the plain fetch API (JS/TS) or axios. BDD is sure appealing if you have non-developers authoring tests especially if there is a good library of custom assertions available for the application.
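The same table-driven idea translates to Python/pytest terms too; the paths and expectations below are made up for illustration:

```python
# Hedged sketch of a table-driven API test: the table is plain data, the
# test body is written once. Endpoints and expected values are hypothetical.
import pytest

CASES = [
    # (path, expected_status, expected_key_in_body)
    ("/api/customers/42", 200, "id"),
    ("/api/customers/none", 404, "error"),
    ("/api/orders/42", 200, "id"),
]

@pytest.mark.parametrize("path,status,key", CASES)
def test_endpoint(path, status, key):
    import requests  # base URL is assumed; adjust per environment
    resp = requests.get("https://example.test" + path)
    assert resp.status_code == status
    assert key in resp.json()
```

Non-developers can extend coverage by adding rows to the table without touching the test body, which is much of what BDD layers promise.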

In my experience, the challenge with Salesforce UI automation is that it can be: 1) really slow, 2) the DOM of Salesforce should be called a crime scene (considering how bloated it is and how large each page is in terms of the sheer number of bytes) which makes using selectors particularly painful when done manually without the help of a test recorder tool, 3) depending on the test, shadow DOM support may be required.
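Worth noting for point 3: Playwright’s CSS selectors pierce open shadow roots by default, which removes one class of that pain; whether that covers a given Salesforce page depends on how its components render. A sketch with a hypothetical component structure:

```python
# Playwright CSS selectors pierce open shadow roots automatically, so a
# plain selector can reach an element nested inside web components
# without manual shadowRoot traversal. The component markup is hypothetical.

def fill_inside_component(page, value):
    # Finds the inner <input> even when it sits behind open shadow roots.
    page.fill("lightning-input input", value)
```

Closed shadow roots remain out of reach for any selector engine, so a recorder tool hits the same wall there.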

A few additional advantages of low code tools, I feel, are the following:

  • reporting is typically built-in even if tracking down the root cause may not be obvious
  • it is usually easy to see the history of pass/fail results (which makes identifying regressions vs. intermittently failing tests easier)
  • make running tests in parallel easy
  • can be run on a large matrix of browser and device types in parallel without having to maintain the weekly/biweekly updates to the browsers
  • can provide history of execution times (to spot performance regressions), coverage metrics (even if it is not code level)
  • can capture logs (console log, HAR file, screenshots, screen recordings) in a single spot and with integration to issue tracking system, the logs can be accessed easily

The Achilles’ heel of low code tools, I feel, is when developers treat them as QA’s toy in orgs where testers / developers are separate and when the tool does not easily integrate into the developer flow.

The Achilles’ heel of low code tools, I feel, is when developers treat them as QA’s toy in orgs where testers / developers are separate and when the tool does not easily integrate into the developer flow.

Anything where this happens - organizational boundaries, tool boundaries - tends to lead to worse results. I wonder, though, if sometimes just freezing some of the change and simplifying the world we live in is the biggest benefit that comes with the role of operating these boxed tools. We are, after all, regularly overwhelmed with the differences in tools.

I facilitated a conversation with my team on browser testing, and gave them a selection of logos. We could not name half of these tools. And you may need to when your box is open enough.

I think that there is another side to low code vs code tests which is psychology. Some people in the development team may not accept results from a low code tool but will accept results from a code tool.

2 Likes

I had another conversation on this with people who don’t code, and what they look for is automation that abstracts away the interface. They don’t want to consider web vs. mobile vs. desktop vs. embedded device vs. APIs but think in terms of user flows and processes. They don’t quite seem to grasp that this is usually more aspirational than practical, that data is intended to change and messes up their ideas, and that these programs don’t have enough built-in properties to note the things humans don’t even think of noting.
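The abstraction they ask for can be expressed in code as well: flows written in process vocabulary that delegate to an interface-specific driver. A minimal sketch with invented names, which of course does not make the changing-data problem go away:

```python
# Minimal sketch of flow-level abstraction over interfaces: the flow
# speaks in process terms, the driver knows the web/API details.
# All class and method names are invented for illustration.

class ApiDriver:
    """One possible interface implementation; a web-UI-backed twin
    would expose the same methods."""
    def __init__(self):
        self.orders = {}
        self._next_id = 1

    def create_order(self, customer):
        order_id = self._next_id
        self._next_id += 1
        self.orders[order_id] = {"customer": customer, "status": "open"}
        return order_id

    def order_status(self, order_id):
        return self.orders[order_id]["status"]

def order_flow(driver, customer):
    """Process-level flow: no web/mobile/desktop/API vocabulary in sight."""
    order_id = driver.create_order(customer)
    return driver.order_status(order_id)
```

Swapping the driver swaps the interface; what the sketch cannot show is everything a human would notice along the way that no driver asserts on.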

I learned a great deal from automating tests with a low code tool. It is easy to underestimate low code automation. What I learned helped me start automating tests with TypeScript on Playwright: Use Low Code as a springboard for learning

I learned the same with leveling pieces on code, and the main difference is that with code you don’t need to change the sandbox when you grow. When you have invested into 2000 tests, that keeps you with what you had.

As I just watched yet another demo of a visual approach, I find it hard to see why drawing a line between two boxes is better than writing a line in text format, especially knowing you will want to diff who added a “line” that now causes trouble. When the way you create abstracts away the way things fail, people don’t cope when things do fail. And tests are generally expected to eventually fail for real reasons.

1 Like

The approach and tools you use depends on many factors.

  • The type of experience your team has
  • Whether or not you have budget to pay for (usually very expensive) licensing
  • Tool hardware requirements and maintenance
  • Time. Time is one of the big ones. How quickly can you ramp up test automation and is it easy to maintain once built?

For web application testing I recommend our Alchemy Testing tool, which solves many of the mentioned hurdles.

  • No coding experience required. It has a very intuitive drag-and-drop interface. However, you can leverage Java/Selenium code if needed to create custom actions.
  • No license fees. It is a free tool. There are paid aspects of it such as optionally using the Gridworks cloud to run tests in parallel on different environments, but it is very low cost. There is also no vendor lock-in like many tools. You can generate executable files and run them anywhere without having Alchemy installed.
  • No hardware or maintenance required.
  • Very easy to learn and use. Yes, there are advanced features that would involve additional learning, but the basics are super-simple. You can install Alchemy and build and execute your first test within minutes. The Alchemy team has done a great job providing knowledge base articles and tutorial videos as well.

Code has its place; I’m not denying that. But low/no-code solutions also provide quick ways to get started, especially for teams with little coding experience. Alchemy combines the best of both worlds, in my opinion.