In backend system testing, why don’t mainstream automated testing tools offer traffic record-and-playback?

I recently surveyed the mainstream API test automation tools and found none based on traffic recording and playback (I mean tools that record and replay real system traffic; recording and replaying page operations is outside the scope of this discussion). The traffic record-and-playback technology at my previous company (Alibaba) has been adopted by many testing teams there. Taking the test group of the transaction technology department as an example, this technology brought them the following benefits:

  • Improved test coverage: it helped them build about 100,000 core test cases and nearly one million non-core test cases. (Before adopting this technology, the cost of writing and maintaining test cases limited them to about 2,000 core test cases.)

  • Improved test efficiency: completing an automated test used to depend on many downstream services; those dependencies are now decoupled through the mock playback mechanism (see the sketch after this list).

  • Fewer regressions: using traffic record-and-playback testing during system refactoring helped them avoid many bugs.
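
To make the mock-playback mechanism above concrete, here is a minimal, hypothetical Python sketch (not Alibaba’s actual tooling; `price_order` and `pricing_client` are invented names): the recorder captures the input, the downstream service’s response, and the final output, and the replay step feeds the recorded downstream response back in as a stub.

```python
import json
from unittest import mock


# Hypothetical service-side function and its downstream client; the names are
# illustrative only, not part of any real recording product.
def price_order(item_id, pricing_client):
    unit_price = pricing_client.get_price(item_id)  # remote call in production
    return {"item_id": item_id, "total": unit_price * 2}


def record_case(item_id, pricing_client, path):
    """Record one real invocation: the input, the downstream response, and the output."""
    case = {
        "input": {"item_id": item_id},
        "downstream": {"get_price": pricing_client.get_price(item_id)},
        "expected": price_order(item_id, pricing_client),
    }
    with open(path, "w") as f:
        json.dump(case, f)


def replay_case(path):
    """Replay the recorded input against the code under test, with the downstream
    call stubbed from the recording instead of hitting the real service."""
    with open(path) as f:
        case = json.load(f)
    stub = mock.Mock()
    stub.get_price.return_value = case["downstream"]["get_price"]
    actual = price_order(case["input"]["item_id"], stub)
    assert actual == case["expected"], f"regression: {actual} != {case['expected']}"
```

In a real deployment the recording happens in production via an agent rather than in test code, but the replay-with-mocks idea is the same.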

So my question is:
Why is this technology not widely used? :thinking:
Have you used a server-side traffic record-and-playback testing product? How well did it work? :grinning:

2 Likes

I can think of a few reasons why such tools might not exist or be widely adopted.

One of the main ones is that the product might only exist as a series of APIs and not have a UI, in which case there would be no point in a recorder.

If the team shares the APIs with external parties, it likely has WSDL or OpenAPI specs that can be imported into certain tools.

Running through all the possible data options manually for an endpoint would be a nightmare. However, tools like the Karate framework allow for data-driven tests, which makes the process a breeze. It also allows for mocks and DB interactions.
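
To illustrate what data-driven API tests look like, here is a rough pytest analogue (Karate expresses the same idea in Gherkin-style feature files; the endpoint and data rows below are made up):

```python
import pytest
import requests  # assumes the requests library is installed

# Hypothetical endpoint and data rows; Karate would keep these in a
# feature file or an external data table instead.
BASE_URL = "https://api.example.com"

CASES = [
    ({"item_id": 42, "quantity": 1}, 200),
    ({"item_id": 42, "quantity": 0}, 400),    # negative scenario
    ({"item_id": 42, "quantity": -5}, 400),   # negative scenario
]


@pytest.mark.parametrize("payload, expected_status", CASES)
def test_create_order(payload, expected_status):
    # One request per data row; adding a case is just adding a row above.
    response = requests.post(f"{BASE_URL}/orders", json=payload)
    assert response.status_code == expected_status
```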

If you are talking about thousands of API tests, I would presume they are being run at a component/integration level, in which case a tool that can generate the tests from code would be more helpful.

API tests usually include a tonne of negative scenarios, which could be very hard to trigger for capture.

That being said, I have used an HTTP recorder and had great success with it. Gatling (a performance testing tool) offers a recorder for generating tests. Because performance tests usually try to replicate user traffic, it helped greatly.

2 Likes

Thanks for your reply :smile:, but the traffic recording and playback I mentioned does not refer to recording and replaying page operations :joy:; it refers to recording and replaying API call requests at the protocol layer, such as HTTP, gRPC, and Dubbo. Refer to this article

1 Like

That tool looks helpful for things like large-scale system changes, such as an on-prem to cloud migration, or if you inherited a large project and only had a list of manual tests to rely on for regression.

I’m still not sure what value it would bring outside of those scenarios. Everything would need to have been built before testing, which I don’t see as ideal.

1 Like

I think there are a few points to consider:

  • For companies that have already built automated tests, is their automation coverage sufficient? How high is the cost of maintaining those automated tests? Could this technology be used to improve quality and efficiency?
  • Building test cases with existing automated testing tools is quite troublesome for some backend services, such as map backend services and stream-processing backend services. Is traffic-based testing a better fit for these?
  • This technology automatically generates test scripts (as sketched below), freeing testers from writing scripts by hand and leaving them more energy to think about how to further improve system quality.
  • At present, some companies still have no test automation at all (refer to this article); most are unwilling or unable to invest in it. If the cost of automated testing were low enough, could this technology help them gain that capability?
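
As a toy illustration of the third point, generating a runnable test from a recorded traffic entry can be as simple as templating the captured request and response. The entry format and field names below are invented for illustration:

```python
import json

# Toy generator: turn one recorded traffic entry into pytest source code.
TEMPLATE = '''\
def test_{name}():
    import requests
    response = requests.{method}("{url}", json={body})
    assert response.status_code == {status}
    assert response.json() == {expected}
'''


def generate_test(entry: dict, name: str) -> str:
    return TEMPLATE.format(
        name=name,
        method=entry["method"].lower(),
        url=entry["url"],
        body=json.dumps(entry["request_body"]),
        status=entry["response_status"],
        expected=json.dumps(entry["response_body"]),
    )


# Example recorded entry (fabricated):
entry = {
    "method": "POST",
    "url": "https://api.example.com/orders",
    "request_body": {"item_id": 42, "quantity": 1},
    "response_status": 200,
    "response_body": {"order_id": 1001},
}
print(generate_test(entry, "create_order_recorded"))
```
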
1 Like

Record and playback is just a tool for a very specific job. It’s annoying to see it sold as solving all problems; it has specific uses. I used a great record-and-playback tool in my first proper tester job. We used it because the data payloads were very specific, and sequences needed to be maintained as well as mutated for negative-testing purposes.

I’m going to say it: if you are not working in a certified or very tightly regulated space, just don’t use record and playback. They are great for building out a UI testing suite, but the fragility they create is not worth your time. Yes, we are seeing some good AI tools (please read the Uber blog article) being used to self-repair UI tests, but record and playback for UI testing is more expensive than people think when it’s the wrong tool. Unless workflows are mandated by a spec or body, the overheads will bog things down, just as they bog down regulated industries like communications, banking, and medical applications: basically apps that infrequently change not only their UI but also their network topologies and network traffic. Network traffic is, in my experience, where record-and-playback tools really deliver the bacon, for security and payload-consistency tests. It’s the complex use cases where the tools shine, IMHO.

2 Likes

Thanks for sharing! :grinning: I know that UI-based recording and playback is very expensive and difficult to maintain, but I am not talking about that kind of tool. :joy: I am talking about the API level, where real user requests can be recorded in the production environment without me having to drive the application myself to produce a test script. For example, this tool: JIterator

1 Like

True, I did not read carefully. That’s pretty much what I did: inspect and re-arrange USB network packets. Modifying the packets was tricky, because we had to do it at the transport level and put check bits into the new packets, but it was super useful. However, it was not easy to write these tests at all. And I’m sure this kind of API test tooling requires the tester to know the API very, very well, both to avoid raising bugs for false positives and to avoid missing negative test cases. Negative test cases were the most fun we had with the playback tool, because we could easily fake any kind of error: security failures, timeouts, missing data, the works. Playback tools are not for everyone, and that’s probably why we don’t see them in all tool stacks.
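
In the same spirit, here is a hypothetical Python sketch of mutating a recorded payload for a negative case and recomputing a toy check byte (real USB or transport-level framing is far more involved than this):

```python
def checksum(payload: bytes) -> int:
    """Toy one-byte checksum: sum of the payload bytes modulo 256."""
    return sum(payload) % 256


def frame(payload: bytes) -> bytes:
    """Append the check byte so the receiver still accepts the frame."""
    return payload + bytes([checksum(payload)])


def mutate_for_negative_case(recorded_payload: bytes, offset: int, new_byte: int) -> bytes:
    """Corrupt one byte of a recorded payload, then re-frame it so that only the
    content is invalid, not the framing: the kind of fault that is awkward to
    hand-craft but easy to produce from a recording."""
    mutated = bytearray(recorded_payload)
    mutated[offset] = new_byte
    return frame(bytes(mutated))


original = bytes([0x01, 0x02, 0x03, 0x04])  # pretend this was captured on the wire
negative_case = mutate_for_negative_case(original, offset=2, new_byte=0xFF)
```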

2 Likes

Reorganizing USB data packets sounds like a challenging task that requires a good understanding of the relevant protocols. However, if the recorded system data is replayed to the system under test without modification for regression testing, it is actually quite low-cost: you only need to deploy a recording client, and the system traffic is automatically recorded and used for playback testing. This is a bit like Postman’s recording feature, except that it supports automated mocks. For example, in addition to the HTTP parameters, it also records the database requests and remote service calls made while handling the request, and that data is replayed as mocks. So I wonder if it could replace Postman for regression test automation.
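
Concretely, one recorded entry in such a tool might look something like this (the structure and field names are guesses for illustration, not JIterator’s actual format):

```python
# A hypothetical recorded entry: the inbound HTTP request plus the database and
# remote-service calls captured while handling it, which the replay step feeds
# back as mock responses instead of hitting the real dependencies.
recorded_entry = {
    "http": {
        "method": "POST",
        "path": "/orders",
        "body": {"item_id": 42, "quantity": 1},
        "response": {"status": 200, "body": {"order_id": 1001}},
    },
    "sub_invocations": [
        {"type": "db", "statement": "SELECT price FROM items WHERE id = ?",
         "args": [42], "result": [{"price": 9.9}]},
        {"type": "rpc", "service": "InventoryService.reserve",
         "args": {"item_id": 42, "quantity": 1}, "result": {"ok": True}},
    ],
}
```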

Postman is a free/cheap tool compared to the hardware of a USB LAN analyser (which is also cheaper these days, mind you). But I learned two things from that project: first, the tool made negative test cases far, far easier to automate than if you had to produce the fake data yourself; and second, your manually coded bad data would never find that expertly crafted hacker payload designed to break your product.

Sniffer/proxy tools like Postman, netcat and others let you learn by observing the inner workings, and any playback tool that does not expose those inner workings is probably not going to find many bugs for you. And so yes, @aarontb, it’s a lot to learn, and having too much to learn and invest in turns a lot of people off these specialised tools.

1 Like

I would say that traffic recording and replay is not very popular or widely beneficial because, as was said above, it is complex to create and maintain, raises data-security concerns, needs significant resources, and can be difficult to integrate into existing workflows.

I reckon automated tests with mocks and stubs are simpler to implement, easier to maintain, and sufficient for most testing needs. Mocks and stubs can simulate dependencies and specific scenarios effectively, offering controlled and predictable testing environments. It seems to me they’re the more practical and efficient choice in many situations.
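
For comparison with the recorded-mock approach above, a hand-written mock-based test might look like this minimal sketch (the function and names are made up):

```python
from unittest import mock


# Hypothetical function under test: it multiplies a unit price fetched from a
# remote pricing client by the requested quantity.
def order_total(item_id, quantity, pricing_client):
    return pricing_client.get_price(item_id) * quantity


def test_order_total_with_stubbed_pricing():
    pricing_stub = mock.Mock()
    pricing_stub.get_price.return_value = 10  # controlled, predictable dependency
    assert order_total(42, quantity=3, pricing_client=pricing_stub) == 30
    pricing_stub.get_price.assert_called_once_with(42)
```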

1 Like

Mocking and stubbing can indeed give you stable test cases; it is one solution. In fact, the JIterator tool also uses a mock/stub mode, but it automates the script-production process, so the cost of use does not seem high. If users are worried about security risks, they can consider desensitizing the recorded data (see the sketch below) or recording only testers’ own operations as test scripts.
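
A minimal sketch of that desensitization step, assuming a simple list of sensitive field names (a real tool would use configurable masking rules):

```python
# Mask the values of sensitive fields in a recorded entry before it is stored
# or replayed. The field list here is an assumption for illustration.
SENSITIVE_FIELDS = {"phone", "id_card", "email", "address"}


def desensitize(entry):
    """Recursively replace sensitive field values with a placeholder."""
    if isinstance(entry, dict):
        return {
            key: "***" if key in SENSITIVE_FIELDS else desensitize(value)
            for key, value in entry.items()
        }
    if isinstance(entry, list):
        return [desensitize(item) for item in entry]
    return entry


masked = desensitize({"user": {"name": "Alice", "phone": "13800000000"}, "items": [42]})
# masked == {"user": {"name": "Alice", "phone": "***"}, "items": [42]}
```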