What would you do? How can I automate this?

Hi all,

This is an example of the problem I’m facing, a very simplified version of the system I’m testing.

Until now, I’ve been performing only manual tests, but I’m looking for a way to automate this stuff, since it’s a flow that is very time-consuming.

The normal flow is:

  • Post to Service A (and at the same time add to a Kafka Topic C, which I forgot when drawing this scheme) >> these two actions will trigger some actions in Service A that I can validate through Hangfire or the application log.
  • Some actions triggered above will produce to a Kafka Topic A that will be consumed by Service B. This will trigger some actions in B that will be displayed in the application logs.
  • Again, some actions triggered in B will return a response through another Kafka topic that will be consumed by A.
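To make the flow concrete, here is a minimal Python sketch of what one automated pass over these steps could look like. Everything here is an assumption for illustration: the endpoint path, the log text, the topic name, and the client functions (with a real system you would inject e.g. `requests.post` and a Kafka consumer; here in-memory fakes stand in for them):

```python
# Hypothetical end-to-end check for the flow above: post to Service A,
# then verify the expected entry appears in its logs and the expected
# message lands on Topic A. All names are invented for illustration.

def run_flow_check(http_post, read_service_logs, consume_topic, payload):
    """Drive one pass of the flow and collect evidence for each step."""
    status = http_post("/service-a/orders", payload)       # step 1: trigger Service A
    log_ok = any("order accepted" in line                  # step 1b: validate via logs
                 for line in read_service_logs("service-a"))
    topic_msgs = consume_topic("topic-a")                  # step 2: what A produced
    return {"status": status, "log_ok": log_ok, "topic_a": topic_msgs}

# In-memory fakes standing in for real HTTP/log/Kafka clients.
fake_post = lambda path, body: 200
fake_logs = lambda svc: ["2024-01-01 order accepted id=42"]
fake_topic = lambda name: [{"order_id": 42}]

result = run_flow_check(fake_post, fake_logs, fake_topic, {"order_id": 42})
```

Because the clients are injected, the same check logic can be exercised offline and then wired to the real services.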

I’m not talking about the DB for now, because I’m confident that if everything is working, it’s because we are reading/writing the DBs correctly.

What would you do to automate this?

My dev team is very good, and we implement unit and integration tests… so I’m really doing some kind of integration test at another level… I think.


Hello @davgonca!

For our API development, we depended mostly on automated unit tests, about a dozen integration tests, and one or two smoke tests. In my opinion, when the unit tests cover the bulk of the behavior, there is very little reason to add more integration tests, since those may duplicate what is already learned from the unit tests. If there are more scenarios to explore, I recommend adding them to the automated unit tests where possible.

With all the unit and integration tests, we have more time for that magical trick, exploratory testing. Am I right?

At least I’m seeing it that way.

The other part is, even so, all that exploratory testing requires quite an effort, and so I was looking to build some kind of tailored framework to test my use cases.

Inject data through the API that will match with one or more Kafka topics, and so on…

Maybe I’m not talking about test automation but about automating processes. :thinking:


The approach you’ve described relates to similar approaches I’ve undertaken. I am reminded of the test pyramid: TestPyramid. Very useful for helping us get coverage in an efficient way.


Hello @davgonca!

More time is a great testing gift! With a gift of that kind comes responsibility.

It seems your choices for spending that time are creating and executing more test cases that explore behavior, or creating a framework to help facilitate and execute test cases. As a test engineer, I would recommend creating and executing test cases over building a framework.

While I believe that building a framework would be a creative, challenging, and fun coding adventure, it distracts from the more valuable testing activities in two ways. First, the time spent building the framework means you may not be providing valuable, valid, and prompt information about the products you are testing. Second, the time spent in maintenance of the framework may slow the pace of future projects and reduce the time you can spend finding alternative solutions to assist testing.

API products are, in my opinion, unique in their testability: they are small and transparent. Rather than build a framework, I recommend a utility like Postman. The kinds of exploration you describe give me the impression that you want to explore some diversity in the payloads presented to your API (very cool - I encourage diversity!). Postman provides a user interface that helps you and your project team start writing tests quickly. It also facilitates automation and the ability to drive your tests with data. Give it a look!


Hi @devtotest ,

Maybe I wasn’t clear.

In my product there are several flows with different outcomes.

Sometimes I only need to call an endpoint and check the log results (the endpoint I’m testing doesn’t return anything other than 2xx, and sometimes 4xx or 500 :joy:).

Other flows depend on endpoint calls and external dependencies, using Kafka topics to add more information to the business logic. This part I’m doing manually, and this is what I meant when I talked about automating my testing flows: having some way to pass arguments to create Kafka messages that match my endpoint calls, and then log a result (or even call an external source). For this it may be good to have something like a keyword-based framework where I can say: call endpoint with X, Y, Z; produce Kafka message with X, Y, W and Z; check the log result… and so on. Maybe I’m not expressing myself very well… We could chat one day :wink:
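One way to sketch the "Kafka messages that match my endpoint calls" part in Python: tie the HTTP payload and the Kafka message together with a shared correlation id, so the log check can match them up afterwards. All field names here are invented, not from any real schema:

```python
# Build an endpoint payload and a matching Kafka message from the same
# arguments; a shared correlation id lets later checks pair them up.
import json
import uuid

def build_call_and_message(x, y, z, w):
    correlation_id = str(uuid.uuid4())
    endpoint_payload = {"correlationId": correlation_id, "x": x, "y": y, "z": z}
    kafka_message = json.dumps({"correlationId": correlation_id,
                                "x": x, "y": y, "w": w, "z": z}).encode()
    return endpoint_payload, kafka_message

payload, message = build_call_and_message(1, 2, 3, 4)
# payload would go to the endpoint; message to a producer, e.g.
# KafkaProducer(...).send("some-topic", message) with kafka-python.
```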

Nonetheless, I use Postman to test all my endpoints (I’ve implemented automated tests in Postman to have at least some feedback that the minimal workflow is working) and in my exploratory tests.

Thanks a lot for all the inputs.


Hello @davgonca!

Thanks for the update! Based on my understanding and some assumptions, my responses may not have been helpful.

In your explanation above, I believe you are describing a higher level of testing. It seems you want to evaluate results when chaining requests together - that is, making initial calls into the system and making a subsequent call based on the response from the initial call.

Might I be getting closer to understanding what you want to accomplish?


Hello @devtotest,

Yes, more or less.

I was thinking in some kind of automation to simplify my tasks.

Nowadays, besides some simpler tests that I perform using Postman (where I have some automated tests described), I use other tools, like Kafka injection, SSH, or Hangfire, to check the outcome, and so on…

Needing to gain back the time spent switching between programs and so on, I was wondering about creating some kind of framework (maybe I’m not using the right term) to facilitate my daily testing tasks.

Something like keyword based:

PreTest : CreateUserHistory_2M_12M // this can consist of several posts.

UpdateUser ( X, Y , Z)
KafkaUser(X, Y, Z, W)

The most complex scenario, for now, consists of historical data for a client, then new data, and a Kafka topic with extra information… then business logic is triggered, and after some magic, a result is logged.

Validation I could perform manually (because I love to check DBs and whether topics are correctly produced, instead of relying only on a console log, Kibana, or Hangfire).
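A keyword-based runner along those lines can start very small. Below is a minimal Python sketch using the keyword names from the example above; the implementations are stand-ins that just record what ran (in reality they would issue the POSTs and produce the Kafka messages), and a scenario is just an ordered list of (keyword, args) pairs:

```python
# Minimal keyword-driven runner: keywords map to plain functions,
# a scenario is a list of (keyword, args) pairs executed in order.
executed = []  # records what ran, standing in for real side effects

def create_user_history_2m_12m():
    executed.append("PreTest:CreateUserHistory_2M_12M")  # would issue several POSTs

def update_user(x, y, z):
    executed.append(f"UpdateUser({x},{y},{z})")          # would call the endpoint

def kafka_user(x, y, z, w):
    executed.append(f"KafkaUser({x},{y},{z},{w})")       # would produce to a topic

KEYWORDS = {
    "CreateUserHistory_2M_12M": create_user_history_2m_12m,
    "UpdateUser": update_user,
    "KafkaUser": kafka_user,
}

scenario = [
    ("CreateUserHistory_2M_12M", ()),
    ("UpdateUser", (1, 2, 3)),
    ("KafkaUser", (1, 2, 3, 4)),
]

for keyword, args in scenario:
    KEYWORDS[keyword](*args)
```

The dictionary dispatch is the whole trick; scenarios could later be loaded from a file so non-coders can edit them.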


Hello @davgonca!

Based on your description, it seems you want to execute higher level integration, end-to-end, or smoke tests with a diverse set of data and possibly prepared data to evaluate behaviors within the system. I gather that you want to execute these tests daily.

If System A is the entry point (the APIs reside there), then submitting payloads to those entry points may be the starting place. To explore the behavior through the system and with chained requests, you’ll need to queue subsequent requests, maintain and parse responses, and assert on results. There are frameworks available to facilitate this - MS Test comes to mind. There are probably open source frameworks available as well. I’ve not used any.
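The "queue subsequent requests, maintain and parse responses, assert on results" shape can be sketched in a few lines of Python (paths and responses here are made up, and the transport is injected so the sketch runs without a live system):

```python
# Chained requests: a deque of request-builder functions, each of which
# can read earlier responses before building the next request.
from collections import deque

def run_chain(send, steps):
    """send(path, body) -> response dict; each step builds a request from history."""
    history = []
    queue = deque(steps)
    while queue:
        build = queue.popleft()
        path, body = build(history)
        history.append(send(path, body))
    return history

# Fake transport standing in for a real HTTP client.
def fake_send(path, body):
    if path == "/service-a/users":
        return {"userId": 7}
    return {"ok": True, "for": body["userId"]}

steps = [
    lambda h: ("/service-a/users", {"name": "test"}),
    lambda h: ("/service-a/orders", {"userId": h[0]["userId"]}),  # uses first response
]
responses = run_chain(fake_send, steps)
```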

I see the value of smoke tests, but these usually evaluate configuration, security, and connectivity rather than behavior. I’m not a fan of end-to-end tests, but those might provide a quick regression of the product.

I wonder about the tests that are already completed. It seems, with the unit tests and integration tests, the Kafka injection, and the others you have mentioned, that you and your team have been able to isolate a lot of behavior, and have already learned a lot about the product. What else might there be to learn?

Hello again,

Thanks for all the inputs! :pray:

Isn’t MS Test used more in the service project? For unit tests? Or am I wrong?

Yes. Not only to have some kind of smoke tests, but also to have an automatic way of executing flows instead of swapping between apps to exercise the flow. And to have more time to study the product, more time to be with the product owner creating/refining user stories, and so on…

Thanks a lot for helping me clear my mind,

Hello @davgonca!

MS Test is used mostly for unit tests. It could be used for end-to-end but it might be, now that I think of it, awkward to use that way. If you wanted to brute-force a solution, you might find an HTTP library and use it with a simple queuing model. For example (and very much off the top of my head), if I chose Visual Studio for implementation, I could establish a pattern of an HTTP call for APIs and drive the transaction from a List object.

Other than the brute force suggestion above, I believe there are existing frameworks. SoapUI is popular; I have no experience with it.

I enjoyed this exchange also, David!


Maybe I’m not talking about test automation but about automating processes.

Sounds like task automation, stuff Ops people (and even devs and QA) would do. Curious, have you thought about how you would automate the steps you currently do manually when testing the flows? I’d think that’s a first step towards building and putting all this together in an automated way, regardless of whether it ends up in a keyword-ish (test) framework or not in the end. It could be just a bunch of scripts calling one another in the beginning.

Start by replacing each step with a script that replicates the manual task (using the CLI, APIs/SDKs, HTTP requests, or GUI/UI automation as a last resort).
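To make that concrete, here is a tiny Python sketch of wrapping one manual step as a script so it can later be chained with the others (the `echo` command is only a placeholder for whatever CLI call you actually run by hand):

```python
# Wrap one manual step as a subprocess call and return its output,
# so steps can be composed into a larger automated flow later.
import subprocess

def run_step(cmd):
    """Run one manual step as a subprocess; raise if it fails."""
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()

# Placeholder command; substitute your real CLI invocation here.
output = run_step(["echo", "kafka message produced"])
```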

But that would only work for the manual testing work that you do repetitively. In terms of effort/investment, it doesn’t make sense if this is for exploratory testing - automating that may not be worth it unless it’s easy to automate.

Hi @davgonca,

First of all, you’ve started strongly by modelling your system and understanding your context. So many individuals don’t take the time to work out how the product works and what the team is currently doing / can do. For me the next step is always understanding the risk. If you want to automate, what features / areas of the product are you interested in observing for potential risks? Once you have captured a few of the important risks, then you can consider how you approach it.

Looking at what you are doing and the problem you are trying to solve, I believe you are already doing automation in places. It sounds like you are mixing tools into your current approach of testing which is awesome and you should definitely focus on that as well. Another technique you can do is reflect on your own testing:

  1. Think about which areas you repeat when you test. For example, grabbing logs from multiple sources.
  2. Think about which activities are time wasters. For example, setting up data.
  3. Think about what takes multiple steps to get through before you can execute your test ideas. For example, navigating to a specific part of the UI.

The activities you identify from these questions can be candidates for automation or tool use. This will guide you to a more informed approach of using tools to support your testing. One of the reasons I like this approach is that you can implement quick wins without having to set up large test frameworks. You’d be surprised how a simple bash script to pull all the log files down can speed you up, for example.
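As an illustration of that kind of quick win, here is a small standard-library Python sketch of the "pull all the log files down" idea, gathering every `*.log` file under a directory into one place (paths are examples; for remote hosts you would swap in scp/ssh):

```python
# Collect every *.log under source_dir into dest_dir, returning the
# sorted file names so the caller can see what was gathered.
import shutil
import tempfile
from pathlib import Path

def collect_logs(source_dir, dest_dir):
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    copied = []
    for log in Path(source_dir).rglob("*.log"):
        shutil.copy(log, dest / log.name)
        copied.append(log.name)
    return sorted(copied)

# Demo against a throwaway directory tree with two fake log files.
src = Path(tempfile.mkdtemp())
(src / "a").mkdir()
(src / "a" / "service-a.log").write_text("ok")
(src / "service-b.log").write_text("ok")
names = collect_logs(src, tempfile.mkdtemp())
```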

Hope that helps,
- Mark


Have you considered importing your Postman tests into Katalon and tweaking the automation flow from there? That would be my first try based on your context.