Hey, thanks for sharing this! I haven’t had to automate a multi-user workflow yet, but it’s something I’ve been thinking about for the future.
Your example only includes one user. Could you share an example of how the code might look with multiple users? For example, user1 logs in and triggers some task for user2. User2 logs in and completes the task. User1 continues after this and reviews the task. It doesn’t have to be that scenario, just an example.
At QaonCloud, we test multi-user workflows in Playwright by:
Using separate browser contexts to simulate different users.
Reusing auth states for faster logins.
Syncing user actions with Promise.all() or timing controls.
Combining API and UI to set up and validate cross-user flows.
Running tests in parallel for efficiency.
This lets us accurately test real-time, role-based, and collaborative features.
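To make the first three points concrete, here is a minimal sketch of the scenario from the question: two isolated browser contexts in one test, each reusing a saved auth state. The URLs, labels, and `.auth/*.json` state files are hypothetical placeholders, not a real app.

```typescript
import { test, expect } from '@playwright/test';

// Hypothetical auth-state files, saved beforehand with context.storageState().
const COORDINATOR_STATE = '.auth/coordinator.json';
const RESEARCHER_STATE = '.auth/researcher.json';

test('coordinator submits, researcher approves, coordinator verifies', async ({ browser }) => {
  // One isolated context per user, each reusing a saved auth state for fast login.
  const coordinatorContext = await browser.newContext({ storageState: COORDINATOR_STATE });
  const researcherContext = await browser.newContext({ storageState: RESEARCHER_STATE });

  const coordinatorPage = await coordinatorContext.newPage();
  const researcherPage = await researcherContext.newPage();

  // User 1: the coordinator submits a document.
  await coordinatorPage.goto('/documents/new');
  await coordinatorPage.getByLabel('Title').fill('Q3 report');
  await coordinatorPage.getByRole('button', { name: 'Submit' }).click();

  // User 2: the researcher approves it.
  await researcherPage.goto('/documents');
  await researcherPage.getByText('Q3 report').click();
  await researcherPage.getByRole('button', { name: 'Approve' }).click();

  // User 1 again: the coordinator reviews the approved document.
  await coordinatorPage.goto('/documents');
  await expect(coordinatorPage.getByText('Approved')).toBeVisible();

  await coordinatorContext.close();
  await researcherContext.close();
});
```

Because both contexts live in the same test, the ordering of the await calls is what synchronizes the users; independent actions could instead be grouped with `Promise.all()`.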
A workaround for multi-user workflows is to isolate the behaviours being exercised by each user into their own tests.
In your example, you have three testable things in one test.
Test 1 → A coordinator can submit a document
Test 2 → A researcher can approve a document
Test 3 → A coordinator can verify an approved document (?)
One challenge with multi-user flows is that each testable step now relies on every prior step passing before it can even run. If a researcher cannot approve a document, we will never learn whether a coordinator can even start to verify an approved document. On a failing run we therefore lose valuable information, especially when the test fails early on.
Isolating testable features like this also drastically improves run time, because we no longer set up prerequisites within the tests themselves. The prerequisites should already exist, created more efficiently through the API, through seeding, or whatever other method you can think of.
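As a sketch of what such a precondition could look like, the "document already submitted by a coordinator" state can be expressed as a small seed payload that an API or seeding script consumes, so the UI test never drives the coordinator's screens at all. The endpoint, field names, and status value here are made-up illustrations, not a real API.

```typescript
// Hypothetical seed request for a pre-submitted document; the endpoint,
// field names, and status value are assumptions for illustration only.
interface DocumentSeed {
  method: 'POST';
  path: string;
  body: { title: string; submittedBy: string; status: 'submitted' };
}

function buildDocumentSeed(coordinatorId: string, title: string): DocumentSeed {
  return {
    method: 'POST',
    path: '/api/documents',
    body: { title, submittedBy: coordinatorId, status: 'submitted' },
  };
}

// A "researcher can approve" test then runs against this pre-existing
// document instead of performing the submission through the UI first.
const seed = buildDocumentSeed('coordinator-1', 'Q3 report');
console.log(seed.body.status); // "submitted"
```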
That said, when multi-user testing is the goal, your approach seems clean and manageable.
We do have user roles in our software but we don’t use the UI to provide data from one user role to another.
We mostly use API calls for this purpose.
So in a test where “A researcher can approve a document”, the document would be submitted via an API call made as the coordinator user.
Not only does this make the test faster, but it also serves as a small API test along the way.
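A rough sketch of that pattern using Playwright's API request context, where the base URL, endpoint, payload, and token handling are all assumptions about a hypothetical backend:

```typescript
import { test, expect, request } from '@playwright/test';

test('a researcher can approve a document', async ({ page }) => {
  // Precondition via API: submit the document as the coordinator.
  // Base URL, endpoint, and token are hypothetical.
  const coordinatorApi = await request.newContext({
    baseURL: 'https://app.example.com',
    extraHTTPHeaders: { Authorization: `Bearer ${process.env.COORDINATOR_TOKEN}` },
  });
  const response = await coordinatorApi.post('/api/documents', {
    data: { title: 'Q3 report' },
  });
  expect(response.ok()).toBeTruthy(); // doubles as a small API check

  // The UI portion now covers only the researcher's approval.
  await page.goto('/documents');
  await page.getByText('Q3 report').click();
  await page.getByRole('button', { name: 'Approve' }).click();
  await expect(page.getByText('Approved')).toBeVisible();

  await coordinatorApi.dispose();
});
```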
I agree, using API calls is a great way to set up preconditions and mock what’s not the focus of the test. In this case, I was specifically trying to showcase a multi-user test scenario where simulating real interaction between user roles is part of what’s being tested. This test is more of a theoretical example to demonstrate the usage of the helper method I shared.
Yes, the first context can be reused if it’s not closed. But if the goal is to simulate a more realistic user flow, where a user logs out and logs back in, then closing the context and reinitializing it is the better approach. It helps validate session handling and mirrors how the interaction would happen in the real world.