How do you approach testing multi-user workflows in Playwright?

I recently tackled a project where multiple roles interact in sequence.

I put together a quick write-up on what worked for me, including a helper I now use across multi-user tests.

Thought it might help others dealing with similar scenarios.

Would love to hear how you all tackle this — do you isolate with contexts too?

3 Likes

Hey, thanks for sharing this! I haven’t had to automate a multi-user workflow yet, but it’s something I’ve been thinking about for the future.

Your example only includes one user. Could you share an example of how the code might look with multiple users? For example, user1 logs in and triggers some task for user2. User2 logs in and completes the task. User1 continues after this and reviews the task. It doesn’t have to be that scenario, just an example.

1 Like

Hey! Thanks for the question! I will edit the article to better reflect that.

Let’s say we have a test where:

  • User 1 (e.g., a Study Coordinator) logs in and sends a document to the Researcher.
  • User 2 (e.g., a Researcher) logs in to approve it.
  • User 1 then logs back in to verify the outcome.

The code will be as follows:

test("Multi-user flow: Coordinator submits document, Researcher approves, Coordinator verifies", async ({ browser }) => {
  // Coordinator logs in and submits the document
  const { context: coordinatorCtx1, page: coordinatorPage1 } = await loginAsUser(browser, coordinator.login, coordinator.password);
  const startupPage = new StartupPage(coordinatorPage1);
  await startupPage.navigateToStartupPage();
  await startupPage.submitDocument();
  await coordinatorCtx1.close(); // Coordinator logs out

  // Researcher logs in and approves the document
  const { context: researcherCtx, page: researcherPage } = await loginAsUser(browser, researcher.login, researcher.password);
  const dashboardPage = new DashboardPage(researcherPage);
  await dashboardPage.approveDocument();
  await researcherCtx.close(); // Researcher logs out

  // Coordinator logs back in to verify approval
  const { context: coordinatorCtx2, page: coordinatorPage2 } = await loginAsUser(browser, coordinator.login, coordinator.password);
  const reviewPage = new ReviewPage(coordinatorPage2);
  await reviewPage.verifyDocumentApproved();
  await coordinatorCtx2.close(); // Final cleanup
});
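For context, the loginAsUser helper used above isn't defined in this thread. Here is a minimal sketch of what such a helper might look like; the structural types stand in for Playwright's real Browser/BrowserContext/Page (in a real suite you would import those from @playwright/test), and the login route and selectors are assumptions, not the author's actual code:

```typescript
// Minimal structural types standing in for Playwright's Browser,
// BrowserContext and Page, so this sketch is self-contained.
interface Page {
  goto(url: string): Promise<unknown>;
  fill(selector: string, value: string): Promise<void>;
  click(selector: string): Promise<void>;
}
interface BrowserContext {
  newPage(): Promise<Page>;
  close(): Promise<void>;
}
interface Browser {
  newContext(): Promise<BrowserContext>;
}

async function loginAsUser(browser: Browser, login: string, password: string) {
  // Each call creates a fresh context: isolated cookies and storage,
  // so the users in a multi-user test never share session state.
  const context = await browser.newContext();
  const page = await context.newPage();
  await page.goto('/login');                   // assumed login route
  await page.fill('#username', login);         // assumed selectors
  await page.fill('#password', password);
  await page.click('button[type="submit"]');
  return { context, page };
}
```

The key point is that the helper returns both the context and the page, so the test can close the context explicitly when that user "logs out".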

Hope this makes sense.

1 Like

At QaonCloud, we test multi-user workflows in Playwright by:

  • Using separate browser contexts to simulate different users.
  • Reusing auth states for faster logins.
  • Syncing user actions with Promise.all() or timing controls.
  • Combining API and UI calls to set up and validate cross-user flows.
  • Running tests in parallel for efficiency.

This lets us accurately test real-time, role-based, and collaborative features.
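The Promise.all() point can be sketched in plain TypeScript. The two async functions below stand in for work done on two separate Playwright contexts (one page per user); Promise.all() lets both user flows run concurrently and waits until both finish before the test asserts on the shared outcome. The function names are illustrative, not a real API:

```typescript
// Stand-in for actions driven on the coordinator's page/context.
async function coordinatorSubmits(): Promise<string> {
  // ...would fill and submit the document form here...
  return 'submitted';
}

// Stand-in for actions driven on the researcher's page/context.
async function researcherWatches(): Promise<string> {
  // ...would poll the researcher's dashboard for the new document here...
  return 'seen';
}

async function runBothUsers(): Promise<string[]> {
  // Both user flows run concurrently; neither blocks the other.
  // Promise.all resolves once both complete, preserving input order.
  return Promise.all([coordinatorSubmits(), researcherWatches()]);
}
```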

1 Like

Have you also seen @swathika.visagn’s article on Simple Playwright authentication recipes: A cookbook for software testers?

2 Likes

Ah, okay. I thought perhaps the first context could be “re-used”. This clears things up. Thank you!

A workaround for multi-user workflows is to isolate the behaviours exercised by each user into their own tests.

In your example, you have three testable things in one test.

Test 1 → A coordinator can submit a document

Test 2 → A researcher can approve a document

Test 3 → A coordinator can verify an approved document(?)

One challenge with multi-user flows is that each testable step now relies on all prior steps passing before it can even run. If a researcher cannot approve a document, we will never find out whether a coordinator can even start to verify an approved one. This means a test run loses valuable information, especially when a test fails early on.

It also drastically improves run time if we isolate testable features like this, because we no longer set up prerequisites within the tests themselves. The prerequisites should already exist, created in a more efficient way, whether through the API, through seeding, or whatever other method you can think of.

That said, when multi-user testing is the goal, your approach seems clean and manageable.

2 Likes

We do have user roles in our software, but we don’t use the UI to pass data from one user role to another.
We mostly use API calls for this purpose.
So in a test where “a researcher can approve a document”, the document would be submitted via an API call made as the coordinator user.
Not only does this make the test faster, it also gives you a small API test along the way.
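The idea of seeding the precondition through the API as the other role might look roughly like this. Everything here is an assumption for illustration: the ApiClient shape, the endpoint, and the payload are hypothetical, not this product's real API:

```typescript
// Hypothetical API client shape; in Playwright this role could be played
// by an APIRequestContext authenticated as the coordinator.
interface ApiClient {
  post(path: string, body: unknown): Promise<{ id: string }>;
}

// Seed a submitted document directly through the API, so the
// "researcher can approve a document" UI test does not depend on the
// coordinator's UI flow having run (and passed) first.
async function seedSubmittedDocument(api: ApiClient, title: string): Promise<string> {
  const created = await api.post('/api/documents', {
    title,
    status: 'submitted', // document arrives already in the state the test needs
  });
  return created.id; // the UI test can then open this document as the researcher
}
```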

I agree, using API calls is a great way to set up preconditions and mock what’s not the focus of the test. In this case, I was specifically trying to showcase a multi-user test scenario where simulating real interaction between user roles is part of what’s being tested. This test is more of a theoretical example to demonstrate the usage of the helper method I shared.

Yes, the first context can be reused if it’s not closed. But if the goal is to simulate a more realistic user flow, where a user logs out and logs back in, then closing the context and reinitializing it is the better approach. It helps validate session handling and mirrors how the interaction would happen in the real world.

1 Like