Test plan / strategy for an agile delivery by a vendor

Hi all,

I hope someone can shed some light on this situation, as I'm not sure I'm heading in the right direction.

I was asked to draft a test strategy for a project where there will be UI changes done by a vendor. Obviously, with this kind of scenario, the UAT will be done by us (the clients) based on requirements provided by our own UI/UX team.

The UI changes touch multiple screens (for simplicity, I have provided sample categories below):

  1. User management (3 screens)
  2. Login (2 screens)
  3. Payment (5 screens)

The development completion date is based on the delivery of the 3 categories above, so we are expecting 3 phases of UAT. I also haven't specified the test scenarios/cases, as these are just taken from our current suite (since this is just a re-skin of the UI). However, this feels very un-agile to me.

So far, I came up with the below (but these still feel like information rather than real action points):
Test types: (I need to put in dates)
Unit tests - to be completed by vendor
UI tests - to be completed by our UI/UX team
Functional tests (Cross-browser tests, accessibility tests, usability tests) - to be completed by QA
Performance tests - to be completed by QA

Risk areas: Payments, as there are always issues in this category, so we'll apply lighter test coverage to User management and Login.

Automation: (I am still in two minds on this, as I believe we have the capacity to automate but no confidence the scripts will be refactored on time)
To be done post-production, since the selectors will change and break the automation scripts, and the deadline is too tight to automate during progression testing
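On the selector churn specifically: one way to reduce the refactoring cost later is to agree stable hooks with the vendor now (for example a `data-testid` attribute on key elements) and build all locators from those rather than from layout-dependent CSS or XPath. A minimal sketch, assuming such an attribute gets added (the attribute name and element names here are illustrative, not from the actual project):

```python
def by_test_id(test_id: str) -> str:
    """Build a CSS selector from a stable hook attribute.

    Assumes the vendor adds data-testid="..." to key elements;
    the attribute name is an illustrative convention, not a
    requirement from the original project.
    """
    return f'[data-testid="{test_id}"]'


# Locators built this way survive a re-skin, because they don't
# depend on tag structure, class names, or visual layout.
LOGIN_BUTTON = by_test_id("login-submit")
PAYMENT_AMOUNT = by_test_id("payment-amount")

print(LOGIN_BUTTON)  # [data-testid="login-submit"]
```

If hooks like this are agreed before the re-skin, automating during progression testing becomes less risky, because the scripts would not need wholesale selector rewrites afterwards.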

Test approach: (I haven't factored in the number of days for re-testing bug fixes, and an agreed turnaround time for defects sounds very waterfall if we have one)
Bug bash - QA-wide (5 testers for 1 day per release) - 3 days in total, since there will be 3 releases
Mob testing (happy-path scenarios) - involves the vendor, the UI/UX team, and QA (1 hour, with the QA lead driving)

Resources: as per the Test approach above (maybe I can merge this with Test approach)
Test data and environments


So a few resources came to mind as I was reading your post. I think these articles might help at least trigger some more ideas around what to be looking into and capturing:

Let me know if they are any help


Some of my suggestions:

  • Ad test types: Integration tests:
    These should be performed by the vendor.
    Do the UI changes have any impact on the rest of the system?
    (Have them show test results for some primary, high-product-risk flows.)
  • Ad test types: exploratory testing:
    Use the following template:
    Explore (target) using (resources) to discover (information)
    This has more focus than a bug bash.
  • Ad Automation: visual testing
    This is tricky with changing selectors.
    In the future, make it a requirement that elements have unique IDs. (Testability)
  • Legal:
    Which information is collected by the application?
    Is this really needed according to GDPR or other privacy law?
    How is the data secured in the database?
    Who can access the data in database?
    Are the user names coupled to persons? (admin is not the right name in this context.)
    Is access logged in a proper way?
    Is it possible to delete personal data which is not needed according to GDPR or other privacy law?
    How easy is it to extract personal data from the database on request of the user?
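To make the exploratory-testing template above concrete, here is a tiny helper that fills in the "Explore (target) using (resources) to discover (information)" charter format. The sample charters are hypothetical examples for this re-skin, not real test scenarios from the project:

```python
def charter(target: str, resources: str, information: str) -> str:
    """Format an exploratory-testing charter using the
    'Explore ... using ... to discover ...' template."""
    return f"Explore {target} using {resources} to discover {information}"


# Hypothetical charters for the three re-skinned areas:
print(charter("the payment screens",
              "boundary amounts and declined cards",
              "validation broken by the re-skin"))
print(charter("login",
              "expired sessions and wrong passwords",
              "error states the new UI does not handle"))
```

A charter like this gives each session a clear focus and a concrete output, which is what makes it sharper than an open-ended bug bash.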

Ah yes, thanks for this. I am actually using the one-page test plan as my reference. However, now that I have re-read it, I think what I'm missing is the "Timescales" section, where the effort is actually estimated.
While the mind maps help, I think that is the next level down, where we need to deep-dive into the test scenarios.
I will update my test plan based on the missing elements of the "One-Page Test Plan" and, in a true lean and agile approach, ask for early feedback and then iterate. Thanks!


Thanks for this. While this is all great stuff, I can definitely add these things at the next-level "deep dive" stage of test scenario / test script preparation, where we identify the happy-path and edge cases we need to include.
I can definitely check the SLA with regards to integration tests; there is usually push-back between vendor and client on these, and that part is above my pay grade.
Also, on the legal side: while this is very important, I'm 99% sure my managers would consider it overkill here. It's something our company would need to do in any case after delivery of the UI re-skin, which is the main thing the vendor needs to deliver in the first place.


This reminds me: it's always good to spell out the scope of what is NOT to be covered, and who the new 'risk owner' is. E.g. what the vendor isn't reasonably responsible for because you will be covering it, or where the risk is mitigated in other ways.


If the UI/UX is very important, you might consider static testing.

  • Let the UI/UX team review screenshots of the screens before integration.
  • Also take the flow between the screens into account.

Thanks, I agree. This is where visual regression tests are very useful.

I came late into the picture so I will need to speak to my managers on this. But this is something that I feel is very important too based on my previous experiences.
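To illustrate the core idea behind visual regression testing, here is a dependency-free sketch: compare a baseline "screenshot" against a new render and fail if too many pixels differ. Images are modelled as 2D lists of RGB tuples purely for illustration; real visual-testing tools diff actual screenshots and also handle anti-aliasing, ignore regions, and so on:

```python
def diff_ratio(baseline, candidate):
    """Return the fraction of pixels that differ between two
    equally sized images (2D lists of RGB tuples)."""
    total = 0
    changed = 0
    for row_a, row_b in zip(baseline, candidate):
        for px_a, px_b in zip(row_a, row_b):
            total += 1
            if px_a != px_b:
                changed += 1
    return changed / total if total else 0.0


def looks_the_same(baseline, candidate, threshold=0.01):
    """Pass if fewer than `threshold` of the pixels changed."""
    return diff_ratio(baseline, candidate) < threshold


# Toy 10x10 "screenshots": one identical, one with its top row changed.
white = (255, 255, 255)
red = (255, 0, 0)
base = [[white] * 10 for _ in range(10)]
same = [[white] * 10 for _ in range(10)]
skewed = [[red] * 10] + [[white] * 10 for _ in range(9)]

print(looks_the_same(base, same))    # True
print(looks_the_same(base, skewed))  # False (10% of pixels changed)
```

For a deliberate re-skin, the baseline would be re-captured from the approved new designs, so these checks guard against later unintended regressions rather than the re-skin itself.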


It’s never too late to do a lessons-learned and adjust for the next project. Sometimes we all have to adapt and get the best out of the current situation when it’s already in progress, and that can mean compromise and focusing only on the issues you feel are urgent.

Here’s a little something I learned over the years… When working with a vendor, if you do not have unit tests explicitly in the contract, they will not happen. Contrariwise, if they are in the contract, they are more expensive than they are worth. Additionally, if they are required, you need to review them. Frequently, I notice that required unit testing is there for show, and not to actually, you know, test things (or in the case of BDD/TDD, help with the design).

Another interesting thing that I frequently experience is that if the “QA” does not work in the same team as the vendor (programmers), then “Agile” (capital A) is extremely difficult. In the past, my teams have not figured out how to do it more effectively than throwing a bag of features over the fence every 2-4 weeks (depending on “sprint” length). Defining what should be tested and when might be more important than pointing out test types, especially in the context of making a plan.

@ebanster A bit late to the party but I want to share my go-to model for coming up with a test strategy (the set of ideas to guide my testing). It’s called the Heuristic Test Strategy Model. I recommend you watch this video first, where James Bach talks about what a heuristic is and how they are key in software testing.
