We are trying to introduce the concept of Quality Assistance (the model used by Atlassian, Canva, etc.), where devs do their own testing for new features while QEs guide them and work on bigger initiatives: automation, non-functional tests, observability, and beefing up the regression suite.
The devs are quite receptive to this concept, but the QEs (my team) have some apprehension (they feel like they have less work and that they are on the sidelines instead of in the action).
We're trying to create something like "What is a quality coach? (Definition & nature of role)", but it looks very holistic, with no specific tasks/activities that would have QEs providing updates on stories/tasks on a daily basis.
There is no single answer to this; there are many flavours and aspects of what will work for you.
Here's some of my experience that may give some insight into what did and did not work for someone else.
Quality Coaches. Coaching itself is a deep knowledge area that should not be underestimated. You may find only one or two of your testers are suitable for and capable of coaching; I would recommend against trying to turn all testers into coaches.
I like teams where developers own all the scripted testing: they test all their changes and usually own the automation. In 25 years this has been the most optimal model in all but a very few cases, and those cases usually carried a massive regression risk they could not get out of. In this role, my focus as a tester is much more on discovery and potential product risks/values. In defining stories I contribute by asking a lot of business-risk questions; this is still testing in my view, and testing with a discovery focus can be done continually even if you do not yet have a product available to test. On-product testing is usually session-based and discovery-focused, investigating specific potential risks and things others may naturally miss even if they have tested.
In this model I'll also test ideas for A/B hypotheses and examine analytics, data, and customer feedback for things that can improve and guide the product forward. I can also contribute to and support the developer testing and automation, and even run test-knowledge sessions as required, so there is an opportunity to apply those coaching skills if you have them.
So in the above model I was able to do all of the Quality Assistance aspects but also remain a tester, even though my test mission changed from the old-school, script-focused testing. This works particularly well if there are a lot of potential risks or rapid feature rollout.
In other projects I've seen a different route taken: a lot of automation activities went to testers and developers tested their own stories. I am not so sure I'd call this quality assistance, but it is a model out there. I've seen it work where regression risk is the primary risk and there are not many other risks deemed worth exploring.
Another model I've used is sort of a combined tester and sysops role, basically covering many things that accelerate the team as a whole: CI, environments, system monitoring, releases, etc. It's fairly broad but does suit some people.
Either way, you need to align on what your role's mission is; you may find you are trying to solve the wrong problem, or that there are a lot of personal biases at play. A massive common bias: some developers do not like testing, so they will naturally push it to someone else even if that is the most inefficient thing to do.
Pretty much all of what @andrewkelly2555 covered, but mainly those last two points. I'd be wasting time trying to rephrase those last two points Andrew made as my own words.
Your big risk when developers do their own testing is that it leaves nobody responsible for integration testing, "upgrade" testing, performance, or UX. It's a risk. So you will probably find you have plenty to do, but not everyone fits this role well.
To answer specifically "what does a QE do during stand-ups" when devs do their own testing: I would suggest having QEs turn the activities you mention (automation, observability, etc.) into sub-tasks of the tasks they relate to, or into separate tasks which also go on the team's board. That way they can be talked about the same way the QEs are used to.
Also, if they're indeed guiding the developers and not sidelining themselves, then they could talk about the code reviews they need to do on the tests, about the pair-programming sessions with the developers, or about the testability issues they're resolving.
Personally, I like it when QEs also get involved in early pair testing with the developers and in collaboratively defining unit and/or integration test scenarios before a line of code is written, but you may feel this violates the principle of developers doing their own testing.
I checked with the QEs, gave them some options, and this looks like a keen focus for them to learn and have a go at. Thanks for your responses. It's good to learn from each other in this new way of working.
We are on the Quality Engineering path as well, and I definitely understand your struggles.
We did, however, include up-skilling into automation, since we had mainly manual QA engineers. But automation is merely a tool, a small part of the Quality Engineer role as I imagine it.
One of the best resources I found on quality coaching is from Anne-Marie Charrett; she is a pioneer in the industry.
Here is her blog/live book (it's subscription-based, but very affordable); see, for example, this post: What is a quality coach? (Definition & nature of role).
Also, I don't know where you are based, but I am in Europe, where the concept of Quality Engineering, let alone Quality Coaching, is very new. So I found it very useful to check LinkedIn job ads for Quality Engineer roles in Australia to see what is usually expected of this role, since the concept is not that new in the Australian job market.
The work of QA and QE may vary from client to client, between projects, and across project phases.
In traditional waterfall methodology, QA (Quality Analyst) is the one who ensures/maintains the quality of a product by executing on CodeScience's quality procedures. The role is defined as reactive: finding bugs, measuring, documenting, reporting the findings, and presenting the impact to the development team.
However, the Agile framework transformed the focus of this role, in concert with the new QE role, into defect prevention, making the role more proactive.
QA focuses on delivering a quality solution by planning and executing against quality standards, while the QE focuses on automating manual, repeatable tasks to make the process more efficient and less error-prone.
The QA role is needed in every phase of the product development lifecycle. Its responsibilities include:
- Creating test plans, participating in sprint/release planning, and joining ceremonies as the testing subject-matter expert.
- Backlog management: creating and grooming stories by identifying missing acceptance criteria and edge cases.
- Communicating testing status during daily stand-ups and, once a story is developed, executing functional tests.
- Testing both functionality and behaviour, continuously reporting bugs, and working closely with developers during the retest and issue-finding phases.
- Identifying regression test cases for the application, browser-related and mobile test cases, and other test cases covering UX, performance, and security.
- Working with the client team: providing test steps for acceptance testing, analysing and coordinating issues found through testing, and classifying them as bugs or enhancements.
On the other hand, the QE works closely with the QA or the Product Owner (PO) to:
- Identify the test cases that are executed repeatedly.
- Identify end-to-end test cases.
- Automate the identified test cases with the help of automation testing frameworks/tools.
- Address most of the challenges of manual testing.
- Automate the tests to be executed in the continuous integration (CI) process when builds are deployed to various environments.
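As a minimal sketch of that last point, here is what "automating an identified, repeatedly executed test case" can look like in practice. Everything here is hypothetical: `apply_discount` stands in for whatever application logic the manual test case covered, and the test functions follow the pytest naming convention so a CI job can discover and run them on every build.

```python
# Sketch: turning a repeatable manual check into an automated regression test.
# `apply_discount` is a stand-in for the real system under test; the codes and
# rates are invented for illustration.

def apply_discount(total: float, code: str) -> float:
    """Apply a discount code to an order total (illustrative logic only)."""
    rates = {"WELCOME10": 0.10, "VIP20": 0.20}
    rate = rates.get(code, 0.0)
    return round(total * (1 - rate), 2)

# The kind of checks a QE might capture from a manual regression script,
# written as plain assertions so a runner like pytest can execute them in CI.
def test_known_code_reduces_total():
    assert apply_discount(100.0, "WELCOME10") == 90.0

def test_unknown_code_leaves_total_unchanged():
    assert apply_discount(100.0, "BOGUS") == 100.0

if __name__ == "__main__":
    test_known_code_reduces_total()
    test_unknown_code_leaves_total_unchanged()
    print("all regression checks passed")
```

Once tests live in this form, wiring them into the CI pipeline is just a matter of invoking the test runner as a build step, which is what makes them viable as part of the "definition of done".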
A QE is required in every project where there is continuous implementation of functionality, where previous implementations are regularly regression tested, and where automated test scripts are included in the "definition of done".
Conclusion:
For both QA and QE, the end goal of a quality product is the same; only the focus is different.
QA is essential for developing software good enough to satisfy clients and their users, while QE is responsible for driving the development of a quality product and quality processes.
@ebanster , I would love to know how this worked out! Do you have a story to tell about your journey towards quality assistance?
Apologies for the late response.
Here's the update (not sure if it's exciting enough):
So after chatting multiple times with management and one very keen engineering manager, we were able to brew our own "Quality Assistance model". Socialising it and running sessions (Q&As) around it is not the difficult part; the constant reminders and buy-in from developers and product teams are. A few months in, it is still being slowly adopted, but we've seen some DORA improvements. It's easier to preach this model to new starters, though.
As for the QEs' role, we have used this as a reference.
In summary:
The QE can focus on bigger-picture quality engineering tasks, not only testing. This might consist of the following:
At a task level: the QE works closely with developers to analyse and identify test scenarios before coding commences (the test analysis stage), review testing when it's done, discuss test ideas, and identify gaps following a walkthrough.
At a feature level: for larger features (composed of many stories) that cut across multiple products (or teams), a more holistic view of the solution is needed to test effectively. Developers still own testing of the individual stories, but the planning and coordination of end-to-end testing for such large features is owned by the QEs.
At a team level: there are various activities that improve the ease of testing within teams. With less time being spent on hands-on testing of stories, the QEs now have time to work on these kinds of tasks. This could include creating automated test frameworks that developers later add tests to, improving the run time of automated tests (using parallelisation or other techniques), reducing the flakiness of tests, improving test-data creation tools, exploring ways of improving processes and standards, etc.
This might consist of building up an automated regression suite, enabling more efficient automation frameworks, focusing on non-functional tests, and assisting with Observability activities such as logging, monitoring, alerting, etc.
There are times QEs feel isolated in their work (being fully remote makes it more challenging): devs do some development while the QE works on a separate regression suite.
I am speaking regularly with POs and engineering managers about how we can make the QEs feel more involved. We are also trying to find ways to embed "in-sprint automation".
Great progress! Thank you for sharing.