Ask Me Anything: Whole Team Testing

Tonight we had one of my idols in testing, @lisa.crispin, for an Ask Me Anything about Whole Team Testing.

It was clear from the number of questions we had before starting the session that we wouldn’t get to them all! If we didn’t get to your question or you’re catching up on the Dojo and have thought of questions you’d like to ask Lisa, please share them here :slight_smile:


Some questions mentioned in the chat:

The developers I work with already do some testing, particularly when the testers' workload is too high. We always say that a developer should not test anything that they themselves have worked on. Do you agree with this? Is there any other situation where the developer should NOT be testing?


Do you think there is a risk that quality could decline if non-testers do the testing? Is there anything we can do to mitigate these risks when we adopt a whole team approach to testing?

Do you think there is a minimum percentage of testing that should be done by testers?

And now for the unanswered questions. Seriously, people, I think this sets the record for the most questions asked!

  1. In general, I feel testing is greatly misunderstood by a lot of non-testers. Do you feel this misunderstanding needs to be cleared up before a team can do whole team testing?
  2. When you mention whole team testing do you mean exploratory testing by the team vs members of the team contributing to running checks/automated checks?
  3. Why do testers seem to have a number of issues around encouraging a scrum mentality of the team owning testing?
  4. How do you deal with developers who have overly strong opinions about how to test, how much to test, and where?
  5. How did you first meet Janet Gregory?
  6. Do testers belong to the development team, to the “testing team”, or both? Is there anything that brings testers under a common vision so they work together, even while they also work in the dev teams?
  7. My devs are amazing when we ask them to help out with automated testing but it’s much harder to get them to help out with manual testing. They say they’re ‘not good at it’. What’s a good response beyond ‘no one is, at first, and i really need your help’?
  8. Does Extreme Programming still have a place in software development or has it been taken over by newer methods?
  9. How can we carefully explain to developers that their unit tests are not enough?
  10. How can QA collaborate with the developers or the team while preparing the test strategy? What are the various challenges that might occur during test execution? Test reports vary according to the nature of the project and team, but what are the fundamental things that should be addressed in test reports, apart from test cases and bug reports?
  11. How important is the language that the team uses? Terms like “resources” instead of “people”, talking about developers and testers rather than the (delivery) team, testing versus checking, being “done with testing”, etc.?
  12. Developers think testing is boring. What can I do?
  13. How do you deal with reporting in an agile team? Is it monitoring and comparing trends? Or is there more? What if the team does not ask after reporting?
  14. I am in a team with a “whole team” approach to testing. How do I know? How do I know if we start losing this?
  15. On the AB Testing podcast, Alan and Brent advocate developers testing their own code. What is your opinion on that, and where does it leave the “test team”?
  16. What was it like working on Agile Testing: A Practical Guide for Testers and Agile Teams?
  17. How did you first get into raising donkeys?
  18. Developers, I’ve found, hate the very concept of UI testing. How can we pitch it to them to make it more palatable?
  19. Is there a newish shift away from devs not testing or not working with testers? Every lead developer I have worked with encourages testing within the dev team and supports the testers.
  20. What is the best way to ‘sell’ whole team testing to the “business” to make it more likely to get the go-ahead?
  21. What is your opinion about the future of QAs? Maybe they will start to work on the quality of AI, or transform into developers with other kinds of skills?
  22. We have testing activities (monthly patch testing) that run in parallel to our sprint testing. This is largely unseen work and takes the testers away from the team for 3-5 days during a sprint. How can we bring that testing into the team?
  23. How can we convince developers to involve testers in unit test review? Devs mention that there’s a language barrier, and that pairing up on unit tests wastes too much time.
  24. Before I started, the whole team was already accustomed to regression testing before every release. Because of that, they’ve come to feel entitled to influence what goes into the release and to change features during regression testing. What is the balance between having the team involved in testing without letting them dictate last-minute changes?
  25. UAT - I’m currently in a debate with my PO, as she believes that UAT should be done by the testers, but we already do regression testing and I believe we should set up a structure outside of the team for ‘fresh eyes’. Are either of us right, or is a combination of the two the way forward? It’s a mobile app, so access is easy.
  26. How do you gauge the maturity of your testing team/practice?
  27. You’ve moved the world of testing forward considerably. Thank you, Lisa! If you consider creativity as a driver, where do you seek inspiration from outside of the world of product development? How do you map that to the world of testing/product development?
  28. Let’s assume that the whole team is already testing, and that it works well. One challenge is to keep it that way. Once you achieve it, what is the next thing? How deep do you dive into testing as a team? If there is no limit, it makes sense that eventually there would be no exact boundaries between testers and developers in such a team.
  29. How does offshoring work with this? A lot of the UK banks still see offshore testing as the answer!

Laura asked a question about how to get the whole team involved in performance testing.

A technique that worked well for my team recently was this:

  1. Work with the product owner to establish the goals of the performance testing
  2. Use this to define particular test scenarios (ideally prioritised a bit)
  3. If you’re completely new to a tool, work out how to script, debug and run one of the easy-to-medium scenarios yourself first, and start to understand what information you’ll want to capture in the actual test run
  4. Pair with a developer on scripting and debugging a scenario, to share your knowledge
  5. Divide and conquer scripting the rest of the scenarios, with plenty of opportunities to review progress and course-correct. We do final pull request reviews too, which helped. Identifying potential ‘tricky bits’ and unknowns in the scripting and making a conscious decision about who was going to tackle those also worked well.
  6. Divide and conquer running the final test scenarios. I created a template for each test run, for capturing the important information (e.g. start and end time, checklists for test data setup, placeholder tables for capturing key metrics, etc.), so the devs could rattle through running the scenarios. Then I was able to focus on working through the results and digging into the reasons behind the behaviour, to draw conclusions and recommend further work to consider.

I hope that helps! I’d be interested to hear more about your situation and would be happy to talk in more detail.


I do think that is good practice for the developers to not test their own code. It’s hard to distance yourself from your pretty baby. That said, I’ve seen developers who are pairing and using a small exploratory testing checklist for guidance do a good job of basic testing on their own story. Just as they use unit tests and TDD to ensure code correctness, they can use some manual exploratory testing for this too. Someone else should then test it at the feature level.


That sounds like a really solid process! Have you blogged or written an article about this? I’d love to share it.


Oh wow, that’s a lot. I’ll try to work my way through them though! Everyone please feel free to chip in!


Thank you! It was my first time doing performance testing so it was only when I saw the question this evening that I realised I did have something specific that I could share from the experience - so there’s no blog post or article yet. But the positive feedback is definitely great encouragement for me to make it happen :slight_smile:

Unfortunately, most non-testers have a sketchy understanding of testing, at best. I think we should try to help them learn about it. Execs and managers especially need to understand the value of testing, and how an investment in quality pays off in the long run. Everyone goes around saying they want the best quality (it’s like mom and apple pie), but if they aren’t willing to let teams have time to integrate testing activities into coding activities (both are equally important parts of software development), that lip service doesn’t help.

Work to make the benefits of the testing your own team is doing more visible. Also make visible the problems caused by a lack of testing. This can be as simple as highlighting critical bugs in production. Testing isn’t the way to fix those; building quality in is. Testers can help the team learn ways to shorten feedback loops and develop high-quality code from the start.

One way to help execs understand might be to show the opportunity cost of things like time spent triaging and fixing bugs in production (and communicating with the irate customers) at the expense of time to build new features. Also, we don’t get many chances to show customers the value our product offers; if we blow a chance because we deliver the wrong things, or deliver buggy things, it’s bad news for the business.


Whole team testing includes all testing activities, and it especially includes baking quality into the product from the get-go. I didn’t get into this during the AMA, but testers can play a vital role in helping the delivery team and business stakeholders achieve a shared understanding of the purpose of each new feature, how it should behave, and how we will know it is successful in production. We can add value with lots of different testing activities for various quality attributes: accessibility, security, reliability, usability, those ilities go on and on. We invest in regression test automation to free up time for value-add activities like exploratory testing.


If I’m understanding this correctly - I’ve seen a lot of “agile transitions” where developers got training in technical practices like TDD, product people got ScrumMaster or product owner training, and testers got… ignored. It’s natural for them to have a lot of fear about suddenly being stuck on a cross-functional Scrum team that’s supposed to take responsibility for quality and testing. This is where we need managers to step in and support testers in many ways: training, time to learn, and making sure they are equally valued members of the delivery team.


Very very true.

I was recently in a software testing job interview where a senior manager pretty much saw manual testing as a commodity. Automated testing was more respected, but I’m not sure why they should automatically make that distinction. In my experience, badly done automation is arguably worse than commoditised manual testing: you come to rely on the automation, and if it’s like a wonky crutch, some day it may give out and you’ll come crashing down.

On some level, the organisation saw pure manual testing as a cost but automated testing as more of a value. Obviously there are subtleties to the manual vs automated debate, but a job interview wasn’t the time to broach them, so I suggested quality assistance (somewhat of a flavour of whole-team testing) as something that might offer a halfway house.

They seemed willing to consider this as to them it meant people with a manual skillset could add value in helping ensure automated testing created by non-testers was up to the mark, while possibly upskilling themselves to be able to work on automated testing at some future point.

To relate to your quote: with a little persuading, they could possibly come to see how investing in testers, and seeing the discipline as value-adding, could help them move the business forward. Ironically, they had low confidence in the software they were producing, but because of how they saw manual testing (and testing in general), they couldn’t really see a way out…


I haven’t run into this a lot in my career. In my younger days I expect I’d have locked horns with them! I hope today I would be smart enough to listen to them and see what they have to say. Maybe they have some good ideas. Maybe, as they talk, they will realize they have some areas they aren’t so sure about. You may find an opportunity to suggest trying some small experiment. You will always meet resistance. Sometimes we can use that energy and turn it around for good.


Back in 2000, I tried to get Brian Marick to write a book on testing in Extreme Programming with me, but at the time, he didn’t have experience working on an XP team. He did encourage me to go forward with the book, and I ended up writing it with Tip House. Soon, Brian introduced me to Janet. She was working as a tester on an extreme programming team in Calgary. She had the good fortune to work with some Thoughtworkers who were among the pioneers of XP. Janet became the “tester” for our book. We’d send her our chapters, she would try the techniques with her team and give us feedback as to how they worked. It was a huge help to us!

After finishing the book, Janet and I kept corresponding and helping each other, as being a tester on an XP team was still a rare thing. We both attended XP 2002 in Chicago. I can’t remember if we first decided to collaborate on a talk, or if we first decided to collaborate on writing an article, but before long we were doing both together pretty frequently. In 2008, my editor asked if I would write a new book about testing in agile. Tip didn’t want to write another book. Luckily for me, I was able to talk Janet into it! We complement each other’s experience and skill sets really well. Now we’ve started the Agile Testing Fellowship, we’re still doing tutorials together, and who knows what will be next!


This is a great question. I’ve experienced benefits both ways. When I was part of the cross-functional development team reporting to the development manager, I truly felt like part of the team. I was fortunate to have managers who valued testers as much as other team members and I was seen as a senior team member and part of the leadership.

In another job, I was part of the testing and support team, reporting to a test/support director. Helping with support was a big benefit, it helped me know what problems customers experienced and helped us improve our testing and focus it in the right places. Reporting to a director who had equal rank and authority to the development director was also an advantage there. The company culture did not value testers, though the development management grudgingly agreed they were necessary. Our director made sure that we were equally supported and valued. We were embedded in the development team and worked as part of that team. Because we were so few testers compared to the size of the team, developers did a great deal of testing work. It ended up being a great collaboration. We all learned from each other and our product was better for that.

In large companies with many delivery teams, I’ve seen the need to have, at the very least, a testing Community of Practice leader who ensures that testers get together to share experiences, knowledge, tools and such regularly, and makes sure they get all the training and support they need.


Question: My devs are amazing when we ask them to help out with automated testing but it’s much harder to get them to help out with manual testing. They say they’re ‘not good at it’. What’s a good response beyond ‘no one is, at first, and i really need your help’?

As I mentioned in the AMA, it is important to share the pain of manual regression testing with everyone on the team. Divide those checklists or scripts up among everyone including developers.

For other types of manual testing, I think a lot of this is just a bit of fear from developers that they don’t know how. That’s why I did the fun exploratory testing workshop I described, using personas and charters but testing kids’ toys and games. Then followed up with more serious workshops testing our app.

Having testers pair with developers frequently also helps developers learn more testing skills. Even if you pair on writing production code, you’ll be writing unit tests and hopefully automating tests at other levels too, so as a tester you can explain how to specify good test cases.
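To make “specify good test cases” concrete: while pairing, a tester’s contribution is often the boundary and edge cases beyond the happy path. A tiny sketch in Python (the `apply_discount` function and its rules are made up purely for illustration):

```python
def apply_discount(total: float, percent: float) -> float:
    """Hypothetical production code: discount a basket total by a percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)

# Happy-path case a developer might write first:
assert apply_discount(100.0, 10) == 90.0

# Boundary and edge cases a tester might suggest while pairing:
assert apply_discount(100.0, 0) == 100.0    # no discount at all
assert apply_discount(100.0, 100) == 0.0    # full discount
assert apply_discount(0.0, 50) == 0.0       # empty basket
try:
    apply_discount(100.0, 101)              # out of range must be rejected
except ValueError:
    pass
```

The point isn’t these particular cases; it’s that the tester’s habit of probing boundaries rubs off on the developer during the pairing session.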

Once when pairing with a dev on my team we had the idea to put together a short “exploratory testing checklist for devs”. We pinned it to our Slack channel. It encouraged developers to remember to try more manual testing before declaring a story done. I also laminated Elisabeth Hendrickson’s Testing Heuristics Cheat Sheet and left copies around the work area. I would see it get used occasionally.


Q: Does Extreme Programming still have a place in software development or has it been taken over by newer methods?

XP’s creators never intended for a thing called “Extreme Programming” to be around for years and years. I had a conversation with Kent Beck back in 2001 at a testing conference where I asked why they had picked such a terrible name. He said “Oh, in 10 years people will just be calling this good software development”. Sadly, that hasn’t really happened. But many of the XP practices, such as TDD, CI, refactoring, and indeed testing, are established development practices today. We see different frameworks for managing projects, such as kanban versus Scrum, but high-performing teams are doing most if not all of the XP practices.


In some contexts, unit tests could be enough! My approach has been to get everyone on the team together and talk about what is going well and what’s not: Is our code at the level of quality to which we committed? What is our biggest problem? What is a realistic, timely, measurable goal to make that problem smaller? Then let’s think of an experiment and measure progress towards that goal.

This is one area where I find models like the test automation pyramid helpful. Unit tests are the solid base of the pyramid. We can look at that model and talk about where we are now and where we want to be. I would venture to say that teams doing test-driven development with good coverage at the unit level will have code that is significantly higher quality than teams doing no test automation, and probably higher quality than teams who are doing some automation through the UI level. That doesn’t mean it’s good enough. We should always be trying to improve.

Since I’m not a coder anymore and I don’t write unit tests, I’ve found it doesn’t really work to evangelize about how great it would be to automate tests at all the levels. Look for ways to get the whole team to talk about it. As I mentioned in the AMA, get More Fearless Change by Linda Rising and Mary Lynn Manns and go work at being an agent for change.
