Ask Me Anything: Whole Team Testing

Laura asked a question about how to get the whole team involved in performance testing.

A technique that worked well for my team recently was this:

  1. Work with the product owner to establish the goals of the performance testing
  2. Use this to define particular test scenarios (ideally prioritised a bit)
  3. If you’re completely new to a tool, work out how to script, debug and run one of the easy-to-medium scenarios yourself first, and start to understand what information you’ll want to capture in the actual test run
  4. Pair with a developer on scripting and debugging a scenario, to share your knowledge
  5. Divide and conquer scripting the rest of the scenarios, with plenty of opportunities to review progress and course-correct. We do final pull request reviews too, which helped. Identifying potential ‘tricky bits’ and unknowns in the scripting and making a conscious decision about who was going to tackle those also worked well.
  6. Divide and conquer running the final test scenarios. I created a template for each test run, for capturing the important information (e.g. start and end time, checklists for test data setup, placeholder tables for capturing key metrics, etc.), so the devs could rattle through running the scenarios. Then I was able to focus on working through the results and digging into the reasons behind the behaviour, to draw conclusions and recommend further work to consider.
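
The test-run template in step 6 could be as simple as a structured record. Here is a minimal sketch in Python; the field names and checklist items are illustrative assumptions, not taken from any particular tool:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class TestRunRecord:
    """Captures the essentials of one performance test run."""
    scenario: str
    start: Optional[datetime] = None
    end: Optional[datetime] = None
    # Checklist for test data / environment setup: item -> done?
    setup_checklist: dict = field(default_factory=lambda: {
        "test data loaded": False,
        "environment reset": False,
        "monitoring enabled": False,
    })
    # Placeholder for key metrics, e.g. p95 latency, error rate
    metrics: dict = field(default_factory=dict)

    def duration_minutes(self) -> float:
        if self.start is None or self.end is None:
            raise ValueError("run has not completed")
        return (self.end - self.start).total_seconds() / 60

# Whoever runs a scenario fills one of these in
run = TestRunRecord(scenario="checkout under load")
run.start = datetime(2021, 5, 1, 9, 0)
run.end = datetime(2021, 5, 1, 9, 45)
run.metrics["p95_latency_ms"] = 820
print(run.duration_minutes())  # 45.0
```

The original template was presumably a document rather than code; the point is simply that a consistent structure lets whoever runs a scenario capture the same information every time, which makes collating results afterwards much easier.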

I hope that helps! I’d be interested to hear more about your situation and would be happy to talk in more detail.

I do think it is good practice for developers not to test their own code. It’s hard to distance yourself from your pretty baby. That said, I’ve seen developers who pair and use a small exploratory testing checklist for guidance do a good job of basic testing on their own story. Just as they use unit tests and TDD to ensure code correctness, they can use some manual exploratory testing for this too. Someone else should then test it at the feature level.

That sounds like a really solid process! Have you blogged or written an article about this? I’d love to share it.

oh wow, that’s a lot. I’ll try to work my way through though! Everyone please feel free to chip in!

Thank you! It was my first time doing performance testing, and it was only when I saw the question this evening that I realised I did have something specific to share from the experience - so there’s no blog post or article yet. But the positive feedback is definitely great encouragement for me to make it happen 🙂

Unfortunately, most non-testers have a sketchy understanding of testing, at best. I think we should try to help them learn about it. Execs and managers especially need to understand the value of testing, and how an investment in quality pays off in the long run. Everyone goes around saying they want the best quality - it’s like mom and apple pie - but if they aren’t willing to let teams have time to integrate testing activities into coding activities (both are equally important parts of software development), that lip service doesn’t help.

Work to make the benefits of the testing your own team is doing more visible. Also make the problems caused by a lack of testing visible. This can be as simple as highlighting critical bugs in production. Testing isn’t the way to fix those - building quality in is. Testers can help the team learn ways to shorten feedback loops and develop high-quality code from the start.

One way to help execs understand might be to show the opportunity cost of things like time spent triaging and fixing bugs in production (and communicating with irate customers) at the expense of time to build new features. And we don’t get many chances to show customers the value our product offers; if we blow that chance by delivering the wrong things or buggy things, it’s bad news for the business.

Whole team testing includes all testing activities, and it especially includes baking quality into the product from the get-go. I didn’t get into this during the AMA, but testers can play a vital role in helping the delivery team and business stakeholders achieve a shared understanding of the purpose of each new feature, how it should behave, and how we will know it is successful in production. We can add value with lots of different testing activities for various quality attributes: accessibility, security, reliability, usability - those ilities go on and on. We invest in regression test automation to free up time for value-add activities like exploratory testing.

If I’m understanding this correctly - I’ve seen a lot of “agile transitions” where developers got training in technical practices like TDD, product people got ScrumMaster or product owner training, and testers got… ignored. It’s natural for them to have a lot of fear about suddenly being stuck on a cross-functional Scrum team that’s supposed to take responsibility for quality and testing. This is where we need managers to step in and support testers in many ways: training, time to learn, and making sure they are equally valued members of the delivery team.

Very very true.

I was recently in a software testing job interview where a senior manager pretty much saw manual testing as a commodity. Automated testing was more respected, but I’m not sure why they should automatically make that distinction: in my experience, badly done automation is arguably worse than commoditised manual testing, because you come to rely on the automation, and if it’s a wonky crutch, some day it may give out and you’ll come crashing down.

On some level it was as if the organisation saw pure manual testing as a cost, but automated testing as more of a value. Obviously there are subtleties to the manual vs automated debate, but a job interview wasn’t the time to broach them, so I suggested quality assistance, i.e. a flavour of whole-team testing, as something that might offer a halfway house.

They seemed willing to consider this as to them it meant people with a manual skillset could add value in helping ensure automated testing created by non-testers was up to the mark, while possibly upskilling themselves to be able to work on automated testing at some future point.

To relate to your quote: with a little persuading, they could possibly come to see how investing in testers, and viewing the discipline in a value-adding way, could help them move the business forward. They were in the ironic position of having low confidence in the software they were producing, but because of how they saw manual testing - and testing in general - they couldn’t really see a way out…

I haven’t run into this a lot in my career. In my younger days I expect I’d have locked horns with them! I hope today I would be smart enough to listen to them and see what they have to say. Maybe they have some good ideas. Maybe, as they talk, they will realize they have some areas they aren’t so sure about. You may find an opportunity to suggest trying some small experiment. You will always meet resistance. Sometimes we can use that energy and turn it around for good.

Back in 2000, I tried to get Brian Marick to write a book on testing in Extreme Programming with me, but at the time, he didn’t have experience working on an XP team. He did encourage me to go forward with the book, and I ended up writing it with Tip House. Soon, Brian introduced me to Janet. She was working as a tester on an extreme programming team in Calgary. She had the good fortune to work with some Thoughtworkers who were among the pioneers of XP. Janet became the “tester” for our book. We’d send her our chapters, she would try the techniques with her team and give us feedback as to how they worked. It was a huge help to us!

After finishing the book, Janet and I kept corresponding and helping each other, as being a tester on an XP team was still a rare thing. We both attended XP 2002 in Chicago. I can’t remember if we first decided to collaborate on a talk, or if we first decided to collaborate on writing an article, but before long we were doing both together pretty frequently. In 2008, my editor asked if I would write a new book about testing in agile. Tip didn’t want to write another book. Luckily for me, I was able to talk Janet into it! We complement each other’s experience and skill sets really well. Now we’ve started the Agile Testing Fellowship, we’re still doing tutorials together, and who knows what will be next!

This is a great question. I’ve experienced benefits both ways. When I was part of the cross-functional development team reporting to the development manager, I truly felt like part of the team. I was fortunate to have managers who valued testers as much as other team members and I was seen as a senior team member and part of the leadership.

In another job, I was part of the testing and support team, reporting to a test/support director. Helping with support was a big benefit, it helped me know what problems customers experienced and helped us improve our testing and focus it in the right places. Reporting to a director who had equal rank and authority to the development director was also an advantage there. The company culture did not value testers, though the development management grudgingly agreed they were necessary. Our director made sure that we were equally supported and valued. We were embedded in the development team and worked as part of that team. Because we were so few testers compared to the size of the team, developers did a great deal of testing work. It ended up being a great collaboration. We all learned from each other and our product was better for that.

In large companies with many delivery teams, I’ve seen the need to have, at the very least, a testing Community of Practice leader who ensures that testers get together to share experiences, knowledge, tools and such regularly, and makes sure they get all the training and support they need.

Question: My devs are amazing when we ask them to help out with automated testing, but it’s much harder to get them to help out with manual testing. They say they’re ‘not good at it’. What’s a good response beyond ‘no one is, at first, and I really need your help’?

As I mentioned in the AMA, it is important to share the pain of manual regression testing with everyone on the team. Divide those checklists or scripts up among everyone including developers.

For other types of manual testing, I think a lot of this is just a bit of fear from developers that they don’t know how. That’s why I did the fun exploratory testing workshop I described, using personas and charters but testing kids’ toys and games. Then followed up with more serious workshops testing our app.

Having testers pair with developers frequently also helps developers learn more testing skills. Even if you pair on writing production code, you’ll be writing unit tests and hopefully automating tests at other levels too, so as a tester you can explain how to specify good test cases.

Once when pairing with a dev on my team we had the idea to put together a short “exploratory testing checklist for devs”. We pinned it to our Slack channel. It encouraged developers to remember to try more manual testing before declaring a story done. I also laminated Elisabeth Hendrickson’s Testing Heuristics Cheat Sheet and left copies around the work area. I would see it get used occasionally.


Q: Does Extreme Programming still have a place in software development or has it been taken over by newer methods?

XP’s creators never intended for a thing called “Extreme Programming” to be around for years and years. I had a conversation with Kent Beck back in 2001 at a testing conference where I asked why they had picked such a terrible name. He said “Oh, in 10 years people will just be calling this good software development”. Sadly, that hasn’t really happened. But many of the XP practices, such as TDD, CI, refactoring, and indeed testing, are established development practices today. We see different frameworks for managing projects, such as kanban versus Scrum, but high-performing teams are doing most if not all of the XP practices.


In some contexts, unit tests could be enough! My approach has been to get everyone on the team together and talk about what is going well and what’s not: is our code at the level of quality to which we committed? What is our biggest problem? What is a realistic, timely, measurable goal to make that problem smaller? Let’s think of an experiment and measure progress towards that goal.

This is one area where I find models like the test automation pyramid helpful. Unit tests are the solid base of the pyramid. We can look at that model and talk about where we are now and where we want to be. I would venture to say that teams doing test-driven development with good coverage at the unit level will have code that is significantly higher quality than teams doing no test automation, and probably higher quality than teams who are doing some automation through the UI level. That doesn’t mean it’s good enough. We should always be trying to improve.

Since I’m not a coder anymore and I don’t write unit tests, I’ve found it doesn’t really work to evangelize about how great it would be to automate tests at all the levels. Look for ways to get the whole team to talk about it. As I mentioned in the AMA, get More Fearless Change by Linda Rising and Mary Lynn Manns and go work at being an agent for change.

This is a really great question! I think it’s essential that every member of the team is equally valued. On my last team, the developers referred to themselves as “engineers”. They did not consider testers, designers, POs or customer support people to be “engineers”. But they would use the term “engineers” like “Let’s have a meeting with all the engineers” when they meant everyone on the team. Or they would talk about something the “engineers” needed when the testers needed that thing too.

One way this team tried to be inclusive was with a “thing of the week”. Any non-inclusive behavior or language we wanted to make ourselves more aware of and try to change, we’d make a Thing of the Week. So we made one like this: “When you are talking about the team in general, or people on the team who are in different roles, just use the word ‘team’, not ‘engineers’.” It had an effect.

Similarly, remembering to not say “you guys” but instead something like “y’all”, “humans”, “mortals”, “you folks” was an effective Thing of the Week.

Personally I don’t worry about semantics like “testing” vs “checking”; everyone on my teams understands the purpose of regression testing versus exploratory testing and other testing activities. But I’ve run into the “That story is done” problem quite a bit. I bring this up with the team and ask what we can do to make sure that we don’t say “done” when we mean “done with writing the code - we think”. One team put in big letters across our task board, “NO STORY IS DONE UNTIL IT IS TESTED”. Big visual charts are a great way to help people change their language and thought patterns.


In my experience, it’s the managers who are keen to have reports. It’s a good idea to find ways to make the team’s accomplishments and problems visible to management in some kind of concise report. I’ve used old-fashioned risk analysis for this in the past.

My teams have found it more effective to start by setting goals to address our biggest obstacles. For example, if we have too many bugs slipping out to production, we want to have a goal to reduce that. Maybe we want no more than 3 high bugs in production in the next 2 months. OK, what will be our experiment around that? Perhaps: “We believe that having a tester pair with a developer on stories will result in no more than 3 high bugs in production in the next 2 months”. Now we have a metric we want to track, and we want to make that as visible as possible. Maybe a big sign on our physical or virtual story/kanban/task board.

Metrics are so often abused and used to punish teams. And they’re easy to game. The most useful “generic” metrics I’ve found are from Lean development and from the State of DevOps survey. Cycle time - from when we start building a new feature or story to when we release it to production - is a good one. It shows the length of our feedback loop. Ideally we are slicing that feature into end-to-end thin “learning releases” so we can get feedback from production use and keep shaping it - or killing it because it isn’t what customers wanted. Mean time between failures on production and mean time to recovery from failures on production are a couple of metrics which the State of DevOps survey found correlate with high-performing teams.
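
As a sketch of the arithmetic behind those metrics (all timestamps below are invented for illustration), cycle time and mean time to recovery are simple calculations over work-tracking and incident data:

```python
from datetime import datetime

def cycle_time_days(started: datetime, released: datetime) -> float:
    """Cycle time: start of work on a story to its release on production."""
    return (released - started).total_seconds() / 86400

def mean_time_to_recovery_hours(incidents) -> float:
    """MTTR: average of (resolved - detected) across production incidents."""
    total = sum((resolved - detected).total_seconds()
                for detected, resolved in incidents)
    return total / len(incidents) / 3600

# Story started March 1, released March 8: a 7-day feedback loop
print(cycle_time_days(datetime(2021, 3, 1), datetime(2021, 3, 8)))  # 7.0

# Two production incidents: a 2-hour and a 4-hour outage
incidents = [
    (datetime(2021, 3, 2, 10), datetime(2021, 3, 2, 12)),
    (datetime(2021, 3, 5, 9),  datetime(2021, 3, 5, 13)),
]
print(mean_time_to_recovery_hours(incidents))  # 3.0
```

The hard part, of course, isn’t the arithmetic - it’s agreeing on what counts as “started”, “released”, “detected” and “resolved”, and making the resulting numbers visible without weaponising them.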

Check out Nicole Forsgren’s book Accelerate, based on five years of the State of DevOps survey. She shares a lot of the science behind it.

We’ve talked a lot about this on the broadcast and in here. My advice is, keep bringing this up in retrospectives, possibly in standups if you are experiencing a roadblock. Always when there is a problem affecting testing, make it a team problem. Ask the team to brainstorm experiments to make the problem smaller. Of course this depends on priorities. For example, if the team is pretty new and has committed to practice test-driven development, and they are really struggling to learn how to do that, I wouldn’t distract them with problems around UI test automation. I would see if I could help them get better at unit-level test cases, and I would look for some stopgap temporary solution for the UI testing until the TDD is going better and the UI testing is then a higher priority. I hope that makes sense.


Alan and Brent say that developers own “code correctness”. I agree with that. Kent Beck called this “internal quality” in his XP Explained book. Developers collaborate with business stakeholders and other delivery team members to gain a shared understanding of each feature and story. They are responsible for designing that code properly for ease of understanding and maintenance, as well as providing unit-level regression tests for it to ensure those small chunks of code continue to behave as expected. Ideally IMO they are using test-driven development.

All of that is the internal code quality. It matters most to the delivery team, though it is a benefit to the business because it means we can change the code quickly and fearlessly. The developers who write the code must own this internal quality. That’s why it makes NO sense to have a tester write unit-level tests.

I do know testers who get involved with static code analysis, which is another means of evaluating code correctness, and in some cases this can make sense. But in general what I’ve seen work best is that the developers are also responsible for running the static analysis to ensure their code is up to the standards agreed upon by the team, including quality attributes such as accessibility.
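
To make “unit-level regression tests” concrete, here is a minimal sketch of the TDD rhythm in Python; the `apply_discount` function and its tests are hypothetical, not from any real codebase. The developer writes a failing test first, then just enough code to make it pass, and the accumulated tests become the regression safety net:

```python
def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount; reject out-of-range input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Tests written first, TDD-style; run with a runner such as pytest.
def test_basic_discount():
    assert apply_discount(100.0, 25) == 75.0

def test_zero_discount():
    assert apply_discount(40.0, 0) == 40.0

def test_rejects_invalid_percent():
    try:
        apply_discount(10.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass
```

Each test pins down one small behaviour, so when the code is later refactored, a failing test points straight at the chunk that stopped behaving as expected - that is the internal quality the developers own.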

External quality is defined by the business stakeholders. They may compromise on lower external quality - perhaps the usability isn’t great, perhaps they know there are bugs and choose not to fix them now - for business reasons. Or they may be quite exacting about it, also for business reasons. It’s dependent on domain, maturity of the product and many other factors.


Janet and I wrote a three-part blog series about how we wrote More Agile Testing: Learning Journeys for the Whole Team. We used much the same process for our first book, though back then, it wasn’t as easy to do video chats and such. We did a lot of google chatting!

We started, when we were both at a conference, by doing a big mind map of the content for the whole book. This of course evolved over time, but it was a good starting place. We divided the book into sections and sections into chapters. We created a release plan so that we “delivered” two draft chapters every two weeks. Each of us would take a chapter, work on it for as long as we could stand it, then trade. We interviewed a lot of people on teams around the world and got friends who were experts in different types of testing to contribute sidebars.

We actually had “user stories” for the book from our “customers” perspective. After we had all the draft chapters, we got feedback from our “customers”, a group of friends willing to give us feedback. I remember that we did a big reorganization of chapters and sections after that.

All that time we were mainly working remotely. We worked on the book every day, pretty much. We gave ourselves time off during the holiday season.

With the 2nd book, we took an idea from Ellen Gottesdiener and Mary Gorman that they used when writing their book Discover to Deliver. We got together at my house, printed out the whole draft manuscript, and put the pages on the walls. We actually cut out sections, moved them around, and taped them to different chapters or pages. It really helped us visualize the book and see how things would work better. We also printed out all 70 sidebars that people contributed and laid them out on the ping pong table, then decided where they fit in the rest of the book and taped them to the appropriate locations. We also highlighted areas we wanted to update.

It was a fun process - I encourage you to read our blog series on it!
