What is the "Definition of Done"?

@techgirl1908 recently spoke at TestBash Philly about “How to Get Automation Included in Your Definition of Done”.

@satya followed up with a blog post about the talk.

I got stuck on this paragraph:

Coming up with a commonly agreed-upon definition of done is quite hard, and each stakeholder has a different perspective. At the outset it seems quite obvious and achievable, but software development is complex, there are many exceptions, and the context of each story matters.

In my own context I’ve had 2 different scenarios.

  1. We had ALL the columns in JIRA: “Dev Done”, “Testing Done”, “PM signed off”, “PO signed off”.
  2. We had a single “Done” column and the definition of done was “Testing is happy with it”.

They’re pretty different scenarios, and to be honest, I don’t think I was ever fully comfortable with either.

It got me thinking though: in your context, can you share what your definition of done is? As Satyajit says, a commonly agreed definition is hard to reach, but have you managed to at least get a common definition within your workplace?

I like to use different definitions - not by role, but by level (a rough sketch of tagging tests by level follows the list).

  • Story Done: Include automation tasks like unit tests and API-level tests in this. Product Owner accepted. Other things too, like exploratory testing - whatever is necessary for the team to get a story ‘done’ with confidence.

  • Feature Done: Include automation tasks like full workflow tests. Product Owner accepted. Other testing might also include performance, load and stress testing.

  • Release Done: A final check and run of all automated tests, plus a final round of exploratory system-level testing - if necessary to give the team(s) confidence to release to production.
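For concreteness, here is a minimal sketch of how those levels could map onto test selection, assuming pytest; the marker names (“story”, “feature”, “release”) are my own illustrative convention, not a standard:

```python
# Sketch: tag tests by the level of "done" they gate.
# Register the markers in pytest.ini to avoid warnings, e.g.:
#   [pytest]
#   markers = story, feature, release
import pytest

@pytest.mark.story
def test_discount_calculation():
    # Unit-level check that helps call a single story "done".
    assert round(100 * 0.9, 2) == 90.0

@pytest.mark.feature
def test_checkout_workflow():
    # Full-workflow test gating "Feature Done" (body elided).
    ...

@pytest.mark.release
def test_release_smoke():
    # Final system-level check before "Release Done".
    ...
```

Story Done could then run `pytest -m story`, Feature Done `pytest -m "story or feature"`, and Release Done the full suite.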

5 Likes

Side note: I think the definition of done should come from QA as a thought-leadership piece for the company, and as an area of continual improvement rather than a deterministic end of a phase. QA work is never “done”, but it can be complete for a phase. It is more about changing the nature of the conversation in the SDLC to reward continuous effort rather than done-ness.

My personal definition of done is an iterative process with many release gates. There is never a point in time at which QA is done, but there are many steps in the process that are completed before a release. After a release, testing still needs to happen: regression models updated, data on hotfixes and fast-follows understood and reported on, and workflows constantly updated with this new information.

When thinking about automation, I am toying with creating and modeling test cases in tandem with the development work. Say you are looking at a 4-week cycle: week 1 is requirements gathering, week 2 is information sharing between devs and QA, and weeks 3 and 4 are building by devs and testing by QA. Where can automation live in this scenario? If the requirements include having the developers add unique data-test tags, then automation can begin as a stubbing of automation test cases. In most cases there is a design for the system, so we know an automated test case can be formalized based on the expected pages and new elements. We can put in temporary ids for the tests and replace them with the real test ids once those components are merged into a QA server. This allows automation to be part of the requirements gathering phase: we know we need these things, and we know why we need them, so the question becomes whether the plan as understood includes the things needed to solve those problems.
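As a rough sketch of what that stubbing might look like in practice (assuming pytest and a Playwright-style `page` fixture; the data-test values and URL are hypothetical placeholders):

```python
import pytest

# Placeholder selectors modeled from the design during requirements
# gathering; the "TBD-*" ids get swapped for the real data-test tags
# once the components are merged into the QA server.
SELECTORS = {
    "email_field":   '[data-test="TBD-checkout-email"]',
    "submit_button": '[data-test="TBD-checkout-submit"]',
    "confirmation":  '[data-test="TBD-order-confirmed"]',
}

@pytest.mark.skip(reason="stub: waiting on real data-test ids from dev")
def test_checkout_happy_path(page):
    # Modeled against the expected page and new elements before the
    # feature exists, so QA feedback starts in week 1, not week 3.
    page.goto("https://qa.example.com/checkout")
    page.fill(SELECTORS["email_field"], "user@example.com")
    page.click(SELECTORS["submit_button"])
    assert page.is_visible(SELECTORS["confirmation"])
```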

Often, in modeling automation flows, I find sketchy requirements that developers have yet to come across, which allows a quicker feedback cycle from QA to the product management team to verify their requirements and update the information given to developers.

Getting rid of done and moving towards phase completion means that you have a lot more variability and control over the release. Teams start looking towards your assessments: instead of forcing you to give quality judgements on done-ness, they wait for you to lead with an assessment of phase and risk at any point in the development-to-release cycle.

1 Like

I agree the conversation should shift away from done. Recently, I proposed a test that requires multiple scenarios; however, not all the scenarios are required to demonstrate a production-ready product. The agreement I have with our business team members is that I will continue through the scenarios and let them decide when I am done. While I prefer fewer scenarios, to save time and money, the conversation has shifted from getting testing done to more of a budget decision.

Regarding your comments on integrating the creation of automation: we had very good success when the test engineer (the one who designs and writes test automation, among other responsibilities) reviewed a “shelve set”. A shelve set is code that was completed that day and made available to anyone who wants to review it; it is not checked into the code repository, but simply placed “on a shelf” until the next day. In this manner, the test engineer was able to craft a test nearly in parallel with the development of the product. The test is based on acceptance criteria, and the running test serves as feedback for the developer: if the automated test passes, the code is meeting the acceptance criteria. Note the project was largely API development, which lends itself to this kind of practice. Lastly, “done” became more of a collaborative effort.
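Since the project was API development, the test written in parallel can be as simple as a request asserting the acceptance criteria; a minimal sketch with pytest and requests (the endpoint, payload and criteria here are hypothetical):

```python
import requests

BASE_URL = "https://qa.example.com/api"  # hypothetical QA environment

def test_create_order_meets_acceptance_criteria():
    # Hypothetical acceptance criterion: POSTing a valid order
    # returns 201 with an order id and a "pending" status.
    payload = {"sku": "ABC-123", "quantity": 2}
    response = requests.post(f"{BASE_URL}/orders", json=payload, timeout=10)

    assert response.status_code == 201
    body = response.json()
    assert "order_id" in body
    assert body["status"] == "pending"
```

Run against a build of the shelve set, a green result tells the developer the acceptance criteria are met before the code is even checked in.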

3 Likes

Software testing relates to testing a feature or broken functionality, which software testers perform after developers have worked on it. For this, almost all software testing companies use a tracking tool where tickets for feature development or for fixing broken functionality are reported. This helps keep track of the tickets and the progress on them.

These tickets are tracked by their status, which moves through phases like ‘In Dev’, ‘Code Review’, ‘Impeded’, ‘In Progress’, ‘Ready for Test’, ‘In Test’ and ‘Done’. Since we are talking here only about ‘Done’, we will focus mainly on that status.

The ticket is reported in the tracking tool, say, Jira, by the product owner (if it is a new feature) or by the software tester (if it is broken functionality). The developer then works on the ticket, changing its status to ‘In Dev’ or ‘In Progress’ when they start. Once it is done from their end, it is moved to ‘Ready for Test’. From here the software tester picks up the ticket, changes its status to ‘In Test’, and tests all the scenarios and aspects that could break the new feature or the fixed functionality again. Once the tester has tried all end-user scenarios and declares the ticket good to go, they change its status to ‘Done’.

Now, the point here is that the software tester can move a ticket to ‘Done’, but this is not done only by the tester. For feature development, all the acceptance criteria are decided by the product owner, and Dev and QA work to those. So for a feature ticket, it is not a given that QA will change its status to ‘Done’ once the work is finished; it is sometimes the responsibility of the product owner to move the ticket to ‘Done’ once they approve that all the changes they requested have been implemented.

‘Done’ is the last status of the ticket; at this point, we can consider the ticket closed and needing no more work. But a ‘Done’ ticket is not untouchable: it can be reopened and reworked if needed, then closed again once the work is redone.
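To make the lifecycle concrete, the statuses above (including the reopen path) can be sketched as a small state machine; this transition table is illustrative of a typical Jira-style workflow, not any tool’s actual API:

```python
# Allowed status transitions, including reopening a "Done" ticket.
TRANSITIONS = {
    "Open":           {"In Dev", "In Progress"},
    "In Dev":         {"Code Review", "Impeded"},
    "In Progress":    {"Code Review", "Impeded"},
    "Impeded":        {"In Dev", "In Progress"},
    "Code Review":    {"Ready for Test"},
    "Ready for Test": {"In Test"},
    "In Test":        {"Done", "In Dev"},  # back to dev if testing fails
    "Done":           {"In Dev"},          # "Done" can be reopened
}

def move(current: str, new: str) -> str:
    """Validate a status change against the workflow above."""
    if new not in TRANSITIONS.get(current, set()):
        raise ValueError(f"Cannot move from {current!r} to {new!r}")
    return new

status = "In Test"
status = move(status, "Done")    # tester (or PO) signs off
status = move(status, "In Dev")  # Done is not final: reopen when needed
```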

‘Done’ is the status given to a ticket when we can consider the feature implemented properly, or the broken functionality working completely fine.

Hope this information is helpful to you.

3 Likes

In all the teams and projects I have worked on, the definition of “Done” has been different.

I remember my time on teams where it was left to the QA to decide what Done was. But that didn’t really work, because what was in a QA’s head about done, or what the QA thought was important, sometimes didn’t match up with what the PM/PO or the dev team envisioned. So I think we should all refrain from deciding what Done is all by ourselves.

What has always worked in a team is when the team sits together to decide what “done” means to them. Sometimes it means deploying to production with all the automation complete. Sometimes it means accepting that there are low-priority defects, tracked for the future, and still calling it done. It really depends on how mature the engineering practices are within an organisation.

In essence, I think QAs should move away from the mentality of being gatekeepers, whether for release processes or for calling things done. Have a conversation with the different stakeholders and decide together.

In my current project, we like to have a quick catch-up with the stakeholders every time we think a bunch of stories is “done”, as there are usually variations from what we initially agreed upon as the checklist of “done”. This prevents any future shocks, and everything stays a team responsibility.

1 Like

I do not agree with Test owning the DoD. The quality of a feature/product and the decision to release are made by the team, and hence the DoD should be created, owned and maintained by the team.

In terms of what goes into a DoD from a testing perspective, I’ve seen everything from really high-level criteria (“Testing completed”) down to a thorough list of test/product capabilities such as unit, automation, exploratory, performance, upgrade, usability, etc.

I like a more detailed list, as I think it helps surface questions that might not otherwise be asked. E.g. if you don’t have a DoD item for performance or usability testing, what’s the trigger to explore these in the design of the story or the AC?

2 Likes

I agree that everyone should buy into and have a stake in the Definition of Done, but if you go down that route, how do you avoid “when everyone owns it, nobody owns it”?

1 Like

Good question. In a previous role, the Scrum Master was responsible for reviewing the DoD before work on a feature started and then checking that the appropriate criteria had been completed at the end.

I also think there’s a difference between responsibility and accountability. If the team knows who is accountable for which parts of the DoD, then perhaps that avoids the diffuse-ownership problem.

1 Like

The DoD shouldn’t be limited to “bug” or “ticket” statuses. It should include many other quality requirements, for example:

  • Code checked into a VCS
  • Unit tests developed
  • Automatic static code analysis (Sonar and the like)
  • Artifacts created and ready to be deployed in any environment
  • Non-functional testing performed (performance, UX…)

All these requirements will depend on the company culture, but most of them should be common to agile development teams (a rough sketch of checking some of them automatically follows below).
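As a sketch (assuming a Python project; the commands are placeholders for whatever your pipeline actually runs), several of these requirements can be turned into an automated gate:

```python
import subprocess
import sys

# Hypothetical DoD gates mapped to the commands that verify them;
# substitute your real steps (Sonar scan, packaging, perf checks...).
DOD_GATES = {
    "unit tests pass":       ["pytest", "--quiet"],
    "static analysis clean": ["flake8", "src/"],
    "artifact builds":       ["python", "-m", "build"],
}

def check_dod() -> bool:
    all_green = True
    for gate, command in DOD_GATES.items():
        result = subprocess.run(command, capture_output=True)
        print(f"[{'PASS' if result.returncode == 0 else 'FAIL'}] {gate}")
        all_green = all_green and result.returncode == 0
    return all_green

if __name__ == "__main__":
    sys.exit(0 if check_dod() else 1)
```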

About the responsibility of reviewing the DoD: it should be done by all the members of the team (“the three amigos”), BUT the QA members should be the ones who lead these reviews and take on the task of encouraging the team to achieve it.
This is an important part of our job: as QA team members we must advocate for these DoD goals, as that will improve product quality much more than just “sticking” to running tests and checking them “done”.

2 Likes