What needs to be in your user stories’ definition of done


Harvey Ball said:

I’ve seen many teams’ definitions of done, and more and more I’m seeing teams include a list of test types that need to be run to ensure a story is done.
Example:

  • Unit tests written and passed
  • Service integration tests written and passed
  • Performance tests written and passed
  • UI tests written and passed
  • …etc.

I’ve always considered the definition of done to be generic enough to apply to any story a team may work on, so listing all possible test types seems a bit too detailed to me.

And if you list all the tests but have a story that doesn’t require one of them (say, a story with no UI, where you can’t write a Selenium or equivalent test), how can it ever pass the definition of done?

Or maybe you have a story that requires a specific test type that isn’t on your list, such as a vertical access test? Because it’s not on the list, no one ever thinks to raise it.

I’ve always thought that how to prove a story is tested should be discussed during sprint planning, with the team working out which tests are appropriate to give high confidence in the story’s quality.

This blog post by Mike Cohn has a good example of what I like to see in a definition of done:
http://www.mountaingoatsoftware.com/blog/clarifying-the-relationship-between-definition-of-done-and-conditions-of-sa

Something that is meaningful, but not so tied to specifics that you can’t meet it, or that meeting it still means you miss something out.

I’d be really interested in other people’s views on this and to see if there are valid reasons for listing multiple test types in the definition of done.

=====================================>

Hi Harvey,

The list of test types, especially if they are repeated for EVERY STORY, should be added to a master list called “organizational standards”. Then, for every story, you can simply put “… and meets all organizational standards where applicable.”

The definition of done should be focused on those verifiable, DIRECT VALUE ADD ways we can empirically prove the story is done. The testing types are not a direct value add, and while they are important for a number of reasons, they should not be the primary focus of the definition of done. That’s why I suggest companies add them to their organizational standards and reference them there. Why? Because they will most likely be repeated in every single story.

Let’s take a look at a more meaningful mock story and mock DoD:
“As a user, I want to be able to use a single search field to search by title OR author OR ISBN (like I can on Amazon)”

Definition of Done:

  • When I search for an author, such as Salinger, I get Catcher in the Rye
  • When I search for a title, such as Romeo and Juliet, I get Shakespeare’s Romeo and Juliet
  • When I search for an ISBN, such as 0736425152, I get a book about Wall-E
  • The user story meets all organizational coding standards
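
If the team wants to automate these checks, the criteria above translate almost directly into acceptance tests. Here is a minimal sketch in Python (pytest-style); the search_catalog() function, the in-memory catalog, and the placeholder ISBNs are hypothetical stand-ins for your real search service, not part of the story itself:

    # Sketch only: search_catalog() and the catalog below are hypothetical
    # stand-ins so the example is self-contained and runnable with pytest.
    def search_catalog(query):
        """Hypothetical single-field search across title, author and ISBN."""
        catalog = [
            # Placeholder ISBNs for the first two entries; 0736425152 is the
            # Wall-E example from the DoD above.
            {"title": "Catcher in the Rye", "author": "J. D. Salinger", "isbn": "1111111111"},
            {"title": "Romeo and Juliet", "author": "William Shakespeare", "isbn": "2222222222"},
            {"title": "Wall-E (Junior Novelization)", "author": "Disney/Pixar", "isbn": "0736425152"},
        ]
        q = query.lower()
        return [book for book in catalog
                if q in book["title"].lower()
                or q in book["author"].lower()
                or q == book["isbn"]]

    def test_search_by_author_returns_catcher_in_the_rye():
        titles = [book["title"] for book in search_catalog("Salinger")]
        assert "Catcher in the Rye" in titles

    def test_search_by_title_returns_romeo_and_juliet():
        titles = [book["title"] for book in search_catalog("Romeo and Juliet")]
        assert "Romeo and Juliet" in titles

    def test_search_by_isbn_returns_wall_e_book():
        titles = [book["title"] for book in search_catalog("0736425152")]
        assert any("Wall-E" in title for title in titles)

Each DoD bullet maps to one test, which keeps the definition of done short while still giving the team an empirical way to prove the story is finished.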

In short, don’t turn the DoD into a lawyering catch-all document. If you have certain types of tasks that are repeated across stories, such as writing unit tests, UI tests, and integration tests, or doing check-ins and deployments, represent them as tasks in the story and as “organizational standards of done-ness that apply to all stories where applicable.” That last phrase gets you around the fact that headless components don’t need UI testing with Selenium.
