This year we want to start measuring our bugs and making the results visible on the whiteboard each sprint. The first question is: which bugs do we actually want to register? We don't want the full overhead of registering everything, so we need to decide which bugs are worth recording.
The bugs we do not register:
- Bugs that occur in the DEV environment (found manually or by automated tests)
- Wannabe bugs: reports that are actually change requests or new requirements
The bugs we do register:
- Bugs we find in the integration, test and acceptance environments (manually or by automated tests) after installation of the new version
- Bugs we find in production. These deserve special attention, because we believe the only bug you should fix directly in production is one that actually disturbs the production process. Everything else you or the business finds is a new user story and can be planned for the next sprint. This also motivates the business to test earlier, so issues get fixed before they reach production.
So what will we do with this bug list at the end of every sprint? First of all, we take action based on the environment where the bug was found. For example, bugs found on the integration server and bugs found in production are fixed in the current sprint; the others are added to the backlog and may be planned for the next sprint.
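The triage rule above can be sketched as a small function. This is a hypothetical illustration, not an actual tool: the `Bug` class and the environment names are assumptions made up for the example.

```python
from dataclasses import dataclass


@dataclass
class Bug:
    title: str
    environment: str  # e.g. "integration", "test", "acceptance", "production"


def triage(bug: Bug) -> str:
    """Decide where a registered bug goes, per the team agreement above."""
    if bug.environment in ("integration", "production"):
        return "fix in current sprint"
    return "add to backlog"


print(triage(Bug("login fails after deploy", "integration")))  # fix in current sprint
print(triage(Bug("typo in report header", "acceptance")))      # add to backlog
```

Writing the agreement down this explicitly, even as pseudocode on the whiteboard, removes any mid-sprint discussion about what to do with a freshly found bug.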
At the end of every sprint we count the bugs registered during that sprint and update the bug metric. The bug metric is a graphical view of the bug count per sprint, with an extra annotation for the bugs found in production. This makes the number of bugs per sprint visible at a glance.
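Computing the numbers behind the metric is straightforward. A minimal sketch, assuming bugs are recorded as (sprint, environment) pairs; the data below is purely illustrative:

```python
from collections import Counter

# Illustrative bug register: (sprint, environment where the bug was found)
bugs = [
    ("sprint-1", "test"),
    ("sprint-1", "production"),
    ("sprint-2", "integration"),
    ("sprint-2", "acceptance"),
    ("sprint-2", "production"),
]

# Total bugs per sprint, plus the production subset for the annotation
total = Counter(sprint for sprint, _ in bugs)
production = Counter(sprint for sprint, env in bugs if env == "production")

for sprint in sorted(total):
    print(f"{sprint}: {total[sprint]} bugs ({production[sprint]} in production)")
```

The two counters map directly onto the chart: one bar per sprint for the total, with the production count highlighted on top of it.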
My conclusion: I think this metric can help you monitor the quality of your team's work. Take it, and particularly the production bugs, to the retrospective to improve quality. Don't make it a negative thing; see it as a way to deliver better products. A clear agreement on how to handle bugs will also help the team keep its focus during the sprint.