My weirdest experience with code coverage tools was working on a large, complex app that started with zero tests. We continually added new automated tests - usually based on existing bugs, sometimes on existing manual test plans. With this process we got all the way up to about 30% code coverage.
These were decent tests and they caught a lot of regressions.
The interesting part was what happened next. We expected future bugs to come largely from the uncovered code. Instead, they still came predominantly from the 30% of code that was already covered, not from the 70% that was not.
Ever since, I have felt that coverage metrics may actually be more misleading than illuminating. I don't necessarily object to the existence of these reports, but I'm extremely sceptical of decisions made based upon them.
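A toy sketch of why covered code can still harbour bugs (not from the original posts; the function and its bug are entirely hypothetical): a test can execute every line of a function, giving it 100% coverage, without ever checking the behaviour that matters.

```python
def days_in_month(month: int) -> int:
    """Hypothetical function with a deliberate bug: it ignores leap years."""
    if month == 2:
        return 28  # bug: should be 29 in a leap year
    if month in (4, 6, 9, 11):
        return 30
    return 31

def test_days_in_month():
    # These assertions execute every branch above, so a coverage tool
    # reports the function as fully covered...
    assert days_in_month(1) == 31
    assert days_in_month(2) == 28
    assert days_in_month(4) == 30
    # ...yet the leap-year case is never exercised, so the bug survives
    # with the coverage report showing green.

test_days_in_month()
```

The point is only that "covered" means "executed", not "verified" - which is one way bugs keep appearing in code the report marks as tested.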
When you realize that 80% of your bugs are "in" less than 20% of your code, and that a lot of them are not fixable "in code alone", the code coverage effort can feel like a distraction. It's still a good aim, but it's not a whole-team goal. As usual, @kinofrost has said it better than I can.
Software creation is a team activity. The moment any task in that process falls to one person, the entire team has failed. If this task starts to do that, stop. I hope the measurement exercise does help you build a better picture, though, because it's still a valuable one-off project if done as a whole team.
Hey, welcome to the MOT community. A gutsy and honest question to start off with, Parameswaran! I hope the technical and non-technical responses have given you a flavor of what the community is all about. We are passionate, perhaps a bit too much sometimes, but I hope you will feel right at home and continue to share your journey with us all. I often wish I were a web developer or tester; you always seem to have cooler things to do on the job. Keep on testing.