The first question that comes to mind is: who expects you to deliver those metrics?
As for whether to simply deliver them, that depends a bit on the answer to the question above.
If it is a case of “that’s what we always did”: well, the Agile Manifesto’s first sentence says something about doing it yourself and helping others do it. So simply change the metrics if they don’t help you. That is always a quick check for me of how agile an environment really is: can teams actually change simple things like these if they want to, or is there someone somewhere who has to approve?
If someone else, e.g. a stakeholder, wants those metrics, the real question is why they want them. Chances are it goes along the lines of “we’ve always had those”. In my experience as an agile coach it more often than not boils down to two questions:
- When will it be delivered?
- How much does it cost me?
Do the metrics you are supposed to deliver help you answer those questions? My out-of-context impression is that there is an implicit assumption of a certain amount of time per test case, which is used to try to predict the cost, while the man-days try to answer the delivery question.
Both are attempts that I would try to change.
First of all, those questions aim at only one part of development, and I would put more focus on team aspects. I don’t know if you are doing Scrum, but the Scrum Guide says somewhere that we want to deliver a potentially shippable product increment at the end of the iteration, i.e. something that we are happy to give to the customer to use. That includes coding, testing, delivering, writing manuals, etc.
It doesn’t matter who does which part, as long as it gets done, so a metric should IMHO focus on the whole process to put the emphasis there; measuring just one part fosters silo thinking, in my experience. Long story short: try to find out why they really want the metrics and find some together that might be more suitable. (Without knowing your context it is hard to tell; maybe the number of test cases really does make sense. That would be an exception, but I have seen projects that were paid by the number of test cases, and for those projects the number was vital. You could argue that the contract was bad.)
Furthermore, I would turn away from absolute estimates for two reasons: first, they suggest a precision that is not there, and second, we humans are terrible at them. If I asked you how high the Statue of Liberty is in meters and how high the Empire State Building is, the guesses would be all over the place. If I asked you instead how high the Empire State Building is relative to the Statue of Liberty, the answers would be much better (about 4.5 times; I looked it up, though). So using the anchoring effect to establish a baseline can be very effective, and is done as Ben has already described.
In the last few years I have also grown fonder of the #noestimates idea. Some teams put so much effort into estimation that it really cuts into their time, with hardly any results. I have seen teams produce sophisticated estimates of 573.725 days with a confidence level of 67%, which suggests a precision that simply isn’t there. The idea is to not estimate at all. Some teams go for a simple classification into small, medium, and big items and look back in time at how long it actually took them to deliver items of those sizes. They then give predictions along the lines of “for a small item we needed 2 days in 75% of the cases; 90% of the time we got it done in 3 days; if you want 95%, give us a week”. I have come across this especially in Kanban environments, and I rather like it.
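To make that concrete, here is a minimal sketch of the percentile-based forecasting idea, assuming you have logged the lead time (in days) of past items per size class. The size names, the numbers, and the `forecast` helper are all made up for illustration:

```python
import math

# Hypothetical historical lead times in days, grouped by size class.
# In practice these would come from your ticket system / Kanban board.
lead_times = {
    "small": [1, 1, 2, 2, 2, 3, 3, 4, 2, 1],
    "medium": [3, 4, 5, 5, 6, 8, 4, 5, 7, 6],
}

def forecast(size: str, confidence: float) -> int:
    """Return the number of days within which `confidence` of past
    items of this size were delivered (an empirical percentile)."""
    data = sorted(lead_times[size])
    # Round the percentile index up to stay on the conservative side.
    k = math.ceil(confidence * len(data)) - 1
    return data[k]

for conf in (0.75, 0.90, 0.95):
    print(f"small item, {conf:.0%} confidence: {forecast('small', conf)} days")
```

The point is that no one estimates individual items; the prediction falls out of the history, and the confidence level is explicit instead of implied by false precision.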
Hope that helps somehow and is not just morning philosophy.