Watch that supplier! :)

Hi,
I am embarking on a journey of trying to reduce the number of defects we have to send back to our suppliers.
Now I know this is a double-edged sword: not only do we have to work more closely with the supplier to monitor their testing and agree what reports they should produce, but some of the onus is also on us, as the customer, to feed back better information on the defects we find and to provide good evidence, to help the suppliers do their job in rectifying any issues.

Has anyone already trodden this path? What is deemed an ‘acceptable’ percentage of defects to be delivered from a supplier? I hear cries of ‘NONE’! How have others worked with and monitored supplier testing, to make sure the quality and coverage are there?

As you can tell, or maybe have guessed, I am going through defect meltdown in phase 1 of a transformation and am eager to improve things for phase 2.

Any anecdotes, suggestions, or lessons learnt and implemented would be most welcome.

It could be, of course, that the answer is blowing in the wind.

John

Firstly, a few words on metrics. A number, or ratio, or percentage of problems isn’t a good marker of quality. It could be that 1000 typos do not outweigh one login issue (I’m talking about software, but the principle applies to anything). Also, an issue for one person might be a non-issue for another - it depends who matters. Let’s say we pick someone who never wanted the software and said it was a bad idea from the very start - this person will pick apart anything they think is or might be a problem and report it as more important than it is. I realise that this doesn’t give you a nice, neat KPI, but it does mean you don’t have a fake KPI. Rarely is there another kind!

You also don’t have a number or percentage of defects; you have a number of defects found, which depends on where and how hard and how efficiently people are looking… and whether they would know a problem when they saw one. You might be finding problems because your supplier has no idea that they are problems. Look at your reports and see if the problems you are finding are to do with being fit for purpose.

Next, the onus is on your supplier to do what you ask of them, but, yes, you’re right, you need to work closely with them to ensure they are building what’s right for you. One way to see the problem (I’d expect a software house to do this, but they’re not all created equal) is to identify what kinds of problems you’re finding in the software. Are they mostly business critical? Are they mostly usability? Are they crashes? Are they platform related? This will give you a better view of the situation. If you’re swimming in bug reports on typographical errors, maybe they don’t really matter (maybe they do - context is everything). If the system’s regularly crashing, then you may need to have words with the supplier about why they’re supplying something that crashes all the time. If you’re paying for something to solve a problem, and it doesn’t solve the problem, then the thing you’re paying for doesn’t work. If it solves the problem but it’s a bit clunky and dull… maybe that’s okay. You and your supplier need to communicate your expectations and what can be achieved.
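If it helps to make that breakdown visible, even a tiny script over an export of your bug reports can show where the problems cluster. Here is a minimal sketch, assuming a hypothetical CSV export with ‘category’ and ‘severity’ columns - the file name and column names are illustrative, not from any particular tracker:

```python
# A minimal sketch, not tied to any particular tool: it assumes a hypothetical
# "defects.csv" export from your bug tracker with "category" and "severity"
# columns. Adjust the file name and column names to match whatever you export.
import csv
from collections import Counter


def summarise_defects(path: str) -> None:
    """Tally defects by category and severity so the trend is visible at a glance."""
    categories = Counter()
    severities = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            categories[(row.get("category") or "unknown").strip().lower()] += 1
            severities[(row.get("severity") or "unknown").strip().lower()] += 1

    print("Defects by category:")
    for name, count in categories.most_common():
        print(f"  {name}: {count}")
    print("Defects by severity:")
    for name, count in severities.most_common():
        print(f"  {name}: {count}")


if __name__ == "__main__":
    summarise_defects("defects.csv")
```

Whatever the counts come out as, they’re a starting point for the conversation with your supplier, not the KPI itself.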

Happy to answer more questions or add more if you have more details about the problem :).


Hi Chris,
Wow, thanks for the detailed reply. As soon as I wrote %, I was going to backspace, as I knew it was not a measure of quality. But I let it ride.
I guess the crux of the new problems I find myself facing is that my new company, the customer, takes the view of ‘accept the code in, then see how good it is’. I think that we need to get a better feel for what the suppliers are testing, how they are testing, etc. Also, what do people in a similar position do regarding monitoring the supplier while they are testing?
I may be wrong, but I keep getting this niggle at the back of my head that we are paying the supplier for delivery, but at the same time testing their product for them.
I was not around when the statement of work was written, so there is no mention of ‘supplier testing’ or of how we as the customer get a view of this.

The defects are by and large code related; there is a small percentage of configuration issues. However, we get caught up in the usual ‘it’s the requirements’ discussions between us. It is my first time dealing with external suppliers, and while I am trying to forge closer links in the testing area, I am wary of letting the tail wag the dog. I am hoping others who have cracked this problem, while still maintaining a relationship, can advise on how they did it.


I may be wrong, but I keep getting this niggle at the back of my head that we are paying the supplier for delivery, but at the same time testing their product for them.

Welcome to the bleeding edge of the New World Order. For years mankind has written about fascist dystopias and inescapable hells born of accidents, destiny or a seizing of power. Nobody predicted that we’d build one ourselves, our faces etched with a twisted, plastic grin, smiling at the construction of our own downfall lest we upset someone with our frowns. But I hear it’s very popular, so what do I know?

I haven’t solved this problem myself, and I may be missing the full picture, but if someone sells me something that doesn’t work I’d take it back for a repair/replacement. Enough of those and I’d feel it was time for a refund. By that point I’d be angry. Just saying. I’m sure there’s a diplomatic way to ask “why are we paying you for something that doesn’t work?”. Maybe it’s part of the discussion “by what point do you predict this will be usable?”, or “are there normally this many problems this far into the project?”. Maybe it’s part of the discussion “we’re worried about the number of problems in the software, and we’d like to examine your test strategy/processes/procedures”. If you’re paying then you’re in the position of power, and these questions sound totally reasonable and understandable to me. Software is an investment. Asking risk-based questions of your supplier is protecting that investment, and clearing up communication problems.

Of course I might be wrong - maybe this is your first run-in with a heavily agile project that is frequently releasing minimum-viable software to you, and the software is basically in alpha. Maybe it’s a way to get great feedback to build the perfect software. Or a way to say they’re doing that while giving you bad code. Either way, yes, you’re testing the software for them, which is sometimes not a Bad Thing - the question is: are they doing their fair share? It might be that they are not communicating their expectations to you very well - maybe the software’s not supposed to work right now, and it’s just to give you a feel of what it might look like, in case you say “this is not at all what we’re looking for”, as opposed to “this table doesn’t sort properly when I click on the little arrow”. If they’re worried, they wouldn’t tell you. You’re worried, and you may not have told them. But you’re paying, so I’d go knock on the door. Bring someone tall.


The balancing act that I foresee is them coming back with the “hmmmm, documentation… it is agile, you know”, counterbalanced with us needing some measure of what they are testing and have tested: sight of what they are testing and the evidence to back that up. With deep regret, I find myself in the land of poorly written contracts, weak requirements and a never-ending cycle of code deliveries. Well, that was phase 1. Phase 2 kicks off in February, hence my push to get things into better shape (as well as picking the brains of others who have been there and done that).
The code deliveries are meant to be (were meant to be) code releases, as opposed to a monthly code delivery, but that is a discussion for another day.

I do realise this is a double-edged sword, in that the onus is also on us as the customer to deliver requirements that are fit for purpose, so that the supplier can supply software that is fit for purpose. To that end I have instigated show-and-tells before they move into system testing, so at least we can try to catch any misunderstanding around what the requirement was.

What height are you? :slight_smile:

Well, what you want is communication. Documentation is one way, a frank conversation is another - preferably with audio and preferably face-to-face. Maybe you don’t need evidence; maybe you just need someone to clear up problems. I’d find out if I needed to worry, then, if I did, I could start citing evidence. When you ask for evidence you’re saying “I don’t trust you, prove you’re not as bad as I suspect” - but you can build that trust with conversations and honesty. Unless they act suspiciously, and then asking for promises and evidence to continue receiving your money is, I think, pretty reasonable.

If you want evidence and they don’t want to give documentation then ask them what they can do to make you more comfortable. You want reassurance and you’re the client, and you can go elsewhere armed with experience. That’s how I see it, anyway.


That is sound advice, ty


Going to throw in a “seconded” here on the communication. If you are receiving software with bad defects (errors in money calculations, errors due to mishandled data, crashes, etc.), then they are not spending resources on testing their stuff. In that case you want a weekly conference call (or video, or in-person) meeting to say: hey, here are the outstanding problems, where are we at?

Likewise, if they feel it’s on you - that you aren’t providing good requirements or that your bug reports don’t have the info they need - then they should assign a business analyst or test manager to check in with you weekly or every other week to say: hey, you told us this, we need to clarify.

Developing a relationship with SOME representative there will help you get across what you need and earn you a champion in their camp. (Though you could get someone bad, unluckily. It happens.)

Best of Luck.


Thanks Tracy!

So glad I asked the question now. Such useful input. I think the weekly meetings are a great idea, as is their reviewing our requirements. From looking at docs from the first phase of the project, there seems to be a lot of ‘we thought you meant this…’. You are right, the time for those discussions is before the testing starts!


Backing up what’s already been written here. Working with a rather uncommunicative remote team, we first of all set up a weekly conference call to triage bugs and address other issues; then, when the product had been deployed but still had problems (not all down to the remote dev team, it has to be admitted), we ended up negotiating that their product manager would travel to our office and work from there once a week, meaning he was on hand to troubleshoot/advise. He wasn’t necessarily working on our product 100% of the time he was in our office (though in practice he often was, at least in the early stages of that phase), but just having him on-site improved communication and delivery.


When relying on a remote/separate team to deliver a product, it can be very hard to separate bugs from miscommunication; however, if you work on the latter, the former will inevitably become less common over time!

Remember also that being one step removed from the user will often result in important context being missed - what makes sense to the supplier may be entirely useless to the client, and the client’s description of a feature can bear little relation to how it is interpreted.

Metrics can be useful and communication is essential. But I’d suggest thinking about what is needed for your receiving organization to make progress with what the supplier delivers. For example, one criterion for a successful integration can be: is the delivery good enough to enter system test, or some part of system test? The answer supports the decision to assign resources to the next phase - which, in your case, is your organization’s work. This should be a common understanding with your supplier.