OMG, that does sound very painful!
How did you 'book' the hardware? Email? Spreadsheet? Software?
Yes, that seems to be the way to do it. I just wish I didn't have to make a business case to get a simulator made… Isn't it common sense?
We created an Outlook area and booked the hardware like you would a meeting or appointment.
A possible way forward might be to
- port your application to a host machine
- decide on a small piece of functionality for modelling
- implement and integrate the model
- measure the costs and quantify the savings of not testing on the real hardware
- present the figures to your management
and hopefully you get some more time and money to extend your model step by step?
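The steps above can be sketched in code. This is a minimal illustration (Python, with entirely hypothetical names like `TemperatureSensor` and `overheat_alarm` that are not from the thread): hide the hardware behind a small interface so the same application logic runs on a host machine against a simulated model instead of the real board.

```python
class TemperatureSensor:
    """Interface the application code depends on (hypothetical example)."""
    def read_celsius(self) -> float:
        raise NotImplementedError


class SimulatedSensor(TemperatureSensor):
    """Host-side model: replays scripted readings instead of touching hardware."""
    def __init__(self, readings):
        self._readings = iter(readings)

    def read_celsius(self) -> float:
        return next(self._readings)


def overheat_alarm(sensor: TemperatureSensor, limit: float = 85.0) -> bool:
    """Application logic under test: fires when a reading exceeds the limit."""
    return sensor.read_celsius() > limit


# On the host, the model stands in for the real board:
sim = SimulatedSensor([72.0, 90.5])
print(overheat_alarm(sim))  # first reading is below the limit -> False
print(overheat_alarm(sim))  # second reading is above the limit -> True
```

On the target, a second `TemperatureSensor` subclass would wrap the real driver; everything above the interface stays identical, which is what makes the cost/savings comparison in the list measurable.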
That's a brilliant work-around that I'm sorry I didn't think of myself.
Yes, we have worked that way, but I'm sorry that I didn't think of it.
So the story goes like this in the world of hardware and testability.
Some hardware guy searches out the cheapest chipset for the microprocessor that they can find. Usually it's something from TI or Atmel with kilobytes of memory. The software guys then request something with a bit more memory, but certainly not enough, and arguments ensue over the $1 difference in price for the item. Since that difference is huge relative to the price margin, the finance people get in on the game, and between the three groups, they fish out the option best suited to upset all of them.
In a good team, THIS is the point (if not sooner) at which the test team should be brought in. In other words, the one question that nobody asked is, "how do we test this?"
The best solution that I have seen is that the company spent the extra money on the processors to have a Linux-based system which only ran the simple things. It really wasn't much more expensive, and with a limited number of clients, the margin on the hardware wasn't that big (about 2000 units were made). The money was then made in the software service contracts. Since the system was entirely based on a common operating system, we could easily do unit tests, communication tests, interfacing tests, etc… with NO hardware. This was also brilliant because it then enabled us to do performance testing as well… which I had never been able to do up to that point in my career… only estimates.
To put it shortly, the "luck" was planned in for that system, and now I advocate that the same luck be planned in even before software is considered on a new system.
My "facepalm" moment for today:
Dev: "I thought we'd do unlimited power cycles and leave it running overnight."
Tester: "powercycling a below-production-quality product, overnight, no supervision, in an open-space full of other electronicsā¦ yeah sureā¦ I saw smoke coming out of one our ābelow-production qualityā board when I powered it on a benchā¦ but sure letās do thousands of unmonitored, powercycling of a bad-quality hardware. "
So I now declare myself in possession of a new hat called: "don't set the building on fire".
But since as testers we're only supposed to gather information about the product under test, my test report should look like this, right?
Expected: software continues to function correctly.
Observed: building reduced to ashes.
Outcome: pass with notes.
Sometimes we have to laugh, because there's no more tears in our eyes…
I LOVE that hat. I wear mine a lot.
Please note that I'm running exactly those tests right now, and I'm not at work. So um… may I have my hat back?