Embedded Software/Hardware Testers United!

OMG, that does sound very painful!
How did you “book” the hardware? Email? Spreadsheet? Software?

Yes, that seems to be the way to do it. I just wish I didn’t have to make a business case to get a simulator made… Isn’t it common sense?

We created an Outlook area and booked the hardware like you would a meeting or appointment.

A possible way forward might be to

  • port your application to a host machine
  • decide on a small piece of functionality for modelling
  • implement and integrate the model
  • measure the costs and quantify the savings of not testing on the real hardware
  • present the figures to your management
    and hopefully you get some more time and money to extend your model step by step?
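The steps above can be sketched in miniature. This is a hypothetical example, not anyone’s actual project: the names (`TemperatureSensor`, `SimulatedSensor`, `overheat_alarm`) are invented for illustration. The idea is that once the application codes against an interface rather than the real driver, a host-side model can stand in for the hardware:

```python
# Hypothetical sketch of a host-side hardware model.
# All class and function names here are illustrative assumptions.

class TemperatureSensor:
    """Interface the application already codes against."""
    def read_celsius(self) -> float:
        raise NotImplementedError


class SimulatedSensor(TemperatureSensor):
    """Host-side model: replays scripted readings instead of touching real I/O."""
    def __init__(self, readings):
        self._readings = iter(readings)

    def read_celsius(self) -> float:
        return next(self._readings)


def overheat_alarm(sensor: TemperatureSensor, limit: float = 85.0) -> bool:
    """A small piece of application logic we can now test on any desktop."""
    return sensor.read_celsius() > limit


# Runs on the host -- no bench booking required.
sim = SimulatedSensor([24.5, 90.1])
print(overheat_alarm(sim))  # False
print(overheat_alarm(sim))  # True
```

Even a model this small lets you count hours not spent queuing for a bench, which is exactly the figure management tends to respond to.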

That’s a brilliant work-around that I’m sorry I didn’t think of myself.

Yes, we have worked that way, but I’m sorry that I didn’t think of it :blush:

So the story goes like this in the world of hardware and testability.

Some hardware guy searches out the cheapest chipset for the microprocessor that they can find. Usually it’s something from TI or Atmel with kilobytes of memory. The software guys then request something with a bit more memory, but certainly not enough, and arguments ensue over the $1 difference in price for the part. Since we’re talking about a huge price-margin difference, the finance people get in on the game, and between the three groups, they fish out the option best suited to upset all of them.

In a good team, THIS is the point (if not sooner) that the test team should be brought in. In other words, the one question that nobody asked is, “how do we test this?”

The best solution that I have seen is that the company spent the extra money on the processors to have a Linux-based system which only ran the simple things. It really wasn’t much more expensive, and with a limited number of clients (about 2000 units were made), the margin on the hardware wasn’t that big. The money was then made in the software service contracts. Since the system was entirely based on a common operating system, we could easily do unit tests, communication tests, interfacing tests, etc… with NO hardware. This was also brilliant because it enabled us to do performance testing as well… which I had never been able to do up to that point in my career… only estimates.
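To make the “communication tests with NO hardware” point concrete, here is a minimal, hypothetical sketch: instead of a serial link to a board, the “device” is a loopback socket on the same desktop OS the product runs on. The protocol (`STATUS` / `ACK:` prefix) is invented for illustration:

```python
# Hedged sketch: testing device "communication" on a plain desktop OS
# via a loopback socket instead of a cable to real hardware.
# The STATUS/ACK protocol below is an invented stand-in.
import socket
import threading


def echo_server(sock):
    """Stand-in for the device firmware: reply with an ACK-prefixed echo."""
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(64)
        conn.sendall(b"ACK:" + data)


def query_device(host, port, command: bytes) -> bytes:
    """The application-side code under test."""
    with socket.create_connection((host, port)) as s:
        s.sendall(command)
        return s.recv(64)


# Bind to an ephemeral loopback port and run the fake device in a thread.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

reply = query_device("127.0.0.1", port, b"STATUS")
print(reply)  # b'ACK:STATUS'
```

Because everything is ordinary sockets on a common OS, the same test runs in CI on every commit, long before a board is available.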

To put it shortly, the “luck” was planned in for that system, and now I advocate that the same luck is planned in even before software is considered on a new system.


My “facepalm” moment for today:

Dev: “I thought we’d do unlimited power cycles and leave it running overnight.”
Tester: “Power-cycling a below-production-quality product, overnight, no supervision, in an open space full of other electronics… yeah, sure… I saw smoke coming out of one of our below-production-quality boards when I powered it on a bench… but sure, let’s do thousands of unmonitored power cycles of bad-quality hardware.”
So I now declare myself in possession of a new hat called: “don’t set the building on fire”

But as testers we’re only supposed to gather information about the product under test, my test report should look like this, right?
Expected: software continues to function correctly.
Observed: building reduced to ashes.
Outcome: pass with notes.

Sometimes we have to laugh, because there are no more tears left in our eyes…


I LOVE that hat. I wear mine a lot.

Please note that I’m running exactly those tests right now, and I’m not at work. So um… may I have my hat back?