Andrew, the best I can suggest is to talk to the developers. I am by trade a programmer; I have been testing for over ten years now, but my bias is towards creating tools that help me do the checks. With my tester hat on, I don’t look at product code much, but an understanding of code helps you with some of the easier automation, and it gives you a better idea of how the code can be changed to make test automation easier. Automating in general is simple to start doing, but much harder to do to any measure of completeness without a lot of developer interaction.
So what am I saying? Let’s unpack this more practically, which is hard. Every single application I have tested has had different automation mechanics used by the testers. Until now I have always worked with native apps; lately I’m moving into web apps, and the techniques vary widely, but some principles are common. I’m keen to talk about your email test, Andrew, but I want to start with some guidance.
- The things I have tried to automate and failed at have always involved a false assumption that is only obvious to someone who has used the application for a long, long time. And by long, long time, I don’t mean hours; I mean using the application a year ago when the Windows printer drivers changed and we had to make a hack to stop printing coming out upside down. Just because the print spooler did not error did not mean your document printed out in a way the customer could actually be delighted with. My latest complete screw-up was an assumption that graphics drivers under embedded Windows/Redstone would keep returning error codes in the same way going forwards; sometimes you don’t get an error at the point you expect one. You only know you have no graphics bugs when the customer can actually see the pretty picture. Creating a test to check that pages come out of a printer in portrait, or that pixels are correct on a screen, means building a piece of hardware to actually check; verifying these things purely in software is effectively impossible. I think there is a good video where James Bach warns against this. Just because you find a way the OS behaves that hints at your print job or graphics being wrong, that is often not a good test.
Don’t over-engineer or over-think. I do go to the extreme myself sometimes, because some things really are impossible to test well without an elaborate test jig. Cars get tested in wind tunnels with a rolling road, and sometimes there is a good reason for building a rolling road, but most of the time you don’t need a wind tunnel: 99% of the bugs can be found using a computer model of the car. It will require you to do more maths homework, but a simulation is going to tell you a lot. I don’t know cars, but a good friend who does loves to tell the story of how they took two Land Rovers up to the highest place in England they could find, only to discover a problem in the manifold pressure sensor input code that lets the engine know how much air it needs (cars have to modify their fuel/air mix at altitude, you see, because there is less oxygen to combust with). The code uses a lookup table of sorts, which was not sensitive enough to know that at altitude the engine needs much more air. It’s more complicated than that, because cars actually detect in code what grade of fuel you put into the tank by using sensors that check how much air the engine uses. But suffice to say, sometimes a field trip where a farmer has to come with a tractor and pull two brand-new Land Rovers back down off a small mountain is a good way to test. So try to reserve some discoveries for manual testing sessions. Automate the others.
Talk about the test success criteria. Quite often you want to check that an email gets sent, and simple things like sending an email to yourself and checking that it comes back are a good way to do that. But decide whether it’s actually good enough to check that the email just reaches the outbox without actually going out. Sometimes the simpler you can make the success criteria, the easier it gets to create things like a mock mailserver. That has the advantage of being a test that will still run even if your mailservers are offline, but the downside that it exercises no server authentication. I find that the most robust automated checks are ones that are clear about the fact that they are a simulation of the universe. So long as they don’t yield false positives too often, a hacked environment whose limitations you understand well can save you a lot of time.
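To make that concrete, here is a minimal sketch of the mock-mailserver idea in Python, using the aiosmtpd library to stand in for the real mailserver. The host, port, addresses and message content are made up for illustration, and in a real test your application would be pointed at the mock server’s address.

```python
# Hedged sketch of a mock-mailserver check, assuming Python and the aiosmtpd
# library; host, port, addresses and message content are illustrative only.
import smtplib
from aiosmtpd.controller import Controller

class CapturingHandler:
    """Collects every message the application under test tries to send."""
    def __init__(self):
        self.envelopes = []

    async def handle_DATA(self, server, session, envelope):
        self.envelopes.append(envelope)
        return "250 Message accepted for delivery"

handler = CapturingHandler()
controller = Controller(handler, hostname="127.0.0.1", port=8025)
controller.start()
try:
    # In a real test the application under test would be configured to use
    # 127.0.0.1:8025; here smtplib stands in for the application sending mail.
    with smtplib.SMTP("127.0.0.1", 8025) as client:
        client.sendmail("app@example.test", ["qa@example.test"],
                        "Subject: password reset\r\n\r\nHello from the test")
    # Success criterion: the message reached the (mock) server at all.
    assert len(handler.envelopes) == 1
    assert handler.envelopes[0].rcpt_tos == ["qa@example.test"]
finally:
    controller.stop()
```

The point is not the library; it is that the check states its own limits. It proves the message left the application, not that a real mailserver with real authentication would have accepted it.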
Learn about APIs. This is probably the beginning of an automation journey for many. Learn one scripting language; most any scripting language can call APIs, even ones created for a different language. You will need help from your dev team, and this step will also level you up so you can speak to the developers in their language more often. You don’t have to learn COBOL, C/C++, C#, Forth, Fortran, Objective Caml, Smalltalk or the many others. Bash scripts, DOS batch, Python, Java and a few other such languages are great for creating automated tests. Get advice from your dev team on the choice of scripting language, because they can be your free in-house trainers. I have worked on projects where, for example, PowerShell scripting had hooks into every single part of the application. We could manipulate all of the internal data, with the exception of licensing and authentication, using a standalone script! So writing tests as PowerShell scripts was dead easy. But be aware: you will have to master whichever scripting language you do choose.
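As one small illustration of a scripting language calling an API written for another language, here is a hedged Python sketch using ctypes to call into a C shared library. The library name and function are hypothetical stand-ins for whatever your developers actually expose.

```python
# Hedged sketch: a Python test script calling a C API through ctypes.
# "libproduct.so" and "product_get_version" are hypothetical names; ask the
# dev team what the real library and entry points are called.
import ctypes

lib = ctypes.CDLL("./libproduct.so")                # load the product's shared library
lib.product_get_version.restype = ctypes.c_char_p  # declare the C return type

version = lib.product_get_version()
print("product reports version:", version.decode())
assert version.startswith(b"2."), f"unexpected version: {version!r}"
```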
Automatability using callbacks. Most scripting languages are not good at APIs that raise events or call hooks. For these cases you want to get the developers to change all callbacks or events so that they also write the callback data into a log. A script helper function can then be written to scan or poll that log looking for the event or trace specific to the hook or callback you care about. Code to do this for you on Linux and on Windows is all over the web. You want to think about security when doing this, but the benefit of a log or trace in the right place in the code can be huge. If the event text contains data that tells the tester the step you are tracking in a workflow gave the intended outcome as a result code, or if it includes, say, a “pending” account balance, you can then check that balance against your test transactions; a sketch of that kind of log-polling helper follows below.
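Here is a rough Python sketch of that log-polling helper, assuming the developers emit a trace line containing a pending balance. The log path, the line format and the expected value are all hypothetical and would come from your own team.

```python
# Hedged sketch of a log-polling helper; the log path, the PENDING_BALANCE line
# format and the expected value are hypothetical placeholders.
import pathlib
import re
import time

LOG = pathlib.Path("/var/log/product/events.log")
PATTERN = re.compile(r"PENDING_BALANCE=(?P<balance>-?\d+\.\d{2})")

def wait_for_pending_balance(timeout=30.0, poll=0.5):
    """Poll the event log until the callback trace appears, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        for line in LOG.read_text(errors="replace").splitlines():
            match = PATTERN.search(line)
            if match:
                return float(match.group("balance"))
        time.sleep(poll)
    raise TimeoutError("callback trace never appeared in the log")

# Check the balance the callback reported against the test transactions we ran.
assert wait_for_pending_balance() == 42.50
```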
I want to leave you with this quote from Brenan Keller: "A QA engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 99999999999 beers. Orders a lizard. Orders -1 beers. Orders a ueicbksjdhd.
First real customer walks in and asks where the bathroom is. The bar bursts into flames, killing everyone." https://twitter.com/brenankeller/status/1068615953989087232?lang=en