I’ll put down suggestions and general tips I know from experience (mainly from mistakes, hah). That slideshow on Lisa Crispin’s website covers many of these things really well, although from what I’m reading here you’re a bit more on your own than in a position to bring a team into this effort, which is where I’ve usually been as well (luckily my current job isn’t so isolated).
First - what are your goals with automation? For me it’s usually some combination of “avoid boring repetition so I can do interesting things” and “speed up regression cycles”. Currently my biggest benefit is being able to do very fast and thorough regression tests when we change something. I’m assuming your goals here are similar, but if the goals are “I want to learn new and useful skills”, then what I’m writing here won’t be as useful.
Consider that automation has different phases, and where and how much effort you put in needs to take these phases into account.
- Spike / viability investigation (try a couple different approaches) to see whether the test or tests can be reasonably automated
- Write your automation (this is really the classic Exploratory “consider / try / interpret results / loop” cycle). I often find new things about the thing I’m testing at this point, seriously!
- Run automation and then interpret the results.
- Maintain it
Principles to remember
- Your goal is to test things, not develop test artifacts. Always remember this! I try to keep a little voice in my head that makes me justify any larger automation effort.
- Just enough. You want to work in the smallest useful bits you can and prove your solution all the way through the first three phases. This is akin to the “tracer bullet” / “single slice” approach to testing things too! It’s just another example of the fast feedback value of XP / Agile. I had to learn this lesson through many wasted hours building lots of crap I didn’t need.
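To make the “tracer bullet” idea concrete, here’s a minimal sketch of a first slice: one throwaway check that proves a path end to end before any framework gets built. Everything in it (`load_fixture`, `run_export`, the CSV shape) is a hypothetical stand-in for your real system; the shape of setup → exercise → verify is the point.

```python
# A "tracer bullet" first slice: prove one path end to end with a single
# check before investing in a framework. load_fixture and run_export are
# hypothetical stand-ins for your real setup and feature under test.

def load_fixture():
    """Stand-in for whatever setup your real test needs."""
    return [{"id": 1, "name": "widget"}, {"id": 2, "name": "gadget"}]


def run_export(rows):
    """Stand-in for the feature under test, e.g. a CSV export."""
    header = "id,name"
    lines = [f"{row['id']},{row['name']}" for row in rows]
    return "\n".join([header] + lines)


def check_export_round_trip():
    """One slice: setup, exercise, verify. Grow the suite from here."""
    rows = load_fixture()
    output = run_export(rows)
    lines = output.splitlines()
    assert lines[0] == "id,name", "header missing or wrong"
    assert len(lines) == len(rows) + 1, "row count mismatch"
    return "PASS"


print(check_export_round_trip())
```

Once a slice like this passes (and has proven it’s worth keeping), you can fold it into a proper test suite and clean it up.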
What to automate?
- Frequently exercised tests (e.g., regression, smoke tests)
- Easy / safe targets (non-UI intensive steps, things without a lot of setup, and tests without dependencies on other tests)
- Things you cannot realistically do manually, or tests where the same code runs against different environments or configured values (e.g., running a test case against multiple browsers/environments)
- Semi-automated is FINE if it helps overall. Kicking stuff off manually and reviewing the results manually is totally reasonable, especially if you can save a lot of time by stopping automation at this point! There’s a fallacy out there that automation isn’t automated if it isn’t 99% hands-off (yeah right).
- Use a language or framework that is used by a developer on your team whose work you respect (or at least a developer at your company whose time you can borrow). This is absolutely critical! You can get regular advice on doing things “right” in the language (so it’s maybe just one or two people’s opinions, but it’s still nice to get a developer’s take on things). It’s not usually hard to find opinionated developers who like to explain why they do things a certain way.
- Get all of your work code reviewed (we use GitHub with easy peasy pull requests) and let people know you WANT feedback not a rubber stamp.
- Once you’re through the spike / investigation phase, never leave your code ugly. Ever. Think of future you coming back to this stuff and being thoroughly confused. For a long time I made the mistake of thinking “Well, I’m the only one dealing with this automation, so I can just leave it however I want”, and it only ends up as technical debt.
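The “same test code against different environments / configured values” point above can be sketched with the standard library’s `unittest` subtests (no extra framework needed). The `ENVIRONMENTS` table and `build_base_url` helper here are hypothetical stand-ins; the pattern is one test body, a loop over configurations, and a per-environment failure report instead of stopping at the first miss.

```python
# A sketch of one test body running against several configured
# environments via unittest's subTest. ENVIRONMENTS and build_base_url
# are hypothetical stand-ins for your product's real configuration.
import unittest

# Hypothetical per-environment configuration.
ENVIRONMENTS = {
    "dev": {"host": "dev.example.test", "port": 8080, "tls": False},
    "staging": {"host": "staging.example.test", "port": 443, "tls": True},
    "prod": {"host": "www.example.test", "port": 443, "tls": True},
}


def build_base_url(config):
    """Hypothetical code under test: turn a config into a base URL."""
    scheme = "https" if config["tls"] else "http"
    default_port = 443 if config["tls"] else 80
    if config["port"] == default_port:
        return f"{scheme}://{config['host']}"
    return f"{scheme}://{config['host']}:{config['port']}"


class BaseUrlTest(unittest.TestCase):
    def test_every_environment_builds_a_valid_url(self):
        for name, config in ENVIRONMENTS.items():
            # subTest reports each environment separately, so one bad
            # config doesn't hide failures in the others.
            with self.subTest(environment=name):
                url = build_base_url(config)
                self.assertTrue(url.startswith(("http://", "https://")))
                self.assertIn(config["host"], url)
```

The same shape works for the multiple-browsers case: loop over browser names instead of environment configs, with each iteration wrapped in its own `subTest`.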
Sorry I don’t have more specifics. If there’s something more specific that would be useful, let me know as I’d love to be able to help.
PS I thought of one more thing. I think you’re right about avoiding the API for now, BUT if you’re still in the same position while it’s being redeveloped, you can get in on testing it while it’s being built. API testing usually makes it much easier to create small, independent test cases, AND it’s not subject to all of the horribleness that is UI testing. Also, pitching in on unit tests can be helpful and a good way to pick up some knowledge, if your environment supports it.
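For a taste of how small and independent an API test can be, here’s a stdlib-only sketch. The `/health` endpoint and its JSON shape are hypothetical, and a tiny stub server stands in for the real API so the example runs anywhere; in practice you’d point the client at your test environment instead.

```python
# A minimal sketch of an independent API test using only the standard
# library. The /health endpoint, its JSON body, and the stub server are
# hypothetical stand-ins for a real API under test.
import json
import threading
import unittest
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer


class StubApiHandler(BaseHTTPRequestHandler):
    """Stands in for the real API; drop this when testing a live service."""

    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass


class HealthEndpointTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Port 0 asks the OS for any free port, so parallel runs don't collide.
        cls.server = ThreadingHTTPServer(("127.0.0.1", 0), StubApiHandler)
        threading.Thread(target=cls.server.serve_forever, daemon=True).start()

    @classmethod
    def tearDownClass(cls):
        cls.server.shutdown()

    def test_health_returns_ok(self):
        conn = HTTPConnection("127.0.0.1", self.server.server_address[1])
        conn.request("GET", "/health")
        response = conn.getresponse()
        self.assertEqual(response.status, 200)
        self.assertEqual(json.loads(response.read())["status"], "ok")
```

Run it with your usual `unittest` runner. Notice how the whole test (setup, request, assertions, teardown) fits on one screen with no browser, no page objects, and no waiting on the UI.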