I was wondering if anyone has any resources they have found particularly useful for an agile automation strategy? When I Google, so many results come back it can be difficult to find the useful ones!
On Lisa Crispin's website I found her presentation "Making Test Automation Work on Agile Projects" from Agile Testing Days 2010 to be really helpful. I'm working through that at the moment, but if you've found other links helpful I'd love to check them out.
Is this for UI or non-UI automation? Is it for an existing product for which you already have good test coverage, and you're looking to replace some with automation? And finally, how often do you release? These things for me determine which principles to follow and which traps to avoid, speaking generally that is.
I'd love one for both UI and non-UI automation. This is an existing product within the company but not available to customers yet.
We have unit tests in place but no API layer tests at the moment. Our API is basically getting ripped up and starting from scratch, so automating there makes no sense at the moment (I think). It's something I know we need to do in the very near future though.
I have some UI automation in place; it was basically to get me practising, and it isn't part of our deployment (I manually run it). I've retired the suite for the time being because there are pretty major performance issues with the product: some pages take up to 60 seconds to load, and the minimum page load we have at the moment is 10 seconds. I think automating at this stage is unwise, but I'd like to start planning for when we address these problems.
At the moment we have a lot of manual effort; I've only recently gotten another person to help with testing, so there is a lot slipping through the cracks. I know this and want to fix it if I can.
We release internally every two weeks. When we have customers using the system externally we'll be about the same, or once per month, I think.
I'll put down suggestions and general tips I know from experience (mainly from mistakes, hah). That slideshow on Lisa Crispin's website covers many of these things really well, although from what I'm reading here you're a bit more on your own than in a position of being able to bring a team into this effort, which is where I've usually been as well (luckily my current job isn't so isolated).
First - what are your goals with automation? For me it's usually some combination of "avoid boring repetition so I can do interesting things" and "speed up regression cycles". Currently my biggest benefit is being able to do very fast and thorough regression tests when we change something. I'm assuming your goals here are similar, but if the goals are "I want to learn new and useful skills", then what I'm writing here won't be as useful.
Consider that automation has different phases, and where and how much effort you put in needs to take these phases into account.
Spike / viability investigation (try a couple different approaches) to see whether the test or tests can be reasonably automated
Write your automation (this is really the classic Exploratory "consider / try / interpret results / loop" cycle). I often find new things about the thing I'm testing at this point, seriously!
Run automation and then interpret the results.
Maintain it
Principles to remember
Your goal is to test things, not develop test artifacts. Always remember this! I try to keep a little voice in my head that makes me justify any larger automation efforts.
Just enough. You want to work in the smallest useful bits you can and prove your solution all the way through the first three steps. This is akin to the "tracer bullet" / "single slice" approach to testing things too! This is just another example of the fast feedback value of XP / Agile. But I have had to learn this lesson through many wasted hours building lots of crap I didn't need.
Easy / safe targets (non-UI intensive steps, things without a lot of setup, and tests without dependencies on other tests)
Things you cannot realistically do manually, OR tests that involve the same test code running against different environments or with different configured values (e.g., running a test case against multiple browsers/environments)
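To make that last point concrete, here's a minimal sketch of the "same test code, different configurations" idea. The environment names and URLs are hypothetical placeholders; a real suite would drive a browser or HTTP client instead of just building URLs.

```python
# Hypothetical environments the same test logic runs against.
ENVIRONMENTS = {
    "staging": "https://staging.example.com",
    "internal": "https://internal.example.com",
}

def check_login_page(base_url):
    # In a real test this would load the page and assert on its content;
    # here we just build the target URL to show the parameterization shape.
    return f"{base_url}/login"

def run_against_all_environments(check):
    # One test function, executed once per configured environment.
    results = {}
    for name, base_url in ENVIRONMENTS.items():
        results[name] = check(base_url)
    return results

results = run_against_all_environments(check_login_page)
print(results)
```

Test frameworks usually have first-class support for this pattern (e.g. parameterized tests), so you rarely need to hand-roll the loop, but the shape is the same: the test logic is written once and the configuration varies.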
Specific Recommendations
Semi-automated is FINE if it helps overall. Kicking stuff off manually and reviewing the results manually is totally reasonable, especially if you can save a lot of time by stopping automation at this point! There's a fallacy out there that automation isn't automated if it isn't 99% hands-off (yeah right).
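As a sketch of what "semi-automated" can look like: a script you kick off by hand that runs simple checks and leaves a human-readable report for you to review. The check names and outcomes here are made up for illustration (one fails on purpose to show the report format).

```python
# Hypothetical checks; real ones would assert on actual product behaviour.
def check_homepage_title():
    return True  # pretend this passed

def check_search_returns_results():
    return False  # a deliberate failure, so the report flags it for review

CHECKS = [check_homepage_title, check_search_returns_results]

def run_and_report(checks):
    # Run every check and produce a plain-text report a human reads afterwards.
    lines = []
    for check in checks:
        outcome = "PASS" if check() else "REVIEW"
        lines.append(f"{outcome}  {check.__name__}")
    return "\n".join(lines)

print(run_and_report(CHECKS))
```

The automation does the boring execution; the judgment call on anything marked REVIEW stays with a person, which is often the right trade-off early on.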
Use a language or framework that is used by a developer on your team whose work you respect (or at least at your company whose time you can borrow). This is absolutely critical! You can get regular advice on doing things "right" in the language (so it's maybe just one or two people's opinions, but it's still nice to get a developer's take on things). It's not usually hard to find opinionated developers who like to explain why they do things a certain way.
Get all of your work code reviewed (we use GitHub with easy peasy pull requests) and let people know you WANT feedback not a rubber stamp.
Once you're through the spike / investigation step, never leave your code ugly. Ever. Think of future you coming back to this stuff and being thoroughly confused. I made the mistake for a long time of thinking "Well, I'm the only one dealing with this automation so I can just leave it however I want", and it only ends up as technical debt.
Sorry I don't have more specifics. If there's something more specific that would be useful, let me know as I'd love to be able to help.
PS I thought of one more thing. I think you're right about avoiding the API for now, BUT if you are in the same position while it's being redeveloped, you can get in on testing it while it's being built, and API testing is usually much easier to create small / independent test cases for AND is not subject to all of the horribleness that is UI testing. Also, pitching in on unit tests can be helpful and a good way to pick up some knowledge if your environment supports it.
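To show what "small / independent" means for an API test, here's a minimal sketch. The endpoint behaviour is stubbed with a fake so the example is self-contained; in a real suite the response would come from an HTTP client call against your API.

```python
# Hypothetical stand-in for calling a "create user" endpoint. A real test
# would make an HTTP request; stubbing it keeps this example runnable.
def fake_create_user(payload):
    # Pretend the API echoes the user back with an assigned id.
    return {"status": 201, "body": {"id": 1, **payload}}

def test_create_user_returns_id():
    # Independent test: it sets up its own data, makes one call,
    # and asserts only on that call's response. No other test required.
    response = fake_create_user({"name": "alice"})
    assert response["status"] == 201
    assert response["body"]["name"] == "alice"
    assert "id" in response["body"]

test_create_user_returns_id()
```

Because each test owns its own setup and asserts on a single request/response pair, tests like this can run in any order, which is exactly the independence that's so hard to get in UI suites.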
I, personally, like anything by Lisa Crispin and Janet Gregory. The presentation you cite is very good. I think quality and UI test automation is part of software development. It sits at the top of the test pyramid, and by itself it doesn't make sense.
It is like building a house. There is a strategy for how to build a house but there is also how to build the roof of the house. No matter how good your roof strategy, if the foundation is crooked, the roof will not save things.
One thing Lisa Crispin's presentation talks about which might be dated is the tool set. I think the team should determine the tools. If the team wants to use Ruby test tools, because they are building a Ruby app, then telling them they should use Selenium with Java bindings is the wrong choice. You shouldn't force tools and process on an agile team. Remember the agile manifesto… Individuals and interactions over processes and tools.
Additionally, what works for one team might not work for another team. If I need X, Y and Z, then I'll pick a tool I'm comfortable with that does X, Y and Z. If you have a tool that does A to Z but how to use it isn't intuitive to me, I'm going to suck at using the "better" tool.
Most importantly, the tools and process should help you. You should not have to conform to the tools and process.
My goals with automation are basically that. I also feel like, with all the repetition, I easily forget small steps because of the boredom. As our whole regression suite at the moment is manual, it is incredibly slow!
The suite I'm retiring got some great feedback from two of our devs when they reviewed it; they both said they could read it straight off the bat, so I'm taking that as a reasonably good start. I also started in the language they develop in, because it was one I was familiar with years ago, so there was a minimal learning curve when it came to writing the code.
I'd like to push this a bit forward: automation is not only test automation. I find this deck unbelievably good. Also, he is the only guy I have ever heard of who does things like I do with my teams.