Lesson 4 - Share your automation scenarios and risk ratings!

I asked you to come up with a few automation scenarios for our API - were they the same as the ones I came up with and shared? Like I said, there are a lot more possibilities!
Did you come up with something fun and different?
How did you end up scoring them for risk and the other criteria?


For the GET Functional test, I thought about the id field, and recalled production problems with long integers depending on how the column is defined. There might be some testing around that…
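Something along these lines, as a minimal sketch - the endpoint, base URL and expected status codes are assumptions, not our real API:

```python
import requests

BASE_URL = "https://api.example.com"  # assumption, not the real base URL

# IDs around common integer column limits (32-bit and 64-bit signed)
BOUNDARY_IDS = [2**31 - 1, 2**31, 2**63 - 1, 2**63]

def test_get_task_with_large_ids():
    for task_id in BOUNDARY_IDS:
        response = requests.get(f"{BASE_URL}/tasks/{task_id}")
        # A 500 here would hint at an overflow in how the id column
        # is defined; a non-existent id should give 404 instead.
        assert response.status_code in (200, 404), f"id={task_id} -> {response.status_code}"
```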


That’s a great heuristic to use! I’d love to know if you found any bugs :smiley:

Also, when you GET the task with ID n, you might want to check that you actually got the correct data - that the name and date are the name and date you expect (e.g. in case there is an off-by-one bug, or somebody messed with the date formats in the backend). But you can also include this check in the lifecycle test.
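For example, here's a rough sketch of that check as part of a lifecycle test - the endpoint names, fields and status codes are assumptions, so adjust them to your API:

```python
import requests

BASE_URL = "https://api.example.com"  # assumption

def test_created_task_reads_back_correctly():
    payload = {"name": "pay rent", "date": "2024-05-01"}
    created = requests.post(f"{BASE_URL}/tasks", json=payload)
    assert created.status_code == 201
    task_id = created.json()["id"]

    fetched = requests.get(f"{BASE_URL}/tasks/{task_id}").json()
    # Compare against what we sent, not just against "some task" -
    # this catches off-by-one id mix-ups and date-format changes.
    assert fetched["name"] == payload["name"]
    assert fetched["date"] == payload["date"]
```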

Re: which tests to automate:

Why do we have the criterion “induction to action”, which is, according to Angie Jones, how quickly developers would jump on fixing the issue? How is that different from the risk?

If an issue has high impact or sits in a frequently used area, then I would expect the induction to action to already be high. But that is already included in the risk. What am I missing here?

I watched Angie Jones’ talk a long time ago, so I don’t have it all super fresh in my mind. I also remember that I had trouble explaining the framework to my team when I tried to use it in real life.

Oh yeah, when it comes to distinctness, this is not easy to evaluate when you are automating an API from scratch. The distinctness depends on what else you plan to automate. Maybe look at the other tests that you have already given a high score.

If you have an existing project where parts of the API checks are already automated, distinctness makes a lot more sense.

Great question - I think it is included as its own criterion because it isn’t always the case that the developers can drop everything and work on fixing something, even if it is high impact. For instance, if it is an older area of the code and the developers are now focused on new features with little to no time to work on fixing older areas, or if only one developer knows that area and their time is highly sought after across various priorities, that would mean a lower induction to action score. So it depends on the reality of the way work is assigned and bugs are fixed in that area of work.


I’m going through the lessons these days and I also got stuck at Distinctness, a.k.a. “Does this test provide new info?” I then watched Angie’s video and wasn’t satisfied either.

Like the Twitter example: if adding a tweet fails, what is the new information there? That the tweet failed? The same then goes for all bugs, I guess. The way I see it, a real-life equivalent would be “if a dolphin jumps out of the water, the new info is that the dolphin jumped out of the water” - it’s a redundant statement.

What am I missing here, too?

There’s a lot of nuance to this for sure!
I’ll try to summarize how I look at it: does the test provide information that other tests do not?
For the example of sending a tweet, do other tests also send a tweet? If not, this test is distinct in that it’s the only one that provides the information about sending a tweet.
Hope that helps clear things up!


I went through all my endpoints and used this method (except Gut Feeling - I felt I could cut some time that way). I broke down Risk, Value and Cost Efficiency into basic parts - it was easier that way.
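Roughly in this spirit - a hypothetical Python sketch with made-up sub-factor names, not my real sheet, where each category score is just the sum of smaller 0-5 parts:

```python
def category_score(sub_scores: dict) -> int:
    """Sum the 0-5 sub-scores that make up one category."""
    return sum(sub_scores.values())

# Illustrative numbers for a single test, not real data
risk = category_score({"impact": 4, "probability": 3})
value = category_score({"distinctness": 2, "induction_to_action": 3})
cost_efficiency = category_score({"quick_to_write": 4, "easy_to_maintain": 3})

total = risk + value + cost_efficiency  # 19 - compare totals across tests
```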

Here’s how it looks, without descriptions, for security reasons:

I had several assumptions:

  • I wasn’t sure about dependencies, so I gave all of them a 0.
  • We have a very small number of bugs since it’s a new product, but it was still good to analyse the Jira bugs that way, and I gave a 5 to the area that had the most bugs so far.
  • Distinctness, as mentioned above, is what I had the most trouble with. To further explain: with a trivial example like “adding a tweet”, I understand it the way you explained it. Things get complicated when you have hundreds of tests that all add a tweet (or try to) but test different things in each request. Each test should add some unique information if it fails, or else it makes no sense to have it at all? (See the sketch after this list.)
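To make that concrete, here’s a hypothetical sketch (the endpoint, payloads and status codes are all made up): four tests that all “add a tweet”, but where each failure would still tell you something the others don’t.

```python
import pytest
import requests

BASE_URL = "https://api.example.com"  # assumption

@pytest.mark.parametrize(
    "payload, expected_status, reason",
    [
        ({"text": "hello"}, 201, "happy path"),
        ({"text": ""}, 400, "empty text rejected"),
        ({"text": "x" * 281}, 400, "over the length limit"),
        ({}, 400, "missing text field"),
    ],
)
def test_add_tweet(payload, expected_status, reason):
    # Same action in every test, but each failing assertion would
    # carry distinct information about a different validation rule.
    response = requests.post(f"{BASE_URL}/tweets", json=payload)
    assert response.status_code == expected_status, reason
```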

So I was juggling a bit in the test description column - sometimes I put the most representative test case for the FAUNS method, and sometimes a proto-test that combines many, e.g. I always had a single line for A(uth) even though we have several tests for that. The downside is that this makes it even harder to put any meaningful value on Distinctness, since each line stands for a range of tests. But I had to do it that way, as I already had to input 800-ish cell values.


It’s great that you’re able to adapt this and transform it into something that works for you!
