How much NEGATIVE testing is acceptable?

Hello all,

As part of a tester's day-to-day activity, how much NEGATIVE testing is acceptable?

Is it okay to use this technique to find hidden issues? Or is there a standard for this kind of testing?

In my current project, we have used a 70-30 ratio of positive to negative testing, and within that 30% we found many hidden issues in the application. Is it good to do equal amounts of positive and negative testing?
#Negativetesting #positivetesting

4 Likes

This is the same question as 'How much testing is acceptable?' (not just for negative testing or any other quality criterion). I always find this difficult to answer, so I will let the experts reply.

4 Likes

I agree with @pwong; 'How much testing is acceptable?' is the better question to ask, I think.

And to give a stab at an answer to that:

"Enough to give you the accepted level of confidence to ship the product" (or move the product/project to the next phase).

And going back to putting ratios on "positive" versus "negative" testing: I believe that is a dangerous thing to do. Testing cannot be approached in such an abstract way. There are many different factors that will determine:

  • the amount of testing
  • the types of testing
  • the levels of testing

Which means that sometimes more emphasis on positive testing is absolutely fine, but other times much more negative testing is needed.

4 Likes

My take on this is that the amount of negative testing is determined by the complexity of the domain you're testing and, of course, the level of risk: are there payments, legal concerns, and personal data involved?

3 Likes

In my experience, it all depends on the following factors: (1) the testing budget, and (2) meeting the acceptance criteria to get the minimal viable product out.
The ratio of negative to positive testing will depend on the product we are trying to test and, again, on the testing budget allocated. If we have a small budget, I recommend carrying out test execution covering all user stories and associated acceptance criteria with a minimum of negative tests. Once the minimal viable product is delivered, and additional budget is agreed, the team can then plan further testing with more negative scenarios.

4 Likes

I mainly do negative testing: probably only 10% happy flows and 90% negative flows.
That's because we have a mature team who also write unit tests and test their own work, plus another QA who does a lot of functional testing.

But yeah, it all depends on how much time you are willing to spend on negative testing and how much time from your sprint you can afford to give a story.

My negative flows include the regular negative flows (like leaving a required field empty), destructive tests, and security tests.
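
To make "leaving a required field empty" concrete, here is a minimal sketch of how such a negative flow might sit next to a happy flow in a pytest suite; `create_user` and `ValidationError` are hypothetical stand-ins for whatever your application actually exposes.

```python
# A minimal sketch: a happy flow next to a "required field left empty"
# negative flow. `create_user` and `ValidationError` are hypothetical
# stand-ins for the real system under test.
import pytest


class ValidationError(Exception):
    """Raised when required input is missing or invalid."""


def create_user(username: str, email: str) -> dict:
    """Toy implementation standing in for the real application code."""
    if not username or not email:
        raise ValidationError("username and email are required")
    return {"username": username, "email": email}


def test_create_user_happy_flow():
    # Positive flow: valid input is accepted.
    assert create_user("alice", "alice@example.com")["username"] == "alice"


def test_create_user_rejects_empty_required_field():
    # Negative flow: an empty required field must be rejected, not silently accepted.
    with pytest.raises(ValidationError):
        create_user("", "alice@example.com")
```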

6 Likes

In my opinion, it depends on the product risks.

If the positive and negative tests cover the highest product risks, then a lot is won. The stakeholders must determine which level of product risk is acceptable to them.

4 Likes

If I look at all the negative tests in my suite, they are almost all security tests, which pretty much makes them not really negative tests but rather positive ones. Almost. A small number of negative tests, arguably 2%, verify defects that have caused us pain in the past.

I think everyone wants negative tests, but at the end of the day, that's not an 80% use case. Not losing a customer's data when they did nothing wrong at all is always going to be the win. It's very much context dependent: some industries, such as manufacturing, financial, and other regulated ones, are going to want a lot of it. If it's a "consumer" product like a game, well heck, why do any negative testing at all?

4 Likes

Positive testing:

  • Things we know (I consider error flows, like leaving a required field empty, part of this)
  • Things we used to know, but want to verify again (regression testing)
    ==> Lots of managers focus on this because it is easier to measure/plan/cut costs for/…

But lots of testing value can be found by doing what some people call "Negative testing":

  • Things we don't know, but we know that we don't know and we know how to try to find out (security testing, performance testing, usability testing, accessibility testing, …)
  • Things we don't know, and we know we don't know, but we do not know how to try and find out (transient bugs, conditions that are too difficult to create/simulate and are for that reason considered out-of-scope, black swan events, …) ==> Here you can sometimes find the most interesting bugs, but the approach for finding these issues is usually some form of empirical research where you create a lot of data to analyse and look for outliers, or let other people take a look at it, which can sometimes be very costly
  • Things we don't know, but we don't even realize we don't know ==> Using the same approach as for the previous point, empirical research/using lots of data or different sets of eyes, you sometimes find issues in locations where you didn't even consider there would be issues, like finding out that some people see a white dress with gold and some people see a blue one with black (The dress - Wikipedia)

So in my opinion, from a testing point of view, the majority of your effort should go to "negative" testing. And you should really ask the other question: "How much time do we lose doing positive testing, which could be replaced by good unit tests or other automation, and which could instead have been used for finding important issues and providing value?"

7 Likes

I used to be in this camp for many many years.

I'm not saying it's wrong. It's just more expensive than you might think. I call it the happy-path value. More value comes from a product being in a working state than from handling all the edge cases, like the network not working or the disk being full. If we look at analytics data, we will see that the "unhappy" paths may happen a lot, but also a lot less often than we think they do. The black swan is there, but because it's devilishly hard to automate usefully (and I say usefully, because the black swan is never black), there is more value to be gotten out of making sure that the working thing works, and out of being able to repair it very quickly if it ever does break.

That said, unit and component tests are a great place to also do negative testing, and perhaps even better suited for finding things you did not know. I love seeing the Johari Window reference (Johari window - Wikipedia) come up, @sarah.teugels :slight_smile:

/edit I just re-read what Kristof was saying about this balance of negative testing, and I have to explain: implicitly, a lot of negative testing does happen. But I don't call checking for blank fields negative testing, mainly because the frameworks in the app and in the tests themselves often implicitly do some of this "business-logic level" checking for us. I view truly intentional negative tests (tests with the word "cannot" in the name) as checking things in the environment that fail, as well as business-logic errors that we care about. But we probably need to get smarter at separating these two.

2 Likes

As already mentioned, the question is how much testing you do, and if you accept that you never test everything, another question pops up: how do you prioritize what to test, as in, what do you spend your money on? My answer to that is the things that give the most bang for the buck. This can theoretically be described as some risk/effort function, where risk is the product of probability and impact, leaving you with something like probability * impact / effort. This will typically mean that you look for things that have a high probability, a high impact, or a low effort. Practically, this is more of a mindset and attitude than anything rigorous, but "most bang for the buck" translates very well into practical application, with the minor alteration that I want to learn as much about the product as I can with the least amount of effort.
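
To make the bang-for-the-buck idea a bit more tangible, here is a rough sketch of that probability * impact / effort scoring; the scenarios and numbers are invented purely for illustration, not taken from any real product.

```python
# A rough sketch of the probability * impact / effort scoring idea.
# The scenarios and numbers are invented for illustration only.

scenarios = [
    # (name, probability of failure 0-1, impact 1-10, effort in hours)
    ("login with expired password",             0.30, 7, 1.0),
    ("login while auth service is down",        0.05, 9, 4.0),
    ("boot laptop at -20 degrees C",            0.01, 6, 40.0),
    ("boot laptop with flat battery, no cable", 0.20, 5, 0.5),
]


def priority(probability: float, impact: float, effort: float) -> float:
    """Higher score means more learning per unit of effort."""
    return probability * impact / effort


# Rank the scenarios: high probability, high impact, or low effort float to the top.
for name, p, i, e in sorted(scenarios, key=lambda s: priority(*s[1:]), reverse=True):
    print(f"{name:45s} score = {priority(p, i, e):6.2f}")
```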

Here is a little mental exercise I like to do on the topic. Imagine you were tasked with "Test the startup sequence of a computer." A very obvious test is to start the computer and see that it starts. Let's say it does. How much do you know about whether the computer can start in all circumstances? Very little. You would learn way more about the status of this feature if it failed. Suppose I instead asked you: what are the different scenarios you can come up with where the computer might not start properly? To these scenarios we then apply that probability * impact / effort idea. Is this likely to happen (not "will a user do it", that's impact, but do you think this will cause the startup to fail)? What is the severity of the impact? And how much effort do you need to make to do the test? For instance, you might think that the computer will not start at a really cold temperature, but the effort to test that is substantially higher than, say, testing a laptop with no power in the battery and no power cable connected.

So, to answer the question of how much negative testing is acceptable: my answer is as much as you need to learn as much as you can about the product with the least amount of effort.
Personally, I find that I am way more efficient when thinking about the negative cases than the positive cases; for a smaller effort I can learn more, but I need both. And I can normally include the happy path in a negative test, because more often than not the product performs well in these cases, and then I also know that it has a chance to function as expected. For example, if I try to log in with a username that I think might cause a problem and I can still log in, I do not need to test logging in with a username that I think will work.

As a bonus note, Bayesian thinking is very useful to help you advance in this type of approach: A visual guide to Bayesian thinking - YouTube

4 Likes

A tester finds information about the quality of a product and offers that information to a release/product manager. It's up to them to decide what sort of 'confidence' they have in the received information.

4 Likes

How about the time available? Or the resources? Or political changes? Or company image? Or support impact? Or stakeholder expectations? And so on…

Is domain complexity an issue when you have a tester who is a domain expert?
Is risk only found where there are payments, legal concerns, and personal data?

You can view all things in a negative way.
How do you distinguish between negative and positive?

2 Likes

Does this mean your testing is only revolving around risk-based testing?
Can there be risks that haven't been discovered yet and that you're not looking for?
Whoā€™s defining the product risks?
Is product quality about winning at something?
Are you focusing only on the highest product risks? Or does that depend on a number of contextual factors?
What about project and business risk? Are they not supposed to be considered as well, together with product risk?

3 Likes

This kind of thinking is so foreign to us sometimes. We forget about small dependencies in the systems we test, and a common cancer is failing to test in a perfectly clean browser, or on a computer that has never had the product installed and is thus missing lots of things we assume are just present, so we don't warn a user when there are irregularities that we do care about.

A while back I managed to get my phone wet. It's waterproof™, but I still had to let it dry fully before it would allow me to start charging it again. That's just one example of thoughtful engineering happening invisibly: I never knew that was code someone had written into the baseband drivers to check before letting the user continue. A bit like how booting a laptop in a freezing igloo means the battery will behave as if it is flat, and I would hate for the computer not to work properly. If the TPM chip in my computer failed at temperatures below zero degrees C and then stopped encrypting or decrypting my hard drive, I would totally chuck that laptop if it did not help me with a useful error message when things unexpectedly go wrong.

3 Likes

Does this mean your testing is only revolving around risk-based testing?
I use product risks to prioritise my tests. In case of doubt I can talk with the product owner, the help desk, or someone else.

Can there be risks that havenā€™t been discovered yet that youā€™re not looking for?
There are always unknown unknowns. I use exploratory testing to handle these situations.

Whoā€™s defining the product risks?
In my opinion, this is a group effort. People with different roles are needed, like a product owner, a programmer, and a tester; this is based on the Three Amigos. Depending on the product risks, other people from another department, like legal, could be involved.

Is product quality about winning at something?
This question is not completely clear to me. I interpret the question as follows: must the tester determine whether the product quality is good enough for the new release?
If there are clear requirements or acceptance criteria, then the tester can tell whether a new version can be released. In other cases the tester can only provide information about the system.

Are you focusing only on the highest product risks? Or does that depend on a number of contextual factors?
A product risk could be determined by likelihood and impact. In some cases this is not enough: for security-related product risks, attack vectors can have a major impact on testing.

What about project and business risk? Are they not supposed to be considered as well, together with product risk?
The product owner or project leader must take all of this into account. If a bug costs 300K euros a year and the new release will increase revenue by 800K euros, then the choice looks obvious. In my opinion, this is not up to the tester.

It is also possible to tell the business that certain product risks have not been tested.

3 Likes

Just going to drop this HUGE reason why you should always write negative tests.

Especially negative tests around permissions and authentication. I'll let you all give it a read; it's gone viral in my Twitter feed, and the research paper, despite being 15 pages, is not indigestible. Better than letting Google take you down other rabbit holes.

2 Likes

The reason for testing is delivering high-quality experiences to your customers, so you should do as much testing as is required to achieve that.

If you're a standalone QA team and your Dev teams are involved in unit testing, I'd probably look at primarily doing negative test cases.

I've noticed throughout my career that Devs are very good at writing unit tests for happy paths, and also very good at asking "why would anyone do that?" when interrogated about whether their code works when used in any other way.

You almost get positive tests "for free" with any kind of software; literally, in the case of TDD. Even in the most cynical, user-disrespecting company, your software will be positively tested just by being used. Negative tests, though, have to happen actively.
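
To illustrate that point, here is a small sketch contrasting a happy-path check, which ordinary use (or TDD) effectively exercises anyway, with the kind of negative tests around permissions and limits that only exist if someone deliberately writes them; `transfer` and its exception types are hypothetical.

```python
# Sketch: a happy-path check that normal use exercises anyway, next to
# negative tests (permissions, limits) that only exist if written deliberately.
# `transfer`, `PermissionDenied`, and `InsufficientFunds` are hypothetical.
import pytest


class PermissionDenied(Exception):
    pass


class InsufficientFunds(Exception):
    pass


def transfer(balance: float, amount: float, authorised: bool) -> float:
    """Toy stand-in for the real system under test."""
    if not authorised:
        raise PermissionDenied("caller may not move these funds")
    if amount <= 0 or amount > balance:
        raise InsufficientFunds("invalid or unaffordable amount")
    return balance - amount


def test_transfer_happy_path():
    # Any user performing a normal transfer exercises this path "for free".
    assert transfer(100.0, 40.0, authorised=True) == 60.0


def test_transfer_cannot_be_made_by_unauthorised_caller():
    # Nobody tests this by accident while using the product normally.
    with pytest.raises(PermissionDenied):
        transfer(100.0, 40.0, authorised=False)


def test_transfer_cannot_overdraw_the_balance():
    with pytest.raises(InsufficientFunds):
        transfer(100.0, 500.0, authorised=True)
```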

4 Likes

@dylanlacey now that's one good explanation, I really like the reasoning behind it! :meetup_ninja: