Advanced Testing Techniques

Possibly this has just passed me by, but there seems to be plenty of information and guides out there for people new to testing. Where is the next level?
Once you have been testing for a year or two, you have a good understanding of the standard test techniques like boundary analysis and equivalence classes… you can write up a decent bug report, and maybe you are starting to understand a bit more about the technology being used in your situation. Where are you meant to go from there?

Things like getting into Automation or ideas from the social sciences/neuroscience that could help with testing always feel more like add-ons or side steps rather than the next level of testing.

Are there really no advanced testing techniques that you can only pick up and learn after understanding the basics or have I missed something?
Is it right that that is all there is?

The principles of the philosophy of science, experimentation, applied epistemology, social science, anthropology and so on that testing uses are, generally speaking, not new. In terms of epistemology, learning something like falsificationism and how its principles apply to understanding something to be true is a useful thing to do to become a more advanced tester, and Popper’s The Logic of Scientific Discovery was published in 1959.

So yes, that’s all there is, but what there is is unimaginably enormous. Your job as a generalist is to take what’s valuable out of the entire sum of human knowledge.

If someone were looking to go “next level” I’d recommend things like Philosophy of Science (my gateway book was What Is This Thing Called Science), Weinberg’s Introduction to General Systems Thinking and Collins’ Tacit and Explicit Knowledge. The last two heavily influenced the path of much of the thinking and language around changes in the testing industry, particularly in CDT and CDT-influenced circles.

That’s if one wants a truly deep understanding of what testing is and where the teachings about testing come from.

If one wants to get better with the knowledge one has then I’d say that’s developing skills - how to focus/defocus efficiently, how to perform test framing, how to establish a good cost-balanced strategy, how to use tools, how to write tools, and so on. It’s a craft - I mean, how does a mason become a next-level mason?

Thanks for the reply Chris. So once someone has learnt the basic testing techniques, you would say the next step is to go straight to the big picture and consider systems thinking or the philosophy of science? Are there no concrete steps between those two points?
For your example of masonry, if I am new to it I am going to learn straightforward bricklaying first, maybe a few different types of bricks and mortar. The next level might be laying the bricks in a circle, which must be harder and which I definitely cannot do until I understand basic bricklaying, or putting a roof on a structure. Both build on my existing knowledge as a mason and do something more complicated, rather than jumping straight to the theory of city planning. With a quick Google there are level 1, 2 and 3 qualifications for masonry which seem to follow that pattern.
Similarly, if you were a chef, the beginning is simple knife work and basic dishes; then, as you gain experience and master the basics, you go on to learn classic French cooking techniques or fancy nouvelle cuisine with foams and the like, things you can’t do without the basics.
Is there really no equivalent for testing skills: a set of more complicated skills and techniques that build on the basics, without getting into big-picture, system-level thinking?

I’d say that systems thinking and the philosophy of science are not only fundamental to the nature of testing but are responsible for the techniques, basic and otherwise. So if you want to open the hood and see the workings, I don’t see what’s so unapproachable about basic philosophy. Seems like a nice mellow start to me. If I was going to scare someone I’d probably suggest combinatorics or formal logic or something.

I don’t know about layering. I’ve always had an interest in science and philosophy, so it’s been easy for me to get into. The skills have been harder, but I think that’s the same for most people. I suppose the equivalent in the metaphor would be some introduction to philosophy before “An Enquiry Concerning Human Understanding”.

With skills I think it’s best just to pick one and practice it. Most testing skills seem pretty approachable, with coding perhaps being the particular exception that requires prior knowledge.

  • Cynefin - on the complexity of our problems
  • People - because it’s always a people problem
  • What happens outside the SDLC, before and after code/test sprints

Seniority as a tester could be in understanding the intricacies of the business domain, the system details, etc. Perhaps writing a test tool, or collecting test data, is another way to upskill.

There could be further ideas in the differences between ISTQB foundation and advanced… not to start the ‘debate’ again :wink:

Thanks for both of those replies. To add a bit more context to the question: this is not so much about my own development, as I am quite comfortable looking into systems thinking, experimental design, complexity models and the like.

What has really started me thinking about this is that, in my team at work, we have some testers who have been testing for a couple of years now, are comfortable with the basic skills and can get by. Thinking about what I can do to help them advance as testers, I know that discussing Cynefin or General Systems Thinking is not something they will engage with; it is too big a step from where they are now, or just not something they have an interest in.

It has just always struck me as odd that there is no generally agreed-upon next set of skills and techniques that testers should learn: ones that are applicable regardless of context, yet accessible to the many testers who may not think of themselves as a philosophy-of-science or Cynefin type of person.

As I think about it, I am leaning towards theories of test design and a deeper understanding and use of heuristics as possibly the next level that bridges the gap (obviously alongside the things Jesper mentioned, like domain knowledge and people skills).

Ah, okay, from a teaching perspective I guess it depends what they do care about. People don’t learn what they’re told, they learn what they want, so it depends on whether they want to get better and what their interest is. A mathematician can specialise in data analysis and metrology, a scientist in experimentation, a writer in communication, etc. So if they don’t think of themselves as an X type of person, what type of person do they think of themselves as?

People do what they want, but they also do what they feel like they have to do. So you may have some idea of what you consider important enough to mandate and enforce as training or learning, even tacitly through culture and social expectation (e.g. being competent at using a computer, when testing using a computer, is a given).

I like the RST content for this sort of thing, because it’s a practical approach that’s forged in the fires of more complex ideas but pragmatic and immediately applicable to testing. I do a Focus/Defocus workshop like this that goes down well.

I guess another point is - do they want to understand testing, talk about testing, or do testing? I like to understand it, but some people just want to do it. Ways to go about practicing skills could be useful, like picking a particular thing to try or focus on for a week. I find that 1:1 coaching is great for this sort of thing: just testing the testing and seeing what could be improved. I do a test framing exercise where I sit with someone while they test and ask questions like “so why are you doing that?” and “what are you looking for?” - but you need to be prepared to support them emotionally and explain why they might be doing what they’re doing by making their tacit processes more explicit.

Hope that makes sense, I’m very tired.

Thanks Chris. Yes, people will always learn what they want to if left to their own devices, but this would be a bit more controlled, as part of staff development: saying these are skills we believe would be of benefit to you in your job. In a more general sense it also links into conversations I have seen in the community about teaching testing at a university level; if you want to do that, do you not need these levels of knowledge and skill, building on top of each other, to teach it?
It is generally more about what they don’t think they are than what they think they are. Lots of people will just reply with ‘I am not a numbers person’ or ‘I am not a science person’… when actually they are entirely able to learn these things, and what they really mean is ‘that is not my interest outside of work’ or ‘I have struggled a bit with maths/science in the past, so I want to avoid it’.

Quite a few of them have not reached (and may never reach) the point where they actively engage with testing outside of work, which is why I am trying to find skills that apply directly to their day-to-day work, and why approaching things like complexity models and systems thinking can be difficult.
I already have some coaching in place going through some of the things you mention, and I possibly need to increase that a bit to support this work.

As for RST, I would love to go on the course, and to send the entire team as well, but at a grand and a half a time I can’t justify paying for it myself, and it is not something my work is going to pay for any time soon, so it remains a dream for now.

Software testing is a very important part of making highly efficient software applications. Nowadays, most software testing companies have shifted their focus to ‘advanced software testing’.

The following are the basic areas to focus on when implementing advanced testing techniques:

  1. Test scenarios: Creating good test scenarios increases the chance of covering the majority of test cases. To make efficient test scenarios, keep the following points in mind:
    a. Read the application documentation very carefully and try to create separate test cases for each piece of functionality.
    b. Know the expected result for each piece of functionality; that is what lets you judge whether the behaviour is correct.

  2. Use-case-based testing: Creating test cases from the system’s use cases also helps, as use cases cover every transaction in detail from the initial to the final stage. The benefits of use cases are as follows:
    a. Use cases reflect the real use of the software by the end user.
    b. They also prompt test cases for scenarios not identified in the use case, which helps in finding loopholes.

  3. Cause-effect graphing: Also known as decision table testing, this is used for functions that respond to combinations of inputs, for example a form-filling web service where the user can only proceed after ticking ‘I accept the terms and conditions’ (see the sketch after this list).

  4. Test automation: Many companies are adopting automation because it is fast and efficient compared with manual testing, even though not every aspect can be covered by automation and some things are more easily covered manually. With the help of automation, a tester can reduce the testing effort and help ensure that no planned test case is left unexecuted.

  5. Experience-based testing: This does not require any tool, but it does require the knowledge of a good, experienced tester. An experienced tester can design and execute test cases very efficiently from that experience.
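
To make item 3 concrete, here is a minimal sketch of a decision table in Python. The can_proceed rule and its two inputs are invented purely for illustration; they are not from any real form-filling service.

```python
# Hypothetical rule: proceed only when the form is complete AND the terms are accepted.
def can_proceed(form_complete: bool, terms_accepted: bool) -> bool:
    return form_complete and terms_accepted

# Each row of the decision table pairs a combination of inputs with the expected outcome.
decision_table = [
    # (form_complete, terms_accepted, expected)
    (True,  True,  True),
    (True,  False, False),
    (False, True,  False),
    (False, False, False),
]

for form_complete, terms_accepted, expected in decision_table:
    actual = can_proceed(form_complete, terms_accepted)
    assert actual == expected, (form_complete, terms_accepted, actual, expected)

print("all decision-table rows pass")
```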

I hope this information is clear; please get back to us if you need more information.

There has been a good discussion here of the next level. Here are some things I’m thinking of…

Understanding the strengths and weaknesses of the different types of testing, for instance unit vs integration. Using the masonry example, unit testing would involve x-raying the bricks to make sure they don’t have defects and testing the mortar for the correct consistency. But I could still build a structure that fails by placing bricks at odd angles, letting it lean outwards, not using mortar, etc. Integration testing would ensure the structure is put together properly.
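
To put the masonry metaphor into code, here is a minimal sketch contrasting a unit-level check with an integration-level check. The Brick and Wall classes are invented purely for illustration; the point is that sound parts alone do not guarantee a sound whole.

```python
# Toy model: individually sound bricks can still make an unstable wall.
class Brick:
    def __init__(self, sound: bool = True):
        self.sound = sound

class Wall:
    def __init__(self, bricks, mortared: bool):
        self.bricks = bricks
        self.mortared = mortared

    def is_stable(self) -> bool:
        # Stability depends on how the parts are put together, not just the parts.
        return self.mortared and all(b.sound for b in self.bricks)

bricks = [Brick() for _ in range(10)]

# "Unit" check: each brick is fine in isolation.
assert all(b.sound for b in bricks)

# "Integration" checks: the assembly itself has to be right as well.
assert not Wall(bricks, mortared=False).is_stable()
assert Wall(bricks, mortared=True).is_stable()

print("unit and integration checks pass")
```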

And after you master the basics of functional testing, you could go deeper into that area (what happens if I use duplicate keys, try to corrupt data, destroy referential integrity, etc.), but testers may also decide to branch out into load, performance and stress testing, or usability, accessibility, compatibility, interoperability testing, etc.
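
As one concrete way of going deeper, here is a minimal sketch of the duplicate-key and referential-integrity checks mentioned above, using only Python’s standard sqlite3 module and an invented two-table schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces foreign keys with this on
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, "
    "user_id INTEGER NOT NULL REFERENCES users(id))"
)
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# A duplicate primary key should be rejected.
try:
    conn.execute("INSERT INTO users VALUES (1, 'bob')")
    raise AssertionError("duplicate key was accepted")
except sqlite3.IntegrityError:
    pass

# An order pointing at a non-existent user should be rejected too.
try:
    conn.execute("INSERT INTO orders VALUES (10, 999)")
    raise AssertionError("dangling foreign key was accepted")
except sqlite3.IntegrityError:
    pass

print("negative checks behaved as expected")
```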

Then there are techniques like pairwise testing. You may have developed your own techniques. I have a technique that I call “escort testing.” This is where I try to save data as soon as possible, without filling in the required fields. The app “escorts” me around to the missing fields and requires that I enter valid values before allowing me to save it - or at least it should. This has revealed many bugs. Warning: You may get some strange looks or laughs when you mention “escort testing” as in the U.S., the term “escort service” is a nice way to say “prostitute.”
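
For pairwise testing, here is a minimal sketch of a greedy all-pairs picker in Python. The parameters and values (browser, os, locale) are hypothetical, and dedicated tools use better covering-array algorithms, but it shows the idea: cover every pair of values with far fewer cases than the full Cartesian product.

```python
from itertools import combinations, product

# Hypothetical test parameters; in practice these come from your own configuration matrix.
params = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "macOS", "Linux"],
    "locale": ["en", "de", "ja"],
}
names = list(params)
candidates = list(product(*params.values()))  # the full Cartesian product

def pairs_of(case):
    # All (parameter, value) pairs exercised by a single test case.
    return set(combinations(zip(names, case), 2))

# Every value pair that must appear in at least one selected case.
uncovered = set()
for case in candidates:
    uncovered |= pairs_of(case)

# Greedily pick the candidate that covers the most still-uncovered pairs.
suite = []
while uncovered:
    best = max(candidates, key=lambda c: len(pairs_of(c) & uncovered))
    suite.append(dict(zip(names, best)))
    uncovered -= pairs_of(best)

print(f"{len(suite)} pairwise cases instead of {len(candidates)} exhaustive ones")
for case in suite:
    print(case)
```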

What I’ve found is that you will find yourself moving upstream to make improvements (or ask for them to be made) in development and requirements, to make testing easier or more thorough. Does your development team calculate cyclomatic complexity or some other measures to prevent their code from becoming spaghetti (untestable, unreadable and/or unmaintainable)? How good are the requirements given to you? How complete? Do they cover what should happen in every different condition? Are they unambiguous?