I believe that if QA spends a set 10-15 minutes triaging a defect by analysing the code, rather than just capturing a screenshot of what is not working, it can make a significant impact on the project. What do you all think?
In my opinion, moving from manual testing to learning the codebase has a couple of benefits:
QA will upskill end to end on the project and also learn the codebase.
QA can pinpoint which line of code is causing the issue, so developers know upfront where they need to fix it rather than doing the initial triage themselves, since QA has already done it.
With this approach, developers would only be responsible for making code changes; QA would only highlight which line of code could be the reason behind the issue.
If the context demands it, the QA might be testing the APIs and point out a bug in the code where they're handled. We would still call that "manual testing", wouldn't we?
It sort of depends on the tech stack and what access you are able to get.
Take testing web apps without dev tools or some sort of traffic interceptor, for example: I'd be really surprised to find a professional tester who did not use those tools. There is no transition here; that's really the baseline for web testing in my view. Giving a developer the call and the response along with a picture is significant value, in my view.
Similarly, with mobile you are going to want to see the traffic in order to offer value in your testing. Some apps are easier than others to do this with; Flutter, for example, can be problematic, but there are workarounds.
I'd put the above as the basics, almost always running as I test.
API testing was probably the first step beyond those basics, fairly light usage in my case.
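To make that "light API usage" concrete, here is a minimal sketch in Python of the kind of check a tester might run against a captured response body. The endpoint shape and the required fields (`id`, `email`, `created_at`) are hypothetical, not from any real project:

```python
import json

def check_user_response(body: str) -> list[str]:
    """Return a list of problems found in a (hypothetical) /users/{id} response."""
    problems = []
    try:
        data = json.loads(body)
    except json.JSONDecodeError:
        return ["response body is not valid JSON"]
    # Fields we expect in this made-up contract:
    for field in ("id", "email", "created_at"):
        if field not in data:
            problems.append(f"missing field: {field}")
    if "email" in data and "@" not in str(data["email"]):
        problems.append("email does not look like an address")
    return problems

# A captured response body, e.g. copied from dev tools or a traffic interceptor:
captured = '{"id": 42, "email": "broken-address"}'
print(check_user_response(captured))
# → ['missing field: created_at', 'email does not look like an address']
```

The value to a developer is the same as the "call plus response plus picture" above: the report already says exactly what came back and what was wrong with it.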
Doing local builds is likely the next step, as it offers more access to debug tools and, as you mention, viewing the code. I usually go through this step on a new project so it's available, but on many projects I don't need it; I'll use it on an as-needed basis.
Database query access: for me it's rare to need this, but it's nice to have.
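As a sketch of what that occasional database check can look like, here is a toy example where SQLite stands in for whatever database the project actually uses, and the table name and statuses are invented. The point is verifying that state really changed after a UI action, which a screenshot alone cannot show:

```python
import sqlite3

# In-memory database standing in for the application's real one.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
# Pretend the app just processed a "cancel order" action for order 1:
conn.execute("INSERT INTO orders (id, status) VALUES (1, 'cancelled')")

# The tester's check: did the row actually end up in the expected state?
status = conn.execute(
    "SELECT status FROM orders WHERE id = ?", (1,)
).fetchone()[0]
print(status)  # → cancelled
```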
Reading code: useful to have, but again it's so rare that I find I need to do this unless it's a very specific problem. Developers will be quicker and have better insight, so there's a bit of a cost-benefit question regarding who does the triage here. Pinpointing code can be useful if it's a two-minute job and you are accurate, but otherwise I'd leave it to developers.
Making code changes: a lot of testers fall into the category of knowing enough to be dangerous. A common starting point for many testers, though, is learning the source control tools and adding basic identifiers or keys for use in automation.
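That "adding basic identifiers" step can be as small as a one-attribute change. A sketch, using Python's stdlib HTML parser to stand in for a real locator engine such as Selenium's; the markup and the `data-testid` value are invented for illustration:

```python
from html.parser import HTMLParser

# Hypothetical page after a tester's small source-control change:
# the only edit was adding data-testid="submit-order" to the button.
PAGE = '<div><button class="btn btn-primary" data-testid="submit-order">Buy</button></div>'

class TestIdFinder(HTMLParser):
    """Collects tag names that carry a given data-testid value."""
    def __init__(self, wanted):
        super().__init__()
        self.wanted = wanted
        self.matches = []

    def handle_starttag(self, tag, attrs):
        if dict(attrs).get("data-testid") == self.wanted:
            self.matches.append(tag)

finder = TestIdFinder("submit-order")
finder.feed(PAGE)
print(finder.matches)  # → ['button']
```

The design point: automation that locates elements by a dedicated test id keeps working when classes or layout are restyled, which brittle class-name or positional locators do not.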
Expand your toolbox: there is value in the above, but I'm unsure how significant it is when you have developers who can do some of those things much better.
is very small I believe, but that doesn't mean I disagree!
Yup! But it still depends on how much code you are willing to learn. It's not an easy path. But I totally agree!
With this I disagree; a developer would still need to review this process. It's not as simple as that, I'm afraid. Plus, some developers will feel bad, like "is this QA guy now telling me how I need to do my job???"
I've seen that before, so it depends on the context, the people and the environment that you are in.
But I agree that getting some technical knowledge cannot hurt
All testers are manual, imo; they either have additional skills to read and inspect code/pair/TDD etc., or they automate a percentage of their tests as well. I would also say it's defect-specific: being able to point to the code does not mean the dev will know how to fix it. The code can be right but the design could be the issue, there could be an uncommunicated A/B test, or the behaviour of a browser's native function has changed due to an update; all sorts of reasons. Like Andrew says, "expand your toolbox" is a great way to progress as a tester. Also, it reads like you mean grey box testing, from which they could then progress into white box testing?
I would encourage you to think about what problem(s) you're trying to solve with this idea. It's true that there are benefits to white box testing, but it doesn't seem like that's what you're describing here.
With white box testing, a tester knows more details about the technical implementation of an SUT. They can "see under the hood" and get ideas for what could go wrong and where risks may lie. A downside to this is that testers may focus too much on the technical side and miss the point of what value and UX we're ultimately trying to deliver to the customer / end users. We may begin to think only of whether something technically does a thing, not whether that thing actually helps users or is easy to use.
What you described seems to be more about learning the code so you can cut down the work for the developers. I really don't think that's a good idea, for two main reasons:
It creates an expectation that testers should identify the exact code that's causing an issue, which I believe is out of scope. There are enough people out there who already think testers are the only ones who need to test / care about quality; we don't need to give them more things they think they no longer need to do.
As @kristof pointed out, a developer would still need to look into the issue, and unless it's something basic like a typo, it's usually a lot more complicated and takes more than 10-15 minutes to investigate.
I think what you're proposing wouldn't necessarily have a positive impact on the project, but would shift the work from one person to another and possibly duplicate work, such as the investigation.
All that being said, I wouldn't discourage you from learning more about the SUT or digging into the code if that's what you want to do. It could certainly add something to your testing and perhaps open up some possibilities for you, like getting into development, for example, but it's just important to be aware of the potential pitfalls as well.
The assumption here is that the team is broken down into dev and test silos. We need to move away from such things. If a tester knows enough about code to track down what the problem is, what is wrong with them fixing it, too?
Fixing requires raising a PR, which needs to be reviewed against the best coding practices that developers are responsible for, so QA should only help with triaging and analysing the code.
Hi @kristof, thanks for sharing your thoughts! Regarding your point about QA telling developers what to do: QA is not telling, but sharing a "maybe" root cause. Consider it like an ML model predicting an output, not giving the actual answer. Developers are still the ones leading the work.
There is actually a good answer to this: in order to test "predictions" you can use metamorphic testing approaches. A bit unrelated, but worth mentioning.
It makes this actually testable, since you have acceptance criteria for the accuracy of the model build. But this still keeps it "black box".
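A tiny sketch of the metamorphic idea: instead of asserting an exact "correct" prediction, assert a relation that must hold between predictions on related inputs. The "model" below is a toy linear scorer standing in for a real ML model, and the monotonic relation is just one example of such a property:

```python
# Metamorphic testing sketch: we cannot assert the one true prediction,
# but we can assert relations between predictions on related inputs.
# This scorer is a stand-in, not a real ML model.

def score(features):
    """Hypothetical model: a fixed weighted sum of three features."""
    return sum(f * w for f, w in zip(features, (0.5, 1.5, 2.0)))

def check_monotonic_relation(base, increased):
    """Metamorphic relation: increasing every feature must not lower the score."""
    return score(increased) >= score(base)

base = [1.0, 2.0, 3.0]
increased = [f + 0.5 for f in base]
print(check_monotonic_relation(base, increased))  # → True
```

The test stays black box with respect to the exact output, which is exactly the point made above.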
Yes, I agree @andrewkelly2555. The only point here is enhancing QA skills on the project framework: whenever an issue pops up it is passed through QA to the developer, and if, in between, QA can invest 10-15 minutes looking into the codebase to find the root cause, that's fine. My thought here is about learning, and that doesn't mean QA should do the triage for all defects.
PRs are nonsense that just cause delays. They exist only to feed silos, as does the assertion that developers are responsible for code. The team should be responsible for what they deliver. Strengthening subgroups within teams is a dysfunctional practice that is sadly normalised.
Developers should not be responsible for code.
Testers should not be responsible for tests.
The team should be responsible for both. For the entire system, not their own subsystem.
I can see several contexts where checking the code can be useful. I'm not sure which one you're referring to. Some of them:
Pinpointing the bug to a part of the code
Expanding the problem/bug to other parts of the application where the behavior is similar (due to reuse or use of that part of the code); or finding that the bug causes a worse bug somewhere else
Helping with the bug-fixing
Being more specific, or using developer-shared terminology when describing the bug report, thanks to the extra knowledge of the system
Checking a bug-fix and areas of impact (useful for regression checks)
I've done most of these for the pleasure of learning more, getting integrated, building better relationships with the people around me, and finding more problems through code inspections.
If you've got the time (it can require significant time and might take your focus away from some other technique you could use to test the application better or more in depth), I recommend you try it.
Hey Stefan, thanks for your reply! In my post I was referring only to point #1 and point #3, providing some context on the bug's root cause, but all your points are in addition to my initial post. Yes, I agree it's all about learning and upskilling QA knowledge of the project codebase.
It's been a while since I looked at it, but one of the best courses I found when fast-tracking new testers to get a bit more technical was "Technical Testing 101".
The initial jump in your toolbox and technical understanding is very significant, with well-presented examples to go through. It had a nominal fee but was fantastic value; I highly recommend it.
I developed a testing platform to establish the relationships among code diffs, services, APIs, stories, test cases and bugs, to enable accurate testing. Most testers may not have the ability to read code; with this platform, a tester can easily pinpoint the code line that causes a bug.
This was pretty common back in the old millennium, when code was reasonably readable. Testing AXE telephone systems, I always wrote corrections in ASA/PLEX (assembly-level and source-code-level languages). However, nowadays being able to read the code takes very much more effort.
I would say: if coding (including automation) is what rocks your boat, go for it! You might end up as a programmer too. That wasn't unusual back in the day.
If, however, the system(s) is what rocks your boat, it's probably not worth the effort.
It actually depends on what domain you are working in. For instance, in the web domain I agree reading code is a little more complicated, but for data domains involving ETL/Databricks it's easier. But I agree with you: only if QA can do it easily should they move in this direction.
While white box testing can be useful, it becomes challenging when the code gets too complex, making it difficult to track the flow of data. In some cases, developers prefer using a single file rather than multiple files, which increases the lines of code. This can make white box testing time-consuming and harder to manage.
In my opinion, in such scenarios grey box testing might be a more effective approach. Combining white box testing for easily understandable code with black box testing for complex user flows or intricate code structures allows for a more balanced and efficient testing strategy, rather than relying solely on white box testing.