Research into software quality

I think one of the big reasons that software quality can be so poor is that developers don’t really know how to test.

I think a lot of this is down to a lack of training. So I started looking at university courses in Computer Science and Software Engineering. The best I found taught people how to code - not how to produce good software.

So I started looking at academic research in the software quality space. I found almost nothing, unless I am looking in all the wrong places. And shift-left seems to have bypassed academia completely.

What am I missing here?

4 Likes

What do you think developers are testing, out of interest?

1 Like

Developers should be testing everything. That is how they take responsibility for their own code, rather than farming it out to someone else. But testing is not quality.

4 Likes

Most developers I know would be bored to tears spending a similar amount of time testing at varying levels as they do writing the code in the first place. The textbooks give the main reason for the split in roles as ‘objectivity’ but cost is also a significant factor. If developers are querying the role of testers though, maybe it’s because they haven’t met any good ones, or the company is truly one of the new breed where there isn’t a test team?

3 Likes

I don’t think that software developers need to test everything; after all, they are developers, not testers, for a reason. And there is also a reason why developers shouldn’t review their own code. However, I do agree that most software developers invest too little time in this and pay it too little attention. I also believe that if code with a high error rate frequently comes from development, it indicates that the application definitions are not well written. When errors occur in the software’s behavior, it is usually because the requirements were not described in sufficient detail, and that is the responsibility of product management and quality assurance in the concept phase. As a developer, it should then be sufficient to ensure that these functions work properly – which I think is reasonable and part of their job. Anything beyond that is not, in my opinion. But sometimes I do think: ‘Clicking the button once and checking if it works as planned would be nice’ :D.

I’m currently organizing an internship in QA for our developers, and I’ve specified that the developers should, for example, write short test plans for testing the bugs they’ve fixed. This has already contributed significantly to understanding and has also led to a lower error rate.
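
A side note on those test plans: where it fits, the plan for a fixed bug can also be captured as an automated regression test that runs on every build. Here is a minimal Python/pytest sketch of the shape I mean (the `strip_markup` function and the bug are made up purely for illustration):

```python
import re

# Hypothetical example: a bug report said strip_markup("<b></b>") failed
# on empty tags. The fix ships together with a test that pins the bug down.

def strip_markup(text: str) -> str:
    """Remove simple <tag>...</tag> markup, keeping the inner text."""
    return re.sub(r"</?[a-zA-Z]+>", "", text)

def test_empty_tags_no_longer_fail():
    # Reproduces the reported bug; fails before the fix, passes after it.
    assert strip_markup("<b></b>") == ""

def test_normal_markup_is_still_stripped():
    # Guards against the fix breaking the original behaviour.
    assert strip_markup("<b>bold</b> text") == "bold text"
```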

To answer your initial question: I’m not sure what exactly you’re looking for. Maybe others feel the same way, and you could clarify your goal a bit more? Are you looking for reasons why, in your opinion, software developers make too many mistakes? What kind of research and courses are you specifically trying to find? Do you want to enable developers to develop with fewer errors? Or do you want to teach testers to better understand developers? Sorry, I’m not quite getting it yet. :see_no_evil:

4 Likes

There are several reasons for my post:

  1. Why aren’t universities training developers to take responsibility for their own code, by teaching them how to test it?
  2. Where is the academic research into improved quality strategies?
  3. Where is the academic research into the reasons for the disparity between developers and testers, often cited as “the tester mindset”? My view is that it is simply a lack of training and nothing more. But I have no evidence for this.

3 Likes

You just hit my hot-button – so hot that it was the topic of my very first conference presentation (which I have also submitted for TestBash), further presentations on related supporting topics, and a book I’m (slowly) writing. Buckle up.

From this grizzled veteran dev’s PoV, this is a big hairy problem, including:

  • We have no widely accepted definition of what “software quality” even is in the first place!

  • Most of the existing definitions are much too complex to remember and apply. Many have other assorted problems such as:
    – being proprietary (so people have to buy expensive tools and/or documents),
    – being only applicable within the context of certain technologies or approaches (most notably so-called Object-Oriented Programming),
    – or (IMNSHO) totally missing the point by focusing on holding certain meetings or producing certain documents, rather than the software.

  • Devs are generally not taught in universities to test their code in any way, at least other than what may naturally occur to them by way of “poke at it and see what happens”. Then companies kneecap themselves by insisting on degrees, thus excluding the self-taught and bootcamp grads. The self-taught may have learned from a resource that includes lessons on how (and maybe why) to test, such as how I learned Ruby on Rails from “The Ruby on Rails Tutorial”. From what I’ve heard, bootcamps usually include testing.

  • Most managers of technical people are not technical themselves, so they don’t perceive the value of testing, and the technical people are generally not good at explaining it. Often the techies are afraid to even try to explain it to the boss, for fear of looking like they’re making excuses and getting fired for it.

  • Just as in any field, some people are dedicated to doing their craft well, but the vast majority are just in it for a paycheck and don’t spend any spare time improving. If the pointy-haired boss is happy with it, ship it.

  • Most software dev jobs only get a fairly short tenure, as the dev may be laid off at any time and might even be on only a few-months contract. So, by the time an actual problem arises that they could learn from, they’re long gone. The ones there at the time MIGHT learn from it, but often not.

Combine all that (and more, but I have only so much time to write) with how devs are rarely held responsible for anything bad that happens because of a bug they wrote, and it’s a recipe for buggy, fragile, slow, clumsy, unmaintainable software.

So, to improve the situation, I’m trying to get everyone on the same page (yeah, good luck!) with a definition so brief it fits on the back of a business card, yet so universal it can be applied to almost any software, and publishing it for anyone to use for free. It might not be sufficient for things like nuclear powerplants, avionics, implanted medical devices, etc., but it will do just fine for the other five-nines of devs. It’s something I call ACRUMEN, which stands for the idea that software should be, usually in this order:

  • Appropriate: doing what the stakeholders need (doing the right job), including ALL stakeholders, not just end-users but also the dev/ops/support/etc. people, their management, whoever uses whatever data it may generate, etc.
  • Correct: just what it says on the tin (doing the job right)
  • Robust: hard to make it malfunction/crash, or even seem to
  • Usable: easy for all types of intended users to use, even with assorted challenges (including those usually addressed by accessibility, but also accounting for other things such as cultural knowledge and environmental factors)
  • Maintainable: easy to change, with low chance of error and low fear of error, even for a novice dev new to the project
  • Efficient: going easy on resources, not just technical ones like CPU, RAM, bandwidth, disk space, screen space, etc., but also things like user patience and the dev’s company’s money

So what’s the N stand for? Nnnnnothing, I just tacked it on to make a real word (it’s Latin for “sour fruit”), and unlikely to be someone’s username on some system.

Sorry for the long post, I didn’t have time to write a short one. :wink:

3 Likes

Quality is not just the business of testers or developers, but of all project members. Every role, in every position, should have a sense of quality assurance and methods for it. For instance, the PO and BA create well-defined product documents with explicit acceptance criteria. Developers write business code with corresponding test code, such as unit tests. The SA designs a highly available, extensible architecture so that development becomes easier. UI/UX engineers take part in UI/UX acceptance testing before release. Testers engage in all kinds of test work and also play the quality assurance role, collaborating across all parts of the team to build up QA workflows and standards. In this way, a high-quality deliverable is produced through team collaboration.

2 Likes

You may be interested in this:

Especially the comments about the Black Box Software Testing course.

1 Like

Hi!

I am writing my thesis on a topic related to quality, so I can provide some info.

I’ve found a lot of research papers on software quality metrics and blockchain testing (security vulnerabilities). There are some exciting papers here and there, but of course far fewer than on software engineering in general or on AI/ML topics.

Which particular papers or research are you looking for?

I am not looking for anything specific - though it would be a good resource if these could all be collected together. I am more concerned with poor software development practice. More specifically, that developers are simply not taught about software quality at all: it’s something they have to learn on the job. Are developers bad at testing (looking only at golden paths, for example) because they have no training? I strongly think so.
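
To make “golden path” concrete, here is a minimal Python/pytest sketch (the `parse_age` function is hypothetical, purely for illustration). An untrained developer often stops at the first test; the interesting bugs live in the second one:

```python
import pytest

def parse_age(text: str) -> int:
    # Hypothetical function under test: parse a human age from text input.
    value = int(text)
    if value < 0 or value > 150:
        raise ValueError(f"age out of range: {value}")
    return value

def test_golden_path():
    # The one test an untrained developer typically writes.
    assert parse_age("42") == 42

def test_beyond_the_golden_path():
    # The cases a trained tester reaches for straight away.
    with pytest.raises(ValueError):
        parse_age("-1")         # below the valid range
    with pytest.raises(ValueError):
        parse_age("151")        # above the valid range
    with pytest.raises(ValueError):
        parse_age("forty-two")  # not a number at all
```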

I feel that you would be better off asking this on a developers’ forum, to be honest, but I can certainly provide a good book recommendation, which apparently features in every top ten of dev books:

I think that developers and testers being seen as different roles is part of the problem. Though this view is often greeted with alarm by testers! :rofl:

In my opinion, academia is more about theorising, whereas movements like shift-left come more from doing. As far as I’m aware, and in my experience working with others who rely heavily on their academic teachings, academia has largely left out testing altogether, or only covers it in a module or two, with ideas that are extremely outdated and not at all in line with the way a lot of testers actually work.

Furthermore, a lot of testers don’t have something like a computer science degree. The “origin stories” vary greatly, and when we learn and discover things during the course of actual, practical work, there’s usually no instinct to “go back” and try to disseminate those things in an academic setting. What seems more common to me is testers sharing their knowledge with other practitioners; again, people who are actually doing.

I’ve recently read a number of (perhaps academic, I don’t know) papers related to testing, and I’ve honestly found them to be extremely drawn out and usually not providing any practical insights or advice. I actually find the personal blogs and articles of other testers more engaging, relevant, and useful.

1 Like

In my opinion, Software Quality is now the responsibility of the team rather than individuals.
One thing that helps is the TDD approach, where the team collaborates and comes up with use cases in the initial phase, which tends to bring better quality.
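
As a minimal sketch of what that collaboration can produce (Python/pytest; the discount rule is a hypothetical example, not from any real project): the team agrees a use case, encodes it as tests first, and only then writes the code to make them pass:

```python
# Step 1 (red): the agreed use case becomes tests before any code exists.
# Hypothetical rule: orders of 100 or more units get a 10% discount.

def test_bulk_orders_get_ten_percent_discount():
    assert total_price(unit_price=2.0, quantity=100) == 180.0

def test_small_orders_pay_full_price():
    assert total_price(unit_price=2.0, quantity=99) == 198.0

# Step 2 (green): the simplest implementation that makes both tests pass.
def total_price(unit_price: float, quantity: int) -> float:
    subtotal = unit_price * quantity
    if quantity >= 100:
        subtotal *= 0.9
    return subtotal
```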

Those papers are not easy to read and digest. I would love to pair with someone interested, so we can read and discuss them together.

Developers Learning Testing
Firstly, I think that many developers learn the basic ideas of testing. Testing is something everyone does every day, and the separation of “tester” is one of knowledge, expertise and skill, unlike coding, which requires a lot of understanding to get going in the first place. Most people understand bad software at some level, and developers do write unit tests and review compile errors and use the software a bit before pushing it out. So I suppose my comments from here will be about learning testing deeper, as an expert.

I think that developers self-train on the things that interest them, like new languages, technologies, tools, libraries and frameworks, or architecture, methodologies and design.

If developers wanted to study testing subjects, like science, epistemology, test strategy, testability, heuristics, oracles, test reporting, exploration skills, et cetera ad nauseam sine qua non… then they could do that.

Here are the reasons I can think of why they might not do that:

  1. They don’t know that these testing ideas exist to be learned in the first place.
  2. They have a skewed idea of what testing can be (metrics, automation driven by acceptance criteria, etc), and settle on that.
  3. They don’t have a particular interest in it.

Testing is something anyone can do, but testing well is really difficult, and it has a fuzzy, unsure, never-finished, always-compromising, confusing, constantly questioning and challenging, open-ended, social science element to it that doesn’t sit right with everyone. Perhaps that is where the comfort of not knowing about, or pretending about, or delegating testing comes from.

My suspicion is that they don’t really care about being able to do expert testing, in the same way that I don’t really care about being able to write product code within a team. I suspect that they love creating, and testing can be seen as critiquing the elegant beauty of that creation. I suspect that they love solving solvable problems, and testing is often about where to compromise concerning the impossible.

Quality
Quality is a subjective relationship between a person and a thing. I might say something is high quality and you might say it’s not. That makes it non-fungible and heavily situation-dependent, especially as the software, project, business and users are changing all the time, and testers wrestle with what they understand about all of those to build some picture of the perceived quality for users and other test clients. So it is very hard to measure or pin down with any solid understanding, and a tough subject for academia.

If you’re looking for academia about quality, I’d look to social science for your answers. It will likely not be actually about quality, but about relationships, purchasing decisions, game theory, psychology, and so on. Usually what happens, by my observation at least, is that testers bring that back to the theories behind testing to develop how we understand the relationship between software, tester and test client, and skip over the academic papers altogether. Perhaps it’s why developers don’t get to see it.

1 Like

I follow what is happening in the Dora Community of Practice and find it informative. Have you had a look at DORA?: https://dora.community/

1 Like

Hi!
I think you can find some academic-like material about software testing and quality, but I’m not sure how useful it would be for becoming a good tester or for helping developers understand QA and test their own code. I haven’t read these myself, but I just found a couple of interesting and promising examples: