Research on testing - do we pay attention?

I’ve recently spent a chunk of my spare time on a private project, and through that I’ve been seeing some of the research that people within universities are doing. Some of it is really fascinating: testing neural networks, blockchains, security, and embedded devices. Unfortunately it all goes over my head, but it’s still fascinating.

I’m curious - are others already following and interested in this kind of material? Has anyone ever leveraged a paper to help them solve a testing challenge? (Am I a simpleton and the only one who can barely understand them?)

Examples of some papers:

Assertion-Based Test Oracles for Home Automation Systems

Neural Network Verification is a Programming Language Challenge

Generating Traffic-Level Adversarial Examples from Feature-Level Specifications

A methodology for testing virtualisation security

The causal testing framework

I’ve also just discovered that there’s a journal for research into testing:

https://onlinelibrary.wiley.com/journal/10991689

Some good Friday reading in amongst that lot!

Hi @oxygenaddict

I’ve just started a PhD in software testing research for this very reason. There are a lot of opportunities for sharing between the two communities, but they speak subtly different languages, which keeps interaction between research and industry quite siloed. I’m hoping I can help facilitate a bit of sharing between the two.

Also, you’re not a simpleton. Research papers can be very specific and domain-focused. It takes time to get into the groove of reading papers, but as a starter for ten I found this paper on how to read papers useful.

http://ccr.sigcomm.org/online/files/p83-keshavA.pdf

Hi there - good luck, Mark W! I am getting towards the end of a PhD in software testing, which I hope will result in something useful for industry and some bridges built… That has included presenting at academic conferences over the last few years to talk about the industry community. My work is summarised here: heuristics-for-test-tool-design/About-the-research at v2 · hci-lab-um/heuristics-for-test-tool-design · GitHub. It is about the stereotyping of testers and how that affects test tool design, plus ways to overcome that when designing or evaluating test tools.

More thoughts: across academia, there is both pure and applied research going on - work focused on exploring new knowledge, which may or may not be immediately useful in practice.

The research community by its nature will be looking far ahead, and some of the work may not be understandable or useful in industry for decades. (Think how long research into AI and quantum computing has been going on - AI research was big when I did my computer science degree in the 1970s.) Mark Harman, from Meta and a professor at UCL, spoke at EuroSTAR this year about mutation testing: papers on it have been published for 50 years now, but it is only by combining the technique with AI that his team has been able to make mutation testing viable and useful at scale in industry.
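
For anyone who hasn’t met mutation testing before, here’s a minimal sketch of the core loop in Python - the function, the single hand-written mutant, and the tiny test suite are all invented for illustration, whereas real tools (including the AI-assisted approach Mark described) generate mutants automatically across a whole codebase:

```python
# Mutation testing in miniature: make a small deliberate change (a "mutant")
# and re-run the tests. If the tests still pass, the mutant "survives" and
# has exposed a gap in the suite. All names here are illustrative.

def price_with_discount(total: float) -> float:
    """Apply a 10% discount to orders of 100 or more."""
    if total >= 100:
        return total * 0.9
    return total

def price_with_discount_mutant(total: float) -> float:
    """Mutant: the >= boundary has been changed to >."""
    if total > 100:
        return total * 0.9
    return total

def suite_passes(fn) -> bool:
    """Return True if every assertion passes for the given implementation."""
    try:
        assert fn(50) == 50     # below the threshold: no discount
        assert fn(100) == 90    # exactly on the boundary
        assert fn(200) == 180   # above the threshold
    except AssertionError:
        return False
    return True

assert suite_passes(price_with_discount)              # the original passes
killed = not suite_passes(price_with_discount_mutant)
print("mutant killed" if killed else "mutant survived - the suite has a gap")
```

Here the boundary assertion kills the mutant; delete it and the mutant survives, which is exactly the weakness mutation testing is designed to surface. The hard part at scale is generating and triaging huge numbers of mutants, which is where Mark’s team brought in AI.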

So we’d expect that some academic work is neither immediately understandable nor immediately useful - that’s just how long it takes for genuinely new ideas to get through.

Applied, industry-facing work is also out there, and that is more understandable and useful… I found it helpful to check whether there were industry participants in the research, as the work then tends to be more focused on immediate industry concerns - more applied than pure.

I’d love to get a bunch of these things bookmarked via The Observatory; we could then create collections based on topics. But even if we don’t, they would become searchable within MoT.

No need to pretend to be Einstein (one of the greatest flaws in tech office culture)…

Just take the abstract, shove it into your favourite AI, and get a readable version in your language.
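
If you’d rather script that, here’s a rough sketch in Python - assuming the `openai` package and an `OPENAI_API_KEY` in your environment; the model name and prompt wording are placeholders, and simply pasting the abstract into any chat UI works just as well:

```python
# Rough sketch: turn a paper abstract into plain language with an LLM.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name and prompt wording are illustrative, not prescriptive.
from openai import OpenAI

abstract = """<paste the paper's abstract here>"""

client = OpenAI()  # picks up OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite academic abstracts in plain language for a "
                "software tester with no research background."
            ),
        },
        {"role": "user", "content": abstract},
    ],
)
print(response.choices[0].message.content)
```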

I definitely plan to start one! I was thinking of having two collections: one relating to education, coaching, and teaching, and one for testing ideas.

That is really fascinating!

I attended your talk and workshop at TestBash and it was really interesting. I’d struggled to get engaged with tools for a while, but your heuristics really resonated.

What you describe around pure and applied research makes sense; I just hadn’t thought of that differentiation. When I have some clear headspace, I’ll try to identify whether the papers I’m interested in are applied or pure! It was interesting reading (or trying to read) papers on testing AIs, given how prevalent they are now. I wonder what future testing innovations have already been written about?

I think the document that Mark shared will really help me understand them, plus, as Anders suggested, AI could be useful.

On another note, I’m working on a side project looking at education and preparing young people for the software industry. Would you be interested in chatting about it?

Hi Richard - yes, Anders’ suggestion is a good one. I like Rosie’s idea too.

Also, I would be happy to talk - but not for a few weeks, as I have something wrong with my throat and it hurts to talk… Drop me a message in a few weeks to set something up?

Thanks! Isabel

PS: thanks for the kind comments about my TestBash contributions! I am just completing an update to improve the navigation after I saw you all using the framework during the workshop!

Here is a link to Mark’s blog post about his paper: Revolutionizing software testing: Introducing LLM-powered bug catchers - Engineering at Meta

And here is a link to the academic paper: https://dl.acm.org/doi/pdf/10.1145/3696630.3728544

He commented in his talk at EuroSTAR that he writes the blog posts to present his research in an easier-to-digest format…

A request from a researcher for participants in a study about bug reports; see the text below. Contact them directly if you are interested in participating.
I am not involved in the study - I have simply copied the announcement email/request for help below.

************* start of request from the researcher:
Hi,

We are doing a qualitative study on what information might be more useful to developers navigating a codebase to handle a bug report. We already have some participants, but a few more would be welcome :blush:

The study consists of a single session per participant. Participants are asked to read some bug reports, along with extra information produced in advance by our bug-localisation machine learning tool, and then think aloud about which code file(s) might need to be corrected while navigating the software project. The aim of this research is to get an idea of which extra information might be most useful for locating bugs. Participants are not expected to find the buggy files. They will not write code to fix the bug. They will not install any tool or do any preparation task before the session. The session is expected to last between one and one and a half hours, scheduled at the participant’s convenience. It can take place online through videoconference.

If you are interested in participating, please e-mail pablo.diaz-pedreira@open.ac.uk and/or michel.wermelinger@open.ac.uk and we will provide further information and answer any queries.

If you know of anyone who might help us, we’d appreciate it if you could forward them this email.

Many thanks and kind regards,

Pablo Diaz Pedreira, Michel Wermelinger, Tamara Lopez, Yijun Yu
School of Computing & Communications
The Open University, UK
************** end of request from researcher.