Testing in the Browser?

I’m interested in helping people test better and more efficiently when they’re in the browser. Looking around the internet and at companies I’ve been at, I find that people are often:

  • using a very narrow set of test cases (data-wise and functionality-wise)
  • poking stuff somewhat at random without rigor (esp. for security)
  • not really leveraging the inspections available in the browser (e.g. accessibility; see the sketch after this list)
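
To make that last point concrete, here is a minimal sketch of one inspection the browser already enables: an automated accessibility scan. It assumes Playwright and the @axe-core/playwright package are available; the URL is a placeholder.

```ts
// Run an automated accessibility scan against a rendered page.
// Assumes Playwright + @axe-core/playwright; the URL is a placeholder.
import { chromium } from 'playwright';
import AxeBuilder from '@axe-core/playwright';

async function scanAccessibility(url: string): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // Analyze the rendered page with axe-core and list the violations.
  const results = await new AxeBuilder({ page }).analyze();
  for (const v of results.violations) {
    console.log(`${v.id} (${v.impact}): ${v.nodes.length} affected node(s)`);
  }

  await browser.close();
}

scanAccessibility('https://example.com'); // placeholder URL
```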

I’m curious if this is something you also see?

I get it: time is limited and there are infinite test cases that could be run. But I think we need to be thinking about these things.

I wrote some code to try to address this, but I’m curious what you all think.

Are there things you’ve done to try and address this? Tools, process? Has it worked? Is there a solution here or is it just a struggle that we need to keep picking away at?

4 Likes

On the points you listed: I’m guilty as charged.
Taking the context of your statement:

I get it: time is limited and there are infinite test cases that could be run. But I think we need to be thinking about these things.

If we could simplify the discussion and understanding of “these things”, we might get people to incorporate them into their routine testing more easily.

4 Likes

Testing is harder than it looks

4 Likes

I think you’re right. Simplification would help it become more accessible to everyone.

I don’t really see the discussions getting simpler at this point. Many in testing seem quite content to come up with fancy words to describe concepts and then debate those fancy words. While it’s fun, and you do need some word to describe what you’re talking about, I think we may have gone too far here :man_shrugging:. But maybe that’s something we can improve on if the community is willing.

I think the way to improve understanding is to keep questioning and digging into experiences (both shared and individual). Perhaps some of it could also be addressed with training.

I wonder if maybe there’s a third option. What if we provide level-2 test cases to the tester in a tool? Level-1 being the rote/simplistic tests we do in our sleep, level-2 being the more complex but not unheard-of cases, and level-3 being the master-level test cases. If we provide those level-2 test cases, would that be enough to get people doing more thorough testing? And then provide pathways to discussion and learning, so people can (1) get their work done better and (2) be prompted to learn and dig in.
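
To make the idea less abstract, here’s a hypothetical sketch of how a tool might encode those levels as data. None of the names or example cases come from an existing tool; it’s just one possible shape.

```ts
// Hypothetical encoding of the level-1/2/3 idea as data a tool could ship.
type Level = 1 | 2 | 3;

interface TestIdea {
  level: Level;
  target: string;      // e.g. "email field"
  description: string;
}

const emailIdeas: TestIdea[] = [
  { level: 1, target: 'email field', description: 'Plain valid address (happy path)' },
  { level: 2, target: 'email field', description: 'Max-length local part (64 chars per RFC 5321)' },
  { level: 2, target: 'email field', description: 'Plus-addressing: user+tag@example.com' },
  { level: 3, target: 'email field', description: 'Quoted local part: "ann e"@example.com' },
];

// Surface the level-2 ideas by default; each could link to further reading.
const levelTwo = emailIdeas.filter((idea) => idea.level === 2);
console.log(levelTwo.map((idea) => idea.description));
```

The point of keeping it as plain data is that the same catalogue can drive both the “get work done” path (run these now) and the learning path (read why these matter).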

Idk, maybe that’s too “pie in the sky”? What do you (or anyone else) think?

3 Likes

In relation to this, I observe that many people say/mean/think that they test THE web page(s), as if it were somehow disconnected from the server.
I see this as one aspect among many, and I perceive some people concentrating solely on it.

You test a system BY, and including, the web app. The web app is part of the whole system.
It might be useful to concentrate on the UI/UX from time to time. But it is part of a system.

The same goes for APIs. Technically they might be “just” CRUD.
But e.g. a single POST can trigger an avalanche of functionality on the server (aka business logic).
Application Programming Interface
NOT Application Programming Function
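
A quick hedged sketch of that point (the endpoint and payload are made up): the status code is all the browser-side caller sees, and it says nothing about the business logic the POST set off.

```ts
// A "simple" POST is not just a function call. Endpoint and payload are
// hypothetical; a 2xx only confirms the request was accepted.
async function placeOrder(): Promise<void> {
  const response = await fetch('https://api.example.com/orders', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ sku: 'ABC-123', quantity: 1 }),
  });

  // A 201 here says nothing about the stock update, invoice job, or
  // confirmation email this POST may have triggered downstream.
  console.log(`status: ${response.status}`);
}

placeOrder();
```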

To me this is mostly a matter of mindset and habit. Surely a demanding change for some.

4 Likes

Great point Sebastian!

But I think that’s where people aren’t digging deeper. For example, putting a max-length email address into a form may feel unlikely to cause a problem, but there may be a send-worker down the line that chokes on it. Same with SQL injection: who knows whether the devs missed that “one” susceptible spot downstream.
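
As a hedged illustration of driving those “unlikely” inputs through the real UI, assuming Playwright (the URL and selectors are placeholders):

```ts
// Push risky-but-valid inputs through the actual form, because the failure
// may live in a downstream worker rather than in the form's own validation.
import { chromium } from 'playwright';

const riskyEmails = [
  'a'.repeat(64) + '@example.com',  // max-length local part (RFC 5321)
  "o'brien@example.com",            // apostrophe trips naive SQL escaping
  'user+tag@example.com',           // plus-addressing
];

async function probeSignup(url: string): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  for (const email of riskyEmails) {
    await page.goto(url);
    await page.fill('#email', email);          // placeholder selector
    await page.click('button[type=submit]');   // placeholder selector
    // The interesting failures may only show up later (send-worker logs,
    // bounce queues), so record what was submitted for correlation.
    console.log(`submitted: ${email}`);
  }
  await browser.close();
}

probeSignup('https://example.com/signup'); // placeholder URL
```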

Yes, one solution is to slice up the system so you can test things more efficiently. You’re right that it is more efficient. But I’m choosing to focus on the browser input and optimize there. That’s where I want to live in this discussion.

2 Likes

What ‘fancy words’ do you mean, though? We need precise words, and the great thing about a community is that we get to agree on meaning. If a word duplicates an existing term, I’m sure someone will quickly chip in to correct it. ISTQB does provide a useful glossary of terms that is probably not completely controversial!

1 Like

I’m sorry, I do not get where you agree and disagree with me.

Isn’t concentrating on the browser slicing?

Along what axis do you see suggestions being sliced, and along what axis does your browser slice operate?

2 Likes

Sorry for not being more clear. I’m focusing on the browser.

You could also choose to test at the API layer, via unit tests, event-queue injection, or direct calls to the database. I’m choosing not to focus on those slices at the moment.

2 Likes

Thanks, I get this.

I advocate thinking less in slices and seeing the system as a whole.
Where the browser is one important part.
We can concentrate on the browser and keep the whole system in mind.

As an example: wouldn’t it be interesting to see what users get as feedback in their browsers?
How fast (and what happens at a timeout)? In what format? What happens when they change the view? What about when they log off and on again?
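
A sketch of what checking that could look like, assuming Playwright, with a placeholder URL and selectors. The timeout branch is the interesting part: it is where you find out what the user is actually left looking at.

```ts
// Observe user-facing feedback: how fast does it appear, and what does
// the page show when it never arrives?
import { chromium } from 'playwright';

async function observeFeedback(url: string): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  await page.click('button[type=submit]'); // placeholder trigger
  const started = Date.now();
  try {
    // Wait for whatever the app uses to say "it worked".
    await page.waitForSelector('[role=alert], .toast', { timeout: 5000 });
    console.log(`feedback after ${Date.now() - started} ms`);
  } catch {
    // This branch *is* the test: what does the user see after a timeout?
    console.log('no feedback within 5 s -- what is the user looking at now?');
  }

  await browser.close();
}

observeFeedback('https://example.com/form'); // placeholder URL
```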

3 Likes

Maybe I’m completely misunderstanding here (wouldn’t be the first time).
But it sounds to me like this is indicative of a couple of basic QA tasks not being performed very well (and I’m limiting my scope to functional black-box web testing).

Test planning and test casing: if a test isn’t designed to provide intentional information through its pass or failure, it’s really quite useless. I would expect anyone in software testing to be able to describe and design a test case (the most junior of testers excepted, but I expect them to be learning it very quickly).

Exploratory testing: this is an incredibly useful tactic. However, if a software tester doesn’t really understand the technique, they will just thrash, file defects found by accident, and often end up unable to reproduce the work.

EDIT: I want to be clear that these are solvable problems through experience and mentorship.

2 Likes

No misunderstanding; I think some of these basic things aren’t being done by many people in testing roles. In my experience, many don’t plan out tests except maybe as a very robotic, human-executed test script. I’ve also seen much of exploratory testing be a thrashing about, as you put it, or basically a collection of happy-path checks.

Have you seen this sort of thing at your workplace? What’s your experience been?

1 Like

It’s been pretty varied. Most of the time I have been the sole QA resource or a member of a small team. I think that is also an indicator of part of the problem: QA is chronically under-resourced and often has nowhere near the organization of development engineers. Most often we are “other engineers” under a development lead/manager/director who almost certainly got there as a developer. So there is a tendency toward a self-fulfilling prophecy of chaotic and inefficient testing. Over the years I have become largely self-led because of this. I now make extra effort to help newer QA engineers develop some of the same self-reliance.

Another factor is that many in QA allow themselves to be bullied by schedules and accept the blame for post-prod defects and slippage. Without the organizational support I noted above, I think there may be a certain amount of panic in the testing.

I think there are other factors that contribute as well.

Because I’m usually working in a very small QA team, much of the black-box testing you describe is done as what someone is calling UAT (but isn’t reeeaallly), by people with even less direction, and with very little knowledge of how to author a defect report or what’s important to include in it. I do often see defect reports that contain nothing more than “that one button is broken. fix it” O.o

Right now I’m in the process of (hopefully) starting a new job in which I will have a lot of influence over how QA is built within the company. Fingers crossed.

2 Likes

I’d love a tool that automatically explores a bunch of things on my behalf, if that’s what you mean.

When you go to test a form, for example, everyone will do it in a different way (whether that’s “manually” or automated).

Having a tool that goes and does those things for people could be incredibly useful to help catch things quicker.

I could almost imagine having a menu of test ideas that are easier to run through a handy little test tool.

1 Like

I’d love a tool that automatically explores a bunch of things on my behalf, if that’s what you mean.

That’s definitely within the scope here. I was thinking more about empowering the tester to do it, but having the tool explore could be really useful!

So it might work something like: I focus a form, click go, and the tool goes off and runs a bunch of scenarios against that form. I’m just a bit stuck on how you would report that well. Maybe a table of results with a [pass/fail/not sure] verdict and a video replay?
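
For what it’s worth, here’s one hypothetical shape for that report. Nothing here comes from an existing tool; it’s just a sketch of a verdict-plus-replay table.

```ts
// One possible shape for reporting auto-explored scenarios: a verdict per
// scenario plus a pointer to a replay artifact. All names are illustrative.
type Verdict = 'pass' | 'fail' | 'not sure';

interface ScenarioResult {
  scenario: string;    // e.g. "email: max-length local part"
  verdict: Verdict;
  detail?: string;     // why "fail" or "not sure"
  replay?: string;     // path/URL to a video or trace of the run
}

const report: ScenarioResult[] = [
  { scenario: 'email: happy path', verdict: 'pass' },
  {
    scenario: 'email: 64-char local part',
    verdict: 'not sure',
    detail: 'accepted, but no confirmation toast appeared',
    replay: 'replays/run-042.webm', // placeholder path
  },
];

console.table(report);
```

Keeping “not sure” as a first-class verdict seems important: an auto-explorer should be honest when it can’t judge an outcome, and that’s exactly where the replay earns its keep.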

I could almost imagine having a menu of test ideas that are easier to run through a handy little test tool.

Love the test ideas! That LinkedIn thread is great :smiley:

These are actually both things we’ve got in Testing Taxi (the tool I’ve been working on). The “auto explore” is a little rough atm, but it does the job.

1 Like