This question sprang to mind because of a situation I had today. The QA team have recently joined this particular project; prior to that, the project office was doing all the testing for the project. Now they are handing the testing responsibilities over to us. So far, I believe they have only focused on testing in Chrome, so it seems safe to assume that Chrome is the browser we must support.
Today, my QA lead said, “If you have time, it would be good if QA can do as many browsers and devices as they can for the frontend to find regressions”. Yes, it naturally makes sense for the website to work for our audience. My logical mind understands that.
However, there is no documented requirement for which browsers we should support and which ones we have to test against. Because of that, the QA lead’s statement sent another part of me into a fury. The devil inside me utters, “Why should we do testing if it’s not a documented and defined requirement from the project office?” “OK, say I have time, which browser should I move on to after Chrome?” “Where is the prioritised list of browsers?” We don’t have those answers.
In my mind, this multiplies our testing effort, and I don’t like the attitude of ‘it would be good if’. I’m more of a ‘this must be done’ person, so I prefer proper exit criteria every time we do a release. Besides, if we do find something (in Internet Explorer, for example), how can that be called a regression when the site was never developed or tested for IE in the first place? I would rather spend time on test automation than on undocumented requirements for cross-browser/responsiveness testing.
Maybe I’m wrong; Agile says to value working software over comprehensive documentation. I dunno, I’m just ranting. I wonder how other organisations deal with this.