I’ve recently started working on a new web application project. I believe this is the first time my company has built a responsive web app, and I therefore do not have any experience testing such a product.
So far, discussions within the team are still ongoing as to which automated test framework to implement, so currently we only have unit tests and manual testing available. I’m finding, though, that manual testing of the responsive UI is really time-consuming. We have 5 breakpoints, so I feel as though I’m doing everything 5 times. I am uncovering quite a lot of bugs, though, with certain breakpoints often being overlooked by developers and not implemented correctly against our specifications and screen designs.
I’m interested in any tips on the following!
- How do you test responsive web apps?
- What are some examples of functionality that needs checking at each breakpoint, and functionality that does not?
- Any automation tips that could make our lives easier going forward.
It’s a good instinct to question testing everything across 5 breakpoints. There are some things you’ll probably still want to do across the range of viewport widths: examining how elements stack as you increase and decrease the viewport width, whether text and visual assets still make sense when their context changes, and whether any overlaps, bleeding, cropping, or other visual issues become apparent during those transitions. Those are relatively quick checks compared to verifying the design (spacings, font sizes, all that jazz) across all the screen resolutions you’re looking at.
For verifying the design more broadly, that’s where I’d work with UI/UX and development to discourage separate designs per breakpoint. Instead, have one design for the lowest supported resolution plus general guidelines from UI/UX to development for how things should scale up from there. Or maybe two designs: one representing the most common mobile resolution among the expected audience and one for the most common desktop resolution.
You could also come to an explicit agreement with the team on the risk you’re willing to accept compared to the effort to mitigate it. For example, agree not to check links (whether by a human or by automation) across more than one viewport width. Links and most functionality shouldn’t be affected by the responsive design at all. There’s a small chance they could be, for example a link that cannot be clicked because it’s overlapped by an invisible element or padding, but depending on your context you might want to accept that risk instead of having automated or human checks for all links and functionality across lots of different viewport widths. Typically there’s more risk in the actual functional logic, which is perfect for automation at a low level.
As for automation: for new design changes I’d encourage having a human examine them, because so much of it depends on judgment. That human could even be a UI/UX reviewer instead of (or assisted by) a test specialist. They’d probably also use tools to assist them, for example to get quick previews across real devices. For regression checks, I’d recommend visual regression checks in your pipeline that take snapshots across breakpoints and compare them against a baseline (like BrowserStack’s Percy or Applitools Eyes). Technically you could also assert against all sorts of aspects of the responsive design in UI-level tests across different screen resolutions, but that might not give a whole lot of ROI. If you’re going to automate design checks, I’d sit down with the developers to agree on what’s valuable and possible to check through frontend unit tests instead.
Thank you for your detailed response!
Some great ideas here, largely that my team needs to be more collaborative in deciding levels of risk that we are comfortable with and prioritising our testing better for ROI.
As someone who worked for a while on a web app under Vue, I’m sitting at near-zero experience myself, but keen to learn more. For example, I found that although our app was very responsive and worked well functionally, getting agreement on priorities for how the app should look versus how it should work was so impossible that I started offloading it back to the marketing team. Literally every time some text wrapped and looked different from the artwork, it caused arguments which were well below my pay grade. Luckily we shipped something and I moved on to mobile apps. We have the same problem there, but in a native app, and now it’s harder to test due to other factors too. So I feel your pain.
Since I might move back to the web team in future, I’m keen to jump in with an automation clarification request.
- I’ve heard of people getting the developers to build, for example, a hidden page or pages containing all control types: tabs, sliders, trees, lists, menus, the works. That way you can experiment with how the controls look and how Appium/Selenium interacts with them. It also lets the developers show all the controls to the UI/UX person in one easy go.
Is it worth asking the devs to do this kind of thing? If so, has it helped people?
We have something like this in Storybook (very recently implemented). We were having a lot of issues with components and typography at different breakpoints not looking how the UX designers intended and not being implemented in the correct styling at the correct places.
It has taken a few weeks, but all of these site components are now globally defined in Storybook, and anyone on the team can go and look at and interact with them. At the moment my colleague in testing is still going through Storybook to ensure that everything has been implemented correctly there; once we have confirmed Storybook is correct, we will begin to analyse the styling of each page in the web app again. If there are still inconsistencies at that point, the web app must not be using the Storybook components correctly. It’s been a long process, but I think this will end up being a valuable tool that speeds up testing and development as the project continues.
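For anyone curious what the breakpoint side of this looks like in practice, Storybook’s viewport addon lets a story declare which viewport it should render in by default, so reviewers see the component at the intended size straight away. Below is a sketch only; the component name is hypothetical, and in a real `Button.stories.js` the object would be exported rather than assigned:

```javascript
// Sketch of a Storybook story pinned to a specific viewport via the
// viewport addon. "mobile1" is one of the addon's built-in viewport
// keys; custom viewports matching your own breakpoints can also be
// registered in the addon's configuration.
const ButtonMobileStory = {
  title: "Components/Button",
  parameters: {
    viewport: {
      defaultViewport: "mobile1",
    },
  },
};
// In a real Button.stories.js you would `export default` this object.
```

That way the designer, developer, and tester are all looking at the component under the same viewport when they review it.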
Even in native apps, the communication disconnect between the person designing and the coders often feels like a view from the top of Niagara Falls, while the tester is still wondering about basic stuff like “does this look better in portrait?”
What have been your greatest lessons from how people used or misused Storybook @cdaveys?
As Storybook is so new to us, I think I am still discovering how the team can best use it. However, I have learned a lot about issues between design and dev that were occurring before implementing Storybook.
We have been using Adobe XD for our designs, which the developers constantly say is not fit for purpose. They have not really been able to communicate what the issue is, though; they say things like they cannot inspect elements of the design to get the information they need - but I know that you can. I think it is more that the tool is unfamiliar to them.
We have one ‘Completed Screen designs’ page, which shows how completed screens should look at each breakpoint. This has been problematic, though, as often these designs are not actually complete. Certain breakpoints may be missing, or things might change without notice, so there has been a lot of re-work and a lot of confusion about how an implementation could be so different from the current version of the design. I do not know of a way to go back and view previous versions of the designs, although from my perspective in test, things need to match the most recent version anyway.
Aside from this we have a separate design document, the ‘Design library’, which shows how buttons, input fields, and typography should look (e.g. H1, H2, Body 1). The other issue we’ve had is that there is no clear link between the Completed Screen designs and the Design library. E.g. there is nothing to explicitly say ‘this piece of text should be implemented as H2’, so what I’ve found is that our developers tend to guess based on what they think the styling should be - which has caused many more inaccuracies. I believe we have started handling these issues better now, though, and developers and designers are talking more openly about how things should be implemented when anything is unclear.
At the beginning of the project as well there was a large misunderstanding about how component placements throughout the web app worked. The developers needed all components to begin and end in a column, and not in a gutter. All our designs had to be reworked to accommodate this.
All in all it has been an intense learning curve that is still ongoing. Hopefully this has been useful!
A few years ago I wrote a blog post about testing the responsiveness of web applications.
For regression tests you might extend the automated tests with visual tests. A tool like Applitools can be used to determine whether the layout has changed in an unexpected way. You basically take a baseline screenshot of the web page; on each subsequent run, a screenshot of the newest version of the page is automatically compared against that baseline.
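To illustrate the comparison principle, here is a toy sketch: two “screenshots” represented as flat arrays of pixel values, with the fraction of differing pixels reported. This is only the naive idea; real tools like Applitools or Percy use perceptual diffing, ignore regions, and anti-aliasing tolerance rather than raw pixel equality:

```javascript
// Toy baseline comparison: report the fraction of pixels that differ
// between a baseline capture and the current capture. A check might
// then fail only if the ratio exceeds an agreed tolerance.
function diffRatio(baseline, current) {
  if (baseline.length !== current.length) return 1; // dimensions changed
  let differing = 0;
  for (let i = 0; i < baseline.length; i++) {
    if (baseline[i] !== current[i]) differing++;
  }
  return differing / baseline.length;
}

const baseline = [0, 0, 255, 255];
const current = [0, 0, 255, 0];
console.log(diffRatio(baseline, current)); // 0.25
```

The hard part in practice is baseline management across those 5 breakpoints: each viewport width needs its own approved baseline, and intentional design changes mean re-approving them.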