And… that feels like such a hard question, and I'm probably going to miss out some really obvious risks and then be judged for eternity as someone who doesn't know anything about web testing
Security
Client having different validation rules than the server
Client's use of JavaScript
CMS changes breaking dev team delivered code
Not rendering error responses from AJAX calls in the DOM
I could probably have filled all five slots with 'security' issues:
Injection attacks and leaking data - check your cookies and local storage, and have a look at the code sometimes
And JavaScript hides a whole bunch of risk - making the site hard to use, or subject to cross-browser issues or memory leaks.
We minimise these risks by embedding testing in the programming process, and by having people who can review the web client and look at the technology in use (and how it is used) to determine whether it is being used in a risky fashion.
E.g. does the site actually need that much JavaScript? Do the HTML and CSS validate against standards? Have we unit tested the JavaScript and run those tests cross-browser?
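The first risk in that list - client and server validation drifting apart - can be sketched in a few lines (the validation rules here are hypothetical, assumed for illustration):

```javascript
// Hypothetical client and server validation rules that have drifted apart:
// the client accepts plus-addressed emails, the server does not.
const clientValid = email => /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email);
const serverValid = email => /^[a-z0-9.]+@[a-z0-9.]+\.[a-z]{2,}$/.test(email);

const input = "test+tag@example.com";
console.log(clientValid(input)); // true - the client lets it through
console.log(serverValid(input)); // false - the server rejects it
```

And note how this interacts with the AJAX risk above: a user whose input passes the client check but fails the server check only finds out if the error response is actually rendered in the DOM.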
Hi Faith, from my experience I've tended, instead of using XPaths, to work with my team to add IDs or classes where possible, to try and make locating elements easier. I think XPaths can become quite difficult to maintain and so I attempt wherever possible to avoid using them…
I don't like to say something is never acceptable, because sometimes you may be forced to use a particularly ugly and unmaintainable location strategy to get some work done. Ideally, we want to work with the dev team like Viv mentioned so we don't face the horror of long XPaths.
And if I'm cutting very tactical code that is only going to be used once then it might be expedient to just 'copy XPath' or 'copy CSS Path' from the dev tools and then refine it later.
If I do have a long locator in strategic code then I'll try to find some way of cutting it down, and review the elements the path cuts through to find some way to make the parent easier to match lower in the path.
I don't think I've ever used an absolute XPath from /head though, even in the worst cases I think I probably started at //body - which probably doesn't really make it much better
And there is usually something that can be crafted that allows me to position the parent of the locator lower than body.
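That re-anchoring can be sketched as a tiny helper (the id 'checkout-form' and the path tail here are assumed, for illustration):

```javascript
// Sketch: re-anchor a long locator at the nearest ancestor that has an id,
// so the path no longer has to start at //body.
function anchorAtId(id, tail) {
  return `//*[@id='${id}']${tail}`;
}

// Before: //body/div[2]/div[1]/form/fieldset/input[3]
console.log(anchorAtId("checkout-form", "/fieldset/input[3]"));
// //*[@id='checkout-form']/fieldset/input[3]
```

The same idea works for CSS locators, where an id anchor is even shorter: `#checkout-form fieldset input:nth-of-type(3)`.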
I also favour CSS locators rather than XPath. CSS locators are supported by the find functionality in the DOM views in dev tools, and they are understood by the programmers in the team which reduces resistance for collaborating on code to automate the app.
Hi Conor, if you are only using them for network calls and JavaScript errors then yes, there is more you can use them for.
At the moment you are mainly using observation functions. Other observation functionality in the dev tools includes profiling (for memory usage and where time is spent), storage (so you can see what cookies and other information is being stored locally), and audits covering accessibility, performance and SEO.
As Viv mentioned earlier, if you use Firefox, the network tab also gives you the ability to resend and amend requests, extending it into a mini proxy.
The Elements or DOM view allows you to interrogate the page, drill into the HTML and see the relationships between the DOM, CSS and JavaScript. This can help reduce testing if you see CSS being used instead of JavaScript. And it can help you explore the different events that trigger the JavaScript functionality.
You can also use the Element or DOM view to manipulate the page, so you can remove validation, or submit form values that are not listed in the DOM to help you test server-side validation.
The device toolbar view is very useful for responsive testing e.g. you can see visual representations of the media queries in the CSS to help test at the different browser rendering sizes that have been coded for.
There is a lot of 'hidden' functionality as well (the developers keep sneaking more stuff in), so you can actually take screenshots of specific DOM elements and parts of the screen without needing any tools (in Chrome use the 'run' options; Firefox exposes this as a context-menu option on the page) - a lot of the experimental stuff in Chrome gets hidden in the 'run' options and 'more tools'.
I don't use most of the stuff available in the dev tools. There is a lot in there to learn to harness.
I find developer tools often enable me to get quicker feedback on potential issues than exploring the UI alone. For example, if I were to manually explore the UI to check for accessibility there are many things which could be wrong: no logical page flow, inputs without corresponding labels, colour contrast issues, etc. Using the accessibility audit within the Lighthouse tool I can quickly get a heads up on some of the potential accessibility issues on the UI I'm testing.
If I'm exploring a UI and attempting to test client-side validation, maybe I would want to change input types or attempt to remove any input field validation. Doing this through the UI, I would inspect an element, change its type from 'type=email' to 'type=text', and then see if I could bypass some client-side email validation. Doing this for all input fields on a large UI with lots of inputs could take a while, whereas if I were to go into the DevTools console and execute a JavaScript snippet (an example of a console snippet - CHROME DEVTOOLS: REQUIRED FIELD VALIDATION | VIV RICHARDS), I could quite quickly change all input types, or remove required attributes, etc.
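A hedged sketch of that kind of console snippet (not Viv's actual code - and since `document` is only supplied in the browser console, a tiny stub stands in here so the loop's effect is visible outside a browser):

```javascript
// Assumed reconstruction of the idea, not the linked snippet itself.
// In the browser console `document` is the real page; this stub stands in.
const document = {
  querySelectorAll: () => [
    { type: "email", required: true },
    { type: "number", required: true },
  ],
};

const inputs = document.querySelectorAll("input");
for (const input of inputs) {
  input.required = false; // drop client-side 'required' checks in one go
  input.type = "text";    // relax type-based validation (email, number, ...)
}
console.log(inputs.map(i => `${i.type}:${i.required}`).join(" "));
// text:false text:false
```

One loop, every input on the page loosened at once - much faster than editing each element by hand in the Elements view.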
DevTools doesn't take away the need for having to explore the UI and there are some things I like to do that DevTools doesn't offer, but for me DevTools enables me to often gain fast feedback and capture potential issues fairly quickly.
When I test without using the dev tools it helps me focus on requirements and the external functional visualisation. But I feel blind to the technological implementation and technical risks.
By this I mean that I can see error messages rendered in the GUI and the fields that are displayed to me, all of which helps me focus on the external experience that a user would have. But when I type in details and see a validation error, I don't know if the validation error was triggered by JavaScript, or HTML5 browser validation, or by a response from the server due to an Ajax call. I would test each of these technological implementations differently. The dev tools help me understand the application at a more technological level and test for targeted technical risks.
At a high level I can use the dev tools to learn about the technical implementation, and I can observe the application at a lower level: I can see error messages that may not be visible to the user (JavaScript errors, errors received from network traffic), and I see HTML validation errors which tell me if there is an increased risk of cross-browser errors.
The dev tools also help me manipulate the application at a lower level so I can bypass front end controls and test the server side without using additional tools like HTTP proxies.
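As a sketch of the kind of request you might craft to do that (the field names and constraints here are assumed, for illustration):

```javascript
// Assumed scenario: the UI's dropdown only offers quantities 1-10 and a fixed
// list of delivery options; this body submits values the DOM never listed.
const body = new URLSearchParams({
  quantity: "-1",
  deliveryOption: "internal",
});
console.log(body.toString());
// quantity=-1&deliveryOption=internal
```

In the console you could pass a body like this to fetch() against a test environment, and check whether the server re-validates what the client never offered.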
For me, the dev tools help me expand my model of the application, help me learn more about the technology and allow me to pursue other risks from within the browser without adding any more tools into my approach.
You mentioned you use DevTools often so you may have used many of the things I will mention, but when working with JavaScript there are some nice tools hidden away within DevTools:
Coverage - this tool is hidden away within the 'More Tools' menu and allows you to see which of your CSS/JavaScript is and is not being used
JavaScript Profiler - this tool, again hidden away within the 'More Tools' menu, allows you to analyse your scripts to identify performance improvements and attempt to make them run faster
Overrides - normally when you make CSS or JavaScript changes locally and reload the page these changes do not persist. Using local overrides, which is hidden away within the 'Sources' panel, you can make CSS and JavaScript changes locally which will persist even after you have reloaded the page. This can be really helpful when trying to debug, or when trying out CSS and JavaScript changes locally.
Yeah, the DOM level manipulation is really useful - e.g. changing the field type. I sometimes amend drop-down values so that I can pass in data to the server side that wasn't intended - without having to use a proxy tool.
Page Monitor - for checking if something has changed on a web page - I used this extension for monitoring this forum page to alert me to questions over the week to see how the marketing was going!
Open In Incognito - for a one click, open this url in Incognito
Check My Links - check links on this page
The Observatron - I wrote this to take screenshots and page dumps as I test and navigate through applications
Device toolbar - for responsive layout testing - really like the media query view at the top of the page for quickly jumping to CSS breakpoints, and right-clicking shows me which CSS file they are from.
DOM/Elements view - for interrogating and manipulating the HTML and CSS
Network tab for quick HTTP observations (but I use an HTTP proxy for anything more detailed)
Dev Console and Snippets view for adhoc JavaScript automating
Application view for interrogating cookies and local storage
Or… experiment gradually when you are testing. Right-click and inspect everything. Keep thinking: what am I not observing? What am I not manipulating? E.g. HTML, CSS, JavaScript, HTTP traffic, cookies, etc. And then look for the functionality in the dev tools to help.
I know it may seem daunting as there are so many panels, tools and options within DevTools, but when you get some free time next time you're browsing a website, right-click and inspect the page. Don't be afraid to have a click about and just start toggling things on and off and changing things to get a feel for some of the tools and their uses.
The official Chrome DevTools page as Alan linked to above is very good at guiding you through some of the tools available so it's definitely worth taking a look at.
I think people use the word Responsive to mean 'mobile' and it doesn't mean that. Responsive means that the web page will adapt (within the constraints coded) to different viewport sizes, and potentially different browsers. (At least it does to me - there is probably an official definition on Wikipedia.)
The responsive view in Chrome will effectively help you test the CSS media layout rules and browser resizing.
When the mobile simulation is engaged then the correct headers for the device are sent (which can help test server side), and the viewport changes size to that of the mobile device.
But, it does not mean that the same rendering engine is used or the same JavaScript engine is used.
You can find a responsive bug without having to test on mobile. You can only really say you've found a cross-browser bug if you test it on the relevant browser. So it really depends on the defect.
When we start working with dev tools we are starting to work at a more 'technology' level. So we have to understand the technology limits we are working with - hence a distinction between responsive (CSS layout rules and media queries) and cross-browser (rendering and JavaScript engines).
Responsive testing you can do in these tools. Cross-browser server-side testing based on headers you can do in these tools. Cross-browser rendering you cannot reliably do in these tools.
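A sketch of the kind of server-side branching that device emulation can exercise (the function and the regex here are assumed, for illustration): Chrome's device toolbar sends a mobile User-Agent, so a path like this can be reached from a desktop browser.

```javascript
// Assumed server-side logic: branch on the User-Agent header.
function isMobileRequest(userAgent) {
  return /Mobile|Android|iPhone/.test(userAgent);
}

console.log(isMobileRequest("Mozilla/5.0 (iPhone; CPU iPhone OS 15_0 like Mac OS X) Mobile/15E148")); // true
console.log(isMobileRequest("Mozilla/5.0 (Windows NT 10.0; Win64; x64)"));                            // false
```

It says nothing about how the page actually renders on the device, though - that still needs the real browser, as above.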