Practical ideas on creating empathy within your software testing team

I stumbled upon this article:

Where the author talks about leaving an old iPod around for their team to use with their products, to help the team understand what their real users are experiencing. He doesn’t specifically prescribe that everyone should use an iPod - we all have our own context.

I feel empathy is something many testers are good at, and I wonder how people approach this subject. How do you as software testers help the team become more empathetic with the people who are using the software? Any practical tips?

As I come across other links I will add them below:

@rosie this is something I am very passionate about, and I have historically refused to use particular websites or devices if they have badly designed UI/UX.

Awkwardly enough, I remember having a discussion with a fellow tester about a particular popular website that I do not like, as it’s too chunky to navigate quickly around for purchases. What I did not know was that this person represented the company’s Australian site and was head of all the testing in Aus… BUT he did agree with me, and they are going to release a feature update in the future with better filtering options. I agreed to have another go when they release the update.

I feel as a tester you need to insert yourself into the user’s mindset, and as a system expert you should have a real feel for the flow of what the product is designed to do. If it’s an industry I am not familiar with, then I learn everything I can about it, talk to current users of the product (if you can), and cross-skill by reading articles around design, art, and behavioural psychology - these are all very relevant to your professional development.


Neat article!

I think historically we testers have not been known for our empathy, at least as it relates to the entire team. We’ve a history of being tetchy pedants focusing on edge cases and low value issues.

I’m very, very glad we’ve started to work out of that position because we’re able to be so much more effective for the team!

Empathy for the customers/users? Perhaps a bit more.

I’m glad the word “empathy” is getting used more frequently for our roles!

As a tester, you are the consumer’s advocate.
You are there not only to verify the product meets the specifications and requirements from the business, but also that it delivers the expected user experience within the users’ own environments.
When the company I worked for released public-facing applications, they were verified on a number of older devices. We had pushback from the senior project manager that it was not required, but we persisted, as we were able to identify within our own company population that not all of us sported the latest Apple or Android device. (Some even chose Windows.)
We also verified on public and private wifi accounts, as well as in low-reception mobile phone areas. Could I access the application with enough speed on an older device over the publicly available free wifi provided by the client? Did different times of day pose different quality issues? Did country areas differ from high-rate city areas?
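As a rough illustration of why those connection checks matter, here is a minimal sketch that estimates how long a page payload takes to arrive under different connection profiles. The profile names and numbers are purely illustrative assumptions of mine, not measured values from the setup described above:

```python
# Illustrative connection profiles: (download bandwidth in bits/s, round-trip latency in s).
# These figures are assumptions for the sketch, not real measurements.
PROFILES = {
    "public_wifi": (2_000_000, 0.10),
    "3g_country": (750_000, 0.30),
    "4g_city": (20_000_000, 0.05),
}

def transfer_seconds(payload_bytes: int, profile: str) -> float:
    """Crude lower bound: one round trip plus time to serialise the payload."""
    bandwidth_bps, latency_s = PROFILES[profile]
    return latency_s + payload_bytes * 8 / bandwidth_bps

# A 1.5 MB page over country 3G vs city 4G:
slow = transfer_seconds(1_500_000, "3g_country")
fast = transfer_seconds(1_500_000, "4g_city")
```

Even this back-of-the-envelope model shows a page that feels instant on a city 4G connection can take many seconds on a country 3G link, which is the kind of gap that only surfaces if you actually test under those conditions.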

Wow, I couldn’t imagine that situation, times are a-changing… (me = testing < 5 years)

If the User (targeted market) isn’t happy > the money doesn’t flow in > business needs to cut back > in house tester might be replaced with a cheaper outsourced model.

Interesting @smee very keen on getting your feedback about this as our next team’s project could involve a lot more device testing for me. A bit off topic but it’s for the good of the User :slight_smile:
How far did you go with testing older devices/previous OS versions? Did you use a mixture of emulators and live devices?
Did you simulate network speed with Firefox DevTools or something else?

In regards to OS, we went with the latest supported build updates from Apple / Android / Windows.
This also set our browser and generic Windows platform builds. In some cases, the client specified which release IDs we should cater for, and we worked to that.
We were able to use browser and mobile analytics to identify the peak platforms and build IDs and built the applications from this.
We used emulators and live devices, throttling the speed of our links to simulate load and availability. We also set up tests for user recovery to verify operability if the application became unavailable and the user needed to recover or complete transactions after a failure, via checkpointing.
In line with this, we were able to set up micro environments to emulate the users’ own systems. Here they were able to access and download information and verify the product met their specifications before rolling it out to their own UAT environments for full integration testing.
And no, we were not a big shop. 5 developers and 2-3 testers working on products.

That’s really interesting - we basically do the same, except for the micro environments.
What I have found is that Chrome’s device emulation can display an issue where the matching live device doesn’t. Have you come across that behaviour before?