Victimised - Featured TestSphere Card

One hundred cards. One hundred Test-related concepts.
Here on The Club, we’ll feature a card from the TestSphere deck every month for people to write their stories about.

I challenge you:
Take a few minutes to think about your experiences with the featured card.

What bugs have you found that are related? Which ones have you missed?
How have you tackled testing for this concept?
What made it difficult or easier?
What have you learned? What can others learn from your experience?

Take one of those experiences and put it to prose.
Telling your stories is as valuable to yourself as it is to others.

Two projects ago, I was tinkering with a performance testing tool to upload multiple different files at the same time.
At one point, I ran the scenarios for a couple of hours, uploading many files over that time.
This was on a Friday afternoon.
Monday, I returned to find the whole team severely pissed at me. Apparently, a number of servers and systems were completely messed up, down and blocked because of my test. This caused issues for multiple teams.
Some of my scenarios had looped and uploaded the same file multiple times, overlapping with each other and creating all sorts of havoc.
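For anyone picturing how that can happen, here is a minimal sketch in Python of the kind of looping upload scenario described above. It is purely illustrative: the original tool and system aren’t named in the story, so the endpoint, file name and worker counts are all placeholders.

```python
# A minimal, hypothetical sketch of a looping upload scenario.
# The endpoint, file name and worker counts are placeholders; the
# original performance testing tool isn't named in the story.
import concurrent.futures

import requests

UPLOAD_URL = "https://example.invalid/upload"  # placeholder endpoint
FILE_PATH = "large-report.pdf"                 # same file reused by every worker


def upload_loop(iterations: int) -> None:
    # Each worker loops and re-posts the *same* file under the same name,
    # so uploads from different workers overlap on the server side --
    # the kind of collision described in the story above.
    for _ in range(iterations):
        with open(FILE_PATH, "rb") as f:
            requests.post(UPLOAD_URL, files={"file": (FILE_PATH, f)}, timeout=30)


if __name__ == "__main__":
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
        for _ in range(10):
            pool.submit(upload_loop, iterations=100)
```

Left running unattended for hours against shared infrastructure, a scenario like this can easily cause the kind of knock-on trouble the story describes.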

I felt really bad, incapable even, while the whole team agreed I must be an idiot to be doing these kinds of tests.
It was only months later that one developer told me that, because of that scenario, the team was able to lay bare and fix a number of important issues with our architecture and setup.
Testing can cause a number of issues for multiple people, especially when the results are unexpected. However, it can also bring about many opportunities for success.

What’s your story?

3 Likes

Once upon a time, I was working in a team and we were very confident in what we did.

We were a solid team and really good friends.

The problem came on this one story where I wasn’t particularly sure of the impact, so we performed what I now like to call developer-driven testing: I basically checked what they told me to check and naively felt that was enough…

The problem: it went live and there were issues.

Fingers were pointed at me: what did you test? Why didn’t you test this?

I wasn’t victimised as such but it fell on me to explain our approach.

Looking back, I’ve realised I should have asked far more questions, but we were overconfident as a team. Whilst we were a good team, this was most definitely a wake-up call for me and the others in the team.

1 Like

Nothing brings a learning opportunity like hitting a wall, more so if you hit it as a team.
Good that you were able to deal with it and the team was supportive.

That said, I fear many testers are currently following the “developer-driven testing” approach. How would you counter that?

The value of documentation.

A particular feature in a product I was working on was reported as entirely broken after release to production. The VP of development came to me, not my team lead, to ask who had tested that functionality. Luckily, I had gotten the department working on essentially session-based testing and all that entailed. I was able to tell him who had tested that functionality and that it had been touched at least three times prior to release.

I was told that they absolutely could not have tested that functionality because of how broken it was. I walked through the session notes available and reproduced the relevant portions of their work. It was broken in production. It was also working fabulously in staging.

When I brought this back to the VP he investigated further.

Turns out he had hard-coded something to make it work “temporarily” when the build was first made available on staging. It was never documented or communicated. As such, we had no idea that a workaround had been implemented.

My default when I (or my team) am being blamed for something is to validate whether the accusation is accurate and then chase it down from there.

4 Likes

The ultimate victimisation

The company I worked for previously took on two testers as their first ever testing resource. The project I was working on was a major application for costing contractors’ jobs and allowing completed costings to be passed through for invoicing. This meant that it had to interact both upstream and downstream with a number of legacy applications through a dedicated API.

Development and testing went fairly well; the remote dev team would come to us at the end of each sprint to demo their outputs; we would then test those in the following sprint. This was a bit clunky, but it worked. At the end of the development programme, the product was declared “Best-tested app we’ve ever seen!” by the company’s CEO.

Then it was released into beta.

It turned out that the spec had been drawn up by a firm of consultants, under the guidance of a senior manager who was no longer with the company. So no-one who we could talk to had been in on the ground floor. There were no data schemas; we had no idea what data items were expected to be handed in from the downstream app or had to be handed off to the upstream invoicing and payment system. And the consultants had not asked any of our contractors what they wanted from the system.

The calls started coming in on the first morning. Inside the first week, we found a contractor who wanted to use the system for something it had never been designed for. And by the end of the first month, there was around £1 million’s worth of invoices stuck in the system that couldn’t be processed. Meanwhile, the marketing team were promoting this as the Best System Ever and instead of restricting it to 15 selected beta clients, insisted on rolling it out to our entire contractor base as soon as possible.

I rode shotgun on the system for about six months with a team of BAs and devs, and we eventually got it working something like properly. Drawing on this experience, we then embarked on the next project, to replace our call centre app. To avoid repeating the problems, we started by engaging with the business and doing detailed requirements gathering with, again, a dedicated team of BAs and devs.

Six months in, the company’s owners pulled the plug on the project because of the cost. It wasn’t that it was over-running; we’d planned things carefully in that respect. But the owners were only interested in the bottom line, and the new project was contributing to a steep downturn in shareholder value.

Then the owners saw a commercial application that they could buy off the shelf to do what our call centre app did. So they bought it. And they liked it so much, they bought the company that made it. And so they didn’t need in-house development or testing. And we nearly all got made redundant, me included.

And if that’s not victimisation for doing the testing job properly, I don’t know what is! My one smile in the process came the day after I left, when the company opened new office premises with a big announcement on social media - “Click here for a virtual tour of our bright shiny new city centre offices!” Guess what? The link didn’t work. I took great pleasure in tweeting “You’d think this would have been tested before the announcement. Oh no, I forgot - you sacked all your testers!”

2 Likes

I may just bypass the question-of-the-card here and react to the card itself.

Why didn’t you find that bug?
Standard answers:
“We did, we logged it, it was marked as won’t fix.”
“That functionality was out of scope. In the future, we will consider that in-scope for more projects.”
“Wow, how did they manage to do THAT? I’m impressed!”
“If we did this, that and the other thing, we could have found it. I’m working on the scenarios now.”
“Oops. We’re trying to figure that out ourselves.”

A ton of bugs.
I’m lucky: my teams have never been accused of being the cause of bugs, except in jest.

Unclear or (unintentionally) misleading test reports
Early in my test-life, we did have a problem with that. The problem was that we included so much information that the people who should have been reading the reports were either confused by them or just gave up. I would say that this went on for about half a year before it became a conflict. Since the reports were not read, or were simply ignored, it took an actual problem to reveal the lack of communication. The team (PO, PM, Department head, Test team) sat down for about half a day to discuss what we really needed to put in our reports. We all agreed on the report content (and size, which was another big issue) and what feedback we (the testers) needed to ensure that the problem didn’t happen again.

2 Likes

“Oh sh*t! We lose how much revenue per minute without AdWords?” said my sick-to-the-stomach self.

The shocked look on Rob’s face sent shivers down my spine. “How had the two of us missed this?” I thought. It’s so obvious now!

I might’ve imagined it, but the dagger eyes popped up one by one across the open-plan office.

This Classifieds website took roughly 50% of its revenue from Sponsored Adverts (Google AdWords). Every two weeks we’d use a script, essentially a checklist of items, to check core functionality just before giving a go/no-go live decision.

I’d become complacent and had likely rushed through the checks to get some important feature live. Neither Rob nor I had spotted during our regression checks that all Sponsored Adverts were missing. I think it was resolved about three or four hours after it had gone live. I can’t remember who spotted the problem, perhaps someone in the Sales team or Ops team at the time. I think someone spotted an instant dip in realtime revenue and freaked!

Some key lessons:

  • Unawareness of complacency is dangerous
  • Giving complete go live decision making responsibility to one or two people is dangerous
  • Not understanding risks associated with core functionality is dangerous
  • Not automating checks for core, high-risk areas of functionality is dangerous (see the sketch after this list)
  • Not learning from these mistakes is dangerous
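To make the automation lesson concrete, here is a minimal, hypothetical sketch in Python of a smoke check that sponsored adverts are actually present on a key page before the go/no-go decision. The URL and the ad-slot marker are assumptions for illustration, not the real site’s markup.

```python
# A minimal, hypothetical smoke check that sponsored adverts are present
# before a go/no-go decision. The URL and AD_MARKER are assumptions,
# not the real site's markup.
import requests

LISTINGS_URL = "https://classifieds.example.invalid/search?q=cars"
AD_MARKER = 'class="sponsored-advert"'  # whatever uniquely identifies an ad slot
MIN_ADS = 1


def test_sponsored_adverts_present() -> None:
    response = requests.get(LISTINGS_URL, timeout=30)
    response.raise_for_status()
    ad_count = response.text.count(AD_MARKER)
    # Fail the release check loudly if the ad slots have silently disappeared.
    assert ad_count >= MIN_ADS, (
        f"expected at least {MIN_ADS} sponsored advert(s), found {ad_count}"
    )
```

Run as part of a pre-release pipeline (or even by hand from the go-live checklist, e.g. with pytest), a check like this turns “did anyone look at the adverts?” into a hard gate.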

The actions post #AdWordsGate are a little hazy, but I’m sure we took some important steps to ensure it didn’t happen again.

It didn’t happen again! 🙂

2 Likes

I’d agree that unawareness of complacency is a big problem. I’ve worked with people who are just going through the motions without thinking, just because that’s how they’ve done things for years.

I think having automated checks for core functionality breeds complacency and can be just as dangerous. I’m curious - was there a check on your list for the sponsored adverts? I’d feel guilty rather than victimised if there was.

1 Like

One of my noteworthy experiences along this theme was to do with an app I was testing. This app has both an iOS version and an Android version, plus many different blends of the app: a generic version and then several client-specific versions.

I was testing a certain blend of this app, at the time not knowing that this was actually a live version of the app, unlike all the other versions I’d tested up to that point. Push Notifications were the main target of this phase of testing and so it required many interactions with user profiles, such as viewing a profile, liking the user’s articles and commenting on their articles, all of which then send Push Notifications to the relevant user.

One particular user got a bit freaked out that she was getting all these Push Notifications from an unknown user, as the user profile I was testing with hadn’t been fully set up, so it had the name “Unknown” and no discernible user information. She put a comment on the main conversations thread to ask who was doing this. I couldn’t see it, as my user didn’t have access to that thread, so eventually a colleague of mine mentioned on there that this was probably due to testing being carried out that day.

Since that memorable testing day, a partial solution has been to use a specific QA conversation thread and known test users where possible. This does prevent a full test being carried out, but it is seen as preferable to the alternative.

So the takeaway from this situation is to know what you’re testing (is it live? are there real users on the system?) and to take a bit of time to think about what effects your testing might have.

2 Likes

Thanks for sharing, @simon.deacon.

We had an entry in a cell on a spreadsheet for checking the sponsored adverts. There was a mix of emotions, including guilt, victimisation, embarrassment, regret, denial, relief (oddly, at the fact that it wasn’t just me) and camaraderie.

1 Like

Hey Pat,

Were you victimised during this experience due to the testing in PROD?
How did the team/client handle this situation once they found out the tests were yours?

I wouldn’t say I was victimised, as it wasn’t really that serious, but the solution provided by the client - to use a specific QA channel/thread - has worked well so far.