Not a question so much as a share.
I came across this video and I really like it. It discusses "Shifting Left" in testing and demonstrates actionable steps to take in embracing an organizational testing mindset.
That said, I disagree on some points. I see him talking more about "expand left" than "shift left", which I agree with in general.
I frequently test whatever is available on feature branches when a developer says they have pieces done that make sense to interact with. We talk about where I should look and where not, since some parts are not finished yet.
Since I work in a Scrum team, I'm at every refinement and planning session, where I ask questions and contribute my experience and knowledge.
I'm not happy about the advertisement at the end. I was left with the impression that the preceding part was meant to lead up to that tool. It might be coincidence rather than causality, but it doesn't leave a good impression.
I have also added all of this as a comment on the video.
I think the "shift" versus "expand" distinction might be either a semantic choice or a German-to-English choice. If "expand" articulates it better for a person or a team, I think that's fine.
It did feel a little like "oh hey, go get this tool." But I also took away how this tool would play a role in this Expand Left strategy. In my org we are shifting from Jira to Linear, and I am looking at similar tools to fill the same role.
I have read criticism of this from native speakers too.
Interesting that your company is moving away from Jira. That's the first time in a long while I've heard something like that.
Most of the time it's the other way around.
Where does the claim that 85% of issues are introduced in coding come from? Is there a study? On what products, what was measured, what testing was done and where, and what does an "issue" represent?
"Software testing starting too late" & "cost of fixing issues":
- The cost might be outweighed by the revenue gained;
- The cost of fixing a bug can be high, but that doesn't mean we have to, want to, or will fix it;
"Testing earlier will reduce this number": it could reduce some issues, if anyone listens to and acts on the information given by those doing the testing, and if the testing done is appropriate.
I would say responsible Shift Left and contextual measures make sense.
But don't blindly follow specific trends/patterns or measures of testing, as these might destroy relationships with other departments, lose contracts, force process changes to speed things up, bring pressure from the C level, decrease revenue, decrease credibility with clients, and so on.
Excellent points.
I don't think there is such a thing as "testing too late". I also think that there is no such thing as a "perfect" paradigm. Like anything, you can do too much or lean too hard on a thing. A laser focus on "Shift Left" could mean that post-release is ignored until smoke is pouring out of the server room. Or worse, customers are quitting and you don't know why.
I admit to being ignorant of a lot of these kinds of concepts, like "Shift Left/Right", at least in any formal sense. I understand them from experience rather than formal education. I've been heads-down grinding out defects for a long time. This is actually the first QA community I've found that has been valuable to me.
Interesting. I will have a look at that. Bach is often a good read. Based solely on the graphic, it seems to align with identifying tests as "must, should, could, and won't" (which, as I read that statement back, I realize is probably incredibly ignorant).
Hey @dnlknott. Cool to see one of your videos shared here.
Feel free to jump into the conversation.
Thanks for the heads-up, Simon. I missed replying, too many things happening right now :).