- The obvious: Check the new feature directly and ensure it works as intended.
- Quick regression: Run the sanity tests and the full set of automated regression scripts (a minimal sketch of one such check follows this list).
- Special: Consult the development team to identify other areas of the application potentially impacted by the new feature.
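To make the quick-regression item concrete, here is a minimal sketch of the kind of automated sanity check I mean; the `BASE_URL`, the `/health` and `/login` endpoints, and the credentials are hypothetical placeholders for whatever your own suite covers:

```python
# Minimal pytest-style sanity checks; the URL and endpoints are
# hypothetical placeholders, not references to a real service.
import requests

BASE_URL = "https://staging.example.com"

def test_app_is_up():
    # Cheapest possible check: the service responds at all.
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200

def test_core_flow_still_works():
    # One happy-path check per critical flow is usually enough
    # for a quick pre-launch sanity pass.
    response = requests.post(
        f"{BASE_URL}/login",
        json={"user": "smoke-test", "password": "not-a-real-secret"},
        timeout=5,
    )
    assert response.status_code == 200
```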
First: Conduct thorough testing of the most critical functionalities of the feature, ensuring that all positive and negative test cases are fully addressed.
Then: Perform comprehensive testing of the analytics associated with this feature, as they are crucial for evaluating the feature's performance.
Next: Execute quick smoke tests on all existing functionalities to confirm that no breaking changes have been introduced by this new feature.
Important: Implement a feature toggle as a precaution, so the feature can be switched off quickly if critical bugs surface in production after launch (see the sketch after this list).
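Since the feature toggle is the safety net here, a minimal sketch of what I have in mind, assuming an environment-variable flag (the `NEW_FEATURE_ENABLED` name and the checkout example are illustrative, not from the actual product):

```python
import os

def new_feature_enabled() -> bool:
    # Read the flag from the environment so it can be flipped at
    # deploy time without shipping a new build.
    return os.environ.get("NEW_FEATURE_ENABLED", "false").lower() == "true"

def render_checkout() -> str:
    # Branch on the toggle; the legacy path stays intact, so turning
    # the flag off is the rollback plan if the feature misbehaves.
    if new_feature_enabled():
        return "new checkout flow"    # hypothetical new behaviour
    return "legacy checkout flow"     # hypothetical existing behaviour

if __name__ == "__main__":
    print(render_checkout())
```

The point of keeping both code paths live is that deactivating the feature becomes a configuration change rather than an emergency redeploy.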
Is this a real scenario, or did an LLM make it up?
I don’t know what “balancing your workload” means in this situation. Also, testing doesn’t impact existing functionality, by definition. You might test that the new feature has minimal impact on existing functionality, but that’s different. Those are a couple of reasons this (and one or both of the existing answers) smells, to me, like AI-generated content; it often arranges words in a way that seems reasonable at first glance but is little more than statistical word salad.
As for a new feature being added two days before the “product launch”, it really depends on the context. If it’s a monolithic, multi-year project, then I’d raise the risk of squeezing in a last-minute change with the product team, because in that context you’re probably not doing it on that kind of timeline without cutting corners. I’d at least push for a good strategy to roll back or disable the feature quickly if something goes sideways.
On the other hand, if you’re in a CI/CD context, two days might sound like an eternity before “the product launch” and testing a new feature on that timeline might be just another Monday.
I think my general strategy has been covered in previous posts. This is, BTW, pretty much part of my daily operations, working for an organisation that deals with information rather than transactions.
What I want to add is that I try to stay informed about what’s going on in the teams. If something like that new feature is in development, I smoke test it as early as possible so it doesn’t come as a surprise when it suddenly pops up.
Regarding balance, having been a project leader and a manager in my middle ages, I would suggest: if the priorities ain’t clear, ask your management to set them.
Well, if it was simple enough to be added on short notice, then testing it is probably fairly low-stakes too.
If it sprouted out of nowhere and is pretty impactful, then it’s a project-management problem.
If the rest of the release was thoroughly planned, I wouldn’t expect such unplanned last-minute changes. As with all other work, I’d expect it to be refined, estimated, and planned, then tested.
After all, as a tester, you would be providing the complete picture to the stakeholders, i.e. any shortcomings you anticipate.
AI checkers claim it’s human-written.
But I guess a bit more context from the author would help us understand their problem.