Creating test cases in early development phases

Early in a project, test cases are created and bugs are revealed. Usual stuff, right? Fast forward to project completion, and now we are doing regression testing.

Looking through the test library, I found an example of a test that was created as a result of bad coding early in development. It sets up a notification and checks that the password value is not lost.

```gherkin
Background:
  Given bob is a registered user

Scenario: Setting up notification should not lose the password
  Given I am logged in as bob
  And I set up a notification
  Then I should see my password is still in the db
  And I should not see a blank field in the db
```

Cringe, I know. This is not the proper use of Gherkin.

This was already fixed back in the early phases of the project. There is little value in retesting it now that the code base has stabilized.

  • Would it be a good idea to only test these once and never again?
  • If I could go back in time, is there even any point in writing this test case, given it will only ever be run once?

Maybe it’s just a case of tagging these kinds of tests as one-offs so they can be excluded from regression runs. Have any of you had this experience?
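For what it’s worth, Gherkin already supports tags (an `@one-off` tag above the scenario, which most runners can exclude with a tag expression). If the checks lived in a pytest suite instead, a custom marker would do the same job. A minimal sketch, assuming pytest; the marker name and test are hypothetical:

```python
# conftest.py -- register a custom "one_off" marker so pytest knows about it
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "one_off: ran once against early-phase code; excluded from regression runs",
    )
```

```python
# test_notifications.py -- hypothetical regression module
import pytest

@pytest.mark.one_off  # tag the test instead of deleting it
def test_password_not_lost_after_notification_setup():
    # the original one-off check would live here
    ...
```

Running `pytest -m "not one_off"` then skips these during regression, while `pytest -m one_off` can still replay them if anyone ever needs the history.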

In fact, I am in a similar situation at the moment. We are currently creating a new feature which I am supposed to test.
The requirements are quite well written by our average standards, so I used them as a base to create my test notes. I did them in mindmap format, which is something that supports my way of thinking and working pretty well.
After a while, the feature was making progress and the first stories could be closed. So far so good; I was fairly happy. But suddenly I realized that the validation which had been working previously was broken by one of the last commits.
What I learned: don’t take a working story for granted, even if it was OK one week ago. So what I have decided for myself is that I might not test each and every edge case when moving into the regression-test zone. But whenever possible I will try to ensure that no area (or story, however you call it) is forgotten. Personally, I will continue with my mindmap, because it is lightweight, can be shared with the team, and I can add notes along the way. But this applies to my environment, where specific documentation is not demanded, where I do not share the testing tasks for this feature with other people, where the team is small, and where the feature is a rather short-run project.
I think it depends on the circumstances you are working and testing in, the expectations on reporting and documentation, and the existing company/development/team standards.

Philip, yes: I’ve not actually used Gherkin, but I have used Robot Framework. That one is written like a non-functional test; it’s white-boxing, and to me at least it’s unclear why a database would hold a user’s password. Are we talking about the client-side database or the server-side database? If the latter, then it’s not a worthwhile test to keep running later on.

A system test could be devised to verify that a user does not need to re-type their password. Although if that’s a requirement that someone automated, you may have a very boring test report: spoiler alert, the chances of that test failing for the reason you think it would are terribly low. That is a symptom of the testing regime. I would remove or relocate that test, just to save running time and reduce the amount of code. A victim of the “don’t delete working tests because they make my dashboard look good” syndrome. Not your fault.

Andrea, I envy you your use of mindmaps; they have helped me find loads of corner cases, but I often abandon them later on. I’m a “ruled white paper” kind of notes person, and we all need ways to keep track of those silly corner-case test ideas, just in case. I would even say that automating something too early is not a bad thing, even if it only turns out to validate the current system “behavior”, not the “functionality”. Be very wary of automating internal behaviors. Automating early is a chance to learn, so don’t be afraid to delete your “learning” code later.

Are we talking about the client-side database or the server-side database? If the latter, then it’s not a worthwhile test to keep running later on.

You’re correct, it’s server-side. We as testers would need to log into the database and check that the password is still there and NOT removed. This is arguably crossing into testing that developers should do themselves, but it was testers who spotted this bug. I guess in future we will run these as one-offs and label them as such, so that we don’t retest them.

But whenever possible I will try to ensure that no area (or story, however you call it) is forgotten.

Yeah, I guess I’ll log it, but with a way to tag these as one-offs and store them in the archives. I’m sure we have quite a few of these, so cleaning up the library would help reduce the garbage and prioritize the more important tests.

Structurally, when you white-box a system in this way, you can end up testing the wrong thing, so I shy away from it nowadays.

For example:

  1. The tester builds a SQL query that lets them check the password easily and just print it (sketched in code after this list).
  2. Developers make a code change in a module that requires the password to be salted in the DB, so they create a new field, keeping the old one so that the code handling the user sign-up flow does not break.
  3. Developers inject a bug into the code handling the new salted-password field that accidentally clears it.
  4. The test engineer is still looking at the now-unused password field, which (spoiler alert) still holds the default system password, is never actually used in production, and should be removed from the DB for security purposes anyway.
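To make the trap concrete, here is a minimal sketch of the brittle white-box check from step 1; the schema, table, and column names are all hypothetical:

```python
import sqlite3

# Hypothetical schema after the salting change in step 2: the legacy
# "password" column still exists alongside the new "password_salted"
# column that the application actually reads.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT, password_salted TEXT)")
# Step 3's bug has wiped the salted field; the legacy field still holds a value.
conn.execute("INSERT INTO users VALUES ('bob', 'default-pw', '')")

def check_password_not_lost(conn, user):
    # The white-box check from step 1: hard-wired to the legacy column
    row = conn.execute("SELECT password FROM users WHERE name = ?", (user,)).fetchone()
    assert row is not None and row[0] != "", "password was lost"

check_password_not_lost(conn, "bob")
print("check passed, yet the column the app really reads is blank")
```

The check keeps passing while the field that matters is empty, which is exactly step 4.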

This is why coding is easy, but writing code that is easy to test correctly is hard.

Since we are talking about a predictable usage of your application, why not make an automated check for this?

Humans are made to think; computers are made to repeat sequences.

You are 100% correct, Conrad, and the password is indeed salted in my situation. This can lead to testing the wrong thing.

I think the solution is not to check the database directly but to focus on black-box testing and attempt to log in as the user. If the stored password has been lost, logging in will fail and thereby reveal the bug. If anyone else has this problem, then maybe test at a high level rather than a low one; a sketch of what that could look like follows.
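For example, a minimal black-box sketch in Python using requests; the base URL, endpoints, and credentials are hypothetical stand-ins for whatever the application exposes:

```python
import requests

BASE = "https://app.example.com"  # hypothetical application under test

def test_login_still_works_after_notification_setup():
    session = requests.Session()

    # Log in through the front door, as a real user would
    resp = session.post(f"{BASE}/login", data={"user": "bob", "password": "s3cret"})
    assert resp.ok, "precondition failed: bob cannot log in"

    # Exercise the feature that once wiped the stored password
    resp = session.post(f"{BASE}/notifications", json={"type": "email"})
    assert resp.ok

    # The actual check: if the stored credential was lost, a fresh
    # login attempt fails -- no direct database access required
    resp = requests.post(f"{BASE}/login", data={"user": "bob", "password": "s3cret"})
    assert resp.ok, "login broke after setting up a notification"
```

A check like this survives schema changes such as the salted-password migration above, because it only ever sees what the user sees.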

Philip, I’m rarely even 73%, but thanks anyway. I still think it’s a good check to perform in the way you initially suggested. Every check will teach you something: either about a flawed approach, or about a possible product flaw to investigate more deeply later. I am a fan of testing at the highest practical level, as close to what the user gets as possible. This tends to mean you are automatically testing what the real user is seeing and experiencing, and not some internal mechanic of the implementation that may change at any time. This tactic applies not just to GUIs but also to API testing.

There is of course a downside to doing checks at higher levels or altitudes: it gets harder to implement a check, and it gets slower, because you tend to need the entire system configured to run the higher-level checks.

There is a temptation to move on and write the next test, but I prefer to choose my test area and saturate it a little: choose the one area that is most likely to yield regressions and get deeper coverage there. Never trust any test code implicitly; garbage tests can be anywhere.