Documenting manual tests together with the source code - experiences?

Hi there,

some time ago we switched from documenting our software in an external document to documenting it together with the source code using Markdown. As this worked out pretty well, I was wondering if it would also make sense when it comes to test suites and test plans.

Has anyone done this so far? If so, how did it work out for you?

Would be interesting to hear some thoughts about this.

Thank you :wink:

3 Likes

Wow.

I have to say, putting end-to-end test automation code in the same place or repo as product code is not yet common practice, and here you are “marking down”. Do you have an example to share (I wish this stuff was not so damn confidential) of what the markdown metadata is designed to look like? What does the markdown include? Does it include environment, versions, branch decisions? Or do you still store that in another location or tool? And is the plan to scrape this metadata/markdown out and drive test reports from it?

Still, very good work getting the visibility of testing this high. Keen also to know if anyone else has gotten this to work, because it’s not a thing I would want to try myself, but that should never stop us experimenting. Keen to see a bit more detail, though, as well as report tooling ideas.

3 Likes

Hi,

thank you for your answer and your thoughts - appreciate it. Here is an example of what the metadata of a test could look like:

[TESTCASE]
id:TaH78Uw
name:An example test
priority:MEDIUM
environment:Chrome\Firefox
category:Functional Test
tester:SamuelMotal
status:{"Chrome":"OK","Firefox":"FAILED"}
estimatedTime:00:20
[PRECONDITIONS]
Add some text here. You can use markdown - e.g. **some bold text**.
[PROCESS]
Add some text here. You can use markdown - e.g. **some bold text**.
[RESULT]
Add some text here. You can use markdown - e.g. **some bold text**.

As you can see, so far the basic metadata is included. Further data can be extracted from the version control system, and, as you said, you could combine this with other tools. The plan is indeed to generate test reports out of this metadata. It seems to me that it could be an advantage to document tests this way, as tests and source code stay more in sync. However, I might be wrong and missing some important points. That is basically the reason why I was wondering if anybody has already looked into this topic.
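To make the report idea more concrete, here is a minimal sketch of a parser for the format above, assuming one test case per file and the section markers shown (the file layout and names are just my draft, not a finished design):

```python
import json
import re
from pathlib import Path

SECTIONS = ("TESTCASE", "PRECONDITIONS", "PROCESS", "RESULT")

def parse_testcase(text):
    """Split a test document into its [SECTION] blocks and parse the metadata."""
    blocks, current = {}, None
    for line in text.splitlines():
        header = re.fullmatch(r"\[(\w+)\]", line.strip())
        if header and header.group(1) in SECTIONS:
            current = header.group(1)
            blocks[current] = []
        elif current is not None:
            blocks[current].append(line)
    meta = {}
    for line in blocks.get("TESTCASE", []):
        if ":" in line:
            key, value = line.split(":", 1)  # split once, so "estimatedTime:00:20" survives
            meta[key.strip()] = value.strip()
    if "status" in meta:  # stored as JSON, e.g. {"Chrome":"OK","Firefox":"FAILED"}
        meta["status"] = json.loads(meta["status"])
    return meta, blocks

if __name__ == "__main__":
    # Hypothetical layout: every *.test.md file under tests/ is one case.
    for path in Path("tests").rglob("*.test.md"):
        meta, _ = parse_testcase(path.read_text())
        print(meta.get("id"), meta.get("name"), meta.get("status"))
```

A report generator running in the pipeline could then aggregate these dictionaries however needed.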

2 Likes

The basic issue with tests alongside code is that when someone wants to make a minor correction to the test steps, it counts as a code change or a “commit”, which in turn means it sometimes needs merging. So it’s overhead, and my experience is that if it takes more than 5 minutes to change a test instruction, it ends up not getting done.

I am seeing data in there that belongs in the TMS. Priority and environment are data that belong in the test “iteration”, not in the test case, and likewise the estimatedTime. I have seen estimatedTime abused: people just guess, and then forget to adjust the time taken when they suddenly realize that installing the data needed added 5 minutes to their estimate, so the estimate field becomes useless. I suspect that stripping out some of the metadata and moving it to the right place in a TMS or in a report will help. For example, the time estimate can always be found by looking at a report from the last time the test got run.

Likewise, the test status probably does not belong in the source; I’d expect that to change often. But all in all, the rest of what you have there is what I like to have and currently do have for automated test comments. The trick is maintaining it once you reach a volume of more than 100 test cases; it then becomes a full-time job if you have lots of stuff to maintain. For example, test IDs need to come from the TMS, but another approach is to either hash the test name or make a decision to never, ever rename tests. The latter is not that hard to stick to and removes the need for an ID. I use a hierarchical or folder structure for tests, and have some naming conventions I “like” to aim for, and this ultimately prevents name collisions, so full names for a test become safe to use as unique IDs.
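To illustrate the hashing idea, a sketch like this would do, assuming the full hierarchical name (folder path plus test name) is what gets hashed:

```python
import hashlib

def test_id(full_name: str, length: int = 8) -> str:
    """Derive a stable short ID from the full hierarchical test name.

    Stays stable as long as the test is never renamed or moved; a rename
    silently creates a "new" test, which is the trade-off to accept.
    """
    return hashlib.sha1(full_name.encode("utf-8")).hexdigest()[:length]

print(test_id("checkout/payment/An example test"))  # 8 hex characters, deterministic
```

Eight hex characters is plenty for a few thousand tests; bump the length if you are worried about collisions.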

I’m not really a manual tester myself (all automation requires a lot of manual exploration) so I am no expert on this problem. But keen to know from the many experienced manual testers, how they keep test cases updated.

1 Like

Wow - awesome, that’s really a lot of information and also really useful.

I also have some doubts about whether the merging will add too much overhead. However, so far I am more on the positive side, as we never ran into such problems while having our product documentation stored together with the source. Actually, quite the opposite happened: people were happy that they could update the docs in the same place where they made a code change.
Still, product documentation and tests are of course not the same, and I guess the only way to find out how efficient updating tests this way would be is to try it out in an experimental environment.

The problem with the ID is indeed not so easy to solve. I did already spend some time thinking about it, but I’m still not completely through with it. One other possible approach might be to generate a unique ID in the pipeline (not really unique, but nearly).
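To make that concrete, here is a sketch of what the pipeline could do, assuming the ID is derived from the commit that first added the test file plus its path (all names here are hypothetical):

```python
import subprocess

def pipeline_id(test_file: str) -> str:
    """Build a near-unique ID from the commit that added the file plus its path.

    `git log --diff-filter=A` lists the commit(s) where the file was added,
    so the ID survives later edits, though not renames.
    """
    commits = subprocess.run(
        ["git", "log", "--diff-filter=A", "--format=%h", "--", test_file],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    return f"{commits[-1]}-{test_file.replace('/', '_')}"  # oldest commit is listed last
```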

Thank you also for the hint regarding the data. Taking the estimated time from the last report, especially, is way better.

2 Likes

From my perspective, writing tests as part of a repository lends itself to a solid path towards automation, also in terms of lowering the barrier and giving a learning opportunity to manual testers. Whether one writes them as part of the application source code or the automation source code depends on how the automation framework is structured.

We use the Rails system test framework for UI automation, and those tests sit in the app repo. For a manual tester to document test cases, they had to:

  1. Know which folder/file to write in
  2. Understand how to structure a test in the framework (it doesn’t necessarily have to work, because of #3 below)
  3. Exclude these specs from running in the CI/CD pipeline because they need to cook more (see the sketch after this list)
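Our stack is Rails, but for readers on a Python stack, a rough analogue of step 3 with pytest markers might look like this (the `draft` marker name is just an example):

```python
# test_checkout.py -- mark not-yet-robust specs so CI can deselect them.
import pytest

@pytest.mark.draft  # register in pytest.ini:  markers = draft: not CI-ready yet
def test_checkout_happy_path():
    """Steps written by a manual tester; to be hardened into real automation later."""
    ...
```

CI then runs `pytest -m "not draft"` so the drafts never block the pipeline.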

We then had the manual tester pair with an automation engineer to convert them into robust automation specs piecemeal.

All the other things like tracking, estimates et al are covered in Jira.

3 Likes

Cool thanks a lot :wink:

Did you also encounter any problems? Are there pitfalls one should be aware of?

1 Like

Agree, having test steps alongside code helps developers focus on the important functionality, and it is a great aid when it comes to later designing and converting a manual test into E2E (end-to-end) automation.

I have always liked to work for smaller companies. Smaller companies tend not to buy all the fancy Jira plugins due to their cost, so we end up writing a lot of this stuff ourselves, more often to suit bespoke requirements. And so it’s good to hear from more people using different tools, like Rails, to manage testing. There is a chance to maybe share more on this in an upcoming MOT talking slot actually: Call to Speak - Discussion - Testing tools

1 Like

Are we considering test execution results stored with the source code or just the test cases themselves to be executed? Those are two different things.

I think (manual) test cases stored with source code is a doable design. However, dealing with test results, release/iteration after release/iteration, may be cumbersome to manage, and it is a growing output compared to the source code itself, which may be preferable to manage via a test management system. Unless you only store the last/current test results and view past results by way of commit version history, which simplifies the process but may make navigating past results more of a hassle.

If keeping everything in source code, say using the GitHub model, I guess you could use GitHub issues to file and manage bugs and, in the issues, reference the test cases in the git repo. Regarding test results, if not going with latest-results-only (with past results in version history), another approach is to treat the test results like a release artifact, publishing them on GitHub releases (or its equivalent) together with whatever other release artifacts are generated. This keeps the results separate from the source code and the repository cleaner, with only the test cases in source code.
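As a sketch of the release-artifact route, assuming the GitHub CLI (`gh`) is installed and authenticated, a pipeline step could simply attach the collected results to the release:

```python
import subprocess

def publish_results(tag: str, results_file: str = "test-results.md") -> None:
    """Attach the collected test results to an existing GitHub release,
    keeping them out of the source tree entirely."""
    subprocess.run(["gh", "release", "upload", tag, results_file], check=True)

publish_results("v1.4.0")  # hypothetical release tag
```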

1 Like

Hey David,

thank you for your reply :wink: We are indeed talking about storing the execution results in the source as well. The point that this might produce a lot of data is a really good one. However, I would stick to your mentioned solution of storing only the last result and viewing past results by using the version control history.

I agree that doing all that manually - especially digging past results out of the version control - might really be a hassle and not worth it. That’s actually why I thought of creating, as a side project, a minimal TMS which uses the version control as a backend. That way you would have both advantages: keeping the tests together with the source, and being able to manage these test cases with a simple TMS.
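To sketch what “version control as a backend” could mean in practice, pulling the status history of a single test case straight out of git might look like this (assuming the metadata format from earlier in the thread and paths relative to the repo root):

```python
import subprocess

def past_statuses(test_file: str, limit: int = 10) -> None:
    """Print historical status lines of a test file straight from git history,
    so the working tree only ever holds the latest state."""
    commits = subprocess.run(
        ["git", "log", f"-{limit}", "--format=%h", "--", test_file],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    for commit in commits:
        content = subprocess.run(
            ["git", "show", f"{commit}:{test_file}"],
            capture_output=True, text=True, check=True,
        ).stdout
        status = next((l for l in content.splitlines() if l.startswith("status:")), "status:?")
        print(commit, status)

past_statuses("tests/checkout/example.test.md")  # hypothetical path
```

A minimal TMS UI would then just wrap queries like this instead of keeping its own database.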

However, I am not sure if it is really a good idea - I have more experience as a developer than as a tester, therefore any further feedback is really appreciated.

1 Like

Do you have any QA/testers in your team or the organization? It would be good to get their feedback.

Your proposal is intriguing, but I wonder how many non-technical (or not so technical) QA personnel, business analysts, managers, etc. would be open to it, since making use of it requires some tech familiarity with source control. Unless you design a wrapper/infrastructure interface around it such that users familiar with UIs and WYSIWYG editors (and not source control) can use it. The PoC is probably catered more to an overall technically inclined team (for the greater team beyond developers).

Also, this proposal is more about storing the results, but what about reporting and presenting them? Markdown is nice, but for presenting, it is only good if it is rendered. So how do you pull the results across test cases into the end result of a test suite, test run, etc.? It might be nice to present an example of how you store that in your SCM-based TMS, unless the test suite/run results are actually not stored but dynamically crafted from the individual test case run results at query time.

2 Likes

Yes, we have testers, but as we are a smaller company we don’t use any TMS so far. So there is basically no real experience to capture in this area, but it is still a good idea and I guess I will ask them what they think about the idea.

Anyhow - I thought that, to answer your questions, it would be nice to present some screenshots of my prototype. Maybe then it becomes clearer what I am trying to do. The goal is really to provide a user interface such that a really non-technical person can also use it. As already mentioned, some data is kept in the wrong place (like environment or priority), so just ignore these fields for now.

[prototype screenshots]
I still don’t have any screens for test runs, test plans, reports, etc., but I guess these screens should be sufficient to get the idea of what I am trying to achieve.
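On the reporting question from above: the idea is indeed to craft suite and run results at query time from the individual test case files. Roughly like this (the folder layout and file suffix are hypothetical):

```python
import json
from collections import Counter
from pathlib import Path

def case_status(text: str) -> dict:
    """Pull just the status JSON out of a test case document."""
    for line in text.splitlines():
        if line.startswith("status:"):
            return json.loads(line.split(":", 1)[1])
    return {}

def suite_summary(suite_dir: str) -> Counter:
    """Aggregate per-environment outcomes across all cases in a suite folder."""
    totals = Counter()
    for path in Path(suite_dir).rglob("*.test.md"):
        for env, outcome in case_status(path.read_text()).items():
            totals[f"{env}:{outcome}"] += 1
    return totals

print(suite_summary("tests/checkout"))  # e.g. Counter({"Chrome:OK": 12, ...})
```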

1 Like

This sounds like you’re trying to put a test management system into version control? That seems like a square peg in a round hole to me.

If I were rolling my own, I think I’d use a traditional relational database for this (makes having multiple test runs much cleaner than dealing with specific result commit hashes).

If the goal is to make sure the directions are up-to-date with the code, I’d add metadata to the preconditions/process/result/etc, and have each edit of those link to a specific code commit, and then for the reports/UI, I’d use that commit to determine which I should display.
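To sketch that linking idea: each edited section could carry a `verified_against` commit (field name hypothetical), and the report could flag a test as stale when the code it covers has changed since that commit:

```python
import subprocess

def is_stale(verified_against: str, covered_paths: list[str], code_rev: str = "HEAD") -> bool:
    """A test step is stale if files it covers changed after the commit
    it was last verified against."""
    changed = subprocess.run(
        ["git", "diff", "--name-only", f"{verified_against}..{code_rev}", "--", *covered_paths],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return bool(changed)

is_stale("3fa2b1c", ["app/checkout/"])  # hypothetical commit and path
```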

You do mention that the advantage of doing this is that you’ve got devs who like being able to update the test notes/descriptions in the same place as their code, but that seems like a limited use case, especially for manual testing. If this were automated testing and the code is the documentation, then yes, absolutely keep it in the same repo as your code, but I’ve never worked with test teams that relied upon manual testing where the devs had any idea of the manual test cases/workflow. (Though this is getting close to the line where adopting BDD might help, to get some common language between testers and devs . . . )

3 Likes

Thanks a lot - honest and also critical feedback is really welcome. I am still trying to validate whether this idea is worth pursuing, as there are also other ideas that I would want to explore further.

However, I still see some advantages of having test data and code in one place; here are a few:

  1. Review process - you could review changes to tests through the same process as code changes.

  2. Version control gives you out of the box a detailed history about the written testcases.

  3. Pipelines: with very little work you can interact with the TMS from within a pipeline. This might be especially interesting for automated tests.

  4. I could also see some usage in the open source world - even though I’m not sure about this. For instance, I could imagine testers collaborating on open source projects in this way.

Nevertheless, I do see your point, and I agree that dealing with a traditional relational database would be much easier.

1 Like

This was my fear also, Ernie. Each time a tester makes a commit to update things, it would trigger a new continuous integration product build and add what is really noise, unrelated to the software creation process itself, into the repo commit history. The commit history for code actually serves a very different process and delivers metrics, which this would probably end up skewing for anyone trying to do churn analysis.

I do like that this has challenged a few of us to think more deeply, all in all. I do like the idea that developers can “own” the test scripts, and have always felt that having a TMS moved that responsibility a bit too far away from the developer for them to still engage. I suspect developers never read the manual scripts given to the subcontractor who runs the tests otherwise.

2 Likes

Issues of kicking off unnecessary CI builds, or of cluttering the commit history with test-only changes, could be addressed by options like these, if they work for the particular organization/business/team:

  • The CI build checks for specific criteria before fully kicking off the regular build (see the sketch after this list). This is mostly done with branching, but there could be other possibilities - which options are available is CI-tooling specific.

  • Use branches to separate the tests & automation from the main codebase while still keeping them in the same repository.
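For the first option, one simple form of such a check is a gate script that inspects what changed (the paths and exit-code convention here are just illustrative):

```python
import subprocess
import sys

# Skip the full build when the latest commit only touches manual test documents.
changed = subprocess.run(
    ["git", "diff", "--name-only", "HEAD~1..HEAD"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

docs_only = bool(changed) and all(f.startswith("tests/manual/") for f in changed)
sys.exit(1 if docs_only else 0)  # CI proceeds with the build only on exit code 0
```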

3 Likes

Thanks a lot, guys, for all the different opinions and suggestions. It really helped a lot.

Considering all the facts we gathered together, I came to the conclusion to stick with a traditional TCMS for now. Keeping tests in the repository is still an interesting topic, but for now it is too risky to try it outside of an experimental environment. :ok_hand:

2 Likes