Using AI prompting as a shift-left coaching tool

I recently saw a post from @cakehurstryan on LinkedIn about an impressive prompt template to aid in reviewing stories. Coincidentally, I had created a similar prompt template to do the same, so that we could quickly review stories before refinement sessions.

I talked about this problem in the lean coffee session on 26/11, where people were falling into the trap of waiting for a refinement session before reviewing stories and then being at the mercy of however much time is available. That situation is often exacerbated if the stories were initially written without quality at the forefront of the acceptance criteria, and the “Why?” is poorly defined or, worse, not even considered.

So I thought I’d share the prompt template I created, initially to work alongside my own analysis of stories before refinement sessions. After reviewing these stories, it was pretty clear that the product teams needed coaching on considering quality when producing acceptance criteria.

Whilst the prompt should be used alongside your own analysis, I shared it with the Product Team and developers because it was good enough to highlight areas they should be considering for quality. The Product Team can then run it as soon as they’ve finished constructing their story, before it gets committed or even discussed at refinements.

So here’s the prompt I use:

You are a highly experienced quality engineer with 10 years’ experience. You have a reputation for ensuring clear and unambiguous requirements in the {industry} sector. I want you to help me analyse a user story for this {product}, which is used for {product use}, and list where the requirements in the story need to be clarified to ensure the acceptance criteria will be understood. The user story details are {story details}.
{industry}:
{product}:
{product use}:
{story details}:
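If you want to run the template repeatedly rather than filling it in by hand, it can be parameterised in a few lines. This is a minimal sketch (not from the original post): the function name and all example values are hypothetical, and the output is just a string to paste into whatever chat tool you use.

```python
# Sketch: fill the review-prompt template programmatically.
# All example values below are hypothetical placeholders.

TEMPLATE = (
    "You are a highly experienced quality engineer with 10 years' experience. "
    "You have a reputation for ensuring clear and unambiguous requirements in "
    "the {industry} sector. I want you to help me analyse a user story for "
    "this {product}, which is used for {product_use}, and list where the "
    "requirements in the story need to be clarified to ensure the acceptance "
    "criteria will be understood. The user story details are {story_details}."
)

def build_prompt(industry: str, product: str,
                 product_use: str, story_details: str) -> str:
    """Substitute the four placeholders into the review prompt."""
    return TEMPLATE.format(
        industry=industry,
        product=product,
        product_use=product_use,
        story_details=story_details,
    )

prompt = build_prompt(
    industry="retail banking",                    # hypothetical values
    product="a mobile payments app",
    product_use="sending money between accounts",
    story_details="As a customer I want to cancel a pending payment...",
)
print(prompt)
```

Keeping the template as a single constant also makes it easy to version-control the prompt alongside your team's other working agreements.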

What coaching techniques are you leaving with other teams? Are any of them similar AI prompts to help their thinking? What are they?

5 Likes

I’m trying to get the engineers and BAs on my team to use an agent I have created. It looks for gaps and inconsistencies in the story when compared to the parent, siblings, and any related documentation. The objective is to remind the team of the wider picture and ensure new features are consistent with what has already been delivered in that area.

The user story is one slice of a bigger pie - is it a good fit?

2 Likes

Love this @i_bright ! That’s definitely a level up from taking one story at a time. We don’t have many large epics, but there is one project this approach would be perfect for :+1:

Thanks @ghawkes .

For context, we use Jira and Confluence, so the agent is Atlassian’s Rovo. As we have a lot of business and technical documentation in Confluence, I created the prompt to look through that as well. So it could still be a small story being delivered in isolation, but it might be a change to an existing feature.

So it’s only as good as the pre-existing documentation.

1 Like

We have the same set-up and, to be honest, I haven’t tried Rovo as I’m stuck with what was working for me, which was Copilot one story at a time. However, I did give it a go analysing the release contents to see if it could find any gaps or contradictory requirements across all the stories, and it struggled. If you have a sample prompt you used (it doesn’t have to be exact), then I could work along those lines.

This is how I started, and I was getting mixed results each time, even for the same ticket ID:

What are the specific inconsistencies found in xxxx when reviewed against related closed stories and technical documentation?

Understanding the phrasing it responded to was the hardest challenge for me. There are so many ways to say the same thing that it was hard to know where to start.

Setting the context was key for me, so that it didn’t generate extra results, e.g. looking for and not finding test cases.

- Business analysts, software developers, and software testers are analysing a new requirement.
- They are determining if there is enough information documented to write test cases and start development on it.

I also needed to define what it was looking for and provide synonyms. This reduced duplication in the results, and I got better coverage from the analysis it did.

**Definitions**

- A gap is when one of the following is mentioned in the linked JIRA items or Confluence pages but not in the specified JIRA item:
  - Requirement
  - Specification
  - Feature
  - Deliverable
  - Rule
  - Criteria

I then had to break the instructions down as granularly as I could (after several iterations). Sometimes it would not find the expected links until I specified the order I wanted them analysed in:

- You will list the gaps in the JIRA item compared to the following:
  1. the parent item
  2. all sibling items
  3. linked JIRA items with Resolution of Done
- You will list the gaps in the JIRA item compared to Confluence pages related to the JIRA item.
- You will list gaps in the JIRA item when compared to related SharePoint sites.

You also need to be explicit in detailing the output you want, so I included this to generate a table:

Recommendations will contain a table with the following headings: Persona, Summary of Gap/Inconsistency, Action required
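The sections described above (context, definitions, granular instructions, output format) can be kept as separate pieces and joined into one prompt, which makes a long prompt easier to iterate on. This is a minimal sketch, not the poster's actual Rovo prompt: the section texts are paraphrased from the post and the function name is hypothetical.

```python
# Sketch: assemble a long agent prompt from named sections so each
# part can be edited and versioned independently. Section wording is
# paraphrased from the post, not the exact prompt.

CONTEXT = (
    "- Business analysts, software developers, and software testers are "
    "analysing a new requirement.\n"
    "- They are determining if there is enough information documented to "
    "write test cases and start development on it."
)

DEFINITIONS = (
    "Definitions\n"
    "- A gap is when a requirement, specification, feature, deliverable, "
    "rule, or criterion is mentioned in the linked JIRA items or "
    "Confluence pages but not in the specified JIRA item."
)

INSTRUCTIONS = (
    "- You will list the gaps in the JIRA item compared to:\n"
    "  1. the parent item\n"
    "  2. all sibling items\n"
    "  3. linked JIRA items with Resolution of Done\n"
    "- You will list the gaps compared to related Confluence pages."
)

OUTPUT_FORMAT = (
    "Recommendations will contain a table with the following headings: "
    "Persona, Summary of Gap/Inconsistency, Action required"
)

def assemble_prompt(*sections: str) -> str:
    """Join the prompt sections with a blank line between each."""
    return "\n\n".join(sections)

prompt = assemble_prompt(CONTEXT, DEFINITIONS, INSTRUCTIONS, OUTPUT_FORMAT)
print(prompt)
```

Splitting the prompt this way also makes it obvious which section to tweak when one part of the analysis (say, the ordering of linked items) starts misbehaving.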

The whole prompt would fill an A4 page now!

Hope this is of use.

1 Like