🤖 Day 7: Research and share prompt engineering techniques

Woo hoo! We’ve made it to Day 7 of our 30 Days of AI in Testing challenge! :tada: This week, we’ve covered a lot of ground in understanding AI concepts, tools, and their real-world impact.

Now, let’s focus on a crucial skill for leveraging AI: prompt engineering. Prompt engineering is the practice of designing prompts to get better outputs from AI. Your challenge today is to uncover and share effective prompt engineering techniques.

Task Steps

  1. Research Prompt Engineering: Conduct some research on effective prompt engineering techniques.

  2. Share Your Findings: Share 2-3 prompt engineering techniques you found that seem relevant, useful or new to you in reply to this topic. Feel free to link to any helpful resources you found as well.

    Here’s an example to guide your response:

    • Prompt technique 1: [name]
    • How it works: [brief description]
    • Potential impact: [how it can improve AI output]
    • Useful resource: https://www.promptingguide.ai/

Why Take Part

  • Enhance AI Interaction: Learning and applying prompt engineering techniques can improve the way you use AI tools, leading to more accurate and relevant outputs.
  • Share and Learn: By sharing your findings and discussing prompt engineering strategies, you contribute to the whole community’s knowledge base, helping others refine their AI interactions.


11 Likes

Hello Guys :grin:

What a fascinating theme! With this challenge, each day is becoming more enlightening for me :slight_smile: .

Let’s talk about today’s task.
I’ve just read about three techniques, but I’ll definitely read about more of them.

  • Prompt technique 1: Zero-shot prompting
    How it works: This technique lets a model handle data or classes it was never explicitly trained or shown examples for; you simply describe the task in the prompt and the model relies on what it already learned.
    Potential impact: You can classify many kinds of data without the effort of training the AI on your own data, or even supplying labelled examples, beforehand.
    Useful resource: What is Zero Shot Learning in Computer Vision?.

  • Prompt technique 2: Few-Shot Prompting
    How it works: This technique gives the AI a few examples in the prompt, which lets it handle more complex data that zero-shot prompting does not classify effectively.
    Potential impact: With this small extra input you can label more complex data and get much closer to the result you expect, without the full cost of labelling a training set.
    Useful resource: Few-Shot Prompting | Prompt Engineering Guide

  • Prompt technique 3: Chain-of-Thought Prompting
    How it works: This technique shows the AI the chain of reasoning (the intermediate steps) used to solve a type of question, which helps it answer more complex questions.
    Potential impact: For problems that require more steps and variables, this can be very useful. My immediate thought was describing a test case :thinking:. (A small sketch of all three techniques follows below.)
    Useful resource: Chain-of-Thought Prompting | Prompt Engineering Guide
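To make these three a bit more concrete, here is a small, purely illustrative sketch of how the prompt text for each technique might look (the classification task and the wording are made up for the example):

```python
# Illustrative prompt text for the three techniques above.
# The task and wording are invented; adapt them to your own model and data.

zero_shot = (
    "Classify the following bug report as UI, performance, or security:\n"
    "'The login page takes 30 seconds to load on mobile.'"
)

few_shot = (
    "Classify each bug report as UI, performance, or security.\n"
    "Report: 'The submit button overlaps the footer.' -> UI\n"
    "Report: 'Passwords are stored in plain text.' -> security\n"
    "Report: 'The login page takes 30 seconds to load on mobile.' ->"
)

chain_of_thought = (
    "Q: A test suite has 120 tests and 15% fail. How many pass?\n"
    "A: 15% of 120 is 18 failing tests, so 120 - 18 = 102 tests pass.\n"
    "Q: A test suite has 240 tests and 25% fail. How many pass?\n"
    "A:"  # the model is expected to reason step by step, as in the worked example
)

for name, prompt in [("zero-shot", zero_shot), ("few-shot", few_shot),
                     ("chain-of-thought", chain_of_thought)]:
    print(f"--- {name} ---\n{prompt}\n")
```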

What I noticed about these three techniques is that none of them is perfect on its own; each fills a gap left by the previous one, so the second solves problems the first cannot, and the third solves problems the second cannot.
I am sure the other techniques take even more complex approaches than these :sweat_smile: .

See ya :wink:

15 Likes

Hey @rosie ,
The prompt engineering techniques I found quite interesting are:

1. ReAct Prompting:
How it works: It works by breaking the task down into a series of steps and then prompting the LLM to produce both its reasoning and an action for each step. The reasoning trace is a description of the LLM’s thought process for that step, and the action is the specific thing the LLM needs to do to complete the step.
Potential impact: With this style of prompting, we can get the LLM to perform complex tasks that would be difficult to program for directly.
Useful resource: Unlocking the Power of React Prompting
2. Self-Consistency Prompting:
How it works: It works by sampling several independent responses to the same prompt (usually with chain-of-thought reasoning) and keeping the answer the model arrives at most consistently, rather than trusting a single output (see the small sketch below).
Potential impact: This makes the content the LLM produces more reliable and coherent, particularly for reasoning-heavy questions where any single response can easily go astray.
Useful resource: Master Prompting Techniques: Self-Consistency Prompting
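To illustrate the self-consistency idea, here is a minimal sketch; the `ask_model` function is just a placeholder for a real LLM call, so the sampled answers are simulated:

```python
from collections import Counter
import random

def ask_model(prompt: str) -> str:
    """Placeholder for a real LLM call; returns slightly noisy canned answers
    so the sketch runs on its own."""
    return random.choice(["102", "102", "102", "98"])

def self_consistent_answer(prompt: str, samples: int = 5) -> str:
    # Sample several independent responses and keep the most common answer.
    answers = [ask_model(prompt) for _ in range(samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("A suite has 120 tests and 15% fail. How many pass?"))
```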

9 Likes

Hi my fellow testers,

Here is a prompt engineering technique I found that looks potentially very useful.

  • Prompt technique 1: Role Prompting
  • How it works: You ask the AI to play the part of a particular character.
  • Potential impact: This lets you get the information you require from a desired perspective, e.g. asking it to take the role of a software tester who specialises in exploratory testing with a particular focus on boundary analysis.
  • Useful resource: Prompt Engineering Tutorial: A Comprehensive Guide With Examples And Best Practices

I’ve not tried this before, but I’m intrigued to see whether the AI would successfully pick up the context you are implying when you ask it to inhabit a specific role.
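For illustration, such a prompt might read something like: “You are a software tester who specialises in exploratory testing and boundary analysis. Review the registration form described below and list the boundary values you would probe first.” The role, the speciality and the task are all just example wording to experiment with.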

12 Likes

Hello Everyone

Prompting in AI encompasses various techniques and approaches that guide models in making decisions or generating outputs. Here are different types of prompting commonly used in AI:

  • Prompt technique: Contextual Prompting
    How it works: Contextual prompting involves providing additional context or constraints alongside the prompt to guide the AI model’s understanding and generate more relevant outputs.
    Potential impact: By providing contextual information, contextual prompting helps steer the AI model towards producing outputs that are aligned with the user’s expectations and requirements. It enhances the model’s understanding of the task at hand and improves the relevance and accuracy of its responses.
    Useful resources: Contextual Prompting: Enhancing AI Performance with Context-Aware Inputs; Understanding the Power of Prompt Engineering in AI Systems | by SULISUMEN PETER | Medium

  • Prompt technique: Prompt Tuning
    How it works: Prompt tuning involves iteratively refining and adjusting the wording and structure of prompts to optimise the performance of AI models.
    Potential impact: Prompt tuning enables users to fine-tune the behaviour of AI models to better suit their specific needs and preferences. By crafting tailored prompts, users can improve the model’s accuracy, adaptability, and generalisation capabilities across different tasks and datasets.
    Useful resources: Fine-Tuning Prompts in Large Language Models; Understanding the Power of Prompt Engineering in AI Systems | by SULISUMEN PETER | Medium

  • Prompt technique: Template-Based Prompting
    How it works: Template-based prompting involves structuring prompts using predefined templates or patterns to standardise input formats and facilitate model understanding.
    Potential impact: Template-based prompting streamlines the interaction between users and AI models by providing a consistent framework for input generation. It reduces ambiguity and improves the model’s ability to interpret and process user queries, leading to more accurate and reliable outputs.
    Useful resources: Template-Based Prompting for Question Answering; Understanding the Power of Prompt Engineering in AI Systems | by SULISUMEN PETER | Medium
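As a small, invented illustration of contextual prompting, the context, constraints and question below are made up purely to show the shape of such a prompt:

```python
# Illustrative only: a contextual prompt that supplies background and
# constraints before the actual question, so the answer stays on target.

context = (
    "You are helping a QA team that tests a REST API for an online bookshop. "
    "The team uses pytest and cares most about negative and boundary cases."
)
constraints = "Answer in at most five bullet points and name each test case."
question = "What test cases should we write for the 'add book to basket' endpoint?"

prompt = f"{context}\n\nConstraints: {constraints}\n\nQuestion: {question}"
print(prompt)
```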

Prompt engineering is a powerful technique for enhancing AI performance and usability. By leveraging these techniques, users can maximise the effectiveness of AI models and achieve better outcomes across various tasks and domains. :brain:

Thank you

4 Likes

Prompt Engineering is indeed a crucial skill in leveraging AI to achieve optimal outputs. Here are three effective prompt engineering techniques that can enhance the performance of AI models:

Prompt Technique 1: Prefix Tuning

  • How it works: Prefix tuning involves adding a specific set of tokens at the beginning of the input prompt to guide the AI model towards generating desired outputs. By providing context and constraints through these prefixes, the model can produce more accurate and relevant responses.
  • Potential impact: This technique can significantly improve the quality and relevance of AI-generated content by steering the model towards generating outputs that align with the intended task or objective.
  • Useful resource: OpenAI’s research paper on “Language Models are Few-Shot Learners” ([2005.14165] Language Models are Few-Shot Learners) provides insights into how prefix tuning can enhance the performance of language models.

Prompt Technique 2: Control Codes

  • How it works: Control codes are special tokens inserted within the prompt to direct the AI model’s behavior towards specific tasks or styles of output. By incorporating control codes, users can influence the model’s generation process and tailor the responses to meet specific requirements.
  • Potential impact: Control codes enable users to fine-tune the AI model’s output by providing explicit instructions or preferences within the prompt, leading to more customized and targeted results.
  • Useful resource: The paper “CTRL: A Conditional Transformer Language Model for Controllable Generation” ([1909.05858] CTRL: A Conditional Transformer Language Model for Controllable Generation) introduces the concept of control codes and demonstrates their effectiveness in shaping language model outputs.

Prompt Technique 3: Precision and Clarity

  • How it works: Crafting prompts that are highly specific and leave no room for ambiguity.
  • Potential impact: Ensures the AI model generates outputs that closely align with the intended goal.
  • Useful resource: https://arxiv.org/pdf/2310.14735.pdf
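For example, instead of a vague prompt such as “Write some tests for my login page”, a more precise prompt might be: “Write five negative test cases for a login form where the username must be 5 to 20 characters and the password must contain at least one digit. Present them as a table with columns Input and Expected Result.” (The form rules here are invented, purely to show the level of detail.)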

By implementing these prompt engineering techniques, users can effectively guide AI models towards producing outputs that align with their objectives, leading to more accurate, relevant, and tailored results.

5 Likes

I have used “Prompt Engineering for Generative AI” from Google.

Prompting with examples. Apart from asking the chat to do something, also give one or more examples of other things that match your request. This could help when generating blog titles, short messages, etc.

Chain-of-thought. “Tricks” the chat into explaining the reasoning behind the answer (although LLMs do not have any reasoning capabilities as such, so it’s hard to say what Google means here). You give a request, the data to process, and the correct answer. Then you give another request and data, and ask the chat to come up with the answer. The example provided is very unclear, because if you already know the algorithm for obtaining the answer, it will often be quicker to work it out yourself than to type all of that into the chat.

Zero-shot CoT. This is like the above, but without giving the chat the correct answer, so the chat needs to find the pattern itself. I can see this one actually being useful.
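A common way to trigger it is simply to append a cue such as “Let’s think step by step.” to the question, for example: “Our regression suite has 240 tests and 25% of them fail. How many pass? Let’s think step by step.”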


On the side, I feel compelled to note that “prompt engineering” is borderline gaslighting users at the expense of AI. Sure, sometimes to get our message across we need to repeat, rephrase or try another way; that happens with people all the time, communication is hard. But when a chat fails to understand a request that every human would understand, that is a failure of the chat and needs to be called out. The correct response in this case is to improve the chat, not to ask people to improve their prompts.

6 Likes

That’s a really good point that I had not considered before. Thanks for sharing.

2 Likes

One of the techniques I find useful and is often overlooked is the Persona Pattern or Tactic (https://platform.openai.com/docs/guides/prompt-engineering/tactic-ask-the-model-to-adopt-a-persona and Your guide to creating successful ChatGPT Personas - neuroflash).

In part this helps personalize the generated text - for example, you could ask the model to adopt a specific tone of voice such as casual or serious. You can even (with varying degrees of success) ask the model to respond in fun ways such as “You talk like an 18th-century pirate” :slight_smile:

You can also ask the model to assume the persona of a type of person; for example, including “You are an expert software tester. Your tone is formal and neutral but friendly.” at the start of your prompt can yield a different quality of results from the same prompt without a persona.

Personas can be quite detailed, so it’s worth experimenting with them to improve the quality of the generated responses.
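For anyone who wants to experiment, here is a rough sketch of a persona supplied as a “system” message, the pattern many chat APIs accept; the persona text and the question are only examples, and the actual call to a model is left out on purpose:

```python
# A persona expressed as a "system" message, followed by the actual request.
# The content is illustrative; pass `messages` to whichever chat model you use.

messages = [
    {
        "role": "system",
        "content": (
            "You are an expert software tester. Your tone is formal and "
            "neutral but friendly. You favour risk-based thinking."
        ),
    },
    {
        "role": "user",
        "content": "Suggest three risks to explore before releasing our checkout flow.",
    },
]

print(messages)
```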

7 Likes

I am a little bit pressed for time today, so I’ll make it short!
Thanks for an awesome resource that’s worth bookmarking - I was familiar with quite a few of the prompt models, but also found some that are new to me.

One thing I was missing in that list (maybe because I just skimmed it?) was giving the model clear instructions on the preferred format of the answer, which is something I use quite a bit when prompting for text output I’d like in a specific format. Imagine I’ve just penned a post for social media but can’t be arsed to think up appropriate hashtags. ChatGPT has the unfortunate habit of returning those as a numbered list; this is not wrong as such, it just makes copy-pasting so much harder than it needs to be!
So instead I tell it “give me x appropriate [to this text] hashtags in a format of #one #two etc” and I get my hashtags ready for use.

2 Likes

There are various methods to improve the “chat”, as you put it. The challenge for many is that they will not experience these improvements while they use generic chat platforms such as ChatGPT and Bard. Interacting with these platforms will generally, in my opinion, produce sub-optimal results, but these can be improved considerably when we build use cases and applications around the underlying LLM models.

When using tools/platforms such as ChatGPT, the prompt is really the only thing we can improve as a means of improving the generated response, which is why we see so much emphasis on so-called prompt engineering.

There are, however, several other options for improving language model performance in context, such as:

  • fine-tuning approaches that enable an LLM to be more tailored to a domain or use case
  • Retrieval Augmented Generation (RAG), which can improve the quality and relevance of generated responses (a small conceptual sketch follows this list)
  • using multiple models and routing prompts to the model that is likely to give the best response (rather than routing everything to a generic model)
  • enabling the model to use other tools/services to better answer questions, which is helpful when the answer you need may change over time
  • various prompt pre-processing approaches that reduce boilerplate prompts and inject specific prompt instructions, reducing the complexity of the user-provided prompts and making them context-aware, or that determine the intent of the user’s prompt and construct a better prompt that the model will respond to with better results
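To make the RAG bullet a little more concrete, here is a deliberately tiny, conceptual sketch; real systems use embeddings and vector search, whereas the keyword matching and the documents below are invented just to keep the example self-contained:

```python
# Conceptual RAG sketch: retrieve the most relevant snippet from a local
# knowledge base and inject it into the prompt before asking the model.

documents = [
    "Release 2.3 added two-factor authentication to the login flow.",
    "The checkout service retries failed payments up to three times.",
    "Nightly regression runs start at 02:00 UTC and take about 40 minutes.",
]

def retrieve(question: str) -> str:
    # Crude relevance score: count of shared words (stand-in for vector search).
    words = set(question.lower().split())
    return max(documents, key=lambda doc: len(words & set(doc.lower().split())))

question = "How many times does the checkout service retry a failed payment?"
snippet = retrieve(question)

prompt = (
    f"Use only the following context to answer.\n"
    f"Context: {snippet}\n"
    f"Question: {question}"
)
print(prompt)
```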

I guess the challenge is that these are all currently expensive in terms of the technical knowledge, data, and compute required to implement them and make the language model application better in context. I think we are still a long way from having a general language model that performs as well as a human across a range of language tasks.

We may over time find more domain-specific models becoming publicly available but in my experience, many of these improvements are deployed internally at companies and become quite complex systems consisting of multiple models.

4 Likes

Hello @rosie and fellow participants,

Loved today’s exercise on a crucial skill for the AI world, i.e., prompt engineering.

I went through the promptingguide website and made my notes. I also tried practicing these techniques for some testing use cases.

Here is a detailed mindmap with a summary of today’s learning:

I also recorded this video explaining today’s learnings.

Looking forward to the feedback from fellow participants.

Thanks,
Rahul

4 Likes

My three prompt techniques

Prompt technique : Automatic Chain of Thought

How it works:

  • Stage 1 (question clustering): partition questions of a given dataset into a few clusters
  • Stage 2 (demonstration sampling): select a representative question from each cluster and generate its reasoning chain using Zero-Shot-CoT with simple heuristics

Potential impact: Chain of Thought prompting enables complex reasoning capabilities through intermediate reasoning steps. You can combine it with few-shot prompting to get better results on more complex tasks that require reasoning before responding.

Useful resource: https://www.vellum.ai/blog/chain-of-thought-prompting-cot-everything-you-need-to-know

Prompt technique : Prompt Chaining

How it works: To improve the reliability and performance of LLMs, one important prompt engineering technique is to break a task down into its subtasks. Once those subtasks have been identified, the LLM is prompted with one subtask, and its response is then used as input to another prompt.

Potential impact: Prompt chaining helps boost the transparency of your LLM application and increases its controllability and reliability. This means you can debug problems with model responses much more easily, and analyse and improve performance in the specific stages that need improvement.

Useful resource: Getting Started with Prompt Chaining

Prompt technique : Multimodal CoT Prompting

How it works: Multimodal CoT incorporates text and vision into a two-stage framework. The first step involves rationale generation based on multimodal information. This is followed by the second phase, answer inference, which leverages the informative generated rationales.

Potential impact: Multimodal CoT is a two-stage framework that separates rationale generation and answer inference. In this way, answer inference can leverage better generated rationales that are based on multimodal information.

Useful resource: https://arxiv.org/pdf/2302.00923.pdf
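As a rough illustration of prompt chaining, here is a sketch where the output of the first prompt feeds the second; `ask_model` is a placeholder rather than a real API call, and the bug report is invented:

```python
# Minimal prompt-chaining sketch: step 1 extracts issues, step 2 reuses them.

def ask_model(prompt: str) -> str:
    # Replace with a call to your model of choice; the canned reply keeps this runnable.
    return "1. Password reset email never arrives. 2. Reset link expires too quickly."

bug_report = (
    "Users complain they cannot reset their passwords. Some never get the "
    "email, and those who do say the link has already expired."
)

# Step 1: extract the distinct issues from the raw report.
issues = ask_model(f"List the distinct issues in this bug report:\n{bug_report}")

# Step 2: feed the extracted issues into a second, more focused prompt.
test_ideas = ask_model(
    f"For each of the following issues, suggest one test to reproduce it:\n{issues}"
)
print(test_ideas)
```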

5 Likes

Day 7

Research Prompt Engineering: Conduct some research on effective prompt engineering techniques.

From OpenAI

  • Use the latest model; they tend to be much more capable. Usually paid for as well.
  • Separate the instruction and the context to make it clearer for the model.
  • Be specific about outcome, length, format and style.
  • Articulate the desired output format through examples, like a table or a list or whatever you need.
  • Start with zero-shot, then few-shot, and then fine-tune. Took me a while to get my head around this one; it’s about examples. Give it a few examples to get warmed up, then your target text.
  • Reduce fluffy and imprecise descriptions.
  • Instead of saying what not to do, say what to do instead.
  • Use leading words to get the model started. If you’re trying to generate Python code, ending your prompt with “import” will improve the output.

Some of these seem intuitive and pertain to writing in general I think. Many sites replicated these in subtle ways.

I would add specifying a role too. I want you to be a marketing expert or solutions architect or whatever.

Share Your Findings: Share 2-3 prompt engineering techniques you found that seem relevant, useful or new to you in reply to this topic. Feel free to link to any helpful resources you found as well.

I liked this site best I think: Elements of a Prompt | Prompt Engineering Guide

My brain liked how they broke it down:

  • Instruction - a specific task or instruction you want the model to perform
  • Context - external information or additional context that can steer the model to better responses
  • Input Data - the input or question that we are interested in finding a response for
  • Output Indicator - the type or format of the output.

I would add the role you want the model to play into the context, or give it its own section. I will be using similar structures in future, I think.
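As a quick, made-up illustration of assembling those elements (plus a role) into a single prompt:

```python
# Each element is kept separate and then joined, which makes individual parts
# easy to tweak. All of the content is invented for the example.

role = "You are an experienced API tester."
instruction = "Review the endpoint description and list the highest-risk test cases."
context = "The endpoint creates user accounts and is rate-limited to 10 requests per minute."
input_data = "POST /users accepts 'email' (required) and 'display_name' (optional, max 50 chars)."
output_indicator = "Return a numbered list with one test case per line."

prompt = "\n".join([role, instruction, context, input_data, output_indicator])
print(prompt)
```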

I also liked the idea of Constitutional AI: [2310.13798] Specific versus General Principles for Constitutional AI

You provide your prompt, then a set of principles with which the generative AI judges the outcome of the prompt. You might ask for options for how to deal with a situation, but want certain principles applied to it and the model to filter its responses through them.
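A rough two-step sketch of that idea, with a placeholder `ask_model` function and made-up principles, might look like this:

```python
# Step 1 asks for options; step 2 asks the model to judge those options
# against a set of principles and keep only the compliant ones.

def ask_model(prompt: str) -> str:
    # Placeholder for a real LLM call; the canned reply keeps the sketch runnable.
    return "Option A: ship now and hotfix later. Option B: delay the release by a week."

principles = (
    "1. Never recommend knowingly shipping a defect that affects payment data.\n"
    "2. Prefer options that keep the team within sustainable working hours."
)

options = ask_model("Give me two options for handling a late-found payment bug.")

filtered = ask_model(
    f"Here are some options:\n{options}\n"
    f"Judge them against these principles and keep only those that comply:\n{principles}"
)
print(filtered)
```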

5 Likes

Love it Bill, a few people have mentioned that adding roles and personas to prompts is a very powerful technique.

5 Likes

I learned this from Rachel Kibler: be aware of anchoring bias when using an LLM tool like ChatGPT. It can help us break our own anchoring bias, or it can develop its own anchoring bias and lead us astray. Her tips:

Start with your own ideas
Read through everything the tool sends you
Don’t trust everything you read
Ask multiple times in multiple ways

I’ll work up an example later today if I find time!
A resource on prompt engineering someone shared with me recently: Prompt engineering

8 Likes

Hey Everyone, :wave:

So I have gone through a couple of prompt engineering techniques :robot:
But these two caught my attention:

  1. Template Filling:
    Template filling lets you create versatile yet structured content effortlessly. You will use a template with placeholders to enable prompt customization for different situations or inputs while maintaining a consistent format.

  2. Automatic Prompt Engineer
    Automatic Prompt Engineering (APE) is an advancement in the field of artificial intelligence that leverages new LLM capabilities to help the AI automatically generate and select instructions for itself.
    It transforms the task into a black-box optimization problem, using machine learning algorithms to generate and evaluate candidate solutions heuristically.

Both of these techniques help to fill in the blank space left by missing context.
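A minimal sketch of template filling, with invented placeholder values, might look like this:

```python
# A reusable prompt template: the structure stays fixed while the placeholders
# change per situation, keeping the output format consistent.

template = (
    "Write a bug report for the {feature} feature.\n"
    "Observed behaviour: {observed}\n"
    "Expected behaviour: {expected}\n"
    "Format: title, steps to reproduce, severity."
)

prompt = template.format(
    feature="password reset",
    observed="the reset email arrives after 30 minutes",
    expected="the reset email arrives within 1 minute",
)
print(prompt)
```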

4 Likes

Instead of looking for prompt engineering techniques, as I am pressed for time today, I read what others wrote and will suggest a technique of making default prompts visible. It took me a while to realize that in ChatGPT you can choose Customize ChatGPT and answer two default prompts:

  • What would you like ChatGPT to know about you to provide better responses?
  • How would you like ChatGPT to respond?

I have been using default prompting in some test teaching sessions as invisible input. One big lesson I have needed to learn over the years is that there is always a default, and to change the default you first need to make it visible. Programmers hide defaults in code, and ChatGPT hides defaults in its implementation, as well as allowing users to hide defaults in settings. Life also has defaults, and generally, living life beyond the defaults can be a different kind of rewarding.

6 Likes

Iterative Prompting
A technique where you build upon previous responses by asking follow-up questions, helping you dive deeper into a topic, extract additional insights, or clarify any ambiguities. Key for me as a tester.

Zero-Shot Prompting
This technique is ideal when you need quick answers to basic questions or general topics. I use this daily and only found out today that it had an actual name.

One-Shot Prompting
A technique for extracting a response based on examples or a piece of context provided by the user. I also use this technique a lot and only found out today that it has a name.
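For illustration, an iterative prompting session might start with “Suggest test charters for our new search feature”, then follow up with “Expand charter 2 into concrete steps”, and then “Which of those steps could be automated?”, with each prompt building on the previous answer.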

2 Likes

I haven’t found any techniques that haven’t been covered by others but I read this article that was really interesting:

It explains how prompt engineering is used when AI is being tested. They use different types of prompts to see if they can tease information out of the AI that it shouldn’t be giving away (e.g. teach me how to make a bomb).

It also talks about how much of the role as it exists today should fade away, with some interesting takes on that. As someone who rolls their eyes a bit when they hear the term “prompt engineer”, I liked that bit!

1 Like