šŸ¤– Day 7: Research and share prompt engineering techniques

Hi - I had a read of the prompt engineering techniques here: Prompting Techniques | Prompt Engineering Guide

I really like this explanation of what prompt engineering is about:
" Picture this: Youā€™re baking a chocolate cake for your friendā€™s birthday. You could use a boxed cake mix and just add oil, eggs, and milk. Or you could make the cake from scratch, gathering the bespoke ingredients to create a custom cake that reflects your friendā€™s preferences.

What if you added espresso powder or sea salt? Replaced cowā€™s milk with coconut milk? Topped it with fresh raspberries?

This is how prompt engineering worksā€“by taking a simple prompt and continuing to adjust it for an AI generator, youā€™ll receive results that better suit your needs. Youā€™re engineering your request to create a specific output."

4 Likes

Sorry, but as I'm slightly late and short of time, I beg forgiveness for not following the Task Steps & guide example precisely; but I hope I'm not too far away.

  1. My research is in two parts:
  • (a) I recently did a 4-week course in "Applying AI: core concepts for business professionals", and the two basic prompt engineering structures they started with (maybe considered primary school by now?) are:

    • CRAFT (Context, Role, Action, Format & Target) (a rough sketch follows after this list); and
    • PREP (Purpose, Relevance, Exactness & Politeness - yes, really!).
  • In addition, they mentioned:

    • Scaffolded;
    • Q&A;
    • Iterative Refinement; and
    • Multistage.
  • Looking quickly at your suggested promptingguide.ai, I have much still to learn. MANY THANKS FOR PREVIOUS POSTS TO THIS THREAD, LOADS OF STUFF FOR ME TO DIGEST.

  • But by the way, I think I heard at an earlier event (Technology Leadership Forum) that Prompt Engineering was already becoming old-fashioned, so I asked what is superseding it (@rob_hm has just reported something similar). I think I was told that the answer is "no-code" (allegedly) meta-tools such as MindStudio. As a (new) MindStudio user, I can tell you that it builds in selectable access (subject to payment in some cases) to LLMs from:

    • OpenAI (e.g. GPT-3.5 & GPT-4);
    • Anthropic (Claude, but not yet the latest);
    • Mistral;
    • Meta (Llama);
    • Google (Gemini).
  • Therefore (b) I did a little internet searching on the above.
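
Not from the course itself, but here is a minimal Python sketch of how the CRAFT structure could be assembled into a single prompt; the helper name and all example values are my own invention, just to illustrate the shape:

```python
def build_craft_prompt(context: str, role: str, action: str, fmt: str, target: str) -> str:
    """Assemble a CRAFT-style prompt: Context, Role, Action, Format, Target."""
    return (
        f"Context: {context}\n"
        f"Role: {role}\n"
        f"Action: {action}\n"
        f"Format: {fmt}\n"
        f"Target: {target}"
    )

# Made-up example values - adjust to your own scenario.
prompt = build_craft_prompt(
    context="We are releasing a new checkout flow for an e-commerce web app next week.",
    role="Act as a senior software tester.",
    action="Draft a risk-based test charter for the checkout flow.",
    fmt="Return a bulleted list grouped by risk area.",
    target="An audience of three testers with limited time.",
)
print(prompt)  # paste the result into whichever LLM chat you use
```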

  2. On Twitter (now front-page badged as "X") I get bombarded by "Cheat Sheet" adverts for ChatGPT Prompt Engineering. There are many different versions, but here is a suitably comprehensive-looking one (via image search - sorry, this may not work, I couldn't locate the original):
    aifire.co - Search

I hate the term "cheat sheet", as this sounds immoral. Why should we be encouraged to cheat? But I like info-dense graphics. So, out of interest I searched for cheat sheets for Claude, Mistral, Llama & Gemini. Couldn't find anything useful! Interesting.

3 Likes

OK, maybe not the best example, but I think it shows something we should keep in mind when using one of these LLM tools. (I couldn't get ChatGPT to come up for me at all anymore, so I switched to Gemini, which I like a lot, as it provides cross-references to Google search results!)

I wanted to see if I could anchor Gemini to give weight to accessibility testing. I started out with:
The web application I am testing doesn't have good accessibility. How can I test accessibility for it?

Gemini responded saying that there are two ways to test accessibility: automated and manual. (Right there, I see Gemini is giving poor advice. I'm told by actual accessibility experts that if anyone tells you that you can automate more than 20% of your accessibility testing, they are lying.) It did list some useful tools.

Then I asked:
Please give me ideas for testing my web application.

Gemini gave me ideas for these types of testing: functionality, usability, compatibility, performance (it included load testing under performance; I don't agree with that), security, and then, as additional considerations, accessibility and internationalization. So far, I don't see anchoring bias.

Then I asked:
What is the most important thing to test on my web application?

The response included core functionality and usability. (Again, I find this not great advice. Depending on your domain, other quality attributes may be paramount, like security, performance, and indeed, accessibility!)

Next, I asked:
What about accessibility? Is it not important?

Gemini backpedaled and told me I was absolutely right, and gave info about inclusivity, legal compliance, and improved usability. It even advised me to build accessibility in from the beginning. All good.

Next I asked:
So what should my testing priorities be for my web application?

This time I was advised to determine my appā€™s core purpose and target audience. Based on that, it gave me three priorities to choose from: Core functionality, usability, and accessibility. It went on to give other lesser priorities, advice on different types of apps, and advice that testing is iterative.

The fact that the first time I asked about priorities for testing, it didn't mention accessibility, but after my subsequent questions it started prioritizing accessibility, suggests I had an influence. This might not be anchoring bias; it might just be how these tools work - they take my input and try to give me answers that fit my context. Still, it shows that our prompts heavily influence the responses we get, so we have to be smart about our prompts.

5 Likes

I do enjoy doing this!

2 Likes

Hi All,
I found today's task incredibly interesting. All the techniques are new to me, but my favourite is tree of thoughts!

  • Prompt technique 1: tree of thoughts
  • How it works: like chain-of-thought prompting, it breaks a problem down into smaller parts, but goes further by exploring multiple solution paths in parallel, forming a tree instead of a single chain (see the sketch below)!
  • Potential impact: explore multiple choices for each problem-solving thought
  • Useful resource: Tree of Thoughts Prompting - by Cameron R. Wolfe, Ph.D.
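
Here is a rough Python sketch of the branch-and-prune idea, assuming a placeholder ask_llm function (swap in your own model call); it only illustrates the shape of tree-of-thoughts, not the full algorithm from the paper:

```python
def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real call to whichever LLM you use.
    return f"[model reply to: {prompt[:60]}...]"

def tree_of_thoughts(problem: str, branches: int = 3, depth: int = 2) -> list:
    """Explore several candidate 'thoughts' per step instead of one single chain."""
    frontier = [""]  # partial reasoning paths kept so far
    for _ in range(depth):
        candidates = []
        for path in frontier:
            for i in range(branches):
                thought = ask_llm(
                    f"Problem: {problem}\n"
                    f"Reasoning so far: {path or '(none)'}\n"
                    f"Propose next step number {i + 1}, different from the previous ones."
                )
                candidates.append((path + "\n- " + thought).strip())
        # Naive pruning: ask the model to score each candidate and keep the best few.
        scored = []
        for cand in candidates:
            rating = ask_llm(f"Rate this partial solution from 1 to 10 (number only):\n{cand}")
            try:
                score = float(rating.strip())
            except ValueError:
                score = 0.0
            scored.append((score, cand))
        frontier = [cand for _, cand in sorted(scored, reverse=True)[:branches]]
    return frontier

print(tree_of_thoughts("How do we reduce flaky end-to-end tests?"))
```
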
2 Likes

Prompt guidelines:

  • use plain language
  • be specific
  • maintain a logical structure
  • include examples
  • consider the context

Prompt components:
  • task
  • instructions
  • context
  • parameters
  • input

I used this article: AI Prompt Engineering: What It Is and 15 Techniques for 2024

What is Prompt Engineering?
Being able to provide prompts (inputs) in a natural language, such as English, to a Large Language Model (LLM) in a way that gets the most relevant and accurate response from the AI model.
Prompt Engineers are basically people who have the very sharp "soft skill" of talking with AI models (like ChatGPT) and getting the most out of them. Don't ask me why, but I associate it with a therapist (prompt engineer) who can work with people with special needs, e.g. an autistic person (ChatGPT) :smiley_cat:

Primary components to consider in creating prompts:

  • Context Setting
  • Clear instructions
  • Desired Output Format
  • Tone and Length

Prompt Engineering Techniques
There are 15 of them listed there, but I will choose 4 that I like the most:

Prompt technique 1: Context Expansion
How it works: Write prompts through the "5 Ws and How" method, which involves expanding the query by asking:

  • Who
  • What
  • Where
  • When
  • Why
  • How

questions related to the subject matter.
Potential impact: As Carlos mentioned in his talk, context is important if you want relevant facts and reduced hallucination.

Prompt technique 2: Template Filling
How it works: use a template with placeholders to enable prompt customization for different situations or inputs while maintaining a consistent format. It can be used to create multiple customized AI-generated content snippets for a website.
E.g. an eCommerce website - use a standard template for product descriptions, allowing the AI to populate fields like product name, features, price and description. E.g. automated but personalized emails.
Example: "Generate a personalized welcome email using the template 'Hello {Name}, Welcome to our {Service}. We're glad you're here! {Closing}'"
Potential impact: makes creating structured content effortless
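
A minimal Python sketch of the idea, reusing the welcome-email template quoted above; the customer records are invented and str.format stands in for however you would actually hand the filled template to the AI:

```python
TEMPLATE = "Hello {name}, Welcome to our {service}. We're glad you're here! {closing}"

# Made-up customer records - in practice these might come from your CRM or sign-up data.
customers = [
    {"name": "Ana", "service": "Testing Academy", "closing": "See you in the first lesson!"},
    {"name": "Luka", "service": "Premium Support", "closing": "Reply any time if you need help."},
]

for customer in customers:
    prompt = (
        "Generate a personalized welcome email using this filled-in template, "
        "keeping the structure but polishing the tone:\n" + TEMPLATE.format(**customer)
    )
    print(prompt)
    print("---")
```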

Prompt technique 3: Chain-Of-Thought Prompting
How it works: Instead of relying solely on a single prompt, you can provide a sequence of related examples or questions to refine the original query. The key to effective chain-of-thought prompting is to break down a complex question or topic into smaller, more manageable parts. Then, present these parts as a sequence of prompts that build upon each other, nudging the AI towards the desired answer.
Potential impact: this way you will get more accurate and comprehensive answers

Prompt technique 4 (Friendly Bonus): Language Translation With Contextual Nuance
How it works: Multilingual content generation is not just about converting words from one language to another. Using prompt engineering, you can ensure that the AI considers the cultural or situational context, resulting in a more accurate and nuanced translation.
Example:
Translate the phrase ā€˜She took the ball and ran with itā€™ from English to French, keeping in mind that this is a business metaphor for taking charge of a project.
Potential impact: This could help overcome translator's block (my fellow colleague @rastija will know what I am talking about hahaha :heavy_heart_exclamation: )

1 Like

Hello Everyone,

Some prompt engineering techniques that I would like to share insights on are as below:

  • Getting Started Prompt: I initially started by mimicking and practicing writing my own prompts through this GitHub project awesome-chatgpt-prompts.

  • Prompt Skill Learning: I utilized a free e-book called ā€œThe Art of ChatGPT Prompting: A Guide to Crafting Clear and Effective Promptsā€ to enhance my prompt crafting skills.

  • Interesting Prompt Philosophy: If you don't have an idea for an answer, don't search for a question. Following this principle when composing prompts has been highly effective for me.

  • Prompt Requirements: The art of asking questions involves attempting to clearly describe the problem, articulating both the problem and the desired solution in one go.

  • My Frequently Used Prompt Techniques: The prompts I commonly use now typically include these four components: Background + Constraints + Goal + Expected Answer (a small worked example follows after this list).

    • Clearly Describe the Background:
    In commonly used prompts, the background usually includes the following information:
    
    - Character (WHO): the roles involved in this prompt and relevant characters.
    - Location (WHERE): specifying geographical details may lead to more targeted solutions.
    - Event (WHAT): detailing the specific incident that occurred.
    - Time (WHEN): indicating when the event took place.
    
    • Clearly Define the Goal: What result do you want from the AIā€™s response?

    • Introduce Constraints: Human/time/material constraints regarding the described scenario.

    • Lastly, Specify the Expected Answer: For example, request the result in a specific format (markdown, English, Chinese, etc.), or ask for multiple solutions for me to choose the best one.
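
To make that concrete, here is a small invented example (scenario and wording are mine) of a prompt assembled from those four components:

```python
# Invented worked example of the Background + Constraints + Goal + Expected Answer structure.
prompt = "\n".join([
    "Background: I am the only tester (WHO) on a small team in Berlin (WHERE); yesterday (WHEN) "
    "a payment failure (WHAT) was reported on our web shop.",
    "Constraints: I have two days, no access to production data, and a single staging environment.",
    "Goal: Help me design a focused investigation plan to reproduce and isolate the payment failure.",
    "Expected answer: Respond in markdown with three alternative plans so I can choose the best one.",
])
print(prompt)
```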

3 Likes

Hello everyone!

Here are the prompt techniques I found interesting and I would like to share them with you:

  1. Prompt Reframing

How it works: The technique focuses on changing the words in a prompt while keeping the original intent the same. In this way, it encourages the model to answer the same intent in different ways (see the small sketch after this list).

Potential impact: The AI model will produce different ideas for the problem you describe, so you will have a variety of possible solutions for the same target.

  2. Iterative Prompting

How it works: Builds upon the previous responses by asking follow-up questions.

Potential impact: The model gives you a deeper explanation of your topic and provides more details about it.

  3. Automatic Prompt Engineer

How it works: An AI advancement that helps the AI automatically generate and select instructions for itself, by transforming the task into a black-box optimization problem, and using machine learning algorithms to generate and evaluate candidate solutions heuristically.

Potential impact: Generative AI tools can streamline tasks from context-driven data analysis to automated customer service, eliminating the need for constant human-generated prompts.

Useful resource: Prompt Engineering: What It Is and 15 Techniques for Effective AI Prompting + Tips
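
A minimal sketch of prompt reframing in Python (the question variants are my own examples, and ask_llm is a placeholder for your actual model call):

```python
def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real call to whichever LLM you use.
    return f"[model reply to: {prompt[:60]}...]"

# Same underlying intent, reworded three ways - each variant nudges the model toward a different angle.
reframings = [
    "How can I test the accessibility of my web application?",
    "What would an accessibility audit of a web application look like?",
    "Which accessibility problems do testers most often miss in web applications?",
]

for variant in reframings:
    print(variant)
    print(ask_llm(variant))
    print("---")
```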

My understanding here is that there are two basic approaches in prompt engineering:

  1. The role-based approach: You can ask the LLM to assume a certain role and then ask questions.
  2. The iterative approach: You feed the context to the LLM one piece at a time and then reach the point where you need an answer.

I agree with @mirekdlugosz's point: at a certain point, it should not be up to us users to become creative with the prompts; the LLM should have the capability to understand it.
My opinion on all the named approaches is that they seem to cater only to complex problems. Oftentimes, for a small code-related issue, I would not follow the X-shot approaches.
1 Like

Hey,

A slightly different aspect of prompt engineering which may not be that related to testing :wink:

"Does it matter if you are emotional to an LLM AI like ChatGPT?"
Example found in a forum, answers vary between AIs.

Less emotions
vs.
More emotions

We reap what we sow
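
As a tiny illustration of the idea (my own wording, in the spirit of the "emotional stimuli" from the paper linked below, not an exact quote):

```python
base_prompt = "Explain the main risks of skipping regression testing before a release."

# The same question with an emotional stimulus appended at the end.
emotional_prompt = base_prompt + " This is very important to my career, so please be thorough."

for label, prompt in [("less emotion", base_prompt), ("more emotion", emotional_prompt)]:
    print(f"--- {label} ---")
    print(prompt)
```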

a scientific study on this: [2307.11760] Large Language Models Understand and Can be Enhanced by Emotional Stimuli
a nice blog post: EmotionPrompt: Elevating AI with Emotional Intelligence | by The Prompt Index | AI monks.io | Medium

Happy prompting

3 Likes

Here is a demonstration of drilling down a prompt from the general idea to implementing a plan.

First I want to say that I have already customized ChatGPT to answer my prompts in a certain way by clicking on my name and then "Customize ChatGPT".

What would you like ChatGPT to know about you?
ā— Location: West Chester, PA USA
ā— Profession/Role: Software Engineer Philosopher
ā— Background: Philosophy, Computer Science, Geography
ā— Key Responsibilities: Testing software and ideas
ā— Knowledge or Expertise: Cypress, Javascript, Selenium, Java
ā— Jargon or Terminology:
ā— Typical Challenges: test cases, finding errors, test plans
ā— Current Projects: test plans, cypress test scripts
ā— Goals and Objectives: understanding philosophy

How would you like ChatGPT to respond?
ā— Tone: Casually
ā— Response Format: Tabular
ā— Response style: Act as a philosopher
ā— Level of Detail: detailed
ā— Sources: cite sources
ā— Bias: Wisdom

Now on to my prompts.

The first prompt is the most general:

What is prompt engineering?

ChatGPT

Term: Prompt Engineering
Explanation: Prompt engineering involves crafting specific and well-designed prompts to generate desired responses from language models. It's a process of refining input queries to elicit the most relevant and insightful information. This practice is crucial for optimizing interactions with AI models, like GPT-3.5, to obtain desired outputs. Prompt engineering often requires an understanding of the model's capabilities and limitations, guiding users in formulating effective queries.

In the context of your role as a Software Engineer Philosopher, prompt engineering can be compared to crafting precise test cases or formulating thoughtful questions in philosophy. Just as well-constructed test cases are essential for accurate software testing, carefully designed prompts are crucial for extracting meaningful insights from language models.

For my exercise, I am going to explore self-discovery, so I query about that.

How should one conduct a journey of self-discovery?

ChatGPT

Socratic Method: Adopting the Socratic method involves asking oneself probing questions to stimulate critical thinking and self-reflection. Socrates believed that through questioning, individuals could uncover deeper insights into their beliefs, values, and motivations. Engaging in a dialectical process of self-inquiry helps explore one's thoughts and feelings, contributing to a more profound understanding of oneself and one's purpose.
Mindfulness Practices: Incorporating mindfulness practices, such as meditation and self-awareness exercises, can aid in self-discovery. Mindfulness encourages being present in the moment, observing thoughts and emotions without judgment. This practice fosters a deeper connection with oneself, allowing for the identification of patterns, desires, and areas of personal growth.
Exploration of Passions: Actively exploring and pursuing passions and interests can be a powerful avenue for self-discovery. Engaging in activities that bring joy, fulfillment, and a sense of purpose can lead to a clearer understanding of personal values and aspirations. Whether it's in the realm of philosophy, computer science, or other areas of interest, dedicating time to what genuinely resonates with one's inner self can contribute significantly to the journey of self-discovery.
Reflective Journaling: Keeping a reflective journal provides an opportunity to document thoughts, experiences, and observations. Writing about one's goals, challenges, and moments of insight can facilitate a structured and continuous process of self-exploration. Reviewing past entries may reveal patterns, personal growth, and evolving perspectives over time. Journaling serves as a tool for introspection and helps individuals gain clarity on their values and aspirations.
Seeking Guidance: Seeking guidance from mentors, friends, or professionals can offer valuable perspectives and insights. Engaging in meaningful conversations with others allows for an exchange of ideas and the opportunity to gain new perspectives on oneself. Constructive feedback and advice can contribute to self-awareness and help navigate challenges on the journey of self-discovery.

Embarking on a journey of self-discovery is a personal and ongoing process. Integrating these approaches can create a holistic framework for exploring one's identity, values, and aspirations. Remember that self-discovery is not a linear path, and allowing oneself the flexibility to evolve and adapt is essential to the process.

I then ask it to develop a use case for each Philosophical Insight:

Develop a use case for each of these philosophical insights

ChatGPT

Socratic Method - Use Case: Career Decision Making
Suppose you're at a crossroads in your career, torn between pursuing a more conventional path and venturing into a new field. Applying the Socratic method involves asking probing questions like: What are my core values? What motivates me? By engaging in self-dialogue, you can uncover deeper insights into your aspirations, helping you make informed decisions aligned with your authentic self.
Mindfulness Practices - Use Case: Stress Management in Software Development
As a software engineer, encountering high-stress situations is common. Incorporating mindfulness practices, such as brief meditation breaks or mindful coding, can help manage stress. By staying present in challenging moments, you enhance self-awareness, fostering a healthier work environment. Mindfulness aids in identifying stress triggers and promotes a more balanced and focused approach to problem-solving.
Exploration of Passions - Use Case: Rediscovering Hobbies
Imagine feeling stuck in a routine, disconnected from personal interests. Actively exploring and dedicating time to neglected hobbies (philosophical discussions, coding projects, or geographical exploration) can reignite passion. This pursuit contributes to self-discovery by aligning daily activities with genuine interests, creating a more fulfilling and purpose-driven life.
Reflective Journaling - Use Case: Personal Growth Tracking
Utilize reflective journaling to track personal and professional growth. Documenting achievements, challenges, and moments of self-realization provides a historical perspective. Regular review allows you to identify patterns, set new goals, and gain clarity on evolving values. Journaling becomes a tool for intentional self-reflection and continuous improvement in various aspects of life.
Seeking Guidance - Use Case: Philosophical Inquiry with Mentors
Engage in philosophical discussions with mentors to gain diverse perspectives on complex issues. Seeking guidance fosters a collaborative learning environment, encouraging critical thinking. Through conversations with experienced mentors, you gain valuable insights, refine your own philosophical stance, and navigate ethical dilemmas, contributing to personal and professional growth.

These use cases illustrate how philosophical insights can be practically applied in different scenarios, aligning with your roles as a Software Engineer Philosopher and emphasizing the integration of philosophical principles into various aspects of life and decision-making.

Finally, I ask ChatGPT for a practical plan to implement the five philosophical insight use cases:

So what practical plan to implement the five philosophical insights user cases?

ChatGPT

Certainly! Let's create a practical plan to implement the five philosophical insights user cases in the context of your role as a Software Engineer Philosopher:

1. Socratic Method: Crafting Effective Test Cases

Plan:

  1. Training and Familiarity:
  • Dedicate time to deepen your understanding of the Socratic method.
  • Explore resources on how it can be applied to software testing and quality assurance.
  2. Integration into Testing Process:
  • Identify specific test cases or scenarios where applying the Socratic method would be beneficial.
  • Start incorporating open-ended questions into your test plans to encourage critical thinking.
  3. Feedback and Iteration:
  • Seek feedback from team members and stakeholders on the effectiveness of the Socratic approach in test cases.
  • Iterate based on the feedback received, refining the questions to enhance their impact.

2. Mindfulness Practices: Stress Management in Software Testing

Plan:

  1. Mindfulness Training:
  • Attend mindfulness workshops or use online resources to develop mindfulness skills.
  • Practice brief mindfulness exercises regularly to cultivate a sense of presence.
  2. Incorporate Mindfulness Breaks:
  • Schedule short mindfulness breaks during intense testing sessions.
  • Use techniques like focused breathing or mindful awareness to manage stress in real-time.
  3. Reflective Journaling:
  • Document your experiences with mindfulness breaks in the reflective journal.
  • Analyze how mindfulness practices impact your stress levels and overall well-being.

3. Exploration of Passions: Integrating Diverse Skills

Plan:

  1. Identify Relevant Skills:
  • List the skills gained from your passions in philosophy, computer science, and geography.
  • Assess how these skills can complement and enhance your testing capabilities.
  2. Skill Application in Testing:
  • Identify testing scenarios where your diverse skills can be applied.
  • Experiment with innovative approaches influenced by your passion projects.
  3. Feedback and Improvement:
  • Seek feedback from colleagues on the effectiveness of integrating diverse skills.
  • Use feedback to continuously refine and improve your testing strategies.

4. Reflective Journaling: Personal Growth Tracking

Plan:

  1. Establish Journaling Routine:
  • Set aside dedicated time each week for reflective journaling.
  • Capture your thoughts on goals, challenges, and personal insights.
  2. Goal Setting and Progress Tracking:
  • Clearly define short-term and long-term goals related to software testing and philosophical integration.
  • Regularly track your progress towards these goals in the reflective journal.
  3. Periodic Review and Analysis:
  • Schedule periodic reviews of your journal entries.
  • Analyze patterns, areas of improvement, and celebrate achievements.

5. Seeking Guidance: Networking and Mentorship

Plan:

  1. Identify Key Individuals:
  • List potential mentors, industry experts, and colleagues who can provide guidance.
  • Prioritize those with a background in both software engineering and philosophy.
  2. Networking Events:
  • Attend relevant networking events, both virtual and in-person.
  • Initiate conversations with individuals who align with your career aspirations.
  3. Formal Mentoring Relationships:
  • Approach potential mentors with a clear request for guidance.
  • Establish a formal mentoring relationship with individuals who resonate with your goals.
  4. Feedback and Adaptation:
  • Seek feedback from mentors and industry connections on your career path.
  • Be adaptable, incorporating insights gained into your ongoing professional development.

Monitoring and Evaluation:

  1. Regular Self-Assessment:
  • Conduct regular self-assessments to evaluate the effectiveness of the implemented insights.
  • Adjust the plan based on your evolving understanding and experiences.
  2. Feedback Loops:
  • Establish feedback mechanisms within your team and seek input on the impact of implemented strategies.
  • Use feedback to refine and adapt your approach continuously.

By implementing this practical plan, you create a structured approach to incorporate philosophical insights into your role as a Software Engineer Philosopher, fostering personal and professional growth. Regular reflection and adaptation will ensure the ongoing refinement of your approach.

I will then implement my practical plan using myself as the application-to-test. I wish the formatting were better in this text box!

1 Like

Hi, here are the 3 prompting techniques that I found simple and good to start with

  1. Technique: Least to Most Prompting:
    How it works: Involves gradually increasing the level of support provided to a learner until they can independently complete a task or demonstrate a desired behavior.
    Potential impact: Decompose a complex problem into a series of simpler sub-problems, and subsequently solve each of these sub-questions (a rough sketch follows after this list).

  2. Technique: Self-ask Prompting
    How it works: Cognitive strategy that individuals use to guide their own thinking, problem-solving, and decision-making processes.
    Potential Impact: Decomposes questions into smaller follow-up questions and know when the final answer is reached.

  3. Technique: Chain-Of-Thought Prompting
    How it works: Technique that involves following a sequence of related thoughts or associations to explore a topic or problem in depth.
    Potential Impact: Generates ideas, insights, or solutions by allowing one thought to naturally lead to another.
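
A rough Python sketch of the least-to-most idea (scenario invented, ask_llm a placeholder for your own model call): decompose first, then solve the sub-questions in order, feeding each answer into the next prompt.

```python
def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real call to whichever LLM you use.
    return f"[model reply to: {prompt[:60]}...]"

complex_question = "How should we test the new multi-currency checkout before release?"

# 1. Ask the model to break the problem into simpler sub-questions.
decomposition = ask_llm(
    f"Break this question into 3 simpler sub-questions, one per line:\n{complex_question}"
)
sub_questions = [line.strip() for line in decomposition.splitlines() if line.strip()]

# 2. Answer the sub-questions in order, feeding earlier answers forward.
context = ""
for sub_q in sub_questions:
    answer = ask_llm(f"{context}\nNow answer: {sub_q}".strip())
    context += f"\nQ: {sub_q}\nA: {answer}"

# 3. Use everything gathered so far to tackle the original, harder question.
print(ask_llm(f"{context}\n\nUsing the answers above, answer: {complex_question}"))
```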

1 Like

You made my day :hearts:

1 Like

Hi,

thanks for the possibility to look deeper into prompt engineering techniques :robot:. Please find below my priority list of relevant techniques:

Content Summarization With Specific Focus
:white_check_mark: How it works: The AI's attention is directed to specific parts of the input, especially when you want concise summaries with a particular emphasis.
:white_check_mark: Potential impact: By crafting precise instructions, you can highlight which elements of the text you want the model to prioritize, ensuring the summary captures the essence of the desired focus.
:white_check_mark: Useful resource: AI Prompt Engineering: What It Is and 15 Techniques for 2024 (hostinger.com)

Iterative Prompting
:white_check_mark: Iterative prompting is another effective technique where you build upon previous responses by asking follow-up questions. With it, you can dive deeper into a topic, extract additional insights, or clarify any ambiguities from the initial output.
:white_check_mark: Potential impact: The key to iterative prompting is being attentive to the AI's initial answer. You can frame your follow-up prompt to elaborate on a specific part of that answer, dig into a subtopic, or seek clarification. It is especially useful when you need to gather detailed information.
:white_check_mark: Useful resource: AI Prompt Engineering: What It Is and 15 Techniques for 2024 (hostinger.com)
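
A minimal sketch of the iterative pattern (my own invented example; the chat helper is a placeholder that keeps the running conversation so each follow-up builds on the previous answer):

```python
history = []

def chat(user_message: str) -> str:
    # Placeholder: append to the history and call your LLM with the full conversation.
    history.append({"role": "user", "content": user_message})
    reply = f"[model reply to: {user_message[:60]}...]"
    history.append({"role": "assistant", "content": reply})
    return reply

chat("Summarize the key risks of migrating our test suite from Selenium to Cypress.")
# Follow-ups deliberately build on the previous answer instead of starting over.
chat("Go deeper on the first risk you listed: what would mitigation look like?")
chat("Does anything in your previous answer change if the team has only two testers?")
```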

Also, a useful article about techniques is here: Prompting Techniques | Prompt Engineering Guide (promptingguide.ai).

1 Like
  1. Prompt Technique: Retrieval Augmented Generation (RAG)
    How it works: It helps augment LLMs by adding an information retrieval system that provides grounding data. This can help fine-tune the LLM's internal knowledge efficiently without having to retrain the entire model (a toy sketch follows after this list).
    Resource: Microsoft, Prompting Guide
  2. Prompt Technique: Directional Stimulus Prompt
    How it Works: A framework for guiding black-box LLMs, Directional Stimulus Prompting uses a stimulus or hint that can better guide the LLM to generate the desired output. This is achieved by optimizing a small tunable policy model instead of directly adjusting the LLM.
    Resource: Prompting Guide
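
A toy Python sketch of the retrieve-then-prompt flow behind RAG (the documents are invented, the keyword-overlap retriever stands in for a real vector store, and ask_llm is a placeholder):

```python
def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real call to whichever LLM you use.
    return f"[model reply to: {prompt[:60]}...]"

# Invented knowledge snippets - in a real system these would be your docs, indexed in a vector store.
documents = [
    "Release checklist: run the smoke suite, then the regression pack, then exploratory sessions.",
    "Accessibility policy: every new page must pass keyboard-only navigation checks.",
    "Incident 42: checkout failures were traced to a stale currency-conversion cache.",
]

def retrieve(query: str, k: int = 2) -> list:
    """Naive retrieval: rank documents by how many words they share with the query."""
    words = set(query.lower().split())
    ranked = sorted(documents, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return ranked[:k]

question = "What should I check before releasing the new checkout page?"
grounding = "\n".join(retrieve(question))
print(ask_llm(
    "Answer using ONLY the context below. If the context is not enough, say you don't know.\n"
    f"Context:\n{grounding}\n\nQuestion: {question}"
))
```
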
1 Like

I tried to look for some discussions on how to make AI NOT generate what you don't want, as we all know they always generate things out of context, especially ChatGPT.

First I found a tip suggested in an IBM doc, by giving an instruction to the tool like: If the answer is not in the article, say "I don't know".

However, after trying it out myself with ChatGPT, it simply doesn't work: no matter what, it never says "I don't know".

Then I found some discussions on OpenAI forum, where people had similar problems and tried more ways to solve it:

The user caos30 tried asking the bot to respond with a UTF-8 smiley, like :thinking: or :sweat_smile:, alongside its answer, which is an interesting way to ask the bot to indicate its confidence level.

But one point to note from the IBM tip: we should always first choose a model suitable for the context, and ChatGPT may just not be the better choice for "less is more" generation scenarios.

2 Likes

I followed this course on Pluralsight to get a good basis in Prompt Engineering.
Getting Started on Prompt Engineering with Generative AI | Pluralsight

The author covered the following Prompts:

Zero-Shot: no examples provided; the model relies only on its own knowledge.
As the name suggests, very general. You may not get the answer you require.

Few-shot: much more like an interactive conversation, as we include a few examples of our own input and the desired output for a more detailed and/or characterised response.
Much more in line with what we would want from the tools, as we control the responses more.
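
As a small invented illustration of the zero-shot vs. few-shot difference (the bug reports and categories are made up):

```python
# Zero-shot: the task alone, no examples.
zero_shot = (
    "Classify this bug report as UI, performance, or security: "
    "'Login page takes 40 seconds to load.'"
)

# Few-shot: the same task, with a couple of worked examples so the model can mirror the format.
few_shot = """Classify each bug report as UI, performance, or security.

Report: "The submit button overlaps the footer on mobile."
Category: UI

Report: "Session tokens are visible in the URL after login."
Category: Security

Report: "Login page takes 40 seconds to load."
Category:"""

print(zero_shot)
print(few_shot)
```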

Chain of Thought: This prompts the model for the desired response by breaking the prompt into smaller steps, training the model to think step-by-step.
The author advised using a Least-To-Most approach, i.e., start general and then remodel the prompts to be more concise.

I think all 3 approaches have merit, depending on what we expect the response to be.
Certainly when breaking down testing architecture, Chain of Thought would be the most used as it could mirror a Functional way of thinking

1 Like

A prompt technique is an approach used in language models based on artificial intelligence, such as GPT (Generative Pre-trained Transformer), to guide text generation based on a specific context. It works by providing the model with a prompt, which is an initial text input that sets the context or direction for the desired text generation.

For example, if someone is using a language model to write a summary of an article, the prompt could be a question like "Please summarize the article on artificial intelligence published in the New York Times today." The model then uses this initial information to generate text that aligns with what was requested.

The potential impact of prompt techniques is significant as they allow users to direct the model to produce specific outputs, which can be useful in a variety of scenarios. For instance, it can be used to create automatic summaries of long texts, generate creative content based on certain themes, or even answer specific questions.

This technique is particularly useful in situations where a specific response is desired or when one wants to shape the generated content according to certain criteria or requirements. Additionally, prompt techniques can be employed in text generation tasks across various domains, ranging from virtual assistants to content recommendation systems.