The Art of Prompt Engineering: Unlocking AI Potential

Yash Raj Mani
8 min read · Aug 30, 2024


In the evolving field of prompt engineering, incorporating real-world case studies can significantly enhance the effectiveness and relevance of AI-generated outputs. By providing models with detailed, context-rich scenarios, you guide them to produce more accurate and targeted responses. This approach not only clarifies the task at hand but also demonstrates the practical application of theoretical principles. For instance, using case studies from organizations tackling plastic pollution can illustrate how specific data and context lead to actionable insights. This article explores the core principles of prompt engineering, including the use of case studies in prompts, and offers practical examples and actionable strategies for leveraging real-world data to improve AI performance. Whether for generating content, automating responses, or solving complex problems, case studies serve as a powerful tool in refining prompt engineering practices.

Let’s walk through a few principles and tactics that will help you ace this:

Principle 1: Write Clear and Specific Instructions

In prompt engineering, clarity is paramount. The way you frame your instructions can significantly impact the model’s output. By being clear and specific, you reduce ambiguity and enhance the model’s ability to generate accurate and relevant responses.

📌 Tactic 1: Use Delimiters
Use delimiters to separate different parts of the input to avoid confusion and ensure the model interprets your instructions correctly. This can be particularly useful when dealing with complex prompts or isolating a particular section for emphasis.

Examples of delimiters include:
- Triple quotes: `"""`
- Triple backticks: `` ``` ``
- Triple dashes: `---`
- Angle brackets: `< >`
- XML tags: `<tag> </tag>`

✨ Example:
“Using the name ‘Raj’, write a sample introduction for his tech interview. The context is delimited by triple quotes: """Raj works in a software company."""”
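To make this concrete, here is a minimal Python sketch of how a delimited prompt can be assembled in code. It is framework-agnostic: the commented-out get_completion() is a hypothetical helper standing in for whichever LLM client you use.

context = "Raj works in a software company."

prompt = (
    "Write a short introduction for Raj's tech interview.\n"
    "Use only the context delimited by triple quotes.\n\n"
    f'"""{context}"""'
)

print(prompt)
# response = get_completion(prompt)  # hypothetical helper wrapping your LLM client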

📌 Tactic 2: Ask for Structured Output
When you need a specific format for the output, ask the model to present its response in a structured way. This could be in the form of HTML, JSON, or any other format that suits your needs. Structured outputs make it easier to parse, interpret, and utilize the generated content.

✨ Example:

“Generate a JSON object that represents the details of a user profile. The JSON should include the following fields: user_id, username, email, and date_of_birth. Use the following example data to create the JSON object:

- User ID: 12345
- Username: johndoe
- Email: johndoe@example.com
- Date of Birth: 1990-01-15

Your response should be in the following JSON format:

{
"user_id": 12345,
"username": "johndoe",
"email": "johndoe@example.com",
"date_of_birth": "1990-01-15"
}

Make sure to follow this structure exactly to ensure that the output can be easily parsed and used in our application.”
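One benefit of asking for JSON is that you can verify the reply programmatically before using it. A minimal sketch, assuming the model's reply has already been captured as a string:

import json

# Example reply, as if returned by your LLM client.
response_text = """
{
  "user_id": 12345,
  "username": "johndoe",
  "email": "johndoe@example.com",
  "date_of_birth": "1990-01-15"
}
"""

profile = json.loads(response_text)  # raises ValueError if the JSON is malformed

required = {"user_id", "username", "email", "date_of_birth"}
missing = required - profile.keys()
if missing:
    raise ValueError(f"Model output is missing fields: {missing}")

print(profile["username"])  # johndoe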

📌 Tactic 3: Check Whether Conditions Are Satisfied
Before performing a task, it’s crucial to ensure that all necessary conditions and assumptions are met. Prompt the model to check for these prerequisites to avoid errors and ensure the output is contextually appropriate.

✨ Example:

“You are tasked with generating a SQL query to retrieve user information from a database. Before generating the query, check whether the following conditions are satisfied:

- The table users exists in the database.

- The table users contains columns named user_id, username, and email.

If these conditions are met, generate a SQL query to retrieve the user_id, username, and email of all active users. If any of these conditions are not met, provide an error message indicating the issue.”
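The same pre-checks can also be run in code before the model is ever asked to write SQL. A minimal sketch using Python's built-in sqlite3 module; the app.db file and the active flag column are assumptions for illustration:

import sqlite3

conn = sqlite3.connect("app.db")  # "app.db" is a hypothetical database file

# Condition 1: the users table exists.
table_exists = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' AND name='users'"
).fetchone() is not None

# Condition 2: the users table has the required columns.
columns = {row[1] for row in conn.execute("PRAGMA table_info(users)")}
has_columns = {"user_id", "username", "email"}.issubset(columns)

if table_exists and has_columns:
    # "active" is an assumed flag column marking active users.
    print("SELECT user_id, username, email FROM users WHERE active = 1;")
else:
    print("Error: the users table or one of its required columns is missing.")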

📌 Tactic 4: Few-Shot Prompting
Few-shot prompting involves providing the model with a few successful examples of how a task should be completed. These examples guide the model and set expectations for the output, increasing the likelihood of success.

✨ Example:

“Generate a Python function that takes a list of integers and returns the list sorted in ascending order. Here are a few examples to illustrate the task:

Input: [4, 1, 3, 2] Output: [1, 2, 3, 4]
Input: [10, -2, 7] Output: [-2, 7, 10]

Using the examples given as a guide, please generate a Python function that sorts a list of integers in ascending order.”
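For reference, this is roughly the function such a prompt should converge on (the name sort_ascending is illustrative, not dictated by the prompt), with the few-shot examples reused as quick tests:

def sort_ascending(numbers):
    """Return a new list with the integers sorted in ascending order."""
    return sorted(numbers)

# The few-shot examples double as quick tests.
assert sort_ascending([4, 1, 3, 2]) == [1, 2, 3, 4]
assert sort_ascending([10, -2, 7]) == [-2, 7, 10]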

Principle 2: Give the Model Time to Think

Just like humans, AI models perform better when given time to process information and work through problems systematically. By allowing the model to think through a task step by step, you can enhance the quality and accuracy of its responses.

📌 Tactic 1: Specify the Steps to Complete a Task
Breaking down a task into clear, sequential steps helps the model understand the process and reduces the risk of errors. By providing a step-by-step guide, you ensure that the model approaches the task methodically. A structured approach helps the model follow a logical flow, leading to more reliable outcomes.

- Step 1: Identify the problem or question.
- Step 2: Gather relevant information or data.
- Step 3: Analyze the information.
- Step N: Provide the final solution or conclusion.

✨ Example:
“Perform a data analysis task on a dataset. Follow these steps:

1: Load the dataset: Read the dataset from the provided CSV file.

2: Clean the data: Remove any missing or duplicate entries.

3: Analyze the data: Compute basic statistics (mean, median, standard deviation) for each numerical column.

4: Summarize findings: Provide a brief summary of the analysis results, including key insights and any trends observed.

Ensure that each step is completed sequentially and that the results are well-organized.”
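Here is a minimal pandas sketch of those four steps, assuming the data lives at a hypothetical path data.csv:

import pandas as pd

# Step 1: Load the dataset ("data.csv" is a hypothetical path).
df = pd.read_csv("data.csv")

# Step 2: Clean the data: drop missing and duplicate rows.
df = df.dropna().drop_duplicates()

# Step 3: Analyze the data: mean, median (50%), and standard deviation
# for each numerical column.
stats = df.describe().loc[["mean", "50%", "std"]]

# Step 4: Summarize findings.
print(f"Rows after cleaning: {len(df)}")
print(stats)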

📌 Tactic 2: Instruct the Model to Work Out Its Own Solution Before Rushing to a Conclusion
Encourage the model to take its time and think through the problem before jumping to an answer. By prompting the model to work out a solution independently, you can often achieve more thoughtful and accurate results.

✨ Example:
- Ask the model to first outline its reasoning or approach before providing the final answer. “What are the potential benefits and drawbacks of implementing a remote work policy in a tech company?”

- Encourage it to consider different perspectives or potential challenges before reaching a conclusion. “How can urban planning be improved to better accommodate increasing population densities in major cities?”

Principle 3: Iterative Prompt Development

Developing effective prompts is often an iterative process. By analyzing errors and refining prompts based on experimental results, you can gradually improve the quality and accuracy of the model’s output. This approach allows for continuous improvement and optimization.

📌 Steps in Iterative Prompt Development:

1. Idea -> Implementation -> Prompt:
Start with a clear objective. Outline supporting code or data, then create an initial prompt that communicates this objective.

2. Experimental Result -> Error Analysis:
Evaluate the output of your initial prompt. Identify and analyze any issues to understand the model’s limitations.

3. Refine Idea and Prompt:
Adjust your idea and prompt based on the analysis to address any identified issues.

4. Repeat (most important):
Continuously test, analyze, and refine until the model’s output meets your goals.

✨ Example:

Start:
“Generate a Python function that takes a list of integers and returns a list of only the even numbers, sorted in ascending order.”
Next:
“The generated function didn’t handle empty lists correctly.”
Next:
“Now the function should also handle cases where input contains negative numbers and provide additional test cases.”
Next:
“Give me the final code with comments”
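For completeness, this is roughly where that iteration should land (the function name even_sorted is illustrative), with the edge cases from the earlier iterations kept as test cases:

def even_sorted(numbers):
    """Return the even numbers from `numbers`, sorted in ascending order.

    Handles empty input and negative numbers.
    """
    return sorted(n for n in numbers if n % 2 == 0)

# Edge cases surfaced by the earlier iterations.
assert even_sorted([]) == []
assert even_sorted([4, 1, 3, 2]) == [2, 4]
assert even_sorted([-4, 7, -2, 5]) == [-4, -2]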

Principle 4: Include Case Studies

Incorporating case studies or real-world examples into your prompts can provide invaluable practical insights and inspire readers. By showcasing how prompt engineering has been effectively applied in various contexts, you can demonstrate its real-world impact and offer tangible takeaways.

Why Case Studies Matter
Case studies serve as concrete illustrations of how prompt engineering principles are put into practice. They offer readers a clearer understanding of the techniques and strategies that work in real-world scenarios, making the concepts more relatable and actionable.

✨ Example:

“Based on the case study of [Insert Case Study], what recommendations would you make for future projects? How can the organization build on its successes and address any shortcomings?”

“Given the case study of [Insert Case Study], identify the main problem the company faced. What were the proposed solutions, and how effective were they in addressing the problem?”

Keeping Model Limitations in Mind:

One of the challenges in prompt engineering is dealing with a phenomenon known as “hallucination.” This occurs when a model generates statements that sound plausible but are not true. These hallucinations can undermine the reliability of the output, especially in contexts requiring factual accuracy.

Understanding Hallucination
AI models are trained on vast amounts of data, which allows them to generate responses that often seem coherent and credible. However, because they don’t “understand” the content in the way humans do, they may sometimes produce information that is misleading or entirely fabricated.

Reducing Hallucinations
To minimize the risk of hallucination, it’s essential to guide the model carefully: ask it to first find the relevant information, and then to answer based only on that information.

✨ Example:
“Find and summarize the methods for sorting a list of numbers in Python, including sort(), sorted(), and any key algorithms. Provide accurate details and code examples for each method."
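For reference, the two built-in options the prompt is asking about look like this (both rely on Python's Timsort under the hood):

numbers = [3, 1, 2]

# sorted() returns a new sorted list and leaves the original untouched.
print(sorted(numbers))  # [1, 2, 3]
print(numbers)          # [3, 1, 2]

# list.sort() sorts the list in place and returns None.
numbers.sort()
print(numbers)          # [1, 2, 3]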

My all-time favourite prompts:

“Make sure it is: crisp, clear, and easy. / Can you make it more crisp?”
➡️
I use this after receiving a basic response from the model. This helps refine and shape the output to better suit my use case.

“Give in bullets / 1-liners / Make a list of items…”
➡️
I use phrases like these to request responses in list format. This ensures the output is concise and well-organized, rather than presented in paragraphs.

“Hey GPT — provide a justification for: “bla bla bla” or “bla bla bla” … Hey Bard — assist with this.”
➡️
Address the model by name wherever you need it — I use specific phrases to direct queries to different models in my prompts.
This approach allows me to indicate which model should handle specific parts of my request, whether it’s at the beginning, in the middle, or at the end of the prompt.

Summary - Key Strategies for Prompt Engineering:

1. Clear Instructions: Use delimiters and structured formats (HTML, JSON) to guide the model. Provide examples and ensure conditions are met before tasks.

2. Allow Time to Think: Break tasks into steps and let the model reason through its approach before delivering an answer.

3. Minimize Hallucinations: Guide the model to first gather relevant information, then answer based on that data.

4. Iterative Refinement: Continuously test, analyze, and refine prompts. Clarify instructions, provide examples, and repeat until desired results are achieved.

5. Feedback Integration: Use feedback from previous iterations to improve prompts further.

Now, it’s your turn to go and explore!
