AI Basics: Prompt Design

Like everyone else, I have been learning how to use and abuse AI. Also like everyone else, I have found it to be compelling in its strengths and weaknesses. Here are some notes that helped me get my head around what’s possible and useful when working with AI.

These notes are rough, but I find myself referring to them from time to time.

Prompting Principles make inputs and outputs more consistent.

  1. An instructional example: teach ChatGPT to do something. “I want you to perform this task. Performing this task looks like this [example input and output].”
    1. This must include a single task and example, but can be augmented with multiple examples to create a multi-shot prompt.
  2. Once the learned task is established, prompt the GPT to conduct the task on the new information:
    1. For direct replication, repetitive formatting, or structured responses: “Apply the method you just learned to this input: [insert information].”
    2. For generalized replication that flexes to new contexts: “Transfer the approach you used earlier to this new example: [insert information].”
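The two-step pattern above is just string assembly, so it can be sketched in a few lines of Python. This is a minimal sketch: the task, example, and the helper name `instructional_prompt` are illustrative assumptions, and the resulting string would be sent to whatever model API you use.

```python
def instructional_prompt(task: str, example_in: str, example_out: str,
                         new_input: str) -> str:
    """Build a one-shot 'teach, then apply' prompt from the pattern above."""
    return (
        f"I want you to perform this task: {task}\n"
        f"Performing this task looks like this:\n"
        f"Input: {example_in}\nOutput: {example_out}\n\n"
        f"Apply the method you just learned to this input: {new_input}"
    )

# Hypothetical task, purely for illustration.
prompt = instructional_prompt(
    task="summarize a sentence in three words",
    example_in="The quick brown fox jumps over the lazy dog.",
    example_out="Fox jumps dog.",
    new_input="A stitch in time saves nine.",
)
```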

See techniques below

Prompt Structure

  1. I want you to perform this task. Performing this task looks like this [example]
  2. Apply the method you just learned to this input: [insert information]

Individual Prompt Development Process

The overall process of developing a prompt will be iterative:

  1. “I want you to perform this task. Performing this task looks like this [example]” = [Instructional example]
  2. Examine the output
  3. Refine the output: provide more context or further describe what you are looking for
  4. Request a prompt: “Write a prompt that will reliably produce the output of your previous reply. The prompt must be specific enough that the output will be reliably formatted”

Abstraction layers of prompting

System Prompt

  • ROLE: this is who you are; CONTEXT: here’s information to keep in mind
  • TASK: here’s how to perform this action; EXAMPLE input, EXAMPLE process, EXAMPLE output; now create the OUTPUT with this CONTEXT / INFORMATION
  • Extracting skills

User prompt
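The two layers above map onto the chat-message shape most LLM APIs use: the system prompt carries ROLE / CONTEXT / TASK / EXAMPLE, and the user prompt carries the input to work on. A minimal sketch; the triage scenario and wording are illustrative assumptions, not a fixed schema.

```python
# Layer 1: the system prompt sets role, context, task, and an example.
system_prompt = (
    "ROLE: You are a support-ticket triager. "
    "CONTEXT: Tickets come from small-business SaaS customers. "
    "TASK: Classify each ticket as 'bug', 'billing', or 'feature request'. "
    "EXAMPLE input: 'The export button crashes.' EXAMPLE output: bug."
)

# Layer 2: the user prompt supplies only the new input.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Ticket: 'I was charged twice this month.'"},
]
```

Keeping the stable instructions in the system layer means each user turn stays short and only carries new information.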


🎯 Prompting Paradigms

| Technique | What It Does | Structure | Benefits |
| --- | --- | --- | --- |
| Zero-shot prompting | Asks for task execution with no prior examples | One simple instruction-only prompt | Fast, efficient, and useful when examples aren’t available |
| Prompt chaining | Builds or refines output step-by-step across multiple interactions | Sequential turns between user and model | Enables complex task execution through manageable steps, improves reasoning |
| Multi-shot prompting | Provides multiple examples in one prompt to guide behavior | One large input block with several input-output pairs | Improves output consistency and helps the model learn the pattern through repetition |
| Decomposition (Prompt Chaining for Analysis) | Breaks down complex problems into smaller subproblems before solving | Starts with identifying subproblems, then tackles each via prompt chaining | Encourages structured thinking, increases accuracy on complex tasks |
| Self-Criticism (Iterative Prompting) | Uses the model to review and improve its own output | Prompt > Draft > Critique > Revision loop | Enhances quality through reflection and iteration, simulates peer review |
| Additional Information (Context) | Provides background or external context to improve output relevance | Context block before task instruction | Increases task alignment, helps with domain-specific or nuanced evaluations |
| Chain of Thought / Thought Generation | Forces the model to “show its work” step-by-step | Prompt includes instruction to output reasoning before final answer | Improves accuracy and transparency, exposes and corrects reasoning errors |

Zero-shot prompting

Example (asking for results with no examples)

Best used

  1. Simple exploration
  2. When neither the output nor the logical process is known
User: Think of A. Now take A and come up with three ways it is like B, and
explain why the first reason is the most important

LLM: [Explanation]

Prompt Chaining

Example (multiple prompts)

Best used:

  1. Exploratory prompting when a logical process is known but the output is not
User: Create A
LLM: A

#

User: Now take A, and come up with three ways it is like B, in priority order
LLM: [Reason 1], [Reason 2], [Reason 3]

#

User: Explain why the first reason is the most important
LLM: [Explanation]
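The chain above can be sketched as a loop that feeds each reply into the next prompt. This is a minimal sketch: `llm` stands in for any prompt-to-reply model call (the real call depends on your API), and `fake_llm` is a toy that just echoes its prompt so the chaining is visible.

```python
def run_chain(llm, templates):
    """Run prompts in sequence, substituting each reply into the next template.

    `llm` is any callable mapping a prompt string to a reply string.
    """
    reply = ""
    for template in templates:
        prompt = template.format(previous=reply)  # inject the prior reply
        reply = llm(prompt)
    return reply

# Toy stand-in model: echoes the prompt so each link in the chain is visible.
fake_llm = lambda prompt: f"[reply to: {prompt}]"

final = run_chain(fake_llm, [
    "Create A",
    "Now take this: {previous} and come up with three ways it is like B, in priority order",
    "Given {previous}, explain why the first reason is the most important",
])
```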

Multi-Shot Prompting

Example (multiple examples in a single submission):

A single prompt provides self-contained, repeated examples that shape the LLM’s logic.

Best used

  1. When the context and output are known, but replication is needed
  2. Starting a task that will need specific outputs
Input: “Translate ‘Good morning’ to French.”
Output: “Bonjour”

Input: “Translate ‘Thank you’ to French.”
Output: “Merci”

Input: “Translate ‘Good night’ to French.”
Output:
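The block above is mechanical enough to generate. A minimal sketch, assuming the Input/Output formatting shown; the helper name `multi_shot_prompt` is an illustration, and the resulting string would be submitted as one prompt.

```python
def multi_shot_prompt(examples, new_input):
    """Pack several input/output pairs plus one open input into a single prompt."""
    lines = []
    for source, target in examples:
        lines.append(f'Input: "{source}"')
        lines.append(f'Output: "{target}"')
        lines.append("")  # blank line between pairs
    lines.append(f'Input: "{new_input}"')
    lines.append("Output:")  # left open for the model to complete
    return "\n".join(lines)

prompt = multi_shot_prompt(
    [("Translate 'Good morning' to French.", "Bonjour"),
     ("Translate 'Thank you' to French.", "Merci")],
    "Translate 'Good night' to French.",
)
```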

Decomposition (Prompt Chaining for Analysis)

Leveraging the GPT to break down problems into smaller problems before prompting

Best used

  1. When the output is known but the process is unknown and assumed to be complex
User: "I need to determine the return policy for a customer who bought a car with specific issues. First, what are the subproblems that would need to be solved to figure this out?"

LLM: "To determine the exact return policy for this scenario, I would need to:
    1. Verify the customer's identity and purchase records.
    2. Confirm the specific car model, year, and its condition (e.g., the 'small ding').
    3. Ascertain the exact purchase date to check against return windows.
    4. Review the details of any applicable insurance or warranty.
    5. Consult the dealership's official return policy for that specific vehicle type and condition."

User: "Great. Now, for subproblem 1, how would I verify the customer's identity and purchase records?"

LLM: "To verify the customer's identity and purchase records, you would typically need to perform a database lookup using their name, customer ID, or the VIN of the vehicle. This would confirm if they are a registered customer and provide their transaction history."
    
(The conversation would continue, solving each subproblem until all necessary information is gathered for the final return policy determination.)
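The conversation above has a mechanical skeleton: one prompt elicits the subproblem list, then a chained prompt tackles each item. A minimal sketch under stated assumptions; `llm` stands in for any prompt-to-reply model call, and `fake_llm` returns a canned decomposition so the flow can be followed.

```python
def decompose_and_solve(llm, problem):
    """Ask for subproblems first, then chain one prompt per subproblem.

    `llm` is any callable mapping a prompt string to a reply string.
    """
    listing = llm(f"What are the subproblems that would need to be solved "
                  f"to figure this out? {problem}")
    # Assume the model lists one subproblem per line.
    subproblems = [line.strip() for line in listing.splitlines() if line.strip()]
    return [llm(f"How would I address this subproblem? {s}") for s in subproblems]

# Toy stand-in: canned decomposition, then echoes each follow-up prompt.
def fake_llm(prompt):
    if prompt.startswith("What are the subproblems"):
        return "Verify identity\nConfirm car condition\nCheck return window"
    return f"[answer to: {prompt}]"

answers = decompose_and_solve(fake_llm, "Determine the return policy for this customer.")
```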

Self-Criticism (Iterative Prompting)

Leveraging the GPT to iterate on a prompt output

Best used

  1. Output refinement
  2. Identifying ways to refine processes

User: "Write a short blog post about the benefits of remote work for small businesses. Keep it concise and engaging."

LLM: "Remote work offers small businesses increased flexibility and reduced overhead costs, leading to higher employee satisfaction. It can also expand the talent pool beyond geographical limits. [Initial Draft]"

User: "Can you go and check your response, confirm that it's correct, or offer yourself some criticism?"

LLM: "Upon review, my initial draft is concise but could be more engaging. I could add a call to action or a more compelling opening. I also didn't specifically mention how it boosts productivity, which is a key benefit."

User: "That's great feedback. Why don't you go ahead and implement that to improve the blog post?"

LLM: "Absolutely. Here's a revised version:
'Unlock Growth: The Power of Remote Work for Your Small Business
Tired of limited talent and high office rents? Remote work isn't just a trend; it's a game-changer for small businesses. By embracing remote teams, you gain incredible flexibility, slash overhead costs, and access a global talent pool. Plus, studies show it often boosts productivity! Ready to transform your business? Consider the remote advantage. [Revised Draft]'"

Additional Information (Context)

Providing overall background that is not directly pertinent to the task but supplies supporting context.

Best used

  1. When initiating a conversation and establishing the context of the initial tasks
  2. When refining outputs from previous tasks with additional context
User:

COMPANY PROFILE:
Company Name: "InnovateTech Solutions"
Industry: Enterprise Software (SaaS)
Core Products: Cloud-based CRM and Project Management tools.
Key Values: Customer satisfaction, robust feature sets, intuitive UX.
Customer Feedback Context: We differentiate by our support, even if our initial product releases might have some early-stage bugs. Negative feedback on 'bugs' is expected but must be weighed against positive 'support' comments.

---
Please analyze the sentiment of the following customer review for InnovateTech Solutions, considering our company profile and customer feedback context. Classify it as "Positive," "Negative," or "Mixed" and provide a brief rationale.

Customer Review: "The new feature is buggy and frustrating, but the support team was fantastic and resolved my issue quickly."

LLM: Sentiment: Mixed

Rationale: The review expresses **negative sentiment** regarding the "new feature" being "buggy and frustrating," which aligns with expected early-stage bug feedback mentioned in the company profile. However, it strongly balances this with **positive sentiment** for the "support team" being "fantastic" and resolving the issue quickly. Given InnovateTech Solutions' value on customer satisfaction and differentiation through support, the positive support experience significantly mitigates the negative feature feedback, resulting in a mixed overall sentiment.
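Structurally, the prompt above is a context block, a divider, and then the task. A minimal sketch of that assembly; the helper name `prompt_with_context` and the trimmed profile are illustrative.

```python
def prompt_with_context(context_block, task):
    """Place the background block before the task, separated by a divider."""
    return f"{context_block}\n\n---\n{task}"

# Abbreviated version of the profile above, for illustration.
profile = (
    "COMPANY PROFILE:\n"
    'Company Name: "InnovateTech Solutions"\n'
    "Key Values: Customer satisfaction, robust feature sets, intuitive UX."
)
task = ("Please analyze the sentiment of the following customer review, "
        "considering our company profile.")

prompt = prompt_with_context(profile, task)
```

Keeping context and task in separate, clearly delimited blocks makes it easy to reuse the same context across many task prompts.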

Chain of Thought / Thought Generation

Forces the LLM to show its ‘thinking’ throughout the process, exposing its reasoning (and any reasoning flaws)

Best used

  1. When the process is just as important as the output
  2. When the process will need to be explained to someone else
User: Calculate the total cost, including 8% sales tax, for the following items. Make sure to write out all your reasoning and intermediate steps before providing the final answer.

Items:
- Item A: $25
- Item B: $40

LLM: **Reasoning Steps**:
1.  **Calculate the subtotal of items**:
    Item A price: $25
    Item B price: $40
    Subtotal = $25 + $40 = $65

2.  **Calculate the sales tax amount**:
    Sales tax rate: 8%
    Tax amount = 8% of $65 = 0.08 * 65 = $5.20

3.  **Calculate the total cost**:
    Total cost = Subtotal + Tax amount
    Total cost = $65 + $5.20 = $70.20

**Final Answer**: The total cost, including 8% sales tax, for Item A and Item B is **$70.20**.
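The technique boils down to appending a "show your work" instruction to any task; the sketch below does that, and also re-checks the worked arithmetic above in exact cents. The helper name `chain_of_thought` is an illustrative assumption.

```python
def chain_of_thought(task):
    """Append an instruction that forces intermediate reasoning before the answer."""
    return (task + "\nMake sure to write out all your reasoning and "
            "intermediate steps before providing the final answer.")

prompt = chain_of_thought("Calculate the total cost, including 8% sales tax, "
                          "for Item A ($25) and Item B ($40).")

# Sanity-check the worked example, using integer cents to avoid float error:
subtotal_cents = 2500 + 4000              # $25.00 + $40.00 = $65.00
tax_cents = subtotal_cents * 8 // 100     # 8% of $65.00 = $5.20
total_cents = subtotal_cents + tax_cents  # $70.20, matching the model's answer
```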


Prompt Engineering: Lenny’s Podcast
