
Prompt Engineering: How to Talk to LLMs Like a Pro

APRIL 8th, 2025

What exactly is Prompt Engineering?

 
Prompting is one of the most accessible ways to guide an LLM’s behavior without modifying its underlying architecture. Prompt engineering refers to the process of providing a carefully designed input (or prompt) to an LLM to generate a specific output. Think of it as a way to communicate clearly with the model, ensuring it understands what you need and delivers the right result.

The Issue with Zero-Shot Prompting

 
Zero-shot prompting refers to giving the model a task with no prior examples or context. While it demonstrates the raw capabilities of LLMs, it often leads to inconsistent or off-topic results. The model can misunderstand the intent of the prompt or generate hallucinated information, especially for ambiguous or multi-layered tasks. This happens because the model relies solely on its pretrained knowledge without any guiding context to anchor its response.
 
To mitigate this, researchers have developed more structured prompting techniques, each with its own approach to achieving a desired outcome. Here are a few of them:
 

One-Shot Prompting vs. Few-Shot Prompting

 
Both one-shot and few-shot prompting involve providing the model with examples before asking it to complete a task. In one-shot prompting, the model is given a single example, whereas few-shot prompting includes multiple examples, helping the model grasp the task’s nuances more effectively.
 
For instance, if you want the model to write a formal email:
 
  • One-shot Prompting: “Here’s an example of a formal email. Now, write one about scheduling a meeting.”
  • Few-shot Prompting: “Here are three formal emails. Use these as a guide to write one about scheduling a meeting.”
 
Few-shot prompting has proven particularly effective for tasks requiring a specific tone or format, as it helps the model generalize better, reducing errors and producing more coherent responses.
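To make this concrete, a few-shot prompt can be assembled programmatically before being sent to a model. The sketch below is illustrative (the `build_few_shot_prompt` helper and the example emails are made up for this post, not part of any library): it simply concatenates worked examples ahead of the new task.

```python
def build_few_shot_prompt(examples, task):
    """Assemble a few-shot prompt: worked examples first, then the new task."""
    parts = []
    for i, (request, email) in enumerate(examples, start=1):
        parts.append(f"Example {i}:\nRequest: {request}\nEmail: {email}")
    parts.append(f"Now complete this task:\n{task}")
    return "\n\n".join(parts)

# Two hypothetical formal-email examples to guide tone and format.
examples = [
    ("Request a meeting with a client",
     "Dear Ms. Lee, I hope this message finds you well. I would like to..."),
    ("Decline an invitation politely",
     "Dear Mr. Park, thank you for the kind invitation. Unfortunately..."),
]
prompt = build_few_shot_prompt(
    examples, "Write a formal email about scheduling a meeting."
)
```

The resulting string would be passed as-is to whichever LLM you are using; only the examples and task text change between runs.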
 

Instruction-Based Prompting

 
Instruction-based prompting involves giving the model clear, explicit instructions on how to complete a task. This method is all about setting up boundaries and providing context in the prompt to guide the model toward the desired outcome.
 
  • Prompt: “Write a 4-line poem in the style of Shakespeare about a summer day.”
  • Model Response: “Shall I compare thee to a summer’s day? Thou art more lovely in every way…”
 
By providing detailed instructions in the prompt, you can ensure the model sticks to the required format or style.
 
Chain-of-Thought Prompting

 

Sometimes, tasks require more than just an example; they need reasoning. Chain-of-Thought (CoT) prompting guides the model to break down complex problems step by step. Instead of letting it jump straight to an answer, we ask the model to “think out loud.”
 
  • Prompt: “If a farmer has 12 apples and gives 3 apples to each of his 4 friends, how many apples does he have left? Please explain your reasoning step by step.”
  • Model Response (with Chain-of-Thought reasoning): “First, I need to find out how many apples the farmer gives to his friends. If he gives 3 apples to each of his 4 friends, I multiply 3 by 4, which gives 12 apples in total. Now, I subtract the 12 apples he gave away from the 12 apples he originally had. So, 12 – 12 = 0. Therefore, the farmer has 0 apples left.”
 
This approach breaks down the problem into manageable steps, ensuring the answer has reasoning behind it.
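In practice, CoT often amounts to appending a reasoning instruction to the question. The sketch below shows that, plus the arithmetic the model is expected to reproduce in its reasoning; the `chain_of_thought_prompt` helper is a hypothetical name, not a library function.

```python
def chain_of_thought_prompt(question):
    """Append an explicit instruction to elicit step-by-step reasoning."""
    return (f"{question}\nPlease explain your reasoning step by step "
            "before giving the final answer.")

prompt = chain_of_thought_prompt(
    "If a farmer has 12 apples and gives 3 apples to each of his 4 friends, "
    "how many apples does he have left?"
)

# The calculation the model's reasoning should walk through:
given_away = 3 * 4          # 12 apples handed out
apples_left = 12 - given_away  # 0 apples remaining
```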
 

Self-Consistency Prompting

 
Self-consistency prompting helps improve model reliability by asking the model to generate multiple responses to the same question, then selecting the most frequent or most consistent one. This technique is especially useful for tasks where there might be ambiguity in the model’s answers or when accuracy is crucial.
 
  • Prompt: “Please answer the following question multiple times: What’s the capital of France?”
  • Model Responses:
      • “The capital of France is Paris.”
      • “Paris is the capital of France.”
      • “The capital of France is Paris.”
By generating several different responses and picking the one that appears most often, self-consistency can filter out any anomalies, making the result more dependable.
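The selection step can be sketched as a simple majority vote over sampled responses. This toy version normalizes each string before counting; real self-consistency pipelines usually extract the final answer (a number, a name) from each response first, which this sketch glosses over.

```python
from collections import Counter

def self_consistent_answer(responses):
    """Return the most frequent response and its vote count."""
    # Normalize lightly so trivially different formatting still matches.
    normalized = [r.strip().lower().rstrip(".") for r in responses]
    answer, count = Counter(normalized).most_common(1)[0]
    return answer, count

responses = [
    "The capital of France is Paris.",
    "Paris is the capital of France.",
    "The capital of France is Paris.",
]
best, votes = self_consistent_answer(responses)  # majority: 2 of 3 samples
```

Note that the second response is a correct answer too, but phrased differently; answer extraction (rather than whole-string voting) is what makes this robust in practice.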
 

Tree-of-Thought Prompting

 
Tree of Thought is a more structured approach to reasoning, where the model generates multiple possible steps before narrowing down to the final answer. This method mirrors how humans often explore different pathways, weigh options, and refine their thinking before reaching a conclusion, especially when faced with problems that can be tackled in several ways.
 
  • Prompt: “Can you solve this riddle by pretending to be a detective, a mathematician, and a philosopher? After they all give their input, please conclude with the final answer. The riddle is: ‘The more you take, the more you leave behind. What am I?’”
  • Model Response:
     
 
  • Detective: “From a detective’s perspective, this riddle sounds like it’s referring to something that is gradually removed or taken, leaving traces behind. It could be something like footsteps, where each step leaves a mark as you walk.”
  • Mathematician: “As a mathematician, I might interpret this riddle in terms of numbers or quantities. If we think of ‘taking’ as reducing something, and ‘leaving behind’ as an irreversible trace, the answer could also point to something like a sequence of steps—each step taken increases the total distance, leaving the previous position behind.”
  • Philosopher: “From a philosophical standpoint, this riddle could symbolize the passage of time. As time progresses, you can’t go back, and each moment you experience becomes part of your past, so the more time you ‘take’ or live, the more of your past you leave behind.”
  • Conclusion: “After considering the perspectives of the detective, mathematician, and philosopher, it’s clear that the answer to the riddle is ‘footsteps.’ The more steps you take, the more you leave behind.”
 

Each pathway brings a unique angle to the problem, and by considering all their inputs, the model is able to conclude with the correct answer.

 

Logic-of-Thought

 
Logic of Thought relies on propositional logic, a foundational approach to reasoning that uses statements or propositions connected by logical operators like “and,” “or,” and “not.” This technique helps the model break down complex problems into logical steps, allowing it to make inferences and draw conclusions systematically.
 
 
Prompt: “If it rains, the ground will be wet. It is raining.”
 
 
Logic-of-Thought (LoT) Process:
 
 
1 – Logic Extraction:
 
The model identifies logical propositions and relationships in the input text and converts them into logical expressions.
 
A → B (If it rains, the ground will be wet)
 
A (It is raining)
 
 
2 – Logic Extension:
 
Logical inference rules are applied to the extracted expressions to draw new relationships.
 
Using Modus Ponens (If A → B and A, then B), the model concludes:
 
B (The ground is wet)
 
 
3 – Logic Translation:
 
The expanded logic is then translated back into natural language, making the reasoning clear and understandable.
 
Model Response: “Since it is raining, we can conclude that the ground is wet.”
 
By using propositional logic, the model can analyze relationships between statements and apply rules of logic to arrive at a well-reasoned solution.
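The Modus Ponens step at the heart of the example can be written as a tiny fixpoint loop over propositional implications. This is a toy sketch of the inference rule only, not an implementation of the full LoT pipeline (which uses an LLM for the extraction and translation stages).

```python
def modus_ponens(implications, facts):
    """Repeatedly apply Modus Ponens: if A -> B and A holds, derive B."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for a, b in implications:
            if a in derived and b not in derived:
                derived.add(b)
                changed = True
    return derived

# A -> B: "if it rains, the ground will be wet"; fact A: "it is raining"
facts = modus_ponens([("rains", "ground_wet")], {"rains"})
```

The loop keeps applying the rule until no new propositions appear, so chained implications (A → B, B → C) are handled as well.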
 

Neuro-Symbolic Methods

What are Neurosymbolic Methods?

 
Neurosymbolic methods emerged as a solution to the limitations of prompting approaches in handling tasks that require strict logical reasoning and structured inference. They combine the strengths of neural networks with symbolic reasoning systems, enhancing LLMs’ reasoning capabilities by pairing their language understanding with formal logical inference. To understand how neurosymbolic systems work, we have to understand the key components and processes they rely on.
In neurosymbolic systems, the LLM’s primary role is to act as a semantic parser, converting natural language statements into logical forms such as first-order logic (FOL) expressions. This translation allows logical problems to be represented in a structured form that symbolic solvers can process. It generally involves different sub-processes:
Tokenization and Parsing: Here the input text is broken into tokens, which the model then transforms into logical symbols.
Input: “All humans are mortal.”
FOL Translation: ∀x (human(x) → mortal(x))
Contextual Mapping: The LLM ensures that each term in the input corresponds accurately to logical operators and entities, preserving meaning across translations.
By handling this semantic transformation, LLMs eliminate the need for manual logical formalization, automating the reasoning pipeline. It’s important to note that accuracy at this stage is critical, because errors in translation can derail subsequent logical deductions.
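For intuition, the translation step can be caricatured with a pattern-based parser for one sentence shape. In a real neurosymbolic system the LLM itself performs this mapping over arbitrary language; the regex below only handles “All X are Y.” and is purely illustrative.

```python
import re

def parse_universal(sentence):
    """Map a sentence of the form 'All Xs are Y.' to a FOL string."""
    m = re.match(r"All (\w+?)s? are (\w+)\.?$", sentence.strip())
    if not m:
        raise ValueError("unsupported sentence form")
    subject, predicate = m.group(1).lower(), m.group(2).lower()
    return f"∀x ({subject}(x) → {predicate}(x))"

fol = parse_universal("All humans are mortal.")
```

The point of the sketch is the contract, not the parser: natural language in, a machine-checkable logical form out, with every term mapped to a predicate.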
 

Symbolic Component: Theorem Provers and Solvers

 
The symbolic reasoning component is responsible for performing logical deductions. This typically involves the use of Theorem Provers or Solvers.
 

The Role of Theorem Provers

 
Theorem provers are tools used to validate logical conclusions and perform automated reasoning by checking the consistency and validity of logical formulas. In neurosymbolic systems, theorem provers work by taking the first-order logic (FOL) expressions generated by the LLM and applying formal inference rules to deduce whether the conclusions logically follow from the premises. Theorem provers, such as Prover9 or Z3, are crucial in ensuring that the logic encoded in the input text is valid.
 

How Theorem Provers Work

 
  • Input Validation: The prover checks if the FOL expressions are syntactically correct. Errors such as unbalanced parentheses or inconsistent predicates are flagged.
  • Deductive Reasoning: The prover applies formal logic rules to determine if the conclusion logically follows from the premises.
  • Problem: “Are all rectangles shapes?”
  • Premises: ∀x (rectangle(x) → foursides(x)); ∀x (foursides(x) → isshape(x))
  • Conclusion: ∀x (rectangle(x) → isshape(x))
  • Prover Output: True, False, or Uncertain
  • Error Handling: If any syntax or logical inconsistencies are detected, the prover rejects the FOL statement, prompting corrections. This ensures that only valid deductions proceed.
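The deduction in the rectangle example boils down to chaining implications, which can be sketched as a reachability check. This toy stands in for a real theorem prover such as Prover9 or Z3, which handle full first-order logic with quantifiers; here the universally quantified implications are reduced to edges between predicates.

```python
def proves(premises, conclusion):
    """Check whether an implication (a, b) follows from implication
    premises by chaining them, a tiny stand-in for a theorem prover."""
    a, b = conclusion
    reachable = {a}
    changed = True
    while changed:
        changed = False
        for p, q in premises:
            if p in reachable and q not in reachable:
                reachable.add(q)
                changed = True
    return b in reachable

premises = [("rectangle", "foursides"), ("foursides", "isshape")]
result = proves(premises, ("rectangle", "isshape"))  # hypothetical syllogism
```

A real prover would also report the third outcome, Uncertain, when neither the conclusion nor its negation is derivable; this sketch only distinguishes derivable from not-derivable.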
 
 

The Role of Solvers

 
Like provers, solvers are specialized tools used to manage constraints and solve problems involving more complex logical expressions. These solvers ensure that the logical conclusions derived from the LLM’s output are consistent with the defined rules and constraints. For example:
 
  • SAT Solvers: Determine if a logical formula can be satisfied by assigning true/false values to variables.
  • SMT Solvers: An extension of SAT solvers that incorporates arithmetic, strings, and arrays, useful for tasks that need numerical or textual reasoning.

How Solvers Work

  • Constraint Encoding: Convert logical rules from FOL into solvable constraints.
  • Solver Execution: The solver checks if the constraints are met.
  • Output Refinement: If constraints are violated, the solver refines the output, ensuring logical consistency.
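The SAT check itself can be sketched with brute-force enumeration over truth assignments. Production systems use dedicated solvers (e.g., MiniSat, or Z3 for SMT) with far better algorithms; this toy version only illustrates what “satisfiable” means for a formula in conjunctive normal form.

```python
from itertools import product

def sat(clauses, variables):
    """Brute-force SAT: clauses are lists of (variable, polarity) literals
    in CNF. Return a satisfying assignment, or None if unsatisfiable."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        # Every clause must contain at least one literal that holds.
        if all(any(assignment[v] == pol for v, pol in clause)
               for clause in clauses):
            return assignment
    return None

# Encode (A or not B) and (B or C):
clauses = [[("A", True), ("B", False)], [("B", True), ("C", True)]]
model = sat(clauses, ["A", "B", "C"])
```

If `model` is None, the constraints are contradictory, which in the neurosymbolic loop signals that the LLM’s logical translation needs refinement.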

The Neurosymbolic Reasoning Process

 
Semantic Parsing by LLMs
 
The process begins with the LLM generating representations for a given problem. The system may generate multiple versions of these formal representations to capture different interpretations of the input, addressing potential ambiguities or different possible meanings.
 
 
Symbolic Reasoning
 
The formal representations are then processed by a symbolic reasoning component, such as a theorem prover or a solver. This component evaluates each version, determining whether the logical structure is valid or consistent. It filters out invalid or inconsistent representations and identifies valid conclusions based on formal reasoning or inference rules.
 
 
Output Generation
 
The final step involves generating the output based on the logical deductions made in the previous step. This might include selecting valid conclusions, refining results through error detection, or aggregating multiple interpretations for greater robustness.
