
Prompt Optimization

Prompt optimization is the process of refining a model's prompt to improve its performance on a specific task. It involves adjusting the prompt's wording, structure, and format to better guide the model toward the desired output.

Why Optimize Prompts?

Optimizing prompts can significantly improve a model's performance on a specific task. A well-designed prompt gives the model more context, guidance, and constraints, leading to better results.

We use a dataset of inputs and expected outputs to optimize the prompt; the goal is to find the prompt that maximizes the model's performance on the given task.
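For illustration, a dataset entry for this task might look like the following sketch (the field names are assumptions that mirror the ones used later in this guide; your dataset's actual schema is defined when you create it in Relari):

# A hypothetical dataset entry; the field names are illustrative and
# must match the columns of your actual dataset.
datum = {
    "question": "What did the author work on before college?",
    "ground_truth_context": ["Before college, the author worked on writing and programming."],
    "ground_truth_answers": ["Writing and programming"],
}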

How to Optimize Prompts?

To optimize a prompt, you can follow these steps:

from relari import RelariClient
from relari.core.types import Prompt, VariablePrompt

client = RelariClient()

# Retrieve the dataset and project
proj = client.projects.find(name="Prompt Optimization")
dataset = client.datasets.find(proj["id"], name="Paul Graham")

# Let's define the base prompt (we will optimize this prompt)
base_prompt = Prompt(
    fixed="Answer the following question using the provided context.",
    variable=VariablePrompt(
        prompt="Question: $question\n\nContext:\n$ground_truth_context",
        description="Question and context to answer the question.",
    ),
)

# Submit the optimization task
task_id = client.prompts.optimize(
    project_id=proj["id"],
    dataset_id=dataset["id"],
    prompt=base_prompt,
    llm="gpt-3.5-turbo-0125",
    task_description="Answer the question using the provided context.",
    metric=client.prompts.Metrics.CORRECTNESS,
)
print(f"Optimization task submitted with ID: {task_id}")

Let's break down the code:

The first step is to retrieve the project and dataset where we want to optimize the prompt.

proj = client.projects.find(name="Prompt Optimization")
dataset = client.datasets.find(proj["id"], name="Paul Graham")

Next, we define the base prompt that we want to optimize.

base_prompt = Prompt(
    fixed="Answer the following question using the provided context.",
    variable=VariablePrompt(
        prompt="Question: $question\n\nContext:\n$ground_truth_context",
        description="Question and context to answer the question.",
    ),
)

As you can see, the prompt has two parts: a fixed part and a variable part.

  • The fixed part provides general instructions to the model. This is the part we will optimize.
  • The variable part includes placeholders for the variables in your prompt. In this case, we have a question and a context.

In the variable part of the prompt, you can reference any field of the dataset as a $-prefixed placeholder, for example $question or $ground_truth_context.
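To see what the substitution produces, here is a minimal sketch assuming the $-style placeholders behave like Python's string.Template (the service performs this substitution server-side; the snippet is purely illustrative):

from string import Template

# Illustrative only: render the variable part for one dataset entry.
# The optimization service performs this substitution itself.
variable_template = Template("Question: $question\n\nContext:\n$ground_truth_context")
rendered = variable_template.substitute(
    question="What did the author work on before college?",
    ground_truth_context="Before college, the author worked on writing and programming.",
)
print(rendered)
# Question: What did the author work on before college?
#
# Context:
# Before college, the author worked on writing and programming.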

Finally, we submit the prompt optimization task.

task_id = client.prompts.optimize(
    project_id=proj["id"],
    dataset_id=dataset["id"],
    prompt=base_prompt,
    llm="gpt-3.5-turbo-0125",
    task_description="Answer the question using the provided context.",
    metric=client.prompts.Metrics.CORRECTNESS,
)

Under the hood, the optimization process runs the model with different candidate prompts and evaluates the results using the specified metric. On each call to the model (in this case GPT-3.5 Turbo, as specified by the llm parameter), the fixed part is combined with the rendered variable part to form the complete prompt.
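As a rough sketch, assuming the fixed part is simply prepended to the rendered variable part (the service's exact message layout may differ):

# Sketch only: one plausible way the two parts form the final prompt.
complete_prompt = base_prompt.fixed + "\n\n" + rendered  # assumes a .fixed attribute
# which could then be sent to the LLM, e.g. as a single user message:
# [{"role": "user", "content": complete_prompt}]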

There are a few parameters that you should adjust to customize the optimization process:

  • task_description: this is very important for the optimization process, as it tells the optimizer what the final objective of the optimization is.
  • metric: the metric used to evaluate the model's performance. In this case, we use the CORRECTNESS metric.

Each metric requires different fields in the dataset; CORRECTNESS, for example, requires:

  • question: the question to be answered
  • answer: the model's answer (filled in automatically by the optimization process)
  • ground_truth_answers: the correct answer(s) to the question

Each optimization task runs for some time, depending on the complexity of the prompt and the size of the dataset.

Once you submit the task, you can monitor its progress using the CLI or the SDK.

relari-cli prompts status TASK_ID

When the task is completed, you can retrieve the results using the CLI or the SDK.

relari-cli prompts get TASK_ID
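The SDK counterparts of these CLI commands are not shown here; the snippet below is a hypothetical sketch, and the actual method names and return values may differ in your SDK version:

import time

# Hypothetical SDK calls mirroring the CLI commands above; the real
# method names and return shapes may differ.
while client.prompts.status(task_id) != "completed":  # hypothetical
    time.sleep(30)
result = client.prompts.get(task_id)  # hypothetical
print(result)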

Prompt Optimization Metrics

Prompt Optimization metrics are specialized versions of the evaluation metrics, designed to score the quality of the prompts generated by the Prompt Optimization API.

You can find more information about the available metrics in the Metrics section.

Supported LLMs

Prompt Optimization is supported for the following LLMs:

  • gpt-3.5-turbo
  • gpt-4
  • gpt-4-turbo
  • gpt-4o
  • gpt-4o-mini
  • llama-3-8b
  • llama-3-70b
  • claude-3-5-sonnet-20240620
  • claude-3-opus-20240229
  • claude-3-sonnet-20240229
  • claude-3-haiku-20240307