16 docs tagged with "getting-started"

Auto prompt optimization

If you have a good golden dataset of inputs and their expected outputs, you can use it to automatically optimize your prompts to capture nuanced requirements. This process can save you many hours of manual trial and error.
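The idea above can be sketched in plain Python (this is a conceptual illustration, not the Relari SDK): score each candidate prompt against the golden dataset and keep the best one. `run_model` is a hypothetical stand-in for a real LLM call.

```python
# Golden dataset: (input, expected output) pairs.
golden = [("2+2", "4"), ("3*3", "9")]

def run_model(prompt: str, x: str) -> str:
    # Stand-in for an LLM call; here a "brief" prompt yields the bare
    # answer while other prompts produce a verbose (non-matching) format.
    answer = str(eval(x))  # safe here: inputs are fixed arithmetic strings
    return answer if "briefly" in prompt else f"The answer is {answer}"

def accuracy(prompt: str) -> float:
    """Exact-match accuracy of a prompt over the golden dataset."""
    return sum(run_model(prompt, x) == y for x, y in golden) / len(golden)

candidates = ["Answer briefly:", "Explain in detail:"]
best = max(candidates, key=accuracy)  # → "Answer briefly:"
```

An automated optimizer runs this select-and-refine loop for you, replacing hours of manual trial and error.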

Define datasets

Within each Project, you can upload your existing datasets or generate synthetic datasets using the Relari SDK or API.

Installation

The Relari cloud API and CLI (command-line interface) are provided as an open-source Python package.

Projects

Projects are a way to organize your datasets, experiments and optimization tasks. You can create a project for each of your use cases or functions.

Run experiments

Running experiments (or evaluations) is a systematic way to measure the performance of an AI system across a set number of samples. By altering prompts, models, or hyperparameters, you can observe how different settings impact performance. Experiments can be run on single data points or entire datasets to quickly understand the effect of changes across various scenarios.
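The comparison described above can be sketched in a few lines of plain Python (a conceptual example, not the Relari SDK): evaluate two "systems" on the same dataset so their scores are directly comparable. Here simple string functions stand in for different prompts, models, or hyperparameter settings.

```python
# Dataset: (input, expected output) pairs.
dataset = [("hello", "HELLO"), ("world", "WORLD")]

# Two hypothetical system variants to compare; in practice each would be
# a different prompt, model, or hyperparameter configuration.
systems = {
    "upper": str.upper,
    "title": str.title,
}

def evaluate(fn) -> float:
    """Exact-match accuracy of one system over the dataset."""
    return sum(fn(x) == y for x, y in dataset) / len(dataset)

results = {name: evaluate(fn) for name, fn in systems.items()}
# results → {"upper": 1.0, "title": 0.0}
```

Running every variant on the same samples is what makes the resulting numbers a fair, apples-to-apples comparison.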

Runtime monitor (online evaluation)

The Runtime Monitor feature lets you evaluate results on production data in real time. Because results are evaluated on the fly, there are no reference outputs to compare against, so runtime monitors should use reference-free metrics.

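A reference-free metric inspects the output alone, with no golden answer needed. A minimal sketch in plain Python (not the Relari SDK; the check names are illustrative):

```python
import json

def _is_json(text: str) -> bool:
    """True if `text` parses as JSON."""
    try:
        json.loads(text)
        return True
    except ValueError:
        return False

def reference_free_checks(output: str) -> dict:
    """Score a production output without any reference answer."""
    return {
        "non_empty": bool(output.strip()),      # model produced something
        "valid_json": _is_json(output),         # output is well-formed JSON
        "under_limit": len(output) <= 500,      # output respects a length cap
    }

scores = reference_free_checks('{"status": "ok"}')
# scores → {"non_empty": True, "valid_json": True, "under_limit": True}
```

Checks like these can run on every live request, since they never need a dataset entry to compare against.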
Teammates

You can invite teammates to your organization to collaborate on projects and experiments.

Usage Credits

Credits are shared across an organization.

View API Key

To get your API key, navigate to the user icon at the bottom left of the page.

Why use datasets?

Data can be your secret weapon to make your AI app stand out. Leveraging use-case-specific data helps you understand your app's performance better, automate parts of the iteration process, and make your application more reliable.