The basics
Prompt completion is a core facet of working with Relevance AI.
We believe that LLM-powered apps will change how software is used and the way we work. With Relevance AI, using prompts is extremely easy.
Each time you use an LLM you will need to set up your prompt and choose a model; together these determine the kind of output you receive. For a full list of supported models, see here.
To use an LLM, add a “Generate text using LLMs” step to your chain. You can then choose the LLM you want to use and set up your prompt.
The prompt input accepts both regular text and variable templating using {{}} syntax. When you start typing a variable, you will see a list of available variables to choose from. Use variables to insert dynamic values into your prompts at run time.
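To illustrate how {{}} templating behaves conceptually, here is a minimal TypeScript sketch that substitutes variables into a prompt string. It is not Relevance AI's implementation, and the prompt text and variable names (`feedback`) are hypothetical examples only.

```typescript
// Illustrative only: a minimal sketch of how {{variable}} templating resolves.
// The prompt wording and the "feedback" variable are hypothetical examples.
const promptTemplate =
  "Summarise the following customer feedback in one sentence:\n{{feedback}}";

// Values supplied at run time (hypothetical).
const variables: Record<string, string> = {
  feedback: "The onboarding flow was smooth, but billing settings were hard to find.",
};

// Replace each {{name}} placeholder with its corresponding variable value,
// leaving unknown placeholders untouched.
function fillTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, name) =>
    name in vars ? vars[name] : match,
  );
}

console.log(fillTemplate(promptTemplate, variables));
```

Running this prints the prompt with `{{feedback}}` replaced by the supplied text, which is the same effect variable templating has when your prompt runs inside a chain.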