How do I set default values for input parameters?
Click on the settings icon, located at the bottom right of the input component. Set the values, then click “Set current value”.
How do I insert variables into a CODE step?
Variables are accessible via the `params` parameter. For instance, to access a variable called `name`, use `params.name` in JavaScript or `params["name"]` in Python.
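As a minimal sketch in Python (assuming a tool input named `name`; `params` is provided by the platform inside the code step):

```python
# Minimal sketch of a Python code step.
# Assumption: the tool has an input variable called "name";
# the platform injects it via the params mapping.
greeting = "Hello, " + params["name"] + "!"
```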
How do I insert variables into the API step?
On your API component, change the body to “edit as string” and use `{{}}` to access variables.
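For illustration, a hypothetical string body might look like the following, where `{{name}}` is replaced with the value of the `name` variable when the tool runs (the field names here are assumptions, not required by the platform):

```
{
  "user": "{{name}}",
  "message": "Hello {{name}}, your request has been received."
}
```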
How to make running a step conditional?
Click on the three vertical dots on the top right of the step and select “Add conditions”. See the full guide at Adding condition to a step.
Why do I have to use single or double quotation marks around variables in my prompts?
Such marks (e.g. single, double, or triple quote marks) are not required and have no direct functional utility. They are a prompting technique that has been found to work well with LLMs. Keep in mind that an LLM prompt is a long piece of text composed of instructions, examples, etc.; single, double, or triple quote marks around a variable X simply mark its scope (i.e. where the string X begins and ends within the prompt). As a side note, `{{X}}` brings the variable X into the prompt, meaning there won’t be `{{}}` around it when it is passed to the LLM.
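For example, a prompt might delimit a variable with triple quotes like this (a hypothetical sketch; `input_text` is an assumed variable name):

```
Summarize the text between the triple quotes.
"""
{{input_text}}
"""
```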
How do I use the Checkbox input component as a condition for running a step?
Add a condition to your step and use `{{checkbox variable name}}` as the value for the condition. For example, with the default variable name `checkbox`:
- `{{checkbox}}` runs the step when the checkbox is ticked
- `{{!checkbox}}` runs the step when the checkbox is not ticked
How to run a step multiple times, like a loop?
Click on the three vertical dots on the top right of the step and select “Enable foreach”. See the full guide at Loop through a step.
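Conceptually, “foreach” runs the step once per item of a list variable. A rough sketch of the behaviour in Python (assumptions: a list input called `items`, and the platform-provided `params` mapping; the real loop is configured in the UI, not written by you):

```python
# Hypothetical illustration of what "Enable foreach" does for a step.
# Assumption: "items" is a list input; params is provided by the platform.
results = []
for item in params["items"]:
    # Stands in for the step's own logic, applied to each item in turn.
    results.append(f"processed: {item}")
```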
How to reduce hallucination for LLM?
Here is a list of steps to take to improve your experience with LLMs:
- Providing a good and precise system prompt
- Providing a good prompt (see Tips on a good prompt)
- Providing data as background knowledge
- Experimenting more with LLMs
Why is the LLM output cut off in the middle of a sentence, and how do I fix it?
LLMs can handle only a limited number of tokens. At Relevance, we use ~90% of that capacity for the prompt (including knowledge) and the rest for the output. This means that if your prompt/knowledge is very rich in tokens, there will not be enough room for the full output. In almost all such cases, the most relevant pieces of knowledge are fetched from the knowledge (see How to handle large text). “Most relevant” is a vector search applied to your knowledge that selects only the best-matching entries within the context fed to the LLM. By default, the top 100 matching entries are fetched. Decrease the page size parameter under the advanced options in “Most relevant”.
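As a rough illustration of the budget split (the model limit below is an assumption for the example, not a platform constant):

```python
# Hypothetical token budget, following the ~90% / ~10% split described above.
model_token_limit = 8000                           # assumed model context size
prompt_budget = int(model_token_limit * 0.90)      # ~7200 tokens for prompt + knowledge
output_budget = model_token_limit - prompt_budget  # ~800 tokens left for the output
print(prompt_budget, output_budget)                # 7200 800
```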
How do I set multiple outputs?
In the last step of your Tool, click on the “Configure output” button and disable “Infer output from the last step”.
- Using “Add new output key” you can add outputs
- Using `{{}}` you can access variables and steps’ outputs
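For illustration, a hypothetical output configuration might look like the following (the key and variable names are assumptions, not guaranteed platform syntax):

```
Output key: summary    Value: {{summary_text}}
Output key: answer     Value: {{llm_answer}}
```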