How do I set default values for input parameters?

Click on the settings icon, located at the bottom right of the input component. Set the values, then click on “Set current value”.

Input default values

How do I insert variables into a CODE step?

Variables are accessible via the params parameter. For instance, to access a variable called name, use params.name in JavaScript or params["name"] in Python.
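As a minimal sketch (mocking params as a plain dict, since the real CODE step injects it for you; the variable name and return shape are illustrative assumptions):

```python
# Hypothetical CODE step body: the Tool's input variables arrive in `params`.
def code_step(params):
    name = params["name"]  # equivalent to params.name in JavaScript
    return {"greeting": "Hello, " + name + "!"}

# Simulated invocation with sample inputs:
print(code_step({"name": "Ada"}))  # {'greeting': 'Hello, Ada!'}
```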

How do I insert variables into the API step?

On your API component, switch the body to “edit as string” mode and use {{}} to access variables.
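For illustration only, here is a toy Python sketch of how {{}} placeholders in a string body get substituted before the request is sent. The variable name city and the substitution logic are assumptions, not the platform's actual implementation:

```python
import re

# A string-mode API body with a {{}} placeholder (hypothetical example).
body_template = '{"query": "{{city}}", "limit": 5}'
variables = {"city": "Sydney"}

# Replace each {{name}} with the matching variable's value.
body = re.sub(r"\{\{(\w+)\}\}",
              lambda m: variables[m.group(1)],
              body_template)
print(body)  # {"query": "Sydney", "limit": 5}
```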

How to make running a step conditional?

Click on the three vertical dots on the top right of the step and select “Add conditions”. See the full guide at Adding condition to a step.

Why do I have to use single or double quotations around variables in my prompts?

Such marks (e.g. single, double, or triple quote marks) are not required and have no direct functional utility. Quoting is a prompting technique that has been found to work well with LLMs. Keep in mind that an LLM prompt is a long piece of text composed of instructions, examples, etc.; single, double, or triple quote marks around a variable X simply specify the scope of X (i.e. the beginning and end of the string X within the prompt). As a side note, the whole {{X}} expression brings the variable X into the prompt, meaning there won’t be {{}} around it when it is passed to the LLM.
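To illustrate, assuming a hypothetical prompt template containing "{{product}}", the model receives the substituted, quoted value with no braces left:

```python
# {{product}} is replaced before the prompt reaches the LLM, so the model
# only sees the quoted value; the quotes mark where the value starts and ends.
product = "wireless headphones"  # value of the (hypothetical) {{product}} variable
prompt = 'Summarise the reviews for "' + product + '" in one sentence.'
print(prompt)  # Summarise the reviews for "wireless headphones" in one sentence.
```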

How do I use the Checkbox input component as a condition for running a step?

Add a condition to your step and use {{checkbox variable name}} as the value of the condition. For example, assuming the default variable name checkbox:

  • {{checkbox}} when checkbox is ticked

  • {{!checkbox}} when checkbox is not ticked
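The two bullets above can be pictured with a toy model of the assumed condition semantics (the step runs only when its condition evaluates truthy; this is a sketch, not the platform's implementation):

```python
# Hypothetical model of step conditions: truthy -> the step runs.
def step_runs(condition):
    return bool(condition)

checkbox = True                  # the Checkbox input's current value
print(step_runs(checkbox))       # True:  {{checkbox}} lets the step run when ticked
print(step_runs(not checkbox))   # False: {{!checkbox}} lets it run only when not ticked
```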

How to run a step multiple times like a loop?

Click on the three vertical dots on the top right of the step and select “Enable foreach”. See the full guide at Loop through a step.
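Conceptually, “Enable foreach” behaves like a list comprehension over the step's input list. A sketch under that assumption, with run_step as a stand-in for whatever the step does:

```python
def run_step(item):
    # Stand-in for the step's work, e.g. one LLM call per item.
    return item.upper()

items = ["alpha", "beta", "gamma"]            # the list variable looped over
results = [run_step(item) for item in items]  # one execution per item
print(results)  # ['ALPHA', 'BETA', 'GAMMA']
```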

How to reduce hallucinations in LLMs?

Here is a list of steps you can take to improve your experience with LLMs:

Why is the LLM output cut off in the middle of a sentence, and how do I fix it?

LLMs have a limited token capacity. At Relevance, ~90% of it is used for the prompt (including knowledge) and the rest for the output. This means that if your prompt/knowledge is very rich in tokens, there will not be enough room for the full output.

In almost all such cases, the most relevant pieces of knowledge are fetched from the knowledge via “most relevant” (see How to handle large text). “Most relevant” applies a vector search to your knowledge so that only the best-matching entries are selected into the context fed to the LLM. By default, the top 100 matching entries are fetched. To free up tokens, decrease the page size parameter under the advanced options of “most relevant”.
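A toy illustration of top-k retrieval with a page-size cap. The entries and scoring function are made up; the real feature uses vector similarity, and 100 is only the stated default:

```python
# Rank knowledge entries by a similarity score, keep the top `page_size`.
def most_relevant(entries, score, page_size=100):
    return sorted(entries, key=score, reverse=True)[:page_size]

entries = ["pricing guide", "refund policy", "api docs"]
score = lambda e: 1.0 if "refund" in e else 0.1  # toy "similarity" to a refund query
print(most_relevant(entries, score, page_size=2))
# ['refund policy', 'pricing guide']
```

Lowering page_size shrinks the knowledge portion of the prompt, leaving more tokens for the output.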

How do I set multiple outputs?

In the last step of your Tool, click on the Configure output button. Disable “Infer output from the last step”.

  • Using the Add new output key button, you can add outputs
  • Using {{}}, you can access variables and steps’ outputs

More details are provided at Output configuration.
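Conceptually, the resulting configuration maps output keys to {{}} references (the step names below are hypothetical, for illustration only):

```python
# Each key becomes one output of the Tool; each value is a {{}} reference
# to an earlier step's result.
outputs = {
    "summary": "{{llm_summary.answer}}",
    "category": "{{llm_classify.answer}}",
}
print(sorted(outputs))  # ['category', 'summary']
```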

How long can my bulk run take?

Under the current setup, each run is terminated after 4 hours. If your data table is large and 4 hours is not enough to cover all the rows, you can rerun your enrichment using the “Run on rows that haven’t run” option, which resumes the execution from where it stopped.
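The resume option can be pictured as filtering for rows whose output column is still empty (a toy sketch, not the actual implementation):

```python
rows = [
    {"text": "row 1", "output": "done"},  # processed before the 4-hour cutoff
    {"text": "row 2", "output": None},    # not yet run
    {"text": "row 3", "output": None},
]
# "Run on rows that haven't run": only rows with no output are re-queued.
pending = [r for r in rows if r["output"] is None]
print(len(pending))  # 2
```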

How do I access the bulk-run analysis results?

Each output variable of your AI Tool will be added to your data table as a new column. You can see the results in the data table or export them to a CSV.

Can an AI Tool have multiple outputs?

Yes! You can configure multiple outputs for your AI Tool. Each output will have its corresponding column in the data table.

Can I run a Tool multiple times on a data table?

Yes! There is no limit on the number of times you can run a Tool. Just keep in mind that, to avoid overwriting the existing results, you should rename the output column for each run.

If my dataset contains more than one column to be analyzed, do I need to upload the dataset multiple times?

No! There is no need to upload the dataset multiple times. You can use the same data table while configuring the bulk run, as many times as needed. Just keep in mind that, to avoid overwriting the existing results, you should rename the output column for each run.

Can I leave the page after starting a bulk-run?

Yes! Bulk AI-powered data enrichment (i.e. bulk-run) is executed in the background. So rows of data will be added to your dataset even if you leave or refresh the page.

Is there a cap on the number of rows AI Tools can process?

Technically, no - however, you will be limited by the number of credits you have.

What format is supported for the data table?

The enrichment runs across a dataset, which can be created from CSV or Excel files, or from files such as PDFs, audio, and Word documents, which are automatically converted into a dataset.

How does bulk run (enrichment) pricing work?

Bulk runs are charged the same as individual triggers of a chain. Each row in your dataset is the equivalent of one execution of the chain (for example, 2 credits per row).
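Using the 2-credits-per-row figure from the example, the total cost of a bulk run is simply rows × credits per row (the dataset size below is hypothetical):

```python
credits_per_row = 2   # example figure from the text
rows = 500            # hypothetical dataset size
total = credits_per_row * rows
print(total)  # 1000 credits for the whole bulk run
```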

How is credits consumption calculated?

At Relevance, we have a fixed fee per tier and a variable fee for certain steps. If you provide your own key, we don’t charge the variable fee.

What if I exceed the credit limit for my plan?

Depending on your plan, you may need to top up before continuing to use the platform, or you may have a buffer that lets you keep using the platform while your credit balance is negative, before topping up.

Will there be a price difference between using GPT-3.5 vs GPT-4 or other models?

Different models have their own pricing; at Relevance, we pass on the cost. Keep in mind that if you provide your own key, we don’t charge the variable fee.

Can I use my own API key?

Absolutely! Set up your own keys at the API key page.