Recruit Bosh, the AI Sales Agent
Frequently asked questions
What are AI chains?

AI chains (also known as LLM chains) let you combine multiple Large Language Model steps, such as GPT, with other transformations like document retrieval, vector search and API requests to build advanced functionality and features. These workflows can then be deployed as a single endpoint that receives an input, runs it through the steps of the chain and provides an output.
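To make the idea concrete, here is a minimal sketch of a chain as a pipeline of steps. Every function below is a stand-in, not Relevance AI's actual API: retrieval, prompt templating and the LLM call are stubbed so the composition itself is visible.

```python
# Minimal sketch of an LLM chain: each step transforms the previous step's
# output. All functions are illustrative stand-ins, not Relevance AI's API.

def retrieve_documents(query):
    # Stand-in for document retrieval / vector search.
    return ["Doc about " + query]

def build_prompt(query, docs):
    # Combine the input with retrieved context into a single LLM prompt.
    context = "\n".join(docs)
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

def call_llm(prompt):
    # Stand-in for a call to GPT or another LLM provider.
    return "stub answer for: " + prompt.splitlines()[-1]

def run_chain(query):
    # The chain: retrieval -> prompt templating -> LLM call -> output.
    docs = retrieve_documents(query)
    prompt = build_prompt(query, docs)
    return call_llm(prompt)
```

Deploying the chain as an endpoint amounts to exposing `run_chain` behind a single URL: one input in, one output out.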

How can I deploy my first chain?

Getting started is as simple as creating a free Relevance AI account and opening our Chain Notebook, starting either from a blank slate or from a template. Once you're happy with the results in the Notebook, you can deploy the chain as an embeddable form or an API endpoint.
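Once deployed as an API endpoint, a chain is triggered with a plain HTTP request. The sketch below only builds the request; the URL shape, hostname and payload keys are illustrative assumptions, not the documented Relevance AI API.

```python
# Hypothetical sketch of preparing a request to a deployed chain endpoint.
# The URL pattern and payload structure are assumptions for illustration.

def build_chain_request(region, project, chain_id, params):
    # A deployed chain receives an input payload and returns the chain output.
    url = f"https://api-{region}.example.com/chains/{project}/{chain_id}/trigger"
    payload = {"params": params}
    return url, payload

url, payload = build_chain_request(
    "us-east-1", "my-project", "categorize-feedback",
    {"text": "The onboarding flow was confusing."},
)
# To execute: response = requests.post(url, json=payload); response.json()
```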

Which LLMs do you support?

We support OpenAI, Anthropic, Cohere and AI21. If you'd like to use a different provider, just let us know in our live chat and we'll add it to the list.

How does Relevance AI handle data security?

Chains are ephemeral by nature, which means we don't store your inputs or outputs. Datasets are stored in our platform, which is SOC 2 Type II compliant. All data is encrypted in transit and at rest.

Why should I use Relevance AI instead of the GPT API?

Relevance AI lets you select which LLM provider to use, including OpenAI's GPT. Using Relevance AI instead of the OpenAI API directly offers many benefits for creating and deploying AI chains (even with a single prompt).

These benefits include:
- Seamless transition from single prompts to chains - deploy directly without a single new line of code
- Effortless LLM-provider switching - no cost to integrate new LLM-provider or use multiple
- Enhanced monitoring and cost tracking - clear picture of your usage, up to each execution
- Streamlined versioning control - simple interface to rollback or use different prompts
- Simplified collaboration and sharing - involve other team members, not just technical ones
- Intuitive Notebook experience - best-in-class experience to experiment and craft chains

By choosing Relevance AI, businesses can leverage GPT far more effectively, with no drawbacks, to build better products with AI.

What is the Notebook?

Relevance AI's Notebook is a best-in-class product for creating and experimenting with AI chains. Jupyter notebooks transformed the way data scientists work, making it easy to build ML workflows with a convenient interface and individual block execution. Working with LLM chains is not too different from building a traditional ML workflow: there is an exploration and experimentation phase to discover the best prompt for the job, and there is frequent collaboration with peers to review and improve the work. The Notebook makes this process exceptionally easy.

How can I contact support?

You can speak with someone at Relevance AI by logging into the dashboard, selecting the help icon in the top-right and choosing live chat. This opens an interface for communicating with our team. Enterprise customers can also contact their dedicated CSM.

What regions can I deploy chains to?

Relevance AI offers deployment in the AWS regions us-east-1 (N. Virginia), eu-west-2 (London) and ap-southeast-2 (Sydney).

Can I run chains in batch across a large amount of data?

Yes, all chains can be run in batch across an entire dataset. This functionality is built into Relevance AI's datasets.
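Conceptually, a batch run applies one chain to every record in a dataset and collects the outputs. The sketch below illustrates this with a trivial keyword-rule chain; Relevance AI's built-in batch feature performs the equivalent server-side across a stored dataset.

```python
# Sketch of a batch run: apply one chain to every record in a dataset.
# run_chain is a trivial stand-in for any deployed chain.

def run_chain(record):
    # Stand-in chain step: label a record with a simple keyword rule.
    label = "complaint" if "slow" in record["text"] else "other"
    return {**record, "label": label}

def run_batch(dataset):
    # Run the chain over every row, keeping one output per record.
    return [run_chain(row) for row in dataset]

dataset = [
    {"id": 1, "text": "The app is slow on mobile"},
    {"id": 2, "text": "Great support team"},
]
results = run_batch(dataset)
```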

Why is Relevance AI running slowly in my browser?

If you're experiencing slow performance, lag, or crashes while using Relevance AI, the issue may be related to your browser extensions, particularly Grammarly. Try disabling your extensions before using Relevance AI's tool builder. If the problems persist after disabling them, please reach out to our support team. We're here to help!