AI chains (also known as LLM chains) let us combine multiple Large Language Model steps, such as GPT calls, with other transformations like document retrieval, vector search and API requests to build advanced functionality and features. These workflows can then be deployed as a single endpoint that receives an input, runs it through the steps of the chain and provides an output.
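The flow described above can be sketched in plain Python. This is a hypothetical illustration of the concept, not the Relevance AI SDK; the step functions (`retrieve_documents`, `build_prompt`, `call_llm`) are placeholders standing in for real retrieval and provider calls.

```python
# Hypothetical sketch of an AI chain: each step transforms the
# previous step's output. These functions are placeholders, not a real SDK.

def retrieve_documents(query: str) -> list[str]:
    # Placeholder for vector search / document retrieval.
    return [f"doc about {query}"]

def build_prompt(query: str, docs: list[str]) -> str:
    # Combine retrieved context with the user's question.
    context = "\n".join(docs)
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

def call_llm(prompt: str) -> str:
    # Placeholder for a provider call (OpenAI, Anthropic, Cohere, AI21, ...).
    return f"(answer derived from: {prompt[:40]}...)"

def run_chain(user_input: str) -> str:
    # A deployed chain endpoint runs exactly these steps in order:
    # input -> retrieval -> prompt construction -> LLM -> output.
    docs = retrieve_documents(user_input)
    prompt = build_prompt(user_input, docs)
    return call_llm(prompt)

print(run_chain("refund policy"))
```

Deploying the chain means exposing `run_chain` as a single endpoint, so callers never interact with the intermediate steps.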
We support OpenAI, Anthropic, Cohere and AI21. If you'd like to use a different provider, just let us know in our live chat and we'll add it to the list.
Chains are ephemeral by nature, which means we don't store your input or your output. Datasets are stored in our platform which is SOC 2 Type II compliant. All data is encrypted in-transit and at-rest.
Relevance AI lets you select which LLM provider to use, including OpenAI's GPT. Using Relevance AI instead of the OpenAI API directly offers a multitude of benefits for creating and deploying AI chains (even with a single prompt).
These benefits include:
- Seamless transition from single prompts to chains - deploy directly without a single new line of code
- Effortless LLM-provider switching - no cost to integrate new LLM-provider or use multiple
- Enhanced monitoring and cost tracking - clear picture of your usage, up to each execution
- Streamlined versioning control - simple interface to rollback or use different prompts
- Simplified collaboration and sharing - involve other team members, not just technical ones
- Intuitive Notebook experience - best-in-class experience to experiment and craft chains
By choosing Relevance AI, businesses can leverage GPT much more effectively, with no drawbacks, to build better products with AI.
Relevance AI's Notebook is a best-in-class product for creating and experimenting with AI chains. Jupyter notebooks transformed the way Data Scientists work, making it easy to build ML workflows with a convenient interface and individual block executions. Working with LLM chains is not too different from building a traditional ML workflow: there is an exploration and experimentation phase to discover the best prompt for the job, and there is frequent collaboration with peers to review and improve the work. The Notebook makes this process exceptionally easy.
You can speak with someone at Relevance AI by logging into the dashboard, selecting the help icon from the top-right and selecting live chat. This will open an interface with which you can communicate with our team. For enterprise customers, you can also contact your dedicated CSM.
Relevance AI is available in the following AWS regions: us-east-1 (N. Virginia), eu-west-2 (London) and ap-southeast-2 (Sydney).
Yes, all chains can be run in batch across an entire dataset. This functionality is built in with Relevance AI's datasets.
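Conceptually, batch execution applies one chain to every row of a dataset and stores the output alongside each row. The sketch below is a hypothetical illustration of that pattern in plain Python; `summarize` stands in for any deployed chain, and the dataset rows are made up for the example.

```python
# Hypothetical sketch of batch execution: run one chain over every
# row of a dataset. `summarize` is a placeholder for a deployed chain.

def summarize(text: str) -> str:
    # Placeholder chain; a real chain would call an LLM here.
    return text[:20] + "..." if len(text) > 20 else text

dataset = [
    {"id": 1, "review": "Great product, fast shipping, would buy again."},
    {"id": 2, "review": "Okay."},
]

# Apply the chain to each row, keeping outputs alongside the inputs.
results = [{**row, "summary": summarize(row["review"])} for row in dataset]
for row in results:
    print(row["id"], row["summary"])
```

With built-in dataset support, this loop happens on the platform side: you point the chain at a dataset and each row's output is written back for you.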
If you're experiencing slow performance, lag, or crashes while using Relevance AI, the issue may be related to your browser extensions, particularly Grammarly. Try disabling your extensions before using Relevance AI's tool builder. If you continue to experience performance problems after disabling them, please reach out to our support team. We're here to help!