LLMWhisperer is a powerful document processing technology that optimizes complex documents for Large Language Models (LLMs). With Relevance AI, you can enhance this integration to extract and present document content in ways that maximize comprehension and response accuracy.


LLMWhisperer optimizes complex documents for Large Language Models, enabling precise text extraction and processing. Relevance AI amplifies this capability by leveraging AI Agents to automate insights and actions based on the processed content.
Precision Prompt Mastery
Enables agents to dynamically optimize and refine prompts for maximum accuracy and relevance across diverse use cases.
Intelligent Resource Optimization
Dramatically reduces token usage and processing costs through automated fine-tuning and efficient prompt management.
Advanced Context Processing
Transforms complex documents into optimized formats for enhanced comprehension and response generation.
Relevance AI seamlessly integrates with LLMWhisperer to enhance document processing workflows with intelligent AI Agents.
What you’ll need
You don't need to be a developer to set up this integration. Follow this simple guide to get started:
- A Relevance AI account
- An LLMWhisperer account with API access to the documents you'd like to process
- Authorization (you'll connect securely using OAuth—no sensitive info stored manually)
LLMWhisperer is a robust document processing technology designed to enhance complex documents for Large Language Models (LLMs). This integration allows developers to efficiently extract, process, and present document content in ways that improve LLM understanding and response accuracy.
Key benefits include intelligent document processing with OCR and text extraction, flexible output modes tailored for various use cases, capabilities for highlighting and locating text, and asynchronous processing for handling large documents.
To get started, ensure you have the necessary accounts and credentials, including an LLMWhisperer account with API access, OAuth credentials with `pipedream-llmwhisperer-read-write` permissions, and valid API authentication tokens.
Your system should support REST API client capabilities, handle HTTP/HTTPS requests, manage JSON responses, and have storage for document processing results. Required permissions include document read/write access, compliance with API rate limits, and storage permissions for metadata if highlighting is enabled.
Begin by setting up authentication with your LLMWhisperer account ID and base URL. Configure your API headers to include your authorization token and content type.
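As a rough illustration, here is how that configuration might be assembled in Python with the `requests` library used in the later snippets. The base URL, token value, and header names are placeholders and assumptions, not confirmed API details; substitute the values from your LLMWhisperer account and the official API reference.

```python
# Placeholder values - replace with the details from your LLMWhisperer account.
BASE_URL = "https://your-llmwhisperer-host.example.com/api"  # hypothetical base URL
API_TOKEN = "your-api-token"                                 # your authorization token
ACCOUNT_ID = "your-account-id"                               # your LLMWhisperer account ID

# Headers carrying the authorization token and content type, as described above.
HEADERS = {
    "Authorization": f"Bearer {API_TOKEN}",  # header name and scheme are assumptions
    "Content-Type": "application/json",
}
```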
For a quick start, you can extract text from a document by initializing the extraction configuration and making an API call. You can check the processing status and retrieve the processed text using the appropriate API endpoints. Additionally, you can highlight text locations by sending a request with the search term.
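The sketch below walks through that flow end to end: submit a document for extraction, poll the processing status, retrieve the extracted text, and request highlight locations for a search term. The endpoint paths (`/whisper`, `/whisper-status`, `/whisper-retrieve`, `/highlight`), parameter names, and response fields are assumptions for illustration only; check the LLMWhisperer API reference for the exact routes and fields your account uses.

```python
import time
import requests

BASE_URL = "https://your-llmwhisperer-host.example.com/api"  # hypothetical base URL
HEADERS = {"Authorization": "Bearer your-api-token"}         # header name is an assumption

# 1. Submit a document for extraction (endpoint path and parameters are illustrative).
with open("invoice.pdf", "rb") as f:
    submit = requests.post(
        f"{BASE_URL}/whisper",
        headers=HEADERS,
        params={"processing_mode": "ocr"},  # pick the mode that matches your document type
        data=f.read(),
    )
submit.raise_for_status()
whisper_hash = submit.json().get("whisper_hash")  # field name assumed; keep it for later calls

# 2. Poll the processing status until the document is ready.
while True:
    status = requests.get(
        f"{BASE_URL}/whisper-status",
        headers=HEADERS,
        params={"whisper_hash": whisper_hash},
    ).json()
    if status.get("status") == "processed":  # status value is an assumption
        break
    time.sleep(5)

# 3. Retrieve the processed text.
text = requests.get(
    f"{BASE_URL}/whisper-retrieve",
    headers=HEADERS,
    params={"whisper_hash": whisper_hash},
).text

# 4. Highlighting: locate where a search term appears in the original document.
highlights = requests.get(
    f"{BASE_URL}/highlight",
    headers=HEADERS,
    params={"whisper_hash": whisper_hash, "search_text": "Total amount"},
).json()
```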
Common issues may arise, such as authentication errors, processing timeouts, OCR quality issues, and highlighting problems. Solutions include verifying account details, adjusting timeout parameters, and ensuring metadata storage is enabled during extraction.
Best practices include storing the `whisperHash` for asynchronous operations, implementing retry logic for status checks, using the correct `processingMode` based on document type, enabling metadata storage only when necessary, and ensuring proper error handling for all API calls.
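To make the retry logic concrete, here is one possible shape for a status-polling helper with exponential backoff. The endpoint path, field names, and status values are the same assumptions as above, and the timeout and delay values are arbitrary starting points to tune against your own documents.

```python
import time
import requests

def wait_until_processed(base_url, headers, whisper_hash,
                         timeout_s=300, initial_delay_s=2, max_delay_s=30):
    """Poll the status endpoint with exponential backoff until processing finishes
    or the timeout is reached. Endpoint and field names are assumptions."""
    deadline = time.monotonic() + timeout_s
    delay = initial_delay_s
    while time.monotonic() < deadline:
        resp = requests.get(
            f"{base_url}/whisper-status",
            headers=headers,
            params={"whisper_hash": whisper_hash},
        )
        resp.raise_for_status()  # surface authentication or rate-limit errors immediately
        status = resp.json().get("status")
        if status == "processed":
            return True
        if status == "error":
            raise RuntimeError(f"Extraction failed for {whisper_hash}")
        time.sleep(delay)
        delay = min(delay * 2, max_delay_s)  # exponential backoff, capped
    raise TimeoutError(f"Document {whisper_hash} was not processed within {timeout_s}s")
```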
For further assistance or detailed API documentation, please refer to the LLMWhisperer documentation or contact technical support.

Security & Reliability
No training on your data
Your data remains private and is never utilized for model training purposes.
Security first
We never store anything we don’t need to. The inputs or outputs of your tools are never stored.

To get the most out of the LLMWhisperer + Relevance AI integration without writing code:
- Start with a clear setup: Ensure your LLMWhisperer account is properly configured with the necessary OAuth credentials and permissions.
- Utilize the example code: Leverage the extraction and highlighting sketches above to jumpstart your integration.
- Validate inputs: Always check your input parameters for correctness before making API calls to avoid unnecessary errors.
- Test with sample data: Run your automations using test PDFs and images to ensure everything works smoothly before going live.
- Monitor API usage: Keep an eye on your API calls to avoid hitting rate limits, and implement caching where appropriate.