Validators
We help protect your chain against LLM unreliability & hallucinations
One of the biggest roadblocks with using LLMs is when they return invalid content. For example, you may ask for JSON in a specific format, but the LLM returns an explanation along with it that breaks your `JSON.parse()` call.
To protect against this, we provide a quality control feature called `validators`. You can specify a schema you want to apply to your prompt, and we will validate the LLM's response against this schema. If the LLM returns invalid data, we will ask the LLM to fix it. If that fails, we will return an error to your application.
This way, you can rest assured your application will always receive valid data or fail gracefully.
The `validators` property
Specify quality control by using the `validators` property on the `prompt_completion` step. This property receives a list of validators to apply to the LLM's response, in the format of:
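As a minimal sketch, assuming the step is configured as a plain object and that a `type` field selects the validator (these field names are illustrative, not confirmed by this page):

```ts
// Sketch only: "transformation", "params", and "type" are assumed
// field names for illustration; consult the step reference for the
// exact shape.
const step = {
  transformation: "prompt_completion",
  params: {
    prompt: "List three colours as a JSON array of strings.",
    validators: [
      { type: "is_json" }, // one validator object per entry
    ],
  },
};
```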
Currently there are three options, and we are always adding more:
- `is_json`
- `jsonschema`
- `regex`
is_json
This validates whether the LLM has returned only JSON, and will try to fix the response if not. It doesn't require any additional properties.
The `prompt_completion` transformation will still return a string, but you will be safe to `JSON.parse()` it.
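For example, pairing an `is_json` validator with a client-side parse (a sketch; the `{ type: "is_json" }` shape is an assumption for illustration):

```ts
// Sketch: ask for JSON-only output, then parse the returned string.
// The validator object's shape is assumed for illustration.
const validators = [{ type: "is_json" }];

// prompt_completion still returns a string; with is_json applied,
// JSON.parse() will not throw on it.
const output = '["red", "green", "blue"]'; // example step output
const colours: string[] = JSON.parse(output);
```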
jsonschema
If you would like to validate a specific format of JSON, you can take advantage of JSONSchema! Provide the schema as the `schema` property.
The `prompt_completion` transformation will still return a string, but you will be safe to `JSON.parse()` it.
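For instance, a schema requiring a `name` and an `age` (a sketch; only the `schema` property is named above, the rest is assumed for illustration):

```ts
// Sketch: validate the response against a JSONSchema document.
// Only the "schema" property is documented here; "type" is assumed.
const validators = [
  {
    type: "jsonschema",
    schema: {
      type: "object",
      properties: {
        name: { type: "string" },
        age: { type: "number" },
      },
      required: ["name", "age"],
    },
  },
];

// The step still returns a string, so parse it into the shape you asked for:
type Person = { name: string; age: number };
const person: Person = JSON.parse('{"name": "Ada", "age": 36}'); // example output
```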
regex
This validates whether the LLM's response matches a regex. It will try to fix it if not. This requires a `pattern` property, in this format:
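As a sketch of that shape (only `pattern` is named above; the `type` field is assumed for illustration):

```ts
// Sketch: constrain the response to an ISO date such as "2024-01-31".
// Only the "pattern" property is documented here; "type" is assumed.
const validators = [
  {
    type: "regex",
    pattern: "^\\d{4}-\\d{2}-\\d{2}$",
  },
];
```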