We help protect your Tool against LLM unreliability & hallucinations
LLMs don't always return data in the format you ask for: for example, you may request JSON and receive a response that fails JSON.parse(). To protect against this, Relevance has provided you with a quality control feature called Validators.
You can specify a schema you want to apply to your prompt, and this component validates the LLM's response against that schema.
If the LLM returns invalid data, the LLM is asked to fix its response in a second step. If that also fails, an error message is returned to your application. This way, you can rest assured your application will always receive valid data or fail gracefully.
There are three options available, and we will always be adding more:

- is_json — validates that the response is valid JSON. The prompt_completion transformation will still return a string, but you will be safe to JSON.parse() it.
- jsonschema — validates the response against a JSON schema, specified in the schema property. The prompt_completion transformation will still return a string, but you will be safe to JSON.parse() it.
- regex — validates the response against a regular expression, specified in the pattern property, in this format: