An alias for reconstruction.
An object that can convert untrusted input into trusted output.
- constraint validator
An object that is capable of analyzing an LLM’s structured output to determine if the output is compliant with a set of constraints.
The process of making a technology, service, or system available to untrusted external users.
A set of rules that define the actual structure of an LLM’s structured output. The grammar is used by a parser to turn some text into a structured tree.
A Large Language Model. A machine learning model that can produce desired text from some prompt. e.g. ChatGPT
- prompt envelope
Extra context wrapped around untrusted input to guide an LLM into producing desired output.
- prompt injection
A technique for exploiting an LLM by crafting a prompt that causes the LLM to produce output that is considered harmful.
The process of rebuilding LLM’s structured output to be compliant with a constraint validator.
Translating input with a Bifrost.