- class PromptEnvelope(*, llm: LLMIntegration)
The purpose of the prompt envelope is to guide the wrapped untrusted query toward producing the structured output. Preventing prompt injection outright is impossible, so the design accepts that up front. A malicious prompt, however, will not produce a valid response: the grammar will fail to parse it and the validator will reject it, so the malicious user never sees the result of their injected instructions.
llm (LLMIntegration) – The LLM integration being sent the human input. This is passed in so that the envelope can adapt how it wraps and unwraps data exchanged with the LLM to that particular model's quirks.
- abstract unwrap(untrusted_llm_output: str) → str
Unwrap the LLM’s output to recover the original untrusted input. This method should work closely with the wrap method to coordinate how the structured response is delimited.
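As a minimal sketch of how a concrete envelope might coordinate wrap and unwrap, the subclass below brackets the payload between fixed sentinel markers and extracts only the delimited span from the raw LLM output. The class name DelimitedPromptEnvelope, the sentinel strings, and the LLMIntegration stub are all illustrative assumptions, not part of the documented API.

```python
class LLMIntegration:
    # Stand-in for the real integration object; only used here so the
    # sketch is self-contained.
    name = "example-llm"


class DelimitedPromptEnvelope:
    """Hypothetical concrete envelope: wraps untrusted input between
    sentinel markers and unwraps by extracting the span between them."""

    BEGIN = "<<BEGIN_RESPONSE>>"
    END = "<<END_RESPONSE>>"

    def __init__(self, *, llm: LLMIntegration):
        # Kept so wrapping/unwrapping could adapt to per-model quirks.
        self.llm = llm

    def wrap(self, untrusted_input: str) -> str:
        # Instruct the model to place its structured answer between the
        # same markers that unwrap() will look for.
        return (
            "Answer only with the structured result, placed between "
            f"{self.BEGIN} and {self.END}.\n"
            f"Input: {untrusted_input}"
        )

    def unwrap(self, untrusted_llm_output: str) -> str:
        # Mirror of wrap(): keep only the delimited span; anything the
        # model emitted outside the markers is discarded. Raises
        # ValueError if the markers are missing, i.e. the output is
        # malformed and should be rejected downstream.
        start = untrusted_llm_output.index(self.BEGIN) + len(self.BEGIN)
        end = untrusted_llm_output.index(self.END, start)
        return untrusted_llm_output[start:end].strip()
```

Under this scheme, injected text that leaks outside the markers is simply dropped, and output with no markers at all fails loudly before it ever reaches the grammar or validator.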