🛡️ Attack Surface

LLMs are vulnerable to prompt injection attacks, which can be used to construct responses that are dangerous to the system. This is the primary reason that LLMs have not seen widespread adoption as externalized products.

Prompt injection can have different consequences for different types of structured outputs.