0
Quoting @himbodhisattva
https://simonwillison.net/2025/Aug/4/himbodhisattva/#atom-everything(simonwillison.net)The term "prompt injection" was originally coined in May 2022 to describe a potential attack on services using models like GPT-3. This attack vector is analogous to SQL injection, where a malicious prompt tricks the AI into completing its initial task and then following new, unintended instructions. The goal is to bypass the intended functionality and gain control over the model's generation process, potentially revealing its original instructions. This historical note credits the user @himbodhisattva with first articulating this specific security vulnerability for large language models.
0 points•by hdt•2 months ago