Quoting @himbodhisattva

https://simonwillison.net/2025/Aug/4/himbodhisattva/#atom-everything (simonwillison.net)
The term "prompt injection" was originally coined in May 2022 to describe a potential attack on services using models like GPT-3. This attack vector is analogous to SQL injection, where a malicious prompt tricks the AI into completing its initial task and then following new, unintended instructions. The goal is to bypass the intended functionality and gain control over the model's generation process, potentially revealing its original instructions. This historical note credits the user @himbodhisattva with first articulating this specific security vulnerability for large language models.
0 points by hdt 2 months ago

Comments (0)

No comments yet.