Jailbreaking and Data Poisoning: Two Persistent Threats to LLM Security
Prompt injection revealed a core weakness in LLM systems. But it was only the beginning. Two additional vectors now define the broader attack surface: data poisoning and jailbreak prompts. Both compromise trust — but at different stages of a model’s…