They treat your prompts as *wishes*, not unbreakable instructions. LLMs don't distinguish between "instructions" and "data" - it's all just text to them.
LLMs are so very very good at generating tons of code rapidly. Without proper security controls, you're potentially exposing yourself to huge risks.
**Setup**: n8n.io workflow that reads emails and sends polite replies to LinkedIn recruiters **Attack**: Mr. Hacker Man sends malicious prompt via email **Result**: Your entire email history neatly summarized and sent to the attacker
**Setup**: Chatbot with production database access, carefully designed to only show user's own data **Attack**: Carefully crafted prompt injection to extract *all* user information **Result**: Passwords and personal data for all users exposed
**Setup**: Claude Code with MCP servers for GitHub and production database for analytics **Attack**: Malicious content in GitHub issues from users **Result**: Production data exposed via pull requests or issue comments
- Automated security vulnerability **scanning in deployment pipelines** - Manual **reviews** for LLM-generated code
Bottom line LLMs are powerful tools that can significantly improve our productivity and capabilities. Approach them with the same security mindset as any other technology.