#LLM-security
6 bookmarks tagged with "LLM-security"
across 1 category: Information Security
-  Buttercup: Open-Source AI-Driven Cyber Reasoning System • GitHub • Aug 9, 2025 • Information Security. Trail of Bits' second-place CRS from DARPA's AI Cyber Challenge: an automated system for discovering and patching vulnerabilities in open-source software using AI-augmented fuzzing and multi-agent patch generation. [crs] [cyber-reasoning-system] [vulnerability-discovery] [automated-patching] [fuzzing] [ai-security] [darpa] [aixcc] [trail-of-bits] [oss-fuzz] [libfuzzer] [jazzer] [static-analysis] [security-automation] [open-source-security] [vulnerability-research] [multi-agent-systems] [llm-security] [code-analysis]
-  Prompt injection and the lethal trifecta - Bay Area AI Security Meetup • simonwillison.net • Aug 9, 2025 • Information Security. Transcript of Simon Willison's talk at the Bay Area AI Security Meetup explaining prompt injection vulnerabilities and demonstrating attack methods across platforms such as GitHub and ChatGPT.
-  CaMeL offers a promising new direction for mitigating prompt injection attacks • simonwillison.net • Aug 9, 2025 • Information Security. Analysis of CaMeL (CApabilities for MachinE Learning), a new approach from Google DeepMind for defending against prompt injection attacks in language models.
-  The lethal trifecta for AI agents: private data, untrusted content, and external communication • simonwillison.net • Aug 9, 2025 • Information Security. Simon Willison identifies three capabilities that create a critical security vulnerability when combined in an AI system: access to private data, exposure to untrusted content, and the ability to communicate externally.
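The trifecta argument is about capability combinations, not any single feature: each leg is safe on its own, and the risk appears only when all three are present. A minimal sketch (all names are illustrative, not from Willison's article) of that check:

```python
# Hypothetical sketch of the "lethal trifecta" rule: an agent that combines
# private-data access, untrusted input, and external communication gives an
# injected instruction a path to exfiltrate private data.
from dataclasses import dataclass


@dataclass
class AgentCapabilities:
    reads_private_data: bool          # e.g. email, documents, credentials
    ingests_untrusted_content: bool   # e.g. web pages, inbound messages
    communicates_externally: bool     # e.g. HTTP requests, sending email


def has_lethal_trifecta(caps: AgentCapabilities) -> bool:
    """True when all three legs are present, i.e. exfiltration is possible."""
    return (caps.reads_private_data
            and caps.ingests_untrusted_content
            and caps.communicates_externally)


# An email assistant that also browses the web and sends messages is unsafe:
assistant = AgentCapabilities(True, True, True)
assert has_lethal_trifecta(assistant)

# Removing any one leg (here: outbound communication) breaks the trifecta:
sandboxed = AgentCapabilities(True, True, False)
assert not has_lethal_trifecta(sandboxed)
```

Removing any single leg is the mitigation the article argues for, since no known filter reliably stops prompt injection itself.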
-  Design Patterns for Securing LLM Agents against Prompt Injections • simonwillison.net • Aug 9, 2025 • Information Security. Practical design patterns and architectural approaches for building AI agents that are more resistant to prompt injection attacks.
-  Lessons From Red Teaming 100 Generative AI Products • simonwillison.net • Aug 9, 2025 • Information Security. Insights and patterns from security testing 100 different generative AI products, revealing common vulnerabilities and defense strategies.