Stories by xsourcesec

Show HN: GitHub Action for AI/LLM Security Scanning in CI/CD (github.com)
Show HN: BreachLab – Can you hack our AI? (breachlab.xsourcesec.com)
10 AI characters guard secret codes. Your job: extract them using prompt injection.

Levels 1-3: Most pass
Levels 7-9: Security pros struggle
Level 10: Still uncracked

Free, no signup. Curious what techniques HN tries.
Same AI agent, different prompts: 0% vs. 62% security pass rate
I've been testing production AI agents for vulnerabilities.

Interesting finding: system prompt design matters more than the model itself.

Same agent. Same task. Same attack vectors. Only difference: how the system prompt was structured.

Results:
→ Prompt A: 0% pass rate (failed every test...
Show HN: AI Security Baseline 1.0 for LLM Apps (xsourcesec.com)