Security rules for your AI coding assistant. Derived from vulnerabilities found in hundreds of real vibe-coded apps.
Free rules for bot detection and cost-limit abuse, delivered to your inbox. Six more with a subscription.
New categories drop to subscribers first. Plus weekly eval findings.
Rules-on and rules-off outputs are generated from the same prompt on the same model. No cherry-picking: this is what ships. Full prompt, rules, and eval data on GitHub.
The Evidence
Every VibeCure rule is derived from vulnerabilities found in real vibe-coded apps. Here's what happens when rules are applied:
| Category | No rules | OWASP / generic | VibeCure |
|---|---|---|---|
| **Bot abuse protection** (8 API services tested: SendGrid, Twilio, OpenAI, AWS S3, Google Maps, Google TTS, DeepL, Stability AI) | 0% | 0% | 100% |
| **Abuse cost limits** (same 8 API services tested) | 0% | 0% | 100% |
| **SMS abuse prevention** (5 checks tested: rate limiting, per-phone cooldown, CAPTCHA / bot detection, geo-restriction, Verify API) | 0% | 0% | 100% |
| **Sensitive data encryption** (SSN & financial fields: SSN encryption with AES-256, financial data protection) | 0% | 0% | 100% |
| **Secret key management** (15 services tested: Stripe, OpenAI, SendGrid, Twilio, AWS, Plaid, Sentry, Slack, MongoDB, GitHub, +5 more) | 0% | 62% | 97% |
Tested across 7 frontier models · 14+ vulnerability categories
Real Vulnerabilities We Found
Every impact below was generated by a frontier AI model, shipped as-is, and exploitable on first request.
Bots flood your SMS endpoint with premium-rate international phone numbers. You pay per message.
A competitor sabotages you and cancels all your paid subscriptions in seconds.
Hackers find your open chatbot and resell access to your OpenAI key. You pay for every token.
How It Works
VibeCure tells your AI assistant which security patterns to use — ones it knows but won't apply unless asked. No scanning tools. No CI/CD pipelines. Just a curated file, kept updated as models change.
VibeCure detects your AI assistant and writes the rules file. 10 seconds, done.
```shell
$ npx vibecure --key sk_live_•••••
✓ Detected: .cursorrules
✓ Wrote 8 categories (42 rules)
✓ Rules active
```
Cursor, Claude Code, Copilot, and 5 others read rules files before generating code.
```
# .cursorrules
Before each paid API call, check the user's quota in usage_tracking
(user_id, resource_type, count, period_start). Return 429 when
exceeded. Reset each billing cycle.
...
```

42 rules across 8 categories
Per-user quotas, auth checks, key externalization — your AI applies them without being asked.
```javascript
// ✔ VibeCure rule: per-user quota
const usage = await db('usage_tracking')
  .where({ user_id, resource_type: 'sms' })
  .first();
if (usage && usage.count >= DAILY_LIMIT) {
  return res.status(429).json({ error: 'quota exceeded' });
}
```
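The other two patterns mentioned above, auth checks and key externalization, can be sketched the same way. Function names and the env-var name are illustrative, not part of the product:

```javascript
// Reject unauthenticated requests before any paid work happens.
function requireAuth(req) {
  if (!req.user) {
    const err = new Error('unauthenticated');
    err.status = 401;
    throw err;
  }
  return req.user;
}

// Read the vendor key from the environment on the server.
// It is never hard-coded and never shipped to the client.
function getApiKey(env = process.env) {
  const key = env.OPENAI_API_KEY;
  if (!key) throw new Error('OPENAI_API_KEY is not set');
  return key;
}
```

Clients call your endpoint; only the server, after `requireAuth`, uses the key.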
No scanning tool. No CI/CD pipeline. No learning curve. One command, instant security rules.
What's Covered
Each category targets a specific class of vulnerability that AI coding assistants consistently get wrong.
Pricing
Every tier includes rules validated against frontier AI models.
Updates via `npx vibecure@latest`. LemonSqueezy checkout coming soon.
Stay in the Loop
New security rule categories drop to subscribers first.