AI Trap: Data Poisoning Defense
Generate poisoned image variants to degrade unauthorized AI model training.
If an attacker scrapes these variants and trains a model on them, that model's performance degrades.
📚 Understanding AI Trap
The Challenge: Dataset Poisoning
Even with AI Shield on individual images, attackers can scrape thousands of images and train models on them. Your single protected image becomes one of millions in their dataset.
AI Trap solves this by generating multiple poisoned variants of your image. If attackers scrape these variants along with your original image, they're injecting poison into their entire training dataset.
How It Works
- Generate Variants: Create 20-100 adversarial copies of your image
- Add Triggers: Inject imperceptible frequency-domain patterns (see the sketch after this list)
- Measure Drift: Track how far each variant's embedding drifts from the original
- Package: Export as JSON or a downloadable ZIP
- Deploy: Distribute the variants alongside your original image
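
Here is a minimal sketch of steps 1-4 in Python, assuming numpy and Pillow. Everything specific is an assumption, not the tool's documented behavior: the `embed` callable stands in for whatever image encoder is used to measure drift (e.g. a CLIP-style vision tower), and the mid-frequency band, the 8-pixel budget at intensity 100, and the manifest layout are illustrative choices.

```python
import io
import json
import zipfile

import numpy as np
from PIL import Image


def add_frequency_trigger(img: np.ndarray, intensity: int, seed: int) -> np.ndarray:
    """Band-pass random noise in the Fourier domain and add it to the image.

    The mid-frequency ring (0.15-0.35 of the half-diagonal) and the 8-pixel
    L-infinity budget at intensity 100 are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    x = img.astype(np.float32)
    h, w = x.shape[:2]

    # Start from white noise, then keep only a mid-frequency ring:
    # low frequencies would visibly alter the image, while very high
    # frequencies tend not to survive JPEG re-encoding.
    spec = np.fft.fftshift(np.fft.fft2(rng.standard_normal((h, w))))
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    spec[(r < 0.15 * min(h, w)) | (r > 0.35 * min(h, w))] = 0
    pattern = np.real(np.fft.ifft2(np.fft.ifftshift(spec)))
    pattern /= np.abs(pattern).max() + 1e-8  # normalize to [-1, 1]

    eps = intensity / 100.0 * 8.0            # pixel budget scales with the slider
    poisoned = x + eps * pattern[..., None]  # same pattern on every channel
    return np.clip(poisoned, 0, 255).astype(np.uint8)


def embedding_drift(embed, original: np.ndarray, variant: np.ndarray) -> float:
    """Cosine distance between encoder embeddings (the 'Measure Drift' step)."""
    a, b = embed(original), embed(variant)
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def build_trap_package(path: str, n_variants: int = 20, intensity: int = 50,
                       out_path: str = "trap_package.zip") -> None:
    """Generate variants and bundle them with a JSON manifest (the 'Package' step)."""
    img = np.asarray(Image.open(path).convert("RGB"))
    manifest = []
    with zipfile.ZipFile(out_path, "w") as zf:
        for seed in range(n_variants):
            variant = add_frequency_trigger(img, intensity, seed)
            buf = io.BytesIO()
            Image.fromarray(variant).save(buf, format="PNG")
            name = f"variant_{seed:03d}.png"
            zf.writestr(name, buf.getvalue())
            manifest.append({"file": name, "seed": seed, "intensity": intensity})
        zf.writestr("manifest.json", json.dumps(manifest, indent=2))
```

Band-passing the noise is one common trigger design: it keeps the perturbation hard to see while surviving mild blurring and re-encoding better than pure high-frequency noise would.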
Poison Strength Score
Our scoring algorithm combines three weighted metrics (see the sketch after this list):
- 40% Embedding Drift: How far the poisoned representation diverges from the original
- 50% Confidence Drop: How much the model's prediction confidence falls on poisoned inputs
- 10% Consistency: How uniform the effect is across variants
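
In code, the weighting is a one-liner. The only assumption beyond the list above is that all three inputs have already been normalized to [0, 1]:

```python
def poison_strength(drift: float, confidence_drop: float, consistency: float) -> float:
    """Weighted Poison Strength Score on a 0-100 scale.

    Inputs are assumed pre-normalized to [0, 1]; the 40/50/10 weights
    come straight from the list above.
    """
    score = 0.40 * drift + 0.50 * confidence_drop + 0.10 * consistency
    return round(100.0 * score, 1)

# Example: strong drift, moderate confidence drop, very uniform variants.
assert poison_strength(0.8, 0.6, 0.9) == 71.0
```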
Intensity Levels
- 1-25: Subtle attack (imperceptible)
- 26-75: Balanced attack (recommended)
- 76-100: Aggressive attack (maximum damage)
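
A plausible way to wire the slider into the pipeline is a simple threshold map; the labels mirror the levels above, while feeding the raw value into the perturbation budget (as `add_frequency_trigger` does earlier) is an assumption:

```python
def describe_intensity(intensity: int) -> str:
    """Map the 1-100 intensity slider to the levels listed above."""
    if not 1 <= intensity <= 100:
        raise ValueError("intensity must be between 1 and 100")
    if intensity <= 25:
        return "subtle (imperceptible)"
    if intensity <= 75:
        return "balanced (recommended)"
    return "aggressive (maximum damage)"
```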
Use Cases
🎨 Artists: Protect your style from being scraped by generative AI
📸 Photographers: Poison unauthorized training datasets
🏢 Enterprises: Defend proprietary image datasets
🔒 Security: Create honeypot images designed to degrade adversary models