AI Confusion Visualizer

See exactly how adversarial attacks confuse computer vision models.

Analyze Image

👁️

Drag & drop an image to analyze

or click to browse

This tool runs the image through ResNet50, applies an adversarial perturbation, and visualizes the resulting confusion.

Confusion Analysis

Run analysis to see results

Understanding AI Confusion

How Machines "See"

Computer vision models like ResNet50 rely on specific texture and shape patterns to identify objects. They are surprisingly brittle: slight, carefully chosen changes in pixel values can completely alter their perception.
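This brittleness is easiest to see on a toy model rather than ResNet50 itself. The sketch below (a hypothetical two-feature linear classifier, not anything from the actual tool) shows a prediction flipping under a nudge far too small to notice visually:

```python
import numpy as np

# Toy illustration of brittleness (NOT ResNet50): a linear classifier
# whose input sits near the decision boundary, so a tiny perturbation
# flips the predicted class.
w = np.array([2.0, -2.0])        # hypothetical learned weights
x = np.array([0.51, 0.50])       # input close to the boundary
delta = np.array([-0.02, 0.02])  # tiny pixel-level nudge

pred_before = int(w @ x > 0)           # class 1: w @ x = +0.02
pred_after = int(w @ (x + delta) > 0)  # class 0: w @ (x+delta) = -0.06
```

Deep networks behave the same way in a much higher-dimensional input space, which is why pixel changes invisible to people can still cross a decision boundary.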

The Confusion Mechanic

We use the Fast Gradient Sign Method (FGSM), which follows the sign of the model's loss gradient to compute noise that pushes the prediction away from the truth. The Confusion Score is the total shift in confidence: how much the model loses on the original label plus how much it gains on the new, incorrect label.
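A minimal NumPy sketch of the mechanic, using a toy linear softmax classifier in place of ResNet50 (the function names `fgsm_perturb` and `confusion_score` are illustrative, not the tool's actual API):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_perturb(x, W, true_label, eps=0.1):
    """One FGSM step on a linear softmax classifier.

    For cross-entropy loss, the gradient w.r.t. the input x is
    W.T @ (p - y), where p is the softmax output and y is the
    one-hot true label. FGSM adds eps times the sign of that
    gradient, increasing the loss on the true label.
    """
    p = softmax(W @ x)
    y = np.zeros_like(p)
    y[true_label] = 1.0
    grad = W.T @ (p - y)            # d(loss)/dx
    return x + eps * np.sign(grad)  # push prediction away from truth

def confusion_score(p_before, p_after, true_label):
    """Total confidence shift: the drop on the true label plus the
    gain on whichever label the model now prefers."""
    new_label = int(np.argmax(p_after))
    return (p_before[true_label] - p_after[true_label]) + (
        p_after[new_label] - p_before[new_label])
```

With a large enough `eps`, the perturbed input crosses the decision boundary, the predicted label flips, and the score is strictly positive; when the attack fails to flip the label, the score collapses toward zero.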

Visualizing the Attack

The heatmap shows where the adversarial noise is most intense. While these changes are subtle to the human eye, they are glaring to the AI, effectively "dazzling" it.
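One common way to build such a heatmap is simply the normalized per-pixel magnitude of the noise. This is a sketch of that idea, not necessarily the tool's exact implementation:

```python
import numpy as np

def perturbation_heatmap(original, adversarial):
    """Per-pixel intensity of the adversarial noise.

    Takes the absolute difference between the two images, sums it
    over colour channels (H x W x C -> H x W), and normalizes to
    [0, 1]. Bright regions mark where the attack concentrates.
    """
    diff = np.abs(adversarial.astype(float) - original.astype(float))
    if diff.ndim == 3:
        diff = diff.sum(axis=-1)
    peak = diff.max()
    return diff / peak if peak > 0 else diff
```

The result can be overlaid on the original image with any colormap; because it is normalized, even an imperceptibly small perturbation produces a clearly visible map.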

Why Use This?

By understanding what confuses AI, we can build better protection tools (like AI Shield) that prevent unauthorized scraping without destroying image quality.