Preview designs under 8 types of color-vision deficiency. Free, private, runs in your browser.
100% private — your files never leave your browser. All processing happens locally on your device.
Your image never leaves your browser.
The simulation runs entirely on your device using pixel math. No uploads, no tracking.
Color-vision deficiency (CVD) is usually inherited and affects roughly 8% of men and 0.5% of women. The most common forms are deuteranomaly (green-weak) and protanomaly (red-weak): these users can still see red and green, but the two hues shift toward each other and become harder to distinguish. Dichromacy (protanopia, deuteranopia, tritanopia) is rarer; those users lack one cone type entirely, so the affected hues collapse toward neighboring ones. Achromatopsia, the total absence of color vision, is very rare (~1 in 30,000).
If you've used color alone to distinguish a success state from a warning state (green bar vs red bar, green check vs red X, green 'active' vs red 'error'), deuteranomaly users will see them as nearly identical. The fix is to add a second signal that isn't color — a shape, an icon, a pattern, or a label. WCAG 2.1 Success Criterion 1.4.1 spells this out: 'Color is not used as the only visual means of conveying information.' This simulator makes the failure visible before a real user has to report it.
The tool applies a linear transformation matrix to each pixel's RGB values. The matrices are the Brettel/Viénot/Machado approximations widely used in accessibility tooling (Coblis, Chrome DevTools, Stark). These are close approximations to how the affected cone types would respond — not a perfect model of any one person's vision, but reliable enough for design review and production-level accessibility checks. The `-omaly` severity slider blends between the full dichromatic matrix and the identity (no transformation), so you can see what a partial deficiency looks like.
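The per-pixel math is small enough to sketch. Below is a minimal JavaScript version of the two operations described above: applying a 3×3 matrix to each pixel's RGB values, and blending a full-deficiency matrix with the identity for the severity slider. This is an illustrative sketch, not the tool's source; the function names are assumptions, and the achromatopsia (grayscale) matrix is shown because its Rec. 601 luma weights are standard. The actual protan/deutan/tritan matrices come from the published Brettel/Viénot/Machado tables.

```javascript
// Full-severity achromatopsia matrix: every channel becomes Rec. 601 luma.
// (Protan/deutan/tritan matrices come from the Brettel/Viénot/Machado tables.)
const ACHROMATOPSIA = [
  [0.299, 0.587, 0.114],
  [0.299, 0.587, 0.114],
  [0.299, 0.587, 0.114],
];

// Severity slider: blend the full CVD matrix M with the identity.
// s = 0 → no transformation, s = 1 → full dichromacy/achromatopsia.
function blendWithIdentity(M, s) {
  return M.map((row, i) =>
    row.map((v, j) => (1 - s) * (i === j ? 1 : 0) + s * v)
  );
}

// Apply a 3×3 matrix to one RGB triple, clamping to the 0–255 range.
function applyMatrix(M, [r, g, b]) {
  return M.map(row =>
    Math.min(255, Math.max(0, Math.round(row[0] * r + row[1] * g + row[2] * b)))
  );
}

// In the browser this loop runs over canvas ImageData; the same code works
// on any RGBA byte array. Alpha (every 4th byte) is left untouched.
function simulate(data, M) {
  for (let i = 0; i < data.length; i += 4) {
    const [r, g, b] = applyMatrix(M, [data[i], data[i + 1], data[i + 2]]);
    data[i] = r;
    data[i + 1] = g;
    data[i + 2] = b;
  }
  return data;
}
```

Pure red, for instance, maps to the gray `[76, 76, 76]` under full achromatopsia, while `blendWithIdentity(M, 0)` reproduces the input unchanged, which is exactly why the slider's low end looks like normal vision.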
Run your charts, status indicators, and any 'red means bad / green means good' UI through deuteranopia and protanopia first — these are the most impactful checks. Run graphical keys through tritanopia. Run your hero art and brand imagery through achromatopsia to see if the composition still reads without color at all — good designs often still work; weak designs collapse. Any element that becomes indistinguishable in simulation is a spot where you need to add a non-color signal.
Every pixel operation runs in your device's JavaScript engine on the browser-side canvas. The image you upload is never sent to a server, never cached, never logged. This matters because design-review screenshots often contain unreleased work, internal tools, or private customer data. The tool would be useless if uploading an image before launch risked a leak.
All 8: protanopia (red-blind), deuteranopia (green-blind), tritanopia (blue-blind), and achromatopsia (total monochromacy), plus the milder anomalous variants (protanomaly, deuteranomaly, tritanomaly, achromatomaly). Deuteranomaly is the most common, affecting ~5% of men.
The simulation uses Machado/Brettel matrices — the same math used by accessibility tools like Coblis and Chrome DevTools. It's a close approximation, not a perfect model of any individual's vision, but reliable enough for design review.
No. The simulation runs entirely in your browser using canvas pixel math. Your image never leaves your device — we can't see it, store it, or share it.
Information conveyed by color alone. If your chart relies on red vs green bars, check deuteranopia and protanopia — those users will see them as nearly identical. Use patterns, labels, or shape differences in addition to color.