Enlarge photos 2× or 4× with sharp edges and high-quality resampling. Free, private, runs in your browser.
100% private — your files never leave your browser. All processing happens locally on your device.
Choose files
or drop them here · paste from clipboard
Two ways to enlarge. Fast mode is instant Lanczos resampling + edge sharpening. AI mode runs Real-ESRGAN x4 in your browser for hallucinated fine detail — first use downloads a ~33 MB model, cached after that. Nothing uploads either way. Drop up to 20 images for batch mode — the same settings are applied to every file, and the output comes back as a ZIP.
Upscaling produces a larger version of an image, filling in the extra pixels with educated guesses about what should be there. A naïve resizer — the kind that ships inside most operating systems — uses bilinear interpolation, which averages neighbouring pixels and gives soft, blurry enlargements. A good upscaler does two things better: it uses a sharper resampling kernel (Lanczos, in our case) to preserve edge transitions, and then applies a post-sharpening pass to put crispness back where the resampler rounded it off. The result is an enlargement that looks like a proper print, not a blown-up JPEG.
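The difference comes down to the weighting function each resampler uses. A Lanczos-3 kernel, for instance, is a windowed sinc — it weights nearby pixels with a shape that preserves edge transitions far better than bilinear's straight-line average. A minimal sketch of that kernel (an illustration of the math, not the tool's actual implementation):

```typescript
// Lanczos-3 kernel: sinc(x) * sinc(x / 3), zero outside |x| < 3.
// Each output pixel is a weighted sum of source pixels, with weights
// drawn from this function evaluated at their distances.
function lanczos3(x: number): number {
  if (x === 0) return 1;          // peak weight at the sample itself
  if (Math.abs(x) >= 3) return 0; // window: ignore pixels 3+ samples away
  const px = Math.PI * x;
  return (3 * Math.sin(px) * Math.sin(px / 3)) / (px * px);
}
```

The negative lobes of this kernel (weights dip below zero between samples) are what keep edges crisp — bilinear interpolation has no negative lobes, so it can only blur.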
2× doubles each dimension — an 800×600 photo becomes 1600×1200. Memory use quadruples; processing time roughly doubles. 4× does the same again on top of that — memory use grows 16× vs. the source — and takes about 4× longer. For most web uses (social posts, blog illustrations, retina-display thumbnails), 2× is plenty. Pick 4× when you genuinely need the extra size — printing at 300 DPI, or rescuing a small crop for a header image. If your source is already 2K or larger, staying at 2× keeps the output within browser memory limits.
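The memory math is simple to check yourself — an uncompressed RGBA buffer costs 4 bytes per pixel, and scaling multiplies both dimensions. A quick sketch (illustrative only):

```typescript
// Bytes of uncompressed RGBA pixel data for an upscaled image:
// (width × scale) × (height × scale) × 4 bytes per pixel.
function upscaledBytes(width: number, height: number, scale: number): number {
  return width * scale * height * scale * 4;
}

upscaledBytes(800, 600, 2);    // 1600×1200 → 7,680,000 bytes (~7.3 MB)
upscaledBytes(2000, 2000, 4);  // 8000×8000 → 256,000,000 bytes (~256 MB)
```

The second example is the worst case mentioned below: a 4-megapixel source at 4× needs a quarter-gigabyte of raw pixels before encoding even starts.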
Both, depending on which mode you pick. Fast mode (the default) uses Lanczos resampling + an unsharp-mask pass — mathematical resampling that produces a visibly sharper enlargement than a naive resize but doesn't invent new detail. AI mode loads Real-ESRGAN x4 (BSD-3-Clause, canonical xinntao weights) into your browser and runs the ~33 MB model via onnxruntime-web; it hallucinates plausible textures where none were captured, which is why fabrics, hair, grass, and fine skin detail look so much sharper than a classical resampler can manage. AI mode runs on WebGPU when the browser supports it and falls back to WASM on CPU otherwise. The model is cached after the first download, so subsequent runs are immediate.
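The unsharp-mask step in Fast mode follows the classic formula: subtract a blurred copy from the original to isolate edges, then add a scaled portion of that difference back. A per-channel sketch (the `amount` parameter and clamping are illustrative — the tool's actual settings aren't specified here):

```typescript
// Unsharp mask on one channel: out = src + amount * (src - blurred).
// Wherever the blur softened an edge, (src - blurred) is large, so the
// correction pushes that pixel back toward crispness.
function unsharpMask(src: number[], blurred: number[], amount: number): number[] {
  return src.map((v, i) => {
    const sharpened = v + amount * (v - blurred[i]);
    return Math.min(255, Math.max(0, Math.round(sharpened))); // clamp to 8-bit
  });
}

// A bright pixel the blur dimmed gets brighter; a dim one gets dimmer:
unsharpMask([100, 200], [120, 180], 0.5); // → [90, 210]
```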
Every step happens inside your browser — decoding the file, resampling into a larger buffer, running the sharpen pass, re-encoding to PNG, and handing you the download. Nothing is uploaded. The memory cost, however, is real: a 4-megapixel source upscaled 4× needs a ~64-MP buffer, which is ~256 MB of uncompressed pixel data. That's fine on laptops and most recent phones, but budget iOS/Android devices may run out. The tool warns you when the source is large and offers a one-click 'Scale to half size first' helper — shrinking before upscaling often produces a better-looking result than forcing a huge buffer and hitting swap.
Upscaling can't un-blur motion, can't recover detail a JPEG compressor threw away, and can't turn a 64×64 thumbnail into a poster. For screenshots and graphics with crisp edges (diagrams, UI captures, vector-ish content), the Lanczos+sharpen pipeline gives excellent results — often better than AI upscalers, which sometimes hallucinate textures into flat regions. For photos of people, faces, and naturalistic subjects, Fast mode produces a solid sharp enlargement but won't fabricate skin pore detail or hair strands; switch to AI mode if that's what you need. For everything else, give it a try — the before/after split preview lets you judge before committing.
No. Both Fast and AI modes run entirely in your browser — Fast via Canvas 2D, AI via onnxruntime-web with a Real-ESRGAN x4 ONNX model. Your file never leaves your device.
Fast mode uses Lanczos resampling + an unsharp-mask pass — instant, no model download, sharper than a normal resize but doesn't invent new detail. AI mode downloads a ~33 MB Real-ESRGAN model the first time you use it, then runs the neural network locally to hallucinate plausible fine detail (skin pores, fabric weave, fur, foliage).
Yes, when AI mode is selected. Real-ESRGAN is a neural network trained on millions of photos to reconstruct plausible detail; on faces, hair, fabric, and natural textures it produces noticeably sharper output than any classical resampler. It can't perform miracles on extreme low-res input (a 64×64 face becoming a poster) — at some point the model has nothing to work from — but for the typical 500-1500 pixel inputs people actually enlarge, results are excellent.
Fast mode comfortably handles up to ~4 megapixels (2000×2000). AI mode tiles large inputs (256-pixel tiles internally), so memory use stays bounded even on larger images, but inference time scales linearly with pixel count — expect a few seconds per megapixel on WebGPU, 10-30s per megapixel on CPU/WASM. The tool warns when inputs are large and offers a 'Scale to half size first' helper.
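Tiling is why AI mode's memory stays flat while its runtime grows with the image: the number of fixed-size tiles scales with pixel count, but only one tile's worth of activations lives in memory at a time. A rough sketch of the tile count (assuming simple non-overlapping 256-px tiles — a real tiler typically overlaps edges to hide seams):

```typescript
// How many tiles cover a w×h image at a given tile size.
// Partial tiles at the right/bottom edges still count as full tiles.
function tileCount(w: number, h: number, tileSize = 256): number {
  return Math.ceil(w / tileSize) * Math.ceil(h / tileSize);
}

tileCount(1024, 768);   // → 12 (4 columns × 3 rows)
tileCount(2000, 2000);  // → 64 (8 × 8) — 4× the pixels of 1000×1000, ~4× the time
```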