How to Upscale an Image Without Losing Quality (AI Methods)
Make a small image bigger without it looking pixelated. Three approaches: traditional bicubic resampling, AI upscaling (Real-ESRGAN), and content-aware scaling. Browser-based options included.
You have a small image. You need it bigger. Maybe it's a logo someone gave you at 200 pixels wide and you need it to fill a banner; maybe it's a screenshot you want to print; maybe it's an old family photo you want to enlarge. Naive resizing makes the result soft and pixelated — you can see the seams of the original pixels stretched out across the new larger image.
Three methods to do this well, in order of how much they "invent" detail.
Method 1 — Traditional resampling (bicubic, Lanczos)
The classical approach: when scaling up, calculate each new pixel as a weighted average of nearby source pixels. Bicubic uses a 4×4 sample window with a smooth weighting function; Lanczos uses a larger window with a sinc-based filter and produces sharper edges.
For 1.5× to 2× upscaling on already-clean images (logos, line art, vector-source rasters), Lanczos resampling is often enough. The result is smoother than nearest-neighbor without inventing detail.
For more aggressive upscales (3× and beyond), or for photographs with fine detail, traditional resampling produces a softer image — the algorithm has nothing to add, so it interpolates and the result looks slightly blurred.
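The weighting functions behind these filters are compact. A minimal pure-Python sketch of the standard bicubic kernel (Keys cubic convolution, a = −0.5) and the Lanczos-3 kernel — the weight each filter assigns to a source pixel at distance x from the new pixel's position:

```python
import math

def bicubic_weight(x: float, a: float = -0.5) -> float:
    """Keys cubic convolution kernel: nonzero within 2 pixels of the target."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def lanczos_weight(x: float, taps: int = 3) -> float:
    """Lanczos kernel: a windowed sinc, nonzero within `taps` pixels."""
    if x == 0:
        return 1.0
    if abs(x) >= taps:
        return 0.0
    px = math.pi * x
    return taps * math.sin(px) * math.sin(px / taps) / (px * px)

# Each output pixel is a sum of nearby source pixels times these weights,
# normalized so the weights sum to 1. Lanczos's wider, oscillating window
# is why it keeps edges sharper than bicubic.
```

This is why neither can add detail: every output value is a fixed linear combination of input values.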
Resize Image on Dropvert uses Lanczos resampling. Good for "make this 1000 px image into a 2000 px image with minimal degradation."
Method 2 — AI upscaling (Real-ESRGAN)
The modern approach: train a neural network on millions of (low-resolution, high-resolution) image pairs. Show it a low-res input, ask it to produce a plausible high-res version. The model learns common patterns — what high-res hair looks like at the pixel level, what high-res skin looks like, what high-res text edges look like — and applies them to your input.
Real-ESRGAN is the open-source state of the art for this. Trained on photographs, it produces dramatically sharper 4× upscales than any traditional resampling. Edges stay crisp; textures look detailed; the result generally looks "as if it had been shot at higher resolution."
The catches:
- It "invents" detail. Sometimes the invented detail is plausibly the same as what was in the original; sometimes it's clearly hallucinated. For artistic / web-display use, this is fine. For evidence / archival work, it's not.
- It's trained specifically on natural photographs. On line art, screenshots, or pixel art it can produce odd results — the "Real-ESRGAN x4plus_anime" variant handles those cases much better.
- The model is large (~64 MB for the general version), so it has to download once.
AI Image Upscaler runs Real-ESRGAN entirely in your browser via WebGPU. Drop a JPEG / PNG / WebP, get a 4× upscaled PNG. Your image never gets uploaded.
Method 3 — Content-aware scaling
An intermediate option: the resampler detects edges and high-frequency content and tries to preserve them while scaling. Photoshop's proprietary "Preserve Details 2.0" upscaler is the most widely used commercial example.
This sits between bicubic and AI in both quality and speed. For typical product photography or web images, it's noticeably better than bicubic without the "invented detail" problem of full AI upscaling.
We don't currently have a content-aware-scaling tool — Real-ESRGAN's anime variant is the closest substitute for cases where the AI hallucination is undesirable.
When to use which
- Traditional (Lanczos) — 1.2× to 2× scale. Modest enlargement of high-quality sources. Resize Image.
- AI upscaling — 2× to 4× scale. Photographs, especially old or small ones. AI Image Upscaler.
- AI upscaling (anime variant) — illustrations, screenshots, pixel art. Same tool, pick the Anime model.
- Mix-and-match. Run Real-ESRGAN at 4×, then downsample to your target size with Lanczos. The result is often sharper than any single-step approach.
Why your phone's "zoom in" looks bad
Phone cameras "zoom" digitally by upscaling the cropped sensor area. They use traditional resampling at speed, not AI. That's why digital zoom on a 12-megapixel phone produces noticeably worse images than the same crop run through Real-ESRGAN afterward. If you have a "zoomed-in" photo from your phone that looks soft, running it through AI upscaling can recover quite a bit of perceived detail.
Common questions
Can I upscale 8× or 10×? Real-ESRGAN ships at 4× scale. To get 8×, run the 4× output through the upscaler again. Quality degrades on each pass — the second pass amplifies whatever artifacts the first pass introduced. Beyond 8×, the result looks AI-generated rather than real.
Will it work on text or screenshots? Mixed. Photographic text (a sign in a photo) usually upscales well. Pure-vector screenshots (UI captures, comic panels) are better served by the anime model or by re-rasterizing from the original vector source if available.
How big can my input be? Practically: 4 megapixels (e.g. 2048×2048) is comfortable on most hardware. Beyond that, expect inference times of 2–5 minutes and consider Resize Image to bring the input down first.
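Whether an input needs shrinking first is simple arithmetic. A hypothetical helper using the ~4-megapixel comfort threshold mentioned above:

```python
import math

MAX_PIXELS = 4_000_000  # ~4 MP: comfortable for in-browser inference

def presize_for_upscaler(width: int, height: int) -> tuple[int, int]:
    """Return dimensions at or below MAX_PIXELS, preserving aspect ratio."""
    pixels = width * height
    if pixels <= MAX_PIXELS:
        return width, height
    factor = math.sqrt(MAX_PIXELS / pixels)
    return int(width * factor), int(height * factor)

# A 6000x4000 (24 MP) photo gets shrunk to roughly 2449x1632 before upscaling.
```

Shrinking a 24 MP photo to ~4 MP loses little in practice, because the upscaler's 4× output from the smaller input still exceeds the original resolution.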
Related guides
What Is WebP and Should You Use It?
WebP is a modern image format from Google that produces smaller files than JPEG and PNG with comparable quality. Here's when to use it, when not to, and how to convert.
PNG vs JPEG: Which Image Format Should You Use?
PNG is lossless and supports transparency. JPEG is smaller for photographs. The right choice depends on the image content — here's how to decide.
How to Compress a GIF for Discord, Slack, and Email
GIFs are huge. Here's how to compress one to the size limits Discord (10 MB free / 500 MB Nitro), Slack (1 GB), and email (typically 25 MB) actually allow.