Designers extract colour palettes from images constantly. A photographer wants the dominant tones of a portrait for the gradient behind a website hero. A brand strategist needs the four real colours from a screenshot of a competitor’s ad. An editorial designer wants the palette of a single film frame so the rest of the spread can match it. Doing this by eye works for two or three colours; doing it accurately for ten requires an algorithm that looks at every pixel.
Our image color extractor runs K-means clustering directly in your browser via WebAssembly. Drop an image, pick how many colours (2 to 12), and the tool returns the centroids ranked by how much of the image each colour represents. Each swatch shows HEX, RGB, HSL, OKLCH, and the WCAG contrast ratio against black and white — useful for picking accessible text colours from a brand photo. This guide covers how K-means works in colour space, why it sometimes returns near-duplicate colours, and the gotchas with transparent backgrounds and EXIF rotation.
When to use a palette extractor
| Use case | Recommended count | What you do with it |
|---|---|---|
| Hero gradient from a photo | 2–3 colours | Use top 2 as gradient stops |
| Brand palette from a logo | 3–5 | Verify exact brand HEX values |
| Moodboard / inspiration | 5–8 | Save the swatches as a Figma library |
| Album cover analysis | 5–6 | Generate a matching theme |
| Ad creative colour match | 3–4 | Paint background to harmonise with image |
| Competitor screenshot study | 8–10 | Audit their full palette |
How K-means clustering works (in colour terms)
Plot every pixel as a point in a 3D colour space (RGB, or better still OKLab for perceptual uniformity). K-means then partitions those points into k clusters by:
1. Pick k initial cluster centres at random (or via k-means++ for better starting points)
2. Assign every pixel to the nearest centre
3. Recompute each centre as the average of the pixels assigned to it
4. Repeat steps 2–3 until the centres stop moving (convergence)
The output is k colours that minimise the total distance between every pixel and its assigned centre. Cluster size (number of pixels assigned) tells you how dominant each colour is in the original image.
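The loop above fits in a few lines of Python. This is a minimal illustration, not the tool’s actual WebAssembly implementation: pixels are plain 3-tuples in whatever colour space you cluster in, and distance is squared Euclidean.

```python
import random

def kmeans(pixels, k, iters=50, seed=0):
    """Minimal K-means over 3-component colour points.

    Returns (centre, share) pairs sorted by dominance, where share is
    the fraction of pixels assigned to that centre.
    """
    rng = random.Random(seed)
    centres = rng.sample(pixels, k)                    # step 1: random initial centres
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:                               # step 2: assign to nearest centre
            i = min(range(k),
                    key=lambda i: sum((p[c] - centres[i][c]) ** 2 for c in range(3)))
            clusters[i].append(p)
        new = [tuple(sum(ch) / len(cl) for ch in zip(*cl)) if cl else centres[i]
               for i, cl in enumerate(clusters)]       # step 3: recompute centres
        if new == centres:                             # step 4: stop on convergence
            break
        centres = new
    sizes = [len(cl) for cl in clusters]
    order = sorted(range(k), key=lambda i: -sizes[i])  # rank by cluster size
    return [(centres[i], sizes[i] / len(pixels)) for i in order]
```

On a toy "image" of 90 black pixels and 10 white ones, `kmeans(px, 2)` returns black first with a share of 0.9 — cluster size is exactly the dominance percentage the tool reports.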
We use OKLab as the clustering space — distances in OKLab roughly match human perception, so two colours that look similar are also close in OKLab. Older extractors run K-means in raw RGB and produce odd results when an image has a wide hue range (RGB distance treats yellow and blue as much closer than the eye does).
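Converting a pixel into OKLab before clustering is a small fixed transform. The sketch below uses Björn Ottosson’s published reference matrices for OKLab; channel values are sRGB in [0, 1]:

```python
def srgb_to_oklab(r, g, b):
    """Convert an sRGB colour (channels in [0, 1]) to OKLab."""
    def lin(c):  # undo the sRGB transfer curve
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    # linear sRGB -> LMS-like cone response
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    l, m, s = l ** (1 / 3), m ** (1 / 3), s ** (1 / 3)
    return (
        0.2104542553 * l + 0.7936177850 * m - 0.0040720468 * s,  # L (lightness)
        1.9779984951 * l - 2.4285922050 * m + 0.4505937099 * s,  # a (green-red)
        0.0259040371 * l + 0.7827717662 * m - 0.8086757660 * s,  # b (blue-yellow)
    )
```

White comes out near (1, 0, 0) and black near (0, 0, 0); Euclidean distance between two OKLab points is then a reasonable proxy for how different the colours look.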
How to extract a palette in your browser
- Open the image color extractor
- Drop an image (JPG, PNG, WebP, HEIC supported)
- Pick the colour count (2–12)
- The palette appears in 1–3 seconds with each swatch labelled by HEX and percentage of the image
- Toggle Sort by percentage or Sort by hue
- Click any swatch to copy its HEX, RGB, HSL, or OKLCH
- Click Export CSS variables to download a --brand-N variable block ready to paste into :root
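The CSS export is a simple templating step. A sketch of what generating that block might look like (the exact file the tool produces may differ; the --brand-N naming is taken from the export described above):

```python
def palette_to_css(hex_colours):
    """Render a list of HEX strings as a :root block of --brand-N variables."""
    lines = [f"  --brand-{i + 1}: {h};" for i, h in enumerate(hex_colours)]
    return ":root {\n" + "\n".join(lines) + "\n}"
```

For a two-colour palette, `palette_to_css(["#aabbcc", "#112233"])` yields a `:root` block with `--brand-1` and `--brand-2` lines you can paste straight into a stylesheet.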
Common gotchas
- Near-duplicate colours. If your image is mostly skin tones, K-means with k=10 returns ten variations of the same beige. Drop k to 4–5, or use the “merge similar” option which post-processes the output to deduplicate within a perceptual distance threshold.
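A "merge similar" post-process can be as simple as a greedy pass over the dominance-sorted palette: each colour is absorbed into the first already-kept colour within the distance threshold, and its share is added to that colour. This is a sketch of the idea, not necessarily the tool’s exact algorithm:

```python
def merge_similar(palette, threshold):
    """Greedily merge palette entries closer than `threshold`.

    palette: list of (colour, share) pairs sorted by share, where colour
    is a 3-tuple in a perceptual space such as OKLab. Near-duplicates are
    absorbed into the more dominant colour, which keeps their share.
    """
    merged = []
    for colour, share in palette:
        for i, (kept, kept_share) in enumerate(merged):
            dist = sum((a - b) ** 2 for a, b in zip(colour, kept)) ** 0.5
            if dist < threshold:
                merged[i] = (kept, kept_share + share)  # absorb the duplicate
                break
        else:
            merged.append((colour, share))
    return merged
```

Two near-identical beiges at 50% and 30% collapse into one entry at 80%, while a distant third colour survives untouched.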
- Transparent backgrounds skew the result. A logo on transparent background still has many fully-transparent pixels — by default we skip those. If your output looks wrong, check that the “ignore transparent” toggle is on.
- EXIF rotation isn’t honoured by all extractors. A portrait photo from a phone is often stored as a landscape file with EXIF rotation metadata. Our tool reads the EXIF rotation and re-orients before extraction. Some tools don’t, and produce confusing palettes from the wrong “side” of the image.
- JPEG compression artefacts inflate the palette. Compressed JPGs introduce subtle colour fringes near edges. K-means treats these as legitimate colours and may include them. Using a slightly higher k and merging similar colours afterward gives cleaner output.
- Resolution matters less than you think. The tool downsamples to ~512×512 internally before clustering. A 4000×3000 photo and an 800×600 version of the same image produce nearly identical palettes — clustering is statistical, not pixel-perfect.
- Random initialisation = slightly different output each run. K-means uses random starting points, so two runs on the same image can return slightly different palettes (within ~5% colour distance). For reproducibility, use the seeded mode with a fixed seed.
Accessibility — pick text colours from the palette
A common workflow: extract a palette from a hero photo, use one of the extracted colours as a background, and then pick a text colour that meets WCAG contrast on it. Each swatch in our output shows the contrast ratio against black and white — pick the combination with a ratio above 4.5:1 (AA standard for body text) or 7:1 (AAA). For coloured-text-on-coloured-background combinations, our “Pair check” overlay shows the contrast between any two extracted colours.
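The contrast numbers on the swatches follow the WCAG 2.x definition: linearise each sRGB channel, weight to get relative luminance, then take (lighter + 0.05) / (darker + 0.05). A minimal sketch:

```python
def contrast_ratio(rgb1, rgb2):
    """WCAG 2.x contrast ratio between two sRGB colours (channels in [0, 1])."""
    def luminance(rgb):
        def lin(c):  # undo the sRGB transfer curve
            return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
        r, g, b = (lin(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    lighter, darker = sorted((luminance(rgb1), luminance(rgb2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)
```

White on black gives the maximum ratio of 21:1; a swatch passes AA for body text when `contrast_ratio(...) >= 4.5`.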
When NOT to use this tool
For brand-exact colours from a logo SVG, open the SVG and read the fill values directly — extraction approximates and can introduce small errors. For animated GIFs, the extractor uses only the first frame; for the full palette across all frames you’ll need a video-aware tool. For very small icons (under 64×64), there often aren’t enough pixels for K-means to produce a meaningful palette — pick the colours by eye instead. For batch processing many images in a CI pipeline, install node-vibrant or color-thief locally and write a script — much faster than running this tool by hand on each file.
Frequently asked questions
How accurate is the extracted palette?
K-means in OKLab space gives results that closely match what a designer would pick by eye. The dominant 1–2 colours are virtually exact; minor colours can vary by a few HSL points between runs because of random initialisation. For pixel-perfect brand colours, sample directly from the source file with an image color picker instead.
How many colours should I extract?
3–5 for design palettes, 2 for hero gradients, 8+ for moodboards and competitive analysis. More than 8 colours often produces near-duplicates; fewer than 3 misses meaningful tones. Start with 5 and adjust.
Why does the same image return different palettes?
K-means uses random initialisation, so two runs can produce slightly different centroids. Differences are usually under 5% perceptual distance. Use the seeded mode (with a fixed seed string) for reproducible output across runs.
Does it support transparent backgrounds?
Yes — fully-transparent pixels are skipped by default. Semi-transparent pixels are blended against your chosen “background colour” (white by default). Toggle the background to dark or your brand colour for accurate extraction from logos with see-through regions.
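Blending a semi-transparent pixel over an opaque background is standard alpha compositing: each channel is a weighted average of foreground and background by the alpha value. A sketch, assuming all components are in [0, 1]:

```python
def composite(pixel, background):
    """Blend an RGBA pixel over an opaque RGB background (all values in [0, 1])."""
    r, g, b, a = pixel
    br, bg, bb = background
    return (r * a + br * (1 - a),
            g * a + bg * (1 - a),
            b * a + bb * (1 - a))
```

A 50%-opaque red over white comes out as pink — which is why the background-colour choice changes what the clusterer sees in see-through regions of a logo.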
Is my image uploaded?
No. The extractor runs K-means in your browser via WebAssembly. The image is loaded into a blob URL and the pixel data is processed locally — never uploaded.
Can I export the palette in a specific format?
Yes — the export menu offers JSON (programmatic), CSS variables (--brand-1: #... ready for :root), Tailwind config (drop into theme.extend.colors), Adobe ASE swatch file, and a PNG of the palette swatches.
Related tools and guides
- Image Color Extractor
- Image Color Picker (single pixel)
- Color Shades Generator
- Color Mixer
- All color tools
