As a designer, I've built variants of this several times throughout my career.
The author's approach is really good, and he hits on pretty much all the problems that arise from more naive approaches. In particular, using a perceptual colorspace, and recognising that the most representative colour may not be the one that appears most often.
However, image processing makes my neck tingle because there are a lot of footguns. PNG bombs, anyone? I feel like any library needs to either be defensively programmed or explicit in its documentation.
The README says "Finding main colors of a reasonably sized image takes about 100ms" -- that's way too slow. I bet the operation takes a few hundred MB of RAM too.
For anyone that uses this, scale down your images substantially first, or only sample every N pixels. Avoid loading the whole thing into memory if possible, unless this is handled serially by a job queue of some sort.
You can operate this kind of algorithm much faster and with less RAM usage on a small thumbnail than you would on a large input image. This makes performance concerns less of an issue. And prevents a whole class of OOM DoS vulnerabilities!
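Pre-shrinking doesn't need a full imaging library; a minimal box-filter sketch in pure Python (function name and pixel layout are illustrative, not the library's API):

```python
def box_downsample(pixels, width, height):
    """Halve each dimension of a row-major list of (r, g, b) tuples
    by averaging non-overlapping 2x2 blocks (a box filter)."""
    out_w, out_h = width // 2, height // 2
    out = []
    for y in range(out_h):
        for x in range(out_w):
            block = [pixels[(2 * y + dy) * width + (2 * x + dx)]
                     for dy in (0, 1) for dx in (0, 1)]
            out.append(tuple(sum(p[i] for p in block) // 4 for i in range(3)))
    return out, out_w, out_h

# A 2x2 image collapses to its per-channel average:
print(box_downsample([(0, 0, 0), (4, 4, 4), (8, 8, 8), (4, 4, 4)], 2, 2))
# → ([(4, 4, 4)], 1, 1)
```

Applying this repeatedly until the image is thumbnail-sized bounds both the working set and the cost of whatever clustering runs afterwards.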
As a defensive step, I'd add something like this https://github.com/iamcalledrob/saferimg/blob/master/asset/p... to your test suite and see what happens.
I really wish people would read the article, the library does exactly this:
> Okmain downsamples the image by a power of two until the total number of pixels is below 250,000.
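That power-of-two rule can be sketched in a few lines (a reading of the quoted behaviour, not okmain's actual code):

```python
def downsample_factor(width, height, max_pixels=250_000):
    """Smallest power-of-two scale factor that brings the total
    pixel count under max_pixels, as the post describes."""
    factor = 1
    while (width // factor) * (height // factor) > max_pixels:
        factor *= 2
    return factor

print(downsample_factor(4000, 3000))  # → 8 (500 x 375 = 187,500 pixels)
```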
Somehow I missed that, oops. I see that the library samples a maximum of 250K pixels from the input buffer (I jumped over to the project README).
That being said, this is sampling the fixed-size input buffer for the purposes of determining the right colour. You still have to load the bitmap into memory, with all the associated footguns that arise there. The library just isn't making it worse :) I suppose you could memmap it.
Makes me wonder if the sub-sampling is actually a bit of a red herring, as ideally you'd want to be operating on a small input buffer anyway. Or some sort of interface on top of the raw pixel data, so you can load what's needed on-demand.
That's 500x500, I'm sure you can get good results at 32x32 or 64x64 but then part of your color choice is also getting done by the downsampling algorithm. I wonder if you could get away with just using a downsampling algorithm into a 1x1 and just use that as the main color.
That last one is talked about in the article -- it sucks!
I think if you were going to "downsample" for the purpose of creating a color set you could just scan through the picture and randomly select 10% (or whatever) of the pixels and apply k-means to that, without doing any averaging, which costs resources and makes your colors muddy.
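A sketch of that idea in pure Python (illustrative only, not okmain's implementation; note the author's reply about why uniform random sampling underperforms in practice):

```python
import random

def kmeans_colors(pixels, k=3, fraction=0.10, iters=20, seed=0):
    """Randomly sample a fraction of (r, g, b) pixels, then run plain
    k-means on the sample -- no pre-averaging of neighbouring pixels."""
    rng = random.Random(seed)
    n = min(len(pixels), max(k, int(len(pixels) * fraction)))
    sample = rng.sample(pixels, n)
    centres = rng.sample(sample, k)
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in sample:
            nearest = min(range(k),
                          key=lambda j: sum((p[c] - centres[j][c]) ** 2
                                            for c in range(3)))
            buckets[nearest].append(p)
        centres = [tuple(sum(p[c] for p in b) // len(b) for c in range(3))
                   if b else centres[j]
                   for j, b in enumerate(buckets)]
    return centres
```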
Random sampling makes a lot of intuitive sense, but unfortunately doesn't work well. I just answered over at lobsters: https://lobste.rs/s/t43mh5/okmain_you_have_image_you_want_co...
I should probably add this nuance to the post itself.
Edit: added a footnote
How about sampling more from the more "prominent" areas of the image[0] and less from less "prominent" areas?
[0]: https://dgroshev.com/blog/okmain/img/distance_mask.png?hash=...
your gh link returned 404
EDIT: then (when url refreshed) triggered a redir loop culminating in a different error ("problem occurred repeatedly")...
ah, ofc, your intent was to demonstrate a problematic asset.
Realizing I intentionally opened a png bomb made me chuckle, like what did I think was going to happen?
> I've built variants of this several times throughout my career.
Got any to share? A self-contained command-line tool to get a good palette from an image is something I'd have a use for.
Fred's dominantcolor script for imagemagick might work for you:
https://www.fmwconcepts.com/imagemagick/dominantcolor/index....
Back in the late 1980s, people thought about color quantization a lot: many computers of the time had 16 or 256 colors you could choose out of a larger palette, and if you chose well you could do pretty well with photographic images.
Author here: the library just accepts RGB8 bitmaps, probably coming either from Rust's image crate [1] or Python's Pillow [2], which are both mature and widely used. Dealing with codecs is way out of scope.
As for loading into memory at once: I suppose I could integrate with something like libvips and stream strips out of the decoded image without holding the entire bitmap, but that'd require substantially more glue and complexity. The current approach works fine for extracting dominant colours once to save in a database.
You're right that pre-resizing the images makes everything faster, but keep in mind that k-means still requires a pretty nontrivial amount of computation.
[1]: https://crates.io/crates/image
[2]: https://pypi.org/project/pillow/
If you ever did want to wrap this in code processing untrusted images there's a library called "glycin" designed for that purpose (it's used by Loupe, the default Gnome image viewer).
https://gnome.pages.gitlab.gnome.org/glycin/
I've wanted something like this for level of detail processing.
This is a render from Second Life, in which all the texture images were shrunk down to one pixel, the lowest possible level of detail, producing a monocolor image. For distant objects, or for objects where the texture is still coming in from the net, there needs to be some default color. The existing system used grey for everything. I tried using an average of all the pixels, and, as the original poster points out, the result looks murky.[1] This new approach has real promise for big-world rendering.
[1] https://media.invisioncic.com/Mseclife/monthly_2023_05/monoc...
OKPalette by David Aerne is my favorite tool for this, it chooses points sensibly but then also lets you drag around or change the number of colors you want: https://okpalette.color.pizza/
I've been doing something similar! I've got a Home Assistant dashboard on my desk and wanted the media controls to match the current album art. I need three colors: background, foreground, and something vibrant to set my desk lamp to [1].
The SpotifyPlus HA integration [2] was near at hand and does a reasonably good job clustering with a version of ColorThief [3] under the hood. It has the same two problems you started with though: muddying when there's lots of gradation, even within a cluster; and no semantic understanding when the cover has something resembling a frame. A bit swapped from okmain's goal, but I can invert with the best of them and will give it a shot next time I fiddle. Thanks for posting!
[1] https://gist.github.com/kristjan/b305b83b0eb4455ee8455be108a... [2] https://github.com/thlucas1/homeassistantcomponent_spotifypl... [3] https://github.com/thlucas1/SpotifyWebApiPython/blob/master/...
It reminds me a bit of this post from the Facebook engineering blog (2015) [1] where they discuss embedding a very tiny preview of images into the html itself so they show immediately while loading the page, especially with very slow connections.
[1] https://engineering.fb.com/2015/08/06/android/the-technology...
I'm surprised the baseline to compare against is shrinking the image to one pixel; that seems extremely hacky and very dependent on what your image editor happens to do (and also seems quite wasteful… the rescaling operation must be doing a lot of extra pointless work keeping track of the position of pixels that are all ultimately going to be collapsed to one point).
So, making a library that provides an alternative is a great service to the world, haha.
An additional feature that might be nice: the most prominent colors seem like they might be a bad pick in some cases, if you want the important part of the image to stand out. Maybe a color that is close (in the color space) to the edges of your image, but far away (in the color space) from the center of your image could be interesting?
Tbh shrinking the image is probably the cheapest operation you can do that still lets every pixel influence the result. It's just the average of all pixels, after suitable color conversion.
It might work decently well, but I wonder if it makes it "visually" match - sometimes the perfect average is not what our eyes see as the color.
The author of the article seems to assume there is no color conversion (e.g., the resizing of the image is done with sRGB-encoded values rather than converting them to linear first). Which is a stupid way to do it but I'd believe most handwritten routines are just that.
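For reference, a gamma-correct average decodes sRGB to linear light, averages there, and re-encodes; a minimal sketch (function names are mine, standard sRGB transfer curves):

```python
def srgb_to_linear(c8):
    """Decode one 8-bit sRGB channel to linear light."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(v):
    """Encode linear light back to an 8-bit sRGB channel."""
    s = v * 12.92 if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055
    return round(s * 255.0)

def linear_average(pixels):
    """Average (r, g, b) pixels in linear light, then re-encode."""
    n = len(pixels)
    return tuple(linear_to_srgb(sum(srgb_to_linear(p[i]) for p in pixels) / n)
                 for i in range(3))

# Black and white average to a grey noticeably brighter than the
# naive sRGB average of 128:
print(linear_average([(0, 0, 0), (255, 255, 255)]))  # → (188, 188, 188)
```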
This is nice! I looked into this quite a lot some years back when I was trying to summarize IKEA catalogs using color and eventually wrote an R package if you want to look into an alternative to e.g. k-means: https://github.com/lemonad/colorhull (download https://github.com/lemonad/ikea-colors-through-time/blob/mas... for more details on how it works)
Really interesting read. Thanks for sharing. Is the performance bottleneck around the resizing to 250k pixels? Would it still work if you sampled 15,625 4x4 patches evenly around the image to gather those pixels instead of resizing?
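The even-patch idea can be sketched as follows (names and grid layout are illustrative; whether this beats the library's resize is the open question the comment asks):

```python
def patch_origins(width, height, grid=125, patch=4):
    """Top-left corners of an evenly spaced grid x grid arrangement of
    patch-sized samples (125 x 125 patches of 4x4 = 250,000 pixels)."""
    xs = [round(i * (width - patch) / (grid - 1)) for i in range(grid)]
    ys = [round(j * (height - patch) / (grid - 1)) for j in range(grid)]
    return [(x, y) for y in ys for x in xs]

origins = patch_origins(4000, 3000)
print(len(origins), origins[0], origins[-1])  # → 15625 (0, 0) (3996, 2996)
```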
In the past when I tried just using ImageMagick's built-in -kmeans for this, I found choosing the second most prominent colour often looked really good. The primary was too much of the same thing.
I'd be interested in trying this out as a command-line tool. It would be useful on its own and the fastest way to evaluate results.
ImageMagick is a wonderful command line tool, IMO. You could use it to extract various information, e.g. the 5 most used colors of an image.
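For instance, a standard ImageMagick idiom for this (`-colors` quantizes the palette; `histogram:info:-` prints each colour with its pixel count):

```shell
magick input.png -colors 5 -depth 8 -format "%c" histogram:info:-
```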
If needed you can easily remove colored borders first (trim subcommand with fuzz option) or sample only xy% from the image's center, or where the main subject might be.

Looks like it's a Rust lib with a Python wrapper; making a CLI tool should be just a few lines of code.
Yeah, but then I'd have to be working with Python (which I don't enjoy) and be pulling in dependencies (which I avoid) to have a custom system with moving parts (Python interpreter, library, script) (which I don't want).
A rust CLI would make a lot of sense here. Single binary.
This sounds like a job for <ta-ta-ta-taaaa> contrib-directory-man!
So your solution to "I'd be interested in having a small ready-made tool and try this out" is "spend a bunch of time to get acquainted with the code base of something you may not even like, create a separate tool, and submit it without even knowing if it'll be accepted"?

That's like having someone looking at a display of ice cream in a supermarket saying "I'd be interested in trying a few samples before committing" and then getting a reply like "here are the recipes for all the ice creams, you can try to make them at home and taste them for yourself".

I know I could theoretically spend my weekend working on a CLI tool for this or making ice cream. Every developer knows that; there's no reason to point that out except snark. But you know who might do it even faster and better and perhaps even enjoy it? The author.

Look, the maintainer owes me nothing. I owe them nothing. This project has been shared to HN by the author and I'm making a simple, sensible suggestion for something which I would like to see and believe would be an improvement overall, and I explained why. The author is free to agree or disagree, reply or ignore. Every one of those options is fine.
You're not wrong, but you probably could have built the thing with Claude in the time it took you to write this comment.
Good idea, I'll add a CLI tool over the weekend.
> simple 1x1 resize
How is it "simple"? There are like a ton of different downscaling algorithms and each of them might produce a different result.
Cool article otherwise.
At 1x1 I don't expect any difference. It would be the average of all pixels in the image if you don't unevenly weight them (which you might decide to do when choosing a main color, but which no downscaling algorithm would) and the only difference is whether you remembered to gamma-correct.
Nearest-neighbor interpolation may pick just one pixel closest to the center.
I really like this approach. I worked on this problem (create a nice background for an image) for a couple of weeks many years ago while organizing my desktop wallpaper collection, and never came up with a good answer. Unfortunately, I think it's been "solved" in the TikTok era: an enlarged and blurred version of the image is used to fill the background space.
The blurred mirror is inoffensive to almost everyone, and yet it always strikes me as gauche. Easy to ignore and yet I feel that it adds a lot of useless visual noise.
See also https://github.com/material-foundation/material-color-utilit...