Understanding Image Compression: Tooling and Context
Image compression plays an important role in performance optimization. It seems straightforward but is, as many developers know, a little deceptive, because it consists not of one but two parts, and it's usually a lack of understanding of one of these two parts that causes image compression not to be performed effectively, or at all.
What are the two parts?
There's a tooling part, dealing with the infrastructure to compress images both manually * and automatically. This is the prominent part, the one that ensures or improves performance.
Then there's a context part, dealing with a) which image can be compressed how aggressively, as well as b) the necessary information flow for team and tools, so that each image is optimized as much as possible without losing its ability to serve its purpose. This part is about quality.
What happens in practice is that compression is sometimes conflated with and reduced to tooling, one part of compression. The other part is then easily neglected, if not missed entirely: What kind of image is being optimized (a photograph or an icon?), and where is it used (foreground or background?). That is context. Our tools cannot, so far, determine this context on their own. And with tooling that consists of just that, context-less algorithms, some teams struggle and don't deal with image compression effectively, or at all.
Figure: Publishing scenarios from FAZ, Demeter, and kicker. To compress the respective images effectively, one needs to understand how they're used; for optimal results, there's no "one size fits all."
Differentiation, as so often, can inform the approach. The problem has never really been complexity with respect to tooling, but said context: Teams may not have enough information to compress images, neither manually nor automatically. Developers may not (be able to) tell their tools which images can be compressed how much; or, one step removed, designers may not tell developers which images are used where, and can therefore be compressed to what extent. Nor has our information architecture ever been prepared to reliably reflect context (as with, for example, something like a semi-standardized "icons" folder whose images are to be compressed most aggressively).
When we differentiate between tooling and context, we can do two things: First, we still set up and hook up the necessary infrastructure for image compression (editor plugins like Image Optimizer for VS Code, standalone apps like ImageOptim, packages like imagemin, &c. pp.). Second, we can look into the information flow to make sure context is given and known, so as to decide on the degree of compression and whether automation is an option.
As most of the more technically minded readers can tell, image compression tooling is rather easy to set up. But as context, with its entirely different challenges, decides on the degree rather than the fact of compression, proper distinction is now the key to overcoming even a technical veil of ignorance: We can set up automated tooling that simply compresses everything losslessly (while guarding against regressions when processing already optimized images), and tackle more aggressive optimization once we're sure about all relevant context.
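The regression guard mentioned above can be sketched generically: whatever optimizer runs, only accept its output if it's actually smaller. The function name and the use of gzip as a stand-in optimizer are assumptions for illustration:

```javascript
// Sketch: a regression guard for lossless optimization. Whichever
// optimizer is plugged in (imagemin, ImageOptim's engines, etc.),
// only accept its output if it is actually smaller; otherwise keep
// the original bytes, so already optimized images never regress.
function guardedOptimize(originalBytes, optimize) {
  const optimized = optimize(originalBytes);
  return optimized.length < originalBytes.length ? optimized : originalBytes;
}
```

For example, `guardedOptimize(imageBuffer, myLosslessOptimizer)` can be dropped into a build step; a tiny or already optimized file simply passes through unchanged.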
Tooling and context. Both need to be available for image compression to be most effective. As there’s still more to cover, I’ll pour out some more thoughts shortly 🖐
* Automated compression would suffice if there weren't a real need to also allow for manual compression, and to make that compression easy. This is for two reasons: One, for the situation when image compression is set up in a running project, where compression of all existing assets may need to be triggered manually, for a start. Two, for the observation that projects, especially complex ones, will always involve some uncertainty to be accounted for, requiring failsafes. Compression should therefore also be something that can be triggered manually.
I’m Jens, and I’m an engineering lead and author. I’ve worked as a technical lead for companies like Google, I’m close to W3C and WHATWG, and I write and review books for O’Reilly and Frontend Dogma. I love trying things, not only in web development, but also in other areas like philosophy. Here on meiert.com I share some of my views and experiences.