Understanding Image Compression: Tooling and Context

Published on May 22, 2019 (updated July 2, 2022).

Image compression plays an important role in performance optimization. It seems straightforward but is, as many developers know, a little deceptive, because it consists not of one but two parts, and it’s usually a lack of understanding of one of these two parts that causes image compression not to be performed effectively, or at all.

What are the two parts?

  1. There’s a tooling part, dealing with the infrastructure to compress images both manually* and automatically. This is the prominent part, the part that ensures or improves performance.

  2. Then there’s a context part, dealing with a) which image can be compressed how aggressively, as well as b) the necessary information flow for team and tools, so that each image is optimized as much as possible without losing its ability to serve its purpose. This part is about quality.

What happens in practice is that compression is sometimes conflated with and oversimplified to tooling, which is only one part of compression. The other part, then, is easily neglected, if not missed entirely: What kind of image is being optimized (a photograph or an icon?), and where is it used (foreground or background?)—that is, context. Our tools cannot, so far, determine this context on their own. And with tooling that is just that, context-less algorithms, we find that some teams struggle, not dealing with image compression effectively, or at all.

Figure: Publishing scenarios from FAZ, Demeter, and kicker. To compress the respective images effectively, one needs to understand how they’re used; for optimal results, there’s no “one size fits all.”

Differentiation, as so often, can inform the approach. We know that the problem has never really been complexity with respect to tooling, but said context: Teams may not have enough information to compress images, neither manually nor automatically; developers may not (be able to) instruct their tools which images can be compressed how much; or, one step removed, designers may not tell developers which images are used where, and can therefore be compressed to what extent. Nor has our information architecture ever been prepared to reliably reflect such context (as with, for example, something like a semi-standardized “icons” folder whose images are to be compressed most aggressively; a sketch of that idea follows below).
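
One way to make that context explicit—sketched here with made-up folder names and levels, not any standard—is a small map from image locations to allowed compression levels that both people and tools can consult:

```ts
// compression-context.ts — hypothetical sketch; folder names and levels are assumptions
type CompressionLevel = 'lossless' | 'moderate' | 'aggressive';

// Context that designers and developers agree on, and that tooling can read.
const compressionContext: Record<string, CompressionLevel> = {
  'assets/logos/': 'lossless',         // brand assets, no visible degradation allowed
  'assets/photos/': 'moderate',        // foreground photography, quality matters
  'assets/backgrounds/': 'aggressive', // large background images, heavy compression acceptable
  'assets/icons/': 'aggressive',       // small UI graphics, compress as much as possible
};

export default compressionContext;
```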

When we differentiate between tooling and context, we can do two things: First, we can still set up and hook up the necessary infrastructure for image compression (editor plugins like Image Optimizer for VS Code, standalone apps like ImageOptim, packages like imagemin, &c. pp.). Second, we can look into the information flow to make sure context is given and known, so as to decide on the degree of compression and on whether automation is an option.
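
On the first, tooling point, a minimal imagemin setup might look roughly like this (a sketch; globs, output paths, and quality settings are assumptions that depend on the project):

```ts
// compress-images.ts — minimal imagemin sketch (run with Node as an ES module)
import imagemin from 'imagemin';
import imageminMozjpeg from 'imagemin-mozjpeg';
import imageminPngquant from 'imagemin-pngquant';

const files = await imagemin(['src/images/*.{jpg,png}'], {
  destination: 'dist/images',
  plugins: [
    imageminMozjpeg({ quality: 75 }),          // lossy JPEG recompression
    imageminPngquant({ quality: [0.6, 0.8] }), // lossy PNG quantization
  ],
});

console.log(`Compressed ${files.length} images`);
```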

As most of the more technically minded readers can tell, image compression tooling is really rather easy to set up. But as context, with its entirely different challenges, decides on the degree rather than the fact of compression, proper distinction is now the key to overcoming even a technical veil of ignorance: We can set up automated tooling that simply compresses everything losslessly (while guarding against regressions when processing already optimized images), and then tackle more aggressive optimization once we’re sure about all relevant context.
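
As a sketch of that lossless-by-default idea (plugin choice and paths are assumptions), one could run lossless plugins and only keep a result when it is actually smaller, so that re-runs over already optimized images cannot regress:

```ts
// lossless-compress.ts — lossless pass with a simple size guard (sketch, not a drop-in solution)
import { promises as fs } from 'node:fs';
import imagemin from 'imagemin';
import imageminJpegtran from 'imagemin-jpegtran'; // lossless JPEG
import imageminOptipng from 'imagemin-optipng';   // lossless PNG

// Without a destination, imagemin returns the compressed buffers instead of writing files.
const results = await imagemin(['src/images/*.{jpg,png}'], {
  plugins: [imageminJpegtran(), imageminOptipng()],
});

for (const { sourcePath, data } of results) {
  const original = await fs.readFile(sourcePath);
  // Size guard: only overwrite when the lossless pass actually saved bytes.
  if (data.length < original.length) {
    await fs.writeFile(sourcePath, data);
  }
}
```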

Tooling and context. Both need to be available for image compression to be most effective. As there’s still more to cover, I’ll pour out some more thoughts shortly.

* Automated compression would suffice if there weren’t a real need to also allow for manual compression, and to make that manual compression easy. This is for two reasons: One, for the situation when image compression is set up in a running project, where compression of all existing assets may need to be triggered manually, for a start. Two, for the observation that projects, especially more complex ones, will always involve some uncertainty to be accounted for, requiring failsafes. Compression should therefore also be something that can be triggered manually (see the sketch below).
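
One way to make compression available both automatically and on demand—assuming a plain-JavaScript version of such a compression script at scripts/compress-images.js (a hypothetical path)—is an npm script that the build runs via a pre-hook and that anyone can also invoke by hand:

```json
{
  "scripts": {
    "images:compress": "node scripts/compress-images.js",
    "prebuild": "npm run images:compress",
    "build": "your-build-command"
  }
}
```

npm runs “prebuild” automatically before “build”, while npm run images:compress triggers the same compression manually at any time.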
