Compressive Images

Posted by Scott on 10/30/2012

Here at Filament Group, we've invested a great deal of time thinking about how best to deliver images that look sharp on everything from a low-resolution phone to high-resolution (HD) screens like Apple’s Retina™ or the new Nexus 10, while remaining as lightweight as possible for performance. In practice, this is more difficult than it may sound: there is currently no native HTML solution for delivering different versions of an image, so we resort to JavaScript or server-side workarounds to achieve a similar result.

This week we came across an interesting technique. In his article titled Retina Revolution, Daan Jobsis shared the following premise: when considering a jpeg image's file size, the level of compression makes more of a difference than its physical dimensions. In other words, given two identical images that are displayed at the same size on a website, one can be dramatically smaller than the other in file size if it is both highly compressed and dramatically larger in dimensions than it is displayed.

While we’re not sure it stands to replace the responsive image techniques currently in play, we are very excited about its potential for complementing them. The article discusses the technique's "retina" implications, though in practice, the approach may have much more potential than differentiating high resolution "retina" (HD) and standard definition (SD) screen density alone.

An example comparison

Let's look at an example. Both images below are displayed at 400 pixels width and 300 pixels height.

This first image is saved with those dimensions specifically (it's displayed at 100% size), with a typical 90% quality jpeg compression from Photoshop. It weighs 69kb.

Full-size sample image

This second image, however, is actually scaled down by the browser. Its natural size is 1024x768px, but its compression was set to 0 (zero!) quality when saved from Photoshop. As a result, it weighs 27kb.

Lightweight, scaled-down sample image

The images look roughly similar in quality, yet the second one is less than half the weight. Since that image is more than twice the resolution of the display size, it also looks sharp on retina screens.
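For anyone who wants to reproduce a comparison like this with their own photos, here is a minimal sketch using the Pillow imaging library (our choice of tool, not something the post prescribes). Exact byte counts depend entirely on the source photo, and Photoshop's "0" quality has no exact Pillow equivalent, so a very low quality value stands in for it:

```python
# Sketch (not from the original post): encode the same source photo both
# ways and compare JPEG byte counts, using the Pillow imaging library.
from io import BytesIO
from PIL import Image

def jpeg_size(img, quality):
    """Encode as JPEG in memory and return the byte count."""
    buf = BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    return buf.tell()

def compare(source):
    """Mirror the article's two variants for any Pillow image."""
    display = source.resize((400, 300))     # saved at display size, q90
    oversized = source.resize((1024, 768))  # 2.56x dimensions, near-zero q
    # Photoshop's "0" quality roughly maps to a very low libjpeg quality;
    # 5 stands in for it here.
    return jpeg_size(display, 90), jpeg_size(oversized, 5)

# Demo with a synthetic gradient standing in for a real photograph:
gradient = Image.radial_gradient("L").resize((1024, 768)).convert("RGB")
small_q90, big_q5 = compare(gradient)
print(f"400x300 @ q90: {small_q90 / 1024:.1f} kb")
print(f"1024x768 @ q5: {big_q5 / 1024:.1f} kb")
```

A synthetic gradient compresses very differently from a photograph, so treat the printed numbers as a demonstration of the workflow rather than of the technique's savings.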

Implications

Assuming there aren't drawbacks we've yet to consider (and there usually are), we're still working out how this affects our current thinking on responsive images.

For one thing, we're sure that this does not entirely replace the features of the proposed picture element. For example, picture's ability to deliver different image sources altogether means we can provide different crops of an image depending on the size, and a single image can't currently do that. In this context, it’s possible that we might use this technique to reduce the size of those sources.

Still, for images that are merely delivered at different sizes without any changes to the crop, this technique could present a much simpler solution.

Regardless, we think it's a pretty interesting twist. We'd love to hear your thoughts!

Note: Daan has posted a followup article worth reading as well.

Book cover: Designing with Progressive Enhancement

Enjoy our blog? You'll love our book.

For info and ordering: Visit the book site

Comments

Love the work you guys are doing, and I’ll definitely try to use this in my next project.

A potential drawback I see has to do with how jpeg handles compression. In the image you’ve provided above, the aspects of the image that get the most improvement are around the edge of the cliff face in the foreground. That area seems to be much more defined in the high-resolution version than in the normal version.

However, if you look at the clouds on the top left, the lack of quality really shows through.

Artefacts in jpg compression tend to show through more in flat colours and sharp lines, so this technique should be used for photography that doesn’t have much of that. (no flat surfaces like exposed skin, no sharp lines like typography)

Comment by Clark Pan on 10/31  at  12:47 AM

(I think the correct reference for this technique is Thomas Fuchs’ book “Retinafy your web sites & apps”. He talks about this for some time now.)

And one drawback I think nobody is talking about is memory usage. Despite being better compressed and having a smaller file size, the retina version uses 4x the memory needed for the smaller image. That is because all images are bitmapped when expanded. It can be a problem if we are talking about simpler devices like mobile phones.
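Sérgio's point is easy to quantify: once decoded, a bitmap occupies roughly width × height × 4 bytes (RGBA) no matter how small the JPEG file was. A quick back-of-the-envelope sketch (the figures are ours, not from the comment):

```python
# Sketch: decoded-bitmap memory is width * height * bytes-per-pixel,
# regardless of how well the JPEG file itself was compressed.

def decoded_bytes(width, height, bytes_per_pixel=4):
    """Approximate memory for a decoded RGBA bitmap (no mipmaps)."""
    return width * height * bytes_per_pixel

small = decoded_bytes(400, 300)    # the displayed-size image
large = decoded_bytes(1024, 768)   # the oversized "compressive" image

print(f"400x300  decoded: {small / 1024:.0f} KB")   # 469 KB
print(f"1024x768 decoded: {large / 1024:.0f} KB")   # 3072 KB
print(f"ratio: {large / small:.1f}x")               # 6.6x
```

So the 27kb "compressive" JPEG from the article still costs the browser roughly 3 MB once decoded, six and a half times the displayed-size version.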

Comment by Sérgio Lopes on 10/31  at  01:16 AM

@Clark: great points, thanks for chiming in.

@Sergio: Nice catch. We haven’t read Thomas’ book but it’s nice to know the particular technique has been mentioned in other places. Daan’s was the first we came across on the topic.

Good points on memory usage, too. We assumed there would be some drawbacks to consider. Definitely worth testing!

Comment by Scott (Filament) on 10/31  at  01:26 AM

great article(s), as always

i think the main benefit of this technique is you get a simpler picture element, with less sources (you can remove @2x). there is still urgent need of a picture element imho, for two reasons at least:

1. editorial crop depending on zoom/context
2. vertical rhythm depending on crops; scaling is not precise enough

btw: i sent a PR on your picturefill, maybe very opinionated tho :D

Comment by Sergi on 10/31  at  01:31 AM

My question about this technique for use in a responsive context is that the image would begin to degrade as the device “scales up”, wouldn’t it? I see it being useful in the retnification (that’s a word, right?) scenario—but less helpful in a responsive context as the display of the image gets closer to reaching its actual dimensions.

Am I understanding that correctly?

Comment by Bridget Stewart on 10/31  at  02:43 AM

Interesting idea though I guess it only works with jpg photos.
How does it work with images with solid colors and sharp edges (e.g. graphics)?

A combination of this technique and use of SVG would perhaps be a solution.

Comment by Jakob Damgaard on 10/31  at  09:53 AM

I am a bit worried about this technique. Of course it is easier for developers to serve such images, but on the other hand these images take more resources from the browser (especially on mobile).

I tested this on a full-background page recently (a 2048px*1536px img) and found that even in a desktop UA this technique slows down the whole window and results in a bad user experience. For example, scrolling isn’t smooth anymore.
This behavior can be avoided on some (not all!) mobile devices using the img{ transform: translateY(0); } CSS3 rule, which triggers hardware acceleration.
Overall this means the technique takes more resources from a mobile device and can even result in a crash there, too.

The other problem is that mobile OSes impose a maximum pixel count per image: 4MP on iOS4 and 5MP on iOS5. That might seem like a lot at first, but it really isn’t. With a panorama image, the limit is super easy to reach when optimizing for retina displays (e.g. iPad 3/4: 2048px*1536px), and that is not an uncommon resolution nowadays. Notebooks have even larger displays right now.

So it can be a somewhat useful technique, but it is also very problematic if you don’t really know what you’re doing. Combined with a responsive-images solution, it would work well for most displays.

Comment by Anselm on 10/31  at  10:04 AM

It would be nice to know how far one could decrease the jpeg quality before hitting the file size of the lower-resolution version (and what the image would look like at that compression).

Comment by Markus on 10/31  at  10:07 AM

@Markus: As you can see, the JPEG compression is at 0% quality and the file is sometimes even smaller than the standard-resolution one. More examples are on the original blog post here: http://blog.netvlies.nl/design-interactie/retina-revolution/

Comment by Anselm on 10/31  at  10:08 AM

@Anselm Thanks for the hint.

Any chance to measure the time the browser takes to re-scale the full-size image?

Comment by Markus on 10/31  at  10:17 AM

I’m always excited about experimenting and this was insightful. I feel that the outcomes of this experiment are a bit exaggerated though.

For example, when is 90% quality jpeg compression typical, when the presets are Medium (50), High (60), Very High (80), and Maximum (100)? I almost always use medium or high depending on what I can get away with and still looks good.

I tried out your technique with a 400px photo at 90% and at 50%, and an 800px photo at 0% (double size); the 50% compression looks much better than the 0% jpg and is 2kb smaller.

http://gtmcknight.org/compression/

The 800px @ 10% quality starts looking pretty comparable to the 400px @ 50%, but is now bigger in size.

I know this is all experimenting and just thought I’d share some of my results as well.  The idea of “retina graphics at even smaller file sizes” still remains a dream :)

Comment by Taylor McKnight on 10/31  at  11:12 AM

Thanks for all the comments everyone. Like any technique, we all need to find the sweet spot for when to use this. A few things are clear:

- Since this is based on JPEG compression, this is only appropriate for photographic-style content. Line art with flat colors should be compressed as a GIF, PNG-8, or ideally SVG. See our article on GruntIcon as a possible technique for using SVGs today with a PNG fallback.

- We need to strike a balance between image quality and memory use and that will take testing and experience to hammer out some guidelines. It’s clear that serving retina quality background images at 2048px*1536px like @Anselm tested is probably not a good idea because that could cause many devices to grind to a halt or even crash. This technique is an interesting way to reduce bandwidth for larger images, but doesn’t change the basic rules for how images work in a browser so we need to be practical on sizes. Unfortunately, a lot of the HD devices out there are phones and tablets and they have very little memory.

- The idea is to test various things and see what works; we don’t have the answers right now. For example, we might be able to serve an image that is 1.5x or 1.7x the standard definition size (instead of a full 2x) and increase the compression from 0 to 20 or so. This might strike a better balance between file size, quality, resolution and memory use.  Demo: http://jsbin.com/egazaw/14/edit
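Todd's 1.5x/1.7x idea above lends itself to a small parameter sweep. Here is an illustrative sketch (ours, not Todd's demo) using the Pillow library to measure every scale/quality combination for a given photo:

```python
# Sketch: sweep scale factors and JPEG quality levels to look for the
# file-size/quality sweet spot Todd describes, using Pillow.
from io import BytesIO
from PIL import Image

def sweep(source, display_w, display_h,
          scales=(1.0, 1.5, 1.7, 2.0), qualities=(5, 20, 40, 90)):
    """Return {(scale, quality): jpeg_bytes} for each combination."""
    results = {}
    for scale in scales:
        size = (int(display_w * scale), int(display_h * scale))
        resized = source.resize(size)
        for q in qualities:
            buf = BytesIO()
            resized.save(buf, format="JPEG", quality=q)
            results[(scale, q)] = buf.tell()
    return results

# Demo with a synthetic image standing in for a real photograph:
photo = Image.radial_gradient("L").resize((800, 600)).convert("RGB")
for (scale, q), nbytes in sorted(sweep(photo, 400, 300).items()):
    print(f"{scale:.1f}x @ q{q:2d}: {nbytes / 1024:5.1f} kb")
```

Results vary heavily by subject matter, so any sweet spot found this way should be re-checked against real photography and on real devices.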

If I were to choose today between doing nothing for retina by just serving SD images with normal compression levels and tinkering with this technique, I’d do the latter. It will look better on retina screens at the same or smaller file size.

Keep testing and report back with demo pages, this is an interesting conversation!

Comment by Todd (Filament) on 10/31  at  04:58 PM

I think we are losing focus on the main purpose of the web: serving content.
I’m sure that doing this “scale-down thingy” solves a lot of headaches, but we are serving a crappy image.
Now when our visitor decides to save the image, or open it in a new window, or do anything not involving the CSS scaling, he’s going to get a “double-sized, artifacted, blurred” image, and I think this totally sucks.

Despite this “little” thing, I love the approach to solve the problem ;)

Comment by Harold Dennison on 10/31  at  04:59 PM

Creative technique, but I think lowsrc solves the problem more elegantly. Create two images, one high-res, one low-res, both at 90% quality. It solves page load time, the high-res display problem, and some of the other issues mentioned in this thread. Old school, but it still works in all browsers and is part of the spec.

http://www.w3.org/TR/REC-DOM-Level-1/level-one-html.html#ID-91256910

Comment by Richard Ayotte on 11/01  at  02:22 PM

Clever, but as several have pointed out already memory usage is the elephant in the room.

In my forays into mobile web dev, the main problems I’ve come up against have been the low memory ceilings mobile browsers impose on web pages.

Comment by pete_b on 11/01  at  06:00 PM

It’s quite fun to watch your guys work on this problem and to see the discoveries along the way. Even better to see the community feedback!

Comment by Joseph R. B. Taylor on 11/01  at  06:24 PM

Yeah, this is something I discovered quite a while ago when the retina iPads first came out, and I made a nice little web page utility to test out different compression settings on a handful of test files. (Though it’s optimized in size for the iPad, it would work on a Retina Mac as well.) http://dh.karelia.com/retina/

The one thing I’m cautious about is the client-side memory needed to handle the large image. I remember hearing some warning from Apple about not serving up a gigantic-dimension image and expecting a lightweight client to have the chops to handle it. So maybe it wouldn’t be a good idea to put a whole bunch of large (but compressed) photos all on the same page. Then again, I suppose testing on various clients would be in order.

Comment by Dan Wood on 11/01  at  07:56 PM

First of all, the 2nd image really looks a lot worse than the first.

Secondly, when saving images, 90% quality is usually way too high. Setting it to 80% would make the file quite a bit smaller.

But I like the general idea of setting the quality somewhat lower on the retina version of an image.

Comment by Gerben on 11/01  at  08:33 PM

Very interesting technique. It looks like the highly compressed HD images are, more often than not, smaller than SD images with medium to low compression.

I’d be really curious to see if anyone has some numbers on the memory usage when using these highly compressed HD images. It seems like something to be cautious of, though I’d love to see some hard data on the memory usage.

Comment by Brett Jankord on 11/01  at  08:34 PM

@Gerben If you check out Dan Wood’s website, http://dh.karelia.com/retina/ it looks like HD images saved at 5% quality were pretty close to SD saved at 60% quality.

Comment by Brett Jankord on 11/01  at  08:37 PM

My concern is: what about when a viewer wants to save or download the compressed image? Wouldn’t the saved image appear to be of that poorer quality?

Comment by Eric Lakatos on 11/02  at  04:06 AM

This is a load of baloney.

As others have said, I’m not sure who would ever use 90% quality for an image intended for the web; 50-70% is more realistic. At 50%, the smaller image will be around 2/3 the file size of the larger, hyper-lossy image. So that comparison doesn’t hold up.

More egregious, though, is the 0% quality on the larger one. It looks like absolute crap to me. Not to mention that it will gobble up precious memory on mobile devices, rendering the page needlessly slow and unresponsive.

Re-scaling images (and text for that matter) is best avoided at all costs, whether on desktop or mobile, and you should ask yourself whether your time (in creating a robust solution for high-density screens) is more important than the user experience.

I would argue that the latter is more important, but that’s a matter of personal preference.

Comment by Derryl Carter on 11/02  at  07:29 AM

May cause aliasing with some images on non-webkit browsers.
Example : http://www.elearnis.fr/data/_lab/double-jpeg.html

Comment by Franck Bossy on 11/02  at  12:04 PM

Thanks all, great conversation!

Re: quality, we weren’t trying to “stack the deck” by using too low a compression to bloat the file size. The right level of compression for the normal-quality image that we use as a comparison depends on the image and your opinion on acceptable artifacts. I personally find that 70-80% quality looks good for standard JPEG compression; below that looks too compressed for my tastes, let alone 50%.

I had used Apple’s real imagery in my jsbin demo because that is a real-world example of what a detail-oriented company considers correct compression for both sizes. Note that I had linked to the wrong file for Apple’s retina image in my previous comment’s demo. See jsbin.com/egazaw/18 for a corrected demo.

As Brett mentioned, this is a really cool tool that lets you try different photos and quality levels. It’s the best way to get a handle on how this technique works. Wish we had seen this earlier!
http://dh.karelia.com/retina/

Franck’s comment that re-sizing quality varies widely across browsers and devices is very interesting (we thought this was mainly a very old IE and BB thing). Has anyone done a thorough test across all of these to see which re-size smoothly and which are jaggy? If not, we can dig in.

The comments on the quality when viewed at full size are interesting since that’s not the main purpose of the technique, but worth considering. Perhaps this is a better technique for content imagery, but it wouldn’t be ideal for a photo-sharing site where the images are frequently downloaded and viewed at full size. You could also use these compressive images as the preview, then link to the ultra-high-quality image for downloading.

Lastly, a few folks mentioned that Thomas Fuchs suggested a similar technique in his retina e-book a few months ago. If you have a link to public info we can share from Thomas, please post that here. Hat tip nevertheless.

Comment by Todd (Filament) on 11/02  at  06:09 PM

Not sure who Brett is, but I was the one who mentioned my http://dh.karelia.com/retina/ tool that I had created back in March.  Anyhow, I haven’t touched it in a while, but if people have any requests on how I can improve the page, please let me know.  (I don’t foresee it being able to process user-uploaded images though, since that would mean putting some actual image processing on the server.  It’s just displaying images that have been created in advance on my Mac.)

More info about the tool by clicking/touching “About This” on the page.

- Dan

Comment by Dan Wood on 11/02  at  06:38 PM

It’s a similar effect (if you ask me) when you see billboard signs. From a distance they look normal, but up close they look horrible.

Interesting thought.

Comment by Marco Berrocal on 11/03  at  09:30 PM

Looks like a very clever technique. The Filament guys are always good for some innovation. I’m gonna try this out in one of my next projects and see what it’s good for.

Comment by Marcel on 11/04  at  09:34 PM

Was thinking more about the memory concern I’ve seen people post on here and on twitter about re-sizing the HD image in the browser to half size.

Isn’t this the way it’s done with HD images used as CSS backgrounds for retina displays? I don’t think I’ve seen people talk about memory issues when HD images were re-sized with the CSS background-size property.

I assume there is some memory hit with this technique, but I don’t know if it’s as bad as people are making it out to be. Would love to see some numbers on the memory usage with this technique.

Comment by Brett Jankord on 11/08  at  01:56 AM

