Adaptive Images and Responsive Web Design


Abstract

Cédric Morin presents a ready-to-use solution to the adaptive images issue that fits well with dynamic websites (yes, one more solution): it is not trivial, but it is designed to be automated.

Article

Note: this article was previously published in French in December 2013. If you have any feedback on the translation made by Cédric Morin and Stéphane Deschamps, please let us know.

The aim of adaptive images is to adapt the size, resolution and quality of editorial images inside web pages to the end user.

They are especially needed on responsive websites because of the way this approach fits the display to the device: the same page is rendered in different ways depending on the available screen real estate.

Adapting images is a question of improving the user experience, delivering the image that fits best: it’s not useful to send your user an image that would be too large or too heavy if they end up seeing it on a small screen or via a low bandwidth connection!

Adaptive images target several use cases [1]:

  • Viewport Switching: to adapt the image size to the viewport size, such as using a 320px-wide image on a smartphone and an 800px-wide one on a tablet;
  • Device-pixel ratio (DPR) Switching: to adapt the image resolution, such as using a 2x image with 4 times more pixels on a Retina screen;
  • Network Switching: to adapt the image quality and weight to the connection quality (bandwidth and latency), such as using only a 1x image if the user is on the EDGE network in the subway;
  • Art Direction: to adapt the image content to the display width, such as cropping and centring an image on its main subject on small screens instead of simply re-sampling it in a way that would make the main subject hard to see.

For a couple of years, a lot of solutions have been proposed, but each of them has its drawbacks. Basically, HTML lacks a dedicated API and markup for this use case, and everybody is trying to do their best with the available markup.

At the moment, it's not clear what the final standard will be: W3C discussions were mostly about the new tags <picture> and <source> and the new attributes media and srcset, but now it seems more likely that the src-N proposal will be chosen. Or not. We'll have to wait some years before all this is standardized, implemented and available in a majority of users' browsers.

However, we need adaptive images now, in responsive websites.

So I started looking at existing solutions, especially for one that would be efficient to use in CMSs and dynamic websites, which allow a lot of automated markup generation.

Ideal Adaptive Images

Each existing solution has its drawbacks. Checking them one after the other, we can build a fairly clear specification for this feature.

First, what we really want:

  • keep a semantic <img/> tag, in particular for accessibility reasons;
  • have only one server hit per image in the HTML source;
  • deliver unique content per URL, which is needed for effective caching of resources, for the HTML page as well as for images;
  • not degrade rendering if JavaScript is deactivated (or broken), while still adapting images to the device if possible;
  • lower the weight of downloaded content on mobile browsers with small screens;
  • improve the quality of downloaded images on high-resolution screens (density >= 1.5x);
  • adapt the quality of downloaded images to the connection quality (latency and bandwidth).

What we would also like:

  • get progressive rendering in case of a bad connection and long download times for each image;
  • of course, a solution that is simple for users and can be implemented automatically.

Finally, what we would not like:

  • to resort to user agent detection (a strategy doomed to fail in the future);
  • to use a server-side dynamic script to deliver the images rather than letting the web server serve them as static files (this matters for a frugal use of server resources).

No doubt this is an ambitious target, and we will need to make some concessions…

Clown Car Technique

I was really interested in the Clown Car Technique from Estelle Weyl, based on an SVG container, as it does not rely on JavaScript, unlike most other existing solutions.

So I started to work on this solution, hoping to help improve its known defects linked to accessibility and browser support (IE <= 8 and Android <= 2.3).

But after some prototypes and tests, I found two additional problems:

  • on my iOS smartphone, I clearly had a rendering bug with a wrong width/height ratio [2];
  • I realized that if I deactivated styles in the browser, images were no longer visible, since the styles of the SVG container were also deactivated.

That means too many drawbacks for this technique compared to our initial goals. However, this dead end was inspiring.

3-layer technique

No need to look in a completely different direction: if we want our solution to work without JavaScript, we have to use CSS. So we'll stay close to the Clown Car Technique idea.

Looking in this direction, I found 2 other interesting works:

  • the first one is a CSS3 extension proposal that allows changing the src attribute of the <img/> tag; however, it only works, partially, in WebKit at the moment, and it's not usable in practice;
  • the other one uses multiple backgrounds in order to get progressive rendering: a sort of updated lowsrc, but losing the semantics of the <img/> tag.

After some experiments and prototypes, I managed to merge these three ideas by stacking three technical layers as well as three visual layers, working by progressive enhancement.

That’s why I call it the 3-layer technique.

Layer #1: HTML

HTML provides information in a structured format. So we want an <img/> tag that offers an alternative text and a low-resolution preview which is sufficient by itself.

We chose to provide a JPG image with a high compression ratio, encoded as a DATA URI [3].

The image is embedded in a non-semantic wrapper <span> that has a label in its class attribute, something like c-xxx [4]:
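
For instance, a minimal sketch of this markup (the c-123 and adapt-img-wrapper class names, the alternative text and the truncated base64 data are illustrative):

    <span class="adapt-img-wrapper c-123">
      <img src="data:image/jpeg;base64,/9j/4AAQSkZJRg..."
           alt="Textual alternative of the image" width="640" height="480" />
    </span>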

This way we keep the image's semantics and its textual alternative, and provide a good enough visual preview (it's the image that is visible when the HTML page is rendered raw, without stylesheets).

Layer #2: CSS and media-queries

That’s where we will set the final image that we want to display depending on the user’s device, using media-queries and CSS in a <style> tag.

First, let’s set some generic styles on the <span> wrapper and it’s image:

  • during loading time, the preview image is displayed with a 70% opacity;
  • the image will be displayed as the background of the <span> wrapper
  • as well as the background of a span:after, over the preview image.
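
A sketch of these generic rules, assuming the illustrative class names used above:

    .adapt-img-wrapper {
      display: inline-block;
      position: relative;
      background-repeat: no-repeat;
      background-size: 100% auto;
    }
    .adapt-img-wrapper > img {
      /* preview stays visible at 70% opacity while the adapted image loads */
      opacity: 0.7;
    }
    .adapt-img-wrapper:after {
      /* third layer: same adapted image, stacked over the preview */
      content: "";
      position: absolute;
      top: 0;
      left: 0;
      width: 100%;
      height: 100%;
      background-repeat: no-repeat;
      background-size: 100% auto;
    }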

I chose the 70% opacity for the preview as a compromise that allows us:

  • to clearly see the image preview;
  • to simulate partial transparency when there is some transparency in the initial image;
  • to show that loading is still in progress.

Then we add styles for each image, in order to pick the right image depending on the viewport size. For instance, here we have two breakpoints, at 320px and 640px:
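
For one image identified by the class c-123, this could look like the following sketch (file names are illustrative); note that the rules overlap and rely on the CSS cascade, a detail that will matter later for Android 2.x:

    /* default: largest variant, used above 640px */
    .c-123, .c-123:after { background-image: url(IMG/image-large.jpg); }

    @media (max-width: 640px) {
      .c-123, .c-123:after { background-image: url(IMG/image-640.jpg); }
    }
    @media (max-width: 320px) {
      .c-123, .c-123:after { background-image: url(IMG/image-320.jpg); }
    }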

We can also take the screen resolution into account by adding more media-queries. Let's say we'll send higher-resolution images for 1.5x and 2x pixel densities; here for the narrower-than-320px case:
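
A sketch of these additional queries (144dpi and 192dpi correspond to 1.5x and 2x; the -webkit-min-device-pixel-ratio variants were commonly needed by WebKit browsers at the time; file names are illustrative):

    @media (max-width: 320px) and (-webkit-min-device-pixel-ratio: 1.5),
           (max-width: 320px) and (min-resolution: 144dpi) {
      /* 320px viewport at 1.5x: a 480px-wide image */
      .c-123, .c-123:after { background-image: url(IMG/image-480.jpg); }
    }
    @media (max-width: 320px) and (-webkit-min-device-pixel-ratio: 2),
           (max-width: 320px) and (min-resolution: 192dpi) {
      /* 320px viewport at 2x: a 640px-wide image */
      .c-123, .c-123:after { background-image: url(IMG/image-640.jpg); }
    }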

All these CSS rules are inserted in a <style> tag, inside the HTML page [5].

When the CSS rules are processed and the target image (depending on screen size and resolution) is loaded, it is displayed both over the preview image and under it. At this point we have three visual layers, and only the top one is visible (unless it has some transparency and lets a bit of the preview image in the second layer show through).

Layer #3: JavaScript

JavaScript is used to finish the rendering: the superposition of the three images can produce some imperfections, in the case of transparent areas, and we need to hide the two top layers. This is achieved through a function called on window.onload [6] that injects a style tag into the DOM, with some additional CSS rules in charge of completing the rendering once all images are loaded.
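
A minimal sketch of that step (the adapt-img-loaded class and the injected rules are illustrative, and the library actually chains onto any existing onload handler, see note [6]):

    window.onload = function () {
      // Once every image is loaded, inject the rules that finish the rendering
      var style = document.createElement('style');
      style.textContent =
        'html.adapt-img-loaded .adapt-img-wrapper:after { display: none; }' +
        'html.adapt-img-loaded .adapt-img-wrapper > img { opacity: 0.01; }';
      document.getElementsByTagName('head')[0].appendChild(style);
      document.documentElement.className += ' adapt-img-loaded';
    };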

When JavaScript is not available, we can also insert some rules in order to deactivate the progressive rendering on images that can have transparency, such as PNG and GIF:
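
One way to do this, assuming the wrapper also carries a class reflecting the image format (png, gif), is a <noscript> block with the same kind of rules:

    <noscript><style>
      /* no JavaScript: skip the layering for formats that may be transparent */
      .adapt-img-wrapper.png:after,
      .adapt-img-wrapper.gif:after { display: none; }
      .adapt-img-wrapper.png > img,
      .adapt-img-wrapper.gif > img { opacity: 0.01; }
    </style></noscript>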

We can see that, in both cases, span:after is hidden using display:none; whereas the <img/> tag is only set to 1% opacity.

Repairing the View Image… feature

This little trick makes the image almost transparent without hiding it: it's still displayed and rendered, it can receive focus, and a right-click (or equivalent) still shows contextual menu entries like View Image… or Save Image….

Unfortunately, in this case it is the low-quality preview image that is displayed or saved.

We can fix this by adding an onmousedown attribute on the <img/> tag, in order to swap in the target image displayed in the parent's background [7]:
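
That is, something like this (same illustrative markup as before, with adaptImgFix defined below):

    <span class="adapt-img-wrapper c-123">
      <img src="data:image/jpeg;base64,/9j/4AAQSkZJRg..."
           alt="Textual alternative of the image"
           onmousedown="adaptImgFix(this)" />
    </span>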

And we define the adaptImgFix function like this:
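
A sketch of what this function can be, based on the article's description: it reads the computed background of the wrapper's :after layer and cleans the url("…") wrapper, as mentioned in note [7]:

    function adaptImgFix(img) {
      // Read the adapted image chosen by the media-queries on the :after layer
      var bg = window.getComputedStyle(img.parentNode, ':after')
                     .getPropertyValue('background-image');
      if (bg && bg !== 'none') {
        // Strip the surrounding url("...") to keep only the image URL
        img.src = bg.replace(/^url\(["']?/, '').replace(/["']?\)$/, '');
      }
    }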

3 visual layers

In short, we clearly have 3 visual layers:

  • at first, only the preview image from the <img/> tag in the HTML is visible: that's the middle layer;
  • then the CSS rules add the adapted image on top and at the bottom. Only the top layer is visible, except in the case of transparency, which lets some parts of the middle layer show through and makes the rendering imperfect;
  • eventually, JavaScript removes the top layer and makes the middle one almost transparent: only the bottom layer, with the adapted image, is visible.

Browser support

Our method is based only on media-queries and DATA URIs: browser support is quite good, except on two platforms.

Internet Explorer

DATA URI support in Internet Explorer starts with IE8 (with a 32 KB limit in IE8), and media-queries start with IE9 and IE10 Mobile.

So we'll keep all this simple: no adaptive images for IE < 10, whether it's the mobile or the desktop version.

As IE10 doesn't take conditional comments into account anymore, we just have to send the conventional image to IE:
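
The markup can therefore be wrapped in conditional comments along these lines (a sketch, not necessarily the library's exact output):

    <!--[if lt IE 10]>
      <img src="IMG/image-640.jpg" alt="Textual alternative of the image" />
    <![endif]-->
    <!--[if gte IE 10]><!-->
      <span class="adapt-img-wrapper c-123">
        <img src="data:image/jpeg;base64,/9j/4AAQSkZJRg..."
             alt="Textual alternative of the image" onmousedown="adaptImgFix(this)" />
      </span>
    <!--<![endif]-->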

Users of Internet Explorer < 10 will see normal images, at maximum width and 1x resolution, with no adaptation to their device. However, these browser versions are almost only used on desktops.

Unfortunately, they will load a heavier HTML page that includes the (unused) preview images. Tough luck, but we can't do better here [8].

Android 2.x

We're speaking here about mobile phones running Android 2.x with the default Android browser.

Even if this is an aging and declining fleet, we can see in Nursit's server logs that it represents about 10 times more users than mobile IE < 10, and about 20% of all Android users.

It's not as easy to ignore as IE, especially since there are no conditional comments nor any hack to target it.

As it appears that Android 2.x only supports the CSS property background-size with the -webkit prefix, we add the following rule to the common styles of adaptive images:
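
That is, the prefixed declaration alongside the standard one:

    .adapt-img-wrapper,
    .adapt-img-wrapper:after {
      /* Android 2.x only understands the prefixed form */
      -webkit-background-size: 100% auto;
      background-size: 100% auto;
    }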

I used the Mobitest service in order to check that our prototype worked as expected. What a surprise when I tested page loading on a Nexus S Android 2.3: instead of loading one image, it was loading (almost) all variants [9]!

After a lot of trial and error, I identified two issues:

  • CSS overriding: when one rule overrides another, Android 2.x loads the image of every applicable rule;
  • 800px viewport: it seems the test phone first rendered the page with an 800px viewport and then adjusted it to the right width. In doing so, two images were loaded even without any rule overriding.

A good precaution, with no drawbacks, is to use both min-width and max-width in the media-queries so that only one of them is applicable at a time:
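
Applied to the earlier sketch, that gives non-overlapping queries such as:

    @media (max-width: 320px) {
      .c-123, .c-123:after { background-image: url(IMG/image-320.jpg); }
    }
    @media (min-width: 321px) and (max-width: 640px) {
      .c-123, .c-123:after { background-image: url(IMG/image-640.jpg); }
    }
    @media (min-width: 641px) {
      .c-123, .c-123:after { background-image: url(IMG/image-large.jpg); }
    }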

At first I managed, with a lot of complications, to get a prototype that seemed to work on the Mobitest Nexus S.

But two weeks later, in new tests, things were broken again, without any way of knowing whether Mobitest had changed their configuration or not.

Going back to the literature, it appears that Tim Kadlec had experienced the same sort of inconsistency: in his Test #5 conclusions he first said that Android 2 loaded both images, and then, in another test session, that everything was OK.

In fact, the page loading of his Test #5 clearly shows that both the mobile and desktop images are loaded, even though the media-queries are well-formed and only one of them is applicable.

I came to the conclusion that on some Android 2.x configurations there is no way to get things to work properly, as media-queries support is clearly buggy. And it doesn't look any safer to try to rely on screen.width.

Only one adapted image for Android 2.x users

Consequently, we'll fix this issue in a more radical way. Android 2 is almost only used on mobile phones, so we'll send a single image to Android 2 users.

As a compromise, we choose the image version dedicated to a 320px viewport width at 1.5x resolution: this image is 480px wide, fits 320px 1.5x screens well, and can be correctly shown in a 480px-wide viewport without any upscaling by the browser.

To do this, we'll use a bit of JavaScript to add an android2 class on the <html> tag:
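
A sketch of such a detection; the user agent test below is an assumption, not necessarily the exact one used by the library:

    (function () {
      var ua = navigator.userAgent.toLowerCase();
      // Stock Android 2.x browser: "android 2." + WebKit, but not Chrome or another browser
      if (ua.indexOf('android 2.') !== -1
          && ua.indexOf('applewebkit') !== -1
          && ua.indexOf('chrome') === -1) {
        document.documentElement.className += ' android2';
      }
    })();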

Then we add html:not(.android2) to all our CSS rules in media-queries, to make sure none of them is applicable and to prevent Android 2 from loading the corresponding background images (OMG, it works!).

Finally, we add a single CSS rule dedicated to Android browsers, with this single image:
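
Something like this, outside of any media-query (file name illustrative):

    /* Android 2.x: one single adapted image, 480px wide (320px viewport at 1.5x) */
    html.android2 .c-123,
    html.android2 .c-123:after {
      background-image: url(IMG/image-480.jpg);
    }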

If JavaScript is not available, rendering will still be fine, but it's possible that, in some edge cases, the browser will load several versions of the same image, slowing down page loading.

Connection quality detection

At this point, we have a solution that takes into account the viewport size and screen resolution. We now need to adapt to the quality of the connection as well. For instance, we want to avoid loading a 2x image on a smartphone on an EDGE connection, as the 1x image will provide a better experience in that case.

During his talk Adaptive Images for Responsive Web Design at Paris-Web, Christopher Schmitt made this comparison: measuring the connection speed is like standing in front of a car to see how fast it moves.

In fact, HiSrc, the solution he is developing, downloads a 50 KB test image to evaluate the connection bandwidth and uses the result, stored in localStorage, for the next 30 minutes.

This method has two drawbacks: loading 50 KB just for speed measurement, and assuming the same connection speed for 30 minutes, which is enough time to move from a fast home WiFi connection, through a medium-speed connection in the street, to a slow connection in the subway.

In our case, we're already embedding in the HTML a DATA URI-encoded, low-quality JPG preview for each image. So the HTML page itself can serve as a test sample for speed measurement, especially if it contains at least 2 or 3 images.

Why not use the new Navigation Timing API to know the loading time of the HTML page and estimate the connection quality from it?

The benefit of this solution is that the detection happens on each page hit, so the measurement is always as fresh as possible.

The downside is that, although it's well supported in recent browsers, Safari doesn't support it at all, in any version (neither desktop nor iOS).

But we're betting this will change in future versions, and since iOS upgrades are quickly adopted by iDevice users, this should eventually allow almost everybody to benefit from connection detection.

For older Android browsers (2.2+ and 3.x) that don't support the Navigation Timing API, we'll use navigator.connection, as Modernizr does.

Last but not least, the HTML page size is measured on the server side: this way, JavaScript can evaluate the connection speed as soon as possible, at the top of the <head>, and add an aislow class on the <html> tag in case of a slow connection:
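
A sketch of the idea; the page size constant would be printed by the server, and both the estimation method and the speed threshold below are arbitrary assumptions, not the library's exact code:

    <script>
    (function () {
      var PAGE_BYTES = 85000; // size of this HTML page, printed by the server
      var slow = false;
      if (window.performance && performance.timing && performance.timing.connectStart > 0) {
        // Rough estimate: bytes received so far divided by the time since the connection started
        var seconds = (new Date().getTime() - performance.timing.connectStart) / 1000;
        var kbps = (PAGE_BYTES * 8 / 1000) / seconds;
        slow = (kbps < 500); // arbitrary threshold for a "slow" connection
      } else if (navigator.connection && typeof navigator.connection.type !== 'undefined') {
        // Fallback for Android 2.2+/3.x, in the spirit of Modernizr's detection
        var c = navigator.connection;
        slow = (c.type === c.CELL_2G || c.type === c.CELL_3G);
      }
      if (slow) {
        document.documentElement.className += ' aislow';
      }
    })();
    </script>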

Then we use this class to avoid the hi-res media-queries in the case of a slow connection, thanks to an html:not(.aislow) part in the CSS selectors, as sketched below.
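
Concretely, the high-resolution rules sketched earlier end up guarded by both classes, something like:

    @media (max-width: 320px) and (-webkit-min-device-pixel-ratio: 1.5),
           (max-width: 320px) and (min-resolution: 144dpi) {
      html:not(.android2):not(.aislow) .c-123,
      html:not(.android2):not(.aislow) .c-123:after {
        background-image: url(IMG/image-480.jpg);
      }
    }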

As a result, if JavaScript is not available, or if it is not possible to detect the connection speed, such as on iOS, high-resolution images adapted to the device screen will still be used when relevant.

We could do it the other way around: load high-res images only if we are sure to have a high-speed connection. But, on the other hand, we already have progressive rendering thanks to the preview image, and it seems acceptable to take the risk of loading a heavier image.

Art direction

The art direction question is about the fact that simply resizing a large image for a small screen isn't always a good idea: small details become invisible and the main subject of the image can become unreadable.

(Illustration) On the left, the "mobile" and default src; in the middle, a slightly larger image that could be used on (ahem) "tablets"; on the right, the largest of the images. From http://css-tricks.com/which-responsive-images-solution-should-you-use/

The 3-layer technique is ready to take art direction into account, as it allows using different images depending on the viewport width.

However, there's only one preview image in the HTML, and it can't change depending on the device. As a result, we have to choose which image the preview is built from: the desktop one or the mobile one?

As the preview is a low-quality image, it doesn't render small details well, so it seems sensible to generate it from the small-screen image. Moreover, it's on a small screen, on the move with a bad connection, that the preview will be seen the most often (longer waits for image loading).

But we must say that, with this method, on larger screens we will be superimposing an adapted image that is quite different from the preview.

If the image has transparency, this can produce rendering artifacts: the 3-layer technique may not be ideal if we want to use a mobile-specific version in this case, and sometimes we simply can't do it because the large-screen rendering isn't good enough.

Implementation and automation

The 3-layer technique needs to be implemented on the server side, in order to add the HTML tags and media-queries. In return, it works well without JavaScript, and when JavaScript is available it provides better progressive rendering and connection-quality adaptation.

There's a lot of markup to write for each image, especially for the media-queries, and it's not realistic to write it by hand. Moreover, generating the image variants is a lot of additional work, and it needs automation.

That’s what the standalone PHP library AdaptiveImages does:

  • automatic preview image generation (lowsrc);
  • generation of all image variants at predefined width breakpoints, adjusting the JPG compression ratio to the resolution;
  • on-the-fly replacement of the <img/> tags in the markup;
  • consolidation of all needed CSS rules in the <head> in order to speed up page rendering [10];
  • insertion of the JavaScript code needed for browser support and connection-quality detection;
  • support for a small-screen version provided through data-src-mobile.

It can be used in any PHP CMS or dynamic website. It is used, for instance, in the Adaptive Images plugin for the French CMS SPIP (which also adds a UI allowing users to attach small-screen variants of images in the back office, automatically used in the generated markup when available) and in the Adaptive Images plugin for the blog engine DotClear.

Benchmark and demo

A demo page enables you to see the plugin in action for different kinds of test images.

The page's web performance is then compared with and without the plugin enabled (i.e. with and without adaptive images). Be aware that, even without adaptive images, all images are downscaled to a 640-pixel maximum width.
In the benchmarks, I removed the animated GIF image: it cannot be adapted, yet it's the heaviest image and would have a big influence on the loading time if present.

                                     Standard images          Adaptive images
IE 8 (Paris)                         455 KB - 3.1 s           490 KB - 3.3 s
Firefox (Brussels)                   451 KB - (5.2 s) [11]    485 KB - 3.6 s
Nexus S Android 2.3 (Cambridge)      450 KB - 2.45 s [12]     372 KB - 3.3 s [13]
iPhone 4 iOS 5 (Cambridge)           457 KB - 3.0 s           432 KB - 3.2 s

The test page contains four adaptive images.

On desktop browsers:

  • the HTML overhead (due to the embedded preview images) is about 35 KB, roughly 8% of the total downloaded data;
  • the time needed to complete the page is about the same in both configurations (the gain on Firefox doesn't seem representative);
  • the start of image downloads is delayed on Firefox with adaptive images: this is no surprise, as we can't benefit from the browser's pre-parser for <img/> downloads and have to wait for CSS parsing before image downloading starts.

On Nexus S:

  • the total downloaded weight is lowered by 18%;
  • the total loading time in these conditions is longer with adaptive images, even though the time measured by Mobitest seems optimistic, as rendering is not finished yet.

On iPhone:

We can see that with a good connection quality, loading time is not improved by using adaptive images.

This is a bit annoying, as it was one of the goals we initially aimed for!

In fact, this is a consequence shared by all adaptive images techniques: image loading can only start once rendering has progressed far enough for the browser to know which rule applies and which image to load.
And it's not at all clear that browsers will be able to do better with a dedicated markup for adaptive images.

We have to temper these results with some additional considerations:

  • the worse the connection quality, the more downloading time adaptive images save compared to standard images;
  • the images in the standard reference page are not really huge here (640px maximum width): the gain would be higher if we were delivering high-res images in the standard page;
  • we're embedding preview images that allow progressive rendering, which should improve the user-perceived speed;
  • we're improving the quality of the images displayed on large high-resolution screens.

From a global point of view, we improved the quality of service delivered to users.

But it should be pointed out that using adaptive images is worthwhile only if we really want to deliver large images to high-resolution desktop/tablet screens and small images to small low-res screens like smartphones (who said watches?).

If the largest images in our content are about 640px wide, adaptive images are not really useful, and it can be simpler and better to deliver the same images to everybody.

Wrap-up

The 3-layer technique is a response to the adaptive images issue for responsive websites that can be used today, as it's fully compatible with the browser fleet in use, and it can be automated.

It meets almost all of the initial goals, but it also has some drawbacks:

  • even if we get only one server hit per image, we're really loading two different images for each tag: the preview embedded in the HTML and the adapted image. The originality of the method is to use the first one as something that works like the old lowsrc;
  • the HTML page is heavier because it includes the preview images: this can delay the start of rendering in some conditions;
  • the technique uses specific markup with a wrapper around the <img/> tag, and we may have to take it into account when styling images. For instance, if I apply rounded corners only to the <img/> tag, they will not be applied to the wrapper and the image will be displayed without rounded corners;
  • image adaptation is based on the viewport size and not on the container size (which was a strong point of the Clown Car Technique);
  • it's hardly usable manually, on static HTML pages.

As we said at the beginning: no solution is perfect, as it can only rely on existing, available techniques.

However, it seems to me that the 3-layer technique has an interesting performance/cost ratio for dynamic websites (and especially websites using SPIP and its plugin!).

Of course, it's only if we have large high-res images that we can deliver them to users with large 2x screens!

Notes

[2] This WebKit bug was mentioned by Estelle, but was supposed to be solved by using an <object> tag.

[3] For performance and weight reasons, this preview image is always in JPG format, even if the original is a PNG or GIF and even though we lose the transparency information.

[4] The label is not necessarily unique in the page, as it is associated with the image, and the same image can appear several times in the same page.

[5] Pessimists will point out that this isn't valid HTML, and they are right. We'll come back to this point later.

[6] In fact we're using Simon Willison's pattern.

[7] We need to clean the result, which is a CSS value in url("...") form.

[8] As a matter of fact, we could add support for desktop IE9, which supports all the needed features. But this would make the conditional comments more complicated, whereas almost no user of this browser has a small screen or a high-resolution screen.

[9] And therefore, of course, much more data loaded than for the conventional desktop page version.

[10] That's where the pessimistic minds who read the whole article will smile again.

[11] Strange total time, reproducible but hard to explain, so maybe not significant here.

[12] 1.8 s needed to get all content images loaded.

[13] 2.35 s needed to get all content images loaded.

