If you have ever had to resize images in the browser, you probably know that it is very simple. Every modern browser has the canvas element (<canvas>), and an image can be drawn onto it at any desired size. Five lines of code and the picture is ready:
```javascript
function resize(img, w, h) {
    var canvas = document.createElement('canvas');
    canvas.width = w;
    canvas.height = h;
    canvas.getContext('2d').drawImage(img, 0, 0, w, h);
    return canvas;
}
```
From the canvas the picture can be saved as a JPEG and, for example, sent to the server. The article could end right here, but first let's take a look at the result. If you put such a canvas next to an ordinary <img> element with the same picture loaded into it (source, 4 MB), you will see the difference.
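For example, the result could be turned into a JPEG blob and sent off roughly like this. This is only a sketch: the endpoint and form field name are made up for illustration, and canvas.toBlob may need a fallback in some browsers:

```javascript
// Sketch: serialize the resized canvas as a JPEG and upload it.
// '/upload' and the 'file' field name are hypothetical.
function upload(canvas) {
    canvas.toBlob(function (blob) {
        var form = new FormData();
        form.append('file', blob, 'resized.jpg');
        var xhr = new XMLHttpRequest();
        xhr.open('POST', '/upload');
        xhr.send(form);
    }, 'image/jpeg', 0.9);  // JPEG with quality 0.9
}
```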

For some reason all modern browsers, both desktop and mobile, use cheap affine transformations to draw on a canvas. I have already described the differences between image resize methods in the corresponding article, so let me just recall the essence of the affine-transformation method: for each point of the final image, 4 points of the original are interpolated. This means that when the image is reduced by more than 2 times, holes appear in the source image: pixels that are not taken into account in the final image at all. It is because of these ignored pixels that quality suffers.
Of course, a picture in this state cannot be shown to decent people, and it is not surprising that the question of resize quality with a canvas keeps coming up on Stack Overflow. The most common advice is to reduce the picture in several steps. Indeed, if a strong reduction cannot take all the pixels into account, why not reduce the image just slightly? And then again and again, until we reach the desired size. As in this example.
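The advice boils down to something like the following sketch. It is a simplified illustration of the idea, not the code from the linked example, and it reuses the resize() function from above:

```javascript
// Sketch: halve the image while a full halving still fits,
// then finish with whatever ratio is left.
function resizeInSteps(img, w, h) {
    var source = img;
    var sw = img.width, sh = img.height;
    while (sw / 2 >= w && sh / 2 >= h) {
        source = resize(source, Math.round(sw / 2), Math.round(sh / 2));
        sw = source.width;
        sh = source.height;
    }
    // The last step is whatever ratio remains; this is where
    // the quality becomes a matter of luck.
    return resize(source, w, h);
}
```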
Undoubtedly, this method gives a much better result, because every point of the original contributes to the final image. Another question is exactly how those points are counted; that depends on the step size and on the initial and final dimensions. For example, if you take a step of exactly 2, these reductions are equivalent to supersampling. But the last step is a matter of luck. If you are really lucky, the last step will also be exactly 2. If you are out of luck, the last step may have to shrink the image by just one pixel, and the picture comes out soft and soapy. Compare: the difference in size is only one pixel, and what a difference in quality (source, 4 MB):

But maybe we should try a completely different way? We have a canvas from which we can get the pixels, and there is super-fast JavaScript that can easily handle the resizing itself. So we can implement any resize method ourselves, without relying on browser support: for example, supersampling or convolutions.
All we need now is to load the full-size image into a canvas. This is how the ideal case would look; I will leave the resizePixels implementation behind the scenes.
```javascript
function resizeImage(image, width, height) {
    var cIn = document.createElement('canvas');
    cIn.width = image.width;
    cIn.height = image.height;
    var ctxIn = cIn.getContext('2d');
    ctxIn.drawImage(image, 0, 0);
    var dataIn = ctxIn.getImageData(0, 0, image.width, image.height);
    var dataOut = ctxIn.createImageData(width, height);
    resizePixels(dataIn, dataOut);

    var cOut = document.createElement('canvas');
    cOut.width = width;
    cOut.height = height;
    cOut.getContext('2d').putImageData(dataOut, 0, 0);
    return cOut;
}
```
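Purely as an illustration of the idea, a minimal box-averaging (supersampling) version of resizePixels could look something like this. It is only a sketch for image reduction, not the implementation used in the project:

```javascript
// Sketch: for each destination pixel, average the block of source
// pixels that maps onto it. Intended for downscaling only.
function resizePixels(dataIn, dataOut) {
    var sw = dataIn.width, sh = dataIn.height;
    var dw = dataOut.width, dh = dataOut.height;
    var src = dataIn.data, dst = dataOut.data;
    for (var dy = 0; dy < dh; dy++) {
        var y0 = Math.floor(dy * sh / dh);
        var y1 = Math.ceil((dy + 1) * sh / dh);
        for (var dx = 0; dx < dw; dx++) {
            var x0 = Math.floor(dx * sw / dw);
            var x1 = Math.ceil((dx + 1) * sw / dw);
            var r = 0, g = 0, b = 0, a = 0, n = 0;
            for (var sy = y0; sy < y1; sy++) {
                for (var sx = x0; sx < x1; sx++) {
                    var i = (sy * sw + sx) * 4;
                    r += src[i]; g += src[i + 1];
                    b += src[i + 2]; a += src[i + 3];
                    n++;
                }
            }
            var o = (dy * dw + dx) * 4;
            dst[o] = r / n; dst[o + 1] = g / n;
            dst[o + 2] = b / n; dst[o + 3] = a / n;
        }
    }
}
```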
Trite and boring, at first glance. Thankfully, browser developers will not let us get bored. No, of course this code does work in some cases; the catch lies in an unexpected place.
Let's talk about why you might need to resize on the client at all. In my case, the task was to reduce the size of selected photos before sending them to the server, thereby saving the user's traffic. This is most relevant on mobile devices with slow connections and paid traffic. And which photos are most often uploaded from such devices? Photos taken with the cameras of those same devices. The camera resolution of, say, an iPhone is 8 megapixels, but it can take panoramas of 25 megapixels (even more on the iPhone 6). Camera resolutions on Android and Windows Phone are even higher. And here we run into the limitations of these mobile devices: unfortunately, in iOS you cannot create a canvas larger than 5 megapixels.
Apple can be understood: they have to keep their devices with limited resources running smoothly. After all, in the function above, the whole picture ends up in memory three times! Once in the buffer associated with the Image object, where the image is decoded; a second time as the pixels of the canvas; and a third time as the typed array inside the ImageData. For an 8-megapixel picture you need 8 × 3 × 4 = 96 megabytes of memory, and 300 megabytes for 25 megapixels.
But while testing I ran into problems not only in iOS. Chrome on a Mac would, with some probability, draw several small copies of the image instead of one large one, and on Windows it simply produced a blank white sheet.
But if we cannot get all the pixels at once, maybe we can get them in parts? We can load the picture into a canvas piece by piece, where each piece is as wide as the original image but much shorter. First we load the first 5 megapixels, then the next, then whatever remains. Or even 2 megapixels at a time, which reduces memory usage further. Fortunately, the chosen resize method, supersampling, is single-pass, unlike a two-pass resize with convolutions. That means we can not only receive the image in portions, but also hand it over for processing one portion at a time. Memory is then needed only for the Image element, the small canvas (say, 2 megapixels) and the typed array: for an 8-megapixel picture that is (8 + 2 + 2) × 4 = 48 megabytes, two times less.
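Schematically, the per-strip processing could look like the sketch below. The resizer object with its feed() and result() methods is a hypothetical single-pass supersampling accumulator, not a real API:

```javascript
// Sketch: feed the picture to the resizer in horizontal strips,
// so the full-size bitmap never has to live in one canvas.
function resizeByStrips(image, resizer, stripHeight) {
    var canvas = document.createElement('canvas');
    canvas.width = image.width;
    canvas.height = stripHeight;
    var ctx = canvas.getContext('2d');
    for (var y = 0; y < image.height; y += stripHeight) {
        var h = Math.min(stripHeight, image.height - y);
        ctx.clearRect(0, 0, canvas.width, stripHeight);
        // Draw only the strip [y, y + h) of the source image.
        ctx.drawImage(image, 0, y, image.width, h, 0, 0, image.width, h);
        resizer.feed(ctx.getImageData(0, 0, image.width, h));
    }
    return resizer.result();  // final ImageData of the reduced picture
}
```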
I implemented the approach described above and measured the execution time of each part. You can test it yourself here. This is what I got for a picture with a resolution of 10800 × 2332 pixels (a panorama from an iPhone).
| Operation (ms) | Safari 8 | Chrome 40 | Firefox 35 | IE 11 |
|---|---|---|---|---|
| Image load | 24 | 27 | 28 | 76 |
| Draw to canvas | 1 | 348 | 278 | 387 |
| Get image data | 304 | 299 | 165 | 320 |
| JS resize | 233 | 135 | 138 | 414 |
| Put data back | 1 | 1 | 3 | 5 |
| Get image blob | 10 | 16 | 21 | 19 |
| Total | 576 | 833 | 641 | 1243 |
This is a very interesting table; let's look at it in detail. The great news is that the JavaScript resize itself is not the bottleneck. Yes, in Safari it is 1.7 times slower than in Chrome and Firefox, and in IE it is 3 times slower, but in every browser loading the picture and getting the data still take more time.
The second remarkable thing is that in no browser is the picture decoded by the image.onload event. Decoding is postponed until the moment it is actually needed: display on the screen or drawing onto a canvas. And in Safari the image is not decoded even when it is drawn onto the canvas, because the canvas is not displayed on the screen either; it is decoded only when the pixels are read back from the canvas.
The table shows the total time of drawing and reading the data, whereas in reality these operations are performed for every 2-megapixel chunk, and the script linked above shows the time of each iteration separately. Looking at those numbers, you can see that although the total time of getting the data is about the same for Safari, Chrome and IE, in Safari almost all of it is spent in the very first call, where the picture is decoded, whereas in Chrome and IE the time is spread evenly across all calls, which points to generally slow data retrieval. The same applies to Firefox, but to a lesser extent.
So far this approach looks promising; let's test it on mobile devices. I had an iPhone 4s (i4s), an iPhone 5 (i5) and a Meizu MX4 Pro (A) at hand, and I asked Oleg Korsunsky to test it on Windows Phone, on an HTC 8x (W).
| Operation (ms) | Safari i4s | Safari i5 | Chrome i4s | Chrome A | Chrome A | Firefox A | IE W |
|---|---|---|---|---|---|---|---|
| Image load | 517 | 137 | 650 | 267 | 220 | 81 | 437 |
| Draw to canvas | 2,706 | 959 | 2,725 | 1,108 | 6,954 | 1,007 | 1,019 |
| Get image data | 678 | 250 | 734 | 373 | 543 | 406 | 1,783 |
| JS resize | 2,939 | 1,110 | 96,320 | 491 | 458 | 418 | 2,299 |
| Put data back | 9 | 5 | 315 | 6 | 4 | 14 | 24 |
| Get image blob | 98 | 46 | 187 | 37 | 41 | 80 | 33 |
| Total | 6,985 | 2,524 | 101,002 | 2,314 | 8,242 | 2,041 | 5,700 |
The first thing that catches the eye is the "outstanding" result of Chrome on iOS. Indeed, until recently all third-party browsers on iOS could only use the JavaScript engine without JIT compilation. iOS 8 made JIT available, but Chrome had not yet had time to adopt it.
Another oddity is the two results for Chrome on Android: radically different drawing times and almost identical everything else. This is not a mistake in the table; Chrome really can behave differently. As I already said, browsers decode pictures lazily, whenever they see fit. Likewise, nothing prevents the browser from freeing the memory occupied by the decoded picture once it decides the picture is no longer needed. Naturally, the next time the picture is needed for drawing onto the canvas, it has to be decoded again. In this case the picture was decoded 7 times; this is clearly visible in the times of drawing the individual chunks (remember, the table shows only the totals). Under such conditions decoding time becomes unpredictable.
Alas, that is not all the problems. I have to confess that I was misleading you about Internet Explorer. The thing is, it limits each side of a canvas to 4096 pixels, and the part of the picture that falls outside that limit simply becomes transparent black pixels. The limit on the total canvas size is fairly easy to work around by cutting the picture horizontally, which also saves memory, but to work around the width limit you would either have to rework the resize function substantially, or glue adjacent pieces into strips, which would only increase memory consumption.
At this point I decided to give up on this approach. There was one completely crazy option left: to decode the JPEG on the client as well, not just resize it. Cons: JPEG only, and the already bad Chrome-on-iOS time would get even worse. Pros: predictability in Chrome on Android, no size limits, and less memory needed (no endless copying to the canvas and back). I did not dare go for it, although a JPEG decoder in pure JavaScript does exist.
Part 2. Back to the beginning
Remember how at the very beginning we got a good result with consecutive reductions by a factor of 2 in the best case, and soapy blur in the worst? What if we try to get rid of the worst case without changing the approach too much? Let me remind you that the blur appears when the last step has to reduce the picture by only a tiny bit. So what if the last step is done first: reduce by an arbitrary factor once, and then only by exactly 2? At the same time we must make sure that the first step produces no more than 5 megapixels in area and no more than 4096 pixels on any side. The code for this version is noticeably simpler than a manual resize.
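Roughly (a sketch of the idea, not the linked code), the first step is chosen so that every following step is an exact halving, while keeping the intermediate canvas within the assumed limits of about 5 megapixels and 4096 pixels per side; it reuses the resize() function from the beginning of the article:

```javascript
// Sketch: do the "odd" ratio first, then only exact halvings.
function resizeSteps(img, w, h) {
    // Find the largest (w, h) * 2^k that still fits the limits
    // and does not exceed the source size.
    var stepW = w, stepH = h;
    while (stepW * 2 <= img.width && stepH * 2 <= img.height &&
           stepW * 2 <= 4096 && stepH * 2 <= 4096 &&
           (stepW * 2) * (stepH * 2) <= 5e6) {
        stepW *= 2;
        stepH *= 2;
    }
    // The arbitrary ratio is taken first, on the full image...
    var canvas = resize(img, stepW, stepH);
    // ...and all remaining steps are exact halvings.
    while (stepW > w) {
        stepW = Math.round(stepW / 2);
        stepH = Math.round(stepH / 2);
        canvas = resize(canvas, stepW, stepH);
    }
    return canvas;
}
```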

On the left is the image reduced in 4 steps, on the right in 5, and there is almost no difference. Almost a win. Unfortunately, the difference between two and three steps (not to mention between one and two) is still quite visible:

The blur is still there, although much less than at the beginning. I would even say that the image on the right (obtained in 3 steps) looks a little nicer than the left one, which is too sharp.
One could keep tweaking the resize, trying to reduce the number of steps while bringing the average step ratio closer to two; the main thing is to stop in time, because the browser limitations will not let us do anything fundamentally better. Let's move on to the next topic.
Part 3. Many photos in a row
Resizing is a relatively long operation. If you approach it head-on and resize all the pictures one after another, the browser will freeze for a long time and be unavailable to the user. It is better to do a setTimeout after each resize step. But then another problem appears: if all the pictures start resizing at the same time, the memory for all of them is needed at the same time. This can be avoided with a queue. For example, you can start resizing the next image when the previous one finishes. But I preferred a more general solution, where the queue is formed inside the resize function rather than outside it. This guarantees that two pictures will never be resized simultaneously, even if the resize is called at the same time from different places; see the sketch below.
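A minimal callback-based sketch of such a queue could look like this. resizeOneFile is a hypothetical asynchronous worker that calls its callback when it has finished with a file:

```javascript
// Sketch: calls from different places are chained, so no two
// pictures are ever processed at the same time.
var queue = [];
var busy = false;

function queuedResize(file, width, height, callback) {
    queue.push([file, width, height, callback]);
    if (!busy) next();
}

function next() {
    if (!queue.length) { busy = false; return; }
    busy = true;
    var task = queue.shift();
    resizeOneFile(task[0], task[1], task[2], function (result) {
        task[3](result);
        setTimeout(next, 0);  // let the browser breathe between pictures
    });
}
```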
Here is a complete example: everything from the second part, plus the queue and timeouts before long operations. I also added a spinner to the page, so now you can see when and for how long the browser stalls. Time to test on mobile devices!
Here I want to make a lyrical digression about mobile Safari 8 (I have no data on other versions). In it, selecting pictures in the file input freezes the browser for a couple of seconds. This is either because Safari creates a copy of the photo with the EXIF stripped, or because it generates the small thumbnail shown inside the input. For one photo this is tolerable, even barely noticeable, but with multiple selection it can turn into hell, depending on the number of selected photos. And all this time the page has no idea that any pictures have been selected, or even that the file selection dialog has been opened.
Having rolled up my sleeves, I opened the page on the iPhone and selected 20 photos. After a moment's thought, Safari cheerfully reported a problem. Second attempt, same result. Here I envy you, dear readers, because the next paragraph will take you a minute to read, whereas for me it was a night of pain and suffering.
So, Safari crashes. Debugging it with the developer tools is impossible: they show nothing about memory consumption. Hopeful, I opened the page in the iOS simulator: it does not crash. I looked at Activity Monitor: aha, memory grows with every picture and is never released. Well, at least something. I began to experiment. To understand what an experiment in the simulator means: a memory leak cannot be seen on one picture, on 4 or 5 it is hard to see, and 20 is best. You cannot drag and drop them or shift-select them; you have to tap 20 times. After selecting, you look at the task manager and guess: is a 50-megabyte drop in memory consumption a random fluctuation, or did I actually do something right?
In the end, after much trial and error, I came to a simple but very important conclusion: you have to free everything yourself. As early as possible, by any available means. And allocate as late as possible. You cannot fully rely on garbage collection. If a canvas was created, at the end you have to zero it out (resize it to 1 × 1 pixels); if it was an image, you have to unload it by assigning src="about:blank". Simply removing it from the DOM is not enough. If a file was opened via URL.createObjectURL, it must be closed via URL.revokeObjectURL as soon as possible.
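For illustration, the cleanup can be wrapped into small helpers like these (a sketch with hypothetical names, not the widget's actual code):

```javascript
// Sketch: release resources explicitly instead of waiting for GC.
function releaseCanvas(canvas) {
    // Shrinking the canvas to 1 x 1 lets the browser drop its bitmap.
    canvas.width = 1;
    canvas.height = 1;
    canvas.getContext('2d').clearRect(0, 0, 1, 1);
}

function releaseImage(img, objectUrl) {
    // Removing the element from the DOM is not enough: unload the picture...
    img.src = 'about:blank';
    // ...and close the object URL as soon as it is no longer needed.
    if (objectUrl) URL.revokeObjectURL(objectUrl);
}
```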
After a thorough reworking of the code, an old iPhone with 512 MB of memory started digesting 50 photos and more. Chrome and Opera on Android also began to behave much better: an unprecedented 160 twenty-megapixel photos went through, slowly but without crashes. This also had a beneficial effect on the memory consumption of desktop browsers: IE, Chrome and Safari began to consistently use no more than 200 megabytes per tab while working. Unfortunately, this did not help Firefox: it kept eating about a gigabyte on the 25 test images, as before. About mobile Firefox and Dolphin on Android nothing can be said at all, because it is impossible to select multiple files in them.
Part 4. Something like a conclusion
As you can see, resizing pictures on the client is damn exciting and painful. The result is a kind of Frankenstein's monster: the awful native resize is applied repeatedly to obtain at least some semblance of quality. At the same time you have to work around undocumented and undetectable limits of various platforms. And there are still plenty of particular combinations of source and target size where the picture comes out too soapy or too sharp.
Browsers devour resources like crazy, nothing gets released, magic does not work. In this sense everything is worse than in compiled languages, where you must release resources explicitly: in JS, first, it is not obvious what needs to be released, and second, it is not always possible. Nevertheless, taming the appetites of at least most browsers is quite realistic.
Working with EXIF remained behind the scenes. Almost all smartphones and cameras capture the image from the sensor in the same orientation and record the actual orientation in EXIF, so it is important to send this information to the server along with the reduced picture. Fortunately, the JPEG format is quite simple, and in my project I simply transfer the EXIF section from the source file into the final one, without even parsing it.
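A rough sketch of such a transfer, assuming the raw bytes of both JPEGs are available as typed arrays; a real implementation should also verify the "Exif" identifier inside the APP1 segment and handle edge cases:

```javascript
// Sketch: copy the APP1 (EXIF) segment from the original JPEG into
// the resized one without parsing its contents.
function copyExif(source, resized) {
    var offset = 2;  // skip SOI (FF D8)
    while (offset + 4 <= source.length && source[offset] === 0xFF) {
        var marker = source[offset + 1];
        var length = (source[offset + 2] << 8) + source[offset + 3];
        if (marker === 0xE1) {  // APP1, where EXIF lives
            var exif = source.subarray(offset, offset + 2 + length);
            var out = new Uint8Array(resized.length + exif.length);
            out.set(resized.subarray(0, 2));        // SOI of the resized file
            out.set(exif, 2);                       // EXIF right after it
            out.set(resized.subarray(2), 2 + exif.length);
            return out;
        }
        if (marker === 0xDA) break;  // start of scan, no EXIF found
        offset += 2 + length;
    }
    return resized;  // nothing to copy
}
```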
I learned and measured all of this while writing the resize-before-upload feature for the Uploadcare widget. The code I quoted in the article follows the logic of the story rather than production needs; it lacks a lot in terms of error handling and browser support. So if you want to use it at home, it is better to look at the source of the widget.
By the way, a few more numbers: using this technique, 80 photos from an iPhone 5, reduced to 800 × 600, are uploaded over a 3G network in less than 2 minutes. The same photos in their original size would take 26 minutes. So it was worth it.